[
  {
    "path": ".codespell/.codespellrc",
    "content": "[codespell]\nquiet-level = 2\ncount =\nskip = ./deps,./src/crc16_slottable.h,tmp*,./.git,./lcov-html\nignore-words = ./.codespell/wordlist.txt\n"
  },
  {
    "path": ".codespell/requirements.txt",
    "content": "codespell==2.2.5\n"
  },
  {
    "path": ".codespell/wordlist.txt",
    "content": "ake\nbale\nfle\nfo\ngameboy\nmutli\nnd\nnees\noll\noptin\not\nsmove\nte\ntre\ncancelability\nist\nstatics\nfiletest\nro\nexat\nclen"
  },
  {
    "path": ".gitattributes",
    "content": "# We set commands.c's merge driver to `binary` so that when it conflicts during a\n# merge, git will leave the local version unmodified. This way our Makefile\n# will rebuild it based on src/commands/*.json before trying to compile it.\n# Otherwise the file would get modified and receive the same timestamp as the\n# .json files, so the Makefile wouldn't attempt to rebuild it before compiling.\nsrc/commands.c merge=binary\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Help us improve Redis by reporting a bug\ntitle: '[BUG]'\nlabels: ''\nassignees: ''\n\n---\n\n**Describe the bug**\n\nA short description of the bug.\n\n**To reproduce**\n\nSteps to reproduce the behavior and/or a minimal code sample.\n\n**Expected behavior**\n\nA description of what you expected to happen.\n\n**Additional information**\n\nAny additional information that is relevant to the problem.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/crash_report.md",
    "content": "---\nname: Crash report\nabout: Submit a crash report\ntitle: '[CRASH] <short description>'\nlabels: ''\nassignees: ''\n\n---\n\nNotice!\n- If a Redis module was involved, please open an issue in the module's repo instead!\n- If you're using docker on Apple M1, please make sure the image you're using was compiled for ARM!\n\n\n**Crash report**\n\nPaste the complete crash log between the quotes below. Please include a few lines from the log preceding the crash report to provide some context.\n\n```\n```\n\n**Additional information**\n\n1. OS distribution and version\n2. Steps to reproduce (if any)\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest a feature for Redis\ntitle: '[NEW]'\nlabels: ''\nassignees: ''\n\n---\n\n**The problem/use-case that the feature addresses**\n\nA description of the problem that the feature will solve, or the use-case with which the feature will be used.\n\n**Description of the feature**\n\nA description of what you want to happen.\n\n**Alternatives you've considered**\n\nAny alternative solutions or features you've considered, including references to existing open and closed feature requests in this repository.\n\n**Additional information**\n\nAny additional information that is relevant to the feature request.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/other_stuff.md",
    "content": "---\nname: Other\nabout: Can't find the right issue type? Use this one!\ntitle: ''\nlabels: ''\nassignees: ''\n\n---\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/question.md",
    "content": "---\nname: Question\nabout: Ask the Redis developers\ntitle: '[QUESTION]'\nlabels: ''\nassignees: ''\n\n---\n\nPlease keep in mind that this issue tracker should be used for reporting bugs or proposing improvements to the Redis server.\n\nGenerally, questions about using Redis should be directed to the [community](https://redis.io/community):\n\n* [the mailing list](https://groups.google.com/forum/#!forum/redis-db)\n* [the `redis` tag at StackOverflow](http://stackoverflow.com/questions/tagged/redis)\n* [/r/redis subreddit](http://www.reddit.com/r/redis)\n* [github discussions](https://github.com/redis/redis/discussions)\n\nIt is also possible that your question was already asked here, so please do a quick issues search before submitting. Lastly, if your question is about one of Redis' [clients](https://redis.io/clients), you may want to contact your client's developers for help.\n\nThat said, please feel free to replace all this with your question :)\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "# To get started with Dependabot version updates, you'll need to specify which\n# package ecosystems to update and where the package manifests are located.\n# Please see the documentation for all configuration options:\n# https://help.github.com/github/administering-a-repository/configuration-options-for-dependency-updates\n\nversion: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: weekly\n  - package-ecosystem: pip\n    directory: /.codespell\n    schedule:\n      interval: weekly\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: CI\n\non: [push, pull_request]\n\njobs:\n\n  test-ubuntu-latest:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      # Fail build if there are warnings\n      # build with TLS just for compilation coverage\n      run: make REDIS_CFLAGS='-Werror' BUILD_TLS=yes\n    - name: test\n      run: |\n        sudo apt-get install tcl8.6 tclx\n        ./runtest --verbose --tags -slow --dump-logs\n    - name: validate commands.def up to date\n      run: |\n        touch src/commands/ping.json\n        make commands.def\n        dirty=$(git diff)\n        if [[ ! -z  $dirty ]]; then echo $dirty; exit 1; fi\n\n  test-sanitizer-address:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: make\n        # build with TLS module just for compilation coverage\n        run: make SANITIZER=address REDIS_CFLAGS='-Werror -DDEBUG_ASSERTIONS -DREDIS_TEST' BUILD_TLS=module\n      - name: testprep\n        run: sudo apt-get install tcl8.6 tclx -y\n      - name: test\n        run: ./runtest --verbose --tags -slow --dump-logs\n\n  build-debian-old:\n    runs-on: ubuntu-latest\n    container: debian:buster\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      run: |\n        sed -i 's|http://deb.debian.org/debian|http://archive.debian.org/debian|g' /etc/apt/sources.list\n        sed -i 's|http://security.debian.org|http://archive.debian.org/debian-security|g' /etc/apt/sources.list\n        apt-get update && apt-get install -y build-essential\n        make REDIS_CFLAGS='-Werror'\n\n  build-macos-latest:\n    runs-on: macos-latest\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      # Fail build if there are warnings\n      # build with TLS just for compilation coverage\n      run: make REDIS_CFLAGS='-Werror' BUILD_TLS=yes\n\n  build-32bit:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      run: |\n        sudo 
apt-get update && sudo apt-get install libc6-dev-i386 gcc-multilib\n        make REDIS_CFLAGS='-Werror' 32bit\n\n  build-libc-malloc:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      run: make REDIS_CFLAGS='-Werror' MALLOC=libc\n\n  build-centos-jemalloc:\n    runs-on: ubuntu-latest\n    container: quay.io/centos/centos:stream9\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      run: |\n        dnf -y install which gcc make\n        make REDIS_CFLAGS='-Werror'\n\n  build-old-chain-jemalloc:\n    runs-on: ubuntu-latest\n    container: ubuntu:20.04\n    steps:\n    - uses: actions/checkout@v4\n    - name: make\n      run: |\n        apt-get update\n        apt-get install -y gnupg2\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial main\" >> /etc/apt/sources.list\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial universe\" >> /etc/apt/sources.list\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32\n        apt-get update\n        apt-get install -y make gcc-4.8\n        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100\n        make CC=gcc REDIS_CFLAGS='-Werror'\n"
  },
  {
    "path": ".github/workflows/codecov.yml",
    "content": "name: \"Codecov\"\n\n# Running on every push lets us display the coverage changes in each PR,\n# since each PR is compared against the coverage of its head commit\non: [push, pull_request]\n\npermissions:\n  contents: read\n\njobs:\n  code-coverage:\n    if: ${{ github.repository == 'redis/redis' }}\n    runs-on: ubuntu-latest\n\n    steps:\n    - name: Checkout repository\n      uses: actions/checkout@v4\n\n    - name: Install lcov and run test\n      run: |\n        sudo apt-get install lcov\n        make lcov\n\n    - name: Upload coverage reports to Codecov\n      uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6\n      env:\n        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}\n      with:\n        files: ./src/redis.info\n        disable_search: true\n        fail_ci_if_error: true\n\n"
  },
  {
    "path": ".github/workflows/codeql-analysis.yml",
    "content": "name: \"CodeQL\"\n\non:\n  pull_request:\n  schedule:\n    # run weekly, since new vulnerabilities are added to the database\n    - cron: '0 0 * * 0'\n\njobs:\n  analyze:\n    name: Analyze\n    runs-on: ubuntu-latest\n    if: github.event_name != 'schedule' || github.repository == 'redis/redis'\n\n    strategy:\n      fail-fast: false\n      matrix:\n        language: [ 'cpp' ]\n\n    steps:\n    - name: Checkout repository\n      uses: actions/checkout@v4\n\n    - name: Initialize CodeQL\n      uses: github/codeql-action/init@v3\n      with:\n        languages: ${{ matrix.language }}\n\n    - name: Autobuild\n      uses: github/codeql-action/autobuild@v3\n\n    - name: Perform CodeQL Analysis\n      uses: github/codeql-action/analyze@v3\n"
  },
  {
    "path": ".github/workflows/coverity.yml",
    "content": "# Creates and uploads a Coverity build on a schedule\nname: Coverity Scan\non:\n  schedule:\n  # Run once daily, since below 500k LOC can have 21 builds per week, per https://scan.coverity.com/faq#frequency\n  - cron: '0 0 * * *'\n  # Support manual execution\n  workflow_dispatch:\njobs:\n  coverity:\n    if: github.repository == 'redis/redis'\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@main\n    - name: Download and extract the Coverity Build Tool\n      run: |\n          wget -q https://scan.coverity.com/download/cxx/linux64 --post-data \"token=${COVERITY_SCAN_TOKEN}&project=redis-unstable\" -O cov-analysis-linux64.tar.gz\n          mkdir cov-analysis-linux64\n          tar xzf cov-analysis-linux64.tar.gz --strip 1 -C cov-analysis-linux64\n      env:\n        COVERITY_SCAN_TOKEN: ${{ secrets.COVERITY_SCAN_TOKEN }}\n    - name: Install Redis dependencies\n      run: sudo apt install -y gcc tcl8.6 tclx procps libssl-dev\n    - name: Build with cov-build\n      run: cov-analysis-linux64/bin/cov-build --dir cov-int make\n    - name: Upload the result\n      run: |\n          tar czvf cov-int.tgz cov-int\n          curl \\\n            --form project=redis-unstable \\\n            --form email=\"${COVERITY_SCAN_EMAIL}\" \\\n            --form token=\"${COVERITY_SCAN_TOKEN}\" \\\n            --form file=@cov-int.tgz \\\n            https://scan.coverity.com/builds\n      env:\n        COVERITY_SCAN_EMAIL: ${{ secrets.COVERITY_SCAN_EMAIL }}\n        COVERITY_SCAN_TOKEN: ${{ secrets.COVERITY_SCAN_TOKEN }}\n"
  },
  {
    "path": ".github/workflows/daily.yml",
    "content": "name: Daily\n\non:\n  pull_request:\n    branches:\n      # any PR to a release branch.\n      - '[0-9].[0-9]'\n  schedule:\n    - cron: '0 0 * * *'\n  workflow_dispatch:\n    inputs:\n      skipjobs:\n        description: 'jobs to skip (delete the ones you wanna keep, do not leave empty)'\n        default: 'valgrind,sanitizer,tls,freebsd,macos,alpine,32bit,iothreads,ubuntu,centos,malloc,specific,fortify,reply-schema,oldTC,defrag,vectorset,assert-keyspace,arm'\n      skiptests:\n        description: 'tests to skip (delete the ones you wanna keep, do not leave empty)'\n        default: 'redis,modules,sentinel,cluster,unittest'\n      test_args:\n        description: 'extra test arguments'\n        default: ''\n      cluster_test_args:\n        description: 'extra cluster / sentinel test arguments'\n        default: ''\n      use_repo:\n        description: 'repo owner and name'\n        default: 'redis/redis'\n      use_git_ref:\n        description: 'git branch or sha to use'\n        default: 'unstable'\n\n\njobs:\n\n  test-ubuntu-jemalloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'ubuntu')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ 
env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror -DREDIS_TEST -DDEBUG_ASSERTIONS'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n    - name: unittest\n      if: true && !contains(github.event.inputs.skiptests, 'unittest')\n      run: ./src/redis-server test all --accurate\n\n  test-ubuntu-arm:\n    runs-on: ubuntu-24.04-arm\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'arm')\n    timeout-minutes: 360\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make REDIS_CFLAGS='-Werror -DREDIS_TEST'\n      - name: testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get 
install -y tcl8.6 tclx\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n      - name: unittest\n        if: true && !contains(github.event.inputs.skiptests, 'unittest')\n        run: ./src/redis-server test all --accurate\n\n  test-ubuntu-arm-libc-malloc:\n    runs-on: ubuntu-24.04-arm\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      (!contains(github.event.inputs.skipjobs, 'arm') || !contains(github.event.inputs.skipjobs, 'malloc'))\n    timeout-minutes: 360\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make MALLOC=libc REDIS_CFLAGS='-Werror -DREDIS_TEST'\n      - name: testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y tcl8.6 tclx\n      - name: test\n        if: 
true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-arm-tls:\n    runs-on: ubuntu-24.04-arm\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      (!contains(github.event.inputs.skipjobs, 'arm') || !contains(github.event.inputs.skipjobs, 'tls'))\n    timeout-minutes: 360\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make BUILD_TLS=yes REDIS_CFLAGS='-Werror -DREDIS_TEST'\n      - name: testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y tcl8.6 tclx tcl-tls\n          ./utils/gen-test-certs.sh\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --tls --dump-logs ${{github.event.inputs.test_args}}\n      - name: 
sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel --tls ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster --tls ${{github.event.inputs.cluster_test_args}}\n\n  # Test with DEBUG_ASSERT_KEYSPACE enabled to verify keyspace consistency.\n  # This enables additional runtime checks after each command for:\n  # - Info keysizes histogram\n  # - Cluster slot stats\n  # Skips slow and defrag tests to avoid timeouts while maintaining good coverage.\n  test-debug-assert-keyspace:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'assert-keyspace')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror -DREDIS_TEST -DDEBUG_ASSERT_KEYSPACE -DDEBUG_ASSERTIONS'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --verbose --tags \"-slow -defrag\" \\\n          --dump-logs 
${{github.event.inputs.test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-jemalloc-fortify:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'fortify')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        apt-get update && apt-get install -y make gcc\n        make CC=gcc REDIS_CFLAGS='-Werror -DREDIS_TEST -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=3'\n    - name: testprep\n      run: sudo apt-get install -y tcl8.6 tclx procps\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n    - name: unittest\n      if: true && 
!contains(github.event.inputs.skiptests, 'unittest')\n      run: ./src/redis-server test all --accurate\n\n  test-ubuntu-libc-malloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'malloc')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make MALLOC=libc REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-no-malloc-usable-size:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 
'malloc')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make MALLOC=libc CFLAGS=-DNO_MALLOC_USABLE_SIZE REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-32bit:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, '32bit')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: 
${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        sudo apt-get update && sudo apt-get install libc6-dev-i386 gcc-multilib\n        make 32bit REDIS_CFLAGS='-Werror -DREDIS_TEST'\n        make -C tests/modules 32bit # the script below doesn't have an argument, we must build manually ahead of time\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n    - name: unittest\n      if: true && !contains(github.event.inputs.skiptests, 'unittest')\n      run: ./src/redis-server test all --accurate\n\n  test-ubuntu-tls:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: 
${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        make BUILD_TLS=yes REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        sudo apt-get install tcl8.6 tclx tcl-tls\n        ./utils/gen-test-certs.sh\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs --tls --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel --tls ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster --tls ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-tls-no-tls:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - 
uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        make BUILD_TLS=yes REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        sudo apt-get install tcl8.6 tclx tcl-tls\n        ./utils/gen-test-certs.sh\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-io-threads:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'iothreads')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        make REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - 
name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --config io-threads 4 --accurate --verbose --tags \"network iothreads psync2 repl failover\" --dump-logs ${{github.event.inputs.test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster --config io-threads 4 ${{github.event.inputs.cluster_test_args}}\n\n  test-ubuntu-reclaim-cache:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'specific')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        make REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        sudo apt-get install vmtouch\n        mkdir /tmp/master \n        mkdir /tmp/slave\n    - name: warm up\n      run: |\n        ./src/redis-server --daemonize yes --logfile /dev/null\n        ./src/redis-benchmark -n 1 > /dev/null\n        ./src/redis-cli save | grep OK > /dev/null\n        vmtouch -v ./dump.rdb > /dev/null\n    - name: test\n      run: |\n        echo \"test SAVE doesn't increase cache\"\n        CACHE0=$(grep -w file /sys/fs/cgroup/memory.stat | awk '{print 
$2}')\n        echo \"$CACHE0\"\n        ./src/redis-server --daemonize yes --logfile /dev/null --dir /tmp/master --port 8080 --repl-diskless-sync no --pidfile /tmp/master/redis.pid --rdbcompression no --enable-debug-command yes\n        ./src/redis-cli -p 8080 debug populate 10000 k 102400\n        ./src/redis-server --daemonize yes --logfile /dev/null --dir /tmp/slave --port 8081 --repl-diskless-load disabled --rdbcompression no\n        ./src/redis-cli -p 8080 save > /dev/null\n        VMOUT=$(vmtouch -v /tmp/master/dump.rdb)\n        echo $VMOUT\n        grep -q \" 0%\" <<< $VMOUT \n        CACHE=$(grep -w file /sys/fs/cgroup/memory.stat | awk '{print $2}')\n        echo \"$CACHE\"\n        if [ \"$(( $CACHE-$CACHE0 ))\" -gt \"8000000\" ]; then exit 1; fi\n\n        echo \"test replication doesn't increase cache\"\n        ./src/redis-cli -p 8081 REPLICAOF 127.0.0.1 8080 > /dev/null\n        while [ $(./src/redis-cli -p 8081 info replication | grep \"master_link_status:down\") ]; do sleep 1; done;\n        sleep 1 # wait for the completion of cache reclaim bio\n        VMOUT=$(vmtouch -v /tmp/master/dump.rdb)\n        echo $VMOUT\n        grep -q \" 0%\" <<< $VMOUT \n        VMOUT=$(vmtouch -v /tmp/slave/dump.rdb)\n        echo $VMOUT\n        grep -q \" 0%\" <<< $VMOUT \n        CACHE=$(grep -w file /sys/fs/cgroup/memory.stat | awk '{print $2}')\n        echo \"$CACHE\"\n        if [ \"$(( $CACHE-$CACHE0 ))\" -gt \"8000000\" ]; then exit 1; fi\n        \n        echo \"test reboot doesn't increase cache\"\n        PID=$(cat /tmp/master/redis.pid)\n        kill -15 $PID\n        while [ -x /proc/${PID} ]; do sleep 1; done\n        ./src/redis-server --daemonize yes --logfile /dev/null --dir /tmp/master --port 8080\n        while [ $(./src/redis-cli -p 8080 info persistence | grep \"loading:1\") ]; do sleep 1; done;\n        sleep 1 # wait for the completion of cache reclaim bio\n        VMOUT=$(vmtouch -v /tmp/master/dump.rdb)\n        echo $VMOUT\n        grep 
-q \" 0%\" <<< $VMOUT\n        CACHE=$(grep -w file /sys/fs/cgroup/memory.stat | awk '{print $2}')\n        echo \"$CACHE\"\n        if [ \"$(( $CACHE-$CACHE0 ))\" -gt \"8000000\" ]; then exit 1; fi\n\n  test-valgrind-test:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'valgrind') && !contains(github.event.inputs.skiptests, 'redis')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make valgrind REDIS_CFLAGS='-Werror -DREDIS_TEST'\n    - name: testprep\n      run: |\n        sudo apt-get update\n        sudo apt-get install tcl8.6 tclx valgrind -y\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      # Note that valgrind's overhead doesn't pair well with io-threads so we\n      # explicitly disable tests tagged with 'iothreads' - these are tests that\n      # always run with io-threads enabled.\n      run: ./runtest --valgrind --no-latency --verbose --clients 1 --timeout 2400 --tags -iothreads --dump-logs ${{github.event.inputs.test_args}}\n\n  test-valgrind-misc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 
'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'valgrind') && !(contains(github.event.inputs.skiptests, 'modules') && contains(github.event.inputs.skiptests, 'unittest'))\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make valgrind REDIS_CFLAGS='-Werror -DREDIS_TEST'\n    - name: testprep\n      run: |\n        sudo apt-get update\n        sudo apt-get install tcl8.6 tclx valgrind -y\n    - name: unittest\n      if: true && !contains(github.event.inputs.skiptests, 'unittest')\n      run: |\n        valgrind --track-origins=yes --suppressions=./src/valgrind.sup --show-reachable=no --show-possibly-lost=no --leak-check=full --log-file=err.txt ./src/redis-server test all --valgrind\n        if grep -q 0x err.txt; then cat err.txt; exit 1; fi\n\n  test-valgrind-no-malloc-usable-size-test:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'valgrind') && !contains(github.event.inputs.skiptests, 'redis')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo 
\"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make valgrind CFLAGS=\"-DNO_MALLOC_USABLE_SIZE -DREDIS_TEST\" REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        sudo apt-get update\n        sudo apt-get install tcl8.6 tclx valgrind -y\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --valgrind --tags -iothreads --no-latency --verbose --clients 1 --timeout 2400 --dump-logs ${{github.event.inputs.test_args}}\n\n  test-valgrind-no-malloc-usable-size-misc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'valgrind') && !(contains(github.event.inputs.skiptests, 'modules') && contains(github.event.inputs.skiptests, 'unittest'))\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - 
uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make valgrind CFLAGS=\"-DNO_MALLOC_USABLE_SIZE -DREDIS_TEST\" REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        sudo apt-get update\n        sudo apt-get install tcl8.6 tclx valgrind -y\n    - name: unittest\n      if: true && !contains(github.event.inputs.skiptests, 'unittest')\n      run: |\n        valgrind --track-origins=yes --suppressions=./src/valgrind.sup --show-reachable=no --show-possibly-lost=no --leak-check=full --log-file=err.txt ./src/redis-server test all --valgrind\n        if grep -q 0x err.txt; then cat err.txt; exit 1; fi\n\n  test-sanitizer-address:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'sanitizer')\n    timeout-minutes: 360\n    strategy:\n      matrix:\n        compiler: [ gcc, clang ]\n    env:\n      CC: ${{ matrix.compiler }}\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make SANITIZER=address REDIS_CFLAGS='-DREDIS_TEST -Werror -DDEBUG_ASSERTIONS'\n      - name: testprep\n        run: |\n          sudo apt-get 
update\n          sudo apt-get install tcl8.6 tclx -y\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n      - name: unittest\n        if: true && !contains(github.event.inputs.skiptests, 'unittest')\n        run: ./src/redis-server test all\n\n  test-sanitizer-memory:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'sanitizer')\n    timeout-minutes: 360\n    env:\n      CC: clang # MSan works only with clang\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make SANITIZER=memory REDIS_CFLAGS='-DREDIS_TEST -Werror -DDEBUG_ASSERTIONS'\n      - name: testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get install 
tcl8.6 tclx -y\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n      - name: unittest\n        if: true && !contains(github.event.inputs.skiptests, 'unittest')\n        run: ./src/redis-server test all\n\n  test-sanitizer-undefined:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'sanitizer')\n    timeout-minutes: 360\n    strategy:\n      matrix:\n        compiler: [ gcc, clang ]\n    env:\n      CC: ${{ matrix.compiler }}\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: make\n        run: make SANITIZER=undefined REDIS_CFLAGS='-DREDIS_TEST -Werror' SKIP_VEC_SETS=yes LUA_DEBUG=yes # we (ab)use this flow to also check Lua C API violations\n      - name: 
testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get install tcl8.6 tclx -y\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n      - name: unittest\n        if: true && !contains(github.event.inputs.skiptests, 'unittest')\n        run: ./src/redis-server test all --accurate\n\n  test-sanitizer-thread:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'sanitizer')\n    timeout-minutes: 360\n    strategy:\n      fail-fast: false # let gcc and clang both run until the end even if one of them fails\n      matrix:\n        compiler: [ gcc, clang ]\n    env:\n      CC: ${{ matrix.compiler }}\n    steps:\n      - name: prep\n        if: github.event_name == 'workflow_dispatch'\n        run: |\n          echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n          echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n          echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n          echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n          echo \"test_args: ${{github.event.inputs.test_args}}\"\n          echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n      - uses: actions/checkout@v4\n        with:\n          repository: ${{ env.GITHUB_REPOSITORY }}\n          ref: ${{ env.GITHUB_HEAD_REF }}\n      - name: 
make\n        # TODO Investigate why jemalloc with clang TSan crashes on start;\n        # with gcc TSan, jemalloc works modulo sentinel tests hanging.\n        run: make SANITIZER=thread USE_JEMALLOC=no REDIS_CFLAGS='-DREDIS_TEST -Werror -DDEBUG_ASSERTIONS'\n      - name: testprep\n        run: |\n          sudo apt-get update\n          sudo apt-get install tcl8.6 tclx -y\n      - name: test\n        if: true && !contains(github.event.inputs.skiptests, 'redis')\n        run: ./runtest --tsan --clients 1 --config io-threads 4 --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n      - name: sentinel tests\n        if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n        run: ./runtest-sentinel --tsan ${{github.event.inputs.cluster_test_args}}\n      - name: cluster tests\n        if: true && !contains(github.event.inputs.skiptests, 'cluster')\n        run: ./runtest-cluster --config io-threads 2 ${{github.event.inputs.cluster_test_args}}\n\n  test-centos-jemalloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'centos')\n    container: quay.io/centos/centos:stream9\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - 
name: make\n      run: |\n        dnf -y install which gcc make\n        make REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        dnf -y install epel-release\n        dnf -y install tcl tcltls procps-ng /usr/bin/kill\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-centos-tls-module:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls')\n    container: quay.io/centos/centos:stream9\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        dnf -y install which gcc make openssl-devel openssl\n        make BUILD_TLS=module REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        dnf -y install epel-release\n        dnf -y 
install tcl tcltls procps-ng /usr/bin/kill\n        ./utils/gen-test-certs.sh\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs --tls-module ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster --tls-module ${{github.event.inputs.cluster_test_args}}\n\n  test-centos-tls-module-no-tls:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls')\n    container: quay.io/centos/centos:stream9\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        dnf -y install which gcc make openssl-devel openssl\n        make BUILD_TLS=module REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        dnf -y install epel-release\n        dnf -y install tcl tcltls procps-ng /usr/bin/kill\n        ./utils/gen-test-certs.sh\n  
  - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-macos-latest:\n    runs-on: macos-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'macos') && !(contains(github.event.inputs.skiptests, 'redis') && contains(github.event.inputs.skiptests, 'modules'))\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror'\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --clients 1 --no-latency --dump-logs ${{github.event.inputs.test_args}}\n\n  test-macos-latest-sentinel:\n    runs-on: macos-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || 
(github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'macos') && !contains(github.event.inputs.skiptests, 'sentinel')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror'\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n\n  test-macos-latest-cluster:\n    runs-on: macos-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'macos') && !contains(github.event.inputs.skiptests, 'cluster')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: 
${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror'\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  build-macos:\n    strategy:\n      matrix:\n        os: [macos-14, macos-26]\n    runs-on: ${{ matrix.os }}\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'macos')\n    timeout-minutes: 360\n    steps:\n    - uses: maxim-lobanov/setup-xcode@v1\n      with:\n        xcode-version: latest\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror -DREDIS_TEST'\n\n  test-freebsd:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'freebsd')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo 
\"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: test\n      uses: cross-platform-actions/action@v0.30.0\n      with:\n        operating_system: freebsd\n        environment_variables: MAKE\n        version: 13.2\n        shell: bash\n        run: |\n          sudo pkg install -y bash gmake lang/tcl86 lang/tclX gcc\n          gmake\n          ./runtest --single unit/keyspace --single unit/auth --single unit/networking --single unit/protocol\n\n  test-alpine-jemalloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'alpine')\n    container: alpine:latest\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n          apk add build-base\n          make REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: apk add tcl procps tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs 
${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-alpine-libc-malloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'alpine')\n    container: alpine:latest\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n          apk add build-base\n          make REDIS_CFLAGS='-Werror' USE_JEMALLOC=no CFLAGS=-DUSE_MALLOC_USABLE_SIZE\n    - name: testprep\n      run: apk add tcl procps tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 
'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  reply-schemas-validator:\n    runs-on: ubuntu-latest\n    timeout-minutes: 360\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'reply-schema')\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror -DLOG_REQ_RES'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --log-req-res --no-latency --dont-clean --force-resp3 --tags -slow --verbose --dump-logs  ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel --log-req-res --dont-clean --force-resp3 ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster --log-req-res --dont-clean --force-resp3 ${{github.event.inputs.cluster_test_args}}\n    - name: Install Python dependencies\n      uses: py-actions/py-dependency-install@30aa0023464ed4b5b116bd9fbdab87acf01a484e # v4.1.0\n      
with:\n        path: \"./utils/req-res-validator/requirements.txt\"\n    - name: validator\n      run: ./utils/req-res-log-validator.py --verbose --fail-missing-reply-schemas ${{ (!contains(github.event.inputs.skiptests, 'redis') && !contains(github.event.inputs.skiptests, 'module') && !contains(github.event.inputs.skiptests, 'sentinel') && !contains(github.event.inputs.skiptests, 'cluster')) && github.event.inputs.test_args == '' && github.event.inputs.cluster_test_args == '' && '--fail-commands-not-all-hit' || '' }}\n\n  test-old-chain-jemalloc:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'oldTC')\n    container: ubuntu:20.04\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        apt-get update\n        apt-get install -y gnupg2\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial main\" >> /etc/apt/sources.list\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial universe\" >> /etc/apt/sources.list\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32\n        apt-get update\n        apt-get 
install -y make gcc-4.8\n        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100\n        make CC=gcc REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: apt-get install -y tcl tcltls tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-old-chain-tls-module:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls') && !contains(github.event.inputs.skipjobs, 'oldTC')\n    container: ubuntu:20.04\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        apt-get update\n        apt-get install -y gnupg2\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial main\" >> /etc/apt/sources.list\n        echo \"deb 
http://archive.ubuntu.com/ubuntu/ xenial universe\" >> /etc/apt/sources.list\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32\n        apt-get update\n        apt-get install -y make gcc-4.8 openssl libssl-dev\n        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100\n        make CC=gcc BUILD_TLS=module REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        apt-get install -y tcl tcltls tclx\n        ./utils/gen-test-certs.sh\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs --tls-module ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster --tls-module ${{github.event.inputs.cluster_test_args}}\n\n  test-old-chain-tls-module-no-tls:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'tls') && !contains(github.event.inputs.skipjobs, 'oldTC')\n    container: ubuntu:20.04\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: 
${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: |\n        apt-get update\n        apt-get install -y gnupg2 \n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial main\" >> /etc/apt/sources.list\n        echo \"deb http://archive.ubuntu.com/ubuntu/ xenial universe\" >> /etc/apt/sources.list\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5\n        apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32\n        apt-get update\n        apt-get install -y make gcc-4.8 openssl libssl-dev\n        update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 100\n        make BUILD_TLS=module CC=gcc REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: |\n        apt-get install -y tcl tcltls tclx\n        ./utils/gen-test-certs.sh\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: |\n        ./runtest --accurate --verbose --dump-logs ${{github.event.inputs.test_args}}\n    - name: sentinel tests\n      if: true && !contains(github.event.inputs.skiptests, 'sentinel')\n      run: |\n        ./runtest-sentinel ${{github.event.inputs.cluster_test_args}}\n    - name: cluster tests\n      if: true && !contains(github.event.inputs.skiptests, 'cluster')\n      run: |\n        ./runtest-cluster ${{github.event.inputs.cluster_test_args}}\n\n  test-sanitizer-force-defrag:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'defrag')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo 
\"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make SANITIZER=address DEBUG_DEFRAG=force REDIS_CFLAGS='-Werror'\n    - name: testprep\n      run: sudo apt-get install tcl8.6 tclx\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --debug-defrag --verbose --clients 1 ${{github.event.inputs.test_args}}\n\n  test-vectorset:\n    runs-on: ubuntu-latest\n    if: |\n      (github.event_name == 'workflow_dispatch' || (github.event_name != 'workflow_dispatch' && github.repository == 'redis/redis')) &&\n      !contains(github.event.inputs.skipjobs, 'vectorset')\n    timeout-minutes: 360\n    steps:\n    - name: prep\n      if: github.event_name == 'workflow_dispatch'\n      run: |\n        echo \"GITHUB_REPOSITORY=${{github.event.inputs.use_repo}}\" >> $GITHUB_ENV\n        echo \"GITHUB_HEAD_REF=${{github.event.inputs.use_git_ref}}\" >> $GITHUB_ENV\n        echo \"skipjobs: ${{github.event.inputs.skipjobs}}\"\n        echo \"skiptests: ${{github.event.inputs.skiptests}}\"\n        echo \"test_args: ${{github.event.inputs.test_args}}\"\n        echo \"cluster_test_args: ${{github.event.inputs.cluster_test_args}}\"\n    - uses: actions/checkout@v4\n      with:\n        repository: ${{ env.GITHUB_REPOSITORY }}\n        ref: ${{ env.GITHUB_HEAD_REF }}\n    - name: make\n      run: make REDIS_CFLAGS='-Werror -DREDIS_TEST'\n    - name: testprep\n      run: |\n        sudo apt-get 
install tcl8.6 tclx\n        sudo pip install redis\n    - name: test\n      if: true && !contains(github.event.inputs.skiptests, 'redis')\n      run: ./runtest --accurate --verbose --dump-logs --single vectorset/vectorset ${{github.event.inputs.test_args}}\n"
  },
  {
    "path": ".github/workflows/external.yml",
    "content": "name: External Server Tests\n\non:\n    pull_request:\n    push:\n    schedule:\n      - cron: '0 0 * * *'\n\njobs:\n  test-external-standalone:\n    runs-on: ubuntu-latest\n    if: github.event_name != 'schedule' || github.repository == 'redis/redis'\n    timeout-minutes: 360\n    steps:\n    - uses: actions/checkout@v4\n    - name: Build\n      run: make REDIS_CFLAGS=-Werror\n    - name: Start redis-server\n      run: |\n        ./src/redis-server --daemonize yes --save \"\" --logfile external-redis.log \\\n          --enable-protected-configs yes --enable-debug-command yes --enable-module-command yes\n    - name: Run external test\n      run: |\n          ./runtest \\\n            --host 127.0.0.1 --port 6379 \\\n            --verbose \\\n            --tags -slow\n    - name: Archive redis log\n      if: ${{ failure() }}\n      uses: actions/upload-artifact@v4\n      with:\n        name: test-external-redis-log\n        path: external-redis.log\n\n  test-external-cluster:\n    runs-on: ubuntu-latest\n    if: github.event_name != 'schedule' || github.repository == 'redis/redis'\n    timeout-minutes: 360\n    steps:\n    - uses: actions/checkout@v4\n    - name: Build\n      run: make REDIS_CFLAGS=-Werror\n    - name: Start redis-server\n      run: |\n        ./src/redis-server --cluster-enabled yes --daemonize yes --save \"\" --logfile external-redis-cluster.log \\\n          --enable-protected-configs yes --enable-debug-command yes --enable-module-command yes\n    - name: Create a single node cluster\n      run: ./src/redis-cli cluster addslots $(for slot in {0..16383}; do echo $slot; done); sleep 5\n    - name: Run external test\n      run: |\n          ./runtest \\\n            --host 127.0.0.1 --port 6379 \\\n            --verbose \\\n            --cluster-mode \\\n            --tags -slow\n    - name: Archive redis log\n      if: ${{ failure() }}\n      uses: actions/upload-artifact@v4\n      with:\n        name: test-external-cluster-log\n    
    path: external-redis-cluster.log\n\n  test-external-nodebug:\n    runs-on: ubuntu-latest\n    if: github.event_name != 'schedule' || github.repository == 'redis/redis'\n    timeout-minutes: 360\n    steps:\n      - uses: actions/checkout@v4\n      - name: Build\n        run: make REDIS_CFLAGS=-Werror\n      - name: Start redis-server\n        run: |\n          ./src/redis-server --daemonize yes --save \"\" --logfile external-redis-nodebug.log\n      - name: Run external test\n        run: |\n          ./runtest \\\n            --host 127.0.0.1 --port 6379 \\\n            --verbose \\\n            --tags \"-slow -needs:debug\"\n      - name: Archive redis log\n        if: ${{ failure() }}\n        uses: actions/upload-artifact@v4\n        with:\n          name: test-external-redis-nodebug-log\n          path: external-redis-nodebug.log\n"
  },
  {
    "path": ".github/workflows/post-release-automation.yml",
    "content": "name: Post-Release Automation\n\non:\n  release:\n    types: [published]\n\njobs:\n  extract-release-info:\n    if: github.repository == 'redis/redis'\n    runs-on: ubuntu-latest\n    outputs:\n      tag_name: ${{ steps.release-info.outputs.tag_name }}\n      release_type: ${{ steps.release-info.outputs.release_type }}\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v5\n\n      - name: Extract and validate release information\n        id: release-info\n        env:\n          TAG_NAME: ${{ github.event.release.tag_name }}\n          GH_TOKEN: ${{ github.token }}\n        run: |\n          echo \"tag_name=${TAG_NAME}\" >> $GITHUB_OUTPUT\n          echo \"Release tag: ${TAG_NAME}\"\n\n          LATEST_TAG=$(gh release view --json tagName --jq '.tagName')\n          echo \"Latest release tag(from gh release): ${LATEST_TAG}\"\n\n          if [[ \"${TAG_NAME}\" == \"${LATEST_TAG}\" ]]; then\n            echo \"release_type=latest\" >> $GITHUB_OUTPUT\n            echo \"Detected latest release: ${TAG_NAME}\"\n          else\n            echo \"release_type=non-latest\" >> $GITHUB_OUTPUT\n            echo \"Detected non-latest release: ${TAG_NAME} (latest is ${LATEST_TAG})\"\n          fi\n\n  create-tarball:\n    needs: extract-release-info\n    runs-on: ubuntu-latest\n    env:\n      TAG_NAME: ${{ needs.extract-release-info.outputs.tag_name }}\n    outputs:\n      sha256: ${{ steps.checksum.outputs.sha256 }}\n      size_mb: ${{ steps.size.outputs.size_mb }}\n      size_warning: ${{ steps.size.outputs.size_warning }}\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v5\n        with:\n          ref: ${{ env.TAG_NAME }}\n          fetch-depth: 0\n\n      - name: Create tarball\n        run: ./utils/releasetools/01_create_tarball.sh \"$TAG_NAME\"\n\n      - name: Verify tarball size\n        id: size\n        run: |\n          TARBALL=\"/tmp/redis-${TAG_NAME}.tar.gz\"\n          SIZE_MB=$(du -m 
\"$TARBALL\" | cut -f1)\n          echo \"Tarball size: ${SIZE_MB} MB\"\n          echo \"size_mb=${SIZE_MB}\" >> $GITHUB_OUTPUT\n          if [ \"$SIZE_MB\" -lt 3 ] || [ \"$SIZE_MB\" -gt 5 ]; then\n            echo \"::warning::Tarball size ${SIZE_MB} MB is outside expected range (3-5 MB)\"\n            echo \"size_warning=true\" >> $GITHUB_OUTPUT\n          else\n            echo \"size_warning=false\" >> $GITHUB_OUTPUT\n          fi\n\n      - name: Calculate SHA256 checksum\n        id: checksum\n        run: |\n          TARBALL=\"/tmp/redis-${TAG_NAME}.tar.gz\"\n          SHA256=$(shasum -a 256 \"$TARBALL\" | cut -d' ' -f1)\n          echo \"SHA256: $SHA256\"\n          echo \"sha256=$SHA256\" >> $GITHUB_OUTPUT\n\n      - name: Upload tarball as artifact\n        uses: actions/upload-artifact@v6\n        with:\n          name: redis-${{ env.TAG_NAME }}-tarball\n          path: /tmp/redis-${{ env.TAG_NAME }}.tar.gz\n          compression-level: 0\n\n  # approval-gate:\n  #   needs: [extract-release-info, create-tarball]\n  #   if: needs.extract-release-info.outputs.release_type == 'latest'\n  #   runs-on: ubuntu-latest\n  #   steps:\n  #     - name: Approval gate\n  #       run: |\n  #         echo \"Latest release detected. 
Manual approval required for production deployment.\"\n  #         # TODO: Implement approval workflow\n  #         # This could use GitHub Environments with required reviewers\n  #         # or a manual approval step\n\n  # upload-tarball:\n  #   needs: [extract-release-info, create-tarball, approval-gate]\n  #   if: always() && !cancelled() && needs.create-tarball.result == 'success' && (needs.approval-gate.result == 'success' || needs.approval-gate.result == 'skipped')\n  #   runs-on: ubuntu-latest\n  #   steps:\n  #     - name: Upload tarball\n  #       run: |\n  #         echo \"TODO: Implement tarball upload\"\n  #         # This will require:\n  #         # - SSH credentials/keys for upload to download.redis.io\n  #         # - Adaptation of utils/releasetools/02_upload_tarball.sh for CI environment\n\n  # test-release-tarball:\n  #   needs: upload-tarball\n  #   runs-on: ubuntu-latest\n  #   steps:\n  #     - name: Test release tarball\n  #       run: |\n  #         echo \"TODO: Implement release testing using utils/releasetools/03_test_release.sh\"\n  #         # This will:\n  #         # - Download the uploaded tarball\n  #         # - Extract and build Redis\n\n  # update-release-hashes:\n  #   needs: test-release-tarball\n  #   runs-on: ubuntu-latest\n  #   steps:\n  #     - name: Update release hashes\n  #       run: |\n  #         echo \"TODO: Implement hash update using utils/releasetools/04_release_hash.sh\"\n  #         # This will require:\n  #         # - Access to redis-hashes repository\n  #         # - Git credentials for committing and pushing\n\n  summary-and-notify:\n    needs: [extract-release-info, create-tarball] # update-release-hashes\n    if: always() && github.repository == 'redis/redis'\n    runs-on: ubuntu-latest\n    env:\n      TAG_NAME: ${{ needs.extract-release-info.outputs.tag_name }}\n      RELEASE_TYPE: ${{ needs.extract-release-info.outputs.release_type }}\n      SHA256: ${{ needs.create-tarball.outputs.sha256 }}\n      
SIZE_MB: ${{ needs.create-tarball.outputs.size_mb }}\n      SIZE_WARNING: ${{ needs.create-tarball.outputs.size_warning }}\n    steps:\n      - name: Summary\n        run: |\n          {\n            echo \"## Post-Release Automation Summary\"\n            echo \"\"\n            echo \"- **Release Tag:** ${TAG_NAME}\"\n            echo \"- **Release Type:** ${RELEASE_TYPE}\"\n            echo \"- **Tarball SHA256:** ${SHA256}\"\n            echo \"- **Tarball Size:** ${SIZE_MB} MB\"\n            if [ \"${SIZE_WARNING}\" == \"true\" ]; then\n              echo \"\"\n              echo \"> [!WARNING]\"\n              echo \"> Tarball size is outside expected range, check the logs for details.\"\n            fi\n          } >> $GITHUB_STEP_SUMMARY\n\n      # - name: Send Slack notification\n      #   run: |\n      #     echo \"TODO: Implement Slack notification\"\n      #     # This will require:\n      #     # - Slack webhook URL or bot token (stored in secrets)\n      #     # - Determine appropriate channel (e.g., #releases, #redis-releases)\n      #     # - Craft message with release information and workflow status\n"
  },
  {
    "path": ".github/workflows/redis_docs_sync.yaml",
    "content": "name: redis_docs_sync\n\non:\n  release:\n    types: [published]\n\njobs:\n  redis_docs_sync:\n    if: github.repository == 'redis/redis'\n    runs-on: ubuntu-latest\n    steps:\n      - name: Generate a token\n        id: generate-token\n        uses: actions/create-github-app-token@v1\n        with:\n          app-id: ${{ secrets.DOCS_APP_ID }}\n          private-key: ${{ secrets.DOCS_APP_PRIVATE_KEY }}\n\n      - name: Invoke workflow on redis/docs\n        env:\n          GH_TOKEN: ${{ steps.generate-token.outputs.token }}\n          RELEASE_NAME: ${{ github.event.release.tag_name }}\n        run: |\n          LATEST_RELEASE=$(\n              curl -Ls \\\n              -H \"Accept: application/vnd.github+json\" \\\n              -H \"Authorization: Bearer ${GH_TOKEN}\" \\\n              -H \"X-GitHub-Api-Version: 2022-11-28\" \\\n              https://api.github.com/repos/redis/redis/releases/latest \\\n              | jq -r '.tag_name'\n          )\n\n          if [[ \"${LATEST_RELEASE}\" == \"${RELEASE_NAME}\" ]]; then\n            gh workflow run -R redis/docs redis_docs_sync.yaml -f release=\"${RELEASE_NAME}\"\n          fi\n"
  },
  {
    "path": ".github/workflows/reply-schemas-linter.yml",
    "content": "name: Reply-schemas linter\n\non:\n  push:\n    paths:\n      - 'src/commands/*.json'\n  pull_request:\n    paths:\n      - 'src/commands/*.json'\n\njobs:\n  reply-schemas-linter:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Setup nodejs\n        uses: actions/setup-node@v4\n      - name: Install packages\n        run: npm install ajv\n      - name: linter\n        run: node ./utils/reply_schema_linter.js\n\n"
  },
  {
    "path": ".github/workflows/spell-check.yml",
    "content": "# A CI action that uses codespell to check spelling.\n# .codespell/.codespellrc is the config file.\n# .codespell/wordlist.txt lists words that the spell check should ignore.\n# For more details, see:\n# https://github.com/codespell-project/codespell\nname: Spellcheck\n\non:\n  push:\n  pull_request:\n\njobs:\n  build:\n    name: Spellcheck\n    runs-on: ubuntu-latest\n\n    steps:\n      - name: Checkout repository\n        uses: actions/checkout@v4\n\n      - name: pip cache\n        uses: actions/cache@v4\n        with:\n          path: ~/.cache/pip\n          key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}\n          restore-keys: ${{ runner.os }}-pip-\n\n      - name: Install prerequisites\n        run: sudo pip install -r ./.codespell/requirements.txt\n\n      - name: Spell check\n        run: codespell --config=./.codespell/.codespellrc\n"
  },
  {
    "path": ".gitignore",
    "content": ".*.swp\n*.o\n*.xo\n*.so\n*.d\n*.log\ndump.rdb\nredis-benchmark\nredis-check-aof\nredis-check-rdb\nredis-check-dump\nredis-cli\nredis-sentinel\nredis-server\ndoc-tools\nrelease\nmisc/*\nsrc/release.h\nappendonly.aof*\nappendonlydir\nSHORT_TERM_TODO\nrelease.h\nsrc/transfer.sh\nsrc/configs\nredis.ds\nsrc/redis.conf\nsrc/nodes.conf\ndeps/lua/src/lua\ndeps/lua/src/luac\ndeps/lua/src/liblua.a\ndeps/hdr_histogram/libhdrhistogram.a\ndeps/fpconv/libfpconv.a\ntests/tls/*\n.make-*\n.prerequisites\n*.dSYM\nMakefile.dep\n.vscode/*\n.idea/*\n.ccls\n.ccls-cache/*\ncompile_commands.json\nredis.code-workspace\n"
  },
  {
    "path": "00-RELEASENOTES",
    "content": "Hello! This file is just a placeholder, since this is the \"unstable\" branch\nof Redis, the place where all the development happens.\n\nThere are no release notes for this branch; it gets forked into another branch\nevery time there is a partial feature freeze in order to eventually create\na new stable release.\n\nUsually \"unstable\" is stable enough for you to use in development environments;\nhowever, you should never use it in production environments. It is possible\nto download the latest stable release here:\n\n    https://download.redis.io/redis-stable.tar.gz\n\nMore information is available at https://redis.io\n\nHappy hacking!\n"
  },
  {
    "path": "BUGS",
    "content": "Please check https://github.com/redis/redis/issues\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "Contributor Covenant Code of Conduct\nOur Pledge\nWe as members, contributors, and leaders pledge to make participation in our\ncommunity a harassment-free experience for everyone, regardless of age, body\nsize, visible or invisible disability, ethnicity, sex characteristics, gender\nidentity and expression, level of experience, education, socio-economic status,\nnationality, personal appearance, race, religion, or sexual identity\nand orientation.\nWe pledge to act and interact in ways that contribute to an open, welcoming,\ndiverse, inclusive, and healthy community.\nOur Standards\nExamples of behavior that contributes to a positive environment for our\ncommunity include:\n\n* Demonstrating empathy and kindness toward other people\n* Being respectful of differing opinions, viewpoints, and experiences\n* Giving and gracefully accepting constructive feedback\n* Accepting responsibility and apologizing to those affected by our mistakes,\nand learning from the experience\n* Focusing on what is best not just for us as individuals, but for the\noverall community\n\nExamples of unacceptable behavior include:\n\n* The use of sexualized language or imagery, and sexual attention or\nadvances of any kind\n* Trolling, insulting or derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or email\naddress, without their explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\nprofessional setting\n\nEnforcement Responsibilities\nCommunity leaders are responsible for clarifying and enforcing our standards of\nacceptable behavior and will take appropriate and fair corrective action in\nresponse to any behavior that they deem inappropriate, threatening, offensive,\nor harmful.\nCommunity leaders have the right and responsibility to remove, edit, or reject\ncomments, commits, code, wiki edits, issues, and other contributions that 
are\nnot aligned to this Code of Conduct, and will communicate reasons for moderation\ndecisions when appropriate.\nScope\nThis Code of Conduct applies within all community spaces, and also applies when\nan individual is officially representing the community in public spaces.\nExamples of representing our community include using an official e-mail address,\nposting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event.\nEnforcement\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported to the community leaders responsible for enforcement at\nthis email address: redis@redis.io.\nAll complaints will be reviewed and investigated promptly and fairly.\nAll community leaders are obligated to respect the privacy and security of the\nreporter of any incident.\nEnforcement Guidelines\nCommunity leaders will follow these Community Impact Guidelines in determining\nthe consequences for any action they deem in violation of this Code of Conduct:\n1. Correction\nCommunity Impact: Use of inappropriate language or other behavior deemed\nunprofessional or unwelcome in the community.\nConsequence: A private, written warning from community leaders, providing\nclarity around the nature of the violation and an explanation of why the\nbehavior was inappropriate. A public apology may be requested.\n2. Warning\nCommunity Impact: A violation through a single incident or series\nof actions.\nConsequence: A warning with consequences for continued behavior. No\ninteraction with the people involved, including unsolicited interaction with\nthose enforcing the Code of Conduct, for a specified period of time. This\nincludes avoiding interactions in community spaces as well as external channels\nlike social media. Violating these terms may lead to a temporary or\npermanent ban.\n3. 
Temporary Ban\nCommunity Impact: A serious violation of community standards, including\nsustained inappropriate behavior.\nConsequence: A temporary ban from any sort of interaction or public\ncommunication with the community for a specified period of time. No public or\nprivate interaction with the people involved, including unsolicited interaction\nwith those enforcing the Code of Conduct, is allowed during this period.\nViolating these terms may lead to a permanent ban.\n4. Permanent Ban\nCommunity Impact: Demonstrating a pattern of violation of community\nstandards, including sustained inappropriate behavior,  harassment of an\nindividual, or aggression toward or disparagement of classes of individuals.\nConsequence: A permanent ban from any sort of public interaction within\nthe community.\nAttribution\nThis Code of Conduct is adapted from the Contributor Covenant,\nversion 2.0, available at\nhttps://www.contributor-covenant.org/version/2/0/code_of_conduct.html.\nCommunity Impact Guidelines were inspired by Mozilla's code of conduct\nenforcement ladder.\nFor answers to common questions about this code of conduct, see the FAQ at\nhttps://www.contributor-covenant.org/faq. Translations are available at\nhttps://www.contributor-covenant.org/translations.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "By contributing code to the Redis project in any form you agree to the Redis Software Grant and\nContributor License Agreement attached below. Only contributions made under the Redis Software Grant\nand Contributor License Agreement may be accepted by Redis, and any contribution is subject to the\nterms of the Redis tri-license under RSALv2/SSPLv1/AGPLv3 as described in the LICENSE.txt file included in\nthe Redis source distribution.\n\n# REDIS SOFTWARE GRANT AND CONTRIBUTOR LICENSE AGREEMENT\n\nTo specify the intellectual property license granted in any Contribution, Redis Ltd., (\"**Redis**\")\nrequires a Software Grant and Contributor License Agreement (\"**Agreement**\"). This Agreement is for\nyour protection as a contributor as well as the protection of Redis and its users; it does not\nchange your rights to use your own Contribution for any other purpose permitted by this Agreement.\n\nBy making any Contribution, You accept and agree to the following terms and conditions for the\nContribution. Except for the license granted in this Agreement to Redis and the recipients of the\nsoftware distributed by Redis, You reserve all right, title, and interest in and to Your\nContribution.\n\n1. **Definitions**\n\n    1.1. \"**You**\" (or \"**Your**\") means the copyright owner or legal entity authorized by the\n    copyright owner that is entering into this Agreement with Redis. For legal entities, the entity\n    making a Contribution and all other entities that Control, are Controlled by, or are under\n    common Control with that entity are considered to be a single contributor. For the purposes of\n    this definition, \"**Control**\" means (i) the power, direct or indirect, to cause the direction\n    or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty\n    percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.\n\n    1.2. 
\"**Contribution**\" means the code, documentation, or any original work of authorship,\n    including any modifications or additions to an existing work described above.\n\n2. \"**Work**\" means any software project stewarded by Redis.\n\n3. **Grant of Copyright License**. Subject to the terms and conditions of this Agreement, You grant\n   to Redis and to the recipients of the software distributed by Redis a perpetual, worldwide,\n   non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare\n   derivative works of, publicly display, publicly perform, sublicense, and distribute Your\n   Contribution and such derivative works.\n\n4. **Grant of Patent License**. Subject to the terms and conditions of this Agreement, You grant to\n   Redis and to the recipients of the software distributed by Redis a perpetual, worldwide,\n   non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent\n   license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work,\n   where such license applies only to those patent claims licensable by You that are necessarily\n   infringed by Your Contribution alone or by a combination of Your Contribution with the Work to\n   which such Contribution was submitted. If any entity institutes patent litigation against You or\n   any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your\n   Contribution, or the Work to which you have contributed, constitutes a direct or contributory\n   patent infringement, then any patent licenses granted to the claimant entity under this Agreement\n   for that Contribution or Work terminate as of the date such litigation is filed.\n\n5. **Representations and Warranties**. 
You represent and warrant that: (i) You are legally entitled\n   to grant the above licenses; and (ii) if You are an entity, each employee or agent designated by\n   You is authorized to submit the Contribution on behalf of You; and (iii) your Contribution is\n   Your original work, and that it will not infringe on any third party's intellectual property\n   right(s).\n\n6. **Disclaimer**. You are not expected to provide support for Your Contribution, except to the\n   extent You desire to provide support. You may provide support for free, for a fee, or not at all.\n   Unless required by applicable law or agreed to in writing, You provide Your Contribution on an\n   \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied,\n   including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT,\n   MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.\n\n7. **Enforceability**. Nothing in this Agreement will be construed as creating any joint venture,\n   employment relationship, or partnership between You and Redis. If any provision of this Agreement\n   is held to be unenforceable, the remaining provisions of this Agreement will not be affected.\n   This represents the entire agreement between You and Redis relating to the Contribution.\n\n# IMPORTANT: HOW TO USE REDIS GITHUB ISSUES\n\nGitHub issues SHOULD ONLY BE USED to report bugs and for DETAILED feature\nrequests. Everything else should be asked on Discord:\n      \n    https://discord.com/invite/redis\n\nPLEASE DO NOT POST GENERAL QUESTIONS that are not about bugs or suspected\nbugs in the GitHub issues system. 
We'll be delighted to help you and provide\nall the support on Discord.\n\nThere is also an active community of Redis users at Stack Overflow:\n\n    https://stackoverflow.com/questions/tagged/redis\n\nIssues and pull requests for documentation belong on the redis-doc repo:\n\n    https://github.com/redis/redis-doc\n\nIf you are reporting a security bug or vulnerability, see SECURITY.md.\n\n# How to provide a patch for a new feature\n\n1. If it is a major feature or a semantic change, please don't start coding\nstraight away: if your feature is not a conceptual fit you'll lose a lot of\ntime writing the code for no reason. Start by posting on the mailing list\nand creating an issue on GitHub with a description of exactly what you want\nto accomplish and why. Use cases are important for features to be accepted.\nHere you can see if there is consensus about your idea.\n\n2. If in step 1 you get an acknowledgment from the project leaders, use the\n   following procedure to submit a patch:\n\n    a. Fork Redis on GitHub ( https://docs.github.com/en/github/getting-started-with-github/fork-a-repo )\n    b. Create a topic branch (git checkout -b my_branch)\n    c. Push to your branch (git push origin my_branch)\n    d. Initiate a pull request on GitHub ( https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request )\n    e. Done :)\n\n3. Keep in mind that we are very overloaded, so issues and PRs sometimes wait\nfor a *very* long time. However, this is not a lack of interest; as the project\ngets more and more users, we find ourselves in a constant need to prioritize\ncertain issues/PRs over others. If you think your issue/PR is very important,\ntry to popularize it, have other users comment and share their point of\nview, and so forth. This helps.\n\n4. For minor fixes - open a pull request on GitHub.\n\nAdditional information on the RSALv2/SSPLv1/AGPLv3 tri-license is also found in the LICENSE.txt file.\n"
  },
  {
    "path": "INSTALL",
    "content": "See README\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "Starting with Redis 8, Redis Open Source is moving to a tri-licensing model with all new Redis code\ncontributions governed by the updated Redis Software Grant and Contributor License Agreement.\nAfter this release, contributions are subject to your choice of: (a) the Redis Source Available License v2\n(RSALv2);or (b) the Server Side Public License v1 (SSPLv1); or (c) the GNU Affero General Public License v3 (AGPLv3).\nRedis Open Source 7.2 and prior releases remain subject to the BSDv3 clause license as referenced\nin the REDISCONTRIBUTIONS.txt file.\n\nThe licensing structure for Redis 8.0 and subsequent releases is as follows:\n\n\n1. Redis Source Available License 2.0 (RSALv2) Agreement\n========================================================\n\nLast Update: December 30, 2023\n\nAcceptance\n----------\n\nThis Agreement sets forth the terms and conditions on which the Licensor\nmakes available the Software. By installing, downloading, accessing,\nUsing, or distributing any of the Software, You agree to all of the\nterms and conditions of this Agreement.\n\nIf You are receiving the Software on behalf of Your Company, You\nrepresent and warrant that You have the authority to agree to this\nAgreement on behalf of such entity.\n\nThe Licensor reserves the right to update this Agreement from time to\ntime.\n\nThe terms below have the meanings set forth below for purposes of this\nAgreement:\n\nDefinitions\n-----------\n\nAgreement: this Redis Source Available License 2.0 Agreement.\n\nControl: ownership, directly or indirectly, of substantially all the\nassets of an entity, or the power to direct its management and policies\nby vote, contract, or otherwise.\n\nLicense: the License as described in the License paragraph below.\n\nLicensor: the entity offering these terms, which includes Redis Ltd. 
on\nbehalf of itself and its subsidiaries and affiliates worldwide.\n\nModify, Modified, or Modification: copy from or adapt all or part of the\nwork in a fashion requiring copyright permission other than making an\nexact copy. The resulting work is called a Modified version of the\nearlier work.\n\nRedis: the Redis software as described in redis.com redis.io.\n\nSoftware: certain Software components designed to work with Redis and\nprovided to You under this Agreement.\n\nTrademark: the trademarks, service marks, and any other similar rights.\n\nUse: anything You do with the Software requiring one of Your Licenses.\n\nYou: the recipient of the Software, the individual or entity on whose\nbehalf You are agreeing to this Agreement.\n\nYour Company: any legal entity, sole proprietorship, or other kind of\norganization that You work for, plus all organizations that have control\nover, are under the control of, or are under common control with that\norganization.\n\nYour Licenses: means all the Licenses granted to You for the Software\nunder this Agreement.\n\nLicense\n-------\n\nThe Licensor grants You a non-exclusive, royalty-free, worldwide,\nnon-sublicensable, non-transferable license to use, copy, distribute,\nmake available, and prepare derivative works of the Software, in each\ncase subject to the limitations and conditions below.\n\nLimitations\n-----------\n\nYou may not make the functionality of the Software or a Modified version\navailable to third parties as a service or distribute the Software or a\nModified version in a manner that makes the functionality of the\nSoftware available to third parties.\n\nMaking the functionality of the Software or Modified version available\nto third parties includes, without limitation, enabling third parties to\ninteract with the functionality of the Software or Modified version in\ndistributed form or remotely through a computer network, offering a\nproduct or service, the value of which entirely or primarily 
derives\nfrom the value of the Software or Modified version, or offering a\nproduct or service that accomplishes for users the primary purpose of\nthe Software or Modified version.\n\nYou may not alter, remove, or obscure any licensing, copyright, or other\nnotices of the Licensor in the Software. Any use of the Licensor's\nTrademarks is subject to applicable law.\n\nPatents\n-------\n\nThe Licensor grants You a License, under any patent claims the Licensor\ncan License, or becomes able to License, to make, have made, use, sell,\noffer for sale, import and have imported the Software, in each case\nsubject to the limitations and conditions in this License. This License\ndoes not cover any patent claims that You cause to be infringed by\nModifications or additions to the Software. If You or Your Company make\nany written claim that the Software infringes or contributes to\ninfringement of any patent, your patent License for the Software granted\nunder this Agreement ends immediately. If Your Company makes such a\nclaim, your patent License ends immediately for work on behalf of Your\nCompany.\n\nNotices\n-------\n\nYou must ensure that anyone who gets a copy of any part of the Software\nfrom You also gets a copy of the terms and conditions in this Agreement.\n\nIf You modify the Software, You must include in any Modified copies of\nthe Software prominent notices stating that You have Modified the\nSoftware.\n\nNo Other Rights\n---------------\n\nThe terms and conditions of this Agreement do not imply any Licenses\nother than those expressly granted in this Agreement.\n\nTermination\n-----------\n\nIf You Use the Software in violation of this Agreement, such Use is not\nLicensed, and Your Licenses will automatically terminate. If the\nLicensor provides You with a notice of your violation, and You cease all\nviolations of this License no later than 30 days after You receive that\nnotice, Your Licenses will be reinstated retroactively. 
However, if You\nviolate this Agreement after such reinstatement, any additional\nviolation of this Agreement will cause your Licenses to terminate\nautomatically and permanently.\n\nNo Liability\n------------\n\nAs far as the law allows, the Software comes as is, without any\nwarranty or condition, and the Licensor will not be liable to You for\nany damages arising out of this Agreement or the Use or nature of the\nSoftware, under any kind of legal claim.\n\nGoverning Law and Jurisdiction\n------------------------------\n\nIf You are located in Asia, Pacific, Americas, or other jurisdictions\nnot listed below, the Agreement will be construed and enforced in all\nrespects in accordance with the laws of the State of California, U.S.A.,\nwithout reference to its choice of law rules. The courts located in the\nCounty of Santa Clara, California, have exclusive jurisdiction for all\npurposes relating to this Agreement.\n\nIf You are located in Israel, the Agreement will be construed and\nenforced in all respects in accordance with the laws of the State of\nIsrael without reference to its choice of law rules. The courts located\nin the Central District of the State of Israel have exclusive\njurisdiction for all purposes relating to this Agreement.\n\nIf You are located in Europe, United Kingdom, Middle East or Africa, the\nAgreement will be construed and enforced in all respects in accordance\nwith the laws of England and Wales without reference to its choice of\nlaw rules. The competent courts located in London, England, have\nexclusive jurisdiction for all purposes relating to this Agreement.\n\n\n\n2. 
Server Side Public License (SSPL)\n====================================\n\n                     Server Side Public License\n                     VERSION 1, OCTOBER 16, 2018\n\n                    Copyright (c) 2018 MongoDB, Inc.\n\n  Everyone is permitted to copy and distribute verbatim copies of this\n  license document, but changing it is not allowed.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to Server Side Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\n  works, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\n  License.  Each licensee is addressed as \"you\". \"Licensees\" and\n  \"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work in\n  a fashion requiring copyright permission, other than the making of an\n  exact copy. The resulting work is called a \"modified version\" of the\n  earlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based on\n  the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\n  permission, would make you directly or secondarily liable for\n  infringement under applicable copyright law, except executing it on a\n  computer or modifying a private copy. Propagation includes copying,\n  distribution (with or without modification), making available to the\n  public, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\n  parties to make or receive copies. 
Mere interaction with a user through a\n  computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\" to the\n  extent that it includes a convenient and prominently visible feature that\n  (1) displays an appropriate copyright notice, and (2) tells the user that\n  there is no warranty for the work (except to the extent that warranties\n  are provided), that licensees may convey the work under this License, and\n  how to view a copy of this License. If the interface presents a list of\n  user commands or options, such as a menu, a prominent item in the list\n  meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work for\n  making modifications to it. \"Object code\" means any non-source form of a\n  work.\n\n  A \"Standard Interface\" means an interface that either is an official\n  standard defined by a recognized standards body, or, in the case of\n  interfaces specified for a particular programming language, one that is\n  widely used among developers working in that language.  The \"System\n  Libraries\" of an executable work include anything, other than the work as\n  a whole, that (a) is included in the normal form of packaging a Major\n  Component, but which is not part of that Major Component, and (b) serves\n  only to enable use of the work with that Major Component, or to implement\n  a Standard Interface for which an implementation is available to the\n  public in source code form. 
A \"Major Component\", in this context, means a\n  major essential component (kernel, window system, and so on) of the\n  specific operating system (if any) on which the executable work runs, or\n  a compiler used to produce the work, or an object code interpreter used\n  to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all the\n  source code needed to generate, install, and (for an executable work) run\n  the object code and to modify the work, including scripts to control\n  those activities. However, it does not include the work's System\n  Libraries, or general-purpose tools or generally available free programs\n  which are used unmodified in performing those activities but which are\n  not part of the work. For example, Corresponding Source includes\n  interface definition files associated with source files for the work, and\n  the source code for shared libraries and dynamically linked subprograms\n  that the work is specifically designed to require, such as by intimate\n  data communication or control flow between those subprograms and other\n  parts of the work.\n\n  The Corresponding Source need not include anything that users can\n  regenerate automatically from other parts of the Corresponding Source.\n\n  The Corresponding Source for a work in source code form is that same work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\n  copyright on the Program, and are irrevocable provided the stated\n  conditions are met. This License explicitly affirms your unlimited\n  permission to run the unmodified Program, subject to section 13. The\n  output from running a covered work is covered by this License only if the\n  output, given its content, constitutes a covered work. This License\n  acknowledges your rights of fair use or other equivalent, as provided by\n  copyright law.  
Subject to section 13, you may make, run and propagate\n  covered works that you do not convey, without conditions so long as your\n  license otherwise remains in force. You may convey covered works to\n  others for the sole purpose of having them make modifications exclusively\n  for you, or provide you with facilities for running those works, provided\n  that you comply with the terms of this License in conveying all\n  material for which you do not control copyright. Those thus making or\n  running the covered works for you must do so exclusively on your\n  behalf, under your direction and control, on terms that prohibit them\n  from making any copies of your copyrighted material outside their\n  relationship with you.\n\n  Conveying under any other circumstances is permitted solely under the\n  conditions stated below. Sublicensing is not allowed; section 10 makes it\n  unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\n  measure under any applicable law fulfilling obligations under article 11\n  of the WIPO copyright treaty adopted on 20 December 1996, or similar laws\n  prohibiting or restricting circumvention of such measures.\n\n  When you convey a covered work, you waive any legal power to forbid\n  circumvention of technological measures to the extent such circumvention is\n  effected by exercising rights under this License with respect to the\n  covered work, and you disclaim any intention to limit operation or\n  modification of the work as a means of enforcing, against the work's users,\n  your or third parties' legal rights to forbid circumvention of\n  technological measures.\n\n  4. 
Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\n  receive it, in any medium, provided that you conspicuously and\n  appropriately publish on each copy an appropriate copyright notice; keep\n  intact all notices stating that this License and any non-permissive terms\n  added in accord with section 7 apply to the code; keep intact all notices\n  of the absence of any warranty; and give all recipients a copy of this\n  License along with the Program.  You may charge any price or no price for\n  each copy that you convey, and you may offer support or warranty\n  protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\n  produce it from the Program, in the form of source code under the terms\n  of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified it,\n    and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is released\n    under this License and any conditions added under section 7. This\n    requirement modifies the requirement in section 4 to \"keep intact all\n    notices\".\n\n    c) You must license the entire work, as a whole, under this License to\n    anyone who comes into possession of a copy. This License will therefore\n    apply, along with any applicable section 7 additional terms, to the\n    whole of the work, and all its parts, regardless of how they are\n    packaged. 
This License gives no permission to license the work in any\n    other way, but it does not invalidate such permission if you have\n    separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your work\n    need not make them do so.\n\n  A compilation of a covered work with other separate and independent\n  works, which are not by their nature extensions of the covered work, and\n  which are not combined with it such as to form a larger program, in or on\n  a volume of a storage or distribution medium, is called an \"aggregate\" if\n  the compilation and its resulting copyright are not used to limit the\n  access or legal rights of the compilation's users beyond what the\n  individual works permit. Inclusion of a covered work in an aggregate does\n  not cause this License to apply to the other parts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms of\n  sections 4 and 5, provided that you also convey the machine-readable\n  Corresponding Source under the terms of this License, in one of these\n  ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium customarily\n    used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a written\n    offer, valid for at least three years and valid for as long as you\n    offer spare parts or customer support for that product model, to give\n    anyone who possesses the object code either (1) a copy of the\n    Corresponding Source for all the software in the product that is\n    covered by this License, on a durable physical medium customarily used\n    for software interchange, for a price no more than your reasonable cost\n    of physically performing this conveying of source, or (2) access to\n    copy the Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source. This alternative is\n    allowed only occasionally and noncommercially, and only if you received\n    the object code with such an offer, in accord with subsection 6b.\n\n    d) Convey the object code by offering access from a designated place\n    (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge. You need not require recipients to copy the\n    Corresponding Source along with the object code. 
If the place to copy\n    the object code is a network server, the Corresponding Source may be on\n    a different server (operated by you or a third party) that supports\n    equivalent copying facilities, provided you maintain clear directions\n    next to the object code saying where to find the Corresponding Source.\n    Regardless of what server hosts the Corresponding Source, you remain\n    obligated to ensure that it is available for as long as needed to\n    satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided you\n    inform other peers where the object code and Corresponding Source of\n    the work are being offered to the general public at no charge under\n    subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\n  from the Corresponding Source as a System Library, need not be included\n  in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\n  tangible personal property which is normally used for personal, family,\n  or household purposes, or (2) anything designed or sold for incorporation\n  into a dwelling. In determining whether a product is a consumer product,\n  doubtful cases shall be resolved in favor of coverage. For a particular\n  product received by a particular user, \"normally used\" refers to a\n  typical or common use of that class of product, regardless of the status\n  of the particular user or of the way in which the particular user\n  actually uses, or expects or is expected to use, the product. 
A product\n  is a consumer product regardless of whether the product has substantial\n  commercial, industrial or non-consumer uses, unless such uses represent\n  the only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\n  procedures, authorization keys, or other information required to install\n  and execute modified versions of a covered work in that User Product from\n  a modified version of its Corresponding Source. The information must\n  suffice to ensure that the continued functioning of the modified object\n  code is in no case prevented or interfered with solely because\n  modification has been made.\n\n  If you convey an object code work under this section in, or with, or\n  specifically for use in, a User Product, and the conveying occurs as part\n  of a transaction in which the right of possession and use of the User\n  Product is transferred to the recipient in perpetuity or for a fixed term\n  (regardless of how the transaction is characterized), the Corresponding\n  Source conveyed under this section must be accompanied by the\n  Installation Information. But this requirement does not apply if neither\n  you nor any third party retains the ability to install modified object\n  code on the User Product (for example, the work has been installed in\n  ROM).\n\n  The requirement to provide Installation Information does not include a\n  requirement to continue to provide support service, warranty, or updates\n  for a work that has been modified or installed by the recipient, or for\n  the User Product in which it has been modified or installed. 
Access\n  to a network may be denied when the modification itself materially\n  and adversely affects the operation of the network or violates the\n  rules and protocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided, in\n  accord with this section must be in a format that is publicly documented\n  (and with an implementation available to the public in source code form),\n  and must require no special password or key for unpacking, reading or\n  copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\n  License by making exceptions from one or more of its conditions.\n  Additional permissions that are applicable to the entire Program shall be\n  treated as though they were included in this License, to the extent that\n  they are valid under applicable law. If additional permissions apply only\n  to part of the Program, that part may be used separately under those\n  permissions, but the entire Program remains governed by this License\n  without regard to the additional permissions.  When you convey a copy of\n  a covered work, you may at your option remove any additional permissions\n  from that copy, or from any part of it. (Additional permissions may be\n  written to require their own removal in certain cases when you modify the\n  work.) 
You may place additional permissions on material, added by you to\n  a covered work, for which you have or can give appropriate copyright\n  permission.\n\n  Notwithstanding any other provision of this License, for material you add\n  to a covered work, you may (if authorized by the copyright holders of\n  that material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some trade\n    names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that material\n    by anyone who conveys the material (or modified versions of it) with\n    contractual assumptions of liability to the recipient, for any\n    liability that these contractual assumptions directly impose on those\n    licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\n  restrictions\" within the meaning of section 10. If the Program as you\n  received it, or any part of it, contains a notice stating that it is\n  governed by this License along with a term that is a further restriction,\n  you may remove that term. 
If a license document contains a further\n  restriction but permits relicensing or conveying under this License, you\n  may add to a covered work material governed by the terms of that license\n  document, provided that the further restriction does not survive such\n  relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you must\n  place, in the relevant source files, a statement of the additional terms\n  that apply to those files, or a notice indicating where to find the\n  applicable terms.  Additional terms, permissive or non-permissive, may be\n  stated in the form of a separately written license, or stated as\n  exceptions; the above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\n  provided under this License. Any attempt otherwise to propagate or modify\n  it is void, and will automatically terminate your rights under this\n  License (including any patent licenses granted under the third paragraph\n  of section 11).\n\n  However, if you cease all violation of this License, then your license\n  from a particular copyright holder is reinstated (a) provisionally,\n  unless and until the copyright holder explicitly and finally terminates\n  your license, and (b) permanently, if the copyright holder fails to\n  notify you of the violation by some reasonable means prior to 60 days\n  after the cessation.\n\n  Moreover, your license from a particular copyright holder is reinstated\n  permanently if the copyright holder notifies you of the violation by some\n  reasonable means, this is the first time you have received notice of\n  violation of this License (for any work) from that copyright holder, and\n  you cure the violation prior to 30 days after your receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\n  licenses of parties who have received copies or rights from you under\n  this License. 
If your rights have been terminated and not permanently\n  reinstated, you do not qualify to receive new licenses for the same\n  material under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or run a\n  copy of the Program. Ancillary propagation of a covered work occurring\n  solely as a consequence of using peer-to-peer transmission to receive a\n  copy likewise does not require acceptance. However, nothing other than\n  this License grants you permission to propagate or modify any covered\n  work. These actions infringe copyright if you do not accept this License.\n  Therefore, by modifying or propagating a covered work, you indicate your\n  acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically receives\n  a license from the original licensors, to run, modify and propagate that\n  work, subject to this License. You are not responsible for enforcing\n  compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\n  organization, or substantially all assets of one, or subdividing an\n  organization, or merging organizations. If propagation of a covered work\n  results from an entity transaction, each party to that transaction who\n  receives a copy of the work also receives whatever licenses to the work\n  the party's predecessor in interest had or could give under the previous\n  paragraph, plus a right to possession of the Corresponding Source of the\n  work from the predecessor in interest, if the predecessor has it or can\n  get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the rights\n  granted or affirmed under this License. 
For example, you may not impose a\n  license fee, royalty, or other charge for exercise of rights granted\n  under this License, and you may not initiate litigation (including a\n  cross-claim or counterclaim in a lawsuit) alleging that any patent claim\n  is infringed by making, using, selling, offering for sale, or importing\n  the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\n  License of the Program or a work on which the Program is based. The work\n  thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims owned or\n  controlled by the contributor, whether already acquired or hereafter\n  acquired, that would be infringed by some manner, permitted by this\n  License, of making, using, or selling its contributor version, but do not\n  include claims that would be infringed only as a consequence of further\n  modification of the contributor version. For purposes of this definition,\n  \"control\" includes the right to grant patent sublicenses in a manner\n  consistent with the requirements of this License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\n  patent license under the contributor's essential patent claims, to make,\n  use, sell, offer for sale, import and otherwise run, modify and propagate\n  the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\n  agreement or commitment, however denominated, not to enforce a patent\n  (such as an express permission to practice a patent or covenant not to\n  sue for patent infringement). 
To \"grant\" such a patent license to a party\n  means to make such an agreement or commitment not to enforce a patent\n  against the party.\n\n  If you convey a covered work, knowingly relying on a patent license, and\n  the Corresponding Source of the work is not available for anyone to copy,\n  free of charge and under the terms of this License, through a publicly\n  available network server or other readily accessible means, then you must\n  either (1) cause the Corresponding Source to be so available, or (2)\n  arrange to deprive yourself of the benefit of the patent license for this\n  particular work, or (3) arrange, in a manner consistent with the\n  requirements of this License, to extend the patent license to downstream\n  recipients. \"Knowingly relying\" means you have actual knowledge that, but\n  for the patent license, your conveying the covered work in a country, or\n  your recipient's use of the covered work in a country, would infringe\n  one or more identifiable patents in that country that you have reason\n  to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\n  arrangement, you convey, or propagate by procuring conveyance of, a\n  covered work, and grant a patent license to some of the parties receiving\n  the covered work authorizing them to use, propagate, modify or convey a\n  specific copy of the covered work, then the patent license you grant is\n  automatically extended to all recipients of the covered work and works\n  based on it.\n\n  A patent license is \"discriminatory\" if it does not include within the\n  scope of its coverage, prohibits the exercise of, or is conditioned on\n  the non-exercise of one or more of the rights that are specifically\n  granted under this License. 
You may not convey a covered work if you are\n  a party to an arrangement with a third party that is in the business of\n  distributing software, under which you make payment to the third party\n  based on the extent of your activity of conveying the work, and under\n  which the third party grants, to any of the parties who would receive the\n  covered work from you, a discriminatory patent license (a) in connection\n  with copies of the covered work conveyed by you (or copies made from\n  those copies), or (b) primarily for and in connection with specific\n  products or compilations that contain the covered work, unless you\n  entered into that arrangement, or that patent license was granted, prior\n  to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting any\n  implied license or other defenses to infringement that may otherwise be\n  available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\n  otherwise) that contradict the conditions of this License, they do not\n  excuse you from the conditions of this License. If you cannot use,\n  propagate or convey a covered work so as to satisfy simultaneously your\n  obligations under this License and any other pertinent obligations, then\n  as a consequence you may not use, propagate or convey it at all. For\n  example, if you agree to terms that obligate you to collect a royalty for\n  further conveying from those to whom you convey the Program, the only way\n  you could satisfy both those terms and this License would be to refrain\n  entirely from conveying the Program.\n\n  13. Offering the Program as a Service.\n\n  If you make the functionality of the Program or a modified version\n  available to third parties as a service, you must make the Service Source\n  Code available via network download to everyone at no charge, under the\n  terms of this License. 
Making the functionality of the Program or\n  modified version available to third parties as a service includes,\n  without limitation, enabling third parties to interact with the\n  functionality of the Program or modified version remotely through a\n  computer network, offering a service the value of which entirely or\n  primarily derives from the value of the Program or modified version, or\n  offering a service that accomplishes for users the primary purpose of the\n  Program or modified version.\n\n  \"Service Source Code\" means the Corresponding Source for the Program or\n  the modified version, and the Corresponding Source for all programs that\n  you use to make the Program or modified version available as a service,\n  including, without limitation, management software, user interfaces,\n  application program interfaces, automation software, monitoring software,\n  backup software, storage software and hosting software, all such that a\n  user could run an instance of the service using the Service Source Code\n  you make available.\n\n  14. Revised Versions of this License.\n\n  MongoDB, Inc. may publish revised and/or new versions of the Server Side\n  Public License from time to time. Such new versions will be similar in\n  spirit to the present version, but may differ in detail to address new\n  problems or concerns.\n\n  Each version is given a distinguishing version number. If the Program\n  specifies that a certain numbered version of the Server Side Public\n  License \"or any later version\" applies to it, you have the option of\n  following the terms and conditions either of that numbered version or of\n  any later version published by MongoDB, Inc. 
If the Program does not\n  specify a version number of the Server Side Public License, you may\n  choose any version ever published by MongoDB, Inc.\n\n  If the Program specifies that a proxy can decide which future versions of\n  the Server Side Public License can be used, that proxy's public statement\n  of acceptance of a version permanently authorizes you to choose that\n  version for the Program.\n\n  Later license versions may give you additional or different permissions.\n  However, no additional obligations are imposed on any author or copyright\n  holder as a result of your choosing to follow a later version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\n  APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\n  HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\n  OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\n  THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\n  PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\n  IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\n  ALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\n  WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\n  THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING\n  ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF\n  THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO\n  LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU\n  OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER\n  PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE\n  POSSIBILITY OF SUCH DAMAGES.\n\n  17. 
Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided above\n  cannot be given local legal effect according to their terms, reviewing\n  courts shall apply local law that most closely approximates an absolute\n  waiver of all civil liability in connection with the Program, unless a\n  warranty or assumption of liability accompanies a copy of the Program in\n  return for a fee.\n\n                        END OF TERMS AND CONDITIONS\n\n\n3. GNU AFFERO GENERAL PUBLIC LICENSE, Version 3, 19 Nov 2007\n========================================================\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU Affero General Public License is a free, copyleft license for\nsoftware and other kinds of works, specifically designed to ensure\ncooperation with the community in the case of network server software.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nour General Public Licenses are intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  
Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  Developers that use our General Public Licenses protect your rights\nwith two steps: (1) assert copyright on the software, and (2) offer\nyou this License which gives you legal permission to copy, distribute\nand/or modify the software.\n\n  A secondary benefit of defending all users' freedom is that\nimprovements made in alternate versions of the program, if they\nreceive widespread use, become available for other developers to\nincorporate.  Many developers of free software are heartened and\nencouraged by the resulting cooperation.  However, in the case of\nsoftware used on network servers, this result may fail to come about.\nThe GNU General Public License permits making a modified version and\nletting the public access it on a server without ever releasing its\nsource code to the public.\n\n  The GNU Affero General Public License is designed specifically to\nensure that, in such cases, the modified source code becomes available\nto the community.  It requires the operator of a network server to\nprovide the source code of the modified version running there to the\nusers of that server.  Therefore, public use of a modified version, on\na publicly accessible server, gives the public access to the source\ncode of the modified version.\n\n  An older license, called the Affero General Public License and\npublished by Affero, was designed to accomplish similar goals.  
This is\na different license, not a version of the Affero GPL, but Affero has\nreleased a new version of the Affero GPL which permits relicensing under\nthis license.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  
Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  
A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  
You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. 
Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  
This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium 
customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  
Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  
The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  
If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered 
\"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  
Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. 
Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  
The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  
\"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. 
No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Remote Network Interaction; Use with the GNU General Public License.\n\n  Notwithstanding any other provision of this License, if you modify the\nProgram, your modified version must prominently offer all users\ninteracting with it remotely through a computer network (if your version\nsupports such interaction) an opportunity to receive the Corresponding\nSource of your version by providing access to the Corresponding Source\nfrom a network server at no charge, through some standard or customary\nmeans of facilitating copying of software.  This Corresponding Source\nshall include the Corresponding Source for any work covered by version 3\nof the GNU General Public License that is incorporated pursuant to the\nfollowing paragraph.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the work with which it is combined will remain governed by version\n3 of the GNU General Public License.\n\n  14. 
Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU Affero General Public License from time to time.  Such new versions\nwill be similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU Affero General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU Affero General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU Affero General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. 
Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU Affero General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU Affero General Public License for more details.\n\n    You should have received a copy of the GNU Affero General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If your software can interact with users remotely through a computer\nnetwork, you should also make sure that it provides a way for users to\nget its source.  For example, if your program is a web application, its\ninterface could display a \"Source\" link that leads users to an archive\nof the code.  There are many ways you could offer source, and different\nsolutions will be better for different programs; see section 13 for the\nspecific requirements.\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU AGPL, see\n<https://www.gnu.org/licenses/>.\n"
  },
  {
    "path": "MANIFESTO",
    "content": "[Note: this is the Redis manifesto; for general information about\n       installing and running Redis read the README file instead.]\n\nRedis Manifesto\n===============\n\n1 - A DSL for Abstract Data Types. Redis is a DSL (Domain Specific Language)\n    that manipulates abstract data types and is implemented as a TCP daemon.\n    Commands manipulate a key space where keys are binary-safe strings and\n    values are different kinds of abstract data types. Every data type\n    represents an abstract version of a fundamental data structure. For instance,\n    Redis Lists are an abstract representation of linked lists. In Redis, the\n    essence of a data type isn't just the kind of operations that the data types\n    support, but also the space and time complexity of the data type and the\n    operations performed upon it.\n\n2 - Memory storage is #1. The Redis data set, composed of defined key-value\n    pairs, is primarily stored in the computer's memory. The amount of memory in\n    all kinds of computers, including entry-level servers, is increasing\n    significantly each year. Memory is fast, and allows Redis to have very\n    predictable performance. Datasets composed of 10k or 40 million keys will\n    perform similarly. Complex data types like Redis Sorted Sets are easy to\n    implement and manipulate in memory with good performance, making Redis very\n    simple. Redis will continue to explore alternative options (where data can\n    be optionally stored on disk, say) but the main goal of the project remains\n    the development of an in-memory database.\n\n3 - Fundamental data structures for a fundamental API. The Redis API is a direct\n    consequence of fundamental data structures. APIs can often be arbitrary but\n    not an API that resembles the nature of fundamental data structures. 
If we\n    ever meet intelligent life forms from another part of the universe, they'll\n    likely know, understand and recognize the same basic data structures we have\n    in our computer science books. Redis will avoid intermediate layers in its\n    API, so that the complexity is obvious and more complex operations can be\n    performed as the sum of the basic operations.\n\n4 - We believe in code efficiency. Computers get faster and faster, yet we\n    believe that abusing computing capabilities is not wise: the number of\n    operations you can perform for a given amount of energy remains a\n    significant parameter: it lets you do more with fewer computers and, at\n    the same time, have a smaller environmental impact. Similarly, Redis is\n    able to \"scale down\" to smaller devices. It is perfectly usable on a\n    Raspberry Pi and other small ARM-based computers. Faster code with just\n    the layers of abstraction that are really needed will also often result\n    in more predictable performance. We think likewise about memory\n    usage: one of the fundamental goals of the Redis project is to\n    incrementally build more and more memory-efficient data structures, so that\n    problems that were not approachable in RAM in the past will be perfectly\n    fine to handle in the future.\n\n5 - Code is like a poem; it's not just something we write to reach some\n    practical result. Sometimes people who are far from the Redis philosophy\n    suggest using other code written by other authors (frequently in other\n    languages) in order to implement something Redis currently lacks. But to us\n    this would be as if Shakespeare had decided to end Enrico IV using the\n    Paradiso from the Divina Commedia. Is using any external code a bad idea?\n    Not at all. Just as in \"One Thousand and One Nights\" smaller self-contained\n    stories are embedded in a bigger story, we'll be happy to use beautiful\n    self-contained libraries when needed. 
At the same time, when writing the Redis story we're trying to\n    write smaller stories that will fit into other code.\n\n6 - We're against complexity. We believe designing systems is a fight against\n    complexity. We'll take up the fight against complexity when it's worthwhile,\n    but we'll try hard to recognize when a small feature is not worth thousands\n    of lines of code. Most of the time the best way to fight complexity is by\n    not creating it at all. Complexity is also a form of lock-in: code that is\n    very hard to understand cannot be modified by users in an independent way\n    regardless of the license. One of the main Redis goals is to remain\n    understandable enough for a single programmer to get a clear idea of how\n    it works in detail just by reading the source code for a couple of weeks.\n\n7 - Threading is not a silver bullet. Instead of making Redis threaded, we\n    believe in the idea of an efficient (mostly) single-threaded Redis core.\n    Multiple such cores, which may run on the same computer or on\n    multiple computers, are abstracted away as a single big system by\n    higher-order protocols and features: Redis Cluster and the upcoming\n    Redis Proxy are our main goals. A shared-nothing approach is not just\n    much simpler (see the previous point in this document); it is also optimal\n    on NUMA systems. In the specific case of Redis it allows each instance\n    to hold a more limited amount of data, making the Redis persist-by-fork\n    approach sounder. In the future we may explore parallelism only for\n    I/O, which is the low-hanging fruit: minimal complexity could provide an\n    improved single-process experience.\n\n8 - Two levels of API. The Redis API has two levels: 1) a subset of the API fits\n    naturally into a distributed version of Redis and 2) a more complex API that\n    supports multi-key operations. 
Both are useful if used judiciously, but\n    there's no way to make the more complex multi-key API distributed in an\n    opaque way without violating our other principles. We don't want to provide\n    the illusion of something that will work magically when in fact it can't in\n    all cases. Instead we'll provide commands to quickly migrate keys from one\n    instance to another to perform multi-key operations and expose the\n    trade-offs to the user.\n\n9 - We optimize for joy. We believe writing code is a lot of hard work, and the\n    only way it can be worth it is to enjoy it. When there is no longer joy in\n    writing code, the best thing to do is stop. To prevent this, we'll avoid\n    taking paths that will make Redis less of a joy to develop.\n\n10 - All the above points are put together in what we call opportunistic\n     programming: trying to get the most for the user with minimal increases\n     in complexity (low-hanging fruit). Solve 95% of the problem with 5% of the\n     code when it is acceptable. Avoid a fixed schedule but follow the flow of\n     user requests, inspiration, and Redis's internal readiness for certain\n     features (sometimes many past changes reach a critical point making a\n     previously complex feature very easy to obtain).\n"
  },
  {
    "path": "Makefile",
    "content": "# Top level makefile, the real stuff is at ./src/Makefile and in ./modules/Makefile\n\nSUBDIRS = src\nifeq ($(BUILD_WITH_MODULES), yes)\n\tifeq ($(MAKECMDGOALS),32bit)\n    \t$(error BUILD_WITH_MODULES=yes is not supported on 32 bit systems)\n\tendif\n\tSUBDIRS += modules\nendif\n\ndefault: all\n\n.DEFAULT:\n\tfor dir in $(SUBDIRS); do $(MAKE) -C $$dir $@; done\n\ninstall:\n\tfor dir in $(SUBDIRS); do $(MAKE) -C $$dir $@; done\n\n.PHONY: install\n"
  },
  {
    "path": "README.md",
    "content": "[![codecov](https://codecov.io/github/redis/redis/graph/badge.svg?token=6bVHb5fRuz)](https://codecov.io/github/redis/redis)\n\nThis document serves as both a quick start guide to Redis and a detailed resource for building it from source.\n\n- New to Redis? Start with [What is Redis](#what-is-redis) and [Getting Started](#getting-started)\n- Ready to build from source? Jump to [Build Redis from Source](#build-redis-from-source)\n- Want to contribute? See the [Code contributions](#code-contributions) section\n  and [CONTRIBUTING.md](./CONTRIBUTING.md)\n- Looking for detailed documentation? Navigate to [redis.io/docs](https://redis.io/docs/)\n\n## Table of contents\n\n- [What is Redis?](#what-is-redis)\n  - [Key use cases](#key-use-cases)\n- [Why choose Redis?](#why-choose-redis)\n- [What is Redis Open Source?](#what-is-redis-open-source)\n- [Getting started](#getting-started)\n  - [Redis starter projects](#redis-starter-projects)\n  - [Using Redis with client libraries](#using-redis-with-client-libraries)\n  - [Using Redis with redis-cli](#using-redis-with-redis-cli)\n  - [Using Redis with Redis Insight](#using-redis-with-redis-insight)\n- [Redis data types, processing engines, and capabilities](#redis-data-types-processing-engines-and-capabilities)\n- [Cloud hosted Redis](#cloud-hosted-redis)\n- [Community](#community)\n- [Build Redis from source](#build-redis-from-source)\n  - [Build and run Redis with all data structures - Ubuntu 20.04 (Focal)](#build-and-run-redis-with-all-data-structures---ubuntu-2004-focal)\n  - [Build and run Redis with all data structures - Ubuntu 22.04 (Jammy)](#build-and-run-redis-with-all-data-structures---ubuntu-2204-jammy)\n  - [Build and run Redis with all data structures - Ubuntu 24.04 (Noble)](#build-and-run-redis-with-all-data-structures---ubuntu-2404-noble)\n  - [Build and run Redis with all data structures - Debian 11 (Bullseye) / 12 
(Bookworm)](#build-and-run-redis-with-all-data-structures---debian-11-bullseye--12-bookworm)\n  - [Build and run Redis with all data structures - AlmaLinux 8.10 / Rocky Linux 8.10](#build-and-run-redis-with-all-data-structures---almalinux-810--rocky-linux-810)\n  - [Build and run Redis with all data structures - AlmaLinux 9.5 / Rocky Linux 9.5](#build-and-run-redis-with-all-data-structures---almalinux-95--rocky-linux-95)\n  - [Build and run Redis with all data structures - macOS 13 (Ventura) and macOS 14 (Sonoma)](#build-and-run-redis-with-all-data-structures---macos-13-ventura-and-macos-14-sonoma)\n  - [Build and run Redis with all data structures - macOS 15 (Sequoia)](#build-and-run-redis-with-all-data-structures---macos-15-sequoia)\n  - [Building Redis - flags and general notes](#building-redis---flags-and-general-notes)\n  - [Fixing build problems with dependencies or cached build options](#fixing-build-problems-with-dependencies-or-cached-build-options)\n  - [Fixing problems building 32 bit binaries](#fixing-problems-building-32-bit-binaries)\n  - [Allocator](#allocator)\n  - [Monotonic clock](#monotonic-clock)\n  - [Verbose build](#verbose-build)\n  - [Running Redis with TLS](#running-redis-with-tls)\n- [Code contributions](#code-contributions)\n- [Redis Trademarks](#redis-trademarks)\n\n## What is Redis?\n\nFor developers building real-time, data-driven applications, Redis is the preferred, fastest, and most feature-rich cache, data structure server, and document and vector query engine.\n\n### Key use cases\n\nRedis excels in various applications, including:\n\n- **Caching:** Supports multiple eviction policies, key expiration, and hash-field expiration.\n- **Distributed Session Store:** Offers flexible session data modeling (string, JSON, hash).\n- **Data Structure Server:** Provides low-level data structures (strings, lists, sets, hashes, sorted sets, JSON, etc.) 
with high-level semantics (counters, queues, leaderboards, rate limiters) and supports transactions & scripting.\n- **NoSQL Data Store:** Key-value, document, and time series data storage.\n- **Search and Query Engine:** Indexing for hash/JSON documents, supporting vector search, full-text search, geospatial queries, ranking, and aggregations via Redis Query Engine.\n- **Event Store & Message Broker:** Implements queues (lists), priority queues (sorted sets), event deduplication (sets), streams, and pub/sub with probabilistic stream processing capabilities.\n- **Vector Store for GenAI:** Integrates with AI applications (e.g. LangGraph, mem0) for short-term memory, long-term memory, LLM response caching (semantic caching), and retrieval augmented generation (RAG).\n- **Real-Time Analytics:** Powers personalization, recommendations, fraud detection, and risk assessment.\n\n## Why choose Redis?\n\nRedis is a popular choice for developers worldwide due to its combination of speed, flexibility, and rich feature set. Here's why people choose Redis:\n\n- **Performance:** Because Redis keeps data primarily in memory and uses efficient data structures, it achieves extremely low latency (often sub-millisecond) for both read and write operations. This makes it ideal for applications demanding real-time responsiveness.\n- **Flexibility:** Redis isn't just a key-value store; it provides native support for a wide range of data structures and capabilities listed in [What is Redis?](#what-is-redis)\n- **Extensibility:** Redis is not limited to the built-in data structures; it has a [modules API](https://redis.io/docs/latest/develop/reference/modules/) that makes it possible to extend Redis functionality and rapidly implement new Redis commands\n- **Simplicity:** Redis has a simple, text-based protocol and [well-documented command set](https://redis.io/docs/latest/commands/)\n- **Ubiquity:** Redis is battle-tested in production workloads at a massive scale. 
There is a good chance you indirectly interact with Redis several times daily\n- **Versatility**: Redis is the de facto standard for use cases such as:\n  - **Caching:** quickly access frequently used data without needing to query your primary database\n  - **Session management:** read and write user session data without hurting user experience or slowing down every API call\n  - **Querying, sorting, and analytics:** perform deduplication, full text search, and secondary indexing on in-memory data as fast as possible\n  - **Messaging and interservice communication:** job queues, message brokering, pub/sub, and streams for communicating between services\n  - **Vector operations:** Long-term and short-term LLM memory, RAG content retrieval, semantic caching, semantic routing, and vector similarity search\n\nIn summary, Redis provides a powerful, fast, and flexible toolkit for solving a wide variety of data management challenges. If you want to know more, here is a list of starting points:\n\n- [**Introduction to Redis data types**](https://redis.io/docs/latest/develop/data-types/)\n- [**The full list of Redis commands**](https://redis.io/commands/)\n- [**Redis for AI**](https://redis.io/docs/latest/develop/ai/)\n- [**Redis documentation**](https://redis.io/documentation/)\n\n## What is Redis Open Source?\n\nRedis Community Edition (Redis CE) was renamed Redis Open Source with the v8.0 release.\n\nRedis Ltd. 
also offers [Redis Software](https://redis.io/enterprise/), self-managed software with additional compliance, reliability, and resiliency for enterprise scaling,\nand [Redis Cloud](https://redis.io/cloud/), a fully managed service integrated with Google Cloud, Azure, and AWS for production-ready apps.\n\nRead more about the differences between Redis Open Source and Redis [here](https://redis.io/technology/advantages/).\n\n## Getting started\n\nIf you want to get up and running with Redis quickly without needing to build from source, use one of the following methods:\n\n- [**Redis Cloud**](https://cloud.redis.io/)\n- [**Official Redis Docker images (Alpine/Debian)**](https://hub.docker.com/_/redis)\n  ```sh\n  docker run -d -p 6379:6379 redis:latest\n  ```\n- **Redis binary distributions**\n  - [**Snap**](https://github.com/redis/redis-snap)\n  - [**Homebrew**](https://github.com/redis/homebrew-redis)\n  - [**RPM**](https://github.com/redis/redis-rpm)\n  - [**Debian**](https://github.com/redis/redis-debian)\n- [**Redis quick start guides**](https://redis.io/docs/latest/develop/get-started/)\n\nIf you prefer to [build Redis from source](#build-redis-from-source), see the instructions below.\n\n### Redis starter projects\n\nTo get started as quickly as possible in your language of choice, use one of the following starter projects:\n\n- [**Python (redis-py)**](https://github.com/redis-developer/redis-starter-python)\n- [**C#/.NET (NRedisStack/StackExchange.Redis)**](https://github.com/redis-developer/redis-starter-csharp)\n- [**Go (go-redis)**](https://github.com/redis-developer/redis-starter-go)\n- [**JavaScript (node-redis)**](https://github.com/redis-developer/redis-starter-js)\n- [**Java/Spring (Jedis)**](https://github.com/redis-developer/redis-starter-java)\n\n### Using Redis with client libraries\n\nTo connect your application to Redis, you will need a client library. 
Redis has documented client libraries in most popular languages, with community-supported client libraries in additional languages.\n\n- [**Python (redis-py)**](https://redis.io/docs/latest/develop/clients/redis-py/)\n- [**Python (RedisVL)**](https://redis.io/docs/latest/integrate/redisvl/)\n- [**C#/.NET (NRedisStack/StackExchange.Redis)**](https://redis.io/docs/latest/develop/clients/dotnet/)\n- [**JavaScript (node-redis)**](https://redis.io/docs/latest/develop/clients/nodejs/)\n- [**Java (Jedis)**](https://redis.io/docs/latest/develop/clients/jedis/)\n- [**Java (Lettuce)**](https://redis.io/docs/latest/develop/clients/lettuce/)\n- [**Go (go-redis)**](https://redis.io/docs/latest/develop/clients/go/)\n- [**PHP (Predis)**](https://redis.io/docs/latest/develop/clients/php/)\n- [**C (hiredis)**](https://redis.io/docs/latest/develop/clients/hiredis/)\n- [**Full list of client libraries**](https://redis.io/docs/latest/develop/clients/)\n\n### Using Redis with redis-cli\n\n[`redis-cli`](https://redis.io/docs/latest/develop/tools/cli/) is Redis' command line interface. It is available as part of all the binary distributions and when you build Redis from source.\n\nYou can start a redis-server instance, and then, in another terminal try the following:\n\n```sh\ncd src\n./redis-cli\n```\n\n```text\nredis> ping\nPONG\nredis> set foo bar\nOK\nredis> get foo\n\"bar\"\nredis> incr mycounter\n(integer) 1\nredis> incr mycounter\n(integer) 2\nredis>\n```\n\n### Using Redis with Redis Insight\n\nFor a more visual and user-friendly experience, use [Redis Insight](https://redis.io/docs/latest/develop/tools/insight/) - a tool that lets you explore data, design, develop, and optimize your applications while also serving as a platform for Redis education and onboarding. 
Redis Insight integrates [Redis Copilot](https://redis.io/chat), a natural language AI assistant that improves the experience when working with data and commands.\n\n- [**Redis Insight documentation**](https://redis.io/docs/latest/develop/tools/insight/)\n- [**Redis Insight GitHub repository**](https://github.com/RedisInsight/RedisInsight)\n\n## Redis data types, processing engines, and capabilities\n\nRedis provides a variety of data types, processing engines, and capabilities to support a wide range of use cases:\n\n**Important:** Features marked with an asterisk (\*) require Redis to be compiled with the `BUILD_WITH_MODULES=yes` flag when [building Redis from source](#build-redis-from-source).\n\n- [**String:**](https://redis.io/docs/latest/develop/data-types/strings) Sequences of bytes, including text, serialized objects, and binary arrays, used for caching, counters, and bitwise operations.\n- [**JSON:**](https://redis.io/docs/latest/develop/data-types/json/) Nested JSON documents that are indexed and searchable using JSONPath expressions and the [Redis Query Engine](https://redis.io/docs/latest/develop/interact/search-and-query/).\n- [**Hash:**](https://redis.io/docs/latest/develop/data-types/hashes/) Field-value maps used to represent basic objects and store groupings of key-value pairs, with support for [hash field expiration (TTL)](https://redis.io/docs/latest/develop/data-types/hashes/#field-expiration).\n- [**Redis Query Engine:**](https://redis.io/docs/latest/develop/interact/search-and-query/) Use Redis as a document database, a vector database, a secondary index, and a search engine. 
Define indexes for hash and JSON documents and then use a rich query language for vector search, full-text search, geospatial queries, and aggregations.\n- [**List:**](https://redis.io/docs/latest/develop/data-types/lists/) Linked lists of string values used as stacks and queues, and for queue management.\n- [**Set:**](https://redis.io/docs/latest/develop/data-types/sets/) Unordered collection of unique strings used for tracking unique items, relations, and common set operations (intersections, unions, differences).\n- [**Sorted set:**](https://redis.io/docs/latest/develop/data-types/sorted-sets/) Collection of unique strings ordered by an associated score, used for leaderboards and rate limiters.\n- [**Vector set (beta):**](https://redis.io/docs/latest/develop/data-types/vector-sets/) Collection of vector embeddings used for semantic similarity search, semantic caching, semantic routing, and Retrieval Augmented Generation (RAG).\n- [**Geospatial indexes:**](https://redis.io/docs/latest/develop/data-types/geospatial/) Coordinates used for finding nearby points within a given radius or bounding box.\n- [**Bitmap:**](https://redis.io/docs/latest/develop/data-types/bitmaps/) A set of bit-oriented operations defined on the string type, used for efficient set representations and object permissions.\n- [**Bitfield:**](https://redis.io/docs/latest/develop/data-types/bitfields/) Binary-encoded strings that let you set, increment, and get integer values of arbitrary bit length, used for limited-range counters, numeric values, and multi-level object permissions such as role-based access control (RBAC).\n- [**Hyperloglog:**](https://redis.io/docs/latest/develop/data-types/probabilistic/hyperloglogs/) A probabilistic data structure for approximating the cardinality of a set, used for analytics such as counting unique visits, form fills, etc.\n- \*[**Bloom filter:**](https://redis.io/docs/latest/develop/data-types/probabilistic/bloom-filter/) A probabilistic data structure to check if 
a given value is present in a set. Used for fraud detection, ad placement, and unique column (e.g., username/email/slug) checks.\n- \*[**Cuckoo filter:**](https://redis.io/docs/latest/develop/data-types/probabilistic/cuckoo-filter/) A probabilistic data structure for checking if a given value is present in a set while also allowing limited counting and deletions, used in targeted advertising and coupon code validation.\n- \*[**t-digest:**](https://redis.io/docs/latest/develop/data-types/probabilistic/t-digest/) A probabilistic data structure used for estimating the percentile of a large dataset without having to store and order all the data points. Used for hardware/software monitoring, online gaming, network traffic monitoring, and predictive maintenance.\n- \*[**Top-k:**](https://redis.io/docs/latest/develop/data-types/probabilistic/top-k/) A probabilistic data structure for finding the most frequent values in a data stream, used for trend discovery.\n- \*[**Count-min sketch:**](https://redis.io/docs/latest/develop/data-types/probabilistic/count-min-sketch/) A probabilistic data structure for estimating how many times a given value appears in a data stream, used for sales volume calculations.\n- [**Time series:**](https://redis.io/docs/latest/develop/data-types/timeseries/) Data points indexed in time order, used for monitoring sensor data, asset tracking, and predictive analytics.\n- [**Pub/sub:**](https://redis.io/docs/latest/develop/interact/pubsub/) A lightweight messaging capability. Publishers send messages to a channel, and subscribers receive messages from that channel.\n- [**Stream:**](https://redis.io/docs/latest/develop/data-types/streams/) An append-only log with random access capabilities and complex consumption strategies such as consumer groups. 
Used for event sourcing, sensor monitoring, and notifications.\n- [**Transaction:**](https://redis.io/docs/latest/develop/interact/transactions/) Allows the execution of a group of commands in a single step. A request sent by another client will never be served in the middle of the execution of a transaction. This guarantees that the commands are executed as a single isolated operation.\n- [**Programmability:**](https://redis.io/docs/latest/develop/interact/programmability/eval-intro/) Upload and execute Lua scripts on the server. Scripts can employ programmatic control structures and use most of the commands while executing to access the database. Because scripts are executed on the server, reading and writing data from scripts is very efficient.\n\n## Cloud hosted Redis\n\nFully-managed Redis with real-time performance at scale.\n\n[**Redis Cloud**](https://redis.io/cloud/)\n\n## Community\n\n[**Redis Community Resources**](https://redis.io/community/)\n\n## Build Redis from source\n\nThis section refers to building Redis from source. If you want to get up and running with Redis quickly without needing to build from source see the [Getting started section](#getting-started).\n\n### Build and run Redis with all data structures - Ubuntu 20.04 (Focal)\n\nTested with the following Docker image:\n\n- ubuntu:20.04\n\n1. Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   apt-get update\n   apt-get install -y sudo\n   sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf gcc-10 g++-10 libtool\n   ```\n\n2. Use GCC 10 as the default compiler\n\n   Update the system's default compiler to GCC 10:\n\n   ```sh\n   sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10\n   ```\n\n3. 
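Verify the default compiler\n\n   Optionally, as a quick sanity check (this step can be skipped), confirm that `gcc` now resolves to GCC 10:\n\n   ```sh\n   gcc --version\n   ```\n\n3. 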
Install CMake\n\n   Install CMake using `pip3` and link it for system-wide access:\n\n   ```sh\n   pip3 install cmake==3.31.6\n   sudo ln -sf /usr/local/bin/cmake /usr/bin/cmake\n   cmake --version\n   ```\n\n   Note: CMake version 3.31.6 is the latest supported version. Newer versions cannot be used.\n\n4. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n5. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n6. Build Redis\n\n   Set the necessary environment variables and compile Redis:\n\n   ```sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n7. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - Ubuntu 22.04 (Jammy)\n\nTested with the following Docker image:\n\n- ubuntu:22.04\n\n1. Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   apt-get update\n   apt-get install -y sudo\n   sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool\n   ```\n\n2. 
Install CMake\n\n   Install CMake using `pip3` and link it for system-wide access:\n\n   ```sh\n   pip3 install cmake==3.31.6\n   sudo ln -sf /usr/local/bin/cmake /usr/bin/cmake\n   cmake --version\n   ```\n\n   Note: CMake version 3.31.6 is the latest supported version. Newer versions cannot be used.\n\n3. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n4. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n5. Build Redis\n\n   Set the necessary environment variables and build Redis:\n\n   ```sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n6. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - Ubuntu 24.04 (Noble)\n\nTested with the following Docker image:\n\n- ubuntu:24.04\n\n1. Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   apt-get update\n   apt-get install -y sudo\n   sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool\n   ```\n\n2. 
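Verify CMake\n\n   Optionally, confirm that the CMake installed from the Ubuntu 24.04 repositories in the previous step is available on your `PATH`:\n\n   ```sh\n   cmake --version\n   ```\n\n2. 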
Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n3. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n4. Build Redis\n\n   Set the necessary environment variables and build Redis:\n\n   ```sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n5. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - Debian 11 (Bullseye) / 12 (Bookworm)\n\nTested with the following Docker images:\n\n- debian:bullseye\n- debian:bullseye-slim\n- debian:bookworm\n- debian:bookworm-slim\n\n1. Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   apt-get update\n   apt-get install -y sudo\n   sudo apt-get install -y --no-install-recommends ca-certificates wget dpkg-dev gcc g++ libc6-dev libssl-dev make git cmake python3 python3-pip python3-venv python3-dev unzip rsync clang automake autoconf libtool\n   ```\n\n2. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n3. 
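Verify the archive\n\n   Optionally, confirm that the download completed and is a valid gzip archive by listing its first few entries:\n\n   ```sh\n   tar -tzf redis-<version>.tar.gz | head\n   ```\n\n3. 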
Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n4. Build Redis\n\n   Set the necessary environment variables and build Redis:\n\n   ```sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n5. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - AlmaLinux 8.10 / Rocky Linux 8.10\n\nTested with the following Docker images:\n\n- almalinux:8.10\n- almalinux:8.10-minimal\n- rockylinux/rockylinux:8.10\n- rockylinux/rockylinux:8.10-minimal\n\n1. Prepare the system\n\n   For 8.10-minimal, install `sudo` and `dnf` as follows:\n\n   ```sh\n   microdnf install dnf sudo -y\n   ```\n\n   For 8.10 (regular), install sudo as follows:\n\n   ```sh\n   dnf install sudo -y\n   ```\n\n   Clean the package metadata, enable required repositories, and install development tools:\n\n   ```sh\n   sudo dnf clean all\n   sudo tee /etc/yum.repos.d/goreleaser.repo > /dev/null <<EOF\n   [goreleaser]\n   name=GoReleaser\n   baseurl=https://repo.goreleaser.com/yum/\n   enabled=1\n   gpgcheck=0\n   EOF\n   sudo dnf update -y\n   sudo dnf groupinstall \"Development Tools\" -y\n   sudo dnf config-manager --set-enabled powertools\n   sudo dnf install -y epel-release\n   ```\n\n2. 
Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   sudo dnf install -y --nobest --skip-broken pkg-config wget gcc-toolset-13-gcc gcc-toolset-13-gcc-c++ git make openssl openssl-devel python3.11 python3.11-pip python3.11-devel unzip rsync clang curl libtool automake autoconf jq systemd-devel\n   ```\n\n   Create a Python virtual environment:\n\n   ```sh\n   python3.11 -m venv /opt/venv\n   ```\n\n   Enable the GCC toolset:\n\n   ```sh\n   sudo cp /opt/rh/gcc-toolset-13/enable /etc/profile.d/gcc-toolset-13.sh\n   echo \"source /etc/profile.d/gcc-toolset-13.sh\" | sudo tee -a /etc/bashrc\n   ```\n\n3. Install CMake\n\n   Install CMake 3.25.1 manually:\n\n   ```sh\n   CMAKE_VERSION=3.25.1\n   ARCH=$(uname -m)\n   if [ \"$ARCH\" = \"x86_64\" ]; then\n     CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-x86_64.sh\n   else\n     CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-aarch64.sh\n   fi\n   wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/${CMAKE_FILE}\n   chmod +x ${CMAKE_FILE}\n   ./${CMAKE_FILE} --skip-license --prefix=/usr/local --exclude-subdir\n   rm ${CMAKE_FILE}\n   cmake --version\n   ```\n\n4. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n5. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n6. 
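Verify the toolchain\n\n   Optionally, confirm that the GCC 13 toolset and CMake are active before building:\n\n   ```sh\n   source /etc/profile.d/gcc-toolset-13.sh\n   gcc --version\n   cmake --version\n   ```\n\n6. 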
Build Redis\n\n   Enable the GCC toolset, set the necessary environment variables, and build Redis:\n\n   ```sh\n   source /etc/profile.d/gcc-toolset-13.sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n7. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - AlmaLinux 9.5 / Rocky Linux 9.5\n\nTested with the following Docker images:\n\n- almalinux:9.5\n- almalinux:9.5-minimal\n- rockylinux/rockylinux:9.5\n- rockylinux/rockylinux:9.5-minimal\n\n1. Prepare the system\n\n   For 9.5-minimal, install `sudo` and `dnf` as follows:\n\n   ```sh\n   microdnf install dnf sudo -y\n   ```\n\n   For 9.5 (regular), install sudo as follows:\n\n   ```sh\n   dnf install sudo -y\n   ```\n\n   Clean the package metadata, enable required repositories, and install development tools:\n\n   ```sh\n   sudo tee /etc/yum.repos.d/goreleaser.repo > /dev/null <<EOF\n   [goreleaser]\n   name=GoReleaser\n   baseurl=https://repo.goreleaser.com/yum/\n   enabled=1\n   gpgcheck=0\n   EOF\n   sudo dnf clean all\n   sudo dnf makecache\n   sudo dnf update -y\n   ```\n\n2. Install required dependencies\n\n   Update your package lists and install the necessary development tools and libraries:\n\n   ```sh\n   sudo dnf install -y --nobest --skip-broken pkg-config xz wget which gcc-toolset-13-gcc gcc-toolset-13-gcc-c++ git make openssl openssl-devel python3 python3-pip python3-devel unzip rsync clang curl libtool automake autoconf jq systemd-devel\n   ```\n\n   Create a Python virtual environment:\n\n   ```sh\n   python3 -m venv /opt/venv\n   ```\n\n   Enable the GCC toolset:\n\n   ```sh\n   sudo cp /opt/rh/gcc-toolset-13/enable /etc/profile.d/gcc-toolset-13.sh\n   echo \"source /etc/profile.d/gcc-toolset-13.sh\" | sudo tee -a /etc/bashrc\n   ```\n\n3. 
Install CMake\n\n   Install CMake 3.25.1 manually:\n\n   ```sh\n   CMAKE_VERSION=3.25.1\n   ARCH=$(uname -m)\n   if [ \"$ARCH\" = \"x86_64\" ]; then\n     CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-x86_64.sh\n   else\n     CMAKE_FILE=cmake-${CMAKE_VERSION}-linux-aarch64.sh\n   fi\n   wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/${CMAKE_FILE}\n   chmod +x ${CMAKE_FILE}\n   ./${CMAKE_FILE} --skip-license --prefix=/usr/local --exclude-subdir\n   rm ${CMAKE_FILE}\n   cmake --version\n   ```\n\n4. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd /usr/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n5. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd /usr/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n6. Build Redis\n\n   Enable the GCC toolset, set the necessary environment variables, and build Redis:\n\n   ```sh\n   source /etc/profile.d/gcc-toolset-13.sh\n   cd /usr/src/redis-<version>\n   export BUILD_TLS=yes BUILD_WITH_MODULES=yes INSTALL_RUST_TOOLCHAIN=yes DISABLE_WERRORS=yes\n   make -j \"$(nproc)\" all\n   ```\n\n7. Run Redis\n\n   ```sh\n   cd /usr/src/redis-<version>\n   ./src/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - macOS 13 (Ventura) and macOS 14 (Sonoma)\n\n1. Install Homebrew\n\n   If Homebrew is not already installed, follow the installation instructions on the [Homebrew home page](https://brew.sh/).\n\n2. 
Install required packages\n\n   ```sh\n   export HOMEBREW_NO_AUTO_UPDATE=1\n   brew update\n   brew install coreutils\n   brew install make\n   brew install openssl\n   brew install llvm@18\n   brew install cmake\n   brew install gnu-sed\n   brew install automake\n   brew install libtool\n   brew install wget\n   ```\n\n3. Install Rust\n\n   Rust is required to build the JSON package.\n\n   ```sh\n   RUST_INSTALLER=rust-1.80.1-$(if [ \"$(uname -m)\" = \"arm64\" ]; then echo \"aarch64\"; else echo \"x86_64\"; fi)-apple-darwin\n   wget --quiet -O ${RUST_INSTALLER}.tar.xz https://static.rust-lang.org/dist/${RUST_INSTALLER}.tar.xz\n   tar -xf ${RUST_INSTALLER}.tar.xz\n   (cd ${RUST_INSTALLER} && sudo ./install.sh)\n   ```\n\n4. Download the Redis source\n\n   Download a specific version of the Redis source code archive from GitHub.\n\n   Replace `<version>` with the Redis version, for example: `8.0.0`.\n\n   ```sh\n   cd ~/src\n   wget -O redis-<version>.tar.gz https://github.com/redis/redis/archive/refs/tags/<version>.tar.gz\n   ```\n\n5. Extract the source archive\n\n   Create a directory for the source code and extract the contents into it:\n\n   ```sh\n   cd ~/src\n   tar xvf redis-<version>.tar.gz\n   rm redis-<version>.tar.gz\n   ```\n\n6. Build Redis\n\n   ```sh\n   cd ~/src/redis-<version>\n   export HOMEBREW_PREFIX=\"$(brew --prefix)\"\n   export BUILD_WITH_MODULES=yes\n   export BUILD_TLS=yes\n   export DISABLE_WERRORS=yes\n   export PATH=\"$HOMEBREW_PREFIX/opt/libtool/libexec/gnubin:$HOMEBREW_PREFIX/opt/llvm@18/bin:$HOMEBREW_PREFIX/opt/make/libexec/gnubin:$HOMEBREW_PREFIX/opt/gnu-sed/libexec/gnubin:$HOMEBREW_PREFIX/opt/coreutils/libexec/gnubin:$PATH\"\n   export LDFLAGS=\"-L$HOMEBREW_PREFIX/opt/llvm@18/lib\"\n   export CPPFLAGS=\"-I$HOMEBREW_PREFIX/opt/llvm@18/include\"\n   mkdir -p build_dir/etc\n   make -j \"$(nproc)\" all OS=macos\n   make install PREFIX=$(pwd)/build_dir OS=macos\n   ```\n\n7. 
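Verify the build\n\n   Optionally, confirm that the server binary was installed into `build_dir` and reports its version:\n\n   ```sh\n   build_dir/bin/redis-server --version\n   ```\n\n7. 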
Run Redis\n\n   ```sh\n   export LC_ALL=en_US.UTF-8\n   export LANG=en_US.UTF-8\n   build_dir/bin/redis-server redis-full.conf\n   ```\n\n### Build and run Redis with all data structures - macOS 15 (Sequoia)\n\nSupport and instructions will be provided at a later date.\n\n### Building Redis - flags and general notes\n\nRedis can be compiled and used on Linux, OSX, OpenBSD, NetBSD, and FreeBSD.\nWe support big-endian and little-endian architectures, and both 32-bit and 64-bit systems.\n\nIt may compile on Solaris-derived systems (for instance, SmartOS), but our support for this platform is _best effort_ and Redis is not guaranteed to work as well as on Linux, OSX, and \*BSD.\n\nTo build Redis with all the data structures (including JSON, time series, Bloom filter, cuckoo filter, count-min sketch, top-k, and t-digest) and with Redis Query Engine, first make sure that all the prerequisites are installed (see the build instructions above, per operating system). Then use the following flag in the make command:\n\n```sh\nmake BUILD_WITH_MODULES=yes\n```\n\nNote: `BUILD_WITH_MODULES=yes` is not supported on 32-bit systems.\n\nTo build Redis with just the core data structures, use:\n\n```sh\nmake\n```\n\nTo build with TLS support, you need OpenSSL development libraries (e.g. 
libssl-dev on Debian/Ubuntu) and the following flag in the make command:\n\n```sh\nmake BUILD_TLS=yes\n```\n\nTo build with systemd support, you need systemd development libraries (such as libsystemd-dev on Debian/Ubuntu or systemd-devel on CentOS), and the following flag:\n\n```sh\nmake USE_SYSTEMD=yes\n```\n\nTo append a suffix to Redis program names, add the following flag:\n\n```sh\nmake PROG_SUFFIX=\"-alt\"\n```\n\nYou can build a 32-bit Redis binary using:\n\n```sh\nmake 32bit\n```\n\nAfter building Redis, it is a good idea to test it using:\n\n```sh\nmake test\n```\n\nIf you built with TLS support, you can run the tests with TLS enabled (you will need `tcl-tls` installed):\n\n```sh\n./utils/gen-test-certs.sh\n./runtest --tls\n```\n\n### Fixing build problems with dependencies or cached build options\n\nRedis has some dependencies which are included in the `deps` directory. `make` does not automatically rebuild dependencies even if something in the source code of dependencies changes.\n\nWhen you update the source code with `git pull`, or when code inside the dependencies tree is modified in any other way, make sure to use the following command in order to really clean everything and rebuild from scratch:\n\n```sh\nmake distclean\n```\n\nThis will clean: jemalloc, lua, hiredis, linenoise, and other dependencies.\n\nAlso, if you force certain build options like a 32-bit target, no C compiler optimizations (for debugging purposes), and other similar build-time options, those options are cached indefinitely until you issue a `make distclean`\ncommand.\n\n### Fixing problems building 32-bit binaries\n\nIf after building Redis with a 32-bit target you need to rebuild it\nwith a 64-bit target, or the other way around, you need to perform a `make distclean` in the root directory of the Redis distribution.\n\nIn case of build errors when trying to build a 32-bit binary of Redis, try the following steps:\n\n- Install the package libc6-dev-i386 (also try g++-multilib).\n- Try using the 
following command line instead of `make 32bit`:\n  `make CFLAGS=\"-m32 -march=native\" LDFLAGS=\"-m32\"`\n\n### Allocator\n\nSelecting a non-default memory allocator when building Redis is done by setting the `MALLOC` environment variable. Redis is compiled and linked against libc malloc by default, except for jemalloc being the default on Linux systems. This default was picked because jemalloc has proven to have fewer fragmentation problems than libc malloc.\n\nTo force compiling against libc malloc, use:\n\n```sh\nmake MALLOC=libc\n```\n\nTo compile against jemalloc on Mac OS X systems, use:\n\n```sh\nmake MALLOC=jemalloc\n```\n\n### Monotonic clock\n\nBy default, Redis will build using the POSIX clock_gettime function as the monotonic clock source. On most modern systems, the internal processor clock can be used to improve performance. Cautions can be found here: http://oliveryang.net/2015/09/pitfalls-of-TSC-usage/\n\nOn ARM aarch64 systems, the hardware clock is enabled by default because the ARM Generic Timer is architecturally guaranteed to be available and monotonic on all ARMv8-A processors (see the *“The Generic Timer in AArch64 state”* section of the *Arm Architecture Reference Manual for Armv8-A*).\n\nTo build with support for the processor's internal instruction clock on other architectures, use:\n\n```sh\nmake CFLAGS=\"-DUSE_PROCESSOR_CLOCK\"\n```\n\n### Verbose build\n\nRedis will build with a user-friendly colorized output by default.\nIf you want to see a more verbose output, use the following:\n\n```sh\nmake V=1\n```\n\n### Running Redis with TLS\n\nPlease consult the [TLS.md](TLS.md) file for more information on how to use Redis with TLS.\n\n### Running Redis with the Query Engine and optional proprietary Intel SVS-VAMANA optimisations\n\n**License Disclaimer**\nIf you are using Redis Open Source under AGPLv3 or SSPLv1, you cannot use it together with the Intel Optimizations (Leanvec and LVQ binaries). 
The reason is that the Intel SVS license is not compatible with those licenses.\nThe LeanVec and LVQ techniques are closed source and are only available for use with Redis Open Source when distributed under the RSALv2 license.\nFor more details, please refer to the information provided by Intel [here](https://github.com/intel/ScalableVectorSearch).\n\nBy default, Redis with the Redis Query Engine supports the SVS-VAMANA index with global 8-bit quantisation. To compile Redis with the Intel SVS-VAMANA optimisations, LeanVec and LVQ, use the following:\n\n```sh\nmake BUILD_INTEL_SVS_OPT=yes\n```\n\nAlternatively, you can export the variable before running the build step for your platform:\n\n```sh\nexport BUILD_INTEL_SVS_OPT=yes\nmake\n```\n\n\n## Code contributions\n\nBy contributing code to the Redis project in any form, including sending a pull request via GitHub, a code fragment or patch via private email or public discussion groups, you agree to release your code under the terms of the Redis Software Grant and Contributor License Agreement. Please see the CONTRIBUTING.md file in this source distribution for more information. For security bugs and vulnerabilities, please see SECURITY.md and the description of the ability of users to backport security patches under Redis Open Source 7.4+ under BSDv3. Open Source Redis releases are subject to the following licenses:\n\n1. Version 7.2.x and prior releases are subject to BSDv3. These contributions to the original Redis core project are owned by their contributors and licensed under the BSDv3 license as referenced in the REDISCONTRIBUTIONS.txt file. Any copy of that license in this repository applies only to those contributions;\n\n2. Versions 7.4.x to 7.8.x are subject to your choice of RSALv2 or SSPLv1; and\n\n3. 
Version 8.0.x and subsequent releases are subject to the tri-license RSALv2/SSPLv1/AGPLv3 at your option as referenced in the LICENSE.txt file.\n\n## Redis Trademarks\n\nThe purpose of a trademark is to identify the goods and services of a person or company without causing confusion. As the registered owner of its name and logo, Redis accepts certain limited uses of its trademarks, but it has requirements that must be followed as described in its Trademark Guidelines available at: https://redis.io/legal/trademark-policy/.\n"
  },
  {
    "path": "REDISCONTRIBUTIONS.txt",
    "content": "Copyright (c) 2006-Present, Redis Ltd. and Contributors\nAll rights reserved.\n\nNote: Continued Applicability of the BSD-3-Clause License\n\nDespite the shift to the dual-licensing model with version 7.4 (RSALv2 or SSPLv1) and\nthe shift to a tri-license option with version 8.0 (RSALv2/SSPLv1/AGPLv3), portions of\nRedis Open Source remain available subject to the BSD-3-Clause License (BSD).\nSee below for the full BSD license:\n\nRedistribution and use in source and binary forms, with or without modification, are permitted\nprovided that the following conditions are met:\n\n1. Redistributions of source code must retain the above copyright notice, this list of conditions\nand the following disclaimer.\n\n2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions\nand the following disclaimer in the documentation and/or other materials provided with the\ndistribution.\n\n3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse\nor promote products derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR\nIMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\nFITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR\nCONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER\nIN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF\nTHE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "SECURITY.md",
"content": "# Security Policy\n\n## Supported Versions\n\nRedis is generally backward compatible with very few exceptions, so we\nrecommend that users always use the latest version to experience stability,\nperformance, and security.\n\nWe generally backport security issues to a single previous major version,\nunless this is not possible or feasible with a reasonable effort.\n\n| Version | Supported                                                              |\n|---------|------------------------------------------------------------------------|\n| 8.6.x   | :white_check_mark:                                                     |\n| 8.4.x   | :white_check_mark:                                                     |\n| 8.2.x   | :white_check_mark:                                                     |\n| 8.0.x   | :x:                                                                    |\n| 7.4.x   | :white_check_mark:                                                     |\n| 7.2.x   | :white_check_mark: support extended till 7.4 end of support            |\n| < 7.2.x | :x:                                                                    |\n| 6.2.x   | :white_check_mark: support extended                                    |\n| < 6.2.x | :x:                                                                    |\n\n## Reporting a Vulnerability\n\nIf you believe you have found a security vulnerability, to ensure proper review\nand assessment, we kindly ask that vulnerability reports be submitted through\nour [Redis Vulnerability Disclosure Program](https://redis.io/redis-responsible-vulnerability-disclosure/).\n\nWe have found this path to be beneficial for both researchers and us for\na number of reasons, including fast response times for researchers and\nopportunities for us to invite those with exceptional reports into closed paid\nengagements.
\n\nFor those averse to using our chosen platform, we will also accept reports directly\nvia GitHub's \"Report a Vulnerability\".  \n\nTo contact the security team directly with questions use: [security@redis.com](mailto:security@redis.com)\n\n\n## Responsible Disclosure\n\nIn some cases, we may apply a responsible disclosure process to reported or\notherwise discovered vulnerabilities. We will usually do that for a critical\nvulnerability, and only if we have a good reason to believe information about\nit is not yet public.\n\nThis process involves providing an early notification about the vulnerability,\nits impact and mitigations to a short list of vendors under a time-limited\nembargo on public disclosure.\n\nIf you believe you should be on the list, please contact us and we will\nconsider your request based on the above criteria.\n\n## Support across Operating Systems, Architectures, and Compilers\n\nRedis is primarily tested on modern Linux distributions, using contemporary\nIntel and AMD x86_64 CPUs, as well as ARM-based CPUs, and recent versions of\nthe GCC compiler.\nVulnerability reports that rely on unsupported or uncommon environments\n(for example, 32-bit architectures, non-Linux operating systems, or outdated\ntoolchains) may be considered out of scope, even if the issue is technically\nvalid. Such reports will be evaluated on a case-by-case basis at our discretion.\n\n## License Compatibility\n\nFor security vulnerability patches released under Redis Open Source 7.4 and \nthereafter, Redis permits users of earlier versions (7.2 and prior) to access \npatches under the BSD3 license noted in REDISCONTRIBUTIONS.txt instead of the \nfull license requirements described in LICENSE.txt. Security fixes are tested \nonly against the specific versions for which they are provided. Applicability \nor portability to other versions or forks has not been evaluated.\n"
  },
  {
    "path": "TLS.md",
    "content": "TLS Support\n===========\n\nGetting Started\n---------------\n\n### Building\n\nTo build with TLS support you'll need OpenSSL development libraries (e.g.\nlibssl-dev on Debian/Ubuntu).\n\nTo build TLS support as a Redis built-in:\nRun `make BUILD_TLS=yes`.\n\nOr to build TLS as a Redis module:\nRun `make BUILD_TLS=module`.\n\nNote that Sentinel mode does not support the TLS module.\n\n### Tests\n\nTo run the Redis test suite with TLS, you'll need TLS support for TCL (e.g. the\n`tcl-tls` package on Debian/Ubuntu).\n\n1. Run `./utils/gen-test-certs.sh` to generate a root CA and a server\n   certificate.\n\n2. Run `./runtest --tls` or `./runtest-cluster --tls` to run the Redis and Redis\n   Cluster tests in TLS mode.\n\n3. Run `./runtest --tls-module` or `./runtest-cluster --tls-module` to\n   run the Redis and Redis Cluster tests in TLS mode with the TLS module.\n\n### Running manually\n\nTo manually run a Redis server in TLS mode (assuming `gen-test-certs.sh` was\ninvoked so sample certificates/keys are available):\n\nFor TLS built-in mode:\n    ./src/redis-server --tls-port 6379 --port 0 \\\n        --tls-cert-file ./tests/tls/redis.crt \\\n        --tls-key-file ./tests/tls/redis.key \\\n        --tls-ca-cert-file ./tests/tls/ca.crt\n\nFor TLS module mode:\n    ./src/redis-server --tls-port 6379 --port 0 \\\n        --tls-cert-file ./tests/tls/redis.crt \\\n        --tls-key-file ./tests/tls/redis.key \\\n        --tls-ca-cert-file ./tests/tls/ca.crt \\\n        --loadmodule src/redis-tls.so\n\nTo connect to this Redis server with `redis-cli`:\n\n    ./src/redis-cli --tls \\\n        --cert ./tests/tls/redis.crt \\\n        --key ./tests/tls/redis.key \\\n        --cacert ./tests/tls/ca.crt\n\nThis will disable TCP and enable TLS on port 6379. 
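\n\nFor a quick sanity check of the TLS setup, you can send a `PING` over the TLS\nconnection by appending the command to the `redis-cli` invocation (assuming the\nserver started above is still running, it should reply with `PONG`):\n\n    ./src/redis-cli --tls \\\n        --cert ./tests/tls/redis.crt \\\n        --key ./tests/tls/redis.key \\\n        --cacert ./tests/tls/ca.crt \\\n        PING\n\n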
It's also possible to have\nboth TCP and TLS available, but you'll need to assign different ports.\n\nTo make a Replica connect to the master using TLS, use `--tls-replication yes`,\nand to make Redis Cluster use TLS across nodes use `--tls-cluster yes`.\n\nConnections\n-----------\n\nAll socket operations now go through a connection abstraction layer that hides\nI/O and read/write event handling from the caller.\n\n**Multi-threading I/O is not currently supported for TLS**, as a TLS connection\nneeds to do its own manipulation of AE events, which is not thread safe. The\nsolution is probably to manage independent AE loops for I/O threads and longer-term\nassociation of connections with threads. This may potentially improve\noverall performance as well.\n\nSync IO for TLS is currently implemented in a hackish way, i.e. by making the\nsocket blocking and configuring a socket-level timeout. This means the timeout\nvalue may not be very accurate, and there is significant syscall overhead.\nHowever, getting rid of syncio completely in favor of pure async work is\nprobably a better move than trying to fix that. For replication it would\nprobably not be so hard. For cluster key migration it might be more difficult,\nbut there are probably other good reasons to improve that part anyway.\n\nTo-Do List\n----------\n\n- [ ] redis-benchmark support. The current implementation is a mix of using\n  hiredis for parsing and basic networking (establishing connections), but\n  directly manipulating sockets for most actions. This will need to be cleaned\n  up for proper TLS support. The best approach is probably to migrate to hiredis\n  async mode.\n- [ ] redis-cli `--slave` and `--rdb` support.\n\nMulti-port\n----------\n\nConsider the implications of allowing TLS to be configured on a separate port,\nmaking Redis listen on multiple ports:\n\n1. Startup banner port notification\n2. Proctitle\n3. How slaves announce themselves\n4. Cluster bus port calculation\n"
  },
  {
    "path": "codecov.yml",
    "content": "coverage:\n  status:\n    patch:\n      default:\n        informational: true\n    project:\n      default:\n        informational: true\n\ncomment:\n  require_changes: false\n  require_head: false\n  require_base: false\n  layout: \"condensed_header, diff, files\"\n  hide_project_coverage: false\n  behavior: default\n\ngithub_checks:\n  annotations: false\n"
  },
  {
    "path": "deps/Makefile",
    "content": "# Redis dependency Makefile\n\nuname_S:= $(shell sh -c 'uname -s 2>/dev/null || echo not')\n\nLUA_DEBUG?=no\nLUA_COVERAGE?=no\n\nCCCOLOR=\"\\033[34m\"\nLINKCOLOR=\"\\033[34;1m\"\nSRCCOLOR=\"\\033[33m\"\nBINCOLOR=\"\\033[37;1m\"\nMAKECOLOR=\"\\033[32;1m\"\nENDCOLOR=\"\\033[0m\"\n\nDEPS_CFLAGS := $(CFLAGS)\nDEPS_LDFLAGS := $(LDFLAGS)\nCLANG := $(findstring clang,$(shell sh -c '$(CC) --version | head -1'))\n\n# MSan looks for errors related to uninitialized memory.\n# Make sure to build the dependencies with MSan as it needs all the code to be instrumented.\n# A library could be used to initialize memory, but if it's not built with -fsanitize=memory then\n# MSan doesn't know about it and will report false-positive errors when that memory is later used.\nifeq ($(SANITIZER),memory)\nifeq (clang, $(CLANG))\n\tDEPS_CFLAGS+=-fsanitize=memory -fsanitize-memory-track-origins=2 -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tDEPS_LDFLAGS+=-fsanitize=memory\nelse\n    $(error \"MemorySanitizer needs to be compiled and linked with clang. 
Please use CC=clang\")\nendif\nendif\n\ndefault:\n\t@echo \"Explicit target required\"\n\n.PHONY: default\n\n# Prerequisites target\n.make-prerequisites:\n\t@touch $@\n\n# Clean everything when CFLAGS is different\nifneq ($(shell sh -c '[ -f .make-cflags ] && cat .make-cflags || echo none'), $(CFLAGS))\n.make-cflags: distclean\n\t-(echo \"$(CFLAGS)\" > .make-cflags)\n.make-prerequisites: .make-cflags\nendif\n\n# Clean everything when LDFLAGS is different\nifneq ($(shell sh -c '[ -f .make-ldflags ] && cat .make-ldflags || echo none'), $(LDFLAGS))\n.make-ldflags: distclean\n\t-(echo \"$(LDFLAGS)\" > .make-ldflags)\n.make-prerequisites: .make-ldflags\nendif\n\ndistclean:\n\t-(cd hiredis && $(MAKE) clean) > /dev/null || true\n\t-(cd linenoise && $(MAKE) clean) > /dev/null || true\n\t-(cd lua && $(MAKE) clean) > /dev/null || true\n\t-(cd jemalloc && [ -f Makefile ] && $(MAKE) distclean) > /dev/null || true\n\t-(cd hdr_histogram && $(MAKE) clean) > /dev/null || true\n\t-(cd fpconv && $(MAKE) clean) > /dev/null || true\n\t-(cd xxhash && $(MAKE) clean) > /dev/null || true\n\t-(rm -f .make-*)\n\n.PHONY: distclean\n\nifneq (,$(filter $(BUILD_TLS),yes module))\n    HIREDIS_MAKE_FLAGS = USE_SSL=1\nendif\n\n# Special care is needed to pass additional CFLAGS/LDFLAGS to hiredis as it\n# modifies these variables in its Makefile.\nhiredis: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd hiredis && $(MAKE) static $(HIREDIS_MAKE_FLAGS) HIREDIS_CFLAGS=\"$(DEPS_CFLAGS)\" HIREDIS_LDFLAGS=\"$(DEPS_LDFLAGS)\"\n\n.PHONY: hiredis\n\nlinenoise: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd linenoise && $(MAKE) CFLAGS=\"$(DEPS_CFLAGS)\" LDFLAGS=\"$(DEPS_LDFLAGS)\"\n\n.PHONY: linenoise\n\nhdr_histogram: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd hdr_histogram && $(MAKE) CFLAGS=\"$(DEPS_CFLAGS)\" 
LDFLAGS=\"$(DEPS_LDFLAGS)\"\n\n.PHONY: hdr_histogram\n\nfpconv: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd fpconv && $(MAKE) CFLAGS=\"$(DEPS_CFLAGS)\" LDFLAGS=\"$(DEPS_LDFLAGS)\"\n\n.PHONY: fpconv\n\nXXHASH_CFLAGS = -fPIC $(DEPS_CFLAGS)\nxxhash: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd xxhash && $(MAKE) lib CFLAGS=\"$(XXHASH_CFLAGS)\" LDFLAGS=\"$(DEPS_LDFLAGS)\"\n.PHONY: xxhash\n\nifeq ($(uname_S),SunOS)\n\t# Make isinf() available\n\tLUA_CFLAGS= -D__C99FEATURES__=1\nendif\n\nLUA_CFLAGS+= -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -DLUA_USE_MKSTEMP $(DEPS_CFLAGS)\nLUA_LDFLAGS+= $(DEPS_LDFLAGS)\nifeq ($(LUA_DEBUG),yes)\n\tLUA_CFLAGS+= -O0 -g -DLUA_USE_APICHECK\nelse\n\tLUA_CFLAGS+= -O2 \nendif\nifeq ($(LUA_COVERAGE),yes)\n\tLUA_CFLAGS += -fprofile-arcs -ftest-coverage\n\tLUA_LDFLAGS += -fprofile-arcs -ftest-coverage\nendif\n\n# lua's Makefile defines AR=\"ar rcu\", which is unusual, and makes it more\n# challenging to cross-compile lua (and redis).  
These defines make it easier\n# to fit redis into cross-compilation environments, which typically set AR.\nAR=ar\nARFLAGS=rc\n\nlua: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd lua/src && $(MAKE) all CFLAGS=\"$(LUA_CFLAGS)\" MYLDFLAGS=\"$(LUA_LDFLAGS)\" AR=\"$(AR) $(ARFLAGS)\"\n\n.PHONY: lua\n\nJEMALLOC_CFLAGS=$(CFLAGS)\nJEMALLOC_LDFLAGS=$(LDFLAGS)\n\nifneq ($(DEB_HOST_GNU_TYPE),)\nJEMALLOC_CONFIGURE_OPTS += --host=$(DEB_HOST_GNU_TYPE)\nendif\n\njemalloc: .make-prerequisites\n\t@printf '%b %b\\n' $(MAKECOLOR)MAKE$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR)\n\tcd jemalloc && ./configure --disable-cxx --with-version=5.3.0-0-g0 --with-lg-quantum=3 --disable-cache-oblivious --with-jemalloc-prefix=je_ CFLAGS=\"$(JEMALLOC_CFLAGS)\" LDFLAGS=\"$(JEMALLOC_LDFLAGS)\" $(JEMALLOC_CONFIGURE_OPTS)\n\tcd jemalloc && $(MAKE) lib/libjemalloc.a\n\n.PHONY: jemalloc\n"
  },
  {
    "path": "deps/README.md",
    "content": "This directory contains all Redis dependencies, except for the libc that\nshould be provided by the operating system.\n\n* **Jemalloc** is our memory allocator, used as a replacement for libc malloc on Linux by default. It has good performance and excellent fragmentation behavior. This component is upgraded from time to time.\n* **hiredis** is the official C client library for Redis. It is used by redis-cli, redis-benchmark and Redis Sentinel. It is part of the Redis official ecosystem but is developed externally from the Redis repository, so we just upgrade it as needed.\n* **linenoise** is a readline replacement. It is developed by the authors of Redis but is managed as a separate project and updated as needed.\n* **lua** is Lua 5.1 with minor changes for security and additional libraries.\n* **hdr_histogram** is used for per-command latency tracking histograms.\n\nHow to upgrade the above dependencies\n===\n\nJemalloc\n---\n\nJemalloc is modified with changes that allow us to implement the Redis\nactive defragmentation logic. However, this feature of Redis is not mandatory,\nand Redis is able to detect whether the Jemalloc version it is compiled\nagainst supports such Redis-specific modifications. So in theory, if you\nare not interested in active defragmentation, you can replace Jemalloc\nby just following these steps:\n\n1. Remove the jemalloc directory.\n2. Substitute it with the new jemalloc source tree.\n3. Edit the Makefile located in the same directory as the README you are\n   reading, and change the --with-version in the Jemalloc configure script\n   options to the version you are using. This is required because otherwise the\n   Jemalloc configure script is broken and will not work nested in another\n   git repository.\n\nNote, however, that we change Jemalloc settings via its `configure` script using the `--with-lg-quantum` option, setting it to 3 instead of 4. 
This provides us with more size classes that better suit the Redis data structures, in order to gain memory efficiency.\n\nIf you want to upgrade Jemalloc while also providing support for\nactive defragmentation, in addition to the above steps you need to perform\nthe following additional steps:\n\n4. In the Jemalloc tree, in the file `include/jemalloc/jemalloc_macros.h.in`, make sure\n   to add `#define JEMALLOC_FRAG_HINT`.\n5. Implement the function `je_get_defrag_hint()` inside `src/jemalloc.c`. You\n   can see how it is implemented in the current Jemalloc source tree shipped\n   with Redis, and rewrite it according to the new Jemalloc internals, if they\n   have changed; otherwise you can just copy the old implementation if you are\n   upgrading to a similar version of Jemalloc.\n\n#### Updating/upgrading jemalloc\n\nThe jemalloc directory is pulled as a subtree from the upstream jemalloc github repo. To update it you should run from the project root:\n\n1. `git subtree pull --prefix deps/jemalloc https://github.com/jemalloc/jemalloc.git <version-tag> --squash`<br>\nThis should hopefully merge the local changes into the new version.\n2. In case any conflicts arise (due to our changes), you'll need to resolve them and commit.\n3. Reconfigure jemalloc:<br>\n```sh\nrm deps/jemalloc/VERSION deps/jemalloc/configure\ncd deps/jemalloc\n./autogen.sh --with-version=<version-tag>-0-g0\n```\n4. Update jemalloc's version in `deps/Makefile`: search for \"`--with-version=<old-version-tag>-0-g0`\" and update it accordingly.\n5. Commit the changes (VERSION, configure, Makefile).\n\nHiredis\n---\n\nHiredis is used by Sentinel, `redis-cli` and `redis-benchmark`. Like Redis, it uses the SDS string library, but not necessarily the same version. In order to avoid conflicts, this version has all SDS identifiers prefixed by `hi`.\n\n1. 
`git subtree pull --prefix deps/hiredis https://github.com/redis/hiredis.git <version-tag> --squash`<br>\nThis should hopefully merge the local changes into the new version.\n2. Conflicts will arise (due to our changes); you'll need to resolve them and commit.\n\nLinenoise\n---\n\nLinenoise is upgraded rarely, and only as needed. The upgrade process is trivial since\nRedis uses an unmodified version of linenoise, so to upgrade just do the\nfollowing:\n\n1. Remove the linenoise directory.\n2. Substitute it with the new linenoise source tree.\n\nLua\n---\n\nWe use Lua 5.1 and no upgrade is planned currently, since we don't want to break\nLua scripts for the sake of new Lua features: in the context of Redis Lua scripts the\ncapabilities of 5.1 are usually more than enough, the release is rock solid,\nand we definitely don't want to break old scripts.\n\nSo upgrading Lua is up to the Redis project maintainers and should be a\nmanual procedure performed by taking a diff between the different versions.\n\nCurrently we have at least the following differences between official Lua 5.1\nand our version:\n\n1. Makefile is modified to allow a compiler other than GCC.\n2. We have the implementation source code, and directly link to the following external libraries: `lua_cjson.o`, `lua_struct.o`, `lua_cmsgpack.o` and `lua_bit.o`.\n3. There is a security fix in `ldo.c`, line 498: The check for `LUA_SIGNATURE[0]` is removed in order to avoid direct bytecode execution.\n\nHdr_Histogram\n---\n\nUpdated source can be found here: https://github.com/HdrHistogram/HdrHistogram_c\nWe use a customized version based on master branch commit e4448cf6d1cd08fff519812d3b1e58bd5a94ac42.\n1. Compare all changes under the /hdr_histogram directory to upstream master commit e4448cf6d1cd08fff519812d3b1e58bd5a94ac42.\n2. Copy the updated files from the newer version onto the files in /hdr_histogram.\n3. Apply the changes from step 1 above to the updated files.\n"
  },
  {
    "path": "deps/fpconv/LICENSE.txt",
    "content": "Boost Software License - Version 1.0 - August 17th, 2003\n\nPermission is hereby granted, free of charge, to any person or organization\nobtaining a copy of the software and accompanying documentation covered by\nthis license (the \"Software\") to use, reproduce, display, distribute,\nexecute, and transmit the Software, and to prepare derivative works of the\nSoftware, and to permit third-parties to whom the Software is furnished to\ndo so, all subject to the following:\n\nThe copyright notices in the Software and this entire statement, including\nthe above license grant, this restriction and the following disclaimer,\nmust be included in all copies of the Software, in whole or in part, and\nall derivative works of the Software, unless such copies or derivative\nworks are solely in the form of machine-executable object code generated by\na source language processor.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT\nSHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\nFOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\nARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "deps/fpconv/Makefile",
    "content": "STD=\nWARN= -Wall\nOPT= -Os\nDEBUG= -g\n\nR_CFLAGS= $(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS)\nR_LDFLAGS= $(LDFLAGS)\n\nR_CC=$(CC) $(R_CFLAGS)\nR_LD=$(CC) $(R_LDFLAGS)\n\nAR= ar\nARFLAGS= rcs\n\nlibfpconv.a: fpconv_dtoa.o\n\t$(AR) $(ARFLAGS) $@ $+\n\nfpconv_dtoa.o: fpconv_dtoa.h fpconv_dtoa.c\n\n.c.o:\n\t$(R_CC) -c $<\n\nclean:\n\trm -f *.o\n\trm -f *.a\n"
  },
  {
    "path": "deps/fpconv/README.md",
    "content": "libfpconv\n\n----------------------------------------------\n\nFast and accurate double to string conversion based on Florian Loitsch's Grisu-algorithm[1].\n\nThis port contains a subset of the 'C' version available at [github.com/night-shift/fpconv](https://github.com/night-shift/fpconv).\n\n[1] https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf\n"
  },
  {
    "path": "deps/fpconv/fpconv_dtoa.c",
    "content": "/* fpconv_dtoa.c -- floating point conversion utilities.\n *\n * Fast and accurate double to string conversion based on Florian Loitsch's\n * Grisu-algorithm[1].\n *\n * [1] https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2013-2019, night-shift <as.smljk at gmail dot com>\n * Copyright (c) 2009, Florian Loitsch < florian.loitsch at inria dot fr >\n * All rights reserved.\n *\n * Boost Software License - Version 1.0 - August 17th, 2003\n *\n * Permission is hereby granted, free of charge, to any person or organization\n * obtaining a copy of the software and accompanying documentation covered by\n * this license (the \"Software\") to use, reproduce, display, distribute,\n * execute, and transmit the Software, and to prepare derivative works of the\n * Software, and to permit third-parties to whom the Software is furnished to\n * do so, all subject to the following:\n *\n * The copyright notices in the Software and this entire statement, including\n * the above license grant, this restriction and the following disclaimer,\n * must be included in all copies of the Software, in whole or in part, and\n * all derivative works of the Software, unless such copies or derivative\n * works are solely in the form of machine-executable object code generated by\n * a source language processor.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. 
IN NO EVENT\n * SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\n * FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\n * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n * DEALINGS IN THE SOFTWARE.\n */\n\n#include \"fpconv_dtoa.h\"\n\n#include \"fpconv_powers.h\"\n\n#include <stdbool.h>\n#include <string.h>\n\n#define fracmask 0x000FFFFFFFFFFFFFU\n#define expmask 0x7FF0000000000000U\n#define hiddenbit 0x0010000000000000U\n#define signmask 0x8000000000000000U\n#define expbias (1023 + 52)\n\n#define absv(n) ((n) < 0 ? -(n) : (n))\n#define minv(a, b) ((a) < (b) ? (a) : (b))\n\nstatic uint64_t tens[] = { 10000000000000000000U,\n                           1000000000000000000U,\n                           100000000000000000U,\n                           10000000000000000U,\n                           1000000000000000U,\n                           100000000000000U,\n                           10000000000000U,\n                           1000000000000U,\n                           100000000000U,\n                           10000000000U,\n                           1000000000U,\n                           100000000U,\n                           10000000U,\n                           1000000U,\n                           100000U,\n                           10000U,\n                           1000U,\n                           100U,\n                           10U,\n                           1U };\n\nstatic inline uint64_t get_dbits(double d) {\n    union\n    {\n        double dbl;\n        uint64_t i;\n    } dbl_bits = { d };\n\n    return dbl_bits.i;\n}\n\nstatic Fp build_fp(double d) {\n    uint64_t bits = get_dbits(d);\n\n    Fp fp;\n    fp.frac = bits & fracmask;\n    fp.exp = (bits & expmask) >> 52;\n\n    if (fp.exp) {\n        fp.frac += hiddenbit;\n        fp.exp -= expbias;\n\n    } else {\n        fp.exp = -expbias + 1;\n    }\n\n    return fp;\n}\n\nstatic void normalize(Fp *fp) 
{\n    while ((fp->frac & hiddenbit) == 0) {\n        fp->frac <<= 1;\n        fp->exp--;\n    }\n\n    int shift = 64 - 52 - 1;\n    fp->frac <<= shift;\n    fp->exp -= shift;\n}\n\nstatic void get_normalized_boundaries(Fp *fp, Fp *lower, Fp *upper) {\n    upper->frac = (fp->frac << 1) + 1;\n    upper->exp = fp->exp - 1;\n\n    while ((upper->frac & (hiddenbit << 1)) == 0) {\n        upper->frac <<= 1;\n        upper->exp--;\n    }\n\n    int u_shift = 64 - 52 - 2;\n\n    upper->frac <<= u_shift;\n    upper->exp = upper->exp - u_shift;\n\n    int l_shift = fp->frac == hiddenbit ? 2 : 1;\n\n    lower->frac = (fp->frac << l_shift) - 1;\n    lower->exp = fp->exp - l_shift;\n\n    lower->frac <<= lower->exp - upper->exp;\n    lower->exp = upper->exp;\n}\n\nstatic Fp multiply(Fp *a, Fp *b) {\n    const uint64_t lomask = 0x00000000FFFFFFFF;\n\n    uint64_t ah_bl = (a->frac >> 32) * (b->frac & lomask);\n    uint64_t al_bh = (a->frac & lomask) * (b->frac >> 32);\n    uint64_t al_bl = (a->frac & lomask) * (b->frac & lomask);\n    uint64_t ah_bh = (a->frac >> 32) * (b->frac >> 32);\n\n    uint64_t tmp = (ah_bl & lomask) + (al_bh & lomask) + (al_bl >> 32);\n    /* round up */\n    tmp += 1U << 31;\n\n    Fp fp = { ah_bh + (ah_bl >> 32) + (al_bh >> 32) + (tmp >> 32), a->exp + b->exp + 64 };\n\n    return fp;\n}\n\nstatic void round_digit(char *digits,\n                        int ndigits,\n                        uint64_t delta,\n                        uint64_t rem,\n                        uint64_t kappa,\n                        uint64_t frac) {\n    while (rem < frac && delta - rem >= kappa &&\n           (rem + kappa < frac || frac - rem > rem + kappa - frac)) {\n        digits[ndigits - 1]--;\n        rem += kappa;\n    }\n}\n\nstatic int generate_digits(Fp *fp, Fp *upper, Fp *lower, char *digits, int *K) {\n    uint64_t wfrac = upper->frac - fp->frac;\n    uint64_t delta = upper->frac - lower->frac;\n\n    Fp one;\n    one.frac = 1ULL << -upper->exp;\n    one.exp = 
upper->exp;\n\n    uint64_t part1 = upper->frac >> -one.exp;\n    uint64_t part2 = upper->frac & (one.frac - 1);\n\n    int idx = 0, kappa = 10;\n    uint64_t *divp;\n    /* 1000000000 */\n    for (divp = tens + 10; kappa > 0; divp++) {\n        uint64_t div = *divp;\n        unsigned digit = part1 / div;\n\n        if (digit || idx) {\n            digits[idx++] = digit + '0';\n        }\n\n        part1 -= digit * div;\n        kappa--;\n\n        uint64_t tmp = (part1 << -one.exp) + part2;\n        if (tmp <= delta) {\n            *K += kappa;\n            round_digit(digits, idx, delta, tmp, div << -one.exp, wfrac);\n\n            return idx;\n        }\n    }\n\n    /* 10 */\n    uint64_t *unit = tens + 18;\n\n    while (true) {\n        part2 *= 10;\n        delta *= 10;\n        kappa--;\n\n        unsigned digit = part2 >> -one.exp;\n        if (digit || idx) {\n            digits[idx++] = digit + '0';\n        }\n\n        part2 &= one.frac - 1;\n        if (part2 < delta) {\n            *K += kappa;\n            round_digit(digits, idx, delta, part2, one.frac, wfrac * *unit);\n\n            return idx;\n        }\n\n        unit--;\n    }\n}\n\nstatic int grisu2(double d, char *digits, int *K) {\n    Fp w = build_fp(d);\n\n    Fp lower, upper;\n    get_normalized_boundaries(&w, &lower, &upper);\n\n    normalize(&w);\n\n    int k;\n    Fp cp = find_cachedpow10(upper.exp, &k);\n\n    w = multiply(&w, &cp);\n    upper = multiply(&upper, &cp);\n    lower = multiply(&lower, &cp);\n\n    lower.frac++;\n    upper.frac--;\n\n    *K = -k;\n\n    return generate_digits(&w, &upper, &lower, digits, K);\n}\n\nstatic int emit_digits(char *digits, int ndigits, char *dest, int K, bool neg) {\n    int exp = absv(K + ndigits - 1);\n\n    /* write plain integer */\n    if (K >= 0 && (exp < (ndigits + 7))) {\n        memcpy(dest, digits, ndigits);\n        memset(dest + ndigits, '0', K);\n\n        return ndigits + K;\n    }\n\n    /* write decimal w/o scientific notation 
*/\n    if (K < 0 && (K > -7 || exp < 4)) {\n        int offset = ndigits - absv(K);\n        /* fp < 1.0 -> write leading zero */\n        if (offset <= 0) {\n            offset = -offset;\n            dest[0] = '0';\n            dest[1] = '.';\n            memset(dest + 2, '0', offset);\n            memcpy(dest + offset + 2, digits, ndigits);\n\n            return ndigits + 2 + offset;\n\n            /* fp > 1.0 */\n        } else {\n            memcpy(dest, digits, offset);\n            dest[offset] = '.';\n            memcpy(dest + offset + 1, digits + offset, ndigits - offset);\n\n            return ndigits + 1;\n        }\n    }\n\n    /* write decimal w/ scientific notation */\n    ndigits = minv(ndigits, 18 - neg);\n\n    int idx = 0;\n    dest[idx++] = digits[0];\n\n    if (ndigits > 1) {\n        dest[idx++] = '.';\n        memcpy(dest + idx, digits + 1, ndigits - 1);\n        idx += ndigits - 1;\n    }\n\n    dest[idx++] = 'e';\n\n    char sign = K + ndigits - 1 < 0 ? '-' : '+';\n    dest[idx++] = sign;\n\n    int cent = 0;\n\n    if (exp > 99) {\n        cent = exp / 100;\n        dest[idx++] = cent + '0';\n        exp -= cent * 100;\n    }\n    if (exp > 9) {\n        int dec = exp / 10;\n        dest[idx++] = dec + '0';\n        exp -= dec * 10;\n\n    } else if (cent) {\n        dest[idx++] = '0';\n    }\n\n    dest[idx++] = exp % 10 + '0';\n\n    return idx;\n}\n\nstatic int filter_special(double fp, char *dest) {\n    if (fp == 0.0) {\n        dest[0] = '0';\n        return 1;\n    }\n\n    uint64_t bits = get_dbits(fp);\n\n    bool nan = (bits & expmask) == expmask;\n\n    if (!nan) {\n        return 0;\n    }\n\n    if (bits & fracmask) {\n        dest[0] = 'n';\n        dest[1] = 'a';\n        dest[2] = 'n';\n\n    } else {\n        dest[0] = 'i';\n        dest[1] = 'n';\n        dest[2] = 'f';\n    }\n\n    return 3;\n}\n\nint fpconv_dtoa(double d, char dest[24]) {\n    char digits[18];\n\n    int str_len = 0;\n    bool neg = false;\n\n    if 
(get_dbits(d) & signmask) {\n        dest[0] = '-';\n        str_len++;\n        neg = true;\n    }\n\n    int spec = filter_special(d, dest + str_len);\n\n    if (spec) {\n        return str_len + spec;\n    }\n\n    int K = 0;\n    int ndigits = grisu2(d, digits, &K);\n\n    str_len += emit_digits(digits, ndigits, dest + str_len, K, neg);\n\n    return str_len;\n}\n"
  },
  {
    "path": "deps/fpconv/fpconv_dtoa.h",
    "content": "/* fpconv_dtoa.h -- floating point conversion utilities.\n *\n * Fast and accurate double to string conversion based on Florian Loitsch's\n * Grisu-algorithm[1].\n *\n * [1] https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2013-2019, night-shift <as.smljk at gmail dot com>\n * Copyright (c) 2009, Florian Loitsch < florian.loitsch at inria dot fr >\n * All rights reserved.\n *\n * Boost Software License - Version 1.0 - August 17th, 2003\n *\n * Permission is hereby granted, free of charge, to any person or organization\n * obtaining a copy of the software and accompanying documentation covered by\n * this license (the \"Software\") to use, reproduce, display, distribute,\n * execute, and transmit the Software, and to prepare derivative works of the\n * Software, and to permit third-parties to whom the Software is furnished to\n * do so, all subject to the following:\n *\n * The copyright notices in the Software and this entire statement, including\n * the above license grant, this restriction and the following disclaimer,\n * must be included in all copies of the Software, in whole or in part, and\n * all derivative works of the Software, unless such copies or derivative\n * works are solely in the form of machine-executable object code generated by\n * a source language processor.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. 
IN NO EVENT\n * SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\n * FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\n * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n * DEALINGS IN THE SOFTWARE.\n */\n\n#ifndef FPCONV_DTOA_H\n#define FPCONV_DTOA_H\n\nint fpconv_dtoa(double fp, char dest[24]);\n\n#endif\n\n/* [1] http://florian.loitsch.com/publications/dtoa-pldi2010.pdf */\n"
  },
  {
    "path": "deps/fpconv/fpconv_powers.h",
    "content": "/* fpconv_powers.h -- floating point conversion utilities.\n *\n * Fast and accurate double to string conversion based on Florian Loitsch's\n * Grisu-algorithm[1].\n *\n * [1] https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2021, Redis Labs\n * Copyright (c) 2013-2019, night-shift <as.smljk at gmail dot com>\n * Copyright (c) 2009, Florian Loitsch < florian.loitsch at inria dot fr >\n * All rights reserved.\n *\n * Boost Software License - Version 1.0 - August 17th, 2003\n *\n * Permission is hereby granted, free of charge, to any person or organization\n * obtaining a copy of the software and accompanying documentation covered by\n * this license (the \"Software\") to use, reproduce, display, distribute,\n * execute, and transmit the Software, and to prepare derivative works of the\n * Software, and to permit third-parties to whom the Software is furnished to\n * do so, all subject to the following:\n *\n * The copyright notices in the Software and this entire statement, including\n * the above license grant, this restriction and the following disclaimer,\n * must be included in all copies of the Software, in whole or in part, and\n * all derivative works of the Software, unless such copies or derivative\n * works are solely in the form of machine-executable object code generated by\n * a source language processor.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. 
IN NO EVENT\n * SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE\n * FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,\n * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n * DEALINGS IN THE SOFTWARE.\n */\n\n#include <stdint.h>\n\n#define npowers     87\n#define steppowers  8\n#define firstpower -348 /* 10 ^ -348 */\n\n#define expmax     -32\n#define expmin     -60\n\n\ntypedef struct Fp {\n    uint64_t frac;\n    int exp;\n} Fp;\n\nstatic Fp powers_ten[] = {\n    { 18054884314459144840U, -1220 }, { 13451937075301367670U, -1193 },\n    { 10022474136428063862U, -1166 }, { 14934650266808366570U, -1140 },\n    { 11127181549972568877U, -1113 }, { 16580792590934885855U, -1087 },\n    { 12353653155963782858U, -1060 }, { 18408377700990114895U, -1034 },\n    { 13715310171984221708U, -1007 }, { 10218702384817765436U, -980 },\n    { 15227053142812498563U, -954 },  { 11345038669416679861U, -927 },\n    { 16905424996341287883U, -901 },  { 12595523146049147757U, -874 },\n    { 9384396036005875287U,  -847 },  { 13983839803942852151U, -821 },\n    { 10418772551374772303U, -794 },  { 15525180923007089351U, -768 },\n    { 11567161174868858868U, -741 },  { 17236413322193710309U, -715 },\n    { 12842128665889583758U, -688 },  { 9568131466127621947U,  -661 },\n    { 14257626930069360058U, -635 },  { 10622759856335341974U, -608 },\n    { 15829145694278690180U, -582 },  { 11793632577567316726U, -555 },\n    { 17573882009934360870U, -529 },  { 13093562431584567480U, -502 },\n    { 9755464219737475723U,  -475 },  { 14536774485912137811U, -449 },\n    { 10830740992659433045U, -422 },  { 16139061738043178685U, -396 },\n    { 12024538023802026127U, -369 },  { 17917957937422433684U, -343 },\n    { 13349918974505688015U, -316 },  { 9946464728195732843U,  -289 },\n    { 14821387422376473014U, -263 },  { 11042794154864902060U, -236 },\n    { 16455045573212060422U, -210 },  { 12259964326927110867U, -183 },\n    { 
18268770466636286478U, -157 },  { 13611294676837538539U, -130 },\n    { 10141204801825835212U, -103 },  { 15111572745182864684U, -77 },\n    { 11258999068426240000U, -50 },   { 16777216000000000000U, -24 },\n    { 12500000000000000000U,   3 },   { 9313225746154785156U,   30 },\n    { 13877787807814456755U,  56 },   { 10339757656912845936U,  83 },\n    { 15407439555097886824U, 109 },   { 11479437019748901445U, 136 },\n    { 17105694144590052135U, 162 },   { 12744735289059618216U, 189 },\n    { 9495567745759798747U,  216 },   { 14149498560666738074U, 242 },\n    { 10542197943230523224U, 269 },   { 15709099088952724970U, 295 },\n    { 11704190886730495818U, 322 },   { 17440603504673385349U, 348 },\n    { 12994262207056124023U, 375 },   { 9681479787123295682U,  402 },\n    { 14426529090290212157U, 428 },   { 10748601772107342003U, 455 },\n    { 16016664761464807395U, 481 },   { 11933345169920330789U, 508 },\n    { 17782069995880619868U, 534 },   { 13248674568444952270U, 561 },\n    { 9871031767461413346U,  588 },   { 14708983551653345445U, 614 },\n    { 10959046745042015199U, 641 },   { 16330252207878254650U, 667 },\n    { 12166986024289022870U, 694 },   { 18130221999122236476U, 720 },\n    { 13508068024458167312U, 747 },   { 10064294952495520794U, 774 },\n    { 14996968138956309548U, 800 },   { 11173611982879273257U, 827 },\n    { 16649979327439178909U, 853 },   { 12405201291620119593U, 880 },\n    { 9242595204427927429U,  907 },   { 13772540099066387757U, 933 },\n    { 10261342003245940623U, 960 },   { 15290591125556738113U, 986 },\n    { 11392378155556871081U, 1013 },  { 16975966327722178521U, 1039 },\n    { 12648080533535911531U, 1066 }\n};\n\n/**\n *  Grisu needs a cache of precomputed powers-of-ten.\n *  This function, given an exponent and an integer k\n *  return the normalized floating-point approximation of the power of 10.\n * @param exp\n * @param k\n * @return\n */\nstatic Fp find_cachedpow10(int exp, int* k)\n{\n    const double one_log_ten = 
0.30102999566398114;\n\n    const int approx = -(exp + npowers) * one_log_ten;\n    int idx = (approx - firstpower) / steppowers;\n\n    while(1) {\n        int current = exp + powers_ten[idx].exp + 64;\n\n        if(current < expmin) {\n            idx++;\n            continue;\n        }\n\n        if(current > expmax) {\n            idx--;\n            continue;\n        }\n\n        *k = (firstpower + idx * steppowers);\n\n        return powers_ten[idx];\n    }\n}\n"
  },
  {
    "path": "deps/hdr_histogram/COPYING.txt",
    "content": "Creative Commons Legal Code\n\nCC0 1.0 Universal\n\n    CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE\n    LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN\n    ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS\n    INFORMATION ON AN \"AS-IS\" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES\n    REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS\n    PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM\n    THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED\n    HEREUNDER.\n\nStatement of Purpose\n\nThe laws of most jurisdictions throughout the world automatically confer\nexclusive Copyright and Related Rights (defined below) upon the creator\nand subsequent owner(s) (each and all, an \"owner\") of an original work of\nauthorship and/or a database (each, a \"Work\").\n\nCertain owners wish to permanently relinquish those rights to a Work for\nthe purpose of contributing to a commons of creative, cultural and\nscientific works (\"Commons\") that the public can reliably and without fear\nof later claims of infringement build upon, modify, incorporate in other\nworks, reuse and redistribute as freely as possible in any form whatsoever\nand for any purposes, including without limitation commercial purposes.\nThese owners may contribute to the Commons to promote the ideal of a free\nculture and the further production of creative, cultural and scientific\nworks, or to gain reputation or greater distribution for their Work in\npart through the use and efforts of others.\n\nFor these and/or other purposes and motivations, and without any\nexpectation of additional consideration or compensation, the person\nassociating CC0 with a Work (the \"Affirmer\"), to the extent that he or she\nis an owner of Copyright and Related Rights in the Work, voluntarily\nelects to apply CC0 to the Work and publicly distribute the Work under its\nterms, with knowledge of his or her Copyright 
and Related Rights in the\nWork and the meaning and intended legal effect of CC0 on those rights.\n\n1. Copyright and Related Rights. A Work made available under CC0 may be\nprotected by copyright and related or neighboring rights (\"Copyright and\nRelated Rights\"). Copyright and Related Rights include, but are not\nlimited to, the following:\n\n  i. the right to reproduce, adapt, distribute, perform, display,\n     communicate, and translate a Work;\n ii. moral rights retained by the original author(s) and/or performer(s);\niii. publicity and privacy rights pertaining to a person's image or\n     likeness depicted in a Work;\n iv. rights protecting against unfair competition in regards to a Work,\n     subject to the limitations in paragraph 4(a), below;\n  v. rights protecting the extraction, dissemination, use and reuse of data\n     in a Work;\n vi. database rights (such as those arising under Directive 96/9/EC of the\n     European Parliament and of the Council of 11 March 1996 on the legal\n     protection of databases, and under any national implementation\n     thereof, including any amended or successor version of such\n     directive); and\nvii. other similar, equivalent or corresponding rights throughout the\n     world based on applicable law or treaty, and any national\n     implementations thereof.\n\n2. Waiver. 
To the greatest extent permitted by, but not in contravention\nof, applicable law, Affirmer hereby overtly, fully, permanently,\nirrevocably and unconditionally waives, abandons, and surrenders all of\nAffirmer's Copyright and Related Rights and associated claims and causes\nof action, whether now known or unknown (including existing as well as\nfuture claims and causes of action), in the Work (i) in all territories\nworldwide, (ii) for the maximum duration provided by applicable law or\ntreaty (including future time extensions), (iii) in any current or future\nmedium and for any number of copies, and (iv) for any purpose whatsoever,\nincluding without limitation commercial, advertising or promotional\npurposes (the \"Waiver\"). Affirmer makes the Waiver for the benefit of each\nmember of the public at large and to the detriment of Affirmer's heirs and\nsuccessors, fully intending that such Waiver shall not be subject to\nrevocation, rescission, cancellation, termination, or any other legal or\nequitable action to disrupt the quiet enjoyment of the Work by the public\nas contemplated by Affirmer's express Statement of Purpose.\n\n3. Public License Fallback. Should any part of the Waiver for any reason\nbe judged legally invalid or ineffective under applicable law, then the\nWaiver shall be preserved to the maximum extent permitted taking into\naccount Affirmer's express Statement of Purpose. 
In addition, to the\nextent the Waiver is so judged Affirmer hereby grants to each affected\nperson a royalty-free, non transferable, non sublicensable, non exclusive,\nirrevocable and unconditional license to exercise Affirmer's Copyright and\nRelated Rights in the Work (i) in all territories worldwide, (ii) for the\nmaximum duration provided by applicable law or treaty (including future\ntime extensions), (iii) in any current or future medium and for any number\nof copies, and (iv) for any purpose whatsoever, including without\nlimitation commercial, advertising or promotional purposes (the\n\"License\"). The License shall be deemed effective as of the date CC0 was\napplied by Affirmer to the Work. Should any part of the License for any\nreason be judged legally invalid or ineffective under applicable law, such\npartial invalidity or ineffectiveness shall not invalidate the remainder\nof the License, and in such case Affirmer hereby affirms that he or she\nwill not (i) exercise any of his or her remaining Copyright and Related\nRights in the Work or (ii) assert any associated claims and causes of\naction with respect to the Work, in either case contrary to Affirmer's\nexpress Statement of Purpose.\n\n4. Limitations and Disclaimers.\n\n a. No trademark or patent rights held by Affirmer are waived, abandoned,\n    surrendered, licensed or otherwise affected by this document.\n b. Affirmer offers the Work as-is and makes no representations or\n    warranties of any kind concerning the Work, express, implied,\n    statutory or otherwise, including without limitation warranties of\n    title, merchantability, fitness for a particular purpose, non\n    infringement, or the absence of latent or other defects, accuracy, or\n    the present or absence of errors, whether or not discoverable, all to\n    the greatest extent permissible under applicable law.\n c. 
Affirmer disclaims responsibility for clearing rights of other persons\n    that may apply to the Work or any use thereof, including without\n    limitation any person's Copyright and Related Rights in the Work.\n    Further, Affirmer disclaims responsibility for obtaining any necessary\n    consents, permissions or other rights required for any use of the\n    Work.\n d. Affirmer understands and acknowledges that Creative Commons is not a\n    party to this document and has no duty or obligation with respect to\n    this CC0 or use of the Work.\n"
  },
  {
    "path": "deps/hdr_histogram/LICENSE.txt",
    "content": "The code in this repository was written by Gil Tene, Michael Barker,\nand Matt Warren, and released to the public domain, as explained at\nhttp://creativecommons.org/publicdomain/zero/1.0/\n\nFor users of this code who wish to consume it under the \"BSD\" license\nrather than under the public domain or CC0 contribution text mentioned\nabove, the code found under this directory is *also* provided under the\nfollowing license (commonly referred to as the BSD 2-Clause License). This\nlicense does not detract from the above stated release of the code into\nthe public domain, and simply represents an additional license granted by\nthe Author.\n\n-----------------------------------------------------------------------------\n** Beginning of \"BSD 2-Clause License\" text. **\n\n Copyright (c) 2012, 2013, 2014 Gil Tene\n Copyright (c) 2014 Michael Barker\n Copyright (c) 2014 Matt Warren\n All rights reserved.\n\n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n\n 1. Redistributions of source code must retain the above copyright notice,\n    this list of conditions and the following disclaimer.\n\n 2. Redistributions in binary form must reproduce the above copyright notice,\n    this list of conditions and the following disclaimer in the documentation\n    and/or other materials provided with the distribution.\n\n THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\n LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "deps/hdr_histogram/Makefile",
    "content": "STD= -std=c99\nWARN= -Wall\nOPT= -Os\n\nR_CFLAGS= $(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) -DHDR_MALLOC_INCLUDE=\\\"hdr_redis_malloc.h\\\"\nR_LDFLAGS= $(LDFLAGS)\nDEBUG= -g\n\nR_CC=$(CC) $(R_CFLAGS)\nR_LD=$(CC) $(R_LDFLAGS)\n\nAR= ar\nARFLAGS= rcs\n\nlibhdrhistogram.a: hdr_histogram.o\n\t$(AR) $(ARFLAGS) $@ $+\n\nhdr_histogram.o: hdr_histogram.h hdr_histogram.c\n\n.c.o:\n\t$(R_CC) -c  $< \n\nclean:\n\trm -f *.o\n\trm -f *.a\n\n\n"
  },
  {
    "path": "deps/hdr_histogram/README.md",
    "content": "HdrHistogram_c: 'C' port of High Dynamic Range (HDR) Histogram\n\nHdrHistogram\n----------------------------------------------\n\n[![Gitter chat](https://badges.gitter.im/HdrHistogram/HdrHistogram.png)](https://gitter.im/HdrHistogram/HdrHistogram)\n\nThis port contains a subset of the functionality supported by the Java\nimplementation.  The current supported features are:\n\n* Standard histogram with 64 bit counts (32/16 bit counts not supported)\n* All iterator types (all values, recorded, percentiles, linear, logarithmic)\n* Histogram serialisation (encoding version 1.2, decoding 1.0-1.2)\n* Reader/writer phaser and interval recorder\n\nFeatures not supported, but planned\n\n* Auto-resizing of histograms\n\nFeatures unlikely to be implemented\n\n* Double histograms\n* Atomic/Concurrent histograms\n* 16/32 bit histograms\n\n# Simple Tutorial\n\n## Recording values\n\n```C\n#include <hdr_histogram.h>\n\nstruct hdr_histogram* histogram;\n\n// Initialise the histogram\nhdr_init(\n    1,  // Minimum value\n    INT64_C(3600000000),  // Maximum value\n    3,  // Number of significant figures\n    &histogram)  // Pointer to initialise\n\n// Record value\nhdr_record_value(\n    histogram,  // Histogram to record to\n    value)  // Value to record\n\n// Record value n times\nhdr_record_values(\n    histogram,  // Histogram to record to\n    value,  // Value to record\n    10)  // Record value 10 times\n\n// Record value with correction for co-ordinated omission.\nhdr_record_corrected_value(\n    histogram,  // Histogram to record to\n    value,  // Value to record\n    1000)  // Record with expected interval of 1000.\n\n// Print out the values of the histogram\nhdr_percentiles_print(\n    histogram,\n    stdout,  // File to write to\n    5,  // Granularity of printed values\n    1.0,  // Multiplier for results\n    CLASSIC);  // Format CLASSIC/CSV supported.\n```\n\n## More examples\n\nFor more detailed examples of recording and logging results look at 
the\n[hdr_decoder](examples/hdr_decoder.c)\nand [hiccup](examples/hiccup.c)\nexamples.  You can run hiccup and decoder\nand pipe the results of one into the other.\n\n```\n$ ./examples/hiccup | ./examples/hdr_decoder\n```\n"
  },
  {
    "path": "deps/hdr_histogram/hdr_atomic.h",
    "content": "/**\n * hdr_atomic.h\n * Written by Philip Orwig and released to the public domain,\n * as explained at http://creativecommons.org/publicdomain/zero/1.0/\n */\n\n#ifndef HDR_ATOMIC_H__\n#define HDR_ATOMIC_H__\n\n\n#if defined(_MSC_VER)\n\n#include <stdint.h>\n#include <intrin.h>\n#include <stdbool.h>\n\nstatic void __inline * hdr_atomic_load_pointer(void** pointer)\n{\n\t_ReadBarrier();\n\treturn *pointer;\n}\n\nstatic void hdr_atomic_store_pointer(void** pointer, void* value)\n{\n\t_WriteBarrier();\n\t*pointer = value;\n}\n\nstatic int64_t __inline hdr_atomic_load_64(int64_t* field)\n{ \n\t_ReadBarrier();\n\treturn *field;\n}\n\nstatic void __inline hdr_atomic_store_64(int64_t* field, int64_t value)\n{\n\t_WriteBarrier();\n\t*field = value;\n}\n\nstatic int64_t __inline hdr_atomic_exchange_64(volatile int64_t* field, int64_t value)\n{\n#if defined(_WIN64)\n    return _InterlockedExchange64(field, value);\n#else\n    int64_t comparand;\n    int64_t initial_value = *field;\n    do\n    {\n        comparand = initial_value;\n        initial_value = _InterlockedCompareExchange64(field, value, comparand);\n    }\n    while (comparand != initial_value);\n\n    return initial_value;\n#endif\n}\n\nstatic int64_t __inline hdr_atomic_add_fetch_64(volatile int64_t* field, int64_t value)\n{\n#if defined(_WIN64)\n\treturn _InterlockedExchangeAdd64(field, value) + value;\n#else\n    int64_t comparand;\n    int64_t initial_value = *field;\n    do\n    {\n        comparand = initial_value;\n        initial_value = _InterlockedCompareExchange64(field, comparand + value, comparand);\n    }\n    while (comparand != initial_value);\n\n    return initial_value + value;\n#endif\n}\n\nstatic bool __inline hdr_atomic_compare_exchange_64(volatile int64_t* field, int64_t* expected, int64_t desired)\n{\n    return *expected == _InterlockedCompareExchange64(field, desired, *expected);\n}\n\n#elif defined(__ATOMIC_SEQ_CST)\n\n#define hdr_atomic_load_pointer(x) 
__atomic_load_n(x, __ATOMIC_SEQ_CST)\n#define hdr_atomic_store_pointer(f,v) __atomic_store_n(f,v, __ATOMIC_SEQ_CST)\n#define hdr_atomic_load_64(x) __atomic_load_n(x, __ATOMIC_SEQ_CST)\n#define hdr_atomic_store_64(f,v) __atomic_store_n(f,v, __ATOMIC_SEQ_CST)\n#define hdr_atomic_exchange_64(f,i) __atomic_exchange_n(f,i, __ATOMIC_SEQ_CST)\n#define hdr_atomic_add_fetch_64(field, value) __atomic_add_fetch(field, value, __ATOMIC_SEQ_CST)\n#define hdr_atomic_compare_exchange_64(field, expected, desired) __atomic_compare_exchange_n(field, expected, desired, false, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)\n\n#elif defined(__x86_64__)\n\n#include <stdint.h>\n#include <stdbool.h>\n\nstatic inline void* hdr_atomic_load_pointer(void** pointer)\n{\n   void* p =  *pointer;\n\tasm volatile (\"\" ::: \"memory\");\n\treturn p;\n}\n\nstatic inline void hdr_atomic_store_pointer(void** pointer, void* value)\n{\n    asm volatile (\"lock; xchgq %0, %1\" : \"+q\" (value), \"+m\" (*pointer));\n}\n\nstatic inline int64_t hdr_atomic_load_64(int64_t* field)\n{\n    int64_t i = *field;\n\tasm volatile (\"\" ::: \"memory\");\n\treturn i;\n}\n\nstatic inline void hdr_atomic_store_64(int64_t* field, int64_t value)\n{\n    asm volatile (\"lock; xchgq %0, %1\" : \"+q\" (value), \"+m\" (*field));\n}\n\nstatic inline int64_t hdr_atomic_exchange_64(volatile int64_t* field, int64_t value)\n{\n    int64_t result = 0;\n    asm volatile (\"lock; xchgq %1, %2\" : \"=r\" (result), \"+q\" (value), \"+m\" (*field));\n    return result;\n}\n\nstatic inline int64_t hdr_atomic_add_fetch_64(volatile int64_t* field, int64_t value)\n{\n    return __sync_add_and_fetch(field, value);\n}\n\nstatic inline bool hdr_atomic_compare_exchange_64(volatile int64_t* field, int64_t* expected, int64_t desired)\n{\n    int64_t original;\n    asm volatile( \"lock; cmpxchgq %2, %1\" : \"=a\"(original), \"+m\"(*field) : \"q\"(desired), \"0\"(*expected));\n    return original == *expected;\n}\n\n#else\n\n#error \"Unable to determine 
atomic operations for your platform\"\n\n#endif\n\n#endif /* HDR_ATOMIC_H__ */\n"
  },
  {
    "path": "deps/hdr_histogram/hdr_histogram.c",
    "content": "/**\n * hdr_histogram.c\n * Written by Michael Barker and released to the public domain,\n * as explained at http://creativecommons.org/publicdomain/zero/1.0/\n */\n\n#include <stdlib.h>\n#include <stdbool.h>\n#include <math.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdint.h>\n#include <errno.h>\n#include <inttypes.h>\n\n#include \"hdr_histogram.h\"\n#include \"hdr_tests.h\"\n#include \"hdr_atomic.h\"\n\n#ifndef HDR_MALLOC_INCLUDE\n#define HDR_MALLOC_INCLUDE \"hdr_malloc.h\"\n#endif\n\n#include HDR_MALLOC_INCLUDE\n\n/*  ######   #######  ##     ## ##    ## ########  ######  */\n/* ##    ## ##     ## ##     ## ###   ##    ##    ##    ## */\n/* ##       ##     ## ##     ## ####  ##    ##    ##       */\n/* ##       ##     ## ##     ## ## ## ##    ##     ######  */\n/* ##       ##     ## ##     ## ##  ####    ##          ## */\n/* ##    ## ##     ## ##     ## ##   ###    ##    ##    ## */\n/*  ######   #######   #######  ##    ##    ##     ######  */\n\nstatic int32_t normalize_index(const struct hdr_histogram* h, int32_t index)\n{\n    int32_t normalized_index;\n    int32_t adjustment = 0;\n    if (h->normalizing_index_offset == 0)\n    {\n        return index;\n    }\n\n    normalized_index = index - h->normalizing_index_offset;\n\n    if (normalized_index < 0)\n    {\n        adjustment = h->counts_len;\n    }\n    else if (normalized_index >= h->counts_len)\n    {\n        adjustment = -h->counts_len;\n    }\n\n    return normalized_index + adjustment;\n}\n\nstatic int64_t counts_get_direct(const struct hdr_histogram* h, int32_t index)\n{\n    return h->counts[index];\n}\n\nstatic int64_t counts_get_normalised(const struct hdr_histogram* h, int32_t index)\n{\n    return counts_get_direct(h, normalize_index(h, index));\n}\n\nstatic void counts_inc_normalised(\n    struct hdr_histogram* h, int32_t index, int64_t value)\n{\n    int32_t normalised_index = normalize_index(h, index);\n    h->counts[normalised_index] += value;\n    
h->total_count += value;\n}\n\nstatic void counts_inc_normalised_atomic(\n    struct hdr_histogram* h, int32_t index, int64_t value)\n{\n    int32_t normalised_index = normalize_index(h, index);\n\n    hdr_atomic_add_fetch_64(&h->counts[normalised_index], value);\n    hdr_atomic_add_fetch_64(&h->total_count, value);\n}\n\nstatic void update_min_max(struct hdr_histogram* h, int64_t value)\n{\n    h->min_value = (value < h->min_value && value != 0) ? value : h->min_value;\n    h->max_value = (value > h->max_value) ? value : h->max_value;\n}\n\nstatic void update_min_max_atomic(struct hdr_histogram* h, int64_t value)\n{\n    int64_t current_min_value;\n    int64_t current_max_value;\n    do\n    {\n        current_min_value = hdr_atomic_load_64(&h->min_value);\n\n        if (0 == value || current_min_value <= value)\n        {\n            break;\n        }\n    }\n    while (!hdr_atomic_compare_exchange_64(&h->min_value, &current_min_value, value));\n\n    do\n    {\n        current_max_value = hdr_atomic_load_64(&h->max_value);\n\n        if (value <= current_max_value)\n        {\n            break;\n        }\n    }\n    while (!hdr_atomic_compare_exchange_64(&h->max_value, &current_max_value, value));\n}\n\n\n/* ##     ## ######## #### ##       #### ######## ##    ## */\n/* ##     ##    ##     ##  ##        ##     ##     ##  ##  */\n/* ##     ##    ##     ##  ##        ##     ##      ####   */\n/* ##     ##    ##     ##  ##        ##     ##       ##    */\n/* ##     ##    ##     ##  ##        ##     ##       ##    */\n/* ##     ##    ##     ##  ##        ##     ##       ##    */\n/*  #######     ##    #### ######## ####    ##       ##    */\n\nstatic int64_t power(int64_t base, int64_t exp)\n{\n    int64_t result = 1;\n    while(exp)\n    {\n        result *= base; exp--;\n    }\n    return result;\n}\n\n#if defined(_MSC_VER)\n#   if defined(_WIN64)\n#       pragma intrinsic(_BitScanReverse64)\n#   else\n#       pragma intrinsic(_BitScanReverse)\n#   
endif\n#endif\n\nstatic int32_t count_leading_zeros_64(int64_t value)\n{\n#if defined(_MSC_VER)\n    uint32_t leading_zero = 0;\n#if defined(_WIN64)\n    _BitScanReverse64(&leading_zero, value);\n#else\n    uint32_t high = value >> 32;\n    if  (_BitScanReverse(&leading_zero, high))\n    {\n        leading_zero += 32;\n    }\n    else\n    {\n        uint32_t low = value & 0x00000000FFFFFFFF;\n        _BitScanReverse(&leading_zero, low);\n    }\n#endif\n    return 63 - leading_zero; /* smallest power of 2 containing value */\n#else\n    return __builtin_clzll(value); /* smallest power of 2 containing value */\n#endif\n}\n\nstatic int32_t get_bucket_index(const struct hdr_histogram* h, int64_t value)\n{\n    int32_t pow2ceiling = 64 - count_leading_zeros_64(value | h->sub_bucket_mask); /* smallest power of 2 containing value */\n    return pow2ceiling - h->unit_magnitude - (h->sub_bucket_half_count_magnitude + 1);\n}\n\nstatic int32_t get_sub_bucket_index(int64_t value, int32_t bucket_index, int32_t unit_magnitude)\n{\n    return (int32_t)(value >> (bucket_index + unit_magnitude));\n}\n\nstatic int32_t counts_index(const struct hdr_histogram* h, int32_t bucket_index, int32_t sub_bucket_index)\n{\n    /* Calculate the index for the first entry in the bucket: */\n    /* (The following is the equivalent of ((bucket_index + 1) * subBucketHalfCount) ): */\n    int32_t bucket_base_index = (bucket_index + 1) << h->sub_bucket_half_count_magnitude;\n    /* Calculate the offset in the bucket: */\n    int32_t offset_in_bucket = sub_bucket_index - h->sub_bucket_half_count;\n    /* The following is the equivalent of ((sub_bucket_index  - subBucketHalfCount) + bucketBaseIndex; */\n    return bucket_base_index + offset_in_bucket;\n}\n\nstatic int64_t value_from_index(int32_t bucket_index, int32_t sub_bucket_index, int32_t unit_magnitude)\n{\n    return ((int64_t) sub_bucket_index) << (bucket_index + unit_magnitude);\n}\n\nint32_t counts_index_for(const struct hdr_histogram* h, 
int64_t value)\n{\n    int32_t bucket_index     = get_bucket_index(h, value);\n    int32_t sub_bucket_index = get_sub_bucket_index(value, bucket_index, h->unit_magnitude);\n\n    return counts_index(h, bucket_index, sub_bucket_index);\n}\n\nint64_t hdr_value_at_index(const struct hdr_histogram *h, int32_t index)\n{\n    int32_t bucket_index = (index >> h->sub_bucket_half_count_magnitude) - 1;\n    int32_t sub_bucket_index = (index & (h->sub_bucket_half_count - 1)) + h->sub_bucket_half_count;\n\n    if (bucket_index < 0)\n    {\n        sub_bucket_index -= h->sub_bucket_half_count;\n        bucket_index = 0;\n    }\n\n    return value_from_index(bucket_index, sub_bucket_index, h->unit_magnitude);\n}\n\nint64_t hdr_size_of_equivalent_value_range(const struct hdr_histogram* h, int64_t value)\n{\n    int32_t bucket_index     = get_bucket_index(h, value);\n    int32_t sub_bucket_index = get_sub_bucket_index(value, bucket_index, h->unit_magnitude);\n    int32_t adjusted_bucket  = (sub_bucket_index >= h->sub_bucket_count) ? (bucket_index + 1) : bucket_index;\n    return INT64_C(1) << (h->unit_magnitude + adjusted_bucket);\n}\n\nstatic int64_t size_of_equivalent_value_range_given_bucket_indices(\n    const struct hdr_histogram *h,\n    int32_t bucket_index,\n    int32_t sub_bucket_index)\n{\n    const int32_t adjusted_bucket  = (sub_bucket_index >= h->sub_bucket_count) ? 
(bucket_index + 1) : bucket_index;\n    return INT64_C(1) << (h->unit_magnitude + adjusted_bucket);\n}\n\nstatic int64_t lowest_equivalent_value(const struct hdr_histogram* h, int64_t value)\n{\n    int32_t bucket_index     = get_bucket_index(h, value);\n    int32_t sub_bucket_index = get_sub_bucket_index(value, bucket_index, h->unit_magnitude);\n    return value_from_index(bucket_index, sub_bucket_index, h->unit_magnitude);\n}\n\nstatic int64_t lowest_equivalent_value_given_bucket_indices(\n    const struct hdr_histogram *h,\n    int32_t bucket_index,\n    int32_t sub_bucket_index)\n{\n    return value_from_index(bucket_index, sub_bucket_index, h->unit_magnitude);\n}\n\nint64_t hdr_next_non_equivalent_value(const struct hdr_histogram *h, int64_t value)\n{\n    return lowest_equivalent_value(h, value) + hdr_size_of_equivalent_value_range(h, value);\n}\n\nstatic int64_t highest_equivalent_value(const struct hdr_histogram* h, int64_t value)\n{\n    return hdr_next_non_equivalent_value(h, value) - 1;\n}\n\nint64_t hdr_median_equivalent_value(const struct hdr_histogram *h, int64_t value)\n{\n    return lowest_equivalent_value(h, value) + (hdr_size_of_equivalent_value_range(h, value) >> 1);\n}\n\nstatic int64_t non_zero_min(const struct hdr_histogram* h)\n{\n    if (INT64_MAX == h->min_value)\n    {\n        return INT64_MAX;\n    }\n\n    return lowest_equivalent_value(h, h->min_value);\n}\n\nvoid hdr_reset_internal_counters(struct hdr_histogram* h)\n{\n    int min_non_zero_index = -1;\n    int max_index = -1;\n    int64_t observed_total_count = 0;\n    int i;\n\n    for (i = 0; i < h->counts_len; i++)\n    {\n        int64_t count_at_index;\n\n        if ((count_at_index = counts_get_direct(h, i)) > 0)\n        {\n            observed_total_count += count_at_index;\n            max_index = i;\n            if (min_non_zero_index == -1 && i != 0)\n            {\n                min_non_zero_index = i;\n            }\n        }\n    }\n\n    if (max_index == -1)\n    {\n 
       h->max_value = 0;\n    }\n    else\n    {\n        int64_t max_value = hdr_value_at_index(h, max_index);\n        h->max_value = highest_equivalent_value(h, max_value);\n    }\n\n    if (min_non_zero_index == -1)\n    {\n        h->min_value = INT64_MAX;\n    }\n    else\n    {\n        h->min_value = hdr_value_at_index(h, min_non_zero_index);\n    }\n\n    h->total_count = observed_total_count;\n}\n\nstatic int32_t buckets_needed_to_cover_value(int64_t value, int32_t sub_bucket_count, int32_t unit_magnitude)\n{\n    int64_t smallest_untrackable_value = ((int64_t) sub_bucket_count) << unit_magnitude;\n    int32_t buckets_needed = 1;\n    while (smallest_untrackable_value <= value)\n    {\n        if (smallest_untrackable_value > INT64_MAX / 2)\n        {\n            return buckets_needed + 1;\n        }\n        smallest_untrackable_value <<= 1;\n        buckets_needed++;\n    }\n\n    return buckets_needed;\n}\n\n/* ##     ## ######## ##     ##  #######  ########  ##    ## */\n/* ###   ### ##       ###   ### ##     ## ##     ##  ##  ##  */\n/* #### #### ##       #### #### ##     ## ##     ##   ####   */\n/* ## ### ## ######   ## ### ## ##     ## ########     ##    */\n/* ##     ## ##       ##     ## ##     ## ##   ##      ##    */\n/* ##     ## ##       ##     ## ##     ## ##    ##     ##    */\n/* ##     ## ######## ##     ##  #######  ##     ##    ##    */\n\nint hdr_calculate_bucket_config(\n        int64_t lowest_discernible_value,\n        int64_t highest_trackable_value,\n        int significant_figures,\n        struct hdr_histogram_bucket_config* cfg)\n{\n    int32_t sub_bucket_count_magnitude;\n    int64_t largest_value_with_single_unit_resolution;\n\n    if (lowest_discernible_value < 1 ||\n            significant_figures < 1 || 5 < significant_figures ||\n            lowest_discernible_value * 2 > highest_trackable_value)\n    {\n        return EINVAL;\n    }\n\n    cfg->lowest_discernible_value = lowest_discernible_value;\n    
cfg->significant_figures = significant_figures;\n    cfg->highest_trackable_value = highest_trackable_value;\n\n    largest_value_with_single_unit_resolution = 2 * power(10, significant_figures);\n    sub_bucket_count_magnitude = (int32_t) ceil(log((double)largest_value_with_single_unit_resolution) / log(2));\n    cfg->sub_bucket_half_count_magnitude = ((sub_bucket_count_magnitude > 1) ? sub_bucket_count_magnitude : 1) - 1;\n\n    double unit_magnitude = log((double)lowest_discernible_value) / log(2);\n    if (INT32_MAX < unit_magnitude)\n    {\n        return EINVAL;\n    }\n\n    cfg->unit_magnitude = (int32_t) unit_magnitude;\n    cfg->sub_bucket_count      = (int32_t) pow(2, (cfg->sub_bucket_half_count_magnitude + 1));\n    cfg->sub_bucket_half_count = cfg->sub_bucket_count / 2;\n    cfg->sub_bucket_mask       = ((int64_t) cfg->sub_bucket_count - 1) << cfg->unit_magnitude;\n\n    if (cfg->unit_magnitude + cfg->sub_bucket_half_count_magnitude > 61)\n    {\n        return EINVAL;\n    }\n\n    cfg->bucket_count = buckets_needed_to_cover_value(highest_trackable_value, cfg->sub_bucket_count, (int32_t)cfg->unit_magnitude);\n    cfg->counts_len = (cfg->bucket_count + 1) * (cfg->sub_bucket_count / 2);\n\n    return 0;\n}\n\nvoid hdr_init_preallocated(struct hdr_histogram* h, struct hdr_histogram_bucket_config* cfg)\n{\n    h->lowest_discernible_value        = cfg->lowest_discernible_value;\n    h->highest_trackable_value         = cfg->highest_trackable_value;\n    h->unit_magnitude                  = (int32_t)cfg->unit_magnitude;\n    h->significant_figures             = (int32_t)cfg->significant_figures;\n    h->sub_bucket_half_count_magnitude = cfg->sub_bucket_half_count_magnitude;\n    h->sub_bucket_half_count           = cfg->sub_bucket_half_count;\n    h->sub_bucket_mask                 = cfg->sub_bucket_mask;\n    h->sub_bucket_count                = cfg->sub_bucket_count;\n    h->min_value                       = INT64_MAX;\n    h->max_value                    
   = 0;\n    h->normalizing_index_offset        = 0;\n    h->conversion_ratio                = 1.0;\n    h->bucket_count                    = cfg->bucket_count;\n    h->counts_len                      = cfg->counts_len;\n    h->total_count                     = 0;\n}\n\nint hdr_init(\n        int64_t lowest_discernible_value,\n        int64_t highest_trackable_value,\n        int significant_figures,\n        struct hdr_histogram** result)\n{\n    int64_t* counts;\n    struct hdr_histogram_bucket_config cfg;\n    struct hdr_histogram* histogram;\n\n    int r = hdr_calculate_bucket_config(lowest_discernible_value, highest_trackable_value, significant_figures, &cfg);\n    if (r)\n    {\n        return r;\n    }\n\n    counts = (int64_t*) hdr_calloc((size_t) cfg.counts_len, sizeof(int64_t));\n    if (!counts)\n    {\n        return ENOMEM;\n    }\n\n    histogram = (struct hdr_histogram*) hdr_calloc(1, sizeof(struct hdr_histogram));\n    if (!histogram)\n    {\n        hdr_free(counts);\n        return ENOMEM;\n    }\n\n    histogram->counts = counts;\n\n    hdr_init_preallocated(histogram, &cfg);\n    *result = histogram;\n\n    return 0;\n}\n\nvoid hdr_close(struct hdr_histogram* h)\n{\n    if (h) {\n\thdr_free(h->counts);\n\thdr_free(h);\n    }\n}\n\nint hdr_alloc(int64_t highest_trackable_value, int significant_figures, struct hdr_histogram** result)\n{\n    return hdr_init(1, highest_trackable_value, significant_figures, result);\n}\n\n/* reset a histogram to zero. 
*/\nvoid hdr_reset(struct hdr_histogram *h)\n{\n     h->total_count=0;\n     h->min_value = INT64_MAX;\n     h->max_value = 0;\n     memset(h->counts, 0, (sizeof(int64_t) * h->counts_len));\n}\n\nsize_t hdr_get_memory_size(struct hdr_histogram *h)\n{\n    return sizeof(struct hdr_histogram) + h->counts_len * sizeof(int64_t);\n}\n\n/* ##     ## ########  ########     ###    ######## ########  ######  */\n/* ##     ## ##     ## ##     ##   ## ##      ##    ##       ##    ## */\n/* ##     ## ##     ## ##     ##  ##   ##     ##    ##       ##       */\n/* ##     ## ########  ##     ## ##     ##    ##    ######    ######  */\n/* ##     ## ##        ##     ## #########    ##    ##             ## */\n/* ##     ## ##        ##     ## ##     ##    ##    ##       ##    ## */\n/*  #######  ##        ########  ##     ##    ##    ########  ######  */\n\n\nbool hdr_record_value(struct hdr_histogram* h, int64_t value)\n{\n    return hdr_record_values(h, value, 1);\n}\n\nbool hdr_record_value_atomic(struct hdr_histogram* h, int64_t value)\n{\n    return hdr_record_values_atomic(h, value, 1);\n}\n\nbool hdr_record_values(struct hdr_histogram* h, int64_t value, int64_t count)\n{\n    int32_t counts_index;\n\n    if (value < 0)\n    {\n        return false;\n    }\n\n    counts_index = counts_index_for(h, value);\n\n    if (counts_index < 0 || h->counts_len <= counts_index)\n    {\n        return false;\n    }\n\n    counts_inc_normalised(h, counts_index, count);\n    update_min_max(h, value);\n\n    return true;\n}\n\nbool hdr_record_values_atomic(struct hdr_histogram* h, int64_t value, int64_t count)\n{\n    int32_t counts_index;\n\n    if (value < 0)\n    {\n        return false;\n    }\n\n    counts_index = counts_index_for(h, value);\n\n    if (counts_index < 0 || h->counts_len <= counts_index)\n    {\n        return false;\n    }\n\n    counts_inc_normalised_atomic(h, counts_index, count);\n    update_min_max_atomic(h, value);\n\n    return true;\n}\n\nbool 
hdr_record_corrected_value(struct hdr_histogram* h, int64_t value, int64_t expected_interval)\n{\n    return hdr_record_corrected_values(h, value, 1, expected_interval);\n}\n\nbool hdr_record_corrected_value_atomic(struct hdr_histogram* h, int64_t value, int64_t expected_interval)\n{\n    return hdr_record_corrected_values_atomic(h, value, 1, expected_interval);\n}\n\nbool hdr_record_corrected_values(struct hdr_histogram* h, int64_t value, int64_t count, int64_t expected_interval)\n{\n    int64_t missing_value;\n\n    if (!hdr_record_values(h, value, count))\n    {\n        return false;\n    }\n\n    if (expected_interval <= 0 || value <= expected_interval)\n    {\n        return true;\n    }\n\n    missing_value = value - expected_interval;\n    for (; missing_value >= expected_interval; missing_value -= expected_interval)\n    {\n        if (!hdr_record_values(h, missing_value, count))\n        {\n            return false;\n        }\n    }\n\n    return true;\n}\n\nbool hdr_record_corrected_values_atomic(struct hdr_histogram* h, int64_t value, int64_t count, int64_t expected_interval)\n{\n    int64_t missing_value;\n\n    if (!hdr_record_values_atomic(h, value, count))\n    {\n        return false;\n    }\n\n    if (expected_interval <= 0 || value <= expected_interval)\n    {\n        return true;\n    }\n\n    missing_value = value - expected_interval;\n    for (; missing_value >= expected_interval; missing_value -= expected_interval)\n    {\n        if (!hdr_record_values_atomic(h, missing_value, count))\n        {\n            return false;\n        }\n    }\n\n    return true;\n}\n\nint64_t hdr_add(struct hdr_histogram* h, const struct hdr_histogram* from)\n{\n    struct hdr_iter iter;\n    int64_t dropped = 0;\n    hdr_iter_recorded_init(&iter, from);\n\n    while (hdr_iter_next(&iter))\n    {\n        int64_t value = iter.value;\n        int64_t count = iter.count;\n\n        if (!hdr_record_values(h, value, count))\n        {\n            dropped += 
count;\n        }\n    }\n\n    return dropped;\n}\n\nint64_t hdr_add_while_correcting_for_coordinated_omission(\n        struct hdr_histogram* h, struct hdr_histogram* from, int64_t expected_interval)\n{\n    struct hdr_iter iter;\n    int64_t dropped = 0;\n    hdr_iter_recorded_init(&iter, from);\n\n    while (hdr_iter_next(&iter))\n    {\n        int64_t value = iter.value;\n        int64_t count = iter.count;\n\n        if (!hdr_record_corrected_values(h, value, count, expected_interval))\n        {\n            dropped += count;\n        }\n    }\n\n    return dropped;\n}\n\n\n\n/* ##     ##    ###    ##       ##     ## ########  ######  */\n/* ##     ##   ## ##   ##       ##     ## ##       ##    ## */\n/* ##     ##  ##   ##  ##       ##     ## ##       ##       */\n/* ##     ## ##     ## ##       ##     ## ######    ######  */\n/*  ##   ##  ######### ##       ##     ## ##             ## */\n/*   ## ##   ##     ## ##       ##     ## ##       ##    ## */\n/*    ###    ##     ## ########  #######  ########  ######  */\n\n\nint64_t hdr_max(const struct hdr_histogram* h)\n{\n    if (0 == h->max_value)\n    {\n        return 0;\n    }\n\n    return highest_equivalent_value(h, h->max_value);\n}\n\nint64_t hdr_min(const struct hdr_histogram* h)\n{\n    if (0 < hdr_count_at_index(h, 0))\n    {\n        return 0;\n    }\n\n    return non_zero_min(h);\n}\n\nstatic int64_t get_value_from_idx_up_to_count(const struct hdr_histogram* h, int64_t count_at_percentile)\n{\n    int64_t count_to_idx = 0;\n\n    count_at_percentile = 0 < count_at_percentile ? count_at_percentile : 1;\n    for (int32_t idx = 0; idx < h->counts_len; idx++)\n    {\n        count_to_idx += h->counts[idx];\n        if (count_to_idx >= count_at_percentile)\n        {\n            return hdr_value_at_index(h, idx);\n        }\n    }\n\n    return 0;\n}\n\n\nint64_t hdr_value_at_percentile(const struct hdr_histogram* h, double percentile)\n{\n    double requested_percentile = percentile < 100.0 ? 
percentile : 100.0;\n    int64_t count_at_percentile =\n        (int64_t) (((requested_percentile / 100) * h->total_count) + 0.5);\n    int64_t value_from_idx = get_value_from_idx_up_to_count(h, count_at_percentile);\n    if (percentile == 0.0)\n    {\n        return lowest_equivalent_value(h, value_from_idx);\n    }\n    return highest_equivalent_value(h, value_from_idx);\n}\n\nint hdr_value_at_percentiles(const struct hdr_histogram *h, const double *percentiles, int64_t *values, size_t length)\n{\n    if (NULL == percentiles || NULL == values)\n    {\n        return EINVAL;\n    }\n\n    struct hdr_iter iter;\n    const int64_t total_count = h->total_count;\n    // to avoid allocations we use the values array for intermediate computation\n    // i.e. to store the expected cumulative count at each percentile\n    for (size_t i = 0; i < length; i++)\n    {\n        const double requested_percentile = percentiles[i] < 100.0 ? percentiles[i] : 100.0;\n        const int64_t count_at_percentile =\n        (int64_t) (((requested_percentile / 100) * total_count) + 0.5);\n        values[i] = count_at_percentile > 1 ? 
count_at_percentile : 1;\n    }\n\n    hdr_iter_init(&iter, h);\n    int64_t total = 0;\n    size_t at_pos = 0;\n    while (hdr_iter_next(&iter) && at_pos < length)\n    {\n        total += iter.count;\n        while (at_pos < length && total >= values[at_pos])\n        {\n            values[at_pos] = highest_equivalent_value(h, iter.value);\n            at_pos++;\n        }\n    }\n    return 0;\n}\n\ndouble hdr_mean(const struct hdr_histogram* h)\n{\n    struct hdr_iter iter;\n    int64_t total = 0;\n\n    hdr_iter_init(&iter, h);\n\n    while (hdr_iter_next(&iter))\n    {\n        if (0 != iter.count)\n        {\n            total += iter.count * hdr_median_equivalent_value(h, iter.value);\n        }\n    }\n\n    return (total * 1.0) / h->total_count;\n}\n\ndouble hdr_stddev(const struct hdr_histogram* h)\n{\n    double mean = hdr_mean(h);\n    double geometric_dev_total = 0.0;\n\n    struct hdr_iter iter;\n    hdr_iter_init(&iter, h);\n\n    while (hdr_iter_next(&iter))\n    {\n        if (0 != iter.count)\n        {\n            double dev = (hdr_median_equivalent_value(h, iter.value) * 1.0) - mean;\n            geometric_dev_total += (dev * dev) * iter.count;\n        }\n    }\n\n    return sqrt(geometric_dev_total / h->total_count);\n}\n\nbool hdr_values_are_equivalent(const struct hdr_histogram* h, int64_t a, int64_t b)\n{\n    return lowest_equivalent_value(h, a) == lowest_equivalent_value(h, b);\n}\n\nint64_t hdr_lowest_equivalent_value(const struct hdr_histogram* h, int64_t value)\n{\n    return lowest_equivalent_value(h, value);\n}\n\nint64_t hdr_count_at_value(const struct hdr_histogram* h, int64_t value)\n{\n    return counts_get_normalised(h, counts_index_for(h, value));\n}\n\nint64_t hdr_count_at_index(const struct hdr_histogram* h, int32_t index)\n{\n    return counts_get_normalised(h, index);\n}\n\n\n/* #### ######## ######## ########     ###    ########  #######  ########   ######  */\n/*  ##     ##    ##       ##     ##   ## ##      ##    ##    
 ## ##     ## ##    ## */\n/*  ##     ##    ##       ##     ##  ##   ##     ##    ##     ## ##     ## ##       */\n/*  ##     ##    ######   ########  ##     ##    ##    ##     ## ########   ######  */\n/*  ##     ##    ##       ##   ##   #########    ##    ##     ## ##   ##         ## */\n/*  ##     ##    ##       ##    ##  ##     ##    ##    ##     ## ##    ##  ##    ## */\n/* ####    ##    ######## ##     ## ##     ##    ##     #######  ##     ##  ######  */\n\n\nstatic bool has_buckets(struct hdr_iter* iter)\n{\n    return iter->counts_index < iter->h->counts_len;\n}\n\nstatic bool has_next(struct hdr_iter* iter)\n{\n    return iter->cumulative_count < iter->total_count;\n}\n\nstatic bool move_next(struct hdr_iter* iter)\n{\n    iter->counts_index++;\n\n    if (!has_buckets(iter))\n    {\n        return false;\n    }\n\n    iter->count = counts_get_normalised(iter->h, iter->counts_index);\n    iter->cumulative_count += iter->count;\n    const int64_t value = hdr_value_at_index(iter->h, iter->counts_index);\n    const int32_t bucket_index = get_bucket_index(iter->h, value);\n    const int32_t sub_bucket_index = get_sub_bucket_index(value, bucket_index, iter->h->unit_magnitude);\n    const int64_t leq = lowest_equivalent_value_given_bucket_indices(iter->h, bucket_index, sub_bucket_index);\n    const int64_t size_of_equivalent_value_range = size_of_equivalent_value_range_given_bucket_indices(\n        iter->h, bucket_index, sub_bucket_index);\n    iter->lowest_equivalent_value = leq;\n    iter->value = value;\n    iter->highest_equivalent_value = leq + size_of_equivalent_value_range - 1;\n    iter->median_equivalent_value = leq + (size_of_equivalent_value_range >> 1);\n\n    return true;\n}\n\nstatic int64_t peek_next_value_from_index(struct hdr_iter* iter)\n{\n    return hdr_value_at_index(iter->h, iter->counts_index + 1);\n}\n\nstatic bool next_value_greater_than_reporting_level_upper_bound(\n    struct hdr_iter *iter, int64_t reporting_level_upper_bound)\n{\n   
 if (iter->counts_index >= iter->h->counts_len)\n    {\n        return false;\n    }\n\n    return peek_next_value_from_index(iter) > reporting_level_upper_bound;\n}\n\nstatic bool basic_iter_next(struct hdr_iter *iter)\n{\n    if (!has_next(iter) || iter->counts_index >= iter->h->counts_len)\n    {\n        return false;\n    }\n\n    move_next(iter);\n\n    return true;\n}\n\nstatic void update_iterated_values(struct hdr_iter* iter, int64_t new_value_iterated_to)\n{\n    iter->value_iterated_from = iter->value_iterated_to;\n    iter->value_iterated_to = new_value_iterated_to;\n}\n\nstatic bool all_values_iter_next(struct hdr_iter* iter)\n{\n    bool result = move_next(iter);\n\n    if (result)\n    {\n        update_iterated_values(iter, iter->value);\n    }\n\n    return result;\n}\n\nvoid hdr_iter_init(struct hdr_iter* iter, const struct hdr_histogram* h)\n{\n    iter->h = h;\n\n    iter->counts_index = -1;\n    iter->total_count = h->total_count;\n    iter->count = 0;\n    iter->cumulative_count = 0;\n    iter->value = 0;\n    iter->highest_equivalent_value = 0;\n    iter->value_iterated_from = 0;\n    iter->value_iterated_to = 0;\n\n    iter->_next_fp = all_values_iter_next;\n}\n\nbool hdr_iter_next(struct hdr_iter* iter)\n{\n    return iter->_next_fp(iter);\n}\n\n/* ########  ######## ########   ######  ######## ##    ## ######## #### ##       ########  ######  */\n/* ##     ## ##       ##     ## ##    ## ##       ###   ##    ##     ##  ##       ##       ##    ## */\n/* ##     ## ##       ##     ## ##       ##       ####  ##    ##     ##  ##       ##       ##       */\n/* ########  ######   ########  ##       ######   ## ## ##    ##     ##  ##       ######    ######  */\n/* ##        ##       ##   ##   ##       ##       ##  ####    ##     ##  ##       ##             ## */\n/* ##        ##       ##    ##  ##    ## ##       ##   ###    ##     ##  ##       ##       ##    ## */\n/* ##        ######## ##     ##  ######  ######## ##    ##    ##    #### ######## 
########  ######  */\n\nstatic bool percentile_iter_next(struct hdr_iter* iter)\n{\n    int64_t temp, half_distance, percentile_reporting_ticks;\n\n    struct hdr_iter_percentiles* percentiles = &iter->specifics.percentiles;\n\n    if (!has_next(iter))\n    {\n        if (percentiles->seen_last_value)\n        {\n            return false;\n        }\n\n        percentiles->seen_last_value = true;\n        percentiles->percentile = 100.0;\n\n        return true;\n    }\n\n    if (iter->counts_index == -1 && !basic_iter_next(iter))\n    {\n        return false;\n    }\n\n    do\n    {\n        double current_percentile = (100.0 * (double) iter->cumulative_count) / iter->h->total_count;\n        if (iter->count != 0 &&\n                percentiles->percentile_to_iterate_to <= current_percentile)\n        {\n            update_iterated_values(iter, highest_equivalent_value(iter->h, iter->value));\n\n            percentiles->percentile = percentiles->percentile_to_iterate_to;\n            temp = (int64_t)(log(100 / (100.0 - (percentiles->percentile_to_iterate_to))) / log(2)) + 1;\n            half_distance = (int64_t) pow(2, (double) temp);\n            percentile_reporting_ticks = percentiles->ticks_per_half_distance * half_distance;\n            percentiles->percentile_to_iterate_to += 100.0 / percentile_reporting_ticks;\n\n            return true;\n        }\n    }\n    while (basic_iter_next(iter));\n\n    return true;\n}\n\nvoid hdr_iter_percentile_init(struct hdr_iter* iter, const struct hdr_histogram* h, int32_t ticks_per_half_distance)\n{\n    iter->h = h;\n\n    hdr_iter_init(iter, h);\n\n    iter->specifics.percentiles.seen_last_value          = false;\n    iter->specifics.percentiles.ticks_per_half_distance  = ticks_per_half_distance;\n    iter->specifics.percentiles.percentile_to_iterate_to = 0.0;\n    iter->specifics.percentiles.percentile               = 0.0;\n\n    iter->_next_fp = percentile_iter_next;\n}\n\nstatic void format_line_string(char* str, 
size_t len, int significant_figures, format_type format)\n{\n#if defined(_MSC_VER)\n#define snprintf _snprintf\n#pragma warning(push)\n#pragma warning(disable: 4996)\n#endif\n    const char* format_str = \"%s%d%s\";\n\n    switch (format)\n    {\n        case CSV:\n            snprintf(str, len, format_str, \"%.\", significant_figures, \"f,%f,%d,%.2f\\n\");\n            break;\n        case CLASSIC:\n            snprintf(str, len, format_str, \"%12.\", significant_figures, \"f %12f %12d %12.2f\\n\");\n            break;\n        default:\n            snprintf(str, len, format_str, \"%12.\", significant_figures, \"f %12f %12d %12.2f\\n\");\n    }\n#if defined(_MSC_VER)\n#undef snprintf\n#pragma warning(pop)\n#endif\n}\n\n\n/* ########  ########  ######   #######  ########  ########  ######## ########   */\n/* ##     ## ##       ##    ## ##     ## ##     ## ##     ## ##       ##     ##  */\n/* ##     ## ##       ##       ##     ## ##     ## ##     ## ##       ##     ##  */\n/* ########  ######   ##       ##     ## ########  ##     ## ######   ##     ##  */\n/* ##   ##   ##       ##       ##     ## ##   ##   ##     ## ##       ##     ##  */\n/* ##    ##  ##       ##    ## ##     ## ##    ##  ##     ## ##       ##     ##  */\n/* ##     ## ########  ######   #######  ##     ## ########  ######## ########   */\n\n\nstatic bool recorded_iter_next(struct hdr_iter* iter)\n{\n    while (basic_iter_next(iter))\n    {\n        if (iter->count != 0)\n        {\n            update_iterated_values(iter, iter->value);\n\n            iter->specifics.recorded.count_added_in_this_iteration_step = iter->count;\n            return true;\n        }\n    }\n\n    return false;\n}\n\nvoid hdr_iter_recorded_init(struct hdr_iter* iter, const struct hdr_histogram* h)\n{\n    hdr_iter_init(iter, h);\n\n    iter->specifics.recorded.count_added_in_this_iteration_step = 0;\n\n    iter->_next_fp = recorded_iter_next;\n}\n\n/* ##       #### ##    ## ########    ###    ########  */\n/* ##        ## 
 ###   ## ##         ## ##   ##     ## */\n/* ##        ##  ####  ## ##        ##   ##  ##     ## */\n/* ##        ##  ## ## ## ######   ##     ## ########  */\n/* ##        ##  ##  #### ##       ######### ##   ##   */\n/* ##        ##  ##   ### ##       ##     ## ##    ##  */\n/* ######## #### ##    ## ######## ##     ## ##     ## */\n\n\nstatic bool iter_linear_next(struct hdr_iter* iter)\n{\n    struct hdr_iter_linear* linear = &iter->specifics.linear;\n\n    linear->count_added_in_this_iteration_step = 0;\n\n    if (has_next(iter) ||\n        next_value_greater_than_reporting_level_upper_bound(\n            iter, linear->next_value_reporting_level_lowest_equivalent))\n    {\n        do\n        {\n            if (iter->value >= linear->next_value_reporting_level_lowest_equivalent)\n            {\n                update_iterated_values(iter, linear->next_value_reporting_level);\n\n                linear->next_value_reporting_level += linear->value_units_per_bucket;\n                linear->next_value_reporting_level_lowest_equivalent =\n                    lowest_equivalent_value(iter->h, linear->next_value_reporting_level);\n\n                return true;\n            }\n\n            if (!move_next(iter))\n            {\n                return true;\n            }\n\n            linear->count_added_in_this_iteration_step += iter->count;\n        }\n        while (true);\n    }\n\n    return false;\n}\n\n\nvoid hdr_iter_linear_init(struct hdr_iter* iter, const struct hdr_histogram* h, int64_t value_units_per_bucket)\n{\n    hdr_iter_init(iter, h);\n\n    iter->specifics.linear.count_added_in_this_iteration_step = 0;\n    iter->specifics.linear.value_units_per_bucket = value_units_per_bucket;\n    iter->specifics.linear.next_value_reporting_level = value_units_per_bucket;\n    iter->specifics.linear.next_value_reporting_level_lowest_equivalent = lowest_equivalent_value(h, value_units_per_bucket);\n\n    iter->_next_fp = iter_linear_next;\n}\n\nvoid 
hdr_iter_linear_set_value_units_per_bucket(struct hdr_iter* iter, int64_t value_units_per_bucket)\n{\n    iter->specifics.linear.value_units_per_bucket = value_units_per_bucket;\n}\n\n/* ##        #######   ######      ###    ########  #### ######## ##     ## ##     ## ####  ######  */\n/* ##       ##     ## ##    ##    ## ##   ##     ##  ##     ##    ##     ## ###   ###  ##  ##    ## */\n/* ##       ##     ## ##         ##   ##  ##     ##  ##     ##    ##     ## #### ####  ##  ##       */\n/* ##       ##     ## ##   #### ##     ## ########   ##     ##    ######### ## ### ##  ##  ##       */\n/* ##       ##     ## ##    ##  ######### ##   ##    ##     ##    ##     ## ##     ##  ##  ##       */\n/* ##       ##     ## ##    ##  ##     ## ##    ##   ##     ##    ##     ## ##     ##  ##  ##    ## */\n/* ########  #######   ######   ##     ## ##     ## ####    ##    ##     ## ##     ## ####  ######  */\n\nstatic bool log_iter_next(struct hdr_iter *iter)\n{\n    struct hdr_iter_log* logarithmic = &iter->specifics.log;\n\n    logarithmic->count_added_in_this_iteration_step = 0;\n\n    if (has_next(iter) ||\n        next_value_greater_than_reporting_level_upper_bound(\n            iter, logarithmic->next_value_reporting_level_lowest_equivalent))\n    {\n        do\n        {\n            if (iter->value >= logarithmic->next_value_reporting_level_lowest_equivalent)\n            {\n                update_iterated_values(iter, logarithmic->next_value_reporting_level);\n\n                logarithmic->next_value_reporting_level *= (int64_t)logarithmic->log_base;\n                logarithmic->next_value_reporting_level_lowest_equivalent = lowest_equivalent_value(iter->h, logarithmic->next_value_reporting_level);\n\n                return true;\n            }\n\n            if (!move_next(iter))\n            {\n                return true;\n            }\n\n            logarithmic->count_added_in_this_iteration_step += iter->count;\n        }\n        while (true);\n    }\n\n    
return false;\n}\n\nvoid hdr_iter_log_init(\n        struct hdr_iter* iter,\n        const struct hdr_histogram* h,\n        int64_t value_units_first_bucket,\n        double log_base)\n{\n    hdr_iter_init(iter, h);\n    iter->specifics.log.count_added_in_this_iteration_step = 0;\n    iter->specifics.log.log_base = log_base;\n    iter->specifics.log.next_value_reporting_level = value_units_first_bucket;\n    iter->specifics.log.next_value_reporting_level_lowest_equivalent = lowest_equivalent_value(h, value_units_first_bucket);\n\n    iter->_next_fp = log_iter_next;\n}\n\n/* Printing. */\n\nstatic const char* format_head_string(format_type format)\n{\n    switch (format)\n    {\n        case CSV:\n            return \"%s,%s,%s,%s\\n\";\n        case CLASSIC:\n        default:\n            return \"%12s %12s %12s %12s\\n\\n\";\n    }\n}\n\nstatic const char CLASSIC_FOOTER[] =\n    \"#[Mean    = %12.3f, StdDeviation   = %12.3f]\\n\"\n    \"#[Max     = %12.3f, Total count    = %12\" PRIu64 \"]\\n\"\n    \"#[Buckets = %12d, SubBuckets     = %12d]\\n\";\n\nint hdr_percentiles_print(\n        struct hdr_histogram* h, FILE* stream, int32_t ticks_per_half_distance,\n        double value_scale, format_type format)\n{\n    char line_format[25];\n    const char* head_format;\n    int rc = 0;\n    struct hdr_iter iter;\n    struct hdr_iter_percentiles * percentiles;\n\n    format_line_string(line_format, 25, h->significant_figures, format);\n    head_format = format_head_string(format);\n\n    hdr_iter_percentile_init(&iter, h, ticks_per_half_distance);\n\n    if (fprintf(\n            stream, head_format,\n            \"Value\", \"Percentile\", \"TotalCount\", \"1/(1-Percentile)\") < 0)\n    {\n        rc = EIO;\n        goto cleanup;\n    }\n\n    percentiles = &iter.specifics.percentiles;\n    while (hdr_iter_next(&iter))\n    {\n        double  value               = iter.highest_equivalent_value / value_scale;\n        double  percentile          = percentiles->percentile 
/ 100.0;\n        int64_t total_count         = iter.cumulative_count;\n        double  inverted_percentile = (1.0 / (1.0 - percentile));\n\n        if (fprintf(\n                stream, line_format, value, percentile, total_count, inverted_percentile) < 0)\n        {\n            rc = EIO;\n            goto cleanup;\n        }\n    }\n\n    if (CLASSIC == format)\n    {\n        double mean   = hdr_mean(h)   / value_scale;\n        double stddev = hdr_stddev(h) / value_scale;\n        double max    = hdr_max(h)    / value_scale;\n\n        if (fprintf(\n                stream, CLASSIC_FOOTER,  mean, stddev, max,\n                h->total_count, h->bucket_count, h->sub_bucket_count) < 0)\n        {\n            rc = EIO;\n            goto cleanup;\n        }\n    }\n\n    cleanup:\n    return rc;\n}\n"
  },
  {
    "path": "deps/hdr_histogram/hdr_histogram.h",
    "content": "/**\n * hdr_histogram.h\n * Written by Michael Barker and released to the public domain,\n * as explained at http://creativecommons.org/publicdomain/zero/1.0/\n *\n * The source for the hdr_histogram utilises a few C99 constructs, specifically\n * the use of stdint/stdbool and inline variable declaration.\n */\n\n#ifndef HDR_HISTOGRAM_H\n#define HDR_HISTOGRAM_H 1\n\n#include <stdint.h>\n#include <stdbool.h>\n#include <stdio.h>\n\nstruct hdr_histogram\n{\n    int64_t lowest_discernible_value;\n    int64_t highest_trackable_value;\n    int32_t unit_magnitude;\n    int32_t significant_figures;\n    int32_t sub_bucket_half_count_magnitude;\n    int32_t sub_bucket_half_count;\n    int64_t sub_bucket_mask;\n    int32_t sub_bucket_count;\n    int32_t bucket_count;\n    int64_t min_value;\n    int64_t max_value;\n    int32_t normalizing_index_offset;\n    double conversion_ratio;\n    int32_t counts_len;\n    int64_t total_count;\n    int64_t* counts;\n};\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/**\n * Allocate the memory and initialise the hdr_histogram.\n *\n * Due to the size of the histogram being the result of some reasonably\n * involved math on the input parameters, it is tricky to stack allocate.\n * The histogram should be released with hdr_close.\n *\n * @param lowest_discernible_value The smallest possible value that is distinguishable from 0.\n * Must be a positive integer that is >= 1. May be internally rounded down to nearest power of 2.\n * @param highest_trackable_value The largest possible value to be put into the\n * histogram.\n * @param significant_figures The level of precision for this histogram, i.e. the number\n * of figures in a decimal number that will be maintained.  E.g. a value of 3 will mean\n * the results from the histogram will be accurate up to the first three digits.  
Must\n * be a value between 1 and 5 (inclusive).\n * @param result Output parameter to capture allocated histogram.\n * @return 0 on success, EINVAL if lowest_discernible_value is < 1 or the\n * significant_figure value is outside of the allowed range, ENOMEM if malloc\n * failed.\n */\nint hdr_init(\n    int64_t lowest_discernible_value,\n    int64_t highest_trackable_value,\n    int significant_figures,\n    struct hdr_histogram** result);\n\n/**\n * Free the memory and close the hdr_histogram.\n *\n * @param h The histogram you want to close.\n */\nvoid hdr_close(struct hdr_histogram* h);\n\n/**\n * Allocate the memory and initialise the hdr_histogram.  This is the equivalent of calling\n * hdr_init(1, highest_trackable_value, significant_figures, result);\n *\n * @deprecated use hdr_init.\n */\nint hdr_alloc(int64_t highest_trackable_value, int significant_figures, struct hdr_histogram** result);\n\n\n/**\n * Reset a histogram to zero - empty out a histogram and re-initialise it\n *\n * If you want to re-use an existing histogram, but reset everything back to zero, this\n * is the routine to use.\n *\n * @param h The histogram you want to reset to empty.\n *\n */\nvoid hdr_reset(struct hdr_histogram* h);\n\n/**\n * Get the memory size of the hdr_histogram.\n *\n * @param h \"This\" pointer\n * @return The amount of memory used by the hdr_histogram in bytes\n */\nsize_t hdr_get_memory_size(struct hdr_histogram* h);\n\n/**\n * Records a value in the histogram, will round this value off to a precision at or better\n * than the significant_figure specified at construction time.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_value(struct hdr_histogram* h, int64_t value);\n\n/**\n * Records a value in the histogram, will round this value off to a precision at or better\n * than the 
significant_figure specified at construction time.\n *\n * Will record this value atomically, however the whole structure may appear inconsistent\n * when read concurrently with this update.  Do NOT mix calls to this method with calls\n * to non-atomic updates.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_value_atomic(struct hdr_histogram* h, int64_t value);\n\n/**\n * Records count values in the histogram, will round this value off to a\n * precision at or better than the significant_figure specified at construction\n * time.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param count Number of 'value's to add to the histogram\n * @return false if any value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_values(struct hdr_histogram* h, int64_t value, int64_t count);\n\n/**\n * Records count values in the histogram, will round this value off to a\n * precision at or better than the significant_figure specified at construction\n * time.\n *\n * Will record this value atomically, however the whole structure may appear inconsistent\n * when read concurrently with this update.  
Do NOT mix calls to this method with calls\n * to non-atomic updates.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param count Number of 'value's to add to the histogram\n * @return false if any value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_values_atomic(struct hdr_histogram* h, int64_t value, int64_t count);\n\n/**\n * Record a value in the histogram and backfill based on an expected interval.\n *\n * Records a value in the histogram, will round this value off to a precision at or better\n * than the significant_figure specified at construction time.  This is specifically used\n * for recording latency.  If the value is larger than the expected_interval then the\n * latency recording system has experienced co-ordinated omission.  This method fills in the\n * values that would have occurred had the client providing the load not been blocked.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param expected_interval The delay between recording values.\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_corrected_value(struct hdr_histogram* h, int64_t value, int64_t expected_interval);\n\n/**\n * Record a value in the histogram and backfill based on an expected interval.\n *\n * Records a value in the histogram, will round this value off to a precision at or better\n * than the significant_figure specified at construction time.  This is specifically used\n * for recording latency.  If the value is larger than the expected_interval then the\n * latency recording system has experienced co-ordinated omission.  
This method fills in the\n * values that would have occurred had the client providing the load not been blocked.\n *\n * Will record this value atomically, however the whole structure may appear inconsistent\n * when read concurrently with this update.  Do NOT mix calls to this method with calls\n * to non-atomic updates.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param expected_interval The delay between recording values.\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_corrected_value_atomic(struct hdr_histogram* h, int64_t value, int64_t expected_interval);\n\n/**\n * Record a value in the histogram 'count' times.  Applies the same correcting logic\n * as 'hdr_record_corrected_value'.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param count Number of 'value's to add to the histogram\n * @param expected_interval The delay between recording values.\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_corrected_values(struct hdr_histogram* h, int64_t value, int64_t count, int64_t expected_interval);\n\n/**\n * Record a value in the histogram 'count' times.  Applies the same correcting logic\n * as 'hdr_record_corrected_value'.\n *\n * Will record this value atomically, however the whole structure may appear inconsistent\n * when read concurrently with this update.  
Do NOT mix calls to this method with calls\n * to non-atomic updates.\n *\n * @param h \"This\" pointer\n * @param value Value to add to the histogram\n * @param count Number of 'value's to add to the histogram\n * @param expected_interval The delay between recording values.\n * @return false if the value is larger than the highest_trackable_value and can't be recorded,\n * true otherwise.\n */\nbool hdr_record_corrected_values_atomic(struct hdr_histogram* h, int64_t value, int64_t count, int64_t expected_interval);\n\n/**\n * Adds all of the values from 'from' to 'this' histogram.  Will return the\n * number of values that are dropped when copying.  Values will be dropped\n * if they are outside of h.lowest_discernible_value and\n * h.highest_trackable_value.\n *\n * @param h \"This\" pointer\n * @param from Histogram to copy values from.\n * @return The number of values dropped when copying.\n */\nint64_t hdr_add(struct hdr_histogram* h, const struct hdr_histogram* from);\n\n/**\n * Adds all of the values from 'from' to 'this' histogram.  Will return the\n * number of values that are dropped when copying.  Values will be dropped\n * if they are outside of h.lowest_discernible_value and\n * h.highest_trackable_value.\n *\n * @param h \"This\" pointer\n * @param from Histogram to copy values from.\n * @param expected_interval The delay between recording values.\n * @return The number of values dropped when copying.\n */\nint64_t hdr_add_while_correcting_for_coordinated_omission(\n    struct hdr_histogram* h, struct hdr_histogram* from, int64_t expected_interval);\n\n/**\n * Get minimum value from the histogram.  Will return 2^63-1 if the histogram\n * is empty.\n *\n * @param h \"This\" pointer\n */\nint64_t hdr_min(const struct hdr_histogram* h);\n\n/**\n * Get maximum value from the histogram.  
Will return 0 if the histogram\n * is empty.\n *\n * @param h \"This\" pointer\n */\nint64_t hdr_max(const struct hdr_histogram* h);\n\n/**\n * Get the value at a specific percentile.\n *\n * @param h \"This\" pointer.\n * @param percentile The percentile to get the value for\n */\nint64_t hdr_value_at_percentile(const struct hdr_histogram* h, double percentile);\n\n/**\n * Get the values at the given percentiles.\n *\n * @param h \"This\" pointer.\n * @param percentiles The ordered percentiles array to get the values for.\n * @param length Number of elements in the arrays.\n * @param values Destination array containing the values at the given percentiles.\n * The values array should be allocated by the caller.\n * @return 0 on success, ENOMEM if the provided destination array is null.\n */\nint hdr_value_at_percentiles(const struct hdr_histogram *h, const double *percentiles, int64_t *values, size_t length);\n\n/**\n * Gets the standard deviation for the values in the histogram.\n *\n * @param h \"This\" pointer\n * @return The standard deviation\n */\ndouble hdr_stddev(const struct hdr_histogram* h);\n\n/**\n * Gets the mean for the values in the histogram.\n *\n * @param h \"This\" pointer\n * @return The mean\n */\ndouble hdr_mean(const struct hdr_histogram* h);\n\n/**\n * Determine if two values are equivalent with the histogram's resolution.\n * Where \"equivalent\" means that value samples recorded for any two\n * equivalent values are counted in a common total count.\n *\n * @param h \"This\" pointer\n * @param a first value to compare\n * @param b second value to compare\n * @return 'true' if values are equivalent with the histogram's resolution.\n */\nbool hdr_values_are_equivalent(const struct hdr_histogram* h, int64_t a, int64_t b);\n\n/**\n * Get the lowest value that is equivalent to the given value within the histogram's resolution.\n * Where \"equivalent\" means that value samples recorded for any two\n * equivalent values are counted in a common 
total count.\n *\n * @param h \"This\" pointer\n * @param value The given value\n * @return The lowest value that is equivalent to the given value within the histogram's resolution.\n */\nint64_t hdr_lowest_equivalent_value(const struct hdr_histogram* h, int64_t value);\n\n/**\n * Get the count of recorded values at a specific value\n * (to within the histogram resolution at the value level).\n *\n * @param h \"This\" pointer\n * @param value The value for which to provide the recorded count\n * @return The total count of values recorded in the histogram within the value range that is\n * {@literal >=} lowestEquivalentValue(<i>value</i>) and {@literal <=} highestEquivalentValue(<i>value</i>)\n */\nint64_t hdr_count_at_value(const struct hdr_histogram* h, int64_t value);\n\nint64_t hdr_count_at_index(const struct hdr_histogram* h, int32_t index);\n\nint64_t hdr_value_at_index(const struct hdr_histogram* h, int32_t index);\n\nstruct hdr_iter_percentiles\n{\n    bool seen_last_value;\n    int32_t ticks_per_half_distance;\n    double percentile_to_iterate_to;\n    double percentile;\n};\n\nstruct hdr_iter_recorded\n{\n    int64_t count_added_in_this_iteration_step;\n};\n\nstruct hdr_iter_linear\n{\n    int64_t value_units_per_bucket;\n    int64_t count_added_in_this_iteration_step;\n    int64_t next_value_reporting_level;\n    int64_t next_value_reporting_level_lowest_equivalent;\n};\n\nstruct hdr_iter_log\n{\n    double log_base;\n    int64_t count_added_in_this_iteration_step;\n    int64_t next_value_reporting_level;\n    int64_t next_value_reporting_level_lowest_equivalent;\n};\n\n/**\n * The basic iterator.  This is a generic structure\n * that supports all of the types of iteration.  
Use\n * the appropriate initialiser to get the desired\n * iteration.\n */\nstruct hdr_iter\n{\n    const struct hdr_histogram* h;\n    /** raw index into the counts array */\n    int32_t counts_index;\n    /** snapshot of the length at the time the iterator is created */\n    int64_t total_count;\n    /** value directly from array for the current counts_index */\n    int64_t count;\n    /** sum of all of the counts up to and including the count at this index */\n    int64_t cumulative_count;\n    /** The current value based on counts_index */\n    int64_t value;\n    int64_t highest_equivalent_value;\n    int64_t lowest_equivalent_value;\n    int64_t median_equivalent_value;\n    int64_t value_iterated_from;\n    int64_t value_iterated_to;\n\n    union\n    {\n        struct hdr_iter_percentiles percentiles;\n        struct hdr_iter_recorded recorded;\n        struct hdr_iter_linear linear;\n        struct hdr_iter_log log;\n    } specifics;\n\n    bool (* _next_fp)(struct hdr_iter* iter);\n\n};\n\n/**\n * Initialises the basic iterator.\n *\n * @param iter 'This' pointer\n * @param h The histogram to iterate over\n */\nvoid hdr_iter_init(struct hdr_iter* iter, const struct hdr_histogram* h);\n\n/**\n * Initialise the iterator for use with percentiles.\n */\nvoid hdr_iter_percentile_init(struct hdr_iter* iter, const struct hdr_histogram* h, int32_t ticks_per_half_distance);\n\n/**\n * Initialise the iterator for use with recorded values.\n */\nvoid hdr_iter_recorded_init(struct hdr_iter* iter, const struct hdr_histogram* h);\n\n/**\n * Initialise the iterator for use with linear values.\n */\nvoid hdr_iter_linear_init(\n    struct hdr_iter* iter,\n    const struct hdr_histogram* h,\n    int64_t value_units_per_bucket);\n\n/**\n * Update the iterator value units per bucket\n */\nvoid hdr_iter_linear_set_value_units_per_bucket(struct hdr_iter* iter, int64_t value_units_per_bucket);\n\n/**\n * Initialise the iterator for use with logarithmic values\n */\nvoid 
hdr_iter_log_init(\n    struct hdr_iter* iter,\n    const struct hdr_histogram* h,\n    int64_t value_units_first_bucket,\n    double log_base);\n\n/**\n * Iterate to the next value for the iterator.  If there are no more values\n * available return false.\n *\n * @param iter 'This' pointer\n * @return 'false' if there are no values remaining for this iterator.\n */\nbool hdr_iter_next(struct hdr_iter* iter);\n\ntypedef enum\n{\n    CLASSIC,\n    CSV\n} format_type;\n\n/**\n * Print out a percentile-based histogram to the supplied stream.  Note that\n * this call will not flush the FILE; this is left up to the user.\n *\n * @param h 'This' pointer\n * @param stream The FILE to write the output to\n * @param ticks_per_half_distance The number of iteration steps per half-distance to 100%\n * @param value_scale Scale the output values by this amount\n * @param format Format to use, e.g. CSV.\n * @return 0 on success, error code on failure.  EIO if an error occurs writing\n * the output.\n */\nint hdr_percentiles_print(\n    struct hdr_histogram* h, FILE* stream, int32_t ticks_per_half_distance,\n    double value_scale, format_type format);\n\n/**\n* Internal allocation methods, used by hdr_dbl_histogram.\n*/\nstruct hdr_histogram_bucket_config\n{\n    int64_t lowest_discernible_value;\n    int64_t highest_trackable_value;\n    int64_t unit_magnitude;\n    int64_t significant_figures;\n    int32_t sub_bucket_half_count_magnitude;\n    int32_t sub_bucket_half_count;\n    int64_t sub_bucket_mask;\n    int32_t sub_bucket_count;\n    int32_t bucket_count;\n    int32_t counts_len;\n};\n\nint hdr_calculate_bucket_config(\n    int64_t lowest_discernible_value,\n    int64_t highest_trackable_value,\n    int significant_figures,\n    struct hdr_histogram_bucket_config* cfg);\n\nvoid hdr_init_preallocated(struct hdr_histogram* h, struct hdr_histogram_bucket_config* cfg);\n\nint64_t hdr_size_of_equivalent_value_range(const struct hdr_histogram* h, int64_t value);\n\nint64_t 
hdr_next_non_equivalent_value(const struct hdr_histogram* h, int64_t value);\n\nint64_t hdr_median_equivalent_value(const struct hdr_histogram* h, int64_t value);\n\n/**\n * Used to reset counters after importing data manually into the histogram, used by the logging code\n * and other custom serialisation tools.\n */\nvoid hdr_reset_internal_counters(struct hdr_histogram* h);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hdr_histogram/hdr_redis_malloc.h",
    "content": "#ifndef HDR_MALLOC_H__\n#define HDR_MALLOC_H__\n\nvoid *zmalloc(size_t size);\nvoid *zcalloc_num(size_t num, size_t size);\nvoid *zrealloc(void *ptr, size_t size);\nvoid zfree(void *ptr);\n\n#define hdr_malloc zmalloc\n#define hdr_calloc zcalloc_num\n#define hdr_realloc zrealloc\n#define hdr_free zfree\n#endif\n"
  },
  {
    "path": "deps/hdr_histogram/hdr_tests.h",
    "content": "#ifndef HDR_TESTS_H\n#define HDR_TESTS_H\n\n/* These are functions used in tests and are not intended for normal usage. */\n\n#include \"hdr_histogram.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nint32_t counts_index_for(const struct hdr_histogram* h, int64_t value);\nint hdr_encode_compressed(struct hdr_histogram* h, uint8_t** compressed_histogram, size_t* compressed_len);\nint hdr_decode_compressed(uint8_t* buffer, size_t length, struct hdr_histogram** histogram);\nvoid hdr_base64_decode_block(const char* input, uint8_t* output);\nvoid hdr_base64_encode_block(const uint8_t* input, char* output);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/.github/release-drafter-config.yml",
    "content": "name-template: '$NEXT_MAJOR_VERSION'\ntag-template: 'v$NEXT_MAJOR_VERSION'\nautolabeler:\n  - label: 'maintenance'\n    files:\n      - '*.md'\n      - '.github/*'\n  - label: 'bug'\n    branch:\n      - '/bug-.+'\n  - label: 'maintenance'\n    branch:\n      - '/maintenance-.+'\n  - label: 'feature'\n    branch:\n      - '/feature-.+'\ncategories:\n  - title: 'Breaking Changes'\n    labels:\n      - 'breakingchange'\n\n  - title: '🧪 Experimental Features'\n    labels:\n      - 'experimental'\n  - title: '🚀 New Features'\n    labels:\n      - 'feature'\n      - 'enhancement'\n  - title: '🐛 Bug Fixes'\n    labels:\n      - 'fix'\n      - 'bugfix'\n      - 'bug'\n      - 'BUG'\n  - title: '🧰 Maintenance'\n    label: 'maintenance'\nchange-template: '- $TITLE @$AUTHOR (#$NUMBER)'\nexclude-labels:\n  - 'skip-changelog'\ntemplate: |\n  ## Changes\n\n  $CHANGES\n\n  ## Contributors\n  We'd like to thank all the contributors who worked on this release!\n\n  $CONTRIBUTORS\n\n"
  },
  {
    "path": "deps/hiredis/.github/workflows/build.yml",
    "content": "name: Build and test\non: [push, pull_request]\n\njobs:\n  ubuntu:\n    name: Ubuntu\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Install dependencies\n        run: |\n          curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg\n          echo \"deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main\" | sudo tee /etc/apt/sources.list.d/redis.list\n          sudo apt-get update\n          sudo apt-get install -y redis-server valgrind libevent-dev\n\n      - name: Build using cmake\n        env:\n          EXTRA_CMAKE_OPTS: -DENABLE_EXAMPLES:BOOL=ON -DENABLE_SSL:BOOL=ON -DENABLE_SSL_TESTS:BOOL=ON -DENABLE_ASYNC_TESTS:BOOL=ON\n          CFLAGS: -Werror\n          CXXFLAGS: -Werror\n        run: mkdir build && cd build && cmake .. && make\n\n      - name: Build using makefile\n        run: USE_SSL=1 TEST_ASYNC=1 make\n\n      - name: Run tests\n        env:\n          SKIPS_AS_FAILS: 1\n          TEST_SSL: 1\n        run: $GITHUB_WORKSPACE/test.sh\n\n      #      - name: Run tests under valgrind\n      #        env:\n      #          SKIPS_AS_FAILS: 1\n      #          TEST_PREFIX: valgrind --error-exitcode=99 --track-origins=yes --leak-check=full\n      #        run: $GITHUB_WORKSPACE/test.sh\n\n  centos7:\n    name: CentOS 7\n    runs-on: ubuntu-latest\n    container: centos:7\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Install dependencies\n        run: |\n          yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm\n          yum -y --enablerepo=remi install redis\n          yum -y install gcc gcc-c++ make openssl openssl-devel cmake3 valgrind libevent-devel\n\n      - name: Build using cmake\n        env:\n          EXTRA_CMAKE_OPTS: -DENABLE_EXAMPLES:BOOL=ON -DENABLE_SSL:BOOL=ON -DENABLE_SSL_TESTS:BOOL=ON -DENABLE_ASYNC_TESTS:BOOL=ON\n         
 CFLAGS: -Werror\n          CXXFLAGS: -Werror\n        run: mkdir build && cd build && cmake3 .. && make\n\n      - name: Build using Makefile\n        run: USE_SSL=1 TEST_ASYNC=1 make\n\n      - name: Run tests\n        env:\n          SKIPS_AS_FAILS: 1\n          TEST_SSL: 1\n        run: $GITHUB_WORKSPACE/test.sh\n\n      - name: Run tests under valgrind\n        env:\n          SKIPS_AS_FAILS: 1\n          TEST_SSL: 1\n          TEST_PREFIX: valgrind --error-exitcode=99 --track-origins=yes --leak-check=full\n        run: $GITHUB_WORKSPACE/test.sh\n\n  centos8:\n    name: RockyLinux 8\n    runs-on: ubuntu-latest\n    container: rockylinux:8\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Install dependencies\n        run: |\n          dnf -y upgrade --refresh\n          dnf -y install https://rpms.remirepo.net/enterprise/remi-release-8.rpm\n          dnf -y module install redis:remi-6.0\n          dnf -y group install \"Development Tools\"\n          dnf -y install openssl-devel cmake valgrind libevent-devel\n\n      - name: Build using cmake\n        env:\n          EXTRA_CMAKE_OPTS: -DENABLE_EXAMPLES:BOOL=ON -DENABLE_SSL:BOOL=ON -DENABLE_SSL_TESTS:BOOL=ON -DENABLE_ASYNC_TESTS:BOOL=ON\n          CFLAGS: -Werror\n          CXXFLAGS: -Werror\n        run: mkdir build && cd build && cmake .. 
&& make\n\n      - name: Build using Makefile\n        run: USE_SSL=1 TEST_ASYNC=1 make\n\n      - name: Run tests\n        env:\n          SKIPS_AS_FAILS: 1\n          TEST_SSL: 1\n        run: $GITHUB_WORKSPACE/test.sh\n\n      - name: Run tests under valgrind\n        env:\n          SKIPS_AS_FAILS: 1\n          TEST_SSL: 1\n          TEST_PREFIX: valgrind --error-exitcode=99 --track-origins=yes --leak-check=full\n        run: $GITHUB_WORKSPACE/test.sh\n\n  freebsd:\n    runs-on: macos-12\n    name:  FreeBSD\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Build in FreeBSD\n        uses: vmactions/freebsd-vm@v0\n        with:\n          prepare: pkg install -y gmake cmake\n          run: |\n            mkdir build && cd build && cmake .. && make && cd ..\n            gmake\n\n  macos:\n    name: macOS\n    runs-on: macos-latest\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Install dependencies\n        run: |\n          brew install openssl redis@7.0\n          brew link redis@7.0 --force\n\n      - name: Build hiredis\n        run: USE_SSL=1 make\n\n      - name: Run tests\n        env:\n          TEST_SSL: 1\n        run: $GITHUB_WORKSPACE/test.sh\n\n  windows:\n    name: Windows\n    runs-on: windows-latest\n    steps:\n      - uses: actions/checkout@v3\n\n      - name: Install dependencies\n        run: |\n          choco install -y ninja memurai-developer\n\n      - uses: ilammy/msvc-dev-cmd@v1\n      - name: Build hiredis\n        run: |\n          mkdir build && cd build\n          cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_EXAMPLES=ON\n          ninja -v\n\n      - name: Run tests\n        run: |\n          ./build/hiredis-test.exe\n\n      - name: Install Cygwin Action\n        uses: cygwin/cygwin-install-action@v2\n        with:\n          packages: make git gcc-core\n\n      - name: Build in cygwin\n        env:\n          HIREDIS_PATH: ${{ github.workspace }}\n        run: |\n          make clean && make\n"
  },
  {
    "path": "deps/hiredis/.github/workflows/release-drafter.yml",
    "content": "name: Release Drafter\n\non:\n  push:\n    # branches to consider in the event; optional, defaults to all\n    branches:\n      - master\n\njobs:\n  update_release_draft:\n    runs-on: ubuntu-latest\n    steps:\n      # Drafts your next Release notes as Pull Requests are merged into \"master\"\n      - uses: release-drafter/release-drafter@v5\n        with:\n          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml\n           config-name: release-drafter-config.yml\n        env:\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n"
  },
  {
    "path": "deps/hiredis/.github/workflows/test.yml",
    "content": "name: C/C++ CI\n\non:\n  push:\n    branches: [ master ]\n  pull_request:\n    branches: [ master ]\n\njobs:\n  full-build:\n    name: Build all, plus default examples, run tests against redis\n    runs-on: ubuntu-latest\n    env:\n      # the docker image used by the test.sh\n      REDIS_DOCKER: redis:alpine\n\n    steps:\n    - name: Install prerequisites\n      run: sudo apt-get update && sudo apt-get install -y libev-dev libevent-dev libglib2.0-dev libssl-dev valgrind\n    - uses: actions/checkout@v3\n    - name: Run make\n      run: make all examples\n    - name: Run unittests\n      run: make check\n    - name: Run tests with valgrind\n      env:\n        TEST_PREFIX: valgrind --error-exitcode=100\n        SKIPS_ARG: --skip-throughput\n      run: make check\n\n  build-32-bit:\n    name: Build and test minimal 32 bit linux\n    runs-on: ubuntu-latest\n    steps:\n    - name: Install prerequisites\n      run: sudo apt-get update && sudo apt-get install gcc-multilib\n    - uses: actions/checkout@v3\n    - name: Run make\n      run: make all\n      env:\n        PLATFORM_FLAGS: -m32\n    - name: Run unittests\n      env:\n        REDIS_DOCKER: redis:alpine\n      run: make check\n\n  build-arm:\n    name: Cross-compile and test arm linux with Qemu\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        include:\n          - name: arm\n            toolset: arm-linux-gnueabi\n            emulator: qemu-arm\n          - name: aarch64\n            toolset: aarch64-linux-gnu\n            emulator: qemu-aarch64\n\n    steps:\n    - name: Install qemu\n      if: matrix.emulator\n      run: sudo apt-get update && sudo apt-get install -y qemu-user\n    - name: Install platform toolset\n      if: matrix.toolset\n      run: sudo apt-get install -y gcc-${{matrix.toolset}}\n    - uses: actions/checkout@v3\n    - name: Run make\n      run: make all\n      env:\n        CC: ${{matrix.toolset}}-gcc\n        AR: ${{matrix.toolset}}-ar\n    - name: Run 
unittests\n      env:\n        REDIS_DOCKER: redis:alpine\n        TEST_PREFIX: ${{matrix.emulator}} -L /usr/${{matrix.toolset}}/\n      run: make check\n\n  build-windows:\n    name: Build and test on windows 64 bit Intel\n    runs-on: windows-latest\n    steps:\n      - uses: microsoft/setup-msbuild@v1.0.2\n      - uses: actions/checkout@v3\n      - name: Run CMake (shared lib)\n        run: cmake -Wno-dev CMakeLists.txt\n      - name: Build hiredis (shared lib)\n        run: MSBuild hiredis.vcxproj /p:Configuration=Debug\n      - name: Run CMake (static lib)\n        run: cmake -Wno-dev CMakeLists.txt -DBUILD_SHARED_LIBS=OFF\n      - name: Build hiredis (static lib)\n        run: MSBuild hiredis.vcxproj /p:Configuration=Debug\n      - name: Build hiredis-test\n        run: MSBuild hiredis-test.vcxproj /p:Configuration=Debug\n      # use memurai, redis compatible server, since it is easy to install.  Can't\n      # install official redis containers on the windows runner\n      - name: Install Memurai redis server\n        run: choco install -y memurai-developer.install\n      - name: Run tests\n        run: Debug\\hiredis-test.exe\n"
  },
  {
    "path": "deps/hiredis/.gitignore",
    "content": "/hiredis-test\n/examples/hiredis-example*\n/*.o\n/*.so\n/*.dylib\n/*.a\n/*.pc\n*.dSYM\ntags\n"
  },
  {
    "path": "deps/hiredis/.travis.yml",
    "content": "language: c\ncompiler:\n  - gcc\n  - clang\n\nos:\n  - linux\n  - osx\n\ndist: bionic\n\nbranches:\n  only:\n    - staging\n    - trying\n    - master\n    - /^release\\/.*$/\n\ninstall:\n    - if [ \"$TRAVIS_COMPILER\" != \"mingw\" ]; then\n        wget https://github.com/redis/redis/archive/6.0.6.tar.gz;\n        tar -xzvf 6.0.6.tar.gz;\n        pushd redis-6.0.6 && BUILD_TLS=yes make && export PATH=$PWD/src:$PATH && popd;\n      fi;\n\nbefore_script:\n    - if [ \"$TRAVIS_OS_NAME\" == \"osx\" ]; then\n        curl -O https://distfiles.macports.org/MacPorts/MacPorts-2.6.2-10.13-HighSierra.pkg;\n        sudo installer -pkg MacPorts-2.6.2-10.13-HighSierra.pkg -target /;\n        export PATH=$PATH:/opt/local/bin && sudo port -v selfupdate;\n        sudo port -N install openssl redis;\n      fi;\n\naddons:\n  apt:\n    packages:\n    - libc6-dbg\n    - libc6-dev\n    - libc6:i386\n    - libc6-dev-i386\n    - libc6-dbg:i386\n    - gcc-multilib\n    - g++-multilib\n    - libssl-dev\n    - libssl-dev:i386\n    - valgrind\n\nenv:\n  - BITS=\"32\"\n  - BITS=\"64\"\n\nscript:\n  - EXTRA_CMAKE_OPTS=\"-DENABLE_EXAMPLES:BOOL=ON -DENABLE_SSL:BOOL=ON -DENABLE_SSL_TESTS:BOOL=ON\";\n    if [ \"$TRAVIS_OS_NAME\" == \"osx\" ]; then\n      if [ \"$BITS\" == \"32\" ]; then\n        CFLAGS=\"-m32 -Werror\";\n        CXXFLAGS=\"-m32 -Werror\";\n        LDFLAGS=\"-m32\";\n        EXTRA_CMAKE_OPTS=;\n      else\n        CFLAGS=\"-Werror\";\n        CXXFLAGS=\"-Werror\";\n      fi;\n    else\n      TEST_PREFIX=\"valgrind --track-origins=yes --leak-check=full\";\n      if [ \"$BITS\" == \"32\" ]; then\n        CFLAGS=\"-m32 -Werror\";\n        CXXFLAGS=\"-m32 -Werror\";\n        LDFLAGS=\"-m32\";\n        EXTRA_CMAKE_OPTS=;\n      else\n        CFLAGS=\"-Werror\";\n        CXXFLAGS=\"-Werror\";\n      fi;\n    fi;\n    export CFLAGS CXXFLAGS LDFLAGS TEST_PREFIX EXTRA_CMAKE_OPTS\n  - make && make clean;\n    if [ \"$TRAVIS_OS_NAME\" == \"osx\" ]; then\n      if [ \"$BITS\" 
== \"64\" ]; then\n        OPENSSL_PREFIX=\"$(ls -d /usr/local/Cellar/openssl@1.1/*)\" USE_SSL=1 make;\n      fi;\n    else\n      USE_SSL=1 make;\n    fi;\n  - mkdir build/ && cd build/\n  - cmake .. ${EXTRA_CMAKE_OPTS}\n  - make VERBOSE=1\n  - if [ \"$BITS\" == \"64\" ]; then\n      TEST_SSL=1 SKIPS_AS_FAILS=1 ctest -V;\n    else\n      SKIPS_AS_FAILS=1 ctest -V;\n    fi;\n\njobs:\n  include:\n    # Windows MinGW cross compile on Linux\n    - os: linux\n      dist: xenial\n      compiler: mingw\n      addons:\n        apt:\n          packages:\n            - ninja-build\n            - gcc-mingw-w64-x86-64\n            - g++-mingw-w64-x86-64\n      script:\n        - mkdir build && cd build\n        - CC=x86_64-w64-mingw32-gcc CXX=x86_64-w64-mingw32-g++ cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DCMAKE_BUILD_WITH_INSTALL_RPATH=on\n        - ninja -v\n\n    # Windows MSVC 2017\n    - os: windows\n      compiler: msvc\n      env:\n        - MATRIX_EVAL=\"CC=cl.exe && CXX=cl.exe\"\n      before_install:\n        - eval \"${MATRIX_EVAL}\"\n      install:\n        - choco install ninja\n        - choco install -y memurai-developer\n      script:\n        - mkdir build && cd build\n        - cmd.exe //C 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\BuildTools\\VC\\Auxiliary\\Build\\vcvarsall.bat' amd64 '&&'\n          cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DENABLE_EXAMPLES=ON '&&' ninja -v\n        - ./hiredis-test.exe\n"
  },
  {
    "path": "deps/hiredis/CHANGELOG.md",
    "content": "## [1.2.0](https://github.com/redis/hiredis/tree/v1.2.0) - (2023-06-04)\n\nAnnouncing Hiredis v1.2.0 with with new adapters, and a great many bug fixes.\n\n## 🚀 New Features\n\n- Add sdevent adapter @Oipo (#1144)\n- Allow specifying the keepalive interval @michael-grunder (#1168)\n- Add RedisModule adapter @tezc (#1182)\n- Helper for setting TCP_USER_TIMEOUT socket option @zuiderkwast (#1188)\n\n## 🐛 Bug Fixes\n\n- Fix a typo in b6a052f. @yossigo (#1190)\n- Fix wincrypt symbols conflict @hudayou (#1151)\n- Don't attempt to set a timeout if we are in an error state. @michael-grunder (#1180)\n- Accept -nan per the RESP3 spec recommendation. @michael-grunder (#1178)\n- Fix colliding option values @zuiderkwast (#1172)\n- Ensure functionality without `_MSC_VER` definition @windyakin (#1194)\n\n## 🧰 Maintenance\n\n- Add a test for the TCP_USER_TIMEOUT option. @michael-grunder (#1192)\n- Add -Werror as a default. @yossigo (#1193)\n- CI: Update homebrew Redis version. @yossigo (#1191)\n- Fix typo in makefile. @michael-grunder (#1179)\n- Write a version file for the CMake package @Neverlord (#1165)\n- CMakeLists.txt: respect BUILD_SHARED_LIBS @ffontaine (#1147)\n- Cmake static or shared @autoantwort (#1160)\n- fix typo @tillkruss (#1153)\n- Add a test ensuring we don't clobber connection error. 
@michael-grunder (#1181)\n- Search for openssl on macOS @michael-grunder (#1169)\n\n\n## Contributors\nWe'd like to thank all the contributors who worked on this release!\n\n<a href=\"https://github.com/neverlord\"><img src=\"https://github.com/neverlord.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/Oipo\"><img src=\"https://github.com/Oipo.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/autoantwort\"><img src=\"https://github.com/autoantwort.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/ffontaine\"><img src=\"https://github.com/ffontaine.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/hudayou\"><img src=\"https://github.com/hudayou.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/michael-grunder\"><img src=\"https://github.com/michael-grunder.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/postgraph\"><img src=\"https://github.com/postgraph.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/tezc\"><img src=\"https://github.com/tezc.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/tillkruss\"><img src=\"https://github.com/tillkruss.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/vityafx\"><img src=\"https://github.com/vityafx.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/windyakin\"><img src=\"https://github.com/windyakin.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/yossigo\"><img src=\"https://github.com/yossigo.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/zuiderkwast\"><img src=\"https://github.com/zuiderkwast.png\" width=\"32\" height=\"32\"></a>\n\n## [1.1.0](https://github.com/redis/hiredis/tree/v1.1.0) - (2022-11-15)\n\nAnnouncing Hiredis v1.1.0 GA with better SSL convenience, new async adapters and a great many bug fixes.\n\n**NOTE**:  Hiredis can now return `nan` in addition to `-inf` and `inf` when returning a 
`REDIS_REPLY_DOUBLE`.\n\n## 🐛 Bug Fixes\n\n- Add support for nan in RESP3 double [@filipecosta90](https://github.com/filipecosta90)\n  ([\\#1133](https://github.com/redis/hiredis/pull/1133))\n\n## 🧰 Maintenance\n\n- Add an example that calls redisCommandArgv [@michael-grunder](https://github.com/michael-grunder)\n  ([\\#1140](https://github.com/redis/hiredis/pull/1140))\n- fix flag reference [@pata00](https://github.com/pata00) ([\\#1136](https://github.com/redis/hiredis/pull/1136))\n- Make freeing a NULL redisAsyncContext a no op. [@michael-grunder](https://github.com/michael-grunder)\n  ([\\#1135](https://github.com/redis/hiredis/pull/1135))\n- CI updates ([@bjosv](https://github.com/redis/bjosv) ([\\#1139](https://github.com/redis/hiredis/pull/1139))\n\n\n## Contributors\nWe'd like to thank all the contributors who worked on this release!\n\n<a href=\"https://github.com/bjsov\"><img src=\"https://github.com/bjosv.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/filipecosta90\"><img src=\"https://github.com/filipecosta90.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/michael-grunder\"><img src=\"https://github.com/michael-grunder.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/pata00\"><img src=\"https://github.com/pata00.png\" width=\"32\" height=\"32\"></a>\n\n## [1.1.0-rc1](https://github.com/redis/hiredis/tree/v1.1.0-rc1) - (2022-11-06)\n\nAnnouncing Hiredis v1.1.0-rc1, with better SSL convenience, new async adapters, and a great many bug fixes.\n\n## 🚀 New Features\n\n- Add possibility to prefer IPv6, IPv4 or unspecified [@zuiderkwast](https://github.com/zuiderkwast)\n  ([\\#1096](https://github.com/redis/hiredis/pull/1096))\n- Add adapters/libhv [@ithewei](https://github.com/ithewei) ([\\#904](https://github.com/redis/hiredis/pull/904))\n- Add timeout support to libhv adapter. 
[@michael-grunder](https://github.com/michael-grunder) ([\\#1109](https://github.com/redis/hiredis/pull/1109))\n- set default SSL verification path [@adobeturchenko](https://github.com/adobeturchenko) ([\\#928](https://github.com/redis/hiredis/pull/928))\n- Introduce .close method for redisContextFuncs [@pizhenwei](https://github.com/pizhenwei) ([\\#1094](https://github.com/redis/hiredis/pull/1094))\n- Make it possible to set SSL verify mode [@stanhu](https://github.com/stanhu) ([\\#1085](https://github.com/redis/hiredis/pull/1085))\n- Polling adapter and example [@kristjanvalur](https://github.com/kristjanvalur) ([\\#932](https://github.com/redis/hiredis/pull/932))\n- Unsubscribe handling in async [@bjosv](https://github.com/bjosv) ([\\#1047](https://github.com/redis/hiredis/pull/1047))\n- Add timeout support for libuv adapter [@MichaelSuen-thePointer](https://github.com/MichaelSuen-thePointer) ([\\#1016](https://github.com/redis/hiredis/pull/1016))\n\n## 🐛 Bug Fixes\n\n- Update for MinGW cross compile [@bit0fun](https://github.com/bit0fun) ([\\#1127](https://github.com/redis/hiredis/pull/1127))\n- fixed CPP build error with adapters/libhv.h [@mtdxc](https://github.com/mtdxc) ([\\#1125](https://github.com/redis/hiredis/pull/1125))\n- Fix protocol error\n  [@michael-grunder](https://github.com/michael-grunder),\n  [@mtuleika-appcast](https://github.com/mtuleika-appcast) ([\\#1106](https://github.com/redis/hiredis/pull/1106))\n- Use a windows specific keepalive function. [@michael-grunder](https://github.com/michael-grunder) ([\\#1104](https://github.com/redis/hiredis/pull/1104))\n- Fix CMake config path on Linux. 
[@xkszltl](https://github.com/xkszltl) ([\\#989](https://github.com/redis/hiredis/pull/989))\n- Fix potential fault at createDoubleObject [@afcidk](https://github.com/afcidk) ([\\#964](https://github.com/redis/hiredis/pull/964))\n- Fix some undefined behavior [@jengab](https://github.com/jengab) ([\\#1091](https://github.com/redis/hiredis/pull/1091))\n- Copy OOM errors to redisAsyncContext when finding subscribe callback [@bjosv](https://github.com/bjosv) ([\\#1090](https://github.com/redis/hiredis/pull/1090))\n- Maintain backward compatibility with our onConnect callback. [@michael-grunder](https://github.com/michael-grunder) ([\\#1087](https://github.com/redis/hiredis/pull/1087))\n- Fix PUSH handler tests for Redis >= 7.0.5 [@michael-grunder](https://github.com/michael-grunder) ([\\#1121](https://github.com/redis/hiredis/pull/1121))\n- fix heap-buffer-overflow [@zhangtaoXT5](https://github.com/zhangtaoXT5) ([\\#957](https://github.com/redis/hiredis/pull/957))\n- Fix heap-buffer-overflow issue in redisvFormatCommand [@bjosv](https://github.com/bjosv) ([\\#1097](https://github.com/redis/hiredis/pull/1097))\n- Polling adapter requires sockcompat.h [@michael-grunder](https://github.com/michael-grunder) ([\\#1095](https://github.com/redis/hiredis/pull/1095))\n- Illumos test fixes, error message difference for bad hostname test. 
[@devnexen](https://github.com/devnexen) ([\\#901](https://github.com/redis/hiredis/pull/901))\n- Remove semicolon after do-while in \\_EL\\_CLEANUP [@sundb](https://github.com/sundb) ([\\#905](https://github.com/redis/hiredis/pull/905))\n- Stability: Support calling redisAsyncCommand and redisAsyncDisconnect from the onConnected callback [@kristjanvalur](https://github.com/kristjanvalur)\n  ([\\#931](https://github.com/redis/hiredis/pull/931))\n- Fix async connect on Windows [@kristjanvalur](https://github.com/kristjanvalur) ([\\#1073](https://github.com/redis/hiredis/pull/1073))\n- Fix tests so they work for Redis 7.0 [@michael-grunder](https://github.com/michael-grunder) ([\\#1072](https://github.com/redis/hiredis/pull/1072))\n- Fix warnings on Win64 [@orgads](https://github.com/orgads) ([\\#1058](https://github.com/redis/hiredis/pull/1058))\n- Handle push notifications before or after reply. [@yossigo](https://github.com/yossigo) ([\\#1062](https://github.com/redis/hiredis/pull/1062))\n- Update hiredis sds with improvements found in redis [@bjosv](https://github.com/bjosv) ([\\#1045](https://github.com/redis/hiredis/pull/1045))\n- Avoid incorrect call to the previous reply's callback [@bjosv](https://github.com/bjosv) ([\\#1040](https://github.com/redis/hiredis/pull/1040))\n- fix building on AIX and SunOS [\\#1031](https://github.com/redis/hiredis/pull/1031) ([@scddev](https://github.com/scddev))\n- Allow sending commands after sending an unsubscribe [@bjosv](https://github.com/bjosv) ([\\#1036](https://github.com/redis/hiredis/pull/1036))\n- Correction for command timeout during pubsub [@bjosv](https://github.com/bjosv) ([\\#1038](https://github.com/redis/hiredis/pull/1038))\n- Fix adapters/libevent.h compilation for 64-bit Windows [@pbtummillo](https://github.com/pbtummillo) ([\\#937](https://github.com/redis/hiredis/pull/937))\n- Fix integer overflow when format command larger than 4GB [@sundb](https://github.com/sundb) 
([\\#1030](https://github.com/redis/hiredis/pull/1030))\n- Handle array response during subscribe in RESP3 [@bjosv](https://github.com/bjosv) ([\\#1014](https://github.com/redis/hiredis/pull/1014))\n- Support PING while subscribing (RESP2) [@bjosv](https://github.com/bjosv) ([\\#1027](https://github.com/redis/hiredis/pull/1027))\n\n## 🧰 Maintenance\n\n- CI fixes in preparation of release [@michael-grunder](https://github.com/michael-grunder) ([\\#1130](https://github.com/redis/hiredis/pull/1130))\n- Add do while(0) protection for macros [@afcidk](https://github.com/afcidk) ([\\#959](https://github.com/redis/hiredis/pull/959))\n- Fixup of PR734: Coverage of hiredis.c [@bjosv](https://github.com/bjosv) ([\\#1124](https://github.com/redis/hiredis/pull/1124))\n- CMake corrections for building on Windows [@bjosv](https://github.com/bjosv) ([\\#1122](https://github.com/redis/hiredis/pull/1122))\n- Install on windows fixes [@bjosv](https://github.com/bjosv) ([\\#1117](https://github.com/redis/hiredis/pull/1117))\n- Add libhv example to our standard Makefile [@michael-grunder](https://github.com/michael-grunder) ([\\#1108](https://github.com/redis/hiredis/pull/1108))\n- Additional include directory given by pkg-config [@bjosv](https://github.com/bjosv) ([\\#1118](https://github.com/redis/hiredis/pull/1118))\n- Use __attribute__ when building with Clang on Windows [@bjosv](https://github.com/bjosv) ([\\#1115](https://github.com/redis/hiredis/pull/1115))\n- Minor refactor [@michael-grunder](https://github.com/michael-grunder) ([\\#1110](https://github.com/redis/hiredis/pull/1110))\n- Fix pkgconfig result for hiredis_ssl [@bjosv](https://github.com/bjosv) ([\\#1107](https://github.com/redis/hiredis/pull/1107))\n- Update documentation to explain redisConnectWithOptions. 
[@michael-grunder](https://github.com/michael-grunder) ([\\#1099](https://github.com/redis/hiredis/pull/1099))\n- uvadapter: reduce number of uv_poll_start calls [@noxiouz](https://github.com/noxiouz) ([\\#1098](https://github.com/redis/hiredis/pull/1098))\n- Regression test for off-by-one parsing error [@bugwz](https://github.com/bugwz) ([\\#1092](https://github.com/redis/hiredis/pull/1092))\n- CMake: remove dict.c from hiredis_sources [@Lipraxde](https://github.com/Lipraxde) ([\\#1055](https://github.com/redis/hiredis/pull/1055))\n- Do store command timeout in the context for redisSetTimeout [@catterer](https://github.com/catterer) ([\\#593](https://github.com/redis/hiredis/pull/593), [\\#1093](https://github.com/redis/hiredis/pull/1093))\n- Add GitHub Actions CI workflow for hiredis: Arm, Arm64, 386, windows. [@kristjanvalur](https://github.com/kristjanvalur) ([\\#943](https://github.com/redis/hiredis/pull/943))\n- CI: bump macOS runner version [@SukkaW](https://github.com/SukkaW) ([\\#1079](https://github.com/redis/hiredis/pull/1079))\n- Support for generating release notes [@chayim](https://github.com/chayim) ([\\#1083](https://github.com/redis/hiredis/pull/1083))\n- Improve example for SSL initialization in README.md [@stanhu](https://github.com/stanhu) ([\\#1084](https://github.com/redis/hiredis/pull/1084))\n- Fix README typos [@bjosv](https://github.com/bjosv) ([\\#1080](https://github.com/redis/hiredis/pull/1080))\n- fix cmake version [@smmir-cent](https://github.com/smmir-cent) ([\\#1050](https://github.com/redis/hiredis/pull/1050))\n- Use the same name for static and shared libraries [@orgads](https://github.com/orgads) ([\\#1057](https://github.com/redis/hiredis/pull/1057))\n- Embed debug information in windows static .lib file [@kristjanvalur](https://github.com/kristjanvalur) ([\\#1054](https://github.com/redis/hiredis/pull/1054))\n- Improved async documentation [@kristjanvalur](https://github.com/kristjanvalur) 
([\\#1074](https://github.com/redis/hiredis/pull/1074))\n- Use official repository for redis package. [@yossigo](https://github.com/yossigo) ([\\#1061](https://github.com/redis/hiredis/pull/1061))\n- Whitelist hiredis repo path in cygwin [@michael-grunder](https://github.com/michael-grunder) ([\\#1063](https://github.com/redis/hiredis/pull/1063))\n- CentOS 8 is EOL, switch to RockyLinux [@michael-grunder](https://github.com/michael-grunder) ([\\#1046](https://github.com/redis/hiredis/pull/1046))\n- CMakeLists.txt: allow building without a C++ compiler [@ffontaine](https://github.com/ffontaine) ([\\#872](https://github.com/redis/hiredis/pull/872))\n- Makefile: move SSL options into a block and refine rules [@pizhenwei](https://github.com/pizhenwei) ([\\#997](https://github.com/redis/hiredis/pull/997))\n- Update CMakeLists.txt for more portability [@EricDeng1001](https://github.com/EricDeng1001) ([\\#1005](https://github.com/redis/hiredis/pull/1005))\n- FreeBSD build fixes + CI [@michael-grunder](https://github.com/michael-grunder) ([\\#1026](https://github.com/redis/hiredis/pull/1026))\n- Add asynchronous test for pubsub using RESP3 [@bjosv](https://github.com/bjosv) ([\\#1012](https://github.com/redis/hiredis/pull/1012))\n- Trigger CI failure when Valgrind issues are found [@bjosv](https://github.com/bjosv) ([\\#1011](https://github.com/redis/hiredis/pull/1011))\n- Move to using make directly in Cygwin [@michael-grunder](https://github.com/michael-grunder) ([\\#1020](https://github.com/redis/hiredis/pull/1020))\n- Add asynchronous API tests [@bjosv](https://github.com/bjosv) ([\\#1010](https://github.com/redis/hiredis/pull/1010))\n- Correcting the build target `coverage` for enabled SSL [@bjosv](https://github.com/bjosv) ([\\#1009](https://github.com/redis/hiredis/pull/1009))\n- GH Actions: Run SSL tests during CI [@bjosv](https://github.com/bjosv) ([\\#1008](https://github.com/redis/hiredis/pull/1008))\n- GH: Actions - Add valgrind and CMake 
[@michael-grunder](https://github.com/michael-grunder) ([\\#1004](https://github.com/redis/hiredis/pull/1004))\n- Add Centos8 tests in GH Actions [@michael-grunder](https://github.com/michael-grunder) ([\\#1001](https://github.com/redis/hiredis/pull/1001))\n- We should run actions on PRs [@michael-grunder](https://github.com/michael-grunder) ([\\#1000](https://github.com/redis/hiredis/pull/1000))\n- Add Cygwin test in GitHub actions [@michael-grunder](https://github.com/michael-grunder) ([\\#999](https://github.com/redis/hiredis/pull/999))\n- Add Windows tests in GitHub actions [@michael-grunder](https://github.com/michael-grunder) ([\\#996](https://github.com/redis/hiredis/pull/996))\n- Switch to GitHub actions [@michael-grunder](https://github.com/michael-grunder) ([\\#995](https://github.com/redis/hiredis/pull/995))\n- Minor refactor of CVE-2021-32765 fix. [@michael-grunder](https://github.com/michael-grunder) ([\\#993](https://github.com/redis/hiredis/pull/993))\n- Remove extra comma from CMake var. 
[@xkszltl](https://github.com/xkszltl) ([\\#988](https://github.com/redis/hiredis/pull/988))\n- Add REDIS\\_OPT\\_PREFER\\_UNSPEC [@michael-grunder](https://github.com/michael-grunder) ([\\#1101](https://github.com/redis/hiredis/pull/1101))\n\n## Contributors\nWe'd like to thank all the contributors who worked on this release!\n\n<a href=\"https://github.com/EricDeng1001\"><img src=\"https://github.com/EricDeng1001.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/Lipraxde\"><img src=\"https://github.com/Lipraxde.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/MichaelSuen-thePointer\"><img src=\"https://github.com/MichaelSuen-thePointer.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/SukkaW\"><img src=\"https://github.com/SukkaW.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/adobeturchenko\"><img src=\"https://github.com/adobeturchenko.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/afcidk\"><img src=\"https://github.com/afcidk.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/bit0fun\"><img src=\"https://github.com/bit0fun.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/bjosv\"><img src=\"https://github.com/bjosv.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/bugwz\"><img src=\"https://github.com/bugwz.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/catterer\"><img src=\"https://github.com/catterer.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/chayim\"><img src=\"https://github.com/chayim.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/devnexen\"><img src=\"https://github.com/devnexen.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/ffontaine\"><img src=\"https://github.com/ffontaine.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/ithewei\"><img src=\"https://github.com/ithewei.png\" width=\"32\" 
height=\"32\"></a>\n<a href=\"https://github.com/jengab\"><img src=\"https://github.com/jengab.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/kristjanvalur\"><img src=\"https://github.com/kristjanvalur.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/michael-grunder\"><img src=\"https://github.com/michael-grunder.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/noxiouz\"><img src=\"https://github.com/noxiouz.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/mtdxc\"><img src=\"https://github.com/mtdxc.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/orgads\"><img src=\"https://github.com/orgads.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/pbtummillo\"><img src=\"https://github.com/pbtummillo.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/pizhenwei\"><img src=\"https://github.com/pizhenwei.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/scddev\"><img src=\"https://github.com/scddev.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/smmir-cent\"><img src=\"https://github.com/smmir-cent.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/stanhu\"><img src=\"https://github.com/stanhu.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/sundb\"><img src=\"https://github.com/sundb.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/vturchenko\"><img src=\"https://github.com/vturchenko.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/xkszltl\"><img src=\"https://github.com/xkszltl.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/yossigo\"><img src=\"https://github.com/yossigo.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/zhangtaoXT5\"><img src=\"https://github.com/zhangtaoXT5.png\" width=\"32\" height=\"32\"></a>\n<a href=\"https://github.com/zuiderkwast\"><img 
src=\"https://github.com/zuiderkwast.png\" width=\"32\" height=\"32\"></a>\n\n## [1.0.2](https://github.com/redis/hiredis/tree/v1.0.2) - (2021-10-07)\n\nAnnouncing Hiredis v1.0.2, which fixes CVE-2021-32765 but returns the SONAME to the correct value of `1.0.0`.\n\n- [Revert SONAME bump](https://github.com/redis/hiredis/commit/d4e6f109a064690cde64765c654e679fea1d3548)\n  ([Michael Grunder](https://github.com/michael-grunder))\n\n## [1.0.1](https://github.com/redis/hiredis/tree/v1.0.1) - (2021-10-04)\n\n<span style=\"color:red\">This release erroneously bumped the SONAME, please use [1.0.2](https://github.com/redis/hiredis/tree/v1.0.2)</span>\n\nAnnouncing Hiredis v1.0.1, a security release fixing CVE-2021-32765\n\n- Fix for [CVE-2021-32765](https://github.com/redis/hiredis/security/advisories/GHSA-hfm9-39pp-55p2)\n  [commit](https://github.com/redis/hiredis/commit/76a7b10005c70babee357a7d0f2becf28ec7ed1e)\n  ([Yossi Gottlieb](https://github.com/yossigo))\n\n_Thanks to [Yossi Gottlieb](https://github.com/yossigo) for the security fix and to [Microsoft Security Vulnerability Research](https://www.microsoft.com/en-us/msrc/msvr) for finding the bug._ :sparkling_heart:\n\n## [1.0.0](https://github.com/redis/hiredis/tree/v1.0.0) - (2020-08-03)\n\nAnnouncing Hiredis v1.0.0, which adds support for RESP3, SSL connections, allocator injection, and better Windows support! :tada:\n\n_A big thanks to everyone who helped with this release.  
The following list includes everyone who contributed at least five lines, sorted by lines contributed._ :sparkling_heart:\n\n[Michael Grunder](https://github.com/michael-grunder), [Yossi Gottlieb](https://github.com/yossigo),\n[Mark Nunberg](https://github.com/mnunberg), [Marcus Geelnard](https://github.com/mbitsnbites),\n[Justin Brewer](https://github.com/justinbrewer), [Valentino Geron](https://github.com/valentinogeron),\n[Minun Dragonation](https://github.com/dragonation), [Omri Steiner](https://github.com/OmriSteiner),\n[Sangmoon Yi](https://github.com/jman-krafton), [Jinjiazh](https://github.com/jinjiazhang),\n[Odin Hultgren Van Der Horst](https://github.com/Miniwoffer), [Muhammad Zahalqa](https://github.com/tryfinally),\n[Nick Rivera](https://github.com/heronr), [Qi Yang](https://github.com/movebean),\n[kevin1018](https://github.com/kevin1018)\n\n[Full Changelog](https://github.com/redis/hiredis/compare/v0.14.1...v1.0.0)\n\n**BREAKING CHANGES**:\n\n* `redisOptions` now has two timeout fields.  One for connecting, and one for commands.  If you're presently using `options->timeout` you will need to change it to use `options->connect_timeout`. (See [example](https://github.com/redis/hiredis/commit/38b5ae543f5c99eb4ccabbe277770fc6bc81226f#diff-86ba39d37aa829c8c82624cce4f049fbL36))\n\n* Bulk and multi-bulk lengths less than -1 or greater than `LLONG_MAX` are now protocol errors. This is consistent\n  with the RESP specification. 
On 32-bit platforms, the upper bound is lowered to `SIZE_MAX`.\n\n* `redisReplyObjectFunctions.createArray` now takes `size_t` for its length parameter.\n\n**New features:**\n- Support for RESP3\n  [\\#697](https://github.com/redis/hiredis/pull/697),\n  [\\#805](https://github.com/redis/hiredis/pull/805),\n  [\\#819](https://github.com/redis/hiredis/pull/819),\n  [\\#841](https://github.com/redis/hiredis/pull/841)\n  ([Yossi Gottlieb](https://github.com/yossigo), [Michael Grunder](https://github.com/michael-grunder))\n- Support for SSL connections\n  [\\#645](https://github.com/redis/hiredis/pull/645),\n  [\\#699](https://github.com/redis/hiredis/pull/699),\n  [\\#702](https://github.com/redis/hiredis/pull/702),\n  [\\#708](https://github.com/redis/hiredis/pull/708),\n  [\\#711](https://github.com/redis/hiredis/pull/711),\n  [\\#821](https://github.com/redis/hiredis/pull/821),\n  [more](https://github.com/redis/hiredis/pulls?q=is%3Apr+is%3Amerged+SSL)\n  ([Mark Nunberg](https://github.com/mnunberg), [Yossi Gottlieb](https://github.com/yossigo))\n- Run-time allocator injection\n  [\\#800](https://github.com/redis/hiredis/pull/800)\n  ([Michael Grunder](https://github.com/michael-grunder))\n- Improved Windows support (including MinGW and Windows CI)\n  [\\#652](https://github.com/redis/hiredis/pull/652),\n  [\\#663](https://github.com/redis/hiredis/pull/663)\n  ([Marcus Geelnard](https://www.bitsnbites.eu/author/m/))\n- Adds support for distinct connect and command timeouts\n  [\\#839](https://github.com/redis/hiredis/pull/839),\n  [\\#829](https://github.com/redis/hiredis/pull/829)\n  ([Valentino Geron](https://github.com/valentinogeron))\n- Add generic pointer and destructor to `redisContext` that users can use for context.\n  [\\#855](https://github.com/redis/hiredis/pull/855)\n  ([Michael Grunder](https://github.com/michael-grunder))\n\n**Closed issues (that involved code changes):**\n\n- Makefile does not install TLS libraries  
[\\#809](https://github.com/redis/hiredis/issues/809)\n- redisConnectWithOptions should not set command timeout [\\#722](https://github.com/redis/hiredis/issues/722), [\\#829](https://github.com/redis/hiredis/pull/829) ([valentinogeron](https://github.com/valentinogeron))\n- Fix integer overflow in `sdsrange` [\\#827](https://github.com/redis/hiredis/issues/827)\n- INFO & CLUSTER commands failed when using RESP3 [\\#802](https://github.com/redis/hiredis/issues/802)\n- Windows compatibility patches [\\#687](https://github.com/redis/hiredis/issues/687), [\\#838](https://github.com/redis/hiredis/issues/838), [\\#842](https://github.com/redis/hiredis/issues/842)\n- RESP3 PUSH messages incorrectly use pending callback [\\#825](https://github.com/redis/hiredis/issues/825)\n- Asynchronous PSUBSCRIBE command fails when using RESP3 [\\#815](https://github.com/redis/hiredis/issues/815)\n- New SSL API [\\#804](https://github.com/redis/hiredis/issues/804), [\\#813](https://github.com/redis/hiredis/issues/813)\n- Hard-coded limit of nested reply depth [\\#794](https://github.com/redis/hiredis/issues/794)\n- Fix TCP_NODELAY in Windows/OSX [\\#679](https://github.com/redis/hiredis/issues/679), [\\#690](https://github.com/redis/hiredis/issues/690), [\\#779](https://github.com/redis/hiredis/issues/779), [\\#785](https://github.com/redis/hiredis/issues/785)\n- Added timers to libev adapter.  
[\\#778](https://github.com/redis/hiredis/issues/778), [\\#795](https://github.com/redis/hiredis/pull/795)\n- Initialization discards const qualifier [\\#777](https://github.com/redis/hiredis/issues/777)\n- \\[BUG\\]\\[MinGW64\\] Error setting socket timeout  [\\#775](https://github.com/redis/hiredis/issues/775)\n- undefined reference to hi_malloc [\\#769](https://github.com/redis/hiredis/issues/769)\n- hiredis pkg-config file incorrectly ignores multiarch libdir spec'n [\\#767](https://github.com/redis/hiredis/issues/767)\n- Don't use -G to build shared object on Solaris [\\#757](https://github.com/redis/hiredis/issues/757)\n- error when make USE\\_SSL=1 [\\#748](https://github.com/redis/hiredis/issues/748)\n- Allow to change SSL Mode [\\#646](https://github.com/redis/hiredis/issues/646)\n- hiredis/adapters/libevent.h memleak [\\#618](https://github.com/redis/hiredis/issues/618)\n- redisLibuvPoll crash when server closes the connection [\\#545](https://github.com/redis/hiredis/issues/545)\n- about redisAsyncDisconnect question [\\#518](https://github.com/redis/hiredis/issues/518)\n- hiredis adapters libuv error for help [\\#508](https://github.com/redis/hiredis/issues/508)\n- API/ABI changes analysis [\\#506](https://github.com/redis/hiredis/issues/506)\n- Memory leak patch in Redis [\\#502](https://github.com/redis/hiredis/issues/502)\n- Remove the depth limitation [\\#421](https://github.com/redis/hiredis/issues/421)\n\n**Merged pull requests:**\n\n- Move SSL management to a distinct private pointer [\\#855](https://github.com/redis/hiredis/pull/855) ([michael-grunder](https://github.com/michael-grunder))\n- Move include to sockcompat.h to maintain style [\\#850](https://github.com/redis/hiredis/pull/850) ([michael-grunder](https://github.com/michael-grunder))\n- Remove erroneous tag and add license to push example [\\#849](https://github.com/redis/hiredis/pull/849) ([michael-grunder](https://github.com/michael-grunder))\n- fix windows compiling with mingw 
[\\#848](https://github.com/redis/hiredis/pull/848) ([rmalizia44](https://github.com/rmalizia44))\n- Some Windows quality of life improvements. [\\#846](https://github.com/redis/hiredis/pull/846) ([michael-grunder](https://github.com/michael-grunder))\n- Use \\_WIN32 define instead of WIN32 [\\#845](https://github.com/redis/hiredis/pull/845) ([michael-grunder](https://github.com/michael-grunder))\n- Non Linux CI fixes [\\#844](https://github.com/redis/hiredis/pull/844) ([michael-grunder](https://github.com/michael-grunder))\n- Resp3 oob push support [\\#841](https://github.com/redis/hiredis/pull/841) ([michael-grunder](https://github.com/michael-grunder))\n- fix \\#785: defer TCP\\_NODELAY in async tcp connections [\\#836](https://github.com/redis/hiredis/pull/836) ([OmriSteiner](https://github.com/OmriSteiner))\n- sdsrange overflow fix [\\#830](https://github.com/redis/hiredis/pull/830) ([michael-grunder](https://github.com/michael-grunder))\n- Use explicit pointer casting for c++ compatibility [\\#826](https://github.com/redis/hiredis/pull/826) ([aureus1](https://github.com/aureus1))\n- Document allocator injection and completeness fix in test.c [\\#824](https://github.com/redis/hiredis/pull/824) ([michael-grunder](https://github.com/michael-grunder))\n- Use unique names for allocator struct members [\\#823](https://github.com/redis/hiredis/pull/823) ([michael-grunder](https://github.com/michael-grunder))\n- New SSL API to replace redisSecureConnection\\(\\). [\\#821](https://github.com/redis/hiredis/pull/821) ([yossigo](https://github.com/yossigo))\n- Add logic to handle RESP3 push messages [\\#819](https://github.com/redis/hiredis/pull/819) ([michael-grunder](https://github.com/michael-grunder))\n- Use standard isxdigit instead of custom helper function. [\\#814](https://github.com/redis/hiredis/pull/814) ([tryfinally](https://github.com/tryfinally))\n- Fix missing SSL build/install options. 
[\\#812](https://github.com/redis/hiredis/pull/812) ([yossigo](https://github.com/yossigo))\n- Add link to ABI tracker [\\#808](https://github.com/redis/hiredis/pull/808) ([michael-grunder](https://github.com/michael-grunder))\n- Resp3 verbatim string support [\\#805](https://github.com/redis/hiredis/pull/805) ([michael-grunder](https://github.com/michael-grunder))\n- Allow users to replace allocator and handle OOM everywhere. [\\#800](https://github.com/redis/hiredis/pull/800) ([michael-grunder](https://github.com/michael-grunder))\n- Remove nested depth limitation. [\\#797](https://github.com/redis/hiredis/pull/797) ([michael-grunder](https://github.com/michael-grunder))\n- Attempt to fix compilation on Solaris [\\#796](https://github.com/redis/hiredis/pull/796) ([michael-grunder](https://github.com/michael-grunder))\n- Support timeouts in libev adapter [\\#795](https://github.com/redis/hiredis/pull/795) ([michael-grunder](https://github.com/michael-grunder))\n- Fix pkgconfig when installing to a custom lib dir [\\#793](https://github.com/redis/hiredis/pull/793) ([michael-grunder](https://github.com/michael-grunder))\n- Fix USE\\_SSL=1 make/cmake on OSX and CMake tests [\\#789](https://github.com/redis/hiredis/pull/789) ([michael-grunder](https://github.com/michael-grunder))\n- Use correct libuv call on Windows [\\#784](https://github.com/redis/hiredis/pull/784) ([michael-grunder](https://github.com/michael-grunder))\n- Added CMake package config and fixed hiredis\\_ssl on Windows [\\#783](https://github.com/redis/hiredis/pull/783) ([michael-grunder](https://github.com/michael-grunder))\n- CMake: Set hiredis\\_ssl shared object version. [\\#780](https://github.com/redis/hiredis/pull/780) ([yossigo](https://github.com/yossigo))\n- Win32 tests and timeout fix [\\#776](https://github.com/redis/hiredis/pull/776) ([michael-grunder](https://github.com/michael-grunder))\n- Provides an optional cleanup callback for async data. 
[\\#768](https://github.com/redis/hiredis/pull/768) ([heronr](https://github.com/heronr))\n- Housekeeping fixes [\\#764](https://github.com/redis/hiredis/pull/764) ([michael-grunder](https://github.com/michael-grunder))\n- install alloc.h [\\#756](https://github.com/redis/hiredis/pull/756) ([ch1aki](https://github.com/ch1aki))\n- fix spelling mistakes [\\#746](https://github.com/redis/hiredis/pull/746) ([ShooterIT](https://github.com/ShooterIT))\n- Free the reply in redisGetReply when passed NULL [\\#741](https://github.com/redis/hiredis/pull/741) ([michael-grunder](https://github.com/michael-grunder))\n- Fix dead code in sslLogCallback relating to should\\_log variable. [\\#737](https://github.com/redis/hiredis/pull/737) ([natoscott](https://github.com/natoscott))\n- Fix typo in dict.c. [\\#731](https://github.com/redis/hiredis/pull/731) ([Kevin-Xi](https://github.com/Kevin-Xi))\n- Adding an option to DISABLE\\_TESTS [\\#727](https://github.com/redis/hiredis/pull/727) ([pbotros](https://github.com/pbotros))\n- Update README with SSL support. [\\#720](https://github.com/redis/hiredis/pull/720) ([yossigo](https://github.com/yossigo))\n- Fixes leaks in unit tests [\\#715](https://github.com/redis/hiredis/pull/715) ([michael-grunder](https://github.com/michael-grunder))\n- SSL Tests [\\#711](https://github.com/redis/hiredis/pull/711) ([yossigo](https://github.com/yossigo))\n- SSL Reorganization [\\#708](https://github.com/redis/hiredis/pull/708) ([yossigo](https://github.com/yossigo))\n- Fix MSVC build. [\\#706](https://github.com/redis/hiredis/pull/706) ([yossigo](https://github.com/yossigo))\n- SSL: Properly report SSL\\_connect\\(\\) errors. [\\#702](https://github.com/redis/hiredis/pull/702) ([yossigo](https://github.com/yossigo))\n- Silent SSL trace to stdout by default. [\\#699](https://github.com/redis/hiredis/pull/699) ([yossigo](https://github.com/yossigo))\n- Port RESP3 support from Redis. 
[\\#697](https://github.com/redis/hiredis/pull/697) ([yossigo](https://github.com/yossigo))\n- Removed whitespace before newline [\\#691](https://github.com/redis/hiredis/pull/691) ([Miniwoffer](https://github.com/Miniwoffer))\n- Add install adapters header files [\\#688](https://github.com/redis/hiredis/pull/688) ([kevin1018](https://github.com/kevin1018))\n- Remove unnecessary null check before free [\\#684](https://github.com/redis/hiredis/pull/684) ([qlyoung](https://github.com/qlyoung))\n- redisReaderGetReply leak memory [\\#671](https://github.com/redis/hiredis/pull/671) ([movebean](https://github.com/movebean))\n- fix timeout code in windows [\\#670](https://github.com/redis/hiredis/pull/670) ([jman-krafton](https://github.com/jman-krafton))\n- test: fix errstr matching for musl libc [\\#665](https://github.com/redis/hiredis/pull/665) ([ghost](https://github.com/ghost))\n- Windows: MinGW fixes and Windows Travis builders [\\#663](https://github.com/redis/hiredis/pull/663) ([mbitsnbites](https://github.com/mbitsnbites))\n- The setsockopt and getsockopt API diffs from BSD socket and WSA one [\\#662](https://github.com/redis/hiredis/pull/662) ([dragonation](https://github.com/dragonation))\n- Fix Compile Error On Windows \\(Visual Studio\\) [\\#658](https://github.com/redis/hiredis/pull/658) ([jinjiazhang](https://github.com/jinjiazhang))\n- Fix NXDOMAIN test case [\\#653](https://github.com/redis/hiredis/pull/653) ([michael-grunder](https://github.com/michael-grunder))\n- Add MinGW support [\\#652](https://github.com/redis/hiredis/pull/652) ([mbitsnbites](https://github.com/mbitsnbites))\n- SSL Support [\\#645](https://github.com/redis/hiredis/pull/645) ([mnunberg](https://github.com/mnunberg))\n- Fix Invalid argument after redisAsyncConnectUnix [\\#644](https://github.com/redis/hiredis/pull/644) ([codehz](https://github.com/codehz))\n- Makefile: use predefined AR [\\#632](https://github.com/redis/hiredis/pull/632) ([Mic92](https://github.com/Mic92))\n- 
FreeBSD build fix [\\#628](https://github.com/redis/hiredis/pull/628) ([devnexen](https://github.com/devnexen))\n- Fix errors not propagating properly with libuv.h. [\\#624](https://github.com/redis/hiredis/pull/624) ([yossigo](https://github.com/yossigo))\n- Update README.md [\\#621](https://github.com/redis/hiredis/pull/621) ([Crunsher](https://github.com/Crunsher))\n- Fix redisBufferRead documentation [\\#620](https://github.com/redis/hiredis/pull/620) ([hacst](https://github.com/hacst))\n- Add CPPFLAGS to REAL\\_CFLAGS [\\#614](https://github.com/redis/hiredis/pull/614) ([thomaslee](https://github.com/thomaslee))\n- Update createArray to take size\\_t [\\#597](https://github.com/redis/hiredis/pull/597) ([justinbrewer](https://github.com/justinbrewer))\n- fix common realloc mistake and add null check more [\\#580](https://github.com/redis/hiredis/pull/580) ([charsyam](https://github.com/charsyam))\n- Proper error reporting for connect failures [\\#578](https://github.com/redis/hiredis/pull/578) ([mnunberg](https://github.com/mnunberg))\n\n\\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*\n\n## [1.0.0-rc1](https://github.com/redis/hiredis/tree/v1.0.0-rc1) - (2020-07-29)\n\n_Note: There were no changes to code between v1.0.0-rc1 and v1.0.0, so see v1.0.0 for the changelog._\n\n### 0.14.1 (2020-03-13)\n\n* Adds safe allocation wrappers (CVE-2020-7105, #747, #752) (Michael Grunder)\n\n### 0.14.0 (2018-09-25)\n**BREAKING CHANGES**:\n\n* Change `redisReply.len` to `size_t`, as it denotes the size of a string\n\n  User code should compare this to `size_t` values as well.\n  Code that compared it to other types may now need casts added, or existing casts removed.\n\n* Make string2ll static to fix conflict with Redis (Tom Lee [c3188b])\n* Use -dynamiclib instead of -shared for OSX (Ryan Schmidt [a65537])\n* Use string2ll from Redis 
w/added tests (Michael Grunder [7bef04, 60f622])\n* Makefile - OSX compilation fixes (Ryan Schmidt [881fcb, 0e9af8])\n* Remove redundant NULL checks (Justin Brewer [54acc8, 58e6b8])\n* Fix bulk and multi-bulk length truncation (Justin Brewer [109197])\n* Fix SIGSEGV in OpenBSD by checking for NULL before calling freeaddrinfo (Justin Brewer [546d94])\n* Several POSIX compatibility fixes (Justin Brewer [bbeab8, 49bbaa, d1c1b6])\n* Makefile - Compatibility fixes (Dimitri Vorobiev [3238cf, 12a9d1])\n* Makefile - Fix make install on FreeBSD (Zach Shipko [a2ef2b])\n* Makefile - don't assume $(INSTALL) is cp (Igor Gnatenko [725a96])\n* Separate side-effect causing function from assert and small cleanup (amallia [b46413, 3c3234])\n* Don't send negative values to `__redisAsyncCommand` (Frederik Deweerdt [706129])\n* Fix leak if setsockopt fails (Frederik Deweerdt [e21c9c])\n* Fix libevent leak (zfz [515228])\n* Clean up GCC warning (Ichito Nagata [2ec774])\n* Keep track of errno in `__redisSetErrorFromErrno()` as snprintf may use it (Jin Qing [25cd88])\n* Solaris compilation fix (Donald Whyte [41b07d])\n* Reorder linker arguments when building examples (Tustfarm-heart [06eedd])\n* Keep track of subscriptions in case of rapid subscribe/unsubscribe (Hyungjin Kim [073dc8, be76c5, d46999])\n* libuv use after free fix (Paul Scott [cbb956])\n* Properly close socket fd on reconnect attempt (WSL [64d1ec])\n* Skip valgrind in OSX tests (Jan-Erik Rediger [9deb78])\n* Various updates for Travis testing OSX (Ted Nyman [fa3774, 16a459, bc0ea5])\n* Update libevent (Chris Xin [386802])\n* Change sds.h for building in C++ projects (Ali Volkan ATLI [f5b32e])\n* Use proper format specifier in redisFormatSdsCommandArgv (Paulino Huerta, Jan-Erik Rediger [360a06, 8655a6])\n* Better handling of NULL reply in example code (Jan-Erik Rediger [1b8ed3])\n* Prevent overflow when formatting an error (Jan-Erik Rediger [0335cb])\n* Compatibility fix for strerror_r (Tom Lee [bb1747])\n* Properly detect 
integer parse/overflow errors (Justin Brewer [93421f])\n* Adds CI for Windows and Cygwin fixes (owent [6c53d6, 6c3e40])\n* Catch a buffer overflow when formatting the error message\n* Import latest upstream sds. This breaks applications that are linked against the old hiredis v0.13\n* Fix warnings when compiled with -Wshadow\n* Make hiredis compile in Cygwin on Windows, now CI-tested\n* Bulk and multi-bulk lengths less than -1 or greater than `LLONG_MAX` are now\n  protocol errors. This is consistent with the RESP specification. On 32-bit\n  platforms, the upper bound is lowered to `SIZE_MAX`.\n\n* Remove backwards compatibility macros\n\nThis removes the following old function aliases; use the new names now:\n\n| Old                         | New                    |\n| --------------------------- | ---------------------- |\n| redisReplyReaderCreate      | redisReaderCreate      |\n| redisReplyReaderFree        | redisReaderFree        |\n| redisReplyReaderFeed        | redisReaderFeed        |\n| redisReplyReaderGetReply    | redisReaderGetReply    |\n| redisReplyReaderSetPrivdata | redisReaderSetPrivdata |\n| redisReplyReaderGetObject   | redisReaderGetObject   |\n| redisReplyReaderGetError    | redisReaderGetError    |\n\n* The `DEBUG` variable in the Makefile was renamed to `DEBUG_FLAGS`\n\nPreviously it broke some builds for people who had `DEBUG` set to some arbitrary value,\ndue to debugging other software.\nBy renaming we avoid unintentional name clashes.\n\nSimply rename `DEBUG` to `DEBUG_FLAGS` in your environment to make it work again.\n\n### 0.13.3 (2015-09-16)\n\n* Revert \"Clear `REDIS_CONNECTED` flag when connection is closed\".\n* Make tests pass on FreeBSD (Thanks, Giacomo Olgeni)\n\n\nIf the `REDIS_CONNECTED` flag is cleared,\nthe async onDisconnect callback function will never be called.\nThis causes problems as the disconnect is never reported back to the user.\n\n### 0.13.2 
(2015-08-25)\n\n* Prevent crash on pending replies in async code (Thanks, @switch-st)\n* Clear `REDIS_CONNECTED` flag when connection is closed (Thanks, Jerry Jacobs)\n* Add Mac OS X adapter (Thanks, @dizzus)\n* Add Qt adapter (Thanks, Pietro Cerutti)\n* Add Ivykis adapter (Thanks, Gergely Nagy)\n\nAll adapters are provided as is and are only tested where possible.\n\n### 0.13.1 (2015-05-03)\n\nThis is a bug fix release.\nThe new `reconnect` method introduced new struct members, which clashed with pre-defined names in pre-C99 code.\nAnother commit forced C99 compilation just to make it work, but of course this is not desirable for outside projects.\nOther non-C99 code can now use hiredis as usual again.\nSorry for the inconvenience.\n\n* Fix memory leak in async reply handling (Salvatore Sanfilippo)\n* Rename struct member to avoid name clash with pre-C99 code (Alex Balashov, ncopa)\n\n### 0.13.0 (2015-04-16)\n\nThis release adds a minimal Windows compatibility layer.\nThe parser, standalone since v0.12.0, can now be compiled on Windows\n(and thus used in other client libraries as well).\n\n* Windows compatibility layer for parser code (tzickel)\n* Properly escape data printed to PKGCONF file (Dan Skorupski)\n* Fix tests when assert() undefined (Keith Bennett, Matt Stancliff)\n* Implement a reconnect method for the client context; this changes the structure of `redisContext` (Aaron Bedra)\n\n### 0.12.1 (2015-01-26)\n\n* Fix `make install`: DESTDIR support, install all required files, install PKGCONF in proper location\n* Fix `make test` as 32 bit build on 64 bit platform\n\n### 0.12.0 (2015-01-22)\n\n* Add optional KeepAlive support\n\n* Try again on EINTR errors\n\n* Add libuv adapter\n\n* Add IPv6 support\n\n* Remove possibility of multiple close on same fd\n\n* Add ability to bind source address on connect\n\n* Add redisConnectFd() and redisFreeKeepFd()\n\n* Fix getaddrinfo() memory leak\n\n* Free string if it is unused (fixes memory leak)\n\n* Improve 
redisAppendCommandArgv performance 2.5x\n\n* Add support for SO_REUSEADDR\n\n* Fix redisvFormatCommand format parsing\n\n* Add GLib 2.0 adapter\n\n* Refactor reading code into read.c\n\n* Fix errno error buffers to not clobber errors\n\n* Generate pkgconf during build\n\n* Silence _BSD_SOURCE warnings\n\n* Improve digit counting for multibulk creation\n\n\n### 0.11.0\n\n* Increase the maximum multi-bulk reply depth to 7.\n\n* Increase the read buffer size from 2k to 16k.\n\n* Use poll(2) instead of select(2) to support large fds (>= 1024).\n\n### 0.10.1\n\n* Makefile overhaul. Important to check out if you override one or more\n  variables using environment variables or via arguments to the \"make\" tool.\n\n* Issue #45: Fix potential memory leak for a multi bulk reply with 0 elements\n  being created by the default reply object functions.\n\n* Issue #43: Don't crash in an asynchronous context when Redis returns an error\n  reply after the connection has been made (this happens when the maximum\n  number of connections is reached).\n\n### 0.10.0\n\n* See commit log.\n"
  },
  {
    "path": "deps/hiredis/CMakeLists.txt",
    "content": "CMAKE_MINIMUM_REQUIRED(VERSION 3.0.0)\n\nOPTION(BUILD_SHARED_LIBS \"Build shared libraries\" ON)\nOPTION(ENABLE_SSL \"Build hiredis_ssl for SSL support\" OFF)\nOPTION(DISABLE_TESTS \"If tests should be compiled or not\" OFF)\nOPTION(ENABLE_SSL_TESTS \"Should we test SSL connections\" OFF)\nOPTION(ENABLE_EXAMPLES \"Enable building hiredis examples\" OFF)\nOPTION(ENABLE_ASYNC_TESTS \"Should we run all asynchronous API tests\" OFF)\n\nMACRO(getVersionBit name)\n  SET(VERSION_REGEX \"^#define ${name} (.+)$\")\n  FILE(STRINGS \"${CMAKE_CURRENT_SOURCE_DIR}/hiredis.h\"\n    VERSION_BIT REGEX ${VERSION_REGEX})\n  STRING(REGEX REPLACE ${VERSION_REGEX} \"\\\\1\" ${name} \"${VERSION_BIT}\")\nENDMACRO(getVersionBit)\n\ngetVersionBit(HIREDIS_MAJOR)\ngetVersionBit(HIREDIS_MINOR)\ngetVersionBit(HIREDIS_PATCH)\ngetVersionBit(HIREDIS_SONAME)\nSET(VERSION \"${HIREDIS_MAJOR}.${HIREDIS_MINOR}.${HIREDIS_PATCH}\")\nMESSAGE(\"Detected version: ${VERSION}\")\n\nPROJECT(hiredis LANGUAGES \"C\" VERSION \"${VERSION}\")\nINCLUDE(GNUInstallDirs)\n\n# Hiredis requires C99\nSET(CMAKE_C_STANDARD 99)\nSET(CMAKE_DEBUG_POSTFIX d)\n\nSET(hiredis_sources\n    alloc.c\n    async.c\n    hiredis.c\n    net.c\n    read.c\n    sds.c\n    sockcompat.c)\n\nSET(hiredis_sources ${hiredis_sources})\n\nIF(WIN32)\n    ADD_DEFINITIONS(-D_CRT_SECURE_NO_WARNINGS -DWIN32_LEAN_AND_MEAN)\nENDIF()\n\nADD_LIBRARY(hiredis ${hiredis_sources})\nADD_LIBRARY(hiredis::hiredis ALIAS hiredis)\nset(hiredis_export_name hiredis CACHE STRING \"Name of the exported target\")\nset_target_properties(hiredis PROPERTIES EXPORT_NAME ${hiredis_export_name})\n\nSET_TARGET_PROPERTIES(hiredis\n    PROPERTIES WINDOWS_EXPORT_ALL_SYMBOLS TRUE\n    VERSION \"${HIREDIS_SONAME}\")\nIF(MSVC)\n    SET_TARGET_PROPERTIES(hiredis\n        PROPERTIES COMPILE_FLAGS /Z7)\nENDIF()\nIF(WIN32)\n    TARGET_LINK_LIBRARIES(hiredis PUBLIC ws2_32 crypt32)\nELSEIF(CMAKE_SYSTEM_NAME MATCHES \"FreeBSD\")\n    TARGET_LINK_LIBRARIES(hiredis PUBLIC 
m)\nELSEIF(CMAKE_SYSTEM_NAME MATCHES \"SunOS\")\n    TARGET_LINK_LIBRARIES(hiredis PUBLIC socket)\nENDIF()\n\nTARGET_INCLUDE_DIRECTORIES(hiredis PUBLIC $<INSTALL_INTERFACE:include> $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>)\n\nCONFIGURE_FILE(hiredis.pc.in hiredis.pc @ONLY)\n\nset(CPACK_PACKAGE_VENDOR \"Redis\")\nset(CPACK_PACKAGE_DESCRIPTION \"\\\nHiredis is a minimalistic C client library for the Redis database.\n\nIt is minimalistic because it just adds minimal support for the protocol, \\\nbut at the same time it uses a high level printf-alike API in order to make \\\nit much higher level than otherwise suggested by its minimal code base and the \\\nlack of explicit bindings for every Redis command.\n\nApart from supporting sending commands and receiving replies, it comes with a \\\nreply parser that is decoupled from the I/O layer. It is a stream parser designed \\\nfor easy reusability, which can for instance be used in higher level language bindings \\\nfor efficient reply parsing.\n\nHiredis only supports the binary-safe Redis protocol, so you can use it with any Redis \\\nversion >= 1.2.0.\n\nThe library comes with multiple APIs. 
There is the synchronous API, the asynchronous API \\\nand the reply parsing API.\")\nset(CPACK_PACKAGE_HOMEPAGE_URL \"https://github.com/redis/hiredis\")\nset(CPACK_PACKAGE_CONTACT \"michael dot grunder at gmail dot com\")\nset(CPACK_DEBIAN_PACKAGE_SHLIBDEPS ON)\nset(CPACK_RPM_PACKAGE_AUTOREQPROV ON)\n\ninclude(CPack)\n\nINSTALL(TARGETS hiredis\n    EXPORT hiredis-targets\n    RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}\n    LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}\n    ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR})\n\nif (MSVC AND BUILD_SHARED_LIBS)\n    INSTALL(FILES $<TARGET_PDB_FILE:hiredis>\n        DESTINATION ${CMAKE_INSTALL_BINDIR}\n        CONFIGURATIONS Debug RelWithDebInfo)\nendif()\n\n# For NuGet packages\nINSTALL(FILES hiredis.targets\n    DESTINATION build/native)\n\nINSTALL(FILES hiredis.h read.h sds.h async.h alloc.h sockcompat.h\n    DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/hiredis)\n\nINSTALL(DIRECTORY adapters\n    DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/hiredis)\n\nINSTALL(FILES ${CMAKE_CURRENT_BINARY_DIR}/hiredis.pc\n    DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)\n\nexport(EXPORT hiredis-targets\n    FILE \"${CMAKE_CURRENT_BINARY_DIR}/hiredis-targets.cmake\"\n    NAMESPACE hiredis::)\n\nif(WIN32)\n    SET(CMAKE_CONF_INSTALL_DIR share/hiredis)\nelse()\n    SET(CMAKE_CONF_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR}/cmake/hiredis)\nendif()\nSET(INCLUDE_INSTALL_DIR include)\ninclude(CMakePackageConfigHelpers)\nwrite_basic_package_version_file(\"${CMAKE_CURRENT_BINARY_DIR}/hiredis-config-version.cmake\"\n                                 COMPATIBILITY SameMajorVersion)\nconfigure_package_config_file(hiredis-config.cmake.in ${CMAKE_CURRENT_BINARY_DIR}/hiredis-config.cmake\n                              INSTALL_DESTINATION ${CMAKE_CONF_INSTALL_DIR}\n                              PATH_VARS INCLUDE_INSTALL_DIR)\n\nINSTALL(EXPORT hiredis-targets\n        FILE hiredis-targets.cmake\n        NAMESPACE hiredis::\n        DESTINATION 
${CMAKE_CONF_INSTALL_DIR})\n\nINSTALL(FILES ${CMAKE_CURRENT_BINARY_DIR}/hiredis-config.cmake\n              ${CMAKE_CURRENT_BINARY_DIR}/hiredis-config-version.cmake\n        DESTINATION ${CMAKE_CONF_INSTALL_DIR})\n\n\nIF(ENABLE_SSL)\n    IF (NOT OPENSSL_ROOT_DIR)\n        IF (APPLE)\n            SET(OPENSSL_ROOT_DIR \"/usr/local/opt/openssl\")\n        ENDIF()\n    ENDIF()\n    FIND_PACKAGE(OpenSSL REQUIRED)\n    SET(hiredis_ssl_sources\n        ssl.c)\n    ADD_LIBRARY(hiredis_ssl ${hiredis_ssl_sources})\n    ADD_LIBRARY(hiredis::hiredis_ssl ALIAS hiredis_ssl)\n\n    IF (APPLE AND BUILD_SHARED_LIBS)\n        SET_PROPERTY(TARGET hiredis_ssl PROPERTY LINK_FLAGS \"-Wl,-undefined -Wl,dynamic_lookup\")\n    ENDIF()\n\n    SET_TARGET_PROPERTIES(hiredis_ssl\n        PROPERTIES\n        WINDOWS_EXPORT_ALL_SYMBOLS TRUE\n        VERSION \"${HIREDIS_SONAME}\")\n    IF(MSVC)\n        SET_TARGET_PROPERTIES(hiredis_ssl\n            PROPERTIES COMPILE_FLAGS /Z7)\n    ENDIF()\n    TARGET_LINK_LIBRARIES(hiredis_ssl PRIVATE OpenSSL::SSL)\n    IF(WIN32)\n        TARGET_LINK_LIBRARIES(hiredis_ssl PRIVATE hiredis)\n    ENDIF()\n    CONFIGURE_FILE(hiredis_ssl.pc.in hiredis_ssl.pc @ONLY)\n\n    INSTALL(TARGETS hiredis_ssl\n        EXPORT hiredis_ssl-targets\n        RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}\n        LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}\n        ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR})\n\n    if (MSVC AND BUILD_SHARED_LIBS)\n        INSTALL(FILES $<TARGET_PDB_FILE:hiredis_ssl>\n            DESTINATION ${CMAKE_INSTALL_BINDIR}\n            CONFIGURATIONS Debug RelWithDebInfo)\n    endif()\n\n    INSTALL(FILES hiredis_ssl.h\n        DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/hiredis)\n\n    INSTALL(FILES ${CMAKE_CURRENT_BINARY_DIR}/hiredis_ssl.pc\n        DESTINATION ${CMAKE_INSTALL_LIBDIR}/pkgconfig)\n\n    export(EXPORT hiredis_ssl-targets\n           FILE \"${CMAKE_CURRENT_BINARY_DIR}/hiredis_ssl-targets.cmake\"\n           NAMESPACE hiredis::)\n\n    
if(WIN32)\n        SET(CMAKE_CONF_INSTALL_DIR share/hiredis_ssl)\n    else()\n        SET(CMAKE_CONF_INSTALL_DIR ${CMAKE_INSTALL_LIBDIR}/cmake/hiredis_ssl)\n    endif()\n    configure_package_config_file(hiredis_ssl-config.cmake.in ${CMAKE_CURRENT_BINARY_DIR}/hiredis_ssl-config.cmake\n                                  INSTALL_DESTINATION ${CMAKE_CONF_INSTALL_DIR}\n                                  PATH_VARS INCLUDE_INSTALL_DIR)\n\n    INSTALL(EXPORT hiredis_ssl-targets\n        FILE hiredis_ssl-targets.cmake\n        NAMESPACE hiredis::\n        DESTINATION ${CMAKE_CONF_INSTALL_DIR})\n\n    INSTALL(FILES ${CMAKE_CURRENT_BINARY_DIR}/hiredis_ssl-config.cmake\n        DESTINATION ${CMAKE_CONF_INSTALL_DIR})\nENDIF()\n\nIF(NOT DISABLE_TESTS)\n    ENABLE_TESTING()\n    ADD_EXECUTABLE(hiredis-test test.c)\n    TARGET_LINK_LIBRARIES(hiredis-test hiredis)\n    IF(ENABLE_SSL_TESTS)\n        ADD_DEFINITIONS(-DHIREDIS_TEST_SSL=1)\n        TARGET_LINK_LIBRARIES(hiredis-test hiredis_ssl)\n    ENDIF()\n    IF(ENABLE_ASYNC_TESTS)\n        ADD_DEFINITIONS(-DHIREDIS_TEST_ASYNC=1)\n        TARGET_LINK_LIBRARIES(hiredis-test event)\n    ENDIF()\n    ADD_TEST(NAME hiredis-test\n        COMMAND ${CMAKE_CURRENT_SOURCE_DIR}/test.sh)\nENDIF()\n\n# Add examples\nIF(ENABLE_EXAMPLES)\n    ADD_SUBDIRECTORY(examples)\nENDIF(ENABLE_EXAMPLES)\n"
  },
  {
    "path": "deps/hiredis/COPYING",
    "content": "Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\nCopyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice,\n  this list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice,\n  this list of conditions and the following disclaimer in the documentation\n  and/or other materials provided with the distribution.\n\n* Neither the name of Redis nor the names of its contributors may be used\n  to endorse or promote products derived from this software without specific\n  prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "deps/hiredis/Makefile",
    "content": "# Hiredis Makefile\n# Copyright (C) 2010-2011 Salvatore Sanfilippo <antirez at gmail dot com>\n# Copyright (C) 2010-2011 Pieter Noordhuis <pcnoordhuis at gmail dot com>\n# This file is released under the BSD license, see the COPYING file\n\nOBJ=alloc.o net.o hiredis.o sds.o async.o read.o sockcompat.o\nEXAMPLES=hiredis-example hiredis-example-libevent hiredis-example-libev hiredis-example-glib hiredis-example-push hiredis-example-poll\nTESTS=hiredis-test\nLIBNAME=libhiredis\nPKGCONFNAME=hiredis.pc\n\nHIREDIS_MAJOR=$(shell grep HIREDIS_MAJOR hiredis.h | awk '{print $$3}')\nHIREDIS_MINOR=$(shell grep HIREDIS_MINOR hiredis.h | awk '{print $$3}')\nHIREDIS_PATCH=$(shell grep HIREDIS_PATCH hiredis.h | awk '{print $$3}')\nHIREDIS_SONAME=$(shell grep HIREDIS_SONAME hiredis.h | awk '{print $$3}')\n\n# Installation related variables and target\nPREFIX?=/usr/local\nINCLUDE_PATH?=include/hiredis\nLIBRARY_PATH?=lib\nPKGCONF_PATH?=pkgconfig\nINSTALL_INCLUDE_PATH= $(DESTDIR)$(PREFIX)/$(INCLUDE_PATH)\nINSTALL_LIBRARY_PATH= $(DESTDIR)$(PREFIX)/$(LIBRARY_PATH)\nINSTALL_PKGCONF_PATH= $(INSTALL_LIBRARY_PATH)/$(PKGCONF_PATH)\n\n# redis-server configuration used for testing\nREDIS_PORT=56379\nREDIS_SERVER=redis-server\ndefine REDIS_TEST_CONFIG\n\tdaemonize yes\n\tpidfile /tmp/hiredis-test-redis.pid\n\tport $(REDIS_PORT)\n\tbind 127.0.0.1\n\tunixsocket /tmp/hiredis-test-redis.sock\nendef\nexport REDIS_TEST_CONFIG\n\n# Flags passed from outside so as not to override CFLAGS or LDFLAGS as they may\n# be changed inside this Makefile\nHIREDIS_CFLAGS=\nHIREDIS_LDFLAGS=\n\n# Fallback to gcc when $CC is not in $PATH.\nCC:=$(shell sh -c 'type $${CC%% *} >/dev/null 2>/dev/null && echo $(CC) || echo gcc')\nCXX:=$(shell sh -c 'type $${CXX%% *} >/dev/null 2>/dev/null && echo $(CXX) || echo g++')\nOPTIMIZATION?=-O3\nWARNINGS=-Wall -Wextra -Werror -Wstrict-prototypes -Wwrite-strings -Wno-missing-field-initializers\nDEBUG_FLAGS?= -g -ggdb\nREAL_CFLAGS=$(OPTIMIZATION) -fPIC $(CPPFLAGS) 
$(CFLAGS) $(WARNINGS) $(DEBUG_FLAGS) $(PLATFORM_FLAGS) $(HIREDIS_CFLAGS)\nREAL_LDFLAGS=$(LDFLAGS) $(HIREDIS_LDFLAGS)\n\nDYLIBSUFFIX=so\nSTLIBSUFFIX=a\nDYLIB_MINOR_NAME=$(LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_SONAME)\nDYLIB_MAJOR_NAME=$(LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_MAJOR)\nDYLIBNAME=$(LIBNAME).$(DYLIBSUFFIX)\n\nDYLIB_MAKE_CMD=$(CC) $(PLATFORM_FLAGS) -shared -Wl,-soname,$(DYLIB_MINOR_NAME)\nSTLIBNAME=$(LIBNAME).$(STLIBSUFFIX)\nSTLIB_MAKE_CMD=$(AR) rcs\n\n#################### SSL variables start ####################\nSSL_OBJ=ssl.o\nSSL_LIBNAME=libhiredis_ssl\nSSL_PKGCONFNAME=hiredis_ssl.pc\nSSL_INSTALLNAME=install-ssl\nSSL_DYLIB_MINOR_NAME=$(SSL_LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_SONAME)\nSSL_DYLIB_MAJOR_NAME=$(SSL_LIBNAME).$(DYLIBSUFFIX).$(HIREDIS_MAJOR)\nSSL_DYLIBNAME=$(SSL_LIBNAME).$(DYLIBSUFFIX)\nSSL_STLIBNAME=$(SSL_LIBNAME).$(STLIBSUFFIX)\nSSL_DYLIB_MAKE_CMD=$(CC) $(PLATFORM_FLAGS) -shared -Wl,-soname,$(SSL_DYLIB_MINOR_NAME)\n\nUSE_SSL?=0\nifeq ($(USE_SSL),1)\n  # This is required for test.c only\n  CFLAGS+=-DHIREDIS_TEST_SSL\n  EXAMPLES+=hiredis-example-ssl hiredis-example-libevent-ssl\n  SSL_STLIB=$(SSL_STLIBNAME)\n  SSL_DYLIB=$(SSL_DYLIBNAME)\n  SSL_PKGCONF=$(SSL_PKGCONFNAME)\n  SSL_INSTALL=$(SSL_INSTALLNAME)\nelse\n  SSL_STLIB=\n  SSL_DYLIB=\n  SSL_PKGCONF=\n  SSL_INSTALL=\nendif\n##################### SSL variables end #####################\n\n\n# Platform-specific overrides\nuname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')\n\n# This is required for test.c only\nifeq ($(TEST_ASYNC),1)\n  export CFLAGS+=-DHIREDIS_TEST_ASYNC\nendif\n\nifeq ($(USE_SSL),1)\n  ifndef OPENSSL_PREFIX\n    ifeq ($(uname_S),Darwin)\n      SEARCH_PATH1=/opt/homebrew/opt/openssl\n      SEARCH_PATH2=/usr/local/opt/openssl\n\n      ifneq ($(wildcard $(SEARCH_PATH1)),)\n        OPENSSL_PREFIX=$(SEARCH_PATH1)\n      else ifneq ($(wildcard $(SEARCH_PATH2)),)\n        OPENSSL_PREFIX=$(SEARCH_PATH2)\n      endif\n    endif\n  endif\n\n  ifdef OPENSSL_PREFIX\n    
CFLAGS+=-I$(OPENSSL_PREFIX)/include\n    SSL_LDFLAGS+=-L$(OPENSSL_PREFIX)/lib\n  endif\n\n  SSL_LDFLAGS+=-lssl -lcrypto\nendif\n\nifeq ($(uname_S),FreeBSD)\n  LDFLAGS+=-lm\n  IS_GCC=$(shell sh -c '$(CC) --version 2>/dev/null |egrep -i -c \"gcc\"')\n  ifeq ($(IS_GCC),1)\n    REAL_CFLAGS+=-pedantic\n  endif\nelse\n  REAL_CFLAGS+=-pedantic\nendif\n\nifeq ($(uname_S),SunOS)\n  IS_SUN_CC=$(shell sh -c '$(CC) -V 2>&1 |egrep -i -c \"sun|studio\"')\n  ifeq ($(IS_SUN_CC),1)\n    SUN_SHARED_FLAG=-G\n  else\n    SUN_SHARED_FLAG=-shared\n  endif\n  REAL_LDFLAGS+= -ldl -lnsl -lsocket\n  DYLIB_MAKE_CMD=$(CC) $(SUN_SHARED_FLAG) -o $(DYLIBNAME) -h $(DYLIB_MINOR_NAME) $(LDFLAGS)\n  SSL_DYLIB_MAKE_CMD=$(CC) $(SUN_SHARED_FLAG) -o $(SSL_DYLIBNAME) -h $(SSL_DYLIB_MINOR_NAME) $(LDFLAGS) $(SSL_LDFLAGS)\nendif\nifeq ($(uname_S),Darwin)\n  DYLIBSUFFIX=dylib\n  DYLIB_MINOR_NAME=$(LIBNAME).$(HIREDIS_SONAME).$(DYLIBSUFFIX)\n  DYLIB_MAKE_CMD=$(CC) -dynamiclib -Wl,-install_name,$(PREFIX)/$(LIBRARY_PATH)/$(DYLIB_MINOR_NAME) -o $(DYLIBNAME) $(LDFLAGS)\n  SSL_DYLIB_MAKE_CMD=$(CC) -dynamiclib -Wl,-install_name,$(PREFIX)/$(LIBRARY_PATH)/$(SSL_DYLIB_MINOR_NAME) -o $(SSL_DYLIBNAME) $(LDFLAGS) $(SSL_LDFLAGS)\n  DYLIB_PLUGIN=-Wl,-undefined -Wl,dynamic_lookup\nendif\n\nall: dynamic static hiredis-test pkgconfig\n\ndynamic: $(DYLIBNAME) $(SSL_DYLIB)\n\nstatic: $(STLIBNAME) $(SSL_STLIB)\n\npkgconfig: $(PKGCONFNAME) $(SSL_PKGCONF)\n\n# Deps (use make dep to generate this)\nalloc.o: alloc.c fmacros.h alloc.h\nasync.o: async.c fmacros.h alloc.h async.h hiredis.h read.h sds.h net.h dict.c dict.h win32.h async_private.h\ndict.o: dict.c fmacros.h alloc.h dict.h\nhiredis.o: hiredis.c fmacros.h hiredis.h read.h sds.h alloc.h net.h async.h win32.h\nnet.o: net.c fmacros.h net.h hiredis.h read.h sds.h alloc.h sockcompat.h win32.h\nread.o: read.c fmacros.h alloc.h read.h sds.h win32.h\nsds.o: sds.c sds.h sdsalloc.h alloc.h\nsockcompat.o: sockcompat.c sockcompat.h\ntest.o: test.c fmacros.h hiredis.h read.h sds.h 
alloc.h net.h sockcompat.h win32.h\n\n$(DYLIBNAME): $(OBJ)\n\t$(DYLIB_MAKE_CMD) -o $(DYLIBNAME) $(OBJ) $(REAL_LDFLAGS)\n\n$(STLIBNAME): $(OBJ)\n\t$(STLIB_MAKE_CMD) $(STLIBNAME) $(OBJ)\n\n#################### SSL building rules start ####################\n$(SSL_DYLIBNAME): $(SSL_OBJ)\n\t$(SSL_DYLIB_MAKE_CMD) $(DYLIB_PLUGIN) -o $(SSL_DYLIBNAME) $(SSL_OBJ) $(REAL_LDFLAGS) $(LDFLAGS) $(SSL_LDFLAGS)\n\n$(SSL_STLIBNAME): $(SSL_OBJ)\n\t$(STLIB_MAKE_CMD) $(SSL_STLIBNAME) $(SSL_OBJ)\n\n$(SSL_OBJ): ssl.c hiredis.h read.h sds.h alloc.h async.h win32.h async_private.h\n#################### SSL building rules end ####################\n\n# Binaries:\nhiredis-example-libevent: examples/example-libevent.c adapters/libevent.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< -levent $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-libevent-ssl: examples/example-libevent-ssl.c adapters/libevent.h $(STLIBNAME) $(SSL_STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< -levent $(STLIBNAME) $(SSL_STLIBNAME) $(REAL_LDFLAGS) $(SSL_LDFLAGS)\n\nhiredis-example-libev: examples/example-libev.c adapters/libev.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< -lev $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-libhv: examples/example-libhv.c adapters/libhv.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< -lhv $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-glib: examples/example-glib.c adapters/glib.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< $(shell pkg-config --cflags --libs glib-2.0) $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-ivykis: examples/example-ivykis.c adapters/ivykis.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< -livykis $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-macosx: examples/example-macosx.c adapters/macosx.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. 
$< -framework CoreFoundation $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-ssl: examples/example-ssl.c $(STLIBNAME) $(SSL_STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< $(STLIBNAME) $(SSL_STLIBNAME) $(REAL_LDFLAGS) $(SSL_LDFLAGS)\n\nhiredis-example-poll: examples/example-poll.c adapters/poll.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< $(STLIBNAME) $(REAL_LDFLAGS)\n\nifndef AE_DIR\nhiredis-example-ae:\n\t@echo \"Please specify AE_DIR (e.g. <redis repository>/src)\"\n\t@false\nelse\nhiredis-example-ae: examples/example-ae.c adapters/ae.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. -I$(AE_DIR) $< $(AE_DIR)/ae.o $(AE_DIR)/zmalloc.o $(AE_DIR)/../deps/jemalloc/lib/libjemalloc.a -pthread $(STLIBNAME)\nendif\n\nifndef LIBUV_DIR\n# dynamic link libuv.so\nhiredis-example-libuv: examples/example-libuv.c adapters/libuv.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. -I$(LIBUV_DIR)/include $< -luv -lpthread -lrt $(STLIBNAME) $(REAL_LDFLAGS)\nelse\n# use user provided static lib\nhiredis-example-libuv: examples/example-libuv.c adapters/libuv.h $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. -I$(LIBUV_DIR)/include $< $(LIBUV_DIR)/.libs/libuv.a -lpthread -lrt $(STLIBNAME) $(REAL_LDFLAGS)\nendif\n\nifeq ($(and $(QT_MOC),$(QT_INCLUDE_DIR),$(QT_LIBRARY_DIR)),)\nhiredis-example-qt:\n\t@echo \"Please specify QT_MOC, QT_INCLUDE_DIR AND QT_LIBRARY_DIR\"\n\t@false\nelse\nhiredis-example-qt: examples/example-qt.cpp adapters/qt.h $(STLIBNAME)\n\t$(QT_MOC) adapters/qt.h -I. -I$(QT_INCLUDE_DIR) -I$(QT_INCLUDE_DIR)/QtCore | \\\n\t    $(CXX) -x c++ -o qt-adapter-moc.o -c - $(REAL_CFLAGS) -I. -I$(QT_INCLUDE_DIR) -I$(QT_INCLUDE_DIR)/QtCore\n\t$(QT_MOC) examples/example-qt.h -I. -I$(QT_INCLUDE_DIR) -I$(QT_INCLUDE_DIR)/QtCore | \\\n\t    $(CXX) -x c++ -o qt-example-moc.o -c - $(REAL_CFLAGS) -I. -I$(QT_INCLUDE_DIR) -I$(QT_INCLUDE_DIR)/QtCore\n\t$(CXX) -o examples/$@ $(REAL_CFLAGS) $(REAL_LDFLAGS) -I. 
-I$(QT_INCLUDE_DIR) -I$(QT_INCLUDE_DIR)/QtCore -L$(QT_LIBRARY_DIR) qt-adapter-moc.o qt-example-moc.o $< -pthread $(STLIBNAME) -lQtCore\nendif\n\nhiredis-example: examples/example.c $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< $(STLIBNAME) $(REAL_LDFLAGS)\n\nhiredis-example-push: examples/example-push.c $(STLIBNAME)\n\t$(CC) -o examples/$@ $(REAL_CFLAGS) -I. $< $(STLIBNAME) $(REAL_LDFLAGS)\n\nexamples: $(EXAMPLES)\n\nTEST_LIBS = $(STLIBNAME) $(SSL_STLIB)\nTEST_LDFLAGS = $(SSL_LDFLAGS)\nifeq ($(USE_SSL),1)\n  TEST_LDFLAGS += -pthread\nendif\nifeq ($(TEST_ASYNC),1)\n    TEST_LDFLAGS += -levent\nendif\n\nhiredis-test: test.o $(TEST_LIBS)\n\t$(CC) -o $@ $(REAL_CFLAGS) -I. $^ $(REAL_LDFLAGS) $(TEST_LDFLAGS)\n\nhiredis-%: %.o $(STLIBNAME)\n\t$(CC) $(REAL_CFLAGS) -o $@ $< $(TEST_LIBS) $(REAL_LDFLAGS)\n\ntest: hiredis-test\n\t./hiredis-test\n\ncheck: hiredis-test\n\tTEST_SSL=$(USE_SSL) ./test.sh\n\n.c.o:\n\t$(CC) -std=c99 -c $(REAL_CFLAGS) $<\n\nclean:\n\trm -rf $(DYLIBNAME) $(STLIBNAME) $(SSL_DYLIBNAME) $(SSL_STLIBNAME) $(TESTS) $(PKGCONFNAME) examples/hiredis-example* *.o *.gcda *.gcno *.gcov\n\ndep:\n\t$(CC) $(CPPFLAGS) $(CFLAGS) -MM *.c\n\nINSTALL?= cp -pPR\n\n$(PKGCONFNAME): hiredis.h\n\t@echo \"Generating $@ for pkgconfig...\"\n\t@echo prefix=$(PREFIX) > $@\n\t@echo exec_prefix=\\$${prefix} >> $@\n\t@echo libdir=$(PREFIX)/$(LIBRARY_PATH) >> $@\n\t@echo includedir=$(PREFIX)/include >> $@\n\t@echo pkgincludedir=$(PREFIX)/$(INCLUDE_PATH) >> $@\n\t@echo >> $@\n\t@echo Name: hiredis >> $@\n\t@echo Description: Minimalistic C client library for Redis. 
>> $@\n\t@echo Version: $(HIREDIS_MAJOR).$(HIREDIS_MINOR).$(HIREDIS_PATCH) >> $@\n\t@echo Libs: -L\\$${libdir} -lhiredis >> $@\n\t@echo Cflags: -I\\$${pkgincludedir} -I\\$${includedir} -D_FILE_OFFSET_BITS=64 >> $@\n\n$(SSL_PKGCONFNAME): hiredis_ssl.h\n\t@echo \"Generating $@ for pkgconfig...\"\n\t@echo prefix=$(PREFIX) > $@\n\t@echo exec_prefix=\\$${prefix} >> $@\n\t@echo libdir=$(PREFIX)/$(LIBRARY_PATH) >> $@\n\t@echo includedir=$(PREFIX)/include >> $@\n\t@echo pkgincludedir=$(PREFIX)/$(INCLUDE_PATH) >> $@\n\t@echo >> $@\n\t@echo Name: hiredis_ssl >> $@\n\t@echo Description: SSL Support for hiredis. >> $@\n\t@echo Version: $(HIREDIS_MAJOR).$(HIREDIS_MINOR).$(HIREDIS_PATCH) >> $@\n\t@echo Requires: hiredis >> $@\n\t@echo Libs: -L\\$${libdir} -lhiredis_ssl >> $@\n\t@echo Libs.private: -lssl -lcrypto >> $@\n\ninstall: $(DYLIBNAME) $(STLIBNAME) $(PKGCONFNAME) $(SSL_INSTALL)\n\tmkdir -p $(INSTALL_INCLUDE_PATH) $(INSTALL_INCLUDE_PATH)/adapters $(INSTALL_LIBRARY_PATH)\n\t$(INSTALL) hiredis.h async.h read.h sds.h alloc.h sockcompat.h $(INSTALL_INCLUDE_PATH)\n\t$(INSTALL) adapters/*.h $(INSTALL_INCLUDE_PATH)/adapters\n\t$(INSTALL) $(DYLIBNAME) $(INSTALL_LIBRARY_PATH)/$(DYLIB_MINOR_NAME)\n\tcd $(INSTALL_LIBRARY_PATH) && ln -sf $(DYLIB_MINOR_NAME) $(DYLIBNAME) && ln -sf $(DYLIB_MINOR_NAME) $(DYLIB_MAJOR_NAME)\n\t$(INSTALL) $(STLIBNAME) $(INSTALL_LIBRARY_PATH)\n\tmkdir -p $(INSTALL_PKGCONF_PATH)\n\t$(INSTALL) $(PKGCONFNAME) $(INSTALL_PKGCONF_PATH)\n\ninstall-ssl: $(SSL_DYLIBNAME) $(SSL_STLIBNAME) $(SSL_PKGCONFNAME)\n\tmkdir -p $(INSTALL_INCLUDE_PATH) $(INSTALL_LIBRARY_PATH)\n\t$(INSTALL) hiredis_ssl.h $(INSTALL_INCLUDE_PATH)\n\t$(INSTALL) $(SSL_DYLIBNAME) $(INSTALL_LIBRARY_PATH)/$(SSL_DYLIB_MINOR_NAME)\n\tcd $(INSTALL_LIBRARY_PATH) && ln -sf $(SSL_DYLIB_MINOR_NAME) $(SSL_DYLIBNAME) && ln -sf $(SSL_DYLIB_MINOR_NAME) $(SSL_DYLIB_MAJOR_NAME)\n\t$(INSTALL) $(SSL_STLIBNAME) $(INSTALL_LIBRARY_PATH)\n\tmkdir -p $(INSTALL_PKGCONF_PATH)\n\t$(INSTALL) $(SSL_PKGCONFNAME) 
$(INSTALL_PKGCONF_PATH)\n\n32bit:\n\t@echo \"\"\n\t@echo \"WARNING: if this fails under Linux you probably need to install libc6-dev-i386\"\n\t@echo \"\"\n\t$(MAKE) CFLAGS=\"-m32\" LDFLAGS=\"-m32\"\n\n32bit-vars:\n\t$(eval CFLAGS=-m32)\n\t$(eval LDFLAGS=-m32)\n\ngprof:\n\t$(MAKE) CFLAGS=\"-pg\" LDFLAGS=\"-pg\"\n\ngcov:\n\t$(MAKE) CFLAGS+=\"-fprofile-arcs -ftest-coverage\" LDFLAGS=\"-fprofile-arcs\"\n\ncoverage: gcov\n\tmake check\n\tmkdir -p tmp/lcov\n\tlcov -d . -c --exclude '/usr*' -o tmp/lcov/hiredis.info\n\tlcov -q -l tmp/lcov/hiredis.info\n\tgenhtml --legend -q -o tmp/lcov/report tmp/lcov/hiredis.info\n\nnoopt:\n\t$(MAKE) OPTIMIZATION=\"\"\n\n.PHONY: all test check clean dep install 32bit 32bit-vars gprof gcov noopt\n"
  },
  {
    "path": "deps/hiredis/README.md",
"content": "\n[![Build Status](https://github.com/redis/hiredis/actions/workflows/build.yml/badge.svg)](https://github.com/redis/hiredis/actions/workflows/build.yml)\n\n**This Readme reflects the latest changes in the master branch. See [v1.0.0](https://github.com/redis/hiredis/tree/v1.0.0) for the Readme and documentation for the latest release ([API/ABI history](https://abi-laboratory.pro/?view=timeline&l=hiredis)).**\n\n# HIREDIS\n\nHiredis is a minimalistic C client library for the [Redis](https://redis.io/) database.\n\nIt is minimalistic because it just adds minimal support for the protocol, but\nat the same time it uses a high level printf-alike API in order to make it\nmuch higher level than otherwise suggested by its minimal code base and the\nlack of explicit bindings for every Redis command.\n\nApart from supporting sending commands and receiving replies, it comes with\na reply parser that is decoupled from the I/O layer. It\nis a stream parser designed for easy reusability, which can for instance be used\nin higher level language bindings for efficient reply parsing.\n\nHiredis only supports the binary-safe Redis protocol, so you can use it with any\nRedis version >= 1.2.0.\n\nThe library comes with multiple APIs. There is the\n*synchronous API*, the *asynchronous API* and the *reply parsing API*.\n\n## Upgrading to `1.1.0`\n\nAlmost all users will simply need to recompile their applications against the newer version of hiredis.\n\n**NOTE**:  Hiredis can now return `nan` in addition to `-inf` and `inf` in a `REDIS_REPLY_DOUBLE`.\n           Applications that deal with `RESP3` doubles should make sure to account for this.\n\n## Upgrading to `1.0.2`\n\n<span style=\"color:red\">NOTE:  v1.0.1 erroneously bumped SONAME, which is why it is skipped here.</span>\n\nVersion 1.0.2 is simply 1.0.0 with a fix for [CVE-2021-32765](https://github.com/redis/hiredis/security/advisories/GHSA-hfm9-39pp-55p2).  
They are otherwise identical.\n\n## Upgrading to `1.0.0`\n\nVersion 1.0.0 marks the first stable release of Hiredis.\nIt includes some minor breaking changes, mostly to make the exposed API more uniform and self-explanatory.\nIt also bundles the updated `sds` library, to sync up with upstream and Redis.\nFor code changes see the [Changelog](CHANGELOG.md).\n\n_Note:  As described below, a few member names have been changed but most applications should be able to upgrade with minor code changes and recompiling._\n\n## IMPORTANT:  Breaking changes from `0.14.1` -> `1.0.0`\n\n* `redisContext` has two additional members (`free_privdata`, and `privctx`).\n* `redisOptions.timeout` has been renamed to `redisOptions.connect_timeout`, and we've added `redisOptions.command_timeout`.\n* `redisReplyObjectFunctions.createArray` now takes `size_t` instead of `int` for its length parameter.\n\n## IMPORTANT:  Breaking changes when upgrading from 0.13.x -> 0.14.x\n\nBulk and multi-bulk lengths less than -1 or greater than `LLONG_MAX` are now\nprotocol errors. This is consistent with the RESP specification. On 32-bit\nplatforms, the upper bound is lowered to `SIZE_MAX`.\n\nChange `redisReply.len` to `size_t`, as it denotes the size of a string.\n\nUser code should compare this to `size_t` values as well.  If it was used to\ncompare to other values, casting might be necessary or can be removed, if\ncasting was applied before.\n\n## Upgrading from `<0.9.0`\n\nVersion 0.9.0 is a major overhaul of hiredis in every aspect. However, upgrading existing\ncode using hiredis should not be a big pain. 
The key thing to keep in mind when\nupgrading is that hiredis >= 0.9.0 uses a `redisContext*` to keep state, in contrast to\nthe stateless 0.0.1 that only has a file descriptor to work with.\n\n## Synchronous API\n\nTo consume the synchronous API, there are only a few function calls that need to be introduced:\n\n```c\nredisContext *redisConnect(const char *ip, int port);\nvoid *redisCommand(redisContext *c, const char *format, ...);\nvoid freeReplyObject(void *reply);\n```\n\n### Connecting\n\nThe function `redisConnect` is used to create a so-called `redisContext`. The\ncontext is where Hiredis holds state for a connection. The `redisContext`\nstruct has an integer `err` field that is non-zero when the connection is in\nan error state. The field `errstr` will contain a string with a description of\nthe error. More information on errors can be found in the **Errors** section.\nAfter trying to connect to Redis using `redisConnect` you should\ncheck the `err` field to see if establishing the connection was successful:\n\n```c\nredisContext *c = redisConnect(\"127.0.0.1\", 6379);\nif (c == NULL || c->err) {\n    if (c) {\n        printf(\"Error: %s\\n\", c->errstr);\n        // handle error\n    } else {\n        printf(\"Can't allocate redis context\\n\");\n    }\n}\n```\n\nOne can also use `redisConnectWithOptions` which takes a `redisOptions` argument\nthat can be configured with endpoint information as well as many different flags\nto change how the `redisContext` will be configured.\n\n```c\nredisOptions opt = {0};\n\n/* One can set the endpoint with one of our helper macros */\nif (tcp) {\n    REDIS_OPTIONS_SET_TCP(&opt, \"localhost\", 6379);\n} else {\n    REDIS_OPTIONS_SET_UNIX(&opt, \"/tmp/redis.sock\");\n}\n\n/* And privdata can be specified with another helper */\nREDIS_OPTIONS_SET_PRIVDATA(&opt, myPrivData, myPrivDataDtor);\n\n/* Finally various options may be set via the `options` member, as described below */\nopt.options |= 
REDIS_OPT_PREFER_IPV4;\n```\n\nIf a connection is lost, `int redisReconnect(redisContext *c)` can be used to restore the connection using the same endpoint and options as the given context.\n\n### Configurable redisOptions flags\n\nThere are several flags you may set in the `redisOptions` struct to change default behavior.  You can specify the flags via the `redisOptions->options` member.\n\n| Flag | Description  |\n| --- | --- |\n| REDIS\\_OPT\\_NONBLOCK | Tells hiredis to make a non-blocking connection. |\n| REDIS\\_OPT\\_REUSEADDR | Tells hiredis to set the [SO_REUSEADDR](https://man7.org/linux/man-pages/man7/socket.7.html) socket option |\n| REDIS\\_OPT\\_PREFER\\_IPV4<br>REDIS\\_OPT\\_PREFER_IPV6<br>REDIS\\_OPT\\_PREFER\\_IP\\_UNSPEC | Informs hiredis to either prefer IPv4 or IPv6 when invoking [getaddrinfo](https://man7.org/linux/man-pages/man3/gai_strerror.3.html).  `REDIS_OPT_PREFER_IP_UNSPEC` will cause hiredis to specify `AF_UNSPEC` in the getaddrinfo call, which means both IPv4 and IPv6 addresses will be searched simultaneously.<br>Hiredis prefers IPv4 by default. |\n| REDIS\\_OPT\\_NO\\_PUSH\\_AUTOFREE | Tells hiredis to not install the default RESP3 PUSH handler (which just intercepts and frees the replies).  This is useful in situations where you want to process these messages in-band. |\n| REDIS\\_OPT\\_NOAUTOFREEREPLIES | **ASYNC**: tells hiredis not to automatically invoke `freeReplyObject` after executing the reply callback. 
|\n| REDIS\\_OPT\\_NOAUTOFREE | **ASYNC**: Tells hiredis not to automatically free the `redisAsyncContext` on connection/communication failure; it is freed only when the user makes an explicit call to `redisAsyncDisconnect` or `redisAsyncFree` |\n\n*Note: A `redisContext` is not thread-safe.*\n\n### Other configuration using socket options\n\nThe following socket options are applied directly to the underlying socket.\nThe values are not stored in the `redisContext`, so they are not automatically applied when reconnecting using `redisReconnect()`.\nThese functions return `REDIS_OK` on success.\nOn failure, `REDIS_ERR` is returned and the underlying connection is closed.\n\nTo configure these for an asynchronous context (see *Asynchronous API* below), use `ac->c` to get the `redisContext` out of a `redisAsyncContext`.\n\n```C\nint redisEnableKeepAlive(redisContext *c);\nint redisEnableKeepAliveWithInterval(redisContext *c, int interval);\n```\n\nEnables TCP keepalive by setting the following socket options (with some variations depending on OS):\n\n* `SO_KEEPALIVE`;\n* `TCP_KEEPALIVE` or `TCP_KEEPIDLE`, value configurable using the `interval` parameter, default 15 seconds;\n* `TCP_KEEPINTVL` set to 1/3 of `interval`;\n* `TCP_KEEPCNT` set to 3.\n\n```C\nint redisSetTcpUserTimeout(redisContext *c, unsigned int timeout);\n```\n\nSets the Linux-specific `TCP_USER_TIMEOUT` socket option, which is described in the `tcp` man page:\n\n> When the value is greater than 0, it specifies the maximum amount of time in milliseconds that transmitted data may remain unacknowledged before TCP will forcibly close the corresponding connection and return ETIMEDOUT to the application.\n> If the option value is specified as 0, TCP will use the system default.\n\n### Sending commands\n\nThere are several ways to issue commands to Redis. The first that will be introduced is\n`redisCommand`. This function takes a format similar to printf. 
In the simplest form,\nit is used like this:\n```c\nreply = redisCommand(context, \"SET foo bar\");\n```\n\nThe specifier `%s` interpolates a string in the command, and uses `strlen` to\ndetermine the length of the string:\n```c\nreply = redisCommand(context, \"SET foo %s\", value);\n```\nWhen you need to pass binary safe strings in a command, the `%b` specifier can be\nused. Together with a pointer to the string, it requires a `size_t` length argument\nof the string:\n```c\nreply = redisCommand(context, \"SET foo %b\", value, (size_t) valuelen);\n```\nInternally, Hiredis splits the command into different arguments and will\nconvert it to the protocol used to communicate with Redis.\nOne or more spaces separate arguments, so you can use the specifiers\nanywhere in an argument:\n```c\nreply = redisCommand(context, \"SET key:%s %s\", myid, value);\n```\n\n### Using replies\n\nThe return value of `redisCommand` holds a reply when the command was\nsuccessfully executed. When an error occurs, the return value is `NULL` and\nthe `err` field in the context will be set (see section on **Errors**).\nOnce an error is returned the context cannot be reused and you should set up\na new connection.\n\nThe standard replies that `redisCommand` returns are of the type `redisReply`. The\n`type` field in the `redisReply` should be used to test what kind of reply\nwas received:\n\n### RESP2\n\n* **`REDIS_REPLY_STATUS`**:\n    * The command replied with a status reply. The status string can be accessed using `reply->str`.\n      The length of this string can be accessed using `reply->len`.\n\n* **`REDIS_REPLY_ERROR`**:\n    * The command replied with an error. The error string can be accessed identically to `REDIS_REPLY_STATUS`.\n\n* **`REDIS_REPLY_INTEGER`**:\n    * The command replied with an integer. The integer value can be accessed using the\n      `reply->integer` field of type `long long`.\n\n* **`REDIS_REPLY_NIL`**:\n    * The command replied with a **nil** object. 
There is no data to access.\n\n* **`REDIS_REPLY_STRING`**:\n    * A bulk (string) reply. The value of the reply can be accessed using `reply->str`.\n      The length of this string can be accessed using `reply->len`.\n\n* **`REDIS_REPLY_ARRAY`**:\n    * A multi bulk reply. The number of elements in the multi bulk reply is stored in\n      `reply->elements`. Every element in the multi bulk reply is a `redisReply` object as well\n      and can be accessed via `reply->element[..index..]`.\n      Redis may reply with nested arrays, and this is fully supported.\n\n### RESP3\n\nHiredis also supports all of the new `RESP3` data types, which are as follows.  For more information about the protocol see the `RESP3` [specification](https://github.com/antirez/RESP3/blob/master/spec.md).\n\n* **`REDIS_REPLY_DOUBLE`**:\n    * The command replied with a double-precision floating point number.\n      The value is stored as a string in the `str` member, and can be converted with `strtod` or similar.\n\n* **`REDIS_REPLY_BOOL`**:\n    * A boolean true/false reply.\n      The value is stored in the `integer` member and will be either `0` or `1`.\n\n* **`REDIS_REPLY_MAP`**:\n    * An array with the added invariant that there will always be an even number of elements.\n      The MAP is functionally equivalent to `REDIS_REPLY_ARRAY` except for the previously mentioned invariant.\n\n* **`REDIS_REPLY_SET`**:\n    * An array response where each entry is unique.\n      Like the MAP type, the data is identical to an array response except there are no duplicate values.\n\n* **`REDIS_REPLY_PUSH`**:\n    * An array that can be generated spontaneously by Redis.\n      This array response will always contain at least two subelements.  The first contains the type of `PUSH` message (e.g. 
`message`, or `invalidate`), and the second is a sub-array with the `PUSH` payload itself.\n\n* **`REDIS_REPLY_ATTR`**:\n    * An array structurally identical to a `MAP` but intended as meta-data about a reply.\n      _As of Redis 6.0.6 this reply type is not used in Redis._\n\n* **`REDIS_REPLY_BIGNUM`**:\n    * A string representing an arbitrarily large signed or unsigned integer value.\n      The number will be encoded as a string in the `str` member of `redisReply`.\n\n* **`REDIS_REPLY_VERB`**:\n    * A verbatim string, intended to be presented to the user without modification.\n      The string payload is stored in the `str` member, and type data is stored in the `vtype` member (e.g. `txt` for raw text or `md` for markdown).\n\nReplies should be freed using the `freeReplyObject()` function.\nNote that this function will take care of freeing sub-reply objects\ncontained in arrays and nested arrays, so there is no need for the user to\nfree the sub replies (it is actually harmful and will corrupt the memory).\n\n**Important:** the current version of hiredis (1.0.0) frees replies when the\nasynchronous API is used. This means you should not call `freeReplyObject` when\nyou use this API. The reply is cleaned up by hiredis _after_ the callback\nreturns.  We may introduce a flag to make this configurable in future versions of the library.\n\n### Cleaning up\n\nTo disconnect and free the context the following function can be used:\n```c\nvoid redisFree(redisContext *c);\n```\nThis function immediately closes the socket and then frees the allocations done in\ncreating the context.\n\n### Sending commands (cont'd)\n\nTogether with `redisCommand`, the function `redisCommandArgv` can be used to issue commands.\nIt has the following prototype:\n```c\nvoid *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);\n```\nIt takes the number of arguments `argc`, an array of strings `argv` and the lengths of the\narguments `argvlen`. 
For convenience, `argvlen` may be set to `NULL` and the function will\nuse `strlen(3)` on every argument to determine its length. Obviously, when any of the arguments\nneeds to be binary safe, the entire array of lengths `argvlen` should be provided.\n\nThe return value has the same semantics as `redisCommand`.\n\n### Pipelining\n\nTo explain how Hiredis supports pipelining in a blocking connection, there needs to be an\nunderstanding of the internal execution flow.\n\nWhen any of the functions in the `redisCommand` family is called, Hiredis first formats the\ncommand according to the Redis protocol. The formatted command is then put in the output buffer\nof the context. This output buffer is dynamic, so it can hold any number of commands.\nAfter the command is put in the output buffer, `redisGetReply` is called. This function has the\nfollowing two execution paths:\n\n1. The input buffer is non-empty:\n    * Try to parse a single reply from the input buffer and return it\n    * If no reply could be parsed, continue at *2*\n2. The input buffer is empty:\n    * Write the **entire** output buffer to the socket\n    * Read from the socket until a single reply could be parsed\n\nThe function `redisGetReply` is exported as part of the Hiredis API and can be used when a reply\nis expected on the socket. To pipeline commands, the only thing that needs to be done is\nfilling up the output buffer. For this purpose, two functions can be used that are identical\nto the `redisCommand` family, apart from not returning a reply:\n```c\nvoid redisAppendCommand(redisContext *c, const char *format, ...);\nvoid redisAppendCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);\n```\nAfter calling either function one or more times, `redisGetReply` can be used to receive the\nsubsequent replies. The return value for this function is either `REDIS_OK` or `REDIS_ERR`, where\nthe latter means an error occurred while reading a reply. 
Just as with the other commands,\nthe `err` field in the context can be used to find out what the cause of this error is.\n\nThe following example shows a simple pipeline (resulting in only a single call to `write(2)` and\na single call to `read(2)`):\n```c\nredisReply *reply;\nredisAppendCommand(context,\"SET foo bar\");\nredisAppendCommand(context,\"GET foo\");\nredisGetReply(context,(void**)&reply); // reply for SET\nfreeReplyObject(reply);\nredisGetReply(context,(void**)&reply); // reply for GET\nfreeReplyObject(reply);\n```\nThis API can also be used to implement a blocking subscriber:\n```c\nreply = redisCommand(context,\"SUBSCRIBE foo\");\nfreeReplyObject(reply);\nwhile(redisGetReply(context,(void **)&reply) == REDIS_OK) {\n    // consume message\n    freeReplyObject(reply);\n}\n```\n\n### Errors\n\nWhen a function call is not successful, depending on the function either `NULL` or `REDIS_ERR` is\nreturned. The `err` field inside the context will be non-zero and set to one of the\nfollowing constants:\n\n* **`REDIS_ERR_IO`**:\n    There was an I/O error while creating the connection, trying to write\n    to the socket or read from the socket. If you included `errno.h` in your\n    application, you can use the global `errno` variable to find out what is\n    wrong.\n\n* **`REDIS_ERR_EOF`**:\n    The server closed the connection which resulted in an empty read.\n\n* **`REDIS_ERR_PROTOCOL`**:\n    There was an error while parsing the protocol.\n\n* **`REDIS_ERR_OTHER`**:\n    Any other error. 
Currently, it is only used when a specified hostname to connect\n    to cannot be resolved.\n\nIn every case, the `errstr` field in the context will be set to hold a string representation\nof the error.\n\n## Asynchronous API\n\nHiredis comes with an asynchronous API that works easily with any event library.\nExamples are bundled that show using Hiredis with [libev](http://software.schmorp.de/pkg/libev.html)\nand [libevent](http://monkey.org/~provos/libevent/).\n\n### Connecting\n\nThe function `redisAsyncConnect` can be used to establish a non-blocking connection to\nRedis. It returns a pointer to the newly created `redisAsyncContext` struct. The `err` field\nshould be checked after creation to see if there were errors creating the connection.\nBecause the connection that will be created is non-blocking, the kernel is not able to\ninstantly return whether the specified host and port are able to accept a connection.\nIn case of error, it is the caller's responsibility to free the context using `redisAsyncFree()`.\n\n*Note: A `redisAsyncContext` is not thread-safe.*\n\nAn application function creating a connection might look like this:\n\n```c\nvoid appConnect(myAppData *appData)\n{\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        printf(\"Error: %s\\n\", c->errstr);\n        // handle error\n        redisAsyncFree(c);\n        c = NULL;\n    } else {\n        appData->context = c;\n        appData->connecting = 1;\n        c->data = appData; /* store application pointer for the callbacks */\n        redisAsyncSetConnectCallback(c, appOnConnect);\n        redisAsyncSetDisconnectCallback(c, appOnDisconnect);\n    }\n}\n```\n\nThe asynchronous context _should_ hold a *connect* callback function that is called when the connection\nattempt completes, either successfully or with an error.\nIt _can_ also hold a *disconnect* callback function that is called when the\nconnection is disconnected (either because of an error or per 
user request). Both callbacks should\nhave the following prototype:\n```c\nvoid(const redisAsyncContext *c, int status);\n```\n\nOn a *connect*, the `status` argument is set to `REDIS_OK` if the connection attempt succeeded.  In this\ncase, the context is ready to accept commands.  If it is called with `REDIS_ERR` then the\nconnection attempt failed. The `err` field in the context can be accessed to find out the cause of the error.\nAfter a failed connection attempt, the context object is automatically freed by the library after calling\nthe connect callback.  This may be a good point to create a new context and retry the connection.\n\nOn a disconnect, the `status` argument is set to `REDIS_OK` when disconnection was initiated by the\nuser, or `REDIS_ERR` when the disconnection was caused by an error. When it is `REDIS_ERR`, the `err`\nfield in the context can be accessed to find out the cause of the error.\n\nThe context object is always freed after the disconnect callback has fired. When a reconnect is needed,\nthe disconnect callback is a good point to do so.\n\nSetting the connect or disconnect callbacks can only be done once per context. For subsequent calls the\nAPI will return `REDIS_ERR`. The functions to set the callbacks have the following prototypes:\n```c\n/* Alternatively you can use redisAsyncSetConnectCallbackNC which will be passed a non-const\n   redisAsyncContext* on invocation (e.g. allowing writes to the privdata member). */\nint redisAsyncSetConnectCallback(redisAsyncContext *ac, redisConnectCallback *fn);\nint redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn);\n```\n`ac->data` may be used to pass user data to both callbacks.  
A typical implementation\nmight look something like this:\n```c\nvoid appOnConnect(redisAsyncContext *c, int status)\n{\n    myAppData *appData = (myAppData*)c->data; /* get my application specific context */\n    appData->connecting = 0;\n    if (status == REDIS_OK) {\n        appData->connected = 1;\n    } else {\n        appData->connected = 0;\n        appData->err = c->err;\n        appData->context = NULL; /* avoid stale pointer when callback returns */\n        appAttemptReconnect();\n    }\n}\n\nvoid appOnDisconnect(redisAsyncContext *c, int status)\n{\n    myAppData *appData = (myAppData*)c->data; /* get my application specific context */\n    appData->connected = 0;\n    appData->err = c->err;\n    appData->context = NULL; /* avoid stale pointer when callback returns */\n    if (status == REDIS_OK) {\n        appNotifyDisconnectCompleted(appData);\n    } else {\n        appNotifyUnexpectedDisconnect(appData);\n        appAttemptReconnect();\n    }\n}\n```\n\n### Sending commands and their callbacks\n\nIn an asynchronous context, commands are automatically pipelined due to the nature of an event loop.\nTherefore, unlike the synchronous API, there is only a single way to send commands.\nBecause commands are sent to Redis asynchronously, issuing a command requires a callback function\nthat is called when the reply is received. 
Reply callbacks should have the following prototype:\n```c\nvoid(redisAsyncContext *c, void *reply, void *privdata);\n```\nThe `privdata` argument can be used to curry arbitrary data to the callback from the point where\nthe command is initially queued for execution.\n\nThe functions that can be used to issue commands in an asynchronous context are:\n```c\nint redisAsyncCommand(\n  redisAsyncContext *ac, redisCallbackFn *fn, void *privdata,\n  const char *format, ...);\nint redisAsyncCommandArgv(\n  redisAsyncContext *ac, redisCallbackFn *fn, void *privdata,\n  int argc, const char **argv, const size_t *argvlen);\n```\nBoth functions work like their blocking counterparts. The return value is `REDIS_OK` when the command\nwas successfully added to the output buffer and `REDIS_ERR` otherwise. Example: when the connection\nis being disconnected per user-request, no new commands may be added to the output buffer and `REDIS_ERR` is\nreturned on calls to the `redisAsyncCommand` family.\n\nIf the reply for a command with a `NULL` callback is read, it is immediately freed. When the callback\nfor a command is non-`NULL`, the memory is freed immediately following the callback: the reply is only\nvalid for the duration of the callback.\n\nAll pending callbacks are called with a `NULL` reply when the context encounters an error.\n\nFor every command issued, with the exception of **SUBSCRIBE** and **PSUBSCRIBE**, the callback is\ncalled exactly once.  Even if the context object is disconnected or deleted, every pending callback\nwill be called with a `NULL` reply.\n\nFor **SUBSCRIBE** and **PSUBSCRIBE**, the callbacks may be called repeatedly until an `unsubscribe`\nmessage arrives.  This will be the last invocation of the callback. 
In case of error, the callbacks\nmay receive a final `NULL` reply instead.\n\n### Disconnecting\n\nAn asynchronous connection can be terminated using:\n```c\nvoid redisAsyncDisconnect(redisAsyncContext *ac);\n```\nWhen this function is called, the connection is **not** immediately terminated. Instead, new\ncommands are no longer accepted and the connection is only terminated when all pending commands\nhave been written to the socket, their respective replies have been read and their respective\ncallbacks have been executed. After this, the disconnection callback is executed with the\n`REDIS_OK` status and the context object is freed.\n\nThe connection can be forcefully disconnected using:\n```c\nvoid redisAsyncFree(redisAsyncContext *ac);\n```\nIn this case, nothing more is written to the socket, all pending callbacks are called with a `NULL`\nreply and the disconnection callback is called with `REDIS_OK`, after which the context object\nis freed.\n\n### Hooking it up to event library *X*\n\nThere are a few hooks that need to be set on the context object after it is created.\nSee the `adapters/` directory for bindings to *libev* and *libevent*.\n\n## Reply parsing API\n\nHiredis comes with a reply parsing API that makes it easy to write higher\nlevel language bindings.\n\nThe reply parsing API consists of the following functions:\n```c\nredisReader *redisReaderCreate(void);\nvoid redisReaderFree(redisReader *reader);\nint redisReaderFeed(redisReader *reader, const char *buf, size_t len);\nint redisReaderGetReply(redisReader *reader, void **reply);\n```\nThe same set of functions are used internally by hiredis when creating a\nnormal Redis context; the above API just exposes them to the user for direct\nusage.\n\n### Usage\n\nThe function `redisReaderCreate` creates a `redisReader` structure that holds a\nbuffer with unparsed data and state for the protocol parser.\n\nIncoming data -- most likely from a socket -- can be placed in the internal\nbuffer of the 
`redisReader` using `redisReaderFeed`. This function will make a\ncopy of the buffer pointed to by `buf` for `len` bytes. This data is parsed\nwhen `redisReaderGetReply` is called. This function returns an integer status\nand a reply object (as described above) via `void **reply`. The returned status\ncan be either `REDIS_OK` or `REDIS_ERR`, where the latter means something went\nwrong (either a protocol error, or an out of memory error).\n\nThe parser limits the level of nesting for multi bulk payloads to 7. If the\nmulti bulk nesting level is higher than this, the parser returns an error.\n\n### Customizing replies\n\nThe function `redisReaderGetReply` creates a `redisReply` and makes the function\nargument `reply` point to the created `redisReply` variable. For instance, if\nthe response is of type `REDIS_REPLY_STATUS`, then the `str` field of `redisReply`\nwill hold the status as a vanilla C string. However, the functions that are\nresponsible for creating instances of the `redisReply` can be customized by\nsetting the `fn` field on the `redisReader` struct. This should be done\nimmediately after creating the `redisReader`.\n\nFor example, [hiredis-rb](https://github.com/pietern/hiredis-rb/blob/master/ext/hiredis_ext/reader.c)\nuses customized reply object functions to create Ruby objects.\n\n### Reader max buffer\n\nBoth when using the Reader API directly or when using it indirectly via a\nnormal Redis context, the redisReader structure uses a buffer in order to\naccumulate data from the server.\nUsually this buffer is destroyed when it is empty and is larger than 16\nKiB in order to avoid wasting memory in unused buffers.\n\nHowever, when working with very big payloads destroying the buffer may slow\ndown performance considerably, so it is possible to modify the max size of\nan idle buffer by changing the value of the `maxbuf` field of the reader structure\nto the desired value. 
The special value of 0 means that there is no maximum\nvalue for an idle buffer, so the buffer will never get freed.\n\nFor instance, if you have a normal Redis context you can set the maximum idle\nbuffer to zero (unlimited) just with:\n```c\ncontext->reader->maxbuf = 0;\n```\nThis should be done only in order to maximize performance when working with\nlarge payloads. The context should be set back to `REDIS_READER_MAX_BUF` again\nas soon as possible in order to prevent allocation of useless memory.\n\n### Reader max array elements\n\nBy default the hiredis reply parser sets the maximum number of multi-bulk elements\nto 2^32 - 1 or 4,294,967,295 entries.  If you need to process multi-bulk replies\nwith more than this many elements, you can set the value higher or to zero, meaning\nunlimited, with:\n```c\ncontext->reader->maxelements = 0;\n```\n\n## SSL/TLS Support\n\n### Building\n\nSSL/TLS support is not built by default and requires an explicit flag:\n\n    make USE_SSL=1\n\nThis requires the OpenSSL development package (e.g. header files) to be\navailable.\n\nWhen enabled, SSL/TLS support is built into extra `libhiredis_ssl.a` and\n`libhiredis_ssl.so` static/dynamic libraries. This leaves the original libraries\nunaffected so no additional dependencies are introduced.\n\n### Using it\n\nFirst, you'll need to make sure you include the SSL header file:\n\n```c\n#include <hiredis/hiredis.h>\n#include <hiredis/hiredis_ssl.h>\n```\n\nYou will also need to link against `libhiredis_ssl`, **in addition** to\n`libhiredis` and add `-lssl -lcrypto` to satisfy its dependencies.\n\nHiredis implements SSL/TLS on top of its normal `redisContext` or\n`redisAsyncContext`, so you will need to establish a connection first and then\ninitiate an SSL/TLS handshake.\n\n#### Hiredis OpenSSL Wrappers\n\nBefore Hiredis can negotiate an SSL/TLS connection, it is necessary to\ninitialize OpenSSL and create a context. You can do that in two ways:\n\n1. 
Work directly with the OpenSSL API to initialize the library's global context\n   and create `SSL_CTX *` and `SSL *` contexts. With an `SSL *` object you can\n   call `redisInitiateSSL()`.\n2. Work with a set of Hiredis-provided wrappers around OpenSSL, create a\n   `redisSSLContext` object to hold configuration and use\n   `redisInitiateSSLWithContext()` to initiate the SSL/TLS handshake.\n\n```c\n/* A Hiredis SSL context. It holds SSL configuration and can be reused across\n * many contexts.\n */\nredisSSLContext *ssl_context;\n\n/* An error variable to indicate what went wrong, if the context fails to\n * initialize.\n */\nredisSSLContextError ssl_error = REDIS_SSL_CTX_NONE;\n\n/* Initialize global OpenSSL state.\n *\n * You should call this only once when your app initializes, and only if\n * you don't explicitly or implicitly initialize OpenSSL elsewhere.\n */\nredisInitOpenSSL();\n\n/* Create SSL context */\nssl_context = redisCreateSSLContext(\n    \"cacertbundle.crt\",     /* File name of trusted CA certificate/bundle file, optional */\n    \"/path/to/certs\",       /* Path of trusted certificates, optional */\n    \"client_cert.pem\",      /* File name of client certificate file, optional */\n    \"client_key.pem\",       /* File name of client private key, optional */\n    \"redis.mydomain.com\",   /* Server name to request (SNI), optional */\n    &ssl_error);\n\nif(ssl_context == NULL || ssl_error != REDIS_SSL_CTX_NONE) {\n    /* Handle error and abort... */\n    /* e.g.\n    printf(\"SSL error: %s\\n\",\n        (ssl_error != REDIS_SSL_CTX_NONE) ?\n            redisSSLContextGetError(ssl_error) : \"Unknown error\");\n    // Abort\n    */\n}\n\n/* Create Redis context and establish connection */\nc = redisConnect(\"localhost\", 6443);\nif (c == NULL || c->err) {\n    /* Handle error and abort... 
*/\n}\n\n/* Negotiate SSL/TLS */\nif (redisInitiateSSLWithContext(c, ssl_context) != REDIS_OK) {\n    /* Handle error, in c->err / c->errstr */\n}\n```\n\n## RESP3 PUSH replies\nRedis 6.0 introduced PUSH replies with the reply-type `>`.  These messages are generated spontaneously and can arrive at any time, so they must be handled using callbacks.\n\n### Default behavior\nHiredis installs handlers on `redisContext` and `redisAsyncContext` by default, which will intercept and free any PUSH replies detected.  This means existing code will work as-is after upgrading to Redis 6 and switching to `RESP3`.\n\n### Custom PUSH handler prototypes\nThe callback prototypes differ between `redisContext` and `redisAsyncContext`.\n\n#### redisContext\n```c\nvoid my_push_handler(void *privdata, void *reply) {\n    /* Handle the reply */\n\n    /* Note: We need to free the reply in our custom handler for\n             blocking contexts.  This lets us keep the reply if\n             we want. */\n    freeReplyObject(reply);\n}\n```\n\n#### redisAsyncContext\n```c\nvoid my_async_push_handler(redisAsyncContext *ac, void *reply) {\n    /* Handle the reply */\n\n    /* Note:  Because async hiredis always frees replies, you should\n              not call freeReplyObject in an async push callback. */\n}\n```\n\n### Installing a custom handler\nThere are two ways to set your own PUSH handlers.\n\n1. Set `push_cb` or `async_push_cb` in the `redisOptions` struct and connect with `redisConnectWithOptions` or `redisAsyncConnectWithOptions`.\n    ```c\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, \"127.0.0.1\", 6379);\n    options.push_cb = my_push_handler;\n    redisContext *context = redisConnectWithOptions(&options);\n    ```\n2.  
Call `redisSetPushCallback` or `redisAsyncSetPushCallback` on a connected context.\n    ```c\n    redisContext *context = redisConnect(\"127.0.0.1\", 6379);\n    redisSetPushCallback(context, my_push_handler);\n    ```\n\n    _Note: `redisSetPushCallback` and `redisAsyncSetPushCallback` both return any currently configured handler, making it easy to override and then return to the old value._\n\n### Specifying no handler\nIf you have a unique use-case where you don't want hiredis to automatically intercept and free PUSH replies, you will want to configure no handler at all.  This can be done in two ways.\n1.  Set the `REDIS_OPT_NO_PUSH_AUTOFREE` flag in `redisOptions` and leave the callback function pointer `NULL`.\n    ```c\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, \"127.0.0.1\", 6379);\n    options.options |= REDIS_OPT_NO_PUSH_AUTOFREE;\n    redisContext *context = redisConnectWithOptions(&options);\n    ```\n2.  Call `redisSetPushCallback` with `NULL` once connected.\n    ```c\n    redisContext *context = redisConnect(\"127.0.0.1\", 6379);\n    redisSetPushCallback(context, NULL);\n    ```\n\n    _Note:  With no handler configured, calls to `redisCommand` may generate more than one reply, so this strategy is only applicable when there's some kind of blocking `redisGetReply()` loop (e.g. `MONITOR` or `SUBSCRIBE` workloads)._\n\n## Allocator injection\n\nHiredis uses a pass-thru structure of function pointers defined in [alloc.h](https://github.com/redis/hiredis/blob/f5d25850/alloc.h#L41) that contains the currently configured allocation and deallocation functions.  
By default they just point to libc (`malloc`, `calloc`, `realloc`, etc.).\n\n### Overriding\n\nOne can override the allocators like so:\n\n```c\nhiredisAllocFuncs myfuncs = {\n    .mallocFn = my_malloc,\n    .callocFn = my_calloc,\n    .reallocFn = my_realloc,\n    .strdupFn = my_strdup,\n    .freeFn = my_free,\n};\n\n// Override allocators (function returns current allocators if needed)\nhiredisAllocFuncs orig = hiredisSetAllocators(&myfuncs);\n```\n\nTo reset the allocators to their default libc functions, simply call:\n\n```c\nhiredisResetAllocators();\n```\n\n## AUTHORS\n\nSalvatore Sanfilippo (antirez at gmail),\\\nPieter Noordhuis (pcnoordhuis at gmail),\\\nMichael Grunder (michael dot grunder at gmail)\n\n_Hiredis is released under the BSD license._\n"
  },
  {
    "path": "deps/hiredis/adapters/ae.h",
    "content": "/*\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_AE_H__\n#define __HIREDIS_AE_H__\n#include <sys/types.h>\n#include <ae.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct redisAeEvents {\n    redisAsyncContext *context;\n    aeEventLoop *loop;\n    int fd;\n    int reading, writing;\n} redisAeEvents;\n\nstatic void redisAeReadEvent(aeEventLoop *el, int fd, void *privdata, int mask) {\n    ((void)el); ((void)fd); ((void)mask);\n\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAsyncHandleRead(e->context);\n}\n\nstatic void redisAeWriteEvent(aeEventLoop *el, int fd, void *privdata, int mask) {\n    ((void)el); ((void)fd); ((void)mask);\n\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAsyncHandleWrite(e->context);\n}\n\nstatic void redisAeAddRead(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (!e->reading) {\n        e->reading = 1;\n        aeCreateFileEvent(loop,e->fd,AE_READABLE,redisAeReadEvent,e);\n    }\n}\n\nstatic void redisAeDelRead(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (e->reading) {\n        e->reading = 0;\n        aeDeleteFileEvent(loop,e->fd,AE_READABLE);\n    }\n}\n\nstatic void redisAeAddWrite(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (!e->writing) {\n        e->writing = 1;\n        
aeCreateFileEvent(loop,e->fd,AE_WRITABLE,redisAeWriteEvent,e);\n    }\n}\n\nstatic void redisAeDelWrite(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (e->writing) {\n        e->writing = 0;\n        aeDeleteFileEvent(loop,e->fd,AE_WRITABLE);\n    }\n}\n\nstatic void redisAeCleanup(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAeDelRead(privdata);\n    redisAeDelWrite(privdata);\n    hi_free(e);\n}\n\nstatic int redisAeAttach(aeEventLoop *loop, redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisAeEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisAeEvents*)hi_malloc(sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n\n    e->context = ac;\n    e->loop = loop;\n    e->fd = c->fd;\n    e->reading = e->writing = 0;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisAeAddRead;\n    ac->ev.delRead = redisAeDelRead;\n    ac->ev.addWrite = redisAeAddWrite;\n    ac->ev.delWrite = redisAeDelWrite;\n    ac->ev.cleanup = redisAeCleanup;\n    ac->ev.data = e;\n\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/glib.h",
    "content": "#ifndef __HIREDIS_GLIB_H__\n#define __HIREDIS_GLIB_H__\n\n#include <glib.h>\n\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct\n{\n    GSource source;\n    redisAsyncContext *ac;\n    GPollFD poll_fd;\n} RedisSource;\n\nstatic void\nredis_source_add_read (gpointer data)\n{\n    RedisSource *source = (RedisSource *)data;\n    g_return_if_fail(source);\n    source->poll_fd.events |= G_IO_IN;\n    g_main_context_wakeup(g_source_get_context((GSource *)data));\n}\n\nstatic void\nredis_source_del_read (gpointer data)\n{\n    RedisSource *source = (RedisSource *)data;\n    g_return_if_fail(source);\n    source->poll_fd.events &= ~G_IO_IN;\n    g_main_context_wakeup(g_source_get_context((GSource *)data));\n}\n\nstatic void\nredis_source_add_write (gpointer data)\n{\n    RedisSource *source = (RedisSource *)data;\n    g_return_if_fail(source);\n    source->poll_fd.events |= G_IO_OUT;\n    g_main_context_wakeup(g_source_get_context((GSource *)data));\n}\n\nstatic void\nredis_source_del_write (gpointer data)\n{\n    RedisSource *source = (RedisSource *)data;\n    g_return_if_fail(source);\n    source->poll_fd.events &= ~G_IO_OUT;\n    g_main_context_wakeup(g_source_get_context((GSource *)data));\n}\n\nstatic void\nredis_source_cleanup (gpointer data)\n{\n    RedisSource *source = (RedisSource *)data;\n\n    g_return_if_fail(source);\n\n    redis_source_del_read(source);\n    redis_source_del_write(source);\n    /*\n     * It is not our responsibility to remove ourself from the\n     * current main loop. 
However, we will remove the GPollFD.\n     */\n    if (source->poll_fd.fd >= 0) {\n        g_source_remove_poll((GSource *)data, &source->poll_fd);\n        source->poll_fd.fd = -1;\n    }\n}\n\nstatic gboolean\nredis_source_prepare (GSource *source,\n                      gint    *timeout_)\n{\n    RedisSource *redis = (RedisSource *)source;\n    *timeout_ = -1;\n    return !!(redis->poll_fd.events & redis->poll_fd.revents);\n}\n\nstatic gboolean\nredis_source_check (GSource *source)\n{\n    RedisSource *redis = (RedisSource *)source;\n    return !!(redis->poll_fd.events & redis->poll_fd.revents);\n}\n\nstatic gboolean\nredis_source_dispatch (GSource      *source,\n                       GSourceFunc   callback,\n                       gpointer      user_data)\n{\n    RedisSource *redis = (RedisSource *)source;\n\n    if ((redis->poll_fd.revents & G_IO_OUT)) {\n        redisAsyncHandleWrite(redis->ac);\n        redis->poll_fd.revents &= ~G_IO_OUT;\n    }\n\n    if ((redis->poll_fd.revents & G_IO_IN)) {\n        redisAsyncHandleRead(redis->ac);\n        redis->poll_fd.revents &= ~G_IO_IN;\n    }\n\n    if (callback) {\n        return callback(user_data);\n    }\n\n    return TRUE;\n}\n\nstatic void\nredis_source_finalize (GSource *source)\n{\n    RedisSource *redis = (RedisSource *)source;\n\n    if (redis->poll_fd.fd >= 0) {\n        g_source_remove_poll(source, &redis->poll_fd);\n        redis->poll_fd.fd = -1;\n    }\n}\n\nstatic GSource *\nredis_source_new (redisAsyncContext *ac)\n{\n    static GSourceFuncs source_funcs = {\n        .prepare  = redis_source_prepare,\n        .check     = redis_source_check,\n        .dispatch = redis_source_dispatch,\n        .finalize = redis_source_finalize,\n    };\n    redisContext *c = &ac->c;\n    RedisSource *source;\n\n    g_return_val_if_fail(ac != NULL, NULL);\n\n    source = (RedisSource *)g_source_new(&source_funcs, sizeof *source);\n    if (source == NULL)\n        return NULL;\n\n    source->ac = ac;\n    
source->poll_fd.fd = c->fd;\n    source->poll_fd.events = 0;\n    source->poll_fd.revents = 0;\n    g_source_add_poll((GSource *)source, &source->poll_fd);\n\n    ac->ev.addRead = redis_source_add_read;\n    ac->ev.delRead = redis_source_del_read;\n    ac->ev.addWrite = redis_source_add_write;\n    ac->ev.delWrite = redis_source_del_write;\n    ac->ev.cleanup = redis_source_cleanup;\n    ac->ev.data = source;\n\n    return (GSource *)source;\n}\n\n#endif /* __HIREDIS_GLIB_H__ */\n"
  },
  {
    "path": "deps/hiredis/adapters/ivykis.h",
    "content": "#ifndef __HIREDIS_IVYKIS_H__\n#define __HIREDIS_IVYKIS_H__\n#include <iv.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct redisIvykisEvents {\n    redisAsyncContext *context;\n    struct iv_fd fd;\n} redisIvykisEvents;\n\nstatic void redisIvykisReadEvent(void *arg) {\n    redisAsyncContext *context = (redisAsyncContext *)arg;\n    redisAsyncHandleRead(context);\n}\n\nstatic void redisIvykisWriteEvent(void *arg) {\n    redisAsyncContext *context = (redisAsyncContext *)arg;\n    redisAsyncHandleWrite(context);\n}\n\nstatic void redisIvykisAddRead(void *privdata) {\n    redisIvykisEvents *e = (redisIvykisEvents*)privdata;\n    iv_fd_set_handler_in(&e->fd, redisIvykisReadEvent);\n}\n\nstatic void redisIvykisDelRead(void *privdata) {\n    redisIvykisEvents *e = (redisIvykisEvents*)privdata;\n    iv_fd_set_handler_in(&e->fd, NULL);\n}\n\nstatic void redisIvykisAddWrite(void *privdata) {\n    redisIvykisEvents *e = (redisIvykisEvents*)privdata;\n    iv_fd_set_handler_out(&e->fd, redisIvykisWriteEvent);\n}\n\nstatic void redisIvykisDelWrite(void *privdata) {\n    redisIvykisEvents *e = (redisIvykisEvents*)privdata;\n    iv_fd_set_handler_out(&e->fd, NULL);\n}\n\nstatic void redisIvykisCleanup(void *privdata) {\n    redisIvykisEvents *e = (redisIvykisEvents*)privdata;\n\n    iv_fd_unregister(&e->fd);\n    hi_free(e);\n}\n\nstatic int redisIvykisAttach(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisIvykisEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisIvykisEvents*)hi_malloc(sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n\n    e->context = ac;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisIvykisAddRead;\n    ac->ev.delRead = redisIvykisDelRead;\n    ac->ev.addWrite = redisIvykisAddWrite;\n  
  ac->ev.delWrite = redisIvykisDelWrite;\n    ac->ev.cleanup = redisIvykisCleanup;\n    ac->ev.data = e;\n\n    /* Initialize and install read/write events */\n    IV_FD_INIT(&e->fd);\n    e->fd.fd = c->fd;\n    e->fd.handler_in = redisIvykisReadEvent;\n    e->fd.handler_out = redisIvykisWriteEvent;\n    e->fd.handler_err = NULL;\n    e->fd.cookie = e->context;\n\n    iv_fd_register(&e->fd);\n\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/libev.h",
    "content": "/*\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_LIBEV_H__\n#define __HIREDIS_LIBEV_H__\n#include <stdlib.h>\n#include <sys/types.h>\n#include <ev.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct redisLibevEvents {\n    redisAsyncContext *context;\n    struct ev_loop *loop;\n    int reading, writing;\n    ev_io rev, wev;\n    ev_timer timer;\n} redisLibevEvents;\n\nstatic void redisLibevReadEvent(EV_P_ ev_io *watcher, int revents) {\n#if EV_MULTIPLICITY\n    ((void)EV_A);\n#endif\n    ((void)revents);\n\n    redisLibevEvents *e = (redisLibevEvents*)watcher->data;\n    redisAsyncHandleRead(e->context);\n}\n\nstatic void redisLibevWriteEvent(EV_P_ ev_io *watcher, int revents) {\n#if EV_MULTIPLICITY\n    ((void)EV_A);\n#endif\n    ((void)revents);\n\n    redisLibevEvents *e = (redisLibevEvents*)watcher->data;\n    redisAsyncHandleWrite(e->context);\n}\n\nstatic void redisLibevAddRead(void *privdata) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n    if (!e->reading) {\n        e->reading = 1;\n        ev_io_start(EV_A_ &e->rev);\n    }\n}\n\nstatic void redisLibevDelRead(void *privdata) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n    if (e->reading) {\n        e->reading = 0;\n        ev_io_stop(EV_A_ &e->rev);\n    }\n}\n\nstatic void redisLibevAddWrite(void *privdata) {\n    
redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n    if (!e->writing) {\n        e->writing = 1;\n        ev_io_start(EV_A_ &e->wev);\n    }\n}\n\nstatic void redisLibevDelWrite(void *privdata) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n    if (e->writing) {\n        e->writing = 0;\n        ev_io_stop(EV_A_ &e->wev);\n    }\n}\n\nstatic void redisLibevStopTimer(void *privdata) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n    ev_timer_stop(EV_A_ &e->timer);\n}\n\nstatic void redisLibevCleanup(void *privdata) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n    redisLibevDelRead(privdata);\n    redisLibevDelWrite(privdata);\n    redisLibevStopTimer(privdata);\n    hi_free(e);\n}\n\nstatic void redisLibevTimeout(EV_P_ ev_timer *timer, int revents) {\n#if EV_MULTIPLICITY\n    ((void)EV_A);\n#endif\n    ((void)revents);\n    redisLibevEvents *e = (redisLibevEvents*)timer->data;\n    redisAsyncHandleTimeout(e->context);\n}\n\nstatic void redisLibevSetTimeout(void *privdata, struct timeval tv) {\n    redisLibevEvents *e = (redisLibevEvents*)privdata;\n#if EV_MULTIPLICITY\n    struct ev_loop *loop = e->loop;\n#endif\n\n    if (!ev_is_active(&e->timer)) {\n        ev_init(&e->timer, redisLibevTimeout);\n        e->timer.data = e;\n    }\n\n    e->timer.repeat = tv.tv_sec + tv.tv_usec / 1000000.00;\n    ev_timer_again(EV_A_ &e->timer);\n}\n\nstatic int redisLibevAttach(EV_P_ redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisLibevEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisLibevEvents*)hi_calloc(1, sizeof(*e));\n    if (e == NULL)\n        return 
REDIS_ERR;\n\n    e->context = ac;\n#if EV_MULTIPLICITY\n    e->loop = EV_A;\n#else\n    e->loop = NULL;\n#endif\n    e->rev.data = e;\n    e->wev.data = e;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisLibevAddRead;\n    ac->ev.delRead = redisLibevDelRead;\n    ac->ev.addWrite = redisLibevAddWrite;\n    ac->ev.delWrite = redisLibevDelWrite;\n    ac->ev.cleanup = redisLibevCleanup;\n    ac->ev.scheduleTimer = redisLibevSetTimeout;\n    ac->ev.data = e;\n\n    /* Initialize read/write events */\n    ev_io_init(&e->rev,redisLibevReadEvent,c->fd,EV_READ);\n    ev_io_init(&e->wev,redisLibevWriteEvent,c->fd,EV_WRITE);\n    return REDIS_OK;\n}\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/libevent.h",
    "content": "/*\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_LIBEVENT_H__\n#define __HIREDIS_LIBEVENT_H__\n#include <event2/event.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\n#define REDIS_LIBEVENT_DELETED 0x01\n#define REDIS_LIBEVENT_ENTERED 0x02\n\ntypedef struct redisLibeventEvents {\n    redisAsyncContext *context;\n    struct event *ev;\n    struct event_base *base;\n    struct timeval tv;\n    short flags;\n    short state;\n} redisLibeventEvents;\n\nstatic void redisLibeventDestroy(redisLibeventEvents *e) {\n    hi_free(e);\n}\n\nstatic void redisLibeventHandler(evutil_socket_t fd, short event, void *arg) {\n    ((void)fd);\n    redisLibeventEvents *e = (redisLibeventEvents*)arg;\n    e->state |= REDIS_LIBEVENT_ENTERED;\n\n    #define CHECK_DELETED() if (e->state & REDIS_LIBEVENT_DELETED) {\\\n        redisLibeventDestroy(e);\\\n        return; \\\n    }\n\n    if ((event & EV_TIMEOUT) && (e->state & REDIS_LIBEVENT_DELETED) == 0) {\n        redisAsyncHandleTimeout(e->context);\n        CHECK_DELETED();\n    }\n\n    if ((event & EV_READ) && e->context && (e->state & REDIS_LIBEVENT_DELETED) == 0) {\n        redisAsyncHandleRead(e->context);\n        CHECK_DELETED();\n    }\n\n    if ((event & EV_WRITE) && e->context && (e->state & REDIS_LIBEVENT_DELETED) == 0) {\n        redisAsyncHandleWrite(e->context);\n        CHECK_DELETED();\n    }\n\n    e->state &= ~REDIS_LIBEVENT_ENTERED;\n    #undef CHECK_DELETED\n}\n\nstatic void redisLibeventUpdate(void 
*privdata, short flag, int isRemove) {\n    redisLibeventEvents *e = (redisLibeventEvents *)privdata;\n    const struct timeval *tv = e->tv.tv_sec || e->tv.tv_usec ? &e->tv : NULL;\n\n    if (isRemove) {\n        if ((e->flags & flag) == 0) {\n            return;\n        } else {\n            e->flags &= ~flag;\n        }\n    } else {\n        if (e->flags & flag) {\n            return;\n        } else {\n            e->flags |= flag;\n        }\n    }\n\n    event_del(e->ev);\n    event_assign(e->ev, e->base, e->context->c.fd, e->flags | EV_PERSIST,\n                 redisLibeventHandler, privdata);\n    event_add(e->ev, tv);\n}\n\nstatic void redisLibeventAddRead(void *privdata) {\n    redisLibeventUpdate(privdata, EV_READ, 0);\n}\n\nstatic void redisLibeventDelRead(void *privdata) {\n    redisLibeventUpdate(privdata, EV_READ, 1);\n}\n\nstatic void redisLibeventAddWrite(void *privdata) {\n    redisLibeventUpdate(privdata, EV_WRITE, 0);\n}\n\nstatic void redisLibeventDelWrite(void *privdata) {\n    redisLibeventUpdate(privdata, EV_WRITE, 1);\n}\n\nstatic void redisLibeventCleanup(void *privdata) {\n    redisLibeventEvents *e = (redisLibeventEvents*)privdata;\n    if (!e) {\n        return;\n    }\n    event_del(e->ev);\n    event_free(e->ev);\n    e->ev = NULL;\n\n    if (e->state & REDIS_LIBEVENT_ENTERED) {\n        e->state |= REDIS_LIBEVENT_DELETED;\n    } else {\n        redisLibeventDestroy(e);\n    }\n}\n\nstatic void redisLibeventSetTimeout(void *privdata, struct timeval tv) {\n    redisLibeventEvents *e = (redisLibeventEvents *)privdata;\n    short flags = e->flags;\n    e->flags = 0;\n    e->tv = tv;\n    redisLibeventUpdate(e, flags, 0);\n}\n\nstatic int redisLibeventAttach(redisAsyncContext *ac, struct event_base *base) {\n    redisContext *c = &(ac->c);\n    redisLibeventEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context 
and r/w events */\n    e = (redisLibeventEvents*)hi_calloc(1, sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n\n    e->context = ac;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisLibeventAddRead;\n    ac->ev.delRead = redisLibeventDelRead;\n    ac->ev.addWrite = redisLibeventAddWrite;\n    ac->ev.delWrite = redisLibeventDelWrite;\n    ac->ev.cleanup = redisLibeventCleanup;\n    ac->ev.scheduleTimer = redisLibeventSetTimeout;\n    ac->ev.data = e;\n\n    /* Initialize and install read/write events */\n    e->ev = event_new(base, c->fd, EV_READ | EV_WRITE, redisLibeventHandler, e);\n    e->base = base;\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/libhv.h",
    "content": "#ifndef __HIREDIS_LIBHV_H__\n#define __HIREDIS_LIBHV_H__\n\n#include <hv/hloop.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct redisLibhvEvents {\n    hio_t *io;\n    htimer_t *timer;\n} redisLibhvEvents;\n\nstatic void redisLibhvHandleEvents(hio_t* io) {\n    redisAsyncContext* context = (redisAsyncContext*)hevent_userdata(io);\n    int events = hio_events(io);\n    int revents = hio_revents(io);\n    if (context && (events & HV_READ) && (revents & HV_READ)) {\n        redisAsyncHandleRead(context);\n    }\n    if (context && (events & HV_WRITE) && (revents & HV_WRITE)) {\n        redisAsyncHandleWrite(context);\n    }\n}\n\nstatic void redisLibhvAddRead(void *privdata) {\n    redisLibhvEvents* events = (redisLibhvEvents*)privdata;\n    hio_add(events->io, redisLibhvHandleEvents, HV_READ);\n}\n\nstatic void redisLibhvDelRead(void *privdata) {\n    redisLibhvEvents* events = (redisLibhvEvents*)privdata;\n    hio_del(events->io, HV_READ);\n}\n\nstatic void redisLibhvAddWrite(void *privdata) {\n    redisLibhvEvents* events = (redisLibhvEvents*)privdata;\n    hio_add(events->io, redisLibhvHandleEvents, HV_WRITE);\n}\n\nstatic void redisLibhvDelWrite(void *privdata) {\n    redisLibhvEvents* events = (redisLibhvEvents*)privdata;\n    hio_del(events->io, HV_WRITE);\n}\n\nstatic void redisLibhvCleanup(void *privdata) {\n    redisLibhvEvents* events = (redisLibhvEvents*)privdata;\n\n    if (events->timer)\n        htimer_del(events->timer);\n\n    hio_close(events->io);\n    hevent_set_userdata(events->io, NULL);\n\n    hi_free(events);\n}\n\nstatic void redisLibhvTimeout(htimer_t* timer) {\n    hio_t* io = (hio_t*)hevent_userdata(timer);\n    redisAsyncHandleTimeout((redisAsyncContext*)hevent_userdata(io));\n}\n\nstatic void redisLibhvSetTimeout(void *privdata, struct timeval tv) {\n    redisLibhvEvents* events;\n    uint32_t millis;\n    hloop_t* loop;\n\n    events = (redisLibhvEvents*)privdata;\n    millis = tv.tv_sec * 1000 + 
tv.tv_usec / 1000;\n\n    if (millis == 0) {\n        /* Libhv disallows zero'd timers so treat this as a delete or NO OP */\n        if (events->timer) {\n            htimer_del(events->timer);\n            events->timer = NULL;\n        }\n    } else if (events->timer == NULL) {\n        /* Add new timer */\n        loop = hevent_loop(events->io);\n        events->timer = htimer_add(loop, redisLibhvTimeout, millis, 1);\n        hevent_set_userdata(events->timer, events->io);\n    } else {\n        /* Update existing timer */\n        htimer_reset(events->timer, millis);\n    }\n}\n\nstatic int redisLibhvAttach(redisAsyncContext* ac, hloop_t* loop) {\n    redisContext *c = &(ac->c);\n    redisLibhvEvents *events;\n    hio_t* io = NULL;\n\n    if (ac->ev.data != NULL) {\n        return REDIS_ERR;\n    }\n\n    /* Create container struct to keep track of our io and any timer */\n    events = (redisLibhvEvents*)hi_malloc(sizeof(*events));\n    if (events == NULL) {\n        return REDIS_ERR;\n    }\n\n    io = hio_get(loop, c->fd);\n    if (io == NULL) {\n        hi_free(events);\n        return REDIS_ERR;\n    }\n\n    hevent_set_userdata(io, ac);\n\n    events->io = io;\n    events->timer = NULL;\n\n    ac->ev.addRead  = redisLibhvAddRead;\n    ac->ev.delRead  = redisLibhvDelRead;\n    ac->ev.addWrite = redisLibhvAddWrite;\n    ac->ev.delWrite = redisLibhvDelWrite;\n    ac->ev.cleanup  = redisLibhvCleanup;\n    ac->ev.scheduleTimer = redisLibhvSetTimeout;\n    ac->ev.data = events;\n\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/libsdevent.h",
    "content": "#ifndef HIREDIS_LIBSDEVENT_H\n#define HIREDIS_LIBSDEVENT_H\n#include <systemd/sd-event.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\n#define REDIS_LIBSDEVENT_DELETED 0x01\n#define REDIS_LIBSDEVENT_ENTERED 0x02\n\ntypedef struct redisLibsdeventEvents {\n    redisAsyncContext *context;\n    struct sd_event *event;\n    struct sd_event_source *fdSource;\n    struct sd_event_source *timerSource;\n    int fd;\n    short flags;\n    short state;\n} redisLibsdeventEvents;\n\nstatic void redisLibsdeventDestroy(redisLibsdeventEvents *e) {\n    if (e->fdSource) {\n        e->fdSource = sd_event_source_disable_unref(e->fdSource);\n    }\n    if (e->timerSource) {\n        e->timerSource = sd_event_source_disable_unref(e->timerSource);\n    }\n    sd_event_unref(e->event);\n    hi_free(e);\n}\n\nstatic int redisLibsdeventTimeoutHandler(sd_event_source *s, uint64_t usec, void *userdata) {\n    ((void)s);\n    ((void)usec);\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n    redisAsyncHandleTimeout(e->context);\n    return 0;\n}\n\nstatic int redisLibsdeventHandler(sd_event_source *s, int fd, uint32_t event, void *userdata) {\n    ((void)s);\n    ((void)fd);\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n    e->state |= REDIS_LIBSDEVENT_ENTERED;\n\n#define CHECK_DELETED() if (e->state & REDIS_LIBSDEVENT_DELETED) {\\\n        redisLibsdeventDestroy(e);\\\n        return 0; \\\n    }\n\n    if ((event & EPOLLIN) && e->context && (e->state & REDIS_LIBSDEVENT_DELETED) == 0) {\n        redisAsyncHandleRead(e->context);\n        CHECK_DELETED();\n    }\n\n    if ((event & EPOLLOUT) && e->context && (e->state & REDIS_LIBSDEVENT_DELETED) == 0) {\n        redisAsyncHandleWrite(e->context);\n        CHECK_DELETED();\n    }\n\n    e->state &= ~REDIS_LIBSDEVENT_ENTERED;\n#undef CHECK_DELETED\n\n    return 0;\n}\n\nstatic void redisLibsdeventAddRead(void *userdata) {\n    redisLibsdeventEvents *e = 
(redisLibsdeventEvents*)userdata;\n\n    if (e->flags & EPOLLIN) {\n        return;\n    }\n\n    e->flags |= EPOLLIN;\n\n    if (e->flags & EPOLLOUT) {\n        sd_event_source_set_io_events(e->fdSource, e->flags);\n    } else {\n        sd_event_add_io(e->event, &e->fdSource, e->fd, e->flags, redisLibsdeventHandler, e);\n    }\n}\n\nstatic void redisLibsdeventDelRead(void *userdata) {\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n\n    e->flags &= ~EPOLLIN;\n\n    if (e->flags) {\n        sd_event_source_set_io_events(e->fdSource, e->flags);\n    } else {\n        e->fdSource = sd_event_source_disable_unref(e->fdSource);\n    }\n}\n\nstatic void redisLibsdeventAddWrite(void *userdata) {\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n\n    if (e->flags & EPOLLOUT) {\n        return;\n    }\n\n    e->flags |= EPOLLOUT;\n\n    if (e->flags & EPOLLIN) {\n        sd_event_source_set_io_events(e->fdSource, e->flags);\n    } else {\n        sd_event_add_io(e->event, &e->fdSource, e->fd, e->flags, redisLibsdeventHandler, e);\n    }\n}\n\nstatic void redisLibsdeventDelWrite(void *userdata) {\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n\n    e->flags &= ~EPOLLOUT;\n\n    if (e->flags) {\n        sd_event_source_set_io_events(e->fdSource, e->flags);\n    } else {\n        e->fdSource = sd_event_source_disable_unref(e->fdSource);\n    }\n}\n\nstatic void redisLibsdeventCleanup(void *userdata) {\n    redisLibsdeventEvents *e = (redisLibsdeventEvents*)userdata;\n\n    if (!e) {\n        return;\n    }\n\n    if (e->state & REDIS_LIBSDEVENT_ENTERED) {\n        e->state |= REDIS_LIBSDEVENT_DELETED;\n    } else {\n        redisLibsdeventDestroy(e);\n    }\n}\n\nstatic void redisLibsdeventSetTimeout(void *userdata, struct timeval tv) {\n    redisLibsdeventEvents *e = (redisLibsdeventEvents *)userdata;\n\n    uint64_t usec = tv.tv_sec * 1000000 + tv.tv_usec;\n    if (!e->timerSource) {\n        
sd_event_add_time_relative(e->event, &e->timerSource, CLOCK_MONOTONIC, usec, 1, redisLibsdeventTimeoutHandler, e);\n    } else {\n        sd_event_source_set_time_relative(e->timerSource, usec);\n    }\n}\n\nstatic int redisLibsdeventAttach(redisAsyncContext *ac, struct sd_event *event) {\n    redisContext *c = &(ac->c);\n    redisLibsdeventEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisLibsdeventEvents*)hi_calloc(1, sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n\n    /* Initialize and increase event refcount */\n    e->context = ac;\n    e->event = event;\n    e->fd = c->fd;\n    sd_event_ref(event);\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisLibsdeventAddRead;\n    ac->ev.delRead = redisLibsdeventDelRead;\n    ac->ev.addWrite = redisLibsdeventAddWrite;\n    ac->ev.delWrite = redisLibsdeventDelWrite;\n    ac->ev.cleanup = redisLibsdeventCleanup;\n    ac->ev.scheduleTimer = redisLibsdeventSetTimeout;\n    ac->ev.data = e;\n\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/libuv.h",
    "content": "#ifndef __HIREDIS_LIBUV_H__\n#define __HIREDIS_LIBUV_H__\n#include <stdlib.h>\n#include <uv.h>\n#include \"../hiredis.h\"\n#include \"../async.h\"\n#include <string.h>\n\ntypedef struct redisLibuvEvents {\n    redisAsyncContext* context;\n    uv_poll_t          handle;\n    uv_timer_t         timer;\n    int                events;\n} redisLibuvEvents;\n\n\nstatic void redisLibuvPoll(uv_poll_t* handle, int status, int events) {\n    redisLibuvEvents* p = (redisLibuvEvents*)handle->data;\n    int ev = (status ? p->events : events);\n\n    if (p->context != NULL && (ev & UV_READABLE)) {\n        redisAsyncHandleRead(p->context);\n    }\n    if (p->context != NULL && (ev & UV_WRITABLE)) {\n        redisAsyncHandleWrite(p->context);\n    }\n}\n\n\nstatic void redisLibuvAddRead(void *privdata) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    if (p->events & UV_READABLE) {\n        return;\n    }\n\n    p->events |= UV_READABLE;\n\n    uv_poll_start(&p->handle, p->events, redisLibuvPoll);\n}\n\n\nstatic void redisLibuvDelRead(void *privdata) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    p->events &= ~UV_READABLE;\n\n    if (p->events) {\n        uv_poll_start(&p->handle, p->events, redisLibuvPoll);\n    } else {\n        uv_poll_stop(&p->handle);\n    }\n}\n\n\nstatic void redisLibuvAddWrite(void *privdata) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    if (p->events & UV_WRITABLE) {\n        return;\n    }\n\n    p->events |= UV_WRITABLE;\n\n    uv_poll_start(&p->handle, p->events, redisLibuvPoll);\n}\n\n\nstatic void redisLibuvDelWrite(void *privdata) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    p->events &= ~UV_WRITABLE;\n\n    if (p->events) {\n        uv_poll_start(&p->handle, p->events, redisLibuvPoll);\n    } else {\n        uv_poll_stop(&p->handle);\n    }\n}\n\nstatic void on_timer_close(uv_handle_t *handle) {\n    redisLibuvEvents* p = (redisLibuvEvents*)handle->data;\n    
p->timer.data = NULL;\n    if (!p->handle.data) {\n        // both timer and handle are closed\n        hi_free(p);\n    }\n    // else, wait for `on_handle_close`\n}\n\nstatic void on_handle_close(uv_handle_t *handle) {\n    redisLibuvEvents* p = (redisLibuvEvents*)handle->data;\n    p->handle.data = NULL;\n    if (!p->timer.data) {\n        // timer never started, or timer already destroyed\n        hi_free(p);\n    }\n    // else, wait for `on_timer_close`\n}\n\n// libuv removed `status` parameter since v0.11.23\n// see: https://github.com/libuv/libuv/blob/v0.11.23/include/uv.h\n#if (UV_VERSION_MAJOR == 0 && UV_VERSION_MINOR < 11) || \\\n    (UV_VERSION_MAJOR == 0 && UV_VERSION_MINOR == 11 && UV_VERSION_PATCH < 23)\nstatic void redisLibuvTimeout(uv_timer_t *timer, int status) {\n    (void)status; // unused\n#else\nstatic void redisLibuvTimeout(uv_timer_t *timer) {\n#endif\n    redisLibuvEvents *e = (redisLibuvEvents*)timer->data;\n    redisAsyncHandleTimeout(e->context);\n}\n\nstatic void redisLibuvSetTimeout(void *privdata, struct timeval tv) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    uint64_t millisec = tv.tv_sec * 1000 + tv.tv_usec / 1000;\n    if (!p->timer.data) {\n        // timer is uninitialized\n        if (uv_timer_init(p->handle.loop, &p->timer) != 0) {\n            return;\n        }\n        p->timer.data = p;\n    }\n    // update the timeout if the timer has already started,\n    // or start the timer\n    uv_timer_start(&p->timer, redisLibuvTimeout, millisec, 0);\n}\n\nstatic void redisLibuvCleanup(void *privdata) {\n    redisLibuvEvents* p = (redisLibuvEvents*)privdata;\n\n    p->context = NULL; // indicate that context might no longer exist\n    if (p->timer.data) {\n        uv_close((uv_handle_t*)&p->timer, on_timer_close);\n    }\n    uv_close((uv_handle_t*)&p->handle, on_handle_close);\n}\n\n\nstatic int redisLibuvAttach(redisAsyncContext* ac, uv_loop_t* loop) {\n    redisContext *c = &(ac->c);\n\n    if (ac->ev.data != 
NULL) {\n        return REDIS_ERR;\n    }\n\n    ac->ev.addRead        = redisLibuvAddRead;\n    ac->ev.delRead        = redisLibuvDelRead;\n    ac->ev.addWrite       = redisLibuvAddWrite;\n    ac->ev.delWrite       = redisLibuvDelWrite;\n    ac->ev.cleanup        = redisLibuvCleanup;\n    ac->ev.scheduleTimer  = redisLibuvSetTimeout;\n\n    redisLibuvEvents* p = (redisLibuvEvents*)hi_malloc(sizeof(*p));\n    if (p == NULL)\n        return REDIS_ERR;\n\n    memset(p, 0, sizeof(*p));\n\n    if (uv_poll_init_socket(loop, &p->handle, c->fd) != 0) {\n        hi_free(p); // avoid leaking the events container on poll-init failure\n        return REDIS_ERR;\n    }\n\n    ac->ev.data    = p;\n    p->handle.data = p;\n    p->context     = ac;\n\n    return REDIS_OK;\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/macosx.h",
    "content": "//\n//  Created by Дмитрий Бахвалов on 13.07.15.\n//  Copyright (c) 2015 Dmitry Bakhvalov. All rights reserved.\n//\n\n#ifndef __HIREDIS_MACOSX_H__\n#define __HIREDIS_MACOSX_H__\n\n#include <CoreFoundation/CoreFoundation.h>\n\n#include \"../hiredis.h\"\n#include \"../async.h\"\n\ntypedef struct {\n    redisAsyncContext *context;\n    CFSocketRef socketRef;\n    CFRunLoopSourceRef sourceRef;\n} RedisRunLoop;\n\nstatic int freeRedisRunLoop(RedisRunLoop* redisRunLoop) {\n    if( redisRunLoop != NULL ) {\n        if( redisRunLoop->sourceRef != NULL ) {\n            CFRunLoopSourceInvalidate(redisRunLoop->sourceRef);\n            CFRelease(redisRunLoop->sourceRef);\n        }\n        if( redisRunLoop->socketRef != NULL ) {\n            CFSocketInvalidate(redisRunLoop->socketRef);\n            CFRelease(redisRunLoop->socketRef);\n        }\n        hi_free(redisRunLoop);\n    }\n    return REDIS_ERR;\n}\n\nstatic void redisMacOSAddRead(void *privdata) {\n    RedisRunLoop *redisRunLoop = (RedisRunLoop*)privdata;\n    CFSocketEnableCallBacks(redisRunLoop->socketRef, kCFSocketReadCallBack);\n}\n\nstatic void redisMacOSDelRead(void *privdata) {\n    RedisRunLoop *redisRunLoop = (RedisRunLoop*)privdata;\n    CFSocketDisableCallBacks(redisRunLoop->socketRef, kCFSocketReadCallBack);\n}\n\nstatic void redisMacOSAddWrite(void *privdata) {\n    RedisRunLoop *redisRunLoop = (RedisRunLoop*)privdata;\n    CFSocketEnableCallBacks(redisRunLoop->socketRef, kCFSocketWriteCallBack);\n}\n\nstatic void redisMacOSDelWrite(void *privdata) {\n    RedisRunLoop *redisRunLoop = (RedisRunLoop*)privdata;\n    CFSocketDisableCallBacks(redisRunLoop->socketRef, kCFSocketWriteCallBack);\n}\n\nstatic void redisMacOSCleanup(void *privdata) {\n    RedisRunLoop *redisRunLoop = (RedisRunLoop*)privdata;\n    freeRedisRunLoop(redisRunLoop);\n}\n\nstatic void redisMacOSAsyncCallback(CFSocketRef __unused s, CFSocketCallBackType callbackType, CFDataRef __unused address, const void __unused 
*data, void *info) {\n    redisAsyncContext* context = (redisAsyncContext*) info;\n\n    switch (callbackType) {\n        case kCFSocketReadCallBack:\n            redisAsyncHandleRead(context);\n            break;\n\n        case kCFSocketWriteCallBack:\n            redisAsyncHandleWrite(context);\n            break;\n\n        default:\n            break;\n    }\n}\n\nstatic int redisMacOSAttach(redisAsyncContext *redisAsyncCtx, CFRunLoopRef runLoop) {\n    redisContext *redisCtx = &(redisAsyncCtx->c);\n\n    /* Nothing should be attached when something is already attached */\n    if( redisAsyncCtx->ev.data != NULL ) return REDIS_ERR;\n\n    RedisRunLoop* redisRunLoop = (RedisRunLoop*) hi_calloc(1, sizeof(RedisRunLoop));\n    if (redisRunLoop == NULL)\n        return REDIS_ERR;\n\n    /* Setup redis stuff */\n    redisRunLoop->context = redisAsyncCtx;\n\n    redisAsyncCtx->ev.addRead  = redisMacOSAddRead;\n    redisAsyncCtx->ev.delRead  = redisMacOSDelRead;\n    redisAsyncCtx->ev.addWrite = redisMacOSAddWrite;\n    redisAsyncCtx->ev.delWrite = redisMacOSDelWrite;\n    redisAsyncCtx->ev.cleanup  = redisMacOSCleanup;\n    redisAsyncCtx->ev.data     = redisRunLoop;\n\n    /* Initialize and install read/write events */\n    CFSocketContext socketCtx = { 0, redisAsyncCtx, NULL, NULL, NULL };\n\n    redisRunLoop->socketRef = CFSocketCreateWithNative(NULL, redisCtx->fd,\n                                                       kCFSocketReadCallBack | kCFSocketWriteCallBack,\n                                                       redisMacOSAsyncCallback,\n                                                       &socketCtx);\n    if( !redisRunLoop->socketRef ) return freeRedisRunLoop(redisRunLoop);\n\n    redisRunLoop->sourceRef = CFSocketCreateRunLoopSource(NULL, redisRunLoop->socketRef, 0);\n    if( !redisRunLoop->sourceRef ) return freeRedisRunLoop(redisRunLoop);\n\n    CFRunLoopAddSource(runLoop, redisRunLoop->sourceRef, kCFRunLoopDefaultMode);\n\n    return 
REDIS_OK;\n}\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/adapters/poll.h",
    "content": "\n#ifndef HIREDIS_POLL_H\n#define HIREDIS_POLL_H\n\n#include \"../async.h\"\n#include \"../sockcompat.h\"\n#include <string.h> // for memset\n#include <errno.h>\n\n/* Values to return from redisPollTick */\n#define REDIS_POLL_HANDLED_READ    1\n#define REDIS_POLL_HANDLED_WRITE   2\n#define REDIS_POLL_HANDLED_TIMEOUT 4\n\n/* An adapter to allow manual polling of the async context by checking the state\n * of the underlying file descriptor.  Useful in cases where there is no formal\n * IO event loop but regular ticking can be used, such as in game engines. */\n\ntypedef struct redisPollEvents {\n    redisAsyncContext *context;\n    redisFD fd;\n    char reading, writing;\n    char in_tick;\n    char deleted;\n    double deadline;\n} redisPollEvents;\n\nstatic double redisPollTimevalToDouble(struct timeval *tv) {\n    if (tv == NULL)\n        return 0.0;\n    return tv->tv_sec + tv->tv_usec / 1000000.00;\n}\n\nstatic double redisPollGetNow(void) {\n#ifndef _MSC_VER\n    struct timeval tv;\n    gettimeofday(&tv,NULL);\n    return redisPollTimevalToDouble(&tv);\n#else\n    FILETIME ft;\n    ULARGE_INTEGER li;\n    GetSystemTimeAsFileTime(&ft);\n    li.HighPart = ft.dwHighDateTime;\n    li.LowPart = ft.dwLowDateTime;\n    return (double)li.QuadPart * 1e-7;\n#endif\n}\n\n/* Poll for io, handling any pending callbacks.  
The timeout argument can be\n * positive to wait for a maximum given time for IO, zero to poll, or negative\n * to wait forever */\nstatic int redisPollTick(redisAsyncContext *ac, double timeout) {\n    int reading, writing;\n    struct pollfd pfd;\n    int handled;\n    int ns;\n    int itimeout;\n\n    redisPollEvents *e = (redisPollEvents*)ac->ev.data;\n    if (!e)\n        return 0;\n\n    /* local flags, won't get changed during callbacks */\n    reading = e->reading;\n    writing = e->writing;\n    if (!reading && !writing)\n        return 0;\n\n    pfd.fd = e->fd;\n    pfd.events = 0;\n    if (reading)\n        pfd.events = POLLIN;   \n    if (writing)\n        pfd.events |= POLLOUT;\n\n    if (timeout >= 0.0) {\n        itimeout = (int)(timeout * 1000.0);\n    } else {\n        itimeout = -1;\n    }\n\n    ns = poll(&pfd, 1, itimeout);\n    if (ns < 0) {\n        /* ignore the EINTR error */\n        if (errno != EINTR)\n            return ns;\n        ns = 0;\n    }\n    \n    handled = 0;\n    e->in_tick = 1;\n    if (ns) {\n        if (reading && (pfd.revents & POLLIN)) {\n            redisAsyncHandleRead(ac);\n            handled |= REDIS_POLL_HANDLED_READ;\n        }\n        /* on Windows, connection failure is indicated with the Exception fdset.\n         * handle it the same as writable. */\n        if (writing && (pfd.revents & (POLLOUT | POLLERR))) {\n            /* context Read callback may have caused context to be deleted, e.g.\n               by doing an redisAsyncDisconnect() */\n            if (!e->deleted) {\n                redisAsyncHandleWrite(ac);\n                handled |= REDIS_POLL_HANDLED_WRITE;\n            }\n        }\n    }\n\n    /* perform timeouts */\n    if (!e->deleted && e->deadline != 0.0) {\n        double now = redisPollGetNow();\n        if (now >= e->deadline) {\n            /* deadline has passed.  
disable timeout and perform callback */\n            e->deadline = 0.0;\n            redisAsyncHandleTimeout(ac);\n            handled |= REDIS_POLL_HANDLED_TIMEOUT;\n        }\n    }\n\n    /* do a delayed cleanup if required */\n    if (e->deleted)\n        hi_free(e);\n    else\n        e->in_tick = 0;\n\n    return handled;\n}\n\nstatic void redisPollAddRead(void *data) {\n    redisPollEvents *e = (redisPollEvents*)data;\n    e->reading = 1;\n}\n\nstatic void redisPollDelRead(void *data) {\n    redisPollEvents *e = (redisPollEvents*)data;\n    e->reading = 0;\n}\n\nstatic void redisPollAddWrite(void *data) {\n    redisPollEvents *e = (redisPollEvents*)data;\n    e->writing = 1;\n}\n\nstatic void redisPollDelWrite(void *data) {\n    redisPollEvents *e = (redisPollEvents*)data;\n    e->writing = 0;\n}\n\nstatic void redisPollCleanup(void *data) {\n    redisPollEvents *e = (redisPollEvents*)data;\n\n    /* if we are currently processing a tick, postpone deletion */\n    if (e->in_tick)\n        e->deleted = 1;\n    else\n        hi_free(e);\n}\n\nstatic void redisPollScheduleTimer(void *data, struct timeval tv)\n{\n    redisPollEvents *e = (redisPollEvents*)data;\n    double now = redisPollGetNow();\n    e->deadline = now + redisPollTimevalToDouble(&tv);\n}\n\nstatic int redisPollAttach(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisPollEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisPollEvents*)hi_malloc(sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n    memset(e, 0, sizeof(*e));\n\n    e->context = ac;\n    e->fd = c->fd;\n    e->reading = e->writing = 0;\n    e->in_tick = e->deleted = 0;\n    e->deadline = 0.0;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisPollAddRead;\n    ac->ev.delRead = redisPollDelRead;\n    
ac->ev.addWrite = redisPollAddWrite;\n    ac->ev.delWrite = redisPollDelWrite;\n    ac->ev.scheduleTimer = redisPollScheduleTimer;\n    ac->ev.cleanup = redisPollCleanup;\n    ac->ev.data = e;\n\n    return REDIS_OK;\n}\n#endif /* HIREDIS_POLL_H */\n"
  },
  {
    "path": "deps/hiredis/adapters/qt.h",
    "content": "/*-\n * Copyright (C) 2014 Pietro Cerutti <gahr@gahr.ch>\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY AUTHOR AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_QT_H__\n#define __HIREDIS_QT_H__\n#include <QSocketNotifier>\n#include \"../async.h\"\n\nstatic void RedisQtAddRead(void *);\nstatic void RedisQtDelRead(void *);\nstatic void RedisQtAddWrite(void *);\nstatic void RedisQtDelWrite(void *);\nstatic void RedisQtCleanup(void *);\n\nclass RedisQtAdapter : public QObject {\n\n    Q_OBJECT\n\n    friend\n    void RedisQtAddRead(void * adapter) {\n        RedisQtAdapter * a = static_cast<RedisQtAdapter *>(adapter);\n        a->addRead();\n    }\n\n    friend\n    void RedisQtDelRead(void * adapter) {\n        RedisQtAdapter * a = 
static_cast<RedisQtAdapter *>(adapter);\n        a->delRead();\n    }\n\n    friend\n    void RedisQtAddWrite(void * adapter) {\n        RedisQtAdapter * a = static_cast<RedisQtAdapter *>(adapter);\n        a->addWrite();\n    }\n\n    friend\n    void RedisQtDelWrite(void * adapter) {\n        RedisQtAdapter * a = static_cast<RedisQtAdapter *>(adapter);\n        a->delWrite();\n    }\n\n    friend\n    void RedisQtCleanup(void * adapter) {\n        RedisQtAdapter * a = static_cast<RedisQtAdapter *>(adapter);\n        a->cleanup();\n    }\n\n    public:\n        RedisQtAdapter(QObject * parent = 0)\n            : QObject(parent), m_ctx(0), m_read(0), m_write(0) { }\n\n        ~RedisQtAdapter() {\n            if (m_ctx != 0) {\n                m_ctx->ev.data = NULL;\n            }\n        }\n\n        int setContext(redisAsyncContext * ac) {\n            if (ac->ev.data != NULL) {\n                return REDIS_ERR;\n            }\n            m_ctx = ac;\n            m_ctx->ev.data = this;\n            m_ctx->ev.addRead = RedisQtAddRead;\n            m_ctx->ev.delRead = RedisQtDelRead;\n            m_ctx->ev.addWrite = RedisQtAddWrite;\n            m_ctx->ev.delWrite = RedisQtDelWrite;\n            m_ctx->ev.cleanup = RedisQtCleanup;\n            return REDIS_OK;\n        }\n\n    private:\n        void addRead() {\n            if (m_read) return;\n            m_read = new QSocketNotifier(m_ctx->c.fd, QSocketNotifier::Read, 0);\n            connect(m_read, SIGNAL(activated(int)), this, SLOT(read()));\n        }\n\n        void delRead() {\n            if (!m_read) return;\n            delete m_read;\n            m_read = 0;\n        }\n\n        void addWrite() {\n            if (m_write) return;\n            m_write = new QSocketNotifier(m_ctx->c.fd, QSocketNotifier::Write, 0);\n            connect(m_write, SIGNAL(activated(int)), this, SLOT(write()));\n        }\n\n        void delWrite() {\n            if (!m_write) return;\n            delete m_write;\n         
   m_write = 0;\n        }\n\n        void cleanup() {\n            delRead();\n            delWrite();\n        }\n\n    private slots:\n        void read() { redisAsyncHandleRead(m_ctx); }\n        void write() { redisAsyncHandleWrite(m_ctx); }\n\n    private:\n        redisAsyncContext * m_ctx;\n        QSocketNotifier * m_read;\n        QSocketNotifier * m_write;\n};\n\n#endif /* !__HIREDIS_QT_H__ */\n"
  },
  {
    "path": "deps/hiredis/adapters/redismoduleapi.h",
    "content": "#ifndef __HIREDIS_REDISMODULEAPI_H__\n#define __HIREDIS_REDISMODULEAPI_H__\n\n#include \"redismodule.h\"\n\n#include \"../async.h\"\n#include \"../hiredis.h\"\n\n#include <sys/types.h>\n\ntypedef struct redisModuleEvents {\n    redisAsyncContext *context;\n    RedisModuleCtx *module_ctx;\n    int fd;\n    int reading, writing;\n    int timer_active;\n    RedisModuleTimerID timer_id;\n} redisModuleEvents;\n\nstatic inline void redisModuleReadEvent(int fd, void *privdata, int mask) {\n    (void) fd;\n    (void) mask;\n\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    redisAsyncHandleRead(e->context);\n}\n\nstatic inline void redisModuleWriteEvent(int fd, void *privdata, int mask) {\n    (void) fd;\n    (void) mask;\n\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    redisAsyncHandleWrite(e->context);\n}\n\nstatic inline void redisModuleAddRead(void *privdata) {\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    if (!e->reading) {\n        e->reading = 1;\n        RedisModule_EventLoopAdd(e->fd, REDISMODULE_EVENTLOOP_READABLE, redisModuleReadEvent, e);\n    }\n}\n\nstatic inline void redisModuleDelRead(void *privdata) {\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    if (e->reading) {\n        e->reading = 0;\n        RedisModule_EventLoopDel(e->fd, REDISMODULE_EVENTLOOP_READABLE);\n    }\n}\n\nstatic inline void redisModuleAddWrite(void *privdata) {\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    if (!e->writing) {\n        e->writing = 1;\n        RedisModule_EventLoopAdd(e->fd, REDISMODULE_EVENTLOOP_WRITABLE, redisModuleWriteEvent, e);\n    }\n}\n\nstatic inline void redisModuleDelWrite(void *privdata) {\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    if (e->writing) {\n        e->writing = 0;\n        RedisModule_EventLoopDel(e->fd, REDISMODULE_EVENTLOOP_WRITABLE);\n    }\n}\n\nstatic inline void redisModuleStopTimer(void *privdata) {\n    redisModuleEvents *e = 
(redisModuleEvents*)privdata;\n    if (e->timer_active) {\n        RedisModule_StopTimer(e->module_ctx, e->timer_id, NULL);\n    }\n    e->timer_active = 0;\n}\n\nstatic inline void redisModuleCleanup(void *privdata) {\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    redisModuleDelRead(privdata);\n    redisModuleDelWrite(privdata);\n    redisModuleStopTimer(privdata);\n    hi_free(e);\n}\n\nstatic inline void redisModuleTimeout(RedisModuleCtx *ctx, void *privdata) {\n    (void) ctx;\n\n    redisModuleEvents *e = (redisModuleEvents*)privdata;\n    e->timer_active = 0;\n    redisAsyncHandleTimeout(e->context);\n}\n\nstatic inline void redisModuleSetTimeout(void *privdata, struct timeval tv) {\n    redisModuleEvents* e = (redisModuleEvents*)privdata;\n\n    redisModuleStopTimer(privdata);\n\n    mstime_t millis = tv.tv_sec * 1000 + tv.tv_usec / 1000;\n    e->timer_id = RedisModule_CreateTimer(e->module_ctx, millis, redisModuleTimeout, e);\n    e->timer_active = 1;\n}\n\n/* Check if Redis version is compatible with the adapter. 
*/\nstatic inline int redisModuleCompatibilityCheck(void) {\n    if (!RedisModule_EventLoopAdd ||\n        !RedisModule_EventLoopDel ||\n        !RedisModule_CreateTimer ||\n        !RedisModule_StopTimer) {\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\nstatic inline int redisModuleAttach(redisAsyncContext *ac, RedisModuleCtx *module_ctx) {\n    redisContext *c = &(ac->c);\n    redisModuleEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if (ac->ev.data != NULL)\n        return REDIS_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisModuleEvents*)hi_malloc(sizeof(*e));\n    if (e == NULL)\n        return REDIS_ERR;\n\n    e->context = ac;\n    e->module_ctx = module_ctx;\n    e->fd = c->fd;\n    e->reading = e->writing = 0;\n    e->timer_active = 0;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisModuleAddRead;\n    ac->ev.delRead = redisModuleDelRead;\n    ac->ev.addWrite = redisModuleAddWrite;\n    ac->ev.delWrite = redisModuleDelWrite;\n    ac->ev.cleanup = redisModuleCleanup;\n    ac->ev.scheduleTimer = redisModuleSetTimeout;\n    ac->ev.data = e;\n\n    return REDIS_OK;\n}\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/alloc.c",
    "content": "/*\n * Copyright (c) 2020, Michael Grunder <michael dot grunder at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include \"alloc.h\"\n#include <string.h>\n#include <stdlib.h>\n\nhiredisAllocFuncs hiredisAllocFns = {\n    .mallocFn = malloc,\n    .callocFn = calloc,\n    .reallocFn = realloc,\n    .strdupFn = strdup,\n    .freeFn = free,\n};\n\n/* Override hiredis' allocators with ones supplied by the user */\nhiredisAllocFuncs hiredisSetAllocators(hiredisAllocFuncs *override) {\n    hiredisAllocFuncs orig = hiredisAllocFns;\n\n    hiredisAllocFns = *override;\n\n    return orig;\n}\n\n/* Reset allocators to use libc defaults */\nvoid hiredisResetAllocators(void) {\n    hiredisAllocFns = (hiredisAllocFuncs) {\n        .mallocFn = malloc,\n        .callocFn = calloc,\n        .reallocFn = realloc,\n        .strdupFn = strdup,\n        .freeFn = free,\n    };\n}\n\n#ifdef _WIN32\n\nvoid *hi_malloc(size_t size) {\n    return hiredisAllocFns.mallocFn(size);\n}\n\nvoid *hi_calloc(size_t nmemb, size_t size) {\n    /* Overflow check as the user can specify any arbitrary allocator;\n     * guard against division by zero when size is 0. */\n    if (size != 0 && SIZE_MAX / size < nmemb)\n        return NULL;\n\n    return hiredisAllocFns.callocFn(nmemb, size);\n}\n\nvoid *hi_realloc(void *ptr, size_t size) {\n    return hiredisAllocFns.reallocFn(ptr, size);\n}\n\nchar *hi_strdup(const char *str) {\n    return hiredisAllocFns.strdupFn(str);\n}\n\nvoid hi_free(void *ptr) {\n    hiredisAllocFns.freeFn(ptr);\n}\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/alloc.h",
    "content": "/*\n * Copyright (c) 2020, Michael Grunder <michael dot grunder at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef HIREDIS_ALLOC_H\n#define HIREDIS_ALLOC_H\n\n#include <stddef.h> /* for size_t */\n#include <stdint.h>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/* Structure pointing to our actually configured allocators */\ntypedef struct hiredisAllocFuncs {\n    void *(*mallocFn)(size_t);\n    void *(*callocFn)(size_t,size_t);\n    void *(*reallocFn)(void*,size_t);\n    char *(*strdupFn)(const char*);\n    void (*freeFn)(void*);\n} hiredisAllocFuncs;\n\nhiredisAllocFuncs hiredisSetAllocators(hiredisAllocFuncs *ha);\nvoid hiredisResetAllocators(void);\n\n#ifndef _WIN32\n\n/* Hiredis' configured allocator function pointer struct */\nextern hiredisAllocFuncs hiredisAllocFns;\n\nstatic inline void *hi_malloc(size_t size) {\n    return hiredisAllocFns.mallocFn(size);\n}\n\nstatic inline void *hi_calloc(size_t nmemb, size_t size) {\n    /* Overflow check as the user can specify any arbitrary allocator;\n     * guard against division by zero when size is 0. */\n    if (size != 0 && SIZE_MAX / size < nmemb)\n        return NULL;\n\n    return hiredisAllocFns.callocFn(nmemb, size);\n}\n\nstatic inline void *hi_realloc(void *ptr, size_t size) {\n    return hiredisAllocFns.reallocFn(ptr, size);\n}\n\nstatic inline char *hi_strdup(const char *str) {\n    return hiredisAllocFns.strdupFn(str);\n}\n\nstatic inline void hi_free(void *ptr) {\n    hiredisAllocFns.freeFn(ptr);\n}\n\n#else\n\nvoid *hi_malloc(size_t size);\nvoid *hi_calloc(size_t nmemb, size_t size);\nvoid *hi_realloc(void *ptr, size_t size);\nchar *hi_strdup(const char *str);\nvoid hi_free(void *ptr);\n\n#endif\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* HIREDIS_ALLOC_H */\n"
  },
  {
    "path": "deps/hiredis/appveyor.yml",
    "content": "# Appveyor configuration file for CI build of hiredis on Windows (under Cygwin)\nenvironment:\n    matrix:\n        - CYG_BASH: C:\\cygwin64\\bin\\bash\n          CC: gcc\n        - CYG_BASH: C:\\cygwin\\bin\\bash\n          CC: gcc\n          CFLAGS: -m32\n          CXXFLAGS: -m32\n          LDFLAGS: -m32\n\nclone_depth: 1\n\n# Attempt to ensure we don't try to convert line endings to Win32 CRLF as this will cause build to fail\ninit:\n    - git config --global core.autocrlf input\n\n# Install needed build dependencies\ninstall:\n    - '%CYG_BASH% -lc \"cygcheck -dc cygwin\"'\n\nbuild_script:\n    - 'echo building...'\n    - '%CYG_BASH% -lc \"cd $APPVEYOR_BUILD_FOLDER; exec 0</dev/null; mkdir build && cd build && cmake .. -G \\\"Unix Makefiles\\\" && make VERBOSE=1\"'\n"
  },
  {
    "path": "deps/hiredis/async.c",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include \"alloc.h\"\n#include <stdlib.h>\n#include <string.h>\n#ifndef _MSC_VER\n#include <strings.h>\n#endif\n#include <assert.h>\n#include <ctype.h>\n#include <errno.h>\n#include \"async.h\"\n#include \"net.h\"\n#include \"dict.c\"\n#include \"sds.h\"\n#include \"win32.h\"\n\n#include \"async_private.h\"\n\n#ifdef NDEBUG\n#undef assert\n#define assert(e) (void)(e)\n#endif\n\n/* Forward declarations of hiredis.c functions */\nint __redisAppendCommand(redisContext *c, const char *cmd, size_t len);\nvoid __redisSetError(redisContext *c, int type, const char *str);\n\n/* Functions managing dictionary of callbacks for pub/sub. 
*/\nstatic unsigned int callbackHash(const void *key) {\n    return dictGenHashFunction((const unsigned char *)key,\n                               hi_sdslen((const hisds)key));\n}\n\nstatic void *callbackValDup(void *privdata, const void *src) {\n    ((void) privdata);\n    redisCallback *dup;\n\n    dup = hi_malloc(sizeof(*dup));\n    if (dup == NULL)\n        return NULL;\n\n    memcpy(dup,src,sizeof(*dup));\n    return dup;\n}\n\nstatic int callbackKeyCompare(void *privdata, const void *key1, const void *key2) {\n    int l1, l2;\n    ((void) privdata);\n\n    l1 = hi_sdslen((const hisds)key1);\n    l2 = hi_sdslen((const hisds)key2);\n    if (l1 != l2) return 0;\n    return memcmp(key1,key2,l1) == 0;\n}\n\nstatic void callbackKeyDestructor(void *privdata, void *key) {\n    ((void) privdata);\n    hi_sdsfree((hisds)key);\n}\n\nstatic void callbackValDestructor(void *privdata, void *val) {\n    ((void) privdata);\n    hi_free(val);\n}\n\nstatic dictType callbackDict = {\n    callbackHash,\n    NULL,\n    callbackValDup,\n    callbackKeyCompare,\n    callbackKeyDestructor,\n    callbackValDestructor\n};\n\nstatic redisAsyncContext *redisAsyncInitialize(redisContext *c) {\n    redisAsyncContext *ac;\n    dict *channels = NULL, *patterns = NULL;\n\n    channels = dictCreate(&callbackDict,NULL);\n    if (channels == NULL)\n        goto oom;\n\n    patterns = dictCreate(&callbackDict,NULL);\n    if (patterns == NULL)\n        goto oom;\n\n    ac = hi_realloc(c,sizeof(redisAsyncContext));\n    if (ac == NULL)\n        goto oom;\n\n    c = &(ac->c);\n\n    /* The regular connect functions will always set the flag REDIS_CONNECTED.\n     * For the async API, we want to wait until the first write event is\n     * received before setting this flag, so reset it here. 
*/\n    c->flags &= ~REDIS_CONNECTED;\n\n    ac->err = 0;\n    ac->errstr = NULL;\n    ac->data = NULL;\n    ac->dataCleanup = NULL;\n\n    ac->ev.data = NULL;\n    ac->ev.addRead = NULL;\n    ac->ev.delRead = NULL;\n    ac->ev.addWrite = NULL;\n    ac->ev.delWrite = NULL;\n    ac->ev.cleanup = NULL;\n    ac->ev.scheduleTimer = NULL;\n\n    ac->onConnect = NULL;\n    ac->onConnectNC = NULL;\n    ac->onDisconnect = NULL;\n\n    ac->replies.head = NULL;\n    ac->replies.tail = NULL;\n    ac->sub.replies.head = NULL;\n    ac->sub.replies.tail = NULL;\n    ac->sub.channels = channels;\n    ac->sub.patterns = patterns;\n    ac->sub.pending_unsubs = 0;\n\n    return ac;\noom:\n    if (channels) dictRelease(channels);\n    if (patterns) dictRelease(patterns);\n    return NULL;\n}\n\n/* We want the error field to be accessible directly instead of requiring\n * an indirection to the redisContext struct. */\nstatic void __redisAsyncCopyError(redisAsyncContext *ac) {\n    if (!ac)\n        return;\n\n    redisContext *c = &(ac->c);\n    ac->err = c->err;\n    ac->errstr = c->errstr;\n}\n\nredisAsyncContext *redisAsyncConnectWithOptions(const redisOptions *options) {\n    redisOptions myOptions = *options;\n    redisContext *c;\n    redisAsyncContext *ac;\n\n    /* Clear any erroneously set sync callback and flag that we don't want to\n     * use freeReplyObject by default. 
*/\n    myOptions.push_cb = NULL;\n    myOptions.options |= REDIS_OPT_NO_PUSH_AUTOFREE;\n\n    myOptions.options |= REDIS_OPT_NONBLOCK;\n    c = redisConnectWithOptions(&myOptions);\n    if (c == NULL) {\n        return NULL;\n    }\n\n    ac = redisAsyncInitialize(c);\n    if (ac == NULL) {\n        redisFree(c);\n        return NULL;\n    }\n\n    /* Set any configured async push handler */\n    redisAsyncSetPushCallback(ac, myOptions.async_push_cb);\n\n    __redisAsyncCopyError(ac);\n    return ac;\n}\n\nredisAsyncContext *redisAsyncConnect(const char *ip, int port) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    return redisAsyncConnectWithOptions(&options);\n}\n\nredisAsyncContext *redisAsyncConnectBind(const char *ip, int port,\n                                         const char *source_addr) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.endpoint.tcp.source_addr = source_addr;\n    return redisAsyncConnectWithOptions(&options);\n}\n\nredisAsyncContext *redisAsyncConnectBindWithReuse(const char *ip, int port,\n                                                  const char *source_addr) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.options |= REDIS_OPT_REUSEADDR;\n    options.endpoint.tcp.source_addr = source_addr;\n    return redisAsyncConnectWithOptions(&options);\n}\n\nredisAsyncContext *redisAsyncConnectUnix(const char *path) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_UNIX(&options, path);\n    return redisAsyncConnectWithOptions(&options);\n}\n\nstatic int\nredisAsyncSetConnectCallbackImpl(redisAsyncContext *ac, redisConnectCallback *fn,\n                                 redisConnectCallbackNC *fn_nc)\n{\n    /* If either are already set, this is an error */\n    if (ac->onConnect || ac->onConnectNC)\n        return REDIS_ERR;\n\n    if (fn) {\n        ac->onConnect = fn;\n    } else if (fn_nc) {\n        
ac->onConnectNC = fn_nc;\n    }\n\n    /* The common way to detect an established connection is to wait for\n     * the first write event to be fired. This assumes the related event\n     * library functions are already set. */\n    _EL_ADD_WRITE(ac);\n\n    return REDIS_OK;\n}\n\nint redisAsyncSetConnectCallback(redisAsyncContext *ac, redisConnectCallback *fn) {\n    return redisAsyncSetConnectCallbackImpl(ac, fn, NULL);\n}\n\nint redisAsyncSetConnectCallbackNC(redisAsyncContext *ac, redisConnectCallbackNC *fn) {\n    return redisAsyncSetConnectCallbackImpl(ac, NULL, fn);\n}\n\nint redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn) {\n    if (ac->onDisconnect == NULL) {\n        ac->onDisconnect = fn;\n        return REDIS_OK;\n    }\n    return REDIS_ERR;\n}\n\n/* Helper functions to push/shift callbacks */\nstatic int __redisPushCallback(redisCallbackList *list, redisCallback *source) {\n    redisCallback *cb;\n\n    /* Copy callback from stack to heap */\n    cb = hi_malloc(sizeof(*cb));\n    if (cb == NULL)\n        return REDIS_ERR_OOM;\n\n    if (source != NULL) {\n        memcpy(cb,source,sizeof(*cb));\n        cb->next = NULL;\n    }\n\n    /* Store callback in list */\n    if (list->head == NULL)\n        list->head = cb;\n    if (list->tail != NULL)\n        list->tail->next = cb;\n    list->tail = cb;\n    return REDIS_OK;\n}\n\nstatic int __redisShiftCallback(redisCallbackList *list, redisCallback *target) {\n    redisCallback *cb = list->head;\n    if (cb != NULL) {\n        list->head = cb->next;\n        if (cb == list->tail)\n            list->tail = NULL;\n\n        /* Copy callback from heap to stack */\n        if (target != NULL)\n            memcpy(target,cb,sizeof(*cb));\n        hi_free(cb);\n        return REDIS_OK;\n    }\n    return REDIS_ERR;\n}\n\nstatic void __redisRunCallback(redisAsyncContext *ac, redisCallback *cb, redisReply *reply) {\n    redisContext *c = &(ac->c);\n    if (cb->fn != NULL) {\n     
   c->flags |= REDIS_IN_CALLBACK;\n        cb->fn(ac,reply,cb->privdata);\n        c->flags &= ~REDIS_IN_CALLBACK;\n    }\n}\n\nstatic void __redisRunPushCallback(redisAsyncContext *ac, redisReply *reply) {\n    if (ac->push_cb != NULL) {\n        ac->c.flags |= REDIS_IN_CALLBACK;\n        ac->push_cb(ac, reply);\n        ac->c.flags &= ~REDIS_IN_CALLBACK;\n    }\n}\n\nstatic void __redisRunConnectCallback(redisAsyncContext *ac, int status)\n{\n    if (ac->onConnect == NULL && ac->onConnectNC == NULL)\n        return;\n\n    if (!(ac->c.flags & REDIS_IN_CALLBACK)) {\n        ac->c.flags |= REDIS_IN_CALLBACK;\n        if (ac->onConnect) {\n            ac->onConnect(ac, status);\n        } else {\n            ac->onConnectNC(ac, status);\n        }\n        ac->c.flags &= ~REDIS_IN_CALLBACK;\n    } else {\n        /* already in callback */\n        if (ac->onConnect) {\n            ac->onConnect(ac, status);\n        } else {\n            ac->onConnectNC(ac, status);\n        }\n    }\n}\n\nstatic void __redisRunDisconnectCallback(redisAsyncContext *ac, int status)\n{\n    if (ac->onDisconnect) {\n        if (!(ac->c.flags & REDIS_IN_CALLBACK)) {\n            ac->c.flags |= REDIS_IN_CALLBACK;\n            ac->onDisconnect(ac, status);\n            ac->c.flags &= ~REDIS_IN_CALLBACK;\n        } else {\n            /* already in callback */\n            ac->onDisconnect(ac, status);\n        }\n    }\n}\n\n/* Helper function to free the context. */\nstatic void __redisAsyncFree(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisCallback cb;\n    dictIterator it;\n    dictEntry *de;\n\n    /* Execute pending callbacks with NULL reply. 
*/\n    while (__redisShiftCallback(&ac->replies,&cb) == REDIS_OK)\n        __redisRunCallback(ac,&cb,NULL);\n    while (__redisShiftCallback(&ac->sub.replies,&cb) == REDIS_OK)\n        __redisRunCallback(ac,&cb,NULL);\n\n    /* Run subscription callbacks with NULL reply */\n    if (ac->sub.channels) {\n        dictInitIterator(&it,ac->sub.channels);\n        while ((de = dictNext(&it)) != NULL)\n            __redisRunCallback(ac,dictGetEntryVal(de),NULL);\n\n        dictRelease(ac->sub.channels);\n    }\n\n    if (ac->sub.patterns) {\n        dictInitIterator(&it,ac->sub.patterns);\n        while ((de = dictNext(&it)) != NULL)\n            __redisRunCallback(ac,dictGetEntryVal(de),NULL);\n\n        dictRelease(ac->sub.patterns);\n    }\n\n    /* Signal event lib to clean up */\n    _EL_CLEANUP(ac);\n\n    /* Execute disconnect callback. When redisAsyncFree() initiated destroying\n     * this context, the status will always be REDIS_OK. */\n    if (c->flags & REDIS_CONNECTED) {\n        int status = ac->err == 0 ? REDIS_OK : REDIS_ERR;\n        if (c->flags & REDIS_FREEING)\n            status = REDIS_OK;\n        __redisRunDisconnectCallback(ac, status);\n    }\n\n    if (ac->dataCleanup) {\n        ac->dataCleanup(ac->data);\n    }\n\n    /* Cleanup self */\n    redisFree(c);\n}\n\n/* Free the async context. When this function is called from a callback,\n * control needs to be returned to redisProcessCallbacks() before actual\n * free'ing. To do so, a flag is set on the context which is picked up by\n * redisProcessCallbacks(). Otherwise, the context is immediately free'd. */\nvoid redisAsyncFree(redisAsyncContext *ac) {\n    if (ac == NULL)\n        return;\n\n    redisContext *c = &(ac->c);\n\n    c->flags |= REDIS_FREEING;\n    if (!(c->flags & REDIS_IN_CALLBACK))\n        __redisAsyncFree(ac);\n}\n\n/* Helper function to make the disconnect happen and clean up. 
*/\nvoid __redisAsyncDisconnect(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n\n    /* Make sure error is accessible if there is any */\n    __redisAsyncCopyError(ac);\n\n    if (ac->err == 0) {\n        /* For clean disconnects, there should be no pending callbacks. */\n        int ret = __redisShiftCallback(&ac->replies,NULL);\n        assert(ret == REDIS_ERR);\n    } else {\n        /* Disconnection is caused by an error, make sure that pending\n         * callbacks cannot call new commands. */\n        c->flags |= REDIS_DISCONNECTING;\n    }\n\n    /* cleanup event library on disconnect.\n     * this is safe to call multiple times */\n    _EL_CLEANUP(ac);\n\n    /* For non-clean disconnects, __redisAsyncFree() will execute pending\n     * callbacks with a NULL-reply. */\n    if (!(c->flags & REDIS_NO_AUTO_FREE)) {\n      __redisAsyncFree(ac);\n    }\n}\n\n/* Tries to do a clean disconnect from Redis, meaning it stops new commands\n * from being issued, but tries to flush the output buffer and execute\n * callbacks for all remaining replies. When this function is called from a\n * callback, there might be more replies and we can safely defer disconnecting\n * to redisProcessCallbacks(). Otherwise, we can only disconnect immediately\n * when there are no pending callbacks. 
*/\nvoid redisAsyncDisconnect(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    c->flags |= REDIS_DISCONNECTING;\n\n    /** unset the auto-free flag here, because disconnect undoes this */\n    c->flags &= ~REDIS_NO_AUTO_FREE;\n    if (!(c->flags & REDIS_IN_CALLBACK) && ac->replies.head == NULL)\n        __redisAsyncDisconnect(ac);\n}\n\nstatic int __redisGetSubscribeCallback(redisAsyncContext *ac, redisReply *reply, redisCallback *dstcb) {\n    redisContext *c = &(ac->c);\n    dict *callbacks;\n    redisCallback *cb = NULL;\n    dictEntry *de;\n    int pvariant;\n    char *stype;\n    hisds sname = NULL;\n\n    /* Match reply with the expected format of a pushed message.\n     * The type and number of elements (3 to 4) are specified at:\n     * https://redis.io/docs/latest/develop/interact/pubsub/#format-of-pushed-messages */\n    if ((reply->type == REDIS_REPLY_ARRAY && !(c->flags & REDIS_SUPPORTS_PUSH) && reply->elements >= 3) ||\n        reply->type == REDIS_REPLY_PUSH) {\n        assert(reply->element[0]->type == REDIS_REPLY_STRING);\n        stype = reply->element[0]->str;\n        pvariant = (tolower(stype[0]) == 'p') ? 1 : 0;\n\n        if (pvariant)\n            callbacks = ac->sub.patterns;\n        else\n            callbacks = ac->sub.channels;\n\n        /* Locate the right callback */\n        if (reply->element[1]->type == REDIS_REPLY_STRING) {\n            sname = hi_sdsnewlen(reply->element[1]->str,reply->element[1]->len);\n            if (sname == NULL) goto oom;\n\n            if ((de = dictFind(callbacks,sname)) != NULL) {\n                cb = dictGetEntryVal(de);\n                memcpy(dstcb,cb,sizeof(*dstcb));\n            }\n        }\n\n        /* If this is a subscribe reply, decrease the pending counter. 
*/\n        if (strcasecmp(stype+pvariant,\"subscribe\") == 0) {\n            assert(cb != NULL);\n            cb->pending_subs -= 1;\n\n        } else if (strcasecmp(stype+pvariant,\"unsubscribe\") == 0) {\n            if (cb == NULL)\n                ac->sub.pending_unsubs -= 1;\n            else if (cb->pending_subs == 0)\n                dictDelete(callbacks,sname);\n\n            /* If this was the last unsubscribe message, revert to\n             * non-subscribe mode. */\n            assert(reply->element[2]->type == REDIS_REPLY_INTEGER);\n\n            /* Unset subscribed flag only when no pipelined pending subscribe\n             * or pending unsubscribe replies. */\n            if (reply->element[2]->integer == 0\n                && dictSize(ac->sub.channels) == 0\n                && dictSize(ac->sub.patterns) == 0\n                && ac->sub.pending_unsubs == 0) {\n                c->flags &= ~REDIS_SUBSCRIBED;\n\n                /* Move ongoing regular command callbacks. */\n                redisCallback cb;\n                while (__redisShiftCallback(&ac->sub.replies,&cb) == REDIS_OK) {\n                    __redisPushCallback(&ac->replies,&cb);\n                }\n            }\n        }\n        hi_sdsfree(sname);\n    } else {\n        /* Shift callback for pending command in subscribed context. 
*/\n        __redisShiftCallback(&ac->sub.replies,dstcb);\n    }\n    return REDIS_OK;\noom:\n    __redisSetError(&(ac->c), REDIS_ERR_OOM, \"Out of memory\");\n    __redisAsyncCopyError(ac);\n    return REDIS_ERR;\n}\n\n#define redisIsSpontaneousPushReply(r) \\\n    (redisIsPushReply(r) && !redisIsSubscribeReply(r))\n\nstatic int redisIsSubscribeReply(redisReply *reply) {\n    char *str;\n    size_t len, off;\n\n    /* We will always have at least one string with the subscribe/message type */\n    if (reply->elements < 1 || reply->element[0]->type != REDIS_REPLY_STRING ||\n        reply->element[0]->len < sizeof(\"message\") - 1)\n    {\n        return 0;\n    }\n\n    /* Get the string/len moving past 'p' if needed */\n    off = tolower(reply->element[0]->str[0]) == 'p';\n    str = reply->element[0]->str + off;\n    len = reply->element[0]->len - off;\n\n    return !strncasecmp(str, \"subscribe\", len) ||\n           !strncasecmp(str, \"message\", len) ||\n           !strncasecmp(str, \"unsubscribe\", len);\n}\n\nvoid redisProcessCallbacks(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    void *reply = NULL;\n    int status;\n\n    while((status = redisGetReply(c,&reply)) == REDIS_OK) {\n        if (reply == NULL) {\n            /* When the connection is being disconnected and there are\n             * no more replies, this is the cue to really disconnect. */\n            if (c->flags & REDIS_DISCONNECTING && hi_sdslen(c->obuf) == 0\n                && ac->replies.head == NULL) {\n                __redisAsyncDisconnect(ac);\n                return;\n            }\n            /* When the connection is not being disconnected, simply stop\n             * trying to get replies and wait for the next loop tick. 
*/\n            break;\n        }\n\n        /* Keep track of push message support for subscribe handling */\n        if (redisIsPushReply(reply)) c->flags |= REDIS_SUPPORTS_PUSH;\n\n        /* Send any non-subscribe related PUSH messages to our PUSH handler\n         * while allowing subscribe related PUSH messages to pass through.\n         * This allows existing code to be backward compatible and work in\n         * either RESP2 or RESP3 mode. */\n        if (redisIsSpontaneousPushReply(reply)) {\n            __redisRunPushCallback(ac, reply);\n            c->reader->fn->freeObject(reply);\n            continue;\n        }\n\n        /* Even if the context is subscribed, pending regular\n         * callbacks will get a reply before pub/sub messages arrive. */\n        redisCallback cb = {NULL, NULL, 0, 0, NULL};\n        if (__redisShiftCallback(&ac->replies,&cb) != REDIS_OK) {\n            /*\n             * A spontaneous reply in a not-subscribed context can be the error\n             * reply that is sent when a new connection exceeds the maximum\n             * number of allowed connections on the server side.\n             *\n             * This is seen as an error instead of a regular reply because the\n             * server closes the connection after sending it.\n             *\n             * To prevent the error from being overwritten by an EOF error the\n             * connection is closed here. 
See issue #43.\n             *\n             * Another possibility is that the server is loading its dataset.\n             * In this case we also want to close the connection, and have the\n             * user wait until the server is ready to take our request.\n             */\n            if (((redisReply*)reply)->type == REDIS_REPLY_ERROR) {\n                c->err = REDIS_ERR_OTHER;\n                snprintf(c->errstr,sizeof(c->errstr),\"%s\",((redisReply*)reply)->str);\n                c->reader->fn->freeObject(reply);\n                __redisAsyncDisconnect(ac);\n                return;\n            }\n            /* No more regular callbacks and no errors, the context *must* be subscribed. */\n            assert(c->flags & REDIS_SUBSCRIBED);\n            if (c->flags & REDIS_SUBSCRIBED)\n                __redisGetSubscribeCallback(ac,reply,&cb);\n        }\n\n        if (cb.fn != NULL) {\n            __redisRunCallback(ac,&cb,reply);\n            if (!(c->flags & REDIS_NO_AUTO_FREE_REPLIES)){\n                c->reader->fn->freeObject(reply);\n            }\n\n            /* Proceed with free'ing when redisAsyncFree() was called. */\n            if (c->flags & REDIS_FREEING) {\n                __redisAsyncFree(ac);\n                return;\n            }\n        } else {\n            /* No callback for this reply. This can either be a NULL callback,\n             * or there were no callbacks to begin with. Either way, don't\n             * abort with an error, but simply ignore it because the client\n             * doesn't know what the server will spit out over the wire. 
*/\n            c->reader->fn->freeObject(reply);\n        }\n\n        /* If in monitor mode, repush the callback */\n        if (c->flags & REDIS_MONITORING) {\n            __redisPushCallback(&ac->replies,&cb);\n        }\n    }\n\n    /* Disconnect when there was an error reading the reply */\n    if (status != REDIS_OK)\n        __redisAsyncDisconnect(ac);\n}\n\nstatic void __redisAsyncHandleConnectFailure(redisAsyncContext *ac) {\n    __redisRunConnectCallback(ac, REDIS_ERR);\n    __redisAsyncDisconnect(ac);\n}\n\n/* Internal helper function to detect socket status the first time a read or\n * write event fires. When connecting was not successful, the connect callback\n * is called with a REDIS_ERR status and the context is free'd. */\nstatic int __redisAsyncHandleConnect(redisAsyncContext *ac) {\n    int completed = 0;\n    redisContext *c = &(ac->c);\n\n    if (redisCheckConnectDone(c, &completed) == REDIS_ERR) {\n        /* Error! */\n        if (redisCheckSocketError(c) == REDIS_ERR)\n            __redisAsyncCopyError(ac);\n        __redisAsyncHandleConnectFailure(ac);\n        return REDIS_ERR;\n    } else if (completed == 1) {\n        /* connected! */\n        if (c->connection_type == REDIS_CONN_TCP &&\n            redisSetTcpNoDelay(c) == REDIS_ERR) {\n            __redisAsyncHandleConnectFailure(ac);\n            return REDIS_ERR;\n        }\n\n        /* flag us as fully connected, but allow the callback\n         * to disconnect.  
For that reason, permit the function\n         * to delete the context here after callback return.\n         */\n        c->flags |= REDIS_CONNECTED;\n        __redisRunConnectCallback(ac, REDIS_OK);\n        if ((ac->c.flags & REDIS_DISCONNECTING)) {\n            redisAsyncDisconnect(ac);\n            return REDIS_ERR;\n        } else if ((ac->c.flags & REDIS_FREEING)) {\n            redisAsyncFree(ac);\n            return REDIS_ERR;\n        }\n        return REDIS_OK;\n    } else {\n        return REDIS_OK;\n    }\n}\n\nvoid redisAsyncRead(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n\n    if (redisBufferRead(c) == REDIS_ERR) {\n        __redisAsyncDisconnect(ac);\n    } else {\n        /* Always re-schedule reads */\n        _EL_ADD_READ(ac);\n        redisProcessCallbacks(ac);\n    }\n}\n\n/* This function should be called when the socket is readable.\n * It processes all replies that can be read and executes their callbacks.\n */\nvoid redisAsyncHandleRead(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    /* must not be called from a callback */\n    assert(!(c->flags & REDIS_IN_CALLBACK));\n\n    if (!(c->flags & REDIS_CONNECTED)) {\n        /* Abort if the connect was not successful. */\n        if (__redisAsyncHandleConnect(ac) != REDIS_OK)\n            return;\n        /* Try again later if the context is still not connected. 
*/\n        if (!(c->flags & REDIS_CONNECTED))\n            return;\n    }\n\n    c->funcs->async_read(ac);\n}\n\nvoid redisAsyncWrite(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    int done = 0;\n\n    if (redisBufferWrite(c,&done) == REDIS_ERR) {\n        __redisAsyncDisconnect(ac);\n    } else {\n        /* Continue writing when not done, stop writing otherwise */\n        if (!done)\n            _EL_ADD_WRITE(ac);\n        else\n            _EL_DEL_WRITE(ac);\n\n        /* Always schedule reads after writes */\n        _EL_ADD_READ(ac);\n    }\n}\n\nvoid redisAsyncHandleWrite(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    /* must not be called from a callback */\n    assert(!(c->flags & REDIS_IN_CALLBACK));\n\n    if (!(c->flags & REDIS_CONNECTED)) {\n        /* Abort if the connect was not successful. */\n        if (__redisAsyncHandleConnect(ac) != REDIS_OK)\n            return;\n        /* Try again later if the context is still not connected. */\n        if (!(c->flags & REDIS_CONNECTED))\n            return;\n    }\n\n    c->funcs->async_write(ac);\n}\n\nvoid redisAsyncHandleTimeout(redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisCallback cb;\n    /* must not be called from a callback */\n    assert(!(c->flags & REDIS_IN_CALLBACK));\n\n    if ((c->flags & REDIS_CONNECTED)) {\n        if (ac->replies.head == NULL && ac->sub.replies.head == NULL) {\n            /* Nothing to do - just an idle timeout */\n            return;\n        }\n\n        if (!ac->c.command_timeout ||\n            (!ac->c.command_timeout->tv_sec && !ac->c.command_timeout->tv_usec)) {\n            /* A belated connect timeout arrived; ignore it. */\n            return;\n        }\n    }\n\n    if (!c->err) {\n        __redisSetError(c, REDIS_ERR_TIMEOUT, \"Timeout\");\n        __redisAsyncCopyError(ac);\n    }\n\n    if (!(c->flags & REDIS_CONNECTED)) {\n        __redisRunConnectCallback(ac, REDIS_ERR);\n    }\n\n    while 
(__redisShiftCallback(&ac->replies, &cb) == REDIS_OK) {\n        __redisRunCallback(ac, &cb, NULL);\n    }\n\n    /**\n     * TODO: Don't automatically sever the connection,\n     * rather, allow to ignore <x> responses before the queue is clear\n     */\n    __redisAsyncDisconnect(ac);\n}\n\n/* Sets a pointer to the first argument and its length starting at p. Returns\n * the number of bytes to skip to get to the following argument. */\nstatic const char *nextArgument(const char *start, const char **str, size_t *len) {\n    const char *p = start;\n    if (p[0] != '$') {\n        p = strchr(p,'$');\n        if (p == NULL) return NULL;\n    }\n\n    *len = (int)strtol(p+1,NULL,10);\n    p = strchr(p,'\\r');\n    assert(p);\n    *str = p+2;\n    return p+2+(*len)+2;\n}\n\n/* Helper function for the redisAsyncCommand* family of functions. Writes a\n * formatted command to the output buffer and registers the provided callback\n * function with the context. */\nstatic int __redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *cmd, size_t len) {\n    redisContext *c = &(ac->c);\n    redisCallback cb;\n    struct dict *cbdict;\n    dictIterator it;\n    dictEntry *de;\n    redisCallback *existcb;\n    int pvariant, hasnext;\n    const char *cstr, *astr;\n    size_t clen, alen;\n    const char *p;\n    hisds sname;\n    int ret;\n\n    /* Don't accept new commands when the connection is about to be closed. */\n    if (c->flags & (REDIS_DISCONNECTING | REDIS_FREEING)) return REDIS_ERR;\n\n    /* Setup callback */\n    cb.fn = fn;\n    cb.privdata = privdata;\n    cb.pending_subs = 1;\n    cb.unsubscribe_sent = 0;\n\n    /* Find out which command will be appended. */\n    p = nextArgument(cmd,&cstr,&clen);\n    assert(p != NULL);\n    hasnext = (p[0] == '$');\n    pvariant = (tolower(cstr[0]) == 'p') ? 
1 : 0;\n    cstr += pvariant;\n    clen -= pvariant;\n\n    if (hasnext && strncasecmp(cstr,\"subscribe\\r\\n\",11) == 0) {\n        c->flags |= REDIS_SUBSCRIBED;\n\n        /* Add every channel/pattern to the list of subscription callbacks. */\n        while ((p = nextArgument(p,&astr,&alen)) != NULL) {\n            sname = hi_sdsnewlen(astr,alen);\n            if (sname == NULL)\n                goto oom;\n\n            if (pvariant)\n                cbdict = ac->sub.patterns;\n            else\n                cbdict = ac->sub.channels;\n\n            de = dictFind(cbdict,sname);\n\n            if (de != NULL) {\n                existcb = dictGetEntryVal(de);\n                cb.pending_subs = existcb->pending_subs + 1;\n            }\n\n            ret = dictReplace(cbdict,sname,&cb);\n\n            if (ret == 0) hi_sdsfree(sname);\n        }\n    } else if (strncasecmp(cstr,\"unsubscribe\\r\\n\",13) == 0) {\n        /* It is only useful to call (P)UNSUBSCRIBE when the context is\n         * subscribed to one or more channels or patterns. 
*/\n        if (!(c->flags & REDIS_SUBSCRIBED)) return REDIS_ERR;\n\n        if (pvariant)\n            cbdict = ac->sub.patterns;\n        else\n            cbdict = ac->sub.channels;\n\n        if (hasnext) {\n            /* Send an unsubscribe with specific channels/patterns,\n             * keeping track of the number of expected replies */\n            while ((p = nextArgument(p,&astr,&alen)) != NULL) {\n                sname = hi_sdsnewlen(astr,alen);\n                if (sname == NULL)\n                    goto oom;\n\n                de = dictFind(cbdict,sname);\n                if (de != NULL) {\n                    existcb = dictGetEntryVal(de);\n                    if (existcb->unsubscribe_sent == 0)\n                        existcb->unsubscribe_sent = 1;\n                    else\n                        /* Already sent, reply to be ignored */\n                        ac->sub.pending_unsubs += 1;\n                } else {\n                    /* Not subscribed to, reply to be ignored */\n                    ac->sub.pending_unsubs += 1;\n                }\n                hi_sdsfree(sname);\n            }\n        } else {\n            /* Send an unsubscribe without specific channels/patterns,\n             * keeping track of the number of expected replies */\n            int no_subs = 1;\n            dictInitIterator(&it,cbdict);\n            while ((de = dictNext(&it)) != NULL) {\n                existcb = dictGetEntryVal(de);\n                if (existcb->unsubscribe_sent == 0) {\n                    existcb->unsubscribe_sent = 1;\n                    no_subs = 0;\n                }\n            }\n            /* Unsubscribing from all channels/patterns when none are\n             * subscribed results in a single reply to be ignored. 
*/\n            if (no_subs == 1)\n                ac->sub.pending_unsubs += 1;\n        }\n\n        /* (P)UNSUBSCRIBE does not have its own response: every channel or\n         * pattern that is unsubscribed will receive a message. This means we\n         * should not append a callback function for this command. */\n    } else if (strncasecmp(cstr,\"monitor\\r\\n\",9) == 0) {\n        /* Set monitor flag and push callback */\n        c->flags |= REDIS_MONITORING;\n        if (__redisPushCallback(&ac->replies,&cb) != REDIS_OK)\n            goto oom;\n    } else {\n        if (c->flags & REDIS_SUBSCRIBED) {\n            if (__redisPushCallback(&ac->sub.replies,&cb) != REDIS_OK)\n                goto oom;\n        } else {\n            if (__redisPushCallback(&ac->replies,&cb) != REDIS_OK)\n                goto oom;\n        }\n    }\n\n    __redisAppendCommand(c,cmd,len);\n\n    /* Always schedule a write when the write buffer is non-empty */\n    _EL_ADD_WRITE(ac);\n\n    return REDIS_OK;\noom:\n    __redisSetError(&(ac->c), REDIS_ERR_OOM, \"Out of memory\");\n    __redisAsyncCopyError(ac);\n    return REDIS_ERR;\n}\n\nint redisvAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, va_list ap) {\n    char *cmd;\n    int len;\n    int status;\n    len = redisvFormatCommand(&cmd,format,ap);\n\n    /* We don't want to pass -1 or -2 to future functions as a length. */\n    if (len < 0)\n        return REDIS_ERR;\n\n    status = __redisAsyncCommand(ac,fn,privdata,cmd,len);\n    hi_free(cmd);\n    return status;\n}\n\nint redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, ...) 
{\n    va_list ap;\n    int status;\n    va_start(ap,format);\n    status = redisvAsyncCommand(ac,fn,privdata,format,ap);\n    va_end(ap);\n    return status;\n}\n\nint redisAsyncCommandArgv(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, int argc, const char **argv, const size_t *argvlen) {\n    hisds cmd;\n    long long len;\n    int status;\n    len = redisFormatSdsCommandArgv(&cmd,argc,argv,argvlen);\n    if (len < 0)\n        return REDIS_ERR;\n    status = __redisAsyncCommand(ac,fn,privdata,cmd,len);\n    hi_sdsfree(cmd);\n    return status;\n}\n\nint redisAsyncFormattedCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *cmd, size_t len) {\n    int status = __redisAsyncCommand(ac,fn,privdata,cmd,len);\n    return status;\n}\n\nredisAsyncPushFn *redisAsyncSetPushCallback(redisAsyncContext *ac, redisAsyncPushFn *fn) {\n    redisAsyncPushFn *old = ac->push_cb;\n    ac->push_cb = fn;\n    return old;\n}\n\nint redisAsyncSetTimeout(redisAsyncContext *ac, struct timeval tv) {\n    if (!ac->c.command_timeout) {\n        ac->c.command_timeout = hi_calloc(1, sizeof(tv));\n        if (ac->c.command_timeout == NULL) {\n            __redisSetError(&ac->c, REDIS_ERR_OOM, \"Out of memory\");\n            __redisAsyncCopyError(ac);\n            return REDIS_ERR;\n        }\n    }\n\n    if (tv.tv_sec != ac->c.command_timeout->tv_sec ||\n        tv.tv_usec != ac->c.command_timeout->tv_usec)\n    {\n        *ac->c.command_timeout = tv;\n    }\n\n    return REDIS_OK;\n}\n"
  },
  {
    "path": "deps/hiredis/async.h",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_ASYNC_H\n#define __HIREDIS_ASYNC_H\n#include \"hiredis.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nstruct redisAsyncContext; /* need forward declaration of redisAsyncContext */\nstruct dict; /* dictionary header is included in async.c */\n\n/* Reply callback prototype and container */\ntypedef void (redisCallbackFn)(struct redisAsyncContext*, void*, void*);\ntypedef struct redisCallback {\n    struct redisCallback *next; /* simple singly linked list */\n    redisCallbackFn *fn;\n    int pending_subs;\n    int unsubscribe_sent;\n    void *privdata;\n} redisCallback;\n\n/* List of callbacks for either regular replies or pub/sub */\ntypedef struct redisCallbackList {\n    redisCallback *head, *tail;\n} redisCallbackList;\n\n/* Connection callback prototypes */\ntypedef void (redisDisconnectCallback)(const struct redisAsyncContext*, int status);\ntypedef void (redisConnectCallback)(const struct redisAsyncContext*, int status);\ntypedef void (redisConnectCallbackNC)(struct redisAsyncContext *, int status);\ntypedef void(redisTimerCallback)(void *timer, void *privdata);\n\n/* Context for an async connection to Redis */\ntypedef struct redisAsyncContext {\n    /* Hold the regular context, so it can be realloc'ed. */\n    redisContext c;\n\n    /* Setup error flags so they can be used directly. 
*/\n    int err;\n    char *errstr;\n\n    /* Not used by hiredis */\n    void *data;\n    void (*dataCleanup)(void *privdata);\n\n    /* Event library data and hooks */\n    struct {\n        void *data;\n\n        /* Hooks that are called when the library expects to start\n         * reading/writing. These functions should be idempotent. */\n        void (*addRead)(void *privdata);\n        void (*delRead)(void *privdata);\n        void (*addWrite)(void *privdata);\n        void (*delWrite)(void *privdata);\n        void (*cleanup)(void *privdata);\n        void (*scheduleTimer)(void *privdata, struct timeval tv);\n    } ev;\n\n    /* Called when either the connection is terminated due to an error or per\n     * user request. The status is set accordingly (REDIS_OK, REDIS_ERR). */\n    redisDisconnectCallback *onDisconnect;\n\n    /* Called when the first write event was received. */\n    redisConnectCallback *onConnect;\n    redisConnectCallbackNC *onConnectNC;\n\n    /* Regular command callbacks */\n    redisCallbackList replies;\n\n    /* Address used for connect() */\n    struct sockaddr *saddr;\n    size_t addrlen;\n\n    /* Subscription callbacks */\n    struct {\n        redisCallbackList replies;\n        struct dict *channels;\n        struct dict *patterns;\n        int pending_unsubs;\n    } sub;\n\n    /* Any configured RESP3 PUSH handler */\n    redisAsyncPushFn *push_cb;\n} redisAsyncContext;\n\n/* Functions that proxy to hiredis */\nredisAsyncContext *redisAsyncConnectWithOptions(const redisOptions *options);\nredisAsyncContext *redisAsyncConnect(const char *ip, int port);\nredisAsyncContext *redisAsyncConnectBind(const char *ip, int port, const char *source_addr);\nredisAsyncContext *redisAsyncConnectBindWithReuse(const char *ip, int port,\n                                                  const char *source_addr);\nredisAsyncContext *redisAsyncConnectUnix(const char *path);\nint redisAsyncSetConnectCallback(redisAsyncContext *ac, 
redisConnectCallback *fn);\nint redisAsyncSetConnectCallbackNC(redisAsyncContext *ac, redisConnectCallbackNC *fn);\nint redisAsyncSetDisconnectCallback(redisAsyncContext *ac, redisDisconnectCallback *fn);\n\nredisAsyncPushFn *redisAsyncSetPushCallback(redisAsyncContext *ac, redisAsyncPushFn *fn);\nint redisAsyncSetTimeout(redisAsyncContext *ac, struct timeval tv);\nvoid redisAsyncDisconnect(redisAsyncContext *ac);\nvoid redisAsyncFree(redisAsyncContext *ac);\n\n/* Handle read/write events */\nvoid redisAsyncHandleRead(redisAsyncContext *ac);\nvoid redisAsyncHandleWrite(redisAsyncContext *ac);\nvoid redisAsyncHandleTimeout(redisAsyncContext *ac);\nvoid redisAsyncRead(redisAsyncContext *ac);\nvoid redisAsyncWrite(redisAsyncContext *ac);\n\n/* Command functions for an async context. Write the command to the\n * output buffer and register the provided callback. */\nint redisvAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, va_list ap);\nint redisAsyncCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *format, ...);\nint redisAsyncCommandArgv(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, int argc, const char **argv, const size_t *argvlen);\nint redisAsyncFormattedCommand(redisAsyncContext *ac, redisCallbackFn *fn, void *privdata, const char *cmd, size_t len);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/async_private.h",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_ASYNC_PRIVATE_H\n#define __HIREDIS_ASYNC_PRIVATE_H\n\n#define _EL_ADD_READ(ctx)                                         \\\n    do {                                                          \\\n        refreshTimeout(ctx);                                      \\\n        if ((ctx)->ev.addRead) (ctx)->ev.addRead((ctx)->ev.data); \\\n    } while (0)\n#define _EL_DEL_READ(ctx) do { \\\n        if ((ctx)->ev.delRead) (ctx)->ev.delRead((ctx)->ev.data); \\\n    } while(0)\n#define _EL_ADD_WRITE(ctx)                                          \\\n    do {                                                            \\\n        refreshTimeout(ctx);                                        \\\n        if ((ctx)->ev.addWrite) (ctx)->ev.addWrite((ctx)->ev.data); \\\n    } while (0)\n#define _EL_DEL_WRITE(ctx) do { \\\n        if ((ctx)->ev.delWrite) (ctx)->ev.delWrite((ctx)->ev.data); \\\n    } while(0)\n#define _EL_CLEANUP(ctx) do { \\\n        if ((ctx)->ev.cleanup) (ctx)->ev.cleanup((ctx)->ev.data); \\\n        ctx->ev.cleanup = NULL; \\\n    } while(0)\n\nstatic inline void refreshTimeout(redisAsyncContext *ctx) {\n    #define REDIS_TIMER_ISSET(tvp) \\\n        (tvp && ((tvp)->tv_sec || (tvp)->tv_usec))\n\n    #define REDIS_EL_TIMER(ac, tvp) \\\n        if ((ac)->ev.scheduleTimer && REDIS_TIMER_ISSET(tvp)) { \\\n            (ac)->ev.scheduleTimer((ac)->ev.data, *(tvp)); \\\n        }\n\n    if (ctx->c.flags & 
REDIS_CONNECTED) {\n        REDIS_EL_TIMER(ctx, ctx->c.command_timeout);\n    } else {\n        REDIS_EL_TIMER(ctx, ctx->c.connect_timeout);\n    }\n}\n\nvoid __redisAsyncDisconnect(redisAsyncContext *ac);\nvoid redisProcessCallbacks(redisAsyncContext *ac);\n\n#endif  /* __HIREDIS_ASYNC_PRIVATE_H */\n"
  },
  {
    "path": "deps/hiredis/dict.c",
    "content": "/* Hash table implementation.\n *\n * This file implements in memory hash tables with insert/del/replace/find/\n * get-random-element operations. Hash tables will auto resize if needed\n * tables of power of two in size are used, collisions are handled by\n * chaining. See the source code for more information... :)\n *\n * Copyright (c) 2006-2010, Salvatore Sanfilippo <antirez at gmail dot com>\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include \"alloc.h\"\n#include <stdlib.h>\n#include <assert.h>\n#include <limits.h>\n#include \"dict.h\"\n\n/* -------------------------- private prototypes ---------------------------- */\n\nstatic int _dictExpandIfNeeded(dict *ht);\nstatic unsigned long _dictNextPower(unsigned long size);\nstatic int _dictKeyIndex(dict *ht, const void *key);\nstatic int _dictInit(dict *ht, dictType *type, void *privDataPtr);\n\n/* -------------------------- hash functions -------------------------------- */\n\n/* Generic hash function (a popular one from Bernstein).\n * I tested a few and this was the best. */\nstatic unsigned int dictGenHashFunction(const unsigned char *buf, int len) {\n    unsigned int hash = 5381;\n\n    while (len--)\n        hash = ((hash << 5) + hash) + (*buf++); /* hash * 33 + c */\n    return hash;\n}\n\n/* ----------------------------- API implementation ------------------------- */\n\n/* Reset an hashtable already initialized with ht_init().\n * NOTE: This function should only called by ht_destroy(). 
*/\nstatic void _dictReset(dict *ht) {\n    ht->table = NULL;\n    ht->size = 0;\n    ht->sizemask = 0;\n    ht->used = 0;\n}\n\n/* Create a new hash table */\nstatic dict *dictCreate(dictType *type, void *privDataPtr) {\n    dict *ht = hi_malloc(sizeof(*ht));\n    if (ht == NULL)\n        return NULL;\n\n    _dictInit(ht,type,privDataPtr);\n    return ht;\n}\n\n/* Initialize the hash table */\nstatic int _dictInit(dict *ht, dictType *type, void *privDataPtr) {\n    _dictReset(ht);\n    ht->type = type;\n    ht->privdata = privDataPtr;\n    return DICT_OK;\n}\n\n/* Expand or create the hashtable */\nstatic int dictExpand(dict *ht, unsigned long size) {\n    dict n; /* the new hashtable */\n    unsigned long realsize = _dictNextPower(size), i;\n\n    /* the size is invalid if it is smaller than the number of\n     * elements already inside the hashtable */\n    if (ht->used > size)\n        return DICT_ERR;\n\n    _dictInit(&n, ht->type, ht->privdata);\n    n.size = realsize;\n    n.sizemask = realsize-1;\n    n.table = hi_calloc(realsize,sizeof(dictEntry*));\n    if (n.table == NULL)\n        return DICT_ERR;\n\n    /* Copy all the elements from the old to the new table:\n     * note that if the old hash table is empty ht->size is zero,\n     * so dictExpand just creates an hash table. */\n    n.used = ht->used;\n    for (i = 0; i < ht->size && ht->used > 0; i++) {\n        dictEntry *he, *nextHe;\n\n        if (ht->table[i] == NULL) continue;\n\n        /* For each hash entry on this slot... 
*/\n        he = ht->table[i];\n        while(he) {\n            unsigned int h;\n\n            nextHe = he->next;\n            /* Get the new element index */\n            h = dictHashKey(ht, he->key) & n.sizemask;\n            he->next = n.table[h];\n            n.table[h] = he;\n            ht->used--;\n            /* Pass to the next element */\n            he = nextHe;\n        }\n    }\n    assert(ht->used == 0);\n    hi_free(ht->table);\n\n    /* Remap the new hashtable in the old */\n    *ht = n;\n    return DICT_OK;\n}\n\n/* Add an element to the target hash table */\nstatic int dictAdd(dict *ht, void *key, void *val) {\n    int index;\n    dictEntry *entry;\n\n    /* Get the index of the new element, or -1 if\n     * the element already exists. */\n    if ((index = _dictKeyIndex(ht, key)) == -1)\n        return DICT_ERR;\n\n    /* Allocates the memory and stores key */\n    entry = hi_malloc(sizeof(*entry));\n    if (entry == NULL)\n        return DICT_ERR;\n\n    entry->next = ht->table[index];\n    ht->table[index] = entry;\n\n    /* Set the hash entry fields. */\n    dictSetHashKey(ht, entry, key);\n    dictSetHashVal(ht, entry, val);\n    ht->used++;\n    return DICT_OK;\n}\n\n/* Add an element, discarding the old if the key already exists.\n * Return 1 if the key was added from scratch, 0 if there was already an\n * element with such key and dictReplace() just performed a value update\n * operation. */\nstatic int dictReplace(dict *ht, void *key, void *val) {\n    dictEntry *entry, auxentry;\n\n    /* Try to add the element. If the key\n     * does not exists dictAdd will succeed. */\n    if (dictAdd(ht, key, val) == DICT_OK)\n        return 1;\n    /* It already exists, get the entry */\n    entry = dictFind(ht, key);\n    if (entry == NULL)\n        return 0;\n\n    /* Free the old value and set the new one */\n    /* Set the new value and free the old one. 
Note that it is important\n     * to do that in this order, as the value may just be exactly the same\n     * as the previous one. In this context, think to reference counting,\n     * you want to increment (set), and then decrement (free), and not the\n     * reverse. */\n    auxentry = *entry;\n    dictSetHashVal(ht, entry, val);\n    dictFreeEntryVal(ht, &auxentry);\n    return 0;\n}\n\n/* Search and remove an element */\nstatic int dictDelete(dict *ht, const void *key) {\n    unsigned int h;\n    dictEntry *de, *prevde;\n\n    if (ht->size == 0)\n        return DICT_ERR;\n    h = dictHashKey(ht, key) & ht->sizemask;\n    de = ht->table[h];\n\n    prevde = NULL;\n    while(de) {\n        if (dictCompareHashKeys(ht,key,de->key)) {\n            /* Unlink the element from the list */\n            if (prevde)\n                prevde->next = de->next;\n            else\n                ht->table[h] = de->next;\n\n            dictFreeEntryKey(ht,de);\n            dictFreeEntryVal(ht,de);\n            hi_free(de);\n            ht->used--;\n            return DICT_OK;\n        }\n        prevde = de;\n        de = de->next;\n    }\n    return DICT_ERR; /* not found */\n}\n\n/* Destroy an entire hash table */\nstatic int _dictClear(dict *ht) {\n    unsigned long i;\n\n    /* Free all the elements */\n    for (i = 0; i < ht->size && ht->used > 0; i++) {\n        dictEntry *he, *nextHe;\n\n        if ((he = ht->table[i]) == NULL) continue;\n        while(he) {\n            nextHe = he->next;\n            dictFreeEntryKey(ht, he);\n            dictFreeEntryVal(ht, he);\n            hi_free(he);\n            ht->used--;\n            he = nextHe;\n        }\n    }\n    /* Free the table and the allocated cache structure */\n    hi_free(ht->table);\n    /* Re-initialize the table */\n    _dictReset(ht);\n    return DICT_OK; /* never fails */\n}\n\n/* Clear & Release the hash table */\nstatic void dictRelease(dict *ht) {\n    _dictClear(ht);\n    hi_free(ht);\n}\n\nstatic 
dictEntry *dictFind(dict *ht, const void *key) {\n    dictEntry *he;\n    unsigned int h;\n\n    if (ht->size == 0) return NULL;\n    h = dictHashKey(ht, key) & ht->sizemask;\n    he = ht->table[h];\n    while(he) {\n        if (dictCompareHashKeys(ht, key, he->key))\n            return he;\n        he = he->next;\n    }\n    return NULL;\n}\n\nstatic void dictInitIterator(dictIterator *iter, dict *ht) {\n    iter->ht = ht;\n    iter->index = -1;\n    iter->entry = NULL;\n    iter->nextEntry = NULL;\n}\n\nstatic dictEntry *dictNext(dictIterator *iter) {\n    while (1) {\n        if (iter->entry == NULL) {\n            iter->index++;\n            if (iter->index >=\n                    (signed)iter->ht->size) break;\n            iter->entry = iter->ht->table[iter->index];\n        } else {\n            iter->entry = iter->nextEntry;\n        }\n        if (iter->entry) {\n            /* We need to save the 'next' here, the iterator user\n             * may delete the entry we are returning. */\n            iter->nextEntry = iter->entry->next;\n            return iter->entry;\n        }\n    }\n    return NULL;\n}\n\n/* ------------------------- private functions ------------------------------ */\n\n/* Expand the hash table if needed */\nstatic int _dictExpandIfNeeded(dict *ht) {\n    /* If the hash table is empty expand it to the initial size,\n     * if the table is \"full\" double its size. 
*/\n    if (ht->size == 0)\n        return dictExpand(ht, DICT_HT_INITIAL_SIZE);\n    if (ht->used == ht->size)\n        return dictExpand(ht, ht->size*2);\n    return DICT_OK;\n}\n\n/* Our hash table capability is a power of two */\nstatic unsigned long _dictNextPower(unsigned long size) {\n    unsigned long i = DICT_HT_INITIAL_SIZE;\n\n    if (size >= LONG_MAX) return LONG_MAX;\n    while(1) {\n        if (i >= size)\n            return i;\n        i *= 2;\n    }\n}\n\n/* Returns the index of a free slot that can be populated with\n * an hash entry for the given 'key'.\n * If the key already exists, -1 is returned. */\nstatic int _dictKeyIndex(dict *ht, const void *key) {\n    unsigned int h;\n    dictEntry *he;\n\n    /* Expand the hashtable if needed */\n    if (_dictExpandIfNeeded(ht) == DICT_ERR)\n        return -1;\n    /* Compute the key hash value */\n    h = dictHashKey(ht, key) & ht->sizemask;\n    /* Search if this slot does not already contain the given key */\n    he = ht->table[h];\n    while(he) {\n        if (dictCompareHashKeys(ht, key, he->key))\n            return -1;\n        he = he->next;\n    }\n    return h;\n}\n\n"
  },
  {
    "path": "deps/hiredis/dict.h",
    "content": "/* Hash table implementation.\n *\n * This file implements in memory hash tables with insert/del/replace/find/\n * get-random-element operations. Hash tables will auto resize if needed\n * tables of power of two in size are used, collisions are handled by\n * chaining. See the source code for more information... :)\n *\n * Copyright (c) 2006-2010, Salvatore Sanfilippo <antirez at gmail dot com>\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __DICT_H\n#define __DICT_H\n\n#define DICT_OK 0\n#define DICT_ERR 1\n\n/* Unused arguments generate annoying warnings... */\n#define DICT_NOTUSED(V) ((void) V)\n\ntypedef struct dictEntry {\n    void *key;\n    void *val;\n    struct dictEntry *next;\n} dictEntry;\n\ntypedef struct dictType {\n    unsigned int (*hashFunction)(const void *key);\n    void *(*keyDup)(void *privdata, const void *key);\n    void *(*valDup)(void *privdata, const void *obj);\n    int (*keyCompare)(void *privdata, const void *key1, const void *key2);\n    void (*keyDestructor)(void *privdata, void *key);\n    void (*valDestructor)(void *privdata, void *obj);\n} dictType;\n\ntypedef struct dict {\n    dictEntry **table;\n    dictType *type;\n    unsigned long size;\n    unsigned long sizemask;\n    unsigned long used;\n    void *privdata;\n} dict;\n\ntypedef struct dictIterator {\n    dict *ht;\n    int index;\n    dictEntry *entry, *nextEntry;\n} dictIterator;\n\n/* This is the initial size of every hash table */\n#define DICT_HT_INITIAL_SIZE     4\n\n/* ------------------------------- Macros ------------------------------------*/\n#define dictFreeEntryVal(ht, entry) \\\n    if ((ht)->type->valDestructor) \\\n        (ht)->type->valDestructor((ht)->privdata, (entry)->val)\n\n#define dictSetHashVal(ht, entry, _val_) do { \\\n    if ((ht)->type->valDup) \\\n        entry->val = (ht)->type->valDup((ht)->privdata, _val_); \\\n    else \\\n        
entry->val = (_val_); \\\n} while(0)\n\n#define dictFreeEntryKey(ht, entry) \\\n    if ((ht)->type->keyDestructor) \\\n        (ht)->type->keyDestructor((ht)->privdata, (entry)->key)\n\n#define dictSetHashKey(ht, entry, _key_) do { \\\n    if ((ht)->type->keyDup) \\\n        entry->key = (ht)->type->keyDup((ht)->privdata, _key_); \\\n    else \\\n        entry->key = (_key_); \\\n} while(0)\n\n#define dictCompareHashKeys(ht, key1, key2) \\\n    (((ht)->type->keyCompare) ? \\\n        (ht)->type->keyCompare((ht)->privdata, key1, key2) : \\\n        (key1) == (key2))\n\n#define dictHashKey(ht, key) (ht)->type->hashFunction(key)\n\n#define dictGetEntryKey(he) ((he)->key)\n#define dictGetEntryVal(he) ((he)->val)\n#define dictSlots(ht) ((ht)->size)\n#define dictSize(ht) ((ht)->used)\n\n/* API */\nstatic unsigned int dictGenHashFunction(const unsigned char *buf, int len);\nstatic dict *dictCreate(dictType *type, void *privDataPtr);\nstatic int dictExpand(dict *ht, unsigned long size);\nstatic int dictAdd(dict *ht, void *key, void *val);\nstatic int dictReplace(dict *ht, void *key, void *val);\nstatic int dictDelete(dict *ht, const void *key);\nstatic void dictRelease(dict *ht);\nstatic dictEntry * dictFind(dict *ht, const void *key);\nstatic void dictInitIterator(dictIterator *iter, dict *ht);\nstatic dictEntry *dictNext(dictIterator *iter);\n\n#endif /* __DICT_H */\n"
  },
  {
    "path": "deps/hiredis/examples/CMakeLists.txt",
    "content": "INCLUDE(FindPkgConfig)\n# Check for GLib\n\nPKG_CHECK_MODULES(GLIB2 glib-2.0)\nif (GLIB2_FOUND)\n    INCLUDE_DIRECTORIES(${GLIB2_INCLUDE_DIRS})\n    LINK_DIRECTORIES(${GLIB2_LIBRARY_DIRS})\n    ADD_EXECUTABLE(example-glib example-glib.c)\n    TARGET_LINK_LIBRARIES(example-glib hiredis ${GLIB2_LIBRARIES})\nENDIF(GLIB2_FOUND)\n\nFIND_PATH(LIBEV ev.h\n    HINTS /usr/local /usr/opt/local\n    ENV LIBEV_INCLUDE_DIR)\n\nif (LIBEV)\n    # Just compile and link with libev\n    ADD_EXECUTABLE(example-libev example-libev.c)\n    TARGET_LINK_LIBRARIES(example-libev hiredis ev)\nENDIF()\n\nFIND_PATH(LIBEVENT event.h)\nif (LIBEVENT)\n    ADD_EXECUTABLE(example-libevent example-libevent.c)\n    TARGET_LINK_LIBRARIES(example-libevent hiredis event)\nENDIF()\n\nFIND_PATH(LIBHV hv/hv.h)\nIF (LIBHV)\n    ADD_EXECUTABLE(example-libhv example-libhv.c)\n    TARGET_LINK_LIBRARIES(example-libhv hiredis hv)\nENDIF()\n\nFIND_PATH(LIBUV uv.h)\nIF (LIBUV)\n    ADD_EXECUTABLE(example-libuv example-libuv.c)\n    TARGET_LINK_LIBRARIES(example-libuv hiredis uv)\nENDIF()\n\nFIND_PATH(LIBSDEVENT systemd/sd-event.h)\nIF (LIBSDEVENT)\n    ADD_EXECUTABLE(example-libsdevent example-libsdevent.c)\n    TARGET_LINK_LIBRARIES(example-libsdevent hiredis systemd)\nENDIF()\n\nIF (APPLE)\n    FIND_LIBRARY(CF CoreFoundation)\n    ADD_EXECUTABLE(example-macosx example-macosx.c)\n    TARGET_LINK_LIBRARIES(example-macosx hiredis ${CF})\nENDIF()\n\nIF (ENABLE_SSL)\n    ADD_EXECUTABLE(example-ssl example-ssl.c)\n    TARGET_LINK_LIBRARIES(example-ssl hiredis hiredis_ssl)\nENDIF()\n\nADD_EXECUTABLE(example example.c)\nTARGET_LINK_LIBRARIES(example hiredis)\n\nADD_EXECUTABLE(example-push example-push.c)\nTARGET_LINK_LIBRARIES(example-push hiredis)\n"
  },
  {
    "path": "deps/hiredis/examples/example-ae.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/ae.h>\n\n/* Put event loop in the global scope, so it can be explicitly stopped */\nstatic aeEventLoop *loop;\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        aeStop(loop);\n        return;\n    }\n\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        aeStop(loop);\n        return;\n    }\n\n    printf(\"Disconnected...\\n\");\n    aeStop(loop);\n}\n\nint main (int argc, char **argv) {\n    signal(SIGPIPE, SIG_IGN);\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    loop = aeCreateEventLoop(64);\n    redisAeAttach(loop, c);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    aeMain(loop);\n    return 0;\n}\n\n"
  },
  {
    "path": "deps/hiredis/examples/example-glib.c",
    "content": "#include <stdlib.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/glib.h>\n\nstatic GMainLoop *mainloop;\n\nstatic void\nconnect_cb (const redisAsyncContext *ac G_GNUC_UNUSED,\n            int status)\n{\n    if (status != REDIS_OK) {\n        g_printerr(\"Failed to connect: %s\\n\", ac->errstr);\n        g_main_loop_quit(mainloop);\n    } else {\n        g_printerr(\"Connected...\\n\");\n    }\n}\n\nstatic void\ndisconnect_cb (const redisAsyncContext *ac G_GNUC_UNUSED,\n               int status)\n{\n    if (status != REDIS_OK) {\n        g_error(\"Failed to disconnect: %s\", ac->errstr);\n    } else {\n        g_printerr(\"Disconnected...\\n\");\n        g_main_loop_quit(mainloop);\n    }\n}\n\nstatic void\ncommand_cb(redisAsyncContext *ac,\n           gpointer r,\n           gpointer user_data G_GNUC_UNUSED)\n{\n    redisReply *reply = r;\n\n    if (reply) {\n        g_print(\"REPLY: %s\\n\", reply->str);\n    }\n\n    redisAsyncDisconnect(ac);\n}\n\ngint\nmain (gint argc     G_GNUC_UNUSED,\n      gchar *argv[] G_GNUC_UNUSED)\n{\n    redisAsyncContext *ac;\n    GMainContext *context = NULL;\n    GSource *source;\n\n    ac = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (ac->err) {\n        g_printerr(\"%s\\n\", ac->errstr);\n        exit(EXIT_FAILURE);\n    }\n\n    source = redis_source_new(ac);\n    mainloop = g_main_loop_new(context, FALSE);\n    g_source_attach(source, context);\n\n    redisAsyncSetConnectCallback(ac, connect_cb);\n    redisAsyncSetDisconnectCallback(ac, disconnect_cb);\n    redisAsyncCommand(ac, command_cb, NULL, \"SET key 1234\");\n    redisAsyncCommand(ac, command_cb, NULL, \"GET key\");\n\n    g_main_loop_run(mainloop);\n\n    return EXIT_SUCCESS;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-ivykis.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/ivykis.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    iv_init();\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisIvykisAttach(c);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n\n    iv_main();\n\n    iv_deinit();\n\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libev.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/libev.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisLibevAttach(EV_DEFAULT_ c);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    ev_loop(EV_DEFAULT_ 0);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libevent-ssl.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <hiredis_ssl.h>\n#include <async.h>\n#include <adapters/libevent.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    struct event_base *base = event_base_new();\n    if (argc < 6) {\n        fprintf(stderr,\n                \"Usage: %s <key> <host> <port> <cert> <certKey> [ca]\\n\", argv[0]);\n        exit(1);\n    }\n\n    const char *value = argv[1];\n    size_t nvalue = strlen(value);\n\n    const char *hostname = argv[2];\n    int port = atoi(argv[3]);\n\n    const char *cert = argv[4];\n    const char *certKey = argv[5];\n    const char *caCert = argc > 6 ? argv[6] : NULL;\n\n    redisSSLContext *ssl;\n    redisSSLContextError ssl_error = REDIS_SSL_CTX_NONE;\n\n    redisInitOpenSSL();\n\n    ssl = redisCreateSSLContext(caCert, NULL,\n            cert, certKey, NULL, &ssl_error);\n    if (!ssl) {\n        printf(\"Error: %s\\n\", redisSSLContextGetError(ssl_error));\n        return 1;\n    }\n\n    redisAsyncContext *c = redisAsyncConnect(hostname, port);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n    if (redisInitiateSSLWithContext(&c->c, ssl) != REDIS_OK) {\n        printf(\"SSL Error!\\n\");\n        exit(1);\n    }\n\n    redisLibeventAttach(c,base);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", value, nvalue);\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    event_base_dispatch(base);\n\n    redisFreeSSLContext(ssl);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libevent.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/libevent.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) {\n        if (c->errstr) {\n            printf(\"errstr: %s\\n\", c->errstr);\n        }\n        return;\n    }\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    struct event_base *base = event_base_new();\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, \"127.0.0.1\", 6379);\n    struct timeval tv = {0};\n    tv.tv_sec = 1;\n    options.connect_timeout = &tv;\n\n\n    redisAsyncContext *c = redisAsyncConnectWithOptions(&options);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisLibeventAttach(c,base);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    event_base_dispatch(base);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libhv.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/libhv.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid debugCallback(redisAsyncContext *c, void *r, void *privdata) {\n    (void)privdata;\n    redisReply *reply = r;\n\n    if (reply == NULL) {\n        printf(\"`DEBUG SLEEP` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    hloop_t* loop = hloop_new(HLOOP_FLAG_QUIT_WHEN_NO_ACTIVE_EVENTS);\n    redisLibhvAttach(c, loop);\n    redisAsyncSetTimeout(c, (struct timeval){.tv_sec = 0, .tv_usec = 500000});\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    redisAsyncCommand(c, debugCallback, NULL, \"DEBUG SLEEP %d\", 1);\n    hloop_run(loop);\n    hloop_free(&loop);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libsdevent.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/libsdevent.h>\n\nvoid debugCallback(redisAsyncContext *c, void *r, void *privdata) {\n    (void)privdata;\n    redisReply *reply = r;\n    if (reply == NULL) {\n        /* The DEBUG SLEEP command will almost always fail, because we have set a 1 second timeout */\n        printf(\"`DEBUG SLEEP` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n    /* Disconnect after receiving the reply of DEBUG SLEEP (which will not)*/\n    redisAsyncDisconnect(c);\n}\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) {\n        printf(\"`GET key` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n    printf(\"`GET key` result: argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* start another request that demonstrate timeout */\n    redisAsyncCommand(c, debugCallback, NULL, \"DEBUG SLEEP %f\", 1.5);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"connect error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"disconnect because of error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n    signal(SIGPIPE, SIG_IGN);\n\n    struct sd_event *event;\n    sd_event_default(&event);\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        printf(\"Error: %s\\n\", c->errstr);\n        redisAsyncFree(c);\n        return 1;\n    }\n\n    redisLibsdeventAttach(c,event);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncSetTimeout(c, (struct timeval){ .tv_sec = 1, .tv_usec = 0});\n\n    /*\n    In this demo, we first `set key`, then `get key` to demonstrate the basic usage of libsdevent adapter.\n    Then in `getCallback`, we start a `debug sleep` command to create 1.5 second long request.\n    Because we have set a 1 second timeout to the connection, the command will always fail with a\n    timeout error, which is shown in the `debugCallback`.\n    */\n\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n\n    /* sd-event does not quit when there are no handlers registered. Manually exit after 1.5 seconds */\n    sd_event_source *s;\n    sd_event_add_time_relative(event, &s, CLOCK_MONOTONIC, 1500000, 1, NULL, NULL);\n\n    sd_event_loop(event);\n    sd_event_source_disable_unref(s);\n    sd_event_unref(event);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-libuv.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/libuv.h>\n\nvoid debugCallback(redisAsyncContext *c, void *r, void *privdata) {\n    (void)privdata; //unused\n    redisReply *reply = r;\n    if (reply == NULL) {\n        /* The DEBUG SLEEP command will almost always fail, because we have set a 1 second timeout */\n        printf(\"`DEBUG SLEEP` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n    /* Disconnect after receiving the reply of DEBUG SLEEP (which will not)*/\n    redisAsyncDisconnect(c);\n}\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) {\n        printf(\"`GET key` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n    printf(\"`GET key` result: argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* start another request that demonstrate timeout */\n    redisAsyncCommand(c, debugCallback, NULL, \"DEBUG SLEEP %f\", 1.5);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"connect error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"disconnect because of error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n#ifndef _WIN32\n    signal(SIGPIPE, SIG_IGN);\n#endif\n\n    uv_loop_t* loop = uv_default_loop();\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisLibuvAttach(c,loop);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncSetTimeout(c, (struct timeval){ .tv_sec = 1, .tv_usec = 0});\n\n    /*\n    In this demo, we first `set key`, then `get key` to demonstrate the basic usage of libuv adapter.\n    Then in `getCallback`, we start a `debug sleep` command to create 1.5 second long request.\n    Because we have set a 1 second timeout to the connection, the command will always fail with a\n    timeout error, which is shown in the `debugCallback`.\n    */\n\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n\n    uv_run(loop, UV_RUN_DEFAULT);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-macosx.c",
    "content": "//\n//  Created by Дмитрий Бахвалов on 13.07.15.\n//  Copyright (c) 2015 Dmitry Bakhvalov. All rights reserved.\n//\n\n#include <stdio.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/macosx.h>\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    CFRunLoopStop(CFRunLoopGetCurrent());\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n    signal(SIGPIPE, SIG_IGN);\n\n    CFRunLoopRef loop = CFRunLoopGetCurrent();\n    if( !loop ) {\n        printf(\"Error: Cannot get current run loop\\n\");\n        return 1;\n    }\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisMacOSAttach(c, loop);\n\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n\n    CFRunLoopRun();\n\n    return 0;\n}\n\n"
  },
  {
    "path": "deps/hiredis/examples/example-poll.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n#include <unistd.h>\n\n#include <async.h>\n#include <adapters/poll.h>\n\n/* Put in the global scope, so that loop can be explicitly stopped */\nstatic int exit_loop = 0;\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) return;\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* Disconnect after receiving the reply to GET */\n    redisAsyncDisconnect(c);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        exit_loop = 1;\n        return;\n    }\n\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    exit_loop = 1;\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n\n    printf(\"Disconnected...\\n\");\n}\n\nint main (int argc, char **argv) {\n    signal(SIGPIPE, SIG_IGN);\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    redisPollAttach(c);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", argv[argc-1], strlen(argv[argc-1]));\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    while (!exit_loop)\n    {\n        redisPollTick(c, 0.1);\n    }\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-push.c",
    "content": "/*\n * Copyright (c) 2020, Michael Grunder <michael dot grunder at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <hiredis.h>\n\n#define KEY_COUNT 5\n\n#define panicAbort(fmt, ...) \\\n    do { \\\n        fprintf(stderr, \"%s:%d:%s(): \" fmt, __FILE__, __LINE__, __func__, __VA_ARGS__); \\\n        exit(-1); \\\n    } while (0)\n\nstatic void assertReplyAndFree(redisContext *context, redisReply *reply, int type) {\n    if (reply == NULL)\n        panicAbort(\"NULL reply from server (error: %s)\", context->errstr);\n\n    if (reply->type != type) {\n        if (reply->type == REDIS_REPLY_ERROR)\n            fprintf(stderr, \"Redis Error: %s\\n\", reply->str);\n\n        panicAbort(\"Expected reply type %d but got type %d\", type, reply->type);\n    }\n\n    freeReplyObject(reply);\n}\n\n/* Switch to the RESP3 protocol and enable client tracking */\nstatic void enableClientTracking(redisContext *c) {\n    redisReply *reply = redisCommand(c, \"HELLO 3\");\n    if (reply == NULL || c->err) {\n        panicAbort(\"NULL reply or server error (error: %s)\", c->errstr);\n    }\n\n    if (reply->type != REDIS_REPLY_MAP) {\n        fprintf(stderr, \"Error: Can't send HELLO 3 command.  Are you sure you're \");\n        fprintf(stderr, \"connected to redis-server >= 6.0.0?\\nRedis error: %s\\n\",\n                        reply->type == REDIS_REPLY_ERROR ? reply->str : \"(unknown)\");\n        exit(-1);\n    }\n\n    freeReplyObject(reply);\n\n    /* Enable client tracking */\n    reply = redisCommand(c, \"CLIENT TRACKING ON\");\n    assertReplyAndFree(c, reply, REDIS_REPLY_STATUS);\n}\n\nvoid pushReplyHandler(void *privdata, void *r) {\n    redisReply *reply = r;\n    int *invalidations = privdata;\n\n    /* Sanity check on the invalidation reply */\n    if (reply->type != REDIS_REPLY_PUSH || reply->elements != 2 ||\n        reply->element[1]->type != REDIS_REPLY_ARRAY ||\n        reply->element[1]->element[0]->type != REDIS_REPLY_STRING)\n    {\n        panicAbort(\"%s\", \"Can't parse PUSH message!\");\n    }\n\n    /* Increment our invalidation count */\n    *invalidations += 1;\n\n    printf(\"pushReplyHandler(): INVALIDATE '%s' (invalidation count: %d)\\n\",\n           reply->element[1]->element[0]->str, *invalidations);\n\n    freeReplyObject(reply);\n}\n\n/* We aren't actually freeing anything here, but it is included to show that we can\n * have hiredis call our data destructor when freeing the context */\nvoid privdata_dtor(void *privdata) {\n    unsigned int *icount = privdata;\n    printf(\"privdata_dtor():  In context privdata dtor (invalidations: %u)\\n\", *icount);\n}\n\nint main(int argc, char **argv) {\n    unsigned int j, invalidations = 0;\n    redisContext *c;\n    redisReply *reply;\n\n    const char *hostname = (argc > 1) ? argv[1] : \"127.0.0.1\";\n    int port = (argc > 2) ? atoi(argv[2]) : 6379;\n\n    redisOptions o = {0};\n    REDIS_OPTIONS_SET_TCP(&o, hostname, port);\n\n    /* Set our context privdata to the address of our invalidation counter.  Each\n     * time our PUSH handler is called, hiredis will pass the privdata for context.\n     *\n     * This could also be done after we create the context like so:\n     *\n     *    c->privdata = &invalidations;\n     *    c->free_privdata = privdata_dtor;\n     */\n    REDIS_OPTIONS_SET_PRIVDATA(&o, &invalidations, privdata_dtor);\n\n    /* Set our custom PUSH message handler */\n    o.push_cb = pushReplyHandler;\n\n    c = redisConnectWithOptions(&o);\n    if (c == NULL || c->err)\n        panicAbort(\"Connection error:  %s\", c ? c->errstr : \"OOM\");\n\n    /* Enable RESP3 and turn on client tracking */\n    enableClientTracking(c);\n\n    /* Set some keys and then read them back.  Once we do that, Redis will deliver\n     * invalidation push messages whenever the key is modified */\n    for (j = 0; j < KEY_COUNT; j++) {\n        reply = redisCommand(c, \"SET key:%d initial:%d\", j, j);\n        assertReplyAndFree(c, reply, REDIS_REPLY_STATUS);\n\n        reply = redisCommand(c, \"GET key:%d\", j);\n        assertReplyAndFree(c, reply, REDIS_REPLY_STRING);\n    }\n\n    /* Trigger invalidation messages by updating keys we just read */\n    for (j = 0; j < KEY_COUNT; j++) {\n        printf(\"            main(): SET key:%d update:%d\\n\", j, j);\n        reply = redisCommand(c, \"SET key:%d update:%d\", j, j);\n        assertReplyAndFree(c, reply, REDIS_REPLY_STATUS);\n        printf(\"            main(): SET REPLY OK\\n\");\n    }\n\n    printf(\"\\nTotal detected invalidations: %d, expected: %d\\n\", invalidations, KEY_COUNT);\n\n    /* Disconnect and free the context */\n    redisFree(c);\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-qt.cpp",
    "content": "#include <iostream>\nusing namespace std;\n\n#include <QCoreApplication>\n#include <QTimer>\n\n#include \"example-qt.h\"\n\nvoid getCallback(redisAsyncContext *, void * r, void * privdata) {\n\n    redisReply * reply = static_cast<redisReply *>(r);\n    ExampleQt * ex = static_cast<ExampleQt *>(privdata);\n    if (reply == nullptr || ex == nullptr) return;\n\n    cout << \"key: \" << reply->str << endl;\n\n    ex->finish();\n}\n\nvoid ExampleQt::run() {\n\n    m_ctx = redisAsyncConnect(\"localhost\", 6379);\n\n    if (m_ctx->err) {\n        cerr << \"Error: \" << m_ctx->errstr << endl;\n        redisAsyncFree(m_ctx);\n        emit finished();\n        return;\n    }\n\n    m_adapter.setContext(m_ctx);\n\n    redisAsyncCommand(m_ctx, NULL, NULL, \"SET key %s\", m_value);\n    redisAsyncCommand(m_ctx, getCallback, this, \"GET key\");\n}\n\nint main (int argc, char **argv) {\n\n    QCoreApplication app(argc, argv);\n\n    ExampleQt example(argv[argc-1]);\n\n    QObject::connect(&example, SIGNAL(finished()), &app, SLOT(quit()));\n    QTimer::singleShot(0, &example, SLOT(run()));\n\n    return app.exec();\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-qt.h",
    "content": "#ifndef __HIREDIS_EXAMPLE_QT_H\n#define __HIREDIS_EXAMPLE_QT_H\n\n#include <adapters/qt.h>\n\nclass ExampleQt : public QObject {\n\n    Q_OBJECT\n\n    public:\n        ExampleQt(const char * value, QObject * parent = 0)\n            : QObject(parent), m_value(value) {}\n\n    signals:\n        void finished();\n\n    public slots:\n        void run();\n\n    private:\n        void finish() { emit finished(); }\n\n    private:\n        const char * m_value;\n        redisAsyncContext * m_ctx;\n        RedisQtAdapter m_adapter;\n\n    friend\n    void getCallback(redisAsyncContext *, void *, void *);\n};\n\n#endif /* !__HIREDIS_EXAMPLE_QT_H */\n"
  },
  {
    "path": "deps/hiredis/examples/example-redismoduleapi.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <signal.h>\n\n#include <hiredis.h>\n#include <async.h>\n#include <adapters/redismoduleapi.h>\n\nvoid debugCallback(redisAsyncContext *c, void *r, void *privdata) {\n    (void)privdata; //unused\n    redisReply *reply = r;\n    if (reply == NULL) {\n        /* The DEBUG SLEEP command will almost always fail, because we have set a 1 second timeout */\n        printf(\"`DEBUG SLEEP` error: %s\\n\", c->errstr ? c->errstr : \"unknown error\");\n        return;\n    }\n    /* Disconnect after receiving the reply of DEBUG SLEEP (which will not)*/\n    redisAsyncDisconnect(c);\n}\n\nvoid getCallback(redisAsyncContext *c, void *r, void *privdata) {\n    redisReply *reply = r;\n    if (reply == NULL) {\n        if (c->errstr) {\n            printf(\"errstr: %s\\n\", c->errstr);\n        }\n        return;\n    }\n    printf(\"argv[%s]: %s\\n\", (char*)privdata, reply->str);\n\n    /* start another request that demonstrate timeout */\n    redisAsyncCommand(c, debugCallback, NULL, \"DEBUG SLEEP %f\", 1.5);\n}\n\nvoid connectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Connected...\\n\");\n}\n\nvoid disconnectCallback(const redisAsyncContext *c, int status) {\n    if (status != REDIS_OK) {\n        printf(\"Error: %s\\n\", c->errstr);\n        return;\n    }\n    printf(\"Disconnected...\\n\");\n}\n\n/*\n * This example requires Redis 7.0 or above.\n *\n * 1- Compile this file as a shared library. Directory of \"redismodule.h\" must\n *    be in the include path.\n *       gcc -fPIC -shared -I../../redis/src/ -I.. example-redismoduleapi.c -o example-redismoduleapi.so\n *\n * 2- Load module:\n *       redis-server --loadmodule ./example-redismoduleapi.so value\n */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n\n    int ret = RedisModule_Init(ctx, \"example-redismoduleapi\", 1, REDISMODULE_APIVER_1);\n    if (ret != REDISMODULE_OK) {\n        printf(\"error module init \\n\");\n        return REDISMODULE_ERR;\n    }\n\n    if (redisModuleCompatibilityCheck() != REDIS_OK) {\n        printf(\"Redis 7.0 or above is required! \\n\");\n        return REDISMODULE_ERR;\n    }\n\n    redisAsyncContext *c = redisAsyncConnect(\"127.0.0.1\", 6379);\n    if (c->err) {\n        /* Let *c leak for now... */\n        printf(\"Error: %s\\n\", c->errstr);\n        return 1;\n    }\n\n    size_t len;\n    const char *val = RedisModule_StringPtrLen(argv[argc-1], &len);\n\n    RedisModuleCtx *module_ctx = RedisModule_GetDetachedThreadSafeContext(ctx);\n    redisModuleAttach(c, module_ctx);\n    redisAsyncSetConnectCallback(c,connectCallback);\n    redisAsyncSetDisconnectCallback(c,disconnectCallback);\n    redisAsyncSetTimeout(c, (struct timeval){ .tv_sec = 1, .tv_usec = 0});\n\n    /*\n    In this demo, we first `set key`, then `get key` to demonstrate the basic usage of the adapter.\n    Then in `getCallback`, we start a `debug sleep` command to create 1.5 second long request.\n    Because we have set a 1 second timeout to the connection, the command will always fail with a\n    timeout error, which is shown in the `debugCallback`.\n    */\n\n    redisAsyncCommand(c, NULL, NULL, \"SET key %b\", val, len);\n    redisAsyncCommand(c, getCallback, (char*)\"end-1\", \"GET key\");\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example-ssl.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include <hiredis.h>\n#include <hiredis_ssl.h>\n\n#ifdef _MSC_VER\n#include <winsock2.h> /* For struct timeval */\n#endif\n\nint main(int argc, char **argv) {\n    unsigned int j;\n    redisSSLContext *ssl;\n    redisSSLContextError ssl_error = REDIS_SSL_CTX_NONE;\n    redisContext *c;\n    redisReply *reply;\n    if (argc < 5) {\n        printf(\"Usage: %s <host> <port> <cert> <key> [ca]\\n\", argv[0]);\n        exit(1);\n    }\n    const char *hostname = (argc > 1) ? argv[1] : \"127.0.0.1\";\n    int port = atoi(argv[2]);\n    const char *cert = argv[3];\n    const char *key = argv[4];\n    const char *ca = argc > 5 ? argv[5] : NULL;\n\n    redisInitOpenSSL();\n    ssl = redisCreateSSLContext(ca, NULL, cert, key, NULL, &ssl_error);\n    if (!ssl || ssl_error != REDIS_SSL_CTX_NONE) {\n        printf(\"SSL Context error: %s\\n\", redisSSLContextGetError(ssl_error));\n        exit(1);\n    }\n\n    struct timeval tv = { 1, 500000 }; // 1.5 seconds\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, hostname, port);\n    options.connect_timeout = &tv;\n    c = redisConnectWithOptions(&options);\n\n    if (c == NULL || c->err) {\n        if (c) {\n            printf(\"Connection error: %s\\n\", c->errstr);\n            redisFree(c);\n        } else {\n            printf(\"Connection error: can't allocate redis context\\n\");\n        }\n        exit(1);\n    }\n\n    if (redisInitiateSSLWithContext(c, ssl) != REDIS_OK) {\n        printf(\"Couldn't initialize SSL!\\n\");\n        printf(\"Error: %s\\n\", c->errstr);\n        redisFree(c);\n        exit(1);\n    }\n\n    /* PING server */\n    reply = redisCommand(c,\"PING\");\n    printf(\"PING: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Set a key */\n    reply = redisCommand(c,\"SET %s %s\", \"foo\", \"hello world\");\n    printf(\"SET: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Set a key using binary safe API */\n    reply = redisCommand(c,\"SET %b %b\", \"bar\", (size_t) 3, \"hello\", (size_t) 5);\n    printf(\"SET (binary API): %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Try a GET and two INCR */\n    reply = redisCommand(c,\"GET foo\");\n    printf(\"GET foo: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    reply = redisCommand(c,\"INCR counter\");\n    printf(\"INCR counter: %lld\\n\", reply->integer);\n    freeReplyObject(reply);\n    /* again ... */\n    reply = redisCommand(c,\"INCR counter\");\n    printf(\"INCR counter: %lld\\n\", reply->integer);\n    freeReplyObject(reply);\n\n    /* Create a list of numbers, from 0 to 9 */\n    reply = redisCommand(c,\"DEL mylist\");\n    freeReplyObject(reply);\n    for (j = 0; j < 10; j++) {\n        char buf[64];\n\n        snprintf(buf,64,\"%u\",j);\n        reply = redisCommand(c,\"LPUSH mylist element-%s\", buf);\n        freeReplyObject(reply);\n    }\n\n    /* Let's check what we have inside the list */\n    reply = redisCommand(c,\"LRANGE mylist 0 -1\");\n    if (reply->type == REDIS_REPLY_ARRAY) {\n        for (j = 0; j < reply->elements; j++) {\n            printf(\"%u) %s\\n\", j, reply->element[j]->str);\n        }\n    }\n    freeReplyObject(reply);\n\n    /* Disconnects and frees the context */\n    redisFree(c);\n\n    redisFreeSSLContext(ssl);\n\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/examples/example.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <hiredis.h>\n\n#ifdef _MSC_VER\n#include <winsock2.h> /* For struct timeval */\n#endif\n\nstatic void example_argv_command(redisContext *c, size_t n) {\n    char **argv, tmp[42];\n    size_t *argvlen;\n    redisReply *reply;\n\n    /* We're allocating two additional elements for command and key */\n    argv = malloc(sizeof(*argv) * (2 + n));\n    argvlen = malloc(sizeof(*argvlen) * (2 + n));\n\n    /* First the command */\n    argv[0] = (char*)\"RPUSH\";\n    argvlen[0] = sizeof(\"RPUSH\") - 1;\n\n    /* Now our key */\n    argv[1] = (char*)\"argvlist\";\n    argvlen[1] = sizeof(\"argvlist\") - 1;\n\n    /* Now add the entries we wish to add to the list */\n    for (size_t i = 2; i < (n + 2); i++) {\n        argvlen[i] = snprintf(tmp, sizeof(tmp), \"argv-element-%zu\", i - 2);\n        argv[i] = strdup(tmp);\n    }\n\n    /* Execute the command using redisCommandArgv.  We're sending the arguments with\n     * two explicit arrays.  One for each argument's string, and the other for its\n     * length. */\n    reply = redisCommandArgv(c, n + 2, (const char **)argv, (const size_t*)argvlen);\n\n    if (reply == NULL || c->err) {\n        fprintf(stderr, \"Error:  Couldn't execute redisCommandArgv\\n\");\n        exit(1);\n    }\n\n    if (reply->type == REDIS_REPLY_INTEGER) {\n        printf(\"%s reply: %lld\\n\", argv[0], reply->integer);\n    }\n\n    freeReplyObject(reply);\n\n    /* Clean up */\n    for (size_t i = 2; i < (n + 2); i++) {\n        free(argv[i]);\n    }\n\n    free(argv);\n    free(argvlen);\n}\n\nint main(int argc, char **argv) {\n    unsigned int j, isunix = 0;\n    redisContext *c;\n    redisReply *reply;\n    const char *hostname = (argc > 1) ? 
argv[1] : \"127.0.0.1\";\n\n    if (argc > 2) {\n        if (*argv[2] == 'u' || *argv[2] == 'U') {\n            isunix = 1;\n            /* in this case, host is the path to the unix socket */\n            printf(\"Will connect to unix socket @%s\\n\", hostname);\n        }\n    }\n\n    int port = (argc > 2) ? atoi(argv[2]) : 6379;\n\n    struct timeval timeout = { 1, 500000 }; // 1.5 seconds\n    if (isunix) {\n        c = redisConnectUnixWithTimeout(hostname, timeout);\n    } else {\n        c = redisConnectWithTimeout(hostname, port, timeout);\n    }\n    if (c == NULL || c->err) {\n        if (c) {\n            printf(\"Connection error: %s\\n\", c->errstr);\n            redisFree(c);\n        } else {\n            printf(\"Connection error: can't allocate redis context\\n\");\n        }\n        exit(1);\n    }\n\n    /* PING server */\n    reply = redisCommand(c,\"PING\");\n    printf(\"PING: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Set a key */\n    reply = redisCommand(c,\"SET %s %s\", \"foo\", \"hello world\");\n    printf(\"SET: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Set a key using binary safe API */\n    reply = redisCommand(c,\"SET %b %b\", \"bar\", (size_t) 3, \"hello\", (size_t) 5);\n    printf(\"SET (binary API): %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    /* Try a GET and two INCR */\n    reply = redisCommand(c,\"GET foo\");\n    printf(\"GET foo: %s\\n\", reply->str);\n    freeReplyObject(reply);\n\n    reply = redisCommand(c,\"INCR counter\");\n    printf(\"INCR counter: %lld\\n\", reply->integer);\n    freeReplyObject(reply);\n    /* again ... 
*/\n    reply = redisCommand(c,\"INCR counter\");\n    printf(\"INCR counter: %lld\\n\", reply->integer);\n    freeReplyObject(reply);\n\n    /* Create a list of numbers, from 0 to 9 */\n    reply = redisCommand(c,\"DEL mylist\");\n    freeReplyObject(reply);\n    for (j = 0; j < 10; j++) {\n        char buf[64];\n\n        snprintf(buf,64,\"%u\",j);\n        reply = redisCommand(c,\"LPUSH mylist element-%s\", buf);\n        freeReplyObject(reply);\n    }\n\n    /* Let's check what we have inside the list */\n    reply = redisCommand(c,\"LRANGE mylist 0 -1\");\n    if (reply->type == REDIS_REPLY_ARRAY) {\n        for (j = 0; j < reply->elements; j++) {\n            printf(\"%u) %s\\n\", j, reply->element[j]->str);\n        }\n    }\n    freeReplyObject(reply);\n\n    /* See function for an example of redisCommandArgv */\n    example_argv_command(c, 10);\n\n    /* Disconnects and frees the context */\n    redisFree(c);\n\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/fmacros.h",
    "content": "#ifndef __HIREDIS_FMACRO_H\n#define __HIREDIS_FMACRO_H\n\n#ifndef _AIX\n#define _XOPEN_SOURCE 600\n#define _POSIX_C_SOURCE 200112L\n#endif\n\n#if defined(__APPLE__) && defined(__MACH__)\n/* Enable TCP_KEEPALIVE */\n#define _DARWIN_C_SOURCE\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/fuzzing/format_command_fuzzer.c",
    "content": "/*\n * Copyright (c) 2020, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2020, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2020, Matt Stancliff <matt at genges dot com>,\n *                     Jan-Erik Rediger <janerik at fnordig dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdlib.h>\n#include <string.h>\n#include \"hiredis.h\"\n\nint LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {\n    char *new_str, *cmd;\n\n    if (size < 3)\n        return 0;\n\n    new_str = malloc(size+1);\n    if (new_str == NULL)\n        return 0;\n\n    memcpy(new_str, data, size);\n    new_str[size] = '\\0';\n\n    if (redisFormatCommand(&cmd, new_str) != -1)\n        hi_free(cmd);\n\n    free(new_str);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/hiredis-config.cmake.in",
    "content": "@PACKAGE_INIT@\n\nset_and_check(hiredis_INCLUDEDIR \"@PACKAGE_INCLUDE_INSTALL_DIR@\")\n\nIF (NOT TARGET hiredis::@hiredis_export_name@)\n\tINCLUDE(${CMAKE_CURRENT_LIST_DIR}/hiredis-targets.cmake)\nENDIF()\n\nSET(hiredis_LIBRARIES hiredis::@hiredis_export_name@)\nSET(hiredis_INCLUDE_DIRS ${hiredis_INCLUDEDIR})\n\ncheck_required_components(hiredis)\n\n"
  },
  {
    "path": "deps/hiredis/hiredis.c",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2014, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2015, Matt Stancliff <matt at genges dot com>,\n *                     Jan-Erik Rediger <janerik at fnordig dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include <string.h>\n#include <stdlib.h>\n#include <assert.h>\n#include <errno.h>\n#include <ctype.h>\n\n#include \"hiredis.h\"\n#include \"net.h\"\n#include \"sds.h\"\n#include \"async.h\"\n#include \"win32.h\"\n\nextern int redisContextUpdateConnectTimeout(redisContext *c, const struct timeval *timeout);\nextern int redisContextUpdateCommandTimeout(redisContext *c, const struct timeval *timeout);\n\nstatic redisContextFuncs redisContextDefaultFuncs = {\n    .close = redisNetClose,\n    .free_privctx = NULL,\n    .async_read = redisAsyncRead,\n    .async_write = redisAsyncWrite,\n    .read = redisNetRead,\n    .write = redisNetWrite\n};\n\nstatic redisReply *createReplyObject(int type);\nstatic void *createStringObject(const redisReadTask *task, char *str, size_t len);\nstatic void *createArrayObject(const redisReadTask *task, size_t elements);\nstatic void *createIntegerObject(const redisReadTask *task, long long value);\nstatic void *createDoubleObject(const redisReadTask *task, double value, char *str, size_t len);\nstatic void *createNilObject(const redisReadTask *task);\nstatic void *createBoolObject(const redisReadTask *task, int bval);\n\n/* Default set of functions to build the reply. Keep in mind that such a\n * function returning NULL is interpreted as OOM. 
*/\nstatic redisReplyObjectFunctions defaultFunctions = {\n    createStringObject,\n    createArrayObject,\n    createIntegerObject,\n    createDoubleObject,\n    createNilObject,\n    createBoolObject,\n    freeReplyObject\n};\n\n/* Create a reply object */\nstatic redisReply *createReplyObject(int type) {\n    redisReply *r = hi_calloc(1,sizeof(*r));\n\n    if (r == NULL)\n        return NULL;\n\n    r->type = type;\n    return r;\n}\n\n/* Free a reply object */\nvoid freeReplyObject(void *reply) {\n    redisReply *r = reply;\n    size_t j;\n\n    if (r == NULL)\n        return;\n\n    switch(r->type) {\n    case REDIS_REPLY_INTEGER:\n    case REDIS_REPLY_NIL:\n    case REDIS_REPLY_BOOL:\n        break; /* Nothing to free */\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_MAP:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n        if (r->element != NULL) {\n            for (j = 0; j < r->elements; j++)\n                freeReplyObject(r->element[j]);\n            hi_free(r->element);\n        }\n        break;\n    case REDIS_REPLY_ERROR:\n    case REDIS_REPLY_STATUS:\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_DOUBLE:\n    case REDIS_REPLY_VERB:\n    case REDIS_REPLY_BIGNUM:\n        hi_free(r->str);\n        break;\n    }\n    hi_free(r);\n}\n\nstatic void *createStringObject(const redisReadTask *task, char *str, size_t len) {\n    redisReply *r, *parent;\n    char *buf;\n\n    r = createReplyObject(task->type);\n    if (r == NULL)\n        return NULL;\n\n    assert(task->type == REDIS_REPLY_ERROR  ||\n           task->type == REDIS_REPLY_STATUS ||\n           task->type == REDIS_REPLY_STRING ||\n           task->type == REDIS_REPLY_VERB   ||\n           task->type == REDIS_REPLY_BIGNUM);\n\n    /* Copy string value */\n    if (task->type == REDIS_REPLY_VERB) {\n        buf = hi_malloc(len-4+1); /* Skip 4 bytes of verbatim type header. 
*/\n        if (buf == NULL) goto oom;\n\n        memcpy(r->vtype,str,3);\n        r->vtype[3] = '\\0';\n        memcpy(buf,str+4,len-4);\n        buf[len-4] = '\\0';\n        r->len = len - 4;\n    } else {\n        buf = hi_malloc(len+1);\n        if (buf == NULL) goto oom;\n\n        memcpy(buf,str,len);\n        buf[len] = '\\0';\n        r->len = len;\n    }\n    r->str = buf;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n\noom:\n    freeReplyObject(r);\n    return NULL;\n}\n\nstatic void *createArrayObject(const redisReadTask *task, size_t elements) {\n    redisReply *r, *parent;\n\n    r = createReplyObject(task->type);\n    if (r == NULL)\n        return NULL;\n\n    if (elements > 0) {\n        r->element = hi_calloc(elements,sizeof(redisReply*));\n        if (r->element == NULL) {\n            freeReplyObject(r);\n            return NULL;\n        }\n    }\n\n    r->elements = elements;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n}\n\nstatic void *createIntegerObject(const redisReadTask *task, long long value) {\n    redisReply *r, *parent;\n\n    r = createReplyObject(REDIS_REPLY_INTEGER);\n    if (r == NULL)\n        return NULL;\n\n    r->integer = value;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               
parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n}\n\nstatic void *createDoubleObject(const redisReadTask *task, double value, char *str, size_t len) {\n    redisReply *r, *parent;\n\n    if (len == SIZE_MAX) // Prevents hi_malloc(0) if len equals SIZE_MAX\n        return NULL;\n\n    r = createReplyObject(REDIS_REPLY_DOUBLE);\n    if (r == NULL)\n        return NULL;\n\n    r->dval = value;\n    r->str = hi_malloc(len+1);\n    if (r->str == NULL) {\n        freeReplyObject(r);\n        return NULL;\n    }\n\n    /* The double reply also has the original protocol string representing a\n     * double as a null terminated string. This way the caller does not need\n     * to format it back for string conversion, especially since Redis makes\n     * an effort to keep the string human readable, avoiding the classical\n     * double-to-decimal string conversion artifacts. */\n    memcpy(r->str, str, len);\n    r->str[len] = '\\0';\n    r->len = len;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n}\n\nstatic void *createNilObject(const redisReadTask *task) {\n    redisReply *r, *parent;\n\n    r = createReplyObject(REDIS_REPLY_NIL);\n    if (r == NULL)\n        return NULL;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n}\n\nstatic void *createBoolObject(const redisReadTask *task, int bval) {\n    redisReply *r, *parent;\n\n    r = 
createReplyObject(REDIS_REPLY_BOOL);\n    if (r == NULL)\n        return NULL;\n\n    r->integer = bval != 0;\n\n    if (task->parent) {\n        parent = task->parent->obj;\n        assert(parent->type == REDIS_REPLY_ARRAY ||\n               parent->type == REDIS_REPLY_MAP ||\n               parent->type == REDIS_REPLY_SET ||\n               parent->type == REDIS_REPLY_PUSH);\n        parent->element[task->idx] = r;\n    }\n    return r;\n}\n\n/* Return the number of digits of 'v' when converted to string in radix 10.\n * Implementation borrowed from link in redis/src/util.c:string2ll(). */\nstatic uint32_t countDigits(uint64_t v) {\n  uint32_t result = 1;\n  for (;;) {\n    if (v < 10) return result;\n    if (v < 100) return result + 1;\n    if (v < 1000) return result + 2;\n    if (v < 10000) return result + 3;\n    v /= 10000U;\n    result += 4;\n  }\n}\n\n/* Helper that calculates the bulk length given a certain string length. */\nstatic size_t bulklen(size_t len) {\n    return 1+countDigits(len)+2+len+2;\n}\n\nint redisvFormatCommand(char **target, const char *format, va_list ap) {\n    const char *c = format;\n    char *cmd = NULL; /* final command */\n    int pos; /* position in final command */\n    hisds curarg, newarg; /* current argument */\n    int touched = 0; /* was the current argument touched? 
*/\n    char **curargv = NULL, **newargv = NULL;\n    int argc = 0;\n    int totlen = 0;\n    int error_type = 0; /* 0 = no error; -1 = memory error; -2 = format error */\n    int j;\n\n    /* Abort if there is no target to set */\n    if (target == NULL)\n        return -1;\n\n    /* Build the command string according to the protocol */\n    curarg = hi_sdsempty();\n    if (curarg == NULL)\n        return -1;\n\n    while(*c != '\\0') {\n        if (*c != '%' || c[1] == '\\0') {\n            if (*c == ' ') {\n                if (touched) {\n                    newargv = hi_realloc(curargv,sizeof(char*)*(argc+1));\n                    if (newargv == NULL) goto memory_err;\n                    curargv = newargv;\n                    curargv[argc++] = curarg;\n                    totlen += bulklen(hi_sdslen(curarg));\n\n                    /* curarg is put in argv so it can be overwritten. */\n                    curarg = hi_sdsempty();\n                    if (curarg == NULL) goto memory_err;\n                    touched = 0;\n                }\n            } else {\n                newarg = hi_sdscatlen(curarg,c,1);\n                if (newarg == NULL) goto memory_err;\n                curarg = newarg;\n                touched = 1;\n            }\n        } else {\n            char *arg;\n            size_t size;\n\n            /* Set newarg so it can be checked even if it is not touched. 
*/\n            newarg = curarg;\n\n            switch(c[1]) {\n            case 's':\n                arg = va_arg(ap,char*);\n                size = strlen(arg);\n                if (size > 0)\n                    newarg = hi_sdscatlen(curarg,arg,size);\n                break;\n            case 'b':\n                arg = va_arg(ap,char*);\n                size = va_arg(ap,size_t);\n                if (size > 0)\n                    newarg = hi_sdscatlen(curarg,arg,size);\n                break;\n            case '%':\n                newarg = hi_sdscat(curarg,\"%\");\n                break;\n            default:\n                /* Try to detect printf format */\n                {\n                    static const char intfmts[] = \"diouxX\";\n                    static const char flags[] = \"#0-+ \";\n                    char _format[16];\n                    const char *_p = c+1;\n                    size_t _l = 0;\n                    va_list _cpy;\n\n                    /* Flags */\n                    while (*_p != '\\0' && strchr(flags,*_p) != NULL) _p++;\n\n                    /* Field width */\n                    while (*_p != '\\0' && isdigit((int) *_p)) _p++;\n\n                    /* Precision */\n                    if (*_p == '.') {\n                        _p++;\n                        while (*_p != '\\0' && isdigit((int) *_p)) _p++;\n                    }\n\n                    /* Copy va_list before consuming with va_arg */\n                    va_copy(_cpy,ap);\n\n                    /* Make sure we have more characters otherwise strchr() accepts\n                     * '\\0' as an integer specifier. This is checked after above\n                     * va_copy() to avoid UB in fmt_invalid's call to va_end(). 
*/\n                    if (*_p == '\\0') goto fmt_invalid;\n\n                    /* Integer conversion (without modifiers) */\n                    if (strchr(intfmts,*_p) != NULL) {\n                        va_arg(ap,int);\n                        goto fmt_valid;\n                    }\n\n                    /* Double conversion (without modifiers) */\n                    if (strchr(\"eEfFgGaA\",*_p) != NULL) {\n                        va_arg(ap,double);\n                        goto fmt_valid;\n                    }\n\n                    /* Size: char */\n                    if (_p[0] == 'h' && _p[1] == 'h') {\n                        _p += 2;\n                        if (*_p != '\\0' && strchr(intfmts,*_p) != NULL) {\n                            va_arg(ap,int); /* char gets promoted to int */\n                            goto fmt_valid;\n                        }\n                        goto fmt_invalid;\n                    }\n\n                    /* Size: short */\n                    if (_p[0] == 'h') {\n                        _p += 1;\n                        if (*_p != '\\0' && strchr(intfmts,*_p) != NULL) {\n                            va_arg(ap,int); /* short gets promoted to int */\n                            goto fmt_valid;\n                        }\n                        goto fmt_invalid;\n                    }\n\n                    /* Size: long long */\n                    if (_p[0] == 'l' && _p[1] == 'l') {\n                        _p += 2;\n                        if (*_p != '\\0' && strchr(intfmts,*_p) != NULL) {\n                            va_arg(ap,long long);\n                            goto fmt_valid;\n                        }\n                        goto fmt_invalid;\n                    }\n\n                    /* Size: long */\n                    if (_p[0] == 'l') {\n                        _p += 1;\n                        if (*_p != '\\0' && strchr(intfmts,*_p) != NULL) {\n                            va_arg(ap,long);\n      
                      goto fmt_valid;\n                        }\n                        goto fmt_invalid;\n                    }\n\n                fmt_invalid:\n                    va_end(_cpy);\n                    goto format_err;\n\n                fmt_valid:\n                    _l = (_p+1)-c;\n                    if (_l < sizeof(_format)-2) {\n                        memcpy(_format,c,_l);\n                        _format[_l] = '\\0';\n                        newarg = hi_sdscatvprintf(curarg,_format,_cpy);\n\n                        /* Update current position (note: outer blocks\n                         * increment c twice so compensate here) */\n                        c = _p-1;\n                    }\n\n                    va_end(_cpy);\n                    break;\n                }\n            }\n\n            if (newarg == NULL) goto memory_err;\n            curarg = newarg;\n\n            touched = 1;\n            c++;\n            if (*c == '\\0')\n                break;\n        }\n        c++;\n    }\n\n    /* Add the last argument if needed */\n    if (touched) {\n        newargv = hi_realloc(curargv,sizeof(char*)*(argc+1));\n        if (newargv == NULL) goto memory_err;\n        curargv = newargv;\n        curargv[argc++] = curarg;\n        totlen += bulklen(hi_sdslen(curarg));\n    } else {\n        hi_sdsfree(curarg);\n    }\n\n    /* Clear curarg because it was put in curargv or was free'd. 
*/\n    curarg = NULL;\n\n    /* Add bytes needed to hold multi bulk count */\n    totlen += 1+countDigits(argc)+2;\n\n    /* Build the command at protocol level */\n    cmd = hi_malloc(totlen+1);\n    if (cmd == NULL) goto memory_err;\n\n    pos = sprintf(cmd,\"*%d\\r\\n\",argc);\n    for (j = 0; j < argc; j++) {\n        pos += sprintf(cmd+pos,\"$%zu\\r\\n\",hi_sdslen(curargv[j]));\n        memcpy(cmd+pos,curargv[j],hi_sdslen(curargv[j]));\n        pos += hi_sdslen(curargv[j]);\n        hi_sdsfree(curargv[j]);\n        cmd[pos++] = '\\r';\n        cmd[pos++] = '\\n';\n    }\n    assert(pos == totlen);\n    cmd[pos] = '\\0';\n\n    hi_free(curargv);\n    *target = cmd;\n    return totlen;\n\nformat_err:\n    error_type = -2;\n    goto cleanup;\n\nmemory_err:\n    error_type = -1;\n    goto cleanup;\n\ncleanup:\n    if (curargv) {\n        while(argc--)\n            hi_sdsfree(curargv[argc]);\n        hi_free(curargv);\n    }\n\n    hi_sdsfree(curarg);\n    hi_free(cmd);\n\n    return error_type;\n}\n\n/* Format a command according to the Redis protocol. This function\n * takes a format similar to printf:\n *\n * %s represents a C null terminated string you want to interpolate\n * %b represents a binary safe string\n *\n * When using %b you need to provide both the pointer to the string\n * and the length in bytes as a size_t. Examples:\n *\n * len = redisFormatCommand(target, \"GET %s\", mykey);\n * len = redisFormatCommand(target, \"SET %s %b\", mykey, myval, myvallen);\n */\nint redisFormatCommand(char **target, const char *format, ...) {\n    va_list ap;\n    int len;\n    va_start(ap,format);\n    len = redisvFormatCommand(target,format,ap);\n    va_end(ap);\n\n    /* The API says \"-1\" means bad result, but we now also return \"-2\" in some\n     * cases.  Force the return value to always be -1. 
*/\n    if (len < 0)\n        len = -1;\n\n    return len;\n}\n\n/* Format a command according to the Redis protocol using an hisds string and\n * hi_sdscatfmt for the processing of arguments. This function takes the\n * number of arguments, an array with arguments and an array with their\n * lengths. If the latter is set to NULL, strlen will be used to compute the\n * argument lengths.\n */\nlong long redisFormatSdsCommandArgv(hisds *target, int argc, const char **argv,\n                                    const size_t *argvlen)\n{\n    hisds cmd, aux;\n    unsigned long long totlen, len;\n    int j;\n\n    /* Abort on a NULL target */\n    if (target == NULL)\n        return -1;\n\n    /* Calculate our total size */\n    totlen = 1+countDigits(argc)+2;\n    for (j = 0; j < argc; j++) {\n        len = argvlen ? argvlen[j] : strlen(argv[j]);\n        totlen += bulklen(len);\n    }\n\n    /* Use an SDS string for command construction */\n    cmd = hi_sdsempty();\n    if (cmd == NULL)\n        return -1;\n\n    /* We already know how much storage we need */\n    aux = hi_sdsMakeRoomFor(cmd, totlen);\n    if (aux == NULL) {\n        hi_sdsfree(cmd);\n        return -1;\n    }\n\n    cmd = aux;\n\n    /* Construct command */\n    cmd = hi_sdscatfmt(cmd, \"*%i\\r\\n\", argc);\n    for (j=0; j < argc; j++) {\n        len = argvlen ? argvlen[j] : strlen(argv[j]);\n        cmd = hi_sdscatfmt(cmd, \"$%U\\r\\n\", len);\n        cmd = hi_sdscatlen(cmd, argv[j], len);\n        cmd = hi_sdscatlen(cmd, \"\\r\\n\", sizeof(\"\\r\\n\")-1);\n    }\n\n    assert(hi_sdslen(cmd)==totlen);\n\n    *target = cmd;\n    return totlen;\n}\n\nvoid redisFreeSdsCommand(hisds cmd) {\n    hi_sdsfree(cmd);\n}\n\n/* Format a command according to the Redis protocol. This function takes the\n * number of arguments, an array with arguments and an array with their\n * lengths. 
If the latter is set to NULL, strlen will be used to compute the\n * argument lengths.\n */\nlong long redisFormatCommandArgv(char **target, int argc, const char **argv, const size_t *argvlen) {\n    char *cmd = NULL; /* final command */\n    size_t pos; /* position in final command */\n    size_t len, totlen;\n    int j;\n\n    /* Abort on a NULL target */\n    if (target == NULL)\n        return -1;\n\n    /* Calculate number of bytes needed for the command */\n    totlen = 1+countDigits(argc)+2;\n    for (j = 0; j < argc; j++) {\n        len = argvlen ? argvlen[j] : strlen(argv[j]);\n        totlen += bulklen(len);\n    }\n\n    /* Build the command at protocol level */\n    cmd = hi_malloc(totlen+1);\n    if (cmd == NULL)\n        return -1;\n\n    pos = sprintf(cmd,\"*%d\\r\\n\",argc);\n    for (j = 0; j < argc; j++) {\n        len = argvlen ? argvlen[j] : strlen(argv[j]);\n        pos += sprintf(cmd+pos,\"$%zu\\r\\n\",len);\n        memcpy(cmd+pos,argv[j],len);\n        pos += len;\n        cmd[pos++] = '\\r';\n        cmd[pos++] = '\\n';\n    }\n    assert(pos == totlen);\n    cmd[pos] = '\\0';\n\n    *target = cmd;\n    return totlen;\n}\n\nvoid redisFreeCommand(char *cmd) {\n    hi_free(cmd);\n}\n\nvoid __redisSetError(redisContext *c, int type, const char *str) {\n    size_t len;\n\n    c->err = type;\n    if (str != NULL) {\n        len = strlen(str);\n        len = len < (sizeof(c->errstr)-1) ? len : (sizeof(c->errstr)-1);\n        memcpy(c->errstr,str,len);\n        c->errstr[len] = '\\0';\n    } else {\n        /* Only REDIS_ERR_IO may lack a description! 
*/\n        assert(type == REDIS_ERR_IO);\n        strerror_r(errno, c->errstr, sizeof(c->errstr));\n    }\n}\n\nredisReader *redisReaderCreate(void) {\n    return redisReaderCreateWithFunctions(&defaultFunctions);\n}\n\nstatic void redisPushAutoFree(void *privdata, void *reply) {\n    (void)privdata;\n    freeReplyObject(reply);\n}\n\nstatic redisContext *redisContextInit(void) {\n    redisContext *c;\n\n    c = hi_calloc(1, sizeof(*c));\n    if (c == NULL)\n        return NULL;\n\n    c->funcs = &redisContextDefaultFuncs;\n\n    c->obuf = hi_sdsempty();\n    c->reader = redisReaderCreate();\n    c->fd = REDIS_INVALID_FD;\n\n    if (c->obuf == NULL || c->reader == NULL) {\n        redisFree(c);\n        return NULL;\n    }\n\n    return c;\n}\n\nvoid redisFree(redisContext *c) {\n    if (c == NULL)\n        return;\n\n    if (c->funcs && c->funcs->close) {\n        c->funcs->close(c);\n    }\n\n    hi_sdsfree(c->obuf);\n    redisReaderFree(c->reader);\n    hi_free(c->tcp.host);\n    hi_free(c->tcp.source_addr);\n    hi_free(c->unix_sock.path);\n    hi_free(c->connect_timeout);\n    hi_free(c->command_timeout);\n    hi_free(c->saddr);\n\n    if (c->privdata && c->free_privdata)\n        c->free_privdata(c->privdata);\n\n    if (c->funcs && c->funcs->free_privctx)\n        c->funcs->free_privctx(c->privctx);\n\n    memset(c, 0xff, sizeof(*c));\n    hi_free(c);\n}\n\nredisFD redisFreeKeepFd(redisContext *c) {\n    redisFD fd = c->fd;\n    c->fd = REDIS_INVALID_FD;\n    redisFree(c);\n    return fd;\n}\n\nint redisReconnect(redisContext *c) {\n    c->err = 0;\n    memset(c->errstr, '\\0', strlen(c->errstr));\n\n    if (c->privctx && c->funcs->free_privctx) {\n        c->funcs->free_privctx(c->privctx);\n        c->privctx = NULL;\n    }\n\n    if (c->funcs && c->funcs->close) {\n        c->funcs->close(c);\n    }\n\n    hi_sdsfree(c->obuf);\n    redisReaderFree(c->reader);\n\n    c->obuf = hi_sdsempty();\n    c->reader = redisReaderCreate();\n\n    if (c->obuf == NULL 
|| c->reader == NULL) {\n        __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n        return REDIS_ERR;\n    }\n\n    int ret = REDIS_ERR;\n    if (c->connection_type == REDIS_CONN_TCP) {\n        ret = redisContextConnectBindTcp(c, c->tcp.host, c->tcp.port,\n               c->connect_timeout, c->tcp.source_addr);\n    } else if (c->connection_type == REDIS_CONN_UNIX) {\n        ret = redisContextConnectUnix(c, c->unix_sock.path, c->connect_timeout);\n    } else {\n        /* Something bad happened here and shouldn't have. There isn't\n           enough information in the context to reconnect. */\n        __redisSetError(c,REDIS_ERR_OTHER,\"Not enough information to reconnect\");\n        ret = REDIS_ERR;\n    }\n\n    if (c->command_timeout != NULL && (c->flags & REDIS_BLOCK) && c->fd != REDIS_INVALID_FD) {\n        redisContextSetTimeout(c, *c->command_timeout);\n    }\n\n    return ret;\n}\n\nredisContext *redisConnectWithOptions(const redisOptions *options) {\n    redisContext *c = redisContextInit();\n    if (c == NULL) {\n        return NULL;\n    }\n    if (!(options->options & REDIS_OPT_NONBLOCK)) {\n        c->flags |= REDIS_BLOCK;\n    }\n    if (options->options & REDIS_OPT_REUSEADDR) {\n        c->flags |= REDIS_REUSEADDR;\n    }\n    if (options->options & REDIS_OPT_NOAUTOFREE) {\n        c->flags |= REDIS_NO_AUTO_FREE;\n    }\n    if (options->options & REDIS_OPT_NOAUTOFREEREPLIES) {\n        c->flags |= REDIS_NO_AUTO_FREE_REPLIES;\n    }\n    if (options->options & REDIS_OPT_PREFER_IPV4) {\n        c->flags |= REDIS_PREFER_IPV4;\n    }\n    if (options->options & REDIS_OPT_PREFER_IPV6) {\n        c->flags |= REDIS_PREFER_IPV6;\n    }\n\n    /* Set any user supplied RESP3 PUSH handler or use freeReplyObject\n     * as a default unless specifically flagged that we don't want one. 
*/\n    if (options->push_cb != NULL)\n        redisSetPushCallback(c, options->push_cb);\n    else if (!(options->options & REDIS_OPT_NO_PUSH_AUTOFREE))\n        redisSetPushCallback(c, redisPushAutoFree);\n\n    c->privdata = options->privdata;\n    c->free_privdata = options->free_privdata;\n\n    if (redisContextUpdateConnectTimeout(c, options->connect_timeout) != REDIS_OK ||\n        redisContextUpdateCommandTimeout(c, options->command_timeout) != REDIS_OK) {\n        __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n        return c;\n    }\n\n    if (options->type == REDIS_CONN_TCP) {\n        redisContextConnectBindTcp(c, options->endpoint.tcp.ip,\n                                   options->endpoint.tcp.port, options->connect_timeout,\n                                   options->endpoint.tcp.source_addr);\n    } else if (options->type == REDIS_CONN_UNIX) {\n        redisContextConnectUnix(c, options->endpoint.unix_socket,\n                                options->connect_timeout);\n    } else if (options->type == REDIS_CONN_USERFD) {\n        c->fd = options->endpoint.fd;\n        c->flags |= REDIS_CONNECTED;\n    } else {\n        redisFree(c);\n        return NULL;\n    }\n\n    if (c->err == 0 && c->fd != REDIS_INVALID_FD &&\n        options->command_timeout != NULL && (c->flags & REDIS_BLOCK))\n    {\n        redisContextSetTimeout(c, *options->command_timeout);\n    }\n\n    return c;\n}\n\n/* Connect to a Redis instance. On error the field error in the returned\n * context will be set to the return value of the error function.\n * When no set of reply functions is given, the default set will be used. 
*/\nredisContext *redisConnect(const char *ip, int port) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectWithTimeout(const char *ip, int port, const struct timeval tv) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.connect_timeout = &tv;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectNonBlock(const char *ip, int port) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.options |= REDIS_OPT_NONBLOCK;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectBindNonBlock(const char *ip, int port,\n                                       const char *source_addr) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.endpoint.tcp.source_addr = source_addr;\n    options.options |= REDIS_OPT_NONBLOCK;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectBindNonBlockWithReuse(const char *ip, int port,\n                                                const char *source_addr) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, ip, port);\n    options.endpoint.tcp.source_addr = source_addr;\n    options.options |= REDIS_OPT_NONBLOCK|REDIS_OPT_REUSEADDR;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectUnix(const char *path) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_UNIX(&options, path);\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectUnixWithTimeout(const char *path, const struct timeval tv) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_UNIX(&options, path);\n    options.connect_timeout = &tv;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectUnixNonBlock(const char *path) {\n    redisOptions options = {0};\n    
REDIS_OPTIONS_SET_UNIX(&options, path);\n    options.options |= REDIS_OPT_NONBLOCK;\n    return redisConnectWithOptions(&options);\n}\n\nredisContext *redisConnectFd(redisFD fd) {\n    redisOptions options = {0};\n    options.type = REDIS_CONN_USERFD;\n    options.endpoint.fd = fd;\n    return redisConnectWithOptions(&options);\n}\n\n/* Set read/write timeout on a blocking socket. */\nint redisSetTimeout(redisContext *c, const struct timeval tv) {\n    if (c->flags & REDIS_BLOCK)\n        return redisContextSetTimeout(c,tv);\n    return REDIS_ERR;\n}\n\nint redisEnableKeepAliveWithInterval(redisContext *c, int interval) {\n    return redisKeepAlive(c, interval);\n}\n\n/* Enable connection KeepAlive. */\nint redisEnableKeepAlive(redisContext *c) {\n    return redisKeepAlive(c, REDIS_KEEPALIVE_INTERVAL);\n}\n\n/* Set the socket option TCP_USER_TIMEOUT. */\nint redisSetTcpUserTimeout(redisContext *c, unsigned int timeout) {\n    return redisContextSetTcpUserTimeout(c, timeout);\n}\n\n/* Set a user provided RESP3 PUSH handler and return any old one set. */\nredisPushFn *redisSetPushCallback(redisContext *c, redisPushFn *fn) {\n    redisPushFn *old = c->push_cb;\n    c->push_cb = fn;\n    return old;\n}\n\n/* Use this function to handle a read event on the descriptor. It will try\n * and read some bytes from the socket and feed them to the reply parser.\n *\n * After this function is called, you may use redisGetReplyFromReader to\n * see if there is a reply available. */\nint redisBufferRead(redisContext *c) {\n    char buf[1024*16];\n    int nread;\n\n    /* Return early when the context has seen an error. 
*/\n    if (c->err)\n        return REDIS_ERR;\n\n    nread = c->funcs->read(c, buf, sizeof(buf));\n    if (nread < 0) {\n        return REDIS_ERR;\n    }\n    if (nread > 0 && redisReaderFeed(c->reader, buf, nread) != REDIS_OK) {\n        __redisSetError(c, c->reader->err, c->reader->errstr);\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\n/* Write the output buffer to the socket.\n *\n * Returns REDIS_OK when the buffer is empty, or (a part of) the buffer was\n * successfully written to the socket. When the buffer is empty after the\n * write operation, \"done\" is set to 1 (if given).\n *\n * Returns REDIS_ERR if an unrecoverable error occurred in the underlying\n * c->funcs->write function.\n */\nint redisBufferWrite(redisContext *c, int *done) {\n\n    /* Return early when the context has seen an error. */\n    if (c->err)\n        return REDIS_ERR;\n\n    if (hi_sdslen(c->obuf) > 0) {\n        ssize_t nwritten = c->funcs->write(c);\n        if (nwritten < 0) {\n            return REDIS_ERR;\n        } else if (nwritten > 0) {\n            if (nwritten == (ssize_t)hi_sdslen(c->obuf)) {\n                hi_sdsfree(c->obuf);\n                c->obuf = hi_sdsempty();\n                if (c->obuf == NULL)\n                    goto oom;\n            } else {\n                if (hi_sdsrange(c->obuf,nwritten,-1) < 0) goto oom;\n            }\n        }\n    }\n    if (done != NULL) *done = (hi_sdslen(c->obuf) == 0);\n    return REDIS_OK;\n\noom:\n    __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n    return REDIS_ERR;\n}\n\n/* Internal helper that returns 1 if the reply was a RESP3 PUSH\n * message and we handled it with a user-provided callback. */\nstatic int redisHandledPushReply(redisContext *c, void *reply) {\n    if (reply && c->push_cb && redisIsPushReply(reply)) {\n        c->push_cb(c->privdata, reply);\n        return 1;\n    }\n\n    return 0;\n}\n\n/* Get a reply from our reader or set an error in the context. 
*/\nint redisGetReplyFromReader(redisContext *c, void **reply) {\n    if (redisReaderGetReply(c->reader, reply) == REDIS_ERR) {\n        __redisSetError(c,c->reader->err,c->reader->errstr);\n        return REDIS_ERR;\n    }\n\n    return REDIS_OK;\n}\n\n/* Internal helper to get the next reply from our reader while handling\n * any PUSH messages we encounter along the way.  This is separate from\n * redisGetReplyFromReader so as to not change its behavior. */\nstatic int redisNextInBandReplyFromReader(redisContext *c, void **reply) {\n    do {\n        if (redisGetReplyFromReader(c, reply) == REDIS_ERR)\n            return REDIS_ERR;\n    } while (redisHandledPushReply(c, *reply));\n\n    return REDIS_OK;\n}\n\nint redisGetReply(redisContext *c, void **reply) {\n    int wdone = 0;\n    void *aux = NULL;\n\n    /* Try to read pending replies */\n    if (redisNextInBandReplyFromReader(c,&aux) == REDIS_ERR)\n        return REDIS_ERR;\n\n    /* For the blocking context, flush output buffer and read reply */\n    if (aux == NULL && c->flags & REDIS_BLOCK) {\n        /* Write until done */\n        do {\n            if (redisBufferWrite(c,&wdone) == REDIS_ERR)\n                return REDIS_ERR;\n        } while (!wdone);\n\n        /* Read until there is a reply */\n        do {\n            if (redisBufferRead(c) == REDIS_ERR)\n                return REDIS_ERR;\n\n            if (redisNextInBandReplyFromReader(c,&aux) == REDIS_ERR)\n                return REDIS_ERR;\n        } while (aux == NULL);\n    }\n\n    /* Set reply or free it if we were passed NULL */\n    if (reply != NULL) {\n        *reply = aux;\n    } else {\n        freeReplyObject(aux);\n    }\n\n    return REDIS_OK;\n}\n\n\n/* Helper function for the redisAppendCommand* family of functions.\n *\n * Write a formatted command to the output buffer. 
When this family\n * is used, you need to call redisGetReply yourself to retrieve\n * the reply (or replies in pub/sub).\n */\nint __redisAppendCommand(redisContext *c, const char *cmd, size_t len) {\n    hisds newbuf;\n\n    newbuf = hi_sdscatlen(c->obuf,cmd,len);\n    if (newbuf == NULL) {\n        __redisSetError(c,REDIS_ERR_OOM,\"Out of memory\");\n        return REDIS_ERR;\n    }\n\n    c->obuf = newbuf;\n    return REDIS_OK;\n}\n\nint redisAppendFormattedCommand(redisContext *c, const char *cmd, size_t len) {\n\n    if (__redisAppendCommand(c, cmd, len) != REDIS_OK) {\n        return REDIS_ERR;\n    }\n\n    return REDIS_OK;\n}\n\nint redisvAppendCommand(redisContext *c, const char *format, va_list ap) {\n    char *cmd;\n    int len;\n\n    len = redisvFormatCommand(&cmd,format,ap);\n    if (len == -1) {\n        __redisSetError(c,REDIS_ERR_OOM,\"Out of memory\");\n        return REDIS_ERR;\n    } else if (len == -2) {\n        __redisSetError(c,REDIS_ERR_OTHER,\"Invalid format string\");\n        return REDIS_ERR;\n    }\n\n    if (__redisAppendCommand(c,cmd,len) != REDIS_OK) {\n        hi_free(cmd);\n        return REDIS_ERR;\n    }\n\n    hi_free(cmd);\n    return REDIS_OK;\n}\n\nint redisAppendCommand(redisContext *c, const char *format, ...) 
{\n    va_list ap;\n    int ret;\n\n    va_start(ap,format);\n    ret = redisvAppendCommand(c,format,ap);\n    va_end(ap);\n    return ret;\n}\n\nint redisAppendCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen) {\n    hisds cmd;\n    long long len;\n\n    len = redisFormatSdsCommandArgv(&cmd,argc,argv,argvlen);\n    if (len == -1) {\n        __redisSetError(c,REDIS_ERR_OOM,\"Out of memory\");\n        return REDIS_ERR;\n    }\n\n    if (__redisAppendCommand(c,cmd,len) != REDIS_OK) {\n        hi_sdsfree(cmd);\n        return REDIS_ERR;\n    }\n\n    hi_sdsfree(cmd);\n    return REDIS_OK;\n}\n\n/* Helper function for the redisCommand* family of functions.\n *\n * Write a formatted command to the output buffer. If the given context is\n * blocking, immediately read the reply into the \"reply\" pointer. When the\n * context is non-blocking, the \"reply\" pointer will not be used and the\n * command is simply appended to the write buffer.\n *\n * Returns the reply when a reply was successfully retrieved. Returns NULL\n * otherwise. When NULL is returned in a blocking context, the error field\n * in the context will be set.\n */\nstatic void *__redisBlockForReply(redisContext *c) {\n    void *reply;\n\n    if (c->flags & REDIS_BLOCK) {\n        if (redisGetReply(c,&reply) != REDIS_OK)\n            return NULL;\n        return reply;\n    }\n    return NULL;\n}\n\nvoid *redisvCommand(redisContext *c, const char *format, va_list ap) {\n    if (redisvAppendCommand(c,format,ap) != REDIS_OK)\n        return NULL;\n    return __redisBlockForReply(c);\n}\n\nvoid *redisCommand(redisContext *c, const char *format, ...) 
{\n    va_list ap;\n    va_start(ap,format);\n    void *reply = redisvCommand(c,format,ap);\n    va_end(ap);\n    return reply;\n}\n\nvoid *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen) {\n    if (redisAppendCommandArgv(c,argc,argv,argvlen) != REDIS_OK)\n        return NULL;\n    return __redisBlockForReply(c);\n}\n"
  },
  {
    "path": "deps/hiredis/hiredis.h",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2014, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2015, Matt Stancliff <matt at genges dot com>,\n *                     Jan-Erik Rediger <janerik at fnordig dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_H\n#define __HIREDIS_H\n#include \"read.h\"\n#include <stdarg.h> /* for va_list */\n#ifndef _MSC_VER\n#include <sys/time.h> /* for struct timeval */\n#else\nstruct timeval; /* forward declaration */\ntypedef long long ssize_t;\n#endif\n#include <stdint.h> /* uintXX_t, etc */\n#include \"sds.h\" /* for hisds */\n#include \"alloc.h\" /* for allocation wrappers */\n\n#define HIREDIS_MAJOR 1\n#define HIREDIS_MINOR 2\n#define HIREDIS_PATCH 0\n#define HIREDIS_SONAME 1.1.0\n\n/* Connection type can be blocking or non-blocking and is set in the\n * least significant bit of the flags field in redisContext. */\n#define REDIS_BLOCK 0x1\n\n/* Connection may be disconnected before being free'd. The second bit\n * in the flags field is set when the context is connected. */\n#define REDIS_CONNECTED 0x2\n\n/* The async API might try to disconnect cleanly and flush the output\n * buffer and read all subsequent replies before disconnecting.\n * This flag means no new commands can come in and the connection\n * should be terminated once all replies have been read. */\n#define REDIS_DISCONNECTING 0x4\n\n/* Flag specific to the async API which means that the context should be cleaned\n * up as soon as possible. */\n#define REDIS_FREEING 0x8\n\n/* Flag that is set when an async callback is executed. */\n#define REDIS_IN_CALLBACK 0x10\n\n/* Flag that is set when the async context has one or more subscriptions. 
*/\n#define REDIS_SUBSCRIBED 0x20\n\n/* Flag that is set when monitor mode is active */\n#define REDIS_MONITORING 0x40\n\n/* Flag that is set when we should set SO_REUSEADDR before calling bind() */\n#define REDIS_REUSEADDR 0x80\n\n/* Flag that is set when the async connection supports push replies. */\n#define REDIS_SUPPORTS_PUSH 0x100\n\n/**\n * Flag that indicates the user does not want the context to\n * be automatically freed upon error\n */\n#define REDIS_NO_AUTO_FREE 0x200\n\n/* Flag that indicates the user does not want replies to be automatically freed */\n#define REDIS_NO_AUTO_FREE_REPLIES 0x400\n\n/* Flags to prefer IPv6 or IPv4 when doing DNS lookup. (If both are set,\n * AF_UNSPEC is used.) */\n#define REDIS_PREFER_IPV4 0x800\n#define REDIS_PREFER_IPV6 0x1000\n\n#define REDIS_KEEPALIVE_INTERVAL 15 /* seconds */\n\n/* number of times we retry to connect in the case of EADDRNOTAVAIL and\n * SO_REUSEADDR is being used. */\n#define REDIS_CONNECT_RETRIES  10\n\n/* Forward declarations for structs defined elsewhere */\nstruct redisAsyncContext;\nstruct redisContext;\n\n/* RESP3 push helpers and callback prototypes */\n#define redisIsPushReply(r) (((redisReply*)(r))->type == REDIS_REPLY_PUSH)\ntypedef void (redisPushFn)(void *, void *);\ntypedef void (redisAsyncPushFn)(struct redisAsyncContext *, void *);\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/* This is the reply object returned by redisCommand() */\ntypedef struct redisReply {\n    int type; /* REDIS_REPLY_* */\n    long long integer; /* The integer when type is REDIS_REPLY_INTEGER */\n    double dval; /* The double when type is REDIS_REPLY_DOUBLE */\n    size_t len; /* Length of string */\n    char *str; /* Used for REDIS_REPLY_ERROR, REDIS_REPLY_STRING,\n                  REDIS_REPLY_VERB, REDIS_REPLY_DOUBLE (in addition to dval),\n                  and REDIS_REPLY_BIGNUM. 
*/\n    char vtype[4]; /* Used for REDIS_REPLY_VERB, contains the null\n                      terminated 3 character content type, such as \"txt\". */\n    size_t elements; /* number of elements, for REDIS_REPLY_ARRAY */\n    struct redisReply **element; /* elements vector for REDIS_REPLY_ARRAY */\n} redisReply;\n\nredisReader *redisReaderCreate(void);\n\n/* Function to free the reply objects hiredis returns by default. */\nvoid freeReplyObject(void *reply);\n\n/* Functions to format a command according to the protocol. */\nint redisvFormatCommand(char **target, const char *format, va_list ap);\nint redisFormatCommand(char **target, const char *format, ...);\nlong long redisFormatCommandArgv(char **target, int argc, const char **argv, const size_t *argvlen);\nlong long redisFormatSdsCommandArgv(hisds *target, int argc, const char ** argv, const size_t *argvlen);\nvoid redisFreeCommand(char *cmd);\nvoid redisFreeSdsCommand(hisds cmd);\n\nenum redisConnectionType {\n    REDIS_CONN_TCP,\n    REDIS_CONN_UNIX,\n    REDIS_CONN_USERFD\n};\n\nstruct redisSsl;\n\n#define REDIS_OPT_NONBLOCK 0x01\n#define REDIS_OPT_REUSEADDR 0x02\n#define REDIS_OPT_NOAUTOFREE 0x04        /* Don't automatically free the async\n                                          * object on a connection failure, or\n                                          * other implicit conditions. Only free\n                                          * on an explicit call to disconnect()\n                                          * or free() */\n#define REDIS_OPT_NO_PUSH_AUTOFREE 0x08  /* Don't automatically intercept and\n                                          * free RESP3 PUSH replies. */\n#define REDIS_OPT_NOAUTOFREEREPLIES 0x10 /* Don't automatically free replies. */\n#define REDIS_OPT_PREFER_IPV4 0x20       /* Prefer IPv4 in DNS lookups. */\n#define REDIS_OPT_PREFER_IPV6 0x40       /* Prefer IPv6 in DNS lookups. 
*/\n#define REDIS_OPT_PREFER_IP_UNSPEC (REDIS_OPT_PREFER_IPV4 | REDIS_OPT_PREFER_IPV6)\n\n/* In Unix systems a file descriptor is a regular signed int, with -1\n * representing an invalid descriptor. In Windows it is a SOCKET\n * (32- or 64-bit unsigned integer depending on the architecture), where\n * all bits set (~0) is INVALID_SOCKET.  */\n#ifndef _WIN32\ntypedef int redisFD;\n#define REDIS_INVALID_FD -1\n#else\n#ifdef _WIN64\ntypedef unsigned long long redisFD; /* SOCKET = 64-bit UINT_PTR */\n#else\ntypedef unsigned long redisFD;      /* SOCKET = 32-bit UINT_PTR */\n#endif\n#define REDIS_INVALID_FD ((redisFD)(~0)) /* INVALID_SOCKET */\n#endif\n\ntypedef struct {\n    /*\n     * the type of connection to use. This also indicates which\n     * `endpoint` member field to use\n     */\n    int type;\n    /* bit field of REDIS_OPT_xxx */\n    int options;\n    /* timeout value for connect operation. If NULL, no timeout is used */\n    const struct timeval *connect_timeout;\n    /* timeout value for commands. If NULL, no timeout is used.  This can be\n     * updated at runtime with redisSetTimeout/redisAsyncSetTimeout. 
*/\n    const struct timeval *command_timeout;\n    union {\n        /** use this field for tcp/ip connections */\n        struct {\n            const char *source_addr;\n            const char *ip;\n            int port;\n        } tcp;\n        /** use this field for unix domain sockets */\n        const char *unix_socket;\n        /**\n         * use this field to have hiredis operate an already-open\n         * file descriptor */\n        redisFD fd;\n    } endpoint;\n\n    /* Optional user defined data/destructor */\n    void *privdata;\n    void (*free_privdata)(void *);\n\n    /* A user defined PUSH message callback */\n    redisPushFn *push_cb;\n    redisAsyncPushFn *async_push_cb;\n} redisOptions;\n\n/**\n * Helper macros to initialize options to their specified fields.\n */\n#define REDIS_OPTIONS_SET_TCP(opts, ip_, port_) do { \\\n        (opts)->type = REDIS_CONN_TCP;               \\\n        (opts)->endpoint.tcp.ip = ip_;               \\\n        (opts)->endpoint.tcp.port = port_;           \\\n    } while(0)\n\n#define REDIS_OPTIONS_SET_UNIX(opts, path) do { \\\n        (opts)->type = REDIS_CONN_UNIX;         \\\n        (opts)->endpoint.unix_socket = path;    \\\n    } while(0)\n\n#define REDIS_OPTIONS_SET_PRIVDATA(opts, data, dtor) do {  \\\n        (opts)->privdata = data;                           \\\n        (opts)->free_privdata = dtor;                      \\\n    } while(0)\n\ntypedef struct redisContextFuncs {\n    void (*close)(struct redisContext *);\n    void (*free_privctx)(void *);\n    void (*async_read)(struct redisAsyncContext *);\n    void (*async_write)(struct redisAsyncContext *);\n\n    /* Read/Write data to the underlying communication stream, returning the\n     * number of bytes read/written.  In the event of an unrecoverable error\n     * these functions shall return a value < 0.  In the event of a\n     * recoverable error, they should return 0. 
*/\n    ssize_t (*read)(struct redisContext *, char *, size_t);\n    ssize_t (*write)(struct redisContext *);\n} redisContextFuncs;\n\n\n/* Context for a connection to Redis */\ntypedef struct redisContext {\n    const redisContextFuncs *funcs;   /* Function table */\n\n    int err; /* Error flags, 0 when there is no error */\n    char errstr[128]; /* String representation of error when applicable */\n    redisFD fd;\n    int flags;\n    char *obuf; /* Write buffer */\n    redisReader *reader; /* Protocol reader */\n\n    enum redisConnectionType connection_type;\n    struct timeval *connect_timeout;\n    struct timeval *command_timeout;\n\n    struct {\n        char *host;\n        char *source_addr;\n        int port;\n    } tcp;\n\n    struct {\n        char *path;\n    } unix_sock;\n\n    /* For non-blocking connect */\n    struct sockaddr *saddr;\n    size_t addrlen;\n\n    /* Optional data and corresponding destructor users can use to provide\n     * context to a given redisContext.  Not used by hiredis. */\n    void *privdata;\n    void (*free_privdata)(void *);\n\n    /* Internal context pointer presently used by hiredis to manage\n     * SSL connections. 
*/\n    void *privctx;\n\n    /* An optional RESP3 PUSH handler */\n    redisPushFn *push_cb;\n} redisContext;\n\nredisContext *redisConnectWithOptions(const redisOptions *options);\nredisContext *redisConnect(const char *ip, int port);\nredisContext *redisConnectWithTimeout(const char *ip, int port, const struct timeval tv);\nredisContext *redisConnectNonBlock(const char *ip, int port);\nredisContext *redisConnectBindNonBlock(const char *ip, int port,\n                                       const char *source_addr);\nredisContext *redisConnectBindNonBlockWithReuse(const char *ip, int port,\n                                                const char *source_addr);\nredisContext *redisConnectUnix(const char *path);\nredisContext *redisConnectUnixWithTimeout(const char *path, const struct timeval tv);\nredisContext *redisConnectUnixNonBlock(const char *path);\nredisContext *redisConnectFd(redisFD fd);\n\n/**\n * Reconnect the given context using the saved information.\n *\n * This re-uses the exact same connect options as in the initial connection.\n * host, ip (or path), timeout and bind address are reused,\n * flags are used unmodified from the existing context.\n *\n * Returns REDIS_OK on successful connect or REDIS_ERR otherwise.\n */\nint redisReconnect(redisContext *c);\n\nredisPushFn *redisSetPushCallback(redisContext *c, redisPushFn *fn);\nint redisSetTimeout(redisContext *c, const struct timeval tv);\nint redisEnableKeepAlive(redisContext *c);\nint redisEnableKeepAliveWithInterval(redisContext *c, int interval);\nint redisSetTcpUserTimeout(redisContext *c, unsigned int timeout);\nvoid redisFree(redisContext *c);\nredisFD redisFreeKeepFd(redisContext *c);\nint redisBufferRead(redisContext *c);\nint redisBufferWrite(redisContext *c, int *done);\n\n/* In a blocking context, this function first checks if there are unconsumed\n * replies to return and returns one if so. Otherwise, it flushes the output\n * buffer to the socket and reads until it has a reply. 
In a non-blocking\n * context, it will return unconsumed replies until there are no more. */\nint redisGetReply(redisContext *c, void **reply);\nint redisGetReplyFromReader(redisContext *c, void **reply);\n\n/* Write a formatted command to the output buffer. Use these functions in blocking mode\n * to get a pipeline of commands. */\nint redisAppendFormattedCommand(redisContext *c, const char *cmd, size_t len);\n\n/* Write a command to the output buffer. Use these functions in blocking mode\n * to get a pipeline of commands. */\nint redisvAppendCommand(redisContext *c, const char *format, va_list ap);\nint redisAppendCommand(redisContext *c, const char *format, ...);\nint redisAppendCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);\n\n/* Issue a command to Redis. In a blocking context, it is identical to calling\n * redisAppendCommand, followed by redisGetReply. The function will return\n * NULL if there was an error in performing the request, otherwise it will\n * return the reply. In a non-blocking context, it is identical to calling\n * only redisAppendCommand and will always return NULL. */\nvoid *redisvCommand(redisContext *c, const char *format, va_list ap);\nvoid *redisCommand(redisContext *c, const char *format, ...);\nvoid *redisCommandArgv(redisContext *c, int argc, const char **argv, const size_t *argvlen);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/hiredis.pc.in",
    "content": "prefix=@CMAKE_INSTALL_PREFIX@\ninstall_libdir=@CMAKE_INSTALL_LIBDIR@\nexec_prefix=${prefix}\nlibdir=${exec_prefix}/${install_libdir}\nincludedir=${prefix}/include\npkgincludedir=${includedir}/hiredis\n\nName: hiredis\nDescription: Minimalistic C client library for Redis.\nVersion: @PROJECT_VERSION@\nLibs: -L${libdir} -lhiredis\nCflags: -I${pkgincludedir} -I${includedir} -D_FILE_OFFSET_BITS=64\n"
  },
  {
    "path": "deps/hiredis/hiredis.targets",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemDefinitionGroup>\n    <ClCompile>\n      <AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)\\..\\..\\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <AdditionalLibraryDirectories>$(MSBuildThisFileDirectory)\\..\\..\\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n</Project>"
  },
  {
    "path": "deps/hiredis/hiredis_ssl-config.cmake.in",
    "content": "@PACKAGE_INIT@\n\nset_and_check(hiredis_ssl_INCLUDEDIR \"@PACKAGE_INCLUDE_INSTALL_DIR@\")\n\ninclude(CMakeFindDependencyMacro)\nfind_dependency(OpenSSL)\n\nIF (NOT TARGET hiredis::hiredis_ssl)\n\tINCLUDE(${CMAKE_CURRENT_LIST_DIR}/hiredis_ssl-targets.cmake)\nENDIF()\n\nSET(hiredis_ssl_LIBRARIES hiredis::hiredis_ssl)\nSET(hiredis_ssl_INCLUDE_DIRS ${hiredis_ssl_INCLUDEDIR})\n\ncheck_required_components(hiredis_ssl)\n\n"
  },
  {
    "path": "deps/hiredis/hiredis_ssl.h",
    "content": "\n/*\n * Copyright (c) 2019, Redis Labs\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __HIREDIS_SSL_H\n#define __HIREDIS_SSL_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/* This is the underlying struct for SSL in ssl.h, which is not included to\n * keep build dependencies short here.\n */\nstruct ssl_st;\n\n/* A wrapper around OpenSSL SSL_CTX to allow easy SSL use without directly\n * calling OpenSSL.\n */\ntypedef struct redisSSLContext redisSSLContext;\n\n/**\n * Initialization errors that redisCreateSSLContext() may return.\n */\n\ntypedef enum {\n    REDIS_SSL_CTX_NONE = 0,                     /* No Error */\n    REDIS_SSL_CTX_CREATE_FAILED,                /* Failed to create OpenSSL SSL_CTX */\n    REDIS_SSL_CTX_CERT_KEY_REQUIRED,            /* Client cert and key must both be specified or skipped */\n    REDIS_SSL_CTX_CA_CERT_LOAD_FAILED,          /* Failed to load CA Certificate or CA Path */\n    REDIS_SSL_CTX_CLIENT_CERT_LOAD_FAILED,      /* Failed to load client certificate */\n    REDIS_SSL_CTX_CLIENT_DEFAULT_CERT_FAILED,   /* Failed to set client default certificate directory */\n    REDIS_SSL_CTX_PRIVATE_KEY_LOAD_FAILED,      /* Failed to load private key */\n    REDIS_SSL_CTX_OS_CERTSTORE_OPEN_FAILED,     /* Failed to open system certificate store */\n    REDIS_SSL_CTX_OS_CERT_ADD_FAILED            /* Failed to add CA certificates obtained from system to the SSL context */\n} redisSSLContextError;\n\n/* Constants that mirror OpenSSL's verify modes. 
By default,\n * REDIS_SSL_VERIFY_PEER is used with redisCreateSSLContext().\n * Some Redis clients disable peer verification if there are no\n * certificates specified.\n */\n#define REDIS_SSL_VERIFY_NONE 0x00\n#define REDIS_SSL_VERIFY_PEER 0x01\n#define REDIS_SSL_VERIFY_FAIL_IF_NO_PEER_CERT 0x02\n#define REDIS_SSL_VERIFY_CLIENT_ONCE 0x04\n#define REDIS_SSL_VERIFY_POST_HANDSHAKE 0x08\n\n/* Options to create an OpenSSL context. */\ntypedef struct {\n    const char *cacert_filename;\n    const char *capath;\n    const char *cert_filename;\n    const char *private_key_filename;\n    const char *server_name;\n    int verify_mode;\n} redisSSLOptions;\n\n/**\n * Return the error message corresponding with the specified error code.\n */\n\nconst char *redisSSLContextGetError(redisSSLContextError error);\n\n/**\n * Helper function to initialize the OpenSSL library.\n *\n * OpenSSL requires one-time initialization before it can be used. Callers should\n * call this function only once, and only if OpenSSL is not directly initialized\n * elsewhere.\n */\nint redisInitOpenSSL(void);\n\n/**\n * Helper function to initialize an OpenSSL context that can be used\n * to initiate SSL connections.\n *\n * cacert_filename is an optional name of a CA certificate/bundle file to load\n * and use for validation.\n *\n * capath is an optional directory path where trusted CA certificate files are\n * stored in an OpenSSL-compatible structure.\n *\n * cert_filename and private_key_filename are optional names of a client side\n * certificate and private key files to use for authentication. 
They must\n * both be specified or both omitted.\n *\n * server_name is optional and will be used as the server name indication\n * (SNI) TLS extension.\n *\n * If error is non-null, it will be populated in case the context creation fails\n * (returning a NULL).\n */\n\nredisSSLContext *redisCreateSSLContext(const char *cacert_filename, const char *capath,\n        const char *cert_filename, const char *private_key_filename,\n        const char *server_name, redisSSLContextError *error);\n\n/**\n  * Helper function to initialize an OpenSSL context that can be used\n  * to initiate SSL connections. This is a more extensible version of redisCreateSSLContext().\n  *\n  * options contains a structure of SSL options to use.\n  *\n  * If error is non-null, it will be populated in case the context creation fails\n  * (returning a NULL).\n*/\nredisSSLContext *redisCreateSSLContextWithOptions(redisSSLOptions *options,\n        redisSSLContextError *error);\n\n/**\n * Free a previously created OpenSSL context.\n */\nvoid redisFreeSSLContext(redisSSLContext *redis_ssl_ctx);\n\n/**\n * Initiate SSL on an existing redisContext.\n *\n * This is similar to redisInitiateSSL() but does not require the caller\n * to directly interact with OpenSSL, and instead uses a redisSSLContext\n * previously created using redisCreateSSLContext().\n */\n\nint redisInitiateSSLWithContext(redisContext *c, redisSSLContext *redis_ssl_ctx);\n\n/**\n * Initiate SSL/TLS negotiation on a provided OpenSSL SSL object.\n */\n\nint redisInitiateSSL(redisContext *c, struct ssl_st *ssl);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif  /* __HIREDIS_SSL_H */\n"
  },
  {
    "path": "deps/hiredis/hiredis_ssl.pc.in",
    "content": "prefix=@CMAKE_INSTALL_PREFIX@\ninstall_libdir=@CMAKE_INSTALL_LIBDIR@\nexec_prefix=${prefix}\nlibdir=${exec_prefix}/${install_libdir}\nincludedir=${prefix}/include\npkgincludedir=${includedir}/hiredis\n\nName: hiredis_ssl\nDescription: SSL Support for hiredis.\nVersion: @PROJECT_VERSION@\nRequires: hiredis\nLibs: -L${libdir} -lhiredis_ssl\nLibs.private: -lssl -lcrypto\n"
  },
  {
    "path": "deps/hiredis/net.c",
    "content": "/* Extracted from anet.c to work properly with Hiredis error reporting.\n *\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2014, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2015, Matt Stancliff <matt at genges dot com>,\n *                     Jan-Erik Rediger <janerik at fnordig dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include <sys/types.h>\n#include <fcntl.h>\n#include <string.h>\n#include <errno.h>\n#include <stdarg.h>\n#include <stdio.h>\n#include <limits.h>\n#include <stdlib.h>\n\n#include \"net.h\"\n#include \"sds.h\"\n#include \"sockcompat.h\"\n#include \"win32.h\"\n\n/* Defined in hiredis.c */\nvoid __redisSetError(redisContext *c, int type, const char *str);\n\nint redisContextUpdateCommandTimeout(redisContext *c, const struct timeval *timeout);\n\nvoid redisNetClose(redisContext *c) {\n    if (c && c->fd != REDIS_INVALID_FD) {\n        close(c->fd);\n        c->fd = REDIS_INVALID_FD;\n    }\n}\n\nssize_t redisNetRead(redisContext *c, char *buf, size_t bufcap) {\n    ssize_t nread = recv(c->fd, buf, bufcap, 0);\n    if (nread == -1) {\n        if ((errno == EWOULDBLOCK && !(c->flags & REDIS_BLOCK)) || (errno == EINTR)) {\n            /* Try again later */\n            return 0;\n        } else if(errno == ETIMEDOUT && (c->flags & REDIS_BLOCK)) {\n            /* especially in windows */\n            __redisSetError(c, REDIS_ERR_TIMEOUT, \"recv timeout\");\n            return -1;\n        } else {\n            __redisSetError(c, REDIS_ERR_IO, strerror(errno));\n            return -1;\n        }\n    } else if (nread == 0) {\n        __redisSetError(c, REDIS_ERR_EOF, \"Server closed the connection\");\n        return -1;\n    } else {\n        return nread;\n    }\n}\n\nssize_t redisNetWrite(redisContext *c) 
{\n    ssize_t nwritten;\n\n    nwritten = send(c->fd, c->obuf, hi_sdslen(c->obuf), 0);\n    if (nwritten < 0) {\n        if ((errno == EWOULDBLOCK && !(c->flags & REDIS_BLOCK)) || (errno == EINTR)) {\n            /* Try again */\n            return 0;\n        } else {\n            __redisSetError(c, REDIS_ERR_IO, strerror(errno));\n            return -1;\n        }\n    }\n\n    return nwritten;\n}\n\nstatic void __redisSetErrorFromErrno(redisContext *c, int type, const char *prefix) {\n    int errorno = errno;  /* snprintf() may change errno */\n    char buf[128] = { 0 };\n    size_t len = 0;\n\n    if (prefix != NULL)\n        len = snprintf(buf,sizeof(buf),\"%s: \",prefix);\n    strerror_r(errorno, (char *)(buf + len), sizeof(buf) - len);\n    __redisSetError(c,type,buf);\n}\n\nstatic int redisSetReuseAddr(redisContext *c) {\n    int on = 1;\n    if (setsockopt(c->fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\nstatic int redisCreateSocket(redisContext *c, int type) {\n    redisFD s;\n    if ((s = socket(type, SOCK_STREAM, 0)) == REDIS_INVALID_FD) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);\n        return REDIS_ERR;\n    }\n    c->fd = s;\n    if (type == AF_INET) {\n        if (redisSetReuseAddr(c) == REDIS_ERR) {\n            return REDIS_ERR;\n        }\n    }\n    return REDIS_OK;\n}\n\nstatic int redisSetBlocking(redisContext *c, int blocking) {\n#ifndef _WIN32\n    int flags;\n\n    /* Set the socket nonblocking.\n     * Note that fcntl(2) for F_GETFL and F_SETFL can't be\n     * interrupted by a signal. 
*/\n    if ((flags = fcntl(c->fd, F_GETFL)) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"fcntl(F_GETFL)\");\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n\n    if (blocking)\n        flags &= ~O_NONBLOCK;\n    else\n        flags |= O_NONBLOCK;\n\n    if (fcntl(c->fd, F_SETFL, flags) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"fcntl(F_SETFL)\");\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n#else\n    u_long mode = blocking ? 0 : 1;\n    if (ioctl(c->fd, FIONBIO, &mode) == -1) {\n        __redisSetErrorFromErrno(c, REDIS_ERR_IO, \"ioctl(FIONBIO)\");\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n#endif /* _WIN32 */\n    return REDIS_OK;\n}\n\nint redisKeepAlive(redisContext *c, int interval) {\n    int val = 1;\n    redisFD fd = c->fd;\n\n#ifndef _WIN32\n    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &val, sizeof(val)) == -1){\n        __redisSetError(c,REDIS_ERR_OTHER,strerror(errno));\n        return REDIS_ERR;\n    }\n\n    val = interval;\n\n#if defined(__APPLE__) && defined(__MACH__)\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &val, sizeof(val)) < 0) {\n        __redisSetError(c,REDIS_ERR_OTHER,strerror(errno));\n        return REDIS_ERR;\n    }\n#else\n#if defined(__GLIBC__) && !defined(__FreeBSD_kernel__)\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &val, sizeof(val)) < 0) {\n        __redisSetError(c,REDIS_ERR_OTHER,strerror(errno));\n        return REDIS_ERR;\n    }\n\n    val = interval/3;\n    if (val == 0) val = 1;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &val, sizeof(val)) < 0) {\n        __redisSetError(c,REDIS_ERR_OTHER,strerror(errno));\n        return REDIS_ERR;\n    }\n\n    val = 3;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &val, sizeof(val)) < 0) {\n        __redisSetError(c,REDIS_ERR_OTHER,strerror(errno));\n        return REDIS_ERR;\n    }\n#endif\n#endif\n#else\n    int res;\n\n    res = win32_redisKeepAlive(fd, interval * 1000);\n 
   if (res != 0) {\n        __redisSetError(c, REDIS_ERR_OTHER, strerror(res));\n        return REDIS_ERR;\n    }\n#endif\n    return REDIS_OK;\n}\n\nint redisSetTcpNoDelay(redisContext *c) {\n    int yes = 1;\n    if (setsockopt(c->fd, IPPROTO_TCP, TCP_NODELAY, &yes, sizeof(yes)) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"setsockopt(TCP_NODELAY)\");\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\nint redisContextSetTcpUserTimeout(redisContext *c, unsigned int timeout) {\n    int res;\n#ifdef TCP_USER_TIMEOUT\n    res = setsockopt(c->fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &timeout, sizeof(timeout));\n#else\n    res = -1;\n    errno = ENOTSUP;\n    (void)timeout;\n#endif\n    if (res == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"setsockopt(TCP_USER_TIMEOUT)\");\n        redisNetClose(c);\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\n#define __MAX_MSEC (((LONG_MAX) - 999) / 1000)\n\nstatic int redisContextTimeoutMsec(redisContext *c, long *result)\n{\n    const struct timeval *timeout = c->connect_timeout;\n    long msec = -1;\n\n    /* Only use timeout when not NULL. 
*/\n    if (timeout != NULL) {\n        if (timeout->tv_usec > 1000000 || timeout->tv_sec > __MAX_MSEC) {\n            __redisSetError(c, REDIS_ERR_IO, \"Invalid timeout specified\");\n            *result = msec;\n            return REDIS_ERR;\n        }\n\n        msec = (timeout->tv_sec * 1000) + ((timeout->tv_usec + 999) / 1000);\n\n        if (msec < 0 || msec > INT_MAX) {\n            msec = INT_MAX;\n        }\n    }\n\n    *result = msec;\n    return REDIS_OK;\n}\n\nstatic int redisContextWaitReady(redisContext *c, long msec) {\n    struct pollfd   wfd[1];\n\n    wfd[0].fd     = c->fd;\n    wfd[0].events = POLLOUT;\n\n    if (errno == EINPROGRESS) {\n        int res;\n\n        if ((res = poll(wfd, 1, msec)) == -1) {\n            __redisSetErrorFromErrno(c, REDIS_ERR_IO, \"poll(2)\");\n            redisNetClose(c);\n            return REDIS_ERR;\n        } else if (res == 0) {\n            errno = ETIMEDOUT;\n            __redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);\n            redisNetClose(c);\n            return REDIS_ERR;\n        }\n\n        if (redisCheckConnectDone(c, &res) != REDIS_OK || res == 0) {\n            redisCheckSocketError(c);\n            return REDIS_ERR;\n        }\n\n        return REDIS_OK;\n    }\n\n    __redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);\n    redisNetClose(c);\n    return REDIS_ERR;\n}\n\nint redisCheckConnectDone(redisContext *c, int *completed) {\n    int rc = connect(c->fd, (const struct sockaddr *)c->saddr, c->addrlen);\n    if (rc == 0) {\n        *completed = 1;\n        return REDIS_OK;\n    }\n    int error = errno;\n    if (error == EINPROGRESS) {\n        /* must check error to see if connect failed.  Get the socket error */\n        int fail, so_error;\n        socklen_t optlen = sizeof(so_error);\n        fail = getsockopt(c->fd, SOL_SOCKET, SO_ERROR, &so_error, &optlen);\n        if (fail == 0) {\n            if (so_error == 0) {\n                /* Socket is connected! 
*/\n                *completed = 1;\n                return REDIS_OK;\n            }\n            /* connection error; */\n            errno = so_error;\n            error = so_error;\n        }\n    }\n    switch (error) {\n    case EISCONN:\n        *completed = 1;\n        return REDIS_OK;\n    case EALREADY:\n    case EWOULDBLOCK:\n        *completed = 0;\n        return REDIS_OK;\n    default:\n        return REDIS_ERR;\n    }\n}\n\nint redisCheckSocketError(redisContext *c) {\n    int err = 0, errno_saved = errno;\n    socklen_t errlen = sizeof(err);\n\n    if (getsockopt(c->fd, SOL_SOCKET, SO_ERROR, &err, &errlen) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"getsockopt(SO_ERROR)\");\n        return REDIS_ERR;\n    }\n\n    if (err == 0) {\n        err = errno_saved;\n    }\n\n    if (err) {\n        errno = err;\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,NULL);\n        return REDIS_ERR;\n    }\n\n    return REDIS_OK;\n}\n\nint redisContextSetTimeout(redisContext *c, const struct timeval tv) {\n    const void *to_ptr = &tv;\n    size_t to_sz = sizeof(tv);\n\n    if (redisContextUpdateCommandTimeout(c, &tv) != REDIS_OK) {\n        __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n        return REDIS_ERR;\n    }\n    if (setsockopt(c->fd,SOL_SOCKET,SO_RCVTIMEO,to_ptr,to_sz) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"setsockopt(SO_RCVTIMEO)\");\n        return REDIS_ERR;\n    }\n    if (setsockopt(c->fd,SOL_SOCKET,SO_SNDTIMEO,to_ptr,to_sz) == -1) {\n        __redisSetErrorFromErrno(c,REDIS_ERR_IO,\"setsockopt(SO_SNDTIMEO)\");\n        return REDIS_ERR;\n    }\n    return REDIS_OK;\n}\n\nint redisContextUpdateConnectTimeout(redisContext *c, const struct timeval *timeout) {\n    /* Same timeval struct, short circuit */\n    if (c->connect_timeout == timeout)\n        return REDIS_OK;\n\n    /* Allocate context timeval if we need to */\n    if (c->connect_timeout == NULL) {\n        c->connect_timeout = 
hi_malloc(sizeof(*c->connect_timeout));\n        if (c->connect_timeout == NULL)\n            return REDIS_ERR;\n    }\n\n    memcpy(c->connect_timeout, timeout, sizeof(*c->connect_timeout));\n    return REDIS_OK;\n}\n\nint redisContextUpdateCommandTimeout(redisContext *c, const struct timeval *timeout) {\n    /* Same timeval struct, short circuit */\n    if (c->command_timeout == timeout)\n        return REDIS_OK;\n\n    /* Allocate context timeval if we need to */\n    if (c->command_timeout == NULL) {\n        c->command_timeout = hi_malloc(sizeof(*c->command_timeout));\n        if (c->command_timeout == NULL)\n            return REDIS_ERR;\n    }\n\n    memcpy(c->command_timeout, timeout, sizeof(*c->command_timeout));\n    return REDIS_OK;\n}\n\nstatic int _redisContextConnectTcp(redisContext *c, const char *addr, int port,\n                                   const struct timeval *timeout,\n                                   const char *source_addr) {\n    redisFD s;\n    int rv, n;\n    char _port[6];  /* strlen(\"65535\"); */\n    struct addrinfo hints, *servinfo, *bservinfo, *p, *b;\n    int blocking = (c->flags & REDIS_BLOCK);\n    int reuseaddr = (c->flags & REDIS_REUSEADDR);\n    int reuses = 0;\n    long timeout_msec = -1;\n\n    servinfo = NULL;\n    c->connection_type = REDIS_CONN_TCP;\n    c->tcp.port = port;\n\n    /* We need to take possession of the passed parameters\n     * to make them reusable for a reconnect.\n     * We also carefully check we don't free data we already own,\n     * as in the case of the reconnect method.\n     *\n     * This is a bit ugly, but at least it works and doesn't leak memory.\n     **/\n    if (c->tcp.host != addr) {\n        hi_free(c->tcp.host);\n\n        c->tcp.host = hi_strdup(addr);\n        if (c->tcp.host == NULL)\n            goto oom;\n    }\n\n    if (timeout) {\n        if (redisContextUpdateConnectTimeout(c, timeout) == REDIS_ERR)\n            goto oom;\n    } else {\n        
hi_free(c->connect_timeout);\n        c->connect_timeout = NULL;\n    }\n\n    if (redisContextTimeoutMsec(c, &timeout_msec) != REDIS_OK) {\n        goto error;\n    }\n\n    if (source_addr == NULL) {\n        hi_free(c->tcp.source_addr);\n        c->tcp.source_addr = NULL;\n    } else if (c->tcp.source_addr != source_addr) {\n        hi_free(c->tcp.source_addr);\n        c->tcp.source_addr = hi_strdup(source_addr);\n    }\n\n    snprintf(_port, 6, \"%d\", port);\n    memset(&hints,0,sizeof(hints));\n    hints.ai_family = AF_INET;\n    hints.ai_socktype = SOCK_STREAM;\n\n    /* DNS lookup. To use dual stack, set both flags to prefer both IPv4 and\n     * IPv6. By default, for historical reasons, we try IPv4 first and then we\n     * try IPv6 only if no IPv4 address was found. */\n    if (c->flags & REDIS_PREFER_IPV6 && c->flags & REDIS_PREFER_IPV4)\n        hints.ai_family = AF_UNSPEC;\n    else if (c->flags & REDIS_PREFER_IPV6)\n        hints.ai_family = AF_INET6;\n    else\n        hints.ai_family = AF_INET;\n\n    rv = getaddrinfo(c->tcp.host, _port, &hints, &servinfo);\n    if (rv != 0 && hints.ai_family != AF_UNSPEC) {\n        /* Try again with the other IP version. */\n        hints.ai_family = (hints.ai_family == AF_INET) ? 
AF_INET6 : AF_INET;\n        rv = getaddrinfo(c->tcp.host, _port, &hints, &servinfo);\n    }\n    if (rv != 0) {\n        __redisSetError(c, REDIS_ERR_OTHER, gai_strerror(rv));\n        return REDIS_ERR;\n    }\n    for (p = servinfo; p != NULL; p = p->ai_next) {\naddrretry:\n        if ((s = socket(p->ai_family,p->ai_socktype,p->ai_protocol)) == REDIS_INVALID_FD)\n            continue;\n\n        c->fd = s;\n        if (redisSetBlocking(c,0) != REDIS_OK)\n            goto error;\n        if (c->tcp.source_addr) {\n            int bound = 0;\n            /* Using getaddrinfo saves us from self-determining IPv4 vs IPv6 */\n            if ((rv = getaddrinfo(c->tcp.source_addr, NULL, &hints, &bservinfo)) != 0) {\n                char buf[128];\n                snprintf(buf,sizeof(buf),\"Can't get addr: %s\",gai_strerror(rv));\n                __redisSetError(c,REDIS_ERR_OTHER,buf);\n                goto error;\n            }\n\n            if (reuseaddr) {\n                n = 1;\n                if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (char*) &n,\n                               sizeof(n)) < 0) {\n                    freeaddrinfo(bservinfo);\n                    goto error;\n                }\n            }\n\n            for (b = bservinfo; b != NULL; b = b->ai_next) {\n                if (bind(s,b->ai_addr,b->ai_addrlen) != -1) {\n                    bound = 1;\n                    break;\n                }\n            }\n            freeaddrinfo(bservinfo);\n            if (!bound) {\n                char buf[128];\n                snprintf(buf,sizeof(buf),\"Can't bind socket: %s\",strerror(errno));\n                __redisSetError(c,REDIS_ERR_OTHER,buf);\n                goto error;\n            }\n        }\n\n        /* For repeat connection */\n        hi_free(c->saddr);\n        c->saddr = hi_malloc(p->ai_addrlen);\n        if (c->saddr == NULL)\n            goto oom;\n\n        memcpy(c->saddr, p->ai_addr, p->ai_addrlen);\n        c->addrlen = 
p->ai_addrlen;\n\n        if (connect(s,p->ai_addr,p->ai_addrlen) == -1) {\n            if (errno == EHOSTUNREACH) {\n                redisNetClose(c);\n                continue;\n            } else if (errno == EINPROGRESS) {\n                if (blocking) {\n                    goto wait_for_ready;\n                }\n                /* This is ok.\n                 * Note that even when it's in blocking mode, we unset blocking\n                 * for `connect()`\n                 */\n            } else if (errno == EADDRNOTAVAIL && reuseaddr) {\n                if (++reuses >= REDIS_CONNECT_RETRIES) {\n                    goto error;\n                } else {\n                    redisNetClose(c);\n                    goto addrretry;\n                }\n            } else {\n                wait_for_ready:\n                if (redisContextWaitReady(c,timeout_msec) != REDIS_OK)\n                    goto error;\n                if (redisSetTcpNoDelay(c) != REDIS_OK)\n                    goto error;\n            }\n        }\n        if (blocking && redisSetBlocking(c,1) != REDIS_OK)\n            goto error;\n\n        c->flags |= REDIS_CONNECTED;\n        rv = REDIS_OK;\n        goto end;\n    }\n    if (p == NULL) {\n        char buf[128];\n        snprintf(buf,sizeof(buf),\"Can't create socket: %s\",strerror(errno));\n        __redisSetError(c,REDIS_ERR_OTHER,buf);\n        goto error;\n    }\n\noom:\n    __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\nerror:\n    rv = REDIS_ERR;\nend:\n    if(servinfo) {\n        freeaddrinfo(servinfo);\n    }\n\n    return rv;  // Need to return REDIS_OK if alright\n}\n\nint redisContextConnectTcp(redisContext *c, const char *addr, int port,\n                           const struct timeval *timeout) {\n    return _redisContextConnectTcp(c, addr, port, timeout, NULL);\n}\n\nint redisContextConnectBindTcp(redisContext *c, const char *addr, int port,\n                               const struct timeval *timeout,\n          
                     const char *source_addr) {\n    return _redisContextConnectTcp(c, addr, port, timeout, source_addr);\n}\n\nint redisContextConnectUnix(redisContext *c, const char *path, const struct timeval *timeout) {\n#ifndef _WIN32\n    int blocking = (c->flags & REDIS_BLOCK);\n    struct sockaddr_un *sa;\n    long timeout_msec = -1;\n\n    if (redisCreateSocket(c,AF_UNIX) < 0)\n        return REDIS_ERR;\n    if (redisSetBlocking(c,0) != REDIS_OK)\n        return REDIS_ERR;\n\n    c->connection_type = REDIS_CONN_UNIX;\n    if (c->unix_sock.path != path) {\n        hi_free(c->unix_sock.path);\n\n        c->unix_sock.path = hi_strdup(path);\n        if (c->unix_sock.path == NULL)\n            goto oom;\n    }\n\n    if (timeout) {\n        if (redisContextUpdateConnectTimeout(c, timeout) == REDIS_ERR)\n            goto oom;\n    } else {\n        hi_free(c->connect_timeout);\n        c->connect_timeout = NULL;\n    }\n\n    if (redisContextTimeoutMsec(c,&timeout_msec) != REDIS_OK)\n        return REDIS_ERR;\n\n    /* Don't leak sockaddr if we're reconnecting */\n    if (c->saddr) hi_free(c->saddr);\n\n    sa = (struct sockaddr_un*)(c->saddr = hi_malloc(sizeof(struct sockaddr_un)));\n    if (sa == NULL)\n        goto oom;\n\n    c->addrlen = sizeof(struct sockaddr_un);\n    sa->sun_family = AF_UNIX;\n    strncpy(sa->sun_path, path, sizeof(sa->sun_path) - 1);\n    if (connect(c->fd, (struct sockaddr*)sa, sizeof(*sa)) == -1) {\n        if (errno == EINPROGRESS && !blocking) {\n            /* This is ok. */\n        } else {\n            if (redisContextWaitReady(c,timeout_msec) != REDIS_OK)\n                return REDIS_ERR;\n        }\n    }\n\n    /* Reset socket to be blocking after connect(2). */\n    if (blocking && redisSetBlocking(c,1) != REDIS_OK)\n        return REDIS_ERR;\n\n    c->flags |= REDIS_CONNECTED;\n    return REDIS_OK;\n#else\n    /* We currently do not support Unix sockets for Windows. 
*/\n    /* TODO(m): https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/ */\n    errno = EPROTONOSUPPORT;\n    return REDIS_ERR;\n#endif /* _WIN32 */\noom:\n    __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n    return REDIS_ERR;\n}\n"
  },
  {
    "path": "deps/hiredis/net.h",
    "content": "/* Extracted from anet.c to work properly with Hiredis error reporting.\n *\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2014, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2015, Matt Stancliff <matt at genges dot com>,\n *                     Jan-Erik Rediger <janerik at fnordig dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __NET_H\n#define __NET_H\n\n#include \"hiredis.h\"\n\nvoid redisNetClose(redisContext *c);\nssize_t redisNetRead(redisContext *c, char *buf, size_t bufcap);\nssize_t redisNetWrite(redisContext *c);\n\nint redisCheckSocketError(redisContext *c);\nint redisContextSetTimeout(redisContext *c, const struct timeval tv);\nint redisContextConnectTcp(redisContext *c, const char *addr, int port, const struct timeval *timeout);\nint redisContextConnectBindTcp(redisContext *c, const char *addr, int port,\n                               const struct timeval *timeout,\n                               const char *source_addr);\nint redisContextConnectUnix(redisContext *c, const char *path, const struct timeval *timeout);\nint redisKeepAlive(redisContext *c, int interval);\nint redisCheckConnectDone(redisContext *c, int *completed);\n\nint redisSetTcpNoDelay(redisContext *c);\nint redisContextSetTcpUserTimeout(redisContext *c, unsigned int timeout);\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/read.c",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include <string.h>\n#include <stdlib.h>\n#ifndef _MSC_VER\n#include <unistd.h>\n#include <strings.h>\n#endif\n#include <assert.h>\n#include <errno.h>\n#include <ctype.h>\n#include <limits.h>\n#include <math.h>\n\n#include \"alloc.h\"\n#include \"read.h\"\n#include \"sds.h\"\n#include \"win32.h\"\n\n/* Initial size of our nested reply stack and how much we grow it when needed */\n#define REDIS_READER_STACK_SIZE 9\n\nstatic void __redisReaderSetError(redisReader *r, int type, const char *str) {\n    size_t len;\n\n    if (r->reply != NULL && r->fn && r->fn->freeObject) {\n        r->fn->freeObject(r->reply);\n        r->reply = NULL;\n    }\n\n    /* Clear input buffer on errors. */\n    hi_sdsfree(r->buf);\n    r->buf = NULL;\n    r->pos = r->len = 0;\n\n    /* Reset task stack. */\n    r->ridx = -1;\n\n    /* Set error. */\n    r->err = type;\n    len = strlen(str);\n    len = len < (sizeof(r->errstr)-1) ? 
len : (sizeof(r->errstr)-1);\n    memcpy(r->errstr,str,len);\n    r->errstr[len] = '\\0';\n}\n\nstatic size_t chrtos(char *buf, size_t size, char byte) {\n    size_t len = 0;\n\n    switch(byte) {\n    case '\\\\':\n    case '\"':\n        len = snprintf(buf,size,\"\\\"\\\\%c\\\"\",byte);\n        break;\n    case '\\n': len = snprintf(buf,size,\"\\\"\\\\n\\\"\"); break;\n    case '\\r': len = snprintf(buf,size,\"\\\"\\\\r\\\"\"); break;\n    case '\\t': len = snprintf(buf,size,\"\\\"\\\\t\\\"\"); break;\n    case '\\a': len = snprintf(buf,size,\"\\\"\\\\a\\\"\"); break;\n    case '\\b': len = snprintf(buf,size,\"\\\"\\\\b\\\"\"); break;\n    default:\n        if (isprint(byte))\n            len = snprintf(buf,size,\"\\\"%c\\\"\",byte);\n        else\n            len = snprintf(buf,size,\"\\\"\\\\x%02x\\\"\",(unsigned char)byte);\n        break;\n    }\n\n    return len;\n}\n\nstatic void __redisReaderSetErrorProtocolByte(redisReader *r, char byte) {\n    char cbuf[8], sbuf[128];\n\n    chrtos(cbuf,sizeof(cbuf),byte);\n    snprintf(sbuf,sizeof(sbuf),\n        \"Protocol error, got %s as reply type byte\", cbuf);\n    __redisReaderSetError(r,REDIS_ERR_PROTOCOL,sbuf);\n}\n\nstatic void __redisReaderSetErrorOOM(redisReader *r) {\n    __redisReaderSetError(r,REDIS_ERR_OOM,\"Out of memory\");\n}\n\nstatic char *readBytes(redisReader *r, unsigned int bytes) {\n    char *p;\n    if (r->len-r->pos >= bytes) {\n        p = r->buf+r->pos;\n        r->pos += bytes;\n        return p;\n    }\n    return NULL;\n}\n\n/* Find pointer to \\r\\n. */\nstatic char *seekNewline(char *s, size_t len) {\n    char *ret;\n\n    /* We cannot match with fewer than 2 bytes */\n    if (len < 2)\n        return NULL;\n\n    /* Search up to len - 1 characters */\n    len--;\n\n    /* Look for the \\r */\n    while ((ret = memchr(s, '\\r', len)) != NULL) {\n        if (ret[1] == '\\n') {\n            /* Found. */\n            break;\n        }\n        /* Continue searching. 
*/\n        ret++;\n        len -= ret - s;\n        s = ret;\n    }\n\n    return ret;\n}\n\n/* Convert a string into a long long. Returns REDIS_OK if the string could be\n * parsed into a (non-overflowing) long long, REDIS_ERR otherwise. The value\n * will be set to the parsed value when appropriate.\n *\n * Note that this function demands that the string strictly represents\n * a long long: no spaces or other characters before or after the string\n * representing the number are accepted, nor zeroes at the start if not\n * for the string \"0\" representing the zero number.\n *\n * Because of its strictness, it is safe to use this function to check if\n * you can convert a string into a long long, and obtain back the string\n * from the number without any loss in the string representation. */\nstatic int string2ll(const char *s, size_t slen, long long *value) {\n    const char *p = s;\n    size_t plen = 0;\n    int negative = 0;\n    unsigned long long v;\n\n    if (plen == slen)\n        return REDIS_ERR;\n\n    /* Special case: first and only digit is 0. */\n    if (slen == 1 && p[0] == '0') {\n        if (value != NULL) *value = 0;\n        return REDIS_OK;\n    }\n\n    if (p[0] == '-') {\n        negative = 1;\n        p++; plen++;\n\n        /* Abort on only a negative sign. */\n        if (plen == slen)\n            return REDIS_ERR;\n    }\n\n    /* First digit should be 1-9, otherwise the string should just be 0. */\n    if (p[0] >= '1' && p[0] <= '9') {\n        v = p[0]-'0';\n        p++; plen++;\n    } else if (p[0] == '0' && slen == 1) {\n        *value = 0;\n        return REDIS_OK;\n    } else {\n        return REDIS_ERR;\n    }\n\n    while (plen < slen && p[0] >= '0' && p[0] <= '9') {\n        if (v > (ULLONG_MAX / 10)) /* Overflow. */\n            return REDIS_ERR;\n        v *= 10;\n\n        if (v > (ULLONG_MAX - (p[0]-'0'))) /* Overflow. 
*/\n            return REDIS_ERR;\n        v += p[0]-'0';\n\n        p++; plen++;\n    }\n\n    /* Return if not all bytes were used. */\n    if (plen < slen)\n        return REDIS_ERR;\n\n    if (negative) {\n        if (v > ((unsigned long long)(-(LLONG_MIN+1))+1)) /* Overflow. */\n            return REDIS_ERR;\n        if (value != NULL) *value = -v;\n    } else {\n        if (v > LLONG_MAX) /* Overflow. */\n            return REDIS_ERR;\n        if (value != NULL) *value = v;\n    }\n    return REDIS_OK;\n}\n\nstatic char *readLine(redisReader *r, int *_len) {\n    char *p, *s;\n    int len;\n\n    p = r->buf+r->pos;\n    s = seekNewline(p,(r->len-r->pos));\n    if (s != NULL) {\n        len = s-(r->buf+r->pos);\n        r->pos += len+2; /* skip \\r\\n */\n        if (_len) *_len = len;\n        return p;\n    }\n    return NULL;\n}\n\nstatic void moveToNextTask(redisReader *r) {\n    redisReadTask *cur, *prv;\n    while (r->ridx >= 0) {\n        /* Return a.s.a.p. when the stack is now empty. 
*/\n        if (r->ridx == 0) {\n            r->ridx--;\n            return;\n        }\n\n        cur = r->task[r->ridx];\n        prv = r->task[r->ridx-1];\n        assert(prv->type == REDIS_REPLY_ARRAY ||\n               prv->type == REDIS_REPLY_MAP ||\n               prv->type == REDIS_REPLY_SET ||\n               prv->type == REDIS_REPLY_PUSH);\n        if (cur->idx == prv->elements-1) {\n            r->ridx--;\n        } else {\n            /* Reset the type because the next item can be anything */\n            assert(cur->idx < prv->elements);\n            cur->type = -1;\n            cur->elements = -1;\n            cur->idx++;\n            return;\n        }\n    }\n}\n\nstatic int processLineItem(redisReader *r) {\n    redisReadTask *cur = r->task[r->ridx];\n    void *obj;\n    char *p;\n    int len;\n\n    if ((p = readLine(r,&len)) != NULL) {\n        if (cur->type == REDIS_REPLY_INTEGER) {\n            long long v;\n\n            if (string2ll(p, len, &v) == REDIS_ERR) {\n                __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                        \"Bad integer value\");\n                return REDIS_ERR;\n            }\n\n            if (r->fn && r->fn->createInteger) {\n                obj = r->fn->createInteger(cur,v);\n            } else {\n                obj = (void*)REDIS_REPLY_INTEGER;\n            }\n        } else if (cur->type == REDIS_REPLY_DOUBLE) {\n            char buf[326], *eptr;\n            double d;\n\n            if ((size_t)len >= sizeof(buf)) {\n                __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                        \"Double value is too large\");\n                return REDIS_ERR;\n            }\n\n            memcpy(buf,p,len);\n            buf[len] = '\\0';\n\n            if (len == 3 && strcasecmp(buf,\"inf\") == 0) {\n                d = INFINITY; /* Positive infinite. */\n            } else if (len == 4 && strcasecmp(buf,\"-inf\") == 0) {\n                d = -INFINITY; /* Negative infinite. 
*/\n            } else if ((len == 3 && strcasecmp(buf,\"nan\") == 0) ||\n                       (len == 4 && strcasecmp(buf, \"-nan\") == 0)) {\n                d = NAN; /* nan. */\n            } else {\n                d = strtod((char*)buf,&eptr);\n                /* RESP3 only allows \"inf\", \"-inf\", and finite values, while\n                 * strtod() allows other variations on infinity,\n                 * etc. We explicitly handle our two allowed infinite cases and NaN\n                 * above, so strtod() should only result in finite values. */\n                if (buf[0] == '\\0' || eptr != &buf[len] || !isfinite(d)) {\n                    __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                            \"Bad double value\");\n                    return REDIS_ERR;\n                }\n            }\n\n            if (r->fn && r->fn->createDouble) {\n                obj = r->fn->createDouble(cur,d,buf,len);\n            } else {\n                obj = (void*)REDIS_REPLY_DOUBLE;\n            }\n        } else if (cur->type == REDIS_REPLY_NIL) {\n            if (len != 0) {\n                __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                        \"Bad nil value\");\n                return REDIS_ERR;\n            }\n\n            if (r->fn && r->fn->createNil)\n                obj = r->fn->createNil(cur);\n            else\n                obj = (void*)REDIS_REPLY_NIL;\n        } else if (cur->type == REDIS_REPLY_BOOL) {\n            int bval;\n\n            if (len != 1 || !strchr(\"tTfF\", p[0])) {\n                __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                        \"Bad bool value\");\n                return REDIS_ERR;\n            }\n\n            bval = p[0] == 't' || p[0] == 'T';\n            if (r->fn && r->fn->createBool)\n                obj = r->fn->createBool(cur,bval);\n            else\n                obj = (void*)REDIS_REPLY_BOOL;\n        } else if (cur->type == REDIS_REPLY_BIGNUM) {\n            /* 
Ensure all characters are decimal digits (with possible leading\n             * minus sign). */\n            for (int i = 0; i < len; i++) {\n                /* XXX Consider: Allow leading '+'? Error on leading '0's? */\n                if (i == 0 && p[0] == '-') continue;\n                if (p[i] < '0' || p[i] > '9') {\n                    __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                            \"Bad bignum value\");\n                    return REDIS_ERR;\n                }\n            }\n            if (r->fn && r->fn->createString)\n                obj = r->fn->createString(cur,p,len);\n            else\n                obj = (void*)REDIS_REPLY_BIGNUM;\n        } else {\n            /* Type will be error or status. */\n            for (int i = 0; i < len; i++) {\n                if (p[i] == '\\r' || p[i] == '\\n') {\n                    __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                            \"Bad simple string value\");\n                    return REDIS_ERR;\n                }\n            }\n            if (r->fn && r->fn->createString)\n                obj = r->fn->createString(cur,p,len);\n            else\n                obj = (void*)(uintptr_t)(cur->type);\n        }\n\n        if (obj == NULL) {\n            __redisReaderSetErrorOOM(r);\n            return REDIS_ERR;\n        }\n\n        /* Set reply if this is the root object. 
*/\n        if (r->ridx == 0) r->reply = obj;\n        moveToNextTask(r);\n        return REDIS_OK;\n    }\n\n    return REDIS_ERR;\n}\n\nstatic int processBulkItem(redisReader *r) {\n    redisReadTask *cur = r->task[r->ridx];\n    void *obj = NULL;\n    char *p, *s;\n    long long len;\n    unsigned long bytelen;\n    int success = 0;\n\n    p = r->buf+r->pos;\n    s = seekNewline(p,r->len-r->pos);\n    if (s != NULL) {\n        p = r->buf+r->pos;\n        bytelen = s-(r->buf+r->pos)+2; /* include \\r\\n */\n\n        if (string2ll(p, bytelen - 2, &len) == REDIS_ERR) {\n            __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                    \"Bad bulk string length\");\n            return REDIS_ERR;\n        }\n\n        if (len < -1 || (LLONG_MAX > SIZE_MAX && len > (long long)SIZE_MAX)) {\n            __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                    \"Bulk string length out of range\");\n            return REDIS_ERR;\n        }\n\n        if (len == -1) {\n            /* The nil object can always be created. */\n            if (r->fn && r->fn->createNil)\n                obj = r->fn->createNil(cur);\n            else\n                obj = (void*)REDIS_REPLY_NIL;\n            success = 1;\n        } else {\n            /* Only continue when the buffer contains the entire bulk item. 
*/\n            bytelen += len+2; /* include \\r\\n */\n            if (r->pos+bytelen <= r->len) {\n                if ((cur->type == REDIS_REPLY_VERB && len < 4) ||\n                    (cur->type == REDIS_REPLY_VERB && s[5] != ':'))\n                {\n                    __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                            \"Verbatim string 4 bytes of content type are \"\n                            \"missing or incorrectly encoded.\");\n                    return REDIS_ERR;\n                }\n                if (r->fn && r->fn->createString)\n                    obj = r->fn->createString(cur,s+2,len);\n                else\n                    obj = (void*)(uintptr_t)cur->type;\n                success = 1;\n            }\n        }\n\n        /* Proceed when obj was created. */\n        if (success) {\n            if (obj == NULL) {\n                __redisReaderSetErrorOOM(r);\n                return REDIS_ERR;\n            }\n\n            r->pos += bytelen;\n\n            /* Set reply if this is the root object. */\n            if (r->ridx == 0) r->reply = obj;\n            moveToNextTask(r);\n            return REDIS_OK;\n        }\n    }\n\n    return REDIS_ERR;\n}\n\nstatic int redisReaderGrow(redisReader *r) {\n    redisReadTask **aux;\n    int newlen;\n\n    /* Grow our stack size */\n    newlen = r->tasks + REDIS_READER_STACK_SIZE;\n    aux = hi_realloc(r->task, sizeof(*r->task) * newlen);\n    if (aux == NULL)\n        goto oom;\n\n    r->task = aux;\n\n    /* Allocate new tasks */\n    for (; r->tasks < newlen; r->tasks++) {\n        r->task[r->tasks] = hi_calloc(1, sizeof(**r->task));\n        if (r->task[r->tasks] == NULL)\n            goto oom;\n    }\n\n    return REDIS_OK;\noom:\n    __redisReaderSetErrorOOM(r);\n    return REDIS_ERR;\n}\n\n/* Process the array, map and set types. 
*/\nstatic int processAggregateItem(redisReader *r) {\n    redisReadTask *cur = r->task[r->ridx];\n    void *obj;\n    char *p;\n    long long elements;\n    int root = 0, len;\n\n    if (r->ridx == r->tasks - 1) {\n        if (redisReaderGrow(r) == REDIS_ERR)\n            return REDIS_ERR;\n    }\n\n    if ((p = readLine(r,&len)) != NULL) {\n        if (string2ll(p, len, &elements) == REDIS_ERR) {\n            __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                    \"Bad multi-bulk length\");\n            return REDIS_ERR;\n        }\n\n        root = (r->ridx == 0);\n\n        if (elements < -1 || (LLONG_MAX > SIZE_MAX && elements > SIZE_MAX) ||\n            (r->maxelements > 0 && elements > r->maxelements))\n        {\n            __redisReaderSetError(r,REDIS_ERR_PROTOCOL,\n                    \"Multi-bulk length out of range\");\n            return REDIS_ERR;\n        }\n\n        if (elements == -1) {\n            if (r->fn && r->fn->createNil)\n                obj = r->fn->createNil(cur);\n            else\n                obj = (void*)REDIS_REPLY_NIL;\n\n            if (obj == NULL) {\n                __redisReaderSetErrorOOM(r);\n                return REDIS_ERR;\n            }\n\n            moveToNextTask(r);\n        } else {\n            if (cur->type == REDIS_REPLY_MAP) elements *= 2;\n\n            if (r->fn && r->fn->createArray)\n                obj = r->fn->createArray(cur,elements);\n            else\n                obj = (void*)(uintptr_t)cur->type;\n\n            if (obj == NULL) {\n                __redisReaderSetErrorOOM(r);\n                return REDIS_ERR;\n            }\n\n            /* Modify task stack when there are more than 0 elements. 
*/\n            if (elements > 0) {\n                cur->elements = elements;\n                cur->obj = obj;\n                r->ridx++;\n                r->task[r->ridx]->type = -1;\n                r->task[r->ridx]->elements = -1;\n                r->task[r->ridx]->idx = 0;\n                r->task[r->ridx]->obj = NULL;\n                r->task[r->ridx]->parent = cur;\n                r->task[r->ridx]->privdata = r->privdata;\n            } else {\n                moveToNextTask(r);\n            }\n        }\n\n        /* Set reply if this is the root object. */\n        if (root) r->reply = obj;\n        return REDIS_OK;\n    }\n\n    return REDIS_ERR;\n}\n\nstatic int processItem(redisReader *r) {\n    redisReadTask *cur = r->task[r->ridx];\n    char *p;\n\n    /* check if we need to read type */\n    if (cur->type < 0) {\n        if ((p = readBytes(r,1)) != NULL) {\n            switch (p[0]) {\n            case '-':\n                cur->type = REDIS_REPLY_ERROR;\n                break;\n            case '+':\n                cur->type = REDIS_REPLY_STATUS;\n                break;\n            case ':':\n                cur->type = REDIS_REPLY_INTEGER;\n                break;\n            case ',':\n                cur->type = REDIS_REPLY_DOUBLE;\n                break;\n            case '_':\n                cur->type = REDIS_REPLY_NIL;\n                break;\n            case '$':\n                cur->type = REDIS_REPLY_STRING;\n                break;\n            case '*':\n                cur->type = REDIS_REPLY_ARRAY;\n                break;\n            case '%':\n                cur->type = REDIS_REPLY_MAP;\n                break;\n            case '~':\n                cur->type = REDIS_REPLY_SET;\n                break;\n            case '#':\n                cur->type = REDIS_REPLY_BOOL;\n                break;\n            case '=':\n                cur->type = REDIS_REPLY_VERB;\n                break;\n            case '>':\n                
cur->type = REDIS_REPLY_PUSH;\n                break;\n            case '(':\n                cur->type = REDIS_REPLY_BIGNUM;\n                break;\n            default:\n                __redisReaderSetErrorProtocolByte(r,*p);\n                return REDIS_ERR;\n            }\n        } else {\n            /* could not consume 1 byte */\n            return REDIS_ERR;\n        }\n    }\n\n    /* process typed item */\n    switch(cur->type) {\n    case REDIS_REPLY_ERROR:\n    case REDIS_REPLY_STATUS:\n    case REDIS_REPLY_INTEGER:\n    case REDIS_REPLY_DOUBLE:\n    case REDIS_REPLY_NIL:\n    case REDIS_REPLY_BOOL:\n    case REDIS_REPLY_BIGNUM:\n        return processLineItem(r);\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_VERB:\n        return processBulkItem(r);\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_MAP:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n        return processAggregateItem(r);\n    default:\n        assert(NULL);\n        return REDIS_ERR; /* Avoid warning. 
*/\n    }\n}\n\nredisReader *redisReaderCreateWithFunctions(redisReplyObjectFunctions *fn) {\n    redisReader *r;\n\n    r = hi_calloc(1,sizeof(redisReader));\n    if (r == NULL)\n        return NULL;\n\n    r->buf = hi_sdsempty();\n    if (r->buf == NULL)\n        goto oom;\n\n    r->task = hi_calloc(REDIS_READER_STACK_SIZE, sizeof(*r->task));\n    if (r->task == NULL)\n        goto oom;\n\n    for (; r->tasks < REDIS_READER_STACK_SIZE; r->tasks++) {\n        r->task[r->tasks] = hi_calloc(1, sizeof(**r->task));\n        if (r->task[r->tasks] == NULL)\n            goto oom;\n    }\n\n    r->fn = fn;\n    r->maxbuf = REDIS_READER_MAX_BUF;\n    r->maxelements = REDIS_READER_MAX_ARRAY_ELEMENTS;\n    r->ridx = -1;\n\n    return r;\noom:\n    redisReaderFree(r);\n    return NULL;\n}\n\nvoid redisReaderFree(redisReader *r) {\n    if (r == NULL)\n        return;\n\n    if (r->reply != NULL && r->fn && r->fn->freeObject)\n        r->fn->freeObject(r->reply);\n\n    if (r->task) {\n        /* We know r->task[i] is allocated if i < r->tasks */\n        for (int i = 0; i < r->tasks; i++) {\n            hi_free(r->task[i]);\n        }\n\n        hi_free(r->task);\n    }\n\n    hi_sdsfree(r->buf);\n    hi_free(r);\n}\n\nint redisReaderFeed(redisReader *r, const char *buf, size_t len) {\n    hisds newbuf;\n\n    /* Return early when this reader is in an erroneous state. */\n    if (r->err)\n        return REDIS_ERR;\n\n    /* Copy the provided buffer. */\n    if (buf != NULL && len >= 1) {\n        /* Destroy internal buffer when it is empty and is quite large. 
*/\n        if (r->len == 0 && r->maxbuf != 0 && hi_sdsavail(r->buf) > r->maxbuf) {\n            hi_sdsfree(r->buf);\n            r->buf = hi_sdsempty();\n            if (r->buf == NULL) goto oom;\n\n            r->pos = 0;\n        }\n\n        newbuf = hi_sdscatlen(r->buf,buf,len);\n        if (newbuf == NULL) goto oom;\n\n        r->buf = newbuf;\n        r->len = hi_sdslen(r->buf);\n    }\n\n    return REDIS_OK;\noom:\n    __redisReaderSetErrorOOM(r);\n    return REDIS_ERR;\n}\n\nint redisReaderGetReply(redisReader *r, void **reply) {\n    /* Default target pointer to NULL. */\n    if (reply != NULL)\n        *reply = NULL;\n\n    /* Return early when this reader is in an erroneous state. */\n    if (r->err)\n        return REDIS_ERR;\n\n    /* When the buffer is empty, there will never be a reply. */\n    if (r->len == 0)\n        return REDIS_OK;\n\n    /* Set first item to process when the stack is empty. */\n    if (r->ridx == -1) {\n        r->task[0]->type = -1;\n        r->task[0]->elements = -1;\n        r->task[0]->idx = -1;\n        r->task[0]->obj = NULL;\n        r->task[0]->parent = NULL;\n        r->task[0]->privdata = r->privdata;\n        r->ridx = 0;\n    }\n\n    /* Process items in reply. */\n    while (r->ridx >= 0)\n        if (processItem(r) != REDIS_OK)\n            break;\n\n    /* Return ASAP when an error occurred. */\n    if (r->err)\n        return REDIS_ERR;\n\n    /* Discard part of the buffer when we've consumed at least 1k, to avoid\n     * doing unnecessary calls to memmove() in sds.c. */\n    if (r->pos >= 1024) {\n        if (hi_sdsrange(r->buf,r->pos,-1) < 0) return REDIS_ERR;\n        r->pos = 0;\n        r->len = hi_sdslen(r->buf);\n    }\n\n    /* Emit a reply when there is one. 
*/\n    if (r->ridx == -1) {\n        if (reply != NULL) {\n            *reply = r->reply;\n        } else if (r->reply != NULL && r->fn && r->fn->freeObject) {\n            r->fn->freeObject(r->reply);\n        }\n        r->reply = NULL;\n    }\n    return REDIS_OK;\n}\n"
  },
  {
    "path": "deps/hiredis/read.h",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n\n#ifndef __HIREDIS_READ_H\n#define __HIREDIS_READ_H\n#include <stdio.h> /* for size_t */\n\n#define REDIS_ERR -1\n#define REDIS_OK 0\n\n/* When an error occurs, the err flag in a context is set to hold the type of\n * error that occurred. 
REDIS_ERR_IO means there was an I/O error and you\n * should use the \"errno\" variable to find out what is wrong.\n * For other values, the \"errstr\" field will hold a description. */\n#define REDIS_ERR_IO 1 /* Error in read or write */\n#define REDIS_ERR_EOF 3 /* End of file */\n#define REDIS_ERR_PROTOCOL 4 /* Protocol error */\n#define REDIS_ERR_OOM 5 /* Out of memory */\n#define REDIS_ERR_TIMEOUT 6 /* Timed out */\n#define REDIS_ERR_OTHER 2 /* Everything else... */\n\n#define REDIS_REPLY_STRING 1\n#define REDIS_REPLY_ARRAY 2\n#define REDIS_REPLY_INTEGER 3\n#define REDIS_REPLY_NIL 4\n#define REDIS_REPLY_STATUS 5\n#define REDIS_REPLY_ERROR 6\n#define REDIS_REPLY_DOUBLE 7\n#define REDIS_REPLY_BOOL 8\n#define REDIS_REPLY_MAP 9\n#define REDIS_REPLY_SET 10\n#define REDIS_REPLY_ATTR 11\n#define REDIS_REPLY_PUSH 12\n#define REDIS_REPLY_BIGNUM 13\n#define REDIS_REPLY_VERB 14\n\n/* Default max unused reader buffer. */\n#define REDIS_READER_MAX_BUF (1024*16)\n\n/* Default multi-bulk element limit */\n#define REDIS_READER_MAX_ARRAY_ELEMENTS ((1LL<<32) - 1)\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\ntypedef struct redisReadTask {\n    int type;\n    long long elements; /* number of elements in multibulk container */\n    int idx; /* index in parent (array) object */\n    void *obj; /* holds user-generated value for a read task */\n    struct redisReadTask *parent; /* parent task */\n    void *privdata; /* user-settable arbitrary field */\n} redisReadTask;\n\ntypedef struct redisReplyObjectFunctions {\n    void *(*createString)(const redisReadTask*, char*, size_t);\n    void *(*createArray)(const redisReadTask*, size_t);\n    void *(*createInteger)(const redisReadTask*, long long);\n    void *(*createDouble)(const redisReadTask*, double, char*, size_t);\n    void *(*createNil)(const redisReadTask*);\n    void *(*createBool)(const redisReadTask*, int);\n    void (*freeObject)(void*);\n} redisReplyObjectFunctions;\n\ntypedef struct redisReader {\n    int err; /* Error 
flags, 0 when there is no error */\n    char errstr[128]; /* String representation of error when applicable */\n\n    char *buf; /* Read buffer */\n    size_t pos; /* Buffer cursor */\n    size_t len; /* Buffer length */\n    size_t maxbuf; /* Max length of unused buffer */\n    long long maxelements; /* Max multi-bulk elements */\n\n    redisReadTask **task;\n    int tasks;\n\n    int ridx; /* Index of current read task */\n    void *reply; /* Temporary reply pointer */\n\n    redisReplyObjectFunctions *fn;\n    void *privdata;\n} redisReader;\n\n/* Public API for the protocol parser. */\nredisReader *redisReaderCreateWithFunctions(redisReplyObjectFunctions *fn);\nvoid redisReaderFree(redisReader *r);\nint redisReaderFeed(redisReader *r, const char *buf, size_t len);\nint redisReaderGetReply(redisReader *r, void **reply);\n\n#define redisReaderSetPrivdata(_r, _p) (int)(((redisReader*)(_r))->privdata = (_p))\n#define redisReaderGetObject(_r) (((redisReader*)(_r))->reply)\n#define redisReaderGetError(_r) (((redisReader*)(_r))->errstr)\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/hiredis/sds.c",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2015, Oran Agra\n * Copyright (c) 2015, Redis Labs, Inc\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <assert.h>\n#include <limits.h>\n#include \"sds.h\"\n#include \"sdsalloc.h\"\n\nstatic inline int hi_sdsHdrSize(char type) {\n    switch(type&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            return sizeof(struct hisdshdr5);\n        case HI_SDS_TYPE_8:\n            return sizeof(struct hisdshdr8);\n        case HI_SDS_TYPE_16:\n            return sizeof(struct hisdshdr16);\n        case HI_SDS_TYPE_32:\n            return sizeof(struct hisdshdr32);\n        case HI_SDS_TYPE_64:\n            return sizeof(struct hisdshdr64);\n    }\n    return 0;\n}\n\nstatic inline char hi_sdsReqType(size_t string_size) {\n    if (string_size < 32)\n        return HI_SDS_TYPE_5;\n    if (string_size < 0xff)\n        return HI_SDS_TYPE_8;\n    if (string_size < 0xffff)\n        return HI_SDS_TYPE_16;\n    if (string_size < 0xffffffff)\n        return HI_SDS_TYPE_32;\n    return HI_SDS_TYPE_64;\n}\n\n/* Create a new hisds string with the content specified by the 'init' pointer\n * and 'initlen'.\n * If NULL is used for 'init' the string is initialized with zero bytes.\n *\n * The string is always null-terminated (all the hisds strings are, always) so\n * even if you create an hisds string with:\n *\n * mystring = hi_sdsnewlen(\"abc\",3);\n *\n * You can print the string with printf() as there is an implicit \\0 at 
the\n * end of the string. However the string is binary safe and can contain\n * \\0 characters in the middle, as the length is stored in the hisds header. */\nhisds hi_sdsnewlen(const void *init, size_t initlen) {\n    void *sh;\n    hisds s;\n    char type = hi_sdsReqType(initlen);\n    /* Empty strings are usually created in order to append. Use type 8\n     * since type 5 is not good at this. */\n    if (type == HI_SDS_TYPE_5 && initlen == 0) type = HI_SDS_TYPE_8;\n    int hdrlen = hi_sdsHdrSize(type);\n    unsigned char *fp; /* flags pointer. */\n\n    if (hdrlen+initlen+1 <= initlen) return NULL; /* Catch size_t overflow */\n    sh = hi_s_malloc(hdrlen+initlen+1);\n    if (sh == NULL) return NULL;\n    if (!init)\n        memset(sh, 0, hdrlen+initlen+1);\n    s = (char*)sh+hdrlen;\n    fp = ((unsigned char*)s)-1;\n    switch(type) {\n        case HI_SDS_TYPE_5: {\n            *fp = type | (initlen << HI_SDS_TYPE_BITS);\n            break;\n        }\n        case HI_SDS_TYPE_8: {\n            HI_SDS_HDR_VAR(8,s);\n            sh->len = initlen;\n            sh->alloc = initlen;\n            *fp = type;\n            break;\n        }\n        case HI_SDS_TYPE_16: {\n            HI_SDS_HDR_VAR(16,s);\n            sh->len = initlen;\n            sh->alloc = initlen;\n            *fp = type;\n            break;\n        }\n        case HI_SDS_TYPE_32: {\n            HI_SDS_HDR_VAR(32,s);\n            sh->len = initlen;\n            sh->alloc = initlen;\n            *fp = type;\n            break;\n        }\n        case HI_SDS_TYPE_64: {\n            HI_SDS_HDR_VAR(64,s);\n            sh->len = initlen;\n            sh->alloc = initlen;\n            *fp = type;\n            break;\n        }\n    }\n    if (initlen && init)\n        memcpy(s, init, initlen);\n    s[initlen] = '\\0';\n    return s;\n}\n\n/* Create an empty (zero length) hisds string. Even in this case the string\n * always has an implicit null term. 
*/\nhisds hi_sdsempty(void) {\n    return hi_sdsnewlen(\"\",0);\n}\n\n/* Create a new hisds string starting from a null terminated C string. */\nhisds hi_sdsnew(const char *init) {\n    size_t initlen = (init == NULL) ? 0 : strlen(init);\n    return hi_sdsnewlen(init, initlen);\n}\n\n/* Duplicate an hisds string. */\nhisds hi_sdsdup(const hisds s) {\n    return hi_sdsnewlen(s, hi_sdslen(s));\n}\n\n/* Free an hisds string. No operation is performed if 's' is NULL. */\nvoid hi_sdsfree(hisds s) {\n    if (s == NULL) return;\n    hi_s_free((char*)s-hi_sdsHdrSize(s[-1]));\n}\n\n/* Set the hisds string length to the length as obtained with strlen(), so\n * considering as content only up to the first null term character.\n *\n * This function is useful when the hisds string is hacked manually in some\n * way, like in the following example:\n *\n * s = hi_sdsnew(\"foobar\");\n * s[2] = '\\0';\n * hi_sdsupdatelen(s);\n * printf(\"%d\\n\", hi_sdslen(s));\n *\n * The output will be \"2\", but if we comment out the call to hi_sdsupdatelen()\n * the output will be \"6\" as the string was modified but the logical length\n * remains 6 bytes. */\nvoid hi_sdsupdatelen(hisds s) {\n    size_t reallen = strlen(s);\n    hi_sdssetlen(s, reallen);\n}\n\n/* Modify an hisds string in-place to make it empty (zero length).\n * However all the existing buffer is not discarded but set as free space\n * so that next append operations will not require allocations up to the\n * number of bytes previously available. */\nvoid hi_sdsclear(hisds s) {\n    hi_sdssetlen(s, 0);\n    s[0] = '\\0';\n}\n\n/* Enlarge the free space at the end of the hisds string so that the caller\n * is sure that after calling this function can overwrite up to addlen\n * bytes after the end of the string, plus one more byte for nul term.\n *\n * Note: this does not change the *length* of the hisds string as returned\n * by hi_sdslen(), but only the free buffer space we have. 
*/\nhisds hi_sdsMakeRoomFor(hisds s, size_t addlen) {\n    void *sh, *newsh;\n    size_t avail = hi_sdsavail(s);\n    size_t len, newlen, reqlen;\n    char type, oldtype = s[-1] & HI_SDS_TYPE_MASK;\n    int hdrlen;\n\n    /* Return ASAP if there is enough space left. */\n    if (avail >= addlen) return s;\n\n    len = hi_sdslen(s);\n    sh = (char*)s-hi_sdsHdrSize(oldtype);\n    reqlen = newlen = (len+addlen);\n    if (newlen <= len) return NULL; /* Catch size_t overflow */\n    if (newlen < HI_SDS_MAX_PREALLOC)\n        newlen *= 2;\n    else\n        newlen += HI_SDS_MAX_PREALLOC;\n\n    type = hi_sdsReqType(newlen);\n\n    /* Don't use type 5: the user is appending to the string and type 5 is\n     * not able to remember empty space, so hi_sdsMakeRoomFor() must be called\n     * at every appending operation. */\n    if (type == HI_SDS_TYPE_5) type = HI_SDS_TYPE_8;\n\n    hdrlen = hi_sdsHdrSize(type);\n    if (hdrlen+newlen+1 <= reqlen) return NULL; /* Catch size_t overflow */\n    if (oldtype==type) {\n        newsh = hi_s_realloc(sh, hdrlen+newlen+1);\n        if (newsh == NULL) return NULL;\n        s = (char*)newsh+hdrlen;\n    } else {\n        /* Since the header size changes, need to move the string forward,\n         * and can't use realloc */\n        newsh = hi_s_malloc(hdrlen+newlen+1);\n        if (newsh == NULL) return NULL;\n        memcpy((char*)newsh+hdrlen, s, len+1);\n        hi_s_free(sh);\n        s = (char*)newsh+hdrlen;\n        s[-1] = type;\n        hi_sdssetlen(s, len);\n    }\n    hi_sdssetalloc(s, newlen);\n    return s;\n}\n\n/* Reallocate the hisds string so that it has no free space at the end. The\n * contained string is not altered, but the next concatenation operation\n * will require a reallocation.\n *\n * After the call, the passed hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. 
*/\nhisds hi_sdsRemoveFreeSpace(hisds s) {\n    void *sh, *newsh;\n    char type, oldtype = s[-1] & HI_SDS_TYPE_MASK;\n    int hdrlen;\n    size_t len = hi_sdslen(s);\n    sh = (char*)s-hi_sdsHdrSize(oldtype);\n\n    type = hi_sdsReqType(len);\n    hdrlen = hi_sdsHdrSize(type);\n    if (oldtype==type) {\n        newsh = hi_s_realloc(sh, hdrlen+len+1);\n        if (newsh == NULL) return NULL;\n        s = (char*)newsh+hdrlen;\n    } else {\n        newsh = hi_s_malloc(hdrlen+len+1);\n        if (newsh == NULL) return NULL;\n        memcpy((char*)newsh+hdrlen, s, len+1);\n        hi_s_free(sh);\n        s = (char*)newsh+hdrlen;\n        s[-1] = type;\n        hi_sdssetlen(s, len);\n    }\n    hi_sdssetalloc(s, len);\n    return s;\n}\n\n/* Return the total size of the allocation of the specified hisds string,\n * including:\n * 1) The hisds header before the pointer.\n * 2) The string.\n * 3) The free buffer at the end if any.\n * 4) The implicit null term.\n */\nsize_t hi_sdsAllocSize(hisds s) {\n    size_t alloc = hi_sdsalloc(s);\n    return hi_sdsHdrSize(s[-1])+alloc+1;\n}\n\n/* Return the pointer of the actual SDS allocation (normally SDS strings\n * are referenced by the start of the string buffer). */\nvoid *hi_sdsAllocPtr(hisds s) {\n    return (void*) (s-hi_sdsHdrSize(s[-1]));\n}\n\n/* Increment the hisds length and decrement the free space left at the\n * end of the string according to 'incr'. 
Also set the null term\n * in the new end of the string.\n *\n * This function is used in order to fix the string length after the\n * user calls hi_sdsMakeRoomFor(), writes something after the end of\n * the current string, and finally needs to set the new length.\n *\n * Note: it is possible to use a negative increment in order to\n * right-trim the string.\n *\n * Usage example:\n *\n * Using hi_sdsIncrLen() and hi_sdsMakeRoomFor() it is possible to use the\n * following pattern, to append bytes coming from the kernel to the end of an\n * hisds string without copying into an intermediate buffer:\n *\n * oldlen = hi_sdslen(s);\n * s = hi_sdsMakeRoomFor(s, BUFFER_SIZE);\n * nread = read(fd, s+oldlen, BUFFER_SIZE);\n * ... check for nread <= 0 and handle it ...\n * hi_sdsIncrLen(s, nread);\n */\nvoid hi_sdsIncrLen(hisds s, int incr) {\n    unsigned char flags = s[-1];\n    size_t len;\n    switch(flags&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5: {\n            unsigned char *fp = ((unsigned char*)s)-1;\n            unsigned char oldlen = HI_SDS_TYPE_5_LEN(flags);\n            assert((incr > 0 && oldlen+incr < 32) || (incr < 0 && oldlen >= (unsigned int)(-incr)));\n            *fp = HI_SDS_TYPE_5 | ((oldlen+incr) << HI_SDS_TYPE_BITS);\n            len = oldlen+incr;\n            break;\n        }\n        case HI_SDS_TYPE_8: {\n            HI_SDS_HDR_VAR(8,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= incr) || (incr < 0 && sh->len >= (unsigned int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case HI_SDS_TYPE_16: {\n            HI_SDS_HDR_VAR(16,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= incr) || (incr < 0 && sh->len >= (unsigned int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case HI_SDS_TYPE_32: {\n            HI_SDS_HDR_VAR(32,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= (unsigned int)incr) || (incr < 0 && sh->len >= (unsigned 
int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case HI_SDS_TYPE_64: {\n            HI_SDS_HDR_VAR(64,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= (uint64_t)incr) || (incr < 0 && sh->len >= (uint64_t)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        default: len = 0; /* Just to avoid compilation warnings. */\n    }\n    s[len] = '\\0';\n}\n\n/* Grow the hisds to have the specified length. Bytes that were not part of\n * the original length of the hisds will be set to zero.\n *\n * If the specified length is smaller than the current length, no operation\n * is performed. */\nhisds hi_sdsgrowzero(hisds s, size_t len) {\n    size_t curlen = hi_sdslen(s);\n\n    if (len <= curlen) return s;\n    s = hi_sdsMakeRoomFor(s,len-curlen);\n    if (s == NULL) return NULL;\n\n    /* Make sure added region doesn't contain garbage */\n    memset(s+curlen,0,(len-curlen+1)); /* also set trailing \\0 byte */\n    hi_sdssetlen(s, len);\n    return s;\n}\n\n/* Append the specified binary-safe string pointed to by 't' of 'len' bytes to\n * the end of the specified hisds string 's'.\n *\n * After the call, the passed hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nhisds hi_sdscatlen(hisds s, const void *t, size_t len) {\n    size_t curlen = hi_sdslen(s);\n\n    s = hi_sdsMakeRoomFor(s,len);\n    if (s == NULL) return NULL;\n    memcpy(s+curlen, t, len);\n    hi_sdssetlen(s, curlen+len);\n    s[curlen+len] = '\\0';\n    return s;\n}\n\n/* Append the specified null terminated C string to the hisds string 's'.\n *\n * After the call, the passed hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. 
*/\nhisds hi_sdscat(hisds s, const char *t) {\n    return hi_sdscatlen(s, t, strlen(t));\n}\n\n/* Append the specified hisds 't' to the existing hisds 's'.\n *\n * After the call, the modified hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nhisds hi_sdscatsds(hisds s, const hisds t) {\n    return hi_sdscatlen(s, t, hi_sdslen(t));\n}\n\n/* Destructively modify the hisds string 's' to hold the specified binary\n * safe string pointed to by 't' of length 'len' bytes. */\nhisds hi_sdscpylen(hisds s, const char *t, size_t len) {\n    if (hi_sdsalloc(s) < len) {\n        s = hi_sdsMakeRoomFor(s,len-hi_sdslen(s));\n        if (s == NULL) return NULL;\n    }\n    memcpy(s, t, len);\n    s[len] = '\\0';\n    hi_sdssetlen(s, len);\n    return s;\n}\n\n/* Like hi_sdscpylen() but 't' must be a null-terminated string so that the length\n * of the string is obtained with strlen(). */\nhisds hi_sdscpy(hisds s, const char *t) {\n    return hi_sdscpylen(s, t, strlen(t));\n}\n\n/* Helper for hi_sdsfromlonglong() doing the actual number -> string\n * conversion. 's' must point to a string with room for at least\n * HI_SDS_LLSTR_SIZE bytes.\n *\n * The function returns the length of the null-terminated string\n * representation stored at 's'. */\n#define HI_SDS_LLSTR_SIZE 21\nint hi_sdsll2str(char *s, long long value) {\n    char *p, aux;\n    unsigned long long v;\n    size_t l;\n\n    /* Generate the string representation, this method produces\n     * a reversed string. */\n    v = (value < 0) ? -value : value;\n    p = s;\n    do {\n        *p++ = '0'+(v%10);\n        v /= 10;\n    } while(v);\n    if (value < 0) *p++ = '-';\n\n    /* Compute length and add null term. */\n    l = p-s;\n    *p = '\\0';\n\n    /* Reverse the string. 
*/\n    p--;\n    while(s < p) {\n        aux = *s;\n        *s = *p;\n        *p = aux;\n        s++;\n        p--;\n    }\n    return l;\n}\n\n/* Identical to hi_sdsll2str(), but for unsigned long long type. */\nint hi_sdsull2str(char *s, unsigned long long v) {\n    char *p, aux;\n    size_t l;\n\n    /* Generate the string representation, this method produces\n     * a reversed string. */\n    p = s;\n    do {\n        *p++ = '0'+(v%10);\n        v /= 10;\n    } while(v);\n\n    /* Compute length and add null term. */\n    l = p-s;\n    *p = '\\0';\n\n    /* Reverse the string. */\n    p--;\n    while(s < p) {\n        aux = *s;\n        *s = *p;\n        *p = aux;\n        s++;\n        p--;\n    }\n    return l;\n}\n\n/* Create an hisds string from a long long value. It is much faster than:\n *\n * hi_sdscatprintf(hi_sdsempty(),\"%lld\\n\", value);\n */\nhisds hi_sdsfromlonglong(long long value) {\n    char buf[HI_SDS_LLSTR_SIZE];\n    int len = hi_sdsll2str(buf,value);\n\n    return hi_sdsnewlen(buf,len);\n}\n\n/* Like hi_sdscatprintf() but takes a va_list instead of being variadic. */\nhisds hi_sdscatvprintf(hisds s, const char *fmt, va_list ap) {\n    va_list cpy;\n    char staticbuf[1024], *buf = staticbuf, *t;\n    size_t buflen = strlen(fmt)*2;\n\n    /* We try to start using a static buffer for speed.\n     * If not possible we revert to heap allocation. */\n    if (buflen > sizeof(staticbuf)) {\n        buf = hi_s_malloc(buflen);\n        if (buf == NULL) return NULL;\n    } else {\n        buflen = sizeof(staticbuf);\n    }\n\n    /* Try with buffers two times bigger every time we fail to\n     * fit the string in the current buffer size. 
*/\n    while(1) {\n        buf[buflen-2] = '\\0';\n        va_copy(cpy,ap);\n        vsnprintf(buf, buflen, fmt, cpy);\n        va_end(cpy);\n        if (buf[buflen-2] != '\\0') {\n            if (buf != staticbuf) hi_s_free(buf);\n            buflen *= 2;\n            buf = hi_s_malloc(buflen);\n            if (buf == NULL) return NULL;\n            continue;\n        }\n        break;\n    }\n\n    /* Finally concat the obtained string to the SDS string and return it. */\n    t = hi_sdscat(s, buf);\n    if (buf != staticbuf) hi_s_free(buf);\n    return t;\n}\n\n/* Append to the hisds string 's' a string obtained using printf-alike format\n * specifier.\n *\n * After the call, the modified hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call.\n *\n * Example:\n *\n * s = hi_sdsnew(\"Sum is: \");\n * s = hi_sdscatprintf(s,\"%d+%d = %d\",a,b,a+b).\n *\n * Often you need to create a string from scratch with the printf-alike\n * format. When this is the need, just use hi_sdsempty() as the target string:\n *\n * s = hi_sdscatprintf(hi_sdsempty(), \"... your format ...\", args);\n */\nhisds hi_sdscatprintf(hisds s, const char *fmt, ...) {\n    va_list ap;\n    char *t;\n    va_start(ap, fmt);\n    t = hi_sdscatvprintf(s,fmt,ap);\n    va_end(ap);\n    return t;\n}\n\n/* This function is similar to hi_sdscatprintf, but much faster as it does\n * not rely on sprintf() family functions implemented by the libc that\n * are often very slow. 
Moreover directly handling the hisds string as\n * new data is concatenated provides a performance improvement.\n *\n * However this function only handles an incompatible subset of printf-alike\n * format specifiers:\n *\n * %s - C String\n * %S - SDS string\n * %i - signed int\n * %I - 64 bit signed integer (long long, int64_t)\n * %u - unsigned int\n * %U - 64 bit unsigned integer (unsigned long long, uint64_t)\n * %% - Verbatim \"%\" character.\n */\nhisds hi_sdscatfmt(hisds s, char const *fmt, ...) {\n    const char *f = fmt;\n    long i;\n    va_list ap;\n\n    va_start(ap,fmt);\n    i = hi_sdslen(s); /* Position of the next byte to write to dest str. */\n    while(*f) {\n        char next, *str;\n        size_t l;\n        long long num;\n        unsigned long long unum;\n\n        /* Make sure there is always space for at least 1 char. */\n        if (hi_sdsavail(s)==0) {\n            s = hi_sdsMakeRoomFor(s,1);\n            if (s == NULL) goto fmt_error;\n        }\n\n        switch(*f) {\n        case '%':\n            next = *(f+1);\n            f++;\n            switch(next) {\n            case 's':\n            case 'S':\n                str = va_arg(ap,char*);\n                l = (next == 's') ? 
strlen(str) : hi_sdslen(str);\n                if (hi_sdsavail(s) < l) {\n                    s = hi_sdsMakeRoomFor(s,l);\n                    if (s == NULL) goto fmt_error;\n                }\n                memcpy(s+i,str,l);\n                hi_sdsinclen(s,l);\n                i += l;\n                break;\n            case 'i':\n            case 'I':\n                if (next == 'i')\n                    num = va_arg(ap,int);\n                else\n                    num = va_arg(ap,long long);\n                {\n                    char buf[HI_SDS_LLSTR_SIZE];\n                    l = hi_sdsll2str(buf,num);\n                    if (hi_sdsavail(s) < l) {\n                        s = hi_sdsMakeRoomFor(s,l);\n                        if (s == NULL) goto fmt_error;\n                    }\n                    memcpy(s+i,buf,l);\n                    hi_sdsinclen(s,l);\n                    i += l;\n                }\n                break;\n            case 'u':\n            case 'U':\n                if (next == 'u')\n                    unum = va_arg(ap,unsigned int);\n                else\n                    unum = va_arg(ap,unsigned long long);\n                {\n                    char buf[HI_SDS_LLSTR_SIZE];\n                    l = hi_sdsull2str(buf,unum);\n                    if (hi_sdsavail(s) < l) {\n                        s = hi_sdsMakeRoomFor(s,l);\n                        if (s == NULL) goto fmt_error;\n                    }\n                    memcpy(s+i,buf,l);\n                    hi_sdsinclen(s,l);\n                    i += l;\n                }\n                break;\n            default: /* Handle %% and generally %<unknown>. 
*/\n                s[i++] = next;\n                hi_sdsinclen(s,1);\n                break;\n            }\n            break;\n        default:\n            s[i++] = *f;\n            hi_sdsinclen(s,1);\n            break;\n        }\n        f++;\n    }\n    va_end(ap);\n\n    /* Add null-term */\n    s[i] = '\\0';\n    return s;\n\nfmt_error:\n    va_end(ap);\n    return NULL;\n}\n\n/* Remove the part of the string from left and from right composed just of\n * contiguous characters found in 'cset', which is a null terminated C string.\n *\n * After the call, the modified hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call.\n *\n * Example:\n *\n * s = hi_sdsnew(\"AA...AA.a.aa.aHelloWorld     :::\");\n * s = hi_sdstrim(s,\"Aa. :\");\n * printf(\"%s\\n\", s);\n *\n * Output will be just \"HelloWorld\".\n */\nhisds hi_sdstrim(hisds s, const char *cset) {\n    char *start, *end, *sp, *ep;\n    size_t len;\n\n    sp = start = s;\n    ep = end = s+hi_sdslen(s)-1;\n    while(sp <= end && strchr(cset, *sp)) sp++;\n    while(ep > sp && strchr(cset, *ep)) ep--;\n    len = (sp > ep) ? 
0 : ((ep-sp)+1);\n    if (s != sp) memmove(s, sp, len);\n    s[len] = '\\0';\n    hi_sdssetlen(s,len);\n    return s;\n}\n\n/* Turn the string into a smaller (or equal) string containing only the\n * substring specified by the 'start' and 'end' indexes.\n *\n * start and end can be negative, where -1 means the last character of the\n * string, -2 the penultimate character, and so forth.\n *\n * The interval is inclusive, so the start and end characters will be part\n * of the resulting string.\n *\n * The string is modified in-place.\n *\n * Return value:\n * -1 (error) if hi_sdslen(s) is larger than maximum positive ssize_t value.\n *  0 on success.\n *\n * Example:\n *\n * s = hi_sdsnew(\"Hello World\");\n * hi_sdsrange(s,1,-1); => \"ello World\"\n */\nint hi_sdsrange(hisds s, ssize_t start, ssize_t end) {\n    size_t newlen, len = hi_sdslen(s);\n    if (len > SSIZE_MAX) return -1;\n\n    if (len == 0) return 0;\n    if (start < 0) {\n        start = len+start;\n        if (start < 0) start = 0;\n    }\n    if (end < 0) {\n        end = len+end;\n        if (end < 0) end = 0;\n    }\n    newlen = (start > end) ? 0 : (end-start)+1;\n    if (newlen != 0) {\n        if (start >= (ssize_t)len) {\n            newlen = 0;\n        } else if (end >= (ssize_t)len) {\n            end = len-1;\n            newlen = (start > end) ? 0 : (end-start)+1;\n        }\n    } else {\n        start = 0;\n    }\n    if (start && newlen) memmove(s, s+start, newlen);\n    s[newlen] = 0;\n    hi_sdssetlen(s,newlen);\n    return 0;\n}\n\n/* Apply tolower() to every character of the sds string 's'. */\nvoid hi_sdstolower(hisds s) {\n    size_t len = hi_sdslen(s), j;\n\n    for (j = 0; j < len; j++) s[j] = tolower(s[j]);\n}\n\n/* Apply toupper() to every character of the sds string 's'. 
*/\nvoid hi_sdstoupper(hisds s) {\n    size_t len = hi_sdslen(s), j;\n\n    for (j = 0; j < len; j++) s[j] = toupper(s[j]);\n}\n\n/* Compare two hisds strings s1 and s2 with memcmp().\n *\n * Return value:\n *\n *     positive if s1 > s2.\n *     negative if s1 < s2.\n *     0 if s1 and s2 are exactly the same binary string.\n *\n * If two strings share exactly the same prefix, but one of the two has\n * additional characters, the longer string is considered to be greater than\n * the shorter one. */\nint hi_sdscmp(const hisds s1, const hisds s2) {\n    size_t l1, l2, minlen;\n    int cmp;\n\n    l1 = hi_sdslen(s1);\n    l2 = hi_sdslen(s2);\n    minlen = (l1 < l2) ? l1 : l2;\n    cmp = memcmp(s1,s2,minlen);\n    /* Don't return the raw size_t difference: it may not fit in an int. */\n    if (cmp == 0) return l1 > l2 ? 1 : (l1 < l2 ? -1 : 0);\n    return cmp;\n}\n\n/* Split 's' with separator in 'sep'. An array\n * of hisds strings is returned. *count will be set\n * by reference to the number of tokens returned.\n *\n * On out of memory, zero length string, zero length\n * separator, NULL is returned.\n *\n * Note that 'sep' is able to split a string using\n * a multi-character separator. For example\n * hi_sdssplit(\"foo_-_bar\",\"_-_\"); will return two\n * elements \"foo\" and \"bar\".\n *\n * This version of the function is binary-safe but\n * requires length arguments. 
hi_sdssplit() is just the\n * same function but for zero-terminated strings.\n */\nhisds *hi_sdssplitlen(const char *s, int len, const char *sep, int seplen, int *count) {\n    int elements = 0, slots = 5, start = 0, j;\n    hisds *tokens;\n\n    if (seplen < 1 || len < 0) return NULL;\n\n    tokens = hi_s_malloc(sizeof(hisds)*slots);\n    if (tokens == NULL) return NULL;\n\n    if (len == 0) {\n        *count = 0;\n        return tokens;\n    }\n    for (j = 0; j < (len-(seplen-1)); j++) {\n        /* make sure there is room for the next element and the final one */\n        if (slots < elements+2) {\n            hisds *newtokens;\n\n            slots *= 2;\n            newtokens = hi_s_realloc(tokens,sizeof(hisds)*slots);\n            if (newtokens == NULL) goto cleanup;\n            tokens = newtokens;\n        }\n        /* search the separator */\n        if ((seplen == 1 && *(s+j) == sep[0]) || (memcmp(s+j,sep,seplen) == 0)) {\n            tokens[elements] = hi_sdsnewlen(s+start,j-start);\n            if (tokens[elements] == NULL) goto cleanup;\n            elements++;\n            start = j+seplen;\n            j = j+seplen-1; /* skip the separator */\n        }\n    }\n    /* Add the final element. We are sure there is room in the tokens array. */\n    tokens[elements] = hi_sdsnewlen(s+start,len-start);\n    if (tokens[elements] == NULL) goto cleanup;\n    elements++;\n    *count = elements;\n    return tokens;\n\ncleanup:\n    {\n        int i;\n        for (i = 0; i < elements; i++) hi_sdsfree(tokens[i]);\n        hi_s_free(tokens);\n        *count = 0;\n        return NULL;\n    }\n}\n\n/* Free the result returned by hi_sdssplitlen(), or do nothing if 'tokens' is NULL. 
*/\nvoid hi_sdsfreesplitres(hisds *tokens, int count) {\n    if (!tokens) return;\n    while(count--)\n        hi_sdsfree(tokens[count]);\n    hi_s_free(tokens);\n}\n\n/* Append to the hisds string \"s\" an escaped string representation where\n * all the non-printable characters (tested with isprint()) are turned into\n * escapes in the form \"\\n\\r\\a....\" or \"\\x<hex-number>\".\n *\n * After the call, the modified hisds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nhisds hi_sdscatrepr(hisds s, const char *p, size_t len) {\n    s = hi_sdscatlen(s,\"\\\"\",1);\n    while(len--) {\n        switch(*p) {\n        case '\\\\':\n        case '\"':\n            s = hi_sdscatprintf(s,\"\\\\%c\",*p);\n            break;\n        case '\\n': s = hi_sdscatlen(s,\"\\\\n\",2); break;\n        case '\\r': s = hi_sdscatlen(s,\"\\\\r\",2); break;\n        case '\\t': s = hi_sdscatlen(s,\"\\\\t\",2); break;\n        case '\\a': s = hi_sdscatlen(s,\"\\\\a\",2); break;\n        case '\\b': s = hi_sdscatlen(s,\"\\\\b\",2); break;\n        default:\n            if (isprint(*p))\n                s = hi_sdscatprintf(s,\"%c\",*p);\n            else\n                s = hi_sdscatprintf(s,\"\\\\x%02x\",(unsigned char)*p);\n            break;\n        }\n        p++;\n    }\n    return hi_sdscatlen(s,\"\\\"\",1);\n}\n\n/* Helper function for hi_sdssplitargs() that converts a hex digit into an\n * integer from 0 to 15 */\nstatic int hi_hex_digit_to_int(char c) {\n    switch(c) {\n    case '0': return 0;\n    case '1': return 1;\n    case '2': return 2;\n    case '3': return 3;\n    case '4': return 4;\n    case '5': return 5;\n    case '6': return 6;\n    case '7': return 7;\n    case '8': return 8;\n    case '9': return 9;\n    case 'a': case 'A': return 10;\n    case 'b': case 'B': return 11;\n    case 'c': case 'C': return 12;\n    case 'd': case 'D': return 13;\n    case 'e': case 'E': return 14;\n    case 'f': 
case 'F': return 15;\n    default: return 0;\n    }\n}\n\n/* Split a line into arguments, where every argument can be in the\n * following programming-language REPL-alike form:\n *\n * foo bar \"newline are supported\\n\" and \"\\xff\\x00otherstuff\"\n *\n * The number of arguments is stored into *argc, and an array\n * of hisds is returned.\n *\n * The caller should free the resulting array of hisds strings with\n * hi_sdsfreesplitres().\n *\n * Note that hi_sdscatrepr() is able to convert back a string into\n * a quoted string in the same format hi_sdssplitargs() is able to parse.\n *\n * The function returns the allocated tokens on success, even when the\n * input string is empty, or NULL if the input contains unbalanced\n * quotes or closed quotes followed by non space characters\n * as in: \"foo\"bar or \"foo'\n */\nhisds *hi_sdssplitargs(const char *line, int *argc) {\n    const char *p = line;\n    char *current = NULL;\n    char **vector = NULL;\n\n    *argc = 0;\n    while(1) {\n        /* skip blanks */\n        while(*p && isspace((int) *p)) p++;\n        if (*p) {\n            /* get a token */\n            int inq=0;  /* set to 1 if we are in \"quotes\" */\n            int insq=0; /* set to 1 if we are in 'single quotes' */\n            int done=0;\n\n            if (current == NULL) current = hi_sdsempty();\n            while(!done) {\n                if (inq) {\n                    if (*p == '\\\\' && *(p+1) == 'x' &&\n                                             isxdigit((int) *(p+2)) &&\n                                             isxdigit((int) *(p+3)))\n                    {\n                        unsigned char byte;\n\n                        byte = (hi_hex_digit_to_int(*(p+2))*16)+\n                                hi_hex_digit_to_int(*(p+3));\n                        current = hi_sdscatlen(current,(char*)&byte,1);\n                        p += 3;\n                    } else if (*p == '\\\\' && *(p+1)) {\n                        char c;\n\n   
                     p++;\n                        switch(*p) {\n                        case 'n': c = '\\n'; break;\n                        case 'r': c = '\\r'; break;\n                        case 't': c = '\\t'; break;\n                        case 'b': c = '\\b'; break;\n                        case 'a': c = '\\a'; break;\n                        default: c = *p; break;\n                        }\n                        current = hi_sdscatlen(current,&c,1);\n                    } else if (*p == '\"') {\n                        /* closing quote must be followed by a space or\n                         * nothing at all. */\n                        if (*(p+1) && !isspace((int) *(p+1))) goto err;\n                        done=1;\n                    } else if (!*p) {\n                        /* unterminated quotes */\n                        goto err;\n                    } else {\n                        current = hi_sdscatlen(current,p,1);\n                    }\n                } else if (insq) {\n                    if (*p == '\\\\' && *(p+1) == '\\'') {\n                        p++;\n                        current = hi_sdscatlen(current,\"'\",1);\n                    } else if (*p == '\\'') {\n                        /* closing quote must be followed by a space or\n                         * nothing at all. 
*/\n                        if (*(p+1) && !isspace((int) *(p+1))) goto err;\n                        done=1;\n                    } else if (!*p) {\n                        /* unterminated quotes */\n                        goto err;\n                    } else {\n                        current = hi_sdscatlen(current,p,1);\n                    }\n                } else {\n                    switch(*p) {\n                    case ' ':\n                    case '\\n':\n                    case '\\r':\n                    case '\\t':\n                    case '\\0':\n                        done=1;\n                        break;\n                    case '\"':\n                        inq=1;\n                        break;\n                    case '\\'':\n                        insq=1;\n                        break;\n                    default:\n                        current = hi_sdscatlen(current,p,1);\n                        break;\n                    }\n                }\n                if (*p) p++;\n            }\n            /* add the token to the vector */\n            {\n                char **new_vector = hi_s_realloc(vector,((*argc)+1)*sizeof(char*));\n                if (new_vector == NULL) {\n                    hi_s_free(vector);\n                    return NULL;\n                }\n\n                vector = new_vector;\n                vector[*argc] = current;\n                (*argc)++;\n                current = NULL;\n            }\n        } else {\n            /* Even on empty input string return something not NULL. 
*/\n            if (vector == NULL) vector = hi_s_malloc(sizeof(void*));\n            return vector;\n        }\n    }\n\nerr:\n    while((*argc)--)\n        hi_sdsfree(vector[*argc]);\n    hi_s_free(vector);\n    if (current) hi_sdsfree(current);\n    *argc = 0;\n    return NULL;\n}\n\n/* Modify the string, substituting every occurrence of a character found in\n * the 'from' string with the corresponding character in the 'to' array.\n *\n * For instance: hi_sdsmapchars(mystring, \"ho\", \"01\", 2)\n * will have the effect of turning the string \"hello\" into \"0ell1\".\n *\n * The function returns the hisds string pointer, which is always the same\n * as the input pointer since no resize is needed. */\nhisds hi_sdsmapchars(hisds s, const char *from, const char *to, size_t setlen) {\n    size_t j, i, l = hi_sdslen(s);\n\n    for (j = 0; j < l; j++) {\n        for (i = 0; i < setlen; i++) {\n            if (s[j] == from[i]) {\n                s[j] = to[i];\n                break;\n            }\n        }\n    }\n    return s;\n}\n\n/* Join an array of C strings using the specified separator (also a C string).\n * Returns the result as an hisds string. */\nhisds hi_sdsjoin(char **argv, int argc, char *sep) {\n    hisds join = hi_sdsempty();\n    int j;\n\n    for (j = 0; j < argc; j++) {\n        join = hi_sdscat(join, argv[j]);\n        if (j != argc-1) join = hi_sdscat(join,sep);\n    }\n    return join;\n}\n\n/* Like hi_sdsjoin, but joins an array of SDS strings. */\nhisds hi_sdsjoinsds(hisds *argv, int argc, const char *sep, size_t seplen) {\n    hisds join = hi_sdsempty();\n    int j;\n\n    for (j = 0; j < argc; j++) {\n        join = hi_sdscatsds(join, argv[j]);\n        if (j != argc-1) join = hi_sdscatlen(join,sep,seplen);\n    }\n    return join;\n}\n\n/* Wrappers to the allocators used by SDS. Note that SDS will actually\n * just use the macros defined in sdsalloc.h in order to avoid paying\n * the overhead of function calls. 
Here we define these wrappers only for\n * the programs SDS is linked to, if they want to touch the SDS internals\n * even if they use a different allocator. */\nvoid *hi_sds_malloc(size_t size) { return hi_s_malloc(size); }\nvoid *hi_sds_realloc(void *ptr, size_t size) { return hi_s_realloc(ptr,size); }\nvoid hi_sds_free(void *ptr) { hi_s_free(ptr); }\n\n#if defined(HI_SDS_TEST_MAIN)\n#include <stdio.h>\n#include \"testhelp.h\"\n#include \"limits.h\"\n\n#define UNUSED(x) (void)(x)\nint hi_sdsTest(void) {\n    {\n        hisds x = hi_sdsnew(\"foo\"), y;\n\n        test_cond(\"Create a string and obtain the length\",\n            hi_sdslen(x) == 3 && memcmp(x,\"foo\\0\",4) == 0)\n\n        hi_sdsfree(x);\n        x = hi_sdsnewlen(\"foo\",2);\n        test_cond(\"Create a string with specified length\",\n            hi_sdslen(x) == 2 && memcmp(x,\"fo\\0\",3) == 0)\n\n        x = hi_sdscat(x,\"bar\");\n        test_cond(\"Strings concatenation\",\n            hi_sdslen(x) == 5 && memcmp(x,\"fobar\\0\",6) == 0);\n\n        x = hi_sdscpy(x,\"a\");\n        test_cond(\"hi_sdscpy() against an originally longer string\",\n            hi_sdslen(x) == 1 && memcmp(x,\"a\\0\",2) == 0)\n\n        x = hi_sdscpy(x,\"xyzxxxxxxxxxxyyyyyyyyyykkkkkkkkkk\");\n        test_cond(\"hi_sdscpy() against an originally shorter string\",\n            hi_sdslen(x) == 33 &&\n            memcmp(x,\"xyzxxxxxxxxxxyyyyyyyyyykkkkkkkkkk\\0\",33) == 0)\n\n        hi_sdsfree(x);\n        x = hi_sdscatprintf(hi_sdsempty(),\"%d\",123);\n        test_cond(\"hi_sdscatprintf() seems working in the base case\",\n            hi_sdslen(x) == 3 && memcmp(x,\"123\\0\",4) == 0)\n\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"--\");\n        x = hi_sdscatfmt(x, \"Hello %s World %I,%I--\", \"Hi!\", LLONG_MIN,LLONG_MAX);\n        test_cond(\"hi_sdscatfmt() seems working in the base case\",\n            hi_sdslen(x) == 60 &&\n            memcmp(x,\"--Hello Hi! 
World -9223372036854775808,\"\n                     \"9223372036854775807--\",60) == 0)\n        printf(\"[%s]\\n\",x);\n\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"--\");\n        x = hi_sdscatfmt(x, \"%u,%U--\", UINT_MAX, ULLONG_MAX);\n        test_cond(\"hi_sdscatfmt() works with unsigned numbers\",\n            hi_sdslen(x) == 35 &&\n            memcmp(x,\"--4294967295,18446744073709551615--\",35) == 0)\n\n        hi_sdsfree(x);\n        x = hi_sdsnew(\" x \");\n        hi_sdstrim(x,\" x\");\n        test_cond(\"hi_sdstrim() works when all chars match\",\n            hi_sdslen(x) == 0)\n\n        hi_sdsfree(x);\n        x = hi_sdsnew(\" x \");\n        hi_sdstrim(x,\" \");\n        test_cond(\"hi_sdstrim() works when a single char remains\",\n            hi_sdslen(x) == 1 && x[0] == 'x')\n\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"xxciaoyyy\");\n        hi_sdstrim(x,\"xy\");\n        test_cond(\"hi_sdstrim() correctly trims characters\",\n            hi_sdslen(x) == 4 && memcmp(x,\"ciao\\0\",5) == 0)\n\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,1,1);\n        test_cond(\"hi_sdsrange(...,1,1)\",\n            hi_sdslen(y) == 1 && memcmp(y,\"i\\0\",2) == 0)\n\n        hi_sdsfree(y);\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,1,-1);\n        test_cond(\"hi_sdsrange(...,1,-1)\",\n            hi_sdslen(y) == 3 && memcmp(y,\"iao\\0\",4) == 0)\n\n        hi_sdsfree(y);\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,-2,-1);\n        test_cond(\"hi_sdsrange(...,-2,-1)\",\n            hi_sdslen(y) == 2 && memcmp(y,\"ao\\0\",3) == 0)\n\n        hi_sdsfree(y);\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,2,1);\n        test_cond(\"hi_sdsrange(...,2,1)\",\n            hi_sdslen(y) == 0 && memcmp(y,\"\\0\",1) == 0)\n\n        hi_sdsfree(y);\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,1,100);\n        test_cond(\"hi_sdsrange(...,1,100)\",\n            hi_sdslen(y) == 3 && memcmp(y,\"iao\\0\",4) == 0)\n\n        hi_sdsfree(y);\n        y = hi_sdsdup(x);\n        hi_sdsrange(y,100,100);\n        test_cond(\"hi_sdsrange(...,100,100)\",\n            hi_sdslen(y) == 0 && memcmp(y,\"\\0\",1) == 0)\n\n        hi_sdsfree(y);\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"foo\");\n        y = hi_sdsnew(\"foa\");\n        test_cond(\"hi_sdscmp(foo,foa)\", hi_sdscmp(x,y) > 0)\n\n        hi_sdsfree(y);\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"bar\");\n        y = hi_sdsnew(\"bar\");\n        test_cond(\"hi_sdscmp(bar,bar)\", hi_sdscmp(x,y) == 0)\n\n        hi_sdsfree(y);\n        hi_sdsfree(x);\n        x = hi_sdsnew(\"aar\");\n        y = hi_sdsnew(\"bar\");\n        test_cond(\"hi_sdscmp(aar,bar)\", hi_sdscmp(x,y) < 0)\n\n        hi_sdsfree(y);\n        hi_sdsfree(x);\n        x = hi_sdsnewlen(\"\\a\\n\\0foo\\r\",7);\n        y = hi_sdscatrepr(hi_sdsempty(),x,hi_sdslen(x));\n        test_cond(\"hi_sdscatrepr(...data...)\",\n            memcmp(y,\"\\\"\\\\a\\\\n\\\\x00foo\\\\r\\\"\",15) == 0)\n\n        {\n            unsigned int oldfree;\n            char *p;\n            int step = 10, j, i;\n\n            hi_sdsfree(x);\n            hi_sdsfree(y);\n            x = hi_sdsnew(\"0\");\n            test_cond(\"hi_sdsnew() free/len buffers\", hi_sdslen(x) == 1 && hi_sdsavail(x) == 0);\n\n            /* Run the test a few times in order to hit the first two\n             * SDS header types. 
*/\n            for (i = 0; i < 10; i++) {\n                int oldlen = hi_sdslen(x);\n                x = hi_sdsMakeRoomFor(x,step);\n                int type = x[-1]&HI_SDS_TYPE_MASK;\n\n                test_cond(\"sdsMakeRoomFor() len\", hi_sdslen(x) == oldlen);\n                if (type != HI_SDS_TYPE_5) {\n                    test_cond(\"hi_sdsMakeRoomFor() free\", hi_sdsavail(x) >= step);\n                    oldfree = hi_sdsavail(x);\n                }\n                p = x+oldlen;\n                for (j = 0; j < step; j++) {\n                    p[j] = 'A'+j;\n                }\n                hi_sdsIncrLen(x,step);\n            }\n            test_cond(\"hi_sdsMakeRoomFor() content\",\n                memcmp(\"0ABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJ\",x,101) == 0);\n            test_cond(\"sdsMakeRoomFor() final length\",hi_sdslen(x)==101);\n\n            hi_sdsfree(x);\n        }\n    }\n    test_report();\n    return 0;\n}\n#endif\n\n#ifdef HI_SDS_TEST_MAIN\nint main(void) {\n    return hi_sdsTest();\n}\n#endif\n"
  },
  {
    "path": "deps/hiredis/sds.h",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2015, Oran Agra\n * Copyright (c) 2015, Redis Labs, Inc\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef HIREDIS_SDS_H\n#define HIREDIS_SDS_H\n\n#define HI_SDS_MAX_PREALLOC (1024*1024)\n#ifdef _MSC_VER\ntypedef long long ssize_t;\n#define SSIZE_MAX (LLONG_MAX >> 1)\n#ifndef __clang__\n#define __attribute__(x)\n#endif\n#endif\n\n#include <sys/types.h>\n#include <stdarg.h>\n#include <stdint.h>\n\ntypedef char *hisds;\n\n/* Note: sdshdr5 is never used, we just access the flags byte directly.\n * However, it is here to document the layout of type 5 SDS strings. 
*/\nstruct __attribute__ ((__packed__)) hisdshdr5 {\n    unsigned char flags; /* 3 lsb of type, and 5 msb of string length */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) hisdshdr8 {\n    uint8_t len; /* used */\n    uint8_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) hisdshdr16 {\n    uint16_t len; /* used */\n    uint16_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) hisdshdr32 {\n    uint32_t len; /* used */\n    uint32_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) hisdshdr64 {\n    uint64_t len; /* used */\n    uint64_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\n\n#define HI_SDS_TYPE_5  0\n#define HI_SDS_TYPE_8  1\n#define HI_SDS_TYPE_16 2\n#define HI_SDS_TYPE_32 3\n#define HI_SDS_TYPE_64 4\n#define HI_SDS_TYPE_MASK 7\n#define HI_SDS_TYPE_BITS 3\n#define HI_SDS_HDR_VAR(T,s) struct hisdshdr##T *sh = (struct hisdshdr##T *)((s)-(sizeof(struct hisdshdr##T)));\n#define HI_SDS_HDR(T,s) ((struct hisdshdr##T *)((s)-(sizeof(struct hisdshdr##T))))\n#define HI_SDS_TYPE_5_LEN(f) ((f)>>HI_SDS_TYPE_BITS)\n\nstatic inline size_t hi_sdslen(const hisds s) {\n    unsigned char flags = s[-1];\n    switch(flags & HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            return HI_SDS_TYPE_5_LEN(flags);\n        case HI_SDS_TYPE_8:\n            return HI_SDS_HDR(8,s)->len;\n        case HI_SDS_TYPE_16:\n            return HI_SDS_HDR(16,s)->len;\n        case HI_SDS_TYPE_32:\n            return HI_SDS_HDR(32,s)->len;\n        case HI_SDS_TYPE_64:\n            return HI_SDS_HDR(64,s)->len;\n    
}\n    return 0;\n}\n\nstatic inline size_t hi_sdsavail(const hisds s) {\n    unsigned char flags = s[-1];\n    switch(flags&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5: {\n            return 0;\n        }\n        case HI_SDS_TYPE_8: {\n            HI_SDS_HDR_VAR(8,s);\n            return sh->alloc - sh->len;\n        }\n        case HI_SDS_TYPE_16: {\n            HI_SDS_HDR_VAR(16,s);\n            return sh->alloc - sh->len;\n        }\n        case HI_SDS_TYPE_32: {\n            HI_SDS_HDR_VAR(32,s);\n            return sh->alloc - sh->len;\n        }\n        case HI_SDS_TYPE_64: {\n            HI_SDS_HDR_VAR(64,s);\n            return sh->alloc - sh->len;\n        }\n    }\n    return 0;\n}\n\nstatic inline void hi_sdssetlen(hisds s, size_t newlen) {\n    unsigned char flags = s[-1];\n    switch(flags&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            {\n                unsigned char *fp = ((unsigned char*)s)-1;\n                *fp = (unsigned char)(HI_SDS_TYPE_5 | (newlen << HI_SDS_TYPE_BITS));\n            }\n            break;\n        case HI_SDS_TYPE_8:\n            HI_SDS_HDR(8,s)->len = (uint8_t)newlen;\n            break;\n        case HI_SDS_TYPE_16:\n            HI_SDS_HDR(16,s)->len = (uint16_t)newlen;\n            break;\n        case HI_SDS_TYPE_32:\n            HI_SDS_HDR(32,s)->len = (uint32_t)newlen;\n            break;\n        case HI_SDS_TYPE_64:\n            HI_SDS_HDR(64,s)->len = (uint64_t)newlen;\n            break;\n    }\n}\n\nstatic inline void hi_sdsinclen(hisds s, size_t inc) {\n    unsigned char flags = s[-1];\n    switch(flags&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            {\n                unsigned char *fp = ((unsigned char*)s)-1;\n                unsigned char newlen = HI_SDS_TYPE_5_LEN(flags)+(unsigned char)inc;\n                *fp = HI_SDS_TYPE_5 | (newlen << HI_SDS_TYPE_BITS);\n            }\n            break;\n        case HI_SDS_TYPE_8:\n            HI_SDS_HDR(8,s)->len += (uint8_t)inc;\n  
          break;\n        case HI_SDS_TYPE_16:\n            HI_SDS_HDR(16,s)->len += (uint16_t)inc;\n            break;\n        case HI_SDS_TYPE_32:\n            HI_SDS_HDR(32,s)->len += (uint32_t)inc;\n            break;\n        case HI_SDS_TYPE_64:\n            HI_SDS_HDR(64,s)->len += (uint64_t)inc;\n            break;\n    }\n}\n\n/* hi_sdsalloc() = hi_sdsavail() + hi_sdslen() */\nstatic inline size_t hi_sdsalloc(const hisds s) {\n    unsigned char flags = s[-1];\n    switch(flags & HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            return HI_SDS_TYPE_5_LEN(flags);\n        case HI_SDS_TYPE_8:\n            return HI_SDS_HDR(8,s)->alloc;\n        case HI_SDS_TYPE_16:\n            return HI_SDS_HDR(16,s)->alloc;\n        case HI_SDS_TYPE_32:\n            return HI_SDS_HDR(32,s)->alloc;\n        case HI_SDS_TYPE_64:\n            return HI_SDS_HDR(64,s)->alloc;\n    }\n    return 0;\n}\n\nstatic inline void hi_sdssetalloc(hisds s, size_t newlen) {\n    unsigned char flags = s[-1];\n    switch(flags&HI_SDS_TYPE_MASK) {\n        case HI_SDS_TYPE_5:\n            /* Nothing to do, this type has no total allocation info. 
*/\n            break;\n        case HI_SDS_TYPE_8:\n            HI_SDS_HDR(8,s)->alloc = (uint8_t)newlen;\n            break;\n        case HI_SDS_TYPE_16:\n            HI_SDS_HDR(16,s)->alloc = (uint16_t)newlen;\n            break;\n        case HI_SDS_TYPE_32:\n            HI_SDS_HDR(32,s)->alloc = (uint32_t)newlen;\n            break;\n        case HI_SDS_TYPE_64:\n            HI_SDS_HDR(64,s)->alloc = (uint64_t)newlen;\n            break;\n    }\n}\n\nhisds hi_sdsnewlen(const void *init, size_t initlen);\nhisds hi_sdsnew(const char *init);\nhisds hi_sdsempty(void);\nhisds hi_sdsdup(const hisds s);\nvoid  hi_sdsfree(hisds s);\nhisds hi_sdsgrowzero(hisds s, size_t len);\nhisds hi_sdscatlen(hisds s, const void *t, size_t len);\nhisds hi_sdscat(hisds s, const char *t);\nhisds hi_sdscatsds(hisds s, const hisds t);\nhisds hi_sdscpylen(hisds s, const char *t, size_t len);\nhisds hi_sdscpy(hisds s, const char *t);\n\nhisds hi_sdscatvprintf(hisds s, const char *fmt, va_list ap);\n#ifdef __GNUC__\nhisds hi_sdscatprintf(hisds s, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\n#else\nhisds hi_sdscatprintf(hisds s, const char *fmt, ...);\n#endif\n\nhisds hi_sdscatfmt(hisds s, char const *fmt, ...);\nhisds hi_sdstrim(hisds s, const char *cset);\nint hi_sdsrange(hisds s, ssize_t start, ssize_t end);\nvoid hi_sdsupdatelen(hisds s);\nvoid hi_sdsclear(hisds s);\nint hi_sdscmp(const hisds s1, const hisds s2);\nhisds *hi_sdssplitlen(const char *s, int len, const char *sep, int seplen, int *count);\nvoid hi_sdsfreesplitres(hisds *tokens, int count);\nvoid hi_sdstolower(hisds s);\nvoid hi_sdstoupper(hisds s);\nhisds hi_sdsfromlonglong(long long value);\nhisds hi_sdscatrepr(hisds s, const char *p, size_t len);\nhisds *hi_sdssplitargs(const char *line, int *argc);\nhisds hi_sdsmapchars(hisds s, const char *from, const char *to, size_t setlen);\nhisds hi_sdsjoin(char **argv, int argc, char *sep);\nhisds hi_sdsjoinsds(hisds *argv, int argc, const char *sep, size_t 
seplen);\n\n/* Low level functions exposed to the user API */\nhisds hi_sdsMakeRoomFor(hisds s, size_t addlen);\nvoid hi_sdsIncrLen(hisds s, int incr);\nhisds hi_sdsRemoveFreeSpace(hisds s);\nsize_t hi_sdsAllocSize(hisds s);\nvoid *hi_sdsAllocPtr(hisds s);\n\n/* Export the allocator used by SDS to the program using SDS.\n * The program SDS is linked to may use a different set of allocators,\n * but may still need to allocate or free memory that SDS will\n * respectively free or allocate. */\nvoid *hi_sds_malloc(size_t size);\nvoid *hi_sds_realloc(void *ptr, size_t size);\nvoid hi_sds_free(void *ptr);\n\n#ifdef REDIS_TEST\nint hi_sdsTest(int argc, char *argv[]);\n#endif\n\n#endif /* HIREDIS_SDS_H */\n"
  },
  {
    "path": "deps/hiredis/sdsalloc.h",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-2015, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2015, Oran Agra\n * Copyright (c) 2015, Redis Labs, Inc\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n/* SDS allocator selection.\n *\n * This file is used in order to change the SDS allocator at compile time.\n * Just define the following defines to what you want to use. 
Also add\n * the include of your alternate allocator if needed (not needed in order\n * to use the default libc allocator). */\n\n#include \"alloc.h\"\n\n#define hi_s_malloc hi_malloc\n#define hi_s_realloc hi_realloc\n#define hi_s_free hi_free\n"
  },
  {
    "path": "deps/hiredis/sdscompat.h",
    "content": "/*\n * Copyright (c) 2020, Michael Grunder <michael dot grunder at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n/*\n * SDS compatibility header.\n *\n * This simple file maps sds types and calls to their unique hiredis symbol names.\n * It's useful when we build Hiredis as a dependency of Redis and want to call\n * Hiredis' sds symbols rather than the ones built into Redis, as the libraries\n * have slightly diverged and could cause hard to track down ABI incompatibility\n * bugs.\n *\n */\n\n#ifndef HIREDIS_SDS_COMPAT\n#define HIREDIS_SDS_COMPAT\n\n#define sds hisds\n\n#define sdslen hi_sdslen\n#define sdsavail hi_sdsavail\n#define sdssetlen hi_sdssetlen\n#define sdsinclen hi_sdsinclen\n#define sdsalloc hi_sdsalloc\n#define sdssetalloc hi_sdssetalloc\n\n#define sdsAllocPtr hi_sdsAllocPtr\n#define sdsAllocSize hi_sdsAllocSize\n#define sdscat hi_sdscat\n#define sdscatfmt hi_sdscatfmt\n#define sdscatlen hi_sdscatlen\n#define sdscatprintf hi_sdscatprintf\n#define sdscatrepr hi_sdscatrepr\n#define sdscatsds hi_sdscatsds\n#define sdscatvprintf hi_sdscatvprintf\n#define sdsclear hi_sdsclear\n#define sdscmp hi_sdscmp\n#define sdscpy hi_sdscpy\n#define sdscpylen hi_sdscpylen\n#define sdsdup hi_sdsdup\n#define sdsempty hi_sdsempty\n#define sds_free hi_sds_free\n#define sdsfree hi_sdsfree\n#define sdsfreesplitres hi_sdsfreesplitres\n#define sdsfromlonglong hi_sdsfromlonglong\n#define sdsgrowzero hi_sdsgrowzero\n#define sdsIncrLen hi_sdsIncrLen\n#define sdsjoin hi_sdsjoin\n#define sdsjoinsds hi_sdsjoinsds\n#define sdsll2str hi_sdsll2str\n#define 
sdsMakeRoomFor hi_sdsMakeRoomFor\n#define sds_malloc hi_sds_malloc\n#define sdsmapchars hi_sdsmapchars\n#define sdsnew hi_sdsnew\n#define sdsnewlen hi_sdsnewlen\n#define sdsrange hi_sdsrange\n#define sds_realloc hi_sds_realloc\n#define sdsRemoveFreeSpace hi_sdsRemoveFreeSpace\n#define sdssplitargs hi_sdssplitargs\n#define sdssplitlen hi_sdssplitlen\n#define sdstolower hi_sdstolower\n#define sdstoupper hi_sdstoupper\n#define sdstrim hi_sdstrim\n#define sdsull2str hi_sdsull2str\n#define sdsupdatelen hi_sdsupdatelen\n\n#endif /* HIREDIS_SDS_COMPAT */\n"
  },
  {
    "path": "deps/hiredis/sockcompat.c",
    "content": "/*\n * Copyright (c) 2019, Marcus Geelnard <m at bitsnbites dot eu>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#define REDIS_SOCKCOMPAT_IMPLEMENTATION\n#include \"sockcompat.h\"\n\n#ifdef _WIN32\nstatic int _wsaErrorToErrno(int err) {\n    switch (err) {\n        case WSAEWOULDBLOCK:\n            return EWOULDBLOCK;\n        case WSAEINPROGRESS:\n            return EINPROGRESS;\n        case WSAEALREADY:\n            return EALREADY;\n        case WSAENOTSOCK:\n            return ENOTSOCK;\n        case WSAEDESTADDRREQ:\n            return EDESTADDRREQ;\n        case WSAEMSGSIZE:\n            return EMSGSIZE;\n        case WSAEPROTOTYPE:\n            return EPROTOTYPE;\n        case WSAENOPROTOOPT:\n            return ENOPROTOOPT;\n        case WSAEPROTONOSUPPORT:\n            return EPROTONOSUPPORT;\n        case WSAEOPNOTSUPP:\n            return EOPNOTSUPP;\n        case WSAEAFNOSUPPORT:\n            return EAFNOSUPPORT;\n        case WSAEADDRINUSE:\n            return EADDRINUSE;\n        case WSAEADDRNOTAVAIL:\n            return EADDRNOTAVAIL;\n        case WSAENETDOWN:\n            return ENETDOWN;\n        case WSAENETUNREACH:\n            return ENETUNREACH;\n        case WSAENETRESET:\n            return ENETRESET;\n        case WSAECONNABORTED:\n            return ECONNABORTED;\n        case WSAECONNRESET:\n            return ECONNRESET;\n        case WSAENOBUFS:\n            return ENOBUFS;\n        case WSAEISCONN:\n            return EISCONN;\n        case WSAENOTCONN:\n            return ENOTCONN;\n        case 
WSAETIMEDOUT:\n            return ETIMEDOUT;\n        case WSAECONNREFUSED:\n            return ECONNREFUSED;\n        case WSAELOOP:\n            return ELOOP;\n        case WSAENAMETOOLONG:\n            return ENAMETOOLONG;\n        case WSAEHOSTUNREACH:\n            return EHOSTUNREACH;\n        case WSAENOTEMPTY:\n            return ENOTEMPTY;\n        default:\n            /* We just return a generic I/O error if we could not find a relevant error. */\n            return EIO;\n    }\n}\n\nstatic void _updateErrno(int success) {\n    errno = success ? 0 : _wsaErrorToErrno(WSAGetLastError());\n}\n\nstatic int _initWinsock() {\n    static int s_initialized = 0;\n    if (!s_initialized) {\n        static WSADATA wsadata;\n        int err = WSAStartup(MAKEWORD(2,2), &wsadata);\n        if (err != 0) {\n            errno = _wsaErrorToErrno(err);\n            return 0;\n        }\n        s_initialized = 1;\n    }\n    return 1;\n}\n\nint win32_getaddrinfo(const char *node, const char *service, const struct addrinfo *hints, struct addrinfo **res) {\n    /* Note: This function is likely to be called before other functions, so run init here. 
*/\n    if (!_initWinsock()) {\n        return EAI_FAIL;\n    }\n\n    switch (getaddrinfo(node, service, hints, res)) {\n        case 0:                     return 0;\n        case WSATRY_AGAIN:          return EAI_AGAIN;\n        case WSAEINVAL:             return EAI_BADFLAGS;\n        case WSAEAFNOSUPPORT:       return EAI_FAMILY;\n        case WSA_NOT_ENOUGH_MEMORY: return EAI_MEMORY;\n        case WSAHOST_NOT_FOUND:     return EAI_NONAME;\n        case WSATYPE_NOT_FOUND:     return EAI_SERVICE;\n        case WSAESOCKTNOSUPPORT:    return EAI_SOCKTYPE;\n        default:                    return EAI_FAIL;     /* Including WSANO_RECOVERY */\n    }\n}\n\nconst char *win32_gai_strerror(int errcode) {\n    switch (errcode) {\n        case 0:            errcode = 0;                     break;\n        case EAI_AGAIN:    errcode = WSATRY_AGAIN;          break;\n        case EAI_BADFLAGS: errcode = WSAEINVAL;             break;\n        case EAI_FAMILY:   errcode = WSAEAFNOSUPPORT;       break;\n        case EAI_MEMORY:   errcode = WSA_NOT_ENOUGH_MEMORY; break;\n        case EAI_NONAME:   errcode = WSAHOST_NOT_FOUND;     break;\n        case EAI_SERVICE:  errcode = WSATYPE_NOT_FOUND;     break;\n        case EAI_SOCKTYPE: errcode = WSAESOCKTNOSUPPORT;    break;\n        default:           errcode = WSANO_RECOVERY;        break; /* Including EAI_FAIL */\n    }\n    return gai_strerror(errcode);\n}\n\nvoid win32_freeaddrinfo(struct addrinfo *res) {\n    freeaddrinfo(res);\n}\n\nSOCKET win32_socket(int domain, int type, int protocol) {\n    SOCKET s;\n\n    /* Note: This function is likely to be called before other functions, so run init here. 
*/\n    if (!_initWinsock()) {\n        return INVALID_SOCKET;\n    }\n\n    _updateErrno((s = socket(domain, type, protocol)) != INVALID_SOCKET);\n    return s;\n}\n\nint win32_ioctl(SOCKET fd, unsigned long request, unsigned long *argp) {\n    int ret = ioctlsocket(fd, (long)request, argp);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_bind(SOCKET sockfd, const struct sockaddr *addr, socklen_t addrlen) {\n    int ret = bind(sockfd, addr, addrlen);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_connect(SOCKET sockfd, const struct sockaddr *addr, socklen_t addrlen) {\n    int ret = connect(sockfd, addr, addrlen);\n    _updateErrno(ret != SOCKET_ERROR);\n\n    /* For Winsock connect(), the WSAEWOULDBLOCK error means the same thing as\n     * EINPROGRESS for POSIX connect(), so we do that translation to keep POSIX\n     * logic consistent.\n     * Additionally, WSAEALREADY may be reported as WSAEINVAL, which our generic\n     * error mapping turns into EIO, so we translate EIO back to EALREADY here.\n     */\n    int err = errno;\n    if (err == EWOULDBLOCK) {\n        errno = EINPROGRESS;\n    }\n    else if (err == EIO) {\n        errno = EALREADY;\n    }\n\n    return ret != SOCKET_ERROR ? 
ret : -1;\n}\n\nint win32_getsockopt(SOCKET sockfd, int level, int optname, void *optval, socklen_t *optlen) {\n    int ret = 0;\n    if ((level == SOL_SOCKET) && ((optname == SO_RCVTIMEO) || (optname == SO_SNDTIMEO))) {\n        if (*optlen >= sizeof (struct timeval)) {\n            struct timeval *tv = optval;\n            DWORD timeout = 0;\n            socklen_t dwlen = 0;\n            ret = getsockopt(sockfd, level, optname, (char *)&timeout, &dwlen);\n            tv->tv_sec = timeout / 1000;\n            tv->tv_usec = (timeout * 1000) % 1000000;\n        } else {\n            ret = WSAEFAULT;\n        }\n        *optlen = sizeof (struct timeval);\n    } else {\n        ret = getsockopt(sockfd, level, optname, (char*)optval, optlen);\n    }\n    if (ret != SOCKET_ERROR && level == SOL_SOCKET && optname == SO_ERROR) {\n        /* translate SO_ERROR codes, if non-zero */\n        int err = *(int*)optval;\n        if (err != 0) {\n            err = _wsaErrorToErrno(err);\n            *(int*)optval = err;\n        }\n    }\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_setsockopt(SOCKET sockfd, int level, int optname, const void *optval, socklen_t optlen) {\n    int ret = 0;\n    if ((level == SOL_SOCKET) && ((optname == SO_RCVTIMEO) || (optname == SO_SNDTIMEO))) {\n        const struct timeval *tv = optval;\n        DWORD timeout = tv->tv_sec * 1000 + tv->tv_usec / 1000;\n        ret = setsockopt(sockfd, level, optname, (const char*)&timeout, sizeof(DWORD));\n    } else {\n        ret = setsockopt(sockfd, level, optname, (const char*)optval, optlen);\n    }\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_close(SOCKET fd) {\n    int ret = closesocket(fd);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? 
ret : -1;\n}\n\nssize_t win32_recv(SOCKET sockfd, void *buf, size_t len, int flags) {\n    int ret = recv(sockfd, (char*)buf, (int)len, flags);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nssize_t win32_send(SOCKET sockfd, const void *buf, size_t len, int flags) {\n    int ret = send(sockfd, (const char*)buf, (int)len, flags);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_poll(struct pollfd *fds, nfds_t nfds, int timeout) {\n    int ret = WSAPoll(fds, nfds, timeout);\n    _updateErrno(ret != SOCKET_ERROR);\n    return ret != SOCKET_ERROR ? ret : -1;\n}\n\nint win32_redisKeepAlive(SOCKET sockfd, int interval_ms) {\n    struct tcp_keepalive cfg;\n    DWORD bytes_in;\n    int res;\n\n    cfg.onoff = 1;\n    cfg.keepaliveinterval = interval_ms;\n    cfg.keepalivetime = interval_ms;\n\n    res = WSAIoctl(sockfd, SIO_KEEPALIVE_VALS, &cfg,\n                   sizeof(struct tcp_keepalive), NULL, 0,\n                   &bytes_in, NULL, NULL);\n\n    return res == 0 ? 0 : _wsaErrorToErrno(res);\n}\n\n#endif /* _WIN32 */\n"
  },
  {
    "path": "deps/hiredis/sockcompat.h",
    "content": "/*\n * Copyright (c) 2019, Marcus Geelnard <m at bitsnbites dot eu>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __SOCKCOMPAT_H\n#define __SOCKCOMPAT_H\n\n#ifndef _WIN32\n/* For POSIX systems we use the standard BSD socket API. 
*/\n#include <unistd.h>\n#include <sys/socket.h>\n#include <sys/select.h>\n#include <sys/un.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <arpa/inet.h>\n#include <netdb.h>\n#include <poll.h>\n#else\n/* For Windows we use winsock. */\n#undef _WIN32_WINNT\n#define _WIN32_WINNT 0x0600 /* To get WSAPoll etc. */\n#include <winsock2.h>\n#include <ws2tcpip.h>\n#include <stddef.h>\n#include <errno.h>\n#include <mstcpip.h>\n\n#ifdef _MSC_VER\ntypedef long long ssize_t;\n#endif\n\n/* Emulate the parts of the BSD socket API that we need (override the winsock signatures). */\nint win32_getaddrinfo(const char *node, const char *service, const struct addrinfo *hints, struct addrinfo **res);\nconst char *win32_gai_strerror(int errcode);\nvoid win32_freeaddrinfo(struct addrinfo *res);\nSOCKET win32_socket(int domain, int type, int protocol);\nint win32_ioctl(SOCKET fd, unsigned long request, unsigned long *argp);\nint win32_bind(SOCKET sockfd, const struct sockaddr *addr, socklen_t addrlen);\nint win32_connect(SOCKET sockfd, const struct sockaddr *addr, socklen_t addrlen);\nint win32_getsockopt(SOCKET sockfd, int level, int optname, void *optval, socklen_t *optlen);\nint win32_setsockopt(SOCKET sockfd, int level, int optname, const void *optval, socklen_t optlen);\nint win32_close(SOCKET fd);\nssize_t win32_recv(SOCKET sockfd, void *buf, size_t len, int flags);\nssize_t win32_send(SOCKET sockfd, const void *buf, size_t len, int flags);\ntypedef ULONG nfds_t;\nint win32_poll(struct pollfd *fds, nfds_t nfds, int timeout);\n\nint win32_redisKeepAlive(SOCKET sockfd, int interval_ms);\n\n#ifndef REDIS_SOCKCOMPAT_IMPLEMENTATION\n#define getaddrinfo(node, service, hints, res) win32_getaddrinfo(node, service, hints, res)\n#undef gai_strerror\n#define gai_strerror(errcode) win32_gai_strerror(errcode)\n#define freeaddrinfo(res) win32_freeaddrinfo(res)\n#define socket(domain, type, protocol) win32_socket(domain, type, protocol)\n#define ioctl(fd, request, argp) 
win32_ioctl(fd, request, argp)\n#define bind(sockfd, addr, addrlen) win32_bind(sockfd, addr, addrlen)\n#define connect(sockfd, addr, addrlen) win32_connect(sockfd, addr, addrlen)\n#define getsockopt(sockfd, level, optname, optval, optlen) win32_getsockopt(sockfd, level, optname, optval, optlen)\n#define setsockopt(sockfd, level, optname, optval, optlen) win32_setsockopt(sockfd, level, optname, optval, optlen)\n#define close(fd) win32_close(fd)\n#define recv(sockfd, buf, len, flags) win32_recv(sockfd, buf, len, flags)\n#define send(sockfd, buf, len, flags) win32_send(sockfd, buf, len, flags)\n#define poll(fds, nfds, timeout) win32_poll(fds, nfds, timeout)\n#endif /* REDIS_SOCKCOMPAT_IMPLEMENTATION */\n#endif /* _WIN32 */\n\n#endif /* __SOCKCOMPAT_H */\n"
  },
  {
    "path": "deps/hiredis/ssl.c",
    "content": "/*\n * Copyright (c) 2009-2011, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2011, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2019, Redis Labs\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"hiredis.h\"\n#include \"async.h\"\n#include \"net.h\"\n\n#include <assert.h>\n#include <errno.h>\n#include <string.h>\n#ifdef _WIN32\n#include <windows.h>\n#include <wincrypt.h>\n#ifdef OPENSSL_IS_BORINGSSL\n#undef X509_NAME\n#undef X509_EXTENSIONS\n#undef PKCS7_ISSUER_AND_SERIAL\n#undef PKCS7_SIGNER_INFO\n#undef OCSP_REQUEST\n#undef OCSP_RESPONSE\n#endif\n#else\n#include <pthread.h>\n#endif\n\n#include <openssl/ssl.h>\n#include <openssl/err.h>\n\n#include \"win32.h\"\n#include \"async_private.h\"\n#include \"hiredis_ssl.h\"\n\n#define OPENSSL_1_1_0 0x10100000L\n\nvoid __redisSetError(redisContext *c, int type, const char *str);\n\nstruct redisSSLContext {\n    /* Associated OpenSSL SSL_CTX as created by redisCreateSSLContext() */\n    SSL_CTX *ssl_ctx;\n\n    /* Requested SNI, or NULL */\n    char *server_name;\n};\n\n/* The SSL connection context is attached to SSL/TLS connections as a privdata. */\ntypedef struct redisSSL {\n    /**\n     * OpenSSL SSL object.\n     */\n    SSL *ssl;\n\n    /**\n     * SSL_write() requires to be called again with the same arguments it was\n     * previously called with in the event of an SSL_read/SSL_write situation\n     */\n    size_t lastLen;\n\n    /** Whether the SSL layer requires read (possibly before a write) */\n    int wantRead;\n\n    /**\n     * Whether a write was requested prior to a read. 
If set, the write()\n     * should resume whenever a read takes place, if possible\n     */\n    int pendingWrite;\n} redisSSL;\n\n/* Forward declaration */\nredisContextFuncs redisContextSSLFuncs;\n\n/**\n * OpenSSL global initialization and locking handling callbacks.\n * Note that this is only required for OpenSSL < 1.1.0.\n */\n\n#if OPENSSL_VERSION_NUMBER < OPENSSL_1_1_0\n#define HIREDIS_USE_CRYPTO_LOCKS\n#endif\n\n#ifdef HIREDIS_USE_CRYPTO_LOCKS\n#ifdef _WIN32\ntypedef CRITICAL_SECTION sslLockType;\nstatic void sslLockInit(sslLockType* l) {\n    InitializeCriticalSection(l);\n}\nstatic void sslLockAcquire(sslLockType* l) {\n    EnterCriticalSection(l);\n}\nstatic void sslLockRelease(sslLockType* l) {\n    LeaveCriticalSection(l);\n}\n#else\ntypedef pthread_mutex_t sslLockType;\nstatic void sslLockInit(sslLockType *l) {\n    pthread_mutex_init(l, NULL);\n}\nstatic void sslLockAcquire(sslLockType *l) {\n    pthread_mutex_lock(l);\n}\nstatic void sslLockRelease(sslLockType *l) {\n    pthread_mutex_unlock(l);\n}\n#endif\n\nstatic sslLockType* ossl_locks;\n\nstatic void opensslDoLock(int mode, int lkid, const char *f, int line) {\n    sslLockType *l = ossl_locks + lkid;\n\n    if (mode & CRYPTO_LOCK) {\n        sslLockAcquire(l);\n    } else {\n        sslLockRelease(l);\n    }\n\n    (void)f;\n    (void)line;\n}\n\nstatic int initOpensslLocks(void) {\n    unsigned ii, nlocks;\n    if (CRYPTO_get_locking_callback() != NULL) {\n        /* Someone already set the callback before us. Don't destroy it! 
*/\n        return REDIS_OK;\n    }\n    nlocks = CRYPTO_num_locks();\n    ossl_locks = hi_malloc(sizeof(*ossl_locks) * nlocks);\n    if (ossl_locks == NULL)\n        return REDIS_ERR;\n\n    for (ii = 0; ii < nlocks; ii++) {\n        sslLockInit(ossl_locks + ii);\n    }\n    CRYPTO_set_locking_callback(opensslDoLock);\n    return REDIS_OK;\n}\n#endif /* HIREDIS_USE_CRYPTO_LOCKS */\n\nint redisInitOpenSSL(void)\n{\n    SSL_library_init();\n#ifdef HIREDIS_USE_CRYPTO_LOCKS\n    initOpensslLocks();\n#endif\n\n    return REDIS_OK;\n}\n\n/**\n * redisSSLContext helper context destruction.\n */\n\nconst char *redisSSLContextGetError(redisSSLContextError error)\n{\n    switch (error) {\n        case REDIS_SSL_CTX_NONE:\n            return \"No Error\";\n        case REDIS_SSL_CTX_CREATE_FAILED:\n            return \"Failed to create OpenSSL SSL_CTX\";\n        case REDIS_SSL_CTX_CERT_KEY_REQUIRED:\n            return \"Client cert and key must both be specified or skipped\";\n        case REDIS_SSL_CTX_CA_CERT_LOAD_FAILED:\n            return \"Failed to load CA Certificate or CA Path\";\n        case REDIS_SSL_CTX_CLIENT_CERT_LOAD_FAILED:\n            return \"Failed to load client certificate\";\n        case REDIS_SSL_CTX_PRIVATE_KEY_LOAD_FAILED:\n            return \"Failed to load private key\";\n        case REDIS_SSL_CTX_OS_CERTSTORE_OPEN_FAILED:\n            return \"Failed to open system certificate store\";\n        case REDIS_SSL_CTX_OS_CERT_ADD_FAILED:\n            return \"Failed to add CA certificates obtained from system to the SSL context\";\n        default:\n            return \"Unknown error code\";\n    }\n}\n\nvoid redisFreeSSLContext(redisSSLContext *ctx)\n{\n    if (!ctx)\n        return;\n\n    if (ctx->server_name) {\n        hi_free(ctx->server_name);\n        ctx->server_name = NULL;\n    }\n\n    if (ctx->ssl_ctx) {\n        SSL_CTX_free(ctx->ssl_ctx);\n        ctx->ssl_ctx = NULL;\n    }\n\n    hi_free(ctx);\n}\n\n\n/**\n * redisSSLContext 
helper context initialization.\n */\n\nredisSSLContext *redisCreateSSLContext(const char *cacert_filename, const char *capath,\n        const char *cert_filename, const char *private_key_filename,\n        const char *server_name, redisSSLContextError *error)\n{\n    redisSSLOptions options = {\n        .cacert_filename = cacert_filename,\n        .capath = capath,\n        .cert_filename = cert_filename,\n        .private_key_filename = private_key_filename,\n        .server_name = server_name,\n        .verify_mode = REDIS_SSL_VERIFY_PEER,\n    };\n\n    return redisCreateSSLContextWithOptions(&options, error);\n}\n\nredisSSLContext *redisCreateSSLContextWithOptions(redisSSLOptions *options, redisSSLContextError *error) {\n    const char *cacert_filename = options->cacert_filename;\n    const char *capath = options->capath;\n    const char *cert_filename = options->cert_filename;\n    const char *private_key_filename = options->private_key_filename;\n    const char *server_name = options->server_name;\n\n#ifdef _WIN32\n    HCERTSTORE win_store = NULL;\n    PCCERT_CONTEXT win_ctx = NULL;\n#endif\n\n    redisSSLContext *ctx = hi_calloc(1, sizeof(redisSSLContext));\n    if (ctx == NULL)\n        goto error;\n\n    const SSL_METHOD *ssl_method;\n#if OPENSSL_VERSION_NUMBER >= OPENSSL_1_1_0\n    ssl_method = TLS_client_method();\n#else\n    ssl_method = SSLv23_client_method();\n#endif\n\n    ctx->ssl_ctx = SSL_CTX_new(ssl_method);\n    if (!ctx->ssl_ctx) {\n        if (error) *error = REDIS_SSL_CTX_CREATE_FAILED;\n        goto error;\n    }\n\n#if OPENSSL_VERSION_NUMBER >= OPENSSL_1_1_0\n    SSL_CTX_set_min_proto_version(ctx->ssl_ctx, TLS1_2_VERSION);\n#else\n    SSL_CTX_set_options(ctx->ssl_ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1);\n#endif\n\n    SSL_CTX_set_verify(ctx->ssl_ctx, options->verify_mode, NULL);\n\n    if ((cert_filename != NULL && private_key_filename == NULL) ||\n            (private_key_filename != NULL && 
cert_filename == NULL)) {\n        if (error) *error = REDIS_SSL_CTX_CERT_KEY_REQUIRED;\n        goto error;\n    }\n\n    if (capath || cacert_filename) {\n#ifdef _WIN32\n        if (0 == strcmp(cacert_filename, \"wincert\")) {\n            win_store = CertOpenSystemStore(NULL, \"Root\");\n            if (!win_store) {\n                if (error) *error = REDIS_SSL_CTX_OS_CERTSTORE_OPEN_FAILED;\n                goto error;\n            }\n            X509_STORE* store = SSL_CTX_get_cert_store(ctx->ssl_ctx);\n            while (win_ctx = CertEnumCertificatesInStore(win_store, win_ctx)) {\n                X509* x509 = NULL;\n                x509 = d2i_X509(NULL, (const unsigned char**)&win_ctx->pbCertEncoded, win_ctx->cbCertEncoded);\n                if (x509) {\n                    if ((1 != X509_STORE_add_cert(store, x509)) ||\n                        (1 != SSL_CTX_add_client_CA(ctx->ssl_ctx, x509)))\n                    {\n                        if (error) *error = REDIS_SSL_CTX_OS_CERT_ADD_FAILED;\n                        goto error;\n                    }\n                    X509_free(x509);\n                }\n            }\n            CertFreeCertificateContext(win_ctx);\n            CertCloseStore(win_store, 0);\n        } else\n#endif\n        if (!SSL_CTX_load_verify_locations(ctx->ssl_ctx, cacert_filename, capath)) {\n            if (error) *error = REDIS_SSL_CTX_CA_CERT_LOAD_FAILED;\n            goto error;\n        }\n    } else {\n        if (!SSL_CTX_set_default_verify_paths(ctx->ssl_ctx)) {\n            if (error) *error = REDIS_SSL_CTX_CLIENT_DEFAULT_CERT_FAILED;\n            goto error;\n        }\n    }\n\n    if (cert_filename) {\n        if (!SSL_CTX_use_certificate_chain_file(ctx->ssl_ctx, cert_filename)) {\n            if (error) *error = REDIS_SSL_CTX_CLIENT_CERT_LOAD_FAILED;\n            goto error;\n        }\n        if (!SSL_CTX_use_PrivateKey_file(ctx->ssl_ctx, private_key_filename, SSL_FILETYPE_PEM)) {\n            if (error) *error 
= REDIS_SSL_CTX_PRIVATE_KEY_LOAD_FAILED;\n            goto error;\n        }\n    }\n\n    if (server_name)\n        ctx->server_name = hi_strdup(server_name);\n\n    return ctx;\n\nerror:\n#ifdef _WIN32\n    CertFreeCertificateContext(win_ctx);\n    CertCloseStore(win_store, 0);\n#endif\n    redisFreeSSLContext(ctx);\n    return NULL;\n}\n\n/**\n * SSL Connection initialization.\n */\n\n\nstatic int redisSSLConnect(redisContext *c, SSL *ssl) {\n    if (c->privctx) {\n        __redisSetError(c, REDIS_ERR_OTHER, \"redisContext was already associated\");\n        return REDIS_ERR;\n    }\n\n    redisSSL *rssl = hi_calloc(1, sizeof(redisSSL));\n    if (rssl == NULL) {\n        __redisSetError(c, REDIS_ERR_OOM, \"Out of memory\");\n        return REDIS_ERR;\n    }\n\n    c->funcs = &redisContextSSLFuncs;\n    rssl->ssl = ssl;\n\n    SSL_set_mode(rssl->ssl, SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);\n    SSL_set_fd(rssl->ssl, c->fd);\n    SSL_set_connect_state(rssl->ssl);\n\n    ERR_clear_error();\n    int rv = SSL_connect(rssl->ssl);\n    if (rv == 1) {\n        c->privctx = rssl;\n        return REDIS_OK;\n    }\n\n    rv = SSL_get_error(rssl->ssl, rv);\n    if (((c->flags & REDIS_BLOCK) == 0) &&\n        (rv == SSL_ERROR_WANT_READ || rv == SSL_ERROR_WANT_WRITE)) {\n        c->privctx = rssl;\n        return REDIS_OK;\n    }\n\n    if (c->err == 0) {\n        char err[512];\n        if (rv == SSL_ERROR_SYSCALL)\n            snprintf(err,sizeof(err)-1,\"SSL_connect failed: %s\",strerror(errno));\n        else {\n            unsigned long e = ERR_peek_last_error();\n            snprintf(err,sizeof(err)-1,\"SSL_connect failed: %s\",\n                    ERR_reason_error_string(e));\n        }\n        __redisSetError(c, REDIS_ERR_IO, err);\n    }\n\n    hi_free(rssl);\n    return REDIS_ERR;\n}\n\n/**\n * A wrapper around redisSSLConnect() for users who manage their own context and\n * create their own SSL object.\n */\n\nint redisInitiateSSL(redisContext *c, SSL *ssl) {\n    
return redisSSLConnect(c, ssl);\n}\n\n/**\n * A wrapper around redisSSLConnect() for users who use redisSSLContext and don't\n * manage their own SSL objects.\n */\n\nint redisInitiateSSLWithContext(redisContext *c, redisSSLContext *redis_ssl_ctx)\n{\n    if (!c || !redis_ssl_ctx)\n        return REDIS_ERR;\n\n    /* We want to verify that redisSSLConnect() won't fail on this, as it will\n     * not own the SSL object in that case and we'll end up leaking.\n     */\n    if (c->privctx)\n        return REDIS_ERR;\n\n    SSL *ssl = SSL_new(redis_ssl_ctx->ssl_ctx);\n    if (!ssl) {\n        __redisSetError(c, REDIS_ERR_OTHER, \"Couldn't create new SSL instance\");\n        goto error;\n    }\n\n    if (redis_ssl_ctx->server_name) {\n        if (!SSL_set_tlsext_host_name(ssl, redis_ssl_ctx->server_name)) {\n            __redisSetError(c, REDIS_ERR_OTHER, \"Failed to set server_name/SNI\");\n            goto error;\n        }\n    }\n\n    if (redisSSLConnect(c, ssl) != REDIS_OK) {\n        goto error;\n    }\n\n    return REDIS_OK;\n\nerror:\n    if (ssl)\n        SSL_free(ssl);\n    return REDIS_ERR;\n}\n\nstatic int maybeCheckWant(redisSSL *rssl, int rv) {\n    /**\n     * If the error is WANT_READ or WANT_WRITE, the appropriate flags are set\n     * and true is returned. 
False is returned otherwise\n     */\n    if (rv == SSL_ERROR_WANT_READ) {\n        rssl->wantRead = 1;\n        return 1;\n    } else if (rv == SSL_ERROR_WANT_WRITE) {\n        rssl->pendingWrite = 1;\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/**\n * Implementation of redisContextFuncs for SSL connections.\n */\n\nstatic void redisSSLFree(void *privctx){\n    redisSSL *rsc = privctx;\n\n    if (!rsc) return;\n    if (rsc->ssl) {\n        SSL_free(rsc->ssl);\n        rsc->ssl = NULL;\n    }\n    hi_free(rsc);\n}\n\nstatic ssize_t redisSSLRead(redisContext *c, char *buf, size_t bufcap) {\n    redisSSL *rssl = c->privctx;\n\n    int nread = SSL_read(rssl->ssl, buf, bufcap);\n    if (nread > 0) {\n        return nread;\n    } else if (nread == 0) {\n        __redisSetError(c, REDIS_ERR_EOF, \"Server closed the connection\");\n        return -1;\n    } else {\n        int err = SSL_get_error(rssl->ssl, nread);\n        if (c->flags & REDIS_BLOCK) {\n            /**\n             * In blocking mode, we should never end up in a situation where\n             * we get an error without it being an actual error, except\n             * in the case of EINTR, which can be spuriously received from\n             * debuggers or whatever.\n             */\n            if (errno == EINTR) {\n                return 0;\n            } else {\n                const char *msg = NULL;\n                if (errno == EAGAIN) {\n                    msg = \"Resource temporarily unavailable\";\n                }\n                __redisSetError(c, REDIS_ERR_IO, msg);\n                return -1;\n            }\n        }\n\n        /**\n         * We can very well get an EWOULDBLOCK/EAGAIN, however\n         */\n        if (maybeCheckWant(rssl, err)) {\n            return 0;\n        } else {\n            __redisSetError(c, REDIS_ERR_IO, NULL);\n            return -1;\n        }\n    }\n}\n\nstatic ssize_t redisSSLWrite(redisContext *c) {\n    redisSSL *rssl = 
c->privctx;\n\n    size_t len = rssl->lastLen ? rssl->lastLen : hi_sdslen(c->obuf);\n    int rv = SSL_write(rssl->ssl, c->obuf, len);\n\n    if (rv > 0) {\n        rssl->lastLen = 0;\n    } else if (rv < 0) {\n        rssl->lastLen = len;\n\n        int err = SSL_get_error(rssl->ssl, rv);\n        if ((c->flags & REDIS_BLOCK) == 0 && maybeCheckWant(rssl, err)) {\n            return 0;\n        } else {\n            __redisSetError(c, REDIS_ERR_IO, NULL);\n            return -1;\n        }\n    }\n    return rv;\n}\n\nstatic void redisSSLAsyncRead(redisAsyncContext *ac) {\n    int rv;\n    redisSSL *rssl = ac->c.privctx;\n    redisContext *c = &ac->c;\n\n    rssl->wantRead = 0;\n\n    if (rssl->pendingWrite) {\n        int done;\n\n        /* This is probably just a write event */\n        rssl->pendingWrite = 0;\n        rv = redisBufferWrite(c, &done);\n        if (rv == REDIS_ERR) {\n            __redisAsyncDisconnect(ac);\n            return;\n        } else if (!done) {\n            _EL_ADD_WRITE(ac);\n        }\n    }\n\n    rv = redisBufferRead(c);\n    if (rv == REDIS_ERR) {\n        __redisAsyncDisconnect(ac);\n    } else {\n        _EL_ADD_READ(ac);\n        redisProcessCallbacks(ac);\n    }\n}\n\nstatic void redisSSLAsyncWrite(redisAsyncContext *ac) {\n    int rv, done = 0;\n    redisSSL *rssl = ac->c.privctx;\n    redisContext *c = &ac->c;\n\n    rssl->pendingWrite = 0;\n    rv = redisBufferWrite(c, &done);\n    if (rv == REDIS_ERR) {\n        __redisAsyncDisconnect(ac);\n        return;\n    }\n\n    if (!done) {\n        if (rssl->wantRead) {\n            /* Need to read-before-write */\n            rssl->pendingWrite = 1;\n            _EL_DEL_WRITE(ac);\n        } else {\n            /* No extra reads needed, just need to write more */\n            _EL_ADD_WRITE(ac);\n        }\n    } else {\n        /* Already done! 
*/\n        _EL_DEL_WRITE(ac);\n    }\n\n    /* Always reschedule a read */\n    _EL_ADD_READ(ac);\n}\n\nredisContextFuncs redisContextSSLFuncs = {\n    .close = redisNetClose,\n    .free_privctx = redisSSLFree,\n    .async_read = redisSSLAsyncRead,\n    .async_write = redisSSLAsyncWrite,\n    .read = redisSSLRead,\n    .write = redisSSLWrite\n};\n\n"
  },
  {
    "path": "deps/hiredis/test.c",
    "content": "#include \"fmacros.h\"\n#include \"sockcompat.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#ifndef _WIN32\n#include <strings.h>\n#include <sys/time.h>\n#endif\n#include <assert.h>\n#include <signal.h>\n#include <errno.h>\n#include <limits.h>\n#include <math.h>\n\n#include \"hiredis.h\"\n#include \"async.h\"\n#include \"adapters/poll.h\"\n#ifdef HIREDIS_TEST_SSL\n#include \"hiredis_ssl.h\"\n#endif\n#ifdef HIREDIS_TEST_ASYNC\n#include \"adapters/libevent.h\"\n#include <event2/event.h>\n#endif\n#include \"net.h\"\n#include \"win32.h\"\n\nenum connection_type {\n    CONN_TCP,\n    CONN_UNIX,\n    CONN_FD,\n    CONN_SSL\n};\n\nstruct config {\n    enum connection_type type;\n    struct timeval connect_timeout;\n\n    struct {\n        const char *host;\n        int port;\n    } tcp;\n\n    struct {\n        const char *path;\n    } unix_sock;\n\n    struct {\n        const char *host;\n        int port;\n        const char *ca_cert;\n        const char *cert;\n        const char *key;\n    } ssl;\n};\n\nstruct privdata {\n    int dtor_counter;\n};\n\nstruct pushCounters {\n    int nil;\n    int str;\n};\n\nstatic int insecure_calloc_calls;\n\n#ifdef HIREDIS_TEST_SSL\nredisSSLContext *_ssl_ctx = NULL;\n#endif\n\n/* The following lines make up our testing \"framework\" :) */\nstatic int tests = 0, fails = 0, skips = 0;\n#define test(_s) { printf(\"#%02d \", ++tests); printf(_s); }\n#define test_cond(_c) if(_c) printf(\"\\033[0;32mPASSED\\033[0;0m\\n\"); else {printf(\"\\033[0;31mFAILED\\033[0;0m\\n\"); fails++;}\n#define test_skipped() { printf(\"\\033[01;33mSKIPPED\\033[0;0m\\n\"); skips++; }\n\nstatic void millisleep(int ms)\n{\n#ifdef _MSC_VER\n    Sleep(ms);\n#else\n    usleep(ms*1000);\n#endif\n}\n\nstatic long long usec(void) {\n#ifndef _MSC_VER\n    struct timeval tv;\n    gettimeofday(&tv,NULL);\n    return (((long long)tv.tv_sec)*1000000)+tv.tv_usec;\n#else\n    FILETIME ft;\n    GetSystemTimeAsFileTime(&ft);\n    return 
(((long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime) / 10;\n#endif\n}\n\n/* The assert() calls below have side effects, so we need assert()\n * even if we are compiling without asserts (-DNDEBUG). */\n#ifdef NDEBUG\n#undef assert\n#define assert(e) (void)(e)\n#endif\n\n/* Helper to extract Redis version information.  Aborts on any failure. */\n#define REDIS_VERSION_FIELD \"redis_version:\"\nvoid get_redis_version(redisContext *c, int *majorptr, int *minorptr) {\n    redisReply *reply;\n    char *eptr, *s, *e;\n    int major, minor;\n\n    reply = redisCommand(c, \"INFO\");\n    if (reply == NULL || c->err || reply->type != REDIS_REPLY_STRING)\n        goto abort;\n    if ((s = strstr(reply->str, REDIS_VERSION_FIELD)) == NULL)\n        goto abort;\n\n    s += strlen(REDIS_VERSION_FIELD);\n\n    /* We need a field terminator and at least 'x.y.z' (5) bytes of data */\n    if ((e = strstr(s, \"\\r\\n\")) == NULL || (e - s) < 5)\n        goto abort;\n\n    /* Extract version info */\n    major = strtol(s, &eptr, 10);\n    if (*eptr != '.') goto abort;\n    minor = strtol(eptr+1, NULL, 10);\n\n    /* Push info the caller wants */\n    if (majorptr) *majorptr = major;\n    if (minorptr) *minorptr = minor;\n\n    freeReplyObject(reply);\n    return;\n\nabort:\n    freeReplyObject(reply);\n    fprintf(stderr, \"Error:  Cannot determine Redis version, aborting\\n\");\n    exit(1);\n}\n\nstatic redisContext *select_database(redisContext *c) {\n    redisReply *reply;\n\n    /* Switch to DB 9 for testing, now that we know we can chat. */\n    reply = redisCommand(c,\"SELECT 9\");\n    assert(reply != NULL);\n    freeReplyObject(reply);\n\n    /* Make sure the DB is empty */\n    reply = redisCommand(c,\"DBSIZE\");\n    assert(reply != NULL);\n    if (reply->type == REDIS_REPLY_INTEGER && reply->integer == 0) {\n        /* Awesome, DB 9 is empty and we can continue. 
*/\n        freeReplyObject(reply);\n    } else {\n        printf(\"Database #9 is not empty, test cannot continue\\n\");\n        exit(1);\n    }\n\n    return c;\n}\n\n/* Switch protocol */\nstatic void send_hello(redisContext *c, int version) {\n    redisReply *reply;\n    int expected;\n\n    reply = redisCommand(c, \"HELLO %d\", version);\n    expected = version == 3 ? REDIS_REPLY_MAP : REDIS_REPLY_ARRAY;\n    assert(reply != NULL && reply->type == expected);\n    freeReplyObject(reply);\n}\n\n/* Toggle client tracking */\nstatic void send_client_tracking(redisContext *c, const char *str) {\n    redisReply *reply;\n\n    reply = redisCommand(c, \"CLIENT TRACKING %s\", str);\n    assert(reply != NULL && reply->type == REDIS_REPLY_STATUS);\n    freeReplyObject(reply);\n}\n\nstatic int disconnect(redisContext *c, int keep_fd) {\n    redisReply *reply;\n\n    /* Make sure we're on DB 9. */\n    reply = redisCommand(c,\"SELECT 9\");\n    assert(reply != NULL);\n    freeReplyObject(reply);\n    reply = redisCommand(c,\"FLUSHDB\");\n    assert(reply != NULL);\n    freeReplyObject(reply);\n\n    /* Free the context as well, but keep the fd if requested. 
*/\n    if (keep_fd)\n        return redisFreeKeepFd(c);\n    redisFree(c);\n    return -1;\n}\n\nstatic void do_ssl_handshake(redisContext *c) {\n#ifdef HIREDIS_TEST_SSL\n    redisInitiateSSLWithContext(c, _ssl_ctx);\n    if (c->err) {\n        printf(\"SSL error: %s\\n\", c->errstr);\n        redisFree(c);\n        exit(1);\n    }\n#else\n    (void) c;\n#endif\n}\n\nstatic redisContext *do_connect(struct config config) {\n    redisContext *c = NULL;\n\n    if (config.type == CONN_TCP) {\n        c = redisConnect(config.tcp.host, config.tcp.port);\n    } else if (config.type == CONN_SSL) {\n        c = redisConnect(config.ssl.host, config.ssl.port);\n    } else if (config.type == CONN_UNIX) {\n        c = redisConnectUnix(config.unix_sock.path);\n    } else if (config.type == CONN_FD) {\n        /* Create a dummy connection just to get an fd to inherit */\n        redisContext *dummy_ctx = redisConnectUnix(config.unix_sock.path);\n        if (dummy_ctx) {\n            int fd = disconnect(dummy_ctx, 1);\n            printf(\"Connecting to inherited fd %d\\n\", fd);\n            c = redisConnectFd(fd);\n        }\n    } else {\n        assert(NULL);\n    }\n\n    if (c == NULL) {\n        printf(\"Connection error: can't allocate redis context\\n\");\n        exit(1);\n    } else if (c->err) {\n        printf(\"Connection error: %s\\n\", c->errstr);\n        redisFree(c);\n        exit(1);\n    }\n\n    if (config.type == CONN_SSL) {\n        do_ssl_handshake(c);\n    }\n\n    return select_database(c);\n}\n\nstatic void do_reconnect(redisContext *c, struct config config) {\n    redisReconnect(c);\n\n    if (config.type == CONN_SSL) {\n        do_ssl_handshake(c);\n    }\n}\n\nstatic void test_format_commands(void) {\n    char *cmd;\n    int len;\n\n    test(\"Format command without interpolation: \");\n    len = redisFormatCommand(&cmd,\"SET foo bar\");\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        
len == 4+4+(3+2)+4+(3+2)+4+(3+2));\n    hi_free(cmd);\n\n    test(\"Format command with %%s string interpolation: \");\n    len = redisFormatCommand(&cmd,\"SET %s %s\",\"foo\",\"bar\");\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(3+2));\n    hi_free(cmd);\n\n    test(\"Format command with %%s and an empty string: \");\n    len = redisFormatCommand(&cmd,\"SET %s %s\",\"foo\",\"\");\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$0\\r\\n\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(0+2));\n    hi_free(cmd);\n\n    test(\"Format command with an empty string in between proper interpolations: \");\n    len = redisFormatCommand(&cmd,\"SET %s %s\",\"\",\"foo\");\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$0\\r\\n\\r\\n$3\\r\\nfoo\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(0+2)+4+(3+2));\n    hi_free(cmd);\n\n    test(\"Format command with %%b string interpolation: \");\n    len = redisFormatCommand(&cmd,\"SET %b %b\",\"foo\",(size_t)3,\"b\\0r\",(size_t)3);\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$3\\r\\nb\\0r\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(3+2));\n    hi_free(cmd);\n\n    test(\"Format command with %%b and an empty string: \");\n    len = redisFormatCommand(&cmd,\"SET %b %b\",\"foo\",(size_t)3,\"\",(size_t)0);\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$0\\r\\n\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(0+2));\n    hi_free(cmd);\n\n    test(\"Format command with literal %%: \");\n    len = redisFormatCommand(&cmd,\"SET %% %%\");\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$1\\r\\n%\\r\\n$1\\r\\n%\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(1+2)+4+(1+2));\n    hi_free(cmd);\n\n    /* Vararg width depends on the type. 
These tests make sure that the\n     * width is correctly determined using the format and subsequent varargs\n     * can correctly be interpolated. */\n#define INTEGER_WIDTH_TEST(fmt, type) do {                                                \\\n    type value = 123;                                                                     \\\n    test(\"Format command with printf-delegation (\" #type \"): \");                          \\\n    len = redisFormatCommand(&cmd,\"key:%08\" fmt \" str:%s\", value, \"hello\");               \\\n    test_cond(strncmp(cmd,\"*2\\r\\n$12\\r\\nkey:00000123\\r\\n$9\\r\\nstr:hello\\r\\n\",len) == 0 && \\\n        len == 4+5+(12+2)+4+(9+2));                                                       \\\n    hi_free(cmd);                                                                         \\\n} while(0)\n\n#define FLOAT_WIDTH_TEST(type) do {                                                       \\\n    type value = 123.0;                                                                   \\\n    test(\"Format command with printf-delegation (\" #type \"): \");                          \\\n    len = redisFormatCommand(&cmd,\"key:%08.3f str:%s\", value, \"hello\");                   \\\n    test_cond(strncmp(cmd,\"*2\\r\\n$12\\r\\nkey:0123.000\\r\\n$9\\r\\nstr:hello\\r\\n\",len) == 0 && \\\n        len == 4+5+(12+2)+4+(9+2));                                                       \\\n    hi_free(cmd);                                                                         \\\n} while(0)\n\n    INTEGER_WIDTH_TEST(\"d\", int);\n    INTEGER_WIDTH_TEST(\"hhd\", char);\n    INTEGER_WIDTH_TEST(\"hd\", short);\n    INTEGER_WIDTH_TEST(\"ld\", long);\n    INTEGER_WIDTH_TEST(\"lld\", long long);\n    INTEGER_WIDTH_TEST(\"u\", unsigned int);\n    INTEGER_WIDTH_TEST(\"hhu\", unsigned char);\n    INTEGER_WIDTH_TEST(\"hu\", unsigned short);\n    INTEGER_WIDTH_TEST(\"lu\", unsigned long);\n    INTEGER_WIDTH_TEST(\"llu\", unsigned long long);\n    
FLOAT_WIDTH_TEST(float);\n    FLOAT_WIDTH_TEST(double);\n\n    test(\"Format command with unhandled printf format (specifier 'p' not supported): \");\n    len = redisFormatCommand(&cmd,\"key:%08p %b\",(void*)1234,\"foo\",(size_t)3);\n    test_cond(len == -1);\n\n    test(\"Format command with invalid printf format (specifier missing): \");\n    len = redisFormatCommand(&cmd,\"%-\");\n    test_cond(len == -1);\n\n    const char *argv[3];\n    argv[0] = \"SET\";\n    argv[1] = \"foo\\0xxx\";\n    argv[2] = \"bar\";\n    size_t lens[3] = { 3, 7, 3 };\n    int argc = 3;\n\n    test(\"Format command by passing argc/argv without lengths: \");\n    len = redisFormatCommandArgv(&cmd,argc,argv,NULL);\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(3+2));\n    hi_free(cmd);\n\n    test(\"Format command by passing argc/argv with lengths: \");\n    len = redisFormatCommandArgv(&cmd,argc,argv,lens);\n    test_cond(strncmp(cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$7\\r\\nfoo\\0xxx\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(7+2)+4+(3+2));\n    hi_free(cmd);\n\n    hisds sds_cmd;\n\n    sds_cmd = NULL;\n    test(\"Format command into hisds by passing argc/argv without lengths: \");\n    len = redisFormatSdsCommandArgv(&sds_cmd,argc,argv,NULL);\n    test_cond(strncmp(sds_cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nfoo\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(3+2)+4+(3+2));\n    hi_sdsfree(sds_cmd);\n\n    sds_cmd = NULL;\n    test(\"Format command into hisds by passing argc/argv with lengths: \");\n    len = redisFormatSdsCommandArgv(&sds_cmd,argc,argv,lens);\n    test_cond(strncmp(sds_cmd,\"*3\\r\\n$3\\r\\nSET\\r\\n$7\\r\\nfoo\\0xxx\\r\\n$3\\r\\nbar\\r\\n\",len) == 0 &&\n        len == 4+4+(3+2)+4+(7+2)+4+(3+2));\n    hi_sdsfree(sds_cmd);\n}\n\nstatic void test_append_formatted_commands(struct config config) {\n    redisContext *c;\n    redisReply 
*reply;\n    char *cmd;\n    int len;\n\n    c = do_connect(config);\n\n    test(\"Append format command: \");\n\n    len = redisFormatCommand(&cmd, \"SET foo bar\");\n\n    test_cond(redisAppendFormattedCommand(c, cmd, len) == REDIS_OK);\n\n    assert(redisGetReply(c, (void*)&reply) == REDIS_OK);\n\n    hi_free(cmd);\n    freeReplyObject(reply);\n\n    disconnect(c, 0);\n}\n\nstatic void test_tcp_options(struct config cfg) {\n    redisContext *c;\n\n    c = do_connect(cfg);\n\n    test(\"We can enable TCP_KEEPALIVE: \");\n    test_cond(redisEnableKeepAlive(c) == REDIS_OK);\n\n#ifdef TCP_USER_TIMEOUT\n    test(\"We can set TCP_USER_TIMEOUT: \");\n    test_cond(redisSetTcpUserTimeout(c, 100) == REDIS_OK);\n#else\n    test(\"Setting TCP_USER_TIMEOUT errors when unsupported: \");\n    test_cond(redisSetTcpUserTimeout(c, 100) == REDIS_ERR && c->err == REDIS_ERR_IO);\n#endif\n\n    redisFree(c);\n}\n\nstatic void test_reply_reader(void) {\n    redisReader *reader;\n    void *reply, *root;\n    int ret;\n    int i;\n\n    test(\"Error handling in reply parser: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\"@foo\\r\\n\",6);\n    ret = redisReaderGetReply(reader,NULL);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Protocol error, got \\\"@\\\" as reply type byte\") == 0);\n    redisReaderFree(reader);\n\n    /* when the reply already contains multiple items, they must be free'd\n     * on an error. valgrind will bark when this doesn't happen. 
*/\n    test(\"Memory cleanup in reply parser: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\"*2\\r\\n\",4);\n    redisReaderFeed(reader,(char*)\"$5\\r\\nhello\\r\\n\",11);\n    redisReaderFeed(reader,(char*)\"@foo\\r\\n\",6);\n    ret = redisReaderGetReply(reader,NULL);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Protocol error, got \\\"@\\\" as reply type byte\") == 0);\n    redisReaderFree(reader);\n\n    reader = redisReaderCreate();\n    test(\"Can handle arbitrarily nested multi-bulks: \");\n    for (i = 0; i < 128; i++) {\n        redisReaderFeed(reader,(char*)\"*1\\r\\n\", 4);\n    }\n    redisReaderFeed(reader,(char*)\"$6\\r\\nLOLWUT\\r\\n\",12);\n    ret = redisReaderGetReply(reader,&reply);\n    root = reply; /* Keep track of the root reply */\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_ARRAY &&\n        ((redisReply*)reply)->elements == 1);\n\n    test(\"Can parse arbitrarily nested multi-bulks correctly: \");\n    while(i--) {\n        assert(reply != NULL && ((redisReply*)reply)->type == REDIS_REPLY_ARRAY);\n        reply = ((redisReply*)reply)->element[0];\n    }\n    test_cond(((redisReply*)reply)->type == REDIS_REPLY_STRING &&\n        !memcmp(((redisReply*)reply)->str, \"LOLWUT\", 6));\n    freeReplyObject(root);\n    redisReaderFree(reader);\n\n    test(\"Correctly parses LLONG_MAX: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \":9223372036854775807\\r\\n\",22);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n            ((redisReply*)reply)->type == REDIS_REPLY_INTEGER &&\n            ((redisReply*)reply)->integer == LLONG_MAX);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error when > LLONG_MAX: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \":9223372036854775808\\r\\n\",22);\n    ret = redisReaderGetReply(reader,&reply);\n    
test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bad integer value\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Correctly parses LLONG_MIN: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \":-9223372036854775808\\r\\n\",23);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n            ((redisReply*)reply)->type == REDIS_REPLY_INTEGER &&\n            ((redisReply*)reply)->integer == LLONG_MIN);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error when < LLONG_MIN: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \":-9223372036854775809\\r\\n\",23);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bad integer value\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error when array < -1: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"*-2\\r\\n+asdf\\r\\n\",12);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Multi-bulk length out of range\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error when bulk < -1: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"$-2\\r\\nasdf\\r\\n\",11);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bulk string length out of range\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can configure maximum multi-bulk elements: \");\n    reader = redisReaderCreate();\n    reader->maxelements = 1024;\n    redisReaderFeed(reader, \"*1025\\r\\n\", 7);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr, \"Multi-bulk length out of 
range\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Multi-bulk never overflows regardless of maxelements: \");\n    size_t bad_mbulk_len = (SIZE_MAX / sizeof(void *)) + 3;\n    char bad_mbulk_reply[100];\n    snprintf(bad_mbulk_reply, sizeof(bad_mbulk_reply), \"*%llu\\r\\n+asdf\\r\\n\",\n        (unsigned long long) bad_mbulk_len);\n\n    reader = redisReaderCreate();\n    reader->maxelements = 0;    /* Don't rely on default limit */\n    redisReaderFeed(reader, bad_mbulk_reply, strlen(bad_mbulk_reply));\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR && strcasecmp(reader->errstr, \"Out of memory\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n#if LLONG_MAX > SIZE_MAX\n    test(\"Set error when array > SIZE_MAX: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"*9223372036854775807\\r\\n+asdf\\r\\n\",29);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n            strcasecmp(reader->errstr,\"Multi-bulk length out of range\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error when bulk > SIZE_MAX: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"$9223372036854775807\\r\\nasdf\\r\\n\",28);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n            strcasecmp(reader->errstr,\"Bulk string length out of range\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n#endif\n\n    test(\"Works with NULL functions for reply: \");\n    reader = redisReaderCreate();\n    reader->fn = NULL;\n    redisReaderFeed(reader,(char*)\"+OK\\r\\n\",5);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK && reply == (void*)REDIS_REPLY_STATUS);\n    redisReaderFree(reader);\n\n    test(\"Works when a single newline (\\\\r\\\\n) covers two calls to feed: \");\n    reader = redisReaderCreate();\n    reader->fn 
= NULL;\n    redisReaderFeed(reader,(char*)\"+OK\\r\",4);\n    ret = redisReaderGetReply(reader,&reply);\n    assert(ret == REDIS_OK && reply == NULL);\n    redisReaderFeed(reader,(char*)\"\\n\",1);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK && reply == (void*)REDIS_REPLY_STATUS);\n    redisReaderFree(reader);\n\n    test(\"Don't reset state after protocol error: \");\n    reader = redisReaderCreate();\n    reader->fn = NULL;\n    redisReaderFeed(reader,(char*)\"x\",1);\n    ret = redisReaderGetReply(reader,&reply);\n    assert(ret == REDIS_ERR);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR && reply == NULL);\n    redisReaderFree(reader);\n\n    test(\"Don't reset state after protocol error (not segfault): \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\"*3\\r\\n$3\\r\\nSET\\r\\n$5\\r\\nhello\\r\\n$\", 25);\n    ret = redisReaderGetReply(reader,&reply);\n    assert(ret == REDIS_OK);\n    redisReaderFeed(reader,(char*)\"3\\r\\nval\\r\\n\", 8);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_ARRAY &&\n        ((redisReply*)reply)->elements == 3);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    /* Regression test for issue #45 on GitHub. 
*/\n    test(\"Don't do empty allocation for empty multi bulk: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\"*0\\r\\n\",4);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_ARRAY &&\n        ((redisReply*)reply)->elements == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    /* RESP3 verbatim strings (GitHub issue #802) */\n    test(\"Can parse RESP3 verbatim strings: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\"=10\\r\\ntxt:LOLWUT\\r\\n\",17);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_VERB &&\n         !memcmp(((redisReply*)reply)->str,\"LOLWUT\", 6));\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    /* RESP3 push messages (Github issue #815) */\n    test(\"Can parse RESP3 push messages: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader,(char*)\">2\\r\\n$6\\r\\nLOLWUT\\r\\n:42\\r\\n\",21);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_PUSH &&\n        ((redisReply*)reply)->elements == 2 &&\n        ((redisReply*)reply)->element[0]->type == REDIS_REPLY_STRING &&\n        !memcmp(((redisReply*)reply)->element[0]->str,\"LOLWUT\",6) &&\n        ((redisReply*)reply)->element[1]->type == REDIS_REPLY_INTEGER &&\n        ((redisReply*)reply)->element[1]->integer == 42);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 doubles: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \",3.14159265358979323846\\r\\n\",25);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_DOUBLE &&\n              fabs(((redisReply*)reply)->dval - 3.14159265358979323846) < 0.00000001 
&&\n              ((redisReply*)reply)->len == 22 &&\n              strcmp(((redisReply*)reply)->str, \"3.14159265358979323846\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error on invalid RESP3 double: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \",3.14159\\000265358979323846\\r\\n\",26);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bad double value\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Correctly parses RESP3 double INFINITY: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \",inf\\r\\n\",6);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_DOUBLE &&\n              isinf(((redisReply*)reply)->dval) &&\n              ((redisReply*)reply)->dval > 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Correctly parses RESP3 double NaN: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \",nan\\r\\n\",6);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_DOUBLE &&\n              isnan(((redisReply*)reply)->dval));\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Correctly parses RESP3 double -Nan: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \",-nan\\r\\n\", 7);\n    ret = redisReaderGetReply(reader, &reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_DOUBLE &&\n              isnan(((redisReply*)reply)->dval));\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 nil: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"_\\r\\n\",3);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret 
== REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_NIL);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error on invalid RESP3 nil: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"_nil\\r\\n\",6);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bad nil value\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 bool (true): \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"#t\\r\\n\",4);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_BOOL &&\n              ((redisReply*)reply)->integer);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 bool (false): \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"#f\\r\\n\",4);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n              ((redisReply*)reply)->type == REDIS_REPLY_BOOL &&\n              !((redisReply*)reply)->integer);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Set error on invalid RESP3 bool: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"#foobar\\r\\n\",9);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_ERR &&\n              strcasecmp(reader->errstr,\"Bad bool value\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 map: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"%2\\r\\n+first\\r\\n:123\\r\\n$6\\r\\nsecond\\r\\n#t\\r\\n\",34);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_MAP &&\n        ((redisReply*)reply)->elements == 4 &&\n        ((redisReply*)reply)->element[0]->type == 
REDIS_REPLY_STATUS &&\n        ((redisReply*)reply)->element[0]->len == 5 &&\n        !strcmp(((redisReply*)reply)->element[0]->str,\"first\") &&\n        ((redisReply*)reply)->element[1]->type == REDIS_REPLY_INTEGER &&\n        ((redisReply*)reply)->element[1]->integer == 123 &&\n        ((redisReply*)reply)->element[2]->type == REDIS_REPLY_STRING &&\n        ((redisReply*)reply)->element[2]->len == 6 &&\n        !strcmp(((redisReply*)reply)->element[2]->str,\"second\") &&\n        ((redisReply*)reply)->element[3]->type == REDIS_REPLY_BOOL &&\n        ((redisReply*)reply)->element[3]->integer);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 set: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"~5\\r\\n+orange\\r\\n$5\\r\\napple\\r\\n#f\\r\\n:100\\r\\n:999\\r\\n\",40);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_SET &&\n        ((redisReply*)reply)->elements == 5 &&\n        ((redisReply*)reply)->element[0]->type == REDIS_REPLY_STATUS &&\n        ((redisReply*)reply)->element[0]->len == 6 &&\n        !strcmp(((redisReply*)reply)->element[0]->str,\"orange\") &&\n        ((redisReply*)reply)->element[1]->type == REDIS_REPLY_STRING &&\n        ((redisReply*)reply)->element[1]->len == 5 &&\n        !strcmp(((redisReply*)reply)->element[1]->str,\"apple\") &&\n        ((redisReply*)reply)->element[2]->type == REDIS_REPLY_BOOL &&\n        !((redisReply*)reply)->element[2]->integer &&\n        ((redisReply*)reply)->element[3]->type == REDIS_REPLY_INTEGER &&\n        ((redisReply*)reply)->element[3]->integer == 100 &&\n        ((redisReply*)reply)->element[4]->type == REDIS_REPLY_INTEGER &&\n        ((redisReply*)reply)->element[4]->integer == 999);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 bignum: \");\n    reader = redisReaderCreate();\n    
redisReaderFeed(reader,\"(3492890328409238509324850943850943825024385\\r\\n\",46);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_BIGNUM &&\n        ((redisReply*)reply)->len == 43 &&\n        !strcmp(((redisReply*)reply)->str,\"3492890328409238509324850943850943825024385\"));\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n\n    test(\"Can parse RESP3 doubles in an array: \");\n    reader = redisReaderCreate();\n    redisReaderFeed(reader, \"*1\\r\\n,3.14159265358979323846\\r\\n\",29);\n    ret = redisReaderGetReply(reader,&reply);\n    test_cond(ret == REDIS_OK &&\n        ((redisReply*)reply)->type == REDIS_REPLY_ARRAY &&\n        ((redisReply*)reply)->elements == 1 &&\n        ((redisReply*)reply)->element[0]->type == REDIS_REPLY_DOUBLE &&\n        fabs(((redisReply*)reply)->element[0]->dval - 3.14159265358979323846) < 0.00000001 &&\n        ((redisReply*)reply)->element[0]->len == 22 &&\n        strcmp(((redisReply*)reply)->element[0]->str, \"3.14159265358979323846\") == 0);\n    freeReplyObject(reply);\n    redisReaderFree(reader);\n}\n\nstatic void test_free_null(void) {\n    void *redisCtx = NULL;\n    void *reply = NULL;\n\n    test(\"Don't fail when redisFree is passed a NULL value: \");\n    redisFree(redisCtx);\n    test_cond(redisCtx == NULL);\n\n    test(\"Don't fail when freeReplyObject is passed a NULL value: \");\n    freeReplyObject(reply);\n    test_cond(reply == NULL);\n}\n\nstatic void *hi_malloc_fail(size_t size) {\n    (void)size;\n    return NULL;\n}\n\nstatic void *hi_calloc_fail(size_t nmemb, size_t size) {\n    (void)nmemb;\n    (void)size;\n    return NULL;\n}\n\nstatic void *hi_calloc_insecure(size_t nmemb, size_t size) {\n    (void)nmemb;\n    (void)size;\n    insecure_calloc_calls++;\n    return (void*)0xdeadc0de;\n}\n\nstatic void *hi_realloc_fail(void *ptr, size_t size) {\n    (void)ptr;\n    (void)size;\n    return NULL;\n}\n\nstatic 
void test_allocator_injection(void) {\n    void *ptr;\n\n    hiredisAllocFuncs ha = {\n        .mallocFn = hi_malloc_fail,\n        .callocFn = hi_calloc_fail,\n        .reallocFn = hi_realloc_fail,\n        .strdupFn = strdup,\n        .freeFn = free,\n    };\n\n    // Override hiredis allocators\n    hiredisSetAllocators(&ha);\n\n    test(\"redisContext uses injected allocators: \");\n    redisContext *c = redisConnect(\"localhost\", 6379);\n    test_cond(c == NULL);\n\n    test(\"redisReader uses injected allocators: \");\n    redisReader *reader = redisReaderCreate();\n    test_cond(reader == NULL);\n\n    /* Make sure hiredis itself protects against a non-overflow checking calloc */\n    test(\"hiredis calloc wrapper protects against overflow: \");\n    ha.callocFn = hi_calloc_insecure;\n    hiredisSetAllocators(&ha);\n    ptr = hi_calloc((SIZE_MAX / sizeof(void*)) + 3, sizeof(void*));\n    test_cond(ptr == NULL && insecure_calloc_calls == 0);\n\n    // Return allocators to default\n    hiredisResetAllocators();\n}\n\n#define HIREDIS_BAD_DOMAIN \"idontexist-noreally.com\"\nstatic void test_blocking_connection_errors(void) {\n    struct addrinfo hints = {.ai_family = AF_INET};\n    struct addrinfo *ai_tmp = NULL;\n    redisContext *c;\n\n    int rv = getaddrinfo(HIREDIS_BAD_DOMAIN, \"6379\", &hints, &ai_tmp);\n    if (rv != 0) {\n        // Address does *not* exist\n        test(\"Returns error when host cannot be resolved: \");\n        // First see if this domain name *actually* resolves to NXDOMAIN\n        c = redisConnect(HIREDIS_BAD_DOMAIN, 6379);\n        test_cond(\n            c->err == REDIS_ERR_OTHER &&\n            (strcmp(c->errstr, \"Name or service not known\") == 0 ||\n             strcmp(c->errstr, \"Can't resolve: \" HIREDIS_BAD_DOMAIN) == 0 ||\n             strcmp(c->errstr, \"Name does not resolve\") == 0 ||\n             strcmp(c->errstr, \"nodename nor servname provided, or not known\") == 0 ||\n             strcmp(c->errstr, \"node name 
or service name not known\") == 0 ||\n             strcmp(c->errstr, \"No address associated with hostname\") == 0 ||\n             strcmp(c->errstr, \"Temporary failure in name resolution\") == 0 ||\n             strcmp(c->errstr, \"hostname nor servname provided, or not known\") == 0 ||\n             strcmp(c->errstr, \"no address associated with name\") == 0 ||\n             strcmp(c->errstr, \"No such host is known. \") == 0));\n        redisFree(c);\n    } else {\n        printf(\"Skipping NXDOMAIN test. Found evil ISP!\\n\");\n        freeaddrinfo(ai_tmp);\n    }\n\n#ifndef _WIN32\n    redisOptions opt = {0};\n    struct timeval tv;\n\n    test(\"Returns error when the port is not open: \");\n    c = redisConnect((char*)\"localhost\", 1);\n    test_cond(c->err == REDIS_ERR_IO &&\n        strcmp(c->errstr,\"Connection refused\") == 0);\n    redisFree(c);\n\n\n    /* Verify we don't regress from the fix in PR #1180 */\n    test(\"We don't clobber connection exception with setsockopt error: \");\n    tv = (struct timeval){.tv_sec = 0, .tv_usec = 500000};\n    opt.command_timeout = opt.connect_timeout = &tv;\n    REDIS_OPTIONS_SET_TCP(&opt, \"localhost\", 10337);\n    c = redisConnectWithOptions(&opt);\n    test_cond(c->err == REDIS_ERR_IO &&\n              strcmp(c->errstr, \"Connection refused\") == 0);\n    redisFree(c);\n\n    test(\"Returns error when the unix_sock socket path doesn't accept connections: \");\n    c = redisConnectUnix((char*)\"/tmp/idontexist.sock\");\n    test_cond(c->err == REDIS_ERR_IO); /* Don't care about the message... 
*/\n    redisFree(c);\n#endif\n}\n\n/* Test push handler */\nvoid push_handler(void *privdata, void *r) {\n    struct pushCounters *pcounts = privdata;\n    redisReply *reply = r, *payload;\n\n    assert(reply && reply->type == REDIS_REPLY_PUSH && reply->elements == 2);\n\n    payload = reply->element[1];\n    if (payload->type == REDIS_REPLY_ARRAY) {\n        payload = payload->element[0];\n    }\n\n    if (payload->type == REDIS_REPLY_STRING) {\n        pcounts->str++;\n    } else if (payload->type == REDIS_REPLY_NIL) {\n        pcounts->nil++;\n    }\n\n    freeReplyObject(reply);\n}\n\n/* Dummy function just to test setting a callback with redisOptions */\nvoid push_handler_async(redisAsyncContext *ac, void *reply) {\n    (void)ac;\n    (void)reply;\n}\n\nstatic void test_resp3_push_handler(redisContext *c) {\n    struct pushCounters pc = {0};\n    redisPushFn *old = NULL;\n    redisReply *reply;\n    void *privdata;\n\n    /* Switch to RESP3 and turn on client tracking */\n    send_hello(c, 3);\n    send_client_tracking(c, \"ON\");\n    privdata = c->privdata;\n    c->privdata = &pc;\n\n    reply = redisCommand(c, \"GET key:0\");\n    assert(reply != NULL);\n    freeReplyObject(reply);\n\n    test(\"RESP3 PUSH messages are handled out of band by default: \");\n    reply = redisCommand(c, \"SET key:0 val:0\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS);\n    freeReplyObject(reply);\n\n    assert((reply = redisCommand(c, \"GET key:0\")) != NULL);\n    freeReplyObject(reply);\n\n    old = redisSetPushCallback(c, push_handler);\n    test(\"We can set a custom RESP3 PUSH handler: \");\n    reply = redisCommand(c, \"SET key:0 val:0\");\n    /* We need another command because depending on the version of Redis, the\n     * notification may be delivered after the command's reply. 
*/\n    assert(reply != NULL);\n    freeReplyObject(reply);\n    reply = redisCommand(c, \"PING\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS && pc.str == 1);\n    freeReplyObject(reply);\n\n    test(\"We properly handle a NIL invalidation payload: \");\n    reply = redisCommand(c, \"FLUSHDB\");\n    assert(reply != NULL);\n    freeReplyObject(reply);\n    reply = redisCommand(c, \"PING\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS && pc.nil == 1);\n    freeReplyObject(reply);\n\n    /* Unset the push callback and generate an invalidate message making\n     * sure it is not handled out of band. */\n    test(\"With no handler, PUSH replies come in-band: \");\n    redisSetPushCallback(c, NULL);\n    assert((reply = redisCommand(c, \"GET key:0\")) != NULL);\n    freeReplyObject(reply);\n    assert((reply = redisCommand(c, \"SET key:0 invalid\")) != NULL);\n    /* Depending on Redis version, we may receive either push notification or\n     * status reply. Both cases are valid. 
*/\n    if (reply->type == REDIS_REPLY_STATUS) {\n        freeReplyObject(reply);\n        reply = redisCommand(c, \"PING\");\n    }\n    test_cond(reply->type == REDIS_REPLY_PUSH);\n    freeReplyObject(reply);\n\n    test(\"With no PUSH handler, no replies are lost: \");\n    assert(redisGetReply(c, (void**)&reply) == REDIS_OK);\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS);\n    freeReplyObject(reply);\n\n    /* Return to the originally set PUSH handler */\n    assert(old != NULL);\n    redisSetPushCallback(c, old);\n\n    /* Switch back to RESP2 and disable tracking */\n    c->privdata = privdata;\n    send_client_tracking(c, \"OFF\");\n    send_hello(c, 2);\n}\n\nredisOptions get_redis_tcp_options(struct config config) {\n    redisOptions options = {0};\n    REDIS_OPTIONS_SET_TCP(&options, config.tcp.host, config.tcp.port);\n    return options;\n}\n\nstatic void test_resp3_push_options(struct config config) {\n    redisAsyncContext *ac;\n    redisContext *c;\n    redisOptions options;\n\n    test(\"We set a default RESP3 handler for redisContext: \");\n    options = get_redis_tcp_options(config);\n    assert((c = redisConnectWithOptions(&options)) != NULL);\n    test_cond(c->push_cb != NULL);\n    redisFree(c);\n\n    test(\"We don't set a default RESP3 push handler for redisAsyncContext: \");\n    options = get_redis_tcp_options(config);\n    assert((ac = redisAsyncConnectWithOptions(&options)) != NULL);\n    test_cond(ac->c.push_cb == NULL);\n    redisAsyncFree(ac);\n\n    test(\"Our REDIS_OPT_NO_PUSH_AUTOFREE flag works: \");\n    options = get_redis_tcp_options(config);\n    options.options |= REDIS_OPT_NO_PUSH_AUTOFREE;\n    assert((c = redisConnectWithOptions(&options)) != NULL);\n    test_cond(c->push_cb == NULL);\n    redisFree(c);\n\n    test(\"We can use redisOptions to set a custom PUSH handler for redisContext: \");\n    options = get_redis_tcp_options(config);\n    options.push_cb = push_handler;\n    assert((c = 
redisConnectWithOptions(&options)) != NULL);\n    test_cond(c->push_cb == push_handler);\n    redisFree(c);\n\n    test(\"We can use redisOptions to set a custom PUSH handler for redisAsyncContext: \");\n    options = get_redis_tcp_options(config);\n    options.async_push_cb = push_handler_async;\n    assert((ac = redisAsyncConnectWithOptions(&options)) != NULL);\n    test_cond(ac->push_cb == push_handler_async);\n    redisAsyncFree(ac);\n}\n\nvoid free_privdata(void *privdata) {\n    struct privdata *data = privdata;\n    data->dtor_counter++;\n}\n\nstatic void test_privdata_hooks(struct config config) {\n    struct privdata data = {0};\n    redisOptions options;\n    redisContext *c;\n\n    test(\"We can use redisOptions to set privdata: \");\n    options = get_redis_tcp_options(config);\n    REDIS_OPTIONS_SET_PRIVDATA(&options, &data, free_privdata);\n    assert((c = redisConnectWithOptions(&options)) != NULL);\n    test_cond(c->privdata == &data);\n\n    test(\"Our privdata destructor fires when we free the context: \");\n    redisFree(c);\n    test_cond(data.dtor_counter == 1);\n}\n\nstatic void test_blocking_connection(struct config config) {\n    redisContext *c;\n    redisReply *reply;\n    int major;\n\n    c = do_connect(config);\n\n    test(\"Is able to deliver commands: \");\n    reply = redisCommand(c,\"PING\");\n    test_cond(reply->type == REDIS_REPLY_STATUS &&\n        strcasecmp(reply->str,\"pong\") == 0)\n    freeReplyObject(reply);\n\n    test(\"Is able to send commands verbatim: \");\n    reply = redisCommand(c,\"SET foo bar\");\n    test_cond(reply->type == REDIS_REPLY_STATUS &&\n        strcasecmp(reply->str,\"ok\") == 0)\n    freeReplyObject(reply);\n\n    test(\"%%s String interpolation works: \");\n    reply = redisCommand(c,\"SET %s %s\",\"foo\",\"hello world\");\n    freeReplyObject(reply);\n    reply = redisCommand(c,\"GET foo\");\n    test_cond(reply->type == REDIS_REPLY_STRING &&\n        strcmp(reply->str,\"hello world\") == 0);\n  
  freeReplyObject(reply);\n\n    test(\"%%b String interpolation works: \");\n    reply = redisCommand(c,\"SET %b %b\",\"foo\",(size_t)3,\"hello\\x00world\",(size_t)11);\n    freeReplyObject(reply);\n    reply = redisCommand(c,\"GET foo\");\n    test_cond(reply->type == REDIS_REPLY_STRING &&\n        memcmp(reply->str,\"hello\\x00world\",11) == 0)\n\n    test(\"Binary reply length is correct: \");\n    test_cond(reply->len == 11)\n    freeReplyObject(reply);\n\n    test(\"Can parse nil replies: \");\n    reply = redisCommand(c,\"GET nokey\");\n    test_cond(reply->type == REDIS_REPLY_NIL)\n    freeReplyObject(reply);\n\n    /* test 7 */\n    test(\"Can parse integer replies: \");\n    reply = redisCommand(c,\"INCR mycounter\");\n    test_cond(reply->type == REDIS_REPLY_INTEGER && reply->integer == 1)\n    freeReplyObject(reply);\n\n    test(\"Can parse multi bulk replies: \");\n    freeReplyObject(redisCommand(c,\"LPUSH mylist foo\"));\n    freeReplyObject(redisCommand(c,\"LPUSH mylist bar\"));\n    reply = redisCommand(c,\"LRANGE mylist 0 -1\");\n    test_cond(reply->type == REDIS_REPLY_ARRAY &&\n              reply->elements == 2 &&\n              !memcmp(reply->element[0]->str,\"bar\",3) &&\n              !memcmp(reply->element[1]->str,\"foo\",3))\n    freeReplyObject(reply);\n\n    /* m/e with multi bulk reply *before* other reply.\n     * specifically test ordering of reply items to parse. 
*/\n    test(\"Can handle nested multi bulk replies: \");\n    freeReplyObject(redisCommand(c,\"MULTI\"));\n    freeReplyObject(redisCommand(c,\"LRANGE mylist 0 -1\"));\n    freeReplyObject(redisCommand(c,\"PING\"));\n    reply = (redisCommand(c,\"EXEC\"));\n    test_cond(reply->type == REDIS_REPLY_ARRAY &&\n              reply->elements == 2 &&\n              reply->element[0]->type == REDIS_REPLY_ARRAY &&\n              reply->element[0]->elements == 2 &&\n              !memcmp(reply->element[0]->element[0]->str,\"bar\",3) &&\n              !memcmp(reply->element[0]->element[1]->str,\"foo\",3) &&\n              reply->element[1]->type == REDIS_REPLY_STATUS &&\n              strcasecmp(reply->element[1]->str,\"pong\") == 0);\n    freeReplyObject(reply);\n\n    test(\"Send command by passing argc/argv: \");\n    const char *argv[3] = {\"SET\", \"foo\", \"bar\"};\n    size_t argvlen[3] = {3, 3, 3};\n    reply = redisCommandArgv(c,3,argv,argvlen);\n    test_cond(reply->type == REDIS_REPLY_STATUS);\n    freeReplyObject(reply);\n\n    /* Make sure passing NULL to redisGetReply is safe */\n    test(\"Can pass NULL to redisGetReply: \");\n    assert(redisAppendCommand(c, \"PING\") == REDIS_OK);\n    test_cond(redisGetReply(c, NULL) == REDIS_OK);\n\n    get_redis_version(c, &major, NULL);\n    if (major >= 6) test_resp3_push_handler(c);\n    test_resp3_push_options(config);\n\n    test_privdata_hooks(config);\n\n    disconnect(c, 0);\n}\n\n/* Send DEBUG SLEEP 0 to detect if we have this command */\nstatic int detect_debug_sleep(redisContext *c) {\n    int detected;\n    redisReply *reply = redisCommand(c, \"DEBUG SLEEP 0\\r\\n\");\n\n    if (reply == NULL || c->err) {\n        const char *cause = c->err ? 
c->errstr : \"(none)\";\n        fprintf(stderr, \"Error testing for DEBUG SLEEP (Redis error: %s), exiting\\n\", cause);\n        exit(-1);\n    }\n\n    detected = reply->type == REDIS_REPLY_STATUS;\n    freeReplyObject(reply);\n\n    return detected;\n}\n\nstatic void test_blocking_connection_timeouts(struct config config) {\n    redisContext *c;\n    redisReply *reply;\n    ssize_t s;\n    const char *sleep_cmd = \"DEBUG SLEEP 3\\r\\n\";\n    struct timeval tv;\n\n    c = do_connect(config);\n    test(\"Successfully completes a command when the timeout is not exceeded: \");\n    reply = redisCommand(c,\"SET foo fast\");\n    freeReplyObject(reply);\n    tv.tv_sec = 0;\n    tv.tv_usec = 10000;\n    redisSetTimeout(c, tv);\n    reply = redisCommand(c, \"GET foo\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STRING && memcmp(reply->str, \"fast\", 4) == 0);\n    freeReplyObject(reply);\n    disconnect(c, 0);\n\n    c = do_connect(config);\n    test(\"Does not return a reply when the command times out: \");\n    if (detect_debug_sleep(c)) {\n        redisAppendFormattedCommand(c, sleep_cmd, strlen(sleep_cmd));\n\n        // flush connection buffer without waiting for the reply\n        s = c->funcs->write(c);\n        assert(s == (ssize_t)hi_sdslen(c->obuf));\n        hi_sdsfree(c->obuf);\n        c->obuf = hi_sdsempty();\n\n        tv.tv_sec = 0;\n        tv.tv_usec = 10000;\n        redisSetTimeout(c, tv);\n        reply = redisCommand(c, \"GET foo\");\n#ifndef _WIN32\n        test_cond(s > 0 && reply == NULL && c->err == REDIS_ERR_IO &&\n                  strcmp(c->errstr, \"Resource temporarily unavailable\") == 0);\n#else\n        test_cond(s > 0 && reply == NULL && c->err == REDIS_ERR_TIMEOUT &&\n                  strcmp(c->errstr, \"recv timeout\") == 0);\n#endif\n        freeReplyObject(reply);\n\n        // wait for the DEBUG SLEEP to complete so that Redis server is unblocked for the following tests\n        millisleep(3000);\n    } else 
{\n        test_skipped();\n    }\n\n    test(\"Reconnect properly reconnects after a timeout: \");\n    do_reconnect(c, config);\n    reply = redisCommand(c, \"PING\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS && strcmp(reply->str, \"PONG\") == 0);\n    freeReplyObject(reply);\n\n    test(\"Reconnect properly uses owned parameters: \");\n    config.tcp.host = \"foo\";\n    config.unix_sock.path = \"foo\";\n    do_reconnect(c, config);\n    reply = redisCommand(c, \"PING\");\n    test_cond(reply != NULL && reply->type == REDIS_REPLY_STATUS && strcmp(reply->str, \"PONG\") == 0);\n    freeReplyObject(reply);\n\n    disconnect(c, 0);\n}\n\nstatic void test_blocking_io_errors(struct config config) {\n    redisContext *c;\n    redisReply *reply;\n    void *_reply;\n    int major, minor;\n\n    /* Connect to target given by config. */\n    c = do_connect(config);\n    get_redis_version(c, &major, &minor);\n\n    test(\"Returns I/O error when the connection is lost: \");\n    reply = redisCommand(c,\"QUIT\");\n    if (major > 2 || (major == 2 && minor > 0)) {\n        /* > 2.0 returns OK on QUIT and read() should be issued once more\n         * to know the descriptor is at EOF. */\n        test_cond(strcasecmp(reply->str,\"OK\") == 0 &&\n            redisGetReply(c,&_reply) == REDIS_ERR);\n        freeReplyObject(reply);\n    } else {\n        test_cond(reply == NULL);\n    }\n\n#ifndef _WIN32\n    /* On 2.0, QUIT will cause the connection to be closed immediately and\n     * the read(2) for the reply on QUIT will set the error to EOF.\n     * On >2.0, QUIT will return with OK and another read(2) needs to be\n     * issued to find out that the socket was closed by the server. In both\n     * conditions, the error will be set to EOF. 
*/\n    assert(c->err == REDIS_ERR_EOF &&\n        strcmp(c->errstr,\"Server closed the connection\") == 0);\n#endif\n    redisFree(c);\n\n    c = do_connect(config);\n    test(\"Returns I/O error on socket timeout: \");\n    struct timeval tv = { 0, 1000 };\n    assert(redisSetTimeout(c,tv) == REDIS_OK);\n    int respcode = redisGetReply(c,&_reply);\n#ifndef _WIN32\n    test_cond(respcode == REDIS_ERR && c->err == REDIS_ERR_IO && errno == EAGAIN);\n#else\n    test_cond(respcode == REDIS_ERR && c->err == REDIS_ERR_TIMEOUT);\n#endif\n    redisFree(c);\n}\n\nstatic void test_invalid_timeout_errors(struct config config) {\n    redisContext *c;\n\n    test(\"Set error when an invalid timeout usec value is used during connect: \");\n\n    config.connect_timeout.tv_sec = 0;\n    config.connect_timeout.tv_usec = 10000001;\n\n    if (config.type == CONN_TCP || config.type == CONN_SSL) {\n        c = redisConnectWithTimeout(config.tcp.host, config.tcp.port, config.connect_timeout);\n    } else if(config.type == CONN_UNIX) {\n        c = redisConnectUnixWithTimeout(config.unix_sock.path, config.connect_timeout);\n    } else {\n        assert(NULL);\n    }\n\n    test_cond(c->err == REDIS_ERR_IO && strcmp(c->errstr, \"Invalid timeout specified\") == 0);\n    redisFree(c);\n\n    test(\"Set error when an invalid timeout sec value is used during connect: \");\n\n    config.connect_timeout.tv_sec = (((LONG_MAX) - 999) / 1000) + 1;\n    config.connect_timeout.tv_usec = 0;\n\n    if (config.type == CONN_TCP || config.type == CONN_SSL) {\n        c = redisConnectWithTimeout(config.tcp.host, config.tcp.port, config.connect_timeout);\n    } else if(config.type == CONN_UNIX) {\n        c = redisConnectUnixWithTimeout(config.unix_sock.path, config.connect_timeout);\n    } else {\n        assert(NULL);\n    }\n\n    test_cond(c->err == REDIS_ERR_IO && strcmp(c->errstr, \"Invalid timeout specified\") == 0);\n    redisFree(c);\n}\n\n/* Wrap malloc to abort on failure so OOM checks don't 
make the test logic\n * harder to follow. */\nvoid *hi_malloc_safe(size_t size) {\n    void *ptr = hi_malloc(size);\n    if (ptr == NULL) {\n        fprintf(stderr, \"Error:  Out of memory\\n\");\n        exit(-1);\n    }\n\n    return ptr;\n}\n\nstatic void test_throughput(struct config config) {\n    redisContext *c = do_connect(config);\n    redisReply **replies;\n    int i, num;\n    long long t1, t2;\n\n    test(\"Throughput:\\n\");\n    for (i = 0; i < 500; i++)\n        freeReplyObject(redisCommand(c,\"LPUSH mylist foo\"));\n\n    num = 1000;\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        replies[i] = redisCommand(c,\"PING\");\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_STATUS);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx PING: %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        replies[i] = redisCommand(c,\"LRANGE mylist 0 499\");\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_ARRAY);\n        assert(replies[i] != NULL && replies[i]->elements == 500);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx LRANGE with 500 elements: %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        replies[i] = redisCommand(c, \"INCRBY incrkey %d\", 1000000);\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_INTEGER);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx INCRBY: %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    num = 10000;\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    for (i 
= 0; i < num; i++)\n        redisAppendCommand(c,\"PING\");\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_STATUS);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx PING (pipelined): %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    for (i = 0; i < num; i++)\n        redisAppendCommand(c,\"LRANGE mylist 0 499\");\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_ARRAY);\n        assert(replies[i] != NULL && replies[i]->elements == 500);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx LRANGE with 500 elements (pipelined): %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    replies = hi_malloc_safe(sizeof(redisReply*)*num);\n    for (i = 0; i < num; i++)\n        redisAppendCommand(c,\"INCRBY incrkey %d\", 1000000);\n    t1 = usec();\n    for (i = 0; i < num; i++) {\n        assert(redisGetReply(c, (void*)&replies[i]) == REDIS_OK);\n        assert(replies[i] != NULL && replies[i]->type == REDIS_REPLY_INTEGER);\n    }\n    t2 = usec();\n    for (i = 0; i < num; i++) freeReplyObject(replies[i]);\n    hi_free(replies);\n    printf(\"\\t(%dx INCRBY (pipelined): %.3fs)\\n\", num, (t2-t1)/1000000.0);\n\n    disconnect(c, 0);\n}\n\n// static long __test_callback_flags = 0;\n// static void __test_callback(redisContext *c, void *privdata) {\n//     ((void)c);\n//     /* Shift to detect execution order */\n//     __test_callback_flags <<= 8;\n//     __test_callback_flags |= (long)privdata;\n// }\n//\n// static void __test_reply_callback(redisContext *c, redisReply *reply, void *privdata) {\n//     
((void)c);\n//     /* Shift to detect execution order */\n//     __test_callback_flags <<= 8;\n//     __test_callback_flags |= (long)privdata;\n//     if (reply) freeReplyObject(reply);\n// }\n//\n// static redisContext *__connect_nonblock() {\n//     /* Reset callback flags */\n//     __test_callback_flags = 0;\n//     return redisConnectNonBlock(\"127.0.0.1\", port, NULL);\n// }\n//\n// static void test_nonblocking_connection() {\n//     redisContext *c;\n//     int wdone = 0;\n//\n//     test(\"Calls command callback when command is issued: \");\n//     c = __connect_nonblock();\n//     redisSetCommandCallback(c,__test_callback,(void*)1);\n//     redisCommand(c,\"PING\");\n//     test_cond(__test_callback_flags == 1);\n//     redisFree(c);\n//\n//     test(\"Calls disconnect callback on redisDisconnect: \");\n//     c = __connect_nonblock();\n//     redisSetDisconnectCallback(c,__test_callback,(void*)2);\n//     redisDisconnect(c);\n//     test_cond(__test_callback_flags == 2);\n//     redisFree(c);\n//\n//     test(\"Calls disconnect callback and free callback on redisFree: \");\n//     c = __connect_nonblock();\n//     redisSetDisconnectCallback(c,__test_callback,(void*)2);\n//     redisSetFreeCallback(c,__test_callback,(void*)4);\n//     redisFree(c);\n//     test_cond(__test_callback_flags == ((2 << 8) | 4));\n//\n//     test(\"redisBufferWrite against empty write buffer: \");\n//     c = __connect_nonblock();\n//     test_cond(redisBufferWrite(c,&wdone) == REDIS_OK && wdone == 1);\n//     redisFree(c);\n//\n//     test(\"redisBufferWrite against not yet connected fd: \");\n//     c = __connect_nonblock();\n//     redisCommand(c,\"PING\");\n//     test_cond(redisBufferWrite(c,NULL) == REDIS_ERR &&\n//               strncmp(c->error,\"write:\",6) == 0);\n//     redisFree(c);\n//\n//     test(\"redisBufferWrite against closed fd: \");\n//     c = __connect_nonblock();\n//     redisCommand(c,\"PING\");\n//     redisDisconnect(c);\n//     
test_cond(redisBufferWrite(c,NULL) == REDIS_ERR &&\n//               strncmp(c->error,\"write:\",6) == 0);\n//     redisFree(c);\n//\n//     test(\"Process callbacks in the right sequence: \");\n//     c = __connect_nonblock();\n//     redisCommandWithCallback(c,__test_reply_callback,(void*)1,\"PING\");\n//     redisCommandWithCallback(c,__test_reply_callback,(void*)2,\"PING\");\n//     redisCommandWithCallback(c,__test_reply_callback,(void*)3,\"PING\");\n//\n//     /* Write output buffer */\n//     wdone = 0;\n//     while(!wdone) {\n//         usleep(500);\n//         redisBufferWrite(c,&wdone);\n//     }\n//\n//     /* Read until at least one callback is executed (the 3 replies will\n//      * arrive in a single packet, causing all callbacks to be executed in\n//      * a single pass). */\n//     while(__test_callback_flags == 0) {\n//         assert(redisBufferRead(c) == REDIS_OK);\n//         redisProcessCallbacks(c);\n//     }\n//     test_cond(__test_callback_flags == 0x010203);\n//     redisFree(c);\n//\n//     test(\"redisDisconnect executes pending callbacks with NULL reply: \");\n//     c = __connect_nonblock();\n//     redisSetDisconnectCallback(c,__test_callback,(void*)1);\n//     redisCommandWithCallback(c,__test_reply_callback,(void*)2,\"PING\");\n//     redisDisconnect(c);\n//     test_cond(__test_callback_flags == 0x0201);\n//     redisFree(c);\n// }\n\n#ifdef HIREDIS_TEST_ASYNC\n\n#pragma GCC diagnostic ignored \"-Woverlength-strings\"   /* required on gcc 4.8.x due to assert statements */\n\nstruct event_base *base;\n\ntypedef struct TestState {\n    redisOptions *options;\n    int           checkpoint;\n    int           resp3;\n    int           disconnect;\n} TestState;\n\n/* Helper to disconnect and stop event loop */\nvoid async_disconnect(redisAsyncContext *ac) {\n    redisAsyncDisconnect(ac);\n    event_base_loopbreak(base);\n}\n\n/* Testcase timeout, will trigger a failure */\nvoid timeout_cb(int fd, short event, void *arg) {\n    (void) 
fd; (void) event; (void) arg;\n    printf(\"Timeout in async testing!\\n\");\n    exit(1);\n}\n\n/* Unexpected call, will trigger a failure */\nvoid unexpected_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    (void) ac; (void) r;\n    printf(\"Unexpected call: %s\\n\",(char*)privdata);\n    exit(1);\n}\n\n/* Helper function to publish a message via own client. */\nvoid publish_msg(redisOptions *options, const char* channel, const char* msg) {\n    redisContext *c = redisConnectWithOptions(options);\n    assert(c != NULL);\n    redisReply *reply = redisCommand(c,\"PUBLISH %s %s\",channel,msg);\n    assert(reply->type == REDIS_REPLY_INTEGER && reply->integer == 1);\n    freeReplyObject(reply);\n    disconnect(c, 0);\n}\n\n/* Expect a reply of type INTEGER */\nvoid integer_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n    assert(reply != NULL && reply->type == REDIS_REPLY_INTEGER);\n    state->checkpoint++;\n    if (state->disconnect) async_disconnect(ac);\n}\n\n/* Subscribe callback for test_pubsub_handling and test_pubsub_handling_resp3:\n * - a published message triggers an unsubscribe\n * - a command is sent before the unsubscribe response is received. */\nvoid subscribe_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n\n    assert(reply != NULL &&\n           reply->type == (state->resp3 ? 
REDIS_REPLY_PUSH : REDIS_REPLY_ARRAY) &&\n           reply->elements == 3);\n\n    if (strcmp(reply->element[0]->str,\"subscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"mychannel\") == 0 &&\n               reply->element[2]->str == NULL);\n        publish_msg(state->options,\"mychannel\",\"Hello!\");\n    } else if (strcmp(reply->element[0]->str,\"message\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"mychannel\") == 0 &&\n               strcmp(reply->element[2]->str,\"Hello!\") == 0);\n        state->checkpoint++;\n\n        /* Unsubscribe after receiving the published message. Send unsubscribe\n         * which should call the callback registered during subscribe */\n        redisAsyncCommand(ac,unexpected_cb,\n                          (void*)\"unsubscribe should call subscribe_cb()\",\n                          \"unsubscribe\");\n        /* Send a regular command after unsubscribing, then disconnect */\n        state->disconnect = 1;\n        redisAsyncCommand(ac,integer_cb,state,\"LPUSH mylist foo\");\n\n    } else if (strcmp(reply->element[0]->str,\"unsubscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"mychannel\") == 0 &&\n               reply->element[2]->str == NULL);\n    } else {\n        printf(\"Unexpected pubsub command: %s\\n\", reply->element[0]->str);\n        exit(1);\n    }\n}\n\n/* Expect a reply of type ARRAY */\nvoid array_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n    assert(reply != NULL && reply->type == REDIS_REPLY_ARRAY);\n    state->checkpoint++;\n    if (state->disconnect) async_disconnect(ac);\n}\n\n/* Expect a NULL reply */\nvoid null_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    (void) ac;\n    assert(r == NULL);\n    TestState *state = privdata;\n    state->checkpoint++;\n}\n\nstatic void test_pubsub_handling(struct config config) {\n    test(\"Subscribe, handle published message and unsubscribe: 
\");\n    /* Setup event dispatcher with a testcase timeout */\n    base = event_base_new();\n    struct event *timeout = evtimer_new(base, timeout_cb, NULL);\n    assert(timeout != NULL);\n\n    evtimer_assign(timeout,base,timeout_cb,NULL);\n    struct timeval timeout_tv = {.tv_sec = 10};\n    evtimer_add(timeout, &timeout_tv);\n\n    /* Connect */\n    redisOptions options = get_redis_tcp_options(config);\n    redisAsyncContext *ac = redisAsyncConnectWithOptions(&options);\n    assert(ac != NULL && ac->err == 0);\n    redisLibeventAttach(ac,base);\n\n    /* Start subscribe */\n    TestState state = {.options = &options};\n    redisAsyncCommand(ac,subscribe_cb,&state,\"subscribe mychannel\");\n\n    /* Make sure non-subscribe commands are handled */\n    redisAsyncCommand(ac,array_cb,&state,\"PING\");\n\n    /* Start event dispatching loop */\n    test_cond(event_base_dispatch(base) == 0);\n    event_free(timeout);\n    event_base_free(base);\n\n    /* Verify test checkpoints */\n    assert(state.checkpoint == 3);\n}\n\n/* Unexpected push message, will trigger a failure */\nvoid unexpected_push_cb(redisAsyncContext *ac, void *r) {\n    (void) ac; (void) r;\n    printf(\"Unexpected call to the PUSH callback!\\n\");\n    exit(1);\n}\n\nstatic void test_pubsub_handling_resp3(struct config config) {\n    test(\"Subscribe, handle published message and unsubscribe using RESP3: \");\n    /* Setup event dispatcher with a testcase timeout */\n    base = event_base_new();\n    struct event *timeout = evtimer_new(base, timeout_cb, NULL);\n    assert(timeout != NULL);\n\n    evtimer_assign(timeout,base,timeout_cb,NULL);\n    struct timeval timeout_tv = {.tv_sec = 10};\n    evtimer_add(timeout, &timeout_tv);\n\n    /* Connect */\n    redisOptions options = get_redis_tcp_options(config);\n    redisAsyncContext *ac = redisAsyncConnectWithOptions(&options);\n    assert(ac != NULL && ac->err == 0);\n    redisLibeventAttach(ac,base);\n\n    /* Not expecting any push messages in 
this test */\n    redisAsyncSetPushCallback(ac, unexpected_push_cb);\n\n    /* Switch protocol */\n    redisAsyncCommand(ac,NULL,NULL,\"HELLO 3\");\n\n    /* Start subscribe */\n    TestState state = {.options = &options, .resp3 = 1};\n    redisAsyncCommand(ac,subscribe_cb,&state,\"subscribe mychannel\");\n\n    /* Make sure non-subscribe commands are handled in RESP3 */\n    redisAsyncCommand(ac,integer_cb,&state,\"LPUSH mylist foo\");\n    redisAsyncCommand(ac,integer_cb,&state,\"LPUSH mylist foo\");\n    redisAsyncCommand(ac,integer_cb,&state,\"LPUSH mylist foo\");\n    /* Handle an array with 3 elements as a non-subscribe command */\n    redisAsyncCommand(ac,array_cb,&state,\"LRANGE mylist 0 2\");\n\n    /* Start event dispatching loop */\n    test_cond(event_base_dispatch(base) == 0);\n    event_free(timeout);\n    event_base_free(base);\n\n    /* Verify test checkpoints */\n    assert(state.checkpoint == 6);\n}\n\n/* Subscribe callback for test_command_timeout_during_pubsub:\n * - a subscribe response triggers a published message\n * - the published message triggers a command that times out\n * - the command timeout triggers a disconnect */\nvoid subscribe_with_timeout_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n\n    /* The non-clean disconnect should trigger the\n     * subscription callback with a NULL reply. */\n    if (reply == NULL) {\n        state->checkpoint++;\n        event_base_loopbreak(base);\n        return;\n    }\n\n    assert(reply->type == (state->resp3 ? 
REDIS_REPLY_PUSH : REDIS_REPLY_ARRAY) &&\n           reply->elements == 3);\n\n    if (strcmp(reply->element[0]->str,\"subscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"mychannel\") == 0 &&\n               reply->element[2]->str == NULL);\n        publish_msg(state->options,\"mychannel\",\"Hello!\");\n        state->checkpoint++;\n    } else if (strcmp(reply->element[0]->str,\"message\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"mychannel\") == 0 &&\n               strcmp(reply->element[2]->str,\"Hello!\") == 0);\n        state->checkpoint++;\n\n        /* Send a command that will trigger a timeout */\n        redisAsyncCommand(ac,null_cb,state,\"DEBUG SLEEP 3\");\n        redisAsyncCommand(ac,null_cb,state,\"LPUSH mylist foo\");\n    } else {\n        printf(\"Unexpected pubsub command: %s\\n\", reply->element[0]->str);\n        exit(1);\n    }\n}\n\nstatic void test_command_timeout_during_pubsub(struct config config) {\n    test(\"Command timeout during Pub/Sub: \");\n    /* Setup event dispatcher with a testcase timeout */\n    base = event_base_new();\n    struct event *timeout = evtimer_new(base,timeout_cb,NULL);\n    assert(timeout != NULL);\n\n    evtimer_assign(timeout,base,timeout_cb,NULL);\n    struct timeval timeout_tv = {.tv_sec = 10};\n    evtimer_add(timeout,&timeout_tv);\n\n    /* Connect */\n    redisOptions options = get_redis_tcp_options(config);\n    redisAsyncContext *ac = redisAsyncConnectWithOptions(&options);\n    assert(ac != NULL && ac->err == 0);\n    redisLibeventAttach(ac,base);\n\n    /* Configure a command timeout */\n    struct timeval command_timeout = {.tv_sec = 2};\n    redisAsyncSetTimeout(ac,command_timeout);\n\n    /* Not expecting any push messages in this test */\n    redisAsyncSetPushCallback(ac,unexpected_push_cb);\n\n    /* Switch protocol */\n    redisAsyncCommand(ac,NULL,NULL,\"HELLO 3\");\n\n    /* Start subscribe */\n    TestState state = {.options = &options, .resp3 = 1};\n    
redisAsyncCommand(ac,subscribe_with_timeout_cb,&state,\"subscribe mychannel\");\n\n    /* Start event dispatching loop */\n    assert(event_base_dispatch(base) == 0);\n    event_free(timeout);\n    event_base_free(base);\n\n    /* Verify test checkpoints */\n    test_cond(state.checkpoint == 5);\n}\n\n/* Subscribe callback for test_pubsub_multiple_channels */\nvoid subscribe_channel_a_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n\n    assert(reply != NULL && reply->type == REDIS_REPLY_ARRAY &&\n           reply->elements == 3);\n\n    if (strcmp(reply->element[0]->str,\"subscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"A\") == 0);\n        publish_msg(state->options,\"A\",\"Hello!\");\n        state->checkpoint++;\n    } else if (strcmp(reply->element[0]->str,\"message\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"A\") == 0 &&\n               strcmp(reply->element[2]->str,\"Hello!\") == 0);\n        state->checkpoint++;\n\n        /* Unsubscribe from channels, including channels X and Z which we are not subscribed to */\n        redisAsyncCommand(ac,unexpected_cb,\n                          (void*)\"unsubscribe should not call unexpected_cb()\",\n                          \"unsubscribe B X A A Z\");\n        /* Unsubscribe from patterns, none of which we are subscribed to */\n        redisAsyncCommand(ac,unexpected_cb,\n                          (void*)\"punsubscribe should not call unexpected_cb()\",\n                          \"punsubscribe\");\n        /* Send a regular command after unsubscribing, then disconnect */\n        state->disconnect = 1;\n        redisAsyncCommand(ac,integer_cb,state,\"LPUSH mylist foo\");\n    } else if (strcmp(reply->element[0]->str,\"unsubscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"A\") == 0);\n        state->checkpoint++;\n    } else {\n        printf(\"Unexpected pubsub command: %s\\n\", reply->element[0]->str);\n        
exit(1);\n    }\n}\n\n/* Subscribe callback for test_pubsub_multiple_channels */\nvoid subscribe_channel_b_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n    (void)ac;\n\n    assert(reply != NULL && reply->type == REDIS_REPLY_ARRAY &&\n           reply->elements == 3);\n\n    if (strcmp(reply->element[0]->str,\"subscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"B\") == 0);\n        state->checkpoint++;\n    } else if (strcmp(reply->element[0]->str,\"unsubscribe\") == 0) {\n        assert(strcmp(reply->element[1]->str,\"B\") == 0);\n        state->checkpoint++;\n    } else {\n        printf(\"Unexpected pubsub command: %s\\n\", reply->element[0]->str);\n        exit(1);\n    }\n}\n\n/* Test handling of multiple channels\n * - subscribe to channel A and B\n * - a published message on A triggers an unsubscribe of channel B, X, A and Z\n *   where channel X and Z are not subscribed to.\n * - the published message also triggers an unsubscribe to patterns. 
Since no\n *   patterns are subscribed to, the replied pattern element type is NIL.\n * - a command sent after unsubscribe triggers a disconnect */\nstatic void test_pubsub_multiple_channels(struct config config) {\n    test(\"Subscribe to multiple channels: \");\n    /* Setup event dispatcher with a testcase timeout */\n    base = event_base_new();\n    struct event *timeout = evtimer_new(base,timeout_cb,NULL);\n    assert(timeout != NULL);\n\n    evtimer_assign(timeout,base,timeout_cb,NULL);\n    struct timeval timeout_tv = {.tv_sec = 10};\n    evtimer_add(timeout,&timeout_tv);\n\n    /* Connect */\n    redisOptions options = get_redis_tcp_options(config);\n    redisAsyncContext *ac = redisAsyncConnectWithOptions(&options);\n    assert(ac != NULL && ac->err == 0);\n    redisLibeventAttach(ac,base);\n\n    /* Not expecting any push messages in this test */\n    redisAsyncSetPushCallback(ac,unexpected_push_cb);\n\n    /* Start subscribing to two channels */\n    TestState state = {.options = &options};\n    redisAsyncCommand(ac,subscribe_channel_a_cb,&state,\"subscribe A\");\n    redisAsyncCommand(ac,subscribe_channel_b_cb,&state,\"subscribe B\");\n\n    /* Start event dispatching loop */\n    assert(event_base_dispatch(base) == 0);\n    event_free(timeout);\n    event_base_free(base);\n\n    /* Verify test checkpoints */\n    test_cond(state.checkpoint == 6);\n}\n\n/* Command callback for test_monitor() */\nvoid monitor_cb(redisAsyncContext *ac, void *r, void *privdata) {\n    redisReply *reply = r;\n    TestState *state = privdata;\n\n    /* NULL reply is received when QUIT triggers a disconnect. 
*/\n    if (reply == NULL) {\n        event_base_loopbreak(base);\n        return;\n    }\n\n    assert(reply != NULL && reply->type == REDIS_REPLY_STATUS);\n    state->checkpoint++;\n\n    if (state->checkpoint == 1) {\n        /* Response from MONITOR */\n        redisContext *c = redisConnectWithOptions(state->options);\n        assert(c != NULL);\n        redisReply *reply = redisCommand(c,\"SET first 1\");\n        assert(reply->type == REDIS_REPLY_STATUS);\n        freeReplyObject(reply);\n        redisFree(c);\n    } else if (state->checkpoint == 2) {\n        /* Response for monitored command 'SET first 1' */\n        assert(strstr(reply->str,\"first\") != NULL);\n        redisContext *c = redisConnectWithOptions(state->options);\n        assert(c != NULL);\n        redisReply *reply = redisCommand(c,\"SET second 2\");\n        assert(reply->type == REDIS_REPLY_STATUS);\n        freeReplyObject(reply);\n        redisFree(c);\n    } else if (state->checkpoint == 3) {\n        /* Response for monitored command 'SET second 2' */\n        assert(strstr(reply->str,\"second\") != NULL);\n        /* Send QUIT to disconnect */\n        redisAsyncCommand(ac,NULL,NULL,\"QUIT\");\n    }\n}\n\n/* Test handling of the monitor command\n * - sends MONITOR to enable monitoring.\n * - sends SET commands via separate clients to be monitored.\n * - sends QUIT to stop monitoring and disconnect. 
*/\nstatic void test_monitor(struct config config) {\n    test(\"Enable monitoring: \");\n    /* Setup event dispatcher with a testcase timeout */\n    base = event_base_new();\n    struct event *timeout = evtimer_new(base, timeout_cb, NULL);\n    assert(timeout != NULL);\n\n    evtimer_assign(timeout,base,timeout_cb,NULL);\n    struct timeval timeout_tv = {.tv_sec = 10};\n    evtimer_add(timeout, &timeout_tv);\n\n    /* Connect */\n    redisOptions options = get_redis_tcp_options(config);\n    redisAsyncContext *ac = redisAsyncConnectWithOptions(&options);\n    assert(ac != NULL && ac->err == 0);\n    redisLibeventAttach(ac,base);\n\n    /* Not expecting any push messages in this test */\n    redisAsyncSetPushCallback(ac,unexpected_push_cb);\n\n    /* Start monitor */\n    TestState state = {.options = &options};\n    redisAsyncCommand(ac,monitor_cb,&state,\"monitor\");\n\n    /* Start event dispatching loop */\n    test_cond(event_base_dispatch(base) == 0);\n    event_free(timeout);\n    event_base_free(base);\n\n    /* Verify test checkpoints */\n    assert(state.checkpoint == 3);\n}\n#endif /* HIREDIS_TEST_ASYNC */\n\n/* Tests for the async API using the polling adapter; requires no extra libraries. */\n\n/* Enum for the test cases; the callbacks have different logic based on them */\ntypedef enum astest_no\n{\n    ASTEST_CONNECT=0,\n    ASTEST_CONN_TIMEOUT,\n    ASTEST_PINGPONG,\n    ASTEST_PINGPONG_TIMEOUT,\n    ASTEST_ISSUE_931,\n    ASTEST_ISSUE_931_PING\n} astest_no;\n\n/* a static context for the async tests */\nstruct _astest {\n    redisAsyncContext *ac;\n    astest_no testno;\n    int counter;\n    int connects;\n    int connect_status;\n    int disconnects;\n    int pongs;\n    int disconnect_status;\n    int connected;\n    int err;\n    char errstr[256];\n};\nstatic struct _astest astest;\n\n/* async callbacks */\nstatic void asCleanup(void* data)\n{\n    struct _astest *t = (struct _astest *)data;\n    t->ac = NULL;\n}\n\nstatic void commandCallback(struct 
redisAsyncContext *ac, void* _reply, void* _privdata);\n\nstatic void connectCallback(redisAsyncContext *c, int status) {\n    struct _astest *t = (struct _astest *)c->data;\n    assert(t == &astest);\n    assert(t->connects == 0);\n    t->err = c->err;\n    strcpy(t->errstr, c->errstr);\n    t->connects++;\n    t->connect_status = status;\n    t->connected = status == REDIS_OK ? 1 : -1;\n\n    if (t->testno == ASTEST_ISSUE_931) {\n        /* disconnect again */\n        redisAsyncDisconnect(c);\n    }\n    else if (t->testno == ASTEST_ISSUE_931_PING)\n    {\n        redisAsyncCommand(c, commandCallback, NULL, \"PING\");\n    }\n}\nstatic void disconnectCallback(const redisAsyncContext *c, int status) {\n    assert(c->data == (void*)&astest);\n    assert(astest.disconnects == 0);\n    astest.err = c->err;\n    strcpy(astest.errstr, c->errstr);\n    astest.disconnects++;\n    astest.disconnect_status = status;\n    astest.connected = 0;\n}\n\nstatic void commandCallback(struct redisAsyncContext *ac, void* _reply, void* _privdata)\n{\n    redisReply *reply = (redisReply*)_reply;\n    struct _astest *t = (struct _astest *)ac->data;\n    assert(t == &astest);\n    (void)_privdata;\n    t->err = ac->err;\n    strcpy(t->errstr, ac->errstr);\n    t->counter++;\n    if (t->testno == ASTEST_PINGPONG ||t->testno == ASTEST_ISSUE_931_PING)\n    {\n        assert(reply != NULL && reply->type == REDIS_REPLY_STATUS && strcmp(reply->str, \"PONG\") == 0);\n        t->pongs++;\n        redisAsyncFree(ac);\n    }\n    if (t->testno == ASTEST_PINGPONG_TIMEOUT)\n    {\n        /* two ping pongs */\n        assert(reply != NULL && reply->type == REDIS_REPLY_STATUS && strcmp(reply->str, \"PONG\") == 0);\n        t->pongs++;\n        if (t->counter == 1) {\n            int status = redisAsyncCommand(ac, commandCallback, NULL, \"PING\");\n            assert(status == REDIS_OK);\n        } else {\n            redisAsyncFree(ac);\n        }\n    }\n}\n\nstatic redisAsyncContext 
*do_aconnect(struct config config, astest_no testno)\n{\n    redisOptions options = {0};\n    memset(&astest, 0, sizeof(astest));\n\n    astest.testno = testno;\n    astest.connect_status = astest.disconnect_status = -2;\n\n    if (config.type == CONN_TCP) {\n        options.type = REDIS_CONN_TCP;\n        options.connect_timeout = &config.connect_timeout;\n        REDIS_OPTIONS_SET_TCP(&options, config.tcp.host, config.tcp.port);\n    } else if (config.type == CONN_SSL) {\n        options.type = REDIS_CONN_TCP;\n        options.connect_timeout = &config.connect_timeout;\n        REDIS_OPTIONS_SET_TCP(&options, config.ssl.host, config.ssl.port);\n    } else if (config.type == CONN_UNIX) {\n        options.type = REDIS_CONN_UNIX;\n        options.endpoint.unix_socket = config.unix_sock.path;\n    } else if (config.type == CONN_FD) {\n        options.type = REDIS_CONN_USERFD;\n        /* Create a dummy connection just to get an fd to inherit */\n        redisContext *dummy_ctx = redisConnectUnix(config.unix_sock.path);\n        if (dummy_ctx) {\n            redisFD fd = disconnect(dummy_ctx, 1);\n            printf(\"Connecting to inherited fd %d\\n\", (int)fd);\n            options.endpoint.fd = fd;\n        }\n    }\n    redisAsyncContext *c = redisAsyncConnectWithOptions(&options);\n    assert(c);\n    astest.ac = c;\n    c->data = &astest;\n    c->dataCleanup = asCleanup;\n    redisPollAttach(c);\n    redisAsyncSetConnectCallbackNC(c, connectCallback);\n    redisAsyncSetDisconnectCallback(c, disconnectCallback);\n    return c;\n}\n\nstatic void as_printerr(void) {\n    printf(\"Async err %d : %s\\n\", astest.err, astest.errstr);\n}\n\n/* No trailing semicolon: ASASSERT(e); must expand to a single statement,\n * so it stays valid inside unbraced if/else bodies. */\n#define ASASSERT(e) do { \\\n    if (!(e)) \\\n        as_printerr(); \\\n    assert(e); \\\n} while (0)\n\nstatic void test_async_polling(struct config config) {\n    int status;\n    redisAsyncContext *c;\n    struct config defaultconfig = config;\n\n    test(\"Async connect: \");\n    c = do_aconnect(config, ASTEST_CONNECT);\n   
 assert(c);\n    while(astest.connected == 0)\n        redisPollTick(c, 0.1);\n    assert(astest.connects == 1);\n    ASASSERT(astest.connect_status == REDIS_OK);\n    assert(astest.disconnects == 0);\n    test_cond(astest.connected == 1);\n\n    test(\"Async free after connect: \");\n    assert(astest.ac != NULL);\n    redisAsyncFree(c);\n    assert(astest.disconnects == 1);\n    assert(astest.ac == NULL);\n    test_cond(astest.disconnect_status == REDIS_OK);\n\n    if (config.type == CONN_TCP || config.type == CONN_SSL) {\n        /* timeout can only be simulated with network */\n        test(\"Async connect timeout: \");\n        config.tcp.host = \"192.168.254.254\";  /* blackhole ip */\n        config.connect_timeout.tv_usec = 100000;\n        c = do_aconnect(config, ASTEST_CONN_TIMEOUT);\n        assert(c);\n        assert(c->err == 0);\n        while(astest.connected == 0)\n            redisPollTick(c, 0.1);\n        assert(astest.connected == -1);\n        /*\n         * freeing should not be done, clearing should have happened.\n         *redisAsyncFree(c);\n         */\n        assert(astest.ac == NULL);\n        test_cond(astest.connect_status == REDIS_ERR);\n        config = defaultconfig;\n    }\n\n    /* Test a ping/pong after connection */\n    test(\"Async PING/PONG: \");\n    c = do_aconnect(config, ASTEST_PINGPONG);\n    while(astest.connected == 0)\n        redisPollTick(c, 0.1);\n    status = redisAsyncCommand(c, commandCallback, NULL, \"PING\");\n    assert(status == REDIS_OK);\n    while(astest.ac)\n        redisPollTick(c, 0.1);\n    test_cond(astest.pongs == 1);\n\n    /* Test a ping/pong after connection that didn't time out.\n     * see https://github.com/redis/hiredis/issues/945\n     */\n    if (config.type == CONN_TCP || config.type == CONN_SSL) {\n        test(\"Async PING/PONG after connect timeout: \");\n        config.connect_timeout.tv_usec = 10000; /* 10ms  */\n        c = do_aconnect(config, ASTEST_PINGPONG_TIMEOUT);\n        
while(astest.connected == 0)\n            redisPollTick(c, 0.1);\n        /* sleep 10 ms, allowing old timeout to arrive */\n        millisleep(10);\n        status = redisAsyncCommand(c, commandCallback, NULL, \"PING\");\n        assert(status == REDIS_OK);\n        while(astest.ac)\n            redisPollTick(c, 0.1);\n        test_cond(astest.pongs == 2);\n        config = defaultconfig;\n    }\n\n    /* Test disconnect from an on_connect callback\n     * see https://github.com/redis/hiredis/issues/931\n     */\n    test(\"Disconnect from onConnected callback (Issue #931): \");\n    c = do_aconnect(config, ASTEST_ISSUE_931);\n    while(astest.disconnects == 0)\n        redisPollTick(c, 0.1);\n    assert(astest.connected == 0);\n    assert(astest.connects == 1);\n    test_cond(astest.disconnects == 1);\n\n    /* Test ping/pong from an on_connect callback\n     * see https://github.com/redis/hiredis/issues/931\n     */\n    test(\"Ping/Pong from onConnected callback (Issue #931): \");\n    c = do_aconnect(config, ASTEST_ISSUE_931_PING);\n    /* connect callback issues ping, response callback destroys context */\n    while(astest.ac)\n        redisPollTick(c, 0.1);\n    assert(astest.connected == 0);\n    assert(astest.connects == 1);\n    assert(astest.disconnects == 1);\n    test_cond(astest.pongs == 1);\n}\n/* End of Async polling_adapter driven tests */\n\nint main(int argc, char **argv) {\n    struct config cfg = {\n        .tcp = {\n            .host = \"127.0.0.1\",\n            .port = 6379\n        },\n        .unix_sock = {\n            .path = \"/tmp/redis.sock\"\n        }\n    };\n    int throughput = 1;\n    int test_inherit_fd = 1;\n    int skips_as_fails = 0;\n    int test_unix_socket;\n\n    /* Parse command line options. 
*/\n    argv++; argc--;\n    while (argc) {\n        if (argc >= 2 && !strcmp(argv[0],\"-h\")) {\n            argv++; argc--;\n            cfg.tcp.host = argv[0];\n        } else if (argc >= 2 && !strcmp(argv[0],\"-p\")) {\n            argv++; argc--;\n            cfg.tcp.port = atoi(argv[0]);\n        } else if (argc >= 2 && !strcmp(argv[0],\"-s\")) {\n            argv++; argc--;\n            cfg.unix_sock.path = argv[0];\n        } else if (argc >= 1 && !strcmp(argv[0],\"--skip-throughput\")) {\n            throughput = 0;\n        } else if (argc >= 1 && !strcmp(argv[0],\"--skip-inherit-fd\")) {\n            test_inherit_fd = 0;\n        } else if (argc >= 1 && !strcmp(argv[0],\"--skips-as-fails\")) {\n            skips_as_fails = 1;\n#ifdef HIREDIS_TEST_SSL\n        } else if (argc >= 2 && !strcmp(argv[0],\"--ssl-port\")) {\n            argv++; argc--;\n            cfg.ssl.port = atoi(argv[0]);\n        } else if (argc >= 2 && !strcmp(argv[0],\"--ssl-host\")) {\n            argv++; argc--;\n            cfg.ssl.host = argv[0];\n        } else if (argc >= 2 && !strcmp(argv[0],\"--ssl-ca-cert\")) {\n            argv++; argc--;\n            cfg.ssl.ca_cert  = argv[0];\n        } else if (argc >= 2 && !strcmp(argv[0],\"--ssl-cert\")) {\n            argv++; argc--;\n            cfg.ssl.cert = argv[0];\n        } else if (argc >= 2 && !strcmp(argv[0],\"--ssl-key\")) {\n            argv++; argc--;\n            cfg.ssl.key = argv[0];\n#endif\n        } else {\n            fprintf(stderr, \"Invalid argument: %s\\n\", argv[0]);\n            exit(1);\n        }\n        argv++; argc--;\n    }\n\n#ifndef _WIN32\n    /* Ignore broken pipe signal (for I/O error tests). 
*/\n    signal(SIGPIPE, SIG_IGN);\n\n    test_unix_socket = access(cfg.unix_sock.path, F_OK) == 0;\n\n#else\n    /* Unix sockets don't exist in Windows */\n    test_unix_socket = 0;\n#endif\n\n    test_allocator_injection();\n\n    test_format_commands();\n    test_reply_reader();\n    test_blocking_connection_errors();\n    test_free_null();\n\n    printf(\"\\nTesting against TCP connection (%s:%d):\\n\", cfg.tcp.host, cfg.tcp.port);\n    cfg.type = CONN_TCP;\n    test_blocking_connection(cfg);\n    test_blocking_connection_timeouts(cfg);\n    test_blocking_io_errors(cfg);\n    test_invalid_timeout_errors(cfg);\n    test_append_formatted_commands(cfg);\n    test_tcp_options(cfg);\n    if (throughput) test_throughput(cfg);\n\n    printf(\"\\nTesting against Unix socket connection (%s): \", cfg.unix_sock.path);\n    if (test_unix_socket) {\n        printf(\"\\n\");\n        cfg.type = CONN_UNIX;\n        test_blocking_connection(cfg);\n        test_blocking_connection_timeouts(cfg);\n        test_blocking_io_errors(cfg);\n        test_invalid_timeout_errors(cfg);\n        if (throughput) test_throughput(cfg);\n    } else {\n        test_skipped();\n    }\n\n#ifdef HIREDIS_TEST_SSL\n    if (cfg.ssl.port && cfg.ssl.host) {\n\n        redisInitOpenSSL();\n        _ssl_ctx = redisCreateSSLContext(cfg.ssl.ca_cert, NULL, cfg.ssl.cert, cfg.ssl.key, NULL, NULL);\n        assert(_ssl_ctx != NULL);\n\n        printf(\"\\nTesting against SSL connection (%s:%d):\\n\", cfg.ssl.host, cfg.ssl.port);\n        cfg.type = CONN_SSL;\n\n        test_blocking_connection(cfg);\n        test_blocking_connection_timeouts(cfg);\n        test_blocking_io_errors(cfg);\n        test_invalid_timeout_errors(cfg);\n        test_append_formatted_commands(cfg);\n        if (throughput) test_throughput(cfg);\n\n        redisFreeSSLContext(_ssl_ctx);\n        _ssl_ctx = NULL;\n    }\n#endif\n\n#ifdef HIREDIS_TEST_ASYNC\n    cfg.type = CONN_TCP;\n    printf(\"\\nTesting asynchronous API against TCP 
connection (%s:%d):\\n\", cfg.tcp.host, cfg.tcp.port);\n    cfg.type = CONN_TCP;\n\n    int major;\n    redisContext *c = do_connect(cfg);\n    get_redis_version(c, &major, NULL);\n    disconnect(c, 0);\n\n    test_pubsub_handling(cfg);\n    test_pubsub_multiple_channels(cfg);\n    test_monitor(cfg);\n    if (major >= 6) {\n        test_pubsub_handling_resp3(cfg);\n        test_command_timeout_during_pubsub(cfg);\n    }\n#endif /* HIREDIS_TEST_ASYNC */\n\n    cfg.type = CONN_TCP;\n    printf(\"\\nTesting asynchronous API using polling_adapter TCP (%s:%d):\\n\", cfg.tcp.host, cfg.tcp.port);\n    test_async_polling(cfg);\n    if (test_unix_socket) {\n        cfg.type = CONN_UNIX;\n        printf(\"\\nTesting asynchronous API using polling_adapter UNIX (%s):\\n\", cfg.unix_sock.path);\n        test_async_polling(cfg);\n    }\n\n    if (test_inherit_fd) {\n        printf(\"\\nTesting against inherited fd (%s): \", cfg.unix_sock.path);\n        if (test_unix_socket) {\n            printf(\"\\n\");\n            cfg.type = CONN_FD;\n            test_blocking_connection(cfg);\n        } else {\n            test_skipped();\n        }\n    }\n\n    if (fails || (skips_as_fails && skips)) {\n        printf(\"*** %d TESTS FAILED ***\\n\", fails);\n        if (skips) {\n            printf(\"*** %d TESTS SKIPPED ***\\n\", skips);\n        }\n        return 1;\n    }\n\n    printf(\"ALL TESTS PASSED (%d skipped)\\n\", skips);\n    return 0;\n}\n"
  },
  {
    "path": "deps/hiredis/test.sh",
    "content": "#!/bin/sh -ue\n\nREDIS_SERVER=${REDIS_SERVER:-redis-server}\nREDIS_PORT=${REDIS_PORT:-56379}\nREDIS_SSL_PORT=${REDIS_SSL_PORT:-56443}\nTEST_SSL=${TEST_SSL:-0}\nSKIPS_AS_FAILS=${SKIPS_AS_FAILS:-0}\nENABLE_DEBUG_CMD=\nSSL_TEST_ARGS=\nSKIPS_ARG=${SKIPS_ARG:-}\nREDIS_DOCKER=${REDIS_DOCKER:-}\n\n# We need to enable the DEBUG command for redis-server >= 7.0.0\nREDIS_MAJOR_VERSION=\"$(redis-server --version|awk -F'[^0-9]+' '{ print $2 }')\"\nif [ \"$REDIS_MAJOR_VERSION\" -gt \"6\" ]; then\n    ENABLE_DEBUG_CMD=\"enable-debug-command local\"\nfi\n\ntmpdir=$(mktemp -d)\nPID_FILE=${tmpdir}/hiredis-test-redis.pid\nSOCK_FILE=${tmpdir}/hiredis-test-redis.sock\n\nif [ \"$TEST_SSL\" = \"1\" ]; then\n    SSL_CA_CERT=${tmpdir}/ca.crt\n    SSL_CA_KEY=${tmpdir}/ca.key\n    SSL_CERT=${tmpdir}/redis.crt\n    SSL_KEY=${tmpdir}/redis.key\n\n    openssl genrsa -out ${tmpdir}/ca.key 4096\n    openssl req \\\n        -x509 -new -nodes -sha256 \\\n        -key ${SSL_CA_KEY} \\\n        -days 3650 \\\n        -subj '/CN=Hiredis Test CA' \\\n        -out ${SSL_CA_CERT}\n    openssl genrsa -out ${SSL_KEY} 2048\n    openssl req \\\n        -new -sha256 \\\n        -key ${SSL_KEY} \\\n        -subj '/CN=Hiredis Test Cert' | \\\n        openssl x509 \\\n            -req -sha256 \\\n            -CA ${SSL_CA_CERT} \\\n            -CAkey ${SSL_CA_KEY} \\\n            -CAserial ${tmpdir}/ca.txt \\\n            -CAcreateserial \\\n            -days 365 \\\n            -out ${SSL_CERT}\n\n    SSL_TEST_ARGS=\"--ssl-host 127.0.0.1 --ssl-port ${REDIS_SSL_PORT} --ssl-ca-cert ${SSL_CA_CERT} --ssl-cert ${SSL_CERT} --ssl-key ${SSL_KEY}\"\nfi\n\ncleanup() {\n  if [ -n \"${REDIS_DOCKER}\" ] ; then\n    docker kill redis-test-server\n  else\n    set +e\n    kill $(cat ${PID_FILE})\n  fi\n  rm -rf ${tmpdir}\n}\ntrap cleanup INT TERM EXIT\n\n# base config\ncat > ${tmpdir}/redis.conf <<EOF\npidfile ${PID_FILE}\nport ${REDIS_PORT}\nunixsocket ${SOCK_FILE}\nunixsocketperm 777\nEOF\n\n# if not running 
in docker add these:\nif [ ! -n \"${REDIS_DOCKER}\" ]; then\ncat >> ${tmpdir}/redis.conf <<EOF\ndaemonize yes\n${ENABLE_DEBUG_CMD}\nbind 127.0.0.1\nEOF\nfi\n\n# if doing ssl, add these\nif [ \"$TEST_SSL\" = \"1\" ]; then\n    cat >> ${tmpdir}/redis.conf <<EOF\ntls-port ${REDIS_SSL_PORT}\ntls-ca-cert-file ${SSL_CA_CERT}\ntls-cert-file ${SSL_CERT}\ntls-key-file ${SSL_KEY}\nEOF\nfi\n\necho ${tmpdir}\ncat ${tmpdir}/redis.conf\nif [ -n \"${REDIS_DOCKER}\" ] ; then\n    chmod a+wx ${tmpdir}\n    chmod a+r ${tmpdir}/*\n    docker run -d --rm --name redis-test-server \\\n        -p ${REDIS_PORT}:${REDIS_PORT} \\\n        -p ${REDIS_SSL_PORT}:${REDIS_SSL_PORT} \\\n        -v ${tmpdir}:${tmpdir} \\\n        ${REDIS_DOCKER} \\\n        redis-server ${tmpdir}/redis.conf\nelse\n    ${REDIS_SERVER} ${tmpdir}/redis.conf\nfi\n# Wait until we detect the unix socket\necho waiting for server\nwhile [ ! -S \"${SOCK_FILE}\" ]; do sleep 1; done\n\n# Treat skips as failures if directed\n[ \"$SKIPS_AS_FAILS\" = 1 ] && SKIPS_ARG=\"${SKIPS_ARG} --skips-as-fails\"\n\n${TEST_PREFIX:-} ./hiredis-test -h 127.0.0.1 -p ${REDIS_PORT} -s ${SOCK_FILE} ${SSL_TEST_ARGS} ${SKIPS_ARG}\n"
  },
  {
    "path": "deps/hiredis/win32.h",
    "content": "#ifndef _WIN32_HELPER_INCLUDE\n#define _WIN32_HELPER_INCLUDE\n#ifdef _MSC_VER\n\n#include <winsock2.h> /* for struct timeval */\n\n#ifndef inline\n#define inline __inline\n#endif\n\n#ifndef strcasecmp\n#define strcasecmp stricmp\n#endif\n\n#ifndef strncasecmp\n#define strncasecmp strnicmp\n#endif\n\n#ifndef va_copy\n#define va_copy(d,s) ((d) = (s))\n#endif\n\n#ifndef snprintf\n#define snprintf c99_snprintf\n\n__inline int c99_vsnprintf(char* str, size_t size, const char* format, va_list ap)\n{\n    int count = -1;\n\n    if (size != 0)\n        count = _vsnprintf_s(str, size, _TRUNCATE, format, ap);\n    if (count == -1)\n        count = _vscprintf(format, ap);\n\n    return count;\n}\n\n__inline int c99_snprintf(char* str, size_t size, const char* format, ...)\n{\n    int count;\n    va_list ap;\n\n    va_start(ap, format);\n    count = c99_vsnprintf(str, size, format, ap);\n    va_end(ap);\n\n    return count;\n}\n#endif\n#endif /* _MSC_VER */\n\n#ifdef _WIN32\n#define strerror_r(errno,buf,len) strerror_s(buf,len,errno)\n#endif /* _WIN32 */\n\n#endif /* _WIN32_HELPER_INCLUDE */\n"
  },
  {
    "path": "deps/jemalloc/.appveyor.yml",
    "content": "version: '{build}'\n\nenvironment:\n  matrix:\n  - MSYSTEM: MINGW64\n    CPU: x86_64\n    MSVC: amd64\n    CONFIG_FLAGS: --enable-debug\n  - MSYSTEM: MINGW64\n    CPU: x86_64\n    CONFIG_FLAGS: --enable-debug\n  - MSYSTEM: MINGW32\n    CPU: i686\n    MSVC: x86\n    CONFIG_FLAGS: --enable-debug\n  - MSYSTEM: MINGW32\n    CPU: i686\n    CONFIG_FLAGS: --enable-debug\n  - MSYSTEM: MINGW64\n    CPU: x86_64\n    MSVC: amd64\n  - MSYSTEM: MINGW64\n    CPU: x86_64\n  - MSYSTEM: MINGW32\n    CPU: i686\n    MSVC: x86\n  - MSYSTEM: MINGW32\n    CPU: i686\n\ninstall:\n  - set PATH=c:\\msys64\\%MSYSTEM%\\bin;c:\\msys64\\usr\\bin;%PATH%\n  - if defined MSVC call \"c:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\vcvarsall.bat\" %MSVC%\n  - if defined MSVC pacman --noconfirm -Rsc mingw-w64-%CPU%-gcc gcc\n\nbuild_script:\n  - bash -c \"autoconf\"\n  - bash -c \"./configure $CONFIG_FLAGS\"\n  - mingw32-make\n  - file lib/jemalloc.dll\n  - mingw32-make tests\n  - mingw32-make -k check\n"
  },
  {
    "path": "deps/jemalloc/.autom4te.cfg",
    "content": "begin-language: \"Autoconf-without-aclocal-m4\"\nargs: --no-cache\nend-language: \"Autoconf-without-aclocal-m4\"\n"
  },
  {
    "path": "deps/jemalloc/.cirrus.yml",
    "content": "env:\n  CIRRUS_CLONE_DEPTH: 1\n  ARCH: amd64\n\ntask:\n  matrix:\n      env:\n        DEBUG_CONFIG: --enable-debug\n      env:\n        DEBUG_CONFIG: --disable-debug\n  matrix:\n    - env:\n        PROF_CONFIG: --enable-prof\n    - env:\n        PROF_CONFIG: --disable-prof\n  matrix:\n    - name: 64-bit\n      env:\n        CC:\n        CXX:\n    - name: 32-bit\n      env:\n        CC: cc -m32\n        CXX: c++ -m32\n  matrix:\n    - env:\n        UNCOMMON_CONFIG:\n    - env:\n        UNCOMMON_CONFIG: --with-lg-page=16 --with-malloc-conf=tcache:false\n  freebsd_instance:\n    matrix:\n      image: freebsd-12-3-release-amd64\n  install_script:\n    - sed -i.bak -e 's,pkg+http://pkg.FreeBSD.org/\\${ABI}/quarterly,pkg+http://pkg.FreeBSD.org/\\${ABI}/latest,' /etc/pkg/FreeBSD.conf\n    - pkg upgrade -y\n    - pkg install -y autoconf gmake\n  script:\n    - autoconf\n    # We don't perfectly track freebsd stdlib.h definitions.  This is fine when\n    # we count as a system header, but breaks otherwise, like during these\n    # tests.\n    - ./configure --with-jemalloc-prefix=ci_ ${DEBUG_CONFIG} ${PROF_CONFIG} ${UNCOMMON_CONFIG}\n    - export JFLAG=`sysctl -n kern.smp.cpus`\n    - gmake -j${JFLAG}\n    - gmake -j${JFLAG} tests\n    - gmake check\n"
  },
  {
    "path": "deps/jemalloc/.clang-format",
    "content": "# jemalloc targets clang-format version 8.  We include every option it supports\n# here, but comment out the ones that aren't relevant for us.\n---\n# AccessModifierOffset: -2\nAlignAfterOpenBracket: DontAlign\nAlignConsecutiveAssignments: false\nAlignConsecutiveDeclarations: false\nAlignEscapedNewlines: Right\nAlignOperands: false\nAlignTrailingComments: false\nAllowAllParametersOfDeclarationOnNextLine: true\nAllowShortBlocksOnASingleLine: false\nAllowShortCaseLabelsOnASingleLine: false\nAllowShortFunctionsOnASingleLine: Empty\nAllowShortIfStatementsOnASingleLine: false\nAllowShortLoopsOnASingleLine: false\nAlwaysBreakAfterReturnType: AllDefinitions\nAlwaysBreakBeforeMultilineStrings: true\n# AlwaysBreakTemplateDeclarations: Yes\nBinPackArguments: true\nBinPackParameters: true\nBraceWrapping:\n  AfterClass: false\n  AfterControlStatement: false\n  AfterEnum: false\n  AfterFunction: false\n  AfterNamespace: false\n  AfterObjCDeclaration: false\n  AfterStruct: false\n  AfterUnion: false\n  BeforeCatch: false\n  BeforeElse: false\n  IndentBraces: false\n# BreakAfterJavaFieldAnnotations: true\nBreakBeforeBinaryOperators: NonAssignment\nBreakBeforeBraces: Attach\nBreakBeforeTernaryOperators: true\n# BreakConstructorInitializers: BeforeColon\n# BreakInheritanceList: BeforeColon\nBreakStringLiterals: false\nColumnLimit: 80\n# CommentPragmas: ''\n# CompactNamespaces: true\n# ConstructorInitializerAllOnOneLineOrOnePerLine: true\n# ConstructorInitializerIndentWidth: 4\nContinuationIndentWidth: 2\nCpp11BracedListStyle: true\nDerivePointerAlignment: false\nDisableFormat:   false\nExperimentalAutoDetectBinPacking: false\nFixNamespaceComments: true\nForEachMacros:   [ ql_foreach, qr_foreach, ]\n# IncludeBlocks: Preserve\n# IncludeCategories:\n#   - Regex:           '^<.*\\.h(pp)?>'\n#     Priority:        1\n# IncludeIsMainRegex: ''\nIndentCaseLabels: false\nIndentPPDirectives: AfterHash\nIndentWidth: 4\nIndentWrappedFunctionNames: false\n# JavaImportGroups: 
[]\n# JavaScriptQuotes: Leave\n# JavaScriptWrapImports: True\nKeepEmptyLinesAtTheStartOfBlocks: false\nLanguage: Cpp\nMacroBlockBegin: ''\nMacroBlockEnd: ''\nMaxEmptyLinesToKeep: 1\n# NamespaceIndentation: None\n# ObjCBinPackProtocolList: Auto\n# ObjCBlockIndentWidth: 2\n# ObjCSpaceAfterProperty: false\n# ObjCSpaceBeforeProtocolList: false\n\nPenaltyBreakAssignment: 2\nPenaltyBreakBeforeFirstCallParameter: 1\nPenaltyBreakComment: 300\nPenaltyBreakFirstLessLess: 120\nPenaltyBreakString: 1000\n# PenaltyBreakTemplateDeclaration: 10\nPenaltyExcessCharacter: 1000000\nPenaltyReturnTypeOnItsOwnLine: 60\nPointerAlignment: Right\n# RawStringFormats:\n#   - Language: TextProto\n#       Delimiters:\n#         - 'pb'\n#         - 'proto'\n#       EnclosingFunctions:\n#         - 'PARSE_TEXT_PROTO'\n#       BasedOnStyle: google\n#   - Language: Cpp\n#       Delimiters:\n#         - 'cc'\n#         - 'cpp'\n#       BasedOnStyle: llvm\n#       CanonicalDelimiter: 'cc'\nReflowComments: true\nSortIncludes: false\nSpaceAfterCStyleCast: false\n# SpaceAfterTemplateKeyword: true\nSpaceBeforeAssignmentOperators: true\n# SpaceBeforeCpp11BracedList: false\n# SpaceBeforeCtorInitializerColon: true\n# SpaceBeforeInheritanceColon: true\nSpaceBeforeParens: ControlStatements\n# SpaceBeforeRangeBasedForLoopColon: true\nSpaceInEmptyParentheses: false\nSpacesBeforeTrailingComments: 2\nSpacesInAngles:  false\nSpacesInCStyleCastParentheses: false\n# SpacesInContainerLiterals: false\nSpacesInParentheses: false\nSpacesInSquareBrackets: false\n# Standard: Cpp11\n# This is nominally supported in clang-format version 8, but not in the build\n# used by some of the core jemalloc developers.\n# StatementMacros: []\nTabWidth: 8\nUseTab: Never\n...\n"
  },
  {
    "path": "deps/jemalloc/.gitattributes",
    "content": "* text=auto eol=lf\n"
  },
  {
    "path": "deps/jemalloc/.gitignore",
    "content": "/bin/jemalloc-config\n/bin/jemalloc.sh\n/bin/jeprof\n\n/config.stamp\n/config.log\n/config.status\n/configure\n\n/doc/html.xsl\n/doc/manpages.xsl\n/doc/jemalloc.xml\n/doc/jemalloc.html\n/doc/jemalloc.3\n\n/doc_internal/PROFILING_INTERNALS.pdf\n\n/jemalloc.pc\n\n/lib/\n\n/Makefile\n\n/include/jemalloc/internal/jemalloc_preamble.h\n/include/jemalloc/internal/jemalloc_internal_defs.h\n/include/jemalloc/internal/private_namespace.gen.h\n/include/jemalloc/internal/private_namespace.h\n/include/jemalloc/internal/private_namespace_jet.gen.h\n/include/jemalloc/internal/private_namespace_jet.h\n/include/jemalloc/internal/private_symbols.awk\n/include/jemalloc/internal/private_symbols_jet.awk\n/include/jemalloc/internal/public_namespace.h\n/include/jemalloc/internal/public_symbols.txt\n/include/jemalloc/internal/public_unnamespace.h\n/include/jemalloc/jemalloc.h\n/include/jemalloc/jemalloc_defs.h\n/include/jemalloc/jemalloc_macros.h\n/include/jemalloc/jemalloc_mangle.h\n/include/jemalloc/jemalloc_mangle_jet.h\n/include/jemalloc/jemalloc_protos.h\n/include/jemalloc/jemalloc_protos_jet.h\n/include/jemalloc/jemalloc_rename.h\n/include/jemalloc/jemalloc_typedefs.h\n\n/src/*.[od]\n/src/*.sym\n\n/run_tests.out/\n\n/test/test.sh\ntest/include/test/jemalloc_test.h\ntest/include/test/jemalloc_test_defs.h\n\n/test/integration/[A-Za-z]*\n!/test/integration/cpp/\n!/test/integration/[A-Za-z]*.*\n/test/integration/*.[od]\n/test/integration/*.out\n\n/test/integration/cpp/[A-Za-z]*\n!/test/integration/cpp/[A-Za-z]*.*\n/test/integration/cpp/*.[od]\n/test/integration/cpp/*.out\n\n/test/src/*.[od]\n\n/test/stress/[A-Za-z]*\n!/test/stress/[A-Za-z]*.*\n/test/stress/*.[od]\n/test/stress/*.out\n\n/test/unit/[A-Za-z]*\n!/test/unit/[A-Za-z]*.*\n/test/unit/*.[od]\n/test/unit/*.out\n\n/test/analyze/[A-Za-z]*\n!/test/analyze/[A-Za-z]*.*\n/test/analyze/*.[od]\n/test/analyze/*.out\n\n/VERSION\n\n*.pdb\n*.sdf\n*.opendb\n*.VC.db\n*.opensdf\n*.cachefile\n*.suo\n*.user\n*.sln.docstates\n*.tmp\
n.vs/\n/msvc/Win32/\n/msvc/x64/\n/msvc/projects/*/*/Debug*/\n/msvc/projects/*/*/Release*/\n/msvc/projects/*/*/Win32/\n/msvc/projects/*/*/x64/\n"
  },
  {
    "path": "deps/jemalloc/.travis.yml",
    "content": "# This config file is generated by ./scripts/gen_travis.py.\n# Do not edit by hand.\n\n# We use 'minimal', because 'generic' makes Windows VMs hang at startup. Also\n# the software provided by 'generic' is simply not needed for our tests.\n# Differences are explained here:\n# https://docs.travis-ci.com/user/languages/minimal-and-generic/\nlanguage: minimal\ndist: focal\n\njobs:\n  include:\n    - os: windows\n      arch: amd64\n      env: CC=gcc CXX=g++ EXTRA_CFLAGS=\"-fcommon\"\n    - os: windows\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-fcommon\"\n    - os: windows\n      arch: amd64\n      env: CC=cl.exe CXX=cl.exe\n    - os: windows\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes EXTRA_CFLAGS=\"-fcommon\"\n    - os: windows\n      arch: amd64\n      env: CC=cl.exe CXX=cl.exe CONFIGURE_FLAGS=\"--enable-debug\"\n    - os: windows\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-fcommon\"\n    - os: windows\n      arch: amd64\n      env: CC=cl.exe CXX=cl.exe CROSS_COMPILE_32BIT=yes\n    - os: windows\n      arch: amd64\n      env: CC=cl.exe CXX=cl.exe CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --enable-prof-libunwind\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --enable-prof --enable-prof-libunwind\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc 
CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --enable-prof-libunwind --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-prof --enable-prof-libunwind\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --enable-prof --enable-prof-libunwind --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug --enable-prof --enable-prof-libunwind\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-prof --enable-prof-libunwind --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: freebsd\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes CONFIGURE_FLAGS=\"--enable-debug --enable-prof --enable-prof-libunwind --with-lg-page=16 --with-malloc-conf=tcache:false\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes 
COMPILER_FLAGS=\"-m32\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: 
linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=clang CXX=clang++ CONFIGURE_FLAGS=\"--with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" 
CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes COMPILER_FLAGS=\"-m32\" CONFIGURE_FLAGS=\"--with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --enable-prof\" 
EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --with-lg-page=16\" 
EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: 
CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks --with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ 
CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16 --with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false,dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false,percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false,background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary,percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary,background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=percpu_arena:percpu,background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: 
linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=dss:primary\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=percpu_arena:percpu\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: linux\n      arch: ppc64le\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=background_thread:true\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CROSS_COMPILE_32BIT=yes EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - 
os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-stats\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--disable-libdl\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-opt-safety-checks\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-lg-page=16\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    - os: osx\n      arch: amd64\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--with-malloc-conf=tcache:false\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds -Wno-unknown-warning-option -Wno-ignored-attributes -Wno-deprecated-declarations\"\n    # Development build\n    - os: linux\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --disable-cache-oblivious --enable-stats --enable-log --enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    # --enable-experimental-smallocx:\n    - os: linux\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug --enable-experimental-smallocx --enable-stats --enable-prof\" EXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n\n\nbefore_install:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/before_install.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/before_install.sh\n    fi\n\nbefore_script:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/before_script.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/before_script.sh\n    else\n      scripts/gen_travis.py > travis_script && diff .travis.yml travis_script\n      autoconf\n      # If 
COMPILER_FLAGS are not empty, add them to CC and CXX\n      ./configure ${COMPILER_FLAGS:+ CC=\"$CC $COMPILER_FLAGS\" CXX=\"$CXX $COMPILER_FLAGS\"} $CONFIGURE_FLAGS\n      make -j3\n      make -j3 tests\n    fi\n\nscript:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/script.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/script.sh\n    else\n      make check\n    fi\n\n"
  },
  {
    "path": "deps/jemalloc/COPYING",
    "content": "Unless otherwise specified, files in the jemalloc source distribution are\nsubject to the following license:\n--------------------------------------------------------------------------------\nCopyright (C) 2002-present Jason Evans <jasone@canonware.com>.\nAll rights reserved.\nCopyright (C) 2007-2012 Mozilla Foundation.  All rights reserved.\nCopyright (C) 2009-present Facebook, Inc.  All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n1. Redistributions of source code must retain the above copyright notice(s),\n   this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright notice(s),\n   this list of conditions and the following disclaimer in the documentation\n   and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER(S) ``AS IS'' AND ANY EXPRESS\nOR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\nMERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO\nEVENT SHALL THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY DIRECT, INDIRECT,\nINCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\nLIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\nPROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\nLIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE\nOR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\nADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n--------------------------------------------------------------------------------\n"
  },
  {
    "path": "deps/jemalloc/ChangeLog",
    "content": "Following are change highlights associated with official releases.  Important\nbug fixes are all mentioned, but some internal enhancements are omitted here for\nbrevity.  Much more detail can be found in the git revision history:\n\n    https://github.com/jemalloc/jemalloc\n\n* 5.3.0 (May 6, 2022)\n\n  This release contains many speed and space optimizations, from micro\n  optimizations on common paths to rework of internal data structures and\n  locking schemes, and many more too detailed to list below.  Multiple percent\n  of system level metric improvements were measured in tested production\n  workloads.  The release has gone through large-scale production testing.\n\n  New features:\n  - Add the thread.idle mallctl which hints that the calling thread will be\n    idle for a nontrivial period of time.  (@davidtgoldblatt)\n  - Allow small size classes to be the maximum size class to cache in the\n    thread-specific cache, through the opt.[lg_]tcache_max option.  (@interwq,\n    @jordalgo)\n  - Make the behavior of realloc(ptr, 0) configurable with opt.zero_realloc.\n    (@davidtgoldblatt)\n  - Add 'make uninstall' support.  (@sangshuduo, @Lapenkov)\n  - Support C++17 over-aligned allocation.  (@marksantaniello)\n  - Add the thread.peak mallctl for approximate per-thread peak memory tracking.\n    (@davidtgoldblatt)\n  - Add interval-based stats output opt.stats_interval.  (@interwq)\n  - Add prof.prefix to override filename prefixes for dumps.  (@zhxchen17)\n  - Add high resolution timestamp support for profiling.  (@tyroguru)\n  - Add the --collapsed flag to jeprof for flamegraph generation.\n    (@igorwwwwwwwwwwwwwwwwwwww)\n  - Add the --debug-syms-by-id option to jeprof for debug symbols discovery.\n    (@DeannaGelbart)\n  - Add the opt.prof_leak_error option to exit with error code when leak is\n    detected using opt.prof_final.  
(@yunxuo)\n  - Add opt.cache_oblivious as a runtime alternative to config.cache_oblivious.\n    (@interwq)\n  - Add mallctl interfaces:\n    + opt.zero_realloc  (@davidtgoldblatt)\n    + opt.cache_oblivious  (@interwq)\n    + opt.prof_leak_error  (@yunxuo)\n    + opt.stats_interval  (@interwq)\n    + opt.stats_interval_opts  (@interwq)\n    + opt.tcache_max  (@interwq)\n    + opt.trust_madvise  (@azat)\n    + prof.prefix  (@zhxchen17)\n    + stats.zero_reallocs  (@davidtgoldblatt)\n    + thread.idle  (@davidtgoldblatt)\n    + thread.peak.{read,reset}  (@davidtgoldblatt)\n\n  Bug fixes:\n  - Fix the synchronization around explicit tcache creation which could cause\n    invalid tcache identifiers.  This regression was first released in 5.0.0.\n    (@yoshinorim, @davidtgoldblatt)\n  - Fix a profiling biasing issue which could cause incorrect heap usage and\n    object counts.  This issue existed in all previous releases with the heap\n    profiling feature.  (@davidtgoldblatt)\n  - Fix the order of stats counter updating on large realloc which could cause\n    failed assertions.  This regression was first released in 5.0.0.  (@azat)\n  - Fix the locking on the arena destroy mallctl, which could cause concurrent\n    arena creations to fail.  This functionality was first introduced in 5.0.0.\n    (@interwq)\n\n  Portability improvements:\n  - Remove nothrow from system function declarations on macOS and FreeBSD.\n    (@davidtgoldblatt, @fredemmott, @leres)\n  - Improve overcommit and page alignment settings on NetBSD.  (@zoulasc)\n  - Improve CPU affinity support on BSD platforms.  (@devnexen)\n  - Improve utrace detection and support.  (@devnexen)\n  - Improve QEMU support with MADV_DONTNEED zeroed pages detection.  (@azat)\n  - Add memcntl support on Solaris / illumos.  (@devnexen)\n  - Improve CPU_SPINWAIT on ARM.  (@AWSjswinney)\n  - Improve TSD cleanup on FreeBSD.  (@Lapenkov)\n  - Disable percpu_arena if the CPU count cannot be reliably detected.  
(@azat)\n  - Add malloc_size(3) override support.  (@devnexen)\n  - Add mmap VM_MAKE_TAG support.  (@devnexen)\n  - Add support for MADV_[NO]CORE.  (@devnexen)\n  - Add support for DragonFlyBSD.  (@devnexen)\n  - Fix the QUANTUM setting on MIPS64.  (@brooksdavis)\n  - Add the QUANTUM setting for ARC.  (@vineetgarc)\n  - Add the QUANTUM setting for LoongArch.  (@wangjl-uos)\n  - Add QNX support.  (@jqian-aurora)\n  - Avoid atexit(3) calls unless the relevant profiling features are enabled.\n    (@BusyJay, @laiwei-rice, @interwq)\n  - Fix unknown option detection when using Clang.  (@Lapenkov)\n  - Fix symbol conflict with musl libc.  (@georgthegreat)\n  - Add -Wimplicit-fallthrough checks.  (@nickdesaulniers)\n  - Add __forceinline support on MSVC.  (@santagada)\n  - Improve FreeBSD and Windows CI support.  (@Lapenkov)\n  - Add CI support for PPC64LE architecture.  (@ezeeyahoo)\n\n  Incompatible changes:\n  - Maximum size class allowed in tcache (opt.[lg_]tcache_max) now has an upper\n    bound of 8MiB.  (@interwq)\n\n  Optimizations and refactors (@davidtgoldblatt, @Lapenkov, @interwq):\n  - Optimize the common cases of the thread cache operations.\n  - Optimize internal data structures, including RB tree and pairing heap.\n  - Optimize the internal locking on extent management.\n  - Extract and refactor the internal page allocator and interface modules.\n\n  Documentation:\n  - Fix doc build with --with-install-suffix.  (@lawmurray, @interwq)\n  - Add PROFILING_INTERNALS.md.  (@davidtgoldblatt)\n  - Ensure the proper order of doc building and installation.  (@Mingli-Yu)\n\n* 5.2.1 (August 5, 2019)\n\n  This release is primarily about Windows.  A critical virtual memory leak is\n  resolved on all Windows platforms.  The regression was present in all releases\n  since 5.0.0.\n\n  Bug fixes:\n  - Fix a severe virtual memory leak on Windows.  This regression was first\n    released in 5.0.0.  
(@Ignition, @j0t, @frederik-h, @davidtgoldblatt,\n    @interwq)\n  - Fix size 0 handling in posix_memalign().  This regression was first released\n    in 5.2.0.  (@interwq)\n  - Fix the prof_log unit test which may observe unexpected backtraces from\n    compiler optimizations.  The test was first added in 5.2.0.  (@marxin,\n    @gnzlbg, @interwq)\n  - Fix the declaration of the extent_avail tree.  This regression was first\n    released in 5.1.0.  (@zoulasc)\n  - Fix an incorrect reference in jeprof.  This functionality was first released\n    in 3.0.0.  (@prehistoric-penguin)\n  - Fix an assertion on the deallocation fast-path.  This regression was first\n    released in 5.2.0.  (@yinan1048576)\n  - Fix the TLS_MODEL attribute in headers.  This regression was first released\n    in 5.0.0.  (@zoulasc, @interwq)\n\n  Optimizations and refactors:\n  - Implement opt.retain on Windows and enable by default on 64-bit.  (@interwq,\n    @davidtgoldblatt)\n  - Optimize away a branch on the operator delete[] path.  (@mgrice)\n  - Add format annotation to the format generator function.  (@zoulasc)\n  - Refactor and improve the size class header generation.  (@yinan1048576)\n  - Remove best fit.  (@djwatson)\n  - Avoid blocking on background thread locks for stats.  (@oranagra, @interwq)\n\n* 5.2.0 (April 2, 2019)\n\n  This release includes a few notable improvements, which are summarized below:\n  1) improved fast-path performance from the optimizations by @djwatson; 2)\n  reduced virtual memory fragmentation and metadata usage; and 3) bug fixes on\n  setting the number of background threads.  In addition, peak / spike memory\n  usage is improved with certain allocation patterns.  As usual, the release and\n  prior dev versions have gone through large-scale production testing.\n\n  New features:\n  - Implement oversize_threshold, which uses a dedicated arena for allocations\n    crossing the specified threshold to reduce fragmentation.  
(@interwq)\n  - Add extents usage information to stats.  (@tyleretzel)\n  - Log time information for sampled allocations.  (@tyleretzel)\n  - Support 0 size in sdallocx.  (@djwatson)\n  - Output rate for certain counters in malloc_stats.  (@zinoale)\n  - Add configure option --enable-readlinkat, which allows the use of readlinkat\n    over readlink.  (@davidtgoldblatt)\n  - Add configure options --{enable,disable}-{static,shared} to allow not\n    building unwanted libraries.  (@Ericson2314)\n  - Add configure option --disable-libdl to enable fully static builds.\n    (@interwq)\n  - Add mallctl interfaces:\n\t+ opt.oversize_threshold (@interwq)\n\t+ stats.arenas.<i>.extent_avail (@tyleretzel)\n\t+ stats.arenas.<i>.extents.<j>.n{dirty,muzzy,retained} (@tyleretzel)\n\t+ stats.arenas.<i>.extents.<j>.{dirty,muzzy,retained}_bytes\n\t  (@tyleretzel)\n\n  Portability improvements:\n  - Update MSVC builds.  (@maksqwe, @rustyx)\n  - Workaround a compiler optimizer bug on s390x.  (@rkmisra)\n  - Make use of pthread_set_name_np(3) on FreeBSD.  (@trasz)\n  - Implement malloc_getcpu() to enable percpu_arena for windows.  (@santagada)\n  - Link against -pthread instead of -lpthread.  (@paravoid)\n  - Make background_thread not dependent on libdl.  (@interwq)\n  - Add stringify to fix a linker directive issue on MSVC.  (@daverigby)\n  - Detect and fall back when 8-bit atomics are unavailable.  (@interwq)\n  - Fall back to the default pthread_create if dlsym(3) fails.  (@interwq)\n\n  Optimizations and refactors:\n  - Refactor the TSD module.  (@davidtgoldblatt)\n  - Avoid taking extents_muzzy mutex when muzzy is disabled.  (@interwq)\n  - Avoid taking large_mtx for auto arenas on the tcache flush path.  (@interwq)\n  - Optimize ixalloc by avoiding a size lookup.  (@interwq)\n  - Implement opt.oversize_threshold which uses a dedicated arena for requests\n    crossing the threshold, also eagerly purges the oversize extents.  Default\n    the threshold to 8 MiB.  
(@interwq)\n  - Clean compilation with -Wextra.  (@gnzlbg, @jasone)\n  - Refactor the size class module.  (@davidtgoldblatt)\n  - Refactor the stats emitter.  (@tyleretzel)\n  - Optimize pow2_ceil.  (@rkmisra)\n  - Avoid runtime detection of lazy purging on FreeBSD.  (@trasz)\n  - Optimize mmap(2) alignment handling on FreeBSD.  (@trasz)\n  - Improve error handling for THP state initialization.  (@jsteemann)\n  - Rework the malloc() fast path.  (@djwatson)\n  - Rework the free() fast path.  (@djwatson)\n  - Refactor and optimize the tcache fill / flush paths.  (@djwatson)\n  - Optimize sync / lwsync on PowerPC.  (@chmeeedalf)\n  - Bypass extent_dalloc() when retain is enabled.  (@interwq)\n  - Optimize the locking on large deallocation.  (@interwq)\n  - Reduce the number of pages committed from sanity checking in debug build.\n    (@trasz, @interwq)\n  - Deprecate OSSpinLock.  (@interwq)\n  - Lower the default number of background threads to 4 (when the feature\n    is enabled).  (@interwq)\n  - Optimize the trylock spin wait.  (@djwatson)\n  - Use arena index for arena-matching checks.  (@interwq)\n  - Avoid forced decay on thread termination when using background threads.\n    (@interwq)\n  - Disable muzzy decay by default.  (@djwatson, @interwq)\n  - Only initialize libgcc unwinder when profiling is enabled.  (@paravoid,\n    @interwq)\n\n  Bug fixes (all only relevant to jemalloc 5.x):\n  - Fix background thread index issues with max_background_threads.  (@djwatson,\n    @interwq)\n  - Fix stats output for opt.lg_extent_max_active_fit.  (@interwq)\n  - Fix opt.prof_prefix initialization.  (@davidtgoldblatt)\n  - Properly trigger decay on tcache destroy.  (@interwq, @amosbird)\n  - Fix tcache.flush.  (@interwq)\n  - Detect whether explicit extent zero out is necessary with huge pages or\n    custom extent hooks, which may change the purge semantics.  
(@interwq)\n  - Fix a side effect caused by extent_max_active_fit combined with decay-based\n    purging, where freed extents can accumulate and not be reused for an\n    extended period of time.  (@interwq, @mpghf)\n  - Fix a missing unlock on extent register error handling.  (@zoulasc)\n\n  Testing:\n  - Simplify the Travis script output.  (@gnzlbg)\n  - Update the test scripts for FreeBSD.  (@devnexen)\n  - Add unit tests for the producer-consumer pattern.  (@interwq)\n  - Add Cirrus-CI config for FreeBSD builds.  (@jasone)\n  - Add size-matching sanity checks on tcache flush.  (@davidtgoldblatt,\n    @interwq)\n\n  Incompatible changes:\n  - Remove --with-lg-page-sizes.  (@davidtgoldblatt)\n\n  Documentation:\n  - Attempt to build docs by default, however skip doc building when xsltproc\n    is missing. (@interwq, @cmuellner)\n\n* 5.1.0 (May 4, 2018)\n\n  This release is primarily about fine-tuning, ranging from several new features\n  to numerous notable performance and portability enhancements.  The release and\n  prior dev versions have been running in multiple large scale applications for\n  months, and the cumulative improvements are substantial in many cases.\n\n  Given the long and successful production runs, this release is likely a good\n  candidate for applications to upgrade, from both jemalloc 5.0 and before.  For\n  performance-critical applications, the newly added TUNING.md provides\n  guidelines on jemalloc tuning.\n\n  New features:\n  - Implement transparent huge page support for internal metadata.  (@interwq)\n  - Add opt.thp to allow enabling / disabling transparent huge pages for all\n    mappings.  (@interwq)\n  - Add maximum background thread count option.  (@djwatson)\n  - Allow prof_active to control opt.lg_prof_interval and prof.gdump.\n    (@interwq)\n  - Allow arena index lookup based on allocation addresses via mallctl.\n    (@lionkov)\n  - Allow disabling initial-exec TLS model.  
(@davidtgoldblatt, @KenMacD)\n  - Add opt.lg_extent_max_active_fit to set the max ratio between the size of\n    the active extent selected (to split off from) and the size of the requested\n    allocation.  (@interwq, @davidtgoldblatt)\n  - Add retain_grow_limit to set the max size when growing virtual address\n    space.  (@interwq)\n  - Add mallctl interfaces:\n    + arena.<i>.retain_grow_limit  (@interwq)\n    + arenas.lookup  (@lionkov)\n    + max_background_threads  (@djwatson)\n    + opt.lg_extent_max_active_fit  (@interwq)\n    + opt.max_background_threads  (@djwatson)\n    + opt.metadata_thp  (@interwq)\n    + opt.thp  (@interwq)\n    + stats.metadata_thp  (@interwq)\n\n  Portability improvements:\n  - Support GNU/kFreeBSD configuration.  (@paravoid)\n  - Support m68k, nios2 and SH3 architectures.  (@paravoid)\n  - Fall back to FD_CLOEXEC when O_CLOEXEC is unavailable.  (@zonyitoo)\n  - Fix symbol listing for cross-compiling.  (@tamird)\n  - Fix high bits computation on ARM.  (@davidtgoldblatt, @paravoid)\n  - Disable the CPU_SPINWAIT macro for Power.  (@davidtgoldblatt, @marxin)\n  - Fix MSVC 2015 & 2017 builds.  (@rustyx)\n  - Improve RISC-V support.  (@EdSchouten)\n  - Set name mangling script in strict mode.  (@nicolov)\n  - Avoid MADV_HUGEPAGE on ARM.  (@marxin)\n  - Modify configure to determine return value of strerror_r.\n    (@davidtgoldblatt, @cferris1000)\n  - Make sure CXXFLAGS is tested with CPP compiler.  (@nehaljwani)\n  - Fix 32-bit build on MSVC.  (@rustyx)\n  - Fix external symbol on MSVC.  (@maksqwe)\n  - Avoid a printf format specifier warning.  (@jasone)\n  - Add configure option --disable-initial-exec-tls which can allow jemalloc to\n    be dynamically loaded after program startup.  (@davidtgoldblatt, @KenMacD)\n  - AArch64: Add ILP32 support.  
(@cmuellner)\n  - Add --with-lg-vaddr configure option to support cross compiling.\n    (@cmuellner, @davidtgoldblatt)\n\n  Optimizations and refactors:\n  - Improve active extent fit with extent_max_active_fit.  This considerably\n    reduces fragmentation over time and improves virtual memory and metadata\n    usage.  (@davidtgoldblatt, @interwq)\n  - Eagerly coalesce large extents to reduce fragmentation.  (@interwq)\n  - sdallocx: only read size info when page aligned (i.e. possibly sampled),\n    which speeds up the sized deallocation path significantly.  (@interwq)\n  - Avoid attempting new mappings for in place expansion with retain, since\n    it rarely succeeds in practice and causes high overhead.  (@interwq)\n  - Refactor OOM handling in newImpl.  (@wqfish)\n  - Add internal fine-grained logging functionality for debugging use.\n    (@davidtgoldblatt)\n  - Refactor arena / tcache interactions.  (@davidtgoldblatt)\n  - Refactor extent management with dumpable flag.  (@davidtgoldblatt)\n  - Add runtime detection of lazy purging.  (@interwq)\n  - Use pairing heap instead of red-black tree for extents_avail.  (@djwatson)\n  - Use sysctl on startup in FreeBSD.  (@trasz)\n  - Use thread local prng state instead of atomic.  (@djwatson)\n  - Make decay always purge one more extent than before, because in\n    practice large extents are usually the ones that cross the decay threshold.\n    Purging the additional extent helps save memory as well as reduce VM\n    fragmentation.  (@interwq)\n  - Fast division by dynamic values.  (@davidtgoldblatt)\n  - Improve the fit for aligned allocation.  (@interwq, @edwinsmith)\n  - Refactor extent_t bitpacking.  (@rkmisra)\n  - Optimize the generated assembly for ticker operations.  (@davidtgoldblatt)\n  - Convert stats printing to use a structured text emitter.  (@davidtgoldblatt)\n  - Remove preserve_lru feature for extents management.  
(@djwatson)\n  - Consolidate two memory loads into one on the fast deallocation path.\n    (@davidtgoldblatt, @interwq)\n\n  Bug fixes (most of the issues are only relevant to jemalloc 5.0):\n  - Fix deadlock with multithreaded fork in OS X.  (@davidtgoldblatt)\n  - Validate returned file descriptor before use.  (@zonyitoo)\n  - Fix a few background thread initialization and shutdown issues.  (@interwq)\n  - Fix an extent coalesce + decay race by taking both coalescing extents off\n    the LRU list.  (@interwq)\n  - Fix a potentially unbounded increase during decay, caused by one thread\n    repeatedly stashing memory to purge while other threads generate new\n    pages.  The number of pages to purge is checked to prevent this.\n    (@interwq)\n  - Fix a FreeBSD bootstrap assertion.  (@strejda, @interwq)\n  - Handle 32 bit mutex counters.  (@rkmisra)\n  - Fix an indexing bug when creating background threads.  (@davidtgoldblatt,\n    @binliu19)\n  - Fix arguments passed to extent_init.  (@yuleniwo, @interwq)\n  - Fix addresses used for ordering mutexes.  (@rkmisra)\n  - Fix abort_conf processing during bootstrap.  (@interwq)\n  - Fix include path order for out-of-tree builds.  (@cmuellner)\n\n  Incompatible changes:\n  - Remove --disable-thp.  (@interwq)\n  - Remove mallctl interfaces:\n    + config.thp  (@interwq)\n\n  Documentation:\n  - Add TUNING.md.  (@interwq, @davidtgoldblatt, @djwatson)\n\n* 5.0.1 (July 1, 2017)\n\n  This bugfix release fixes several issues, most of which are obscure enough\n  that typical applications are not impacted.\n\n  Bug fixes:\n  - Update decay->nunpurged before purging, in order to avoid potential update\n    races and subsequent incorrect purging volume.  (@interwq)\n  - Only abort on dlsym(3) error if the failure impacts an enabled feature (lazy\n    locking and/or background threads).  
This mitigates an initialization\n    failure bug for which we still do not have a clear reproduction test case.\n    (@interwq)\n  - Modify tsd management so that it neither crashes nor leaks if a thread's\n    only allocation activity is to call free() after TLS destructors have been\n    executed.  This behavior was observed when operating with GNU libc, and is\n    unlikely to be an issue with other libc implementations.  (@interwq)\n  - Mask signals during background thread creation.  This prevents signals from\n    being inadvertently delivered to background threads.  (@jasone,\n    @davidtgoldblatt, @interwq)\n  - Avoid inactivity checks within background threads, in order to prevent\n    recursive mutex acquisition.  (@interwq)\n  - Fix extent_grow_retained() to use the specified hooks when the\n    arena.<i>.extent_hooks mallctl is used to override the default hooks.\n    (@interwq)\n  - Add missing reentrancy support for custom extent hooks which allocate.\n    (@interwq)\n  - Post-fork(2), re-initialize the list of tcaches associated with each arena\n    to contain no tcaches except the forking thread's.  (@interwq)\n  - Add missing post-fork(2) mutex reinitialization for extent_grow_mtx.  This\n    fixes potential deadlocks after fork(2).  (@interwq)\n  - Enforce minimum autoconf version (currently 2.68), since 2.63 is known to\n    generate corrupt configure scripts.  (@jasone)\n  - Ensure that the configured page size (--with-lg-page) is no larger than the\n    configured huge page size (--with-lg-hugepage).  (@jasone)\n\n* 5.0.0 (June 13, 2017)\n\n  Unlike all previous jemalloc releases, this release does not use naturally\n  aligned \"chunks\" for virtual memory management, and instead uses page-aligned\n  \"extents\".  This change has few externally visible effects, but the internal\n  impacts are... extensive.  
Many other internal changes combine to make this\n  the most cohesively designed version of jemalloc so far, with ample\n  opportunity for further enhancements.\n\n  Continuous integration is now an integral aspect of development thanks to the\n  efforts of @davidtgoldblatt, and the dev branch tends to remain reasonably\n  stable on the tested platforms (Linux, FreeBSD, macOS, and Windows).  As a\n  side effect the official release frequency may decrease over time.\n\n  New features:\n  - Implement optional per-CPU arena support; threads choose which arena to use\n    based on current CPU rather than on fixed thread-->arena associations.\n    (@interwq)\n  - Implement two-phase decay of unused dirty pages.  Pages transition from\n    dirty-->muzzy-->clean, where the first phase transition relies on\n    madvise(... MADV_FREE) semantics, and the second phase transition discards\n    pages such that they are replaced with demand-zeroed pages on next access.\n    (@jasone)\n  - Increase decay time resolution from seconds to milliseconds.  (@jasone)\n  - Implement opt-in per CPU background threads, and use them for asynchronous\n    decay-driven unused dirty page purging.  (@interwq)\n  - Add mutex profiling, which collects a variety of statistics useful for\n    diagnosing overhead/contention issues.  (@interwq)\n  - Add C++ new/delete operator bindings.  (@djwatson)\n  - Support manually created arena destruction, such that all data and metadata\n    are discarded.  Add MALLCTL_ARENAS_DESTROYED for accessing merged stats\n    associated with destroyed arenas.  (@jasone)\n  - Add MALLCTL_ARENAS_ALL as a fixed index for use in accessing\n    merged/destroyed arena statistics via mallctl.  (@jasone)\n  - Add opt.abort_conf to optionally abort if invalid configuration options are\n    detected during initialization.  (@interwq)\n  - Add opt.stats_print_opts, so that e.g. JSON output can be selected for the\n    stats dumped during exit if opt.stats_print is true.  
(@jasone)\n  - Add --with-version=VERSION for use when embedding jemalloc into another\n    project's git repository.  (@jasone)\n  - Add --disable-thp to support cross compiling.  (@jasone)\n  - Add --with-lg-hugepage to support cross compiling.  (@jasone)\n  - Add mallctl interfaces (various authors):\n    + background_thread\n    + opt.abort_conf\n    + opt.retain\n    + opt.percpu_arena\n    + opt.background_thread\n    + opt.{dirty,muzzy}_decay_ms\n    + opt.stats_print_opts\n    + arena.<i>.initialized\n    + arena.<i>.destroy\n    + arena.<i>.{dirty,muzzy}_decay_ms\n    + arena.<i>.extent_hooks\n    + arenas.{dirty,muzzy}_decay_ms\n    + arenas.bin.<i>.slab_size\n    + arenas.nlextents\n    + arenas.lextent.<i>.size\n    + arenas.create\n    + stats.background_thread.{num_threads,num_runs,run_interval}\n    + stats.mutexes.{ctl,background_thread,prof,reset}.\n      {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,\n      num_owner_switch}\n    + stats.arenas.<i>.{dirty,muzzy}_decay_ms\n    + stats.arenas.<i>.uptime\n    + stats.arenas.<i>.{pmuzzy,base,internal,resident}\n    + stats.arenas.<i>.{dirty,muzzy}_{npurge,nmadvise,purged}\n    + stats.arenas.<i>.bins.<j>.{nslabs,reslabs,curslabs}\n    + stats.arenas.<i>.bins.<j>.mutex.\n      {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,\n      num_owner_switch}\n    + stats.arenas.<i>.lextents.<j>.{nmalloc,ndalloc,nrequests,curlextents}\n    + stats.arenas.i.mutexes.{large,extent_avail,extents_dirty,extents_muzzy,\n      extents_retained,decay_dirty,decay_muzzy,base,tcache_list}.\n      {num_ops,num_spin_acq,num_wait,max_wait_time,total_wait_time,max_num_thds,\n      num_owner_switch}\n\n  Portability improvements:\n  - Improve reentrant allocation support, such that deadlock is less likely if\n    e.g. a system library call in turn allocates memory.  (@davidtgoldblatt,\n    @interwq)\n  - Support static linking of jemalloc with glibc.  
(@djwatson)\n\n  Optimizations and refactors:\n  - Organize virtual memory as \"extents\" of virtual memory pages, rather than as\n    naturally aligned \"chunks\", and store all metadata in arbitrarily distant\n    locations.  This reduces virtual memory external fragmentation, and will\n    interact better with huge pages (not yet explicitly supported).  (@jasone)\n  - Fold large and huge size classes together; only small and large size classes\n    remain.  (@jasone)\n  - Unify the allocation paths, and merge most fast-path branching decisions.\n    (@davidtgoldblatt, @interwq)\n  - Embed per thread automatic tcache into thread-specific data, which reduces\n    conditional branches and dereferences.  Also reorganize tcache to increase\n    fast-path data locality.  (@interwq)\n  - Rewrite atomics to closely model the C11 API, convert various\n    synchronization from mutex-based to atomic, and use the explicit memory\n    ordering control to resolve various hypothetical races without increasing\n    synchronization overhead.  (@davidtgoldblatt)\n  - Extensively optimize rtree via various methods:\n    + Add multiple layers of rtree lookup caching, since rtree lookups are now\n      part of fast-path deallocation.  (@interwq)\n    + Determine rtree layout at compile time.  (@jasone)\n    + Make the tree shallower for common configurations.  (@jasone)\n    + Embed the root node in the top-level rtree data structure, thus avoiding\n      one level of indirection.  (@jasone)\n    + Further specialize leaf elements as compared to internal node elements,\n      and directly embed extent metadata needed for fast-path deallocation.\n      (@jasone)\n    + Ignore leading always-zero address bits (architecture-specific).\n      (@jasone)\n  - Reorganize headers (ongoing work) to make them hermetic, and disentangle\n    various module dependencies.  
(@davidtgoldblatt)\n  - Convert various internal data structures such as size class metadata from\n    boot-time-initialized to compile-time-initialized.  Propagate resulting data\n    structure simplifications, such as making arena metadata fixed-size.\n    (@jasone)\n  - Simplify size class lookups when constrained to size classes that are\n    multiples of the page size.  This speeds lookups, but the primary benefit is\n    complexity reduction in code that was the source of numerous regressions.\n    (@jasone)\n  - Lock individual extents when possible for localized extent operations,\n    rather than relying on a top-level arena lock.  (@davidtgoldblatt, @jasone)\n  - Use first fit layout policy instead of best fit, in order to improve\n    packing.  (@jasone)\n  - If munmap(2) is not in use, use an exponential series to grow each arena's\n    virtual memory, so that the number of disjoint virtual memory mappings\n    remains low.  (@jasone)\n  - Implement per arena base allocators, so that arenas never share any virtual\n    memory pages.  (@jasone)\n  - Automatically generate private symbol name mangling macros.  (@jasone)\n\n  Incompatible changes:\n  - Replace chunk hooks with an expanded/normalized set of extent hooks.\n    (@jasone)\n  - Remove ratio-based purging.  (@jasone)\n  - Remove --disable-tcache.  (@jasone)\n  - Remove --disable-tls.  (@jasone)\n  - Remove --enable-ivsalloc.  (@jasone)\n  - Remove --with-lg-size-class-group.  (@jasone)\n  - Remove --with-lg-tiny-min.  (@jasone)\n  - Remove --disable-cc-silence.  (@jasone)\n  - Remove --enable-code-coverage.  (@jasone)\n  - Remove --disable-munmap (replaced by opt.retain).  (@jasone)\n  - Remove Valgrind support.  (@jasone)\n  - Remove quarantine support.  (@jasone)\n  - Remove redzone support.  
(@jasone)\n  - Remove mallctl interfaces (various authors):\n    + config.munmap\n    + config.tcache\n    + config.tls\n    + config.valgrind\n    + opt.lg_chunk\n    + opt.purge\n    + opt.lg_dirty_mult\n    + opt.decay_time\n    + opt.quarantine\n    + opt.redzone\n    + opt.thp\n    + arena.<i>.lg_dirty_mult\n    + arena.<i>.decay_time\n    + arena.<i>.chunk_hooks\n    + arenas.initialized\n    + arenas.lg_dirty_mult\n    + arenas.decay_time\n    + arenas.bin.<i>.run_size\n    + arenas.nlruns\n    + arenas.lrun.<i>.size\n    + arenas.nhchunks\n    + arenas.hchunk.<i>.size\n    + arenas.extend\n    + stats.cactive\n    + stats.arenas.<i>.lg_dirty_mult\n    + stats.arenas.<i>.decay_time\n    + stats.arenas.<i>.metadata.{mapped,allocated}\n    + stats.arenas.<i>.{npurge,nmadvise,purged}\n    + stats.arenas.<i>.huge.{allocated,nmalloc,ndalloc,nrequests}\n    + stats.arenas.<i>.bins.<j>.{nruns,reruns,curruns}\n    + stats.arenas.<i>.lruns.<j>.{nmalloc,ndalloc,nrequests,curruns}\n    + stats.arenas.<i>.hchunks.<j>.{nmalloc,ndalloc,nrequests,curhchunks}\n\n  Bug fixes:\n  - Improve interval-based profile dump triggering to dump only one profile when\n    a single allocation's size exceeds the interval.  (@jasone)\n  - Use prefixed function names (as controlled by --with-jemalloc-prefix) when\n    pruning backtrace frames in jeprof.  (@jasone)\n\n* 4.5.0 (February 28, 2017)\n\n  This is the first release to benefit from much broader continuous integration\n  testing, thanks to @davidtgoldblatt.  Had we had this testing infrastructure\n  in place for prior releases, it would have caught all of the most serious\n  regressions fixed by this release.\n\n  New features:\n  - Add --disable-thp and the opt.thp mallctl to provide opt-out mechanisms for\n    transparent huge page integration.  (@jasone)\n  - Update zone allocator integration to work with macOS 10.12.  
(@glandium)\n  - Restructure *CFLAGS configuration, so that CFLAGS behaves typically, and\n    EXTRA_CFLAGS provides a way to specify e.g. -Werror during building, but not\n    during configuration.  (@jasone, @ronawho)\n\n  Bug fixes:\n  - Fix DSS (sbrk(2)-based) allocation.  This regression was first released in\n    4.3.0.  (@jasone)\n  - Handle race in per size class utilization computation.  This functionality\n    was first released in 4.0.0.  (@interwq)\n  - Fix lock order reversal during gdump.  (@jasone)\n  - Fix/refactor tcache synchronization.  This regression was first released in\n    4.0.0.  (@jasone)\n  - Fix various JSON-formatted malloc_stats_print() bugs.  This functionality\n    was first released in 4.3.0.  (@jasone)\n  - Fix huge-aligned allocation.  This regression was first released in 4.4.0.\n    (@jasone)\n  - When transparent huge page integration is enabled, detect what state pages\n    start in according to the kernel's current operating mode, and only convert\n    arena chunks to non-huge during purging if that is not their initial state.\n    This functionality was first released in 4.4.0.  (@jasone)\n  - Fix lg_chunk clamping for the --enable-cache-oblivious --disable-fill case.\n    This regression was first released in 4.0.0.  (@jasone, @428desmo)\n  - Properly detect sparc64 when building for Linux.  (@glaubitz)\n\n* 4.4.0 (December 3, 2016)\n\n  New features:\n  - Add configure support for *-*-linux-android.  (@cferris1000, @jasone)\n  - Add the --disable-syscall configure option, for use on systems that place\n    security-motivated limitations on syscall(2).  (@jasone)\n  - Add support for Debian GNU/kFreeBSD.  (@thesam)\n\n  Optimizations:\n  - Add extent serial numbers and use them where appropriate as a sort key that\n    is higher priority than address, so that the allocation policy prefers older\n    extents.  This tends to improve locality (decrease fragmentation) when\n    memory grows downward.  
(@jasone)\n  - Refactor madvise(2) configuration so that MADV_FREE is detected and utilized\n    on Linux 4.5 and newer.  (@jasone)\n  - Mark partially purged arena chunks as non-huge-page.  This improves\n    interaction with Linux's transparent huge page functionality.  (@jasone)\n\n  Bug fixes:\n  - Fix size class computations for edge conditions involving extremely large\n    allocations.  This regression was first released in 4.0.0.  (@jasone,\n    @ingvarha)\n  - Remove overly restrictive assertions related to the cactive statistic.  This\n    regression was first released in 4.1.0.  (@jasone)\n  - Implement a more reliable detection scheme for os_unfair_lock on macOS.\n    (@jszakmeister)\n\n* 4.3.1 (November 7, 2016)\n\n  Bug fixes:\n  - Fix a severe virtual memory leak.  This regression was first released in\n    4.3.0.  (@interwq, @jasone)\n  - Refactor atomic and prng APIs to restore support for 32-bit platforms that\n    use pre-C11 toolchains, e.g. FreeBSD's mips.  (@jasone)\n\n* 4.3.0 (November 4, 2016)\n\n  This is the first release that passes the test suite for multiple Windows\n  configurations, thanks in large part to @glandium setting up continuous\n  integration via AppVeyor (and Travis CI for Linux and OS X).\n\n  New features:\n  - Add \"J\" (JSON) support to malloc_stats_print().  (@jasone)\n  - Add Cray compiler support.  (@ronawho)\n\n  Optimizations:\n  - Add/use adaptive spinning for bootstrapping and radix tree node\n    initialization.  (@jasone)\n\n  Bug fixes:\n  - Fix large allocation to search starting in the optimal size class heap,\n    which can substantially reduce virtual memory churn and fragmentation.  This\n    regression was first released in 4.0.0.  (@mjp41, @jasone)\n  - Fix stats.arenas.<i>.nthreads accounting.  (@interwq)\n  - Fix and simplify decay-based purging.  (@jasone)\n  - Make DSS (sbrk(2)-related) operations lockless, which resolves potential\n    deadlocks during thread exit.  
(@jasone)\n  - Fix over-sized allocation of radix tree leaf nodes.  (@mjp41, @ogaun,\n    @jasone)\n  - Fix over-sized allocation of arena_t (plus associated stats) data\n    structures.  (@jasone, @interwq)\n  - Fix EXTRA_CFLAGS to not affect configuration.  (@jasone)\n  - Fix a Valgrind integration bug.  (@ronawho)\n  - Disallow 0x5a junk filling when running in Valgrind.  (@jasone)\n  - Fix a file descriptor leak on Linux.  This regression was first released in\n    4.2.0.  (@vsarunas, @jasone)\n  - Fix static linking of jemalloc with glibc.  (@djwatson)\n  - Use syscall(2) rather than {open,read,close}(2) during boot on Linux.  This\n    works around other libraries' system call wrappers performing reentrant\n    allocation.  (@kspinka, @Whissi, @jasone)\n  - Fix OS X default zone replacement to work with OS X 10.12.  (@glandium,\n    @jasone)\n  - Fix cached memory management to avoid needless commit/decommit operations\n    during purging, which resolves permanent virtual memory map fragmentation\n    issues on Windows.  (@mjp41, @jasone)\n  - Fix TSD fetches to avoid (recursive) allocation.  This is relevant to\n    non-TLS and Windows configurations.  (@jasone)\n  - Fix malloc_conf overriding to work on Windows.  (@jasone)\n  - Forcibly disable lazy-lock on Windows (was forcibly *enabled*).  (@jasone)\n\n* 4.2.1 (June 8, 2016)\n\n  Bug fixes:\n  - Fix bootstrapping issues for configurations that require allocation during\n    tsd initialization (e.g. --disable-tls).  (@cferris1000, @jasone)\n  - Fix gettimeofday() version of nstime_update().  (@ronawho)\n  - Fix Valgrind regressions in calloc() and chunk_alloc_wrapper().  (@ronawho)\n  - Fix potential VM map fragmentation regression.  (@jasone)\n  - Fix opt_zero-triggered in-place huge reallocation zeroing.  (@jasone)\n  - Fix heap profiling context leaks in reallocation edge cases.  
(@jasone)\n\n* 4.2.0 (May 12, 2016)\n\n  New features:\n  - Add the arena.<i>.reset mallctl, which makes it possible to discard all of\n    an arena's allocations in a single operation.  (@jasone)\n  - Add the stats.retained and stats.arenas.<i>.retained statistics.  (@jasone)\n  - Add the --with-version configure option.  (@jasone)\n  - Support --with-lg-page values larger than actual page size.  (@jasone)\n\n  Optimizations:\n  - Use pairing heaps rather than red-black trees for various hot data\n    structures.  (@djwatson, @jasone)\n  - Streamline fast paths of rtree operations.  (@jasone)\n  - Optimize the fast paths of calloc() and [m,d,sd]allocx().  (@jasone)\n  - Decommit unused virtual memory if the OS does not overcommit.  (@jasone)\n  - Specify MAP_NORESERVE on Linux if [heuristic] overcommit is active, in order\n    to avoid unfortunate interactions during fork(2).  (@jasone)\n\n  Bug fixes:\n  - Fix chunk accounting related to triggering gdump profiles.  (@jasone)\n  - Link against librt for clock_gettime(2) if glibc < 2.17.  (@jasone)\n  - Scale leak report summary according to sampling probability.  (@jasone)\n\n* 4.1.1 (May 3, 2016)\n\n  This bugfix release resolves a variety of mostly minor issues, though the\n  bitmap fix is critical for 64-bit Windows.\n\n  Bug fixes:\n  - Fix the linear scan version of bitmap_sfu() to shift by the proper amount\n    even when sizeof(long) is not the same as sizeof(void *), as on 64-bit\n    Windows.  (@jasone)\n  - Fix hashing functions to avoid unaligned memory accesses (and resulting\n    crashes).  This is relevant at least to some ARM-based platforms.\n    (@rkmisra)\n  - Fix fork()-related lock rank ordering reversals.  These reversals were\n    unlikely to cause deadlocks in practice except when heap profiling was\n    enabled and active.  (@jasone)\n  - Fix various chunk leaks in OOM code paths.  (@jasone)\n  - Fix malloc_stats_print() to print opt.narenas correctly.  
(@jasone)\n  - Fix MSVC-specific build/test issues.  (@rustyx, @yuslepukhin)\n  - Fix a variety of test failures that were due to test fragility rather than\n    core bugs.  (@jasone)\n\n* 4.1.0 (February 28, 2016)\n\n  This release is primarily about optimizations, but it also incorporates a lot\n  of portability-motivated refactoring and enhancements.  Many people worked on\n  this release; even omitting minor changes (see git revision history) and the\n  people who reported and diagnosed issues, so many contributions were made\n  that, starting with this release, changes are annotated with author credits\n  to help reflect the collaborative effort involved.\n\n  New features:\n  - Implement decay-based unused dirty page purging, a major optimization with\n    mallctl API impact.  This is an alternative to the existing ratio-based\n    unused dirty page purging, and is intended to eventually become the sole\n    purging mechanism.  New mallctls:\n    + opt.purge\n    + opt.decay_time\n    + arena.<i>.decay\n    + arena.<i>.decay_time\n    + arenas.decay_time\n    + stats.arenas.<i>.decay_time\n    (@jasone, @cevans87)\n  - Add --with-malloc-conf, which makes it possible to embed a default\n    options string during configuration.  This was motivated by the desire to\n    specify --with-malloc-conf=purge:decay , since the default must remain\n    purge:ratio until the 5.0.0 release.  (@jasone)\n  - Add MS Visual Studio 2015 support.  (@rustyx, @yuslepukhin)\n  - Make *allocx() size class overflow behavior defined.  The maximum\n    size class is now less than PTRDIFF_MAX to protect applications against\n    numerical overflow, and all allocation functions are guaranteed to indicate\n    errors rather than potentially crashing if the request size exceeds the\n    maximum size class.  (@jasone)\n  - jeprof:\n    + Add raw heap profile support.  (@jasone)\n    + Add --retain and --exclude for backtrace symbol filtering.  
(@jasone)\n\n  Optimizations:\n  - Optimize the fast path to combine various bootstrapping and configuration\n    checks and execute more streamlined code in the common case.  (@interwq)\n  - Use linear scan for small bitmaps (used for small object tracking).  In\n    addition to speeding up bitmap operations on 64-bit systems, this reduces\n    allocator metadata overhead by approximately 0.2%.  (@djwatson)\n  - Separate arena_avail trees, which substantially speeds up run tree\n    operations.  (@djwatson)\n  - Use memoization (boot-time-computed table) for run quantization.  Separate\n    arena_avail trees reduced the importance of this optimization.  (@jasone)\n  - Attempt mmap-based in-place huge reallocation.  This can dramatically speed\n    up incremental huge reallocation.  (@jasone)\n\n  Incompatible changes:\n  - Make opt.narenas unsigned rather than size_t.  (@jasone)\n\n  Bug fixes:\n  - Fix stats.cactive accounting regression.  (@rustyx, @jasone)\n  - Handle unaligned keys in hash().  This caused problems for some ARM systems.\n    (@jasone, @cferris1000)\n  - Refactor arenas array.  In addition to fixing a fork-related deadlock, this\n    makes arena lookups faster and simpler.  (@jasone)\n  - Move retained memory allocation out of the default chunk allocation\n    function, to a location that gets executed even if the application installs\n    a custom chunk allocation function.  This resolves a virtual memory leak.\n    (@buchgr)\n  - Fix a potential tsd cleanup leak.  (@cferris1000, @jasone)\n  - Fix run quantization.  In practice this bug had no impact unless\n    applications requested memory with alignment exceeding one page.\n    (@jasone, @djwatson)\n  - Fix LinuxThreads-specific bootstrapping deadlock.  (Cosmin Paraschiv)\n  - jeprof:\n    + Don't discard curl options if timeout is not defined.  (@djwatson)\n    + Detect failed profile fetches.  
(@djwatson)\n  - Fix stats.arenas.<i>.{dss,lg_dirty_mult,decay_time,pactive,pdirty} for the\n    --disable-stats case.  (@jasone)\n\n* 4.0.4 (October 24, 2015)\n\n  This bugfix release fixes another xallocx() regression.  No other regressions\n  have come to light in over a month, so this is likely a good starting point\n  for people who prefer to wait for \"dot one\" releases with all the major issues\n  shaken out.\n\n  Bug fixes:\n  - Fix xallocx(..., MALLOCX_ZERO) to zero the last full trailing page of large\n    allocations that have been randomly assigned an offset of 0 when the\n    --enable-cache-oblivious configure option is enabled.\n\n* 4.0.3 (September 24, 2015)\n\n  This bugfix release continues the trend of xallocx() and heap profiling fixes.\n\n  Bug fixes:\n  - Fix xallocx(..., MALLOCX_ZERO) to zero all trailing bytes of large\n    allocations when the --enable-cache-oblivious configure option is enabled.\n  - Fix xallocx(..., MALLOCX_ZERO) to zero trailing bytes of huge allocations\n    when resizing from/to a size class that is not a multiple of the chunk size.\n  - Fix prof_tctx_dump_iter() to filter out nodes that were created after heap\n    profile dumping started.\n  - Work around a potentially bad thread-specific data initialization\n    interaction with NPTL (glibc's pthreads implementation).\n\n* 4.0.2 (September 21, 2015)\n\n  This bugfix release addresses a few bugs specific to heap profiling.\n\n  Bug fixes:\n  - Fix ixallocx_prof_sample() to never modify nor create sampled small\n    allocations.  
xallocx() is in general incapable of moving small allocations,\n    so this fix removes buggy code without loss of generality.\n  - Fix irallocx_prof_sample() to always allocate large regions, even when\n    alignment is non-zero.\n  - Fix prof_alloc_rollback() to read tdata from thread-specific data rather\n    than dereferencing a potentially invalid tctx.\n\n* 4.0.1 (September 15, 2015)\n\n  This is a bugfix release that is somewhat high risk due to the amount of\n  refactoring required to address deep xallocx() problems.  As a side effect of\n  these fixes, xallocx() now tries harder to partially fulfill requests for\n  optional extra space.  Note that a couple of minor heap profiling\n  optimizations are included, but these are better thought of as performance\n  fixes that were integral to discovering most of the other bugs.\n\n  Optimizations:\n  - Avoid a chunk metadata read in arena_prof_tctx_set(), since it is in the\n    fast path when heap profiling is enabled.  Additionally, split a special\n    case out into arena_prof_tctx_reset(), which also avoids chunk metadata\n    reads.\n  - Optimize irallocx_prof() to optimistically update the sampler state.  The\n    prior implementation appears to have been a holdover from when\n    rallocx()/xallocx() functionality was combined as rallocm().\n\n  Bug fixes:\n  - Fix TLS configuration such that it is enabled by default for platforms on\n    which it works correctly.\n  - Fix arenas_cache_cleanup() and arena_get_hard() to handle\n    allocation/deallocation within the application's thread-specific data\n    cleanup functions even after arenas_cache is torn down.\n  - Fix xallocx() bugs related to size+extra exceeding HUGE_MAXCLASS.\n  - Fix chunk purge hook calls for in-place huge shrinking reallocation to\n    specify the old chunk size rather than the new chunk size.  
This bug caused\n    no correctness issues for the default chunk purge function, but was\n    visible to custom functions set via the \"arena.<i>.chunk_hooks\" mallctl.\n  - Fix heap profiling bugs:\n    + Fix heap profiling to distinguish among otherwise identical sample sites\n      with interposed resets (triggered via the \"prof.reset\" mallctl).  This bug\n      could cause data structure corruption that would most likely result in a\n      segfault.\n    + Fix irealloc_prof() to prof_alloc_rollback() on OOM.\n    + Make one call to prof_active_get_unlocked() per allocation event, and use\n      the result throughout the relevant functions that handle an allocation\n      event.  Also add a missing check in prof_realloc().  These fixes protect\n      allocation events against concurrent prof_active changes.\n    + Fix ixallocx_prof() to pass usize_max and zero to ixallocx_prof_sample()\n      in the correct order.\n    + Fix prof_realloc() to call prof_free_sampled_object() after calling\n      prof_malloc_sample_object().  Prior to this fix, if tctx and old_tctx were\n      the same, the tctx could have been prematurely destroyed.\n  - Fix portability bugs:\n    + Don't bitshift by negative amounts when encoding/decoding run sizes in\n      chunk header maps.  This affected systems with page sizes greater than 8\n      KiB.\n    + Rename index_t to szind_t to avoid an existing type on Solaris.\n    + Add JEMALLOC_CXX_THROW to the memalign() function prototype, in order to\n      match glibc and avoid compilation errors when including both\n      jemalloc/jemalloc.h and malloc.h in C++ code.\n    + Don't assume that /bin/sh is appropriate when running size_classes.sh\n      during configuration.\n    + Consider __sparcv9 a synonym for __sparc64__ when defining LG_QUANTUM.\n    + Link tests to librt if it contains clock_gettime(2).\n\n* 4.0.0 (August 17, 2015)\n\n  This version contains many speed and space optimizations, both minor and\n  major.  
The major themes are generalization, unification, and simplification.\n  Although many of these optimizations cause no visible behavior change, their\n  cumulative effect is substantial.\n\n  New features:\n  - Normalize size class spacing to be consistent across the complete size\n    range.  By default there are four size classes per size doubling, but this\n    is now configurable via the --with-lg-size-class-group option.  Also add the\n    --with-lg-page, --with-lg-page-sizes, --with-lg-quantum, and\n    --with-lg-tiny-min options, which can be used to tweak page and size class\n    settings.  Impacts:\n    + Worst case performance for incrementally growing/shrinking reallocation\n      is improved because there are far fewer size classes, and therefore\n      copying happens less often.\n    + Internal fragmentation is limited to 20% for all but the smallest size\n      classes (those less than four times the quantum).  (1B + 4 KiB)\n      and (1B + 4 MiB) previously suffered nearly 50% internal fragmentation.\n    + Chunk fragmentation tends to be lower because there are fewer distinct run\n      sizes to pack.\n  - Add support for explicit tcaches.  The \"tcache.create\", \"tcache.flush\", and\n    \"tcache.destroy\" mallctls control tcache lifetime and flushing, and the\n    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to the *allocx() API\n    control which tcache is used for each operation.\n  - Implement per thread heap profiling, as well as the ability to\n    enable/disable heap profiling on a per thread basis.  
Add the \"prof.reset\",\n    \"prof.lg_sample\", \"thread.prof.name\", \"thread.prof.active\",\n    \"opt.prof_thread_active_init\", \"prof.thread_active_init\", and\n    \"thread.prof.active\" mallctls.\n  - Add support for per arena application-specified chunk allocators, configured\n    via the \"arena.<i>.chunk_hooks\" mallctl.\n  - Refactor huge allocation to be managed by arenas, so that arenas now\n    function as general purpose independent allocators.  This is important in\n    the context of user-specified chunk allocators, aside from the scalability\n    benefits.  Related new statistics:\n    + The \"stats.arenas.<i>.huge.allocated\", \"stats.arenas.<i>.huge.nmalloc\",\n      \"stats.arenas.<i>.huge.ndalloc\", and \"stats.arenas.<i>.huge.nrequests\"\n      mallctls provide high level per arena huge allocation statistics.\n    + The \"arenas.nhchunks\", \"arenas.hchunk.<i>.size\",\n      \"stats.arenas.<i>.hchunks.<j>.nmalloc\",\n      \"stats.arenas.<i>.hchunks.<j>.ndalloc\",\n      \"stats.arenas.<i>.hchunks.<j>.nrequests\", and\n      \"stats.arenas.<i>.hchunks.<j>.curhchunks\" mallctls provide per size class\n      statistics.\n  - Add the 'util' column to malloc_stats_print() output, which reports the\n    proportion of available regions that are currently in use for each small\n    size class.\n  - Add \"alloc\" and \"free\" modes for for junk filling (see the \"opt.junk\"\n    mallctl), so that it is possible to separately enable junk filling for\n    allocation versus deallocation.\n  - Add the jemalloc-config script, which provides information about how\n    jemalloc was configured, and how to integrate it into application builds.\n  - Add metadata statistics, which are accessible via the \"stats.metadata\",\n    \"stats.arenas.<i>.metadata.mapped\", and\n    \"stats.arenas.<i>.metadata.allocated\" mallctls.\n  - Add the \"stats.resident\" mallctl, which reports the upper limit of\n    physically resident memory mapped by the allocator.\n  - Add 
per arena control over unused dirty page purging, via the\n    \"arenas.lg_dirty_mult\", \"arena.<i>.lg_dirty_mult\", and\n    \"stats.arenas.<i>.lg_dirty_mult\" mallctls.\n  - Add the \"prof.gdump\" mallctl, which makes it possible to toggle the gdump\n    feature on/off during program execution.\n  - Add sdallocx(), which implements sized deallocation.  The primary\n    optimization over dallocx() is the removal of a metadata read, which often\n    suffers an L1 cache miss.\n  - Add missing header includes in jemalloc/jemalloc.h, so that applications\n    only have to #include <jemalloc/jemalloc.h>.\n  - Add support for additional platforms:\n    + Bitrig\n    + Cygwin\n    + DragonFlyBSD\n    + iOS\n    + OpenBSD\n    + OpenRISC/or1k\n\n  Optimizations:\n  - Maintain dirty runs in per arena LRUs rather than in per arena trees of\n    dirty-run-containing chunks.  In practice this change significantly reduces\n    dirty page purging volume.\n  - Integrate whole chunks into the unused dirty page purging machinery.  This\n    reduces the cost of repeated huge allocation/deallocation, because it\n    effectively introduces a cache of chunks.\n  - Split the arena chunk map into two separate arrays, in order to increase\n    cache locality for the frequently accessed bits.\n  - Move small run metadata out of runs, into arena chunk headers.  This reduces\n    run fragmentation, smaller runs reduce external fragmentation for small size\n    classes, and packed (less uniformly aligned) metadata layout improves CPU\n    cache set distribution.\n  - Randomly distribute large allocation base pointer alignment relative to page\n    boundaries in order to more uniformly utilize CPU cache sets.  This can be\n    disabled via the --disable-cache-oblivious configure option, and queried via\n    the \"config.cache_oblivious\" mallctl.\n  - Micro-optimize the fast paths for the public API functions.\n  - Refactor thread-specific data to reside in a single structure.  
This assures\n    that only a single TLS read is necessary per call into the public API.\n  - Implement in-place huge allocation growing and shrinking.\n  - Refactor rtree (radix tree for chunk lookups) to be lock-free, and make\n    additional optimizations that reduce maximum lookup depth to one or two\n    levels.  This resolves what was a concurrency bottleneck for per arena huge\n    allocation, because a global data structure is critical for determining\n    which arenas own which huge allocations.\n\n  Incompatible changes:\n  - Replace --enable-cc-silence with --disable-cc-silence to suppress spurious\n    warnings by default.\n  - Assure that the constness of malloc_usable_size()'s return type matches that\n    of the system implementation.\n  - Change the heap profile dump format to support per thread heap profiling,\n    rename pprof to jeprof, and enhance it with the --thread=<n> option.  As a\n    result, the bundled jeprof must now be used rather than the upstream\n    (gperftools) pprof.\n  - Disable \"opt.prof_final\" by default, in order to avoid atexit(3), which can\n    internally deadlock on some platforms.\n  - Change the \"arenas.nlruns\" mallctl type from size_t to unsigned.\n  - Replace the \"stats.arenas.<i>.bins.<j>.allocated\" mallctl with\n    \"stats.arenas.<i>.bins.<j>.curregs\".\n  - Ignore MALLOC_CONF in set{uid,gid,cap} binaries.\n  - Ignore MALLOCX_ARENA(a) in dallocx(), in favor of using the\n    MALLOCX_TCACHE(tc) and MALLOCX_TCACHE_NONE flags to control tcache usage.\n\n  Removed features:\n  - Remove the *allocm() API, which is superseded by the *allocx() API.\n  - Remove the --enable-dss options, and make dss non-optional on all platforms\n    which support sbrk(2).\n  - Remove the \"arenas.purge\" mallctl, which was obsoleted by the\n    \"arena.<i>.purge\" mallctl in 3.1.0.\n  - Remove the unnecessary \"opt.valgrind\" mallctl; jemalloc automatically\n    detects whether it is running inside Valgrind.\n  - Remove the 
\"stats.huge.allocated\", \"stats.huge.nmalloc\", and\n    \"stats.huge.ndalloc\" mallctls.\n  - Remove the --enable-mremap option.\n  - Remove the \"stats.chunks.current\", \"stats.chunks.total\", and\n    \"stats.chunks.high\" mallctls.\n\n  Bug fixes:\n  - Fix the cactive statistic to decrease (rather than increase) when active\n    memory decreases.  This regression was first released in 3.5.0.\n  - Fix OOM handling in memalign() and valloc().  A variant of this bug existed\n    in all releases since 2.0.0, which introduced these functions.\n  - Fix an OOM-related regression in arena_tcache_fill_small(), which could\n    cause cache corruption on OOM.  This regression was present in all releases\n    from 2.2.0 through 3.6.0.\n  - Fix size class overflow handling for malloc(), posix_memalign(), memalign(),\n    calloc(), and realloc() when profiling is enabled.\n  - Fix the \"arena.<i>.dss\" mallctl to return an error if \"primary\" or\n    \"secondary\" precedence is specified, but sbrk(2) is not supported.\n  - Fix fallback lg_floor() implementations to handle extremely large inputs.\n  - Ensure the default purgeable zone is after the default zone on OS X.\n  - Fix latent bugs in atomic_*().\n  - Fix the \"arena.<i>.dss\" mallctl to handle read-only calls.\n  - Fix tls_model configuration to enable the initial-exec model when possible.\n  - Mark malloc_conf as a weak symbol so that the application can override it.\n  - Correctly detect glibc's adaptive pthread mutexes.\n  - Fix the --without-export configure option.\n\n* 3.6.0 (March 31, 2014)\n\n  This version contains a critical bug fix for a regression present in 3.5.0 and\n  3.5.1.\n\n  Bug fixes:\n  - Fix a regression in arena_chunk_alloc() that caused crashes during\n    small/large allocation if chunk allocation failed.  In the absence of this\n    bug, chunk allocation failure would result in allocation failure, e.g.  NULL\n    return from malloc().  
This regression was introduced in 3.5.0.\n  - Fix backtracing for gcc intrinsics-based backtracing by specifying\n    -fno-omit-frame-pointer to gcc.  Note that the application (and all the\n    libraries it links to) must also be compiled with this option for\n    backtracing to be reliable.\n  - Use dss allocation precedence for huge allocations as well as small/large\n    allocations.\n  - Fix test assertion failure message formatting.  This bug did not manifest on\n    x86_64 systems because of implementation subtleties in va_list.\n  - Fix inconsequential test failures for hash and SFMT code.\n\n  New features:\n  - Support heap profiling on FreeBSD.  This feature depends on the proc\n    filesystem being mounted during heap profile dumping.\n\n* 3.5.1 (February 25, 2014)\n\n  This version primarily addresses minor bugs in test code.\n\n  Bug fixes:\n  - Configure Solaris/Illumos to use MADV_FREE.\n  - Fix junk filling for mremap(2)-based huge reallocation.  This is only\n    relevant if configuring with the --enable-mremap option specified.\n  - Avoid compilation failure if 'restrict' C99 keyword is not supported by the\n    compiler.\n  - Add a configure test for SSE2 rather than assuming it is usable on i686\n    systems.  This fixes test compilation errors, especially on 32-bit Linux\n    systems.\n  - Fix mallctl argument size mismatches (size_t vs. uint64_t) in the stats unit\n    test.\n  - Fix/remove flawed alignment-related overflow tests.\n  - Prevent compiler optimizations that could change backtraces in the\n    prof_accum unit test.\n\n* 3.5.0 (January 22, 2014)\n\n  This version focuses on refactoring and automated testing, though it also\n  includes some non-trivial heap profiling optimizations not mentioned below.\n\n  New features:\n  - Add the *allocx() API, which is a successor to the experimental *allocm()\n    API.  
The *allocx() functions are slightly simpler to use because they have\n    fewer parameters, they directly return the results of primary interest, and\n    mallocx()/rallocx() avoid the strict aliasing pitfall that\n    allocm()/rallocm() share with posix_memalign().  Note that *allocm() is\n    slated for removal in the next non-bugfix release.\n  - Add support for LinuxThreads.\n\n  Bug fixes:\n  - Unless heap profiling is enabled, disable floating point code and don't link\n    with libm.  This, in combination with e.g. EXTRA_CFLAGS=-mno-sse on x64\n    systems, makes it possible to completely disable floating point register\n    use.  Some versions of glibc neglect to save/restore caller-saved floating\n    point registers during dynamic lazy symbol loading, and the symbol loading\n    code uses whatever malloc the application happens to have linked/loaded\n    with, the result being potential floating point register corruption.\n  - Report ENOMEM rather than EINVAL if an OOM occurs during heap profiling\n    backtrace creation in imemalign().  This bug impacted posix_memalign() and\n    aligned_alloc().\n  - Fix a file descriptor leak in a prof_dump_maps() error path.\n  - Fix prof_dump() to close the dump file descriptor for all relevant error\n    paths.\n  - Fix rallocm() to use the arena specified by the ALLOCM_ARENA(s) flag for\n    allocation, not just deallocation.\n  - Fix a data race for large allocation stats counters.\n  - Fix a potential infinite loop during thread exit.  This bug occurred on\n    Solaris, and could affect other platforms with similar pthreads TSD\n    implementations.\n  - Don't junk-fill reallocations unless usable size changes.  
This fixes a\n    violation of the *allocx()/*allocm() semantics.\n  - Fix growing large reallocation to junk fill new space.\n  - Fix huge deallocation to junk fill when munmap is disabled.\n  - Change the default private namespace prefix from empty to je_, and change\n    --with-private-namespace-prefix so that it prepends an additional prefix\n    rather than replacing je_.  This reduces the likelihood of applications\n    which statically link jemalloc experiencing symbol name collisions.\n  - Add missing private namespace mangling (relevant when\n    --with-private-namespace is specified).\n  - Add and use JEMALLOC_INLINE_C so that static inline functions are marked as\n    static even for debug builds.\n  - Add a missing mutex unlock in a malloc_init_hard() error path.  In practice\n    this error path is never executed.\n  - Fix numerous bugs in malloc_strtoumax() error handling/reporting.  These\n    bugs had no impact except for malformed inputs.\n  - Fix numerous bugs in malloc_snprintf().  These bugs were not exercised by\n    existing calls, so they had no impact.\n\n* 3.4.1 (October 20, 2013)\n\n  Bug fixes:\n  - Fix a race in the \"arenas.extend\" mallctl that could cause memory corruption\n    of internal data structures and subsequent crashes.\n  - Fix Valgrind integration flaws that caused Valgrind warnings about reads of\n    uninitialized memory in:\n    + arena chunk headers\n    + internal zero-initialized data structures (relevant to tcache and prof\n      code)\n  - Preserve errno during the first allocation.  
A readlink(2) call during\n    initialization fails unless /etc/malloc.conf exists, so errno was typically\n    set during the first allocation prior to this fix.\n  - Fix compilation warnings reported by gcc 4.8.1.\n\n* 3.4.0 (June 2, 2013)\n\n  This version is essentially a small bugfix release, but the addition of\n  aarch64 support requires that the minor version be incremented.\n\n  Bug fixes:\n  - Fix race-triggered deadlocks in chunk_record().  These deadlocks were\n    typically triggered by multiple threads concurrently deallocating huge\n    objects.\n\n  New features:\n  - Add support for the aarch64 architecture.\n\n* 3.3.1 (March 6, 2013)\n\n  This version fixes bugs that are typically encountered only when utilizing\n  custom run-time options.\n\n  Bug fixes:\n  - Fix a locking order bug that could cause deadlock during fork if heap\n    profiling were enabled.\n  - Fix a chunk recycling bug that could cause the allocator to lose track of\n    whether a chunk was zeroed.  On FreeBSD, NetBSD, and OS X, it could cause\n    corruption if allocating via sbrk(2) (unlikely unless running with the\n    \"dss:primary\" option specified).  This was completely harmless on Linux\n    unless using mlockall(2) (and unlikely even then, unless the\n    --disable-munmap configure option or the \"dss:primary\" option was\n    specified).  This regression was introduced in 3.1.0 by the\n    mlockall(2)/madvise(2) interaction fix.\n  - Fix TLS-related memory corruption that could occur during thread exit if the\n    thread never allocated memory.  
Only the quarantine and prof facilities were\n    susceptible.\n  - Fix two quarantine bugs:\n    + Internal reallocation of the quarantined object array leaked the old\n      array.\n    + Reallocation failure for internal reallocation of the quarantined object\n      array (very unlikely) resulted in memory corruption.\n  - Fix Valgrind integration to annotate all internally allocated memory in a\n    way that keeps Valgrind happy about internal data structure access.\n  - Fix building for s390 systems.\n\n* 3.3.0 (January 23, 2013)\n\n  This version includes a few minor performance improvements in addition to the\n  listed new features and bug fixes.\n\n  New features:\n  - Add clipping support to lg_chunk option processing.\n  - Add the --enable-ivsalloc option.\n  - Add the --without-export option.\n  - Add the --disable-zone-allocator option.\n\n  Bug fixes:\n  - Fix \"arenas.extend\" mallctl to output the number of arenas.\n  - Fix chunk_recycle() to unconditionally inform Valgrind that returned memory\n    is undefined.\n  - Fix build break on FreeBSD related to alloca.h.\n\n* 3.2.0 (November 9, 2012)\n\n  In addition to a couple of bug fixes, this version modifies page run\n  allocation and dirty page purging algorithms in order to better control\n  page-level virtual memory fragmentation.\n\n  Incompatible changes:\n  - Change the \"opt.lg_dirty_mult\" default from 5 to 3 (32:1 to 8:1).\n\n  Bug fixes:\n  - Fix dss/mmap allocation precedence code to use recyclable mmap memory only\n    after primary dss allocation fails.\n  - Fix deadlock in the \"arenas.purge\" mallctl.  
This regression was introduced\n    in 3.1.0 by the addition of the \"arena.<i>.purge\" mallctl.\n\n* 3.1.0 (October 16, 2012)\n\n  New features:\n  - Auto-detect whether running inside Valgrind, thus removing the need to\n    manually specify MALLOC_CONF=valgrind:true.\n  - Add the \"arenas.extend\" mallctl, which allows applications to create\n    manually managed arenas.\n  - Add the ALLOCM_ARENA() flag for {,r,d}allocm().\n  - Add the \"opt.dss\", \"arena.<i>.dss\", and \"stats.arenas.<i>.dss\" mallctls,\n    which provide control over dss/mmap precedence.\n  - Add the \"arena.<i>.purge\" mallctl, which obsoletes \"arenas.purge\".\n  - Define LG_QUANTUM for hppa.\n\n  Incompatible changes:\n  - Disable tcache by default if running inside Valgrind, in order to avoid\n    making unallocated objects appear reachable to Valgrind.\n  - Drop const from malloc_usable_size() argument on Linux.\n\n  Bug fixes:\n  - Fix heap profiling crash if sampled object is freed via realloc(p, 0).\n  - Remove const from __*_hook variable declarations, so that glibc can modify\n    them during process forking.\n  - Fix mlockall(2)/madvise(2) interaction.\n  - Fix fork(2)-related deadlocks.\n  - Fix error return value for \"thread.tcache.enabled\" mallctl.\n\n* 3.0.0 (May 11, 2012)\n\n  Although this version adds some major new features, the primary focus is on\n  internal code cleanup that facilitates maintainability and portability, most\n  of which is not reflected in the ChangeLog.  This is the first release to\n  incorporate substantial contributions from numerous other developers, and the\n  result is a more broadly useful allocator (see the git revision history for\n  contribution details).  
Note that the license has been unified, thanks to\n  Facebook granting a license under the same terms as the other copyright\n  holders (see COPYING).\n\n  New features:\n  - Implement Valgrind support, redzones, and quarantine.\n  - Add support for additional platforms:\n    + FreeBSD\n    + Mac OS X Lion\n    + MinGW\n    + Windows (no support yet for replacing the system malloc)\n  - Add support for additional architectures:\n    + MIPS\n    + SH4\n    + Tilera\n  - Add support for cross compiling.\n  - Add nallocm(), which rounds a request size up to the nearest size class\n    without actually allocating.\n  - Implement aligned_alloc() (blame C11).\n  - Add the \"thread.tcache.enabled\" mallctl.\n  - Add the \"opt.prof_final\" mallctl.\n  - Update pprof (from gperftools 2.0).\n  - Add the --with-mangling option.\n  - Add the --disable-experimental option.\n  - Add the --disable-munmap option, and make it the default on Linux.\n  - Add the --enable-mremap option, which disables use of mremap(2) by default.\n\n  Incompatible changes:\n  - Enable stats by default.\n  - Enable fill by default.\n  - Disable lazy locking by default.\n  - Rename the \"tcache.flush\" mallctl to \"thread.tcache.flush\".\n  - Rename the \"arenas.pagesize\" mallctl to \"arenas.page\".\n  - Change the \"opt.lg_prof_sample\" default from 0 to 19 (1 B to 512 KiB).\n  - Change the \"opt.prof_accum\" default from true to false.\n\n  Removed features:\n  - Remove the swap feature, including the \"config.swap\", \"swap.avail\",\n    \"swap.prezeroed\", \"swap.nfds\", and \"swap.fds\" mallctls.\n  - Remove highruns statistics, including the\n    \"stats.arenas.<i>.bins.<j>.highruns\" and\n    \"stats.arenas.<i>.lruns.<j>.highruns\" mallctls.\n  - As part of small size class refactoring, remove the \"opt.lg_[qc]space_max\",\n    \"arenas.cacheline\", \"arenas.subpage\", \"arenas.[tqcs]space_{min,max}\", and\n    \"arenas.[tqcs]bins\" mallctls.\n  - Remove the \"arenas.chunksize\" mallctl.\n  - 
Remove the \"opt.lg_prof_tcmax\" option.\n  - Remove the \"opt.lg_prof_bt_max\" option.\n  - Remove the \"opt.lg_tcache_gc_sweep\" option.\n  - Remove the --disable-tiny option, including the \"config.tiny\" mallctl.\n  - Remove the --enable-dynamic-page-shift configure option.\n  - Remove the --enable-sysv configure option.\n\n  Bug fixes:\n  - Fix a statistics-related bug in the \"thread.arena\" mallctl that could cause\n    invalid statistics and crashes.\n  - Work around TLS deallocation via free() on Linux.  This bug could cause\n    write-after-free memory corruption.\n  - Fix a potential deadlock that could occur during interval- and\n    growth-triggered heap profile dumps.\n  - Fix large calloc() zeroing bugs due to dropping chunk map unzeroed flags.\n  - Fix chunk_alloc_dss() to stop claiming memory is zeroed.  This bug could\n    cause memory corruption and crashes with --enable-dss specified.\n  - Fix fork-related bugs that could cause deadlock in children between fork\n    and exec.\n  - Fix malloc_stats_print() to honor 'b' and 'l' in the opts parameter.\n  - Fix realloc(p, 0) to act like free(p).\n  - Do not enforce minimum alignment in memalign().\n  - Check for NULL pointer in malloc_usable_size().\n  - Fix an off-by-one heap profile statistics bug that could be observed in\n    interval- and growth-triggered heap profiles.\n  - Fix the \"epoch\" mallctl to update cached stats even if the passed in epoch\n    is 0.\n  - Fix bin->runcur management to fix a layout policy bug.  
This bug did not\n    affect correctness.\n  - Fix a bug in choose_arena_hard() that potentially caused more arenas to be\n    initialized than necessary.\n  - Add missing \"opt.lg_tcache_max\" mallctl implementation.\n  - Use glibc allocator hooks to make mixed allocator usage less likely.\n  - Fix build issues for --disable-tcache.\n  - Don't mangle pthread_create() when --with-private-namespace is specified.\n\n* 2.2.5 (November 14, 2011)\n\n  Bug fixes:\n  - Fix huge_ralloc() race when using mremap(2).  This is a serious bug that\n    could cause memory corruption and/or crashes.\n  - Fix huge_ralloc() to maintain chunk statistics.\n  - Fix malloc_stats_print(..., \"a\") output.\n\n* 2.2.4 (November 5, 2011)\n\n  Bug fixes:\n  - Initialize arenas_tsd before using it.  This bug existed for 2.2.[0-3], as\n    well as for --disable-tls builds in earlier releases.\n  - Do not assume a 4 KiB page size in test/rallocm.c.\n\n* 2.2.3 (August 31, 2011)\n\n  This version fixes numerous bugs related to heap profiling.\n\n  Bug fixes:\n  - Fix a prof-related race condition.  This bug could cause memory corruption,\n    but only occurred in non-default configurations (prof_accum:false).\n  - Fix off-by-one backtracing issues (make sure that prof_alloc_prep() is\n    excluded from backtraces).\n  - Fix a prof-related bug in realloc() (only triggered by OOM errors).\n  - Fix prof-related bugs in allocm() and rallocm().\n  - Fix prof_tdata_cleanup() for --disable-tls builds.\n  - Fix a relative include path, to fix objdir builds.\n\n* 2.2.2 (July 30, 2011)\n\n  Bug fixes:\n  - Fix a build error for --disable-tcache.\n  - Fix assertions in arena_purge() (for real this time).\n  - Add the --with-private-namespace option.  This is a workaround for symbol\n    conflicts that can inadvertently arise when using static libraries.\n\n* 2.2.1 (March 30, 2011)\n\n  Bug fixes:\n  - Implement atomic operations for x86/x64.  
This fixes compilation failures\n    for versions of gcc that are still in wide use.\n  - Fix an assertion in arena_purge().\n\n* 2.2.0 (March 22, 2011)\n\n  This version incorporates several improvements to algorithms and data\n  structures that tend to reduce fragmentation and increase speed.\n\n  New features:\n  - Add the \"stats.cactive\" mallctl.\n  - Update pprof (from google-perftools 1.7).\n  - Improve backtracing-related configuration logic, and add the\n    --disable-prof-libgcc option.\n\n  Bug fixes:\n  - Change default symbol visibility from \"internal\", to \"hidden\", which\n    decreases the overhead of library-internal function calls.\n  - Fix symbol visibility so that it is also set on OS X.\n  - Fix a build dependency regression caused by the introduction of the .pic.o\n    suffix for PIC object files.\n  - Add missing checks for mutex initialization failures.\n  - Don't use libgcc-based backtracing except on x64, where it is known to work.\n  - Fix deadlocks on OS X that were due to memory allocation in\n    pthread_mutex_lock().\n  - Heap profiling-specific fixes:\n    + Fix memory corruption due to integer overflow in small region index\n      computation, when using a small enough sample interval that profiling\n      context pointers are stored in small run headers.\n    + Fix a bootstrap ordering bug that only occurred with TLS disabled.\n    + Fix a rallocm() rsize bug.\n    + Fix error detection bugs for aligned memory allocation.\n\n* 2.1.3 (March 14, 2011)\n\n  Bug fixes:\n  - Fix a cpp logic regression (due to the \"thread.{de,}allocatedp\" mallctl fix\n    for OS X in 2.1.2).\n  - Fix a \"thread.arena\" mallctl bug.\n  - Fix a thread cache stats merging bug.\n\n* 2.1.2 (March 2, 2011)\n\n  Bug fixes:\n  - Fix \"thread.{de,}allocatedp\" mallctl for OS X.\n  - Add missing jemalloc.a to build system.\n\n* 2.1.1 (January 31, 2011)\n\n  Bug fixes:\n  - Fix aligned huge reallocation (affected allocm()).\n  - Fix the ALLOCM_LG_ALIGN macro 
definition.\n  - Fix a heap dumping deadlock.\n  - Fix a \"thread.arena\" mallctl bug.\n\n* 2.1.0 (December 3, 2010)\n\n  This version incorporates some optimizations that can't quite be considered\n  bug fixes.\n\n  New features:\n  - Use Linux's mremap(2) for huge object reallocation when possible.\n  - Avoid locking in mallctl*() when possible.\n  - Add the \"thread.[de]allocatedp\" mallctls.\n  - Convert the manual page source from roff to DocBook, and generate both roff\n    and HTML manuals.\n\n  Bug fixes:\n  - Fix a crash due to incorrect bootstrap ordering.  This only impacted\n    --enable-debug --enable-dss configurations.\n  - Fix a minor statistics bug for mallctl(\"swap.avail\", ...).\n\n* 2.0.1 (October 29, 2010)\n\n  Bug fixes:\n  - Fix a race condition in heap profiling that could cause undefined behavior\n    if \"opt.prof_accum\" were disabled.\n  - Add missing mutex unlocks for some OOM error paths in the heap profiling\n    code.\n  - Fix a compilation error for non-C99 builds.\n\n* 2.0.0 (October 24, 2010)\n\n  This version focuses on the experimental *allocm() API, and on improved\n  run-time configuration/introspection.  Nonetheless, numerous performance\n  improvements are also included.\n\n  New features:\n  - Implement the experimental {,r,s,d}allocm() API, which provides a superset\n    of the functionality available via malloc(), calloc(), posix_memalign(),\n    realloc(), malloc_usable_size(), and free().  These functions can be used to\n    allocate/reallocate aligned zeroed memory, ask for optional extra memory\n    during reallocation, prevent object movement during reallocation, etc.\n  - Replace JEMALLOC_OPTIONS/JEMALLOC_PROF_PREFIX with MALLOC_CONF, which is\n    more human-readable, and more flexible.  For example:\n      JEMALLOC_OPTIONS=AJP\n    is now:\n      MALLOC_CONF=abort:true,fill:true,stats_print:true\n  - Port to Apple OS X.  
Sponsored by Mozilla.\n  - Make it possible for the application to control thread-->arena mappings via\n    the \"thread.arena\" mallctl.\n  - Add compile-time support for all TLS-related functionality via pthreads TSD.\n    This is mainly of interest for OS X, which does not support TLS, but has a\n    TSD implementation with similar performance.\n  - Override memalign() and valloc() if they are provided by the system.\n  - Add the \"arenas.purge\" mallctl, which can be used to synchronously purge all\n    dirty unused pages.\n  - Make cumulative heap profiling data optional, so that it is possible to\n    limit the amount of memory consumed by heap profiling data structures.\n  - Add per thread allocation counters that can be accessed via the\n    \"thread.allocated\" and \"thread.deallocated\" mallctls.\n\n  Incompatible changes:\n  - Remove JEMALLOC_OPTIONS and malloc_options (see MALLOC_CONF above).\n  - Increase default backtrace depth from 4 to 128 for heap profiling.\n  - Disable interval-based profile dumps by default.\n\n  Bug fixes:\n  - Remove bad assertions in fork handler functions.  These assertions could\n    cause aborts for some combinations of configure settings.\n  - Fix strerror_r() usage to deal with non-standard semantics in GNU libc.\n  - Fix leak context reporting.  This bug tended to cause the number of contexts\n    to be underreported (though the reported number of objects and bytes were\n    correct).\n  - Fix a realloc() bug for large in-place growing reallocation.  
This bug could\n    cause memory corruption, but it was hard to trigger.\n  - Fix an allocation bug for small allocations that could be triggered if\n    multiple threads raced to create a new run of backing pages.\n  - Enhance the heap profiler to trigger samples based on usable size, rather\n    than request size.\n  - Fix a heap profiling bug due to sometimes losing track of requested object\n    size for sampled objects.\n\n* 1.0.3 (August 12, 2010)\n\n  Bug fixes:\n  - Fix the libunwind-based implementation of stack backtracing (used for heap\n    profiling).  This bug could cause zero-length backtraces to be reported.\n  - Add a missing mutex unlock in library initialization code.  If multiple\n    threads raced to initialize malloc, some of them could end up permanently\n    blocked.\n\n* 1.0.2 (May 11, 2010)\n\n  Bug fixes:\n  - Fix junk filling of large objects, which could cause memory corruption.\n  - Add MAP_NORESERVE support for chunk mapping, because otherwise virtual\n    memory limits could cause swap file configuration to fail.  Contributed by\n    Jordan DeLong.\n\n* 1.0.1 (April 14, 2010)\n\n  Bug fixes:\n  - Fix compilation when --enable-fill is specified.\n  - Fix threads-related profiling bugs that affected accuracy and caused memory\n    to be leaked during thread exit.\n  - Fix dirty page purging race conditions that could cause crashes.\n  - Fix crash in tcache flushing code during thread destruction.\n\n* 1.0.0 (April 11, 2010)\n\n  This release focuses on speed and run-time introspection.  
Numerous\n  algorithmic improvements make this release substantially faster than its\n  predecessors.\n\n  New features:\n  - Implement autoconf-based configuration system.\n  - Add mallctl*(), for the purposes of introspection and run-time\n    configuration.\n  - Make it possible for the application to manually flush a thread's cache, via\n    the \"tcache.flush\" mallctl.\n  - Base maximum dirty page count on proportion of active memory.\n  - Compute various additional run-time statistics, including per size class\n    statistics for large objects.\n  - Expose malloc_stats_print(), which can be called repeatedly by the\n    application.\n  - Simplify the malloc_message() signature to only take one string argument,\n    and incorporate an opaque data pointer argument for use by the application\n    in combination with malloc_stats_print().\n  - Add support for allocation backed by one or more swap files, and allow the\n    application to disable over-commit if swap files are in use.\n  - Implement allocation profiling and leak checking.\n\n  Removed features:\n  - Remove the dynamic arena rebalancing code, since thread-specific caching\n    reduces its utility.\n\n  Bug fixes:\n  - Modify chunk allocation to work when address space layout randomization\n    (ASLR) is in use.\n  - Fix thread cleanup bugs related to TLS destruction.\n  - Handle 0-size allocation requests in posix_memalign().\n  - Fix a chunk leak.  The leaked chunks were never touched, so this impacted\n    virtual memory usage, but not physical memory usage.\n\n* linux_2008082[78]a (August 27/28, 2008)\n\n  These snapshot releases are the simple result of incorporating Linux-specific\n  support into the FreeBSD malloc sources.\n\n--------------------------------------------------------------------------------\nvim:filetype=text:textwidth=80\n"
  },
  {
    "path": "deps/jemalloc/INSTALL.md",
    "content": "Building and installing a packaged release of jemalloc can be as simple as\ntyping the following while in the root directory of the source tree:\n\n    ./configure\n    make\n    make install\n\nIf building from unpackaged developer sources, the simplest command sequence\nthat might work is:\n\n    ./autogen.sh\n    make\n    make install\n\nYou can uninstall the installed build artifacts like this:\n\n    make uninstall\n\nNotes:\n - \"autoconf\" needs to be installed\n - Documentation is built by the default target only when xsltproc is\navailable.  Build will warn but not stop if the dependency is missing.\n\n\n## Advanced configuration\n\nThe 'configure' script supports numerous options that allow control of which\nfunctionality is enabled, where jemalloc is installed, etc.  Optionally, pass\nany of the following arguments (not a definitive list) to 'configure':\n\n* `--help`\n\n    Print a definitive list of options.\n\n* `--prefix=<install-root-dir>`\n\n    Set the base directory in which to install.  
For example:\n\n        ./configure --prefix=/usr/local\n\n    will cause files to be installed into /usr/local/include, /usr/local/lib,\n    and /usr/local/man.\n\n* `--with-version=(<major>.<minor>.<bugfix>-<nrev>-g<gid>|VERSION)`\n\n    The VERSION file is mandatory for successful configuration, and the\n    following steps are taken to assure its presence:\n    1) If --with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid> is specified,\n       generate VERSION using the specified value.\n    2) If --with-version is not specified in either form and the source\n       directory is inside a git repository, try to generate VERSION via 'git\n       describe' invocations that pattern-match release tags.\n    3) If VERSION is missing, generate it with a bogus version:\n       0.0.0-0-g0000000000000000000000000000000000000000\n\n    Note that --with-version=VERSION bypasses (1) and (2), which simplifies\n    VERSION configuration when embedding a jemalloc release into another\n    project's git repository.\n\n* `--with-rpath=<colon-separated-rpath>`\n\n    Embed one or more library paths, so that libjemalloc can find the libraries\n    it is linked to.  This works only on ELF-based systems.\n\n* `--with-mangling=<map>`\n\n    Mangle public symbols specified in <map> which is a comma-separated list of\n    name:mangled pairs.\n\n    For example, to use ld's --wrap option as an alternative method for\n    overriding libc's malloc implementation, specify something like:\n\n      --with-mangling=malloc:__wrap_malloc,free:__wrap_free[...]\n\n    Note that mangling happens prior to application of the prefix specified by\n    --with-jemalloc-prefix, and mangled symbols are then ignored when applying\n    the prefix.\n\n* `--with-jemalloc-prefix=<prefix>`\n\n    Prefix all public APIs with <prefix>.  
For example, if <prefix> is\n    \"prefix_\", API changes like the following occur:\n\n      malloc()         --> prefix_malloc()\n      malloc_conf      --> prefix_malloc_conf\n      /etc/malloc.conf --> /etc/prefix_malloc.conf\n      MALLOC_CONF      --> PREFIX_MALLOC_CONF\n\n    This makes it possible to use jemalloc at the same time as the system\n    allocator, or even to use multiple copies of jemalloc simultaneously.\n\n    By default, the prefix is \"\", except on OS X, where it is \"je_\".  On OS X,\n    jemalloc overlays the default malloc zone, but makes no attempt to actually\n    replace the \"malloc\", \"calloc\", etc. symbols.\n\n* `--without-export`\n\n    Don't export public APIs.  This can be useful when building jemalloc as a\n    static library, or to avoid exporting public APIs when using the zone\n    allocator on OSX.\n\n* `--with-private-namespace=<prefix>`\n\n    Prefix all library-private APIs with <prefix>je_.  For shared libraries,\n    symbol visibility mechanisms prevent these symbols from being exported, but\n    for static libraries, naming collisions are a real possibility.  By\n    default, <prefix> is empty, which results in a symbol prefix of je_ .\n\n* `--with-install-suffix=<suffix>`\n\n    Append <suffix> to the base name of all installed files, such that multiple\n    versions of jemalloc can coexist in the same installation directory.  For\n    example, libjemalloc.so.0 becomes libjemalloc<suffix>.so.0.\n\n* `--with-malloc-conf=<malloc_conf>`\n\n    Embed `<malloc_conf>` as a run-time options string that is processed prior to\n    the malloc_conf global variable, the /etc/malloc.conf symlink, and the\n    MALLOC_CONF environment variable.  For example, to change the default decay\n    time to 30 seconds:\n\n      --with-malloc-conf=decay_ms:30000\n\n* `--enable-debug`\n\n    Enable assertions and validation code.  
This incurs a substantial\n    performance hit, but is very useful during application development.\n\n* `--disable-stats`\n\n    Disable statistics gathering functionality.  See the \"opt.stats_print\"\n    option documentation for usage details.\n\n* `--enable-prof`\n\n    Enable heap profiling and leak detection functionality.  See the \"opt.prof\"\n    option documentation for usage details.  When enabled, there are several\n    approaches to backtracing, and the configure script chooses the first one\n    in the following list that appears to function correctly:\n\n    + libunwind      (requires --enable-prof-libunwind)\n    + libgcc         (unless --disable-prof-libgcc)\n    + gcc intrinsics (unless --disable-prof-gcc)\n\n* `--enable-prof-libunwind`\n\n    Use the libunwind library (http://www.nongnu.org/libunwind/) for stack\n    backtracing.\n\n* `--disable-prof-libgcc`\n\n    Disable the use of libgcc's backtracing functionality.\n\n* `--disable-prof-gcc`\n\n    Disable the use of gcc intrinsics for backtracing.\n\n* `--with-static-libunwind=<libunwind.a>`\n\n    Statically link against the specified libunwind.a rather than dynamically\n    linking with -lunwind.\n\n* `--disable-fill`\n\n    Disable support for junk/zero filling of memory.  See the \"opt.junk\" and\n    \"opt.zero\" option documentation for usage details.\n\n* `--disable-zone-allocator`\n\n    Disable zone allocator for Darwin.  This means jemalloc won't be hooked as\n    the default allocator on OSX/iOS.\n\n* `--enable-utrace`\n\n    Enable utrace(2)-based allocation tracing.  
This feature is not broadly\n    portable (FreeBSD has it, but Linux and OS X do not).\n\n* `--enable-xmalloc`\n\n    Enable support for optional immediate termination due to out-of-memory\n    errors, as is commonly implemented by an \"xmalloc\" wrapper function for\n    malloc.  See the \"opt.xmalloc\" option documentation for usage details.\n\n* `--enable-lazy-lock`\n\n    Enable code that wraps pthread_create() to detect when an application\n    switches from single-threaded to multi-threaded mode, so that it can avoid\n    mutex locking/unlocking operations while in single-threaded mode.  In\n    practice, this feature usually has little impact on performance unless\n    thread-specific caching is disabled.\n\n* `--disable-cache-oblivious`\n\n    Disable cache-oblivious large allocation alignment by default, for large\n    allocation requests with no alignment constraints.  If this feature is\n    disabled, all large allocations are page-aligned as an implementation\n    artifact, which can severely harm CPU cache utilization.  However, the\n    cache-oblivious layout comes at the cost of one extra page per large\n    allocation, which in the most extreme case increases physical memory usage\n    for the 16 KiB size class to 20 KiB.\n\n* `--disable-syscall`\n\n    Disable use of syscall(2) rather than {open,read,write,close}(2).  This is\n    intended as a workaround for systems that place security limitations on\n    syscall(2).\n\n* `--disable-cxx`\n\n    Disable C++ integration.  This will cause new and delete operator\n    implementations to be omitted.\n\n* `--with-xslroot=<path>`\n\n    Specify where to find DocBook XSL stylesheets when building the\n    documentation.\n\n* `--with-lg-page=<lg-page>`\n\n    Specify the base 2 log of the allocator page size, which must in turn be at\n    least as large as the system page size.  
By default the configure script\n    determines the host's page size and sets the allocator page size equal to\n    the system page size, so this option need not be specified unless the\n    system page size may change between configuration and execution, e.g. when\n    cross compiling.\n\n* `--with-lg-hugepage=<lg-hugepage>`\n\n    Specify the base 2 log of the system huge page size.  This option is useful\n    when cross compiling, or when overriding the default for systems that do\n    not explicitly support huge pages.\n\n* `--with-lg-quantum=<lg-quantum>`\n\n    Specify the base 2 log of the minimum allocation alignment.  jemalloc needs\n    to know the minimum alignment that meets the following C standard\n    requirement (quoted from the April 12, 2011 draft of the C11 standard):\n\n    >  The pointer returned if the allocation succeeds is suitably aligned so\n      that it may be assigned to a pointer to any type of object with a\n      fundamental alignment requirement and then used to access such an object\n      or an array of such objects in the space allocated [...]\n\n    This setting is architecture-specific, and although jemalloc includes known\n    safe values for the most commonly used modern architectures, there is a\n    wrinkle related to GNU libc (glibc) that may impact your choice of\n    <lg-quantum>.  On most modern architectures, this mandates 16-byte\n    alignment (<lg-quantum>=4), but the glibc developers chose not to meet this\n    requirement for performance reasons.  An old discussion can be found at\n    <https://sourceware.org/bugzilla/show_bug.cgi?id=206> .  Unlike glibc,\n    jemalloc does follow the C standard by default (caveat: jemalloc\n    technically cheats for size classes smaller than the quantum), but the fact\n    that Linux systems already work around this allocator noncompliance means\n    that it is generally safe in practice to let jemalloc's minimum alignment\n    follow glibc's lead.  
If you specify `--with-lg-quantum=3` during\n    configuration, jemalloc will provide additional size classes that are not\n    16-byte-aligned (24, 40, and 56).\n\n* `--with-lg-vaddr=<lg-vaddr>`\n\n    Specify the number of significant virtual address bits.  By default, the\n    configure script attempts to detect virtual address size on those platforms\n    where it knows how, and picks a default otherwise.  This option may be\n    useful when cross-compiling.\n\n* `--disable-initial-exec-tls`\n\n    Disable the initial-exec TLS model for jemalloc's internal thread-local\n    storage (on those platforms that support explicit settings).  This can allow\n    jemalloc to be dynamically loaded after program startup (e.g. using dlopen).\n    Note that in this case, there will be two malloc implementations operating\n    in the same process, which will almost certainly result in confusing runtime\n    crashes if pointers leak from one implementation to the other.\n\n* `--disable-libdl`\n\n    Disable the usage of libdl, namely dlsym(3) which is required by the lazy\n    lock option.  This can allow building static binaries.\n\nThe following environment variables (not a definitive list) impact configure's\nbehavior:\n\n* `CFLAGS=\"?\"`\n* `CXXFLAGS=\"?\"`\n\n    Pass these flags to the C/C++ compiler.  Any flags set by the configure\n    script are prepended, which means explicitly set flags generally take\n    precedence.  Take care when specifying flags such as -Werror, because\n    configure tests may be affected in undesirable ways.\n\n* `EXTRA_CFLAGS=\"?\"`\n* `EXTRA_CXXFLAGS=\"?\"`\n\n    Append these flags to CFLAGS/CXXFLAGS, without passing them to the\n    compiler(s) during configuration.  This makes it possible to add flags such\n    as -Werror, while allowing the configure script to determine what other\n    flags are appropriate for the specified configuration.\n\n* `CPPFLAGS=\"?\"`\n\n    Pass these flags to the C preprocessor.  
Note that CFLAGS is not passed to\n    'cpp' when 'configure' is looking for include files, so you must use\n    CPPFLAGS instead if you need to help 'configure' find header files.\n\n* `LD_LIBRARY_PATH=\"?\"`\n\n    'ld' uses this colon-separated list to find libraries.\n\n* `LDFLAGS=\"?\"`\n\n    Pass these flags when linking.\n\n* `PATH=\"?\"`\n\n    'configure' uses this to find programs.\n\nIn some cases it may be necessary to work around configuration results that do\nnot match reality.  For example, Linux 4.5 added support for the MADV_FREE flag\nto madvise(2), which can cause problems if building on a host with MADV_FREE\nsupport and deploying to a target without.  To work around this, use a cache\nfile to override the relevant configuration variable defined in configure.ac,\ne.g.:\n\n    echo \"je_cv_madv_free=no\" > config.cache && ./configure -C\n\n\n## Advanced compilation\n\nTo build only parts of jemalloc, use the following targets:\n\n    build_lib_shared\n    build_lib_static\n    build_lib\n    build_doc_html\n    build_doc_man\n    build_doc\n\nTo install only parts of jemalloc, use the following targets:\n\n    install_bin\n    install_include\n    install_lib_shared\n    install_lib_static\n    install_lib_pc\n    install_lib\n    install_doc_html\n    install_doc_man\n    install_doc\n\nTo clean up build results to varying degrees, use the following make targets:\n\n    clean\n    distclean\n    relclean\n\n\n## Advanced installation\n\nOptionally, define make variables when invoking make, including (not\nexclusively):\n\n* `INCLUDEDIR=\"?\"`\n\n    Use this as the installation prefix for header files.\n\n* `LIBDIR=\"?\"`\n\n    Use this as the installation prefix for libraries.\n\n* `MANDIR=\"?\"`\n\n    Use this as the installation prefix for man pages.\n\n* `DESTDIR=\"?\"`\n\n    Prepend DESTDIR to INCLUDEDIR, LIBDIR, DATADIR, and MANDIR.  
This is useful\n    when installing to a different path than was specified via --prefix.\n\n* `CC=\"?\"`\n\n    Use this to invoke the C compiler.\n\n* `CFLAGS=\"?\"`\n\n    Pass these flags to the compiler.\n\n* `CPPFLAGS=\"?\"`\n\n    Pass these flags to the C preprocessor.\n\n* `LDFLAGS=\"?\"`\n\n    Pass these flags when linking.\n\n* `PATH=\"?\"`\n\n    Use this to search for programs used during configuration and building.\n\n\n## Development\n\nIf you intend to make non-trivial changes to jemalloc, use the 'autogen.sh'\nscript rather than 'configure'.  This re-generates 'configure', enables\nconfiguration dependency rules, and enables re-generation of automatically\ngenerated source files.\n\nThe build system supports using an object directory separate from the source\ntree.  For example, you can create an 'obj' directory, and from within that\ndirectory, issue configuration and build commands:\n\n    autoconf\n    mkdir obj\n    cd obj\n    ../configure --enable-autogen\n    make\n\n\n## Documentation\n\nThe manual page is generated in both html and roff formats.  Any web browser\ncan be used to view the html manual.  The roff manual page can be formatted\nprior to installation via the following command:\n\n    nroff -man -t doc/jemalloc.3\n"
  },
  {
    "path": "deps/jemalloc/Makefile.in",
    "content": "# Clear out all vpaths, then set just one (default vpath) for the main build\n# directory.\nvpath\nvpath % .\n\n# Clear the default suffixes, so that built-in rules are not used.\n.SUFFIXES :\n\nSHELL := /bin/sh\n\nCC := @CC@\nCXX := @CXX@\n\n# Configuration parameters.\nDESTDIR =\nBINDIR := $(DESTDIR)@BINDIR@\nINCLUDEDIR := $(DESTDIR)@INCLUDEDIR@\nLIBDIR := $(DESTDIR)@LIBDIR@\nDATADIR := $(DESTDIR)@DATADIR@\nMANDIR := $(DESTDIR)@MANDIR@\nsrcroot := @srcroot@\nobjroot := @objroot@\nabs_srcroot := @abs_srcroot@\nabs_objroot := @abs_objroot@\n\n# Build parameters.\nCPPFLAGS := @CPPFLAGS@ -I$(objroot)include -I$(srcroot)include\nCONFIGURE_CFLAGS := @CONFIGURE_CFLAGS@\nSPECIFIED_CFLAGS := @SPECIFIED_CFLAGS@\nEXTRA_CFLAGS := @EXTRA_CFLAGS@\nCFLAGS := $(strip $(CONFIGURE_CFLAGS) $(SPECIFIED_CFLAGS) $(EXTRA_CFLAGS))\nCONFIGURE_CXXFLAGS := @CONFIGURE_CXXFLAGS@\nSPECIFIED_CXXFLAGS := @SPECIFIED_CXXFLAGS@\nEXTRA_CXXFLAGS := @EXTRA_CXXFLAGS@\nCXXFLAGS := $(strip $(CONFIGURE_CXXFLAGS) $(SPECIFIED_CXXFLAGS) $(EXTRA_CXXFLAGS))\nLDFLAGS := @LDFLAGS@\nEXTRA_LDFLAGS := @EXTRA_LDFLAGS@\nLIBS := @LIBS@\nRPATH_EXTRA := @RPATH_EXTRA@\nSO := @so@\nIMPORTLIB := @importlib@\nO := @o@\nA := @a@\nEXE := @exe@\nLIBPREFIX := @libprefix@\nREV := @rev@\ninstall_suffix := @install_suffix@\nABI := @abi@\nXSLTPROC := @XSLTPROC@\nXSLROOT := @XSLROOT@\nAUTOCONF := @AUTOCONF@\n_RPATH = @RPATH@\nRPATH = $(if $(1),$(call _RPATH,$(1)))\ncfghdrs_in := $(addprefix $(srcroot),@cfghdrs_in@)\ncfghdrs_out := @cfghdrs_out@\ncfgoutputs_in := $(addprefix $(srcroot),@cfgoutputs_in@)\ncfgoutputs_out := @cfgoutputs_out@\nenable_autogen := @enable_autogen@\nenable_doc := @enable_doc@\nenable_shared := @enable_shared@\nenable_static := @enable_static@\nenable_prof := @enable_prof@\nenable_zone_allocator := @enable_zone_allocator@\nenable_experimental_smallocx := @enable_experimental_smallocx@\nMALLOC_CONF := @JEMALLOC_CPREFIX@MALLOC_CONF\nlink_whole_archive := @link_whole_archive@\nDSO_LDFLAGS = 
@DSO_LDFLAGS@\nSOREV = @SOREV@\nPIC_CFLAGS = @PIC_CFLAGS@\nCTARGET = @CTARGET@\nLDTARGET = @LDTARGET@\nTEST_LD_MODE = @TEST_LD_MODE@\nMKLIB = @MKLIB@\nAR = @AR@\nARFLAGS = @ARFLAGS@\nDUMP_SYMS = @DUMP_SYMS@\nAWK := @AWK@\nCC_MM = @CC_MM@\nLM := @LM@\nINSTALL = @INSTALL@\n\nifeq (macho, $(ABI))\nTEST_LIBRARY_PATH := DYLD_FALLBACK_LIBRARY_PATH=\"$(objroot)lib\"\nelse\nifeq (pecoff, $(ABI))\nTEST_LIBRARY_PATH := PATH=\"$(PATH):$(objroot)lib\"\nelse\nTEST_LIBRARY_PATH :=\nendif\nendif\n\nLIBJEMALLOC := $(LIBPREFIX)jemalloc$(install_suffix)\n\n# Lists of files.\nBINS := $(objroot)bin/jemalloc-config $(objroot)bin/jemalloc.sh $(objroot)bin/jeprof\nC_HDRS := $(objroot)include/jemalloc/jemalloc$(install_suffix).h\nC_SRCS := $(srcroot)src/jemalloc.c \\\n\t$(srcroot)src/arena.c \\\n\t$(srcroot)src/background_thread.c \\\n\t$(srcroot)src/base.c \\\n\t$(srcroot)src/bin.c \\\n\t$(srcroot)src/bin_info.c \\\n\t$(srcroot)src/bitmap.c \\\n\t$(srcroot)src/buf_writer.c \\\n\t$(srcroot)src/cache_bin.c \\\n\t$(srcroot)src/ckh.c \\\n\t$(srcroot)src/counter.c \\\n\t$(srcroot)src/ctl.c \\\n\t$(srcroot)src/decay.c \\\n\t$(srcroot)src/div.c \\\n\t$(srcroot)src/ecache.c \\\n\t$(srcroot)src/edata.c \\\n\t$(srcroot)src/edata_cache.c \\\n\t$(srcroot)src/ehooks.c \\\n\t$(srcroot)src/emap.c \\\n\t$(srcroot)src/eset.c \\\n\t$(srcroot)src/exp_grow.c \\\n\t$(srcroot)src/extent.c \\\n\t$(srcroot)src/extent_dss.c \\\n\t$(srcroot)src/extent_mmap.c \\\n\t$(srcroot)src/fxp.c \\\n\t$(srcroot)src/san.c \\\n\t$(srcroot)src/san_bump.c \\\n\t$(srcroot)src/hook.c \\\n\t$(srcroot)src/hpa.c \\\n\t$(srcroot)src/hpa_hooks.c \\\n\t$(srcroot)src/hpdata.c \\\n\t$(srcroot)src/inspect.c \\\n\t$(srcroot)src/large.c \\\n\t$(srcroot)src/log.c \\\n\t$(srcroot)src/malloc_io.c \\\n\t$(srcroot)src/mutex.c \\\n\t$(srcroot)src/nstime.c \\\n\t$(srcroot)src/pa.c \\\n\t$(srcroot)src/pa_extra.c \\\n\t$(srcroot)src/pai.c \\\n\t$(srcroot)src/pac.c \\\n\t$(srcroot)src/pages.c \\\n\t$(srcroot)src/peak_event.c \\\n\t$(srcroot)src/prof.c 
\\\n\t$(srcroot)src/prof_data.c \\\n\t$(srcroot)src/prof_log.c \\\n\t$(srcroot)src/prof_recent.c \\\n\t$(srcroot)src/prof_stats.c \\\n\t$(srcroot)src/prof_sys.c \\\n\t$(srcroot)src/psset.c \\\n\t$(srcroot)src/rtree.c \\\n\t$(srcroot)src/safety_check.c \\\n\t$(srcroot)src/sc.c \\\n\t$(srcroot)src/sec.c \\\n\t$(srcroot)src/stats.c \\\n\t$(srcroot)src/sz.c \\\n\t$(srcroot)src/tcache.c \\\n\t$(srcroot)src/test_hooks.c \\\n\t$(srcroot)src/thread_event.c \\\n\t$(srcroot)src/ticker.c \\\n\t$(srcroot)src/tsd.c \\\n\t$(srcroot)src/witness.c\nifeq ($(enable_zone_allocator), 1)\nC_SRCS += $(srcroot)src/zone.c\nendif\nifeq ($(IMPORTLIB),$(SO))\nSTATIC_LIBS := $(objroot)lib/$(LIBJEMALLOC).$(A)\nendif\nifdef PIC_CFLAGS\nSTATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_pic.$(A)\nelse\nSTATIC_LIBS += $(objroot)lib/$(LIBJEMALLOC)_s.$(A)\nendif\nDSOS := $(objroot)lib/$(LIBJEMALLOC).$(SOREV)\nifneq ($(SOREV),$(SO))\nDSOS += $(objroot)lib/$(LIBJEMALLOC).$(SO)\nendif\nifeq (1, $(link_whole_archive))\nLJEMALLOC := -Wl,--whole-archive -L$(objroot)lib -l$(LIBJEMALLOC) -Wl,--no-whole-archive\nelse\nLJEMALLOC := $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)\nendif\nPC := $(objroot)jemalloc.pc\nDOCS_XML := $(objroot)doc/jemalloc$(install_suffix).xml\nDOCS_HTML := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.html)\nDOCS_MAN3 := $(DOCS_XML:$(objroot)%.xml=$(objroot)%.3)\nDOCS := $(DOCS_HTML) $(DOCS_MAN3)\nC_TESTLIB_SRCS := $(srcroot)test/src/btalloc.c $(srcroot)test/src/btalloc_0.c \\\n\t$(srcroot)test/src/btalloc_1.c $(srcroot)test/src/math.c \\\n\t$(srcroot)test/src/mtx.c $(srcroot)test/src/sleep.c \\\n\t$(srcroot)test/src/SFMT.c $(srcroot)test/src/test.c \\\n\t$(srcroot)test/src/thd.c $(srcroot)test/src/timer.c\nifeq (1, $(link_whole_archive))\nC_UTIL_INTEGRATION_SRCS :=\nC_UTIL_CPP_SRCS :=\nelse\nC_UTIL_INTEGRATION_SRCS := $(srcroot)src/nstime.c $(srcroot)src/malloc_io.c \\\n\t$(srcroot)src/ticker.c\nC_UTIL_CPP_SRCS := $(srcroot)src/nstime.c $(srcroot)src/malloc_io.c\nendif\nTESTS_UNIT := 
\\\n\t$(srcroot)test/unit/a0.c \\\n\t$(srcroot)test/unit/arena_decay.c \\\n\t$(srcroot)test/unit/arena_reset.c \\\n\t$(srcroot)test/unit/atomic.c \\\n\t$(srcroot)test/unit/background_thread.c \\\n\t$(srcroot)test/unit/background_thread_enable.c \\\n\t$(srcroot)test/unit/base.c \\\n\t$(srcroot)test/unit/batch_alloc.c \\\n\t$(srcroot)test/unit/binshard.c \\\n\t$(srcroot)test/unit/bitmap.c \\\n\t$(srcroot)test/unit/bit_util.c \\\n\t$(srcroot)test/unit/buf_writer.c \\\n\t$(srcroot)test/unit/cache_bin.c \\\n\t$(srcroot)test/unit/ckh.c \\\n\t$(srcroot)test/unit/counter.c \\\n\t$(srcroot)test/unit/decay.c \\\n\t$(srcroot)test/unit/div.c \\\n\t$(srcroot)test/unit/double_free.c \\\n\t$(srcroot)test/unit/edata_cache.c \\\n\t$(srcroot)test/unit/emitter.c \\\n\t$(srcroot)test/unit/extent_quantize.c \\\n\t${srcroot}test/unit/fb.c \\\n\t$(srcroot)test/unit/fork.c \\\n\t${srcroot}test/unit/fxp.c \\\n\t${srcroot}test/unit/san.c \\\n\t${srcroot}test/unit/san_bump.c \\\n\t$(srcroot)test/unit/hash.c \\\n\t$(srcroot)test/unit/hook.c \\\n\t$(srcroot)test/unit/hpa.c \\\n\t$(srcroot)test/unit/hpa_background_thread.c \\\n\t$(srcroot)test/unit/hpdata.c \\\n\t$(srcroot)test/unit/huge.c \\\n\t$(srcroot)test/unit/inspect.c \\\n\t$(srcroot)test/unit/junk.c \\\n\t$(srcroot)test/unit/junk_alloc.c \\\n\t$(srcroot)test/unit/junk_free.c \\\n\t$(srcroot)test/unit/log.c \\\n\t$(srcroot)test/unit/mallctl.c \\\n\t$(srcroot)test/unit/malloc_conf_2.c \\\n\t$(srcroot)test/unit/malloc_io.c \\\n\t$(srcroot)test/unit/math.c \\\n\t$(srcroot)test/unit/mpsc_queue.c \\\n\t$(srcroot)test/unit/mq.c \\\n\t$(srcroot)test/unit/mtx.c \\\n\t$(srcroot)test/unit/nstime.c \\\n\t$(srcroot)test/unit/oversize_threshold.c \\\n\t$(srcroot)test/unit/pa.c \\\n\t$(srcroot)test/unit/pack.c \\\n\t$(srcroot)test/unit/pages.c \\\n\t$(srcroot)test/unit/peak.c \\\n\t$(srcroot)test/unit/ph.c \\\n\t$(srcroot)test/unit/prng.c \\\n\t$(srcroot)test/unit/prof_accum.c \\\n\t$(srcroot)test/unit/prof_active.c 
\\\n\t$(srcroot)test/unit/prof_gdump.c \\\n\t$(srcroot)test/unit/prof_hook.c \\\n\t$(srcroot)test/unit/prof_idump.c \\\n\t$(srcroot)test/unit/prof_log.c \\\n\t$(srcroot)test/unit/prof_mdump.c \\\n\t$(srcroot)test/unit/prof_recent.c \\\n\t$(srcroot)test/unit/prof_reset.c \\\n\t$(srcroot)test/unit/prof_stats.c \\\n\t$(srcroot)test/unit/prof_tctx.c \\\n\t$(srcroot)test/unit/prof_thread_name.c \\\n\t$(srcroot)test/unit/prof_sys_thread_name.c \\\n\t$(srcroot)test/unit/psset.c \\\n\t$(srcroot)test/unit/ql.c \\\n\t$(srcroot)test/unit/qr.c \\\n\t$(srcroot)test/unit/rb.c \\\n\t$(srcroot)test/unit/retained.c \\\n\t$(srcroot)test/unit/rtree.c \\\n\t$(srcroot)test/unit/safety_check.c \\\n\t$(srcroot)test/unit/sc.c \\\n\t$(srcroot)test/unit/sec.c \\\n\t$(srcroot)test/unit/seq.c \\\n\t$(srcroot)test/unit/SFMT.c \\\n\t$(srcroot)test/unit/size_check.c \\\n\t$(srcroot)test/unit/size_classes.c \\\n\t$(srcroot)test/unit/slab.c \\\n\t$(srcroot)test/unit/smoothstep.c \\\n\t$(srcroot)test/unit/spin.c \\\n\t$(srcroot)test/unit/stats.c \\\n\t$(srcroot)test/unit/stats_print.c \\\n\t$(srcroot)test/unit/sz.c \\\n\t$(srcroot)test/unit/tcache_max.c \\\n\t$(srcroot)test/unit/test_hooks.c \\\n\t$(srcroot)test/unit/thread_event.c \\\n\t$(srcroot)test/unit/ticker.c \\\n\t$(srcroot)test/unit/tsd.c \\\n\t$(srcroot)test/unit/uaf.c \\\n\t$(srcroot)test/unit/witness.c \\\n\t$(srcroot)test/unit/zero.c \\\n\t$(srcroot)test/unit/zero_realloc_abort.c \\\n\t$(srcroot)test/unit/zero_realloc_free.c \\\n\t$(srcroot)test/unit/zero_realloc_alloc.c \\\n\t$(srcroot)test/unit/zero_reallocs.c\nifeq (@enable_prof@, 1)\nTESTS_UNIT += \\\n\t$(srcroot)test/unit/arena_reset_prof.c \\\n\t$(srcroot)test/unit/batch_alloc_prof.c\nendif\nTESTS_INTEGRATION := $(srcroot)test/integration/aligned_alloc.c \\\n\t$(srcroot)test/integration/allocated.c \\\n\t$(srcroot)test/integration/extent.c \\\n\t$(srcroot)test/integration/malloc.c \\\n\t$(srcroot)test/integration/mallocx.c \\\n\t$(srcroot)test/integration/MALLOCX_ARENA.c 
\\\n\t$(srcroot)test/integration/overflow.c \\\n\t$(srcroot)test/integration/posix_memalign.c \\\n\t$(srcroot)test/integration/rallocx.c \\\n\t$(srcroot)test/integration/sdallocx.c \\\n\t$(srcroot)test/integration/slab_sizes.c \\\n\t$(srcroot)test/integration/thread_arena.c \\\n\t$(srcroot)test/integration/thread_tcache_enabled.c \\\n\t$(srcroot)test/integration/xallocx.c\nifeq (@enable_experimental_smallocx@, 1)\nTESTS_INTEGRATION += \\\n  $(srcroot)test/integration/smallocx.c\nendif\nifeq (@enable_cxx@, 1)\nCPP_SRCS := $(srcroot)src/jemalloc_cpp.cpp\nTESTS_INTEGRATION_CPP := $(srcroot)test/integration/cpp/basic.cpp \\\n\t$(srcroot)test/integration/cpp/infallible_new_true.cpp \\\n\t$(srcroot)test/integration/cpp/infallible_new_false.cpp\nelse\nCPP_SRCS :=\nTESTS_INTEGRATION_CPP :=\nendif\nTESTS_ANALYZE := $(srcroot)test/analyze/prof_bias.c \\\n\t$(srcroot)test/analyze/rand.c \\\n\t$(srcroot)test/analyze/sizes.c\nTESTS_STRESS := $(srcroot)test/stress/batch_alloc.c \\\n\t$(srcroot)test/stress/fill_flush.c \\\n\t$(srcroot)test/stress/hookbench.c \\\n\t$(srcroot)test/stress/large_microbench.c \\\n\t$(srcroot)test/stress/mallctl.c \\\n\t$(srcroot)test/stress/microbench.c\n\n\nTESTS := $(TESTS_UNIT) $(TESTS_INTEGRATION) $(TESTS_INTEGRATION_CPP) \\\n\t$(TESTS_ANALYZE) $(TESTS_STRESS)\n\nPRIVATE_NAMESPACE_HDRS := $(objroot)include/jemalloc/internal/private_namespace.h $(objroot)include/jemalloc/internal/private_namespace_jet.h\nPRIVATE_NAMESPACE_GEN_HDRS := $(PRIVATE_NAMESPACE_HDRS:%.h=%.gen.h)\nC_SYM_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.sym.$(O))\nC_SYMS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.sym)\nC_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.$(O))\nCPP_OBJS := $(CPP_SRCS:$(srcroot)%.cpp=$(objroot)%.$(O))\nC_PIC_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.pic.$(O))\nCPP_PIC_OBJS := $(CPP_SRCS:$(srcroot)%.cpp=$(objroot)%.pic.$(O))\nC_JET_SYM_OBJS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.jet.sym.$(O))\nC_JET_SYMS := $(C_SRCS:$(srcroot)%.c=$(objroot)%.jet.sym)\nC_JET_OBJS 
:= $(C_SRCS:$(srcroot)%.c=$(objroot)%.jet.$(O))\nC_TESTLIB_UNIT_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.unit.$(O))\nC_TESTLIB_INTEGRATION_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O))\nC_UTIL_INTEGRATION_OBJS := $(C_UTIL_INTEGRATION_SRCS:$(srcroot)%.c=$(objroot)%.integration.$(O))\nC_TESTLIB_ANALYZE_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.analyze.$(O))\nC_TESTLIB_STRESS_OBJS := $(C_TESTLIB_SRCS:$(srcroot)%.c=$(objroot)%.stress.$(O))\nC_TESTLIB_OBJS := $(C_TESTLIB_UNIT_OBJS) $(C_TESTLIB_INTEGRATION_OBJS) \\\n\t$(C_UTIL_INTEGRATION_OBJS) $(C_TESTLIB_ANALYZE_OBJS) \\\n\t$(C_TESTLIB_STRESS_OBJS)\n\nTESTS_UNIT_OBJS := $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%.$(O))\nTESTS_INTEGRATION_OBJS := $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%.$(O))\nTESTS_INTEGRATION_CPP_OBJS := $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%.$(O))\nTESTS_ANALYZE_OBJS := $(TESTS_ANALYZE:$(srcroot)%.c=$(objroot)%.$(O))\nTESTS_STRESS_OBJS := $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%.$(O))\nTESTS_OBJS := $(TESTS_UNIT_OBJS) $(TESTS_INTEGRATION_OBJS) $(TESTS_ANALYZE_OBJS) \\\n\t$(TESTS_STRESS_OBJS)\nTESTS_CPP_OBJS := $(TESTS_INTEGRATION_CPP_OBJS)\n\n.PHONY: all dist build_doc_html build_doc_man build_doc\n.PHONY: install_bin install_include install_lib\n.PHONY: install_doc_html install_doc_man install_doc install\n.PHONY: tests check clean distclean relclean\n\n.SECONDARY : $(PRIVATE_NAMESPACE_GEN_HDRS) $(TESTS_OBJS) $(TESTS_CPP_OBJS)\n\n# Default target.\nall: build_lib\n\ndist: build_doc\n\n$(objroot)doc/%$(install_suffix).html : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/html.xsl\nifneq ($(XSLROOT),)\n\t$(XSLTPROC) -o $@ $(objroot)doc/html.xsl $<\nelse\nifeq ($(wildcard $(DOCS_HTML)),)\n\t@echo \"<p>Missing xsltproc.  Doc not built.</p>\" > $@\nendif\n\t@echo \"Missing xsltproc.  
\"$@\" not (re)built.\"\nendif\n\n$(objroot)doc/%$(install_suffix).3 : $(objroot)doc/%.xml $(srcroot)doc/stylesheet.xsl $(objroot)doc/manpages.xsl\nifneq ($(XSLROOT),)\n\t$(XSLTPROC) -o $@ $(objroot)doc/manpages.xsl $<\n# The -o option (output filename) of xsltproc may not work (it uses the\n# <refname> in the .xml file).  Manually add the suffix if so.\n  ifneq ($(install_suffix),)\n\t@if [ -f $(objroot)doc/jemalloc.3 ]; then \\\n\t\tmv $(objroot)doc/jemalloc.3 $(objroot)doc/jemalloc$(install_suffix).3 ; \\\n\tfi\n  endif\nelse\nifeq ($(wildcard $(DOCS_MAN3)),)\n\t@echo \"Missing xsltproc.  Doc not built.\" > $@\nendif\n\t@echo \"Missing xsltproc.  \"$@\" not (re)built.\"\nendif\n\nbuild_doc_html: $(DOCS_HTML)\nbuild_doc_man: $(DOCS_MAN3)\nbuild_doc: $(DOCS)\n\n#\n# Include generated dependency files.\n#\nifdef CC_MM\n-include $(C_SYM_OBJS:%.$(O)=%.d)\n-include $(C_OBJS:%.$(O)=%.d)\n-include $(CPP_OBJS:%.$(O)=%.d)\n-include $(C_PIC_OBJS:%.$(O)=%.d)\n-include $(CPP_PIC_OBJS:%.$(O)=%.d)\n-include $(C_JET_SYM_OBJS:%.$(O)=%.d)\n-include $(C_JET_OBJS:%.$(O)=%.d)\n-include $(C_TESTLIB_OBJS:%.$(O)=%.d)\n-include $(TESTS_OBJS:%.$(O)=%.d)\n-include $(TESTS_CPP_OBJS:%.$(O)=%.d)\nendif\n\n$(C_SYM_OBJS): $(objroot)src/%.sym.$(O): $(srcroot)src/%.c\n$(C_SYM_OBJS): CPPFLAGS += -DJEMALLOC_NO_PRIVATE_NAMESPACE\n$(C_SYMS): $(objroot)src/%.sym: $(objroot)src/%.sym.$(O)\n$(C_OBJS): $(objroot)src/%.$(O): $(srcroot)src/%.c\n$(CPP_OBJS): $(objroot)src/%.$(O): $(srcroot)src/%.cpp\n$(C_PIC_OBJS): $(objroot)src/%.pic.$(O): $(srcroot)src/%.c\n$(C_PIC_OBJS): CFLAGS += $(PIC_CFLAGS)\n$(CPP_PIC_OBJS): $(objroot)src/%.pic.$(O): $(srcroot)src/%.cpp\n$(CPP_PIC_OBJS): CXXFLAGS += $(PIC_CFLAGS)\n$(C_JET_SYM_OBJS): $(objroot)src/%.jet.sym.$(O): $(srcroot)src/%.c\n$(C_JET_SYM_OBJS): CPPFLAGS += -DJEMALLOC_JET -DJEMALLOC_NO_PRIVATE_NAMESPACE\n$(C_JET_SYMS): $(objroot)src/%.jet.sym: $(objroot)src/%.jet.sym.$(O)\n$(C_JET_OBJS): $(objroot)src/%.jet.$(O): $(srcroot)src/%.c\n$(C_JET_OBJS): CPPFLAGS += 
-DJEMALLOC_JET\n$(C_TESTLIB_UNIT_OBJS): $(objroot)test/src/%.unit.$(O): $(srcroot)test/src/%.c\n$(C_TESTLIB_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST\n$(C_TESTLIB_INTEGRATION_OBJS): $(objroot)test/src/%.integration.$(O): $(srcroot)test/src/%.c\n$(C_TESTLIB_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST\n$(C_UTIL_INTEGRATION_OBJS): $(objroot)src/%.integration.$(O): $(srcroot)src/%.c\n$(C_TESTLIB_ANALYZE_OBJS): $(objroot)test/src/%.analyze.$(O): $(srcroot)test/src/%.c\n$(C_TESTLIB_ANALYZE_OBJS): CPPFLAGS += -DJEMALLOC_ANALYZE_TEST\n$(C_TESTLIB_STRESS_OBJS): $(objroot)test/src/%.stress.$(O): $(srcroot)test/src/%.c\n$(C_TESTLIB_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST -DJEMALLOC_STRESS_TESTLIB\n$(C_TESTLIB_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include\n$(TESTS_UNIT_OBJS): CPPFLAGS += -DJEMALLOC_UNIT_TEST\n$(TESTS_INTEGRATION_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_TEST\n$(TESTS_INTEGRATION_CPP_OBJS): CPPFLAGS += -DJEMALLOC_INTEGRATION_CPP_TEST\n$(TESTS_ANALYZE_OBJS): CPPFLAGS += -DJEMALLOC_ANALYZE_TEST\n$(TESTS_STRESS_OBJS): CPPFLAGS += -DJEMALLOC_STRESS_TEST\n$(TESTS_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.c\n$(TESTS_CPP_OBJS): $(objroot)test/%.$(O): $(srcroot)test/%.cpp\n$(TESTS_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include\n$(TESTS_CPP_OBJS): CPPFLAGS += -I$(srcroot)test/include -I$(objroot)test/include\nifneq ($(IMPORTLIB),$(SO))\n$(CPP_OBJS) $(C_SYM_OBJS) $(C_OBJS) $(C_JET_SYM_OBJS) $(C_JET_OBJS): CPPFLAGS += -DDLLEXPORT\nendif\n\n# Dependencies.\nifndef CC_MM\nHEADER_DIRS = $(srcroot)include/jemalloc/internal \\\n\t$(objroot)include/jemalloc $(objroot)include/jemalloc/internal\nHEADERS = $(filter-out $(PRIVATE_NAMESPACE_HDRS),$(wildcard $(foreach dir,$(HEADER_DIRS),$(dir)/*.h)))\n$(C_SYM_OBJS) $(C_OBJS) $(CPP_OBJS) $(C_PIC_OBJS) $(CPP_PIC_OBJS) $(C_JET_SYM_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS) $(TESTS_CPP_OBJS): $(HEADERS)\n$(TESTS_OBJS) $(TESTS_CPP_OBJS): 
$(objroot)test/include/test/jemalloc_test.h\nendif\n\n$(C_OBJS) $(CPP_OBJS) $(C_PIC_OBJS) $(CPP_PIC_OBJS) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(TESTS_INTEGRATION_OBJS) $(TESTS_INTEGRATION_CPP_OBJS): $(objroot)include/jemalloc/internal/private_namespace.h\n$(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS) $(C_TESTLIB_ANALYZE_OBJS) $(C_TESTLIB_STRESS_OBJS) $(TESTS_UNIT_OBJS) $(TESTS_ANALYZE_OBJS) $(TESTS_STRESS_OBJS): $(objroot)include/jemalloc/internal/private_namespace_jet.h\n\n$(C_SYM_OBJS) $(C_OBJS) $(C_PIC_OBJS) $(C_JET_SYM_OBJS) $(C_JET_OBJS) $(C_TESTLIB_OBJS) $(TESTS_OBJS): %.$(O):\n\t@mkdir -p $(@D)\n\t$(CC) $(CFLAGS) -c $(CPPFLAGS) $(CTARGET) $<\nifdef CC_MM\n\t@$(CC) -MM $(CPPFLAGS) -MT $@ -o $(@:%.$(O)=%.d) $<\nendif\n\n$(C_SYMS): %.sym:\n\t@mkdir -p $(@D)\n\t$(DUMP_SYMS) $< | $(AWK) -f $(objroot)include/jemalloc/internal/private_symbols.awk > $@\n\n$(C_JET_SYMS): %.sym:\n\t@mkdir -p $(@D)\n\t$(DUMP_SYMS) $< | $(AWK) -f $(objroot)include/jemalloc/internal/private_symbols_jet.awk > $@\n\n$(objroot)include/jemalloc/internal/private_namespace.gen.h: $(C_SYMS)\n\t$(SHELL) $(srcroot)include/jemalloc/internal/private_namespace.sh $^ > $@\n\n$(objroot)include/jemalloc/internal/private_namespace_jet.gen.h: $(C_JET_SYMS)\n\t$(SHELL) $(srcroot)include/jemalloc/internal/private_namespace.sh $^ > $@\n\n%.h: %.gen.h\n\t@if ! 
`cmp -s $< $@` ; then echo \"cp $< $@\"; cp $< $@ ; fi\n\n$(CPP_OBJS) $(CPP_PIC_OBJS) $(TESTS_CPP_OBJS): %.$(O):\n\t@mkdir -p $(@D)\n\t$(CXX) $(CXXFLAGS) -c $(CPPFLAGS) $(CTARGET) $<\nifdef CC_MM\n\t@$(CXX) -MM $(CPPFLAGS) -MT $@ -o $(@:%.$(O)=%.d) $<\nendif\n\nifneq ($(SOREV),$(SO))\n%.$(SO) : %.$(SOREV)\n\t@mkdir -p $(@D)\n\tln -sf $(<F) $@\nendif\n\n$(objroot)lib/$(LIBJEMALLOC).$(SOREV) : $(if $(PIC_CFLAGS),$(C_PIC_OBJS),$(C_OBJS)) $(if $(PIC_CFLAGS),$(CPP_PIC_OBJS),$(CPP_OBJS))\n\t@mkdir -p $(@D)\n\t$(CC) $(DSO_LDFLAGS) $(call RPATH,$(RPATH_EXTRA)) $(LDTARGET) $+ $(LDFLAGS) $(LIBS) $(EXTRA_LDFLAGS)\n\n$(objroot)lib/$(LIBJEMALLOC)_pic.$(A) : $(C_PIC_OBJS) $(CPP_PIC_OBJS)\n$(objroot)lib/$(LIBJEMALLOC).$(A) : $(C_OBJS) $(CPP_OBJS)\n$(objroot)lib/$(LIBJEMALLOC)_s.$(A) : $(C_OBJS) $(CPP_OBJS)\n\n$(STATIC_LIBS):\n\t@mkdir -p $(@D)\n\t$(AR) $(ARFLAGS)@AROUT@ $+\n\n$(objroot)test/unit/%$(EXE): $(objroot)test/unit/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_UNIT_OBJS)\n\t@mkdir -p $(@D)\n\t$(CC) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) $(LM) $(EXTRA_LDFLAGS)\n\n$(objroot)test/integration/%$(EXE): $(objroot)test/integration/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)\n\t@mkdir -p $(@D)\n\t$(CC) $(TEST_LD_MODE) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LJEMALLOC) $(LDFLAGS) $(filter-out -lm,$(filter -lrt -pthread -lstdc++,$(LIBS))) $(LM) $(EXTRA_LDFLAGS)\n\n$(objroot)test/integration/cpp/%$(EXE): $(objroot)test/integration/cpp/%.$(O) $(C_TESTLIB_INTEGRATION_OBJS) $(C_UTIL_INTEGRATION_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)\n\t@mkdir -p $(@D)\n\t$(CXX) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) -lm $(EXTRA_LDFLAGS)\n\n$(objroot)test/analyze/%$(EXE): $(objroot)test/analyze/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_ANALYZE_OBJS)\n\t@mkdir -p $(@D)\n\t$(CC) 
$(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(LDFLAGS) $(filter-out -lm,$(LIBS)) $(LM) $(EXTRA_LDFLAGS)\n\n$(objroot)test/stress/%$(EXE): $(objroot)test/stress/%.$(O) $(C_JET_OBJS) $(C_TESTLIB_STRESS_OBJS) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB)\n\t@mkdir -p $(@D)\n\t$(CC) $(TEST_LD_MODE) $(LDTARGET) $(filter %.$(O),$^) $(call RPATH,$(objroot)lib) $(objroot)lib/$(LIBJEMALLOC).$(IMPORTLIB) $(LDFLAGS) $(filter-out -lm,$(LIBS)) $(LM) $(EXTRA_LDFLAGS)\n\nbuild_lib_shared: $(DSOS)\nbuild_lib_static: $(STATIC_LIBS)\nifeq ($(enable_shared), 1)\nbuild_lib: build_lib_shared\nendif\nifeq ($(enable_static), 1)\nbuild_lib: build_lib_static\nendif\n\ninstall_bin:\n\t$(INSTALL) -d $(BINDIR)\n\t@for b in $(BINS); do \\\n\t$(INSTALL) -v -m 755 $$b $(BINDIR); \\\ndone\n\ninstall_include:\n\t$(INSTALL) -d $(INCLUDEDIR)/jemalloc\n\t@for h in $(C_HDRS); do \\\n\t$(INSTALL) -v -m 644 $$h $(INCLUDEDIR)/jemalloc; \\\ndone\n\ninstall_lib_shared: $(DSOS)\n\t$(INSTALL) -d $(LIBDIR)\n\t$(INSTALL) -v -m 755 $(objroot)lib/$(LIBJEMALLOC).$(SOREV) $(LIBDIR)\nifneq ($(SOREV),$(SO))\n\tln -sf $(LIBJEMALLOC).$(SOREV) $(LIBDIR)/$(LIBJEMALLOC).$(SO)\nendif\n\ninstall_lib_static: $(STATIC_LIBS)\n\t$(INSTALL) -d $(LIBDIR)\n\t@for l in $(STATIC_LIBS); do \\\n\t$(INSTALL) -v -m 755 $$l $(LIBDIR); \\\ndone\n\ninstall_lib_pc: $(PC)\n\t$(INSTALL) -d $(LIBDIR)/pkgconfig\n\t@for l in $(PC); do \\\n\t$(INSTALL) -v -m 644 $$l $(LIBDIR)/pkgconfig; \\\ndone\n\nifeq ($(enable_shared), 1)\ninstall_lib: install_lib_shared\nendif\nifeq ($(enable_static), 1)\ninstall_lib: install_lib_static\nendif\ninstall_lib: install_lib_pc\n\ninstall_doc_html: build_doc_html\n\t$(INSTALL) -d $(DATADIR)/doc/jemalloc$(install_suffix)\n\t@for d in $(DOCS_HTML); do \\\n\t$(INSTALL) -v -m 644 $$d $(DATADIR)/doc/jemalloc$(install_suffix); \\\ndone\n\ninstall_doc_man: build_doc_man\n\t$(INSTALL) -d $(MANDIR)/man3\n\t@for d in $(DOCS_MAN3); do \\\n\t$(INSTALL) -v -m 644 $$d $(MANDIR)/man3; \\\ndone\n\ninstall_doc: 
install_doc_html install_doc_man\n\ninstall: install_bin install_include install_lib\n\nifeq ($(enable_doc), 1)\ninstall: install_doc\nendif\n\nuninstall_bin:\n\t$(RM) -v $(foreach b,$(notdir $(BINS)),$(BINDIR)/$(b))\n\nuninstall_include:\n\t$(RM) -v $(foreach h,$(notdir $(C_HDRS)),$(INCLUDEDIR)/jemalloc/$(h))\n\trmdir -v $(INCLUDEDIR)/jemalloc\n\nuninstall_lib_shared:\n\t$(RM) -v $(LIBDIR)/$(LIBJEMALLOC).$(SOREV)\nifneq ($(SOREV),$(SO))\n\t$(RM) -v $(LIBDIR)/$(LIBJEMALLOC).$(SO)\nendif\n\nuninstall_lib_static:\n\t$(RM) -v $(foreach l,$(notdir $(STATIC_LIBS)),$(LIBDIR)/$(l))\n\nuninstall_lib_pc:\n\t$(RM) -v $(foreach p,$(notdir $(PC)),$(LIBDIR)/pkgconfig/$(p))\n\nifeq ($(enable_shared), 1)\nuninstall_lib: uninstall_lib_shared\nendif\nifeq ($(enable_static), 1)\nuninstall_lib: uninstall_lib_static\nendif\nuninstall_lib: uninstall_lib_pc\n\nuninstall_doc_html:\n\t$(RM) -v $(foreach d,$(notdir $(DOCS_HTML)),$(DATADIR)/doc/jemalloc$(install_suffix)/$(d))\n\trmdir -v $(DATADIR)/doc/jemalloc$(install_suffix)\n\nuninstall_doc_man:\n\t$(RM) -v $(foreach d,$(notdir $(DOCS_MAN3)),$(MANDIR)/man3/$(d))\n\nuninstall_doc: uninstall_doc_html uninstall_doc_man\n\nuninstall: uninstall_bin uninstall_include uninstall_lib\n\nifeq ($(enable_doc), 1)\nuninstall: uninstall_doc\nendif\n\ntests_unit: $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%$(EXE))\ntests_integration: $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%$(EXE)) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%$(EXE))\ntests_analyze: $(TESTS_ANALYZE:$(srcroot)%.c=$(objroot)%$(EXE))\ntests_stress: $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%$(EXE))\ntests: tests_unit tests_integration tests_analyze tests_stress\n\ncheck_unit_dir:\n\t@mkdir -p $(objroot)test/unit\ncheck_integration_dir:\n\t@mkdir -p $(objroot)test/integration\nanalyze_dir:\n\t@mkdir -p $(objroot)test/analyze\nstress_dir:\n\t@mkdir -p $(objroot)test/stress\ncheck_dir: check_unit_dir check_integration_dir\n\ncheck_unit: tests_unit check_unit_dir\n\t$(SHELL) 
$(objroot)test/test.sh $(TESTS_UNIT:$(srcroot)%.c=$(objroot)%)\ncheck_integration_prof: tests_integration check_integration_dir\nifeq ($(enable_prof), 1)\n\t$(MALLOC_CONF)=\"prof:true\" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%)\n\t$(MALLOC_CONF)=\"prof:true,prof_active:false\" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%)\nendif\ncheck_integration_decay: tests_integration check_integration_dir\n\t$(MALLOC_CONF)=\"dirty_decay_ms:-1,muzzy_decay_ms:-1\" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%)\n\t$(MALLOC_CONF)=\"dirty_decay_ms:0,muzzy_decay_ms:0\" $(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%)\ncheck_integration: tests_integration check_integration_dir\n\t$(SHELL) $(objroot)test/test.sh $(TESTS_INTEGRATION:$(srcroot)%.c=$(objroot)%) $(TESTS_INTEGRATION_CPP:$(srcroot)%.cpp=$(objroot)%)\nanalyze: tests_analyze analyze_dir\nifeq ($(enable_prof), 1)\n\t$(MALLOC_CONF)=\"prof:true\" $(SHELL) $(objroot)test/test.sh $(TESTS_ANALYZE:$(srcroot)%.c=$(objroot)%)\nelse\n\t$(SHELL) $(objroot)test/test.sh $(TESTS_ANALYZE:$(srcroot)%.c=$(objroot)%)\nendif\nstress: tests_stress stress_dir\n\t$(SHELL) $(objroot)test/test.sh $(TESTS_STRESS:$(srcroot)%.c=$(objroot)%)\ncheck: check_unit check_integration check_integration_decay check_integration_prof\n\nclean:\n\trm -f $(PRIVATE_NAMESPACE_HDRS)\n\trm -f $(PRIVATE_NAMESPACE_GEN_HDRS)\n\trm -f $(C_SYM_OBJS)\n\trm -f $(C_SYMS)\n\trm -f $(C_OBJS)\n\trm -f $(CPP_OBJS)\n\trm -f $(C_PIC_OBJS)\n\trm -f $(CPP_PIC_OBJS)\n\trm -f $(C_JET_SYM_OBJS)\n\trm -f $(C_JET_SYMS)\n\trm -f $(C_JET_OBJS)\n\trm -f $(C_TESTLIB_OBJS)\n\trm -f $(C_SYM_OBJS:%.$(O)=%.d)\n\trm -f $(C_OBJS:%.$(O)=%.d)\n\trm -f 
$(CPP_OBJS:%.$(O)=%.d)\n\trm -f $(C_PIC_OBJS:%.$(O)=%.d)\n\trm -f $(CPP_PIC_OBJS:%.$(O)=%.d)\n\trm -f $(C_JET_SYM_OBJS:%.$(O)=%.d)\n\trm -f $(C_JET_OBJS:%.$(O)=%.d)\n\trm -f $(C_TESTLIB_OBJS:%.$(O)=%.d)\n\trm -f $(TESTS_OBJS:%.$(O)=%$(EXE))\n\trm -f $(TESTS_OBJS)\n\trm -f $(TESTS_OBJS:%.$(O)=%.d)\n\trm -f $(TESTS_OBJS:%.$(O)=%.out)\n\trm -f $(TESTS_CPP_OBJS:%.$(O)=%$(EXE))\n\trm -f $(TESTS_CPP_OBJS)\n\trm -f $(TESTS_CPP_OBJS:%.$(O)=%.d)\n\trm -f $(TESTS_CPP_OBJS:%.$(O)=%.out)\n\trm -f $(DSOS) $(STATIC_LIBS)\n\ndistclean: clean\n\trm -f $(objroot)bin/jemalloc-config\n\trm -f $(objroot)bin/jemalloc.sh\n\trm -f $(objroot)bin/jeprof\n\trm -f $(objroot)config.log\n\trm -f $(objroot)config.status\n\trm -f $(objroot)config.stamp\n\trm -f $(cfghdrs_out)\n\trm -f $(cfgoutputs_out)\n\nrelclean: distclean\n\trm -f $(objroot)configure\n\trm -f $(objroot)VERSION\n\trm -f $(DOCS_HTML)\n\trm -f $(DOCS_MAN3)\n\n#===============================================================================\n# Re-configuration rules.\n\nifeq ($(enable_autogen), 1)\n$(srcroot)configure : $(srcroot)configure.ac\n\tcd ./$(srcroot) && $(AUTOCONF)\n\n$(objroot)config.status : $(srcroot)configure\n\t./$(objroot)config.status --recheck\n\n$(srcroot)config.stamp.in : $(srcroot)configure.ac\n\techo stamp > $(srcroot)config.stamp.in\n\n$(objroot)config.stamp : $(cfgoutputs_in) $(cfghdrs_in) $(srcroot)configure\n\t./$(objroot)config.status\n\t@touch $@\n\n# There must be some action in order for make to re-read Makefile when it is\n# out of date.\n$(cfgoutputs_out) $(cfghdrs_out) : $(objroot)config.stamp\n\t@true\nendif\n"
  },
  {
    "path": "deps/jemalloc/README",
    "content": "jemalloc is a general purpose malloc(3) implementation that emphasizes\nfragmentation avoidance and scalable concurrency support.  jemalloc first came\ninto use as the FreeBSD libc allocator in 2005, and since then it has found its\nway into numerous applications that rely on its predictable behavior.  In 2010\njemalloc development efforts broadened to include developer support features\nsuch as heap profiling and extensive monitoring/tuning hooks.  Modern jemalloc\nreleases continue to be integrated back into FreeBSD, and therefore versatility\nremains critical.  Ongoing development efforts trend toward making jemalloc\namong the best allocators for a broad range of demanding applications, and\neliminating/mitigating weaknesses that have practical repercussions for real\nworld applications.\n\nThe COPYING file contains copyright and licensing information.\n\nThe INSTALL file contains information on how to configure, build, and install\njemalloc.\n\nThe ChangeLog file contains a brief summary of changes for each release.\n\nURL: http://jemalloc.net/\n"
  },
  {
    "path": "deps/jemalloc/TUNING.md",
    "content": "This document summarizes the common approaches for performance fine tuning with\njemalloc (as of 5.3.0).  The default configuration of jemalloc tends to work\nreasonably well in practice, and most applications should not have to tune any\noptions. However, in order to cover a wide range of applications and avoid\npathological cases, the default setting is sometimes kept conservative and\nsuboptimal, even for many common workloads.  When jemalloc is properly tuned for\na specific application / workload, it is common to improve system level metrics\nby a few percent, or make favorable trade-offs.\n\n\n## Notable runtime options for performance tuning\n\nRuntime options can be set via\n[malloc_conf](http://jemalloc.net/jemalloc.3.html#tuning).\n\n* [background_thread](http://jemalloc.net/jemalloc.3.html#background_thread)\n\n    Enabling jemalloc background threads generally improves the tail latency for\n    application threads, since unused memory purging is shifted to the dedicated\n    background threads.  In addition, unintended purging delay caused by\n    application inactivity is avoided with background threads.\n\n    Suggested: `background_thread:true` when jemalloc managed threads can be\n    allowed.\n\n* [metadata_thp](http://jemalloc.net/jemalloc.3.html#opt.metadata_thp)\n\n    Allowing jemalloc to utilize transparent huge pages for its internal\n    metadata usually reduces TLB misses significantly, especially for programs\n    with large memory footprint and frequent allocation / deallocation\n    activities.  
Metadata memory usage may increase due to the use of huge\n    pages.\n\n    Suggested for allocation intensive programs: `metadata_thp:auto` or\n    `metadata_thp:always`, which is expected to improve CPU utilization at a\n    small memory cost.\n\n* [dirty_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.dirty_decay_ms) and\n  [muzzy_decay_ms](http://jemalloc.net/jemalloc.3.html#opt.muzzy_decay_ms)\n\n    Decay time determines how fast jemalloc returns unused pages back to the\n    operating system, and therefore provides a fairly straightforward trade-off\n    between CPU and memory usage.  Shorter decay time purges unused pages faster\n    to reduce memory usage (usually at the cost of more CPU cycles spent on\n    purging), and vice versa.\n\n    Suggested: tune the values based on the desired trade-offs.\n\n* [narenas](http://jemalloc.net/jemalloc.3.html#opt.narenas)\n\n    By default jemalloc uses multiple arenas to reduce internal lock contention.\n    However high arena count may also increase overall memory fragmentation,\n    since arenas manage memory independently.  When high degree of parallelism\n    is not expected at the allocator level, lower number of arenas often\n    improves memory usage.\n\n    Suggested: if low parallelism is expected, try lower arena count while\n    monitoring CPU and memory usage.\n\n* [percpu_arena](http://jemalloc.net/jemalloc.3.html#opt.percpu_arena)\n\n    Enable dynamic thread to arena association based on running CPU.  This has\n    the potential to improve locality, e.g. when thread to CPU affinity is\n    present.\n\n    Suggested: try `percpu_arena:percpu` or `percpu_arena:phycpu` if\n    thread migration between processors is expected to be infrequent.\n\nExamples:\n\n* High resource consumption application, prioritizing CPU utilization:\n\n    `background_thread:true,metadata_thp:auto` combined with relaxed decay time\n    (increased `dirty_decay_ms` and / or `muzzy_decay_ms`,\n    e.g. 
`dirty_decay_ms:30000,muzzy_decay_ms:30000`).\n\n* High resource consumption application, prioritizing memory usage:\n\n    `background_thread:true,tcache_max:4096` combined with shorter decay time\n    (decreased `dirty_decay_ms` and / or `muzzy_decay_ms`,\n    e.g. `dirty_decay_ms:5000,muzzy_decay_ms:5000`), and lower arena count\n    (e.g. number of CPUs).\n\n* Low resource consumption application:\n\n    `narenas:1,tcache_max:1024` combined with shorter decay time (decreased\n    `dirty_decay_ms` and / or `muzzy_decay_ms`, e.g.\n    `dirty_decay_ms:1000,muzzy_decay_ms:0`).\n\n* Extremely conservative -- minimize memory usage at all costs, only suitable when\nallocation activity is very rare:\n\n    `narenas:1,tcache:false,dirty_decay_ms:0,muzzy_decay_ms:0`\n\nNote that it is recommended to combine the options with `abort_conf:true`, which\naborts immediately on illegal options.\n\n## Beyond runtime options\n\nIn addition to the runtime options, there are a number of programmatic ways to\nimprove application performance with jemalloc.\n\n* [Explicit arenas](http://jemalloc.net/jemalloc.3.html#arenas.create)\n\n    Manually created arenas can help performance in various ways, e.g. by\n    managing locality and contention for specific usages.  For example,\n    applications can explicitly allocate frequently accessed objects from a\n    dedicated arena with\n    [mallocx()](http://jemalloc.net/jemalloc.3.html#MALLOCX_ARENA) to improve\n    locality.  In addition, explicit arenas often benefit from individually\n    tuned options, e.g. relaxed [decay\n    time](http://jemalloc.net/jemalloc.3.html#arena.i.dirty_decay_ms) if\n    frequent reuse is expected.\n\n* [Extent hooks](http://jemalloc.net/jemalloc.3.html#arena.i.extent_hooks)\n\n    Extent hooks allow customization for managing underlying memory.  
One use\n    case for performance purpose is to utilize huge pages -- for example,\n    [HHVM](https://github.com/facebook/hhvm/blob/master/hphp/util/alloc.cpp)\n    uses explicit arenas with customized extent hooks to manage 1GB huge pages\n    for frequently accessed data, which reduces TLB misses significantly.\n\n* [Explicit thread-to-arena\n  binding](http://jemalloc.net/jemalloc.3.html#thread.arena)\n\n    It is common for some threads in an application to have different memory\n    access / allocation patterns.  Threads with heavy workloads often benefit\n    from explicit binding, e.g. binding very active threads to dedicated arenas\n    may reduce contention at the allocator level.\n"
  },
  {
    "path": "deps/jemalloc/VERSION",
    "content": "5.3.0-0-g0\n"
  },
  {
    "path": "deps/jemalloc/autogen.sh",
    "content": "#!/bin/sh\n\nfor i in autoconf; do\n    echo \"$i\"\n    $i\n    if [ $? -ne 0 ]; then\n\techo \"Error $? in $i\"\n\texit 1\n    fi\ndone\n\necho \"./configure --enable-autogen $@\"\n./configure --enable-autogen $@\nif [ $? -ne 0 ]; then\n    echo \"Error $? in ./configure\"\n    exit 1\nfi\n"
  },
  {
    "path": "deps/jemalloc/bin/jemalloc-config.in",
    "content": "#!/bin/sh\n\nusage() {\n\tcat <<EOF\nUsage:\n  @BINDIR@/jemalloc-config <option>\nOptions:\n  --help | -h  : Print usage.\n  --version    : Print jemalloc version.\n  --revision   : Print shared library revision number.\n  --config     : Print configure options used to build jemalloc.\n  --prefix     : Print installation directory prefix.\n  --bindir     : Print binary installation directory.\n  --datadir    : Print data installation directory.\n  --includedir : Print include installation directory.\n  --libdir     : Print library installation directory.\n  --mandir     : Print manual page installation directory.\n  --cc         : Print compiler used to build jemalloc.\n  --cflags     : Print compiler flags used to build jemalloc.\n  --cppflags   : Print preprocessor flags used to build jemalloc.\n  --cxxflags   : Print C++ compiler flags used to build jemalloc.\n  --ldflags    : Print library flags used to build jemalloc.\n  --libs       : Print libraries jemalloc was linked against.\nEOF\n}\n\nprefix=\"@prefix@\"\nexec_prefix=\"@exec_prefix@\"\n\ncase \"$1\" in\n--help | -h)\n\tusage\n\texit 0\n\t;;\n--version)\n\techo \"@jemalloc_version@\"\n\t;;\n--revision)\n\techo \"@rev@\"\n\t;;\n--config)\n\techo \"@CONFIG@\"\n\t;;\n--prefix)\n\techo \"@PREFIX@\"\n\t;;\n--bindir)\n\techo \"@BINDIR@\"\n\t;;\n--datadir)\n\techo \"@DATADIR@\"\n\t;;\n--includedir)\n\techo \"@INCLUDEDIR@\"\n\t;;\n--libdir)\n\techo \"@LIBDIR@\"\n\t;;\n--mandir)\n\techo \"@MANDIR@\"\n\t;;\n--cc)\n\techo \"@CC@\"\n\t;;\n--cflags)\n\techo \"@CFLAGS@\"\n\t;;\n--cppflags)\n\techo \"@CPPFLAGS@\"\n\t;;\n--cxxflags)\n\techo \"@CXXFLAGS@\"\n\t;;\n--ldflags)\n\techo \"@LDFLAGS@ @EXTRA_LDFLAGS@\"\n\t;;\n--libs)\n\techo \"@LIBS@\"\n\t;;\n*)\n\tusage\n\texit 1\nesac\n"
  },
  {
    "path": "deps/jemalloc/bin/jemalloc.sh.in",
    "content": "#!/bin/sh\n\nprefix=@prefix@\nexec_prefix=@exec_prefix@\nlibdir=@libdir@\n\n@LD_PRELOAD_VAR@=${libdir}/libjemalloc.@SOREV@\nexport @LD_PRELOAD_VAR@\nexec \"$@\"\n"
  },
  {
    "path": "deps/jemalloc/bin/jeprof.in",
    "content": "#! /usr/bin/env perl\n\n# Copyright (c) 1998-2007, Google Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n#     * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n#     * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following disclaimer\n# in the documentation and/or other materials provided with the\n# distribution.\n#     * Neither the name of Google Inc. nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n# ---\n# Program for printing the profile generated by common/profiler.cc,\n# or by the heap profiler (common/debugallocation.cc)\n#\n# The profile contains a sequence of entries of the form:\n#       <count> <stack trace>\n# This program parses the profile, and generates user-readable\n# output.\n#\n# Examples:\n#\n# % tools/jeprof \"program\" \"profile\"\n#   Enters \"interactive\" mode\n#\n# % tools/jeprof --text \"program\" \"profile\"\n#   Generates one line per procedure\n#\n# % tools/jeprof --gv \"program\" \"profile\"\n#   Generates annotated call-graph and displays via \"gv\"\n#\n# % tools/jeprof --gv --focus=Mutex \"program\" \"profile\"\n#   Restrict to code paths that involve an entry that matches \"Mutex\"\n#\n# % tools/jeprof --gv --focus=Mutex --ignore=string \"program\" \"profile\"\n#   Restrict to code paths that involve an entry that matches \"Mutex\"\n#   and does not match \"string\"\n#\n# % tools/jeprof --list=IBF_CheckDocid \"program\" \"profile\"\n#   Generates source listing of all routines with at least one\n#   sample that matches the --list=<regexp> pattern.  The listing is\n#   annotated with the flat and cumulative sample counts at each line.\n#\n# % tools/jeprof --disasm=IBF_CheckDocid \"program\" \"profile\"\n#   Generates disassembly listing of all routines with at least one\n#   sample that matches the --disasm=<regexp> pattern.  
The listing is\n#   annotated with the flat and cumulative sample counts at each PC value.\n#\n# TODO: Use color to indicate files?\n\nuse strict;\nuse warnings;\nuse Getopt::Long;\nuse Cwd;\n\nmy $JEPROF_VERSION = \"@jemalloc_version@\";\nmy $PPROF_VERSION = \"2.0\";\n\n# These are the object tools we use which can come from a\n# user-specified location using --tools, from the JEPROF_TOOLS\n# environment variable, or from the environment.\nmy %obj_tool_map = (\n  \"objdump\" => \"objdump\",\n  \"nm\" => \"nm\",\n  \"addr2line\" => \"addr2line\",\n  \"c++filt\" => \"c++filt\",\n  ## ConfigureObjTools may add architecture-specific entries:\n  #\"nm_pdb\" => \"nm-pdb\",       # for reading windows (PDB-format) executables\n  #\"addr2line_pdb\" => \"addr2line-pdb\",                                # ditto\n  #\"otool\" => \"otool\",         # equivalent of objdump on OS X\n);\n# NOTE: these are lists, so you can put in commandline flags if you want.\nmy @DOT = (\"dot\");          # leave non-absolute, since it may be in /usr/local\nmy @GV = (\"gv\");\nmy @EVINCE = (\"evince\");    # could also be xpdf or perhaps acroread\nmy @KCACHEGRIND = (\"kcachegrind\");\nmy @PS2PDF = (\"ps2pdf\");\n# These are used for dynamic profiles\nmy @URL_FETCHER = (\"curl\", \"-s\", \"--fail\");\n\n# These are the web pages that servers need to support for dynamic profiles\nmy $HEAP_PAGE = \"/pprof/heap\";\nmy $PROFILE_PAGE = \"/pprof/profile\";   # must support cgi-param \"?seconds=#\"\nmy $PMUPROFILE_PAGE = \"/pprof/pmuprofile(?:\\\\?.*)?\"; # must support cgi-param\n                                                # ?seconds=#&event=x&period=n\nmy $GROWTH_PAGE = \"/pprof/growth\";\nmy $CONTENTION_PAGE = \"/pprof/contention\";\nmy $WALL_PAGE = \"/pprof/wall(?:\\\\?.*)?\";  # accepts options like namefilter\nmy $FILTEREDPROFILE_PAGE = \"/pprof/filteredprofile(?:\\\\?.*)?\";\nmy $CENSUSPROFILE_PAGE = \"/pprof/censusprofile(?:\\\\?.*)?\"; # must support cgi-param\n                             
                          # \"?seconds=#\",\n                                                       # \"?tags_regexp=#\" and\n                                                       # \"?type=#\".\nmy $SYMBOL_PAGE = \"/pprof/symbol\";     # must support symbol lookup via POST\nmy $PROGRAM_NAME_PAGE = \"/pprof/cmdline\";\n\n# These are the web pages that can be named on the command line.\n# All the alternatives must begin with /.\nmy $PROFILES = \"($HEAP_PAGE|$PROFILE_PAGE|$PMUPROFILE_PAGE|\" .\n               \"$GROWTH_PAGE|$CONTENTION_PAGE|$WALL_PAGE|\" .\n               \"$FILTEREDPROFILE_PAGE|$CENSUSPROFILE_PAGE)\";\n\n# default binary name\nmy $UNKNOWN_BINARY = \"(unknown)\";\n\n# There is a pervasive dependency on the length (in hex characters,\n# i.e., nibbles) of an address, distinguishing between 32-bit and\n# 64-bit profiles.  To err on the safe size, default to 64-bit here:\nmy $address_length = 16;\n\nmy $dev_null = \"/dev/null\";\nif (! -e $dev_null && $^O =~ /MSWin/) {    # $^O is the OS perl was built for\n  $dev_null = \"nul\";\n}\n\n# A list of paths to search for shared object files\nmy @prefix_list = ();\n\n# Special routine name that should not have any symbols.\n# Used as separator to parse \"addr2line -i\" output.\nmy $sep_symbol = '_fini';\nmy $sep_address = undef;\n\n##### Argument parsing #####\n\nsub usage_string {\n  return <<EOF;\nUsage:\njeprof [options] <program> <profiles>\n   <profiles> is a space separated list of profile names.\njeprof [options] <symbolized-profiles>\n   <symbolized-profiles> is a list of profile files where each file contains\n   the necessary symbol mappings  as well as profile data (likely generated\n   with --raw).\njeprof [options] <profile>\n   <profile> is a remote form.  
Symbols are obtained from host:port$SYMBOL_PAGE\n\n   Each name can be:\n   /path/to/profile        - a path to a profile file\n   host:port[/<service>]   - a location of a service to get profile from\n\n   The /<service> can be $HEAP_PAGE, $PROFILE_PAGE, /pprof/pmuprofile,\n                         $GROWTH_PAGE, $CONTENTION_PAGE, /pprof/wall,\n                         $CENSUSPROFILE_PAGE, or /pprof/filteredprofile.\n   For instance:\n     jeprof http://myserver.com:80$HEAP_PAGE\n   If /<service> is omitted, the service defaults to $PROFILE_PAGE (cpu profiling).\njeprof --symbols <program>\n   Maps addresses to symbol names.  In this mode, stdin should be a\n   list of library mappings, in the same format as is found in the heap-\n   and cpu-profile files (this loosely matches that of /proc/self/maps\n   on linux), followed by a list of hex addresses to map, one per line.\n\n   For more help with querying remote servers, including how to add the\n   necessary server-side support code, see this filename (or one like it):\n\n   /usr/doc/gperftools-$PPROF_VERSION/pprof_remote_servers.html\n\nOptions:\n   --cum               Sort by cumulative data\n   --base=<base>       Subtract <base> from <profile> before display\n   --interactive       Run in interactive mode (interactive \"help\" gives help) [default]\n   --seconds=<n>       Length of time for dynamic profiles [default=30 secs]\n   --add_lib=<file>    Read additional symbols and line info from the given library\n   --lib_prefix=<dir>  Comma separated list of library path prefixes\n\nReporting Granularity:\n   --addresses         Report at address level\n   --lines             Report at source line level\n   --functions         Report at function level [default]\n   --files             Report at source file level\n\nOutput type:\n   --text              Generate text report\n   --callgrind         Generate callgrind format to stdout\n   --gv                Generate Postscript and display\n   --evince            
Generate PDF and display\n   --web               Generate SVG and display\n   --list=<regexp>     Generate source listing of matching routines\n   --disasm=<regexp>   Generate disassembly of matching routines\n   --symbols           Print demangled symbol names found at given addresses\n   --dot               Generate DOT file to stdout\n   --ps                Generate Postscript to stdout\n   --pdf               Generate PDF to stdout\n   --svg               Generate SVG to stdout\n   --gif               Generate GIF to stdout\n   --raw               Generate symbolized jeprof data (useful with remote fetch)\n   --collapsed         Generate collapsed stacks for building flame graphs\n                       (see http://www.brendangregg.com/flamegraphs.html)\n\nHeap-Profile Options:\n   --inuse_space       Display in-use (mega)bytes [default]\n   --inuse_objects     Display in-use objects\n   --alloc_space       Display allocated (mega)bytes\n   --alloc_objects     Display allocated objects\n   --show_bytes        Display space in bytes\n   --drop_negative     Ignore negative differences\n\nContention-profile options:\n   --total_delay       Display total delay at each region [default]\n   --contentions       Display number of delays at each region\n   --mean_delay        Display mean delay at each region\n\nCall-graph Options:\n   --nodecount=<n>     Show at most so many nodes [default=80]\n   --nodefraction=<f>  Hide nodes below <f>*total [default=.005]\n   --edgefraction=<f>  Hide edges below <f>*total [default=.001]\n   --maxdegree=<n>     Max incoming/outgoing edges per node [default=8]\n   --focus=<regexp>    Focus on backtraces with nodes matching <regexp>\n   --thread=<n>        Show profile for thread <n>\n   --ignore=<regexp>   Ignore backtraces with nodes matching <regexp>\n   --scale=<n>         Set GV scaling [default=0]\n   --heapcheck         Make nodes with non-0 object counts\n                       (i.e. 
direct leak generators) more visible\n   --retain=<regexp>   Retain only nodes that match <regexp>\n   --exclude=<regexp>  Exclude all nodes that match <regexp>\n\nMiscellaneous:\n   --tools=<prefix or binary:fullpath>[,...]   \\$PATH for object tool pathnames\n   --test              Run unit tests\n   --help              This message\n   --version           Version information\n   --debug-syms-by-id  (Linux only) Find debug symbol files by build ID as well as by name\n\nEnvironment Variables:\n   JEPROF_TMPDIR        Profiles directory. Defaults to \\$HOME/jeprof\n   JEPROF_TOOLS         Prefix for object tools pathnames\n\nExamples:\n\njeprof /bin/ls ls.prof\n                       Enters \"interactive\" mode\njeprof --text /bin/ls ls.prof\n                       Outputs one line per procedure\njeprof --web /bin/ls ls.prof\n                       Displays annotated call-graph in web browser\njeprof --gv /bin/ls ls.prof\n                       Displays annotated call-graph via 'gv'\njeprof --gv --focus=Mutex /bin/ls ls.prof\n                       Restricts to code paths including a .*Mutex.* entry\njeprof --gv --focus=Mutex --ignore=string /bin/ls ls.prof\n                       Code paths including Mutex but not string\njeprof --list=getdir /bin/ls ls.prof\n                       (Per-line) annotated source listing for getdir()\njeprof --disasm=getdir /bin/ls ls.prof\n                       (Per-PC) annotated disassembly for getdir()\n\njeprof http://localhost:1234/\n                       Enters \"interactive\" mode\njeprof --text localhost:1234\n                       Outputs one line per procedure for localhost:1234\njeprof --raw localhost:1234 > ./local.raw\njeprof --text ./local.raw\n                       Fetches a remote profile for later analysis and then\n                       analyzes it in text mode.\nEOF\n}\n\nsub version_string {\n  return <<EOF\njeprof (part of jemalloc $JEPROF_VERSION)\nbased on pprof (part of gperftools 
$PPROF_VERSION)\n\nCopyright 1998-2007 Google Inc.\n\nThis is BSD licensed software; see the source for copying conditions\nand license information.\nThere is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A\nPARTICULAR PURPOSE.\nEOF\n}\n\nsub usage {\n  my $msg = shift;\n  print STDERR \"$msg\\n\\n\";\n  print STDERR usage_string();\n  print STDERR \"\\nFATAL ERROR: $msg\\n\";    # just as a reminder\n  exit(1);\n}\n\nsub Init() {\n  # Setup tmp-file name and handler to clean it up.\n  # We do this in the very beginning so that we can use\n  # error() and cleanup() function anytime here after.\n  $main::tmpfile_sym = \"/tmp/jeprof$$.sym\";\n  $main::tmpfile_ps = \"/tmp/jeprof$$\";\n  $main::next_tmpfile = 0;\n  $SIG{'INT'} = \\&sighandler;\n\n  # Cache from filename/linenumber to source code\n  $main::source_cache = ();\n\n  $main::opt_help = 0;\n  $main::opt_version = 0;\n\n  $main::opt_cum = 0;\n  $main::opt_base = '';\n  $main::opt_addresses = 0;\n  $main::opt_lines = 0;\n  $main::opt_functions = 0;\n  $main::opt_files = 0;\n  $main::opt_lib_prefix = \"\";\n\n  $main::opt_text = 0;\n  $main::opt_callgrind = 0;\n  $main::opt_list = \"\";\n  $main::opt_disasm = \"\";\n  $main::opt_symbols = 0;\n  $main::opt_gv = 0;\n  $main::opt_evince = 0;\n  $main::opt_web = 0;\n  $main::opt_dot = 0;\n  $main::opt_ps = 0;\n  $main::opt_pdf = 0;\n  $main::opt_gif = 0;\n  $main::opt_svg = 0;\n  $main::opt_raw = 0;\n  $main::opt_collapsed = 0;\n\n  $main::opt_nodecount = 80;\n  $main::opt_nodefraction = 0.005;\n  $main::opt_edgefraction = 0.001;\n  $main::opt_maxdegree = 8;\n  $main::opt_focus = '';\n  $main::opt_thread = undef;\n  $main::opt_ignore = '';\n  $main::opt_scale = 0;\n  $main::opt_heapcheck = 0;\n  $main::opt_retain = '';\n  $main::opt_exclude = '';\n  $main::opt_seconds = 30;\n  $main::opt_lib = \"\";\n\n  $main::opt_inuse_space   = 0;\n  $main::opt_inuse_objects = 0;\n  $main::opt_alloc_space   = 0;\n  $main::opt_alloc_objects = 0;\n  $main::opt_show_bytes 
   = 0;\n  $main::opt_drop_negative = 0;\n  $main::opt_interactive   = 0;\n\n  $main::opt_total_delay = 0;\n  $main::opt_contentions = 0;\n  $main::opt_mean_delay = 0;\n\n  $main::opt_tools   = \"\";\n  $main::opt_debug   = 0;\n  $main::opt_test    = 0;\n  $main::opt_debug_syms_by_id = 0;\n\n  # These are undocumented flags used only by unittests.\n  $main::opt_test_stride = 0;\n\n  # Are we using $SYMBOL_PAGE?\n  $main::use_symbol_page = 0;\n\n  # Files returned by TempName.\n  %main::tempnames = ();\n\n  # Type of profile we are dealing with\n  # Supported types:\n  #     cpu\n  #     heap\n  #     growth\n  #     contention\n  $main::profile_type = '';     # Empty type means \"unknown\"\n\n  GetOptions(\"help!\"          => \\$main::opt_help,\n             \"version!\"       => \\$main::opt_version,\n             \"cum!\"           => \\$main::opt_cum,\n             \"base=s\"         => \\$main::opt_base,\n             \"seconds=i\"      => \\$main::opt_seconds,\n             \"add_lib=s\"      => \\$main::opt_lib,\n             \"lib_prefix=s\"   => \\$main::opt_lib_prefix,\n             \"functions!\"     => \\$main::opt_functions,\n             \"lines!\"         => \\$main::opt_lines,\n             \"addresses!\"     => \\$main::opt_addresses,\n             \"files!\"         => \\$main::opt_files,\n             \"text!\"          => \\$main::opt_text,\n             \"callgrind!\"     => \\$main::opt_callgrind,\n             \"list=s\"         => \\$main::opt_list,\n             \"disasm=s\"       => \\$main::opt_disasm,\n             \"symbols!\"       => \\$main::opt_symbols,\n             \"gv!\"            => \\$main::opt_gv,\n             \"evince!\"        => \\$main::opt_evince,\n             \"web!\"           => \\$main::opt_web,\n             \"dot!\"           => \\$main::opt_dot,\n             \"ps!\"            => \\$main::opt_ps,\n             \"pdf!\"           => \\$main::opt_pdf,\n             \"svg!\"           => \\$main::opt_svg,\n       
      \"gif!\"           => \\$main::opt_gif,\n             \"raw!\"           => \\$main::opt_raw,\n             \"collapsed!\"     => \\$main::opt_collapsed,\n             \"interactive!\"   => \\$main::opt_interactive,\n             \"nodecount=i\"    => \\$main::opt_nodecount,\n             \"nodefraction=f\" => \\$main::opt_nodefraction,\n             \"edgefraction=f\" => \\$main::opt_edgefraction,\n             \"maxdegree=i\"    => \\$main::opt_maxdegree,\n             \"focus=s\"        => \\$main::opt_focus,\n             \"thread=s\"       => \\$main::opt_thread,\n             \"ignore=s\"       => \\$main::opt_ignore,\n             \"scale=i\"        => \\$main::opt_scale,\n             \"heapcheck\"      => \\$main::opt_heapcheck,\n             \"retain=s\"       => \\$main::opt_retain,\n             \"exclude=s\"      => \\$main::opt_exclude,\n             \"inuse_space!\"   => \\$main::opt_inuse_space,\n             \"inuse_objects!\" => \\$main::opt_inuse_objects,\n             \"alloc_space!\"   => \\$main::opt_alloc_space,\n             \"alloc_objects!\" => \\$main::opt_alloc_objects,\n             \"show_bytes!\"    => \\$main::opt_show_bytes,\n             \"drop_negative!\" => \\$main::opt_drop_negative,\n             \"total_delay!\"   => \\$main::opt_total_delay,\n             \"contentions!\"   => \\$main::opt_contentions,\n             \"mean_delay!\"    => \\$main::opt_mean_delay,\n             \"tools=s\"        => \\$main::opt_tools,\n             \"test!\"          => \\$main::opt_test,\n             \"debug!\"         => \\$main::opt_debug,\n             \"debug-syms-by-id!\" => \\$main::opt_debug_syms_by_id,\n             # Undocumented flags used only by unittests:\n             \"test_stride=i\"  => \\$main::opt_test_stride,\n      ) || usage(\"Invalid option(s)\");\n\n  # Deal with the standard --help and --version\n  if ($main::opt_help) {\n    print usage_string();\n    exit(0);\n  }\n\n  if ($main::opt_version) {\n    print 
version_string();\n    exit(0);\n  }\n\n  # Disassembly/listing/symbols mode requires address-level info\n  if ($main::opt_disasm || $main::opt_list || $main::opt_symbols) {\n    $main::opt_functions = 0;\n    $main::opt_lines = 0;\n    $main::opt_addresses = 1;\n    $main::opt_files = 0;\n  }\n\n  # Check heap-profiling flags\n  if ($main::opt_inuse_space +\n      $main::opt_inuse_objects +\n      $main::opt_alloc_space +\n      $main::opt_alloc_objects > 1) {\n    usage(\"Specify at most one of --inuse/--alloc options\");\n  }\n\n  # Check output granularities\n  my $grains =\n      $main::opt_functions +\n      $main::opt_lines +\n      $main::opt_addresses +\n      $main::opt_files +\n      0;\n  if ($grains > 1) {\n    usage(\"Only specify one output granularity option\");\n  }\n  if ($grains == 0) {\n    $main::opt_functions = 1;\n  }\n\n  # Check output modes\n  my $modes =\n      $main::opt_text +\n      $main::opt_callgrind +\n      ($main::opt_list eq '' ? 0 : 1) +\n      ($main::opt_disasm eq '' ? 0 : 1) +\n      ($main::opt_symbols == 0 ? 
0 : 1) +\n      $main::opt_gv +\n      $main::opt_evince +\n      $main::opt_web +\n      $main::opt_dot +\n      $main::opt_ps +\n      $main::opt_pdf +\n      $main::opt_svg +\n      $main::opt_gif +\n      $main::opt_raw +\n      $main::opt_collapsed +\n      $main::opt_interactive +\n      0;\n  if ($modes > 1) {\n    usage(\"Only specify one output mode\");\n  }\n  if ($modes == 0) {\n    if (-t STDOUT) {  # If STDOUT is a tty, activate interactive mode\n      $main::opt_interactive = 1;\n    } else {\n      $main::opt_text = 1;\n    }\n  }\n\n  if ($main::opt_test) {\n    RunUnitTests();\n    # Should not return\n    exit(1);\n  }\n\n  # Binary name and profile arguments list\n  $main::prog = \"\";\n  @main::pfile_args = ();\n\n  # Remote profiling without a binary (using $SYMBOL_PAGE instead)\n  if (@ARGV > 0) {\n    if (IsProfileURL($ARGV[0])) {\n      $main::use_symbol_page = 1;\n    } elsif (IsSymbolizedProfileFile($ARGV[0])) {\n      $main::use_symbolized_profile = 1;\n      $main::prog = $UNKNOWN_BINARY;  # will be set later from the profile file\n    }\n  }\n\n  if ($main::use_symbol_page || $main::use_symbolized_profile) {\n    # We don't need a binary!\n    my %disabled = ('--lines' => $main::opt_lines,\n                    '--disasm' => $main::opt_disasm);\n    for my $option (keys %disabled) {\n      usage(\"$option cannot be used without a binary\") if $disabled{$option};\n    }\n    # Set $main::prog later...\n    scalar(@ARGV) || usage(\"Did not specify profile file\");\n  } elsif ($main::opt_symbols) {\n    # --symbols needs a binary-name (to run nm on, etc) but not profiles\n    $main::prog = shift(@ARGV) || usage(\"Did not specify program\");\n  } else {\n    $main::prog = shift(@ARGV) || usage(\"Did not specify program\");\n    scalar(@ARGV) || usage(\"Did not specify profile file\");\n  }\n\n  # Parse profile file/location arguments\n  foreach my $farg (@ARGV) {\n    if ($farg =~ m/(.*)\\@([0-9]+)(|\\/.*)$/ ) {\n      my $machine = $1;\n    
  my $num_machines = $2;\n      my $path = $3;\n      for (my $i = 0; $i < $num_machines; $i++) {\n        unshift(@main::pfile_args, \"$i.$machine$path\");\n      }\n    } else {\n      unshift(@main::pfile_args, $farg);\n    }\n  }\n\n  if ($main::use_symbol_page) {\n    unless (IsProfileURL($main::pfile_args[0])) {\n      error(\"The first profile should be a remote form to use $SYMBOL_PAGE\\n\");\n    }\n    CheckSymbolPage();\n    $main::prog = FetchProgramName();\n  } elsif (!$main::use_symbolized_profile) {  # may not need objtools!\n    ConfigureObjTools($main::prog)\n  }\n\n  # Break the opt_lib_prefix into the prefix_list array\n  @prefix_list = split (',', $main::opt_lib_prefix);\n\n  # Remove trailing / from the prefixes in the list to prevent\n  # searching things like /my/path//lib/mylib.so\n  foreach (@prefix_list) {\n    s|/+$||;\n  }\n\n  # Flag to prevent us from trying over and over to use\n  #  elfutils if it's not installed (used only with\n  #  --debug-syms-by-id option).\n  $main::gave_up_on_elfutils = 0;\n}\n\nsub FilterAndPrint {\n  my ($profile, $symbols, $libs, $thread) = @_;\n\n  # Get total data in profile\n  my $total = TotalProfile($profile);\n\n  # Remove uninteresting stack items\n  $profile = RemoveUninterestingFrames($symbols, $profile);\n\n  # Focus?\n  if ($main::opt_focus ne '') {\n    $profile = FocusProfile($symbols, $profile, $main::opt_focus);\n  }\n\n  # Ignore?\n  if ($main::opt_ignore ne '') {\n    $profile = IgnoreProfile($symbols, $profile, $main::opt_ignore);\n  }\n\n  my $calls = ExtractCalls($symbols, $profile);\n\n  # Reduce profiles to required output granularity, and also clean\n  # each stack trace so a given entry exists at most once.\n  my $reduced = ReduceProfile($symbols, $profile);\n\n  # Get derived profiles\n  my $flat = FlatProfile($reduced);\n  my $cumulative = CumulativeProfile($reduced);\n\n  # Print\n  if (!$main::opt_interactive) {\n    if ($main::opt_disasm) {\n      PrintDisassembly($libs, 
$flat, $cumulative, $main::opt_disasm);\n    } elsif ($main::opt_list) {\n      PrintListing($total, $libs, $flat, $cumulative, $main::opt_list, 0);\n    } elsif ($main::opt_text) {\n      # Make sure the output is empty when we have nothing to report\n      # (only matters when --heapcheck is given but we must be\n      # compatible with old branches that did not pass --heapcheck always):\n      if ($total != 0) {\n        printf(\"Total%s: %s %s\\n\",\n               (defined($thread) ? \" (t$thread)\" : \"\"),\n               Unparse($total), Units());\n      }\n      PrintText($symbols, $flat, $cumulative, -1);\n    } elsif ($main::opt_raw) {\n      PrintSymbolizedProfile($symbols, $profile, $main::prog);\n    } elsif ($main::opt_collapsed) {\n      PrintCollapsedStacks($symbols, $profile);\n    } elsif ($main::opt_callgrind) {\n      PrintCallgrind($calls);\n    } else {\n      if (PrintDot($main::prog, $symbols, $profile, $flat, $cumulative, $total)) {\n        if ($main::opt_gv) {\n          RunGV(TempName($main::next_tmpfile, \"ps\"), \"\");\n        } elsif ($main::opt_evince) {\n          RunEvince(TempName($main::next_tmpfile, \"pdf\"), \"\");\n        } elsif ($main::opt_web) {\n          my $tmp = TempName($main::next_tmpfile, \"svg\");\n          RunWeb($tmp);\n          # The command we run might hand the file name off\n          # to an already running browser instance and then exit.\n          # Normally, we'd remove $tmp on exit (right now),\n          # but fork a child to remove $tmp a little later, so that the\n          # browser has time to load it first.\n          delete $main::tempnames{$tmp};\n          if (fork() == 0) {\n            sleep 5;\n            unlink($tmp);\n            exit(0);\n          }\n        }\n      } else {\n        cleanup();\n        exit(1);\n      }\n    }\n  } else {\n    InteractiveMode($profile, $symbols, $libs, $total);\n  }\n}\n\nsub Main() {\n  Init();\n  $main::collected_profile = undef;\n  
@main::profile_files = ();\n  $main::op_time = time();\n\n  # Printing symbols is special and requires a lot less info than most.\n  if ($main::opt_symbols) {\n    PrintSymbols(*STDIN);   # Get /proc/maps and symbols output from stdin\n    return;\n  }\n\n  # Fetch all profile data\n  FetchDynamicProfiles();\n\n  # this will hold symbols that we read from the profile files\n  my $symbol_map = {};\n\n  # Read one profile, pick the last item on the list\n  my $data = ReadProfile($main::prog, pop(@main::profile_files));\n  my $profile = $data->{profile};\n  my $pcs = $data->{pcs};\n  my $libs = $data->{libs};   # Info about main program and shared libraries\n  $symbol_map = MergeSymbols($symbol_map, $data->{symbols});\n\n  # Add additional profiles, if available.\n  if (scalar(@main::profile_files) > 0) {\n    foreach my $pname (@main::profile_files) {\n      my $data2 = ReadProfile($main::prog, $pname);\n      $profile = AddProfile($profile, $data2->{profile});\n      $pcs = AddPcs($pcs, $data2->{pcs});\n      $symbol_map = MergeSymbols($symbol_map, $data2->{symbols});\n    }\n  }\n\n  # Subtract base from profile, if specified\n  if ($main::opt_base ne '') {\n    my $base = ReadProfile($main::prog, $main::opt_base);\n    $profile = SubtractProfile($profile, $base->{profile});\n    $pcs = AddPcs($pcs, $base->{pcs});\n    $symbol_map = MergeSymbols($symbol_map, $base->{symbols});\n  }\n\n  # Collect symbols\n  my $symbols;\n  if ($main::use_symbolized_profile) {\n    $symbols = FetchSymbols($pcs, $symbol_map);\n  } elsif ($main::use_symbol_page) {\n    $symbols = FetchSymbols($pcs);\n  } else {\n    # TODO(csilvers): $libs uses the /proc/self/maps data from profile1,\n    # which may differ from the data from subsequent profiles, especially\n    # if they were run on different machines.  
Use appropriate libs for\n    # each pc somehow.\n    $symbols = ExtractSymbols($libs, $pcs);\n  }\n\n  if (!defined($main::opt_thread)) {\n    FilterAndPrint($profile, $symbols, $libs);\n  }\n  if (defined($data->{threads})) {\n    foreach my $thread (sort { $a <=> $b } keys(%{$data->{threads}})) {\n      if (defined($main::opt_thread) &&\n          ($main::opt_thread eq '*' || $main::opt_thread == $thread)) {\n        my $thread_profile = $data->{threads}{$thread};\n        FilterAndPrint($thread_profile, $symbols, $libs, $thread);\n      }\n    }\n  }\n\n  cleanup();\n  exit(0);\n}\n\n##### Entry Point #####\n\nMain();\n\n# Temporary code to detect if we're running on a Goobuntu system.\n# These systems don't have the right stuff installed for the special\n# Readline libraries to work, so as a temporary workaround, we default\n# to using the normal stdio code, rather than the fancier readline-based\n# code\nsub ReadlineMightFail {\n  if (-e '/lib/libtermcap.so.2') {\n    return 0;  # libtermcap exists, so readline should be okay\n  } else {\n    return 1;\n  }\n}\n\nsub RunGV {\n  my $fname = shift;\n  my $bg = shift;       # \"\" or \" &\" if we should run in background\n  if (!system(ShellEscape(@GV, \"--version\") . \" >$dev_null 2>&1\")) {\n    # Options using double dash are supported by this gv version.\n    # Also, turn on noantialias to better handle bug in gv for\n    # postscript files with large dimensions.\n    # TODO: Maybe we should not pass the --noantialias flag\n    # if the gv version is known to work properly without the flag.\n    system(ShellEscape(@GV, \"--scale=$main::opt_scale\", \"--noantialias\", $fname)\n           . $bg);\n  } else {\n    # Old gv version - only supports options that use single dash.\n    print STDERR ShellEscape(@GV, \"-scale\", $main::opt_scale) . \"\\n\";\n    system(ShellEscape(@GV, \"-scale\", \"$main::opt_scale\", $fname) . 
$bg);\n  }\n}\n\nsub RunEvince {\n  my $fname = shift;\n  my $bg = shift;       # \"\" or \" &\" if we should run in background\n  system(ShellEscape(@EVINCE, $fname) . $bg);\n}\n\nsub RunWeb {\n  my $fname = shift;\n  print STDERR \"Loading web page file:///$fname\\n\";\n\n  if (`uname` =~ /Darwin/) {\n    # OS X: open will use standard preference for SVG files.\n    system(\"/usr/bin/open\", $fname);\n    return;\n  }\n\n  # Some kind of Unix; try generic symlinks, then specific browsers.\n  # (Stop once we find one.)\n  # Works best if the browser is already running.\n  my @alt = (\n    \"/etc/alternatives/gnome-www-browser\",\n    \"/etc/alternatives/x-www-browser\",\n    \"google-chrome\",\n    \"firefox\",\n  );\n  foreach my $b (@alt) {\n    if (system($b, $fname) == 0) {\n      return;\n    }\n  }\n\n  print STDERR \"Could not load web browser.\\n\";\n}\n\nsub RunKcachegrind {\n  my $fname = shift;\n  my $bg = shift;       # \"\" or \" &\" if we should run in background\n  print STDERR \"Starting '@KCACHEGRIND \" . $fname . $bg . \"'\\n\";\n  system(ShellEscape(@KCACHEGRIND, $fname) . $bg);\n}\n\n\n##### Interactive helper routines #####\n\nsub InteractiveMode {\n  $| = 1;  # Make output unbuffered for interactive mode\n  my ($orig_profile, $symbols, $libs, $total) = @_;\n\n  print STDERR \"Welcome to jeprof!  For help, type 'help'.\\n\";\n\n  # Use ReadLine if it's installed and input comes from a console.\n  if ( -t STDIN &&\n       !ReadlineMightFail() &&\n       defined(eval {require Term::ReadLine}) ) {\n    my $term = new Term::ReadLine 'jeprof';\n    while ( defined ($_ = $term->readline('(jeprof) '))) {\n      $term->addhistory($_) if /\\S/;\n      if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) {\n        last;    # exit when we get an interactive command to quit\n      }\n    }\n  } else {       # don't have readline\n    while (1) {\n      print STDERR \"(jeprof) \";\n      $_ = <STDIN>;\n      last if ! 
defined $_ ;\n      s/\\r//g;         # turn windows-looking lines into unix-looking lines\n\n      # Save some flags that might be reset by InteractiveCommand()\n      my $save_opt_lines = $main::opt_lines;\n\n      if (!InteractiveCommand($orig_profile, $symbols, $libs, $total, $_)) {\n        last;    # exit when we get an interactive command to quit\n      }\n\n      # Restore flags\n      $main::opt_lines = $save_opt_lines;\n    }\n  }\n}\n\n# Takes two args: orig profile, and command to run.\n# Returns 1 if we should keep going, or 0 if we were asked to quit\nsub InteractiveCommand {\n  my($orig_profile, $symbols, $libs, $total, $command) = @_;\n  $_ = $command;                # just to make future m//'s easier\n  if (!defined($_)) {\n    print STDERR \"\\n\";\n    return 0;\n  }\n  if (m/^\\s*quit/) {\n    return 0;\n  }\n  if (m/^\\s*help/) {\n    InteractiveHelpMessage();\n    return 1;\n  }\n  # Clear all the mode options -- mode is controlled by \"$command\"\n  $main::opt_text = 0;\n  $main::opt_callgrind = 0;\n  $main::opt_disasm = 0;\n  $main::opt_list = 0;\n  $main::opt_gv = 0;\n  $main::opt_evince = 0;\n  $main::opt_cum = 0;\n\n  if (m/^\\s*(text|top)(\\d*)\\s*(.*)/) {\n    $main::opt_text = 1;\n\n    my $line_limit = ($2 ne \"\") ? 
int($2) : 10;\n\n    my $routine;\n    my $ignore;\n    ($routine, $ignore) = ParseInteractiveArgs($3);\n\n    my $profile = ProcessProfile($total, $orig_profile, $symbols, \"\", $ignore);\n    my $reduced = ReduceProfile($symbols, $profile);\n\n    # Get derived profiles\n    my $flat = FlatProfile($reduced);\n    my $cumulative = CumulativeProfile($reduced);\n\n    PrintText($symbols, $flat, $cumulative, $line_limit);\n    return 1;\n  }\n  if (m/^\\s*callgrind\\s*([^ \\n]*)/) {\n    $main::opt_callgrind = 1;\n\n    # Get derived profiles\n    my $calls = ExtractCalls($symbols, $orig_profile);\n    my $filename = $1;\n    if ( $1 eq '' ) {\n      $filename = TempName($main::next_tmpfile, \"callgrind\");\n    }\n    PrintCallgrind($calls, $filename);\n    if ( $1 eq '' ) {\n      RunKcachegrind($filename, \" & \");\n      $main::next_tmpfile++;\n    }\n\n    return 1;\n  }\n  if (m/^\\s*(web)?list\\s*(.+)/) {\n    my $html = (defined($1) && ($1 eq \"web\"));\n    $main::opt_list = 1;\n\n    my $routine;\n    my $ignore;\n    ($routine, $ignore) = ParseInteractiveArgs($2);\n\n    my $profile = ProcessProfile($total, $orig_profile, $symbols, \"\", $ignore);\n    my $reduced = ReduceProfile($symbols, $profile);\n\n    # Get derived profiles\n    my $flat = FlatProfile($reduced);\n    my $cumulative = CumulativeProfile($reduced);\n\n    PrintListing($total, $libs, $flat, $cumulative, $routine, $html);\n    return 1;\n  }\n  if (m/^\\s*disasm\\s*(.+)/) {\n    $main::opt_disasm = 1;\n\n    my $routine;\n    my $ignore;\n    ($routine, $ignore) = ParseInteractiveArgs($1);\n\n    # Process current profile to account for various settings\n    my $profile = ProcessProfile($total, $orig_profile, $symbols, \"\", $ignore);\n    my $reduced = ReduceProfile($symbols, $profile);\n\n    # Get derived profiles\n    my $flat = FlatProfile($reduced);\n    my $cumulative = CumulativeProfile($reduced);\n\n    PrintDisassembly($libs, $flat, $cumulative, $routine);\n    return 1;\n  }\n  
if (m/^\\s*(gv|web|evince)\\s*(.*)/) {\n    $main::opt_gv = 0;\n    $main::opt_evince = 0;\n    $main::opt_web = 0;\n    if ($1 eq \"gv\") {\n      $main::opt_gv = 1;\n    } elsif ($1 eq \"evince\") {\n      $main::opt_evince = 1;\n    } elsif ($1 eq \"web\") {\n      $main::opt_web = 1;\n    }\n\n    my $focus;\n    my $ignore;\n    ($focus, $ignore) = ParseInteractiveArgs($2);\n\n    # Process current profile to account for various settings\n    my $profile = ProcessProfile($total, $orig_profile, $symbols,\n                                 $focus, $ignore);\n    my $reduced = ReduceProfile($symbols, $profile);\n\n    # Get derived profiles\n    my $flat = FlatProfile($reduced);\n    my $cumulative = CumulativeProfile($reduced);\n\n    if (PrintDot($main::prog, $symbols, $profile, $flat, $cumulative, $total)) {\n      if ($main::opt_gv) {\n        RunGV(TempName($main::next_tmpfile, \"ps\"), \" &\");\n      } elsif ($main::opt_evince) {\n        RunEvince(TempName($main::next_tmpfile, \"pdf\"), \" &\");\n      } elsif ($main::opt_web) {\n        RunWeb(TempName($main::next_tmpfile, \"svg\"));\n      }\n      $main::next_tmpfile++;\n    }\n    return 1;\n  }\n  if (m/^\\s*$/) {\n    return 1;\n  }\n  print STDERR \"Unknown command: try 'help'.\\n\";\n  return 1;\n}\n\n\nsub ProcessProfile {\n  my $total_count = shift;\n  my $orig_profile = shift;\n  my $symbols = shift;\n  my $focus = shift;\n  my $ignore = shift;\n\n  # Process current profile to account for various settings\n  my $profile = $orig_profile;\n  printf(\"Total: %s %s\\n\", Unparse($total_count), Units());\n  if ($focus ne '') {\n    $profile = FocusProfile($symbols, $profile, $focus);\n    my $focus_count = TotalProfile($profile);\n    printf(\"After focusing on '%s': %s %s of %s (%0.1f%%)\\n\",\n           $focus,\n           Unparse($focus_count), Units(),\n           Unparse($total_count), ($focus_count*100.0) / $total_count);\n  }\n  if ($ignore ne '') {\n    $profile = IgnoreProfile($symbols, 
$profile, $ignore);\n    my $ignore_count = TotalProfile($profile);\n    printf(\"After ignoring '%s': %s %s of %s (%0.1f%%)\\n\",\n           $ignore,\n           Unparse($ignore_count), Units(),\n           Unparse($total_count),\n           ($ignore_count*100.0) / $total_count);\n  }\n\n  return $profile;\n}\n\nsub InteractiveHelpMessage {\n  print STDERR <<ENDOFHELP;\nInteractive jeprof mode\n\nCommands:\n  gv\n  gv [focus] [-ignore1] [-ignore2]\n      Show graphical hierarchical display of current profile.  Without\n      any arguments, shows all samples in the profile.  With the optional\n      \"focus\" argument, restricts the samples shown to just those where\n      the \"focus\" regular expression matches a routine name on the stack\n      trace.\n\n  web\n  web [focus] [-ignore1] [-ignore2]\n      Like GV, but displays profile in your web browser instead of using\n      Ghostview. Works best if your web browser is already running.\n      To change the browser that gets used:\n      On Linux, set the /etc/alternatives/gnome-www-browser symlink.\n      On OS X, change the Finder association for SVG files.\n\n  list [routine_regexp] [-ignore1] [-ignore2]\n      Show source listing of routines whose names match \"routine_regexp\"\n\n  weblist [routine_regexp] [-ignore1] [-ignore2]\n     Displays a source listing of routines whose names match \"routine_regexp\"\n     in a web browser.  You can click on source lines to view the\n     corresponding disassembly.\n\n  top [--cum] [-ignore1] [-ignore2]\n  top20 [--cum] [-ignore1] [-ignore2]\n  top37 [--cum] [-ignore1] [-ignore2]\n      Show top lines ordered by flat profile count, or cumulative count\n      if --cum is specified.  
If a number is present after 'top', the\n      top K routines will be shown (defaults to showing the top 10)\n\n  disasm [routine_regexp] [-ignore1] [-ignore2]\n      Show disassembly of routines whose names match \"routine_regexp\",\n      annotated with sample counts.\n\n  callgrind\n  callgrind [filename]\n      Generates callgrind file. If no filename is given, kcachegrind is called.\n\n  help - This listing\n  quit or ^D - End jeprof\n\nFor commands that accept optional -ignore tags, samples where any routine in\nthe stack trace matches the regular expression in any of the -ignore\nparameters will be ignored.\n\nFurther pprof details are available at this location (or one similar):\n\n /usr/doc/gperftools-$PPROF_VERSION/cpu_profiler.html\n /usr/doc/gperftools-$PPROF_VERSION/heap_profiler.html\n\nENDOFHELP\n}\nsub ParseInteractiveArgs {\n  my $args = shift;\n  my $focus = \"\";\n  my $ignore = \"\";\n  my @x = split(/ +/, $args);\n  foreach $a (@x) {\n    if ($a =~ m/^(--|-)lines$/) {\n      $main::opt_lines = 1;\n    } elsif ($a =~ m/^(--|-)cum$/) {\n      $main::opt_cum = 1;\n    } elsif ($a =~ m/^-(.*)/) {\n      $ignore .= (($ignore ne \"\") ? \"|\" : \"\" ) . $1;\n    } else {\n      $focus .= (($focus ne \"\") ? \"|\" : \"\" ) . 
$a;\n    }\n  }\n  if ($ignore ne \"\") {\n    print STDERR \"Ignoring samples in call stacks that match '$ignore'\\n\";\n  }\n  return ($focus, $ignore);\n}\n\n##### Output code #####\n\nsub TempName {\n  my $fnum = shift;\n  my $ext = shift;\n  my $file = \"$main::tmpfile_ps.$fnum.$ext\";\n  $main::tempnames{$file} = 1;\n  return $file;\n}\n\n# Print profile data in packed binary format (64-bit) to standard out\nsub PrintProfileData {\n  my $profile = shift;\n\n  # print header (64-bit style)\n  # (zero) (header-size) (version) (sample-period) (zero)\n  print pack('L*', 0, 0, 3, 0, 0, 0, 1, 0, 0, 0);\n\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    if ($#addrs >= 0) {\n      my $depth = $#addrs + 1;\n      # int(foo / 2**32) is the only reliable way to get rid of bottom\n      # 32 bits on both 32- and 64-bit systems.\n      print pack('L*', $count & 0xFFFFFFFF, int($count / 2**32));\n      print pack('L*', $depth & 0xFFFFFFFF, int($depth / 2**32));\n\n      foreach my $full_addr (@addrs) {\n        my $addr = $full_addr;\n        $addr =~ s/0x0*//;  # strip off leading 0x, zeroes\n        if (length($addr) > 16) {\n          print STDERR \"Invalid address in profile: $full_addr\\n\";\n          next;\n        }\n        my $low_addr = substr($addr, -8);       # get last 8 hex chars\n        my $high_addr = substr($addr, -16, 8);  # get up to 8 more hex chars\n        print pack('L*', hex('0x' . $low_addr), hex('0x' . 
$high_addr));\n      }\n    }\n  }\n}\n\n# Print symbols and profile data\nsub PrintSymbolizedProfile {\n  my $symbols = shift;\n  my $profile = shift;\n  my $prog = shift;\n\n  $SYMBOL_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n  my $symbol_marker = $&;\n\n  print '--- ', $symbol_marker, \"\\n\";\n  if (defined($prog)) {\n    print 'binary=', $prog, \"\\n\";\n  }\n  while (my ($pc, $name) = each(%{$symbols})) {\n    my $sep = ' ';\n    print '0x', $pc;\n    # We have a list of function names, which include the inlined\n    # calls.  They are separated (and terminated) by --, which is\n    # illegal in function names.\n    for (my $j = 2; $j <= $#{$name}; $j += 3) {\n      print $sep, $name->[$j];\n      $sep = '--';\n    }\n    print \"\\n\";\n  }\n  print '---', \"\\n\";\n\n  my $profile_marker;\n  if ($main::profile_type eq 'heap') {\n    $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n    $profile_marker = $&;\n  } elsif ($main::profile_type eq 'growth') {\n    $GROWTH_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n    $profile_marker = $&;\n  } elsif ($main::profile_type eq 'contention') {\n    $CONTENTION_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n    $profile_marker = $&;\n  } else { # elsif ($main::profile_type eq 'cpu')\n    $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n    $profile_marker = $&;\n  }\n\n  print '--- ', $profile_marker, \"\\n\";\n  if (defined($main::collected_profile)) {\n    # if used with remote fetch, simply dump the collected profile to output.\n    open(SRC, \"<$main::collected_profile\");\n    while (<SRC>) {\n      print $_;\n    }\n    close(SRC);\n  } else {\n    # --raw/http: For everything to work correctly for non-remote profiles, we\n    # would need to extend PrintProfileData() to handle all possible profile\n    # types, re-enable the code that is currently disabled in ReadCPUProfile()\n    # and 
FixCallerAddresses(), and remove the remote profile dumping code in\n    # the block above.\n    die \"--raw/http: jeprof can only dump remote profiles for --raw\\n\";\n    # dump a cpu-format profile to standard out\n    PrintProfileData($profile);\n  }\n}\n\n# Print text output\nsub PrintText {\n  my $symbols = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $line_limit = shift;\n\n  my $total = TotalProfile($flat);\n\n  # Which profile to sort by?\n  my $s = $main::opt_cum ? $cumulative : $flat;\n\n  my $running_sum = 0;\n  my $lines = 0;\n  foreach my $k (sort { GetEntry($s, $b) <=> GetEntry($s, $a) || $a cmp $b }\n                 keys(%{$cumulative})) {\n    my $f = GetEntry($flat, $k);\n    my $c = GetEntry($cumulative, $k);\n    $running_sum += $f;\n\n    my $sym = $k;\n    if (exists($symbols->{$k})) {\n      $sym = $symbols->{$k}->[0] . \" \" . $symbols->{$k}->[1];\n      if ($main::opt_addresses) {\n        $sym = $k . \" \" . $sym;\n      }\n    }\n\n    if ($f != 0 || $c != 0) {\n      printf(\"%8s %6s %6s %8s %6s %s\\n\",\n             Unparse($f),\n             Percent($f, $total),\n             Percent($running_sum, $total),\n             Unparse($c),\n             Percent($c, $total),\n             $sym);\n    }\n    $lines++;\n    last if ($line_limit >= 0 && $lines >= $line_limit);\n  }\n}\n\n# Callgrind format has a compression for repeated function and file\n# names.  You show the name the first time, and just use its number\n# subsequently.  This can cut down the file to about a third or a\n# quarter of its uncompressed size.  
$key and $val are the key/value\n# pair that would normally be printed by callgrind; $map is a map from\n# value to number.\nsub CompressedCGName {\n  my($key, $val, $map) = @_;\n  my $idx = $map->{$val};\n  # For very short keys, providing an index hurts rather than helps.\n  if (length($val) <= 3) {\n    return \"$key=$val\\n\";\n  } elsif (defined($idx)) {\n    return \"$key=($idx)\\n\";\n  } else {\n    # scalar(keys $map) gives the number of items in the map.\n    $idx = scalar(keys(%{$map})) + 1;\n    $map->{$val} = $idx;\n    return \"$key=($idx) $val\\n\";\n  }\n}\n\n# Print the call graph in a way that's suitable for callgrind.\nsub PrintCallgrind {\n  my $calls = shift;\n  my $filename;\n  my %filename_to_index_map;\n  my %fnname_to_index_map;\n\n  if ($main::opt_interactive) {\n    $filename = shift;\n    print STDERR \"Writing callgrind file to '$filename'.\\n\"\n  } else {\n    $filename = \"&STDOUT\";\n  }\n  open(CG, \">$filename\");\n  printf CG (\"events: Hits\\n\\n\");\n  foreach my $call ( map { $_->[0] }\n                     sort { $a->[1] cmp $b ->[1] ||\n                            $a->[2] <=> $b->[2] }\n                     map { /([^:]+):(\\d+):([^ ]+)( -> ([^:]+):(\\d+):(.+))?/;\n                           [$_, $1, $2] }\n                     keys %$calls ) {\n    my $count = int($calls->{$call});\n    $call =~ /([^:]+):(\\d+):([^ ]+)( -> ([^:]+):(\\d+):(.+))?/;\n    my ( $caller_file, $caller_line, $caller_function,\n         $callee_file, $callee_line, $callee_function ) =\n       ( $1, $2, $3, $5, $6, $7 );\n\n    # TODO(csilvers): for better compression, collect all the\n    # caller/callee_files and functions first, before printing\n    # anything, and only compress those referenced more than once.\n    printf CG CompressedCGName(\"fl\", $caller_file, \\%filename_to_index_map);\n    printf CG CompressedCGName(\"fn\", $caller_function, \\%fnname_to_index_map);\n    if (defined $6) {\n      printf CG CompressedCGName(\"cfl\", 
$callee_file, \\%filename_to_index_map);\n      printf CG CompressedCGName(\"cfn\", $callee_function, \\%fnname_to_index_map);\n      printf CG (\"calls=$count $callee_line\\n\");\n    }\n    printf CG (\"$caller_line $count\\n\\n\");\n  }\n}\n\n# Print disassembly for all routines that match $main::opt_disasm\nsub PrintDisassembly {\n  my $libs = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $disasm_opts = shift;\n\n  my $total = TotalProfile($flat);\n\n  foreach my $lib (@{$libs}) {\n    my $symbol_table = GetProcedureBoundaries($lib->[0], $disasm_opts);\n    my $offset = AddressSub($lib->[1], $lib->[3]);\n    foreach my $routine (sort ByName keys(%{$symbol_table})) {\n      my $start_addr = $symbol_table->{$routine}->[0];\n      my $end_addr = $symbol_table->{$routine}->[1];\n      # See if there are any samples in this routine\n      my $length = hex(AddressSub($end_addr, $start_addr));\n      my $addr = AddressAdd($start_addr, $offset);\n      for (my $i = 0; $i < $length; $i++) {\n        if (defined($cumulative->{$addr})) {\n          PrintDisassembledFunction($lib->[0], $offset,\n                                    $routine, $flat, $cumulative,\n                                    $start_addr, $end_addr, $total);\n          last;\n        }\n        $addr = AddressInc($addr);\n      }\n    }\n  }\n}\n\n# Return reference to array of tuples of the form:\n#       [start_address, filename, linenumber, instruction, limit_address]\n# E.g.,\n#       [\"0x806c43d\", \"/foo/bar.cc\", 131, \"ret\", \"0x806c440\"]\nsub Disassemble {\n  my $prog = shift;\n  my $offset = shift;\n  my $start_addr = shift;\n  my $end_addr = shift;\n\n  my $objdump = $obj_tool_map{\"objdump\"};\n  my $cmd = ShellEscape($objdump, \"-C\", \"-d\", \"-l\", \"--no-show-raw-insn\",\n                        \"--start-address=0x$start_addr\",\n                        \"--stop-address=0x$end_addr\", $prog);\n  open(OBJDUMP, \"$cmd |\") || error(\"$cmd: $!\\n\");\n  my @result = 
();\n  my $filename = \"\";\n  my $linenumber = -1;\n  my $last = [\"\", \"\", \"\", \"\"];\n  while (<OBJDUMP>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    chop;\n    if (m|\\s*([^:\\s]+):(\\d+)\\s*$|) {\n      # Location line of the form:\n      #   <filename>:<linenumber>\n      $filename = $1;\n      $linenumber = $2;\n    } elsif (m/^ +([0-9a-f]+):\\s*(.*)/) {\n      # Disassembly line -- zero-extend address to full length\n      my $addr = HexExtend($1);\n      my $k = AddressAdd($addr, $offset);\n      $last->[4] = $k;   # Store ending address for previous instruction\n      $last = [$k, $filename, $linenumber, $2, $end_addr];\n      push(@result, $last);\n    }\n  }\n  close(OBJDUMP);\n  return @result;\n}\n\n# The input file should contain lines of the form /proc/maps-like\n# output (same format as expected from the profiles) or that looks\n# like hex addresses (like \"0xDEADBEEF\").  We will parse all\n# /proc/maps output, and for all the hex addresses, we will output\n# \"short\" symbol names, one per line, in the same order as the input.\nsub PrintSymbols {\n  my $maps_and_symbols_file = shift;\n\n  # ParseLibraries expects pcs to be in a set.  Fine by us...\n  my @pclist = ();   # pcs in sorted order\n  my $pcs = {};\n  my $map = \"\";\n  foreach my $line (<$maps_and_symbols_file>) {\n    $line =~ s/\\r//g;    # turn windows-looking lines into unix-looking lines\n    if ($line =~ /\\b(0x[0-9a-f]+)\\b/i) {\n      push(@pclist, HexExtend($1));\n      $pcs->{$pclist[-1]} = 1;\n    } else {\n      $map .= $line;\n    }\n  }\n\n  my $libs = ParseLibraries($main::prog, $map, $pcs);\n  my $symbols = ExtractSymbols($libs, $pcs);\n\n  foreach my $pc (@pclist) {\n    # ->[0] is the shortname, ->[2] is the full name\n    print(($symbols->{$pc}->[0] || \"??\") . 
\"\\n\");\n  }\n}\n\n\n# For sorting functions by name\nsub ByName {\n  return ShortFunctionName($a) cmp ShortFunctionName($b);\n}\n\n# Print source-listing for all all routines that match $list_opts\nsub PrintListing {\n  my $total = shift;\n  my $libs = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $list_opts = shift;\n  my $html = shift;\n\n  my $output = \\*STDOUT;\n  my $fname = \"\";\n\n  if ($html) {\n    # Arrange to write the output to a temporary file\n    $fname = TempName($main::next_tmpfile, \"html\");\n    $main::next_tmpfile++;\n    if (!open(TEMP, \">$fname\")) {\n      print STDERR \"$fname: $!\\n\";\n      return;\n    }\n    $output = \\*TEMP;\n    print $output HtmlListingHeader();\n    printf $output (\"<div class=\\\"legend\\\">%s<br>Total: %s %s</div>\\n\",\n                    $main::prog, Unparse($total), Units());\n  }\n\n  my $listed = 0;\n  foreach my $lib (@{$libs}) {\n    my $symbol_table = GetProcedureBoundaries($lib->[0], $list_opts);\n    my $offset = AddressSub($lib->[1], $lib->[3]);\n    foreach my $routine (sort ByName keys(%{$symbol_table})) {\n      # Print if there are any samples in this routine\n      my $start_addr = $symbol_table->{$routine}->[0];\n      my $end_addr = $symbol_table->{$routine}->[1];\n      my $length = hex(AddressSub($end_addr, $start_addr));\n      my $addr = AddressAdd($start_addr, $offset);\n      for (my $i = 0; $i < $length; $i++) {\n        if (defined($cumulative->{$addr})) {\n          $listed += PrintSource(\n            $lib->[0], $offset,\n            $routine, $flat, $cumulative,\n            $start_addr, $end_addr,\n            $html,\n            $output);\n          last;\n        }\n        $addr = AddressInc($addr);\n      }\n    }\n  }\n\n  if ($html) {\n    if ($listed > 0) {\n      print $output HtmlListingFooter();\n      close($output);\n      RunWeb($fname);\n    } else {\n      close($output);\n      unlink($fname);\n    }\n  }\n}\n\nsub HtmlListingHeader {\n  return 
<<'EOF';\n<!DOCTYPE html>\n<html>\n<head>\n<title>Pprof listing</title>\n<style type=\"text/css\">\nbody {\n  font-family: sans-serif;\n}\nh1 {\n  font-size: 1.5em;\n  margin-bottom: 4px;\n}\n.legend {\n  font-size: 1.25em;\n}\n.line {\n  color: #aaaaaa;\n}\n.nop {\n  color: #aaaaaa;\n}\n.unimportant {\n  color: #cccccc;\n}\n.disasmloc {\n  color: #000000;\n}\n.deadsrc {\n  cursor: pointer;\n}\n.deadsrc:hover {\n  background-color: #eeeeee;\n}\n.livesrc {\n  color: #0000ff;\n  cursor: pointer;\n}\n.livesrc:hover {\n  background-color: #eeeeee;\n}\n.asm {\n  color: #008800;\n  display: none;\n}\n</style>\n<script type=\"text/javascript\">\nfunction jeprof_toggle_asm(e) {\n  var target;\n  if (!e) e = window.event;\n  if (e.target) target = e.target;\n  else if (e.srcElement) target = e.srcElement;\n\n  if (target) {\n    var asm = target.nextSibling;\n    if (asm && asm.className == \"asm\") {\n      asm.style.display = (asm.style.display == \"block\" ? \"\" : \"block\");\n      e.preventDefault();\n      return false;\n    }\n  }\n}\n</script>\n</head>\n<body>\nEOF\n}\n\nsub HtmlListingFooter {\n  return <<'EOF';\n</body>\n</html>\nEOF\n}\n\nsub HtmlEscape {\n  my $text = shift;\n  $text =~ s/&/&amp;/g;\n  $text =~ s/</&lt;/g;\n  $text =~ s/>/&gt;/g;\n  return $text;\n}\n\n# Returns the indentation of the line, if it has any non-whitespace\n# characters.  Otherwise, returns -1.\nsub Indentation {\n  my $line = shift;\n  if ($line =~ m/^(\\s*)\\S/) {\n    return length($1);\n  } else {\n    return -1;\n  }\n}\n\n# If the symbol table contains inlining info, Disassemble() may tag an\n# instruction with a location inside an inlined function.  But for\n# source listings, we prefer to use the location in the function we\n# are listing.  
So use MapToSymbols() to fetch full location\n# information for each instruction and then pick out the first\n# location from a location list (location list contains callers before\n# callees in case of inlining).\n#\n# After this routine has run, each entry in $instructions contains:\n#   [0] start address\n#   [1] filename for function we are listing\n#   [2] line number for function we are listing\n#   [3] disassembly\n#   [4] limit address\n#   [5] most specific filename (may be different from [1] due to inlining)\n#   [6] most specific line number (may be different from [2] due to inlining)\nsub GetTopLevelLineNumbers {\n  my ($lib, $offset, $instructions) = @_;\n  my $pcs = [];\n  for (my $i = 0; $i <= $#{$instructions}; $i++) {\n    push(@{$pcs}, $instructions->[$i]->[0]);\n  }\n  my $symbols = {};\n  MapToSymbols($lib, $offset, $pcs, $symbols);\n  for (my $i = 0; $i <= $#{$instructions}; $i++) {\n    my $e = $instructions->[$i];\n    push(@{$e}, $e->[1]);\n    push(@{$e}, $e->[2]);\n    my $addr = $e->[0];\n    my $sym = $symbols->{$addr};\n    if (defined($sym)) {\n      if ($#{$sym} >= 2 && $sym->[1] =~ m/^(.*):(\\d+)$/) {\n        $e->[1] = $1;  # File name\n        $e->[2] = $2;  # Line number\n      }\n    }\n  }\n}\n\n# Print source-listing for one routine\nsub PrintSource {\n  my $prog = shift;\n  my $offset = shift;\n  my $routine = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $start_addr = shift;\n  my $end_addr = shift;\n  my $html = shift;\n  my $output = shift;\n\n  # Disassemble all instructions (just to get line numbers)\n  my @instructions = Disassemble($prog, $offset, $start_addr, $end_addr);\n  GetTopLevelLineNumbers($prog, $offset, \\@instructions);\n\n  # Hack 1: assume that the first source file encountered in the\n  # disassembly contains the routine\n  my $filename = undef;\n  for (my $i = 0; $i <= $#instructions; $i++) {\n    if ($instructions[$i]->[2] >= 0) {\n      $filename = $instructions[$i]->[1];\n      last;\n   
 }\n  }\n  if (!defined($filename)) {\n    print STDERR \"no filename found in $routine\\n\";\n    return 0;\n  }\n\n  # Hack 2: assume that the largest line number from $filename is the\n  # end of the procedure.  This is typically safe since if P1 contains\n  # an inlined call to P2, then P2 usually occurs earlier in the\n  # source file.  If this does not work, we might have to compute a\n  # density profile or just print all regions we find.\n  my $lastline = 0;\n  for (my $i = 0; $i <= $#instructions; $i++) {\n    my $f = $instructions[$i]->[1];\n    my $l = $instructions[$i]->[2];\n    if (($f eq $filename) && ($l > $lastline)) {\n      $lastline = $l;\n    }\n  }\n\n  # Hack 3: assume the first source location from \"filename\" is the start of\n  # the source code.\n  my $firstline = 1;\n  for (my $i = 0; $i <= $#instructions; $i++) {\n    if ($instructions[$i]->[1] eq $filename) {\n      $firstline = $instructions[$i]->[2];\n      last;\n    }\n  }\n\n  # Hack 4: Extend last line forward until its indentation is less than\n  # the indentation we saw on $firstline\n  my $oldlastline = $lastline;\n  {\n    if (!open(FILE, \"<$filename\")) {\n      print STDERR \"$filename: $!\\n\";\n      return 0;\n    }\n    my $l = 0;\n    my $first_indentation = -1;\n    while (<FILE>) {\n      s/\\r//g;         # turn windows-looking lines into unix-looking lines\n      $l++;\n      my $indent = Indentation($_);\n      if ($l >= $firstline) {\n        if ($first_indentation < 0 && $indent >= 0) {\n          $first_indentation = $indent;\n          last if ($first_indentation == 0);\n        }\n      }\n      if ($l >= $lastline && $indent >= 0) {\n        if ($indent >= $first_indentation) {\n          $lastline = $l+1;\n        } else {\n          last;\n        }\n      }\n    }\n    close(FILE);\n  }\n\n  # Assign all samples to the range $firstline,$lastline,\n  # Hack 5: If an instruction does not occur in the range, its samples\n  # are moved to the next 
instruction that occurs in the range.\n  my $samples1 = {};        # Map from line number to flat count\n  my $samples2 = {};        # Map from line number to cumulative count\n  my $running1 = 0;         # Unassigned flat counts\n  my $running2 = 0;         # Unassigned cumulative counts\n  my $total1 = 0;           # Total flat counts\n  my $total2 = 0;           # Total cumulative counts\n  my %disasm = ();          # Map from line number to disassembly\n  my $running_disasm = \"\";  # Unassigned disassembly\n  my $skip_marker = \"---\\n\";\n  if ($html) {\n    $skip_marker = \"\";\n    for (my $l = $firstline; $l <= $lastline; $l++) {\n      $disasm{$l} = \"\";\n    }\n  }\n  my $last_dis_filename = '';\n  my $last_dis_linenum = -1;\n  my $last_touched_line = -1;  # To detect gaps in disassembly for a line\n  foreach my $e (@instructions) {\n    # Add up counts for all addresses that fall inside this instruction\n    my $c1 = 0;\n    my $c2 = 0;\n    for (my $a = $e->[0]; $a lt $e->[4]; $a = AddressInc($a)) {\n      $c1 += GetEntry($flat, $a);\n      $c2 += GetEntry($cumulative, $a);\n    }\n\n    if ($html) {\n      my $dis = sprintf(\"      %6s %6s \\t\\t%8s: %s \",\n                        HtmlPrintNumber($c1),\n                        HtmlPrintNumber($c2),\n                        UnparseAddress($offset, $e->[0]),\n                        CleanDisassembly($e->[3]));\n\n      # Append the most specific source line associated with this instruction\n      if (length($dis) < 80) { $dis .= (' ' x (80 - length($dis))) };\n      $dis = HtmlEscape($dis);\n      my $f = $e->[5];\n      my $l = $e->[6];\n      if ($f ne $last_dis_filename) {\n        $dis .= sprintf(\"<span class=disasmloc>%s:%d</span>\",\n                        HtmlEscape(CleanFileName($f)), $l);\n      } elsif ($l ne $last_dis_linenum) {\n        # De-emphasize the unchanged file name portion\n        $dis .= sprintf(\"<span class=unimportant>%s</span>\" .\n                        \"<span 
class=disasmloc>:%d</span>\",\n                        HtmlEscape(CleanFileName($f)), $l);\n      } else {\n        # De-emphasize the entire location\n        $dis .= sprintf(\"<span class=unimportant>%s:%d</span>\",\n                        HtmlEscape(CleanFileName($f)), $l);\n      }\n      $last_dis_filename = $f;\n      $last_dis_linenum = $l;\n      $running_disasm .= $dis;\n      $running_disasm .= \"\\n\";\n    }\n\n    $running1 += $c1;\n    $running2 += $c2;\n    $total1 += $c1;\n    $total2 += $c2;\n    my $file = $e->[1];\n    my $line = $e->[2];\n    if (($file eq $filename) &&\n        ($line >= $firstline) &&\n        ($line <= $lastline)) {\n      # Assign all accumulated samples to this line\n      AddEntry($samples1, $line, $running1);\n      AddEntry($samples2, $line, $running2);\n      $running1 = 0;\n      $running2 = 0;\n      if ($html) {\n        if ($line != $last_touched_line && $disasm{$line} ne '') {\n          $disasm{$line} .= \"\\n\";\n        }\n        $disasm{$line} .= $running_disasm;\n        $running_disasm = '';\n        $last_touched_line = $line;\n      }\n    }\n  }\n\n  # Assign any leftover samples to $lastline\n  AddEntry($samples1, $lastline, $running1);\n  AddEntry($samples2, $lastline, $running2);\n  if ($html) {\n    if ($lastline != $last_touched_line && $disasm{$lastline} ne '') {\n      $disasm{$lastline} .= \"\\n\";\n    }\n    $disasm{$lastline} .= $running_disasm;\n  }\n\n  if ($html) {\n    printf $output (\n      \"<h1>%s</h1>%s\\n<pre onClick=\\\"jeprof_toggle_asm()\\\">\\n\" .\n      \"Total:%6s %6s (flat / cumulative %s)\\n\",\n      HtmlEscape(ShortFunctionName($routine)),\n      HtmlEscape(CleanFileName($filename)),\n      Unparse($total1),\n      Unparse($total2),\n      Units());\n  } else {\n    printf $output (\n      \"ROUTINE ====================== %s in %s\\n\" .\n      \"%6s %6s Total %s (flat / cumulative)\\n\",\n      ShortFunctionName($routine),\n      CleanFileName($filename),\n      
Unparse($total1),\n      Unparse($total2),\n      Units());\n  }\n  if (!open(FILE, \"<$filename\")) {\n    print STDERR \"$filename: $!\\n\";\n    return 0;\n  }\n  my $l = 0;\n  while (<FILE>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    $l++;\n    if ($l >= $firstline - 5 &&\n        (($l <= $oldlastline + 5) || ($l <= $lastline))) {\n      chop;\n      my $text = $_;\n      if ($l == $firstline) { print $output $skip_marker; }\n      my $n1 = GetEntry($samples1, $l);\n      my $n2 = GetEntry($samples2, $l);\n      if ($html) {\n        # Emit a span that has one of the following classes:\n        #    livesrc -- has samples\n        #    deadsrc -- has disassembly, but with no samples\n        #    nop     -- has no matching disassembly\n        # Also emit an optional span containing disassembly.\n        my $dis = $disasm{$l};\n        my $asm = \"\";\n        if (defined($dis) && $dis ne '') {\n          $asm = \"<span class=\\\"asm\\\">\" . $dis . \"</span>\";\n        }\n        my $source_class = (($n1 + $n2 > 0)\n                            ? \"livesrc\"\n                            : (($asm ne \"\") ? 
\"deadsrc\" : \"nop\"));\n        printf $output (\n          \"<span class=\\\"line\\\">%5d</span> \" .\n          \"<span class=\\\"%s\\\">%6s %6s %s</span>%s\\n\",\n          $l, $source_class,\n          HtmlPrintNumber($n1),\n          HtmlPrintNumber($n2),\n          HtmlEscape($text),\n          $asm);\n      } else {\n        printf $output(\n          \"%6s %6s %4d: %s\\n\",\n          UnparseAlt($n1),\n          UnparseAlt($n2),\n          $l,\n          $text);\n      }\n      if ($l == $lastline)  { print $output $skip_marker; }\n    };\n  }\n  close(FILE);\n  if ($html) {\n    print $output \"</pre>\\n\";\n  }\n  return 1;\n}\n\n# Return the source line for the specified file/linenumber.\n# Returns undef if not found.\nsub SourceLine {\n  my $file = shift;\n  my $line = shift;\n\n  # Look in cache\n  if (!defined($main::source_cache{$file})) {\n    if (100 < scalar keys(%main::source_cache)) {\n      # Clear the cache when it gets too big\n      $main::source_cache = ();\n    }\n\n    # Read all lines from the file\n    if (!open(FILE, \"<$file\")) {\n      print STDERR \"$file: $!\\n\";\n      $main::source_cache{$file} = [];  # Cache the negative result\n      return undef;\n    }\n    my $lines = [];\n    push(@{$lines}, \"\");        # So we can use 1-based line numbers as indices\n    while (<FILE>) {\n      push(@{$lines}, $_);\n    }\n    close(FILE);\n\n    # Save the lines in the cache\n    $main::source_cache{$file} = $lines;\n  }\n\n  my $lines = $main::source_cache{$file};\n  if (($line < 0) || ($line > $#{$lines})) {\n    return undef;\n  } else {\n    return $lines->[$line];\n  }\n}\n\n# Print disassembly for one routine with interspersed source if available\nsub PrintDisassembledFunction {\n  my $prog = shift;\n  my $offset = shift;\n  my $routine = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $start_addr = shift;\n  my $end_addr = shift;\n  my $total = shift;\n\n  # Disassemble all instructions\n  my @instructions = 
Disassemble($prog, $offset, $start_addr, $end_addr);\n\n  # Make array of counts per instruction\n  my @flat_count = ();\n  my @cum_count = ();\n  my $flat_total = 0;\n  my $cum_total = 0;\n  foreach my $e (@instructions) {\n    # Add up counts for all addresses that fall inside this instruction\n    my $c1 = 0;\n    my $c2 = 0;\n    for (my $a = $e->[0]; $a lt $e->[4]; $a = AddressInc($a)) {\n      $c1 += GetEntry($flat, $a);\n      $c2 += GetEntry($cumulative, $a);\n    }\n    push(@flat_count, $c1);\n    push(@cum_count, $c2);\n    $flat_total += $c1;\n    $cum_total += $c2;\n  }\n\n  # Print header with total counts\n  printf(\"ROUTINE ====================== %s\\n\" .\n         \"%6s %6s %s (flat, cumulative) %.1f%% of total\\n\",\n         ShortFunctionName($routine),\n         Unparse($flat_total),\n         Unparse($cum_total),\n         Units(),\n         ($cum_total * 100.0) / $total);\n\n  # Process instructions in order\n  my $current_file = \"\";\n  for (my $i = 0; $i <= $#instructions; ) {\n    my $e = $instructions[$i];\n\n    # Print the new file name whenever we switch files\n    if ($e->[1] ne $current_file) {\n      $current_file = $e->[1];\n      my $fname = $current_file;\n      $fname =~ s|^\\./||;   # Trim leading \"./\"\n\n      # Shorten long file names\n      if (length($fname) >= 58) {\n        $fname = \"...\" . 
substr($fname, -55);\n      }\n      printf(\"-------------------- %s\\n\", $fname);\n    }\n\n    # TODO: Compute range of lines to print together to deal with\n    # small reorderings.\n    my $first_line = $e->[2];\n    my $last_line = $first_line;\n    my %flat_sum = ();\n    my %cum_sum = ();\n    for (my $l = $first_line; $l <= $last_line; $l++) {\n      $flat_sum{$l} = 0;\n      $cum_sum{$l} = 0;\n    }\n\n    # Find run of instructions for this range of source lines\n    my $first_inst = $i;\n    while (($i <= $#instructions) &&\n           ($instructions[$i]->[2] >= $first_line) &&\n           ($instructions[$i]->[2] <= $last_line)) {\n      $e = $instructions[$i];\n      $flat_sum{$e->[2]} += $flat_count[$i];\n      $cum_sum{$e->[2]} += $cum_count[$i];\n      $i++;\n    }\n    my $last_inst = $i - 1;\n\n    # Print source lines\n    for (my $l = $first_line; $l <= $last_line; $l++) {\n      my $line = SourceLine($current_file, $l);\n      if (!defined($line)) {\n        $line = \"?\\n\";\n        next;\n      } else {\n        $line =~ s/^\\s+//;\n      }\n      printf(\"%6s %6s %5d: %s\",\n             UnparseAlt($flat_sum{$l}),\n             UnparseAlt($cum_sum{$l}),\n             $l,\n             $line);\n    }\n\n    # Print disassembly\n    for (my $x = $first_inst; $x <= $last_inst; $x++) {\n      my $e = $instructions[$x];\n      printf(\"%6s %6s    %8s: %6s\\n\",\n             UnparseAlt($flat_count[$x]),\n             UnparseAlt($cum_count[$x]),\n             UnparseAddress($offset, $e->[0]),\n             CleanDisassembly($e->[3]));\n    }\n  }\n}\n\n# Print DOT graph\nsub PrintDot {\n  my $prog = shift;\n  my $symbols = shift;\n  my $raw = shift;\n  my $flat = shift;\n  my $cumulative = shift;\n  my $overall_total = shift;\n\n  # Get total\n  my $local_total = TotalProfile($flat);\n  my $nodelimit = int($main::opt_nodefraction * $local_total);\n  my $edgelimit = int($main::opt_edgefraction * $local_total);\n  my $nodecount = 
$main::opt_nodecount;\n\n  # Find nodes to include\n  my @list = (sort { abs(GetEntry($cumulative, $b)) <=>\n                     abs(GetEntry($cumulative, $a))\n                     || $a cmp $b }\n              keys(%{$cumulative}));\n  my $last = $nodecount - 1;\n  if ($last > $#list) {\n    $last = $#list;\n  }\n  while (($last >= 0) &&\n         (abs(GetEntry($cumulative, $list[$last])) <= $nodelimit)) {\n    $last--;\n  }\n  if ($last < 0) {\n    print STDERR \"No nodes to print\\n\";\n    return 0;\n  }\n\n  if ($nodelimit > 0 || $edgelimit > 0) {\n    printf STDERR (\"Dropping nodes with <= %s %s; edges with <= %s abs(%s)\\n\",\n                   Unparse($nodelimit), Units(),\n                   Unparse($edgelimit), Units());\n  }\n\n  # Open DOT output file\n  my $output;\n  my $escaped_dot = ShellEscape(@DOT);\n  my $escaped_ps2pdf = ShellEscape(@PS2PDF);\n  if ($main::opt_gv) {\n    my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, \"ps\"));\n    $output = \"| $escaped_dot -Tps2 >$escaped_outfile\";\n  } elsif ($main::opt_evince) {\n    my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, \"pdf\"));\n    $output = \"| $escaped_dot -Tps2 | $escaped_ps2pdf - $escaped_outfile\";\n  } elsif ($main::opt_ps) {\n    $output = \"| $escaped_dot -Tps2\";\n  } elsif ($main::opt_pdf) {\n    $output = \"| $escaped_dot -Tps2 | $escaped_ps2pdf - -\";\n  } elsif ($main::opt_web || $main::opt_svg) {\n    # We need to post-process the SVG, so write to a temporary file always.\n    my $escaped_outfile = ShellEscape(TempName($main::next_tmpfile, \"svg\"));\n    $output = \"| $escaped_dot -Tsvg >$escaped_outfile\";\n  } elsif ($main::opt_gif) {\n    $output = \"| $escaped_dot -Tgif\";\n  } else {\n    $output = \">&STDOUT\";\n  }\n  open(DOT, $output) || error(\"$output: $!\\n\");\n\n  # Title\n  printf DOT (\"digraph \\\"%s; %s %s\\\" {\\n\",\n              $prog,\n              Unparse($overall_total),\n              Units());\n  if 
($main::opt_pdf) {\n    # The output is more printable if we set the page size for dot.\n    printf DOT (\"size=\\\"8,11\\\"\\n\");\n  }\n  printf DOT (\"node [width=0.375,height=0.25];\\n\");\n\n  # Print legend\n  printf DOT (\"Legend [shape=box,fontsize=24,shape=plaintext,\" .\n              \"label=\\\"%s\\\\l%s\\\\l%s\\\\l%s\\\\l%s\\\\l\\\"];\\n\",\n              $prog,\n              sprintf(\"Total %s: %s\", Units(), Unparse($overall_total)),\n              sprintf(\"Focusing on: %s\", Unparse($local_total)),\n              sprintf(\"Dropped nodes with <= %s abs(%s)\",\n                      Unparse($nodelimit), Units()),\n              sprintf(\"Dropped edges with <= %s %s\",\n                      Unparse($edgelimit), Units())\n              );\n\n  # Print nodes\n  my %node = ();\n  my $nextnode = 1;\n  foreach my $a (@list[0..$last]) {\n    # Pick font size\n    my $f = GetEntry($flat, $a);\n    my $c = GetEntry($cumulative, $a);\n\n    my $fs = 8;\n    if ($local_total > 0) {\n      $fs = 8 + (50.0 * sqrt(abs($f * 1.0 / $local_total)));\n    }\n\n    $node{$a} = $nextnode++;\n    my $sym = $a;\n    $sym =~ s/\\s+/\\\\n/g;\n    $sym =~ s/::/\\\\n/g;\n\n    # Extra cumulative info to print for non-leaves\n    my $extra = \"\";\n    if ($f != $c) {\n      $extra = sprintf(\"\\\\rof %s (%s)\",\n                       Unparse($c),\n                       Percent($c, $local_total));\n    }\n    my $style = \"\";\n    if ($main::opt_heapcheck) {\n      if ($f > 0) {\n        # make leak-causing nodes more visible (add a background)\n        $style = \",style=filled,fillcolor=gray\"\n      } elsif ($f < 0) {\n        # make anti-leak-causing nodes (which almost never occur)\n        # stand out as well (triple border)\n        $style = \",peripheries=3\"\n      }\n    }\n\n    printf DOT (\"N%d [label=\\\"%s\\\\n%s (%s)%s\\\\r\" .\n                \"\\\",shape=box,fontsize=%.1f%s];\\n\",\n                $node{$a},\n                $sym,\n                
Unparse($f),\n                Percent($f, $local_total),\n                $extra,\n                $fs,\n                $style,\n               );\n  }\n\n  # Get edges and counts per edge\n  my %edge = ();\n  my $n;\n  my $fullname_to_shortname_map = {};\n  FillFullnameToShortnameMap($symbols, $fullname_to_shortname_map);\n  foreach my $k (keys(%{$raw})) {\n    # TODO: omit low %age edges\n    $n = $raw->{$k};\n    my @translated = TranslateStack($symbols, $fullname_to_shortname_map, $k);\n    for (my $i = 1; $i <= $#translated; $i++) {\n      my $src = $translated[$i];\n      my $dst = $translated[$i-1];\n      #next if ($src eq $dst);  # Avoid self-edges?\n      if (exists($node{$src}) && exists($node{$dst})) {\n        my $edge_label = \"$src\\001$dst\";\n        if (!exists($edge{$edge_label})) {\n          $edge{$edge_label} = 0;\n        }\n        $edge{$edge_label} += $n;\n      }\n    }\n  }\n\n  # Print edges (process in order of decreasing counts)\n  my %indegree = ();   # Number of incoming edges added per node so far\n  my %outdegree = ();  # Number of outgoing edges added per node so far\n  foreach my $e (sort { $edge{$b} <=> $edge{$a} } keys(%edge)) {\n    my @x = split(/\\001/, $e);\n    $n = $edge{$e};\n\n    # Initialize degree of kept incoming and outgoing edges if necessary\n    my $src = $x[0];\n    my $dst = $x[1];\n    if (!exists($outdegree{$src})) { $outdegree{$src} = 0; }\n    if (!exists($indegree{$dst})) { $indegree{$dst} = 0; }\n\n    my $keep;\n    if ($indegree{$dst} == 0) {\n      # Keep edge if needed for reachability\n      $keep = 1;\n    } elsif (abs($n) <= $edgelimit) {\n      # Drop if we are below --edgefraction\n      $keep = 0;\n    } elsif ($outdegree{$src} >= $main::opt_maxdegree ||\n             $indegree{$dst} >= $main::opt_maxdegree) {\n      # Keep limited number of in/out edges per node\n      $keep = 0;\n    } else {\n      $keep = 1;\n    }\n\n    if ($keep) {\n      $outdegree{$src}++;\n      
$indegree{$dst}++;\n\n      # Compute line width based on edge count\n      my $fraction = abs($local_total ? (3 * ($n / $local_total)) : 0);\n      if ($fraction > 1) { $fraction = 1; }\n      my $w = $fraction * 2;\n      if ($w < 1 && ($main::opt_web || $main::opt_svg)) {\n        # SVG output treats line widths < 1 poorly.\n        $w = 1;\n      }\n\n      # Dot sometimes segfaults if given edge weights that are too large, so\n      # we cap the weights at a large value\n      my $edgeweight = abs($n) ** 0.7;\n      if ($edgeweight > 100000) { $edgeweight = 100000; }\n      $edgeweight = int($edgeweight);\n\n      my $style = sprintf(\"setlinewidth(%f)\", $w);\n      if ($x[1] =~ m/\\(inline\\)/) {\n        $style .= \",dashed\";\n      }\n\n      # Use a slightly squashed function of the edge count as the weight\n      printf DOT (\"N%s -> N%s [label=%s, weight=%d, style=\\\"%s\\\"];\\n\",\n                  $node{$x[0]},\n                  $node{$x[1]},\n                  Unparse($n),\n                  $edgeweight,\n                  $style);\n    }\n  }\n\n  print DOT (\"}\\n\");\n  close(DOT);\n\n  if ($main::opt_web || $main::opt_svg) {\n    # Rewrite SVG to be more usable inside web browser.\n    RewriteSvg(TempName($main::next_tmpfile, \"svg\"));\n  }\n\n  return 1;\n}\n\nsub RewriteSvg {\n  my $svgfile = shift;\n\n  open(SVG, $svgfile) || die \"open temp svg: $!\";\n  my @svg = <SVG>;\n  close(SVG);\n  unlink $svgfile;\n  my $svg = join('', @svg);\n\n  # Dot's SVG output is\n  #\n  #    <svg width=\"___\" height=\"___\"\n  #     viewBox=\"___\" xmlns=...>\n  #    <g id=\"graph0\" transform=\"...\">\n  #    ...\n  #    </g>\n  #    </svg>\n  #\n  # Change it to\n  #\n  #    <svg width=\"100%\" height=\"100%\"\n  #     xmlns=...>\n  #    $svg_javascript\n  #    <g id=\"viewport\" transform=\"translate(0,0)\">\n  #    <g id=\"graph0\" transform=\"...\">\n  #    ...\n  #    </g>\n  #    </g>\n  #    </svg>\n\n  # Fix width, height; drop viewBox.\n  $svg 
=~ s/(?s)<svg width=\"[^\"]+\" height=\"[^\"]+\"(.*?)viewBox=\"[^\"]+\"/<svg width=\"100%\" height=\"100%\"$1/;\n\n  # Insert script, viewport <g> above first <g>\n  my $svg_javascript = SvgJavascript();\n  my $viewport = \"<g id=\\\"viewport\\\" transform=\\\"translate(0,0)\\\">\\n\";\n  $svg =~ s/<g id=\"graph\\d\"/$svg_javascript$viewport$&/;\n\n  # Insert final </g> above </svg>.\n  $svg =~ s/(.*)(<\\/svg>)/$1<\\/g>$2/;\n  $svg =~ s/<g id=\"graph\\d\"(.*?)/<g id=\"viewport\"$1/;\n\n  if ($main::opt_svg) {\n    # --svg: write to standard output.\n    print $svg;\n  } else {\n    # Write back to temporary file.\n    open(SVG, \">$svgfile\") || die \"open $svgfile: $!\";\n    print SVG $svg;\n    close(SVG);\n  }\n}\n\nsub SvgJavascript {\n  return <<'EOF';\n<script type=\"text/ecmascript\"><![CDATA[\n// SVGPan\n// http://www.cyberz.org/blog/2009/12/08/svgpan-a-javascript-svg-panzoomdrag-library/\n// Local modification: if(true || ...) below to force panning, never moving.\n\n/**\n *  SVGPan library 1.2\n * ====================\n *\n * Given an unique existing element with id \"viewport\", including the\n * the library into any SVG adds the following capabilities:\n *\n *  - Mouse panning\n *  - Mouse zooming (using the wheel)\n *  - Object dargging\n *\n * Known issues:\n *\n *  - Zooming (while panning) on Safari has still some issues\n *\n * Releases:\n *\n * 1.2, Sat Mar 20 08:42:50 GMT 2010, Zeng Xiaohui\n *\tFixed a bug with browser mouse handler interaction\n *\n * 1.1, Wed Feb  3 17:39:33 GMT 2010, Zeng Xiaohui\n *\tUpdated the zoom code to support the mouse wheel on Safari/Chrome\n *\n * 1.0, Andrea Leofreddi\n *\tFirst release\n *\n * This code is licensed under the following BSD license:\n *\n * Copyright 2009-2010 Andrea Leofreddi <a.leofreddi@itcharm.com>. All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without modification, are\n * permitted provided that the following conditions are met:\n *\n *    1. 
Redistributions of source code must retain the above copyright notice, this list of\n *       conditions and the following disclaimer.\n *\n *    2. Redistributions in binary form must reproduce the above copyright notice, this list\n *       of conditions and the following disclaimer in the documentation and/or other materials\n *       provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY Andrea Leofreddi ``AS IS'' AND ANY EXPRESS OR IMPLIED\n * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND\n * FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL Andrea Leofreddi OR\n * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\n * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\n * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * The views and conclusions contained in the software and documentation are those of the\n * authors and should not be interpreted as representing official policies, either expressed\n * or implied, of Andrea Leofreddi.\n */\n\nvar root = document.documentElement;\n\nvar state = 'none', stateTarget, stateOrigin, stateTf;\n\nsetupHandlers(root);\n\n/**\n * Register handlers\n */\nfunction setupHandlers(root){\n\tsetAttributes(root, {\n\t\t\"onmouseup\" : \"add(evt)\",\n\t\t\"onmousedown\" : \"handleMouseDown(evt)\",\n\t\t\"onmousemove\" : \"handleMouseMove(evt)\",\n\t\t\"onmouseup\" : \"handleMouseUp(evt)\",\n\t\t//\"onmouseout\" : \"handleMouseUp(evt)\", // Decomment this to stop the pan functionality when dragging out of the SVG element\n\t});\n\n\tif(navigator.userAgent.toLowerCase().indexOf('webkit') >= 
0)\n\t\twindow.addEventListener('mousewheel', handleMouseWheel, false); // Chrome/Safari\n\telse\n\t\twindow.addEventListener('DOMMouseScroll', handleMouseWheel, false); // Others\n\n\tvar g = svgDoc.getElementById(\"svg\");\n\tg.width = \"100%\";\n\tg.height = \"100%\";\n}\n\n/**\n * Instance an SVGPoint object with given event coordinates.\n */\nfunction getEventPoint(evt) {\n\tvar p = root.createSVGPoint();\n\n\tp.x = evt.clientX;\n\tp.y = evt.clientY;\n\n\treturn p;\n}\n\n/**\n * Sets the current transform matrix of an element.\n */\nfunction setCTM(element, matrix) {\n\tvar s = \"matrix(\" + matrix.a + \",\" + matrix.b + \",\" + matrix.c + \",\" + matrix.d + \",\" + matrix.e + \",\" + matrix.f + \")\";\n\n\telement.setAttribute(\"transform\", s);\n}\n\n/**\n * Dumps a matrix to a string (useful for debug).\n */\nfunction dumpMatrix(matrix) {\n\tvar s = \"[ \" + matrix.a + \", \" + matrix.c + \", \" + matrix.e + \"\\n  \" + matrix.b + \", \" + matrix.d + \", \" + matrix.f + \"\\n  0, 0, 1 ]\";\n\n\treturn s;\n}\n\n/**\n * Sets attributes of an element.\n */\nfunction setAttributes(element, attributes){\n\tfor (i in attributes)\n\t\telement.setAttributeNS(null, i, attributes[i]);\n}\n\n/**\n * Handle mouse move event.\n */\nfunction handleMouseWheel(evt) {\n\tif(evt.preventDefault)\n\t\tevt.preventDefault();\n\n\tevt.returnValue = false;\n\n\tvar svgDoc = evt.target.ownerDocument;\n\n\tvar delta;\n\n\tif(evt.wheelDelta)\n\t\tdelta = evt.wheelDelta / 3600; // Chrome/Safari\n\telse\n\t\tdelta = evt.detail / -90; // Mozilla\n\n\tvar z = 1 + delta; // Zoom factor: 0.9/1.1\n\n\tvar g = svgDoc.getElementById(\"viewport\");\n\n\tvar p = getEventPoint(evt);\n\n\tp = p.matrixTransform(g.getCTM().inverse());\n\n\t// Compute new scale matrix in current mouse position\n\tvar k = root.createSVGMatrix().translate(p.x, p.y).scale(z).translate(-p.x, -p.y);\n\n        setCTM(g, g.getCTM().multiply(k));\n\n\tstateTf = stateTf.multiply(k.inverse());\n}\n\n/**\n * Handle mouse move 
event.\n */\nfunction handleMouseMove(evt) {\n\tif(evt.preventDefault)\n\t\tevt.preventDefault();\n\n\tevt.returnValue = false;\n\n\tvar svgDoc = evt.target.ownerDocument;\n\n\tvar g = svgDoc.getElementById(\"viewport\");\n\n\tif(state == 'pan') {\n\t\t// Pan mode\n\t\tvar p = getEventPoint(evt).matrixTransform(stateTf);\n\n\t\tsetCTM(g, stateTf.inverse().translate(p.x - stateOrigin.x, p.y - stateOrigin.y));\n\t} else if(state == 'move') {\n\t\t// Move mode\n\t\tvar p = getEventPoint(evt).matrixTransform(g.getCTM().inverse());\n\n\t\tsetCTM(stateTarget, root.createSVGMatrix().translate(p.x - stateOrigin.x, p.y - stateOrigin.y).multiply(g.getCTM().inverse()).multiply(stateTarget.getCTM()));\n\n\t\tstateOrigin = p;\n\t}\n}\n\n/**\n * Handle mouse button press event.\n */\nfunction handleMouseDown(evt) {\n\tif(evt.preventDefault)\n\t\tevt.preventDefault();\n\n\tevt.returnValue = false;\n\n\tvar svgDoc = evt.target.ownerDocument;\n\n\tvar g = svgDoc.getElementById(\"viewport\");\n\n\tif(true || evt.target.tagName == \"svg\") {\n\t\t// Pan mode\n\t\tstate = 'pan';\n\n\t\tstateTf = g.getCTM().inverse();\n\n\t\tstateOrigin = getEventPoint(evt).matrixTransform(stateTf);\n\t} else {\n\t\t// Move mode\n\t\tstate = 'move';\n\n\t\tstateTarget = evt.target;\n\n\t\tstateTf = g.getCTM().inverse();\n\n\t\tstateOrigin = getEventPoint(evt).matrixTransform(stateTf);\n\t}\n}\n\n/**\n * Handle mouse button release event.\n */\nfunction handleMouseUp(evt) {\n\tif(evt.preventDefault)\n\t\tevt.preventDefault();\n\n\tevt.returnValue = false;\n\n\tvar svgDoc = evt.target.ownerDocument;\n\n\tif(state == 'pan' || state == 'move') {\n\t\t// Quit pan mode\n\t\tstate = '';\n\t}\n}\n\n]]></script>\nEOF\n}\n\n# Provides a map from fullname to shortname for cases where the\n# shortname is ambiguous.  The symlist has both the fullname and\n# shortname for all symbols, which is usually fine, but sometimes --\n# such as overloaded functions -- two different fullnames can map to\n# the same shortname.  
In that case, we use the address of the\n# function to disambiguate the two.  This function fills in a map that\n# maps fullnames to modified shortnames in such cases.  If a fullname\n# is not present in the map, the 'normal' shortname provided by the\n# symlist is the appropriate one to use.\nsub FillFullnameToShortnameMap {\n  my $symbols = shift;\n  my $fullname_to_shortname_map = shift;\n  my $shortnames_seen_once = {};\n  my $shortnames_seen_more_than_once = {};\n\n  foreach my $symlist (values(%{$symbols})) {\n    # TODO(csilvers): deal with inlined symbols too.\n    my $shortname = $symlist->[0];\n    my $fullname = $symlist->[2];\n    if ($fullname !~ /<[0-9a-fA-F]+>$/) {  # fullname doesn't end in an address\n      next;       # the only collisions we care about are when addresses differ\n    }\n    if (defined($shortnames_seen_once->{$shortname}) &&\n        $shortnames_seen_once->{$shortname} ne $fullname) {\n      $shortnames_seen_more_than_once->{$shortname} = 1;\n    } else {\n      $shortnames_seen_once->{$shortname} = $fullname;\n    }\n  }\n\n  foreach my $symlist (values(%{$symbols})) {\n    my $shortname = $symlist->[0];\n    my $fullname = $symlist->[2];\n    # TODO(csilvers): take in a list of addresses we care about, and only\n    # store in the map if $symlist->[1] is in that list.  
Saves space.\n    next if defined($fullname_to_shortname_map->{$fullname});\n    if (defined($shortnames_seen_more_than_once->{$shortname})) {\n      if ($fullname =~ /<0*([^>]*)>$/) {   # fullname has address at end of it\n        $fullname_to_shortname_map->{$fullname} = \"$shortname\\@$1\";\n      }\n    }\n  }\n}\n\n# Return a small number that identifies the argument.\n# Multiple calls with the same argument will return the same number.\n# Calls with different arguments will return different numbers.\nsub ShortIdFor {\n  my $key = shift;\n  my $id = $main::uniqueid{$key};\n  if (!defined($id)) {\n    $id = keys(%main::uniqueid) + 1;\n    $main::uniqueid{$key} = $id;\n  }\n  return $id;\n}\n\n# Translate a stack of addresses into a stack of symbols\nsub TranslateStack {\n  my $symbols = shift;\n  my $fullname_to_shortname_map = shift;\n  my $k = shift;\n\n  my @addrs = split(/\\n/, $k);\n  my @result = ();\n  for (my $i = 0; $i <= $#addrs; $i++) {\n    my $a = $addrs[$i];\n\n    # Skip large addresses since they sometimes show up as fake entries on RH9\n    if (length($a) > 8 && $a gt \"7fffffffffffffff\") {\n      next;\n    }\n\n    if ($main::opt_disasm || $main::opt_list) {\n      # We want just the address for the key\n      push(@result, $a);\n      next;\n    }\n\n    my $symlist = $symbols->{$a};\n    if (!defined($symlist)) {\n      $symlist = [$a, \"\", $a];\n    }\n\n    # We can have a sequence of symbols for a particular entry\n    # (more than one symbol in the case of inlining).  
Callers\n    # come before callees in symlist, so walk backwards since\n    # the translated stack should contain callees before callers.\n    for (my $j = $#{$symlist}; $j >= 2; $j -= 3) {\n      my $func = $symlist->[$j-2];\n      my $fileline = $symlist->[$j-1];\n      my $fullfunc = $symlist->[$j];\n      if (defined($fullname_to_shortname_map->{$fullfunc})) {\n        $func = $fullname_to_shortname_map->{$fullfunc};\n      }\n      if ($j > 2) {\n        $func = \"$func (inline)\";\n      }\n\n      # Do not merge nodes corresponding to Callback::Run since that\n      # causes confusing cycles in dot display.  Instead, we synthesize\n      # a unique name for this frame per caller.\n      if ($func =~ m/Callback.*::Run$/) {\n        my $caller = ($i > 0) ? $addrs[$i-1] : 0;\n        $func = \"Run#\" . ShortIdFor($caller);\n      }\n\n      if ($main::opt_addresses) {\n        push(@result, \"$a $func $fileline\");\n      } elsif ($main::opt_lines) {\n        if ($func eq '??' && $fileline eq '??:0') {\n          push(@result, \"$a\");\n        } else {\n          push(@result, \"$func $fileline\");\n        }\n      } elsif ($main::opt_functions) {\n        if ($func eq '??') {\n          push(@result, \"$a\");\n        } else {\n          push(@result, $func);\n        }\n      } elsif ($main::opt_files) {\n        if ($fileline eq '??:0' || $fileline eq '') {\n          push(@result, \"$a\");\n        } else {\n          my $f = $fileline;\n          $f =~ s/:\\d+$//;\n          push(@result, $f);\n        }\n      } else {\n        push(@result, $a);\n        last;  # Do not print inlined info\n      }\n    }\n  }\n\n  # print join(\",\", @addrs), \" => \", join(\",\", @result), \"\\n\";\n  return @result;\n}\n\n# Generate percent string for a number and a total\nsub Percent {\n  my $num = shift;\n  my $tot = shift;\n  if ($tot != 0) {\n    return sprintf(\"%.1f%%\", $num * 100.0 / $tot);\n  } else {\n    return ($num == 0) ? \"nan\" : (($num > 0) ? 
\"+inf\" : \"-inf\");\n  }\n}\n\n# Generate pretty-printed form of number\nsub Unparse {\n  my $num = shift;\n  if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') {\n    if ($main::opt_inuse_objects || $main::opt_alloc_objects) {\n      return sprintf(\"%d\", $num);\n    } else {\n      if ($main::opt_show_bytes) {\n        return sprintf(\"%d\", $num);\n      } else {\n        return sprintf(\"%.1f\", $num / 1048576.0);\n      }\n    }\n  } elsif ($main::profile_type eq 'contention' && !$main::opt_contentions) {\n    return sprintf(\"%.3f\", $num / 1e9); # Convert nanoseconds to seconds\n  } else {\n    return sprintf(\"%d\", $num);\n  }\n}\n\n# Alternate pretty-printed form: 0 maps to \".\"\nsub UnparseAlt {\n  my $num = shift;\n  if ($num == 0) {\n    return \".\";\n  } else {\n    return Unparse($num);\n  }\n}\n\n# Alternate pretty-printed form: 0 maps to \"\"\nsub HtmlPrintNumber {\n  my $num = shift;\n  if ($num == 0) {\n    return \"\";\n  } else {\n    return Unparse($num);\n  }\n}\n\n# Return output units\nsub Units {\n  if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') {\n    if ($main::opt_inuse_objects || $main::opt_alloc_objects) {\n      return \"objects\";\n    } else {\n      if ($main::opt_show_bytes) {\n        return \"B\";\n      } else {\n        return \"MB\";\n      }\n    }\n  } elsif ($main::profile_type eq 'contention' && !$main::opt_contentions) {\n    return \"seconds\";\n  } else {\n    return \"samples\";\n  }\n}\n\n##### Profile manipulation code #####\n\n# Generate flattened profile:\n# If count is charged to stack [a,b,c,d], in generated profile,\n# it will be charged to [a]\nsub FlatProfile {\n  my $profile = shift;\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    if ($#addrs >= 0) {\n      AddEntry($result, $addrs[0], $count);\n    }\n  }\n  return $result;\n}\n\n# Generate cumulative profile:\n# If count is 
charged to stack [a,b,c,d], in generated profile,\n# it will be charged to [a], [b], [c], [d]\nsub CumulativeProfile {\n  my $profile = shift;\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    foreach my $a (@addrs) {\n      AddEntry($result, $a, $count);\n    }\n  }\n  return $result;\n}\n\n# If the second-youngest PC on the stack is always the same, returns\n# that pc.  Otherwise, returns undef.\nsub IsSecondPcAlwaysTheSame {\n  my $profile = shift;\n\n  my $second_pc = undef;\n  foreach my $k (keys(%{$profile})) {\n    my @addrs = split(/\\n/, $k);\n    if ($#addrs < 1) {\n      return undef;\n    }\n    if (not defined $second_pc) {\n      $second_pc = $addrs[1];\n    } else {\n      if ($second_pc ne $addrs[1]) {\n        return undef;\n      }\n    }\n  }\n  return $second_pc;\n}\n\nsub ExtractSymbolNameInlineStack {\n  my $symbols = shift;\n  my $address = shift;\n\n  my @stack = ();\n\n  if (exists $symbols->{$address}) {\n    my @localinlinestack = @{$symbols->{$address}};\n    for (my $i = $#localinlinestack; $i > 0; $i-=3) {\n      my $file = $localinlinestack[$i-1];\n      my $fn = $localinlinestack[$i-0];\n\n      if ($file eq \"?\" || $file eq \":0\") {\n        $file = \"??:0\";\n      }\n      if ($fn eq '??') {\n        # If we can't get the symbol name, at least use the file information.\n        $fn = $file;\n      }\n      my $suffix = \"[inline]\";\n      if ($i == 2) {\n        $suffix = \"\";\n      }\n      push (@stack, $fn.$suffix);\n    }\n  }\n  else {\n    # If we can't get a symbol name, at least fill in the address.\n    push (@stack, $address);\n  }\n\n  return @stack;\n}\n\nsub ExtractSymbolLocation {\n  my $symbols = shift;\n  my $address = shift;\n  # 'addr2line' outputs \"??:0\" for unknown locations; we do the\n  # same to be consistent.\n  my $location = \"??:0:unknown\";\n  if (exists $symbols->{$address}) {\n    my $file = 
$symbols->{$address}->[1];\n    if ($file eq \"?\") {\n      $file = \"??:0\"\n    }\n    $location = $file . \":\" . $symbols->{$address}->[0];\n  }\n  return $location;\n}\n\n# Extracts a graph of calls.\nsub ExtractCalls {\n  my $symbols = shift;\n  my $profile = shift;\n\n  my $calls = {};\n  while( my ($stack_trace, $count) = each %$profile ) {\n    my @address = split(/\\n/, $stack_trace);\n    my $destination = ExtractSymbolLocation($symbols, $address[0]);\n    AddEntry($calls, $destination, $count);\n    for (my $i = 1; $i <= $#address; $i++) {\n      my $source = ExtractSymbolLocation($symbols, $address[$i]);\n      my $call = \"$source -> $destination\";\n      AddEntry($calls, $call, $count);\n      $destination = $source;\n    }\n  }\n\n  return $calls;\n}\n\nsub FilterFrames {\n  my $symbols = shift;\n  my $profile = shift;\n\n  if ($main::opt_retain eq '' && $main::opt_exclude eq '') {\n    return $profile;\n  }\n\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    my @path = ();\n    foreach my $a (@addrs) {\n      my $sym;\n      if (exists($symbols->{$a})) {\n        $sym = $symbols->{$a}->[0];\n      } else {\n        $sym = $a;\n      }\n      if ($main::opt_retain ne '' && $sym !~ m/$main::opt_retain/) {\n        next;\n      }\n      if ($main::opt_exclude ne '' && $sym =~ m/$main::opt_exclude/) {\n        next;\n      }\n      push(@path, $a);\n    }\n    if (scalar(@path) > 0) {\n      my $reduced_path = join(\"\\n\", @path);\n      AddEntry($result, $reduced_path, $count);\n    }\n  }\n\n  return $result;\n}\n\nsub PrintCollapsedStacks {\n  my $symbols = shift;\n  my $profile = shift;\n\n  while (my ($stack_trace, $count) = each %$profile) {\n    my @address = split(/\\n/, $stack_trace);\n    my @names = reverse ( map { ExtractSymbolNameInlineStack($symbols, $_) } @address );\n    printf(\"%s %d\\n\", join(\";\", @names), $count);\n  }\n}\n\nsub 
RemoveUninterestingFrames {\n  my $symbols = shift;\n  my $profile = shift;\n\n  # List of function names to skip\n  my %skip = ();\n  my $skip_regexp = 'NOMATCH';\n  if ($main::profile_type eq 'heap' || $main::profile_type eq 'growth') {\n    foreach my $name ('@JEMALLOC_PREFIX@calloc',\n                      'cfree',\n                      '@JEMALLOC_PREFIX@malloc',\n                      'newImpl',\n                      'void* newImpl',\n                      '@JEMALLOC_PREFIX@free',\n                      '@JEMALLOC_PREFIX@memalign',\n                      '@JEMALLOC_PREFIX@posix_memalign',\n                      '@JEMALLOC_PREFIX@aligned_alloc',\n                      'pvalloc',\n                      '@JEMALLOC_PREFIX@valloc',\n                      '@JEMALLOC_PREFIX@realloc',\n                      '@JEMALLOC_PREFIX@mallocx',\n                      '@JEMALLOC_PREFIX@rallocx',\n                      '@JEMALLOC_PREFIX@xallocx',\n                      '@JEMALLOC_PREFIX@dallocx',\n                      '@JEMALLOC_PREFIX@sdallocx',\n                      '@JEMALLOC_PREFIX@sdallocx_noflags',\n                      'tc_calloc',\n                      'tc_cfree',\n                      'tc_malloc',\n                      'tc_free',\n                      'tc_memalign',\n                      'tc_posix_memalign',\n                      'tc_pvalloc',\n                      'tc_valloc',\n                      'tc_realloc',\n                      'tc_new',\n                      'tc_delete',\n                      'tc_newarray',\n                      'tc_deletearray',\n                      'tc_new_nothrow',\n                      'tc_newarray_nothrow',\n                      'do_malloc',\n                      '::do_malloc',   # new name -- got moved to an unnamed ns\n                      '::do_malloc_or_cpp_alloc',\n                      'DoSampledAllocation',\n                      'simple_alloc::allocate',\n                      
'__malloc_alloc_template::allocate',\n                      '__builtin_delete',\n                      '__builtin_new',\n                      '__builtin_vec_delete',\n                      '__builtin_vec_new',\n                      'operator new',\n                      'operator new[]',\n                      # The entry to our memory-allocation routines on OS X\n                      'malloc_zone_malloc',\n                      'malloc_zone_calloc',\n                      'malloc_zone_valloc',\n                      'malloc_zone_realloc',\n                      'malloc_zone_memalign',\n                      'malloc_zone_free',\n                      # These mark the beginning/end of our custom sections\n                      '__start_google_malloc',\n                      '__stop_google_malloc',\n                      '__start_malloc_hook',\n                      '__stop_malloc_hook') {\n      $skip{$name} = 1;\n      $skip{\"_\" . $name} = 1;   # Mach (OS X) adds a _ prefix to everything\n    }\n    # TODO: Remove TCMalloc once everything has been\n    # moved into the tcmalloc:: namespace and we have flushed\n    # old code out of the system.\n    $skip_regexp = \"TCMalloc|^tcmalloc::\";\n  } elsif ($main::profile_type eq 'contention') {\n    foreach my $vname ('base::RecordLockProfileData',\n                       'base::SubmitMutexProfileData',\n                       'base::SubmitSpinLockProfileData',\n                       'Mutex::Unlock',\n                       'Mutex::UnlockSlow',\n                       'Mutex::ReaderUnlock',\n                       'MutexLock::~MutexLock',\n                       'SpinLock::Unlock',\n                       'SpinLock::SlowUnlock',\n                       'SpinLockHolder::~SpinLockHolder') {\n      $skip{$vname} = 1;\n    }\n  } elsif ($main::profile_type eq 'cpu') {\n    # Drop signal handlers used for CPU profile collection\n    # TODO(dpeng): this should not be necessary; it's taken\n    # care of by the general 
2nd-pc mechanism below.\n    foreach my $name ('ProfileData::Add',           # historical\n                      'ProfileData::prof_handler',  # historical\n                      'CpuProfiler::prof_handler',\n                      '__FRAME_END__',\n                      '__pthread_sighandler',\n                      '__restore') {\n      $skip{$name} = 1;\n    }\n  } else {\n    # Nothing skipped for unknown types\n  }\n\n  if ($main::profile_type eq 'cpu') {\n    # If all the second-youngest program counters are the same,\n    # this STRONGLY suggests that it is an artifact of measurement,\n    # i.e., stack frames pushed by the CPU profiler signal handler.\n    # Hence, we delete them.\n    # (The topmost PC is read from the signal structure, not from\n    # the stack, so it does not get involved.)\n    while (my $second_pc = IsSecondPcAlwaysTheSame($profile)) {\n      my $result = {};\n      my $func = '';\n      if (exists($symbols->{$second_pc})) {\n        $second_pc = $symbols->{$second_pc}->[0];\n      }\n      print STDERR \"Removing $second_pc from all stack traces.\\n\";\n      foreach my $k (keys(%{$profile})) {\n        my $count = $profile->{$k};\n        my @addrs = split(/\\n/, $k);\n        splice @addrs, 1, 1;\n        my $reduced_path = join(\"\\n\", @addrs);\n        AddEntry($result, $reduced_path, $count);\n      }\n      $profile = $result;\n    }\n  }\n\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    my @path = ();\n    foreach my $a (@addrs) {\n      if (exists($symbols->{$a})) {\n        my $func = $symbols->{$a}->[0];\n        if ($skip{$func} || ($func =~ m/$skip_regexp/)) {\n          # Throw away the portion of the backtrace seen so far, under the\n          # assumption that previous frames were for functions internal to the\n          # allocator.\n          @path = ();\n          next;\n        }\n      }\n      push(@path, $a);\n    }\n    my 
$reduced_path = join(\"\\n\", @path);\n    AddEntry($result, $reduced_path, $count);\n  }\n\n  $result = FilterFrames($symbols, $result);\n\n  return $result;\n}\n\n# Reduce profile to granularity given by user\nsub ReduceProfile {\n  my $symbols = shift;\n  my $profile = shift;\n  my $result = {};\n  my $fullname_to_shortname_map = {};\n  FillFullnameToShortnameMap($symbols, $fullname_to_shortname_map);\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @translated = TranslateStack($symbols, $fullname_to_shortname_map, $k);\n    my @path = ();\n    my %seen = ();\n    $seen{''} = 1;      # So that empty keys are skipped\n    foreach my $e (@translated) {\n      # To avoid double-counting due to recursion, skip a stack-trace\n      # entry if it has already been seen\n      if (!$seen{$e}) {\n        $seen{$e} = 1;\n        push(@path, $e);\n      }\n    }\n    my $reduced_path = join(\"\\n\", @path);\n    AddEntry($result, $reduced_path, $count);\n  }\n  return $result;\n}\n\n# Does the specified symbol array match the regexp?\nsub SymbolMatches {\n  my $sym = shift;\n  my $re = shift;\n  if (defined($sym)) {\n    for (my $i = 0; $i < $#{$sym}; $i += 3) {\n      if ($sym->[$i] =~ m/$re/ || $sym->[$i+1] =~ m/$re/) {\n        return 1;\n      }\n    }\n  }\n  return 0;\n}\n\n# Focus only on paths involving specified regexps\nsub FocusProfile {\n  my $symbols = shift;\n  my $profile = shift;\n  my $focus = shift;\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    foreach my $a (@addrs) {\n      # Reply if it matches either the address/shortname/fileline\n      if (($a =~ m/$focus/) || SymbolMatches($symbols->{$a}, $focus)) {\n        AddEntry($result, $k, $count);\n        last;\n      }\n    }\n  }\n  return $result;\n}\n\n# Focus only on paths not involving specified regexps\nsub IgnoreProfile {\n  my $symbols = shift;\n  my $profile = shift;\n  my $ignore 
= shift;\n  my $result = {};\n  foreach my $k (keys(%{$profile})) {\n    my $count = $profile->{$k};\n    my @addrs = split(/\\n/, $k);\n    my $matched = 0;\n    foreach my $a (@addrs) {\n      # Reply if it matches either the address/shortname/fileline\n      if (($a =~ m/$ignore/) || SymbolMatches($symbols->{$a}, $ignore)) {\n        $matched = 1;\n        last;\n      }\n    }\n    if (!$matched) {\n      AddEntry($result, $k, $count);\n    }\n  }\n  return $result;\n}\n\n# Get total count in profile\nsub TotalProfile {\n  my $profile = shift;\n  my $result = 0;\n  foreach my $k (keys(%{$profile})) {\n    $result += $profile->{$k};\n  }\n  return $result;\n}\n\n# Add A to B\nsub AddProfile {\n  my $A = shift;\n  my $B = shift;\n\n  my $R = {};\n  # add all keys in A\n  foreach my $k (keys(%{$A})) {\n    my $v = $A->{$k};\n    AddEntry($R, $k, $v);\n  }\n  # add all keys in B\n  foreach my $k (keys(%{$B})) {\n    my $v = $B->{$k};\n    AddEntry($R, $k, $v);\n  }\n  return $R;\n}\n\n# Merges symbol maps\nsub MergeSymbols {\n  my $A = shift;\n  my $B = shift;\n\n  my $R = {};\n  foreach my $k (keys(%{$A})) {\n    $R->{$k} = $A->{$k};\n  }\n  if (defined($B)) {\n    foreach my $k (keys(%{$B})) {\n      $R->{$k} = $B->{$k};\n    }\n  }\n  return $R;\n}\n\n\n# Add A to B\nsub AddPcs {\n  my $A = shift;\n  my $B = shift;\n\n  my $R = {};\n  # add all keys in A\n  foreach my $k (keys(%{$A})) {\n    $R->{$k} = 1\n  }\n  # add all keys in B\n  foreach my $k (keys(%{$B})) {\n    $R->{$k} = 1\n  }\n  return $R;\n}\n\n# Subtract B from A\nsub SubtractProfile {\n  my $A = shift;\n  my $B = shift;\n\n  my $R = {};\n  foreach my $k (keys(%{$A})) {\n    my $v = $A->{$k} - GetEntry($B, $k);\n    if ($v < 0 && $main::opt_drop_negative) {\n      $v = 0;\n    }\n    AddEntry($R, $k, $v);\n  }\n  if (!$main::opt_drop_negative) {\n    # Take care of when subtracted profile has more entries\n    foreach my $k (keys(%{$B})) {\n      if (!exists($A->{$k})) {\n        AddEntry($R, $k, 0 
- $B->{$k});\n      }\n    }\n  }\n  return $R;\n}\n\n# Get entry from profile; zero if not present\nsub GetEntry {\n  my $profile = shift;\n  my $k = shift;\n  if (exists($profile->{$k})) {\n    return $profile->{$k};\n  } else {\n    return 0;\n  }\n}\n\n# Add entry to specified profile\nsub AddEntry {\n  my $profile = shift;\n  my $k = shift;\n  my $n = shift;\n  if (!exists($profile->{$k})) {\n    $profile->{$k} = 0;\n  }\n  $profile->{$k} += $n;\n}\n\n# Add a stack of entries to specified profile, and add them to the $pcs\n# list.\nsub AddEntries {\n  my $profile = shift;\n  my $pcs = shift;\n  my $stack = shift;\n  my $count = shift;\n  my @k = ();\n\n  foreach my $e (split(/\\s+/, $stack)) {\n    my $pc = HexExtend($e);\n    $pcs->{$pc} = 1;\n    push @k, $pc;\n  }\n  AddEntry($profile, (join \"\\n\", @k), $count);\n}\n\n##### Code to profile a server dynamically #####\n\nsub CheckSymbolPage {\n  my $url = SymbolPageURL();\n  my $command = ShellEscape(@URL_FETCHER, $url);\n  open(SYMBOL, \"$command |\") or error($command);\n  my $line = <SYMBOL>;\n  $line =~ s/\\r//g;         # turn windows-looking lines into unix-looking lines\n  close(SYMBOL);\n  unless (defined($line)) {\n    error(\"$url doesn't exist\\n\");\n  }\n\n  if ($line =~ /^num_symbols:\\s+(\\d+)$/) {\n    if ($1 == 0) {\n      error(\"Stripped binary. 
No symbols available.\\n\");\n    }\n  } else {\n    error(\"Failed to get the number of symbols from $url\\n\");\n  }\n}\n\nsub IsProfileURL {\n  my $profile_name = shift;\n  if (-f $profile_name) {\n    printf STDERR \"Using local file $profile_name.\\n\";\n    return 0;\n  }\n  return 1;\n}\n\nsub ParseProfileURL {\n  my $profile_name = shift;\n\n  if (!defined($profile_name) || $profile_name eq \"\") {\n    return ();\n  }\n\n  # Split profile URL - matches all non-empty strings, so no test.\n  $profile_name =~ m,^(https?://)?([^/]+)(.*?)(/|$PROFILES)?$,;\n\n  my $proto = $1 || \"http://\";\n  my $hostport = $2;\n  my $prefix = $3;\n  my $profile = $4 || \"/\";\n\n  my $host = $hostport;\n  $host =~ s/:.*//;\n\n  my $baseurl = \"$proto$hostport$prefix\";\n  return ($host, $baseurl, $profile);\n}\n\n# We fetch symbols from the first profile argument.\nsub SymbolPageURL {\n  my ($host, $baseURL, $path) = ParseProfileURL($main::pfile_args[0]);\n  return \"$baseURL$SYMBOL_PAGE\";\n}\n\nsub FetchProgramName() {\n  my ($host, $baseURL, $path) = ParseProfileURL($main::pfile_args[0]);\n  my $url = \"$baseURL$PROGRAM_NAME_PAGE\";\n  my $command_line = ShellEscape(@URL_FETCHER, $url);\n  open(CMDLINE, \"$command_line |\") or error($command_line);\n  my $cmdline = <CMDLINE>;\n  $cmdline =~ s/\\r//g;   # turn windows-looking lines into unix-looking lines\n  close(CMDLINE);\n  error(\"Failed to get program name from $url\\n\") unless defined($cmdline);\n  $cmdline =~ s/\\x00.+//;  # Remove argv[1] and later arguments.\n  $cmdline =~ s!\\n!!g;  # Remove LFs.\n  return $cmdline;\n}\n\n# Gee, curl's -L (--location) option isn't reliable at least\n# with its 7.12.3 version.  Curl will forget to post data if\n# there is a redirection.  This function is a workaround for\n# curl.  
Redirection happens on borg hosts.\nsub ResolveRedirectionForCurl {\n  my $url = shift;\n  my $command_line = ShellEscape(@URL_FETCHER, \"--head\", $url);\n  open(CMDLINE, \"$command_line |\") or error($command_line);\n  while (<CMDLINE>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    if (/^Location: (.*)/) {\n      $url = $1;\n    }\n  }\n  close(CMDLINE);\n  return $url;\n}\n\n# Add a timeout flag to URL_FETCHER.  Returns a new list.\nsub AddFetchTimeout {\n  my $timeout = shift;\n  my @fetcher = @_;\n  if (defined($timeout)) {\n    if (join(\" \", @fetcher) =~ m/\\bcurl -s/) {\n      push(@fetcher, \"--max-time\", sprintf(\"%d\", $timeout));\n    } elsif (join(\" \", @fetcher) =~ m/\\brpcget\\b/) {\n      push(@fetcher, sprintf(\"--deadline=%d\", $timeout));\n    }\n  }\n  return @fetcher;\n}\n\n# Reads a symbol map from the file handle name given as $1, returning\n# the resulting symbol map.  Also processes variables relating to symbols.\n# Currently, the only variable processed is 'binary=<value>' which updates\n# $main::prog to have the correct program name.\nsub ReadSymbols {\n  my $in = shift;\n  my $map = {};\n  while (<$in>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    # Removes all the leading zeroes from the symbols, see comment below.\n    if (m/^0x0*([0-9a-f]+)\\s+(.+)/) {\n      $map->{$1} = $2;\n    } elsif (m/^---/) {\n      last;\n    } elsif (m/^([a-z][^=]*)=(.*)$/ ) {\n      my ($variable, $value) = ($1, $2);\n      for ($variable, $value) {\n        s/^\\s+//;\n        s/\\s+$//;\n      }\n      if ($variable eq \"binary\") {\n        if ($main::prog ne $UNKNOWN_BINARY && $main::prog ne $value) {\n          printf STDERR (\"Warning: Mismatched binary name '%s', using '%s'.\\n\",\n                         $main::prog, $value);\n        }\n        $main::prog = $value;\n      } else {\n        printf STDERR (\"Ignoring unknown variable in symbols list: \" .\n            
\"'%s' = '%s'\\n\", $variable, $value);\n      }\n    }\n  }\n  return $map;\n}\n\nsub URLEncode {\n  my $str = shift;\n  $str =~ s/([^A-Za-z0-9\\-_.!~*'()])/ sprintf \"%%%02x\", ord $1 /eg;\n  return $str;\n}\n\nsub AppendSymbolFilterParams {\n  my $url = shift;\n  my @params = ();\n  if ($main::opt_retain ne '') {\n    push(@params, sprintf(\"retain=%s\", URLEncode($main::opt_retain)));\n  }\n  if ($main::opt_exclude ne '') {\n    push(@params, sprintf(\"exclude=%s\", URLEncode($main::opt_exclude)));\n  }\n  if (scalar @params > 0) {\n    $url = sprintf(\"%s?%s\", $url, join(\"&\", @params));\n  }\n  return $url;\n}\n\n# Fetches and processes symbols to prepare them for use in the profile output\n# code.  If the optional 'symbol_map' arg is not given, fetches symbols from\n# $SYMBOL_PAGE for all PC values found in profile.  Otherwise, the raw symbols\n# are assumed to have already been fetched into 'symbol_map' and are simply\n# extracted and processed.\nsub FetchSymbols {\n  my $pcset = shift;\n  my $symbol_map = shift;\n\n  my %seen = ();\n  my @pcs = grep { !$seen{$_}++ } keys(%$pcset);  # uniq\n\n  if (!defined($symbol_map)) {\n    my $post_data = join(\"+\", sort((map {\"0x\" . \"$_\"} @pcs)));\n\n    open(POSTFILE, \">$main::tmpfile_sym\");\n    print POSTFILE $post_data;\n    close(POSTFILE);\n\n    my $url = SymbolPageURL();\n\n    my $command_line;\n    if (join(\" \", @URL_FETCHER) =~ m/\\bcurl -s/) {\n      $url = ResolveRedirectionForCurl($url);\n      $url = AppendSymbolFilterParams($url);\n      $command_line = ShellEscape(@URL_FETCHER, \"-d\", \"\\@$main::tmpfile_sym\",\n                                  $url);\n    } else {\n      $url = AppendSymbolFilterParams($url);\n      $command_line = (ShellEscape(@URL_FETCHER, \"--post\", $url)\n                       . \" < \" . 
ShellEscape($main::tmpfile_sym));\n    }\n    # We use c++filt in case $SYMBOL_PAGE gives us mangled symbols.\n    my $escaped_cppfilt = ShellEscape($obj_tool_map{\"c++filt\"});\n    open(SYMBOL, \"$command_line | $escaped_cppfilt |\") or error($command_line);\n    $symbol_map = ReadSymbols(*SYMBOL{IO});\n    close(SYMBOL);\n  }\n\n  my $symbols = {};\n  foreach my $pc (@pcs) {\n    my $fullname;\n    # For 64 bits binaries, symbols are extracted with 8 leading zeroes.\n    # Then /symbol reads the long symbols in as uint64, and outputs\n    # the result with a \"0x%08llx\" format which get rid of the zeroes.\n    # By removing all the leading zeroes in both $pc and the symbols from\n    # /symbol, the symbols match and are retrievable from the map.\n    my $shortpc = $pc;\n    $shortpc =~ s/^0*//;\n    # Each line may have a list of names, which includes the function\n    # and also other functions it has inlined.  They are separated (in\n    # PrintSymbolizedProfile), by --, which is illegal in function names.\n    my $fullnames;\n    if (defined($symbol_map->{$shortpc})) {\n      $fullnames = $symbol_map->{$shortpc};\n    } else {\n      $fullnames = \"0x\" . 
$pc;  # Just use addresses\n    }\n    my $sym = [];\n    $symbols->{$pc} = $sym;\n    foreach my $fullname (split(\"--\", $fullnames)) {\n      my $name = ShortFunctionName($fullname);\n      push(@{$sym}, $name, \"?\", $fullname);\n    }\n  }\n  return $symbols;\n}\n\nsub BaseName {\n  my $file_name = shift;\n  $file_name =~ s!^.*/!!;  # Remove directory name\n  return $file_name;\n}\n\nsub MakeProfileBaseName {\n  my ($binary_name, $profile_name) = @_;\n  my ($host, $baseURL, $path) = ParseProfileURL($profile_name);\n  my $binary_shortname = BaseName($binary_name);\n  return sprintf(\"%s.%s.%s\",\n                 $binary_shortname, $main::op_time, $host);\n}\n\nsub FetchDynamicProfile {\n  my $binary_name = shift;\n  my $profile_name = shift;\n  my $fetch_name_only = shift;\n  my $encourage_patience = shift;\n\n  if (!IsProfileURL($profile_name)) {\n    return $profile_name;\n  } else {\n    my ($host, $baseURL, $path) = ParseProfileURL($profile_name);\n    if ($path eq \"\" || $path eq \"/\") {\n      # Missing type specifier defaults to cpu-profile\n      $path = $PROFILE_PAGE;\n    }\n\n    my $profile_file = MakeProfileBaseName($binary_name, $profile_name);\n\n    my $url = \"$baseURL$path\";\n    my $fetch_timeout = undef;\n    if ($path =~ m/$PROFILE_PAGE|$PMUPROFILE_PAGE/) {\n      if ($path =~ m/[?]/) {\n        $url .= \"&\";\n      } else {\n        $url .= \"?\";\n      }\n      $url .= sprintf(\"seconds=%d\", $main::opt_seconds);\n      $fetch_timeout = $main::opt_seconds * 1.01 + 60;\n      # Set $profile_type for consumption by PrintSymbolizedProfile.\n      $main::profile_type = 'cpu';\n    } else {\n      # For non-CPU profiles, we add a type-extension to\n      # the target profile file name.\n      my $suffix = $path;\n      $suffix =~ s,/,.,g;\n      $profile_file .= $suffix;\n      # Set $profile_type for consumption by PrintSymbolizedProfile.\n      if ($path =~ m/$HEAP_PAGE/) {\n        $main::profile_type = 'heap';\n      } elsif ($path 
=~ m/$GROWTH_PAGE/) {\n        $main::profile_type = 'growth';\n      } elsif ($path =~ m/$CONTENTION_PAGE/) {\n        $main::profile_type = 'contention';\n      }\n    }\n\n    my $profile_dir = $ENV{\"JEPROF_TMPDIR\"} || ($ENV{HOME} . \"/jeprof\");\n    if (! -d $profile_dir) {\n      mkdir($profile_dir)\n          || die(\"Unable to create profile directory $profile_dir: $!\\n\");\n    }\n    my $tmp_profile = \"$profile_dir/.tmp.$profile_file\";\n    my $real_profile = \"$profile_dir/$profile_file\";\n\n    if ($fetch_name_only > 0) {\n      return $real_profile;\n    }\n\n    my @fetcher = AddFetchTimeout($fetch_timeout, @URL_FETCHER);\n    my $cmd = ShellEscape(@fetcher, $url) . \" > \" . ShellEscape($tmp_profile);\n    if ($path =~ m/$PROFILE_PAGE|$PMUPROFILE_PAGE|$CENSUSPROFILE_PAGE/){\n      print STDERR \"Gathering CPU profile from $url for $main::opt_seconds seconds to\\n  ${real_profile}\\n\";\n      if ($encourage_patience) {\n        print STDERR \"Be patient...\\n\";\n      }\n    } else {\n      print STDERR \"Fetching $path profile from $url to\\n  ${real_profile}\\n\";\n    }\n\n    (system($cmd) == 0) || error(\"Failed to get profile: $cmd: $!\\n\");\n    (system(\"mv\", $tmp_profile, $real_profile) == 0) || error(\"Unable to rename profile\\n\");\n    print STDERR \"Wrote profile to $real_profile\\n\";\n    $main::collected_profile = $real_profile;\n    return $main::collected_profile;\n  }\n}\n\n# Collect profiles in parallel\nsub FetchDynamicProfiles {\n  my $items = scalar(@main::pfile_args);\n  my $levels = log($items) / log(2);\n\n  if ($items == 1) {\n    $main::profile_files[0] = FetchDynamicProfile($main::prog, $main::pfile_args[0], 0, 1);\n  } else {\n    # math rounding issues\n    if ((2 ** $levels) < $items) {\n     $levels++;\n    }\n    my $count = scalar(@main::pfile_args);\n    for (my $i = 0; $i < $count; $i++) {\n      $main::profile_files[$i] = FetchDynamicProfile($main::prog, $main::pfile_args[$i], 1, 0);\n    }\n    print 
STDERR \"Fetching $count profiles, be patient...\\n\";\n    FetchDynamicProfilesRecurse($levels, 0, 0);\n    $main::collected_profile = join(\" \\\\\\n    \", @main::profile_files);\n  }\n}\n\n# Recursively fork a process to get enough processes\n# collecting profiles\nsub FetchDynamicProfilesRecurse {\n  my $maxlevel = shift;\n  my $level = shift;\n  my $position = shift;\n\n  if (my $pid = fork()) {\n    $position = 0 | ($position << 1);\n    TryCollectProfile($maxlevel, $level, $position);\n    wait;\n  } else {\n    $position = 1 | ($position << 1);\n    TryCollectProfile($maxlevel, $level, $position);\n    cleanup();\n    exit(0);\n  }\n}\n\n# Collect a single profile\nsub TryCollectProfile {\n  my $maxlevel = shift;\n  my $level = shift;\n  my $position = shift;\n\n  if ($level >= ($maxlevel - 1)) {\n    if ($position < scalar(@main::pfile_args)) {\n      FetchDynamicProfile($main::prog, $main::pfile_args[$position], 0, 0);\n    }\n  } else {\n    FetchDynamicProfilesRecurse($maxlevel, $level+1, $position);\n  }\n}\n\n##### Parsing code #####\n\n# Provide a small streaming-read module to handle very large\n# cpu-profile files.  Stream in chunks along a sliding window.\n# Provides an interface to get one 'slot', correctly handling\n# endian-ness differences.  A slot is one 32-bit or 64-bit word\n# (depending on the input profile).  
We tell endianness and bit-size\n# for the profile by looking at the first 8 bytes: in cpu profiles,\n# the second slot is always 3 (we'll accept anything that's not 0).\nBEGIN {\n  package CpuProfileStream;\n\n  sub new {\n    my ($class, $file, $fname) = @_;\n    my $self = { file        => $file,\n                 base        => 0,\n                 stride      => 512 * 1024,   # must be a multiple of bitsize/8\n                 slots       => [],\n                 unpack_code => \"\",           # N for big-endian, V for little\n                 perl_is_64bit => 1,          # matters if profile is 64-bit\n    };\n    bless $self, $class;\n    # Let unittests adjust the stride\n    if ($main::opt_test_stride > 0) {\n      $self->{stride} = $main::opt_test_stride;\n    }\n    # Read the first two slots to figure out bitsize and endianness.\n    my $slots = $self->{slots};\n    my $str;\n    read($self->{file}, $str, 8);\n    # Set the global $address_length based on what we see here.\n    # 8 is 32-bit (8 hexadecimal chars); 16 is 64-bit (16 hexadecimal chars).\n    $address_length = ($str eq (chr(0)x8)) ? 16 : 8;\n    if ($address_length == 8) {\n      if (substr($str, 6, 2) eq chr(0)x2) {\n        $self->{unpack_code} = 'V';  # Little-endian.\n      } elsif (substr($str, 4, 2) eq chr(0)x2) {\n        $self->{unpack_code} = 'N';  # Big-endian\n      } else {\n        ::error(\"$fname: header size >= 2**16\\n\");\n      }\n      @$slots = unpack($self->{unpack_code} . \"*\", $str);\n    } else {\n      # If we're a 64-bit profile, check if we're a 64-bit-capable\n      # perl.  Otherwise, each slot will be represented as a float\n      # instead of an int64, losing precision and making all the\n      # 64-bit addresses wrong.  We won't complain yet, but will\n      # later if we ever see a value that doesn't fit in 32 bits.\n      my $has_q = 0;\n      eval { $has_q = pack(\"Q\", \"1\") ? 
1 : 1; };\n      if (!$has_q) {\n        $self->{perl_is_64bit} = 0;\n      }\n      read($self->{file}, $str, 8);\n      if (substr($str, 4, 4) eq chr(0)x4) {\n        # We'd love to use 'Q', but it's a) not universal, b) not endian-proof.\n        $self->{unpack_code} = 'V';  # Little-endian.\n      } elsif (substr($str, 0, 4) eq chr(0)x4) {\n        $self->{unpack_code} = 'N';  # Big-endian\n      } else {\n        ::error(\"$fname: header size >= 2**32\\n\");\n      }\n      my @pair = unpack($self->{unpack_code} . \"*\", $str);\n      # Since we know one of the pair is 0, it's fine to just add them.\n      @$slots = (0, $pair[0] + $pair[1]);\n    }\n    return $self;\n  }\n\n  # Load more data when we access slots->get(X) which is not yet in memory.\n  sub overflow {\n    my ($self) = @_;\n    my $slots = $self->{slots};\n    $self->{base} += $#$slots + 1;   # skip over data we're replacing\n    my $str;\n    read($self->{file}, $str, $self->{stride});\n    if ($address_length == 8) {      # the 32-bit case\n      # This is the easy case: unpack provides 32-bit unpacking primitives.\n      @$slots = unpack($self->{unpack_code} . \"*\", $str);\n    } else {\n      # We need to unpack 32 bits at a time and combine.\n      my @b32_values = unpack($self->{unpack_code} . \"*\", $str);\n      my @b64_values = ();\n      for (my $i = 0; $i < $#b32_values; $i += 2) {\n        # TODO(csilvers): if this is a 32-bit perl, the math below\n        #    could end up in a too-large int, which perl will promote\n        #    to a double, losing necessary precision.  
Deal with that.\n        #    Right now, we just die.\n        my ($lo, $hi) = ($b32_values[$i], $b32_values[$i+1]);\n        if ($self->{unpack_code} eq 'N') {    # big-endian\n          ($lo, $hi) = ($hi, $lo);\n        }\n        my $value = $lo + $hi * (2**32);\n        if (!$self->{perl_is_64bit} &&   # check value is exactly represented\n            (($value % (2**32)) != $lo || int($value / (2**32)) != $hi)) {\n          ::error(\"Need a 64-bit perl to process this 64-bit profile.\\n\");\n        }\n        push(@b64_values, $value);\n      }\n      @$slots = @b64_values;\n    }\n  }\n\n  # Access the i-th long in the file (logically), or -1 at EOF.\n  sub get {\n    my ($self, $idx) = @_;\n    my $slots = $self->{slots};\n    while ($#$slots >= 0) {\n      if ($idx < $self->{base}) {\n        # The only time we expect a reference to $slots[$i - something]\n        # after referencing $slots[$i] is reading the very first header.\n        # Since $stride > |header|, that shouldn't cause any lookback\n        # errors.  And everything after the header is sequential.\n        print STDERR \"Unexpected look-back reading CPU profile\";\n        return -1;   # shrug, don't know what better to return\n      } elsif ($idx > $self->{base} + $#$slots) {\n        $self->overflow();\n      } else {\n        return $slots->[$idx - $self->{base}];\n      }\n    }\n    # If we get here, $slots is [], which means we've reached EOF\n    return -1;  # unique since slots is supposed to hold unsigned numbers\n  }\n}\n\n# Reads the top, 'header' section of a profile, and returns the last\n# line of the header, commonly called a 'header line'.  The header\n# section of a profile consists of zero or more 'command' lines that\n# are instructions to jeprof, which jeprof executes when reading the\n# header.  All 'command' lines start with a %.  
After the command\n# lines is the 'header line', which is a profile-specific line that\n# indicates what type of profile it is, and perhaps other global\n# information about the profile.  For instance, here's a header line\n# for a heap profile:\n#   heap profile:     53:    38236 [  5525:  1284029] @ heapprofile\n# For historical reasons, the CPU profile does not contain a text-\n# readable header line.  If the profile looks like a CPU profile,\n# this function returns \"\".  If no header line could be found, this\n# function returns undef.\n#\n# The following commands are recognized:\n#   %warn -- emit the rest of this line to stderr, prefixed by 'WARNING:'\n#\n# The input file should be in binmode.\nsub ReadProfileHeader {\n  local *PROFILE = shift;\n  my $firstchar = \"\";\n  my $line = \"\";\n  read(PROFILE, $firstchar, 1);\n  seek(PROFILE, -1, 1);                    # unread the firstchar\n  if ($firstchar !~ /[[:print:]]/) {       # is not a text character\n    return \"\";\n  }\n  while (defined($line = <PROFILE>)) {\n    $line =~ s/\\r//g;   # turn windows-looking lines into unix-looking lines\n    if ($line =~ /^%warn\\s+(.*)/) {        # 'warn' command\n      # Note this matches both '%warn blah\\n' and '%warn\\n'.\n      print STDERR \"WARNING: $1\\n\";        # print the rest of the line\n    } elsif ($line =~ /^%/) {\n      print STDERR \"Ignoring unknown command from profile header: $line\";\n    } else {\n      # End of commands, must be the header line.\n      return $line;\n    }\n  }\n  return undef;     # got to EOF without seeing a header line\n}\n\nsub IsSymbolizedProfileFile {\n  my $file_name = shift;\n  if (!(-e $file_name) || !(-r $file_name)) {\n    return 0;\n  }\n  # Check if the file contains a symbol-section marker.\n  open(TFILE, \"<$file_name\");\n  binmode TFILE;\n  my $firstline = ReadProfileHeader(*TFILE);\n  close(TFILE);\n  if (!$firstline) {\n    return 0;\n  }\n  $SYMBOL_PAGE =~ m,[^/]+$,;    # matches everything after the 
last slash\n  my $symbol_marker = $&;\n  return $firstline =~ /^--- *$symbol_marker/;\n}\n\n# Parse profile generated by common/profiler.cc and return a reference\n# to a map:\n#      $result->{version}     Version number of profile file\n#      $result->{period}      Sampling period (in microseconds)\n#      $result->{profile}     Profile object\n#      $result->{threads}     Map of thread IDs to profile objects\n#      $result->{map}         Memory map info from profile\n#      $result->{pcs}         Hash of all PC values seen, key is hex address\nsub ReadProfile {\n  my $prog = shift;\n  my $fname = shift;\n  my $result;            # return value\n\n  $CONTENTION_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n  my $contention_marker = $&;\n  $GROWTH_PAGE  =~ m,[^/]+$,;    # matches everything after the last slash\n  my $growth_marker = $&;\n  $SYMBOL_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n  my $symbol_marker = $&;\n  $PROFILE_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n  my $profile_marker = $&;\n  $HEAP_PAGE =~ m,[^/]+$,;    # matches everything after the last slash\n  my $heap_marker = $&;\n\n  # Look at first line to see if it is a heap or a CPU profile.\n  # CPU profile may start with no header at all, and just binary data\n  # (starting with \\0\\0\\0\\0) -- in that case, don't try to read the\n  # whole firstline, since it may be gigabytes(!) 
of data.\n  open(PROFILE, \"<$fname\") || error(\"$fname: $!\\n\");\n  binmode PROFILE;      # New perls do UTF-8 processing\n  my $header = ReadProfileHeader(*PROFILE);\n  if (!defined($header)) {   # means \"at EOF\"\n    error(\"Profile is empty.\\n\");\n  }\n\n  my $symbols;\n  if ($header =~ m/^--- *$symbol_marker/o) {\n    # Verify that the user asked for a symbolized profile\n    if (!$main::use_symbolized_profile) {\n      # we have both a binary and symbolized profiles, abort\n      error(\"FATAL ERROR: Symbolized profile\\n   $fname\\ncannot be used with \" .\n            \"a binary arg. Try again without passing\\n   $prog\\n\");\n    }\n    # Read the symbol section of the symbolized profile file.\n    $symbols = ReadSymbols(*PROFILE{IO});\n    # Read the next line to get the header for the remaining profile.\n    $header = ReadProfileHeader(*PROFILE) || \"\";\n  }\n\n  if ($header =~ m/^--- *($heap_marker|$growth_marker)/o) {\n    # Skip \"--- ...\" line for profile types that have their own headers.\n    $header = ReadProfileHeader(*PROFILE) || \"\";\n  }\n\n  $main::profile_type = '';\n\n  if ($header =~ m/^heap profile:.*$growth_marker/o) {\n    $main::profile_type = 'growth';\n    $result =  ReadHeapProfile($prog, *PROFILE, $header);\n  } elsif ($header =~ m/^heap profile:/) {\n    $main::profile_type = 'heap';\n    $result =  ReadHeapProfile($prog, *PROFILE, $header);\n  } elsif ($header =~ m/^heap/) {\n    $main::profile_type = 'heap';\n    $result = ReadThreadedHeapProfile($prog, $fname, $header);\n  } elsif ($header =~ m/^--- *$contention_marker/o) {\n    $main::profile_type = 'contention';\n    $result = ReadSynchProfile($prog, *PROFILE);\n  } elsif ($header =~ m/^--- *Stacks:/) {\n    print STDERR\n      \"Old format contention profile: mistakenly reports \" .\n      \"condition variable signals as lock contentions.\\n\";\n    $main::profile_type = 'contention';\n    $result = ReadSynchProfile($prog, *PROFILE);\n  } elsif ($header =~ m/^--- 
*$profile_marker/) {\n    # the binary cpu profile data starts immediately after this line\n    $main::profile_type = 'cpu';\n    $result = ReadCPUProfile($prog, $fname, *PROFILE);\n  } else {\n    if (defined($symbols)) {\n      # a symbolized profile contains a format we don't recognize, bail out\n      error(\"$fname: Cannot recognize profile section after symbols.\\n\");\n    }\n    # no ascii header present -- must be a CPU profile\n    $main::profile_type = 'cpu';\n    $result = ReadCPUProfile($prog, $fname, *PROFILE);\n  }\n\n  close(PROFILE);\n\n  # if we got symbols along with the profile, return those as well\n  if (defined($symbols)) {\n    $result->{symbols} = $symbols;\n  }\n\n  return $result;\n}\n\n# Subtract one from caller pc so we map back to call instr.\n# However, don't do this if we're reading a symbolized profile\n# file, in which case the subtract-one was done when the file\n# was written.\n#\n# We apply the same logic to all readers, though ReadCPUProfile uses an\n# independent implementation.\nsub FixCallerAddresses {\n  my $stack = shift;\n  # --raw/http: Always subtract one from pc's, because PrintSymbolizedProfile()\n  # dumps unadjusted profiles.\n  {\n    $stack =~ /(\\s)/;\n    my $delimiter = $1;\n    my @addrs = split(' ', $stack);\n    my @fixedaddrs;\n    $#fixedaddrs = $#addrs;\n    if ($#addrs >= 0) {\n      $fixedaddrs[0] = $addrs[0];\n    }\n    for (my $i = 1; $i <= $#addrs; $i++) {\n      $fixedaddrs[$i] = AddressSub($addrs[$i], \"0x1\");\n    }\n    return join $delimiter, @fixedaddrs;\n  }\n}\n\n# CPU profile reader\nsub ReadCPUProfile {\n  my $prog = shift;\n  my $fname = shift;       # just used for logging\n  local *PROFILE = shift;\n  my $version;\n  my $period;\n  my $i;\n  my $profile = {};\n  my $pcs = {};\n\n  # Parse string into array of slots.\n  my $slots = CpuProfileStream->new(*PROFILE, $fname);\n\n  # Read header.  
The current header version is a 5-element structure\n  # containing:\n  #   0: header count (always 0)\n  #   1: header \"words\" (after this one: 3)\n  #   2: format version (0)\n  #   3: sampling period (usec)\n  #   4: unused padding (always 0)\n  if ($slots->get(0) != 0) {\n    error(\"$fname: not a profile file, or old format profile file\\n\");\n  }\n  $i = 2 + $slots->get(1);\n  $version = $slots->get(2);\n  $period = $slots->get(3);\n  # Do some sanity checking on these header values.\n  if ($version > (2**32) || $period > (2**32) || $i > (2**32) || $i < 5) {\n    error(\"$fname: not a profile file, or corrupted profile file\\n\");\n  }\n\n  # Parse profile\n  while ($slots->get($i) != -1) {\n    my $n = $slots->get($i++);\n    my $d = $slots->get($i++);\n    if ($d > (2**16)) {  # TODO(csilvers): what's a reasonable max-stack-depth?\n      my $addr = sprintf(\"0%o\", $i * ($address_length == 8 ? 4 : 8));\n      print STDERR \"At index $i (address $addr):\\n\";\n      error(\"$fname: stack trace depth >= 2**16\\n\");\n    }\n    if ($slots->get($i) == 0) {\n      # End of profile data marker\n      $i += $d;\n      last;\n    }\n\n    # Make key out of the stack entries\n    my @k = ();\n    for (my $j = 0; $j < $d; $j++) {\n      my $pc = $slots->get($i+$j);\n      # Subtract one from caller pc so we map back to call instr.\n      $pc--;\n      $pc = sprintf(\"%0*x\", $address_length, $pc);\n      $pcs->{$pc} = 1;\n      push @k, $pc;\n    }\n\n    AddEntry($profile, (join \"\\n\", @k), $n);\n    $i += $d;\n  }\n\n  # Parse map\n  my $map = '';\n  seek(PROFILE, $i * 4, 0);\n  read(PROFILE, $map, (stat PROFILE)[7]);\n\n  my $r = {};\n  $r->{version} = $version;\n  $r->{period} = $period;\n  $r->{profile} = $profile;\n  $r->{libs} = ParseLibraries($prog, $map, $pcs);\n  $r->{pcs} = $pcs;\n\n  return $r;\n}\n\nsub HeapProfileIndex {\n  my $index = 1;\n  if ($main::opt_inuse_space) {\n    $index = 1;\n  } elsif ($main::opt_inuse_objects) {\n    $index = 0;\n  
} elsif ($main::opt_alloc_space) {\n    $index = 3;\n  } elsif ($main::opt_alloc_objects) {\n    $index = 2;\n  }\n  return $index;\n}\n\nsub ReadMappedLibraries {\n  my $fh = shift;\n  my $map = \"\";\n  # Read the /proc/self/maps data\n  while (<$fh>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    $map .= $_;\n  }\n  return $map;\n}\n\nsub ReadMemoryMap {\n  my $fh = shift;\n  my $map = \"\";\n  # Read /proc/self/maps data as formatted by DumpAddressMap()\n  my $buildvar = \"\";\n  while (<$fh>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    # Parse \"build=<dir>\" specification if supplied\n    if (m/^\\s*build=(.*)\\n/) {\n      $buildvar = $1;\n    }\n\n    # Expand \"$build\" variable if available\n    $_ =~ s/\\$build\\b/$buildvar/g;\n\n    $map .= $_;\n  }\n  return $map;\n}\n\nsub AdjustSamples {\n  my ($sample_adjustment, $sampling_algorithm, $n1, $s1, $n2, $s2) = @_;\n  if ($sample_adjustment) {\n    if ($sampling_algorithm == 2) {\n      # Remote-heap version 2\n      # The sampling frequency is the rate of a Poisson process.\n      # This means that the probability of sampling an allocation of\n      # size X with sampling rate Y is 1 - exp(-X/Y)\n      if ($n1 != 0) {\n        my $ratio = (($s1*1.0)/$n1)/($sample_adjustment);\n        my $scale_factor = 1/(1 - exp(-$ratio));\n        $n1 *= $scale_factor;\n        $s1 *= $scale_factor;\n      }\n      if ($n2 != 0) {\n        my $ratio = (($s2*1.0)/$n2)/($sample_adjustment);\n        my $scale_factor = 1/(1 - exp(-$ratio));\n        $n2 *= $scale_factor;\n        $s2 *= $scale_factor;\n      }\n    } else {\n      # Remote-heap version 1\n      my $ratio;\n      $ratio = (($s1*1.0)/$n1)/($sample_adjustment);\n      if ($ratio < 1) {\n        $n1 /= $ratio;\n        $s1 /= $ratio;\n      }\n      $ratio = (($s2*1.0)/$n2)/($sample_adjustment);\n      if ($ratio < 1) {\n        $n2 /= $ratio;\n        $s2 /= $ratio;\n      }\n    
}\n  }\n  return ($n1, $s1, $n2, $s2);\n}\n\nsub ReadHeapProfile {\n  my $prog = shift;\n  local *PROFILE = shift;\n  my $header = shift;\n\n  my $index = HeapProfileIndex();\n\n  # Find the type of this profile.  The header line looks like:\n  #    heap profile:   1246:  8800744 [  1246:  8800744] @ <heap-url>/266053\n  # There are two pairs <count: size>, the first inuse objects/space, and the\n  # second allocated objects/space.  This is followed optionally by a profile\n  # type, and if that is present, optionally by a sampling frequency.\n  # For remote heap profiles (v1):\n  # The interpretation of the sampling frequency is that the profiler, for\n  # each sample, calculates a uniformly distributed random integer less than\n  # the given value, and records the next sample after that many bytes have\n  # been allocated.  Therefore, the expected sample interval is half of the\n  # given frequency.  By default, if not specified, the expected sample\n  # interval is 128KB.  Only remote-heap-page profiles are adjusted for\n  # sample size.\n  # For remote heap profiles (v2):\n  # The sampling frequency is the rate of a Poisson process. This means that\n  # the probability of sampling an allocation of size X with sampling rate Y\n  # is 1 - exp(-X/Y)\n  # For version 2, a typical header line might look like this:\n  # heap profile:   1922: 127792360 [  1922: 127792360] @ <heap-url>_v2/524288\n  # the trailing number (524288) is the sampling rate. 
(Version 1 showed\n  # double the 'rate' here)\n  my $sampling_algorithm = 0;\n  my $sample_adjustment = 0;\n  chomp($header);\n  my $type = \"unknown\";\n  if ($header =~ m\"^heap profile:\\s*(\\d+):\\s+(\\d+)\\s+\\[\\s*(\\d+):\\s+(\\d+)\\](\\s*@\\s*([^/]*)(/(\\d+))?)?\") {\n    if (defined($6) && ($6 ne '')) {\n      $type = $6;\n      my $sample_period = $8;\n      # $type is \"heapprofile\" for profiles generated by the\n      # heap-profiler, and either \"heap\" or \"heap_v2\" for profiles\n      # generated by sampling directly within tcmalloc.  It can also\n      # be \"growth\" for heap-growth profiles.  The first is typically\n      # found for profiles generated locally, and the others for\n      # remote profiles.\n      if (($type eq \"heapprofile\") || ($type !~ /heap/) ) {\n        # No need to adjust for the sampling rate with heap-profiler-derived data\n        $sampling_algorithm = 0;\n      } elsif ($type =~ /_v2/) {\n        $sampling_algorithm = 2;     # version 2 sampling\n        if (defined($sample_period) && ($sample_period ne '')) {\n          $sample_adjustment = int($sample_period);\n        }\n      } else {\n        $sampling_algorithm = 1;     # version 1 sampling\n        if (defined($sample_period) && ($sample_period ne '')) {\n          $sample_adjustment = int($sample_period)/2;\n        }\n      }\n    } else {\n      # We detect whether or not this is a remote-heap profile by checking\n      # that the total-allocated stats ($n2,$s2) are exactly the\n      # same as the in-use stats ($n1,$s1).  
It is remotely conceivable\n      # that a non-remote-heap profile may pass this check, but it is hard\n      # to imagine how that could happen.\n      # In this case it's so old it's guaranteed to be remote-heap version 1.\n      my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4);\n      if (($n1 == $n2) && ($s1 == $s2)) {\n        # This is likely to be a remote-heap based sample profile\n        $sampling_algorithm = 1;\n      }\n    }\n  }\n\n  if ($sampling_algorithm > 0) {\n    # For remote-heap generated profiles, adjust the counts and sizes to\n    # account for the sample rate (we sample once every 128KB by default).\n    if ($sample_adjustment == 0) {\n      # Turn on profile adjustment.\n      $sample_adjustment = 128*1024;\n      print STDERR \"Adjusting heap profiles for 1-in-128KB sampling rate\\n\";\n    } else {\n      printf STDERR (\"Adjusting heap profiles for 1-in-%d sampling rate\\n\",\n                     $sample_adjustment);\n    }\n    if ($sampling_algorithm > 1) {\n      # We don't bother printing anything for the original version (version 1)\n      printf STDERR \"Heap version $sampling_algorithm\\n\";\n    }\n  }\n\n  my $profile = {};\n  my $pcs = {};\n  my $map = \"\";\n\n  while (<PROFILE>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    if (/^MAPPED_LIBRARIES:/) {\n      $map .= ReadMappedLibraries(*PROFILE);\n      last;\n    }\n\n    if (/^--- Memory map:/) {\n      $map .= ReadMemoryMap(*PROFILE);\n      last;\n    }\n\n    # Read entry of the form:\n    #  <count1>: <bytes1> [<count2>: <bytes2>] @ a1 a2 a3 ... 
an\n    s/^\\s*//;\n    s/\\s*$//;\n    if (m/^\\s*(\\d+):\\s+(\\d+)\\s+\\[\\s*(\\d+):\\s+(\\d+)\\]\\s+@\\s+(.*)$/) {\n      my $stack = $5;\n      my ($n1, $s1, $n2, $s2) = ($1, $2, $3, $4);\n      my @counts = AdjustSamples($sample_adjustment, $sampling_algorithm,\n                                 $n1, $s1, $n2, $s2);\n      AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]);\n    }\n  }\n\n  my $r = {};\n  $r->{version} = \"heap\";\n  $r->{period} = 1;\n  $r->{profile} = $profile;\n  $r->{libs} = ParseLibraries($prog, $map, $pcs);\n  $r->{pcs} = $pcs;\n  return $r;\n}\n\nsub ReadThreadedHeapProfile {\n  my ($prog, $fname, $header) = @_;\n\n  my $index = HeapProfileIndex();\n  my $sampling_algorithm = 0;\n  my $sample_adjustment = 0;\n  chomp($header);\n  my $type = \"unknown\";\n  # Assuming a very specific type of header for now.\n  if ($header =~ m\"^heap_v2/(\\d+)\") {\n    $type = \"_v2\";\n    $sampling_algorithm = 2;\n    $sample_adjustment = int($1);\n  }\n  if ($type ne \"_v2\" || !defined($sample_adjustment)) {\n    die \"Threaded heap profiles require v2 sampling with a sample rate\\n\";\n  }\n\n  my $profile = {};\n  my $thread_profiles = {};\n  my $pcs = {};\n  my $map = \"\";\n  my $stack = \"\";\n\n  while (<PROFILE>) {\n    s/\\r//g;\n    if (/^MAPPED_LIBRARIES:/) {\n      $map .= ReadMappedLibraries(*PROFILE);\n      last;\n    }\n\n    if (/^--- Memory map:/) {\n      $map .= ReadMemoryMap(*PROFILE);\n      last;\n    }\n\n    # Read entry of the form:\n    # @ a1 a2 ... 
an\n    #   t*: <count1>: <bytes1> [<count2>: <bytes2>]\n    #   t1: <count1>: <bytes1> [<count2>: <bytes2>]\n    #     ...\n    #   tn: <count1>: <bytes1> [<count2>: <bytes2>]\n    s/^\\s*//;\n    s/\\s*$//;\n    if (m/^@\\s+(.*)$/) {\n      $stack = $1;\n    } elsif (m/^\\s*(t(\\*|\\d+)):\\s+(\\d+):\\s+(\\d+)\\s+\\[\\s*(\\d+):\\s+(\\d+)\\]$/) {\n      if ($stack eq \"\") {\n        # Still in the header, so this is just a per-thread summary.\n        next;\n      }\n      my $thread = $2;\n      my ($n1, $s1, $n2, $s2) = ($3, $4, $5, $6);\n      my @counts = AdjustSamples($sample_adjustment, $sampling_algorithm,\n                                 $n1, $s1, $n2, $s2);\n      if ($thread eq \"*\") {\n        AddEntries($profile, $pcs, FixCallerAddresses($stack), $counts[$index]);\n      } else {\n        if (!exists($thread_profiles->{$thread})) {\n          $thread_profiles->{$thread} = {};\n        }\n        AddEntries($thread_profiles->{$thread}, $pcs,\n                   FixCallerAddresses($stack), $counts[$index]);\n      }\n    }\n  }\n\n  my $r = {};\n  $r->{version} = \"heap\";\n  $r->{period} = 1;\n  $r->{profile} = $profile;\n  $r->{threads} = $thread_profiles;\n  $r->{libs} = ParseLibraries($prog, $map, $pcs);\n  $r->{pcs} = $pcs;\n  return $r;\n}\n\nsub ReadSynchProfile {\n  my $prog = shift;\n  local *PROFILE = shift;\n  my $header = shift;\n\n  my $map = '';\n  my $profile = {};\n  my $pcs = {};\n  my $sampling_period = 1;\n  my $cyclespernanosec = 2.8;   # Default assumption for old binaries\n  my $seen_clockrate = 0;\n  my $line;\n\n  my $index = 0;\n  if ($main::opt_total_delay) {\n    $index = 0;\n  } elsif ($main::opt_contentions) {\n    $index = 1;\n  } elsif ($main::opt_mean_delay) {\n    $index = 2;\n  }\n\n  while ( $line = <PROFILE> ) {\n    $line =~ s/\\r//g;      # turn windows-looking lines into unix-looking lines\n    if ( $line =~ /^\\s*(\\d+)\\s+(\\d+) \\@\\s*(.*?)\\s*$/ ) {\n      my ($cycles, $count, $stack) = ($1, $2, $3);\n\n      
# Convert cycles to nanoseconds\n      $cycles /= $cyclespernanosec;\n\n      # Adjust for sampling done by application\n      $cycles *= $sampling_period;\n      $count *= $sampling_period;\n\n      my @values = ($cycles, $count, $cycles / $count);\n      AddEntries($profile, $pcs, FixCallerAddresses($stack), $values[$index]);\n\n    } elsif ( $line =~ /^(slow release).*thread \\d+  \\@\\s*(.*?)\\s*$/ ||\n              $line =~ /^\\s*(\\d+) \\@\\s*(.*?)\\s*$/ ) {\n      my ($cycles, $stack) = ($1, $2);\n      if ($cycles !~ /^\\d+$/) {\n        next;\n      }\n\n      # Convert cycles to nanoseconds\n      $cycles /= $cyclespernanosec;\n\n      # Adjust for sampling done by application\n      $cycles *= $sampling_period;\n\n      AddEntries($profile, $pcs, FixCallerAddresses($stack), $cycles);\n\n    } elsif ( $line =~ m/^([a-z][^=]*)=(.*)$/ ) {\n      my ($variable, $value) = ($1,$2);\n      for ($variable, $value) {\n        s/^\\s+//;\n        s/\\s+$//;\n      }\n      if ($variable eq \"cycles/second\") {\n        $cyclespernanosec = $value / 1e9;\n        $seen_clockrate = 1;\n      } elsif ($variable eq \"sampling period\") {\n        $sampling_period = $value;\n      } elsif ($variable eq \"ms since reset\") {\n        # Currently nothing is done with this value in jeprof\n        # So we just silently ignore it for now\n      } elsif ($variable eq \"discarded samples\") {\n        # Currently nothing is done with this value in jeprof\n        # So we just silently ignore it for now\n      } else {\n        printf STDERR (\"Ignoring unknown variable in /contention output: \" .\n                       \"'%s' = '%s'\\n\",$variable,$value);\n      }\n    } else {\n      # Memory map entry\n      $map .= $line;\n    }\n  }\n\n  if (!$seen_clockrate) {\n    printf STDERR (\"No cycles/second entry in profile; guessing %.1f GHz\\n\",\n                   $cyclespernanosec);\n  }\n\n  my $r = {};\n  $r->{version} = 0;\n  $r->{period} = $sampling_period;\n  
$r->{profile} = $profile;\n  $r->{libs} = ParseLibraries($prog, $map, $pcs);\n  $r->{pcs} = $pcs;\n  return $r;\n}\n\n# Given a hex value in the form \"0x1abcd\" or \"1abcd\", return either\n# \"0001abcd\" or \"000000000001abcd\", depending on the current (global)\n# address length.\nsub HexExtend {\n  my $addr = shift;\n\n  $addr =~ s/^(0x)?0*//;\n  my $zeros_needed = $address_length - length($addr);\n  if ($zeros_needed < 0) {\n    printf STDERR \"Warning: address $addr is longer than address length $address_length\\n\";\n    return $addr;\n  }\n  return (\"0\" x $zeros_needed) . $addr;\n}\n\n##### Symbol extraction #####\n\n# Aggressively search the lib_prefix values for the given library\n# If all else fails, just return the name of the library unmodified.\n# If the lib_prefix is \"/my/path,/other/path\" and $file is \"/lib/dir/mylib.so\"\n# it will search the following locations in this order, until it finds a file:\n#   /my/path/lib/dir/mylib.so\n#   /other/path/lib/dir/mylib.so\n#   /my/path/dir/mylib.so\n#   /other/path/dir/mylib.so\n#   /my/path/mylib.so\n#   /other/path/mylib.so\n#   /lib/dir/mylib.so              (returned as last resort)\nsub FindLibrary {\n  my $file = shift;\n  my $suffix = $file;\n\n  # Search for the library as described above\n  do {\n    foreach my $prefix (@prefix_list) {\n      my $fullpath = $prefix . 
$suffix;\n      if (-e $fullpath) {\n        return $fullpath;\n      }\n    }\n  } while ($suffix =~ s|^/[^/]+/|/|);\n  return $file;\n}\n\n# Return path to library with debugging symbols.\n# For libc libraries, the copy in /usr/lib/debug contains debugging symbols.\nsub DebuggingLibrary {\n  my $file = shift;\n\n  if ($file !~ m|^/|) {\n    return undef;\n  }\n\n  # Find debug symbol file if it's named after the library's name.\n  if (-f \"/usr/lib/debug$file\") {\n    if($main::opt_debug) { print STDERR \"found debug info for $file in /usr/lib/debug$file\\n\"; }\n    return \"/usr/lib/debug$file\";\n  } elsif (-f \"/usr/lib/debug$file.debug\") {\n    if($main::opt_debug) { print STDERR \"found debug info for $file in /usr/lib/debug$file.debug\\n\"; }\n    return \"/usr/lib/debug$file.debug\";\n  }\n\n  if(!$main::opt_debug_syms_by_id) {\n    if($main::opt_debug) { print STDERR \"no debug symbols found for $file\\n\" };\n    return undef;\n  }\n\n  # Find debug file if it's named after the library's build ID.\n  my $readelf = '';\n  if (!$main::gave_up_on_elfutils) {\n    $readelf = qx/eu-readelf -n ${file}/;\n    if ($?) {\n      print STDERR \"Cannot run eu-readelf. To use --debug-syms-by-id you must be on Linux, with elfutils installed.\\n\";\n      $main::gave_up_on_elfutils = 1;\n      return undef;\n    }\n    # Note: \"my $x = $1 if COND\" leaves $x in an undefined state when COND\n    # is false, so bind the regex capture directly instead.\n    my ($buildID) = $readelf =~ /Build ID: ([A-Fa-f0-9]+)/s;\n    if (defined $buildID && length $buildID > 0) {\n      my $symbolFile = '/usr/lib/debug/.build-id/' . substr($buildID, 0, 2) . '/' . substr($buildID, 2) . 
'.debug';\n      if (-e $symbolFile) {\n        if($main::opt_debug) { print STDERR \"found debug symbol file $symbolFile for $file\\n\" };\n        return $symbolFile;\n      } else {\n        if($main::opt_debug) { print STDERR \"no debug symbol file found for $file, build ID: $buildID\\n\" };\n        return undef;\n      }\n    }\n  }\n\n  if($main::opt_debug) { print STDERR \"no debug symbols found for $file, build ID unknown\\n\" };\n  return undef;\n}\n\n\n# Parse text section header of a library using objdump\nsub ParseTextSectionHeaderFromObjdump {\n  my $lib = shift;\n\n  my $size = undef;\n  my $vma;\n  my $file_offset;\n  # Get objdump output from the library file to figure out how to\n  # map between mapped addresses and addresses in the library.\n  my $cmd = ShellEscape($obj_tool_map{\"objdump\"}, \"-h\", $lib);\n  open(OBJDUMP, \"$cmd |\") || error(\"$cmd: $!\\n\");\n  while (<OBJDUMP>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    # Idx Name          Size      VMA       LMA       File off  Algn\n    #  10 .text         00104b2c  420156f0  420156f0  000156f0  2**4\n    # For 64-bit objects, VMA and LMA will be 16 hex digits, size and file\n    # offset may still be 8.  
But AddressSub below will still handle that.\n    my @x = split;\n    if (($#x >= 6) && ($x[1] eq '.text')) {\n      $size = $x[2];\n      $vma = $x[3];\n      $file_offset = $x[5];\n      last;\n    }\n  }\n  close(OBJDUMP);\n\n  if (!defined($size)) {\n    return undef;\n  }\n\n  my $r = {};\n  $r->{size} = $size;\n  $r->{vma} = $vma;\n  $r->{file_offset} = $file_offset;\n\n  return $r;\n}\n\n# Parse text section header of a library using otool (on OS X)\nsub ParseTextSectionHeaderFromOtool {\n  my $lib = shift;\n\n  my $size = undef;\n  my $vma = undef;\n  my $file_offset = undef;\n  # Get otool output from the library file to figure out how to\n  # map between mapped addresses and addresses in the library.\n  my $command = ShellEscape($obj_tool_map{\"otool\"}, \"-l\", $lib);\n  open(OTOOL, \"$command |\") || error(\"$command: $!\\n\");\n  my $cmd = \"\";\n  my $sectname = \"\";\n  my $segname = \"\";\n  foreach my $line (<OTOOL>) {\n    $line =~ s/\\r//g;      # turn windows-looking lines into unix-looking lines\n    # Load command <#>\n    #       cmd LC_SEGMENT\n    # [...]\n    # Section\n    #   sectname __text\n    #    segname __TEXT\n    #       addr 0x000009f8\n    #       size 0x00018b9e\n    #     offset 2552\n    #      align 2^2 (4)\n    # We will need to strip off the leading 0x from the hex addresses,\n    # and convert the offset into hex.\n    if ($line =~ /Load command/) {\n      $cmd = \"\";\n      $sectname = \"\";\n      $segname = \"\";\n    } elsif ($line =~ /Section/) {\n      $sectname = \"\";\n      $segname = \"\";\n    } elsif ($line =~ /cmd (\\w+)/) {\n      $cmd = $1;\n    } elsif ($line =~ /sectname (\\w+)/) {\n      $sectname = $1;\n    } elsif ($line =~ /segname (\\w+)/) {\n      $segname = $1;\n    } elsif (!(($cmd eq \"LC_SEGMENT\" || $cmd eq \"LC_SEGMENT_64\") &&\n               $sectname eq \"__text\" &&\n               $segname eq \"__TEXT\")) {\n      next;\n    } elsif ($line =~ /\\baddr 0x([0-9a-fA-F]+)/) {\n      $vma = 
$1;\n    } elsif ($line =~ /\\bsize 0x([0-9a-fA-F]+)/) {\n      $size = $1;\n    } elsif ($line =~ /\\boffset ([0-9]+)/) {\n      $file_offset = sprintf(\"%016x\", $1);\n    }\n    if (defined($vma) && defined($size) && defined($file_offset)) {\n      last;\n    }\n  }\n  close(OTOOL);\n\n  if (!defined($vma) || !defined($size) || !defined($file_offset)) {\n     return undef;\n  }\n\n  my $r = {};\n  $r->{size} = $size;\n  $r->{vma} = $vma;\n  $r->{file_offset} = $file_offset;\n\n  return $r;\n}\n\nsub ParseTextSectionHeader {\n  # obj_tool_map(\"otool\") is only defined if we're in a Mach-O environment\n  if (defined($obj_tool_map{\"otool\"})) {\n    my $r = ParseTextSectionHeaderFromOtool(@_);\n    if (defined($r)){\n      return $r;\n    }\n  }\n  # If otool doesn't work, or we don't have it, fall back to objdump\n  return ParseTextSectionHeaderFromObjdump(@_);\n}\n\n# Split /proc/pid/maps dump into a list of libraries\nsub ParseLibraries {\n  return if $main::use_symbol_page;  # We don't need libraries info.\n  my $prog = Cwd::abs_path(shift);\n  my $map = shift;\n  my $pcs = shift;\n\n  my $result = [];\n  my $h = \"[a-f0-9]+\";\n  my $zero_offset = HexExtend(\"0\");\n\n  my $buildvar = \"\";\n  foreach my $l (split(\"\\n\", $map)) {\n    if ($l =~ m/^\\s*build=(.*)$/) {\n      $buildvar = $1;\n    }\n\n    my $start;\n    my $finish;\n    my $offset;\n    my $lib;\n    if ($l =~ /^($h)-($h)\\s+..x.\\s+($h)\\s+\\S+:\\S+\\s+\\d+\\s+(\\S+\\.(so|dll|dylib|bundle)((\\.\\d+)+\\w*(\\.\\d+){0,3})?)$/i) {\n      # Full line from /proc/self/maps.  Example:\n      #   40000000-40015000 r-xp 00000000 03:01 12845071   /lib/ld-2.3.2.so\n      $start = HexExtend($1);\n      $finish = HexExtend($2);\n      $offset = HexExtend($3);\n      $lib = $4;\n      $lib =~ s|\\\\|/|g;     # turn windows-style paths into unix-style paths\n    } elsif ($l =~ /^\\s*($h)-($h):\\s*(\\S+\\.so(\\.\\d+)*)/) {\n      # Cooked line from DumpAddressMap.  
Example:\n      #   40000000-40015000: /lib/ld-2.3.2.so\n      $start = HexExtend($1);\n      $finish = HexExtend($2);\n      $offset = $zero_offset;\n      $lib = $3;\n    } elsif (($l =~ /^($h)-($h)\\s+..x.\\s+($h)\\s+\\S+:\\S+\\s+\\d+\\s+(\\S+)$/i) && ($4 eq $prog)) {\n      # PIEs and address space randomization do not play well with our\n      # default assumption that main executable is at lowest\n      # addresses. So we're detecting main executable in\n      # /proc/self/maps as well.\n      $start = HexExtend($1);\n      $finish = HexExtend($2);\n      $offset = HexExtend($3);\n      $lib = $4;\n      $lib =~ s|\\\\|/|g;     # turn windows-style paths into unix-style paths\n    }\n    # FreeBSD 10.0 virtual memory map /proc/curproc/map as defined in\n    # function procfs_doprocmap (sys/fs/procfs/procfs_map.c)\n    #\n    # Example:\n    # 0x800600000 0x80061a000 26 0 0xfffff800035a0000 r-x 75 33 0x1004 COW NC vnode /libexec/ld-elf.s\n    # o.1 NCH -1\n    elsif ($l =~ /^(0x$h)\\s(0x$h)\\s\\d+\\s\\d+\\s0x$h\\sr-x\\s\\d+\\s\\d+\\s0x\\d+\\s(COW|NCO)\\s(NC|NNC)\\svnode\\s(\\S+\\.so(\\.\\d+)*)/) {\n      $start = HexExtend($1);\n      $finish = HexExtend($2);\n      $offset = $zero_offset;\n      $lib = FindLibrary($5);\n\n    } else {\n      next;\n    }\n\n    # Expand \"$build\" variable if available\n    $lib =~ s/\\$build\\b/$buildvar/g;\n\n    $lib = FindLibrary($lib);\n\n    # Check for pre-relocated libraries, which use pre-relocated symbol tables\n    # and thus require adjusting the offset that we'll use to translate\n    # VM addresses into symbol table addresses.\n    # Only do this if we're not going to fetch the symbol table from a\n    # debugging copy of the library.\n    if (!DebuggingLibrary($lib)) {\n      my $text = ParseTextSectionHeader($lib);\n      if (defined($text)) {\n         my $vma_offset = AddressSub($text->{vma}, $text->{file_offset});\n         $offset = AddressAdd($offset, $vma_offset);\n      }\n    }\n\n    
if($main::opt_debug) { printf STDERR \"$start:$finish ($offset) $lib\\n\"; }\n    push(@{$result}, [$lib, $start, $finish, $offset]);\n  }\n\n  # Append special entry for additional library (not relocated)\n  if ($main::opt_lib ne \"\") {\n    my $text = ParseTextSectionHeader($main::opt_lib);\n    if (defined($text)) {\n       my $start = $text->{vma};\n       my $finish = AddressAdd($start, $text->{size});\n\n       push(@{$result}, [$main::opt_lib, $start, $finish, $start]);\n    }\n  }\n\n  # Append special entry for the main program.  This covers\n  # 0..max_pc_value_seen, so that we assume pc values not found in one\n  # of the library ranges will be treated as coming from the main\n  # program binary.\n  my $min_pc = HexExtend(\"0\");\n  my $max_pc = $min_pc;          # find the maximal PC value in any sample\n  foreach my $pc (keys(%{$pcs})) {\n    if (HexExtend($pc) gt $max_pc) { $max_pc = HexExtend($pc); }\n  }\n  push(@{$result}, [$prog, $min_pc, $max_pc, $zero_offset]);\n\n  return $result;\n}\n\n# Add two hex addresses of length $address_length.\n# Run jeprof --test for unit test if this is changed.\nsub AddressAdd {\n  my $addr1 = shift;\n  my $addr2 = shift;\n  my $sum;\n\n  if ($address_length == 8) {\n    # Perl doesn't cope with wraparound arithmetic, so do it explicitly:\n    $sum = (hex($addr1)+hex($addr2)) % (0x10000000 * 16);\n    return sprintf(\"%08x\", $sum);\n\n  } else {\n    # Do the addition in 7-nibble chunks to trivialize carry handling.\n\n    if ($main::opt_debug and $main::opt_test) {\n      print STDERR \"AddressAdd $addr1 + $addr2 = \";\n    }\n\n    my $a1 = substr($addr1,-7);\n    $addr1 = substr($addr1,0,-7);\n    my $a2 = substr($addr2,-7);\n    $addr2 = substr($addr2,0,-7);\n    $sum = hex($a1) + hex($a2);\n    my $c = 0;\n    if ($sum > 0xfffffff) {\n      $c = 1;\n      $sum -= 0x10000000;\n    }\n    my $r = sprintf(\"%07x\", $sum);\n\n    $a1 = substr($addr1,-7);\n    $addr1 = substr($addr1,0,-7);\n    $a2 = 
substr($addr2,-7);\n    $addr2 = substr($addr2,0,-7);\n    $sum = hex($a1) + hex($a2) + $c;\n    $c = 0;\n    if ($sum > 0xfffffff) {\n      $c = 1;\n      $sum -= 0x10000000;\n    }\n    $r = sprintf(\"%07x\", $sum) . $r;\n\n    $sum = hex($addr1) + hex($addr2) + $c;\n    if ($sum > 0xff) { $sum -= 0x100; }\n    $r = sprintf(\"%02x\", $sum) . $r;\n\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"$r\\n\"; }\n\n    return $r;\n  }\n}\n\n\n# Subtract two hex addresses of length $address_length.\n# Run jeprof --test for unit test if this is changed.\nsub AddressSub {\n  my $addr1 = shift;\n  my $addr2 = shift;\n  my $diff;\n\n  if ($address_length == 8) {\n    # Perl doesn't cope with wraparound arithmetic, so do it explicitly:\n    $diff = (hex($addr1)-hex($addr2)) % (0x10000000 * 16);\n    return sprintf(\"%08x\", $diff);\n\n  } else {\n    # Do the addition in 7-nibble chunks to trivialize borrow handling.\n    # if ($main::opt_debug) { print STDERR \"AddressSub $addr1 - $addr2 = \"; }\n\n    my $a1 = hex(substr($addr1,-7));\n    $addr1 = substr($addr1,0,-7);\n    my $a2 = hex(substr($addr2,-7));\n    $addr2 = substr($addr2,0,-7);\n    my $b = 0;\n    if ($a2 > $a1) {\n      $b = 1;\n      $a1 += 0x10000000;\n    }\n    $diff = $a1 - $a2;\n    my $r = sprintf(\"%07x\", $diff);\n\n    $a1 = hex(substr($addr1,-7));\n    $addr1 = substr($addr1,0,-7);\n    $a2 = hex(substr($addr2,-7)) + $b;\n    $addr2 = substr($addr2,0,-7);\n    $b = 0;\n    if ($a2 > $a1) {\n      $b = 1;\n      $a1 += 0x10000000;\n    }\n    $diff = $a1 - $a2;\n    $r = sprintf(\"%07x\", $diff) . $r;\n\n    $a1 = hex($addr1);\n    $a2 = hex($addr2) + $b;\n    if ($a2 > $a1) { $a1 += 0x100; }\n    $diff = $a1 - $a2;\n    $r = sprintf(\"%02x\", $diff) . 
$r;\n\n    # if ($main::opt_debug) { print STDERR "$r\\n"; }\n\n    return $r;\n  }\n}\n\n# Increment a hex address of length $address_length.\n# Run jeprof --test for unit test if this is changed.\nsub AddressInc {\n  my $addr = shift;\n  my $sum;\n\n  if ($address_length == 8) {\n    # Perl doesn't cope with wraparound arithmetic, so do it explicitly:\n    $sum = (hex($addr)+1) % (0x10000000 * 16);\n    return sprintf("%08x", $sum);\n\n  } else {\n    # Do the addition in 7-nibble chunks to trivialize carry handling.\n    # We are always doing this to step through the addresses in a function,\n    # and will almost never overflow the first chunk, so we check for this\n    # case and exit early.\n\n    # if ($main::opt_debug) { print STDERR "AddressInc $addr = "; }\n\n    my $a1 = substr($addr,-7);\n    $addr = substr($addr,0,-7);\n    $sum = hex($a1) + 1;\n    my $r = sprintf("%07x", $sum);\n    if ($sum <= 0xfffffff) {\n      $r = $addr . $r;\n      # if ($main::opt_debug) { print STDERR "$r\\n"; }\n      return HexExtend($r);\n    } else {\n      $r = "0000000";\n    }\n\n    $a1 = substr($addr,-7);\n    $addr = substr($addr,0,-7);\n    $sum = hex($a1) + 1;\n    $r = sprintf("%07x", $sum) . $r;\n    if ($sum <= 0xfffffff) {\n      $r = $addr . $r;\n      # if ($main::opt_debug) { print STDERR "$r\\n"; }\n      return HexExtend($r);\n    } else {\n      $r = "00000000000000";\n    }\n\n    $sum = hex($addr) + 1;\n    if ($sum > 0xff) { $sum -= 0x100; }\n    $r = sprintf("%02x", $sum) . $r;\n\n    # if ($main::opt_debug) { print STDERR "$r\\n"; }\n    return $r;\n  }\n}\n\n# Extract symbols for all PC values found in profile\nsub ExtractSymbols {\n  my $libs = shift;\n  my $pcset = shift;\n\n  my $symbols = {};\n\n  # Map each PC value to the containing library.  To make this faster,\n  # we sort libraries by their starting pc value (highest first), and\n  # advance through the libraries as we advance the pc.  
Sometimes the\n  # addresses of libraries may overlap with the addresses of the main\n  # binary, so to make sure the libraries 'win', we iterate over the\n  # libraries in reverse order (which assumes the binary doesn't start\n  # in the middle of a library, which seems a fair assumption).\n  my @pcs = (sort { $a cmp $b } keys(%{$pcset}));  # pcset is 0-extended strings\n  foreach my $lib (sort {$b->[1] cmp $a->[1]} @{$libs}) {\n    my $libname = $lib->[0];\n    my $start = $lib->[1];\n    my $finish = $lib->[2];\n    my $offset = $lib->[3];\n\n    # Use debug library if it exists\n    my $debug_libname = DebuggingLibrary($libname);\n    if ($debug_libname) {\n        $libname = $debug_libname;\n    }\n\n    # Get list of pcs that belong in this library.\n    my $contained = [];\n    my ($start_pc_index, $finish_pc_index);\n    # Find smallest finish_pc_index such that $finish < $pc[$finish_pc_index].\n    for ($finish_pc_index = $#pcs + 1; $finish_pc_index > 0;\n         $finish_pc_index--) {\n      last if $pcs[$finish_pc_index - 1] le $finish;\n    }\n    # Find smallest start_pc_index such that $start <= $pc[$start_pc_index].\n    for ($start_pc_index = $finish_pc_index; $start_pc_index > 0;\n         $start_pc_index--) {\n      last if $pcs[$start_pc_index - 1] lt $start;\n    }\n    # This keeps PC values higher than $pc[$finish_pc_index] in @pcs,\n    # in case there are overlaps in libraries and the main binary.\n    @{$contained} = splice(@pcs, $start_pc_index,\n                           $finish_pc_index - $start_pc_index);\n    # Map to symbols\n    MapToSymbols($libname, AddressSub($start, $offset), $contained, $symbols);\n  }\n\n  return $symbols;\n}\n\n# Map list of PC values to symbols for a given image\nsub MapToSymbols {\n  my $image = shift;\n  my $offset = shift;\n  my $pclist = shift;\n  my $symbols = shift;\n\n  my $debug = 0;\n\n  # Ignore empty binaries\n  if ($#{$pclist} < 0) { return; }\n\n  # Figure out the addr2line command to use\n  my 
$addr2line = $obj_tool_map{\"addr2line\"};\n  my $cmd = ShellEscape($addr2line, \"-f\", \"-C\", \"-e\", $image);\n  if (exists $obj_tool_map{\"addr2line_pdb\"}) {\n    $addr2line = $obj_tool_map{\"addr2line_pdb\"};\n    $cmd = ShellEscape($addr2line, \"--demangle\", \"-f\", \"-C\", \"-e\", $image);\n  }\n\n  # If \"addr2line\" isn't installed on the system at all, just use\n  # nm to get what info we can (function names, but not line numbers).\n  if (system(ShellEscape($addr2line, \"--help\") . \" >$dev_null 2>&1\") != 0) {\n    MapSymbolsWithNM($image, $offset, $pclist, $symbols);\n    return;\n  }\n\n  # \"addr2line -i\" can produce a variable number of lines per input\n  # address, with no separator that allows us to tell when data for\n  # the next address starts.  So we find the address for a special\n  # symbol (_fini) and interleave this address between all real\n  # addresses passed to addr2line.  The name of this special symbol\n  # can then be used as a separator.\n  $sep_address = undef;  # May be filled in by MapSymbolsWithNM()\n  my $nm_symbols = {};\n  MapSymbolsWithNM($image, $offset, $pclist, $nm_symbols);\n  if (defined($sep_address)) {\n    # Only add \" -i\" to addr2line if the binary supports it.\n    # addr2line --help returns 0, but not if it sees an unknown flag first.\n    if (system(\"$cmd -i --help >$dev_null 2>&1\") == 0) {\n      $cmd .= \" -i\";\n    } else {\n      $sep_address = undef;   # no need for sep_address if we don't support -i\n    }\n  }\n\n  # Make file with all PC values with intervening 'sep_address' so\n  # that we can reliably detect the end of inlined function list\n  open(ADDRESSES, \">$main::tmpfile_sym\") || error(\"$main::tmpfile_sym: $!\\n\");\n  if ($debug) { print(\"---- $image ---\\n\"); }\n  for (my $i = 0; $i <= $#{$pclist}; $i++) {\n    # addr2line always reads hex addresses, and does not need '0x' prefix.\n    if ($debug) { printf STDERR (\"%s\\n\", $pclist->[$i]); }\n    printf ADDRESSES (\"%s\\n\", 
AddressSub($pclist->[$i], $offset));\n    if (defined($sep_address)) {\n      printf ADDRESSES (\"%s\\n\", $sep_address);\n    }\n  }\n  close(ADDRESSES);\n  if ($debug) {\n    print(\"----\\n\");\n    system(\"cat\", $main::tmpfile_sym);\n    print(\"----\\n\");\n    system(\"$cmd < \" . ShellEscape($main::tmpfile_sym));\n    print(\"----\\n\");\n  }\n\n  open(SYMBOLS, \"$cmd <\" . ShellEscape($main::tmpfile_sym) . \" |\")\n      || error(\"$cmd: $!\\n\");\n  my $count = 0;   # Index in pclist\n  while (<SYMBOLS>) {\n    # Read fullfunction and filelineinfo from next pair of lines\n    s/\\r?\\n$//g;\n    my $fullfunction = $_;\n    $_ = <SYMBOLS>;\n    s/\\r?\\n$//g;\n    my $filelinenum = $_;\n\n    if (defined($sep_address) && $fullfunction eq $sep_symbol) {\n      # Terminating marker for data for this address\n      $count++;\n      next;\n    }\n\n    $filelinenum =~ s|\\\\|/|g; # turn windows-style paths into unix-style paths\n\n    my $pcstr = $pclist->[$count];\n    my $function = ShortFunctionName($fullfunction);\n    my $nms = $nm_symbols->{$pcstr};\n    if (defined($nms)) {\n      if ($fullfunction eq '??') {\n        # nm found a symbol for us.\n        $function = $nms->[0];\n        $fullfunction = $nms->[2];\n      } else {\n\t# MapSymbolsWithNM tags each routine with its starting address,\n\t# useful in case the image has multiple occurrences of this\n\t# routine.  (It uses a syntax that resembles template parameters,\n\t# that are automatically stripped out by ShortFunctionName().)\n\t# addr2line does not provide the same information.  So we check\n\t# if nm disambiguated our symbol, and if so take the annotated\n\t# (nm) version of the routine-name.  
TODO(csilvers): this won't\n\t# catch overloaded, inlined symbols, which nm doesn't see.\n\t# Better would be to do a check similar to nm's, in this fn.\n\tif ($nms->[2] =~ m/^\\Q$function\\E/) {  # sanity check it's the right fn\n\t  $function = $nms->[0];\n\t  $fullfunction = $nms->[2];\n\t}\n      }\n    }\n\n    # Prepend to accumulated symbols for pcstr\n    # (so that caller comes before callee)\n    my $sym = $symbols->{$pcstr};\n    if (!defined($sym)) {\n      $sym = [];\n      $symbols->{$pcstr} = $sym;\n    }\n    unshift(@{$sym}, $function, $filelinenum, $fullfunction);\n    if ($debug) { printf STDERR (\"%s => [%s]\\n\", $pcstr, join(\" \", @{$sym})); }\n    if (!defined($sep_address)) {\n      # Inlining is off, so this entry ends immediately\n      $count++;\n    }\n  }\n  close(SYMBOLS);\n}\n\n# Use nm to map the list of referenced PCs to symbols.  Return true iff we\n# are able to read procedure information via nm.\nsub MapSymbolsWithNM {\n  my $image = shift;\n  my $offset = shift;\n  my $pclist = shift;\n  my $symbols = shift;\n\n  # Get nm output sorted by increasing address\n  my $symbol_table = GetProcedureBoundaries($image, \".\");\n  if (!%{$symbol_table}) {\n    return 0;\n  }\n  # Start addresses are already the right length (8 or 16 hex digits).\n  my @names = sort { $symbol_table->{$a}->[0] cmp $symbol_table->{$b}->[0] }\n    keys(%{$symbol_table});\n\n  if ($#names < 0) {\n    # No symbols: just use addresses\n    foreach my $pc (@{$pclist}) {\n      my $pcstr = \"0x\" . 
$pc;\n      $symbols->{$pc} = [$pcstr, \"?\", $pcstr];\n    }\n    return 0;\n  }\n\n  # Sort addresses so we can do a join against nm output\n  my $index = 0;\n  my $fullname = $names[0];\n  my $name = ShortFunctionName($fullname);\n  foreach my $pc (sort { $a cmp $b } @{$pclist}) {\n    # Adjust for mapped offset\n    my $mpc = AddressSub($pc, $offset);\n    while (($index < $#names) && ($mpc ge $symbol_table->{$fullname}->[1])){\n      $index++;\n      $fullname = $names[$index];\n      $name = ShortFunctionName($fullname);\n    }\n    if ($mpc lt $symbol_table->{$fullname}->[1]) {\n      $symbols->{$pc} = [$name, \"?\", $fullname];\n    } else {\n      my $pcstr = \"0x\" . $pc;\n      $symbols->{$pc} = [$pcstr, \"?\", $pcstr];\n    }\n  }\n  return 1;\n}\n\nsub ShortFunctionName {\n  my $function = shift;\n  while ($function =~ s/\\([^()]*\\)(\\s*const)?//g) { }   # Argument types\n  while ($function =~ s/<[^<>]*>//g)  { }    # Remove template arguments\n  $function =~ s/^.*\\s+(\\w+::)/$1/;          # Remove leading type\n  return $function;\n}\n\n# Trim overly long symbols found in disassembler output\nsub CleanDisassembly {\n  my $d = shift;\n  while ($d =~ s/\\([^()%]*\\)(\\s*const)?//g) { } # Argument types, not (%rax)\n  while ($d =~ s/(\\w+)<[^<>]*>/$1/g)  { }       # Remove template arguments\n  return $d;\n}\n\n# Clean file name for display\nsub CleanFileName {\n  my ($f) = @_;\n  $f =~ s|^/proc/self/cwd/||;\n  $f =~ s|^\\./||;\n  return $f;\n}\n\n# Make address relative to section and clean up for display\nsub UnparseAddress {\n  my ($offset, $address) = @_;\n  $address = AddressSub($address, $offset);\n  $address =~ s/^0x//;\n  $address =~ s/^0*//;\n  return $address;\n}\n\n##### Miscellaneous #####\n\n# Find the right versions of the above object tools to use.  The\n# argument is the program file being analyzed, and should be an ELF\n# 32-bit or ELF 64-bit executable file.  
The location of the tools\n# is determined by considering the following options in this order:\n#   1) --tools option, if set\n#   2) JEPROF_TOOLS environment variable, if set\n#   3) the environment\nsub ConfigureObjTools {\n  my $prog_file = shift;\n\n  # Check for the existence of $prog_file because /usr/bin/file does not\n  # predictably return error status in prod.\n  (-e $prog_file)  || error("$prog_file does not exist.\\n");\n\n  my $file_type = undef;\n  if (-e "/usr/bin/file") {\n    # Follow symlinks (at least for systems where "file" supports that).\n    my $escaped_prog_file = ShellEscape($prog_file);\n    $file_type = `/usr/bin/file -L $escaped_prog_file 2>$dev_null ||\n                  /usr/bin/file $escaped_prog_file`;\n  } elsif ($^O eq "MSWin32") {  # $^O is a string; eq, not ==\n    $file_type = "MS Windows";\n  } else {\n    print STDERR "WARNING: Can't determine the file type of $prog_file\\n";\n  }\n\n  if ($file_type =~ /64-bit/) {\n    # Change $address_length to 16 if the program file is ELF 64-bit.\n    # We can't detect this from many (most?) heap or lock contention\n    # profiles, since the actual addresses referenced are generally in low\n    # memory even for 64-bit programs.\n    $address_length = 16;\n  }\n\n  if ($file_type =~ /MS Windows/) {\n    # For windows, we provide a version of nm and addr2line as part of\n    # the opensource release, which is capable of parsing\n    # Windows-style PDB executables.  
It should live in the path, or\n    # in the same directory as jeprof.\n    $obj_tool_map{\"nm_pdb\"} = \"nm-pdb\";\n    $obj_tool_map{\"addr2line_pdb\"} = \"addr2line-pdb\";\n  }\n\n  if ($file_type =~ /Mach-O/) {\n    # OS X uses otool to examine Mach-O files, rather than objdump.\n    $obj_tool_map{\"otool\"} = \"otool\";\n    $obj_tool_map{\"addr2line\"} = \"false\";  # no addr2line\n    $obj_tool_map{\"objdump\"} = \"false\";  # no objdump\n  }\n\n  # Go fill in %obj_tool_map with the pathnames to use:\n  foreach my $tool (keys %obj_tool_map) {\n    $obj_tool_map{$tool} = ConfigureTool($obj_tool_map{$tool});\n  }\n}\n\n# Returns the path of a caller-specified object tool.  If --tools or\n# JEPROF_TOOLS are specified, then returns the full path to the tool\n# with that prefix.  Otherwise, returns the path unmodified (which\n# means we will look for it on PATH).\nsub ConfigureTool {\n  my $tool = shift;\n  my $path;\n\n  # --tools (or $JEPROF_TOOLS) is a comma separated list, where each\n  # item is either a) a pathname prefix, or b) a map of the form\n  # <tool>:<path>.  First we look for an entry of type (b) for our\n  # tool.  If one is found, we use it.  Otherwise, we consider all the\n  # pathname prefixes in turn, until one yields an existing file.  If\n  # none does, we use a default path.\n  my $tools = $main::opt_tools || $ENV{\"JEPROF_TOOLS\"} || \"\";\n  if ($tools =~ m/(,|^)\\Q$tool\\E:([^,]*)/) {\n    $path = $2;\n    # TODO(csilvers): sanity-check that $path exists?  Hard if it's relative.\n  } elsif ($tools ne '') {\n    foreach my $prefix (split(',', $tools)) {\n      next if ($prefix =~ /:/);    # ignore \"tool:fullpath\" entries in the list\n      if (-x $prefix . $tool) {\n        $path = $prefix . $tool;\n        last;\n      }\n    }\n    if (!$path) {\n      error(\"No '$tool' found with prefix specified by \" .\n            \"--tools (or \\$JEPROF_TOOLS) '$tools'\\n\");\n    }\n  } else {\n    # ... 
otherwise use the version that exists in the same directory as\n    # jeprof.  If there's nothing there, use $PATH.\n    $0 =~ m,[^/]*$,;     # this is everything after the last slash\n    my $dirname = $`;    # this is everything up to and including the last slash\n    if (-x "$dirname$tool") {\n      $path = "$dirname$tool";\n    } else {\n      $path = $tool;\n    }\n  }\n  if ($main::opt_debug) { print STDERR "Using '$path' for '$tool'.\\n"; }\n  return $path;\n}\n\nsub ShellEscape {\n  my @escaped_words = ();\n  foreach my $word (@_) {\n    my $escaped_word = $word;\n    if ($word =~ m![^a-zA-Z0-9/.,_=-]!) {  # check for anything not in whitelist\n      $escaped_word =~ s/'/'\\\\''/g;  # escape every embedded quote, not just the first\n      $escaped_word = "'$escaped_word'";\n    }\n    push(@escaped_words, $escaped_word);\n  }\n  return join(" ", @escaped_words);\n}\n\nsub cleanup {\n  unlink($main::tmpfile_sym);\n  unlink(keys %main::tempnames);\n\n  # We leave any collected profiles in $HOME/jeprof in case the user wants\n  # to look at them later.  
We print a message informing them of this.\n  if ((scalar(@main::profile_files) > 0) &&\n      defined($main::collected_profile)) {\n    if (scalar(@main::profile_files) == 1) {\n      print STDERR \"Dynamically gathered profile is in $main::collected_profile\\n\";\n    }\n    print STDERR \"If you want to investigate this profile further, you can do:\\n\";\n    print STDERR \"\\n\";\n    print STDERR \"  jeprof \\\\\\n\";\n    print STDERR \"    $main::prog \\\\\\n\";\n    print STDERR \"    $main::collected_profile\\n\";\n    print STDERR \"\\n\";\n  }\n}\n\nsub sighandler {\n  cleanup();\n  exit(1);\n}\n\nsub error {\n  my $msg = shift;\n  print STDERR $msg;\n  cleanup();\n  exit(1);\n}\n\n\n# Run $nm_command and get all the resulting procedure boundaries whose\n# names match \"$regexp\" and returns them in a hashtable mapping from\n# procedure name to a two-element vector of [start address, end address]\nsub GetProcedureBoundariesViaNm {\n  my $escaped_nm_command = shift;    # shell-escaped\n  my $regexp = shift;\n\n  my $symbol_table = {};\n  open(NM, \"$escaped_nm_command |\") || error(\"$escaped_nm_command: $!\\n\");\n  my $last_start = \"0\";\n  my $routine = \"\";\n  while (<NM>) {\n    s/\\r//g;         # turn windows-looking lines into unix-looking lines\n    if (m/^\\s*([0-9a-f]+) (.) (..*)/) {\n      my $start_val = $1;\n      my $type = $2;\n      my $this_routine = $3;\n\n      # It's possible for two symbols to share the same address, if\n      # one is a zero-length variable (like __start_google_malloc) or\n      # one symbol is a weak alias to another (like __libc_malloc).\n      # In such cases, we want to ignore all values except for the\n      # actual symbol, which in nm-speak has type \"T\".  The logic\n      # below does this, though it's a bit tricky: what happens when\n      # we have a series of lines with the same address, is the first\n      # one gets queued up to be processed.  
However, it won't\n      # *actually* be processed until later, when we read a line with\n      # a different address.  That means that as long as we're reading\n      # lines with the same address, we have a chance to replace that\n      # item in the queue, which we do whenever we see a 'T' entry --\n      # that is, a line with type 'T'.  If we never see a 'T' entry,\n      # we'll just go ahead and process the first entry (which never\n      # got touched in the queue), and ignore the others.\n      if ($start_val eq $last_start && $type =~ /t/i) {\n        # We are the 'T' symbol at this address, replace previous symbol.\n        $routine = $this_routine;\n        next;\n      } elsif ($start_val eq $last_start) {\n        # We're not the 'T' symbol at this address, so ignore us.\n        next;\n      }\n\n      if ($this_routine eq $sep_symbol) {\n        $sep_address = HexExtend($start_val);\n      }\n\n      # Tag this routine with the starting address in case the image\n      # has multiple occurrences of this routine.  We use a syntax\n      # that resembles template parameters that are automatically\n      # stripped out by ShortFunctionName()\n      $this_routine .= \"<$start_val>\";\n\n      if (defined($routine) && $routine =~ m/$regexp/) {\n        $symbol_table->{$routine} = [HexExtend($last_start),\n                                     HexExtend($start_val)];\n      }\n      $last_start = $start_val;\n      $routine = $this_routine;\n    } elsif (m/^Loaded image name: (.+)/) {\n      # The win32 nm workalike emits information about the binary it is using.\n      if ($main::opt_debug) { print STDERR \"Using Image $1\\n\"; }\n    } elsif (m/^PDB file name: (.+)/) {\n      # The win32 nm workalike emits information about the pdb it is using.\n      if ($main::opt_debug) { print STDERR \"Using PDB $1\\n\"; }\n    }\n  }\n  close(NM);\n  # Handle the last line in the nm output.  
Unfortunately, we don't know\n  # how big this last symbol is, because we don't know how big the file\n  # is.  For now, we just give it a size of 0.\n  # TODO(csilvers): do better here.\n  if (defined($routine) && $routine =~ m/$regexp/) {\n    $symbol_table->{$routine} = [HexExtend($last_start),\n                                 HexExtend($last_start)];\n  }\n  return $symbol_table;\n}\n\n# Gets the procedure boundaries for all routines in \"$image\" whose names\n# match \"$regexp\" and returns them in a hashtable mapping from procedure\n# name to a two-element vector of [start address, end address].\n# Will return an empty map if nm is not installed or not working properly.\nsub GetProcedureBoundaries {\n  my $image = shift;\n  my $regexp = shift;\n\n  # If $image doesn't start with /, then put ./ in front of it.  This works\n  # around an obnoxious bug in our probing of nm -f behavior.\n  # \"nm -f $image\" is supposed to fail on GNU nm, but if:\n  #\n  # a. $image starts with [BbSsPp] (for example, bin/foo/bar), AND\n  # b. you have a.out in your current directory (a not uncommon occurrence)\n  #\n  # then \"nm -f $image\" succeeds because -f only looks at the first letter of\n  # the argument, which looks valid because it's [BbSsPp], and then since\n  # there's no image provided, it looks for a.out and finds it.\n  #\n  # This regex makes sure that $image starts with . or /, forcing the -f\n  # parsing to fail since . and / are not valid formats.\n  $image =~ s#^[^/]#./$&#;\n\n  # For libc libraries, the copy in /usr/lib/debug contains debugging symbols\n  my $debugging = DebuggingLibrary($image);\n  if ($debugging) {\n    $image = $debugging;\n  }\n\n  my $nm = $obj_tool_map{\"nm\"};\n  my $cppfilt = $obj_tool_map{\"c++filt\"};\n\n  # nm can fail for two reasons: 1) $image isn't a debug library; 2) nm\n  # binary doesn't support --demangle.  
In addition, for OS X we need\n  # to use the -f flag to get 'flat' nm output (otherwise we don't sort\n  # properly and get incorrect results).  Unfortunately, GNU nm uses -f\n  # in an incompatible way.  So first we test whether our nm supports\n  # --demangle and -f.\n  my $demangle_flag = \"\";\n  my $cppfilt_flag = \"\";\n  my $to_devnull = \">$dev_null 2>&1\";\n  if (system(ShellEscape($nm, \"--demangle\", $image) . $to_devnull) == 0) {\n    # In this mode, we do \"nm --demangle <foo>\"\n    $demangle_flag = \"--demangle\";\n    $cppfilt_flag = \"\";\n  } elsif (system(ShellEscape($cppfilt, $image) . $to_devnull) == 0) {\n    # In this mode, we do \"nm <foo> | c++filt\"\n    $cppfilt_flag = \" | \" . ShellEscape($cppfilt);\n  };\n  my $flatten_flag = \"\";\n  if (system(ShellEscape($nm, \"-f\", $image) . $to_devnull) == 0) {\n    $flatten_flag = \"-f\";\n  }\n\n  # Finally, in the case $image isn't a debug library, we try again with\n  # -D to at least get *exported* symbols.  If we can't use --demangle,\n  # we use c++filt instead, if it exists on this system.\n  my @nm_commands = (ShellEscape($nm, \"-n\", $flatten_flag, $demangle_flag,\n                                 $image) . \" 2>$dev_null $cppfilt_flag\",\n                     ShellEscape($nm, \"-D\", \"-n\", $flatten_flag, $demangle_flag,\n                                 $image) . \" 2>$dev_null $cppfilt_flag\",\n                     # 6nm is for Go binaries\n                     ShellEscape(\"6nm\", \"$image\") . \" 2>$dev_null | sort\",\n                     );\n\n  # If the executable is an MS Windows PDB-format executable, we'll\n  # have set up obj_tool_map(\"nm_pdb\").  In this case, we actually\n  # want to use both unix nm and windows-specific nm_pdb, since\n  # PDB-format executables can apparently include dwarf .o files.\n  if (exists $obj_tool_map{\"nm_pdb\"}) {\n    push(@nm_commands,\n         ShellEscape($obj_tool_map{\"nm_pdb\"}, \"--demangle\", $image)\n         . 
\" 2>$dev_null\");\n  }\n\n  foreach my $nm_command (@nm_commands) {\n    my $symbol_table = GetProcedureBoundariesViaNm($nm_command, $regexp);\n    return $symbol_table if (%{$symbol_table});\n  }\n  my $symbol_table = {};\n  return $symbol_table;\n}\n\n\n# The test vectors for AddressAdd/Sub/Inc are 8-16-nibble hex strings.\n# To make them more readable, we add underscores at interesting places.\n# This routine removes the underscores, producing the canonical representation\n# used by jeprof to represent addresses, particularly in the tested routines.\nsub CanonicalHex {\n  my $arg = shift;\n  return join '', (split '_',$arg);\n}\n\n\n# Unit test for AddressAdd:\nsub AddressAddUnitTest {\n  my $test_data_8 = shift;\n  my $test_data_16 = shift;\n  my $error_count = 0;\n  my $fail_count = 0;\n  my $pass_count = 0;\n  # print STDERR \"AddressAddUnitTest: \", 1+$#{$test_data_8}, \" tests\\n\";\n\n  # First a few 8-nibble addresses.  Note that this implementation uses\n  # plain old arithmetic, so a quick sanity check along with verifying what\n  # happens to overflow (we want it to wrap):\n  $address_length = 8;\n  foreach my $row (@{$test_data_8}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressAdd ($row->[0], $row->[1]);\n    if ($sum ne $row->[2]) {\n      printf STDERR \"ERROR: %s != %s + %s = %s\\n\", $sum,\n             $row->[0], $row->[1], $row->[2];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressAdd 32-bit tests: %d passes, %d failures\\n\",\n         $pass_count, $fail_count;\n  $error_count = $fail_count;\n  $fail_count = 0;\n  $pass_count = 0;\n\n  # Now 16-nibble addresses.\n  $address_length = 16;\n  foreach my $row (@{$test_data_16}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressAdd (CanonicalHex($row->[0]), CanonicalHex($row->[1]));\n    my $expected = join '', (split '_',$row->[2]);\n    if 
($sum ne CanonicalHex($row->[2])) {\n      printf STDERR \"ERROR: %s != %s + %s = %s\\n\", $sum,\n             $row->[0], $row->[1], $row->[2];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressAdd 64-bit tests: %d passes, %d failures\\n\",\n         $pass_count, $fail_count;\n  $error_count += $fail_count;\n\n  return $error_count;\n}\n\n\n# Unit test for AddressSub:\nsub AddressSubUnitTest {\n  my $test_data_8 = shift;\n  my $test_data_16 = shift;\n  my $error_count = 0;\n  my $fail_count = 0;\n  my $pass_count = 0;\n  # print STDERR \"AddressSubUnitTest: \", 1+$#{$test_data_8}, \" tests\\n\";\n\n  # First a few 8-nibble addresses.  Note that this implementation uses\n  # plain old arithmetic, so a quick sanity check along with verifying what\n  # happens to overflow (we want it to wrap):\n  $address_length = 8;\n  foreach my $row (@{$test_data_8}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressSub ($row->[0], $row->[1]);\n    if ($sum ne $row->[3]) {\n      printf STDERR \"ERROR: %s != %s - %s = %s\\n\", $sum,\n             $row->[0], $row->[1], $row->[3];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressSub 32-bit tests: %d passes, %d failures\\n\",\n         $pass_count, $fail_count;\n  $error_count = $fail_count;\n  $fail_count = 0;\n  $pass_count = 0;\n\n  # Now 16-nibble addresses.\n  $address_length = 16;\n  foreach my $row (@{$test_data_16}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressSub (CanonicalHex($row->[0]), CanonicalHex($row->[1]));\n    if ($sum ne CanonicalHex($row->[3])) {\n      printf STDERR \"ERROR: %s != %s - %s = %s\\n\", $sum,\n             $row->[0], $row->[1], $row->[3];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressSub 64-bit tests: %d passes, %d failures\\n\",\n         
$pass_count, $fail_count;\n  $error_count += $fail_count;\n\n  return $error_count;\n}\n\n\n# Unit test for AddressInc:\nsub AddressIncUnitTest {\n  my $test_data_8 = shift;\n  my $test_data_16 = shift;\n  my $error_count = 0;\n  my $fail_count = 0;\n  my $pass_count = 0;\n  # print STDERR \"AddressIncUnitTest: \", 1+$#{$test_data_8}, \" tests\\n\";\n\n  # First a few 8-nibble addresses.  Note that this implementation uses\n  # plain old arithmetic, so a quick sanity check along with verifying what\n  # happens to overflow (we want it to wrap):\n  $address_length = 8;\n  foreach my $row (@{$test_data_8}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressInc ($row->[0]);\n    if ($sum ne $row->[4]) {\n      printf STDERR \"ERROR: %s != %s + 1 = %s\\n\", $sum,\n             $row->[0], $row->[4];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressInc 32-bit tests: %d passes, %d failures\\n\",\n         $pass_count, $fail_count;\n  $error_count = $fail_count;\n  $fail_count = 0;\n  $pass_count = 0;\n\n  # Now 16-nibble addresses.\n  $address_length = 16;\n  foreach my $row (@{$test_data_16}) {\n    if ($main::opt_debug and $main::opt_test) { print STDERR \"@{$row}\\n\"; }\n    my $sum = AddressInc (CanonicalHex($row->[0]));\n    if ($sum ne CanonicalHex($row->[4])) {\n      printf STDERR \"ERROR: %s != %s + 1 = %s\\n\", $sum,\n             $row->[0], $row->[4];\n      ++$fail_count;\n    } else {\n      ++$pass_count;\n    }\n  }\n  printf STDERR \"AddressInc 64-bit tests: %d passes, %d failures\\n\",\n         $pass_count, $fail_count;\n  $error_count += $fail_count;\n\n  return $error_count;\n}\n\n\n# Driver for unit tests.\n# Currently just the address add/subtract/increment routines for 64-bit.\nsub RunUnitTests {\n  my $error_count = 0;\n\n  # This is a list of tuples [a, b, a+b, a-b, a+1]\n  my $unit_test_data_8 = [\n    [qw(aaaaaaaa 50505050 fafafafa 5a5a5a5a 
aaaaaaab)],\n    [qw(50505050 aaaaaaaa fafafafa a5a5a5a6 50505051)],\n    [qw(ffffffff aaaaaaaa aaaaaaa9 55555555 00000000)],\n    [qw(00000001 ffffffff 00000000 00000002 00000002)],\n    [qw(00000001 fffffff0 fffffff1 00000011 00000002)],\n  ];\n  my $unit_test_data_16 = [\n    # The implementation handles data in 7-nibble chunks, so those are the\n    # interesting boundaries.\n    [qw(aaaaaaaa 50505050\n        00_000000f_afafafa 00_0000005_a5a5a5a 00_000000a_aaaaaab)],\n    [qw(50505050 aaaaaaaa\n        00_000000f_afafafa ff_ffffffa_5a5a5a6 00_0000005_0505051)],\n    [qw(ffffffff aaaaaaaa\n        00_000001a_aaaaaa9 00_0000005_5555555 00_0000010_0000000)],\n    [qw(00000001 ffffffff\n        00_0000010_0000000 ff_ffffff0_0000002 00_0000000_0000002)],\n    [qw(00000001 fffffff0\n        00_000000f_ffffff1 ff_ffffff0_0000011 00_0000000_0000002)],\n\n    [qw(00_a00000a_aaaaaaa 50505050\n        00_a00000f_afafafa 00_a000005_a5a5a5a 00_a00000a_aaaaaab)],\n    [qw(0f_fff0005_0505050 aaaaaaaa\n        0f_fff000f_afafafa 0f_ffefffa_5a5a5a6 0f_fff0005_0505051)],\n    [qw(00_000000f_fffffff 01_800000a_aaaaaaa\n        01_800001a_aaaaaa9 fe_8000005_5555555 00_0000010_0000000)],\n    [qw(00_0000000_0000001 ff_fffffff_fffffff\n        00_0000000_0000000 00_0000000_0000002 00_0000000_0000002)],\n    [qw(00_0000000_0000001 ff_fffffff_ffffff0\n        ff_fffffff_ffffff1 00_0000000_0000011 00_0000000_0000002)],\n  ];\n\n  $error_count += AddressAddUnitTest($unit_test_data_8, $unit_test_data_16);\n  $error_count += AddressSubUnitTest($unit_test_data_8, $unit_test_data_16);\n  $error_count += AddressIncUnitTest($unit_test_data_8, $unit_test_data_16);\n  if ($error_count > 0) {\n    print STDERR $error_count, \" errors: FAILED\\n\";\n  } else {\n    print STDERR \"PASS\\n\";\n  }\n  exit ($error_count);\n}\n"
  },
  {
    "path": "deps/jemalloc/build-aux/config.guess",
    "content": "#! /bin/sh\n# Attempt to guess a canonical system name.\n#   Copyright 1992-2021 Free Software Foundation, Inc.\n\ntimestamp='2021-01-01'\n\n# This file is free software; you can redistribute it and/or modify it\n# under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n# General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <https://www.gnu.org/licenses/>.\n#\n# As a special exception to the GNU General Public License, if you\n# distribute this file as part of a program that contains a\n# configuration script generated by Autoconf, you may include it under\n# the same distribution terms that you use for the rest of that\n# program.  
This Exception is an additional permission under section 7\n# of the GNU General Public License, version 3 (\"GPLv3\").\n#\n# Originally written by Per Bothner; maintained since 2000 by Ben Elliston.\n#\n# You can get the latest version of this script from:\n# https://git.savannah.gnu.org/cgit/config.git/plain/config.guess\n#\n# Please send patches to <config-patches@gnu.org>.\n\n\nme=$(echo \"$0\" | sed -e 's,.*/,,')\n\nusage=\"\\\nUsage: $0 [OPTION]\n\nOutput the configuration name of the system \\`$me' is run on.\n\nOptions:\n  -h, --help         print this help, then exit\n  -t, --time-stamp   print date of last modification, then exit\n  -v, --version      print version number, then exit\n\nReport bugs and patches to <config-patches@gnu.org>.\"\n\nversion=\"\\\nGNU config.guess ($timestamp)\n\nOriginally written by Per Bothner.\nCopyright 1992-2021 Free Software Foundation, Inc.\n\nThis is free software; see the source for copying conditions.  There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\"\n\nhelp=\"\nTry \\`$me --help' for more information.\"\n\n# Parse command line\nwhile test $# -gt 0 ; do\n  case $1 in\n    --time-stamp | --time* | -t )\n       echo \"$timestamp\" ; exit ;;\n    --version | -v )\n       echo \"$version\" ; exit ;;\n    --help | --h* | -h )\n       echo \"$usage\"; exit ;;\n    -- )     # Stop option processing\n       shift; break ;;\n    - )\t# Use stdin as input.\n       break ;;\n    -* )\n       echo \"$me: invalid option $1$help\" >&2\n       exit 1 ;;\n    * )\n       break ;;\n  esac\ndone\n\nif test $# != 0; then\n  echo \"$me: too many arguments$help\" >&2\n  exit 1\nfi\n\n# CC_FOR_BUILD -- compiler used by this script. Note that the use of a\n# compiler to aid in system detection is discouraged as it requires\n# temporary files to be created and, as you can see below, it is a\n# headache to deal with in a portable fashion.\n\n# Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. 
We still\n# use `HOST_CC' if defined, but it is deprecated.\n\n# Portable tmp directory creation inspired by the Autoconf team.\n\ntmp=\n# shellcheck disable=SC2172\ntrap 'test -z \"$tmp\" || rm -fr \"$tmp\"' 0 1 2 13 15\n\nset_cc_for_build() {\n    # prevent multiple calls if $tmp is already set\n    test \"$tmp\" && return 0\n    : \"${TMPDIR=/tmp}\"\n    # shellcheck disable=SC2039\n    { tmp=$( (umask 077 && mktemp -d \"$TMPDIR/cgXXXXXX\") 2>/dev/null) && test -n \"$tmp\" && test -d \"$tmp\" ; } ||\n\t{ test -n \"$RANDOM\" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir \"$tmp\" 2>/dev/null) ; } ||\n\t{ tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir \"$tmp\" 2>/dev/null) && echo \"Warning: creating insecure temp directory\" >&2 ; } ||\n\t{ echo \"$me: cannot create a temporary directory in $TMPDIR\" >&2 ; exit 1 ; }\n    dummy=$tmp/dummy\n    case ${CC_FOR_BUILD-},${HOST_CC-},${CC-} in\n\t,,)    echo \"int x;\" > \"$dummy.c\"\n\t       for driver in cc gcc c89 c99 ; do\n\t\t   if ($driver -c -o \"$dummy.o\" \"$dummy.c\") >/dev/null 2>&1 ; then\n\t\t       CC_FOR_BUILD=\"$driver\"\n\t\t       break\n\t\t   fi\n\t       done\n\t       if test x\"$CC_FOR_BUILD\" = x ; then\n\t\t   CC_FOR_BUILD=no_compiler_found\n\t       fi\n\t       ;;\n\t,,*)   CC_FOR_BUILD=$CC ;;\n\t,*,*)  CC_FOR_BUILD=$HOST_CC ;;\n    esac\n}\n\n# This is needed to find uname on a Pyramid OSx when run in the BSD universe.\n# (ghazi@noc.rutgers.edu 1994-08-24)\nif test -f /.attbin/uname ; then\n\tPATH=$PATH:/.attbin ; export PATH\nfi\n\nUNAME_MACHINE=$( (uname -m) 2>/dev/null) || UNAME_MACHINE=unknown\nUNAME_RELEASE=$( (uname -r) 2>/dev/null) || UNAME_RELEASE=unknown\nUNAME_SYSTEM=$( (uname -s) 2>/dev/null) || UNAME_SYSTEM=unknown\nUNAME_VERSION=$( (uname -v) 2>/dev/null) || UNAME_VERSION=unknown\n\ncase \"$UNAME_SYSTEM\" in\nLinux|GNU|GNU/*)\n\tLIBC=unknown\n\n\tset_cc_for_build\n\tcat <<-EOF > \"$dummy.c\"\n\t#include <features.h>\n\t#if defined(__UCLIBC__)\n\tLIBC=uclibc\n\t#elif 
defined(__dietlibc__)\n\tLIBC=dietlibc\n\t#elif defined(__GLIBC__)\n\tLIBC=gnu\n\t#else\n\t#include <stdarg.h>\n\t/* First heuristic to detect musl libc.  */\n\t#ifdef __DEFINED_va_list\n\tLIBC=musl\n\t#endif\n\t#endif\n\tEOF\n\teval \"$($CC_FOR_BUILD -E \"$dummy.c\" 2>/dev/null | grep '^LIBC' | sed 's, ,,g')\"\n\n\t# Second heuristic to detect musl libc.\n\tif [ \"$LIBC\" = unknown ] &&\n\t   command -v ldd >/dev/null &&\n\t   ldd --version 2>&1 | grep -q ^musl; then\n\t\tLIBC=musl\n\tfi\n\n\t# If the system lacks a compiler, then just pick glibc.\n\t# We could probably try harder.\n\tif [ \"$LIBC\" = unknown ]; then\n\t\tLIBC=gnu\n\tfi\n\t;;\nesac\n\n# Note: order is significant - the case branches are not exclusive.\n\ncase \"$UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION\" in\n    *:NetBSD:*:*)\n\t# NetBSD (nbsd) targets should (where applicable) match one or\n\t# more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,\n\t# *-*-netbsdecoff* and *-*-netbsd*.  For targets that recently\n\t# switched to ELF, *-*-netbsd* would select the old\n\t# object file format.  This provides both forward\n\t# compatibility and a consistent mechanism for selecting the\n\t# object file format.\n\t#\n\t# Note: NetBSD doesn't particularly care about the vendor\n\t# portion of the name.  
We always set it to \"unknown\".\n\tsysctl=\"sysctl -n hw.machine_arch\"\n\tUNAME_MACHINE_ARCH=$( (uname -p 2>/dev/null || \\\n\t    \"/sbin/$sysctl\" 2>/dev/null || \\\n\t    \"/usr/sbin/$sysctl\" 2>/dev/null || \\\n\t    echo unknown))\n\tcase \"$UNAME_MACHINE_ARCH\" in\n\t    aarch64eb) machine=aarch64_be-unknown ;;\n\t    armeb) machine=armeb-unknown ;;\n\t    arm*) machine=arm-unknown ;;\n\t    sh3el) machine=shl-unknown ;;\n\t    sh3eb) machine=sh-unknown ;;\n\t    sh5el) machine=sh5le-unknown ;;\n\t    earmv*)\n\t\tarch=$(echo \"$UNAME_MACHINE_ARCH\" | sed -e 's,^e\\(armv[0-9]\\).*$,\\1,')\n\t\tendian=$(echo \"$UNAME_MACHINE_ARCH\" | sed -ne 's,^.*\\(eb\\)$,\\1,p')\n\t\tmachine=\"${arch}${endian}\"-unknown\n\t\t;;\n\t    *) machine=\"$UNAME_MACHINE_ARCH\"-unknown ;;\n\tesac\n\t# The Operating System including object format, if it has switched\n\t# to ELF recently (or will in the future) and ABI.\n\tcase \"$UNAME_MACHINE_ARCH\" in\n\t    earm*)\n\t\tos=netbsdelf\n\t\t;;\n\t    arm*|i386|m68k|ns32k|sh3*|sparc|vax)\n\t\tset_cc_for_build\n\t\tif echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \\\n\t\t\t| grep -q __ELF__\n\t\tthen\n\t\t    # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).\n\t\t    # Return netbsd for either.  FIX?\n\t\t    os=netbsd\n\t\telse\n\t\t    os=netbsdelf\n\t\tfi\n\t\t;;\n\t    *)\n\t\tos=netbsd\n\t\t;;\n\tesac\n\t# Determine ABI tags.\n\tcase \"$UNAME_MACHINE_ARCH\" in\n\t    earm*)\n\t\texpr='s/^earmv[0-9]/-eabi/;s/eb$//'\n\t\tabi=$(echo \"$UNAME_MACHINE_ARCH\" | sed -e \"$expr\")\n\t\t;;\n\tesac\n\t# The OS release\n\t# Debian GNU/NetBSD machines have a different userland, and\n\t# thus, need a distinct triplet. However, they do not need\n\t# kernel version information, so it can be replaced with a\n\t# suitable tag, in the style of linux-gnu.\n\tcase \"$UNAME_VERSION\" in\n\t    Debian*)\n\t\trelease='-gnu'\n\t\t;;\n\t    *)\n\t\trelease=$(echo \"$UNAME_RELEASE\" | sed -e 's/[-_].*//' | cut -d. 
-f1,2)\n\t\t;;\n\tesac\n\t# Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:\n\t# contains redundant information, the shorter form:\n\t# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.\n\techo \"$machine-${os}${release}${abi-}\"\n\texit ;;\n    *:Bitrig:*:*)\n\tUNAME_MACHINE_ARCH=$(arch | sed 's/Bitrig.//')\n\techo \"$UNAME_MACHINE_ARCH\"-unknown-bitrig\"$UNAME_RELEASE\"\n\texit ;;\n    *:OpenBSD:*:*)\n\tUNAME_MACHINE_ARCH=$(arch | sed 's/OpenBSD.//')\n\techo \"$UNAME_MACHINE_ARCH\"-unknown-openbsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:LibertyBSD:*:*)\n\tUNAME_MACHINE_ARCH=$(arch | sed 's/^.*BSD\\.//')\n\techo \"$UNAME_MACHINE_ARCH\"-unknown-libertybsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:MidnightBSD:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-midnightbsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:ekkoBSD:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-ekkobsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:SolidBSD:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-solidbsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:OS108:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-os108_\"$UNAME_RELEASE\"\n\texit ;;\n    macppc:MirBSD:*:*)\n\techo powerpc-unknown-mirbsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:MirBSD:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-mirbsd\"$UNAME_RELEASE\"\n\texit ;;\n    *:Sortix:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-sortix\n\texit ;;\n    *:Twizzler:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-twizzler\n\texit ;;\n    *:Redox:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-redox\n\texit ;;\n    mips:OSF1:*.*)\n\techo mips-dec-osf1\n\texit ;;\n    alpha:OSF1:*:*)\n\tcase $UNAME_RELEASE in\n\t*4.0)\n\t\tUNAME_RELEASE=$(/usr/sbin/sizer -v | awk '{print $3}')\n\t\t;;\n\t*5.*)\n\t\tUNAME_RELEASE=$(/usr/sbin/sizer -v | awk '{print $4}')\n\t\t;;\n\tesac\n\t# According to Compaq, /usr/sbin/psrinfo has been available on\n\t# OSF/1 and Tru64 systems produced since 1995.  I hope that\n\t# covers most systems running today.  
This code pipes the CPU\n\t# types through head -n 1, so we only detect the type of CPU 0.\n\tALPHA_CPU_TYPE=$(/usr/sbin/psrinfo -v | sed -n -e 's/^  The alpha \\(.*\\) processor.*$/\\1/p' | head -n 1)\n\tcase \"$ALPHA_CPU_TYPE\" in\n\t    \"EV4 (21064)\")\n\t\tUNAME_MACHINE=alpha ;;\n\t    \"EV4.5 (21064)\")\n\t\tUNAME_MACHINE=alpha ;;\n\t    \"LCA4 (21066/21068)\")\n\t\tUNAME_MACHINE=alpha ;;\n\t    \"EV5 (21164)\")\n\t\tUNAME_MACHINE=alphaev5 ;;\n\t    \"EV5.6 (21164A)\")\n\t\tUNAME_MACHINE=alphaev56 ;;\n\t    \"EV5.6 (21164PC)\")\n\t\tUNAME_MACHINE=alphapca56 ;;\n\t    \"EV5.7 (21164PC)\")\n\t\tUNAME_MACHINE=alphapca57 ;;\n\t    \"EV6 (21264)\")\n\t\tUNAME_MACHINE=alphaev6 ;;\n\t    \"EV6.7 (21264A)\")\n\t\tUNAME_MACHINE=alphaev67 ;;\n\t    \"EV6.8CB (21264C)\")\n\t\tUNAME_MACHINE=alphaev68 ;;\n\t    \"EV6.8AL (21264B)\")\n\t\tUNAME_MACHINE=alphaev68 ;;\n\t    \"EV6.8CX (21264D)\")\n\t\tUNAME_MACHINE=alphaev68 ;;\n\t    \"EV6.9A (21264/EV69A)\")\n\t\tUNAME_MACHINE=alphaev69 ;;\n\t    \"EV7 (21364)\")\n\t\tUNAME_MACHINE=alphaev7 ;;\n\t    \"EV7.9 (21364A)\")\n\t\tUNAME_MACHINE=alphaev79 ;;\n\tesac\n\t# A Pn.n version is a patched version.\n\t# A Vn.n version is a released version.\n\t# A Tn.n version is a released field test version.\n\t# A Xn.n version is an unreleased experimental baselevel.\n\t# 1.2 uses \"1.2\" for uname -r.\n\techo \"$UNAME_MACHINE\"-dec-osf\"$(echo \"$UNAME_RELEASE\" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz)\"\n\t# Reset EXIT trap before exiting to avoid spurious non-zero exit code.\n\texitcode=$?\n\ttrap '' 0\n\texit $exitcode ;;\n    Amiga*:UNIX_System_V:4.0:*)\n\techo m68k-unknown-sysv4\n\texit ;;\n    *:[Aa]miga[Oo][Ss]:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-amigaos\n\texit ;;\n    *:[Mm]orph[Oo][Ss]:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-morphos\n\texit ;;\n    *:OS/390:*:*)\n\techo i370-ibm-openedition\n\texit ;;\n    *:z/VM:*:*)\n\techo s390-ibm-zvmoe\n\texit ;;\n    *:OS400:*:*)\n\techo 
powerpc-ibm-os400\n\texit ;;\n    arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)\n\techo arm-acorn-riscix\"$UNAME_RELEASE\"\n\texit ;;\n    arm*:riscos:*:*|arm*:RISCOS:*:*)\n\techo arm-unknown-riscos\n\texit ;;\n    SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)\n\techo hppa1.1-hitachi-hiuxmpp\n\texit ;;\n    Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)\n\t# akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.\n\tif test \"$( (/bin/universe) 2>/dev/null)\" = att ; then\n\t\techo pyramid-pyramid-sysv3\n\telse\n\t\techo pyramid-pyramid-bsd\n\tfi\n\texit ;;\n    NILE*:*:*:dcosx)\n\techo pyramid-pyramid-svr4\n\texit ;;\n    DRS?6000:unix:4.0:6*)\n\techo sparc-icl-nx6\n\texit ;;\n    DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*)\n\tcase $(/usr/bin/uname -p) in\n\t    sparc) echo sparc-icl-nx7; exit ;;\n\tesac ;;\n    s390x:SunOS:*:*)\n\techo \"$UNAME_MACHINE\"-ibm-solaris2\"$(echo \"$UNAME_RELEASE\" | sed -e 's/[^.]*//')\"\n\texit ;;\n    sun4H:SunOS:5.*:*)\n\techo sparc-hal-solaris2\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*//')\"\n\texit ;;\n    sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)\n\techo sparc-sun-solaris2\"$(echo \"$UNAME_RELEASE\" | sed -e 's/[^.]*//')\"\n\texit ;;\n    i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)\n\techo i386-pc-auroraux\"$UNAME_RELEASE\"\n\texit ;;\n    i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)\n\tset_cc_for_build\n\tSUN_ARCH=i386\n\t# If there is a compiler, see if it is configured for 64-bit objects.\n\t# Note that the Sun cc does not turn __LP64__ into 1 like gcc does.\n\t# This test works for both compilers.\n\tif test \"$CC_FOR_BUILD\" != no_compiler_found; then\n\t    if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \\\n\t\t(CCOPTS=\"\" $CC_FOR_BUILD -E - 2>/dev/null) | \\\n\t\tgrep IS_64BIT_ARCH >/dev/null\n\t    then\n\t\tSUN_ARCH=x86_64\n\t    fi\n\tfi\n\techo \"$SUN_ARCH\"-pc-solaris2\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*//')\"\n\texit ;;\n    sun4*:SunOS:6*:*)\n\t# According to 
config.sub, this is the proper way to canonicalize\n\t# SunOS6.  Hard to guess exactly what SunOS6 will be like, but\n\t# it's likely to be more like Solaris than SunOS4.\n\techo sparc-sun-solaris3\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*//')\"\n\texit ;;\n    sun4*:SunOS:*:*)\n\tcase \"$(/usr/bin/arch -k)\" in\n\t    Series*|S4*)\n\t\tUNAME_RELEASE=$(uname -v)\n\t\t;;\n\tesac\n\t# Japanese Language versions have a version number like `4.1.3-JL'.\n\techo sparc-sun-sunos\"$(echo \"$UNAME_RELEASE\"|sed -e 's/-/_/')\"\n\texit ;;\n    sun3*:SunOS:*:*)\n\techo m68k-sun-sunos\"$UNAME_RELEASE\"\n\texit ;;\n    sun*:*:4.2BSD:*)\n\tUNAME_RELEASE=$( (sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null)\n\ttest \"x$UNAME_RELEASE\" = x && UNAME_RELEASE=3\n\tcase \"$(/bin/arch)\" in\n\t    sun3)\n\t\techo m68k-sun-sunos\"$UNAME_RELEASE\"\n\t\t;;\n\t    sun4)\n\t\techo sparc-sun-sunos\"$UNAME_RELEASE\"\n\t\t;;\n\tesac\n\texit ;;\n    aushp:SunOS:*:*)\n\techo sparc-auspex-sunos\"$UNAME_RELEASE\"\n\texit ;;\n    # The situation for MiNT is a little confusing.  The machine name\n    # can be virtually everything (everything which is not\n    # \"atarist\" or \"atariste\" at least should have a processor\n    # > m68000).  The system name ranges from \"MiNT\" over \"FreeMiNT\"\n    # to the lowercase version \"mint\" (or \"freemint\").  Finally\n    # the system name \"TOS\" denotes a system which is actually not\n    # MiNT.  
But MiNT is downward compatible to TOS, so this should\n    # be no problem.\n    atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)\n\techo m68k-atari-mint\"$UNAME_RELEASE\"\n\texit ;;\n    atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)\n\techo m68k-atari-mint\"$UNAME_RELEASE\"\n\texit ;;\n    *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)\n\techo m68k-atari-mint\"$UNAME_RELEASE\"\n\texit ;;\n    milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)\n\techo m68k-milan-mint\"$UNAME_RELEASE\"\n\texit ;;\n    hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)\n\techo m68k-hades-mint\"$UNAME_RELEASE\"\n\texit ;;\n    *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)\n\techo m68k-unknown-mint\"$UNAME_RELEASE\"\n\texit ;;\n    m68k:machten:*:*)\n\techo m68k-apple-machten\"$UNAME_RELEASE\"\n\texit ;;\n    powerpc:machten:*:*)\n\techo powerpc-apple-machten\"$UNAME_RELEASE\"\n\texit ;;\n    RISC*:Mach:*:*)\n\techo mips-dec-mach_bsd4.3\n\texit ;;\n    RISC*:ULTRIX:*:*)\n\techo mips-dec-ultrix\"$UNAME_RELEASE\"\n\texit ;;\n    VAX*:ULTRIX*:*:*)\n\techo vax-dec-ultrix\"$UNAME_RELEASE\"\n\texit ;;\n    2020:CLIX:*:* | 2430:CLIX:*:*)\n\techo clipper-intergraph-clix\"$UNAME_RELEASE\"\n\texit ;;\n    mips:*:*:UMIPS | mips:*:*:RISCos)\n\tset_cc_for_build\n\tsed 's/^\t//' << EOF > \"$dummy.c\"\n#ifdef __cplusplus\n#include <stdio.h>  /* for printf() prototype */\n\tint main (int argc, char *argv[]) {\n#else\n\tint main (argc, argv) int argc; char *argv[]; {\n#endif\n\t#if defined (host_mips) && defined (MIPSEB)\n\t#if defined (SYSTYPE_SYSV)\n\t  printf (\"mips-mips-riscos%ssysv\\\\n\", argv[1]); exit (0);\n\t#endif\n\t#if defined (SYSTYPE_SVR4)\n\t  printf (\"mips-mips-riscos%ssvr4\\\\n\", argv[1]); exit (0);\n\t#endif\n\t#if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)\n\t  printf (\"mips-mips-riscos%sbsd\\\\n\", argv[1]); exit (0);\n\t#endif\n\t#endif\n\t  exit (-1);\n\t}\nEOF\n\t$CC_FOR_BUILD -o \"$dummy\" \"$dummy.c\" &&\n\t  
dummyarg=$(echo \"$UNAME_RELEASE\" | sed -n 's/\\([0-9]*\\).*/\\1/p') &&\n\t  SYSTEM_NAME=$(\"$dummy\" \"$dummyarg\") &&\n\t    { echo \"$SYSTEM_NAME\"; exit; }\n\techo mips-mips-riscos\"$UNAME_RELEASE\"\n\texit ;;\n    Motorola:PowerMAX_OS:*:*)\n\techo powerpc-motorola-powermax\n\texit ;;\n    Motorola:*:4.3:PL8-*)\n\techo powerpc-harris-powermax\n\texit ;;\n    Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)\n\techo powerpc-harris-powermax\n\texit ;;\n    Night_Hawk:Power_UNIX:*:*)\n\techo powerpc-harris-powerunix\n\texit ;;\n    m88k:CX/UX:7*:*)\n\techo m88k-harris-cxux7\n\texit ;;\n    m88k:*:4*:R4*)\n\techo m88k-motorola-sysv4\n\texit ;;\n    m88k:*:3*:R3*)\n\techo m88k-motorola-sysv3\n\texit ;;\n    AViiON:dgux:*:*)\n\t# DG/UX returns AViiON for all architectures\n\tUNAME_PROCESSOR=$(/usr/bin/uname -p)\n\tif test \"$UNAME_PROCESSOR\" = mc88100 || test \"$UNAME_PROCESSOR\" = mc88110\n\tthen\n\t    if test \"$TARGET_BINARY_INTERFACE\"x = m88kdguxelfx || \\\n\t       test \"$TARGET_BINARY_INTERFACE\"x = x\n\t    then\n\t\techo m88k-dg-dgux\"$UNAME_RELEASE\"\n\t    else\n\t\techo m88k-dg-dguxbcs\"$UNAME_RELEASE\"\n\t    fi\n\telse\n\t    echo i586-dg-dgux\"$UNAME_RELEASE\"\n\tfi\n\texit ;;\n    M88*:DolphinOS:*:*)\t# DolphinOS (SVR3)\n\techo m88k-dolphin-sysv3\n\texit ;;\n    M88*:*:R3*:*)\n\t# Delta 88k system running SVR3\n\techo m88k-motorola-sysv3\n\texit ;;\n    XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)\n\techo m88k-tektronix-sysv3\n\texit ;;\n    Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)\n\techo m68k-tektronix-bsd\n\texit ;;\n    *:IRIX*:*:*)\n\techo mips-sgi-irix\"$(echo \"$UNAME_RELEASE\"|sed -e 's/-/_/g')\"\n\texit ;;\n    ????????:AIX?:[12].1:2)   # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.\n\techo romp-ibm-aix     # uname -m gives an 8 hex-code CPU id\n\texit ;;               # Note that: echo \"'$(uname -s)'\" gives 'AIX '\n    i*86:AIX:*:*)\n\techo i386-ibm-aix\n\texit ;;\n    ia64:AIX:*:*)\n\tif test -x 
/usr/bin/oslevel ; then\n\t\tIBM_REV=$(/usr/bin/oslevel)\n\telse\n\t\tIBM_REV=\"$UNAME_VERSION.$UNAME_RELEASE\"\n\tfi\n\techo \"$UNAME_MACHINE\"-ibm-aix\"$IBM_REV\"\n\texit ;;\n    *:AIX:2:3)\n\tif grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then\n\t\tset_cc_for_build\n\t\tsed 's/^\t\t//' << EOF > \"$dummy.c\"\n\t\t#include <sys/systemcfg.h>\n\n\t\tmain()\n\t\t\t{\n\t\t\tif (!__power_pc())\n\t\t\t\texit(1);\n\t\t\tputs(\"powerpc-ibm-aix3.2.5\");\n\t\t\texit(0);\n\t\t\t}\nEOF\n\t\tif $CC_FOR_BUILD -o \"$dummy\" \"$dummy.c\" && SYSTEM_NAME=$(\"$dummy\")\n\t\tthen\n\t\t\techo \"$SYSTEM_NAME\"\n\t\telse\n\t\t\techo rs6000-ibm-aix3.2.5\n\t\tfi\n\telif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then\n\t\techo rs6000-ibm-aix3.2.4\n\telse\n\t\techo rs6000-ibm-aix3.2\n\tfi\n\texit ;;\n    *:AIX:*:[4567])\n\tIBM_CPU_ID=$(/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }')\n\tif /usr/sbin/lsattr -El \"$IBM_CPU_ID\" | grep ' POWER' >/dev/null 2>&1; then\n\t\tIBM_ARCH=rs6000\n\telse\n\t\tIBM_ARCH=powerpc\n\tfi\n\tif test -x /usr/bin/lslpp ; then\n\t\tIBM_REV=$(/usr/bin/lslpp -Lqc bos.rte.libc |\n\t\t\t   awk -F: '{ print $3 }' | sed s/[0-9]*$/0/)\n\telse\n\t\tIBM_REV=\"$UNAME_VERSION.$UNAME_RELEASE\"\n\tfi\n\techo \"$IBM_ARCH\"-ibm-aix\"$IBM_REV\"\n\texit ;;\n    *:AIX:*:*)\n\techo rs6000-ibm-aix\n\texit ;;\n    ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*)\n\techo romp-ibm-bsd4.4\n\texit ;;\n    ibmrt:*BSD:*|romp-ibm:BSD:*)            # covers RT/PC BSD and\n\techo romp-ibm-bsd\"$UNAME_RELEASE\"   # 4.3 with uname added to\n\texit ;;                             # report: romp-ibm BSD 4.3\n    *:BOSX:*:*)\n\techo rs6000-bull-bosx\n\texit ;;\n    DPX/2?00:B.O.S.:*:*)\n\techo m68k-bull-sysv3\n\texit ;;\n    9000/[34]??:4.3bsd:1.*:*)\n\techo m68k-hp-bsd\n\texit ;;\n    hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)\n\techo m68k-hp-bsd4.4\n\texit ;;\n    9000/[34678]??:HP-UX:*:*)\n\tHPUX_REV=$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*.[0B]*//')\n\tcase 
\"$UNAME_MACHINE\" in\n\t    9000/31?)            HP_ARCH=m68000 ;;\n\t    9000/[34]??)         HP_ARCH=m68k ;;\n\t    9000/[678][0-9][0-9])\n\t\tif test -x /usr/bin/getconf; then\n\t\t    sc_cpu_version=$(/usr/bin/getconf SC_CPU_VERSION 2>/dev/null)\n\t\t    sc_kernel_bits=$(/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null)\n\t\t    case \"$sc_cpu_version\" in\n\t\t      523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0\n\t\t      528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1\n\t\t      532)                      # CPU_PA_RISC2_0\n\t\t\tcase \"$sc_kernel_bits\" in\n\t\t\t  32) HP_ARCH=hppa2.0n ;;\n\t\t\t  64) HP_ARCH=hppa2.0w ;;\n\t\t\t  '') HP_ARCH=hppa2.0 ;;   # HP-UX 10.20\n\t\t\tesac ;;\n\t\t    esac\n\t\tfi\n\t\tif test \"$HP_ARCH\" = \"\"; then\n\t\t    set_cc_for_build\n\t\t    sed 's/^\t\t//' << EOF > \"$dummy.c\"\n\n\t\t#define _HPUX_SOURCE\n\t\t#include <stdlib.h>\n\t\t#include <unistd.h>\n\n\t\tint main ()\n\t\t{\n\t\t#if defined(_SC_KERNEL_BITS)\n\t\t    long bits = sysconf(_SC_KERNEL_BITS);\n\t\t#endif\n\t\t    long cpu  = sysconf (_SC_CPU_VERSION);\n\n\t\t    switch (cpu)\n\t\t\t{\n\t\t\tcase CPU_PA_RISC1_0: puts (\"hppa1.0\"); break;\n\t\t\tcase CPU_PA_RISC1_1: puts (\"hppa1.1\"); break;\n\t\t\tcase CPU_PA_RISC2_0:\n\t\t#if defined(_SC_KERNEL_BITS)\n\t\t\t    switch (bits)\n\t\t\t\t{\n\t\t\t\tcase 64: puts (\"hppa2.0w\"); break;\n\t\t\t\tcase 32: puts (\"hppa2.0n\"); break;\n\t\t\t\tdefault: puts (\"hppa2.0\"); break;\n\t\t\t\t} break;\n\t\t#else  /* !defined(_SC_KERNEL_BITS) */\n\t\t\t    puts (\"hppa2.0\"); break;\n\t\t#endif\n\t\t\tdefault: puts (\"hppa1.0\"); break;\n\t\t\t}\n\t\t    exit (0);\n\t\t}\nEOF\n\t\t    (CCOPTS=\"\" $CC_FOR_BUILD -o \"$dummy\" \"$dummy.c\" 2>/dev/null) && HP_ARCH=$(\"$dummy\")\n\t\t    test -z \"$HP_ARCH\" && HP_ARCH=hppa\n\t\tfi ;;\n\tesac\n\tif test \"$HP_ARCH\" = hppa2.0w\n\tthen\n\t    set_cc_for_build\n\n\t    # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating\n\t    # 32-bit code.  
hppa64-hp-hpux* has the same kernel and a compiler\n\t    # generating 64-bit code.  GNU and HP use different nomenclature:\n\t    #\n\t    # $ CC_FOR_BUILD=cc ./config.guess\n\t    # => hppa2.0w-hp-hpux11.23\n\t    # $ CC_FOR_BUILD=\"cc +DA2.0w\" ./config.guess\n\t    # => hppa64-hp-hpux11.23\n\n\t    if echo __LP64__ | (CCOPTS=\"\" $CC_FOR_BUILD -E - 2>/dev/null) |\n\t\tgrep -q __LP64__\n\t    then\n\t\tHP_ARCH=hppa2.0w\n\t    else\n\t\tHP_ARCH=hppa64\n\t    fi\n\tfi\n\techo \"$HP_ARCH\"-hp-hpux\"$HPUX_REV\"\n\texit ;;\n    ia64:HP-UX:*:*)\n\tHPUX_REV=$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*.[0B]*//')\n\techo ia64-hp-hpux\"$HPUX_REV\"\n\texit ;;\n    3050*:HI-UX:*:*)\n\tset_cc_for_build\n\tsed 's/^\t//' << EOF > \"$dummy.c\"\n\t#include <unistd.h>\n\tint\n\tmain ()\n\t{\n\t  long cpu = sysconf (_SC_CPU_VERSION);\n\t  /* The order matters, because CPU_IS_HP_MC68K erroneously returns\n\t     true for CPU_PA_RISC1_0.  CPU_IS_PA_RISC returns correct\n\t     results, however.  */\n\t  if (CPU_IS_PA_RISC (cpu))\n\t    {\n\t      switch (cpu)\n\t\t{\n\t\t  case CPU_PA_RISC1_0: puts (\"hppa1.0-hitachi-hiuxwe2\"); break;\n\t\t  case CPU_PA_RISC1_1: puts (\"hppa1.1-hitachi-hiuxwe2\"); break;\n\t\t  case CPU_PA_RISC2_0: puts (\"hppa2.0-hitachi-hiuxwe2\"); break;\n\t\t  default: puts (\"hppa-hitachi-hiuxwe2\"); break;\n\t\t}\n\t    }\n\t  else if (CPU_IS_HP_MC68K (cpu))\n\t    puts (\"m68k-hitachi-hiuxwe2\");\n\t  else puts (\"unknown-hitachi-hiuxwe2\");\n\t  exit (0);\n\t}\nEOF\n\t$CC_FOR_BUILD -o \"$dummy\" \"$dummy.c\" && SYSTEM_NAME=$(\"$dummy\") &&\n\t\t{ echo \"$SYSTEM_NAME\"; exit; }\n\techo unknown-hitachi-hiuxwe2\n\texit ;;\n    9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*)\n\techo hppa1.1-hp-bsd\n\texit ;;\n    9000/8??:4.3bsd:*:*)\n\techo hppa1.0-hp-bsd\n\texit ;;\n    *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)\n\techo hppa1.0-hp-mpeix\n\texit ;;\n    hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*)\n\techo hppa1.1-hp-osf\n\texit ;;\n    hp8??:OSF1:*:*)\n\techo 
hppa1.0-hp-osf\n\texit ;;\n    i*86:OSF1:*:*)\n\tif test -x /usr/sbin/sysversion ; then\n\t    echo \"$UNAME_MACHINE\"-unknown-osf1mk\n\telse\n\t    echo \"$UNAME_MACHINE\"-unknown-osf1\n\tfi\n\texit ;;\n    parisc*:Lites*:*:*)\n\techo hppa1.1-hp-lites\n\texit ;;\n    C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)\n\techo c1-convex-bsd\n\texit ;;\n    C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)\n\tif getsysinfo -f scalar_acc\n\tthen echo c32-convex-bsd\n\telse echo c2-convex-bsd\n\tfi\n\texit ;;\n    C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)\n\techo c34-convex-bsd\n\texit ;;\n    C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)\n\techo c38-convex-bsd\n\texit ;;\n    C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)\n\techo c4-convex-bsd\n\texit ;;\n    CRAY*Y-MP:*:*:*)\n\techo ymp-cray-unicos\"$UNAME_RELEASE\" | sed -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    CRAY*[A-Z]90:*:*:*)\n\techo \"$UNAME_MACHINE\"-cray-unicos\"$UNAME_RELEASE\" \\\n\t| sed -e 's/CRAY.*\\([A-Z]90\\)/\\1/' \\\n\t      -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \\\n\t      -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    CRAY*TS:*:*:*)\n\techo t90-cray-unicos\"$UNAME_RELEASE\" | sed -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    CRAY*T3E:*:*:*)\n\techo alphaev5-cray-unicosmk\"$UNAME_RELEASE\" | sed -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    CRAY*SV1:*:*:*)\n\techo sv1-cray-unicos\"$UNAME_RELEASE\" | sed -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    *:UNICOS/mp:*:*)\n\techo craynv-cray-unicosmp\"$UNAME_RELEASE\" | sed -e 's/\\.[^.]*$/.X/'\n\texit ;;\n    F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)\n\tFUJITSU_PROC=$(uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz)\n\tFUJITSU_SYS=$(uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\\///')\n\tFUJITSU_REL=$(echo \"$UNAME_RELEASE\" | sed -e 's/ /_/')\n\techo \"${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}\"\n\texit ;;\n    5000:UNIX_System_V:4.*:*)\n\tFUJITSU_SYS=$(uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ 
abcdefghijklmnopqrstuvwxyz | sed -e 's/\\///')\n\tFUJITSU_REL=$(echo \"$UNAME_RELEASE\" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/')\n\techo \"sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}\"\n\texit ;;\n    i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\\ Embedded/OS:*:*)\n\techo \"$UNAME_MACHINE\"-pc-bsdi\"$UNAME_RELEASE\"\n\texit ;;\n    sparc*:BSD/OS:*:*)\n\techo sparc-unknown-bsdi\"$UNAME_RELEASE\"\n\texit ;;\n    *:BSD/OS:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-bsdi\"$UNAME_RELEASE\"\n\texit ;;\n    arm:FreeBSD:*:*)\n\tUNAME_PROCESSOR=$(uname -p)\n\tset_cc_for_build\n\tif echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \\\n\t    | grep -q __ARM_PCS_VFP\n\tthen\n\t    echo \"${UNAME_PROCESSOR}\"-unknown-freebsd\"$(echo ${UNAME_RELEASE}|sed -e 's/[-(].*//')\"-gnueabi\n\telse\n\t    echo \"${UNAME_PROCESSOR}\"-unknown-freebsd\"$(echo ${UNAME_RELEASE}|sed -e 's/[-(].*//')\"-gnueabihf\n\tfi\n\texit ;;\n    *:FreeBSD:*:*)\n\tUNAME_PROCESSOR=$(/usr/bin/uname -p)\n\tcase \"$UNAME_PROCESSOR\" in\n\t    amd64)\n\t\tUNAME_PROCESSOR=x86_64 ;;\n\t    i386)\n\t\tUNAME_PROCESSOR=i586 ;;\n\tesac\n\techo \"$UNAME_PROCESSOR\"-unknown-freebsd\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[-(].*//')\"\n\texit ;;\n    i*:CYGWIN*:*)\n\techo \"$UNAME_MACHINE\"-pc-cygwin\n\texit ;;\n    *:MINGW64*:*)\n\techo \"$UNAME_MACHINE\"-pc-mingw64\n\texit ;;\n    *:MINGW*:*)\n\techo \"$UNAME_MACHINE\"-pc-mingw32\n\texit ;;\n    *:MSYS*:*)\n\techo \"$UNAME_MACHINE\"-pc-msys\n\texit ;;\n    i*:PW*:*)\n\techo \"$UNAME_MACHINE\"-pc-pw32\n\texit ;;\n    *:Interix*:*)\n\tcase \"$UNAME_MACHINE\" in\n\t    x86)\n\t\techo i586-pc-interix\"$UNAME_RELEASE\"\n\t\texit ;;\n\t    authenticamd | genuineintel | EM64T)\n\t\techo x86_64-unknown-interix\"$UNAME_RELEASE\"\n\t\texit ;;\n\t    IA64)\n\t\techo ia64-unknown-interix\"$UNAME_RELEASE\"\n\t\texit ;;\n\tesac ;;\n    i*:UWIN*:*)\n\techo \"$UNAME_MACHINE\"-pc-uwin\n\texit ;;\n    amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*)\n\techo 
x86_64-pc-cygwin\n\texit ;;\n    prep*:SunOS:5.*:*)\n\techo powerpcle-unknown-solaris2\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[^.]*//')\"\n\texit ;;\n    *:GNU:*:*)\n\t# the GNU system\n\techo \"$(echo \"$UNAME_MACHINE\"|sed -e 's,[-/].*$,,')-unknown-$LIBC$(echo \"$UNAME_RELEASE\"|sed -e 's,/.*$,,')\"\n\texit ;;\n    *:GNU/*:*:*)\n\t# other systems with GNU libc and userland\n\techo \"$UNAME_MACHINE-unknown-$(echo \"$UNAME_SYSTEM\" | sed 's,^[^/]*/,,' | tr \"[:upper:]\" \"[:lower:]\")$(echo \"$UNAME_RELEASE\"|sed -e 's/[-(].*//')-$LIBC\"\n\texit ;;\n    *:Minix:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-minix\n\texit ;;\n    aarch64:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    aarch64_be:Linux:*:*)\n\tUNAME_MACHINE=aarch64_be\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    alpha:Linux:*:*)\n\tcase $(sed -n '/^cpu model/s/^.*: \\(.*\\)/\\1/p' /proc/cpuinfo 2>/dev/null) in\n\t  EV5)   UNAME_MACHINE=alphaev5 ;;\n\t  EV56)  UNAME_MACHINE=alphaev56 ;;\n\t  PCA56) UNAME_MACHINE=alphapca56 ;;\n\t  PCA57) UNAME_MACHINE=alphapca56 ;;\n\t  EV6)   UNAME_MACHINE=alphaev6 ;;\n\t  EV67)  UNAME_MACHINE=alphaev67 ;;\n\t  EV68*) UNAME_MACHINE=alphaev68 ;;\n\tesac\n\tobjdump --private-headers /bin/sh | grep -q ld.so.1\n\tif test \"$?\" = 0 ; then LIBC=gnulibc1 ; fi\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    arc:Linux:*:* | arceb:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    arm*:Linux:*:*)\n\tset_cc_for_build\n\tif echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \\\n\t    | grep -q __ARM_EABI__\n\tthen\n\t    echo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\telse\n\t    if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \\\n\t\t| grep -q __ARM_PCS_VFP\n\t    then\n\t\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"eabi\n\t    else\n\t\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"eabihf\n\t    fi\n\tfi\n\texit ;;\n    avr32*:Linux:*:*)\n\techo 
\"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    cris:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-axis-linux-\"$LIBC\"\n\texit ;;\n    crisv32:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-axis-linux-\"$LIBC\"\n\texit ;;\n    e2k:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    frv:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    hexagon:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    i*86:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-pc-linux-\"$LIBC\"\n\texit ;;\n    ia64:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    k1om:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    loongarch32:Linux:*:* | loongarch64:Linux:*:* | loongarchx32:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    m32r*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    m68*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    mips:Linux:*:* | mips64:Linux:*:*)\n\tset_cc_for_build\n\tIS_GLIBC=0\n\ttest x\"${LIBC}\" = xgnu && IS_GLIBC=1\n\tsed 's/^\t//' << EOF > \"$dummy.c\"\n\t#undef CPU\n\t#undef mips\n\t#undef mipsel\n\t#undef mips64\n\t#undef mips64el\n\t#if ${IS_GLIBC} && defined(_ABI64)\n\tLIBCABI=gnuabi64\n\t#else\n\t#if ${IS_GLIBC} && defined(_ABIN32)\n\tLIBCABI=gnuabin32\n\t#else\n\tLIBCABI=${LIBC}\n\t#endif\n\t#endif\n\n\t#if ${IS_GLIBC} && defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6\n\tCPU=mipsisa64r6\n\t#else\n\t#if ${IS_GLIBC} && !defined(__mips64) && defined(__mips_isa_rev) && __mips_isa_rev>=6\n\tCPU=mipsisa32r6\n\t#else\n\t#if defined(__mips64)\n\tCPU=mips64\n\t#else\n\tCPU=mips\n\t#endif\n\t#endif\n\t#endif\n\n\t#if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)\n\tMIPS_ENDIAN=el\n\t#else\n\t#if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || 
defined(MIPSEB)\n\tMIPS_ENDIAN=\n\t#else\n\tMIPS_ENDIAN=\n\t#endif\n\t#endif\nEOF\n\teval \"$($CC_FOR_BUILD -E \"$dummy.c\" 2>/dev/null | grep '^CPU\\|^MIPS_ENDIAN\\|^LIBCABI')\"\n\ttest \"x$CPU\" != x && { echo \"$CPU${MIPS_ENDIAN}-unknown-linux-$LIBCABI\"; exit; }\n\t;;\n    mips64el:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    openrisc*:Linux:*:*)\n\techo or1k-unknown-linux-\"$LIBC\"\n\texit ;;\n    or32:Linux:*:* | or1k*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    padre:Linux:*:*)\n\techo sparc-unknown-linux-\"$LIBC\"\n\texit ;;\n    parisc64:Linux:*:* | hppa64:Linux:*:*)\n\techo hppa64-unknown-linux-\"$LIBC\"\n\texit ;;\n    parisc:Linux:*:* | hppa:Linux:*:*)\n\t# Look for CPU level\n\tcase $(grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2) in\n\t  PA7*) echo hppa1.1-unknown-linux-\"$LIBC\" ;;\n\t  PA8*) echo hppa2.0-unknown-linux-\"$LIBC\" ;;\n\t  *)    echo hppa-unknown-linux-\"$LIBC\" ;;\n\tesac\n\texit ;;\n    ppc64:Linux:*:*)\n\techo powerpc64-unknown-linux-\"$LIBC\"\n\texit ;;\n    ppc:Linux:*:*)\n\techo powerpc-unknown-linux-\"$LIBC\"\n\texit ;;\n    ppc64le:Linux:*:*)\n\techo powerpc64le-unknown-linux-\"$LIBC\"\n\texit ;;\n    ppcle:Linux:*:*)\n\techo powerpcle-unknown-linux-\"$LIBC\"\n\texit ;;\n    riscv32:Linux:*:* | riscv32be:Linux:*:* | riscv64:Linux:*:* | riscv64be:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    s390:Linux:*:* | s390x:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-ibm-linux-\"$LIBC\"\n\texit ;;\n    sh64*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    sh*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    sparc:Linux:*:* | sparc64:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    tile*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    vax:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-dec-linux-\"$LIBC\"\n\texit ;;\n    
x86_64:Linux:*:*)\n\tset_cc_for_build\n\tLIBCABI=$LIBC\n\tif test \"$CC_FOR_BUILD\" != no_compiler_found; then\n\t    if (echo '#ifdef __ILP32__'; echo IS_X32; echo '#endif') | \\\n\t\t(CCOPTS=\"\" $CC_FOR_BUILD -E - 2>/dev/null) | \\\n\t\tgrep IS_X32 >/dev/null\n\t    then\n\t\tLIBCABI=\"$LIBC\"x32\n\t    fi\n\tfi\n\techo \"$UNAME_MACHINE\"-pc-linux-\"$LIBCABI\"\n\texit ;;\n    xtensa*:Linux:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-linux-\"$LIBC\"\n\texit ;;\n    i*86:DYNIX/ptx:4*:*)\n\t# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.\n\t# earlier versions are messed up and put the nodename in both\n\t# sysname and nodename.\n\techo i386-sequent-sysv4\n\texit ;;\n    i*86:UNIX_SV:4.2MP:2.*)\n\t# Unixware is an offshoot of SVR4, but it has its own version\n\t# number series starting with 2...\n\t# I am not positive that other SVR4 systems won't match this,\n\t# I just have to hope.  -- rms.\n\t# Use sysv4.2uw... so that sysv4* matches it.\n\techo \"$UNAME_MACHINE\"-pc-sysv4.2uw\"$UNAME_VERSION\"\n\texit ;;\n    i*86:OS/2:*:*)\n\t# If we were able to find `uname', then EMX Unix compatibility\n\t# is probably installed.\n\techo \"$UNAME_MACHINE\"-pc-os2-emx\n\texit ;;\n    i*86:XTS-300:*:STOP)\n\techo \"$UNAME_MACHINE\"-unknown-stop\n\texit ;;\n    i*86:atheos:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-atheos\n\texit ;;\n    i*86:syllable:*:*)\n\techo \"$UNAME_MACHINE\"-pc-syllable\n\texit ;;\n    i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*)\n\techo i386-unknown-lynxos\"$UNAME_RELEASE\"\n\texit ;;\n    i*86:*DOS:*:*)\n\techo \"$UNAME_MACHINE\"-pc-msdosdjgpp\n\texit ;;\n    i*86:*:4.*:*)\n\tUNAME_REL=$(echo \"$UNAME_RELEASE\" | sed 's/\\/MP$//')\n\tif grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then\n\t\techo \"$UNAME_MACHINE\"-univel-sysv\"$UNAME_REL\"\n\telse\n\t\techo \"$UNAME_MACHINE\"-pc-sysv\"$UNAME_REL\"\n\tfi\n\texit ;;\n    i*86:*:5:[678]*)\n\t# UnixWare 7.x, OpenUNIX and OpenServer 6.\n\tcase $(/bin/uname -X | grep 
\"^Machine\") in\n\t    *486*)\t     UNAME_MACHINE=i486 ;;\n\t    *Pentium)\t     UNAME_MACHINE=i586 ;;\n\t    *Pent*|*Celeron) UNAME_MACHINE=i686 ;;\n\tesac\n\techo \"$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION}\"\n\texit ;;\n    i*86:*:3.2:*)\n\tif test -f /usr/options/cb.name; then\n\t\tUNAME_REL=$(sed -n 's/.*Version //p' </usr/options/cb.name)\n\t\techo \"$UNAME_MACHINE\"-pc-isc\"$UNAME_REL\"\n\telif /bin/uname -X 2>/dev/null >/dev/null ; then\n\t\tUNAME_REL=$( (/bin/uname -X|grep Release|sed -e 's/.*= //'))\n\t\t(/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486\n\t\t(/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \\\n\t\t\t&& UNAME_MACHINE=i586\n\t\t(/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \\\n\t\t\t&& UNAME_MACHINE=i686\n\t\t(/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \\\n\t\t\t&& UNAME_MACHINE=i686\n\t\techo \"$UNAME_MACHINE\"-pc-sco\"$UNAME_REL\"\n\telse\n\t\techo \"$UNAME_MACHINE\"-pc-sysv32\n\tfi\n\texit ;;\n    pc:*:*:*)\n\t# Left here for compatibility:\n\t# uname -m prints for DJGPP always 'pc', but it prints nothing about\n\t# the processor, so we play safe by assuming i586.\n\t# Note: whatever this is, it MUST be the same as what config.sub\n\t# prints for the \"djgpp\" host, or else GDB configure will decide that\n\t# this is a cross-build.\n\techo i586-pc-msdosdjgpp\n\texit ;;\n    Intel:Mach:3*:*)\n\techo i386-pc-mach3\n\texit ;;\n    paragon:*:*:*)\n\techo i860-intel-osf1\n\texit ;;\n    i860:*:4.*:*) # i860-SVR4\n\tif grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then\n\t  echo i860-stardent-sysv\"$UNAME_RELEASE\" # Stardent Vistra i860-SVR4\n\telse # Add other i860-SVR4 vendors below as they are discovered.\n\t  echo i860-unknown-sysv\"$UNAME_RELEASE\"  # Unknown i860-SVR4\n\tfi\n\texit ;;\n    mini*:CTIX:SYS*5:*)\n\t# \"miniframe\"\n\techo m68010-convergent-sysv\n\texit ;;\n    mc68k:UNIX:SYSTEM5:3.51m)\n\techo m68k-convergent-sysv\n\texit ;;\n    
M680?0:D-NIX:5.3:*)\n\techo m68k-diab-dnix\n\texit ;;\n    M68*:*:R3V[5678]*:*)\n\ttest -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;;\n    3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0)\n\tOS_REL=''\n\ttest -r /etc/.relid \\\n\t&& OS_REL=.$(sed -n 's/[^ ]* [^ ]* \\([0-9][0-9]\\).*/\\1/p' < /etc/.relid)\n\t/bin/uname -p 2>/dev/null | grep 86 >/dev/null \\\n\t  && { echo i486-ncr-sysv4.3\"$OS_REL\"; exit; }\n\t/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \\\n\t  && { echo i586-ncr-sysv4.3\"$OS_REL\"; exit; } ;;\n    3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)\n\t/bin/uname -p 2>/dev/null | grep 86 >/dev/null \\\n\t  && { echo i486-ncr-sysv4; exit; } ;;\n    NCR*:*:4.2:* | MPRAS*:*:4.2:*)\n\tOS_REL='.3'\n\ttest -r /etc/.relid \\\n\t    && OS_REL=.$(sed -n 's/[^ ]* [^ ]* \\([0-9][0-9]\\).*/\\1/p' < /etc/.relid)\n\t/bin/uname -p 2>/dev/null | grep 86 >/dev/null \\\n\t    && { echo i486-ncr-sysv4.3\"$OS_REL\"; exit; }\n\t/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \\\n\t    && { echo i586-ncr-sysv4.3\"$OS_REL\"; exit; }\n\t/bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \\\n\t    && { echo i586-ncr-sysv4.3\"$OS_REL\"; exit; } ;;\n    m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)\n\techo m68k-unknown-lynxos\"$UNAME_RELEASE\"\n\texit ;;\n    mc68030:UNIX_System_V:4.*:*)\n\techo m68k-atari-sysv4\n\texit ;;\n    TSUNAMI:LynxOS:2.*:*)\n\techo sparc-unknown-lynxos\"$UNAME_RELEASE\"\n\texit ;;\n    rs6000:LynxOS:2.*:*)\n\techo rs6000-unknown-lynxos\"$UNAME_RELEASE\"\n\texit ;;\n    PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*)\n\techo powerpc-unknown-lynxos\"$UNAME_RELEASE\"\n\texit ;;\n    SM[BE]S:UNIX_SV:*:*)\n\techo mips-dde-sysv\"$UNAME_RELEASE\"\n\texit ;;\n    RM*:ReliantUNIX-*:*:*)\n\techo mips-sni-sysv4\n\texit ;;\n    RM*:SINIX-*:*:*)\n\techo mips-sni-sysv4\n\texit ;;\n    
*:SINIX-*:*:*)\n\tif uname -p 2>/dev/null >/dev/null ; then\n\t\tUNAME_MACHINE=$( (uname -p) 2>/dev/null)\n\t\techo \"$UNAME_MACHINE\"-sni-sysv4\n\telse\n\t\techo ns32k-sni-sysv\n\tfi\n\texit ;;\n    PENTIUM:*:4.0*:*)\t# Unisys `ClearPath HMP IX 4000' SVR4/MP effort\n\t\t\t# says <Richard.M.Bartel@ccMail.Census.GOV>\n\techo i586-unisys-sysv4\n\texit ;;\n    *:UNIX_System_V:4*:FTX*)\n\t# From Gerald Hewes <hewes@openmarket.com>.\n\t# How about differentiating between stratus architectures? -djm\n\techo hppa1.1-stratus-sysv4\n\texit ;;\n    *:*:*:FTX*)\n\t# From seanf@swdc.stratus.com.\n\techo i860-stratus-sysv4\n\texit ;;\n    i*86:VOS:*:*)\n\t# From Paul.Green@stratus.com.\n\techo \"$UNAME_MACHINE\"-stratus-vos\n\texit ;;\n    *:VOS:*:*)\n\t# From Paul.Green@stratus.com.\n\techo hppa1.1-stratus-vos\n\texit ;;\n    mc68*:A/UX:*:*)\n\techo m68k-apple-aux\"$UNAME_RELEASE\"\n\texit ;;\n    news*:NEWS-OS:6*:*)\n\techo mips-sony-newsos6\n\texit ;;\n    R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)\n\tif test -d /usr/nec; then\n\t\techo mips-nec-sysv\"$UNAME_RELEASE\"\n\telse\n\t\techo mips-unknown-sysv\"$UNAME_RELEASE\"\n\tfi\n\texit ;;\n    BeBox:BeOS:*:*)\t# BeOS running on hardware made by Be, PPC only.\n\techo powerpc-be-beos\n\texit ;;\n    BeMac:BeOS:*:*)\t# BeOS running on Mac or Mac clone, PPC only.\n\techo powerpc-apple-beos\n\texit ;;\n    BePC:BeOS:*:*)\t# BeOS running on Intel PC compatible.\n\techo i586-pc-beos\n\texit ;;\n    BePC:Haiku:*:*)\t# Haiku running on Intel PC compatible.\n\techo i586-pc-haiku\n\texit ;;\n    x86_64:Haiku:*:*)\n\techo x86_64-unknown-haiku\n\texit ;;\n    SX-4:SUPER-UX:*:*)\n\techo sx4-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-5:SUPER-UX:*:*)\n\techo sx5-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-6:SUPER-UX:*:*)\n\techo sx6-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-7:SUPER-UX:*:*)\n\techo sx7-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-8:SUPER-UX:*:*)\n\techo 
sx8-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-8R:SUPER-UX:*:*)\n\techo sx8r-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    SX-ACE:SUPER-UX:*:*)\n\techo sxace-nec-superux\"$UNAME_RELEASE\"\n\texit ;;\n    Power*:Rhapsody:*:*)\n\techo powerpc-apple-rhapsody\"$UNAME_RELEASE\"\n\texit ;;\n    *:Rhapsody:*:*)\n\techo \"$UNAME_MACHINE\"-apple-rhapsody\"$UNAME_RELEASE\"\n\texit ;;\n    arm64:Darwin:*:*)\n\techo aarch64-apple-darwin\"$UNAME_RELEASE\"\n\texit ;;\n    *:Darwin:*:*)\n\tUNAME_PROCESSOR=$(uname -p)\n\tcase $UNAME_PROCESSOR in\n\t    unknown) UNAME_PROCESSOR=powerpc ;;\n\tesac\n\tif command -v xcode-select > /dev/null 2> /dev/null && \\\n\t\t! xcode-select --print-path > /dev/null 2> /dev/null ; then\n\t    # Avoid executing cc if there is no toolchain installed as\n\t    # cc will be a stub that puts up a graphical alert\n\t    # prompting the user to install developer tools.\n\t    CC_FOR_BUILD=no_compiler_found\n\telse\n\t    set_cc_for_build\n\tfi\n\tif test \"$CC_FOR_BUILD\" != no_compiler_found; then\n\t    if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \\\n\t\t   (CCOPTS=\"\" $CC_FOR_BUILD -E - 2>/dev/null) | \\\n\t\t   grep IS_64BIT_ARCH >/dev/null\n\t    then\n\t\tcase $UNAME_PROCESSOR in\n\t\t    i386) UNAME_PROCESSOR=x86_64 ;;\n\t\t    powerpc) UNAME_PROCESSOR=powerpc64 ;;\n\t\tesac\n\t    fi\n\t    # On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc\n\t    if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \\\n\t\t   (CCOPTS=\"\" $CC_FOR_BUILD -E - 2>/dev/null) | \\\n\t\t   grep IS_PPC >/dev/null\n\t    then\n\t\tUNAME_PROCESSOR=powerpc\n\t    fi\n\telif test \"$UNAME_PROCESSOR\" = i386 ; then\n\t    # uname -m returns i386 or x86_64\n\t    UNAME_PROCESSOR=$UNAME_MACHINE\n\tfi\n\techo \"$UNAME_PROCESSOR\"-apple-darwin\"$UNAME_RELEASE\"\n\texit ;;\n    *:procnto*:*:* | *:QNX:[0123456789]*:*)\n\tUNAME_PROCESSOR=$(uname -p)\n\tif test \"$UNAME_PROCESSOR\" = x86; 
then\n\t\tUNAME_PROCESSOR=i386\n\t\tUNAME_MACHINE=pc\n\tfi\n\techo \"$UNAME_PROCESSOR\"-\"$UNAME_MACHINE\"-nto-qnx\"$UNAME_RELEASE\"\n\texit ;;\n    *:QNX:*:4*)\n\techo i386-pc-qnx\n\texit ;;\n    NEO-*:NONSTOP_KERNEL:*:*)\n\techo neo-tandem-nsk\"$UNAME_RELEASE\"\n\texit ;;\n    NSE-*:NONSTOP_KERNEL:*:*)\n\techo nse-tandem-nsk\"$UNAME_RELEASE\"\n\texit ;;\n    NSR-*:NONSTOP_KERNEL:*:*)\n\techo nsr-tandem-nsk\"$UNAME_RELEASE\"\n\texit ;;\n    NSV-*:NONSTOP_KERNEL:*:*)\n\techo nsv-tandem-nsk\"$UNAME_RELEASE\"\n\texit ;;\n    NSX-*:NONSTOP_KERNEL:*:*)\n\techo nsx-tandem-nsk\"$UNAME_RELEASE\"\n\texit ;;\n    *:NonStop-UX:*:*)\n\techo mips-compaq-nonstopux\n\texit ;;\n    BS2000:POSIX*:*:*)\n\techo bs2000-siemens-sysv\n\texit ;;\n    DS/*:UNIX_System_V:*:*)\n\techo \"$UNAME_MACHINE\"-\"$UNAME_SYSTEM\"-\"$UNAME_RELEASE\"\n\texit ;;\n    *:Plan9:*:*)\n\t# \"uname -m\" is not consistent, so use $cputype instead. 386\n\t# is converted to i386 for consistency with other x86\n\t# operating systems.\n\t# shellcheck disable=SC2154\n\tif test \"$cputype\" = 386; then\n\t    UNAME_MACHINE=i386\n\telse\n\t    UNAME_MACHINE=\"$cputype\"\n\tfi\n\techo \"$UNAME_MACHINE\"-unknown-plan9\n\texit ;;\n    *:TOPS-10:*:*)\n\techo pdp10-unknown-tops10\n\texit ;;\n    *:TENEX:*:*)\n\techo pdp10-unknown-tenex\n\texit ;;\n    KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*)\n\techo pdp10-dec-tops20\n\texit ;;\n    XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*)\n\techo pdp10-xkl-tops20\n\texit ;;\n    *:TOPS-20:*:*)\n\techo pdp10-unknown-tops20\n\texit ;;\n    *:ITS:*:*)\n\techo pdp10-unknown-its\n\texit ;;\n    SEI:*:*:SEIUX)\n\techo mips-sei-seiux\"$UNAME_RELEASE\"\n\texit ;;\n    *:DragonFly:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-dragonfly\"$(echo \"$UNAME_RELEASE\"|sed -e 's/[-(].*//')\"\n\texit ;;\n    *:*VMS:*:*)\n\tUNAME_MACHINE=$( (uname -p) 2>/dev/null)\n\tcase \"$UNAME_MACHINE\" in\n\t    A*) echo alpha-dec-vms ; exit ;;\n\t    I*) echo ia64-dec-vms ; exit ;;\n\t    V*) echo 
vax-dec-vms ; exit ;;\n\tesac ;;\n    *:XENIX:*:SysV)\n\techo i386-pc-xenix\n\texit ;;\n    i*86:skyos:*:*)\n\techo \"$UNAME_MACHINE\"-pc-skyos\"$(echo \"$UNAME_RELEASE\" | sed -e 's/ .*$//')\"\n\texit ;;\n    i*86:rdos:*:*)\n\techo \"$UNAME_MACHINE\"-pc-rdos\n\texit ;;\n    i*86:AROS:*:*)\n\techo \"$UNAME_MACHINE\"-pc-aros\n\texit ;;\n    x86_64:VMkernel:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-esx\n\texit ;;\n    amd64:Isilon\\ OneFS:*:*)\n\techo x86_64-unknown-onefs\n\texit ;;\n    *:Unleashed:*:*)\n\techo \"$UNAME_MACHINE\"-unknown-unleashed\"$UNAME_RELEASE\"\n\texit ;;\nesac\n\n# No uname command or uname output not recognized.\nset_cc_for_build\ncat > \"$dummy.c\" <<EOF\n#ifdef _SEQUENT_\n#include <sys/types.h>\n#include <sys/utsname.h>\n#endif\n#if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__)\n#if defined (vax) || defined (__vax) || defined (__vax__) || defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__)\n#include <signal.h>\n#if defined(_SIZE_T_) || defined(SIGLOST)\n#include <sys/utsname.h>\n#endif\n#endif\n#endif\nmain ()\n{\n#if defined (sony)\n#if defined (MIPSEB)\n  /* BFD wants \"bsd\" instead of \"newsos\".  Perhaps BFD should be changed,\n     I don't know....  
*/\n  printf (\"mips-sony-bsd\\n\"); exit (0);\n#else\n#include <sys/param.h>\n  printf (\"m68k-sony-newsos%s\\n\",\n#ifdef NEWSOS4\n  \"4\"\n#else\n  \"\"\n#endif\n  ); exit (0);\n#endif\n#endif\n\n#if defined (NeXT)\n#if !defined (__ARCHITECTURE__)\n#define __ARCHITECTURE__ \"m68k\"\n#endif\n  int version;\n  version=$( (hostinfo | sed -n 's/.*NeXT Mach \\([0-9]*\\).*/\\1/p') 2>/dev/null);\n  if (version < 4)\n    printf (\"%s-next-nextstep%d\\n\", __ARCHITECTURE__, version);\n  else\n    printf (\"%s-next-openstep%d\\n\", __ARCHITECTURE__, version);\n  exit (0);\n#endif\n\n#if defined (MULTIMAX) || defined (n16)\n#if defined (UMAXV)\n  printf (\"ns32k-encore-sysv\\n\"); exit (0);\n#else\n#if defined (CMU)\n  printf (\"ns32k-encore-mach\\n\"); exit (0);\n#else\n  printf (\"ns32k-encore-bsd\\n\"); exit (0);\n#endif\n#endif\n#endif\n\n#if defined (__386BSD__)\n  printf (\"i386-pc-bsd\\n\"); exit (0);\n#endif\n\n#if defined (sequent)\n#if defined (i386)\n  printf (\"i386-sequent-dynix\\n\"); exit (0);\n#endif\n#if defined (ns32000)\n  printf (\"ns32k-sequent-dynix\\n\"); exit (0);\n#endif\n#endif\n\n#if defined (_SEQUENT_)\n  struct utsname un;\n\n  uname(&un);\n  if (strncmp(un.version, \"V2\", 2) == 0) {\n    printf (\"i386-sequent-ptx2\\n\"); exit (0);\n  }\n  if (strncmp(un.version, \"V1\", 2) == 0) { /* XXX is V1 correct? 
*/\n    printf (\"i386-sequent-ptx1\\n\"); exit (0);\n  }\n  printf (\"i386-sequent-ptx\\n\"); exit (0);\n#endif\n\n#if defined (vax)\n#if !defined (ultrix)\n#include <sys/param.h>\n#if defined (BSD)\n#if BSD == 43\n  printf (\"vax-dec-bsd4.3\\n\"); exit (0);\n#else\n#if BSD == 199006\n  printf (\"vax-dec-bsd4.3reno\\n\"); exit (0);\n#else\n  printf (\"vax-dec-bsd\\n\"); exit (0);\n#endif\n#endif\n#else\n  printf (\"vax-dec-bsd\\n\"); exit (0);\n#endif\n#else\n#if defined(_SIZE_T_) || defined(SIGLOST)\n  struct utsname un;\n  uname (&un);\n  printf (\"vax-dec-ultrix%s\\n\", un.release); exit (0);\n#else\n  printf (\"vax-dec-ultrix\\n\"); exit (0);\n#endif\n#endif\n#endif\n#if defined(ultrix) || defined(_ultrix) || defined(__ultrix) || defined(__ultrix__)\n#if defined(mips) || defined(__mips) || defined(__mips__) || defined(MIPS) || defined(__MIPS__)\n#if defined(_SIZE_T_) || defined(SIGLOST)\n  struct utsname *un;\n  uname (&un);\n  printf (\"mips-dec-ultrix%s\\n\", un.release); exit (0);\n#else\n  printf (\"mips-dec-ultrix\\n\"); exit (0);\n#endif\n#endif\n#endif\n\n#if defined (alliant) && defined (i860)\n  printf (\"i860-alliant-bsd\\n\"); exit (0);\n#endif\n\n  exit (1);\n}\nEOF\n\n$CC_FOR_BUILD -o \"$dummy\" \"$dummy.c\" 2>/dev/null && SYSTEM_NAME=$($dummy) &&\n\t{ echo \"$SYSTEM_NAME\"; exit; }\n\n# Apollos put the system type in the environment.\ntest -d /usr/apollo && { echo \"$ISP-apollo-$SYSTYPE\"; exit; }\n\necho \"$0: unable to guess system type\" >&2\n\ncase \"$UNAME_MACHINE:$UNAME_SYSTEM\" in\n    mips:Linux | mips64:Linux)\n\t# If we got here on MIPS GNU/Linux, output extra information.\n\tcat >&2 <<EOF\n\nNOTE: MIPS GNU/Linux systems require a C compiler to fully recognize\nthe system type. Please install a C compiler and try again.\nEOF\n\t;;\nesac\n\ncat >&2 <<EOF\n\nThis script (version $timestamp), has failed to recognize the\noperating system you are using. 
If your script is old, overwrite *all*\ncopies of config.guess and config.sub with the latest versions from:\n\n  https://git.savannah.gnu.org/cgit/config.git/plain/config.guess\nand\n  https://git.savannah.gnu.org/cgit/config.git/plain/config.sub\nEOF\n\nyear=$(echo $timestamp | sed 's,-.*,,')\n# shellcheck disable=SC2003\nif test \"$(expr \"$(date +%Y)\" - \"$year\")\" -lt 3 ; then\n   cat >&2 <<EOF\n\nIf $0 has already been updated, send the following data and any\ninformation you think might be pertinent to config-patches@gnu.org to\nprovide the necessary information to handle your system.\n\nconfig.guess timestamp = $timestamp\n\nuname -m = $( (uname -m) 2>/dev/null || echo unknown)\nuname -r = $( (uname -r) 2>/dev/null || echo unknown)\nuname -s = $( (uname -s) 2>/dev/null || echo unknown)\nuname -v = $( (uname -v) 2>/dev/null || echo unknown)\n\n/usr/bin/uname -p = $( (/usr/bin/uname -p) 2>/dev/null)\n/bin/uname -X     = $( (/bin/uname -X) 2>/dev/null)\n\nhostinfo               = $( (hostinfo) 2>/dev/null)\n/bin/universe          = $( (/bin/universe) 2>/dev/null)\n/usr/bin/arch -k       = $( (/usr/bin/arch -k) 2>/dev/null)\n/bin/arch              = $( (/bin/arch) 2>/dev/null)\n/usr/bin/oslevel       = $( (/usr/bin/oslevel) 2>/dev/null)\n/usr/convex/getsysinfo = $( (/usr/convex/getsysinfo) 2>/dev/null)\n\nUNAME_MACHINE = \"$UNAME_MACHINE\"\nUNAME_RELEASE = \"$UNAME_RELEASE\"\nUNAME_SYSTEM  = \"$UNAME_SYSTEM\"\nUNAME_VERSION = \"$UNAME_VERSION\"\nEOF\nfi\n\nexit 1\n\n# Local variables:\n# eval: (add-hook 'before-save-hook 'time-stamp)\n# time-stamp-start: \"timestamp='\"\n# time-stamp-format: \"%:y-%02m-%02d\"\n# time-stamp-end: \"'\"\n# End:\n"
  },
  {
    "path": "deps/jemalloc/build-aux/config.sub",
    "content": "#! /bin/sh\n# Configuration validation subroutine script.\n#   Copyright 1992-2021 Free Software Foundation, Inc.\n\ntimestamp='2021-01-07'\n\n# This file is free software; you can redistribute it and/or modify it\n# under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n# General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <https://www.gnu.org/licenses/>.\n#\n# As a special exception to the GNU General Public License, if you\n# distribute this file as part of a program that contains a\n# configuration script generated by Autoconf, you may include it under\n# the same distribution terms that you use for the rest of that\n# program.  This Exception is an additional permission under section 7\n# of the GNU General Public License, version 3 (\"GPLv3\").\n\n\n# Please send patches to <config-patches@gnu.org>.\n#\n# Configuration subroutine to validate and canonicalize a configuration type.\n# Supply the specified configuration type as an argument.\n# If it is invalid, we print an error message on stderr and exit with code 1.\n# Otherwise, we print the canonical config type on stdout and succeed.\n\n# You can get the latest version of this script from:\n# https://git.savannah.gnu.org/cgit/config.git/plain/config.sub\n\n# This file is supposed to be the same for all GNU packages\n# and recognize all the CPU types, system types and aliases\n# that are meaningful with *any* GNU software.\n# Each package is responsible for reporting which valid configurations\n# it does not support.  
The user should be able to distinguish\n# a failure to support a valid configuration from a meaningless\n# configuration.\n\n# The goal of this file is to map all the various variations of a given\n# machine specification into a single specification in the form:\n#\tCPU_TYPE-MANUFACTURER-OPERATING_SYSTEM\n# or in some cases, the newer four-part form:\n#\tCPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM\n# It is wrong to echo any other type of specification.\n\nme=$(echo \"$0\" | sed -e 's,.*/,,')\n\nusage=\"\\\nUsage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS\n\nCanonicalize a configuration name.\n\nOptions:\n  -h, --help         print this help, then exit\n  -t, --time-stamp   print date of last modification, then exit\n  -v, --version      print version number, then exit\n\nReport bugs and patches to <config-patches@gnu.org>.\"\n\nversion=\"\\\nGNU config.sub ($timestamp)\n\nCopyright 1992-2021 Free Software Foundation, Inc.\n\nThis is free software; see the source for copying conditions.  There is NO\nwarranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\"\n\nhelp=\"\nTry \\`$me --help' for more information.\"\n\n# Parse command line\nwhile test $# -gt 0 ; do\n  case $1 in\n    --time-stamp | --time* | -t )\n       echo \"$timestamp\" ; exit ;;\n    --version | -v )\n       echo \"$version\" ; exit ;;\n    --help | --h* | -h )\n       echo \"$usage\"; exit ;;\n    -- )     # Stop option processing\n       shift; break ;;\n    - )\t# Use stdin as input.\n       break ;;\n    -* )\n       echo \"$me: invalid option $1$help\" >&2\n       exit 1 ;;\n\n    *local*)\n       # First pass through any local machine types.\n       echo \"$1\"\n       exit ;;\n\n    * )\n       break ;;\n  esac\ndone\n\ncase $# in\n 0) echo \"$me: missing argument$help\" >&2\n    exit 1;;\n 1) ;;\n *) echo \"$me: too many arguments$help\" >&2\n    exit 1;;\nesac\n\n# Split fields of configuration type\n# shellcheck disable=SC2162\nIFS=\"-\" read field1 field2 field3 field4 
<<EOF\n$1\nEOF\n\n# Separate into logical components for further validation\ncase $1 in\n\t*-*-*-*-*)\n\t\techo Invalid configuration \\`\"$1\"\\': more than four components >&2\n\t\texit 1\n\t\t;;\n\t*-*-*-*)\n\t\tbasic_machine=$field1-$field2\n\t\tbasic_os=$field3-$field4\n\t\t;;\n\t*-*-*)\n\t\t# Ambiguous whether COMPANY is present, or skipped and KERNEL-OS is two\n\t\t# parts\n\t\tmaybe_os=$field2-$field3\n\t\tcase $maybe_os in\n\t\t\tnto-qnx* | linux-* | uclinux-uclibc* \\\n\t\t\t| uclinux-gnu* | kfreebsd*-gnu* | knetbsd*-gnu* | netbsd*-gnu* \\\n\t\t\t| netbsd*-eabi* | kopensolaris*-gnu* | cloudabi*-eabi* \\\n\t\t\t| storm-chaos* | os2-emx* | rtmk-nova*)\n\t\t\t\tbasic_machine=$field1\n\t\t\t\tbasic_os=$maybe_os\n\t\t\t\t;;\n\t\t\tandroid-linux)\n\t\t\t\tbasic_machine=$field1-unknown\n\t\t\t\tbasic_os=linux-android\n\t\t\t\t;;\n\t\t\t*)\n\t\t\t\tbasic_machine=$field1-$field2\n\t\t\t\tbasic_os=$field3\n\t\t\t\t;;\n\t\tesac\n\t\t;;\n\t*-*)\n\t\t# A lone config we happen to match not fitting any pattern\n\t\tcase $field1-$field2 in\n\t\t\tdecstation-3100)\n\t\t\t\tbasic_machine=mips-dec\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\t*-*)\n\t\t\t\t# Second component is usually, but not always the OS\n\t\t\t\tcase $field2 in\n\t\t\t\t\t# Prevent following clause from handling this valid os\n\t\t\t\t\tsun*os*)\n\t\t\t\t\t\tbasic_machine=$field1\n\t\t\t\t\t\tbasic_os=$field2\n\t\t\t\t\t\t;;\n\t\t\t\t\t# Manufacturers\n\t\t\t\t\tdec* | mips* | sequent* | encore* | pc533* | sgi* | sony* \\\n\t\t\t\t\t| att* | 7300* | 3300* | delta* | motorola* | sun[234]* \\\n\t\t\t\t\t| unicom* | ibm* | next | hp | isi* | apollo | altos* \\\n\t\t\t\t\t| convergent* | ncr* | news | 32* | 3600* | 3100* \\\n\t\t\t\t\t| hitachi* | c[123]* | convex* | sun | crds | omron* | dg \\\n\t\t\t\t\t| ultra | tti* | harris | dolphin | highlevel | gould \\\n\t\t\t\t\t| cbm | ns | masscomp | apple | axis | knuth | cray \\\n\t\t\t\t\t| microblaze* | sim | cisco \\\n\t\t\t\t\t| oki | wec | wrs | 
winbond)\n\t\t\t\t\t\tbasic_machine=$field1-$field2\n\t\t\t\t\t\tbasic_os=\n\t\t\t\t\t\t;;\n\t\t\t\t\t*)\n\t\t\t\t\t\tbasic_machine=$field1\n\t\t\t\t\t\tbasic_os=$field2\n\t\t\t\t\t\t;;\n\t\t\t\tesac\n\t\t\t;;\n\t\tesac\n\t\t;;\n\t*)\n\t\t# Convert single-component short-hands not valid as part of\n\t\t# multi-component configurations.\n\t\tcase $field1 in\n\t\t\t386bsd)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\ta29khif)\n\t\t\t\tbasic_machine=a29k-amd\n\t\t\t\tbasic_os=udi\n\t\t\t\t;;\n\t\t\tadobe68k)\n\t\t\t\tbasic_machine=m68010-adobe\n\t\t\t\tbasic_os=scout\n\t\t\t\t;;\n\t\t\talliant)\n\t\t\t\tbasic_machine=fx80-alliant\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\taltos | altos3068)\n\t\t\t\tbasic_machine=m68k-altos\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tam29k)\n\t\t\t\tbasic_machine=a29k-none\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tamdahl)\n\t\t\t\tbasic_machine=580-amdahl\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tamiga)\n\t\t\t\tbasic_machine=m68k-unknown\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tamigaos | amigados)\n\t\t\t\tbasic_machine=m68k-unknown\n\t\t\t\tbasic_os=amigaos\n\t\t\t\t;;\n\t\t\tamigaunix | 
amix)\n\t\t\t\tbasic_machine=m68k-unknown\n\t\t\t\tbasic_os=sysv4\n\t\t\t\t;;\n\t\t\tapollo68)\n\t\t\t\tbasic_machine=m68k-apollo\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tapollo68bsd)\n\t\t\t\tbasic_machine=m68k-apollo\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\taros)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=aros\n\t\t\t\t;;\n\t\t\taux)\n\t\t\t\tbasic_machine=m68k-apple\n\t\t\t\tbasic_os=aux\n\t\t\t\t;;\n\t\t\tbalance)\n\t\t\t\tbasic_machine=ns32k-sequent\n\t\t\t\tbasic_os=dynix\n\t\t\t\t;;\n\t\t\tblackfin)\n\t\t\t\tbasic_machine=bfin-unknown\n\t\t\t\tbasic_os=linux\n\t\t\t\t;;\n\t\t\tcegcc)\n\t\t\t\tbasic_machine=arm-unknown\n\t\t\t\tbasic_os=cegcc\n\t\t\t\t;;\n\t\t\tconvex-c1)\n\t\t\t\tbasic_machine=c1-convex\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tconvex-c2)\n\t\t\t\tbasic_machine=c2-convex\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tconvex-c32)\n\t\t\t\tbasic_machine=c32-convex\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tconvex-c34)\n\t\t\t\tbasic_machine=c34-convex\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tconvex-c38)\n\t\t\t\tbasic_machine=c38-convex\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\tcray)\n\t\t\t\tbasic_machine=j90-cray\n\t\t\t\tbasic_os=unicos\n\t\t\t\t;;\n\t\t\tcrds | unos)\n\t\t\t\tbasic_machine=m68k-crds\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tda30)\n\t\t\t\tbasic_machine=m68k-da30\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tdecstation | pmax | pmin | dec3100 | decstatn)\n\t\t\t\tbasic_machine=mips-dec\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tdelta88)\n\t\t\t\tbasic_machine=m88k-motorola\n\t\t\t\tbasic_os=sysv3\n\t\t\t\t;;\n\t\t\tdicos)\n\t\t\t\tbasic_machine=i686-pc\n\t\t\t\tbasic_os=dicos\n\t\t\t\t;;\n\t\t\tdjgpp)\n\t\t\t\tbasic_machine=i586-pc\n\t\t\t\tbasic_os=msdosdjgpp\n\t\t\t\t;;\n\t\t\tebmon29k)\n\t\t\t\tbasic_machine=a29k-amd\n\t\t\t\tbasic_os=ebmon\n\t\t\t\t;;\n\t\t\tes1800 | OSE68k | ose68k | ose | 
OSE)\n\t\t\t\tbasic_machine=m68k-ericsson\n\t\t\t\tbasic_os=ose\n\t\t\t\t;;\n\t\t\tgmicro)\n\t\t\t\tbasic_machine=tron-gmicro\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tgo32)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=go32\n\t\t\t\t;;\n\t\t\th8300hms)\n\t\t\t\tbasic_machine=h8300-hitachi\n\t\t\t\tbasic_os=hms\n\t\t\t\t;;\n\t\t\th8300xray)\n\t\t\t\tbasic_machine=h8300-hitachi\n\t\t\t\tbasic_os=xray\n\t\t\t\t;;\n\t\t\th8500hms)\n\t\t\t\tbasic_machine=h8500-hitachi\n\t\t\t\tbasic_os=hms\n\t\t\t\t;;\n\t\t\tharris)\n\t\t\t\tbasic_machine=m88k-harris\n\t\t\t\tbasic_os=sysv3\n\t\t\t\t;;\n\t\t\thp300 | hp300hpux)\n\t\t\t\tbasic_machine=m68k-hp\n\t\t\t\tbasic_os=hpux\n\t\t\t\t;;\n\t\t\thp300bsd)\n\t\t\t\tbasic_machine=m68k-hp\n\t\t\t\tbasic_os=bsd\n\t\t\t\t;;\n\t\t\thppaosf)\n\t\t\t\tbasic_machine=hppa1.1-hp\n\t\t\t\tbasic_os=osf\n\t\t\t\t;;\n\t\t\thppro)\n\t\t\t\tbasic_machine=hppa1.1-hp\n\t\t\t\tbasic_os=proelf\n\t\t\t\t;;\n\t\t\ti386mach)\n\t\t\t\tbasic_machine=i386-mach\n\t\t\t\tbasic_os=mach\n\t\t\t\t;;\n\t\t\tisi68 | isi)\n\t\t\t\tbasic_machine=m68k-isi\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tm68knommu)\n\t\t\t\tbasic_machine=m68k-unknown\n\t\t\t\tbasic_os=linux\n\t\t\t\t;;\n\t\t\tmagnum | 
m3230)\n\t\t\t\tbasic_machine=mips-mips\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tmerlin)\n\t\t\t\tbasic_machine=ns32k-utek\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tmingw64)\n\t\t\t\tbasic_machine=x86_64-pc\n\t\t\t\tbasic_os=mingw64\n\t\t\t\t;;\n\t\t\tmingw32)\n\t\t\t\tbasic_machine=i686-pc\n\t\t\t\tbasic_os=mingw32\n\t\t\t\t;;\n\t\t\tmingw32ce)\n\t\t\t\tbasic_machine=arm-unknown\n\t\t\t\tbasic_os=mingw32ce\n\t\t\t\t;;\n\t\t\tmonitor)\n\t\t\t\tbasic_machine=m68k-rom68k\n\t\t\t\tbasic_os=coff\n\t\t\t\t;;\n\t\t\tmorphos)\n\t\t\t\tbasic_machine=powerpc-unknown\n\t\t\t\tbasic_os=morphos\n\t\t\t\t;;\n\t\t\tmoxiebox)\n\t\t\t\tbasic_machine=moxie-unknown\n\t\t\t\tbasic_os=moxiebox\n\t\t\t\t;;\n\t\t\tmsdos)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=msdos\n\t\t\t\t;;\n\t\t\tmsys)\n\t\t\t\tbasic_machine=i686-pc\n\t\t\t\tbasic_os=msys\n\t\t\t\t;;\n\t\t\tmvs)\n\t\t\t\tbasic_machine=i370-ibm\n\t\t\t\tbasic_os=mvs\n\t\t\t\t;;\n\t\t\tnacl)\n\t\t\t\tbasic_machine=le32-unknown\n\t\t\t\tbasic_os=nacl\n\t\t\t\t;;\n\t\t\tncr3000)\n\t\t\t\tbasic_machine=i486-ncr\n\t\t\t\tbasic_os=sysv4\n\t\t\t\t;;\n\t\t\tnetbsd386)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=netbsd\n\t\t\t\t;;\n\t\t\tnetwinder)\n\t\t\t\tbasic_machine=armv4l-rebel\n\t\t\t\tbasic_os=linux\n\t\t\t\t;;\n\t\t\tnews | news700 | news800 | 
news900)\n\t\t\t\tbasic_machine=m68k-sony\n\t\t\t\tbasic_os=newsos\n\t\t\t\t;;\n\t\t\tnews1000)\n\t\t\t\tbasic_machine=m68030-sony\n\t\t\t\tbasic_os=newsos\n\t\t\t\t;;\n\t\t\tnecv70)\n\t\t\t\tbasic_machine=v70-nec\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tnh3000)\n\t\t\t\tbasic_machine=m68k-harris\n\t\t\t\tbasic_os=cxux\n\t\t\t\t;;\n\t\t\tnh[45]000)\n\t\t\t\tbasic_machine=m88k-harris\n\t\t\t\tbasic_os=cxux\n\t\t\t\t;;\n\t\t\tnindy960)\n\t\t\t\tbasic_machine=i960-intel\n\t\t\t\tbasic_os=nindy\n\t\t\t\t;;\n\t\t\tmon960)\n\t\t\t\tbasic_machine=i960-intel\n\t\t\t\tbasic_os=mon960\n\t\t\t\t;;\n\t\t\tnonstopux)\n\t\t\t\tbasic_machine=mips-compaq\n\t\t\t\tbasic_os=nonstopux\n\t\t\t\t;;\n\t\t\tos400)\n\t\t\t\tbasic_machine=powerpc-ibm\n\t\t\t\tbasic_os=os400\n\t\t\t\t;;\n\t\t\tOSE68000 | ose68000)\n\t\t\t\tbasic_machine=m68000-ericsson\n\t\t\t\tbasic_os=ose\n\t\t\t\t;;\n\t\t\tos68k)\n\t\t\t\tbasic_machine=m68k-none\n\t\t\t\tbasic_os=os68k\n\t\t\t\t;;\n\t\t\tparagon)\n\t\t\t\tbasic_machine=i860-intel\n\t\t\t\tbasic_os=osf\n\t\t\t\t;;\n\t\t\tparisc)\n\t\t\t\tbasic_machine=hppa-unknown\n\t\t\t\tbasic_os=linux\n\t\t\t\t;;\n\t\t\tpsp)\n\t\t\t\tbasic_machine=mipsallegrexel-sony\n\t\t\t\tbasic_os=psp\n\t\t\t\t;;\n\t\t\tpw32)\n\t\t\t\tbasic_machine=i586-unknown\n\t\t\t\tbasic_os=pw32\n\t\t\t\t;;\n\t\t\trdos | 
rdos64)\n\t\t\t\tbasic_machine=x86_64-pc\n\t\t\t\tbasic_os=rdos\n\t\t\t\t;;\n\t\t\trdos32)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=rdos\n\t\t\t\t;;\n\t\t\trom68k)\n\t\t\t\tbasic_machine=m68k-rom68k\n\t\t\t\tbasic_os=coff\n\t\t\t\t;;\n\t\t\tsa29200)\n\t\t\t\tbasic_machine=a29k-amd\n\t\t\t\tbasic_os=udi\n\t\t\t\t;;\n\t\t\tsei)\n\t\t\t\tbasic_machine=mips-sei\n\t\t\t\tbasic_os=seiux\n\t\t\t\t;;\n\t\t\tsequent)\n\t\t\t\tbasic_machine=i386-sequent\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tsps7)\n\t\t\t\tbasic_machine=m68k-bull\n\t\t\t\tbasic_os=sysv2\n\t\t\t\t;;\n\t\t\tst2000)\n\t\t\t\tbasic_machine=m68k-tandem\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tstratus)\n\t\t\t\tbasic_machine=i860-stratus\n\t\t\t\tbasic_os=sysv4\n\t\t\t\t;;\n\t\t\tsun2)\n\t\t\t\tbasic_machine=m68000-sun\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tsun2os3)\n\t\t\t\tbasic_machine=m68000-sun\n\t\t\t\tbasic_os=sunos3\n\t\t\t\t;;\n\t\t\tsun2os4)\n\t\t\t\tbasic_machine=m68000-sun\n\t\t\t\tbasic_os=sunos4\n\t\t\t\t;;\n\t\t\tsun3)\n\t\t\t\tbasic_machine=m68k-sun\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tsun3os3)\n\t\t\t\tbasic_machine=m68k-sun\n\t\t\t\tbasic_os=sunos3\n\t\t\t\t;;\n\t\t\tsun3os4)\n\t\t\t\tbasic_machine=m68k-sun\n\t\t\t\tbasic_os=sunos4\n\t\t\t\t;;\n\t\t\tsun4)\n\t\t\t\tbasic_machine=sparc-sun\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tsun4os3)\n\t\t\t\tbasic_machine=sparc-sun\n\t\t\t\tbasic_os=sunos3\n\t\t\t\t;;\n\t\t\tsun4os4)\n\t\t\t\tbasic_machine=sparc-sun\n\t\t\t\tbasic_os=sunos4\n\t\t\t\t;;\n\t\t\tsun4sol2)\n\t\t\t\tbasic_machine=sparc-sun\n\t\t\t\tbasic_os=solaris2\n\t\t\t\t;;\n\t\t\tsun386 | sun386i | 
roadrunner)\n\t\t\t\tbasic_machine=i386-sun\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\t\tsv1)\n\t\t\t\tbasic_machine=sv1-cray\n\t\t\t\tbasic_os=unicos\n\t\t\t\t;;\n\t\t\tsymmetry)\n\t\t\t\tbasic_machine=i386-sequent\n\t\t\t\tbasic_os=dynix\n\t\t\t\t;;\n\t\t\tt3e)\n\t\t\t\tbasic_machine=alphaev5-cray\n\t\t\t\tbasic_os=unicos\n\t\t\t\t;;\n\t\t\tt90)\n\t\t\t\tbasic_machine=t90-cray\n\t\t\t\tbasic_os=unicos\n\t\t\t\t;;\n\t\t\ttoad1)\n\t\t\t\tbasic_machine=pdp10-xkl\n\t\t\t\tbasic_os=tops20\n\t\t\t\t;;\n\t\t\ttpf)\n\t\t\t\tbasic_machine=s390x-ibm\n\t\t\t\tbasic_os=tpf\n\t\t\t\t;;\n\t\t\tudi29k)\n\t\t\t\tbasic_machine=a29k-amd\n\t\t\t\tbasic_os=udi\n\t\t\t\t;;\n\t\t\tultra3)\n\t\t\t\tbasic_machine=a29k-nyu\n\t\t\t\tbasic_os=sym1\n\t\t\t\t;;\n\t\t\tv810 | necv810)\n\t\t\t\tbasic_machine=v810-nec\n\t\t\t\tbasic_os=none\n\t\t\t\t;;\n\t\t\tvaxv)\n\t\t\t\tbasic_machine=vax-dec\n\t\t\t\tbasic_os=sysv\n\t\t\t\t;;\n\t\t\tvms)\n\t\t\t\tbasic_machine=vax-dec\n\t\t\t\tbasic_os=vms\n\t\t\t\t;;\n\t\t\tvsta)\n\t\t\t\tbasic_machine=i386-pc\n\t\t\t\tbasic_os=vsta\n\t\t\t\t;;\n\t\t\tvxworks960)\n\t\t\t\tbasic_machine=i960-wrs\n\t\t\t\tbasic_os=vxworks\n\t\t\t\t;;\n\t\t\tvxworks68)\n\t\t\t\tbasic_machine=m68k-wrs\n\t\t\t\tbasic_os=vxworks\n\t\t\t\t;;\n\t\t\tvxworks29k)\n\t\t\t\tbasic_machine=a29k-wrs\n\t\t\t\tbasic_os=vxworks\n\t\t\t\t;;\n\t\t\txbox)\n\t\t\t\tbasic_machine=i686-pc\n\t\t\t\tbasic_os=mingw32\n\t\t\t\t;;\n\t\t\tymp)\n\t\t\t\tbasic_machine=ymp-cray\n\t\t\t\tbasic_os=unicos\n\t\t\t\t;;\n\t\t\t*)\n\t\t\t\tbasic_machine=$1\n\t\t\t\tbasic_os=\n\t\t\t\t;;\n\t\tesac\n\t\t;;\nesac\n\n# Decode 1-component or ad-hoc basic machines\ncase $basic_machine in\n\t# Here we handle the default manufacturer of certain CPU types.  
It is in\n\t# some cases the only manufacturer, in others, it is the most popular.\n\tw89k)\n\t\tcpu=hppa1.1\n\t\tvendor=winbond\n\t\t;;\n\top50n)\n\t\tcpu=hppa1.1\n\t\tvendor=oki\n\t\t;;\n\top60c)\n\t\tcpu=hppa1.1\n\t\tvendor=oki\n\t\t;;\n\tibm*)\n\t\tcpu=i370\n\t\tvendor=ibm\n\t\t;;\n\torion105)\n\t\tcpu=clipper\n\t\tvendor=highlevel\n\t\t;;\n\tmac | mpw | mac-mpw)\n\t\tcpu=m68k\n\t\tvendor=apple\n\t\t;;\n\tpmac | pmac-mpw)\n\t\tcpu=powerpc\n\t\tvendor=apple\n\t\t;;\n\n\t# Recognize the various machine names and aliases which stand\n\t# for a CPU type and a company and sometimes even an OS.\n\t3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc)\n\t\tcpu=m68000\n\t\tvendor=att\n\t\t;;\n\t3b*)\n\t\tcpu=we32k\n\t\tvendor=att\n\t\t;;\n\tbluegene*)\n\t\tcpu=powerpc\n\t\tvendor=ibm\n\t\tbasic_os=cnk\n\t\t;;\n\tdecsystem10* | dec10*)\n\t\tcpu=pdp10\n\t\tvendor=dec\n\t\tbasic_os=tops10\n\t\t;;\n\tdecsystem20* | dec20*)\n\t\tcpu=pdp10\n\t\tvendor=dec\n\t\tbasic_os=tops20\n\t\t;;\n\tdelta | 3300 | motorola-3300 | motorola-delta \\\n\t      | 3300-motorola | delta-motorola)\n\t\tcpu=m68k\n\t\tvendor=motorola\n\t\t;;\n\tdpx2*)\n\t\tcpu=m68k\n\t\tvendor=bull\n\t\tbasic_os=sysv3\n\t\t;;\n\tencore | umax | mmax)\n\t\tcpu=ns32k\n\t\tvendor=encore\n\t\t;;\n\telxsi)\n\t\tcpu=elxsi\n\t\tvendor=elxsi\n\t\tbasic_os=${basic_os:-bsd}\n\t\t;;\n\tfx2800)\n\t\tcpu=i860\n\t\tvendor=alliant\n\t\t;;\n\tgenix)\n\t\tcpu=ns32k\n\t\tvendor=ns\n\t\t;;\n\th3050r* | hiux*)\n\t\tcpu=hppa1.1\n\t\tvendor=hitachi\n\t\tbasic_os=hiuxwe2\n\t\t;;\n\thp3k9[0-9][0-9] | hp9[0-9][0-9])\n\t\tcpu=hppa1.0\n\t\tvendor=hp\n\t\t;;\n\thp9k2[0-9][0-9] | hp9k31[0-9])\n\t\tcpu=m68000\n\t\tvendor=hp\n\t\t;;\n\thp9k3[2-9][0-9])\n\t\tcpu=m68k\n\t\tvendor=hp\n\t\t;;\n\thp9k6[0-9][0-9] | hp6[0-9][0-9])\n\t\tcpu=hppa1.0\n\t\tvendor=hp\n\t\t;;\n\thp9k7[0-79][0-9] | hp7[0-79][0-9])\n\t\tcpu=hppa1.1\n\t\tvendor=hp\n\t\t;;\n\thp9k78[0-9] | hp78[0-9])\n\t\t# FIXME: really 
hppa2.0-hp\n\t\tcpu=hppa1.1\n\t\tvendor=hp\n\t\t;;\n\thp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893)\n\t\t# FIXME: really hppa2.0-hp\n\t\tcpu=hppa1.1\n\t\tvendor=hp\n\t\t;;\n\thp9k8[0-9][13679] | hp8[0-9][13679])\n\t\tcpu=hppa1.1\n\t\tvendor=hp\n\t\t;;\n\thp9k8[0-9][0-9] | hp8[0-9][0-9])\n\t\tcpu=hppa1.0\n\t\tvendor=hp\n\t\t;;\n\ti*86v32)\n\t\tcpu=$(echo \"$1\" | sed -e 's/86.*/86/')\n\t\tvendor=pc\n\t\tbasic_os=sysv32\n\t\t;;\n\ti*86v4*)\n\t\tcpu=$(echo \"$1\" | sed -e 's/86.*/86/')\n\t\tvendor=pc\n\t\tbasic_os=sysv4\n\t\t;;\n\ti*86v)\n\t\tcpu=$(echo \"$1\" | sed -e 's/86.*/86/')\n\t\tvendor=pc\n\t\tbasic_os=sysv\n\t\t;;\n\ti*86sol2)\n\t\tcpu=$(echo \"$1\" | sed -e 's/86.*/86/')\n\t\tvendor=pc\n\t\tbasic_os=solaris2\n\t\t;;\n\tj90 | j90-cray)\n\t\tcpu=j90\n\t\tvendor=cray\n\t\tbasic_os=${basic_os:-unicos}\n\t\t;;\n\tiris | iris4d)\n\t\tcpu=mips\n\t\tvendor=sgi\n\t\tcase $basic_os in\n\t\t    irix*)\n\t\t\t;;\n\t\t    *)\n\t\t\tbasic_os=irix4\n\t\t\t;;\n\t\tesac\n\t\t;;\n\tminiframe)\n\t\tcpu=m68000\n\t\tvendor=convergent\n\t\t;;\n\t*mint | mint[0-9]* | *MiNT | *MiNT[0-9]*)\n\t\tcpu=m68k\n\t\tvendor=atari\n\t\tbasic_os=mint\n\t\t;;\n\tnews-3600 | risc-news)\n\t\tcpu=mips\n\t\tvendor=sony\n\t\tbasic_os=newsos\n\t\t;;\n\tnext | m*-next)\n\t\tcpu=m68k\n\t\tvendor=next\n\t\tcase $basic_os in\n\t\t    openstep*)\n\t\t        ;;\n\t\t    nextstep*)\n\t\t\t;;\n\t\t    ns2*)\n\t\t      basic_os=nextstep2\n\t\t\t;;\n\t\t    *)\n\t\t      basic_os=nextstep3\n\t\t\t;;\n\t\tesac\n\t\t;;\n\tnp1)\n\t\tcpu=np1\n\t\tvendor=gould\n\t\t;;\n\top50n-* | 
op60c-*)\n\t\tcpu=hppa1.1\n\t\tvendor=oki\n\t\tbasic_os=proelf\n\t\t;;\n\tpa-hitachi)\n\t\tcpu=hppa1.1\n\t\tvendor=hitachi\n\t\tbasic_os=hiuxwe2\n\t\t;;\n\tpbd)\n\t\tcpu=sparc\n\t\tvendor=tti\n\t\t;;\n\tpbb)\n\t\tcpu=m68k\n\t\tvendor=tti\n\t\t;;\n\tpc532)\n\t\tcpu=ns32k\n\t\tvendor=pc532\n\t\t;;\n\tpn)\n\t\tcpu=pn\n\t\tvendor=gould\n\t\t;;\n\tpower)\n\t\tcpu=power\n\t\tvendor=ibm\n\t\t;;\n\tps2)\n\t\tcpu=i386\n\t\tvendor=ibm\n\t\t;;\n\trm[46]00)\n\t\tcpu=mips\n\t\tvendor=siemens\n\t\t;;\n\trtpc | rtpc-*)\n\t\tcpu=romp\n\t\tvendor=ibm\n\t\t;;\n\tsde)\n\t\tcpu=mipsisa32\n\t\tvendor=sde\n\t\tbasic_os=${basic_os:-elf}\n\t\t;;\n\tsimso-wrs)\n\t\tcpu=sparclite\n\t\tvendor=wrs\n\t\tbasic_os=vxworks\n\t\t;;\n\ttower | tower-32)\n\t\tcpu=m68k\n\t\tvendor=ncr\n\t\t;;\n\tvpp*|vx|vx-*)\n\t\tcpu=f301\n\t\tvendor=fujitsu\n\t\t;;\n\tw65)\n\t\tcpu=w65\n\t\tvendor=wdc\n\t\t;;\n\tw89k-*)\n\t\tcpu=hppa1.1\n\t\tvendor=winbond\n\t\tbasic_os=proelf\n\t\t;;\n\tnone)\n\t\tcpu=none\n\t\tvendor=none\n\t\t;;\n\tleon|leon[3-9])\n\t\tcpu=sparc\n\t\tvendor=$basic_machine\n\t\t;;\n\tleon-*|leon[3-9]-*)\n\t\tcpu=sparc\n\t\tvendor=$(echo \"$basic_machine\" | sed 's/-.*//')\n\t\t;;\n\n\t*-*)\n\t\t# shellcheck disable=SC2162\n\t\tIFS=\"-\" read cpu vendor <<EOF\n$basic_machine\nEOF\n\t\t;;\n\t# We use `pc' rather than `unknown'\n\t# because (1) that's what they normally are, and\n\t# (2) the word \"unknown\" tends to confuse beginning users.\n\ti*86 | x86_64)\n\t\tcpu=$basic_machine\n\t\tvendor=pc\n\t\t;;\n\t# These rules are duplicated from below for sake of the special case above;\n\t# i.e. 
things that normalized to x86 arches should also default to \"pc\"\n\tpc98)\n\t\tcpu=i386\n\t\tvendor=pc\n\t\t;;\n\tx64 | amd64)\n\t\tcpu=x86_64\n\t\tvendor=pc\n\t\t;;\n\t# Recognize the basic CPU types without company name.\n\t*)\n\t\tcpu=$basic_machine\n\t\tvendor=unknown\n\t\t;;\nesac\n\nunset -v basic_machine\n\n# Decode basic machines in the full and proper CPU-Company form.\ncase $cpu-$vendor in\n\t# Here we handle the default manufacturer of certain CPU types in canonical form. It is in\n\t# some cases the only manufacturer, in others, it is the most popular.\n\tcraynv-unknown)\n\t\tvendor=cray\n\t\tbasic_os=${basic_os:-unicosmp}\n\t\t;;\n\tc90-unknown | c90-cray)\n\t\tvendor=cray\n\t\tbasic_os=${basic_os:-unicos}\n\t\t;;\n\tfx80-unknown)\n\t\tvendor=alliant\n\t\t;;\n\tromp-unknown)\n\t\tvendor=ibm\n\t\t;;\n\tmmix-unknown)\n\t\tvendor=knuth\n\t\t;;\n\tmicroblaze-unknown | microblazeel-unknown)\n\t\tvendor=xilinx\n\t\t;;\n\trs6000-unknown)\n\t\tvendor=ibm\n\t\t;;\n\tvax-unknown)\n\t\tvendor=dec\n\t\t;;\n\tpdp11-unknown)\n\t\tvendor=dec\n\t\t;;\n\twe32k-unknown)\n\t\tvendor=att\n\t\t;;\n\tcydra-unknown)\n\t\tvendor=cydrome\n\t\t;;\n\ti370-ibm*)\n\t\tvendor=ibm\n\t\t;;\n\torion-unknown)\n\t\tvendor=highlevel\n\t\t;;\n\txps-unknown | xps100-unknown)\n\t\tcpu=xps100\n\t\tvendor=honeywell\n\t\t;;\n\n\t# Here we normalize CPU types with a missing or matching vendor\n\tdpx20-unknown | dpx20-bull)\n\t\tcpu=rs6000\n\t\tvendor=bull\n\t\tbasic_os=${basic_os:-bosx}\n\t\t;;\n\n\t# Here we normalize CPU types irrespective of the vendor\n\tamd64-*)\n\t\tcpu=x86_64\n\t\t;;\n\tblackfin-*)\n\t\tcpu=bfin\n\t\tbasic_os=linux\n\t\t;;\n\tc54x-*)\n\t\tcpu=tic54x\n\t\t;;\n\tc55x-*)\n\t\tcpu=tic55x\n\t\t;;\n\tc6x-*)\n\t\tcpu=tic6x\n\t\t;;\n\te500v[12]-*)\n\t\tcpu=powerpc\n\t\tbasic_os=${basic_os}\"spe\"\n\t\t;;\n\tmips3*-*)\n\t\tcpu=mips64\n\t\t;;\n\tms1-*)\n\t\tcpu=mt\n\t\t;;\n\tm68knommu-*)\n\t\tcpu=m68k\n\t\tbasic_os=linux\n\t\t;;\n\tm9s12z-* | m68hcs12z-* | hcs12z-* | 
s12z-*)\n\t\tcpu=s12z\n\t\t;;\n\topenrisc-*)\n\t\tcpu=or32\n\t\t;;\n\tparisc-*)\n\t\tcpu=hppa\n\t\tbasic_os=linux\n\t\t;;\n\tpentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*)\n\t\tcpu=i586\n\t\t;;\n\tpentiumpro-* | p6-* | 6x86-* | athlon-* | athalon_*-*)\n\t\tcpu=i686\n\t\t;;\n\tpentiumii-* | pentium2-* | pentiumiii-* | pentium3-*)\n\t\tcpu=i686\n\t\t;;\n\tpentium4-*)\n\t\tcpu=i786\n\t\t;;\n\tpc98-*)\n\t\tcpu=i386\n\t\t;;\n\tppc-* | ppcbe-*)\n\t\tcpu=powerpc\n\t\t;;\n\tppcle-* | powerpclittle-*)\n\t\tcpu=powerpcle\n\t\t;;\n\tppc64-*)\n\t\tcpu=powerpc64\n\t\t;;\n\tppc64le-* | powerpc64little-*)\n\t\tcpu=powerpc64le\n\t\t;;\n\tsb1-*)\n\t\tcpu=mipsisa64sb1\n\t\t;;\n\tsb1el-*)\n\t\tcpu=mipsisa64sb1el\n\t\t;;\n\tsh5e[lb]-*)\n\t\tcpu=$(echo \"$cpu\" | sed 's/^\\(sh.\\)e\\(.\\)$/\\1\\2e/')\n\t\t;;\n\tspur-*)\n\t\tcpu=spur\n\t\t;;\n\tstrongarm-* | thumb-*)\n\t\tcpu=arm\n\t\t;;\n\ttx39-*)\n\t\tcpu=mipstx39\n\t\t;;\n\ttx39el-*)\n\t\tcpu=mipstx39el\n\t\t;;\n\tx64-*)\n\t\tcpu=x86_64\n\t\t;;\n\txscale-* | xscalee[bl]-*)\n\t\tcpu=$(echo \"$cpu\" | sed 's/^xscale/arm/')\n\t\t;;\n\tarm64-*)\n\t\tcpu=aarch64\n\t\t;;\n\n\t# Recognize the canonical CPU Types that limit and/or modify the\n\t# company names they are paired with.\n\tcr16-*)\n\t\tbasic_os=${basic_os:-elf}\n\t\t;;\n\tcrisv32-* | etraxfs*-*)\n\t\tcpu=crisv32\n\t\tvendor=axis\n\t\t;;\n\tcris-* | etrax*-*)\n\t\tcpu=cris\n\t\tvendor=axis\n\t\t;;\n\tcrx-*)\n\t\tbasic_os=${basic_os:-elf}\n\t\t;;\n\tneo-tandem)\n\t\tcpu=neo\n\t\tvendor=tandem\n\t\t;;\n\tnse-tandem)\n\t\tcpu=nse\n\t\tvendor=tandem\n\t\t;;\n\tnsr-tandem)\n\t\tcpu=nsr\n\t\tvendor=tandem\n\t\t;;\n\tnsv-tandem)\n\t\tcpu=nsv\n\t\tvendor=tandem\n\t\t;;\n\tnsx-tandem)\n\t\tcpu=nsx\n\t\tvendor=tandem\n\t\t;;\n\tmipsallegrexel-sony)\n\t\tcpu=mipsallegrexel\n\t\tvendor=sony\n\t\t;;\n\ttile*-*)\n\t\tbasic_os=${basic_os:-linux-gnu}\n\t\t;;\n\n\t*)\n\t\t# Recognize the canonical CPU types that are allowed with any\n\t\t# company name.\n\t\tcase $cpu in\n\t\t\t1750a | 580 
\\\n\t\t\t| a29k \\\n\t\t\t| aarch64 | aarch64_be \\\n\t\t\t| abacus \\\n\t\t\t| alpha | alphaev[4-8] | alphaev56 | alphaev6[78] \\\n\t\t\t| alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] \\\n\t\t\t| alphapca5[67] | alpha64pca5[67] \\\n\t\t\t| am33_2.0 \\\n\t\t\t| amdgcn \\\n\t\t\t| arc | arceb \\\n\t\t\t| arm | arm[lb]e | arme[lb] | armv* \\\n\t\t\t| avr | avr32 \\\n\t\t\t| asmjs \\\n\t\t\t| ba \\\n\t\t\t| be32 | be64 \\\n\t\t\t| bfin | bpf | bs2000 \\\n\t\t\t| c[123]* | c30 | [cjt]90 | c4x \\\n\t\t\t| c8051 | clipper | craynv | csky | cydra \\\n\t\t\t| d10v | d30v | dlx | dsp16xx \\\n\t\t\t| e2k | elxsi | epiphany \\\n\t\t\t| f30[01] | f700 | fido | fr30 | frv | ft32 | fx80 \\\n\t\t\t| h8300 | h8500 \\\n\t\t\t| hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \\\n\t\t\t| hexagon \\\n\t\t\t| i370 | i*86 | i860 | i960 | ia16 | ia64 \\\n\t\t\t| ip2k | iq2000 \\\n\t\t\t| k1om \\\n\t\t\t| le32 | le64 \\\n\t\t\t| lm32 \\\n\t\t\t| loongarch32 | loongarch64 | loongarchx32 \\\n\t\t\t| m32c | m32r | m32rle \\\n\t\t\t| m5200 | m68000 | m680[012346]0 | m68360 | m683?2 | m68k \\\n\t\t\t| m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x \\\n\t\t\t| m88110 | m88k | maxq | mb | mcore | mep | metag \\\n\t\t\t| microblaze | microblazeel \\\n\t\t\t| mips | mipsbe | mipseb | mipsel | mipsle \\\n\t\t\t| mips16 \\\n\t\t\t| mips64 | mips64eb | mips64el \\\n\t\t\t| mips64octeon | mips64octeonel \\\n\t\t\t| mips64orion | mips64orionel \\\n\t\t\t| mips64r5900 | mips64r5900el \\\n\t\t\t| mips64vr | mips64vrel \\\n\t\t\t| mips64vr4100 | mips64vr4100el \\\n\t\t\t| mips64vr4300 | mips64vr4300el \\\n\t\t\t| mips64vr5000 | mips64vr5000el \\\n\t\t\t| mips64vr5900 | mips64vr5900el \\\n\t\t\t| mipsisa32 | mipsisa32el \\\n\t\t\t| mipsisa32r2 | mipsisa32r2el \\\n\t\t\t| mipsisa32r6 | mipsisa32r6el \\\n\t\t\t| mipsisa64 | mipsisa64el \\\n\t\t\t| mipsisa64r2 | mipsisa64r2el \\\n\t\t\t| mipsisa64r6 | mipsisa64r6el \\\n\t\t\t| mipsisa64sb1 | mipsisa64sb1el \\\n\t\t\t| mipsisa64sr71k | 
mipsisa64sr71kel \\\n\t\t\t| mipsr5900 | mipsr5900el \\\n\t\t\t| mipstx39 | mipstx39el \\\n\t\t\t| mmix \\\n\t\t\t| mn10200 | mn10300 \\\n\t\t\t| moxie \\\n\t\t\t| mt \\\n\t\t\t| msp430 \\\n\t\t\t| nds32 | nds32le | nds32be \\\n\t\t\t| nfp \\\n\t\t\t| nios | nios2 | nios2eb | nios2el \\\n\t\t\t| none | np1 | ns16k | ns32k | nvptx \\\n\t\t\t| open8 \\\n\t\t\t| or1k* \\\n\t\t\t| or32 \\\n\t\t\t| orion \\\n\t\t\t| picochip \\\n\t\t\t| pdp10 | pdp11 | pj | pjl | pn | power \\\n\t\t\t| powerpc | powerpc64 | powerpc64le | powerpcle | powerpcspe \\\n\t\t\t| pru \\\n\t\t\t| pyramid \\\n\t\t\t| riscv | riscv32 | riscv32be | riscv64 | riscv64be \\\n\t\t\t| rl78 | romp | rs6000 | rx \\\n\t\t\t| s390 | s390x \\\n\t\t\t| score \\\n\t\t\t| sh | shl \\\n\t\t\t| sh[1234] | sh[24]a | sh[24]ae[lb] | sh[23]e | she[lb] | sh[lb]e \\\n\t\t\t| sh[1234]e[lb] |  sh[12345][lb]e | sh[23]ele | sh64 | sh64le \\\n\t\t\t| sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet \\\n\t\t\t| sparclite \\\n\t\t\t| sparcv8 | sparcv9 | sparcv9b | sparcv9v | sv1 | sx* \\\n\t\t\t| spu \\\n\t\t\t| tahoe \\\n\t\t\t| thumbv7* \\\n\t\t\t| tic30 | tic4x | tic54x | tic55x | tic6x | tic80 \\\n\t\t\t| tron \\\n\t\t\t| ubicom32 \\\n\t\t\t| v70 | v850 | v850e | v850e1 | v850es | v850e2 | v850e2v3 \\\n\t\t\t| vax \\\n\t\t\t| visium \\\n\t\t\t| w65 \\\n\t\t\t| wasm32 | wasm64 \\\n\t\t\t| we32k \\\n\t\t\t| x86 | x86_64 | xc16x | xgate | xps100 \\\n\t\t\t| xstormy16 | xtensa* \\\n\t\t\t| ymp \\\n\t\t\t| z8k | z80)\n\t\t\t\t;;\n\n\t\t\t*)\n\t\t\t\techo Invalid configuration \\`\"$1\"\\': machine \\`\"$cpu-$vendor\"\\' not recognized 1>&2\n\t\t\t\texit 1\n\t\t\t\t;;\n\t\tesac\n\t\t;;\nesac\n\n# Here we canonicalize certain aliases for manufacturers.\ncase $vendor in\n\tdigital*)\n\t\tvendor=dec\n\t\t;;\n\tcommodore*)\n\t\tvendor=cbm\n\t\t;;\n\t*)\n\t\t;;\nesac\n\n# Decode manufacturer-specific aliases for certain operating systems.\n\nif test x$basic_os != x\nthen\n\n# First recognize some ad-hoc cases, or perhaps 
split kernel-os, or else just\n# set os.\ncase $basic_os in\n\tgnu/linux*)\n\t\tkernel=linux\n\t\tos=$(echo $basic_os | sed -e 's|gnu/linux|gnu|')\n\t\t;;\n\tos2-emx)\n\t\tkernel=os2\n\t\tos=$(echo $basic_os | sed -e 's|os2-emx|emx|')\n\t\t;;\n\tnto-qnx*)\n\t\tkernel=nto\n\t\tos=$(echo $basic_os | sed -e 's|nto-qnx|qnx|')\n\t\t;;\n\t*-*)\n\t\t# shellcheck disable=SC2162\n\t\tIFS=\"-\" read kernel os <<EOF\n$basic_os\nEOF\n\t\t;;\n\t# Default OS when just kernel was specified\n\tnto*)\n\t\tkernel=nto\n\t\tos=$(echo $basic_os | sed -e 's|nto|qnx|')\n\t\t;;\n\tlinux*)\n\t\tkernel=linux\n\t\tos=$(echo $basic_os | sed -e 's|linux|gnu|')\n\t\t;;\n\t*)\n\t\tkernel=\n\t\tos=$basic_os\n\t\t;;\nesac\n\n# Now, normalize the OS (knowing we just have one component, it's not a kernel,\n# etc.)\ncase $os in\n\t# First match some system type aliases that might get confused\n\t# with valid system types.\n\t# solaris* is a basic system type, with this one exception.\n\tauroraux)\n\t\tos=auroraux\n\t\t;;\n\tbluegene*)\n\t\tos=cnk\n\t\t;;\n\tsolaris1 | solaris1.*)\n\t\tos=$(echo $os | sed -e 's|solaris1|sunos4|')\n\t\t;;\n\tsolaris)\n\t\tos=solaris2\n\t\t;;\n\tunixware*)\n\t\tos=sysv4.2uw\n\t\t;;\n\t# es1800 is here to avoid being matched by es* (a different OS)\n\tes1800*)\n\t\tos=ose\n\t\t;;\n\t# Some version numbers need modification\n\tchorusos*)\n\t\tos=chorusos\n\t\t;;\n\tisc)\n\t\tos=isc2.2\n\t\t;;\n\tsco6)\n\t\tos=sco5v6\n\t\t;;\n\tsco5)\n\t\tos=sco3.2v5\n\t\t;;\n\tsco4)\n\t\tos=sco3.2v4\n\t\t;;\n\tsco3.2.[4-9]*)\n\t\tos=$(echo $os | sed -e 's/sco3.2./sco3.2v/')\n\t\t;;\n\tsco*v* | scout)\n\t\t# Don't match below\n\t\t;;\n\tsco*)\n\t\tos=sco3.2v2\n\t\t;;\n\tpsos*)\n\t\tos=psos\n\t\t;;\n\tqnx*)\n\t\tos=qnx\n\t\t;;\n\thiux*)\n\t\tos=hiuxwe2\n\t\t;;\n\tlynx*178)\n\t\tos=lynxos178\n\t\t;;\n\tlynx*5)\n\t\tos=lynxos5\n\t\t;;\n\tlynxos*)\n\t\t# don't get caught up in next wildcard\n\t\t;;\n\tlynx*)\n\t\tos=lynxos\n\t\t;;\n\tmac[0-9]*)\n\t\tos=$(echo \"$os\" | sed -e 
's|mac|macos|')\n\t\t;;\n\topened*)\n\t\tos=openedition\n\t\t;;\n\tos400*)\n\t\tos=os400\n\t\t;;\n\tsunos5*)\n\t\tos=$(echo \"$os\" | sed -e 's|sunos5|solaris2|')\n\t\t;;\n\tsunos6*)\n\t\tos=$(echo \"$os\" | sed -e 's|sunos6|solaris3|')\n\t\t;;\n\twince*)\n\t\tos=wince\n\t\t;;\n\tutek*)\n\t\tos=bsd\n\t\t;;\n\tdynix*)\n\t\tos=bsd\n\t\t;;\n\tacis*)\n\t\tos=aos\n\t\t;;\n\tatheos*)\n\t\tos=atheos\n\t\t;;\n\tsyllable*)\n\t\tos=syllable\n\t\t;;\n\t386bsd)\n\t\tos=bsd\n\t\t;;\n\tctix* | uts*)\n\t\tos=sysv\n\t\t;;\n\tnova*)\n\t\tos=rtmk-nova\n\t\t;;\n\tns2)\n\t\tos=nextstep2\n\t\t;;\n\t# Preserve the version number of sinix5.\n\tsinix5.*)\n\t\tos=$(echo $os | sed -e 's|sinix|sysv|')\n\t\t;;\n\tsinix*)\n\t\tos=sysv4\n\t\t;;\n\ttpf*)\n\t\tos=tpf\n\t\t;;\n\ttriton*)\n\t\tos=sysv3\n\t\t;;\n\toss*)\n\t\tos=sysv3\n\t\t;;\n\tsvr4*)\n\t\tos=sysv4\n\t\t;;\n\tsvr3)\n\t\tos=sysv3\n\t\t;;\n\tsysvr4)\n\t\tos=sysv4\n\t\t;;\n\tose*)\n\t\tos=ose\n\t\t;;\n\t*mint | mint[0-9]* | *MiNT | MiNT[0-9]*)\n\t\tos=mint\n\t\t;;\n\tdicos*)\n\t\tos=dicos\n\t\t;;\n\tpikeos*)\n\t\t# Until real need of OS specific support for\n\t\t# particular features comes up, bare metal\n\t\t# configurations are quite functional.\n\t\tcase $cpu in\n\t\t    arm*)\n\t\t\tos=eabi\n\t\t\t;;\n\t\t    *)\n\t\t\tos=elf\n\t\t\t;;\n\t\tesac\n\t\t;;\n\t*)\n\t\t# No normalization, but not necessarily accepted, that comes below.\n\t\t;;\nesac\n\nelse\n\n# Here we handle the default operating systems that come with various machines.\n# The value should be what the vendor currently ships out the door with their\n# machine or put another way, the most popular os provided with the machine.\n\n# Note that if you're going to try to match \"-MANUFACTURER\" here (say,\n# \"-sun\"), then you have to tell the case statement up towards the top\n# that MANUFACTURER isn't an operating system.  
Otherwise, code above\n# will signal an error saying that MANUFACTURER isn't an operating\n# system, and we'll never get to this point.\n\nkernel=\ncase $cpu-$vendor in\n\tscore-*)\n\t\tos=elf\n\t\t;;\n\tspu-*)\n\t\tos=elf\n\t\t;;\n\t*-acorn)\n\t\tos=riscix1.2\n\t\t;;\n\tarm*-rebel)\n\t\tkernel=linux\n\t\tos=gnu\n\t\t;;\n\tarm*-semi)\n\t\tos=aout\n\t\t;;\n\tc4x-* | tic4x-*)\n\t\tos=coff\n\t\t;;\n\tc8051-*)\n\t\tos=elf\n\t\t;;\n\tclipper-intergraph)\n\t\tos=clix\n\t\t;;\n\thexagon-*)\n\t\tos=elf\n\t\t;;\n\ttic54x-*)\n\t\tos=coff\n\t\t;;\n\ttic55x-*)\n\t\tos=coff\n\t\t;;\n\ttic6x-*)\n\t\tos=coff\n\t\t;;\n\t# This must come before the *-dec entry.\n\tpdp10-*)\n\t\tos=tops20\n\t\t;;\n\tpdp11-*)\n\t\tos=none\n\t\t;;\n\t*-dec | vax-*)\n\t\tos=ultrix4.2\n\t\t;;\n\tm68*-apollo)\n\t\tos=domain\n\t\t;;\n\ti386-sun)\n\t\tos=sunos4.0.2\n\t\t;;\n\tm68000-sun)\n\t\tos=sunos3\n\t\t;;\n\tm68*-cisco)\n\t\tos=aout\n\t\t;;\n\tmep-*)\n\t\tos=elf\n\t\t;;\n\tmips*-cisco)\n\t\tos=elf\n\t\t;;\n\tmips*-*)\n\t\tos=elf\n\t\t;;\n\tor32-*)\n\t\tos=coff\n\t\t;;\n\t*-tti)\t# must be before sparc entry or we get the wrong os.\n\t\tos=sysv3\n\t\t;;\n\tsparc-* | *-sun)\n\t\tos=sunos4.1.1\n\t\t;;\n\tpru-*)\n\t\tos=elf\n\t\t;;\n\t*-be)\n\t\tos=beos\n\t\t;;\n\t*-ibm)\n\t\tos=aix\n\t\t;;\n\t*-knuth)\n\t\tos=mmixware\n\t\t;;\n\t*-wec)\n\t\tos=proelf\n\t\t;;\n\t*-winbond)\n\t\tos=proelf\n\t\t;;\n\t*-oki)\n\t\tos=proelf\n\t\t;;\n\t*-hp)\n\t\tos=hpux\n\t\t;;\n\t*-hitachi)\n\t\tos=hiux\n\t\t;;\n\ti860-* | *-att | *-ncr | *-altos | *-motorola | 
*-convergent)\n\t\tos=sysv\n\t\t;;\n\t*-cbm)\n\t\tos=amigaos\n\t\t;;\n\t*-dg)\n\t\tos=dgux\n\t\t;;\n\t*-dolphin)\n\t\tos=sysv3\n\t\t;;\n\tm68k-ccur)\n\t\tos=rtu\n\t\t;;\n\tm88k-omron*)\n\t\tos=luna\n\t\t;;\n\t*-next)\n\t\tos=nextstep\n\t\t;;\n\t*-sequent)\n\t\tos=ptx\n\t\t;;\n\t*-crds)\n\t\tos=unos\n\t\t;;\n\t*-ns)\n\t\tos=genix\n\t\t;;\n\ti370-*)\n\t\tos=mvs\n\t\t;;\n\t*-gould)\n\t\tos=sysv\n\t\t;;\n\t*-highlevel)\n\t\tos=bsd\n\t\t;;\n\t*-encore)\n\t\tos=bsd\n\t\t;;\n\t*-sgi)\n\t\tos=irix\n\t\t;;\n\t*-siemens)\n\t\tos=sysv4\n\t\t;;\n\t*-masscomp)\n\t\tos=rtu\n\t\t;;\n\tf30[01]-fujitsu | f700-fujitsu)\n\t\tos=uxpv\n\t\t;;\n\t*-rom68k)\n\t\tos=coff\n\t\t;;\n\t*-*bug)\n\t\tos=coff\n\t\t;;\n\t*-apple)\n\t\tos=macos\n\t\t;;\n\t*-atari*)\n\t\tos=mint\n\t\t;;\n\t*-wrs)\n\t\tos=vxworks\n\t\t;;\n\t*)\n\t\tos=none\n\t\t;;\nesac\n\nfi\n\n# Now, validate our (potentially fixed-up) OS.\ncase $os in\n\t# Sometimes we do \"kernel-abi\", so those need to count as OSes.\n\tmusl* | newlib* | uclibc*)\n\t\t;;\n\t# Likewise for \"kernel-libc\"\n\teabi* | gnueabi*)\n\t\t;;\n\t# Now accept the basic system types.\n\t# The portable systems comes first.\n\t# Each alternative MUST end in a * to match a version number.\n\tgnu* | android* | bsd* | mach* | minix* | genix* | ultrix* | irix* \\\n\t     | *vms* | esix* | aix* | cnk* | sunos | sunos[34]* \\\n\t     | hpux* | unos* | osf* | luna* | dgux* | auroraux* | solaris* \\\n\t     | sym* |  plan9* | psp* | sim* | xray* | os68k* | v88r* \\\n\t     | hiux* | abug | nacl* | netware* | windows* \\\n\t     | os9* | macos* | osx* | ios* \\\n\t     | mpw* | magic* | mmixware* | mon960* | lnews* \\\n\t     | amigaos* | amigados* | msdos* | newsos* | unicos* | aof* \\\n\t     | aos* | aros* | cloudabi* | sortix* | twizzler* \\\n\t     | nindy* | vxsim* | vxworks* | ebmon* | hms* | mvs* \\\n\t     | clix* | riscos* | uniplus* | iris* | isc* | rtu* | xenix* \\\n\t     | mirbsd* | netbsd* | dicos* | openedition* | ose* \\\n\t     | bitrig* | openbsd* 
| solidbsd* | libertybsd* | os108* \\\n\t     | ekkobsd* | freebsd* | riscix* | lynxos* | os400* \\\n\t     | bosx* | nextstep* | cxux* | aout* | elf* | oabi* \\\n\t     | ptx* | coff* | ecoff* | winnt* | domain* | vsta* \\\n\t     | udi* | lites* | ieee* | go32* | aux* | hcos* \\\n\t     | chorusrdb* | cegcc* | glidix* \\\n\t     | cygwin* | msys* | pe* | moss* | proelf* | rtems* \\\n\t     | midipix* | mingw32* | mingw64* | mint* \\\n\t     | uxpv* | beos* | mpeix* | udk* | moxiebox* \\\n\t     | interix* | uwin* | mks* | rhapsody* | darwin* \\\n\t     | openstep* | oskit* | conix* | pw32* | nonstopux* \\\n\t     | storm-chaos* | tops10* | tenex* | tops20* | its* \\\n\t     | os2* | vos* | palmos* | uclinux* | nucleus* | morphos* \\\n\t     | scout* | superux* | sysv* | rtmk* | tpf* | windiss* \\\n\t     | powermax* | dnix* | nx6 | nx7 | sei* | dragonfly* \\\n\t     | skyos* | haiku* | rdos* | toppers* | drops* | es* \\\n\t     | onefs* | tirtos* | phoenix* | fuchsia* | redox* | bme* \\\n\t     | midnightbsd* | amdhsa* | unleashed* | emscripten* | wasi* \\\n\t     | nsk* | powerunix* | genode* | zvmoe* | qnx* | emx*)\n\t\t;;\n\t# This one is extra strict with allowed versions\n\tsco3.2v2 | sco3.2v[4-9]* | sco5v6*)\n\t\t# Don't forget version if it is 3.2v4 or newer.\n\t\t;;\n\tnone)\n\t\t;;\n\t*)\n\t\techo Invalid configuration \\`\"$1\"\\': OS \\`\"$os\"\\' not recognized 1>&2\n\t\texit 1\n\t\t;;\nesac\n\n# As a final step for OS-related things, validate the OS-kernel combination\n# (given a valid OS), if there is a kernel.\ncase $kernel-$os in\n\tlinux-gnu* | linux-dietlibc* | linux-android* | linux-newlib* | linux-musl* | linux-uclibc* )\n\t\t;;\n\tuclinux-uclibc* )\n\t\t;;\n\t-dietlibc* | -newlib* | -musl* | -uclibc* )\n\t\t# These are just libc implementations, not actual OSes, and thus\n\t\t# require a kernel.\n\t\techo \"Invalid configuration \\`$1': libc \\`$os' needs explicit kernel.\" 1>&2\n\t\texit 1\n\t\t;;\n\tkfreebsd*-gnu* | 
kopensolaris*-gnu*)\n\t\t;;\n\tnto-qnx*)\n\t\t;;\n\tos2-emx)\n\t\t;;\n\t*-eabi* | *-gnueabi*)\n\t\t;;\n\t-*)\n\t\t# Blank kernel with real OS is always fine.\n\t\t;;\n\t*-*)\n\t\techo \"Invalid configuration \\`$1': Kernel \\`$kernel' not known to work with OS \\`$os'.\" 1>&2\n\t\texit 1\n\t\t;;\nesac\n\n# Here we handle the case where we know the os, and the CPU type, but not the\n# manufacturer.  We pick the logical manufacturer.\ncase $vendor in\n\tunknown)\n\t\tcase $cpu-$os in\n\t\t\t*-riscix*)\n\t\t\t\tvendor=acorn\n\t\t\t\t;;\n\t\t\t*-sunos*)\n\t\t\t\tvendor=sun\n\t\t\t\t;;\n\t\t\t*-cnk* | *-aix*)\n\t\t\t\tvendor=ibm\n\t\t\t\t;;\n\t\t\t*-beos*)\n\t\t\t\tvendor=be\n\t\t\t\t;;\n\t\t\t*-hpux*)\n\t\t\t\tvendor=hp\n\t\t\t\t;;\n\t\t\t*-mpeix*)\n\t\t\t\tvendor=hp\n\t\t\t\t;;\n\t\t\t*-hiux*)\n\t\t\t\tvendor=hitachi\n\t\t\t\t;;\n\t\t\t*-unos*)\n\t\t\t\tvendor=crds\n\t\t\t\t;;\n\t\t\t*-dgux*)\n\t\t\t\tvendor=dg\n\t\t\t\t;;\n\t\t\t*-luna*)\n\t\t\t\tvendor=omron\n\t\t\t\t;;\n\t\t\t*-genix*)\n\t\t\t\tvendor=ns\n\t\t\t\t;;\n\t\t\t*-clix*)\n\t\t\t\tvendor=intergraph\n\t\t\t\t;;\n\t\t\t*-mvs* | *-opened*)\n\t\t\t\tvendor=ibm\n\t\t\t\t;;\n\t\t\t*-os400*)\n\t\t\t\tvendor=ibm\n\t\t\t\t;;\n\t\t\ts390-* | s390x-*)\n\t\t\t\tvendor=ibm\n\t\t\t\t;;\n\t\t\t*-ptx*)\n\t\t\t\tvendor=sequent\n\t\t\t\t;;\n\t\t\t*-tpf*)\n\t\t\t\tvendor=ibm\n\t\t\t\t;;\n\t\t\t*-vxsim* | *-vxworks* | *-windiss*)\n\t\t\t\tvendor=wrs\n\t\t\t\t;;\n\t\t\t*-aux*)\n\t\t\t\tvendor=apple\n\t\t\t\t;;\n\t\t\t*-hms*)\n\t\t\t\tvendor=hitachi\n\t\t\t\t;;\n\t\t\t*-mpw* | *-macos*)\n\t\t\t\tvendor=apple\n\t\t\t\t;;\n\t\t\t*-*mint | *-mint[0-9]* | *-*MiNT | *-MiNT[0-9]*)\n\t\t\t\tvendor=atari\n\t\t\t\t;;\n\t\t\t*-vos*)\n\t\t\t\tvendor=stratus\n\t\t\t\t;;\n\t\tesac\n\t\t;;\nesac\n\necho \"$cpu-$vendor-${kernel:+$kernel-}$os\"\nexit\n\n# Local variables:\n# eval: (add-hook 'before-save-hook 'time-stamp)\n# time-stamp-start: \"timestamp='\"\n# time-stamp-format: \"%:y-%02m-%02d\"\n# time-stamp-end: \"'\"\n# End:\n"
  },
  {
    "path": "deps/jemalloc/build-aux/install-sh",
    "content": "#! /bin/sh\n#\n# install - install a program, script, or datafile\n# This comes from X11R5 (mit/util/scripts/install.sh).\n#\n# Copyright 1991 by the Massachusetts Institute of Technology\n#\n# Permission to use, copy, modify, distribute, and sell this software and its\n# documentation for any purpose is hereby granted without fee, provided that\n# the above copyright notice appear in all copies and that both that\n# copyright notice and this permission notice appear in supporting\n# documentation, and that the name of M.I.T. not be used in advertising or\n# publicity pertaining to distribution of the software without specific,\n# written prior permission.  M.I.T. makes no representations about the\n# suitability of this software for any purpose.  It is provided \"as is\"\n# without express or implied warranty.\n#\n# Calling this script install-sh is preferred over install.sh, to prevent\n# `make' implicit rules from creating a file called install from it\n# when there is no Makefile.\n#\n# This script is compatible with the BSD install script, but was written\n# from scratch.  It can only install one file at a time, a restriction\n# shared with many OS's install programs.\n\n\n# set DOITPROG to echo to test this script\n\n# Don't use :- since 4.3BSD and earlier shells don't like it.\ndoit=\"${DOITPROG-}\"\n\n\n# put in absolute paths if you don't have them in your path; or use env. 
vars.\n\nmvprog=\"${MVPROG-mv}\"\ncpprog=\"${CPPROG-cp}\"\nchmodprog=\"${CHMODPROG-chmod}\"\nchownprog=\"${CHOWNPROG-chown}\"\nchgrpprog=\"${CHGRPPROG-chgrp}\"\nstripprog=\"${STRIPPROG-strip}\"\nrmprog=\"${RMPROG-rm}\"\nmkdirprog=\"${MKDIRPROG-mkdir}\"\n\ntransformbasename=\"\"\ntransform_arg=\"\"\ninstcmd=\"$mvprog\"\nchmodcmd=\"$chmodprog 0755\"\nchowncmd=\"\"\nchgrpcmd=\"\"\nstripcmd=\"\"\nrmcmd=\"$rmprog -f\"\nmvcmd=\"$mvprog\"\nsrc=\"\"\ndst=\"\"\ndir_arg=\"\"\n\nwhile [ x\"$1\" != x ]; do\n    case $1 in\n\t-c) instcmd=\"$cpprog\"\n\t    shift\n\t    continue;;\n\n\t-d) dir_arg=true\n\t    shift\n\t    continue;;\n\n\t-m) chmodcmd=\"$chmodprog $2\"\n\t    shift\n\t    shift\n\t    continue;;\n\n\t-o) chowncmd=\"$chownprog $2\"\n\t    shift\n\t    shift\n\t    continue;;\n\n\t-g) chgrpcmd=\"$chgrpprog $2\"\n\t    shift\n\t    shift\n\t    continue;;\n\n\t-s) stripcmd=\"$stripprog\"\n\t    shift\n\t    continue;;\n\n\t-t=*) transformarg=`echo $1 | sed 's/-t=//'`\n\t    shift\n\t    continue;;\n\n\t-b=*) transformbasename=`echo $1 | sed 's/-b=//'`\n\t    shift\n\t    continue;;\n\n\t*)  if [ x\"$src\" = x ]\n\t    then\n\t\tsrc=$1\n\t    else\n\t\t# this colon is to work around a 386BSD /bin/sh bug\n\t\t:\n\t\tdst=$1\n\t    fi\n\t    shift\n\t    continue;;\n    esac\ndone\n\nif [ x\"$src\" = x ]\nthen\n\techo \"install:\tno input file specified\"\n\texit 1\nelse\n\ttrue\nfi\n\nif [ x\"$dir_arg\" != x ]; then\n\tdst=$src\n\tsrc=\"\"\n\t\n\tif [ -d $dst ]; then\n\t\tinstcmd=:\n\telse\n\t\tinstcmd=mkdir\n\tfi\nelse\n\n# Waiting for this to be detected by the \"$instcmd $src $dsttmp\" command\n# might cause directories to be created, which would be especially bad \n# if $src (and thus $dsttmp) contains '*'.\n\n\tif [ -f $src -o -d $src ]\n\tthen\n\t\ttrue\n\telse\n\t\techo \"install:  $src does not exist\"\n\t\texit 1\n\tfi\n\t\n\tif [ x\"$dst\" = x ]\n\tthen\n\t\techo \"install:\tno destination specified\"\n\t\texit 1\n\telse\n\t\ttrue\n\tfi\n\n# If destination is 
a directory, append the input filename; if your system\n# does not like double slashes in filenames, you may need to add some logic\n\n\tif [ -d $dst ]\n\tthen\n\t\tdst=\"$dst\"/`basename $src`\n\telse\n\t\ttrue\n\tfi\nfi\n\n## this sed command emulates the dirname command\ndstdir=`echo $dst | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'`\n\n# Make sure that the destination directory exists.\n#  this part is taken from Noah Friedman's mkinstalldirs script\n\n# Skip lots of stat calls in the usual case.\nif [ ! -d \"$dstdir\" ]; then\ndefaultIFS='\t\n'\nIFS=\"${IFS-${defaultIFS}}\"\n\noIFS=\"${IFS}\"\n# Some sh's can't handle IFS=/ for some reason.\nIFS='%'\nset - `echo ${dstdir} | sed -e 's@/@%@g' -e 's@^%@/@'`\nIFS=\"${oIFS}\"\n\npathcomp=''\n\nwhile [ $# -ne 0 ] ; do\n\tpathcomp=\"${pathcomp}${1}\"\n\tshift\n\n\tif [ ! -d \"${pathcomp}\" ] ;\n        then\n\t\t$mkdirprog \"${pathcomp}\"\n\telse\n\t\ttrue\n\tfi\n\n\tpathcomp=\"${pathcomp}/\"\ndone\nfi\n\nif [ x\"$dir_arg\" != x ]\nthen\n\t$doit $instcmd $dst &&\n\n\tif [ x\"$chowncmd\" != x ]; then $doit $chowncmd $dst; else true ; fi &&\n\tif [ x\"$chgrpcmd\" != x ]; then $doit $chgrpcmd $dst; else true ; fi &&\n\tif [ x\"$stripcmd\" != x ]; then $doit $stripcmd $dst; else true ; fi &&\n\tif [ x\"$chmodcmd\" != x ]; then $doit $chmodcmd $dst; else true ; fi\nelse\n\n# If we're going to rename the final executable, determine the name now.\n\n\tif [ x\"$transformarg\" = x ] \n\tthen\n\t\tdstfile=`basename $dst`\n\telse\n\t\tdstfile=`basename $dst $transformbasename | \n\t\t\tsed $transformarg`$transformbasename\n\tfi\n\n# don't allow the sed command to completely eliminate the filename\n\n\tif [ x\"$dstfile\" = x ] \n\tthen\n\t\tdstfile=`basename $dst`\n\telse\n\t\ttrue\n\tfi\n\n# Make a temp file name in the proper directory.\n\n\tdsttmp=$dstdir/#inst.$$#\n\n# Move or copy the file name to the temp name\n\n\t$doit $instcmd $src $dsttmp &&\n\n\ttrap \"rm -f ${dsttmp}\" 0 &&\n\n# and set any options; do chmod last to preserve 
setuid bits\n\n# If any of these fail, we abort the whole thing.  If we want to\n# ignore errors from any of these, just make sure not to ignore\n# errors from the above \"$doit $instcmd $src $dsttmp\" command.\n\n\tif [ x\"$chowncmd\" != x ]; then $doit $chowncmd $dsttmp; else true;fi &&\n\tif [ x\"$chgrpcmd\" != x ]; then $doit $chgrpcmd $dsttmp; else true;fi &&\n\tif [ x\"$stripcmd\" != x ]; then $doit $stripcmd $dsttmp; else true;fi &&\n\tif [ x\"$chmodcmd\" != x ]; then $doit $chmodcmd $dsttmp; else true;fi &&\n\n# Now rename the file to the real destination.\n\n\t$doit $rmcmd -f $dstdir/$dstfile &&\n\t$doit $mvcmd $dsttmp $dstdir/$dstfile \n\nfi &&\n\n\nexit 0\n"
  },
  {
    "path": "deps/jemalloc/config.stamp.in",
    "content": ""
  },
  {
    "path": "deps/jemalloc/configure",
    "content": "#! /bin/sh\n# Guess values for system-dependent variables and create Makefiles.\n# Generated by GNU Autoconf 2.71.\n#\n#\n# Copyright (C) 1992-1996, 1998-2017, 2020-2021 Free Software Foundation,\n# Inc.\n#\n#\n# This configure script is free software; the Free Software Foundation\n# gives unlimited permission to copy, distribute and modify it.\n## -------------------- ##\n## M4sh Initialization. ##\n## -------------------- ##\n\n# Be more Bourne compatible\nDUALCASE=1; export DUALCASE # for MKS sh\nas_nop=:\nif test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1\nthen :\n  emulate sh\n  NULLCMD=:\n  # Pre-4.2 versions of Zsh do word splitting on ${1+\"$@\"}, which\n  # is contrary to our usage.  Disable this feature.\n  alias -g '${1+\"$@\"}'='\"$@\"'\n  setopt NO_GLOB_SUBST\nelse $as_nop\n  case `(set -o) 2>/dev/null` in #(\n  *posix*) :\n    set -o posix ;; #(\n  *) :\n     ;;\nesac\nfi\n\n\n\n# Reset variables that may have inherited troublesome values from\n# the environment.\n\n# IFS needs to be set, to space, tab, and newline, in precisely that order.\n# (If _AS_PATH_WALK were called with IFS unset, it would have the\n# side effect of setting IFS to empty, thus disabling word splitting.)\n# Quoting is to prevent editors from complaining about space-tab.\nas_nl='\n'\nexport as_nl\nIFS=\" \"\"\t$as_nl\"\n\nPS1='$ '\nPS2='> '\nPS4='+ '\n\n# Ensure predictable behavior from utilities with locale-dependent output.\nLC_ALL=C\nexport LC_ALL\nLANGUAGE=C\nexport LANGUAGE\n\n# We cannot yet rely on \"unset\" to work, but we need these variables\n# to be unset--not just set to an empty or harmless value--now, to\n# avoid bugs in old shells (e.g. pre-3.0 UWIN ksh).  This construct\n# also avoids known problems related to \"unset\" and subshell syntax\n# in other old shells (e.g. 
bash 2.01 and pdksh 5.2.14).\nfor as_var in BASH_ENV ENV MAIL MAILPATH CDPATH\ndo eval test \\${$as_var+y} \\\n  && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || :\ndone\n\n# Ensure that fds 0, 1, and 2 are open.\nif (exec 3>&0) 2>/dev/null; then :; else exec 0</dev/null; fi\nif (exec 3>&1) 2>/dev/null; then :; else exec 1>/dev/null; fi\nif (exec 3>&2)            ; then :; else exec 2>/dev/null; fi\n\n# The user is always right.\nif ${PATH_SEPARATOR+false} :; then\n  PATH_SEPARATOR=:\n  (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && {\n    (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 ||\n      PATH_SEPARATOR=';'\n  }\nfi\n\n\n# Find who we are.  Look in the path if we contain no directory separator.\nas_myself=\ncase $0 in #((\n  *[\\\\/]* ) as_myself=$0 ;;\n  *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    test -r \"$as_dir$0\" && as_myself=$as_dir$0 && break\n  done\nIFS=$as_save_IFS\n\n     ;;\nesac\n# We did not find ourselves, most probably we were run as `sh COMMAND'\n# in which case we are not to be found in the path.\nif test \"x$as_myself\" = x; then\n  as_myself=$0\nfi\nif test ! 
-f \"$as_myself\"; then\n  printf \"%s\\n\" \"$as_myself: error: cannot find myself; rerun with an absolute file name\" >&2\n  exit 1\nfi\n\n\n# Use a proper internal environment variable to ensure we don't fall\n  # into an infinite loop, continuously re-executing ourselves.\n  if test x\"${_as_can_reexec}\" != xno && test \"x$CONFIG_SHELL\" != x; then\n    _as_can_reexec=no; export _as_can_reexec;\n    # We cannot yet assume a decent shell, so we have to provide a\n# neutralization value for shells without unset; and this also\n# works around shells that cannot unset nonexistent variables.\n# Preserve -v and -x to the replacement shell.\nBASH_ENV=/dev/null\nENV=/dev/null\n(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV\ncase $- in # ((((\n  *v*x* | *x*v* ) as_opts=-vx ;;\n  *v* ) as_opts=-v ;;\n  *x* ) as_opts=-x ;;\n  * ) as_opts= ;;\nesac\nexec $CONFIG_SHELL $as_opts \"$as_myself\" ${1+\"$@\"}\n# Admittedly, this is quite paranoid, since all the known shells bail\n# out after a failed `exec'.\nprintf \"%s\\n\" \"$0: could not re-execute with $CONFIG_SHELL\" >&2\nexit 255\n  fi\n  # We don't want this to propagate to other subprocesses.\n          { _as_can_reexec=; unset _as_can_reexec;}\nif test \"x$CONFIG_SHELL\" = x; then\n  as_bourne_compatible=\"as_nop=:\nif test \\${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1\nthen :\n  emulate sh\n  NULLCMD=:\n  # Pre-4.2 versions of Zsh do word splitting on \\${1+\\\"\\$@\\\"}, which\n  # is contrary to our usage.  
Disable this feature.\n  alias -g '\\${1+\\\"\\$@\\\"}'='\\\"\\$@\\\"'\n  setopt NO_GLOB_SUBST\nelse \\$as_nop\n  case \\`(set -o) 2>/dev/null\\` in #(\n  *posix*) :\n    set -o posix ;; #(\n  *) :\n     ;;\nesac\nfi\n\"\n  as_required=\"as_fn_return () { (exit \\$1); }\nas_fn_success () { as_fn_return 0; }\nas_fn_failure () { as_fn_return 1; }\nas_fn_ret_success () { return 0; }\nas_fn_ret_failure () { return 1; }\n\nexitcode=0\nas_fn_success || { exitcode=1; echo as_fn_success failed.; }\nas_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; }\nas_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; }\nas_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; }\nif ( set x; as_fn_ret_success y && test x = \\\"\\$1\\\" )\nthen :\n\nelse \\$as_nop\n  exitcode=1; echo positional parameters were not saved.\nfi\ntest x\\$exitcode = x0 || exit 1\nblah=\\$(echo \\$(echo blah))\ntest x\\\"\\$blah\\\" = xblah || exit 1\ntest -x / || exit 1\"\n  as_suggested=\"  as_lineno_1=\";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested\" as_lineno_1a=\\$LINENO\n  as_lineno_2=\";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested\" as_lineno_2a=\\$LINENO\n  eval 'test \\\"x\\$as_lineno_1'\\$as_run'\\\" != \\\"x\\$as_lineno_2'\\$as_run'\\\" &&\n  test \\\"x\\`expr \\$as_lineno_1'\\$as_run' + 1\\`\\\" = \\\"x\\$as_lineno_2'\\$as_run'\\\"' || exit 1\ntest \\$(( 1 + 1 )) = 2 || exit 1\"\n  if (eval \"$as_required\") 2>/dev/null\nthen :\n  as_have_required=yes\nelse $as_nop\n  as_have_required=no\nfi\n  if test x$as_have_required = xyes && (eval \"$as_suggested\") 2>/dev/null\nthen :\n\nelse $as_nop\n  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nas_found=false\nfor as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n  as_found=:\n  case $as_dir in #(\n\t /*)\n\t   for as_base in sh bash ksh sh5; do\n\t     # Try only 
shells that exist, to save several forks.\n\t     as_shell=$as_dir$as_base\n\t     if { test -f \"$as_shell\" || test -f \"$as_shell.exe\"; } &&\n\t\t    as_run=a \"$as_shell\" -c \"$as_bourne_compatible\"\"$as_required\" 2>/dev/null\nthen :\n  CONFIG_SHELL=$as_shell as_have_required=yes\n\t\t   if as_run=a \"$as_shell\" -c \"$as_bourne_compatible\"\"$as_suggested\" 2>/dev/null\nthen :\n  break 2\nfi\nfi\n\t   done;;\n       esac\n  as_found=false\ndone\nIFS=$as_save_IFS\nif $as_found\nthen :\n\nelse $as_nop\n  if { test -f \"$SHELL\" || test -f \"$SHELL.exe\"; } &&\n\t      as_run=a \"$SHELL\" -c \"$as_bourne_compatible\"\"$as_required\" 2>/dev/null\nthen :\n  CONFIG_SHELL=$SHELL as_have_required=yes\nfi\nfi\n\n\n      if test \"x$CONFIG_SHELL\" != x\nthen :\n  export CONFIG_SHELL\n             # We cannot yet assume a decent shell, so we have to provide a\n# neutralization value for shells without unset; and this also\n# works around shells that cannot unset nonexistent variables.\n# Preserve -v and -x to the replacement shell.\nBASH_ENV=/dev/null\nENV=/dev/null\n(unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV\ncase $- in # ((((\n  *v*x* | *x*v* ) as_opts=-vx ;;\n  *v* ) as_opts=-v ;;\n  *x* ) as_opts=-x ;;\n  * ) as_opts= ;;\nesac\nexec $CONFIG_SHELL $as_opts \"$as_myself\" ${1+\"$@\"}\n# Admittedly, this is quite paranoid, since all the known shells bail\n# out after a failed `exec'.\nprintf \"%s\\n\" \"$0: could not re-execute with $CONFIG_SHELL\" >&2\nexit 255\nfi\n\n    if test x$as_have_required = xno\nthen :\n  printf \"%s\\n\" \"$0: This script requires a shell more modern than all\"\n  printf \"%s\\n\" \"$0: the shells that I found on your system.\"\n  if test ${ZSH_VERSION+y} ; then\n    printf \"%s\\n\" \"$0: In particular, zsh $ZSH_VERSION has bugs and should\"\n    printf \"%s\\n\" \"$0: be upgraded to zsh 4.3.4 or later.\"\n  else\n    printf \"%s\\n\" \"$0: Please tell bug-autoconf@gnu.org about your system,\n$0: including any error possibly 
output before this\n$0: message. Then install a modern shell, or manually run\n$0: the script under such a shell if you do have one.\"\n  fi\n  exit 1\nfi\nfi\nfi\nSHELL=${CONFIG_SHELL-/bin/sh}\nexport SHELL\n# Unset more variables known to interfere with behavior of common tools.\nCLICOLOR_FORCE= GREP_OPTIONS=\nunset CLICOLOR_FORCE GREP_OPTIONS\n\n## --------------------- ##\n## M4sh Shell Functions. ##\n## --------------------- ##\n# as_fn_unset VAR\n# ---------------\n# Portably unset VAR.\nas_fn_unset ()\n{\n  { eval $1=; unset $1;}\n}\nas_unset=as_fn_unset\n\n\n# as_fn_set_status STATUS\n# -----------------------\n# Set $? to STATUS, without forking.\nas_fn_set_status ()\n{\n  return $1\n} # as_fn_set_status\n\n# as_fn_exit STATUS\n# -----------------\n# Exit the shell with STATUS, even in a \"trap 0\" or \"set -e\" context.\nas_fn_exit ()\n{\n  set +e\n  as_fn_set_status $1\n  exit $1\n} # as_fn_exit\n# as_fn_nop\n# ---------\n# Do nothing but, unlike \":\", preserve the value of $?.\nas_fn_nop ()\n{\n  return $?\n}\nas_nop=as_fn_nop\n\n# as_fn_mkdir_p\n# -------------\n# Create \"$as_dir\" as a directory, including parents if necessary.\nas_fn_mkdir_p ()\n{\n\n  case $as_dir in #(\n  -*) as_dir=./$as_dir;;\n  esac\n  test -d \"$as_dir\" || eval $as_mkdir_p || {\n    as_dirs=\n    while :; do\n      case $as_dir in #(\n      *\\'*) as_qdir=`printf \"%s\\n\" \"$as_dir\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"`;; #'(\n      *) as_qdir=$as_dir;;\n      esac\n      as_dirs=\"'$as_qdir' $as_dirs\"\n      as_dir=`$as_dirname -- \"$as_dir\" ||\n$as_expr X\"$as_dir\" : 'X\\(.*[^/]\\)//*[^/][^/]*/*$' \\| \\\n\t X\"$as_dir\" : 'X\\(//\\)[^/]' \\| \\\n\t X\"$as_dir\" : 'X\\(//\\)$' \\| \\\n\t X\"$as_dir\" : 'X\\(/\\)' \\| . 
2>/dev/null ||\nprintf \"%s\\n\" X\"$as_dir\" |\n    sed '/^X\\(.*[^/]\\)\\/\\/*[^/][^/]*\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)[^/].*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n      test -d \"$as_dir\" && break\n    done\n    test -z \"$as_dirs\" || eval \"mkdir $as_dirs\"\n  } || test -d \"$as_dir\" || as_fn_error $? \"cannot create directory $as_dir\"\n\n\n} # as_fn_mkdir_p\n\n# as_fn_executable_p FILE\n# -----------------------\n# Test if FILE is an executable regular file.\nas_fn_executable_p ()\n{\n  test -f \"$1\" && test -x \"$1\"\n} # as_fn_executable_p\n# as_fn_append VAR VALUE\n# ----------------------\n# Append the text in VALUE to the end of the definition contained in VAR. Take\n# advantage of any shell optimizations that allow amortized linear growth over\n# repeated appends, instead of the typical quadratic growth present in naive\n# implementations.\nif (eval \"as_var=1; as_var+=2; test x\\$as_var = x12\") 2>/dev/null\nthen :\n  eval 'as_fn_append ()\n  {\n    eval $1+=\\$2\n  }'\nelse $as_nop\n  as_fn_append ()\n  {\n    eval $1=\\$$1\\$2\n  }\nfi # as_fn_append\n\n# as_fn_arith ARG...\n# ------------------\n# Perform arithmetic evaluation on the ARGs, and store the result in the\n# global $as_val. Take advantage of shells that can avoid forks. The arguments\n# must be portable across $(()) and expr.\nif (eval \"test \\$(( 1 + 1 )) = 2\") 2>/dev/null\nthen :\n  eval 'as_fn_arith ()\n  {\n    as_val=$(( $* ))\n  }'\nelse $as_nop\n  as_fn_arith ()\n  {\n    as_val=`expr \"$@\" || test $? -eq 1`\n  }\nfi # as_fn_arith\n\n# as_fn_nop\n# ---------\n# Do nothing but, unlike \":\", preserve the value of $?.\nas_fn_nop ()\n{\n  return $?\n}\nas_nop=as_fn_nop\n\n# as_fn_error STATUS ERROR [LINENO LOG_FD]\n# ----------------------------------------\n# Output \"`basename $0`: error: ERROR\" to stderr. 
If LINENO and LOG_FD are\n# provided, also output the error to LOG_FD, referencing LINENO. Then exit the\n# script with STATUS, using 1 if that was 0.\nas_fn_error ()\n{\n  as_status=$1; test $as_status -eq 0 && as_status=1\n  if test \"$4\"; then\n    as_lineno=${as_lineno-\"$3\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: $2\" >&$4\n  fi\n  printf \"%s\\n\" \"$as_me: error: $2\" >&2\n  as_fn_exit $as_status\n} # as_fn_error\n\nif expr a : '\\(a\\)' >/dev/null 2>&1 &&\n   test \"X`expr 00001 : '.*\\(...\\)'`\" = X001; then\n  as_expr=expr\nelse\n  as_expr=false\nfi\n\nif (basename -- /) >/dev/null 2>&1 && test \"X`basename -- / 2>&1`\" = \"X/\"; then\n  as_basename=basename\nelse\n  as_basename=false\nfi\n\nif (as_dir=`dirname -- /` && test \"X$as_dir\" = X/) >/dev/null 2>&1; then\n  as_dirname=dirname\nelse\n  as_dirname=false\nfi\n\nas_me=`$as_basename -- \"$0\" ||\n$as_expr X/\"$0\" : '.*/\\([^/][^/]*\\)/*$' \\| \\\n\t X\"$0\" : 'X\\(//\\)$' \\| \\\n\t X\"$0\" : 'X\\(/\\)' \\| . 2>/dev/null ||\nprintf \"%s\\n\" X/\"$0\" |\n    sed '/^.*\\/\\([^/][^/]*\\)\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\/\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\/\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n\n# Avoid depending upon Character Ranges.\nas_cr_letters='abcdefghijklmnopqrstuvwxyz'\nas_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nas_cr_Letters=$as_cr_letters$as_cr_LETTERS\nas_cr_digits='0123456789'\nas_cr_alnum=$as_cr_Letters$as_cr_digits\n\n\n  as_lineno_1=$LINENO as_lineno_1a=$LINENO\n  as_lineno_2=$LINENO as_lineno_2a=$LINENO\n  eval 'test \"x$as_lineno_1'$as_run'\" != \"x$as_lineno_2'$as_run'\" &&\n  test \"x`expr $as_lineno_1'$as_run' + 1`\" = \"x$as_lineno_2'$as_run'\"' || {\n  # Blame Lee E. McMahon (1931-1989) for sed's syntax.  
:-)\n  sed -n '\n    p\n    /[$]LINENO/=\n  ' <$as_myself |\n    sed '\n      s/[$]LINENO.*/&-/\n      t lineno\n      b\n      :lineno\n      N\n      :loop\n      s/[$]LINENO\\([^'$as_cr_alnum'_].*\\n\\)\\(.*\\)/\\2\\1\\2/\n      t loop\n      s/-\\n.*//\n    ' >$as_me.lineno &&\n  chmod +x \"$as_me.lineno\" ||\n    { printf \"%s\\n\" \"$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell\" >&2; as_fn_exit 1; }\n\n  # If we had to re-execute with $CONFIG_SHELL, we're ensured to have\n  # already done that, so ensure we don't try to do so again and fall\n  # in an infinite loop.  This has already happened in practice.\n  _as_can_reexec=no; export _as_can_reexec\n  # Don't try to exec as it changes $[0], causing all sort of problems\n  # (the dirname of $[0] is not the place where we might find the\n  # original and so on.  Autoconf is especially sensitive to this).\n  . \"./$as_me.lineno\"\n  # Exit status is that of the last command.\n  exit\n}\n\n\n# Determine whether it's possible to make 'echo' print without a newline.\n# These variables are no longer used directly by Autoconf, but are AC_SUBSTed\n# for compatibility with existing Makefiles.\nECHO_C= ECHO_N= ECHO_T=\ncase `echo -n x` in #(((((\n-n*)\n  case `echo 'xy\\c'` in\n  *c*) ECHO_T='\t';;\t# ECHO_T is single tab character.\n  xy)  ECHO_C='\\c';;\n  *)   echo `echo ksh88 bug on AIX 6.1` > /dev/null\n       ECHO_T='\t';;\n  esac;;\n*)\n  ECHO_N='-n';;\nesac\n\n# For backward compatibility with old third-party macros, we provide\n# the shell variables $as_echo and $as_echo_n.  
New code should use\n# AS_ECHO([\"message\"]) and AS_ECHO_N([\"message\"]), respectively.\nas_echo='printf %s\\n'\nas_echo_n='printf %s'\n\n\nrm -f conf$$ conf$$.exe conf$$.file\nif test -d conf$$.dir; then\n  rm -f conf$$.dir/conf$$.file\nelse\n  rm -f conf$$.dir\n  mkdir conf$$.dir 2>/dev/null\nfi\nif (echo >conf$$.file) 2>/dev/null; then\n  if ln -s conf$$.file conf$$ 2>/dev/null; then\n    as_ln_s='ln -s'\n    # ... but there are two gotchas:\n    # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.\n    # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.\n    # In both cases, we have to default to `cp -pR'.\n    ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe ||\n      as_ln_s='cp -pR'\n  elif ln conf$$.file conf$$ 2>/dev/null; then\n    as_ln_s=ln\n  else\n    as_ln_s='cp -pR'\n  fi\nelse\n  as_ln_s='cp -pR'\nfi\nrm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file\nrmdir conf$$.dir 2>/dev/null\n\nif mkdir -p . 2>/dev/null; then\n  as_mkdir_p='mkdir -p \"$as_dir\"'\nelse\n  test -d ./-p && rmdir ./-p\n  as_mkdir_p=false\nfi\n\nas_test_x='test -x'\nas_executable_p=as_fn_executable_p\n\n# Sed expression to map a string onto a valid CPP name.\nas_tr_cpp=\"eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'\"\n\n# Sed expression to map a string onto a valid variable name.\nas_tr_sh=\"eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'\"\n\n\ntest -n \"$DJDIR\" || exec 7<&0 </dev/null\nexec 6>&1\n\n# Name of the host.\n# hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status,\n# so uname gets run too.\nac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q`\n\n#\n# Initializations.\n#\nac_default_prefix=/usr/local\nac_clean_files=\nac_config_libobj_dir=.\nLIBOBJS=\ncross_compiling=no\nsubdirs=\nMFLAGS=\nMAKEFLAGS=\n\n# Identity of this 
package.\nPACKAGE_NAME=''\nPACKAGE_TARNAME=''\nPACKAGE_VERSION=''\nPACKAGE_STRING=''\nPACKAGE_BUGREPORT=''\nPACKAGE_URL=''\n\nac_unique_file=\"Makefile.in\"\n# Factoring default headers for most tests.\nac_includes_default=\"\\\n#include <stddef.h>\n#ifdef HAVE_STDIO_H\n# include <stdio.h>\n#endif\n#ifdef HAVE_STDLIB_H\n# include <stdlib.h>\n#endif\n#ifdef HAVE_STRING_H\n# include <string.h>\n#endif\n#ifdef HAVE_INTTYPES_H\n# include <inttypes.h>\n#endif\n#ifdef HAVE_STDINT_H\n# include <stdint.h>\n#endif\n#ifdef HAVE_STRINGS_H\n# include <strings.h>\n#endif\n#ifdef HAVE_SYS_TYPES_H\n# include <sys/types.h>\n#endif\n#ifdef HAVE_SYS_STAT_H\n# include <sys/stat.h>\n#endif\n#ifdef HAVE_UNISTD_H\n# include <unistd.h>\n#endif\"\n\nac_header_c_list=\nac_subst_vars='LTLIBOBJS\nLIBOBJS\ncfgoutputs_out\ncfgoutputs_in\ncfghdrs_out\ncfghdrs_in\nenable_initial_exec_tls\nenable_zone_allocator\nenable_tls\nenable_lazy_lock\nlibdl\nenable_uaf_detection\nenable_opt_size_checks\nenable_opt_safety_checks\nenable_readlinkat\nenable_log\nenable_cache_oblivious\nenable_xmalloc\nenable_utrace\nenable_fill\nenable_prof\nenable_experimental_smallocx\nenable_stats\nenable_debug\nje_\ninstall_suffix\nprivate_namespace\nJEMALLOC_CPREFIX\nJEMALLOC_PREFIX\nenable_static\nenable_shared\nenable_doc\nAUTOCONF\nLD\nRANLIB\nINSTALL_DATA\nINSTALL_SCRIPT\nINSTALL_PROGRAM\nenable_autogen\nRPATH_EXTRA\nLM\nCC_MM\nDUMP_SYMS\nAROUT\nARFLAGS\nMKLIB\nTEST_LD_MODE\nLDTARGET\nCTARGET\nPIC_CFLAGS\nSOREV\nEXTRA_LDFLAGS\nDSO_LDFLAGS\nlink_whole_archive\nlibprefix\nexe\na\no\nimportlib\nso\nLD_PRELOAD_VAR\nRPATH\nabi\njemalloc_version_gid\njemalloc_version_nrev\njemalloc_version_bugfix\njemalloc_version_minor\njemalloc_version_major\njemalloc_version\nAWK\nNM\nAR\nhost_os\nhost_vendor\nhost_cpu\nhost\nbuild_os\nbuild_vendor\nbuild_cpu\nbuild\nEXTRA_CXXFLAGS\nSPECIFIED_CXXFLAGS\nCONFIGURE_CXXFLAGS\nenable_cxx\nHAVE_CXX14\nHAVE_CXX17\nac_ct_CXX\nCXXFLAGS\nCXX\nCPP\nEXTRA_CFLAGS\nSPECIFIED_CFLAGS\nCONFIGURE_CFLAGS\n
OBJEXT\nEXEEXT\nac_ct_CC\nCPPFLAGS\nLDFLAGS\nCFLAGS\nCC\nXSLROOT\nXSLTPROC\nMANDIR\nDATADIR\nLIBDIR\nINCLUDEDIR\nBINDIR\nPREFIX\nabs_objroot\nobjroot\nabs_srcroot\nsrcroot\nrev\nCONFIG\ntarget_alias\nhost_alias\nbuild_alias\nLIBS\nECHO_T\nECHO_N\nECHO_C\nDEFS\nmandir\nlocaledir\nlibdir\npsdir\npdfdir\ndvidir\nhtmldir\ninfodir\ndocdir\noldincludedir\nincludedir\nrunstatedir\nlocalstatedir\nsharedstatedir\nsysconfdir\ndatadir\ndatarootdir\nlibexecdir\nsbindir\nbindir\nprogram_transform_name\nprefix\nexec_prefix\nPACKAGE_URL\nPACKAGE_BUGREPORT\nPACKAGE_STRING\nPACKAGE_VERSION\nPACKAGE_TARNAME\nPACKAGE_NAME\nPATH_SEPARATOR\nSHELL'\nac_subst_files=''\nac_user_opts='\nenable_option_checking\nwith_xslroot\nenable_cxx\nwith_lg_vaddr\nwith_version\nwith_rpath\nenable_autogen\nenable_doc\nenable_shared\nenable_static\nwith_mangling\nwith_jemalloc_prefix\nwith_export\nwith_private_namespace\nwith_install_suffix\nwith_malloc_conf\nenable_debug\nenable_stats\nenable_experimental_smallocx\nenable_prof\nenable_prof_libunwind\nwith_static_libunwind\nenable_prof_libgcc\nenable_prof_gcc\nenable_fill\nenable_utrace\nenable_xmalloc\nenable_cache_oblivious\nenable_log\nenable_readlinkat\nenable_opt_safety_checks\nenable_opt_size_checks\nenable_uaf_detection\nwith_lg_quantum\nwith_lg_slab_maxregs\nwith_lg_page\nwith_lg_hugepage\nenable_libdl\nenable_syscall\nenable_lazy_lock\nenable_zone_allocator\nenable_initial_exec_tls\n'\n      ac_precious_vars='build_alias\nhost_alias\ntarget_alias\nCC\nCFLAGS\nLDFLAGS\nLIBS\nCPPFLAGS\nCPP\nCXX\nCXXFLAGS\nCCC'\n\n\n# Initialize some variables set by options.\nac_init_help=\nac_init_version=false\nac_unrecognized_opts=\nac_unrecognized_sep=\n# The variables have the same names as the options, with\n# dashes changed to 
underlines.\ncache_file=/dev/null\nexec_prefix=NONE\nno_create=\nno_recursion=\nprefix=NONE\nprogram_prefix=NONE\nprogram_suffix=NONE\nprogram_transform_name=s,x,x,\nsilent=\nsite=\nsrcdir=\nverbose=\nx_includes=NONE\nx_libraries=NONE\n\n# Installation directory options.\n# These are left unexpanded so users can \"make install exec_prefix=/foo\"\n# and all the variables that are supposed to be based on exec_prefix\n# by default will actually change.\n# Use braces instead of parens because sh, perl, etc. also accept them.\n# (The list follows the same order as the GNU Coding Standards.)\nbindir='${exec_prefix}/bin'\nsbindir='${exec_prefix}/sbin'\nlibexecdir='${exec_prefix}/libexec'\ndatarootdir='${prefix}/share'\ndatadir='${datarootdir}'\nsysconfdir='${prefix}/etc'\nsharedstatedir='${prefix}/com'\nlocalstatedir='${prefix}/var'\nrunstatedir='${localstatedir}/run'\nincludedir='${prefix}/include'\noldincludedir='/usr/include'\ndocdir='${datarootdir}/doc/${PACKAGE}'\ninfodir='${datarootdir}/info'\nhtmldir='${docdir}'\ndvidir='${docdir}'\npdfdir='${docdir}'\npsdir='${docdir}'\nlibdir='${exec_prefix}/lib'\nlocaledir='${datarootdir}/locale'\nmandir='${datarootdir}/man'\n\nac_prev=\nac_dashdash=\nfor ac_option\ndo\n  # If the previous option needs an argument, assign it.\n  if test -n \"$ac_prev\"; then\n    eval $ac_prev=\\$ac_option\n    ac_prev=\n    continue\n  fi\n\n  case $ac_option in\n  *=?*) ac_optarg=`expr \"X$ac_option\" : '[^=]*=\\(.*\\)'` ;;\n  *=)   ac_optarg= ;;\n  *)    ac_optarg=yes ;;\n  esac\n\n  case $ac_dashdash$ac_option in\n  --)\n    ac_dashdash=yes ;;\n\n  -bindir | --bindir | --bindi | --bind | --bin | --bi)\n    ac_prev=bindir ;;\n  -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*)\n    bindir=$ac_optarg ;;\n\n  -build | --build | --buil | --bui | --bu)\n    ac_prev=build_alias ;;\n  -build=* | --build=* | --buil=* | --bui=* | --bu=*)\n    build_alias=$ac_optarg ;;\n\n  -cache-file | --cache-file | --cache-fil | --cache-fi \\\n  | 
--cache-f | --cache- | --cache | --cach | --cac | --ca | --c)\n    ac_prev=cache_file ;;\n  -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \\\n  | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*)\n    cache_file=$ac_optarg ;;\n\n  --config-cache | -C)\n    cache_file=config.cache ;;\n\n  -datadir | --datadir | --datadi | --datad)\n    ac_prev=datadir ;;\n  -datadir=* | --datadir=* | --datadi=* | --datad=*)\n    datadir=$ac_optarg ;;\n\n  -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \\\n  | --dataroo | --dataro | --datar)\n    ac_prev=datarootdir ;;\n  -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \\\n  | --dataroot=* | --dataroo=* | --dataro=* | --datar=*)\n    datarootdir=$ac_optarg ;;\n\n  -disable-* | --disable-*)\n    ac_useropt=`expr \"x$ac_option\" : 'x-*disable-\\(.*\\)'`\n    # Reject names that are not valid shell variable names.\n    expr \"x$ac_useropt\" : \".*[^-+._$as_cr_alnum]\" >/dev/null &&\n      as_fn_error $? 
\"invalid feature name: \\`$ac_useropt'\"\n    ac_useropt_orig=$ac_useropt\n    ac_useropt=`printf \"%s\\n\" \"$ac_useropt\" | sed 's/[-+.]/_/g'`\n    case $ac_user_opts in\n      *\"\n\"enable_$ac_useropt\"\n\"*) ;;\n      *) ac_unrecognized_opts=\"$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig\"\n\t ac_unrecognized_sep=', ';;\n    esac\n    eval enable_$ac_useropt=no ;;\n\n  -docdir | --docdir | --docdi | --doc | --do)\n    ac_prev=docdir ;;\n  -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*)\n    docdir=$ac_optarg ;;\n\n  -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv)\n    ac_prev=dvidir ;;\n  -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*)\n    dvidir=$ac_optarg ;;\n\n  -enable-* | --enable-*)\n    ac_useropt=`expr \"x$ac_option\" : 'x-*enable-\\([^=]*\\)'`\n    # Reject names that are not valid shell variable names.\n    expr \"x$ac_useropt\" : \".*[^-+._$as_cr_alnum]\" >/dev/null &&\n      as_fn_error $? \"invalid feature name: \\`$ac_useropt'\"\n    ac_useropt_orig=$ac_useropt\n    ac_useropt=`printf \"%s\\n\" \"$ac_useropt\" | sed 's/[-+.]/_/g'`\n    case $ac_user_opts in\n      *\"\n\"enable_$ac_useropt\"\n\"*) ;;\n      *) ac_unrecognized_opts=\"$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig\"\n\t ac_unrecognized_sep=', ';;\n    esac\n    eval enable_$ac_useropt=\\$ac_optarg ;;\n\n  -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \\\n  | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \\\n  | --exec | --exe | --ex)\n    ac_prev=exec_prefix ;;\n  -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \\\n  | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \\\n  | --exec=* | --exe=* | --ex=*)\n    exec_prefix=$ac_optarg ;;\n\n  -gas | --gas | --ga | --g)\n    # Obsolete; use --with-gas.\n    with_gas=yes ;;\n\n  -help | --help | --hel | --he | -h)\n    ac_init_help=long ;;\n  -help=r* | --help=r* | --hel=r* | --he=r* | -hr*)\n    
ac_init_help=recursive ;;\n  -help=s* | --help=s* | --hel=s* | --he=s* | -hs*)\n    ac_init_help=short ;;\n\n  -host | --host | --hos | --ho)\n    ac_prev=host_alias ;;\n  -host=* | --host=* | --hos=* | --ho=*)\n    host_alias=$ac_optarg ;;\n\n  -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht)\n    ac_prev=htmldir ;;\n  -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \\\n  | --ht=*)\n    htmldir=$ac_optarg ;;\n\n  -includedir | --includedir | --includedi | --included | --include \\\n  | --includ | --inclu | --incl | --inc)\n    ac_prev=includedir ;;\n  -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \\\n  | --includ=* | --inclu=* | --incl=* | --inc=*)\n    includedir=$ac_optarg ;;\n\n  -infodir | --infodir | --infodi | --infod | --info | --inf)\n    ac_prev=infodir ;;\n  -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*)\n    infodir=$ac_optarg ;;\n\n  -libdir | --libdir | --libdi | --libd)\n    ac_prev=libdir ;;\n  -libdir=* | --libdir=* | --libdi=* | --libd=*)\n    libdir=$ac_optarg ;;\n\n  -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \\\n  | --libexe | --libex | --libe)\n    ac_prev=libexecdir ;;\n  -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \\\n  | --libexe=* | --libex=* | --libe=*)\n    libexecdir=$ac_optarg ;;\n\n  -localedir | --localedir | --localedi | --localed | --locale)\n    ac_prev=localedir ;;\n  -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*)\n    localedir=$ac_optarg ;;\n\n  -localstatedir | --localstatedir | --localstatedi | --localstated \\\n  | --localstate | --localstat | --localsta | --localst | --locals)\n    ac_prev=localstatedir ;;\n  -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \\\n  | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*)\n    localstatedir=$ac_optarg ;;\n\n  -mandir | --mandir | --mandi | --mand | --man | --ma 
| --m)\n    ac_prev=mandir ;;\n  -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*)\n    mandir=$ac_optarg ;;\n\n  -nfp | --nfp | --nf)\n    # Obsolete; use --without-fp.\n    with_fp=no ;;\n\n  -no-create | --no-create | --no-creat | --no-crea | --no-cre \\\n  | --no-cr | --no-c | -n)\n    no_create=yes ;;\n\n  -no-recursion | --no-recursion | --no-recursio | --no-recursi \\\n  | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r)\n    no_recursion=yes ;;\n\n  -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \\\n  | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \\\n  | --oldin | --oldi | --old | --ol | --o)\n    ac_prev=oldincludedir ;;\n  -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \\\n  | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \\\n  | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*)\n    oldincludedir=$ac_optarg ;;\n\n  -prefix | --prefix | --prefi | --pref | --pre | --pr | --p)\n    ac_prev=prefix ;;\n  -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*)\n    prefix=$ac_optarg ;;\n\n  -program-prefix | --program-prefix | --program-prefi | --program-pref \\\n  | --program-pre | --program-pr | --program-p)\n    ac_prev=program_prefix ;;\n  -program-prefix=* | --program-prefix=* | --program-prefi=* \\\n  | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*)\n    program_prefix=$ac_optarg ;;\n\n  -program-suffix | --program-suffix | --program-suffi | --program-suff \\\n  | --program-suf | --program-su | --program-s)\n    ac_prev=program_suffix ;;\n  -program-suffix=* | --program-suffix=* | --program-suffi=* \\\n  | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*)\n    program_suffix=$ac_optarg ;;\n\n  -program-transform-name | --program-transform-name \\\n  | --program-transform-nam | --program-transform-na \\\n  | --program-transform-n | --program-transform- \\\n  | 
--program-transform | --program-transfor \\\n  | --program-transfo | --program-transf \\\n  | --program-trans | --program-tran \\\n  | --progr-tra | --program-tr | --program-t)\n    ac_prev=program_transform_name ;;\n  -program-transform-name=* | --program-transform-name=* \\\n  | --program-transform-nam=* | --program-transform-na=* \\\n  | --program-transform-n=* | --program-transform-=* \\\n  | --program-transform=* | --program-transfor=* \\\n  | --program-transfo=* | --program-transf=* \\\n  | --program-trans=* | --program-tran=* \\\n  | --progr-tra=* | --program-tr=* | --program-t=*)\n    program_transform_name=$ac_optarg ;;\n\n  -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd)\n    ac_prev=pdfdir ;;\n  -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*)\n    pdfdir=$ac_optarg ;;\n\n  -psdir | --psdir | --psdi | --psd | --ps)\n    ac_prev=psdir ;;\n  -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*)\n    psdir=$ac_optarg ;;\n\n  -q | -quiet | --quiet | --quie | --qui | --qu | --q \\\n  | -silent | --silent | --silen | --sile | --sil)\n    silent=yes ;;\n\n  -runstatedir | --runstatedir | --runstatedi | --runstated \\\n  | --runstate | --runstat | --runsta | --runst | --runs \\\n  | --run | --ru | --r)\n    ac_prev=runstatedir ;;\n  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \\\n  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \\\n  | --run=* | --ru=* | --r=*)\n    runstatedir=$ac_optarg ;;\n\n  -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)\n    ac_prev=sbindir ;;\n  -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \\\n  | --sbi=* | --sb=*)\n    sbindir=$ac_optarg ;;\n\n  -sharedstatedir | --sharedstatedir | --sharedstatedi \\\n  | --sharedstated | --sharedstate | --sharedstat | --sharedsta \\\n  | --sharedst | --shareds | --shared | --share | --shar \\\n  | --sha | --sh)\n    ac_prev=sharedstatedir ;;\n  -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \\\n  
| --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \\\n  | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \\\n  | --sha=* | --sh=*)\n    sharedstatedir=$ac_optarg ;;\n\n  -site | --site | --sit)\n    ac_prev=site ;;\n  -site=* | --site=* | --sit=*)\n    site=$ac_optarg ;;\n\n  -srcdir | --srcdir | --srcdi | --srcd | --src | --sr)\n    ac_prev=srcdir ;;\n  -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*)\n    srcdir=$ac_optarg ;;\n\n  -sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \\\n  | --syscon | --sysco | --sysc | --sys | --sy)\n    ac_prev=sysconfdir ;;\n  -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \\\n  | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*)\n    sysconfdir=$ac_optarg ;;\n\n  -target | --target | --targe | --targ | --tar | --ta | --t)\n    ac_prev=target_alias ;;\n  -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*)\n    target_alias=$ac_optarg ;;\n\n  -v | -verbose | --verbose | --verbos | --verbo | --verb)\n    verbose=yes ;;\n\n  -version | --version | --versio | --versi | --vers | -V)\n    ac_init_version=: ;;\n\n  -with-* | --with-*)\n    ac_useropt=`expr \"x$ac_option\" : 'x-*with-\\([^=]*\\)'`\n    # Reject names that are not valid shell variable names.\n    expr \"x$ac_useropt\" : \".*[^-+._$as_cr_alnum]\" >/dev/null &&\n      as_fn_error $? 
\"invalid package name: \\`$ac_useropt'\"\n    ac_useropt_orig=$ac_useropt\n    ac_useropt=`printf \"%s\\n\" \"$ac_useropt\" | sed 's/[-+.]/_/g'`\n    case $ac_user_opts in\n      *\"\n\"with_$ac_useropt\"\n\"*) ;;\n      *) ac_unrecognized_opts=\"$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig\"\n\t ac_unrecognized_sep=', ';;\n    esac\n    eval with_$ac_useropt=\\$ac_optarg ;;\n\n  -without-* | --without-*)\n    ac_useropt=`expr \"x$ac_option\" : 'x-*without-\\(.*\\)'`\n    # Reject names that are not valid shell variable names.\n    expr \"x$ac_useropt\" : \".*[^-+._$as_cr_alnum]\" >/dev/null &&\n      as_fn_error $? \"invalid package name: \\`$ac_useropt'\"\n    ac_useropt_orig=$ac_useropt\n    ac_useropt=`printf \"%s\\n\" \"$ac_useropt\" | sed 's/[-+.]/_/g'`\n    case $ac_user_opts in\n      *\"\n\"with_$ac_useropt\"\n\"*) ;;\n      *) ac_unrecognized_opts=\"$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig\"\n\t ac_unrecognized_sep=', ';;\n    esac\n    eval with_$ac_useropt=no ;;\n\n  --x)\n    # Obsolete; use --with-x.\n    with_x=yes ;;\n\n  -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \\\n  | --x-incl | --x-inc | --x-in | --x-i)\n    ac_prev=x_includes ;;\n  -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \\\n  | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*)\n    x_includes=$ac_optarg ;;\n\n  -x-libraries | --x-libraries | --x-librarie | --x-librari \\\n  | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l)\n    ac_prev=x_libraries ;;\n  -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \\\n  | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*)\n    x_libraries=$ac_optarg ;;\n\n  -*) as_fn_error $? 
\"unrecognized option: \\`$ac_option'\nTry \\`$0 --help' for more information\"\n    ;;\n\n  *=*)\n    ac_envvar=`expr \"x$ac_option\" : 'x\\([^=]*\\)='`\n    # Reject names that are not valid shell variable names.\n    case $ac_envvar in #(\n      '' | [0-9]* | *[!_$as_cr_alnum]* )\n      as_fn_error $? \"invalid variable name: \\`$ac_envvar'\" ;;\n    esac\n    eval $ac_envvar=\\$ac_optarg\n    export $ac_envvar ;;\n\n  *)\n    # FIXME: should be removed in autoconf 3.0.\n    printf \"%s\\n\" \"$as_me: WARNING: you should use --build, --host, --target\" >&2\n    expr \"x$ac_option\" : \".*[^-._$as_cr_alnum]\" >/dev/null &&\n      printf \"%s\\n\" \"$as_me: WARNING: invalid host type: $ac_option\" >&2\n    : \"${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}\"\n    ;;\n\n  esac\ndone\n\nif test -n \"$ac_prev\"; then\n  ac_option=--`echo $ac_prev | sed 's/_/-/g'`\n  as_fn_error $? \"missing argument to $ac_option\"\nfi\n\nif test -n \"$ac_unrecognized_opts\"; then\n  case $enable_option_checking in\n    no) ;;\n    fatal) as_fn_error $? \"unrecognized options: $ac_unrecognized_opts\" ;;\n    *)     printf \"%s\\n\" \"$as_me: WARNING: unrecognized options: $ac_unrecognized_opts\" >&2 ;;\n  esac\nfi\n\n# Check all directory arguments for consistency.\nfor ac_var in\texec_prefix prefix bindir sbindir libexecdir datarootdir \\\n\t\tdatadir sysconfdir sharedstatedir localstatedir includedir \\\n\t\toldincludedir docdir infodir htmldir dvidir pdfdir psdir \\\n\t\tlibdir localedir mandir runstatedir\ndo\n  eval ac_val=\\$$ac_var\n  # Remove trailing slashes.\n  case $ac_val in\n    */ )\n      ac_val=`expr \"X$ac_val\" : 'X\\(.*[^/]\\)' \\| \"X$ac_val\" : 'X\\(.*\\)'`\n      eval $ac_var=\\$ac_val;;\n  esac\n  # Be sure to have absolute directory names.\n  case $ac_val in\n    [\\\\/$]* | ?:[\\\\/]* )  continue;;\n    NONE | '' ) case $ac_var in *prefix ) continue;; esac;;\n  esac\n  as_fn_error $? 
\"expected an absolute directory name for --$ac_var: $ac_val\"\ndone\n\n# There might be people who depend on the old broken behavior: `$host'\n# used to hold the argument of --host etc.\n# FIXME: To remove some day.\nbuild=$build_alias\nhost=$host_alias\ntarget=$target_alias\n\n# FIXME: To remove some day.\nif test \"x$host_alias\" != x; then\n  if test \"x$build_alias\" = x; then\n    cross_compiling=maybe\n  elif test \"x$build_alias\" != \"x$host_alias\"; then\n    cross_compiling=yes\n  fi\nfi\n\nac_tool_prefix=\ntest -n \"$host_alias\" && ac_tool_prefix=$host_alias-\n\ntest \"$silent\" = yes && exec 6>/dev/null\n\n\nac_pwd=`pwd` && test -n \"$ac_pwd\" &&\nac_ls_di=`ls -di .` &&\nac_pwd_ls_di=`cd \"$ac_pwd\" && ls -di .` ||\n  as_fn_error $? \"working directory cannot be determined\"\ntest \"X$ac_ls_di\" = \"X$ac_pwd_ls_di\" ||\n  as_fn_error $? \"pwd does not report name of working directory\"\n\n\n# Find the source files, if location was not specified.\nif test -z \"$srcdir\"; then\n  ac_srcdir_defaulted=yes\n  # Try the directory containing this script, then the parent directory.\n  ac_confdir=`$as_dirname -- \"$as_myself\" ||\n$as_expr X\"$as_myself\" : 'X\\(.*[^/]\\)//*[^/][^/]*/*$' \\| \\\n\t X\"$as_myself\" : 'X\\(//\\)[^/]' \\| \\\n\t X\"$as_myself\" : 'X\\(//\\)$' \\| \\\n\t X\"$as_myself\" : 'X\\(/\\)' \\| . 2>/dev/null ||\nprintf \"%s\\n\" X\"$as_myself\" |\n    sed '/^X\\(.*[^/]\\)\\/\\/*[^/][^/]*\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)[^/].*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n  srcdir=$ac_confdir\n  if test ! -r \"$srcdir/$ac_unique_file\"; then\n    srcdir=..\n  fi\nelse\n  ac_srcdir_defaulted=no\nfi\nif test ! -r \"$srcdir/$ac_unique_file\"; then\n  test \"$ac_srcdir_defaulted\" = yes && srcdir=\"$ac_confdir or ..\"\n  as_fn_error $? 
\"cannot find sources ($ac_unique_file) in $srcdir\"\nfi\nac_msg=\"sources are in $srcdir, but \\`cd $srcdir' does not work\"\nac_abs_confdir=`(\n\tcd \"$srcdir\" && test -r \"./$ac_unique_file\" || as_fn_error $? \"$ac_msg\"\n\tpwd)`\n# When building in place, set srcdir=.\nif test \"$ac_abs_confdir\" = \"$ac_pwd\"; then\n  srcdir=.\nfi\n# Remove unnecessary trailing slashes from srcdir.\n# Double slashes in file names in object file debugging info\n# mess up M-x gdb in Emacs.\ncase $srcdir in\n*/) srcdir=`expr \"X$srcdir\" : 'X\\(.*[^/]\\)' \\| \"X$srcdir\" : 'X\\(.*\\)'`;;\nesac\nfor ac_var in $ac_precious_vars; do\n  eval ac_env_${ac_var}_set=\\${${ac_var}+set}\n  eval ac_env_${ac_var}_value=\\$${ac_var}\n  eval ac_cv_env_${ac_var}_set=\\${${ac_var}+set}\n  eval ac_cv_env_${ac_var}_value=\\$${ac_var}\ndone\n\n#\n# Report the --help message.\n#\nif test \"$ac_init_help\" = \"long\"; then\n  # Omit some internal or obsolete options to make the list less imposing.\n  # This message is too long to be a string in the A/UX 3.1 sh.\n  cat <<_ACEOF\n\\`configure' configures this package to adapt to many kinds of systems.\n\nUsage: $0 [OPTION]... [VAR=VALUE]...\n\nTo assign environment variables (e.g., CC, CFLAGS...), specify them as\nVAR=VALUE.  See below for descriptions of some of the useful variables.\n\nDefaults for the options are specified in brackets.\n\nConfiguration:\n  -h, --help              display this help and exit\n      --help=short        display options specific to this package\n      --help=recursive    display the short help of all the included packages\n  -V, --version           display version information and exit\n  -q, --quiet, --silent   do not print \\`checking ...' 
messages\n      --cache-file=FILE   cache test results in FILE [disabled]\n  -C, --config-cache      alias for \\`--cache-file=config.cache'\n  -n, --no-create         do not create output files\n      --srcdir=DIR        find the sources in DIR [configure dir or \\`..']\n\nInstallation directories:\n  --prefix=PREFIX         install architecture-independent files in PREFIX\n                          [$ac_default_prefix]\n  --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX\n                          [PREFIX]\n\nBy default, \\`make install' will install all the files in\n\\`$ac_default_prefix/bin', \\`$ac_default_prefix/lib' etc.  You can specify\nan installation prefix other than \\`$ac_default_prefix' using \\`--prefix',\nfor instance \\`--prefix=\\$HOME'.\n\nFor better control, use the options below.\n\nFine tuning of the installation directories:\n  --bindir=DIR            user executables [EPREFIX/bin]\n  --sbindir=DIR           system admin executables [EPREFIX/sbin]\n  --libexecdir=DIR        program executables [EPREFIX/libexec]\n  --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]\n  --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]\n  --localstatedir=DIR     modifiable single-machine data [PREFIX/var]\n  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]\n  --libdir=DIR            object code libraries [EPREFIX/lib]\n  --includedir=DIR        C header files [PREFIX/include]\n  --oldincludedir=DIR     C header files for non-gcc [/usr/include]\n  --datarootdir=DIR       read-only arch.-independent data root [PREFIX/share]\n  --datadir=DIR           read-only architecture-independent data [DATAROOTDIR]\n  --infodir=DIR           info documentation [DATAROOTDIR/info]\n  --localedir=DIR         locale-dependent data [DATAROOTDIR/locale]\n  --mandir=DIR            man documentation [DATAROOTDIR/man]\n  --docdir=DIR            documentation root [DATAROOTDIR/doc/PACKAGE]\n  
--htmldir=DIR           html documentation [DOCDIR]\n  --dvidir=DIR            dvi documentation [DOCDIR]\n  --pdfdir=DIR            pdf documentation [DOCDIR]\n  --psdir=DIR             ps documentation [DOCDIR]\n_ACEOF\n\n  cat <<\\_ACEOF\n\nSystem types:\n  --build=BUILD     configure for building on BUILD [guessed]\n  --host=HOST       cross-compile to build programs to run on HOST [BUILD]\n_ACEOF\nfi\n\nif test -n \"$ac_init_help\"; then\n\n  cat <<\\_ACEOF\n\nOptional Features:\n  --disable-option-checking  ignore unrecognized --enable/--with options\n  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)\n  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]\n  --disable-cxx           Disable C++ integration\n  --enable-autogen        Automatically regenerate configure output\n  --enable-doc            Build documentation\n  --enable-shared         Build shared libraries\n  --enable-static         Build static libraries\n  --enable-debug          Build debugging code\n  --disable-stats         Disable statistics calculation/reporting\n  --enable-experimental-smallocx\n                          Enable experimental smallocx API\n  --enable-prof           Enable allocation profiling\n  --enable-prof-libunwind Use libunwind for backtracing\n  --disable-prof-libgcc   Do not use libgcc for backtracing\n  --disable-prof-gcc      Do not use gcc intrinsics for backtracing\n  --disable-fill          Disable support for junk/zero filling\n  --enable-utrace         Enable utrace(2)-based tracing\n  --enable-xmalloc        Support xmalloc option\n  --disable-cache-oblivious\n                          Disable support for cache-oblivious allocation\n                          alignment\n  --enable-log            Support debug logging\n  --enable-readlinkat     Use readlinkat over readlink\n  --enable-opt-safety-checks\n                          Perform certain low-overhead checks, even in opt\n                          mode\n  --enable-opt-size-checks\n  
                        Perform sized-deallocation argument checks, even in\n                          opt mode\n  --enable-uaf-detection  Allow sampled junk-filling on deallocation to detect\n                          use-after-free\n  --disable-libdl         Do not use libdl\n  --disable-syscall       Disable use of syscall(2)\n  --enable-lazy-lock      Enable lazy locking (only lock when multi-threaded)\n  --disable-zone-allocator\n                          Disable zone allocator for Darwin\n  --disable-initial-exec-tls\n                          Disable the initial-exec tls model\n\nOptional Packages:\n  --with-PACKAGE[=ARG]    use PACKAGE [ARG=yes]\n  --without-PACKAGE       do not use PACKAGE (same as --with-PACKAGE=no)\n  --with-xslroot=<path>   XSL stylesheet root path\n  --with-lg-vaddr=<lg-vaddr>\n                          Number of significant virtual address bits\n  --with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>\n                          Version string\n  --with-rpath=<rpath>    Colon-separated rpath (ELF systems only)\n  --with-mangling=<map>   Mangle symbols in <map>\n  --with-jemalloc-prefix=<prefix>\n                          Prefix to prepend to all public APIs\n  --without-export        disable exporting jemalloc public APIs\n  --with-private-namespace=<prefix>\n                          Prefix to prepend to all library-private APIs\n  --with-install-suffix=<suffix>\n                          Suffix to append to all installed files\n  --with-malloc-conf=<malloc_conf>\n                          config.malloc_conf options string\n  --with-static-libunwind=<libunwind.a>\n                          Path to static libunwind library; use rather than\n                          dynamically linking\n  --with-lg-quantum=<lg-quantum>\n                          Base 2 log of minimum allocation alignment\n  --with-lg-slab-maxregs=<lg-slab-maxregs>\n                          Base 2 log of maximum number of regions in a slab\n                          
(used with malloc_conf slab_sizes)\n  --with-lg-page=<lg-page>\n                          Base 2 log of system page size\n  --with-lg-hugepage=<lg-hugepage>\n                          Base 2 log of system huge page size\n\nSome influential environment variables:\n  CC          C compiler command\n  CFLAGS      C compiler flags\n  LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a\n              nonstandard directory <lib dir>\n  LIBS        libraries to pass to the linker, e.g. -l<library>\n  CPPFLAGS    (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if\n              you have headers in a nonstandard directory <include dir>\n  CPP         C preprocessor\n  CXX         C++ compiler command\n  CXXFLAGS    C++ compiler flags\n\nUse these variables to override the choices made by `configure' or to help\nit to find libraries and programs with nonstandard names/locations.\n\nReport bugs to the package provider.\n_ACEOF\nac_status=$?\nfi\n\nif test \"$ac_init_help\" = \"recursive\"; then\n  # If there are subdirs, report their specific --help.\n  for ac_dir in : $ac_subdirs_all; do test \"x$ac_dir\" = x: && continue\n    test -d \"$ac_dir\" ||\n      { cd \"$srcdir\" && ac_pwd=`pwd` && srcdir=. && test -d \"$ac_dir\"; } ||\n      continue\n    ac_builddir=.\n\ncase \"$ac_dir\" in\n.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;\n*)\n  ac_dir_suffix=/`printf \"%s\\n\" \"$ac_dir\" | sed 's|^\\.[\\\\/]||'`\n  # A \"..\" for each directory in $ac_dir_suffix.\n  ac_top_builddir_sub=`printf \"%s\\n\" \"$ac_dir_suffix\" | sed 's|/[^\\\\/]*|/..|g;s|/||'`\n  case $ac_top_builddir_sub in\n  \"\") ac_top_builddir_sub=. ac_top_build_prefix= ;;\n  *)  ac_top_build_prefix=$ac_top_builddir_sub/ ;;\n  esac ;;\nesac\nac_abs_top_builddir=$ac_pwd\nac_abs_builddir=$ac_pwd$ac_dir_suffix\n# for backward compatibility:\nac_top_builddir=$ac_top_build_prefix\n\ncase $srcdir in\n  .)  
# We are building in place.\n    ac_srcdir=.\n    ac_top_srcdir=$ac_top_builddir_sub\n    ac_abs_top_srcdir=$ac_pwd ;;\n  [\\\\/]* | ?:[\\\\/]* )  # Absolute name.\n    ac_srcdir=$srcdir$ac_dir_suffix;\n    ac_top_srcdir=$srcdir\n    ac_abs_top_srcdir=$srcdir ;;\n  *) # Relative name.\n    ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix\n    ac_top_srcdir=$ac_top_build_prefix$srcdir\n    ac_abs_top_srcdir=$ac_pwd/$srcdir ;;\nesac\nac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix\n\n    cd \"$ac_dir\" || { ac_status=$?; continue; }\n    # Check for configure.gnu first; this name is used for a wrapper for\n    # Metaconfig's \"Configure\" on case-insensitive file systems.\n    if test -f \"$ac_srcdir/configure.gnu\"; then\n      echo &&\n      $SHELL \"$ac_srcdir/configure.gnu\" --help=recursive\n    elif test -f \"$ac_srcdir/configure\"; then\n      echo &&\n      $SHELL \"$ac_srcdir/configure\" --help=recursive\n    else\n      printf \"%s\\n\" \"$as_me: WARNING: no configuration information is in $ac_dir\" >&2\n    fi || ac_status=$?\n    cd \"$ac_pwd\" || { ac_status=$?; break; }\n  done\nfi\n\ntest -n \"$ac_init_help\" && exit $ac_status\nif $ac_init_version; then\n  cat <<\\_ACEOF\nconfigure\ngenerated by GNU Autoconf 2.71\n\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis configure script is free software; the Free Software Foundation\ngives unlimited permission to copy, distribute and modify it.\n_ACEOF\n  exit\nfi\n\n## ------------------------ ##\n## Autoconf initialization. 
##\n## ------------------------ ##\n\n# ac_fn_c_try_compile LINENO\n# --------------------------\n# Try to compile conftest.$ac_ext, and return whether this succeeded.\nac_fn_c_try_compile ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  rm -f conftest.$ac_objext conftest.beam\n  if { { ac_try=\"$ac_compile\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_compile\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    grep -v '^ *+' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n    mv -f conftest.er1 conftest.err\n  fi\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; } && {\n\t test -z \"$ac_c_werror_flag\" ||\n\t test ! -s conftest.err\n       } && test -s conftest.$ac_objext\nthen :\n  ac_retval=0\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n\tac_retval=1\nfi\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_c_try_compile\n\n# ac_fn_c_try_cpp LINENO\n# ----------------------\n# Try to preprocess conftest.$ac_ext, and return whether this succeeded.\nac_fn_c_try_cpp ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  if { { ac_try=\"$ac_cpp conftest.$ac_ext\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_cpp conftest.$ac_ext\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    grep -v '^ *+' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n    mv -f conftest.er1 conftest.err\n  fi\n  printf 
\"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; } > conftest.i && {\n\t test -z \"$ac_c_preproc_warn_flag$ac_c_werror_flag\" ||\n\t test ! -s conftest.err\n       }\nthen :\n  ac_retval=0\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n    ac_retval=1\nfi\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_c_try_cpp\n\n# ac_fn_cxx_try_compile LINENO\n# ----------------------------\n# Try to compile conftest.$ac_ext, and return whether this succeeded.\nac_fn_cxx_try_compile ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  rm -f conftest.$ac_objext conftest.beam\n  if { { ac_try=\"$ac_compile\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_compile\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    grep -v '^ *+' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n    mv -f conftest.er1 conftest.err\n  fi\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; } && {\n\t test -z \"$ac_cxx_werror_flag\" ||\n\t test ! 
-s conftest.err\n       } && test -s conftest.$ac_objext\nthen :\n  ac_retval=0\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n\tac_retval=1\nfi\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_cxx_try_compile\n\n# ac_fn_c_try_link LINENO\n# -----------------------\n# Try to link conftest.$ac_ext, and return whether this succeeded.\nac_fn_c_try_link ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  rm -f conftest.$ac_objext conftest.beam conftest$ac_exeext\n  if { { ac_try=\"$ac_link\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_link\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    grep -v '^ *+' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n    mv -f conftest.er1 conftest.err\n  fi\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; } && {\n\t test -z \"$ac_c_werror_flag\" ||\n\t test ! -s conftest.err\n       } && test -s conftest$ac_exeext && {\n\t test \"$cross_compiling\" = yes ||\n\t test -x conftest$ac_exeext\n       }\nthen :\n  ac_retval=0\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n\tac_retval=1\nfi\n  # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information\n  # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would\n  # interfere with the next link command; also delete a directory that is\n  # left behind by Apple's compiler.  
We do this before executing the actions.\n  rm -rf conftest.dSYM conftest_ipa8_conftest.oo\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_c_try_link\n\n# ac_fn_c_try_run LINENO\n# ----------------------\n# Try to run conftest.$ac_ext, and return whether this succeeded. Assumes that\n# executables *can* be run.\nac_fn_c_try_run ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  if { { ac_try=\"$ac_link\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_link\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; } && { ac_try='./conftest$ac_exeext'\n  { { case \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_try\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? 
= $ac_status\" >&5\n  test $ac_status = 0; }; }\nthen :\n  ac_retval=0\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: program exited with status $ac_status\" >&5\n       printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n       ac_retval=$ac_status\nfi\n  rm -rf conftest.dSYM conftest_ipa8_conftest.oo\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_c_try_run\n\n# ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES\n# -------------------------------------------------------\n# Tests whether HEADER exists and can be compiled using the include files in\n# INCLUDES, setting the cache variable VAR accordingly.\nac_fn_c_check_header_compile ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $2\" >&5\nprintf %s \"checking for $2... \" >&6; }\nif eval test \\${$3+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\n#include <$2>\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  eval \"$3=yes\"\nelse $as_nop\n  eval \"$3=no\"\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\neval ac_res=\\$$3\n\t       { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_res\" >&5\nprintf \"%s\\n\" \"$ac_res\" >&6; }\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n\n} # ac_fn_c_check_header_compile\n\n# ac_fn_c_compute_int LINENO EXPR VAR INCLUDES\n# --------------------------------------------\n# Tries to find the compile-time value of EXPR in a program that includes\n# INCLUDES, setting VAR accordingly. 
Returns whether the value could be\n# computed\nac_fn_c_compute_int ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  if test \"$cross_compiling\" = yes; then\n    # Depending upon the size, compute the lo and hi bounds.\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nint\nmain (void)\n{\nstatic int test_array [1 - 2 * !(($2) >= 0)];\ntest_array [0] = 0;\nreturn test_array [0];\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_lo=0 ac_mid=0\n  while :; do\n    cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nint\nmain (void)\n{\nstatic int test_array [1 - 2 * !(($2) <= $ac_mid)];\ntest_array [0] = 0;\nreturn test_array [0];\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_hi=$ac_mid; break\nelse $as_nop\n  as_fn_arith $ac_mid + 1 && ac_lo=$as_val\n\t\t\tif test $ac_lo -le $ac_mid; then\n\t\t\t  ac_lo= ac_hi=\n\t\t\t  break\n\t\t\tfi\n\t\t\tas_fn_arith 2 '*' $ac_mid + 1 && ac_mid=$as_val\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n  done\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nint\nmain (void)\n{\nstatic int test_array [1 - 2 * !(($2) < 0)];\ntest_array [0] = 0;\nreturn test_array [0];\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_hi=-1 ac_mid=-1\n  while :; do\n    cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$4\nint\nmain (void)\n{\nstatic int test_array [1 - 2 * !(($2) >= $ac_mid)];\ntest_array [0] = 0;\nreturn test_array [0];\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_lo=$ac_mid; break\nelse $as_nop\n  as_fn_arith '(' $ac_mid ')' - 1 && ac_hi=$as_val\n\t\t\tif test $ac_mid -le $ac_hi; then\n\t\t\t  ac_lo= ac_hi=\n\t\t\t  break\n\t\t\tfi\n\t\t\tas_fn_arith 2 '*' $ac_mid && ac_mid=$as_val\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n  done\nelse $as_nop\n  ac_lo= ac_hi=\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n# Binary search between lo and hi bounds.\nwhile test \"x$ac_lo\" != \"x$ac_hi\"; do\n  as_fn_arith '(' $ac_hi - $ac_lo ')' / 2 + $ac_lo && ac_mid=$as_val\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nint\nmain (void)\n{\nstatic int test_array [1 - 2 * !(($2) <= $ac_mid)];\ntest_array [0] = 0;\nreturn test_array [0];\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_hi=$ac_mid\nelse $as_nop\n  as_fn_arith '(' $ac_mid ')' + 1 && ac_lo=$as_val\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\ndone\ncase $ac_lo in #((\n?*) eval \"$3=\\$ac_lo\"; ac_retval=0 ;;\n'') ac_retval=1 ;;\nesac\n  else\n    cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nstatic long int longval (void) { return $2; }\nstatic unsigned long int ulongval (void) { return $2; }\n#include <stdio.h>\n#include <stdlib.h>\nint\nmain (void)\n{\n\n  FILE *f = fopen (\"conftest.val\", \"w\");\n  if (! 
f)\n    return 1;\n  if (($2) < 0)\n    {\n      long int i = longval ();\n      if (i != ($2))\n\treturn 1;\n      fprintf (f, \"%ld\", i);\n    }\n  else\n    {\n      unsigned long int i = ulongval ();\n      if (i != ($2))\n\treturn 1;\n      fprintf (f, \"%lu\", i);\n    }\n  /* Do not output a trailing newline, as this causes \\r\\n confusion\n     on some platforms.  */\n  return ferror (f) || fclose (f) != 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_run \"$LINENO\"\nthen :\n  echo >>conftest.val; read $3 <conftest.val; ac_retval=0\nelse $as_nop\n  ac_retval=1\nfi\nrm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \\\n  conftest.$ac_objext conftest.beam conftest.$ac_ext\nrm -f conftest.val\n\n  fi\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n  as_fn_set_status $ac_retval\n\n} # ac_fn_c_compute_int\n\n# ac_fn_c_check_func LINENO FUNC VAR\n# ----------------------------------\n# Tests whether FUNC exists, setting the cache variable VAR accordingly\nac_fn_c_check_func ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $2\" >&5\nprintf %s \"checking for $2... \" >&6; }\nif eval test \\${$3+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n/* Define $2 to an innocuous variant, in case <limits.h> declares $2.\n   For example, HP-UX 11i <limits.h> declares gettimeofday.  */\n#define $2 innocuous_$2\n\n/* System header to define __stub macros and hopefully few prototypes,\n   which can conflict with char $2 (); below.  */\n\n#include <limits.h>\n#undef $2\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\n#ifdef __cplusplus\nextern \"C\"\n#endif\nchar $2 ();\n/* The GNU C library defines this for functions which it implements\n    to always fail with ENOSYS.  Some functions are actually named\n    something starting with __ and the normal name is an alias.  */\n#if defined __stub_$2 || defined __stub___$2\nchoke me\n#endif\n\nint\nmain (void)\n{\nreturn $2 ();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  eval \"$3=yes\"\nelse $as_nop\n  eval \"$3=no\"\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\neval ac_res=\\$$3\n\t       { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_res\" >&5\nprintf \"%s\\n\" \"$ac_res\" >&6; }\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n\n} # ac_fn_c_check_func\n\n# ac_fn_c_check_type LINENO TYPE VAR INCLUDES\n# -------------------------------------------\n# Tests whether TYPE exists after having included INCLUDES, setting cache\n# variable VAR accordingly.\nac_fn_c_check_type ()\n{\n  as_lineno=${as_lineno-\"$1\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $2\" >&5\nprintf %s \"checking for $2... \" >&6; }\nif eval test \\${$3+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  eval \"$3=no\"\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$4\nint\nmain (void)\n{\nif (sizeof ($2))\n\t return 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$4\nint\nmain (void)\n{\nif (sizeof (($2)))\n\t    return 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n\nelse $as_nop\n  eval \"$3=yes\"\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\neval ac_res=\\$$3\n\t       { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_res\" >&5\nprintf \"%s\\n\" \"$ac_res\" >&6; }\n  eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno\n\n} # ac_fn_c_check_type\nac_configure_args_raw=\nfor ac_arg\ndo\n  case $ac_arg in\n  *\\'*)\n    ac_arg=`printf \"%s\\n\" \"$ac_arg\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"` ;;\n  esac\n  as_fn_append ac_configure_args_raw \" '$ac_arg'\"\ndone\n\ncase $ac_configure_args_raw in\n  *$as_nl*)\n    ac_safe_unquote= ;;\n  *)\n    ac_unsafe_z='|&;<>()$`\\\\\"*?[ ''\t' # This string ends in space, tab.\n    ac_unsafe_a=\"$ac_unsafe_z#~\"\n    ac_safe_unquote=\"s/ '\\\\([^$ac_unsafe_a][^$ac_unsafe_z]*\\\\)'/ \\\\1/g\"\n    ac_configure_args_raw=`      printf \"%s\\n\" \"$ac_configure_args_raw\" | sed \"$ac_safe_unquote\"`;;\nesac\n\ncat >config.log <<_ACEOF\nThis file contains any messages produced by compilers while\nrunning configure, to aid debugging if configure makes a mistake.\n\nIt was created by $as_me, which was\ngenerated by GNU Autoconf 2.71.  Invocation command line was\n\n  $ $0$ac_configure_args_raw\n\n_ACEOF\nexec 5>>config.log\n{\ncat <<_ASUNAME\n## --------- ##\n## Platform. 
##\n## --------- ##\n\nhostname = `(hostname || uname -n) 2>/dev/null | sed 1q`\nuname -m = `(uname -m) 2>/dev/null || echo unknown`\nuname -r = `(uname -r) 2>/dev/null || echo unknown`\nuname -s = `(uname -s) 2>/dev/null || echo unknown`\nuname -v = `(uname -v) 2>/dev/null || echo unknown`\n\n/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown`\n/bin/uname -X     = `(/bin/uname -X) 2>/dev/null     || echo unknown`\n\n/bin/arch              = `(/bin/arch) 2>/dev/null              || echo unknown`\n/usr/bin/arch -k       = `(/usr/bin/arch -k) 2>/dev/null       || echo unknown`\n/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown`\n/usr/bin/hostinfo      = `(/usr/bin/hostinfo) 2>/dev/null      || echo unknown`\n/bin/machine           = `(/bin/machine) 2>/dev/null           || echo unknown`\n/usr/bin/oslevel       = `(/usr/bin/oslevel) 2>/dev/null       || echo unknown`\n/bin/universe          = `(/bin/universe) 2>/dev/null          || echo unknown`\n\n_ASUNAME\n\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    printf \"%s\\n\" \"PATH: $as_dir\"\n  done\nIFS=$as_save_IFS\n\n} >&5\n\ncat >&5 <<_ACEOF\n\n\n## ----------- ##\n## Core tests. 
##\n## ----------- ##\n\n_ACEOF\n\n\n# Keep a trace of the command line.\n# Strip out --no-create and --no-recursion so they do not pile up.\n# Strip out --silent because we don't want to record it for future runs.\n# Also quote any args containing shell meta-characters.\n# Make two passes to allow for proper duplicate-argument suppression.\nac_configure_args=\nac_configure_args0=\nac_configure_args1=\nac_must_keep_next=false\nfor ac_pass in 1 2\ndo\n  for ac_arg\n  do\n    case $ac_arg in\n    -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;;\n    -q | -quiet | --quiet | --quie | --qui | --qu | --q \\\n    | -silent | --silent | --silen | --sile | --sil)\n      continue ;;\n    *\\'*)\n      ac_arg=`printf \"%s\\n\" \"$ac_arg\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"` ;;\n    esac\n    case $ac_pass in\n    1) as_fn_append ac_configure_args0 \" '$ac_arg'\" ;;\n    2)\n      as_fn_append ac_configure_args1 \" '$ac_arg'\"\n      if test $ac_must_keep_next = true; then\n\tac_must_keep_next=false # Got value, back to normal.\n      else\n\tcase $ac_arg in\n\t  *=* | --config-cache | -C | -disable-* | --disable-* \\\n\t  | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \\\n\t  | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \\\n\t  | -with-* | --with-* | -without-* | --without-* | --x)\n\t    case \"$ac_configure_args0 \" in\n\t      \"$ac_configure_args1\"*\" '$ac_arg' \"* ) continue ;;\n\t    esac\n\t    ;;\n\t  -* ) ac_must_keep_next=true ;;\n\tesac\n      fi\n      as_fn_append ac_configure_args \" '$ac_arg'\"\n      ;;\n    esac\n  done\ndone\n{ ac_configure_args0=; unset ac_configure_args0;}\n{ ac_configure_args1=; unset ac_configure_args1;}\n\n# When interrupted or exit'd, cleanup temporary files, and complete\n# config.log.  
We remove comments because anyway the quotes in there\n# would cause problems or look ugly.\n# WARNING: Use '\\'' to represent an apostrophe within the trap.\n# WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug.\ntrap 'exit_status=$?\n  # Sanitize IFS.\n  IFS=\" \"\"\t$as_nl\"\n  # Save into config.log some information that might help in debugging.\n  {\n    echo\n\n    printf \"%s\\n\" \"## ---------------- ##\n## Cache variables. ##\n## ---------------- ##\"\n    echo\n    # The following way of writing the cache mishandles newlines in values,\n(\n  for ac_var in `(set) 2>&1 | sed -n '\\''s/^\\([a-zA-Z_][a-zA-Z0-9_]*\\)=.*/\\1/p'\\''`; do\n    eval ac_val=\\$$ac_var\n    case $ac_val in #(\n    *${as_nl}*)\n      case $ac_var in #(\n      *_cv_*) { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: cache variable $ac_var contains a newline\" >&2;} ;;\n      esac\n      case $ac_var in #(\n      _ | IFS | as_nl) ;; #(\n      BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #(\n      *) { eval $ac_var=; unset $ac_var;} ;;\n      esac ;;\n    esac\n  done\n  (set) 2>&1 |\n    case $as_nl`(ac_space='\\'' '\\''; set) 2>&1` in #(\n    *${as_nl}ac_space=\\ *)\n      sed -n \\\n\t\"s/'\\''/'\\''\\\\\\\\'\\'''\\''/g;\n\t  s/^\\\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\\\)=\\\\(.*\\\\)/\\\\1='\\''\\\\2'\\''/p\"\n      ;; #(\n    *)\n      sed -n \"/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p\"\n      ;;\n    esac |\n    sort\n)\n    echo\n\n    printf \"%s\\n\" \"## ----------------- ##\n## Output variables. 
##\n## ----------------- ##\"\n    echo\n    for ac_var in $ac_subst_vars\n    do\n      eval ac_val=\\$$ac_var\n      case $ac_val in\n      *\\'\\''*) ac_val=`printf \"%s\\n\" \"$ac_val\" | sed \"s/'\\''/'\\''\\\\\\\\\\\\\\\\'\\'''\\''/g\"`;;\n      esac\n      printf \"%s\\n\" \"$ac_var='\\''$ac_val'\\''\"\n    done | sort\n    echo\n\n    if test -n \"$ac_subst_files\"; then\n      printf \"%s\\n\" \"## ------------------- ##\n## File substitutions. ##\n## ------------------- ##\"\n      echo\n      for ac_var in $ac_subst_files\n      do\n\teval ac_val=\\$$ac_var\n\tcase $ac_val in\n\t*\\'\\''*) ac_val=`printf \"%s\\n\" \"$ac_val\" | sed \"s/'\\''/'\\''\\\\\\\\\\\\\\\\'\\'''\\''/g\"`;;\n\tesac\n\tprintf \"%s\\n\" \"$ac_var='\\''$ac_val'\\''\"\n      done | sort\n      echo\n    fi\n\n    if test -s confdefs.h; then\n      printf \"%s\\n\" \"## ----------- ##\n## confdefs.h. ##\n## ----------- ##\"\n      echo\n      cat confdefs.h\n      echo\n    fi\n    test \"$ac_signal\" != 0 &&\n      printf \"%s\\n\" \"$as_me: caught signal $ac_signal\"\n    printf \"%s\\n\" \"$as_me: exit $exit_status\"\n  } >&5\n  rm -f core *.core core.conftest.* &&\n    rm -f -r conftest* confdefs* conf$$* $ac_clean_files &&\n    exit $exit_status\n' 0\nfor ac_signal in 1 2 13 15; do\n  trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal\ndone\nac_signal=0\n\n# confdefs.h avoids OS command line length limits that DEFS can exceed.\nrm -f -r conftest* confdefs.h\n\nprintf \"%s\\n\" \"/* confdefs.h */\" > confdefs.h\n\n# Predefined preprocessor variables.\n\nprintf \"%s\\n\" \"#define PACKAGE_NAME \\\"$PACKAGE_NAME\\\"\" >>confdefs.h\n\nprintf \"%s\\n\" \"#define PACKAGE_TARNAME \\\"$PACKAGE_TARNAME\\\"\" >>confdefs.h\n\nprintf \"%s\\n\" \"#define PACKAGE_VERSION \\\"$PACKAGE_VERSION\\\"\" >>confdefs.h\n\nprintf \"%s\\n\" \"#define PACKAGE_STRING \\\"$PACKAGE_STRING\\\"\" >>confdefs.h\n\nprintf \"%s\\n\" \"#define PACKAGE_BUGREPORT \\\"$PACKAGE_BUGREPORT\\\"\" >>confdefs.h\n\nprintf 
\"%s\\n\" \"#define PACKAGE_URL \\\"$PACKAGE_URL\\\"\" >>confdefs.h\n\n\n# Let the site file select an alternate cache file if it wants to.\n# Prefer an explicitly selected file to automatically selected ones.\nif test -n \"$CONFIG_SITE\"; then\n  ac_site_files=\"$CONFIG_SITE\"\nelif test \"x$prefix\" != xNONE; then\n  ac_site_files=\"$prefix/share/config.site $prefix/etc/config.site\"\nelse\n  ac_site_files=\"$ac_default_prefix/share/config.site $ac_default_prefix/etc/config.site\"\nfi\n\nfor ac_site_file in $ac_site_files\ndo\n  case $ac_site_file in #(\n  */*) :\n     ;; #(\n  *) :\n    ac_site_file=./$ac_site_file ;;\nesac\n  if test -f \"$ac_site_file\" && test -r \"$ac_site_file\"; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file\" >&5\nprintf \"%s\\n\" \"$as_me: loading site script $ac_site_file\" >&6;}\n    sed 's/^/| /' \"$ac_site_file\" >&5\n    . \"$ac_site_file\" \\\n      || { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error $? \"failed to load site script $ac_site_file\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n  fi\ndone\n\nif test -r \"$cache_file\"; then\n  # Some versions of bash will fail to source /dev/null (special files\n  # actually), so we avoid doing that.  DJGPP emulates it as a regular file.\n  if test /dev/null != \"$cache_file\" && test -f \"$cache_file\"; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: loading cache $cache_file\" >&5\nprintf \"%s\\n\" \"$as_me: loading cache $cache_file\" >&6;}\n    case $cache_file in\n      [\\\\/]* | ?:[\\\\/]* ) . \"$cache_file\";;\n      *)                      . 
\"./$cache_file\";;\n    esac\n  fi\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: creating cache $cache_file\" >&5\nprintf \"%s\\n\" \"$as_me: creating cache $cache_file\" >&6;}\n  >$cache_file\nfi\n\n# Test code for whether the C compiler supports C89 (global declarations)\nac_c_conftest_c89_globals='\n/* Does the compiler advertise C89 conformance?\n   Do not test the value of __STDC__, because some compilers set it to 0\n   while being otherwise adequately conformant. */\n#if !defined __STDC__\n# error \"Compiler does not advertise C89 conformance\"\n#endif\n\n#include <stddef.h>\n#include <stdarg.h>\nstruct stat;\n/* Most of the following tests are stolen from RCS 5.7 src/conf.sh.  */\nstruct buf { int x; };\nstruct buf * (*rcsopen) (struct buf *, struct stat *, int);\nstatic char *e (p, i)\n     char **p;\n     int i;\n{\n  return p[i];\n}\nstatic char *f (char * (*g) (char **, int), char **p, ...)\n{\n  char *s;\n  va_list v;\n  va_start (v,p);\n  s = g (p, va_arg (v,int));\n  va_end (v);\n  return s;\n}\n\n/* OSF 4.0 Compaq cc is some sort of almost-ANSI by default.  It has\n   function prototypes and stuff, but not \\xHH hex character constants.\n   These do not provoke an error unfortunately, instead are silently treated\n   as an \"x\".  The following induces an error, until -std is added to get\n   proper ANSI mode.  Curiously \\x00 != x always comes out true, for an\n   array size at least.  It is necessary to write \\x00 == 0 to get something\n   that is true only with -std.  */\nint osf4_cc_array ['\\''\\x00'\\'' == 0 ? 1 : -1];\n\n/* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters\n   inside strings and character constants.  */\n#define FOO(x) '\\''x'\\''\nint xlc6_cc_array[FOO(a) == '\\''x'\\'' ? 
1 : -1];\n\nint test (int i, double x);\nstruct s1 {int (*f) (int a);};\nstruct s2 {int (*f) (double a);};\nint pairnames (int, char **, int *(*)(struct buf *, struct stat *, int),\n               int, int);'\n\n# Test code for whether the C compiler supports C89 (body of main).\nac_c_conftest_c89_main='\nok |= (argc == 0 || f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]);\n'\n\n# Test code for whether the C compiler supports C99 (global declarations)\nac_c_conftest_c99_globals='\n// Does the compiler advertise C99 conformance?\n#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 199901L\n# error \"Compiler does not advertise C99 conformance\"\n#endif\n\n#include <stdbool.h>\nextern int puts (const char *);\nextern int printf (const char *, ...);\nextern int dprintf (int, const char *, ...);\nextern void *malloc (size_t);\n\n// Check varargs macros.  These examples are taken from C99 6.10.3.5.\n// dprintf is used instead of fprintf to avoid needing to declare\n// FILE and stderr.\n#define debug(...) dprintf (2, __VA_ARGS__)\n#define showlist(...) puts (#__VA_ARGS__)\n#define report(test,...) ((test) ? 
puts (#test) : printf (__VA_ARGS__))\nstatic void\ntest_varargs_macros (void)\n{\n  int x = 1234;\n  int y = 5678;\n  debug (\"Flag\");\n  debug (\"X = %d\\n\", x);\n  showlist (The first, second, and third items.);\n  report (x>y, \"x is %d but y is %d\", x, y);\n}\n\n// Check long long types.\n#define BIG64 18446744073709551615ull\n#define BIG32 4294967295ul\n#define BIG_OK (BIG64 / BIG32 == 4294967297ull && BIG64 % BIG32 == 0)\n#if !BIG_OK\n  #error \"your preprocessor is broken\"\n#endif\n#if BIG_OK\n#else\n  #error \"your preprocessor is broken\"\n#endif\nstatic long long int bignum = -9223372036854775807LL;\nstatic unsigned long long int ubignum = BIG64;\n\nstruct incomplete_array\n{\n  int datasize;\n  double data[];\n};\n\nstruct named_init {\n  int number;\n  const wchar_t *name;\n  double average;\n};\n\ntypedef const char *ccp;\n\nstatic inline int\ntest_restrict (ccp restrict text)\n{\n  // See if C++-style comments work.\n  // Iterate through items via the restricted pointer.\n  // Also check for declarations in for loops.\n  for (unsigned int i = 0; *(text+i) != '\\''\\0'\\''; ++i)\n    continue;\n  return 0;\n}\n\n// Check varargs and va_copy.\nstatic bool\ntest_varargs (const char *format, ...)\n{\n  va_list args;\n  va_start (args, format);\n  va_list args_copy;\n  va_copy (args_copy, args);\n\n  const char *str = \"\";\n  int number = 0;\n  float fnumber = 0;\n\n  while (*format)\n    {\n      switch (*format++)\n\t{\n\tcase '\\''s'\\'': // string\n\t  str = va_arg (args_copy, const char *);\n\t  break;\n\tcase '\\''d'\\'': // int\n\t  number = va_arg (args_copy, int);\n\t  break;\n\tcase '\\''f'\\'': // float\n\t  fnumber = va_arg (args_copy, double);\n\t  break;\n\tdefault:\n\t  break;\n\t}\n    }\n  va_end (args_copy);\n  va_end (args);\n\n  return *str && number && fnumber;\n}\n'\n\n# Test code for whether the C compiler supports C99 (body of main).\nac_c_conftest_c99_main='\n  // Check bool.\n  _Bool success = false;\n  success |= (argc != 
0);\n\n  // Check restrict.\n  if (test_restrict (\"String literal\") == 0)\n    success = true;\n  char *restrict newvar = \"Another string\";\n\n  // Check varargs.\n  success &= test_varargs (\"s, d'\\'' f .\", \"string\", 65, 34.234);\n  test_varargs_macros ();\n\n  // Check flexible array members.\n  struct incomplete_array *ia =\n    malloc (sizeof (struct incomplete_array) + (sizeof (double) * 10));\n  ia->datasize = 10;\n  for (int i = 0; i < ia->datasize; ++i)\n    ia->data[i] = i * 1.234;\n\n  // Check named initializers.\n  struct named_init ni = {\n    .number = 34,\n    .name = L\"Test wide string\",\n    .average = 543.34343,\n  };\n\n  ni.number = 58;\n\n  int dynamic_array[ni.number];\n  dynamic_array[0] = argv[0][0];\n  dynamic_array[ni.number - 1] = 543;\n\n  // work around unused variable warnings\n  ok |= (!success || bignum == 0LL || ubignum == 0uLL || newvar[0] == '\\''x'\\''\n\t || dynamic_array[ni.number - 1] != 543);\n'\n\n# Test code for whether the C compiler supports C11 (global declarations)\nac_c_conftest_c11_globals='\n// Does the compiler advertise C11 conformance?\n#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 201112L\n# error \"Compiler does not advertise C11 conformance\"\n#endif\n\n// Check _Alignas.\nchar _Alignas (double) aligned_as_double;\nchar _Alignas (0) no_special_alignment;\nextern char aligned_as_int;\nchar _Alignas (0) _Alignas (int) aligned_as_int;\n\n// Check _Alignof.\nenum\n{\n  int_alignment = _Alignof (int),\n  int_array_alignment = _Alignof (int[100]),\n  char_alignment = _Alignof (char)\n};\n_Static_assert (0 < -_Alignof (int), \"_Alignof is signed\");\n\n// Check _Noreturn.\nint _Noreturn does_not_return (void) { for (;;) continue; }\n\n// Check _Static_assert.\nstruct test_static_assert\n{\n  int x;\n  _Static_assert (sizeof (int) <= sizeof (long int),\n                  \"_Static_assert does not work in struct\");\n  long int y;\n};\n\n// Check UTF-8 literals.\n#define u8 syntax error!\nchar const 
utf8_literal[] = u8\"happens to be ASCII\" \"another string\";\n\n// Check duplicate typedefs.\ntypedef long *long_ptr;\ntypedef long int *long_ptr;\ntypedef long_ptr long_ptr;\n\n// Anonymous structures and unions -- taken from C11 6.7.2.1 Example 1.\nstruct anonymous\n{\n  union {\n    struct { int i; int j; };\n    struct { int k; long int l; } w;\n  };\n  int m;\n} v1;\n'\n\n# Test code for whether the C compiler supports C11 (body of main).\nac_c_conftest_c11_main='\n  _Static_assert ((offsetof (struct anonymous, i)\n\t\t   == offsetof (struct anonymous, w.k)),\n\t\t  \"Anonymous union alignment botch\");\n  v1.i = 2;\n  v1.w.k = 5;\n  ok |= v1.i != 5;\n'\n\n# Test code for whether the C compiler supports C11 (complete).\nac_c_conftest_c11_program=\"${ac_c_conftest_c89_globals}\n${ac_c_conftest_c99_globals}\n${ac_c_conftest_c11_globals}\n\nint\nmain (int argc, char **argv)\n{\n  int ok = 0;\n  ${ac_c_conftest_c89_main}\n  ${ac_c_conftest_c99_main}\n  ${ac_c_conftest_c11_main}\n  return ok;\n}\n\"\n\n# Test code for whether the C compiler supports C99 (complete).\nac_c_conftest_c99_program=\"${ac_c_conftest_c89_globals}\n${ac_c_conftest_c99_globals}\n\nint\nmain (int argc, char **argv)\n{\n  int ok = 0;\n  ${ac_c_conftest_c89_main}\n  ${ac_c_conftest_c99_main}\n  return ok;\n}\n\"\n\n# Test code for whether the C compiler supports C89 (complete).\nac_c_conftest_c89_program=\"${ac_c_conftest_c89_globals}\n\nint\nmain (int argc, char **argv)\n{\n  int ok = 0;\n  ${ac_c_conftest_c89_main}\n  return ok;\n}\n\"\n\n# Test code for whether the C++ compiler supports C++98 (global declarations)\nac_cxx_conftest_cxx98_globals='\n// Does the compiler advertise C++98 conformance?\n#if !defined __cplusplus || __cplusplus < 199711L\n# error \"Compiler does not advertise C++98 conformance\"\n#endif\n\n// These inclusions are to reject old compilers that\n// lack the unsuffixed header files.\n#include <cstdlib>\n#include <exception>\n\n// <cassert> and <cstring> are *not* 
freestanding headers in C++98.\nextern void assert (int);\nnamespace std {\n  extern int strcmp (const char *, const char *);\n}\n\n// Namespaces, exceptions, and templates were all added after \"C++ 2.0\".\nusing std::exception;\nusing std::strcmp;\n\nnamespace {\n\nvoid test_exception_syntax()\n{\n  try {\n    throw \"test\";\n  } catch (const char *s) {\n    // Extra parentheses suppress a warning when building autoconf itself,\n    // due to lint rules shared with more typical C programs.\n    assert (!(strcmp) (s, \"test\"));\n  }\n}\n\ntemplate <typename T> struct test_template\n{\n  T const val;\n  explicit test_template(T t) : val(t) {}\n  template <typename U> T add(U u) { return static_cast<T>(u) + val; }\n};\n\n} // anonymous namespace\n'\n\n# Test code for whether the C++ compiler supports C++98 (body of main)\nac_cxx_conftest_cxx98_main='\n  assert (argc);\n  assert (! argv[0]);\n{\n  test_exception_syntax ();\n  test_template<double> tt (2.0);\n  assert (tt.add (4) == 6.0);\n  assert (true && !false);\n}\n'\n\n# Test code for whether the C++ compiler supports C++11 (global declarations)\nac_cxx_conftest_cxx11_globals='\n// Does the compiler advertise C++ 2011 conformance?\n#if !defined __cplusplus || __cplusplus < 201103L\n# error \"Compiler does not advertise C++11 conformance\"\n#endif\n\nnamespace cxx11test\n{\n  constexpr int get_val() { return 20; }\n\n  struct testinit\n  {\n    int i;\n    double d;\n  };\n\n  class delegate\n  {\n  public:\n    delegate(int n) : n(n) {}\n    delegate(): delegate(2354) {}\n\n    virtual int getval() { return this->n; };\n  protected:\n    int n;\n  };\n\n  class overridden : public delegate\n  {\n  public:\n    overridden(int n): delegate(n) {}\n    virtual int getval() override final { return this->n * 2; }\n  };\n\n  class nocopy\n  {\n  public:\n    nocopy(int i): i(i) {}\n    nocopy() = default;\n    nocopy(const nocopy&) = delete;\n    nocopy & operator=(const nocopy&) = delete;\n  private:\n    int i;\n  
};\n\n  // for testing lambda expressions\n  template <typename Ret, typename Fn> Ret eval(Fn f, Ret v)\n  {\n    return f(v);\n  }\n\n  // for testing variadic templates and trailing return types\n  template <typename V> auto sum(V first) -> V\n  {\n    return first;\n  }\n  template <typename V, typename... Args> auto sum(V first, Args... rest) -> V\n  {\n    return first + sum(rest...);\n  }\n}\n'\n\n# Test code for whether the C++ compiler supports C++11 (body of main)\nac_cxx_conftest_cxx11_main='\n{\n  // Test auto and decltype\n  auto a1 = 6538;\n  auto a2 = 48573953.4;\n  auto a3 = \"String literal\";\n\n  int total = 0;\n  for (auto i = a3; *i; ++i) { total += *i; }\n\n  decltype(a2) a4 = 34895.034;\n}\n{\n  // Test constexpr\n  short sa[cxx11test::get_val()] = { 0 };\n}\n{\n  // Test initializer lists\n  cxx11test::testinit il = { 4323, 435234.23544 };\n}\n{\n  // Test range-based for\n  int array[] = {9, 7, 13, 15, 4, 18, 12, 10, 5, 3,\n                 14, 19, 17, 8, 6, 20, 16, 2, 11, 1};\n  for (auto &x : array) { x += 23; }\n}\n{\n  // Test lambda expressions\n  using cxx11test::eval;\n  assert (eval ([](int x) { return x*2; }, 21) == 42);\n  double d = 2.0;\n  assert (eval ([&](double x) { return d += x; }, 3.0) == 5.0);\n  assert (d == 5.0);\n  assert (eval ([=](double x) mutable { return d += x; }, 4.0) == 9.0);\n  assert (d == 5.0);\n}\n{\n  // Test use of variadic templates\n  using cxx11test::sum;\n  auto a = sum(1);\n  auto b = sum(1, 2);\n  auto c = sum(1.0, 2.0, 3.0);\n}\n{\n  // Test constructor delegation\n  cxx11test::delegate d1;\n  cxx11test::delegate d2();\n  cxx11test::delegate d3(45);\n}\n{\n  // Test override and final\n  cxx11test::overridden o1(55464);\n}\n{\n  // Test nullptr\n  char *c = nullptr;\n}\n{\n  // Test template brackets\n  test_template<::test_template<int>> v(test_template<int>(12));\n}\n{\n  // Unicode literals\n  char const *utf8 = u8\"UTF-8 string \\u2500\";\n  char16_t const *utf16 = u\"UTF-8 string \\u2500\";\n  
char32_t const *utf32 = U\"UTF-32 string \\u2500\";\n}\n'\n\n# Test code for whether the C compiler supports C++11 (complete).\nac_cxx_conftest_cxx11_program=\"${ac_cxx_conftest_cxx98_globals}\n${ac_cxx_conftest_cxx11_globals}\n\nint\nmain (int argc, char **argv)\n{\n  int ok = 0;\n  ${ac_cxx_conftest_cxx98_main}\n  ${ac_cxx_conftest_cxx11_main}\n  return ok;\n}\n\"\n\n# Test code for whether the C compiler supports C++98 (complete).\nac_cxx_conftest_cxx98_program=\"${ac_cxx_conftest_cxx98_globals}\nint\nmain (int argc, char **argv)\n{\n  int ok = 0;\n  ${ac_cxx_conftest_cxx98_main}\n  return ok;\n}\n\"\n\nas_fn_append ac_header_c_list \" stdio.h stdio_h HAVE_STDIO_H\"\nas_fn_append ac_header_c_list \" stdlib.h stdlib_h HAVE_STDLIB_H\"\nas_fn_append ac_header_c_list \" string.h string_h HAVE_STRING_H\"\nas_fn_append ac_header_c_list \" inttypes.h inttypes_h HAVE_INTTYPES_H\"\nas_fn_append ac_header_c_list \" stdint.h stdint_h HAVE_STDINT_H\"\nas_fn_append ac_header_c_list \" strings.h strings_h HAVE_STRINGS_H\"\nas_fn_append ac_header_c_list \" sys/stat.h sys_stat_h HAVE_SYS_STAT_H\"\nas_fn_append ac_header_c_list \" sys/types.h sys_types_h HAVE_SYS_TYPES_H\"\nas_fn_append ac_header_c_list \" unistd.h unistd_h HAVE_UNISTD_H\"\n\n# Auxiliary files required by this configure script.\nac_aux_files=\"install-sh config.guess config.sub\"\n\n# Locations in which to look for auxiliary files.\nac_aux_dir_candidates=\"${srcdir}/build-aux\"\n\n# Search for a directory containing all of the required auxiliary files,\n# $ac_aux_files, from the $PATH-style list $ac_aux_dir_candidates.\n# If we don't find one directory that contains all the files we need,\n# we report the set of missing files from the *first* directory in\n# $ac_aux_dir_candidates and give up.\nac_missing_aux_files=\"\"\nac_first_candidate=:\nprintf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: looking for aux files: $ac_aux_files\" >&5\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nas_found=false\nfor as_dir in 
$ac_aux_dir_candidates\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n  as_found=:\n\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:  trying $as_dir\" >&5\n  ac_aux_dir_found=yes\n  ac_install_sh=\n  for ac_aux in $ac_aux_files\n  do\n    # As a special case, if \"install-sh\" is required, that requirement\n    # can be satisfied by any of \"install-sh\", \"install.sh\", or \"shtool\",\n    # and $ac_install_sh is set appropriately for whichever one is found.\n    if test x\"$ac_aux\" = x\"install-sh\"\n    then\n      if test -f \"${as_dir}install-sh\"; then\n        printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   ${as_dir}install-sh found\" >&5\n        ac_install_sh=\"${as_dir}install-sh -c\"\n      elif test -f \"${as_dir}install.sh\"; then\n        printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   ${as_dir}install.sh found\" >&5\n        ac_install_sh=\"${as_dir}install.sh -c\"\n      elif test -f \"${as_dir}shtool\"; then\n        printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   ${as_dir}shtool found\" >&5\n        ac_install_sh=\"${as_dir}shtool install -c\"\n      else\n        ac_aux_dir_found=no\n        if $ac_first_candidate; then\n          ac_missing_aux_files=\"${ac_missing_aux_files} install-sh\"\n        else\n          break\n        fi\n      fi\n    else\n      if test -f \"${as_dir}${ac_aux}\"; then\n        printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   ${as_dir}${ac_aux} found\" >&5\n      else\n        ac_aux_dir_found=no\n        if $ac_first_candidate; then\n          ac_missing_aux_files=\"${ac_missing_aux_files} ${ac_aux}\"\n        else\n          break\n        fi\n      fi\n    fi\n  done\n  if test \"$ac_aux_dir_found\" = yes; then\n    ac_aux_dir=\"$as_dir\"\n    break\n  fi\n  ac_first_candidate=false\n\n  as_found=false\ndone\nIFS=$as_save_IFS\nif $as_found\nthen :\n\nelse $as_nop\n  as_fn_error $? 
\"cannot find required auxiliary files:$ac_missing_aux_files\" \"$LINENO\" 5\nfi\n\n\n# These three variables are undocumented and unsupported,\n# and are intended to be withdrawn in a future Autoconf release.\n# They can cause serious problems if a builder's source tree is in a directory\n# whose full name contains unusual characters.\nif test -f \"${ac_aux_dir}config.guess\"; then\n  ac_config_guess=\"$SHELL ${ac_aux_dir}config.guess\"\nfi\nif test -f \"${ac_aux_dir}config.sub\"; then\n  ac_config_sub=\"$SHELL ${ac_aux_dir}config.sub\"\nfi\nif test -f \"$ac_aux_dir/configure\"; then\n  ac_configure=\"$SHELL ${ac_aux_dir}configure\"\nfi\n\n# Check that the precious variables saved in the cache have kept the same\n# value.\nac_cache_corrupted=false\nfor ac_var in $ac_precious_vars; do\n  eval ac_old_set=\\$ac_cv_env_${ac_var}_set\n  eval ac_new_set=\\$ac_env_${ac_var}_set\n  eval ac_old_val=\\$ac_cv_env_${ac_var}_value\n  eval ac_new_val=\\$ac_env_${ac_var}_value\n  case $ac_old_set,$ac_new_set in\n    set,)\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: \\`$ac_var' was set to \\`$ac_old_val' in the previous run\" >&5\nprintf \"%s\\n\" \"$as_me: error: \\`$ac_var' was set to \\`$ac_old_val' in the previous run\" >&2;}\n      ac_cache_corrupted=: ;;\n    ,set)\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: \\`$ac_var' was not set in the previous run\" >&5\nprintf \"%s\\n\" \"$as_me: error: \\`$ac_var' was not set in the previous run\" >&2;}\n      ac_cache_corrupted=: ;;\n    ,);;\n    *)\n      if test \"x$ac_old_val\" != \"x$ac_new_val\"; then\n\t# differences in whitespace do not lead to failure.\n\tac_old_val_w=`echo x $ac_old_val`\n\tac_new_val_w=`echo x $ac_new_val`\n\tif test \"$ac_old_val_w\" != \"$ac_new_val_w\"; then\n\t  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: \\`$ac_var' has changed since the previous run:\" >&5\nprintf \"%s\\n\" \"$as_me: error: \\`$ac_var' has changed since the previous run:\" 
>&2;}\n\t  ac_cache_corrupted=:\n\telse\n\t  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \\`$ac_var' since the previous run:\" >&5\nprintf \"%s\\n\" \"$as_me: warning: ignoring whitespace changes in \\`$ac_var' since the previous run:\" >&2;}\n\t  eval $ac_var=\\$ac_old_val\n\tfi\n\t{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   former value:  \\`$ac_old_val'\" >&5\nprintf \"%s\\n\" \"$as_me:   former value:  \\`$ac_old_val'\" >&2;}\n\t{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}:   current value: \\`$ac_new_val'\" >&5\nprintf \"%s\\n\" \"$as_me:   current value: \\`$ac_new_val'\" >&2;}\n      fi;;\n  esac\n  # Pass precious variables to config.status.\n  if test \"$ac_new_set\" = set; then\n    case $ac_new_val in\n    *\\'*) ac_arg=$ac_var=`printf \"%s\\n\" \"$ac_new_val\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"` ;;\n    *) ac_arg=$ac_var=$ac_new_val ;;\n    esac\n    case \" $ac_configure_args \" in\n      *\" '$ac_arg' \"*) ;; # Avoid dups.  Use of quotes ensures accuracy.\n      *) as_fn_append ac_configure_args \" '$ac_arg'\" ;;\n    esac\n  fi\ndone\nif $ac_cache_corrupted; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build\" >&5\nprintf \"%s\\n\" \"$as_me: error: changes in the environment can compromise the build\" >&2;}\n  as_fn_error $? \"run \\`${MAKE-make} distclean' and/or \\`rm $cache_file'\n\t    and start over\" \"$LINENO\" 5\nfi\n## -------------------- ##\n## Main body of script. 
##\n## -------------------- ##\n\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\n\n\n\n\n\n\n\n\n\nCONFIGURE_CFLAGS=\nSPECIFIED_CFLAGS=\"${CFLAGS}\"\n\n\n\n\n\nCONFIGURE_CXXFLAGS=\nSPECIFIED_CXXFLAGS=\"${CXXFLAGS}\"\n\n\n\n\n\nCONFIG=`echo ${ac_configure_args} | sed -e 's#'\"'\"'\\([^ ]*\\)'\"'\"'#\\1#g'`\n\n\nrev=2\n\n\nsrcroot=$srcdir\nif test \"x${srcroot}\" = \"x.\" ; then\n  srcroot=\"\"\nelse\n  srcroot=\"${srcroot}/\"\nfi\n\nabs_srcroot=\"`cd \\\"${srcdir}\\\"; pwd`/\"\n\n\nobjroot=\"\"\n\nabs_objroot=\"`pwd`/\"\n\n\ncase \"$prefix\" in\n   *\\ * ) as_fn_error $? \"Prefix should not contain spaces\" \"$LINENO\" 5 ;;\n   \"NONE\" ) prefix=\"/usr/local\" ;;\nesac\ncase \"$exec_prefix\" in\n   *\\ * ) as_fn_error $? \"Exec prefix should not contain spaces\" \"$LINENO\" 5 ;;\n   \"NONE\" ) exec_prefix=$prefix ;;\nesac\nPREFIX=$prefix\n\nBINDIR=`eval echo $bindir`\nBINDIR=`eval echo $BINDIR`\n\nINCLUDEDIR=`eval echo $includedir`\nINCLUDEDIR=`eval echo $INCLUDEDIR`\n\nLIBDIR=`eval echo $libdir`\nLIBDIR=`eval echo $LIBDIR`\n\nDATADIR=`eval echo $datadir`\nDATADIR=`eval echo $DATADIR`\n\nMANDIR=`eval echo $mandir`\nMANDIR=`eval echo $MANDIR`\n\n\n# Extract the first word of \"xsltproc\", so it can be a program name with args.\nset dummy xsltproc; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_path_XSLTPROC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  case $XSLTPROC in\n  [\\\\/]* | ?:[\\\\/]*)\n  ac_cv_path_XSLTPROC=\"$XSLTPROC\" # Let the user override the test with a path.\n  ;;\n  *)\n  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_path_XSLTPROC=\"$as_dir$ac_word$ac_exec_ext\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\n  test -z \"$ac_cv_path_XSLTPROC\" && ac_cv_path_XSLTPROC=\"false\"\n  ;;\nesac\nfi\nXSLTPROC=$ac_cv_path_XSLTPROC\nif test -n \"$XSLTPROC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $XSLTPROC\" >&5\nprintf \"%s\\n\" \"$XSLTPROC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nif test -d \"/usr/share/xml/docbook/stylesheet/docbook-xsl\" ; then\n  DEFAULT_XSLROOT=\"/usr/share/xml/docbook/stylesheet/docbook-xsl\"\nelif test -d \"/usr/share/sgml/docbook/xsl-stylesheets\" ; then\n  DEFAULT_XSLROOT=\"/usr/share/sgml/docbook/xsl-stylesheets\"\nelse\n    DEFAULT_XSLROOT=\"\"\nfi\n\n# Check whether --with-xslroot was given.\nif test ${with_xslroot+y}\nthen :\n  withval=$with_xslroot;\nif test \"x$with_xslroot\" = \"xno\" ; then\n  XSLROOT=\"${DEFAULT_XSLROOT}\"\nelse\n  XSLROOT=\"${with_xslroot}\"\nfi\n\nelse $as_nop\n  XSLROOT=\"${DEFAULT_XSLROOT}\"\n\nfi\n\nif test \"x$XSLTPROC\" = \"xfalse\" ; then\n  XSLROOT=\"\"\nfi\n\n\nCFLAGS=$CFLAGS\n\n\n\n\n\n\n\n\n\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS 
>&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\nif test -n \"$ac_tool_prefix\"; then\n  # Extract the first word of \"${ac_tool_prefix}gcc\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}gcc; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... \" >&6; }\nif test ${ac_cv_prog_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CC\"; then\n  ac_cv_prog_CC=\"$CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_CC=\"${ac_tool_prefix}gcc\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nCC=$ac_cv_prog_CC\nif test -n \"$CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CC\" >&5\nprintf \"%s\\n\" \"$CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$ac_cv_prog_CC\"; then\n  ac_ct_CC=$CC\n  # Extract the first word of \"gcc\", so it can be a program name with args.\nset dummy gcc; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_CC\"; then\n  ac_cv_prog_ac_ct_CC=\"$ac_ct_CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_CC=\"gcc\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_CC=$ac_cv_prog_ac_ct_CC\nif test -n \"$ac_ct_CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC\" >&5\nprintf \"%s\\n\" \"$ac_ct_CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n  if test \"x$ac_ct_CC\" = x; then\n    CC=\"\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    CC=$ac_ct_CC\n  fi\nelse\n  CC=\"$ac_cv_prog_CC\"\nfi\n\nif test -z \"$CC\"; then\n          if test -n \"$ac_tool_prefix\"; then\n    # Extract the first word of \"${ac_tool_prefix}cc\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}cc; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CC\"; then\n  ac_cv_prog_CC=\"$CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_CC=\"${ac_tool_prefix}cc\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nCC=$ac_cv_prog_CC\nif test -n \"$CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CC\" >&5\nprintf \"%s\\n\" \"$CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n  fi\nfi\nif test -z \"$CC\"; then\n  # Extract the first word of \"cc\", so it can be a program name with args.\nset dummy cc; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CC\"; then\n  ac_cv_prog_CC=\"$CC\" # Let the user override the test.\nelse\n  ac_prog_rejected=no\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    if test \"$as_dir$ac_word$ac_exec_ext\" = \"/usr/ucb/cc\"; then\n       ac_prog_rejected=yes\n       continue\n     fi\n    ac_cv_prog_CC=\"cc\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nif test $ac_prog_rejected = yes; then\n  # We found a bogon in the path, so make sure we never use it.\n  set dummy $ac_cv_prog_CC\n  shift\n  if test $# != 0; then\n    # We chose a different compiler from the bogus one.\n    # However, it has the same basename, so the bogon will be chosen\n    # first if we set CC to just the basename; use the full file name.\n    shift\n    ac_cv_prog_CC=\"$as_dir$ac_word${1+' '}$@\"\n  fi\nfi\nfi\nfi\nCC=$ac_cv_prog_CC\nif test -n \"$CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CC\" >&5\nprintf \"%s\\n\" \"$CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$CC\"; then\n  if test -n \"$ac_tool_prefix\"; then\n  for ac_prog in cl.exe\n  do\n    # Extract the first word of \"$ac_tool_prefix$ac_prog\", so it can be a program name with args.\nset dummy $ac_tool_prefix$ac_prog; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CC\"; then\n  ac_cv_prog_CC=\"$CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_CC=\"$ac_tool_prefix$ac_prog\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nCC=$ac_cv_prog_CC\nif test -n \"$CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CC\" >&5\nprintf \"%s\\n\" \"$CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n    test -n \"$CC\" && break\n  done\nfi\nif test -z \"$CC\"; then\n  ac_ct_CC=$CC\n  for ac_prog in cl.exe\ndo\n  # Extract the first word of \"$ac_prog\", so it can be a program name with args.\nset dummy $ac_prog; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_CC\"; then\n  ac_cv_prog_ac_ct_CC=\"$ac_ct_CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_CC=\"$ac_prog\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_CC=$ac_cv_prog_ac_ct_CC\nif test -n \"$ac_ct_CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC\" >&5\nprintf \"%s\\n\" \"$ac_ct_CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n  test -n \"$ac_ct_CC\" && break\ndone\n\n  if test \"x$ac_ct_CC\" = x; then\n    CC=\"\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    CC=$ac_ct_CC\n  fi\nfi\n\nfi\nif test -z \"$CC\"; then\n  if test -n \"$ac_tool_prefix\"; then\n  # Extract the first word of \"${ac_tool_prefix}clang\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}clang; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CC\"; then\n  ac_cv_prog_CC=\"$CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_CC=\"${ac_tool_prefix}clang\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nCC=$ac_cv_prog_CC\nif test -n \"$CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CC\" >&5\nprintf \"%s\\n\" \"$CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$ac_cv_prog_CC\"; then\n  ac_ct_CC=$CC\n  # Extract the first word of \"clang\", so it can be a program name with args.\nset dummy clang; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_CC+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_CC\"; then\n  ac_cv_prog_ac_ct_CC=\"$ac_ct_CC\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_CC=\"clang\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_CC=$ac_cv_prog_ac_ct_CC\nif test -n \"$ac_ct_CC\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC\" >&5\nprintf \"%s\\n\" \"$ac_ct_CC\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n  if test \"x$ac_ct_CC\" = x; then\n    CC=\"\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    CC=$ac_ct_CC\n  fi\nelse\n  CC=\"$ac_cv_prog_CC\"\nfi\n\nfi\n\n\ntest -z \"$CC\" && { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error $? 
\"no acceptable C compiler found in \\$PATH\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n\n# Provide some information about the compiler.\nprintf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for C compiler version\" >&5\nset X $ac_compile\nac_compiler=$2\nfor ac_option in --version -v -V -qversion -version; do\n  { { ac_try=\"$ac_compiler $ac_option >&5\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_compiler $ac_option >&5\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    sed '10a\\\n... rest of stderr output deleted ...\n         10q' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n  fi\n  rm -f conftest.er1 conftest.err\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; }\ndone\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nac_clean_files_save=$ac_clean_files\nac_clean_files=\"$ac_clean_files a.out a.out.dSYM a.exe b.out\"\n# Try to create an executable without -o first, disregard a.out.\n# It will help us diagnose broken compilers, and finding out an intuition\n# of exeext.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether the C compiler works\" >&5\nprintf %s \"checking whether the C compiler works... 
\" >&6; }\nac_link_default=`printf \"%s\\n\" \"$ac_link\" | sed 's/ -o *conftest[^ ]*//'`\n\n# The possible output files:\nac_files=\"a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*\"\n\nac_rmfiles=\nfor ac_file in $ac_files\ndo\n  case $ac_file in\n    *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;;\n    * ) ac_rmfiles=\"$ac_rmfiles $ac_file\";;\n  esac\ndone\nrm -f $ac_rmfiles\n\nif { { ac_try=\"$ac_link_default\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_link_default\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; }\nthen :\n  # Autoconf-2.13 could set the ac_cv_exeext variable to `no'.\n# So ignore a value of `no', otherwise this would lead to `EXEEXT = no'\n# in a Makefile.  
We should not override ac_cv_exeext if it was cached,\n# so that the user can short-circuit this test for compilers unknown to\n# Autoconf.\nfor ac_file in $ac_files ''\ndo\n  test -f \"$ac_file\" || continue\n  case $ac_file in\n    *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj )\n\t;;\n    [ab].out )\n\t# We found the default executable, but exeext='' is most\n\t# certainly right.\n\tbreak;;\n    *.* )\n\tif test ${ac_cv_exeext+y} && test \"$ac_cv_exeext\" != no;\n\tthen :; else\n\t   ac_cv_exeext=`expr \"$ac_file\" : '[^.]*\\(\\..*\\)'`\n\tfi\n\t# We set ac_cv_exeext here because the later test for it is not\n\t# safe: cross compilers may not add the suffix if given an `-o'\n\t# argument, so we may need to know it at that point already.\n\t# Even if this section looks crufty: it has the advantage of\n\t# actually working.\n\tbreak;;\n    * )\n\tbreak;;\n  esac\ndone\ntest \"$ac_cv_exeext\" = no && ac_cv_exeext=\n\nelse $as_nop\n  ac_file=''\nfi\nif test -z \"$ac_file\"\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nprintf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n{ { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"C compiler cannot create executables\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name\" >&5\nprintf %s \"checking for C compiler default output file name... 
\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_file\" >&5\nprintf \"%s\\n\" \"$ac_file\" >&6; }\nac_exeext=$ac_cv_exeext\n\nrm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out\nac_clean_files=$ac_clean_files_save\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for suffix of executables\" >&5\nprintf %s \"checking for suffix of executables... \" >&6; }\nif { { ac_try=\"$ac_link\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_link\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; }\nthen :\n  # If both `conftest.exe' and `conftest' are `present' (well, observable)\n# catch `conftest.exe'.  For instance with Cygwin, `ls conftest' will\n# work properly (i.e., refer to `conftest.exe'), while it won't with\n# `rm'.\nfor ac_file in conftest.exe conftest conftest.*; do\n  test -f \"$ac_file\" || continue\n  case $ac_file in\n    *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;;\n    *.* ) ac_cv_exeext=`expr \"$ac_file\" : '[^.]*\\(\\..*\\)'`\n\t  break;;\n    * ) break;;\n  esac\ndone\nelse $as_nop\n  { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error $? \"cannot compute suffix of executables: cannot compile and link\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\nfi\nrm -f conftest conftest$ac_cv_exeext\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext\" >&5\nprintf \"%s\\n\" \"$ac_cv_exeext\" >&6; }\n\nrm -f conftest.$ac_ext\nEXEEXT=$ac_cv_exeext\nac_exeext=$EXEEXT\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <stdio.h>\nint\nmain (void)\n{\nFILE *f = fopen (\"conftest.out\", \"w\");\n return ferror (f) || fclose (f) != 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nac_clean_files=\"$ac_clean_files conftest.out\"\n# Check that the compiler produces executables we can run.  If not, either\n# the compiler is broken, or we cross compile.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling\" >&5\nprintf %s \"checking whether we are cross compiling... \" >&6; }\nif test \"$cross_compiling\" != yes; then\n  { { ac_try=\"$ac_link\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_link\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; }\n  if { ac_try='./conftest$ac_cv_exeext'\n  { { case \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_try\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? 
= $ac_status\" >&5\n  test $ac_status = 0; }; }; then\n    cross_compiling=no\n  else\n    if test \"$cross_compiling\" = maybe; then\n\tcross_compiling=yes\n    else\n\t{ { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot run C compiled programs.\nIf you meant to cross compile, use \\`--host'.\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n    fi\n  fi\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $cross_compiling\" >&5\nprintf \"%s\\n\" \"$cross_compiling\" >&6; }\n\nrm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out\nac_clean_files=$ac_clean_files_save\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for suffix of object files\" >&5\nprintf %s \"checking for suffix of object files... \" >&6; }\nif test ${ac_cv_objext+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nrm -f conftest.o conftest.obj\nif { { ac_try=\"$ac_compile\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_compile\") 2>&5\n  ac_status=$?\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? 
= $ac_status\" >&5\n  test $ac_status = 0; }\nthen :\n  for ac_file in conftest.o conftest.obj conftest.*; do\n  test -f \"$ac_file\" || continue;\n  case $ac_file in\n    *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;;\n    *) ac_cv_objext=`expr \"$ac_file\" : '.*\\.\\(.*\\)'`\n       break;;\n  esac\ndone\nelse $as_nop\n  printf \"%s\\n\" \"$as_me: failed program was:\" >&5\nsed 's/^/| /' conftest.$ac_ext >&5\n\n{ { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error $? \"cannot compute suffix of object files: cannot compile\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\nfi\nrm -f conftest.$ac_cv_objext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext\" >&5\nprintf \"%s\\n\" \"$ac_cv_objext\" >&6; }\nOBJEXT=$ac_cv_objext\nac_objext=$OBJEXT\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C\" >&5\nprintf %s \"checking whether the compiler supports GNU C... \" >&6; }\nif test ${ac_cv_c_compiler_gnu+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n#ifndef __GNUC__\n       choke me\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_compiler_gnu=yes\nelse $as_nop\n  ac_compiler_gnu=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_cv_c_compiler_gnu=$ac_compiler_gnu\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu\" >&5\nprintf \"%s\\n\" \"$ac_cv_c_compiler_gnu\" >&6; }\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test $ac_compiler_gnu = yes; then\n  GCC=yes\nelse\n  GCC=\nfi\nac_test_CFLAGS=${CFLAGS+y}\nac_save_CFLAGS=$CFLAGS\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g\" >&5\nprintf %s \"checking whether $CC accepts -g... \" >&6; }\nif test ${ac_cv_prog_cc_g+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_save_c_werror_flag=$ac_c_werror_flag\n   ac_c_werror_flag=yes\n   ac_cv_prog_cc_g=no\n   CFLAGS=\"-g\"\n   cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cc_g=yes\nelse $as_nop\n  CFLAGS=\"\"\n      cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n\nelse $as_nop\n  ac_c_werror_flag=$ac_save_c_werror_flag\n\t CFLAGS=\"-g\"\n\t cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cc_g=yes\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n   ac_c_werror_flag=$ac_save_c_werror_flag\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cc_g\" >&6; }\nif test $ac_test_CFLAGS; then\n  CFLAGS=$ac_save_CFLAGS\nelif test $ac_cv_prog_cc_g = yes; then\n  if test \"$GCC\" = yes; then\n    CFLAGS=\"-g -O2\"\n  else\n    CFLAGS=\"-g\"\n  fi\nelse\n  if test \"$GCC\" = yes; then\n    CFLAGS=\"-O2\"\n  else\n    CFLAGS=\n  fi\nfi\nac_prog_cc_stdc=no\nif test x$ac_prog_cc_stdc = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C11 features\" >&5\nprintf %s \"checking for $CC option to enable C11 features... \" >&6; }\nif test ${ac_cv_prog_cc_c11+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_prog_cc_c11=no\nac_save_CC=$CC\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$ac_c_conftest_c11_program\n_ACEOF\nfor ac_arg in '' -std=gnu11\ndo\n  CC=\"$ac_save_CC $ac_arg\"\n  if ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cc_c11=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam\n  test \"x$ac_cv_prog_cc_c11\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCC=$ac_save_CC\nfi\n\nif test \"x$ac_cv_prog_cc_c11\" = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: unsupported\" >&5\nprintf \"%s\\n\" \"unsupported\" >&6; }\nelse $as_nop\n  if test \"x$ac_cv_prog_cc_c11\" = x\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: none needed\" >&5\nprintf \"%s\\n\" \"none needed\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c11\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cc_c11\" >&6; }\n     CC=\"$CC $ac_cv_prog_cc_c11\"\nfi\n  ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c11\n  ac_prog_cc_stdc=c11\nfi\nfi\nif test x$ac_prog_cc_stdc = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C99 features\" >&5\nprintf %s \"checking for $CC option to enable C99 features... \" >&6; }\nif test ${ac_cv_prog_cc_c99+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_prog_cc_c99=no\nac_save_CC=$CC\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$ac_c_conftest_c99_program\n_ACEOF\nfor ac_arg in '' -std=gnu99 -std=c99 -c99 -qlanglvl=extc1x -qlanglvl=extc99 -AC99 -D_STDC_C99=\ndo\n  CC=\"$ac_save_CC $ac_arg\"\n  if ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cc_c99=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam\n  test \"x$ac_cv_prog_cc_c99\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCC=$ac_save_CC\nfi\n\nif test \"x$ac_cv_prog_cc_c99\" = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: unsupported\" >&5\nprintf \"%s\\n\" \"unsupported\" >&6; }\nelse $as_nop\n  if test \"x$ac_cv_prog_cc_c99\" = x\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: none needed\" >&5\nprintf \"%s\\n\" \"none needed\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cc_c99\" >&6; }\n     CC=\"$CC $ac_cv_prog_cc_c99\"\nfi\n  ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c99\n  ac_prog_cc_stdc=c99\nfi\nfi\nif test x$ac_prog_cc_stdc = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $CC option to enable C89 features\" >&5\nprintf %s \"checking for $CC option to enable C89 features... \" >&6; }\nif test ${ac_cv_prog_cc_c89+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_prog_cc_c89=no\nac_save_CC=$CC\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$ac_c_conftest_c89_program\n_ACEOF\nfor ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std -Ae \"-Aa -D_HPUX_SOURCE\" \"-Xc -D__EXTENSIONS__\"\ndo\n  CC=\"$ac_save_CC $ac_arg\"\n  if ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cc_c89=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam\n  test \"x$ac_cv_prog_cc_c89\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCC=$ac_save_CC\nfi\n\nif test \"x$ac_cv_prog_cc_c89\" = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: unsupported\" >&5\nprintf \"%s\\n\" \"unsupported\" >&6; }\nelse $as_nop\n  if test \"x$ac_cv_prog_cc_c89\" = x\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: none needed\" >&5\nprintf \"%s\\n\" \"none needed\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cc_c89\" >&6; }\n     CC=\"$CC $ac_cv_prog_cc_c89\"\nfi\n  ac_cv_prog_cc_stdc=$ac_cv_prog_cc_c89\n  ac_prog_cc_stdc=c89\nfi\nfi\n\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\n\nif test \"x$GCC\" != \"xyes\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler is MSVC\" >&5\nprintf %s \"checking whether compiler is MSVC... \" >&6; }\nif test ${je_cv_msvc+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n\n#ifndef _MSC_VER\n  int fail-1;\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_msvc=yes\nelse $as_nop\n  je_cv_msvc=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_msvc\" >&5\nprintf \"%s\\n\" \"$je_cv_msvc\" >&6; }\nfi\n\nje_cv_cray_prgenv_wrapper=\"\"\nif test \"x${PE_ENV}\" != \"x\" ; then\n  case \"${CC}\" in\n    CC|cc)\n\tje_cv_cray_prgenv_wrapper=\"yes\"\n\t;;\n    *)\n       ;;\n  esac\nfi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler is cray\" >&5\nprintf %s \"checking whether compiler is cray... \" >&6; }\nif test ${je_cv_cray+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n#ifndef _CRAYC\n  int fail-1;\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cray=yes\nelse $as_nop\n  je_cv_cray=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_cray\" >&5\nprintf \"%s\\n\" \"$je_cv_cray\" >&6; }\n\nif test \"x${je_cv_cray}\" = \"xyes\" ; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether cray compiler version is 8.4\" >&5\nprintf %s \"checking whether cray compiler version is 8.4... \" >&6; }\nif test ${je_cv_cray_84+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n\n#if !(_RELEASE_MAJOR == 8 && _RELEASE_MINOR == 4)\n  int fail-1;\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cray_84=yes\nelse $as_nop\n  je_cv_cray_84=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_cray_84\" >&5\nprintf \"%s\\n\" \"$je_cv_cray_84\" >&6; }\nfi\n\nif test \"x$GCC\" = \"xyes\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -std=gnu11\" >&5\nprintf %s \"checking whether compiler supports -std=gnu11... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-std=gnu11\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-std=gnu11\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  if test \"x$je_cv_cflags_added\" = \"x-std=gnu11\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_RESTRICT  \" >>confdefs.h\n\n  else\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -std=gnu99\" >&5\nprintf %s \"checking whether compiler supports -std=gnu99... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-std=gnu99\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-std=gnu99\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n    if test \"x$je_cv_cflags_added\" = \"x-std=gnu99\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_RESTRICT  \" >>confdefs.h\n\n    fi\n  fi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror=unknown-warning-option\" >&5\nprintf %s \"checking whether compiler supports -Werror=unknown-warning-option... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror=unknown-warning-option\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror=unknown-warning-option\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wall\" >&5\nprintf %s \"checking whether compiler supports -Wall... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wall\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wall\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wextra\" >&5\nprintf %s \"checking whether compiler supports -Wextra... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wextra\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wextra\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wshorten-64-to-32\" >&5\nprintf %s \"checking whether compiler supports -Wshorten-64-to-32... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wshorten-64-to-32\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wshorten-64-to-32\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wsign-compare\" >&5\nprintf %s \"checking whether compiler supports -Wsign-compare... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wsign-compare\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wsign-compare\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wundef\" >&5\nprintf %s \"checking whether compiler supports -Wundef... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wundef\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wundef\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wno-format-zero-length\" >&5\nprintf %s \"checking whether compiler supports -Wno-format-zero-length... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wno-format-zero-length\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wno-format-zero-length\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wpointer-arith\" >&5\nprintf %s \"checking whether compiler supports -Wpointer-arith... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wpointer-arith\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wpointer-arith\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wno-missing-braces\" >&5\nprintf %s \"checking whether compiler supports -Wno-missing-braces... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wno-missing-braces\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wno-missing-braces\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wno-missing-field-initializers\" >&5\nprintf %s \"checking whether compiler supports -Wno-missing-field-initializers... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wno-missing-field-initializers\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wno-missing-field-initializers\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wno-missing-attributes\" >&5\nprintf %s \"checking whether compiler supports -Wno-missing-attributes... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wno-missing-attributes\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wno-missing-attributes\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -pipe\" >&5\nprintf %s \"checking whether compiler supports -pipe... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-pipe\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-pipe\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -g3\" >&5\nprintf %s \"checking whether compiler supports -g3... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-g3\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-g3\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nelif test \"x$je_cv_msvc\" = \"xyes\" ; then\n  CC=\"$CC -nologo\"\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Zi\" >&5\nprintf %s \"checking whether compiler supports -Zi... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Zi\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Zi\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -MT\" >&5\nprintf %s \"checking whether compiler supports -MT... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-MT\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-MT\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -W3\" >&5\nprintf %s \"checking whether compiler supports -W3... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-W3\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-W3\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -FS\" >&5\nprintf %s \"checking whether compiler supports -FS... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-FS\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-FS\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  T_APPEND_V=-I${srcdir}/include/msvc_compat\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\nfi\nif test \"x$je_cv_cray\" = \"xyes\" ; then\n    if test \"x$je_cv_cray_84\" = \"xyes\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -hipa2\" >&5\nprintf %s \"checking whether compiler supports -hipa2... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-hipa2\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-hipa2\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -hnognu\" >&5\nprintf %s \"checking whether compiler supports -hnognu... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-hnognu\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-hnognu\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  fi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -hnomessage=128\" >&5\nprintf %s \"checking whether compiler supports -hnomessage=128... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-hnomessage=128\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-hnomessage=128\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -hnomessage=1357\" >&5\nprintf %s \"checking whether compiler supports -hnomessage=1357... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-hnomessage=1357\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-hnomessage=1357\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nfi\n\n\n\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor\" >&5\nprintf %s \"checking how to run the C preprocessor... \" >&6; }\n# On Suns, sometimes $CPP names a directory.\nif test -n \"$CPP\" && test -d \"$CPP\"; then\n  CPP=\nfi\nif test -z \"$CPP\"; then\n  if test ${ac_cv_prog_CPP+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n      # Double quotes because $CC needs to be expanded\n    for CPP in \"$CC -E\" \"$CC -E -traditional-cpp\" cpp /lib/cpp\n    do\n      ac_preproc_ok=false\nfor ac_c_preproc_warn_flag in '' yes\ndo\n  # Use a header file that comes with gcc, so configuring glibc\n  # with a fresh cross-compiler works.\n  # On the NeXT, cc -E runs the code through the compiler's parser,\n  # not just through cpp. \"Syntax error\" is here to catch this case.\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <limits.h>\n\t\t     Syntax error\n_ACEOF\nif ac_fn_c_try_cpp \"$LINENO\"\nthen :\n\nelse $as_nop\n  # Broken: fails on valid input.\ncontinue\nfi\nrm -f conftest.err conftest.i conftest.$ac_ext\n\n  # OK, works on sane cases.  Now check whether nonexistent headers\n  # can be detected and how.\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <ac_nonexistent.h>\n_ACEOF\nif ac_fn_c_try_cpp \"$LINENO\"\nthen :\n  # Broken: success on invalid input.\ncontinue\nelse $as_nop\n  # Passes both tests.\nac_preproc_ok=:\nbreak\nfi\nrm -f conftest.err conftest.i conftest.$ac_ext\n\ndone\n# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.\nrm -f conftest.i conftest.err conftest.$ac_ext\nif $ac_preproc_ok\nthen :\n  break\nfi\n\n    done\n    ac_cv_prog_CPP=$CPP\n\nfi\n  CPP=$ac_cv_prog_CPP\nelse\n  ac_cv_prog_CPP=$CPP\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CPP\" >&5\nprintf \"%s\\n\" \"$CPP\" >&6; }\nac_preproc_ok=false\nfor ac_c_preproc_warn_flag in '' yes\ndo\n  # Use a header file that comes with gcc, so configuring glibc\n  # with a fresh cross-compiler works.\n  # On the NeXT, cc -E runs the code through the compiler's parser,\n  # not just through cpp. \"Syntax error\" is here to catch this case.\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <limits.h>\n\t\t     Syntax error\n_ACEOF\nif ac_fn_c_try_cpp \"$LINENO\"\nthen :\n\nelse $as_nop\n  # Broken: fails on valid input.\ncontinue\nfi\nrm -f conftest.err conftest.i conftest.$ac_ext\n\n  # OK, works on sane cases.  Now check whether nonexistent headers\n  # can be detected and how.\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <ac_nonexistent.h>\n_ACEOF\nif ac_fn_c_try_cpp \"$LINENO\"\nthen :\n  # Broken: success on invalid input.\ncontinue\nelse $as_nop\n  # Passes both tests.\nac_preproc_ok=:\nbreak\nfi\nrm -f conftest.err conftest.i conftest.$ac_ext\n\ndone\n# Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped.\nrm -f conftest.i conftest.err conftest.$ac_ext\nif $ac_preproc_ok\nthen :\n\nelse $as_nop\n  { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error $? \"C preprocessor \\\"$CPP\\\" fails sanity check\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\nfi\n\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\n\n# Check whether --enable-cxx was given.\nif test ${enable_cxx+y}\nthen :\n  enableval=$enable_cxx; if test \"x$enable_cxx\" = \"xno\" ; then\n  enable_cxx=\"0\"\nelse\n  enable_cxx=\"1\"\nfi\n\nelse $as_nop\n  enable_cxx=\"1\"\n\nfi\n\nif test \"x$enable_cxx\" = \"x1\" ; then\n      # ===========================================================================\n#  https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx.html\n# ===========================================================================\n#\n# SYNOPSIS\n#\n#   AX_CXX_COMPILE_STDCXX(VERSION, [ext|noext], [mandatory|optional])\n#\n# DESCRIPTION\n#\n#   Check for baseline language coverage in the compiler for the specified\n#   version of the C++ standard.  If necessary, add switches to CXX and\n#   CXXCPP to enable support.  VERSION may be '11' (for the C++11 standard)\n#   or '14' (for the C++14 standard).\n#\n#   The second argument, if specified, indicates whether you insist on an\n#   extended mode (e.g. -std=gnu++11) or a strict conformance mode (e.g.\n#   -std=c++11).  
If neither is specified, you get whatever works, with\n#   preference for an extended mode.\n#\n#   The third argument, if specified 'mandatory' or if left unspecified,\n#   indicates that baseline support for the specified C++ standard is\n#   required and that the macro should error out if no mode with that\n#   support is found.  If specified 'optional', then configuration proceeds\n#   regardless, after defining HAVE_CXX${VERSION} if and only if a\n#   supporting mode is found.\n#\n# LICENSE\n#\n#   Copyright (c) 2008 Benjamin Kosnik <bkoz@redhat.com>\n#   Copyright (c) 2012 Zack Weinberg <zackw@panix.com>\n#   Copyright (c) 2013 Roy Stogner <roystgnr@ices.utexas.edu>\n#   Copyright (c) 2014, 2015 Google Inc.; contributed by Alexey Sokolov <sokolov@google.com>\n#   Copyright (c) 2015 Paul Norman <penorman@mac.com>\n#   Copyright (c) 2015 Moritz Klammler <moritz@klammler.eu>\n#   Copyright (c) 2016, 2018 Krzesimir Nowak <qdlacz@gmail.com>\n#   Copyright (c) 2019 Enji Cooper <yaneurabeya@gmail.com>\n#\n#   Copying and distribution of this file, with or without modification, are\n#   permitted in any medium without royalty provided the copyright notice\n#   and this notice are preserved.  
This file is offered as-is, without any\n#   warranty.\n\n#serial 11\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\nif test -z \"$CXX\"; then\n  if test -n \"$CCC\"; then\n    CXX=$CCC\n  else\n    if test -n \"$ac_tool_prefix\"; then\n  for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++\n  do\n    # Extract the first word of \"$ac_tool_prefix$ac_prog\", so it can be a program name with args.\nset dummy $ac_tool_prefix$ac_prog; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... \" >&6; }\nif test ${ac_cv_prog_CXX+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$CXX\"; then\n  ac_cv_prog_CXX=\"$CXX\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_CXX=\"$ac_tool_prefix$ac_prog\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nCXX=$ac_cv_prog_CXX\nif test -n \"$CXX\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $CXX\" >&5\nprintf \"%s\\n\" \"$CXX\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n    test -n \"$CXX\" && break\n  done\nfi\nif test -z \"$CXX\"; then\n  ac_ct_CXX=$CXX\n  for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC clang++\ndo\n  # Extract the first 
word of \"$ac_prog\", so it can be a program name with args.\nset dummy $ac_prog; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... \" >&6; }\nif test ${ac_cv_prog_ac_ct_CXX+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_CXX\"; then\n  ac_cv_prog_ac_ct_CXX=\"$ac_ct_CXX\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_CXX=\"$ac_prog\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_CXX=$ac_cv_prog_ac_ct_CXX\nif test -n \"$ac_ct_CXX\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX\" >&5\nprintf \"%s\\n\" \"$ac_ct_CXX\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n  test -n \"$ac_ct_CXX\" && break\ndone\n\n  if test \"x$ac_ct_CXX\" = x; then\n    CXX=\"g++\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    CXX=$ac_ct_CXX\n  fi\nfi\n\n  fi\nfi\n# Provide some information about the compiler.\nprintf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for C++ compiler version\" >&5\nset X $ac_compile\nac_compiler=$2\nfor ac_option in --version -v -V -qversion; do\n  { { ac_try=\"$ac_compiler $ac_option >&5\"\ncase \"(($ac_try\" in\n  *\\\"* | *\\`* | *\\\\*) ac_try_echo=\\$ac_try;;\n  *) 
ac_try_echo=$ac_try;;\nesac\neval ac_try_echo=\"\\\"\\$as_me:${as_lineno-$LINENO}: $ac_try_echo\\\"\"\nprintf \"%s\\n\" \"$ac_try_echo\"; } >&5\n  (eval \"$ac_compiler $ac_option >&5\") 2>conftest.err\n  ac_status=$?\n  if test -s conftest.err; then\n    sed '10a\\\n... rest of stderr output deleted ...\n         10q' conftest.err >conftest.er1\n    cat conftest.er1 >&5\n  fi\n  rm -f conftest.er1 conftest.err\n  printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: \\$? = $ac_status\" >&5\n  test $ac_status = 0; }\ndone\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether the compiler supports GNU C++\" >&5\nprintf %s \"checking whether the compiler supports GNU C++... \" >&6; }\nif test ${ac_cv_cxx_compiler_gnu+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n#ifndef __GNUC__\n       choke me\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  ac_compiler_gnu=yes\nelse $as_nop\n  ac_compiler_gnu=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_cv_cxx_compiler_gnu=$ac_compiler_gnu\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu\" >&5\nprintf \"%s\\n\" \"$ac_cv_cxx_compiler_gnu\" >&6; }\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\nif test $ac_compiler_gnu = yes; then\n  GXX=yes\nelse\n  GXX=\nfi\nac_test_CXXFLAGS=${CXXFLAGS+y}\nac_save_CXXFLAGS=$CXXFLAGS\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g\" >&5\nprintf %s \"checking whether $CXX accepts -g... \" >&6; }\nif test ${ac_cv_prog_cxx_g+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_save_cxx_werror_flag=$ac_cxx_werror_flag\n   ac_cxx_werror_flag=yes\n   ac_cv_prog_cxx_g=no\n   CXXFLAGS=\"-g\"\n   cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cxx_g=yes\nelse $as_nop\n  CXXFLAGS=\"\"\n      cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n\nelse $as_nop\n  ac_cxx_werror_flag=$ac_save_cxx_werror_flag\n\t CXXFLAGS=\"-g\"\n\t cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cxx_g=yes\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n   ac_cxx_werror_flag=$ac_save_cxx_werror_flag\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cxx_g\" >&6; }\nif test $ac_test_CXXFLAGS; then\n  CXXFLAGS=$ac_save_CXXFLAGS\nelif test $ac_cv_prog_cxx_g = yes; then\n  if test \"$GXX\" = yes; then\n    CXXFLAGS=\"-g -O2\"\n  else\n    CXXFLAGS=\"-g\"\n  fi\nelse\n  if test \"$GXX\" = yes; then\n    CXXFLAGS=\"-O2\"\n  else\n    CXXFLAGS=\n  fi\nfi\nac_prog_cxx_stdcxx=no\nif test x$ac_prog_cxx_stdcxx = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++11 features\" >&5\nprintf %s \"checking for $CXX option to enable C++11 features... \" >&6; }\nif test ${ac_cv_prog_cxx_11+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_prog_cxx_11=no\nac_save_CXX=$CXX\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$ac_cxx_conftest_cxx11_program\n_ACEOF\nfor ac_arg in '' -std=gnu++11 -std=gnu++0x -std=c++11 -std=c++0x -qlanglvl=extended0x -AA\ndo\n  CXX=\"$ac_save_CXX $ac_arg\"\n  if ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cxx_cxx11=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam\n  test \"x$ac_cv_prog_cxx_cxx11\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCXX=$ac_save_CXX\nfi\n\nif test \"x$ac_cv_prog_cxx_cxx11\" = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: unsupported\" >&5\nprintf \"%s\\n\" \"unsupported\" >&6; }\nelse $as_nop\n  if test \"x$ac_cv_prog_cxx_cxx11\" = x\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: none needed\" >&5\nprintf \"%s\\n\" \"none needed\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx11\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cxx_cxx11\" >&6; }\n     CXX=\"$CXX $ac_cv_prog_cxx_cxx11\"\nfi\n  ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx11\n  ac_prog_cxx_stdcxx=cxx11\nfi\nfi\nif test x$ac_prog_cxx_stdcxx = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $CXX option to enable C++98 features\" >&5\nprintf %s \"checking for $CXX option to enable C++98 features... \" >&6; }\nif test ${ac_cv_prog_cxx_98+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_prog_cxx_98=no\nac_save_CXX=$CXX\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n$ac_cxx_conftest_cxx98_program\n_ACEOF\nfor ac_arg in '' -std=gnu++98 -std=c++98 -qlanglvl=extended -AA\ndo\n  CXX=\"$ac_save_CXX $ac_arg\"\n  if ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  ac_cv_prog_cxx_cxx98=$ac_arg\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam\n  test \"x$ac_cv_prog_cxx_cxx98\" != \"xno\" && break\ndone\nrm -f conftest.$ac_ext\nCXX=$ac_save_CXX\nfi\n\nif test \"x$ac_cv_prog_cxx_cxx98\" = xno\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: unsupported\" >&5\nprintf \"%s\\n\" \"unsupported\" >&6; }\nelse $as_nop\n  if test \"x$ac_cv_prog_cxx_cxx98\" = x\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: none needed\" >&5\nprintf \"%s\\n\" \"none needed\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_cxx98\" >&5\nprintf \"%s\\n\" \"$ac_cv_prog_cxx_cxx98\" >&6; }\n     CXX=\"$CXX $ac_cv_prog_cxx_cxx98\"\nfi\n  ac_cv_prog_cxx_stdcxx=$ac_cv_prog_cxx_cxx98\n  ac_prog_cxx_stdcxx=cxx98\nfi\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\n\n  ax_cxx_compile_alternatives=\"17 1z\"    ax_cxx_compile_cxx17_required=false\n  ac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n  ac_success=no\n\n\n\n    if test x$ac_success = xno; then\n                for alternative in ${ax_cxx_compile_alternatives}; do\n      for switch in -std=c++${alternative} +std=c++${alternative} \"-h std=c++${alternative}\"; do\n        cachevar=`printf \"%s\\n\" \"ax_cv_cxx_compile_cxx17_$switch\" | $as_tr_sh`\n        { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether $CXX 
supports C++17 features with $switch\" >&5\nprintf %s \"checking whether $CXX supports C++17 features with $switch... \" >&6; }\nif eval test \\${$cachevar+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_save_CXX=\"$CXX\"\n           CXX=\"$CXX $switch\"\n           cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\n// If the compiler admits that it is not ready for C++11, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201103L\n\n#error \"This is not a C++11 compiler\"\n\n#else\n\nnamespace cxx11\n{\n\n  namespace test_static_assert\n  {\n\n    template <typename T>\n    struct check\n    {\n      static_assert(sizeof(int) <= sizeof(T), \"not big enough\");\n    };\n\n  }\n\n  namespace test_final_override\n  {\n\n    struct Base\n    {\n      virtual ~Base() {}\n      virtual void f() {}\n    };\n\n    struct Derived : public Base\n    {\n      virtual ~Derived() override {}\n      virtual void f() override {}\n    };\n\n  }\n\n  namespace test_double_right_angle_brackets\n  {\n\n    template < typename T >\n    struct check {};\n\n    typedef check<void> single_type;\n    typedef check<check<void>> double_type;\n    typedef check<check<check<void>>> triple_type;\n    typedef check<check<check<check<void>>>> quadruple_type;\n\n  }\n\n  namespace test_decltype\n  {\n\n    int\n    f()\n    {\n      int a = 1;\n      decltype(a) b = 2;\n      return a + b;\n    }\n\n  }\n\n  namespace test_type_deduction\n  {\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static const bool value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static const bool value = true;\n    };\n\n    template < typename T1, typename T2 >\n    auto\n    add(T1 a1, T2 a2) -> decltype(a1 + a2)\n    {\n      return a1 + a2;\n    }\n\n    int\n    test(const int c, volatile int v)\n    {\n      
static_assert(is_same<int, decltype(0)>::value == true, \"\");\n      static_assert(is_same<int, decltype(c)>::value == false, \"\");\n      static_assert(is_same<int, decltype(v)>::value == false, \"\");\n      auto ac = c;\n      auto av = v;\n      auto sumi = ac + av + 'x';\n      auto sumf = ac + av + 1.0;\n      static_assert(is_same<int, decltype(ac)>::value == true, \"\");\n      static_assert(is_same<int, decltype(av)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumi)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumf)>::value == false, \"\");\n      static_assert(is_same<int, decltype(add(c, v))>::value == true, \"\");\n      return (sumf > 0.0) ? sumi : add(c, v);\n    }\n\n  }\n\n  namespace test_noexcept\n  {\n\n    int f() { return 0; }\n    int g() noexcept { return 0; }\n\n    static_assert(noexcept(f()) == false, \"\");\n    static_assert(noexcept(g()) == true, \"\");\n\n  }\n\n  namespace test_constexpr\n  {\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c_r(const CharT *const s, const unsigned long acc) noexcept\n    {\n      return *s ? 
strlen_c_r(s + 1, acc + 1) : acc;\n    }\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c(const CharT *const s) noexcept\n    {\n      return strlen_c_r(s, 0UL);\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"1\") == 1UL, \"\");\n    static_assert(strlen_c(\"example\") == 7UL, \"\");\n    static_assert(strlen_c(\"another\\0example\") == 7UL, \"\");\n\n  }\n\n  namespace test_rvalue_references\n  {\n\n    template < int N >\n    struct answer\n    {\n      static constexpr int value = N;\n    };\n\n    answer<1> f(int&)       { return answer<1>(); }\n    answer<2> f(const int&) { return answer<2>(); }\n    answer<3> f(int&&)      { return answer<3>(); }\n\n    void\n    test()\n    {\n      int i = 0;\n      const int c = 0;\n      static_assert(decltype(f(i))::value == 1, \"\");\n      static_assert(decltype(f(c))::value == 2, \"\");\n      static_assert(decltype(f(0))::value == 3, \"\");\n    }\n\n  }\n\n  namespace test_uniform_initialization\n  {\n\n    struct test\n    {\n      static const int zero {};\n      static const int one {1};\n    };\n\n    static_assert(test::zero == 0, \"\");\n    static_assert(test::one == 1, \"\");\n\n  }\n\n  namespace test_lambdas\n  {\n\n    void\n    test1()\n    {\n      auto lambda1 = [](){};\n      auto lambda2 = lambda1;\n      lambda1();\n      lambda2();\n    }\n\n    int\n    test2()\n    {\n      auto a = [](int i, int j){ return i + j; }(1, 2);\n      auto b = []() -> int { return '0'; }();\n      auto c = [=](){ return a + b; }();\n      auto d = [&](){ return c; }();\n      auto e = [a, &b](int x) mutable {\n        const auto identity = [](int y){ return y; };\n        for (auto i = 0; i < a; ++i)\n          a += b--;\n        return x + identity(a + b);\n      }(0);\n      return a + b + c + d + e;\n    }\n\n    int\n    test3()\n    {\n      const auto nullary = [](){ return 0; };\n      const auto unary = [](int x){ return x; };\n      using 
nullary_t = decltype(nullary);\n      using unary_t = decltype(unary);\n      const auto higher1st = [](nullary_t f){ return f(); };\n      const auto higher2nd = [unary](nullary_t f1){\n        return [unary, f1](unary_t f2){ return f2(unary(f1())); };\n      };\n      return higher1st(nullary) + higher2nd(nullary)(unary);\n    }\n\n  }\n\n  namespace test_variadic_templates\n  {\n\n    template <int...>\n    struct sum;\n\n    template <int N0, int... N1toN>\n    struct sum<N0, N1toN...>\n    {\n      static constexpr auto value = N0 + sum<N1toN...>::value;\n    };\n\n    template <>\n    struct sum<>\n    {\n      static constexpr auto value = 0;\n    };\n\n    static_assert(sum<>::value == 0, \"\");\n    static_assert(sum<1>::value == 1, \"\");\n    static_assert(sum<23>::value == 23, \"\");\n    static_assert(sum<1, 2>::value == 3, \"\");\n    static_assert(sum<5, 5, 11>::value == 21, \"\");\n    static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, \"\");\n\n  }\n\n  // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae\n  // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function\n  // because of this.\n  namespace test_template_alias_sfinae\n  {\n\n    struct foo {};\n\n    template<typename T>\n    using member = typename T::member_type;\n\n    template<typename T>\n    void func(...) {}\n\n    template<typename T>\n    void func(member<T>*) {}\n\n    void test();\n\n    void test() { func<foo>(0); }\n\n  }\n\n}  // namespace cxx11\n\n#endif  // __cplusplus >= 201103L\n\n\n\n\n// If the compiler admits that it is not ready for C++14, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201402L\n\n#error \"This is not a C++14 compiler\"\n\n#else\n\nnamespace cxx14\n{\n\n  namespace test_polymorphic_lambdas\n  {\n\n    int\n    test()\n    {\n      const auto lambda = [](auto&&... 
args){\n        const auto istiny = [](auto x){\n          return (sizeof(x) == 1UL) ? 1 : 0;\n        };\n        const int aretiny[] = { istiny(args)... };\n        return aretiny[0];\n      };\n      return lambda(1, 1L, 1.0f, '1');\n    }\n\n  }\n\n  namespace test_binary_literals\n  {\n\n    constexpr auto ivii = 0b0000000000101010;\n    static_assert(ivii == 42, \"wrong value\");\n\n  }\n\n  namespace test_generalized_constexpr\n  {\n\n    template < typename CharT >\n    constexpr unsigned long\n    strlen_c(const CharT *const s) noexcept\n    {\n      auto length = 0UL;\n      for (auto p = s; *p; ++p)\n        ++length;\n      return length;\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"x\") == 1UL, \"\");\n    static_assert(strlen_c(\"test\") == 4UL, \"\");\n    static_assert(strlen_c(\"another\\0test\") == 7UL, \"\");\n\n  }\n\n  namespace test_lambda_init_capture\n  {\n\n    int\n    test()\n    {\n      auto x = 0;\n      const auto lambda1 = [a = x](int b){ return a + b; };\n      const auto lambda2 = [a = lambda1(x)](){ return a; };\n      return lambda2();\n    }\n\n  }\n\n  namespace test_digit_separators\n  {\n\n    constexpr auto ten_million = 100'000'000;\n    static_assert(ten_million == 100000000, \"\");\n\n  }\n\n  namespace test_return_type_deduction\n  {\n\n    auto f(int& x) { return x; }\n    decltype(auto) g(int& x) { return x; }\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static constexpr auto value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static constexpr auto value = true;\n    };\n\n    int\n    test()\n    {\n      auto x = 0;\n      static_assert(is_same<int, decltype(f(x))>::value, \"\");\n      static_assert(is_same<int&, decltype(g(x))>::value, \"\");\n      return x;\n    }\n\n  }\n\n}  // namespace cxx14\n\n#endif  // __cplusplus >= 201402L\n\n\n\n\n// If the compiler admits that it is not ready for 
C++17, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201703L\n\n#error \"This is not a C++17 compiler\"\n\n#else\n\n#include <initializer_list>\n#include <utility>\n#include <type_traits>\n\nnamespace cxx17\n{\n\n  namespace test_constexpr_lambdas\n  {\n\n    constexpr int foo = [](){return 42;}();\n\n  }\n\n  namespace test::nested_namespace::definitions\n  {\n\n  }\n\n  namespace test_fold_expression\n  {\n\n    template<typename... Args>\n    int multiply(Args... args)\n    {\n      return (args * ... * 1);\n    }\n\n    template<typename... Args>\n    bool all(Args... args)\n    {\n      return (args && ...);\n    }\n\n  }\n\n  namespace test_extended_static_assert\n  {\n\n    static_assert (true);\n\n  }\n\n  namespace test_auto_brace_init_list\n  {\n\n    auto foo = {5};\n    auto bar {5};\n\n    static_assert(std::is_same<std::initializer_list<int>, decltype(foo)>::value);\n    static_assert(std::is_same<int, decltype(bar)>::value);\n  }\n\n  namespace test_typename_in_template_template_parameter\n  {\n\n    template<template<typename> typename X> struct D;\n\n  }\n\n  namespace test_fallthrough_nodiscard_maybe_unused_attributes\n  {\n\n    int f1()\n    {\n      return 42;\n    }\n\n    [[nodiscard]] int f2()\n    {\n      [[maybe_unused]] auto unused = f1();\n\n      switch (f1())\n      {\n      case 17:\n        f1();\n        [[fallthrough]];\n      case 42:\n        f1();\n      }\n      return f1();\n    }\n\n  }\n\n  namespace test_extended_aggregate_initialization\n  {\n\n    struct base1\n    {\n      int b1, b2 = 42;\n    };\n\n    struct base2\n    {\n      base2() {\n        b3 = 42;\n      }\n      int b3;\n    };\n\n    struct derived : base1, base2\n    {\n        int d;\n    };\n\n    derived d1 {{1, 2}, {}, 4};  // full initialization\n    derived d2 {{}, {}, 4};      // value-initialized bases\n\n  }\n\n  namespace 
test_general_range_based_for_loop\n  {\n\n    struct iter\n    {\n      int i;\n\n      int& operator* ()\n      {\n        return i;\n      }\n\n      const int& operator* () const\n      {\n        return i;\n      }\n\n      iter& operator++()\n      {\n        ++i;\n        return *this;\n      }\n    };\n\n    struct sentinel\n    {\n      int i;\n    };\n\n    bool operator== (const iter& i, const sentinel& s)\n    {\n      return i.i == s.i;\n    }\n\n    bool operator!= (const iter& i, const sentinel& s)\n    {\n      return !(i == s);\n    }\n\n    struct range\n    {\n      iter begin() const\n      {\n        return {0};\n      }\n\n      sentinel end() const\n      {\n        return {5};\n      }\n    };\n\n    void f()\n    {\n      range r {};\n\n      for (auto i : r)\n      {\n        [[maybe_unused]] auto v = i;\n      }\n    }\n\n  }\n\n  namespace test_lambda_capture_asterisk_this_by_value\n  {\n\n    struct t\n    {\n      int i;\n      int foo()\n      {\n        return [*this]()\n        {\n          return i;\n        }();\n      }\n    };\n\n  }\n\n  namespace test_enum_class_construction\n  {\n\n    enum class byte : unsigned char\n    {};\n\n    byte foo {42};\n\n  }\n\n  namespace test_constexpr_if\n  {\n\n    template <bool cond>\n    int f ()\n    {\n      if constexpr(cond)\n      {\n        return 13;\n      }\n      else\n      {\n        return 42;\n      }\n    }\n\n  }\n\n  namespace test_selection_statement_with_initializer\n  {\n\n    int f()\n    {\n      return 13;\n    }\n\n    int f2()\n    {\n      if (auto i = f(); i > 0)\n      {\n        return 3;\n      }\n\n      switch (auto i = f(); i + 4)\n      {\n      case 17:\n        return 2;\n\n      default:\n        return 1;\n      }\n    }\n\n  }\n\n  namespace test_template_argument_deduction_for_class_templates\n  {\n\n    template <typename T1, typename T2>\n    struct pair\n    {\n      pair (T1 p1, T2 p2)\n        : m1 {p1},\n          m2 {p2}\n      {}\n\n      T1 
m1;\n      T2 m2;\n    };\n\n    void f()\n    {\n      [[maybe_unused]] auto p = pair{13, 42u};\n    }\n\n  }\n\n  namespace test_non_type_auto_template_parameters\n  {\n\n    template <auto n>\n    struct B\n    {};\n\n    B<5> b1;\n    B<'a'> b2;\n\n  }\n\n  namespace test_structured_bindings\n  {\n\n    int arr[2] = { 1, 2 };\n    std::pair<int, int> pr = { 1, 2 };\n\n    auto f1() -> int(&)[2]\n    {\n      return arr;\n    }\n\n    auto f2() -> std::pair<int, int>&\n    {\n      return pr;\n    }\n\n    struct S\n    {\n      int x1 : 2;\n      volatile double y1;\n    };\n\n    S f3()\n    {\n      return {};\n    }\n\n    auto [ x1, y1 ] = f1();\n    auto& [ xr1, yr1 ] = f1();\n    auto [ x2, y2 ] = f2();\n    auto& [ xr2, yr2 ] = f2();\n    const auto [ x3, y3 ] = f3();\n\n  }\n\n  namespace test_exception_spec_type_system\n  {\n\n    struct Good {};\n    struct Bad {};\n\n    void g1() noexcept;\n    void g2();\n\n    template<typename T>\n    Bad\n    f(T*, T*);\n\n    template<typename T1, typename T2>\n    Good\n    f(T1*, T2*);\n\n    static_assert (std::is_same_v<Good, decltype(f(g1, g2))>);\n\n  }\n\n  namespace test_inline_variables\n  {\n\n    template<class T> void f(T)\n    {}\n\n    template<class T> inline T g(T)\n    {\n      return T{};\n    }\n\n    template<> inline void f<>(int)\n    {}\n\n    template<> int g<>(int)\n    {\n      return 5;\n    }\n\n  }\n\n}  // namespace cxx17\n\n#endif  // __cplusplus < 201703L\n\n\n\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  eval $cachevar=yes\nelse $as_nop\n  eval $cachevar=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n           CXX=\"$ac_save_CXX\"\nfi\neval ac_res=\\$$cachevar\n\t       { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_res\" >&5\nprintf \"%s\\n\" \"$ac_res\" >&6; }\n        if eval test x\\$$cachevar = xyes; then\n          CXX=\"$CXX $switch\"\n          if test -n \"$CXXCPP\" ; then\n            CXXCPP=\"$CXXCPP 
$switch\"\n          fi\n          ac_success=yes\n          break\n        fi\n      done\n      if test x$ac_success = xyes; then\n        break\n      fi\n    done\n  fi\n  ac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\n  if test x$ax_cxx_compile_cxx17_required = xtrue; then\n    if test x$ac_success = xno; then\n      as_fn_error $? \"*** A compiler with support for C++17 language features is required.\" \"$LINENO\" 5\n    fi\n  fi\n  if test x$ac_success = xno; then\n    HAVE_CXX17=0\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: No compiler with C++17 support was found\" >&5\nprintf \"%s\\n\" \"$as_me: No compiler with C++17 support was found\" >&6;}\n  else\n    HAVE_CXX17=1\n\nprintf \"%s\\n\" \"#define HAVE_CXX17 1\" >>confdefs.h\n\n  fi\n\n\n  if test \"x${HAVE_CXX17}\" != \"x1\"; then\n      ax_cxx_compile_alternatives=\"14 1y\"    ax_cxx_compile_cxx14_required=false\n  ac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n  ac_success=no\n\n\n\n    if test x$ac_success = xno; then\n                for alternative in ${ax_cxx_compile_alternatives}; do\n      for switch in -std=c++${alternative} +std=c++${alternative} \"-h std=c++${alternative}\"; do\n        cachevar=`printf \"%s\\n\" \"ax_cv_cxx_compile_cxx14_$switch\" | $as_tr_sh`\n        { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether $CXX supports C++14 features with $switch\" >&5\nprintf %s \"checking whether $CXX supports C++14 features with $switch... 
\" >&6; }\nif eval test \\${$cachevar+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_save_CXX=\"$CXX\"\n           CXX=\"$CXX $switch\"\n           cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\n// If the compiler admits that it is not ready for C++11, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201103L\n\n#error \"This is not a C++11 compiler\"\n\n#else\n\nnamespace cxx11\n{\n\n  namespace test_static_assert\n  {\n\n    template <typename T>\n    struct check\n    {\n      static_assert(sizeof(int) <= sizeof(T), \"not big enough\");\n    };\n\n  }\n\n  namespace test_final_override\n  {\n\n    struct Base\n    {\n      virtual ~Base() {}\n      virtual void f() {}\n    };\n\n    struct Derived : public Base\n    {\n      virtual ~Derived() override {}\n      virtual void f() override {}\n    };\n\n  }\n\n  namespace test_double_right_angle_brackets\n  {\n\n    template < typename T >\n    struct check {};\n\n    typedef check<void> single_type;\n    typedef check<check<void>> double_type;\n    typedef check<check<check<void>>> triple_type;\n    typedef check<check<check<check<void>>>> quadruple_type;\n\n  }\n\n  namespace test_decltype\n  {\n\n    int\n    f()\n    {\n      int a = 1;\n      decltype(a) b = 2;\n      return a + b;\n    }\n\n  }\n\n  namespace test_type_deduction\n  {\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static const bool value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static const bool value = true;\n    };\n\n    template < typename T1, typename T2 >\n    auto\n    add(T1 a1, T2 a2) -> decltype(a1 + a2)\n    {\n      return a1 + a2;\n    }\n\n    int\n    test(const int c, volatile int v)\n    {\n      static_assert(is_same<int, decltype(0)>::value == true, \"\");\n      static_assert(is_same<int, decltype(c)>::value 
== false, \"\");\n      static_assert(is_same<int, decltype(v)>::value == false, \"\");\n      auto ac = c;\n      auto av = v;\n      auto sumi = ac + av + 'x';\n      auto sumf = ac + av + 1.0;\n      static_assert(is_same<int, decltype(ac)>::value == true, \"\");\n      static_assert(is_same<int, decltype(av)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumi)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumf)>::value == false, \"\");\n      static_assert(is_same<int, decltype(add(c, v))>::value == true, \"\");\n      return (sumf > 0.0) ? sumi : add(c, v);\n    }\n\n  }\n\n  namespace test_noexcept\n  {\n\n    int f() { return 0; }\n    int g() noexcept { return 0; }\n\n    static_assert(noexcept(f()) == false, \"\");\n    static_assert(noexcept(g()) == true, \"\");\n\n  }\n\n  namespace test_constexpr\n  {\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c_r(const CharT *const s, const unsigned long acc) noexcept\n    {\n      return *s ? 
strlen_c_r(s + 1, acc + 1) : acc;\n    }\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c(const CharT *const s) noexcept\n    {\n      return strlen_c_r(s, 0UL);\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"1\") == 1UL, \"\");\n    static_assert(strlen_c(\"example\") == 7UL, \"\");\n    static_assert(strlen_c(\"another\\0example\") == 7UL, \"\");\n\n  }\n\n  namespace test_rvalue_references\n  {\n\n    template < int N >\n    struct answer\n    {\n      static constexpr int value = N;\n    };\n\n    answer<1> f(int&)       { return answer<1>(); }\n    answer<2> f(const int&) { return answer<2>(); }\n    answer<3> f(int&&)      { return answer<3>(); }\n\n    void\n    test()\n    {\n      int i = 0;\n      const int c = 0;\n      static_assert(decltype(f(i))::value == 1, \"\");\n      static_assert(decltype(f(c))::value == 2, \"\");\n      static_assert(decltype(f(0))::value == 3, \"\");\n    }\n\n  }\n\n  namespace test_uniform_initialization\n  {\n\n    struct test\n    {\n      static const int zero {};\n      static const int one {1};\n    };\n\n    static_assert(test::zero == 0, \"\");\n    static_assert(test::one == 1, \"\");\n\n  }\n\n  namespace test_lambdas\n  {\n\n    void\n    test1()\n    {\n      auto lambda1 = [](){};\n      auto lambda2 = lambda1;\n      lambda1();\n      lambda2();\n    }\n\n    int\n    test2()\n    {\n      auto a = [](int i, int j){ return i + j; }(1, 2);\n      auto b = []() -> int { return '0'; }();\n      auto c = [=](){ return a + b; }();\n      auto d = [&](){ return c; }();\n      auto e = [a, &b](int x) mutable {\n        const auto identity = [](int y){ return y; };\n        for (auto i = 0; i < a; ++i)\n          a += b--;\n        return x + identity(a + b);\n      }(0);\n      return a + b + c + d + e;\n    }\n\n    int\n    test3()\n    {\n      const auto nullary = [](){ return 0; };\n      const auto unary = [](int x){ return x; };\n      using 
nullary_t = decltype(nullary);\n      using unary_t = decltype(unary);\n      const auto higher1st = [](nullary_t f){ return f(); };\n      const auto higher2nd = [unary](nullary_t f1){\n        return [unary, f1](unary_t f2){ return f2(unary(f1())); };\n      };\n      return higher1st(nullary) + higher2nd(nullary)(unary);\n    }\n\n  }\n\n  namespace test_variadic_templates\n  {\n\n    template <int...>\n    struct sum;\n\n    template <int N0, int... N1toN>\n    struct sum<N0, N1toN...>\n    {\n      static constexpr auto value = N0 + sum<N1toN...>::value;\n    };\n\n    template <>\n    struct sum<>\n    {\n      static constexpr auto value = 0;\n    };\n\n    static_assert(sum<>::value == 0, \"\");\n    static_assert(sum<1>::value == 1, \"\");\n    static_assert(sum<23>::value == 23, \"\");\n    static_assert(sum<1, 2>::value == 3, \"\");\n    static_assert(sum<5, 5, 11>::value == 21, \"\");\n    static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, \"\");\n\n  }\n\n  // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae\n  // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function\n  // because of this.\n  namespace test_template_alias_sfinae\n  {\n\n    struct foo {};\n\n    template<typename T>\n    using member = typename T::member_type;\n\n    template<typename T>\n    void func(...) {}\n\n    template<typename T>\n    void func(member<T>*) {}\n\n    void test();\n\n    void test() { func<foo>(0); }\n\n  }\n\n}  // namespace cxx11\n\n#endif  // __cplusplus >= 201103L\n\n\n\n\n// If the compiler admits that it is not ready for C++14, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201402L\n\n#error \"This is not a C++14 compiler\"\n\n#else\n\nnamespace cxx14\n{\n\n  namespace test_polymorphic_lambdas\n  {\n\n    int\n    test()\n    {\n      const auto lambda = [](auto&&... 
args){\n        const auto istiny = [](auto x){\n          return (sizeof(x) == 1UL) ? 1 : 0;\n        };\n        const int aretiny[] = { istiny(args)... };\n        return aretiny[0];\n      };\n      return lambda(1, 1L, 1.0f, '1');\n    }\n\n  }\n\n  namespace test_binary_literals\n  {\n\n    constexpr auto ivii = 0b0000000000101010;\n    static_assert(ivii == 42, \"wrong value\");\n\n  }\n\n  namespace test_generalized_constexpr\n  {\n\n    template < typename CharT >\n    constexpr unsigned long\n    strlen_c(const CharT *const s) noexcept\n    {\n      auto length = 0UL;\n      for (auto p = s; *p; ++p)\n        ++length;\n      return length;\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"x\") == 1UL, \"\");\n    static_assert(strlen_c(\"test\") == 4UL, \"\");\n    static_assert(strlen_c(\"another\\0test\") == 7UL, \"\");\n\n  }\n\n  namespace test_lambda_init_capture\n  {\n\n    int\n    test()\n    {\n      auto x = 0;\n      const auto lambda1 = [a = x](int b){ return a + b; };\n      const auto lambda2 = [a = lambda1(x)](){ return a; };\n      return lambda2();\n    }\n\n  }\n\n  namespace test_digit_separators\n  {\n\n    constexpr auto ten_million = 100'000'000;\n    static_assert(ten_million == 100000000, \"\");\n\n  }\n\n  namespace test_return_type_deduction\n  {\n\n    auto f(int& x) { return x; }\n    decltype(auto) g(int& x) { return x; }\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static constexpr auto value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static constexpr auto value = true;\n    };\n\n    int\n    test()\n    {\n      auto x = 0;\n      static_assert(is_same<int, decltype(f(x))>::value, \"\");\n      static_assert(is_same<int&, decltype(g(x))>::value, \"\");\n      return x;\n    }\n\n  }\n\n}  // namespace cxx14\n\n#endif  // __cplusplus >= 201402L\n\n\n\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  
eval $cachevar=yes\nelse $as_nop\n  eval $cachevar=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n           CXX=\"$ac_save_CXX\"\nfi\neval ac_res=\\$$cachevar\n\t       { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_res\" >&5\nprintf \"%s\\n\" \"$ac_res\" >&6; }\n        if eval test x\\$$cachevar = xyes; then\n          CXX=\"$CXX $switch\"\n          if test -n \"$CXXCPP\" ; then\n            CXXCPP=\"$CXXCPP $switch\"\n          fi\n          ac_success=yes\n          break\n        fi\n      done\n      if test x$ac_success = xyes; then\n        break\n      fi\n    done\n  fi\n  ac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\n  if test x$ax_cxx_compile_cxx14_required = xtrue; then\n    if test x$ac_success = xno; then\n      as_fn_error $? \"*** A compiler with support for C++14 language features is required.\" \"$LINENO\" 5\n    fi\n  fi\n  if test x$ac_success = xno; then\n    HAVE_CXX14=0\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: No compiler with C++14 support was found\" >&5\nprintf \"%s\\n\" \"$as_me: No compiler with C++14 support was found\" >&6;}\n  else\n    HAVE_CXX14=1\n\nprintf \"%s\\n\" \"#define HAVE_CXX14 1\" >>confdefs.h\n\n  fi\n\n\n  fi\n  if test \"x${HAVE_CXX14}\" = \"x1\" -o \"x${HAVE_CXX17}\" = \"x1\"; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wall\" >&5\nprintf %s \"checking whether compiler supports -Wall... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-Wall\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-Wall\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wextra\" >&5\nprintf %s \"checking whether compiler supports -Wextra... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-Wextra\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-Wextra\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -g3\" >&5\nprintf %s \"checking whether compiler supports -g3... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-g3\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-g3\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n\n    SAVED_LIBS=\"${LIBS}\"\n    T_APPEND_V=-lstdc++\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  
LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether libstdc++ linkage is compilable\" >&5\nprintf %s \"checking whether libstdc++ linkage is compilable... \" >&6; }\nif test ${je_cv_libstdcxx+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <stdlib.h>\n\nint\nmain (void)\n{\n\n\tint *arr = (int *)malloc(sizeof(int) * 42);\n\tif (arr == NULL)\n\t\treturn 1;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_libstdcxx=yes\nelse $as_nop\n  je_cv_libstdcxx=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_libstdcxx\" >&5\nprintf \"%s\\n\" \"$je_cv_libstdcxx\" >&6; }\n\n    if test \"x${je_cv_libstdcxx}\" = \"xno\" ; then\n      LIBS=\"${SAVED_LIBS}\"\n    fi\n  else\n    enable_cxx=\"0\"\n  fi\nfi\nif test \"x$enable_cxx\" = \"x1\"; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_ENABLE_CXX  \" >>confdefs.h\n\nfi\n\n\n\n\n\nac_header= ac_cache=\nfor ac_item in $ac_header_c_list\ndo\n  if test $ac_cache; then\n    ac_fn_c_check_header_compile \"$LINENO\" $ac_header ac_cv_header_$ac_cache \"$ac_includes_default\"\n    if eval test \\\"x\\$ac_cv_header_$ac_cache\\\" = xyes; then\n      printf \"%s\\n\" \"#define $ac_item 1\" >> confdefs.h\n    fi\n    ac_header= ac_cache=\n  elif test $ac_header; then\n    ac_cache=$ac_item\n  else\n    ac_header=$ac_item\n  fi\ndone\n\n\n\n\n\n\n\n\nif test $ac_cv_header_stdlib_h = yes && test $ac_cv_header_string_h = yes\nthen :\n\nprintf \"%s\\n\" \"#define STDC_HEADERS 1\" >>confdefs.h\n\nfi\n { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether byte ordering is bigendian\" >&5\nprintf %s \"checking whether byte ordering is bigendian... 
\" >&6; }\nif test ${ac_cv_c_bigendian+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_cv_c_bigendian=unknown\n    # See if we're dealing with a universal compiler.\n    cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#ifndef __APPLE_CC__\n\t       not a universal capable compiler\n\t     #endif\n\t     typedef int dummy;\n\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n\n\t# Check for potential -arch flags.  It is not universal unless\n\t# there are at least two -arch flags with different values.\n\tac_arch=\n\tac_prev=\n\tfor ac_word in $CC $CFLAGS $CPPFLAGS $LDFLAGS; do\n\t if test -n \"$ac_prev\"; then\n\t   case $ac_word in\n\t     i?86 | x86_64 | ppc | ppc64)\n\t       if test -z \"$ac_arch\" || test \"$ac_arch\" = \"$ac_word\"; then\n\t\t ac_arch=$ac_word\n\t       else\n\t\t ac_cv_c_bigendian=universal\n\t\t break\n\t       fi\n\t       ;;\n\t   esac\n\t   ac_prev=\n\t elif test \"x$ac_word\" = \"x-arch\"; then\n\t   ac_prev=arch\n\t fi\n       done\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n    if test $ac_cv_c_bigendian = unknown; then\n      # See if sys/param.h defines the BYTE_ORDER macro.\n      cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <sys/types.h>\n\t     #include <sys/param.h>\n\nint\nmain (void)\n{\n#if ! (defined BYTE_ORDER && defined BIG_ENDIAN \\\n\t\t     && defined LITTLE_ENDIAN && BYTE_ORDER && BIG_ENDIAN \\\n\t\t     && LITTLE_ENDIAN)\n\t      bogus endian macros\n\t     #endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  # It does; now see whether it defined to BIG_ENDIAN or not.\n\t cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <sys/types.h>\n\t\t#include <sys/param.h>\n\nint\nmain (void)\n{\n#if BYTE_ORDER != BIG_ENDIAN\n\t\t not big endian\n\t\t#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_c_bigendian=yes\nelse $as_nop\n  ac_cv_c_bigendian=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n    fi\n    if test $ac_cv_c_bigendian = unknown; then\n      # See if <limits.h> defines _LITTLE_ENDIAN or _BIG_ENDIAN (e.g., Solaris).\n      cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <limits.h>\n\nint\nmain (void)\n{\n#if ! (defined _LITTLE_ENDIAN || defined _BIG_ENDIAN)\n\t      bogus endian macros\n\t     #endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  # It does; now see whether it defined to _BIG_ENDIAN or not.\n\t cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <limits.h>\n\nint\nmain (void)\n{\n#ifndef _BIG_ENDIAN\n\t\t not big endian\n\t\t#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_c_bigendian=yes\nelse $as_nop\n  ac_cv_c_bigendian=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n    fi\n    if test $ac_cv_c_bigendian = unknown; then\n      # Compile a test program.\n      if test \"$cross_compiling\" = yes\nthen :\n  # Try to guess by grepping values from an object file.\n\t cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\nunsigned short int ascii_mm[] =\n\t\t  { 0x4249, 0x4765, 0x6E44, 0x6961, 0x6E53, 0x7953, 0 };\n\t\tunsigned short int ascii_ii[] =\n\t\t  { 0x694C, 0x5454, 0x656C, 0x6E45, 0x6944, 0x6E61, 0 };\n\t\tint use_ascii (int i) {\n\t\t  return ascii_mm[i] + ascii_ii[i];\n\t\t}\n\t\tunsigned short int ebcdic_ii[] =\n\t\t  { 0x89D3, 0xE3E3, 0x8593, 0x95C5, 0x89C4, 0x9581, 0 };\n\t\tunsigned short int ebcdic_mm[] =\n\t\t  { 0xC2C9, 0xC785, 0x95C4, 0x8981, 0x95E2, 0xA8E2, 0 };\n\t\tint use_ebcdic (int i) {\n\t\t  return ebcdic_mm[i] + ebcdic_ii[i];\n\t\t}\n\t\textern int foo;\n\nint\nmain (void)\n{\nreturn use_ascii (foo) == use_ebcdic (foo);\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  if grep BIGenDianSyS conftest.$ac_objext >/dev/null; then\n\t      ac_cv_c_bigendian=yes\n\t    fi\n\t    if grep LiTTleEnDian conftest.$ac_objext >/dev/null ; then\n\t      if test \"$ac_cv_c_bigendian\" = unknown; then\n\t\tac_cv_c_bigendian=no\n\t      else\n\t\t# finding both strings is unlikely to happen, but who knows?\n\t\tac_cv_c_bigendian=unknown\n\t      fi\n\t    fi\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n$ac_includes_default\nint\nmain (void)\n{\n\n\t     /* Are we little or big endian?  From Harbison&Steele.  
*/\n\t     union\n\t     {\n\t       long int l;\n\t       char c[sizeof (long int)];\n\t     } u;\n\t     u.l = 1;\n\t     return u.c[sizeof (long int) - 1] == 1;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_run \"$LINENO\"\nthen :\n  ac_cv_c_bigendian=no\nelse $as_nop\n  ac_cv_c_bigendian=yes\nfi\nrm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \\\n  conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n\n    fi\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_bigendian\" >&5\nprintf \"%s\\n\" \"$ac_cv_c_bigendian\" >&6; }\n case $ac_cv_c_bigendian in #(\n   yes)\n     ac_cv_big_endian=1;; #(\n   no)\n     ac_cv_big_endian=0 ;; #(\n   universal)\n\nprintf \"%s\\n\" \"#define AC_APPLE_UNIVERSAL_BUILD 1\" >>confdefs.h\n\n     ;; #(\n   *)\n     as_fn_error $? \"unknown endianness\n presetting ac_cv_c_bigendian=no (or yes) will help\" \"$LINENO\" 5 ;;\n esac\n\nif test \"x${ac_cv_big_endian}\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_BIG_ENDIAN  \" >>confdefs.h\n\nfi\n\nif test \"x${je_cv_msvc}\" = \"xyes\" -a \"x${ac_cv_header_inttypes_h}\" = \"xno\"; then\n  T_APPEND_V=-I${srcdir}/include/msvc_compat/C99\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\nfi\n\nif test \"x${je_cv_msvc}\" = \"xyes\" ; then\n  LG_SIZEOF_PTR=LG_SIZEOF_PTR_WIN\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit\" >&5\nprintf \"%s\\n\" \"Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit\" >&6; }\nelse\n  # The cast to long int works around a bug in the HP C Compiler\n# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects\n# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.\n# This bug is HP SR number 8606223364.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking size of void 
*\" >&5\nprintf %s \"checking size of void *... \" >&6; }\nif test ${ac_cv_sizeof_void_p+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if ac_fn_c_compute_int \"$LINENO\" \"(long int) (sizeof (void *))\" \"ac_cv_sizeof_void_p\"        \"$ac_includes_default\"\nthen :\n\nelse $as_nop\n  if test \"$ac_cv_type_void_p\" = yes; then\n     { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot compute sizeof (void *)\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n   else\n     ac_cv_sizeof_void_p=0\n   fi\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_void_p\" >&5\nprintf \"%s\\n\" \"$ac_cv_sizeof_void_p\" >&6; }\n\n\n\nprintf \"%s\\n\" \"#define SIZEOF_VOID_P $ac_cv_sizeof_void_p\" >>confdefs.h\n\n\n  if test \"x${ac_cv_sizeof_void_p}\" = \"x8\" ; then\n    LG_SIZEOF_PTR=3\n  elif test \"x${ac_cv_sizeof_void_p}\" = \"x4\" ; then\n    LG_SIZEOF_PTR=2\n  else\n    as_fn_error $? \"Unsupported pointer size: ${ac_cv_sizeof_void_p}\" \"$LINENO\" 5\n  fi\nfi\n\nprintf \"%s\\n\" \"#define LG_SIZEOF_PTR $LG_SIZEOF_PTR\" >>confdefs.h\n\n\n# The cast to long int works around a bug in the HP C Compiler\n# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects\n# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.\n# This bug is HP SR number 8606223364.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking size of int\" >&5\nprintf %s \"checking size of int... 
\" >&6; }\nif test ${ac_cv_sizeof_int+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if ac_fn_c_compute_int \"$LINENO\" \"(long int) (sizeof (int))\" \"ac_cv_sizeof_int\"        \"$ac_includes_default\"\nthen :\n\nelse $as_nop\n  if test \"$ac_cv_type_int\" = yes; then\n     { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot compute sizeof (int)\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n   else\n     ac_cv_sizeof_int=0\n   fi\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_int\" >&5\nprintf \"%s\\n\" \"$ac_cv_sizeof_int\" >&6; }\n\n\n\nprintf \"%s\\n\" \"#define SIZEOF_INT $ac_cv_sizeof_int\" >>confdefs.h\n\n\nif test \"x${ac_cv_sizeof_int}\" = \"x8\" ; then\n  LG_SIZEOF_INT=3\nelif test \"x${ac_cv_sizeof_int}\" = \"x4\" ; then\n  LG_SIZEOF_INT=2\nelse\n  as_fn_error $? \"Unsupported int size: ${ac_cv_sizeof_int}\" \"$LINENO\" 5\nfi\n\nprintf \"%s\\n\" \"#define LG_SIZEOF_INT $LG_SIZEOF_INT\" >>confdefs.h\n\n\n# The cast to long int works around a bug in the HP C Compiler\n# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects\n# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.\n# This bug is HP SR number 8606223364.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking size of long\" >&5\nprintf %s \"checking size of long... 
\" >&6; }\nif test ${ac_cv_sizeof_long+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if ac_fn_c_compute_int \"$LINENO\" \"(long int) (sizeof (long))\" \"ac_cv_sizeof_long\"        \"$ac_includes_default\"\nthen :\n\nelse $as_nop\n  if test \"$ac_cv_type_long\" = yes; then\n     { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot compute sizeof (long)\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n   else\n     ac_cv_sizeof_long=0\n   fi\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_long\" >&5\nprintf \"%s\\n\" \"$ac_cv_sizeof_long\" >&6; }\n\n\n\nprintf \"%s\\n\" \"#define SIZEOF_LONG $ac_cv_sizeof_long\" >>confdefs.h\n\n\nif test \"x${ac_cv_sizeof_long}\" = \"x8\" ; then\n  LG_SIZEOF_LONG=3\nelif test \"x${ac_cv_sizeof_long}\" = \"x4\" ; then\n  LG_SIZEOF_LONG=2\nelse\n  as_fn_error $? \"Unsupported long size: ${ac_cv_sizeof_long}\" \"$LINENO\" 5\nfi\n\nprintf \"%s\\n\" \"#define LG_SIZEOF_LONG $LG_SIZEOF_LONG\" >>confdefs.h\n\n\n# The cast to long int works around a bug in the HP C Compiler\n# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects\n# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.\n# This bug is HP SR number 8606223364.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking size of long long\" >&5\nprintf %s \"checking size of long long... 
\" >&6; }\nif test ${ac_cv_sizeof_long_long+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if ac_fn_c_compute_int \"$LINENO\" \"(long int) (sizeof (long long))\" \"ac_cv_sizeof_long_long\"        \"$ac_includes_default\"\nthen :\n\nelse $as_nop\n  if test \"$ac_cv_type_long_long\" = yes; then\n     { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot compute sizeof (long long)\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n   else\n     ac_cv_sizeof_long_long=0\n   fi\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_long_long\" >&5\nprintf \"%s\\n\" \"$ac_cv_sizeof_long_long\" >&6; }\n\n\n\nprintf \"%s\\n\" \"#define SIZEOF_LONG_LONG $ac_cv_sizeof_long_long\" >>confdefs.h\n\n\nif test \"x${ac_cv_sizeof_long_long}\" = \"x8\" ; then\n  LG_SIZEOF_LONG_LONG=3\nelif test \"x${ac_cv_sizeof_long_long}\" = \"x4\" ; then\n  LG_SIZEOF_LONG_LONG=2\nelse\n  as_fn_error $? \"Unsupported long long size: ${ac_cv_sizeof_long_long}\" \"$LINENO\" 5\nfi\n\nprintf \"%s\\n\" \"#define LG_SIZEOF_LONG_LONG $LG_SIZEOF_LONG_LONG\" >>confdefs.h\n\n\n# The cast to long int works around a bug in the HP C Compiler\n# version HP92453-01 B.11.11.23709.GP, which incorrectly rejects\n# declarations like `int a3[[(sizeof (unsigned char)) >= 0]];'.\n# This bug is HP SR number 8606223364.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking size of intmax_t\" >&5\nprintf %s \"checking size of intmax_t... 
\" >&6; }\nif test ${ac_cv_sizeof_intmax_t+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if ac_fn_c_compute_int \"$LINENO\" \"(long int) (sizeof (intmax_t))\" \"ac_cv_sizeof_intmax_t\"        \"$ac_includes_default\"\nthen :\n\nelse $as_nop\n  if test \"$ac_cv_type_intmax_t\" = yes; then\n     { { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: in \\`$ac_pwd':\" >&5\nprintf \"%s\\n\" \"$as_me: error: in \\`$ac_pwd':\" >&2;}\nas_fn_error 77 \"cannot compute sizeof (intmax_t)\nSee \\`config.log' for more details\" \"$LINENO\" 5; }\n   else\n     ac_cv_sizeof_intmax_t=0\n   fi\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_sizeof_intmax_t\" >&5\nprintf \"%s\\n\" \"$ac_cv_sizeof_intmax_t\" >&6; }\n\n\n\nprintf \"%s\\n\" \"#define SIZEOF_INTMAX_T $ac_cv_sizeof_intmax_t\" >>confdefs.h\n\n\nif test \"x${ac_cv_sizeof_intmax_t}\" = \"x16\" ; then\n  LG_SIZEOF_INTMAX_T=4\nelif test \"x${ac_cv_sizeof_intmax_t}\" = \"x8\" ; then\n  LG_SIZEOF_INTMAX_T=3\nelif test \"x${ac_cv_sizeof_intmax_t}\" = \"x4\" ; then\n  LG_SIZEOF_INTMAX_T=2\nelse\n  as_fn_error $? \"Unsupported intmax_t size: ${ac_cv_sizeof_intmax_t}\" \"$LINENO\" 5\nfi\n\nprintf \"%s\\n\" \"#define LG_SIZEOF_INTMAX_T $LG_SIZEOF_INTMAX_T\" >>confdefs.h\n\n\n\n\n\n  # Make sure we can run config.sub.\n$SHELL \"${ac_aux_dir}config.sub\" sun4 >/dev/null 2>&1 ||\n  as_fn_error $? \"cannot run $SHELL ${ac_aux_dir}config.sub\" \"$LINENO\" 5\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking build system type\" >&5\nprintf %s \"checking build system type... \" >&6; }\nif test ${ac_cv_build+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_build_alias=$build_alias\ntest \"x$ac_build_alias\" = x &&\n  ac_build_alias=`$SHELL \"${ac_aux_dir}config.guess\"`\ntest \"x$ac_build_alias\" = x &&\n  as_fn_error $? \"cannot guess build type; you must specify one\" \"$LINENO\" 5\nac_cv_build=`$SHELL \"${ac_aux_dir}config.sub\" $ac_build_alias` ||\n  as_fn_error $? 
\"$SHELL ${ac_aux_dir}config.sub $ac_build_alias failed\" \"$LINENO\" 5\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_build\" >&5\nprintf \"%s\\n\" \"$ac_cv_build\" >&6; }\ncase $ac_cv_build in\n*-*-*) ;;\n*) as_fn_error $? \"invalid value of canonical build\" \"$LINENO\" 5;;\nesac\nbuild=$ac_cv_build\nac_save_IFS=$IFS; IFS='-'\nset x $ac_cv_build\nshift\nbuild_cpu=$1\nbuild_vendor=$2\nshift; shift\n# Remember, the first character of IFS is used to create $*,\n# except with old shells:\nbuild_os=$*\nIFS=$ac_save_IFS\ncase $build_os in *\\ *) build_os=`echo \"$build_os\" | sed 's/ /-/g'`;; esac\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking host system type\" >&5\nprintf %s \"checking host system type... \" >&6; }\nif test ${ac_cv_host+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test \"x$host_alias\" = x; then\n  ac_cv_host=$ac_cv_build\nelse\n  ac_cv_host=`$SHELL \"${ac_aux_dir}config.sub\" $host_alias` ||\n    as_fn_error $? \"$SHELL ${ac_aux_dir}config.sub $host_alias failed\" \"$LINENO\" 5\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_host\" >&5\nprintf \"%s\\n\" \"$ac_cv_host\" >&6; }\ncase $ac_cv_host in\n*-*-*) ;;\n*) as_fn_error $? 
\"invalid value of canonical host\" \"$LINENO\" 5;;\nesac\nhost=$ac_cv_host\nac_save_IFS=$IFS; IFS='-'\nset x $ac_cv_host\nshift\nhost_cpu=$1\nhost_vendor=$2\nshift; shift\n# Remember, the first character of IFS is used to create $*,\n# except with old shells:\nhost_os=$*\nIFS=$ac_save_IFS\ncase $host_os in *\\ *) host_os=`echo \"$host_os\" | sed 's/ /-/g'`;; esac\n\n\nCPU_SPINWAIT=\"\"\ncase \"${host_cpu}\" in\n  i686|x86_64)\n\tHAVE_CPU_SPINWAIT=1\n\tif test \"x${je_cv_msvc}\" = \"xyes\" ; then\n\t    if test ${je_cv_pause_msvc+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pause instruction MSVC is compilable\" >&5\nprintf %s \"checking whether pause instruction MSVC is compilable... \" >&6; }\nif test ${je_cv_pause_msvc+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n_mm_pause(); return 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pause_msvc=yes\nelse $as_nop\n  je_cv_pause_msvc=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pause_msvc\" >&5\nprintf \"%s\\n\" \"$je_cv_pause_msvc\" >&6; }\n\nfi\n\n\t    if test \"x${je_cv_pause_msvc}\" = \"xyes\" ; then\n\t\tCPU_SPINWAIT='_mm_pause()'\n\t    fi\n\telse\n\t    if test ${je_cv_pause+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pause instruction is compilable\" >&5\nprintf %s \"checking whether pause instruction is compilable... \" >&6; }\nif test ${je_cv_pause+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n__asm__ volatile(\"pause\"); return 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pause=yes\nelse $as_nop\n  je_cv_pause=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pause\" >&5\nprintf \"%s\\n\" \"$je_cv_pause\" >&6; }\n\nfi\n\n\t    if test \"x${je_cv_pause}\" = \"xyes\" ; then\n\t\tCPU_SPINWAIT='__asm__ volatile(\"pause\")'\n\t    fi\n\tfi\n\t;;\n  aarch64|arm*)\n\tHAVE_CPU_SPINWAIT=1\n\t\tif test ${je_cv_isb+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether isb instruction is compilable\" >&5\nprintf %s \"checking whether isb instruction is compilable... \" >&6; }\nif test ${je_cv_isb+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nint\nmain (void)\n{\n__asm__ volatile(\"isb\"); return 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_isb=yes\nelse $as_nop\n  je_cv_isb=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_isb\" >&5\nprintf \"%s\\n\" \"$je_cv_isb\" >&6; }\n\nfi\n\n\tif test \"x${je_cv_isb}\" = \"xyes\" ; then\n\t    CPU_SPINWAIT='__asm__ volatile(\"isb\")'\n\tfi\n\t;;\n  *)\n\tHAVE_CPU_SPINWAIT=0\n\t;;\nesac\n\nprintf \"%s\\n\" \"#define HAVE_CPU_SPINWAIT $HAVE_CPU_SPINWAIT\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define CPU_SPINWAIT $CPU_SPINWAIT\" >>confdefs.h\n\n\n\n# Check whether --with-lg_vaddr was given.\nif test ${with_lg_vaddr+y}\nthen :\n  withval=$with_lg_vaddr; LG_VADDR=\"$with_lg_vaddr\"\nelse $as_nop\n  LG_VADDR=\"detect\"\nfi\n\n\ncase \"${host_cpu}\" in\n  aarch64)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      { 
printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking number of significant virtual address bits\" >&5\nprintf %s \"checking number of significant virtual address bits... \" >&6; }\n      if test \"x${LG_SIZEOF_PTR}\" = \"x2\" ; then\n        #aarch64 ILP32\n        LG_VADDR=32\n      else\n        #aarch64 LP64\n        LG_VADDR=48\n      fi\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $LG_VADDR\" >&5\nprintf \"%s\\n\" \"$LG_VADDR\" >&6; }\n    fi\n    ;;\n  x86_64)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking number of significant virtual address bits\" >&5\nprintf %s \"checking number of significant virtual address bits... \" >&6; }\nif test ${je_cv_lg_vaddr+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test \"$cross_compiling\" = yes\nthen :\n  je_cv_lg_vaddr=57\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <stdio.h>\n#ifdef _WIN32\n#include <limits.h>\n#include <intrin.h>\ntypedef unsigned __int32 uint32_t;\n#else\n#include <stdint.h>\n#endif\n\nint\nmain (void)\n{\n\n\tuint32_t r[4];\n\tuint32_t eax_in = 0x80000008U;\n#ifdef _WIN32\n\t__cpuid((int *)r, (int)eax_in);\n#else\n\tasm volatile (\"cpuid\"\n\t    : \"=a\" (r[0]), \"=b\" (r[1]), \"=c\" (r[2]), \"=d\" (r[3])\n\t    : \"a\" (eax_in), \"c\" (0)\n\t);\n#endif\n\tuint32_t eax_out = r[0];\n\tuint32_t vaddr = ((eax_out & 0x0000ff00U) >> 8);\n\tFILE *f = fopen(\"conftest.out\", \"w\");\n\tif (f == NULL) {\n\t\treturn 1;\n\t}\n\tif (vaddr > (sizeof(void *) << 3)) {\n\t\tvaddr = sizeof(void *) << 3;\n\t}\n\tfprintf(f, \"%u\", vaddr);\n\tfclose(f);\n\treturn 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_run \"$LINENO\"\nthen :\n  je_cv_lg_vaddr=`cat conftest.out`\nelse $as_nop\n  je_cv_lg_vaddr=error\nfi\nrm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \\\n  conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n\nfi\n{ printf 
\"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_lg_vaddr\" >&5\nprintf \"%s\\n\" \"$je_cv_lg_vaddr\" >&6; }\n      if test \"x${je_cv_lg_vaddr}\" != \"x\" ; then\n        LG_VADDR=\"${je_cv_lg_vaddr}\"\n      fi\n      if test \"x${LG_VADDR}\" != \"xerror\" ; then\n\nprintf \"%s\\n\" \"#define LG_VADDR $LG_VADDR\" >>confdefs.h\n\n      else\n        as_fn_error $? \"cannot determine number of significant virtual address bits\" \"$LINENO\" 5\n      fi\n    fi\n    ;;\n  *)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking number of significant virtual address bits\" >&5\nprintf %s \"checking number of significant virtual address bits... \" >&6; }\n      if test \"x${LG_SIZEOF_PTR}\" = \"x3\" ; then\n        LG_VADDR=64\n      elif test \"x${LG_SIZEOF_PTR}\" = \"x2\" ; then\n        LG_VADDR=32\n      elif test \"x${LG_SIZEOF_PTR}\" = \"xLG_SIZEOF_PTR_WIN\" ; then\n        LG_VADDR=\"(1U << (LG_SIZEOF_PTR_WIN+3))\"\n      else\n        as_fn_error $? 
\"Unsupported lg(pointer size): ${LG_SIZEOF_PTR}\" \"$LINENO\" 5\n      fi\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $LG_VADDR\" >&5\nprintf \"%s\\n\" \"$LG_VADDR\" >&6; }\n    fi\n    ;;\nesac\n\nprintf \"%s\\n\" \"#define LG_VADDR $LG_VADDR\" >>confdefs.h\n\n\nLD_PRELOAD_VAR=\"LD_PRELOAD\"\nso=\"so\"\nimportlib=\"${so}\"\no=\"$ac_objext\"\na=\"a\"\nexe=\"$ac_exeext\"\nlibprefix=\"lib\"\nlink_whole_archive=\"0\"\nDSO_LDFLAGS='-shared -Wl,-soname,$(@F)'\nRPATH='-Wl,-rpath,$(1)'\nSOREV=\"${so}.${rev}\"\nPIC_CFLAGS='-fPIC -DPIC'\nCTARGET='-o $@'\nLDTARGET='-o $@'\nTEST_LD_MODE=\nEXTRA_LDFLAGS=\nARFLAGS='crs'\nAROUT=' $@'\nCC_MM=1\n\nif test \"x$je_cv_cray_prgenv_wrapper\" = \"xyes\" ; then\n  TEST_LD_MODE='-dynamic'\nfi\n\nif test \"x${je_cv_cray}\" = \"xyes\" ; then\n  CC_MM=\nfi\n\n\n\n\nif test -n \"$ac_tool_prefix\"; then\n  # Extract the first word of \"${ac_tool_prefix}ar\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}ar; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_AR+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$AR\"; then\n  ac_cv_prog_AR=\"$AR\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_AR=\"${ac_tool_prefix}ar\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nAR=$ac_cv_prog_AR\nif test -n \"$AR\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $AR\" >&5\nprintf \"%s\\n\" \"$AR\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$ac_cv_prog_AR\"; then\n  ac_ct_AR=$AR\n  # Extract the first word of \"ar\", so it can be a program name with args.\nset dummy ar; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_AR+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_AR\"; then\n  ac_cv_prog_ac_ct_AR=\"$ac_ct_AR\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_AR=\"ar\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_AR=$ac_cv_prog_ac_ct_AR\nif test -n \"$ac_ct_AR\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR\" >&5\nprintf \"%s\\n\" \"$ac_ct_AR\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n  if test \"x$ac_ct_AR\" = x; then\n    AR=\":\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    AR=$ac_ct_AR\n  fi\nelse\n  AR=\"$ac_cv_prog_AR\"\nfi\n\n\n\n\n\nif test -n \"$ac_tool_prefix\"; then\n  # Extract the first word of \"${ac_tool_prefix}nm\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}nm; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_NM+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$NM\"; then\n  ac_cv_prog_NM=\"$NM\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_NM=\"${ac_tool_prefix}nm\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nNM=$ac_cv_prog_NM\nif test -n \"$NM\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $NM\" >&5\nprintf \"%s\\n\" \"$NM\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$ac_cv_prog_NM\"; then\n  ac_ct_NM=$NM\n  # Extract the first word of \"nm\", so it can be a program name with args.\nset dummy nm; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_NM+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_NM\"; then\n  ac_cv_prog_ac_ct_NM=\"$ac_ct_NM\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_NM=\"nm\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_NM=$ac_cv_prog_ac_ct_NM\nif test -n \"$ac_ct_NM\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_NM\" >&5\nprintf \"%s\\n\" \"$ac_ct_NM\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n  if test \"x$ac_ct_NM\" = x; then\n    NM=\":\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    NM=$ac_ct_NM\n  fi\nelse\n  NM=\"$ac_cv_prog_NM\"\nfi\n\n\nfor ac_prog in gawk mawk nawk awk\ndo\n  # Extract the first word of \"$ac_prog\", so it can be a program name with args.\nset dummy $ac_prog; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_AWK+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$AWK\"; then\n  ac_cv_prog_AWK=\"$AWK\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_AWK=\"$ac_prog\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nAWK=$ac_cv_prog_AWK\nif test -n \"$AWK\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $AWK\" >&5\nprintf \"%s\\n\" \"$AWK\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n  test -n \"$AWK\" && break\ndone\n\n\n\n\n# Check whether --with-version was given.\nif test ${with_version+y}\nthen :\n  withval=$with_version;\n    echo \"${with_version}\" | grep '^[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+-[0-9]\\+-g[0-9a-f]\\+$' 2>&1 1>/dev/null\n    if test $? -eq 0 ; then\n      echo \"$with_version\" > \"${objroot}VERSION\"\n    else\n      echo \"${with_version}\" | grep '^VERSION$' 2>&1 1>/dev/null\n      if test $? -ne 0 ; then\n        as_fn_error $? \"${with_version} does not match <major>.<minor>.<bugfix>-<nrev>-g<gid> or VERSION\" \"$LINENO\" 5\n      fi\n    fi\n\nelse $as_nop\n\n        if test \"x`test ! 
\\\"${srcroot}\\\" && cd \\\"${srcroot}\\\"; git rev-parse --is-inside-work-tree 2>/dev/null`\" = \"xtrue\" ; then\n                        for pattern in '[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \\\n                     '[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \\\n                     '[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \\\n                     '[0-9][0-9].[0-9][0-9].[0-9]' \\\n                     '[0-9][0-9].[0-9][0-9].[0-9][0-9]'; do\n        (test ! \"${srcroot}\" && cd \"${srcroot}\"; git describe --long --abbrev=40 --match=\"${pattern}\") > \"${objroot}VERSION.tmp\" 2>/dev/null\n        if test $? -eq 0 ; then\n          mv \"${objroot}VERSION.tmp\" \"${objroot}VERSION\"\n          break\n        fi\n      done\n    fi\n    rm -f \"${objroot}VERSION.tmp\"\n\nfi\n\n\nif test ! -e \"${objroot}VERSION\" ; then\n  if test ! -e \"${srcroot}VERSION\" ; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: Missing VERSION file, and unable to generate it; creating bogus VERSION\" >&5\nprintf \"%s\\n\" \"Missing VERSION file, and unable to generate it; creating bogus VERSION\" >&6; }\n    echo \"0.0.0-0-g000000missing_version_try_git_fetch_tags\" > \"${objroot}VERSION\"\n  else\n    cp ${srcroot}VERSION ${objroot}VERSION\n  fi\nfi\njemalloc_version=`cat \"${objroot}VERSION\"`\njemalloc_version_major=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print $1}'`\njemalloc_version_minor=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print $2}'`\njemalloc_version_bugfix=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print $3}'`\njemalloc_version_nrev=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print $4}'`\njemalloc_version_gid=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print $5}'`\n\n\n\n\n\n\n\ndefault_retain=\"0\"\nzero_realloc_default_free=\"0\"\nmaps_coalesce=\"1\"\nDUMP_SYMS=\"${NM} -a\"\nSYM_PREFIX=\"\"\ncase \"${host}\" in\n  *-*-darwin* | 
*-*-ios*)\n\tabi=\"macho\"\n\tRPATH=\"\"\n\tLD_PRELOAD_VAR=\"DYLD_INSERT_LIBRARIES\"\n\tso=\"dylib\"\n\timportlib=\"${so}\"\n\tforce_tls=\"0\"\n\tDSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'\n\tSOREV=\"${rev}.${so}\"\n\tsbrk_deprecated=\"1\"\n\tSYM_PREFIX=\"_\"\n\t;;\n  *-*-freebsd*)\n\tT_APPEND_V=-D_BSD_SOURCE\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\tabi=\"elf\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_SYSCTL_VM_OVERCOMMIT  \" >>confdefs.h\n\n\tforce_lazy_lock=\"1\"\n\t;;\n  *-*-dragonfly*)\n\tabi=\"elf\"\n\t;;\n  *-*-openbsd*)\n\tabi=\"elf\"\n\tforce_tls=\"0\"\n\t;;\n  *-*-bitrig*)\n\tabi=\"elf\"\n\t;;\n  *-*-linux-android*)\n\t\tT_APPEND_V=-D_GNU_SOURCE\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\tabi=\"elf\"\n\tglibc=\"0\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_ALLOCA_H  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_THREADED_INIT  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_C11_ATOMICS  \" >>confdefs.h\n\n\tforce_tls=\"0\"\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-linux*)\n\t\tT_APPEND_V=-D_GNU_SOURCE\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\tabi=\"elf\"\n\tglibc=\"1\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_ALLOCA_H  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define 
JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_THREADED_INIT  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_USE_CXX_THROW  \" >>confdefs.h\n\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-kfreebsd*)\n\t\tT_APPEND_V=-D_GNU_SOURCE\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\tabi=\"elf\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_ALLOCA_H  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_SYSCTL_VM_OVERCOMMIT  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_THREADED_INIT  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_USE_CXX_THROW  \" >>confdefs.h\n\n\t;;\n  *-*-netbsd*)\n\t{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking ABI\" >&5\nprintf %s \"checking ABI... \" >&6; }\n        cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#ifdef __ELF__\n/* ELF */\n#else\n#error aout\n#endif\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  abi=\"elf\"\nelse $as_nop\n  abi=\"aout\"\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n\t{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $abi\" >&5\nprintf \"%s\\n\" \"$abi\" >&6; }\n\t;;\n  *-*-solaris2*)\n\tabi=\"elf\"\n\tRPATH='-Wl,-R,$(1)'\n\t\tT_APPEND_V=-D_POSIX_PTHREAD_SEMANTICS\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\tT_APPEND_V=-lposix4 -lsocket -lnsl\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\n\t;;\n  *-ibm-aix*)\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  \t  LD_PRELOAD_VAR=\"LDR_PRELOAD64\"\n\telse\n\t  \t  LD_PRELOAD_VAR=\"LDR_PRELOAD\"\n\tfi\n\tabi=\"xcoff\"\n\t;;\n  *-*-mingw* | *-*-cygwin*)\n\tabi=\"pecoff\"\n\tforce_tls=\"0\"\n\tmaps_coalesce=\"0\"\n\tRPATH=\"\"\n\tso=\"dll\"\n\tif test \"x$je_cv_msvc\" = \"xyes\" ; then\n\t  importlib=\"lib\"\n\t  DSO_LDFLAGS=\"-LD\"\n\t  EXTRA_LDFLAGS=\"-link -DEBUG\"\n\t  CTARGET='-Fo$@'\n\t  LDTARGET='-Fe$@'\n\t  AR='lib'\n\t  ARFLAGS='-nologo -out:'\n\t  AROUT='$@'\n\t  CC_MM=\n        else\n\t  importlib=\"${so}\"\n\t  DSO_LDFLAGS=\"-shared\"\n\t  link_whole_archive=\"1\"\n\tfi\n\tcase \"${host}\" in\n\t  *-*-cygwin*)\n\t    DUMP_SYMS=\"dumpbin /SYMBOLS\"\n\t    ;;\n\t  *)\n\t    ;;\n\tesac\n\ta=\"lib\"\n\tlibprefix=\"\"\n\tSOREV=\"${so}\"\n\tPIC_CFLAGS=\"\"\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-nto-qnx)\n\tabi=\"elf\"\n  force_tls=\"0\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAS_ALLOCA_H  \" >>confdefs.h\n\n\t;;\n  *)\n\t{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: 
Unsupported operating system: ${host}\" >&5\nprintf \"%s\\n\" \"Unsupported operating system: ${host}\" >&6; }\n\tabi=\"elf\"\n\t;;\nesac\n\nJEMALLOC_USABLE_SIZE_CONST=const\n       for ac_header in malloc.h\ndo :\n  ac_fn_c_check_header_compile \"$LINENO\" \"malloc.h\" \"ac_cv_header_malloc_h\" \"$ac_includes_default\"\nif test \"x$ac_cv_header_malloc_h\" = xyes\nthen :\n  printf \"%s\\n\" \"#define HAVE_MALLOC_H 1\" >>confdefs.h\n\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether malloc_usable_size definition can use const argument\" >&5\nprintf %s \"checking whether malloc_usable_size definition can use const argument... \" >&6; }\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <malloc.h>\n     #include <stddef.h>\n    size_t malloc_usable_size(const void *ptr);\n\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n\n                { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\n\nelse $as_nop\n\n                JEMALLOC_USABLE_SIZE_CONST=\n                { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\n\nfi\n\ndone\n\nprintf \"%s\\n\" \"#define JEMALLOC_USABLE_SIZE_CONST $JEMALLOC_USABLE_SIZE_CONST\" >>confdefs.h\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for library containing log\" >&5\nprintf %s \"checking for library containing log... \" >&6; }\nif test ${ac_cv_search_log+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_func_search_save_LIBS=$LIBS\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\nchar log ();\nint\nmain (void)\n{\nreturn log ();\n  ;\n  return 0;\n}\n_ACEOF\nfor ac_lib in '' m\ndo\n  if test -z \"$ac_lib\"; then\n    ac_res=\"none required\"\n  else\n    ac_res=-l$ac_lib\n    LIBS=\"-l$ac_lib  $ac_func_search_save_LIBS\"\n  fi\n  if ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_search_log=$ac_res\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext\n  if test ${ac_cv_search_log+y}\nthen :\n  break\nfi\ndone\nif test ${ac_cv_search_log+y}\nthen :\n\nelse $as_nop\n  ac_cv_search_log=no\nfi\nrm conftest.$ac_ext\nLIBS=$ac_func_search_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_log\" >&5\nprintf \"%s\\n\" \"$ac_cv_search_log\" >&6; }\nac_res=$ac_cv_search_log\nif test \"$ac_res\" != no\nthen :\n  test \"$ac_res\" = \"none required\" || LIBS=\"$ac_res $LIBS\"\n\nelse $as_nop\n  as_fn_error $? \"Missing math functions\" \"$LINENO\" 5\nfi\n\nif test \"x$ac_cv_search_log\" != \"xnone required\" ; then\n  LM=\"$ac_cv_search_log\"\nelse\n  LM=\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether __attribute__ syntax is compilable\" >&5\nprintf %s \"checking whether __attribute__ syntax is compilable... \" >&6; }\nif test ${je_cv_attribute+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\nstatic __attribute__((unused)) void foo(void){}\nint\nmain (void)\n{\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_attribute=yes\nelse $as_nop\n  je_cv_attribute=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_attribute\" >&5\nprintf \"%s\\n\" \"$je_cv_attribute\" >&6; }\n\nif test \"x${je_cv_attribute}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR  \" >>confdefs.h\n\n  if test \"x${GCC}\" = \"xyes\" -a \"x${abi}\" = \"xelf\"; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -fvisibility=hidden\" >&5\nprintf %s \"checking whether compiler supports -fvisibility=hidden... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-fvisibility=hidden\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-fvisibility=hidden\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -fvisibility=hidden\" >&5\nprintf %s \"checking whether compiler supports -fvisibility=hidden... \" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-fvisibility=hidden\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-fvisibility=hidden\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n  fi\nfi\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether tls_model attribute is compilable\" >&5\nprintf %s \"checking whether tls_model attribute is compilable... \" >&6; }\nif test ${je_cv_tls_model+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\nstatic __thread int\n               __attribute__((tls_model(\"initial-exec\"), unused)) foo;\n               foo = 0;\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_tls_model=yes\nelse $as_nop\n  je_cv_tls_model=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_tls_model\" >&5\nprintf \"%s\\n\" \"$je_cv_tls_model\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether alloc_size attribute is compilable\" >&5\nprintf %s \"checking whether alloc_size attribute is compilable... \" >&6; }\nif test ${je_cv_alloc_size+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <stdlib.h>\nint\nmain (void)\n{\nvoid *foo(size_t size) __attribute__((alloc_size(1)));\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_alloc_size=yes\nelse $as_nop\n  je_cv_alloc_size=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_alloc_size\" >&5\nprintf \"%s\\n\" \"$je_cv_alloc_size\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_alloc_size}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_ALLOC_SIZE  \" >>confdefs.h\n\nfi\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether format(gnu_printf, ...) attribute is compilable\" >&5\nprintf %s \"checking whether format(gnu_printf, ...) attribute is compilable... \" >&6; }\nif test ${je_cv_format_gnu_printf+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <stdlib.h>\nint\nmain (void)\n{\nvoid *foo(const char *format, ...) 
__attribute__((format(gnu_printf, 1, 2)));\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_format_gnu_printf=yes\nelse $as_nop\n  je_cv_format_gnu_printf=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_format_gnu_printf\" >&5\nprintf \"%s\\n\" \"$je_cv_format_gnu_printf\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_format_gnu_printf}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF  \" >>confdefs.h\n\nfi\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether format(printf, ...) attribute is compilable\" >&5\nprintf %s \"checking whether format(printf, ...) attribute is compilable... \" >&6; }\nif test ${je_cv_format_printf+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <stdlib.h>\nint\nmain (void)\n{\nvoid *foo(const char *format, ...) 
__attribute__((format(printf, 1, 2)));\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_format_printf=yes\nelse $as_nop\n  je_cv_format_printf=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_format_printf\" >&5\nprintf \"%s\\n\" \"$je_cv_format_printf\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_format_printf}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_FORMAT_PRINTF  \" >>confdefs.h\n\nfi\n\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether format(printf, ...) attribute is compilable\" >&5\nprintf %s \"checking whether format(printf, ...) attribute is compilable... \" >&6; }\nif test ${je_cv_format_arg+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <stdlib.h>\nint\nmain (void)\n{\nconst char * __attribute__((__format_arg__(1))) foo(const char *format);\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_format_arg=yes\nelse $as_nop\n  je_cv_format_arg=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_format_arg\" >&5\nprintf \"%s\\n\" \"$je_cv_format_arg\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_format_arg}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_FORMAT_ARG  \" >>confdefs.h\n\nfi\n\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wimplicit-fallthrough\" >&5\nprintf %s \"checking whether compiler supports -Wimplicit-fallthrough... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wimplicit-fallthrough\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wimplicit-fallthrough\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether fallthrough attribute is compilable\" >&5\nprintf %s \"checking whether fallthrough attribute is compilable... \" >&6; }\nif test ${je_cv_fallthrough+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#if !__has_attribute(fallthrough)\n               #error \"foo\"\n               #endif\nint\nmain (void)\n{\nint x = 0;\n               switch (x) {\n               case 0: __attribute__((__fallthrough__));\n               case 1: return 1;\n               }\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_fallthrough=yes\nelse $as_nop\n  je_cv_fallthrough=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_fallthrough\" >&5\nprintf \"%s\\n\" \"$je_cv_fallthrough\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_fallthrough}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_FALLTHROUGH  \" >>confdefs.h\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wimplicit-fallthrough\" >&5\nprintf %s \"checking whether compiler supports -Wimplicit-fallthrough... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Wimplicit-fallthrough\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Wimplicit-fallthrough\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Wimplicit-fallthrough\" >&5\nprintf %s \"checking whether compiler supports -Wimplicit-fallthrough... \" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-Wimplicit-fallthrough\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-Wimplicit-fallthrough\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\nfi\n\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether cold attribute is compilable\" >&5\nprintf %s \"checking whether cold attribute is compilable... \" >&6; }\nif test ${je_cv_cold+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n__attribute__((__cold__)) void foo();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_cold=yes\nelse $as_nop\n  je_cv_cold=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_cold\" >&5\nprintf \"%s\\n\" \"$je_cv_cold\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_cold}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ATTR_COLD  \" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether vm_make_tag is compilable\" >&5\nprintf %s \"checking whether vm_make_tag is compilable... \" >&6; }\nif test ${je_cv_vm_make_tag+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n#include <sys/mman.h>\n\t       #include <mach/vm_statistics.h>\nint\nmain (void)\n{\nvoid *p;\n\t       p = mmap(0, 16, PROT_READ, MAP_ANON|MAP_PRIVATE, VM_MAKE_TAG(1), 0);\n\t       munmap(p, 16);\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_vm_make_tag=yes\nelse $as_nop\n  je_cv_vm_make_tag=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_vm_make_tag\" >&5\nprintf \"%s\\n\" \"$je_cv_vm_make_tag\" >&6; }\n\nif test \"x${je_cv_vm_make_tag}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_VM_MAKE_TAG  \" >>confdefs.h\n\nfi\n\n\n# Check whether --with-rpath was given.\nif test ${with_rpath+y}\nthen :\n  withval=$with_rpath; if test \"x$with_rpath\" = \"xno\" ; then\n  RPATH_EXTRA=\nelse\n  RPATH_EXTRA=\"`echo $with_rpath | tr \\\":\\\" \\\" \\\"`\"\nfi\nelse $as_nop\n  RPATH_EXTRA=\n\nfi\n\n\n\n# Check whether --enable-autogen was given.\nif test ${enable_autogen+y}\nthen :\n  enableval=$enable_autogen; if test \"x$enable_autogen\" = \"xno\" ; then\n  enable_autogen=\"0\"\nelse\n  enable_autogen=\"1\"\nfi\n\nelse $as_nop\n  enable_autogen=\"0\"\n\nfi\n\n\n\n\n  # Find a good install program.  We prefer a C program (faster),\n# so one script is as good as another.  
But avoid the broken or\n# incompatible versions:\n# SysV /etc/install, /usr/sbin/install\n# SunOS /usr/etc/install\n# IRIX /sbin/install\n# AIX /bin/install\n# AmigaOS /C/install, which installs bootblocks on floppy discs\n# AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag\n# AFS /usr/afsws/bin/install, which mishandles nonexistent args\n# SVR4 /usr/ucb/install, which tries to use the nonexistent group \"staff\"\n# OS/2's system install, which has a completely different semantic\n# ./install, which can be erroneously created by make from ./install.sh.\n# Reject install programs that cannot install multiple files.\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install\" >&5\nprintf %s \"checking for a BSD-compatible install... \" >&6; }\nif test -z \"$INSTALL\"; then\nif test ${ac_cv_path_install+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    # Account for fact that we put trailing slashes in our PATH walk.\ncase $as_dir in #((\n  ./ | /[cC]/* | \\\n  /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \\\n  ?:[\\\\/]os2[\\\\/]install[\\\\/]* | ?:[\\\\/]OS2[\\\\/]INSTALL[\\\\/]* | \\\n  /usr/ucb/* ) ;;\n  *)\n    # OSF1 and SCO ODT 3.0 have their own names for install.\n    # Don't use installbsd from OSF since it installs stuff as root\n    # by default.\n    for ac_prog in ginstall scoinst install; do\n      for ac_exec_ext in '' $ac_executable_extensions; do\n\tif as_fn_executable_p \"$as_dir$ac_prog$ac_exec_ext\"; then\n\t  if test $ac_prog = install &&\n\t    grep dspmsg \"$as_dir$ac_prog$ac_exec_ext\" >/dev/null 2>&1; then\n\t    # AIX install.  
It has an incompatible calling convention.\n\t    :\n\t  elif test $ac_prog = install &&\n\t    grep pwplus \"$as_dir$ac_prog$ac_exec_ext\" >/dev/null 2>&1; then\n\t    # program-specific install script used by HP pwplus--don't use.\n\t    :\n\t  else\n\t    rm -rf conftest.one conftest.two conftest.dir\n\t    echo one > conftest.one\n\t    echo two > conftest.two\n\t    mkdir conftest.dir\n\t    if \"$as_dir$ac_prog$ac_exec_ext\" -c conftest.one conftest.two \"`pwd`/conftest.dir/\" &&\n\t      test -s conftest.one && test -s conftest.two &&\n\t      test -s conftest.dir/conftest.one &&\n\t      test -s conftest.dir/conftest.two\n\t    then\n\t      ac_cv_path_install=\"$as_dir$ac_prog$ac_exec_ext -c\"\n\t      break 3\n\t    fi\n\t  fi\n\tfi\n      done\n    done\n    ;;\nesac\n\n  done\nIFS=$as_save_IFS\n\nrm -rf conftest.one conftest.two conftest.dir\n\nfi\n  if test ${ac_cv_path_install+y}; then\n    INSTALL=$ac_cv_path_install\n  else\n    # As a last resort, use the slow shell script.  Don't cache a\n    # value for INSTALL within a source directory, because that will\n    # break other packages using the cache if that directory is\n    # removed, or if the value is a relative name.\n    INSTALL=$ac_install_sh\n  fi\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $INSTALL\" >&5\nprintf \"%s\\n\" \"$INSTALL\" >&6; }\n\n# Use test -z because SunOS4 sh mishandles braces in ${var-val}.\n# It thinks the first close brace ends the variable substitution.\ntest -z \"$INSTALL_PROGRAM\" && INSTALL_PROGRAM='${INSTALL}'\n\ntest -z \"$INSTALL_SCRIPT\" && INSTALL_SCRIPT='${INSTALL}'\n\ntest -z \"$INSTALL_DATA\" && INSTALL_DATA='${INSTALL} -m 644'\n\nif test -n \"$ac_tool_prefix\"; then\n  # Extract the first word of \"${ac_tool_prefix}ranlib\", so it can be a program name with args.\nset dummy ${ac_tool_prefix}ranlib; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_RANLIB+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$RANLIB\"; then\n  ac_cv_prog_RANLIB=\"$RANLIB\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_RANLIB=\"${ac_tool_prefix}ranlib\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nRANLIB=$ac_cv_prog_RANLIB\nif test -n \"$RANLIB\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $RANLIB\" >&5\nprintf \"%s\\n\" \"$RANLIB\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\nfi\nif test -z \"$ac_cv_prog_RANLIB\"; then\n  ac_ct_RANLIB=$RANLIB\n  # Extract the first word of \"ranlib\", so it can be a program name with args.\nset dummy ranlib; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_prog_ac_ct_RANLIB+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test -n \"$ac_ct_RANLIB\"; then\n  ac_cv_prog_ac_ct_RANLIB=\"$ac_ct_RANLIB\" # Let the user override the test.\nelse\nas_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_prog_ac_ct_RANLIB=\"ranlib\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\nfi\nfi\nac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB\nif test -n \"$ac_ct_RANLIB\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB\" >&5\nprintf \"%s\\n\" \"$ac_ct_RANLIB\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n  if test \"x$ac_ct_RANLIB\" = x; then\n    RANLIB=\":\"\n  else\n    case $cross_compiling:$ac_tool_warned in\nyes:)\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: using cross tools not prefixed with host triplet\" >&2;}\nac_tool_warned=yes ;;\nesac\n    RANLIB=$ac_ct_RANLIB\n  fi\nelse\n  RANLIB=\"$ac_cv_prog_RANLIB\"\nfi\n\n# Extract the first word of \"ld\", so it can be a program name with args.\nset dummy ld; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_path_LD+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  case $LD in\n  [\\\\/]* | ?:[\\\\/]*)\n  ac_cv_path_LD=\"$LD\" # Let the user override the test with a path.\n  ;;\n  *)\n  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_path_LD=\"$as_dir$ac_word$ac_exec_ext\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\n  test -z \"$ac_cv_path_LD\" && ac_cv_path_LD=\"false\"\n  ;;\nesac\nfi\nLD=$ac_cv_path_LD\nif test -n \"$LD\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $LD\" >&5\nprintf \"%s\\n\" \"$LD\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n# Extract the first word of \"autoconf\", so it can be a program name with args.\nset dummy autoconf; ac_word=$2\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for $ac_word\" >&5\nprintf %s \"checking for $ac_word... 
\" >&6; }\nif test ${ac_cv_path_AUTOCONF+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  case $AUTOCONF in\n  [\\\\/]* | ?:[\\\\/]*)\n  ac_cv_path_AUTOCONF=\"$AUTOCONF\" # Let the user override the test with a path.\n  ;;\n  *)\n  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    for ac_exec_ext in '' $ac_executable_extensions; do\n  if as_fn_executable_p \"$as_dir$ac_word$ac_exec_ext\"; then\n    ac_cv_path_AUTOCONF=\"$as_dir$ac_word$ac_exec_ext\"\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: found $as_dir$ac_word$ac_exec_ext\" >&5\n    break 2\n  fi\ndone\n  done\nIFS=$as_save_IFS\n\n  test -z \"$ac_cv_path_AUTOCONF\" && ac_cv_path_AUTOCONF=\"false\"\n  ;;\nesac\nfi\nAUTOCONF=$ac_cv_path_AUTOCONF\nif test -n \"$AUTOCONF\"; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $AUTOCONF\" >&5\nprintf \"%s\\n\" \"$AUTOCONF\" >&6; }\nelse\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\nfi\n\n\n\n# Check whether --enable-doc was given.\nif test ${enable_doc+y}\nthen :\n  enableval=$enable_doc; if test \"x$enable_doc\" = \"xno\" ; then\n  enable_doc=\"0\"\nelse\n  enable_doc=\"1\"\nfi\n\nelse $as_nop\n  enable_doc=\"1\"\n\nfi\n\n\n\n# Check whether --enable-shared was given.\nif test ${enable_shared+y}\nthen :\n  enableval=$enable_shared; if test \"x$enable_shared\" = \"xno\" ; then\n  enable_shared=\"0\"\nelse\n  enable_shared=\"1\"\nfi\n\nelse $as_nop\n  enable_shared=\"1\"\n\nfi\n\n\n\n# Check whether --enable-static was given.\nif test ${enable_static+y}\nthen :\n  enableval=$enable_static; if test \"x$enable_static\" = \"xno\" ; then\n  enable_static=\"0\"\nelse\n  enable_static=\"1\"\nfi\n\nelse $as_nop\n  enable_static=\"1\"\n\nfi\n\n\n\nif test \"$enable_shared$enable_static\" = \"00\" ; then\n  as_fn_error $? 
\"Please enable one of shared or static builds\" \"$LINENO\" 5\nfi\n\n\n# Check whether --with-mangling was given.\nif test ${with_mangling+y}\nthen :\n  withval=$with_mangling; mangling_map=\"$with_mangling\"\nelse $as_nop\n  mangling_map=\"\"\nfi\n\n\n\n# Check whether --with-jemalloc_prefix was given.\nif test ${with_jemalloc_prefix+y}\nthen :\n  withval=$with_jemalloc_prefix; JEMALLOC_PREFIX=\"$with_jemalloc_prefix\"\nelse $as_nop\n  if test \"x$abi\" != \"xmacho\" -a \"x$abi\" != \"xpecoff\"; then\n  JEMALLOC_PREFIX=\"\"\nelse\n  JEMALLOC_PREFIX=\"je_\"\nfi\n\nfi\n\nif test \"x$JEMALLOC_PREFIX\" = \"x\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_IS_MALLOC  \" >>confdefs.h\n\nelse\n  JEMALLOC_CPREFIX=`echo ${JEMALLOC_PREFIX} | tr \"a-z\" \"A-Z\"`\n\nprintf \"%s\\n\" \"#define JEMALLOC_PREFIX \\\"$JEMALLOC_PREFIX\\\"\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_CPREFIX \\\"$JEMALLOC_CPREFIX\\\"\" >>confdefs.h\n\nfi\n\n\n\n\n# Check whether --with-export was given.\nif test ${with_export+y}\nthen :\n  withval=$with_export; if test \"x$with_export\" = \"xno\"; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_EXPORT /**/\" >>confdefs.h\n\nfi\n\nfi\n\n\npublic_syms=\"aligned_alloc calloc dallocx free mallctl mallctlbymib mallctlnametomib malloc malloc_conf malloc_conf_2_conf_harder malloc_message malloc_stats_print malloc_usable_size mallocx smallocx_${jemalloc_version_gid} nallocx posix_memalign rallocx realloc sallocx sdallocx xallocx\"\nac_fn_c_check_func \"$LINENO\" \"memalign\" \"ac_cv_func_memalign\"\nif test \"x$ac_cv_func_memalign\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE_MEMALIGN  \" >>confdefs.h\n\n\t       public_syms=\"${public_syms} memalign\"\nfi\n\nac_fn_c_check_func \"$LINENO\" \"valloc\" \"ac_cv_func_valloc\"\nif test \"x$ac_cv_func_valloc\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE_VALLOC  \" >>confdefs.h\n\n\t       public_syms=\"${public_syms} valloc\"\nfi\n\nac_fn_c_check_func \"$LINENO\" 
\"malloc_size\" \"ac_cv_func_malloc_size\"\nif test \"x$ac_cv_func_malloc_size\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MALLOC_SIZE  \" >>confdefs.h\n\n\t       public_syms=\"${public_syms} malloc_size\"\nfi\n\n\nwrap_syms=\nif test \"x${JEMALLOC_PREFIX}\" = \"x\" ; then\n  ac_fn_c_check_func \"$LINENO\" \"__libc_calloc\" \"ac_cv_func___libc_calloc\"\nif test \"x$ac_cv_func___libc_calloc\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_CALLOC  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_calloc\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__libc_free\" \"ac_cv_func___libc_free\"\nif test \"x$ac_cv_func___libc_free\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_FREE  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_free\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__libc_malloc\" \"ac_cv_func___libc_malloc\"\nif test \"x$ac_cv_func___libc_malloc\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_MALLOC  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_malloc\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__libc_memalign\" \"ac_cv_func___libc_memalign\"\nif test \"x$ac_cv_func___libc_memalign\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_MEMALIGN  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_memalign\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__libc_realloc\" \"ac_cv_func___libc_realloc\"\nif test \"x$ac_cv_func___libc_realloc\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_REALLOC  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_realloc\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__libc_valloc\" \"ac_cv_func___libc_valloc\"\nif test \"x$ac_cv_func___libc_valloc\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___LIBC_VALLOC  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __libc_valloc\"\nfi\n\n  ac_fn_c_check_func \"$LINENO\" \"__posix_memalign\" 
\"ac_cv_func___posix_memalign\"\nif test \"x$ac_cv_func___posix_memalign\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define JEMALLOC_OVERRIDE___POSIX_MEMALIGN  \" >>confdefs.h\n\n\t\t wrap_syms=\"${wrap_syms} __posix_memalign\"\nfi\n\nfi\n\ncase \"${host}\" in\n  *-*-mingw* | *-*-cygwin*)\n    wrap_syms=\"${wrap_syms} tls_callback\"\n    ;;\n  *)\n    ;;\nesac\n\n\n# Check whether --with-private_namespace was given.\nif test ${with_private_namespace+y}\nthen :\n  withval=$with_private_namespace; JEMALLOC_PRIVATE_NAMESPACE=\"${with_private_namespace}je_\"\nelse $as_nop\n  JEMALLOC_PRIVATE_NAMESPACE=\"je_\"\n\nfi\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_PRIVATE_NAMESPACE $JEMALLOC_PRIVATE_NAMESPACE\" >>confdefs.h\n\nprivate_namespace=\"$JEMALLOC_PRIVATE_NAMESPACE\"\n\n\n\n# Check whether --with-install_suffix was given.\nif test ${with_install_suffix+y}\nthen :\n  withval=$with_install_suffix; case \"$with_install_suffix\" in\n   *\\ * ) as_fn_error $? \"Install suffix should not contain spaces\" \"$LINENO\" 5 ;;\n   * ) INSTALL_SUFFIX=\"$with_install_suffix\" ;;\nesac\nelse $as_nop\n  INSTALL_SUFFIX=\n\nfi\n\ninstall_suffix=\"$INSTALL_SUFFIX\"\n\n\n\n# Check whether --with-malloc_conf was given.\nif test ${with_malloc_conf+y}\nthen :\n  withval=$with_malloc_conf; JEMALLOC_CONFIG_MALLOC_CONF=\"$with_malloc_conf\"\nelse $as_nop\n  JEMALLOC_CONFIG_MALLOC_CONF=\"\"\n\nfi\n\nconfig_malloc_conf=\"$JEMALLOC_CONFIG_MALLOC_CONF\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_CONFIG_MALLOC_CONF \\\"$config_malloc_conf\\\"\" >>confdefs.h\n\n\nje_=\"je_\"\n\n\ncfgoutputs_in=\"Makefile.in\"\ncfgoutputs_in=\"${cfgoutputs_in} jemalloc.pc.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/html.xsl.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/manpages.xsl.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/jemalloc.xml.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/jemalloc_macros.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/jemalloc_protos.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} 
include/jemalloc/jemalloc_typedefs.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/internal/jemalloc_preamble.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} test/test.sh.in\"\ncfgoutputs_in=\"${cfgoutputs_in} test/include/test/jemalloc_test.h.in\"\n\ncfgoutputs_out=\"Makefile\"\ncfgoutputs_out=\"${cfgoutputs_out} jemalloc.pc\"\ncfgoutputs_out=\"${cfgoutputs_out} doc/html.xsl\"\ncfgoutputs_out=\"${cfgoutputs_out} doc/manpages.xsl\"\ncfgoutputs_out=\"${cfgoutputs_out} doc/jemalloc.xml\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_macros.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_protos.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_typedefs.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/internal/jemalloc_preamble.h\"\ncfgoutputs_out=\"${cfgoutputs_out} test/test.sh\"\ncfgoutputs_out=\"${cfgoutputs_out} test/include/test/jemalloc_test.h\"\n\ncfgoutputs_tup=\"Makefile\"\ncfgoutputs_tup=\"${cfgoutputs_tup} jemalloc.pc:jemalloc.pc.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_typedefs.h:include/jemalloc/jemalloc_typedefs.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/internal/jemalloc_preamble.h\"\ncfgoutputs_tup=\"${cfgoutputs_tup} test/test.sh:test/test.sh.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in\"\n\ncfghdrs_in=\"include/jemalloc/jemalloc_defs.h.in\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/jemalloc_internal_defs.h.in\"\ncfghdrs_in=\"${cfghdrs_in} 
include/jemalloc/internal/private_symbols.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/private_namespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/public_namespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/public_unnamespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/jemalloc_rename.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/jemalloc_mangle.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/jemalloc.sh\"\ncfghdrs_in=\"${cfghdrs_in} test/include/test/jemalloc_test_defs.h.in\"\n\ncfghdrs_out=\"include/jemalloc/jemalloc_defs.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/private_symbols.awk\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/private_symbols_jet.awk\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_symbols.txt\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_namespace.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_unnamespace.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_protos_jet.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_rename.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_mangle.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_mangle_jet.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h\"\ncfghdrs_out=\"${cfghdrs_out} test/include/test/jemalloc_test_defs.h\"\n\ncfghdrs_tup=\"include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in\"\ncfghdrs_tup=\"${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:include/jemalloc/internal/jemalloc_internal_defs.h.in\"\ncfghdrs_tup=\"${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:test/include/test/jemalloc_test_defs.h.in\"\n\n\n# Check whether --enable-debug was given.\nif test ${enable_debug+y}\nthen :\n  enableval=$enable_debug; if test \"x$enable_debug\" = \"xno\" ; then\n  
enable_debug=\"0\"\nelse\n  enable_debug=\"1\"\nfi\n\nelse $as_nop\n  enable_debug=\"0\"\n\nfi\n\nif test \"x$enable_debug\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_DEBUG  \" >>confdefs.h\n\nfi\n\n\nif test \"x$enable_debug\" = \"x0\" ; then\n  if test \"x$GCC\" = \"xyes\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O3\" >&5\nprintf %s \"checking whether compiler supports -O3... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-O3\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-O3\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O3\" >&5\nprintf %s \"checking whether compiler supports -O3... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-O3\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-O3\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -funroll-loops\" >&5\nprintf %s \"checking whether compiler supports 
-funroll-loops... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-funroll-loops\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-funroll-loops\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  elif test \"x$je_cv_msvc\" = \"xyes\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O2\" >&5\nprintf %s \"checking whether compiler supports -O2... 
\" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-O2\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-O2\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O2\" >&5\nprintf %s \"checking whether compiler supports -O2... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-O2\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-O2\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n  else\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O\" >&5\nprintf %s \"checking whether compiler supports -O... 
\" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-O\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-O\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -O\" >&5\nprintf %s \"checking whether compiler supports -O... 
\" >&6; }\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nT_APPEND_V=-O\n  if test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\nac_ext=cpp\nac_cpp='$CXXCPP $CPPFLAGS'\nac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_cxx_compiler_gnu\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_cxx_try_compile \"$LINENO\"\nthen :\n  je_cv_cxxflags_added=-O\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cxxflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nac_ext=c\nac_cpp='$CPP $CPPFLAGS'\nac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'\nac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'\nac_compiler_gnu=$ac_cv_c_compiler_gnu\n\nif test \"x${CONFIGURE_CXXFLAGS}\" = \"x\" -o \"x${SPECIFIED_CXXFLAGS}\" = \"x\" ; then\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS}${SPECIFIED_CXXFLAGS}\"\nelse\n  CXXFLAGS=\"${CONFIGURE_CXXFLAGS} ${SPECIFIED_CXXFLAGS}\"\nfi\n\n\n  fi\nfi\n\n# Check whether --enable-stats was given.\nif test ${enable_stats+y}\nthen :\n  enableval=$enable_stats; if test \"x$enable_stats\" = \"xno\" ; then\n  
enable_stats=\"0\"\nelse\n  enable_stats=\"1\"\nfi\n\nelse $as_nop\n  enable_stats=\"1\"\n\nfi\n\nif test \"x$enable_stats\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_STATS  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-experimental_smallocx was given.\nif test ${enable_experimental_smallocx+y}\nthen :\n  enableval=$enable_experimental_smallocx; if test \"x$enable_experimental_smallocx\" = \"xno\" ; then\nenable_experimental_smallocx=\"0\"\nelse\nenable_experimental_smallocx=\"1\"\nfi\n\nelse $as_nop\n  enable_experimental_smallocx=\"0\"\n\nfi\n\nif test \"x$enable_experimental_smallocx\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_EXPERIMENTAL_SMALLOCX_API  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-prof was given.\nif test ${enable_prof+y}\nthen :\n  enableval=$enable_prof; if test \"x$enable_prof\" = \"xno\" ; then\n  enable_prof=\"0\"\nelse\n  enable_prof=\"1\"\nfi\n\nelse $as_nop\n  enable_prof=\"0\"\n\nfi\n\nif test \"x$enable_prof\" = \"x1\" ; then\n  backtrace_method=\"\"\nelse\n  backtrace_method=\"N/A\"\nfi\n\n# Check whether --enable-prof-libunwind was given.\nif test ${enable_prof_libunwind+y}\nthen :\n  enableval=$enable_prof_libunwind; if test \"x$enable_prof_libunwind\" = \"xno\" ; then\n  enable_prof_libunwind=\"0\"\nelse\n  enable_prof_libunwind=\"1\"\n  if test \"x$enable_prof\" = \"x0\" ; then\n    as_fn_error $? \"--enable-prof-libunwind should only be used with --enable-prof\" \"$LINENO\" 5\n  fi\nfi\n\nelse $as_nop\n  enable_prof_libunwind=\"0\"\n\nfi\n\n\n# Check whether --with-static_libunwind was given.\nif test ${with_static_libunwind+y}\nthen :\n  withval=$with_static_libunwind; if test \"x$with_static_libunwind\" = \"xno\" ; then\n  LUNWIND=\"-lunwind\"\nelse\n  if test ! -f \"$with_static_libunwind\" ; then\n    as_fn_error $? 
\"Static libunwind not found: $with_static_libunwind\" \"$LINENO\" 5\n  fi\n  LUNWIND=\"$with_static_libunwind\"\nfi\nelse $as_nop\n  LUNWIND=\"-lunwind\"\n\nfi\n\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_libunwind\" = \"x1\" ; then\n         for ac_header in libunwind.h\ndo :\n  ac_fn_c_check_header_compile \"$LINENO\" \"libunwind.h\" \"ac_cv_header_libunwind_h\" \"$ac_includes_default\"\nif test \"x$ac_cv_header_libunwind_h\" = xyes\nthen :\n  printf \"%s\\n\" \"#define HAVE_LIBUNWIND_H 1\" >>confdefs.h\n\nelse $as_nop\n  enable_prof_libunwind=\"0\"\nfi\n\ndone\n  if test \"x$LUNWIND\" = \"x-lunwind\" ; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for unw_backtrace in -lunwind\" >&5\nprintf %s \"checking for unw_backtrace in -lunwind... \" >&6; }\nif test ${ac_cv_lib_unwind_unw_backtrace+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_check_lib_save_LIBS=$LIBS\nLIBS=\"-lunwind  $LIBS\"\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\nchar unw_backtrace ();\nint\nmain (void)\n{\nreturn unw_backtrace ();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_lib_unwind_unw_backtrace=yes\nelse $as_nop\n  ac_cv_lib_unwind_unw_backtrace=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nLIBS=$ac_check_lib_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_unwind_unw_backtrace\" >&5\nprintf \"%s\\n\" \"$ac_cv_lib_unwind_unw_backtrace\" >&6; }\nif test \"x$ac_cv_lib_unwind_unw_backtrace\" = xyes\nthen :\n  T_APPEND_V=$LUNWIND\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\nelse $as_nop\n  enable_prof_libunwind=\"0\"\nfi\n\n  else\n    T_APPEND_V=$LUNWIND\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\n  fi\n  if test \"x${enable_prof_libunwind}\" = \"x1\" ; then\n    backtrace_method=\"libunwind\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_PROF_LIBUNWIND  \" >>confdefs.h\n\n  fi\nfi\n\n# Check whether --enable-prof-libgcc was given.\nif test ${enable_prof_libgcc+y}\nthen :\n  enableval=$enable_prof_libgcc; if test \"x$enable_prof_libgcc\" = \"xno\" ; then\n  enable_prof_libgcc=\"0\"\nelse\n  enable_prof_libgcc=\"1\"\nfi\n\nelse $as_nop\n  enable_prof_libgcc=\"1\"\n\nfi\n\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_libgcc\" = \"x1\" \\\n     -a \"x$GCC\" = \"xyes\" ; then\n         for ac_header in unwind.h\ndo :\n  ac_fn_c_check_header_compile \"$LINENO\" \"unwind.h\" \"ac_cv_header_unwind_h\" \"$ac_includes_default\"\nif test \"x$ac_cv_header_unwind_h\" = xyes\nthen :\n  printf \"%s\\n\" \"#define HAVE_UNWIND_H 1\" >>confdefs.h\n\nelse $as_nop\n  enable_prof_libgcc=\"0\"\nfi\n\ndone\n  if test \"x${enable_prof_libgcc}\" = \"x1\" ; then\n    { printf \"%s\\n\" 
\"$as_me:${as_lineno-$LINENO}: checking for _Unwind_Backtrace in -lgcc\" >&5\nprintf %s \"checking for _Unwind_Backtrace in -lgcc... \" >&6; }\nif test ${ac_cv_lib_gcc__Unwind_Backtrace+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_check_lib_save_LIBS=$LIBS\nLIBS=\"-lgcc  $LIBS\"\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  */\nchar _Unwind_Backtrace ();\nint\nmain (void)\n{\nreturn _Unwind_Backtrace ();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_lib_gcc__Unwind_Backtrace=yes\nelse $as_nop\n  ac_cv_lib_gcc__Unwind_Backtrace=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nLIBS=$ac_check_lib_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_gcc__Unwind_Backtrace\" >&5\nprintf \"%s\\n\" \"$ac_cv_lib_gcc__Unwind_Backtrace\" >&6; }\nif test \"x$ac_cv_lib_gcc__Unwind_Backtrace\" = xyes\nthen :\n  T_APPEND_V=-lgcc\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\nelse $as_nop\n  enable_prof_libgcc=\"0\"\nfi\n\n  fi\n  if test \"x${enable_prof_libgcc}\" = \"x1\" ; then\n    backtrace_method=\"libgcc\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_PROF_LIBGCC  \" >>confdefs.h\n\n  fi\nelse\n  enable_prof_libgcc=\"0\"\nfi\n\n# Check whether --enable-prof-gcc was given.\nif test ${enable_prof_gcc+y}\nthen :\n  enableval=$enable_prof_gcc; if test \"x$enable_prof_gcc\" = \"xno\" ; then\n  enable_prof_gcc=\"0\"\nelse\n  enable_prof_gcc=\"1\"\nfi\n\nelse $as_nop\n  enable_prof_gcc=\"1\"\n\nfi\n\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_gcc\" = \"x1\" \\\n     -a \"x$GCC\" = \"xyes\" ; then\n\n{ printf \"%s\\n\" 
\"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -fno-omit-frame-pointer\" >&5\nprintf %s \"checking whether compiler supports -fno-omit-frame-pointer... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-fno-omit-frame-pointer\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-fno-omit-frame-pointer\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  backtrace_method=\"gcc intrinsics\"\n\nprintf \"%s\\n\" \"#define JEMALLOC_PROF_GCC  \" >>confdefs.h\n\nelse\n  enable_prof_gcc=\"0\"\nfi\n\nif test \"x$backtrace_method\" = \"x\" ; then\n  backtrace_method=\"none (disabling profiling)\"\n  enable_prof=\"0\"\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking configured backtracing method\" >&5\nprintf %s \"checking configured backtracing method... 
\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $backtrace_method\" >&5\nprintf \"%s\\n\" \"$backtrace_method\" >&6; }\nif test \"x$enable_prof\" = \"x1\" ; then\n    T_APPEND_V=$LM\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_PROF  \" >>confdefs.h\n\nfi\n\n\nif test \"x${maps_coalesce}\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_MAPS_COALESCE  \" >>confdefs.h\n\nfi\n\nif test \"x$default_retain\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_RETAIN  \" >>confdefs.h\n\nfi\n\nif test \"x$zero_realloc_default_free\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_ZERO_REALLOC_DEFAULT_FREE  \" >>confdefs.h\n\nfi\n\nhave_dss=\"1\"\nac_fn_c_check_func \"$LINENO\" \"sbrk\" \"ac_cv_func_sbrk\"\nif test \"x$ac_cv_func_sbrk\" = xyes\nthen :\n  have_sbrk=\"1\"\nelse $as_nop\n  have_sbrk=\"0\"\nfi\n\nif test \"x$have_sbrk\" = \"x1\" ; then\n  if test \"x$sbrk_deprecated\" = \"x1\" ; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: Disabling dss allocation because sbrk is deprecated\" >&5\nprintf \"%s\\n\" \"Disabling dss allocation because sbrk is deprecated\" >&6; }\n    have_dss=\"0\"\n  fi\nelse\n  have_dss=\"0\"\nfi\n\nif test \"x$have_dss\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_DSS  \" >>confdefs.h\n\nfi\n\n# Check whether --enable-fill was given.\nif test ${enable_fill+y}\nthen :\n  enableval=$enable_fill; if test \"x$enable_fill\" = \"xno\" ; then\n  enable_fill=\"0\"\nelse\n  enable_fill=\"1\"\nfi\n\nelse $as_nop\n  enable_fill=\"1\"\n\nfi\n\nif test \"x$enable_fill\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_FILL  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-utrace was given.\nif test ${enable_utrace+y}\nthen :\n  enableval=$enable_utrace; if test \"x$enable_utrace\" = \"xno\" ; then\n  enable_utrace=\"0\"\nelse\n  
enable_utrace=\"1\"\nfi\n\nelse $as_nop\n  enable_utrace=\"0\"\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether utrace(2) is compilable\" >&5\nprintf %s \"checking whether utrace(2) is compilable... \" >&6; }\nif test ${je_cv_utrace+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/types.h>\n#include <sys/param.h>\n#include <sys/time.h>\n#include <sys/uio.h>\n#include <sys/ktrace.h>\n\nint\nmain (void)\n{\n\n\tutrace((void *)0, 0);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_utrace=yes\nelse $as_nop\n  je_cv_utrace=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_utrace\" >&5\nprintf \"%s\\n\" \"$je_cv_utrace\" >&6; }\n\nif test \"x${je_cv_utrace}\" = \"xno\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether utrace(2) with label is compilable\" >&5\nprintf %s \"checking whether utrace(2) with label is compilable... \" >&6; }\nif test ${je_cv_utrace_label+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n  #include <sys/types.h>\n  #include <sys/param.h>\n  #include <sys/time.h>\n  #include <sys/uio.h>\n  #include <sys/ktrace.h>\n\nint\nmain (void)\n{\n\n\t  utrace((void *)0, (void *)0, 0);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_utrace_label=yes\nelse $as_nop\n  je_cv_utrace_label=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_utrace_label\" >&5\nprintf \"%s\\n\" \"$je_cv_utrace_label\" >&6; }\n\n  if test \"x${je_cv_utrace_label}\" = \"xno\"; then\n    enable_utrace=\"0\"\n  fi\n  if test \"x$enable_utrace\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_UTRACE_LABEL  \" >>confdefs.h\n\n  fi\nelse\n  if test \"x$enable_utrace\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_UTRACE  \" >>confdefs.h\n\n  fi\nfi\n\n\n# Check whether --enable-xmalloc was given.\nif test ${enable_xmalloc+y}\nthen :\n  enableval=$enable_xmalloc; if test \"x$enable_xmalloc\" = \"xno\" ; then\n  enable_xmalloc=\"0\"\nelse\n  enable_xmalloc=\"1\"\nfi\n\nelse $as_nop\n  enable_xmalloc=\"0\"\n\nfi\n\nif test \"x$enable_xmalloc\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_XMALLOC  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-cache-oblivious was given.\nif test ${enable_cache_oblivious+y}\nthen :\n  enableval=$enable_cache_oblivious; if test \"x$enable_cache_oblivious\" = \"xno\" ; then\n  enable_cache_oblivious=\"0\"\nelse\n  enable_cache_oblivious=\"1\"\nfi\n\nelse $as_nop\n  enable_cache_oblivious=\"1\"\n\nfi\n\nif test \"x$enable_cache_oblivious\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_CACHE_OBLIVIOUS  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-log was given.\nif test ${enable_log+y}\nthen :\n  enableval=$enable_log; if test \"x$enable_log\" = \"xno\" ; then\n  enable_log=\"0\"\nelse\n  enable_log=\"1\"\nfi\n\nelse $as_nop\n  
enable_log=\"0\"\n\nfi\n\nif test \"x$enable_log\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_LOG  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-readlinkat was given.\nif test ${enable_readlinkat+y}\nthen :\n  enableval=$enable_readlinkat; if test \"x$enable_readlinkat\" = \"xno\" ; then\n  enable_readlinkat=\"0\"\nelse\n  enable_readlinkat=\"1\"\nfi\n\nelse $as_nop\n  enable_readlinkat=\"0\"\n\nfi\n\nif test \"x$enable_readlinkat\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_READLINKAT  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-opt-safety-checks was given.\nif test ${enable_opt_safety_checks+y}\nthen :\n  enableval=$enable_opt_safety_checks; if test \"x$enable_opt_safety_checks\" = \"xno\" ; then\n  enable_opt_safety_checks=\"0\"\nelse\n  enable_opt_safety_checks=\"1\"\nfi\n\nelse $as_nop\n  enable_opt_safety_checks=\"0\"\n\nfi\n\nif test \"x$enable_opt_safety_checks\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_OPT_SAFETY_CHECKS  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-opt-size-checks was given.\nif test ${enable_opt_size_checks+y}\nthen :\n  enableval=$enable_opt_size_checks; if test \"x$enable_opt_size_checks\" = \"xno\" ; then\n  enable_opt_size_checks=\"0\"\nelse\n  enable_opt_size_checks=\"1\"\nfi\n\nelse $as_nop\n  enable_opt_size_checks=\"0\"\n\nfi\n\nif test \"x$enable_opt_size_checks\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_OPT_SIZE_CHECKS  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-uaf-detection was given.\nif test ${enable_uaf_detection+y}\nthen :\n  enableval=$enable_uaf_detection; if test \"x$enable_uaf_detection\" = \"xno\" ; then\n  enable_uaf_detection=\"0\"\nelse\n  enable_uaf_detection=\"1\"\nfi\n\nelse $as_nop\n  enable_uaf_detection=\"0\"\n\nfi\n\nif test \"x$enable_uaf_detection\" = \"x1\" ; then\n  printf \"%s\\n\" \"#define JEMALLOC_UAF_DETECTION  \" >>confdefs.h\n\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether a program 
using __builtin_unreachable is compilable\" >&5\nprintf %s \"checking whether a program using __builtin_unreachable is compilable... \" >&6; }\nif test ${je_cv_gcc_builtin_unreachable+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\nvoid foo (void) {\n  __builtin_unreachable();\n}\n\nint\nmain (void)\n{\n\n\t{\n\t\tfoo();\n\t}\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_builtin_unreachable=yes\nelse $as_nop\n  je_cv_gcc_builtin_unreachable=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_builtin_unreachable\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_builtin_unreachable\" >&6; }\n\nif test \"x${je_cv_gcc_builtin_unreachable}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_UNREACHABLE __builtin_unreachable\" >>confdefs.h\n\nelse\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_UNREACHABLE abort\" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether a program using __builtin_ffsl is compilable\" >&5\nprintf %s \"checking whether a program using __builtin_ffsl is compilable... \" >&6; }\nif test ${je_cv_gcc_builtin_ffsl+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <stdio.h>\n#include <strings.h>\n#include <string.h>\n\nint\nmain (void)\n{\n\n\t{\n\t\tint rv = __builtin_ffsl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_builtin_ffsl=yes\nelse $as_nop\n  je_cv_gcc_builtin_ffsl=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_builtin_ffsl\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_builtin_ffsl\" >&6; }\n\nif test \"x${je_cv_gcc_builtin_ffsl}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFSLL __builtin_ffsll\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFSL __builtin_ffsl\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFS __builtin_ffs\" >>confdefs.h\n\nelse\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether a program using ffsl is compilable\" >&5\nprintf %s \"checking whether a program using ffsl is compilable... \" >&6; }\nif test ${je_cv_function_ffsl+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n  #include <stdio.h>\n  #include <strings.h>\n  #include <string.h>\n\nint\nmain (void)\n{\n\n\t{\n\t\tint rv = ffsl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_function_ffsl=yes\nelse $as_nop\n  je_cv_function_ffsl=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_function_ffsl\" >&5\nprintf \"%s\\n\" \"$je_cv_function_ffsl\" >&6; }\n\n  if test \"x${je_cv_function_ffsl}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFSLL ffsll\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFSL ffsl\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_FFS ffs\" >>confdefs.h\n\n  else\n    as_fn_error $? \"Cannot build without ffsl(3) or __builtin_ffsl()\" \"$LINENO\" 5\n  fi\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether a program using __builtin_popcountl is compilable\" >&5\nprintf %s \"checking whether a program using __builtin_popcountl is compilable... \" >&6; }\nif test ${je_cv_gcc_builtin_popcountl+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <stdio.h>\n#include <strings.h>\n#include <string.h>\n\nint\nmain (void)\n{\n\n\t{\n\t\tint rv = __builtin_popcountl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_builtin_popcountl=yes\nelse $as_nop\n  je_cv_gcc_builtin_popcountl=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_builtin_popcountl\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_builtin_popcountl\" >&6; }\n\nif test \"x${je_cv_gcc_builtin_popcountl}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_POPCOUNT __builtin_popcount\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_POPCOUNTL __builtin_popcountl\" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_INTERNAL_POPCOUNTLL __builtin_popcountll\" >>confdefs.h\n\nfi\n\n\n# Check whether --with-lg_quantum was given.\nif test ${with_lg_quantum+y}\nthen :\n  withval=$with_lg_quantum;\nfi\n\nif test \"x$with_lg_quantum\" != \"x\" ; then\n\nprintf \"%s\\n\" \"#define LG_QUANTUM $with_lg_quantum\" >>confdefs.h\n\nfi\n\n\n# Check whether --with-lg_slab_maxregs was given.\nif test ${with_lg_slab_maxregs+y}\nthen :\n  withval=$with_lg_slab_maxregs; CONFIG_LG_SLAB_MAXREGS=\"with_lg_slab_maxregs\"\nelse $as_nop\n  CONFIG_LG_SLAB_MAXREGS=\"\"\nfi\n\nif test \"x$with_lg_slab_maxregs\" != \"x\" ; then\n\nprintf \"%s\\n\" \"#define CONFIG_LG_SLAB_MAXREGS $with_lg_slab_maxregs\" >>confdefs.h\n\nfi\n\n\n# Check whether --with-lg_page was given.\nif test ${with_lg_page+y}\nthen :\n  withval=$with_lg_page; LG_PAGE=\"$with_lg_page\"\nelse $as_nop\n  LG_PAGE=\"detect\"\nfi\n\ncase \"${host}\" in\n  aarch64-apple-darwin*)\n                  if test \"x${host}\" != \"x${build}\" -a \"x$LG_PAGE\" = \"xdetect\"; then\n        LG_PAGE=14\n      fi\n      ;;\nesac\nif test \"x$LG_PAGE\" = \"xdetect\"; then\n  { printf \"%s\\n\" 
\"$as_me:${as_lineno-$LINENO}: checking LG_PAGE\" >&5\nprintf %s \"checking LG_PAGE... \" >&6; }\nif test ${je_cv_lg_page+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  if test \"$cross_compiling\" = yes\nthen :\n  je_cv_lg_page=12\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <strings.h>\n#ifdef _WIN32\n#include <windows.h>\n#else\n#include <unistd.h>\n#endif\n#include <stdio.h>\n\nint\nmain (void)\n{\n\n    int result;\n    FILE *f;\n\n#ifdef _WIN32\n    SYSTEM_INFO si;\n    GetSystemInfo(&si);\n    result = si.dwPageSize;\n#else\n    result = sysconf(_SC_PAGESIZE);\n#endif\n    if (result == -1) {\n\treturn 1;\n    }\n    result = JEMALLOC_INTERNAL_FFSL(result) - 1;\n\n    f = fopen(\"conftest.out\", \"w\");\n    if (f == NULL) {\n\treturn 1;\n    }\n    fprintf(f, \"%d\", result);\n    fclose(f);\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_run \"$LINENO\"\nthen :\n  je_cv_lg_page=`cat conftest.out`\nelse $as_nop\n  je_cv_lg_page=undefined\nfi\nrm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \\\n  conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_lg_page\" >&5\nprintf \"%s\\n\" \"$je_cv_lg_page\" >&6; }\nfi\nif test \"x${je_cv_lg_page}\" != \"x\" ; then\n  LG_PAGE=\"${je_cv_lg_page}\"\nfi\nif test \"x${LG_PAGE}\" != \"xundefined\" ; then\n\nprintf \"%s\\n\" \"#define LG_PAGE $LG_PAGE\" >>confdefs.h\n\nelse\n   as_fn_error $? 
\"cannot determine value for LG_PAGE\" \"$LINENO\" 5\nfi\n\n\n# Check whether --with-lg_hugepage was given.\nif test ${with_lg_hugepage+y}\nthen :\n  withval=$with_lg_hugepage; je_cv_lg_hugepage=\"${with_lg_hugepage}\"\nelse $as_nop\n  je_cv_lg_hugepage=\"\"\nfi\n\nif test \"x${je_cv_lg_hugepage}\" = \"x\" ; then\n          if test -e \"/proc/meminfo\" ; then\n    hpsk=`cat /proc/meminfo 2>/dev/null | \\\n          grep -e '^Hugepagesize:[[:space:]]\\+[0-9]\\+[[:space:]]kB$' | \\\n          awk '{print $2}'`\n    if test \"x${hpsk}\" != \"x\" ; then\n      je_cv_lg_hugepage=10\n      while test \"${hpsk}\" -gt 1 ; do\n        hpsk=\"$((hpsk / 2))\"\n        je_cv_lg_hugepage=\"$((je_cv_lg_hugepage + 1))\"\n      done\n    fi\n  fi\n\n    if test \"x${je_cv_lg_hugepage}\" = \"x\" ; then\n    je_cv_lg_hugepage=21\n  fi\nfi\nif test \"x${LG_PAGE}\" != \"xundefined\" -a \\\n        \"${je_cv_lg_hugepage}\" -lt \"${LG_PAGE}\" ; then\n  as_fn_error $? \"Huge page size (2^${je_cv_lg_hugepage}) must be at least page size (2^${LG_PAGE})\" \"$LINENO\" 5\nfi\n\nprintf \"%s\\n\" \"#define LG_HUGEPAGE ${je_cv_lg_hugepage}\" >>confdefs.h\n\n\n# Check whether --enable-libdl was given.\nif test ${enable_libdl+y}\nthen :\n  enableval=$enable_libdl; if test \"x$enable_libdl\" = \"xno\" ; then\n  enable_libdl=\"0\"\nelse\n  enable_libdl=\"1\"\nfi\n\nelse $as_nop\n  enable_libdl=\"1\"\n\nfi\n\n\n\n\nif test \"x$abi\" != \"xpecoff\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD  \" >>confdefs.h\n\n         for ac_header in pthread.h\ndo :\n  ac_fn_c_check_header_compile \"$LINENO\" \"pthread.h\" \"ac_cv_header_pthread_h\" \"$ac_includes_default\"\nif test \"x$ac_cv_header_pthread_h\" = xyes\nthen :\n  printf \"%s\\n\" \"#define HAVE_PTHREAD_H 1\" >>confdefs.h\n\nelse $as_nop\n  as_fn_error $? 
\"pthread.h is missing\" \"$LINENO\" 5\nfi\n\ndone\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for pthread_create in -lpthread\" >&5\nprintf %s \"checking for pthread_create in -lpthread... \" >&6; }\nif test ${ac_cv_lib_pthread_pthread_create+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_check_lib_save_LIBS=$LIBS\nLIBS=\"-lpthread  $LIBS\"\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  */\nchar pthread_create ();\nint\nmain (void)\n{\nreturn pthread_create ();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_lib_pthread_pthread_create=yes\nelse $as_nop\n  ac_cv_lib_pthread_pthread_create=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nLIBS=$ac_check_lib_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_pthread_pthread_create\" >&5\nprintf \"%s\\n\" \"$ac_cv_lib_pthread_pthread_create\" >&6; }\nif test \"x$ac_cv_lib_pthread_pthread_create\" = xyes\nthen :\n  T_APPEND_V=-pthread\n  if test \"x${LIBS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  LIBS=\"${LIBS}${T_APPEND_V}\"\nelse\n  LIBS=\"${LIBS} ${T_APPEND_V}\"\nfi\n\n\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for library containing pthread_create\" >&5\nprintf %s \"checking for library containing pthread_create... \" >&6; }\nif test ${ac_cv_search_pthread_create+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_func_search_save_LIBS=$LIBS\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\nchar pthread_create ();\nint\nmain (void)\n{\nreturn pthread_create ();\n  ;\n  return 0;\n}\n_ACEOF\nfor ac_lib in ''\ndo\n  if test -z \"$ac_lib\"; then\n    ac_res=\"none required\"\n  else\n    ac_res=-l$ac_lib\n    LIBS=\"-l$ac_lib  $ac_func_search_save_LIBS\"\n  fi\n  if ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_search_pthread_create=$ac_res\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext\n  if test ${ac_cv_search_pthread_create+y}\nthen :\n  break\nfi\ndone\nif test ${ac_cv_search_pthread_create+y}\nthen :\n\nelse $as_nop\n  ac_cv_search_pthread_create=no\nfi\nrm conftest.$ac_ext\nLIBS=$ac_func_search_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_pthread_create\" >&5\nprintf \"%s\\n\" \"$ac_cv_search_pthread_create\" >&6; }\nac_res=$ac_cv_search_pthread_create\nif test \"$ac_res\" != no\nthen :\n  test \"$ac_res\" = \"none required\" || LIBS=\"$ac_res $LIBS\"\n\nelse $as_nop\n  as_fn_error $? \"libpthread is missing\" \"$LINENO\" 5\nfi\n\nfi\n\n  wrap_syms=\"${wrap_syms} pthread_create\"\n  have_pthread=\"1\"\n\n  if test \"x$enable_libdl\" = \"x1\" ; then\n    have_dlsym=\"1\"\n           for ac_header in dlfcn.h\ndo :\n  ac_fn_c_check_header_compile \"$LINENO\" \"dlfcn.h\" \"ac_cv_header_dlfcn_h\" \"$ac_includes_default\"\nif test \"x$ac_cv_header_dlfcn_h\" = xyes\nthen :\n  printf \"%s\\n\" \"#define HAVE_DLFCN_H 1\" >>confdefs.h\n ac_fn_c_check_func \"$LINENO\" \"dlsym\" \"ac_cv_func_dlsym\"\nif test \"x$ac_cv_func_dlsym\" = xyes\nthen :\n\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for dlsym in -ldl\" >&5\nprintf %s \"checking for dlsym in -ldl... \" >&6; }\nif test ${ac_cv_lib_dl_dlsym+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_check_lib_save_LIBS=$LIBS\nLIBS=\"-ldl  $LIBS\"\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  */\nchar dlsym ();\nint\nmain (void)\n{\nreturn dlsym ();\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_lib_dl_dlsym=yes\nelse $as_nop\n  ac_cv_lib_dl_dlsym=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nLIBS=$ac_check_lib_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlsym\" >&5\nprintf \"%s\\n\" \"$ac_cv_lib_dl_dlsym\" >&6; }\nif test \"x$ac_cv_lib_dl_dlsym\" = xyes\nthen :\n  LIBS=\"$LIBS -ldl\"\nelse $as_nop\n  have_dlsym=\"0\"\nfi\n\nfi\n\nelse $as_nop\n  have_dlsym=\"0\"\nfi\n\ndone\n    if test \"x$have_dlsym\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_DLSYM  \" >>confdefs.h\n\n    fi\n  else\n    have_dlsym=\"0\"\n  fi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pthread_atfork(3) is compilable\" >&5\nprintf %s \"checking whether pthread_atfork(3) is compilable... \" >&6; }\nif test ${je_cv_pthread_atfork+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <pthread.h>\n\nint\nmain (void)\n{\n\n  pthread_atfork((void *)0, (void *)0, (void *)0);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pthread_atfork=yes\nelse $as_nop\n  je_cv_pthread_atfork=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_atfork\" >&5\nprintf \"%s\\n\" \"$je_cv_pthread_atfork\" >&6; }\n\n  if test \"x${je_cv_pthread_atfork}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD_ATFORK  \" >>confdefs.h\n\n  fi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pthread_setname_np(3) is compilable\" >&5\nprintf %s \"checking whether pthread_setname_np(3) is compilable... \" >&6; }\nif test ${je_cv_pthread_setname_np+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <pthread.h>\n\nint\nmain (void)\n{\n\n  pthread_setname_np(pthread_self(), \"setname_test\");\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pthread_setname_np=yes\nelse $as_nop\n  je_cv_pthread_setname_np=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_setname_np\" >&5\nprintf \"%s\\n\" \"$je_cv_pthread_setname_np\" >&6; }\n\n  if test \"x${je_cv_pthread_setname_np}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD_SETNAME_NP  \" >>confdefs.h\n\n  fi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pthread_getname_np(3) is compilable\" >&5\nprintf %s \"checking whether pthread_getname_np(3) is compilable... 
\" >&6; }\nif test ${je_cv_pthread_getname_np+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <pthread.h>\n#include <stdlib.h>\n\nint\nmain (void)\n{\n\n  {\n  \tchar *name = malloc(16);\n  \tpthread_getname_np(pthread_self(), name, 16);\n\tfree(name);\n  }\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pthread_getname_np=yes\nelse $as_nop\n  je_cv_pthread_getname_np=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_getname_np\" >&5\nprintf \"%s\\n\" \"$je_cv_pthread_getname_np\" >&6; }\n\n  if test \"x${je_cv_pthread_getname_np}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD_GETNAME_NP  \" >>confdefs.h\n\n  fi\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pthread_get_name_np(3) is compilable\" >&5\nprintf %s \"checking whether pthread_get_name_np(3) is compilable... \" >&6; }\nif test ${je_cv_pthread_get_name_np+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <pthread.h>\n#include <pthread_np.h>\n#include <stdlib.h>\n\nint\nmain (void)\n{\n\n  {\n  \tchar *name = malloc(16);\n  \tpthread_get_name_np(pthread_self(), name, 16);\n\tfree(name);\n  }\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pthread_get_name_np=yes\nelse $as_nop\n  je_cv_pthread_get_name_np=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_get_name_np\" >&5\nprintf \"%s\\n\" \"$je_cv_pthread_get_name_np\" >&6; }\n\n  if test \"x${je_cv_pthread_get_name_np}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD_GET_NAME_NP  \" >>confdefs.h\n\n  fi\nfi\n\nT_APPEND_V=-D_REENTRANT\n  if test \"x${CPPFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CPPFLAGS=\"${CPPFLAGS}${T_APPEND_V}\"\nelse\n  CPPFLAGS=\"${CPPFLAGS} ${T_APPEND_V}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime\" >&5\nprintf %s \"checking for library containing clock_gettime... \" >&6; }\nif test ${ac_cv_search_clock_gettime+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_func_search_save_LIBS=$LIBS\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\nchar clock_gettime ();\nint\nmain (void)\n{\nreturn clock_gettime ();\n  ;\n  return 0;\n}\n_ACEOF\nfor ac_lib in '' rt\ndo\n  if test -z \"$ac_lib\"; then\n    ac_res=\"none required\"\n  else\n    ac_res=-l$ac_lib\n    LIBS=\"-l$ac_lib  $ac_func_search_save_LIBS\"\n  fi\n  if ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_search_clock_gettime=$ac_res\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext\n  if test ${ac_cv_search_clock_gettime+y}\nthen :\n  break\nfi\ndone\nif test ${ac_cv_search_clock_gettime+y}\nthen :\n\nelse $as_nop\n  ac_cv_search_clock_gettime=no\nfi\nrm conftest.$ac_ext\nLIBS=$ac_func_search_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime\" >&5\nprintf \"%s\\n\" \"$ac_cv_search_clock_gettime\" >&6; }\nac_res=$ac_cv_search_clock_gettime\nif test \"$ac_res\" != no\nthen :\n  test \"$ac_res\" = \"none required\" || LIBS=\"$ac_res $LIBS\"\n\nfi\n\n\nif test \"x$je_cv_cray_prgenv_wrapper\" = \"xyes\" ; then\n  if test \"$ac_cv_search_clock_gettime\" != \"-lrt\"; then\n    SAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n    unset ac_cv_search_clock_gettime\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -dynamic\" >&5\nprintf %s \"checking whether compiler supports -dynamic... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-dynamic\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-dynamic\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for library containing clock_gettime\" >&5\nprintf %s \"checking for library containing clock_gettime... \" >&6; }\nif test ${ac_cv_search_clock_gettime+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  ac_func_search_save_LIBS=$LIBS\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n/* Override any GCC internal prototype to avoid an error.\n   Use char because int might match the return type of a GCC\n   builtin and then its argument prototype would still apply.  
*/\nchar clock_gettime ();\nint\nmain (void)\n{\nreturn clock_gettime ();\n  ;\n  return 0;\n}\n_ACEOF\nfor ac_lib in '' rt\ndo\n  if test -z \"$ac_lib\"; then\n    ac_res=\"none required\"\n  else\n    ac_res=-l$ac_lib\n    LIBS=\"-l$ac_lib  $ac_func_search_save_LIBS\"\n  fi\n  if ac_fn_c_try_link \"$LINENO\"\nthen :\n  ac_cv_search_clock_gettime=$ac_res\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext\n  if test ${ac_cv_search_clock_gettime+y}\nthen :\n  break\nfi\ndone\nif test ${ac_cv_search_clock_gettime+y}\nthen :\n\nelse $as_nop\n  ac_cv_search_clock_gettime=no\nfi\nrm conftest.$ac_ext\nLIBS=$ac_func_search_save_LIBS\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_search_clock_gettime\" >&5\nprintf \"%s\\n\" \"$ac_cv_search_clock_gettime\" >&6; }\nac_res=$ac_cv_search_clock_gettime\nif test \"$ac_res\" != no\nthen :\n  test \"$ac_res\" = \"none required\" || LIBS=\"$ac_res $LIBS\"\n\nfi\n\n\n    CONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  fi\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is compilable\" >&5\nprintf %s \"checking whether clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is compilable... \" >&6; }\nif test ${je_cv_clock_monotonic_coarse+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <time.h>\n\nint\nmain (void)\n{\n\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC_COARSE, &ts);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_clock_monotonic_coarse=yes\nelse $as_nop\n  je_cv_clock_monotonic_coarse=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_clock_monotonic_coarse\" >&5\nprintf \"%s\\n\" \"$je_cv_clock_monotonic_coarse\" >&6; }\n\nif test \"x${je_cv_clock_monotonic_coarse}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE  \" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether clock_gettime(CLOCK_MONOTONIC, ...) is compilable\" >&5\nprintf %s \"checking whether clock_gettime(CLOCK_MONOTONIC, ...) is compilable... \" >&6; }\nif test ${je_cv_clock_monotonic+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <unistd.h>\n#include <time.h>\n\nint\nmain (void)\n{\n\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC, &ts);\n#if !defined(_POSIX_MONOTONIC_CLOCK) || _POSIX_MONOTONIC_CLOCK < 0\n#  error _POSIX_MONOTONIC_CLOCK missing/invalid\n#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_clock_monotonic=yes\nelse $as_nop\n  je_cv_clock_monotonic=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_clock_monotonic\" >&5\nprintf \"%s\\n\" \"$je_cv_clock_monotonic\" >&6; }\n\nif test \"x${je_cv_clock_monotonic}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_CLOCK_MONOTONIC  \" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether mach_absolute_time() is compilable\" >&5\nprintf %s \"checking whether mach_absolute_time() is compilable... \" >&6; }\nif test ${je_cv_mach_absolute_time+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <mach/mach_time.h>\n\nint\nmain (void)\n{\n\n\tmach_absolute_time();\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_mach_absolute_time=yes\nelse $as_nop\n  je_cv_mach_absolute_time=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_mach_absolute_time\" >&5\nprintf \"%s\\n\" \"$je_cv_mach_absolute_time\" >&6; }\n\nif test \"x${je_cv_mach_absolute_time}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MACH_ABSOLUTE_TIME  \" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether clock_gettime(CLOCK_REALTIME, ...) is compilable\" >&5\nprintf %s \"checking whether clock_gettime(CLOCK_REALTIME, ...) is compilable... 
\" >&6; }\nif test ${je_cv_clock_realtime+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <time.h>\n\nint\nmain (void)\n{\n\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_REALTIME, &ts);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_clock_realtime=yes\nelse $as_nop\n  je_cv_clock_realtime=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_clock_realtime\" >&5\nprintf \"%s\\n\" \"$je_cv_clock_realtime\" >&6; }\n\nif test \"x${je_cv_clock_realtime}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_CLOCK_REALTIME  \" >>confdefs.h\n\nfi\n\n# Check whether --enable-syscall was given.\nif test ${enable_syscall+y}\nthen :\n  enableval=$enable_syscall; if test \"x$enable_syscall\" = \"xno\" ; then\n  enable_syscall=\"0\"\nelse\n  enable_syscall=\"1\"\nfi\n\nelse $as_nop\n  enable_syscall=\"1\"\n\nfi\n\nif test \"x$enable_syscall\" = \"x1\" ; then\n      SAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether syscall(2) is compilable\" >&5\nprintf %s \"checking whether syscall(2) is compilable... \" >&6; }\nif test ${je_cv_syscall+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <sys/syscall.h>\n#include <unistd.h>\n\nint\nmain (void)\n{\n\n\tsyscall(SYS_write, 2, \"hello\", 5);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_syscall=yes\nelse $as_nop\n  je_cv_syscall=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_syscall\" >&5\nprintf \"%s\\n\" \"$je_cv_syscall\" >&6; }\n\n  CONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n  if test \"x$je_cv_syscall\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_USE_SYSCALL  \" >>confdefs.h\n\n  fi\nfi\n\nac_fn_c_check_func \"$LINENO\" \"secure_getenv\" \"ac_cv_func_secure_getenv\"\nif test \"x$ac_cv_func_secure_getenv\" = xyes\nthen :\n  have_secure_getenv=\"1\"\nelse $as_nop\n  have_secure_getenv=\"0\"\n\nfi\n\nif test \"x$have_secure_getenv\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_SECURE_GETENV  \" >>confdefs.h\n\nfi\n\nac_fn_c_check_func \"$LINENO\" \"sched_getcpu\" \"ac_cv_func_sched_getcpu\"\nif test \"x$ac_cv_func_sched_getcpu\" = xyes\nthen :\n  have_sched_getcpu=\"1\"\nelse $as_nop\n  have_sched_getcpu=\"0\"\n\nfi\n\nif test \"x$have_sched_getcpu\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_SCHED_GETCPU  \" >>confdefs.h\n\nfi\n\nac_fn_c_check_func \"$LINENO\" \"sched_setaffinity\" \"ac_cv_func_sched_setaffinity\"\nif test \"x$ac_cv_func_sched_setaffinity\" = xyes\nthen :\n  have_sched_setaffinity=\"1\"\nelse $as_nop\n  have_sched_setaffinity=\"0\"\n\nfi\n\nif test \"x$have_sched_setaffinity\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_SCHED_SETAFFINITY  \" >>confdefs.h\n\nfi\n\nac_fn_c_check_func \"$LINENO\" \"issetugid\" \"ac_cv_func_issetugid\"\nif 
test \"x$ac_cv_func_issetugid\" = xyes\nthen :\n  have_issetugid=\"1\"\nelse $as_nop\n  have_issetugid=\"0\"\n\nfi\n\nif test \"x$have_issetugid\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_ISSETUGID  \" >>confdefs.h\n\nfi\n\nac_fn_c_check_func \"$LINENO\" \"_malloc_thread_cleanup\" \"ac_cv_func__malloc_thread_cleanup\"\nif test \"x$ac_cv_func__malloc_thread_cleanup\" = xyes\nthen :\n  have__malloc_thread_cleanup=\"1\"\nelse $as_nop\n  have__malloc_thread_cleanup=\"0\"\n\nfi\n\nif test \"x$have__malloc_thread_cleanup\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_MALLOC_THREAD_CLEANUP  \" >>confdefs.h\n\n  wrap_syms=\"${wrap_syms} _malloc_thread_cleanup _malloc_tsd_cleanup_register\"\n  force_tls=\"1\"\nfi\n\nac_fn_c_check_func \"$LINENO\" \"_pthread_mutex_init_calloc_cb\" \"ac_cv_func__pthread_mutex_init_calloc_cb\"\nif test \"x$ac_cv_func__pthread_mutex_init_calloc_cb\" = xyes\nthen :\n  have__pthread_mutex_init_calloc_cb=\"1\"\nelse $as_nop\n  have__pthread_mutex_init_calloc_cb=\"0\"\n\nfi\n\nif test \"x$have__pthread_mutex_init_calloc_cb\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_MUTEX_INIT_CB  \" >>confdefs.h\n\n  wrap_syms=\"${wrap_syms} _malloc_prefork _malloc_postfork\"\nfi\n\nac_fn_c_check_func \"$LINENO\" \"memcntl\" \"ac_cv_func_memcntl\"\nif test \"x$ac_cv_func_memcntl\" = xyes\nthen :\n  have_memcntl=\"1\"\nelse $as_nop\n  have_memcntl=\"0\"\nfi\n\nif test \"x$have_memcntl\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MEMCNTL  \" >>confdefs.h\n\nfi\n\n# Check whether --enable-lazy_lock was given.\nif test ${enable_lazy_lock+y}\nthen :\n  enableval=$enable_lazy_lock; if test \"x$enable_lazy_lock\" = \"xno\" ; then\n  enable_lazy_lock=\"0\"\nelse\n  enable_lazy_lock=\"1\"\nfi\n\nelse $as_nop\n  enable_lazy_lock=\"\"\n\nfi\n\nif test \"x${enable_lazy_lock}\" = \"x\" ; then\n  if test \"x${force_lazy_lock}\" = \"x1\" ; then\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: Forcing 
lazy-lock to avoid allocator/threading bootstrap issues\" >&5\nprintf \"%s\\n\" \"Forcing lazy-lock to avoid allocator/threading bootstrap issues\" >&6; }\n    enable_lazy_lock=\"1\"\n  else\n    enable_lazy_lock=\"0\"\n  fi\nfi\nif test \"x${enable_lazy_lock}\" = \"x1\" -a \"x${abi}\" = \"xpecoff\" ; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: Forcing no lazy-lock because thread creation monitoring is unimplemented\" >&5\nprintf \"%s\\n\" \"Forcing no lazy-lock because thread creation monitoring is unimplemented\" >&6; }\n  enable_lazy_lock=\"0\"\nfi\nif test \"x$enable_lazy_lock\" = \"x1\" ; then\n  if test \"x$have_dlsym\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_LAZY_LOCK  \" >>confdefs.h\n\n  else\n    as_fn_error $? \"Missing dlsym support: lazy-lock cannot be enabled.\" \"$LINENO\" 5\n  fi\nfi\n\n\nif test \"x${force_tls}\" = \"x1\" ; then\n  enable_tls=\"1\"\nelif test \"x${force_tls}\" = \"x0\" ; then\n  enable_tls=\"0\"\nelse\n  enable_tls=\"1\"\nfi\nif test \"x${enable_tls}\" = \"x1\" ; then\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for TLS\" >&5\nprintf %s \"checking for TLS... \" >&6; }\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n    __thread int x;\n\nint\nmain (void)\n{\n\n    x = 42;\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              enable_tls=\"0\"\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nelse\n  enable_tls=\"0\"\nfi\n\nif test \"x${enable_tls}\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_TLS  \" >>confdefs.h\n\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether C11 atomics is compilable\" >&5\nprintf %s \"checking whether C11 atomics is compilable... \" >&6; }\nif test ${je_cv_c11_atomics+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <stdint.h>\n#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)\n#include <stdatomic.h>\n#else\n#error Atomics not available\n#endif\n\nint\nmain (void)\n{\n\n    uint64_t *p = (uint64_t *)0;\n    uint64_t x = 1;\n    volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;\n    uint64_t r = atomic_fetch_add(a, x) + x;\n    return r == 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_c11_atomics=yes\nelse $as_nop\n  je_cv_c11_atomics=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_c11_atomics\" >&5\nprintf \"%s\\n\" \"$je_cv_c11_atomics\" >&6; }\n\nif test \"x${je_cv_c11_atomics}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_C11_ATOMICS  \" >>confdefs.h\n\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether GCC __atomic atomics is compilable\" >&5\nprintf %s \"checking whether 
GCC __atomic atomics is compilable... \" >&6; }\nif test ${je_cv_gcc_atomic_atomics+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n\nint\nmain (void)\n{\n\n    int x = 0;\n    int val = 1;\n    int y = __atomic_fetch_add(&x, val, __ATOMIC_RELAXED);\n    int after_add = x;\n    return after_add == 1;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_atomic_atomics=yes\nelse $as_nop\n  je_cv_gcc_atomic_atomics=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_atomic_atomics\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_atomic_atomics\" >&6; }\n\nif test \"x${je_cv_gcc_atomic_atomics}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GCC_ATOMIC_ATOMICS  \" >>confdefs.h\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether GCC 8-bit __atomic atomics is compilable\" >&5\nprintf %s \"checking whether GCC 8-bit __atomic atomics is compilable... \" >&6; }\nif test ${je_cv_gcc_u8_atomic_atomics+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n      unsigned char x = 0;\n      int val = 1;\n      int y = __atomic_fetch_add(&x, val, __ATOMIC_RELAXED);\n      int after_add = (int)x;\n      return after_add == 1;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_u8_atomic_atomics=yes\nelse $as_nop\n  je_cv_gcc_u8_atomic_atomics=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_u8_atomic_atomics\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_u8_atomic_atomics\" >&6; }\n\n  if test \"x${je_cv_gcc_u8_atomic_atomics}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GCC_U8_ATOMIC_ATOMICS  \" >>confdefs.h\n\n  fi\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether GCC __sync atomics is compilable\" >&5\nprintf %s \"checking whether GCC __sync atomics is compilable... \" >&6; }\nif test ${je_cv_gcc_sync_atomics+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    int x = 0;\n    int before_add = __sync_fetch_and_add(&x, 1);\n    int after_add = x;\n    return (before_add == 0) && (after_add == 1);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_sync_atomics=yes\nelse $as_nop\n  je_cv_gcc_sync_atomics=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_sync_atomics\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_sync_atomics\" >&6; }\n\nif test \"x${je_cv_gcc_sync_atomics}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GCC_SYNC_ATOMICS  \" >>confdefs.h\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether GCC 8-bit __sync atomics is compilable\" >&5\nprintf %s \"checking whether GCC 8-bit __sync atomics is compilable... \" >&6; }\nif test ${je_cv_gcc_u8_sync_atomics+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n      unsigned char x = 0;\n      int before_add = __sync_fetch_and_add(&x, 1);\n      int after_add = (int)x;\n      return (before_add == 0) && (after_add == 1);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_gcc_u8_sync_atomics=yes\nelse $as_nop\n  je_cv_gcc_u8_sync_atomics=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_gcc_u8_sync_atomics\" >&5\nprintf \"%s\\n\" \"$je_cv_gcc_u8_sync_atomics\" >&6; }\n\n  if test \"x${je_cv_gcc_u8_sync_atomics}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GCC_U8_SYNC_ATOMICS  \" >>confdefs.h\n\n  fi\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether Darwin OSAtomic*() is compilable\" >&5\nprintf %s \"checking whether Darwin OSAtomic*() is compilable... \" >&6; }\nif test ${je_cv_osatomic+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <libkern/OSAtomic.h>\n#include <inttypes.h>\n\nint\nmain (void)\n{\n\n\t{\n\t\tint32_t x32 = 0;\n\t\tvolatile int32_t *x32p = &x32;\n\t\tOSAtomicAdd32(1, x32p);\n\t}\n\t{\n\t\tint64_t x64 = 0;\n\t\tvolatile int64_t *x64p = &x64;\n\t\tOSAtomicAdd64(1, x64p);\n\t}\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_osatomic=yes\nelse $as_nop\n  je_cv_osatomic=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_osatomic\" >&5\nprintf \"%s\\n\" \"$je_cv_osatomic\" >&6; }\n\nif test \"x${je_cv_osatomic}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_OSATOMIC  \" >>confdefs.h\n\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(2) is compilable\" >&5\nprintf %s \"checking whether madvise(2) is compilable... \" >&6; }\nif test ${je_cv_madvise+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, 0);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_madvise=yes\nelse $as_nop\n  je_cv_madvise=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_madvise\" >&5\nprintf \"%s\\n\" \"$je_cv_madvise\" >&6; }\n\nif test \"x${je_cv_madvise}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MADVISE  \" >>confdefs.h\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(..., MADV_FREE) is compilable\" >&5\nprintf %s \"checking whether madvise(..., MADV_FREE) is compilable... 
\" >&6; }\nif test ${je_cv_madv_free+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, MADV_FREE);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_madv_free=yes\nelse $as_nop\n  je_cv_madv_free=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_madv_free\" >&5\nprintf \"%s\\n\" \"$je_cv_madv_free\" >&6; }\n\n  if test \"x${je_cv_madv_free}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_MADVISE_FREE  \" >>confdefs.h\n\n  elif test \"x${je_cv_madvise}\" = \"xyes\" ; then\n    case \"${host_cpu}\" in i686|x86_64)\n        case \"${host}\" in *-*-linux*)\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_MADVISE_FREE  \" >>confdefs.h\n\n\nprintf \"%s\\n\" \"#define JEMALLOC_DEFINE_MADVISE_FREE  \" >>confdefs.h\n\n\t    ;;\n        esac\n        ;;\n    esac\n  fi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(..., MADV_DONTNEED) is compilable\" >&5\nprintf %s \"checking whether madvise(..., MADV_DONTNEED) is compilable... \" >&6; }\nif test ${je_cv_madv_dontneed+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, MADV_DONTNEED);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_madv_dontneed=yes\nelse $as_nop\n  je_cv_madv_dontneed=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_madv_dontneed\" >&5\nprintf \"%s\\n\" \"$je_cv_madv_dontneed\" >&6; }\n\n  if test \"x${je_cv_madv_dontneed}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_MADVISE_DONTNEED  \" >>confdefs.h\n\n  fi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(..., MADV_DO[NT]DUMP) is compilable\" >&5\nprintf %s \"checking whether madvise(..., MADV_DO[NT]DUMP) is compilable... \" >&6; }\nif test ${je_cv_madv_dontdump+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, MADV_DONTDUMP);\n\tmadvise((void *)0, 0, MADV_DODUMP);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_madv_dontdump=yes\nelse $as_nop\n  je_cv_madv_dontdump=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_madv_dontdump\" >&5\nprintf \"%s\\n\" \"$je_cv_madv_dontdump\" >&6; }\n\n  if test \"x${je_cv_madv_dontdump}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_MADVISE_DONTDUMP  \" >>confdefs.h\n\n  fi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(..., MADV_[NO]HUGEPAGE) is compilable\" >&5\nprintf %s \"checking whether madvise(..., MADV_[NO]HUGEPAGE) is compilable... 
\" >&6; }\nif test ${je_cv_thp+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, MADV_HUGEPAGE);\n\tmadvise((void *)0, 0, MADV_NOHUGEPAGE);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_thp=yes\nelse $as_nop\n  je_cv_thp=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_thp\" >&5\nprintf \"%s\\n\" \"$je_cv_thp\" >&6; }\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether madvise(..., MADV_[NO]CORE) is compilable\" >&5\nprintf %s \"checking whether madvise(..., MADV_[NO]CORE) is compilable... \" >&6; }\nif test ${je_cv_madv_nocore+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmadvise((void *)0, 0, MADV_NOCORE);\n\tmadvise((void *)0, 0, MADV_CORE);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_madv_nocore=yes\nelse $as_nop\n  je_cv_madv_nocore=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_madv_nocore\" >&5\nprintf \"%s\\n\" \"$je_cv_madv_nocore\" >&6; }\n\n  if test \"x${je_cv_madv_nocore}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_MADVISE_NOCORE  \" >>confdefs.h\n\n  fi\ncase \"${host_cpu}\" in\n  arm*)\n    ;;\n  *)\n  if test \"x${je_cv_thp}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MADVISE_HUGE  \" >>confdefs.h\n\n  fi\n  ;;\nesac\nelse\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether posix_madvise is compilable\" >&5\nprintf %s \"checking whether posix_madvise is compilable... 
\" >&6; }\nif test ${je_cv_posix_madvise+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n  #include <sys/mman.h>\n\nint\nmain (void)\n{\n\n    posix_madvise((void *)0, 0, 0);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_posix_madvise=yes\nelse $as_nop\n  je_cv_posix_madvise=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_posix_madvise\" >&5\nprintf \"%s\\n\" \"$je_cv_posix_madvise\" >&6; }\n\n  if test \"x${je_cv_posix_madvise}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_POSIX_MADVISE  \" >>confdefs.h\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether posix_madvise(..., POSIX_MADV_DONTNEED) is compilable\" >&5\nprintf %s \"checking whether posix_madvise(..., POSIX_MADV_DONTNEED) is compilable... \" >&6; }\nif test ${je_cv_posix_madv_dontneed+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n  #include <sys/mman.h>\n\nint\nmain (void)\n{\n\n    posix_madvise((void *)0, 0, POSIX_MADV_DONTNEED);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_posix_madv_dontneed=yes\nelse $as_nop\n  je_cv_posix_madv_dontneed=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_posix_madv_dontneed\" >&5\nprintf \"%s\\n\" \"$je_cv_posix_madv_dontneed\" >&6; }\n\n    if test \"x${je_cv_posix_madv_dontneed}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED  \" >>confdefs.h\n\n    fi\n  fi\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether mprotect(2) is compilable\" >&5\nprintf %s \"checking whether mprotect(2) is compilable... \" >&6; }\nif test ${je_cv_mprotect+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n\n#include <sys/mman.h>\n\nint\nmain (void)\n{\n\n\tmprotect((void *)0, 0, PROT_NONE);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_mprotect=yes\nelse $as_nop\n  je_cv_mprotect=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_mprotect\" >&5\nprintf \"%s\\n\" \"$je_cv_mprotect\" >&6; }\n\nif test \"x${je_cv_mprotect}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_MPROTECT  \" >>confdefs.h\n\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for __builtin_clz\" >&5\nprintf %s \"checking for __builtin_clz... \" >&6; }\nif test ${je_cv_builtin_clz+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\nint\nmain (void)\n{\n\n                                                {\n                                                        unsigned x = 0;\n                                                        int y = __builtin_clz(x);\n                                                }\n                                                {\n                                                        unsigned long x = 0;\n                                                        int y = __builtin_clzl(x);\n                                                }\n                                                {\n                                                        unsigned long long x = 0;\n                                                        int y = __builtin_clzll(x);\n                                                }\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_builtin_clz=yes\nelse $as_nop\n  je_cv_builtin_clz=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_builtin_clz\" >&5\nprintf \"%s\\n\" \"$je_cv_builtin_clz\" >&6; }\n\nif test \"x${je_cv_builtin_clz}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_BUILTIN_CLZ  \" >>confdefs.h\n\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether Darwin os_unfair_lock_*() is compilable\" >&5\nprintf %s \"checking whether Darwin os_unfair_lock_*() is compilable... \" >&6; }\nif test ${je_cv_os_unfair_lock+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <os/lock.h>\n#include <AvailabilityMacros.h>\n\nint\nmain (void)\n{\n\n\t#if MAC_OS_X_VERSION_MIN_REQUIRED < 101200\n\t#error \"os_unfair_lock is not supported\"\n\t#else\n\tos_unfair_lock lock = OS_UNFAIR_LOCK_INIT;\n\tos_unfair_lock_lock(&lock);\n\tos_unfair_lock_unlock(&lock);\n\t#endif\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_os_unfair_lock=yes\nelse $as_nop\n  je_cv_os_unfair_lock=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_os_unfair_lock\" >&5\nprintf \"%s\\n\" \"$je_cv_os_unfair_lock\" >&6; }\n\nif test \"x${je_cv_os_unfair_lock}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_OS_UNFAIR_LOCK  \" >>confdefs.h\n\nfi\n\n\n# Check whether --enable-zone-allocator was given.\nif test ${enable_zone_allocator+y}\nthen :\n  enableval=$enable_zone_allocator; if test \"x$enable_zone_allocator\" = \"xno\" ; then\n  enable_zone_allocator=\"0\"\nelse\n  enable_zone_allocator=\"1\"\nfi\n\nelse $as_nop\n  if test \"x${abi}\" = \"xmacho\"; then\n  enable_zone_allocator=\"1\"\nfi\n\n\nfi\n\n\n\nif test \"x${enable_zone_allocator}\" = \"x1\" ; then\n  if test \"x${abi}\" != \"xmacho\"; then\n    as_fn_error $? 
\"--enable-zone-allocator is only supported on Darwin\" \"$LINENO\" 5\n  fi\n\nprintf \"%s\\n\" \"#define JEMALLOC_ZONE  \" >>confdefs.h\n\nfi\n\n# Check whether --enable-initial-exec-tls was given.\nif test ${enable_initial_exec_tls+y}\nthen :\n  enableval=$enable_initial_exec_tls; if test \"x$enable_initial_exec_tls\" = \"xno\" ; then\n  enable_initial_exec_tls=\"0\"\nelse\n  enable_initial_exec_tls=\"1\"\nfi\n\nelse $as_nop\n  enable_initial_exec_tls=\"1\"\n\nfi\n\n\n\nif test \"x${je_cv_tls_model}\" = \"xyes\" -a \\\n       \"x${enable_initial_exec_tls}\" = \"x1\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_TLS_MODEL __attribute__((tls_model(\\\"initial-exec\\\")))\" >>confdefs.h\n\nelse\n\nprintf \"%s\\n\" \"#define JEMALLOC_TLS_MODEL  \" >>confdefs.h\n\nfi\n\n\nif test \"x${have_pthread}\" = \"x1\" -a \"x${je_cv_os_unfair_lock}\" != \"xyes\" -a \\\n       \"x${abi}\" != \"xmacho\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_BACKGROUND_THREAD  \" >>confdefs.h\n\nfi\n\n\nif test \"x$glibc\" = \"x1\" ; then\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether glibc malloc hook is compilable\" >&5\nprintf %s \"checking whether glibc malloc hook is compilable... \" >&6; }\nif test ${je_cv_glibc_malloc_hook+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n  #include <stddef.h>\n\n  extern void (* __free_hook)(void *ptr);\n  extern void *(* __malloc_hook)(size_t size);\n  extern void *(* __realloc_hook)(void *ptr, size_t size);\n\nint\nmain (void)\n{\n\n    void *ptr = 0L;\n    if (__malloc_hook) ptr = __malloc_hook(1);\n    if (__realloc_hook) ptr = __realloc_hook(ptr, 2);\n    if (__free_hook && ptr) __free_hook(ptr);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_glibc_malloc_hook=yes\nelse $as_nop\n  je_cv_glibc_malloc_hook=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_glibc_malloc_hook\" >&5\nprintf \"%s\\n\" \"$je_cv_glibc_malloc_hook\" >&6; }\n\n  if test \"x${je_cv_glibc_malloc_hook}\" = \"xyes\" ; then\n    if test \"x${JEMALLOC_PREFIX}\" = \"x\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GLIBC_MALLOC_HOOK  \" >>confdefs.h\n\n      wrap_syms=\"${wrap_syms} __free_hook __malloc_hook __realloc_hook\"\n    fi\n  fi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether glibc memalign hook is compilable\" >&5\nprintf %s \"checking whether glibc memalign hook is compilable... \" >&6; }\nif test ${je_cv_glibc_memalign_hook+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n  #include <stddef.h>\n\n  extern void *(* __memalign_hook)(size_t alignment, size_t size);\n\nint\nmain (void)\n{\n\n    void *ptr = 0L;\n    if (__memalign_hook) ptr = __memalign_hook(16, 7);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_glibc_memalign_hook=yes\nelse $as_nop\n  je_cv_glibc_memalign_hook=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_glibc_memalign_hook\" >&5\nprintf \"%s\\n\" \"$je_cv_glibc_memalign_hook\" >&6; }\n\n  if test \"x${je_cv_glibc_memalign_hook}\" = \"xyes\" ; then\n    if test \"x${JEMALLOC_PREFIX}\" = \"x\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_GLIBC_MEMALIGN_HOOK  \" >>confdefs.h\n\n      wrap_syms=\"${wrap_syms} __memalign_hook\"\n    fi\n  fi\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether pthreads adaptive mutexes is compilable\" >&5\nprintf %s \"checking whether pthreads adaptive mutexes is compilable... \" >&6; }\nif test ${je_cv_pthread_mutex_adaptive_np+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <pthread.h>\n\nint\nmain (void)\n{\n\n  pthread_mutexattr_t attr;\n  pthread_mutexattr_init(&attr);\n  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);\n  pthread_mutexattr_destroy(&attr);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_pthread_mutex_adaptive_np=yes\nelse $as_nop\n  je_cv_pthread_mutex_adaptive_np=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_pthread_mutex_adaptive_np\" >&5\nprintf \"%s\\n\" \"$je_cv_pthread_mutex_adaptive_np\" >&6; }\n\nif test \"x${je_cv_pthread_mutex_adaptive_np}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP  \" >>confdefs.h\n\nfi\n\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -D_GNU_SOURCE\" >&5\nprintf %s \"checking whether compiler supports -D_GNU_SOURCE... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-D_GNU_SOURCE\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-D_GNU_SOURCE\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -Werror\" >&5\nprintf %s \"checking whether compiler supports -Werror... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-Werror\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-Werror\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether compiler supports -herror_on_warning\" >&5\nprintf %s \"checking whether compiler supports -herror_on_warning... \" >&6; }\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nT_APPEND_V=-herror_on_warning\n  if test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${T_APPEND_V}\" = \"x\" ; then\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}${T_APPEND_V}\"\nelse\n  CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS} ${T_APPEND_V}\"\nfi\n\n\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\ncat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n\nint\nmain (void)\n{\n\n    return 0;\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  je_cv_cflags_added=-herror_on_warning\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: yes\" >&5\nprintf \"%s\\n\" \"yes\" >&6; }\nelse $as_nop\n  je_cv_cflags_added=\n              { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: no\" >&5\nprintf \"%s\\n\" \"no\" >&6; }\n              CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"\n\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking whether strerror_r returns char with gnu source is compilable\" >&5\nprintf %s \"checking whether strerror_r returns char with gnu source is compilable... \" >&6; }\nif test ${je_cv_strerror_r_returns_char_with_gnu_source+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  
*/\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\nint\nmain (void)\n{\n\n  char *buffer = (char *) malloc(100);\n  char *error = strerror_r(EINVAL, buffer, 100);\n  printf(\"%s\\n\", error);\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_link \"$LINENO\"\nthen :\n  je_cv_strerror_r_returns_char_with_gnu_source=yes\nelse $as_nop\n  je_cv_strerror_r_returns_char_with_gnu_source=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam \\\n    conftest$ac_exeext conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $je_cv_strerror_r_returns_char_with_gnu_source\" >&5\nprintf \"%s\\n\" \"$je_cv_strerror_r_returns_char_with_gnu_source\" >&6; }\n\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nif test \"x${CONFIGURE_CFLAGS}\" = \"x\" -o \"x${SPECIFIED_CFLAGS}\" = \"x\" ; then\n  CFLAGS=\"${CONFIGURE_CFLAGS}${SPECIFIED_CFLAGS}\"\nelse\n  CFLAGS=\"${CONFIGURE_CFLAGS} ${SPECIFIED_CFLAGS}\"\nfi\n\n\nif test \"x${je_cv_strerror_r_returns_char_with_gnu_source}\" = \"xyes\" ; then\n\nprintf \"%s\\n\" \"#define JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE  \" >>confdefs.h\n\nfi\n\nac_fn_c_check_type \"$LINENO\" \"_Bool\" \"ac_cv_type__Bool\" \"$ac_includes_default\"\nif test \"x$ac_cv_type__Bool\" = xyes\nthen :\n\nprintf \"%s\\n\" \"#define HAVE__BOOL 1\" >>confdefs.h\n\n\nfi\n\n   { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: checking for stdbool.h that conforms to C99\" >&5\nprintf %s \"checking for stdbool.h that conforms to C99... \" >&6; }\nif test ${ac_cv_header_stdbool_h+y}\nthen :\n  printf %s \"(cached) \" >&6\nelse $as_nop\n  cat confdefs.h - <<_ACEOF >conftest.$ac_ext\n/* end confdefs.h.  */\n#include <stdbool.h>\n\n             #ifndef __bool_true_false_are_defined\n               #error \"__bool_true_false_are_defined is not defined\"\n             #endif\n             char a[__bool_true_false_are_defined == 1 ? 
1 : -1];\n\n             /* Regardless of whether this is C++ or \"_Bool\" is a\n                valid type name, \"true\" and \"false\" should be usable\n                in #if expressions and integer constant expressions,\n                and \"bool\" should be a valid type name.  */\n\n             #if !true\n               #error \"'true' is not true\"\n             #endif\n             #if true != 1\n               #error \"'true' is not equal to 1\"\n             #endif\n             char b[true == 1 ? 1 : -1];\n             char c[true];\n\n             #if false\n               #error \"'false' is not false\"\n             #endif\n             #if false != 0\n               #error \"'false' is not equal to 0\"\n             #endif\n             char d[false == 0 ? 1 : -1];\n\n             enum { e = false, f = true, g = false * true, h = true * 256 };\n\n             char i[(bool) 0.5 == true ? 1 : -1];\n             char j[(bool) 0.0 == false ? 1 : -1];\n             char k[sizeof (bool) > 0 ? 1 : -1];\n\n             struct sb { bool s: 1; bool t; } s;\n             char l[sizeof s.t > 0 ? 1 : -1];\n\n             /* The following fails for\n                HP aC++/ANSI C B3910B A.05.55 [Dec 04 2003]. */\n             bool m[h];\n             char n[sizeof m == h * sizeof m[0] ? 1 : -1];\n             char o[-1 - (bool) 0 < 0 ? 1 : -1];\n             /* Catch a bug in an HP-UX C compiler.  See\n         https://gcc.gnu.org/ml/gcc-patches/2003-12/msg02303.html\n         https://lists.gnu.org/archive/html/bug-coreutils/2005-11/msg00161.html\n              */\n             bool p = true;\n             bool *pp = &p;\n\n             /* C 1999 specifies that bool, true, and false are to be\n                macros, but C++ 2011 and later overrule this.  
*/\n             #if __cplusplus < 201103\n              #ifndef bool\n               #error \"bool is not defined\"\n              #endif\n              #ifndef false\n               #error \"false is not defined\"\n              #endif\n              #ifndef true\n               #error \"true is not defined\"\n              #endif\n             #endif\n\n             /* If _Bool is available, repeat with it all the tests\n                above that used bool.  */\n             #ifdef HAVE__BOOL\n               struct sB { _Bool s: 1; _Bool t; } t;\n\n               char q[(_Bool) 0.5 == true ? 1 : -1];\n               char r[(_Bool) 0.0 == false ? 1 : -1];\n               char u[sizeof (_Bool) > 0 ? 1 : -1];\n               char v[sizeof t.t > 0 ? 1 : -1];\n\n               _Bool w[h];\n               char x[sizeof m == h * sizeof m[0] ? 1 : -1];\n               char y[-1 - (_Bool) 0 < 0 ? 1 : -1];\n               _Bool z = true;\n               _Bool *pz = &p;\n             #endif\n\nint\nmain (void)\n{\n\n             bool ps = &s;\n             *pp |= p;\n             *pp |= ! p;\n\n             #ifdef HAVE__BOOL\n               _Bool pt = &t;\n               *pz |= z;\n               *pz |= ! z;\n             #endif\n\n             /* Refer to every declared value, so they cannot be\n                discarded as unused.  
*/\n             return (!a + !b + !c + !d + !e + !f + !g + !h + !i + !j + !k\n                     + !l + !m + !n + !o + !p + !pp + !ps\n             #ifdef HAVE__BOOL\n                     + !q + !r + !u + !v + !w + !x + !y + !z + !pt\n             #endif\n                    );\n\n  ;\n  return 0;\n}\n_ACEOF\nif ac_fn_c_try_compile \"$LINENO\"\nthen :\n  ac_cv_header_stdbool_h=yes\nelse $as_nop\n  ac_cv_header_stdbool_h=no\nfi\nrm -f core conftest.err conftest.$ac_objext conftest.beam conftest.$ac_ext\nfi\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdbool_h\" >&5\nprintf \"%s\\n\" \"$ac_cv_header_stdbool_h\" >&6; }\n\nif test $ac_cv_header_stdbool_h = yes; then\n\nprintf \"%s\\n\" \"#define HAVE_STDBOOL_H 1\" >>confdefs.h\n\nfi\n\n\n\nac_config_commands=\"$ac_config_commands include/jemalloc/internal/public_symbols.txt\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/internal/private_symbols.awk\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/internal/private_symbols_jet.awk\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/internal/public_namespace.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/internal/public_unnamespace.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/jemalloc_protos_jet.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/jemalloc_rename.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/jemalloc_mangle.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/jemalloc_mangle_jet.h\"\n\nac_config_commands=\"$ac_config_commands include/jemalloc/jemalloc.h\"\n\n\n\n\nac_config_headers=\"$ac_config_headers $cfghdrs_tup\"\n\n\n\nac_config_files=\"$ac_config_files $cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof\"\n\n\n\ncat >confcache <<\\_ACEOF\n# This file is a shell script that caches the results of configure\n# tests run on this system so they can be shared between configure\n# 
scripts and configure runs, see configure's option --config-cache.\n# It is not useful on other systems.  If it contains results you don't\n# want to keep, you may remove or edit it.\n#\n# config.status only pays attention to the cache file if you give it\n# the --recheck option to rerun configure.\n#\n# `ac_cv_env_foo' variables (set or unset) will be overridden when\n# loading this file, other *unset* `ac_cv_foo' will be assigned the\n# following values.\n\n_ACEOF\n\n# The following way of writing the cache mishandles newlines in values,\n# but we know of no workaround that is simple, portable, and efficient.\n# So, we kill variables containing newlines.\n# Ultrix sh set writes to stderr and can't be redirected directly,\n# and sets the high bit in the cache file unless we assign to the vars.\n(\n  for ac_var in `(set) 2>&1 | sed -n 's/^\\([a-zA-Z_][a-zA-Z0-9_]*\\)=.*/\\1/p'`; do\n    eval ac_val=\\$$ac_var\n    case $ac_val in #(\n    *${as_nl}*)\n      case $ac_var in #(\n      *_cv_*) { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: cache variable $ac_var contains a newline\" >&2;} ;;\n      esac\n      case $ac_var in #(\n      _ | IFS | as_nl) ;; #(\n      BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #(\n      *) { eval $ac_var=; unset $ac_var;} ;;\n      esac ;;\n    esac\n  done\n\n  (set) 2>&1 |\n    case $as_nl`(ac_space=' '; set) 2>&1` in #(\n    *${as_nl}ac_space=\\ *)\n      # `set' does not quote correctly, so add quotes: double-quote\n      # substitution turns \\\\\\\\ into \\\\, and sed turns \\\\ into \\.\n      sed -n \\\n\t\"s/'/'\\\\\\\\''/g;\n\t  s/^\\\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\\\)=\\\\(.*\\\\)/\\\\1='\\\\2'/p\"\n      ;; #(\n    *)\n      # `set' quotes correctly as required by POSIX, so do not add quotes.\n      sed -n \"/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p\"\n      ;;\n    esac |\n    sort\n) |\n  sed '\n     /^ac_cv_env_/b end\n  
   t clear\n     :clear\n     s/^\\([^=]*\\)=\\(.*[{}].*\\)$/test ${\\1+y} || &/\n     t end\n     s/^\\([^=]*\\)=\\(.*\\)$/\\1=${\\1=\\2}/\n     :end' >>confcache\nif diff \"$cache_file\" confcache >/dev/null 2>&1; then :; else\n  if test -w \"$cache_file\"; then\n    if test \"x$cache_file\" != \"x/dev/null\"; then\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: updating cache $cache_file\" >&5\nprintf \"%s\\n\" \"$as_me: updating cache $cache_file\" >&6;}\n      if test ! -f \"$cache_file\" || test -h \"$cache_file\"; then\n\tcat confcache >\"$cache_file\"\n      else\n        case $cache_file in #(\n        */* | ?:*)\n\t  mv -f confcache \"$cache_file\"$$ &&\n\t  mv -f \"$cache_file\"$$ \"$cache_file\" ;; #(\n        *)\n\t  mv -f confcache \"$cache_file\" ;;\n\tesac\n      fi\n    fi\n  else\n    { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file\" >&5\nprintf \"%s\\n\" \"$as_me: not updating unwritable cache $cache_file\" >&6;}\n  fi\nfi\nrm -f confcache\n\ntest \"x$prefix\" = xNONE && prefix=$ac_default_prefix\n# Let make expand exec_prefix.\ntest \"x$exec_prefix\" = xNONE && exec_prefix='${prefix}'\n\nDEFS=-DHAVE_CONFIG_H\n\nac_libobjs=\nac_ltlibobjs=\nU=\nfor ac_i in : $LIBOBJS; do test \"x$ac_i\" = x: && continue\n  # 1. Remove the extension, and $U if already installed.\n  ac_script='s/\\$U\\././;s/\\.o$//;s/\\.obj$//'\n  ac_i=`printf \"%s\\n\" \"$ac_i\" | sed \"$ac_script\"`\n  # 2. Prepend LIBOBJDIR.  
When used with automake>=1.10 LIBOBJDIR\n  #    will be set to the directory where LIBOBJS objects are built.\n  as_fn_append ac_libobjs \" \\${LIBOBJDIR}$ac_i\\$U.$ac_objext\"\n  as_fn_append ac_ltlibobjs \" \\${LIBOBJDIR}$ac_i\"'$U.lo'\ndone\nLIBOBJS=$ac_libobjs\n\nLTLIBOBJS=$ac_ltlibobjs\n\n\n\n\n: \"${CONFIG_STATUS=./config.status}\"\nac_write_fail=0\nac_clean_files_save=$ac_clean_files\nac_clean_files=\"$ac_clean_files $CONFIG_STATUS\"\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS\" >&5\nprintf \"%s\\n\" \"$as_me: creating $CONFIG_STATUS\" >&6;}\nas_write_fail=0\ncat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1\n#! $SHELL\n# Generated by $as_me.\n# Run this file to recreate the current configuration.\n# Compiler output produced by configure, useful for debugging\n# configure, is in config.log if it exists.\n\ndebug=false\nac_cs_recheck=false\nac_cs_silent=false\n\nSHELL=\\${CONFIG_SHELL-$SHELL}\nexport SHELL\n_ASEOF\ncat >>$CONFIG_STATUS <<\\_ASEOF || as_write_fail=1\n## -------------------- ##\n## M4sh Initialization. ##\n## -------------------- ##\n\n# Be more Bourne compatible\nDUALCASE=1; export DUALCASE # for MKS sh\nas_nop=:\nif test ${ZSH_VERSION+y} && (emulate sh) >/dev/null 2>&1\nthen :\n  emulate sh\n  NULLCMD=:\n  # Pre-4.2 versions of Zsh do word splitting on ${1+\"$@\"}, which\n  # is contrary to our usage.  
Disable this feature.\n  alias -g '${1+\"$@\"}'='\"$@\"'\n  setopt NO_GLOB_SUBST\nelse $as_nop\n  case `(set -o) 2>/dev/null` in #(\n  *posix*) :\n    set -o posix ;; #(\n  *) :\n     ;;\nesac\nfi\n\n\n\n# Reset variables that may have inherited troublesome values from\n# the environment.\n\n# IFS needs to be set, to space, tab, and newline, in precisely that order.\n# (If _AS_PATH_WALK were called with IFS unset, it would have the\n# side effect of setting IFS to empty, thus disabling word splitting.)\n# Quoting is to prevent editors from complaining about space-tab.\nas_nl='\n'\nexport as_nl\nIFS=\" \"\"\t$as_nl\"\n\nPS1='$ '\nPS2='> '\nPS4='+ '\n\n# Ensure predictable behavior from utilities with locale-dependent output.\nLC_ALL=C\nexport LC_ALL\nLANGUAGE=C\nexport LANGUAGE\n\n# We cannot yet rely on \"unset\" to work, but we need these variables\n# to be unset--not just set to an empty or harmless value--now, to\n# avoid bugs in old shells (e.g. pre-3.0 UWIN ksh).  This construct\n# also avoids known problems related to \"unset\" and subshell syntax\n# in other old shells (e.g. bash 2.01 and pdksh 5.2.14).\nfor as_var in BASH_ENV ENV MAIL MAILPATH CDPATH\ndo eval test \\${$as_var+y} \\\n  && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || :\ndone\n\n# Ensure that fds 0, 1, and 2 are open.\nif (exec 3>&0) 2>/dev/null; then :; else exec 0</dev/null; fi\nif (exec 3>&1) 2>/dev/null; then :; else exec 1>/dev/null; fi\nif (exec 3>&2)            ; then :; else exec 2>/dev/null; fi\n\n# The user is always right.\nif ${PATH_SEPARATOR+false} :; then\n  PATH_SEPARATOR=:\n  (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && {\n    (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 ||\n      PATH_SEPARATOR=';'\n  }\nfi\n\n\n# Find who we are.  
Look in the path if we contain no directory separator.\nas_myself=\ncase $0 in #((\n  *[\\\\/]* ) as_myself=$0 ;;\n  *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR\nfor as_dir in $PATH\ndo\n  IFS=$as_save_IFS\n  case $as_dir in #(((\n    '') as_dir=./ ;;\n    */) ;;\n    *) as_dir=$as_dir/ ;;\n  esac\n    test -r \"$as_dir$0\" && as_myself=$as_dir$0 && break\n  done\nIFS=$as_save_IFS\n\n     ;;\nesac\n# We did not find ourselves, most probably we were run as `sh COMMAND'\n# in which case we are not to be found in the path.\nif test \"x$as_myself\" = x; then\n  as_myself=$0\nfi\nif test ! -f \"$as_myself\"; then\n  printf \"%s\\n\" \"$as_myself: error: cannot find myself; rerun with an absolute file name\" >&2\n  exit 1\nfi\n\n\n\n# as_fn_error STATUS ERROR [LINENO LOG_FD]\n# ----------------------------------------\n# Output \"`basename $0`: error: ERROR\" to stderr. If LINENO and LOG_FD are\n# provided, also output the error to LOG_FD, referencing LINENO. Then exit the\n# script with STATUS, using 1 if that was 0.\nas_fn_error ()\n{\n  as_status=$1; test $as_status -eq 0 && as_status=1\n  if test \"$4\"; then\n    as_lineno=${as_lineno-\"$3\"} as_lineno_stack=as_lineno_stack=$as_lineno_stack\n    printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: error: $2\" >&$4\n  fi\n  printf \"%s\\n\" \"$as_me: error: $2\" >&2\n  as_fn_exit $as_status\n} # as_fn_error\n\n\n\n# as_fn_set_status STATUS\n# -----------------------\n# Set $? to STATUS, without forking.\nas_fn_set_status ()\n{\n  return $1\n} # as_fn_set_status\n\n# as_fn_exit STATUS\n# -----------------\n# Exit the shell with STATUS, even in a \"trap 0\" or \"set -e\" context.\nas_fn_exit ()\n{\n  set +e\n  as_fn_set_status $1\n  exit $1\n} # as_fn_exit\n\n# as_fn_unset VAR\n# ---------------\n# Portably unset VAR.\nas_fn_unset ()\n{\n  { eval $1=; unset $1;}\n}\nas_unset=as_fn_unset\n\n# as_fn_append VAR VALUE\n# ----------------------\n# Append the text in VALUE to the end of the definition contained in VAR. 
Take\n# advantage of any shell optimizations that allow amortized linear growth over\n# repeated appends, instead of the typical quadratic growth present in naive\n# implementations.\nif (eval \"as_var=1; as_var+=2; test x\\$as_var = x12\") 2>/dev/null\nthen :\n  eval 'as_fn_append ()\n  {\n    eval $1+=\\$2\n  }'\nelse $as_nop\n  as_fn_append ()\n  {\n    eval $1=\\$$1\\$2\n  }\nfi # as_fn_append\n\n# as_fn_arith ARG...\n# ------------------\n# Perform arithmetic evaluation on the ARGs, and store the result in the\n# global $as_val. Take advantage of shells that can avoid forks. The arguments\n# must be portable across $(()) and expr.\nif (eval \"test \\$(( 1 + 1 )) = 2\") 2>/dev/null\nthen :\n  eval 'as_fn_arith ()\n  {\n    as_val=$(( $* ))\n  }'\nelse $as_nop\n  as_fn_arith ()\n  {\n    as_val=`expr \"$@\" || test $? -eq 1`\n  }\nfi # as_fn_arith\n\n\nif expr a : '\\(a\\)' >/dev/null 2>&1 &&\n   test \"X`expr 00001 : '.*\\(...\\)'`\" = X001; then\n  as_expr=expr\nelse\n  as_expr=false\nfi\n\nif (basename -- /) >/dev/null 2>&1 && test \"X`basename -- / 2>&1`\" = \"X/\"; then\n  as_basename=basename\nelse\n  as_basename=false\nfi\n\nif (as_dir=`dirname -- /` && test \"X$as_dir\" = X/) >/dev/null 2>&1; then\n  as_dirname=dirname\nelse\n  as_dirname=false\nfi\n\nas_me=`$as_basename -- \"$0\" ||\n$as_expr X/\"$0\" : '.*/\\([^/][^/]*\\)/*$' \\| \\\n\t X\"$0\" : 'X\\(//\\)$' \\| \\\n\t X\"$0\" : 'X\\(/\\)' \\| . 
2>/dev/null ||\nprintf \"%s\\n\" X/\"$0\" |\n    sed '/^.*\\/\\([^/][^/]*\\)\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\/\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\/\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n\n# Avoid depending upon Character Ranges.\nas_cr_letters='abcdefghijklmnopqrstuvwxyz'\nas_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nas_cr_Letters=$as_cr_letters$as_cr_LETTERS\nas_cr_digits='0123456789'\nas_cr_alnum=$as_cr_Letters$as_cr_digits\n\n\n# Determine whether it's possible to make 'echo' print without a newline.\n# These variables are no longer used directly by Autoconf, but are AC_SUBSTed\n# for compatibility with existing Makefiles.\nECHO_C= ECHO_N= ECHO_T=\ncase `echo -n x` in #(((((\n-n*)\n  case `echo 'xy\\c'` in\n  *c*) ECHO_T='\t';;\t# ECHO_T is single tab character.\n  xy)  ECHO_C='\\c';;\n  *)   echo `echo ksh88 bug on AIX 6.1` > /dev/null\n       ECHO_T='\t';;\n  esac;;\n*)\n  ECHO_N='-n';;\nesac\n\n# For backward compatibility with old third-party macros, we provide\n# the shell variables $as_echo and $as_echo_n.  New code should use\n# AS_ECHO([\"message\"]) and AS_ECHO_N([\"message\"]), respectively.\nas_echo='printf %s\\n'\nas_echo_n='printf %s'\n\nrm -f conf$$ conf$$.exe conf$$.file\nif test -d conf$$.dir; then\n  rm -f conf$$.dir/conf$$.file\nelse\n  rm -f conf$$.dir\n  mkdir conf$$.dir 2>/dev/null\nfi\nif (echo >conf$$.file) 2>/dev/null; then\n  if ln -s conf$$.file conf$$ 2>/dev/null; then\n    as_ln_s='ln -s'\n    # ... but there are two gotchas:\n    # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail.\n    # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable.\n    # In both cases, we have to default to `cp -pR'.\n    ln -s conf$$.file conf$$.dir 2>/dev/null && test ! 
-f conf$$.exe ||\n      as_ln_s='cp -pR'\n  elif ln conf$$.file conf$$ 2>/dev/null; then\n    as_ln_s=ln\n  else\n    as_ln_s='cp -pR'\n  fi\nelse\n  as_ln_s='cp -pR'\nfi\nrm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file\nrmdir conf$$.dir 2>/dev/null\n\n\n# as_fn_mkdir_p\n# -------------\n# Create \"$as_dir\" as a directory, including parents if necessary.\nas_fn_mkdir_p ()\n{\n\n  case $as_dir in #(\n  -*) as_dir=./$as_dir;;\n  esac\n  test -d \"$as_dir\" || eval $as_mkdir_p || {\n    as_dirs=\n    while :; do\n      case $as_dir in #(\n      *\\'*) as_qdir=`printf \"%s\\n\" \"$as_dir\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"`;; #'(\n      *) as_qdir=$as_dir;;\n      esac\n      as_dirs=\"'$as_qdir' $as_dirs\"\n      as_dir=`$as_dirname -- \"$as_dir\" ||\n$as_expr X\"$as_dir\" : 'X\\(.*[^/]\\)//*[^/][^/]*/*$' \\| \\\n\t X\"$as_dir\" : 'X\\(//\\)[^/]' \\| \\\n\t X\"$as_dir\" : 'X\\(//\\)$' \\| \\\n\t X\"$as_dir\" : 'X\\(/\\)' \\| . 2>/dev/null ||\nprintf \"%s\\n\" X\"$as_dir\" |\n    sed '/^X\\(.*[^/]\\)\\/\\/*[^/][^/]*\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)[^/].*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n      test -d \"$as_dir\" && break\n    done\n    test -z \"$as_dirs\" || eval \"mkdir $as_dirs\"\n  } || test -d \"$as_dir\" || as_fn_error $? \"cannot create directory $as_dir\"\n\n\n} # as_fn_mkdir_p\nif mkdir -p . 
2>/dev/null; then\n  as_mkdir_p='mkdir -p \"$as_dir\"'\nelse\n  test -d ./-p && rmdir ./-p\n  as_mkdir_p=false\nfi\n\n\n# as_fn_executable_p FILE\n# -----------------------\n# Test if FILE is an executable regular file.\nas_fn_executable_p ()\n{\n  test -f \"$1\" && test -x \"$1\"\n} # as_fn_executable_p\nas_test_x='test -x'\nas_executable_p=as_fn_executable_p\n\n# Sed expression to map a string onto a valid CPP name.\nas_tr_cpp=\"eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'\"\n\n# Sed expression to map a string onto a valid variable name.\nas_tr_sh=\"eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'\"\n\n\nexec 6>&1\n## ----------------------------------- ##\n## Main body of $CONFIG_STATUS script. ##\n## ----------------------------------- ##\n_ASEOF\ntest $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\n# Save the log message, to keep $0 and so on meaningful, and to\n# report actual input values of CONFIG_FILES etc. instead of their\n# values after options handling.\nac_log=\"\nThis file was extended by $as_me, which was\ngenerated by GNU Autoconf 2.71.  
Invocation command line was\n\n  CONFIG_FILES    = $CONFIG_FILES\n  CONFIG_HEADERS  = $CONFIG_HEADERS\n  CONFIG_LINKS    = $CONFIG_LINKS\n  CONFIG_COMMANDS = $CONFIG_COMMANDS\n  $ $0 $@\n\non `(hostname || uname -n) 2>/dev/null | sed 1q`\n\"\n\n_ACEOF\n\ncase $ac_config_files in *\"\n\"*) set x $ac_config_files; shift; ac_config_files=$*;;\nesac\n\ncase $ac_config_headers in *\"\n\"*) set x $ac_config_headers; shift; ac_config_headers=$*;;\nesac\n\n\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\n# Files that config.status was made for.\nconfig_files=\"$ac_config_files\"\nconfig_headers=\"$ac_config_headers\"\nconfig_commands=\"$ac_config_commands\"\n\n_ACEOF\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\nac_cs_usage=\"\\\n\\`$as_me' instantiates files and other configuration actions\nfrom templates according to the current configuration.  Unless the files\nand actions are specified as TAGs, all are instantiated by default.\n\nUsage: $0 [OPTION]... [TAG]...\n\n  -h, --help       print this help, then exit\n  -V, --version    print version number and configuration settings, then exit\n      --config     print configuration, then exit\n  -q, --quiet, --silent\n                   do not print progress messages\n  -d, --debug      don't remove temporary files\n      --recheck    update $as_me by reconfiguring in the same conditions\n      --file=FILE[:TEMPLATE]\n                   instantiate the configuration file FILE\n      --header=FILE[:TEMPLATE]\n                   instantiate the configuration header FILE\n\nConfiguration files:\n$config_files\n\nConfiguration headers:\n$config_headers\n\nConfiguration commands:\n$config_commands\n\nReport bugs to the package provider.\"\n\n_ACEOF\nac_cs_config=`printf \"%s\\n\" \"$ac_configure_args\" | sed \"$ac_safe_unquote\"`\nac_cs_config_escaped=`printf \"%s\\n\" \"$ac_cs_config\" | sed \"s/^ //; s/'/'\\\\\\\\\\\\\\\\''/g\"`\ncat >>$CONFIG_STATUS <<_ACEOF || 
ac_write_fail=1\nac_cs_config='$ac_cs_config_escaped'\nac_cs_version=\"\\\\\nconfig.status\nconfigured by $0, generated by GNU Autoconf 2.71,\n  with options \\\\\"\\$ac_cs_config\\\\\"\n\nCopyright (C) 2021 Free Software Foundation, Inc.\nThis config.status script is free software; the Free Software Foundation\ngives unlimited permission to copy, distribute and modify it.\"\n\nac_pwd='$ac_pwd'\nsrcdir='$srcdir'\nINSTALL='$INSTALL'\nAWK='$AWK'\ntest -n \"\\$AWK\" || AWK=awk\n_ACEOF\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\n# The default lists apply if the user does not specify any file.\nac_need_defaults=:\nwhile test $# != 0\ndo\n  case $1 in\n  --*=?*)\n    ac_option=`expr \"X$1\" : 'X\\([^=]*\\)='`\n    ac_optarg=`expr \"X$1\" : 'X[^=]*=\\(.*\\)'`\n    ac_shift=:\n    ;;\n  --*=)\n    ac_option=`expr \"X$1\" : 'X\\([^=]*\\)='`\n    ac_optarg=\n    ac_shift=:\n    ;;\n  *)\n    ac_option=$1\n    ac_optarg=$2\n    ac_shift=shift\n    ;;\n  esac\n\n  case $ac_option in\n  # Handling of the options.\n  -recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r)\n    ac_cs_recheck=: ;;\n  --version | --versio | --versi | --vers | --ver | --ve | --v | -V )\n    printf \"%s\\n\" \"$ac_cs_version\"; exit ;;\n  --config | --confi | --conf | --con | --co | --c )\n    printf \"%s\\n\" \"$ac_cs_config\"; exit ;;\n  --debug | --debu | --deb | --de | --d | -d )\n    debug=: ;;\n  --file | --fil | --fi | --f )\n    $ac_shift\n    case $ac_optarg in\n    *\\'*) ac_optarg=`printf \"%s\\n\" \"$ac_optarg\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"` ;;\n    '') as_fn_error $? 
\"missing file argument\" ;;\n    esac\n    as_fn_append CONFIG_FILES \" '$ac_optarg'\"\n    ac_need_defaults=false;;\n  --header | --heade | --head | --hea )\n    $ac_shift\n    case $ac_optarg in\n    *\\'*) ac_optarg=`printf \"%s\\n\" \"$ac_optarg\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"` ;;\n    esac\n    as_fn_append CONFIG_HEADERS \" '$ac_optarg'\"\n    ac_need_defaults=false;;\n  --he | --h)\n    # Conflict between --help and --header\n    as_fn_error $? \"ambiguous option: \\`$1'\nTry \\`$0 --help' for more information.\";;\n  --help | --hel | -h )\n    printf \"%s\\n\" \"$ac_cs_usage\"; exit ;;\n  -q | -quiet | --quiet | --quie | --qui | --qu | --q \\\n  | -silent | --silent | --silen | --sile | --sil | --si | --s)\n    ac_cs_silent=: ;;\n\n  # This is an error.\n  -*) as_fn_error $? \"unrecognized option: \\`$1'\nTry \\`$0 --help' for more information.\" ;;\n\n  *) as_fn_append ac_config_targets \" $1\"\n     ac_need_defaults=false ;;\n\n  esac\n  shift\ndone\n\nac_configure_extra_args=\n\nif $ac_cs_silent; then\n  exec 6>/dev/null\n  ac_configure_extra_args=\"$ac_configure_extra_args --silent\"\nfi\n\n_ACEOF\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\nif \\$ac_cs_recheck; then\n  set X $SHELL '$0' $ac_configure_args \\$ac_configure_extra_args --no-create --no-recursion\n  shift\n  \\printf \"%s\\n\" \"running CONFIG_SHELL=$SHELL \\$*\" >&6\n  CONFIG_SHELL='$SHELL'\n  export CONFIG_SHELL\n  exec \"\\$@\"\nfi\n\n_ACEOF\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\nexec 5>>config.log\n{\n  echo\n  sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX\n## Running $as_me. 
##\n_ASBOX\n  printf \"%s\\n\" \"$ac_log\"\n} >&5\n\n_ACEOF\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\n#\n# INIT-COMMANDS\n#\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  mangling_map=\"${mangling_map}\"\n  public_syms=\"${public_syms}\"\n  JEMALLOC_PREFIX=\"${JEMALLOC_PREFIX}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  public_syms=\"${public_syms}\"\n  wrap_syms=\"${wrap_syms}\"\n  SYM_PREFIX=\"${SYM_PREFIX}\"\n  JEMALLOC_PREFIX=\"${JEMALLOC_PREFIX}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  public_syms=\"${public_syms}\"\n  wrap_syms=\"${wrap_syms}\"\n  SYM_PREFIX=\"${SYM_PREFIX}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n\n\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  install_suffix=\"${install_suffix}\"\n\n\n_ACEOF\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\n\n# Handling of arguments.\nfor ac_config_target in $ac_config_targets\ndo\n  case $ac_config_target in\n    \"include/jemalloc/internal/public_symbols.txt\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/internal/public_symbols.txt\" ;;\n    \"include/jemalloc/internal/private_symbols.awk\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/internal/private_symbols.awk\" ;;\n    \"include/jemalloc/internal/private_symbols_jet.awk\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/internal/private_symbols_jet.awk\" ;;\n    \"include/jemalloc/internal/public_namespace.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/internal/public_namespace.h\" ;;\n    \"include/jemalloc/internal/public_unnamespace.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/internal/public_unnamespace.h\" ;;\n    \"include/jemalloc/jemalloc_protos_jet.h\") 
CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/jemalloc_protos_jet.h\" ;;\n    \"include/jemalloc/jemalloc_rename.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/jemalloc_rename.h\" ;;\n    \"include/jemalloc/jemalloc_mangle.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/jemalloc_mangle.h\" ;;\n    \"include/jemalloc/jemalloc_mangle_jet.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/jemalloc_mangle_jet.h\" ;;\n    \"include/jemalloc/jemalloc.h\") CONFIG_COMMANDS=\"$CONFIG_COMMANDS include/jemalloc/jemalloc.h\" ;;\n    \"$cfghdrs_tup\") CONFIG_HEADERS=\"$CONFIG_HEADERS $cfghdrs_tup\" ;;\n    \"$cfgoutputs_tup\") CONFIG_FILES=\"$CONFIG_FILES $cfgoutputs_tup\" ;;\n    \"config.stamp\") CONFIG_FILES=\"$CONFIG_FILES config.stamp\" ;;\n    \"bin/jemalloc-config\") CONFIG_FILES=\"$CONFIG_FILES bin/jemalloc-config\" ;;\n    \"bin/jemalloc.sh\") CONFIG_FILES=\"$CONFIG_FILES bin/jemalloc.sh\" ;;\n    \"bin/jeprof\") CONFIG_FILES=\"$CONFIG_FILES bin/jeprof\" ;;\n\n  *) as_fn_error $? \"invalid argument: \\`$ac_config_target'\" \"$LINENO\" 5;;\n  esac\ndone\n\n\n# If the user did not use the arguments to specify the items to instantiate,\n# then the envvar interface is used.  Set only those that are not.\n# We use the long form for the default assignment because of an extremely\n# bizarre bug on SunOS 4.1.3.\nif $ac_need_defaults; then\n  test ${CONFIG_FILES+y} || CONFIG_FILES=$config_files\n  test ${CONFIG_HEADERS+y} || CONFIG_HEADERS=$config_headers\n  test ${CONFIG_COMMANDS+y} || CONFIG_COMMANDS=$config_commands\nfi\n\n# Have a temporary directory for convenience.  
Make it in the build tree\n# simply because there is no reason against having it here, and in addition,\n# creating and moving files from /tmp can sometimes cause problems.\n# Hook for its removal unless debugging.\n# Note that there is a small window in which the directory will not be cleaned:\n# after its creation but before its name has been assigned to `$tmp'.\n$debug ||\n{\n  tmp= ac_tmp=\n  trap 'exit_status=$?\n  : \"${ac_tmp:=$tmp}\"\n  { test ! -d \"$ac_tmp\" || rm -fr \"$ac_tmp\"; } && exit $exit_status\n' 0\n  trap 'as_fn_exit 1' 1 2 13 15\n}\n# Create a (secure) tmp directory for tmp files.\n\n{\n  tmp=`(umask 077 && mktemp -d \"./confXXXXXX\") 2>/dev/null` &&\n  test -d \"$tmp\"\n}  ||\n{\n  tmp=./conf$$-$RANDOM\n  (umask 077 && mkdir \"$tmp\")\n} || as_fn_error $? \"cannot create a temporary directory in .\" \"$LINENO\" 5\nac_tmp=$tmp\n\n# Set up the scripts for CONFIG_FILES section.\n# No need to generate them if there are no CONFIG_FILES.\n# This happens for instance with `./config.status config.h'.\nif test -n \"$CONFIG_FILES\"; then\n\n\nac_cr=`echo X | tr X '\\015'`\n# On cygwin, bash can eat \\r inside `` if the user requested igncr.\n# But we know of no other shell where ac_cr would be empty at this\n# point, so we can use a bashism as a fallback.\nif test \"x$ac_cr\" = x; then\n  eval ac_cr=\\$\\'\\\\r\\'\nfi\nac_cs_awk_cr=`$AWK 'BEGIN { print \"a\\rb\" }' </dev/null 2>/dev/null`\nif test \"$ac_cs_awk_cr\" = \"a${ac_cr}b\"; then\n  ac_cs_awk_cr='\\\\r'\nelse\n  ac_cs_awk_cr=$ac_cr\nfi\n\necho 'BEGIN {' >\"$ac_tmp/subs1.awk\" &&\n_ACEOF\n\n\n{\n  echo \"cat >conf$$subs.awk <<_ACEOF\" &&\n  echo \"$ac_subst_vars\" | sed 's/.*/&!$&$ac_delim/' &&\n  echo \"_ACEOF\"\n} >conf$$subs.sh ||\n  as_fn_error $? \"could not make $CONFIG_STATUS\" \"$LINENO\" 5\nac_delim_num=`echo \"$ac_subst_vars\" | grep -c '^'`\nac_delim='%!_!# '\nfor ac_last_try in false false false false false :; do\n  . ./conf$$subs.sh ||\n    as_fn_error $? 
\"could not make $CONFIG_STATUS\" \"$LINENO\" 5\n\n  ac_delim_n=`sed -n \"s/.*$ac_delim\\$/X/p\" conf$$subs.awk | grep -c X`\n  if test $ac_delim_n = $ac_delim_num; then\n    break\n  elif $ac_last_try; then\n    as_fn_error $? \"could not make $CONFIG_STATUS\" \"$LINENO\" 5\n  else\n    ac_delim=\"$ac_delim!$ac_delim _$ac_delim!! \"\n  fi\ndone\nrm -f conf$$subs.sh\n\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\ncat >>\"\\$ac_tmp/subs1.awk\" <<\\\\_ACAWK &&\n_ACEOF\nsed -n '\nh\ns/^/S[\"/; s/!.*/\"]=/\np\ng\ns/^[^!]*!//\n:repl\nt repl\ns/'\"$ac_delim\"'$//\nt delim\n:nl\nh\ns/\\(.\\{148\\}\\)..*/\\1/\nt more1\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\\\\n\"\\\\/\np\nn\nb repl\n:more1\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\"\\\\/\np\ng\ns/.\\{148\\}//\nt nl\n:delim\nh\ns/\\(.\\{148\\}\\)..*/\\1/\nt more2\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\"/\np\nb\n:more2\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\"\\\\/\np\ng\ns/.\\{148\\}//\nt delim\n' <conf$$subs.awk | sed '\n/^[^\"\"]/{\n  N\n  s/\\n//\n}\n' >>$CONFIG_STATUS || ac_write_fail=1\nrm -f conf$$subs.awk\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\n_ACAWK\ncat >>\"\\$ac_tmp/subs1.awk\" <<_ACAWK &&\n  for (key in S) S_is_set[key] = 1\n  FS = \"\u0007\"\n\n}\n{\n  line = $ 0\n  nfields = split(line, field, \"@\")\n  substed = 0\n  len = length(field[1])\n  for (i = 2; i < nfields; i++) {\n    key = field[i]\n    keylen = length(key)\n    if (S_is_set[key]) {\n      value = S[key]\n      line = substr(line, 1, len) \"\" value \"\" substr(line, len + keylen + 3)\n      len += length(value) + length(field[++i])\n      substed = 1\n    } else\n      len += 1 + keylen\n  }\n\n  print line\n}\n\n_ACAWK\n_ACEOF\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\nif sed \"s/$ac_cr//\" < /dev/null > /dev/null 2>&1; then\n  sed \"s/$ac_cr\\$//; s/$ac_cr/$ac_cs_awk_cr/g\"\nelse\n  cat\nfi < \"$ac_tmp/subs1.awk\" > \"$ac_tmp/subs.awk\" \\\n  || as_fn_error $? 
\"could not setup config files machinery\" \"$LINENO\" 5\n_ACEOF\n\n# VPATH may cause trouble with some makes, so we remove sole $(srcdir),\n# ${srcdir} and @srcdir@ entries from VPATH if srcdir is \".\", strip leading and\n# trailing colons and then remove the whole line if VPATH becomes empty\n# (actually we leave an empty line to preserve line numbers).\nif test \"x$srcdir\" = x.; then\n  ac_vpsub='/^[\t ]*VPATH[\t ]*=[\t ]*/{\nh\ns///\ns/^/:/\ns/[\t ]*$/:/\ns/:\\$(srcdir):/:/g\ns/:\\${srcdir}:/:/g\ns/:@srcdir@:/:/g\ns/^:*//\ns/:*$//\nx\ns/\\(=[\t ]*\\).*/\\1/\nG\ns/\\n//\ns/^[^=]*=[\t ]*$//\n}'\nfi\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\nfi # test -n \"$CONFIG_FILES\"\n\n# Set up the scripts for CONFIG_HEADERS section.\n# No need to generate them if there are no CONFIG_HEADERS.\n# This happens for instance with `./config.status Makefile'.\nif test -n \"$CONFIG_HEADERS\"; then\ncat >\"$ac_tmp/defines.awk\" <<\\_ACAWK ||\nBEGIN {\n_ACEOF\n\n# Transform confdefs.h into an awk script `defines.awk', embedded as\n# here-document in config.status, that substitutes the proper values into\n# config.h.in to produce config.h.\n\n# Create a delimiter string that does not exist in confdefs.h, to ease\n# handling of long lines.\nac_delim='%!_!# '\nfor ac_last_try in false false :; do\n  ac_tt=`sed -n \"/$ac_delim/p\" confdefs.h`\n  if test -z \"$ac_tt\"; then\n    break\n  elif $ac_last_try; then\n    as_fn_error $? \"could not make $CONFIG_HEADERS\" \"$LINENO\" 5\n  else\n    ac_delim=\"$ac_delim!$ac_delim _$ac_delim!! \"\n  fi\ndone\n\n# For the awk script, D is an array of macro values keyed by name,\n# likewise P contains macro parameters if any.  
Preserve backslash\n# newline sequences.\n\nac_word_re=[_$as_cr_Letters][_$as_cr_alnum]*\nsed -n '\ns/.\\{148\\}/&'\"$ac_delim\"'/g\nt rset\n:rset\ns/^[\t ]*#[\t ]*define[\t ][\t ]*/ /\nt def\nd\n:def\ns/\\\\$//\nt bsnl\ns/[\"\\\\]/\\\\&/g\ns/^ \\('\"$ac_word_re\"'\\)\\(([^()]*)\\)[\t ]*\\(.*\\)/P[\"\\1\"]=\"\\2\"\\\nD[\"\\1\"]=\" \\3\"/p\ns/^ \\('\"$ac_word_re\"'\\)[\t ]*\\(.*\\)/D[\"\\1\"]=\" \\2\"/p\nd\n:bsnl\ns/[\"\\\\]/\\\\&/g\ns/^ \\('\"$ac_word_re\"'\\)\\(([^()]*)\\)[\t ]*\\(.*\\)/P[\"\\1\"]=\"\\2\"\\\nD[\"\\1\"]=\" \\3\\\\\\\\\\\\n\"\\\\/p\nt cont\ns/^ \\('\"$ac_word_re\"'\\)[\t ]*\\(.*\\)/D[\"\\1\"]=\" \\2\\\\\\\\\\\\n\"\\\\/p\nt cont\nd\n:cont\nn\ns/.\\{148\\}/&'\"$ac_delim\"'/g\nt clear\n:clear\ns/\\\\$//\nt bsnlc\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\"/p\nd\n:bsnlc\ns/[\"\\\\]/\\\\&/g; s/^/\"/; s/$/\\\\\\\\\\\\n\"\\\\/p\nb cont\n' <confdefs.h | sed '\ns/'\"$ac_delim\"'/\"\\\\\\\n\"/g' >>$CONFIG_STATUS || ac_write_fail=1\n\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\n  for (key in D) D_is_set[key] = 1\n  FS = \"\u0007\"\n}\n/^[\\t ]*#[\\t ]*(define|undef)[\\t ]+$ac_word_re([\\t (]|\\$)/ {\n  line = \\$ 0\n  split(line, arg, \" \")\n  if (arg[1] == \"#\") {\n    defundef = arg[2]\n    mac1 = arg[3]\n  } else {\n    defundef = substr(arg[1], 2)\n    mac1 = arg[2]\n  }\n  split(mac1, mac2, \"(\") #)\n  macro = mac2[1]\n  prefix = substr(line, 1, index(line, defundef) - 1)\n  if (D_is_set[macro]) {\n    # Preserve the white space surrounding the \"#\".\n    print prefix \"define\", macro P[macro] D[macro]\n    next\n  } else {\n    # Replace #undef with comments.  This is necessary, for example,\n    # in the case of _POSIX_SOURCE, which is predefined and required\n    # on some systems where configure will not decide to define it.\n    if (defundef == \"undef\") {\n      print \"/*\", prefix defundef, macro, \"*/\"\n      next\n    }\n  }\n}\n{ print }\n_ACAWK\n_ACEOF\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\n  as_fn_error $? 
\"could not setup config headers machinery\" \"$LINENO\" 5\nfi # test -n \"$CONFIG_HEADERS\"\n\n\neval set X \"  :F $CONFIG_FILES  :H $CONFIG_HEADERS    :C $CONFIG_COMMANDS\"\nshift\nfor ac_tag\ndo\n  case $ac_tag in\n  :[FHLC]) ac_mode=$ac_tag; continue;;\n  esac\n  case $ac_mode$ac_tag in\n  :[FHL]*:*);;\n  :L* | :C*:*) as_fn_error $? \"invalid tag \\`$ac_tag'\" \"$LINENO\" 5;;\n  :[FH]-) ac_tag=-:-;;\n  :[FH]*) ac_tag=$ac_tag:$ac_tag.in;;\n  esac\n  ac_save_IFS=$IFS\n  IFS=:\n  set x $ac_tag\n  IFS=$ac_save_IFS\n  shift\n  ac_file=$1\n  shift\n\n  case $ac_mode in\n  :L) ac_source=$1;;\n  :[FH])\n    ac_file_inputs=\n    for ac_f\n    do\n      case $ac_f in\n      -) ac_f=\"$ac_tmp/stdin\";;\n      *) # Look for the file first in the build tree, then in the source tree\n\t # (if the path is not absolute).  The absolute path cannot be DOS-style,\n\t # because $ac_f cannot contain `:'.\n\t test -f \"$ac_f\" ||\n\t   case $ac_f in\n\t   [\\\\/$]*) false;;\n\t   *) test -f \"$srcdir/$ac_f\" && ac_f=\"$srcdir/$ac_f\";;\n\t   esac ||\n\t   as_fn_error 1 \"cannot find input file: \\`$ac_f'\" \"$LINENO\" 5;;\n      esac\n      case $ac_f in *\\'*) ac_f=`printf \"%s\\n\" \"$ac_f\" | sed \"s/'/'\\\\\\\\\\\\\\\\''/g\"`;; esac\n      as_fn_append ac_file_inputs \" '$ac_f'\"\n    done\n\n    # Let's still pretend it is `configure' which instantiates (i.e., don't\n    # use $as_me), people would be surprised to read:\n    #    /* config.h.  Generated by config.status.  */\n    configure_input='Generated from '`\n\t  printf \"%s\\n\" \"$*\" | sed 's|^[^:]*/||;s|:[^:]*/|, |g'\n\t`' by configure.'\n    if test x\"$ac_file\" != x-; then\n      configure_input=\"$ac_file.  
$configure_input\"\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: creating $ac_file\" >&5\nprintf \"%s\\n\" \"$as_me: creating $ac_file\" >&6;}\n    fi\n    # Neutralize special characters interpreted by sed in replacement strings.\n    case $configure_input in #(\n    *\\&* | *\\|* | *\\\\* )\n       ac_sed_conf_input=`printf \"%s\\n\" \"$configure_input\" |\n       sed 's/[\\\\\\\\&|]/\\\\\\\\&/g'`;; #(\n    *) ac_sed_conf_input=$configure_input;;\n    esac\n\n    case $ac_tag in\n    *:-:* | *:-) cat >\"$ac_tmp/stdin\" \\\n      || as_fn_error $? \"could not create $ac_file\" \"$LINENO\" 5 ;;\n    esac\n    ;;\n  esac\n\n  ac_dir=`$as_dirname -- \"$ac_file\" ||\n$as_expr X\"$ac_file\" : 'X\\(.*[^/]\\)//*[^/][^/]*/*$' \\| \\\n\t X\"$ac_file\" : 'X\\(//\\)[^/]' \\| \\\n\t X\"$ac_file\" : 'X\\(//\\)$' \\| \\\n\t X\"$ac_file\" : 'X\\(/\\)' \\| . 2>/dev/null ||\nprintf \"%s\\n\" X\"$ac_file\" |\n    sed '/^X\\(.*[^/]\\)\\/\\/*[^/][^/]*\\/*$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)[^/].*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\/\\)$/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  /^X\\(\\/\\).*/{\n\t    s//\\1/\n\t    q\n\t  }\n\t  s/.*/./; q'`\n  as_dir=\"$ac_dir\"; as_fn_mkdir_p\n  ac_builddir=.\n\ncase \"$ac_dir\" in\n.) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;;\n*)\n  ac_dir_suffix=/`printf \"%s\\n\" \"$ac_dir\" | sed 's|^\\.[\\\\/]||'`\n  # A \"..\" for each directory in $ac_dir_suffix.\n  ac_top_builddir_sub=`printf \"%s\\n\" \"$ac_dir_suffix\" | sed 's|/[^\\\\/]*|/..|g;s|/||'`\n  case $ac_top_builddir_sub in\n  \"\") ac_top_builddir_sub=. ac_top_build_prefix= ;;\n  *)  ac_top_build_prefix=$ac_top_builddir_sub/ ;;\n  esac ;;\nesac\nac_abs_top_builddir=$ac_pwd\nac_abs_builddir=$ac_pwd$ac_dir_suffix\n# for backward compatibility:\nac_top_builddir=$ac_top_build_prefix\n\ncase $srcdir in\n  .)  
# We are building in place.\n    ac_srcdir=.\n    ac_top_srcdir=$ac_top_builddir_sub\n    ac_abs_top_srcdir=$ac_pwd ;;\n  [\\\\/]* | ?:[\\\\/]* )  # Absolute name.\n    ac_srcdir=$srcdir$ac_dir_suffix;\n    ac_top_srcdir=$srcdir\n    ac_abs_top_srcdir=$srcdir ;;\n  *) # Relative name.\n    ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix\n    ac_top_srcdir=$ac_top_build_prefix$srcdir\n    ac_abs_top_srcdir=$ac_pwd/$srcdir ;;\nesac\nac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix\n\n\n  case $ac_mode in\n  :F)\n  #\n  # CONFIG_FILE\n  #\n\n  case $INSTALL in\n  [\\\\/$]* | ?:[\\\\/]* ) ac_INSTALL=$INSTALL ;;\n  *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;;\n  esac\n_ACEOF\n\ncat >>$CONFIG_STATUS <<\\_ACEOF || ac_write_fail=1\n# If the template does not know about datarootdir, expand it.\n# FIXME: This hack should be removed a few years after 2.60.\nac_datarootdir_hack=; ac_datarootdir_seen=\nac_sed_dataroot='\n/datarootdir/ {\n  p\n  q\n}\n/@datadir@/p\n/@docdir@/p\n/@infodir@/p\n/@localedir@/p\n/@mandir@/p'\ncase `eval \"sed -n \\\"\\$ac_sed_dataroot\\\" $ac_file_inputs\"` in\n*datarootdir*) ac_datarootdir_seen=yes;;\n*@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*)\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting\" >&2;}\n_ACEOF\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\n  ac_datarootdir_hack='\n  s&@datadir@&$datadir&g\n  s&@docdir@&$docdir&g\n  s&@infodir@&$infodir&g\n  s&@localedir@&$localedir&g\n  s&@mandir@&$mandir&g\n  s&\\\\\\${datarootdir}&$datarootdir&g' ;;\nesac\n_ACEOF\n\n# Neutralize VPATH when `$srcdir' = `.'.\n# Shell code in configure.ac might set extrasub.\n# FIXME: do we really want to maintain this feature?\ncat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1\nac_sed_extra=\"$ac_vpsub\n$extrasub\n_ACEOF\ncat >>$CONFIG_STATUS <<\\_ACEOF || 
ac_write_fail=1\n:t\n/@[a-zA-Z_][a-zA-Z_0-9]*@/!b\ns|@configure_input@|$ac_sed_conf_input|;t t\ns&@top_builddir@&$ac_top_builddir_sub&;t t\ns&@top_build_prefix@&$ac_top_build_prefix&;t t\ns&@srcdir@&$ac_srcdir&;t t\ns&@abs_srcdir@&$ac_abs_srcdir&;t t\ns&@top_srcdir@&$ac_top_srcdir&;t t\ns&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t\ns&@builddir@&$ac_builddir&;t t\ns&@abs_builddir@&$ac_abs_builddir&;t t\ns&@abs_top_builddir@&$ac_abs_top_builddir&;t t\ns&@INSTALL@&$ac_INSTALL&;t t\n$ac_datarootdir_hack\n\"\neval sed \\\"\\$ac_sed_extra\\\" \"$ac_file_inputs\" | $AWK -f \"$ac_tmp/subs.awk\" \\\n  >$ac_tmp/out || as_fn_error $? \"could not create $ac_file\" \"$LINENO\" 5\n\ntest -z \"$ac_datarootdir_hack$ac_datarootdir_seen\" &&\n  { ac_out=`sed -n '/\\${datarootdir}/p' \"$ac_tmp/out\"`; test -n \"$ac_out\"; } &&\n  { ac_out=`sed -n '/^[\t ]*datarootdir[\t ]*:*=/p' \\\n      \"$ac_tmp/out\"`; test -z \"$ac_out\"; } &&\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \\`datarootdir'\nwhich seems to be undefined.  Please make sure it is defined\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: $ac_file contains a reference to the variable \\`datarootdir'\nwhich seems to be undefined.  Please make sure it is defined\" >&2;}\n\n  rm -f \"$ac_tmp/stdin\"\n  case $ac_file in\n  -) cat \"$ac_tmp/out\" && rm -f \"$ac_tmp/out\";;\n  *) rm -f \"$ac_file\" && mv \"$ac_tmp/out\" \"$ac_file\";;\n  esac \\\n  || as_fn_error $? \"could not create $ac_file\" \"$LINENO\" 5\n ;;\n  :H)\n  #\n  # CONFIG_HEADER\n  #\n  if test x\"$ac_file\" != x-; then\n    {\n      printf \"%s\\n\" \"/* $configure_input  */\" >&1 \\\n      && eval '$AWK -f \"$ac_tmp/defines.awk\"' \"$ac_file_inputs\"\n    } >\"$ac_tmp/config.h\" \\\n      || as_fn_error $? 
\"could not create $ac_file\" \"$LINENO\" 5\n    if diff \"$ac_file\" \"$ac_tmp/config.h\" >/dev/null 2>&1; then\n      { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: $ac_file is unchanged\" >&5\nprintf \"%s\\n\" \"$as_me: $ac_file is unchanged\" >&6;}\n    else\n      rm -f \"$ac_file\"\n      mv \"$ac_tmp/config.h\" \"$ac_file\" \\\n\t|| as_fn_error $? \"could not create $ac_file\" \"$LINENO\" 5\n    fi\n  else\n    printf \"%s\\n\" \"/* $configure_input  */\" >&1 \\\n      && eval '$AWK -f \"$ac_tmp/defines.awk\"' \"$ac_file_inputs\" \\\n      || as_fn_error $? \"could not create -\" \"$LINENO\" 5\n  fi\n ;;\n\n  :C)  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: executing $ac_file commands\" >&5\nprintf \"%s\\n\" \"$as_me: executing $ac_file commands\" >&6;}\n ;;\n  esac\n\n\n  case $ac_file$ac_mode in\n    \"include/jemalloc/internal/public_symbols.txt\":C)\n  f=\"${objroot}include/jemalloc/internal/public_symbols.txt\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  cp /dev/null \"${f}\"\n  for nm in `echo ${mangling_map} |tr ',' ' '` ; do\n    n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n    m=`echo ${nm} |tr ':' ' ' |awk '{print $2}'`\n    echo \"${n}:${m}\" >> \"${f}\"\n        public_syms=`for sym in ${public_syms}; do echo \"${sym}\"; done |grep -v \"^${n}\\$\" |tr '\\n' ' '`\n  done\n  for sym in ${public_syms} ; do\n    n=\"${sym}\"\n    m=\"${JEMALLOC_PREFIX}${sym}\"\n    echo \"${n}:${m}\" >> \"${f}\"\n  done\n ;;\n    \"include/jemalloc/internal/private_symbols.awk\":C)\n  f=\"${objroot}include/jemalloc/internal/private_symbols.awk\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  export_syms=`for sym in ${public_syms}; do echo \"${JEMALLOC_PREFIX}${sym}\"; done; for sym in ${wrap_syms}; do echo \"${sym}\"; done;`\n  \"${srcdir}/include/jemalloc/internal/private_symbols.sh\" \"${SYM_PREFIX}\" ${export_syms} > \"${objroot}include/jemalloc/internal/private_symbols.awk\"\n ;;\n    
\"include/jemalloc/internal/private_symbols_jet.awk\":C)\n  f=\"${objroot}include/jemalloc/internal/private_symbols_jet.awk\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  export_syms=`for sym in ${public_syms}; do echo \"jet_${sym}\"; done; for sym in ${wrap_syms}; do echo \"${sym}\"; done;`\n  \"${srcdir}/include/jemalloc/internal/private_symbols.sh\" \"${SYM_PREFIX}\" ${export_syms} > \"${objroot}include/jemalloc/internal/private_symbols_jet.awk\"\n ;;\n    \"include/jemalloc/internal/public_namespace.h\":C)\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  \"${srcdir}/include/jemalloc/internal/public_namespace.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/internal/public_namespace.h\"\n ;;\n    \"include/jemalloc/internal/public_unnamespace.h\":C)\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  \"${srcdir}/include/jemalloc/internal/public_unnamespace.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/internal/public_unnamespace.h\"\n ;;\n    \"include/jemalloc/jemalloc_protos_jet.h\":C)\n  mkdir -p \"${objroot}include/jemalloc\"\n  cat \"${srcdir}/include/jemalloc/jemalloc_protos.h.in\" | sed -e 's/@je_@/jet_/g' > \"${objroot}include/jemalloc/jemalloc_protos_jet.h\"\n ;;\n    \"include/jemalloc/jemalloc_rename.h\":C)\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_rename.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/jemalloc_rename.h\"\n ;;\n    \"include/jemalloc/jemalloc_mangle.h\":C)\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_mangle.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" je_ > \"${objroot}include/jemalloc/jemalloc_mangle.h\"\n ;;\n    \"include/jemalloc/jemalloc_mangle_jet.h\":C)\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_mangle.sh\" 
\"${objroot}include/jemalloc/internal/public_symbols.txt\" jet_ > \"${objroot}include/jemalloc/jemalloc_mangle_jet.h\"\n ;;\n    \"include/jemalloc/jemalloc.h\":C)\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc.sh\" \"${objroot}\" > \"${objroot}include/jemalloc/jemalloc${install_suffix}.h\"\n ;;\n\n  esac\ndone # for ac_tag\n\n\nas_fn_exit 0\n_ACEOF\nac_clean_files=$ac_clean_files_save\n\ntest $ac_write_fail = 0 ||\n  as_fn_error $? \"write failure creating $CONFIG_STATUS\" \"$LINENO\" 5\n\n\n# configure is writing to config.log, and then calls config.status.\n# config.status does its own redirection, appending to config.log.\n# Unfortunately, on DOS this fails, as config.log is still kept open\n# by configure, so config.status won't be able to write to it; its\n# output is simply discarded.  So we exec the FD to /dev/null,\n# effectively closing config.log, so it can be properly (re)opened and\n# appended to by config.status.  When coming back to configure, we\n# need to make the FD available again.\nif test \"$no_create\" != yes; then\n  ac_cs_success=:\n  ac_config_status_args=\n  test \"$silent\" = yes &&\n    ac_config_status_args=\"$ac_config_status_args --quiet\"\n  exec 5>/dev/null\n  $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false\n  exec 5>>config.log\n  # Use ||, not &&, to avoid exiting from the if with $? 
= 1, which\n  # would make configure fail if this is the last instruction.\n  $ac_cs_success || as_fn_exit 1\nfi\nif test -n \"$ac_unrecognized_opts\" && test \"$enable_option_checking\" != no; then\n  { printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts\" >&5\nprintf \"%s\\n\" \"$as_me: WARNING: unrecognized options: $ac_unrecognized_opts\" >&2;}\nfi\n\n\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: ===============================================================================\" >&5\nprintf \"%s\\n\" \"===============================================================================\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: jemalloc version   : ${jemalloc_version}\" >&5\nprintf \"%s\\n\" \"jemalloc version   : ${jemalloc_version}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: library revision   : ${rev}\" >&5\nprintf \"%s\\n\" \"library revision   : ${rev}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: \" >&5\nprintf \"%s\\n\" \"\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CONFIG             : ${CONFIG}\" >&5\nprintf \"%s\\n\" \"CONFIG             : ${CONFIG}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CC                 : ${CC}\" >&5\nprintf \"%s\\n\" \"CC                 : ${CC}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CONFIGURE_CFLAGS   : ${CONFIGURE_CFLAGS}\" >&5\nprintf \"%s\\n\" \"CONFIGURE_CFLAGS   : ${CONFIGURE_CFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: SPECIFIED_CFLAGS   : ${SPECIFIED_CFLAGS}\" >&5\nprintf \"%s\\n\" \"SPECIFIED_CFLAGS   : ${SPECIFIED_CFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: EXTRA_CFLAGS       : ${EXTRA_CFLAGS}\" >&5\nprintf \"%s\\n\" \"EXTRA_CFLAGS       : ${EXTRA_CFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CPPFLAGS           : 
${CPPFLAGS}\" >&5\nprintf \"%s\\n\" \"CPPFLAGS           : ${CPPFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CXX                : ${CXX}\" >&5\nprintf \"%s\\n\" \"CXX                : ${CXX}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: CONFIGURE_CXXFLAGS : ${CONFIGURE_CXXFLAGS}\" >&5\nprintf \"%s\\n\" \"CONFIGURE_CXXFLAGS : ${CONFIGURE_CXXFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: SPECIFIED_CXXFLAGS : ${SPECIFIED_CXXFLAGS}\" >&5\nprintf \"%s\\n\" \"SPECIFIED_CXXFLAGS : ${SPECIFIED_CXXFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: EXTRA_CXXFLAGS     : ${EXTRA_CXXFLAGS}\" >&5\nprintf \"%s\\n\" \"EXTRA_CXXFLAGS     : ${EXTRA_CXXFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: LDFLAGS            : ${LDFLAGS}\" >&5\nprintf \"%s\\n\" \"LDFLAGS            : ${LDFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: EXTRA_LDFLAGS      : ${EXTRA_LDFLAGS}\" >&5\nprintf \"%s\\n\" \"EXTRA_LDFLAGS      : ${EXTRA_LDFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: DSO_LDFLAGS        : ${DSO_LDFLAGS}\" >&5\nprintf \"%s\\n\" \"DSO_LDFLAGS        : ${DSO_LDFLAGS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: LIBS               : ${LIBS}\" >&5\nprintf \"%s\\n\" \"LIBS               : ${LIBS}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: RPATH_EXTRA        : ${RPATH_EXTRA}\" >&5\nprintf \"%s\\n\" \"RPATH_EXTRA        : ${RPATH_EXTRA}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: \" >&5\nprintf \"%s\\n\" \"\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: XSLTPROC           : ${XSLTPROC}\" >&5\nprintf \"%s\\n\" \"XSLTPROC           : ${XSLTPROC}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: XSLROOT            : ${XSLROOT}\" >&5\nprintf \"%s\\n\" \"XSLROOT            : ${XSLROOT}\" >&6; }\n{ printf 
\"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: \" >&5\nprintf \"%s\\n\" \"\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: PREFIX             : ${PREFIX}\" >&5\nprintf \"%s\\n\" \"PREFIX             : ${PREFIX}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: BINDIR             : ${BINDIR}\" >&5\nprintf \"%s\\n\" \"BINDIR             : ${BINDIR}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: DATADIR            : ${DATADIR}\" >&5\nprintf \"%s\\n\" \"DATADIR            : ${DATADIR}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: INCLUDEDIR         : ${INCLUDEDIR}\" >&5\nprintf \"%s\\n\" \"INCLUDEDIR         : ${INCLUDEDIR}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: LIBDIR             : ${LIBDIR}\" >&5\nprintf \"%s\\n\" \"LIBDIR             : ${LIBDIR}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: MANDIR             : ${MANDIR}\" >&5\nprintf \"%s\\n\" \"MANDIR             : ${MANDIR}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: \" >&5\nprintf \"%s\\n\" \"\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: srcroot            : ${srcroot}\" >&5\nprintf \"%s\\n\" \"srcroot            : ${srcroot}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: abs_srcroot        : ${abs_srcroot}\" >&5\nprintf \"%s\\n\" \"abs_srcroot        : ${abs_srcroot}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: objroot            : ${objroot}\" >&5\nprintf \"%s\\n\" \"objroot            : ${objroot}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: abs_objroot        : ${abs_objroot}\" >&5\nprintf \"%s\\n\" \"abs_objroot        : ${abs_objroot}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: \" >&5\nprintf \"%s\\n\" \"\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: JEMALLOC_PREFIX    : ${JEMALLOC_PREFIX}\" >&5\nprintf \"%s\\n\" 
\"JEMALLOC_PREFIX    : ${JEMALLOC_PREFIX}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: JEMALLOC_PRIVATE_NAMESPACE\" >&5\nprintf \"%s\\n\" \"JEMALLOC_PRIVATE_NAMESPACE\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result:                    : ${JEMALLOC_PRIVATE_NAMESPACE}\" >&5\nprintf \"%s\\n\" \"                   : ${JEMALLOC_PRIVATE_NAMESPACE}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: install_suffix     : ${install_suffix}\" >&5\nprintf \"%s\\n\" \"install_suffix     : ${install_suffix}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: malloc_conf        : ${config_malloc_conf}\" >&5\nprintf \"%s\\n\" \"malloc_conf        : ${config_malloc_conf}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: documentation      : ${enable_doc}\" >&5\nprintf \"%s\\n\" \"documentation      : ${enable_doc}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: shared libs        : ${enable_shared}\" >&5\nprintf \"%s\\n\" \"shared libs        : ${enable_shared}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: static libs        : ${enable_static}\" >&5\nprintf \"%s\\n\" \"static libs        : ${enable_static}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: autogen            : ${enable_autogen}\" >&5\nprintf \"%s\\n\" \"autogen            : ${enable_autogen}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: debug              : ${enable_debug}\" >&5\nprintf \"%s\\n\" \"debug              : ${enable_debug}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: stats              : ${enable_stats}\" >&5\nprintf \"%s\\n\" \"stats              : ${enable_stats}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: experimental_smallocx : ${enable_experimental_smallocx}\" >&5\nprintf \"%s\\n\" \"experimental_smallocx : ${enable_experimental_smallocx}\" >&6; }\n{ printf \"%s\\n\" 
\"$as_me:${as_lineno-$LINENO}: result: prof               : ${enable_prof}\" >&5\nprintf \"%s\\n\" \"prof               : ${enable_prof}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: prof-libunwind     : ${enable_prof_libunwind}\" >&5\nprintf \"%s\\n\" \"prof-libunwind     : ${enable_prof_libunwind}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: prof-libgcc        : ${enable_prof_libgcc}\" >&5\nprintf \"%s\\n\" \"prof-libgcc        : ${enable_prof_libgcc}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: prof-gcc           : ${enable_prof_gcc}\" >&5\nprintf \"%s\\n\" \"prof-gcc           : ${enable_prof_gcc}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: fill               : ${enable_fill}\" >&5\nprintf \"%s\\n\" \"fill               : ${enable_fill}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: utrace             : ${enable_utrace}\" >&5\nprintf \"%s\\n\" \"utrace             : ${enable_utrace}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: xmalloc            : ${enable_xmalloc}\" >&5\nprintf \"%s\\n\" \"xmalloc            : ${enable_xmalloc}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: log                : ${enable_log}\" >&5\nprintf \"%s\\n\" \"log                : ${enable_log}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: lazy_lock          : ${enable_lazy_lock}\" >&5\nprintf \"%s\\n\" \"lazy_lock          : ${enable_lazy_lock}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: cache-oblivious    : ${enable_cache_oblivious}\" >&5\nprintf \"%s\\n\" \"cache-oblivious    : ${enable_cache_oblivious}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: cxx                : ${enable_cxx}\" >&5\nprintf \"%s\\n\" \"cxx                : ${enable_cxx}\" >&6; }\n{ printf \"%s\\n\" \"$as_me:${as_lineno-$LINENO}: result: 
===============================================================================\" >&5\nprintf \"%s\\n\" \"===============================================================================\" >&6; }\n\n"
  },
  {
    "path": "deps/jemalloc/configure.ac",
    "content": "dnl Process this file with autoconf to produce a configure script.\nAC_PREREQ(2.68)\nAC_INIT([Makefile.in])\n\nAC_CONFIG_AUX_DIR([build-aux])\n\ndnl ============================================================================\ndnl Custom macro definitions.\n\ndnl JE_CONCAT_VVV(r, a, b)\ndnl\ndnl Set $r to the concatenation of $a and $b, with a space separating them iff\ndnl both $a and $b are non-empty.\nAC_DEFUN([JE_CONCAT_VVV],\nif test \"x[$]{$2}\" = \"x\" -o \"x[$]{$3}\" = \"x\" ; then\n  $1=\"[$]{$2}[$]{$3}\"\nelse\n  $1=\"[$]{$2} [$]{$3}\"\nfi\n)\n\ndnl JE_APPEND_VS(a, b)\ndnl\ndnl Set $a to the concatenation of $a and b, with a space separating them iff\ndnl both $a and b are non-empty.\nAC_DEFUN([JE_APPEND_VS],\n  T_APPEND_V=$2\n  JE_CONCAT_VVV($1, $1, T_APPEND_V)\n)\n\nCONFIGURE_CFLAGS=\nSPECIFIED_CFLAGS=\"${CFLAGS}\"\ndnl JE_CFLAGS_ADD(cflag)\ndnl\ndnl CFLAGS is the concatenation of CONFIGURE_CFLAGS and SPECIFIED_CFLAGS\ndnl (ignoring EXTRA_CFLAGS, which does not impact configure tests.  This macro\ndnl appends to CONFIGURE_CFLAGS and regenerates CFLAGS.\nAC_DEFUN([JE_CFLAGS_ADD],\n[\nAC_MSG_CHECKING([whether compiler supports $1])\nT_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\nJE_APPEND_VS(CONFIGURE_CFLAGS, $1)\nJE_CONCAT_VVV(CFLAGS, CONFIGURE_CFLAGS, SPECIFIED_CFLAGS)\nAC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n[[\n]], [[\n    return 0;\n]])],\n              [je_cv_cflags_added=$1]\n              AC_MSG_RESULT([yes]),\n              [je_cv_cflags_added=]\n              AC_MSG_RESULT([no])\n              [CONFIGURE_CFLAGS=\"${T_CONFIGURE_CFLAGS}\"]\n)\nJE_CONCAT_VVV(CFLAGS, CONFIGURE_CFLAGS, SPECIFIED_CFLAGS)\n])\n\ndnl JE_CFLAGS_SAVE()\ndnl JE_CFLAGS_RESTORE()\ndnl\ndnl Save/restore CFLAGS.  
Nesting is not supported.\nAC_DEFUN([JE_CFLAGS_SAVE],\nSAVED_CONFIGURE_CFLAGS=\"${CONFIGURE_CFLAGS}\"\n)\nAC_DEFUN([JE_CFLAGS_RESTORE],\nCONFIGURE_CFLAGS=\"${SAVED_CONFIGURE_CFLAGS}\"\nJE_CONCAT_VVV(CFLAGS, CONFIGURE_CFLAGS, SPECIFIED_CFLAGS)\n)\n\nCONFIGURE_CXXFLAGS=\nSPECIFIED_CXXFLAGS=\"${CXXFLAGS}\"\ndnl JE_CXXFLAGS_ADD(cxxflag)\nAC_DEFUN([JE_CXXFLAGS_ADD],\n[\nAC_MSG_CHECKING([whether compiler supports $1])\nT_CONFIGURE_CXXFLAGS=\"${CONFIGURE_CXXFLAGS}\"\nJE_APPEND_VS(CONFIGURE_CXXFLAGS, $1)\nJE_CONCAT_VVV(CXXFLAGS, CONFIGURE_CXXFLAGS, SPECIFIED_CXXFLAGS)\nAC_LANG_PUSH([C++])\nAC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n[[\n]], [[\n    return 0;\n]])],\n              [je_cv_cxxflags_added=$1]\n              AC_MSG_RESULT([yes]),\n              [je_cv_cxxflags_added=]\n              AC_MSG_RESULT([no])\n              [CONFIGURE_CXXFLAGS=\"${T_CONFIGURE_CXXFLAGS}\"]\n)\nAC_LANG_POP([C++])\nJE_CONCAT_VVV(CXXFLAGS, CONFIGURE_CXXFLAGS, SPECIFIED_CXXFLAGS)\n])\n\ndnl JE_COMPILABLE(label, hcode, mcode, rvar)\ndnl\ndnl Use AC_LINK_IFELSE() rather than AC_COMPILE_IFELSE() so that linker errors\ndnl cause failure.\nAC_DEFUN([JE_COMPILABLE],\n[\nAC_CACHE_CHECK([whether $1 is compilable],\n               [$4],\n               [AC_LINK_IFELSE([AC_LANG_PROGRAM([$2],\n                                                [$3])],\n                               [$4=yes],\n                               [$4=no])])\n])\n\ndnl ============================================================================\n\nCONFIG=`echo ${ac_configure_args} | sed -e 's#'\"'\"'\\([^ ]*\\)'\"'\"'#\\1#g'`\nAC_SUBST([CONFIG])\n\ndnl Library revision.\nrev=2\nAC_SUBST([rev])\n\nsrcroot=$srcdir\nif test \"x${srcroot}\" = \"x.\" ; then\n  srcroot=\"\"\nelse\n  srcroot=\"${srcroot}/\"\nfi\nAC_SUBST([srcroot])\nabs_srcroot=\"`cd \\\"${srcdir}\\\"; pwd`/\"\nAC_SUBST([abs_srcroot])\n\nobjroot=\"\"\nAC_SUBST([objroot])\nabs_objroot=\"`pwd`/\"\nAC_SUBST([abs_objroot])\n\ndnl Munge install path variables.\ncase \"$prefix\" 
in\n   *\\ * ) AC_MSG_ERROR([Prefix should not contain spaces]) ;;\n   \"NONE\" ) prefix=\"/usr/local\" ;;\nesac\ncase \"$exec_prefix\" in\n   *\\ * ) AC_MSG_ERROR([Exec prefix should not contain spaces]) ;;\n   \"NONE\" ) exec_prefix=$prefix ;;\nesac\nPREFIX=$prefix\nAC_SUBST([PREFIX])\nBINDIR=`eval echo $bindir`\nBINDIR=`eval echo $BINDIR`\nAC_SUBST([BINDIR])\nINCLUDEDIR=`eval echo $includedir`\nINCLUDEDIR=`eval echo $INCLUDEDIR`\nAC_SUBST([INCLUDEDIR])\nLIBDIR=`eval echo $libdir`\nLIBDIR=`eval echo $LIBDIR`\nAC_SUBST([LIBDIR])\nDATADIR=`eval echo $datadir`\nDATADIR=`eval echo $DATADIR`\nAC_SUBST([DATADIR])\nMANDIR=`eval echo $mandir`\nMANDIR=`eval echo $MANDIR`\nAC_SUBST([MANDIR])\n\ndnl Support for building documentation.\nAC_PATH_PROG([XSLTPROC], [xsltproc], [false], [$PATH])\nif test -d \"/usr/share/xml/docbook/stylesheet/docbook-xsl\" ; then\n  DEFAULT_XSLROOT=\"/usr/share/xml/docbook/stylesheet/docbook-xsl\"\nelif test -d \"/usr/share/sgml/docbook/xsl-stylesheets\" ; then\n  DEFAULT_XSLROOT=\"/usr/share/sgml/docbook/xsl-stylesheets\"\nelse\n  dnl Documentation building will fail if this default gets used.\n  DEFAULT_XSLROOT=\"\"\nfi\nAC_ARG_WITH([xslroot],\n  [AS_HELP_STRING([--with-xslroot=<path>], [XSL stylesheet root path])], [\nif test \"x$with_xslroot\" = \"xno\" ; then\n  XSLROOT=\"${DEFAULT_XSLROOT}\"\nelse\n  XSLROOT=\"${with_xslroot}\"\nfi\n],\n  XSLROOT=\"${DEFAULT_XSLROOT}\"\n)\nif test \"x$XSLTPROC\" = \"xfalse\" ; then\n  XSLROOT=\"\"\nfi\nAC_SUBST([XSLROOT])\n\ndnl If CFLAGS isn't defined, set CFLAGS to something reasonable.  
Otherwise,\ndnl just prevent autoconf from molesting CFLAGS.\nCFLAGS=$CFLAGS\nAC_PROG_CC\n\nif test \"x$GCC\" != \"xyes\" ; then\n  AC_CACHE_CHECK([whether compiler is MSVC],\n                 [je_cv_msvc],\n                 [AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],\n                                                     [\n#ifndef _MSC_VER\n  int fail[-1];\n#endif\n])],\n                               [je_cv_msvc=yes],\n                               [je_cv_msvc=no])])\nfi\n\ndnl check if a cray prgenv wrapper compiler is being used\nje_cv_cray_prgenv_wrapper=\"\"\nif test \"x${PE_ENV}\" != \"x\" ; then\n  case \"${CC}\" in\n    CC|cc)\n\tje_cv_cray_prgenv_wrapper=\"yes\"\n\t;;\n    *)\n       ;;\n  esac\nfi\n\nAC_CACHE_CHECK([whether compiler is cray],\n              [je_cv_cray],\n              [AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],\n                                                  [\n#ifndef _CRAYC\n  int fail[-1];\n#endif\n])],\n                            [je_cv_cray=yes],\n                            [je_cv_cray=no])])\n\nif test \"x${je_cv_cray}\" = \"xyes\" ; then\n  AC_CACHE_CHECK([whether cray compiler version is 8.4],\n                [je_cv_cray_84],\n                [AC_COMPILE_IFELSE([AC_LANG_PROGRAM([],\n                                                      [\n#if !(_RELEASE_MAJOR == 8 && _RELEASE_MINOR == 4)\n  int fail[-1];\n#endif\n])],\n                              [je_cv_cray_84=yes],\n                              [je_cv_cray_84=no])])\nfi\n\nif test \"x$GCC\" = \"xyes\" ; then\n  JE_CFLAGS_ADD([-std=gnu11])\n  if test \"x$je_cv_cflags_added\" = \"x-std=gnu11\" ; then\n    AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT], [ ], [ ])\n  else\n    JE_CFLAGS_ADD([-std=gnu99])\n    if test \"x$je_cv_cflags_added\" = \"x-std=gnu99\" ; then\n      AC_DEFINE_UNQUOTED([JEMALLOC_HAS_RESTRICT], [ ], [ ])\n    fi\n  fi\n  JE_CFLAGS_ADD([-Werror=unknown-warning-option])\n  JE_CFLAGS_ADD([-Wall])\n  JE_CFLAGS_ADD([-Wextra])\n  JE_CFLAGS_ADD([-Wshorten-64-to-32])\n 
 JE_CFLAGS_ADD([-Wsign-compare])\n  JE_CFLAGS_ADD([-Wundef])\n  JE_CFLAGS_ADD([-Wno-format-zero-length])\n  JE_CFLAGS_ADD([-Wpointer-arith])\n  dnl This warning triggers on the use of the universal zero initializer, which\n  dnl is a very handy idiom for things like the tcache static initializer (which\n  dnl has lots of nested structs).  See the discussion at\n  dnl https://gcc.gnu.org/bugzilla/show_bug.cgi?id=53119\n  JE_CFLAGS_ADD([-Wno-missing-braces])\n  dnl This warning triggers on the same idiom.\n  JE_CFLAGS_ADD([-Wno-missing-field-initializers])\n  JE_CFLAGS_ADD([-Wno-missing-attributes])\n  JE_CFLAGS_ADD([-pipe])\n  JE_CFLAGS_ADD([-g3])\nelif test \"x$je_cv_msvc\" = \"xyes\" ; then\n  CC=\"$CC -nologo\"\n  JE_CFLAGS_ADD([-Zi])\n  JE_CFLAGS_ADD([-MT])\n  JE_CFLAGS_ADD([-W3])\n  JE_CFLAGS_ADD([-FS])\n  JE_APPEND_VS(CPPFLAGS, -I${srcdir}/include/msvc_compat)\nfi\nif test \"x$je_cv_cray\" = \"xyes\" ; then\n  dnl cray compiler 8.4 has an inlining bug\n  if test \"x$je_cv_cray_84\" = \"xyes\" ; then\n    JE_CFLAGS_ADD([-hipa2])\n    JE_CFLAGS_ADD([-hnognu])\n  fi\n  dnl ignore unreachable code warning\n  JE_CFLAGS_ADD([-hnomessage=128])\n  dnl ignore redefinition of \"malloc\", \"free\", etc. warning\n  JE_CFLAGS_ADD([-hnomessage=1357])\nfi\nAC_SUBST([CONFIGURE_CFLAGS])\nAC_SUBST([SPECIFIED_CFLAGS])\nAC_SUBST([EXTRA_CFLAGS])\nAC_PROG_CPP\n\nAC_ARG_ENABLE([cxx],\n  [AS_HELP_STRING([--disable-cxx], [Disable C++ integration])],\nif test \"x$enable_cxx\" = \"xno\" ; then\n  enable_cxx=\"0\"\nelse\n  enable_cxx=\"1\"\nfi\n,\nenable_cxx=\"1\"\n)\nif test \"x$enable_cxx\" = \"x1\" ; then\n  dnl Require at least C++14, which is the first version to support sized\n  dnl deallocation.  
C++ support is not compiled otherwise.\n  m4_include([m4/ax_cxx_compile_stdcxx.m4])\n  AX_CXX_COMPILE_STDCXX([17], [noext], [optional])\n  if test \"x${HAVE_CXX17}\" != \"x1\"; then\n    AX_CXX_COMPILE_STDCXX([14], [noext], [optional])\n  fi\n  if test \"x${HAVE_CXX14}\" = \"x1\" -o \"x${HAVE_CXX17}\" = \"x1\"; then\n    JE_CXXFLAGS_ADD([-Wall])\n    JE_CXXFLAGS_ADD([-Wextra])\n    JE_CXXFLAGS_ADD([-g3])\n\n    SAVED_LIBS=\"${LIBS}\"\n    JE_APPEND_VS(LIBS, -lstdc++)\n    JE_COMPILABLE([libstdc++ linkage], [\n#include <stdlib.h>\n], [[\n\tint *arr = (int *)malloc(sizeof(int) * 42);\n\tif (arr == NULL)\n\t\treturn 1;\n]], [je_cv_libstdcxx])\n    if test \"x${je_cv_libstdcxx}\" = \"xno\" ; then\n      LIBS=\"${SAVED_LIBS}\"\n    fi\n  else\n    enable_cxx=\"0\"\n  fi\nfi\nif test \"x$enable_cxx\" = \"x1\"; then\n  AC_DEFINE([JEMALLOC_ENABLE_CXX], [ ], [ ])\nfi\nAC_SUBST([enable_cxx])\nAC_SUBST([CONFIGURE_CXXFLAGS])\nAC_SUBST([SPECIFIED_CXXFLAGS])\nAC_SUBST([EXTRA_CXXFLAGS])\n\nAC_C_BIGENDIAN([ac_cv_big_endian=1], [ac_cv_big_endian=0])\nif test \"x${ac_cv_big_endian}\" = \"x1\" ; then\n  AC_DEFINE_UNQUOTED([JEMALLOC_BIG_ENDIAN], [ ], [ ])\nfi\n\nif test \"x${je_cv_msvc}\" = \"xyes\" -a \"x${ac_cv_header_inttypes_h}\" = \"xno\"; then\n  JE_APPEND_VS(CPPFLAGS, -I${srcdir}/include/msvc_compat/C99)\nfi\n\nif test \"x${je_cv_msvc}\" = \"xyes\" ; then\n  LG_SIZEOF_PTR=LG_SIZEOF_PTR_WIN\n  AC_MSG_RESULT([Using a predefined value for sizeof(void *): 4 for 32-bit, 8 for 64-bit])\nelse\n  AC_CHECK_SIZEOF([void *])\n  if test \"x${ac_cv_sizeof_void_p}\" = \"x8\" ; then\n    LG_SIZEOF_PTR=3\n  elif test \"x${ac_cv_sizeof_void_p}\" = \"x4\" ; then\n    LG_SIZEOF_PTR=2\n  else\n    AC_MSG_ERROR([Unsupported pointer size: ${ac_cv_sizeof_void_p}])\n  fi\nfi\nAC_DEFINE_UNQUOTED([LG_SIZEOF_PTR], [$LG_SIZEOF_PTR], [ ])\n\nAC_CHECK_SIZEOF([int])\nif test \"x${ac_cv_sizeof_int}\" = \"x8\" ; then\n  LG_SIZEOF_INT=3\nelif test \"x${ac_cv_sizeof_int}\" = \"x4\" ; then\n  
LG_SIZEOF_INT=2\nelse\n  AC_MSG_ERROR([Unsupported int size: ${ac_cv_sizeof_int}])\nfi\nAC_DEFINE_UNQUOTED([LG_SIZEOF_INT], [$LG_SIZEOF_INT], [ ])\n\nAC_CHECK_SIZEOF([long])\nif test \"x${ac_cv_sizeof_long}\" = \"x8\" ; then\n  LG_SIZEOF_LONG=3\nelif test \"x${ac_cv_sizeof_long}\" = \"x4\" ; then\n  LG_SIZEOF_LONG=2\nelse\n  AC_MSG_ERROR([Unsupported long size: ${ac_cv_sizeof_long}])\nfi\nAC_DEFINE_UNQUOTED([LG_SIZEOF_LONG], [$LG_SIZEOF_LONG], [ ])\n\nAC_CHECK_SIZEOF([long long])\nif test \"x${ac_cv_sizeof_long_long}\" = \"x8\" ; then\n  LG_SIZEOF_LONG_LONG=3\nelif test \"x${ac_cv_sizeof_long_long}\" = \"x4\" ; then\n  LG_SIZEOF_LONG_LONG=2\nelse\n  AC_MSG_ERROR([Unsupported long long size: ${ac_cv_sizeof_long_long}])\nfi\nAC_DEFINE_UNQUOTED([LG_SIZEOF_LONG_LONG], [$LG_SIZEOF_LONG_LONG], [ ])\n\nAC_CHECK_SIZEOF([intmax_t])\nif test \"x${ac_cv_sizeof_intmax_t}\" = \"x16\" ; then\n  LG_SIZEOF_INTMAX_T=4\nelif test \"x${ac_cv_sizeof_intmax_t}\" = \"x8\" ; then\n  LG_SIZEOF_INTMAX_T=3\nelif test \"x${ac_cv_sizeof_intmax_t}\" = \"x4\" ; then\n  LG_SIZEOF_INTMAX_T=2\nelse\n  AC_MSG_ERROR([Unsupported intmax_t size: ${ac_cv_sizeof_intmax_t}])\nfi\nAC_DEFINE_UNQUOTED([LG_SIZEOF_INTMAX_T], [$LG_SIZEOF_INTMAX_T], [ ])\n\nAC_CANONICAL_HOST\ndnl CPU-specific settings.\nCPU_SPINWAIT=\"\"\ncase \"${host_cpu}\" in\n  i686|x86_64)\n\tHAVE_CPU_SPINWAIT=1\n\tif test \"x${je_cv_msvc}\" = \"xyes\" ; then\n\t    AC_CACHE_VAL([je_cv_pause_msvc],\n\t      [JE_COMPILABLE([pause instruction MSVC], [],\n\t\t\t\t\t[[_mm_pause(); return 0;]],\n\t\t\t\t\t[je_cv_pause_msvc])])\n\t    if test \"x${je_cv_pause_msvc}\" = \"xyes\" ; then\n\t\tCPU_SPINWAIT='_mm_pause()'\n\t    fi\n\telse\n\t    AC_CACHE_VAL([je_cv_pause],\n\t      [JE_COMPILABLE([pause instruction], [],\n\t\t\t\t\t[[__asm__ volatile(\"pause\"); return 0;]],\n\t\t\t\t\t[je_cv_pause])])\n\t    if test \"x${je_cv_pause}\" = \"xyes\" ; then\n\t\tCPU_SPINWAIT='__asm__ volatile(\"pause\")'\n\t    fi\n\tfi\n\t;;\n  
aarch64|arm*)\n\tHAVE_CPU_SPINWAIT=1\n\tdnl isb is a better equivalent to the pause instruction on x86.\n\tAC_CACHE_VAL([je_cv_isb],\n\t  [JE_COMPILABLE([isb instruction], [],\n\t\t\t[[__asm__ volatile(\"isb\"); return 0;]],\n\t\t\t[je_cv_isb])])\n\tif test \"x${je_cv_isb}\" = \"xyes\" ; then\n\t    CPU_SPINWAIT='__asm__ volatile(\"isb\")'\n\tfi\n\t;;\n  *)\n\tHAVE_CPU_SPINWAIT=0\n\t;;\nesac\nAC_DEFINE_UNQUOTED([HAVE_CPU_SPINWAIT], [$HAVE_CPU_SPINWAIT], [ ])\nAC_DEFINE_UNQUOTED([CPU_SPINWAIT], [$CPU_SPINWAIT], [ ])\n\nAC_ARG_WITH([lg_vaddr],\n  [AS_HELP_STRING([--with-lg-vaddr=<lg-vaddr>], [Number of significant virtual address bits])],\n  [LG_VADDR=\"$with_lg_vaddr\"], [LG_VADDR=\"detect\"])\n\ncase \"${host_cpu}\" in\n  aarch64)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      AC_MSG_CHECKING([number of significant virtual address bits])\n      if test \"x${LG_SIZEOF_PTR}\" = \"x2\" ; then\n        #aarch64 ILP32\n        LG_VADDR=32\n      else\n        #aarch64 LP64\n        LG_VADDR=48\n      fi\n      AC_MSG_RESULT([$LG_VADDR])\n    fi\n    ;;\n  x86_64)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      AC_CACHE_CHECK([number of significant virtual address bits],\n                     [je_cv_lg_vaddr],\n                     AC_RUN_IFELSE([AC_LANG_PROGRAM(\n[[\n#include <stdio.h>\n#ifdef _WIN32\n#include <limits.h>\n#include <intrin.h>\ntypedef unsigned __int32 uint32_t;\n#else\n#include <stdint.h>\n#endif\n]], [[\n\tuint32_t r[[4]];\n\tuint32_t eax_in = 0x80000008U;\n#ifdef _WIN32\n\t__cpuid((int *)r, (int)eax_in);\n#else\n\tasm volatile (\"cpuid\"\n\t    : \"=a\" (r[[0]]), \"=b\" (r[[1]]), \"=c\" (r[[2]]), \"=d\" (r[[3]])\n\t    : \"a\" (eax_in), \"c\" (0)\n\t);\n#endif\n\tuint32_t eax_out = r[[0]];\n\tuint32_t vaddr = ((eax_out & 0x0000ff00U) >> 8);\n\tFILE *f = fopen(\"conftest.out\", \"w\");\n\tif (f == NULL) {\n\t\treturn 1;\n\t}\n\tif (vaddr > (sizeof(void *) << 3)) {\n\t\tvaddr = sizeof(void *) << 3;\n\t}\n\tfprintf(f, \"%u\", 
vaddr);\n\tfclose(f);\n\treturn 0;\n]])],\n                   [je_cv_lg_vaddr=`cat conftest.out`],\n                   [je_cv_lg_vaddr=error],\n                   [je_cv_lg_vaddr=57]))\n      if test \"x${je_cv_lg_vaddr}\" != \"x\" ; then\n        LG_VADDR=\"${je_cv_lg_vaddr}\"\n      fi\n      if test \"x${LG_VADDR}\" != \"xerror\" ; then\n        AC_DEFINE_UNQUOTED([LG_VADDR], [$LG_VADDR], [ ])\n      else\n        AC_MSG_ERROR([cannot determine number of significant virtual address bits])\n      fi\n    fi\n    ;;\n  *)\n    if test \"x$LG_VADDR\" = \"xdetect\"; then\n      AC_MSG_CHECKING([number of significant virtual address bits])\n      if test \"x${LG_SIZEOF_PTR}\" = \"x3\" ; then\n        LG_VADDR=64\n      elif test \"x${LG_SIZEOF_PTR}\" = \"x2\" ; then\n        LG_VADDR=32\n      elif test \"x${LG_SIZEOF_PTR}\" = \"xLG_SIZEOF_PTR_WIN\" ; then\n        LG_VADDR=\"(1U << (LG_SIZEOF_PTR_WIN+3))\"\n      else\n        AC_MSG_ERROR([Unsupported lg(pointer size): ${LG_SIZEOF_PTR}])\n      fi\n      AC_MSG_RESULT([$LG_VADDR])\n    fi\n    ;;\nesac\nAC_DEFINE_UNQUOTED([LG_VADDR], [$LG_VADDR], [ ])\n\nLD_PRELOAD_VAR=\"LD_PRELOAD\"\nso=\"so\"\nimportlib=\"${so}\"\no=\"$ac_objext\"\na=\"a\"\nexe=\"$ac_exeext\"\nlibprefix=\"lib\"\nlink_whole_archive=\"0\"\nDSO_LDFLAGS='-shared -Wl,-soname,$(@F)'\nRPATH='-Wl,-rpath,$(1)'\nSOREV=\"${so}.${rev}\"\nPIC_CFLAGS='-fPIC -DPIC'\nCTARGET='-o $@'\nLDTARGET='-o $@'\nTEST_LD_MODE=\nEXTRA_LDFLAGS=\nARFLAGS='crs'\nAROUT=' $@'\nCC_MM=1\n\nif test \"x$je_cv_cray_prgenv_wrapper\" = \"xyes\" ; then\n  TEST_LD_MODE='-dynamic'\nfi\n\nif test \"x${je_cv_cray}\" = \"xyes\" ; then\n  CC_MM=\nfi\n\nAN_MAKEVAR([AR], [AC_PROG_AR])\nAN_PROGRAM([ar], [AC_PROG_AR])\nAC_DEFUN([AC_PROG_AR], [AC_CHECK_TOOL(AR, ar, :)])\nAC_PROG_AR\n\nAN_MAKEVAR([NM], [AC_PROG_NM])\nAN_PROGRAM([nm], [AC_PROG_NM])\nAC_DEFUN([AC_PROG_NM], [AC_CHECK_TOOL(NM, nm, :)])\nAC_PROG_NM\n\nAC_PROG_AWK\n\ndnl 
============================================================================\ndnl jemalloc version.\ndnl\n\nAC_ARG_WITH([version],\n  [AS_HELP_STRING([--with-version=<major>.<minor>.<bugfix>-<nrev>-g<gid>],\n   [Version string])],\n  [\n    echo \"${with_version}\" | grep ['^[0-9]\\+\\.[0-9]\\+\\.[0-9]\\+-[0-9]\\+-g[0-9a-f]\\+$'] 2>&1 1>/dev/null\n    if test $? -eq 0 ; then\n      echo \"$with_version\" > \"${objroot}VERSION\"\n    else\n      echo \"${with_version}\" | grep ['^VERSION$'] 2>&1 1>/dev/null\n      if test $? -ne 0 ; then\n        AC_MSG_ERROR([${with_version} does not match <major>.<minor>.<bugfix>-<nrev>-g<gid> or VERSION])\n      fi\n    fi\n  ], [\n    dnl Set VERSION if source directory is inside a git repository.\n    if test \"x`test ! \\\"${srcroot}\\\" && cd \\\"${srcroot}\\\"; git rev-parse --is-inside-work-tree 2>/dev/null`\" = \"xtrue\" ; then\n      dnl Pattern globs aren't powerful enough to match both single- and\n      dnl double-digit version numbers, so iterate over patterns to support up\n      dnl to version 99.99.99 without any accidental matches.\n      for pattern in ['[0-9].[0-9].[0-9]' '[0-9].[0-9].[0-9][0-9]' \\\n                     '[0-9].[0-9][0-9].[0-9]' '[0-9].[0-9][0-9].[0-9][0-9]' \\\n                     '[0-9][0-9].[0-9].[0-9]' '[0-9][0-9].[0-9].[0-9][0-9]' \\\n                     '[0-9][0-9].[0-9][0-9].[0-9]' \\\n                     '[0-9][0-9].[0-9][0-9].[0-9][0-9]']; do\n        (test ! \"${srcroot}\" && cd \"${srcroot}\"; git describe --long --abbrev=40 --match=\"${pattern}\") > \"${objroot}VERSION.tmp\" 2>/dev/null\n        if test $? -eq 0 ; then\n          mv \"${objroot}VERSION.tmp\" \"${objroot}VERSION\"\n          break\n        fi\n      done\n    fi\n    rm -f \"${objroot}VERSION.tmp\"\n  ])\n\nif test ! -e \"${objroot}VERSION\" ; then\n  if test ! 
-e \"${srcroot}VERSION\" ; then\n    AC_MSG_RESULT(\n      [Missing VERSION file, and unable to generate it; creating bogus VERSION])\n    echo \"0.0.0-0-g000000missing_version_try_git_fetch_tags\" > \"${objroot}VERSION\"\n  else\n    cp ${srcroot}VERSION ${objroot}VERSION\n  fi\nfi\njemalloc_version=`cat \"${objroot}VERSION\"`\njemalloc_version_major=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print [$]1}'`\njemalloc_version_minor=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print [$]2}'`\njemalloc_version_bugfix=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print [$]3}'`\njemalloc_version_nrev=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print [$]4}'`\njemalloc_version_gid=`echo ${jemalloc_version} | tr \".g-\" \" \" | awk '{print [$]5}'`\nAC_SUBST([jemalloc_version])\nAC_SUBST([jemalloc_version_major])\nAC_SUBST([jemalloc_version_minor])\nAC_SUBST([jemalloc_version_bugfix])\nAC_SUBST([jemalloc_version_nrev])\nAC_SUBST([jemalloc_version_gid])\n\ndnl Platform-specific settings.  
abi and RPATH can probably be determined\ndnl programmatically, but doing so is error-prone, which makes it generally\ndnl not worth the trouble.\ndnl\ndnl Define cpp macros in CPPFLAGS, rather than doing AC_DEFINE(macro), since the\ndnl definitions need to be seen before any headers are included, which is a pain\ndnl to make happen otherwise.\ndefault_retain=\"0\"\nzero_realloc_default_free=\"0\"\nmaps_coalesce=\"1\"\nDUMP_SYMS=\"${NM} -a\"\nSYM_PREFIX=\"\"\ncase \"${host}\" in\n  *-*-darwin* | *-*-ios*)\n\tabi=\"macho\"\n\tRPATH=\"\"\n\tLD_PRELOAD_VAR=\"DYLD_INSERT_LIBRARIES\"\n\tso=\"dylib\"\n\timportlib=\"${so}\"\n\tforce_tls=\"0\"\n\tDSO_LDFLAGS='-shared -Wl,-install_name,$(LIBDIR)/$(@F)'\n\tSOREV=\"${rev}.${so}\"\n\tsbrk_deprecated=\"1\"\n\tSYM_PREFIX=\"_\"\n\t;;\n  *-*-freebsd*)\n\tJE_APPEND_VS(CPPFLAGS, -D_BSD_SOURCE)\n\tabi=\"elf\"\n\tAC_DEFINE([JEMALLOC_SYSCTL_VM_OVERCOMMIT], [ ], [ ])\n\tforce_lazy_lock=\"1\"\n\t;;\n  *-*-dragonfly*)\n\tabi=\"elf\"\n\t;;\n  *-*-openbsd*)\n\tabi=\"elf\"\n\tforce_tls=\"0\"\n\t;;\n  *-*-bitrig*)\n\tabi=\"elf\"\n\t;;\n  *-*-linux-android*)\n\tdnl syscall(2) and secure_getenv(3) are exposed by _GNU_SOURCE.\n\tJE_APPEND_VS(CPPFLAGS, -D_GNU_SOURCE)\n\tabi=\"elf\"\n\tglibc=\"0\"\n\tAC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_HAS_ALLOCA_H], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_THREADED_INIT], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_C11_ATOMICS], [ ], [ ])\n\tforce_tls=\"0\"\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-linux*)\n\tdnl syscall(2) and secure_getenv(3) are exposed by _GNU_SOURCE.\n\tJE_APPEND_VS(CPPFLAGS, -D_GNU_SOURCE)\n\tabi=\"elf\"\n\tglibc=\"1\"\n\tAC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_HAS_ALLOCA_H], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY], [ ], [ 
])\n\tAC_DEFINE([JEMALLOC_THREADED_INIT], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ], [ ])\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-kfreebsd*)\n\tdnl syscall(2) and secure_getenv(3) are exposed by _GNU_SOURCE.\n\tJE_APPEND_VS(CPPFLAGS, -D_GNU_SOURCE)\n\tabi=\"elf\"\n\tAC_DEFINE([JEMALLOC_HAS_ALLOCA_H], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_SYSCTL_VM_OVERCOMMIT], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_THREADED_INIT], [ ], [ ])\n\tAC_DEFINE([JEMALLOC_USE_CXX_THROW], [ ], [ ])\n\t;;\n  *-*-netbsd*)\n\tAC_MSG_CHECKING([ABI])\n        AC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n[[#ifdef __ELF__\n/* ELF */\n#else\n#error aout\n#endif\n]])],\n                          [abi=\"elf\"],\n                          [abi=\"aout\"])\n\tAC_MSG_RESULT([$abi])\n\t;;\n  *-*-solaris2*)\n\tabi=\"elf\"\n\tRPATH='-Wl,-R,$(1)'\n\tdnl Solaris needs this for sigwait().\n\tJE_APPEND_VS(CPPFLAGS, -D_POSIX_PTHREAD_SEMANTICS)\n\tJE_APPEND_VS(LIBS, -lposix4 -lsocket -lnsl)\n\t;;\n  *-ibm-aix*)\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  dnl 64bit AIX\n\t  LD_PRELOAD_VAR=\"LDR_PRELOAD64\"\n\telse\n\t  dnl 32bit AIX\n\t  LD_PRELOAD_VAR=\"LDR_PRELOAD\"\n\tfi\n\tabi=\"xcoff\"\n\t;;\n  *-*-mingw* | *-*-cygwin*)\n\tabi=\"pecoff\"\n\tforce_tls=\"0\"\n\tmaps_coalesce=\"0\"\n\tRPATH=\"\"\n\tso=\"dll\"\n\tif test \"x$je_cv_msvc\" = \"xyes\" ; then\n\t  importlib=\"lib\"\n\t  DSO_LDFLAGS=\"-LD\"\n\t  EXTRA_LDFLAGS=\"-link -DEBUG\"\n\t  CTARGET='-Fo$@'\n\t  LDTARGET='-Fe$@'\n\t  AR='lib'\n\t  ARFLAGS='-nologo -out:'\n\t  AROUT='$@'\n\t  CC_MM=\n        else\n\t  importlib=\"${so}\"\n\t  DSO_LDFLAGS=\"-shared\"\n\t  link_whole_archive=\"1\"\n\tfi\n\tcase \"${host}\" in\n\t  *-*-cygwin*)\n\t    DUMP_SYMS=\"dumpbin /SYMBOLS\"\n\t    ;;\n\t  *)\n\t    ;;\n\tesac\n\ta=\"lib\"\n\tlibprefix=\"\"\n\tSOREV=\"${so}\"\n\tPIC_CFLAGS=\"\"\n\tif test \"${LG_SIZEOF_PTR}\" = \"3\"; then\n\t  
default_retain=\"1\"\n\tfi\n\tzero_realloc_default_free=\"1\"\n\t;;\n  *-*-nto-qnx)\n\tabi=\"elf\"\n  force_tls=\"0\"\n  AC_DEFINE([JEMALLOC_HAS_ALLOCA_H], [ ], [ ])\n\t;;\n  *)\n\tAC_MSG_RESULT([Unsupported operating system: ${host}])\n\tabi=\"elf\"\n\t;;\nesac\n\nJEMALLOC_USABLE_SIZE_CONST=const\nAC_CHECK_HEADERS([malloc.h], [\n  AC_MSG_CHECKING([whether malloc_usable_size definition can use const argument])\n  AC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n    [#include <malloc.h>\n     #include <stddef.h>\n    size_t malloc_usable_size(const void *ptr);\n    ],\n    [])],[\n                AC_MSG_RESULT([yes])\n         ],[\n                JEMALLOC_USABLE_SIZE_CONST=\n                AC_MSG_RESULT([no])\n         ])\n])\nAC_DEFINE_UNQUOTED([JEMALLOC_USABLE_SIZE_CONST], [$JEMALLOC_USABLE_SIZE_CONST], [ ])\nAC_SUBST([abi])\nAC_SUBST([RPATH])\nAC_SUBST([LD_PRELOAD_VAR])\nAC_SUBST([so])\nAC_SUBST([importlib])\nAC_SUBST([o])\nAC_SUBST([a])\nAC_SUBST([exe])\nAC_SUBST([libprefix])\nAC_SUBST([link_whole_archive])\nAC_SUBST([DSO_LDFLAGS])\nAC_SUBST([EXTRA_LDFLAGS])\nAC_SUBST([SOREV])\nAC_SUBST([PIC_CFLAGS])\nAC_SUBST([CTARGET])\nAC_SUBST([LDTARGET])\nAC_SUBST([TEST_LD_MODE])\nAC_SUBST([MKLIB])\nAC_SUBST([ARFLAGS])\nAC_SUBST([AROUT])\nAC_SUBST([DUMP_SYMS])\nAC_SUBST([CC_MM])\n\ndnl Determine whether libm must be linked to use e.g. 
log(3).\nAC_SEARCH_LIBS([log], [m], , [AC_MSG_ERROR([Missing math functions])])\nif test \"x$ac_cv_search_log\" != \"xnone required\" ; then\n  LM=\"$ac_cv_search_log\"\nelse\n  LM=\nfi\nAC_SUBST(LM)\n\nJE_COMPILABLE([__attribute__ syntax],\n              [static __attribute__((unused)) void foo(void){}],\n              [],\n              [je_cv_attribute])\nif test \"x${je_cv_attribute}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR], [ ], [ ])\n  if test \"x${GCC}\" = \"xyes\" -a \"x${abi}\" = \"xelf\"; then\n    JE_CFLAGS_ADD([-fvisibility=hidden])\n    JE_CXXFLAGS_ADD([-fvisibility=hidden])\n  fi\nfi\ndnl Check for tls_model attribute support (clang 3.0 still lacks support).\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([tls_model attribute], [],\n              [static __thread int\n               __attribute__((tls_model(\"initial-exec\"), unused)) foo;\n               foo = 0;],\n              [je_cv_tls_model])\nJE_CFLAGS_RESTORE()\ndnl (Setting of JEMALLOC_TLS_MODEL is done later, after we've checked for\ndnl --disable-initial-exec-tls)\n\ndnl Check for alloc_size attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([alloc_size attribute], [#include <stdlib.h>],\n              [void *foo(size_t size) __attribute__((alloc_size(1)));],\n              [je_cv_alloc_size])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_alloc_size}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_ALLOC_SIZE], [ ], [ ])\nfi\ndnl Check for format(gnu_printf, ...) attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([format(gnu_printf, ...) attribute], [#include <stdlib.h>],\n              [void *foo(const char *format, ...) 
__attribute__((format(gnu_printf, 1, 2)));],\n              [je_cv_format_gnu_printf])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_format_gnu_printf}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF], [ ], [ ])\nfi\ndnl Check for format(printf, ...) attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([format(printf, ...) attribute], [#include <stdlib.h>],\n              [void *foo(const char *format, ...) __attribute__((format(printf, 1, 2)));],\n              [je_cv_format_printf])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_format_printf}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_PRINTF], [ ], [ ])\nfi\n\ndnl Check for format_arg(...) attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([format_arg(...) attribute], [#include <stdlib.h>],\n              [const char * __attribute__((__format_arg__(1))) foo(const char *format);],\n              [je_cv_format_arg])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_format_arg}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_FORMAT_ARG], [ ], [ ])\nfi\n\ndnl Check for fallthrough attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Wimplicit-fallthrough])\nJE_COMPILABLE([fallthrough attribute],\n              [#if !__has_attribute(fallthrough)\n               #error \"foo\"\n               #endif],\n              [int x = 0;\n               switch (x) {\n               case 0: __attribute__((__fallthrough__));\n               case 1: return 1;\n               }],\n              [je_cv_fallthrough])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_fallthrough}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_FALLTHROUGH], [ ], [ ])\n  JE_CFLAGS_ADD([-Wimplicit-fallthrough])\n  JE_CXXFLAGS_ADD([-Wimplicit-fallthrough])\nfi\n\ndnl Check for cold attribute support.\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([cold 
attribute], [],\n              [__attribute__((__cold__)) void foo();],\n              [je_cv_cold])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_cold}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ATTR_COLD], [ ], [ ])\nfi\n\ndnl Check for VM_MAKE_TAG for mmap support.\nJE_COMPILABLE([vm_make_tag],\n\t      [#include <sys/mman.h>\n\t       #include <mach/vm_statistics.h>],\n\t      [void *p;\n\t       p = mmap(0, 16, PROT_READ, MAP_ANON|MAP_PRIVATE, VM_MAKE_TAG(1), 0);\n\t       munmap(p, 16);],\n\t      [je_cv_vm_make_tag])\nif test \"x${je_cv_vm_make_tag}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_VM_MAKE_TAG], [ ], [ ])\nfi\n\ndnl Support optional additions to rpath.\nAC_ARG_WITH([rpath],\n  [AS_HELP_STRING([--with-rpath=<rpath>], [Colon-separated rpath (ELF systems only)])],\nif test \"x$with_rpath\" = \"xno\" ; then\n  RPATH_EXTRA=\nelse\n  RPATH_EXTRA=\"`echo $with_rpath | tr \\\":\\\" \\\" \\\"`\"\nfi,\n  RPATH_EXTRA=\n)\nAC_SUBST([RPATH_EXTRA])\n\ndnl Disable rules that do automatic regeneration of configure output by default.\nAC_ARG_ENABLE([autogen],\n  [AS_HELP_STRING([--enable-autogen], [Automatically regenerate configure output])],\nif test \"x$enable_autogen\" = \"xno\" ; then\n  enable_autogen=\"0\"\nelse\n  enable_autogen=\"1\"\nfi\n,\nenable_autogen=\"0\"\n)\nAC_SUBST([enable_autogen])\n\nAC_PROG_INSTALL\nAC_PROG_RANLIB\nAC_PATH_PROG([LD], [ld], [false], [$PATH])\nAC_PATH_PROG([AUTOCONF], [autoconf], [false], [$PATH])\n\ndnl Enable documentation\nAC_ARG_ENABLE([doc],\n\t      [AS_HELP_STRING([--enable-doc], [Build documentation])],\nif test \"x$enable_doc\" = \"xno\" ; then\n  enable_doc=\"0\"\nelse\n  enable_doc=\"1\"\nfi\n,\nenable_doc=\"1\"\n)\nAC_SUBST([enable_doc])\n\ndnl Enable shared libs\nAC_ARG_ENABLE([shared],\n  [AS_HELP_STRING([--enable-shared], [Build shared libraries])],\nif test \"x$enable_shared\" = \"xno\" ; then\n  enable_shared=\"0\"\nelse\n  enable_shared=\"1\"\nfi\n,\nenable_shared=\"1\"\n)\nAC_SUBST([enable_shared])\n\ndnl 
Enable static libs\nAC_ARG_ENABLE([static],\n  [AS_HELP_STRING([--enable-static], [Build static libraries])],\nif test \"x$enable_static\" = \"xno\" ; then\n  enable_static=\"0\"\nelse\n  enable_static=\"1\"\nfi\n,\nenable_static=\"1\"\n)\nAC_SUBST([enable_static])\n\nif test \"$enable_shared$enable_static\" = \"00\" ; then\n  AC_MSG_ERROR([Please enable at least one of shared or static builds])\nfi\n\ndnl Perform no name mangling by default.\nAC_ARG_WITH([mangling],\n  [AS_HELP_STRING([--with-mangling=<map>], [Mangle symbols in <map>])],\n  [mangling_map=\"$with_mangling\"], [mangling_map=\"\"])\n\ndnl Do not prefix public APIs by default.\nAC_ARG_WITH([jemalloc_prefix],\n  [AS_HELP_STRING([--with-jemalloc-prefix=<prefix>], [Prefix to prepend to all public APIs])],\n  [JEMALLOC_PREFIX=\"$with_jemalloc_prefix\"],\n  [if test \"x$abi\" != \"xmacho\" -a \"x$abi\" != \"xpecoff\"; then\n  JEMALLOC_PREFIX=\"\"\nelse\n  JEMALLOC_PREFIX=\"je_\"\nfi]\n)\nif test \"x$JEMALLOC_PREFIX\" = \"x\" ; then\n  AC_DEFINE([JEMALLOC_IS_MALLOC], [ ], [ ])\nelse\n  JEMALLOC_CPREFIX=`echo ${JEMALLOC_PREFIX} | tr \"a-z\" \"A-Z\"`\n  AC_DEFINE_UNQUOTED([JEMALLOC_PREFIX], [\"$JEMALLOC_PREFIX\"], [ ])\n  AC_DEFINE_UNQUOTED([JEMALLOC_CPREFIX], [\"$JEMALLOC_CPREFIX\"], [ ])\nfi\nAC_SUBST([JEMALLOC_PREFIX])\nAC_SUBST([JEMALLOC_CPREFIX])\n\nAC_ARG_WITH([export],\n  [AS_HELP_STRING([--without-export], [Disable exporting jemalloc public APIs])],\n  [if test \"x$with_export\" = \"xno\"; then\n  AC_DEFINE([JEMALLOC_EXPORT],[], [ ])\nfi]\n)\n\npublic_syms=\"aligned_alloc calloc dallocx free mallctl mallctlbymib mallctlnametomib malloc malloc_conf malloc_conf_2_conf_harder malloc_message malloc_stats_print malloc_usable_size mallocx smallocx_${jemalloc_version_gid} nallocx posix_memalign rallocx realloc sallocx sdallocx xallocx\"\ndnl Check for additional platform-specific public API functions.\nAC_CHECK_FUNC([memalign],\n\t      [AC_DEFINE([JEMALLOC_OVERRIDE_MEMALIGN], [ ], [ ])\n\t       
public_syms=\"${public_syms} memalign\"])\nAC_CHECK_FUNC([valloc],\n\t      [AC_DEFINE([JEMALLOC_OVERRIDE_VALLOC], [ ], [ ])\n\t       public_syms=\"${public_syms} valloc\"])\nAC_CHECK_FUNC([malloc_size],\n\t      [AC_DEFINE([JEMALLOC_HAVE_MALLOC_SIZE], [ ], [ ])\n\t       public_syms=\"${public_syms} malloc_size\"])\n\ndnl Check for allocator-related functions that should be wrapped.\nwrap_syms=\nif test \"x${JEMALLOC_PREFIX}\" = \"x\" ; then\n  AC_CHECK_FUNC([__libc_calloc],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_CALLOC], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_calloc\"])\n  AC_CHECK_FUNC([__libc_free],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_FREE], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_free\"])\n  AC_CHECK_FUNC([__libc_malloc],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_MALLOC], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_malloc\"])\n  AC_CHECK_FUNC([__libc_memalign],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_MEMALIGN], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_memalign\"])\n  AC_CHECK_FUNC([__libc_realloc],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_REALLOC], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_realloc\"])\n  AC_CHECK_FUNC([__libc_valloc],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___LIBC_VALLOC], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __libc_valloc\"])\n  AC_CHECK_FUNC([__posix_memalign],\n\t\t[AC_DEFINE([JEMALLOC_OVERRIDE___POSIX_MEMALIGN], [ ], [ ])\n\t\t wrap_syms=\"${wrap_syms} __posix_memalign\"])\nfi\n\ncase \"${host}\" in\n  *-*-mingw* | *-*-cygwin*)\n    wrap_syms=\"${wrap_syms} tls_callback\"\n    ;;\n  *)\n    ;;\nesac\n\ndnl Mangle library-private APIs.\nAC_ARG_WITH([private_namespace],\n  [AS_HELP_STRING([--with-private-namespace=<prefix>], [Prefix to prepend to all library-private APIs])],\n  [JEMALLOC_PRIVATE_NAMESPACE=\"${with_private_namespace}je_\"],\n  [JEMALLOC_PRIVATE_NAMESPACE=\"je_\"]\n)\nAC_DEFINE_UNQUOTED([JEMALLOC_PRIVATE_NAMESPACE], [$JEMALLOC_PRIVATE_NAMESPACE], [ 
])\nprivate_namespace=\"$JEMALLOC_PRIVATE_NAMESPACE\"\nAC_SUBST([private_namespace])\n\ndnl Do not add suffix to installed files by default.\nAC_ARG_WITH([install_suffix],\n  [AS_HELP_STRING([--with-install-suffix=<suffix>], [Suffix to append to all installed files])],\n  [case \"$with_install_suffix\" in\n   *\\ * ) AC_MSG_ERROR([Install suffix should not contain spaces]) ;;\n   * ) INSTALL_SUFFIX=\"$with_install_suffix\" ;;\nesac],\n  [INSTALL_SUFFIX=]\n)\ninstall_suffix=\"$INSTALL_SUFFIX\"\nAC_SUBST([install_suffix])\n\ndnl Specify default malloc_conf.\nAC_ARG_WITH([malloc_conf],\n  [AS_HELP_STRING([--with-malloc-conf=<malloc_conf>], [config.malloc_conf options string])],\n  [JEMALLOC_CONFIG_MALLOC_CONF=\"$with_malloc_conf\"],\n  [JEMALLOC_CONFIG_MALLOC_CONF=\"\"]\n)\nconfig_malloc_conf=\"$JEMALLOC_CONFIG_MALLOC_CONF\"\nAC_DEFINE_UNQUOTED([JEMALLOC_CONFIG_MALLOC_CONF], [\"$config_malloc_conf\"], [ ])\n\ndnl Substitute @je_@ in jemalloc_protos.h.in, primarily to make generation of\ndnl jemalloc_protos_jet.h easy.\nje_=\"je_\"\nAC_SUBST([je_])\n\ncfgoutputs_in=\"Makefile.in\"\ncfgoutputs_in=\"${cfgoutputs_in} jemalloc.pc.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/html.xsl.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/manpages.xsl.in\"\ncfgoutputs_in=\"${cfgoutputs_in} doc/jemalloc.xml.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/jemalloc_macros.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/jemalloc_protos.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/jemalloc_typedefs.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} include/jemalloc/internal/jemalloc_preamble.h.in\"\ncfgoutputs_in=\"${cfgoutputs_in} test/test.sh.in\"\ncfgoutputs_in=\"${cfgoutputs_in} test/include/test/jemalloc_test.h.in\"\n\ncfgoutputs_out=\"Makefile\"\ncfgoutputs_out=\"${cfgoutputs_out} jemalloc.pc\"\ncfgoutputs_out=\"${cfgoutputs_out} doc/html.xsl\"\ncfgoutputs_out=\"${cfgoutputs_out} doc/manpages.xsl\"\ncfgoutputs_out=\"${cfgoutputs_out} 
doc/jemalloc.xml\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_macros.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_protos.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/jemalloc_typedefs.h\"\ncfgoutputs_out=\"${cfgoutputs_out} include/jemalloc/internal/jemalloc_preamble.h\"\ncfgoutputs_out=\"${cfgoutputs_out} test/test.sh\"\ncfgoutputs_out=\"${cfgoutputs_out} test/include/test/jemalloc_test.h\"\n\ncfgoutputs_tup=\"Makefile\"\ncfgoutputs_tup=\"${cfgoutputs_tup} jemalloc.pc:jemalloc.pc.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/html.xsl:doc/html.xsl.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/manpages.xsl:doc/manpages.xsl.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} doc/jemalloc.xml:doc/jemalloc.xml.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_macros.h:include/jemalloc/jemalloc_macros.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_protos.h:include/jemalloc/jemalloc_protos.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/jemalloc_typedefs.h:include/jemalloc/jemalloc_typedefs.h.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} include/jemalloc/internal/jemalloc_preamble.h\"\ncfgoutputs_tup=\"${cfgoutputs_tup} test/test.sh:test/test.sh.in\"\ncfgoutputs_tup=\"${cfgoutputs_tup} test/include/test/jemalloc_test.h:test/include/test/jemalloc_test.h.in\"\n\ncfghdrs_in=\"include/jemalloc/jemalloc_defs.h.in\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/jemalloc_internal_defs.h.in\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/private_symbols.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/private_namespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/public_namespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/internal/public_unnamespace.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/jemalloc_rename.sh\"\ncfghdrs_in=\"${cfghdrs_in} include/jemalloc/jemalloc_mangle.sh\"\ncfghdrs_in=\"${cfghdrs_in} 
include/jemalloc/jemalloc.sh\"\ncfghdrs_in=\"${cfghdrs_in} test/include/test/jemalloc_test_defs.h.in\"\n\ncfghdrs_out=\"include/jemalloc/jemalloc_defs.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc${install_suffix}.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/private_symbols.awk\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/private_symbols_jet.awk\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_symbols.txt\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_namespace.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/public_unnamespace.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_protos_jet.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_rename.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_mangle.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/jemalloc_mangle_jet.h\"\ncfghdrs_out=\"${cfghdrs_out} include/jemalloc/internal/jemalloc_internal_defs.h\"\ncfghdrs_out=\"${cfghdrs_out} test/include/test/jemalloc_test_defs.h\"\n\ncfghdrs_tup=\"include/jemalloc/jemalloc_defs.h:include/jemalloc/jemalloc_defs.h.in\"\ncfghdrs_tup=\"${cfghdrs_tup} include/jemalloc/internal/jemalloc_internal_defs.h:include/jemalloc/internal/jemalloc_internal_defs.h.in\"\ncfghdrs_tup=\"${cfghdrs_tup} test/include/test/jemalloc_test_defs.h:test/include/test/jemalloc_test_defs.h.in\"\n\ndnl ============================================================================\ndnl jemalloc build options.\ndnl\n\ndnl Do not compile with debugging by default.\nAC_ARG_ENABLE([debug],\n  [AS_HELP_STRING([--enable-debug],\n                  [Build debugging code])],\n[if test \"x$enable_debug\" = \"xno\" ; then\n  enable_debug=\"0\"\nelse\n  enable_debug=\"1\"\nfi\n],\n[enable_debug=\"0\"]\n)\nif test \"x$enable_debug\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_DEBUG], [ ], [ ])\nfi\nAC_SUBST([enable_debug])\n\ndnl Only optimize if not debugging.\nif test \"x$enable_debug\" = \"x0\" ; then\n  if 
test \"x$GCC\" = \"xyes\" ; then\n    JE_CFLAGS_ADD([-O3])\n    JE_CXXFLAGS_ADD([-O3])\n    JE_CFLAGS_ADD([-funroll-loops])\n  elif test \"x$je_cv_msvc\" = \"xyes\" ; then\n    JE_CFLAGS_ADD([-O2])\n    JE_CXXFLAGS_ADD([-O2])\n  else\n    JE_CFLAGS_ADD([-O])\n    JE_CXXFLAGS_ADD([-O])\n  fi\nfi\n\ndnl Enable statistics calculation by default.\nAC_ARG_ENABLE([stats],\n  [AS_HELP_STRING([--disable-stats],\n                  [Disable statistics calculation/reporting])],\n[if test \"x$enable_stats\" = \"xno\" ; then\n  enable_stats=\"0\"\nelse\n  enable_stats=\"1\"\nfi\n],\n[enable_stats=\"1\"]\n)\nif test \"x$enable_stats\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_STATS], [ ], [ ])\nfi\nAC_SUBST([enable_stats])\n\ndnl Do not enable smallocx by default.\nAC_ARG_ENABLE([experimental_smallocx],\n  [AS_HELP_STRING([--enable-experimental-smallocx], [Enable experimental smallocx API])],\n[if test \"x$enable_experimental_smallocx\" = \"xno\" ; then\nenable_experimental_smallocx=\"0\"\nelse\nenable_experimental_smallocx=\"1\"\nfi\n],\n[enable_experimental_smallocx=\"0\"]\n)\nif test \"x$enable_experimental_smallocx\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_EXPERIMENTAL_SMALLOCX_API], [ ], [ ])\nfi\nAC_SUBST([enable_experimental_smallocx])\n\ndnl Do not enable profiling by default.\nAC_ARG_ENABLE([prof],\n  [AS_HELP_STRING([--enable-prof], [Enable allocation profiling])],\n[if test \"x$enable_prof\" = \"xno\" ; then\n  enable_prof=\"0\"\nelse\n  enable_prof=\"1\"\nfi\n],\n[enable_prof=\"0\"]\n)\nif test \"x$enable_prof\" = \"x1\" ; then\n  backtrace_method=\"\"\nelse\n  backtrace_method=\"N/A\"\nfi\n\nAC_ARG_ENABLE([prof-libunwind],\n  [AS_HELP_STRING([--enable-prof-libunwind], [Use libunwind for backtracing])],\n[if test \"x$enable_prof_libunwind\" = \"xno\" ; then\n  enable_prof_libunwind=\"0\"\nelse\n  enable_prof_libunwind=\"1\"\n  if test \"x$enable_prof\" = \"x0\" ; then\n    AC_MSG_ERROR([--enable-prof-libunwind should only be used with --enable-prof])\n  
fi\nfi\n],\n[enable_prof_libunwind=\"0\"]\n)\nAC_ARG_WITH([static_libunwind],\n  [AS_HELP_STRING([--with-static-libunwind=<libunwind.a>],\n  [Path to static libunwind library; use rather than dynamically linking])],\nif test \"x$with_static_libunwind\" = \"xno\" ; then\n  LUNWIND=\"-lunwind\"\nelse\n  if test ! -f \"$with_static_libunwind\" ; then\n    AC_MSG_ERROR([Static libunwind not found: $with_static_libunwind])\n  fi\n  LUNWIND=\"$with_static_libunwind\"\nfi,\n  LUNWIND=\"-lunwind\"\n)\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_libunwind\" = \"x1\" ; then\n  AC_CHECK_HEADERS([libunwind.h], , [enable_prof_libunwind=\"0\"])\n  if test \"x$LUNWIND\" = \"x-lunwind\" ; then\n    AC_CHECK_LIB([unwind], [unw_backtrace], [JE_APPEND_VS(LIBS, $LUNWIND)],\n                 [enable_prof_libunwind=\"0\"])\n  else\n    JE_APPEND_VS(LIBS, $LUNWIND)\n  fi\n  if test \"x${enable_prof_libunwind}\" = \"x1\" ; then\n    backtrace_method=\"libunwind\"\n    AC_DEFINE([JEMALLOC_PROF_LIBUNWIND], [ ], [ ])\n  fi\nfi\n\nAC_ARG_ENABLE([prof-libgcc],\n  [AS_HELP_STRING([--disable-prof-libgcc],\n  [Do not use libgcc for backtracing])],\n[if test \"x$enable_prof_libgcc\" = \"xno\" ; then\n  enable_prof_libgcc=\"0\"\nelse\n  enable_prof_libgcc=\"1\"\nfi\n],\n[enable_prof_libgcc=\"1\"]\n)\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_libgcc\" = \"x1\" \\\n     -a \"x$GCC\" = \"xyes\" ; then\n  AC_CHECK_HEADERS([unwind.h], , [enable_prof_libgcc=\"0\"])\n  if test \"x${enable_prof_libgcc}\" = \"x1\" ; then\n    AC_CHECK_LIB([gcc], [_Unwind_Backtrace], [JE_APPEND_VS(LIBS, -lgcc)], [enable_prof_libgcc=\"0\"])\n  fi\n  if test \"x${enable_prof_libgcc}\" = \"x1\" ; then\n    backtrace_method=\"libgcc\"\n    AC_DEFINE([JEMALLOC_PROF_LIBGCC], [ ], [ ])\n  fi\nelse\n  enable_prof_libgcc=\"0\"\nfi\n\nAC_ARG_ENABLE([prof-gcc],\n  [AS_HELP_STRING([--disable-prof-gcc],\n  [Do not use gcc intrinsics for backtracing])],\n[if test \"x$enable_prof_gcc\" = \"xno\" ; then\n  
enable_prof_gcc=\"0\"\nelse\n  enable_prof_gcc=\"1\"\nfi\n],\n[enable_prof_gcc=\"1\"]\n)\nif test \"x$backtrace_method\" = \"x\" -a \"x$enable_prof_gcc\" = \"x1\" \\\n     -a \"x$GCC\" = \"xyes\" ; then\n  JE_CFLAGS_ADD([-fno-omit-frame-pointer])\n  backtrace_method=\"gcc intrinsics\"\n  AC_DEFINE([JEMALLOC_PROF_GCC], [ ], [ ])\nelse\n  enable_prof_gcc=\"0\"\nfi\n\nif test \"x$backtrace_method\" = \"x\" ; then\n  backtrace_method=\"none (disabling profiling)\"\n  enable_prof=\"0\"\nfi\nAC_MSG_CHECKING([configured backtracing method])\nAC_MSG_RESULT([$backtrace_method])\nif test \"x$enable_prof\" = \"x1\" ; then\n  dnl Heap profiling uses the log(3) function.\n  JE_APPEND_VS(LIBS, $LM)\n\n  AC_DEFINE([JEMALLOC_PROF], [ ], [ ])\nfi\nAC_SUBST([enable_prof])\n\ndnl Indicate whether adjacent virtual memory mappings automatically coalesce\ndnl (and fragment on demand).\nif test \"x${maps_coalesce}\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_MAPS_COALESCE], [ ], [ ])\nfi\n\ndnl Indicate whether to retain memory (rather than using munmap()) by default.\nif test \"x$default_retain\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_RETAIN], [ ], [ ])\nfi\n\ndnl Indicate whether realloc(ptr, 0) defaults to the \"alloc\" behavior.\nif test \"x$zero_realloc_default_free\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_ZERO_REALLOC_DEFAULT_FREE], [ ], [ ])\nfi\n\ndnl Enable allocation from DSS if supported by the OS.\nhave_dss=\"1\"\ndnl Check whether the BSD/SUSv1 sbrk() exists.  
If not, disable DSS support.\nAC_CHECK_FUNC([sbrk], [have_sbrk=\"1\"], [have_sbrk=\"0\"])\nif test \"x$have_sbrk\" = \"x1\" ; then\n  if test \"x$sbrk_deprecated\" = \"x1\" ; then\n    AC_MSG_RESULT([Disabling dss allocation because sbrk is deprecated])\n    have_dss=\"0\"\n  fi\nelse\n  have_dss=\"0\"\nfi\n\nif test \"x$have_dss\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_DSS], [ ], [ ])\nfi\n\ndnl Support the junk/zero filling option by default.\nAC_ARG_ENABLE([fill],\n  [AS_HELP_STRING([--disable-fill], [Disable support for junk/zero filling])],\n[if test \"x$enable_fill\" = \"xno\" ; then\n  enable_fill=\"0\"\nelse\n  enable_fill=\"1\"\nfi\n],\n[enable_fill=\"1\"]\n)\nif test \"x$enable_fill\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_FILL], [ ], [ ])\nfi\nAC_SUBST([enable_fill])\n\ndnl Disable utrace(2)-based tracing by default.\nAC_ARG_ENABLE([utrace],\n  [AS_HELP_STRING([--enable-utrace], [Enable utrace(2)-based tracing])],\n[if test \"x$enable_utrace\" = \"xno\" ; then\n  enable_utrace=\"0\"\nelse\n  enable_utrace=\"1\"\nfi\n],\n[enable_utrace=\"0\"]\n)\nJE_COMPILABLE([utrace(2)], [\n#include <sys/types.h>\n#include <sys/param.h>\n#include <sys/time.h>\n#include <sys/uio.h>\n#include <sys/ktrace.h>\n], [\n\tutrace((void *)0, 0);\n], [je_cv_utrace])\nif test \"x${je_cv_utrace}\" = \"xno\" ; then\n  JE_COMPILABLE([utrace(2) with label], [\n  #include <sys/types.h>\n  #include <sys/param.h>\n  #include <sys/time.h>\n  #include <sys/uio.h>\n  #include <sys/ktrace.h>\n  ], [\n\t  utrace((void *)0, (void *)0, 0);\n  ], [je_cv_utrace_label])\n  if test \"x${je_cv_utrace_label}\" = \"xno\"; then\n    enable_utrace=\"0\"\n  fi\n  if test \"x$enable_utrace\" = \"x1\" ; then\n    AC_DEFINE([JEMALLOC_UTRACE_LABEL], [ ], [ ])\n  fi\nelse\n  if test \"x$enable_utrace\" = \"x1\" ; then\n    AC_DEFINE([JEMALLOC_UTRACE], [ ], [ ])\n  fi\nfi\nAC_SUBST([enable_utrace])\n\ndnl Do not support the xmalloc option by default.\nAC_ARG_ENABLE([xmalloc],\n  
[AS_HELP_STRING([--enable-xmalloc], [Support xmalloc option])],\n[if test \"x$enable_xmalloc\" = \"xno\" ; then\n  enable_xmalloc=\"0\"\nelse\n  enable_xmalloc=\"1\"\nfi\n],\n[enable_xmalloc=\"0\"]\n)\nif test \"x$enable_xmalloc\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_XMALLOC], [ ], [ ])\nfi\nAC_SUBST([enable_xmalloc])\n\ndnl Support cache-oblivious allocation alignment by default.\nAC_ARG_ENABLE([cache-oblivious],\n  [AS_HELP_STRING([--disable-cache-oblivious],\n                  [Disable support for cache-oblivious allocation alignment])],\n[if test \"x$enable_cache_oblivious\" = \"xno\" ; then\n  enable_cache_oblivious=\"0\"\nelse\n  enable_cache_oblivious=\"1\"\nfi\n],\n[enable_cache_oblivious=\"1\"]\n)\nif test \"x$enable_cache_oblivious\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_CACHE_OBLIVIOUS], [ ], [ ])\nfi\nAC_SUBST([enable_cache_oblivious])\n\ndnl Do not log by default.\nAC_ARG_ENABLE([log],\n  [AS_HELP_STRING([--enable-log], [Support debug logging])],\n[if test \"x$enable_log\" = \"xno\" ; then\n  enable_log=\"0\"\nelse\n  enable_log=\"1\"\nfi\n],\n[enable_log=\"0\"]\n)\nif test \"x$enable_log\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_LOG], [ ], [ ])\nfi\nAC_SUBST([enable_log])\n\ndnl Do not use readlinkat by default\nAC_ARG_ENABLE([readlinkat],\n  [AS_HELP_STRING([--enable-readlinkat], [Use readlinkat over readlink])],\n[if test \"x$enable_readlinkat\" = \"xno\" ; then\n  enable_readlinkat=\"0\"\nelse\n  enable_readlinkat=\"1\"\nfi\n],\n[enable_readlinkat=\"0\"]\n)\nif test \"x$enable_readlinkat\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_READLINKAT], [ ], [ ])\nfi\nAC_SUBST([enable_readlinkat])\n\ndnl Avoid extra safety checks by default\nAC_ARG_ENABLE([opt-safety-checks],\n  [AS_HELP_STRING([--enable-opt-safety-checks],\n  [Perform certain low-overhead checks, even in opt mode])],\n[if test \"x$enable_opt_safety_checks\" = \"xno\" ; then\n  enable_opt_safety_checks=\"0\"\nelse\n  
enable_opt_safety_checks=\"1\"\nfi\n],\n[enable_opt_safety_checks=\"0\"]\n)\nif test \"x$enable_opt_safety_checks\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_OPT_SAFETY_CHECKS], [ ], [ ])\nfi\nAC_SUBST([enable_opt_safety_checks])\n\ndnl Look for sized-deallocation bugs while otherwise being in opt mode.\nAC_ARG_ENABLE([opt-size-checks],\n  [AS_HELP_STRING([--enable-opt-size-checks],\n  [Perform sized-deallocation argument checks, even in opt mode])],\n[if test \"x$enable_opt_size_checks\" = \"xno\" ; then\n  enable_opt_size_checks=\"0\"\nelse\n  enable_opt_size_checks=\"1\"\nfi\n],\n[enable_opt_size_checks=\"0\"]\n)\nif test \"x$enable_opt_size_checks\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_OPT_SIZE_CHECKS], [ ], [ ])\nfi\nAC_SUBST([enable_opt_size_checks])\n\ndnl Do not check for use-after-free by default.\nAC_ARG_ENABLE([uaf-detection],\n  [AS_HELP_STRING([--enable-uaf-detection],\n  [Allow sampled junk-filling on deallocation to detect use-after-free])],\n[if test \"x$enable_uaf_detection\" = \"xno\" ; then\n  enable_uaf_detection=\"0\"\nelse\n  enable_uaf_detection=\"1\"\nfi\n],\n[enable_uaf_detection=\"0\"]\n)\nif test \"x$enable_uaf_detection\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_UAF_DETECTION], [ ], [ ])\nfi\nAC_SUBST([enable_uaf_detection])\n\nJE_COMPILABLE([a program using __builtin_unreachable], [\nvoid foo (void) {\n  __builtin_unreachable();\n}\n], [\n\t{\n\t\tfoo();\n\t}\n], [je_cv_gcc_builtin_unreachable])\nif test \"x${je_cv_gcc_builtin_unreachable}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_INTERNAL_UNREACHABLE], [__builtin_unreachable], [ ])\nelse\n  AC_DEFINE([JEMALLOC_INTERNAL_UNREACHABLE], [abort], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for __builtin_ffsl(), then ffsl(3), and fail if neither is found.\ndnl One of those two functions should (theoretically) exist on all platforms\ndnl that jemalloc currently has a chance of functioning on without modification.\ndnl We additionally 
assume ffs[ll]() or __builtin_ffs[ll]() are defined if\ndnl ffsl() or __builtin_ffsl() are defined, respectively.\nJE_COMPILABLE([a program using __builtin_ffsl], [\n#include <stdio.h>\n#include <strings.h>\n#include <string.h>\n], [\n\t{\n\t\tint rv = __builtin_ffsl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n], [je_cv_gcc_builtin_ffsl])\nif test \"x${je_cv_gcc_builtin_ffsl}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [__builtin_ffsll], [ ])\n  AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [__builtin_ffsl], [ ])\n  AC_DEFINE([JEMALLOC_INTERNAL_FFS], [__builtin_ffs], [ ])\nelse\n  JE_COMPILABLE([a program using ffsl], [\n  #include <stdio.h>\n  #include <strings.h>\n  #include <string.h>\n  ], [\n\t{\n\t\tint rv = ffsl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n  ], [je_cv_function_ffsl])\n  if test \"x${je_cv_function_ffsl}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_INTERNAL_FFSLL], [ffsll], [ ])\n    AC_DEFINE([JEMALLOC_INTERNAL_FFSL], [ffsl], [ ])\n    AC_DEFINE([JEMALLOC_INTERNAL_FFS], [ffs], [ ])\n  else\n    AC_MSG_ERROR([Cannot build without ffsl(3) or __builtin_ffsl()])\n  fi\nfi\n\nJE_COMPILABLE([a program using __builtin_popcountl], [\n#include <stdio.h>\n#include <strings.h>\n#include <string.h>\n], [\n\t{\n\t\tint rv = __builtin_popcountl(0x08);\n\t\tprintf(\"%d\\n\", rv);\n\t}\n], [je_cv_gcc_builtin_popcountl])\nif test \"x${je_cv_gcc_builtin_popcountl}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_INTERNAL_POPCOUNT], [__builtin_popcount], [ ])\n  AC_DEFINE([JEMALLOC_INTERNAL_POPCOUNTL], [__builtin_popcountl], [ ])\n  AC_DEFINE([JEMALLOC_INTERNAL_POPCOUNTLL], [__builtin_popcountll], [ ])\nfi\n\nAC_ARG_WITH([lg_quantum],\n  [AS_HELP_STRING([--with-lg-quantum=<lg-quantum>],\n   [Base 2 log of minimum allocation alignment])])\nif test \"x$with_lg_quantum\" != \"x\" ; then\n  AC_DEFINE_UNQUOTED([LG_QUANTUM], [$with_lg_quantum], [ ])\nfi\n\nAC_ARG_WITH([lg_slab_maxregs],\n  [AS_HELP_STRING([--with-lg-slab-maxregs=<lg-slab-maxregs>],\n   [Base 2 log of 
maximum number of regions in a slab (used with malloc_conf slab_sizes)])],\n  [CONFIG_LG_SLAB_MAXREGS=\"$with_lg_slab_maxregs\"],\n  [CONFIG_LG_SLAB_MAXREGS=\"\"])\nif test \"x$with_lg_slab_maxregs\" != \"x\" ; then\n  AC_DEFINE_UNQUOTED([CONFIG_LG_SLAB_MAXREGS], [$with_lg_slab_maxregs], [ ])\nfi\n\nAC_ARG_WITH([lg_page],\n  [AS_HELP_STRING([--with-lg-page=<lg-page>], [Base 2 log of system page size])],\n  [LG_PAGE=\"$with_lg_page\"], [LG_PAGE=\"detect\"])\ncase \"${host}\" in\n  aarch64-apple-darwin*)\n      dnl When cross-compiling for Apple M1 with no page size specified, use the\n      dnl default and skip page size detection (which would likely be incorrect).\n      if test \"x${host}\" != \"x${build}\" -a \"x$LG_PAGE\" = \"xdetect\"; then\n        LG_PAGE=14\n      fi\n      ;;\nesac\nif test \"x$LG_PAGE\" = \"xdetect\"; then\n  AC_CACHE_CHECK([LG_PAGE],\n               [je_cv_lg_page],\n               AC_RUN_IFELSE([AC_LANG_PROGRAM(\n[[\n#include <strings.h>\n#ifdef _WIN32\n#include <windows.h>\n#else\n#include <unistd.h>\n#endif\n#include <stdio.h>\n]],\n[[\n    int result;\n    FILE *f;\n\n#ifdef _WIN32\n    SYSTEM_INFO si;\n    GetSystemInfo(&si);\n    result = si.dwPageSize;\n#else\n    result = sysconf(_SC_PAGESIZE);\n#endif\n    if (result == -1) {\n\treturn 1;\n    }\n    result = JEMALLOC_INTERNAL_FFSL(result) - 1;\n\n    f = fopen(\"conftest.out\", \"w\");\n    if (f == NULL) {\n\treturn 1;\n    }\n    fprintf(f, \"%d\", result);\n    fclose(f);\n\n    return 0;\n]])],\n                             [je_cv_lg_page=`cat conftest.out`],\n                             [je_cv_lg_page=undefined],\n                             [je_cv_lg_page=12]))\nfi\nif test \"x${je_cv_lg_page}\" != \"x\" ; then\n  LG_PAGE=\"${je_cv_lg_page}\"\nfi\nif test \"x${LG_PAGE}\" != \"xundefined\" ; then\n   AC_DEFINE_UNQUOTED([LG_PAGE], [$LG_PAGE], [ ])\nelse\n   AC_MSG_ERROR([cannot determine value for LG_PAGE])\nfi\n\nAC_ARG_WITH([lg_hugepage],\n  
[AS_HELP_STRING([--with-lg-hugepage=<lg-hugepage>],\n   [Base 2 log of system huge page size])],\n  [je_cv_lg_hugepage=\"${with_lg_hugepage}\"],\n  [je_cv_lg_hugepage=\"\"])\nif test \"x${je_cv_lg_hugepage}\" = \"x\" ; then\n  dnl Look in /proc/meminfo (Linux-specific) for information on the default huge\n  dnl page size, if any.  The relevant line looks like:\n  dnl\n  dnl   Hugepagesize:       2048 kB\n  if test -e \"/proc/meminfo\" ; then\n    hpsk=[`cat /proc/meminfo 2>/dev/null | \\\n          grep -e '^Hugepagesize:[[:space:]]\\+[0-9]\\+[[:space:]]kB$' | \\\n          awk '{print $2}'`]\n    if test \"x${hpsk}\" != \"x\" ; then\n      je_cv_lg_hugepage=10\n      while test \"${hpsk}\" -gt 1 ; do\n        hpsk=\"$((hpsk / 2))\"\n        je_cv_lg_hugepage=\"$((je_cv_lg_hugepage + 1))\"\n      done\n    fi\n  fi\n\n  dnl Set default if unable to automatically configure.\n  if test \"x${je_cv_lg_hugepage}\" = \"x\" ; then\n    je_cv_lg_hugepage=21\n  fi\nfi\nif test \"x${LG_PAGE}\" != \"xundefined\" -a \\\n        \"${je_cv_lg_hugepage}\" -lt \"${LG_PAGE}\" ; then\n  AC_MSG_ERROR([Huge page size (2^${je_cv_lg_hugepage}) must be at least page size (2^${LG_PAGE})])\nfi\nAC_DEFINE_UNQUOTED([LG_HUGEPAGE], [${je_cv_lg_hugepage}], [ ])\n\ndnl ============================================================================\ndnl Enable libdl by default.\nAC_ARG_ENABLE([libdl],\n  [AS_HELP_STRING([--disable-libdl],\n  [Do not use libdl])],\n[if test \"x$enable_libdl\" = \"xno\" ; then\n  enable_libdl=\"0\"\nelse\n  enable_libdl=\"1\"\nfi\n],\n[enable_libdl=\"1\"]\n)\nAC_SUBST([libdl])\n\ndnl ============================================================================\ndnl Configure pthreads.\n\nif test \"x$abi\" != \"xpecoff\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_PTHREAD], [ ], [ ])\n  AC_CHECK_HEADERS([pthread.h], , [AC_MSG_ERROR([pthread.h is missing])])\n  dnl Some systems may embed pthreads functionality in libc; check for libpthread\n  dnl first, but try libc too before 
failing.\n  AC_CHECK_LIB([pthread], [pthread_create], [JE_APPEND_VS(LIBS, -pthread)],\n               [AC_SEARCH_LIBS([pthread_create], , ,\n                               AC_MSG_ERROR([libpthread is missing]))])\n  wrap_syms=\"${wrap_syms} pthread_create\"\n  have_pthread=\"1\"\n\ndnl Check if we have dlsym support.\n  if test \"x$enable_libdl\" = \"x1\" ; then\n    have_dlsym=\"1\"\n    AC_CHECK_HEADERS([dlfcn.h],\n      AC_CHECK_FUNC([dlsym], [],\n        [AC_CHECK_LIB([dl], [dlsym], [LIBS=\"$LIBS -ldl\"], [have_dlsym=\"0\"])]),\n      [have_dlsym=\"0\"])\n    if test \"x$have_dlsym\" = \"x1\" ; then\n      AC_DEFINE([JEMALLOC_HAVE_DLSYM], [ ], [ ])\n    fi\n  else\n    have_dlsym=\"0\"\n  fi\n\n  JE_COMPILABLE([pthread_atfork(3)], [\n#include <pthread.h>\n], [\n  pthread_atfork((void *)0, (void *)0, (void *)0);\n], [je_cv_pthread_atfork])\n  if test \"x${je_cv_pthread_atfork}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_PTHREAD_ATFORK], [ ], [ ])\n  fi\n  dnl Check if pthread_setname_np is available with the expected API.\n  JE_COMPILABLE([pthread_setname_np(3)], [\n#include <pthread.h>\n], [\n  pthread_setname_np(pthread_self(), \"setname_test\");\n], [je_cv_pthread_setname_np])\n  if test \"x${je_cv_pthread_setname_np}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_PTHREAD_SETNAME_NP], [ ], [ ])\n  fi\n  dnl pthread_getname_np is not necessarily present even when its\n  dnl pthread_setname_np counterpart is, so check for it separately.\n  JE_COMPILABLE([pthread_getname_np(3)], [\n#include <pthread.h>\n#include <stdlib.h>\n], [\n  {\n  \tchar *name = malloc(16);\n  \tpthread_getname_np(pthread_self(), name, 16);\n\tfree(name);\n  }\n], [je_cv_pthread_getname_np])\n  if test \"x${je_cv_pthread_getname_np}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_PTHREAD_GETNAME_NP], [ ], [ ])\n  fi\n  dnl pthread_get_name_np is not necessarily present even when its\n  dnl pthread_set_name_np counterpart is, so check for it separately.\n  JE_COMPILABLE([pthread_get_name_np(3)], [\n#include <pthread.h>\n#include 
<pthread_np.h>\n#include <stdlib.h>\n], [\n  {\n  \tchar *name = malloc(16);\n  \tpthread_get_name_np(pthread_self(), name, 16);\n\tfree(name);\n  }\n], [je_cv_pthread_get_name_np])\n  if test \"x${je_cv_pthread_get_name_np}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_PTHREAD_GET_NAME_NP], [ ], [ ])\n  fi\nfi\n\nJE_APPEND_VS(CPPFLAGS, -D_REENTRANT)\n\ndnl Check whether clock_gettime(2) is in libc or librt.\nAC_SEARCH_LIBS([clock_gettime], [rt])\n\ndnl Cray wrapper compiler often adds `-lrt` when using `-static`. Check with\ndnl `-dynamic` as well in case a user tries to dynamically link in jemalloc\nif test \"x$je_cv_cray_prgenv_wrapper\" = \"xyes\" ; then\n  if test \"$ac_cv_search_clock_gettime\" != \"-lrt\"; then\n    JE_CFLAGS_SAVE()\n\n    unset ac_cv_search_clock_gettime\n    JE_CFLAGS_ADD([-dynamic])\n    AC_SEARCH_LIBS([clock_gettime], [rt])\n\n    JE_CFLAGS_RESTORE()\n  fi\nfi\n\ndnl check for CLOCK_MONOTONIC_COARSE (Linux-specific).\nJE_COMPILABLE([clock_gettime(CLOCK_MONOTONIC_COARSE, ...)], [\n#include <time.h>\n], [\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC_COARSE, &ts);\n], [je_cv_clock_monotonic_coarse])\nif test \"x${je_cv_clock_monotonic_coarse}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE], [ ], [ ])\nfi\n\ndnl check for CLOCK_MONOTONIC.\nJE_COMPILABLE([clock_gettime(CLOCK_MONOTONIC, ...)], [\n#include <unistd.h>\n#include <time.h>\n], [\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC, &ts);\n#if !defined(_POSIX_MONOTONIC_CLOCK) || _POSIX_MONOTONIC_CLOCK < 0\n#  error _POSIX_MONOTONIC_CLOCK missing/invalid\n#endif\n], [je_cv_clock_monotonic])\nif test \"x${je_cv_clock_monotonic}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_CLOCK_MONOTONIC], [ ], [ ])\nfi\n\ndnl Check for mach_absolute_time().\nJE_COMPILABLE([mach_absolute_time()], [\n#include <mach/mach_time.h>\n], [\n\tmach_absolute_time();\n], [je_cv_mach_absolute_time])\nif test \"x${je_cv_mach_absolute_time}\" = \"xyes\" ; then\n  
AC_DEFINE([JEMALLOC_HAVE_MACH_ABSOLUTE_TIME], [ ], [ ])\nfi\n\ndnl check for CLOCK_REALTIME (always should be available on Linux)\nJE_COMPILABLE([clock_gettime(CLOCK_REALTIME, ...)], [\n#include <time.h>\n], [\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_REALTIME, &ts);\n], [je_cv_clock_realtime])\nif test \"x${je_cv_clock_realtime}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_CLOCK_REALTIME], [ ], [ ])\nfi\n\ndnl Use syscall(2) (if available) by default.\nAC_ARG_ENABLE([syscall],\n  [AS_HELP_STRING([--disable-syscall], [Disable use of syscall(2)])],\n[if test \"x$enable_syscall\" = \"xno\" ; then\n  enable_syscall=\"0\"\nelse\n  enable_syscall=\"1\"\nfi\n],\n[enable_syscall=\"1\"]\n)\nif test \"x$enable_syscall\" = \"x1\" ; then\n  dnl Check if syscall(2) is usable.  Treat warnings as errors, so that e.g. OS\n  dnl X 10.12's deprecation warning prevents use.\n  JE_CFLAGS_SAVE()\n  JE_CFLAGS_ADD([-Werror])\n  JE_COMPILABLE([syscall(2)], [\n#include <sys/syscall.h>\n#include <unistd.h>\n], [\n\tsyscall(SYS_write, 2, \"hello\", 5);\n],\n                [je_cv_syscall])\n  JE_CFLAGS_RESTORE()\n  if test \"x$je_cv_syscall\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_USE_SYSCALL], [ ], [ ])\n  fi\nfi\n\ndnl Check if the GNU-specific secure_getenv function exists.\nAC_CHECK_FUNC([secure_getenv],\n              [have_secure_getenv=\"1\"],\n              [have_secure_getenv=\"0\"]\n             )\nif test \"x$have_secure_getenv\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_SECURE_GETENV], [ ], [ ])\nfi\n\ndnl Check if the GNU-specific sched_getcpu function exists.\nAC_CHECK_FUNC([sched_getcpu],\n              [have_sched_getcpu=\"1\"],\n              [have_sched_getcpu=\"0\"]\n             )\nif test \"x$have_sched_getcpu\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_SCHED_GETCPU], [ ], [ ])\nfi\n\ndnl Check if the GNU-specific sched_setaffinity function exists.\nAC_CHECK_FUNC([sched_setaffinity],\n              [have_sched_setaffinity=\"1\"],\n              
[have_sched_setaffinity=\"0\"]\n             )\nif test \"x$have_sched_setaffinity\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_SCHED_SETAFFINITY], [ ], [ ])\nfi\n\ndnl Check if the Solaris/BSD issetugid function exists.\nAC_CHECK_FUNC([issetugid],\n              [have_issetugid=\"1\"],\n              [have_issetugid=\"0\"]\n             )\nif test \"x$have_issetugid\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_ISSETUGID], [ ], [ ])\nfi\n\ndnl Check whether the BSD-specific _malloc_thread_cleanup() exists.  If so, use\ndnl it rather than pthreads TSD cleanup functions to support cleanup during\ndnl thread exit, in order to avoid pthreads library recursion during\ndnl bootstrapping.\nAC_CHECK_FUNC([_malloc_thread_cleanup],\n              [have__malloc_thread_cleanup=\"1\"],\n              [have__malloc_thread_cleanup=\"0\"]\n             )\nif test \"x$have__malloc_thread_cleanup\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_MALLOC_THREAD_CLEANUP], [ ], [ ])\n  wrap_syms=\"${wrap_syms} _malloc_thread_cleanup _malloc_tsd_cleanup_register\"\n  force_tls=\"1\"\nfi\n\ndnl Check whether the BSD-specific _pthread_mutex_init_calloc_cb() exists.  
If\ndnl so, mutex initialization causes allocation, and we need to implement this\ndnl callback function in order to prevent recursive allocation.\nAC_CHECK_FUNC([_pthread_mutex_init_calloc_cb],\n              [have__pthread_mutex_init_calloc_cb=\"1\"],\n              [have__pthread_mutex_init_calloc_cb=\"0\"]\n             )\nif test \"x$have__pthread_mutex_init_calloc_cb\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_MUTEX_INIT_CB], [ ], [ ])\n  wrap_syms=\"${wrap_syms} _malloc_prefork _malloc_postfork\"\nfi\n\nAC_CHECK_FUNC([memcntl],\n\t      [have_memcntl=\"1\"],\n\t      [have_memcntl=\"0\"]\n\t      )\nif test \"x$have_memcntl\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_MEMCNTL], [ ], [ ])\nfi\n\ndnl Disable lazy locking by default.\nAC_ARG_ENABLE([lazy_lock],\n  [AS_HELP_STRING([--enable-lazy-lock],\n  [Enable lazy locking (only lock when multi-threaded)])],\n[if test \"x$enable_lazy_lock\" = \"xno\" ; then\n  enable_lazy_lock=\"0\"\nelse\n  enable_lazy_lock=\"1\"\nfi\n],\n[enable_lazy_lock=\"\"]\n)\nif test \"x${enable_lazy_lock}\" = \"x\" ; then\n  if test \"x${force_lazy_lock}\" = \"x1\" ; then\n    AC_MSG_RESULT([Forcing lazy-lock to avoid allocator/threading bootstrap issues])\n    enable_lazy_lock=\"1\"\n  else\n    enable_lazy_lock=\"0\"\n  fi\nfi\nif test \"x${enable_lazy_lock}\" = \"x1\" -a \"x${abi}\" = \"xpecoff\" ; then\n  AC_MSG_RESULT([Forcing no lazy-lock because thread creation monitoring is unimplemented])\n  enable_lazy_lock=\"0\"\nfi\nif test \"x$enable_lazy_lock\" = \"x1\" ; then\n  if test \"x$have_dlsym\" = \"x1\" ; then\n    AC_DEFINE([JEMALLOC_LAZY_LOCK], [ ], [ ])\n  else\n    AC_MSG_ERROR([Missing dlsym support: lazy-lock cannot be enabled.])\n  fi\nfi\nAC_SUBST([enable_lazy_lock])\n\ndnl Automatically configure TLS.\nif test \"x${force_tls}\" = \"x1\" ; then\n  enable_tls=\"1\"\nelif test \"x${force_tls}\" = \"x0\" ; then\n  enable_tls=\"0\"\nelse\n  enable_tls=\"1\"\nfi\nif test \"x${enable_tls}\" = \"x1\" ; 
then\nAC_MSG_CHECKING([for TLS])\nAC_COMPILE_IFELSE([AC_LANG_PROGRAM(\n[[\n    __thread int x;\n]], [[\n    x = 42;\n\n    return 0;\n]])],\n              AC_MSG_RESULT([yes]),\n              AC_MSG_RESULT([no])\n              enable_tls=\"0\")\nelse\n  enable_tls=\"0\"\nfi\nAC_SUBST([enable_tls])\nif test \"x${enable_tls}\" = \"x1\" ; then\n  AC_DEFINE_UNQUOTED([JEMALLOC_TLS], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for C11 atomics.\n\nJE_COMPILABLE([C11 atomics], [\n#include <stdint.h>\n#if (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)\n#include <stdatomic.h>\n#else\n#error Atomics not available\n#endif\n], [\n    uint64_t *p = (uint64_t *)0;\n    uint64_t x = 1;\n    volatile atomic_uint_least64_t *a = (volatile atomic_uint_least64_t *)p;\n    uint64_t r = atomic_fetch_add(a, x) + x;\n    return r == 0;\n], [je_cv_c11_atomics])\nif test \"x${je_cv_c11_atomics}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_C11_ATOMICS], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for GCC-style __atomic atomics.\n\nJE_COMPILABLE([GCC __atomic atomics], [\n], [\n    int x = 0;\n    int val = 1;\n    int y = __atomic_fetch_add(&x, val, __ATOMIC_RELAXED);\n    int after_add = x;\n    return after_add == 1;\n], [je_cv_gcc_atomic_atomics])\nif test \"x${je_cv_gcc_atomic_atomics}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_GCC_ATOMIC_ATOMICS], [ ], [ ])\n\n  dnl check for 8-bit atomic support\n  JE_COMPILABLE([GCC 8-bit __atomic atomics], [\n  ], [\n      unsigned char x = 0;\n      int val = 1;\n      int y = __atomic_fetch_add(&x, val, __ATOMIC_RELAXED);\n      int after_add = (int)x;\n      return after_add == 1;\n  ], [je_cv_gcc_u8_atomic_atomics])\n  if test \"x${je_cv_gcc_u8_atomic_atomics}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_GCC_U8_ATOMIC_ATOMICS], [ ], [ ])\n  fi\nfi\n\ndnl 
============================================================================\ndnl Check for GCC-style __sync atomics.\n\nJE_COMPILABLE([GCC __sync atomics], [\n], [\n    int x = 0;\n    int before_add = __sync_fetch_and_add(&x, 1);\n    int after_add = x;\n    return (before_add == 0) && (after_add == 1);\n], [je_cv_gcc_sync_atomics])\nif test \"x${je_cv_gcc_sync_atomics}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_GCC_SYNC_ATOMICS], [ ], [ ])\n\n  dnl check for 8-bit atomic support\n  JE_COMPILABLE([GCC 8-bit __sync atomics], [\n  ], [\n      unsigned char x = 0;\n      int before_add = __sync_fetch_and_add(&x, 1);\n      int after_add = (int)x;\n      return (before_add == 0) && (after_add == 1);\n  ], [je_cv_gcc_u8_sync_atomics])\n  if test \"x${je_cv_gcc_u8_sync_atomics}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_GCC_U8_SYNC_ATOMICS], [ ], [ ])\n  fi\nfi\n\ndnl ============================================================================\ndnl Check for atomic(3) operations as provided on Darwin.\ndnl We need this not for the atomic operations (which are provided above), but\ndnl rather for the OS_unfair_lock type it exposes.\n\nJE_COMPILABLE([Darwin OSAtomic*()], [\n#include <libkern/OSAtomic.h>\n#include <inttypes.h>\n], [\n\t{\n\t\tint32_t x32 = 0;\n\t\tvolatile int32_t *x32p = &x32;\n\t\tOSAtomicAdd32(1, x32p);\n\t}\n\t{\n\t\tint64_t x64 = 0;\n\t\tvolatile int64_t *x64p = &x64;\n\t\tOSAtomicAdd64(1, x64p);\n\t}\n], [je_cv_osatomic])\nif test \"x${je_cv_osatomic}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_OSATOMIC], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for madvise(2).\n\nJE_COMPILABLE([madvise(2)], [\n#include <sys/mman.h>\n], [\n\tmadvise((void *)0, 0, 0);\n], [je_cv_madvise])\nif test \"x${je_cv_madvise}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_MADVISE], [ ], [ ])\n\n  dnl Check for madvise(..., MADV_FREE).\n  JE_COMPILABLE([madvise(..., MADV_FREE)], [\n#include <sys/mman.h>\n], 
[\n\tmadvise((void *)0, 0, MADV_FREE);\n], [je_cv_madv_free])\n  if test \"x${je_cv_madv_free}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ], [ ])\n  elif test \"x${je_cv_madvise}\" = \"xyes\" ; then\n    case \"${host_cpu}\" in i686|x86_64)\n        case \"${host}\" in *-*-linux*)\n            AC_DEFINE([JEMALLOC_PURGE_MADVISE_FREE], [ ], [ ])\n            AC_DEFINE([JEMALLOC_DEFINE_MADVISE_FREE], [ ], [ ])\n\t    ;;\n        esac\n        ;;\n    esac\n  fi\n\n  dnl Check for madvise(..., MADV_DONTNEED).\n  JE_COMPILABLE([madvise(..., MADV_DONTNEED)], [\n#include <sys/mman.h>\n], [\n\tmadvise((void *)0, 0, MADV_DONTNEED);\n], [je_cv_madv_dontneed])\n  if test \"x${je_cv_madv_dontneed}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_PURGE_MADVISE_DONTNEED], [ ], [ ])\n  fi\n\n  dnl Check for madvise(..., MADV_DO[NT]DUMP).\n  JE_COMPILABLE([madvise(..., MADV_DO[[NT]]DUMP)], [\n#include <sys/mman.h>\n], [\n\tmadvise((void *)0, 0, MADV_DONTDUMP);\n\tmadvise((void *)0, 0, MADV_DODUMP);\n], [je_cv_madv_dontdump])\n  if test \"x${je_cv_madv_dontdump}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_MADVISE_DONTDUMP], [ ], [ ])\n  fi\n\n  dnl Check for madvise(..., MADV_[NO]HUGEPAGE).\n  JE_COMPILABLE([madvise(..., MADV_[[NO]]HUGEPAGE)], [\n#include <sys/mman.h>\n], [\n\tmadvise((void *)0, 0, MADV_HUGEPAGE);\n\tmadvise((void *)0, 0, MADV_NOHUGEPAGE);\n], [je_cv_thp])\n  dnl Check for madvise(..., MADV_[NO]CORE).\n  JE_COMPILABLE([madvise(..., MADV_[[NO]]CORE)], [\n#include <sys/mman.h>\n], [\n\tmadvise((void *)0, 0, MADV_NOCORE);\n\tmadvise((void *)0, 0, MADV_CORE);\n], [je_cv_madv_nocore])\n  if test \"x${je_cv_madv_nocore}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_MADVISE_NOCORE], [ ], [ ])\n  fi\ncase \"${host_cpu}\" in\n  arm*)\n    ;;\n  *)\n  if test \"x${je_cv_thp}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_MADVISE_HUGE], [ ], [ ])\n  fi\n  ;;\nesac\nelse\n  dnl Check for posix_madvise.\n  JE_COMPILABLE([posix_madvise], [\n  #include 
<sys/mman.h>\n  ], [\n    posix_madvise((void *)0, 0, 0);\n  ], [je_cv_posix_madvise])\n  if test \"x${je_cv_posix_madvise}\" = \"xyes\" ; then\n    AC_DEFINE([JEMALLOC_HAVE_POSIX_MADVISE], [ ], [ ])\n\n    dnl Check for posix_madvise(..., POSIX_MADV_DONTNEED).\n    JE_COMPILABLE([posix_madvise(..., POSIX_MADV_DONTNEED)], [\n  #include <sys/mman.h>\n  ], [\n    posix_madvise((void *)0, 0, POSIX_MADV_DONTNEED);\n  ], [je_cv_posix_madv_dontneed])\n    if test \"x${je_cv_posix_madv_dontneed}\" = \"xyes\" ; then\n      AC_DEFINE([JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED], [ ], [ ])\n    fi\n  fi\nfi\n\ndnl ============================================================================\ndnl Check for mprotect(2).\n\nJE_COMPILABLE([mprotect(2)], [\n#include <sys/mman.h>\n], [\n\tmprotect((void *)0, 0, PROT_NONE);\n], [je_cv_mprotect])\nif test \"x${je_cv_mprotect}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_MPROTECT], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for __builtin_clz(), __builtin_clzl(), and __builtin_clzll().\n\nAC_CACHE_CHECK([for __builtin_clz],\n               [je_cv_builtin_clz],\n               [AC_LINK_IFELSE([AC_LANG_PROGRAM([],\n                                                [\n                                                {\n                                                        unsigned x = 0;\n                                                        int y = __builtin_clz(x);\n                                                }\n                                                {\n                                                        unsigned long x = 0;\n                                                        int y = __builtin_clzl(x);\n                                                }\n                                                {\n                                                        unsigned long long x = 0;\n                                                        int y = 
__builtin_clzll(x);\n                                                }\n                                                ])],\n                               [je_cv_builtin_clz=yes],\n                               [je_cv_builtin_clz=no])])\n\nif test \"x${je_cv_builtin_clz}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_BUILTIN_CLZ], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for os_unfair_lock operations as provided on Darwin.\n\nJE_COMPILABLE([Darwin os_unfair_lock_*()], [\n#include <os/lock.h>\n#include <AvailabilityMacros.h>\n], [\n\t#if MAC_OS_X_VERSION_MIN_REQUIRED < 101200\n\t#error \"os_unfair_lock is not supported\"\n\t#else\n\tos_unfair_lock lock = OS_UNFAIR_LOCK_INIT;\n\tos_unfair_lock_lock(&lock);\n\tos_unfair_lock_unlock(&lock);\n\t#endif\n], [je_cv_os_unfair_lock])\nif test \"x${je_cv_os_unfair_lock}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_OS_UNFAIR_LOCK], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Darwin-related configuration.\n\nAC_ARG_ENABLE([zone-allocator],\n  [AS_HELP_STRING([--disable-zone-allocator],\n                  [Disable zone allocator for Darwin])],\n[if test \"x$enable_zone_allocator\" = \"xno\" ; then\n  enable_zone_allocator=\"0\"\nelse\n  enable_zone_allocator=\"1\"\nfi\n],\n[if test \"x${abi}\" = \"xmacho\"; then\n  enable_zone_allocator=\"1\"\nfi\n]\n)\nAC_SUBST([enable_zone_allocator])\n\nif test \"x${enable_zone_allocator}\" = \"x1\" ; then\n  if test \"x${abi}\" != \"xmacho\"; then\n    AC_MSG_ERROR([--enable-zone-allocator is only supported on Darwin])\n  fi\n  AC_DEFINE([JEMALLOC_ZONE], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Use initial-exec TLS by default.\nAC_ARG_ENABLE([initial-exec-tls],\n  [AS_HELP_STRING([--disable-initial-exec-tls],\n                  [Disable the initial-exec tls model])],\n[if test 
\"x$enable_initial_exec_tls\" = \"xno\" ; then\n  enable_initial_exec_tls=\"0\"\nelse\n  enable_initial_exec_tls=\"1\"\nfi\n],\n[enable_initial_exec_tls=\"1\"]\n)\nAC_SUBST([enable_initial_exec_tls])\n\nif test \"x${je_cv_tls_model}\" = \"xyes\" -a \\\n       \"x${enable_initial_exec_tls}\" = \"x1\" ; then\n  AC_DEFINE([JEMALLOC_TLS_MODEL],\n            [__attribute__((tls_model(\"initial-exec\")))], \n            [ ])\nelse\n  AC_DEFINE([JEMALLOC_TLS_MODEL], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Enable background threads if possible.\n\nif test \"x${have_pthread}\" = \"x1\" -a \"x${je_cv_os_unfair_lock}\" != \"xyes\" -a \\\n       \"x${abi}\" != \"xmacho\" ; then\n  AC_DEFINE([JEMALLOC_BACKGROUND_THREAD], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for glibc malloc hooks\n\nif test \"x$glibc\" = \"x1\" ; then\n  JE_COMPILABLE([glibc malloc hook], [\n  #include <stddef.h>\n\n  extern void (* __free_hook)(void *ptr);\n  extern void *(* __malloc_hook)(size_t size);\n  extern void *(* __realloc_hook)(void *ptr, size_t size);\n], [\n    void *ptr = 0L;\n    if (__malloc_hook) ptr = __malloc_hook(1);\n    if (__realloc_hook) ptr = __realloc_hook(ptr, 2);\n    if (__free_hook && ptr) __free_hook(ptr);\n], [je_cv_glibc_malloc_hook])\n  if test \"x${je_cv_glibc_malloc_hook}\" = \"xyes\" ; then\n    if test \"x${JEMALLOC_PREFIX}\" = \"x\" ; then\n      AC_DEFINE([JEMALLOC_GLIBC_MALLOC_HOOK], [ ], [ ])\n      wrap_syms=\"${wrap_syms} __free_hook __malloc_hook __realloc_hook\"\n    fi\n  fi\n\n  JE_COMPILABLE([glibc memalign hook], [\n  #include <stddef.h>\n\n  extern void *(* __memalign_hook)(size_t alignment, size_t size);\n], [\n    void *ptr = 0L;\n    if (__memalign_hook) ptr = __memalign_hook(16, 7);\n], [je_cv_glibc_memalign_hook])\n  if test \"x${je_cv_glibc_memalign_hook}\" = \"xyes\" ; then\n    if test \"x${JEMALLOC_PREFIX}\" = \"x\" ; 
then\n      AC_DEFINE([JEMALLOC_GLIBC_MEMALIGN_HOOK], [ ], [ ])\n      wrap_syms=\"${wrap_syms} __memalign_hook\"\n    fi\n  fi\nfi\n\nJE_COMPILABLE([pthreads adaptive mutexes], [\n#include <pthread.h>\n], [\n  pthread_mutexattr_t attr;\n  pthread_mutexattr_init(&attr);\n  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);\n  pthread_mutexattr_destroy(&attr);\n], [je_cv_pthread_mutex_adaptive_np])\nif test \"x${je_cv_pthread_mutex_adaptive_np}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP], [ ], [ ])\nfi\n\nJE_CFLAGS_SAVE()\nJE_CFLAGS_ADD([-D_GNU_SOURCE])\nJE_CFLAGS_ADD([-Werror])\nJE_CFLAGS_ADD([-herror_on_warning])\nJE_COMPILABLE([strerror_r returns char with gnu source], [\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n], [\n  char *buffer = (char *) malloc(100);\n  char *error = strerror_r(EINVAL, buffer, 100);\n  printf(\"%s\\n\", error);\n], [je_cv_strerror_r_returns_char_with_gnu_source])\nJE_CFLAGS_RESTORE()\nif test \"x${je_cv_strerror_r_returns_char_with_gnu_source}\" = \"xyes\" ; then\n  AC_DEFINE([JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE], [ ], [ ])\nfi\n\ndnl ============================================================================\ndnl Check for typedefs, structures, and compiler characteristics.\nAC_HEADER_STDBOOL\n\ndnl ============================================================================\ndnl Define commands that generate output files.\n\nAC_CONFIG_COMMANDS([include/jemalloc/internal/public_symbols.txt], [\n  f=\"${objroot}include/jemalloc/internal/public_symbols.txt\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  cp /dev/null \"${f}\"\n  for nm in `echo ${mangling_map} |tr ',' ' '` ; do\n    n=`echo ${nm} |tr ':' ' ' |awk '{print $[]1}'`\n    m=`echo ${nm} |tr ':' ' ' |awk '{print $[]2}'`\n    echo \"${n}:${m}\" >> \"${f}\"\n    dnl Remove name from public_syms so that it isn't redefined later.\n    public_syms=`for sym in ${public_syms}; do echo 
\"${sym}\"; done |grep -v \"^${n}\\$\" |tr '\\n' ' '`\n  done\n  for sym in ${public_syms} ; do\n    n=\"${sym}\"\n    m=\"${JEMALLOC_PREFIX}${sym}\"\n    echo \"${n}:${m}\" >> \"${f}\"\n  done\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  mangling_map=\"${mangling_map}\"\n  public_syms=\"${public_syms}\"\n  JEMALLOC_PREFIX=\"${JEMALLOC_PREFIX}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/internal/private_symbols.awk], [\n  f=\"${objroot}include/jemalloc/internal/private_symbols.awk\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  export_syms=`for sym in ${public_syms}; do echo \"${JEMALLOC_PREFIX}${sym}\"; done; for sym in ${wrap_syms}; do echo \"${sym}\"; done;`\n  \"${srcdir}/include/jemalloc/internal/private_symbols.sh\" \"${SYM_PREFIX}\" ${export_syms} > \"${objroot}include/jemalloc/internal/private_symbols.awk\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  public_syms=\"${public_syms}\"\n  wrap_syms=\"${wrap_syms}\"\n  SYM_PREFIX=\"${SYM_PREFIX}\"\n  JEMALLOC_PREFIX=\"${JEMALLOC_PREFIX}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/internal/private_symbols_jet.awk], [\n  f=\"${objroot}include/jemalloc/internal/private_symbols_jet.awk\"\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  export_syms=`for sym in ${public_syms}; do echo \"jet_${sym}\"; done; for sym in ${wrap_syms}; do echo \"${sym}\"; done;`\n  \"${srcdir}/include/jemalloc/internal/private_symbols.sh\" \"${SYM_PREFIX}\" ${export_syms} > \"${objroot}include/jemalloc/internal/private_symbols_jet.awk\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n  public_syms=\"${public_syms}\"\n  wrap_syms=\"${wrap_syms}\"\n  SYM_PREFIX=\"${SYM_PREFIX}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/internal/public_namespace.h], [\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  \"${srcdir}/include/jemalloc/internal/public_namespace.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/internal/public_namespace.h\"\n], [\n  
srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/internal/public_unnamespace.h], [\n  mkdir -p \"${objroot}include/jemalloc/internal\"\n  \"${srcdir}/include/jemalloc/internal/public_unnamespace.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/internal/public_unnamespace.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/jemalloc_protos_jet.h], [\n  mkdir -p \"${objroot}include/jemalloc\"\n  cat \"${srcdir}/include/jemalloc/jemalloc_protos.h.in\" | sed -e 's/@je_@/jet_/g' > \"${objroot}include/jemalloc/jemalloc_protos_jet.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/jemalloc_rename.h], [\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_rename.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" > \"${objroot}include/jemalloc/jemalloc_rename.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/jemalloc_mangle.h], [\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_mangle.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" je_ > \"${objroot}include/jemalloc/jemalloc_mangle.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/jemalloc_mangle_jet.h], [\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc_mangle.sh\" \"${objroot}include/jemalloc/internal/public_symbols.txt\" jet_ > \"${objroot}include/jemalloc/jemalloc_mangle_jet.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n])\nAC_CONFIG_COMMANDS([include/jemalloc/jemalloc.h], [\n  mkdir -p \"${objroot}include/jemalloc\"\n  \"${srcdir}/include/jemalloc/jemalloc.sh\" \"${objroot}\" > \"${objroot}include/jemalloc/jemalloc${install_suffix}.h\"\n], [\n  srcdir=\"${srcdir}\"\n  objroot=\"${objroot}\"\n 
 install_suffix=\"${install_suffix}\"\n])\n\ndnl Process .in files.\nAC_SUBST([cfghdrs_in])\nAC_SUBST([cfghdrs_out])\nAC_CONFIG_HEADERS([$cfghdrs_tup])\n\ndnl ============================================================================\ndnl Generate outputs.\n\nAC_CONFIG_FILES([$cfgoutputs_tup config.stamp bin/jemalloc-config bin/jemalloc.sh bin/jeprof])\nAC_SUBST([cfgoutputs_in])\nAC_SUBST([cfgoutputs_out])\nAC_OUTPUT\n\ndnl ============================================================================\ndnl Print out the results of configuration.\nAC_MSG_RESULT([===============================================================================])\nAC_MSG_RESULT([jemalloc version   : ${jemalloc_version}])\nAC_MSG_RESULT([library revision   : ${rev}])\nAC_MSG_RESULT([])\nAC_MSG_RESULT([CONFIG             : ${CONFIG}])\nAC_MSG_RESULT([CC                 : ${CC}])\nAC_MSG_RESULT([CONFIGURE_CFLAGS   : ${CONFIGURE_CFLAGS}])\nAC_MSG_RESULT([SPECIFIED_CFLAGS   : ${SPECIFIED_CFLAGS}])\nAC_MSG_RESULT([EXTRA_CFLAGS       : ${EXTRA_CFLAGS}])\nAC_MSG_RESULT([CPPFLAGS           : ${CPPFLAGS}])\nAC_MSG_RESULT([CXX                : ${CXX}])\nAC_MSG_RESULT([CONFIGURE_CXXFLAGS : ${CONFIGURE_CXXFLAGS}])\nAC_MSG_RESULT([SPECIFIED_CXXFLAGS : ${SPECIFIED_CXXFLAGS}])\nAC_MSG_RESULT([EXTRA_CXXFLAGS     : ${EXTRA_CXXFLAGS}])\nAC_MSG_RESULT([LDFLAGS            : ${LDFLAGS}])\nAC_MSG_RESULT([EXTRA_LDFLAGS      : ${EXTRA_LDFLAGS}])\nAC_MSG_RESULT([DSO_LDFLAGS        : ${DSO_LDFLAGS}])\nAC_MSG_RESULT([LIBS               : ${LIBS}])\nAC_MSG_RESULT([RPATH_EXTRA        : ${RPATH_EXTRA}])\nAC_MSG_RESULT([])\nAC_MSG_RESULT([XSLTPROC           : ${XSLTPROC}])\nAC_MSG_RESULT([XSLROOT            : ${XSLROOT}])\nAC_MSG_RESULT([])\nAC_MSG_RESULT([PREFIX             : ${PREFIX}])\nAC_MSG_RESULT([BINDIR             : ${BINDIR}])\nAC_MSG_RESULT([DATADIR            : ${DATADIR}])\nAC_MSG_RESULT([INCLUDEDIR         : ${INCLUDEDIR}])\nAC_MSG_RESULT([LIBDIR             : ${LIBDIR}])\nAC_MSG_RESULT([MANDIR           
  : ${MANDIR}])\nAC_MSG_RESULT([])\nAC_MSG_RESULT([srcroot            : ${srcroot}])\nAC_MSG_RESULT([abs_srcroot        : ${abs_srcroot}])\nAC_MSG_RESULT([objroot            : ${objroot}])\nAC_MSG_RESULT([abs_objroot        : ${abs_objroot}])\nAC_MSG_RESULT([])\nAC_MSG_RESULT([JEMALLOC_PREFIX    : ${JEMALLOC_PREFIX}])\nAC_MSG_RESULT([JEMALLOC_PRIVATE_NAMESPACE])\nAC_MSG_RESULT([                   : ${JEMALLOC_PRIVATE_NAMESPACE}])\nAC_MSG_RESULT([install_suffix     : ${install_suffix}])\nAC_MSG_RESULT([malloc_conf        : ${config_malloc_conf}])\nAC_MSG_RESULT([documentation      : ${enable_doc}])\nAC_MSG_RESULT([shared libs        : ${enable_shared}])\nAC_MSG_RESULT([static libs        : ${enable_static}])\nAC_MSG_RESULT([autogen            : ${enable_autogen}])\nAC_MSG_RESULT([debug              : ${enable_debug}])\nAC_MSG_RESULT([stats              : ${enable_stats}])\nAC_MSG_RESULT([experimental_smallocx : ${enable_experimental_smallocx}])\nAC_MSG_RESULT([prof               : ${enable_prof}])\nAC_MSG_RESULT([prof-libunwind     : ${enable_prof_libunwind}])\nAC_MSG_RESULT([prof-libgcc        : ${enable_prof_libgcc}])\nAC_MSG_RESULT([prof-gcc           : ${enable_prof_gcc}])\nAC_MSG_RESULT([fill               : ${enable_fill}])\nAC_MSG_RESULT([utrace             : ${enable_utrace}])\nAC_MSG_RESULT([xmalloc            : ${enable_xmalloc}])\nAC_MSG_RESULT([log                : ${enable_log}])\nAC_MSG_RESULT([lazy_lock          : ${enable_lazy_lock}])\nAC_MSG_RESULT([cache-oblivious    : ${enable_cache_oblivious}])\nAC_MSG_RESULT([cxx                : ${enable_cxx}])\nAC_MSG_RESULT([===============================================================================])\n"
  },
  {
    "path": "deps/jemalloc/doc/html.xsl.in",
    "content": "<xsl:stylesheet xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" version=\"1.0\">\n  <xsl:import href=\"@XSLROOT@/html/docbook.xsl\"/>\n  <xsl:import href=\"@abs_srcroot@doc/stylesheet.xsl\"/>\n  <xsl:output method=\"xml\" encoding=\"utf-8\"/>\n</xsl:stylesheet>\n"
  },
  {
    "path": "deps/jemalloc/doc/jemalloc.xml.in",
    "content": "<?xml version='1.0' encoding='UTF-8'?>\n<?xml-stylesheet type=\"text/xsl\"\n        href=\"http://docbook.sourceforge.net/release/xsl/current/manpages/docbook.xsl\"?>\n<!DOCTYPE refentry PUBLIC \"-//OASIS//DTD DocBook XML V4.4//EN\"\n        \"http://www.oasis-open.org/docbook/xml/4.4/docbookx.dtd\" [\n]>\n\n<refentry>\n  <refentryinfo>\n    <title>User Manual</title>\n    <productname>jemalloc</productname>\n    <releaseinfo role=\"version\">@jemalloc_version@</releaseinfo>\n    <authorgroup>\n      <author>\n        <firstname>Jason</firstname>\n        <surname>Evans</surname>\n        <personblurb>Author</personblurb>\n      </author>\n    </authorgroup>\n  </refentryinfo>\n  <refmeta>\n    <refentrytitle>JEMALLOC</refentrytitle>\n    <manvolnum>3</manvolnum>\n  </refmeta>\n  <refnamediv>\n    <refdescriptor>jemalloc</refdescriptor>\n    <refname>jemalloc</refname>\n    <!-- Each refname causes a man page file to be created.  Only if this were\n         the system malloc(3) implementation would these files be appropriate.\n    <refname>malloc</refname>\n    <refname>calloc</refname>\n    <refname>posix_memalign</refname>\n    <refname>aligned_alloc</refname>\n    <refname>realloc</refname>\n    <refname>free</refname>\n    <refname>mallocx</refname>\n    <refname>rallocx</refname>\n    <refname>xallocx</refname>\n    <refname>sallocx</refname>\n    <refname>dallocx</refname>\n    <refname>sdallocx</refname>\n    <refname>nallocx</refname>\n    <refname>mallctl</refname>\n    <refname>mallctlnametomib</refname>\n    <refname>mallctlbymib</refname>\n    <refname>malloc_stats_print</refname>\n    <refname>malloc_usable_size</refname>\n    -->\n    <refpurpose>general purpose memory allocation functions</refpurpose>\n  </refnamediv>\n  <refsect1 id=\"library\">\n    <title>LIBRARY</title>\n    <para>This manual describes jemalloc @jemalloc_version@.  
More information\n    can be found at the <ulink\n    url=\"http://jemalloc.net/\">jemalloc website</ulink>.</para>\n  </refsect1>\n  <refsynopsisdiv>\n    <title>SYNOPSIS</title>\n    <funcsynopsis>\n      <funcsynopsisinfo>#include &lt;<filename class=\"headerfile\">jemalloc/jemalloc.h</filename>&gt;</funcsynopsisinfo>\n      <refsect2>\n        <title>Standard API</title>\n        <funcprototype>\n          <funcdef>void *<function>malloc</function></funcdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void *<function>calloc</function></funcdef>\n          <paramdef>size_t <parameter>number</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>int <function>posix_memalign</function></funcdef>\n          <paramdef>void **<parameter>ptr</parameter></paramdef>\n          <paramdef>size_t <parameter>alignment</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void *<function>aligned_alloc</function></funcdef>\n          <paramdef>size_t <parameter>alignment</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void *<function>realloc</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void <function>free</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n        </funcprototype>\n      </refsect2>\n      <refsect2>\n        <title>Non-standard API</title>\n        <funcprototype>\n          <funcdef>void *<function>mallocx</function></funcdef>\n        
  <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void *<function>rallocx</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>size_t <function>xallocx</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>extra</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>size_t <function>sallocx</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void <function>dallocx</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void <function>sdallocx</function></funcdef>\n          <paramdef>void *<parameter>ptr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>size_t <function>nallocx</function></funcdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>int <parameter>flags</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>int <function>mallctl</function></funcdef>\n          
<paramdef>const char *<parameter>name</parameter></paramdef>\n          <paramdef>void *<parameter>oldp</parameter></paramdef>\n          <paramdef>size_t *<parameter>oldlenp</parameter></paramdef>\n          <paramdef>void *<parameter>newp</parameter></paramdef>\n          <paramdef>size_t <parameter>newlen</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>int <function>mallctlnametomib</function></funcdef>\n          <paramdef>const char *<parameter>name</parameter></paramdef>\n          <paramdef>size_t *<parameter>mibp</parameter></paramdef>\n          <paramdef>size_t *<parameter>miblenp</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>int <function>mallctlbymib</function></funcdef>\n          <paramdef>const size_t *<parameter>mib</parameter></paramdef>\n          <paramdef>size_t <parameter>miblen</parameter></paramdef>\n          <paramdef>void *<parameter>oldp</parameter></paramdef>\n          <paramdef>size_t *<parameter>oldlenp</parameter></paramdef>\n          <paramdef>void *<parameter>newp</parameter></paramdef>\n          <paramdef>size_t <parameter>newlen</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void <function>malloc_stats_print</function></funcdef>\n          <paramdef>void <parameter>(*write_cb)</parameter>\n            <funcparams>void *, const char *</funcparams>\n          </paramdef>\n          <paramdef>void *<parameter>cbopaque</parameter></paramdef>\n          <paramdef>const char *<parameter>opts</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>size_t <function>malloc_usable_size</function></funcdef>\n          <paramdef>const void *<parameter>ptr</parameter></paramdef>\n        </funcprototype>\n        <funcprototype>\n          <funcdef>void <function>(*malloc_message)</function></funcdef>\n          <paramdef>void 
*<parameter>cbopaque</parameter></paramdef>\n          <paramdef>const char *<parameter>s</parameter></paramdef>\n        </funcprototype>\n        <para><type>const char *</type><varname>malloc_conf</varname>;</para>\n      </refsect2>\n    </funcsynopsis>\n  </refsynopsisdiv>\n  <refsect1 id=\"description\">\n    <title>DESCRIPTION</title>\n    <refsect2>\n      <title>Standard API</title>\n\n      <para>The <function>malloc()</function> function allocates\n      <parameter>size</parameter> bytes of uninitialized memory.  The allocated\n      space is suitably aligned (after possible pointer coercion) for storage\n      of any type of object.</para>\n\n      <para>The <function>calloc()</function> function allocates\n      space for <parameter>number</parameter> objects, each\n      <parameter>size</parameter> bytes in length.  The result is identical to\n      calling <function>malloc()</function> with an argument of\n      <parameter>number</parameter> * <parameter>size</parameter>, with the\n      exception that the allocated memory is explicitly initialized to zero\n      bytes.</para>\n\n      <para>The <function>posix_memalign()</function> function\n      allocates <parameter>size</parameter> bytes of memory such that the\n      allocation's base address is a multiple of\n      <parameter>alignment</parameter>, and returns the allocation in the value\n      pointed to by <parameter>ptr</parameter>.  The requested\n      <parameter>alignment</parameter> must be a power of 2 at least as large as\n      <code language=\"C\">sizeof(<type>void *</type>)</code>.</para>\n\n      <para>The <function>aligned_alloc()</function> function\n      allocates <parameter>size</parameter> bytes of memory such that the\n      allocation's base address is a multiple of\n      <parameter>alignment</parameter>.  The requested\n      <parameter>alignment</parameter> must be a power of 2.  
Behavior is\n      undefined if <parameter>size</parameter> is not an integral multiple of\n      <parameter>alignment</parameter>.</para>\n\n      <para>The <function>realloc()</function> function changes the\n      size of the previously allocated memory referenced by\n      <parameter>ptr</parameter> to <parameter>size</parameter> bytes.  The\n      contents of the memory are unchanged up to the lesser of the new and old\n      sizes.  If the new size is larger, the contents of the newly allocated\n      portion of the memory are undefined.  Upon success, the memory referenced\n      by <parameter>ptr</parameter> is freed and a pointer to the newly\n      allocated memory is returned.  Note that\n      <function>realloc()</function> may move the memory allocation,\n      resulting in a different return value than <parameter>ptr</parameter>.\n      If <parameter>ptr</parameter> is <constant>NULL</constant>, the\n      <function>realloc()</function> function behaves identically to\n      <function>malloc()</function> for the specified size.</para>\n\n      <para>The <function>free()</function> function causes the\n      allocated memory referenced by <parameter>ptr</parameter> to be made\n      available for future allocations.  If <parameter>ptr</parameter> is\n      <constant>NULL</constant>, no action occurs.</para>\n    </refsect2>\n    <refsect2>\n      <title>Non-standard API</title>\n      <para>The <function>mallocx()</function>,\n      <function>rallocx()</function>,\n      <function>xallocx()</function>,\n      <function>sallocx()</function>,\n      <function>dallocx()</function>,\n      <function>sdallocx()</function>, and\n      <function>nallocx()</function> functions all have a\n      <parameter>flags</parameter> argument that can be used to specify\n      options.  The functions only check the options that are contextually\n      relevant.  
Use bitwise or (<code language=\"C\">|</code>) operations to\n      specify one or more of the following:\n        <variablelist>\n          <varlistentry id=\"MALLOCX_LG_ALIGN\">\n            <term><constant>MALLOCX_LG_ALIGN(<parameter>la</parameter>)\n            </constant></term>\n\n            <listitem><para>Align the memory allocation to start at an address\n            that is a multiple of <code language=\"C\">(1 &lt;&lt;\n            <parameter>la</parameter>)</code>.  This macro does not validate\n            that <parameter>la</parameter> is within the valid\n            range.</para></listitem>\n          </varlistentry>\n          <varlistentry id=\"MALLOCX_ALIGN\">\n            <term><constant>MALLOCX_ALIGN(<parameter>a</parameter>)\n            </constant></term>\n\n            <listitem><para>Align the memory allocation to start at an address\n            that is a multiple of <parameter>a</parameter>, where\n            <parameter>a</parameter> is a power of two.  This macro does not\n            validate that <parameter>a</parameter> is a power of 2.\n            </para></listitem>\n          </varlistentry>\n          <varlistentry id=\"MALLOCX_ZERO\">\n            <term><constant>MALLOCX_ZERO</constant></term>\n\n            <listitem><para>Initialize newly allocated memory to contain zero\n            bytes.  In the growing reallocation case, the real size prior to\n            reallocation defines the boundary between untouched bytes and those\n            that are initialized to contain zero bytes.  
If this macro is\n            absent, newly allocated memory is uninitialized.</para></listitem>\n          </varlistentry>\n          <varlistentry id=\"MALLOCX_TCACHE\">\n            <term><constant>MALLOCX_TCACHE(<parameter>tc</parameter>)\n            </constant></term>\n\n            <listitem><para>Use the thread-specific cache (tcache) specified by\n            the identifier <parameter>tc</parameter>, which must have been\n            acquired via the <link\n            linkend=\"tcache.create\"><mallctl>tcache.create</mallctl></link>\n            mallctl.  This macro does not validate that\n            <parameter>tc</parameter> specifies a valid\n            identifier.</para></listitem>\n          </varlistentry>\n          <varlistentry id=\"MALLOC_TCACHE_NONE\">\n            <term><constant>MALLOCX_TCACHE_NONE</constant></term>\n\n            <listitem><para>Do not use a thread-specific cache (tcache).  Unless\n            <constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant> or\n            <constant>MALLOCX_TCACHE_NONE</constant> is specified, an\n            automatically managed tcache will be used under many circumstances.\n            This macro cannot be used in the same <parameter>flags</parameter>\n            argument as\n            <constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant>.</para></listitem>\n          </varlistentry>\n          <varlistentry id=\"MALLOCX_ARENA\">\n            <term><constant>MALLOCX_ARENA(<parameter>a</parameter>)\n            </constant></term>\n\n            <listitem><para>Use the arena specified by the index\n            <parameter>a</parameter>.  This macro has no effect for regions that\n            were allocated via an arena other than the one specified.  
This\n            macro does not validate that <parameter>a</parameter> specifies an\n            arena index in the valid range.</para></listitem>\n          </varlistentry>\n        </variablelist>\n      </para>\n\n      <para>The <function>mallocx()</function> function allocates at\n      least <parameter>size</parameter> bytes of memory, and returns a pointer\n      to the base address of the allocation.  Behavior is undefined if\n      <parameter>size</parameter> is <constant>0</constant>.</para>\n\n      <para>The <function>rallocx()</function> function resizes the\n      allocation at <parameter>ptr</parameter> to be at least\n      <parameter>size</parameter> bytes, and returns a pointer to the base\n      address of the resulting allocation, which may or may not have moved from\n      its original location.  Behavior is undefined if\n      <parameter>size</parameter> is <constant>0</constant>.</para>\n\n      <para>The <function>xallocx()</function> function resizes the\n      allocation at <parameter>ptr</parameter> in place to be at least\n      <parameter>size</parameter> bytes, and returns the real size of the\n      allocation.  
If <parameter>extra</parameter> is non-zero, an attempt is\n      made to resize the allocation to be at least <code\n      language=\"C\">(<parameter>size</parameter> +\n      <parameter>extra</parameter>)</code> bytes, though inability to allocate\n      the extra byte(s) will not by itself result in failure to resize.\n      Behavior is undefined if <parameter>size</parameter> is\n      <constant>0</constant>, or if <code\n      language=\"C\">(<parameter>size</parameter> + <parameter>extra</parameter>\n      &gt; <constant>SIZE_T_MAX</constant>)</code>.</para>\n\n      <para>The <function>sallocx()</function> function returns the\n      real size of the allocation at <parameter>ptr</parameter>.</para>\n\n      <para>The <function>dallocx()</function> function causes the\n      memory referenced by <parameter>ptr</parameter> to be made available for\n      future allocations.</para>\n\n      <para>The <function>sdallocx()</function> function is an\n      extension of <function>dallocx()</function> with a\n      <parameter>size</parameter> parameter to allow the caller to pass in the\n      allocation size as an optimization.  The minimum valid input size is the\n      original requested size of the allocation, and the maximum valid input\n      size is the corresponding value returned by\n      <function>nallocx()</function> or\n      <function>sallocx()</function>.</para>\n\n      <para>The <function>nallocx()</function> function allocates no\n      memory, but it performs the same size computation as the\n      <function>mallocx()</function> function, and returns the real\n      size of the allocation that would result from the equivalent\n      <function>mallocx()</function> function call, or\n      <constant>0</constant> if the inputs exceed the maximum supported size\n      class and/or alignment.  
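</para>\n\n      <para>As a minimal illustrative sketch (not part of the interface\n      specification; it assumes the <filename>jemalloc/jemalloc.h</filename>\n      header is available), an aligned, zero-filled allocation can be\n      obtained, measured, and freed via the functions above: <programlisting\n      language=\"C\"><![CDATA[\n#include <assert.h>\n#include <jemalloc/jemalloc.h>\n\nint main(void) {\n\t/* At least 100 bytes, 64-byte aligned, zero-filled. */\n\tint flags = MALLOCX_ALIGN(64) | MALLOCX_ZERO;\n\tvoid *p = mallocx(100, flags);\n\tassert(p != NULL);\n\n\t/* nallocx() performs the same size computation without allocating. */\n\tassert(nallocx(100, flags) == sallocx(p, flags));\n\n\t/* Pass the original request size back as a deallocation hint. */\n\tsdallocx(p, 100, flags);\n\treturn 0;\n}]]></programlisting>\n      The same <parameter>flags</parameter> value is passed to every call so\n      that all of them compute the same size class.</para>\n\n      <para>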
Behavior is undefined if\n      <parameter>size</parameter> is <constant>0</constant>.</para>\n\n      <para>The <function>mallctl()</function> function provides a\n      general interface for introspecting the memory allocator, as well as\n      setting modifiable parameters and triggering actions.  The\n      period-separated <parameter>name</parameter> argument specifies a\n      location in a tree-structured namespace; see the <xref\n      linkend=\"mallctl_namespace\" xrefstyle=\"template:%t\"/> section for\n      documentation on the tree contents.  To read a value, pass a pointer via\n      <parameter>oldp</parameter> to adequate space to contain the value, and a\n      pointer to its length via <parameter>oldlenp</parameter>; otherwise pass\n      <constant>NULL</constant> and <constant>NULL</constant>.  Similarly, to\n      write a value, pass a pointer to the value via\n      <parameter>newp</parameter>, and its length via\n      <parameter>newlen</parameter>; otherwise pass <constant>NULL</constant>\n      and <constant>0</constant>.</para>\n\n      <para>The <function>mallctlnametomib()</function> function\n      provides a way to avoid repeated name lookups for applications that\n      repeatedly query the same portion of the namespace, by translating a name\n      to a <quote>Management Information Base</quote> (MIB) that can be passed\n      repeatedly to <function>mallctlbymib()</function>.  Upon\n      successful return from <function>mallctlnametomib()</function>,\n      <parameter>mibp</parameter> contains an array of\n      <parameter>*miblenp</parameter> integers, where\n      <parameter>*miblenp</parameter> is the lesser of the number of components\n      in <parameter>name</parameter> and the input value of\n      <parameter>*miblenp</parameter>.  
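</para>\n\n      <para>The read and write conventions of\n      <function>mallctl()</function> described above can be sketched as\n      follows (a hedged example; error returns are only asserted on):\n      <programlisting language=\"C\"><![CDATA[\n#include <assert.h>\n#include <stdint.h>\n#include <jemalloc/jemalloc.h>\n\nint main(void) {\n\t/* Read: pass storage for the value and a pointer to its length. */\n\tunsigned narenas;\n\tsize_t len = sizeof(narenas);\n\tassert(mallctl(\"opt.narenas\", &narenas, &len, NULL, 0) == 0);\n\n\t/* Write while reading back the old value: advance the epoch so\n\t * that subsequently read statistics are refreshed. */\n\tuint64_t epoch = 1, old_epoch;\n\tsize_t old_len = sizeof(old_epoch);\n\tassert(mallctl(\"epoch\", &old_epoch, &old_len, &epoch,\n\t    sizeof(epoch)) == 0);\n\treturn 0;\n}]]></programlisting></para>\n\n      <para>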
Thus it is possible to pass a\n      <parameter>*miblenp</parameter> that is smaller than the number of\n      period-separated name components, which results in a partial MIB that can\n      be used as the basis for constructing a complete MIB.  For name\n      components that are integers (e.g. the 2 in\n      <link\n      linkend=\"arenas.bin.i.size\"><mallctl>arenas.bin.2.size</mallctl></link>),\n      the corresponding MIB component will always be that integer.  Therefore,\n      it is legitimate to construct code like the following: <programlisting\n      language=\"C\"><![CDATA[\nunsigned nbins, i;\nsize_t mib[4];\nsize_t len, miblen;\n\nlen = sizeof(nbins);\nmallctl(\"arenas.nbins\", &nbins, &len, NULL, 0);\n\nmiblen = 4;\nmallctlnametomib(\"arenas.bin.0.size\", mib, &miblen);\nfor (i = 0; i < nbins; i++) {\n\tsize_t bin_size;\n\n\tmib[2] = i;\n\tlen = sizeof(bin_size);\n\tmallctlbymib(mib, miblen, (void *)&bin_size, &len, NULL, 0);\n\t/* Do something with bin_size... */\n}]]></programlisting></para>\n\n      <varlistentry id=\"malloc_stats_print_opts\">\n      </varlistentry>\n      <para>The <function>malloc_stats_print()</function> function writes\n      summary statistics via the <parameter>write_cb</parameter> callback\n      function pointer and <parameter>cbopaque</parameter> data passed to\n      <parameter>write_cb</parameter>, or <function>malloc_message()</function>\n      if <parameter>write_cb</parameter> is <constant>NULL</constant>.  The\n      statistics are presented in human-readable form unless <quote>J</quote> is\n      specified as a character within the <parameter>opts</parameter> string, in\n      which case the statistics are presented in <ulink\n      url=\"http://www.json.org/\">JSON format</ulink>.  This function can be\n      called repeatedly.  General information that never changes during\n      execution can be omitted by specifying <quote>g</quote> as a character\n      within the <parameter>opts</parameter> string.  
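</para>\n\n      <para>As a sketch, the callback interface can be used to capture the\n      JSON report into a caller-owned buffer (the buffer size below is an\n      arbitrary choice for illustration): <programlisting\n      language=\"C\"><![CDATA[\n#include <stdio.h>\n#include <string.h>\n#include <jemalloc/jemalloc.h>\n\nstruct sink {\n\tchar buf[65536];\n\tsize_t len;\n};\n\n/* Called repeatedly with cbopaque and a chunk of report text. */\nstatic void append_cb(void *opaque, const char *s) {\n\tstruct sink *k = opaque;\n\tsize_t n = strlen(s);\n\tif (n > sizeof(k->buf) - k->len - 1)\n\t\tn = sizeof(k->buf) - k->len - 1;\n\tmemcpy(k->buf + k->len, s, n);\n\tk->len += n;\n\tk->buf[k->len] = 0;\n}\n\nint main(void) {\n\tstatic struct sink k;\n\t/* \"J\" selects JSON output; \"g\" omits general information. */\n\tmalloc_stats_print(append_cb, &k, \"Jg\");\n\tfputs(k.buf, stdout);\n\treturn 0;\n}]]></programlisting></para>\n\n      <para>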
Note that\n      <function>malloc_stats_print()</function> uses the\n      <function>mallctl*()</function> functions internally, so inconsistent\n      statistics can be reported if multiple threads use these functions\n      simultaneously.  If <option>--enable-stats</option> is specified during\n      configuration, <quote>m</quote>, <quote>d</quote>, and <quote>a</quote>\n      can be specified to omit merged arena, destroyed merged arena, and per\n      arena statistics, respectively; <quote>b</quote> and <quote>l</quote> can\n      be specified to omit per size class statistics for bins and large objects,\n      respectively; <quote>x</quote> can be specified to omit all mutex\n      statistics; <quote>e</quote> can be used to omit extent statistics.\n      Unrecognized characters are silently ignored.  Note that thread caching\n      may prevent some statistics from being completely up to date, since extra\n      locking would be required to merge counters that track thread cache\n      operations.</para>\n\n      <para>The <function>malloc_usable_size()</function> function\n      returns the usable size of the allocation pointed to by\n      <parameter>ptr</parameter>.  The return value may be larger than the size\n      that was requested during allocation.  The\n      <function>malloc_usable_size()</function> function is not a\n      mechanism for in-place <function>realloc()</function>; rather\n      it is provided solely as a tool for introspection purposes.  
Any\n      discrepancy between the requested allocation size and the size reported\n      by <function>malloc_usable_size()</function> should not be\n      depended on, since such behavior is entirely implementation-dependent.\n      </para>\n    </refsect2>\n  </refsect1>\n  <refsect1 id=\"tuning\">\n    <title>TUNING</title>\n    <para>Once, when the first call is made to one of the memory allocation\n    routines, the allocator initializes its internals based in part on various\n    options that can be specified at compile- or run-time.</para>\n\n    <para>The string specified via <option>--with-malloc-conf</option>, the\n    string pointed to by the global variable <varname>malloc_conf</varname>, the\n    <quote>name</quote> of the file referenced by the symbolic link named\n    <filename class=\"symlink\">/etc/malloc.conf</filename>, and the value of the\n    environment variable <envar>MALLOC_CONF</envar>, will be interpreted, in\n    that order, from left to right as options.  Note that\n    <varname>malloc_conf</varname> may be read before\n    <function>main()</function> is entered, so the declaration of\n    <varname>malloc_conf</varname> should specify an initializer that contains\n    the final value to be read by jemalloc.  <option>--with-malloc-conf</option>\n    and <varname>malloc_conf</varname> are compile-time mechanisms, whereas\n    <filename class=\"symlink\">/etc/malloc.conf</filename> and\n    <envar>MALLOC_CONF</envar> can be safely set any time prior to program\n    invocation.</para>\n\n    <para>An options string is a comma-separated list of option:value pairs.\n    There is one key corresponding to each <link\n    linkend=\"opt.abort\"><mallctl>opt.*</mallctl></link> mallctl (see the <xref\n    linkend=\"mallctl_namespace\" xrefstyle=\"template:%t\"/> section for options\n    documentation).  
For example, <literal>abort:true,narenas:1</literal> sets\n    the <link linkend=\"opt.abort\"><mallctl>opt.abort</mallctl></link> and <link\n    linkend=\"opt.narenas\"><mallctl>opt.narenas</mallctl></link> options.  Some\n    options have boolean values (true/false), others have integer values (base\n    8, 10, or 16, depending on prefix), and yet others have raw string\n    values.</para>\n  </refsect1>\n  <refsect1 id=\"implementation_notes\">\n    <title>IMPLEMENTATION NOTES</title>\n    <para>Traditionally, allocators have used\n    <citerefentry><refentrytitle>sbrk</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry> to obtain memory, which is\n    suboptimal for several reasons, including race conditions, increased\n    fragmentation, and artificial limitations on maximum usable memory.  If\n    <citerefentry><refentrytitle>sbrk</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry> is supported by the operating\n    system, this allocator uses both\n    <citerefentry><refentrytitle>mmap</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry> and\n    <citerefentry><refentrytitle>sbrk</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry>, in that order of preference;\n    otherwise only <citerefentry><refentrytitle>mmap</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry> is used.</para>\n\n    <para>This allocator uses multiple arenas in order to reduce lock\n    contention for threaded programs on multi-processor systems.  This works\n    well with regard to threading scalability, but incurs some costs.  There is\n    a small fixed per-arena overhead, and additionally, arenas manage memory\n    completely independently of each other, which means a small fixed increase\n    in overall memory fragmentation.  These overheads are not generally an\n    issue, given the number of arenas normally used.  
Note that using\n    substantially more arenas than the default is not likely to improve\n    performance, mainly due to reduced cache performance.  However, it may make\n    sense to reduce the number of arenas if an application does not make much\n    use of the allocation functions.</para>\n\n    <para>In addition to multiple arenas, this allocator supports\n    thread-specific caching, in order to make it possible to completely avoid\n    synchronization for most allocation requests.  Such caching allows very fast\n    allocation in the common case, but it increases memory usage and\n    fragmentation, since a bounded number of objects can remain allocated in\n    each thread cache.</para>\n\n    <para>Memory is conceptually broken into extents.  Extents are always\n    aligned to multiples of the page size.  This alignment makes it possible to\n    find metadata for user objects quickly.  User objects are broken into two\n    categories according to size: small and large.  Contiguous small objects\n    comprise a slab, which resides within a single extent, whereas large objects\n    each have their own extents backing them.</para>\n\n    <para>Small objects are managed in groups by slabs.  Each slab maintains\n    a bitmap to track which regions are in use.  Allocation requests that are no\n    more than half the quantum (8 or 16, depending on architecture) are rounded\n    up to the nearest power of two that is at least <code\n    language=\"C\">sizeof(<type>double</type>)</code>.  All other object size\n    classes are multiples of the quantum, spaced such that there are four size\n    classes for each doubling in size, which limits internal fragmentation to\n    approximately 20% for all but the smallest size classes.  
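</para>\n\n    <para>Under the default configuration this rounding can be observed with\n    <function>nallocx()</function>; the exact values below assume common\n    size-class settings and are meant only as an illustration:\n    <programlisting language=\"C\"><![CDATA[\n#include <assert.h>\n#include <jemalloc/jemalloc.h>\n\nint main(void) {\n\t/* A 9-byte request rounds up to the 16-byte size class. */\n\tassert(nallocx(9, 0) == 16);\n\t/* Between 64 and 128 bytes the classes are spaced 16 bytes apart\n\t * (four classes per doubling): 80, 96, 112, 128. */\n\tassert(nallocx(100, 0) == 112);\n\treturn 0;\n}]]></programlisting></para>\n\n    <para>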
Small size classes\n    are smaller than four times the page size, and large size classes extend\n    from four times the page size up to the largest size class that does not\n    exceed <constant>PTRDIFF_MAX</constant>.</para>\n\n    <para>Allocations are packed tightly together, which can be an issue for\n    multi-threaded applications.  If you need to assure that allocations do not\n    suffer from cacheline sharing, round your allocation requests up to the\n    nearest multiple of the cacheline size, or specify cacheline alignment when\n    allocating.</para>\n\n    <para>The <function>realloc()</function>,\n    <function>rallocx()</function>, and\n    <function>xallocx()</function> functions may resize allocations\n    without moving them under limited circumstances.  Unlike the\n    <function>*allocx()</function> API, the standard API does not\n    officially round up the usable size of an allocation to the nearest size\n    class, so technically it is necessary to call\n    <function>realloc()</function> to grow e.g. a 9-byte allocation to\n    16 bytes, or shrink a 16-byte allocation to 9 bytes.  Growth and shrinkage\n    trivially succeeds in place as long as the pre-size and post-size both round\n    up to the same size class.  
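</para>\n\n    <para>For example (a sketch), <function>xallocx()</function> can be used\n    to attempt an in-place grow, falling back to\n    <function>rallocx()</function> when the allocation must move:\n    <programlisting language=\"C\"><![CDATA[\n#include <assert.h>\n#include <jemalloc/jemalloc.h>\n\nint main(void) {\n\tvoid *p = mallocx(9, 0);\t/* lands in the 16-byte size class */\n\tassert(p != NULL);\n\n\t/* Growing within the same size class succeeds in place. */\n\tassert(xallocx(p, 16, 0, 0) >= 16);\n\n\t/* xallocx() never moves the allocation; a return value smaller\n\t * than the request means the in-place grow failed. */\n\tif (xallocx(p, 4096, 0, 0) < 4096) {\n\t\tp = rallocx(p, 4096, 0);\t/* may move */\n\t\tassert(p != NULL);\n\t}\n\tdallocx(p, 0);\n\treturn 0;\n}]]></programlisting></para>\n\n    <para>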
No other API guarantees are made regarding\n    in-place resizing, but the current implementation also tries to resize large\n    allocations in place, as long as the pre-size and post-size are both large.\n    For shrinkage to succeed, the extent allocator must support splitting (see\n    <link\n    linkend=\"arena.i.extent_hooks\"><mallctl>arena.&lt;i&gt;.extent_hooks</mallctl></link>).\n    Growth only succeeds if the trailing memory is currently available, and the\n    extent allocator supports merging.</para>\n\n    <para>Assuming 4 KiB pages and a 16-byte quantum on a 64-bit system, the\n    size classes in each category are as shown in <xref linkend=\"size_classes\"\n    xrefstyle=\"template:Table %n\"/>.</para>\n\n    <table xml:id=\"size_classes\" frame=\"all\">\n      <title>Size classes</title>\n      <tgroup cols=\"3\" colsep=\"1\" rowsep=\"1\">\n      <colspec colname=\"c1\" align=\"left\"/>\n      <colspec colname=\"c2\" align=\"right\"/>\n      <colspec colname=\"c3\" align=\"left\"/>\n      <thead>\n        <row>\n          <entry>Category</entry>\n          <entry>Spacing</entry>\n          <entry>Size</entry>\n        </row>\n      </thead>\n      <tbody>\n        <row>\n          <entry morerows=\"8\">Small</entry>\n          <entry>lg</entry>\n          <entry>[8]</entry>\n        </row>\n        <row>\n          <entry>16</entry>\n          <entry>[16, 32, 48, 64, 80, 96, 112, 128]</entry>\n        </row>\n        <row>\n          <entry>32</entry>\n          <entry>[160, 192, 224, 256]</entry>\n        </row>\n        <row>\n          <entry>64</entry>\n          <entry>[320, 384, 448, 512]</entry>\n        </row>\n        <row>\n          <entry>128</entry>\n          <entry>[640, 768, 896, 1024]</entry>\n        </row>\n        <row>\n          <entry>256</entry>\n          <entry>[1280, 1536, 1792, 2048]</entry>\n        </row>\n        <row>\n          <entry>512</entry>\n          <entry>[2560, 3072, 3584, 4096]</entry>\n        </row>\n  
      <row>\n          <entry>1 KiB</entry>\n          <entry>[5 KiB, 6 KiB, 7 KiB, 8 KiB]</entry>\n        </row>\n        <row>\n          <entry>2 KiB</entry>\n          <entry>[10 KiB, 12 KiB, 14 KiB]</entry>\n        </row>\n        <row>\n          <entry morerows=\"15\">Large</entry>\n          <entry>2 KiB</entry>\n          <entry>[16 KiB]</entry>\n        </row>\n        <row>\n          <entry>4 KiB</entry>\n          <entry>[20 KiB, 24 KiB, 28 KiB, 32 KiB]</entry>\n        </row>\n        <row>\n          <entry>8 KiB</entry>\n          <entry>[40 KiB, 48 KiB, 56 KiB, 64 KiB]</entry>\n        </row>\n        <row>\n          <entry>16 KiB</entry>\n          <entry>[80 KiB, 96 KiB, 112 KiB, 128 KiB]</entry>\n        </row>\n        <row>\n          <entry>32 KiB</entry>\n          <entry>[160 KiB, 192 KiB, 224 KiB, 256 KiB]</entry>\n        </row>\n        <row>\n          <entry>64 KiB</entry>\n          <entry>[320 KiB, 384 KiB, 448 KiB, 512 KiB]</entry>\n        </row>\n        <row>\n          <entry>128 KiB</entry>\n          <entry>[640 KiB, 768 KiB, 896 KiB, 1 MiB]</entry>\n        </row>\n        <row>\n          <entry>256 KiB</entry>\n          <entry>[1280 KiB, 1536 KiB, 1792 KiB, 2 MiB]</entry>\n        </row>\n        <row>\n          <entry>512 KiB</entry>\n          <entry>[2560 KiB, 3 MiB, 3584 KiB, 4 MiB]</entry>\n        </row>\n        <row>\n          <entry>1 MiB</entry>\n          <entry>[5 MiB, 6 MiB, 7 MiB, 8 MiB]</entry>\n        </row>\n        <row>\n          <entry>2 MiB</entry>\n          <entry>[10 MiB, 12 MiB, 14 MiB, 16 MiB]</entry>\n        </row>\n        <row>\n          <entry>4 MiB</entry>\n          <entry>[20 MiB, 24 MiB, 28 MiB, 32 MiB]</entry>\n        </row>\n        <row>\n          <entry>8 MiB</entry>\n          <entry>[40 MiB, 48 MiB, 56 MiB, 64 MiB]</entry>\n        </row>\n        <row>\n          <entry>...</entry>\n          <entry>...</entry>\n        </row>\n        <row>\n          <entry>512 
PiB</entry>\n          <entry>[2560 PiB, 3 EiB, 3584 PiB, 4 EiB]</entry>\n        </row>\n        <row>\n          <entry>1 EiB</entry>\n          <entry>[5 EiB, 6 EiB, 7 EiB]</entry>\n        </row>\n      </tbody>\n      </tgroup>\n    </table>\n  </refsect1>\n  <refsect1 id=\"mallctl_namespace\">\n    <title>MALLCTL NAMESPACE</title>\n    <para>The following names are defined in the namespace accessible via the\n    <function>mallctl*()</function> functions.  Value types are specified in\n    parentheses, their readable/writable statuses are encoded as\n    <literal>rw</literal>, <literal>r-</literal>, <literal>-w</literal>, or\n    <literal>--</literal>, and required build configuration flags follow, if\n    any.  A name element encoded as <literal>&lt;i&gt;</literal> or\n    <literal>&lt;j&gt;</literal> indicates an integer component, where the\n    integer varies from 0 to some upper value that must be determined via\n    introspection.  In the case of <mallctl>stats.arenas.&lt;i&gt;.*</mallctl>\n    and <mallctl>arena.&lt;i&gt;.{initialized,purge,decay,dss}</mallctl>,\n    <literal>&lt;i&gt;</literal> equal to\n    <constant>MALLCTL_ARENAS_ALL</constant> can be used to operate on all arenas\n    or access the summation of statistics from all arenas; similarly\n    <literal>&lt;i&gt;</literal> equal to\n    <constant>MALLCTL_ARENAS_DESTROYED</constant> can be used to access the\n    summation of statistics from all destroyed arenas.  
These constants can be\n    utilized either via <function>mallctlnametomib()</function> followed by\n    <function>mallctlbymib()</function>, or via code such as the following:\n    <programlisting language=\"C\"><![CDATA[\n#define STRINGIFY_HELPER(x) #x\n#define STRINGIFY(x) STRINGIFY_HELPER(x)\n\nmallctl(\"arena.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".decay\",\n    NULL, NULL, NULL, 0);]]></programlisting>\n    Take special note of the <link\n    linkend=\"epoch\"><mallctl>epoch</mallctl></link> mallctl, which controls\n    refreshing of cached dynamic statistics.</para>\n\n    <variablelist>\n      <varlistentry id=\"version\">\n        <term>\n          <mallctl>version</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Return the jemalloc version string.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"epoch\">\n        <term>\n          <mallctl>epoch</mallctl>\n          (<type>uint64_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>If a value is passed in, refresh the data from which\n        the <function>mallctl*()</function> functions report values,\n        and increment the epoch.  Return the current epoch.  This is useful for\n        detecting whether another thread caused a refresh.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"background_thread\">\n        <term>\n          <mallctl>background_thread</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Enable/disable internal background worker threads.  When\n        set to true, background threads are created on demand (the number of\n        background threads will be no more than the number of CPUs or active\n        arenas).  Threads run periodically, and handle <link\n        linkend=\"arena.i.decay\">purging</link> asynchronously.  
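</para>\n\n        <para>A minimal sketch of toggling this at runtime (the write fails\n        on platforms where the feature is unavailable):\n        <programlisting language=\"C\"><![CDATA[\n#include <stdbool.h>\n#include <stdio.h>\n#include <jemalloc/jemalloc.h>\n\nint main(void) {\n\tbool enable = true, old;\n\tsize_t old_len = sizeof(old);\n\tif (mallctl(\"background_thread\", &old, &old_len, &enable,\n\t    sizeof(enable)) != 0)\n\t\tfputs(\"background threads unavailable\", stderr);\n\treturn 0;\n}]]></programlisting></para>\n\n        <para>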
When switching\n        off, background threads are terminated synchronously.  Note that after a\n        <citerefentry><refentrytitle>fork</refentrytitle><manvolnum>2</manvolnum></citerefentry>\n        call, the state in the child process will be disabled regardless of\n        the state in the parent process.  See <link\n        linkend=\"stats.background_thread.num_threads\"><mallctl>stats.background_thread</mallctl></link>\n        for related stats.  <link\n        linkend=\"opt.background_thread\"><mallctl>opt.background_thread</mallctl></link>\n        can be used to set the default option.  This option is only available on\n        selected pthread-based platforms.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"max_background_threads\">\n        <term>\n          <mallctl>max_background_threads</mallctl>\n          (<type>size_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Maximum number of background worker threads that will\n        be created.  
This value is capped at <link\n        linkend=\"opt.max_background_threads\"><mallctl>opt.max_background_threads</mallctl></link> at\n        startup.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.cache_oblivious\">\n        <term>\n          <mallctl>config.cache_oblivious</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-cache-oblivious</option> was specified\n        during build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.debug\">\n        <term>\n          <mallctl>config.debug</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-debug</option> was specified during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.fill\">\n        <term>\n          <mallctl>config.fill</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-fill</option> was specified during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.lazy_lock\">\n        <term>\n          <mallctl>config.lazy_lock</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-lazy-lock</option> was specified\n        during build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.malloc_conf\">\n        <term>\n          <mallctl>config.malloc_conf</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Embedded configure-time-specified run-time options\n        string, empty unless <option>--with-malloc-conf</option> was specified\n        during build configuration.</para></listitem>\n      
</varlistentry>\n\n      <varlistentry id=\"config.prof\">\n        <term>\n          <mallctl>config.prof</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-prof</option> was specified during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.prof_libgcc\">\n        <term>\n          <mallctl>config.prof_libgcc</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--disable-prof-libgcc</option> was not\n        specified during build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.prof_libunwind\">\n        <term>\n          <mallctl>config.prof_libunwind</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-prof-libunwind</option> was specified\n        during build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.stats\">\n        <term>\n          <mallctl>config.stats</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-stats</option> was specified during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n\n      <varlistentry id=\"config.utrace\">\n        <term>\n          <mallctl>config.utrace</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-utrace</option> was specified during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"config.xmalloc\">\n        <term>\n          <mallctl>config.xmalloc</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para><option>--enable-xmalloc</option> was specified 
during\n        build configuration.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.abort\">\n        <term>\n          <mallctl>opt.abort</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Abort-on-warning enabled/disabled.  If true, most\n        warnings are fatal.  Note that runtime option warnings are not included\n        (see <link\n        linkend=\"opt.abort_conf\"><mallctl>opt.abort_conf</mallctl></link> for\n        that). The process will call\n        <citerefentry><refentrytitle>abort</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> in these cases.  This option is\n        disabled by default unless <option>--enable-debug</option> is\n        specified during configuration, in which case it is enabled by default.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.confirm_conf\">\n        <term>\n          <mallctl>opt.confirm_conf</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n\t<listitem><para>Confirm-runtime-options-when-program-starts\n\tenabled/disabled.  If true, the string specified via\n\t<option>--with-malloc-conf</option>, the string pointed to by the\n\tglobal variable <varname>malloc_conf</varname>, the <quote>name</quote>\n\tof the file referenced by the symbolic link named\n\t<filename class=\"symlink\">/etc/malloc.conf</filename>, and the value of\n\tthe environment variable <envar>MALLOC_CONF</envar>, will be printed in\n\torder.  Then, each option being set will be individually printed.  This\n\toption is disabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.abort_conf\">\n        <term>\n          <mallctl>opt.abort_conf</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Abort-on-invalid-configuration enabled/disabled.  
If\n        true, invalid runtime options are fatal.  The process will call\n        <citerefentry><refentrytitle>abort</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> in these cases.  This option is\n        disabled by default unless <option>--enable-debug</option> is\n        specified during configuration, in which case it is enabled by default.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.cache_oblivious\">\n        <term>\n          <mallctl>opt.cache_oblivious</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Enable/disable cache-oblivious large allocation\n        alignment, for large requests with no alignment constraints.  If this\n        feature is disabled, all large allocations are page-aligned as an\n        implementation artifact, which can severely harm CPU cache utilization.\n        However, the cache-oblivious layout comes at the cost of one extra page\n        per large allocation, which in the most extreme case increases physical\n        memory usage for the 16 KiB size class to 20 KiB.  This option is enabled\n        by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.metadata_thp\">\n        <term>\n          <mallctl>opt.metadata_thp</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Controls whether to allow jemalloc to use transparent\n        huge pages (THP) for internal metadata (see <link\n        linkend=\"stats.metadata\">stats.metadata</link>).  <quote>always</quote>\n        allows such usage.  <quote>auto</quote> uses no THP initially, but may\n        begin to do so when metadata usage reaches a certain level.  
The default\n        is <quote>disabled</quote>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.trust_madvise\">\n        <term>\n          <mallctl>opt.trust_madvise</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>If true, do not perform a runtime check that\n        MADV_DONTNEED actually zeros pages.  The default is disabled on Linux\n        and enabled elsewhere.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.retain\">\n        <term>\n          <mallctl>opt.retain</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>If true, retain unused virtual memory for later reuse\n        rather than discarding it by calling\n        <citerefentry><refentrytitle>munmap</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> or equivalent (see <link\n        linkend=\"stats.retained\">stats.retained</link> for related details).\n        It also makes jemalloc use <citerefentry>\n        <refentrytitle>mmap</refentrytitle><manvolnum>2</manvolnum>\n        </citerefentry> or equivalent in a more greedy way, mapping larger\n        chunks in one go.  This option is disabled by default unless discarding\n        virtual memory is known to trigger platform-specific performance\n        problems, namely 1) for [64-bit] Linux, which has a quirk in its virtual\n        memory allocation algorithm that causes semi-permanent VM map holes\n        under normal jemalloc operation; and 2) for [64-bit] Windows, which\n        disallows split / merged regions with\n        <parameter><constant>MEM_RELEASE</constant></parameter>.  Although the\n        same issues may present on 32-bit platforms as well, retaining virtual\n        memory for 32-bit Linux and Windows is disabled by default due to the\n        practical possibility of address space exhaustion.  
</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.dss\">\n        <term>\n          <mallctl>opt.dss</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>dss (<citerefentry><refentrytitle>sbrk</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry>) allocation precedence as\n        related to <citerefentry><refentrytitle>mmap</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> allocation.  The following\n        settings are supported if\n        <citerefentry><refentrytitle>sbrk</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> is supported by the operating\n        system: <quote>disabled</quote>, <quote>primary</quote>, and\n        <quote>secondary</quote>; otherwise only <quote>disabled</quote> is\n        supported.  The default is <quote>secondary</quote> if\n        <citerefentry><refentrytitle>sbrk</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> is supported by the operating\n        system; <quote>disabled</quote> otherwise.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.narenas\">\n        <term>\n          <mallctl>opt.narenas</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum number of arenas to use for automatic\n        multiplexing of threads and arenas.  The default is four times the\n        number of CPUs, or one if there is a single CPU.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.oversize_threshold\">\n        <term>\n          <mallctl>opt.oversize_threshold</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>The threshold in bytes above which requests are considered\n        oversize.  
Allocation requests with greater sizes are fulfilled from a\n        dedicated arena (automatically managed, however not within\n        <literal>narenas</literal>), in order to reduce fragmentation by not\n        mixing huge allocations with small ones.  In addition, the decay API\n        guarantees on the extents greater than the specified threshold may be\n        overridden.  Note that requests with arena index specified via\n        <constant>MALLOCX_ARENA</constant>, or threads associated with explicit\n        arenas will not be considered.  The default threshold is 8 MiB.  Values\n        not within large size classes disable this feature.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.percpu_arena\">\n        <term>\n          <mallctl>opt.percpu_arena</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Per CPU arena mode.  Use the <quote>percpu</quote>\n        setting to enable this feature, which uses the number of CPUs to\n        determine the number of arenas, and binds threads to arenas dynamically\n        based on the CPU the thread currently runs on.  The <quote>phycpu</quote>\n        setting uses one arena per physical CPU, which means the two hyper\n        threads on the same CPU share one arena.  Note that no runtime checking\n        regarding the availability of hyper threading is done at the moment.\n        When set to <quote>disabled</quote>, narenas and the thread-to-arena\n        association will not be impacted by this option.  
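For example, per-CPU\n        arenas can be requested by including the following in the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"percpu_arena:percpu\";]]></programlisting>\n        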
The default is <quote>disabled</quote>.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.background_thread\">\n        <term>\n          <mallctl>opt.background_thread</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Internal background worker threads enabled/disabled.\n        Because of potential circular dependencies, enabling background threads\n        using this option may cause a crash or deadlock during initialization.\n        For a reliable way to use this feature, see <link\n        linkend=\"background_thread\">background_thread</link> for dynamic control\n        options and details.  This option is disabled by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.max_background_threads\">\n        <term>\n          <mallctl>opt.max_background_threads</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum number of background threads that will be created\n        if <link linkend=\"background_thread\">background_thread</link> is set.\n        Defaults to the number of CPUs.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.dirty_decay_ms\">\n        <term>\n          <mallctl>opt.dirty_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Approximate time in milliseconds from the creation of a\n        set of unused dirty pages until an equivalent set of unused dirty pages\n        is purged (i.e. converted to muzzy via e.g.\n        <function>madvise(<parameter>...</parameter><parameter><constant>MADV_FREE</constant></parameter>)</function>\n        if supported by the operating system, or converted to clean otherwise)\n        and/or reused.  
Dirty pages are defined as previously having been\n        potentially written to by the application, and therefore consuming\n        physical memory, yet having no current use.  The pages are incrementally\n        purged according to a sigmoidal decay curve that starts and ends with\n        zero purge rate.  A decay time of 0 causes all unused dirty pages to be\n        purged immediately upon creation.  A decay time of -1 disables purging.\n        The default decay time is 10 seconds.  See <link\n        linkend=\"arenas.dirty_decay_ms\"><mallctl>arenas.dirty_decay_ms</mallctl></link>\n        and <link\n        linkend=\"arena.i.dirty_decay_ms\"><mallctl>arena.&lt;i&gt;.dirty_decay_ms</mallctl></link>\n        for related dynamic control options.  See <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for a description of muzzy pages.  Note\n        that when the <link\n        linkend=\"opt.oversize_threshold\"><mallctl>oversize_threshold</mallctl></link>\n        feature is enabled, the arenas reserved for oversize requests may have\n        their own default decay settings.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.muzzy_decay_ms\">\n        <term>\n          <mallctl>opt.muzzy_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Approximate time in milliseconds from the creation of a\n        set of unused muzzy pages until an equivalent set of unused muzzy pages\n        is purged (i.e. converted to clean) and/or reused.  
Muzzy pages are\n        defined as previously having been unused dirty pages that were\n        subsequently purged in a manner that left them subject to the\n        reclamation whims of the operating system (e.g.\n        <function>madvise(<parameter>...</parameter><parameter><constant>MADV_FREE</constant></parameter>)</function>),\n        and therefore in an indeterminate state.  The pages are incrementally\n        purged according to a sigmoidal decay curve that starts and ends with\n        zero purge rate.  A decay time of 0 causes all unused muzzy pages to be\n        purged immediately upon creation.  A decay time of -1 disables purging.\n        The default decay time is 10 seconds.  See <link\n        linkend=\"arenas.muzzy_decay_ms\"><mallctl>arenas.muzzy_decay_ms</mallctl></link>\n        and <link\n        linkend=\"arena.i.muzzy_decay_ms\"><mallctl>arena.&lt;i&gt;.muzzy_decay_ms</mallctl></link>\n        for related dynamic control options.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.lg_extent_max_active_fit\">\n        <term>\n          <mallctl>opt.lg_extent_max_active_fit</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>When reusing dirty extents, this determines the (log\n        base 2 of the) maximum ratio between the size of the active extent\n        selected (to split off from) and the size of the requested allocation.\n        This prevents the splitting of large active extents for smaller\n        allocations, which can reduce fragmentation over the long run\n        (especially for non-active extents).  A lower value may reduce\n        fragmentation, at the cost of extra active extents.  
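For example, the\n        maximum ratio can be lowered to 16 (2^4) by including the following in\n        the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"lg_extent_max_active_fit:4\";]]></programlisting>\n        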
The default value\n        is 6, which gives a maximum ratio of 64 (2^6).</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.stats_print\">\n        <term>\n          <mallctl>opt.stats_print</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Enable/disable statistics printing at exit.  If\n        enabled, the <function>malloc_stats_print()</function>\n        function is called at program exit via an\n        <citerefentry><refentrytitle>atexit</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> function.  <link\n        linkend=\"opt.stats_print_opts\"><mallctl>opt.stats_print_opts</mallctl></link>\n        can be combined to specify output options. If\n        <option>--enable-stats</option> is specified during configuration, this\n        has the potential to cause deadlock for a multi-threaded process that\n        exits while one or more threads are executing in the memory allocation\n        functions.  Furthermore, <function>atexit()</function> may\n        allocate memory during application initialization and then deadlock\n        internally when jemalloc in turn calls\n        <function>atexit()</function>, so this option is not\n        universally usable (though the application can register its own\n        <function>atexit()</function> function with equivalent\n        functionality).  Therefore, this option should only be used with care;\n        it is primarily intended as a performance tuning aid during application\n        development.  
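For example, statistics printing\n        at exit can be enabled by including the following in the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"stats_print:true\";]]></programlisting>\n        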
This option is disabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.stats_print_opts\">\n        <term>\n          <mallctl>opt.stats_print_opts</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Options (the <parameter>opts</parameter> string) to pass\n        to the <function>malloc_stats_print()</function> at exit (enabled\n        through <link\n        linkend=\"opt.stats_print\"><mallctl>opt.stats_print</mallctl></link>). See\n        available options in <link\n        linkend=\"malloc_stats_print_opts\"><function>malloc_stats_print()</function></link>.\n        Has no effect unless <link\n        linkend=\"opt.stats_print\"><mallctl>opt.stats_print</mallctl></link> is\n        enabled.  The default is <quote></quote>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.stats_interval\">\n        <term>\n          <mallctl>opt.stats_interval</mallctl>\n          (<type>int64_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Average interval between statistics outputs, as measured\n        in bytes of allocation activity.  The actual interval may be sporadic\n        because decentralized event counters are used to avoid synchronization\n        bottlenecks.  The output may be triggered on any thread, which then\n        calls <function>malloc_stats_print()</function>.  <link\n        linkend=\"opt.stats_interval_opts\"><mallctl>opt.stats_interval_opts</mallctl></link>\n        can be combined to specify output options.  
By default,\n        interval-triggered stats output is disabled (encoded as\n        -1).</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.stats_interval_opts\">\n        <term>\n          <mallctl>opt.stats_interval_opts</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Options (the <parameter>opts</parameter> string) to pass\n        to <function>malloc_stats_print()</function> for interval-based\n        statistics printing (enabled\n        through <link\n        linkend=\"opt.stats_interval\"><mallctl>opt.stats_interval</mallctl></link>). See\n        available options in <link\n        linkend=\"malloc_stats_print_opts\"><function>malloc_stats_print()</function></link>.\n        Has no effect unless <link\n        linkend=\"opt.stats_interval\"><mallctl>opt.stats_interval</mallctl></link> is\n        enabled.  The default is <quote></quote>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.junk\">\n        <term>\n          <mallctl>opt.junk</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n          [<option>--enable-fill</option>]\n        </term>\n        <listitem><para>Junk filling.  If set to <quote>alloc</quote>, each byte\n        of uninitialized allocated memory will be initialized to\n        <literal>0xa5</literal>.  If set to <quote>free</quote>, all deallocated\n        memory will be initialized to <literal>0x5a</literal>.  If set to\n        <quote>true</quote>, both allocated and deallocated memory will be\n        initialized, and if set to <quote>false</quote>, junk filling will be\n        disabled entirely.  This is intended for debugging and will impact\n        performance negatively.  
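For example, junk filling of both\n        allocated and deallocated memory can be requested by including the\n        following in the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"junk:true\";]]></programlisting>\n        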
This option is <quote>false</quote> by default\n        unless <option>--enable-debug</option> is specified during\n        configuration, in which case it is <quote>true</quote> by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.zero\">\n        <term>\n          <mallctl>opt.zero</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-fill</option>]\n        </term>\n        <listitem><para>Zero filling enabled/disabled.  If enabled, each byte\n        of uninitialized allocated memory will be initialized to 0.  Note that\n        this initialization only happens once for each byte, so\n        <function>realloc()</function> and\n        <function>rallocx()</function> calls do not zero memory that\n        was previously allocated.  This is intended for debugging and will\n        impact performance negatively.  This option is disabled by default.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.utrace\">\n        <term>\n          <mallctl>opt.utrace</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-utrace</option>]\n        </term>\n        <listitem><para>Allocation tracing based on\n        <citerefentry><refentrytitle>utrace</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> enabled/disabled.  This option\n        is disabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.xmalloc\">\n        <term>\n          <mallctl>opt.xmalloc</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-xmalloc</option>]\n        </term>\n        <listitem><para>Abort-on-out-of-memory enabled/disabled.  
If enabled,\n        rather than returning failure for any allocation function, display a\n        diagnostic message on <constant>STDERR_FILENO</constant> and cause the\n        program to drop core (using\n        <citerefentry><refentrytitle>abort</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry>).  If an application is\n        designed to depend on this behavior, set the option at compile time by\n        including the following in the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"xmalloc:true\";]]></programlisting>\n        This option is disabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.tcache\">\n        <term>\n          <mallctl>opt.tcache</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Thread-specific caching (tcache) enabled/disabled.  When\n        there are multiple threads, each thread uses a tcache for objects up to\n        a certain size.  Thread-specific caching allows many allocations to be\n        satisfied without performing any thread synchronization, at the cost of\n        increased memory use.  See the <link\n        linkend=\"opt.tcache_max\"><mallctl>opt.tcache_max</mallctl></link>\n        option for related tuning information.  This option is enabled by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.tcache_max\">\n        <term>\n          <mallctl>opt.tcache_max</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum size class to cache in the thread-specific cache\n        (tcache).  At a minimum, the first size class is cached; and at a\n        maximum, size classes up to 8 MiB can be cached.  The default maximum is\n        32 KiB (2^15).  
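For example, the maximum cached\n        size class can be raised to 64 KiB by including the following in the\n        source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"tcache_max:65536\";]]></programlisting>\n        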
As a convenience, this may also be set by specifying\n        lg_tcache_max, which will be taken to be the base-2 logarithm of the\n        setting of tcache_max.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.thp\">\n        <term>\n          <mallctl>opt.thp</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Transparent hugepage (THP) mode.  Settings\n        <quote>always</quote>, <quote>never</quote> and <quote>default</quote>\n        are available if THP is supported by the operating\n        system.  The <quote>always</quote> setting enables transparent hugepage\n        for all user memory mappings with\n        <parameter><constant>MADV_HUGEPAGE</constant></parameter>;\n        <quote>never</quote> ensures no transparent hugepage with\n        <parameter><constant>MADV_NOHUGEPAGE</constant></parameter>; the default\n        setting <quote>default</quote> makes no changes.  Note that this option\n        does not affect THP for jemalloc internal metadata (see <link\n        linkend=\"opt.metadata_thp\"><mallctl>opt.metadata_thp</mallctl></link>);\n        in addition, for arenas with customized <link\n        linkend=\"arena.i.extent_hooks\"><mallctl>extent_hooks</mallctl></link>,\n        this option is bypassed as it is implemented as part of the default\n        extent hooks.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof\">\n        <term>\n          <mallctl>opt.prof</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Memory profiling enabled/disabled.  If enabled, profile\n        memory allocation activity.  See the <link\n        linkend=\"opt.prof_active\"><mallctl>opt.prof_active</mallctl></link>\n        option for on-the-fly activation/deactivation.  
See the <link\n        linkend=\"opt.lg_prof_sample\"><mallctl>opt.lg_prof_sample</mallctl></link>\n        option for probabilistic sampling control.  See the <link\n        linkend=\"opt.prof_accum\"><mallctl>opt.prof_accum</mallctl></link>\n        option for control of cumulative sample reporting.  See the <link\n        linkend=\"opt.lg_prof_interval\"><mallctl>opt.lg_prof_interval</mallctl></link>\n        option for information on interval-triggered profile dumping, the <link\n        linkend=\"opt.prof_gdump\"><mallctl>opt.prof_gdump</mallctl></link>\n        option for information on high-water-triggered profile dumping, and the\n        <link linkend=\"opt.prof_final\"><mallctl>opt.prof_final</mallctl></link>\n        option for final profile dumping.  Profile output is compatible with\n        the <command>jeprof</command> command, which is based on the\n        <command>pprof</command> that is developed as part of the <ulink\n        url=\"http://code.google.com/p/gperftools/\">gperftools\n        package</ulink>.  See <link linkend=\"heap_profile_format\">HEAP PROFILE\n        FORMAT</link> for heap profile format documentation.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_prefix\">\n        <term>\n          <mallctl>opt.prof_prefix</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Filename prefix for profile dumps.  If the prefix is\n        set to the empty string, no automatic dumps will occur; this is\n        primarily useful for disabling the automatic final heap dump (which\n        also disables leak reporting, if enabled).  The default prefix is\n        <filename>jeprof</filename>.  
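For example, profiling\n        with dumps written under <filename>/tmp</filename> (an arbitrary\n        location chosen for illustration) can be requested by including the\n        following in the source code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"prof:true,prof_prefix:/tmp/jeprof\";]]></programlisting>\n        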
This prefix value can be overridden by\n        <link linkend=\"prof.prefix\"><mallctl>prof.prefix</mallctl></link>.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_active\">\n        <term>\n          <mallctl>opt.prof_active</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Profiling activated/deactivated.  This is a secondary\n        control mechanism that makes it possible to start the application with\n        profiling enabled (see the <link\n        linkend=\"opt.prof\"><mallctl>opt.prof</mallctl></link> option) but\n        inactive, then toggle profiling at any time during program execution\n        with the <link\n        linkend=\"prof.active\"><mallctl>prof.active</mallctl></link> mallctl.\n        This option is enabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_thread_active_init\">\n        <term>\n          <mallctl>opt.prof_thread_active_init</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Initial setting for <link\n        linkend=\"thread.prof.active\"><mallctl>thread.prof.active</mallctl></link>\n        in newly created threads.  The initial setting for newly created threads\n        can also be changed during execution via the <link\n        linkend=\"prof.thread_active_init\"><mallctl>prof.thread_active_init</mallctl></link>\n        mallctl.  
This option is enabled by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.lg_prof_sample\">\n        <term>\n          <mallctl>opt.lg_prof_sample</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Average interval (log base 2) between allocation\n        samples, as measured in bytes of allocation activity.  Increasing the\n        sampling interval decreases profile fidelity, but also decreases the\n        computational overhead.  The default sample interval is 512 KiB (2^19\n        B).</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_accum\">\n        <term>\n          <mallctl>opt.prof_accum</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Reporting of cumulative object/byte counts in profile\n        dumps enabled/disabled.  If this option is enabled, every unique\n        backtrace must be stored for the duration of execution.  Depending on\n        the application, this can impose a large memory overhead, and the\n        cumulative counts are not always of interest.  This option is disabled\n        by default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.lg_prof_interval\">\n        <term>\n          <mallctl>opt.lg_prof_interval</mallctl>\n          (<type>ssize_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Average interval (log base 2) between memory profile\n        dumps, as measured in bytes of allocation activity.  The actual\n        interval between dumps may be sporadic because decentralized allocation\n        counters are used to avoid synchronization bottlenecks.  
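For example,\n        a profile dump on average every 1 GiB (2^30 bytes) of allocation\n        activity can be requested by including the following in the source\n        code:\n        <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"prof:true,lg_prof_interval:30\";]]></programlisting>\n        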
Profiles are\n        dumped to files named according to the pattern\n        <filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.i&lt;iseq&gt;.heap</filename>,\n        where <literal>&lt;prefix&gt;</literal> is controlled by the\n        <link\n        linkend=\"opt.prof_prefix\"><mallctl>opt.prof_prefix</mallctl></link> and\n        <link linkend=\"prof.prefix\"><mallctl>prof.prefix</mallctl></link>\n        options.  By default, interval-triggered profile dumping is disabled\n        (encoded as -1).\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_gdump\">\n        <term>\n          <mallctl>opt.prof_gdump</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Set the initial state of <link\n        linkend=\"prof.gdump\"><mallctl>prof.gdump</mallctl></link>, which when\n        enabled triggers a memory profile dump every time the total virtual\n        memory exceeds the previous maximum.  This option is disabled by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_final\">\n        <term>\n          <mallctl>opt.prof_final</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Use an\n        <citerefentry><refentrytitle>atexit</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> function to dump final memory\n        usage to a file named according to the pattern\n        <filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.f.heap</filename>,\n        where <literal>&lt;prefix&gt;</literal> is controlled by the <link\n        linkend=\"opt.prof_prefix\"><mallctl>opt.prof_prefix</mallctl></link> and\n        <link linkend=\"prof.prefix\"><mallctl>prof.prefix</mallctl></link>\n        options.  
Note that <function>atexit()</function> may allocate\n        memory during application initialization and then deadlock internally\n        when jemalloc in turn calls <function>atexit()</function>, so\n        this option is not universally usable (though the application can\n        register its own <function>atexit()</function> function with\n        equivalent functionality).  This option is disabled by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_leak\">\n        <term>\n          <mallctl>opt.prof_leak</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Leak reporting enabled/disabled.  If enabled, use an\n        <citerefentry><refentrytitle>atexit</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> function to report memory leaks\n        detected by allocation sampling.  See the\n        <link linkend=\"opt.prof\"><mallctl>opt.prof</mallctl></link> option for\n        information on analyzing heap profile output.  Works only when combined\n        with <link linkend=\"opt.prof_final\"><mallctl>opt.prof_final</mallctl>\n        </link>, otherwise does nothing.  This option is disabled by default.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.prof_leak_error\">\n        <term>\n          <mallctl>opt.prof_leak_error</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Similar to <link linkend=\"opt.prof_leak\"><mallctl>\n        opt.prof_leak</mallctl></link>, but makes the process exit with error\n        code 1 if a memory leak is detected.  This option supersedes\n        <link linkend=\"opt.prof_leak\"><mallctl>opt.prof_leak</mallctl></link>,\n        meaning that if both are specified, this option takes precedence.  
When\n        enabled, also enables <link linkend=\"opt.prof_leak\"><mallctl>\n        opt.prof_leak</mallctl></link>.  Works only when combined with\n        <link linkend=\"opt.prof_final\"><mallctl>opt.prof_final</mallctl></link>,\n        otherwise does nothing.  This option is disabled by default.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"opt.zero_realloc\">\n        <term>\n          <mallctl>opt.zero_realloc</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para> Determines the behavior of\n        <function>realloc()</function> when passed a value of zero for the new\n        size.  <quote>alloc</quote> treats this as an allocation of size zero\n        (and returns a non-null result except in case of resource exhaustion).\n        <quote>free</quote> treats this as a deallocation of the pointer, and\n        returns <constant>NULL</constant> without setting\n        <varname>errno</varname>.  <quote>abort</quote> aborts the process if\n        zero is passed.  The default is <quote>free</quote> on Linux and\n        Windows, and <quote>alloc</quote> elsewhere.</para>\n\n\t<para>There is considerable divergence of behaviors across\n\timplementations in handling this case. Many have the behavior of\n\t<quote>free</quote>. This can introduce security vulnerabilities, since\n\ta <constant>NULL</constant> return value indicates failure, and the\n\tcontinued validity of the passed-in pointer (per POSIX and C11).\n\t<quote>alloc</quote> is safe, but can cause leaks in programs that\n\texpect the common behavior.  Programs intended to be portable and\n\tleak-free cannot assume either behavior, and must therefore never call\n\trealloc with a size of 0.  
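For example, aborting on such calls can be requested by\n\tincluding the following in the source code:\n\t<programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"zero_realloc:abort\";]]></programlisting>\n\t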
The <quote>abort</quote> option enables\n\ttesting this behavior.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.arena\">\n        <term>\n          <mallctl>thread.arena</mallctl>\n          (<type>unsigned</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Get or set the arena associated with the calling\n        thread.  If the specified arena was not initialized beforehand (see the\n        <link\n        linkend=\"arena.i.initialized\"><mallctl>arena.i.initialized</mallctl></link>\n        mallctl), it will be automatically initialized as a side effect of\n        calling this interface.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.allocated\">\n        <term>\n          <mallctl>thread.allocated</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Get the total number of bytes ever allocated by the\n        calling thread.  This counter has the potential to wrap around; it is\n        up to the application to appropriately interpret the counter in such\n        cases.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.allocatedp\">\n        <term>\n          <mallctl>thread.allocatedp</mallctl>\n          (<type>uint64_t *</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Get a pointer to the value that is returned by the\n        <link\n        linkend=\"thread.allocated\"><mallctl>thread.allocated</mallctl></link>\n        mallctl.  This is useful for avoiding the overhead of repeated\n        <function>mallctl*()</function> calls.  
Note that the underlying counter\n        should not be modified by the application.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.deallocated\">\n        <term>\n          <mallctl>thread.deallocated</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Get the total number of bytes ever deallocated by the\n        calling thread.  This counter has the potential to wrap around; it is\n        up to the application to appropriately interpret the counter in such\n        cases.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.deallocatedp\">\n        <term>\n          <mallctl>thread.deallocatedp</mallctl>\n          (<type>uint64_t *</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Get a pointer to the value that is returned by the\n        <link\n        linkend=\"thread.deallocated\"><mallctl>thread.deallocated</mallctl></link>\n        mallctl.  This is useful for avoiding the overhead of repeated\n        <function>mallctl*()</function> calls.  
Note that the underlying counter\n        should not be modified by the application.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.peak.read\">\n        <term>\n          <mallctl>thread.peak.read</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Get an approximation of the maximum value of the\n        difference between the number of bytes allocated and the number of bytes\n        deallocated by the calling thread since the last call to <link\n        linkend=\"thread.peak.reset\"><mallctl>thread.peak.reset</mallctl></link>,\n        or since the thread's creation if it has not called <link\n        linkend=\"thread.peak.reset\"><mallctl>thread.peak.reset</mallctl></link>.\n        No guarantees are made about the quality of the approximation, but\n        jemalloc currently endeavors to maintain accuracy to within one hundred\n        kilobytes.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.peak.reset\">\n        <term>\n          <mallctl>thread.peak.reset</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Resets the counter for net bytes allocated in the calling\n        thread to zero. 
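</para>\n        <para>The read/reset semantics can be modeled with an ordinary counter\n        pair.  The sketch below is an illustrative model in plain C, not\n        jemalloc's implementation, and it ignores the approximation slack noted\n        above:</para>\n        <programlisting language=\"C\"><![CDATA[\n#include <stdint.h>\n\n/* Illustrative model of thread.peak: track lifetime net bytes and\n * the maximum of (net - baseline) observed since the last reset. */\ntypedef struct {\n\tint64_t net;       /* allocated minus deallocated, lifetime */\n\tint64_t baseline;  /* value of net at the last reset */\n\tint64_t peak;      /* max of (net - baseline) since reset */\n} peak_model_t;\n\nstatic void\npeak_update(peak_model_t *p, int64_t delta) {\n\tp->net += delta;\n\tif (p->net - p->baseline > p->peak) {\n\t\tp->peak = p->net - p->baseline;\n\t}\n}\n\nstatic int64_t\npeak_read(const peak_model_t *p) {\n\treturn p->peak;\n}\n\nstatic void\npeak_reset(peak_model_t *p) {\n\t/* Only the peak window restarts; lifetime totals (analogous to\n\t * thread.allocated/thread.deallocated) are untouched. */\n\tp->baseline = p->net;\n\tp->peak = 0;\n}]]></programlisting>\n        <para>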
This affects subsequent calls to <link\n        linkend=\"thread.peak.read\"><mallctl>thread.peak.read</mallctl></link>,\n        but not the values returned by <link\n        linkend=\"thread.allocated\"><mallctl>thread.allocated</mallctl></link>\n        or <link\n        linkend=\"thread.deallocated\"><mallctl>thread.deallocated</mallctl></link>.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.tcache.enabled\">\n        <term>\n          <mallctl>thread.tcache.enabled</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Enable/disable calling thread's tcache.  The tcache is\n        implicitly flushed as a side effect of becoming\n        disabled (see <link\n        linkend=\"thread.tcache.flush\"><mallctl>thread.tcache.flush</mallctl></link>).\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.tcache.flush\">\n        <term>\n          <mallctl>thread.tcache.flush</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Flush calling thread's thread-specific cache (tcache).\n        This interface releases all cached objects and internal data structures\n        associated with the calling thread's tcache.  Ordinarily, this interface\n        need not be called, since automatic periodic incremental garbage\n        collection occurs, and the thread cache is automatically discarded when\n        a thread exits.  
However, garbage collection is triggered by allocation\n        activity, so it is possible for a thread that stops\n        allocating/deallocating to retain its cache indefinitely, in which case\n        the developer may find manual flushing useful.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.prof.name\">\n        <term>\n          <mallctl>thread.prof.name</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal> or\n          <literal>-w</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Get/set the descriptive name associated with the calling\n        thread in memory profile dumps.  An internal copy of the name string is\n        created, so the input string need not be maintained after this interface\n        completes execution.  The output string of this interface should be\n        copied for non-ephemeral uses, because multiple implementation details\n        can cause asynchronous string deallocation.  Furthermore, each\n        invocation of this interface can only read or write; simultaneous\n        read/write is not supported due to string lifetime limitations.  The\n        name string must be nil-terminated and comprised only of characters in\n        the sets recognized\n        by <citerefentry><refentrytitle>isgraph</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry> and\n        <citerefentry><refentrytitle>isblank</refentrytitle>\n        <manvolnum>3</manvolnum></citerefentry>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.prof.active\">\n        <term>\n          <mallctl>thread.prof.active</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Control whether sampling is currently active for the\n        calling thread.  
This is an activation mechanism in addition to <link\n        linkend=\"prof.active\"><mallctl>prof.active</mallctl></link>; both must\n        be active for the calling thread to sample.  This flag is enabled by\n        default.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"thread.idle\">\n        <term>\n          <mallctl>thread.idle</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Hints to jemalloc that the calling thread will be idle\n\tfor some nontrivial period of time (say, on the order of seconds), and\n\tthat doing some cleanup operations may be beneficial.  There are no\n\tguarantees as to what specific operations will be performed; currently\n\tthis flushes the caller's tcache and may (according to some heuristic)\n\tpurge its associated arena.</para>\n\t<para>This is not intended to be a general-purpose background activity\n\tmechanism, and threads should not wake up multiple times solely to call\n\tit.  Rather, a thread waiting for a task should do a timed wait first,\n\tcall <link linkend=\"thread.idle\"><mallctl>thread.idle</mallctl></link>\n\tif no task appears in the timeout interval, and then do an untimed wait.\n\tFor such a background activity mechanism, see\n\t<link linkend=\"background_thread\"><mallctl>background_thread</mallctl></link>.\n\t</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"tcache.create\">\n        <term>\n          <mallctl>tcache.create</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Create an explicit thread-specific cache (tcache) and\n        return an identifier that can be passed to the <link\n        linkend=\"MALLOCX_TCACHE\"><constant>MALLOCX_TCACHE(<parameter>tc</parameter>)</constant></link>\n        macro to explicitly use the specified cache rather than the\n        automatically managed one that is used by default.  
Each explicit cache\n        can be used by only one thread at a time; the application must assure\n        that this constraint holds.\n        </para>\n\n        <para>If the amount of space supplied for storing the thread-specific\n        cache identifier does not equal\n        <code language=\"C\">sizeof(<type>unsigned</type>)</code>, no\n        thread-specific cache will be created, no data will be written to the\n        space pointed by <parameter>oldp</parameter>, and\n        <parameter>*oldlenp</parameter> will be set to 0.\n        </para></listitem>\n\n      </varlistentry>\n\n      <varlistentry id=\"tcache.flush\">\n        <term>\n          <mallctl>tcache.flush</mallctl>\n          (<type>unsigned</type>)\n          <literal>-w</literal>\n        </term>\n        <listitem><para>Flush the specified thread-specific cache (tcache).  The\n        same considerations apply to this interface as to <link\n        linkend=\"thread.tcache.flush\"><mallctl>thread.tcache.flush</mallctl></link>,\n        except that the tcache will never be automatically discarded.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"tcache.destroy\">\n        <term>\n          <mallctl>tcache.destroy</mallctl>\n          (<type>unsigned</type>)\n          <literal>-w</literal>\n        </term>\n        <listitem><para>Flush the specified thread-specific cache (tcache) and\n        make the identifier available for use during a future tcache creation.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.initialized\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.initialized</mallctl>\n          (<type>bool</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Get whether the specified arena's statistics are\n        initialized (i.e. 
the arena was initialized prior to the current epoch).\n        This interface can also be nominally used to query whether the merged\n        statistics corresponding to <constant>MALLCTL_ARENAS_ALL</constant> are\n        initialized (always true).</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.decay\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.decay</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Trigger decay-based purging of unused dirty/muzzy pages\n        for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals\n        <constant>MALLCTL_ARENAS_ALL</constant>.  The proportion of unused\n        dirty/muzzy pages to be purged depends on the current time; see <link\n        linkend=\"opt.dirty_decay_ms\"><mallctl>opt.dirty_decay_ms</mallctl></link>\n        and <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.purge\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.purge</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Purge all unused dirty pages for arena &lt;i&gt;, or for\n        all arenas if &lt;i&gt; equals <constant>MALLCTL_ARENAS_ALL</constant>.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.reset\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.reset</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Discard all of the arena's extant allocations.  This\n        interface can only be used with arenas explicitly created via <link\n        linkend=\"arenas.create\"><mallctl>arenas.create</mallctl></link>.  None\n        of the arena's discarded/cached allocations may be accessed afterward.  
As\n        part of this requirement, all thread caches which were used to\n        allocate/deallocate in conjunction with the arena must be flushed\n        beforehand.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.destroy\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.destroy</mallctl>\n          (<type>void</type>)\n          <literal>--</literal>\n        </term>\n        <listitem><para>Destroy the arena.  Discard all of the arena's extant\n        allocations using the same mechanism as for <link\n        linkend=\"arena.i.reset\"><mallctl>arena.&lt;i&gt;.reset</mallctl></link>\n        (with all the same constraints and side effects), merge the arena stats\n        into those accessible at arena index\n        <constant>MALLCTL_ARENAS_DESTROYED</constant>, and then completely\n        discard all metadata associated with the arena.  Future calls to <link\n        linkend=\"arenas.create\"><mallctl>arenas.create</mallctl></link> may\n        recycle the arena index.  Destruction will fail if any threads are\n        currently associated with the arena as a result of calls to <link\n        linkend=\"thread.arena\"><mallctl>thread.arena</mallctl></link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.dss\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.dss</mallctl>\n          (<type>const char *</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Set the precedence of dss allocation as related to mmap\n        allocation for arena &lt;i&gt;, or for all arenas if &lt;i&gt; equals\n        <constant>MALLCTL_ARENAS_ALL</constant>.  
See <link\n        linkend=\"opt.dss\"><mallctl>opt.dss</mallctl></link> for supported\n        settings.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.dirty_decay_ms\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.dirty_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Current per-arena approximate time in milliseconds from\n        the creation of a set of unused dirty pages until an equivalent set of\n        unused dirty pages is purged and/or reused.  Each time this interface is\n        set, all currently unused dirty pages are considered to have fully\n        decayed, which causes immediate purging of all unused dirty pages unless\n        the decay time is set to -1 (i.e. purging disabled).  See <link\n        linkend=\"opt.dirty_decay_ms\"><mallctl>opt.dirty_decay_ms</mallctl></link>\n        for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.muzzy_decay_ms\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.muzzy_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Current per-arena approximate time in milliseconds from\n        the creation of a set of unused muzzy pages until an equivalent set of\n        unused muzzy pages is purged and/or reused.  Each time this interface is\n        set, all currently unused muzzy pages are considered to have fully\n        decayed, which causes immediate purging of all unused muzzy pages unless\n        the decay time is set to -1 (i.e. purging disabled).  
See <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.retain_grow_limit\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.retain_grow_limit</mallctl>\n          (<type>size_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Maximum size to grow retained region (only relevant when\n        <link linkend=\"opt.retain\"><mallctl>opt.retain</mallctl></link> is\n        enabled).  This controls the maximum increment to expand virtual memory,\n        or allocation through <link\n        linkend=\"arena.i.extent_hooks\"><mallctl>arena.&lt;i&gt;.extent_hooks</mallctl></link>.\n        In particular, if customized extent hooks reserve physical memory\n        (e.g. 1G huge pages), this is useful to control the allocation hook's\n        input size.  The default is no limit.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arena.i.extent_hooks\">\n        <term>\n          <mallctl>arena.&lt;i&gt;.extent_hooks</mallctl>\n          (<type>extent_hooks_t *</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Get or set the extent management hook functions for\n        arena &lt;i&gt;.  The functions must be capable of operating on all\n        extant extents associated with arena &lt;i&gt;, usually by passing\n        unknown extents to the replaced functions.  
In practice, it is feasible\n        to control allocation for arenas explicitly created via <link\n        linkend=\"arenas.create\"><mallctl>arenas.create</mallctl></link> such\n        that all extents originate from an application-supplied extent allocator\n        (by specifying the custom extent hook functions during arena creation).\n        However, the API guarantees for the automatically created arenas may be\n        relaxed -- hooks set there may be called in a \"best effort\" fashion; in\n        addition there may be extents created prior to the application having an\n        opportunity to take over extent allocation.</para>\n\n        <programlisting language=\"C\"><![CDATA[\ntypedef struct extent_hooks_s extent_hooks_t;\nstruct extent_hooks_s {\n\textent_alloc_t\t\t*alloc;\n\textent_dalloc_t\t\t*dalloc;\n\textent_destroy_t\t*destroy;\n\textent_commit_t\t\t*commit;\n\textent_decommit_t\t*decommit;\n\textent_purge_t\t\t*purge_lazy;\n\textent_purge_t\t\t*purge_forced;\n\textent_split_t\t\t*split;\n\textent_merge_t\t\t*merge;\n};]]></programlisting>\n        <para>The <type>extent_hooks_t</type> structure comprises function\n        pointers which are described individually below.  jemalloc uses these\n        functions to manage extent lifetime, which starts off with allocation of\n        mapped committed memory, in the simplest case followed by deallocation.\n        However, there are performance and platform reasons to retain extents\n        for later reuse.  Cleanup attempts cascade from deallocation to decommit\n        to forced purging to lazy purging, which gives the extent management\n        functions opportunities to reject the most permanent cleanup operations\n        in favor of less permanent (and often less costly) operations.  All\n        operations except allocation can be universally opted out of by setting\n        the hook pointers to <constant>NULL</constant>, or selectively opted out\n        of by returning failure.  
Note that once the extent hook is set, the\n        structure is accessed directly by the associated arenas, so it must\n        remain valid for the entire lifetime of the arenas.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef void *<function>(extent_alloc_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>new_addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>alignment</parameter></paramdef>\n          <paramdef>bool *<parameter>zero</parameter></paramdef>\n          <paramdef>bool *<parameter>commit</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent allocation function conforms to the\n        <type>extent_alloc_t</type> type and upon success returns a pointer to\n        <parameter>size</parameter> bytes of mapped memory on behalf of arena\n        <parameter>arena_ind</parameter> such that the extent's base address is\n        a multiple of <parameter>alignment</parameter>, as well as setting\n        <parameter>*zero</parameter> to indicate whether the extent is zeroed\n        and <parameter>*commit</parameter> to indicate whether the extent is\n        committed.  Upon error the function returns <constant>NULL</constant>\n        and leaves <parameter>*zero</parameter> and\n        <parameter>*commit</parameter> unmodified.  The\n        <parameter>size</parameter> parameter is always a multiple of the page\n        size.  The <parameter>alignment</parameter> parameter is always a power\n        of two at least as large as the page size.  Zeroing is mandatory if\n        <parameter>*zero</parameter> is true upon function entry.  
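</para>\n        <para>The alignment contract (a power of two at least as large as the\n        page size) is commonly satisfied by over-allocating and rounding the\n        base address up.  A sketch of the rounding arithmetic in plain C\n        (illustrative; not part of the hook API):</para>\n        <programlisting language=\"C\"><![CDATA[\n#include <stdint.h>\n\n/* Sketch: round addr up to the next multiple of align, where align\n * is a power of two as the extent_alloc_t contract guarantees.  A\n * custom allocation hook can reserve size + align - 1 bytes and\n * return the rounded-up address to satisfy the alignment. */\nstatic uintptr_t\nalign_up(uintptr_t addr, uintptr_t align) {\n\treturn (addr + align - 1) & ~(align - 1);\n}]]></programlisting>\n        <para>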
Committing is\n        mandatory if <parameter>*commit</parameter> is true upon function entry.\n        If <parameter>new_addr</parameter> is not <constant>NULL</constant>, the\n        returned pointer must be <parameter>new_addr</parameter> on success or\n        <constant>NULL</constant> on error.  Committed memory may be committed\n        in absolute terms as on a system that does not overcommit, or in\n        implicit terms as on a system that overcommits and satisfies physical\n        memory needs on demand via soft page faults.  Note that replacing the\n        default extent allocation function makes the arena's <link\n        linkend=\"arena.i.dss\"><mallctl>arena.&lt;i&gt;.dss</mallctl></link>\n        setting irrelevant.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_dalloc_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>bool <parameter>committed</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>\n        An extent deallocation function conforms to the\n        <type>extent_dalloc_t</type> type and deallocates an extent at given\n        <parameter>addr</parameter> and <parameter>size</parameter> with\n        <parameter>committed</parameter>/decommitted memory as indicated, on\n        behalf of arena <parameter>arena_ind</parameter>, returning false upon\n        success.  
If the function returns true, this indicates opt-out from\n        deallocation; the virtual memory mapping associated with the extent\n        remains mapped, in the same commit state, and available for future use,\n        in which case it will be automatically retained for later reuse.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef void <function>(extent_destroy_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>bool <parameter>committed</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>\n        An extent destruction function conforms to the\n        <type>extent_destroy_t</type> type and unconditionally destroys an\n        extent at given <parameter>addr</parameter> and\n        <parameter>size</parameter> with\n        <parameter>committed</parameter>/decommitted memory as indicated, on\n        behalf of arena <parameter>arena_ind</parameter>.  
This function may be\n        called to destroy retained extents during arena destruction (see <link\n        linkend=\"arena.i.destroy\"><mallctl>arena.&lt;i&gt;.destroy</mallctl></link>).</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_commit_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>offset</parameter></paramdef>\n          <paramdef>size_t <parameter>length</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent commit function conforms to the\n        <type>extent_commit_t</type> type and commits zeroed physical memory to\n        back pages within an extent at given <parameter>addr</parameter> and\n        <parameter>size</parameter> at <parameter>offset</parameter> bytes,\n        extending for <parameter>length</parameter> on behalf of arena\n        <parameter>arena_ind</parameter>, returning false upon success.\n        Committed memory may be committed in absolute terms as on a system that\n        does not overcommit, or in implicit terms as on a system that\n        overcommits and satisfies physical memory needs on demand via soft page\n        faults. 
If the function returns true, this indicates insufficient\n        physical memory to satisfy the request.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_decommit_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>offset</parameter></paramdef>\n          <paramdef>size_t <parameter>length</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent decommit function conforms to the\n        <type>extent_decommit_t</type> type and decommits any physical memory\n        that is backing pages within an extent at given\n        <parameter>addr</parameter> and <parameter>size</parameter> at\n        <parameter>offset</parameter> bytes, extending for\n        <parameter>length</parameter> on behalf of arena\n        <parameter>arena_ind</parameter>, returning false upon success, in which\n        case the pages will be committed via the extent commit function before\n        being reused.  
If the function returns true, this indicates opt-out from\n        decommit; the memory remains committed and available for future use, in\n        which case it will be automatically retained for later reuse.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_purge_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>offset</parameter></paramdef>\n          <paramdef>size_t <parameter>length</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent purge function conforms to the\n        <type>extent_purge_t</type> type and discards physical pages\n        within the virtual memory mapping associated with an extent at given\n        <parameter>addr</parameter> and <parameter>size</parameter> at\n        <parameter>offset</parameter> bytes, extending for\n        <parameter>length</parameter> on behalf of arena\n        <parameter>arena_ind</parameter>.  A lazy extent purge function (e.g.\n        implemented via\n        <function>madvise(<parameter>...</parameter><parameter><constant>MADV_FREE</constant></parameter>)</function>)\n        can delay purging indefinitely and leave the pages within the purged\n        virtual memory range in an indeterminate state, whereas a forced extent\n        purge function immediately purges, and the pages within the virtual\n        memory range will be zero-filled the next time they are accessed.  
If\n        the function returns true, this indicates failure to purge.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_split_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr</parameter></paramdef>\n          <paramdef>size_t <parameter>size</parameter></paramdef>\n          <paramdef>size_t <parameter>size_a</parameter></paramdef>\n          <paramdef>size_t <parameter>size_b</parameter></paramdef>\n          <paramdef>bool <parameter>committed</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent split function conforms to the\n        <type>extent_split_t</type> type and optionally splits an extent at\n        given <parameter>addr</parameter> and <parameter>size</parameter> into\n        two adjacent extents, the first of <parameter>size_a</parameter> bytes,\n        and the second of <parameter>size_b</parameter> bytes, operating on\n        <parameter>committed</parameter>/decommitted memory as indicated, on\n        behalf of arena <parameter>arena_ind</parameter>, returning false upon\n        success.  
If the function returns true, this indicates that the extent\n        remains unsplit and therefore should continue to be operated on as a\n        whole.</para>\n\n        <funcsynopsis><funcprototype>\n          <funcdef>typedef bool <function>(extent_merge_t)</function></funcdef>\n          <paramdef>extent_hooks_t *<parameter>extent_hooks</parameter></paramdef>\n          <paramdef>void *<parameter>addr_a</parameter></paramdef>\n          <paramdef>size_t <parameter>size_a</parameter></paramdef>\n          <paramdef>void *<parameter>addr_b</parameter></paramdef>\n          <paramdef>size_t <parameter>size_b</parameter></paramdef>\n          <paramdef>bool <parameter>committed</parameter></paramdef>\n          <paramdef>unsigned <parameter>arena_ind</parameter></paramdef>\n        </funcprototype></funcsynopsis>\n        <literallayout></literallayout>\n        <para>An extent merge function conforms to the\n        <type>extent_merge_t</type> type and optionally merges adjacent extents,\n        at given <parameter>addr_a</parameter> and <parameter>size_a</parameter>\n        with given <parameter>addr_b</parameter> and\n        <parameter>size_b</parameter> into one contiguous extent, operating on\n        <parameter>committed</parameter>/decommitted memory as indicated, on\n        behalf of arena <parameter>arena_ind</parameter>, returning false upon\n        success.  
If the function returns true, this indicates that the extents\n        remain distinct mappings and therefore should continue to be operated on\n        independently.</para>\n        </listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.narenas\">\n        <term>\n          <mallctl>arenas.narenas</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Current limit on number of arenas.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.dirty_decay_ms\">\n        <term>\n          <mallctl>arenas.dirty_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Current default per-arena approximate time in\n        milliseconds from the creation of a set of unused dirty pages until an\n        equivalent set of unused dirty pages is purged and/or reused, used to\n        initialize <link\n        linkend=\"arena.i.dirty_decay_ms\"><mallctl>arena.&lt;i&gt;.dirty_decay_ms</mallctl></link>\n        during arena creation.  See <link\n        linkend=\"opt.dirty_decay_ms\"><mallctl>opt.dirty_decay_ms</mallctl></link>\n        for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.muzzy_decay_ms\">\n        <term>\n          <mallctl>arenas.muzzy_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Current default per-arena approximate time in\n        milliseconds from the creation of a set of unused muzzy pages until an\n        equivalent set of unused muzzy pages is purged and/or reused, used to\n        initialize <link\n        linkend=\"arena.i.muzzy_decay_ms\"><mallctl>arena.&lt;i&gt;.muzzy_decay_ms</mallctl></link>\n        during arena creation.  
See <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.quantum\">\n        <term>\n          <mallctl>arenas.quantum</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Quantum size.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.page\">\n        <term>\n          <mallctl>arenas.page</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Page size.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.tcache_max\">\n        <term>\n          <mallctl>arenas.tcache_max</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum thread-cached size class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.nbins\">\n        <term>\n          <mallctl>arenas.nbins</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of bin size classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.nhbins\">\n        <term>\n          <mallctl>arenas.nhbins</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Total number of thread cache bin size\n        classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.bin.i.size\">\n        <term>\n          <mallctl>arenas.bin.&lt;i&gt;.size</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum size supported by size class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.bin.i.nregs\">\n        <term>\n          
<mallctl>arenas.bin.&lt;i&gt;.nregs</mallctl>\n          (<type>uint32_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of regions per slab.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.bin.i.slab_size\">\n        <term>\n          <mallctl>arenas.bin.&lt;i&gt;.slab_size</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of bytes per slab.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.nlextents\">\n        <term>\n          <mallctl>arenas.nlextents</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Total number of large size classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.lextent.i.size\">\n        <term>\n          <mallctl>arenas.lextent.&lt;i&gt;.size</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Maximum size supported by this large size\n        class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"arenas.create\">\n        <term>\n          <mallctl>arenas.create</mallctl>\n          (<type>unsigned</type>, <type>extent_hooks_t *</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Explicitly create a new arena outside the range of\n        automatically managed arenas, with optionally specified extent hooks,\n        and return the new arena index.</para>\n\n        <para>If the amount of space supplied for storing the arena index does\n        not equal <code language=\"C\">sizeof(<type>unsigned</type>)</code>, no\n        arena will be created, no data will be written to the space pointed by\n        <parameter>oldp</parameter>, and <parameter>*oldlenp</parameter> will\n        be set to 0.\n        </para></listitem>\n      </varlistentry>\n\n      
<varlistentry id=\"arenas.lookup\">\n        <term>\n          <mallctl>arenas.lookup</mallctl>\n          (<type>unsigned</type>, <type>void*</type>)\n          <literal>rw</literal>\n        </term>\n        <listitem><para>Index of the arena to which an allocation belongs.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.thread_active_init\">\n        <term>\n          <mallctl>prof.thread_active_init</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Control the initial setting for <link\n        linkend=\"thread.prof.active\"><mallctl>thread.prof.active</mallctl></link>\n        in newly created threads.  See the <link\n        linkend=\"opt.prof_thread_active_init\"><mallctl>opt.prof_thread_active_init</mallctl></link>\n        option for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.active\">\n        <term>\n          <mallctl>prof.active</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Control whether sampling is currently active.  
See the\n        <link\n        linkend=\"opt.prof_active\"><mallctl>opt.prof_active</mallctl></link>\n        option for additional information, as well as the interrelated <link\n        linkend=\"thread.prof.active\"><mallctl>thread.prof.active</mallctl></link>\n        mallctl.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.dump\">\n        <term>\n          <mallctl>prof.dump</mallctl>\n          (<type>const char *</type>)\n          <literal>-w</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Dump a memory profile to the specified file, or if NULL\n        is specified, to a file according to the pattern\n        <filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.m&lt;mseq&gt;.heap</filename>,\n        where <literal>&lt;prefix&gt;</literal> is controlled by the\n        <link linkend=\"opt.prof_prefix\"><mallctl>opt.prof_prefix</mallctl></link>\n        and <link linkend=\"prof.prefix\"><mallctl>prof.prefix</mallctl></link>\n        options.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.prefix\">\n        <term>\n          <mallctl>prof.prefix</mallctl>\n          (<type>const char *</type>)\n          <literal>-w</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Set the filename prefix for profile dumps. See\n        <link\n        linkend=\"opt.prof_prefix\"><mallctl>opt.prof_prefix</mallctl></link>\n        for the default setting.  
This can be useful for differentiating profile\n        dumps, such as those from forked processes.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.gdump\">\n        <term>\n          <mallctl>prof.gdump</mallctl>\n          (<type>bool</type>)\n          <literal>rw</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>When enabled, trigger a memory profile dump every time\n        the total virtual memory exceeds the previous maximum.  Profiles are\n        dumped to files named according to the pattern\n        <filename>&lt;prefix&gt;.&lt;pid&gt;.&lt;seq&gt;.u&lt;useq&gt;.heap</filename>,\n        where <literal>&lt;prefix&gt;</literal> is controlled by the <link\n        linkend=\"opt.prof_prefix\"><mallctl>opt.prof_prefix</mallctl></link> and\n        <link linkend=\"prof.prefix\"><mallctl>prof.prefix</mallctl></link>\n        options.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.reset\">\n        <term>\n          <mallctl>prof.reset</mallctl>\n          (<type>size_t</type>)\n          <literal>-w</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Reset all memory profile statistics, and optionally\n        update the sample rate (see <link\n        linkend=\"opt.lg_prof_sample\"><mallctl>opt.lg_prof_sample</mallctl></link>\n        and <link\n        linkend=\"prof.lg_sample\"><mallctl>prof.lg_sample</mallctl></link>).\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"prof.lg_sample\">\n        <term>\n          <mallctl>prof.lg_sample</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Get the current sample rate (see <link\n        linkend=\"opt.lg_prof_sample\"><mallctl>opt.lg_prof_sample</mallctl></link>).\n        </para></listitem>\n      </varlistentry>\n\n      
<varlistentry id=\"prof.interval\">\n        <term>\n          <mallctl>prof.interval</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-prof</option>]\n        </term>\n        <listitem><para>Average number of bytes allocated between\n        interval-based profile dumps.  See the\n        <link\n        linkend=\"opt.lg_prof_interval\"><mallctl>opt.lg_prof_interval</mallctl></link>\n        option for additional information.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.allocated\">\n        <term>\n          <mallctl>stats.allocated</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of bytes allocated by the\n        application.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.active\">\n        <term>\n          <mallctl>stats.active</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of bytes in active pages allocated by the\n        application.  
This is a multiple of the page size, and greater than or\n        equal to <link\n        linkend=\"stats.allocated\"><mallctl>stats.allocated</mallctl></link>.\n        This does not include <link linkend=\"stats.arenas.i.pdirty\">\n        <mallctl>stats.arenas.&lt;i&gt;.pdirty</mallctl></link>,\n        <link linkend=\"stats.arenas.i.pmuzzy\">\n        <mallctl>stats.arenas.&lt;i&gt;.pmuzzy</mallctl></link>, nor pages\n        entirely devoted to allocator metadata.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.metadata\">\n        <term>\n          <mallctl>stats.metadata</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of bytes dedicated to metadata, which\n        comprise base allocations used for bootstrap-sensitive allocator\n        metadata structures (see <link\n        linkend=\"stats.arenas.i.base\"><mallctl>stats.arenas.&lt;i&gt;.base</mallctl></link>)\n        and internal allocations (see <link\n        linkend=\"stats.arenas.i.internal\"><mallctl>stats.arenas.&lt;i&gt;.internal</mallctl></link>).\n        Transparent huge page (enabled with <link\n        linkend=\"opt.metadata_thp\">opt.metadata_thp</link>) usage is not\n        considered.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.metadata_thp\">\n        <term>\n          <mallctl>stats.metadata_thp</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of transparent huge pages (THP) used for\n        metadata.  
See <link\n        linkend=\"stats.metadata\"><mallctl>stats.metadata</mallctl></link> and\n        <link linkend=\"opt.metadata_thp\">opt.metadata_thp</link> for\n        details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.resident\">\n        <term>\n          <mallctl>stats.resident</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Maximum number of bytes in physically resident data\n        pages mapped by the allocator, comprising all pages dedicated to\n        allocator metadata, pages backing active allocations, and unused dirty\n        pages.  This is a maximum rather than precise because pages may not\n        actually be physically resident if they correspond to demand-zeroed\n        virtual memory that has not yet been touched.  This is a multiple of the\n        page size, and is larger than <link\n        linkend=\"stats.active\"><mallctl>stats.active</mallctl></link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mapped\">\n        <term>\n          <mallctl>stats.mapped</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of bytes in active extents mapped by the\n        allocator.  This is larger than <link\n        linkend=\"stats.active\"><mallctl>stats.active</mallctl></link>.  
This\n        does not include inactive extents, even those that contain unused dirty\n        pages, which means that there is no strict ordering between this and\n        <link\n        linkend=\"stats.resident\"><mallctl>stats.resident</mallctl></link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.retained\">\n        <term>\n          <mallctl>stats.retained</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of bytes in virtual memory mappings that\n        were retained rather than being returned to the operating system via\n        e.g. <citerefentry><refentrytitle>munmap</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> or similar.  Retained virtual\n        memory is typically untouched, decommitted, or purged, so it has no\n        strongly associated physical memory (see <link\n        linkend=\"arena.i.extent_hooks\">extent hooks</link> for details).\n        Retained memory is excluded from mapped memory statistics, e.g. <link\n        linkend=\"stats.mapped\"><mallctl>stats.mapped</mallctl></link>.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.zero_reallocs\">\n        <term>\n          <mallctl>stats.zero_reallocs</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of times that <function>realloc()</function>\n        was called with a non-<constant>NULL</constant> pointer argument and a\n        <constant>0</constant> size argument.  
This is a fundamentally unsafe\n        pattern in portable programs; see <link linkend=\"opt.zero_realloc\">\n        <mallctl>opt.zero_realloc</mallctl></link> for details.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.background_thread.num_threads\">\n        <term>\n          <mallctl>stats.background_thread.num_threads</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of <link linkend=\"background_thread\">background\n        threads</link> running currently.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.background_thread.num_runs\">\n        <term>\n          <mallctl>stats.background_thread.num_runs</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Total number of runs from all <link\n        linkend=\"background_thread\">background threads</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.background_thread.run_interval\">\n        <term>\n          <mallctl>stats.background_thread.run_interval</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Average run interval in nanoseconds of <link\n        linkend=\"background_thread\">background threads</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.ctl\">\n        <term>\n          <mallctl>stats.mutexes.ctl.{counter}</mallctl>\n          (<type>counter specific type</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>ctl</varname> mutex (global\n        scope; mallctl related).  
<mallctl>{counter}</mallctl> is one of the\n        counters below:</para>\n        <varlistentry id=\"mutex_counters\">\n          <listitem><para><varname>num_ops</varname> (<type>uint64_t</type>):\n          Total number of lock acquisition operations on this mutex.</para>\n\n\t  <para><varname>num_spin_acq</varname> (<type>uint64_t</type>): Number\n\t  of times the mutex was spin-acquired.  When the mutex is currently\n\t  locked and cannot be acquired immediately, jemalloc performs a short\n\t  period of spin-retry.  Acquisition through spinning generally means\n\t  the contention was lightweight and did not cause context\n\t  switches.</para>\n\n\t  <para><varname>num_wait</varname> (<type>uint64_t</type>): Number of\n\t  times the mutex was wait-acquired, which means the mutex contention\n\t  was not resolved by spin-retry, and a blocking operation was likely\n\t  involved in order to acquire the mutex.  This event generally implies\n\t  higher cost / longer delay, and should be investigated if it happens\n\t  often.</para>\n\n\t  <para><varname>max_wait_time</varname> (<type>uint64_t</type>):\n\t  Maximum length of time in nanoseconds spent on a single wait-acquired\n\t  lock operation.  Note that to avoid profiling overhead on the common\n\t  path, this does not consider spin-acquired cases.</para>\n\n\t  <para><varname>total_wait_time</varname> (<type>uint64_t</type>):\n\t  Cumulative time in nanoseconds spent on wait-acquired lock operations.\n\t  Similarly, spin-acquired cases are not considered.</para>\n\n\t  <para><varname>max_num_thds</varname> (<type>uint32_t</type>): Maximum\n\t  number of threads waiting on this mutex simultaneously.  Similarly,\n\t  spin-acquired cases are not considered.</para>\n\n\t  <para><varname>num_owner_switch</varname> (<type>uint64_t</type>):\n\t  Number of times the current mutex owner is different from the previous\n\t  one.  
This event does not generally imply an issue; rather it is an\n\t  indicator of how often the protected data are accessed by different\n\t  threads.\n\t  </para>\n\t  </listitem>\n\t</varlistentry>\n\t</listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.background_thread\">\n        <term>\n          <mallctl>stats.mutexes.background_thread.{counter}</mallctl>\n\t  (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>background_thread</varname> mutex\n        (global scope; <link\n        linkend=\"background_thread\"><mallctl>background_thread</mallctl></link>\n        related).  <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.prof\">\n        <term>\n          <mallctl>stats.mutexes.prof.{counter}</mallctl>\n\t  (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>prof</varname> mutex (global\n        scope; profiling related).  <mallctl>{counter}</mallctl> is one of the\n        counters in <link linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.prof_thds_data\">\n        <term>\n          <mallctl>stats.mutexes.prof_thds_data.{counter}</mallctl>\n\t  (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n\t<listitem><para>Statistics on <varname>prof</varname> threads data mutex\n\t(global scope; profiling related).  
<mallctl>{counter}</mallctl> is one\n\tof the counters in <link linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.prof_dump\">\n        <term>\n          <mallctl>stats.mutexes.prof_dump.{counter}</mallctl>\n\t  (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n\t<listitem><para>Statistics on <varname>prof</varname> dumping mutex\n\t(global scope; profiling related).  <mallctl>{counter}</mallctl> is one\n\tof the counters in <link linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.mutexes.reset\">\n        <term>\n          <mallctl>stats.mutexes.reset</mallctl>\n\t  (<type>void</type>) <literal>--</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Reset all mutex profile statistics, including global\n        mutexes, arena mutexes and bin mutexes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.dss\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.dss</mallctl>\n          (<type>const char *</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>dss (<citerefentry><refentrytitle>sbrk</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry>) allocation precedence as\n        related to <citerefentry><refentrytitle>mmap</refentrytitle>\n        <manvolnum>2</manvolnum></citerefentry> allocation.  
See <link\n        linkend=\"opt.dss\"><mallctl>opt.dss</mallctl></link> for details.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.dirty_decay_ms\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.dirty_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Approximate time in milliseconds from the creation of a\n        set of unused dirty pages until an equivalent set of unused dirty pages\n        is purged and/or reused.  See <link\n        linkend=\"opt.dirty_decay_ms\"><mallctl>opt.dirty_decay_ms</mallctl></link>\n        for details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.muzzy_decay_ms\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.muzzy_decay_ms</mallctl>\n          (<type>ssize_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Approximate time in milliseconds from the creation of a\n        set of unused muzzy pages until an equivalent set of unused muzzy pages\n        is purged and/or reused.  See <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.nthreads\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.nthreads</mallctl>\n          (<type>unsigned</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of threads currently assigned to\n        arena.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.uptime\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.uptime</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Time elapsed (in nanoseconds) since the arena was\n        created.  
If &lt;i&gt; equals <constant>0</constant> or\n        <constant>MALLCTL_ARENAS_ALL</constant>, this is the uptime since malloc\n        initialization.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.pactive\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.pactive</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of pages in active extents.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.pdirty\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.pdirty</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of pages within unused extents that are\n        potentially dirty, and for which <function>madvise()</function> or\n        similar has not been called.  See <link\n        linkend=\"opt.dirty_decay_ms\"><mallctl>opt.dirty_decay_ms</mallctl></link>\n        for a description of dirty pages.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.pmuzzy\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.pmuzzy</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Number of pages within unused extents that are muzzy.\n        See <link\n        linkend=\"opt.muzzy_decay_ms\"><mallctl>opt.muzzy_decay_ms</mallctl></link>\n        for a description of muzzy pages.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mapped\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mapped</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of mapped bytes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.retained\">\n        <term>\n          
<mallctl>stats.arenas.&lt;i&gt;.retained</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of retained bytes.  See <link\n        linkend=\"stats.retained\"><mallctl>stats.retained</mallctl></link> for\n        details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.extent_avail\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.extent_avail</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of allocated (but unused) extent structs in this\n\tarena.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.base\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.base</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>\n        Number of bytes dedicated to bootstrap-sensitive allocator metadata\n        structures.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.internal\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.internal</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of bytes dedicated to internal allocations.\n        Internal allocations differ from application-originated allocations in\n        that they are for internal use, and that they are omitted from heap\n        profiles.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.metadata_thp\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.metadata_thp</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          
[<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of transparent huge pages (THP) used for\n        metadata.  See <link linkend=\"opt.metadata_thp\">opt.metadata_thp</link>\n        for details.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.resident\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.resident</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Maximum number of bytes in physically resident data\n        pages mapped by the arena, comprising all pages dedicated to allocator\n        metadata, pages backing active allocations, and unused dirty pages.\n        This is a maximum rather than precise because pages may not actually be\n        physically resident if they correspond to demand-zeroed virtual memory\n        that has not yet been touched.  This is a multiple of the page\n        size.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.dirty_npurge\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.dirty_npurge</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of dirty page purge sweeps performed.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.dirty_nmadvise\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.dirty_nmadvise</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of <function>madvise()</function> or similar\n        calls made to purge dirty pages.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.dirty_purged\">\n        <term>\n          
<mallctl>stats.arenas.&lt;i&gt;.dirty_purged</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of dirty pages purged.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.muzzy_npurge\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.muzzy_npurge</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of muzzy page purge sweeps performed.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.muzzy_nmadvise\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.muzzy_nmadvise</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of <function>madvise()</function> or similar\n        calls made to purge muzzy pages.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.muzzy_purged\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.muzzy_purged</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of muzzy pages purged.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.allocated\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.small.allocated</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of bytes currently allocated by small objects.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.nmalloc\">\n        <term>\n          
<mallctl>stats.arenas.&lt;i&gt;.small.nmalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a small allocation was\n        requested from the arena's bins, whether to fill the relevant tcache if\n        <link linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is\n        enabled, or to directly satisfy an allocation request\n        otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.ndalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.small.ndalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a small allocation was\n        returned to the arena's bins, whether to flush the relevant tcache if\n        <link linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is\n        enabled, or to directly deallocate an allocation\n        otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.nrequests\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.small.nrequests</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of allocation requests satisfied by\n        all bin size classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.nfills\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.small.nfills</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of tcache fills by all small size\n\tclasses.</para></listitem>\n      
</varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.small.nflushes\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.small.nflushes</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of tcache flushes by all small size\n        classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.allocated\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.allocated</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Number of bytes currently allocated by large objects.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.nmalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.nmalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a large extent was allocated\n        from the arena, whether to fill the relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is enabled and\n        the size class is within the range being cached, or to directly satisfy\n        an allocation request otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.ndalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.ndalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a large extent was returned\n        to the arena, whether to flush the relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is 
enabled and\n        the size class is within the range being cached, or to directly\n        deallocate an allocation otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.nrequests\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.nrequests</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of allocation requests satisfied by\n        all large size classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.nfills\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.nfills</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of tcache fills by all large size\n\tclasses.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.large.nflushes\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.large.nflushes</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of tcache flushes by all large size\n        classes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nmalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nmalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a bin region of the\n        corresponding size class was allocated from the arena, whether to fill\n        the relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is enabled, or\n        to directly 
satisfy an allocation request otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.ndalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.ndalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a bin region of the\n        corresponding size class was returned to the arena, whether to flush the\n        relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is enabled, or\n        to directly deallocate an allocation otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nrequests\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nrequests</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of allocation requests satisfied by\n        bin regions of the corresponding size class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.curregs\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.curregs</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Current number of regions for this size\n        class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nfills\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nfills</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Cumulative number of tcache fills.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nflushes\">\n 
       <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nflushes</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n        </term>\n        <listitem><para>Cumulative number of tcache flushes.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nslabs\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nslabs</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of slabs created.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nreslabs\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nreslabs</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times the current slab from which\n        to allocate changed.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.j.curslabs\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.curslabs</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Current number of slabs.</para></listitem>\n      </varlistentry>\n\n\n      <varlistentry id=\"stats.arenas.i.bins.j.nonfull_slabs\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.nonfull_slabs</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Current number of nonfull slabs.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.bins.mutex\">\n        <term>\n          
<mallctl>stats.arenas.&lt;i&gt;.bins.&lt;j&gt;.mutex.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on\n        <varname>arena.&lt;i&gt;.bins.&lt;j&gt;</varname> mutex (arena bin\n        scope; bin operation related).  <mallctl>{counter}</mallctl> is one of\n        the counters in <link linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.extents.n\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.extents.&lt;j&gt;.n{extent_type}</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para> Number of extents of the given type in this arena in\n\tthe bucket corresponding to page size index &lt;j&gt;. The extent type\n\tis one of dirty, muzzy, or retained.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.extents.bytes\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.extents.&lt;j&gt;.{extent_type}_bytes</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n\t<listitem><para> Sum of the bytes managed by extents of the given type\n\tin this arena in the bucket corresponding to page size index &lt;j&gt;.\n\tThe extent type is one of dirty, muzzy, or retained.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.lextents.j.nmalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.lextents.&lt;j&gt;.nmalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a large extent of the\n        corresponding size 
class was allocated from the arena, whether to fill\n        the relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is enabled and\n        the size class is within the range being cached, or to directly satisfy\n        an allocation request otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.lextents.j.ndalloc\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.lextents.&lt;j&gt;.ndalloc</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of times a large extent of the\n        corresponding size class was returned to the arena, whether to flush the\n        relevant tcache if <link\n        linkend=\"opt.tcache\"><mallctl>opt.tcache</mallctl></link> is enabled and\n        the size class is within the range being cached, or to directly\n        deallocate an allocation otherwise.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.lextents.j.nrequests\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.lextents.&lt;j&gt;.nrequests</mallctl>\n          (<type>uint64_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Cumulative number of allocation requests satisfied by\n        large extents of the corresponding size class.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.lextents.j.curlextents\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.lextents.&lt;j&gt;.curlextents</mallctl>\n          (<type>size_t</type>)\n          <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Current number of large allocations for this size class.\n        </para></listitem>\n      </varlistentry>\n\n      <varlistentry 
id=\"stats.arenas.i.mutexes.large\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.large.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.large</varname>\n        mutex (arena scope; large allocation related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.extent_avail\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.extent_avail.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.extent_avail\n        </varname> mutex (arena scope; extent avail related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.extents_dirty\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.extents_dirty.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.extents_dirty\n        </varname> mutex (arena scope; dirty extents related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.extents_muzzy\">\n        <term>\n          
<mallctl>stats.arenas.&lt;i&gt;.mutexes.extents_muzzy.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.extents_muzzy\n        </varname> mutex (arena scope; muzzy extents related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.extents_retained\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.extents_retained.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.extents_retained\n        </varname> mutex (arena scope; retained extents related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.decay_dirty\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.decay_dirty.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.decay_dirty\n        </varname> mutex (arena scope; decay for dirty pages related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.decay_muzzy\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.decay_muzzy.{counter}</mallctl>\n          
(<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.decay_muzzy\n        </varname> mutex (arena scope; decay for muzzy pages related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.base\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.base.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on <varname>arena.&lt;i&gt;.base</varname>\n        mutex (arena scope; base allocator related).\n        <mallctl>{counter}</mallctl> is one of the counters in <link\n        linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n      <varlistentry id=\"stats.arenas.i.mutexes.tcache_list\">\n        <term>\n          <mallctl>stats.arenas.&lt;i&gt;.mutexes.tcache_list.{counter}</mallctl>\n          (<type>counter specific type</type>) <literal>r-</literal>\n          [<option>--enable-stats</option>]\n        </term>\n        <listitem><para>Statistics on\n        <varname>arena.&lt;i&gt;.tcache_list</varname> mutex (arena scope;\n        tcache to arena association related).  This mutex is expected to be\n        accessed less often.  
<mallctl>{counter}</mallctl> is one of the\n        counters in <link linkend=\"mutex_counters\">mutex profiling\n        counters</link>.</para></listitem>\n      </varlistentry>\n\n    </variablelist>\n  </refsect1>\n  <refsect1 id=\"heap_profile_format\">\n    <title>HEAP PROFILE FORMAT</title>\n    <para>Although the heap profiling functionality was originally designed to\n    be compatible with the\n    <command>pprof</command> command that is developed as part of the <ulink\n    url=\"http://code.google.com/p/gperftools/\">gperftools\n    package</ulink>, the addition of per thread heap profiling functionality\n    required a different heap profile format.  The <command>jeprof</command>\n    command is derived from <command>pprof</command>, with enhancements to\n    support the heap profile format described here.</para>\n\n    <para>In the following hypothetical heap profile, <constant>[...]</constant>\n    indicates elision for the sake of compactness.  <programlisting><![CDATA[\nheap_v2/524288\n  t*: 28106: 56637512 [0: 0]\n  [...]\n  t3: 352: 16777344 [0: 0]\n  [...]\n  t99: 17754: 29341640 [0: 0]\n  [...]\n@ 0x5f86da8 0x5f5a1dc [...] 0x29e4d4e 0xa200316 0xabb2988 [...]\n  t*: 13: 6688 [0: 0]\n  t3: 12: 6496 [0: 0]\n  t99: 1: 192 [0: 0]\n[...]\n\nMAPPED_LIBRARIES:\n[...]]]></programlisting> The following matches the above heap profile, but most\ntokens are replaced with <constant>&lt;description&gt;</constant> to indicate\ndescriptions of the corresponding fields.  <programlisting><![CDATA[\n<heap_profile_format_version>/<mean_sample_interval>\n  <aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n  [...]\n  <thread_3_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n  [...]\n  <thread_99_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n  [...]\n@ <top_frame> <frame> [...] 
<frame> <frame> <frame> [...]\n  <backtrace_aggregate>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n  <backtrace_thread_3>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n  <backtrace_thread_99>: <curobjs>: <curbytes> [<cumobjs>: <cumbytes>]\n[...]\n\nMAPPED_LIBRARIES:\n</proc/<pid>/maps>]]></programlisting></para>\n  </refsect1>\n\n  <refsect1 id=\"debugging_malloc_problems\">\n    <title>DEBUGGING MALLOC PROBLEMS</title>\n    <para>When debugging, it is a good idea to configure/build jemalloc with\n    the <option>--enable-debug</option> and <option>--enable-fill</option>\n    options, and recompile the program with suitable options and symbols for\n    debugger support.  When so configured, jemalloc incorporates a wide variety\n    of run-time assertions that catch application errors such as double-free,\n    write-after-free, etc.</para>\n\n    <para>Programs often accidentally depend on <quote>uninitialized</quote>\n    memory actually being filled with zero bytes.  Junk filling\n    (see the <link linkend=\"opt.junk\"><mallctl>opt.junk</mallctl></link>\n    option) tends to expose such bugs in the form of obviously incorrect\n    results and/or coredumps.  Conversely, zero\n    filling (see the <link\n    linkend=\"opt.zero\"><mallctl>opt.zero</mallctl></link> option) eliminates\n    the symptoms of such bugs.  Between these two options, it is usually\n    possible to quickly detect, diagnose, and eliminate such bugs.</para>\n\n    <para>This implementation does not provide much detail about the problems\n    it detects, because the performance impact for storing such information\n    would be prohibitive.</para>\n  </refsect1>\n  <refsect1 id=\"diagnostic_messages\">\n    <title>DIAGNOSTIC MESSAGES</title>\n    <para>If any of the memory allocation/deallocation functions detect an\n    error or warning condition, a message will be printed to file descriptor\n    <constant>STDERR_FILENO</constant>.  Errors will result in the process\n    dumping core.  
If the <link\n    linkend=\"opt.abort\"><mallctl>opt.abort</mallctl></link> option is set, most\n    warnings are treated as errors.</para>\n\n    <para>The <varname>malloc_message</varname> variable allows the programmer\n    to override the function which emits the text strings forming the errors\n    and warnings if for some reason the <constant>STDERR_FILENO</constant> file\n    descriptor is not suitable for this.\n    <function>malloc_message()</function> takes the\n    <parameter>cbopaque</parameter> pointer argument that is\n    <constant>NULL</constant> unless overridden by the arguments in a call to\n    <function>malloc_stats_print()</function>, followed by a string\n    pointer.  Please note that doing anything which tries to allocate memory in\n    this function is likely to result in a crash or deadlock.</para>\n\n    <para>All messages are prefixed by\n    <quote><computeroutput>&lt;jemalloc&gt;: </computeroutput></quote>.</para>\n  </refsect1>\n  <refsect1 id=\"return_values\">\n    <title>RETURN VALUES</title>\n    <refsect2>\n      <title>Standard API</title>\n      <para>The <function>malloc()</function> and\n      <function>calloc()</function> functions return a pointer to the\n      allocated memory if successful; otherwise a <constant>NULL</constant>\n      pointer is returned and <varname>errno</varname> is set to\n      <errorname>ENOMEM</errorname>.</para>\n\n      <para>The <function>posix_memalign()</function> function\n      returns the value 0 if successful; otherwise it returns an error value.\n      The <function>posix_memalign()</function> function will fail\n      if:\n        <variablelist>\n          <varlistentry>\n            <term><errorname>EINVAL</errorname></term>\n\n            <listitem><para>The <parameter>alignment</parameter> parameter is\n            not a power of 2 at least as large as\n            <code language=\"C\">sizeof(<type>void *</type>)</code>.\n            </para></listitem>\n          </varlistentry>\n    
      <varlistentry>\n            <term><errorname>ENOMEM</errorname></term>\n\n            <listitem><para>Memory allocation error.</para></listitem>\n          </varlistentry>\n        </variablelist>\n      </para>\n\n      <para>The <function>aligned_alloc()</function> function returns\n      a pointer to the allocated memory if successful; otherwise a\n      <constant>NULL</constant> pointer is returned and\n      <varname>errno</varname> is set.  The\n      <function>aligned_alloc()</function> function will fail if:\n        <variablelist>\n          <varlistentry>\n            <term><errorname>EINVAL</errorname></term>\n\n            <listitem><para>The <parameter>alignment</parameter> parameter is\n            not a power of 2.\n            </para></listitem>\n          </varlistentry>\n          <varlistentry>\n            <term><errorname>ENOMEM</errorname></term>\n\n            <listitem><para>Memory allocation error.</para></listitem>\n          </varlistentry>\n        </variablelist>\n      </para>\n\n      <para>The <function>realloc()</function> function returns a\n      pointer, possibly identical to <parameter>ptr</parameter>, to the\n      allocated memory if successful; otherwise a <constant>NULL</constant>\n      pointer is returned, and <varname>errno</varname> is set to\n      <errorname>ENOMEM</errorname> if the error was the result of an\n      allocation failure.  
The <function>realloc()</function>\n      function always leaves the original buffer intact when an error occurs.\n      </para>\n\n      <para>The <function>free()</function> function returns no\n      value.</para>\n    </refsect2>\n    <refsect2>\n      <title>Non-standard API</title>\n      <para>The <function>mallocx()</function> and\n      <function>rallocx()</function> functions return a pointer to\n      the allocated memory if successful; otherwise a <constant>NULL</constant>\n      pointer is returned to indicate insufficient contiguous memory was\n      available to service the allocation request.  </para>\n\n      <para>The <function>xallocx()</function> function returns the\n      real size of the resulting resized allocation pointed to by\n      <parameter>ptr</parameter>, which is a value less than\n      <parameter>size</parameter> if the allocation could not be adequately\n      grown in place.  </para>\n\n      <para>The <function>sallocx()</function> function returns the\n      real size of the allocation pointed to by <parameter>ptr</parameter>.\n      </para>\n\n      <para>The <function>nallocx()</function> function returns the real\n      size that would result from a successful equivalent\n      <function>mallocx()</function> function call, or zero if\n      insufficient memory is available to perform the size computation.  </para>\n\n      <para>The <function>mallctl()</function>,\n      <function>mallctlnametomib()</function>, and\n      <function>mallctlbymib()</function> functions return 0 on\n      success; otherwise they return an error value.  The functions will fail\n      if:\n        <variablelist>\n          <varlistentry>\n            <term><errorname>EINVAL</errorname></term>\n\n            <listitem><para><parameter>newp</parameter> is not\n            <constant>NULL</constant>, and <parameter>newlen</parameter> is too\n            large or too small.  
Alternatively, <parameter>*oldlenp</parameter>\n            is too large or too small; when this happens, except for a very few\n            cases explicitly documented otherwise, as much data as possible\n            is read despite the error, with the amount of data read being\n            recorded in <parameter>*oldlenp</parameter>.</para></listitem>\n          </varlistentry>\n          <varlistentry>\n            <term><errorname>ENOENT</errorname></term>\n\n            <listitem><para><parameter>name</parameter> or\n            <parameter>mib</parameter> specifies an unknown/invalid\n            value.</para></listitem>\n          </varlistentry>\n          <varlistentry>\n            <term><errorname>EPERM</errorname></term>\n\n            <listitem><para>Attempt to read or write a void value, or attempt\n            to write a read-only value.</para></listitem>\n          </varlistentry>\n          <varlistentry>\n            <term><errorname>EAGAIN</errorname></term>\n\n            <listitem><para>A memory allocation failure\n            occurred.</para></listitem>\n          </varlistentry>\n          <varlistentry>\n            <term><errorname>EFAULT</errorname></term>\n\n            <listitem><para>An interface with side effects failed in some way\n            not directly related to <function>mallctl*()</function>\n            read/write processing.</para></listitem>\n          </varlistentry>\n        </variablelist>\n      </para>\n\n      <para>The <function>malloc_usable_size()</function> function\n      returns the usable size of the allocation pointed to by\n      <parameter>ptr</parameter>.  
</para>\n    </refsect2>\n  </refsect1>\n  <refsect1 id=\"environment\">\n    <title>ENVIRONMENT</title>\n    <para>The following environment variable affects the execution of the\n    allocation functions:\n      <variablelist>\n        <varlistentry>\n          <term><envar>MALLOC_CONF</envar></term>\n\n          <listitem><para>If the environment variable\n          <envar>MALLOC_CONF</envar> is set, the characters it contains\n          will be interpreted as options.</para></listitem>\n        </varlistentry>\n      </variablelist>\n    </para>\n  </refsect1>\n  <refsect1 id=\"examples\">\n    <title>EXAMPLES</title>\n    <para>To dump core whenever a problem occurs:\n      <screen>ln -s 'abort:true' /etc/malloc.conf</screen>\n    </para>\n    <para>To specify in the source that only one arena should be automatically\n    created:\n      <programlisting language=\"C\"><![CDATA[\nmalloc_conf = \"narenas:1\";]]></programlisting></para>\n  </refsect1>\n  <refsect1 id=\"see_also\">\n    <title>SEE ALSO</title>\n    <para><citerefentry><refentrytitle>madvise</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>mmap</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>sbrk</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>utrace</refentrytitle>\n    <manvolnum>2</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>alloca</refentrytitle>\n    <manvolnum>3</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>atexit</refentrytitle>\n    <manvolnum>3</manvolnum></citerefentry>,\n    <citerefentry><refentrytitle>getpagesize</refentrytitle>\n    <manvolnum>3</manvolnum></citerefentry></para>\n  </refsect1>\n  <refsect1 id=\"standards\">\n    <title>STANDARDS</title>\n    <para>The <function>malloc()</function>,\n    <function>calloc()</function>,\n    <function>realloc()</function>, and\n    <function>free()</function> 
functions conform to ISO/IEC\n    9899:1990 (<quote>ISO C90</quote>).</para>\n\n    <para>The <function>posix_memalign()</function> function conforms\n    to IEEE Std 1003.1-2001 (<quote>POSIX.1</quote>).</para>\n  </refsect1>\n</refentry>\n"
  },
  {
    "path": "deps/jemalloc/doc/manpages.xsl.in",
    "content": "<xsl:stylesheet xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" version=\"1.0\">\n  <xsl:import href=\"@XSLROOT@/manpages/docbook.xsl\"/>\n  <xsl:import href=\"@abs_srcroot@doc/stylesheet.xsl\"/>\n</xsl:stylesheet>\n"
  },
  {
    "path": "deps/jemalloc/doc/stylesheet.xsl",
    "content": "<xsl:stylesheet xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" version=\"1.0\">\n  <xsl:param name=\"funcsynopsis.style\">ansi</xsl:param>\n  <xsl:param name=\"function.parens\" select=\"0\"/>\n  <xsl:template match=\"function\">\n    <xsl:call-template name=\"inline.monoseq\"/>\n  </xsl:template>\n  <xsl:template match=\"mallctl\">\n    <quote><xsl:call-template name=\"inline.monoseq\"/></quote>\n  </xsl:template>\n</xsl:stylesheet>\n"
  },
  {
    "path": "deps/jemalloc/doc_internal/PROFILING_INTERNALS.md",
    "content": "# jemalloc profiling\nThis describes the mathematical basis behind jemalloc's profiling implementation, as well as the implementation tricks that make it effective. Historically, the jemalloc profiling design simply copied tcmalloc's. The implementation has since diverged, due both to the desire to record additional information and to the need to correct some biasing bugs.\n\nNote: this document is markdown with embedded LaTeX; different markdown renderers may not produce the expected output.  Viewing with `pandoc -s PROFILING_INTERNALS.md -o PROFILING_INTERNALS.pdf` is recommended.\n\n## Some tricks in our implementation toolbag\n\n### Sampling\nRecording our metadata is quite expensive; we need to walk up the stack to get a stack trace. On top of that, we need to allocate storage to record that stack trace, and stick it somewhere where a profile-dumping call can find it. That call might happen on another thread, so we'll probably need to take a lock to do so. These costs are quite large compared to the average cost of an allocation. To manage this, we'll only sample some fraction of allocations. This will miss some of them, so our data will be incomplete, but we'll try to make up for it. We can tune our sampling rate to balance accuracy and performance.\n\n### Fast Bernoulli sampling\nCompared to our fast paths, even a `coinflip(p)` function can be quite expensive. Having to do a random-number generation and some floating point operations would be a sizeable relative cost. However (as pointed out in [[Vitter, 1987](https://dl.acm.org/doi/10.1145/23002.23003)]), if we can orchestrate our algorithm so that many of our `coinflip` calls share their parameter value, we can do better. We can sample from the geometric distribution, and initialize a counter with the result. 
When the counter hits 0, the `coinflip` function returns true (and reinitializes its internal counter).\nThis can let us do a random-number generation once per (logical) coinflip that comes up heads, rather than once per (logical) coinflip. Since we expect to sample relatively rarely, this can be a large win.\n\n### Fast-path / slow-path thinking\nMost programs have a skewed distribution of allocations. Smaller allocations are much more frequent than large ones, but shorter lived and less common as a fraction of program memory. \"Small\" and \"large\" are necessarily sort of fuzzy terms, but if we define \"small\" as \"allocations jemalloc puts into slabs\" and \"large\" as the others, then it's not uncommon for small allocations to be hundreds of times more frequent than large ones, but take up around half the amount of heap space as large ones. Moreover, small allocations tend to be much cheaper than large ones (often by a factor of 20-30): they're more likely to hit in thread caches, less likely to have to do an mmap, and cheaper to fill (by the user) once the allocation has been returned.\n\n## An unbiased estimator of space consumption from (almost) arbitrary sampling strategies\nSuppose we have a sampling strategy that meets the following criteria:\n\n  - One allocation being sampled is independent of other allocations being sampled.\n  - Each allocation has a non-zero probability of being sampled.\n\nWe can then estimate the bytes in live allocations through some particular stack trace as:\n\n$$ \\sum_i S_i I_i \\frac{1}{\\mathrm{E}[I_i]} $$\n\nwhere the sum ranges over some index variable of live allocations from that stack, $S_i$ is the size of the $i$'th allocation, and $I_i$ is an indicator random variable for whether or not the $i$'th allocation is sampled. 
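The Bernoulli-via-geometric trick described above can be sketched in a few lines. This is an illustrative Python model, not jemalloc's C implementation; the names `geometric_draw` and `BernoulliSampler` are invented for the example:

```python
import math
import random

def geometric_draw(p, rng):
    # Number of failures before the first success of a Bernoulli(p) trial,
    # drawn by inversion from a single uniform variate.
    return int(math.log(1.0 - rng.random()) / math.log(1.0 - p))

class BernoulliSampler:
    """Answers coinflip(p) queries, but only consumes fresh randomness
    roughly once per coinflip that comes up heads."""
    def __init__(self, p, seed=None):
        self.p = p
        self.rng = random.Random(seed)
        self.countdown = geometric_draw(p, self.rng)

    def coinflip(self):
        if self.countdown == 0:
            # Heads: pay for one random draw to schedule the next heads.
            self.countdown = geometric_draw(self.p, self.rng)
            return True
        self.countdown -= 1
        return False
```

Per call, the common path is just a compare and a decrement; the expensive random-number generation happens only when the flip comes up heads.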
$S_i$ and $\\mathrm{E}[I_i]$ are constants (the program allocations are fixed; the random variables are the sampling decisions), so taking the expectation we get\n\n$$ \\sum_i S_i \\mathrm{E}[I_i] \\frac{1}{\\mathrm{E}[I_i]}.$$\n\nThis is of course $\\sum_i S_i$, as we want (and a similar calculation could be done for allocation counts as well).\nThis is a fairly general strategy; note that while we require that sampling decisions be independent of one another's outcomes, they don't have to be independent of previous allocations, total bytes allocated, etc. You can imagine strategies that:\n\n  - Sample allocations at program startup at a higher rate than subsequent allocations\n  - Sample even-indexed allocations more frequently than odd-indexed ones (so long as no allocation has zero sampling probability)\n  - Let threads declare themselves as high-sampling-priority, and sample their allocations at an increased rate.\n\nThese can all be fit into this framework to give an unbiased estimator.\n\n## Evaluating sampling strategies\nNot all strategies for picking allocations to sample are equally good, of course. Among unbiased estimators, the lower the variance, the lower the mean squared error. Using the estimator above, the variance is:\n\n$$\n\\begin{aligned}\n& \\mathrm{Var}[\\sum_i S_i I_i \\frac{1}{\\mathrm{E}[I_i]}]  \\\\\n=& \\sum_i \\mathrm{Var}[S_i I_i \\frac{1}{\\mathrm{E}[I_i]}] \\\\\n=& \\sum_i \\frac{S_i^2}{\\mathrm{E}[I_i]^2} \\mathrm{Var}[I_i] \\\\\n=& \\sum_i \\frac{S_i^2}{\\mathrm{E}[I_i]^2} \\mathrm{E}[I_i](1 - \\mathrm{E}[I_i]) \\\\\n=& \\sum_i S_i^2 \\frac{1 - \\mathrm{E}[I_i]}{\\mathrm{E}[I_i]}.\n\\end{aligned}\n$$\n\nWe can use this formula to compare various strategy choices. All else being equal, lower-variance strategies are better.\n\n## Possible sampling strategies\nBecause of the desire to avoid the fast-path costs, we'd like to use our Bernoulli trick if possible. 
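The unbiasedness argument above is easy to check by simulation. A hypothetical Python sketch (the sizes and probabilities are made up; any non-zero, independent probabilities work):

```python
import random

def estimate_live_bytes(sizes, probs, rng):
    # Sample allocation i with probability probs[i] = E[I_i]; each sampled
    # allocation contributes S_i / E[I_i] to the estimate.
    return sum(s / p for s, p in zip(sizes, probs) if rng.random() < p)

rng = random.Random(1)
sizes = [64] * 1000 + [1 << 20] * 3   # hypothetical live allocations
probs = [0.05] * 1000 + [0.9] * 3     # deliberately non-uniform sampling
mean = sum(estimate_live_bytes(sizes, probs, rng)
           for _ in range(2000)) / 2000
# mean converges to sum(sizes) despite the skewed sampling probabilities
```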
There are two obvious counters to use: a coinflip per allocation, and a coinflip per byte allocated.\n\n### Bernoulli sampling per-allocation\nAn obvious strategy is to pick some large $N$, and give each allocation a $1/N$ chance of being sampled. This would let us use our Bernoulli-via-Geometric trick. Using the formula from above, we can compute the variance as:\n\n$$ \\sum_i S_i^2 \\frac{1 - \\frac{1}{N}}{\\frac{1}{N}}  = (N-1) \\sum_i S_i^2.$$\n\nThat is, an allocation of size $Z$ contributes a term of $(N-1)Z^2$ to the variance.\n\n### Bernoulli sampling per-byte\nAnother option we have is to pick some rate $R$, and give each byte a $1/R$ chance of being picked for sampling (at which point we would sample its contained allocation). The chance of an allocation of size $Z$ being sampled, then, is\n\n$$1-(1-\\frac{1}{R})^{Z}$$\n\nand an allocation of size $Z$ contributes a term of\n\n$$Z^2 \\frac{(1-\\frac{1}{R})^{Z}}{1-(1-\\frac{1}{R})^{Z}}.$$\n\nIn practical settings, $R$ is large, and so this is well-approximated by\n\n$$Z^2 \\frac{e^{-Z/R}}{1 - e^{-Z/R}} .$$\n\nJust to get a sense of the dynamics here, let's look at the behavior for various values of $Z$. When $Z$ is small relative to $R$, we can use $e^x \\approx 1 + x$, and conclude that the variance contributed by a small-$Z$ allocation is around\n\n$$Z^2 \\frac{1-Z/R}{Z/R} \\approx RZ.$$\n\nWhen $Z$ is comparable to $R$, the variance term is near $Z^2$ (we have $\\frac{e^{-Z/R}}{1 - e^{-Z/R}} = 1$ when $Z/R = \\ln 2 \\approx 0.693$). 
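The two variance terms above can be evaluated directly to get a feel for the dynamics. A hypothetical Python sketch (the rate and size values are arbitrary illustrations):

```python
import math

def var_term_per_alloc(Z, N):
    # Bernoulli(1/N) per allocation: variance term (N - 1) * Z^2.
    return (N - 1) * Z * Z

def var_term_per_byte(Z, R):
    # Bernoulli(1/R) per byte: variance term Z^2 * e^{-Z/R} / (1 - e^{-Z/R}).
    return Z * Z * math.exp(-Z / R) / (1.0 - math.exp(-Z / R))

R = float(1 << 20)              # illustrative ~1 MiB sampling rate
small, large = 64.0, 8.0 * R
# Per-allocation sampling penalizes large allocations quadratically,
# while per-byte sampling contributes ~R*Z for small Z and ~0 for large Z.
```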
When $Z$ is large relative to $R$, the variance term goes to zero.\n\n## Picking a sampling strategy\nThe fast-path/slow-path dynamics of allocation patterns point us towards the per-byte sampling approach:\n\n  - The quadratic increase in variance per allocation in the first approach is quite costly when heaps have a non-negligible portion of their bytes in those allocations, which is practically often the case.\n  - The Bernoulli-per-byte approach shifts more of its samples towards large allocations, which are already a slow-path.\n  - We drive several tickers (e.g. tcache gc) by bytes allocated, and report bytes-allocated as a user-visible statistic, so we have to do all the necessary bookkeeping anyways.\n\nIndeed, this is the approach we use in jemalloc. Our heap dumps record the size of the allocation and the sampling rate $R$, and jeprof unbiases by dividing by $1 - e^{-Z/R}$.  The framework above would suggest dividing by $1-(1-1/R)^Z$; instead, we use the fact that $R$ is large in practical situations, and so $e^{-Z/R}$ is a good approximation (and faster to compute).  (Equivalently, we may also see this as the factor that falls out from viewing sampling as a Poisson process directly).\n\n## Consequences for heap dump consumers\nUsing this approach means that there are a few things users need to be aware of.\n\n### Stack counts are not proportional to allocation frequencies\nIf one stack appears twice as often as another, this by itself does not imply that it allocates twice as often. Consider the case in which there are only two types of allocating call stacks in a program. Stack A allocates 8 bytes, and occurs a million times in a program. Stack B allocates 8 MB, and occurs just once in a program. If our sampling rate $R$ is about 1MB, we expect stack A to show up about 8 times, and stack B to show up once. 
Stack A isn't 8 times more frequent than stack B, though; it's a million times more frequent.\n\n### Aggregation must be done after unbiasing samples\nSome tools manually parse heap dump output, and aggregate across stacks (or across program runs) to provide wider-scale data analyses. When doing this aggregation, though, it's important to unbias-and-then-sum, rather than sum-and-then-unbias. Reusing our example from the previous section: suppose we collect heap dumps of the program from a million machines. We then have 8 million occurrences of stack A (each of 8 bytes), and a million occurrences of stack B (each of 8 MB). If we sum first, we'll attribute 64 MB to stack A, and 8 TB to stack B. Unbiasing changes these numbers by an infinitesimal amount, so that sum-then-unbias dramatically underreports the amount of memory allocated by stack A.\n\n## An avenue for future exploration\nWhile the framework we laid out above is pretty general, as an engineering decision we're only interested in fairly simple approaches (i.e. ones for which the chance of an allocation being sampled depends only on its size). Our job is then: for each size class $Z$, pick a probability $p_Z$ that an allocation of that size will be sampled. We made some handwave-y references to statistical distributions to justify our choices, but there's no reason we need to pick them that way. Any set of non-zero probabilities is a valid choice.\nThe real limiting factor in our ability to reduce estimator variance is the fact that sampling is expensive; we want to make sure we only do it on a small fraction of allocations. Our goal, then, is to pick the $p_Z$ to minimize variance given some maximum sampling rate $P$. 
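The unbias-then-sum versus sum-then-unbias distinction from the aggregation section above can be shown numerically, using the stack-A/stack-B fleet example. A hypothetical Python sketch (the rate, counts, and sizes are the illustrative values from the text):

```python
import math

R = 1e6  # ~1 MB sampling rate, as in the example

def unbias(z):
    # jeprof-style unbiasing: divide a sampled allocation of size z by its
    # sampling probability, 1 - e^{-z/R}.
    return z / (1.0 - math.exp(-z / R))

# Fleet-wide: stack A is sampled ~8e6 times (8-byte allocations), stack B
# ~1e6 times (8 MB allocations).  True usage is ~8e12 bytes for each.
a_unbias_then_sum = 8e6 * unbias(8.0)
b_unbias_then_sum = 1e6 * unbias(8e6)
# Summing the sampled A-bytes first (64 MB) and unbiasing once barely
# changes the number, wildly underreporting stack A:
a_sum_then_unbias = unbias(8e6 * 8.0)
```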
If we define $a_Z$ to be the fraction of allocations of size $Z$, and $l_Z$ to be the fraction of allocations of size $Z$ still alive at the time of a heap dump, then we can phrase this as an optimization problem over the choices of $p_Z$:\n\nMinimize\n\n$$ \\sum_Z Z^2 l_Z \\frac{1-p_Z}{p_Z} $$\n\nsubject to\n\n$$ \\sum_Z a_Z p_Z \\leq P $$\n\nIgnoring a term that doesn't depend on $p_Z$, the objective is minimized whenever\n\n$$ \\sum_Z Z^2 l_Z \\frac{1}{p_Z} $$\n\nis. For a particular program, $l_Z$ and $a_Z$ are just numbers that can be obtained (exactly) from existing stats introspection facilities, and we have a fairly tractable convex optimization problem (it can be framed as a second-order cone program). It would be interesting to evaluate, for various common allocation patterns, how well our current strategy adapts. Do our actual choices for $p_Z$ closely correspond to the optimal ones? How close is the variance of our choices to the variance of the optimal strategy?\nYou can imagine an implementation that actually goes all the way, and makes $p_Z$ selections a tuning parameter. I don't think this is a good use of development time for the foreseeable future; but I do wonder about the answers to some of these questions.\n\n## Implementation realities\n\nThe nice story above is at least partially a lie. Initially, jeprof (copying its logic from pprof)  had the sum-then-unbias error described above.  The current version of jemalloc does the unbiasing step on a per-allocation basis internally, so that we're always tracking what the unbiased numbers \"should\" be.  The problem is, actually surfacing those unbiased numbers would require a breaking change to jeprof (and the various already-deployed tools that have copied its logic). Instead, we use a little bit more trickery. Since we know at dump time the numbers we want jeprof to report, we simply choose the values we'll output so that the jeprof numbers will match the true numbers.  
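Ignoring the caps $p_Z \le 1$, the optimization problem above has a closed-form stationarity condition by Lagrange multipliers: $p_Z \propto Z \sqrt{l_Z / a_Z}$, scaled to meet the budget. A hypothetical Python sketch (all inputs invented, and the capping step omitted):

```python
import math

def optimal_probs(sizes, a, l, P):
    # Uncapped Lagrangian solution: p_Z proportional to Z * sqrt(l_Z / a_Z),
    # scaled so the expected sampling rate sum(a_Z * p_Z) equals the budget P.
    # (A real solver must also cap each p_Z at 1; omitted in this sketch.)
    raw = [z * math.sqrt(lz / az) for z, az, lz in zip(sizes, a, l)]
    scale = P / sum(az * r for az, r in zip(a, raw))
    return [r * scale for r in raw]

sizes = [64.0, 1024.0, 16384.0]   # invented size classes
a = [0.70, 0.25, 0.05]            # fraction of allocations per class
l = [0.30, 0.40, 0.30]            # fraction still live at dump time
p = optimal_probs(sizes, a, l, P=0.02)
```

Against any other probabilities with the same budget (e.g. a uniform $p_Z = P$), these choices give a strictly smaller value of the objective $\sum_Z Z^2 l_Z / p_Z$.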
The math is described in `src/prof_data.c` (where the only cleverness is a change of variables that lets the exponentials fall out).\n\nThis has the effect of making the output of jeprof (and related tools) correct, while making its inputs incorrect.  This can be annoying to human readers of raw profiling dump output.\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/activity_callback.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ACTIVITY_CALLBACK_H\n#define JEMALLOC_INTERNAL_ACTIVITY_CALLBACK_H\n\n/*\n * The callback to be executed \"periodically\", in response to some amount of\n * allocator activity.\n *\n * This callback need not be computing any sort of peak (although that's the\n * intended first use case), but we drive it from the peak counter, so it\n * keeps things tidy to keep it here.\n *\n * The calls to this thunk get driven by the peak_event module.\n */\n#define ACTIVITY_CALLBACK_THUNK_INITIALIZER {NULL, NULL}\ntypedef void (*activity_callback_t)(void *uctx, uint64_t allocated,\n    uint64_t deallocated);\ntypedef struct activity_callback_thunk_s activity_callback_thunk_t;\nstruct activity_callback_thunk_s {\n\tactivity_callback_t callback;\n\tvoid *uctx;\n};\n\n#endif /* JEMALLOC_INTERNAL_ACTIVITY_CALLBACK_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_EXTERNS_H\n#define JEMALLOC_INTERNAL_ARENA_EXTERNS_H\n\n#include \"jemalloc/internal/bin.h\"\n#include \"jemalloc/internal/div.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/hook.h\"\n#include \"jemalloc/internal/pages.h\"\n#include \"jemalloc/internal/stats.h\"\n\n/*\n * When the amount of pages to be purged exceeds this amount, deferred purge\n * should happen.\n */\n#define ARENA_DEFERRED_PURGE_NPAGES_THRESHOLD UINT64_C(1024)\n\nextern ssize_t opt_dirty_decay_ms;\nextern ssize_t opt_muzzy_decay_ms;\n\nextern percpu_arena_mode_t opt_percpu_arena;\nextern const char *percpu_arena_mode_names[];\n\nextern div_info_t arena_binind_div_info[SC_NBINS];\n\nextern malloc_mutex_t arenas_lock;\nextern emap_t arena_emap_global;\n\nextern size_t opt_oversize_threshold;\nextern size_t oversize_threshold;\n\n/*\n * arena_bin_offsets[binind] is the offset of the first bin shard for size class\n * binind.\n */\nextern uint32_t arena_bin_offsets[SC_NBINS];\n\nvoid arena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena,\n    unsigned *nthreads, const char **dss, ssize_t *dirty_decay_ms,\n    ssize_t *muzzy_decay_ms, size_t *nactive, size_t *ndirty, size_t *nmuzzy);\nvoid arena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,\n    const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms,\n    size_t *nactive, size_t *ndirty, size_t *nmuzzy, arena_stats_t *astats,\n    bin_stats_data_t *bstats, arena_stats_large_t *lstats,\n    pac_estats_t *estats, hpa_shard_stats_t *hpastats, sec_stats_t *secstats);\nvoid arena_handle_deferred_work(tsdn_t *tsdn, arena_t *arena);\nedata_t *arena_extent_alloc_large(tsdn_t *tsdn, arena_t *arena,\n    size_t usize, size_t alignment, bool zero);\nvoid arena_extent_dalloc_large_prep(tsdn_t *tsdn, arena_t *arena,\n    edata_t *edata);\nvoid arena_extent_ralloc_large_shrink(tsdn_t *tsdn, arena_t *arena,\n    edata_t *edata, size_t oldsize);\nvoid 
arena_extent_ralloc_large_expand(tsdn_t *tsdn, arena_t *arena,\n    edata_t *edata, size_t oldsize);\nbool arena_decay_ms_set(tsdn_t *tsdn, arena_t *arena, extent_state_t state,\n    ssize_t decay_ms);\nssize_t arena_decay_ms_get(arena_t *arena, extent_state_t state);\nvoid arena_decay(tsdn_t *tsdn, arena_t *arena, bool is_background_thread,\n    bool all);\nuint64_t arena_time_until_deferred(tsdn_t *tsdn, arena_t *arena);\nvoid arena_do_deferred_work(tsdn_t *tsdn, arena_t *arena);\nvoid arena_reset(tsd_t *tsd, arena_t *arena);\nvoid arena_destroy(tsd_t *tsd, arena_t *arena);\nvoid arena_cache_bin_fill_small(tsdn_t *tsdn, arena_t *arena,\n    cache_bin_t *cache_bin, cache_bin_info_t *cache_bin_info, szind_t binind,\n    const unsigned nfill);\n\nvoid *arena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size,\n    szind_t ind, bool zero);\nvoid *arena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize,\n    size_t alignment, bool zero, tcache_t *tcache);\nvoid arena_prof_promote(tsdn_t *tsdn, void *ptr, size_t usize);\nvoid arena_dalloc_promoted(tsdn_t *tsdn, void *ptr, tcache_t *tcache,\n    bool slow_path);\nvoid arena_slab_dalloc(tsdn_t *tsdn, arena_t *arena, edata_t *slab);\n\nvoid arena_dalloc_bin_locked_handle_newly_empty(tsdn_t *tsdn, arena_t *arena,\n    edata_t *slab, bin_t *bin);\nvoid arena_dalloc_bin_locked_handle_newly_nonempty(tsdn_t *tsdn, arena_t *arena,\n    edata_t *slab, bin_t *bin);\nvoid arena_dalloc_small(tsdn_t *tsdn, void *ptr);\nbool arena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size,\n    size_t extra, bool zero, size_t *newsize);\nvoid *arena_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t oldsize,\n    size_t size, size_t alignment, bool zero, tcache_t *tcache,\n    hook_ralloc_args_t *hook_args);\ndss_prec_t arena_dss_prec_get(arena_t *arena);\nehooks_t *arena_get_ehooks(arena_t *arena);\nextent_hooks_t *arena_set_extent_hooks(tsd_t *tsd, arena_t *arena,\n    extent_hooks_t *extent_hooks);\nbool 
arena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec);\nssize_t arena_dirty_decay_ms_default_get(void);\nbool arena_dirty_decay_ms_default_set(ssize_t decay_ms);\nssize_t arena_muzzy_decay_ms_default_get(void);\nbool arena_muzzy_decay_ms_default_set(ssize_t decay_ms);\nbool arena_retain_grow_limit_get_set(tsd_t *tsd, arena_t *arena,\n    size_t *old_limit, size_t *new_limit);\nunsigned arena_nthreads_get(arena_t *arena, bool internal);\nvoid arena_nthreads_inc(arena_t *arena, bool internal);\nvoid arena_nthreads_dec(arena_t *arena, bool internal);\narena_t *arena_new(tsdn_t *tsdn, unsigned ind, const arena_config_t *config);\nbool arena_init_huge(void);\nbool arena_is_huge(unsigned arena_ind);\narena_t *arena_choose_huge(tsd_t *tsd);\nbin_t *arena_bin_choose(tsdn_t *tsdn, arena_t *arena, szind_t binind,\n    unsigned *binshard);\nsize_t arena_fill_small_fresh(tsdn_t *tsdn, arena_t *arena, szind_t binind,\n    void **ptrs, size_t nfill, bool zero);\nbool arena_boot(sc_data_t *sc_data, base_t *base, bool hpa);\nvoid arena_prefork0(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork1(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork2(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork3(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork4(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork5(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork6(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork7(tsdn_t *tsdn, arena_t *arena);\nvoid arena_prefork8(tsdn_t *tsdn, arena_t *arena);\nvoid arena_postfork_parent(tsdn_t *tsdn, arena_t *arena);\nvoid arena_postfork_child(tsdn_t *tsdn, arena_t *arena);\n\n#endif /* JEMALLOC_INTERNAL_ARENA_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_inlines_a.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_INLINES_A_H\n#define JEMALLOC_INTERNAL_ARENA_INLINES_A_H\n\nstatic inline unsigned\narena_ind_get(const arena_t *arena) {\n\treturn arena->ind;\n}\n\nstatic inline void\narena_internal_add(arena_t *arena, size_t size) {\n\tatomic_fetch_add_zu(&arena->stats.internal, size, ATOMIC_RELAXED);\n}\n\nstatic inline void\narena_internal_sub(arena_t *arena, size_t size) {\n\tatomic_fetch_sub_zu(&arena->stats.internal, size, ATOMIC_RELAXED);\n}\n\nstatic inline size_t\narena_internal_get(arena_t *arena) {\n\treturn atomic_load_zu(&arena->stats.internal, ATOMIC_RELAXED);\n}\n\n#endif /* JEMALLOC_INTERNAL_ARENA_INLINES_A_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_inlines_b.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_INLINES_B_H\n#define JEMALLOC_INTERNAL_ARENA_INLINES_B_H\n\n#include \"jemalloc/internal/div.h\"\n#include \"jemalloc/internal/emap.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/rtree.h\"\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/ticker.h\"\n\nstatic inline arena_t *\narena_get_from_edata(edata_t *edata) {\n\treturn (arena_t *)atomic_load_p(&arenas[edata_arena_ind_get(edata)],\n\t    ATOMIC_RELAXED);\n}\n\nJEMALLOC_ALWAYS_INLINE arena_t *\narena_choose_maybe_huge(tsd_t *tsd, arena_t *arena, size_t size) {\n\tif (arena != NULL) {\n\t\treturn arena;\n\t}\n\n\t/*\n\t * For huge allocations, use the dedicated huge arena if both are true:\n\t * 1) is using auto arena selection (i.e. arena == NULL), and 2) the\n\t * thread is not assigned to a manual arena.\n\t */\n\tif (unlikely(size >= oversize_threshold)) {\n\t\tarena_t *tsd_arena = tsd_arena_get(tsd);\n\t\tif (tsd_arena == NULL || arena_is_auto(tsd_arena)) {\n\t\t\treturn arena_choose_huge(tsd);\n\t\t}\n\t}\n\n\treturn arena_choose(tsd, NULL);\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_prof_info_get(tsd_t *tsd, const void *ptr, emap_alloc_ctx_t *alloc_ctx,\n    prof_info_t *prof_info, bool reset_recent) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\tassert(prof_info != NULL);\n\n\tedata_t *edata = NULL;\n\tbool is_slab;\n\n\t/* Static check. */\n\tif (alloc_ctx == NULL) {\n\t\tedata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global,\n\t\t    ptr);\n\t\tis_slab = edata_slab_get(edata);\n\t} else if (unlikely(!(is_slab = alloc_ctx->slab))) {\n\t\tedata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global,\n\t\t    ptr);\n\t}\n\n\tif (unlikely(!is_slab)) {\n\t\t/* edata must have been initialized at this point. 
*/\n\t\tassert(edata != NULL);\n\t\tlarge_prof_info_get(tsd, edata, prof_info, reset_recent);\n\t} else {\n\t\tprof_info->alloc_tctx = (prof_tctx_t *)(uintptr_t)1U;\n\t\t/*\n\t\t * No need to set other fields in prof_info; they will never be\n\t\t * accessed if (uintptr_t)alloc_tctx == (uintptr_t)1U.\n\t\t */\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_prof_tctx_reset(tsd_t *tsd, const void *ptr,\n    emap_alloc_ctx_t *alloc_ctx) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\n\t/* Static check. */\n\tif (alloc_ctx == NULL) {\n\t\tedata_t *edata = emap_edata_lookup(tsd_tsdn(tsd),\n\t\t    &arena_emap_global, ptr);\n\t\tif (unlikely(!edata_slab_get(edata))) {\n\t\t\tlarge_prof_tctx_reset(edata);\n\t\t}\n\t} else {\n\t\tif (unlikely(!alloc_ctx->slab)) {\n\t\t\tedata_t *edata = emap_edata_lookup(tsd_tsdn(tsd),\n\t\t\t    &arena_emap_global, ptr);\n\t\t\tlarge_prof_tctx_reset(edata);\n\t\t}\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_prof_tctx_reset_sampled(tsd_t *tsd, const void *ptr) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\n\tedata_t *edata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global,\n\t    ptr);\n\tassert(!edata_slab_get(edata));\n\n\tlarge_prof_tctx_reset(edata);\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_prof_info_set(tsd_t *tsd, edata_t *edata, prof_tctx_t *tctx,\n    size_t size) {\n\tcassert(config_prof);\n\n\tassert(!edata_slab_get(edata));\n\tlarge_prof_info_set(edata, tctx, size);\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_decay_ticks(tsdn_t *tsdn, arena_t *arena, unsigned nticks) {\n\tif (unlikely(tsdn_null(tsdn))) {\n\t\treturn;\n\t}\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\t/*\n\t * We use the ticker_geom_t to avoid having per-arena state in the tsd.\n\t * Instead of having a countdown-until-decay timer running for every\n\t * arena in every thread, we flip a coin once per tick, whose\n\t * probability of coming up heads is 1/nticks; this is effectively the\n\t * operation of the ticker_geom_t.  
Each arena has the same chance of a\n\t * coinflip coming up heads (1/ARENA_DECAY_NTICKS_PER_UPDATE), so we can\n\t * use a single ticker for all of them.\n\t */\n\tticker_geom_t *decay_ticker = tsd_arena_decay_tickerp_get(tsd);\n\tuint64_t *prng_state = tsd_prng_statep_get(tsd);\n\tif (unlikely(ticker_geom_ticks(decay_ticker, prng_state, nticks))) {\n\t\tarena_decay(tsdn, arena, false, false);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_decay_tick(tsdn_t *tsdn, arena_t *arena) {\n\tarena_decay_ticks(tsdn, arena, 1);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\narena_malloc(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind, bool zero,\n    tcache_t *tcache, bool slow_path) {\n\tassert(!tsdn_null(tsdn) || tcache == NULL);\n\n\tif (likely(tcache != NULL)) {\n\t\tif (likely(size <= SC_SMALL_MAXCLASS)) {\n\t\t\treturn tcache_alloc_small(tsdn_tsd(tsdn), arena,\n\t\t\t    tcache, size, ind, zero, slow_path);\n\t\t}\n\t\tif (likely(size <= tcache_maxclass)) {\n\t\t\treturn tcache_alloc_large(tsdn_tsd(tsdn), arena,\n\t\t\t    tcache, size, ind, zero, slow_path);\n\t\t}\n\t\t/* (size > tcache_maxclass) case falls through. */\n\t\tassert(size > tcache_maxclass);\n\t}\n\n\treturn arena_malloc_hard(tsdn, arena, size, ind, zero);\n}\n\nJEMALLOC_ALWAYS_INLINE arena_t *\narena_aalloc(tsdn_t *tsdn, const void *ptr) {\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tunsigned arena_ind = edata_arena_ind_get(edata);\n\treturn (arena_t *)atomic_load_p(&arenas[arena_ind], ATOMIC_RELAXED);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\narena_salloc(tsdn_t *tsdn, const void *ptr) {\n\tassert(ptr != NULL);\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr, &alloc_ctx);\n\tassert(alloc_ctx.szind != SC_NSIZES);\n\n\treturn sz_index2size(alloc_ctx.szind);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\narena_vsalloc(tsdn_t *tsdn, const void *ptr) {\n\t/*\n\t * Return 0 if ptr is not within an extent managed by jemalloc.  
This\n\t * function has two extra costs relative to isalloc():\n\t * - The rtree calls cannot claim to be dependent lookups, which induces\n\t *   rtree lookup load dependencies.\n\t * - The lookup may fail, so there is an extra branch to check for\n\t *   failure.\n\t */\n\n\temap_full_alloc_ctx_t full_alloc_ctx;\n\tbool missing = emap_full_alloc_ctx_try_lookup(tsdn, &arena_emap_global,\n\t    ptr, &full_alloc_ctx);\n\tif (missing) {\n\t\treturn 0;\n\t}\n\n\tif (full_alloc_ctx.edata == NULL) {\n\t\treturn 0;\n\t}\n\tassert(edata_state_get(full_alloc_ctx.edata) == extent_state_active);\n\t/* Only slab members should be looked up via interior pointers. */\n\tassert(edata_addr_get(full_alloc_ctx.edata) == ptr\n\t    || edata_slab_get(full_alloc_ctx.edata));\n\n\tassert(full_alloc_ctx.szind != SC_NSIZES);\n\n\treturn sz_index2size(full_alloc_ctx.szind);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nlarge_dalloc_safety_checks(edata_t *edata, void *ptr, szind_t szind) {\n\tif (!config_opt_safety_checks) {\n\t\treturn false;\n\t}\n\n\t/*\n\t * Eagerly detect double free and sized dealloc bugs for large sizes.\n\t * The cost is low enough (as edata will be accessed anyway) to be\n\t * enabled all the time.\n\t */\n\tif (unlikely(edata == NULL ||\n\t    edata_state_get(edata) != extent_state_active)) {\n\t\tsafety_check_fail(\"Invalid deallocation detected: \"\n\t\t    \"pages being freed (%p) not currently active, \"\n\t\t    \"possibly caused by double free bugs.\",\n\t\t    (uintptr_t)edata_addr_get(edata));\n\t\treturn true;\n\t}\n\tsize_t input_size = sz_index2size(szind);\n\tif (unlikely(input_size != edata_usize_get(edata))) {\n\t\tsafety_check_fail_sized_dealloc(/* current_dealloc */ true, ptr,\n\t\t    /* true_size */ edata_usize_get(edata), input_size);\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic inline void\narena_dalloc_large_no_tcache(tsdn_t *tsdn, void *ptr, szind_t szind) {\n\tif (config_prof && unlikely(szind < SC_NBINS)) {\n\t\tarena_dalloc_promoted(tsdn, 
ptr, NULL, true);\n\t} else {\n\t\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    ptr);\n\t\tif (large_dalloc_safety_checks(edata, ptr, szind)) {\n\t\t\t/* See the comment in isfree. */\n\t\t\treturn;\n\t\t}\n\t\tlarge_dalloc(tsdn, edata);\n\t}\n}\n\nstatic inline void\narena_dalloc_no_tcache(tsdn_t *tsdn, void *ptr) {\n\tassert(ptr != NULL);\n\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr, &alloc_ctx);\n\n\tif (config_debug) {\n\t\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    ptr);\n\t\tassert(alloc_ctx.szind == edata_szind_get(edata));\n\t\tassert(alloc_ctx.szind < SC_NSIZES);\n\t\tassert(alloc_ctx.slab == edata_slab_get(edata));\n\t}\n\n\tif (likely(alloc_ctx.slab)) {\n\t\t/* Small allocation. */\n\t\tarena_dalloc_small(tsdn, ptr);\n\t} else {\n\t\tarena_dalloc_large_no_tcache(tsdn, ptr, alloc_ctx.szind);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_dalloc_large(tsdn_t *tsdn, void *ptr, tcache_t *tcache, szind_t szind,\n    bool slow_path) {\n\tif (szind < nhbins) {\n\t\tif (config_prof && unlikely(szind < SC_NBINS)) {\n\t\t\tarena_dalloc_promoted(tsdn, ptr, tcache, slow_path);\n\t\t} else {\n\t\t\ttcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr, szind,\n\t\t\t    slow_path);\n\t\t}\n\t} else {\n\t\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    ptr);\n\t\tif (large_dalloc_safety_checks(edata, ptr, szind)) {\n\t\t\t/* See the comment in isfree. 
*/\n\t\t\treturn;\n\t\t}\n\t\tlarge_dalloc(tsdn, edata);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_dalloc(tsdn_t *tsdn, void *ptr, tcache_t *tcache,\n    emap_alloc_ctx_t *caller_alloc_ctx, bool slow_path) {\n\tassert(!tsdn_null(tsdn) || tcache == NULL);\n\tassert(ptr != NULL);\n\n\tif (unlikely(tcache == NULL)) {\n\t\tarena_dalloc_no_tcache(tsdn, ptr);\n\t\treturn;\n\t}\n\n\temap_alloc_ctx_t alloc_ctx;\n\tif (caller_alloc_ctx != NULL) {\n\t\talloc_ctx = *caller_alloc_ctx;\n\t} else {\n\t\tutil_assume(!tsdn_null(tsdn));\n\t\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr,\n\t\t    &alloc_ctx);\n\t}\n\n\tif (config_debug) {\n\t\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    ptr);\n\t\tassert(alloc_ctx.szind == edata_szind_get(edata));\n\t\tassert(alloc_ctx.szind < SC_NSIZES);\n\t\tassert(alloc_ctx.slab == edata_slab_get(edata));\n\t}\n\n\tif (likely(alloc_ctx.slab)) {\n\t\t/* Small allocation. */\n\t\ttcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr,\n\t\t    alloc_ctx.szind, slow_path);\n\t} else {\n\t\tarena_dalloc_large(tsdn, ptr, tcache, alloc_ctx.szind,\n\t\t    slow_path);\n\t}\n}\n\nstatic inline void\narena_sdalloc_no_tcache(tsdn_t *tsdn, void *ptr, size_t size) {\n\tassert(ptr != NULL);\n\tassert(size <= SC_LARGE_MAXCLASS);\n\n\temap_alloc_ctx_t alloc_ctx;\n\tif (!config_prof || !opt_prof) {\n\t\t/*\n\t\t * There is no risk of being confused by a promoted sampled\n\t\t * object, so base szind and slab on the given size.\n\t\t */\n\t\talloc_ctx.szind = sz_size2index(size);\n\t\talloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);\n\t}\n\n\tif ((config_prof && opt_prof) || config_debug) {\n\t\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr,\n\t\t    &alloc_ctx);\n\n\t\tassert(alloc_ctx.szind == sz_size2index(size));\n\t\tassert((config_prof && opt_prof)\n\t\t    || alloc_ctx.slab == (alloc_ctx.szind < SC_NBINS));\n\n\t\tif (config_debug) {\n\t\t\tedata_t *edata = emap_edata_lookup(tsdn,\n\t\t\t    &arena_emap_global, 
ptr);\n\t\t\tassert(alloc_ctx.szind == edata_szind_get(edata));\n\t\t\tassert(alloc_ctx.slab == edata_slab_get(edata));\n\t\t}\n\t}\n\n\tif (likely(alloc_ctx.slab)) {\n\t\t/* Small allocation. */\n\t\tarena_dalloc_small(tsdn, ptr);\n\t} else {\n\t\tarena_dalloc_large_no_tcache(tsdn, ptr, alloc_ctx.szind);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_sdalloc(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,\n    emap_alloc_ctx_t *caller_alloc_ctx, bool slow_path) {\n\tassert(!tsdn_null(tsdn) || tcache == NULL);\n\tassert(ptr != NULL);\n\tassert(size <= SC_LARGE_MAXCLASS);\n\n\tif (unlikely(tcache == NULL)) {\n\t\tarena_sdalloc_no_tcache(tsdn, ptr, size);\n\t\treturn;\n\t}\n\n\temap_alloc_ctx_t alloc_ctx;\n\tif (config_prof && opt_prof) {\n\t\tif (caller_alloc_ctx == NULL) {\n\t\t\t/* Uncommon case and should be a static check. */\n\t\t\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr,\n\t\t\t    &alloc_ctx);\n\t\t\tassert(alloc_ctx.szind == sz_size2index(size));\n\t\t} else {\n\t\t\talloc_ctx = *caller_alloc_ctx;\n\t\t}\n\t} else {\n\t\t/*\n\t\t * There is no risk of being confused by a promoted sampled\n\t\t * object, so base szind and slab on the given size.\n\t\t */\n\t\talloc_ctx.szind = sz_size2index(size);\n\t\talloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);\n\t}\n\n\tif (config_debug) {\n\t\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    ptr);\n\t\tassert(alloc_ctx.szind == edata_szind_get(edata));\n\t\tassert(alloc_ctx.slab == edata_slab_get(edata));\n\t}\n\n\tif (likely(alloc_ctx.slab)) {\n\t\t/* Small allocation. 
*/\n\t\ttcache_dalloc_small(tsdn_tsd(tsdn), tcache, ptr,\n\t\t    alloc_ctx.szind, slow_path);\n\t} else {\n\t\tarena_dalloc_large(tsdn, ptr, tcache, alloc_ctx.szind,\n\t\t    slow_path);\n\t}\n}\n\nstatic inline void\narena_cache_oblivious_randomize(tsdn_t *tsdn, arena_t *arena, edata_t *edata,\n    size_t alignment) {\n\tassert(edata_base_get(edata) == edata_addr_get(edata));\n\n\tif (alignment < PAGE) {\n\t\tunsigned lg_range = LG_PAGE -\n\t\t    lg_floor(CACHELINE_CEILING(alignment));\n\t\tsize_t r;\n\t\tif (!tsdn_null(tsdn)) {\n\t\t\ttsd_t *tsd = tsdn_tsd(tsdn);\n\t\t\tr = (size_t)prng_lg_range_u64(\n\t\t\t    tsd_prng_statep_get(tsd), lg_range);\n\t\t} else {\n\t\t\tuint64_t stack_value = (uint64_t)(uintptr_t)&r;\n\t\t\tr = (size_t)prng_lg_range_u64(&stack_value, lg_range);\n\t\t}\n\t\tuintptr_t random_offset = ((uintptr_t)r) << (LG_PAGE -\n\t\t    lg_range);\n\t\tedata->e_addr = (void *)((uintptr_t)edata->e_addr +\n\t\t    random_offset);\n\t\tassert(ALIGNMENT_ADDR2BASE(edata->e_addr, alignment) ==\n\t\t    edata->e_addr);\n\t}\n}\n\n/*\n * The dalloc bin info contains just the information that the common paths need\n * during tcache flushes.  By force-inlining these paths, and using local copies\n * of data (so that the compiler knows it's constant), we avoid a whole bunch of\n * redundant loads and stores by leaving this information in registers.\n */\ntypedef struct arena_dalloc_bin_locked_info_s arena_dalloc_bin_locked_info_t;\nstruct arena_dalloc_bin_locked_info_s {\n\tdiv_info_t div_info;\n\tuint32_t nregs;\n\tuint64_t ndalloc;\n};\n\nJEMALLOC_ALWAYS_INLINE size_t\narena_slab_regind(arena_dalloc_bin_locked_info_t *info, szind_t binind,\n    edata_t *slab, const void *ptr) {\n\tsize_t diff, regind;\n\n\t/* Freeing a pointer outside the slab can cause assertion failure. 
*/\n\tassert((uintptr_t)ptr >= (uintptr_t)edata_addr_get(slab));\n\tassert((uintptr_t)ptr < (uintptr_t)edata_past_get(slab));\n\t/* Freeing an interior pointer can cause assertion failure. */\n\tassert(((uintptr_t)ptr - (uintptr_t)edata_addr_get(slab)) %\n\t    (uintptr_t)bin_infos[binind].reg_size == 0);\n\n\tdiff = (size_t)((uintptr_t)ptr - (uintptr_t)edata_addr_get(slab));\n\n\t/* Avoid doing division with a variable divisor. */\n\tregind = div_compute(&info->div_info, diff);\n\n\tassert(regind < bin_infos[binind].nregs);\n\n\treturn regind;\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_dalloc_bin_locked_begin(arena_dalloc_bin_locked_info_t *info,\n    szind_t binind) {\n\tinfo->div_info = arena_binind_div_info[binind];\n\tinfo->nregs = bin_infos[binind].nregs;\n\tinfo->ndalloc = 0;\n}\n\n/*\n * Does the deallocation work associated with freeing a single pointer (a\n * \"step\") in between an arena_dalloc_bin_locked begin and end call.\n *\n * Returns true if arena_slab_dalloc must be called on slab.  Doesn't do\n * stats updates, which happen during finish (this lets running counts get left\n * in a register).\n */\nJEMALLOC_ALWAYS_INLINE bool\narena_dalloc_bin_locked_step(tsdn_t *tsdn, arena_t *arena, bin_t *bin,\n    arena_dalloc_bin_locked_info_t *info, szind_t binind, edata_t *slab,\n    void *ptr) {\n\tconst bin_info_t *bin_info = &bin_infos[binind];\n\tsize_t regind = arena_slab_regind(info, binind, slab, ptr);\n\tslab_data_t *slab_data = edata_slab_data_get(slab);\n\n\tassert(edata_nfree_get(slab) < bin_info->nregs);\n\t/* Freeing an unallocated pointer can cause assertion failure. 
*/\n\tassert(bitmap_get(slab_data->bitmap, &bin_info->bitmap_info, regind));\n\n\tbitmap_unset(slab_data->bitmap, &bin_info->bitmap_info, regind);\n\tedata_nfree_inc(slab);\n\n\tif (config_stats) {\n\t\tinfo->ndalloc++;\n\t}\n\n\tunsigned nfree = edata_nfree_get(slab);\n\tif (nfree == bin_info->nregs) {\n\t\tarena_dalloc_bin_locked_handle_newly_empty(tsdn, arena, slab,\n\t\t    bin);\n\t\treturn true;\n\t} else if (nfree == 1 && slab != bin->slabcur) {\n\t\tarena_dalloc_bin_locked_handle_newly_nonempty(tsdn, arena, slab,\n\t\t    bin);\n\t}\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\narena_dalloc_bin_locked_finish(tsdn_t *tsdn, arena_t *arena, bin_t *bin,\n    arena_dalloc_bin_locked_info_t *info) {\n\tif (config_stats) {\n\t\tbin->stats.ndalloc += info->ndalloc;\n\t\tassert(bin->stats.curregs >= (size_t)info->ndalloc);\n\t\tbin->stats.curregs -= (size_t)info->ndalloc;\n\t}\n}\n\nstatic inline bin_t *\narena_get_bin(arena_t *arena, szind_t binind, unsigned binshard) {\n\tbin_t *shard0 = (bin_t *)((uintptr_t)arena + arena_bin_offsets[binind]);\n\treturn shard0 + binshard;\n}\n\n#endif /* JEMALLOC_INTERNAL_ARENA_INLINES_B_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_stats.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_STATS_H\n#define JEMALLOC_INTERNAL_ARENA_STATS_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/lockedint.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/mutex_prof.h\"\n#include \"jemalloc/internal/pa.h\"\n#include \"jemalloc/internal/sc.h\"\n\nJEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n\ntypedef struct arena_stats_large_s arena_stats_large_t;\nstruct arena_stats_large_s {\n\t/*\n\t * Total number of allocation/deallocation requests served directly by\n\t * the arena.\n\t */\n\tlocked_u64_t\tnmalloc;\n\tlocked_u64_t\tndalloc;\n\n\t/*\n\t * Number of allocation requests that correspond to this size class.\n\t * This includes requests served by tcache, though tcache only\n\t * periodically merges into this counter.\n\t */\n\tlocked_u64_t\tnrequests; /* Partially derived. */\n\t/*\n\t * Number of tcache fills / flushes for large (similarly, periodically\n\t * merged).  Note that there is no large tcache batch-fill currently\n\t * (i.e. only fill 1 at a time); however flush may be batched.\n\t */\n\tlocked_u64_t\tnfills; /* Partially derived. */\n\tlocked_u64_t\tnflushes; /* Partially derived. */\n\n\t/* Current number of allocations of this size class. */\n\tsize_t\t\tcurlextents; /* Derived. */\n};\n\n/*\n * Arena stats.  Note that fields marked \"derived\" are not directly maintained\n * within the arena code; rather their values are derived during stats merge\n * requests.\n */\ntypedef struct arena_stats_s arena_stats_t;\nstruct arena_stats_s {\n\tLOCKEDINT_MTX_DECLARE(mtx)\n\n\t/*\n\t * resident includes the base stats -- that's why it lives here and not\n\t * in pa_shard_stats_t.\n\t */\n\tsize_t\t\t\tbase; /* Derived. */\n\tsize_t\t\t\tresident; /* Derived. */\n\tsize_t\t\t\tmetadata_thp; /* Derived. */\n\tsize_t\t\t\tmapped; /* Derived. */\n\n\tatomic_zu_t\t\tinternal;\n\n\tsize_t\t\t\tallocated_large; /* Derived. */\n\tuint64_t\t\tnmalloc_large; /* Derived. 
*/\n\tuint64_t\t\tndalloc_large; /* Derived. */\n\tuint64_t\t\tnfills_large; /* Derived. */\n\tuint64_t\t\tnflushes_large; /* Derived. */\n\tuint64_t\t\tnrequests_large; /* Derived. */\n\n\t/*\n\t * The stats logically owned by the pa_shard in the same arena.  This\n\t * lives here only because it's convenient for the purposes of the ctl\n\t * module -- it only knows about the single arena_stats.\n\t */\n\tpa_shard_stats_t\tpa_shard_stats;\n\n\t/* Number of bytes cached in tcache associated with this arena. */\n\tsize_t\t\t\ttcache_bytes; /* Derived. */\n\tsize_t\t\t\ttcache_stashed_bytes; /* Derived. */\n\n\tmutex_prof_data_t mutex_prof_data[mutex_prof_num_arena_mutexes];\n\n\t/* One element for each large size class. */\n\tarena_stats_large_t\tlstats[SC_NSIZES - SC_NBINS];\n\n\t/* Arena uptime. */\n\tnstime_t\t\tuptime;\n};\n\nstatic inline bool\narena_stats_init(tsdn_t *tsdn, arena_stats_t *arena_stats) {\n\tif (config_debug) {\n\t\tfor (size_t i = 0; i < sizeof(arena_stats_t); i++) {\n\t\t\tassert(((char *)arena_stats)[i] == 0);\n\t\t}\n\t}\n\tif (LOCKEDINT_MTX_INIT(arena_stats->mtx, \"arena_stats\",\n\t    WITNESS_RANK_ARENA_STATS, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\t/* Memory is zeroed, so there is no need to clear stats. */\n\treturn false;\n}\n\nstatic inline void\narena_stats_large_flush_nrequests_add(tsdn_t *tsdn, arena_stats_t *arena_stats,\n    szind_t szind, uint64_t nrequests) {\n\tLOCKEDINT_MTX_LOCK(tsdn, arena_stats->mtx);\n\tarena_stats_large_t *lstats = &arena_stats->lstats[szind - SC_NBINS];\n\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(arena_stats->mtx),\n\t    &lstats->nrequests, nrequests);\n\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(arena_stats->mtx),\n\t    &lstats->nflushes, 1);\n\tLOCKEDINT_MTX_UNLOCK(tsdn, arena_stats->mtx);\n}\n\n#endif /* JEMALLOC_INTERNAL_ARENA_STATS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_structs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_STRUCTS_H\n#define JEMALLOC_INTERNAL_ARENA_STRUCTS_H\n\n#include \"jemalloc/internal/arena_stats.h\"\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/bin.h\"\n#include \"jemalloc/internal/bitmap.h\"\n#include \"jemalloc/internal/counter.h\"\n#include \"jemalloc/internal/ecache.h\"\n#include \"jemalloc/internal/edata_cache.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/nstime.h\"\n#include \"jemalloc/internal/pa.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/ticker.h\"\n\nstruct arena_s {\n\t/*\n\t * Number of threads currently assigned to this arena.  Each thread has\n\t * two distinct assignments, one for application-serving allocation, and\n\t * the other for internal metadata allocation.  Internal metadata must\n\t * not be allocated from arenas explicitly created via the arenas.create\n\t * mallctl, because the arena.<i>.reset mallctl indiscriminately\n\t * discards all allocations for the affected arena.\n\t *\n\t *   0: Application allocation.\n\t *   1: Internal metadata allocation.\n\t *\n\t * Synchronization: atomic.\n\t */\n\tatomic_u_t\t\tnthreads[2];\n\n\t/* Next bin shard for binding new threads. Synchronization: atomic. */\n\tatomic_u_t\t\tbinshard_next;\n\n\t/*\n\t * When percpu_arena is enabled, to amortize the cost of reading /\n\t * updating the current CPU id, track the most recent thread accessing\n\t * this arena, and only read CPU if there is a mismatch.\n\t */\n\ttsdn_t\t\t*last_thd;\n\n\t/* Synchronization: internal. */\n\tarena_stats_t\t\tstats;\n\n\t/*\n\t * Lists of tcaches and cache_bin_array_descriptors for extant threads\n\t * associated with this arena.  
Stats from these are merged\n\t * incrementally, and at exit if opt_stats_print is enabled.\n\t *\n\t * Synchronization: tcache_ql_mtx.\n\t */\n\tql_head(tcache_slow_t)\t\t\ttcache_ql;\n\tql_head(cache_bin_array_descriptor_t)\tcache_bin_array_descriptor_ql;\n\tmalloc_mutex_t\t\t\t\ttcache_ql_mtx;\n\n\t/*\n\t * Represents a dss_prec_t, but atomically.\n\t *\n\t * Synchronization: atomic.\n\t */\n\tatomic_u_t\t\tdss_prec;\n\n\t/*\n\t * Extant large allocations.\n\t *\n\t * Synchronization: large_mtx.\n\t */\n\tedata_list_active_t\tlarge;\n\t/* Synchronizes all large allocation/update/deallocation. */\n\tmalloc_mutex_t\t\tlarge_mtx;\n\n\t/* The page-level allocator shard this arena uses. */\n\tpa_shard_t\t\tpa_shard;\n\n\t/*\n\t * A cached copy of base->ind.  This can get accessed on hot paths;\n\t * looking it up in base requires an extra pointer hop / cache miss.\n\t */\n\tunsigned ind;\n\n\t/*\n\t * Base allocator, from which arena metadata are allocated.\n\t *\n\t * Synchronization: internal.\n\t */\n\tbase_t\t\t\t*base;\n\t/* Used to determine uptime.  Read-only after initialization. */\n\tnstime_t\t\tcreate_time;\n\n\t/*\n\t * The arena is allocated alongside its bins; really this is a\n\t * dynamically sized array determined by the binshard settings.\n\t */\n\tbin_t\t\t\tbins[0];\n};\n\n#endif /* JEMALLOC_INTERNAL_ARENA_STRUCTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/arena_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ARENA_TYPES_H\n#define JEMALLOC_INTERNAL_ARENA_TYPES_H\n\n#include \"jemalloc/internal/sc.h\"\n\n/* Default decay times in milliseconds. */\n#define DIRTY_DECAY_MS_DEFAULT\tZD(10 * 1000)\n#define MUZZY_DECAY_MS_DEFAULT\t(0)\n/* Number of event ticks between time checks. */\n#define ARENA_DECAY_NTICKS_PER_UPDATE\t1000\n\ntypedef struct arena_decay_s arena_decay_t;\ntypedef struct arena_s arena_t;\n\ntypedef enum {\n\tpercpu_arena_mode_names_base   = 0, /* Used for options processing. */\n\n\t/*\n\t * *_uninit are used only during bootstrapping, and must correspond\n\t * to initialized variant plus percpu_arena_mode_enabled_base.\n\t */\n\tpercpu_arena_uninit            = 0,\n\tper_phycpu_arena_uninit        = 1,\n\n\t/* All non-disabled modes must come after percpu_arena_disabled. */\n\tpercpu_arena_disabled          = 2,\n\n\tpercpu_arena_mode_names_limit  = 3, /* Used for options processing. */\n\tpercpu_arena_mode_enabled_base = 3,\n\n\tpercpu_arena                   = 3,\n\tper_phycpu_arena               = 4  /* Hyper threads share arena. */\n} percpu_arena_mode_t;\n\n#define PERCPU_ARENA_ENABLED(m)\t((m) >= percpu_arena_mode_enabled_base)\n#define PERCPU_ARENA_DEFAULT\tpercpu_arena_disabled\n\n/*\n * When allocation_size >= oversize_threshold, use the dedicated huge arena\n * (unless an arena index has been explicitly specified).  0 disables the\n * feature.\n */\n#define OVERSIZE_THRESHOLD_DEFAULT (8 << 20)\n\nstruct arena_config_s {\n\t/* extent hooks to be used for the arena */\n\textent_hooks_t *extent_hooks;\n\n\t/*\n\t * Use extent hooks for metadata (base) allocations when true.\n\t */\n\tbool metadata_use_hooks;\n};\n\ntypedef struct arena_config_s arena_config_t;\n\nextern const arena_config_t arena_config_default;\n\n#endif /* JEMALLOC_INTERNAL_ARENA_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/assert.h",
    "content": "#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/util.h\"\n\n/*\n * Define a custom assert() in order to reduce the chances of deadlock during\n * assertion failure.\n */\n#ifndef assert\n#define assert(e) do {\t\t\t\t\t\t\t\\\n\tif (unlikely(config_debug && !(e))) {\t\t\t\t\\\n\t\tmalloc_printf(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: %s:%d: Failed assertion: \\\"%s\\\"\\n\",\t\\\n\t\t    __FILE__, __LINE__, #e);\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#endif\n\n#ifndef not_reached\n#define not_reached() do {\t\t\t\t\t\t\\\n\tif (config_debug) {\t\t\t\t\t\t\\\n\t\tmalloc_printf(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: %s:%d: Unreachable code reached\\n\",\t\\\n\t\t    __FILE__, __LINE__);\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tunreachable();\t\t\t\t\t\t\t\\\n} while (0)\n#endif\n\n#ifndef not_implemented\n#define not_implemented() do {\t\t\t\t\t\t\\\n\tif (config_debug) {\t\t\t\t\t\t\\\n\t\tmalloc_printf(\"<jemalloc>: %s:%d: Not implemented\\n\",\t\\\n\t\t    __FILE__, __LINE__);\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#endif\n\n#ifndef assert_not_implemented\n#define assert_not_implemented(e) do {\t\t\t\t\t\\\n\tif (unlikely(config_debug && !(e))) {\t\t\t\t\\\n\t\tnot_implemented();\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#endif\n\n/* Use to assert a particular configuration, e.g., cassert(config_debug). */\n#ifndef cassert\n#define cassert(c) do {\t\t\t\t\t\t\t\\\n\tif (unlikely(!(c))) {\t\t\t\t\t\t\\\n\t\tnot_reached();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#endif\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/atomic.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ATOMIC_H\n#define JEMALLOC_INTERNAL_ATOMIC_H\n\n#define ATOMIC_INLINE JEMALLOC_ALWAYS_INLINE\n\n#define JEMALLOC_U8_ATOMICS\n#if defined(JEMALLOC_GCC_ATOMIC_ATOMICS)\n#  include \"jemalloc/internal/atomic_gcc_atomic.h\"\n#  if !defined(JEMALLOC_GCC_U8_ATOMIC_ATOMICS)\n#    undef JEMALLOC_U8_ATOMICS\n#  endif\n#elif defined(JEMALLOC_GCC_SYNC_ATOMICS)\n#  include \"jemalloc/internal/atomic_gcc_sync.h\"\n#  if !defined(JEMALLOC_GCC_U8_SYNC_ATOMICS)\n#    undef JEMALLOC_U8_ATOMICS\n#  endif\n#elif defined(_MSC_VER)\n#  include \"jemalloc/internal/atomic_msvc.h\"\n#elif defined(JEMALLOC_C11_ATOMICS)\n#  include \"jemalloc/internal/atomic_c11.h\"\n#else\n#  error \"Don't have atomics implemented on this platform.\"\n#endif\n\n/*\n * This header gives more or less a backport of C11 atomics. The user can write\n * JEMALLOC_GENERATE_ATOMICS(type, short_type, lg_sizeof_type); to generate\n * counterparts of the C11 atomic functions for type, as so:\n *   JEMALLOC_GENERATE_ATOMICS(int *, pi, 3);\n * and then write things like:\n *   int *some_ptr;\n *   atomic_pi_t atomic_ptr_to_int;\n *   atomic_store_pi(&atomic_ptr_to_int, some_ptr, ATOMIC_RELAXED);\n *   int *prev_value = atomic_exchange_pi(&ptr_to_int, NULL, ATOMIC_ACQ_REL);\n *   assert(some_ptr == prev_value);\n * and expect things to work in the obvious way.\n *\n * Also included (with naming differences to avoid conflicts with the standard\n * library):\n *   atomic_fence(atomic_memory_order_t) (mimics C11's atomic_thread_fence).\n *   ATOMIC_INIT (mimics C11's ATOMIC_VAR_INIT).\n */\n\n/*\n * Pure convenience, so that we don't have to type \"atomic_memory_order_\"\n * quite so often.\n */\n#define ATOMIC_RELAXED atomic_memory_order_relaxed\n#define ATOMIC_ACQUIRE atomic_memory_order_acquire\n#define ATOMIC_RELEASE atomic_memory_order_release\n#define ATOMIC_ACQ_REL atomic_memory_order_acq_rel\n#define ATOMIC_SEQ_CST atomic_memory_order_seq_cst\n\n/*\n * Another convenience -- 
simple atomic helper functions.\n */\n#define JEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(type, short_type,\t\\\n    lg_size)\t\t\t\t\t\t\t\t\\\n    JEMALLOC_GENERATE_INT_ATOMICS(type, short_type, lg_size)\t\t\\\n    ATOMIC_INLINE void\t\t\t\t\t\t\t\\\n    atomic_load_add_store_##short_type(atomic_##short_type##_t *a,\t\\\n\ttype inc) {\t\t\t\t\t\t\t\\\n\t    type oldval = atomic_load_##short_type(a, ATOMIC_RELAXED);\t\\\n\t    type newval = oldval + inc;\t\t\t\t\t\\\n\t    atomic_store_##short_type(a, newval, ATOMIC_RELAXED);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    ATOMIC_INLINE void\t\t\t\t\t\t\t\\\n    atomic_load_sub_store_##short_type(atomic_##short_type##_t *a,\t\\\n\ttype inc) {\t\t\t\t\t\t\t\\\n\t    type oldval = atomic_load_##short_type(a, ATOMIC_RELAXED);\t\\\n\t    type newval = oldval - inc;\t\t\t\t\t\\\n\t    atomic_store_##short_type(a, newval, ATOMIC_RELAXED);\t\\\n\t}\n\n/*\n * Not all platforms have 64-bit atomics.  If we do, this #define exposes that\n * fact.\n */\n#if (LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3)\n#  define JEMALLOC_ATOMIC_U64\n#endif\n\nJEMALLOC_GENERATE_ATOMICS(void *, p, LG_SIZEOF_PTR)\n\n/*\n * There's no actual guarantee that sizeof(bool) == 1, but it's true on the only\n * platform that actually needs to know the size, MSVC.\n */\nJEMALLOC_GENERATE_ATOMICS(bool, b, 0)\n\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(unsigned, u, LG_SIZEOF_INT)\n\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(size_t, zu, LG_SIZEOF_PTR)\n\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(ssize_t, zd, LG_SIZEOF_PTR)\n\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(uint8_t, u8, 0)\n\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(uint32_t, u32, 2)\n\n#ifdef JEMALLOC_ATOMIC_U64\nJEMALLOC_GENERATE_EXPANDED_INT_ATOMICS(uint64_t, u64, 3)\n#endif\n\n#undef ATOMIC_INLINE\n\n#endif /* JEMALLOC_INTERNAL_ATOMIC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/atomic_c11.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ATOMIC_C11_H\n#define JEMALLOC_INTERNAL_ATOMIC_C11_H\n\n#include <stdatomic.h>\n\n#define ATOMIC_INIT(...) ATOMIC_VAR_INIT(__VA_ARGS__)\n\n#define atomic_memory_order_t memory_order\n#define atomic_memory_order_relaxed memory_order_relaxed\n#define atomic_memory_order_acquire memory_order_acquire\n#define atomic_memory_order_release memory_order_release\n#define atomic_memory_order_acq_rel memory_order_acq_rel\n#define atomic_memory_order_seq_cst memory_order_seq_cst\n\n#define atomic_fence atomic_thread_fence\n\n#define JEMALLOC_GENERATE_ATOMICS(type, short_type,\t\t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\ntypedef _Atomic(type) atomic_##short_type##_t;\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_load_##short_type(const atomic_##short_type##_t *a,\t\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * A strict interpretation of the C standard prevents\t\t\\\n\t * atomic_load from taking a const argument, but it's\t\t\\\n\t * convenient for our purposes. 
This cast is a workaround.\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tatomic_##short_type##_t* a_nonconst =\t\t\t\t\\\n\t    (atomic_##short_type##_t*)a;\t\t\t\t\\\n\treturn atomic_load_explicit(a_nonconst, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE void\t\t\t\t\t\t\t\\\natomic_store_##short_type(atomic_##short_type##_t *a,\t\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\tatomic_store_explicit(a, val, mo);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_exchange_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn atomic_exchange_explicit(a, val, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_weak_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired, atomic_memory_order_t success_mo,\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\treturn atomic_compare_exchange_weak_explicit(a, expected,\t\\\n\t    desired, success_mo, failure_mo);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_strong_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired, atomic_memory_order_t success_mo,\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\treturn atomic_compare_exchange_strong_explicit(a, expected,\t\\\n\t    desired, success_mo, failure_mo);\t\t\t\t\\\n}\n\n/*\n * Integral types have some special operations available that non-integral ones\n * lack.\n */\n#define JEMALLOC_GENERATE_INT_ATOMICS(type, short_type, \t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\nJEMALLOC_GENERATE_ATOMICS(type, short_type, /* unused */ lg_size)\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_add_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn 
atomic_fetch_add_explicit(a, val, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_sub_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn atomic_fetch_sub_explicit(a, val, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_and_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn atomic_fetch_and_explicit(a, val, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_or_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn atomic_fetch_or_explicit(a, val, mo);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_xor_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn atomic_fetch_xor_explicit(a, val, mo);\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_ATOMIC_C11_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/atomic_gcc_atomic.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ATOMIC_GCC_ATOMIC_H\n#define JEMALLOC_INTERNAL_ATOMIC_GCC_ATOMIC_H\n\n#include \"jemalloc/internal/assert.h\"\n\n#define ATOMIC_INIT(...) {__VA_ARGS__}\n\ntypedef enum {\n\tatomic_memory_order_relaxed,\n\tatomic_memory_order_acquire,\n\tatomic_memory_order_release,\n\tatomic_memory_order_acq_rel,\n\tatomic_memory_order_seq_cst\n} atomic_memory_order_t;\n\nATOMIC_INLINE int\natomic_enum_to_builtin(atomic_memory_order_t mo) {\n\tswitch (mo) {\n\tcase atomic_memory_order_relaxed:\n\t\treturn __ATOMIC_RELAXED;\n\tcase atomic_memory_order_acquire:\n\t\treturn __ATOMIC_ACQUIRE;\n\tcase atomic_memory_order_release:\n\t\treturn __ATOMIC_RELEASE;\n\tcase atomic_memory_order_acq_rel:\n\t\treturn __ATOMIC_ACQ_REL;\n\tcase atomic_memory_order_seq_cst:\n\t\treturn __ATOMIC_SEQ_CST;\n\t}\n\t/* Can't happen; the switch is exhaustive. */\n\tnot_reached();\n}\n\nATOMIC_INLINE void\natomic_fence(atomic_memory_order_t mo) {\n\t__atomic_thread_fence(atomic_enum_to_builtin(mo));\n}\n\n#define JEMALLOC_GENERATE_ATOMICS(type, short_type,\t\t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\ttype repr;\t\t\t\t\t\t\t\\\n} atomic_##short_type##_t;\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_load_##short_type(const atomic_##short_type##_t *a,\t\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\ttype result;\t\t\t\t\t\t\t\\\n\t__atomic_load(&a->repr, &result, atomic_enum_to_builtin(mo));\t\\\n\treturn result;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE void\t\t\t\t\t\t\t\\\natomic_store_##short_type(atomic_##short_type##_t *a, type val,\t\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\t__atomic_store(&a->repr, &val, atomic_enum_to_builtin(mo));\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_exchange_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) 
{\t\t\t\t\t\t\\\n\ttype result;\t\t\t\t\t\t\t\\\n\t__atomic_exchange(&a->repr, &val, &result,\t\t\t\\\n\t    atomic_enum_to_builtin(mo));\t\t\t\t\\\n\treturn result;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_weak_##short_type(atomic_##short_type##_t *a,\t\\\n    UNUSED type *expected, type desired,\t\t\t\t\\\n    atomic_memory_order_t success_mo,\t\t\t\t\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\treturn __atomic_compare_exchange(&a->repr, expected, &desired,\t\\\n\t    true, atomic_enum_to_builtin(success_mo),\t\t\t\\\n\t    atomic_enum_to_builtin(failure_mo));\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_strong_##short_type(atomic_##short_type##_t *a,\t\\\n    UNUSED type *expected, type desired,\t\t\t\t\\\n    atomic_memory_order_t success_mo,\t\t\t\t\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\treturn __atomic_compare_exchange(&a->repr, expected, &desired,\t\\\n\t    false,\t\t\t\t\t\t\t\\\n\t    atomic_enum_to_builtin(success_mo),\t\t\t\t\\\n\t    atomic_enum_to_builtin(failure_mo));\t\t\t\\\n}\n\n\n#define JEMALLOC_GENERATE_INT_ATOMICS(type, short_type,\t\t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\nJEMALLOC_GENERATE_ATOMICS(type, short_type, /* unused */ lg_size)\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_add_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __atomic_fetch_add(&a->repr, val,\t\t\t\\\n\t    atomic_enum_to_builtin(mo));\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_sub_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __atomic_fetch_sub(&a->repr, val,\t\t\t\\\n\t    
atomic_enum_to_builtin(mo));\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_and_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __atomic_fetch_and(&a->repr, val,\t\t\t\\\n\t    atomic_enum_to_builtin(mo));\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_or_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __atomic_fetch_or(&a->repr, val,\t\t\t\t\\\n\t    atomic_enum_to_builtin(mo));\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_xor_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __atomic_fetch_xor(&a->repr, val,\t\t\t\\\n\t    atomic_enum_to_builtin(mo));\t\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_ATOMIC_GCC_ATOMIC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/atomic_gcc_sync.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ATOMIC_GCC_SYNC_H\n#define JEMALLOC_INTERNAL_ATOMIC_GCC_SYNC_H\n\n#define ATOMIC_INIT(...) {__VA_ARGS__}\n\ntypedef enum {\n\tatomic_memory_order_relaxed,\n\tatomic_memory_order_acquire,\n\tatomic_memory_order_release,\n\tatomic_memory_order_acq_rel,\n\tatomic_memory_order_seq_cst\n} atomic_memory_order_t;\n\nATOMIC_INLINE void\natomic_fence(atomic_memory_order_t mo) {\n\t/* Easy cases first: no barrier, and full barrier. */\n\tif (mo == atomic_memory_order_relaxed) {\n\t\tasm volatile(\"\" ::: \"memory\");\n\t\treturn;\n\t}\n\tif (mo == atomic_memory_order_seq_cst) {\n\t\tasm volatile(\"\" ::: \"memory\");\n\t\t__sync_synchronize();\n\t\tasm volatile(\"\" ::: \"memory\");\n\t\treturn;\n\t}\n\tasm volatile(\"\" ::: \"memory\");\n#  if defined(__i386__) || defined(__x86_64__)\n\t/* This is implicit on x86. */\n#  elif defined(__ppc64__)\n\tasm volatile(\"lwsync\");\n#  elif defined(__ppc__)\n\tasm volatile(\"sync\");\n#  elif defined(__sparc__) && defined(__arch64__)\n\tif (mo == atomic_memory_order_acquire) {\n\t\tasm volatile(\"membar #LoadLoad | #LoadStore\");\n\t} else if (mo == atomic_memory_order_release) {\n\t\tasm volatile(\"membar #LoadStore | #StoreStore\");\n\t} else {\n\t\tasm volatile(\"membar #LoadLoad | #LoadStore | #StoreStore\");\n\t}\n#  else\n\t__sync_synchronize();\n#  endif\n\tasm volatile(\"\" ::: \"memory\");\n}\n\n/*\n * A correct implementation of seq_cst loads and stores on weakly ordered\n * architectures could do either of the following:\n *   1. store() is weak-fence -> store -> strong fence, load() is load ->\n *      strong-fence.\n *   2. 
store() is strong-fence -> store, load() is strong-fence -> load ->\n *      weak-fence.\n * The tricky thing is, load() and store() above can be the load or store\n * portions of a gcc __sync builtin, so we have to follow GCC's lead, which\n * means going with strategy 2.\n * On strongly ordered architectures, the natural strategy is to stick a strong\n * fence after seq_cst stores, and have naked loads.  So we want the strong\n * fences in different places on different architectures.\n * atomic_pre_sc_load_fence and atomic_post_sc_store_fence allow us to\n * accomplish this.\n */\n\nATOMIC_INLINE void\natomic_pre_sc_load_fence() {\n#  if defined(__i386__) || defined(__x86_64__) ||\t\t\t\\\n    (defined(__sparc__) && defined(__arch64__))\n\tatomic_fence(atomic_memory_order_relaxed);\n#  else\n\tatomic_fence(atomic_memory_order_seq_cst);\n#  endif\n}\n\nATOMIC_INLINE void\natomic_post_sc_store_fence() {\n#  if defined(__i386__) || defined(__x86_64__) ||\t\t\t\\\n    (defined(__sparc__) && defined(__arch64__))\n\tatomic_fence(atomic_memory_order_seq_cst);\n#  else\n\tatomic_fence(atomic_memory_order_relaxed);\n#  endif\n\n}\n\n#define JEMALLOC_GENERATE_ATOMICS(type, short_type,\t\t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\ttype volatile repr;\t\t\t\t\t\t\\\n} atomic_##short_type##_t;\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_load_##short_type(const atomic_##short_type##_t *a,\t\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\tif (mo == atomic_memory_order_seq_cst) {\t\t\t\\\n\t\tatomic_pre_sc_load_fence();\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ttype result = a->repr;\t\t\t\t\t\t\\\n\tif (mo != atomic_memory_order_relaxed) {\t\t\t\\\n\t\tatomic_fence(atomic_memory_order_acquire);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn result;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE void\t\t\t\t\t\t\t\\\natomic_store_##short_type(atomic_##short_type##_t *a,\t\t\t\\\n    
type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\tif (mo != atomic_memory_order_relaxed) {\t\t\t\\\n\t\tatomic_fence(atomic_memory_order_release);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ta->repr = val;\t\t\t\t\t\t\t\\\n\tif (mo == atomic_memory_order_seq_cst) {\t\t\t\\\n\t\tatomic_post_sc_store_fence();\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_exchange_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Because of FreeBSD, we care about gcc 4.2, which doesn't have\\\n\t * an atomic exchange builtin.  We fake it with a CAS loop.\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\twhile (true) {\t\t\t\t\t\t\t\\\n\t\ttype old = a->repr;\t\t\t\t\t\\\n\t\tif (__sync_bool_compare_and_swap(&a->repr, old, val)) {\t\\\n\t\t\treturn old;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_weak_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired,\t\t\t\t\t\\\n    atomic_memory_order_t success_mo,\t\t\t\t\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\ttype prev = __sync_val_compare_and_swap(&a->repr, *expected,\t\\\n\t    desired);\t\t\t\t\t\t\t\\\n\tif (prev == *expected) {\t\t\t\t\t\\\n\t\treturn true;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\t*expected = prev;\t\t\t\t\t\\\n\t\treturn false;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_strong_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired,\t\t\t\t\t\\\n    atomic_memory_order_t success_mo,\t\t\t\t\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\ttype prev = 
__sync_val_compare_and_swap(&a->repr, *expected,\t\\\n\t    desired);\t\t\t\t\t\t\t\\\n\tif (prev == *expected) {\t\t\t\t\t\\\n\t\treturn true;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\t*expected = prev;\t\t\t\t\t\\\n\t\treturn false;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\n\n#define JEMALLOC_GENERATE_INT_ATOMICS(type, short_type,\t\t\t\\\n    /* unused */ lg_size)\t\t\t\t\t\t\\\nJEMALLOC_GENERATE_ATOMICS(type, short_type, /* unused */ lg_size)\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_add_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __sync_fetch_and_add(&a->repr, val);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_sub_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __sync_fetch_and_sub(&a->repr, val);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_and_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __sync_fetch_and_and(&a->repr, val);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_or_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __sync_fetch_and_or(&a->repr, val);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_xor_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn __sync_fetch_and_xor(&a->repr, val);\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_ATOMIC_GCC_SYNC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/atomic_msvc.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ATOMIC_MSVC_H\n#define JEMALLOC_INTERNAL_ATOMIC_MSVC_H\n\n#define ATOMIC_INIT(...) {__VA_ARGS__}\n\ntypedef enum {\n\tatomic_memory_order_relaxed,\n\tatomic_memory_order_acquire,\n\tatomic_memory_order_release,\n\tatomic_memory_order_acq_rel,\n\tatomic_memory_order_seq_cst\n} atomic_memory_order_t;\n\ntypedef char atomic_repr_0_t;\ntypedef short atomic_repr_1_t;\ntypedef long atomic_repr_2_t;\ntypedef __int64 atomic_repr_3_t;\n\nATOMIC_INLINE void\natomic_fence(atomic_memory_order_t mo) {\n\t_ReadWriteBarrier();\n#  if defined(_M_ARM) || defined(_M_ARM64)\n\t/* ARM needs a barrier for everything but relaxed. */\n\tif (mo != atomic_memory_order_relaxed) {\n\t\tMemoryBarrier();\n\t}\n#  elif defined(_M_IX86) || defined (_M_X64)\n\t/* x86 needs a barrier only for seq_cst. */\n\tif (mo == atomic_memory_order_seq_cst) {\n\t\tMemoryBarrier();\n\t}\n#  else\n#  error \"Don't know how to create atomics for this platform for MSVC.\"\n#  endif\n\t_ReadWriteBarrier();\n}\n\n#define ATOMIC_INTERLOCKED_REPR(lg_size) atomic_repr_ ## lg_size ## _t\n\n#define ATOMIC_CONCAT(a, b) ATOMIC_RAW_CONCAT(a, b)\n#define ATOMIC_RAW_CONCAT(a, b) a ## b\n\n#define ATOMIC_INTERLOCKED_NAME(base_name, lg_size) ATOMIC_CONCAT(\t\\\n    base_name, ATOMIC_INTERLOCKED_SUFFIX(lg_size))\n\n#define ATOMIC_INTERLOCKED_SUFFIX(lg_size)\t\t\t\t\\\n    ATOMIC_CONCAT(ATOMIC_INTERLOCKED_SUFFIX_, lg_size)\n\n#define ATOMIC_INTERLOCKED_SUFFIX_0 8\n#define ATOMIC_INTERLOCKED_SUFFIX_1 16\n#define ATOMIC_INTERLOCKED_SUFFIX_2\n#define ATOMIC_INTERLOCKED_SUFFIX_3 64\n\n#define JEMALLOC_GENERATE_ATOMICS(type, short_type, lg_size)\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tATOMIC_INTERLOCKED_REPR(lg_size) repr;\t\t\t\t\\\n} atomic_##short_type##_t;\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_load_##short_type(const atomic_##short_type##_t *a,\t\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\tATOMIC_INTERLOCKED_REPR(lg_size) ret = 
a->repr;\t\t\t\\\n\tif (mo != atomic_memory_order_relaxed) {\t\t\t\\\n\t\tatomic_fence(atomic_memory_order_acquire);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn (type) ret;\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE void\t\t\t\t\t\t\t\\\natomic_store_##short_type(atomic_##short_type##_t *a,\t\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\tif (mo != atomic_memory_order_relaxed) {\t\t\t\\\n\t\tatomic_fence(atomic_memory_order_release);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ta->repr = (ATOMIC_INTERLOCKED_REPR(lg_size)) val;\t\t\\\n\tif (mo == atomic_memory_order_seq_cst) {\t\t\t\\\n\t\tatomic_fence(atomic_memory_order_seq_cst);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_exchange_##short_type(atomic_##short_type##_t *a, type val,\t\\\n    atomic_memory_order_t mo) {\t\t\t\t\t\t\\\n\treturn (type)ATOMIC_INTERLOCKED_NAME(_InterlockedExchange,\t\\\n\t    lg_size)(&a->repr, (ATOMIC_INTERLOCKED_REPR(lg_size))val);\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_weak_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired, atomic_memory_order_t success_mo,\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\tATOMIC_INTERLOCKED_REPR(lg_size) e =\t\t\t\t\\\n\t    (ATOMIC_INTERLOCKED_REPR(lg_size))*expected;\t\t\\\n\tATOMIC_INTERLOCKED_REPR(lg_size) d =\t\t\t\t\\\n\t    (ATOMIC_INTERLOCKED_REPR(lg_size))desired;\t\t\t\\\n\tATOMIC_INTERLOCKED_REPR(lg_size) old =\t\t\t\t\\\n\t    ATOMIC_INTERLOCKED_NAME(_InterlockedCompareExchange, \t\\\n\t\tlg_size)(&a->repr, d, e);\t\t\t\t\\\n\tif (old == e) {\t\t\t\t\t\t\t\\\n\t\treturn true;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\t*expected = (type)old;\t\t\t\t\t\\\n\t\treturn false;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE 
bool\t\t\t\t\t\t\t\\\natomic_compare_exchange_strong_##short_type(atomic_##short_type##_t *a,\t\\\n    type *expected, type desired, atomic_memory_order_t success_mo,\t\\\n    atomic_memory_order_t failure_mo) {\t\t\t\t\t\\\n\t/* We implement the weak version with strong semantics. */\t\\\n\treturn atomic_compare_exchange_weak_##short_type(a, expected,\t\\\n\t    desired, success_mo, failure_mo);\t\t\t\t\\\n}\n\n\n#define JEMALLOC_GENERATE_INT_ATOMICS(type, short_type, lg_size)\t\\\nJEMALLOC_GENERATE_ATOMICS(type, short_type, lg_size)\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_add_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn (type)ATOMIC_INTERLOCKED_NAME(_InterlockedExchangeAdd,\t\\\n\t    lg_size)(&a->repr, (ATOMIC_INTERLOCKED_REPR(lg_size))val);\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_sub_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * MSVC warns on negation of unsigned operands, but for us it\t\\\n\t * gives exactly the right semantics (MAX_TYPE + 1 - operand).\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\t__pragma(warning(push))\t\t\t\t\t\t\\\n\t__pragma(warning(disable: 4146))\t\t\t\t\\\n\treturn atomic_fetch_add_##short_type(a, -val, mo);\t\t\\\n\t__pragma(warning(pop))\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_and_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn (type)ATOMIC_INTERLOCKED_NAME(_InterlockedAnd, lg_size)(\t\\\n\t    &a->repr, (ATOMIC_INTERLOCKED_REPR(lg_size))val);\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_or_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn (type)ATOMIC_INTERLOCKED_NAME(_InterlockedOr, lg_size)(\t\\\n\t  
  &a->repr, (ATOMIC_INTERLOCKED_REPR(lg_size))val);\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nATOMIC_INLINE type\t\t\t\t\t\t\t\\\natomic_fetch_xor_##short_type(atomic_##short_type##_t *a,\t\t\\\n    type val, atomic_memory_order_t mo) {\t\t\t\t\\\n\treturn (type)ATOMIC_INTERLOCKED_NAME(_InterlockedXor, lg_size)(\t\\\n\t    &a->repr, (ATOMIC_INTERLOCKED_REPR(lg_size))val);\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_ATOMIC_MSVC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/background_thread_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BACKGROUND_THREAD_EXTERNS_H\n#define JEMALLOC_INTERNAL_BACKGROUND_THREAD_EXTERNS_H\n\nextern bool opt_background_thread;\nextern size_t opt_max_background_threads;\nextern malloc_mutex_t background_thread_lock;\nextern atomic_b_t background_thread_enabled_state;\nextern size_t n_background_threads;\nextern size_t max_background_threads;\nextern background_thread_info_t *background_thread_info;\n\nbool background_thread_create(tsd_t *tsd, unsigned arena_ind);\nbool background_threads_enable(tsd_t *tsd);\nbool background_threads_disable(tsd_t *tsd);\nbool background_thread_is_started(background_thread_info_t* info);\nvoid background_thread_wakeup_early(background_thread_info_t *info,\n    nstime_t *remaining_sleep);\nvoid background_thread_prefork0(tsdn_t *tsdn);\nvoid background_thread_prefork1(tsdn_t *tsdn);\nvoid background_thread_postfork_parent(tsdn_t *tsdn);\nvoid background_thread_postfork_child(tsdn_t *tsdn);\nbool background_thread_stats_read(tsdn_t *tsdn,\n    background_thread_stats_t *stats);\nvoid background_thread_ctl_init(tsdn_t *tsdn);\n\n#ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER\nextern int pthread_create_wrapper(pthread_t *__restrict, const pthread_attr_t *,\n    void *(*)(void *), void *__restrict);\n#endif\nbool background_thread_boot0(void);\nbool background_thread_boot1(tsdn_t *tsdn, base_t *base);\n\n#endif /* JEMALLOC_INTERNAL_BACKGROUND_THREAD_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/background_thread_inlines.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H\n#define JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H\n\nJEMALLOC_ALWAYS_INLINE bool\nbackground_thread_enabled(void) {\n\treturn atomic_load_b(&background_thread_enabled_state, ATOMIC_RELAXED);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nbackground_thread_enabled_set(tsdn_t *tsdn, bool state) {\n\tmalloc_mutex_assert_owner(tsdn, &background_thread_lock);\n\tatomic_store_b(&background_thread_enabled_state, state, ATOMIC_RELAXED);\n}\n\nJEMALLOC_ALWAYS_INLINE background_thread_info_t *\narena_background_thread_info_get(arena_t *arena) {\n\tunsigned arena_ind = arena_ind_get(arena);\n\treturn &background_thread_info[arena_ind % max_background_threads];\n}\n\nJEMALLOC_ALWAYS_INLINE background_thread_info_t *\nbackground_thread_info_get(size_t ind) {\n\treturn &background_thread_info[ind % max_background_threads];\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nbackground_thread_wakeup_time_get(background_thread_info_t *info) {\n\tuint64_t next_wakeup = nstime_ns(&info->next_wakeup);\n\tassert(atomic_load_b(&info->indefinite_sleep, ATOMIC_ACQUIRE) ==\n\t    (next_wakeup == BACKGROUND_THREAD_INDEFINITE_SLEEP));\n\treturn next_wakeup;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nbackground_thread_wakeup_time_set(tsdn_t *tsdn, background_thread_info_t *info,\n    uint64_t wakeup_time) {\n\tmalloc_mutex_assert_owner(tsdn, &info->mtx);\n\tatomic_store_b(&info->indefinite_sleep,\n\t    wakeup_time == BACKGROUND_THREAD_INDEFINITE_SLEEP, ATOMIC_RELEASE);\n\tnstime_init(&info->next_wakeup, wakeup_time);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nbackground_thread_indefinite_sleep(background_thread_info_t *info) {\n\treturn atomic_load_b(&info->indefinite_sleep, ATOMIC_ACQUIRE);\n}\n\n#endif /* JEMALLOC_INTERNAL_BACKGROUND_THREAD_INLINES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/background_thread_structs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BACKGROUND_THREAD_STRUCTS_H\n#define JEMALLOC_INTERNAL_BACKGROUND_THREAD_STRUCTS_H\n\n/* This file really combines \"structs\" and \"types\", but only transitionally. */\n\n#if defined(JEMALLOC_BACKGROUND_THREAD) || defined(JEMALLOC_LAZY_LOCK)\n#  define JEMALLOC_PTHREAD_CREATE_WRAPPER\n#endif\n\n#define BACKGROUND_THREAD_INDEFINITE_SLEEP UINT64_MAX\n#define MAX_BACKGROUND_THREAD_LIMIT MALLOCX_ARENA_LIMIT\n#define DEFAULT_NUM_BACKGROUND_THREAD 4\n\n/*\n * These exist only as a transitional state.  Eventually, deferral should be\n * part of the PAI, and each implementation can indicate wait times with more\n * specificity.\n */\n#define BACKGROUND_THREAD_HPA_INTERVAL_MAX_UNINITIALIZED (-2)\n#define BACKGROUND_THREAD_HPA_INTERVAL_MAX_DEFAULT_WHEN_ENABLED 5000\n\n#define BACKGROUND_THREAD_DEFERRED_MIN UINT64_C(0)\n#define BACKGROUND_THREAD_DEFERRED_MAX UINT64_MAX\n\ntypedef enum {\n\tbackground_thread_stopped,\n\tbackground_thread_started,\n\t/* Thread waits on the global lock when paused (for arena_reset). */\n\tbackground_thread_paused,\n} background_thread_state_t;\n\nstruct background_thread_info_s {\n#ifdef JEMALLOC_BACKGROUND_THREAD\n\t/* Background thread is pthread specific. */\n\tpthread_t\t\tthread;\n\tpthread_cond_t\t\tcond;\n#endif\n\tmalloc_mutex_t\t\tmtx;\n\tbackground_thread_state_t\tstate;\n\t/* When true, it means no wakeup scheduled. */\n\tatomic_b_t\t\tindefinite_sleep;\n\t/* Next scheduled wakeup time (absolute time in ns). */\n\tnstime_t\t\tnext_wakeup;\n\t/*\n\t *  Since the last background thread run, newly added number of pages\n\t *  that need to be purged by the next wakeup.  This is adjusted on\n\t *  epoch advance, and is used to determine whether we should signal the\n\t *  background thread to wake up earlier.\n\t */\n\tsize_t\t\t\tnpages_to_purge_new;\n\t/* Stats: total number of runs since started. */\n\tuint64_t\t\ttot_n_runs;\n\t/* Stats: total sleep time since started. 
*/\n\tnstime_t\t\ttot_sleep_time;\n};\ntypedef struct background_thread_info_s background_thread_info_t;\n\nstruct background_thread_stats_s {\n\tsize_t num_threads;\n\tuint64_t num_runs;\n\tnstime_t run_interval;\n\tmutex_prof_data_t max_counter_per_bg_thd;\n};\ntypedef struct background_thread_stats_s background_thread_stats_t;\n\n#endif /* JEMALLOC_INTERNAL_BACKGROUND_THREAD_STRUCTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/base.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BASE_H\n#define JEMALLOC_INTERNAL_BASE_H\n\n#include \"jemalloc/internal/edata.h\"\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/mutex.h\"\n\nenum metadata_thp_mode_e {\n\tmetadata_thp_disabled   = 0,\n\t/*\n\t * Lazily enable hugepage for metadata. To avoid high RSS caused by THP\n\t * + low usage arena (i.e. THP becomes a significant percentage), the\n\t * \"auto\" option only starts using THP after a base allocator used up\n\t * the first THP region.  Starting from the second hugepage (in a single\n\t * arena), \"auto\" behaves the same as \"always\", i.e. madvise hugepage\n\t * right away.\n\t */\n\tmetadata_thp_auto       = 1,\n\tmetadata_thp_always     = 2,\n\tmetadata_thp_mode_limit = 3\n};\ntypedef enum metadata_thp_mode_e metadata_thp_mode_t;\n\n#define METADATA_THP_DEFAULT metadata_thp_disabled\nextern metadata_thp_mode_t opt_metadata_thp;\nextern const char *metadata_thp_mode_names[];\n\n\n/* Embedded at the beginning of every block of base-managed virtual memory. */\ntypedef struct base_block_s base_block_t;\nstruct base_block_s {\n\t/* Total size of block's virtual memory mapping. */\n\tsize_t size;\n\n\t/* Next block in list of base's blocks. */\n\tbase_block_t *next;\n\n\t/* Tracks unused trailing space. */\n\tedata_t edata;\n};\n\ntypedef struct base_s base_t;\nstruct base_s {\n\t/*\n\t * User-configurable extent hook functions.\n\t */\n\tehooks_t ehooks;\n\n\t/*\n\t * User-configurable extent hook functions for metadata allocations.\n\t */\n\tehooks_t ehooks_base;\n\n\t/* Protects base_alloc() and base_stats_get() operations. */\n\tmalloc_mutex_t mtx;\n\n\t/* Using THP when true (metadata_thp auto mode). */\n\tbool auto_thp_switched;\n\t/*\n\t * Most recent size class in the series of increasingly large base\n\t * extents.  
Logarithmic spacing between subsequent allocations ensures\n\t * that the total number of distinct mappings remains small.\n\t */\n\tpszind_t pind_last;\n\n\t/* Serial number generation state. */\n\tsize_t extent_sn_next;\n\n\t/* Chain of all blocks associated with base. */\n\tbase_block_t *blocks;\n\n\t/* Heap of extents that track unused trailing space within blocks. */\n\tedata_heap_t avail[SC_NSIZES];\n\n\t/* Stats, only maintained if config_stats. */\n\tsize_t allocated;\n\tsize_t resident;\n\tsize_t mapped;\n\t/* Number of THP regions touched. */\n\tsize_t n_thp;\n};\n\nstatic inline unsigned\nbase_ind_get(const base_t *base) {\n\treturn ehooks_ind_get(&base->ehooks);\n}\n\nstatic inline bool\nmetadata_thp_enabled(void) {\n\treturn (opt_metadata_thp != metadata_thp_disabled);\n}\n\nbase_t *b0get(void);\nbase_t *base_new(tsdn_t *tsdn, unsigned ind,\n    const extent_hooks_t *extent_hooks, bool metadata_use_hooks);\nvoid base_delete(tsdn_t *tsdn, base_t *base);\nehooks_t *base_ehooks_get(base_t *base);\nehooks_t *base_ehooks_get_for_metadata(base_t *base);\nextent_hooks_t *base_extent_hooks_set(base_t *base,\n    extent_hooks_t *extent_hooks);\nvoid *base_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment);\nedata_t *base_alloc_edata(tsdn_t *tsdn, base_t *base);\nvoid base_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated,\n    size_t *resident, size_t *mapped, size_t *n_thp);\nvoid base_prefork(tsdn_t *tsdn, base_t *base);\nvoid base_postfork_parent(tsdn_t *tsdn, base_t *base);\nvoid base_postfork_child(tsdn_t *tsdn, base_t *base);\nbool base_boot(tsdn_t *tsdn);\n\n#endif /* JEMALLOC_INTERNAL_BASE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bin.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BIN_H\n#define JEMALLOC_INTERNAL_BIN_H\n\n#include \"jemalloc/internal/bin_stats.h\"\n#include \"jemalloc/internal/bin_types.h\"\n#include \"jemalloc/internal/edata.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/sc.h\"\n\n/*\n * A bin contains a set of extents that are currently being used for slab\n * allocations.\n */\ntypedef struct bin_s bin_t;\nstruct bin_s {\n\t/* All operations on bin_t fields require lock ownership. */\n\tmalloc_mutex_t\t\tlock;\n\n\t/*\n\t * Bin statistics.  These get touched every time the lock is acquired,\n\t * so put them close by in the hopes of getting some cache locality.\n\t */\n\tbin_stats_t\tstats;\n\n\t/*\n\t * Current slab being used to service allocations of this bin's size\n\t * class.  slabcur is independent of slabs_{nonfull,full}; whenever\n\t * slabcur is reassigned, the previous slab must be deallocated or\n\t * inserted into slabs_{nonfull,full}.\n\t */\n\tedata_t\t\t\t*slabcur;\n\n\t/*\n\t * Heap of non-full slabs.  This heap is used to assure that new\n\t * allocations come from the non-full slab that is oldest/lowest in\n\t * memory.\n\t */\n\tedata_heap_t\t\tslabs_nonfull;\n\n\t/* List used to track full slabs. */\n\tedata_list_active_t\tslabs_full;\n};\n\n/* A set of sharded bins of the same size class. */\ntypedef struct bins_s bins_t;\nstruct bins_s {\n\t/* Sharded bins.  Dynamically sized. */\n\tbin_t *bin_shards;\n};\n\nvoid bin_shard_sizes_boot(unsigned bin_shards[SC_NBINS]);\nbool bin_update_shard_size(unsigned bin_shards[SC_NBINS], size_t start_size,\n    size_t end_size, size_t nshards);\n\n/* Initializes a bin to empty.  Returns true on error. */\nbool bin_init(bin_t *bin);\n\n/* Forking. */\nvoid bin_prefork(tsdn_t *tsdn, bin_t *bin);\nvoid bin_postfork_parent(tsdn_t *tsdn, bin_t *bin);\nvoid bin_postfork_child(tsdn_t *tsdn, bin_t *bin);\n\n/* Stats. 
*/\nstatic inline void\nbin_stats_merge(tsdn_t *tsdn, bin_stats_data_t *dst_bin_stats, bin_t *bin) {\n\tmalloc_mutex_lock(tsdn, &bin->lock);\n\tmalloc_mutex_prof_accum(tsdn, &dst_bin_stats->mutex_data, &bin->lock);\n\tbin_stats_t *stats = &dst_bin_stats->stats_data;\n\tstats->nmalloc += bin->stats.nmalloc;\n\tstats->ndalloc += bin->stats.ndalloc;\n\tstats->nrequests += bin->stats.nrequests;\n\tstats->curregs += bin->stats.curregs;\n\tstats->nfills += bin->stats.nfills;\n\tstats->nflushes += bin->stats.nflushes;\n\tstats->nslabs += bin->stats.nslabs;\n\tstats->reslabs += bin->stats.reslabs;\n\tstats->curslabs += bin->stats.curslabs;\n\tstats->nonfull_slabs += bin->stats.nonfull_slabs;\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n}\n\n#endif /* JEMALLOC_INTERNAL_BIN_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bin_info.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BIN_INFO_H\n#define JEMALLOC_INTERNAL_BIN_INFO_H\n\n#include \"jemalloc/internal/bitmap.h\"\n\n/*\n * Read-only information associated with each element of arena_t's bins array\n * is stored separately, partly to reduce memory usage (only one copy, rather\n * than one per arena), but mainly to avoid false cacheline sharing.\n *\n * Each slab has the following layout:\n *\n *   /--------------------\\\n *   | region 0           |\n *   |--------------------|\n *   | region 1           |\n *   |--------------------|\n *   | ...                |\n *   | ...                |\n *   | ...                |\n *   |--------------------|\n *   | region nregs-1     |\n *   \\--------------------/\n */\ntypedef struct bin_info_s bin_info_t;\nstruct bin_info_s {\n\t/* Size of regions in a slab for this bin's size class. */\n\tsize_t\t\t\treg_size;\n\n\t/* Total size of a slab for this bin's size class. */\n\tsize_t\t\t\tslab_size;\n\n\t/* Total number of regions in a slab for this bin's size class. */\n\tuint32_t\t\tnregs;\n\n\t/* Number of sharded bins in each arena for this size class. */\n\tuint32_t\t\tn_shards;\n\n\t/*\n\t * Metadata used to manipulate bitmaps for slabs associated with this\n\t * bin.\n\t */\n\tbitmap_info_t\t\tbitmap_info;\n};\n\nextern bin_info_t bin_infos[SC_NBINS];\n\nvoid bin_info_boot(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS]);\n\n#endif /* JEMALLOC_INTERNAL_BIN_INFO_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bin_stats.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BIN_STATS_H\n#define JEMALLOC_INTERNAL_BIN_STATS_H\n\n#include \"jemalloc/internal/mutex_prof.h\"\n\ntypedef struct bin_stats_s bin_stats_t;\nstruct bin_stats_s {\n\t/*\n\t * Total number of allocation/deallocation requests served directly by\n\t * the bin.  Note that tcache may allocate an object, then recycle it\n\t * many times, resulting in many increments to nrequests, but only one\n\t * each to nmalloc and ndalloc.\n\t */\n\tuint64_t\tnmalloc;\n\tuint64_t\tndalloc;\n\n\t/*\n\t * Number of allocation requests that correspond to the size of this\n\t * bin.  This includes requests served by tcache, though tcache only\n\t * periodically merges into this counter.\n\t */\n\tuint64_t\tnrequests;\n\n\t/*\n\t * Current number of regions of this size class, including regions\n\t * currently cached by tcache.\n\t */\n\tsize_t\t\tcurregs;\n\n\t/* Number of tcache fills from this bin. */\n\tuint64_t\tnfills;\n\n\t/* Number of tcache flushes to this bin. */\n\tuint64_t\tnflushes;\n\n\t/* Total number of slabs created for this bin's size class. */\n\tuint64_t\tnslabs;\n\n\t/*\n\t * Total number of slabs reused by extracting them from the slabs heap\n\t * for this bin's size class.\n\t */\n\tuint64_t\treslabs;\n\n\t/* Current number of slabs in this bin. */\n\tsize_t\t\tcurslabs;\n\n\t/* Current size of nonfull slabs heap in this bin. */\n\tsize_t\t\tnonfull_slabs;\n};\n\ntypedef struct bin_stats_data_s bin_stats_data_t;\nstruct bin_stats_data_s {\n\tbin_stats_t stats_data;\n\tmutex_prof_data_t mutex_data;\n};\n#endif /* JEMALLOC_INTERNAL_BIN_STATS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bin_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BIN_TYPES_H\n#define JEMALLOC_INTERNAL_BIN_TYPES_H\n\n#include \"jemalloc/internal/sc.h\"\n\n#define BIN_SHARDS_MAX (1 << EDATA_BITS_BINSHARD_WIDTH)\n#define N_BIN_SHARDS_DEFAULT 1\n\n/* Used in TSD static initializer only. Real init in arena_bind(). */\n#define TSD_BINSHARDS_ZERO_INITIALIZER {{UINT8_MAX}}\n\ntypedef struct tsd_binshards_s tsd_binshards_t;\nstruct tsd_binshards_s {\n\tuint8_t binshard[SC_NBINS];\n};\n\n#endif /* JEMALLOC_INTERNAL_BIN_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bit_util.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BIT_UTIL_H\n#define JEMALLOC_INTERNAL_BIT_UTIL_H\n\n#include \"jemalloc/internal/assert.h\"\n\n/* Sanity check. */\n#if !defined(JEMALLOC_INTERNAL_FFSLL) || !defined(JEMALLOC_INTERNAL_FFSL) \\\n    || !defined(JEMALLOC_INTERNAL_FFS)\n#  error JEMALLOC_INTERNAL_FFS{,L,LL} should have been defined by configure\n#endif\n\n/*\n * Unlike the builtins and posix ffs functions, our ffs requires a non-zero\n * input, and returns the position of the lowest bit set (as opposed to the\n * posix versions, which return 1 larger than that position and use a return\n * value of zero as a sentinel).  This tends to simplify logic in callers, and\n * allows for consistency with the builtins we build fls on top of.\n */\nstatic inline unsigned\nffs_llu(unsigned long long x) {\n\tutil_assume(x != 0);\n\treturn JEMALLOC_INTERNAL_FFSLL(x) - 1;\n}\n\nstatic inline unsigned\nffs_lu(unsigned long x) {\n\tutil_assume(x != 0);\n\treturn JEMALLOC_INTERNAL_FFSL(x) - 1;\n}\n\nstatic inline unsigned\nffs_u(unsigned x) {\n\tutil_assume(x != 0);\n\treturn JEMALLOC_INTERNAL_FFS(x) - 1;\n}\n\n#define DO_FLS_SLOW(x, suffix) do {\t\t\t\t\t\\\n\tutil_assume(x != 0);\t\t\t\t\t\t\\\n\tx |= (x >> 1);\t\t\t\t\t\t\t\\\n\tx |= (x >> 2);\t\t\t\t\t\t\t\\\n\tx |= (x >> 4);\t\t\t\t\t\t\t\\\n\tx |= (x >> 8);\t\t\t\t\t\t\t\\\n\tx |= (x >> 16);\t\t\t\t\t\t\t\\\n\tif (sizeof(x) > 4) {\t\t\t\t\t\t\\\n\t\t/*\t\t\t\t\t\t\t\\\n\t\t * If sizeof(x) is 4, then the expression \"x >> 32\"\t\\\n\t\t * will generate compiler warnings even if the code\t\\\n\t\t * never executes.  
This circumvents the warning, and\t\\\n\t\t * gets compiled out in optimized builds.\t\t\\\n\t\t */\t\t\t\t\t\t\t\\\n\t\tint constant_32 = sizeof(x) * 4;\t\t\t\\\n\t\tx |= (x >> constant_32);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tx++;\t\t\t\t\t\t\t\t\\\n\tif (x == 0) {\t\t\t\t\t\t\t\\\n\t\treturn 8 * sizeof(x) - 1;\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn ffs_##suffix(x) - 1;\t\t\t\t\t\\\n} while(0)\n\nstatic inline unsigned\nfls_llu_slow(unsigned long long x) {\n\tDO_FLS_SLOW(x, llu);\n}\n\nstatic inline unsigned\nfls_lu_slow(unsigned long x) {\n\tDO_FLS_SLOW(x, lu);\n}\n\nstatic inline unsigned\nfls_u_slow(unsigned x) {\n\tDO_FLS_SLOW(x, u);\n}\n\n#undef DO_FLS_SLOW\n\n#ifdef JEMALLOC_HAVE_BUILTIN_CLZ\nstatic inline unsigned\nfls_llu(unsigned long long x) {\n\tutil_assume(x != 0);\n\t/*\n\t * Note that the xor here is more naturally written as subtraction; the\n\t * last bit set is the number of bits in the type minus the number of\n\t * leading zero bits.  But GCC implements that as:\n\t *    bsr     edi, edi\n\t *    mov     eax, 31\n\t *    xor     edi, 31\n\t *    sub     eax, edi\n\t * If we write it as xor instead, then we get\n\t *    bsr     eax, edi\n\t * as desired.\n\t */\n\treturn (8 * sizeof(x) - 1) ^ __builtin_clzll(x);\n}\n\nstatic inline unsigned\nfls_lu(unsigned long x) {\n\tutil_assume(x != 0);\n\treturn (8 * sizeof(x) - 1) ^ __builtin_clzl(x);\n}\n\nstatic inline unsigned\nfls_u(unsigned x) {\n\tutil_assume(x != 0);\n\treturn (8 * sizeof(x) - 1) ^ __builtin_clz(x);\n}\n#elif defined(_MSC_VER)\n\n#if LG_SIZEOF_PTR == 3\n#define DO_BSR64(bit, x) _BitScanReverse64(&bit, x)\n#else\n/*\n * This never actually runs; we're just dodging a compiler error for the\n * never-taken branch where sizeof(void *) == 8.\n */\n#define DO_BSR64(bit, x) bit = 0; unreachable()\n#endif\n\n#define DO_FLS(x) do {\t\t\t\t\t\t\t\\\n\tif (x == 0) {\t\t\t\t\t\t\t\\\n\t\treturn 8 * sizeof(x);\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tunsigned long bit;\t\t\t\t\t\t\\\n\tif 
(sizeof(x) == 4) {\t\t\t\t\t\t\\\n\t\t_BitScanReverse(&bit, (unsigned)x);\t\t\t\\\n\t\treturn (unsigned)bit;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (sizeof(x) == 8 && sizeof(void *) == 8) {\t\t\t\\\n\t\tDO_BSR64(bit, x);\t\t\t\t\t\\\n\t\treturn (unsigned)bit;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (sizeof(x) == 8 && sizeof(void *) == 4) {\t\t\t\\\n\t\t/* Dodge a compiler warning, as above. */\t\t\\\n\t\tint constant_32 = sizeof(x) * 4;\t\t\t\\\n\t\tif (_BitScanReverse(&bit,\t\t\t\t\\\n\t\t    (unsigned)(x >> constant_32))) {\t\t\t\\\n\t\t\treturn 32 + (unsigned)bit;\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\t_BitScanReverse(&bit, (unsigned)x);\t\t\\\n\t\t\treturn (unsigned)bit;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tunreachable();\t\t\t\t\t\t\t\\\n} while (0)\n\nstatic inline unsigned\nfls_llu(unsigned long long x) {\n\tDO_FLS(x);\n}\n\nstatic inline unsigned\nfls_lu(unsigned long x) {\n\tDO_FLS(x);\n}\n\nstatic inline unsigned\nfls_u(unsigned x) {\n\tDO_FLS(x);\n}\n\n#undef DO_FLS\n#undef DO_BSR64\n#else\n\nstatic inline unsigned\nfls_llu(unsigned long long x) {\n\treturn fls_llu_slow(x);\n}\n\nstatic inline unsigned\nfls_lu(unsigned long x) {\n\treturn fls_lu_slow(x);\n}\n\nstatic inline unsigned\nfls_u(unsigned x) {\n\treturn fls_u_slow(x);\n}\n#endif\n\n#if LG_SIZEOF_LONG_LONG > 3\n#  error \"Haven't implemented popcount for 16-byte ints.\"\n#endif\n\n#define DO_POPCOUNT(x, type) do {\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Algorithm from an old AMD optimization reference manual.\t\\\n\t * We're putting a little bit more work than you might expect\t\\\n\t * into the no-intrinsic case, since we only support the\t\\\n\t * GCC intrinsics spelling of popcount (for now).  
Detecting\t\\\n\t * whether or not the popcount builtin is actually usable in\t\\\n\t * MSVC is nontrivial.\t\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\ttype bmul = (type)0x0101010101010101ULL;\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Replace each 2 bits with the sideways sum of the original\t\\\n\t * values.  0x5 = 0b0101.\t\t\t\t\t\\\n\t *\t\t\t\t\t\t\t\t\\\n\t * You might expect this to be:\t\t\t\t\t\\\n\t *   x = (x & 0x55...) + ((x >> 1) & 0x55...).\t\t\t\\\n\t * That costs an extra mask relative to this, though.\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tx = x - ((x >> 1) & (0x55U * bmul));\t\t\t\t\\\n\t/* Replace each 4 bits with their sideways sum.  0x3 = 0b0011. */\\\n\tx = (x & (bmul * 0x33U)) + ((x >> 2) & (bmul * 0x33U));\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Replace each 8 bits with their sideways sum.  Note that we\t\\\n\t * can't overflow within each 4-bit sum here, so we can skip\t\\\n\t * the initial mask.\t\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tx = (x + (x >> 4)) & (bmul * 0x0FU);\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * None of the partial sums in this multiplication (viewed in\t\\\n\t * base-256) can overflow into the next digit.  
So the least\t\\\n\t * significant byte of the product will be the least\t\t\\\n\t * significant byte of the original value, the second least\t\\\n\t * significant byte will be the sum of the two least\t\t\\\n\t * significant bytes of the original value, and so on.\t\t\\\n\t * Importantly, the high byte will be the byte-wise sum of all\t\\\n\t * the bytes of the original value.\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tx = x * bmul;\t\t\t\t\t\t\t\\\n\tx >>= ((sizeof(x) - 1) * 8);\t\t\t\t\t\\\n\treturn (unsigned)x;\t\t\t\t\t\t\\\n} while(0)\n\nstatic inline unsigned\npopcount_u_slow(unsigned bitmap) {\n\tDO_POPCOUNT(bitmap, unsigned);\n}\n\nstatic inline unsigned\npopcount_lu_slow(unsigned long bitmap) {\n\tDO_POPCOUNT(bitmap, unsigned long);\n}\n\nstatic inline unsigned\npopcount_llu_slow(unsigned long long bitmap) {\n\tDO_POPCOUNT(bitmap, unsigned long long);\n}\n\n#undef DO_POPCOUNT\n\nstatic inline unsigned\npopcount_u(unsigned bitmap) {\n#ifdef JEMALLOC_INTERNAL_POPCOUNT\n\treturn JEMALLOC_INTERNAL_POPCOUNT(bitmap);\n#else\n\treturn popcount_u_slow(bitmap);\n#endif\n}\n\nstatic inline unsigned\npopcount_lu(unsigned long bitmap) {\n#ifdef JEMALLOC_INTERNAL_POPCOUNTL\n\treturn JEMALLOC_INTERNAL_POPCOUNTL(bitmap);\n#else\n\treturn popcount_lu_slow(bitmap);\n#endif\n}\n\nstatic inline unsigned\npopcount_llu(unsigned long long bitmap) {\n#ifdef JEMALLOC_INTERNAL_POPCOUNTLL\n\treturn JEMALLOC_INTERNAL_POPCOUNTLL(bitmap);\n#else\n\treturn popcount_llu_slow(bitmap);\n#endif\n}\n\n/*\n * Clears first unset bit in bitmap, and returns\n * place of bit.  
bitmap *must not* be 0.\n */\n\nstatic inline size_t\ncfs_lu(unsigned long* bitmap) {\n\tutil_assume(*bitmap != 0);\n\tsize_t bit = ffs_lu(*bitmap);\n\t*bitmap ^= ZU(1) << bit;\n\treturn bit;\n}\n\nstatic inline unsigned\nffs_zu(size_t x) {\n#if LG_SIZEOF_PTR == LG_SIZEOF_INT\n\treturn ffs_u(x);\n#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG\n\treturn ffs_lu(x);\n#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG_LONG\n\treturn ffs_llu(x);\n#else\n#error No implementation for size_t ffs()\n#endif\n}\n\nstatic inline unsigned\nfls_zu(size_t x) {\n#if LG_SIZEOF_PTR == LG_SIZEOF_INT\n\treturn fls_u(x);\n#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG\n\treturn fls_lu(x);\n#elif LG_SIZEOF_PTR == LG_SIZEOF_LONG_LONG\n\treturn fls_llu(x);\n#else\n#error No implementation for size_t fls()\n#endif\n}\n\n\nstatic inline unsigned\nffs_u64(uint64_t x) {\n#if LG_SIZEOF_LONG == 3\n\treturn ffs_lu(x);\n#elif LG_SIZEOF_LONG_LONG == 3\n\treturn ffs_llu(x);\n#else\n#error No implementation for 64-bit ffs()\n#endif\n}\n\nstatic inline unsigned\nfls_u64(uint64_t x) {\n#if LG_SIZEOF_LONG == 3\n\treturn fls_lu(x);\n#elif LG_SIZEOF_LONG_LONG == 3\n\treturn fls_llu(x);\n#else\n#error No implementation for 64-bit fls()\n#endif\n}\n\nstatic inline unsigned\nffs_u32(uint32_t x) {\n#if LG_SIZEOF_INT == 2\n\treturn ffs_u(x);\n#else\n#error No implementation for 32-bit ffs()\n#endif\n\treturn ffs_u(x);\n}\n\nstatic inline unsigned\nfls_u32(uint32_t x) {\n#if LG_SIZEOF_INT == 2\n\treturn fls_u(x);\n#else\n#error No implementation for 32-bit fls()\n#endif\n\treturn fls_u(x);\n}\n\nstatic inline uint64_t\npow2_ceil_u64(uint64_t x) {\n\tif (unlikely(x <= 1)) {\n\t\treturn x;\n\t}\n\tsize_t msb_on_index = fls_u64(x - 1);\n\t/*\n\t * Range-check; it's on the callers to ensure that the result of this\n\t * call won't overflow.\n\t */\n\tassert(msb_on_index < 63);\n\treturn 1ULL << (msb_on_index + 1);\n}\n\nstatic inline uint32_t\npow2_ceil_u32(uint32_t x) {\n\tif (unlikely(x <= 1)) {\n\t    return x;\n\t}\n\tsize_t msb_on_index = 
fls_u32(x - 1);\n\t/* As above. */\n\tassert(msb_on_index < 31);\n\treturn 1U << (msb_on_index + 1);\n}\n\n/* Compute the smallest power of 2 that is >= x. */\nstatic inline size_t\npow2_ceil_zu(size_t x) {\n#if (LG_SIZEOF_PTR == 3)\n\treturn pow2_ceil_u64(x);\n#else\n\treturn pow2_ceil_u32(x);\n#endif\n}\n\nstatic inline unsigned\nlg_floor(size_t x) {\n\tutil_assume(x != 0);\n#if (LG_SIZEOF_PTR == 3)\n\treturn fls_u64(x);\n#else\n\treturn fls_u32(x);\n#endif\n}\n\nstatic inline unsigned\nlg_ceil(size_t x) {\n\treturn lg_floor(x) + ((x & (x - 1)) == 0 ? 0 : 1);\n}\n\n/* A compile-time version of lg_floor and lg_ceil. */\n#define LG_FLOOR_1(x) 0\n#define LG_FLOOR_2(x) (x < (1ULL << 1) ? LG_FLOOR_1(x) : 1 + LG_FLOOR_1(x >> 1))\n#define LG_FLOOR_4(x) (x < (1ULL << 2) ? LG_FLOOR_2(x) : 2 + LG_FLOOR_2(x >> 2))\n#define LG_FLOOR_8(x) (x < (1ULL << 4) ? LG_FLOOR_4(x) : 4 + LG_FLOOR_4(x >> 4))\n#define LG_FLOOR_16(x) (x < (1ULL << 8) ? LG_FLOOR_8(x) : 8 + LG_FLOOR_8(x >> 8))\n#define LG_FLOOR_32(x) (x < (1ULL << 16) ? LG_FLOOR_16(x) : 16 + LG_FLOOR_16(x >> 16))\n#define LG_FLOOR_64(x) (x < (1ULL << 32) ? LG_FLOOR_32(x) : 32 + LG_FLOOR_32(x >> 32))\n#if LG_SIZEOF_PTR == 2\n#  define LG_FLOOR(x) LG_FLOOR_32((x))\n#else\n#  define LG_FLOOR(x) LG_FLOOR_64((x))\n#endif\n\n#define LG_CEIL(x) (LG_FLOOR(x) + (((x) & ((x) - 1)) == 0 ? 0 : 1))\n\n#endif /* JEMALLOC_INTERNAL_BIT_UTIL_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/bitmap.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BITMAP_H\n#define JEMALLOC_INTERNAL_BITMAP_H\n\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/sc.h\"\n\ntypedef unsigned long bitmap_t;\n#define LG_SIZEOF_BITMAP\tLG_SIZEOF_LONG\n\n/* Maximum bitmap bit count is 2^LG_BITMAP_MAXBITS. */\n#if SC_LG_SLAB_MAXREGS > LG_CEIL(SC_NSIZES)\n/* Maximum bitmap bit count is determined by maximum regions per slab. */\n#  define LG_BITMAP_MAXBITS\tSC_LG_SLAB_MAXREGS\n#else\n/* Maximum bitmap bit count is determined by number of extent size classes. */\n#  define LG_BITMAP_MAXBITS\tLG_CEIL(SC_NSIZES)\n#endif\n#define BITMAP_MAXBITS\t\t(ZU(1) << LG_BITMAP_MAXBITS)\n\n/* Number of bits per group. */\n#define LG_BITMAP_GROUP_NBITS\t\t(LG_SIZEOF_BITMAP + 3)\n#define BITMAP_GROUP_NBITS\t\t(1U << LG_BITMAP_GROUP_NBITS)\n#define BITMAP_GROUP_NBITS_MASK\t\t(BITMAP_GROUP_NBITS-1)\n\n/*\n * Do some analysis on how big the bitmap is before we use a tree.  For a brute\n * force linear search, if we would have to call ffs_lu() more than 2^3 times,\n * use a tree instead.\n */\n#if LG_BITMAP_MAXBITS - LG_BITMAP_GROUP_NBITS > 3\n#  define BITMAP_USE_TREE\n#endif\n\n/* Number of groups required to store a given number of bits. 
*/\n#define BITMAP_BITS2GROUPS(nbits)\t\t\t\t\t\\\n    (((nbits) + BITMAP_GROUP_NBITS_MASK) >> LG_BITMAP_GROUP_NBITS)\n\n/*\n * Number of groups required at a particular level for a given number of bits.\n */\n#define BITMAP_GROUPS_L0(nbits)\t\t\t\t\t\t\\\n    BITMAP_BITS2GROUPS(nbits)\n#define BITMAP_GROUPS_L1(nbits)\t\t\t\t\t\t\\\n    BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(nbits))\n#define BITMAP_GROUPS_L2(nbits)\t\t\t\t\t\t\\\n    BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS((nbits))))\n#define BITMAP_GROUPS_L3(nbits)\t\t\t\t\t\t\\\n    BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(\t\t\\\n\tBITMAP_BITS2GROUPS((nbits)))))\n#define BITMAP_GROUPS_L4(nbits)\t\t\t\t\t\t\\\n    BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS(\t\t\\\n\tBITMAP_BITS2GROUPS(BITMAP_BITS2GROUPS((nbits))))))\n\n/*\n * Assuming the number of levels, number of groups required for a given number\n * of bits.\n */\n#define BITMAP_GROUPS_1_LEVEL(nbits)\t\t\t\t\t\\\n    BITMAP_GROUPS_L0(nbits)\n#define BITMAP_GROUPS_2_LEVEL(nbits)\t\t\t\t\t\\\n    (BITMAP_GROUPS_1_LEVEL(nbits) + BITMAP_GROUPS_L1(nbits))\n#define BITMAP_GROUPS_3_LEVEL(nbits)\t\t\t\t\t\\\n    (BITMAP_GROUPS_2_LEVEL(nbits) + BITMAP_GROUPS_L2(nbits))\n#define BITMAP_GROUPS_4_LEVEL(nbits)\t\t\t\t\t\\\n    (BITMAP_GROUPS_3_LEVEL(nbits) + BITMAP_GROUPS_L3(nbits))\n#define BITMAP_GROUPS_5_LEVEL(nbits)\t\t\t\t\t\\\n    (BITMAP_GROUPS_4_LEVEL(nbits) + BITMAP_GROUPS_L4(nbits))\n\n/*\n * Maximum number of groups required to support LG_BITMAP_MAXBITS.\n */\n#ifdef BITMAP_USE_TREE\n\n#if LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS\n#  define BITMAP_GROUPS(nbits)\tBITMAP_GROUPS_1_LEVEL(nbits)\n#  define BITMAP_GROUPS_MAX\tBITMAP_GROUPS_1_LEVEL(BITMAP_MAXBITS)\n#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 2\n#  define BITMAP_GROUPS(nbits)\tBITMAP_GROUPS_2_LEVEL(nbits)\n#  define BITMAP_GROUPS_MAX\tBITMAP_GROUPS_2_LEVEL(BITMAP_MAXBITS)\n#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 3\n#  define 
BITMAP_GROUPS(nbits)\tBITMAP_GROUPS_3_LEVEL(nbits)\n#  define BITMAP_GROUPS_MAX\tBITMAP_GROUPS_3_LEVEL(BITMAP_MAXBITS)\n#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 4\n#  define BITMAP_GROUPS(nbits)\tBITMAP_GROUPS_4_LEVEL(nbits)\n#  define BITMAP_GROUPS_MAX\tBITMAP_GROUPS_4_LEVEL(BITMAP_MAXBITS)\n#elif LG_BITMAP_MAXBITS <= LG_BITMAP_GROUP_NBITS * 5\n#  define BITMAP_GROUPS(nbits)\tBITMAP_GROUPS_5_LEVEL(nbits)\n#  define BITMAP_GROUPS_MAX\tBITMAP_GROUPS_5_LEVEL(BITMAP_MAXBITS)\n#else\n#  error \"Unsupported bitmap size\"\n#endif\n\n/*\n * Maximum number of levels possible.  This could be statically computed based\n * on LG_BITMAP_MAXBITS:\n *\n * #define BITMAP_MAX_LEVELS \\\n *     (LG_BITMAP_MAXBITS / LG_SIZEOF_BITMAP) \\\n *     + !!(LG_BITMAP_MAXBITS % LG_SIZEOF_BITMAP)\n *\n * However, that would not allow the generic BITMAP_INFO_INITIALIZER() macro, so\n * instead hardcode BITMAP_MAX_LEVELS to the largest number supported by the\n * various cascading macros.  The only additional cost this incurs is some\n * unused trailing entries in bitmap_info_t structures; the bitmaps themselves\n * are not impacted.\n */\n#define BITMAP_MAX_LEVELS\t5\n\n#define BITMAP_INFO_INITIALIZER(nbits) {\t\t\t\t\\\n\t/* nbits. */\t\t\t\t\t\t\t\\\n\tnbits,\t\t\t\t\t\t\t\t\\\n\t/* nlevels. */\t\t\t\t\t\t\t\\\n\t(BITMAP_GROUPS_L0(nbits) > BITMAP_GROUPS_L1(nbits)) +\t\t\\\n\t    (BITMAP_GROUPS_L1(nbits) > BITMAP_GROUPS_L2(nbits)) +\t\\\n\t    (BITMAP_GROUPS_L2(nbits) > BITMAP_GROUPS_L3(nbits)) +\t\\\n\t    (BITMAP_GROUPS_L3(nbits) > BITMAP_GROUPS_L4(nbits)) + 1,\t\\\n\t/* levels. 
*/\t\t\t\t\t\t\t\\\n\t{\t\t\t\t\t\t\t\t\\\n\t\t{0},\t\t\t\t\t\t\t\\\n\t\t{BITMAP_GROUPS_L0(nbits)},\t\t\t\t\\\n\t\t{BITMAP_GROUPS_L1(nbits) + BITMAP_GROUPS_L0(nbits)},\t\\\n\t\t{BITMAP_GROUPS_L2(nbits) + BITMAP_GROUPS_L1(nbits) +\t\\\n\t\t    BITMAP_GROUPS_L0(nbits)},\t\t\t\t\\\n\t\t{BITMAP_GROUPS_L3(nbits) + BITMAP_GROUPS_L2(nbits) +\t\\\n\t\t    BITMAP_GROUPS_L1(nbits) + BITMAP_GROUPS_L0(nbits)},\t\\\n\t\t{BITMAP_GROUPS_L4(nbits) + BITMAP_GROUPS_L3(nbits) +\t\\\n\t\t     BITMAP_GROUPS_L2(nbits) + BITMAP_GROUPS_L1(nbits)\t\\\n\t\t     + BITMAP_GROUPS_L0(nbits)}\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\n\n#else /* BITMAP_USE_TREE */\n\n#define BITMAP_GROUPS(nbits)\tBITMAP_BITS2GROUPS(nbits)\n#define BITMAP_GROUPS_MAX\tBITMAP_BITS2GROUPS(BITMAP_MAXBITS)\n\n#define BITMAP_INFO_INITIALIZER(nbits) {\t\t\t\t\\\n\t/* nbits. */\t\t\t\t\t\t\t\\\n\tnbits,\t\t\t\t\t\t\t\t\\\n\t/* ngroups. */\t\t\t\t\t\t\t\\\n\tBITMAP_BITS2GROUPS(nbits)\t\t\t\t\t\\\n}\n\n#endif /* BITMAP_USE_TREE */\n\ntypedef struct bitmap_level_s {\n\t/* Offset of this level's groups within the array of groups. */\n\tsize_t group_offset;\n} bitmap_level_t;\n\ntypedef struct bitmap_info_s {\n\t/* Logical number of bits in bitmap (stored at bottom level). */\n\tsize_t nbits;\n\n#ifdef BITMAP_USE_TREE\n\t/* Number of levels necessary for nbits. */\n\tunsigned nlevels;\n\n\t/*\n\t * Only the first (nlevels+1) elements are used, and levels are ordered\n\t * bottom to top (e.g. the bottom level is stored in levels[0]).\n\t */\n\tbitmap_level_t levels[BITMAP_MAX_LEVELS+1];\n#else /* BITMAP_USE_TREE */\n\t/* Number of groups necessary for nbits. 
*/\n\tsize_t ngroups;\n#endif /* BITMAP_USE_TREE */\n} bitmap_info_t;\n\nvoid bitmap_info_init(bitmap_info_t *binfo, size_t nbits);\nvoid bitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo, bool fill);\nsize_t bitmap_size(const bitmap_info_t *binfo);\n\nstatic inline bool\nbitmap_full(bitmap_t *bitmap, const bitmap_info_t *binfo) {\n#ifdef BITMAP_USE_TREE\n\tsize_t rgoff = binfo->levels[binfo->nlevels].group_offset - 1;\n\tbitmap_t rg = bitmap[rgoff];\n\t/* The bitmap is full iff the root group is 0. */\n\treturn (rg == 0);\n#else\n\tsize_t i;\n\n\tfor (i = 0; i < binfo->ngroups; i++) {\n\t\tif (bitmap[i] != 0) {\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n#endif\n}\n\nstatic inline bool\nbitmap_get(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) {\n\tsize_t goff;\n\tbitmap_t g;\n\n\tassert(bit < binfo->nbits);\n\tgoff = bit >> LG_BITMAP_GROUP_NBITS;\n\tg = bitmap[goff];\n\treturn !(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)));\n}\n\nstatic inline void\nbitmap_set(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) {\n\tsize_t goff;\n\tbitmap_t *gp;\n\tbitmap_t g;\n\n\tassert(bit < binfo->nbits);\n\tassert(!bitmap_get(bitmap, binfo, bit));\n\tgoff = bit >> LG_BITMAP_GROUP_NBITS;\n\tgp = &bitmap[goff];\n\tg = *gp;\n\tassert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)));\n\tg ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK);\n\t*gp = g;\n\tassert(bitmap_get(bitmap, binfo, bit));\n#ifdef BITMAP_USE_TREE\n\t/* Propagate group state transitions up the tree. */\n\tif (g == 0) {\n\t\tunsigned i;\n\t\tfor (i = 1; i < binfo->nlevels; i++) {\n\t\t\tbit = goff;\n\t\t\tgoff = bit >> LG_BITMAP_GROUP_NBITS;\n\t\t\tgp = &bitmap[binfo->levels[i].group_offset + goff];\n\t\t\tg = *gp;\n\t\t\tassert(g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)));\n\t\t\tg ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK);\n\t\t\t*gp = g;\n\t\t\tif (g != 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n#endif\n}\n\n/* ffu: find first unset >= bit. 
*/\nstatic inline size_t\nbitmap_ffu(const bitmap_t *bitmap, const bitmap_info_t *binfo, size_t min_bit) {\n\tassert(min_bit < binfo->nbits);\n\n#ifdef BITMAP_USE_TREE\n\tsize_t bit = 0;\n\tfor (unsigned level = binfo->nlevels; level--;) {\n\t\tsize_t lg_bits_per_group = (LG_BITMAP_GROUP_NBITS * (level +\n\t\t    1));\n\t\tbitmap_t group = bitmap[binfo->levels[level].group_offset + (bit\n\t\t    >> lg_bits_per_group)];\n\t\tunsigned group_nmask = (unsigned)(((min_bit > bit) ? (min_bit -\n\t\t    bit) : 0) >> (lg_bits_per_group - LG_BITMAP_GROUP_NBITS));\n\t\tassert(group_nmask <= BITMAP_GROUP_NBITS);\n\t\tbitmap_t group_mask = ~((1LU << group_nmask) - 1);\n\t\tbitmap_t group_masked = group & group_mask;\n\t\tif (group_masked == 0LU) {\n\t\t\tif (group == 0LU) {\n\t\t\t\treturn binfo->nbits;\n\t\t\t}\n\t\t\t/*\n\t\t\t * min_bit was preceded by one or more unset bits in\n\t\t\t * this group, but there are no other unset bits in this\n\t\t\t * group.  Try again starting at the first bit of the\n\t\t\t * next sibling.  This will recurse at most once per\n\t\t\t * non-root level.\n\t\t\t */\n\t\t\tsize_t sib_base = bit + (ZU(1) << lg_bits_per_group);\n\t\t\tassert(sib_base > min_bit);\n\t\t\tassert(sib_base > bit);\n\t\t\tif (sib_base >= binfo->nbits) {\n\t\t\t\treturn binfo->nbits;\n\t\t\t}\n\t\t\treturn bitmap_ffu(bitmap, binfo, sib_base);\n\t\t}\n\t\tbit += ((size_t)ffs_lu(group_masked)) <<\n\t\t    (lg_bits_per_group - LG_BITMAP_GROUP_NBITS);\n\t}\n\tassert(bit >= min_bit);\n\tassert(bit < binfo->nbits);\n\treturn bit;\n#else\n\tsize_t i = min_bit >> LG_BITMAP_GROUP_NBITS;\n\tbitmap_t g = bitmap[i] & ~((1LU << (min_bit & BITMAP_GROUP_NBITS_MASK))\n\t    - 1);\n\tsize_t bit;\n\tdo {\n\t\tif (g != 0) {\n\t\t\tbit = ffs_lu(g);\n\t\t\treturn (i << LG_BITMAP_GROUP_NBITS) + bit;\n\t\t}\n\t\ti++;\n\t\tg = bitmap[i];\n\t} while (i < binfo->ngroups);\n\treturn binfo->nbits;\n#endif\n}\n\n/* sfu: set first unset. 
*/\nstatic inline size_t\nbitmap_sfu(bitmap_t *bitmap, const bitmap_info_t *binfo) {\n\tsize_t bit;\n\tbitmap_t g;\n\tunsigned i;\n\n\tassert(!bitmap_full(bitmap, binfo));\n\n#ifdef BITMAP_USE_TREE\n\ti = binfo->nlevels - 1;\n\tg = bitmap[binfo->levels[i].group_offset];\n\tbit = ffs_lu(g);\n\twhile (i > 0) {\n\t\ti--;\n\t\tg = bitmap[binfo->levels[i].group_offset + bit];\n\t\tbit = (bit << LG_BITMAP_GROUP_NBITS) + ffs_lu(g);\n\t}\n#else\n\ti = 0;\n\tg = bitmap[0];\n\twhile (g == 0) {\n\t\ti++;\n\t\tg = bitmap[i];\n\t}\n\tbit = (i << LG_BITMAP_GROUP_NBITS) + ffs_lu(g);\n#endif\n\tbitmap_set(bitmap, binfo, bit);\n\treturn bit;\n}\n\nstatic inline void\nbitmap_unset(bitmap_t *bitmap, const bitmap_info_t *binfo, size_t bit) {\n\tsize_t goff;\n\tbitmap_t *gp;\n\tbitmap_t g;\n\tUNUSED bool propagate;\n\n\tassert(bit < binfo->nbits);\n\tassert(bitmap_get(bitmap, binfo, bit));\n\tgoff = bit >> LG_BITMAP_GROUP_NBITS;\n\tgp = &bitmap[goff];\n\tg = *gp;\n\tpropagate = (g == 0);\n\tassert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK))) == 0);\n\tg ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK);\n\t*gp = g;\n\tassert(!bitmap_get(bitmap, binfo, bit));\n#ifdef BITMAP_USE_TREE\n\t/* Propagate group state transitions up the tree. */\n\tif (propagate) {\n\t\tunsigned i;\n\t\tfor (i = 1; i < binfo->nlevels; i++) {\n\t\t\tbit = goff;\n\t\t\tgoff = bit >> LG_BITMAP_GROUP_NBITS;\n\t\t\tgp = &bitmap[binfo->levels[i].group_offset + goff];\n\t\t\tg = *gp;\n\t\t\tpropagate = (g == 0);\n\t\t\tassert((g & (ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK)))\n\t\t\t    == 0);\n\t\t\tg ^= ZU(1) << (bit & BITMAP_GROUP_NBITS_MASK);\n\t\t\t*gp = g;\n\t\t\tif (!propagate) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n#endif /* BITMAP_USE_TREE */\n}\n\n#endif /* JEMALLOC_INTERNAL_BITMAP_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/buf_writer.h",
    "content": "#ifndef JEMALLOC_INTERNAL_BUF_WRITER_H\n#define JEMALLOC_INTERNAL_BUF_WRITER_H\n\n/*\n * Note: when using the buffered writer, cbopaque is passed to write_cb only\n * when the buffer is flushed.  It would make a difference if cbopaque points\n * to something that's changing for each write_cb call, or something that\n * affects write_cb in a way dependent on the content of the output string.\n * However, the most typical usage case in practice is that cbopaque points to\n * some \"option like\" content for the write_cb, so it doesn't matter.\n */\n\ntypedef struct {\n\twrite_cb_t *write_cb;\n\tvoid *cbopaque;\n\tchar *buf;\n\tsize_t buf_size;\n\tsize_t buf_end;\n\tbool internal_buf;\n} buf_writer_t;\n\nbool buf_writer_init(tsdn_t *tsdn, buf_writer_t *buf_writer,\n    write_cb_t *write_cb, void *cbopaque, char *buf, size_t buf_len);\nvoid buf_writer_flush(buf_writer_t *buf_writer);\nwrite_cb_t buf_writer_cb;\nvoid buf_writer_terminate(tsdn_t *tsdn, buf_writer_t *buf_writer);\n\ntypedef ssize_t (read_cb_t)(void *read_cbopaque, void *buf, size_t limit);\nvoid buf_writer_pipe(buf_writer_t *buf_writer, read_cb_t *read_cb,\n    void *read_cbopaque);\n\n#endif /* JEMALLOC_INTERNAL_BUF_WRITER_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/cache_bin.h",
    "content": "#ifndef JEMALLOC_INTERNAL_CACHE_BIN_H\n#define JEMALLOC_INTERNAL_CACHE_BIN_H\n\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/sz.h\"\n\n/*\n * The cache_bins are the mechanism that the tcache and the arena use to\n * communicate.  The tcache fills from and flushes to the arena by passing a\n * cache_bin_t to fill/flush.  When the arena needs to pull stats from the\n * tcaches associated with it, it does so by iterating over its\n * cache_bin_array_descriptor_t objects and reading out per-bin stats it\n * contains.  This makes it so that the arena need not know about the existence\n * of the tcache at all.\n */\n\n/*\n * The size in bytes of each cache bin stack.  We also use this to indicate\n * *counts* of individual objects.\n */\ntypedef uint16_t cache_bin_sz_t;\n\n/*\n * Leave a noticeable mark pattern on the cache bin stack boundaries, in case a\n * bug starts leaking those.  Make it look like the junk pattern but be distinct\n * from it.\n */\nstatic const uintptr_t cache_bin_preceding_junk =\n    (uintptr_t)0x7a7a7a7a7a7a7a7aULL;\n/* Note: a7 vs. 7a above -- this tells you which pointer leaked. */\nstatic const uintptr_t cache_bin_trailing_junk =\n    (uintptr_t)0xa7a7a7a7a7a7a7a7ULL;\n\n/*\n * That implies the following value, for the maximum number of items in any\n * individual bin.  The cache bins track their bounds looking just at the low\n * bits of a pointer, compared against a cache_bin_sz_t.  
So that's\n *   1 << (sizeof(cache_bin_sz_t) * 8)\n * bytes spread across pointer sized objects to get the maximum.\n */\n#define CACHE_BIN_NCACHED_MAX (((size_t)1 << sizeof(cache_bin_sz_t) * 8) \\\n    / sizeof(void *) - 1)\n\n/*\n * This lives inside the cache_bin (for locality reasons), and is initialized\n * alongside it, but is otherwise not modified by any cache bin operations.\n * It's logically public and maintained by its callers.\n */\ntypedef struct cache_bin_stats_s cache_bin_stats_t;\nstruct cache_bin_stats_s {\n\t/*\n\t * Number of allocation requests that corresponded to the size of this\n\t * bin.\n\t */\n\tuint64_t nrequests;\n};\n\n/*\n * Read-only information associated with each element of tcache_t's tbins array\n * is stored separately, mainly to reduce memory usage.\n */\ntypedef struct cache_bin_info_s cache_bin_info_t;\nstruct cache_bin_info_s {\n\tcache_bin_sz_t ncached_max;\n};\n\n/*\n * Responsible for caching allocations associated with a single size.\n *\n * Several pointers are used to track the stack.  To save on metadata bytes,\n * only the stack_head is a full sized pointer (which is dereferenced on the\n * fastpath), while the others store only the low 16 bits -- this is correct\n * because a single stack never takes more space than 2^16 bytes, and at the\n * same time only equality checks are performed on the low bits.\n *\n * (low addr)                                                  (high addr)\n * |------stashed------|------available------|------cached-----|\n * ^                   ^                     ^                 ^\n * low_bound(derived)  low_bits_full         stack_head        low_bits_empty\n */\ntypedef struct cache_bin_s cache_bin_t;\nstruct cache_bin_s {\n\t/*\n\t * The stack grows down.  Whenever the bin is nonempty, the head points\n\t * to an array entry containing a valid allocation.  
When it is empty,\n\t * the head points to one element past the owned array.\n\t */\n\tvoid **stack_head;\n\t/*\n\t * cur_ptr and stats are both modified frequently.  Let's keep them\n\t * close so that they have a higher chance of being on the same\n\t * cacheline, thus fewer write-backs.\n\t */\n\tcache_bin_stats_t tstats;\n\n\t/*\n\t * The low bits of the address of the first item in the stack that\n\t * hasn't been used since the last GC, to track the low water mark (min\n\t * # of cached items).\n\t *\n\t * Since the stack grows down, this is a higher address than\n\t * low_bits_full.\n\t */\n\tuint16_t low_bits_low_water;\n\n\t/*\n\t * The low bits of the value that stack_head will take on when the array\n\t * is full (of cached & stashed items).  But remember that stack_head\n\t * always points to a valid item when the array is nonempty -- this is\n\t * in the array.\n\t *\n\t * Recall that since the stack grows down, this is the lowest available\n\t * address in the array for caching.  Only adjusted when stashing items.\n\t */\n\tuint16_t low_bits_full;\n\n\t/*\n\t * The low bits of the value that stack_head will take on when the array\n\t * is empty.\n\t *\n\t * The stack grows down -- this is one past the highest address in the\n\t * array.  Immutable after initialization.\n\t */\n\tuint16_t low_bits_empty;\n};\n\n/*\n * The cache_bins live inside the tcache, but the arena (by design) isn't\n * supposed to know much about tcache internals.  To let the arena iterate over\n * associated bins, we keep (with the tcache) a linked list of\n * cache_bin_array_descriptor_ts that tell the arena how to find the bins.\n */\ntypedef struct cache_bin_array_descriptor_s cache_bin_array_descriptor_t;\nstruct cache_bin_array_descriptor_s {\n\t/*\n\t * The arena keeps a list of the cache bins associated with it, for\n\t * stats collection.\n\t */\n\tql_elm(cache_bin_array_descriptor_t) link;\n\t/* Pointers to the tcache bins. 
*/\n\tcache_bin_t *bins;\n};\n\nstatic inline void\ncache_bin_array_descriptor_init(cache_bin_array_descriptor_t *descriptor,\n    cache_bin_t *bins) {\n\tql_elm_new(descriptor, link);\n\tdescriptor->bins = bins;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ncache_bin_nonfast_aligned(const void *ptr) {\n\tif (!config_uaf_detection) {\n\t\treturn false;\n\t}\n\t/*\n\t * Currently we use alignment to decide which pointer to junk & stash on\n\t * dealloc (for catching use-after-free).  In some common cases a\n\t * page-aligned check is needed already (sdalloc w/ config_prof), so we\n\t * are getting it more or less for free -- no added instructions on\n\t * free_fastpath.\n\t *\n\t * Another way of deciding which pointer to sample, is adding another\n\t * thread_event to pick one every N bytes.  That also adds no cost on\n\t * the fastpath, however it will tend to pick large allocations which is\n\t * not the desired behavior.\n\t */\n\treturn ((uintptr_t)ptr & san_cache_bin_nonfast_mask) == 0;\n}\n\n/* Returns ncached_max: Upper limit on ncached. */\nstatic inline cache_bin_sz_t\ncache_bin_info_ncached_max(cache_bin_info_t *info) {\n\treturn info->ncached_max;\n}\n\n/*\n * Internal.\n *\n * Asserts that the pointer associated with earlier is <= the one associated\n * with later.\n */\nstatic inline void\ncache_bin_assert_earlier(cache_bin_t *bin, uint16_t earlier, uint16_t later) {\n\tif (earlier > later) {\n\t\tassert(bin->low_bits_full > bin->low_bits_empty);\n\t}\n}\n\n/*\n * Internal.\n *\n * Does difference calculations that handle wraparound correctly.  Earlier must\n * be associated with the position earlier in memory.\n */\nstatic inline uint16_t\ncache_bin_diff(cache_bin_t *bin, uint16_t earlier, uint16_t later, bool racy) {\n\t/*\n\t * When it's racy, bin->low_bits_full can be modified concurrently. 
It\n\t * can cross the uint16_t max value and become less than\n\t * bin->low_bits_empty at the time of the check.\n\t */\n\tif (!racy) {\n\t\tcache_bin_assert_earlier(bin, earlier, later);\n\t}\n\treturn later - earlier;\n}\n\n/*\n * Number of items currently cached in the bin, without checking ncached_max.\n * We require specifying whether the request is racy (i.e. whether\n * concurrent modifications are possible).\n */\nstatic inline cache_bin_sz_t\ncache_bin_ncached_get_internal(cache_bin_t *bin, bool racy) {\n\tcache_bin_sz_t diff = cache_bin_diff(bin,\n\t    (uint16_t)(uintptr_t)bin->stack_head, bin->low_bits_empty, racy);\n\tcache_bin_sz_t n = diff / sizeof(void *);\n\t/*\n\t * We have undefined behavior here; if this function is called from the\n\t * arena stats updating code, then stack_head could change from the\n\t * first line to the next one.  Morally, these loads should be atomic,\n\t * but compilers won't currently generate comparisons with in-memory\n\t * operands against atomics, and these variables get accessed on the\n\t * fast paths.  This should still be \"safe\" in the sense of generating\n\t * the correct assembly for the foreseeable future, though.\n\t */\n\tassert(n == 0 || *(bin->stack_head) != NULL || racy);\n\treturn n;\n}\n\n/*\n * Number of items currently cached in the bin, with checking ncached_max.  
The\n * caller must know that no concurrent modification of the cache_bin is\n * possible.\n */\nstatic inline cache_bin_sz_t\ncache_bin_ncached_get_local(cache_bin_t *bin, cache_bin_info_t *info) {\n\tcache_bin_sz_t n = cache_bin_ncached_get_internal(bin,\n\t    /* racy */ false);\n\tassert(n <= cache_bin_info_ncached_max(info));\n\treturn n;\n}\n\n/*\n * Internal.\n *\n * A pointer to the position one past the end of the backing array.\n *\n * Do not call if racy, because both 'bin->stack_head' and 'bin->low_bits_full'\n * are subject to concurrent modifications.\n */\nstatic inline void **\ncache_bin_empty_position_get(cache_bin_t *bin) {\n\tcache_bin_sz_t diff = cache_bin_diff(bin,\n\t    (uint16_t)(uintptr_t)bin->stack_head, bin->low_bits_empty,\n\t    /* racy */ false);\n\tuintptr_t empty_bits = (uintptr_t)bin->stack_head + diff;\n\tvoid **ret = (void **)empty_bits;\n\n\tassert(ret >= bin->stack_head);\n\n\treturn ret;\n}\n\n/*\n * Internal.\n *\n * Calculates low bits of the lower bound of the usable cache bin's range (see\n * cache_bin_t visual representation above).\n *\n * No values are concurrently modified, so should be safe to read in a\n * multithreaded environment. Currently concurrent access happens only during\n * arena statistics collection.\n */\nstatic inline uint16_t\ncache_bin_low_bits_low_bound_get(cache_bin_t *bin, cache_bin_info_t *info) {\n\treturn (uint16_t)bin->low_bits_empty -\n\t    info->ncached_max * sizeof(void *);\n}\n\n/*\n * Internal.\n *\n * A pointer to the position with the lowest address of the backing array.\n */\nstatic inline void **\ncache_bin_low_bound_get(cache_bin_t *bin, cache_bin_info_t *info) {\n\tcache_bin_sz_t ncached_max = cache_bin_info_ncached_max(info);\n\tvoid **ret = cache_bin_empty_position_get(bin) - ncached_max;\n\tassert(ret <= bin->stack_head);\n\n\treturn ret;\n}\n\n/*\n * As the name implies.  
This is important since it's not correct to try to\n * batch fill a nonempty cache bin.\n */\nstatic inline void\ncache_bin_assert_empty(cache_bin_t *bin, cache_bin_info_t *info) {\n\tassert(cache_bin_ncached_get_local(bin, info) == 0);\n\tassert(cache_bin_empty_position_get(bin) == bin->stack_head);\n}\n\n/*\n * Get low water, but without any of the correctness checking we do for the\n * caller-usable version, if we are temporarily breaking invariants (like\n * ncached >= low_water during flush).\n */\nstatic inline cache_bin_sz_t\ncache_bin_low_water_get_internal(cache_bin_t *bin) {\n\treturn cache_bin_diff(bin, bin->low_bits_low_water,\n\t    bin->low_bits_empty, /* racy */ false) / sizeof(void *);\n}\n\n/* Returns the numeric value of low water in [0, ncached]. */\nstatic inline cache_bin_sz_t\ncache_bin_low_water_get(cache_bin_t *bin, cache_bin_info_t *info) {\n\tcache_bin_sz_t low_water = cache_bin_low_water_get_internal(bin);\n\tassert(low_water <= cache_bin_info_ncached_max(info));\n\tassert(low_water <= cache_bin_ncached_get_local(bin, info));\n\n\tcache_bin_assert_earlier(bin, (uint16_t)(uintptr_t)bin->stack_head,\n\t    bin->low_bits_low_water);\n\n\treturn low_water;\n}\n\n/*\n * Indicates that the current cache bin position should be the low water mark\n * going forward.\n */\nstatic inline void\ncache_bin_low_water_set(cache_bin_t *bin) {\n\tbin->low_bits_low_water = (uint16_t)(uintptr_t)bin->stack_head;\n}\n\nstatic inline void\ncache_bin_low_water_adjust(cache_bin_t *bin) {\n\tif (cache_bin_ncached_get_internal(bin, /* racy */ false)\n\t    < cache_bin_low_water_get_internal(bin)) {\n\t\tcache_bin_low_water_set(bin);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void *\ncache_bin_alloc_impl(cache_bin_t *bin, bool *success, bool adjust_low_water) {\n\t/*\n\t * success (instead of ret) should be checked upon the return of this\n\t * function.  
We avoid checking (ret == NULL) because there is never a\n\t * null stored on the avail stack (which is unknown to the compiler),\n\t * and eagerly checking ret would cause pipeline stall (waiting for the\n\t * cacheline).\n\t */\n\n\t/*\n\t * This may read from the empty position; however the loaded value won't\n\t * be used.  It's safe because the stack has one more slot reserved.\n\t */\n\tvoid *ret = *bin->stack_head;\n\tuint16_t low_bits = (uint16_t)(uintptr_t)bin->stack_head;\n\tvoid **new_head = bin->stack_head + 1;\n\n\t/*\n\t * Note that the low water mark is at most empty; if we pass this check,\n\t * we know we're non-empty.\n\t */\n\tif (likely(low_bits != bin->low_bits_low_water)) {\n\t\tbin->stack_head = new_head;\n\t\t*success = true;\n\t\treturn ret;\n\t}\n\tif (!adjust_low_water) {\n\t\t*success = false;\n\t\treturn NULL;\n\t}\n\t/*\n\t * In the fast-path case where we call alloc_easy and then alloc, the\n\t * previous checking and computation is optimized away -- we didn't\n\t * actually commit any of our operations.\n\t */\n\tif (likely(low_bits != bin->low_bits_empty)) {\n\t\tbin->stack_head = new_head;\n\t\tbin->low_bits_low_water = (uint16_t)(uintptr_t)new_head;\n\t\t*success = true;\n\t\treturn ret;\n\t}\n\t*success = false;\n\treturn NULL;\n}\n\n/*\n * Allocate an item out of the bin, failing if we're at the low-water mark.\n */\nJEMALLOC_ALWAYS_INLINE void *\ncache_bin_alloc_easy(cache_bin_t *bin, bool *success) {\n\t/* We don't look at info if we're not adjusting low-water. 
*/\n\treturn cache_bin_alloc_impl(bin, success, false);\n}\n\n/*\n * Allocate an item out of the bin, even if we're currently at the low-water\n * mark (and failing only if the bin is empty).\n */\nJEMALLOC_ALWAYS_INLINE void *\ncache_bin_alloc(cache_bin_t *bin, bool *success) {\n\treturn cache_bin_alloc_impl(bin, success, true);\n}\n\nJEMALLOC_ALWAYS_INLINE cache_bin_sz_t\ncache_bin_alloc_batch(cache_bin_t *bin, size_t num, void **out) {\n\tcache_bin_sz_t n = cache_bin_ncached_get_internal(bin,\n\t    /* racy */ false);\n\tif (n > num) {\n\t\tn = (cache_bin_sz_t)num;\n\t}\n\tmemcpy(out, bin->stack_head, n * sizeof(void *));\n\tbin->stack_head += n;\n\tcache_bin_low_water_adjust(bin);\n\n\treturn n;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ncache_bin_full(cache_bin_t *bin) {\n\treturn ((uint16_t)(uintptr_t)bin->stack_head == bin->low_bits_full);\n}\n\n/*\n * Free an object into the given bin.  Fails only if the bin is full.\n */\nJEMALLOC_ALWAYS_INLINE bool\ncache_bin_dalloc_easy(cache_bin_t *bin, void *ptr) {\n\tif (unlikely(cache_bin_full(bin))) {\n\t\treturn false;\n\t}\n\n\tbin->stack_head--;\n\t*bin->stack_head = ptr;\n\tcache_bin_assert_earlier(bin, bin->low_bits_full,\n\t    (uint16_t)(uintptr_t)bin->stack_head);\n\n\treturn true;\n}\n\n/* Returns false if failed to stash (i.e. bin is full). */\nJEMALLOC_ALWAYS_INLINE bool\ncache_bin_stash(cache_bin_t *bin, void *ptr) {\n\tif (cache_bin_full(bin)) {\n\t\treturn false;\n\t}\n\n\t/* Stash at the full position, in the [full, head) range. */\n\tuint16_t low_bits_head = (uint16_t)(uintptr_t)bin->stack_head;\n\t/* Wraparound handled as well. 
*/\n\tuint16_t diff = cache_bin_diff(bin, bin->low_bits_full, low_bits_head,\n\t    /* racy */ false);\n\t*(void **)((uintptr_t)bin->stack_head - diff) = ptr;\n\n\tassert(!cache_bin_full(bin));\n\tbin->low_bits_full += sizeof(void *);\n\tcache_bin_assert_earlier(bin, bin->low_bits_full, low_bits_head);\n\n\treturn true;\n}\n\n/*\n * Get the number of stashed pointers.\n *\n * When called from a thread not owning the TLS (i.e. racy = true), it's\n * important to keep in mind that 'bin->stack_head' and 'bin->low_bits_full' can\n * be modified concurrently and almost no assertions about their values can be\n * made.\n */\nJEMALLOC_ALWAYS_INLINE cache_bin_sz_t\ncache_bin_nstashed_get_internal(cache_bin_t *bin, cache_bin_info_t *info,\n    bool racy) {\n\tcache_bin_sz_t ncached_max = cache_bin_info_ncached_max(info);\n\tuint16_t low_bits_low_bound = cache_bin_low_bits_low_bound_get(bin,\n\t    info);\n\n\tcache_bin_sz_t n = cache_bin_diff(bin, low_bits_low_bound,\n\t    bin->low_bits_full, racy) / sizeof(void *);\n\tassert(n <= ncached_max);\n\n\tif (!racy) {\n\t\t/* Below are for assertions only. */\n\t\tvoid **low_bound = cache_bin_low_bound_get(bin, info);\n\n\t\tassert((uint16_t)(uintptr_t)low_bound == low_bits_low_bound);\n\t\tvoid *stashed = *(low_bound + n - 1);\n\t\tbool aligned = cache_bin_nonfast_aligned(stashed);\n#ifdef JEMALLOC_JET\n\t\t/* Allow arbitrary pointers to be stashed in tests. 
*/\n\t\taligned = true;\n#endif\n\t\tassert(n == 0 || (stashed != NULL && aligned));\n\t}\n\n\treturn n;\n}\n\nJEMALLOC_ALWAYS_INLINE cache_bin_sz_t\ncache_bin_nstashed_get_local(cache_bin_t *bin, cache_bin_info_t *info) {\n\tcache_bin_sz_t n = cache_bin_nstashed_get_internal(bin, info,\n\t    /* racy */ false);\n\tassert(n <= cache_bin_info_ncached_max(info));\n\treturn n;\n}\n\n/*\n * Obtain a racy view of the number of items currently in the cache bin, in the\n * presence of possible concurrent modifications.\n */\nstatic inline void\ncache_bin_nitems_get_remote(cache_bin_t *bin, cache_bin_info_t *info,\n    cache_bin_sz_t *ncached, cache_bin_sz_t *nstashed) {\n\tcache_bin_sz_t n = cache_bin_ncached_get_internal(bin, /* racy */ true);\n\tassert(n <= cache_bin_info_ncached_max(info));\n\t*ncached = n;\n\n\tn = cache_bin_nstashed_get_internal(bin, info, /* racy */ true);\n\tassert(n <= cache_bin_info_ncached_max(info));\n\t*nstashed = n;\n\t/* Note that cannot assert ncached + nstashed <= ncached_max (racy). */\n}\n\n/*\n * Filling and flushing are done in batch, on arrays of void *s.  For filling,\n * the arrays go forward, and can be accessed with ordinary array arithmetic.\n * For flushing, we work from the end backwards, and so need to use special\n * accessors that invert the usual ordering.\n *\n * This is important for maintaining first-fit; the arena code fills with\n * earliest objects first, and so those are the ones we should return first for\n * cache_bin_alloc calls.  When flushing, we should flush the objects that we\n * wish to return later; those at the end of the array.  
This is better for the\n * first-fit heuristic as well as for cache locality; the most recently freed\n * objects are the ones most likely to still be in cache.\n *\n * This all sounds very hand-wavey and theoretical, but reverting the ordering\n * on one or the other pathway leads to measurable slowdowns.\n */\n\ntypedef struct cache_bin_ptr_array_s cache_bin_ptr_array_t;\nstruct cache_bin_ptr_array_s {\n\tcache_bin_sz_t n;\n\tvoid **ptr;\n};\n\n/*\n * Declare a cache_bin_ptr_array_t sufficient for nval items.\n *\n * In the current implementation, this could be just part of a\n * cache_bin_ptr_array_init_... call, since we reuse the cache bin stack memory.\n * Indirecting behind a macro, though, means experimenting with linked-list\n * representations is easy (since they'll require an alloca in the calling\n * frame).\n */\n#define CACHE_BIN_PTR_ARRAY_DECLARE(name, nval)\t\t\t\t\\\n    cache_bin_ptr_array_t name;\t\t\t\t\t\t\\\n    name.n = (nval)\n\n/*\n * Start a fill.  The bin must be empty, and this must be followed by a\n * finish_fill call before doing any alloc/dalloc operations on the bin.\n */\nstatic inline void\ncache_bin_init_ptr_array_for_fill(cache_bin_t *bin, cache_bin_info_t *info,\n    cache_bin_ptr_array_t *arr, cache_bin_sz_t nfill) {\n\tcache_bin_assert_empty(bin, info);\n\tarr->ptr = cache_bin_empty_position_get(bin) - nfill;\n}\n\n/*\n * While nfill in cache_bin_init_ptr_array_for_fill is the number we *intend* to\n * fill, nfilled here is the number we actually filled (which may be less, in\n * case of OOM).\n */\nstatic inline void\ncache_bin_finish_fill(cache_bin_t *bin, cache_bin_info_t *info,\n    cache_bin_ptr_array_t *arr, cache_bin_sz_t nfilled) {\n\tcache_bin_assert_empty(bin, info);\n\tvoid **empty_position = cache_bin_empty_position_get(bin);\n\tif (nfilled < arr->n) {\n\t\tmemmove(empty_position - nfilled, empty_position - arr->n,\n\t\t    nfilled * sizeof(void *));\n\t}\n\tbin->stack_head = empty_position - nfilled;\n}\n\n/*\n * 
Same deal, but with flush.  Unlike fill (which can fail), the user must flush\n * everything we give them.\n */\nstatic inline void\ncache_bin_init_ptr_array_for_flush(cache_bin_t *bin, cache_bin_info_t *info,\n    cache_bin_ptr_array_t *arr, cache_bin_sz_t nflush) {\n\tarr->ptr = cache_bin_empty_position_get(bin) - nflush;\n\tassert(cache_bin_ncached_get_local(bin, info) == 0\n\t    || *arr->ptr != NULL);\n}\n\nstatic inline void\ncache_bin_finish_flush(cache_bin_t *bin, cache_bin_info_t *info,\n    cache_bin_ptr_array_t *arr, cache_bin_sz_t nflushed) {\n\tunsigned rem = cache_bin_ncached_get_local(bin, info) - nflushed;\n\tmemmove(bin->stack_head + nflushed, bin->stack_head,\n\t    rem * sizeof(void *));\n\tbin->stack_head = bin->stack_head + nflushed;\n\tcache_bin_low_water_adjust(bin);\n}\n\nstatic inline void\ncache_bin_init_ptr_array_for_stashed(cache_bin_t *bin, szind_t binind,\n    cache_bin_info_t *info, cache_bin_ptr_array_t *arr,\n    cache_bin_sz_t nstashed) {\n\tassert(nstashed > 0);\n\tassert(cache_bin_nstashed_get_local(bin, info) == nstashed);\n\n\tvoid **low_bound = cache_bin_low_bound_get(bin, info);\n\tarr->ptr = low_bound;\n\tassert(*arr->ptr != NULL);\n}\n\nstatic inline void\ncache_bin_finish_flush_stashed(cache_bin_t *bin, cache_bin_info_t *info) {\n\tvoid **low_bound = cache_bin_low_bound_get(bin, info);\n\n\t/* Reset the bin local full position. 
*/\n\tbin->low_bits_full = (uint16_t)(uintptr_t)low_bound;\n\tassert(cache_bin_nstashed_get_local(bin, info) == 0);\n}\n\n/*\n * Initialize a cache_bin_info to represent up to the given number of items in\n * the cache_bins it is associated with.\n */\nvoid cache_bin_info_init(cache_bin_info_t *bin_info,\n    cache_bin_sz_t ncached_max);\n/*\n * Given an array of initialized cache_bin_info_ts, determine how big an\n * allocation is required to initialize a full set of cache_bin_ts.\n */\nvoid cache_bin_info_compute_alloc(cache_bin_info_t *infos, szind_t ninfos,\n    size_t *size, size_t *alignment);\n\n/*\n * Actually initialize some cache bins.  Callers should allocate the backing\n * memory indicated by a call to cache_bin_compute_alloc.  They should then\n * preincrement, call init once for each bin and info, and then call\n * cache_bin_postincrement.  *alloc_cur will then point immediately past the end\n * of the allocation.\n */\nvoid cache_bin_preincrement(cache_bin_info_t *infos, szind_t ninfos,\n    void *alloc, size_t *cur_offset);\nvoid cache_bin_postincrement(cache_bin_info_t *infos, szind_t ninfos,\n    void *alloc, size_t *cur_offset);\nvoid cache_bin_init(cache_bin_t *bin, cache_bin_info_t *info, void *alloc,\n    size_t *cur_offset);\n\n/*\n * If a cache bin was zero initialized (either because it lives in static or\n * thread-local storage, or was memset to 0), this function indicates whether or\n * not cache_bin_init was called on it.\n */\nbool cache_bin_still_zero_initialized(cache_bin_t *bin);\n\n#endif /* JEMALLOC_INTERNAL_CACHE_BIN_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ckh.h",
    "content": "#ifndef JEMALLOC_INTERNAL_CKH_H\n#define JEMALLOC_INTERNAL_CKH_H\n\n#include \"jemalloc/internal/tsd.h\"\n\n/* Cuckoo hashing implementation.  Skip to the end for the interface. */\n\n/******************************************************************************/\n/* INTERNAL DEFINITIONS -- IGNORE */\n/******************************************************************************/\n\n/* Maintain counters used to get an idea of performance. */\n/* #define CKH_COUNT */\n/* Print counter values in ckh_delete() (requires CKH_COUNT). */\n/* #define CKH_VERBOSE */\n\n/*\n * There are 2^LG_CKH_BUCKET_CELLS cells in each hash table bucket.  Try to fit\n * one bucket per L1 cache line.\n */\n#define LG_CKH_BUCKET_CELLS (LG_CACHELINE - LG_SIZEOF_PTR - 1)\n\n/* Typedefs to allow easy function pointer passing. */\ntypedef void ckh_hash_t (const void *, size_t[2]);\ntypedef bool ckh_keycomp_t (const void *, const void *);\n\n/* Hash table cell. */\ntypedef struct {\n\tconst void *key;\n\tconst void *data;\n} ckhc_t;\n\n/* The hash table itself. */\ntypedef struct {\n#ifdef CKH_COUNT\n\t/* Counters used to get an idea of performance. */\n\tuint64_t ngrows;\n\tuint64_t nshrinks;\n\tuint64_t nshrinkfails;\n\tuint64_t ninserts;\n\tuint64_t nrelocs;\n#endif\n\n\t/* Used for pseudo-random number generation. */\n\tuint64_t prng_state;\n\n\t/* Total number of items. */\n\tsize_t count;\n\n\t/*\n\t * Minimum and current number of hash table buckets.  There are\n\t * 2^LG_CKH_BUCKET_CELLS cells per bucket.\n\t */\n\tunsigned lg_minbuckets;\n\tunsigned lg_curbuckets;\n\n\t/* Hash and comparison functions. */\n\tckh_hash_t *hash;\n\tckh_keycomp_t *keycomp;\n\n\t/* Hash table with 2^lg_curbuckets buckets. */\n\tckhc_t *tab;\n} ckh_t;\n\n/******************************************************************************/\n/* BEGIN PUBLIC API */\n/******************************************************************************/\n\n/* Lifetime management.  
Minitems is the initial capacity. */\nbool ckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *hash,\n    ckh_keycomp_t *keycomp);\nvoid ckh_delete(tsd_t *tsd, ckh_t *ckh);\n\n/* Get the number of elements in the set. */\nsize_t ckh_count(ckh_t *ckh);\n\n/*\n * To iterate over the elements in the table, initialize *tabind to 0 and call\n * this function until it returns true.  Each call that returns false will\n * update *key and *data to the next element in the table, assuming the pointers\n * are non-NULL.\n */\nbool ckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data);\n\n/*\n * Basic hash table operations -- insert, removal, lookup.  For ckh_remove and\n * ckh_search, key or data can be NULL.  The hash-table only stores pointers to\n * the key and value, and doesn't do any lifetime management.\n */\nbool ckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data);\nbool ckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,\n    void **data);\nbool ckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data);\n\n/* Some useful hash and comparison functions for strings and pointers. */\nvoid ckh_string_hash(const void *key, size_t r_hash[2]);\nbool ckh_string_keycomp(const void *k1, const void *k2);\nvoid ckh_pointer_hash(const void *key, size_t r_hash[2]);\nbool ckh_pointer_keycomp(const void *k1, const void *k2);\n\n#endif /* JEMALLOC_INTERNAL_CKH_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/counter.h",
    "content": "#ifndef JEMALLOC_INTERNAL_COUNTER_H\n#define JEMALLOC_INTERNAL_COUNTER_H\n\n#include \"jemalloc/internal/mutex.h\"\n\ntypedef struct counter_accum_s {\n\tLOCKEDINT_MTX_DECLARE(mtx)\n\tlocked_u64_t accumbytes;\n\tuint64_t interval;\n} counter_accum_t;\n\nJEMALLOC_ALWAYS_INLINE bool\ncounter_accum(tsdn_t *tsdn, counter_accum_t *counter, uint64_t bytes) {\n\tuint64_t interval = counter->interval;\n\tassert(interval > 0);\n\tLOCKEDINT_MTX_LOCK(tsdn, counter->mtx);\n\t/*\n\t * If the event moves fast enough (and/or if the event handling is slow\n\t * enough), extreme overflow can cause counter trigger coalescing.\n\t * This is an intentional mechanism that avoids rate-limiting\n\t * allocation.\n\t */\n\tbool overflow = locked_inc_mod_u64(tsdn, LOCKEDINT_MTX(counter->mtx),\n\t    &counter->accumbytes, bytes, interval);\n\tLOCKEDINT_MTX_UNLOCK(tsdn, counter->mtx);\n\treturn overflow;\n}\n\nbool counter_accum_init(counter_accum_t *counter, uint64_t interval);\nvoid counter_prefork(tsdn_t *tsdn, counter_accum_t *counter);\nvoid counter_postfork_parent(tsdn_t *tsdn, counter_accum_t *counter);\nvoid counter_postfork_child(tsdn_t *tsdn, counter_accum_t *counter);\n\n#endif /* JEMALLOC_INTERNAL_COUNTER_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ctl.h",
    "content": "#ifndef JEMALLOC_INTERNAL_CTL_H\n#define JEMALLOC_INTERNAL_CTL_H\n\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/mutex_prof.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/stats.h\"\n\n/* Maximum ctl tree depth. */\n#define CTL_MAX_DEPTH\t7\n\ntypedef struct ctl_node_s {\n\tbool named;\n} ctl_node_t;\n\ntypedef struct ctl_named_node_s {\n\tctl_node_t node;\n\tconst char *name;\n\t/* If (nchildren == 0), this is a terminal node. */\n\tsize_t nchildren;\n\tconst ctl_node_t *children;\n\tint (*ctl)(tsd_t *, const size_t *, size_t, void *, size_t *, void *,\n\t    size_t);\n} ctl_named_node_t;\n\ntypedef struct ctl_indexed_node_s {\n\tstruct ctl_node_s node;\n\tconst ctl_named_node_t *(*index)(tsdn_t *, const size_t *, size_t,\n\t    size_t);\n} ctl_indexed_node_t;\n\ntypedef struct ctl_arena_stats_s {\n\tarena_stats_t astats;\n\n\t/* Aggregate stats for small size classes, based on bin stats. */\n\tsize_t allocated_small;\n\tuint64_t nmalloc_small;\n\tuint64_t ndalloc_small;\n\tuint64_t nrequests_small;\n\tuint64_t nfills_small;\n\tuint64_t nflushes_small;\n\n\tbin_stats_data_t bstats[SC_NBINS];\n\tarena_stats_large_t lstats[SC_NSIZES - SC_NBINS];\n\tpac_estats_t estats[SC_NPSIZES];\n\thpa_shard_stats_t hpastats;\n\tsec_stats_t secstats;\n} ctl_arena_stats_t;\n\ntypedef struct ctl_stats_s {\n\tsize_t allocated;\n\tsize_t active;\n\tsize_t metadata;\n\tsize_t metadata_thp;\n\tsize_t resident;\n\tsize_t mapped;\n\tsize_t retained;\n\n\tbackground_thread_stats_t background_thread;\n\tmutex_prof_data_t mutex_prof_data[mutex_prof_num_global_mutexes];\n} ctl_stats_t;\n\ntypedef struct ctl_arena_s ctl_arena_t;\nstruct ctl_arena_s {\n\tunsigned arena_ind;\n\tbool initialized;\n\tql_elm(ctl_arena_t) destroyed_link;\n\n\t/* Basic stats, supported even if !config_stats. 
*/\n\tunsigned nthreads;\n\tconst char *dss;\n\tssize_t dirty_decay_ms;\n\tssize_t muzzy_decay_ms;\n\tsize_t pactive;\n\tsize_t pdirty;\n\tsize_t pmuzzy;\n\n\t/* NULL if !config_stats. */\n\tctl_arena_stats_t *astats;\n};\n\ntypedef struct ctl_arenas_s {\n\tuint64_t epoch;\n\tunsigned narenas;\n\tql_head(ctl_arena_t) destroyed;\n\n\t/*\n\t * Element 0 corresponds to merged stats for extant arenas (accessed via\n\t * MALLCTL_ARENAS_ALL), element 1 corresponds to merged stats for\n\t * destroyed arenas (accessed via MALLCTL_ARENAS_DESTROYED), and the\n\t * remaining MALLOCX_ARENA_LIMIT elements correspond to arenas.\n\t */\n\tctl_arena_t *arenas[2 + MALLOCX_ARENA_LIMIT];\n} ctl_arenas_t;\n\nint ctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp,\n    void *newp, size_t newlen);\nint ctl_nametomib(tsd_t *tsd, const char *name, size_t *mibp, size_t *miblenp);\nint ctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,\n    size_t *oldlenp, void *newp, size_t newlen);\nint ctl_mibnametomib(tsd_t *tsd, size_t *mib, size_t miblen, const char *name,\n    size_t *miblenp);\nint ctl_bymibname(tsd_t *tsd, size_t *mib, size_t miblen, const char *name,\n    size_t *miblenp, void *oldp, size_t *oldlenp, void *newp, size_t newlen);\nbool ctl_boot(void);\nvoid ctl_prefork(tsdn_t *tsdn);\nvoid ctl_postfork_parent(tsdn_t *tsdn);\nvoid ctl_postfork_child(tsdn_t *tsdn);\nvoid ctl_mtx_assert_held(tsdn_t *tsdn);\n\n#define xmallctl(name, oldp, oldlenp, newp, newlen) do {\t\t\\\n\tif (je_mallctl(name, oldp, oldlenp, newp, newlen)\t\t\\\n\t    != 0) {\t\t\t\t\t\t\t\\\n\t\tmalloc_printf(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: Failure in xmallctl(\\\"%s\\\", ...)\\n\",\t\\\n\t\t    name);\t\t\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define xmallctlnametomib(name, mibp, miblenp) do {\t\t\t\\\n\tif (je_mallctlnametomib(name, mibp, miblenp) != 0) {\t\t\\\n\t\tmalloc_printf(\"<jemalloc>: Failure in \"\t\t\t\\\n\t\t    
\"xmallctlnametomib(\\\"%s\\\", ...)\\n\", name);\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define xmallctlbymib(mib, miblen, oldp, oldlenp, newp, newlen) do {\t\\\n\tif (je_mallctlbymib(mib, miblen, oldp, oldlenp, newp,\t\t\\\n\t    newlen) != 0) {\t\t\t\t\t\t\\\n\t\tmalloc_write(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: Failure in xmallctlbymib()\\n\");\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define xmallctlmibnametomib(mib, miblen, name, miblenp) do {\t\t\\\n\tif (ctl_mibnametomib(tsd_fetch(), mib, miblen, name, miblenp)\t\\\n\t    != 0) {\t\t\t\t\t\t\t\\\n\t\tmalloc_write(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: Failure in ctl_mibnametomib()\\n\");\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define xmallctlbymibname(mib, miblen, name, miblenp, oldp, oldlenp,\t\\\n    newp, newlen) do {\t\t\t\t\t\t\t\\\n\tif (ctl_bymibname(tsd_fetch(), mib, miblen, name, miblenp,\t\\\n\t    oldp, oldlenp, newp, newlen) != 0) {\t\t\t\\\n\t\tmalloc_write(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: Failure in ctl_bymibname()\\n\");\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#endif /* JEMALLOC_INTERNAL_CTL_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/decay.h",
    "content": "#ifndef JEMALLOC_INTERNAL_DECAY_H\n#define JEMALLOC_INTERNAL_DECAY_H\n\n#include \"jemalloc/internal/smoothstep.h\"\n\n#define DECAY_UNBOUNDED_TIME_TO_PURGE ((uint64_t)-1)\n\n/*\n * The decay_t computes the number of pages we should purge at any given time.\n * Page allocators inform a decay object when pages enter a decay-able state\n * (i.e. dirty or muzzy), and query it to determine how many pages should be\n * purged at any given time.\n *\n * This is mostly a single-threaded data structure and doesn't care about\n * synchronization at all; it's the caller's responsibility to manage their\n * synchronization on their own.  There are two exceptions:\n * 1) It's OK to racily call decay_ms_read (i.e. just the simplest state query).\n * 2) The mtx and purging fields live (and are initialized) here, but are\n *    logically owned by the page allocator.  This is just a convenience (since\n *    those fields would be duplicated for both the dirty and muzzy states\n *    otherwise).\n */\ntypedef struct decay_s decay_t;\nstruct decay_s {\n\t/* Synchronizes all non-atomic fields. */\n\tmalloc_mutex_t mtx;\n\t/*\n\t * True if a thread is currently purging the extents associated with\n\t * this decay structure.\n\t */\n\tbool purging;\n\t/*\n\t * Approximate time in milliseconds from the creation of a set of unused\n\t * dirty pages until an equivalent set of unused dirty pages is purged\n\t * and/or reused.\n\t */\n\tatomic_zd_t time_ms;\n\t/* time / SMOOTHSTEP_NSTEPS. */\n\tnstime_t interval;\n\t/*\n\t * Time at which the current decay interval logically started.  We do\n\t * not actually advance to a new epoch until sometime after it starts\n\t * because of scheduling and computation delays, and it is even possible\n\t * to completely skip epochs.  In all cases, during epoch advancement we\n\t * merge all relevant activity into the most recently recorded epoch.\n\t */\n\tnstime_t epoch;\n\t/* Deadline randomness generator. 
*/\n\tuint64_t jitter_state;\n\t/*\n\t * Deadline for current epoch.  This is the sum of interval and per\n\t * epoch jitter which is a uniform random variable in [0..interval).\n\t * Epochs always advance by precise multiples of interval, but we\n\t * randomize the deadline to reduce the likelihood of arenas purging in\n\t * lockstep.\n\t */\n\tnstime_t deadline;\n\t/*\n\t * The number of pages we cap ourselves at in the current epoch, per\n\t * decay policies.  Updated on an epoch change.  After an epoch change,\n\t * the caller should take steps to try to purge down to this amount.\n\t */\n\tsize_t npages_limit;\n\t/*\n\t * Number of unpurged pages at beginning of current epoch.  During epoch\n\t * advancement we use the delta between arena->decay_*.nunpurged and\n\t * ecache_npages_get(&arena->ecache_*) to determine how many dirty pages,\n\t * if any, were generated.\n\t */\n\tsize_t nunpurged;\n\t/*\n\t * Trailing log of how many unused dirty pages were generated during\n\t * each of the past SMOOTHSTEP_NSTEPS decay epochs, where the last\n\t * element is the most recent epoch.  Corresponding epoch times are\n\t * relative to epoch.\n\t *\n\t * Updated only on epoch advance, triggered by\n\t * decay_maybe_advance_epoch, below.\n\t */\n\tsize_t backlog[SMOOTHSTEP_NSTEPS];\n\n\t/* Peak number of pages in associated extents.  Used for debug only. */\n\tuint64_t ceil_npages;\n};\n\n/*\n * The current decay time setting.  This is the only public access to a decay_t\n * that's allowed without holding mtx.\n */\nstatic inline ssize_t\ndecay_ms_read(const decay_t *decay) {\n\treturn atomic_load_zd(&decay->time_ms, ATOMIC_RELAXED);\n}\n\n/*\n * See the comment on the struct field -- the limit on pages we should allow in\n * this decay state this epoch.\n */\nstatic inline size_t\ndecay_npages_limit_get(const decay_t *decay) {\n\treturn decay->npages_limit;\n}\n\n/* How many unused dirty pages were generated during the last epoch. 
*/\nstatic inline size_t\ndecay_epoch_npages_delta(const decay_t *decay) {\n\treturn decay->backlog[SMOOTHSTEP_NSTEPS - 1];\n}\n\n/*\n * Current epoch duration, in nanoseconds.  Given that new epochs are started\n * somewhat haphazardly, this is not necessarily exactly the time between any\n * two calls to decay_maybe_advance_epoch; see the comments on fields in the\n * decay_t.\n */\nstatic inline uint64_t\ndecay_epoch_duration_ns(const decay_t *decay) {\n\treturn nstime_ns(&decay->interval);\n}\n\nstatic inline bool\ndecay_immediately(const decay_t *decay) {\n\tssize_t decay_ms = decay_ms_read(decay);\n\treturn decay_ms == 0;\n}\n\nstatic inline bool\ndecay_disabled(const decay_t *decay) {\n\tssize_t decay_ms = decay_ms_read(decay);\n\treturn decay_ms < 0;\n}\n\n/* Returns true if decay is enabled and done gradually. */\nstatic inline bool\ndecay_gradually(const decay_t *decay) {\n\tssize_t decay_ms = decay_ms_read(decay);\n\treturn decay_ms > 0;\n}\n\n/*\n * Returns true if the passed in decay time setting is valid.\n * < -1 : invalid\n * -1   : never decay\n *  0   : decay immediately\n *  > 0 : some positive decay time, up to a maximum allowed value of\n *  NSTIME_SEC_MAX * 1000, which corresponds to decaying somewhere in the early\n *  27th century.  By that time, we expect to have implemented alternate purging\n *  strategies.\n */\nbool decay_ms_valid(ssize_t decay_ms);\n\n/*\n * As a precondition, the decay_t must be zeroed out (as if with memset).\n *\n * Returns true on error.\n */\nbool decay_init(decay_t *decay, nstime_t *cur_time, ssize_t decay_ms);\n\n/*\n * Given an already-initialized decay_t, reinitialize it with the given decay\n * time.  
The decay_t must have previously been initialized (and should not then\n * be zeroed).\n */\nvoid decay_reinit(decay_t *decay, nstime_t *cur_time, ssize_t decay_ms);\n\n/*\n * Compute how many of 'npages_new' pages we would need to purge in 'time'.\n */\nuint64_t decay_npages_purge_in(decay_t *decay, nstime_t *time,\n    size_t npages_new);\n\n/* Returns true if the epoch advanced and there are pages to purge. */\nbool decay_maybe_advance_epoch(decay_t *decay, nstime_t *new_time,\n    size_t current_npages);\n\n/*\n * Calculates wait time until a number of pages in the interval\n * [0.5 * npages_threshold .. 1.5 * npages_threshold] should be purged.\n *\n * Returns number of nanoseconds or DECAY_UNBOUNDED_TIME_TO_PURGE in case of\n * indefinite wait.\n */\nuint64_t decay_ns_until_purge(decay_t *decay, size_t npages_current,\n    uint64_t npages_threshold);\n\n#endif /* JEMALLOC_INTERNAL_DECAY_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/div.h",
    "content": "#ifndef JEMALLOC_INTERNAL_DIV_H\n#define JEMALLOC_INTERNAL_DIV_H\n\n#include \"jemalloc/internal/assert.h\"\n\n/*\n * This module does the division that computes the index of a region in a slab,\n * given its offset relative to the base.\n * That is, given a divisor d, an n = i * d (all integers), we'll return i.\n * We do some pre-computation to do this more quickly than a CPU division\n * instruction.\n * We bound n < 2^32, and don't support dividing by one.\n */\n\ntypedef struct div_info_s div_info_t;\nstruct div_info_s {\n\tuint32_t magic;\n#ifdef JEMALLOC_DEBUG\n\tsize_t d;\n#endif\n};\n\nvoid div_init(div_info_t *div_info, size_t divisor);\n\nstatic inline size_t\ndiv_compute(div_info_t *div_info, size_t n) {\n\tassert(n <= (uint32_t)-1);\n\t/*\n\t * This generates, e.g. mov; imul; shr on x86-64. On a 32-bit machine,\n\t * the compilers I tried were all smart enough to turn this into the\n\t * appropriate \"get the high 32 bits of the result of a multiply\" (e.g.\n\t * mul; mov edx eax; on x86, umull on arm, etc.).\n\t */\n\tsize_t i = ((uint64_t)n * (uint64_t)div_info->magic) >> 32;\n#ifdef JEMALLOC_DEBUG\n\tassert(i * div_info->d == n);\n#endif\n\treturn i;\n}\n\n#endif /* JEMALLOC_INTERNAL_DIV_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ecache.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ECACHE_H\n#define JEMALLOC_INTERNAL_ECACHE_H\n\n#include \"jemalloc/internal/eset.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/mutex.h\"\n\ntypedef struct ecache_s ecache_t;\nstruct ecache_s {\n\tmalloc_mutex_t mtx;\n\teset_t eset;\n\teset_t guarded_eset;\n\t/* All stored extents must be in the same state. */\n\textent_state_t state;\n\t/* The index of the ehooks the ecache is associated with. */\n\tunsigned ind;\n\t/*\n\t * If true, delay coalescing until eviction; otherwise coalesce during\n\t * deallocation.\n\t */\n\tbool delay_coalesce;\n};\n\nstatic inline size_t\necache_npages_get(ecache_t *ecache) {\n\treturn eset_npages_get(&ecache->eset) +\n\t    eset_npages_get(&ecache->guarded_eset);\n}\n\n/* Get the number of extents in the given page size index. */\nstatic inline size_t\necache_nextents_get(ecache_t *ecache, pszind_t ind) {\n\treturn eset_nextents_get(&ecache->eset, ind) +\n\t    eset_nextents_get(&ecache->guarded_eset, ind);\n}\n\n/* Get the sum total bytes of the extents in the given page size index. */\nstatic inline size_t\necache_nbytes_get(ecache_t *ecache, pszind_t ind) {\n\treturn eset_nbytes_get(&ecache->eset, ind) +\n\t    eset_nbytes_get(&ecache->guarded_eset, ind);\n}\n\nstatic inline unsigned\necache_ind_get(ecache_t *ecache) {\n\treturn ecache->ind;\n}\n\nbool ecache_init(tsdn_t *tsdn, ecache_t *ecache, extent_state_t state,\n    unsigned ind, bool delay_coalesce);\nvoid ecache_prefork(tsdn_t *tsdn, ecache_t *ecache);\nvoid ecache_postfork_parent(tsdn_t *tsdn, ecache_t *ecache);\nvoid ecache_postfork_child(tsdn_t *tsdn, ecache_t *ecache);\n\n#endif /* JEMALLOC_INTERNAL_ECACHE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/edata.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EDATA_H\n#define JEMALLOC_INTERNAL_EDATA_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/bin_info.h\"\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/hpdata.h\"\n#include \"jemalloc/internal/nstime.h\"\n#include \"jemalloc/internal/ph.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/slab_data.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/typed_list.h\"\n\n/*\n * sizeof(edata_t) is 128 bytes on 64-bit architectures.  Ensure the alignment\n * to free up the low bits in the rtree leaf.\n */\n#define EDATA_ALIGNMENT 128\n\nenum extent_state_e {\n\textent_state_active   = 0,\n\textent_state_dirty    = 1,\n\textent_state_muzzy    = 2,\n\textent_state_retained = 3,\n\textent_state_transition = 4, /* States below are intermediate. */\n\textent_state_merging = 5,\n\textent_state_max = 5 /* Sanity checking only. */\n};\ntypedef enum extent_state_e extent_state_t;\n\nenum extent_head_state_e {\n\tEXTENT_NOT_HEAD,\n\tEXTENT_IS_HEAD   /* See comments in ehooks_default_merge_impl(). */\n};\ntypedef enum extent_head_state_e extent_head_state_t;\n\n/*\n * Which implementation of the page allocator interface, (PAI, defined in\n * pai.h) owns the given extent?\n */\nenum extent_pai_e {\n\tEXTENT_PAI_PAC = 0,\n\tEXTENT_PAI_HPA = 1\n};\ntypedef enum extent_pai_e extent_pai_t;\n\nstruct e_prof_info_s {\n\t/* Time when this was allocated. */\n\tnstime_t\te_prof_alloc_time;\n\t/* Allocation request size. */\n\tsize_t\t\te_prof_alloc_size;\n\t/* Points to a prof_tctx_t. 
*/\n\tatomic_p_t\te_prof_tctx;\n\t/*\n\t * Points to a prof_recent_t for the allocation; NULL\n\t * means the recent allocation record no longer exists.\n\t * Protected by prof_recent_alloc_mtx.\n\t */\n\tatomic_p_t\te_prof_recent_alloc;\n};\ntypedef struct e_prof_info_s e_prof_info_t;\n\n/*\n * The information about a particular edata that lives in an emap.  Space is\n * more precious there (the information, plus the edata pointer, has to live in\n * a 64-bit word if we want to enable a packed representation).\n *\n * There are two things that are special about the information here:\n * - It's quicker to access.  You have one fewer pointer hop, since finding the\n *   edata_t associated with an item always requires accessing the rtree leaf in\n *   which this data is stored.\n * - It can be read unsynchronized, and without worrying about lifetime issues.\n */\ntypedef struct edata_map_info_s edata_map_info_t;\nstruct edata_map_info_s {\n\tbool slab;\n\tszind_t szind;\n};\n\ntypedef struct edata_cmp_summary_s edata_cmp_summary_t;\nstruct edata_cmp_summary_s {\n\tuint64_t sn;\n\tuintptr_t addr;\n};\n\n/* Extent (span of pages).  Use accessor functions for e_* fields. */\ntypedef struct edata_s edata_t;\nph_structs(edata_avail, edata_t);\nph_structs(edata_heap, edata_t);\nstruct edata_s {\n\t/*\n\t * Bitfield containing several fields:\n\t *\n\t * a: arena_ind\n\t * b: slab\n\t * c: committed\n\t * p: pai\n\t * z: zeroed\n\t * g: guarded\n\t * t: state\n\t * i: szind\n\t * f: nfree\n\t * s: bin_shard\n\t *\n\t * 00000000 ... 0000ssss ssffffff ffffiiii iiiitttg zpcbaaaa aaaaaaaa\n\t *\n\t * arena_ind: Arena from which this extent came, or all 1 bits if\n\t *            unassociated.\n\t *\n\t * slab: The slab flag indicates whether the extent is used for a slab\n\t *       of small regions.  
This helps differentiate small size classes,\n\t *       and it indicates whether interior pointers can be looked up via\n\t *       iealloc().\n\t *\n\t * committed: The committed flag indicates whether physical memory is\n\t *            committed to the extent, whether explicitly or implicitly\n\t *            as on a system that overcommits and satisfies physical\n\t *            memory needs on demand via soft page faults.\n\t *\n\t * pai: The pai flag is an extent_pai_t.\n\t *\n\t * zeroed: The zeroed flag is used by extent recycling code to track\n\t *         whether memory is zero-filled.\n\t *\n\t * guarded: The guarded flag is used by the sanitizer to track whether\n\t *          the extent has page guards around it.\n\t *\n\t * state: The state flag is an extent_state_t.\n\t *\n\t * szind: The szind flag indicates usable size class index for\n\t *        allocations residing in this extent, regardless of whether the\n\t *        extent is a slab.  Extent size and usable size often differ\n\t *        even for non-slabs, either due to sz_large_pad or promotion of\n\t *        sampled small regions.\n\t *\n\t * nfree: Number of free regions in slab.\n\t *\n\t * bin_shard: the shard of the bin from which this extent came.\n\t */\n\tuint64_t\t\te_bits;\n#define MASK(CURRENT_FIELD_WIDTH, CURRENT_FIELD_SHIFT) ((((((uint64_t)0x1U) << (CURRENT_FIELD_WIDTH)) - 1)) << (CURRENT_FIELD_SHIFT))\n\n#define EDATA_BITS_ARENA_WIDTH  MALLOCX_ARENA_BITS\n#define EDATA_BITS_ARENA_SHIFT  0\n#define EDATA_BITS_ARENA_MASK  MASK(EDATA_BITS_ARENA_WIDTH, EDATA_BITS_ARENA_SHIFT)\n\n#define EDATA_BITS_SLAB_WIDTH  1\n#define EDATA_BITS_SLAB_SHIFT  (EDATA_BITS_ARENA_WIDTH + EDATA_BITS_ARENA_SHIFT)\n#define EDATA_BITS_SLAB_MASK  MASK(EDATA_BITS_SLAB_WIDTH, EDATA_BITS_SLAB_SHIFT)\n\n#define EDATA_BITS_COMMITTED_WIDTH  1\n#define EDATA_BITS_COMMITTED_SHIFT  (EDATA_BITS_SLAB_WIDTH + EDATA_BITS_SLAB_SHIFT)\n#define EDATA_BITS_COMMITTED_MASK  MASK(EDATA_BITS_COMMITTED_WIDTH, 
EDATA_BITS_COMMITTED_SHIFT)\n\n#define EDATA_BITS_PAI_WIDTH  1\n#define EDATA_BITS_PAI_SHIFT  (EDATA_BITS_COMMITTED_WIDTH + EDATA_BITS_COMMITTED_SHIFT)\n#define EDATA_BITS_PAI_MASK  MASK(EDATA_BITS_PAI_WIDTH, EDATA_BITS_PAI_SHIFT)\n\n#define EDATA_BITS_ZEROED_WIDTH  1\n#define EDATA_BITS_ZEROED_SHIFT  (EDATA_BITS_PAI_WIDTH + EDATA_BITS_PAI_SHIFT)\n#define EDATA_BITS_ZEROED_MASK  MASK(EDATA_BITS_ZEROED_WIDTH, EDATA_BITS_ZEROED_SHIFT)\n\n#define EDATA_BITS_GUARDED_WIDTH  1\n#define EDATA_BITS_GUARDED_SHIFT  (EDATA_BITS_ZEROED_WIDTH + EDATA_BITS_ZEROED_SHIFT)\n#define EDATA_BITS_GUARDED_MASK  MASK(EDATA_BITS_GUARDED_WIDTH, EDATA_BITS_GUARDED_SHIFT)\n\n#define EDATA_BITS_STATE_WIDTH  3\n#define EDATA_BITS_STATE_SHIFT  (EDATA_BITS_GUARDED_WIDTH + EDATA_BITS_GUARDED_SHIFT)\n#define EDATA_BITS_STATE_MASK  MASK(EDATA_BITS_STATE_WIDTH, EDATA_BITS_STATE_SHIFT)\n\n#define EDATA_BITS_SZIND_WIDTH  LG_CEIL(SC_NSIZES)\n#define EDATA_BITS_SZIND_SHIFT  (EDATA_BITS_STATE_WIDTH + EDATA_BITS_STATE_SHIFT)\n#define EDATA_BITS_SZIND_MASK  MASK(EDATA_BITS_SZIND_WIDTH, EDATA_BITS_SZIND_SHIFT)\n\n#define EDATA_BITS_NFREE_WIDTH  (SC_LG_SLAB_MAXREGS + 1)\n#define EDATA_BITS_NFREE_SHIFT  (EDATA_BITS_SZIND_WIDTH + EDATA_BITS_SZIND_SHIFT)\n#define EDATA_BITS_NFREE_MASK  MASK(EDATA_BITS_NFREE_WIDTH, EDATA_BITS_NFREE_SHIFT)\n\n#define EDATA_BITS_BINSHARD_WIDTH  6\n#define EDATA_BITS_BINSHARD_SHIFT  (EDATA_BITS_NFREE_WIDTH + EDATA_BITS_NFREE_SHIFT)\n#define EDATA_BITS_BINSHARD_MASK  MASK(EDATA_BITS_BINSHARD_WIDTH, EDATA_BITS_BINSHARD_SHIFT)\n\n#define EDATA_BITS_IS_HEAD_WIDTH 1\n#define EDATA_BITS_IS_HEAD_SHIFT  (EDATA_BITS_BINSHARD_WIDTH + EDATA_BITS_BINSHARD_SHIFT)\n#define EDATA_BITS_IS_HEAD_MASK  MASK(EDATA_BITS_IS_HEAD_WIDTH, EDATA_BITS_IS_HEAD_SHIFT)\n\n\t/* Pointer to the extent that this structure is responsible for. 
*/\n\tvoid\t\t\t*e_addr;\n\n\tunion {\n\t\t/*\n\t\t * Extent size and serial number associated with the extent\n\t\t * structure (different than the serial number for the extent at\n\t\t * e_addr).\n\t\t *\n\t\t * ssssssss [...] ssssssss ssssnnnn nnnnnnnn\n\t\t */\n\t\tsize_t\t\t\te_size_esn;\n\t#define EDATA_SIZE_MASK\t((size_t)~(PAGE-1))\n\t#define EDATA_ESN_MASK\t\t((size_t)PAGE-1)\n\t\t/* Base extent size, which may not be a multiple of PAGE. */\n\t\tsize_t\t\t\te_bsize;\n\t};\n\n\t/*\n\t * If this edata is a user allocation from an HPA, it comes out of some\n\t * pageslab (we don't yet support hugepage allocations that don't fit\n\t * into pageslabs).  This tracks it.\n\t */\n\thpdata_t *e_ps;\n\n\t/*\n\t * Serial number.  These are not necessarily unique; splitting an extent\n\t * results in two extents with the same serial number.\n\t */\n\tuint64_t e_sn;\n\n\tunion {\n\t\t/*\n\t\t * List linkage used when the edata_t is active; either in\n\t\t * arena's large allocations or bin_t's slabs_full.\n\t\t */\n\t\tql_elm(edata_t)\tql_link_active;\n\t\t/*\n\t\t * Pairing heap linkage.  Used whenever the extent is inactive\n\t\t * (in the page allocators), or when it is active and in\n\t\t * slabs_nonfull, or when the edata_t is unassociated with an\n\t\t * extent and sitting in an edata_cache.\n\t\t */\n\t\tunion {\n\t\t\tedata_heap_link_t heap_link;\n\t\t\tedata_avail_link_t avail_link;\n\t\t};\n\t};\n\n\tunion {\n\t\t/*\n\t\t * List linkage used when the extent is inactive:\n\t\t * - Stashed dirty extents\n\t\t * - Ecache LRU functionality.\n\t\t */\n\t\tql_elm(edata_t) ql_link_inactive;\n\t\t/* Small region slab metadata. */\n\t\tslab_data_t\te_slab_data;\n\n\t\t/* Profiling data, used for large objects. 
*/\n\t\te_prof_info_t\te_prof_info;\n\t};\n};\n\nTYPED_LIST(edata_list_active, edata_t, ql_link_active)\nTYPED_LIST(edata_list_inactive, edata_t, ql_link_inactive)\n\nstatic inline unsigned\nedata_arena_ind_get(const edata_t *edata) {\n\tunsigned arena_ind = (unsigned)((edata->e_bits &\n\t    EDATA_BITS_ARENA_MASK) >> EDATA_BITS_ARENA_SHIFT);\n\tassert(arena_ind < MALLOCX_ARENA_LIMIT);\n\n\treturn arena_ind;\n}\n\nstatic inline szind_t\nedata_szind_get_maybe_invalid(const edata_t *edata) {\n\tszind_t szind = (szind_t)((edata->e_bits & EDATA_BITS_SZIND_MASK) >>\n\t    EDATA_BITS_SZIND_SHIFT);\n\tassert(szind <= SC_NSIZES);\n\treturn szind;\n}\n\nstatic inline szind_t\nedata_szind_get(const edata_t *edata) {\n\tszind_t szind = edata_szind_get_maybe_invalid(edata);\n\tassert(szind < SC_NSIZES); /* Never call when \"invalid\". */\n\treturn szind;\n}\n\nstatic inline size_t\nedata_usize_get(const edata_t *edata) {\n\treturn sz_index2size(edata_szind_get(edata));\n}\n\nstatic inline unsigned\nedata_binshard_get(const edata_t *edata) {\n\tunsigned binshard = (unsigned)((edata->e_bits &\n\t    EDATA_BITS_BINSHARD_MASK) >> EDATA_BITS_BINSHARD_SHIFT);\n\tassert(binshard < bin_infos[edata_szind_get(edata)].n_shards);\n\treturn binshard;\n}\n\nstatic inline uint64_t\nedata_sn_get(const edata_t *edata) {\n\treturn edata->e_sn;\n}\n\nstatic inline extent_state_t\nedata_state_get(const edata_t *edata) {\n\treturn (extent_state_t)((edata->e_bits & EDATA_BITS_STATE_MASK) >>\n\t    EDATA_BITS_STATE_SHIFT);\n}\n\nstatic inline bool\nedata_guarded_get(const edata_t *edata) {\n\treturn (bool)((edata->e_bits & EDATA_BITS_GUARDED_MASK) >>\n\t    EDATA_BITS_GUARDED_SHIFT);\n}\n\nstatic inline bool\nedata_zeroed_get(const edata_t *edata) {\n\treturn (bool)((edata->e_bits & EDATA_BITS_ZEROED_MASK) >>\n\t    EDATA_BITS_ZEROED_SHIFT);\n}\n\nstatic inline bool\nedata_committed_get(const edata_t *edata) {\n\treturn (bool)((edata->e_bits & EDATA_BITS_COMMITTED_MASK) >>\n\t    
EDATA_BITS_COMMITTED_SHIFT);\n}\n\nstatic inline extent_pai_t\nedata_pai_get(const edata_t *edata) {\n\treturn (extent_pai_t)((edata->e_bits & EDATA_BITS_PAI_MASK) >>\n\t    EDATA_BITS_PAI_SHIFT);\n}\n\nstatic inline bool\nedata_slab_get(const edata_t *edata) {\n\treturn (bool)((edata->e_bits & EDATA_BITS_SLAB_MASK) >>\n\t    EDATA_BITS_SLAB_SHIFT);\n}\n\nstatic inline unsigned\nedata_nfree_get(const edata_t *edata) {\n\tassert(edata_slab_get(edata));\n\treturn (unsigned)((edata->e_bits & EDATA_BITS_NFREE_MASK) >>\n\t    EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void *\nedata_base_get(const edata_t *edata) {\n\tassert(edata->e_addr == PAGE_ADDR2BASE(edata->e_addr) ||\n\t    !edata_slab_get(edata));\n\treturn PAGE_ADDR2BASE(edata->e_addr);\n}\n\nstatic inline void *\nedata_addr_get(const edata_t *edata) {\n\tassert(edata->e_addr == PAGE_ADDR2BASE(edata->e_addr) ||\n\t    !edata_slab_get(edata));\n\treturn edata->e_addr;\n}\n\nstatic inline size_t\nedata_size_get(const edata_t *edata) {\n\treturn (edata->e_size_esn & EDATA_SIZE_MASK);\n}\n\nstatic inline size_t\nedata_esn_get(const edata_t *edata) {\n\treturn (edata->e_size_esn & EDATA_ESN_MASK);\n}\n\nstatic inline size_t\nedata_bsize_get(const edata_t *edata) {\n\treturn edata->e_bsize;\n}\n\nstatic inline hpdata_t *\nedata_ps_get(const edata_t *edata) {\n\tassert(edata_pai_get(edata) == EXTENT_PAI_HPA);\n\treturn edata->e_ps;\n}\n\nstatic inline void *\nedata_before_get(const edata_t *edata) {\n\treturn (void *)((uintptr_t)edata_base_get(edata) - PAGE);\n}\n\nstatic inline void *\nedata_last_get(const edata_t *edata) {\n\treturn (void *)((uintptr_t)edata_base_get(edata) +\n\t    edata_size_get(edata) - PAGE);\n}\n\nstatic inline void *\nedata_past_get(const edata_t *edata) {\n\treturn (void *)((uintptr_t)edata_base_get(edata) +\n\t    edata_size_get(edata));\n}\n\nstatic inline slab_data_t *\nedata_slab_data_get(edata_t *edata) {\n\tassert(edata_slab_get(edata));\n\treturn &edata->e_slab_data;\n}\n\nstatic 
inline const slab_data_t *\nedata_slab_data_get_const(const edata_t *edata) {\n\tassert(edata_slab_get(edata));\n\treturn &edata->e_slab_data;\n}\n\nstatic inline prof_tctx_t *\nedata_prof_tctx_get(const edata_t *edata) {\n\treturn (prof_tctx_t *)atomic_load_p(&edata->e_prof_info.e_prof_tctx,\n\t    ATOMIC_ACQUIRE);\n}\n\nstatic inline const nstime_t *\nedata_prof_alloc_time_get(const edata_t *edata) {\n\treturn &edata->e_prof_info.e_prof_alloc_time;\n}\n\nstatic inline size_t\nedata_prof_alloc_size_get(const edata_t *edata) {\n\treturn edata->e_prof_info.e_prof_alloc_size;\n}\n\nstatic inline prof_recent_t *\nedata_prof_recent_alloc_get_dont_call_directly(const edata_t *edata) {\n\treturn (prof_recent_t *)atomic_load_p(\n\t    &edata->e_prof_info.e_prof_recent_alloc, ATOMIC_RELAXED);\n}\n\nstatic inline void\nedata_arena_ind_set(edata_t *edata, unsigned arena_ind) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_ARENA_MASK) |\n\t    ((uint64_t)arena_ind << EDATA_BITS_ARENA_SHIFT);\n}\n\nstatic inline void\nedata_binshard_set(edata_t *edata, unsigned binshard) {\n\t/* The assertion assumes szind is set already. 
*/\n\tassert(binshard < bin_infos[edata_szind_get(edata)].n_shards);\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_BINSHARD_MASK) |\n\t    ((uint64_t)binshard << EDATA_BITS_BINSHARD_SHIFT);\n}\n\nstatic inline void\nedata_addr_set(edata_t *edata, void *addr) {\n\tedata->e_addr = addr;\n}\n\nstatic inline void\nedata_size_set(edata_t *edata, size_t size) {\n\tassert((size & ~EDATA_SIZE_MASK) == 0);\n\tedata->e_size_esn = size | (edata->e_size_esn & ~EDATA_SIZE_MASK);\n}\n\nstatic inline void\nedata_esn_set(edata_t *edata, size_t esn) {\n\tedata->e_size_esn = (edata->e_size_esn & ~EDATA_ESN_MASK) | (esn &\n\t    EDATA_ESN_MASK);\n}\n\nstatic inline void\nedata_bsize_set(edata_t *edata, size_t bsize) {\n\tedata->e_bsize = bsize;\n}\n\nstatic inline void\nedata_ps_set(edata_t *edata, hpdata_t *ps) {\n\tassert(edata_pai_get(edata) == EXTENT_PAI_HPA);\n\tedata->e_ps = ps;\n}\n\nstatic inline void\nedata_szind_set(edata_t *edata, szind_t szind) {\n\tassert(szind <= SC_NSIZES); /* SC_NSIZES means \"invalid\". */\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_SZIND_MASK) |\n\t    ((uint64_t)szind << EDATA_BITS_SZIND_SHIFT);\n}\n\nstatic inline void\nedata_nfree_set(edata_t *edata, unsigned nfree) {\n\tassert(edata_slab_get(edata));\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_NFREE_MASK) |\n\t    ((uint64_t)nfree << EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void\nedata_nfree_binshard_set(edata_t *edata, unsigned nfree, unsigned binshard) {\n\t/* The assertion assumes szind is set already. 
*/\n\tassert(binshard < bin_infos[edata_szind_get(edata)].n_shards);\n\tedata->e_bits = (edata->e_bits &\n\t    (~EDATA_BITS_NFREE_MASK & ~EDATA_BITS_BINSHARD_MASK)) |\n\t    ((uint64_t)binshard << EDATA_BITS_BINSHARD_SHIFT) |\n\t    ((uint64_t)nfree << EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void\nedata_nfree_inc(edata_t *edata) {\n\tassert(edata_slab_get(edata));\n\tedata->e_bits += ((uint64_t)1U << EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void\nedata_nfree_dec(edata_t *edata) {\n\tassert(edata_slab_get(edata));\n\tedata->e_bits -= ((uint64_t)1U << EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void\nedata_nfree_sub(edata_t *edata, uint64_t n) {\n\tassert(edata_slab_get(edata));\n\tedata->e_bits -= (n << EDATA_BITS_NFREE_SHIFT);\n}\n\nstatic inline void\nedata_sn_set(edata_t *edata, uint64_t sn) {\n\tedata->e_sn = sn;\n}\n\nstatic inline void\nedata_state_set(edata_t *edata, extent_state_t state) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_STATE_MASK) |\n\t    ((uint64_t)state << EDATA_BITS_STATE_SHIFT);\n}\n\nstatic inline void\nedata_guarded_set(edata_t *edata, bool guarded) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_GUARDED_MASK) |\n\t    ((uint64_t)guarded << EDATA_BITS_GUARDED_SHIFT);\n}\n\nstatic inline void\nedata_zeroed_set(edata_t *edata, bool zeroed) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_ZEROED_MASK) |\n\t    ((uint64_t)zeroed << EDATA_BITS_ZEROED_SHIFT);\n}\n\nstatic inline void\nedata_committed_set(edata_t *edata, bool committed) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_COMMITTED_MASK) |\n\t    ((uint64_t)committed << EDATA_BITS_COMMITTED_SHIFT);\n}\n\nstatic inline void\nedata_pai_set(edata_t *edata, extent_pai_t pai) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_PAI_MASK) |\n\t    ((uint64_t)pai << EDATA_BITS_PAI_SHIFT);\n}\n\nstatic inline void\nedata_slab_set(edata_t *edata, bool slab) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_SLAB_MASK) |\n\t    ((uint64_t)slab << 
EDATA_BITS_SLAB_SHIFT);\n}\n\nstatic inline void\nedata_prof_tctx_set(edata_t *edata, prof_tctx_t *tctx) {\n\tatomic_store_p(&edata->e_prof_info.e_prof_tctx, tctx, ATOMIC_RELEASE);\n}\n\nstatic inline void\nedata_prof_alloc_time_set(edata_t *edata, nstime_t *t) {\n\tnstime_copy(&edata->e_prof_info.e_prof_alloc_time, t);\n}\n\nstatic inline void\nedata_prof_alloc_size_set(edata_t *edata, size_t size) {\n\tedata->e_prof_info.e_prof_alloc_size = size;\n}\n\nstatic inline void\nedata_prof_recent_alloc_set_dont_call_directly(edata_t *edata,\n    prof_recent_t *recent_alloc) {\n\tatomic_store_p(&edata->e_prof_info.e_prof_recent_alloc, recent_alloc,\n\t    ATOMIC_RELAXED);\n}\n\nstatic inline bool\nedata_is_head_get(edata_t *edata) {\n\treturn (bool)((edata->e_bits & EDATA_BITS_IS_HEAD_MASK) >>\n\t    EDATA_BITS_IS_HEAD_SHIFT);\n}\n\nstatic inline void\nedata_is_head_set(edata_t *edata, bool is_head) {\n\tedata->e_bits = (edata->e_bits & ~EDATA_BITS_IS_HEAD_MASK) |\n\t    ((uint64_t)is_head << EDATA_BITS_IS_HEAD_SHIFT);\n}\n\nstatic inline bool\nedata_state_in_transition(extent_state_t state) {\n\treturn state >= extent_state_transition;\n}\n\n/*\n * Because this function is implemented as a sequence of bitfield modifications,\n * even though each individual bit is properly initialized, we technically read\n * uninitialized data within it.  
This is mostly fine, since most callers get\n * their edatas from zeroing sources, but callers who make stack edata_ts need\n * to manually zero them.\n */\nstatic inline void\nedata_init(edata_t *edata, unsigned arena_ind, void *addr, size_t size,\n    bool slab, szind_t szind, uint64_t sn, extent_state_t state, bool zeroed,\n    bool committed, extent_pai_t pai, extent_head_state_t is_head) {\n\tassert(addr == PAGE_ADDR2BASE(addr) || !slab);\n\n\tedata_arena_ind_set(edata, arena_ind);\n\tedata_addr_set(edata, addr);\n\tedata_size_set(edata, size);\n\tedata_slab_set(edata, slab);\n\tedata_szind_set(edata, szind);\n\tedata_sn_set(edata, sn);\n\tedata_state_set(edata, state);\n\tedata_guarded_set(edata, false);\n\tedata_zeroed_set(edata, zeroed);\n\tedata_committed_set(edata, committed);\n\tedata_pai_set(edata, pai);\n\tedata_is_head_set(edata, is_head == EXTENT_IS_HEAD);\n\tif (config_prof) {\n\t\tedata_prof_tctx_set(edata, NULL);\n\t}\n}\n\nstatic inline void\nedata_binit(edata_t *edata, void *addr, size_t bsize, uint64_t sn) {\n\tedata_arena_ind_set(edata, (1U << MALLOCX_ARENA_BITS) - 1);\n\tedata_addr_set(edata, addr);\n\tedata_bsize_set(edata, bsize);\n\tedata_slab_set(edata, false);\n\tedata_szind_set(edata, SC_NSIZES);\n\tedata_sn_set(edata, sn);\n\tedata_state_set(edata, extent_state_active);\n\tedata_guarded_set(edata, false);\n\tedata_zeroed_set(edata, true);\n\tedata_committed_set(edata, true);\n\t/*\n\t * This isn't strictly true, but base allocated extents never get\n\t * deallocated and can't be looked up in the emap, but no sense in\n\t * wasting a state bit to encode this fact.\n\t */\n\tedata_pai_set(edata, EXTENT_PAI_PAC);\n}\n\nstatic inline int\nedata_esn_comp(const edata_t *a, const edata_t *b) {\n\tsize_t a_esn = edata_esn_get(a);\n\tsize_t b_esn = edata_esn_get(b);\n\n\treturn (a_esn > b_esn) - (a_esn < b_esn);\n}\n\nstatic inline int\nedata_ead_comp(const edata_t *a, const edata_t *b) {\n\tuintptr_t a_eaddr = (uintptr_t)a;\n\tuintptr_t 
b_eaddr = (uintptr_t)b;\n\n\treturn (a_eaddr > b_eaddr) - (a_eaddr < b_eaddr);\n}\n\nstatic inline edata_cmp_summary_t\nedata_cmp_summary_get(const edata_t *edata) {\n\treturn (edata_cmp_summary_t){edata_sn_get(edata),\n\t\t(uintptr_t)edata_addr_get(edata)};\n}\n\nstatic inline int\nedata_cmp_summary_comp(edata_cmp_summary_t a, edata_cmp_summary_t b) {\n\tint ret;\n\tret = (a.sn > b.sn) - (a.sn < b.sn);\n\tif (ret != 0) {\n\t\treturn ret;\n\t}\n\tret = (a.addr > b.addr) - (a.addr < b.addr);\n\treturn ret;\n}\n\nstatic inline int\nedata_snad_comp(const edata_t *a, const edata_t *b) {\n\tedata_cmp_summary_t a_cmp = edata_cmp_summary_get(a);\n\tedata_cmp_summary_t b_cmp = edata_cmp_summary_get(b);\n\n\treturn edata_cmp_summary_comp(a_cmp, b_cmp);\n}\n\nstatic inline int\nedata_esnead_comp(const edata_t *a, const edata_t *b) {\n\tint ret;\n\n\tret = edata_esn_comp(a, b);\n\tif (ret != 0) {\n\t\treturn ret;\n\t}\n\n\tret = edata_ead_comp(a, b);\n\treturn ret;\n}\n\nph_proto(, edata_avail, edata_t)\nph_proto(, edata_heap, edata_t)\n\n#endif /* JEMALLOC_INTERNAL_EDATA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/edata_cache.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EDATA_CACHE_H\n#define JEMALLOC_INTERNAL_EDATA_CACHE_H\n\n#include \"jemalloc/internal/base.h\"\n\n/* For tests only. */\n#define EDATA_CACHE_FAST_FILL 4\n\n/*\n * A cache of edata_t structures allocated via base_alloc_edata (as opposed to\n * the underlying extents they describe).  The contents of returned edata_t\n * objects are garbage and cannot be relied upon.\n */\n\ntypedef struct edata_cache_s edata_cache_t;\nstruct edata_cache_s {\n\tedata_avail_t avail;\n\tatomic_zu_t count;\n\tmalloc_mutex_t mtx;\n\tbase_t *base;\n};\n\nbool edata_cache_init(edata_cache_t *edata_cache, base_t *base);\nedata_t *edata_cache_get(tsdn_t *tsdn, edata_cache_t *edata_cache);\nvoid edata_cache_put(tsdn_t *tsdn, edata_cache_t *edata_cache, edata_t *edata);\n\nvoid edata_cache_prefork(tsdn_t *tsdn, edata_cache_t *edata_cache);\nvoid edata_cache_postfork_parent(tsdn_t *tsdn, edata_cache_t *edata_cache);\nvoid edata_cache_postfork_child(tsdn_t *tsdn, edata_cache_t *edata_cache);\n\n/*\n * An edata_cache_fast is like an edata_cache, but it relies on external\n * synchronization and avoids first-fit strategies.\n */\n\ntypedef struct edata_cache_fast_s edata_cache_fast_t;\nstruct edata_cache_fast_s {\n\tedata_list_inactive_t list;\n\tedata_cache_t *fallback;\n\tbool disabled;\n};\n\nvoid edata_cache_fast_init(edata_cache_fast_t *ecs, edata_cache_t *fallback);\nedata_t *edata_cache_fast_get(tsdn_t *tsdn, edata_cache_fast_t *ecs);\nvoid edata_cache_fast_put(tsdn_t *tsdn, edata_cache_fast_t *ecs,\n    edata_t *edata);\nvoid edata_cache_fast_disable(tsdn_t *tsdn, edata_cache_fast_t *ecs);\n\n#endif /* JEMALLOC_INTERNAL_EDATA_CACHE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ehooks.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EHOOKS_H\n#define JEMALLOC_INTERNAL_EHOOKS_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n\n/*\n * This module is the internal interface to the extent hooks (both\n * user-specified and external).  Eventually, this will give us the flexibility\n * to use multiple different versions of user-visible extent-hook APIs under a\n * single user interface.\n *\n * Current API expansions (not available to anyone but the default hooks yet):\n *   - Head state tracking.  Hooks can decide whether or not to merge two\n *     extents based on whether or not one of them is the head (i.e. was\n *     allocated on its own).  The later extent loses its \"head\" status.\n */\n\nextern const extent_hooks_t ehooks_default_extent_hooks;\n\ntypedef struct ehooks_s ehooks_t;\nstruct ehooks_s {\n\t/*\n\t * The user-visible id that goes with the ehooks (i.e. that of the base\n\t * they're a part of, the associated arena's index within the arenas\n\t * array).\n\t */\n\tunsigned ind;\n\t/* Logically an extent_hooks_t *. */\n\tatomic_p_t ptr;\n};\n\nextern const extent_hooks_t ehooks_default_extent_hooks;\n\n/*\n * These are not really part of the public API.  
Each hook has a fast-path for\n * the default-hooks case that can avoid various small inefficiencies:\n *   - Forgetting tsd and then calling tsd_get within the hook.\n *   - Getting more state than necessary out of the extent_t.\n *   - Doing arena_ind -> arena -> arena_ind lookups.\n * By making the calls to these functions visible to the compiler, it can move\n * those extra bits of computation down below the fast-paths where they get ignored.\n */\nvoid *ehooks_default_alloc_impl(tsdn_t *tsdn, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, unsigned arena_ind);\nbool ehooks_default_dalloc_impl(void *addr, size_t size);\nvoid ehooks_default_destroy_impl(void *addr, size_t size);\nbool ehooks_default_commit_impl(void *addr, size_t offset, size_t length);\nbool ehooks_default_decommit_impl(void *addr, size_t offset, size_t length);\n#ifdef PAGES_CAN_PURGE_LAZY\nbool ehooks_default_purge_lazy_impl(void *addr, size_t offset, size_t length);\n#endif\n#ifdef PAGES_CAN_PURGE_FORCED\nbool ehooks_default_purge_forced_impl(void *addr, size_t offset, size_t length);\n#endif\nbool ehooks_default_split_impl();\n/*\n * Merge is the only default extent hook we declare -- see the comment in\n * ehooks_merge.\n */\nbool ehooks_default_merge(extent_hooks_t *extent_hooks, void *addr_a,\n    size_t size_a, void *addr_b, size_t size_b, bool committed,\n    unsigned arena_ind);\nbool ehooks_default_merge_impl(tsdn_t *tsdn, void *addr_a, void *addr_b);\nvoid ehooks_default_zero_impl(void *addr, size_t size);\nvoid ehooks_default_guard_impl(void *guard1, void *guard2);\nvoid ehooks_default_unguard_impl(void *guard1, void *guard2);\n\n/*\n * We don't officially support reentrancy from within the extent hooks.  But\n * various people who sit within throwing distance of the jemalloc team want\n * that functionality in certain limited cases.  
The default reentrancy guards\n * assert that we're not reentrant from a0 (since it's the bootstrap arena,\n * where reentrant allocations would be redirected), which we would incorrectly\n * trigger in cases where a0 has extent hooks (those hooks themselves can't be\n * reentrant, then, but there are reasonable uses for such functionality, like\n * putting internal metadata on hugepages).  Therefore, we use the raw\n * reentrancy guards.\n *\n * Eventually, we need to think more carefully about whether and where we\n * support allocating from within extent hooks (and what that means for things\n * like profiling, stats collection, etc.), and document what the guarantee is.\n */\nstatic inline void\nehooks_pre_reentrancy(tsdn_t *tsdn) {\n\ttsd_t *tsd = tsdn_null(tsdn) ? tsd_fetch() : tsdn_tsd(tsdn);\n\ttsd_pre_reentrancy_raw(tsd);\n}\n\nstatic inline void\nehooks_post_reentrancy(tsdn_t *tsdn) {\n\ttsd_t *tsd = tsdn_null(tsdn) ? tsd_fetch() : tsdn_tsd(tsdn);\n\ttsd_post_reentrancy_raw(tsd);\n}\n\n/* Beginning of the public API. */\nvoid ehooks_init(ehooks_t *ehooks, extent_hooks_t *extent_hooks, unsigned ind);\n\nstatic inline unsigned\nehooks_ind_get(const ehooks_t *ehooks) {\n\treturn ehooks->ind;\n}\n\nstatic inline void\nehooks_set_extent_hooks_ptr(ehooks_t *ehooks, extent_hooks_t *extent_hooks) {\n\tatomic_store_p(&ehooks->ptr, extent_hooks, ATOMIC_RELEASE);\n}\n\nstatic inline extent_hooks_t *\nehooks_get_extent_hooks_ptr(ehooks_t *ehooks) {\n\treturn (extent_hooks_t *)atomic_load_p(&ehooks->ptr, ATOMIC_ACQUIRE);\n}\n\nstatic inline bool\nehooks_are_default(ehooks_t *ehooks) {\n\treturn ehooks_get_extent_hooks_ptr(ehooks) ==\n\t    &ehooks_default_extent_hooks;\n}\n\n/*\n * In some cases, a caller needs to allocate resources before attempting to call\n * a hook.  If that hook is doomed to fail, this is wasteful.  
We therefore\n * include some checks for such cases.\n */\nstatic inline bool\nehooks_dalloc_will_fail(ehooks_t *ehooks) {\n\tif (ehooks_are_default(ehooks)) {\n\t\treturn opt_retain;\n\t} else {\n\t\treturn ehooks_get_extent_hooks_ptr(ehooks)->dalloc == NULL;\n\t}\n}\n\nstatic inline bool\nehooks_split_will_fail(ehooks_t *ehooks) {\n\treturn ehooks_get_extent_hooks_ptr(ehooks)->split == NULL;\n}\n\nstatic inline bool\nehooks_merge_will_fail(ehooks_t *ehooks) {\n\treturn ehooks_get_extent_hooks_ptr(ehooks)->merge == NULL;\n}\n\nstatic inline bool\nehooks_guard_will_fail(ehooks_t *ehooks) {\n\t/*\n\t * Before the guard hooks are officially introduced, limit the use to\n\t * the default hooks only.\n\t */\n\treturn !ehooks_are_default(ehooks);\n}\n\n/*\n * Some hooks are required to return zeroed memory in certain situations.  In\n * debug mode, we do some heuristic checks that they did what they were supposed\n * to.\n *\n * This isn't really ehooks-specific (i.e. anyone can check for zeroed memory).\n * But incorrect zero information indicates an ehook bug.\n */\nstatic inline void\nehooks_debug_zero_check(void *addr, size_t size) {\n\tassert(((uintptr_t)addr & PAGE_MASK) == 0);\n\tassert((size & PAGE_MASK) == 0);\n\tassert(size > 0);\n\tif (config_debug) {\n\t\t/* Check the whole first page. */\n\t\tsize_t *p = (size_t *)addr;\n\t\tfor (size_t i = 0; i < PAGE / sizeof(size_t); i++) {\n\t\t\tassert(p[i] == 0);\n\t\t}\n\t\t/*\n\t\t * And 4 spots within.  There's a tradeoff here; the larger\n\t\t * this number, the more likely it is that we'll catch a bug\n\t\t * where ehooks return a sparsely non-zero range.  But\n\t\t * increasing the number of checks also increases the number of\n\t\t * page faults in debug mode.  
FreeBSD does much of their\n\t\t * day-to-day development work in debug mode, so we don't want\n\t\t * even the debug builds to be too slow.\n\t\t */\n\t\tconst size_t nchecks = 4;\n\t\tassert(PAGE >= sizeof(size_t) * nchecks);\n\t\tfor (size_t i = 0; i < nchecks; ++i) {\n\t\t\tassert(p[i * (size / sizeof(size_t) / nchecks)] == 0);\n\t\t}\n\t}\n}\n\n\nstatic inline void *\nehooks_alloc(tsdn_t *tsdn, ehooks_t *ehooks, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit) {\n\tbool orig_zero = *zero;\n\tvoid *ret;\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\tret = ehooks_default_alloc_impl(tsdn, new_addr, size,\n\t\t    alignment, zero, commit, ehooks_ind_get(ehooks));\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tret = extent_hooks->alloc(extent_hooks, new_addr, size,\n\t\t    alignment, zero, commit, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t}\n\tassert(new_addr == NULL || ret == NULL || new_addr == ret);\n\tassert(!orig_zero || *zero);\n\tif (*zero && ret != NULL) {\n\t\tehooks_debug_zero_check(ret, size);\n\t}\n\treturn ret;\n}\n\nstatic inline bool\nehooks_dalloc(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    bool committed) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\treturn ehooks_default_dalloc_impl(addr, size);\n\t} else if (extent_hooks->dalloc == NULL) {\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->dalloc(extent_hooks, addr, size,\n\t\t    committed, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline void\nehooks_destroy(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    bool committed) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) 
{\n\t\tehooks_default_destroy_impl(addr, size);\n\t} else if (extent_hooks->destroy == NULL) {\n\t\t/* Do nothing. */\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\textent_hooks->destroy(extent_hooks, addr, size, committed,\n\t\t    ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t}\n}\n\nstatic inline bool\nehooks_commit(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    size_t offset, size_t length) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tbool err;\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\terr = ehooks_default_commit_impl(addr, offset, length);\n\t} else if (extent_hooks->commit == NULL) {\n\t\terr = true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\terr = extent_hooks->commit(extent_hooks, addr, size,\n\t\t    offset, length, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t}\n\tif (!err) {\n\t\tehooks_debug_zero_check(addr, size);\n\t}\n\treturn err;\n}\n\nstatic inline bool\nehooks_decommit(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    size_t offset, size_t length) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\treturn ehooks_default_decommit_impl(addr, offset, length);\n\t} else if (extent_hooks->decommit == NULL) {\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->decommit(extent_hooks, addr, size,\n\t\t    offset, length, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline bool\nehooks_purge_lazy(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    size_t offset, size_t length) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n#ifdef PAGES_CAN_PURGE_LAZY\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\treturn ehooks_default_purge_lazy_impl(addr, offset, length);\n\t}\n#endif\n\tif (extent_hooks->purge_lazy == NULL) 
{\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->purge_lazy(extent_hooks, addr, size,\n\t\t    offset, length, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline bool\nehooks_purge_forced(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    size_t offset, size_t length) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\t/*\n\t * It would be correct to have a ehooks_debug_zero_check call at the end\n\t * of this function; purge_forced is required to zero.  But checking\n\t * would touch the page in question, which may have performance\n\t * consequences (imagine the hooks are using hugepages, with a global\n\t * zero page off).  Even in debug mode, it's usually a good idea to\n\t * avoid cases that can dramatically increase memory consumption.\n\t */\n#ifdef PAGES_CAN_PURGE_FORCED\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\treturn ehooks_default_purge_forced_impl(addr, offset, length);\n\t}\n#endif\n\tif (extent_hooks->purge_forced == NULL) {\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->purge_forced(extent_hooks, addr, size,\n\t\t    offset, length, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline bool\nehooks_split(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size,\n    size_t size_a, size_t size_b, bool committed) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (ehooks_are_default(ehooks)) {\n\t\treturn ehooks_default_split_impl();\n\t} else if (extent_hooks->split == NULL) {\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->split(extent_hooks, addr, size, size_a,\n\t\t    size_b, committed, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline bool\nehooks_merge(tsdn_t *tsdn, ehooks_t 
*ehooks, void *addr_a, size_t size_a,\n    void *addr_b, size_t size_b, bool committed) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\treturn ehooks_default_merge_impl(tsdn, addr_a, addr_b);\n\t} else if (extent_hooks->merge == NULL) {\n\t\treturn true;\n\t} else {\n\t\tehooks_pre_reentrancy(tsdn);\n\t\tbool err = extent_hooks->merge(extent_hooks, addr_a, size_a,\n\t\t    addr_b, size_b, committed, ehooks_ind_get(ehooks));\n\t\tehooks_post_reentrancy(tsdn);\n\t\treturn err;\n\t}\n}\n\nstatic inline void\nehooks_zero(tsdn_t *tsdn, ehooks_t *ehooks, void *addr, size_t size) {\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\tehooks_default_zero_impl(addr, size);\n\t} else {\n\t\t/*\n\t\t * It would be correct to try using the user-provided purge\n\t\t * hooks (since they are required to have zeroed the extent if\n\t\t * they indicate success), but we don't necessarily know their\n\t\t * cost.  We'll be conservative and use memset.\n\t\t */\n\t\tmemset(addr, 0, size);\n\t}\n}\n\nstatic inline bool\nehooks_guard(tsdn_t *tsdn, ehooks_t *ehooks, void *guard1, void *guard2) {\n\tbool err;\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\tehooks_default_guard_impl(guard1, guard2);\n\t\terr = false;\n\t} else {\n\t\terr = true;\n\t}\n\n\treturn err;\n}\n\nstatic inline bool\nehooks_unguard(tsdn_t *tsdn, ehooks_t *ehooks, void *guard1, void *guard2) {\n\tbool err;\n\textent_hooks_t *extent_hooks = ehooks_get_extent_hooks_ptr(ehooks);\n\n\tif (extent_hooks == &ehooks_default_extent_hooks) {\n\t\tehooks_default_unguard_impl(guard1, guard2);\n\t\terr = false;\n\t} else {\n\t\terr = true;\n\t}\n\n\treturn err;\n}\n\n#endif /* JEMALLOC_INTERNAL_EHOOKS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/emap.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EMAP_H\n#define JEMALLOC_INTERNAL_EMAP_H\n\n#include \"jemalloc/internal/base.h\"\n#include \"jemalloc/internal/rtree.h\"\n\n/*\n * Note: Ends without a semicolon, so that\n *     EMAP_DECLARE_RTREE_CTX;\n * in uses will avoid empty-statement warnings.\n */\n#define EMAP_DECLARE_RTREE_CTX\t\t\t\t\t\t\\\n    rtree_ctx_t rtree_ctx_fallback;\t\t\t\t\t\\\n    rtree_ctx_t *rtree_ctx = tsdn_rtree_ctx(tsdn, &rtree_ctx_fallback)\n\ntypedef struct emap_s emap_t;\nstruct emap_s {\n\trtree_t rtree;\n};\n\n/* Used to pass rtree lookup context down the path. */\ntypedef struct emap_alloc_ctx_t emap_alloc_ctx_t;\nstruct emap_alloc_ctx_t {\n\tszind_t szind;\n\tbool slab;\n};\n\ntypedef struct emap_full_alloc_ctx_s emap_full_alloc_ctx_t;\nstruct emap_full_alloc_ctx_s {\n\tszind_t szind;\n\tbool slab;\n\tedata_t *edata;\n};\n\nbool emap_init(emap_t *emap, base_t *base, bool zeroed);\n\nvoid emap_remap(tsdn_t *tsdn, emap_t *emap, edata_t *edata, szind_t szind,\n    bool slab);\n\nvoid emap_update_edata_state(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_state_t state);\n\n/*\n * The two acquire functions below allow accessing neighbor edatas, if it's safe\n * and valid to do so (i.e. from the same arena, of the same state, etc.).  This\n * is necessary because the ecache locks are state based, and only protect\n * edatas with the same state.  Therefore the neighbor edata's state needs to be\n * verified first, before chasing the edata pointer.  The returned edata will be\n * in an acquired state, meaning other threads will be prevented from accessing\n * it, even if technically the edata can still be discovered from the rtree.\n *\n * This means, at any moment when holding pointers to edata, either one of the\n * state based locks is held (and the edatas are all of the protected state), or\n * the edatas are in an acquired state (e.g. in active or merging state).  
The\n * acquire operation itself (changing the edata to an acquired state) is done\n * under the state locks.\n */\nedata_t *emap_try_acquire_edata_neighbor(tsdn_t *tsdn, emap_t *emap,\n    edata_t *edata, extent_pai_t pai, extent_state_t expected_state,\n    bool forward);\nedata_t *emap_try_acquire_edata_neighbor_expand(tsdn_t *tsdn, emap_t *emap,\n    edata_t *edata, extent_pai_t pai, extent_state_t expected_state);\nvoid emap_release_edata(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_state_t new_state);\n\n/*\n * Associate the given edata with its beginning and end address, setting the\n * szind and slab info appropriately.\n * Returns true on error (i.e. resource exhaustion).\n */\nbool emap_register_boundary(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    szind_t szind, bool slab);\n\n/*\n * Does the same thing, but with the interior of the range, for slab\n * allocations.\n *\n * You might wonder why we don't just have a single emap_register function that\n * does both depending on the value of 'slab'.  
The answer is twofold:\n * - As a practical matter, in places like the extract->split->commit pathway,\n *   we defer the interior operation until we're sure that the commit won't fail\n *   (but we have to register the split boundaries there).\n * - In general, we're trying to move to a world where the page-specific\n *   allocator doesn't know as much about how the pages it allocates will be\n *   used, and passing a 'slab' parameter everywhere makes that more\n *   complicated.\n *\n * Unlike the boundary version, this function can't fail; this is because slabs\n * can't get big enough to touch a new page that neither of the boundaries\n * touched, so no allocation is necessary to fill the interior once the boundary\n * has been touched.\n */\nvoid emap_register_interior(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    szind_t szind);\n\nvoid emap_deregister_boundary(tsdn_t *tsdn, emap_t *emap, edata_t *edata);\nvoid emap_deregister_interior(tsdn_t *tsdn, emap_t *emap, edata_t *edata);\n\ntypedef struct emap_prepare_s emap_prepare_t;\nstruct emap_prepare_s {\n\trtree_leaf_elm_t *lead_elm_a;\n\trtree_leaf_elm_t *lead_elm_b;\n\trtree_leaf_elm_t *trail_elm_a;\n\trtree_leaf_elm_t *trail_elm_b;\n};\n\n/**\n * These functions do the emap metadata management for merging, splitting, and\n * reusing extents.  In particular, they set the boundary mappings from\n * addresses to edatas.  If the result is going to be used as a slab, you\n * still need to call emap_register_interior on it, though.\n *\n * Remap simply changes the szind and slab status of an extent's boundary\n * mappings.  If the extent is not a slab, it doesn't bother with updating the\n * end mapping (since lookups only occur in the interior of an extent for\n * slabs).  Since the szind and slab status only make sense for active extents,\n * this should only be called while activating or deactivating an extent.\n *\n * Split and merge have a \"prepare\" and a \"commit\" portion.  
The prepare portion\n * does the operations that can be done without exclusive access to the extent\n * in question, while the commit variant requires exclusive access to maintain\n * the emap invariants.  The only function that can fail is emap_split_prepare,\n * and it returns true on failure (at which point the caller shouldn't commit).\n *\n * In all cases, \"lead\" refers to the lower-addressed extent, and trail to the\n * higher-addressed one.  It's the caller's responsibility to set the edata\n * state appropriately.\n */\nbool emap_split_prepare(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *edata, size_t size_a, edata_t *trail, size_t size_b);\nvoid emap_split_commit(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, size_t size_a, edata_t *trail, size_t size_b);\nvoid emap_merge_prepare(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, edata_t *trail);\nvoid emap_merge_commit(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, edata_t *trail);\n\n/* Assert that the emap's view of the given edata matches the edata's view. */\nvoid emap_do_assert_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata);\nstatic inline void\nemap_assert_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tif (config_debug) {\n\t\temap_do_assert_mapped(tsdn, emap, edata);\n\t}\n}\n\n/* Assert that the given edata isn't in the map. 
*/\nvoid emap_do_assert_not_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata);\nstatic inline void\nemap_assert_not_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tif (config_debug) {\n\t\temap_do_assert_not_mapped(tsdn, emap, edata);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nemap_edata_in_transition(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tassert(config_debug);\n\temap_assert_mapped(tsdn, emap, edata);\n\n\tEMAP_DECLARE_RTREE_CTX;\n\trtree_contents_t contents = rtree_read(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)edata_base_get(edata));\n\n\treturn edata_state_in_transition(contents.metadata.state);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nemap_edata_is_acquired(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tif (!config_debug) {\n\t\t/* For assertions only. */\n\t\treturn false;\n\t}\n\n\t/*\n\t * The edata is considered acquired if no other threads will attempt to\n\t * read / write any fields from it.  This includes a few cases:\n\t *\n\t * 1) edata not hooked into emap yet -- This implies the edata just got\n\t * allocated or initialized.\n\t *\n\t * 2) in an active or transition state -- In both cases, the edata can\n\t * be discovered from the emap, however the state tracked in the rtree\n\t * will prevent other threads from accessing the actual edata.\n\t */\n\tEMAP_DECLARE_RTREE_CTX;\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, &emap->rtree,\n\t    rtree_ctx, (uintptr_t)edata_base_get(edata), /* dependent */ true,\n\t    /* init_missing */ false);\n\tif (elm == NULL) {\n\t\treturn true;\n\t}\n\trtree_contents_t contents = rtree_leaf_elm_read(tsdn, &emap->rtree, elm,\n\t    /* dependent */ true);\n\tif (contents.edata == NULL ||\n\t    contents.metadata.state == extent_state_active ||\n\t    edata_state_in_transition(contents.metadata.state)) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nextent_assert_can_coalesce(const edata_t *inner, const edata_t *outer) {\n\tassert(edata_arena_ind_get(inner) == 
edata_arena_ind_get(outer));\n\tassert(edata_pai_get(inner) == edata_pai_get(outer));\n\tassert(edata_committed_get(inner) == edata_committed_get(outer));\n\tassert(edata_state_get(inner) == extent_state_active);\n\tassert(edata_state_get(outer) == extent_state_merging);\n\tassert(!edata_guarded_get(inner) && !edata_guarded_get(outer));\n\tassert(edata_base_get(inner) == edata_past_get(outer) ||\n\t    edata_base_get(outer) == edata_past_get(inner));\n}\n\nJEMALLOC_ALWAYS_INLINE void\nextent_assert_can_expand(const edata_t *original, const edata_t *expand) {\n\tassert(edata_arena_ind_get(original) == edata_arena_ind_get(expand));\n\tassert(edata_pai_get(original) == edata_pai_get(expand));\n\tassert(edata_state_get(original) == extent_state_active);\n\tassert(edata_state_get(expand) == extent_state_merging);\n\tassert(edata_past_get(original) == edata_base_get(expand));\n}\n\nJEMALLOC_ALWAYS_INLINE edata_t *\nemap_edata_lookup(tsdn_t *tsdn, emap_t *emap, const void *ptr) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\treturn rtree_read(tsdn, &emap->rtree, rtree_ctx, (uintptr_t)ptr).edata;\n}\n\n/* Fills in alloc_ctx with the info in the map. */\nJEMALLOC_ALWAYS_INLINE void\nemap_alloc_ctx_lookup(tsdn_t *tsdn, emap_t *emap, const void *ptr,\n    emap_alloc_ctx_t *alloc_ctx) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\trtree_metadata_t metadata = rtree_metadata_read(tsdn, &emap->rtree,\n\t    rtree_ctx, (uintptr_t)ptr);\n\talloc_ctx->szind = metadata.szind;\n\talloc_ctx->slab = metadata.slab;\n}\n\n/* The pointer must be mapped. 
*/\nJEMALLOC_ALWAYS_INLINE void\nemap_full_alloc_ctx_lookup(tsdn_t *tsdn, emap_t *emap, const void *ptr,\n    emap_full_alloc_ctx_t *full_alloc_ctx) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\trtree_contents_t contents = rtree_read(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)ptr);\n\tfull_alloc_ctx->edata = contents.edata;\n\tfull_alloc_ctx->szind = contents.metadata.szind;\n\tfull_alloc_ctx->slab = contents.metadata.slab;\n}\n\n/*\n * The pointer is allowed to not be mapped.\n *\n * Returns true when the pointer is not present.\n */\nJEMALLOC_ALWAYS_INLINE bool\nemap_full_alloc_ctx_try_lookup(tsdn_t *tsdn, emap_t *emap, const void *ptr,\n    emap_full_alloc_ctx_t *full_alloc_ctx) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\trtree_contents_t contents;\n\tbool err = rtree_read_independent(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)ptr, &contents);\n\tif (err) {\n\t\treturn true;\n\t}\n\tfull_alloc_ctx->edata = contents.edata;\n\tfull_alloc_ctx->szind = contents.metadata.szind;\n\tfull_alloc_ctx->slab = contents.metadata.slab;\n\treturn false;\n}\n\n/*\n * Only used on the fastpath of free.  Returns true when it cannot be fulfilled\n * by the fast path, e.g. when the metadata key is not cached.\n */\nJEMALLOC_ALWAYS_INLINE bool\nemap_alloc_ctx_try_lookup_fast(tsd_t *tsd, emap_t *emap, const void *ptr,\n    emap_alloc_ctx_t *alloc_ctx) {\n\t/* Use the unsafe getter since this may get called during exit. */\n\trtree_ctx_t *rtree_ctx = tsd_rtree_ctxp_get_unsafe(tsd);\n\n\trtree_metadata_t metadata;\n\tbool err = rtree_metadata_try_read_fast(tsd_tsdn(tsd), &emap->rtree,\n\t    rtree_ctx, (uintptr_t)ptr, &metadata);\n\tif (err) {\n\t\treturn true;\n\t}\n\talloc_ctx->szind = metadata.szind;\n\talloc_ctx->slab = metadata.slab;\n\treturn false;\n}\n\n/*\n * We want to do batch lookups out of the cache bins, which use\n * cache_bin_ptr_array_get to access the i'th element of the bin (since they\n * invert usual ordering in deciding what to flush).  
This lets the emap avoid\n * caring about its caller's ordering.\n */\ntypedef const void *(*emap_ptr_getter)(void *ctx, size_t ind);\n/*\n * This allows size-checking assertions, which we can only do while we're in the\n * process of edata lookups.\n */\ntypedef void (*emap_metadata_visitor)(void *ctx, emap_full_alloc_ctx_t *alloc_ctx);\n\ntypedef union emap_batch_lookup_result_u emap_batch_lookup_result_t;\nunion emap_batch_lookup_result_u {\n\tedata_t *edata;\n\trtree_leaf_elm_t *rtree_leaf;\n};\n\nJEMALLOC_ALWAYS_INLINE void\nemap_edata_lookup_batch(tsd_t *tsd, emap_t *emap, size_t nptrs,\n    emap_ptr_getter ptr_getter, void *ptr_getter_ctx,\n    emap_metadata_visitor metadata_visitor, void *metadata_visitor_ctx,\n    emap_batch_lookup_result_t *result) {\n\t/* Avoids null-checking tsdn in the loop below. */\n\tutil_assume(tsd != NULL);\n\trtree_ctx_t *rtree_ctx = tsd_rtree_ctxp_get(tsd);\n\n\tfor (size_t i = 0; i < nptrs; i++) {\n\t\tconst void *ptr = ptr_getter(ptr_getter_ctx, i);\n\t\t/*\n\t\t * Reuse the edatas array as a temp buffer, lying a little about\n\t\t * the types.\n\t\t */\n\t\tresult[i].rtree_leaf = rtree_leaf_elm_lookup(tsd_tsdn(tsd),\n\t\t    &emap->rtree, rtree_ctx, (uintptr_t)ptr,\n\t\t    /* dependent */ true, /* init_missing */ false);\n\t}\n\n\tfor (size_t i = 0; i < nptrs; i++) {\n\t\trtree_leaf_elm_t *elm = result[i].rtree_leaf;\n\t\trtree_contents_t contents = rtree_leaf_elm_read(tsd_tsdn(tsd),\n\t\t    &emap->rtree, elm, /* dependent */ true);\n\t\tresult[i].edata = contents.edata;\n\t\temap_full_alloc_ctx_t alloc_ctx;\n\t\t/*\n\t\t * Not all these fields are read in practice by the metadata\n\t\t * visitor.  
But the compiler can easily optimize away the ones\n\t\t * that aren't, so no sense in being incomplete.\n\t\t */\n\t\talloc_ctx.szind = contents.metadata.szind;\n\t\talloc_ctx.slab = contents.metadata.slab;\n\t\talloc_ctx.edata = contents.edata;\n\t\tmetadata_visitor(metadata_visitor_ctx, &alloc_ctx);\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_EMAP_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/emitter.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EMITTER_H\n#define JEMALLOC_INTERNAL_EMITTER_H\n\n#include \"jemalloc/internal/ql.h\"\n\ntypedef enum emitter_output_e emitter_output_t;\nenum emitter_output_e {\n\temitter_output_json,\n\temitter_output_json_compact,\n\temitter_output_table\n};\n\ntypedef enum emitter_justify_e emitter_justify_t;\nenum emitter_justify_e {\n\temitter_justify_left,\n\temitter_justify_right,\n\t/* Not for users; just to pass to internal functions. */\n\temitter_justify_none\n};\n\ntypedef enum emitter_type_e emitter_type_t;\nenum emitter_type_e {\n\temitter_type_bool,\n\temitter_type_int,\n\temitter_type_int64,\n\temitter_type_unsigned,\n\temitter_type_uint32,\n\temitter_type_uint64,\n\temitter_type_size,\n\temitter_type_ssize,\n\temitter_type_string,\n\t/*\n\t * A title is a column title in a table; it's just a string, but it's\n\t * not quoted.\n\t */\n\temitter_type_title,\n};\n\ntypedef struct emitter_col_s emitter_col_t;\nstruct emitter_col_s {\n\t/* Filled in by the user. */\n\temitter_justify_t justify;\n\tint width;\n\temitter_type_t type;\n\tunion {\n\t\tbool bool_val;\n\t\tint int_val;\n\t\tunsigned unsigned_val;\n\t\tuint32_t uint32_val;\n\t\tuint32_t uint32_t_val;\n\t\tuint64_t uint64_val;\n\t\tuint64_t uint64_t_val;\n\t\tsize_t size_val;\n\t\tssize_t ssize_val;\n\t\tconst char *str_val;\n\t};\n\n\t/* Filled in by initialization. */\n\tql_elm(emitter_col_t) link;\n};\n\ntypedef struct emitter_row_s emitter_row_t;\nstruct emitter_row_s {\n\tql_head(emitter_col_t) cols;\n};\n\ntypedef struct emitter_s emitter_t;\nstruct emitter_s {\n\temitter_output_t output;\n\t/* The output information. */\n\twrite_cb_t *write_cb;\n\tvoid *cbopaque;\n\tint nesting_depth;\n\t/* True if we've already emitted a value at the given depth. */\n\tbool item_at_depth;\n\t/* True if we emitted a key and will emit corresponding value next. 
*/\n\tbool emitted_key;\n};\n\nstatic inline bool\nemitter_outputs_json(emitter_t *emitter) {\n\treturn emitter->output == emitter_output_json ||\n\t    emitter->output == emitter_output_json_compact;\n}\n\n/* Internal convenience function.  Write to the emitter the given string. */\nJEMALLOC_FORMAT_PRINTF(2, 3)\nstatic inline void\nemitter_printf(emitter_t *emitter, const char *format, ...) {\n\tva_list ap;\n\n\tva_start(ap, format);\n\tmalloc_vcprintf(emitter->write_cb, emitter->cbopaque, format, ap);\n\tva_end(ap);\n}\n\nstatic inline const char * JEMALLOC_FORMAT_ARG(3)\nemitter_gen_fmt(char *out_fmt, size_t out_size, const char *fmt_specifier,\n    emitter_justify_t justify, int width) {\n\tsize_t written;\n\tfmt_specifier++;\n\tif (justify == emitter_justify_none) {\n\t\twritten = malloc_snprintf(out_fmt, out_size,\n\t\t    \"%%%s\", fmt_specifier);\n\t} else if (justify == emitter_justify_left) {\n\t\twritten = malloc_snprintf(out_fmt, out_size,\n\t\t    \"%%-%d%s\", width, fmt_specifier);\n\t} else {\n\t\twritten = malloc_snprintf(out_fmt, out_size,\n\t\t    \"%%%d%s\", width, fmt_specifier);\n\t}\n\t/* Only happens in case of bad format string, which *we* choose. */\n\tassert(written <  out_size);\n\treturn out_fmt;\n}\n\n/*\n * Internal.  Emit the given value type in the relevant encoding (so that the\n * bool true gets mapped to json \"true\", but the string \"true\" gets mapped to\n * json \"\\\"true\\\"\", for instance).\n *\n * Width is ignored if justify is emitter_justify_none.\n */\nstatic inline void\nemitter_print_value(emitter_t *emitter, emitter_justify_t justify, int width,\n    emitter_type_t value_type, const void *value) {\n\tsize_t str_written;\n#define BUF_SIZE 256\n#define FMT_SIZE 10\n\t/*\n\t * We dynamically generate a format string to emit, to let us use the\n\t * snprintf machinery.  
This is kinda hacky, but gets the job done\n\t * quickly without having to think about the various snprintf edge\n\t * cases.\n\t */\n\tchar fmt[FMT_SIZE];\n\tchar buf[BUF_SIZE];\n\n#define EMIT_SIMPLE(type, format)\t\t\t\t\t\\\n\temitter_printf(emitter,\t\t\t\t\t\t\\\n\t    emitter_gen_fmt(fmt, FMT_SIZE, format, justify, width),\t\\\n\t    *(const type *)value);\n\n\tswitch (value_type) {\n\tcase emitter_type_bool:\n\t\temitter_printf(emitter,\n\t\t    emitter_gen_fmt(fmt, FMT_SIZE, \"%s\", justify, width),\n\t\t    *(const bool *)value ?  \"true\" : \"false\");\n\t\tbreak;\n\tcase emitter_type_int:\n\t\tEMIT_SIMPLE(int, \"%d\")\n\t\tbreak;\n\tcase emitter_type_int64:\n\t\tEMIT_SIMPLE(int64_t, \"%\" FMTd64)\n\t\tbreak;\n\tcase emitter_type_unsigned:\n\t\tEMIT_SIMPLE(unsigned, \"%u\")\n\t\tbreak;\n\tcase emitter_type_ssize:\n\t\tEMIT_SIMPLE(ssize_t, \"%zd\")\n\t\tbreak;\n\tcase emitter_type_size:\n\t\tEMIT_SIMPLE(size_t, \"%zu\")\n\t\tbreak;\n\tcase emitter_type_string:\n\t\tstr_written = malloc_snprintf(buf, BUF_SIZE, \"\\\"%s\\\"\",\n\t\t    *(const char *const *)value);\n\t\t/*\n\t\t * We control the strings we output; we shouldn't get anything\n\t\t * anywhere near the fmt size.\n\t\t */\n\t\tassert(str_written < BUF_SIZE);\n\t\temitter_printf(emitter,\n\t\t    emitter_gen_fmt(fmt, FMT_SIZE, \"%s\", justify, width), buf);\n\t\tbreak;\n\tcase emitter_type_uint32:\n\t\tEMIT_SIMPLE(uint32_t, \"%\" FMTu32)\n\t\tbreak;\n\tcase emitter_type_uint64:\n\t\tEMIT_SIMPLE(uint64_t, \"%\" FMTu64)\n\t\tbreak;\n\tcase emitter_type_title:\n\t\tEMIT_SIMPLE(char *const, \"%s\");\n\t\tbreak;\n\tdefault:\n\t\tunreachable();\n\t}\n#undef BUF_SIZE\n#undef FMT_SIZE\n}\n\n\n/* Internal functions.  In json mode, tracks nesting state. 
*/\nstatic inline void\nemitter_nest_inc(emitter_t *emitter) {\n\temitter->nesting_depth++;\n\temitter->item_at_depth = false;\n}\n\nstatic inline void\nemitter_nest_dec(emitter_t *emitter) {\n\temitter->nesting_depth--;\n\temitter->item_at_depth = true;\n}\n\nstatic inline void\nemitter_indent(emitter_t *emitter) {\n\tint amount = emitter->nesting_depth;\n\tconst char *indent_str;\n\tassert(emitter->output != emitter_output_json_compact);\n\tif (emitter->output == emitter_output_json) {\n\t\tindent_str = \"\\t\";\n\t} else {\n\t\tamount *= 2;\n\t\tindent_str = \" \";\n\t}\n\tfor (int i = 0; i < amount; i++) {\n\t\temitter_printf(emitter, \"%s\", indent_str);\n\t}\n}\n\nstatic inline void\nemitter_json_key_prefix(emitter_t *emitter) {\n\tassert(emitter_outputs_json(emitter));\n\tif (emitter->emitted_key) {\n\t\temitter->emitted_key = false;\n\t\treturn;\n\t}\n\tif (emitter->item_at_depth) {\n\t\temitter_printf(emitter, \",\");\n\t}\n\tif (emitter->output != emitter_output_json_compact) {\n\t\temitter_printf(emitter, \"\\n\");\n\t\temitter_indent(emitter);\n\t}\n}\n\n/******************************************************************************/\n/* Public functions for emitter_t. */\n\nstatic inline void\nemitter_init(emitter_t *emitter, emitter_output_t emitter_output,\n    write_cb_t *write_cb, void *cbopaque) {\n\temitter->output = emitter_output;\n\temitter->write_cb = write_cb;\n\temitter->cbopaque = cbopaque;\n\temitter->item_at_depth = false;\n\temitter->emitted_key = false;\n\temitter->nesting_depth = 0;\n}\n\n/******************************************************************************/\n/* JSON public API. */\n\n/*\n * Emits a key (e.g. as appears in an object). 
The next json entity emitted will\n * be the corresponding value.\n */\nstatic inline void\nemitter_json_key(emitter_t *emitter, const char *json_key) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key_prefix(emitter);\n\t\temitter_printf(emitter, \"\\\"%s\\\":%s\", json_key,\n\t\t    emitter->output == emitter_output_json_compact ? \"\" : \" \");\n\t\temitter->emitted_key = true;\n\t}\n}\n\nstatic inline void\nemitter_json_value(emitter_t *emitter, emitter_type_t value_type,\n    const void *value) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key_prefix(emitter);\n\t\temitter_print_value(emitter, emitter_justify_none, -1,\n\t\t    value_type, value);\n\t\temitter->item_at_depth = true;\n\t}\n}\n\n/* Shorthand for calling emitter_json_key and then emitter_json_value. */\nstatic inline void\nemitter_json_kv(emitter_t *emitter, const char *json_key,\n    emitter_type_t value_type, const void *value) {\n\temitter_json_key(emitter, json_key);\n\temitter_json_value(emitter, value_type, value);\n}\n\nstatic inline void\nemitter_json_array_begin(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key_prefix(emitter);\n\t\temitter_printf(emitter, \"[\");\n\t\temitter_nest_inc(emitter);\n\t}\n}\n\n/* Shorthand for calling emitter_json_key and then emitter_json_array_begin. 
*/\nstatic inline void\nemitter_json_array_kv_begin(emitter_t *emitter, const char *json_key) {\n\temitter_json_key(emitter, json_key);\n\temitter_json_array_begin(emitter);\n}\n\nstatic inline void\nemitter_json_array_end(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\tassert(emitter->nesting_depth > 0);\n\t\temitter_nest_dec(emitter);\n\t\tif (emitter->output != emitter_output_json_compact) {\n\t\t\temitter_printf(emitter, \"\\n\");\n\t\t\temitter_indent(emitter);\n\t\t}\n\t\temitter_printf(emitter, \"]\");\n\t}\n}\n\nstatic inline void\nemitter_json_object_begin(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key_prefix(emitter);\n\t\temitter_printf(emitter, \"{\");\n\t\temitter_nest_inc(emitter);\n\t}\n}\n\n/* Shorthand for calling emitter_json_key and then emitter_json_object_begin. */\nstatic inline void\nemitter_json_object_kv_begin(emitter_t *emitter, const char *json_key) {\n\temitter_json_key(emitter, json_key);\n\temitter_json_object_begin(emitter);\n}\n\nstatic inline void\nemitter_json_object_end(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\tassert(emitter->nesting_depth > 0);\n\t\temitter_nest_dec(emitter);\n\t\tif (emitter->output != emitter_output_json_compact) {\n\t\t\temitter_printf(emitter, \"\\n\");\n\t\t\temitter_indent(emitter);\n\t\t}\n\t\temitter_printf(emitter, \"}\");\n\t}\n}\n\n\n/******************************************************************************/\n/* Table public API. 
*/\n\nstatic inline void\nemitter_table_dict_begin(emitter_t *emitter, const char *table_key) {\n\tif (emitter->output == emitter_output_table) {\n\t\temitter_indent(emitter);\n\t\temitter_printf(emitter, \"%s\\n\", table_key);\n\t\temitter_nest_inc(emitter);\n\t}\n}\n\nstatic inline void\nemitter_table_dict_end(emitter_t *emitter) {\n\tif (emitter->output == emitter_output_table) {\n\t\temitter_nest_dec(emitter);\n\t}\n}\n\nstatic inline void\nemitter_table_kv_note(emitter_t *emitter, const char *table_key,\n    emitter_type_t value_type, const void *value,\n    const char *table_note_key, emitter_type_t table_note_value_type,\n    const void *table_note_value) {\n\tif (emitter->output == emitter_output_table) {\n\t\temitter_indent(emitter);\n\t\temitter_printf(emitter, \"%s: \", table_key);\n\t\temitter_print_value(emitter, emitter_justify_none, -1,\n\t\t    value_type, value);\n\t\tif (table_note_key != NULL) {\n\t\t\temitter_printf(emitter, \" (%s: \", table_note_key);\n\t\t\temitter_print_value(emitter, emitter_justify_none, -1,\n\t\t\t    table_note_value_type, table_note_value);\n\t\t\temitter_printf(emitter, \")\");\n\t\t}\n\t\temitter_printf(emitter, \"\\n\");\n\t}\n\temitter->item_at_depth = true;\n}\n\nstatic inline void\nemitter_table_kv(emitter_t *emitter, const char *table_key,\n    emitter_type_t value_type, const void *value) {\n\temitter_table_kv_note(emitter, table_key, value_type, value, NULL,\n\t    emitter_type_bool, NULL);\n}\n\n\n/* Write to the emitter the given string, but only in table mode. */\nJEMALLOC_FORMAT_PRINTF(2, 3)\nstatic inline void\nemitter_table_printf(emitter_t *emitter, const char *format, ...) 
{\n\tif (emitter->output == emitter_output_table) {\n\t\tva_list ap;\n\t\tva_start(ap, format);\n\t\tmalloc_vcprintf(emitter->write_cb, emitter->cbopaque, format, ap);\n\t\tva_end(ap);\n\t}\n}\n\nstatic inline void\nemitter_table_row(emitter_t *emitter, emitter_row_t *row) {\n\tif (emitter->output != emitter_output_table) {\n\t\treturn;\n\t}\n\temitter_col_t *col;\n\tql_foreach(col, &row->cols, link) {\n\t\temitter_print_value(emitter, col->justify, col->width,\n\t\t    col->type, (const void *)&col->bool_val);\n\t}\n\temitter_table_printf(emitter, \"\\n\");\n}\n\nstatic inline void\nemitter_row_init(emitter_row_t *row) {\n\tql_new(&row->cols);\n}\n\nstatic inline void\nemitter_col_init(emitter_col_t *col, emitter_row_t *row) {\n\tql_elm_new(col, link);\n\tql_tail_insert(&row->cols, col, link);\n}\n\n\n/******************************************************************************/\n/*\n * Generalized public API. Emits using either JSON or table, according to\n * settings in the emitter_t. */\n\n/*\n * Note emits a different kv pair as well, but only in table mode.  
Omits the\n * note if table_note_key is NULL.\n */\nstatic inline void\nemitter_kv_note(emitter_t *emitter, const char *json_key, const char *table_key,\n    emitter_type_t value_type, const void *value,\n    const char *table_note_key, emitter_type_t table_note_value_type,\n    const void *table_note_value) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key(emitter, json_key);\n\t\temitter_json_value(emitter, value_type, value);\n\t} else {\n\t\temitter_table_kv_note(emitter, table_key, value_type, value,\n\t\t    table_note_key, table_note_value_type, table_note_value);\n\t}\n\temitter->item_at_depth = true;\n}\n\nstatic inline void\nemitter_kv(emitter_t *emitter, const char *json_key, const char *table_key,\n    emitter_type_t value_type, const void *value) {\n\temitter_kv_note(emitter, json_key, table_key, value_type, value, NULL,\n\t    emitter_type_bool, NULL);\n}\n\nstatic inline void\nemitter_dict_begin(emitter_t *emitter, const char *json_key,\n    const char *table_header) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_key(emitter, json_key);\n\t\temitter_json_object_begin(emitter);\n\t} else {\n\t\temitter_table_dict_begin(emitter, table_header);\n\t}\n}\n\nstatic inline void\nemitter_dict_end(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_object_end(emitter);\n\t} else {\n\t\temitter_table_dict_end(emitter);\n\t}\n}\n\nstatic inline void\nemitter_begin(emitter_t *emitter) {\n\tif (emitter_outputs_json(emitter)) {\n\t\tassert(emitter->nesting_depth == 0);\n\t\temitter_printf(emitter, \"{\");\n\t\temitter_nest_inc(emitter);\n\t} else {\n\t\t/*\n\t\t * This guarantees that we always call write_cb at least once.\n\t\t * This is useful if some invariant is established by each call\n\t\t * to write_cb, but doesn't hold initially: e.g., some buffer\n\t\t * holds a null-terminated string.\n\t\t */\n\t\temitter_printf(emitter, \"%s\", \"\");\n\t}\n}\n\nstatic inline void\nemitter_end(emitter_t *emitter) 
{\n\tif (emitter_outputs_json(emitter)) {\n\t\tassert(emitter->nesting_depth == 1);\n\t\temitter_nest_dec(emitter);\n\t\temitter_printf(emitter, \"%s\", emitter->output ==\n\t\t    emitter_output_json_compact ? \"}\" : \"\\n}\\n\");\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_EMITTER_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/eset.h",
    "content": "#ifndef JEMALLOC_INTERNAL_ESET_H\n#define JEMALLOC_INTERNAL_ESET_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/fb.h\"\n#include \"jemalloc/internal/edata.h\"\n#include \"jemalloc/internal/mutex.h\"\n\n/*\n * An eset (\"extent set\") is a quantized collection of extents, with built-in\n * LRU queue.\n *\n * This class is not thread-safe; synchronization must be done externally if\n * there are mutating operations.  One exception is the stats counters, which\n * may be read without any locking.\n */\n\ntypedef struct eset_bin_s eset_bin_t;\nstruct eset_bin_s {\n\tedata_heap_t heap;\n\t/*\n\t * We do first-fit across multiple size classes.  If we compared against\n\t * the min element in each heap directly, we'd take a cache miss per\n\t * extent we looked at.  If we co-locate the edata summaries, we only\n\t * take a miss on the edata we're actually going to return (which is\n\t * inevitable anyways).\n\t */\n\tedata_cmp_summary_t heap_min;\n};\n\ntypedef struct eset_bin_stats_s eset_bin_stats_t;\nstruct eset_bin_stats_s {\n\tatomic_zu_t nextents;\n\tatomic_zu_t nbytes;\n};\n\ntypedef struct eset_s eset_t;\nstruct eset_s {\n\t/* Bitmap for which set bits correspond to non-empty heaps. */\n\tfb_group_t bitmap[FB_NGROUPS(SC_NPSIZES + 1)];\n\n\t/* Quantized per size class heaps of extents. */\n\teset_bin_t bins[SC_NPSIZES + 1];\n\n\teset_bin_stats_t bin_stats[SC_NPSIZES + 1];\n\n\t/* LRU of all extents in heaps. */\n\tedata_list_inactive_t lru;\n\n\t/* Page sum for all extents in heaps. */\n\tatomic_zu_t npages;\n\n\t/*\n\t * A duplication of the data in the containing ecache.  We use this only\n\t * for assertions on the states of the passed-in extents.\n\t */\n\textent_state_t state;\n};\n\nvoid eset_init(eset_t *eset, extent_state_t state);\n\nsize_t eset_npages_get(eset_t *eset);\n/* Get the number of extents in the given page size index. 
*/\nsize_t eset_nextents_get(eset_t *eset, pszind_t ind);\n/* Get the sum total bytes of the extents in the given page size index. */\nsize_t eset_nbytes_get(eset_t *eset, pszind_t ind);\n\nvoid eset_insert(eset_t *eset, edata_t *edata);\nvoid eset_remove(eset_t *eset, edata_t *edata);\n/*\n * Select an extent from this eset of the given size and alignment.  Returns\n * null if no such item could be found.\n */\nedata_t *eset_fit(eset_t *eset, size_t esize, size_t alignment, bool exact_only,\n    unsigned lg_max_fit);\n\n#endif /* JEMALLOC_INTERNAL_ESET_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/exp_grow.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EXP_GROW_H\n#define JEMALLOC_INTERNAL_EXP_GROW_H\n\ntypedef struct exp_grow_s exp_grow_t;\nstruct exp_grow_s {\n\t/*\n\t * Next extent size class in a growing series to use when satisfying a\n\t * request via the extent hooks (only if opt_retain).  This limits the\n\t * number of disjoint virtual memory ranges so that extent merging can\n\t * be effective even if multiple arenas' extent allocation requests are\n\t * highly interleaved.\n\t *\n\t * retain_grow_limit is the max allowed size ind to expand (unless the\n\t * required size is greater).  Default is no limit, and controlled\n\t * through mallctl only.\n\t */\n\tpszind_t next;\n\tpszind_t limit;\n};\n\nstatic inline bool\nexp_grow_size_prepare(exp_grow_t *exp_grow, size_t alloc_size_min,\n    size_t *r_alloc_size, pszind_t *r_skip) {\n\t*r_skip = 0;\n\t*r_alloc_size = sz_pind2sz(exp_grow->next + *r_skip);\n\twhile (*r_alloc_size < alloc_size_min) {\n\t\t(*r_skip)++;\n\t\tif (exp_grow->next + *r_skip  >=\n\t\t    sz_psz2ind(SC_LARGE_MAXCLASS)) {\n\t\t\t/* Outside legal range. */\n\t\t\treturn true;\n\t\t}\n\t\t*r_alloc_size = sz_pind2sz(exp_grow->next + *r_skip);\n\t}\n\treturn false;\n}\n\nstatic inline void\nexp_grow_size_commit(exp_grow_t *exp_grow, pszind_t skip) {\n\tif (exp_grow->next + skip + 1 <= exp_grow->limit) {\n\t\texp_grow->next += skip + 1;\n\t} else {\n\t\texp_grow->next = exp_grow->limit;\n\t}\n\n}\n\nvoid exp_grow_init(exp_grow_t *exp_grow);\n\n#endif /* JEMALLOC_INTERNAL_EXP_GROW_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/extent.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EXTENT_H\n#define JEMALLOC_INTERNAL_EXTENT_H\n\n#include \"jemalloc/internal/ecache.h\"\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/ph.h\"\n#include \"jemalloc/internal/rtree.h\"\n\n/*\n * This module contains the page-level allocator.  It chooses the addresses that\n * allocations requested by other modules will inhabit, and updates the global\n * metadata to reflect allocation/deallocation/purging decisions.\n */\n\n/*\n * When reusing (and splitting) an active extent,\n * (1U << opt_lg_extent_max_active_fit) is the max ratio between the size of\n * the active extent and the new extent.\n */\n#define LG_EXTENT_MAX_ACTIVE_FIT_DEFAULT 6\nextern size_t opt_lg_extent_max_active_fit;\n\nedata_t *ecache_alloc(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *expand_edata, size_t size, size_t alignment,\n    bool zero, bool guarded);\nedata_t *ecache_alloc_grow(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *expand_edata, size_t size, size_t alignment,\n    bool zero, bool guarded);\nvoid ecache_dalloc(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata);\nedata_t *ecache_evict(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, size_t npages_min);\n\nvoid extent_gdump_add(tsdn_t *tsdn, const edata_t *edata);\nvoid extent_record(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *edata);\nvoid extent_dalloc_gap(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata);\nedata_t *extent_alloc_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    void *new_addr, size_t size, size_t alignment, bool zero, bool *commit,\n    bool growing_retained);\nvoid extent_dalloc_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata);\nvoid extent_destroy_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata);\nbool extent_commit_wrapper(tsdn_t *tsdn, ehooks_t 
*ehooks, edata_t *edata,\n    size_t offset, size_t length);\nbool extent_decommit_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length);\nbool extent_purge_lazy_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length);\nbool extent_purge_forced_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length);\nedata_t *extent_split_wrapper(tsdn_t *tsdn, pac_t *pac,\n    ehooks_t *ehooks, edata_t *edata, size_t size_a, size_t size_b,\n    bool holding_core_locks);\nbool extent_merge_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *a, edata_t *b);\nbool extent_commit_zero(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    bool commit, bool zero, bool growing_retained);\nsize_t extent_sn_next(pac_t *pac);\nbool extent_boot(void);\n\nJEMALLOC_ALWAYS_INLINE bool\nextent_neighbor_head_state_mergeable(bool edata_is_head,\n    bool neighbor_is_head, bool forward) {\n\t/*\n\t * Head state checking: disallow merging if the higher addr extent is a\n\t * head extent.  This helps preserve first-fit, and more importantly\n\t * makes sure there is no merging across arenas.\n\t */\n\tif (forward) {\n\t\tif (neighbor_is_head) {\n\t\t\treturn false;\n\t\t}\n\t} else {\n\t\tif (edata_is_head) {\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nextent_can_acquire_neighbor(edata_t *edata, rtree_contents_t contents,\n    extent_pai_t pai, extent_state_t expected_state, bool forward,\n    bool expanding) {\n\tedata_t *neighbor = contents.edata;\n\tif (neighbor == NULL) {\n\t\treturn false;\n\t}\n\t/* It's not safe to access *neighbor yet; must verify states first. 
*/\n\tbool neighbor_is_head = contents.metadata.is_head;\n\tif (!extent_neighbor_head_state_mergeable(edata_is_head_get(edata),\n\t    neighbor_is_head, forward)) {\n\t\treturn false;\n\t}\n\textent_state_t neighbor_state = contents.metadata.state;\n\tif (pai == EXTENT_PAI_PAC) {\n\t\tif (neighbor_state != expected_state) {\n\t\t\treturn false;\n\t\t}\n\t\t/* From this point, it's safe to access *neighbor. */\n\t\tif (!expanding && (edata_committed_get(edata) !=\n\t\t    edata_committed_get(neighbor))) {\n\t\t\t/*\n\t\t\t * Some platforms (e.g. Windows) require an explicit\n\t\t\t * commit step (and writing to uncommitted memory is not\n\t\t\t * allowed).\n\t\t\t */\n\t\t\treturn false;\n\t\t}\n\t} else {\n\t\tif (neighbor_state == extent_state_active) {\n\t\t\treturn false;\n\t\t}\n\t\t/* From this point, it's safe to access *neighbor. */\n\t}\n\n\tassert(edata_pai_get(edata) == pai);\n\tif (edata_pai_get(neighbor) != pai) {\n\t\treturn false;\n\t}\n\tif (opt_retain) {\n\t\tassert(edata_arena_ind_get(edata) ==\n\t\t    edata_arena_ind_get(neighbor));\n\t} else {\n\t\tif (edata_arena_ind_get(edata) !=\n\t\t    edata_arena_ind_get(neighbor)) {\n\t\t\treturn false;\n\t\t}\n\t}\n\tassert(!edata_guarded_get(edata) && !edata_guarded_get(neighbor));\n\n\treturn true;\n}\n\n#endif /* JEMALLOC_INTERNAL_EXTENT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/extent_dss.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EXTENT_DSS_H\n#define JEMALLOC_INTERNAL_EXTENT_DSS_H\n\ntypedef enum {\n\tdss_prec_disabled  = 0,\n\tdss_prec_primary   = 1,\n\tdss_prec_secondary = 2,\n\n\tdss_prec_limit     = 3\n} dss_prec_t;\n#define DSS_PREC_DEFAULT dss_prec_secondary\n#define DSS_DEFAULT \"secondary\"\n\nextern const char *dss_prec_names[];\n\nextern const char *opt_dss;\n\ndss_prec_t extent_dss_prec_get(void);\nbool extent_dss_prec_set(dss_prec_t dss_prec);\nvoid *extent_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr,\n    size_t size, size_t alignment, bool *zero, bool *commit);\nbool extent_in_dss(void *addr);\nbool extent_dss_mergeable(void *addr_a, void *addr_b);\nvoid extent_dss_boot(void);\n\n#endif /* JEMALLOC_INTERNAL_EXTENT_DSS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/extent_mmap.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EXTENT_MMAP_EXTERNS_H\n#define JEMALLOC_INTERNAL_EXTENT_MMAP_EXTERNS_H\n\nextern bool opt_retain;\n\nvoid *extent_alloc_mmap(void *new_addr, size_t size, size_t alignment,\n    bool *zero, bool *commit);\nbool extent_dalloc_mmap(void *addr, size_t size);\n\n#endif /* JEMALLOC_INTERNAL_EXTENT_MMAP_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/fb.h",
    "content": "#ifndef JEMALLOC_INTERNAL_FB_H\n#define JEMALLOC_INTERNAL_FB_H\n\n/*\n * The flat bitmap module.  This has a larger API relative to the bitmap module\n * (supporting things like backwards searches, and searching for both set and\n * unset bits), at the cost of slower operations for very large bitmaps.\n *\n * Initialized flat bitmaps start at all-zeros (all bits unset).\n */\n\ntypedef unsigned long fb_group_t;\n#define FB_GROUP_BITS (ZU(1) << (LG_SIZEOF_LONG + 3))\n#define FB_NGROUPS(nbits) ((nbits) / FB_GROUP_BITS \\\n    + ((nbits) % FB_GROUP_BITS == 0 ? 0 : 1))\n\nstatic inline void\nfb_init(fb_group_t *fb, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tmemset(fb, 0, ngroups * sizeof(fb_group_t));\n}\n\nstatic inline bool\nfb_empty(fb_group_t *fb, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tfor (size_t i = 0; i < ngroups; i++) {\n\t\tif (fb[i] != 0) {\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\nstatic inline bool\nfb_full(fb_group_t *fb, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tsize_t trailing_bits = nbits % FB_GROUP_BITS;\n\tsize_t limit = (trailing_bits == 0 ? 
ngroups : ngroups - 1);\n\tfor (size_t i = 0; i < limit; i++) {\n\t\tif (fb[i] != ~(fb_group_t)0) {\n\t\t\treturn false;\n\t\t}\n\t}\n\tif (trailing_bits == 0) {\n\t\treturn true;\n\t}\n\treturn fb[ngroups - 1] == ((fb_group_t)1 << trailing_bits) - 1;\n}\n\nstatic inline bool\nfb_get(fb_group_t *fb, size_t nbits, size_t bit) {\n\tassert(bit < nbits);\n\tsize_t group_ind = bit / FB_GROUP_BITS;\n\tsize_t bit_ind = bit % FB_GROUP_BITS;\n\treturn (bool)(fb[group_ind] & ((fb_group_t)1 << bit_ind));\n}\n\nstatic inline void\nfb_set(fb_group_t *fb, size_t nbits, size_t bit) {\n\tassert(bit < nbits);\n\tsize_t group_ind = bit / FB_GROUP_BITS;\n\tsize_t bit_ind = bit % FB_GROUP_BITS;\n\tfb[group_ind] |= ((fb_group_t)1 << bit_ind);\n}\n\nstatic inline void\nfb_unset(fb_group_t *fb, size_t nbits, size_t bit) {\n\tassert(bit < nbits);\n\tsize_t group_ind = bit / FB_GROUP_BITS;\n\tsize_t bit_ind = bit % FB_GROUP_BITS;\n\tfb[group_ind] &= ~((fb_group_t)1 << bit_ind);\n}\n\n\n/*\n * Some implementation details.  This visitation function lets us apply a group\n * visitor to each group in the bitmap (potentially modifying it).  The mask\n * indicates which bits are logically part of the visitation.\n */\ntypedef void (*fb_group_visitor_t)(void *ctx, fb_group_t *fb, fb_group_t mask);\nJEMALLOC_ALWAYS_INLINE void\nfb_visit_impl(fb_group_t *fb, size_t nbits, fb_group_visitor_t visit, void *ctx,\n    size_t start, size_t cnt) {\n\tassert(cnt > 0);\n\tassert(start + cnt <= nbits);\n\tsize_t group_ind = start / FB_GROUP_BITS;\n\tsize_t start_bit_ind = start % FB_GROUP_BITS;\n\t/*\n\t * The first group is special; it's the only one we don't start writing\n\t * to from bit 0.\n\t */\n\tsize_t first_group_cnt = (start_bit_ind + cnt > FB_GROUP_BITS\n\t\t? 
FB_GROUP_BITS - start_bit_ind : cnt);\n\t/*\n\t * We can basically split affected words into:\n\t *   - The first group, where we touch only the high bits\n\t *   - The last group, where we touch only the low bits\n\t *   - The middle, where we set all the bits to the same thing.\n\t * We treat each case individually.  The last two could be merged, but\n\t * this can lead to bad codegen for those middle words.\n\t */\n\t/* First group */\n\tfb_group_t mask = ((~(fb_group_t)0)\n\t    >> (FB_GROUP_BITS - first_group_cnt))\n\t    << start_bit_ind;\n\tvisit(ctx, &fb[group_ind], mask);\n\n\tcnt -= first_group_cnt;\n\tgroup_ind++;\n\t/* Middle groups */\n\twhile (cnt > FB_GROUP_BITS) {\n\t\tvisit(ctx, &fb[group_ind], ~(fb_group_t)0);\n\t\tcnt -= FB_GROUP_BITS;\n\t\tgroup_ind++;\n\t}\n\t/* Last group */\n\tif (cnt != 0) {\n\t\tmask = (~(fb_group_t)0) >> (FB_GROUP_BITS - cnt);\n\t\tvisit(ctx, &fb[group_ind], mask);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nfb_assign_visitor(void *ctx, fb_group_t *fb, fb_group_t mask) {\n\tbool val = *(bool *)ctx;\n\tif (val) {\n\t\t*fb |= mask;\n\t} else {\n\t\t*fb &= ~mask;\n\t}\n}\n\n/* Sets the cnt bits starting at position start.  Must not have a 0 count. */\nstatic inline void\nfb_set_range(fb_group_t *fb, size_t nbits, size_t start, size_t cnt) {\n\tbool val = true;\n\tfb_visit_impl(fb, nbits, &fb_assign_visitor, &val, start, cnt);\n}\n\n/* Unsets the cnt bits starting at position start.  Must not have a 0 count. */\nstatic inline void\nfb_unset_range(fb_group_t *fb, size_t nbits, size_t start, size_t cnt) {\n\tbool val = false;\n\tfb_visit_impl(fb, nbits, &fb_assign_visitor, &val, start, cnt);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nfb_scount_visitor(void *ctx, fb_group_t *fb, fb_group_t mask) {\n\tsize_t *scount = (size_t *)ctx;\n\t*scount += popcount_lu(*fb & mask);\n}\n\n/* Finds the number of set bits in the range of length cnt starting at start. 
*/\nJEMALLOC_ALWAYS_INLINE size_t\nfb_scount(fb_group_t *fb, size_t nbits, size_t start, size_t cnt) {\n\tsize_t scount = 0;\n\tfb_visit_impl(fb, nbits, &fb_scount_visitor, &scount, start, cnt);\n\treturn scount;\n}\n\n/* Finds the number of unset bits in the range of length cnt starting at start. */\nJEMALLOC_ALWAYS_INLINE size_t\nfb_ucount(fb_group_t *fb, size_t nbits, size_t start, size_t cnt) {\n\tsize_t scount = fb_scount(fb, nbits, start, cnt);\n\treturn cnt - scount;\n}\n\n/*\n * An implementation detail; find the first bit at position >= min_bit with the\n * value val.\n *\n * Returns the number of bits in the bitmap if no such bit exists.\n */\nJEMALLOC_ALWAYS_INLINE ssize_t\nfb_find_impl(fb_group_t *fb, size_t nbits, size_t start, bool val,\n    bool forward) {\n\tassert(start < nbits);\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tssize_t group_ind = start / FB_GROUP_BITS;\n\tsize_t bit_ind = start % FB_GROUP_BITS;\n\n\tfb_group_t maybe_invert = (val ? 0 : (fb_group_t)-1);\n\n\tfb_group_t group = fb[group_ind];\n\tgroup ^= maybe_invert;\n\tif (forward) {\n\t\t/* Only keep ones in bits bit_ind and above. */\n\t\tgroup &= ~((1LU << bit_ind) - 1);\n\t} else {\n\t\t/*\n\t\t * Only keep ones in bits bit_ind and below.  You might more\n\t\t * naturally express this as (1 << (bit_ind + 1)) - 1, but\n\t\t * that shifts by an invalid amount if bit_ind is one less than\n\t\t * FB_GROUP_BITS.\n\t\t */\n\t\tgroup &= ((2LU << bit_ind) - 1);\n\t}\n\tssize_t group_ind_bound = forward ? (ssize_t)ngroups : -1;\n\twhile (group == 0) {\n\t\tgroup_ind += forward ? 1 : -1;\n\t\tif (group_ind == group_ind_bound) {\n\t\t\treturn forward ? (ssize_t)nbits : (ssize_t)-1;\n\t\t}\n\t\tgroup = fb[group_ind];\n\t\tgroup ^= maybe_invert;\n\t}\n\tassert(group != 0);\n\tsize_t bit = forward ? 
ffs_lu(group) : fls_lu(group);\n\tsize_t pos = group_ind * FB_GROUP_BITS + bit;\n\t/*\n\t * The high bits of a partially filled last group are zeros, so if we're\n\t * looking for zeros we don't want to report an invalid result.\n\t */\n\tif (forward && !val && pos > nbits) {\n\t\treturn nbits;\n\t}\n\treturn pos;\n}\n\n/*\n * Find the first unset bit in the bitmap with an index >= min_bit.  Returns the\n * number of bits in the bitmap if no such bit exists.\n */\nstatic inline size_t\nfb_ffu(fb_group_t *fb, size_t nbits, size_t min_bit) {\n\treturn (size_t)fb_find_impl(fb, nbits, min_bit, /* val */ false,\n\t    /* forward */ true);\n}\n\n/* The same, but looks for a set bit. */\nstatic inline size_t\nfb_ffs(fb_group_t *fb, size_t nbits, size_t min_bit) {\n\treturn (size_t)fb_find_impl(fb, nbits, min_bit, /* val */ true,\n\t    /* forward */ true);\n}\n\n/*\n * Find the last unset bit in the bitmap with an index <= max_bit.  Returns -1\n * if no such bit exists.\n */\nstatic inline ssize_t\nfb_flu(fb_group_t *fb, size_t nbits, size_t max_bit) {\n\treturn fb_find_impl(fb, nbits, max_bit, /* val */ false,\n\t    /* forward */ false);\n}\n\n/* The same, but looks for a set bit. */\nstatic inline ssize_t\nfb_fls(fb_group_t *fb, size_t nbits, size_t max_bit) {\n\treturn fb_find_impl(fb, nbits, max_bit, /* val */ true,\n\t    /* forward */ false);\n}\n\n/* Returns whether or not we found a range. */\nJEMALLOC_ALWAYS_INLINE bool\nfb_iter_range_impl(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len, bool val, bool forward) {\n\tassert(start < nbits);\n\tssize_t next_range_begin = fb_find_impl(fb, nbits, start, val, forward);\n\tif ((forward && next_range_begin == (ssize_t)nbits)\n\t    || (!forward && next_range_begin == (ssize_t)-1)) {\n\t\treturn false;\n\t}\n\t/* Half open range; the set bits are [begin, end). 
*/\n\tssize_t next_range_end = fb_find_impl(fb, nbits, next_range_begin, !val,\n\t    forward);\n\tif (forward) {\n\t\t*r_begin = next_range_begin;\n\t\t*r_len = next_range_end - next_range_begin;\n\t} else {\n\t\t*r_begin = next_range_end + 1;\n\t\t*r_len = next_range_begin - next_range_end;\n\t}\n\treturn true;\n}\n\n/*\n * Used to iterate through ranges of set bits.\n *\n * Tries to find the next contiguous sequence of set bits with a first index >=\n * start.  If one exists, puts the earliest bit of the range in *r_begin, its\n * length in *r_len, and returns true.  Otherwise, returns false (without\n * touching *r_begin or *r_end).\n */\nstatic inline bool\nfb_srange_iter(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len) {\n\treturn fb_iter_range_impl(fb, nbits, start, r_begin, r_len,\n\t    /* val */ true, /* forward */ true);\n}\n\n/*\n * The same as fb_srange_iter, but searches backwards from start rather than\n * forwards.  (The position returned is still the earliest bit in the range).\n */\nstatic inline bool\nfb_srange_riter(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len) {\n\treturn fb_iter_range_impl(fb, nbits, start, r_begin, r_len,\n\t    /* val */ true, /* forward */ false);\n}\n\n/* Similar to fb_srange_iter, but searches for unset bits. */\nstatic inline bool\nfb_urange_iter(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len) {\n\treturn fb_iter_range_impl(fb, nbits, start, r_begin, r_len,\n\t    /* val */ false, /* forward */ true);\n}\n\n/* Similar to fb_srange_riter, but searches for unset bits. 
*/\nstatic inline bool\nfb_urange_riter(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len) {\n\treturn fb_iter_range_impl(fb, nbits, start, r_begin, r_len,\n\t    /* val */ false, /* forward */ false);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nfb_range_longest_impl(fb_group_t *fb, size_t nbits, bool val) {\n\tsize_t begin = 0;\n\tsize_t longest_len = 0;\n\tsize_t len = 0;\n\twhile (begin < nbits && fb_iter_range_impl(fb, nbits, begin, &begin,\n\t    &len, val, /* forward */ true)) {\n\t\tif (len > longest_len) {\n\t\t\tlongest_len = len;\n\t\t}\n\t\tbegin += len;\n\t}\n\treturn longest_len;\n}\n\nstatic inline size_t\nfb_srange_longest(fb_group_t *fb, size_t nbits) {\n\treturn fb_range_longest_impl(fb, nbits, /* val */ true);\n}\n\nstatic inline size_t\nfb_urange_longest(fb_group_t *fb, size_t nbits) {\n\treturn fb_range_longest_impl(fb, nbits, /* val */ false);\n}\n\n/*\n * Initializes each bit of dst with the bitwise-AND of the corresponding bits of\n * src1 and src2.  All bitmaps must be the same size.\n */\nstatic inline void\nfb_bit_and(fb_group_t *dst, fb_group_t *src1, fb_group_t *src2, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tfor (size_t i = 0; i < ngroups; i++) {\n\t\tdst[i] = src1[i] & src2[i];\n\t}\n}\n\n/* Like fb_bit_and, but with bitwise-OR. */\nstatic inline void\nfb_bit_or(fb_group_t *dst, fb_group_t *src1, fb_group_t *src2, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tfor (size_t i = 0; i < ngroups; i++) {\n\t\tdst[i] = src1[i] | src2[i];\n\t}\n}\n\n/* Initializes dst bit i to the negation of source bit i. */\nstatic inline void\nfb_bit_not(fb_group_t *dst, fb_group_t *src, size_t nbits) {\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tfor (size_t i = 0; i < ngroups; i++) {\n\t\tdst[i] = ~src[i];\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_FB_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/fxp.h",
    "content": "#ifndef JEMALLOC_INTERNAL_FXP_H\n#define JEMALLOC_INTERNAL_FXP_H\n\n/*\n * A simple fixed-point math implementation, supporting only unsigned values\n * (with overflow being an error).\n *\n * It's not in general safe to use floating point in core code, because various\n * libc implementations we get linked against can assume that malloc won't touch\n * floating point state and call it with an unusual calling convention.\n */\n\n/*\n * High 16 bits are the integer part, low 16 are the fractional part.  Or\n * equivalently, repr == 2**16 * val, where we use \"val\" to refer to the\n * (imaginary) fractional representation of the true value.\n *\n * We pick a uint32_t here since it's convenient in some places to\n * double the representation size (i.e. multiplication and division use\n * 64-bit integer types), and a uint64_t is the largest type we're\n * certain is available.\n */\ntypedef uint32_t fxp_t;\n#define FXP_INIT_INT(x) ((x) << 16)\n#define FXP_INIT_PERCENT(pct) (((pct) << 16) / 100)\n\n/*\n * Amount of precision used in parsing and printing numbers.  The integer bound\n * is simply because the integer part of the number gets 16 bits, and so is\n * bounded by 65536.\n *\n * We use a lot of precision for the fractional part, even though most of it\n * gets rounded off; this lets us get exact values for the important special\n * case where the denominator is a small power of 2 (for instance,\n * 1/512 == 0.001953125 is exactly representable even with only 16 bits of\n * fractional precision).  
We need to left-shift by 16 before dividing by\n * 10**precision, so we pick precision to be floor(log(2**48)) = 14.\n */\n#define FXP_INTEGER_PART_DIGITS 5\n#define FXP_FRACTIONAL_PART_DIGITS 14\n\n/*\n * In addition to the integer and fractional parts of the number, we need to\n * include a null character and (possibly) a decimal point.\n */\n#define FXP_BUF_SIZE (FXP_INTEGER_PART_DIGITS + FXP_FRACTIONAL_PART_DIGITS + 2)\n\nstatic inline fxp_t\nfxp_add(fxp_t a, fxp_t b) {\n\treturn a + b;\n}\n\nstatic inline fxp_t\nfxp_sub(fxp_t a, fxp_t b) {\n\tassert(a >= b);\n\treturn a - b;\n}\n\nstatic inline fxp_t\nfxp_mul(fxp_t a, fxp_t b) {\n\tuint64_t unshifted = (uint64_t)a * (uint64_t)b;\n\t/*\n\t * Unshifted is (a.val * 2**16) * (b.val * 2**16)\n\t *   == (a.val * b.val) * 2**32, but we want\n\t * (a.val * b.val) * 2 ** 16.\n\t */\n\treturn (uint32_t)(unshifted >> 16);\n}\n\nstatic inline fxp_t\nfxp_div(fxp_t a, fxp_t b) {\n\tassert(b != 0);\n\tuint64_t unshifted = ((uint64_t)a << 32) / (uint64_t)b;\n\t/*\n\t * Unshifted is (a.val * 2**16) * (2**32) / (b.val * 2**16)\n\t *   == (a.val / b.val) * (2 ** 32), which again corresponds to a right\n\t *   shift of 16.\n\t */\n\treturn (uint32_t)(unshifted >> 16);\n}\n\nstatic inline uint32_t\nfxp_round_down(fxp_t a) {\n\treturn a >> 16;\n}\n\nstatic inline uint32_t\nfxp_round_nearest(fxp_t a) {\n\tuint32_t fractional_part = (a  & ((1U << 16) - 1));\n\tuint32_t increment = (uint32_t)(fractional_part >= (1U << 15));\n\treturn (a >> 16) + increment;\n}\n\n/*\n * Approximately computes x * frac, without the size limitations that would be\n * imposed by converting u to an fxp_t.\n */\nstatic inline size_t\nfxp_mul_frac(size_t x_orig, fxp_t frac) {\n\tassert(frac <= (1U << 16));\n\t/*\n\t * Work around an over-enthusiastic warning about type limits below (on\n\t * 32-bit platforms, a size_t is always less than 1ULL << 48).\n\t */\n\tuint64_t x = (uint64_t)x_orig;\n\t/*\n\t * If we can guarantee no overflow, multiply first before 
shifting, to\n\t * preserve some precision.  Otherwise, shift first and then multiply.\n\t * In the latter case, we only lose the low 16 bits of a 48-bit number,\n\t * so we're still accurate to within 1/2**32.\n\t */\n\tif (x < (1ULL << 48)) {\n\t\treturn (size_t)((x * frac) >> 16);\n\t} else {\n\t\treturn (size_t)((x >> 16) * (uint64_t)frac);\n\t}\n}\n\n/*\n * Returns true on error.  Otherwise, returns false and updates *ptr to point to\n * the first character not parsed (because it wasn't a digit).\n */\nbool fxp_parse(fxp_t *a, const char *ptr, char **end);\nvoid fxp_print(fxp_t a, char buf[FXP_BUF_SIZE]);\n\n#endif /* JEMALLOC_INTERNAL_FXP_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hash.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HASH_H\n#define JEMALLOC_INTERNAL_HASH_H\n\n#include \"jemalloc/internal/assert.h\"\n\n/*\n * The following hash function is based on MurmurHash3, placed into the public\n * domain by Austin Appleby.  See https://github.com/aappleby/smhasher for\n * details.\n */\n\n/******************************************************************************/\n/* Internal implementation. */\nstatic inline uint32_t\nhash_rotl_32(uint32_t x, int8_t r) {\n\treturn ((x << r) | (x >> (32 - r)));\n}\n\nstatic inline uint64_t\nhash_rotl_64(uint64_t x, int8_t r) {\n\treturn ((x << r) | (x >> (64 - r)));\n}\n\nstatic inline uint32_t\nhash_get_block_32(const uint32_t *p, int i) {\n\t/* Handle unaligned read. */\n\tif (unlikely((uintptr_t)p & (sizeof(uint32_t)-1)) != 0) {\n\t\tuint32_t ret;\n\n\t\tmemcpy(&ret, (uint8_t *)(p + i), sizeof(uint32_t));\n\t\treturn ret;\n\t}\n\n\treturn p[i];\n}\n\nstatic inline uint64_t\nhash_get_block_64(const uint64_t *p, int i) {\n\t/* Handle unaligned read. 
*/\n\tif (unlikely((uintptr_t)p & (sizeof(uint64_t)-1)) != 0) {\n\t\tuint64_t ret;\n\n\t\tmemcpy(&ret, (uint8_t *)(p + i), sizeof(uint64_t));\n\t\treturn ret;\n\t}\n\n\treturn p[i];\n}\n\nstatic inline uint32_t\nhash_fmix_32(uint32_t h) {\n\th ^= h >> 16;\n\th *= 0x85ebca6b;\n\th ^= h >> 13;\n\th *= 0xc2b2ae35;\n\th ^= h >> 16;\n\n\treturn h;\n}\n\nstatic inline uint64_t\nhash_fmix_64(uint64_t k) {\n\tk ^= k >> 33;\n\tk *= KQU(0xff51afd7ed558ccd);\n\tk ^= k >> 33;\n\tk *= KQU(0xc4ceb9fe1a85ec53);\n\tk ^= k >> 33;\n\n\treturn k;\n}\n\nstatic inline uint32_t\nhash_x86_32(const void *key, int len, uint32_t seed) {\n\tconst uint8_t *data = (const uint8_t *) key;\n\tconst int nblocks = len / 4;\n\n\tuint32_t h1 = seed;\n\n\tconst uint32_t c1 = 0xcc9e2d51;\n\tconst uint32_t c2 = 0x1b873593;\n\n\t/* body */\n\t{\n\t\tconst uint32_t *blocks = (const uint32_t *) (data + nblocks*4);\n\t\tint i;\n\n\t\tfor (i = -nblocks; i; i++) {\n\t\t\tuint32_t k1 = hash_get_block_32(blocks, i);\n\n\t\t\tk1 *= c1;\n\t\t\tk1 = hash_rotl_32(k1, 15);\n\t\t\tk1 *= c2;\n\n\t\t\th1 ^= k1;\n\t\t\th1 = hash_rotl_32(h1, 13);\n\t\t\th1 = h1*5 + 0xe6546b64;\n\t\t}\n\t}\n\n\t/* tail */\n\t{\n\t\tconst uint8_t *tail = (const uint8_t *) (data + nblocks*4);\n\n\t\tuint32_t k1 = 0;\n\n\t\tswitch (len & 3) {\n\t\tcase 3: k1 ^= tail[2] << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase 2: k1 ^= tail[1] << 8; JEMALLOC_FALLTHROUGH;\n\t\tcase 1: k1 ^= tail[0]; k1 *= c1; k1 = hash_rotl_32(k1, 15);\n\t\t\tk1 *= c2; h1 ^= k1;\n\t\t}\n\t}\n\n\t/* finalization */\n\th1 ^= len;\n\n\th1 = hash_fmix_32(h1);\n\n\treturn h1;\n}\n\nstatic inline void\nhash_x86_128(const void *key, const int len, uint32_t seed,\n    uint64_t r_out[2]) {\n\tconst uint8_t * data = (const uint8_t *) key;\n\tconst int nblocks = len / 16;\n\n\tuint32_t h1 = seed;\n\tuint32_t h2 = seed;\n\tuint32_t h3 = seed;\n\tuint32_t h4 = seed;\n\n\tconst uint32_t c1 = 0x239b961b;\n\tconst uint32_t c2 = 0xab0e9789;\n\tconst uint32_t c3 = 0x38b34ae5;\n\tconst uint32_t c4 
= 0xa1e38b93;\n\n\t/* body */\n\t{\n\t\tconst uint32_t *blocks = (const uint32_t *) (data + nblocks*16);\n\t\tint i;\n\n\t\tfor (i = -nblocks; i; i++) {\n\t\t\tuint32_t k1 = hash_get_block_32(blocks, i*4 + 0);\n\t\t\tuint32_t k2 = hash_get_block_32(blocks, i*4 + 1);\n\t\t\tuint32_t k3 = hash_get_block_32(blocks, i*4 + 2);\n\t\t\tuint32_t k4 = hash_get_block_32(blocks, i*4 + 3);\n\n\t\t\tk1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1;\n\n\t\t\th1 = hash_rotl_32(h1, 19); h1 += h2;\n\t\t\th1 = h1*5 + 0x561ccd1b;\n\n\t\t\tk2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 ^= k2;\n\n\t\t\th2 = hash_rotl_32(h2, 17); h2 += h3;\n\t\t\th2 = h2*5 + 0x0bcaa747;\n\n\t\t\tk3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3;\n\n\t\t\th3 = hash_rotl_32(h3, 15); h3 += h4;\n\t\t\th3 = h3*5 + 0x96cd1c35;\n\n\t\t\tk4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4;\n\n\t\t\th4 = hash_rotl_32(h4, 13); h4 += h1;\n\t\t\th4 = h4*5 + 0x32ac3b17;\n\t\t}\n\t}\n\n\t/* tail */\n\t{\n\t\tconst uint8_t *tail = (const uint8_t *) (data + nblocks*16);\n\t\tuint32_t k1 = 0;\n\t\tuint32_t k2 = 0;\n\t\tuint32_t k3 = 0;\n\t\tuint32_t k4 = 0;\n\n\t\tswitch (len & 15) {\n\t\tcase 15: k4 ^= tail[14] << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase 14: k4 ^= tail[13] << 8; JEMALLOC_FALLTHROUGH;\n\t\tcase 13: k4 ^= tail[12] << 0;\n\t\t\tk4 *= c4; k4 = hash_rotl_32(k4, 18); k4 *= c1; h4 ^= k4;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase 12: k3 ^= (uint32_t) tail[11] << 24; JEMALLOC_FALLTHROUGH;\n\t\tcase 11: k3 ^= tail[10] << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase 10: k3 ^= tail[ 9] << 8; JEMALLOC_FALLTHROUGH;\n\t\tcase  9: k3 ^= tail[ 8] << 0;\n\t\t\tk3 *= c3; k3 = hash_rotl_32(k3, 17); k3 *= c4; h3 ^= k3;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase  8: k2 ^= (uint32_t) tail[ 7] << 24; JEMALLOC_FALLTHROUGH;\n\t\tcase  7: k2 ^= tail[ 6] << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase  6: k2 ^= tail[ 5] << 8; JEMALLOC_FALLTHROUGH;\n\t\tcase  5: k2 ^= tail[ 4] << 0;\n\t\t\tk2 *= c2; k2 = hash_rotl_32(k2, 16); k2 *= c3; h2 
^= k2;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase  4: k1 ^= (uint32_t) tail[ 3] << 24; JEMALLOC_FALLTHROUGH;\n\t\tcase  3: k1 ^= tail[ 2] << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase  2: k1 ^= tail[ 1] << 8; JEMALLOC_FALLTHROUGH;\n\t\tcase  1: k1 ^= tail[ 0] << 0;\n\t\t\tk1 *= c1; k1 = hash_rotl_32(k1, 15); k1 *= c2; h1 ^= k1;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t/* finalization */\n\th1 ^= len; h2 ^= len; h3 ^= len; h4 ^= len;\n\n\th1 += h2; h1 += h3; h1 += h4;\n\th2 += h1; h3 += h1; h4 += h1;\n\n\th1 = hash_fmix_32(h1);\n\th2 = hash_fmix_32(h2);\n\th3 = hash_fmix_32(h3);\n\th4 = hash_fmix_32(h4);\n\n\th1 += h2; h1 += h3; h1 += h4;\n\th2 += h1; h3 += h1; h4 += h1;\n\n\tr_out[0] = (((uint64_t) h2) << 32) | h1;\n\tr_out[1] = (((uint64_t) h4) << 32) | h3;\n}\n\nstatic inline void\nhash_x64_128(const void *key, const int len, const uint32_t seed,\n    uint64_t r_out[2]) {\n\tconst uint8_t *data = (const uint8_t *) key;\n\tconst int nblocks = len / 16;\n\n\tuint64_t h1 = seed;\n\tuint64_t h2 = seed;\n\n\tconst uint64_t c1 = KQU(0x87c37b91114253d5);\n\tconst uint64_t c2 = KQU(0x4cf5ad432745937f);\n\n\t/* body */\n\t{\n\t\tconst uint64_t *blocks = (const uint64_t *) (data);\n\t\tint i;\n\n\t\tfor (i = 0; i < nblocks; i++) {\n\t\t\tuint64_t k1 = hash_get_block_64(blocks, i*2 + 0);\n\t\t\tuint64_t k2 = hash_get_block_64(blocks, i*2 + 1);\n\n\t\t\tk1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1;\n\n\t\t\th1 = hash_rotl_64(h1, 27); h1 += h2;\n\t\t\th1 = h1*5 + 0x52dce729;\n\n\t\t\tk2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2;\n\n\t\t\th2 = hash_rotl_64(h2, 31); h2 += h1;\n\t\t\th2 = h2*5 + 0x38495ab5;\n\t\t}\n\t}\n\n\t/* tail */\n\t{\n\t\tconst uint8_t *tail = (const uint8_t*)(data + nblocks*16);\n\t\tuint64_t k1 = 0;\n\t\tuint64_t k2 = 0;\n\n\t\tswitch (len & 15) {\n\t\tcase 15: k2 ^= ((uint64_t)(tail[14])) << 48; JEMALLOC_FALLTHROUGH;\n\t\tcase 14: k2 ^= ((uint64_t)(tail[13])) << 40; JEMALLOC_FALLTHROUGH;\n\t\tcase 13: k2 ^= ((uint64_t)(tail[12])) << 32; 
JEMALLOC_FALLTHROUGH;\n\t\tcase 12: k2 ^= ((uint64_t)(tail[11])) << 24; JEMALLOC_FALLTHROUGH;\n\t\tcase 11: k2 ^= ((uint64_t)(tail[10])) << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase 10: k2 ^= ((uint64_t)(tail[ 9])) << 8;  JEMALLOC_FALLTHROUGH;\n\t\tcase  9: k2 ^= ((uint64_t)(tail[ 8])) << 0;\n\t\t\tk2 *= c2; k2 = hash_rotl_64(k2, 33); k2 *= c1; h2 ^= k2;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase  8: k1 ^= ((uint64_t)(tail[ 7])) << 56; JEMALLOC_FALLTHROUGH;\n\t\tcase  7: k1 ^= ((uint64_t)(tail[ 6])) << 48; JEMALLOC_FALLTHROUGH;\n\t\tcase  6: k1 ^= ((uint64_t)(tail[ 5])) << 40; JEMALLOC_FALLTHROUGH;\n\t\tcase  5: k1 ^= ((uint64_t)(tail[ 4])) << 32; JEMALLOC_FALLTHROUGH;\n\t\tcase  4: k1 ^= ((uint64_t)(tail[ 3])) << 24; JEMALLOC_FALLTHROUGH;\n\t\tcase  3: k1 ^= ((uint64_t)(tail[ 2])) << 16; JEMALLOC_FALLTHROUGH;\n\t\tcase  2: k1 ^= ((uint64_t)(tail[ 1])) << 8;  JEMALLOC_FALLTHROUGH;\n\t\tcase  1: k1 ^= ((uint64_t)(tail[ 0])) << 0;\n\t\t\tk1 *= c1; k1 = hash_rotl_64(k1, 31); k1 *= c2; h1 ^= k1;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t/* finalization */\n\th1 ^= len; h2 ^= len;\n\n\th1 += h2;\n\th2 += h1;\n\n\th1 = hash_fmix_64(h1);\n\th2 = hash_fmix_64(h2);\n\n\th1 += h2;\n\th2 += h1;\n\n\tr_out[0] = h1;\n\tr_out[1] = h2;\n}\n\n/******************************************************************************/\n/* API. */\nstatic inline void\nhash(const void *key, size_t len, const uint32_t seed, size_t r_hash[2]) {\n\tassert(len <= INT_MAX); /* Unfortunate implementation limitation. */\n\n#if (LG_SIZEOF_PTR == 3 && !defined(JEMALLOC_BIG_ENDIAN))\n\thash_x64_128(key, (int)len, seed, (uint64_t *)r_hash);\n#else\n\t{\n\t\tuint64_t hashes[2];\n\t\thash_x86_128(key, (int)len, seed, hashes);\n\t\tr_hash[0] = (size_t)hashes[0];\n\t\tr_hash[1] = (size_t)hashes[1];\n\t}\n#endif\n}\n\n#endif /* JEMALLOC_INTERNAL_HASH_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hook.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HOOK_H\n#define JEMALLOC_INTERNAL_HOOK_H\n\n#include \"jemalloc/internal/tsd.h\"\n\n/*\n * This API is *extremely* experimental, and may get ripped out, changed in API-\n * and ABI-incompatible ways, be insufficiently or incorrectly documented, etc.\n *\n * It allows hooking the stateful parts of the API to see changes as they\n * happen.\n *\n * Allocation hooks are called after the allocation is done, free hooks are\n * called before the free is done, and expand hooks are called after the\n * allocation is expanded.\n *\n * For realloc and rallocx, if the expansion happens in place, the expansion\n * hook is called.  If it is moved, then the alloc hook is called on the new\n * location, and then the free hook is called on the old location (i.e. both\n * hooks are invoked in between the alloc and the dalloc).\n *\n * If we return NULL from OOM, then usize might not be trustworthy.  Calling\n * realloc(NULL, size) only calls the alloc hook, and calling realloc(ptr, 0)\n * only calls the free hook.  (Calling realloc(NULL, 0) is treated as malloc(0),\n * and only calls the alloc hook).\n *\n * Reentrancy:\n *   Reentrancy is guarded against from within the hook implementation.  If you\n *   call allocator functions from within a hook, the hooks will not be invoked\n *   again.\n * Threading:\n *   The installation of a hook synchronizes with all its uses.  If you can\n *   prove the installation of a hook happens-before a jemalloc entry point,\n *   then the hook will get invoked (unless there's a racing removal).\n *\n *   Hook insertion appears to be atomic at a per-thread level (i.e. if a thread\n *   allocates and has the alloc hook invoked, then a subsequent free on the\n *   same thread will also have the free hook invoked).\n *\n *   The *removal* of a hook does *not* block until all threads are done with\n *   the hook.  
Hook authors have to be resilient to this, and need some\n *   out-of-band mechanism for cleaning up any dynamically allocated memory\n *   associated with their hook.\n * Ordering:\n *   Order of hook execution is unspecified, and may be different than insertion\n *   order.\n */\n\n#define HOOK_MAX 4\n\nenum hook_alloc_e {\n\thook_alloc_malloc,\n\thook_alloc_posix_memalign,\n\thook_alloc_aligned_alloc,\n\thook_alloc_calloc,\n\thook_alloc_memalign,\n\thook_alloc_valloc,\n\thook_alloc_mallocx,\n\n\t/* The reallocating functions have both alloc and dalloc variants */\n\thook_alloc_realloc,\n\thook_alloc_rallocx,\n};\n/*\n * We put the enum typedef after the enum, since this file may get included by\n * jemalloc_cpp.cpp, and C++ disallows enum forward declarations.\n */\ntypedef enum hook_alloc_e hook_alloc_t;\n\nenum hook_dalloc_e {\n\thook_dalloc_free,\n\thook_dalloc_dallocx,\n\thook_dalloc_sdallocx,\n\n\t/*\n\t * The dalloc halves of reallocation (not called if in-place expansion\n\t * happens).\n\t */\n\thook_dalloc_realloc,\n\thook_dalloc_rallocx,\n};\ntypedef enum hook_dalloc_e hook_dalloc_t;\n\n\nenum hook_expand_e {\n\thook_expand_realloc,\n\thook_expand_rallocx,\n\thook_expand_xallocx,\n};\ntypedef enum hook_expand_e hook_expand_t;\n\ntypedef void (*hook_alloc)(\n    void *extra, hook_alloc_t type, void *result, uintptr_t result_raw,\n    uintptr_t args_raw[3]);\n\ntypedef void (*hook_dalloc)(\n    void *extra, hook_dalloc_t type, void *address, uintptr_t args_raw[3]);\n\ntypedef void (*hook_expand)(\n    void *extra, hook_expand_t type, void *address, size_t old_usize,\n    size_t new_usize, uintptr_t result_raw, uintptr_t args_raw[4]);\n\ntypedef struct hooks_s hooks_t;\nstruct hooks_s {\n\thook_alloc alloc_hook;\n\thook_dalloc dalloc_hook;\n\thook_expand expand_hook;\n\tvoid *extra;\n};\n\n/*\n * Begin implementation details; everything above this point might one day live\n * in a public API.  
Everything below this point never will.\n */\n\n/*\n * The realloc pathways haven't gotten any refactoring love in a while, and it's\n * fairly difficult to pass information from the entry point to the hooks.  We\n * put the information the hooks will need into a struct to encapsulate\n * everything.\n *\n * Many of these pathways are force-inlined, so that the compiler can avoid\n * materializing this struct until we hit an extern arena function.  For fairly\n * goofy reasons, *many* of the realloc paths hit an extern arena function.\n * These paths are cold enough that it doesn't matter; eventually, we should\n * rewrite the realloc code to make the expand-in-place and the\n * free-then-realloc paths more orthogonal, at which point we don't need to\n * spread the hook logic all over the place.\n */\ntypedef struct hook_ralloc_args_s hook_ralloc_args_t;\nstruct hook_ralloc_args_s {\n\t/* I.e. as opposed to rallocx. */\n\tbool is_realloc;\n\t/*\n\t * The expand hook takes 4 arguments, even if only 3 are actually used;\n\t * we add an extra one in case the user decides to memcpy without\n\t * looking too closely at the hooked function.\n\t */\n\tuintptr_t args[4];\n};\n\nbool hook_boot();\n\n/*\n * Returns an opaque handle to be used when removing the hook.  NULL means that\n * we couldn't install the hook.\n */\nvoid *hook_install(tsdn_t *tsdn, hooks_t *hooks);\n/* Uninstalls the hook with the handle previously returned from hook_install. */\nvoid hook_remove(tsdn_t *tsdn, void *opaque);\n\n/* Hooks */\n\nvoid hook_invoke_alloc(hook_alloc_t type, void *result, uintptr_t result_raw,\n    uintptr_t args_raw[3]);\n\nvoid hook_invoke_dalloc(hook_dalloc_t type, void *address,\n    uintptr_t args_raw[3]);\n\nvoid hook_invoke_expand(hook_expand_t type, void *address, size_t old_usize,\n    size_t new_usize, uintptr_t result_raw, uintptr_t args_raw[4]);\n\n#endif /* JEMALLOC_INTERNAL_HOOK_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hpa.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HPA_H\n#define JEMALLOC_INTERNAL_HPA_H\n\n#include \"jemalloc/internal/exp_grow.h\"\n#include \"jemalloc/internal/hpa_hooks.h\"\n#include \"jemalloc/internal/hpa_opts.h\"\n#include \"jemalloc/internal/pai.h\"\n#include \"jemalloc/internal/psset.h\"\n\ntypedef struct hpa_central_s hpa_central_t;\nstruct hpa_central_s {\n\t/*\n\t * The mutex guarding most of the operations on the central data\n\t * structure.\n\t */\n\tmalloc_mutex_t mtx;\n\t/*\n\t * Guards expansion of eden.  We separate this from the regular mutex so\n\t * that cheaper operations can still continue while we're doing the OS\n\t * call.\n\t */\n\tmalloc_mutex_t grow_mtx;\n\t/*\n\t * Either NULL (if empty), or some integer multiple of a\n\t * hugepage-aligned number of hugepages.  We carve them off one at a\n\t * time to satisfy new pageslab requests.\n\t *\n\t * Guarded by grow_mtx.\n\t */\n\tvoid *eden;\n\tsize_t eden_len;\n\t/* Source for metadata. */\n\tbase_t *base;\n\t/* Number of grow operations done on this hpa_central_t. */\n\tuint64_t age_counter;\n\n\t/* The HPA hooks. */\n\thpa_hooks_t hooks;\n};\n\ntypedef struct hpa_shard_nonderived_stats_s hpa_shard_nonderived_stats_t;\nstruct hpa_shard_nonderived_stats_s {\n\t/*\n\t * The number of times we've purged within a hugepage.\n\t *\n\t * Guarded by mtx.\n\t */\n\tuint64_t npurge_passes;\n\t/*\n\t * The number of individual purge calls we perform (which should always\n\t * be bigger than npurge_passes, since each pass purges at least one\n\t * extent within a hugepage).\n\t *\n\t * Guarded by mtx.\n\t */\n\tuint64_t npurges;\n\n\t/*\n\t * The number of times we've hugified a pageslab.\n\t *\n\t * Guarded by mtx.\n\t */\n\tuint64_t nhugifies;\n\t/*\n\t * The number of times we've dehugified a pageslab.\n\t *\n\t * Guarded by mtx.\n\t */\n\tuint64_t ndehugifies;\n};\n\n/* Completely derived; only used by CTL. 
*/\ntypedef struct hpa_shard_stats_s hpa_shard_stats_t;\nstruct hpa_shard_stats_s {\n\tpsset_stats_t psset_stats;\n\thpa_shard_nonderived_stats_t nonderived_stats;\n};\n\ntypedef struct hpa_shard_s hpa_shard_t;\nstruct hpa_shard_s {\n\t/*\n\t * pai must be the first member; we cast from a pointer to it to a\n\t * pointer to the hpa_shard_t.\n\t */\n\tpai_t pai;\n\n\t/* The central allocator we get our hugepages from. */\n\thpa_central_t *central;\n\t/* Protects most of this shard's state. */\n\tmalloc_mutex_t mtx;\n\t/*\n\t * Guards the shard's access to the central allocator (preventing\n\t * multiple threads operating on this shard from accessing the central\n\t * allocator).\n\t */\n\tmalloc_mutex_t grow_mtx;\n\t/* The base metadata allocator. */\n\tbase_t *base;\n\n\t/*\n\t * This edata cache is the one we use when allocating a small extent\n\t * from a pageslab.  The pageslab itself comes from the centralized\n\t * allocator, and so will use its edata_cache.\n\t */\n\tedata_cache_fast_t ecf;\n\n\tpsset_t psset;\n\n\t/*\n\t * How many grow operations have occurred.\n\t *\n\t * Guarded by grow_mtx.\n\t */\n\tuint64_t age_counter;\n\n\t/* The arena ind we're associated with. */\n\tunsigned ind;\n\n\t/*\n\t * Our emap.  This is just a cache of the emap pointer in the associated\n\t * hpa_central.\n\t */\n\temap_t *emap;\n\n\t/* The configuration choices for this hpa shard. */\n\thpa_shard_opts_t opts;\n\n\t/*\n\t * How many pages have we started but not yet finished purging in this\n\t * hpa shard.\n\t */\n\tsize_t npending_purge;\n\n\t/*\n\t * Those stats which are copied directly into the CTL-centric hpa shard\n\t * stats.\n\t */\n\thpa_shard_nonderived_stats_t stats;\n\n\t/*\n\t * Last time we performed purge on this shard.\n\t */\n\tnstime_t last_purge;\n};\n\n/*\n * Whether or not the HPA can be used given the current configuration.  
This is\n * not necessarily a guarantee that it backs its allocations by hugepages,\n * just that it can function properly given the system it's running on.\n */\nbool hpa_supported();\nbool hpa_central_init(hpa_central_t *central, base_t *base, const hpa_hooks_t *hooks);\nbool hpa_shard_init(hpa_shard_t *shard, hpa_central_t *central, emap_t *emap,\n    base_t *base, edata_cache_t *edata_cache, unsigned ind,\n    const hpa_shard_opts_t *opts);\n\nvoid hpa_shard_stats_accum(hpa_shard_stats_t *dst, hpa_shard_stats_t *src);\nvoid hpa_shard_stats_merge(tsdn_t *tsdn, hpa_shard_t *shard,\n    hpa_shard_stats_t *dst);\n\n/*\n * Notify the shard that we won't use it for allocations much longer.  Due to\n * the possibility of races, we don't actually prevent allocations; just flush\n * and disable the embedded edata_cache_fast.\n */\nvoid hpa_shard_disable(tsdn_t *tsdn, hpa_shard_t *shard);\nvoid hpa_shard_destroy(tsdn_t *tsdn, hpa_shard_t *shard);\n\nvoid hpa_shard_set_deferral_allowed(tsdn_t *tsdn, hpa_shard_t *shard,\n    bool deferral_allowed);\nvoid hpa_shard_do_deferred_work(tsdn_t *tsdn, hpa_shard_t *shard);\n\n/*\n * We share the fork ordering with the PA and arena prefork handling; that's why\n * these are 3 and 4 rather than 0 and 1.\n */\nvoid hpa_shard_prefork3(tsdn_t *tsdn, hpa_shard_t *shard);\nvoid hpa_shard_prefork4(tsdn_t *tsdn, hpa_shard_t *shard);\nvoid hpa_shard_postfork_parent(tsdn_t *tsdn, hpa_shard_t *shard);\nvoid hpa_shard_postfork_child(tsdn_t *tsdn, hpa_shard_t *shard);\n\n#endif /* JEMALLOC_INTERNAL_HPA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hpa_hooks.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HPA_HOOKS_H\n#define JEMALLOC_INTERNAL_HPA_HOOKS_H\n\ntypedef struct hpa_hooks_s hpa_hooks_t;\nstruct hpa_hooks_s {\n\tvoid *(*map)(size_t size);\n\tvoid (*unmap)(void *ptr, size_t size);\n\tvoid (*purge)(void *ptr, size_t size);\n\tvoid (*hugify)(void *ptr, size_t size);\n\tvoid (*dehugify)(void *ptr, size_t size);\n\tvoid (*curtime)(nstime_t *r_time, bool first_reading);\n\tuint64_t (*ms_since)(nstime_t *r_time);\n};\n\nextern hpa_hooks_t hpa_hooks_default;\n\n#endif /* JEMALLOC_INTERNAL_HPA_HOOKS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hpa_opts.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HPA_OPTS_H\n#define JEMALLOC_INTERNAL_HPA_OPTS_H\n\n#include \"jemalloc/internal/fxp.h\"\n\n/*\n * This file is morally part of hpa.h, but is split out for header-ordering\n * reasons.\n */\n\ntypedef struct hpa_shard_opts_s hpa_shard_opts_t;\nstruct hpa_shard_opts_s {\n\t/*\n\t * The largest size we'll allocate out of the shard.  For those\n\t * allocations refused, the caller (in practice, the PA module) will\n\t * fall back to the more general (for now) PAC, which can always handle\n\t * any allocation request.\n\t */\n\tsize_t slab_max_alloc;\n\n\t/*\n\t * When the number of active bytes in a hugepage is >=\n\t * hugification_threshold, we force hugify it.\n\t */\n\tsize_t hugification_threshold;\n\n\t/*\n\t * The HPA purges whenever the number of dirty pages exceeds dirty_mult *\n\t * active_pages.  This may be set to (fxp_t)-1 to disable purging.\n\t */\n\tfxp_t dirty_mult;\n\n\t/*\n\t * Whether or not the PAI methods are allowed to defer work to a\n\t * subsequent hpa_shard_do_deferred_work() call.  Practically, this\n\t * corresponds to background threads being enabled.  
We track this\n\t * ourselves for encapsulation purposes.\n\t */\n\tbool deferral_allowed;\n\n\t/*\n\t * How long a hugepage has to be a hugification candidate before it will\n\t * actually get hugified.\n\t */\n\tuint64_t hugify_delay_ms;\n\n\t/*\n\t * Minimum amount of time between purges.\n\t */\n\tuint64_t min_purge_interval_ms;\n};\n\n#define HPA_SHARD_OPTS_DEFAULT {\t\t\t\t\t\\\n\t/* slab_max_alloc */\t\t\t\t\t\t\\\n\t64 * 1024,\t\t\t\t\t\t\t\\\n\t/* hugification_threshold */\t\t\t\t\t\\\n\tHUGEPAGE * 95 / 100,\t\t\t\t\t\t\\\n\t/* dirty_mult */\t\t\t\t\t\t\\\n\tFXP_INIT_PERCENT(25),\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * deferral_allowed\t\t\t\t\t\t\\\n\t * \t\t\t\t\t\t\t\t\\\n\t * Really, this is always set by the arena during creation\t\\\n\t * or by an hpa_shard_set_deferral_allowed call, so the value\t\\\n\t * we put here doesn't matter.\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tfalse,\t\t\t\t\t\t\t\t\\\n\t/* hugify_delay_ms */\t\t\t\t\t\t\\\n\t10 * 1000,\t\t\t\t\t\t\t\\\n\t/* min_purge_interval_ms */\t\t\t\t\t\\\n\t5 * 1000\t\t\t\t\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_HPA_OPTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/hpdata.h",
    "content": "#ifndef JEMALLOC_INTERNAL_HPDATA_H\n#define JEMALLOC_INTERNAL_HPDATA_H\n\n#include \"jemalloc/internal/fb.h\"\n#include \"jemalloc/internal/ph.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/typed_list.h\"\n\n/*\n * The metadata representation we use for extents in hugepages.  While the PAC\n * uses the edata_t to represent both active and inactive extents, the HP only\n * uses the edata_t for active ones; instead, inactive extent state is tracked\n * within hpdata associated with the enclosing hugepage-sized, hugepage-aligned\n * region of virtual address space.\n *\n * An hpdata need not be \"truly\" backed by a hugepage (which is not necessarily\n * an observable property of any given region of address space).  It's just\n * hugepage-sized and hugepage-aligned; it's *potentially* huge.\n */\ntypedef struct hpdata_s hpdata_t;\nph_structs(hpdata_age_heap, hpdata_t);\nstruct hpdata_s {\n\t/*\n\t * We likewise follow the edata convention of mangling names and forcing\n\t * the use of accessors -- this lets us add some consistency checks on\n\t * access.\n\t */\n\n\t/*\n\t * The address of the hugepage in question.  This can't be named h_addr,\n\t * since that conflicts with a macro defined in Windows headers.\n\t */\n\tvoid *h_address;\n\t/* Its age (measured in psset operations). */\n\tuint64_t h_age;\n\t/* Whether or not we think the hugepage is mapped that way by the OS. */\n\tbool h_huge;\n\n\t/*\n\t * For some properties, we keep parallel sets of bools; h_foo_allowed\n\t * and h_in_psset_foo_container.  This is a decoupling mechanism that\n\t * keeps the hpa (which manages policies) separate from the psset\n\t * (which is the mechanism used to enforce those policies).  
This allows\n\t * all the container management logic to live in one place, without the\n\t * HPA needing to know or care how that happens.\n\t */\n\n\t/*\n\t * Whether or not the hpdata is allowed to be used to serve allocations,\n\t * and whether or not the psset is currently tracking it as such.\n\t */\n\tbool h_alloc_allowed;\n\tbool h_in_psset_alloc_container;\n\n\t/*\n\t * The same, but with purging.  There's no corresponding\n\t * h_in_psset_purge_container, because the psset (currently) always\n\t * removes hpdatas from their containers during updates (to implement\n\t * LRU for purging).\n\t */\n\tbool h_purge_allowed;\n\n\t/* And with hugifying. */\n\tbool h_hugify_allowed;\n\t/* When we became a hugification candidate. */\n\tnstime_t h_time_hugify_allowed;\n\tbool h_in_psset_hugify_container;\n\n\t/* Whether or not a purge or hugify is currently happening. */\n\tbool h_mid_purge;\n\tbool h_mid_hugify;\n\n\t/*\n\t * Whether or not the hpdata is being updated in the psset (i.e. if\n\t * there has been a psset_update_begin call issued without a matching\n\t * psset_update_end call).  Eventually this will expand to other types\n\t * of updates.\n\t */\n\tbool h_updating;\n\n\t/* Whether or not the hpdata is in a psset. */\n\tbool h_in_psset;\n\n\tunion {\n\t\t/* When nonempty (and also nonfull), used by the psset bins. */\n\t\thpdata_age_heap_link_t age_link;\n\t\t/*\n\t\t * When empty (or not corresponding to any hugepage), list\n\t\t * linkage.\n\t\t */\n\t\tql_elm(hpdata_t) ql_link_empty;\n\t};\n\n\t/*\n\t * Linkage for the psset to track candidates for purging and hugifying.\n\t */\n\tql_elm(hpdata_t) ql_link_purge;\n\tql_elm(hpdata_t) ql_link_hugify;\n\n\t/* The length of the largest contiguous sequence of inactive pages. */\n\tsize_t h_longest_free_range;\n\n\t/* Number of active pages. */\n\tsize_t h_nactive;\n\n\t/* A bitmap with bits set in the active pages. 
*/\n\tfb_group_t active_pages[FB_NGROUPS(HUGEPAGE_PAGES)];\n\n\t/*\n\t * Number of dirty or active pages, and a bitmap tracking them.  One\n\t * way to think of this is as which pages are dirty from the OS's\n\t * perspective.\n\t */\n\tsize_t h_ntouched;\n\n\t/* The touched pages (using the same definition as above). */\n\tfb_group_t touched_pages[FB_NGROUPS(HUGEPAGE_PAGES)];\n};\n\nTYPED_LIST(hpdata_empty_list, hpdata_t, ql_link_empty)\nTYPED_LIST(hpdata_purge_list, hpdata_t, ql_link_purge)\nTYPED_LIST(hpdata_hugify_list, hpdata_t, ql_link_hugify)\n\nph_proto(, hpdata_age_heap, hpdata_t);\n\nstatic inline void *\nhpdata_addr_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_address;\n}\n\nstatic inline void\nhpdata_addr_set(hpdata_t *hpdata, void *addr) {\n\tassert(HUGEPAGE_ADDR2BASE(addr) == addr);\n\thpdata->h_address = addr;\n}\n\nstatic inline uint64_t\nhpdata_age_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_age;\n}\n\nstatic inline void\nhpdata_age_set(hpdata_t *hpdata, uint64_t age) {\n\thpdata->h_age = age;\n}\n\nstatic inline bool\nhpdata_huge_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_huge;\n}\n\nstatic inline bool\nhpdata_alloc_allowed_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_alloc_allowed;\n}\n\nstatic inline void\nhpdata_alloc_allowed_set(hpdata_t *hpdata, bool alloc_allowed) {\n\thpdata->h_alloc_allowed = alloc_allowed;\n}\n\nstatic inline bool\nhpdata_in_psset_alloc_container_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_in_psset_alloc_container;\n}\n\nstatic inline void\nhpdata_in_psset_alloc_container_set(hpdata_t *hpdata, bool in_container) {\n\tassert(in_container != hpdata->h_in_psset_alloc_container);\n\thpdata->h_in_psset_alloc_container = in_container;\n}\n\nstatic inline bool\nhpdata_purge_allowed_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_purge_allowed;\n}\n\nstatic inline void\nhpdata_purge_allowed_set(hpdata_t *hpdata, bool purge_allowed) {\n\tassert(purge_allowed == false || !hpdata->h_mid_purge);\n\t
hpdata->h_purge_allowed = purge_allowed;\n}\n\nstatic inline bool\nhpdata_hugify_allowed_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_hugify_allowed;\n}\n\nstatic inline void\nhpdata_allow_hugify(hpdata_t *hpdata, nstime_t now) {\n\tassert(!hpdata->h_mid_hugify);\n\thpdata->h_hugify_allowed = true;\n\thpdata->h_time_hugify_allowed = now;\n}\n\nstatic inline nstime_t\nhpdata_time_hugify_allowed(hpdata_t *hpdata) {\n\treturn hpdata->h_time_hugify_allowed;\n}\n\nstatic inline void\nhpdata_disallow_hugify(hpdata_t *hpdata) {\n\thpdata->h_hugify_allowed = false;\n}\n\nstatic inline bool\nhpdata_in_psset_hugify_container_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_in_psset_hugify_container;\n}\n\nstatic inline void\nhpdata_in_psset_hugify_container_set(hpdata_t *hpdata, bool in_container) {\n\tassert(in_container != hpdata->h_in_psset_hugify_container);\n\thpdata->h_in_psset_hugify_container = in_container;\n}\n\nstatic inline bool\nhpdata_mid_purge_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_mid_purge;\n}\n\nstatic inline void\nhpdata_mid_purge_set(hpdata_t *hpdata, bool mid_purge) {\n\tassert(mid_purge != hpdata->h_mid_purge);\n\thpdata->h_mid_purge = mid_purge;\n}\n\nstatic inline bool\nhpdata_mid_hugify_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_mid_hugify;\n}\n\nstatic inline void\nhpdata_mid_hugify_set(hpdata_t *hpdata, bool mid_hugify) {\n\tassert(mid_hugify != hpdata->h_mid_hugify);\n\thpdata->h_mid_hugify = mid_hugify;\n}\n\nstatic inline bool\nhpdata_changing_state_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_mid_purge || hpdata->h_mid_hugify;\n}\n\nstatic inline bool\nhpdata_updating_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_updating;\n}\n\nstatic inline void\nhpdata_updating_set(hpdata_t *hpdata, bool updating) {\n\tassert(updating != hpdata->h_updating);\n\thpdata->h_updating = updating;\n}\n\nstatic inline bool\nhpdata_in_psset_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_in_psset;\n}\n\nstatic inline 
void\nhpdata_in_psset_set(hpdata_t *hpdata, bool in_psset) {\n\tassert(in_psset != hpdata->h_in_psset);\n\thpdata->h_in_psset = in_psset;\n}\n\nstatic inline size_t\nhpdata_longest_free_range_get(const hpdata_t *hpdata) {\n\treturn hpdata->h_longest_free_range;\n}\n\nstatic inline void\nhpdata_longest_free_range_set(hpdata_t *hpdata, size_t longest_free_range) {\n\tassert(longest_free_range <= HUGEPAGE_PAGES);\n\thpdata->h_longest_free_range = longest_free_range;\n}\n\nstatic inline size_t\nhpdata_nactive_get(hpdata_t *hpdata) {\n\treturn hpdata->h_nactive;\n}\n\nstatic inline size_t\nhpdata_ntouched_get(hpdata_t *hpdata) {\n\treturn hpdata->h_ntouched;\n}\n\nstatic inline size_t\nhpdata_ndirty_get(hpdata_t *hpdata) {\n\treturn hpdata->h_ntouched - hpdata->h_nactive;\n}\n\nstatic inline size_t\nhpdata_nretained_get(hpdata_t *hpdata) {\n\treturn HUGEPAGE_PAGES - hpdata->h_ntouched;\n}\n\nstatic inline void\nhpdata_assert_empty(hpdata_t *hpdata) {\n\tassert(fb_empty(hpdata->active_pages, HUGEPAGE_PAGES));\n\tassert(hpdata->h_nactive == 0);\n}\n\n/*\n * Only used in tests, and in hpdata_assert_consistent, below.  Verifies some\n * consistency properties of the hpdata (e.g. 
that cached counts of page stats\n * match computed ones).\n */\nstatic inline bool\nhpdata_consistent(hpdata_t *hpdata) {\n\tif (fb_urange_longest(hpdata->active_pages, HUGEPAGE_PAGES)\n\t    != hpdata_longest_free_range_get(hpdata)) {\n\t\treturn false;\n\t}\n\tif (fb_scount(hpdata->active_pages, HUGEPAGE_PAGES, 0, HUGEPAGE_PAGES)\n\t    != hpdata->h_nactive) {\n\t\treturn false;\n\t}\n\tif (fb_scount(hpdata->touched_pages, HUGEPAGE_PAGES, 0, HUGEPAGE_PAGES)\n\t    != hpdata->h_ntouched) {\n\t\treturn false;\n\t}\n\tif (hpdata->h_ntouched < hpdata->h_nactive) {\n\t\treturn false;\n\t}\n\tif (hpdata->h_huge && hpdata->h_ntouched != HUGEPAGE_PAGES) {\n\t\treturn false;\n\t}\n\tif (hpdata_changing_state_get(hpdata)\n\t    && ((hpdata->h_purge_allowed) || hpdata->h_hugify_allowed)) {\n\t\treturn false;\n\t}\n\tif (hpdata_hugify_allowed_get(hpdata)\n\t    != hpdata_in_psset_hugify_container_get(hpdata)) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nstatic inline void\nhpdata_assert_consistent(hpdata_t *hpdata) {\n\tassert(hpdata_consistent(hpdata));\n}\n\nstatic inline bool\nhpdata_empty(hpdata_t *hpdata) {\n\treturn hpdata->h_nactive == 0;\n}\n\nstatic inline bool\nhpdata_full(hpdata_t *hpdata) {\n\treturn hpdata->h_nactive == HUGEPAGE_PAGES;\n}\n\nvoid hpdata_init(hpdata_t *hpdata, void *addr, uint64_t age);\n\n/*\n * Given an hpdata which can serve an allocation request, pick and reserve an\n * offset within that allocation.\n */\nvoid *hpdata_reserve_alloc(hpdata_t *hpdata, size_t sz);\nvoid hpdata_unreserve(hpdata_t *hpdata, void *begin, size_t sz);\n\n/*\n * The hpdata_purge_state_t allows grabbing the metadata required to purge\n * subranges of a hugepage while holding a lock, dropping the lock during the\n * actual purging, and reacquiring it to update the metadata again.\n */\ntypedef struct hpdata_purge_state_s hpdata_purge_state_t;\nstruct hpdata_purge_state_s {\n\tsize_t npurged;\n\tsize_t ndirty_to_purge;\n\tfb_group_t 
to_purge[FB_NGROUPS(HUGEPAGE_PAGES)];\n\tsize_t next_purge_search_begin;\n};\n\n/*\n * Initializes purge state.  The access to hpdata must be externally\n * synchronized with other hpdata_* calls.\n *\n * You can tell whether or not a thread is purging or hugifying a given hpdata\n * via hpdata_changing_state_get(hpdata).  Racing hugification or purging\n * operations aren't allowed.\n *\n * Once you begin purging, you have to follow through and call hpdata_purge_next\n * until you're done, and then end.  Allocating out of an hpdata undergoing\n * purging is not allowed.\n *\n * Returns the number of dirty pages that will be purged.\n */\nsize_t hpdata_purge_begin(hpdata_t *hpdata, hpdata_purge_state_t *purge_state);\n\n/*\n * If there are more extents to purge, sets *r_purge_addr and *r_purge_size to\n * the address and size of the next range to purge, and returns true.\n * Otherwise, returns false to indicate that we're done.\n *\n * This requires exclusive access to the purge state, but *not* to the hpdata.\n * In particular, unreserve calls are allowed while purging (i.e. you can dalloc\n * into one part of the hpdata while purging a different part).\n */\nbool hpdata_purge_next(hpdata_t *hpdata, hpdata_purge_state_t *purge_state,\n    void **r_purge_addr, size_t *r_purge_size);\n/*\n * Updates the hpdata metadata after all purging is done.  Needs external\n * synchronization.\n */\nvoid hpdata_purge_end(hpdata_t *hpdata, hpdata_purge_state_t *purge_state);\n\nvoid hpdata_hugify(hpdata_t *hpdata);\nvoid hpdata_dehugify(hpdata_t *hpdata);\n\n#endif /* JEMALLOC_INTERNAL_HPDATA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/inspect.h",
    "content": "#ifndef JEMALLOC_INTERNAL_INSPECT_H\n#define JEMALLOC_INTERNAL_INSPECT_H\n\n/*\n * This module contains the heap introspection capabilities.  For now they are\n * exposed purely through mallctl APIs in the experimental namespace, but this\n * may change over time.\n */\n\n/*\n * The following two structs are for experimental purposes. See\n * experimental_utilization_query_ctl and\n * experimental_utilization_batch_query_ctl in src/ctl.c.\n */\ntypedef struct inspect_extent_util_stats_s inspect_extent_util_stats_t;\nstruct inspect_extent_util_stats_s {\n\tsize_t nfree;\n\tsize_t nregs;\n\tsize_t size;\n};\n\ntypedef struct inspect_extent_util_stats_verbose_s\n    inspect_extent_util_stats_verbose_t;\n\nstruct inspect_extent_util_stats_verbose_s {\n\tvoid *slabcur_addr;\n\tsize_t nfree;\n\tsize_t nregs;\n\tsize_t size;\n\tsize_t bin_nfree;\n\tsize_t bin_nregs;\n};\n\nvoid inspect_extent_util_stats_get(tsdn_t *tsdn, const void *ptr,\n    size_t *nfree, size_t *nregs, size_t *size);\nvoid inspect_extent_util_stats_verbose_get(tsdn_t *tsdn, const void *ptr,\n    size_t *nfree, size_t *nregs, size_t *size,\n    size_t *bin_nfree, size_t *bin_nregs, void **slabcur_addr);\n\n#endif /* JEMALLOC_INTERNAL_INSPECT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_decls.h",
    "content": "#ifndef JEMALLOC_INTERNAL_DECLS_H\n#define JEMALLOC_INTERNAL_DECLS_H\n\n#include <math.h>\n#ifdef _WIN32\n#  include <windows.h>\n#  include \"msvc_compat/windows_extra.h\"\n#  include \"msvc_compat/strings.h\"\n#  ifdef _WIN64\n#    if LG_VADDR <= 32\n#      error Generate the headers using x64 vcargs\n#    endif\n#  else\n#    if LG_VADDR > 32\n#      undef LG_VADDR\n#      define LG_VADDR 32\n#    endif\n#  endif\n#else\n#  include <sys/param.h>\n#  include <sys/mman.h>\n#  if !defined(__pnacl__) && !defined(__native_client__)\n#    include <sys/syscall.h>\n#    if !defined(SYS_write) && defined(__NR_write)\n#      define SYS_write __NR_write\n#    endif\n#    if defined(SYS_open) && defined(__aarch64__)\n       /* Android headers may define SYS_open to __NR_open even though\n        * __NR_open may not exist on AArch64 (superseded by __NR_openat). */\n#      undef SYS_open\n#    endif\n#    include <sys/uio.h>\n#  endif\n#  include <pthread.h>\n#  if defined(__FreeBSD__) || defined(__DragonFly__)\n#  include <pthread_np.h>\n#  include <sched.h>\n#  if defined(__FreeBSD__)\n#    define cpu_set_t cpuset_t\n#  endif\n#  endif\n#  include <signal.h>\n#  ifdef JEMALLOC_OS_UNFAIR_LOCK\n#    include <os/lock.h>\n#  endif\n#  ifdef JEMALLOC_GLIBC_MALLOC_HOOK\n#    include <sched.h>\n#  endif\n#  include <errno.h>\n#  include <sys/time.h>\n#  include <time.h>\n#  ifdef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME\n#    include <mach/mach_time.h>\n#  endif\n#endif\n#include <sys/types.h>\n\n#include <limits.h>\n#ifndef SIZE_T_MAX\n#  define SIZE_T_MAX\tSIZE_MAX\n#endif\n#ifndef SSIZE_MAX\n#  define SSIZE_MAX\t((ssize_t)(SIZE_T_MAX >> 1))\n#endif\n#include <stdarg.h>\n#include <stdbool.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdint.h>\n#include <stddef.h>\n#ifndef offsetof\n#  define offsetof(type, member)\t((size_t)&(((type *)NULL)->member))\n#endif\n#include <string.h>\n#include <strings.h>\n#include <ctype.h>\n#ifdef _MSC_VER\n#  include 
<io.h>\ntypedef intptr_t ssize_t;\n#  define PATH_MAX 1024\n#  define STDERR_FILENO 2\n#  define __func__ __FUNCTION__\n#  ifdef JEMALLOC_HAS_RESTRICT\n#    define restrict __restrict\n#  endif\n/* Disable warnings about deprecated system functions. */\n#  pragma warning(disable: 4996)\n#if _MSC_VER < 1800\nstatic int\nisblank(int c) {\n\treturn (c == '\\t' || c == ' ');\n}\n#endif\n#else\n#  include <unistd.h>\n#endif\n#include <fcntl.h>\n\n/*\n * The Win32 midl compiler has #define small char; we don't use midl, but\n * \"small\" is a nice identifier to have available when talking about size\n * classes.\n */\n#ifdef small\n#  undef small\n#endif\n\n#endif /* JEMALLOC_INTERNAL_DECLS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_defs.h.in",
    "content": "#ifndef JEMALLOC_INTERNAL_DEFS_H_\n#define JEMALLOC_INTERNAL_DEFS_H_\n/*\n * If JEMALLOC_PREFIX is defined via --with-jemalloc-prefix, it will cause all\n * public APIs to be prefixed.  This makes it possible, with some care, to use\n * multiple allocators simultaneously.\n */\n#undef JEMALLOC_PREFIX\n#undef JEMALLOC_CPREFIX\n\n/*\n * Define overrides for non-standard allocator-related functions if they are\n * present on the system.\n */\n#undef JEMALLOC_OVERRIDE___LIBC_CALLOC\n#undef JEMALLOC_OVERRIDE___LIBC_FREE\n#undef JEMALLOC_OVERRIDE___LIBC_MALLOC\n#undef JEMALLOC_OVERRIDE___LIBC_MEMALIGN\n#undef JEMALLOC_OVERRIDE___LIBC_REALLOC\n#undef JEMALLOC_OVERRIDE___LIBC_VALLOC\n#undef JEMALLOC_OVERRIDE___POSIX_MEMALIGN\n\n/*\n * JEMALLOC_PRIVATE_NAMESPACE is used as a prefix for all library-private APIs.\n * For shared libraries, symbol visibility mechanisms prevent these symbols\n * from being exported, but for static libraries, naming collisions are a real\n * possibility.\n */\n#undef JEMALLOC_PRIVATE_NAMESPACE\n\n/*\n * Hyper-threaded CPUs may need a special instruction inside spin loops in\n * order to yield to another virtual CPU.\n */\n#undef CPU_SPINWAIT\n/* 1 if CPU_SPINWAIT is defined, 0 otherwise. */\n#undef HAVE_CPU_SPINWAIT\n\n/*\n * Number of significant bits in virtual addresses.  This may be less than the\n * total number of bits in a pointer, e.g. on x64, for which the uppermost 16\n * bits are the same as bit 47.\n */\n#undef LG_VADDR\n\n/* Defined if C11 atomics are available. */\n#undef JEMALLOC_C11_ATOMICS\n\n/* Defined if GCC __atomic atomics are available. */\n#undef JEMALLOC_GCC_ATOMIC_ATOMICS\n/* and the 8-bit variant support. */\n#undef JEMALLOC_GCC_U8_ATOMIC_ATOMICS\n\n/* Defined if GCC __sync atomics are available. */\n#undef JEMALLOC_GCC_SYNC_ATOMICS\n/* and the 8-bit variant support. 
*/\n#undef JEMALLOC_GCC_U8_SYNC_ATOMICS\n\n/*\n * Defined if __builtin_clz() and __builtin_clzl() are available.\n */\n#undef JEMALLOC_HAVE_BUILTIN_CLZ\n\n/*\n * Defined if os_unfair_lock_*() functions are available, as provided by Darwin.\n */\n#undef JEMALLOC_OS_UNFAIR_LOCK\n\n/* Defined if syscall(2) is usable. */\n#undef JEMALLOC_USE_SYSCALL\n\n/*\n * Defined if secure_getenv(3) is available.\n */\n#undef JEMALLOC_HAVE_SECURE_GETENV\n\n/*\n * Defined if issetugid(2) is available.\n */\n#undef JEMALLOC_HAVE_ISSETUGID\n\n/* Defined if pthread_atfork(3) is available. */\n#undef JEMALLOC_HAVE_PTHREAD_ATFORK\n\n/* Defined if pthread_setname_np(3) is available. */\n#undef JEMALLOC_HAVE_PTHREAD_SETNAME_NP\n\n/* Defined if pthread_getname_np(3) is available. */\n#undef JEMALLOC_HAVE_PTHREAD_GETNAME_NP\n\n/* Defined if pthread_get_name_np(3) is available. */\n#undef JEMALLOC_HAVE_PTHREAD_GET_NAME_NP\n\n/*\n * Defined if clock_gettime(CLOCK_MONOTONIC_COARSE, ...) is available.\n */\n#undef JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE\n\n/*\n * Defined if clock_gettime(CLOCK_MONOTONIC, ...) is available.\n */\n#undef JEMALLOC_HAVE_CLOCK_MONOTONIC\n\n/*\n * Defined if mach_absolute_time() is available.\n */\n#undef JEMALLOC_HAVE_MACH_ABSOLUTE_TIME\n\n/*\n * Defined if clock_gettime(CLOCK_REALTIME, ...) is available.\n */\n#undef JEMALLOC_HAVE_CLOCK_REALTIME\n\n/*\n * Defined if _malloc_thread_cleanup() exists.  At least in the case of\n * FreeBSD, pthread_key_create() allocates, which if used during malloc\n * bootstrapping will cause recursion into the pthreads library.  
Therefore, if\n * _malloc_thread_cleanup() exists, use it as the basis for thread cleanup in\n * malloc_tsd.\n */\n#undef JEMALLOC_MALLOC_THREAD_CLEANUP\n\n/*\n * Defined if threaded initialization is known to be safe on this platform.\n * Among other things, it must be possible to initialize a mutex without\n * triggering allocation in order for threaded allocation to be safe.\n */\n#undef JEMALLOC_THREADED_INIT\n\n/*\n * Defined if the pthreads implementation defines\n * _pthread_mutex_init_calloc_cb(), in which case the function is used in order\n * to avoid recursive allocation during mutex initialization.\n */\n#undef JEMALLOC_MUTEX_INIT_CB\n\n/* Non-empty if the tls_model attribute is supported. */\n#undef JEMALLOC_TLS_MODEL\n\n/*\n * JEMALLOC_DEBUG enables assertions and other sanity checks, and disables\n * inline functions.\n */\n#undef JEMALLOC_DEBUG\n\n/* JEMALLOC_STATS enables statistics calculation. */\n#undef JEMALLOC_STATS\n\n/* JEMALLOC_EXPERIMENTAL_SMALLOCX_API enables experimental smallocx API. */\n#undef JEMALLOC_EXPERIMENTAL_SMALLOCX_API\n\n/* JEMALLOC_PROF enables allocation profiling. */\n#undef JEMALLOC_PROF\n\n/* Use libunwind for profile backtracing if defined. */\n#undef JEMALLOC_PROF_LIBUNWIND\n\n/* Use libgcc for profile backtracing if defined. */\n#undef JEMALLOC_PROF_LIBGCC\n\n/* Use gcc intrinsics for profile backtracing if defined. */\n#undef JEMALLOC_PROF_GCC\n\n/*\n * JEMALLOC_DSS enables use of sbrk(2) to allocate extents from the data storage\n * segment (DSS).\n */\n#undef JEMALLOC_DSS\n\n/* Support memory filling (junk/zero). */\n#undef JEMALLOC_FILL\n\n/* Support utrace(2)-based tracing. */\n#undef JEMALLOC_UTRACE\n\n/* Support utrace(2)-based tracing (label based signature). */\n#undef JEMALLOC_UTRACE_LABEL\n\n/* Support optional abort() on OOM. */\n#undef JEMALLOC_XMALLOC\n\n/* Support lazy locking (avoid locking unless a second thread is launched). 
*/\n#undef JEMALLOC_LAZY_LOCK\n\n/*\n * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size\n * classes).\n */\n#undef LG_QUANTUM\n\n/* One page is 2^LG_PAGE bytes. */\n#undef LG_PAGE\n\n/* Maximum number of regions in a slab. */\n#undef CONFIG_LG_SLAB_MAXREGS\n\n/*\n * One huge page is 2^LG_HUGEPAGE bytes.  Note that this is defined even if the\n * system does not explicitly support huge pages; system calls that require\n * explicit huge page support are separately configured.\n */\n#undef LG_HUGEPAGE\n\n/*\n * If defined, adjacent virtual memory mappings with identical attributes\n * automatically coalesce, and they fragment when changes are made to subranges.\n * This is the normal order of things for mmap()/munmap(), but on Windows\n * VirtualAlloc()/VirtualFree() operations must be precisely matched, i.e.\n * mappings do *not* coalesce/fragment.\n */\n#undef JEMALLOC_MAPS_COALESCE\n\n/*\n * If defined, retain memory for later reuse by default rather than using e.g.\n * munmap() to unmap freed extents.  This is enabled on 64-bit Linux because\n * common sequences of mmap()/munmap() calls will cause virtual memory map\n * holes.\n */\n#undef JEMALLOC_RETAIN\n\n/* TLS is used to map arenas and magazine caches to threads. */\n#undef JEMALLOC_TLS\n\n/*\n * Used to mark unreachable code to quiet \"end of non-void\" compiler warnings.\n * Don't use this directly; instead use unreachable() from util.h\n */\n#undef JEMALLOC_INTERNAL_UNREACHABLE\n\n/*\n * ffs*() functions to use for bitmapping.  
Don't use these directly; instead,\n * use ffs_*() from util.h.\n */\n#undef JEMALLOC_INTERNAL_FFSLL\n#undef JEMALLOC_INTERNAL_FFSL\n#undef JEMALLOC_INTERNAL_FFS\n\n/*\n * popcount*() functions to use for bitmapping.\n */\n#undef JEMALLOC_INTERNAL_POPCOUNTL\n#undef JEMALLOC_INTERNAL_POPCOUNT\n\n/*\n * If defined, explicitly attempt to more uniformly distribute large allocation\n * pointer alignments across all cache indices.\n */\n#undef JEMALLOC_CACHE_OBLIVIOUS\n\n/*\n * If defined, enable logging facilities.  We make this a configure option to\n * avoid taking extra branches everywhere.\n */\n#undef JEMALLOC_LOG\n\n/*\n * If defined, use readlinkat() (instead of readlink()) to follow\n * /etc/malloc_conf.\n */\n#undef JEMALLOC_READLINKAT\n\n/*\n * Darwin (OS X) uses zones to work around Mach-O symbol override shortcomings.\n */\n#undef JEMALLOC_ZONE\n\n/*\n * Methods for determining whether the OS overcommits.\n * JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY: Linux's\n *                                         /proc/sys/vm.overcommit_memory file.\n * JEMALLOC_SYSCTL_VM_OVERCOMMIT: FreeBSD's vm.overcommit sysctl.\n */\n#undef JEMALLOC_SYSCTL_VM_OVERCOMMIT\n#undef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY\n\n/* Defined if madvise(2) is available. 
*/\n#undef JEMALLOC_HAVE_MADVISE\n\n/*\n * Defined if transparent huge pages are supported via the MADV_[NO]HUGEPAGE\n * arguments to madvise(2).\n */\n#undef JEMALLOC_HAVE_MADVISE_HUGE\n\n/*\n * Methods for purging unused pages differ between operating systems.\n *\n *   madvise(..., MADV_FREE) : This marks pages as being unused, such that they\n *                             will be discarded rather than swapped out.\n *   madvise(..., MADV_DONTNEED) : If JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS is\n *                                 defined, this immediately discards pages,\n *                                 such that new pages will be demand-zeroed if\n *                                 the address region is later touched;\n *                                 otherwise this behaves similarly to\n *                                 MADV_FREE, though typically with higher\n *                                 system overhead.\n */\n#undef JEMALLOC_PURGE_MADVISE_FREE\n#undef JEMALLOC_PURGE_MADVISE_DONTNEED\n#undef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS\n\n/* Defined if madvise(2) is available but MADV_FREE is not (x86 Linux only). */\n#undef JEMALLOC_DEFINE_MADVISE_FREE\n\n/*\n * Defined if MADV_DO[NT]DUMP is supported as an argument to madvise.\n */\n#undef JEMALLOC_MADVISE_DONTDUMP\n\n/*\n * Defined if MADV_[NO]CORE is supported as an argument to madvise.\n */\n#undef JEMALLOC_MADVISE_NOCORE\n\n/* Defined if mprotect(2) is available. */\n#undef JEMALLOC_HAVE_MPROTECT\n\n/*\n * Defined if transparent huge pages (THPs) are supported via the\n * MADV_[NO]HUGEPAGE arguments to madvise(2), and THP support is enabled.\n */\n#undef JEMALLOC_THP\n\n/* Defined if posix_madvise is available. 
*/\n#undef JEMALLOC_HAVE_POSIX_MADVISE\n\n/*\n * Method for purging unused pages using posix_madvise.\n *\n *   posix_madvise(..., POSIX_MADV_DONTNEED)\n */\n#undef JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED\n#undef JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED_ZEROS\n\n/*\n * Defined if memcntl page admin call is supported\n */\n#undef JEMALLOC_HAVE_MEMCNTL\n\n/*\n * Defined if malloc_size is supported\n */\n#undef JEMALLOC_HAVE_MALLOC_SIZE\n\n/* Define if operating system has alloca.h header. */\n#undef JEMALLOC_HAS_ALLOCA_H\n\n/* C99 restrict keyword supported. */\n#undef JEMALLOC_HAS_RESTRICT\n\n/* For use by hash code. */\n#undef JEMALLOC_BIG_ENDIAN\n\n/* sizeof(int) == 2^LG_SIZEOF_INT. */\n#undef LG_SIZEOF_INT\n\n/* sizeof(long) == 2^LG_SIZEOF_LONG. */\n#undef LG_SIZEOF_LONG\n\n/* sizeof(long long) == 2^LG_SIZEOF_LONG_LONG. */\n#undef LG_SIZEOF_LONG_LONG\n\n/* sizeof(intmax_t) == 2^LG_SIZEOF_INTMAX_T. */\n#undef LG_SIZEOF_INTMAX_T\n\n/* glibc malloc hooks (__malloc_hook, __realloc_hook, __free_hook). */\n#undef JEMALLOC_GLIBC_MALLOC_HOOK\n\n/* glibc memalign hook. */\n#undef JEMALLOC_GLIBC_MEMALIGN_HOOK\n\n/* pthread support */\n#undef JEMALLOC_HAVE_PTHREAD\n\n/* dlsym() support */\n#undef JEMALLOC_HAVE_DLSYM\n\n/* Adaptive mutex support in pthreads. */\n#undef JEMALLOC_HAVE_PTHREAD_MUTEX_ADAPTIVE_NP\n\n/* GNU specific sched_getcpu support */\n#undef JEMALLOC_HAVE_SCHED_GETCPU\n\n/* GNU specific sched_setaffinity support */\n#undef JEMALLOC_HAVE_SCHED_SETAFFINITY\n\n/*\n * If defined, all the features necessary for background threads are present.\n */\n#undef JEMALLOC_BACKGROUND_THREAD\n\n/*\n * If defined, jemalloc symbols are not exported (doesn't work when\n * JEMALLOC_PREFIX is not defined).\n */\n#undef JEMALLOC_EXPORT\n\n/* config.malloc_conf options string. */\n#undef JEMALLOC_CONFIG_MALLOC_CONF\n\n/* If defined, jemalloc takes the malloc/free/etc. symbol names. 
*/\n#undef JEMALLOC_IS_MALLOC\n\n/*\n * Defined if strerror_r returns char * if _GNU_SOURCE is defined.\n */\n#undef JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE\n\n/* Performs additional safety checks when defined. */\n#undef JEMALLOC_OPT_SAFETY_CHECKS\n\n/* Is C++ support being built? */\n#undef JEMALLOC_ENABLE_CXX\n\n/* Performs additional size checks when defined. */\n#undef JEMALLOC_OPT_SIZE_CHECKS\n\n/* Allows sampled junk and stash for checking use-after-free when defined. */\n#undef JEMALLOC_UAF_DETECTION\n\n/* Darwin VM_MAKE_TAG support */\n#undef JEMALLOC_HAVE_VM_MAKE_TAG\n\n/* If defined, realloc(ptr, 0) defaults to \"free\" instead of \"alloc\". */\n#undef JEMALLOC_ZERO_REALLOC_DEFAULT_FREE\n\n#endif /* JEMALLOC_INTERNAL_DEFS_H_ */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_EXTERNS_H\n#define JEMALLOC_INTERNAL_EXTERNS_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/hpa_opts.h\"\n#include \"jemalloc/internal/sec_opts.h\"\n#include \"jemalloc/internal/tsd_types.h\"\n#include \"jemalloc/internal/nstime.h\"\n\n/* TSD checks this to set thread local slow state accordingly. */\nextern bool malloc_slow;\n\n/* Run-time options. */\nextern bool opt_abort;\nextern bool opt_abort_conf;\nextern bool opt_trust_madvise;\nextern bool opt_confirm_conf;\nextern bool opt_hpa;\nextern hpa_shard_opts_t opt_hpa_opts;\nextern sec_opts_t opt_hpa_sec_opts;\n\nextern const char *opt_junk;\nextern bool opt_junk_alloc;\nextern bool opt_junk_free;\nextern void (*junk_free_callback)(void *ptr, size_t size);\nextern void (*junk_alloc_callback)(void *ptr, size_t size);\nextern bool opt_utrace;\nextern bool opt_xmalloc;\nextern bool opt_experimental_infallible_new;\nextern bool opt_zero;\nextern unsigned opt_narenas;\nextern zero_realloc_action_t opt_zero_realloc_action;\nextern malloc_init_t malloc_init_state;\nextern const char *zero_realloc_mode_names[];\nextern atomic_zu_t zero_realloc_count;\nextern bool opt_cache_oblivious;\n\n/* Escape free-fastpath when ptr & mask == 0 (for sanitization purpose). */\nextern uintptr_t san_cache_bin_nonfast_mask;\n\n/* Number of CPUs. */\nextern unsigned ncpus;\n\n/* Number of arenas used for automatic multiplexing of threads and arenas. */\nextern unsigned narenas_auto;\n\n/* Base index for manual arenas. */\nextern unsigned manual_arena_base;\n\n/*\n * Arenas that are used to service external requests.  
Not all elements of the\n * arenas array are necessarily used; arenas are created lazily as needed.\n */\nextern atomic_p_t arenas[];\n\nvoid *a0malloc(size_t size);\nvoid a0dalloc(void *ptr);\nvoid *bootstrap_malloc(size_t size);\nvoid *bootstrap_calloc(size_t num, size_t size);\nvoid bootstrap_free(void *ptr);\nvoid arena_set(unsigned ind, arena_t *arena);\nunsigned narenas_total_get(void);\narena_t *arena_init(tsdn_t *tsdn, unsigned ind, const arena_config_t *config);\narena_t *arena_choose_hard(tsd_t *tsd, bool internal);\nvoid arena_migrate(tsd_t *tsd, arena_t *oldarena, arena_t *newarena);\nvoid iarena_cleanup(tsd_t *tsd);\nvoid arena_cleanup(tsd_t *tsd);\nsize_t batch_alloc(void **ptrs, size_t num, size_t size, int flags);\nvoid jemalloc_prefork(void);\nvoid jemalloc_postfork_parent(void);\nvoid jemalloc_postfork_child(void);\nvoid je_sdallocx_noflags(void *ptr, size_t size);\nvoid *malloc_default(size_t size, size_t *usize);\n\n#endif /* JEMALLOC_INTERNAL_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_includes.h",
    "content": "#ifndef JEMALLOC_INTERNAL_INCLUDES_H\n#define JEMALLOC_INTERNAL_INCLUDES_H\n\n/*\n * jemalloc can conceptually be broken into components (arena, tcache, etc.),\n * but there are circular dependencies that cannot be broken without\n * substantial performance degradation.\n *\n * Historically, we dealt with this by splitting each header into four sections\n * (types, structs, externs, and inlines), and included each header file\n * multiple times in this file, picking out the portion we want on each pass\n * using the following #defines:\n *   JEMALLOC_H_TYPES   : Preprocessor-defined constants and pseudo-opaque data\n *                        types.\n *   JEMALLOC_H_STRUCTS : Data structures.\n *   JEMALLOC_H_EXTERNS : Extern data declarations and function prototypes.\n *   JEMALLOC_H_INLINES : Inline functions.\n *\n * We're moving toward a world in which the dependencies are explicit; each file\n * will #include the headers it depends on (rather than relying on them being\n * implicitly available via this file including every header file in the\n * project).\n *\n * We're now in an intermediate state: we've broken up the header files to avoid\n * having to include each one multiple times, but have not yet moved the\n * dependency information into the header files (i.e. we still rely on the\n * ordering in this file to ensure all of a header's dependencies are available\n * in its translation unit).  Each component is now broken up into multiple\n * header files, corresponding to the sections above (e.g. instead of \"foo.h\",\n * we now have \"foo_types.h\", \"foo_structs.h\", \"foo_externs.h\",\n * \"foo_inlines.h\").\n *\n * Those files which have been converted to explicitly include their\n * inter-component dependencies are now in the initial HERMETIC HEADERS\n * section.  
All headers may still rely on jemalloc_preamble.h (which, by fiat,\n * must be included first in every translation unit) for system headers and\n * global jemalloc definitions, however.\n */\n\n/******************************************************************************/\n/* TYPES */\n/******************************************************************************/\n\n#include \"jemalloc/internal/arena_types.h\"\n#include \"jemalloc/internal/tcache_types.h\"\n#include \"jemalloc/internal/prof_types.h\"\n\n/******************************************************************************/\n/* STRUCTS */\n/******************************************************************************/\n\n#include \"jemalloc/internal/prof_structs.h\"\n#include \"jemalloc/internal/arena_structs.h\"\n#include \"jemalloc/internal/tcache_structs.h\"\n#include \"jemalloc/internal/background_thread_structs.h\"\n\n/******************************************************************************/\n/* EXTERNS */\n/******************************************************************************/\n\n#include \"jemalloc/internal/jemalloc_internal_externs.h\"\n#include \"jemalloc/internal/arena_externs.h\"\n#include \"jemalloc/internal/large_externs.h\"\n#include \"jemalloc/internal/tcache_externs.h\"\n#include \"jemalloc/internal/prof_externs.h\"\n#include \"jemalloc/internal/background_thread_externs.h\"\n\n/******************************************************************************/\n/* INLINES */\n/******************************************************************************/\n\n#include \"jemalloc/internal/jemalloc_internal_inlines_a.h\"\n/*\n * Include portions of arena code interleaved with tcache code in order to\n * resolve circular dependencies.\n */\n#include \"jemalloc/internal/arena_inlines_a.h\"\n#include \"jemalloc/internal/jemalloc_internal_inlines_b.h\"\n#include \"jemalloc/internal/tcache_inlines.h\"\n#include \"jemalloc/internal/arena_inlines_b.h\"\n#include 
\"jemalloc/internal/jemalloc_internal_inlines_c.h\"\n#include \"jemalloc/internal/prof_inlines.h\"\n#include \"jemalloc/internal/background_thread_inlines.h\"\n\n#endif /* JEMALLOC_INTERNAL_INCLUDES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_a.h",
    "content": "#ifndef JEMALLOC_INTERNAL_INLINES_A_H\n#define JEMALLOC_INTERNAL_INLINES_A_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/ticker.h\"\n\nJEMALLOC_ALWAYS_INLINE malloc_cpuid_t\nmalloc_getcpu(void) {\n\tassert(have_percpu_arena);\n#if defined(_WIN32)\n\treturn GetCurrentProcessorNumber();\n#elif defined(JEMALLOC_HAVE_SCHED_GETCPU)\n\treturn (malloc_cpuid_t)sched_getcpu();\n#else\n\tnot_reached();\n\treturn -1;\n#endif\n}\n\n/* Return the chosen arena index based on current cpu. */\nJEMALLOC_ALWAYS_INLINE unsigned\npercpu_arena_choose(void) {\n\tassert(have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena));\n\n\tmalloc_cpuid_t cpuid = malloc_getcpu();\n\tassert(cpuid >= 0);\n\n\tunsigned arena_ind;\n\tif ((opt_percpu_arena == percpu_arena) || ((unsigned)cpuid < ncpus /\n\t    2)) {\n\t\tarena_ind = cpuid;\n\t} else {\n\t\tassert(opt_percpu_arena == per_phycpu_arena);\n\t\t/* Hyper threads on the same physical CPU share arena. */\n\t\tarena_ind = cpuid - ncpus / 2;\n\t}\n\n\treturn arena_ind;\n}\n\n/* Return the limit of percpu auto arena range, i.e. arenas[0...ind_limit). */\nJEMALLOC_ALWAYS_INLINE unsigned\npercpu_arena_ind_limit(percpu_arena_mode_t mode) {\n\tassert(have_percpu_arena && PERCPU_ARENA_ENABLED(mode));\n\tif (mode == per_phycpu_arena && ncpus > 1) {\n\t\tif (ncpus % 2) {\n\t\t\t/* This likely means a misconfig. 
*/\n\t\t\treturn ncpus / 2 + 1;\n\t\t}\n\t\treturn ncpus / 2;\n\t} else {\n\t\treturn ncpus;\n\t}\n}\n\nstatic inline arena_t *\narena_get(tsdn_t *tsdn, unsigned ind, bool init_if_missing) {\n\tarena_t *ret;\n\n\tassert(ind < MALLOCX_ARENA_LIMIT);\n\n\tret = (arena_t *)atomic_load_p(&arenas[ind], ATOMIC_ACQUIRE);\n\tif (unlikely(ret == NULL)) {\n\t\tif (init_if_missing) {\n\t\t\tret = arena_init(tsdn, ind, &arena_config_default);\n\t\t}\n\t}\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntcache_available(tsd_t *tsd) {\n\t/*\n\t * Thread specific auto tcache might be unavailable if: 1) during tcache\n\t * initialization, or 2) disabled through thread.tcache.enabled mallctl\n\t * or config options.  This check covers all cases.\n\t */\n\tif (likely(tsd_tcache_enabled_get(tsd))) {\n\t\t/* Associated arena == NULL implies tcache init in progress. */\n\t\tif (config_debug && tsd_tcache_slowp_get(tsd)->arena != NULL) {\n\t\t\ttcache_assert_initialized(tsd_tcachep_get(tsd));\n\t\t}\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE tcache_t *\ntcache_get(tsd_t *tsd) {\n\tif (!tcache_available(tsd)) {\n\t\treturn NULL;\n\t}\n\n\treturn tsd_tcachep_get(tsd);\n}\n\nJEMALLOC_ALWAYS_INLINE tcache_slow_t *\ntcache_slow_get(tsd_t *tsd) {\n\tif (!tcache_available(tsd)) {\n\t\treturn NULL;\n\t}\n\n\treturn tsd_tcache_slowp_get(tsd);\n}\n\nstatic inline void\npre_reentrancy(tsd_t *tsd, arena_t *arena) {\n\t/* arena is the current context.  Reentry from a0 is not allowed. */\n\tassert(arena != arena_get(tsd_tsdn(tsd), 0, false));\n\ttsd_pre_reentrancy_raw(tsd);\n}\n\nstatic inline void\npost_reentrancy(tsd_t *tsd) {\n\ttsd_post_reentrancy_raw(tsd);\n}\n\n#endif /* JEMALLOC_INTERNAL_INLINES_A_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_b.h",
    "content": "#ifndef JEMALLOC_INTERNAL_INLINES_B_H\n#define JEMALLOC_INTERNAL_INLINES_B_H\n\n#include \"jemalloc/internal/extent.h\"\n\nstatic inline void\npercpu_arena_update(tsd_t *tsd, unsigned cpu) {\n\tassert(have_percpu_arena);\n\tarena_t *oldarena = tsd_arena_get(tsd);\n\tassert(oldarena != NULL);\n\tunsigned oldind = arena_ind_get(oldarena);\n\n\tif (oldind != cpu) {\n\t\tunsigned newind = cpu;\n\t\tarena_t *newarena = arena_get(tsd_tsdn(tsd), newind, true);\n\t\tassert(newarena != NULL);\n\n\t\t/* Set new arena/tcache associations. */\n\t\tarena_migrate(tsd, oldarena, newarena);\n\t\ttcache_t *tcache = tcache_get(tsd);\n\t\tif (tcache != NULL) {\n\t\t\ttcache_slow_t *tcache_slow = tsd_tcache_slowp_get(tsd);\n\t\t\ttcache_arena_reassociate(tsd_tsdn(tsd), tcache_slow,\n\t\t\t    tcache, newarena);\n\t\t}\n\t}\n}\n\n\n/* Choose an arena based on a per-thread value. */\nstatic inline arena_t *\narena_choose_impl(tsd_t *tsd, arena_t *arena, bool internal) {\n\tarena_t *ret;\n\n\tif (arena != NULL) {\n\t\treturn arena;\n\t}\n\n\t/* During reentrancy, arena 0 is the safest bet. */\n\tif (unlikely(tsd_reentrancy_level_get(tsd) > 0)) {\n\t\treturn arena_get(tsd_tsdn(tsd), 0, true);\n\t}\n\n\tret = internal ? 
tsd_iarena_get(tsd) : tsd_arena_get(tsd);\n\tif (unlikely(ret == NULL)) {\n\t\tret = arena_choose_hard(tsd, internal);\n\t\tassert(ret);\n\t\tif (tcache_available(tsd)) {\n\t\t\ttcache_slow_t *tcache_slow = tsd_tcache_slowp_get(tsd);\n\t\t\ttcache_t *tcache = tsd_tcachep_get(tsd);\n\t\t\tif (tcache_slow->arena != NULL) {\n\t\t\t\t/* See comments in tsd_tcache_data_init().*/\n\t\t\t\tassert(tcache_slow->arena ==\n\t\t\t\t    arena_get(tsd_tsdn(tsd), 0, false));\n\t\t\t\tif (tcache_slow->arena != ret) {\n\t\t\t\t\ttcache_arena_reassociate(tsd_tsdn(tsd),\n\t\t\t\t\t    tcache_slow, tcache, ret);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\ttcache_arena_associate(tsd_tsdn(tsd),\n\t\t\t\t    tcache_slow, tcache, ret);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * Note that for percpu arena, if the current arena is outside of the\n\t * auto percpu arena range, (i.e. thread is assigned to a manually\n\t * managed arena), then percpu arena is skipped.\n\t */\n\tif (have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena) &&\n\t    !internal && (arena_ind_get(ret) <\n\t    percpu_arena_ind_limit(opt_percpu_arena)) && (ret->last_thd !=\n\t    tsd_tsdn(tsd))) {\n\t\tunsigned ind = percpu_arena_choose();\n\t\tif (arena_ind_get(ret) != ind) {\n\t\t\tpercpu_arena_update(tsd, ind);\n\t\t\tret = tsd_arena_get(tsd);\n\t\t}\n\t\tret->last_thd = tsd_tsdn(tsd);\n\t}\n\n\treturn ret;\n}\n\nstatic inline arena_t *\narena_choose(tsd_t *tsd, arena_t *arena) {\n\treturn arena_choose_impl(tsd, arena, false);\n}\n\nstatic inline arena_t *\narena_ichoose(tsd_t *tsd, arena_t *arena) {\n\treturn arena_choose_impl(tsd, arena, true);\n}\n\nstatic inline bool\narena_is_auto(arena_t *arena) {\n\tassert(narenas_auto > 0);\n\n\treturn (arena_ind_get(arena) < manual_arena_base);\n}\n\n#endif /* JEMALLOC_INTERNAL_INLINES_B_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_inlines_c.h",
    "content": "#ifndef JEMALLOC_INTERNAL_INLINES_C_H\n#define JEMALLOC_INTERNAL_INLINES_C_H\n\n#include \"jemalloc/internal/hook.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/log.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/thread_event.h\"\n#include \"jemalloc/internal/witness.h\"\n\n/*\n * Translating the names of the 'i' functions:\n *   Abbreviations used in the first part of the function name (before\n *   alloc/dalloc) describe what that function accomplishes:\n *     a: arena (query)\n *     s: size (query, or sized deallocation)\n *     e: extent (query)\n *     p: aligned (allocates)\n *     vs: size (query, without knowing that the pointer is into the heap)\n *     r: rallocx implementation\n *     x: xallocx implementation\n *   Abbreviations used in the second part of the function name (after\n *   alloc/dalloc) describe the arguments it takes\n *     z: whether to return zeroed memory\n *     t: accepts a tcache_t * parameter\n *     m: accepts an arena_t * parameter\n */\n\nJEMALLOC_ALWAYS_INLINE arena_t *\niaalloc(tsdn_t *tsdn, const void *ptr) {\n\tassert(ptr != NULL);\n\n\treturn arena_aalloc(tsdn, ptr);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nisalloc(tsdn_t *tsdn, const void *ptr) {\n\tassert(ptr != NULL);\n\n\treturn arena_salloc(tsdn, ptr);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\niallocztm(tsdn_t *tsdn, size_t size, szind_t ind, bool zero, tcache_t *tcache,\n    bool is_internal, arena_t *arena, bool slow_path) {\n\tvoid *ret;\n\n\tassert(!is_internal || tcache == NULL);\n\tassert(!is_internal || arena == NULL || arena_is_auto(arena));\n\tif (!tsdn_null(tsdn) && tsd_reentrancy_level_get(tsdn_tsd(tsdn)) == 0) {\n\t\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t\t    WITNESS_RANK_CORE, 0);\n\t}\n\n\tret = arena_malloc(tsdn, arena, size, ind, zero, tcache, slow_path);\n\tif (config_stats && is_internal && likely(ret != NULL)) {\n\t\tarena_internal_add(iaalloc(tsdn, 
ret), isalloc(tsdn, ret));\n\t}\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nialloc(tsd_t *tsd, size_t size, szind_t ind, bool zero, bool slow_path) {\n\treturn iallocztm(tsd_tsdn(tsd), size, ind, zero, tcache_get(tsd), false,\n\t    NULL, slow_path);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nipallocztm(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,\n    tcache_t *tcache, bool is_internal, arena_t *arena) {\n\tvoid *ret;\n\n\tassert(usize != 0);\n\tassert(usize == sz_sa2u(usize, alignment));\n\tassert(!is_internal || tcache == NULL);\n\tassert(!is_internal || arena == NULL || arena_is_auto(arena));\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tret = arena_palloc(tsdn, arena, usize, alignment, zero, tcache);\n\tassert(ALIGNMENT_ADDR2BASE(ret, alignment) == ret);\n\tif (config_stats && is_internal && likely(ret != NULL)) {\n\t\tarena_internal_add(iaalloc(tsdn, ret), isalloc(tsdn, ret));\n\t}\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nipalloct(tsdn_t *tsdn, size_t usize, size_t alignment, bool zero,\n    tcache_t *tcache, arena_t *arena) {\n\treturn ipallocztm(tsdn, usize, alignment, zero, tcache, false, arena);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nipalloc(tsd_t *tsd, size_t usize, size_t alignment, bool zero) {\n\treturn ipallocztm(tsd_tsdn(tsd), usize, alignment, zero,\n\t    tcache_get(tsd), false, NULL);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nivsalloc(tsdn_t *tsdn, const void *ptr) {\n\treturn arena_vsalloc(tsdn, ptr);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nidalloctm(tsdn_t *tsdn, void *ptr, tcache_t *tcache,\n    emap_alloc_ctx_t *alloc_ctx, bool is_internal, bool slow_path) {\n\tassert(ptr != NULL);\n\tassert(!is_internal || tcache == NULL);\n\tassert(!is_internal || arena_is_auto(iaalloc(tsdn, ptr)));\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tif (config_stats && is_internal) {\n\t\tarena_internal_sub(iaalloc(tsdn, ptr), isalloc(tsdn, 
ptr));\n\t}\n\tif (!is_internal && !tsdn_null(tsdn) &&\n\t    tsd_reentrancy_level_get(tsdn_tsd(tsdn)) != 0) {\n\t\tassert(tcache == NULL);\n\t}\n\tarena_dalloc(tsdn, ptr, tcache, alloc_ctx, slow_path);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nidalloc(tsd_t *tsd, void *ptr) {\n\tidalloctm(tsd_tsdn(tsd), ptr, tcache_get(tsd), NULL, false, true);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nisdalloct(tsdn_t *tsdn, void *ptr, size_t size, tcache_t *tcache,\n    emap_alloc_ctx_t *alloc_ctx, bool slow_path) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tarena_sdalloc(tsdn, ptr, size, tcache, alloc_ctx, slow_path);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\niralloct_realign(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size,\n    size_t alignment, bool zero, tcache_t *tcache, arena_t *arena,\n    hook_ralloc_args_t *hook_args) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tvoid *p;\n\tsize_t usize, copysize;\n\n\tusize = sz_sa2u(size, alignment);\n\tif (unlikely(usize == 0 || usize > SC_LARGE_MAXCLASS)) {\n\t\treturn NULL;\n\t}\n\tp = ipalloct(tsdn, usize, alignment, zero, tcache, arena);\n\tif (p == NULL) {\n\t\treturn NULL;\n\t}\n\t/*\n\t * Copy at most size bytes (not size+extra), since the caller has no\n\t * expectation that the extra bytes will be reliably preserved.\n\t */\n\tcopysize = (size < oldsize) ? size : oldsize;\n\tmemcpy(p, ptr, copysize);\n\thook_invoke_alloc(hook_args->is_realloc\n\t    ? hook_alloc_realloc : hook_alloc_rallocx, p, (uintptr_t)p,\n\t    hook_args->args);\n\thook_invoke_dalloc(hook_args->is_realloc\n\t    ? 
hook_dalloc_realloc : hook_dalloc_rallocx, ptr, hook_args->args);\n\tisdalloct(tsdn, ptr, oldsize, tcache, NULL, true);\n\treturn p;\n}\n\n/*\n * is_realloc threads through the knowledge of whether or not this call comes\n * from je_realloc (as opposed to je_rallocx); this ensures that we pass the\n * correct entry point into any hooks.\n * Note that these functions are all force-inlined, so no actual bool gets\n * passed-around anywhere.\n */\nJEMALLOC_ALWAYS_INLINE void *\niralloct(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t alignment,\n    bool zero, tcache_t *tcache, arena_t *arena, hook_ralloc_args_t *hook_args)\n{\n\tassert(ptr != NULL);\n\tassert(size != 0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1))\n\t    != 0) {\n\t\t/*\n\t\t * Existing object alignment is inadequate; allocate new space\n\t\t * and copy.\n\t\t */\n\t\treturn iralloct_realign(tsdn, ptr, oldsize, size, alignment,\n\t\t    zero, tcache, arena, hook_args);\n\t}\n\n\treturn arena_ralloc(tsdn, arena, ptr, oldsize, size, alignment, zero,\n\t    tcache, hook_args);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\niralloc(tsd_t *tsd, void *ptr, size_t oldsize, size_t size, size_t alignment,\n    bool zero, hook_ralloc_args_t *hook_args) {\n\treturn iralloct(tsd_tsdn(tsd), ptr, oldsize, size, alignment, zero,\n\t    tcache_get(tsd), NULL, hook_args);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nixalloc(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size, size_t extra,\n    size_t alignment, bool zero, size_t *newsize) {\n\tassert(ptr != NULL);\n\tassert(size != 0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (alignment != 0 && ((uintptr_t)ptr & ((uintptr_t)alignment-1))\n\t    != 0) {\n\t\t/* Existing object alignment is inadequate. 
*/\n\t\t*newsize = oldsize;\n\t\treturn true;\n\t}\n\n\treturn arena_ralloc_no_move(tsdn, ptr, oldsize, size, extra, zero,\n\t    newsize);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nfastpath_success_finish(tsd_t *tsd, uint64_t allocated_after,\n    cache_bin_t *bin, void *ret) {\n\tthread_allocated_set(tsd, allocated_after);\n\tif (config_stats) {\n\t\tbin->tstats.nrequests++;\n\t}\n\n\tLOG(\"core.malloc.exit\", \"result: %p\", ret);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nmalloc_initialized(void) {\n\treturn (malloc_init_state == malloc_init_initialized);\n}\n\n/*\n * malloc() fastpath.  Included here so that we can inline it into operator new;\n * function call overhead there is non-negligible as a fraction of total CPU in\n * allocation-heavy C++ programs.  We take the fallback alloc to allow malloc\n * (which can return NULL) to differ in its behavior from operator new (which\n * can't).  It matches the signature of malloc / operator new so that we can\n * tail-call the fallback allocator, allowing us to avoid setting up the call\n * frame in the common case.\n *\n * Fastpath assumes size <= SC_LOOKUP_MAXCLASS, and that we hit\n * tcache.  If either of these is false, we tail-call to the slowpath,\n * malloc_default().  
Tail-calling is used to avoid any caller-saved\n * registers.\n *\n * fastpath supports ticker and profiling, both of which will also\n * tail-call to the slowpath if they fire.\n */\nJEMALLOC_ALWAYS_INLINE void *\nimalloc_fastpath(size_t size, void *(fallback_alloc)(size_t, size_t *), size_t *usable_size) {\n\tLOG(\"core.malloc.entry\", \"size: %zu\", size);\n\tif (tsd_get_allocates() && unlikely(!malloc_initialized())) {\n\t\treturn fallback_alloc(size, usable_size);\n\t}\n\n\ttsd_t *tsd = tsd_get(false);\n\tif (unlikely((size > SC_LOOKUP_MAXCLASS) || tsd == NULL)) {\n\t\treturn fallback_alloc(size, usable_size);\n\t}\n\t/*\n\t * The code below till the branch checking the next_event threshold may\n\t * execute before malloc_init(), in which case the threshold is 0 to\n\t * trigger slow path and initialization.\n\t *\n\t * Note that when uninitialized, only the fast-path variants of the sz /\n\t * tsd facilities may be called.\n\t */\n\tszind_t ind;\n\t/*\n\t * The thread_allocated counter in tsd serves as a general purpose\n\t * accumulator for bytes of allocation to trigger different types of\n\t * events.  usize is always needed to advance thread_allocated, though\n\t * it's not always needed in the core allocation logic.\n\t */\n\tsize_t usize;\n\tsz_size2index_usize_fastpath(size, &ind, &usize);\n\t/* Fast path relies on size being a bin. */\n\tassert(ind < SC_NBINS);\n\tassert((SC_LOOKUP_MAXCLASS < SC_SMALL_MAXCLASS) &&\n\t    (size <= SC_SMALL_MAXCLASS));\n\n\tuint64_t allocated, threshold;\n\tte_malloc_fastpath_ctx(tsd, &allocated, &threshold);\n\tuint64_t allocated_after = allocated + usize;\n\t/*\n\t * The ind and usize might be uninitialized (or partially) before\n\t * malloc_init().  
The assertions check for: 1) full correctness (usize\n\t * & ind) when initialized; and 2) guaranteed slow-path (threshold == 0)\n\t * when !initialized.\n\t */\n\tif (!malloc_initialized()) {\n\t\tassert(threshold == 0);\n\t} else {\n\t\tassert(ind == sz_size2index(size));\n\t\tassert(usize > 0 && usize == sz_index2size(ind));\n\t}\n\t/*\n\t * Check for events and tsd non-nominal (fast_threshold will be set to\n\t * 0) in a single branch.\n\t */\n\tif (unlikely(allocated_after >= threshold)) {\n\t\treturn fallback_alloc(size, usable_size);\n\t}\n\tassert(tsd_fast(tsd));\n\n\ttcache_t *tcache = tsd_tcachep_get(tsd);\n\tassert(tcache == tcache_get(tsd));\n\tcache_bin_t *bin = &tcache->bins[ind];\n\tbool tcache_success;\n\tvoid *ret;\n\n\t/*\n\t * We split up the code this way so that redundant low-water\n\t * computation doesn't happen on the (more common) case in which we\n\t * don't touch the low water mark.  The compiler won't do this\n\t * duplication on its own.\n\t */\n\tret = cache_bin_alloc_easy(bin, &tcache_success);\n\tif (tcache_success) {\n\t\tfastpath_success_finish(tsd, allocated_after, bin, ret);\n\t\tif (usable_size) *usable_size = usize;\n\t\treturn ret;\n\t}\n\tret = cache_bin_alloc(bin, &tcache_success);\n\tif (tcache_success) {\n\t\tfastpath_success_finish(tsd, allocated_after, bin, ret);\n\t\tif (usable_size) *usable_size = usize;\n\t\treturn ret;\n\t}\n\n\treturn fallback_alloc(size, usable_size);\n}\n\nJEMALLOC_ALWAYS_INLINE int\niget_defrag_hint(tsdn_t *tsdn, void* ptr) {\n\tint defrag = 0;\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsdn, &arena_emap_global, ptr, &alloc_ctx);\n\tif (likely(alloc_ctx.slab)) {\n\t\t/* Small allocation. 
*/\n\t\tedata_t *slab = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\t\tarena_t *arena = arena_get_from_edata(slab);\n\t\tszind_t binind = edata_szind_get(slab);\n\t\tunsigned binshard = edata_binshard_get(slab);\n\t\tbin_t *bin = arena_get_bin(arena, binind, binshard);\n\t\tmalloc_mutex_lock(tsdn, &bin->lock);\n\t\tarena_dalloc_bin_locked_info_t info;\n\t\tarena_dalloc_bin_locked_begin(&info, binind);\n\t\t/* Don't bother moving allocations from the slab currently used for new allocations */\n\t\tif (slab != bin->slabcur) {\n\t\t\tint free_in_slab = edata_nfree_get(slab);\n\t\t\tif (free_in_slab) {\n\t\t\t\tconst bin_info_t *bin_info = &bin_infos[binind];\n\t\t\t\t/* Find number of non-full slabs and the number of regs in them */\n\t\t\t\tunsigned long curslabs = 0;\n\t\t\t\tsize_t curregs = 0;\n\t\t\t\t/* Run on all bin shards (usually just one) */\n\t\t\t\tfor (uint32_t i=0; i< bin_info->n_shards; i++) {\n\t\t\t\t\tbin_t *bb = arena_get_bin(arena, binind, i);\n\t\t\t\t\tcurslabs += bb->stats.nonfull_slabs;\n\t\t\t\t\t/* Deduct the regs in full slabs (they're not part of the game) */\n\t\t\t\t\tunsigned long full_slabs = bb->stats.curslabs - bb->stats.nonfull_slabs;\n\t\t\t\t\tcurregs += bb->stats.curregs - full_slabs * bin_info->nregs;\n\t\t\t\t\tif (bb->slabcur) {\n\t\t\t\t\t\t/* Remove slabcur from the overall utilization (not a candidate to move from) */\n\t\t\t\t\t\tcurregs -= bin_info->nregs - edata_nfree_get(bb->slabcur);\n\t\t\t\t\t\tcurslabs -= 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* Compare the utilization ratio of the slab in question to the total average\n\t\t\t\t * among non-full slabs. To avoid precision loss in division, we do that by\n\t\t\t\t * extrapolating the usage of the slab as if all slabs have the same usage.\n\t\t\t\t * If this slab is less used than the average, we'll prefer to move the data\n\t\t\t\t * to hopefully more used ones. 
To avoid stagnation when all slabs have the same\n\t\t\t\t * utilization, we give additional 12.5% weight to the decision to defrag. */\n\t\t\t\tdefrag = (bin_info->nregs - free_in_slab) * curslabs <= curregs + curregs / 8;\n\t\t\t}\n\t\t}\n\t\tarena_dalloc_bin_locked_finish(tsdn, arena, bin, &info);\n\t\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\t}\n\treturn defrag;\n}\n\n#endif /* JEMALLOC_INTERNAL_INLINES_C_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_macros.h",
    "content": "#ifndef JEMALLOC_INTERNAL_MACROS_H\n#define JEMALLOC_INTERNAL_MACROS_H\n\n#ifdef JEMALLOC_DEBUG\n#  define JEMALLOC_ALWAYS_INLINE static inline\n#else\n#  ifdef _MSC_VER\n#    define JEMALLOC_ALWAYS_INLINE static __forceinline\n#  else\n#    define JEMALLOC_ALWAYS_INLINE JEMALLOC_ATTR(always_inline) static inline\n#  endif\n#endif\n#ifdef _MSC_VER\n#  define inline _inline\n#endif\n\n#define UNUSED JEMALLOC_ATTR(unused)\n\n#define ZU(z)\t((size_t)z)\n#define ZD(z)\t((ssize_t)z)\n#define QU(q)\t((uint64_t)q)\n#define QD(q)\t((int64_t)q)\n\n#define KZU(z)\tZU(z##ULL)\n#define KZD(z)\tZD(z##LL)\n#define KQU(q)\tQU(q##ULL)\n#define KQD(q)\tQD(q##LL)\n\n#ifndef __DECONST\n#  define\t__DECONST(type, var)\t((type)(uintptr_t)(const void *)(var))\n#endif\n\n#if !defined(JEMALLOC_HAS_RESTRICT) || defined(__cplusplus)\n#  define restrict\n#endif\n\n/* Various function pointers are static and immutable except during testing. */\n#ifdef JEMALLOC_JET\n#  define JET_MUTABLE\n#else\n#  define JET_MUTABLE const\n#endif\n\n#define JEMALLOC_VA_ARGS_HEAD(head, ...) head\n#define JEMALLOC_VA_ARGS_TAIL(head, ...) __VA_ARGS__\n\n/* Diagnostic suppression macros */\n#if defined(_MSC_VER) && !defined(__clang__)\n#  define JEMALLOC_DIAGNOSTIC_PUSH __pragma(warning(push))\n#  define JEMALLOC_DIAGNOSTIC_POP __pragma(warning(pop))\n#  define JEMALLOC_DIAGNOSTIC_IGNORE(W) __pragma(warning(disable:W))\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_TYPE_LIMITS\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n#  define JEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n/* #pragma GCC diagnostic first appeared in gcc 4.6. 
*/\n#elif (defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && \\\n  (__GNUC_MINOR__ > 5)))) || defined(__clang__)\n/*\n * The JEMALLOC_PRAGMA__ macro is an implementation detail of the GCC and Clang\n * diagnostic suppression macros and should not be used anywhere else.\n */\n#  define JEMALLOC_PRAGMA__(X) _Pragma(#X)\n#  define JEMALLOC_DIAGNOSTIC_PUSH JEMALLOC_PRAGMA__(GCC diagnostic push)\n#  define JEMALLOC_DIAGNOSTIC_POP JEMALLOC_PRAGMA__(GCC diagnostic pop)\n#  define JEMALLOC_DIAGNOSTIC_IGNORE(W) \\\n     JEMALLOC_PRAGMA__(GCC diagnostic ignored W)\n\n/*\n * The -Wmissing-field-initializers warning is buggy in GCC versions < 5.1 and\n * all clang versions up to version 7 (currently trunk, unreleased).  This macro\n * suppresses the warning for the affected compiler versions only.\n */\n#  if ((defined(__GNUC__) && !defined(__clang__)) && (__GNUC__ < 5)) || \\\n     defined(__clang__)\n#    define JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS  \\\n          JEMALLOC_DIAGNOSTIC_IGNORE(\"-Wmissing-field-initializers\")\n#  else\n#    define JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n#  endif\n\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_TYPE_LIMITS  \\\n     JEMALLOC_DIAGNOSTIC_IGNORE(\"-Wtype-limits\")\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_UNUSED_PARAMETER \\\n     JEMALLOC_DIAGNOSTIC_IGNORE(\"-Wunused-parameter\")\n#  if defined(__GNUC__) && !defined(__clang__) && (__GNUC__ >= 7)\n#    define JEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN \\\n       JEMALLOC_DIAGNOSTIC_IGNORE(\"-Walloc-size-larger-than=\")\n#  else\n#    define JEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n#  endif\n#  define JEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS \\\n  JEMALLOC_DIAGNOSTIC_PUSH \\\n  JEMALLOC_DIAGNOSTIC_IGNORE_UNUSED_PARAMETER\n#else\n#  define JEMALLOC_DIAGNOSTIC_PUSH\n#  define JEMALLOC_DIAGNOSTIC_POP\n#  define JEMALLOC_DIAGNOSTIC_IGNORE(W)\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n#  define 
JEMALLOC_DIAGNOSTIC_IGNORE_TYPE_LIMITS\n#  define JEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n#  define JEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n#endif\n\n/*\n * Disables spurious diagnostics for all headers.  Since these headers are not\n * included by users directly, it does not affect their diagnostic settings.\n */\nJEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n\n#endif /* JEMALLOC_INTERNAL_MACROS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_internal_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TYPES_H\n#define JEMALLOC_INTERNAL_TYPES_H\n\n#include \"jemalloc/internal/quantum.h\"\n\n/* Processor / core id type. */\ntypedef int malloc_cpuid_t;\n\n/* When realloc(non-null-ptr, 0) is called, what happens? */\nenum zero_realloc_action_e {\n\t/* Realloc(ptr, 0) is free(ptr); return malloc(0); */\n\tzero_realloc_action_alloc = 0,\n\t/* Realloc(ptr, 0) is free(ptr); */\n\tzero_realloc_action_free = 1,\n\t/* Realloc(ptr, 0) aborts. */\n\tzero_realloc_action_abort = 2\n};\ntypedef enum zero_realloc_action_e zero_realloc_action_t;\n\n/* Signature of write callback. */\ntypedef void (write_cb_t)(void *, const char *);\n\nenum malloc_init_e {\n\tmalloc_init_uninitialized\t= 3,\n\tmalloc_init_a0_initialized\t= 2,\n\tmalloc_init_recursible\t\t= 1,\n\tmalloc_init_initialized\t\t= 0 /* Common case --> jnz. */\n};\ntypedef enum malloc_init_e malloc_init_t;\n\n/*\n * Flags bits:\n *\n * a: arena\n * t: tcache\n * 0: unused\n * z: zero\n * n: alignment\n *\n * aaaaaaaa aaaatttt tttttttt 0znnnnnn\n */\n#define MALLOCX_ARENA_BITS\t12\n#define MALLOCX_TCACHE_BITS\t12\n#define MALLOCX_LG_ALIGN_BITS\t6\n#define MALLOCX_ARENA_SHIFT\t20\n#define MALLOCX_TCACHE_SHIFT\t8\n#define MALLOCX_ARENA_MASK \\\n    (((1 << MALLOCX_ARENA_BITS) - 1) << MALLOCX_ARENA_SHIFT)\n/* NB: Arena index bias decreases the maximum number of arenas by 1. */\n#define MALLOCX_ARENA_LIMIT\t((1 << MALLOCX_ARENA_BITS) - 1)\n#define MALLOCX_TCACHE_MASK \\\n    (((1 << MALLOCX_TCACHE_BITS) - 1) << MALLOCX_TCACHE_SHIFT)\n#define MALLOCX_TCACHE_MAX\t((1 << MALLOCX_TCACHE_BITS) - 3)\n#define MALLOCX_LG_ALIGN_MASK\t((1 << MALLOCX_LG_ALIGN_BITS) - 1)\n/* Use MALLOCX_ALIGN_GET() if alignment may not be specified in flags. 
*/\n#define MALLOCX_ALIGN_GET_SPECIFIED(flags)\t\t\t\t\\\n    (ZU(1) << (flags & MALLOCX_LG_ALIGN_MASK))\n#define MALLOCX_ALIGN_GET(flags)\t\t\t\t\t\\\n    (MALLOCX_ALIGN_GET_SPECIFIED(flags) & (SIZE_T_MAX-1))\n#define MALLOCX_ZERO_GET(flags)\t\t\t\t\t\t\\\n    ((bool)(flags & MALLOCX_ZERO))\n\n#define MALLOCX_TCACHE_GET(flags)\t\t\t\t\t\\\n    (((unsigned)((flags & MALLOCX_TCACHE_MASK) >> MALLOCX_TCACHE_SHIFT)) - 2)\n#define MALLOCX_ARENA_GET(flags)\t\t\t\t\t\\\n    (((unsigned)(((unsigned)flags) >> MALLOCX_ARENA_SHIFT)) - 1)\n\n/* Smallest size class to support. */\n#define TINY_MIN\t\t(1U << LG_TINY_MIN)\n\n#define LONG\t\t\t((size_t)(1U << LG_SIZEOF_LONG))\n#define LONG_MASK\t\t(LONG - 1)\n\n/* Return the smallest long multiple that is >= a. */\n#define LONG_CEILING(a)\t\t\t\t\t\t\t\\\n\t(((a) + LONG_MASK) & ~LONG_MASK)\n\n#define SIZEOF_PTR\t\t(1U << LG_SIZEOF_PTR)\n#define PTR_MASK\t\t(SIZEOF_PTR - 1)\n\n/* Return the smallest (void *) multiple that is >= a. */\n#define PTR_CEILING(a)\t\t\t\t\t\t\t\\\n\t(((a) + PTR_MASK) & ~PTR_MASK)\n\n/*\n * Maximum size of L1 cache line.  This is used to avoid cache line aliasing.\n * In addition, this controls the spacing of cacheline-spaced size classes.\n *\n * CACHELINE cannot be based on LG_CACHELINE because __declspec(align()) can\n * only handle raw constants.\n */\n#define LG_CACHELINE\t\t6\n#define CACHELINE\t\t64\n#define CACHELINE_MASK\t\t(CACHELINE - 1)\n\n/* Return the smallest cacheline multiple that is >= s. */\n#define CACHELINE_CEILING(s)\t\t\t\t\t\t\\\n\t(((s) + CACHELINE_MASK) & ~CACHELINE_MASK)\n\n/* Return the nearest aligned address at or below a. */\n#define ALIGNMENT_ADDR2BASE(a, alignment)\t\t\t\t\\\n\t((void *)((uintptr_t)(a) & ((~(alignment)) + 1)))\n\n/* Return the offset between a and the nearest aligned address at or below a. */\n#define ALIGNMENT_ADDR2OFFSET(a, alignment)\t\t\t\t\\\n\t((size_t)((uintptr_t)(a) & (alignment - 1)))\n\n/* Return the smallest alignment multiple that is >= s. 
*/\n#define ALIGNMENT_CEILING(s, alignment)\t\t\t\t\t\\\n\t(((s) + (alignment - 1)) & ((~(alignment)) + 1))\n\n/* Declare a variable-length array. */\n#if __STDC_VERSION__ < 199901L\n#  ifdef _MSC_VER\n#    include <malloc.h>\n#    define alloca _alloca\n#  else\n#    ifdef JEMALLOC_HAS_ALLOCA_H\n#      include <alloca.h>\n#    else\n#      include <stdlib.h>\n#    endif\n#  endif\n#  define VARIABLE_ARRAY(type, name, count) \\\n\ttype *name = alloca(sizeof(type) * (count))\n#else\n#  define VARIABLE_ARRAY(type, name, count) type name[(count)]\n#endif\n\n#endif /* JEMALLOC_INTERNAL_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/jemalloc_preamble.h.in",
    "content": "#ifndef JEMALLOC_PREAMBLE_H\n#define JEMALLOC_PREAMBLE_H\n\n#include \"jemalloc_internal_defs.h\"\n#include \"jemalloc/internal/jemalloc_internal_decls.h\"\n\n#if defined(JEMALLOC_UTRACE) || defined(JEMALLOC_UTRACE_LABEL)\n#include <sys/ktrace.h>\n#  if defined(JEMALLOC_UTRACE)\n#    define UTRACE_CALL(p, l) utrace(p, l)\n#  else\n#    define UTRACE_CALL(p, l) utrace(\"jemalloc_process\", p, l)\n#    define JEMALLOC_UTRACE\n#  endif\n#endif\n\n#define JEMALLOC_NO_DEMANGLE\n#ifdef JEMALLOC_JET\n#  undef JEMALLOC_IS_MALLOC\n#  define JEMALLOC_N(n) jet_##n\n#  include \"jemalloc/internal/public_namespace.h\"\n#  define JEMALLOC_NO_RENAME\n#  include \"../jemalloc@install_suffix@.h\"\n#  undef JEMALLOC_NO_RENAME\n#else\n#  define JEMALLOC_N(n) @private_namespace@##n\n#  include \"../jemalloc@install_suffix@.h\"\n#endif\n\n#if defined(JEMALLOC_OSATOMIC)\n#include <libkern/OSAtomic.h>\n#endif\n\n#ifdef JEMALLOC_ZONE\n#include <mach/mach_error.h>\n#include <mach/mach_init.h>\n#include <mach/vm_map.h>\n#endif\n\n#include \"jemalloc/internal/jemalloc_internal_macros.h\"\n\n/*\n * Note that the ordering matters here; the hook itself is name-mangled.  
We\n * want the inclusion of hooks to happen early, so that we hook as much as\n * possible.\n */\n#ifndef JEMALLOC_NO_PRIVATE_NAMESPACE\n#  ifndef JEMALLOC_JET\n#    include \"jemalloc/internal/private_namespace.h\"\n#  else\n#    include \"jemalloc/internal/private_namespace_jet.h\"\n#  endif\n#endif\n#include \"jemalloc/internal/test_hooks.h\"\n\n#ifdef JEMALLOC_DEFINE_MADVISE_FREE\n#  define JEMALLOC_MADV_FREE 8\n#endif\n\nstatic const bool config_debug =\n#ifdef JEMALLOC_DEBUG\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool have_dss =\n#ifdef JEMALLOC_DSS\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool have_madvise_huge =\n#ifdef JEMALLOC_HAVE_MADVISE_HUGE\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_fill =\n#ifdef JEMALLOC_FILL\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_lazy_lock =\n#ifdef JEMALLOC_LAZY_LOCK\n    true\n#else\n    false\n#endif\n    ;\nstatic const char * const config_malloc_conf = JEMALLOC_CONFIG_MALLOC_CONF;\nstatic const bool config_prof =\n#ifdef JEMALLOC_PROF\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_prof_libgcc =\n#ifdef JEMALLOC_PROF_LIBGCC\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_prof_libunwind =\n#ifdef JEMALLOC_PROF_LIBUNWIND\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool maps_coalesce =\n#ifdef JEMALLOC_MAPS_COALESCE\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_stats =\n#ifdef JEMALLOC_STATS\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_tls =\n#ifdef JEMALLOC_TLS\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_utrace =\n#ifdef JEMALLOC_UTRACE\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_xmalloc =\n#ifdef JEMALLOC_XMALLOC\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_cache_oblivious =\n#ifdef JEMALLOC_CACHE_OBLIVIOUS\n    true\n#else\n    false\n#endif\n    ;\n/*\n * 
Undocumented, for jemalloc development use only at the moment.  See the note\n * in jemalloc/internal/log.h.\n */\nstatic const bool config_log =\n#ifdef JEMALLOC_LOG\n    true\n#else\n    false\n#endif\n    ;\n/*\n * Are extra safety checks enabled; things like checking the size of sized\n * deallocations, double-frees, etc.\n */\nstatic const bool config_opt_safety_checks =\n#ifdef JEMALLOC_OPT_SAFETY_CHECKS\n    true\n#elif defined(JEMALLOC_DEBUG)\n    /*\n     * This lets us only guard safety checks by one flag instead of two; fast\n     * checks can guard solely by config_opt_safety_checks and run in debug mode\n     * too.\n     */\n    true\n#else\n    false\n#endif\n    ;\n\n/*\n * Extra debugging of sized deallocations too onerous to be included in the\n * general safety checks.\n */\nstatic const bool config_opt_size_checks =\n#if defined(JEMALLOC_OPT_SIZE_CHECKS) || defined(JEMALLOC_DEBUG)\n    true\n#else\n    false\n#endif\n    ;\n\nstatic const bool config_uaf_detection =\n#if defined(JEMALLOC_UAF_DETECTION) || defined(JEMALLOC_DEBUG)\n    true\n#else\n    false\n#endif\n    ;\n\n/* Whether or not the C++ extensions are enabled. */\nstatic const bool config_enable_cxx =\n#ifdef JEMALLOC_ENABLE_CXX\n    true\n#else\n    false\n#endif\n;\n\n#if defined(_WIN32) || defined(JEMALLOC_HAVE_SCHED_GETCPU)\n/* Currently percpu_arena depends on sched_getcpu. 
*/\n#define JEMALLOC_PERCPU_ARENA\n#endif\nstatic const bool have_percpu_arena =\n#ifdef JEMALLOC_PERCPU_ARENA\n    true\n#else\n    false\n#endif\n    ;\n/*\n * Undocumented, and not recommended; the application should take full\n * responsibility for tracking provenance.\n */\nstatic const bool force_ivsalloc =\n#ifdef JEMALLOC_FORCE_IVSALLOC\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool have_background_thread =\n#ifdef JEMALLOC_BACKGROUND_THREAD\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool config_high_res_timer =\n#ifdef JEMALLOC_HAVE_CLOCK_REALTIME\n    true\n#else\n    false\n#endif\n    ;\n\nstatic const bool have_memcntl =\n#ifdef JEMALLOC_HAVE_MEMCNTL\n    true\n#else\n    false\n#endif\n    ;\n\n#endif /* JEMALLOC_PREAMBLE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/large_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_LARGE_EXTERNS_H\n#define JEMALLOC_INTERNAL_LARGE_EXTERNS_H\n\n#include \"jemalloc/internal/hook.h\"\n\nvoid *large_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero);\nvoid *large_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,\n    bool zero);\nbool large_ralloc_no_move(tsdn_t *tsdn, edata_t *edata, size_t usize_min,\n    size_t usize_max, bool zero);\nvoid *large_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t usize,\n    size_t alignment, bool zero, tcache_t *tcache,\n    hook_ralloc_args_t *hook_args);\n\nvoid large_dalloc_prep_locked(tsdn_t *tsdn, edata_t *edata);\nvoid large_dalloc_finish(tsdn_t *tsdn, edata_t *edata);\nvoid large_dalloc(tsdn_t *tsdn, edata_t *edata);\nsize_t large_salloc(tsdn_t *tsdn, const edata_t *edata);\nvoid large_prof_info_get(tsd_t *tsd, edata_t *edata, prof_info_t *prof_info,\n    bool reset_recent);\nvoid large_prof_tctx_reset(edata_t *edata);\nvoid large_prof_info_set(edata_t *edata, prof_tctx_t *tctx, size_t size);\n\n#endif /* JEMALLOC_INTERNAL_LARGE_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/lockedint.h",
    "content": "#ifndef JEMALLOC_INTERNAL_LOCKEDINT_H\n#define JEMALLOC_INTERNAL_LOCKEDINT_H\n\n/*\n * In those architectures that support 64-bit atomics, we use atomic updates for\n * our 64-bit values.  Otherwise, we use a plain uint64_t and synchronize\n * externally.\n */\n\ntypedef struct locked_u64_s locked_u64_t;\n#ifdef JEMALLOC_ATOMIC_U64\nstruct locked_u64_s {\n\tatomic_u64_t val;\n};\n#else\n/* Must hold the associated mutex. */\nstruct locked_u64_s {\n\tuint64_t val;\n};\n#endif\n\ntypedef struct locked_zu_s locked_zu_t;\nstruct locked_zu_s {\n\tatomic_zu_t val;\n};\n\n#ifndef JEMALLOC_ATOMIC_U64\n#  define LOCKEDINT_MTX_DECLARE(name) malloc_mutex_t name;\n#  define LOCKEDINT_MTX_INIT(mu, name, rank, rank_mode)\t\t\t\\\n    malloc_mutex_init(&(mu), name, rank, rank_mode)\n#  define LOCKEDINT_MTX(mtx) (&(mtx))\n#  define LOCKEDINT_MTX_LOCK(tsdn, mu) malloc_mutex_lock(tsdn, &(mu))\n#  define LOCKEDINT_MTX_UNLOCK(tsdn, mu) malloc_mutex_unlock(tsdn, &(mu))\n#  define LOCKEDINT_MTX_PREFORK(tsdn, mu) malloc_mutex_prefork(tsdn, &(mu))\n#  define LOCKEDINT_MTX_POSTFORK_PARENT(tsdn, mu)\t\t\t\\\n    malloc_mutex_postfork_parent(tsdn, &(mu))\n#  define LOCKEDINT_MTX_POSTFORK_CHILD(tsdn, mu)\t\t\t\\\n    malloc_mutex_postfork_child(tsdn, &(mu))\n#else\n#  define LOCKEDINT_MTX_DECLARE(name)\n#  define LOCKEDINT_MTX(mtx) NULL\n#  define LOCKEDINT_MTX_INIT(mu, name, rank, rank_mode) false\n#  define LOCKEDINT_MTX_LOCK(tsdn, mu)\n#  define LOCKEDINT_MTX_UNLOCK(tsdn, mu)\n#  define LOCKEDINT_MTX_PREFORK(tsdn, mu)\n#  define LOCKEDINT_MTX_POSTFORK_PARENT(tsdn, mu)\n#  define LOCKEDINT_MTX_POSTFORK_CHILD(tsdn, mu)\n#endif\n\n#ifdef JEMALLOC_ATOMIC_U64\n#  define LOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx) assert((mtx) == NULL)\n#else\n#  define LOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx)\t\t\t\\\n    malloc_mutex_assert_owner(tsdn, (mtx))\n#endif\n\nstatic inline uint64_t\nlocked_read_u64(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_u64_t *p) 
{\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\treturn atomic_load_u64(&p->val, ATOMIC_RELAXED);\n#else\n\treturn p->val;\n#endif\n}\n\nstatic inline void\nlocked_inc_u64(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_u64_t *p,\n    uint64_t x) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\tatomic_fetch_add_u64(&p->val, x, ATOMIC_RELAXED);\n#else\n\tp->val += x;\n#endif\n}\n\nstatic inline void\nlocked_dec_u64(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_u64_t *p,\n    uint64_t x) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\tuint64_t r = atomic_fetch_sub_u64(&p->val, x, ATOMIC_RELAXED);\n\tassert(r - x <= r);\n#else\n\tp->val -= x;\n\tassert(p->val + x >= p->val);\n#endif\n}\n\n/* Increment and take modulus.  Returns whether the modulo made any change.  */\nstatic inline bool\nlocked_inc_mod_u64(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_u64_t *p,\n    const uint64_t x, const uint64_t modulus) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n\tuint64_t before, after;\n\tbool overflow;\n#ifdef JEMALLOC_ATOMIC_U64\n\tbefore = atomic_load_u64(&p->val, ATOMIC_RELAXED);\n\tdo {\n\t\tafter = before + x;\n\t\tassert(after >= before);\n\t\toverflow = (after >= modulus);\n\t\tif (overflow) {\n\t\t\tafter %= modulus;\n\t\t}\n\t} while (!atomic_compare_exchange_weak_u64(&p->val, &before, after,\n\t    ATOMIC_RELAXED, ATOMIC_RELAXED));\n#else\n\tbefore = p->val;\n\tafter = before + x;\n\toverflow = (after >= modulus);\n\tif (overflow) {\n\t\tafter %= modulus;\n\t}\n\tp->val = after;\n#endif\n\treturn overflow;\n}\n\n/*\n * Non-atomically sets *dst += src.  
*dst needs external synchronization.\n * This lets us avoid the cost of a fetch_add when it's unnecessary (note that\n * the types here are atomic).\n */\nstatic inline void\nlocked_inc_u64_unsynchronized(locked_u64_t *dst, uint64_t src) {\n#ifdef JEMALLOC_ATOMIC_U64\n\tuint64_t cur_dst = atomic_load_u64(&dst->val, ATOMIC_RELAXED);\n\tatomic_store_u64(&dst->val, src + cur_dst, ATOMIC_RELAXED);\n#else\n\tdst->val += src;\n#endif\n}\n\nstatic inline uint64_t\nlocked_read_u64_unsynchronized(locked_u64_t *p) {\n#ifdef JEMALLOC_ATOMIC_U64\n\treturn atomic_load_u64(&p->val, ATOMIC_RELAXED);\n#else\n\treturn p->val;\n#endif\n}\n\nstatic inline void\nlocked_init_u64_unsynchronized(locked_u64_t *p, uint64_t x) {\n#ifdef JEMALLOC_ATOMIC_U64\n\tatomic_store_u64(&p->val, x, ATOMIC_RELAXED);\n#else\n\tp->val = x;\n#endif\n}\n\nstatic inline size_t\nlocked_read_zu(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_zu_t *p) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\treturn atomic_load_zu(&p->val, ATOMIC_RELAXED);\n#else\n\treturn atomic_load_zu(&p->val, ATOMIC_RELAXED);\n#endif\n}\n\nstatic inline void\nlocked_inc_zu(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_zu_t *p,\n    size_t x) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\tatomic_fetch_add_zu(&p->val, x, ATOMIC_RELAXED);\n#else\n\tsize_t cur = atomic_load_zu(&p->val, ATOMIC_RELAXED);\n\tatomic_store_zu(&p->val, cur + x, ATOMIC_RELAXED);\n#endif\n}\n\nstatic inline void\nlocked_dec_zu(tsdn_t *tsdn, malloc_mutex_t *mtx, locked_zu_t *p,\n    size_t x) {\n\tLOCKEDINT_MTX_ASSERT_INTERNAL(tsdn, mtx);\n#ifdef JEMALLOC_ATOMIC_U64\n\tsize_t r = atomic_fetch_sub_zu(&p->val, x, ATOMIC_RELAXED);\n\tassert(r - x <= r);\n#else\n\tsize_t cur = atomic_load_zu(&p->val, ATOMIC_RELAXED);\n\tatomic_store_zu(&p->val, cur - x, ATOMIC_RELAXED);\n#endif\n}\n\n/* Like the _u64 variant, needs an externally synchronized *dst. 
*/\nstatic inline void\nlocked_inc_zu_unsynchronized(locked_zu_t *dst, size_t src) {\n\tsize_t cur_dst = atomic_load_zu(&dst->val, ATOMIC_RELAXED);\n\tatomic_store_zu(&dst->val, src + cur_dst, ATOMIC_RELAXED);\n}\n\n/*\n * Unlike the _u64 variant, this is safe to call unconditionally.\n */\nstatic inline size_t\nlocked_read_atomic_zu(locked_zu_t *p) {\n\treturn atomic_load_zu(&p->val, ATOMIC_RELAXED);\n}\n\n#endif /* JEMALLOC_INTERNAL_LOCKEDINT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/log.h",
    "content": "#ifndef JEMALLOC_INTERNAL_LOG_H\n#define JEMALLOC_INTERNAL_LOG_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/mutex.h\"\n\n#ifdef JEMALLOC_LOG\n#  define JEMALLOC_LOG_VAR_BUFSIZE 1000\n#else\n#  define JEMALLOC_LOG_VAR_BUFSIZE 1\n#endif\n\n#define JEMALLOC_LOG_BUFSIZE 4096\n\n/*\n * The log malloc_conf option is a '|'-delimited list of log_var name segments\n * which should be logged.  The names are themselves hierarchical, with '.' as\n * the delimiter (a \"segment\" is just a prefix in the log namespace).  So, if\n * you have:\n *\n * log(\"arena\", \"log msg for arena\"); // 1\n * log(\"arena.a\", \"log msg for arena.a\"); // 2\n * log(\"arena.b\", \"log msg for arena.b\"); // 3\n * log(\"arena.a.a\", \"log msg for arena.a.a\"); // 4\n * log(\"extent.a\", \"log msg for extent.a\"); // 5\n * log(\"extent.b\", \"log msg for extent.b\"); // 6\n *\n * And your malloc_conf option is \"log=arena.a|extent\", then lines 2, 4, 5, and\n * 6 will print at runtime.  You can enable logging from all log vars by\n * writing \"log=.\".\n *\n * None of this should be regarded as a stable API for right now.  It's intended\n * as a debugging interface, to let us keep around some of our printf-debugging\n * statements.\n */\n\nextern char log_var_names[JEMALLOC_LOG_VAR_BUFSIZE];\nextern atomic_b_t log_init_done;\n\ntypedef struct log_var_s log_var_t;\nstruct log_var_s {\n\t/*\n\t * Lowest bit is \"inited\", second lowest is \"enabled\".  
Putting them in\n\t * a single word lets us avoid any fences on weak architectures.\n\t */\n\tatomic_u_t state;\n\tconst char *name;\n};\n\n#define LOG_NOT_INITIALIZED 0U\n#define LOG_INITIALIZED_NOT_ENABLED 1U\n#define LOG_ENABLED 2U\n\n#define LOG_VAR_INIT(name_str) {ATOMIC_INIT(LOG_NOT_INITIALIZED), name_str}\n\n/*\n * Returns the value we should assume for state (which is not necessarily\n * accurate; if logging is done before logging has finished initializing, then\n * we default to doing the safe thing by logging everything).\n */\nunsigned log_var_update_state(log_var_t *log_var);\n\n/* We factor out the metadata management to allow us to test more easily. */\n#define log_do_begin(log_var)\t\t\t\t\t\t\\\nif (config_log) {\t\t\t\t\t\t\t\\\n\tunsigned log_state = atomic_load_u(&(log_var).state,\t\t\\\n\t    ATOMIC_RELAXED);\t\t\t\t\t\t\\\n\tif (unlikely(log_state == LOG_NOT_INITIALIZED)) {\t\t\\\n\t\tlog_state = log_var_update_state(&(log_var));\t\t\\\n\t\tassert(log_state != LOG_NOT_INITIALIZED);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (log_state == LOG_ENABLED) {\t\t\t\t\t\\\n\t\t{\n\t\t\t/* User code executes here. */\n#define log_do_end(log_var)\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\n\n/*\n * MSVC has some preprocessor bugs in its expansion of __VA_ARGS__ during\n * preprocessing.  To work around this, we take all potential extra arguments in\n * a var-args functions.  Since a varargs macro needs at least one argument in\n * the \"...\", we accept the format string there, and require that the first\n * argument in this \"...\" is a const char *.\n */\nstatic inline void\nlog_impl_varargs(const char *name, ...) 
{\n\tchar buf[JEMALLOC_LOG_BUFSIZE];\n\tva_list ap;\n\n\tva_start(ap, name);\n\tconst char *format = va_arg(ap, const char *);\n\tsize_t dst_offset = 0;\n\tdst_offset += malloc_snprintf(buf, JEMALLOC_LOG_BUFSIZE, \"%s: \", name);\n\tdst_offset += malloc_vsnprintf(buf + dst_offset,\n\t    JEMALLOC_LOG_BUFSIZE - dst_offset, format, ap);\n\tdst_offset += malloc_snprintf(buf + dst_offset,\n\t    JEMALLOC_LOG_BUFSIZE - dst_offset, \"\\n\");\n\tva_end(ap);\n\n\tmalloc_write(buf);\n}\n\n/* Call as log(\"log.var.str\", \"format_string %d\", arg_for_format_string); */\n#define LOG(log_var_str, ...)\t\t\t\t\t\t\\\ndo {\t\t\t\t\t\t\t\t\t\\\n\tstatic log_var_t log_var = LOG_VAR_INIT(log_var_str);\t\t\\\n\tlog_do_begin(log_var)\t\t\t\t\t\t\\\n\t\tlog_impl_varargs((log_var).name, __VA_ARGS__);\t\t\\\n\tlog_do_end(log_var)\t\t\t\t\t\t\\\n} while (0)\n\n#endif /* JEMALLOC_INTERNAL_LOG_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/malloc_io.h",
    "content": "#ifndef JEMALLOC_INTERNAL_MALLOC_IO_H\n#define JEMALLOC_INTERNAL_MALLOC_IO_H\n\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n\n#ifdef _WIN32\n#  ifdef _WIN64\n#    define FMT64_PREFIX \"ll\"\n#    define FMTPTR_PREFIX \"ll\"\n#  else\n#    define FMT64_PREFIX \"ll\"\n#    define FMTPTR_PREFIX \"\"\n#  endif\n#  define FMTd32 \"d\"\n#  define FMTu32 \"u\"\n#  define FMTx32 \"x\"\n#  define FMTd64 FMT64_PREFIX \"d\"\n#  define FMTu64 FMT64_PREFIX \"u\"\n#  define FMTx64 FMT64_PREFIX \"x\"\n#  define FMTdPTR FMTPTR_PREFIX \"d\"\n#  define FMTuPTR FMTPTR_PREFIX \"u\"\n#  define FMTxPTR FMTPTR_PREFIX \"x\"\n#else\n#  include <inttypes.h>\n#  define FMTd32 PRId32\n#  define FMTu32 PRIu32\n#  define FMTx32 PRIx32\n#  define FMTd64 PRId64\n#  define FMTu64 PRIu64\n#  define FMTx64 PRIx64\n#  define FMTdPTR PRIdPTR\n#  define FMTuPTR PRIuPTR\n#  define FMTxPTR PRIxPTR\n#endif\n\n/* Size of stack-allocated buffer passed to buferror(). */\n#define BUFERROR_BUF\t\t64\n\n/*\n * Size of stack-allocated buffer used by malloc_{,v,vc}printf().  This must be\n * large enough for all possible uses within jemalloc.\n */\n#define MALLOC_PRINTF_BUFSIZE\t4096\n\nwrite_cb_t wrtmessage;\nint buferror(int err, char *buf, size_t buflen);\nuintmax_t malloc_strtoumax(const char *restrict nptr, char **restrict endptr,\n    int base);\nvoid malloc_write(const char *s);\n\n/*\n * malloc_vsnprintf() supports a subset of snprintf(3) that avoids floating\n * point math.\n */\nsize_t malloc_vsnprintf(char *str, size_t size, const char *format,\n    va_list ap);\nsize_t malloc_snprintf(char *str, size_t size, const char *format, ...)\n    JEMALLOC_FORMAT_PRINTF(3, 4);\n/*\n * The caller can set write_cb to null to choose to print with the\n * je_malloc_message hook.\n */\nvoid malloc_vcprintf(write_cb_t *write_cb, void *cbopaque, const char *format,\n    va_list ap);\nvoid malloc_cprintf(write_cb_t *write_cb, void *cbopaque, const char *format,\n    ...) 
JEMALLOC_FORMAT_PRINTF(3, 4);\nvoid malloc_printf(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);\n\nstatic inline ssize_t\nmalloc_write_fd(int fd, const void *buf, size_t count) {\n#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_write)\n\t/*\n\t * Use syscall(2) rather than write(2) when possible in order to avoid\n\t * the possibility of memory allocation within libc.  This is necessary\n\t * on FreeBSD; most operating systems do not have this problem though.\n\t *\n\t * syscall() returns long or int, depending on platform, so capture the\n\t * result in the widest plausible type to avoid compiler warnings.\n\t */\n\tlong result = syscall(SYS_write, fd, buf, count);\n#else\n\tssize_t result = (ssize_t)write(fd, buf,\n#ifdef _WIN32\n\t    (unsigned int)\n#endif\n\t    count);\n#endif\n\treturn (ssize_t)result;\n}\n\nstatic inline ssize_t\nmalloc_read_fd(int fd, void *buf, size_t count) {\n#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_read)\n\tlong result = syscall(SYS_read, fd, buf, count);\n#else\n\tssize_t result = read(fd, buf,\n#ifdef _WIN32\n\t    (unsigned int)\n#endif\n\t    count);\n#endif\n\treturn (ssize_t)result;\n}\n\n#endif /* JEMALLOC_INTERNAL_MALLOC_IO_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/mpsc_queue.h",
    "content": "#ifndef JEMALLOC_INTERNAL_MPSC_QUEUE_H\n#define JEMALLOC_INTERNAL_MPSC_QUEUE_H\n\n#include \"jemalloc/internal/atomic.h\"\n\n/*\n * A concurrent implementation of a multi-producer, single-consumer queue.  It\n * supports three concurrent operations:\n * - Push\n * - Push batch\n * - Pop batch\n *\n * These operations are all lock-free.\n *\n * The implementation is the simple two-stack queue built on a Treiber stack.\n * It's not terribly efficient, but this isn't expected to go into anywhere with\n * hot code.  In fact, we don't really even need queue semantics in any\n * anticipated use cases; we could get away with just the stack.  But this way\n * lets us frame the API in terms of the existing list types, which is a nice\n * convenience.  We can save on cache misses by introducing our own (parallel)\n * single-linked list type here, and dropping FIFO semantics, if we need this to\n * get faster.  Since we're currently providing queue semantics though, we use\n * the prev field in the link rather than the next field for Treiber-stack\n * linkage, so that we can preserve order for batch-pushed lists (recall that\n * the two-stack trick reverses orders in the lock-free first stack).\n */\n\n#define mpsc_queue(a_type)\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n\tatomic_p_t tail;\t\t\t\t\t\t\\\n}\n\n#define mpsc_queue_proto(a_attr, a_prefix, a_queue_type, a_type,\t\\\n    a_list_type)\t\t\t\t\t\t\t\\\n/* Initialize a queue. */\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##new(a_queue_type *queue);\t\t\t\t\t\\\n/* Insert all items in src into the queue, clearing src. */\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##push_batch(a_queue_type *queue, a_list_type *src);\t\t\\\n/* Insert node into the queue. */\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##push(a_queue_type *queue, a_type *node);\t\t\t\\\n/*\t\t\t\t\t\t\t\t\t\\\n * Pop all items in the queue into the list at dst.  
dst should already\t\\\n * be initialized (and may contain existing items, which then remain\t\\\n * in dst).\t\t\t\t\t\t\t\t\\\n */\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##pop_batch(a_queue_type *queue, a_list_type *dst);\n\n#define mpsc_queue_gen(a_attr, a_prefix, a_queue_type, a_type,\t\t\\\n    a_list_type, a_link)\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##new(a_queue_type *queue) {\t\t\t\t\t\\\n\tatomic_store_p(&queue->tail, NULL, ATOMIC_RELAXED);\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##push_batch(a_queue_type *queue, a_list_type *src) {\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Reuse the ql list next field as the Treiber stack next\t\\\n\t * field.\t\t\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\ta_type *first = ql_first(src);\t\t\t\t\t\\\n\ta_type *last = ql_last(src, a_link);\t\t\t\t\\\n\tvoid* cur_tail = atomic_load_p(&queue->tail, ATOMIC_RELAXED);\t\\\n\tdo {\t\t\t\t\t\t\t\t\\\n\t\t/*\t\t\t\t\t\t\t\\\n\t\t * Note that this breaks the queue ring structure;\t\\\n\t\t * it's not a ring any more!\t\t\t\t\\\n\t\t */\t\t\t\t\t\t\t\\\n\t\tfirst->a_link.qre_prev = cur_tail;\t\t\t\\\n\t\t/*\t\t\t\t\t\t\t\\\n\t\t * Note: the upcoming CAS doesn't need an atomic; every\t\\\n\t\t * push only needs to synchronize with the next pop,\t\\\n\t\t * which we get from the release sequence rules.\t\\\n\t\t */\t\t\t\t\t\t\t\\\n\t} while (!atomic_compare_exchange_weak_p(&queue->tail,\t\t\\\n\t    &cur_tail, last, ATOMIC_RELEASE, ATOMIC_RELAXED));\t\t\\\n\tql_new(src);\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##push(a_queue_type *queue, a_type *node) {\t\t\t\\\n\tql_elm_new(node, a_link);\t\t\t\t\t\\\n\ta_list_type list;\t\t\t\t\t\t\\\n\tql_new(&list);\t\t\t\t\t\t\t\\\n\tql_head_insert(&list, node, a_link);\t\t\t\t\\\n\ta_prefix##push_batch(queue, &list);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##pop_batch(a_queue_type *queue, a_list_type *dst) 
{\t\t\\\n\ta_type *tail = atomic_load_p(&queue->tail, ATOMIC_RELAXED);\t\\\n\tif (tail == NULL) {\t\t\t\t\t\t\\\n\t\t/*\t\t\t\t\t\t\t\\\n\t\t * In the common special case where there are no\t\\\n\t\t * pending elements, bail early without a costly RMW.\t\\\n\t\t */\t\t\t\t\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ttail = atomic_exchange_p(&queue->tail, NULL, ATOMIC_ACQUIRE);\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * It's a single-consumer queue, so if tail started non-NULL,\t\\\n\t * it'd better stay non-NULL.\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tassert(tail != NULL);\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * We iterate through the stack, fixing up the link\t\t\\\n\t * structure (stack insertion broke the list requirement that\t\\\n\t * the list be circularly linked).  It's just as efficient at\t\\\n\t * this point to make the queue a \"real\" queue, so do that as\t\\\n\t * well.\t\t\t\t\t\t\t\\\n\t * If this ever gets to be a hot spot, we can omit this fixup\t\\\n\t * and make the queue a bag (i.e. not necessarily ordered), but\t\\\n\t * that would mean jettisoning the existing list API as the\t\\\n\t * batch pushing/popping interface.\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\ta_list_type reversed;\t\t\t\t\t\t\\\n\tql_new(&reversed);\t\t\t\t\t\t\\\n\twhile (tail != NULL) {\t\t\t\t\t\t\\\n\t\t/*\t\t\t\t\t\t\t\\\n\t\t * Pop an item off the stack, prepend it onto the list\t\\\n\t\t * (reversing the order).  Recall that we use the\t\\\n\t\t * list prev field as the Treiber stack next field to\t\\\n\t\t * preserve order of batch-pushed items when reversed.\t\\\n\t\t */\t\t\t\t\t\t\t\\\n\t\ta_type *next = tail->a_link.qre_prev;\t\t\t\\\n\t\tql_elm_new(tail, a_link);\t\t\t\t\\\n\t\tql_head_insert(&reversed, tail, a_link);\t\t\\\n\t\ttail = next;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tql_concat(dst, &reversed, a_link);\t\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_MPSC_QUEUE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/mutex.h",
    "content": "#ifndef JEMALLOC_INTERNAL_MUTEX_H\n#define JEMALLOC_INTERNAL_MUTEX_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/mutex_prof.h\"\n#include \"jemalloc/internal/tsd.h\"\n#include \"jemalloc/internal/witness.h\"\n\nextern int64_t opt_mutex_max_spin;\n\ntypedef enum {\n\t/* Can only acquire one mutex of a given witness rank at a time. */\n\tmalloc_mutex_rank_exclusive,\n\t/*\n\t * Can acquire multiple mutexes of the same witness rank, but in\n\t * address-ascending order only.\n\t */\n\tmalloc_mutex_address_ordered\n} malloc_mutex_lock_order_t;\n\ntypedef struct malloc_mutex_s malloc_mutex_t;\nstruct malloc_mutex_s {\n\tunion {\n\t\tstruct {\n\t\t\t/*\n\t\t\t * prof_data is defined first to reduce cacheline\n\t\t\t * bouncing: the data is not touched by the mutex holder\n\t\t\t * during unlocking, while it might be modified by\n\t\t\t * contenders.  Having it before the mutex itself could\n\t\t\t * avoid prefetching a modified cacheline (for the\n\t\t\t * unlocking thread).\n\t\t\t */\n\t\t\tmutex_prof_data_t\tprof_data;\n#ifdef _WIN32\n#  if _WIN32_WINNT >= 0x0600\n\t\t\tSRWLOCK         \tlock;\n#  else\n\t\t\tCRITICAL_SECTION\tlock;\n#  endif\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n\t\t\tos_unfair_lock\t\tlock;\n#elif (defined(JEMALLOC_MUTEX_INIT_CB))\n\t\t\tpthread_mutex_t\t\tlock;\n\t\t\tmalloc_mutex_t\t\t*postponed_next;\n#else\n\t\t\tpthread_mutex_t\t\tlock;\n#endif\n\t\t\t/*\n\t\t\t * Hint flag to avoid exclusive cache line contention\n\t\t\t * during spin waiting.\n\t\t\t */\n\t\t\tatomic_b_t\t\tlocked;\n\t\t};\n\t\t/*\n\t\t * We only touch witness when configured w/ debug.  
However we\n\t\t * keep the field in a union when !debug so that we don't have\n\t\t * to pollute the code base with #ifdefs, while avoiding the\n\t\t * memory cost.\n\t\t */\n#if !defined(JEMALLOC_DEBUG)\n\t\twitness_t\t\t\twitness;\n\t\tmalloc_mutex_lock_order_t\tlock_order;\n#endif\n\t};\n\n#if defined(JEMALLOC_DEBUG)\n\twitness_t\t\t\twitness;\n\tmalloc_mutex_lock_order_t\tlock_order;\n#endif\n};\n\n#ifdef _WIN32\n#  if _WIN32_WINNT >= 0x0600\n#    define MALLOC_MUTEX_LOCK(m)    AcquireSRWLockExclusive(&(m)->lock)\n#    define MALLOC_MUTEX_UNLOCK(m)  ReleaseSRWLockExclusive(&(m)->lock)\n#    define MALLOC_MUTEX_TRYLOCK(m) (!TryAcquireSRWLockExclusive(&(m)->lock))\n#  else\n#    define MALLOC_MUTEX_LOCK(m)    EnterCriticalSection(&(m)->lock)\n#    define MALLOC_MUTEX_UNLOCK(m)  LeaveCriticalSection(&(m)->lock)\n#    define MALLOC_MUTEX_TRYLOCK(m) (!TryEnterCriticalSection(&(m)->lock))\n#  endif\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n#    define MALLOC_MUTEX_LOCK(m)    os_unfair_lock_lock(&(m)->lock)\n#    define MALLOC_MUTEX_UNLOCK(m)  os_unfair_lock_unlock(&(m)->lock)\n#    define MALLOC_MUTEX_TRYLOCK(m) (!os_unfair_lock_trylock(&(m)->lock))\n#else\n#    define MALLOC_MUTEX_LOCK(m)    pthread_mutex_lock(&(m)->lock)\n#    define MALLOC_MUTEX_UNLOCK(m)  pthread_mutex_unlock(&(m)->lock)\n#    define MALLOC_MUTEX_TRYLOCK(m) (pthread_mutex_trylock(&(m)->lock) != 0)\n#endif\n\n#define LOCK_PROF_DATA_INITIALIZER\t\t\t\t\t\\\n    {NSTIME_ZERO_INITIALIZER, NSTIME_ZERO_INITIALIZER, 0, 0, 0,\t\t\\\n\t    ATOMIC_INIT(0), 0, NULL, 0}\n\n#ifdef _WIN32\n#  define MALLOC_MUTEX_INITIALIZER\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n#  if defined(JEMALLOC_DEBUG)\n#    define MALLOC_MUTEX_INITIALIZER\t\t\t\t\t\\\n  {{{LOCK_PROF_DATA_INITIALIZER, OS_UNFAIR_LOCK_INIT, ATOMIC_INIT(false)}}, \\\n         WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT), 0}\n#  else\n#    define MALLOC_MUTEX_INITIALIZER                      \\\n  {{{LOCK_PROF_DATA_INITIALIZER, 
OS_UNFAIR_LOCK_INIT, ATOMIC_INIT(false)}},  \\\n      WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT)}\n#  endif\n#elif (defined(JEMALLOC_MUTEX_INIT_CB))\n#  if (defined(JEMALLOC_DEBUG))\n#     define MALLOC_MUTEX_INITIALIZER\t\t\t\t\t\\\n      {{{LOCK_PROF_DATA_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, NULL, ATOMIC_INIT(false)}},\t\\\n           WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT), 0}\n#  else\n#     define MALLOC_MUTEX_INITIALIZER\t\t\t\t\t\\\n      {{{LOCK_PROF_DATA_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, NULL, ATOMIC_INIT(false)}},\t\\\n           WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT)}\n#  endif\n\n#else\n#    define MALLOC_MUTEX_TYPE PTHREAD_MUTEX_DEFAULT\n#  if defined(JEMALLOC_DEBUG)\n#    define MALLOC_MUTEX_INITIALIZER\t\t\t\t\t\\\n     {{{LOCK_PROF_DATA_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, ATOMIC_INIT(false)}}, \\\n           WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT), 0}\n#  else\n#    define MALLOC_MUTEX_INITIALIZER                          \\\n     {{{LOCK_PROF_DATA_INITIALIZER, PTHREAD_MUTEX_INITIALIZER, ATOMIC_INIT(false)}},\t\\\n      WITNESS_INITIALIZER(\"mutex\", WITNESS_RANK_OMIT)}\n#  endif\n#endif\n\n#ifdef JEMALLOC_LAZY_LOCK\nextern bool isthreaded;\n#else\n#  undef isthreaded /* Undo private_namespace.h definition. 
*/\n#  define isthreaded true\n#endif\n\nbool malloc_mutex_init(malloc_mutex_t *mutex, const char *name,\n    witness_rank_t rank, malloc_mutex_lock_order_t lock_order);\nvoid malloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex);\nvoid malloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex);\nvoid malloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex);\nbool malloc_mutex_boot(void);\nvoid malloc_mutex_prof_data_reset(tsdn_t *tsdn, malloc_mutex_t *mutex);\n\nvoid malloc_mutex_lock_slow(malloc_mutex_t *mutex);\n\nstatic inline void\nmalloc_mutex_lock_final(malloc_mutex_t *mutex) {\n\tMALLOC_MUTEX_LOCK(mutex);\n\tatomic_store_b(&mutex->locked, true, ATOMIC_RELAXED);\n}\n\nstatic inline bool\nmalloc_mutex_trylock_final(malloc_mutex_t *mutex) {\n\treturn MALLOC_MUTEX_TRYLOCK(mutex);\n}\n\nstatic inline void\nmutex_owner_stats_update(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\tif (config_stats) {\n\t\tmutex_prof_data_t *data = &mutex->prof_data;\n\t\tdata->n_lock_ops++;\n\t\tif (data->prev_owner != tsdn) {\n\t\t\tdata->prev_owner = tsdn;\n\t\t\tdata->n_owner_switches++;\n\t\t}\n\t}\n}\n\n/* Trylock: return false if the lock is successfully acquired. */\nstatic inline bool\nmalloc_mutex_trylock(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\twitness_assert_not_owner(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n\tif (isthreaded) {\n\t\tif (malloc_mutex_trylock_final(mutex)) {\n\t\t\tatomic_store_b(&mutex->locked, true, ATOMIC_RELAXED);\n\t\t\treturn true;\n\t\t}\n\t\tmutex_owner_stats_update(tsdn, mutex);\n\t}\n\twitness_lock(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n\n\treturn false;\n}\n\n/* Aggregate lock prof data. 
*/\nstatic inline void\nmalloc_mutex_prof_merge(mutex_prof_data_t *sum, mutex_prof_data_t *data) {\n\tnstime_add(&sum->tot_wait_time, &data->tot_wait_time);\n\tif (nstime_compare(&sum->max_wait_time, &data->max_wait_time) < 0) {\n\t\tnstime_copy(&sum->max_wait_time, &data->max_wait_time);\n\t}\n\n\tsum->n_wait_times += data->n_wait_times;\n\tsum->n_spin_acquired += data->n_spin_acquired;\n\n\tif (sum->max_n_thds < data->max_n_thds) {\n\t\tsum->max_n_thds = data->max_n_thds;\n\t}\n\tuint32_t cur_n_waiting_thds = atomic_load_u32(&sum->n_waiting_thds,\n\t    ATOMIC_RELAXED);\n\tuint32_t new_n_waiting_thds = cur_n_waiting_thds + atomic_load_u32(\n\t    &data->n_waiting_thds, ATOMIC_RELAXED);\n\tatomic_store_u32(&sum->n_waiting_thds, new_n_waiting_thds,\n\t    ATOMIC_RELAXED);\n\tsum->n_owner_switches += data->n_owner_switches;\n\tsum->n_lock_ops += data->n_lock_ops;\n}\n\nstatic inline void\nmalloc_mutex_lock(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\twitness_assert_not_owner(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n\tif (isthreaded) {\n\t\tif (malloc_mutex_trylock_final(mutex)) {\n\t\t\tmalloc_mutex_lock_slow(mutex);\n\t\t\tatomic_store_b(&mutex->locked, true, ATOMIC_RELAXED);\n\t\t}\n\t\tmutex_owner_stats_update(tsdn, mutex);\n\t}\n\twitness_lock(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n}\n\nstatic inline void\nmalloc_mutex_unlock(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\tatomic_store_b(&mutex->locked, false, ATOMIC_RELAXED);\n\twitness_unlock(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n\tif (isthreaded) {\n\t\tMALLOC_MUTEX_UNLOCK(mutex);\n\t}\n}\n\nstatic inline void\nmalloc_mutex_assert_owner(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\twitness_assert_owner(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n}\n\nstatic inline void\nmalloc_mutex_assert_not_owner(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\twitness_assert_not_owner(tsdn_witness_tsdp_get(tsdn), &mutex->witness);\n}\n\nstatic inline void\nmalloc_mutex_prof_copy(mutex_prof_data_t *dst, 
mutex_prof_data_t *source) {\n\t/*\n\t * Not *really* allowed (we shouldn't be doing non-atomic loads of\n\t * atomic data), but the mutex protection makes this safe, and writing\n\t * a member-for-member copy is tedious for this situation.\n\t */\n\t*dst = *source;\n\t/* n_wait_thds is not reported (modified w/o locking). */\n\tatomic_store_u32(&dst->n_waiting_thds, 0, ATOMIC_RELAXED);\n}\n\n/* Copy the prof data from mutex for processing. */\nstatic inline void\nmalloc_mutex_prof_read(tsdn_t *tsdn, mutex_prof_data_t *data,\n    malloc_mutex_t *mutex) {\n\t/* Can only read holding the mutex. */\n\tmalloc_mutex_assert_owner(tsdn, mutex);\n\tmalloc_mutex_prof_copy(data, &mutex->prof_data);\n}\n\nstatic inline void\nmalloc_mutex_prof_accum(tsdn_t *tsdn, mutex_prof_data_t *data,\n    malloc_mutex_t *mutex) {\n\tmutex_prof_data_t *source = &mutex->prof_data;\n\t/* Can only read holding the mutex. */\n\tmalloc_mutex_assert_owner(tsdn, mutex);\n\n\tnstime_add(&data->tot_wait_time, &source->tot_wait_time);\n\tif (nstime_compare(&source->max_wait_time, &data->max_wait_time) > 0) {\n\t\tnstime_copy(&data->max_wait_time, &source->max_wait_time);\n\t}\n\tdata->n_wait_times += source->n_wait_times;\n\tdata->n_spin_acquired += source->n_spin_acquired;\n\tif (data->max_n_thds < source->max_n_thds) {\n\t\tdata->max_n_thds = source->max_n_thds;\n\t}\n\t/* n_wait_thds is not reported. */\n\tatomic_store_u32(&data->n_waiting_thds, 0, ATOMIC_RELAXED);\n\tdata->n_owner_switches += source->n_owner_switches;\n\tdata->n_lock_ops += source->n_lock_ops;\n}\n\n/* Compare the prof data and update to the maximum. */\nstatic inline void\nmalloc_mutex_prof_max_update(tsdn_t *tsdn, mutex_prof_data_t *data,\n    malloc_mutex_t *mutex) {\n\tmutex_prof_data_t *source = &mutex->prof_data;\n\t/* Can only read holding the mutex. 
*/\n\tmalloc_mutex_assert_owner(tsdn, mutex);\n\n\tif (nstime_compare(&source->tot_wait_time, &data->tot_wait_time) > 0) {\n\t\tnstime_copy(&data->tot_wait_time, &source->tot_wait_time);\n\t}\n\tif (nstime_compare(&source->max_wait_time, &data->max_wait_time) > 0) {\n\t\tnstime_copy(&data->max_wait_time, &source->max_wait_time);\n\t}\n\tif (source->n_wait_times > data->n_wait_times) {\n\t\tdata->n_wait_times = source->n_wait_times;\n\t}\n\tif (source->n_spin_acquired > data->n_spin_acquired) {\n\t\tdata->n_spin_acquired = source->n_spin_acquired;\n\t}\n\tif (source->max_n_thds > data->max_n_thds) {\n\t\tdata->max_n_thds = source->max_n_thds;\n\t}\n\tif (source->n_owner_switches > data->n_owner_switches) {\n\t\tdata->n_owner_switches = source->n_owner_switches;\n\t}\n\tif (source->n_lock_ops > data->n_lock_ops) {\n\t\tdata->n_lock_ops = source->n_lock_ops;\n\t}\n\t/* n_wait_thds is not reported. */\n}\n\n#endif /* JEMALLOC_INTERNAL_MUTEX_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/mutex_prof.h",
    "content": "#ifndef JEMALLOC_INTERNAL_MUTEX_PROF_H\n#define JEMALLOC_INTERNAL_MUTEX_PROF_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/nstime.h\"\n#include \"jemalloc/internal/tsd_types.h\"\n\n#define MUTEX_PROF_GLOBAL_MUTEXES\t\t\t\t\t\\\n    OP(background_thread)\t\t\t\t\t\t\\\n    OP(max_per_bg_thd)\t\t\t\t\t\t\t\\\n    OP(ctl)\t\t\t\t\t\t\t\t\\\n    OP(prof)\t\t\t\t\t\t\t\t\\\n    OP(prof_thds_data)\t\t\t\t\t\t\t\\\n    OP(prof_dump)\t\t\t\t\t\t\t\\\n    OP(prof_recent_alloc)\t\t\t\t\t\t\\\n    OP(prof_recent_dump)\t\t\t\t\t\t\\\n    OP(prof_stats)\n\ntypedef enum {\n#define OP(mtx) global_prof_mutex_##mtx,\n\tMUTEX_PROF_GLOBAL_MUTEXES\n#undef OP\n\tmutex_prof_num_global_mutexes\n} mutex_prof_global_ind_t;\n\n#define MUTEX_PROF_ARENA_MUTEXES\t\t\t\t\t\\\n    OP(large)\t\t\t\t\t\t\t\t\\\n    OP(extent_avail)\t\t\t\t\t\t\t\\\n    OP(extents_dirty)\t\t\t\t\t\t\t\\\n    OP(extents_muzzy)\t\t\t\t\t\t\t\\\n    OP(extents_retained)\t\t\t\t\t\t\\\n    OP(decay_dirty)\t\t\t\t\t\t\t\\\n    OP(decay_muzzy)\t\t\t\t\t\t\t\\\n    OP(base)\t\t\t\t\t\t\t\t\\\n    OP(tcache_list)\t\t\t\t\t\t\t\\\n    OP(hpa_shard)\t\t\t\t\t\t\t\\\n    OP(hpa_shard_grow)\t\t\t\t\t\t\t\\\n    OP(hpa_sec)\n\ntypedef enum {\n#define OP(mtx) arena_prof_mutex_##mtx,\n\tMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n\tmutex_prof_num_arena_mutexes\n} mutex_prof_arena_ind_t;\n\n/*\n * The fourth parameter is a boolean value that is true for derived rate counters\n * and false for real ones.\n */\n#define MUTEX_PROF_UINT64_COUNTERS\t\t\t\t\t\\\n    OP(num_ops, uint64_t, \"n_lock_ops\", false, num_ops)\t\t\t\t\t\\\n    OP(num_ops_ps, uint64_t, \"(#/sec)\", true, num_ops)\t\t\t\t\\\n    OP(num_wait, uint64_t, \"n_waiting\", false, num_wait)\t\t\t\t\\\n    OP(num_wait_ps, uint64_t, \"(#/sec)\", true, num_wait)\t\t\t\t\\\n    OP(num_spin_acq, uint64_t, \"n_spin_acq\", false, num_spin_acq)\t\t\t\\\n    OP(num_spin_acq_ps, uint64_t, \"(#/sec)\", true, num_spin_acq)\t\t\t\\\n    
OP(num_owner_switch, uint64_t, \"n_owner_switch\", false, num_owner_switch)\t\t\\\n    OP(num_owner_switch_ps, uint64_t, \"(#/sec)\", true, num_owner_switch)\t\\\n    OP(total_wait_time, uint64_t, \"total_wait_ns\", false, total_wait_time)\t\t\\\n    OP(total_wait_time_ps, uint64_t, \"(#/sec)\", true, total_wait_time)\t\t\\\n    OP(max_wait_time, uint64_t, \"max_wait_ns\", false, max_wait_time)\n\n#define MUTEX_PROF_UINT32_COUNTERS\t\t\t\t\t\\\n    OP(max_num_thds, uint32_t, \"max_n_thds\", false, max_num_thds)\n\n#define MUTEX_PROF_COUNTERS\t\t\t\t\t\t\\\n\t\tMUTEX_PROF_UINT64_COUNTERS\t\t\t\t\\\n\t\tMUTEX_PROF_UINT32_COUNTERS\n\n#define OP(counter, type, human, derived, base_counter) mutex_counter_##counter,\n\n#define COUNTER_ENUM(counter_list, t)\t\t\t\t\t\\\n\t\ttypedef enum {\t\t\t\t\t\t\\\n\t\t\tcounter_list\t\t\t\t\t\\\n\t\t\tmutex_prof_num_##t##_counters\t\t\t\\\n\t\t} mutex_prof_##t##_counter_ind_t;\n\nCOUNTER_ENUM(MUTEX_PROF_UINT64_COUNTERS, uint64_t)\nCOUNTER_ENUM(MUTEX_PROF_UINT32_COUNTERS, uint32_t)\n\n#undef COUNTER_ENUM\n#undef OP\n\ntypedef struct {\n\t/*\n\t * Counters touched on the slow path, i.e. when there is lock\n\t * contention.  We update them once we have the lock.\n\t */\n\t/* Total time (in nanoseconds) spent waiting on this mutex. */\n\tnstime_t\t\ttot_wait_time;\n\t/* Max time (in nanoseconds) spent on a single lock operation. */\n\tnstime_t\t\tmax_wait_time;\n\t/* # of times we had to wait for this mutex (after spinning). */\n\tuint64_t\t\tn_wait_times;\n\t/* # of times acquired the mutex through local spinning. */\n\tuint64_t\t\tn_spin_acquired;\n\t/* Max # of threads waiting for the mutex at the same time. */\n\tuint32_t\t\tmax_n_thds;\n\t/* Current # of threads waiting on the lock.  Atomic synced. */\n\tatomic_u32_t\t\tn_waiting_thds;\n\n\t/*\n\t * Data touched on the fast path.  These are modified right after we\n\t * grab the lock, so it's placed closest to the end (i.e. 
right before\n\t * the lock) so that we have a higher chance of them being on the same\n\t * cacheline.\n\t */\n\t/* # of times the mutex holder is different than the previous one. */\n\tuint64_t\t\tn_owner_switches;\n\t/* Previous mutex holder, to facilitate n_owner_switches. */\n\ttsdn_t\t\t\t*prev_owner;\n\t/* # of lock() operations in total. */\n\tuint64_t\t\tn_lock_ops;\n} mutex_prof_data_t;\n\n#endif /* JEMALLOC_INTERNAL_MUTEX_PROF_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/nstime.h",
    "content": "#ifndef JEMALLOC_INTERNAL_NSTIME_H\n#define JEMALLOC_INTERNAL_NSTIME_H\n\n/* Maximum supported number of seconds (~584 years). */\n#define NSTIME_SEC_MAX KQU(18446744072)\n\n#define NSTIME_MAGIC ((uint32_t)0xb8a9ce37)\n#ifdef JEMALLOC_DEBUG\n#  define NSTIME_ZERO_INITIALIZER {0, NSTIME_MAGIC}\n#else\n#  define NSTIME_ZERO_INITIALIZER {0}\n#endif\n\ntypedef struct {\n\tuint64_t ns;\n#ifdef JEMALLOC_DEBUG\n\tuint32_t magic; /* Tracks if initialized. */\n#endif\n} nstime_t;\n\nstatic const nstime_t nstime_zero = NSTIME_ZERO_INITIALIZER;\n\nvoid nstime_init(nstime_t *time, uint64_t ns);\nvoid nstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec);\nuint64_t nstime_ns(const nstime_t *time);\nuint64_t nstime_sec(const nstime_t *time);\nuint64_t nstime_msec(const nstime_t *time);\nuint64_t nstime_nsec(const nstime_t *time);\nvoid nstime_copy(nstime_t *time, const nstime_t *source);\nint nstime_compare(const nstime_t *a, const nstime_t *b);\nvoid nstime_add(nstime_t *time, const nstime_t *addend);\nvoid nstime_iadd(nstime_t *time, uint64_t addend);\nvoid nstime_subtract(nstime_t *time, const nstime_t *subtrahend);\nvoid nstime_isubtract(nstime_t *time, uint64_t subtrahend);\nvoid nstime_imultiply(nstime_t *time, uint64_t multiplier);\nvoid nstime_idivide(nstime_t *time, uint64_t divisor);\nuint64_t nstime_divide(const nstime_t *time, const nstime_t *divisor);\nuint64_t nstime_ns_since(const nstime_t *past);\n\ntypedef bool (nstime_monotonic_t)(void);\nextern nstime_monotonic_t *JET_MUTABLE nstime_monotonic;\n\ntypedef void (nstime_update_t)(nstime_t *);\nextern nstime_update_t *JET_MUTABLE nstime_update;\n\ntypedef void (nstime_prof_update_t)(nstime_t *);\nextern nstime_prof_update_t *JET_MUTABLE nstime_prof_update;\n\nvoid nstime_init_update(nstime_t *time);\nvoid nstime_prof_init_update(nstime_t *time);\n\nenum prof_time_res_e {\n\tprof_time_res_default = 0,\n\tprof_time_res_high = 1\n};\ntypedef enum prof_time_res_e prof_time_res_t;\n\nextern 
prof_time_res_t opt_prof_time_res;\nextern const char *prof_time_res_mode_names[];\n\nJEMALLOC_ALWAYS_INLINE void\nnstime_init_zero(nstime_t *time) {\n\tnstime_copy(time, &nstime_zero);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nnstime_equals_zero(nstime_t *time) {\n\tint diff = nstime_compare(time, &nstime_zero);\n\tassert(diff >= 0);\n\treturn diff == 0;\n}\n\n#endif /* JEMALLOC_INTERNAL_NSTIME_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/pa.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PA_H\n#define JEMALLOC_INTERNAL_PA_H\n\n#include \"jemalloc/internal/base.h\"\n#include \"jemalloc/internal/decay.h\"\n#include \"jemalloc/internal/ecache.h\"\n#include \"jemalloc/internal/edata_cache.h\"\n#include \"jemalloc/internal/emap.h\"\n#include \"jemalloc/internal/hpa.h\"\n#include \"jemalloc/internal/lockedint.h\"\n#include \"jemalloc/internal/pac.h\"\n#include \"jemalloc/internal/pai.h\"\n#include \"jemalloc/internal/sec.h\"\n\n/*\n * The page allocator; responsible for acquiring pages of memory for\n * allocations.  It picks the implementation of the page allocator interface\n * (i.e. a pai_t) to handle a given page-level allocation request.  For now, the\n * only such implementation is the PAC code (\"page allocator classic\"), but\n * others will be coming soon.\n */\n\ntypedef struct pa_central_s pa_central_t;\nstruct pa_central_s {\n\thpa_central_t hpa;\n};\n\n/*\n * The stats for a particular pa_shard.  Because of the way the ctl module\n * handles stats epoch data collection (it has its own arena_stats, and merges\n * the stats from each arena into it), this needs to live in the arena_stats_t;\n * hence we define it here and let the pa_shard have a pointer (rather than the\n * more natural approach of just embedding it in the pa_shard itself).\n *\n * We follow the arena_stats_t approach of marking the derived fields.  These\n * are the ones that are not maintained on their own; instead, their values are\n * derived during those stats merges.\n */\ntypedef struct pa_shard_stats_s pa_shard_stats_t;\nstruct pa_shard_stats_s {\n\t/* Number of edata_t structs allocated by base, but not being used. */\n\tsize_t edata_avail; /* Derived. */\n\t/*\n\t * Stats specific to the PAC.  For now, these are the only stats that\n\t * exist, but there will eventually be other page allocators.  
Things\n\t * like edata_avail make sense in a cross-PA sense, but things like\n\t * npurges don't.\n\t */\n\tpac_stats_t pac_stats;\n};\n\n/*\n * The local allocator handle.  Keeps the state necessary to satisfy page-sized\n * allocations.\n *\n * The contents are mostly internal to the PA module.  The key exception is that\n * arena decay code is allowed to grab pointers to the dirty and muzzy ecaches\n * decay_ts, for a couple of queries, passing them back to a PA function, or\n * acquiring decay.mtx and looking at decay.purging.  The reasoning is that,\n * while PA decides what and how to purge, the arena code decides when and where\n * (e.g. on what thread).  It's allowed to use the presence of another purger to\n * decide.\n * (The background thread code also touches some other decay internals, but\n * that's not fundamental; its' just an artifact of a partial refactoring, and\n * its accesses could be straightforwardly moved inside the decay module).\n */\ntypedef struct pa_shard_s pa_shard_t;\nstruct pa_shard_s {\n\t/* The central PA this shard is associated with. */\n\tpa_central_t *central;\n\n\t/*\n\t * Number of pages in active extents.\n\t *\n\t * Synchronization: atomic.\n\t */\n\tatomic_zu_t nactive;\n\n\t/*\n\t * Whether or not we should prefer the hugepage allocator.  Atomic since\n\t * it may be concurrently modified by a thread setting extent hooks.\n\t * Note that we still may do HPA operations in this arena; if use_hpa is\n\t * changed from true to false, we'll free back to the hugepage allocator\n\t * for those allocations.\n\t */\n\tatomic_b_t use_hpa;\n\n\t/*\n\t * If we never used the HPA to begin with, it wasn't initialized, and so\n\t * we shouldn't try to e.g. acquire its mutexes during fork.  This\n\t * tracks that knowledge.\n\t */\n\tbool ever_used_hpa;\n\n\t/* Allocates from a PAC. 
*/\n\tpac_t pac;\n\n\t/*\n\t * We place a small extent cache in front of the HPA, since we intend\n\t * these configurations to use many fewer arenas, and therefore have a\n\t * higher risk of hot locks.\n\t */\n\tsec_t hpa_sec;\n\thpa_shard_t hpa_shard;\n\n\t/* The source of edata_t objects. */\n\tedata_cache_t edata_cache;\n\n\tunsigned ind;\n\n\tmalloc_mutex_t *stats_mtx;\n\tpa_shard_stats_t *stats;\n\n\t/* The emap this shard is tied to. */\n\temap_t *emap;\n\n\t/* The base from which we get the ehooks and allocate metadata. */\n\tbase_t *base;\n};\n\nstatic inline bool\npa_shard_dont_decay_muzzy(pa_shard_t *shard) {\n\treturn ecache_npages_get(&shard->pac.ecache_muzzy) == 0 &&\n\t    pac_decay_ms_get(&shard->pac, extent_state_muzzy) <= 0;\n}\n\nstatic inline ehooks_t *\npa_shard_ehooks_get(pa_shard_t *shard) {\n\treturn base_ehooks_get(shard->base);\n}\n\n/* Returns true on error. */\nbool pa_central_init(pa_central_t *central, base_t *base, bool hpa,\n    hpa_hooks_t *hpa_hooks);\n\n/* Returns true on error. */\nbool pa_shard_init(tsdn_t *tsdn, pa_shard_t *shard, pa_central_t *central,\n    emap_t *emap, base_t *base, unsigned ind, pa_shard_stats_t *stats,\n    malloc_mutex_t *stats_mtx, nstime_t *cur_time, size_t oversize_threshold,\n    ssize_t dirty_decay_ms, ssize_t muzzy_decay_ms);\n\n/*\n * This isn't exposed to users; we allow late enablement of the HPA shard so\n * that we can boot without worrying about the HPA, then turn it on in a0.\n */\nbool pa_shard_enable_hpa(tsdn_t *tsdn, pa_shard_t *shard,\n    const hpa_shard_opts_t *hpa_opts, const sec_opts_t *hpa_sec_opts);\n\n/*\n * We stop using the HPA when custom extent hooks are installed, but still\n * redirect deallocations to it.\n */\nvoid pa_shard_disable_hpa(tsdn_t *tsdn, pa_shard_t *shard);\n\n/*\n * This does the PA-specific parts of arena reset (i.e. 
freeing all active\n * allocations).\n */\nvoid pa_shard_reset(tsdn_t *tsdn, pa_shard_t *shard);\n\n/*\n * Destroy all the remaining retained extents.  Should only be called after\n * decaying all active, dirty, and muzzy extents to the retained state, as the\n * last step in destroying the shard.\n */\nvoid pa_shard_destroy(tsdn_t *tsdn, pa_shard_t *shard);\n\n/* Gets an edata for the given allocation. */\nedata_t *pa_alloc(tsdn_t *tsdn, pa_shard_t *shard, size_t size,\n    size_t alignment, bool slab, szind_t szind, bool zero, bool guarded,\n    bool *deferred_work_generated);\n/* Returns true on error, in which case nothing changed. */\nbool pa_expand(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,\n    size_t new_size, szind_t szind, bool zero, bool *deferred_work_generated);\n/*\n * The same.  Sets *generated_dirty to true if we produced new dirty pages, and\n * false otherwise.\n */\nbool pa_shrink(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,\n    size_t new_size, szind_t szind, bool *deferred_work_generated);\n/*\n * Frees the given edata back to the pa.  Sets *generated_dirty if we produced\n * new dirty pages (well, we always set it for now; but this need not be the\n * case).\n * (We could make generated_dirty the return value of course, but this is more\n * consistent with the shrink pathway and our error codes here).\n */\nvoid pa_dalloc(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata,\n    bool *deferred_work_generated);\nbool pa_decay_ms_set(tsdn_t *tsdn, pa_shard_t *shard, extent_state_t state,\n    ssize_t decay_ms, pac_purge_eagerness_t eagerness);\nssize_t pa_decay_ms_get(pa_shard_t *shard, extent_state_t state);\n\n/*\n * Do deferred work on this PA shard.\n *\n * Morally, this should do both PAC decay and the HPA deferred work.  
For now,\n * though, the arena, background thread, and PAC modules are tightly interwoven\n * in a way that's tricky to extricate, so we only do the HPA-specific parts.\n */\nvoid pa_shard_set_deferral_allowed(tsdn_t *tsdn, pa_shard_t *shard,\n    bool deferral_allowed);\nvoid pa_shard_do_deferred_work(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_try_deferred_work(tsdn_t *tsdn, pa_shard_t *shard);\nuint64_t pa_shard_time_until_deferred_work(tsdn_t *tsdn, pa_shard_t *shard);\n\n/******************************************************************************/\n/*\n * Various bits of \"boring\" functionality that are still part of this module,\n * but that we relegate to pa_extra.c, to keep the core logic in pa.c as\n * readable as possible.\n */\n\n/*\n * These fork phases are synchronized with the arena fork phase numbering to\n * make it easy to keep straight. That's why there's no prefork1.\n */\nvoid pa_shard_prefork0(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_prefork2(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_prefork3(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_prefork4(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_prefork5(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_postfork_parent(tsdn_t *tsdn, pa_shard_t *shard);\nvoid pa_shard_postfork_child(tsdn_t *tsdn, pa_shard_t *shard);\n\nvoid pa_shard_basic_stats_merge(pa_shard_t *shard, size_t *nactive,\n    size_t *ndirty, size_t *nmuzzy);\n\nvoid pa_shard_stats_merge(tsdn_t *tsdn, pa_shard_t *shard,\n    pa_shard_stats_t *pa_shard_stats_out, pac_estats_t *estats_out,\n    hpa_shard_stats_t *hpa_stats_out, sec_stats_t *sec_stats_out,\n    size_t *resident);\n\n/*\n * Reads the PA-owned mutex stats into the output stats array, at the\n * appropriate positions.  Morally, these stats should really live in\n * pa_shard_stats_t, but the indices are sort of baked into the various mutex\n * prof macros.  
Fixing that would be a good thing to do at some point.\n */\nvoid pa_shard_mtx_stats_read(tsdn_t *tsdn, pa_shard_t *shard,\n    mutex_prof_data_t mutex_prof_data[mutex_prof_num_arena_mutexes]);\n\n#endif /* JEMALLOC_INTERNAL_PA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/pac.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PAC_H\n#define JEMALLOC_INTERNAL_PAC_H\n\n#include \"jemalloc/internal/exp_grow.h\"\n#include \"jemalloc/internal/pai.h\"\n#include \"san_bump.h\"\n\n\n/*\n * Page allocator classic; an implementation of the PAI interface that:\n * - Can be used for arenas with custom extent hooks.\n * - Can always satisfy any allocation request (including highly-fragmentary\n *   ones).\n * - Can use efficient OS-level zeroing primitives for demand-filled pages.\n */\n\n/* How \"eager\" decay/purging should be. */\nenum pac_purge_eagerness_e {\n\tPAC_PURGE_ALWAYS,\n\tPAC_PURGE_NEVER,\n\tPAC_PURGE_ON_EPOCH_ADVANCE\n};\ntypedef enum pac_purge_eagerness_e pac_purge_eagerness_t;\n\ntypedef struct pac_decay_stats_s pac_decay_stats_t;\nstruct pac_decay_stats_s {\n\t/* Total number of purge sweeps. */\n\tlocked_u64_t npurge;\n\t/* Total number of madvise calls made. */\n\tlocked_u64_t nmadvise;\n\t/* Total number of pages purged. */\n\tlocked_u64_t purged;\n};\n\ntypedef struct pac_estats_s pac_estats_t;\nstruct pac_estats_s {\n\t/*\n\t * Stats for a given index in the range [0, SC_NPSIZES] in the various\n\t * ecache_ts.\n\t * We track both bytes and # of extents: two extents in the same bucket\n\t * may have different sizes if adjacent size classes differ by more than\n\t * a page, so bytes cannot always be derived from # of extents.\n\t */\n\tsize_t ndirty;\n\tsize_t dirty_bytes;\n\tsize_t nmuzzy;\n\tsize_t muzzy_bytes;\n\tsize_t nretained;\n\tsize_t retained_bytes;\n};\n\ntypedef struct pac_stats_s pac_stats_t;\nstruct pac_stats_s {\n\tpac_decay_stats_t decay_dirty;\n\tpac_decay_stats_t decay_muzzy;\n\n\t/*\n\t * Number of unused virtual memory bytes currently retained.  Retained\n\t * bytes are technically mapped (though always decommitted or purged),\n\t * but they are excluded from the mapped statistic (above).\n\t */\n\tsize_t retained; /* Derived. 
*/\n\n\t/*\n\t * Number of bytes currently mapped, excluding retained memory (and any\n\t * base-allocated memory, which is tracked by the arena stats).\n\t *\n\t * We name this \"pac_mapped\" to avoid confusion with the arena_stats\n\t * \"mapped\".\n\t */\n\tatomic_zu_t pac_mapped;\n\n\t/* VM space had to be leaked (undocumented).  Normally 0. */\n\tatomic_zu_t abandoned_vm;\n};\n\ntypedef struct pac_s pac_t;\nstruct pac_s {\n\t/*\n\t * Must be the first member (we convert it to a PAC given only a\n\t * pointer).  The handle to the allocation interface.\n\t */\n\tpai_t pai;\n\t/*\n\t * Collections of extents that were previously allocated.  These are\n\t * used when allocating extents, in an attempt to re-use address space.\n\t *\n\t * Synchronization: internal.\n\t */\n\tecache_t ecache_dirty;\n\tecache_t ecache_muzzy;\n\tecache_t ecache_retained;\n\n\tbase_t *base;\n\temap_t *emap;\n\tedata_cache_t *edata_cache;\n\n\t/* The grow info for the retained ecache. */\n\texp_grow_t exp_grow;\n\tmalloc_mutex_t grow_mtx;\n\n\t/* Special allocator for guarded frequently reused extents. */\n\tsan_bump_alloc_t sba;\n\n\t/* How large extents should be before getting auto-purged. */\n\tatomic_zu_t oversize_threshold;\n\n\t/*\n\t * Decay-based purging state, responsible for scheduling extent state\n\t * transitions.\n\t *\n\t * Synchronization: via the internal mutex.\n\t */\n\tdecay_t decay_dirty; /* dirty --> muzzy */\n\tdecay_t decay_muzzy; /* muzzy --> retained */\n\n\tmalloc_mutex_t *stats_mtx;\n\tpac_stats_t *stats;\n\n\t/* Extent serial number generator state. 
*/\n\tatomic_zu_t extent_sn_next;\n};\n\nbool pac_init(tsdn_t *tsdn, pac_t *pac, base_t *base, emap_t *emap,\n    edata_cache_t *edata_cache, nstime_t *cur_time, size_t oversize_threshold,\n    ssize_t dirty_decay_ms, ssize_t muzzy_decay_ms, pac_stats_t *pac_stats,\n    malloc_mutex_t *stats_mtx);\n\nstatic inline size_t\npac_mapped(pac_t *pac) {\n\treturn atomic_load_zu(&pac->stats->pac_mapped, ATOMIC_RELAXED);\n}\n\nstatic inline ehooks_t *\npac_ehooks_get(pac_t *pac) {\n\treturn base_ehooks_get(pac->base);\n}\n\n/*\n * All purging functions require holding decay->mtx.  This is one of the few\n * places external modules are allowed to peek inside pa_shard_t internals.\n */\n\n/*\n * Decays the number of pages currently in the ecache.  This might not leave the\n * ecache empty if other threads are inserting dirty objects into it\n * concurrently with the call.\n */\nvoid pac_decay_all(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay);\n/*\n * Updates decay settings for the current time, and conditionally purges in\n * response (depending on decay_purge_setting).  
Returns whether or not the\n * epoch advanced.\n */\nbool pac_maybe_decay_purge(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache,\n    pac_purge_eagerness_t eagerness);\n\n/*\n * Gets / sets the maximum amount that we'll grow an arena down the\n * grow-retained pathways (unless forced to by an allocation request).\n *\n * Set new_limit to NULL if it's just a query, or old_limit to NULL if you don't\n * care about the previous value.\n *\n * Returns true on error (if the new limit is not valid).\n */\nbool pac_retain_grow_limit_get_set(tsdn_t *tsdn, pac_t *pac, size_t *old_limit,\n    size_t *new_limit);\n\nbool pac_decay_ms_set(tsdn_t *tsdn, pac_t *pac, extent_state_t state,\n    ssize_t decay_ms, pac_purge_eagerness_t eagerness);\nssize_t pac_decay_ms_get(pac_t *pac, extent_state_t state);\n\nvoid pac_reset(tsdn_t *tsdn, pac_t *pac);\nvoid pac_destroy(tsdn_t *tsdn, pac_t *pac);\n\n#endif /* JEMALLOC_INTERNAL_PAC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/pages.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PAGES_EXTERNS_H\n#define JEMALLOC_INTERNAL_PAGES_EXTERNS_H\n\n/* Page size.  LG_PAGE is determined by the configure script. */\n#ifdef PAGE_MASK\n#  undef PAGE_MASK\n#endif\n#define PAGE\t\t((size_t)(1U << LG_PAGE))\n#define PAGE_MASK\t((size_t)(PAGE - 1))\n/* Return the page base address for the page containing address a. */\n#define PAGE_ADDR2BASE(a)\t\t\t\t\t\t\\\n\t((void *)((uintptr_t)(a) & ~PAGE_MASK))\n/* Return the smallest pagesize multiple that is >= s. */\n#define PAGE_CEILING(s)\t\t\t\t\t\t\t\\\n\t(((s) + PAGE_MASK) & ~PAGE_MASK)\n/* Return the largest pagesize multiple that is <= s. */\n#define PAGE_FLOOR(s) \t\t\t\t\t\t\t\\\n\t((s) & ~PAGE_MASK)\n\n/* Huge page size.  LG_HUGEPAGE is determined by the configure script. */\n#define HUGEPAGE\t((size_t)(1U << LG_HUGEPAGE))\n#define HUGEPAGE_MASK\t((size_t)(HUGEPAGE - 1))\n\n#if LG_HUGEPAGE != 0\n#  define HUGEPAGE_PAGES (HUGEPAGE / PAGE)\n#else\n/*\n * It's convenient to define arrays (or bitmaps) of HUGEPAGE_PAGES lengths.  If\n * we can't autodetect the hugepage size, it gets treated as 0, in which case\n * we'll trigger a compiler error in those arrays.  Avoid this case by ensuring\n * that this value is at least 1.  (We won't ever run in this degraded state;\n * hpa_supported() returns false in this case.)\n */\n#  define HUGEPAGE_PAGES 1\n#endif\n\n/* Return the huge page base address for the huge page containing address a. */\n#define HUGEPAGE_ADDR2BASE(a)\t\t\t\t\t\t\\\n\t((void *)((uintptr_t)(a) & ~HUGEPAGE_MASK))\n/* Return the smallest pagesize multiple that is >= s. */\n#define HUGEPAGE_CEILING(s)\t\t\t\t\t\t\\\n\t(((s) + HUGEPAGE_MASK) & ~HUGEPAGE_MASK)\n\n/* PAGES_CAN_PURGE_LAZY is defined if lazy purging is supported. 
*/\n#if defined(_WIN32) || defined(JEMALLOC_PURGE_MADVISE_FREE)\n#  define PAGES_CAN_PURGE_LAZY\n#endif\n/*\n * PAGES_CAN_PURGE_FORCED is defined if forced purging is supported.\n *\n * The only supported way to hard-purge on Windows is to decommit and then\n * re-commit, but doing so is racy, and if re-commit fails it's a pain to\n * propagate the \"poisoned\" memory state.  Since we typically decommit as the\n * next step after purging on Windows anyway, there's no point in adding such\n * complexity.\n */\n#if !defined(_WIN32) && ((defined(JEMALLOC_PURGE_MADVISE_DONTNEED) && \\\n    defined(JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS)) || \\\n    defined(JEMALLOC_MAPS_COALESCE))\n#  define PAGES_CAN_PURGE_FORCED\n#endif\n\nstatic const bool pages_can_purge_lazy =\n#ifdef PAGES_CAN_PURGE_LAZY\n    true\n#else\n    false\n#endif\n    ;\nstatic const bool pages_can_purge_forced =\n#ifdef PAGES_CAN_PURGE_FORCED\n    true\n#else\n    false\n#endif\n    ;\n\n#if defined(JEMALLOC_HAVE_MADVISE_HUGE) || defined(JEMALLOC_HAVE_MEMCNTL)\n#  define PAGES_CAN_HUGIFY\n#endif\n\nstatic const bool pages_can_hugify =\n#ifdef PAGES_CAN_HUGIFY\n    true\n#else\n    false\n#endif\n    ;\n\ntypedef enum {\n\tthp_mode_default       = 0, /* Do not change hugepage settings. */\n\tthp_mode_always        = 1, /* Always set MADV_HUGEPAGE. */\n\tthp_mode_never         = 2, /* Always set MADV_NOHUGEPAGE. */\n\n\tthp_mode_names_limit   = 3, /* Used for option processing. */\n\tthp_mode_not_supported = 3  /* No THP support detected. */\n} thp_mode_t;\n\n#define THP_MODE_DEFAULT thp_mode_default\nextern thp_mode_t opt_thp;\nextern thp_mode_t init_system_thp_mode; /* Initial system wide state. 
*/\nextern const char *thp_mode_names[];\n\nvoid *pages_map(void *addr, size_t size, size_t alignment, bool *commit);\nvoid pages_unmap(void *addr, size_t size);\nbool pages_commit(void *addr, size_t size);\nbool pages_decommit(void *addr, size_t size);\nbool pages_purge_lazy(void *addr, size_t size);\nbool pages_purge_forced(void *addr, size_t size);\nbool pages_huge(void *addr, size_t size);\nbool pages_nohuge(void *addr, size_t size);\nbool pages_dontdump(void *addr, size_t size);\nbool pages_dodump(void *addr, size_t size);\nbool pages_boot(void);\nvoid pages_set_thp_state(void *ptr, size_t size);\nvoid pages_mark_guards(void *head, void *tail);\nvoid pages_unmark_guards(void *head, void *tail);\n\n#endif /* JEMALLOC_INTERNAL_PAGES_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/pai.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PAI_H\n#define JEMALLOC_INTERNAL_PAI_H\n\n/* An interface for page allocation. */\n\ntypedef struct pai_s pai_t;\nstruct pai_s {\n\t/* Returns NULL on failure. */\n\tedata_t *(*alloc)(tsdn_t *tsdn, pai_t *self, size_t size,\n\t    size_t alignment, bool zero, bool guarded, bool frequent_reuse,\n\t    bool *deferred_work_generated);\n\t/*\n\t * Returns the number of extents added to the list (which may be fewer\n\t * than requested, in case of OOM).  The list should already be\n\t * initialized.  The only alignment guarantee is page-alignment, and\n\t * the results are not necessarily zeroed.\n\t */\n\tsize_t (*alloc_batch)(tsdn_t *tsdn, pai_t *self, size_t size,\n\t    size_t nallocs, edata_list_active_t *results,\n\t    bool *deferred_work_generated);\n\tbool (*expand)(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n\t    size_t old_size, size_t new_size, bool zero,\n\t    bool *deferred_work_generated);\n\tbool (*shrink)(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n\t    size_t old_size, size_t new_size, bool *deferred_work_generated);\n\tvoid (*dalloc)(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n\t    bool *deferred_work_generated);\n\t/* This function empties out the list as a side-effect of being called. 
*/\n\tvoid (*dalloc_batch)(tsdn_t *tsdn, pai_t *self,\n\t    edata_list_active_t *list, bool *deferred_work_generated);\n\tuint64_t (*time_until_deferred_work)(tsdn_t *tsdn, pai_t *self);\n};\n\n/*\n * These are just simple convenience functions to avoid having to reference the\n * same pai_t twice on every invocation.\n */\n\nstatic inline edata_t *\npai_alloc(tsdn_t *tsdn, pai_t *self, size_t size, size_t alignment,\n    bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated) {\n\treturn self->alloc(tsdn, self, size, alignment, zero, guarded,\n\t    frequent_reuse, deferred_work_generated);\n}\n\nstatic inline size_t\npai_alloc_batch(tsdn_t *tsdn, pai_t *self, size_t size, size_t nallocs,\n    edata_list_active_t *results, bool *deferred_work_generated) {\n\treturn self->alloc_batch(tsdn, self, size, nallocs, results,\n\t    deferred_work_generated);\n}\n\nstatic inline bool\npai_expand(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool zero, bool *deferred_work_generated) {\n\treturn self->expand(tsdn, self, edata, old_size, new_size, zero,\n\t    deferred_work_generated);\n}\n\nstatic inline bool\npai_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool *deferred_work_generated) {\n\treturn self->shrink(tsdn, self, edata, old_size, new_size,\n\t    deferred_work_generated);\n}\n\nstatic inline void\npai_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated) {\n\tself->dalloc(tsdn, self, edata, deferred_work_generated);\n}\n\nstatic inline void\npai_dalloc_batch(tsdn_t *tsdn, pai_t *self, edata_list_active_t *list,\n    bool *deferred_work_generated) {\n\tself->dalloc_batch(tsdn, self, list, deferred_work_generated);\n}\n\nstatic inline uint64_t\npai_time_until_deferred_work(tsdn_t *tsdn, pai_t *self) {\n\treturn self->time_until_deferred_work(tsdn, self);\n}\n\n/*\n * An implementation of batch allocation that simply calls alloc 
once for\n * each item in the list.\n */\nsize_t pai_alloc_batch_default(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t nallocs, edata_list_active_t *results, bool *deferred_work_generated);\n/* Ditto, for dalloc. */\nvoid pai_dalloc_batch_default(tsdn_t *tsdn, pai_t *self,\n    edata_list_active_t *list, bool *deferred_work_generated);\n\n#endif /* JEMALLOC_INTERNAL_PAI_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/peak.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PEAK_H\n#define JEMALLOC_INTERNAL_PEAK_H\n\ntypedef struct peak_s peak_t;\nstruct peak_s {\n\t/* The highest recorded peak value, after adjustment (see below). */\n\tuint64_t cur_max;\n\t/*\n\t * The difference between alloc and dalloc at the last set_zero call;\n\t * this lets us cancel out the appropriate amount of excess.\n\t */\n\tuint64_t adjustment;\n};\n\n#define PEAK_INITIALIZER {0, 0}\n\nstatic inline uint64_t\npeak_max(peak_t *peak) {\n\treturn peak->cur_max;\n}\n\nstatic inline void\npeak_update(peak_t *peak, uint64_t alloc, uint64_t dalloc) {\n\tint64_t candidate_max = (int64_t)(alloc - dalloc - peak->adjustment);\n\tif (candidate_max > (int64_t)peak->cur_max) {\n\t\tpeak->cur_max = candidate_max;\n\t}\n}\n\n/* Resets the counter to zero; all peaks are now relative to this point. */\nstatic inline void\npeak_set_zero(peak_t *peak, uint64_t alloc, uint64_t dalloc) {\n\tpeak->cur_max = 0;\n\tpeak->adjustment = alloc - dalloc;\n}\n\n#endif /* JEMALLOC_INTERNAL_PEAK_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/peak_event.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PEAK_EVENT_H\n#define JEMALLOC_INTERNAL_PEAK_EVENT_H\n\n/*\n * While peak.h contains the simple helper struct that tracks state, this\n * contains the allocator tie-ins (and knows about tsd, the event module, etc.).\n */\n\n/* Update the peak with current tsd state. */\nvoid peak_event_update(tsd_t *tsd);\n/* Set current state to zero. */\nvoid peak_event_zero(tsd_t *tsd);\nuint64_t peak_event_max(tsd_t *tsd);\n\n/* Manual hooks. */\n/* The activity-triggered hooks. */\nuint64_t peak_alloc_new_event_wait(tsd_t *tsd);\nuint64_t peak_alloc_postponed_event_wait(tsd_t *tsd);\nvoid peak_alloc_event_handler(tsd_t *tsd, uint64_t elapsed);\nuint64_t peak_dalloc_new_event_wait(tsd_t *tsd);\nuint64_t peak_dalloc_postponed_event_wait(tsd_t *tsd);\nvoid peak_dalloc_event_handler(tsd_t *tsd, uint64_t elapsed);\n\n#endif /* JEMALLOC_INTERNAL_PEAK_EVENT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ph.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PH_H\n#define JEMALLOC_INTERNAL_PH_H\n\n/*\n * A Pairing Heap implementation.\n *\n * \"The Pairing Heap: A New Form of Self-Adjusting Heap\"\n * https://www.cs.cmu.edu/~sleator/papers/pairing-heaps.pdf\n *\n * With an auxiliary two-pass list, described in a follow-on paper.\n *\n * \"Pairing Heaps: Experiments and Analysis\"\n * http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.2988&rep=rep1&type=pdf\n *\n *******************************************************************************\n *\n * We include a non-obvious optimization:\n * - First, we introduce a new pop-and-link operation; pop the two most\n *   recently-inserted items off the aux-list, link them, and push the resulting\n *   heap.\n * - We maintain a count of the number of insertions since the last time we\n *   merged the aux-list (i.e. via first() or remove_first()).  After N inserts,\n *   we do ffs(N) pop-and-link operations.\n *\n * One way to think of this is that we're progressively building up a tree in\n * the aux-list, rather than a linked-list (think of the series of merges that\n * will be performed as the aux-count grows).\n *\n * There are a couple of reasons we benefit from this:\n * - Ordinarily, after N insertions, the aux-list is of size N.  With our\n *   strategy, it's of size O(log(N)).  So we decrease the worst-case time of\n *   first() calls, and reduce the average cost of remove_min calls.  Since\n *   these almost always occur while holding a lock, we practically reduce the\n *   frequency of unusually long hold times.\n * - This moves the bulk of the work of merging the aux-list onto the threads\n *   that are inserting into the heap.  In some common scenarios, insertions\n *   happen in bulk, from a single thread (think tcache flushing; we potentially\n *   move many slabs from slabs_full to slabs_nonfull).  
All the nodes in this\n *   case are in the inserting thread's cache, and linking them is very cheap\n *   (cache misses dominate linking cost).  Without this optimization, linking\n *   happens on the next call to remove_first.  Since that remove_first call\n *   likely happens on a different thread (or at least, after the cache has\n *   gotten cold if done on the same thread), deferring linking trades cheap\n *   link operations now for expensive ones later.\n *\n * The ffs trick keeps amortized insert cost at constant time.  Similar\n * strategies based on periodically sorting the list after a batch of operations\n * perform worse than this in practice, even with various fancy tricks; they\n * all took the amortized complexity of an insert from O(1) to O(log(n)).\n */\n\ntypedef int (*ph_cmp_t)(void *, void *);\n\n/* Node structure. */\ntypedef struct phn_link_s phn_link_t;\nstruct phn_link_s {\n\tvoid *prev;\n\tvoid *next;\n\tvoid *lchild;\n};\n\ntypedef struct ph_s ph_t;\nstruct ph_s {\n\tvoid *root;\n\t/*\n\t * Inserts done since the last aux-list merge.  This is not necessarily\n\t * the size of the aux-list, since it's possible that removals have\n\t * happened since, and we don't track whether or not those removals are\n\t * from the aux list.\n\t */\n\tsize_t auxcount;\n};\n\nJEMALLOC_ALWAYS_INLINE phn_link_t *\nphn_link_get(void *phn, size_t offset) {\n\treturn (phn_link_t *)(((uintptr_t)phn) + offset);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nphn_link_init(void *phn, size_t offset) {\n\tphn_link_get(phn, offset)->prev = NULL;\n\tphn_link_get(phn, offset)->next = NULL;\n\tphn_link_get(phn, offset)->lchild = NULL;\n}\n\n/* Internal utility helpers. 
*/\nJEMALLOC_ALWAYS_INLINE void *\nphn_lchild_get(void *phn, size_t offset) {\n\treturn phn_link_get(phn, offset)->lchild;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nphn_lchild_set(void *phn, void *lchild, size_t offset) {\n\tphn_link_get(phn, offset)->lchild = lchild;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nphn_next_get(void *phn, size_t offset) {\n\treturn phn_link_get(phn, offset)->next;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nphn_next_set(void *phn, void *next, size_t offset) {\n\tphn_link_get(phn, offset)->next = next;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nphn_prev_get(void *phn, size_t offset) {\n\treturn phn_link_get(phn, offset)->prev;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nphn_prev_set(void *phn, void *prev, size_t offset) {\n\tphn_link_get(phn, offset)->prev = prev;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nphn_merge_ordered(void *phn0, void *phn1, size_t offset,\n    ph_cmp_t cmp) {\n\tvoid *phn0child;\n\n\tassert(phn0 != NULL);\n\tassert(phn1 != NULL);\n\tassert(cmp(phn0, phn1) <= 0);\n\n\tphn_prev_set(phn1, phn0, offset);\n\tphn0child = phn_lchild_get(phn0, offset);\n\tphn_next_set(phn1, phn0child, offset);\n\tif (phn0child != NULL) {\n\t\tphn_prev_set(phn0child, phn1, offset);\n\t}\n\tphn_lchild_set(phn0, phn1, offset);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nphn_merge(void *phn0, void *phn1, size_t offset, ph_cmp_t cmp) {\n\tvoid *result;\n\tif (phn0 == NULL) {\n\t\tresult = phn1;\n\t} else if (phn1 == NULL) {\n\t\tresult = phn0;\n\t} else if (cmp(phn0, phn1) < 0) {\n\t\tphn_merge_ordered(phn0, phn1, offset, cmp);\n\t\tresult = phn0;\n\t} else {\n\t\tphn_merge_ordered(phn1, phn0, offset, cmp);\n\t\tresult = phn1;\n\t}\n\treturn result;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nphn_merge_siblings(void *phn, size_t offset, ph_cmp_t cmp) {\n\tvoid *head = NULL;\n\tvoid *tail = NULL;\n\tvoid *phn0 = phn;\n\tvoid *phn1 = phn_next_get(phn0, offset);\n\n\t/*\n\t * Multipass merge, wherein the first two elements of a FIFO\n\t * are repeatedly merged, and each result is appended to the\n\t * 
singly linked FIFO, until the FIFO contains only a single\n\t * element.  We start with a sibling list but no reference to\n\t * its tail, so we do a single pass over the sibling list to\n\t * populate the FIFO.\n\t */\n\tif (phn1 != NULL) {\n\t\tvoid *phnrest = phn_next_get(phn1, offset);\n\t\tif (phnrest != NULL) {\n\t\t\tphn_prev_set(phnrest, NULL, offset);\n\t\t}\n\t\tphn_prev_set(phn0, NULL, offset);\n\t\tphn_next_set(phn0, NULL, offset);\n\t\tphn_prev_set(phn1, NULL, offset);\n\t\tphn_next_set(phn1, NULL, offset);\n\t\tphn0 = phn_merge(phn0, phn1, offset, cmp);\n\t\thead = tail = phn0;\n\t\tphn0 = phnrest;\n\t\twhile (phn0 != NULL) {\n\t\t\tphn1 = phn_next_get(phn0, offset);\n\t\t\tif (phn1 != NULL) {\n\t\t\t\tphnrest = phn_next_get(phn1, offset);\n\t\t\t\tif (phnrest != NULL) {\n\t\t\t\t\tphn_prev_set(phnrest, NULL, offset);\n\t\t\t\t}\n\t\t\t\tphn_prev_set(phn0, NULL, offset);\n\t\t\t\tphn_next_set(phn0, NULL, offset);\n\t\t\t\tphn_prev_set(phn1, NULL, offset);\n\t\t\t\tphn_next_set(phn1, NULL, offset);\n\t\t\t\tphn0 = phn_merge(phn0, phn1, offset, cmp);\n\t\t\t\tphn_next_set(tail, phn0, offset);\n\t\t\t\ttail = phn0;\n\t\t\t\tphn0 = phnrest;\n\t\t\t} else {\n\t\t\t\tphn_next_set(tail, phn0, offset);\n\t\t\t\ttail = phn0;\n\t\t\t\tphn0 = NULL;\n\t\t\t}\n\t\t}\n\t\tphn0 = head;\n\t\tphn1 = phn_next_get(phn0, offset);\n\t\tif (phn1 != NULL) {\n\t\t\twhile (true) {\n\t\t\t\thead = phn_next_get(phn1, offset);\n\t\t\t\tassert(phn_prev_get(phn0, offset) == NULL);\n\t\t\t\tphn_next_set(phn0, NULL, offset);\n\t\t\t\tassert(phn_prev_get(phn1, offset) == NULL);\n\t\t\t\tphn_next_set(phn1, NULL, offset);\n\t\t\t\tphn0 = phn_merge(phn0, phn1, offset, cmp);\n\t\t\t\tif (head == NULL) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tphn_next_set(tail, phn0, offset);\n\t\t\t\ttail = phn0;\n\t\t\t\tphn0 = head;\n\t\t\t\tphn1 = phn_next_get(phn0, offset);\n\t\t\t}\n\t\t}\n\t}\n\treturn phn0;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nph_merge_aux(ph_t *ph, size_t offset, ph_cmp_t cmp) 
{\n\tph->auxcount = 0;\n\tvoid *phn = phn_next_get(ph->root, offset);\n\tif (phn != NULL) {\n\t\tphn_prev_set(ph->root, NULL, offset);\n\t\tphn_next_set(ph->root, NULL, offset);\n\t\tphn_prev_set(phn, NULL, offset);\n\t\tphn = phn_merge_siblings(phn, offset, cmp);\n\t\tassert(phn_next_get(phn, offset) == NULL);\n\t\tph->root = phn_merge(ph->root, phn, offset, cmp);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nph_merge_children(void *phn, size_t offset, ph_cmp_t cmp) {\n\tvoid *result;\n\tvoid *lchild = phn_lchild_get(phn, offset);\n\tif (lchild == NULL) {\n\t\tresult = NULL;\n\t} else {\n\t\tresult = phn_merge_siblings(lchild, offset, cmp);\n\t}\n\treturn result;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nph_new(ph_t *ph) {\n\tph->root = NULL;\n\tph->auxcount = 0;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nph_empty(ph_t *ph) {\n\treturn ph->root == NULL;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nph_first(ph_t *ph, size_t offset, ph_cmp_t cmp) {\n\tif (ph->root == NULL) {\n\t\treturn NULL;\n\t}\n\tph_merge_aux(ph, offset, cmp);\n\treturn ph->root;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nph_any(ph_t *ph, size_t offset) {\n\tif (ph->root == NULL) {\n\t\treturn NULL;\n\t}\n\tvoid *aux = phn_next_get(ph->root, offset);\n\tif (aux != NULL) {\n\t\treturn aux;\n\t}\n\treturn ph->root;\n}\n\n/* Returns true if we should stop trying to merge. 
*/\nJEMALLOC_ALWAYS_INLINE bool\nph_try_aux_merge_pair(ph_t *ph, size_t offset, ph_cmp_t cmp) {\n\tassert(ph->root != NULL);\n\tvoid *phn0 = phn_next_get(ph->root, offset);\n\tif (phn0 == NULL) {\n\t\treturn true;\n\t}\n\tvoid *phn1 = phn_next_get(phn0, offset);\n\tif (phn1 == NULL) {\n\t\treturn true;\n\t}\n\tvoid *next_phn1 = phn_next_get(phn1, offset);\n\tphn_next_set(phn0, NULL, offset);\n\tphn_prev_set(phn0, NULL, offset);\n\tphn_next_set(phn1, NULL, offset);\n\tphn_prev_set(phn1, NULL, offset);\n\tphn0 = phn_merge(phn0, phn1, offset, cmp);\n\tphn_next_set(phn0, next_phn1, offset);\n\tif (next_phn1 != NULL) {\n\t\tphn_prev_set(next_phn1, phn0, offset);\n\t}\n\tphn_next_set(ph->root, phn0, offset);\n\tphn_prev_set(phn0, ph->root, offset);\n\treturn next_phn1 == NULL;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nph_insert(ph_t *ph, void *phn, size_t offset, ph_cmp_t cmp) {\n\tphn_link_init(phn, offset);\n\n\t/*\n\t * Treat the root as an aux list during insertion, and lazily merge\n\t * during a_prefix##remove_first().  
For elements that are inserted,\n\t * then removed via a_prefix##remove() before the aux list is ever\n\t * processed, this makes insert/remove constant-time, whereas eager\n\t * merging would make insert O(log n).\n\t */\n\tif (ph->root == NULL) {\n\t\tph->root = phn;\n\t} else {\n\t\t/*\n\t\t * As a special case, check to see if we can replace the root.\n\t\t * This is practically common in some important cases, and lets\n\t\t * us defer some insertions (hopefully, until the point where\n\t\t * some of the items in the aux list have been removed, saving\n\t\t * us from linking them at all).\n\t\t */\n\t\tif (cmp(phn, ph->root) < 0) {\n\t\t\tphn_lchild_set(phn, ph->root, offset);\n\t\t\tphn_prev_set(ph->root, phn, offset);\n\t\t\tph->root = phn;\n\t\t\tph->auxcount = 0;\n\t\t\treturn;\n\t\t}\n\t\tph->auxcount++;\n\t\tphn_next_set(phn, phn_next_get(ph->root, offset), offset);\n\t\tif (phn_next_get(ph->root, offset) != NULL) {\n\t\t\tphn_prev_set(phn_next_get(ph->root, offset), phn,\n\t\t\t    offset);\n\t\t}\n\t\tphn_prev_set(phn, ph->root, offset);\n\t\tphn_next_set(ph->root, phn, offset);\n\t}\n\tif (ph->auxcount > 1) {\n\t\tunsigned nmerges = ffs_zu(ph->auxcount - 1);\n\t\tbool done = false;\n\t\tfor (unsigned i = 0; i < nmerges && !done; i++) {\n\t\t\tdone = ph_try_aux_merge_pair(ph, offset, cmp);\n\t\t}\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nph_remove_first(ph_t *ph, size_t offset, ph_cmp_t cmp) {\n\tvoid *ret;\n\n\tif (ph->root == NULL) {\n\t\treturn NULL;\n\t}\n\tph_merge_aux(ph, offset, cmp);\n\tret = ph->root;\n\tph->root = ph_merge_children(ph->root, offset, cmp);\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nph_remove(ph_t *ph, void *phn, size_t offset, ph_cmp_t cmp) {\n\tvoid *replace;\n\tvoid *parent;\n\n\tif (ph->root == phn) {\n\t\t/*\n\t\t * We can delete from the aux list without merging it, but we\n\t\t * need to merge if we are dealing with the root node and it has\n\t\t * children.\n\t\t */\n\t\tif (phn_lchild_get(phn, offset) == NULL) 
{\n\t\t\tph->root = phn_next_get(phn, offset);\n\t\t\tif (ph->root != NULL) {\n\t\t\t\tphn_prev_set(ph->root, NULL, offset);\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tph_merge_aux(ph, offset, cmp);\n\t\tif (ph->root == phn) {\n\t\t\tph->root = ph_merge_children(ph->root, offset, cmp);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* Get parent (if phn is leftmost child) before mutating. */\n\tif ((parent = phn_prev_get(phn, offset)) != NULL) {\n\t\tif (phn_lchild_get(parent, offset) != phn) {\n\t\t\tparent = NULL;\n\t\t}\n\t}\n\t/* Find a possible replacement node, and link to parent. */\n\treplace = ph_merge_children(phn, offset, cmp);\n\t/* Set next/prev for sibling linked list. */\n\tif (replace != NULL) {\n\t\tif (parent != NULL) {\n\t\t\tphn_prev_set(replace, parent, offset);\n\t\t\tphn_lchild_set(parent, replace, offset);\n\t\t} else {\n\t\t\tphn_prev_set(replace, phn_prev_get(phn, offset),\n\t\t\t    offset);\n\t\t\tif (phn_prev_get(phn, offset) != NULL) {\n\t\t\t\tphn_next_set(phn_prev_get(phn, offset), replace,\n\t\t\t\t    offset);\n\t\t\t}\n\t\t}\n\t\tphn_next_set(replace, phn_next_get(phn, offset), offset);\n\t\tif (phn_next_get(phn, offset) != NULL) {\n\t\t\tphn_prev_set(phn_next_get(phn, offset), replace,\n\t\t\t    offset);\n\t\t}\n\t} else {\n\t\tif (parent != NULL) {\n\t\t\tvoid *next = phn_next_get(phn, offset);\n\t\t\tphn_lchild_set(parent, next, offset);\n\t\t\tif (next != NULL) {\n\t\t\t\tphn_prev_set(next, parent, offset);\n\t\t\t}\n\t\t} else {\n\t\t\tassert(phn_prev_get(phn, offset) != NULL);\n\t\t\tphn_next_set(\n\t\t\t    phn_prev_get(phn, offset),\n\t\t\t    phn_next_get(phn, offset), offset);\n\t\t}\n\t\tif (phn_next_get(phn, offset) != NULL) {\n\t\t\tphn_prev_set(\n\t\t\t    phn_next_get(phn, offset),\n\t\t\t    phn_prev_get(phn, offset), offset);\n\t\t}\n\t}\n}\n\n#define ph_structs(a_prefix, a_type)\t\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tphn_link_t link;\t\t\t\t\t\t\\\n} a_prefix##_link_t;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\ntypedef struct 
{\t\t\t\t\t\t\t\\\n\tph_t ph;\t\t\t\t\t\t\t\\\n} a_prefix##_t;\n\n/*\n * The ph_proto() macro generates function prototypes that correspond to the\n * functions generated by an equivalently parameterized call to ph_gen().\n */\n#define ph_proto(a_attr, a_prefix, a_type)\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr void a_prefix##_new(a_prefix##_t *ph);\t\t\t\t\\\na_attr bool a_prefix##_empty(a_prefix##_t *ph);\t\t\t\t\\\na_attr a_type *a_prefix##_first(a_prefix##_t *ph);\t\t\t\\\na_attr a_type *a_prefix##_any(a_prefix##_t *ph);\t\t\t\\\na_attr void a_prefix##_insert(a_prefix##_t *ph, a_type *phn);\t\t\\\na_attr a_type *a_prefix##_remove_first(a_prefix##_t *ph);\t\t\\\na_attr void a_prefix##_remove(a_prefix##_t *ph, a_type *phn);\t\t\\\na_attr a_type *a_prefix##_remove_any(a_prefix##_t *ph);\n\n/* The ph_gen() macro generates a type-specific pairing heap implementation. */\n#define ph_gen(a_attr, a_prefix, a_type, a_field, a_cmp)\t\t\\\nJEMALLOC_ALWAYS_INLINE int\t\t\t\t\t\t\\\na_prefix##_ph_cmp(void *a, void *b) {\t\t\t\t\t\\\n\treturn a_cmp((a_type *)a, (a_type *)b);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##_new(a_prefix##_t *ph) {\t\t\t\t\t\\\n\tph_new(&ph->ph);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##_empty(a_prefix##_t *ph) {\t\t\t\t\t\\\n\treturn ph_empty(&ph->ph);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##_first(a_prefix##_t *ph) {\t\t\t\t\t\\\n\treturn ph_first(&ph->ph, offsetof(a_type, a_field),\t\t\\\n\t    &a_prefix##_ph_cmp);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##_any(a_prefix##_t *ph) {\t\t\t\t\t\\\n\treturn ph_any(&ph->ph, offsetof(a_type, a_field));\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##_insert(a_prefix##_t *ph, a_type *phn) {\t\t\t\\\n\tph_insert(&ph->ph, phn, 
offsetof(a_type, a_field),\t\t\\\n\t    a_prefix##_ph_cmp);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##_remove_first(a_prefix##_t *ph) {\t\t\t\t\\\n\treturn ph_remove_first(&ph->ph, offsetof(a_type, a_field),\t\\\n\t    a_prefix##_ph_cmp);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##_remove(a_prefix##_t *ph, a_type *phn) {\t\t\t\\\n\tph_remove(&ph->ph, phn, offsetof(a_type, a_field),\t\t\\\n\t    a_prefix##_ph_cmp);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##_remove_any(a_prefix##_t *ph) {\t\t\t\t\\\n\ta_type *ret = a_prefix##_any(ph);\t\t\t\t\\\n\tif (ret != NULL) {\t\t\t\t\t\t\\\n\t\ta_prefix##_remove(ph, ret);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_PH_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/private_namespace.sh",
    "content": "#!/bin/sh\n\nfor symbol in `cat \"$@\"` ; do\n  echo \"#define ${symbol} JEMALLOC_N(${symbol})\"\ndone\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/private_symbols.sh",
    "content": "#!/bin/sh\n#\n# Generate private_symbols[_jet].awk.\n#\n# Usage: private_symbols.sh <sym_prefix> <sym>*\n#\n# <sym_prefix> is typically \"\" or \"_\".\n\nsym_prefix=$1\nshift\n\ncat <<EOF\n#!/usr/bin/env awk -f\n\nBEGIN {\n  sym_prefix = \"${sym_prefix}\"\n  split(\"\\\\\nEOF\n\nfor public_sym in \"$@\" ; do\n  cat <<EOF\n        ${sym_prefix}${public_sym} \\\\\nEOF\ndone\n\ncat <<\"EOF\"\n        \", exported_symbol_names)\n  # Store exported symbol names as keys in exported_symbols.\n  for (i in exported_symbol_names) {\n    exported_symbols[exported_symbol_names[i]] = 1\n  }\n}\n\n# Process 'nm -a <c_source.o>' output.\n#\n# Handle lines like:\n#   0000000000000008 D opt_junk\n#   0000000000007574 T malloc_initialized\n(NF == 3 && $2 ~ /^[ABCDGRSTVW]$/ && !($3 in exported_symbols) && $3 ~ /^[A-Za-z0-9_]+$/) {\n  print substr($3, 1+length(sym_prefix), length($3)-length(sym_prefix))\n}\n\n# Process 'dumpbin /SYMBOLS <c_source.obj>' output.\n#\n# Handle lines like:\n#   353 00008098 SECT4  notype       External     | opt_junk\n#   3F1 00000000 SECT7  notype ()    External     | malloc_initialized\n($3 ~ /^SECT[0-9]+/ && $(NF-2) == \"External\" && !($NF in exported_symbols)) {\n  print $NF\n}\nEOF\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prng.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PRNG_H\n#define JEMALLOC_INTERNAL_PRNG_H\n\n#include \"jemalloc/internal/bit_util.h\"\n\n/*\n * Simple linear congruential pseudo-random number generator:\n *\n *   prng(y) = (a*x + c) % m\n *\n * where the following constants ensure maximal period:\n *\n *   a == Odd number (relatively prime to 2^n), and (a-1) is a multiple of 4.\n *   c == Odd number (relatively prime to 2^n).\n *   m == 2^32\n *\n * See Knuth's TAOCP 3rd Ed., Vol. 2, pg. 17 for details on these constraints.\n *\n * This choice of m has the disadvantage that the quality of the bits is\n * proportional to bit position.  For example, the lowest bit has a cycle of 2,\n * the next has a cycle of 4, etc.  For this reason, we prefer to use the upper\n * bits.\n */\n\n/******************************************************************************/\n/* INTERNAL DEFINITIONS -- IGNORE */\n/******************************************************************************/\n#define PRNG_A_32\tUINT32_C(1103515241)\n#define PRNG_C_32\tUINT32_C(12347)\n\n#define PRNG_A_64\tUINT64_C(6364136223846793005)\n#define PRNG_C_64\tUINT64_C(1442695040888963407)\n\nJEMALLOC_ALWAYS_INLINE uint32_t\nprng_state_next_u32(uint32_t state) {\n\treturn (state * PRNG_A_32) + PRNG_C_32;\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nprng_state_next_u64(uint64_t state) {\n\treturn (state * PRNG_A_64) + PRNG_C_64;\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nprng_state_next_zu(size_t state) {\n#if LG_SIZEOF_PTR == 2\n\treturn (state * PRNG_A_32) + PRNG_C_32;\n#elif LG_SIZEOF_PTR == 3\n\treturn (state * PRNG_A_64) + PRNG_C_64;\n#else\n#error Unsupported pointer size\n#endif\n}\n\n/******************************************************************************/\n/* BEGIN PUBLIC API */\n/******************************************************************************/\n\n/*\n * The prng_lg_range functions give a uniform int in the half-open range [0,\n * 2**lg_range).\n */\n\nJEMALLOC_ALWAYS_INLINE 
uint32_t\nprng_lg_range_u32(uint32_t *state, unsigned lg_range) {\n\tassert(lg_range > 0);\n\tassert(lg_range <= 32);\n\n\t*state = prng_state_next_u32(*state);\n\tuint32_t ret = *state >> (32 - lg_range);\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nprng_lg_range_u64(uint64_t *state, unsigned lg_range) {\n\tassert(lg_range > 0);\n\tassert(lg_range <= 64);\n\n\t*state = prng_state_next_u64(*state);\n\tuint64_t ret = *state >> (64 - lg_range);\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nprng_lg_range_zu(size_t *state, unsigned lg_range) {\n\tassert(lg_range > 0);\n\tassert(lg_range <= ZU(1) << (3 + LG_SIZEOF_PTR));\n\n\t*state = prng_state_next_zu(*state);\n\tsize_t ret = *state >> ((ZU(1) << (3 + LG_SIZEOF_PTR)) - lg_range);\n\n\treturn ret;\n}\n\n/*\n * The prng_range functions behave like the prng_lg_range, but return a result\n * in [0, range) instead of [0, 2**lg_range).\n */\n\nJEMALLOC_ALWAYS_INLINE uint32_t\nprng_range_u32(uint32_t *state, uint32_t range) {\n\tassert(range != 0);\n\t/*\n\t * If range were 1, lg_range would be 0, so the shift in\n\t * prng_lg_range_u32 would be a shift of a 32-bit variable by 32 bits,\n\t * which is UB.  Just handle this case as a one-off.\n\t */\n\tif (range == 1) {\n\t\treturn 0;\n\t}\n\n\t/* Compute the ceiling of lg(range). */\n\tunsigned lg_range = ffs_u32(pow2_ceil_u32(range));\n\n\t/* Generate a result in [0..range) via repeated trial. */\n\tuint32_t ret;\n\tdo {\n\t\tret = prng_lg_range_u32(state, lg_range);\n\t} while (ret >= range);\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nprng_range_u64(uint64_t *state, uint64_t range) {\n\tassert(range != 0);\n\n\t/* See the note in prng_range_u32. */\n\tif (range == 1) {\n\t\treturn 0;\n\t}\n\n\t/* Compute the ceiling of lg(range). */\n\tunsigned lg_range = ffs_u64(pow2_ceil_u64(range));\n\n\t/* Generate a result in [0..range) via repeated trial. 
*/\n\tuint64_t ret;\n\tdo {\n\t\tret = prng_lg_range_u64(state, lg_range);\n\t} while (ret >= range);\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nprng_range_zu(size_t *state, size_t range) {\n\tassert(range != 0);\n\n\t/* See the note in prng_range_u32. */\n\tif (range == 1) {\n\t\treturn 0;\n\t}\n\n\t/* Compute the ceiling of lg(range). */\n\tunsigned lg_range = ffs_u64(pow2_ceil_u64(range));\n\n\t/* Generate a result in [0..range) via repeated trial. */\n\tsize_t ret;\n\tdo {\n\t\tret = prng_lg_range_zu(state, lg_range);\n\t} while (ret >= range);\n\n\treturn ret;\n}\n\n#endif /* JEMALLOC_INTERNAL_PRNG_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_data.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_DATA_H\n#define JEMALLOC_INTERNAL_PROF_DATA_H\n\n#include \"jemalloc/internal/mutex.h\"\n\nextern malloc_mutex_t bt2gctx_mtx;\nextern malloc_mutex_t tdatas_mtx;\nextern malloc_mutex_t prof_dump_mtx;\n\nextern malloc_mutex_t *gctx_locks;\nextern malloc_mutex_t *tdata_locks;\n\nextern size_t prof_unbiased_sz[PROF_SC_NSIZES];\nextern size_t prof_shifted_unbiased_cnt[PROF_SC_NSIZES];\n\nvoid prof_bt_hash(const void *key, size_t r_hash[2]);\nbool prof_bt_keycomp(const void *k1, const void *k2);\n\nbool prof_data_init(tsd_t *tsd);\nprof_tctx_t *prof_lookup(tsd_t *tsd, prof_bt_t *bt);\nchar *prof_thread_name_alloc(tsd_t *tsd, const char *thread_name);\nint prof_thread_name_set_impl(tsd_t *tsd, const char *thread_name);\nvoid prof_unbias_map_init();\nvoid prof_dump_impl(tsd_t *tsd, write_cb_t *prof_dump_write, void *cbopaque,\n    prof_tdata_t *tdata, bool leakcheck);\nprof_tdata_t * prof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid,\n    uint64_t thr_discrim, char *thread_name, bool active);\nvoid prof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata);\nvoid prof_reset(tsd_t *tsd, size_t lg_sample);\nvoid prof_tctx_try_destroy(tsd_t *tsd, prof_tctx_t *tctx);\n\n/* Used in unit tests. */\nsize_t prof_tdata_count(void);\nsize_t prof_bt_count(void);\nvoid prof_cnt_all(prof_cnt_t *cnt_all);\n\n#endif /* JEMALLOC_INTERNAL_PROF_DATA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_EXTERNS_H\n#define JEMALLOC_INTERNAL_PROF_EXTERNS_H\n\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/prof_hook.h\"\n\nextern bool opt_prof;\nextern bool opt_prof_active;\nextern bool opt_prof_thread_active_init;\nextern size_t opt_lg_prof_sample;    /* Mean bytes between samples. */\nextern ssize_t opt_lg_prof_interval; /* lg(prof_interval). */\nextern bool opt_prof_gdump;          /* High-water memory dumping. */\nextern bool opt_prof_final;          /* Final profile dumping. */\nextern bool opt_prof_leak;           /* Dump leak summary at exit. */\nextern bool opt_prof_leak_error;     /* Exit with error code if memory leaked */\nextern bool opt_prof_accum;          /* Report cumulative bytes. */\nextern bool opt_prof_log;            /* Turn logging on at boot. */\nextern char opt_prof_prefix[\n    /* Minimize memory bloat for non-prof builds. */\n#ifdef JEMALLOC_PROF\n    PATH_MAX +\n#endif\n    1];\nextern bool opt_prof_unbias;\n\n/* For recording recent allocations */\nextern ssize_t opt_prof_recent_alloc_max;\n\n/* Whether to use thread name provided by the system or by mallctl. */\nextern bool opt_prof_sys_thread_name;\n\n/* Whether to record per size class counts and request size totals. */\nextern bool opt_prof_stats;\n\n/* Accessed via prof_active_[gs]et{_unlocked,}(). */\nextern bool prof_active_state;\n\n/* Accessed via prof_gdump_[gs]et{_unlocked,}(). */\nextern bool prof_gdump_val;\n\n/* Profile dump interval, measured in bytes allocated. 
*/\nextern uint64_t prof_interval;\n\n/*\n * Initialized as opt_lg_prof_sample, and potentially modified during profiling\n * resets.\n */\nextern size_t lg_prof_sample;\n\nextern bool prof_booted;\n\nvoid prof_backtrace_hook_set(prof_backtrace_hook_t hook);\nprof_backtrace_hook_t prof_backtrace_hook_get();\n\nvoid prof_dump_hook_set(prof_dump_hook_t hook);\nprof_dump_hook_t prof_dump_hook_get();\n\n/* Functions only accessed in prof_inlines.h */\nprof_tdata_t *prof_tdata_init(tsd_t *tsd);\nprof_tdata_t *prof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata);\n\nvoid prof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx);\nvoid prof_malloc_sample_object(tsd_t *tsd, const void *ptr, size_t size,\n    size_t usize, prof_tctx_t *tctx);\nvoid prof_free_sampled_object(tsd_t *tsd, size_t usize, prof_info_t *prof_info);\nprof_tctx_t *prof_tctx_create(tsd_t *tsd);\nvoid prof_idump(tsdn_t *tsdn);\nbool prof_mdump(tsd_t *tsd, const char *filename);\nvoid prof_gdump(tsdn_t *tsdn);\n\nvoid prof_tdata_cleanup(tsd_t *tsd);\nbool prof_active_get(tsdn_t *tsdn);\nbool prof_active_set(tsdn_t *tsdn, bool active);\nconst char *prof_thread_name_get(tsd_t *tsd);\nint prof_thread_name_set(tsd_t *tsd, const char *thread_name);\nbool prof_thread_active_get(tsd_t *tsd);\nbool prof_thread_active_set(tsd_t *tsd, bool active);\nbool prof_thread_active_init_get(tsdn_t *tsdn);\nbool prof_thread_active_init_set(tsdn_t *tsdn, bool active_init);\nbool prof_gdump_get(tsdn_t *tsdn);\nbool prof_gdump_set(tsdn_t *tsdn, bool active);\nvoid prof_boot0(void);\nvoid prof_boot1(void);\nbool prof_boot2(tsd_t *tsd, base_t *base);\nvoid prof_prefork0(tsdn_t *tsdn);\nvoid prof_prefork1(tsdn_t *tsdn);\nvoid prof_postfork_parent(tsdn_t *tsdn);\nvoid prof_postfork_child(tsdn_t *tsdn);\n\n/* Only accessed by thread event. 
*/\nuint64_t prof_sample_new_event_wait(tsd_t *tsd);\nuint64_t prof_sample_postponed_event_wait(tsd_t *tsd);\nvoid prof_sample_event_handler(tsd_t *tsd, uint64_t elapsed);\n\n#endif /* JEMALLOC_INTERNAL_PROF_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_hook.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_HOOK_H\n#define JEMALLOC_INTERNAL_PROF_HOOK_H\n\n/*\n * The hooks types of which are declared in this file are experimental and\n * undocumented, thus the typedefs are located in an 'internal' header.\n */\n\n/*\n * A hook to mock out backtrace functionality.  This can be handy, since it's\n * otherwise difficult to guarantee that two allocations are reported as coming\n * from the exact same stack trace in the presence of an optimizing compiler.\n */\ntypedef void (*prof_backtrace_hook_t)(void **, unsigned *, unsigned);\n\n/*\n * A callback hook that notifies about recently dumped heap profile.\n */\ntypedef void (*prof_dump_hook_t)(const char *filename);\n\n#endif /* JEMALLOC_INTERNAL_PROF_HOOK_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_inlines.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_INLINES_H\n#define JEMALLOC_INTERNAL_PROF_INLINES_H\n\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/thread_event.h\"\n\nJEMALLOC_ALWAYS_INLINE void\nprof_active_assert() {\n\tcassert(config_prof);\n\t/*\n\t * If opt_prof is off, then prof_active must always be off, regardless\n\t * of whether prof_active_mtx is in effect or not.\n\t */\n\tassert(opt_prof || !prof_active_state);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nprof_active_get_unlocked(void) {\n\tprof_active_assert();\n\t/*\n\t * Even if opt_prof is true, sampling can be temporarily disabled by\n\t * setting prof_active to false.  No locking is used when reading\n\t * prof_active in the fast path, so there are no guarantees regarding\n\t * how long it will take for all threads to notice state changes.\n\t */\n\treturn prof_active_state;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nprof_gdump_get_unlocked(void) {\n\t/*\n\t * No locking is used when reading prof_gdump_val in the fast path, so\n\t * there are no guarantees regarding how long it will take for all\n\t * threads to notice state changes.\n\t */\n\treturn prof_gdump_val;\n}\n\nJEMALLOC_ALWAYS_INLINE prof_tdata_t *\nprof_tdata_get(tsd_t *tsd, bool create) {\n\tprof_tdata_t *tdata;\n\n\tcassert(config_prof);\n\n\ttdata = tsd_prof_tdata_get(tsd);\n\tif (create) {\n\t\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\t\tif (unlikely(tdata == NULL)) {\n\t\t\tif (tsd_nominal(tsd)) {\n\t\t\t\ttdata = prof_tdata_init(tsd);\n\t\t\t\ttsd_prof_tdata_set(tsd, tdata);\n\t\t\t}\n\t\t} else if (unlikely(tdata->expired)) {\n\t\t\ttdata = prof_tdata_reinit(tsd, tdata);\n\t\t\ttsd_prof_tdata_set(tsd, tdata);\n\t\t}\n\t\tassert(tdata == NULL || tdata->attached);\n\t}\n\n\treturn tdata;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_info_get(tsd_t *tsd, const void *ptr, emap_alloc_ctx_t *alloc_ctx,\n    prof_info_t *prof_info) {\n\tcassert(config_prof);\n\tassert(ptr != 
NULL);\n\tassert(prof_info != NULL);\n\n\tarena_prof_info_get(tsd, ptr, alloc_ctx, prof_info, false);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_info_get_and_reset_recent(tsd_t *tsd, const void *ptr,\n    emap_alloc_ctx_t *alloc_ctx, prof_info_t *prof_info) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\tassert(prof_info != NULL);\n\n\tarena_prof_info_get(tsd, ptr, alloc_ctx, prof_info, true);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_tctx_reset(tsd_t *tsd, const void *ptr, emap_alloc_ctx_t *alloc_ctx) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\n\tarena_prof_tctx_reset(tsd, ptr, alloc_ctx);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_tctx_reset_sampled(tsd_t *tsd, const void *ptr) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\n\tarena_prof_tctx_reset_sampled(tsd, ptr);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_info_set(tsd_t *tsd, edata_t *edata, prof_tctx_t *tctx, size_t size) {\n\tcassert(config_prof);\n\tassert(edata != NULL);\n\tassert((uintptr_t)tctx > (uintptr_t)1U);\n\n\tarena_prof_info_set(tsd, edata, tctx, size);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nprof_sample_should_skip(tsd_t *tsd, bool sample_event) {\n\tcassert(config_prof);\n\n\t/* Fastpath: no need to load tdata */\n\tif (likely(!sample_event)) {\n\t\treturn true;\n\t}\n\n\t/*\n\t * sample_event is always obtained from the thread event module, and\n\t * whenever it's true, it means that the thread event module has\n\t * already checked the reentrancy level.\n\t */\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t *tdata = prof_tdata_get(tsd, true);\n\tif (unlikely(tdata == NULL)) {\n\t\treturn true;\n\t}\n\n\treturn !tdata->active;\n}\n\nJEMALLOC_ALWAYS_INLINE prof_tctx_t *\nprof_alloc_prep(tsd_t *tsd, bool prof_active, bool sample_event) {\n\tprof_tctx_t *ret;\n\n\tif (!prof_active ||\n\t    likely(prof_sample_should_skip(tsd, sample_event))) {\n\t\tret = (prof_tctx_t *)(uintptr_t)1U;\n\t} else {\n\t\tret = prof_tctx_create(tsd);\n\t}\n\n\treturn 
ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_malloc(tsd_t *tsd, const void *ptr, size_t size, size_t usize,\n    emap_alloc_ctx_t *alloc_ctx, prof_tctx_t *tctx) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\tassert(usize == isalloc(tsd_tsdn(tsd), ptr));\n\n\tif (unlikely((uintptr_t)tctx > (uintptr_t)1U)) {\n\t\tprof_malloc_sample_object(tsd, ptr, size, usize, tctx);\n\t} else {\n\t\tprof_tctx_reset(tsd, ptr, alloc_ctx);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_realloc(tsd_t *tsd, const void *ptr, size_t size, size_t usize,\n    prof_tctx_t *tctx, bool prof_active, const void *old_ptr, size_t old_usize,\n    prof_info_t *old_prof_info, bool sample_event) {\n\tbool sampled, old_sampled, moved;\n\n\tcassert(config_prof);\n\tassert(ptr != NULL || (uintptr_t)tctx <= (uintptr_t)1U);\n\n\tif (prof_active && ptr != NULL) {\n\t\tassert(usize == isalloc(tsd_tsdn(tsd), ptr));\n\t\tif (prof_sample_should_skip(tsd, sample_event)) {\n\t\t\t/*\n\t\t\t * Don't sample.  The usize passed to prof_alloc_prep()\n\t\t\t * was larger than what actually got allocated, so a\n\t\t\t * backtrace was captured for this allocation, even\n\t\t\t * though its actual usize was insufficient to cross the\n\t\t\t * sample threshold.\n\t\t\t */\n\t\t\tprof_alloc_rollback(tsd, tctx);\n\t\t\ttctx = (prof_tctx_t *)(uintptr_t)1U;\n\t\t}\n\t}\n\n\tsampled = ((uintptr_t)tctx > (uintptr_t)1U);\n\told_sampled = ((uintptr_t)old_prof_info->alloc_tctx > (uintptr_t)1U);\n\tmoved = (ptr != old_ptr);\n\n\tif (unlikely(sampled)) {\n\t\tprof_malloc_sample_object(tsd, ptr, size, usize, tctx);\n\t} else if (moved) {\n\t\tprof_tctx_reset(tsd, ptr, NULL);\n\t} else if (unlikely(old_sampled)) {\n\t\t/*\n\t\t * prof_tctx_reset() would work for the !moved case as well,\n\t\t * but prof_tctx_reset_sampled() is slightly cheaper, and the\n\t\t * proper thing to do here in the presence of explicit\n\t\t * knowledge re: moved state.\n\t\t */\n\t\tprof_tctx_reset_sampled(tsd, ptr);\n\t} else {\n\t\tprof_info_t 
prof_info;\n\t\tprof_info_get(tsd, ptr, NULL, &prof_info);\n\t\tassert((uintptr_t)prof_info.alloc_tctx == (uintptr_t)1U);\n\t}\n\n\t/*\n\t * The prof_free_sampled_object() call must come after the\n\t * prof_malloc_sample_object() call, because tctx and old_tctx may be\n\t * the same, in which case reversing the call order could cause the tctx\n\t * to be prematurely destroyed as a side effect of momentarily zeroed\n\t * counters.\n\t */\n\tif (unlikely(old_sampled)) {\n\t\tprof_free_sampled_object(tsd, old_usize, old_prof_info);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nprof_sample_align(size_t orig_align) {\n\t/*\n\t * Enforce page alignment, so that sampled allocations can be identified\n\t * w/o metadata lookup.\n\t */\n\tassert(opt_prof);\n\treturn (opt_cache_oblivious && orig_align < PAGE) ? PAGE :\n\t    orig_align;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nprof_sample_aligned(const void *ptr) {\n\treturn ((uintptr_t)ptr & PAGE_MASK) == 0;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nprof_sampled(tsd_t *tsd, const void *ptr) {\n\tprof_info_t prof_info;\n\tprof_info_get(tsd, ptr, NULL, &prof_info);\n\tbool sampled = (uintptr_t)prof_info.alloc_tctx > (uintptr_t)1U;\n\tif (sampled) {\n\t\tassert(prof_sample_aligned(ptr));\n\t}\n\treturn sampled;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nprof_free(tsd_t *tsd, const void *ptr, size_t usize,\n    emap_alloc_ctx_t *alloc_ctx) {\n\tprof_info_t prof_info;\n\tprof_info_get_and_reset_recent(tsd, ptr, alloc_ctx, &prof_info);\n\n\tcassert(config_prof);\n\tassert(usize == isalloc(tsd_tsdn(tsd), ptr));\n\n\tif (unlikely((uintptr_t)prof_info.alloc_tctx > (uintptr_t)1U)) {\n\t\tassert(prof_sample_aligned(ptr));\n\t\tprof_free_sampled_object(tsd, usize, &prof_info);\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_PROF_INLINES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_log.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_LOG_H\n#define JEMALLOC_INTERNAL_PROF_LOG_H\n\n#include \"jemalloc/internal/mutex.h\"\n\nextern malloc_mutex_t log_mtx;\n\nvoid prof_try_log(tsd_t *tsd, size_t usize, prof_info_t *prof_info);\nbool prof_log_init(tsd_t *tsdn);\n\n/* Used in unit tests. */\nsize_t prof_log_bt_count(void);\nsize_t prof_log_alloc_count(void);\nsize_t prof_log_thr_count(void);\nbool prof_log_is_logging(void);\nbool prof_log_rep_check(void);\nvoid prof_log_dummy_set(bool new_value);\n\nbool prof_log_start(tsdn_t *tsdn, const char *filename);\nbool prof_log_stop(tsdn_t *tsdn);\n\n#endif /* JEMALLOC_INTERNAL_PROF_LOG_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_recent.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_RECENT_H\n#define JEMALLOC_INTERNAL_PROF_RECENT_H\n\nextern malloc_mutex_t prof_recent_alloc_mtx;\nextern malloc_mutex_t prof_recent_dump_mtx;\n\nbool prof_recent_alloc_prepare(tsd_t *tsd, prof_tctx_t *tctx);\nvoid prof_recent_alloc(tsd_t *tsd, edata_t *edata, size_t size, size_t usize);\nvoid prof_recent_alloc_reset(tsd_t *tsd, edata_t *edata);\nbool prof_recent_init();\nvoid edata_prof_recent_alloc_init(edata_t *edata);\n\n/* Used in unit tests. */\ntypedef ql_head(prof_recent_t) prof_recent_list_t;\nextern prof_recent_list_t prof_recent_alloc_list;\nedata_t *prof_recent_alloc_edata_get_no_lock_test(const prof_recent_t *node);\nprof_recent_t *edata_prof_recent_alloc_get_no_lock_test(const edata_t *edata);\n\nssize_t prof_recent_alloc_max_ctl_read();\nssize_t prof_recent_alloc_max_ctl_write(tsd_t *tsd, ssize_t max);\nvoid prof_recent_alloc_dump(tsd_t *tsd, write_cb_t *write_cb, void *cbopaque);\n\n#endif /* JEMALLOC_INTERNAL_PROF_RECENT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_stats.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_STATS_H\n#define JEMALLOC_INTERNAL_PROF_STATS_H\n\ntypedef struct prof_stats_s prof_stats_t;\nstruct prof_stats_s {\n\tuint64_t req_sum;\n\tuint64_t count;\n};\n\nextern malloc_mutex_t prof_stats_mtx;\n\nvoid prof_stats_inc(tsd_t *tsd, szind_t ind, size_t size);\nvoid prof_stats_dec(tsd_t *tsd, szind_t ind, size_t size);\nvoid prof_stats_get_live(tsd_t *tsd, szind_t ind, prof_stats_t *stats);\nvoid prof_stats_get_accum(tsd_t *tsd, szind_t ind, prof_stats_t *stats);\n\n#endif /* JEMALLOC_INTERNAL_PROF_STATS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_structs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_STRUCTS_H\n#define JEMALLOC_INTERNAL_PROF_STRUCTS_H\n\n#include \"jemalloc/internal/ckh.h\"\n#include \"jemalloc/internal/edata.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/prng.h\"\n#include \"jemalloc/internal/rb.h\"\n\nstruct prof_bt_s {\n\t/* Backtrace, stored as len program counters. */\n\tvoid\t\t**vec;\n\tunsigned\tlen;\n};\n\n#ifdef JEMALLOC_PROF_LIBGCC\n/* Data structure passed to libgcc _Unwind_Backtrace() callback functions. */\ntypedef struct {\n\tvoid \t\t**vec;\n\tunsigned\t*len;\n\tunsigned\tmax;\n} prof_unwind_data_t;\n#endif\n\nstruct prof_cnt_s {\n\t/* Profiling counters. */\n\tuint64_t\tcurobjs;\n\tuint64_t\tcurobjs_shifted_unbiased;\n\tuint64_t\tcurbytes;\n\tuint64_t\tcurbytes_unbiased;\n\tuint64_t\taccumobjs;\n\tuint64_t\taccumobjs_shifted_unbiased;\n\tuint64_t\taccumbytes;\n\tuint64_t\taccumbytes_unbiased;\n};\n\ntypedef enum {\n\tprof_tctx_state_initializing,\n\tprof_tctx_state_nominal,\n\tprof_tctx_state_dumping,\n\tprof_tctx_state_purgatory /* Dumper must finish destroying. */\n} prof_tctx_state_t;\n\nstruct prof_tctx_s {\n\t/* Thread data for thread that performed the allocation. */\n\tprof_tdata_t\t\t*tdata;\n\n\t/*\n\t * Copy of tdata->thr_{uid,discrim}, necessary because tdata may be\n\t * defunct during teardown.\n\t */\n\tuint64_t\t\tthr_uid;\n\tuint64_t\t\tthr_discrim;\n\n\t/*\n\t * Reference count of how many times this tctx object is referenced in\n\t * recent allocation / deallocation records, protected by tdata->lock.\n\t */\n\tuint64_t\t\trecent_count;\n\n\t/* Profiling counters, protected by tdata->lock. */\n\tprof_cnt_t\t\tcnts;\n\n\t/* Associated global context. */\n\tprof_gctx_t\t\t*gctx;\n\n\t/*\n\t * UID that distinguishes multiple tctx's created by the same thread,\n\t * but coexisting in gctx->tctxs.  
There are two ways that such\n\t * coexistence can occur:\n\t * - A dumper thread can cause a tctx to be retained in the purgatory\n\t *   state.\n\t * - Although a single \"producer\" thread must create all tctx's which\n\t *   share the same thr_uid, multiple \"consumers\" can each concurrently\n\t *   execute portions of prof_tctx_destroy().  prof_tctx_destroy() only\n\t *   gets called once each time cnts.cur{objs,bytes} drop to 0, but this\n\t *   threshold can be hit again before the first consumer finishes\n\t *   executing prof_tctx_destroy().\n\t */\n\tuint64_t\t\ttctx_uid;\n\n\t/* Linkage into gctx's tctxs. */\n\trb_node(prof_tctx_t)\ttctx_link;\n\n\t/*\n\t * True during prof_alloc_prep()..prof_malloc_sample_object(), prevents\n\t * sample vs destroy race.\n\t */\n\tbool\t\t\tprepared;\n\n\t/* Current dump-related state, protected by gctx->lock. */\n\tprof_tctx_state_t\tstate;\n\n\t/*\n\t * Copy of cnts snapshotted during early dump phase, protected by\n\t * dump_mtx.\n\t */\n\tprof_cnt_t\t\tdump_cnts;\n};\ntypedef rb_tree(prof_tctx_t) prof_tctx_tree_t;\n\nstruct prof_info_s {\n\t/* Time when the allocation was made. */\n\tnstime_t\t\talloc_time;\n\t/* Points to the prof_tctx_t corresponding to the allocation. */\n\tprof_tctx_t\t\t*alloc_tctx;\n\t/* Allocation request size. */\n\tsize_t\t\t\talloc_size;\n};\n\nstruct prof_gctx_s {\n\t/* Protects nlimbo, cnt_summed, and tctxs. 
*/\n\tmalloc_mutex_t\t\t*lock;\n\n\t/*\n\t * Number of threads that currently cause this gctx to be in a state of\n\t * limbo due to one of:\n\t *   - Initializing this gctx.\n\t *   - Initializing per thread counters associated with this gctx.\n\t *   - Preparing to destroy this gctx.\n\t *   - Dumping a heap profile that includes this gctx.\n\t * nlimbo must be 1 (single destroyer) in order to safely destroy the\n\t * gctx.\n\t */\n\tunsigned\t\tnlimbo;\n\n\t/*\n\t * Tree of profile counters, one for each thread that has allocated in\n\t * this context.\n\t */\n\tprof_tctx_tree_t\ttctxs;\n\n\t/* Linkage for tree of contexts to be dumped. */\n\trb_node(prof_gctx_t)\tdump_link;\n\n\t/* Temporary storage for summation during dump. */\n\tprof_cnt_t\t\tcnt_summed;\n\n\t/* Associated backtrace. */\n\tprof_bt_t\t\tbt;\n\n\t/* Backtrace vector, variable size, referred to by bt. */\n\tvoid\t\t\t*vec[1];\n};\ntypedef rb_tree(prof_gctx_t) prof_gctx_tree_t;\n\nstruct prof_tdata_s {\n\tmalloc_mutex_t\t\t*lock;\n\n\t/* Monotonically increasing unique thread identifier. */\n\tuint64_t\t\tthr_uid;\n\n\t/*\n\t * Monotonically increasing discriminator among tdata structures\n\t * associated with the same thr_uid.\n\t */\n\tuint64_t\t\tthr_discrim;\n\n\t/* Included in heap profile dumps if non-NULL. */\n\tchar\t\t\t*thread_name;\n\n\tbool\t\t\tattached;\n\tbool\t\t\texpired;\n\n\trb_node(prof_tdata_t)\ttdata_link;\n\n\t/*\n\t * Counter used to initialize prof_tctx_t's tctx_uid.  No locking is\n\t * necessary when incrementing this field, because only one thread ever\n\t * does so.\n\t */\n\tuint64_t\t\ttctx_uid_next;\n\n\t/*\n\t * Hash of (prof_bt_t *)-->(prof_tctx_t *).  Each thread tracks\n\t * backtraces for which it has non-zero allocation/deallocation counters\n\t * associated with thread-specific prof_tctx_t objects.  
Other threads\n\t * may write to prof_tctx_t contents when freeing associated objects.\n\t */\n\tckh_t\t\t\tbt2tctx;\n\n\t/* State used to avoid dumping while operating on prof internals. */\n\tbool\t\t\tenq;\n\tbool\t\t\tenq_idump;\n\tbool\t\t\tenq_gdump;\n\n\t/*\n\t * Set to true during an early dump phase for tdata's which are\n\t * currently being dumped.  New threads' tdata's have this initialized\n\t * to false so that they aren't accidentally included in later dump\n\t * phases.\n\t */\n\tbool\t\t\tdumping;\n\n\t/*\n\t * True if profiling is active for this tdata's thread\n\t * (thread.prof.active mallctl).\n\t */\n\tbool\t\t\tactive;\n\n\t/* Temporary storage for summation during dump. */\n\tprof_cnt_t\t\tcnt_summed;\n\n\t/* Backtrace vector, used for calls to prof_backtrace(). */\n\tvoid\t\t\t*vec[PROF_BT_MAX];\n};\ntypedef rb_tree(prof_tdata_t) prof_tdata_tree_t;\n\nstruct prof_recent_s {\n\tnstime_t alloc_time;\n\tnstime_t dalloc_time;\n\n\tql_elm(prof_recent_t) link;\n\tsize_t size;\n\tsize_t usize;\n\tatomic_p_t alloc_edata; /* NULL means allocation has been freed. */\n\tprof_tctx_t *alloc_tctx;\n\tprof_tctx_t *dalloc_tctx;\n};\n\n#endif /* JEMALLOC_INTERNAL_PROF_STRUCTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_sys.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_SYS_H\n#define JEMALLOC_INTERNAL_PROF_SYS_H\n\nextern malloc_mutex_t prof_dump_filename_mtx;\nextern base_t *prof_base;\n\nvoid bt_init(prof_bt_t *bt, void **vec);\nvoid prof_backtrace(tsd_t *tsd, prof_bt_t *bt);\nvoid prof_hooks_init();\nvoid prof_unwind_init();\nvoid prof_sys_thread_name_fetch(tsd_t *tsd);\nint prof_getpid(void);\nvoid prof_get_default_filename(tsdn_t *tsdn, char *filename, uint64_t ind);\nbool prof_prefix_set(tsdn_t *tsdn, const char *prefix);\nvoid prof_fdump_impl(tsd_t *tsd);\nvoid prof_idump_impl(tsd_t *tsd);\nbool prof_mdump_impl(tsd_t *tsd, const char *filename);\nvoid prof_gdump_impl(tsd_t *tsd);\n\n/* Used in unit tests. */\ntypedef int (prof_sys_thread_name_read_t)(char *buf, size_t limit);\nextern prof_sys_thread_name_read_t *JET_MUTABLE prof_sys_thread_name_read;\ntypedef int (prof_dump_open_file_t)(const char *, int);\nextern prof_dump_open_file_t *JET_MUTABLE prof_dump_open_file;\ntypedef ssize_t (prof_dump_write_file_t)(int, const void *, size_t);\nextern prof_dump_write_file_t *JET_MUTABLE prof_dump_write_file;\ntypedef int (prof_dump_open_maps_t)();\nextern prof_dump_open_maps_t *JET_MUTABLE prof_dump_open_maps;\n\n#endif /* JEMALLOC_INTERNAL_PROF_SYS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/prof_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PROF_TYPES_H\n#define JEMALLOC_INTERNAL_PROF_TYPES_H\n\ntypedef struct prof_bt_s prof_bt_t;\ntypedef struct prof_cnt_s prof_cnt_t;\ntypedef struct prof_tctx_s prof_tctx_t;\ntypedef struct prof_info_s prof_info_t;\ntypedef struct prof_gctx_s prof_gctx_t;\ntypedef struct prof_tdata_s prof_tdata_t;\ntypedef struct prof_recent_s prof_recent_t;\n\n/* Option defaults. */\n#ifdef JEMALLOC_PROF\n#  define PROF_PREFIX_DEFAULT\t\t\"jeprof\"\n#else\n#  define PROF_PREFIX_DEFAULT\t\t\"\"\n#endif\n#define LG_PROF_SAMPLE_DEFAULT\t\t19\n#define LG_PROF_INTERVAL_DEFAULT\t-1\n\n/*\n * Hard limit on stack backtrace depth.  The version of prof_backtrace() that\n * is based on __builtin_return_address() necessarily has a hard-coded number\n * of backtrace frame handlers, and should be kept in sync with this setting.\n */\n#define PROF_BT_MAX\t\t\t128\n\n/* Initial hash table size. */\n#define PROF_CKH_MINITEMS\t\t64\n\n/* Size of memory buffer to use when writing dump files. */\n#ifndef JEMALLOC_PROF\n/* Minimize memory bloat for non-prof builds. */\n#  define PROF_DUMP_BUFSIZE\t\t1\n#elif defined(JEMALLOC_DEBUG)\n/* Use a small buffer size in debug build, mainly to facilitate testing. */\n#  define PROF_DUMP_BUFSIZE\t\t16\n#else\n#  define PROF_DUMP_BUFSIZE\t\t65536\n#endif\n\n/* Size of size class related tables */\n#ifdef JEMALLOC_PROF\n#  define PROF_SC_NSIZES\t\tSC_NSIZES\n#else\n/* Minimize memory bloat for non-prof builds. */\n#  define PROF_SC_NSIZES\t\t1\n#endif\n\n/* Size of stack-allocated buffer used by prof_printf(). */\n#define PROF_PRINTF_BUFSIZE\t\t128\n\n/*\n * Number of mutexes shared among all gctx's.  No space is allocated for these\n * unless profiling is enabled, so it's okay to over-provision.\n */\n#define PROF_NCTX_LOCKS\t\t\t1024\n\n/*\n * Number of mutexes shared among all tdata's.  
No space is allocated for these\n * unless profiling is enabled, so it's okay to over-provision.\n */\n#define PROF_NTDATA_LOCKS\t\t256\n\n/* Minimize memory bloat for non-prof builds. */\n#ifdef JEMALLOC_PROF\n#define PROF_DUMP_FILENAME_LEN (PATH_MAX + 1)\n#else\n#define PROF_DUMP_FILENAME_LEN 1\n#endif\n\n/* Default number of recent allocations to record. */\n#define PROF_RECENT_ALLOC_MAX_DEFAULT 0\n\n#endif /* JEMALLOC_INTERNAL_PROF_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/psset.h",
    "content": "#ifndef JEMALLOC_INTERNAL_PSSET_H\n#define JEMALLOC_INTERNAL_PSSET_H\n\n#include \"jemalloc/internal/hpdata.h\"\n\n/*\n * A page-slab set.  What the eset is to PAC, the psset is to HPA.  It maintains\n * a collection of page-slabs (the intent being that they are backed by\n * hugepages, or at least could be), and handles allocation and deallocation\n * requests.\n */\n\n/*\n * One more than the maximum pszind_t we will serve out of the HPA.\n * Practically, we expect only the first few to be actually used.  This\n * corresponds to a maximum size of of 512MB on systems with 4k pages and\n * SC_NGROUP == 4, which is already an unreasonably large maximum.  Morally, you\n * can think of this as being SC_NPSIZES, but there's no sense in wasting that\n * much space in the arena, making bitmaps that much larger, etc.\n */\n#define PSSET_NPSIZES 64\n\n/*\n * We keep two purge lists per page size class; one for hugified hpdatas (at\n * index 2*pszind), and one for the non-hugified hpdatas (at index 2*pszind +\n * 1).  This lets us implement a preference for purging non-hugified hpdatas\n * among similarly-dirty ones.\n * We reserve the last two indices for empty slabs, in that case purging\n * hugified ones (which are definitionally all waste) before non-hugified ones\n * (i.e. reversing the order).\n */\n#define PSSET_NPURGE_LISTS (2 * PSSET_NPSIZES)\n\ntypedef struct psset_bin_stats_s psset_bin_stats_t;\nstruct psset_bin_stats_s {\n\t/* How many pageslabs are in this bin? */\n\tsize_t npageslabs;\n\t/* Of them, how many pages are active? */\n\tsize_t nactive;\n\t/* And how many are dirty? 
*/\n\tsize_t ndirty;\n};\n\ntypedef struct psset_stats_s psset_stats_t;\nstruct psset_stats_s {\n\t/*\n\t * The second index is huge stats; nonfull_slabs[pszind][0] contains\n\t * stats for the non-huge slabs in bucket pszind, while\n\t * nonfull_slabs[pszind][1] contains stats for the huge slabs.\n\t */\n\tpsset_bin_stats_t nonfull_slabs[PSSET_NPSIZES][2];\n\n\t/*\n\t * Full slabs don't live in any edata heap, but we still track their\n\t * stats.\n\t */\n\tpsset_bin_stats_t full_slabs[2];\n\n\t/* Empty slabs are similar. */\n\tpsset_bin_stats_t empty_slabs[2];\n};\n\ntypedef struct psset_s psset_t;\nstruct psset_s {\n\t/*\n\t * The pageslabs, quantized by the size class of the largest contiguous\n\t * free run of pages in a pageslab.\n\t */\n\thpdata_age_heap_t pageslabs[PSSET_NPSIZES];\n\t/* Bitmap for which set bits correspond to non-empty heaps. */\n\tfb_group_t pageslab_bitmap[FB_NGROUPS(PSSET_NPSIZES)];\n\t/*\n\t * The sum of all bin stats in stats.  This lets us quickly answer\n\t * queries for the number of dirty, active, and retained pages in the\n\t * entire set.\n\t */\n\tpsset_bin_stats_t merged_stats;\n\tpsset_stats_t stats;\n\t/*\n\t * Slabs with no active allocations, but which are allowed to serve new\n\t * allocations.\n\t */\n\thpdata_empty_list_t empty;\n\t/*\n\t * Slabs which are available to be purged, ordered by how much we want\n\t * to purge them (with later indices indicating slabs we want to purge\n\t * more).\n\t */\n\thpdata_purge_list_t to_purge[PSSET_NPURGE_LISTS];\n\t/* Bitmap for which set bits correspond to non-empty purge lists. */\n\tfb_group_t purge_bitmap[FB_NGROUPS(PSSET_NPURGE_LISTS)];\n\t/* Slabs which are available to be hugified. */\n\thpdata_hugify_list_t to_hugify;\n};\n\nvoid psset_init(psset_t *psset);\nvoid psset_stats_accum(psset_stats_t *dst, psset_stats_t *src);\n\n/*\n * Begin or end updating the given pageslab's metadata.  
While the pageslab is\n * being updated, it won't be returned from psset_fit calls.\n */\nvoid psset_update_begin(psset_t *psset, hpdata_t *ps);\nvoid psset_update_end(psset_t *psset, hpdata_t *ps);\n\n/* Analogous to the eset_fit; pick a hpdata to serve the request. */\nhpdata_t *psset_pick_alloc(psset_t *psset, size_t size);\n/* Pick one to purge. */\nhpdata_t *psset_pick_purge(psset_t *psset);\n/* Pick one to hugify. */\nhpdata_t *psset_pick_hugify(psset_t *psset);\n\nvoid psset_insert(psset_t *psset, hpdata_t *ps);\nvoid psset_remove(psset_t *psset, hpdata_t *ps);\n\nstatic inline size_t\npsset_npageslabs(psset_t *psset) {\n\treturn psset->merged_stats.npageslabs;\n}\n\nstatic inline size_t\npsset_nactive(psset_t *psset) {\n\treturn psset->merged_stats.nactive;\n}\n\nstatic inline size_t\npsset_ndirty(psset_t *psset) {\n\treturn psset->merged_stats.ndirty;\n}\n\n#endif /* JEMALLOC_INTERNAL_PSSET_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/public_namespace.sh",
    "content": "#!/bin/sh\n\nfor nm in `cat $1` ; do\n  n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n  echo \"#define je_${n} JEMALLOC_N(${n})\"\ndone\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/public_unnamespace.sh",
    "content": "#!/bin/sh\n\nfor nm in `cat $1` ; do\n  n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n  echo \"#undef je_${n}\"\ndone\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ql.h",
    "content": "#ifndef JEMALLOC_INTERNAL_QL_H\n#define JEMALLOC_INTERNAL_QL_H\n\n#include \"jemalloc/internal/qr.h\"\n\n/*\n * A linked-list implementation.\n *\n * This is built on top of the ring implementation, but that can be viewed as an\n * implementation detail (i.e. trying to advance past the tail of the list\n * doesn't wrap around).\n *\n * You define a struct like so:\n * typedef strucy my_s my_t;\n * struct my_s {\n *   int data;\n *   ql_elm(my_t) my_link;\n * };\n *\n * // We wobble between \"list\" and \"head\" for this type; we're now mostly\n * // heading towards \"list\".\n * typedef ql_head(my_t) my_list_t;\n *\n * You then pass a my_list_t * for a_head arguments, a my_t * for a_elm\n * arguments, the token \"my_link\" for a_field arguments, and the token \"my_t\"\n * for a_type arguments.\n */\n\n/* List definitions. */\n#define ql_head(a_type)\t\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n\ta_type *qlh_first;\t\t\t\t\t\t\\\n}\n\n/* Static initializer for an empty list. */\n#define ql_head_initializer(a_head) {NULL}\n\n/* The field definition. */\n#define ql_elm(a_type)\tqr(a_type)\n\n/* A pointer to the first element in the list, or NULL if the list is empty. */\n#define ql_first(a_head) ((a_head)->qlh_first)\n\n/* Dynamically initializes a list. */\n#define ql_new(a_head) do {\t\t\t\t\t\t\\\n\tql_first(a_head) = NULL;\t\t\t\t\t\\\n} while (0)\n\n/*\n * Sets dest to be the contents of src (overwriting any elements there), leaving\n * src empty.\n */\n#define ql_move(a_head_dest, a_head_src) do {\t\t\t\t\\\n\tql_first(a_head_dest) = ql_first(a_head_src);\t\t\t\\\n\tql_new(a_head_src);\t\t\t\t\t\t\\\n} while (0)\n\n/* True if the list is empty, otherwise false. */\n#define ql_empty(a_head) (ql_first(a_head) == NULL)\n\n/*\n * Initializes a ql_elm.  
Must be called even if the field is about to be\n * overwritten.\n */\n#define ql_elm_new(a_elm, a_field) qr_new((a_elm), a_field)\n\n/*\n * Obtains the last item in the list.\n */\n#define ql_last(a_head, a_field)\t\t\t\t\t\\\n\t(ql_empty(a_head) ? NULL : qr_prev(ql_first(a_head), a_field))\n\n/*\n * Gets a pointer to the next/prev element in the list.  Trying to advance past\n * the end or retreat before the beginning of the list returns NULL.\n */\n#define ql_next(a_head, a_elm, a_field)\t\t\t\t\t\\\n\t((ql_last(a_head, a_field) != (a_elm))\t\t\t\t\\\n\t    ? qr_next((a_elm), a_field)\t: NULL)\n#define ql_prev(a_head, a_elm, a_field)\t\t\t\t\t\\\n\t((ql_first(a_head) != (a_elm)) ? qr_prev((a_elm), a_field)\t\\\n\t\t\t\t       : NULL)\n\n/* Inserts a_elm before a_qlelm in the list. */\n#define ql_before_insert(a_head, a_qlelm, a_elm, a_field) do {\t\t\\\n\tqr_before_insert((a_qlelm), (a_elm), a_field);\t\t\t\\\n\tif (ql_first(a_head) == (a_qlelm)) {\t\t\t\t\\\n\t\tql_first(a_head) = (a_elm);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Inserts a_elm after a_qlelm in the list. */\n#define ql_after_insert(a_qlelm, a_elm, a_field)\t\t\t\\\n\tqr_after_insert((a_qlelm), (a_elm), a_field)\n\n/* Inserts a_elm as the first item in the list. */\n#define ql_head_insert(a_head, a_elm, a_field) do {\t\t\t\\\n\tif (!ql_empty(a_head)) {\t\t\t\t\t\\\n\t\tqr_before_insert(ql_first(a_head), (a_elm), a_field);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tql_first(a_head) = (a_elm);\t\t\t\t\t\\\n} while (0)\n\n/* Inserts a_elm as the last item in the list. 
*/\n#define ql_tail_insert(a_head, a_elm, a_field) do {\t\t\t\\\n\tif (!ql_empty(a_head)) {\t\t\t\t\t\\\n\t\tqr_before_insert(ql_first(a_head), (a_elm), a_field);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tql_first(a_head) = qr_next((a_elm), a_field);\t\t\t\\\n} while (0)\n\n/*\n * Given lists a = [a_1, ..., a_n] and [b_1, ..., b_n], results in:\n * a = [a1, ..., a_n, b_1, ..., b_n] and b = [].\n */\n#define ql_concat(a_head_a, a_head_b, a_field) do {\t\t\t\\\n\tif (ql_empty(a_head_a)) {\t\t\t\t\t\\\n\t\tql_move(a_head_a, a_head_b);\t\t\t\t\\\n\t} else if (!ql_empty(a_head_b)) {\t\t\t\t\\\n\t\tqr_meld(ql_first(a_head_a), ql_first(a_head_b),\t\t\\\n\t\t    a_field);\t\t\t\t\t\t\\\n\t\tql_new(a_head_b);\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Removes a_elm from the list. */\n#define ql_remove(a_head, a_elm, a_field) do {\t\t\t\t\\\n\tif (ql_first(a_head) == (a_elm)) {\t\t\t\t\\\n\t\tql_first(a_head) = qr_next(ql_first(a_head), a_field);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (ql_first(a_head) != (a_elm)) {\t\t\t\t\\\n\t\tqr_remove((a_elm), a_field);\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tql_new(a_head);\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Removes the first item in the list. */\n#define ql_head_remove(a_head, a_type, a_field) do {\t\t\t\\\n\ta_type *t = ql_first(a_head);\t\t\t\t\t\\\n\tql_remove((a_head), t, a_field);\t\t\t\t\\\n} while (0)\n\n/* Removes the last item in the list. 
*/\n#define ql_tail_remove(a_head, a_type, a_field) do {\t\t\t\\\n\ta_type *t = ql_last(a_head, a_field);\t\t\t\t\\\n\tql_remove((a_head), t, a_field);\t\t\t\t\\\n} while (0)\n\n/*\n * Given a = [a_1, a_2, ..., a_n-1, a_n, a_n+1, ...],\n * ql_split(a, a_n, b, some_field) results in\n *   a = [a_1, a_2, ..., a_n-1]\n * and replaces b's contents with:\n *   b = [a_n, a_n+1, ...]\n */\n#define ql_split(a_head_a, a_elm, a_head_b, a_field) do {\t\t\\\n\tif (ql_first(a_head_a) == (a_elm)) {\t\t\t\t\\\n\t\tql_move(a_head_b, a_head_a);\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tqr_split(ql_first(a_head_a), (a_elm), a_field);\t\t\\\n\t\tql_first(a_head_b) = (a_elm);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/*\n * An optimized version of:\n *\ta_type *t = ql_first(a_head);\n *\tql_remove((a_head), t, a_field);\n *\tql_tail_insert((a_head), t, a_field);\n */\n#define ql_rotate(a_head, a_field) do {\t\t\t\t\t\\\n\tql_first(a_head) = qr_next(ql_first(a_head), a_field);\t\t\\\n} while (0)\n\n/*\n * Helper macro to iterate over each element in a list in order, starting from\n * the head (or in reverse order, starting from the tail).  The usage is\n * (assuming my_t and my_list_t defined as above).\n *\n * int sum(my_list_t *list) {\n *   int sum = 0;\n *   my_t *iter;\n *   ql_foreach(iter, list, link) {\n *     sum += iter->data;\n *   }\n *   return sum;\n * }\n */\n\n#define ql_foreach(a_var, a_head, a_field)\t\t\t\t\\\n\tqr_foreach((a_var), ql_first(a_head), a_field)\n\n#define ql_reverse_foreach(a_var, a_head, a_field)\t\t\t\\\n\tqr_reverse_foreach((a_var), ql_first(a_head), a_field)\n\n#endif /* JEMALLOC_INTERNAL_QL_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/qr.h",
    "content": "#ifndef JEMALLOC_INTERNAL_QR_H\n#define JEMALLOC_INTERNAL_QR_H\n\n/*\n * A ring implementation based on an embedded circular doubly-linked list.\n *\n * You define your struct like so:\n *\n * typedef struct my_s my_t;\n * struct my_s {\n *   int data;\n *   qr(my_t) my_link;\n * };\n *\n * And then pass a my_t * into macros for a_qr arguments, and the token\n * \"my_link\" into a_field fields.\n */\n\n/* Ring definitions. */\n#define qr(a_type)\t\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n\ta_type\t*qre_next;\t\t\t\t\t\t\\\n\ta_type\t*qre_prev;\t\t\t\t\t\t\\\n}\n\n/*\n * Initialize a qr link.  Every link must be initialized before being used, even\n * if that initialization is going to be immediately overwritten (say, by being\n * passed into an insertion macro).\n */\n#define qr_new(a_qr, a_field) do {\t\t\t\t\t\\\n\t(a_qr)->a_field.qre_next = (a_qr);\t\t\t\t\\\n\t(a_qr)->a_field.qre_prev = (a_qr);\t\t\t\t\\\n} while (0)\n\n/*\n * Go forwards or backwards in the ring.  Note that (the ring being circular), this\n * always succeeds -- you just keep looping around and around the ring if you\n * chase pointers without end.\n */\n#define qr_next(a_qr, a_field) ((a_qr)->a_field.qre_next)\n#define qr_prev(a_qr, a_field) ((a_qr)->a_field.qre_prev)\n\n/*\n * Given two rings:\n *    a -> a_1 -> ... -> a_n --\n *    ^                       |\n *    |------------------------\n *\n *    b -> b_1 -> ... -> b_n --\n *    ^                       |\n *    |------------------------\n *\n * Results in the ring:\n *   a -> a_1 -> ... -> a_n -> b -> b_1 -> ... 
-> b_n --\n *   ^                                                 |\n *   |-------------------------------------------------|\n *\n * a_qr_a can directly be a qr_next() macro, but a_qr_b cannot.\n */\n#define qr_meld(a_qr_a, a_qr_b, a_field) do {\t\t\t\t\\\n\t(a_qr_b)->a_field.qre_prev->a_field.qre_next =\t\t\t\\\n\t    (a_qr_a)->a_field.qre_prev;\t\t\t\t\t\\\n\t(a_qr_a)->a_field.qre_prev = (a_qr_b)->a_field.qre_prev;\t\\\n\t(a_qr_b)->a_field.qre_prev =\t\t\t\t\t\\\n\t    (a_qr_b)->a_field.qre_prev->a_field.qre_next;\t\t\\\n\t(a_qr_a)->a_field.qre_prev->a_field.qre_next = (a_qr_a);\t\\\n\t(a_qr_b)->a_field.qre_prev->a_field.qre_next = (a_qr_b);\t\\\n} while (0)\n\n/*\n * Logically, this is just a meld.  The intent, though, is that a_qrelm is a\n * single-element ring, so that \"before\" has a more obvious interpretation than\n * meld.\n */\n#define qr_before_insert(a_qrelm, a_qr, a_field)\t\t\t\\\n\tqr_meld((a_qrelm), (a_qr), a_field)\n\n/* Ditto, but inserting after rather than before. */\n#define qr_after_insert(a_qrelm, a_qr, a_field)\t\t\t\t\\\n\tqr_before_insert(qr_next(a_qrelm, a_field), (a_qr), a_field)\n\n/*\n * Inverts meld; given the ring:\n *   a -> a_1 -> ... -> a_n -> b -> b_1 -> ... -> b_n --\n *   ^                                                 |\n *   |-------------------------------------------------|\n *\n * Results in two rings:\n *    a -> a_1 -> ... -> a_n --\n *    ^                       |\n *    |------------------------\n *\n *    b -> b_1 -> ... 
-> b_n --\n *    ^                       |\n *    |------------------------\n *\n * qr_meld() and qr_split() are functionally equivalent, so there's no need to\n * have two copies of the code.\n */\n#define qr_split(a_qr_a, a_qr_b, a_field)\t\t\t\t\\\n\tqr_meld((a_qr_a), (a_qr_b), a_field)\n\n/*\n * Splits off a_qr from the rest of its ring, so that it becomes a\n * single-element ring.\n */\n#define qr_remove(a_qr, a_field)\t\t\t\t\t\\\n\tqr_split(qr_next(a_qr, a_field), (a_qr), a_field)\n\n/*\n * Helper macro to iterate over each element in a ring exactly once, starting\n * with a_qr.  The usage is (assuming my_t defined as above):\n *\n * int sum(my_t *item) {\n *   int sum = 0;\n *   my_t *iter;\n *   qr_foreach(iter, item, link) {\n *     sum += iter->data;\n *   }\n *   return sum;\n * }\n */\n#define qr_foreach(var, a_qr, a_field)\t\t\t\t\t\\\n\tfor ((var) = (a_qr);\t\t\t\t\t\t\\\n\t    (var) != NULL;\t\t\t\t\t\t\\\n\t    (var) = (((var)->a_field.qre_next != (a_qr))\t\t\\\n\t    ? (var)->a_field.qre_next : NULL))\n\n/*\n * The same (and with the same usage) as qr_foreach, but in the opposite order,\n * ending with a_qr.\n */\n#define qr_reverse_foreach(var, a_qr, a_field)\t\t\t\t\\\n\tfor ((var) = ((a_qr) != NULL) ? qr_prev(a_qr, a_field) : NULL;\t\\\n\t    (var) != NULL;\t\t\t\t\t\t\\\n\t    (var) = (((var) != (a_qr))\t\t\t\t\t\\\n\t    ? (var)->a_field.qre_prev : NULL))\n\n#endif /* JEMALLOC_INTERNAL_QR_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/quantum.h",
    "content": "#ifndef JEMALLOC_INTERNAL_QUANTUM_H\n#define JEMALLOC_INTERNAL_QUANTUM_H\n\n/*\n * Minimum allocation alignment is 2^LG_QUANTUM bytes (ignoring tiny size\n * classes).\n */\n#ifndef LG_QUANTUM\n#  if (defined(__i386__) || defined(_M_IX86))\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __ia64__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __alpha__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  if (defined(__sparc64__) || defined(__sparcv9) || defined(__sparc_v9__))\n#    define LG_QUANTUM\t\t4\n#  endif\n#  if (defined(__amd64__) || defined(__x86_64__) || defined(_M_X64))\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __arm__\n#    define LG_QUANTUM\t\t3\n#  endif\n#  ifdef __aarch64__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __hppa__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __loongarch__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __m68k__\n#    define LG_QUANTUM\t\t3\n#  endif\n#  ifdef __mips__\n#    if defined(__mips_n32) || defined(__mips_n64)\n#      define LG_QUANTUM\t\t4\n#    else\n#      define LG_QUANTUM\t\t3\n#    endif\n#  endif\n#  ifdef __nios2__\n#    define LG_QUANTUM\t\t3\n#  endif\n#  ifdef __or1k__\n#    define LG_QUANTUM\t\t3\n#  endif\n#  ifdef __powerpc__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  if defined(__riscv) || defined(__riscv__)\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __s390__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  if (defined (__SH3E__) || defined(__SH4_SINGLE__) || defined(__SH4__) || \\\n\tdefined(__SH4_SINGLE_ONLY__))\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __tile__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __le32__\n#    define LG_QUANTUM\t\t4\n#  endif\n#  ifdef __arc__\n#    define LG_QUANTUM\t\t3\n#  endif\n#  ifndef LG_QUANTUM\n#    error \"Unknown minimum alignment for architecture; specify via \"\n\t \"--with-lg-quantum\"\n#  endif\n#endif\n\n#define QUANTUM\t\t\t((size_t)(1U << LG_QUANTUM))\n#define QUANTUM_MASK\t\t(QUANTUM - 1)\n\n/* Return the 
smallest quantum multiple that is >= a. */\n#define QUANTUM_CEILING(a)\t\t\t\t\t\t\\\n\t(((a) + QUANTUM_MASK) & ~QUANTUM_MASK)\n\n#endif /* JEMALLOC_INTERNAL_QUANTUM_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/rb.h",
    "content": "#ifndef JEMALLOC_INTERNAL_RB_H\n#define JEMALLOC_INTERNAL_RB_H\n\n/*-\n *******************************************************************************\n *\n * cpp macro implementation of left-leaning 2-3 red-black trees.  Parent\n * pointers are not used, and color bits are stored in the least significant\n * bit of right-child pointers (if RB_COMPACT is defined), thus making node\n * linkage as compact as is possible for red-black trees.\n *\n * Usage:\n *\n *   #include <stdint.h>\n *   #include <stdbool.h>\n *   #define NDEBUG // (Optional, see assert(3).)\n *   #include <assert.h>\n *   #define RB_COMPACT // (Optional, embed color bits in right-child pointers.)\n *   #include <rb.h>\n *   ...\n *\n *******************************************************************************\n */\n\n#ifndef __PGI\n#define RB_COMPACT\n#endif\n\n/*\n * Each node in the RB tree consumes at least 1 byte of space (for the linkage\n * if nothing else, so there are a maximum of sizeof(void *) << 3 rb tree nodes\n * in any process (and thus, at most sizeof(void *) << 3 nodes in any rb tree).\n * The choice of algorithm bounds the depth of a tree to twice the binary log of\n * the number of elements in the tree; the following bound follows.\n */\n#define RB_MAX_DEPTH (sizeof(void *) << 4)\n\n#ifdef RB_COMPACT\n/* Node structure. */\n#define rb_node(a_type)\t\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n    a_type *rbn_left;\t\t\t\t\t\t\t\\\n    a_type *rbn_right_red;\t\t\t\t\t\t\\\n}\n#else\n#define rb_node(a_type)\t\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n    a_type *rbn_left;\t\t\t\t\t\t\t\\\n    a_type *rbn_right;\t\t\t\t\t\t\t\\\n    bool rbn_red;\t\t\t\t\t\t\t\\\n}\n#endif\n\n/* Root structure. */\n#define rb_tree(a_type)\t\t\t\t\t\t\t\\\nstruct {\t\t\t\t\t\t\t\t\\\n    a_type *rbt_root;\t\t\t\t\t\t\t\\\n}\n\n/* Left accessors. 
*/\n#define rbtn_left_get(a_type, a_field, a_node)\t\t\t\t\\\n    ((a_node)->a_field.rbn_left)\n#define rbtn_left_set(a_type, a_field, a_node, a_left) do {\t\t\\\n    (a_node)->a_field.rbn_left = a_left;\t\t\t\t\\\n} while (0)\n\n#ifdef RB_COMPACT\n/* Right accessors. */\n#define rbtn_right_get(a_type, a_field, a_node)\t\t\t\t\\\n    ((a_type *) (((intptr_t) (a_node)->a_field.rbn_right_red)\t\t\\\n      & ((ssize_t)-2)))\n#define rbtn_right_set(a_type, a_field, a_node, a_right) do {\t\t\\\n    (a_node)->a_field.rbn_right_red = (a_type *) (((uintptr_t) a_right)\t\\\n      | (((uintptr_t) (a_node)->a_field.rbn_right_red) & ((size_t)1)));\t\\\n} while (0)\n\n/* Color accessors. */\n#define rbtn_red_get(a_type, a_field, a_node)\t\t\t\t\\\n    ((bool) (((uintptr_t) (a_node)->a_field.rbn_right_red)\t\t\\\n      & ((size_t)1)))\n#define rbtn_color_set(a_type, a_field, a_node, a_red) do {\t\t\\\n    (a_node)->a_field.rbn_right_red = (a_type *) ((((intptr_t)\t\t\\\n      (a_node)->a_field.rbn_right_red) & ((ssize_t)-2))\t\t\t\\\n      | ((ssize_t)a_red));\t\t\t\t\t\t\\\n} while (0)\n#define rbtn_red_set(a_type, a_field, a_node) do {\t\t\t\\\n    (a_node)->a_field.rbn_right_red = (a_type *) (((uintptr_t)\t\t\\\n      (a_node)->a_field.rbn_right_red) | ((size_t)1));\t\t\t\\\n} while (0)\n#define rbtn_black_set(a_type, a_field, a_node) do {\t\t\t\\\n    (a_node)->a_field.rbn_right_red = (a_type *) (((intptr_t)\t\t\\\n      (a_node)->a_field.rbn_right_red) & ((ssize_t)-2));\t\t\\\n} while (0)\n\n/* Node initializer. */\n#define rbt_node_new(a_type, a_field, a_rbt, a_node) do {\t\t\\\n    /* Bookkeeping bit cannot be used by node pointer. */\t\t\\\n    assert(((uintptr_t)(a_node) & 0x1) == 0);\t\t\t\t\\\n    rbtn_left_set(a_type, a_field, (a_node), NULL);\t\\\n    rbtn_right_set(a_type, a_field, (a_node), NULL);\t\\\n    rbtn_red_set(a_type, a_field, (a_node));\t\t\t\t\\\n} while (0)\n#else\n/* Right accessors. 
*/\n#define rbtn_right_get(a_type, a_field, a_node)\t\t\t\t\\\n    ((a_node)->a_field.rbn_right)\n#define rbtn_right_set(a_type, a_field, a_node, a_right) do {\t\t\\\n    (a_node)->a_field.rbn_right = a_right;\t\t\t\t\\\n} while (0)\n\n/* Color accessors. */\n#define rbtn_red_get(a_type, a_field, a_node)\t\t\t\t\\\n    ((a_node)->a_field.rbn_red)\n#define rbtn_color_set(a_type, a_field, a_node, a_red) do {\t\t\\\n    (a_node)->a_field.rbn_red = (a_red);\t\t\t\t\\\n} while (0)\n#define rbtn_red_set(a_type, a_field, a_node) do {\t\t\t\\\n    (a_node)->a_field.rbn_red = true;\t\t\t\t\t\\\n} while (0)\n#define rbtn_black_set(a_type, a_field, a_node) do {\t\t\t\\\n    (a_node)->a_field.rbn_red = false;\t\t\t\t\t\\\n} while (0)\n\n/* Node initializer. */\n#define rbt_node_new(a_type, a_field, a_rbt, a_node) do {\t\t\\\n    rbtn_left_set(a_type, a_field, (a_node), NULL);\t\\\n    rbtn_right_set(a_type, a_field, (a_node), NULL);\t\\\n    rbtn_red_set(a_type, a_field, (a_node));\t\t\t\t\\\n} while (0)\n#endif\n\n/* Tree initializer. */\n#define rb_new(a_type, a_field, a_rbt) do {\t\t\t\t\\\n    (a_rbt)->rbt_root = NULL;\t\t\t\t\t\t\\\n} while (0)\n\n/* Internal utility macros. 
*/\n#define rbtn_first(a_type, a_field, a_rbt, a_root, r_node) do {\t\t\\\n    (r_node) = (a_root);\t\t\t\t\t\t\\\n    if ((r_node) != NULL) {\t\t\t\t\t\t\\\n\tfor (;\t\t\t\t\t\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, (r_node)) != NULL;\t\t\\\n\t  (r_node) = rbtn_left_get(a_type, a_field, (r_node))) {\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define rbtn_last(a_type, a_field, a_rbt, a_root, r_node) do {\t\t\\\n    (r_node) = (a_root);\t\t\t\t\t\t\\\n    if ((r_node) != NULL) {\t\t\t\t\t\t\\\n\tfor (; rbtn_right_get(a_type, a_field, (r_node)) != NULL;\t\\\n\t  (r_node) = rbtn_right_get(a_type, a_field, (r_node))) {\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define rbtn_rotate_left(a_type, a_field, a_node, r_node) do {\t\t\\\n    (r_node) = rbtn_right_get(a_type, a_field, (a_node));\t\t\\\n    rbtn_right_set(a_type, a_field, (a_node),\t\t\t\t\\\n      rbtn_left_get(a_type, a_field, (r_node)));\t\t\t\\\n    rbtn_left_set(a_type, a_field, (r_node), (a_node));\t\t\t\\\n} while (0)\n\n#define rbtn_rotate_right(a_type, a_field, a_node, r_node) do {\t\t\\\n    (r_node) = rbtn_left_get(a_type, a_field, (a_node));\t\t\\\n    rbtn_left_set(a_type, a_field, (a_node),\t\t\t\t\\\n      rbtn_right_get(a_type, a_field, (r_node)));\t\t\t\\\n    rbtn_right_set(a_type, a_field, (r_node), (a_node));\t\t\\\n} while (0)\n\n#define rb_summarized_only_false(...)\n#define rb_summarized_only_true(...) 
__VA_ARGS__\n#define rb_empty_summarize(a_node, a_lchild, a_rchild) false\n\n/*\n * The rb_proto() and rb_summarized_proto() macros generate function prototypes\n * that correspond to the functions generated by an equivalently parameterized\n * call to rb_gen() or rb_summarized_gen(), respectively.\n */\n\n#define rb_proto(a_attr, a_prefix, a_rbt_type, a_type)\t\t\t\\\n    rb_proto_impl(a_attr, a_prefix, a_rbt_type, a_type, false)\n#define rb_summarized_proto(a_attr, a_prefix, a_rbt_type, a_type)\t\\\n    rb_proto_impl(a_attr, a_prefix, a_rbt_type, a_type, true)\n#define rb_proto_impl(a_attr, a_prefix, a_rbt_type, a_type,\t\t\\\n    a_is_summarized)\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##new(a_rbt_type *rbtree);\t\t\t\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##empty(a_rbt_type *rbtree);\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##first(a_rbt_type *rbtree);\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##last(a_rbt_type *rbtree);\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##next(a_rbt_type *rbtree, a_type *node);\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##prev(a_rbt_type *rbtree, a_type *node);\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##search(a_rbt_type *rbtree, const a_type *key);\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##nsearch(a_rbt_type *rbtree, const a_type *key);\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##psearch(a_rbt_type *rbtree, const a_type *key);\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##insert(a_rbt_type *rbtree, a_type *node);\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##remove(a_rbt_type *rbtree, a_type *node);\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)(\t\\\n  a_rbt_type *, a_type *, void *), void *arg);\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start,\t\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void 
*arg);\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *),\t\\\n  void *arg);\t\t\t\t\t\t\t\t\\\n/* Extended API */\t\t\t\t\t\t\t\\\nrb_summarized_only_##a_is_summarized(\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##update_summaries(a_rbt_type *rbtree, a_type *node);\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##empty_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##first_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##last_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##next_filtered(a_rbt_type *rbtree, a_type *node,\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##prev_filtered(a_rbt_type *rbtree, a_type *node,\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##search_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##nsearch_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool 
(*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##psearch_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_filtered(a_rbt_type *rbtree, a_type *start,\t\t\\\n    a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg,\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_filtered(a_rbt_type *rbtree, a_type *start,\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg,\t\t\\\n    bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n    bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n    void *filter_ctx);\t\t\t\t\t\t\t\\\n)\n\n/*\n * The rb_gen() macro generates a type-specific red-black tree implementation,\n * based on the above cpp macros.\n * Arguments:\n *\n *   a_attr:\n *     Function attribute for generated functions (ex: static).\n *   a_prefix:\n *     Prefix for generated functions (ex: ex_).\n *   a_rb_type:\n *     Type for red-black tree data structure (ex: ex_t).\n *   a_type:\n *     Type for red-black tree node data structure (ex: ex_node_t).\n *   a_field:\n *     Name of red-black tree node linkage (ex: ex_link).\n *   a_cmp:\n *     Node comparison function name, with the following prototype:\n *\n *     int a_cmp(a_type *a_node, a_type *a_other);\n *                        ^^^^^^\n *                        or a_key\n *     Interpretation of comparison function return values:\n *       -1 : a_node <  a_other\n *        0 : a_node == a_other\n *        1 : a_node >  a_other\n *     In all cases, the a_node or a_key macro argument is the first argument to\n *     the comparison function, which 
makes it possible to write comparison\n *     functions that treat the first argument specially.  a_cmp must be a total\n *     order on values inserted into the tree -- duplicates are not allowed.\n *\n * Assuming the following setup:\n *\n *   typedef struct ex_node_s ex_node_t;\n *   struct ex_node_s {\n *       rb_node(ex_node_t) ex_link;\n *   };\n *   typedef rb_tree(ex_node_t) ex_t;\n *   rb_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp)\n *\n * The following API is generated:\n *\n *   static void\n *   ex_new(ex_t *tree);\n *       Description: Initialize a red-black tree structure.\n *       Args:\n *         tree: Pointer to an uninitialized red-black tree object.\n *\n *   static bool\n *   ex_empty(ex_t *tree);\n *       Description: Determine whether tree is empty.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *       Ret: True if tree is empty, false otherwise.\n *\n *   static ex_node_t *\n *   ex_first(ex_t *tree);\n *   static ex_node_t *\n *   ex_last(ex_t *tree);\n *       Description: Get the first/last node in tree.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *       Ret: First/last node in tree, or NULL if tree is empty.\n *\n *   static ex_node_t *\n *   ex_next(ex_t *tree, ex_node_t *node);\n *   static ex_node_t *\n *   ex_prev(ex_t *tree, ex_node_t *node);\n *       Description: Get node's successor/predecessor.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         node: A node in tree.\n *       Ret: node's successor/predecessor in tree, or NULL if node is\n *            last/first.\n *\n *   static ex_node_t *\n *   ex_search(ex_t *tree, const ex_node_t *key);\n *       Description: Search for node that matches key.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         key : Search key.\n *       Ret: Node in tree that matches key, or NULL if no match.\n *\n *   static 
ex_node_t *\n *   ex_nsearch(ex_t *tree, const ex_node_t *key);\n *   static ex_node_t *\n *   ex_psearch(ex_t *tree, const ex_node_t *key);\n *       Description: Search for node that matches key.  If no match is found,\n *                    return what would be key's successor/predecessor, were\n *                    key in tree.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         key : Search key.\n *       Ret: Node in tree that matches key, or if no match, hypothetical node's\n *            successor/predecessor (NULL if no successor/predecessor).\n *\n *   static void\n *   ex_insert(ex_t *tree, ex_node_t *node);\n *       Description: Insert node into tree.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         node: Node to be inserted into tree.\n *\n *   static void\n *   ex_remove(ex_t *tree, ex_node_t *node);\n *       Description: Remove node from tree.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         node: Node in tree to be removed.\n *\n *   static ex_node_t *\n *   ex_iter(ex_t *tree, ex_node_t *start, ex_node_t *(*cb)(ex_t *,\n *     ex_node_t *, void *), void *arg);\n *   static ex_node_t *\n *   ex_reverse_iter(ex_t *tree, ex_node_t *start, ex_node_t *(*cb)(ex_t *,\n *     ex_node_t *, void *), void *arg);\n *       Description: Iterate forward/backward over tree, starting at node.  If\n *                    tree is modified, iteration must be immediately\n *                    terminated by the callback function that causes the\n *                    modification.\n *       Args:\n *         tree : Pointer to an initialized red-black tree object.\n *         start: Node at which to start iteration, or NULL to start at\n *                first/last node.\n *         cb   : Callback function, which is called for each node during\n *                iteration.  
Under normal circumstances the callback function\n *                should return NULL, which causes iteration to continue.  If a\n *                callback function returns non-NULL, iteration is immediately\n *                terminated and the non-NULL return value is returned by the\n *                iterator.  This is useful for re-starting iteration after\n *                modifying tree.\n *         arg  : Opaque pointer passed to cb().\n *       Ret: NULL if iteration completed, or the non-NULL callback return value\n *            that caused termination of the iteration.\n *\n *   static void\n *   ex_destroy(ex_t *tree, void (*cb)(ex_node_t *, void *), void *arg);\n *       Description: Iterate over the tree with post-order traversal, remove\n *                    each node, and run the callback if non-null.  This is\n *                    used for destroying a tree without paying the cost to\n *                    rebalance it.  The tree must not be otherwise altered\n *                    during traversal.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         cb  : Callback function, which, if non-null, is called for each node\n *               during iteration.  There is no way to stop iteration once it\n *               has begun.\n *         arg : Opaque pointer passed to cb().\n *\n * The rb_summarized_gen() macro generates all the functions above, but has an\n * expanded interface.  
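As a concrete illustration of the a_cmp contract described above, here is a minimal standalone sketch (an illustration only, not part of the library; the ex_node_t below mirrors the example setup but omits the rb_node linkage so it compiles on its own):

```c
#include <assert.h>

/* Stand-in for the ex_node_t of the example setup; the rb_node
 * linkage field is omitted so this sketch is self-contained. */
typedef struct {
    unsigned data; /* application key */
} ex_node_t;

/* Total order over nodes, returning strictly -1, 0, or 1; the node
 * being inserted (or the search key) is always the first argument. */
static int
ex_cmp(ex_node_t *a_node, ex_node_t *a_other) {
    return (a_node->data > a_other->data) - (a_node->data < a_other->data);
}
```

Returning exactly -1/0/1 (rather than arbitrary negative/positive values) matches the interpretation table above; the subtraction-of-comparisons idiom also avoids integer overflow that `a - b` would risk.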
It introduces the notion of summarizing subtrees, and of\n * filtering searches in the tree according to the information contained in\n * those summaries.\n * The extra macro argument is:\n *   a_summarize:\n *     Tree summarization function name, with the following prototype:\n *\n *     bool a_summarize(a_type *a_node, const a_type *a_left_child,\n *         const a_type *a_right_child);\n *\n *     This function should update a_node with the summary of the subtree rooted\n *     there, using the data contained in it and the summaries in a_left_child\n *     and a_right_child.  One or both of them may be NULL.  When the tree\n *     changes due to an insertion or removal, it updates the summaries of all\n *     nodes whose subtrees have changed (always updating the summaries of\n *     children before their parents).  If the user alters a node in the tree in\n *     a way that may change its summary, they can call the generated\n *     update_summaries function to bubble up the summary changes to the root.\n *     It should return true if the summary changed (or may have changed), and\n *     false if it didn't (which will allow the implementation to terminate\n *     \"bubbling up\" the summaries early).\n *     As the parameter names indicate, the children are ordered as they are in\n *     the tree: a_left_child, if it is not NULL, compares less than a_node,\n *     which in turn compares less than a_right_child (if a_right_child is not\n *     NULL).\n *\n * Using the same setup as above but replacing the macro with\n *   rb_summarized_gen(static, ex_, ex_t, ex_node_t, ex_link, ex_cmp,\n *       ex_summarize)\n *\n * Generates all the previous functions, but adds some more:\n *\n *   static void\n *   ex_update_summaries(ex_t *tree, ex_node_t *node);\n *       Description: Recompute all summaries of ancestors of node.\n *       Args:\n *         tree: Pointer to an initialized red-black tree object.\n *         node: The element of the tree whose summary may 
have changed.\n *\n * For each of ex_empty, ex_first, ex_last, ex_next, ex_prev, ex_search,\n * ex_nsearch, ex_psearch, ex_iter, and ex_reverse_iter, an additional function\n * is generated as well, with the suffix _filtered (e.g. ex_empty_filtered,\n * ex_first_filtered, etc.).  These use the concept of a \"filter\": a binary\n * property some node either satisfies or does not satisfy.  Clever use of the\n * a_summarize argument to rb_summarized_gen can allow efficient computation of\n * these predicates across whole subtrees of the tree.\n * The extended API functions accept three additional arguments after the\n * arguments to the corresponding non-extended equivalent.\n *\n * ex_fn(..., bool (*filter_node)(void *, ex_node_t *),\n *     bool (*filter_subtree)(void *, ex_node_t *), void *filter_ctx);\n *         filter_node    : Returns true if the node passes the filter.\n *         filter_subtree : Returns true if some node in the subtree rooted at\n *                          node passes the filter.\n *         filter_ctx     : A context argument passed to the filters.\n *\n * For a more concrete example of summarizing and filtering, suppose we're using\n * the red-black tree to track a set of integers:\n *\n * struct ex_node_s {\n *     rb_node(ex_node_t) ex_link;\n *     unsigned data;\n * };\n *\n * Suppose, for some application-specific reason, we want to be able to quickly\n * find numbers in the set which are divisible by large powers of 2 (say, for\n * aligned allocation purposes).  
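Since the filtering example that follows leans on ffs(), here is a quick standalone sketch of the underlying fact (assuming the POSIX ffs() from <strings.h>, which returns the 1-based index of the least significant set bit): a nonzero x is divisible by 2^k exactly when ffs(x) >= k + 1.

```c
#include <assert.h>
#include <strings.h> /* POSIX ffs() */

/* A nonzero x is divisible by 2^k exactly when its lowest set bit
 * sits at 1-based position k + 1 or higher. */
static int
divisible_by_pow2(unsigned x, unsigned k) {
    return x != 0 && (unsigned)ffs((int)x) >= k + 1;
}
```

For example, ffs(128) is 8, so divisibility by 128 = 2^7 corresponds to a required ffs of at least 8.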
We augment the node with a summary field:\n *\n * struct ex_node_s {\n *     rb_node(ex_node_t) ex_link;\n *     unsigned data;\n *     unsigned max_subtree_ffs;\n * };\n *\n * and define our summarization function as follows:\n *\n * bool\n * ex_summarize(ex_node_t *node, const ex_node_t *lchild,\n *   const ex_node_t *rchild) {\n *     unsigned new_max_subtree_ffs = ffs(node->data);\n *     if (lchild != NULL && lchild->max_subtree_ffs > new_max_subtree_ffs) {\n *         new_max_subtree_ffs = lchild->max_subtree_ffs;\n *     }\n *     if (rchild != NULL && rchild->max_subtree_ffs > new_max_subtree_ffs) {\n *         new_max_subtree_ffs = rchild->max_subtree_ffs;\n *     }\n *     bool changed = (node->max_subtree_ffs != new_max_subtree_ffs);\n *     node->max_subtree_ffs = new_max_subtree_ffs;\n *     // This could be \"return true\" without any correctness or big-O\n *     // performance changes; but practically, precisely reporting summary\n *     // changes reduces the amount of work that has to be done when \"bubbling\n *     // up\" summary changes.\n *     return changed;\n * }\n *\n * We can now implement our filter functions as follows:\n * bool\n * ex_filter_node(void *filter_ctx, ex_node_t *node) {\n *     unsigned required_ffs = *(unsigned *)filter_ctx;\n *     return ffs(node->data) >= required_ffs;\n * }\n * bool\n * ex_filter_subtree(void *filter_ctx, ex_node_t *node) {\n *     unsigned required_ffs = *(unsigned *)filter_ctx;\n *     return node->max_subtree_ffs >= required_ffs;\n * }\n *\n * We can now easily search for, e.g., the smallest integer in the set that's\n * divisible by 128:\n * ex_node_t *\n * find_div_128(ex_t *tree) {\n *     unsigned min_ffs = 8;\n *     return ex_first_filtered(tree, &ex_filter_node, &ex_filter_subtree,\n *         &min_ffs);\n * }\n *\n * We could with similar ease:\n * - Find the next multiple of 128 in the set that's larger than 12345 (with\n *   ex_nsearch_filtered)\n * - Iterate over just those multiples of 
64 that are in the set (with\n *   ex_iter_filtered)\n * - Determine if the set contains any multiples of 1024 (with\n *   ex_empty_filtered).\n *\n * Some possibly subtle API notes:\n * - The node argument to ex_next_filtered and ex_prev_filtered need not pass\n *   the filter; it will find the next/prev node that passes the filter.\n * - ex_search_filtered will fail even for a node in the tree, if that node does\n *   not pass the filter.  ex_psearch_filtered and ex_nsearch_filtered behave\n *   similarly; they may return a node larger/smaller than the key, even if a\n *   node equivalent to the key is in the tree (but does not pass the filter).\n * - Similarly, if the start argument to a filtered iteration function does not\n *   pass the filter, the callback won't be invoked on it.\n *\n * These should make sense after a moment's reflection; each post-condition is\n * the same as with the unfiltered version, with the added constraint that the\n * returned node must pass the filter.\n */\n#define rb_gen(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp)\t\\\n    rb_gen_impl(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp,\t\\\n\trb_empty_summarize, false)\n#define rb_summarized_gen(a_attr, a_prefix, a_rbt_type, a_type,\t\t\\\n    a_field, a_cmp, a_summarize)\t\t\t\t\t\\\n    rb_gen_impl(a_attr, a_prefix, a_rbt_type, a_type, a_field, a_cmp,\t\\\n\ta_summarize, true)\n\n#define rb_gen_impl(a_attr, a_prefix, a_rbt_type, a_type,\t\t\\\n    a_field, a_cmp, a_summarize, a_is_summarized)\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n    a_type *node;\t\t\t\t\t\t\t\\\n    int cmp;\t\t\t\t\t\t\t\t\\\n} a_prefix##path_entry_t;\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\na_prefix##summarize_range(a_prefix##path_entry_t *rfirst,\t\t\\\n    a_prefix##path_entry_t *rlast) {\t\t\t\t\t\\\n    while ((uintptr_t)rlast >= (uintptr_t)rfirst) {\t\t\t\\\n\ta_type *node = rlast->node;\t\t\t\t\t\\\n\t/* Avoid a warning when a_summarize is rb_empty_summarize. 
*/\t\\\n\t(void)node;\t\t\t\t\t\t\t\\\n\tbool changed = a_summarize(node, rbtn_left_get(a_type, a_field,\t\\\n\t    node), rbtn_right_get(a_type, a_field, node));\t\t\\\n\tif (!changed) {\t\t\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\trlast--;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n/* On the remove pathways, we sometimes swap the node being removed   */\\\n/* and its first successor; in such cases we need to do two range     */\\\n/* updates; one from the node to its (former) swapped successor, the  */\\\n/* next from that successor to the root (with either allowed to       */\\\n/* bail out early if appropriate.                                     */\\\nstatic inline void\t\t\t\t\t\t\t\\\na_prefix##summarize_swapped_range(a_prefix##path_entry_t *rfirst,\t\\\n    a_prefix##path_entry_t *rlast, a_prefix##path_entry_t *swap_loc) {\t\\\n\tif (swap_loc == NULL || rlast <= swap_loc) {\t\t\t\\\n\t\ta_prefix##summarize_range(rfirst, rlast);\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\ta_prefix##summarize_range(swap_loc + 1, rlast);\t\t\\\n\t\t(void)a_summarize(swap_loc->node,\t\t\t\\\n\t\t    rbtn_left_get(a_type, a_field, swap_loc->node),\t\\\n\t\t    rbtn_right_get(a_type, a_field, swap_loc->node));\t\\\n\t\ta_prefix##summarize_range(rfirst, swap_loc - 1);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##new(a_rbt_type *rbtree) {\t\t\t\t\t\\\n    rb_new(a_type, a_field, rbtree);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##empty(a_rbt_type *rbtree) {\t\t\t\t\t\\\n    return (rbtree->rbt_root == NULL);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##first(a_rbt_type *rbtree) {\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    rbtn_first(a_type, a_field, rbtree, rbtree->rbt_root, ret);\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##last(a_rbt_type *rbtree) 
{\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    rbtn_last(a_type, a_field, rbtree, rbtree->rbt_root, ret);\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##next(a_rbt_type *rbtree, a_type *node) {\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (rbtn_right_get(a_type, a_field, node) != NULL) {\t\t\\\n\trbtn_first(a_type, a_field, rbtree, rbtn_right_get(a_type,\t\\\n\t  a_field, node), ret);\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *tnode = rbtree->rbt_root;\t\t\t\t\\\n\tassert(tnode != NULL);\t\t\t\t\t\t\\\n\tret = NULL;\t\t\t\t\t\t\t\\\n\twhile (true) {\t\t\t\t\t\t\t\\\n\t    int cmp = (a_cmp)(node, tnode);\t\t\t\t\\\n\t    if (cmp < 0) {\t\t\t\t\t\t\\\n\t\tret = tnode;\t\t\t\t\t\t\\\n\t\ttnode = rbtn_left_get(a_type, a_field, tnode);\t\t\\\n\t    } else if (cmp > 0) {\t\t\t\t\t\\\n\t\ttnode = rbtn_right_get(a_type, a_field, tnode);\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    assert(tnode != NULL);\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##prev(a_rbt_type *rbtree, a_type *node) {\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (rbtn_left_get(a_type, a_field, node) != NULL) {\t\t\t\\\n\trbtn_last(a_type, a_field, rbtree, rbtn_left_get(a_type,\t\\\n\t  a_field, node), ret);\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *tnode = rbtree->rbt_root;\t\t\t\t\\\n\tassert(tnode != NULL);\t\t\t\t\t\t\\\n\tret = NULL;\t\t\t\t\t\t\t\\\n\twhile (true) {\t\t\t\t\t\t\t\\\n\t    int cmp = (a_cmp)(node, tnode);\t\t\t\t\\\n\t    if (cmp < 0) {\t\t\t\t\t\t\\\n\t\ttnode = rbtn_left_get(a_type, a_field, tnode);\t\t\\\n\t    } else if (cmp > 0) {\t\t\t\t\t\\\n\t\tret = tnode;\t\t\t\t\t\t\\\n\t\ttnode = rbtn_right_get(a_type, a_field, tnode);\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\t    
}\t\t\t\t\t\t\t\t\\\n\t    assert(tnode != NULL);\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##search(a_rbt_type *rbtree, const a_type *key) {\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    int cmp;\t\t\t\t\t\t\t\t\\\n    ret = rbtree->rbt_root;\t\t\t\t\t\t\\\n    while (ret != NULL\t\t\t\t\t\t\t\\\n      && (cmp = (a_cmp)(key, ret)) != 0) {\t\t\t\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    ret = rbtn_left_get(a_type, a_field, ret);\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    ret = rbtn_right_get(a_type, a_field, ret);\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##nsearch(a_rbt_type *rbtree, const a_type *key) {\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *tnode = rbtree->rbt_root;\t\t\t\t\t\\\n    ret = NULL;\t\t\t\t\t\t\t\t\\\n    while (tnode != NULL) {\t\t\t\t\t\t\\\n\tint cmp = (a_cmp)(key, tnode);\t\t\t\t\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    ret = tnode;\t\t\t\t\t\t\\\n\t    tnode = rbtn_left_get(a_type, a_field, tnode);\t\t\\\n\t} else if (cmp > 0) {\t\t\t\t\t\t\\\n\t    tnode = rbtn_right_get(a_type, a_field, tnode);\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    ret = tnode;\t\t\t\t\t\t\\\n\t    break;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##psearch(a_rbt_type *rbtree, const a_type *key) {\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *tnode = rbtree->rbt_root;\t\t\t\t\t\\\n    ret = NULL;\t\t\t\t\t\t\t\t\\\n    while (tnode != NULL) {\t\t\t\t\t\t\\\n\tint cmp = (a_cmp)(key, tnode);\t\t\t\t\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    tnode = rbtn_left_get(a_type, a_field, tnode);\t\t\\\n\t} else if (cmp > 0) {\t\t\t\t\t\t\\\n\t    ret = tnode;\t\t\t\t\t\t\\\n\t    tnode = 
rbtn_right_get(a_type, a_field, tnode);\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    ret = tnode;\t\t\t\t\t\t\\\n\t    break;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##insert(a_rbt_type *rbtree, a_type *node) {\t\t\t\\\n    a_prefix##path_entry_t path[RB_MAX_DEPTH];\t\t\t\\\n    a_prefix##path_entry_t *pathp;\t\t\t\t\t\\\n    rbt_node_new(a_type, a_field, rbtree, node);\t\t\t\\\n    /* Wind. */\t\t\t\t\t\t\t\t\\\n    path->node = rbtree->rbt_root;\t\t\t\t\t\\\n    for (pathp = path; pathp->node != NULL; pathp++) {\t\t\t\\\n\tint cmp = pathp->cmp = a_cmp(node, pathp->node);\t\t\\\n\tassert(cmp != 0);\t\t\t\t\t\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_left_get(a_type, a_field,\t\t\\\n\t      pathp->node);\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_right_get(a_type, a_field,\t\t\\\n\t      pathp->node);\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    pathp->node = node;\t\t\t\t\t\t\t\\\n    /* A loop invariant we maintain is that all nodes with            */\\\n    /* out-of-date summaries live in path[0], path[1], ..., *pathp.   */\\\n    /* To maintain this, we have to summarize node, since we          */\\\n    /* decrement pathp before the first iteration.                    */\\\n    assert(rbtn_left_get(a_type, a_field, node) == NULL);\t\t\\\n    assert(rbtn_right_get(a_type, a_field, node) == NULL);\t\t\\\n    (void)a_summarize(node, NULL, NULL);\t\t\t\t\\\n    /* Unwind. 
*/\t\t\t\t\t\t\t\\\n    for (pathp--; (uintptr_t)pathp >= (uintptr_t)path; pathp--) {\t\\\n\ta_type *cnode = pathp->node;\t\t\t\t\t\\\n\tif (pathp->cmp < 0) {\t\t\t\t\t\t\\\n\t    a_type *left = pathp[1].node;\t\t\t\t\\\n\t    rbtn_left_set(a_type, a_field, cnode, left);\t\t\\\n\t    if (rbtn_red_get(a_type, a_field, left)) {\t\t\t\\\n\t\ta_type *leftleft = rbtn_left_get(a_type, a_field, left);\\\n\t\tif (leftleft != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  leftleft)) {\t\t\t\t\t\t\\\n\t\t    /* Fix up 4-node. */\t\t\t\t\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, leftleft);\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, cnode, tnode);\t\\\n\t\t    (void)a_summarize(cnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, cnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, cnode));\t\\\n\t\t    cnode = tnode;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\ta_prefix##summarize_range(path, pathp);\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    a_type *right = pathp[1].node;\t\t\t\t\\\n\t    rbtn_right_set(a_type, a_field, cnode, right);\t\t\\\n\t    if (rbtn_red_get(a_type, a_field, right)) {\t\t\t\\\n\t\ta_type *left = rbtn_left_get(a_type, a_field, cnode);\t\\\n\t\tif (left != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  left)) {\t\t\t\t\t\t\\\n\t\t    /* Split 4-node. */\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, left);\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, right);\t\t\\\n\t\t    rbtn_red_set(a_type, a_field, cnode);\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /* Lean left. 
*/\t\t\t\t\t\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    bool tred = rbtn_red_get(a_type, a_field, cnode);\t\\\n\t\t    rbtn_rotate_left(a_type, a_field, cnode, tnode);\t\\\n\t\t    rbtn_color_set(a_type, a_field, tnode, tred);\t\\\n\t\t    rbtn_red_set(a_type, a_field, cnode);\t\t\\\n\t\t    (void)a_summarize(cnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, cnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, cnode));\t\\\n\t\t    cnode = tnode;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\ta_prefix##summarize_range(path, pathp);\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tpathp->node = cnode;\t\t\t\t\t\t\\\n\t(void)a_summarize(cnode,\t\t\t\t\t\\\n\t    rbtn_left_get(a_type, a_field, cnode),\t\t\t\\\n\t    rbtn_right_get(a_type, a_field, cnode));\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    /* Set root, and make it black. */\t\t\t\t\t\\\n    rbtree->rbt_root = path->node;\t\t\t\t\t\\\n    rbtn_black_set(a_type, a_field, rbtree->rbt_root);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##remove(a_rbt_type *rbtree, a_type *node) {\t\t\t\\\n    a_prefix##path_entry_t path[RB_MAX_DEPTH];\t\t\t\t\\\n    a_prefix##path_entry_t *pathp;\t\t\t\t\t\\\n    a_prefix##path_entry_t *nodep;\t\t\t\t\t\\\n    a_prefix##path_entry_t *swap_loc;\t\t\t\t\t\\\n    /* This is a \"real\" sentinel -- NULL means we didn't swap the     */\\\n    /* node to be pruned with one of its successors, and so           */\\\n    /* summarization can terminate early whenever some summary        */\\\n    /* doesn't change.                                                */\\\n    swap_loc = NULL;\t\t\t\t\t\t\t\\\n    /* This is just to silence a compiler warning. */\t\t\t\\\n    nodep = NULL;\t\t\t\t\t\t\t\\\n    /* Wind. 
*/\t\t\t\t\t\t\t\t\\\n    path->node = rbtree->rbt_root;\t\t\t\t\t\\\n    for (pathp = path; pathp->node != NULL; pathp++) {\t\t\t\\\n\tint cmp = pathp->cmp = a_cmp(node, pathp->node);\t\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_left_get(a_type, a_field,\t\t\\\n\t      pathp->node);\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_right_get(a_type, a_field,\t\t\\\n\t      pathp->node);\t\t\t\t\t\t\\\n\t    if (cmp == 0) {\t\t\t\t\t\t\\\n\t        /* Find node's successor, in preparation for swap. */\t\\\n\t\tpathp->cmp = 1;\t\t\t\t\t\t\\\n\t\tnodep = pathp;\t\t\t\t\t\t\\\n\t\tfor (pathp++; pathp->node != NULL; pathp++) {\t\t\\\n\t\t    pathp->cmp = -1;\t\t\t\t\t\\\n\t\t    pathp[1].node = rbtn_left_get(a_type, a_field,\t\\\n\t\t      pathp->node);\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    assert(nodep->node == node);\t\t\t\t\t\\\n    pathp--;\t\t\t\t\t\t\t\t\\\n    if (pathp->node != node) {\t\t\t\t\t\t\\\n\t/* Swap node with its successor. */\t\t\t\t\\\n\tswap_loc = nodep;\t\t\t\t\t\t\\\n\tbool tred = rbtn_red_get(a_type, a_field, pathp->node);\t\t\\\n\trbtn_color_set(a_type, a_field, pathp->node,\t\t\t\\\n\t  rbtn_red_get(a_type, a_field, node));\t\t\t\t\\\n\trbtn_left_set(a_type, a_field, pathp->node,\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, node));\t\t\t\\\n\t/* If node's successor is its right child, the following code */\\\n\t/* will do the wrong thing for the right child pointer.       */\\\n\t/* However, it doesn't matter, because the pointer will be    */\\\n\t/* properly set when the successor is pruned.                 */\\\n\trbtn_right_set(a_type, a_field, pathp->node,\t\t\t\\\n\t  rbtn_right_get(a_type, a_field, node));\t\t\t\\\n\trbtn_color_set(a_type, a_field, node, tred);\t\t\t\\\n\t/* The pruned leaf node's child pointers are never accessed   */\\\n\t/* again, so don't bother setting them to nil.    
            */\\\n\tnodep->node = pathp->node;\t\t\t\t\t\\\n\tpathp->node = node;\t\t\t\t\t\t\\\n\tif (nodep == path) {\t\t\t\t\t\t\\\n\t    rbtree->rbt_root = nodep->node;\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    if (nodep[-1].cmp < 0) {\t\t\t\t\t\\\n\t\trbtn_left_set(a_type, a_field, nodep[-1].node,\t\t\\\n\t\t  nodep->node);\t\t\t\t\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\trbtn_right_set(a_type, a_field, nodep[-1].node,\t\t\\\n\t\t  nodep->node);\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n\tif (left != NULL) {\t\t\t\t\t\t\\\n\t    /* node has no successor, but it has a left child.        */\\\n\t    /* Splice node out, without losing the left child.        */\\\n\t    assert(!rbtn_red_get(a_type, a_field, node));\t\t\\\n\t    assert(rbtn_red_get(a_type, a_field, left));\t\t\\\n\t    rbtn_black_set(a_type, a_field, left);\t\t\t\\\n\t    if (pathp == path) {\t\t\t\t\t\\\n\t\trbtree->rbt_root = left;\t\t\t\t\\\n\t\t/* Nothing to summarize -- the subtree rooted at the  */\\\n\t\t/* node's left child hasn't changed, and it's now the */\\\n\t\t/* root.\t\t\t\t\t      */\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\tif (pathp[-1].cmp < 0) {\t\t\t\t\\\n\t\t    rbtn_left_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t      left);\t\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    rbtn_right_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t      left);\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\ta_prefix##summarize_swapped_range(path, &pathp[-1],\t\\\n\t\t    swap_loc);\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    return;\t\t\t\t\t\t\t\\\n\t} else if (pathp == path) {\t\t\t\t\t\\\n\t    /* The tree only contained one node. 
*/\t\t\t\\\n\t    rbtree->rbt_root = NULL;\t\t\t\t\t\\\n\t    return;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    /* We've now established the invariant that the node has no right */\\\n    /* child (well, morally; we didn't bother nulling it out if we    */\\\n    /* swapped it with its successor), and that the only nodes with   */\\\n    /* out-of-date summaries live in path[0], path[1], ..., pathp[-1].*/\\\n    if (rbtn_red_get(a_type, a_field, pathp->node)) {\t\t\t\\\n\t/* Prune red node, which requires no fixup. */\t\t\t\\\n\tassert(pathp[-1].cmp < 0);\t\t\t\t\t\\\n\trbtn_left_set(a_type, a_field, pathp[-1].node, NULL);\t\t\\\n\ta_prefix##summarize_swapped_range(path, &pathp[-1], swap_loc);\t\\\n\treturn;\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    /* The node to be pruned is black, so unwind until balance is     */\\\n    /* restored.                                                      */\\\n    pathp->node = NULL;\t\t\t\t\t\t\t\\\n    for (pathp--; (uintptr_t)pathp >= (uintptr_t)path; pathp--) {\t\\\n\tassert(pathp->cmp != 0);\t\t\t\t\t\\\n\tif (pathp->cmp < 0) {\t\t\t\t\t\t\\\n\t    rbtn_left_set(a_type, a_field, pathp->node,\t\t\t\\\n\t      pathp[1].node);\t\t\t\t\t\t\\\n\t    if (rbtn_red_get(a_type, a_field, pathp->node)) {\t\t\\\n\t\ta_type *right = rbtn_right_get(a_type, a_field,\t\t\\\n\t\t  pathp->node);\t\t\t\t\t\t\\\n\t\ta_type *rightleft = rbtn_left_get(a_type, a_field,\t\\\n\t\t  right);\t\t\t\t\t\t\\\n\t\ta_type *tnode;\t\t\t\t\t\t\\\n\t\tif (rightleft != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  rightleft)) {\t\t\t\t\t\t\\\n\t\t    /* In the following diagrams, ||, //, and \\\\      */\\\n\t\t    /* indicate the path to the removed node.         
*/\\\n\t\t    /*                                                */\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(r)                                    */\\\n\t\t    /*  //        \\                                   */\\\n\t\t    /* (b)        (b)                                 */\\\n\t\t    /*           /                                    */\\\n\t\t    /*          (r)                                   */\\\n\t\t    /*                                                */\\\n\t\t    rbtn_black_set(a_type, a_field, pathp->node);\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, right, tnode);\t\\\n\t\t    rbtn_right_set(a_type, a_field, pathp->node, tnode);\\\n\t\t    rbtn_rotate_left(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(right,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, right),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, right));\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(r)                                    */\\\n\t\t    /*  //        \\                                   */\\\n\t\t    /* (b)        (b)                                 */\\\n\t\t    /*           /                                    */\\\n\t\t    /*          (b)                                   */\\\n\t\t    /*                                                */\\\n\t\t    rbtn_rotate_left(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\t(void)a_summarize(tnode, rbtn_left_get(a_type, a_field,\t\\\n\t\t    tnode), rbtn_right_get(a_type, a_field, tnode));\t\\\n\t\t/* 
Balance restored, but rotation modified subtree    */\\\n\t\t/* root.                                              */\\\n\t\tassert((uintptr_t)pathp > (uintptr_t)path);\t\t\\\n\t\tif (pathp[-1].cmp < 0) {\t\t\t\t\\\n\t\t    rbtn_left_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    rbtn_right_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\ta_prefix##summarize_swapped_range(path, &pathp[-1],\t\\\n\t\t    swap_loc);\t\t\t\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\ta_type *right = rbtn_right_get(a_type, a_field,\t\t\\\n\t\t  pathp->node);\t\t\t\t\t\t\\\n\t\ta_type *rightleft = rbtn_left_get(a_type, a_field,\t\\\n\t\t  right);\t\t\t\t\t\t\\\n\t\tif (rightleft != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  rightleft)) {\t\t\t\t\t\t\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(b)                                    */\\\n\t\t    /*  //        \\                                   */\\\n\t\t    /* (b)        (b)                                 */\\\n\t\t    /*           /                                    */\\\n\t\t    /*          (r)                                   */\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, rightleft);\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, right, tnode);\t\\\n\t\t    rbtn_right_set(a_type, a_field, pathp->node, tnode);\\\n\t\t    rbtn_rotate_left(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(right,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, right),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, right));\t\\\n\t\t    (void)a_summarize(tnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, 
tnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, tnode));\t\\\n\t\t    /* Balance restored, but rotation modified        */\\\n\t\t    /* subtree root, which may actually be the tree   */\\\n\t\t    /* root.                                          */\\\n\t\t    if (pathp == path) {\t\t\t\t\\\n\t\t\t/* Set root. */\t\t\t\t\t\\\n\t\t\trbtree->rbt_root = tnode;\t\t\t\\\n\t\t    } else {\t\t\t\t\t\t\\\n\t\t\tif (pathp[-1].cmp < 0) {\t\t\t\\\n\t\t\t    rbtn_left_set(a_type, a_field,\t\t\\\n\t\t\t      pathp[-1].node, tnode);\t\t\t\\\n\t\t\t} else {\t\t\t\t\t\\\n\t\t\t    rbtn_right_set(a_type, a_field,\t\t\\\n\t\t\t      pathp[-1].node, tnode);\t\t\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t\ta_prefix##summarize_swapped_range(path,\t\t\\\n\t\t\t    &pathp[-1], swap_loc);\t\t\t\\\n\t\t    }\t\t\t\t\t\t\t\\\n\t\t    return;\t\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(b)                                    */\\\n\t\t    /*  //        \\                                   */\\\n\t\t    /* (b)        (b)                                 */\\\n\t\t    /*           /                                    */\\\n\t\t    /*          (b)                                   */\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    rbtn_red_set(a_type, a_field, pathp->node);\t\t\\\n\t\t    rbtn_rotate_left(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(tnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, tnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, tnode));\t\\\n\t\t    pathp->node = tnode;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    a_type *left;\t\t\t\t\t\t\\\n\t    rbtn_right_set(a_type, a_field, pathp->node,\t\t\\\n\t      
pathp[1].node);\t\t\t\t\t\t\\\n\t    left = rbtn_left_get(a_type, a_field, pathp->node);\t\t\\\n\t    if (rbtn_red_get(a_type, a_field, left)) {\t\t\t\\\n\t\ta_type *tnode;\t\t\t\t\t\t\\\n\t\ta_type *leftright = rbtn_right_get(a_type, a_field,\t\\\n\t\t  left);\t\t\t\t\t\t\\\n\t\ta_type *leftrightleft = rbtn_left_get(a_type, a_field,\t\\\n\t\t  leftright);\t\t\t\t\t\t\\\n\t\tif (leftrightleft != NULL && rbtn_red_get(a_type,\t\\\n\t\t  a_field, leftrightleft)) {\t\t\t\t\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(b)                                    */\\\n\t\t    /*   /        \\\\                                  */\\\n\t\t    /* (r)        (b)                                 */\\\n\t\t    /*   \\                                            */\\\n\t\t    /*   (b)                                          */\\\n\t\t    /*   /                                            */\\\n\t\t    /* (r)                                            */\\\n\t\t    a_type *unode;\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, leftrightleft);\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, pathp->node,\t\\\n\t\t      unode);\t\t\t\t\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    rbtn_right_set(a_type, a_field, unode, tnode);\t\\\n\t\t    rbtn_rotate_left(a_type, a_field, unode, tnode);\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(unode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, unode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, unode));\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /*      ||                                        */\\\n\t\t    /*    pathp(b)                                    */\\\n\t\t    /*   /        \\\\                                  */\\\n\t\t    /* (r)        (b)                                 */\\\n\t\t 
   /*   \\                                            */\\\n\t\t    /*   (b)                                          */\\\n\t\t    /*   /                                            */\\\n\t\t    /* (b)                                            */\\\n\t\t    assert(leftright != NULL);\t\t\t\t\\\n\t\t    rbtn_red_set(a_type, a_field, leftright);\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, tnode);\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\t(void)a_summarize(tnode,\t\t\t\t\\\n\t\t    rbtn_left_get(a_type, a_field, tnode),\t\t\\\n\t\t    rbtn_right_get(a_type, a_field, tnode));\t\t\\\n\t\t/* Balance restored, but rotation modified subtree    */\\\n\t\t/* root, which may actually be the tree root.         */\\\n\t\tif (pathp == path) {\t\t\t\t\t\\\n\t\t    /* Set root. 
*/\t\t\t\t\t\\\n\t\t    rbtree->rbt_root = tnode;\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    if (pathp[-1].cmp < 0) {\t\t\t\t\\\n\t\t\trbtn_left_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t\t  tnode);\t\t\t\t\t\\\n\t\t    } else {\t\t\t\t\t\t\\\n\t\t\trbtn_right_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t\t  tnode);\t\t\t\t\t\\\n\t\t    }\t\t\t\t\t\t\t\\\n\t\t    a_prefix##summarize_swapped_range(path, &pathp[-1],\t\\\n\t\t\tswap_loc);\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t    } else if (rbtn_red_get(a_type, a_field, pathp->node)) {\t\\\n\t\ta_type *leftleft = rbtn_left_get(a_type, a_field, left);\\\n\t\tif (leftleft != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  leftleft)) {\t\t\t\t\t\t\\\n\t\t    /*        ||                                      */\\\n\t\t    /*      pathp(r)                                  */\\\n\t\t    /*     /        \\\\                                */\\\n\t\t    /*   (b)        (b)                               */\\\n\t\t    /*   /                                            */\\\n\t\t    /* (r)                                            */\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, pathp->node);\t\\\n\t\t    rbtn_red_set(a_type, a_field, left);\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, leftleft);\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(tnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, tnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, tnode));\t\\\n\t\t    /* Balance restored, but rotation modified        */\\\n\t\t    /* subtree root.                                  
*/\\\n\t\t    assert((uintptr_t)pathp > (uintptr_t)path);\t\t\\\n\t\t    if (pathp[-1].cmp < 0) {\t\t\t\t\\\n\t\t\trbtn_left_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t\t  tnode);\t\t\t\t\t\\\n\t\t    } else {\t\t\t\t\t\t\\\n\t\t\trbtn_right_set(a_type, a_field, pathp[-1].node,\t\\\n\t\t\t  tnode);\t\t\t\t\t\\\n\t\t    }\t\t\t\t\t\t\t\\\n\t\t    a_prefix##summarize_swapped_range(path, &pathp[-1],\t\\\n\t\t\tswap_loc);\t\t\t\t\t\\\n\t\t    return;\t\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /*        ||                                      */\\\n\t\t    /*      pathp(r)                                  */\\\n\t\t    /*     /        \\\\                                */\\\n\t\t    /*   (b)        (b)                               */\\\n\t\t    /*   /                                            */\\\n\t\t    /* (b)                                            */\\\n\t\t    rbtn_red_set(a_type, a_field, left);\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, pathp->node);\t\\\n\t\t    /* Balance restored. 
*/\t\t\t\t\\\n\t\t    a_prefix##summarize_swapped_range(path, pathp,\t\\\n\t\t\tswap_loc);\t\t\t\t\t\\\n\t\t    return;\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    } else {\t\t\t\t\t\t\t\\\n\t\ta_type *leftleft = rbtn_left_get(a_type, a_field, left);\\\n\t\tif (leftleft != NULL && rbtn_red_get(a_type, a_field,\t\\\n\t\t  leftleft)) {\t\t\t\t\t\t\\\n\t\t    /*               ||                               */\\\n\t\t    /*             pathp(b)                           */\\\n\t\t    /*            /        \\\\                         */\\\n\t\t    /*          (b)        (b)                        */\\\n\t\t    /*          /                                     */\\\n\t\t    /*        (r)                                     */\\\n\t\t    a_type *tnode;\t\t\t\t\t\\\n\t\t    rbtn_black_set(a_type, a_field, leftleft);\t\t\\\n\t\t    rbtn_rotate_right(a_type, a_field, pathp->node,\t\\\n\t\t      tnode);\t\t\t\t\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t    (void)a_summarize(tnode,\t\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, tnode),\t\t\\\n\t\t\trbtn_right_get(a_type, a_field, tnode));\t\\\n\t\t    /* Balance restored, but rotation modified        */\\\n\t\t    /* subtree root, which may actually be the tree   */\\\n\t\t    /* root.                                          */\\\n\t\t    if (pathp == path) {\t\t\t\t\\\n\t\t\t/* Set root. 
*/\t\t\t\t\t\\\n\t\t\trbtree->rbt_root = tnode;\t\t\t\\\n\t\t    } else {\t\t\t\t\t\t\\\n\t\t\tif (pathp[-1].cmp < 0) {\t\t\t\\\n\t\t\t    rbtn_left_set(a_type, a_field,\t\t\\\n\t\t\t      pathp[-1].node, tnode);\t\t\t\\\n\t\t\t} else {\t\t\t\t\t\\\n\t\t\t    rbtn_right_set(a_type, a_field,\t\t\\\n\t\t\t      pathp[-1].node, tnode);\t\t\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t\ta_prefix##summarize_swapped_range(path,\t\t\\\n\t\t\t    &pathp[-1], swap_loc);\t\t\t\\\n\t\t    }\t\t\t\t\t\t\t\\\n\t\t    return;\t\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t    /*               ||                               */\\\n\t\t    /*             pathp(b)                           */\\\n\t\t    /*            /        \\\\                         */\\\n\t\t    /*          (b)        (b)                        */\\\n\t\t    /*          /                                     */\\\n\t\t    /*        (b)                                     */\\\n\t\t    rbtn_red_set(a_type, a_field, left);\t\t\\\n\t\t    (void)a_summarize(pathp->node,\t\t\t\\\n\t\t\trbtn_left_get(a_type, a_field, pathp->node),\t\\\n\t\t\trbtn_right_get(a_type, a_field, pathp->node));\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    /* Set root. 
*/\t\t\t\t\t\t\t\\\n    rbtree->rbt_root = path->node;\t\t\t\t\t\\\n    assert(!rbtn_red_get(a_type, a_field, rbtree->rbt_root));\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_recurse(a_rbt_type *rbtree, a_type *node,\t\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {\t\t\\\n    if (node == NULL) {\t\t\t\t\t\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = a_prefix##iter_recurse(rbtree, rbtn_left_get(a_type,\t\\\n\t  a_field, node), cb, arg)) != NULL || (ret = cb(rbtree, node,\t\\\n\t  arg)) != NULL) {\t\t\t\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type,\t\\\n\t  a_field, node), cb, arg);\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_start(a_rbt_type *rbtree, a_type *start, a_type *node,\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {\t\t\\\n    int cmp = a_cmp(start, node);\t\t\t\t\t\\\n    if (cmp < 0) {\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = a_prefix##iter_start(rbtree, start,\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, node), cb, arg)) != NULL ||\t\\\n\t  (ret = cb(rbtree, node, arg)) != NULL) {\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type,\t\\\n\t  a_field, node), cb, arg);\t\t\t\t\t\\\n    } else if (cmp > 0) {\t\t\t\t\t\t\\\n\treturn a_prefix##iter_start(rbtree, start,\t\t\t\\\n\t  rbtn_right_get(a_type, a_field, node), cb, arg);\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = cb(rbtree, node, arg)) != NULL) {\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##iter_recurse(rbtree, rbtn_right_get(a_type,\t\\\n\t  a_field, node), cb, arg);\t\t\t\t\t\\\n    
}\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter(a_rbt_type *rbtree, a_type *start, a_type *(*cb)(\t\\\n  a_rbt_type *, a_type *, void *), void *arg) {\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (start != NULL) {\t\t\t\t\t\t\\\n\tret = a_prefix##iter_start(rbtree, start, rbtree->rbt_root,\t\\\n\t  cb, arg);\t\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tret = a_prefix##iter_recurse(rbtree, rbtree->rbt_root, cb, arg);\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_recurse(a_rbt_type *rbtree, a_type *node,\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {\t\t\\\n    if (node == NULL) {\t\t\t\t\t\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = a_prefix##reverse_iter_recurse(rbtree,\t\t\\\n\t  rbtn_right_get(a_type, a_field, node), cb, arg)) != NULL ||\t\\\n\t  (ret = cb(rbtree, node, arg)) != NULL) {\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_recurse(rbtree,\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, node), cb, arg);\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_start(a_rbt_type *rbtree, a_type *start,\t\t\\\n  a_type *node, a_type *(*cb)(a_rbt_type *, a_type *, void *),\t\t\\\n  void *arg) {\t\t\t\t\t\t\t\t\\\n    int cmp = a_cmp(start, node);\t\t\t\t\t\\\n    if (cmp > 0) {\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = a_prefix##reverse_iter_start(rbtree, start,\t\t\\\n\t  rbtn_right_get(a_type, a_field, node), cb, arg)) != NULL ||\t\\\n\t  (ret = cb(rbtree, node, arg)) != NULL) {\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_recurse(rbtree,\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, node), cb, arg);\t\t\\\n    } else if (cmp < 0) 
{\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_start(rbtree, start,\t\t\\\n\t  rbtn_left_get(a_type, a_field, node), cb, arg);\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\ta_type *ret;\t\t\t\t\t\t\t\\\n\tif ((ret = cb(rbtree, node, arg)) != NULL) {\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_recurse(rbtree,\t\t\t\\\n\t  rbtn_left_get(a_type, a_field, node), cb, arg);\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter(a_rbt_type *rbtree, a_type *start,\t\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg) {\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (start != NULL) {\t\t\t\t\t\t\\\n\tret = a_prefix##reverse_iter_start(rbtree, start,\t\t\\\n\t  rbtree->rbt_root, cb, arg);\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tret = a_prefix##reverse_iter_recurse(rbtree, rbtree->rbt_root,\t\\\n\t  cb, arg);\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##destroy_recurse(a_rbt_type *rbtree, a_type *node, void (*cb)(\t\\\n  a_type *, void *), void *arg) {\t\t\t\t\t\\\n    if (node == NULL) {\t\t\t\t\t\t\t\\\n\treturn;\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_prefix##destroy_recurse(rbtree, rbtn_left_get(a_type, a_field,\t\\\n      node), cb, arg);\t\t\t\t\t\t\t\\\n    rbtn_left_set(a_type, a_field, (node), NULL);\t\t\t\\\n    a_prefix##destroy_recurse(rbtree, rbtn_right_get(a_type, a_field,\t\\\n      node), cb, arg);\t\t\t\t\t\t\t\\\n    rbtn_right_set(a_type, a_field, (node), NULL);\t\t\t\\\n    if (cb) {\t\t\t\t\t\t\t\t\\\n\tcb(node, arg);\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##destroy(a_rbt_type *rbtree, void (*cb)(a_type *, void *),\t\\\n  void *arg) {\t\t\t\t\t\t\t\t\\\n    a_prefix##destroy_recurse(rbtree, rbtree->rbt_root, cb, arg);\t\\\n    rbtree->rbt_root = 
NULL;\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n/* BEGIN SUMMARIZED-ONLY IMPLEMENTATION */\t\t\t\t\\\nrb_summarized_only_##a_is_summarized(\t\t\t\t\t\\\nstatic inline a_prefix##path_entry_t *\t\t\t\t\t\\\na_prefix##wind(a_rbt_type *rbtree,\t\t\t\t\t\\\n    a_prefix##path_entry_t path[RB_MAX_DEPTH], a_type *node) {\t\t\\\n    a_prefix##path_entry_t *pathp;\t\t\t\t\t\\\n    path->node = rbtree->rbt_root;\t\t\t\t\t\\\n    for (pathp = path; ; pathp++) {\t\t\t\t\t\\\n\tassert((size_t)(pathp - path) < RB_MAX_DEPTH);\t\t\t\\\n\tpathp->cmp = a_cmp(node, pathp->node);\t\t\t\t\\\n\tif (pathp->cmp < 0) {\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_left_get(a_type, a_field,\t\t\\\n\t\tpathp->node);\t\t\t\t\t\t\\\n\t} else if (pathp->cmp == 0) {\t\t\t\t\t\\\n\t    return pathp;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    pathp[1].node = rbtn_right_get(a_type, a_field,\t\t\\\n\t\tpathp->node);\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    unreachable();\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##update_summaries(a_rbt_type *rbtree, a_type *node) {\t\t\\\n    a_prefix##path_entry_t path[RB_MAX_DEPTH];\t\t\t\t\\\n    a_prefix##path_entry_t *pathp = a_prefix##wind(rbtree, path, node);\t\\\n    a_prefix##summarize_range(path, pathp);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##empty_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *node = rbtree->rbt_root;\t\t\t\t\t\\\n    return node == NULL || !filter_subtree(filter_ctx, node);\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline a_type *\t\t\t\t\t\t\t\\\na_prefix##first_filtered_from_node(a_type *node,\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    assert(node != NULL && 
filter_subtree(filter_ctx, node));\t\t\\\n    while (true) {\t\t\t\t\t\t\t\\\n\ta_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n\ta_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n\tif (left != NULL && filter_subtree(filter_ctx, left)) {\t\t\\\n\t    node = left;\t\t\t\t\t\t\\\n\t} else if (filter_node(filter_ctx, node)) {\t\t\t\\\n\t    return node;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tassert(right != NULL\t\t\t\t\t\\\n\t\t    && filter_subtree(filter_ctx, right));\t\t\\\n\t\tnode = right;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    unreachable();\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##first_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *node = rbtree->rbt_root;\t\t\t\t\t\\\n    if (node == NULL || !filter_subtree(filter_ctx, node)) {\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return a_prefix##first_filtered_from_node(node, filter_node,\t\\\n\tfilter_subtree, filter_ctx);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline a_type *\t\t\t\t\t\t\t\\\na_prefix##last_filtered_from_node(a_type *node,\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    assert(node != NULL && filter_subtree(filter_ctx, node));\t\t\\\n    while (true) {\t\t\t\t\t\t\t\\\n\ta_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n\ta_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n\tif (right != NULL && filter_subtree(filter_ctx, right)) {\t\\\n\t    node = right;\t\t\t\t\t\t\\\n\t} else if (filter_node(filter_ctx, node)) {\t\t\t\\\n\t    return node;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tassert(left != NULL\t\t\t\t\t\\\n\t\t    && filter_subtree(filter_ctx, left));\t\t\\\n\t\tnode = 
left;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    unreachable();\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##last_filtered(a_rbt_type *rbtree,\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *node = rbtree->rbt_root;\t\t\t\t\t\\\n    if (node == NULL || !filter_subtree(filter_ctx, node)) {\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return a_prefix##last_filtered_from_node(node, filter_node,\t\t\\\n\tfilter_subtree, filter_ctx);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n/* Internal implementation function.  Search for a node comparing     */\\\n/* equal to key matching the filter.  If such a node is in the tree,  */\\\n/* return it.  Additionally, the caller has the option to ask for     */\\\n/* bounds on the next / prev node in the tree passing the filter.     */\\\n/* If nextbound is true, then this function will do one of the        */\\\n/* following:                                                         */\\\n/* - Fill in *nextbound_node with the smallest node in the tree       */\\\n/*   greater than key passing the filter, and NULL-out                */\\\n/*   *nextbound_subtree.                                              */\\\n/* - Fill in *nextbound_subtree with a parent of that node which is   */\\\n/*   not a parent of the searched-for node, and NULL-out              */\\\n/*   *nextbound_node.                                                 */\\\n/* - NULL-out both *nextbound_node and *nextbound_subtree, in which   */\\\n/*   case no node greater than key but passing the filter is in the   */\\\n/*   tree.                                                            */\\\n/* The prevbound case is similar.  
If the caller knows that key is in */\\\n/* the tree and that the subtree rooted at key does not contain a     */\\\n/* node satisfying the bound being searched for, then they can pass   */\\\n/* false for include_subtree, in which case we won't bother searching */\\\n/* there (risking a cache miss).                                      */\\\n/*                                                                    */\\\n/* This API is unfortunately complex; but the logic for filtered      */\\\n/* searches is very subtle, and otherwise we would have to repeat it  */\\\n/* multiple times for filtered search, nsearch, psearch, next, and    */\\\n/* prev.                                                              */\\\nstatic inline a_type *\t\t\t\t\t\t\t\\\na_prefix##search_with_filter_bounds(a_rbt_type *rbtree,\t\t\t\\\n  const a_type *key,\t\t\t\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx,\t\t\t\t\t\t\t\\\n  bool include_subtree,\t\t\t\t\t\t\t\\\n  bool nextbound, a_type **nextbound_node, a_type **nextbound_subtree,\t\\\n  bool prevbound, a_type **prevbound_node, a_type **prevbound_subtree) {\\\n    if (nextbound) {\t\t\t\t\t\t\t\\\n\t    *nextbound_node = NULL;\t\t\t\t\t\\\n\t    *nextbound_subtree = NULL;\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (prevbound) {\t\t\t\t\t\t\t\\\n\t    *prevbound_node = NULL;\t\t\t\t\t\\\n\t    *prevbound_subtree = NULL;\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_type *tnode = rbtree->rbt_root;\t\t\t\t\t\\\n    while (tnode != NULL && filter_subtree(filter_ctx, tnode)) {\t\\\n\tint cmp = a_cmp(key, tnode);\t\t\t\t\t\\\n\ta_type *tleft = rbtn_left_get(a_type, a_field, tnode);\t\t\\\n\ta_type *tright = rbtn_right_get(a_type, a_field, tnode);\t\\\n\tif (cmp < 0) {\t\t\t\t\t\t\t\\\n\t    if (nextbound) {\t\t\t\t\t\t\\\n\t\tif (filter_node(filter_ctx, tnode)) {\t\t\t\\\n\t\t    *nextbound_node = tnode;\t\t\t\t\\\n\t\t    *nextbound_subtree 
= NULL;\t\t\t\t\\\n\t\t} else if (tright != NULL && filter_subtree(\t\t\\\n\t\t    filter_ctx, tright)) {\t\t\t\t\\\n\t\t    *nextbound_node = NULL;\t\t\t\t\\\n\t\t    *nextbound_subtree = tright;\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    tnode = tleft;\t\t\t\t\t\t\\\n\t} else if (cmp > 0) {\t\t\t\t\t\t\\\n\t    if (prevbound) {\t\t\t\t\t\t\\\n\t\tif (filter_node(filter_ctx, tnode)) {\t\t\t\\\n\t\t    *prevbound_node = tnode;\t\t\t\t\\\n\t\t    *prevbound_subtree = NULL;\t\t\t\t\\\n\t\t} else if (tleft != NULL && filter_subtree(\t\t\\\n\t\t    filter_ctx, tleft)) {\t\t\t\t\\\n\t\t    *prevbound_node = NULL;\t\t\t\t\\\n\t\t    *prevbound_subtree = tleft;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    tnode = tright;\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t    if (filter_node(filter_ctx, tnode)) {\t\t\t\\\n\t\treturn tnode;\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    if (include_subtree) {\t\t\t\t\t\\\n\t\tif (prevbound && tleft != NULL && filter_subtree(\t\\\n\t\t    filter_ctx, tleft)) {\t\t\t\t\\\n\t\t    *prevbound_node = NULL;\t\t\t\t\\\n\t\t    *prevbound_subtree = tleft;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tif (nextbound && tright != NULL && filter_subtree(\t\\\n\t\t    filter_ctx, tright)) {\t\t\t\t\\\n\t\t    *nextbound_node = NULL;\t\t\t\t\\\n\t\t    *nextbound_subtree = tright;\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t    return NULL;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return NULL;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##next_filtered(a_rbt_type *rbtree, a_type *node,\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *nright = rbtn_right_get(a_type, a_field, node);\t\t\\\n    if (nright != NULL && filter_subtree(filter_ctx, nright)) {\t\t\\\n\treturn a_prefix##first_filtered_from_node(nright, 
filter_node,\t\\\n\t    filter_subtree, filter_ctx);\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_type *node_candidate;\t\t\t\t\t\t\\\n    a_type *subtree_candidate;\t\t\t\t\t\t\\\n    a_type *search_result = a_prefix##search_with_filter_bounds(\t\\\n\trbtree, node, filter_node, filter_subtree, filter_ctx,\t\t\\\n\t/* include_subtree */ false,\t\t\t\t\t\\\n\t/* nextbound */ true, &node_candidate, &subtree_candidate,\t\\\n\t/* prevbound */ false, NULL, NULL);\t\t\t\t\\\n    assert(node == search_result\t\t\t\t\t\\\n\t|| !filter_node(filter_ctx, node));\t\t\t\t\\\n    if (node_candidate != NULL) {\t\t\t\t\t\\\n\treturn node_candidate;\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (subtree_candidate != NULL) {\t\t\t\t\t\\\n\treturn a_prefix##first_filtered_from_node(\t\t\t\\\n\t    subtree_candidate, filter_node, filter_subtree,\t\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return NULL;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##prev_filtered(a_rbt_type *rbtree, a_type *node,\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *nleft = rbtn_left_get(a_type, a_field, node);\t\t\\\n    if (nleft != NULL && filter_subtree(filter_ctx, nleft)) {\t\t\\\n\treturn a_prefix##last_filtered_from_node(nleft, filter_node,\t\\\n\t    filter_subtree, filter_ctx);\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_type *node_candidate;\t\t\t\t\t\t\\\n    a_type *subtree_candidate;\t\t\t\t\t\t\\\n    a_type *search_result = a_prefix##search_with_filter_bounds(\t\\\n\trbtree, node, filter_node, filter_subtree, filter_ctx,\t\t\\\n\t/* include_subtree */ false,\t\t\t\t\t\\\n\t/* nextbound */ false, NULL, NULL,\t\t\t\t\\\n\t/* prevbound */ true, &node_candidate, &subtree_candidate);\t\\\n    assert(node == search_result\t\t\t\t\t\\\n\t|| !filter_node(filter_ctx, node));\t\t\t\t\\\n    if (node_candidate != NULL) 
{\t\t\t\t\t\\\n\treturn node_candidate;\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (subtree_candidate != NULL) {\t\t\t\t\t\\\n\treturn a_prefix##last_filtered_from_node(\t\t\t\\\n\t    subtree_candidate, filter_node, filter_subtree,\t\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return NULL;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##search_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *result = a_prefix##search_with_filter_bounds(rbtree, key,\t\\\n\tfilter_node, filter_subtree, filter_ctx,\t\t\t\\\n\t/* include_subtree */ false,\t\t\t\t\t\\\n\t/* nextbound */ false, NULL, NULL,\t\t\t\t\\\n\t/* prevbound */ false, NULL, NULL);\t\t\t\t\\\n    return result;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##nsearch_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *node_candidate;\t\t\t\t\t\t\\\n    a_type *subtree_candidate;\t\t\t\t\t\t\\\n    a_type *result = a_prefix##search_with_filter_bounds(rbtree, key,\t\\\n\tfilter_node, filter_subtree, filter_ctx,\t\t\t\\\n\t/* include_subtree */ true,\t\t\t\t\t\\\n\t/* nextbound */ true, &node_candidate, &subtree_candidate,\t\\\n\t/* prevbound */ false, NULL, NULL);\t\t\t\t\\\n    if (result != NULL) {\t\t\t\t\t\t\\\n\treturn result;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (node_candidate != NULL) {\t\t\t\t\t\\\n\treturn node_candidate;\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (subtree_candidate != NULL) {\t\t\t\t\t\\\n\treturn a_prefix##first_filtered_from_node(\t\t\t\\\n\t    subtree_candidate, filter_node, filter_subtree,\t\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    
}\t\t\t\t\t\t\t\t\t\\\n    return NULL;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##psearch_filtered(a_rbt_type *rbtree, const a_type *key,\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *node_candidate;\t\t\t\t\t\t\\\n    a_type *subtree_candidate;\t\t\t\t\t\t\\\n    a_type *result = a_prefix##search_with_filter_bounds(rbtree, key,\t\\\n\tfilter_node, filter_subtree, filter_ctx,\t\t\t\\\n\t/* include_subtree */ true,\t\t\t\t\t\\\n\t/* nextbound */ false, NULL, NULL,\t\t\t\t\\\n\t/* prevbound */ true, &node_candidate, &subtree_candidate);\t\\\n    if (result != NULL) {\t\t\t\t\t\t\\\n\treturn result;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (node_candidate != NULL) {\t\t\t\t\t\\\n\treturn node_candidate;\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (subtree_candidate != NULL) {\t\t\t\t\t\\\n\treturn a_prefix##last_filtered_from_node(\t\t\t\\\n\t    subtree_candidate, filter_node, filter_subtree,\t\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return NULL;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_recurse_filtered(a_rbt_type *rbtree, a_type *node,\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg,\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    if (node == NULL || !filter_subtree(filter_ctx, node)) {\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n    a_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n    ret = a_prefix##iter_recurse_filtered(rbtree, left, cb, arg,\t\\\n      filter_node, filter_subtree, filter_ctx);\t\t\t\t\\\n    if (ret != NULL) {\t\t\t\t\t\t\t\\\n\treturn 
ret;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (filter_node(filter_ctx, node)) {\t\t\t\t\\\n\tret = cb(rbtree, node, arg);\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (ret != NULL) {\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return a_prefix##iter_recurse_filtered(rbtree, right, cb, arg,\t\\\n      filter_node, filter_subtree, filter_ctx);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_start_filtered(a_rbt_type *rbtree, a_type *start,\t\\\n  a_type *node, a_type *(*cb)(a_rbt_type *, a_type *, void *),\t\t\\\n  void *arg, bool (*filter_node)(void *, a_type *),\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    if (!filter_subtree(filter_ctx, node)) {\t\t\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    int cmp = a_cmp(start, node);\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n    a_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n    if (cmp < 0) {\t\t\t\t\t\t\t\\\n\tret = a_prefix##iter_start_filtered(rbtree, start, left, cb,\t\\\n\t    arg, filter_node, filter_subtree, filter_ctx);\t\t\\\n\tif (ret != NULL) {\t\t\t\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (filter_node(filter_ctx, node)) {\t\t\t\t\\\n\t    ret = cb(rbtree, node, arg);\t\t\t\t\\\n\t    if (ret != NULL) {\t\t\t\t\t\t\\\n\t\treturn ret;\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##iter_recurse_filtered(rbtree, right, cb, arg,\t\\\n\t    filter_node, filter_subtree, filter_ctx);\t\t\t\\\n    } else if (cmp > 0) {\t\t\t\t\t\t\\\n\treturn a_prefix##iter_start_filtered(rbtree, start, right,\t\\\n\t  cb, arg, filter_node, filter_subtree, filter_ctx);\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tif (filter_node(filter_ctx, node)) {\t\t\t\t\\\n\t    ret = cb(rbtree, node, arg);\t\t\t\t\\\n\t    if (ret != 
NULL) {\t\t\t\t\t\t\\\n\t\treturn ret;\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##iter_recurse_filtered(rbtree, right, cb, arg,\t\\\n\t  filter_node, filter_subtree, filter_ctx);\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##iter_filtered(a_rbt_type *rbtree, a_type *start,\t\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg,\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (start != NULL) {\t\t\t\t\t\t\\\n\tret = a_prefix##iter_start_filtered(rbtree, start,\t\t\\\n\t    rbtree->rbt_root, cb, arg, filter_node, filter_subtree,\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tret = a_prefix##iter_recurse_filtered(rbtree, rbtree->rbt_root,\t\\\n\t    cb, arg, filter_node, filter_subtree, filter_ctx);\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_recurse_filtered(a_rbt_type *rbtree,\t\t\\\n  a_type *node, a_type *(*cb)(a_rbt_type *, a_type *, void *),\t\t\\\n  void *arg,\t\t\t\t\t\t\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    if (node == NULL || !filter_subtree(filter_ctx, node)) {\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n    a_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n    ret = a_prefix##reverse_iter_recurse_filtered(rbtree, right, cb,\t\\\n\targ, filter_node, filter_subtree, filter_ctx);\t\t\t\\\n    if (ret != NULL) {\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (filter_node(filter_ctx, node)) 
{\t\t\t\t\\\n\tret = cb(rbtree, node, arg);\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    if (ret != NULL) {\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return a_prefix##reverse_iter_recurse_filtered(rbtree, left, cb,\t\\\n      arg, filter_node, filter_subtree, filter_ctx);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_start_filtered(a_rbt_type *rbtree, a_type *start,\\\n  a_type *node, a_type *(*cb)(a_rbt_type *, a_type *, void *),\t\t\\\n  void *arg, bool (*filter_node)(void *, a_type *),\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    if (!filter_subtree(filter_ctx, node)) {\t\t\t\t\\\n\treturn NULL;\t\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    int cmp = a_cmp(start, node);\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    a_type *left = rbtn_left_get(a_type, a_field, node);\t\t\\\n    a_type *right = rbtn_right_get(a_type, a_field, node);\t\t\\\n    if (cmp > 0) {\t\t\t\t\t\t\t\\\n\tret = a_prefix##reverse_iter_start_filtered(rbtree, start,\t\\\n\t    right, cb, arg, filter_node, filter_subtree, filter_ctx);\t\\\n\tif (ret != NULL) {\t\t\t\t\t\t\\\n\t    return ret;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (filter_node(filter_ctx, node)) {\t\t\t\t\\\n\t    ret = cb(rbtree, node, arg);\t\t\t\t\\\n\t    if (ret != NULL) {\t\t\t\t\t\t\\\n\t\treturn ret;\t\t\t\t\t\t\\\n\t    }\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_recurse_filtered(rbtree, left, cb,\\\n\t    arg, filter_node, filter_subtree, filter_ctx);\t\t\\\n    } else if (cmp < 0) {\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_start_filtered(rbtree, start,\t\\\n\t  left, cb, arg, filter_node, filter_subtree, filter_ctx);\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tif (filter_node(filter_ctx, node)) {\t\t\t\t\\\n\t    ret = cb(rbtree, node, arg);\t\t\t\t\\\n\t    if (ret != NULL) {\t\t\t\t\t\t\\\n\t\treturn ret;\t\t\t\t\t\t\\\n\t    
}\t\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn a_prefix##reverse_iter_recurse_filtered(rbtree, left, cb,\\\n\t  arg, filter_node, filter_subtree, filter_ctx);\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_type *\t\t\t\t\t\t\t\t\\\na_prefix##reverse_iter_filtered(a_rbt_type *rbtree, a_type *start,\t\\\n  a_type *(*cb)(a_rbt_type *, a_type *, void *), void *arg,\t\t\\\n  bool (*filter_node)(void *, a_type *),\t\t\t\t\\\n  bool (*filter_subtree)(void *, a_type *),\t\t\t\t\\\n  void *filter_ctx) {\t\t\t\t\t\t\t\\\n    a_type *ret;\t\t\t\t\t\t\t\\\n    if (start != NULL) {\t\t\t\t\t\t\\\n\tret = a_prefix##reverse_iter_start_filtered(rbtree, start,\t\\\n\t    rbtree->rbt_root, cb, arg, filter_node, filter_subtree,\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    } else {\t\t\t\t\t\t\t\t\\\n\tret = a_prefix##reverse_iter_recurse_filtered(rbtree,\t\t\\\n\t    rbtree->rbt_root, cb, arg, filter_node, filter_subtree,\t\\\n\t    filter_ctx);\t\t\t\t\t\t\\\n    }\t\t\t\t\t\t\t\t\t\\\n    return ret;\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\n) /* end rb_summarized_only */\n\n#endif /* JEMALLOC_INTERNAL_RB_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/rtree.h",
    "content": "#ifndef JEMALLOC_INTERNAL_RTREE_H\n#define JEMALLOC_INTERNAL_RTREE_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/rtree_tsd.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/tsd.h\"\n\n/*\n * This radix tree implementation is tailored to the singular purpose of\n * associating metadata with extents that are currently owned by jemalloc.\n *\n *******************************************************************************\n */\n\n/* Number of high insignificant bits. */\n#define RTREE_NHIB ((1U << (LG_SIZEOF_PTR+3)) - LG_VADDR)\n/* Number of low insigificant bits. */\n#define RTREE_NLIB LG_PAGE\n/* Number of significant bits. */\n#define RTREE_NSB (LG_VADDR - RTREE_NLIB)\n/* Number of levels in radix tree. */\n#if RTREE_NSB <= 10\n#  define RTREE_HEIGHT 1\n#elif RTREE_NSB <= 36\n#  define RTREE_HEIGHT 2\n#elif RTREE_NSB <= 52\n#  define RTREE_HEIGHT 3\n#else\n#  error Unsupported number of significant virtual address bits\n#endif\n/* Use compact leaf representation if virtual address encoding allows. */\n#if RTREE_NHIB >= LG_CEIL(SC_NSIZES)\n#  define RTREE_LEAF_COMPACT\n#endif\n\ntypedef struct rtree_node_elm_s rtree_node_elm_t;\nstruct rtree_node_elm_s {\n\tatomic_p_t\tchild; /* (rtree_{node,leaf}_elm_t *) */\n};\n\ntypedef struct rtree_metadata_s rtree_metadata_t;\nstruct rtree_metadata_s {\n\tszind_t szind;\n\textent_state_t state; /* Mirrors edata->state. */\n\tbool is_head; /* Mirrors edata->is_head. 
*/\n\tbool slab;\n};\n\ntypedef struct rtree_contents_s rtree_contents_t;\nstruct rtree_contents_s {\n\tedata_t *edata;\n\trtree_metadata_t metadata;\n};\n\n#define RTREE_LEAF_STATE_WIDTH EDATA_BITS_STATE_WIDTH\n#define RTREE_LEAF_STATE_SHIFT 2\n#define RTREE_LEAF_STATE_MASK MASK(RTREE_LEAF_STATE_WIDTH, RTREE_LEAF_STATE_SHIFT)\n\nstruct rtree_leaf_elm_s {\n#ifdef RTREE_LEAF_COMPACT\n\t/*\n\t * Single pointer-width field containing all three leaf element fields.\n\t * For example, on a 64-bit x64 system with 48 significant virtual\n\t * memory address bits, the index, edata, and slab fields are packed as\n\t * such:\n\t *\n\t * x: index\n\t * e: edata\n\t * s: state\n\t * h: is_head\n\t * b: slab\n\t *\n\t *   00000000 xxxxxxxx eeeeeeee [...] eeeeeeee e00ssshb\n\t */\n\tatomic_p_t\tle_bits;\n#else\n\tatomic_p_t\tle_edata; /* (edata_t *) */\n\t/*\n\t * From high to low bits: szind (8 bits), state (4 bits), is_head, slab\n\t */\n\tatomic_u_t\tle_metadata;\n#endif\n};\n\ntypedef struct rtree_level_s rtree_level_t;\nstruct rtree_level_s {\n\t/* Number of key bits distinguished by this level. */\n\tunsigned\t\tbits;\n\t/*\n\t * Cumulative number of key bits distinguished by traversing to\n\t * corresponding tree level.\n\t */\n\tunsigned\t\tcumbits;\n};\n\ntypedef struct rtree_s rtree_t;\nstruct rtree_s {\n\tbase_t\t\t\t*base;\n\tmalloc_mutex_t\t\tinit_lock;\n\t/* Number of elements based on rtree_levels[0].bits. */\n#if RTREE_HEIGHT > 1\n\trtree_node_elm_t\troot[1U << (RTREE_NSB/RTREE_HEIGHT)];\n#else\n\trtree_leaf_elm_t\troot[1U << (RTREE_NSB/RTREE_HEIGHT)];\n#endif\n};\n\n/*\n * Split the bits into one to three partitions depending on number of\n * significant bits.  
If the number of bits does not divide evenly into the\n * number of levels, place one remainder bit per level starting at the leaf\n * level.\n */\nstatic const rtree_level_t rtree_levels[] = {\n#if RTREE_HEIGHT == 1\n\t{RTREE_NSB, RTREE_NHIB + RTREE_NSB}\n#elif RTREE_HEIGHT == 2\n\t{RTREE_NSB/2, RTREE_NHIB + RTREE_NSB/2},\n\t{RTREE_NSB/2 + RTREE_NSB%2, RTREE_NHIB + RTREE_NSB}\n#elif RTREE_HEIGHT == 3\n\t{RTREE_NSB/3, RTREE_NHIB + RTREE_NSB/3},\n\t{RTREE_NSB/3 + RTREE_NSB%3/2,\n\t    RTREE_NHIB + RTREE_NSB/3*2 + RTREE_NSB%3/2},\n\t{RTREE_NSB/3 + RTREE_NSB%3 - RTREE_NSB%3/2, RTREE_NHIB + RTREE_NSB}\n#else\n#  error Unsupported rtree height\n#endif\n};\n\nbool rtree_new(rtree_t *rtree, base_t *base, bool zeroed);\n\nrtree_leaf_elm_t *rtree_leaf_elm_lookup_hard(tsdn_t *tsdn, rtree_t *rtree,\n    rtree_ctx_t *rtree_ctx, uintptr_t key, bool dependent, bool init_missing);\n\nJEMALLOC_ALWAYS_INLINE unsigned\nrtree_leaf_maskbits(void) {\n\tunsigned ptrbits = ZU(1) << (LG_SIZEOF_PTR+3);\n\tunsigned cumbits = (rtree_levels[RTREE_HEIGHT-1].cumbits -\n\t    rtree_levels[RTREE_HEIGHT-1].bits);\n\treturn ptrbits - cumbits;\n}\n\nJEMALLOC_ALWAYS_INLINE uintptr_t\nrtree_leafkey(uintptr_t key) {\n\tuintptr_t mask = ~((ZU(1) << rtree_leaf_maskbits()) - 1);\n\treturn (key & mask);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nrtree_cache_direct_map(uintptr_t key) {\n\treturn (size_t)((key >> rtree_leaf_maskbits()) &\n\t    (RTREE_CTX_NCACHE - 1));\n}\n\nJEMALLOC_ALWAYS_INLINE uintptr_t\nrtree_subkey(uintptr_t key, unsigned level) {\n\tunsigned ptrbits = ZU(1) << (LG_SIZEOF_PTR+3);\n\tunsigned cumbits = rtree_levels[level].cumbits;\n\tunsigned shiftbits = ptrbits - cumbits;\n\tunsigned maskbits = rtree_levels[level].bits;\n\tuintptr_t mask = (ZU(1) << maskbits) - 1;\n\treturn ((key >> shiftbits) & mask);\n}\n\n/*\n * Atomic getters.\n *\n * dependent: Reading a value on behalf of a pointer to a valid allocation\n *            is guaranteed to be a clean read even without synchronization,\n *   
         because the rtree update became visible in memory before the\n *            pointer came into existence.\n * !dependent: An arbitrary read, e.g. on behalf of ivsalloc(), may not be\n *             dependent on a previous rtree write, which means a stale read\n *             could result if synchronization were omitted here.\n */\n#  ifdef RTREE_LEAF_COMPACT\nJEMALLOC_ALWAYS_INLINE uintptr_t\nrtree_leaf_elm_bits_read(tsdn_t *tsdn, rtree_t *rtree,\n    rtree_leaf_elm_t *elm, bool dependent) {\n\treturn (uintptr_t)atomic_load_p(&elm->le_bits, dependent\n\t    ? ATOMIC_RELAXED : ATOMIC_ACQUIRE);\n}\n\nJEMALLOC_ALWAYS_INLINE uintptr_t\nrtree_leaf_elm_bits_encode(rtree_contents_t contents) {\n\tassert((uintptr_t)contents.edata % (uintptr_t)EDATA_ALIGNMENT == 0);\n\tuintptr_t edata_bits = (uintptr_t)contents.edata\n\t    & (((uintptr_t)1 << LG_VADDR) - 1);\n\n\tuintptr_t szind_bits = (uintptr_t)contents.metadata.szind << LG_VADDR;\n\tuintptr_t slab_bits = (uintptr_t)contents.metadata.slab;\n\tuintptr_t is_head_bits = (uintptr_t)contents.metadata.is_head << 1;\n\tuintptr_t state_bits = (uintptr_t)contents.metadata.state <<\n\t    RTREE_LEAF_STATE_SHIFT;\n\tuintptr_t metadata_bits = szind_bits | state_bits | is_head_bits |\n\t    slab_bits;\n\tassert((edata_bits & metadata_bits) == 0);\n\n\treturn edata_bits | metadata_bits;\n}\n\nJEMALLOC_ALWAYS_INLINE rtree_contents_t\nrtree_leaf_elm_bits_decode(uintptr_t bits) {\n\trtree_contents_t contents;\n\t/* Do the easy things first. 
*/\n\tcontents.metadata.szind = bits >> LG_VADDR;\n\tcontents.metadata.slab = (bool)(bits & 1);\n\tcontents.metadata.is_head = (bool)(bits & (1 << 1));\n\n\tuintptr_t state_bits = (bits & RTREE_LEAF_STATE_MASK) >>\n\t    RTREE_LEAF_STATE_SHIFT;\n\tassert(state_bits <= extent_state_max);\n\tcontents.metadata.state = (extent_state_t)state_bits;\n\n\tuintptr_t low_bit_mask = ~((uintptr_t)EDATA_ALIGNMENT - 1);\n#    ifdef __aarch64__\n\t/*\n\t * aarch64 doesn't sign extend the highest virtual address bit to set\n\t * the higher ones.  Instead, the high bits get zeroed.\n\t */\n\tuintptr_t high_bit_mask = ((uintptr_t)1 << LG_VADDR) - 1;\n\t/* Mask off metadata. */\n\tuintptr_t mask = high_bit_mask & low_bit_mask;\n\tcontents.edata = (edata_t *)(bits & mask);\n#    else\n\t/* Restore sign-extended high bits, mask metadata bits. */\n\tcontents.edata = (edata_t *)((uintptr_t)((intptr_t)(bits << RTREE_NHIB)\n\t    >> RTREE_NHIB) & low_bit_mask);\n#    endif\n\tassert((uintptr_t)contents.edata % (uintptr_t)EDATA_ALIGNMENT == 0);\n\treturn contents;\n}\n\n#  endif /* RTREE_LEAF_COMPACT */\n\nJEMALLOC_ALWAYS_INLINE rtree_contents_t\nrtree_leaf_elm_read(tsdn_t *tsdn, rtree_t *rtree, rtree_leaf_elm_t *elm,\n    bool dependent) {\n#ifdef RTREE_LEAF_COMPACT\n\tuintptr_t bits = rtree_leaf_elm_bits_read(tsdn, rtree, elm, dependent);\n\trtree_contents_t contents = rtree_leaf_elm_bits_decode(bits);\n\treturn contents;\n#else\n\trtree_contents_t contents;\n\tunsigned metadata_bits = atomic_load_u(&elm->le_metadata, dependent\n\t    ? 
ATOMIC_RELAXED : ATOMIC_ACQUIRE);\n\tcontents.metadata.slab = (bool)(metadata_bits & 1);\n\tcontents.metadata.is_head = (bool)(metadata_bits & (1 << 1));\n\n\tuintptr_t state_bits = (metadata_bits & RTREE_LEAF_STATE_MASK) >>\n\t    RTREE_LEAF_STATE_SHIFT;\n\tassert(state_bits <= extent_state_max);\n\tcontents.metadata.state = (extent_state_t)state_bits;\n\tcontents.metadata.szind = metadata_bits >> (RTREE_LEAF_STATE_SHIFT +\n\t    RTREE_LEAF_STATE_WIDTH);\n\n\tcontents.edata = (edata_t *)atomic_load_p(&elm->le_edata, dependent\n\t    ? ATOMIC_RELAXED : ATOMIC_ACQUIRE);\n\n\treturn contents;\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nrtree_contents_encode(rtree_contents_t contents, void **bits,\n    unsigned *additional) {\n#ifdef RTREE_LEAF_COMPACT\n\t*bits = (void *)rtree_leaf_elm_bits_encode(contents);\n#else\n\t*additional = (unsigned)contents.metadata.slab\n\t    | ((unsigned)contents.metadata.is_head << 1)\n\t    | ((unsigned)contents.metadata.state << RTREE_LEAF_STATE_SHIFT)\n\t    | ((unsigned)contents.metadata.szind << (RTREE_LEAF_STATE_SHIFT +\n\t    RTREE_LEAF_STATE_WIDTH));\n\t*bits = contents.edata;\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nrtree_leaf_elm_write_commit(tsdn_t *tsdn, rtree_t *rtree,\n    rtree_leaf_elm_t *elm, void *bits, unsigned additional) {\n#ifdef RTREE_LEAF_COMPACT\n\tatomic_store_p(&elm->le_bits, bits, ATOMIC_RELEASE);\n#else\n\tatomic_store_u(&elm->le_metadata, additional, ATOMIC_RELEASE);\n\t/*\n\t * Write edata last, since the element is atomically considered valid\n\t * as soon as the edata field is non-NULL.\n\t */\n\tatomic_store_p(&elm->le_edata, bits, ATOMIC_RELEASE);\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nrtree_leaf_elm_write(tsdn_t *tsdn, rtree_t *rtree,\n    rtree_leaf_elm_t *elm, rtree_contents_t contents) {\n\tassert((uintptr_t)contents.edata % EDATA_ALIGNMENT == 0);\n\tvoid *bits;\n\tunsigned additional;\n\n\trtree_contents_encode(contents, &bits, &additional);\n\trtree_leaf_elm_write_commit(tsdn, rtree, elm, 
bits, additional);\n}\n\n/* The state field can be updated independently (and more frequently). */\nJEMALLOC_ALWAYS_INLINE void\nrtree_leaf_elm_state_update(tsdn_t *tsdn, rtree_t *rtree,\n    rtree_leaf_elm_t *elm1, rtree_leaf_elm_t *elm2, extent_state_t state) {\n\tassert(elm1 != NULL);\n#ifdef RTREE_LEAF_COMPACT\n\tuintptr_t bits = rtree_leaf_elm_bits_read(tsdn, rtree, elm1,\n\t    /* dependent */ true);\n\tbits &= ~RTREE_LEAF_STATE_MASK;\n\tbits |= state << RTREE_LEAF_STATE_SHIFT;\n\tatomic_store_p(&elm1->le_bits, (void *)bits, ATOMIC_RELEASE);\n\tif (elm2 != NULL) {\n\t\tatomic_store_p(&elm2->le_bits, (void *)bits, ATOMIC_RELEASE);\n\t}\n#else\n\tunsigned bits = atomic_load_u(&elm1->le_metadata, ATOMIC_RELAXED);\n\tbits &= ~RTREE_LEAF_STATE_MASK;\n\tbits |= state << RTREE_LEAF_STATE_SHIFT;\n\tatomic_store_u(&elm1->le_metadata, bits, ATOMIC_RELEASE);\n\tif (elm2 != NULL) {\n\t\tatomic_store_u(&elm2->le_metadata, bits, ATOMIC_RELEASE);\n\t}\n#endif\n}\n\n/*\n * Tries to look up the key in the L1 cache, returning false if there's a hit, or\n * true if there's a miss.\n * Key is allowed to be NULL; returns true in this case.\n */\nJEMALLOC_ALWAYS_INLINE bool\nrtree_leaf_elm_lookup_fast(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key, rtree_leaf_elm_t **elm) {\n\tsize_t slot = rtree_cache_direct_map(key);\n\tuintptr_t leafkey = rtree_leafkey(key);\n\tassert(leafkey != RTREE_LEAFKEY_INVALID);\n\n\tif (unlikely(rtree_ctx->cache[slot].leafkey != leafkey)) {\n\t\treturn true;\n\t}\n\n\trtree_leaf_elm_t *leaf = rtree_ctx->cache[slot].leaf;\n\tassert(leaf != NULL);\n\tuintptr_t subkey = rtree_subkey(key, RTREE_HEIGHT-1);\n\t*elm = &leaf[subkey];\n\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE rtree_leaf_elm_t *\nrtree_leaf_elm_lookup(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key, bool dependent, bool init_missing) {\n\tassert(key != 0);\n\tassert(!dependent || !init_missing);\n\n\tsize_t slot = 
rtree_cache_direct_map(key);\n\tuintptr_t leafkey = rtree_leafkey(key);\n\tassert(leafkey != RTREE_LEAFKEY_INVALID);\n\n\t/* Fast path: L1 direct mapped cache. */\n\tif (likely(rtree_ctx->cache[slot].leafkey == leafkey)) {\n\t\trtree_leaf_elm_t *leaf = rtree_ctx->cache[slot].leaf;\n\t\tassert(leaf != NULL);\n\t\tuintptr_t subkey = rtree_subkey(key, RTREE_HEIGHT-1);\n\t\treturn &leaf[subkey];\n\t}\n\t/*\n\t * Search the L2 LRU cache.  On hit, swap the matching element into the\n\t * slot in L1 cache, and move the position in L2 up by 1.\n\t */\n#define RTREE_CACHE_CHECK_L2(i) do {\t\t\t\t\t\\\n\tif (likely(rtree_ctx->l2_cache[i].leafkey == leafkey)) {\t\\\n\t\trtree_leaf_elm_t *leaf = rtree_ctx->l2_cache[i].leaf;\t\\\n\t\tassert(leaf != NULL);\t\t\t\t\t\\\n\t\tif (i > 0) {\t\t\t\t\t\t\\\n\t\t\t/* Bubble up by one. */\t\t\t\t\\\n\t\t\trtree_ctx->l2_cache[i].leafkey =\t\t\\\n\t\t\t\trtree_ctx->l2_cache[i - 1].leafkey;\t\\\n\t\t\trtree_ctx->l2_cache[i].leaf =\t\t\t\\\n\t\t\t\trtree_ctx->l2_cache[i - 1].leaf;\t\\\n\t\t\trtree_ctx->l2_cache[i - 1].leafkey =\t\t\\\n\t\t\t    rtree_ctx->cache[slot].leafkey;\t\t\\\n\t\t\trtree_ctx->l2_cache[i - 1].leaf =\t\t\\\n\t\t\t    rtree_ctx->cache[slot].leaf;\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\trtree_ctx->l2_cache[0].leafkey =\t\t\\\n\t\t\t    rtree_ctx->cache[slot].leafkey;\t\t\\\n\t\t\trtree_ctx->l2_cache[0].leaf =\t\t\t\\\n\t\t\t    rtree_ctx->cache[slot].leaf;\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\trtree_ctx->cache[slot].leafkey = leafkey;\t\t\\\n\t\trtree_ctx->cache[slot].leaf = leaf;\t\t\t\\\n\t\tuintptr_t subkey = rtree_subkey(key, RTREE_HEIGHT-1);\t\\\n\t\treturn &leaf[subkey];\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\t/* Check the first cache entry. */\n\tRTREE_CACHE_CHECK_L2(0);\n\t/* Search the remaining cache elements. 
*/\n\tfor (unsigned i = 1; i < RTREE_CTX_NCACHE_L2; i++) {\n\t\tRTREE_CACHE_CHECK_L2(i);\n\t}\n#undef RTREE_CACHE_CHECK_L2\n\n\treturn rtree_leaf_elm_lookup_hard(tsdn, rtree, rtree_ctx, key,\n\t    dependent, init_missing);\n}\n\n/*\n * Returns true on lookup failure.\n */\nstatic inline bool\nrtree_read_independent(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key, rtree_contents_t *r_contents) {\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx,\n\t    key, /* dependent */ false, /* init_missing */ false);\n\tif (elm == NULL) {\n\t\treturn true;\n\t}\n\t*r_contents = rtree_leaf_elm_read(tsdn, rtree, elm,\n\t    /* dependent */ false);\n\treturn false;\n}\n\nstatic inline rtree_contents_t\nrtree_read(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key) {\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx,\n\t    key, /* dependent */ true, /* init_missing */ false);\n\tassert(elm != NULL);\n\treturn rtree_leaf_elm_read(tsdn, rtree, elm, /* dependent */ true);\n}\n\nstatic inline rtree_metadata_t\nrtree_metadata_read(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key) {\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx,\n\t    key, /* dependent */ true, /* init_missing */ false);\n\tassert(elm != NULL);\n\treturn rtree_leaf_elm_read(tsdn, rtree, elm,\n\t    /* dependent */ true).metadata;\n}\n\n/*\n * Returns true when the request cannot be fulfilled by fastpath.\n */\nstatic inline bool\nrtree_metadata_try_read_fast(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key, rtree_metadata_t *r_rtree_metadata) {\n\trtree_leaf_elm_t *elm;\n\t/*\n\t * Should check the bool return value (lookup success or not) instead of\n\t * elm == NULL (which will result in an extra branch).  
This is because\n\t * when the cache lookup succeeds, there will never be a NULL pointer\n\t * returned (which is unknown to the compiler).\n\t */\n\tif (rtree_leaf_elm_lookup_fast(tsdn, rtree, rtree_ctx, key, &elm)) {\n\t\treturn true;\n\t}\n\tassert(elm != NULL);\n\t*r_rtree_metadata = rtree_leaf_elm_read(tsdn, rtree, elm,\n\t    /* dependent */ true).metadata;\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nrtree_write_range_impl(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t base, uintptr_t end, rtree_contents_t contents, bool clearing) {\n\tassert((base & PAGE_MASK) == 0 && (end & PAGE_MASK) == 0);\n\t/*\n\t * Only used for emap_(de)register_interior, which implies the\n\t * boundaries have been registered already.  Therefore all the lookups\n\t * are dependent w/o init_missing, assuming the range spans across at\n\t * most 2 rtree leaf nodes (each covers 1 GiB of vaddr).\n\t */\n\tvoid *bits;\n\tunsigned additional;\n\trtree_contents_encode(contents, &bits, &additional);\n\n\trtree_leaf_elm_t *elm = NULL; /* Dead store. 
*/\n\tfor (uintptr_t addr = base; addr <= end; addr += PAGE) {\n\t\tif (addr == base ||\n\t\t    (addr & ((ZU(1) << rtree_leaf_maskbits()) - 1)) == 0) {\n\t\t\telm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx, addr,\n\t\t\t    /* dependent */ true, /* init_missing */ false);\n\t\t\tassert(elm != NULL);\n\t\t}\n\t\tassert(elm == rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx, addr,\n\t\t    /* dependent */ true, /* init_missing */ false));\n\t\tassert(!clearing || rtree_leaf_elm_read(tsdn, rtree, elm,\n\t\t    /* dependent */ true).edata != NULL);\n\t\trtree_leaf_elm_write_commit(tsdn, rtree, elm, bits, additional);\n\t\telm++;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nrtree_write_range(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t base, uintptr_t end, rtree_contents_t contents) {\n\trtree_write_range_impl(tsdn, rtree, rtree_ctx, base, end, contents,\n\t    /* clearing */ false);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nrtree_write(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx, uintptr_t key,\n    rtree_contents_t contents) {\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx,\n\t    key, /* dependent */ false, /* init_missing */ true);\n\tif (elm == NULL) {\n\t\treturn true;\n\t}\n\n\trtree_leaf_elm_write(tsdn, rtree, elm, contents);\n\n\treturn false;\n}\n\nstatic inline void\nrtree_clear(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key) {\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree, rtree_ctx,\n\t    key, /* dependent */ true, /* init_missing */ false);\n\tassert(elm != NULL);\n\tassert(rtree_leaf_elm_read(tsdn, rtree, elm,\n\t    /* dependent */ true).edata != NULL);\n\trtree_contents_t contents;\n\tcontents.edata = NULL;\n\tcontents.metadata.szind = SC_NSIZES;\n\tcontents.metadata.slab = false;\n\tcontents.metadata.is_head = false;\n\tcontents.metadata.state = (extent_state_t)0;\n\trtree_leaf_elm_write(tsdn, rtree, elm, contents);\n}\n\nstatic inline 
void\nrtree_clear_range(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t base, uintptr_t end) {\n\trtree_contents_t contents;\n\tcontents.edata = NULL;\n\tcontents.metadata.szind = SC_NSIZES;\n\tcontents.metadata.slab = false;\n\tcontents.metadata.is_head = false;\n\tcontents.metadata.state = (extent_state_t)0;\n\trtree_write_range_impl(tsdn, rtree, rtree_ctx, base, end, contents,\n\t    /* clearing */ true);\n}\n\n#endif /* JEMALLOC_INTERNAL_RTREE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/rtree_tsd.h",
    "content": "#ifndef JEMALLOC_INTERNAL_RTREE_CTX_H\n#define JEMALLOC_INTERNAL_RTREE_CTX_H\n\n/*\n * Number of leafkey/leaf pairs to cache in L1 and L2 level respectively.  Each\n * entry supports an entire leaf, so the cache hit rate is typically high even\n * with a small number of entries.  In rare cases extent activity will straddle\n * the boundary between two leaf nodes.  Furthermore, an arena may use a\n * combination of dss and mmap.  Note that as memory usage grows past the amount\n * that this cache can directly cover, the cache will become less effective if\n * locality of reference is low, but the consequence is merely cache misses\n * while traversing the tree nodes.\n *\n * The L1 direct mapped cache offers consistent and low cost on cache hit.\n * However collision could affect hit rate negatively.  This is resolved by\n * combining with a L2 LRU cache, which requires linear search and re-ordering\n * on access but suffers no collision.  Note that, the cache will itself suffer\n * cache misses if made overly large, plus the cost of linear search in the LRU\n * cache.\n */\n#define RTREE_CTX_NCACHE 16\n#define RTREE_CTX_NCACHE_L2 8\n\n/* Needed for initialization only. 
*/\n#define RTREE_LEAFKEY_INVALID ((uintptr_t)1)\n#define RTREE_CTX_CACHE_ELM_INVALID {RTREE_LEAFKEY_INVALID, NULL}\n\n#define RTREE_CTX_INIT_ELM_1 RTREE_CTX_CACHE_ELM_INVALID\n#define RTREE_CTX_INIT_ELM_2 RTREE_CTX_INIT_ELM_1, RTREE_CTX_INIT_ELM_1\n#define RTREE_CTX_INIT_ELM_4 RTREE_CTX_INIT_ELM_2, RTREE_CTX_INIT_ELM_2\n#define RTREE_CTX_INIT_ELM_8 RTREE_CTX_INIT_ELM_4, RTREE_CTX_INIT_ELM_4\n#define RTREE_CTX_INIT_ELM_16 RTREE_CTX_INIT_ELM_8, RTREE_CTX_INIT_ELM_8\n\n#define _RTREE_CTX_INIT_ELM_DATA(n) RTREE_CTX_INIT_ELM_##n\n#define RTREE_CTX_INIT_ELM_DATA(n) _RTREE_CTX_INIT_ELM_DATA(n)\n\n/*\n * Static initializer (to invalidate the cache entries) is required because the\n * free fastpath may access the rtree cache before a full tsd initialization.\n */\n#define RTREE_CTX_INITIALIZER {{RTREE_CTX_INIT_ELM_DATA(RTREE_CTX_NCACHE)}, \\\n\t\t\t       {RTREE_CTX_INIT_ELM_DATA(RTREE_CTX_NCACHE_L2)}}\n\ntypedef struct rtree_leaf_elm_s rtree_leaf_elm_t;\n\ntypedef struct rtree_ctx_cache_elm_s rtree_ctx_cache_elm_t;\nstruct rtree_ctx_cache_elm_s {\n\tuintptr_t\t\tleafkey;\n\trtree_leaf_elm_t\t*leaf;\n};\n\ntypedef struct rtree_ctx_s rtree_ctx_t;\nstruct rtree_ctx_s {\n\t/* Direct mapped cache. */\n\trtree_ctx_cache_elm_t\tcache[RTREE_CTX_NCACHE];\n\t/* L2 LRU cache. */\n\trtree_ctx_cache_elm_t\tl2_cache[RTREE_CTX_NCACHE_L2];\n};\n\nvoid rtree_ctx_data_init(rtree_ctx_t *ctx);\n\n#endif /* JEMALLOC_INTERNAL_RTREE_CTX_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/safety_check.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SAFETY_CHECK_H\n#define JEMALLOC_INTERNAL_SAFETY_CHECK_H\n\nvoid safety_check_fail_sized_dealloc(bool current_dealloc, const void *ptr,\n    size_t true_size, size_t input_size);\nvoid safety_check_fail(const char *format, ...);\n\ntypedef void (*safety_check_abort_hook_t)(const char *message);\n\n/* Can set to NULL for a default. */\nvoid safety_check_set_abort(safety_check_abort_hook_t abort_fn);\n\nJEMALLOC_ALWAYS_INLINE void\nsafety_check_set_redzone(void *ptr, size_t usize, size_t bumped_usize) {\n\tassert(usize < bumped_usize);\n\tfor (size_t i = usize; i < bumped_usize && i < usize + 32; ++i) {\n\t\t*((unsigned char *)ptr + i) = 0xBC;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nsafety_check_verify_redzone(const void *ptr, size_t usize, size_t bumped_usize)\n{\n\tfor (size_t i = usize; i < bumped_usize && i < usize + 32; ++i) {\n\t\tif (unlikely(*((unsigned char *)ptr + i) != 0xBC)) {\n\t\t\tsafety_check_fail(\"Use after free error\\n\");\n\t\t}\n\t}\n}\n\n#endif /*JEMALLOC_INTERNAL_SAFETY_CHECK_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/san.h",
    "content": "#ifndef JEMALLOC_INTERNAL_GUARD_H\n#define JEMALLOC_INTERNAL_GUARD_H\n\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/emap.h\"\n\n#define SAN_PAGE_GUARD PAGE\n#define SAN_PAGE_GUARDS_SIZE (SAN_PAGE_GUARD * 2)\n\n#define SAN_GUARD_LARGE_EVERY_N_EXTENTS_DEFAULT 0\n#define SAN_GUARD_SMALL_EVERY_N_EXTENTS_DEFAULT 0\n\n#define SAN_LG_UAF_ALIGN_DEFAULT (-1)\n#define SAN_CACHE_BIN_NONFAST_MASK_DEFAULT (uintptr_t)(-1)\n\nstatic const uintptr_t uaf_detect_junk = (uintptr_t)0x5b5b5b5b5b5b5b5bULL;\n\n/* 0 means disabled, i.e. never guarded. */\nextern size_t opt_san_guard_large;\nextern size_t opt_san_guard_small;\n/* -1 means disabled, i.e. never check for use-after-free. */\nextern ssize_t opt_lg_san_uaf_align;\n\nvoid san_guard_pages(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap, bool left, bool right, bool remap);\nvoid san_unguard_pages(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap, bool left, bool right);\n/*\n * Unguard the extent, but don't modify emap boundaries. 
Must be called on an\n * extent that has been erased from emap and shouldn't be placed back.\n */\nvoid san_unguard_pages_pre_destroy(tsdn_t *tsdn, ehooks_t *ehooks,\n    edata_t *edata, emap_t *emap);\nvoid san_check_stashed_ptrs(void **ptrs, size_t nstashed, size_t usize);\n\nvoid tsd_san_init(tsd_t *tsd);\nvoid san_init(ssize_t lg_san_uaf_align);\n\nstatic inline void\nsan_guard_pages_two_sided(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap, bool remap) {\n\tsan_guard_pages(tsdn, ehooks, edata, emap, true, true, remap);\n}\n\nstatic inline void\nsan_unguard_pages_two_sided(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap) {\n\tsan_unguard_pages(tsdn, ehooks, edata, emap, true, true);\n}\n\nstatic inline size_t\nsan_two_side_unguarded_sz(size_t size) {\n\tassert(size % PAGE == 0);\n\tassert(size >= SAN_PAGE_GUARDS_SIZE);\n\treturn size - SAN_PAGE_GUARDS_SIZE;\n}\n\nstatic inline size_t\nsan_two_side_guarded_sz(size_t size) {\n\tassert(size % PAGE == 0);\n\treturn size + SAN_PAGE_GUARDS_SIZE;\n}\n\nstatic inline size_t\nsan_one_side_unguarded_sz(size_t size) {\n\tassert(size % PAGE == 0);\n\tassert(size >= SAN_PAGE_GUARD);\n\treturn size - SAN_PAGE_GUARD;\n}\n\nstatic inline size_t\nsan_one_side_guarded_sz(size_t size) {\n\tassert(size % PAGE == 0);\n\treturn size + SAN_PAGE_GUARD;\n}\n\nstatic inline bool\nsan_guard_enabled(void) {\n\treturn (opt_san_guard_large != 0 || opt_san_guard_small != 0);\n}\n\nstatic inline bool\nsan_large_extent_decide_guard(tsdn_t *tsdn, ehooks_t *ehooks, size_t size,\n    size_t alignment) {\n\tif (opt_san_guard_large == 0 || ehooks_guard_will_fail(ehooks) ||\n\t    tsdn_null(tsdn)) {\n\t\treturn false;\n\t}\n\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\tuint64_t n = tsd_san_extents_until_guard_large_get(tsd);\n\tassert(n >= 1);\n\tif (n > 1) {\n\t\t/*\n\t\t * Subtract conditionally because the guard may not happen due\n\t\t * to alignment or size restriction below.\n\t\t 
*/\n\t\t*tsd_san_extents_until_guard_largep_get(tsd) = n - 1;\n\t}\n\n\tif (n == 1 && (alignment <= PAGE) &&\n\t    (san_two_side_guarded_sz(size) <= SC_LARGE_MAXCLASS)) {\n\t\t*tsd_san_extents_until_guard_largep_get(tsd) =\n\t\t    opt_san_guard_large;\n\t\treturn true;\n\t} else {\n\t\tassert(tsd_san_extents_until_guard_large_get(tsd) >= 1);\n\t\treturn false;\n\t}\n}\n\nstatic inline bool\nsan_slab_extent_decide_guard(tsdn_t *tsdn, ehooks_t *ehooks) {\n\tif (opt_san_guard_small == 0 || ehooks_guard_will_fail(ehooks) ||\n\t    tsdn_null(tsdn)) {\n\t\treturn false;\n\t}\n\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\tuint64_t n = tsd_san_extents_until_guard_small_get(tsd);\n\tassert(n >= 1);\n\tif (n == 1) {\n\t\t*tsd_san_extents_until_guard_smallp_get(tsd) =\n\t\t    opt_san_guard_small;\n\t\treturn true;\n\t} else {\n\t\t*tsd_san_extents_until_guard_smallp_get(tsd) = n - 1;\n\t\tassert(tsd_san_extents_until_guard_small_get(tsd) >= 1);\n\t\treturn false;\n\t}\n}\n\nstatic inline void\nsan_junk_ptr_locations(void *ptr, size_t usize, void **first, void **mid,\n    void **last) {\n\tsize_t ptr_sz = sizeof(void *);\n\n\t*first = ptr;\n\n\t*mid = (void *)((uintptr_t)ptr + ((usize >> 1) & ~(ptr_sz - 1)));\n\tassert(*first != *mid || usize == ptr_sz);\n\tassert((uintptr_t)*first <= (uintptr_t)*mid);\n\n\t/*\n\t * When usize > 32K, the gap between requested_size and usize might be\n\t * greater than 4K -- this means the last write may access an\n\t * likely-untouched page (default settings w/ 4K pages).  
However by\n\t * default the tcache only goes up to the 32K size class, and is usually\n\t * tuned lower instead of higher, which makes it less of a concern.\n\t */\n\t*last = (void *)((uintptr_t)ptr + usize - sizeof(uaf_detect_junk));\n\tassert(*first != *last || usize == ptr_sz);\n\tassert(*mid != *last || usize <= ptr_sz * 2);\n\tassert((uintptr_t)*mid <= (uintptr_t)*last);\n}\n\nstatic inline bool\nsan_junk_ptr_should_slow(void) {\n\t/*\n\t * The latter condition (pointer size greater than the min size class)\n\t * is not expected -- fall back to the slow path for simplicity.\n\t */\n\treturn config_debug || (LG_SIZEOF_PTR > SC_LG_TINY_MIN);\n}\n\nstatic inline void\nsan_junk_ptr(void *ptr, size_t usize) {\n\tif (san_junk_ptr_should_slow()) {\n\t\tmemset(ptr, (char)uaf_detect_junk, usize);\n\t\treturn;\n\t}\n\n\tvoid *first, *mid, *last;\n\tsan_junk_ptr_locations(ptr, usize, &first, &mid, &last);\n\t*(uintptr_t *)first = uaf_detect_junk;\n\t*(uintptr_t *)mid = uaf_detect_junk;\n\t*(uintptr_t *)last = uaf_detect_junk;\n}\n\nstatic inline bool\nsan_uaf_detection_enabled(void) {\n\tbool ret = config_uaf_detection && (opt_lg_san_uaf_align != -1);\n\tif (config_uaf_detection && ret) {\n\t\tassert(san_cache_bin_nonfast_mask == ((uintptr_t)1 <<\n\t\t    opt_lg_san_uaf_align) - 1);\n\t}\n\n\treturn ret;\n}\n\n#endif /* JEMALLOC_INTERNAL_GUARD_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/san_bump.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SAN_BUMP_H\n#define JEMALLOC_INTERNAL_SAN_BUMP_H\n\n#include \"jemalloc/internal/edata.h\"\n#include \"jemalloc/internal/exp_grow.h\"\n#include \"jemalloc/internal/mutex.h\"\n\n#define SBA_RETAINED_ALLOC_SIZE ((size_t)4 << 20)\n\nextern bool opt_retain;\n\ntypedef struct ehooks_s ehooks_t;\ntypedef struct pac_s pac_t;\n\ntypedef struct san_bump_alloc_s san_bump_alloc_t;\nstruct san_bump_alloc_s {\n\tmalloc_mutex_t mtx;\n\n\tedata_t *curr_reg;\n};\n\nstatic inline bool\nsan_bump_enabled() {\n\t/*\n\t * We enable san_bump allocator only when it's possible to break up a\n\t * mapping and unmap a part of it (maps_coalesce). This is needed to\n\t * ensure the arena destruction process can destroy all retained guarded\n\t * extents one by one and to unmap a trailing part of a retained guarded\n\t * region when it's too small to fit a pending allocation.\n\t * opt_retain is required, because this allocator retains a large\n\t * virtual memory mapping and returns smaller parts of it.\n\t */\n\treturn maps_coalesce && opt_retain;\n}\n\nstatic inline bool\nsan_bump_alloc_init(san_bump_alloc_t* sba) {\n\tbool err = malloc_mutex_init(&sba->mtx, \"sanitizer_bump_allocator\",\n\t    WITNESS_RANK_SAN_BUMP_ALLOC, malloc_mutex_rank_exclusive);\n\tif (err) {\n\t\treturn true;\n\t}\n\tsba->curr_reg = NULL;\n\n\treturn false;\n}\n\nedata_t *\nsan_bump_alloc(tsdn_t *tsdn, san_bump_alloc_t* sba, pac_t *pac, ehooks_t *ehooks,\n    size_t size, bool zero);\n\n#endif /* JEMALLOC_INTERNAL_SAN_BUMP_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/sc.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SC_H\n#define JEMALLOC_INTERNAL_SC_H\n\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n\n/*\n * Size class computations:\n *\n * These are a little tricky; we'll first start by describing how things\n * generally work, and then describe some of the details.\n *\n * Ignore the first few size classes for a moment. We can then split all the\n * remaining size classes into groups. The size classes in a group are spaced\n * such that they cover allocation request sizes in a power-of-2 range. The\n * power of two is called the base of the group, and the size classes in it\n * satisfy allocations in the half-open range (base, base * 2]. There are\n * SC_NGROUP size classes in each group, equally spaced in the range, so that\n * each one covers allocations for base / SC_NGROUP possible allocation sizes.\n * We call that value (base / SC_NGROUP) the delta of the group. Each size class\n * is delta larger than the one before it (including the initial size class in a\n * group, which is delta larger than base, the largest size class in the\n * previous group).\n * To make the math all work out nicely, we require that SC_NGROUP is a power of\n * two, and define it in terms of SC_LG_NGROUP. We'll often talk in terms of\n * lg_base and lg_delta. 
For each of these groups then, we have that\n * lg_delta == lg_base - SC_LG_NGROUP.\n * The size classes in a group with a given lg_base and lg_delta (which, recall,\n * can be computed from lg_base for these groups) are therefore:\n *   base + 1 * delta\n *     which covers allocations in (base, base + 1 * delta]\n *   base + 2 * delta\n *     which covers allocations in (base + 1 * delta, base + 2 * delta].\n *   base + 3 * delta\n *     which covers allocations in (base + 2 * delta, base + 3 * delta].\n *   ...\n *   base + SC_NGROUP * delta ( == 2 * base)\n *     which covers allocations in (base + (SC_NGROUP - 1) * delta, 2 * base].\n * (Note that currently SC_NGROUP is always 4, so the \"...\" is empty in\n * practice.)\n * Note that the last size class in the group is the next power of two (after\n * base), so that we've set up the induction correctly for the next group's\n * selection of delta.\n *\n * Now, let's start considering the first few size classes. Two extra constants\n * come into play here: LG_QUANTUM and SC_LG_TINY_MIN. LG_QUANTUM ensures\n * correct platform alignment; all objects of size (1 << LG_QUANTUM) or larger\n * are at least (1 << LG_QUANTUM) aligned; this can be used to ensure that we\n * never return improperly aligned memory, by making (1 << LG_QUANTUM) equal the\n * highest required alignment of a platform. For allocation sizes smaller than\n * (1 << LG_QUANTUM) though, we can be more relaxed (since we don't support\n * platforms with types with alignment larger than their size). To allow such\n * allocations (without wasting space unnecessarily), we introduce tiny size\n * classes; one per power of two, up until we hit the quantum size. There are\n * therefore LG_QUANTUM - SC_LG_TINY_MIN such size classes.\n *\n * Next, we have a size class of size (1 << LG_QUANTUM).  
This can't be the\n * start of a group in the sense we described above (covering a power of two\n * range) since, if we divided into it to pick a value of delta, we'd get a\n * delta smaller than (1 << LG_QUANTUM) for sizes >= (1 << LG_QUANTUM), which\n * is against the rules.\n *\n * The first base we can divide by SC_NGROUP while still being at least\n * (1 << LG_QUANTUM) is SC_NGROUP * (1 << LG_QUANTUM). We can get there by\n * having SC_NGROUP size classes, spaced (1 << LG_QUANTUM) apart. These size\n * classes are:\n *   1 * (1 << LG_QUANTUM)\n *   2 * (1 << LG_QUANTUM)\n *   3 * (1 << LG_QUANTUM)\n *   ... (although, as above, this \"...\" is empty in practice)\n *   SC_NGROUP * (1 << LG_QUANTUM).\n *\n * There are SC_NGROUP of these size classes, so we can regard it as a sort of\n * pseudo-group, even though it spans multiple powers of 2, is divided\n * differently, and both starts and ends on a power of 2 (as opposed to just\n * ending). SC_NGROUP is itself a power of two, so the first group after the\n * pseudo-group has the power-of-two base SC_NGROUP * (1 << LG_QUANTUM), for a\n * lg_base of LG_QUANTUM + SC_LG_NGROUP. 
We can divide this base into SC_NGROUP\n * sizes without violating our LG_QUANTUM requirements, so we can safely set\n * lg_delta = lg_base - SC_LG_NGROUP (== LG_QUANTUM).\n *\n * So, in order, the size classes are:\n *\n * Tiny size classes:\n * - Count: LG_QUANTUM - SC_LG_TINY_MIN.\n * - Sizes:\n *     1 << SC_LG_TINY_MIN\n *     1 << (SC_LG_TINY_MIN + 1)\n *     1 << (SC_LG_TINY_MIN + 2)\n *     ...\n *     1 << (LG_QUANTUM - 1)\n *\n * Initial pseudo-group:\n * - Count: SC_NGROUP\n * - Sizes:\n *     1 * (1 << LG_QUANTUM)\n *     2 * (1 << LG_QUANTUM)\n *     3 * (1 << LG_QUANTUM)\n *     ...\n *     SC_NGROUP * (1 << LG_QUANTUM)\n *\n * Regular group 0:\n * - Count: SC_NGROUP\n * - Sizes:\n *   (relative to lg_base of LG_QUANTUM + SC_LG_NGROUP and lg_delta of\n *   lg_base - SC_LG_NGROUP)\n *     (1 << lg_base) + 1 * (1 << lg_delta)\n *     (1 << lg_base) + 2 * (1 << lg_delta)\n *     (1 << lg_base) + 3 * (1 << lg_delta)\n *     ...\n *     (1 << lg_base) + SC_NGROUP * (1 << lg_delta) [ == (1 << (lg_base + 1)) ]\n *\n * Regular group 1:\n * - Count: SC_NGROUP\n * - Sizes:\n *   (relative to lg_base of LG_QUANTUM + SC_LG_NGROUP + 1 and lg_delta of\n *   lg_base - SC_LG_NGROUP)\n *     (1 << lg_base) + 1 * (1 << lg_delta)\n *     (1 << lg_base) + 2 * (1 << lg_delta)\n *     (1 << lg_base) + 3 * (1 << lg_delta)\n *     ...\n *     (1 << lg_base) + SC_NGROUP * (1 << lg_delta) [ == (1 << (lg_base + 1)) ]\n *\n * ...\n *\n * Regular group N:\n * - Count: SC_NGROUP\n * - Sizes:\n *   (relative to lg_base of LG_QUANTUM + SC_LG_NGROUP + N and lg_delta of\n *   lg_base - SC_LG_NGROUP)\n *     (1 << lg_base) + 1 * (1 << lg_delta)\n *     (1 << lg_base) + 2 * (1 << lg_delta)\n *     (1 << lg_base) + 3 * (1 << lg_delta)\n *     ...\n *     (1 << lg_base) + SC_NGROUP * (1 << lg_delta) [ == (1 << (lg_base + 1)) ]\n *\n *\n * Representation of metadata:\n * To make the math easy, we'll mostly work in lg quantities. We record lg_base,\n * lg_delta, and ndelta (i.e. 
number of deltas above the base) on a\n * per-size-class basis, and maintain the invariant that, across all size\n * classes, size == (1 << lg_base) + ndelta * (1 << lg_delta).\n *\n * For regular groups (i.e. those with lg_base >= LG_QUANTUM + SC_LG_NGROUP),\n * lg_delta is lg_base - SC_LG_NGROUP, and ndelta goes from 1 to SC_NGROUP.\n *\n * For the initial tiny size classes (if any), lg_base is lg(size class size).\n * lg_delta is lg_base for the first size class, and lg_base - 1 for all\n * subsequent ones. ndelta is always 0.\n *\n * For the pseudo-group, if there are no tiny size classes, then we set\n * lg_base == LG_QUANTUM, lg_delta == LG_QUANTUM, and have ndelta range from 0\n * to SC_NGROUP - 1. (Note that delta == base, so base + (SC_NGROUP - 1) * delta\n * is just SC_NGROUP * base, or (1 << (SC_LG_NGROUP + LG_QUANTUM)), so we do\n * indeed get a power of two that way). If there *are* tiny size classes, then\n * the first size class needs to have lg_delta relative to the largest tiny size\n * class. We therefore set lg_base == LG_QUANTUM - 1,\n * lg_delta == LG_QUANTUM - 1, and ndelta == 1, keeping the rest of the\n * pseudo-group the same.\n *\n *\n * Other terminology:\n * \"Small\" size classes mean those that are allocated out of bins, which is the\n * same as those that are slab allocated.\n * \"Large\" size classes are those that are not small. The cutoff for counting as\n * large is page size * group size.\n */\n\n/*\n * Size class N + (1 << SC_LG_NGROUP) is twice the size of size class N.\n */\n#define SC_LG_NGROUP 2\n#define SC_LG_TINY_MIN 3\n\n#if SC_LG_TINY_MIN == 0\n/* The div module doesn't support division by 1, which this would require. 
*/\n#error \"Unsupported LG_TINY_MIN\"\n#endif\n\n/*\n * The definitions below are all determined by the above settings and system\n * characteristics.\n */\n#define SC_NGROUP (1ULL << SC_LG_NGROUP)\n#define SC_PTR_BITS ((1ULL << LG_SIZEOF_PTR) * 8)\n#define SC_NTINY (LG_QUANTUM - SC_LG_TINY_MIN)\n#define SC_LG_TINY_MAXCLASS (LG_QUANTUM > SC_LG_TINY_MIN ? LG_QUANTUM - 1 : -1)\n#define SC_NPSEUDO SC_NGROUP\n#define SC_LG_FIRST_REGULAR_BASE (LG_QUANTUM + SC_LG_NGROUP)\n/*\n * We cap allocations to be less than 2 ** (ptr_bits - 1), so the highest base\n * we need is 2 ** (ptr_bits - 2). (This also means that the last group is 1\n * size class shorter than the others).\n * We could probably save some space in arenas by capping this at LG_VADDR size.\n */\n#define SC_LG_BASE_MAX (SC_PTR_BITS - 2)\n#define SC_NREGULAR (SC_NGROUP * \t\t\t\t\t\\\n    (SC_LG_BASE_MAX - SC_LG_FIRST_REGULAR_BASE + 1) - 1)\n#define SC_NSIZES (SC_NTINY + SC_NPSEUDO + SC_NREGULAR)\n\n/*\n * The number of size classes that are a multiple of the page size.\n *\n * Here are the first few bases that have a page-sized SC.\n *\n *      lg(base) |     base | highest SC | page-multiple SCs\n * --------------|------------------------------------------\n *   LG_PAGE - 1 | PAGE / 2 |       PAGE | 1\n *       LG_PAGE |     PAGE |   2 * PAGE | 1\n *   LG_PAGE + 1 | 2 * PAGE |   4 * PAGE | 2\n *   LG_PAGE + 2 | 4 * PAGE |   8 * PAGE | 4\n *\n * The number of page-multiple SCs continues to grow in powers of two, up until\n * lg_delta == lg_page, which corresponds to setting lg_base to lg_page +\n * SC_LG_NGROUP.  So, then, the number of size classes that are multiples of the\n * page size whose lg_delta is less than the page size are\n * is 1 + (2**0 + 2**1 + ... 
+ 2**(lg_ngroup - 1)) == 2**lg_ngroup.\n *\n * For each base with lg_base in [lg_page + lg_ngroup, lg_base_max), there are\n * NGROUP page-sized size classes, and when lg_base == lg_base_max, there are\n * NGROUP - 1.\n *\n * This gives us the quantity we seek.\n */\n#define SC_NPSIZES (\t\t\t\t\t\t\t\\\n    SC_NGROUP\t\t\t\t\t\t\t\t\\\n    + (SC_LG_BASE_MAX - (LG_PAGE + SC_LG_NGROUP)) * SC_NGROUP\t\t\\\n    + SC_NGROUP - 1)\n\n/*\n * We declare a size class binnable if size < page size * group. Or, in other\n * words, lg(size) < lg(page size) + lg(group size).\n */\n#define SC_NBINS (\t\t\t\t\t\t\t\\\n    /* Sub-regular size classes. */\t\t\t\t\t\\\n    SC_NTINY + SC_NPSEUDO\t\t\t\t\t\t\\\n    /* Groups with lg_regular_min_base <= lg_base <= lg_base_max */\t\\\n    + SC_NGROUP * (LG_PAGE + SC_LG_NGROUP - SC_LG_FIRST_REGULAR_BASE)\t\\\n    /* Last SC of the last group hits the bound exactly; exclude it. */\t\\\n    - 1)\n\n/*\n * The size2index_tab lookup table uses uint8_t to encode each bin index, so we\n * cannot support more than 256 small size classes.\n */\n#if (SC_NBINS > 256)\n#  error \"Too many small size classes\"\n#endif\n\n/* The largest size class in the lookup table, and its binary log. */\n#define SC_LG_MAX_LOOKUP 12\n#define SC_LOOKUP_MAXCLASS (1 << SC_LG_MAX_LOOKUP)\n\n/* Internal, only used for the definition of SC_SMALL_MAXCLASS. */\n#define SC_SMALL_MAX_BASE (1 << (LG_PAGE + SC_LG_NGROUP - 1))\n#define SC_SMALL_MAX_DELTA (1 << (LG_PAGE - 1))\n\n/* The largest size class allocated out of a slab. */\n#define SC_SMALL_MAXCLASS (SC_SMALL_MAX_BASE\t\t\t\t\\\n    + (SC_NGROUP - 1) * SC_SMALL_MAX_DELTA)\n\n/* The fastpath assumes all lookup-able sizes are small. */\n#if (SC_SMALL_MAXCLASS < SC_LOOKUP_MAXCLASS)\n#  error \"Lookup table sizes must be small\"\n#endif\n\n/* The smallest size class not allocated out of a slab. 
*/\n#define SC_LARGE_MINCLASS ((size_t)1ULL << (LG_PAGE + SC_LG_NGROUP))\n#define SC_LG_LARGE_MINCLASS (LG_PAGE + SC_LG_NGROUP)\n\n/* Internal; only used for the definition of SC_LARGE_MAXCLASS. */\n#define SC_MAX_BASE ((size_t)1 << (SC_PTR_BITS - 2))\n#define SC_MAX_DELTA ((size_t)1 << (SC_PTR_BITS - 2 - SC_LG_NGROUP))\n\n/* The largest size class supported. */\n#define SC_LARGE_MAXCLASS (SC_MAX_BASE + (SC_NGROUP - 1) * SC_MAX_DELTA)\n\n/* Maximum number of regions in one slab. */\n#ifndef CONFIG_LG_SLAB_MAXREGS\n#  define SC_LG_SLAB_MAXREGS (LG_PAGE - SC_LG_TINY_MIN)\n#else\n#  if CONFIG_LG_SLAB_MAXREGS < (LG_PAGE - SC_LG_TINY_MIN)\n#    error \"Unsupported SC_LG_SLAB_MAXREGS\"\n#  else\n#    define SC_LG_SLAB_MAXREGS CONFIG_LG_SLAB_MAXREGS\n#  endif\n#endif\n\n#define SC_SLAB_MAXREGS (1U << SC_LG_SLAB_MAXREGS)\n\ntypedef struct sc_s sc_t;\nstruct sc_s {\n\t/* Size class index, or -1 if not a valid size class. */\n\tint index;\n\t/* Lg group base size (no deltas added). */\n\tint lg_base;\n\t/* Lg delta to previous size class. */\n\tint lg_delta;\n\t/* Delta multiplier.  size == 1<<lg_base + ndelta<<lg_delta */\n\tint ndelta;\n\t/*\n\t * True if the size class is a multiple of the page size, false\n\t * otherwise.\n\t */\n\tbool psz;\n\t/*\n\t * True if the size class is a small, bin, size class. False otherwise.\n\t */\n\tbool bin;\n\t/* The slab page count if a small bin size class, 0 otherwise. */\n\tint pgs;\n\t/* Same as lg_delta if a lookup table size class, 0 otherwise. */\n\tint lg_delta_lookup;\n};\n\ntypedef struct sc_data_s sc_data_t;\nstruct sc_data_s {\n\t/* Number of tiny size classes. */\n\tunsigned ntiny;\n\t/* Number of bins supported by the lookup table. */\n\tint nlbins;\n\t/* Number of small size class bins. */\n\tint nbins;\n\t/* Number of size classes. */\n\tint nsizes;\n\t/* Number of bits required to store NSIZES. */\n\tint lg_ceil_nsizes;\n\t/* Number of size classes that are a multiple of (1U << LG_PAGE). 
*/\n\tunsigned npsizes;\n\t/* Lg of maximum tiny size class (or -1, if none). */\n\tint lg_tiny_maxclass;\n\t/* Maximum size class included in lookup table. */\n\tsize_t lookup_maxclass;\n\t/* Maximum small size class. */\n\tsize_t small_maxclass;\n\t/* Lg of minimum large size class. */\n\tint lg_large_minclass;\n\t/* The minimum large size class. */\n\tsize_t large_minclass;\n\t/* Maximum (large) size class. */\n\tsize_t large_maxclass;\n\t/* True if the sc_data_t has been initialized (for debugging only). */\n\tbool initialized;\n\n\tsc_t sc[SC_NSIZES];\n};\n\nsize_t reg_size_compute(int lg_base, int lg_delta, int ndelta);\nvoid sc_data_init(sc_data_t *data);\n/*\n * Updates slab sizes in [begin, end] to be pgs pages in length, if possible.\n * Otherwise, does its best to accommodate the request.\n */\nvoid sc_data_update_slab_size(sc_data_t *data, size_t begin, size_t end,\n    int pgs);\nvoid sc_boot(sc_data_t *data);\n\n#endif /* JEMALLOC_INTERNAL_SC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/sec.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SEC_H\n#define JEMALLOC_INTERNAL_SEC_H\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/pai.h\"\n\n/*\n * Small extent cache.\n *\n * This includes some utilities to cache small extents.  We have a per-pszind\n * bin with its own list of extents of that size.  We don't try to do any\n * coalescing of extents (since it would in general require cross-shard locks or\n * knowledge of the underlying PAI implementation).\n */\n\n/*\n * For now, this is just one field; eventually, we'll probably want to get more\n * fine-grained data out (like per-size class statistics).\n */\ntypedef struct sec_stats_s sec_stats_t;\nstruct sec_stats_s {\n\t/* Sum of bytes_cur across all shards. */\n\tsize_t bytes;\n};\n\nstatic inline void\nsec_stats_accum(sec_stats_t *dst, sec_stats_t *src) {\n\tdst->bytes += src->bytes;\n}\n\n/* A collections of free extents, all of the same size. */\ntypedef struct sec_bin_s sec_bin_t;\nstruct sec_bin_s {\n\t/*\n\t * When we fail to fulfill an allocation, we do a batch-alloc on the\n\t * underlying allocator to fill extra items, as well.  We drop the SEC\n\t * lock while doing so, to allow operations on other bins to succeed.\n\t * That introduces the possibility of other threads also trying to\n\t * allocate out of this bin, failing, and also going to the backing\n\t * allocator.  To avoid a thundering herd problem in which lots of\n\t * threads do batch allocs and overfill this bin as a result, we only\n\t * allow one batch allocation at a time for a bin.  This bool tracks\n\t * whether or not some thread is already batch allocating.\n\t *\n\t * Eventually, the right answer may be a smarter sharding policy for the\n\t * bins (e.g. 
a mutex per bin, which would also be more scalable\n\t * generally; the batch-allocating thread could hold it while\n\t * batch-allocating).\n\t */\n\tbool being_batch_filled;\n\n\t/*\n\t * Number of bytes in this particular bin (as opposed to the\n\t * sec_shard_t's bytes_cur).  This isn't user visible or reported in\n\t * stats; rather, it allows us to quickly determine the change in the\n\t * centralized counter when flushing.\n\t */\n\tsize_t bytes_cur;\n\tedata_list_active_t freelist;\n};\n\ntypedef struct sec_shard_s sec_shard_t;\nstruct sec_shard_s {\n\t/*\n\t * We don't keep per-bin mutexes, even though that would allow more\n\t * sharding; this allows global cache-eviction, which in turn allows for\n\t * better balancing across free lists.\n\t */\n\tmalloc_mutex_t mtx;\n\t/*\n\t * A SEC may need to be shut down (i.e. flushed of its contents and\n\t * prevented from further caching).  To avoid tricky synchronization\n\t * issues, we just track enabled-status in each shard, guarded by a\n\t * mutex.  In practice, this is only ever checked during brief races,\n\t * since the arena-level atomic boolean tracking HPA enabled-ness means\n\t * that we won't go down these pathways very often after custom extent\n\t * hooks are installed.\n\t */\n\tbool enabled;\n\tsec_bin_t *bins;\n\t/* Number of bytes in all bins in the shard. */\n\tsize_t bytes_cur;\n\t/* The next pszind to flush in the flush-some pathways. */\n\tpszind_t to_flush_next;\n};\n\ntypedef struct sec_s sec_t;\nstruct sec_s {\n\tpai_t pai;\n\tpai_t *fallback;\n\n\tsec_opts_t opts;\n\tsec_shard_t *shards;\n\tpszind_t npsizes;\n};\n\nbool sec_init(tsdn_t *tsdn, sec_t *sec, base_t *base, pai_t *fallback,\n    const sec_opts_t *opts);\nvoid sec_flush(tsdn_t *tsdn, sec_t *sec);\nvoid sec_disable(tsdn_t *tsdn, sec_t *sec);\n\n/*\n * Morally, these two stats methods probably ought to be a single one (and the\n * mutex_prof_data ought to live in the sec_stats_t).  
But splitting them apart\n * lets them fit easily into the pa_shard stats framework (which also has this\n * split), which simplifies the stats management.\n */\nvoid sec_stats_merge(tsdn_t *tsdn, sec_t *sec, sec_stats_t *stats);\nvoid sec_mutex_stats_read(tsdn_t *tsdn, sec_t *sec,\n    mutex_prof_data_t *mutex_prof_data);\n\n/*\n * We use the arena lock ordering; these are acquired in phase 2 of forking, but\n * should be acquired before the underlying allocator mutexes.\n */\nvoid sec_prefork2(tsdn_t *tsdn, sec_t *sec);\nvoid sec_postfork_parent(tsdn_t *tsdn, sec_t *sec);\nvoid sec_postfork_child(tsdn_t *tsdn, sec_t *sec);\n\n#endif /* JEMALLOC_INTERNAL_SEC_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/sec_opts.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SEC_OPTS_H\n#define JEMALLOC_INTERNAL_SEC_OPTS_H\n\n/*\n * The configuration settings used by an sec_t.  Morally, this is part of the\n * SEC interface, but we put it here for header-ordering reasons.\n */\n\ntypedef struct sec_opts_s sec_opts_t;\nstruct sec_opts_s {\n\t/*\n\t * We don't necessarily always use all the shards; requests are\n\t * distributed across shards [0, nshards - 1).\n\t */\n\tsize_t nshards;\n\t/*\n\t * We'll automatically refuse to cache any objects in this sec if\n\t * they're larger than max_alloc bytes, instead forwarding such objects\n\t * directly to the fallback.\n\t */\n\tsize_t max_alloc;\n\t/*\n\t * Exceeding this amount of cached extents in a shard causes us to start\n\t * flushing bins in that shard until we fall below bytes_after_flush.\n\t */\n\tsize_t max_bytes;\n\t/*\n\t * The number of bytes (in all bins) we flush down to when we exceed\n\t * bytes_cur.  We want this to be less than bytes_cur, because\n\t * otherwise we could get into situations where a shard undergoing\n\t * net-deallocation keeps bytes_cur very near to max_bytes, so that\n\t * most deallocations get immediately forwarded to the underlying PAI\n\t * implementation, defeating the point of the SEC.\n\t */\n\tsize_t bytes_after_flush;\n\t/*\n\t * When we can't satisfy an allocation out of the SEC because there are\n\t * no available ones cached, we allocate multiple of that size out of\n\t * the fallback allocator.  Eventually we might want to do something\n\t * cleverer, but for now we just grab a fixed number.\n\t */\n\tsize_t batch_fill_extra;\n};\n\n#define SEC_OPTS_DEFAULT {\t\t\t\t\t\t\\\n\t/* nshards */\t\t\t\t\t\t\t\\\n\t4,\t\t\t\t\t\t\t\t\\\n\t/* max_alloc */\t\t\t\t\t\t\t\\\n\t(32 * 1024) < PAGE ? 
PAGE : (32 * 1024),\t\t\t\\\n\t/* max_bytes */\t\t\t\t\t\t\t\\\n\t256 * 1024,\t\t\t\t\t\t\t\\\n\t/* bytes_after_flush */\t\t\t\t\t\t\\\n\t128 * 1024,\t\t\t\t\t\t\t\\\n\t/* batch_fill_extra */\t\t\t\t\t\t\\\n\t0\t\t\t\t\t\t\t\t\\\n}\n\n\n#endif /* JEMALLOC_INTERNAL_SEC_OPTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/seq.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SEQ_H\n#define JEMALLOC_INTERNAL_SEQ_H\n\n#include \"jemalloc/internal/atomic.h\"\n\n/*\n * A simple seqlock implementation.\n */\n\n#define seq_define(type, short_type)\t\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tatomic_zu_t seq;\t\t\t\t\t\t\\\n\tatomic_zu_t data[\t\t\t\t\t\t\\\n\t    (sizeof(type) + sizeof(size_t) - 1) / sizeof(size_t)];\t\\\n} seq_##short_type##_t;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n/*\t\t\t\t\t\t\t\t\t\\\n * No internal synchronization -- the caller must ensure that there's\t\\\n * only a single writer at a time.\t\t\t\t\t\\\n */\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nseq_store_##short_type(seq_##short_type##_t *dst, type *src) {\t\t\\\n\tsize_t buf[sizeof(dst->data) / sizeof(size_t)];\t\t\t\\\n\tbuf[sizeof(buf) / sizeof(size_t) - 1] = 0;\t\t\t\\\n\tmemcpy(buf, src, sizeof(type));\t\t\t\t\t\\\n\tsize_t old_seq = atomic_load_zu(&dst->seq, ATOMIC_RELAXED);\t\\\n\tatomic_store_zu(&dst->seq, old_seq + 1, ATOMIC_RELAXED);\t\\\n\tatomic_fence(ATOMIC_RELEASE);\t\t\t\t\t\\\n\tfor (size_t i = 0; i < sizeof(buf) / sizeof(size_t); i++) {\t\\\n\t\tatomic_store_zu(&dst->data[i], buf[i], ATOMIC_RELAXED);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tatomic_store_zu(&dst->seq, old_seq + 2, ATOMIC_RELEASE);\t\\\n}\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n/* Returns whether or not the read was consistent. 
*/\t\t\t\\\nstatic inline bool\t\t\t\t\t\t\t\\\nseq_try_load_##short_type(type *dst, seq_##short_type##_t *src) {\t\\\n\tsize_t buf[sizeof(src->data) / sizeof(size_t)];\t\t\t\\\n\tsize_t seq1 = atomic_load_zu(&src->seq, ATOMIC_ACQUIRE);\t\\\n\tif (seq1 % 2 != 0) {\t\t\t\t\t\t\\\n\t\treturn false;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tfor (size_t i = 0; i < sizeof(buf) / sizeof(size_t); i++) {\t\\\n\t\tbuf[i] = atomic_load_zu(&src->data[i], ATOMIC_RELAXED);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tatomic_fence(ATOMIC_ACQUIRE);\t\t\t\t\t\\\n\tsize_t seq2 = atomic_load_zu(&src->seq, ATOMIC_RELAXED);\t\\\n\tif (seq1 != seq2) {\t\t\t\t\t\t\\\n\t\treturn false;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tmemcpy(dst, buf, sizeof(type));\t\t\t\t\t\\\n\treturn true;\t\t\t\t\t\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_SEQ_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/slab_data.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SLAB_DATA_H\n#define JEMALLOC_INTERNAL_SLAB_DATA_H\n\n#include \"jemalloc/internal/bitmap.h\"\n\ntypedef struct slab_data_s slab_data_t;\nstruct slab_data_s {\n\t/* Per region allocated/deallocated bitmap. */\n\tbitmap_t bitmap[BITMAP_GROUPS_MAX];\n};\n\n#endif /* JEMALLOC_INTERNAL_SLAB_DATA_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/smoothstep.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SMOOTHSTEP_H\n#define JEMALLOC_INTERNAL_SMOOTHSTEP_H\n\n/*\n * This file was generated by the following command:\n *   sh smoothstep.sh smoother 200 24 3 15\n */\n/******************************************************************************/\n\n/*\n * This header defines a precomputed table based on the smoothstep family of\n * sigmoidal curves (https://en.wikipedia.org/wiki/Smoothstep) that grow from 0\n * to 1 in 0 <= x <= 1.  The table is stored as integer fixed point values so\n * that floating point math can be avoided.\n *\n *                      3     2\n *   smoothstep(x) = -2x  + 3x\n *\n *                       5      4      3\n *   smootherstep(x) = 6x  - 15x  + 10x\n *\n *                          7      6      5      4\n *   smootheststep(x) = -20x  + 70x  - 84x  + 35x\n */\n\n#define SMOOTHSTEP_VARIANT\t\"smoother\"\n#define SMOOTHSTEP_NSTEPS\t200\n#define SMOOTHSTEP_BFP\t\t24\n#define SMOOTHSTEP \\\n /* STEP(step, h,                            x,     y) */ \\\n    STEP(   1, UINT64_C(0x0000000000000014), 0.005, 0.000001240643750) \\\n    STEP(   2, UINT64_C(0x00000000000000a5), 0.010, 0.000009850600000) \\\n    STEP(   3, UINT64_C(0x0000000000000229), 0.015, 0.000032995181250) \\\n    STEP(   4, UINT64_C(0x0000000000000516), 0.020, 0.000077619200000) \\\n    STEP(   5, UINT64_C(0x00000000000009dc), 0.025, 0.000150449218750) \\\n    STEP(   6, UINT64_C(0x00000000000010e8), 0.030, 0.000257995800000) \\\n    STEP(   7, UINT64_C(0x0000000000001aa4), 0.035, 0.000406555756250) \\\n    STEP(   8, UINT64_C(0x0000000000002777), 0.040, 0.000602214400000) \\\n    STEP(   9, UINT64_C(0x00000000000037c2), 0.045, 0.000850847793750) \\\n    STEP(  10, UINT64_C(0x0000000000004be6), 0.050, 0.001158125000000) \\\n    STEP(  11, UINT64_C(0x000000000000643c), 0.055, 0.001529510331250) \\\n    STEP(  12, UINT64_C(0x000000000000811f), 0.060, 0.001970265600000) \\\n    STEP(  13, UINT64_C(0x000000000000a2e2), 0.065, 
0.002485452368750) \\\n    STEP(  14, UINT64_C(0x000000000000c9d8), 0.070, 0.003079934200000) \\\n    STEP(  15, UINT64_C(0x000000000000f64f), 0.075, 0.003758378906250) \\\n    STEP(  16, UINT64_C(0x0000000000012891), 0.080, 0.004525260800000) \\\n    STEP(  17, UINT64_C(0x00000000000160e7), 0.085, 0.005384862943750) \\\n    STEP(  18, UINT64_C(0x0000000000019f95), 0.090, 0.006341279400000) \\\n    STEP(  19, UINT64_C(0x000000000001e4dc), 0.095, 0.007398417481250) \\\n    STEP(  20, UINT64_C(0x00000000000230fc), 0.100, 0.008560000000000) \\\n    STEP(  21, UINT64_C(0x0000000000028430), 0.105, 0.009829567518750) \\\n    STEP(  22, UINT64_C(0x000000000002deb0), 0.110, 0.011210480600000) \\\n    STEP(  23, UINT64_C(0x00000000000340b1), 0.115, 0.012705922056250) \\\n    STEP(  24, UINT64_C(0x000000000003aa67), 0.120, 0.014318899200000) \\\n    STEP(  25, UINT64_C(0x0000000000041c00), 0.125, 0.016052246093750) \\\n    STEP(  26, UINT64_C(0x00000000000495a8), 0.130, 0.017908625800000) \\\n    STEP(  27, UINT64_C(0x000000000005178b), 0.135, 0.019890532631250) \\\n    STEP(  28, UINT64_C(0x000000000005a1cf), 0.140, 0.022000294400000) \\\n    STEP(  29, UINT64_C(0x0000000000063498), 0.145, 0.024240074668750) \\\n    STEP(  30, UINT64_C(0x000000000006d009), 0.150, 0.026611875000000) \\\n    STEP(  31, UINT64_C(0x000000000007743f), 0.155, 0.029117537206250) \\\n    STEP(  32, UINT64_C(0x0000000000082157), 0.160, 0.031758745600000) \\\n    STEP(  33, UINT64_C(0x000000000008d76b), 0.165, 0.034537029243750) \\\n    STEP(  34, UINT64_C(0x0000000000099691), 0.170, 0.037453764200000) \\\n    STEP(  35, UINT64_C(0x00000000000a5edf), 0.175, 0.040510175781250) \\\n    STEP(  36, UINT64_C(0x00000000000b3067), 0.180, 0.043707340800000) \\\n    STEP(  37, UINT64_C(0x00000000000c0b38), 0.185, 0.047046189818750) \\\n    STEP(  38, UINT64_C(0x00000000000cef5e), 0.190, 0.050527509400000) \\\n    STEP(  39, UINT64_C(0x00000000000ddce6), 0.195, 0.054151944356250) \\\n    STEP(  40, 
UINT64_C(0x00000000000ed3d8), 0.200, 0.057920000000000) \\\n    STEP(  41, UINT64_C(0x00000000000fd439), 0.205, 0.061832044393750) \\\n    STEP(  42, UINT64_C(0x000000000010de0e), 0.210, 0.065888310600000) \\\n    STEP(  43, UINT64_C(0x000000000011f158), 0.215, 0.070088898931250) \\\n    STEP(  44, UINT64_C(0x0000000000130e17), 0.220, 0.074433779200000) \\\n    STEP(  45, UINT64_C(0x0000000000143448), 0.225, 0.078922792968750) \\\n    STEP(  46, UINT64_C(0x00000000001563e7), 0.230, 0.083555655800000) \\\n    STEP(  47, UINT64_C(0x0000000000169cec), 0.235, 0.088331959506250) \\\n    STEP(  48, UINT64_C(0x000000000017df4f), 0.240, 0.093251174400000) \\\n    STEP(  49, UINT64_C(0x0000000000192b04), 0.245, 0.098312651543750) \\\n    STEP(  50, UINT64_C(0x00000000001a8000), 0.250, 0.103515625000000) \\\n    STEP(  51, UINT64_C(0x00000000001bde32), 0.255, 0.108859214081250) \\\n    STEP(  52, UINT64_C(0x00000000001d458b), 0.260, 0.114342425600000) \\\n    STEP(  53, UINT64_C(0x00000000001eb5f8), 0.265, 0.119964156118750) \\\n    STEP(  54, UINT64_C(0x0000000000202f65), 0.270, 0.125723194200000) \\\n    STEP(  55, UINT64_C(0x000000000021b1bb), 0.275, 0.131618222656250) \\\n    STEP(  56, UINT64_C(0x0000000000233ce3), 0.280, 0.137647820800000) \\\n    STEP(  57, UINT64_C(0x000000000024d0c3), 0.285, 0.143810466693750) \\\n    STEP(  58, UINT64_C(0x0000000000266d40), 0.290, 0.150104539400000) \\\n    STEP(  59, UINT64_C(0x000000000028123d), 0.295, 0.156528321231250) \\\n    STEP(  60, UINT64_C(0x000000000029bf9c), 0.300, 0.163080000000000) \\\n    STEP(  61, UINT64_C(0x00000000002b753d), 0.305, 0.169757671268750) \\\n    STEP(  62, UINT64_C(0x00000000002d32fe), 0.310, 0.176559340600000) \\\n    STEP(  63, UINT64_C(0x00000000002ef8bc), 0.315, 0.183482925806250) \\\n    STEP(  64, UINT64_C(0x000000000030c654), 0.320, 0.190526259200000) \\\n    STEP(  65, UINT64_C(0x0000000000329b9f), 0.325, 0.197687089843750) \\\n    STEP(  66, UINT64_C(0x0000000000347875), 0.330, 
0.204963085800000) \\\n    STEP(  67, UINT64_C(0x0000000000365cb0), 0.335, 0.212351836381250) \\\n    STEP(  68, UINT64_C(0x0000000000384825), 0.340, 0.219850854400000) \\\n    STEP(  69, UINT64_C(0x00000000003a3aa8), 0.345, 0.227457578418750) \\\n    STEP(  70, UINT64_C(0x00000000003c340f), 0.350, 0.235169375000000) \\\n    STEP(  71, UINT64_C(0x00000000003e342b), 0.355, 0.242983540956250) \\\n    STEP(  72, UINT64_C(0x0000000000403ace), 0.360, 0.250897305600000) \\\n    STEP(  73, UINT64_C(0x00000000004247c8), 0.365, 0.258907832993750) \\\n    STEP(  74, UINT64_C(0x0000000000445ae9), 0.370, 0.267012224200000) \\\n    STEP(  75, UINT64_C(0x0000000000467400), 0.375, 0.275207519531250) \\\n    STEP(  76, UINT64_C(0x00000000004892d8), 0.380, 0.283490700800000) \\\n    STEP(  77, UINT64_C(0x00000000004ab740), 0.385, 0.291858693568750) \\\n    STEP(  78, UINT64_C(0x00000000004ce102), 0.390, 0.300308369400000) \\\n    STEP(  79, UINT64_C(0x00000000004f0fe9), 0.395, 0.308836548106250) \\\n    STEP(  80, UINT64_C(0x00000000005143bf), 0.400, 0.317440000000000) \\\n    STEP(  81, UINT64_C(0x0000000000537c4d), 0.405, 0.326115448143750) \\\n    STEP(  82, UINT64_C(0x000000000055b95b), 0.410, 0.334859570600000) \\\n    STEP(  83, UINT64_C(0x000000000057fab1), 0.415, 0.343669002681250) \\\n    STEP(  84, UINT64_C(0x00000000005a4015), 0.420, 0.352540339200000) \\\n    STEP(  85, UINT64_C(0x00000000005c894e), 0.425, 0.361470136718750) \\\n    STEP(  86, UINT64_C(0x00000000005ed622), 0.430, 0.370454915800000) \\\n    STEP(  87, UINT64_C(0x0000000000612655), 0.435, 0.379491163256250) \\\n    STEP(  88, UINT64_C(0x00000000006379ac), 0.440, 0.388575334400000) \\\n    STEP(  89, UINT64_C(0x000000000065cfeb), 0.445, 0.397703855293750) \\\n    STEP(  90, UINT64_C(0x00000000006828d6), 0.450, 0.406873125000000) \\\n    STEP(  91, UINT64_C(0x00000000006a842f), 0.455, 0.416079517831250) \\\n    STEP(  92, UINT64_C(0x00000000006ce1bb), 0.460, 0.425319385600000) \\\n    STEP(  93, 
UINT64_C(0x00000000006f413a), 0.465, 0.434589059868750) \\\n    STEP(  94, UINT64_C(0x000000000071a270), 0.470, 0.443884854200000) \\\n    STEP(  95, UINT64_C(0x000000000074051d), 0.475, 0.453203066406250) \\\n    STEP(  96, UINT64_C(0x0000000000766905), 0.480, 0.462539980800000) \\\n    STEP(  97, UINT64_C(0x000000000078cde7), 0.485, 0.471891870443750) \\\n    STEP(  98, UINT64_C(0x00000000007b3387), 0.490, 0.481254999400000) \\\n    STEP(  99, UINT64_C(0x00000000007d99a4), 0.495, 0.490625624981250) \\\n    STEP( 100, UINT64_C(0x0000000000800000), 0.500, 0.500000000000000) \\\n    STEP( 101, UINT64_C(0x000000000082665b), 0.505, 0.509374375018750) \\\n    STEP( 102, UINT64_C(0x000000000084cc78), 0.510, 0.518745000600000) \\\n    STEP( 103, UINT64_C(0x0000000000873218), 0.515, 0.528108129556250) \\\n    STEP( 104, UINT64_C(0x00000000008996fa), 0.520, 0.537460019200000) \\\n    STEP( 105, UINT64_C(0x00000000008bfae2), 0.525, 0.546796933593750) \\\n    STEP( 106, UINT64_C(0x00000000008e5d8f), 0.530, 0.556115145800000) \\\n    STEP( 107, UINT64_C(0x000000000090bec5), 0.535, 0.565410940131250) \\\n    STEP( 108, UINT64_C(0x0000000000931e44), 0.540, 0.574680614400000) \\\n    STEP( 109, UINT64_C(0x0000000000957bd0), 0.545, 0.583920482168750) \\\n    STEP( 110, UINT64_C(0x000000000097d729), 0.550, 0.593126875000000) \\\n    STEP( 111, UINT64_C(0x00000000009a3014), 0.555, 0.602296144706250) \\\n    STEP( 112, UINT64_C(0x00000000009c8653), 0.560, 0.611424665600000) \\\n    STEP( 113, UINT64_C(0x00000000009ed9aa), 0.565, 0.620508836743750) \\\n    STEP( 114, UINT64_C(0x0000000000a129dd), 0.570, 0.629545084200000) \\\n    STEP( 115, UINT64_C(0x0000000000a376b1), 0.575, 0.638529863281250) \\\n    STEP( 116, UINT64_C(0x0000000000a5bfea), 0.580, 0.647459660800000) \\\n    STEP( 117, UINT64_C(0x0000000000a8054e), 0.585, 0.656330997318750) \\\n    STEP( 118, UINT64_C(0x0000000000aa46a4), 0.590, 0.665140429400000) \\\n    STEP( 119, UINT64_C(0x0000000000ac83b2), 0.595, 
0.673884551856250) \\\n    STEP( 120, UINT64_C(0x0000000000aebc40), 0.600, 0.682560000000000) \\\n    STEP( 121, UINT64_C(0x0000000000b0f016), 0.605, 0.691163451893750) \\\n    STEP( 122, UINT64_C(0x0000000000b31efd), 0.610, 0.699691630600000) \\\n    STEP( 123, UINT64_C(0x0000000000b548bf), 0.615, 0.708141306431250) \\\n    STEP( 124, UINT64_C(0x0000000000b76d27), 0.620, 0.716509299200000) \\\n    STEP( 125, UINT64_C(0x0000000000b98c00), 0.625, 0.724792480468750) \\\n    STEP( 126, UINT64_C(0x0000000000bba516), 0.630, 0.732987775800000) \\\n    STEP( 127, UINT64_C(0x0000000000bdb837), 0.635, 0.741092167006250) \\\n    STEP( 128, UINT64_C(0x0000000000bfc531), 0.640, 0.749102694400000) \\\n    STEP( 129, UINT64_C(0x0000000000c1cbd4), 0.645, 0.757016459043750) \\\n    STEP( 130, UINT64_C(0x0000000000c3cbf0), 0.650, 0.764830625000000) \\\n    STEP( 131, UINT64_C(0x0000000000c5c557), 0.655, 0.772542421581250) \\\n    STEP( 132, UINT64_C(0x0000000000c7b7da), 0.660, 0.780149145600000) \\\n    STEP( 133, UINT64_C(0x0000000000c9a34f), 0.665, 0.787648163618750) \\\n    STEP( 134, UINT64_C(0x0000000000cb878a), 0.670, 0.795036914200000) \\\n    STEP( 135, UINT64_C(0x0000000000cd6460), 0.675, 0.802312910156250) \\\n    STEP( 136, UINT64_C(0x0000000000cf39ab), 0.680, 0.809473740800000) \\\n    STEP( 137, UINT64_C(0x0000000000d10743), 0.685, 0.816517074193750) \\\n    STEP( 138, UINT64_C(0x0000000000d2cd01), 0.690, 0.823440659400000) \\\n    STEP( 139, UINT64_C(0x0000000000d48ac2), 0.695, 0.830242328731250) \\\n    STEP( 140, UINT64_C(0x0000000000d64063), 0.700, 0.836920000000000) \\\n    STEP( 141, UINT64_C(0x0000000000d7edc2), 0.705, 0.843471678768750) \\\n    STEP( 142, UINT64_C(0x0000000000d992bf), 0.710, 0.849895460600000) \\\n    STEP( 143, UINT64_C(0x0000000000db2f3c), 0.715, 0.856189533306250) \\\n    STEP( 144, UINT64_C(0x0000000000dcc31c), 0.720, 0.862352179200000) \\\n    STEP( 145, UINT64_C(0x0000000000de4e44), 0.725, 0.868381777343750) \\\n    STEP( 146, 
UINT64_C(0x0000000000dfd09a), 0.730, 0.874276805800000) \\\n    STEP( 147, UINT64_C(0x0000000000e14a07), 0.735, 0.880035843881250) \\\n    STEP( 148, UINT64_C(0x0000000000e2ba74), 0.740, 0.885657574400000) \\\n    STEP( 149, UINT64_C(0x0000000000e421cd), 0.745, 0.891140785918750) \\\n    STEP( 150, UINT64_C(0x0000000000e58000), 0.750, 0.896484375000000) \\\n    STEP( 151, UINT64_C(0x0000000000e6d4fb), 0.755, 0.901687348456250) \\\n    STEP( 152, UINT64_C(0x0000000000e820b0), 0.760, 0.906748825600000) \\\n    STEP( 153, UINT64_C(0x0000000000e96313), 0.765, 0.911668040493750) \\\n    STEP( 154, UINT64_C(0x0000000000ea9c18), 0.770, 0.916444344200000) \\\n    STEP( 155, UINT64_C(0x0000000000ebcbb7), 0.775, 0.921077207031250) \\\n    STEP( 156, UINT64_C(0x0000000000ecf1e8), 0.780, 0.925566220800000) \\\n    STEP( 157, UINT64_C(0x0000000000ee0ea7), 0.785, 0.929911101068750) \\\n    STEP( 158, UINT64_C(0x0000000000ef21f1), 0.790, 0.934111689400000) \\\n    STEP( 159, UINT64_C(0x0000000000f02bc6), 0.795, 0.938167955606250) \\\n    STEP( 160, UINT64_C(0x0000000000f12c27), 0.800, 0.942080000000000) \\\n    STEP( 161, UINT64_C(0x0000000000f22319), 0.805, 0.945848055643750) \\\n    STEP( 162, UINT64_C(0x0000000000f310a1), 0.810, 0.949472490600000) \\\n    STEP( 163, UINT64_C(0x0000000000f3f4c7), 0.815, 0.952953810181250) \\\n    STEP( 164, UINT64_C(0x0000000000f4cf98), 0.820, 0.956292659200000) \\\n    STEP( 165, UINT64_C(0x0000000000f5a120), 0.825, 0.959489824218750) \\\n    STEP( 166, UINT64_C(0x0000000000f6696e), 0.830, 0.962546235800000) \\\n    STEP( 167, UINT64_C(0x0000000000f72894), 0.835, 0.965462970756250) \\\n    STEP( 168, UINT64_C(0x0000000000f7dea8), 0.840, 0.968241254400000) \\\n    STEP( 169, UINT64_C(0x0000000000f88bc0), 0.845, 0.970882462793750) \\\n    STEP( 170, UINT64_C(0x0000000000f92ff6), 0.850, 0.973388125000000) \\\n    STEP( 171, UINT64_C(0x0000000000f9cb67), 0.855, 0.975759925331250) \\\n    STEP( 172, UINT64_C(0x0000000000fa5e30), 0.860, 
0.977999705600000) \\\n    STEP( 173, UINT64_C(0x0000000000fae874), 0.865, 0.980109467368750) \\\n    STEP( 174, UINT64_C(0x0000000000fb6a57), 0.870, 0.982091374200000) \\\n    STEP( 175, UINT64_C(0x0000000000fbe400), 0.875, 0.983947753906250) \\\n    STEP( 176, UINT64_C(0x0000000000fc5598), 0.880, 0.985681100800000) \\\n    STEP( 177, UINT64_C(0x0000000000fcbf4e), 0.885, 0.987294077943750) \\\n    STEP( 178, UINT64_C(0x0000000000fd214f), 0.890, 0.988789519400000) \\\n    STEP( 179, UINT64_C(0x0000000000fd7bcf), 0.895, 0.990170432481250) \\\n    STEP( 180, UINT64_C(0x0000000000fdcf03), 0.900, 0.991440000000000) \\\n    STEP( 181, UINT64_C(0x0000000000fe1b23), 0.905, 0.992601582518750) \\\n    STEP( 182, UINT64_C(0x0000000000fe606a), 0.910, 0.993658720600000) \\\n    STEP( 183, UINT64_C(0x0000000000fe9f18), 0.915, 0.994615137056250) \\\n    STEP( 184, UINT64_C(0x0000000000fed76e), 0.920, 0.995474739200000) \\\n    STEP( 185, UINT64_C(0x0000000000ff09b0), 0.925, 0.996241621093750) \\\n    STEP( 186, UINT64_C(0x0000000000ff3627), 0.930, 0.996920065800000) \\\n    STEP( 187, UINT64_C(0x0000000000ff5d1d), 0.935, 0.997514547631250) \\\n    STEP( 188, UINT64_C(0x0000000000ff7ee0), 0.940, 0.998029734400000) \\\n    STEP( 189, UINT64_C(0x0000000000ff9bc3), 0.945, 0.998470489668750) \\\n    STEP( 190, UINT64_C(0x0000000000ffb419), 0.950, 0.998841875000000) \\\n    STEP( 191, UINT64_C(0x0000000000ffc83d), 0.955, 0.999149152206250) \\\n    STEP( 192, UINT64_C(0x0000000000ffd888), 0.960, 0.999397785600000) \\\n    STEP( 193, UINT64_C(0x0000000000ffe55b), 0.965, 0.999593444243750) \\\n    STEP( 194, UINT64_C(0x0000000000ffef17), 0.970, 0.999742004200000) \\\n    STEP( 195, UINT64_C(0x0000000000fff623), 0.975, 0.999849550781250) \\\n    STEP( 196, UINT64_C(0x0000000000fffae9), 0.980, 0.999922380800000) \\\n    STEP( 197, UINT64_C(0x0000000000fffdd6), 0.985, 0.999967004818750) \\\n    STEP( 198, UINT64_C(0x0000000000ffff5a), 0.990, 0.999990149400000) \\\n    STEP( 199, 
UINT64_C(0x0000000000ffffeb), 0.995, 0.999998759356250) \\\n    STEP( 200, UINT64_C(0x0000000001000000), 1.000, 1.000000000000000) \\\n\n#endif /* JEMALLOC_INTERNAL_SMOOTHSTEP_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/smoothstep.sh",
    "content": "#!/bin/sh\n#\n# Generate a discrete lookup table for a sigmoid function in the smoothstep\n# family (https://en.wikipedia.org/wiki/Smoothstep), where the lookup table\n# entries correspond to x in [1/nsteps, 2/nsteps, ..., nsteps/nsteps].  Encode\n# the entries using a binary fixed point representation.\n#\n# Usage: smoothstep.sh <variant> <nsteps> <bfp> <xprec> <yprec>\n#\n#        <variant> is in {smooth, smoother, smoothest}.\n#        <nsteps> must be greater than zero.\n#        <bfp> must be in [0..62]; reasonable values are roughly [10..30].\n#        <xprec> is x decimal precision.\n#        <yprec> is y decimal precision.\n\n#set -x\n\ncmd=\"sh smoothstep.sh $*\"\nvariant=$1\nnsteps=$2\nbfp=$3\nxprec=$4\nyprec=$5\n\ncase \"${variant}\" in\n  smooth)\n    ;;\n  smoother)\n    ;;\n  smoothest)\n    ;;\n  *)\n    echo \"Unsupported variant\"\n    exit 1\n    ;;\nesac\n\nsmooth() {\n  step=$1\n  y=`echo ${yprec} k ${step} ${nsteps} / sx _2 lx 3 ^ '*' 3 lx 2 ^ '*' + p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g'`\n  h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g' | tr '.' ' ' | awk '{print $1}' `\n}\n\nsmoother() {\n  step=$1\n  y=`echo ${yprec} k ${step} ${nsteps} / sx 6 lx 5 ^ '*' _15 lx 4 ^ '*' + 10 lx 3 ^ '*' + p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g'`\n  h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g' | tr '.' ' ' | awk '{print $1}' `\n}\n\nsmoothest() {\n  step=$1\n  y=`echo ${yprec} k ${step} ${nsteps} / sx _20 lx 7 ^ '*' 70 lx 6 ^ '*' + _84 lx 5 ^ '*' + 35 lx 4 ^ '*' + p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g'`\n  h=`echo ${yprec} k 2 ${bfp} ^ ${y} '*' p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g' | tr '.' 
' ' | awk '{print $1}' `\n}\n\ncat <<EOF\n#ifndef JEMALLOC_INTERNAL_SMOOTHSTEP_H\n#define JEMALLOC_INTERNAL_SMOOTHSTEP_H\n\n/*\n * This file was generated by the following command:\n *   $cmd\n */\n/******************************************************************************/\n\n/*\n * This header defines a precomputed table based on the smoothstep family of\n * sigmoidal curves (https://en.wikipedia.org/wiki/Smoothstep) that grow from 0\n * to 1 in 0 <= x <= 1.  The table is stored as integer fixed point values so\n * that floating point math can be avoided.\n *\n *                      3     2\n *   smoothstep(x) = -2x  + 3x\n *\n *                       5      4      3\n *   smootherstep(x) = 6x  - 15x  + 10x\n *\n *                          7      6      5      4\n *   smootheststep(x) = -20x  + 70x  - 84x  + 35x\n */\n\n#define SMOOTHSTEP_VARIANT\t\"${variant}\"\n#define SMOOTHSTEP_NSTEPS\t${nsteps}\n#define SMOOTHSTEP_BFP\t\t${bfp}\n#define SMOOTHSTEP \\\\\n /* STEP(step, h,                            x,     y) */ \\\\\nEOF\n\ns=1\nwhile [ $s -le $nsteps ] ; do\n  $variant ${s}\n  x=`echo ${xprec} k ${s} ${nsteps} / p | dc | tr -d '\\\\\\\\\\n' | sed -e 's#^\\.#0.#g'`\n  printf '    STEP(%4d, UINT64_C(0x%016x), %s, %s) \\\\\\n' ${s} ${h} ${x} ${y}\n\n  s=$((s+1))\ndone\necho\n\ncat <<EOF\n#endif /* JEMALLOC_INTERNAL_SMOOTHSTEP_H */\nEOF\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/spin.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SPIN_H\n#define JEMALLOC_INTERNAL_SPIN_H\n\n#define SPIN_INITIALIZER {0U}\n\ntypedef struct {\n\tunsigned iteration;\n} spin_t;\n\nstatic inline void\nspin_cpu_spinwait() {\n#  if HAVE_CPU_SPINWAIT\n\tCPU_SPINWAIT;\n#  else\n\tvolatile int x = 0;\n\tx = x;\n#  endif\n}\n\nstatic inline void\nspin_adaptive(spin_t *spin) {\n\tvolatile uint32_t i;\n\n\tif (spin->iteration < 5) {\n\t\tfor (i = 0; i < (1U << spin->iteration); i++) {\n\t\t\tspin_cpu_spinwait();\n\t\t}\n\t\tspin->iteration++;\n\t} else {\n#ifdef _WIN32\n\t\tSwitchToThread();\n#else\n\t\tsched_yield();\n#endif\n\t}\n}\n\n#undef SPIN_INLINE\n\n#endif /* JEMALLOC_INTERNAL_SPIN_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/stats.h",
    "content": "#ifndef JEMALLOC_INTERNAL_STATS_H\n#define JEMALLOC_INTERNAL_STATS_H\n\n/*  OPTION(opt,\t\tvar_name,\tdefault,\tset_value_to) */\n#define STATS_PRINT_OPTIONS\t\t\t\t\t\t\\\n    OPTION('J',\t\tjson,\t\tfalse,\t\ttrue)\t\t\\\n    OPTION('g',\t\tgeneral,\ttrue,\t\tfalse)\t\t\\\n    OPTION('m',\t\tmerged,\t\tconfig_stats,\tfalse)\t\t\\\n    OPTION('d',\t\tdestroyed,\tconfig_stats,\tfalse)\t\t\\\n    OPTION('a',\t\tunmerged,\tconfig_stats,\tfalse)\t\t\\\n    OPTION('b',\t\tbins,\t\ttrue,\t\tfalse)\t\t\\\n    OPTION('l',\t\tlarge,\t\ttrue,\t\tfalse)\t\t\\\n    OPTION('x',\t\tmutex,\t\ttrue,\t\tfalse)\t\t\\\n    OPTION('e',\t\textents,\ttrue,\t\tfalse)\t\t\\\n    OPTION('h',\t\thpa,\t\tconfig_stats,\tfalse)\n\nenum {\n#define OPTION(o, v, d, s) stats_print_option_num_##v,\n    STATS_PRINT_OPTIONS\n#undef OPTION\n    stats_print_tot_num_options\n};\n\n/* Options for stats_print. */\nextern bool opt_stats_print;\nextern char opt_stats_print_opts[stats_print_tot_num_options+1];\n\n/* Utilities for stats_interval. */\nextern int64_t opt_stats_interval;\nextern char opt_stats_interval_opts[stats_print_tot_num_options+1];\n\n#define STATS_INTERVAL_DEFAULT -1\n/*\n * Batch-increment the counter to reduce synchronization overhead.  Each thread\n * merges after (interval >> LG_BATCH_SIZE) bytes of allocations; also limit the\n * BATCH_MAX for accuracy when the interval is huge (which is expected).\n */\n#define STATS_INTERVAL_ACCUM_LG_BATCH_SIZE 6\n#define STATS_INTERVAL_ACCUM_BATCH_MAX (4 << 20)\n\n/* Only accessed by thread event. */\nuint64_t stats_interval_new_event_wait(tsd_t *tsd);\nuint64_t stats_interval_postponed_event_wait(tsd_t *tsd);\nvoid stats_interval_event_handler(tsd_t *tsd, uint64_t elapsed);\n\n/* Implements je_malloc_stats_print. 
*/\nvoid stats_print(write_cb_t *write_cb, void *cbopaque, const char *opts);\n\nbool stats_boot(void);\nvoid stats_prefork(tsdn_t *tsdn);\nvoid stats_postfork_parent(tsdn_t *tsdn);\nvoid stats_postfork_child(tsdn_t *tsdn);\n\n#endif /* JEMALLOC_INTERNAL_STATS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/sz.h",
    "content": "#ifndef JEMALLOC_INTERNAL_SIZE_H\n#define JEMALLOC_INTERNAL_SIZE_H\n\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/pages.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/util.h\"\n\n/*\n * sz module: Size computations.\n *\n * Some abbreviations used here:\n *   p: Page\n *   ind: Index\n *   s, sz: Size\n *   u: Usable size\n *   a: Aligned\n *\n * These are not always used completely consistently, but should be enough to\n * interpret function names.  E.g. sz_psz2ind converts page size to page size\n * index; sz_sa2u converts a (size, alignment) allocation request to the usable\n * size that would result from such an allocation.\n */\n\n/* Page size index type. */\ntypedef unsigned pszind_t;\n\n/* Size class index type. */\ntypedef unsigned szind_t;\n\n/*\n * sz_pind2sz_tab encodes the same information as could be computed by\n * sz_pind2sz_compute().\n */\nextern size_t sz_pind2sz_tab[SC_NPSIZES + 1];\n/*\n * sz_index2size_tab encodes the same information as could be computed (at\n * unacceptable cost in some code paths) by sz_index2size_compute().\n */\nextern size_t sz_index2size_tab[SC_NSIZES];\n/*\n * sz_size2index_tab is a compact lookup table that rounds request sizes up to\n * size classes.  In order to reduce cache footprint, the table is compressed,\n * and all accesses are via sz_size2index().\n */\nextern uint8_t sz_size2index_tab[];\n\n/*\n * Padding for large allocations: PAGE when opt_cache_oblivious == true (to\n * enable cache index randomization); 0 otherwise.\n */\nextern size_t sz_large_pad;\n\nextern void sz_boot(const sc_data_t *sc_data, bool cache_oblivious);\n\nJEMALLOC_ALWAYS_INLINE pszind_t\nsz_psz2ind(size_t psz) {\n\tassert(psz > 0);\n\tif (unlikely(psz > SC_LARGE_MAXCLASS)) {\n\t\treturn SC_NPSIZES;\n\t}\n\t/* x is the lg of the first base >= psz. */\n\tpszind_t x = lg_ceil(psz);\n\t/*\n\t * sc.h introduces a lot of size classes. 
These size classes are divided\n\t * into different size class groups. There is a very special size class\n\t * group, each size class in or after it is an integer multiple of PAGE.\n\t * We call it first_ps_rg. It means first page size regular group. The\n\t * range of first_ps_rg is (base, base * 2], and base == PAGE *\n\t * SC_NGROUP. off_to_first_ps_rg begins from 1, instead of 0. e.g.\n\t * off_to_first_ps_rg is 1 when psz is (PAGE * SC_NGROUP + 1).\n\t */\n\tpszind_t off_to_first_ps_rg = (x < SC_LG_NGROUP + LG_PAGE) ?\n\t    0 : x - (SC_LG_NGROUP + LG_PAGE);\n\n\t/*\n\t * Same as sc_s::lg_delta.\n\t * Delta for off_to_first_ps_rg == 1 is PAGE,\n\t * for each increase in offset, it's multiplied by two.\n\t * Therefore, lg_delta = LG_PAGE + (off_to_first_ps_rg - 1).\n\t */\n\tpszind_t lg_delta = (off_to_first_ps_rg == 0) ?\n\t    LG_PAGE : LG_PAGE + (off_to_first_ps_rg - 1);\n\n\t/*\n\t * Let's write psz in binary, e.g. 0011 for 0x3, 0111 for 0x7.\n\t * The leftmost bits whose len is lg_base decide the base of psz.\n\t * The rightmost bits whose len is lg_delta decide (pgz % PAGE).\n\t * The middle bits whose len is SC_LG_NGROUP decide ndelta.\n\t * ndelta is offset to the first size class in the size class group,\n\t * starts from 1.\n\t * If you don't know lg_base, ndelta or lg_delta, see sc.h.\n\t * |xxxxxxxxxxxxxxxxxxxx|------------------------|yyyyyyyyyyyyyyyyyyyyy|\n\t * |<-- len: lg_base -->|<-- len: SC_LG_NGROUP-->|<-- len: lg_delta -->|\n\t *                      |<--      ndelta      -->|\n\t * rg_inner_off = ndelta - 1\n\t * Why use (psz - 1)?\n\t * To handle case: psz % (1 << lg_delta) == 0.\n\t */\n\tpszind_t rg_inner_off = (((psz - 1)) >> lg_delta) & (SC_NGROUP - 1);\n\n\tpszind_t base_ind = off_to_first_ps_rg << SC_LG_NGROUP;\n\tpszind_t ind = base_ind + rg_inner_off;\n\treturn ind;\n}\n\nstatic inline size_t\nsz_pind2sz_compute(pszind_t pind) {\n\tif (unlikely(pind == SC_NPSIZES)) {\n\t\treturn SC_LARGE_MAXCLASS + PAGE;\n\t}\n\tsize_t grp = pind 
>> SC_LG_NGROUP;\n\tsize_t mod = pind & ((ZU(1) << SC_LG_NGROUP) - 1);\n\n\tsize_t grp_size_mask = ~((!!grp)-1);\n\tsize_t grp_size = ((ZU(1) << (LG_PAGE + (SC_LG_NGROUP-1))) << grp)\n\t    & grp_size_mask;\n\n\tsize_t shift = (grp == 0) ? 1 : grp;\n\tsize_t lg_delta = shift + (LG_PAGE-1);\n\tsize_t mod_size = (mod+1) << lg_delta;\n\n\tsize_t sz = grp_size + mod_size;\n\treturn sz;\n}\n\nstatic inline size_t\nsz_pind2sz_lookup(pszind_t pind) {\n\tsize_t ret = (size_t)sz_pind2sz_tab[pind];\n\tassert(ret == sz_pind2sz_compute(pind));\n\treturn ret;\n}\n\nstatic inline size_t\nsz_pind2sz(pszind_t pind) {\n\tassert(pind < SC_NPSIZES + 1);\n\treturn sz_pind2sz_lookup(pind);\n}\n\nstatic inline size_t\nsz_psz2u(size_t psz) {\n\tif (unlikely(psz > SC_LARGE_MAXCLASS)) {\n\t\treturn SC_LARGE_MAXCLASS + PAGE;\n\t}\n\tsize_t x = lg_floor((psz<<1)-1);\n\tsize_t lg_delta = (x < SC_LG_NGROUP + LG_PAGE + 1) ?\n\t    LG_PAGE : x - SC_LG_NGROUP - 1;\n\tsize_t delta = ZU(1) << lg_delta;\n\tsize_t delta_mask = delta - 1;\n\tsize_t usize = (psz + delta_mask) & ~delta_mask;\n\treturn usize;\n}\n\nstatic inline szind_t\nsz_size2index_compute(size_t size) {\n\tif (unlikely(size > SC_LARGE_MAXCLASS)) {\n\t\treturn SC_NSIZES;\n\t}\n\n\tif (size == 0) {\n\t\treturn 0;\n\t}\n#if (SC_NTINY != 0)\n\tif (size <= (ZU(1) << SC_LG_TINY_MAXCLASS)) {\n\t\tszind_t lg_tmin = SC_LG_TINY_MAXCLASS - SC_NTINY + 1;\n\t\tszind_t lg_ceil = lg_floor(pow2_ceil_zu(size));\n\t\treturn (lg_ceil < lg_tmin ? 0 : lg_ceil - lg_tmin);\n\t}\n#endif\n\t{\n\t\tszind_t x = lg_floor((size<<1)-1);\n\t\tszind_t shift = (x < SC_LG_NGROUP + LG_QUANTUM) ? 0 :\n\t\t    x - (SC_LG_NGROUP + LG_QUANTUM);\n\t\tszind_t grp = shift << SC_LG_NGROUP;\n\n\t\tszind_t lg_delta = (x < SC_LG_NGROUP + LG_QUANTUM + 1)\n\t\t    ? 
LG_QUANTUM : x - SC_LG_NGROUP - 1;\n\n\t\tsize_t delta_inverse_mask = ZU(-1) << lg_delta;\n\t\tszind_t mod = ((((size-1) & delta_inverse_mask) >> lg_delta)) &\n\t\t    ((ZU(1) << SC_LG_NGROUP) - 1);\n\n\t\tszind_t index = SC_NTINY + grp + mod;\n\t\treturn index;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE szind_t\nsz_size2index_lookup_impl(size_t size) {\n\tassert(size <= SC_LOOKUP_MAXCLASS);\n\treturn sz_size2index_tab[(size + (ZU(1) << SC_LG_TINY_MIN) - 1)\n\t    >> SC_LG_TINY_MIN];\n}\n\nJEMALLOC_ALWAYS_INLINE szind_t\nsz_size2index_lookup(size_t size) {\n\tszind_t ret = sz_size2index_lookup_impl(size);\n\tassert(ret == sz_size2index_compute(size));\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE szind_t\nsz_size2index(size_t size) {\n\tif (likely(size <= SC_LOOKUP_MAXCLASS)) {\n\t\treturn sz_size2index_lookup(size);\n\t}\n\treturn sz_size2index_compute(size);\n}\n\nstatic inline size_t\nsz_index2size_compute(szind_t index) {\n#if (SC_NTINY > 0)\n\tif (index < SC_NTINY) {\n\t\treturn (ZU(1) << (SC_LG_TINY_MAXCLASS - SC_NTINY + 1 + index));\n\t}\n#endif\n\t{\n\t\tsize_t reduced_index = index - SC_NTINY;\n\t\tsize_t grp = reduced_index >> SC_LG_NGROUP;\n\t\tsize_t mod = reduced_index & ((ZU(1) << SC_LG_NGROUP) -\n\t\t    1);\n\n\t\tsize_t grp_size_mask = ~((!!grp)-1);\n\t\tsize_t grp_size = ((ZU(1) << (LG_QUANTUM +\n\t\t    (SC_LG_NGROUP-1))) << grp) & grp_size_mask;\n\n\t\tsize_t shift = (grp == 0) ? 
1 : grp;\n\t\tsize_t lg_delta = shift + (LG_QUANTUM-1);\n\t\tsize_t mod_size = (mod+1) << lg_delta;\n\n\t\tsize_t usize = grp_size + mod_size;\n\t\treturn usize;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nsz_index2size_lookup_impl(szind_t index) {\n\treturn sz_index2size_tab[index];\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nsz_index2size_lookup(szind_t index) {\n\tsize_t ret = sz_index2size_lookup_impl(index);\n\tassert(ret == sz_index2size_compute(index));\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nsz_index2size(szind_t index) {\n\tassert(index < SC_NSIZES);\n\treturn sz_index2size_lookup(index);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nsz_size2index_usize_fastpath(size_t size, szind_t *ind, size_t *usize) {\n\t*ind = sz_size2index_lookup_impl(size);\n\t*usize = sz_index2size_lookup_impl(*ind);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nsz_s2u_compute(size_t size) {\n\tif (unlikely(size > SC_LARGE_MAXCLASS)) {\n\t\treturn 0;\n\t}\n\n\tif (size == 0) {\n\t\tsize++;\n\t}\n#if (SC_NTINY > 0)\n\tif (size <= (ZU(1) << SC_LG_TINY_MAXCLASS)) {\n\t\tsize_t lg_tmin = SC_LG_TINY_MAXCLASS - SC_NTINY + 1;\n\t\tsize_t lg_ceil = lg_floor(pow2_ceil_zu(size));\n\t\treturn (lg_ceil < lg_tmin ? (ZU(1) << lg_tmin) :\n\t\t    (ZU(1) << lg_ceil));\n\t}\n#endif\n\t{\n\t\tsize_t x = lg_floor((size<<1)-1);\n\t\tsize_t lg_delta = (x < SC_LG_NGROUP + LG_QUANTUM + 1)\n\t\t    ?  
LG_QUANTUM : x - SC_LG_NGROUP - 1;\n\t\tsize_t delta = ZU(1) << lg_delta;\n\t\tsize_t delta_mask = delta - 1;\n\t\tsize_t usize = (size + delta_mask) & ~delta_mask;\n\t\treturn usize;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nsz_s2u_lookup(size_t size) {\n\tsize_t ret = sz_index2size_lookup(sz_size2index_lookup(size));\n\n\tassert(ret == sz_s2u_compute(size));\n\treturn ret;\n}\n\n/*\n * Compute usable size that would result from allocating an object with the\n * specified size.\n */\nJEMALLOC_ALWAYS_INLINE size_t\nsz_s2u(size_t size) {\n\tif (likely(size <= SC_LOOKUP_MAXCLASS)) {\n\t\treturn sz_s2u_lookup(size);\n\t}\n\treturn sz_s2u_compute(size);\n}\n\n/*\n * Compute usable size that would result from allocating an object with the\n * specified size and alignment.\n */\nJEMALLOC_ALWAYS_INLINE size_t\nsz_sa2u(size_t size, size_t alignment) {\n\tsize_t usize;\n\n\tassert(alignment != 0 && ((alignment - 1) & alignment) == 0);\n\n\t/* Try for a small size class. */\n\tif (size <= SC_SMALL_MAXCLASS && alignment <= PAGE) {\n\t\t/*\n\t\t * Round size up to the nearest multiple of alignment.\n\t\t *\n\t\t * This done, we can take advantage of the fact that for each\n\t\t * small size class, every object is aligned at the smallest\n\t\t * power of two that is non-zero in the base two representation\n\t\t * of the size.  For example:\n\t\t *\n\t\t *   Size |   Base 2 | Minimum alignment\n\t\t *   -----+----------+------------------\n\t\t *     96 |  1100000 |  32\n\t\t *    144 | 10100000 |  32\n\t\t *    192 | 11000000 |  64\n\t\t */\n\t\tusize = sz_s2u(ALIGNMENT_CEILING(size, alignment));\n\t\tif (usize < SC_LARGE_MINCLASS) {\n\t\t\treturn usize;\n\t\t}\n\t}\n\n\t/* Large size class.  Beware of overflow. */\n\n\tif (unlikely(alignment > SC_LARGE_MAXCLASS)) {\n\t\treturn 0;\n\t}\n\n\t/* Make sure result is a large size class. 
*/\n\tif (size <= SC_LARGE_MINCLASS) {\n\t\tusize = SC_LARGE_MINCLASS;\n\t} else {\n\t\tusize = sz_s2u(size);\n\t\tif (usize < size) {\n\t\t\t/* size_t overflow. */\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/*\n\t * Calculate the multi-page mapping that large_palloc() would need in\n\t * order to guarantee the alignment.\n\t */\n\tif (usize + sz_large_pad + PAGE_CEILING(alignment) - PAGE < usize) {\n\t\t/* size_t overflow. */\n\t\treturn 0;\n\t}\n\treturn usize;\n}\n\nsize_t sz_psz_quantize_floor(size_t size);\nsize_t sz_psz_quantize_ceil(size_t size);\n\n#endif /* JEMALLOC_INTERNAL_SIZE_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tcache_externs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TCACHE_EXTERNS_H\n#define JEMALLOC_INTERNAL_TCACHE_EXTERNS_H\n\nextern bool opt_tcache;\nextern size_t opt_tcache_max;\nextern ssize_t\topt_lg_tcache_nslots_mul;\nextern unsigned opt_tcache_nslots_small_min;\nextern unsigned opt_tcache_nslots_small_max;\nextern unsigned opt_tcache_nslots_large;\nextern ssize_t opt_lg_tcache_shift;\nextern size_t opt_tcache_gc_incr_bytes;\nextern size_t opt_tcache_gc_delay_bytes;\nextern unsigned opt_lg_tcache_flush_small_div;\nextern unsigned opt_lg_tcache_flush_large_div;\n\n/*\n * Number of tcache bins.  There are SC_NBINS small-object bins, plus 0 or more\n * large-object bins.\n */\nextern unsigned\tnhbins;\n\n/* Maximum cached size class. */\nextern size_t\ttcache_maxclass;\n\nextern cache_bin_info_t *tcache_bin_info;\n\n/*\n * Explicit tcaches, managed via the tcache.{create,flush,destroy} mallctls and\n * usable via the MALLOCX_TCACHE() flag.  The automatic per thread tcaches are\n * completely disjoint from this data structure.  tcaches starts off as a sparse\n * array, so it has no physical memory footprint until individual pages are\n * touched.  
This allows the entire array to be allocated the first time an\n * explicit tcache is created without a disproportionate impact on memory usage.\n */\nextern tcaches_t\t*tcaches;\n\nsize_t tcache_salloc(tsdn_t *tsdn, const void *ptr);\nvoid *tcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena, tcache_t *tcache,\n    cache_bin_t *tbin, szind_t binind, bool *tcache_success);\n\nvoid tcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, cache_bin_t *tbin,\n    szind_t binind, unsigned rem);\nvoid tcache_bin_flush_large(tsd_t *tsd, tcache_t *tcache, cache_bin_t *tbin,\n    szind_t binind, unsigned rem);\nvoid tcache_bin_flush_stashed(tsd_t *tsd, tcache_t *tcache, cache_bin_t *bin,\n    szind_t binind, bool is_small);\nvoid tcache_arena_reassociate(tsdn_t *tsdn, tcache_slow_t *tcache_slow,\n    tcache_t *tcache, arena_t *arena);\ntcache_t *tcache_create_explicit(tsd_t *tsd);\nvoid tcache_cleanup(tsd_t *tsd);\nvoid tcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena);\nbool tcaches_create(tsd_t *tsd, base_t *base, unsigned *r_ind);\nvoid tcaches_flush(tsd_t *tsd, unsigned ind);\nvoid tcaches_destroy(tsd_t *tsd, unsigned ind);\nbool tcache_boot(tsdn_t *tsdn, base_t *base);\nvoid tcache_arena_associate(tsdn_t *tsdn, tcache_slow_t *tcache_slow,\n    tcache_t *tcache, arena_t *arena);\nvoid tcache_prefork(tsdn_t *tsdn);\nvoid tcache_postfork_parent(tsdn_t *tsdn);\nvoid tcache_postfork_child(tsdn_t *tsdn);\nvoid tcache_flush(tsd_t *tsd);\nbool tsd_tcache_data_init(tsd_t *tsd);\nbool tsd_tcache_enabled_data_init(tsd_t *tsd);\n\nvoid tcache_assert_initialized(tcache_t *tcache);\n\n/* Only accessed by thread event. 
*/\nuint64_t tcache_gc_new_event_wait(tsd_t *tsd);\nuint64_t tcache_gc_postponed_event_wait(tsd_t *tsd);\nvoid tcache_gc_event_handler(tsd_t *tsd, uint64_t elapsed);\nuint64_t tcache_gc_dalloc_new_event_wait(tsd_t *tsd);\nuint64_t tcache_gc_dalloc_postponed_event_wait(tsd_t *tsd);\nvoid tcache_gc_dalloc_event_handler(tsd_t *tsd, uint64_t elapsed);\n\n#endif /* JEMALLOC_INTERNAL_TCACHE_EXTERNS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tcache_inlines.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TCACHE_INLINES_H\n#define JEMALLOC_INTERNAL_TCACHE_INLINES_H\n\n#include \"jemalloc/internal/bin.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/util.h\"\n\nstatic inline bool\ntcache_enabled_get(tsd_t *tsd) {\n\treturn tsd_tcache_enabled_get(tsd);\n}\n\nstatic inline void\ntcache_enabled_set(tsd_t *tsd, bool enabled) {\n\tbool was_enabled = tsd_tcache_enabled_get(tsd);\n\n\tif (!was_enabled && enabled) {\n\t\ttsd_tcache_data_init(tsd);\n\t} else if (was_enabled && !enabled) {\n\t\ttcache_cleanup(tsd);\n\t}\n\t/* Commit the state last.  Above calls check current state. */\n\ttsd_tcache_enabled_set(tsd, enabled);\n\ttsd_slow_update(tsd);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntcache_small_bin_disabled(szind_t ind, cache_bin_t *bin) {\n\tassert(ind < SC_NBINS);\n\tbool ret = (cache_bin_info_ncached_max(&tcache_bin_info[ind]) == 0);\n\tif (ret && bin != NULL) {\n\t\t/* small size class but cache bin disabled. */\n\t\tassert(ind >= nhbins);\n\t\tassert((uintptr_t)(*bin->stack_head) ==\n\t\t    cache_bin_preceding_junk);\n\t}\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\ntcache_alloc_small(tsd_t *tsd, arena_t *arena, tcache_t *tcache,\n    size_t size, szind_t binind, bool zero, bool slow_path) {\n\tvoid *ret;\n\tbool tcache_success;\n\n\tassert(binind < SC_NBINS);\n\tcache_bin_t *bin = &tcache->bins[binind];\n\tret = cache_bin_alloc(bin, &tcache_success);\n\tassert(tcache_success == (ret != NULL));\n\tif (unlikely(!tcache_success)) {\n\t\tbool tcache_hard_success;\n\t\tarena = arena_choose(tsd, arena);\n\t\tif (unlikely(arena == NULL)) {\n\t\t\treturn NULL;\n\t\t}\n\t\tif (unlikely(tcache_small_bin_disabled(binind, bin))) {\n\t\t\t/* stats and zero are handled directly by the arena. 
*/\n\t\t\treturn arena_malloc_hard(tsd_tsdn(tsd), arena, size,\n\t\t\t    binind, zero);\n\t\t}\n\t\ttcache_bin_flush_stashed(tsd, tcache, bin, binind,\n\t\t    /* is_small */ true);\n\n\t\tret = tcache_alloc_small_hard(tsd_tsdn(tsd), arena, tcache,\n\t\t    bin, binind, &tcache_hard_success);\n\t\tif (tcache_hard_success == false) {\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tassert(ret);\n\tif (unlikely(zero)) {\n\t\tsize_t usize = sz_index2size(binind);\n\t\tassert(tcache_salloc(tsd_tsdn(tsd), ret) == usize);\n\t\tmemset(ret, 0, usize);\n\t}\n\tif (config_stats) {\n\t\tbin->tstats.nrequests++;\n\t}\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\ntcache_alloc_large(tsd_t *tsd, arena_t *arena, tcache_t *tcache, size_t size,\n    szind_t binind, bool zero, bool slow_path) {\n\tvoid *ret;\n\tbool tcache_success;\n\n\tassert(binind >= SC_NBINS && binind < nhbins);\n\tcache_bin_t *bin = &tcache->bins[binind];\n\tret = cache_bin_alloc(bin, &tcache_success);\n\tassert(tcache_success == (ret != NULL));\n\tif (unlikely(!tcache_success)) {\n\t\t/*\n\t\t * Only allocate one large object at a time, because it's quite\n\t\t * expensive to create one and not use it.\n\t\t */\n\t\tarena = arena_choose(tsd, arena);\n\t\tif (unlikely(arena == NULL)) {\n\t\t\treturn NULL;\n\t\t}\n\t\ttcache_bin_flush_stashed(tsd, tcache, bin, binind,\n\t\t    /* is_small */ false);\n\n\t\tret = large_malloc(tsd_tsdn(tsd), arena, sz_s2u(size), zero);\n\t\tif (ret == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t} else {\n\t\tif (unlikely(zero)) {\n\t\t\tsize_t usize = sz_index2size(binind);\n\t\t\tassert(usize <= tcache_maxclass);\n\t\t\tmemset(ret, 0, usize);\n\t\t}\n\n\t\tif (config_stats) {\n\t\t\tbin->tstats.nrequests++;\n\t\t}\n\t}\n\n\treturn ret;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntcache_dalloc_small(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,\n    bool slow_path) {\n\tassert(tcache_salloc(tsd_tsdn(tsd), ptr) <= SC_SMALL_MAXCLASS);\n\n\tcache_bin_t *bin = &tcache->bins[binind];\n\t/*\n\t * 
Not marking the branch unlikely because this is past free_fastpath()\n\t * (which handles the most common cases), i.e. at this point it's often\n\t * uncommon cases.\n\t */\n\tif (cache_bin_nonfast_aligned(ptr)) {\n\t\t/* Junk unconditionally, even if bin is full. */\n\t\tsan_junk_ptr(ptr, sz_index2size(binind));\n\t\tif (cache_bin_stash(bin, ptr)) {\n\t\t\treturn;\n\t\t}\n\t\tassert(cache_bin_full(bin));\n\t\t/* Bin full; fall through into the flush branch. */\n\t}\n\n\tif (unlikely(!cache_bin_dalloc_easy(bin, ptr))) {\n\t\tif (unlikely(tcache_small_bin_disabled(binind, bin))) {\n\t\t\tarena_dalloc_small(tsd_tsdn(tsd), ptr);\n\t\t\treturn;\n\t\t}\n\t\tcache_bin_sz_t max = cache_bin_info_ncached_max(\n\t\t    &tcache_bin_info[binind]);\n\t\tunsigned remain = max >> opt_lg_tcache_flush_small_div;\n\t\ttcache_bin_flush_small(tsd, tcache, bin, binind, remain);\n\t\tbool ret = cache_bin_dalloc_easy(bin, ptr);\n\t\tassert(ret);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntcache_dalloc_large(tsd_t *tsd, tcache_t *tcache, void *ptr, szind_t binind,\n    bool slow_path) {\n\n\tassert(tcache_salloc(tsd_tsdn(tsd), ptr)\n\t    > SC_SMALL_MAXCLASS);\n\tassert(tcache_salloc(tsd_tsdn(tsd), ptr) <= tcache_maxclass);\n\n\tcache_bin_t *bin = &tcache->bins[binind];\n\tif (unlikely(!cache_bin_dalloc_easy(bin, ptr))) {\n\t\tunsigned remain = cache_bin_info_ncached_max(\n\t\t    &tcache_bin_info[binind]) >> opt_lg_tcache_flush_large_div;\n\t\ttcache_bin_flush_large(tsd, tcache, bin, binind, remain);\n\t\tbool ret = cache_bin_dalloc_easy(bin, ptr);\n\t\tassert(ret);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE tcache_t *\ntcaches_get(tsd_t *tsd, unsigned ind) {\n\ttcaches_t *elm = &tcaches[ind];\n\tif (unlikely(elm->tcache == NULL)) {\n\t\tmalloc_printf(\"<jemalloc>: invalid tcache id (%u).\\n\", ind);\n\t\tabort();\n\t} else if (unlikely(elm->tcache == TCACHES_ELM_NEED_REINIT)) {\n\t\telm->tcache = tcache_create_explicit(tsd);\n\t}\n\treturn elm->tcache;\n}\n\n#endif /* 
JEMALLOC_INTERNAL_TCACHE_INLINES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tcache_structs.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TCACHE_STRUCTS_H\n#define JEMALLOC_INTERNAL_TCACHE_STRUCTS_H\n\n#include \"jemalloc/internal/cache_bin.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/ticker.h\"\n#include \"jemalloc/internal/tsd_types.h\"\n\n/*\n * The tcache state is split into the slow and hot path data.  Each has a\n * pointer to the other, and the data always comes in pairs.  The layout of each\n * of them varies in practice; tcache_slow lives in the TSD for the automatic\n * tcache, and as part of a dynamic allocation for manual allocations.  Keeping\n * a pointer to tcache_slow lets us treat these cases uniformly, rather than\n * splitting up the tcache [de]allocation code into those paths called with the\n * TSD tcache and those called with a manual tcache.\n */\n\nstruct tcache_slow_s {\n\t/* Lets us track all the tcaches in an arena. */\n\tql_elm(tcache_slow_t) link;\n\n\t/*\n\t * The descriptor lets the arena find our cache bins without seeing the\n\t * tcache definition.  This enables arenas to aggregate stats across\n\t * tcaches without having a tcache dependency.\n\t */\n\tcache_bin_array_descriptor_t cache_bin_array_descriptor;\n\n\t/* The arena this tcache is associated with. */\n\tarena_t\t\t*arena;\n\t/* Next bin to GC. */\n\tszind_t\t\tnext_gc_bin;\n\t/* For small bins, fill (ncached_max >> lg_fill_div). */\n\tuint8_t\t\tlg_fill_div[SC_NBINS];\n\t/* For small bins, whether has been refilled since last GC. */\n\tbool\t\tbin_refilled[SC_NBINS];\n\t/*\n\t * For small bins, the number of items we can pretend to flush before\n\t * actually flushing.\n\t */\n\tuint8_t\t\tbin_flush_delay_items[SC_NBINS];\n\t/*\n\t * The start of the allocation containing the dynamic allocation for\n\t * either the cache bins alone, or the cache bin memory as well as this\n\t * tcache_slow_t and its associated tcache_t.\n\t */\n\tvoid\t\t*dyn_alloc;\n\n\t/* The associated bins. 
*/\n\ttcache_t\t*tcache;\n};\n\nstruct tcache_s {\n\ttcache_slow_t\t*tcache_slow;\n\tcache_bin_t\tbins[TCACHE_NBINS_MAX];\n};\n\n/* Linkage for list of available (previously used) explicit tcache IDs. */\nstruct tcaches_s {\n\tunion {\n\t\ttcache_t\t*tcache;\n\t\ttcaches_t\t*next;\n\t};\n};\n\n#endif /* JEMALLOC_INTERNAL_TCACHE_STRUCTS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tcache_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TCACHE_TYPES_H\n#define JEMALLOC_INTERNAL_TCACHE_TYPES_H\n\n#include \"jemalloc/internal/sc.h\"\n\ntypedef struct tcache_slow_s tcache_slow_t;\ntypedef struct tcache_s tcache_t;\ntypedef struct tcaches_s tcaches_t;\n\n/*\n * tcache pointers close to NULL are used to encode state information that is\n * used for two purposes: preventing thread caching on a per thread basis and\n * cleaning up during thread shutdown.\n */\n#define TCACHE_STATE_DISABLED\t\t((tcache_t *)(uintptr_t)1)\n#define TCACHE_STATE_REINCARNATED\t((tcache_t *)(uintptr_t)2)\n#define TCACHE_STATE_PURGATORY\t\t((tcache_t *)(uintptr_t)3)\n#define TCACHE_STATE_MAX\t\tTCACHE_STATE_PURGATORY\n\n/* Used in TSD static initializer only. Real init in tsd_tcache_data_init(). */\n#define TCACHE_ZERO_INITIALIZER {0}\n#define TCACHE_SLOW_ZERO_INITIALIZER {0}\n\n/* Used in TSD static initializer only. Will be initialized to opt_tcache. */\n#define TCACHE_ENABLED_ZERO_INITIALIZER false\n\n/* Used for explicit tcache only. Means flushed but not destroyed. */\n#define TCACHES_ELM_NEED_REINIT ((tcache_t *)(uintptr_t)1)\n\n#define TCACHE_LG_MAXCLASS_LIMIT 23 /* tcache_maxclass = 8M */\n#define TCACHE_MAXCLASS_LIMIT ((size_t)1 << TCACHE_LG_MAXCLASS_LIMIT)\n#define TCACHE_NBINS_MAX (SC_NBINS + SC_NGROUP *\t\t\t\\\n    (TCACHE_LG_MAXCLASS_LIMIT - SC_LG_LARGE_MINCLASS) + 1)\n\n#endif /* JEMALLOC_INTERNAL_TCACHE_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/test_hooks.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TEST_HOOKS_H\n#define JEMALLOC_INTERNAL_TEST_HOOKS_H\n\nextern JEMALLOC_EXPORT void (*test_hooks_arena_new_hook)();\nextern JEMALLOC_EXPORT void (*test_hooks_libc_hook)();\n\n#if defined(JEMALLOC_JET) || defined(JEMALLOC_UNIT_TEST)\n#  define JEMALLOC_TEST_HOOK(fn, hook) ((void)(hook != NULL && (hook(), 0)), fn)\n\n#  define open JEMALLOC_TEST_HOOK(open, test_hooks_libc_hook)\n#  define read JEMALLOC_TEST_HOOK(read, test_hooks_libc_hook)\n#  define write JEMALLOC_TEST_HOOK(write, test_hooks_libc_hook)\n#  define readlink JEMALLOC_TEST_HOOK(readlink, test_hooks_libc_hook)\n#  define close JEMALLOC_TEST_HOOK(close, test_hooks_libc_hook)\n#  define creat JEMALLOC_TEST_HOOK(creat, test_hooks_libc_hook)\n#  define secure_getenv JEMALLOC_TEST_HOOK(secure_getenv, test_hooks_libc_hook)\n/* Note that this is undef'd and re-define'd in src/prof.c. */\n#  define _Unwind_Backtrace JEMALLOC_TEST_HOOK(_Unwind_Backtrace, test_hooks_libc_hook)\n#else\n#  define JEMALLOC_TEST_HOOK(fn, hook) fn\n#endif\n\n\n#endif /* JEMALLOC_INTERNAL_TEST_HOOKS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/thread_event.h",
    "content": "#ifndef JEMALLOC_INTERNAL_THREAD_EVENT_H\n#define JEMALLOC_INTERNAL_THREAD_EVENT_H\n\n#include \"jemalloc/internal/tsd.h\"\n\n/* \"te\" is short for \"thread_event\" */\n\n/*\n * TE_MIN_START_WAIT should not exceed the minimal allocation usize.\n */\n#define TE_MIN_START_WAIT ((uint64_t)1U)\n#define TE_MAX_START_WAIT UINT64_MAX\n\n/*\n * Maximum threshold on thread_(de)allocated_next_event_fast, so that there is\n * no need to check overflow in malloc fast path. (The allocation size in malloc\n * fast path never exceeds SC_LOOKUP_MAXCLASS.)\n */\n#define TE_NEXT_EVENT_FAST_MAX (UINT64_MAX - SC_LOOKUP_MAXCLASS + 1U)\n\n/*\n * The max interval helps make sure that malloc stays on the fast path in the\n * common case, i.e. thread_allocated < thread_allocated_next_event_fast.  When\n * thread_allocated is within an event's distance to TE_NEXT_EVENT_FAST_MAX\n * above, thread_allocated_next_event_fast is wrapped around and we fall back to\n * the medium-fast path. The max interval makes sure that we're not staying on\n * the fallback case for too long, even if there's no active event or if all\n * active events have long wait times.\n */\n#define TE_MAX_INTERVAL ((uint64_t)(4U << 20))\n\n/*\n * Invalid elapsed time, for situations where elapsed time is not needed.  
See\n * comments in thread_event.c for more info.\n */\n#define TE_INVALID_ELAPSED UINT64_MAX\n\ntypedef struct te_ctx_s {\n\tbool is_alloc;\n\tuint64_t *current;\n\tuint64_t *last_event;\n\tuint64_t *next_event;\n\tuint64_t *next_event_fast;\n} te_ctx_t;\n\nvoid te_assert_invariants_debug(tsd_t *tsd);\nvoid te_event_trigger(tsd_t *tsd, te_ctx_t *ctx);\nvoid te_recompute_fast_threshold(tsd_t *tsd);\nvoid tsd_te_init(tsd_t *tsd);\n\n/*\n * List of all events, in the following format:\n *  E(event,\t\t(condition), is_alloc_event)\n */\n#define ITERATE_OVER_ALL_EVENTS\t\t\t\t\t\t\\\n    E(tcache_gc,\t\t(opt_tcache_gc_incr_bytes > 0), true)\t\\\n    E(prof_sample,\t\t(config_prof && opt_prof), true)  \t\\\n    E(stats_interval,\t\t(opt_stats_interval >= 0), true)   \t\\\n    E(tcache_gc_dalloc,\t\t(opt_tcache_gc_incr_bytes > 0), false)\t\\\n    E(peak_alloc,\t\tconfig_stats, true)\t\t\t\\\n    E(peak_dalloc,\t\tconfig_stats, false)\n\n#define E(event, condition_unused, is_alloc_event_unused)\t\t\\\n    C(event##_event_wait)\n\n/* List of all thread event counters. */\n#define ITERATE_OVER_ALL_COUNTERS\t\t\t\t\t\\\n    C(thread_allocated)\t\t\t\t\t\t\t\\\n    C(thread_allocated_last_event)\t\t\t\t\t\\\n    ITERATE_OVER_ALL_EVENTS\t\t\t\t\t\t\\\n    C(prof_sample_last_event)\t\t\t\t\t\t\\\n    C(stats_interval_last_event)\n\n/* Getters directly wrap TSD getters. 
*/\n#define C(counter)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE uint64_t\t\t\t\t\t\t\\\ncounter##_get(tsd_t *tsd) {\t\t\t\t\t\t\\\n\treturn tsd_##counter##_get(tsd);\t\t\t\t\\\n}\n\nITERATE_OVER_ALL_COUNTERS\n#undef C\n\n/*\n * Setters call the TSD pointer getters rather than the TSD setters, so that\n * the counters can be modified even when TSD state is reincarnated or\n * minimal_initialized: if an event is triggered in such cases, we will\n * temporarily delay the event and let it be immediately triggered at the next\n * allocation call.\n */\n#define C(counter)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE void\t\t\t\t\t\t\\\ncounter##_set(tsd_t *tsd, uint64_t v) {\t\t\t\t\t\\\n\t*tsd_##counter##p_get(tsd) = v;\t\t\t\t\t\\\n}\n\nITERATE_OVER_ALL_COUNTERS\n#undef C\n\n/*\n * For generating _event_wait getter / setter functions for each individual\n * event.\n */\n#undef E\n\n/*\n * The malloc and free fastpath getters -- use the unsafe getters since tsd may\n * be non-nominal, in which case the fast_threshold will be set to 0.  This\n * allows checking for events and tsd non-nominal in a single branch.\n *\n * Note that these can only be used on the fastpath.\n */\nJEMALLOC_ALWAYS_INLINE void\nte_malloc_fastpath_ctx(tsd_t *tsd, uint64_t *allocated, uint64_t *threshold) {\n\t*allocated = *tsd_thread_allocatedp_get_unsafe(tsd);\n\t*threshold = *tsd_thread_allocated_next_event_fastp_get_unsafe(tsd);\n\tassert(*threshold <= TE_NEXT_EVENT_FAST_MAX);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_free_fastpath_ctx(tsd_t *tsd, uint64_t *deallocated, uint64_t *threshold) {\n\t/* Unsafe getters since this may happen before tsd_init. 
*/\n\t*deallocated = *tsd_thread_deallocatedp_get_unsafe(tsd);\n\t*threshold = *tsd_thread_deallocated_next_event_fastp_get_unsafe(tsd);\n\tassert(*threshold <= TE_NEXT_EVENT_FAST_MAX);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nte_ctx_is_alloc(te_ctx_t *ctx) {\n\treturn ctx->is_alloc;\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nte_ctx_current_bytes_get(te_ctx_t *ctx) {\n\treturn *ctx->current;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_ctx_current_bytes_set(te_ctx_t *ctx, uint64_t v) {\n\t*ctx->current = v;\n}\n\nJEMALLOC_ALWAYS_INLINE uint64_t\nte_ctx_last_event_get(te_ctx_t *ctx) {\n\treturn *ctx->last_event;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_ctx_last_event_set(te_ctx_t *ctx, uint64_t v) {\n\t*ctx->last_event = v;\n}\n\n/* Below 3 for next_event_fast. */\nJEMALLOC_ALWAYS_INLINE uint64_t\nte_ctx_next_event_fast_get(te_ctx_t *ctx) {\n\tuint64_t v = *ctx->next_event_fast;\n\tassert(v <= TE_NEXT_EVENT_FAST_MAX);\n\treturn v;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_ctx_next_event_fast_set(te_ctx_t *ctx, uint64_t v) {\n\tassert(v <= TE_NEXT_EVENT_FAST_MAX);\n\t*ctx->next_event_fast = v;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_next_event_fast_set_non_nominal(tsd_t *tsd) {\n\t/*\n\t * Set the fast thresholds to zero when tsd is non-nominal.  Use the\n\t * unsafe getter as this may get called during tsd init and clean up.\n\t */\n\t*tsd_thread_allocated_next_event_fastp_get_unsafe(tsd) = 0;\n\t*tsd_thread_deallocated_next_event_fastp_get_unsafe(tsd) = 0;\n}\n\n/* For next_event.  Setter also updates the fast threshold. 
*/\nJEMALLOC_ALWAYS_INLINE uint64_t\nte_ctx_next_event_get(te_ctx_t *ctx) {\n\treturn *ctx->next_event;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_ctx_next_event_set(tsd_t *tsd, te_ctx_t *ctx, uint64_t v) {\n\t*ctx->next_event = v;\n\tte_recompute_fast_threshold(tsd);\n}\n\n/*\n * The function checks in debug mode whether the thread event counters are in\n * a consistent state, which forms the invariants before and after each round\n * of thread event handling that we can rely on and need to promise.\n * The invariants are only temporarily violated in the middle of\n * te_event_advance() if an event is triggered (the te_event_trigger() call at\n * the end will restore the invariants).\n */\nJEMALLOC_ALWAYS_INLINE void\nte_assert_invariants(tsd_t *tsd) {\n\tif (config_debug) {\n\t\tte_assert_invariants_debug(tsd);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_ctx_get(tsd_t *tsd, te_ctx_t *ctx, bool is_alloc) {\n\tctx->is_alloc = is_alloc;\n\tif (is_alloc) {\n\t\tctx->current = tsd_thread_allocatedp_get(tsd);\n\t\tctx->last_event = tsd_thread_allocated_last_eventp_get(tsd);\n\t\tctx->next_event = tsd_thread_allocated_next_eventp_get(tsd);\n\t\tctx->next_event_fast =\n\t\t    tsd_thread_allocated_next_event_fastp_get(tsd);\n\t} else {\n\t\tctx->current = tsd_thread_deallocatedp_get(tsd);\n\t\tctx->last_event = tsd_thread_deallocated_last_eventp_get(tsd);\n\t\tctx->next_event = tsd_thread_deallocated_next_eventp_get(tsd);\n\t\tctx->next_event_fast =\n\t\t    tsd_thread_deallocated_next_event_fastp_get(tsd);\n\t}\n}\n\n/*\n * The lookahead functionality facilitates events to be able to lookahead, i.e.\n * without touching the event counters, to determine whether an event would be\n * triggered.  
The event counters are not advanced until the end of the\n * allocation / deallocation calls, so the lookahead can be useful if some\n * preparation work for some event must be done early in the allocation /\n * deallocation calls.\n *\n * Currently only the profiling sampling event needs the lookahead\n * functionality, so we don't yet define general purpose lookahead functions.\n *\n * Surplus is a terminology referring to the amount of bytes beyond what's\n * needed for triggering an event, which can be a useful quantity to have in\n * general when lookahead is being called.\n */\n\nJEMALLOC_ALWAYS_INLINE bool\nte_prof_sample_event_lookahead_surplus(tsd_t *tsd, size_t usize,\n    size_t *surplus) {\n\tif (surplus != NULL) {\n\t\t/*\n\t\t * This is a dead store: the surplus will be overwritten before\n\t\t * any read.  The initialization suppresses compiler warnings.\n\t\t * Meanwhile, using SIZE_MAX to initialize is good for\n\t\t * debugging purpose, because a valid surplus value is strictly\n\t\t * less than usize, which is at most SIZE_MAX.\n\t\t */\n\t\t*surplus = SIZE_MAX;\n\t}\n\tif (unlikely(!tsd_nominal(tsd) || tsd_reentrancy_level_get(tsd) > 0)) {\n\t\treturn false;\n\t}\n\t/* The subtraction is intentionally susceptible to underflow. 
*/\n\tuint64_t accumbytes = tsd_thread_allocated_get(tsd) + usize -\n\t    tsd_thread_allocated_last_event_get(tsd);\n\tuint64_t sample_wait = tsd_prof_sample_event_wait_get(tsd);\n\tif (accumbytes < sample_wait) {\n\t\treturn false;\n\t}\n\tassert(accumbytes - sample_wait < (uint64_t)usize);\n\tif (surplus != NULL) {\n\t\t*surplus = (size_t)(accumbytes - sample_wait);\n\t}\n\treturn true;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nte_prof_sample_event_lookahead(tsd_t *tsd, size_t usize) {\n\treturn te_prof_sample_event_lookahead_surplus(tsd, usize, NULL);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nte_event_advance(tsd_t *tsd, size_t usize, bool is_alloc) {\n\tte_assert_invariants(tsd);\n\n\tte_ctx_t ctx;\n\tte_ctx_get(tsd, &ctx, is_alloc);\n\n\tuint64_t bytes_before = te_ctx_current_bytes_get(&ctx);\n\tte_ctx_current_bytes_set(&ctx, bytes_before + usize);\n\n\t/* The subtraction is intentionally susceptible to underflow. */\n\tif (likely(usize < te_ctx_next_event_get(&ctx) - bytes_before)) {\n\t\tte_assert_invariants(tsd);\n\t} else {\n\t\tte_event_trigger(tsd, &ctx);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nthread_dalloc_event(tsd_t *tsd, size_t usize) {\n\tte_event_advance(tsd, usize, false);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nthread_alloc_event(tsd_t *tsd, size_t usize) {\n\tte_event_advance(tsd, usize, true);\n}\n\n#endif /* JEMALLOC_INTERNAL_THREAD_EVENT_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/ticker.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TICKER_H\n#define JEMALLOC_INTERNAL_TICKER_H\n\n#include \"jemalloc/internal/prng.h\"\n#include \"jemalloc/internal/util.h\"\n\n/**\n * A ticker makes it easy to count-down events until some limit.  You\n * ticker_init the ticker to trigger every nticks events.  You then notify it\n * that an event has occurred with calls to ticker_tick (or that nticks events\n * have occurred with a call to ticker_ticks), which will return true (and reset\n * the counter) if the countdown hit zero.\n */\ntypedef struct ticker_s ticker_t;\nstruct ticker_s {\n\tint32_t tick;\n\tint32_t nticks;\n};\n\nstatic inline void\nticker_init(ticker_t *ticker, int32_t nticks) {\n\tticker->tick = nticks;\n\tticker->nticks = nticks;\n}\n\nstatic inline void\nticker_copy(ticker_t *ticker, const ticker_t *other) {\n\t*ticker = *other;\n}\n\nstatic inline int32_t\nticker_read(const ticker_t *ticker) {\n\treturn ticker->tick;\n}\n\n/*\n * Not intended to be a public API.  Unfortunately, on x86, neither gcc nor\n * clang seems smart enough to turn\n *   ticker->tick -= nticks;\n *   if (unlikely(ticker->tick < 0)) {\n *     fixup ticker\n *     return true;\n *   }\n *   return false;\n * into\n *   subq %nticks_reg, (%ticker_reg)\n *   js fixup ticker\n *\n * unless we force \"fixup ticker\" out of line.  In that case, gcc gets it right,\n * but clang now does worse than before.  So, on x86 with gcc, we force it out\n * of line, but otherwise let the inlining occur.  
Ordinarily this wouldn't be\n * worth the hassle, but this is on the fast path of both malloc and free (via\n * tcache_event).\n */\n#if defined(__GNUC__) && !defined(__clang__)\t\t\t\t\\\n    && (defined(__x86_64__) || defined(__i386__))\nJEMALLOC_NOINLINE\n#endif\nstatic bool\nticker_fixup(ticker_t *ticker) {\n\tticker->tick = ticker->nticks;\n\treturn true;\n}\n\nstatic inline bool\nticker_ticks(ticker_t *ticker, int32_t nticks) {\n\tticker->tick -= nticks;\n\tif (unlikely(ticker->tick < 0)) {\n\t\treturn ticker_fixup(ticker);\n\t}\n\treturn false;\n}\n\nstatic inline bool\nticker_tick(ticker_t *ticker) {\n\treturn ticker_ticks(ticker, 1);\n}\n\n/*\n * Try to tick.  If ticker would fire, return true, but rely on\n * slowpath to reset ticker.\n */\nstatic inline bool\nticker_trytick(ticker_t *ticker) {\n\t--ticker->tick;\n\tif (unlikely(ticker->tick < 0)) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\n/*\n * The ticker_geom_t is much like the ticker_t, except that instead of ticker\n * having a constant countdown, it has an approximate one; each tick has\n * approximately a 1/nticks chance of triggering the count.\n *\n * The motivation is in triggering arena decay.  With a naive strategy, each\n * thread would maintain a ticker per arena, and check if decay is necessary\n * each time that the arena's ticker fires.  This has two costs:\n * - Since under reasonable assumptions both threads and arenas can scale\n *   linearly with the number of CPUs, maintaining per-arena data in each thread\n *   scales quadratically with the number of CPUs.\n * - These tickers are often a cache miss down tcache flush pathways.\n *\n * By giving each tick a 1/nticks chance of firing, we still maintain the same\n * average number of ticks-until-firing per arena, with only a single ticker's\n * worth of metadata.\n */\n\n/* See ticker.c for an explanation of these constants. 
*/\n#define TICKER_GEOM_NBITS 6\n#define TICKER_GEOM_MUL 61\nextern const uint8_t ticker_geom_table[1 << TICKER_GEOM_NBITS];\n\n/* Not actually any different from ticker_t; just for type safety. */\ntypedef struct ticker_geom_s ticker_geom_t;\nstruct ticker_geom_s {\n\tint32_t tick;\n\tint32_t nticks;\n};\n\n/*\n * Just pick the average delay for the first counter.  We're more concerned with\n * the behavior over long periods of time rather than the exact timing of the\n * initial ticks.\n */\n#define TICKER_GEOM_INIT(nticks) {nticks, nticks}\n\nstatic inline void\nticker_geom_init(ticker_geom_t *ticker, int32_t nticks) {\n\t/*\n\t * Make sure there's no overflow possible.  This shouldn't really be a\n\t * problem for reasonable nticks choices, which are all static and\n\t * relatively small.\n\t */\n\tassert((uint64_t)nticks * (uint64_t)255 / (uint64_t)TICKER_GEOM_MUL\n\t    <= (uint64_t)INT32_MAX);\n\tticker->tick = nticks;\n\tticker->nticks = nticks;\n}\n\nstatic inline int32_t\nticker_geom_read(const ticker_geom_t *ticker) {\n\treturn ticker->tick;\n}\n\n/* Same deal as above. */\n#if defined(__GNUC__) && !defined(__clang__)\t\t\t\t\\\n    && (defined(__x86_64__) || defined(__i386__))\nJEMALLOC_NOINLINE\n#endif\nstatic bool\nticker_geom_fixup(ticker_geom_t *ticker, uint64_t *prng_state) {\n\tuint64_t idx = prng_lg_range_u64(prng_state, TICKER_GEOM_NBITS);\n\tticker->tick = (uint32_t)(\n\t    (uint64_t)ticker->nticks * (uint64_t)ticker_geom_table[idx]\n\t    / (uint64_t)TICKER_GEOM_MUL);\n\treturn true;\n}\n\nstatic inline bool\nticker_geom_ticks(ticker_geom_t *ticker, uint64_t *prng_state, int32_t nticks) {\n\tticker->tick -= nticks;\n\tif (unlikely(ticker->tick < 0)) {\n\t\treturn ticker_geom_fixup(ticker, prng_state);\n\t}\n\treturn false;\n}\n\nstatic inline bool\nticker_geom_tick(ticker_geom_t *ticker, uint64_t *prng_state) {\n\treturn ticker_geom_ticks(ticker, prng_state, 1);\n}\n\n#endif /* JEMALLOC_INTERNAL_TICKER_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TSD_H\n#define JEMALLOC_INTERNAL_TSD_H\n\n#include \"jemalloc/internal/activity_callback.h\"\n#include \"jemalloc/internal/arena_types.h\"\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/bin_types.h\"\n#include \"jemalloc/internal/jemalloc_internal_externs.h\"\n#include \"jemalloc/internal/peak.h\"\n#include \"jemalloc/internal/prof_types.h\"\n#include \"jemalloc/internal/ql.h\"\n#include \"jemalloc/internal/rtree_tsd.h\"\n#include \"jemalloc/internal/tcache_types.h\"\n#include \"jemalloc/internal/tcache_structs.h\"\n#include \"jemalloc/internal/util.h\"\n#include \"jemalloc/internal/witness.h\"\n\n/*\n * Thread-Specific-Data layout\n *\n * At least some thread-local data gets touched on the fast-path of almost all\n * malloc operations.  But much of it is only necessary down slow-paths, or\n * testing.  We want to colocate the fast-path data so that it can live on the\n * same cacheline if possible.  So we define three tiers of hotness:\n * TSD_DATA_FAST: Touched on the alloc/dalloc fast paths.\n * TSD_DATA_SLOW: Touched down slow paths.  \"Slow\" here is sort of general;\n *     there are \"semi-slow\" paths like \"not a sized deallocation, but can still\n *     live in the tcache\".  We'll want to keep these closer to the fast-path\n *     data.\n * TSD_DATA_SLOWER: Only touched in test or debug modes, or not touched at all.\n *\n * An additional concern is that the larger tcache bins won't be used (we have a\n * bin per size class, but by default only cache relatively small objects).  So\n * the earlier bins are in the TSD_DATA_FAST tier, but the later ones are in the\n * TSD_DATA_SLOWER tier.\n *\n * As a result of all this, we put the slow data first, then the fast data, then\n * the slower data, while keeping the tcache as the last element of the fast\n * data (so that the fast -> slower transition happens midway through the\n * tcache).  
While we don't yet play alignment tricks to guarantee it, this\n * increases our odds of getting some cache/page locality on fast paths.\n */\n\n#ifdef JEMALLOC_JET\ntypedef void (*test_callback_t)(int *);\n#  define MALLOC_TSD_TEST_DATA_INIT 0x72b65c10\n#  define MALLOC_TEST_TSD \\\n    O(test_data,\t\tint,\t\t\tint)\t\t\\\n    O(test_callback,\t\ttest_callback_t,\tint)\n#  define MALLOC_TEST_TSD_INITIALIZER , MALLOC_TSD_TEST_DATA_INIT, NULL\n#else\n#  define MALLOC_TEST_TSD\n#  define MALLOC_TEST_TSD_INITIALIZER\n#endif\n\ntypedef ql_elm(tsd_t) tsd_link_t;\n\n/*  O(name,\t\t\ttype,\t\t\tnullable type) */\n#define TSD_DATA_SLOW\t\t\t\t\t\t\t\\\n    O(tcache_enabled,\t\tbool,\t\t\tbool)\t\t\\\n    O(reentrancy_level,\t\tint8_t,\t\t\tint8_t)\t\t\\\n    O(thread_allocated_last_event,\tuint64_t,\tuint64_t)\t\\\n    O(thread_allocated_next_event,\tuint64_t,\tuint64_t)\t\\\n    O(thread_deallocated_last_event,\tuint64_t,\tuint64_t)\t\\\n    O(thread_deallocated_next_event,\tuint64_t,\tuint64_t)\t\\\n    O(tcache_gc_event_wait,\tuint64_t,\t\tuint64_t)\t\\\n    O(tcache_gc_dalloc_event_wait,\tuint64_t,\tuint64_t)\t\\\n    O(prof_sample_event_wait,\tuint64_t,\t\tuint64_t)\t\\\n    O(prof_sample_last_event,\tuint64_t,\t\tuint64_t)\t\\\n    O(stats_interval_event_wait,\tuint64_t,\tuint64_t)\t\\\n    O(stats_interval_last_event,\tuint64_t,\tuint64_t)\t\\\n    O(peak_alloc_event_wait,\tuint64_t,\t\tuint64_t)\t\\\n    O(peak_dalloc_event_wait,\tuint64_t,\tuint64_t)\t\t\\\n    O(prof_tdata,\t\tprof_tdata_t *,\t\tprof_tdata_t *)\t\\\n    O(prng_state,\t\tuint64_t,\t\tuint64_t)\t\\\n    O(san_extents_until_guard_small,\tuint64_t,\tuint64_t)\t\\\n    O(san_extents_until_guard_large,\tuint64_t,\tuint64_t)\t\\\n    O(iarena,\t\t\tarena_t *,\t\tarena_t *)\t\\\n    O(arena,\t\t\tarena_t *,\t\tarena_t *)\t\\\n    O(arena_decay_ticker,\tticker_geom_t,\t\tticker_geom_t)\t\\\n    O(sec_shard,\t\tuint8_t,\t\tuint8_t)\t\\\n    O(binshards,\t\ttsd_binshards_t,\ttsd_binshards_t)\\\n    
O(tsd_link,\t\t\ttsd_link_t,\t\ttsd_link_t)\t\\\n    O(in_hook,\t\t\tbool,\t\t\tbool)\t\t\\\n    O(peak,\t\t\tpeak_t,\t\t\tpeak_t)\t\t\\\n    O(activity_callback_thunk,\tactivity_callback_thunk_t,\t\t\\\n\tactivity_callback_thunk_t)\t\t\t\t\t\\\n    O(tcache_slow,\t\ttcache_slow_t,\t\ttcache_slow_t)\t\\\n    O(rtree_ctx,\t\trtree_ctx_t,\t\trtree_ctx_t)\n\n#define TSD_DATA_SLOW_INITIALIZER\t\t\t\t\t\\\n    /* tcache_enabled */\tTCACHE_ENABLED_ZERO_INITIALIZER,\t\\\n    /* reentrancy_level */\t0,\t\t\t\t\t\\\n    /* thread_allocated_last_event */\t0,\t\t\t\t\\\n    /* thread_allocated_next_event */\t0,\t\t\t\t\\\n    /* thread_deallocated_last_event */\t0,\t\t\t\t\\\n    /* thread_deallocated_next_event */\t0,\t\t\t\t\\\n    /* tcache_gc_event_wait */\t\t0,\t\t\t\t\\\n    /* tcache_gc_dalloc_event_wait */\t0,\t\t\t\t\\\n    /* prof_sample_event_wait */\t0,\t\t\t\t\\\n    /* prof_sample_last_event */\t0,\t\t\t\t\\\n    /* stats_interval_event_wait */\t0,\t\t\t\t\\\n    /* stats_interval_last_event */\t0,\t\t\t\t\\\n    /* peak_alloc_event_wait */\t\t0,\t\t\t\t\\\n    /* peak_dalloc_event_wait */\t0,\t\t\t\t\\\n    /* prof_tdata */\t\tNULL,\t\t\t\t\t\\\n    /* prng_state */\t\t0,\t\t\t\t\t\\\n    /* san_extents_until_guard_small */\t0,\t\t\t\t\\\n    /* san_extents_until_guard_large */\t0,\t\t\t\t\\\n    /* iarena */\t\tNULL,\t\t\t\t\t\\\n    /* arena */\t\t\tNULL,\t\t\t\t\t\\\n    /* arena_decay_ticker */\t\t\t\t\t\t\\\n\tTICKER_GEOM_INIT(ARENA_DECAY_NTICKS_PER_UPDATE),\t\t\\\n    /* sec_shard */\t\t(uint8_t)-1,\t\t\t\t\\\n    /* binshards */\t\tTSD_BINSHARDS_ZERO_INITIALIZER,\t\t\\\n    /* tsd_link */\t\t{NULL},\t\t\t\t\t\\\n    /* in_hook */\t\tfalse,\t\t\t\t\t\\\n    /* peak */\t\t\tPEAK_INITIALIZER,\t\t\t\\\n    /* activity_callback_thunk */\t\t\t\t\t\\\n\tACTIVITY_CALLBACK_THUNK_INITIALIZER,\t\t\t\t\\\n    /* tcache_slow */\t\tTCACHE_SLOW_ZERO_INITIALIZER,\t\t\\\n    /* rtree_ctx */\t\tRTREE_CTX_INITIALIZER,\n\n/*  O(name,\t\t\ttype,\t\t\tnullable type) 
*/\n#define TSD_DATA_FAST\t\t\t\t\t\t\t\\\n    O(thread_allocated,\t\tuint64_t,\t\tuint64_t)\t\\\n    O(thread_allocated_next_event_fast,\tuint64_t,\tuint64_t)\t\\\n    O(thread_deallocated,\tuint64_t,\t\tuint64_t)\t\\\n    O(thread_deallocated_next_event_fast, uint64_t,\tuint64_t)\t\\\n    O(tcache,\t\t\ttcache_t,\t\ttcache_t)\n\n#define TSD_DATA_FAST_INITIALIZER\t\t\t\t\t\\\n    /* thread_allocated */\t0,\t\t\t\t\t\\\n    /* thread_allocated_next_event_fast */ 0, \t\t\t\t\\\n    /* thread_deallocated */\t0,\t\t\t\t\t\\\n    /* thread_deallocated_next_event_fast */\t0,\t\t\t\\\n    /* tcache */\t\tTCACHE_ZERO_INITIALIZER,\n\n/*  O(name,\t\t\ttype,\t\t\tnullable type) */\n#define TSD_DATA_SLOWER\t\t\t\t\t\t\t\\\n    O(witness_tsd,              witness_tsd_t,\t\twitness_tsdn_t)\t\\\n    MALLOC_TEST_TSD\n\n#define TSD_DATA_SLOWER_INITIALIZER\t\t\t\t\t\\\n    /* witness */\t\tWITNESS_TSD_INITIALIZER\t\t\t\\\n    /* test data */\t\tMALLOC_TEST_TSD_INITIALIZER\n\n\n#define TSD_INITIALIZER {\t\t\t\t\t\t\\\n    \t\t\t\tTSD_DATA_SLOW_INITIALIZER\t\t\\\n    /* state */\t\t\tATOMIC_INIT(tsd_state_uninitialized),\t\\\n    \t\t\t\tTSD_DATA_FAST_INITIALIZER\t\t\\\n    \t\t\t\tTSD_DATA_SLOWER_INITIALIZER\t\t\\\n}\n\n#if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32)\nvoid _malloc_tsd_cleanup_register(bool (*f)(void));\n#endif\n\nvoid *malloc_tsd_malloc(size_t size);\nvoid malloc_tsd_dalloc(void *wrapper);\ntsd_t *malloc_tsd_boot0(void);\nvoid malloc_tsd_boot1(void);\nvoid tsd_cleanup(void *arg);\ntsd_t *tsd_fetch_slow(tsd_t *tsd, bool internal);\nvoid tsd_state_set(tsd_t *tsd, uint8_t new_state);\nvoid tsd_slow_update(tsd_t *tsd);\nvoid tsd_prefork(tsd_t *tsd);\nvoid tsd_postfork_parent(tsd_t *tsd);\nvoid tsd_postfork_child(tsd_t *tsd);\n\n/*\n * Call ..._inc when your module wants to take all threads down the slow paths,\n * and ..._dec when it no longer needs to.\n */\nvoid tsd_global_slow_inc(tsdn_t *tsdn);\nvoid tsd_global_slow_dec(tsdn_t *tsdn);\nbool 
tsd_global_slow();\n\nenum {\n\t/* Common case --> jnz. */\n\ttsd_state_nominal = 0,\n\t/* Initialized but on slow path. */\n\ttsd_state_nominal_slow = 1,\n\t/*\n\t * Some thread has changed global state in such a way that all nominal\n\t * threads need to recompute their fast / slow status the next time they\n\t * get a chance.\n\t *\n\t * Any thread can change another thread's status *to* recompute, but\n\t * threads are the only ones who can change their status *from*\n\t * recompute.\n\t */\n\ttsd_state_nominal_recompute = 2,\n\t/*\n\t * The above nominal states should be lower values.  We use\n\t * tsd_nominal_max to separate nominal states from threads in the\n\t * process of being born / dying.\n\t */\n\ttsd_state_nominal_max = 2,\n\n\t/*\n\t * A thread might free() during its death as its only allocator action;\n\t * in such scenarios, we need tsd, but set up in such a way that no\n\t * cleanup is necessary.\n\t */\n\ttsd_state_minimal_initialized = 3,\n\t/* States during which we know we're in thread death. */\n\ttsd_state_purgatory = 4,\n\ttsd_state_reincarnated = 5,\n\t/*\n\t * What it says on the tin; tsd that hasn't been initialized.  Note\n\t * that even when the tsd struct lives in TLS, we need to keep track\n\t * of stuff like whether or not our pthread destructors have been\n\t * scheduled, so this really truly is different than the nominal state.\n\t */\n\ttsd_state_uninitialized = 6\n};\n\n/*\n * Some TSD accesses can only be done in a nominal state.  
To enforce this, we\n * wrap TSD member access in a function that asserts on TSD state, and mangle\n * field names to prevent touching them accidentally.\n */\n#define TSD_MANGLE(n) cant_access_tsd_items_directly_use_a_getter_or_setter_##n\n\n#ifdef JEMALLOC_U8_ATOMICS\n#  define tsd_state_t atomic_u8_t\n#  define tsd_atomic_load atomic_load_u8\n#  define tsd_atomic_store atomic_store_u8\n#  define tsd_atomic_exchange atomic_exchange_u8\n#else\n#  define tsd_state_t atomic_u32_t\n#  define tsd_atomic_load atomic_load_u32\n#  define tsd_atomic_store atomic_store_u32\n#  define tsd_atomic_exchange atomic_exchange_u32\n#endif\n\n/* The actual tsd. */\nstruct tsd_s {\n\t/*\n\t * The contents should be treated as totally opaque outside the tsd\n\t * module.  Access any thread-local state through the getters and\n\t * setters below.\n\t */\n\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\n\tt TSD_MANGLE(n);\n\n\tTSD_DATA_SLOW\n\t/*\n\t * We manually limit the state to just a single byte.  Unless the 8-bit\n\t * atomics are unavailable (which is rare).\n\t */\n\ttsd_state_t state;\n\tTSD_DATA_FAST\n\tTSD_DATA_SLOWER\n#undef O\n};\n\nJEMALLOC_ALWAYS_INLINE uint8_t\ntsd_state_get(tsd_t *tsd) {\n\t/*\n\t * This should be atomic.  
Unfortunately, compilers right now can't tell\n\t * that this can be done as a memory comparison, and forces a load into\n\t * a register that hurts fast-path performance.\n\t */\n\t/* return atomic_load_u8(&tsd->state, ATOMIC_RELAXED); */\n\treturn *(uint8_t *)&tsd->state;\n}\n\n/*\n * Wrapper around tsd_t that makes it possible to avoid implicit conversion\n * between tsd_t and tsdn_t, where tsdn_t is \"nullable\" and has to be\n * explicitly converted to tsd_t, which is non-nullable.\n */\nstruct tsdn_s {\n\ttsd_t tsd;\n};\n#define TSDN_NULL ((tsdn_t *)0)\nJEMALLOC_ALWAYS_INLINE tsdn_t *\ntsd_tsdn(tsd_t *tsd) {\n\treturn (tsdn_t *)tsd;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsdn_null(const tsdn_t *tsdn) {\n\treturn tsdn == NULL;\n}\n\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsdn_tsd(tsdn_t *tsdn) {\n\tassert(!tsdn_null(tsdn));\n\n\treturn &tsdn->tsd;\n}\n\n/*\n * We put the platform-specific data declarations and inlines into their own\n * header files to avoid cluttering this file.  They define tsd_boot0,\n * tsd_boot1, tsd_boot, tsd_booted_get, tsd_get_allocates, tsd_get, and tsd_set.\n */\n#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP\n#include \"jemalloc/internal/tsd_malloc_thread_cleanup.h\"\n#elif (defined(JEMALLOC_TLS))\n#include \"jemalloc/internal/tsd_tls.h\"\n#elif (defined(_WIN32))\n#include \"jemalloc/internal/tsd_win.h\"\n#else\n#include \"jemalloc/internal/tsd_generic.h\"\n#endif\n\n/*\n * tsd_foop_get_unsafe(tsd) returns a pointer to the thread-local instance of\n * foo.  This omits some safety checks, and so can be used during tsd\n * initialization and cleanup.\n */\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE t *\t\t\t\t\t\t\\\ntsd_##n##p_get_unsafe(tsd_t *tsd) {\t\t\t\t\t\\\n\treturn &tsd->TSD_MANGLE(n);\t\t\t\t\t\\\n}\nTSD_DATA_SLOW\nTSD_DATA_FAST\nTSD_DATA_SLOWER\n#undef O\n\n/* tsd_foop_get(tsd) returns a pointer to the thread-local instance of foo. 
*/\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE t *\t\t\t\t\t\t\\\ntsd_##n##p_get(tsd_t *tsd) {\t\t\t\t\t\t\\\n\t/*\t\t\t\t\t\t\t\t\\\n\t * Because the state might change asynchronously if it's\t\\\n\t * nominal, we need to make sure that we only read it once.\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tuint8_t state = tsd_state_get(tsd);\t\t\t\t\\\n\tassert(state == tsd_state_nominal ||\t\t\t\t\\\n\t    state == tsd_state_nominal_slow ||\t\t\t\t\\\n\t    state == tsd_state_nominal_recompute ||\t\t\t\\\n\t    state == tsd_state_reincarnated ||\t\t\t\t\\\n\t    state == tsd_state_minimal_initialized);\t\t\t\\\n\treturn tsd_##n##p_get_unsafe(tsd);\t\t\t\t\\\n}\nTSD_DATA_SLOW\nTSD_DATA_FAST\nTSD_DATA_SLOWER\n#undef O\n\n/*\n * tsdn_foop_get(tsdn) returns either the thread-local instance of foo (if tsdn\n * isn't NULL), or NULL (if tsdn is NULL), cast to the nullable pointer type.\n */\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE nt *\t\t\t\t\t\t\\\ntsdn_##n##p_get(tsdn_t *tsdn) {\t\t\t\t\t\t\\\n\tif (tsdn_null(tsdn)) {\t\t\t\t\t\t\\\n\t\treturn NULL;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ttsd_t *tsd = tsdn_tsd(tsdn);\t\t\t\t\t\\\n\treturn (nt *)tsd_##n##p_get(tsd);\t\t\t\t\\\n}\nTSD_DATA_SLOW\nTSD_DATA_FAST\nTSD_DATA_SLOWER\n#undef O\n\n/* tsd_foo_get(tsd) returns the value of the thread-local instance of foo. */\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE t\t\t\t\t\t\t\\\ntsd_##n##_get(tsd_t *tsd) {\t\t\t\t\t\t\\\n\treturn *tsd_##n##p_get(tsd);\t\t\t\t\t\\\n}\nTSD_DATA_SLOW\nTSD_DATA_FAST\nTSD_DATA_SLOWER\n#undef O\n\n/* tsd_foo_set(tsd, val) updates the thread-local instance of foo to be val. 
*/\n#define O(n, t, nt)\t\t\t\t\t\t\t\\\nJEMALLOC_ALWAYS_INLINE void\t\t\t\t\t\t\\\ntsd_##n##_set(tsd_t *tsd, t val) {\t\t\t\t\t\\\n\tassert(tsd_state_get(tsd) != tsd_state_reincarnated &&\t\t\\\n\t    tsd_state_get(tsd) != tsd_state_minimal_initialized);\t\\\n\t*tsd_##n##p_get(tsd) = val;\t\t\t\t\t\\\n}\nTSD_DATA_SLOW\nTSD_DATA_FAST\nTSD_DATA_SLOWER\n#undef O\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_assert_fast(tsd_t *tsd) {\n\t/*\n\t * Note that our fastness assertion does *not* include global slowness\n\t * counters; it's not in general possible to ensure that they won't\n\t * change asynchronously from underneath us.\n\t */\n\tassert(!malloc_slow && tsd_tcache_enabled_get(tsd) &&\n\t    tsd_reentrancy_level_get(tsd) == 0);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_fast(tsd_t *tsd) {\n\tbool fast = (tsd_state_get(tsd) == tsd_state_nominal);\n\tif (fast) {\n\t\ttsd_assert_fast(tsd);\n\t}\n\n\treturn fast;\n}\n\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_fetch_impl(bool init, bool minimal) {\n\ttsd_t *tsd = tsd_get(init);\n\n\tif (!init && tsd_get_allocates() && tsd == NULL) {\n\t\treturn NULL;\n\t}\n\tassert(tsd != NULL);\n\n\tif (unlikely(tsd_state_get(tsd) != tsd_state_nominal)) {\n\t\treturn tsd_fetch_slow(tsd, minimal);\n\t}\n\tassert(tsd_fast(tsd));\n\ttsd_assert_fast(tsd);\n\n\treturn tsd;\n}\n\n/* Get a minimal TSD that requires no cleanup.  See comments in free(). */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_fetch_min(void) {\n\treturn tsd_fetch_impl(true, true);\n}\n\n/* For internal background threads use only. */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_internal_fetch(void) {\n\ttsd_t *tsd = tsd_fetch_min();\n\t/* Use reincarnated state to prevent full initialization. 
*/\n\ttsd_state_set(tsd, tsd_state_reincarnated);\n\n\treturn tsd;\n}\n\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_fetch(void) {\n\treturn tsd_fetch_impl(true, false);\n}\n\nstatic inline bool\ntsd_nominal(tsd_t *tsd) {\n\tbool nominal = tsd_state_get(tsd) <= tsd_state_nominal_max;\n\tassert(nominal || tsd_reentrancy_level_get(tsd) > 0);\n\n\treturn nominal;\n}\n\nJEMALLOC_ALWAYS_INLINE tsdn_t *\ntsdn_fetch(void) {\n\tif (!tsd_booted_get()) {\n\t\treturn NULL;\n\t}\n\n\treturn tsd_tsdn(tsd_fetch_impl(false, false));\n}\n\nJEMALLOC_ALWAYS_INLINE rtree_ctx_t *\ntsd_rtree_ctx(tsd_t *tsd) {\n\treturn tsd_rtree_ctxp_get(tsd);\n}\n\nJEMALLOC_ALWAYS_INLINE rtree_ctx_t *\ntsdn_rtree_ctx(tsdn_t *tsdn, rtree_ctx_t *fallback) {\n\t/*\n\t * If tsd cannot be accessed, initialize the fallback rtree_ctx and\n\t * return a pointer to it.\n\t */\n\tif (unlikely(tsdn_null(tsdn))) {\n\t\trtree_ctx_data_init(fallback);\n\t\treturn fallback;\n\t}\n\treturn tsd_rtree_ctx(tsdn_tsd(tsdn));\n}\n\nstatic inline bool\ntsd_state_nocleanup(tsd_t *tsd) {\n\treturn tsd_state_get(tsd) == tsd_state_reincarnated ||\n\t    tsd_state_get(tsd) == tsd_state_minimal_initialized;\n}\n\n/*\n * These \"raw\" tsd reentrancy functions don't have any debug checking to make\n * sure that we're not touching arena 0.  Better is to call pre_reentrancy and\n * post_reentrancy if this is possible.\n */\nstatic inline void\ntsd_pre_reentrancy_raw(tsd_t *tsd) {\n\tbool fast = tsd_fast(tsd);\n\tassert(tsd_reentrancy_level_get(tsd) < INT8_MAX);\n\t++*tsd_reentrancy_levelp_get(tsd);\n\tif (fast) {\n\t\t/* Prepare slow path for reentrancy. */\n\t\ttsd_slow_update(tsd);\n\t\tassert(tsd_state_get(tsd) == tsd_state_nominal_slow);\n\t}\n}\n\nstatic inline void\ntsd_post_reentrancy_raw(tsd_t *tsd) {\n\tint8_t *reentrancy_level = tsd_reentrancy_levelp_get(tsd);\n\tassert(*reentrancy_level > 0);\n\tif (--*reentrancy_level == 0) {\n\t\ttsd_slow_update(tsd);\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_TSD_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd_generic.h",
    "content": "#ifdef JEMALLOC_INTERNAL_TSD_GENERIC_H\n#error This file should be included only once, by tsd.h.\n#endif\n#define JEMALLOC_INTERNAL_TSD_GENERIC_H\n\ntypedef struct tsd_init_block_s tsd_init_block_t;\nstruct tsd_init_block_s {\n\tql_elm(tsd_init_block_t) link;\n\tpthread_t thread;\n\tvoid *data;\n};\n\n/* Defined in tsd.c, to allow the mutex headers to have tsd dependencies. */\ntypedef struct tsd_init_head_s tsd_init_head_t;\n\ntypedef struct {\n\tbool initialized;\n\ttsd_t val;\n} tsd_wrapper_t;\n\nvoid *tsd_init_check_recursion(tsd_init_head_t *head,\n    tsd_init_block_t *block);\nvoid tsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block);\n\nextern pthread_key_t tsd_tsd;\nextern tsd_init_head_t tsd_init_head;\nextern tsd_wrapper_t tsd_boot_wrapper;\nextern bool tsd_booted;\n\n/* Initialization/cleanup. */\nJEMALLOC_ALWAYS_INLINE void\ntsd_cleanup_wrapper(void *arg) {\n\ttsd_wrapper_t *wrapper = (tsd_wrapper_t *)arg;\n\n\tif (wrapper->initialized) {\n\t\twrapper->initialized = false;\n\t\ttsd_cleanup(&wrapper->val);\n\t\tif (wrapper->initialized) {\n\t\t\t/* Trigger another cleanup round. 
*/\n\t\t\tif (pthread_setspecific(tsd_tsd, (void *)wrapper) != 0)\n\t\t\t{\n\t\t\t\tmalloc_write(\"<jemalloc>: Error setting TSD\\n\");\n\t\t\t\tif (opt_abort) {\n\t\t\t\t\tabort();\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t}\n\tmalloc_tsd_dalloc(wrapper);\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_wrapper_set(tsd_wrapper_t *wrapper) {\n\tif (unlikely(!tsd_booted)) {\n\t\treturn;\n\t}\n\tif (pthread_setspecific(tsd_tsd, (void *)wrapper) != 0) {\n\t\tmalloc_write(\"<jemalloc>: Error setting TSD\\n\");\n\t\tabort();\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE tsd_wrapper_t *\ntsd_wrapper_get(bool init) {\n\ttsd_wrapper_t *wrapper;\n\n\tif (unlikely(!tsd_booted)) {\n\t\treturn &tsd_boot_wrapper;\n\t}\n\n\twrapper = (tsd_wrapper_t *)pthread_getspecific(tsd_tsd);\n\n\tif (init && unlikely(wrapper == NULL)) {\n\t\ttsd_init_block_t block;\n\t\twrapper = (tsd_wrapper_t *)\n\t\t    tsd_init_check_recursion(&tsd_init_head, &block);\n\t\tif (wrapper) {\n\t\t\treturn wrapper;\n\t\t}\n\t\twrapper = (tsd_wrapper_t *)\n\t\t    malloc_tsd_malloc(sizeof(tsd_wrapper_t));\n\t\tblock.data = (void *)wrapper;\n\t\tif (wrapper == NULL) {\n\t\t\tmalloc_write(\"<jemalloc>: Error allocating TSD\\n\");\n\t\t\tabort();\n\t\t} else {\n\t\t\twrapper->initialized = false;\n      JEMALLOC_DIAGNOSTIC_PUSH\n      JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n\t\t\ttsd_t initializer = TSD_INITIALIZER;\n      JEMALLOC_DIAGNOSTIC_POP\n\t\t\twrapper->val = initializer;\n\t\t}\n\t\ttsd_wrapper_set(wrapper);\n\t\ttsd_init_finish(&tsd_init_head, &block);\n\t}\n\treturn wrapper;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot0(void) {\n\ttsd_wrapper_t *wrapper;\n\ttsd_init_block_t block;\n\n\twrapper = (tsd_wrapper_t *)\n\t    tsd_init_check_recursion(&tsd_init_head, &block);\n\tif (wrapper) {\n\t\treturn false;\n\t}\n\tblock.data = &tsd_boot_wrapper;\n\tif (pthread_key_create(&tsd_tsd, tsd_cleanup_wrapper) != 0) {\n\t\treturn true;\n\t}\n\ttsd_booted = 
true;\n\ttsd_wrapper_set(&tsd_boot_wrapper);\n\ttsd_init_finish(&tsd_init_head, &block);\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_boot1(void) {\n\ttsd_wrapper_t *wrapper;\n\twrapper = (tsd_wrapper_t *)malloc_tsd_malloc(sizeof(tsd_wrapper_t));\n\tif (wrapper == NULL) {\n\t\tmalloc_write(\"<jemalloc>: Error allocating TSD\\n\");\n\t\tabort();\n\t}\n\ttsd_boot_wrapper.initialized = false;\n\ttsd_cleanup(&tsd_boot_wrapper.val);\n\twrapper->initialized = false;\n  JEMALLOC_DIAGNOSTIC_PUSH\n  JEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n\ttsd_t initializer = TSD_INITIALIZER;\n  JEMALLOC_DIAGNOSTIC_POP\n\twrapper->val = initializer;\n\ttsd_wrapper_set(wrapper);\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot(void) {\n\tif (tsd_boot0()) {\n\t\treturn true;\n\t}\n\ttsd_boot1();\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_booted_get(void) {\n\treturn tsd_booted;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_get_allocates(void) {\n\treturn true;\n}\n\n/* Get/set. */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_get(bool init) {\n\ttsd_wrapper_t *wrapper;\n\n\tassert(tsd_booted);\n\twrapper = tsd_wrapper_get(init);\n\tif (tsd_get_allocates() && !init && wrapper == NULL) {\n\t\treturn NULL;\n\t}\n\treturn &wrapper->val;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_set(tsd_t *val) {\n\ttsd_wrapper_t *wrapper;\n\n\tassert(tsd_booted);\n\twrapper = tsd_wrapper_get(true);\n\tif (likely(&wrapper->val != val)) {\n\t\twrapper->val = *(val);\n\t}\n\twrapper->initialized = true;\n}\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd_malloc_thread_cleanup.h",
    "content": "#ifdef JEMALLOC_INTERNAL_TSD_MALLOC_THREAD_CLEANUP_H\n#error This file should be included only once, by tsd.h.\n#endif\n#define JEMALLOC_INTERNAL_TSD_MALLOC_THREAD_CLEANUP_H\n\n#define JEMALLOC_TSD_TYPE_ATTR(type) __thread type JEMALLOC_TLS_MODEL\n\nextern JEMALLOC_TSD_TYPE_ATTR(tsd_t) tsd_tls;\nextern JEMALLOC_TSD_TYPE_ATTR(bool) tsd_initialized;\nextern bool tsd_booted;\n\n/* Initialization/cleanup. */\nJEMALLOC_ALWAYS_INLINE bool\ntsd_cleanup_wrapper(void) {\n\tif (tsd_initialized) {\n\t\ttsd_initialized = false;\n\t\ttsd_cleanup(&tsd_tls);\n\t}\n\treturn tsd_initialized;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot0(void) {\n\t_malloc_tsd_cleanup_register(&tsd_cleanup_wrapper);\n\ttsd_booted = true;\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_boot1(void) {\n\t/* Do nothing. */\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot(void) {\n\treturn tsd_boot0();\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_booted_get(void) {\n\treturn tsd_booted;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_get_allocates(void) {\n\treturn false;\n}\n\n/* Get/set. */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_get(bool init) {\n\treturn &tsd_tls;\n}\nJEMALLOC_ALWAYS_INLINE void\ntsd_set(tsd_t *val) {\n\tassert(tsd_booted);\n\tif (likely(&tsd_tls != val)) {\n\t\ttsd_tls = (*val);\n\t}\n\ttsd_initialized = true;\n}\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd_tls.h",
    "content": "#ifdef JEMALLOC_INTERNAL_TSD_TLS_H\n#error This file should be included only once, by tsd.h.\n#endif\n#define JEMALLOC_INTERNAL_TSD_TLS_H\n\n#define JEMALLOC_TSD_TYPE_ATTR(type) __thread type JEMALLOC_TLS_MODEL\n\nextern JEMALLOC_TSD_TYPE_ATTR(tsd_t) tsd_tls;\nextern pthread_key_t tsd_tsd;\nextern bool tsd_booted;\n\n/* Initialization/cleanup. */\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot0(void) {\n\tif (pthread_key_create(&tsd_tsd, &tsd_cleanup) != 0) {\n\t\treturn true;\n\t}\n\ttsd_booted = true;\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_boot1(void) {\n\t/* Do nothing. */\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot(void) {\n\treturn tsd_boot0();\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_booted_get(void) {\n\treturn tsd_booted;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_get_allocates(void) {\n\treturn false;\n}\n\n/* Get/set. */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_get(bool init) {\n\treturn &tsd_tls;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_set(tsd_t *val) {\n\tassert(tsd_booted);\n\tif (likely(&tsd_tls != val)) {\n\t\ttsd_tls = (*val);\n\t}\n\tif (pthread_setspecific(tsd_tsd, (void *)(&tsd_tls)) != 0) {\n\t\tmalloc_write(\"<jemalloc>: Error setting tsd.\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd_types.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TSD_TYPES_H\n#define JEMALLOC_INTERNAL_TSD_TYPES_H\n\n#define MALLOC_TSD_CLEANUPS_MAX\t4\n\ntypedef struct tsd_s tsd_t;\ntypedef struct tsdn_s tsdn_t;\ntypedef bool (*malloc_tsd_cleanup_t)(void);\n\n#endif /* JEMALLOC_INTERNAL_TSD_TYPES_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/tsd_win.h",
    "content": "#ifdef JEMALLOC_INTERNAL_TSD_WIN_H\n#error This file should be included only once, by tsd.h.\n#endif\n#define JEMALLOC_INTERNAL_TSD_WIN_H\n\ntypedef struct {\n\tbool initialized;\n\ttsd_t val;\n} tsd_wrapper_t;\n\nextern DWORD tsd_tsd;\nextern tsd_wrapper_t tsd_boot_wrapper;\nextern bool tsd_booted;\n\n/* Initialization/cleanup. */\nJEMALLOC_ALWAYS_INLINE bool\ntsd_cleanup_wrapper(void) {\n\tDWORD error = GetLastError();\n\ttsd_wrapper_t *wrapper = (tsd_wrapper_t *)TlsGetValue(tsd_tsd);\n\tSetLastError(error);\n\n\tif (wrapper == NULL) {\n\t\treturn false;\n\t}\n\n\tif (wrapper->initialized) {\n\t\twrapper->initialized = false;\n\t\ttsd_cleanup(&wrapper->val);\n\t\tif (wrapper->initialized) {\n\t\t\t/* Trigger another cleanup round. */\n\t\t\treturn true;\n\t\t}\n\t}\n\tmalloc_tsd_dalloc(wrapper);\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_wrapper_set(tsd_wrapper_t *wrapper) {\n\tif (!TlsSetValue(tsd_tsd, (void *)wrapper)) {\n\t\tmalloc_write(\"<jemalloc>: Error setting TSD\\n\");\n\t\tabort();\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE tsd_wrapper_t *\ntsd_wrapper_get(bool init) {\n\tDWORD error = GetLastError();\n\ttsd_wrapper_t *wrapper = (tsd_wrapper_t *) TlsGetValue(tsd_tsd);\n\tSetLastError(error);\n\n\tif (init && unlikely(wrapper == NULL)) {\n\t\twrapper = (tsd_wrapper_t *)\n\t\t    malloc_tsd_malloc(sizeof(tsd_wrapper_t));\n\t\tif (wrapper == NULL) {\n\t\t\tmalloc_write(\"<jemalloc>: Error allocating TSD\\n\");\n\t\t\tabort();\n\t\t} else {\n\t\t\twrapper->initialized = false;\n\t\t\t/* MSVC is finicky about aggregate initialization. 
*/\n\t\t\ttsd_t tsd_initializer = TSD_INITIALIZER;\n\t\t\twrapper->val = tsd_initializer;\n\t\t}\n\t\ttsd_wrapper_set(wrapper);\n\t}\n\treturn wrapper;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot0(void) {\n\ttsd_tsd = TlsAlloc();\n\tif (tsd_tsd == TLS_OUT_OF_INDEXES) {\n\t\treturn true;\n\t}\n\t_malloc_tsd_cleanup_register(&tsd_cleanup_wrapper);\n\ttsd_wrapper_set(&tsd_boot_wrapper);\n\ttsd_booted = true;\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_boot1(void) {\n\ttsd_wrapper_t *wrapper;\n\twrapper = (tsd_wrapper_t *)\n\t    malloc_tsd_malloc(sizeof(tsd_wrapper_t));\n\tif (wrapper == NULL) {\n\t\tmalloc_write(\"<jemalloc>: Error allocating TSD\\n\");\n\t\tabort();\n\t}\n\ttsd_boot_wrapper.initialized = false;\n\ttsd_cleanup(&tsd_boot_wrapper.val);\n\twrapper->initialized = false;\n\ttsd_t initializer = TSD_INITIALIZER;\n\twrapper->val = initializer;\n\ttsd_wrapper_set(wrapper);\n}\nJEMALLOC_ALWAYS_INLINE bool\ntsd_boot(void) {\n\tif (tsd_boot0()) {\n\t\treturn true;\n\t}\n\ttsd_boot1();\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_booted_get(void) {\n\treturn tsd_booted;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntsd_get_allocates(void) {\n\treturn true;\n}\n\n/* Get/set. */\nJEMALLOC_ALWAYS_INLINE tsd_t *\ntsd_get(bool init) {\n\ttsd_wrapper_t *wrapper;\n\n\tassert(tsd_booted);\n\twrapper = tsd_wrapper_get(init);\n\tif (tsd_get_allocates() && !init && wrapper == NULL) {\n\t\treturn NULL;\n\t}\n\treturn &wrapper->val;\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntsd_set(tsd_t *val) {\n\ttsd_wrapper_t *wrapper;\n\n\tassert(tsd_booted);\n\twrapper = tsd_wrapper_get(true);\n\tif (likely(&wrapper->val != val)) {\n\t\twrapper->val = *(val);\n\t}\n\twrapper->initialized = true;\n}\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/typed_list.h",
    "content": "#ifndef JEMALLOC_INTERNAL_TYPED_LIST_H\n#define JEMALLOC_INTERNAL_TYPED_LIST_H\n\n/*\n * This wraps the ql module to implement a list class in a way that's a little\n * bit easier to use; it handles ql_elm_new calls and provides type safety.\n */\n\n#define TYPED_LIST(list_type, el_type, linkage)\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tql_head(el_type) head;\t\t\t\t\t\t\\\n} list_type##_t;\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_init(list_type##_t *list) {\t\t\t\t\t\\\n\tql_new(&list->head);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline el_type *\t\t\t\t\t\t\t\\\nlist_type##_first(const list_type##_t *list) {\t\t\t\t\\\n\treturn ql_first(&list->head);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline el_type *\t\t\t\t\t\t\t\\\nlist_type##_last(const list_type##_t *list) {\t\t\t\t\\\n\treturn ql_last(&list->head, linkage);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_append(list_type##_t *list, el_type *item) {\t\t\\\n\tql_elm_new(item, linkage);\t\t\t\t\t\\\n\tql_tail_insert(&list->head, item, linkage);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_prepend(list_type##_t *list, el_type *item) {\t\t\\\n\tql_elm_new(item, linkage);\t\t\t\t\t\\\n\tql_head_insert(&list->head, item, linkage);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_replace(list_type##_t *list, el_type *to_remove,\t\t\\\n    el_type *to_insert) {\t\t\t\t\t\t\\\n\tql_elm_new(to_insert, linkage);\t\t\t\t\t\\\n\tql_after_insert(to_remove, to_insert, linkage);\t\t\t\\\n\tql_remove(&list->head, to_remove, linkage);\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_remove(list_type##_t *list, el_type *item) {\t\t\\\n\tql_remove(&list->head, item, linkage);\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline bool\t\t\t\t\t\t\t\\\nlist_type##_empty(list_type##_t *list) {\t\t\t\t\\\n\treturn 
ql_empty(&list->head);\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\nstatic inline void\t\t\t\t\t\t\t\\\nlist_type##_concat(list_type##_t *list_a, list_type##_t *list_b) {\t\\\n\tql_concat(&list_a->head, &list_b->head, linkage);\t\t\\\n}\n\n#endif /* JEMALLOC_INTERNAL_TYPED_LIST_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/util.h",
    "content": "#ifndef JEMALLOC_INTERNAL_UTIL_H\n#define JEMALLOC_INTERNAL_UTIL_H\n\n#define UTIL_INLINE static inline\n\n/* Junk fill patterns. */\n#ifndef JEMALLOC_ALLOC_JUNK\n#  define JEMALLOC_ALLOC_JUNK\t((uint8_t)0xa5)\n#endif\n#ifndef JEMALLOC_FREE_JUNK\n#  define JEMALLOC_FREE_JUNK\t((uint8_t)0x5a)\n#endif\n\n/*\n * Wrap a cpp argument that contains commas such that it isn't broken up into\n * multiple arguments.\n */\n#define JEMALLOC_ARG_CONCAT(...) __VA_ARGS__\n\n/* cpp macro definition stringification. */\n#define STRINGIFY_HELPER(x) #x\n#define STRINGIFY(x) STRINGIFY_HELPER(x)\n\n/*\n * Silence compiler warnings due to uninitialized values.  This is used\n * wherever the compiler fails to recognize that the variable is never used\n * uninitialized.\n */\n#define JEMALLOC_CC_SILENCE_INIT(v) = v\n\n#ifdef __GNUC__\n#  define likely(x)   __builtin_expect(!!(x), 1)\n#  define unlikely(x) __builtin_expect(!!(x), 0)\n#else\n#  define likely(x)   !!(x)\n#  define unlikely(x) !!(x)\n#endif\n\n#if !defined(JEMALLOC_INTERNAL_UNREACHABLE)\n#  error JEMALLOC_INTERNAL_UNREACHABLE should have been defined by configure\n#endif\n\n#define unreachable() JEMALLOC_INTERNAL_UNREACHABLE()\n\n/* Set error code. */\nUTIL_INLINE void\nset_errno(int errnum) {\n#ifdef _WIN32\n\tSetLastError(errnum);\n#else\n\terrno = errnum;\n#endif\n}\n\n/* Get last error code. */\nUTIL_INLINE int\nget_errno(void) {\n#ifdef _WIN32\n\treturn GetLastError();\n#else\n\treturn errno;\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nutil_assume(bool b) {\n\tif (!b) {\n\t\tunreachable();\n\t}\n}\n\n/* ptr should be valid. */\nJEMALLOC_ALWAYS_INLINE void\nutil_prefetch_read(void *ptr) {\n\t/*\n\t * This should arguably be a config check; but any version of GCC so old\n\t * that it doesn't support __builtin_prefetch is also too old to build\n\t * jemalloc.\n\t */\n#ifdef __GNUC__\n\tif (config_debug) {\n\t\t/* Enforce the \"valid ptr\" requirement. 
*/\n\t\t*(volatile char *)ptr;\n\t}\n\t__builtin_prefetch(ptr, /* read or write */ 0, /* locality hint */ 3);\n#else\n\t*(volatile char *)ptr;\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nutil_prefetch_write(void *ptr) {\n#ifdef __GNUC__\n\tif (config_debug) {\n\t\t*(volatile char *)ptr;\n\t}\n\t/*\n\t * The only difference from the read variant is that this has a 1 as the\n\t * second argument (the write hint).\n\t */\n\t__builtin_prefetch(ptr, 1, 3);\n#else\n\t*(volatile char *)ptr;\n#endif\n}\n\nJEMALLOC_ALWAYS_INLINE void\nutil_prefetch_read_range(void *ptr, size_t sz) {\n\tfor (size_t i = 0; i < sz; i += CACHELINE) {\n\t\tutil_prefetch_read((void *)((uintptr_t)ptr + i));\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\nutil_prefetch_write_range(void *ptr, size_t sz) {\n\tfor (size_t i = 0; i < sz; i += CACHELINE) {\n\t\tutil_prefetch_write((void *)((uintptr_t)ptr + i));\n\t}\n}\n\n#undef UTIL_INLINE\n\n#endif /* JEMALLOC_INTERNAL_UTIL_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/internal/witness.h",
    "content": "#ifndef JEMALLOC_INTERNAL_WITNESS_H\n#define JEMALLOC_INTERNAL_WITNESS_H\n\n#include \"jemalloc/internal/ql.h\"\n\n/******************************************************************************/\n/* LOCK RANKS */\n/******************************************************************************/\n\nenum witness_rank_e {\n\t/*\n\t * Order matters within this enum listing -- higher valued locks can\n\t * only be acquired after lower-valued ones.  We use the\n\t * auto-incrementing-ness of enum values to enforce this.\n\t */\n\n\t/*\n\t * Witnesses with rank WITNESS_RANK_OMIT are completely ignored by the\n\t * witness machinery.\n\t */\n\tWITNESS_RANK_OMIT,\n\tWITNESS_RANK_MIN,\n\tWITNESS_RANK_INIT = WITNESS_RANK_MIN,\n\tWITNESS_RANK_CTL,\n\tWITNESS_RANK_TCACHES,\n\tWITNESS_RANK_ARENAS,\n\tWITNESS_RANK_BACKGROUND_THREAD_GLOBAL,\n\tWITNESS_RANK_PROF_DUMP,\n\tWITNESS_RANK_PROF_BT2GCTX,\n\tWITNESS_RANK_PROF_TDATAS,\n\tWITNESS_RANK_PROF_TDATA,\n\tWITNESS_RANK_PROF_LOG,\n\tWITNESS_RANK_PROF_GCTX,\n\tWITNESS_RANK_PROF_RECENT_DUMP,\n\tWITNESS_RANK_BACKGROUND_THREAD,\n\t/*\n\t * Used as an argument to witness_assert_depth_to_rank() in order to\n\t * validate depth excluding non-core locks with lower ranks.  
Since the\n\t * rank argument to witness_assert_depth_to_rank() is inclusive rather\n\t * than exclusive, this definition can have the same value as the\n\t * minimally ranked core lock.\n\t */\n\tWITNESS_RANK_CORE,\n\tWITNESS_RANK_DECAY = WITNESS_RANK_CORE,\n\tWITNESS_RANK_TCACHE_QL,\n\n\tWITNESS_RANK_SEC_SHARD,\n\n\tWITNESS_RANK_EXTENT_GROW,\n\tWITNESS_RANK_HPA_SHARD_GROW = WITNESS_RANK_EXTENT_GROW,\n\tWITNESS_RANK_SAN_BUMP_ALLOC = WITNESS_RANK_EXTENT_GROW,\n\n\tWITNESS_RANK_EXTENTS,\n\tWITNESS_RANK_HPA_SHARD = WITNESS_RANK_EXTENTS,\n\n\tWITNESS_RANK_HPA_CENTRAL_GROW,\n\tWITNESS_RANK_HPA_CENTRAL,\n\n\tWITNESS_RANK_EDATA_CACHE,\n\n\tWITNESS_RANK_RTREE,\n\tWITNESS_RANK_BASE,\n\tWITNESS_RANK_ARENA_LARGE,\n\tWITNESS_RANK_HOOK,\n\n\tWITNESS_RANK_LEAF=0x1000,\n\tWITNESS_RANK_BIN = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_ARENA_STATS = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_COUNTER_ACCUM = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_DSS = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_ACTIVE = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_DUMP_FILENAME = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_GDUMP = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_NEXT_THR_UID = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_RECENT_ALLOC = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_STATS = WITNESS_RANK_LEAF,\n\tWITNESS_RANK_PROF_THREAD_ACTIVE_INIT = WITNESS_RANK_LEAF,\n};\ntypedef enum witness_rank_e witness_rank_t;\n\n/******************************************************************************/\n/* PER-WITNESS DATA */\n/******************************************************************************/\n#if defined(JEMALLOC_DEBUG)\n#  define WITNESS_INITIALIZER(name, rank) {name, rank, NULL, NULL, {NULL, NULL}}\n#else\n#  define WITNESS_INITIALIZER(name, rank)\n#endif\n\ntypedef struct witness_s witness_t;\ntypedef ql_head(witness_t) witness_list_t;\ntypedef int witness_comp_t (const witness_t *, void *, const witness_t *,\n    void *);\n\nstruct witness_s {\n\t/* Name, used for printing lock order reversal messages. 
*/\n\tconst char\t\t*name;\n\n\t/*\n\t * Witness rank, where 0 is lowest and WITNESS_RANK_LEAF is highest.\n\t * Witnesses must be acquired in order of increasing rank.\n\t */\n\twitness_rank_t\t\trank;\n\n\t/*\n\t * If two witnesses are of equal rank and they have the same comp\n\t * function pointer, it is called as a last attempt to differentiate\n\t * between witnesses of equal rank.\n\t */\n\twitness_comp_t\t\t*comp;\n\n\t/* Opaque data, passed to comp(). */\n\tvoid\t\t\t*opaque;\n\n\t/* Linkage for thread's currently owned locks. */\n\tql_elm(witness_t)\tlink;\n};\n\n/******************************************************************************/\n/* PER-THREAD DATA */\n/******************************************************************************/\ntypedef struct witness_tsd_s witness_tsd_t;\nstruct witness_tsd_s {\n\twitness_list_t witnesses;\n\tbool forking;\n};\n\n#define WITNESS_TSD_INITIALIZER { ql_head_initializer(witnesses), false }\n#define WITNESS_TSDN_NULL ((witness_tsdn_t *)0)\n\n/******************************************************************************/\n/* (PER-THREAD) NULLABILITY HELPERS */\n/******************************************************************************/\ntypedef struct witness_tsdn_s witness_tsdn_t;\nstruct witness_tsdn_s {\n\twitness_tsd_t witness_tsd;\n};\n\nJEMALLOC_ALWAYS_INLINE witness_tsdn_t *\nwitness_tsd_tsdn(witness_tsd_t *witness_tsd) {\n\treturn (witness_tsdn_t *)witness_tsd;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nwitness_tsdn_null(witness_tsdn_t *witness_tsdn) {\n\treturn witness_tsdn == NULL;\n}\n\nJEMALLOC_ALWAYS_INLINE witness_tsd_t *\nwitness_tsdn_tsd(witness_tsdn_t *witness_tsdn) {\n\tassert(!witness_tsdn_null(witness_tsdn));\n\treturn &witness_tsdn->witness_tsd;\n}\n\n/******************************************************************************/\n/* API */\n/******************************************************************************/\nvoid witness_init(witness_t *witness, const char *name, 
witness_rank_t rank,\n    witness_comp_t *comp, void *opaque);\n\ntypedef void (witness_lock_error_t)(const witness_list_t *, const witness_t *);\nextern witness_lock_error_t *JET_MUTABLE witness_lock_error;\n\ntypedef void (witness_owner_error_t)(const witness_t *);\nextern witness_owner_error_t *JET_MUTABLE witness_owner_error;\n\ntypedef void (witness_not_owner_error_t)(const witness_t *);\nextern witness_not_owner_error_t *JET_MUTABLE witness_not_owner_error;\n\ntypedef void (witness_depth_error_t)(const witness_list_t *,\n    witness_rank_t rank_inclusive, unsigned depth);\nextern witness_depth_error_t *JET_MUTABLE witness_depth_error;\n\nvoid witnesses_cleanup(witness_tsd_t *witness_tsd);\nvoid witness_prefork(witness_tsd_t *witness_tsd);\nvoid witness_postfork_parent(witness_tsd_t *witness_tsd);\nvoid witness_postfork_child(witness_tsd_t *witness_tsd);\n\n/* Helper, not intended for direct use. */\nstatic inline bool\nwitness_owner(witness_tsd_t *witness_tsd, const witness_t *witness) {\n\twitness_list_t *witnesses;\n\twitness_t *w;\n\n\tcassert(config_debug);\n\n\twitnesses = &witness_tsd->witnesses;\n\tql_foreach(w, witnesses, link) {\n\t\tif (w == witness) {\n\t\t\treturn true;\n\t\t}\n\t}\n\n\treturn false;\n}\n\nstatic inline void\nwitness_assert_owner(witness_tsdn_t *witness_tsdn, const witness_t *witness) {\n\twitness_tsd_t *witness_tsd;\n\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\n\tif (witness_tsdn_null(witness_tsdn)) {\n\t\treturn;\n\t}\n\twitness_tsd = witness_tsdn_tsd(witness_tsdn);\n\tif (witness->rank == WITNESS_RANK_OMIT) {\n\t\treturn;\n\t}\n\n\tif (witness_owner(witness_tsd, witness)) {\n\t\treturn;\n\t}\n\twitness_owner_error(witness);\n}\n\nstatic inline void\nwitness_assert_not_owner(witness_tsdn_t *witness_tsdn,\n    const witness_t *witness) {\n\twitness_tsd_t *witness_tsd;\n\twitness_list_t *witnesses;\n\twitness_t *w;\n\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\n\tif (witness_tsdn_null(witness_tsdn)) 
{\n\t\treturn;\n\t}\n\twitness_tsd = witness_tsdn_tsd(witness_tsdn);\n\tif (witness->rank == WITNESS_RANK_OMIT) {\n\t\treturn;\n\t}\n\n\twitnesses = &witness_tsd->witnesses;\n\tql_foreach(w, witnesses, link) {\n\t\tif (w == witness) {\n\t\t\twitness_not_owner_error(witness);\n\t\t}\n\t}\n}\n\n/* Returns depth.  Not intended for direct use. */\nstatic inline unsigned\nwitness_depth_to_rank(witness_list_t *witnesses, witness_rank_t rank_inclusive)\n{\n\tunsigned d = 0;\n\twitness_t *w = ql_last(witnesses, link);\n\n\tif (w != NULL) {\n\t\tql_reverse_foreach(w, witnesses, link) {\n\t\t\tif (w->rank < rank_inclusive) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\td++;\n\t\t}\n\t}\n\n\treturn d;\n}\n\nstatic inline void\nwitness_assert_depth_to_rank(witness_tsdn_t *witness_tsdn,\n    witness_rank_t rank_inclusive, unsigned depth) {\n\tif (!config_debug || witness_tsdn_null(witness_tsdn)) {\n\t\treturn;\n\t}\n\n\twitness_list_t *witnesses = &witness_tsdn_tsd(witness_tsdn)->witnesses;\n\tunsigned d = witness_depth_to_rank(witnesses, rank_inclusive);\n\n\tif (d != depth) {\n\t\twitness_depth_error(witnesses, rank_inclusive, depth);\n\t}\n}\n\nstatic inline void\nwitness_assert_depth(witness_tsdn_t *witness_tsdn, unsigned depth) {\n\twitness_assert_depth_to_rank(witness_tsdn, WITNESS_RANK_MIN, depth);\n}\n\nstatic inline void\nwitness_assert_lockless(witness_tsdn_t *witness_tsdn) {\n\twitness_assert_depth(witness_tsdn, 0);\n}\n\nstatic inline void\nwitness_assert_positive_depth_to_rank(witness_tsdn_t *witness_tsdn,\n    witness_rank_t rank_inclusive) {\n\tif (!config_debug || witness_tsdn_null(witness_tsdn)) {\n\t\treturn;\n\t}\n\n\twitness_list_t *witnesses = &witness_tsdn_tsd(witness_tsdn)->witnesses;\n\tunsigned d = witness_depth_to_rank(witnesses, rank_inclusive);\n\n\tif (d == 0) {\n\t\twitness_depth_error(witnesses, rank_inclusive, 1);\n\t}\n}\n\nstatic inline void\nwitness_lock(witness_tsdn_t *witness_tsdn, witness_t *witness) {\n\twitness_tsd_t *witness_tsd;\n\twitness_list_t 
*witnesses;\n\twitness_t *w;\n\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\n\tif (witness_tsdn_null(witness_tsdn)) {\n\t\treturn;\n\t}\n\twitness_tsd = witness_tsdn_tsd(witness_tsdn);\n\tif (witness->rank == WITNESS_RANK_OMIT) {\n\t\treturn;\n\t}\n\n\twitness_assert_not_owner(witness_tsdn, witness);\n\n\twitnesses = &witness_tsd->witnesses;\n\tw = ql_last(witnesses, link);\n\tif (w == NULL) {\n\t\t/* No other locks; do nothing. */\n\t} else if (witness_tsd->forking && w->rank <= witness->rank) {\n\t\t/* Forking, and relaxed ranking satisfied. */\n\t} else if (w->rank > witness->rank) {\n\t\t/* Not forking, rank order reversal. */\n\t\twitness_lock_error(witnesses, witness);\n\t} else if (w->rank == witness->rank && (w->comp == NULL || w->comp !=\n\t    witness->comp || w->comp(w, w->opaque, witness, witness->opaque) >\n\t    0)) {\n\t\t/*\n\t\t * Missing/incompatible comparison function, or comparison\n\t\t * function indicates rank order reversal.\n\t\t */\n\t\twitness_lock_error(witnesses, witness);\n\t}\n\n\tql_elm_new(witness, link);\n\tql_tail_insert(witnesses, witness, link);\n}\n\nstatic inline void\nwitness_unlock(witness_tsdn_t *witness_tsdn, witness_t *witness) {\n\twitness_tsd_t *witness_tsd;\n\twitness_list_t *witnesses;\n\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\n\tif (witness_tsdn_null(witness_tsdn)) {\n\t\treturn;\n\t}\n\twitness_tsd = witness_tsdn_tsd(witness_tsdn);\n\tif (witness->rank == WITNESS_RANK_OMIT) {\n\t\treturn;\n\t}\n\n\t/*\n\t * Check whether owner before removal, rather than relying on\n\t * witness_assert_owner() to abort, so that unit tests can test this\n\t * function's failure mode without causing undefined behavior.\n\t */\n\tif (witness_owner(witness_tsd, witness)) {\n\t\twitnesses = &witness_tsd->witnesses;\n\t\tql_remove(witnesses, witness, link);\n\t} else {\n\t\twitness_assert_owner(witness_tsdn, witness);\n\t}\n}\n\n#endif /* JEMALLOC_INTERNAL_WITNESS_H */\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc.sh",
    "content": "#!/bin/sh\n\nobjroot=$1\n\ncat <<EOF\n#ifndef JEMALLOC_H_\n#define JEMALLOC_H_\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nEOF\n\nfor hdr in jemalloc_defs.h jemalloc_rename.h jemalloc_macros.h \\\n           jemalloc_protos.h jemalloc_typedefs.h jemalloc_mangle.h ; do\n  cat \"${objroot}include/jemalloc/${hdr}\" \\\n      | grep -v 'Generated from .* by configure\\.' \\\n      | sed -e 's/ $//g'\n  echo\ndone\n\ncat <<EOF\n#ifdef __cplusplus\n}\n#endif\n#endif /* JEMALLOC_H_ */\nEOF\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_defs.h.in",
    "content": "/* Defined if __attribute__((...)) syntax is supported. */\n#undef JEMALLOC_HAVE_ATTR\n\n/* Defined if alloc_size attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_ALLOC_SIZE\n\n/* Defined if format_arg(...) attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_FORMAT_ARG\n\n/* Defined if format(gnu_printf, ...) attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF\n\n/* Defined if format(printf, ...) attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_FORMAT_PRINTF\n\n/* Defined if fallthrough attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_FALLTHROUGH\n\n/* Defined if cold attribute is supported. */\n#undef JEMALLOC_HAVE_ATTR_COLD\n\n/*\n * Define overrides for non-standard allocator-related functions if they are\n * present on the system.\n */\n#undef JEMALLOC_OVERRIDE_MEMALIGN\n#undef JEMALLOC_OVERRIDE_VALLOC\n\n/*\n * At least Linux omits the \"const\" in:\n *\n *   size_t malloc_usable_size(const void *ptr);\n *\n * Match the operating system's prototype.\n */\n#undef JEMALLOC_USABLE_SIZE_CONST\n\n/*\n * If defined, specify throw() for the public function prototypes when compiling\n * with C++.  The only justification for this is to match the prototypes that\n * glibc defines.\n */\n#undef JEMALLOC_USE_CXX_THROW\n\n#ifdef _MSC_VER\n#  ifdef _WIN64\n#    define LG_SIZEOF_PTR_WIN 3\n#  else\n#    define LG_SIZEOF_PTR_WIN 2\n#  endif\n#endif\n\n/* sizeof(void *) == 2^LG_SIZEOF_PTR. */\n#undef LG_SIZEOF_PTR\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_macros.h.in",
    "content": "#include <stdlib.h>\n#include <stdbool.h>\n#include <stdint.h>\n#include <limits.h>\n#include <strings.h>\n\n#define JEMALLOC_VERSION \"@jemalloc_version@\"\n#define JEMALLOC_VERSION_MAJOR @jemalloc_version_major@\n#define JEMALLOC_VERSION_MINOR @jemalloc_version_minor@\n#define JEMALLOC_VERSION_BUGFIX @jemalloc_version_bugfix@\n#define JEMALLOC_VERSION_NREV @jemalloc_version_nrev@\n#define JEMALLOC_VERSION_GID \"@jemalloc_version_gid@\"\n#define JEMALLOC_VERSION_GID_IDENT @jemalloc_version_gid@\n\n#define MALLOCX_LG_ALIGN(la)\t((int)(la))\n#if LG_SIZEOF_PTR == 2\n#  define MALLOCX_ALIGN(a)\t((int)(ffs((int)(a))-1))\n#else\n#  define MALLOCX_ALIGN(a)\t\t\t\t\t\t\\\n     ((int)(((size_t)(a) < (size_t)INT_MAX) ? ffs((int)(a))-1 :\t\\\n     ffs((int)(((size_t)(a))>>32))+31))\n#endif\n#define MALLOCX_ZERO\t((int)0x40)\n/*\n * Bias tcache index bits so that 0 encodes \"automatic tcache management\", and 1\n * encodes MALLOCX_TCACHE_NONE.\n */\n#define MALLOCX_TCACHE(tc)\t((int)(((tc)+2) << 8))\n#define MALLOCX_TCACHE_NONE\tMALLOCX_TCACHE(-1)\n/*\n * Bias arena index bits so that 0 encodes \"use an automatically chosen arena\".\n */\n#define MALLOCX_ARENA(a)\t((((int)(a))+1) << 20)\n\n/*\n * Use as arena index in \"arena.<i>.{purge,decay,dss}\" and\n * \"stats.arenas.<i>.*\" mallctl interfaces to select all arenas.  
This\n * definition is intentionally specified in raw decimal format to support\n * cpp-based string concatenation, e.g.\n *\n *   #define STRINGIFY_HELPER(x) #x\n *   #define STRINGIFY(x) STRINGIFY_HELPER(x)\n *\n *   mallctl(\"arena.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".purge\", NULL, NULL, NULL,\n *       0);\n */\n#define MALLCTL_ARENAS_ALL\t4096\n/*\n * Use as arena index in \"stats.arenas.<i>.*\" mallctl interfaces to select\n * destroyed arenas.\n */\n#define MALLCTL_ARENAS_DESTROYED\t4097\n\n#if defined(__cplusplus) && defined(JEMALLOC_USE_CXX_THROW)\n#  define JEMALLOC_CXX_THROW throw()\n#else\n#  define JEMALLOC_CXX_THROW\n#endif\n\n#if defined(_MSC_VER)\n#  define JEMALLOC_ATTR(s)\n#  define JEMALLOC_ALIGNED(s) __declspec(align(s))\n#  define JEMALLOC_ALLOC_SIZE(s)\n#  define JEMALLOC_ALLOC_SIZE2(s1, s2)\n#  ifndef JEMALLOC_EXPORT\n#    ifdef DLLEXPORT\n#      define JEMALLOC_EXPORT __declspec(dllexport)\n#    else\n#      define JEMALLOC_EXPORT __declspec(dllimport)\n#    endif\n#  endif\n#  define JEMALLOC_FORMAT_ARG(i)\n#  define JEMALLOC_FORMAT_PRINTF(s, i)\n#  define JEMALLOC_FALLTHROUGH\n#  define JEMALLOC_NOINLINE __declspec(noinline)\n#  ifdef __cplusplus\n#    define JEMALLOC_NOTHROW __declspec(nothrow)\n#  else\n#    define JEMALLOC_NOTHROW\n#  endif\n#  define JEMALLOC_SECTION(s) __declspec(allocate(s))\n#  define JEMALLOC_RESTRICT_RETURN __declspec(restrict)\n#  if _MSC_VER >= 1900 && !defined(__EDG__)\n#    define JEMALLOC_ALLOCATOR __declspec(allocator)\n#  else\n#    define JEMALLOC_ALLOCATOR\n#  endif\n#  define JEMALLOC_COLD\n#elif defined(JEMALLOC_HAVE_ATTR)\n#  define JEMALLOC_ATTR(s) __attribute__((s))\n#  define JEMALLOC_ALIGNED(s) JEMALLOC_ATTR(aligned(s))\n#  ifdef JEMALLOC_HAVE_ATTR_ALLOC_SIZE\n#    define JEMALLOC_ALLOC_SIZE(s) JEMALLOC_ATTR(alloc_size(s))\n#    define JEMALLOC_ALLOC_SIZE2(s1, s2) JEMALLOC_ATTR(alloc_size(s1, s2))\n#  else\n#    define JEMALLOC_ALLOC_SIZE(s)\n#    define JEMALLOC_ALLOC_SIZE2(s1, s2)\n#  endif\n#  
ifndef JEMALLOC_EXPORT\n#    define JEMALLOC_EXPORT JEMALLOC_ATTR(visibility(\"default\"))\n#  endif\n#  ifdef JEMALLOC_HAVE_ATTR_FORMAT_ARG\n#    define JEMALLOC_FORMAT_ARG(i) JEMALLOC_ATTR(__format_arg__(3))\n#  else\n#    define JEMALLOC_FORMAT_ARG(i)\n#  endif\n#  ifdef JEMALLOC_HAVE_ATTR_FORMAT_GNU_PRINTF\n#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(gnu_printf, s, i))\n#  elif defined(JEMALLOC_HAVE_ATTR_FORMAT_PRINTF)\n#    define JEMALLOC_FORMAT_PRINTF(s, i) JEMALLOC_ATTR(format(printf, s, i))\n#  else\n#    define JEMALLOC_FORMAT_PRINTF(s, i)\n#  endif\n#  ifdef JEMALLOC_HAVE_ATTR_FALLTHROUGH\n#    define JEMALLOC_FALLTHROUGH JEMALLOC_ATTR(fallthrough)\n#  else\n#    define JEMALLOC_FALLTHROUGH\n#  endif\n#  define JEMALLOC_NOINLINE JEMALLOC_ATTR(noinline)\n#  define JEMALLOC_NOTHROW JEMALLOC_ATTR(nothrow)\n#  define JEMALLOC_SECTION(s) JEMALLOC_ATTR(section(s))\n#  define JEMALLOC_RESTRICT_RETURN\n#  define JEMALLOC_ALLOCATOR\n#  ifdef JEMALLOC_HAVE_ATTR_COLD\n#    define JEMALLOC_COLD JEMALLOC_ATTR(__cold__)\n#  else\n#    define JEMALLOC_COLD\n#  endif\n#else\n#  define JEMALLOC_ATTR(s)\n#  define JEMALLOC_ALIGNED(s)\n#  define JEMALLOC_ALLOC_SIZE(s)\n#  define JEMALLOC_ALLOC_SIZE2(s1, s2)\n#  define JEMALLOC_EXPORT\n#  define JEMALLOC_FORMAT_PRINTF(s, i)\n#  define JEMALLOC_FALLTHROUGH\n#  define JEMALLOC_NOINLINE\n#  define JEMALLOC_NOTHROW\n#  define JEMALLOC_SECTION(s)\n#  define JEMALLOC_RESTRICT_RETURN\n#  define JEMALLOC_ALLOCATOR\n#  define JEMALLOC_COLD\n#endif\n\n#if (defined(__APPLE__) || defined(__FreeBSD__)) && !defined(JEMALLOC_NO_RENAME)\n#  define JEMALLOC_SYS_NOTHROW\n#else\n#  define JEMALLOC_SYS_NOTHROW JEMALLOC_NOTHROW\n#endif\n\n/* This version of Jemalloc, modified for Redis, has the je_get_defrag_hint()\n * function. */\n#define JEMALLOC_FRAG_HINT\n\n/* This version of Jemalloc, modified for Redis, has the je_*_usable() family functions. */\n#define JEMALLOC_ALLOC_WITH_USIZE\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_mangle.sh",
    "content": "#!/bin/sh -eu\n\npublic_symbols_txt=$1\nsymbol_prefix=$2\n\ncat <<EOF\n/*\n * By default application code must explicitly refer to mangled symbol names,\n * so that it is possible to use jemalloc in conjunction with another allocator\n * in the same application.  Define JEMALLOC_MANGLE in order to cause automatic\n * name mangling that matches the API prefixing that happened as a result of\n * --with-mangling and/or --with-jemalloc-prefix configuration settings.\n */\n#ifdef JEMALLOC_MANGLE\n#  ifndef JEMALLOC_NO_DEMANGLE\n#    define JEMALLOC_NO_DEMANGLE\n#  endif\nEOF\n\nfor nm in `cat ${public_symbols_txt}` ; do\n  n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n  echo \"#  define ${n} ${symbol_prefix}${n}\"\ndone\n\ncat <<EOF\n#endif\n\n/*\n * The ${symbol_prefix}* macros can be used as stable alternative names for the\n * public jemalloc API if JEMALLOC_NO_DEMANGLE is defined.  This is primarily\n * meant for use in jemalloc itself, but it can be used by application code to\n * provide isolation from the name mangling specified via --with-mangling\n * and/or --with-jemalloc-prefix.\n */\n#ifndef JEMALLOC_NO_DEMANGLE\nEOF\n\nfor nm in `cat ${public_symbols_txt}` ; do\n  n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n  echo \"#  undef ${symbol_prefix}${n}\"\ndone\n\ncat <<EOF\n#endif\nEOF\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_protos.h.in",
    "content": "/*\n * The @je_@ prefix on the following public symbol declarations is an artifact\n * of namespace management, and should be omitted in application code unless\n * JEMALLOC_NO_DEMANGLE is defined (see jemalloc_mangle@install_suffix@.h).\n */\nextern JEMALLOC_EXPORT const char\t*@je_@malloc_conf;\nextern JEMALLOC_EXPORT void\t\t(*@je_@malloc_message)(void *cbopaque,\n    const char *s);\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@malloc(size_t size)\n    JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1);\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@calloc(size_t num, size_t size)\n    JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2);\nJEMALLOC_EXPORT int JEMALLOC_SYS_NOTHROW @je_@posix_memalign(\n    void **memptr, size_t alignment, size_t size) JEMALLOC_CXX_THROW\n    JEMALLOC_ATTR(nonnull(1));\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@aligned_alloc(size_t alignment,\n    size_t size) JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc)\n    JEMALLOC_ALLOC_SIZE(2);\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@realloc(void *ptr, size_t size)\n    JEMALLOC_CXX_THROW JEMALLOC_ALLOC_SIZE(2);\nJEMALLOC_EXPORT void JEMALLOC_SYS_NOTHROW\t@je_@free(void *ptr)\n    JEMALLOC_CXX_THROW;\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_NOTHROW\t*@je_@mallocx(size_t size, int flags)\n    JEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1);\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_NOTHROW\t*@je_@rallocx(void *ptr, size_t size,\n    int flags) JEMALLOC_ALLOC_SIZE(2);\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\t@je_@xallocx(void *ptr, size_t size,\n    size_t extra, int flags);\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\t@je_@sallocx(const void *ptr,\n    int flags) 
JEMALLOC_ATTR(pure);\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\t@je_@dallocx(void *ptr, int flags);\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\t@je_@sdallocx(void *ptr, size_t size,\n    int flags);\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\t@je_@nallocx(size_t size, int flags)\n    JEMALLOC_ATTR(pure);\n\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\t@je_@mallctl(const char *name,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen);\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\t@je_@mallctlnametomib(const char *name,\n    size_t *mibp, size_t *miblenp);\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\t@je_@mallctlbymib(const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen);\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\t@je_@malloc_stats_print(\n    void (*write_cb)(void *, const char *), void *@je_@cbopaque,\n    const char *opts);\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\t@je_@malloc_usable_size(\n    JEMALLOC_USABLE_SIZE_CONST void *ptr) JEMALLOC_CXX_THROW;\n#ifdef JEMALLOC_HAVE_MALLOC_SIZE\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\t@je_@malloc_size(\n    const void *ptr);\n#endif\n\n#ifdef JEMALLOC_OVERRIDE_MEMALIGN\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@memalign(size_t alignment, size_t size)\n    JEMALLOC_CXX_THROW JEMALLOC_ATTR(malloc);\n#endif\n\n#ifdef JEMALLOC_OVERRIDE_VALLOC\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\n    void JEMALLOC_SYS_NOTHROW\t*@je_@valloc(size_t size) JEMALLOC_CXX_THROW\n    JEMALLOC_ATTR(malloc);\n#endif\n"
  },
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_rename.sh",
    "content": "#!/bin/sh\n\npublic_symbols_txt=$1\n\ncat <<EOF\n/*\n * Name mangling for public symbols is controlled by --with-mangling and\n * --with-jemalloc-prefix.  With default settings the je_ prefix is stripped by\n * these macro definitions.\n */\n#ifndef JEMALLOC_NO_RENAME\nEOF\n\nfor nm in `cat ${public_symbols_txt}` ; do\n  n=`echo ${nm} |tr ':' ' ' |awk '{print $1}'`\n  m=`echo ${nm} |tr ':' ' ' |awk '{print $2}'`\n  echo \"#  define je_${n} ${m}\"\ndone\n\ncat <<EOF\n#endif\nEOF\n"
  },
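The rename script's core is a one-line-per-symbol transformation of `public_symbols.txt` entries into `#define` lines. A sketch of that pipeline on a single made-up `api_name:mangled_name` pair (real input comes from the generated symbols file):

```shell
# One sample "name:mangled" pair, illustrative only.
nm='malloc:malloc'
n=$(echo "$nm" | tr ':' ' ' | awk '{print $1}')
m=$(echo "$nm" | tr ':' ' ' | awk '{print $2}')
echo "#  define je_${n} ${m}"
# -> #  define je_malloc malloc
```

With default settings the second field equals the unprefixed name, which is how the `je_` prefix gets "stripped" for application code.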
  {
    "path": "deps/jemalloc/include/jemalloc/jemalloc_typedefs.h.in",
    "content": "typedef struct extent_hooks_s extent_hooks_t;\n\n/*\n * void *\n * extent_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size,\n *     size_t alignment, bool *zero, bool *commit, unsigned arena_ind);\n */\ntypedef void *(extent_alloc_t)(extent_hooks_t *, void *, size_t, size_t, bool *,\n    bool *, unsigned);\n\n/*\n * bool\n * extent_dalloc(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     bool committed, unsigned arena_ind);\n */\ntypedef bool (extent_dalloc_t)(extent_hooks_t *, void *, size_t, bool,\n    unsigned);\n\n/*\n * void\n * extent_destroy(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     bool committed, unsigned arena_ind);\n */\ntypedef void (extent_destroy_t)(extent_hooks_t *, void *, size_t, bool,\n    unsigned);\n\n/*\n * bool\n * extent_commit(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     size_t offset, size_t length, unsigned arena_ind);\n */\ntypedef bool (extent_commit_t)(extent_hooks_t *, void *, size_t, size_t, size_t,\n    unsigned);\n\n/*\n * bool\n * extent_decommit(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     size_t offset, size_t length, unsigned arena_ind);\n */\ntypedef bool (extent_decommit_t)(extent_hooks_t *, void *, size_t, size_t,\n    size_t, unsigned);\n\n/*\n * bool\n * extent_purge(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     size_t offset, size_t length, unsigned arena_ind);\n */\ntypedef bool (extent_purge_t)(extent_hooks_t *, void *, size_t, size_t, size_t,\n    unsigned);\n\n/*\n * bool\n * extent_split(extent_hooks_t *extent_hooks, void *addr, size_t size,\n *     size_t size_a, size_t size_b, bool committed, unsigned arena_ind);\n */\ntypedef bool (extent_split_t)(extent_hooks_t *, void *, size_t, size_t, size_t,\n    bool, unsigned);\n\n/*\n * bool\n * extent_merge(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a,\n *     void *addr_b, size_t size_b, bool committed, unsigned arena_ind);\n */\ntypedef bool 
(extent_merge_t)(extent_hooks_t *, void *, size_t, void *, size_t,\n    bool, unsigned);\n\nstruct extent_hooks_s {\n\textent_alloc_t\t\t*alloc;\n\textent_dalloc_t\t\t*dalloc;\n\textent_destroy_t\t*destroy;\n\textent_commit_t\t\t*commit;\n\textent_decommit_t\t*decommit;\n\textent_purge_t\t\t*purge_lazy;\n\textent_purge_t\t\t*purge_forced;\n\textent_split_t\t\t*split;\n\textent_merge_t\t\t*merge;\n};\n"
  },
  {
    "path": "deps/jemalloc/include/msvc_compat/C99/stdbool.h",
    "content": "#ifndef stdbool_h\n#define stdbool_h\n\n#include <wtypes.h>\n\n/* MSVC doesn't define _Bool or bool in C, but does have BOOL */\n/* Note this doesn't pass autoconf's test because (bool) 0.5 != true */\n/* Clang-cl uses MSVC headers, so needs msvc_compat, but has _Bool as\n * a built-in type. */\n#ifndef __clang__\ntypedef BOOL _Bool;\n#endif\n\n#define bool _Bool\n#define true 1\n#define false 0\n\n#define __bool_true_false_are_defined 1\n\n#endif /* stdbool_h */\n"
  },
  {
    "path": "deps/jemalloc/include/msvc_compat/C99/stdint.h",
    "content": "// ISO C9x  compliant stdint.h for Microsoft Visual Studio\n// Based on ISO/IEC 9899:TC2 Committee draft (May 6, 2005) WG14/N1124 \n// \n//  Copyright (c) 2006-2008 Alexander Chemeris\n// \n// Redistribution and use in source and binary forms, with or without\n// modification, are permitted provided that the following conditions are met:\n// \n//   1. Redistributions of source code must retain the above copyright notice,\n//      this list of conditions and the following disclaimer.\n// \n//   2. Redistributions in binary form must reproduce the above copyright\n//      notice, this list of conditions and the following disclaimer in the\n//      documentation and/or other materials provided with the distribution.\n// \n//   3. The name of the author may be used to endorse or promote products\n//      derived from this software without specific prior written permission.\n// \n// THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n// WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\n// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO\n// EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n// OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, \n// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR\n// OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF\n// ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n// \n///////////////////////////////////////////////////////////////////////////////\n\n#ifndef _MSC_VER // [\n#error \"Use this header only with Microsoft Visual C++ compilers!\"\n#endif // _MSC_VER ]\n\n#ifndef _MSC_STDINT_H_ // [\n#define _MSC_STDINT_H_\n\n#if _MSC_VER > 1000\n#pragma once\n#endif\n\n#include <limits.h>\n\n// For Visual Studio 6 in C++ mode and for many Visual Studio versions when\n// compiling for ARM we should wrap <wchar.h> include with 'extern \"C++\" {}'\n// or compiler give many errors like this:\n//   error C2733: second C linkage of overloaded function 'wmemchr' not allowed\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n#  include <wchar.h>\n#ifdef __cplusplus\n}\n#endif\n\n// Define _W64 macros to mark types changing their size, like intptr_t.\n#ifndef _W64\n#  if !defined(__midl) && (defined(_X86_) || defined(_M_IX86)) && _MSC_VER >= 1300\n#     define _W64 __w64\n#  else\n#     define _W64\n#  endif\n#endif\n\n\n// 7.18.1 Integer types\n\n// 7.18.1.1 Exact-width integer types\n\n// Visual Studio 6 and Embedded Visual C++ 4 doesn't\n// realize that, e.g. 
char has the same size as __int8\n// so we give up on __intX for them.\n#if (_MSC_VER < 1300)\n   typedef signed char       int8_t;\n   typedef signed short      int16_t;\n   typedef signed int        int32_t;\n   typedef unsigned char     uint8_t;\n   typedef unsigned short    uint16_t;\n   typedef unsigned int      uint32_t;\n#else\n   typedef signed __int8     int8_t;\n   typedef signed __int16    int16_t;\n   typedef signed __int32    int32_t;\n   typedef unsigned __int8   uint8_t;\n   typedef unsigned __int16  uint16_t;\n   typedef unsigned __int32  uint32_t;\n#endif\ntypedef signed __int64       int64_t;\ntypedef unsigned __int64     uint64_t;\n\n\n// 7.18.1.2 Minimum-width integer types\ntypedef int8_t    int_least8_t;\ntypedef int16_t   int_least16_t;\ntypedef int32_t   int_least32_t;\ntypedef int64_t   int_least64_t;\ntypedef uint8_t   uint_least8_t;\ntypedef uint16_t  uint_least16_t;\ntypedef uint32_t  uint_least32_t;\ntypedef uint64_t  uint_least64_t;\n\n// 7.18.1.3 Fastest minimum-width integer types\ntypedef int8_t    int_fast8_t;\ntypedef int16_t   int_fast16_t;\ntypedef int32_t   int_fast32_t;\ntypedef int64_t   int_fast64_t;\ntypedef uint8_t   uint_fast8_t;\ntypedef uint16_t  uint_fast16_t;\ntypedef uint32_t  uint_fast32_t;\ntypedef uint64_t  uint_fast64_t;\n\n// 7.18.1.4 Integer types capable of holding object pointers\n#ifdef _WIN64 // [\n   typedef signed __int64    intptr_t;\n   typedef unsigned __int64  uintptr_t;\n#else // _WIN64 ][\n   typedef _W64 signed int   intptr_t;\n   typedef _W64 unsigned int uintptr_t;\n#endif // _WIN64 ]\n\n// 7.18.1.5 Greatest-width integer types\ntypedef int64_t   intmax_t;\ntypedef uint64_t  uintmax_t;\n\n\n// 7.18.2 Limits of specified-width integer types\n\n#if !defined(__cplusplus) || defined(__STDC_LIMIT_MACROS) // [   See footnote 220 at page 257 and footnote 221 at page 259\n\n// 7.18.2.1 Limits of exact-width integer types\n#define INT8_MIN     ((int8_t)_I8_MIN)\n#define INT8_MAX     _I8_MAX\n#define 
INT16_MIN    ((int16_t)_I16_MIN)\n#define INT16_MAX    _I16_MAX\n#define INT32_MIN    ((int32_t)_I32_MIN)\n#define INT32_MAX    _I32_MAX\n#define INT64_MIN    ((int64_t)_I64_MIN)\n#define INT64_MAX    _I64_MAX\n#define UINT8_MAX    _UI8_MAX\n#define UINT16_MAX   _UI16_MAX\n#define UINT32_MAX   _UI32_MAX\n#define UINT64_MAX   _UI64_MAX\n\n// 7.18.2.2 Limits of minimum-width integer types\n#define INT_LEAST8_MIN    INT8_MIN\n#define INT_LEAST8_MAX    INT8_MAX\n#define INT_LEAST16_MIN   INT16_MIN\n#define INT_LEAST16_MAX   INT16_MAX\n#define INT_LEAST32_MIN   INT32_MIN\n#define INT_LEAST32_MAX   INT32_MAX\n#define INT_LEAST64_MIN   INT64_MIN\n#define INT_LEAST64_MAX   INT64_MAX\n#define UINT_LEAST8_MAX   UINT8_MAX\n#define UINT_LEAST16_MAX  UINT16_MAX\n#define UINT_LEAST32_MAX  UINT32_MAX\n#define UINT_LEAST64_MAX  UINT64_MAX\n\n// 7.18.2.3 Limits of fastest minimum-width integer types\n#define INT_FAST8_MIN    INT8_MIN\n#define INT_FAST8_MAX    INT8_MAX\n#define INT_FAST16_MIN   INT16_MIN\n#define INT_FAST16_MAX   INT16_MAX\n#define INT_FAST32_MIN   INT32_MIN\n#define INT_FAST32_MAX   INT32_MAX\n#define INT_FAST64_MIN   INT64_MIN\n#define INT_FAST64_MAX   INT64_MAX\n#define UINT_FAST8_MAX   UINT8_MAX\n#define UINT_FAST16_MAX  UINT16_MAX\n#define UINT_FAST32_MAX  UINT32_MAX\n#define UINT_FAST64_MAX  UINT64_MAX\n\n// 7.18.2.4 Limits of integer types capable of holding object pointers\n#ifdef _WIN64 // [\n#  define INTPTR_MIN   INT64_MIN\n#  define INTPTR_MAX   INT64_MAX\n#  define UINTPTR_MAX  UINT64_MAX\n#else // _WIN64 ][\n#  define INTPTR_MIN   INT32_MIN\n#  define INTPTR_MAX   INT32_MAX\n#  define UINTPTR_MAX  UINT32_MAX\n#endif // _WIN64 ]\n\n// 7.18.2.5 Limits of greatest-width integer types\n#define INTMAX_MIN   INT64_MIN\n#define INTMAX_MAX   INT64_MAX\n#define UINTMAX_MAX  UINT64_MAX\n\n// 7.18.3 Limits of other integer types\n\n#ifdef _WIN64 // [\n#  define PTRDIFF_MIN  _I64_MIN\n#  define PTRDIFF_MAX  _I64_MAX\n#else  // _WIN64 ][\n#  define PTRDIFF_MIN  
_I32_MIN\n#  define PTRDIFF_MAX  _I32_MAX\n#endif  // _WIN64 ]\n\n#define SIG_ATOMIC_MIN  INT_MIN\n#define SIG_ATOMIC_MAX  INT_MAX\n\n#ifndef SIZE_MAX // [\n#  ifdef _WIN64 // [\n#     define SIZE_MAX  _UI64_MAX\n#  else // _WIN64 ][\n#     define SIZE_MAX  _UI32_MAX\n#  endif // _WIN64 ]\n#endif // SIZE_MAX ]\n\n// WCHAR_MIN and WCHAR_MAX are also defined in <wchar.h>\n#ifndef WCHAR_MIN // [\n#  define WCHAR_MIN  0\n#endif  // WCHAR_MIN ]\n#ifndef WCHAR_MAX // [\n#  define WCHAR_MAX  _UI16_MAX\n#endif  // WCHAR_MAX ]\n\n#define WINT_MIN  0\n#define WINT_MAX  _UI16_MAX\n\n#endif // __STDC_LIMIT_MACROS ]\n\n\n// 7.18.4 Limits of other integer types\n\n#if !defined(__cplusplus) || defined(__STDC_CONSTANT_MACROS) // [   See footnote 224 at page 260\n\n// 7.18.4.1 Macros for minimum-width integer constants\n\n#define INT8_C(val)  val##i8\n#define INT16_C(val) val##i16\n#define INT32_C(val) val##i32\n#define INT64_C(val) val##i64\n\n#define UINT8_C(val)  val##ui8\n#define UINT16_C(val) val##ui16\n#define UINT32_C(val) val##ui32\n#define UINT64_C(val) val##ui64\n\n// 7.18.4.2 Macros for greatest-width integer constants\n#define INTMAX_C   INT64_C\n#define UINTMAX_C  UINT64_C\n\n#endif // __STDC_CONSTANT_MACROS ]\n\n\n#endif // _MSC_STDINT_H_ ]\n"
  },
  {
    "path": "deps/jemalloc/include/msvc_compat/strings.h",
    "content": "#ifndef strings_h\n#define strings_h\n\n/* MSVC doesn't define ffs/ffsl. This dummy strings.h header is provided\n * for both */\n#ifdef _MSC_VER\n#  include <intrin.h>\n#  pragma intrinsic(_BitScanForward)\nstatic __forceinline int ffsl(long x) {\n\tunsigned long i;\n\n\tif (_BitScanForward(&i, x)) {\n\t\treturn i + 1;\n\t}\n\treturn 0;\n}\n\nstatic __forceinline int ffs(int x) {\n\treturn ffsl(x);\n}\n\n#  ifdef  _M_X64\n#    pragma intrinsic(_BitScanForward64)\n#  endif\n\nstatic __forceinline int ffsll(unsigned __int64 x) {\n\tunsigned long i;\n#ifdef  _M_X64\n\tif (_BitScanForward64(&i, x)) {\n\t\treturn i + 1;\n\t}\n\treturn 0;\n#else\n// Fallback for 32-bit build where 64-bit version not available\n// assuming little endian\n\tunion {\n\t\tunsigned __int64 ll;\n\t\tunsigned   long l[2];\n\t} s;\n\n\ts.ll = x;\n\n\tif (_BitScanForward(&i, s.l[0])) {\n\t\treturn i + 1;\n\t} else if(_BitScanForward(&i, s.l[1])) {\n\t\treturn i + 33;\n\t}\n\treturn 0;\n#endif\n}\n\n#else\n#  define ffsll(x) __builtin_ffsll(x)\n#  define ffsl(x) __builtin_ffsl(x)\n#  define ffs(x) __builtin_ffs(x)\n#endif\n\n#endif /* strings_h */\n"
  },
  {
    "path": "deps/jemalloc/include/msvc_compat/windows_extra.h",
    "content": "#ifndef MSVC_COMPAT_WINDOWS_EXTRA_H\n#define MSVC_COMPAT_WINDOWS_EXTRA_H\n\n#include <errno.h>\n\n#endif /* MSVC_COMPAT_WINDOWS_EXTRA_H */\n"
  },
  {
    "path": "deps/jemalloc/jemalloc.pc.in",
    "content": "prefix=@prefix@\nexec_prefix=@exec_prefix@\nlibdir=@libdir@\nincludedir=@includedir@\ninstall_suffix=@install_suffix@\n\nName: jemalloc\nDescription: A general purpose malloc(3) implementation that emphasizes fragmentation avoidance and scalable concurrency support.\nURL: http://jemalloc.net/\nVersion: @jemalloc_version_major@.@jemalloc_version_minor@.@jemalloc_version_bugfix@_@jemalloc_version_nrev@\nCflags: -I${includedir}\nLibs: -L${libdir} -ljemalloc${install_suffix}\n"
  },
  {
    "path": "deps/jemalloc/m4/ax_cxx_compile_stdcxx.m4",
    "content": "# ===========================================================================\n#  https://www.gnu.org/software/autoconf-archive/ax_cxx_compile_stdcxx.html\n# ===========================================================================\n#\n# SYNOPSIS\n#\n#   AX_CXX_COMPILE_STDCXX(VERSION, [ext|noext], [mandatory|optional])\n#\n# DESCRIPTION\n#\n#   Check for baseline language coverage in the compiler for the specified\n#   version of the C++ standard.  If necessary, add switches to CXX and\n#   CXXCPP to enable support.  VERSION may be '11' (for the C++11 standard)\n#   or '14' (for the C++14 standard).\n#\n#   The second argument, if specified, indicates whether you insist on an\n#   extended mode (e.g. -std=gnu++11) or a strict conformance mode (e.g.\n#   -std=c++11).  If neither is specified, you get whatever works, with\n#   preference for an extended mode.\n#\n#   The third argument, if specified 'mandatory' or if left unspecified,\n#   indicates that baseline support for the specified C++ standard is\n#   required and that the macro should error out if no mode with that\n#   support is found.  
If specified 'optional', then configuration proceeds\n#   regardless, after defining HAVE_CXX${VERSION} if and only if a\n#   supporting mode is found.\n#\n# LICENSE\n#\n#   Copyright (c) 2008 Benjamin Kosnik <bkoz@redhat.com>\n#   Copyright (c) 2012 Zack Weinberg <zackw@panix.com>\n#   Copyright (c) 2013 Roy Stogner <roystgnr@ices.utexas.edu>\n#   Copyright (c) 2014, 2015 Google Inc.; contributed by Alexey Sokolov <sokolov@google.com>\n#   Copyright (c) 2015 Paul Norman <penorman@mac.com>\n#   Copyright (c) 2015 Moritz Klammler <moritz@klammler.eu>\n#   Copyright (c) 2016, 2018 Krzesimir Nowak <qdlacz@gmail.com>\n#   Copyright (c) 2019 Enji Cooper <yaneurabeya@gmail.com>\n#\n#   Copying and distribution of this file, with or without modification, are\n#   permitted in any medium without royalty provided the copyright notice\n#   and this notice are preserved.  This file is offered as-is, without any\n#   warranty.\n\n#serial 11\n\ndnl  This macro is based on the code from the AX_CXX_COMPILE_STDCXX_11 macro\ndnl  (serial version number 13).\n\nAC_DEFUN([AX_CXX_COMPILE_STDCXX], [dnl\n  m4_if([$1], [11], [ax_cxx_compile_alternatives=\"11 0x\"],\n        [$1], [14], [ax_cxx_compile_alternatives=\"14 1y\"],\n        [$1], [17], [ax_cxx_compile_alternatives=\"17 1z\"],\n        [m4_fatal([invalid first argument `$1' to AX_CXX_COMPILE_STDCXX])])dnl\n  m4_if([$2], [], [],\n        [$2], [ext], [],\n        [$2], [noext], [],\n        [m4_fatal([invalid second argument `$2' to AX_CXX_COMPILE_STDCXX])])dnl\n  m4_if([$3], [], [ax_cxx_compile_cxx$1_required=true],\n        [$3], [mandatory], [ax_cxx_compile_cxx$1_required=true],\n        [$3], [optional], [ax_cxx_compile_cxx$1_required=false],\n        [m4_fatal([invalid third argument `$3' to AX_CXX_COMPILE_STDCXX])])\n  AC_LANG_PUSH([C++])dnl\n  ac_success=no\n\n  m4_if([$2], [noext], [], [dnl\n  if test x$ac_success = xno; then\n    for alternative in ${ax_cxx_compile_alternatives}; do\n      
switch=\"-std=gnu++${alternative}\"\n      cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx$1_$switch])\n      AC_CACHE_CHECK(whether $CXX supports C++$1 features with $switch,\n                     $cachevar,\n        [ac_save_CXX=\"$CXX\"\n         CXX=\"$CXX $switch\"\n         AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_testbody_$1])],\n          [eval $cachevar=yes],\n          [eval $cachevar=no])\n         CXX=\"$ac_save_CXX\"])\n      if eval test x\\$$cachevar = xyes; then\n        CXX=\"$CXX $switch\"\n        if test -n \"$CXXCPP\" ; then\n          CXXCPP=\"$CXXCPP $switch\"\n        fi\n        ac_success=yes\n        break\n      fi\n    done\n  fi])\n\n  m4_if([$2], [ext], [], [dnl\n  if test x$ac_success = xno; then\n    dnl HP's aCC needs +std=c++11 according to:\n    dnl http://h21007.www2.hp.com/portal/download/files/unprot/aCxx/PDF_Release_Notes/769149-001.pdf\n    dnl Cray's crayCC needs \"-h std=c++11\"\n    for alternative in ${ax_cxx_compile_alternatives}; do\n      for switch in -std=c++${alternative} +std=c++${alternative} \"-h std=c++${alternative}\"; do\n        cachevar=AS_TR_SH([ax_cv_cxx_compile_cxx$1_$switch])\n        AC_CACHE_CHECK(whether $CXX supports C++$1 features with $switch,\n                       $cachevar,\n          [ac_save_CXX=\"$CXX\"\n           CXX=\"$CXX $switch\"\n           AC_COMPILE_IFELSE([AC_LANG_SOURCE([_AX_CXX_COMPILE_STDCXX_testbody_$1])],\n            [eval $cachevar=yes],\n            [eval $cachevar=no])\n           CXX=\"$ac_save_CXX\"])\n        if eval test x\\$$cachevar = xyes; then\n          CXX=\"$CXX $switch\"\n          if test -n \"$CXXCPP\" ; then\n            CXXCPP=\"$CXXCPP $switch\"\n          fi\n          ac_success=yes\n          break\n        fi\n      done\n      if test x$ac_success = xyes; then\n        break\n      fi\n    done\n  fi])\n  AC_LANG_POP([C++])\n  if test x$ax_cxx_compile_cxx$1_required = xtrue; then\n    if test x$ac_success = xno; then\n      
AC_MSG_ERROR([*** A compiler with support for C++$1 language features is required.])\n    fi\n  fi\n  if test x$ac_success = xno; then\n    HAVE_CXX$1=0\n    AC_MSG_NOTICE([No compiler with C++$1 support was found])\n  else\n    HAVE_CXX$1=1\n    AC_DEFINE(HAVE_CXX$1,1,\n              [define if the compiler supports basic C++$1 syntax])\n  fi\n  AC_SUBST(HAVE_CXX$1)\n])\n\n\ndnl  Test body for checking C++11 support\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_11],\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_11\n)\n\n\ndnl  Test body for checking C++14 support\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_14],\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_11\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_14\n)\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_17],\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_11\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_14\n  _AX_CXX_COMPILE_STDCXX_testbody_new_in_17\n)\n\ndnl  Tests for new features in C++11\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_11], [[\n\n// If the compiler admits that it is not ready for C++11, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201103L\n\n#error \"This is not a C++11 compiler\"\n\n#else\n\nnamespace cxx11\n{\n\n  namespace test_static_assert\n  {\n\n    template <typename T>\n    struct check\n    {\n      static_assert(sizeof(int) <= sizeof(T), \"not big enough\");\n    };\n\n  }\n\n  namespace test_final_override\n  {\n\n    struct Base\n    {\n      virtual ~Base() {}\n      virtual void f() {}\n    };\n\n    struct Derived : public Base\n    {\n      virtual ~Derived() override {}\n      virtual void f() override {}\n    };\n\n  }\n\n  namespace test_double_right_angle_brackets\n  {\n\n    template < typename T >\n    struct check {};\n\n    typedef check<void> single_type;\n    typedef check<check<void>> double_type;\n    typedef check<check<check<void>>> triple_type;\n    typedef 
check<check<check<check<void>>>> quadruple_type;\n\n  }\n\n  namespace test_decltype\n  {\n\n    int\n    f()\n    {\n      int a = 1;\n      decltype(a) b = 2;\n      return a + b;\n    }\n\n  }\n\n  namespace test_type_deduction\n  {\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static const bool value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static const bool value = true;\n    };\n\n    template < typename T1, typename T2 >\n    auto\n    add(T1 a1, T2 a2) -> decltype(a1 + a2)\n    {\n      return a1 + a2;\n    }\n\n    int\n    test(const int c, volatile int v)\n    {\n      static_assert(is_same<int, decltype(0)>::value == true, \"\");\n      static_assert(is_same<int, decltype(c)>::value == false, \"\");\n      static_assert(is_same<int, decltype(v)>::value == false, \"\");\n      auto ac = c;\n      auto av = v;\n      auto sumi = ac + av + 'x';\n      auto sumf = ac + av + 1.0;\n      static_assert(is_same<int, decltype(ac)>::value == true, \"\");\n      static_assert(is_same<int, decltype(av)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumi)>::value == true, \"\");\n      static_assert(is_same<int, decltype(sumf)>::value == false, \"\");\n      static_assert(is_same<int, decltype(add(c, v))>::value == true, \"\");\n      return (sumf > 0.0) ? sumi : add(c, v);\n    }\n\n  }\n\n  namespace test_noexcept\n  {\n\n    int f() { return 0; }\n    int g() noexcept { return 0; }\n\n    static_assert(noexcept(f()) == false, \"\");\n    static_assert(noexcept(g()) == true, \"\");\n\n  }\n\n  namespace test_constexpr\n  {\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c_r(const CharT *const s, const unsigned long acc) noexcept\n    {\n      return *s ? 
strlen_c_r(s + 1, acc + 1) : acc;\n    }\n\n    template < typename CharT >\n    unsigned long constexpr\n    strlen_c(const CharT *const s) noexcept\n    {\n      return strlen_c_r(s, 0UL);\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"1\") == 1UL, \"\");\n    static_assert(strlen_c(\"example\") == 7UL, \"\");\n    static_assert(strlen_c(\"another\\0example\") == 7UL, \"\");\n\n  }\n\n  namespace test_rvalue_references\n  {\n\n    template < int N >\n    struct answer\n    {\n      static constexpr int value = N;\n    };\n\n    answer<1> f(int&)       { return answer<1>(); }\n    answer<2> f(const int&) { return answer<2>(); }\n    answer<3> f(int&&)      { return answer<3>(); }\n\n    void\n    test()\n    {\n      int i = 0;\n      const int c = 0;\n      static_assert(decltype(f(i))::value == 1, \"\");\n      static_assert(decltype(f(c))::value == 2, \"\");\n      static_assert(decltype(f(0))::value == 3, \"\");\n    }\n\n  }\n\n  namespace test_uniform_initialization\n  {\n\n    struct test\n    {\n      static const int zero {};\n      static const int one {1};\n    };\n\n    static_assert(test::zero == 0, \"\");\n    static_assert(test::one == 1, \"\");\n\n  }\n\n  namespace test_lambdas\n  {\n\n    void\n    test1()\n    {\n      auto lambda1 = [](){};\n      auto lambda2 = lambda1;\n      lambda1();\n      lambda2();\n    }\n\n    int\n    test2()\n    {\n      auto a = [](int i, int j){ return i + j; }(1, 2);\n      auto b = []() -> int { return '0'; }();\n      auto c = [=](){ return a + b; }();\n      auto d = [&](){ return c; }();\n      auto e = [a, &b](int x) mutable {\n        const auto identity = [](int y){ return y; };\n        for (auto i = 0; i < a; ++i)\n          a += b--;\n        return x + identity(a + b);\n      }(0);\n      return a + b + c + d + e;\n    }\n\n    int\n    test3()\n    {\n      const auto nullary = [](){ return 0; };\n      const auto unary = [](int x){ return x; };\n      using 
nullary_t = decltype(nullary);\n      using unary_t = decltype(unary);\n      const auto higher1st = [](nullary_t f){ return f(); };\n      const auto higher2nd = [unary](nullary_t f1){\n        return [unary, f1](unary_t f2){ return f2(unary(f1())); };\n      };\n      return higher1st(nullary) + higher2nd(nullary)(unary);\n    }\n\n  }\n\n  namespace test_variadic_templates\n  {\n\n    template <int...>\n    struct sum;\n\n    template <int N0, int... N1toN>\n    struct sum<N0, N1toN...>\n    {\n      static constexpr auto value = N0 + sum<N1toN...>::value;\n    };\n\n    template <>\n    struct sum<>\n    {\n      static constexpr auto value = 0;\n    };\n\n    static_assert(sum<>::value == 0, \"\");\n    static_assert(sum<1>::value == 1, \"\");\n    static_assert(sum<23>::value == 23, \"\");\n    static_assert(sum<1, 2>::value == 3, \"\");\n    static_assert(sum<5, 5, 11>::value == 21, \"\");\n    static_assert(sum<2, 3, 5, 7, 11, 13>::value == 41, \"\");\n\n  }\n\n  // http://stackoverflow.com/questions/13728184/template-aliases-and-sfinae\n  // Clang 3.1 fails with headers of libstd++ 4.8.3 when using std::function\n  // because of this.\n  namespace test_template_alias_sfinae\n  {\n\n    struct foo {};\n\n    template<typename T>\n    using member = typename T::member_type;\n\n    template<typename T>\n    void func(...) 
{}\n\n    template<typename T>\n    void func(member<T>*) {}\n\n    void test();\n\n    void test() { func<foo>(0); }\n\n  }\n\n}  // namespace cxx11\n\n#endif  // __cplusplus >= 201103L\n\n]])\n\n\ndnl  Tests for new features in C++14\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_14], [[\n\n// If the compiler admits that it is not ready for C++14, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201402L\n\n#error \"This is not a C++14 compiler\"\n\n#else\n\nnamespace cxx14\n{\n\n  namespace test_polymorphic_lambdas\n  {\n\n    int\n    test()\n    {\n      const auto lambda = [](auto&&... args){\n        const auto istiny = [](auto x){\n          return (sizeof(x) == 1UL) ? 1 : 0;\n        };\n        const int aretiny[] = { istiny(args)... };\n        return aretiny[0];\n      };\n      return lambda(1, 1L, 1.0f, '1');\n    }\n\n  }\n\n  namespace test_binary_literals\n  {\n\n    constexpr auto ivii = 0b0000000000101010;\n    static_assert(ivii == 42, \"wrong value\");\n\n  }\n\n  namespace test_generalized_constexpr\n  {\n\n    template < typename CharT >\n    constexpr unsigned long\n    strlen_c(const CharT *const s) noexcept\n    {\n      auto length = 0UL;\n      for (auto p = s; *p; ++p)\n        ++length;\n      return length;\n    }\n\n    static_assert(strlen_c(\"\") == 0UL, \"\");\n    static_assert(strlen_c(\"x\") == 1UL, \"\");\n    static_assert(strlen_c(\"test\") == 4UL, \"\");\n    static_assert(strlen_c(\"another\\0test\") == 7UL, \"\");\n\n  }\n\n  namespace test_lambda_init_capture\n  {\n\n    int\n    test()\n    {\n      auto x = 0;\n      const auto lambda1 = [a = x](int b){ return a + b; };\n      const auto lambda2 = [a = lambda1(x)](){ return a; };\n      return lambda2();\n    }\n\n  }\n\n  namespace test_digit_separators\n  {\n\n    constexpr auto ten_million = 100'000'000;\n    static_assert(ten_million == 100000000, \"\");\n\n  
}\n\n  namespace test_return_type_deduction\n  {\n\n    auto f(int& x) { return x; }\n    decltype(auto) g(int& x) { return x; }\n\n    template < typename T1, typename T2 >\n    struct is_same\n    {\n      static constexpr auto value = false;\n    };\n\n    template < typename T >\n    struct is_same<T, T>\n    {\n      static constexpr auto value = true;\n    };\n\n    int\n    test()\n    {\n      auto x = 0;\n      static_assert(is_same<int, decltype(f(x))>::value, \"\");\n      static_assert(is_same<int&, decltype(g(x))>::value, \"\");\n      return x;\n    }\n\n  }\n\n}  // namespace cxx14\n\n#endif  // __cplusplus >= 201402L\n\n]])\n\n\ndnl  Tests for new features in C++17\n\nm4_define([_AX_CXX_COMPILE_STDCXX_testbody_new_in_17], [[\n\n// If the compiler admits that it is not ready for C++17, why torture it?\n// Hopefully, this will speed up the test.\n\n#ifndef __cplusplus\n\n#error \"This is not a C++ compiler\"\n\n#elif __cplusplus < 201703L\n\n#error \"This is not a C++17 compiler\"\n\n#else\n\n#include <initializer_list>\n#include <utility>\n#include <type_traits>\n\nnamespace cxx17\n{\n\n  namespace test_constexpr_lambdas\n  {\n\n    constexpr int foo = [](){return 42;}();\n\n  }\n\n  namespace test::nested_namespace::definitions\n  {\n\n  }\n\n  namespace test_fold_expression\n  {\n\n    template<typename... Args>\n    int multiply(Args... args)\n    {\n      return (args * ... * 1);\n    }\n\n    template<typename... Args>\n    bool all(Args... 
args)\n    {\n      return (args && ...);\n    }\n\n  }\n\n  namespace test_extended_static_assert\n  {\n\n    static_assert (true);\n\n  }\n\n  namespace test_auto_brace_init_list\n  {\n\n    auto foo = {5};\n    auto bar {5};\n\n    static_assert(std::is_same<std::initializer_list<int>, decltype(foo)>::value);\n    static_assert(std::is_same<int, decltype(bar)>::value);\n  }\n\n  namespace test_typename_in_template_template_parameter\n  {\n\n    template<template<typename> typename X> struct D;\n\n  }\n\n  namespace test_fallthrough_nodiscard_maybe_unused_attributes\n  {\n\n    int f1()\n    {\n      return 42;\n    }\n\n    [[nodiscard]] int f2()\n    {\n      [[maybe_unused]] auto unused = f1();\n\n      switch (f1())\n      {\n      case 17:\n        f1();\n        [[fallthrough]];\n      case 42:\n        f1();\n      }\n      return f1();\n    }\n\n  }\n\n  namespace test_extended_aggregate_initialization\n  {\n\n    struct base1\n    {\n      int b1, b2 = 42;\n    };\n\n    struct base2\n    {\n      base2() {\n        b3 = 42;\n      }\n      int b3;\n    };\n\n    struct derived : base1, base2\n    {\n        int d;\n    };\n\n    derived d1 {{1, 2}, {}, 4};  // full initialization\n    derived d2 {{}, {}, 4};      // value-initialized bases\n\n  }\n\n  namespace test_general_range_based_for_loop\n  {\n\n    struct iter\n    {\n      int i;\n\n      int& operator* ()\n      {\n        return i;\n      }\n\n      const int& operator* () const\n      {\n        return i;\n      }\n\n      iter& operator++()\n      {\n        ++i;\n        return *this;\n      }\n    };\n\n    struct sentinel\n    {\n      int i;\n    };\n\n    bool operator== (const iter& i, const sentinel& s)\n    {\n      return i.i == s.i;\n    }\n\n    bool operator!= (const iter& i, const sentinel& s)\n    {\n      return !(i == s);\n    }\n\n    struct range\n    {\n      iter begin() const\n      {\n        return {0};\n      }\n\n      sentinel end() const\n      {\n        return 
{5};\n      }\n    };\n\n    void f()\n    {\n      range r {};\n\n      for (auto i : r)\n      {\n        [[maybe_unused]] auto v = i;\n      }\n    }\n\n  }\n\n  namespace test_lambda_capture_asterisk_this_by_value\n  {\n\n    struct t\n    {\n      int i;\n      int foo()\n      {\n        return [*this]()\n        {\n          return i;\n        }();\n      }\n    };\n\n  }\n\n  namespace test_enum_class_construction\n  {\n\n    enum class byte : unsigned char\n    {};\n\n    byte foo {42};\n\n  }\n\n  namespace test_constexpr_if\n  {\n\n    template <bool cond>\n    int f ()\n    {\n      if constexpr(cond)\n      {\n        return 13;\n      }\n      else\n      {\n        return 42;\n      }\n    }\n\n  }\n\n  namespace test_selection_statement_with_initializer\n  {\n\n    int f()\n    {\n      return 13;\n    }\n\n    int f2()\n    {\n      if (auto i = f(); i > 0)\n      {\n        return 3;\n      }\n\n      switch (auto i = f(); i + 4)\n      {\n      case 17:\n        return 2;\n\n      default:\n        return 1;\n      }\n    }\n\n  }\n\n  namespace test_template_argument_deduction_for_class_templates\n  {\n\n    template <typename T1, typename T2>\n    struct pair\n    {\n      pair (T1 p1, T2 p2)\n        : m1 {p1},\n          m2 {p2}\n      {}\n\n      T1 m1;\n      T2 m2;\n    };\n\n    void f()\n    {\n      [[maybe_unused]] auto p = pair{13, 42u};\n    }\n\n  }\n\n  namespace test_non_type_auto_template_parameters\n  {\n\n    template <auto n>\n    struct B\n    {};\n\n    B<5> b1;\n    B<'a'> b2;\n\n  }\n\n  namespace test_structured_bindings\n  {\n\n    int arr[2] = { 1, 2 };\n    std::pair<int, int> pr = { 1, 2 };\n\n    auto f1() -> int(&)[2]\n    {\n      return arr;\n    }\n\n    auto f2() -> std::pair<int, int>&\n    {\n      return pr;\n    }\n\n    struct S\n    {\n      int x1 : 2;\n      volatile double y1;\n    };\n\n    S f3()\n    {\n      return {};\n    }\n\n    auto [ x1, y1 ] = f1();\n    auto& [ xr1, yr1 ] = f1();\n    auto [ 
x2, y2 ] = f2();\n    auto& [ xr2, yr2 ] = f2();\n    const auto [ x3, y3 ] = f3();\n\n  }\n\n  namespace test_exception_spec_type_system\n  {\n\n    struct Good {};\n    struct Bad {};\n\n    void g1() noexcept;\n    void g2();\n\n    template<typename T>\n    Bad\n    f(T*, T*);\n\n    template<typename T1, typename T2>\n    Good\n    f(T1*, T2*);\n\n    static_assert (std::is_same_v<Good, decltype(f(g1, g2))>);\n\n  }\n\n  namespace test_inline_variables\n  {\n\n    template<class T> void f(T)\n    {}\n\n    template<class T> inline T g(T)\n    {\n      return T{};\n    }\n\n    template<> inline void f<>(int)\n    {}\n\n    template<> int g<>(int)\n    {\n      return 5;\n    }\n\n  }\n\n}  // namespace cxx17\n\n#endif  // __cplusplus < 201703L\n\n]])\n"
  },
  {
    "path": "deps/jemalloc/msvc/ReadMe.txt",
    "content": "\nHow to build jemalloc for Windows\n=================================\n\n1. Install Cygwin with at least the following packages:\n   * autoconf\n   * autogen\n   * gawk\n   * grep\n   * sed\n\n2. Install Visual Studio 2015 or 2017 with Visual C++\n\n3. Add Cygwin\\bin to the PATH environment variable\n\n4. Open \"x64 Native Tools Command Prompt for VS 2017\"\n   (note: x86/x64 doesn't matter at this point)\n\n5. Generate header files:\n   sh -c \"CC=cl ./autogen.sh\"\n\n6. Now the project can be opened and built in Visual Studio:\n   msvc\\jemalloc_vc2017.sln\n"
  },
  {
    "path": "deps/jemalloc/msvc/jemalloc_vc2015.sln",
    "content": "﻿\nMicrosoft Visual Studio Solution File, Format Version 12.00\n# Visual Studio 14\nVisualStudioVersion = 14.0.24720.0\nMinimumVisualStudioVersion = 10.0.40219.1\nProject(\"{2150E333-8FDC-42A3-9474-1A3956D46DE8}\") = \"Solution Items\", \"Solution Items\", \"{70A99006-6DE9-472B-8F83-4CEE6C616DF3}\"\n\tProjectSection(SolutionItems) = preProject\n\t\tReadMe.txt = ReadMe.txt\n\tEndProjectSection\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"jemalloc\", \"projects\\vc2015\\jemalloc\\jemalloc.vcxproj\", \"{8D6BB292-9E1C-413D-9F98-4864BDC1514A}\"\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"test_threads\", \"projects\\vc2015\\test_threads\\test_threads.vcxproj\", \"{09028CFD-4EB7-491D-869C-0708DB97ED44}\"\nEndProject\nGlobal\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\n\t\tDebug|x64 = Debug|x64\n\t\tDebug|x86 = Debug|x86\n\t\tDebug-static|x64 = Debug-static|x64\n\t\tDebug-static|x86 = Debug-static|x86\n\t\tRelease|x64 = Release|x64\n\t\tRelease|x86 = Release|x86\n\t\tRelease-static|x64 = Release-static|x64\n\t\tRelease-static|x86 = Release-static|x86\n\tEndGlobalSection\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.Build.0 = Debug|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.ActiveCfg = Debug|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.Build.0 = Debug|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.ActiveCfg = Debug-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.Build.0 = Debug-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.ActiveCfg = Debug-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.Build.0 = Debug-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.ActiveCfg = 
Release|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.Build.0 = Release|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.ActiveCfg = Release|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.Build.0 = Release|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.ActiveCfg = Release-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.Build.0 = Release-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.ActiveCfg = Release-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.Build.0 = Release-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.Build.0 = Debug|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.ActiveCfg = Debug|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.Build.0 = Debug|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.ActiveCfg = Debug-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.Build.0 = Debug-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.ActiveCfg = Debug-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.Build.0 = Debug-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.ActiveCfg = Release|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.Build.0 = Release|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.ActiveCfg = Release|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.Build.0 = Release|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.ActiveCfg = Release-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.Build.0 = Release-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.ActiveCfg = Release-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.Build.0 = 
Release-static|Win32\n\tEndGlobalSection\n\tGlobalSection(SolutionProperties) = preSolution\n\t\tHideSolutionNode = FALSE\n\tEndGlobalSection\nEndGlobal\n"
  },
  {
    "path": "deps/jemalloc/msvc/jemalloc_vc2017.sln",
    "content": "﻿\nMicrosoft Visual Studio Solution File, Format Version 12.00\n# Visual Studio 14\nVisualStudioVersion = 14.0.24720.0\nMinimumVisualStudioVersion = 10.0.40219.1\nProject(\"{2150E333-8FDC-42A3-9474-1A3956D46DE8}\") = \"Solution Items\", \"Solution Items\", \"{70A99006-6DE9-472B-8F83-4CEE6C616DF3}\"\n\tProjectSection(SolutionItems) = preProject\n\t\tReadMe.txt = ReadMe.txt\n\tEndProjectSection\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"jemalloc\", \"projects\\vc2017\\jemalloc\\jemalloc.vcxproj\", \"{8D6BB292-9E1C-413D-9F98-4864BDC1514A}\"\nEndProject\nProject(\"{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}\") = \"test_threads\", \"projects\\vc2017\\test_threads\\test_threads.vcxproj\", \"{09028CFD-4EB7-491D-869C-0708DB97ED44}\"\nEndProject\nGlobal\n\tGlobalSection(SolutionConfigurationPlatforms) = preSolution\n\t\tDebug|x64 = Debug|x64\n\t\tDebug|x86 = Debug|x86\n\t\tDebug-static|x64 = Debug-static|x64\n\t\tDebug-static|x86 = Debug-static|x86\n\t\tRelease|x64 = Release|x64\n\t\tRelease|x86 = Release|x86\n\t\tRelease-static|x64 = Release-static|x64\n\t\tRelease-static|x86 = Release-static|x86\n\tEndGlobalSection\n\tGlobalSection(ProjectConfigurationPlatforms) = postSolution\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x64.Build.0 = Debug|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.ActiveCfg = Debug|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug|x86.Build.0 = Debug|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.ActiveCfg = Debug-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x64.Build.0 = Debug-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.ActiveCfg = Debug-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Debug-static|x86.Build.0 = Debug-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.ActiveCfg = 
Release|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x64.Build.0 = Release|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.ActiveCfg = Release|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release|x86.Build.0 = Release|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.ActiveCfg = Release-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x64.Build.0 = Release-static|x64\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.ActiveCfg = Release-static|Win32\n\t\t{8D6BB292-9E1C-413D-9F98-4864BDC1514A}.Release-static|x86.Build.0 = Release-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.ActiveCfg = Debug|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x64.Build.0 = Debug|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.ActiveCfg = Debug|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug|x86.Build.0 = Debug|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.ActiveCfg = Debug-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x64.Build.0 = Debug-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.ActiveCfg = Debug-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Debug-static|x86.Build.0 = Debug-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.ActiveCfg = Release|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x64.Build.0 = Release|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.ActiveCfg = Release|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release|x86.Build.0 = Release|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.ActiveCfg = Release-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x64.Build.0 = Release-static|x64\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.ActiveCfg = Release-static|Win32\n\t\t{09028CFD-4EB7-491D-869C-0708DB97ED44}.Release-static|x86.Build.0 = 
Release-static|Win32\n\tEndGlobalSection\n\tGlobalSection(SolutionProperties) = preSolution\n\t\tHideSolutionNode = FALSE\n\tEndGlobalSection\nEndGlobal\n"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2015/jemalloc/jemalloc.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"14.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug-static|Win32\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug-static|x64\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|Win32\">\n      <Configuration>Release-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|x64\">\n      <Configuration>Release-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\arena.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\background_thread.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\base.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin_info.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bitmap.c\" />\n    <ClCompile 
Include=\"..\\..\\..\\..\\src\\buf_writer.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\cache_bin.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ckh.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\counter.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ctl.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\decay.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\div.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ecache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata_cache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ehooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\emap.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\eset.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\exp_grow.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_dss.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_mmap.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\fxp.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hook.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa_hooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpdata.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\inspect.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\jemalloc.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\large.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\log.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\malloc_io.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\mutex.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\nstime.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa_extra.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pai.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pac.c\" />\n    
<ClCompile Include=\"..\\..\\..\\..\\src\\pages.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\peak_event.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_data.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_log.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_recent.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_stats.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_sys.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\psset.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\rtree.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\safety_check.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san_bump.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sc.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sec.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\stats.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sz.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tcache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\test_hooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\thread_event.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ticker.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tsd.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\witness.c\" />\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{8D6BB292-9E1C-413D-9F98-4864BDC1514A}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>jemalloc</RootNamespace>\n    <WindowsTargetPlatformVersion>8.1</WindowsTargetPlatformVersion>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    
<PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    
<ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"Shared\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" 
Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)d</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  
<PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)d</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      
<PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      
<PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <DebugInformationFormat>OldStyle</DebugInformationFormat>\n      <MinimalRebuild>false</MinimalRebuild>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      
<IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  
<ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <DebugInformationFormat>OldStyle</DebugInformationFormat>\n    
</ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2015/jemalloc/jemalloc.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup>\n    <Filter Include=\"Source Files\">\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\n    </Filter>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\arena.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\background_thread.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\base.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bitmap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\buf_writer.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\cache_bin.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ckh.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\counter.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ctl.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\decay.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\div.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\emap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\exp_grow.c\">\n      <Filter>Source Files</Filter>\n    
</ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_dss.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_mmap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\fxp.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hook.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa_hooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpdata.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\inspect.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\jemalloc.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\large.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\log.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\malloc_io.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\mutex.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\nstime.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa_extra.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile 
Include=\"..\\..\\..\\..\\src\\pai.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pac.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pages.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\peak_event.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_data.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_log.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_recent.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_stats.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_sys.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\psset.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\rtree.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\safety_check.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sc.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sec.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\stats.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sz.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tcache.c\">\n      
<Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\test_hooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\thread_event.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ticker.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tsd.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\witness.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin_info.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ecache.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata_cache.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ehooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\eset.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san_bump.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n  </ItemGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2015/test_threads/test_threads.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"14.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug-static|Win32\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug-static|x64\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|Win32\">\n      <Configuration>Release-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|x64\">\n      <Configuration>Release-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{09028CFD-4EB7-491D-869C-0708DB97ED44}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>test_threads</RootNamespace>\n    <WindowsTargetPlatformVersion>8.1</WindowsTargetPlatformVersion>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup 
Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    
<CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v140</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"Shared\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup 
Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    
<OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      
<PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  
</ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      
<AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      
<FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      
<OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads.cpp\" />\n    <ClCompile 
Include=\"..\\..\\..\\test_threads\\test_threads_main.cpp\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ProjectReference Include=\"..\\jemalloc\\jemalloc.vcxproj\">\n      <Project>{8d6bb292-9e1c-413d-9f98-4864bdc1514a}</Project>\n    </ProjectReference>\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"..\\..\\..\\test_threads\\test_threads.h\" />\n  </ItemGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2015/test_threads/test_threads.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup>\n    <Filter Include=\"Source Files\">\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\n    </Filter>\n    <Filter Include=\"Header Files\">\n      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>\n      <Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>\n    </Filter>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads.cpp\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads_main.cpp\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"..\\..\\..\\test_threads\\test_threads.h\">\n      <Filter>Header Files</Filter>\n    </ClInclude>\n  </ItemGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2017/jemalloc/jemalloc.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"15.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug-static|Win32\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug-static|x64\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|Win32\">\n      <Configuration>Release-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|x64\">\n      <Configuration>Release-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\arena.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\background_thread.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\base.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin_info.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bitmap.c\" />\n    <ClCompile 
Include=\"..\\..\\..\\..\\src\\buf_writer.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\cache_bin.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ckh.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\counter.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ctl.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\decay.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\div.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ecache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata_cache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ehooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\emap.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\eset.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\exp_grow.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_dss.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_mmap.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\fxp.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hook.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa_hooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpdata.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\inspect.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\jemalloc.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\large.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\log.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\malloc_io.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\mutex.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\nstime.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa_extra.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pai.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pac.c\" />\n    
<ClCompile Include=\"..\\..\\..\\..\\src\\pages.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\peak_event.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_data.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_log.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_recent.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_stats.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_sys.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\psset.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\rtree.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\safety_check.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san_bump.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sc.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sec.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\stats.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sz.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tcache.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\test_hooks.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\thread_event.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ticker.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tsd.c\" />\n    <ClCompile Include=\"..\\..\\..\\..\\src\\witness.c\" />\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{8D6BB292-9E1C-413D-9F98-4864BDC1514A}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>jemalloc</RootNamespace>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    
<CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>DynamicLibrary</ConfigurationType>\n    
<UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>StaticLibrary</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"Shared\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" 
Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)d</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n  
  <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-$(PlatformToolset)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)d</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <TargetName>$(ProjectName)-vc$(PlatformToolsetVersion)-$(Configuration)</TargetName>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      
<AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;JEMALLOC_DEBUG;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      
<AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;JEMALLOC_DEBUG;_REENTRANT;JEMALLOC_EXPORT=;_DEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <DebugInformationFormat>OldStyle</DebugInformationFormat>\n      <MinimalRebuild>false</MinimalRebuild>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      
<AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n  
    <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;_WINDLL;DLLEXPORT;NDEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <ProgramDataBaseFileName>$(OutputPath)$(TargetName).pdb</ProgramDataBaseFileName>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_NO_PRIVATE_NAMESPACE;_REENTRANT;JEMALLOC_EXPORT=;NDEBUG;_LIB;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n      <DisableSpecificWarnings>4090;4146;4267;4334</DisableSpecificWarnings>\n      <DebugInformationFormat>OldStyle</DebugInformationFormat>\n    </ClCompile>\n    <Link>\n      <SubSystem>Windows</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n    </Link>\n  
</ItemDefinitionGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2017/jemalloc/jemalloc.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup>\n    <Filter Include=\"Source Files\">\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\n    </Filter>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\arena.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\background_thread.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\base.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bitmap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\buf_writer.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\cache_bin.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ckh.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\counter.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ctl.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\decay.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\div.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\emap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\exp_grow.c\">\n      <Filter>Source Files</Filter>\n    
</ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_dss.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\extent_mmap.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\fxp.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hook.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpa_hooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\hpdata.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\inspect.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\jemalloc.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\large.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\log.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\malloc_io.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\mutex.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\nstime.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pa_extra.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile 
Include=\"..\\..\\..\\..\\src\\pai.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pac.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\pages.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\peak_event.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_data.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_log.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_recent.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_stats.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\prof_sys.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\psset.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\rtree.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\safety_check.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sc.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sec.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\stats.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\sz.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tcache.c\">\n      
<Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\test_hooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\thread_event.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ticker.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\tsd.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\witness.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\bin_info.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ecache.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\edata_cache.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\ehooks.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\eset.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\..\\src\\san_bump.c\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n  </ItemGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2017/test_threads/test_threads.vcxproj",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"15.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup Label=\"ProjectConfigurations\">\n    <ProjectConfiguration Include=\"Debug-static|Win32\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug-static|x64\">\n      <Configuration>Debug-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|Win32\">\n      <Configuration>Debug</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|Win32\">\n      <Configuration>Release-static</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release-static|x64\">\n      <Configuration>Release-static</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|Win32\">\n      <Configuration>Release</Configuration>\n      <Platform>Win32</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Debug|x64\">\n      <Configuration>Debug</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n    <ProjectConfiguration Include=\"Release|x64\">\n      <Configuration>Release</Configuration>\n      <Platform>x64</Platform>\n    </ProjectConfiguration>\n  </ItemGroup>\n  <PropertyGroup Label=\"Globals\">\n    <ProjectGuid>{09028CFD-4EB7-491D-869C-0708DB97ED44}</ProjectGuid>\n    <Keyword>Win32Proj</Keyword>\n    <RootNamespace>test_threads</RootNamespace>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n    
<ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>true</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup 
Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"Configuration\">\n    <ConfigurationType>Application</ConfigurationType>\n    <UseDebugLibraries>false</UseDebugLibraries>\n    <PlatformToolset>v141</PlatformToolset>\n    <WholeProgramOptimization>true</WholeProgramOptimization>\n    <CharacterSet>MultiByte</CharacterSet>\n  </PropertyGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n  <ImportGroup Label=\"ExtensionSettings\">\n  </ImportGroup>\n  <ImportGroup Label=\"Shared\">\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\" Label=\"PropertySheets\">\n    <Import 
Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <ImportGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\" Label=\"PropertySheets\">\n    <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n  </ImportGroup>\n  <PropertyGroup Label=\"UserMacros\" />\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    
<LinkIncremental>true</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <LinkIncremental>true</LinkIncremental>\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <OutDir>$(SolutionDir)$(Platform)\\$(Configuration)\\</OutDir>\n    <IntDir>$(Platform)\\$(Configuration)\\</IntDir>\n    <LinkIncremental>false</LinkIncremental>\n  </PropertyGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>WIN32;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      
<AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|Win32'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|x64'\">\n    <ClCompile>\n  
    <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalDependencies>jemallocd.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug-static|x64'\">\n    <ClCompile>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <WarningLevel>Level3</WarningLevel>\n      <Optimization>Disabled</Optimization>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;_DEBUG;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n      
<AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|Win32'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      
<AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-$(PlatformToolset)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      
<AdditionalDependencies>jemalloc.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release-static|x64'\">\n    <ClCompile>\n      <WarningLevel>Level3</WarningLevel>\n      <PrecompiledHeader>\n      </PrecompiledHeader>\n      <Optimization>MaxSpeed</Optimization>\n      <FunctionLevelLinking>true</FunctionLevelLinking>\n      <IntrinsicFunctions>true</IntrinsicFunctions>\n      <PreprocessorDefinitions>JEMALLOC_EXPORT=;JEMALLOC_STATIC;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n      <AdditionalIncludeDirectories>..\\..\\..\\..\\test\\include;..\\..\\..\\..\\include;..\\..\\..\\..\\include\\msvc_compat;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n      <RuntimeLibrary>MultiThreaded</RuntimeLibrary>\n    </ClCompile>\n    <Link>\n      <SubSystem>Console</SubSystem>\n      <GenerateDebugInformation>true</GenerateDebugInformation>\n      <EnableCOMDATFolding>true</EnableCOMDATFolding>\n      <OptimizeReferences>true</OptimizeReferences>\n      <AdditionalLibraryDirectories>$(SolutionDir)$(Platform)\\$(Configuration)</AdditionalLibraryDirectories>\n      <AdditionalDependencies>jemalloc-vc$(PlatformToolsetVersion)-$(Configuration).lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)</AdditionalDependencies>\n    </Link>\n  </ItemDefinitionGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads.cpp\" />\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads_main.cpp\" />\n  </ItemGroup>\n  <ItemGroup>\n    <ProjectReference Include=\"..\\jemalloc\\jemalloc.vcxproj\">\n      
<Project>{8d6bb292-9e1c-413d-9f98-4864bdc1514a}</Project>\n    </ProjectReference>\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"..\\..\\..\\test_threads\\test_threads.h\" />\n  </ItemGroup>\n  <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n  <ImportGroup Label=\"ExtensionTargets\">\n  </ImportGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/projects/vc2017/test_threads/test_threads.vcxproj.filters",
    "content": "﻿<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n  <ItemGroup>\n    <Filter Include=\"Source Files\">\n      <UniqueIdentifier>{4FC737F1-C7A5-4376-A066-2A32D752A2FF}</UniqueIdentifier>\n      <Extensions>cpp;c;cc;cxx;def;odl;idl;hpj;bat;asm;asmx</Extensions>\n    </Filter>\n    <Filter Include=\"Header Files\">\n      <UniqueIdentifier>{93995380-89BD-4b04-88EB-625FBE52EBFB}</UniqueIdentifier>\n      <Extensions>h;hh;hpp;hxx;hm;inl;inc;xsd</Extensions>\n    </Filter>\n  </ItemGroup>\n  <ItemGroup>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads.cpp\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n    <ClCompile Include=\"..\\..\\..\\test_threads\\test_threads_main.cpp\">\n      <Filter>Source Files</Filter>\n    </ClCompile>\n  </ItemGroup>\n  <ItemGroup>\n    <ClInclude Include=\"..\\..\\..\\test_threads\\test_threads.h\">\n      <Filter>Header Files</Filter>\n    </ClInclude>\n  </ItemGroup>\n</Project>"
  },
  {
    "path": "deps/jemalloc/msvc/test_threads/test_threads.cpp",
    "content": "// jemalloc C++ threaded test\n// Author: Rustam Abdullaev\n// Public Domain\n\n#include <atomic>\n#include <functional>\n#include <future>\n#include <random>\n#include <thread>\n#include <vector>\n#include <stdio.h>\n#define JEMALLOC_NO_DEMANGLE\n#include <jemalloc/jemalloc.h>\n\nusing std::vector;\nusing std::thread;\nusing std::uniform_int_distribution;\nusing std::minstd_rand;\n\nint test_threads() {\n  je_malloc_conf = \"narenas:3\";\n  int narenas = 0;\n  size_t sz = sizeof(narenas);\n  je_mallctl(\"opt.narenas\", (void *)&narenas, &sz, NULL, 0);\n  if (narenas != 3) {\n    printf(\"Error: unexpected number of arenas: %d\\n\", narenas);\n    return 1;\n  }\n  static const int sizes[] = { 7, 16, 32, 60, 91, 100, 120, 144, 169, 199, 255, 400, 670, 900, 917, 1025, 3333, 5190, 13131, 49192, 99999, 123123, 255265, 2333111 };\n  static const int numSizes = (int)(sizeof(sizes) / sizeof(sizes[0]));\n  vector<thread> workers;\n  static const int numThreads = narenas + 1, numAllocsMax = 25, numIter1 = 50, numIter2 = 50;\n  je_malloc_stats_print(NULL, NULL, NULL);\n  size_t allocated1;\n  size_t sz1 = sizeof(allocated1);\n  je_mallctl(\"stats.active\", (void *)&allocated1, &sz1, NULL, 0);\n  printf(\"\\nPress Enter to start threads...\\n\");\n  getchar();\n  printf(\"Starting %d threads x %d x %d iterations...\\n\", numThreads, numIter1, numIter2);\n  for (int i = 0; i < numThreads; i++) {\n    workers.emplace_back([tid=i]() {\n      uniform_int_distribution<int> sizeDist(0, numSizes - 1);\n      minstd_rand rnd(tid * 17);\n      uint8_t* ptrs[numAllocsMax];\n      int ptrsz[numAllocsMax];\n      for (int i = 0; i < numIter1; ++i) {\n        thread t([&]() {\n          for (int i = 0; i < numIter2; ++i) {\n            const int numAllocs = numAllocsMax - sizeDist(rnd);\n            for (int j = 0; j < numAllocs; j += 64) {\n              const int x = sizeDist(rnd);\n              const int sz = sizes[x];\n              ptrsz[j] = sz;\n              
ptrs[j] = (uint8_t*)je_malloc(sz);\n              if (!ptrs[j]) {\n                printf(\"Unable to allocate %d bytes in thread %d, iter %d, alloc %d. %d\\n\", sz, tid, i, j, x);\n                exit(1);\n              }\n              for (int k = 0; k < sz; k++)\n                ptrs[j][k] = tid + k;\n            }\n            for (int j = 0; j < numAllocs; j += 64) {\n              for (int k = 0, sz = ptrsz[j]; k < sz; k++)\n                if (ptrs[j][k] != (uint8_t)(tid + k)) {\n                  printf(\"Memory error in thread %d, iter %d, alloc %d @ %d : %02X!=%02X\\n\", tid, i, j, k, ptrs[j][k], (uint8_t)(tid + k));\n                  exit(1);\n                }\n              je_free(ptrs[j]);\n            }\n          }\n        });\n        t.join();\n      }\n    });\n  }\n  for (thread& t : workers) {\n    t.join();\n  }\n  je_malloc_stats_print(NULL, NULL, NULL);\n  size_t allocated2;\n  je_mallctl(\"stats.active\", (void *)&allocated2, &sz1, NULL, 0);\n  size_t leaked = allocated2 - allocated1;\n  printf(\"\\nDone. Leaked: %zd bytes\\n\", leaked);\n  bool failed = leaked > 65536; // in case C++ runtime allocated something (e.g. iostream locale or facet)\n  printf(\"\\nTest %s!\\n\", (failed ? \"FAILED\" : \"successful\"));\n  printf(\"\\nPress Enter to continue...\\n\");\n  getchar();\n  return failed ? 1 : 0;\n}\n"
  },
  {
    "path": "deps/jemalloc/msvc/test_threads/test_threads.h",
    "content": "#pragma once\n\nint test_threads();\n"
  },
  {
    "path": "deps/jemalloc/msvc/test_threads/test_threads_main.cpp",
    "content": "#include \"test_threads.h\"\n#include <future>\n#include <functional>\n#include <chrono>\n\nusing namespace std::chrono_literals;\n\nint main(int argc, char** argv) {\n  int rc = test_threads();\n  return rc;\n}\n"
  },
  {
    "path": "deps/jemalloc/run_tests.sh",
    "content": "$(dirname \"$0\")/scripts/gen_run_tests.py | bash\n"
  },
  {
    "path": "deps/jemalloc/scripts/check-formatting.sh",
    "content": "#!/bin/bash\n\n# The files that need to be properly formatted.  We'll grow this incrementally\n# until it includes all the jemalloc source files (as we convert things over),\n# and then just replace it with\n#    find -name '*.c' -o -name '*.h' -o -name '*.cpp'\nFILES=(\n)\n\nif command -v clang-format &> /dev/null; then\n  CLANG_FORMAT=\"clang-format\"\nelif command -v clang-format-8 &> /dev/null; then\n  CLANG_FORMAT=\"clang-format-8\"\nelse\n  echo \"Couldn't find clang-format.\"\n  exit 1\nfi\n\nif ! $CLANG_FORMAT -version | grep \"version 8\\.\" &> /dev/null; then\n  echo \"clang-format is the wrong version.\"\n  exit 1\nfi\n\nfor file in ${FILES[@]}; do\n  if ! cmp --silent $file <($CLANG_FORMAT $file) &> /dev/null; then\n    echo \"Error: $file is not clang-formatted\"\n    exit 1\n  fi\ndone\n"
  },
  {
    "path": "deps/jemalloc/scripts/freebsd/before_install.sh",
    "content": "#!/bin/tcsh\n\nsu -m root -c 'pkg install -y git'\n"
  },
  {
    "path": "deps/jemalloc/scripts/freebsd/before_script.sh",
    "content": "#!/bin/tcsh\n\nautoconf\n# We don't perfectly track freebsd stdlib.h definitions.  This is fine when\n# we count as a system header, but breaks otherwise, like during these\n# tests.\n./configure --with-jemalloc-prefix=ci_ ${COMPILER_FLAGS:+ CC=\"$CC $COMPILER_FLAGS\" CXX=\"$CXX $COMPILER_FLAGS\"} $CONFIGURE_FLAGS\nJE_NCPUS=`sysctl -n kern.smp.cpus`\ngmake -j${JE_NCPUS}\ngmake -j${JE_NCPUS} tests\n"
  },
  {
    "path": "deps/jemalloc/scripts/freebsd/script.sh",
    "content": "#!/bin/tcsh\n\ngmake check\n"
  },
  {
    "path": "deps/jemalloc/scripts/gen_run_tests.py",
    "content": "#!/usr/bin/env python3\n\nimport sys\nfrom itertools import combinations\nfrom os import uname\nfrom multiprocessing import cpu_count\nfrom subprocess import call\n\n# Later, we want to test extended vaddr support.  Apparently, the \"real\" way of\n# checking this is flaky on OS X.\nbits_64 = sys.maxsize > 2**32\n\nnparallel = cpu_count() * 2\n\nuname = uname()[0]\n\nif call(\"command -v gmake\", shell=True) == 0:\n    make_cmd = 'gmake'\nelse:\n    make_cmd = 'make'\n\ndef powerset(items):\n    result = []\n    for i in range(len(items) + 1):\n        result += combinations(items, i)\n    return result\n\npossible_compilers = []\nfor cc, cxx in (['gcc', 'g++'], ['clang', 'clang++']):\n    try:\n        cmd_ret = call([cc, \"-v\"])\n        if cmd_ret == 0:\n            possible_compilers.append((cc, cxx))\n    except:\n        pass\npossible_compiler_opts = [\n    '-m32',\n]\npossible_config_opts = [\n    '--enable-debug',\n    '--enable-prof',\n    '--disable-stats',\n    '--enable-opt-safety-checks',\n    '--with-lg-page=16',\n]\nif bits_64:\n    possible_config_opts.append('--with-lg-vaddr=56')\n\npossible_malloc_conf_opts = [\n    'tcache:false',\n    'dss:primary',\n    'percpu_arena:percpu',\n    'background_thread:true',\n]\n\nprint('set -e')\nprint('if [ -f Makefile ] ; then %(make_cmd)s relclean ; fi' % {'make_cmd':\n    make_cmd})\nprint('autoconf')\nprint('rm -rf run_tests.out')\nprint('mkdir run_tests.out')\nprint('cd run_tests.out')\n\nind = 0\nfor cc, cxx in possible_compilers:\n    for compiler_opts in powerset(possible_compiler_opts):\n        for config_opts in powerset(possible_config_opts):\n            for malloc_conf_opts in powerset(possible_malloc_conf_opts):\n                if cc == 'clang' \\\n                  and '-m32' in possible_compiler_opts \\\n                  and '--enable-prof' in config_opts:\n                    continue\n                config_line = (\n                    'EXTRA_CFLAGS=-Werror 
EXTRA_CXXFLAGS=-Werror '\n                    + 'CC=\"{} {}\" '.format(cc, \" \".join(compiler_opts))\n                    + 'CXX=\"{} {}\" '.format(cxx, \" \".join(compiler_opts))\n                    + '../../configure '\n                    + \" \".join(config_opts) + (' --with-malloc-conf=' +\n                    \",\".join(malloc_conf_opts) if len(malloc_conf_opts) > 0\n                    else '')\n                )\n\n                # We don't want to test large vaddr spaces in 32-bit mode.\n                if ('-m32' in compiler_opts and '--with-lg-vaddr=56' in\n                    config_opts):\n                    continue\n\n                # Per CPU arenas are only supported on Linux.\n                linux_supported = ('percpu_arena:percpu' in malloc_conf_opts \\\n                  or 'background_thread:true' in malloc_conf_opts)\n                # Heap profiling and dss are not supported on OS X.\n                darwin_unsupported = ('--enable-prof' in config_opts or \\\n                  'dss:primary' in malloc_conf_opts)\n                if (uname == 'Linux' and linux_supported) \\\n                  or (not linux_supported and (uname != 'Darwin' or \\\n                  not darwin_unsupported)):\n                    print(\"\"\"cat <<EOF > run_test_%(ind)d.sh\n#!/bin/sh\n\nset -e\n\nabort() {\n    echo \"==> Error\" >> run_test.log\n    echo \"Error; see run_tests.out/run_test_%(ind)d.out/run_test.log\"\n    exit 255 # Special exit code tells xargs to terminate.\n}\n\n# Environment variables are not supported.\nrun_cmd() {\n    echo \"==> \\$@\" >> run_test.log\n    \\$@ >> run_test.log 2>&1 || abort\n}\n\necho \"=> run_test_%(ind)d: %(config_line)s\"\nmkdir run_test_%(ind)d.out\ncd run_test_%(ind)d.out\n\necho \"==> %(config_line)s\" >> run_test.log\n%(config_line)s >> run_test.log 2>&1 || abort\n\nrun_cmd %(make_cmd)s all tests\nrun_cmd %(make_cmd)s check\nrun_cmd %(make_cmd)s distclean\nEOF\nchmod 755 run_test_%(ind)d.sh\"\"\" % {'ind': ind, 
'config_line': config_line,\n      'make_cmd': make_cmd})\n                    ind += 1\n\nprint('for i in `seq 0 %(last_ind)d` ; do echo run_test_${i}.sh ; done | xargs'\n    ' -P %(nparallel)d -n 1 sh' % {'last_ind': ind-1, 'nparallel': nparallel})\n"
  },
  {
    "path": "deps/jemalloc/scripts/gen_travis.py",
    "content": "#!/usr/bin/env python3\n\nfrom itertools import combinations, chain\nfrom enum import Enum, auto\n\n\nLINUX = 'linux'\nOSX = 'osx'\nWINDOWS = 'windows'\nFREEBSD = 'freebsd'\n\n\nAMD64 = 'amd64'\nARM64 = 'arm64'\nPPC64LE = 'ppc64le'\n\n\nTRAVIS_TEMPLATE = \"\"\"\\\n# This config file is generated by ./scripts/gen_travis.py.\n# Do not edit by hand.\n\n# We use 'minimal', because 'generic' makes Windows VMs hang at startup. Also\n# the software provided by 'generic' is simply not needed for our tests.\n# Differences are explained here:\n# https://docs.travis-ci.com/user/languages/minimal-and-generic/\nlanguage: minimal\ndist: focal\n\njobs:\n  include:\n{jobs}\n\nbefore_install:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/before_install.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/before_install.sh\n    fi\n\nbefore_script:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/before_script.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/before_script.sh\n    else\n      scripts/gen_travis.py > travis_script && diff .travis.yml travis_script\n      autoconf\n      # If COMPILER_FLAGS are not empty, add them to CC and CXX\n      ./configure ${{COMPILER_FLAGS:+ CC=\"$CC $COMPILER_FLAGS\" \\\nCXX=\"$CXX $COMPILER_FLAGS\"}} $CONFIGURE_FLAGS\n      make -j3\n      make -j3 tests\n    fi\n\nscript:\n  - |-\n    if test -f \"./scripts/$TRAVIS_OS_NAME/script.sh\"; then\n      source ./scripts/$TRAVIS_OS_NAME/script.sh\n    else\n      make check\n    fi\n\"\"\"\n\n\nclass Option(object):\n    class Type:\n        COMPILER = auto()\n        COMPILER_FLAG = auto()\n        CONFIGURE_FLAG = auto()\n        MALLOC_CONF = auto()\n        FEATURE = auto()\n\n    def __init__(self, type, value):\n        self.type = type\n        self.value = value\n\n    @staticmethod\n    def as_compiler(value):\n        return Option(Option.Type.COMPILER, value)\n\n    @staticmethod\n    def as_compiler_flag(value):\n        return Option(Option.Type.COMPILER_FLAG, 
value)\n\n    @staticmethod\n    def as_configure_flag(value):\n        return Option(Option.Type.CONFIGURE_FLAG, value)\n\n    @staticmethod\n    def as_malloc_conf(value):\n        return Option(Option.Type.MALLOC_CONF, value)\n\n    @staticmethod\n    def as_feature(value):\n        return Option(Option.Type.FEATURE, value)\n\n    def __eq__(self, obj):\n        return (isinstance(obj, Option) and obj.type == self.type\n                and obj.value == self.value)\n\n\n# The 'default' configuration is gcc, on linux, with no compiler or configure\n# flags.  We also test with clang, -m32, --enable-debug, --enable-prof,\n# --disable-stats, and --with-malloc-conf=tcache:false.  To avoid abusing\n# travis though, we don't test all 2**7 = 128 possible combinations of these;\n# instead, we only test combinations of up to 2 'unusual' settings, under the\n# hope that bugs involving interactions of such settings are rare.\nMAX_UNUSUAL_OPTIONS = 2\n\n\nGCC = Option.as_compiler('CC=gcc CXX=g++')\nCLANG = Option.as_compiler('CC=clang CXX=clang++')\nCL = Option.as_compiler('CC=cl.exe CXX=cl.exe')\n\n\ncompilers_unusual = [CLANG,]\n\n\nCROSS_COMPILE_32BIT = Option.as_feature('CROSS_COMPILE_32BIT')\nfeature_unusuals = [CROSS_COMPILE_32BIT]\n\n\nconfigure_flag_unusuals = [Option.as_configure_flag(opt) for opt in (\n    '--enable-debug',\n    '--enable-prof',\n    '--disable-stats',\n    '--disable-libdl',\n    '--enable-opt-safety-checks',\n    '--with-lg-page=16',\n)]\n\n\nmalloc_conf_unusuals = [Option.as_malloc_conf(opt) for opt in (\n    'tcache:false',\n    'dss:primary',\n    'percpu_arena:percpu',\n    'background_thread:true',\n)]\n\n\nall_unusuals = (compilers_unusual + feature_unusuals\n    + configure_flag_unusuals + malloc_conf_unusuals)\n\n\ndef get_extra_cflags(os, compiler):\n    if os == FREEBSD:\n        return []\n\n    if os == WINDOWS:\n        # For non-CL compilers under Windows (for now it's only MinGW-GCC),\n        # -fcommon needs to be specified to 
correctly handle multiple\n        # 'malloc_conf' symbols and such, which are declared weak under Linux.\n        # Weak symbols don't work with MinGW-GCC.\n        if compiler != CL.value:\n            return ['-fcommon']\n        else:\n            return []\n\n    # We get some spurious errors when -Warray-bounds is enabled.\n    extra_cflags = ['-Werror', '-Wno-array-bounds']\n    if compiler == CLANG.value or os == OSX:\n        extra_cflags += [\n            '-Wno-unknown-warning-option',\n            '-Wno-ignored-attributes'\n        ]\n    if os == OSX:\n        extra_cflags += [\n            '-Wno-deprecated-declarations',\n        ]\n    return extra_cflags\n\n\n# Formats a job from a combination of flags\ndef format_job(os, arch, combination):\n    compilers = [x.value for x in combination if x.type == Option.Type.COMPILER]\n    assert(len(compilers) <= 1)\n    compiler_flags = [x.value for x in combination if x.type == Option.Type.COMPILER_FLAG]\n    configure_flags = [x.value for x in combination if x.type == Option.Type.CONFIGURE_FLAG]\n    malloc_conf = [x.value for x in combination if x.type == Option.Type.MALLOC_CONF]\n    features = [x.value for x in combination if x.type == Option.Type.FEATURE]\n\n    if len(malloc_conf) > 0:\n        configure_flags.append('--with-malloc-conf=' + ','.join(malloc_conf))\n\n    if not compilers:\n        compiler = GCC.value\n    else:\n        compiler = compilers[0]\n\n    extra_environment_vars = ''\n    cross_compile = CROSS_COMPILE_32BIT.value in features\n    if os == LINUX and cross_compile:\n        compiler_flags.append('-m32')\n\n    features_str = ' '.join([' {}=yes'.format(feature) for feature in features])\n\n    stringify = lambda arr, name: ' {}=\"{}\"'.format(name, ' '.join(arr)) if arr else ''\n    env_string = '{}{}{}{}{}{}'.format(\n            compiler,\n            features_str,\n            stringify(compiler_flags, 'COMPILER_FLAGS'),\n            stringify(configure_flags, 
'CONFIGURE_FLAGS'),\n            stringify(get_extra_cflags(os, compiler), 'EXTRA_CFLAGS'),\n            extra_environment_vars)\n\n    job = '    - os: {}\\n'.format(os)\n    job += '      arch: {}\\n'.format(arch)\n    job += '      env: {}'.format(env_string)\n    return job\n\n\ndef generate_unusual_combinations(unusuals, max_unusual_opts):\n    \"\"\"\n    Generates different combinations of non-standard compilers, compiler flags,\n    configure flags and malloc_conf settings.\n\n    @param max_unusual_opts: Limit of unusual options per combination.\n    \"\"\"\n    return chain.from_iterable(\n            [combinations(unusuals, i) for i in range(max_unusual_opts + 1)])\n\n\ndef included(combination, exclude):\n    \"\"\"\n    Checks if the combination of options should be included in the Travis\n    testing matrix.\n\n    @param exclude: A list of options to be avoided.\n    \"\"\"\n    return not any(excluded in combination for excluded in exclude)\n\n\ndef generate_jobs(os, arch, exclude, max_unusual_opts, unusuals=all_unusuals):\n    jobs = []\n    for combination in generate_unusual_combinations(unusuals, max_unusual_opts):\n        if included(combination, exclude):\n            jobs.append(format_job(os, arch, combination))\n    return '\\n'.join(jobs)\n\n\ndef generate_linux(arch):\n    os = LINUX\n\n    # Only generate 2 unusual options for AMD64 to reduce matrix size\n    max_unusual_opts = MAX_UNUSUAL_OPTIONS if arch == AMD64 else 1\n\n    exclude = []\n    if arch == PPC64LE:\n        # Avoid 32 bit builds and clang on PowerPC\n        exclude = (CROSS_COMPILE_32BIT, CLANG,)\n\n    return generate_jobs(os, arch, exclude, max_unusual_opts)\n\n\ndef generate_macos(arch):\n    os = OSX\n\n    max_unusual_opts = 1\n\n    exclude = ([Option.as_malloc_conf(opt) for opt in (\n            'dss:primary',\n            'percpu_arena:percpu',\n            'background_thread:true')] +\n        [Option.as_configure_flag('--enable-prof')] +\n        
[CLANG,])\n\n    return generate_jobs(os, arch, exclude, max_unusual_opts)\n\n\ndef generate_windows(arch):\n    os = WINDOWS\n\n    max_unusual_opts = 3\n    unusuals = (\n        Option.as_configure_flag('--enable-debug'),\n        CL,\n        CROSS_COMPILE_32BIT,\n    )\n    return generate_jobs(os, arch, (), max_unusual_opts, unusuals)\n\n\ndef generate_freebsd(arch):\n    os = FREEBSD\n\n    max_unusual_opts = 4\n    unusuals = (\n        Option.as_configure_flag('--enable-debug'),\n        Option.as_configure_flag('--enable-prof --enable-prof-libunwind'),\n        Option.as_configure_flag('--with-lg-page=16 --with-malloc-conf=tcache:false'),\n        CROSS_COMPILE_32BIT,\n    )\n    return generate_jobs(os, arch, (), max_unusual_opts, unusuals)\n\n\n\ndef get_manual_jobs():\n    return \"\"\"\\\n    # Development build\n    - os: linux\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug \\\n--disable-cache-oblivious --enable-stats --enable-log --enable-prof\" \\\nEXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n    # --enable-experimental-smallocx:\n    - os: linux\n      env: CC=gcc CXX=g++ CONFIGURE_FLAGS=\"--enable-debug \\\n--enable-experimental-smallocx --enable-stats --enable-prof\" \\\nEXTRA_CFLAGS=\"-Werror -Wno-array-bounds\"\n\"\"\"\n\n\ndef main():\n    jobs = '\\n'.join((\n        generate_windows(AMD64),\n\n        generate_freebsd(AMD64),\n\n        generate_linux(AMD64),\n        generate_linux(PPC64LE),\n\n        generate_macos(AMD64),\n\n        get_manual_jobs(),\n    ))\n\n    print(TRAVIS_TEMPLATE.format(jobs=jobs))\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "deps/jemalloc/scripts/linux/before_install.sh",
    "content": "#!/bin/bash\n\nset -ev\n\nif [[ \"$TRAVIS_OS_NAME\" != \"linux\" ]]; then\n    echo \"Incorrect \\$TRAVIS_OS_NAME: expected linux, got $TRAVIS_OS_NAME\"\n    exit 1\nfi\n\nif [[ \"$CROSS_COMPILE_32BIT\" == \"yes\" ]]; then\n    sudo apt-get update\n    sudo apt-get -y install gcc-multilib g++-multilib\nfi\n"
  },
  {
    "path": "deps/jemalloc/scripts/windows/before_install.sh",
    "content": "#!/bin/bash\n\nset -e\n\n# The purpose of this script is to install build dependencies and set\n# $build_env to a function that sets appropriate environment variables,\n# to enable (mingw32|mingw64) environment if we want to compile with gcc, or\n# (mingw32|mingw64) + vcvarsall.bat if we want to compile with cl.exe\n\nif [[ \"$TRAVIS_OS_NAME\" != \"windows\" ]]; then\n    echo \"Incorrect \\$TRAVIS_OS_NAME: expected windows, got $TRAVIS_OS_NAME\"\n    exit 1\nfi\n\n[[ ! -f C:/tools/msys64/msys2_shell.cmd ]] && rm -rf C:/tools/msys64\nchoco uninstall -y mingw\nchoco upgrade --no-progress -y msys2\n\nmsys_shell_cmd=\"cmd //C RefreshEnv.cmd && set MSYS=winsymlinks:nativestrict && C:\\\\tools\\\\msys64\\\\msys2_shell.cmd\"\n\nmsys2() { $msys_shell_cmd -defterm -no-start -msys2 -c \"$*\"; }\nmingw32() { $msys_shell_cmd -defterm -no-start -mingw32 -c \"$*\"; }\nmingw64() { $msys_shell_cmd -defterm -no-start -mingw64 -c \"$*\"; }\n\nif [[ \"$CROSS_COMPILE_32BIT\" == \"yes\" ]]; then\n    mingw=mingw32\n    mingw_gcc_package_arch=i686\nelse\n    mingw=mingw64\n    mingw_gcc_package_arch=x86_64\nfi\n\nif [[ \"$CC\" == *\"gcc\"* ]]; then\n    $mingw pacman -S --noconfirm --needed \\\n        autotools \\\n        git \\\n        mingw-w64-${mingw_gcc_package_arch}-make \\\n\t    mingw-w64-${mingw_gcc_package_arch}-gcc \\\n\t    mingw-w64-${mingw_gcc_package_arch}-binutils\n    build_env=$mingw\nelif [[ \"$CC\" == *\"cl\"* ]]; then\n    $mingw pacman -S --noconfirm --needed \\\n        autotools \\\n\t    git \\\n\t    mingw-w64-${mingw_gcc_package_arch}-make \\\n\t    mingw-w64-${mingw_gcc_package_arch}-binutils\n\n    # In order to use MSVC compiler (cl.exe), we need to correctly set some environment\n    # variables, namely PATH, INCLUDE, LIB and LIBPATH. The correct values of these\n    # variables are set by a batch script \"vcvarsall.bat\". 
The code below generates\n    # a batch script that calls \"vcvarsall.bat\" and prints the environment variables.\n    #\n    # Then, those environment variables are transformed from cmd to bash format and put\n    # into a script $apply_vsenv. If cl.exe needs to be used from bash, one can\n    # 'source $apply_vsenv' and it will apply the environment variables needed for cl.exe\n    # to be located and function correctly.\n    #\n    # At last, a function \"mingw_with_msvc_vars\" is generated which forwards user input\n    # into a correct mingw (32 or 64) subshell that automatically performs 'source $apply_vsenv',\n    # making it possible for autotools to discover and use cl.exe.\n    vcvarsall=\"vcvarsall.tmp.bat\"\n    echo \"@echo off\" > $vcvarsall\n    echo \"call \\\"c:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\\\\\vcvarsall.bat\\\" $USE_MSVC\" >> $vcvarsall\n    echo \"set\" >> $vcvarsall\n\n    apply_vsenv=\"./apply_vsenv.sh\"\n    cmd //C $vcvarsall | grep -E \"^PATH=\" | sed -n -e 's/\\(.*\\)=\\(.*\\)/export \\1=$PATH:\"\\2\"/g' \\\n        -e 's/\\([a-zA-Z]\\):[\\\\\\/]/\\/\\1\\//g' \\\n        -e 's/\\\\/\\//g' \\\n        -e 's/;\\//:\\//gp' > $apply_vsenv\n    cmd //C $vcvarsall | grep -E \"^(INCLUDE|LIB|LIBPATH)=\" | sed -n -e 's/\\(.*\\)=\\(.*\\)/export \\1=\"\\2\"/gp' >> $apply_vsenv\n\n    cat $apply_vsenv\n    mingw_with_msvc_vars() { $msys_shell_cmd -defterm -no-start -$mingw -c \"source $apply_vsenv && \"\"$*\"; }\n    build_env=mingw_with_msvc_vars\n\n    rm -f $vcvarsall\nelse\n    echo \"Unknown C compiler: $CC\"\n    exit 1\nfi\n\necho \"Build environment function: $build_env\"\n"
  },
  {
    "path": "deps/jemalloc/scripts/windows/before_script.sh",
    "content": "#!/bin/bash\n\nset -e\n\nif [[ \"$TRAVIS_OS_NAME\" != \"windows\" ]]; then\n    echo \"Incorrect \\$TRAVIS_OS_NAME: expected windows, got $TRAVIS_OS_NAME\"\n    exit 1\nfi\n\n$build_env autoconf\n$build_env ./configure $CONFIGURE_FLAGS\n# mingw32-make simply means \"make\", unrelated to mingw32 vs mingw64.\n# Simply disregard the prefix and treat it as \"make\".\n$build_env mingw32-make -j3\n# At the moment, it's impossible to build tests in parallel,\n# seemingly due to concurrent writes to '.pdb' file. I don't know why\n# that happens, because we explicitly supply '/Fs' to the compiler.\n# Until we figure out how to fix it, we should build tests sequentially\n# on Windows.\n$build_env mingw32-make tests\n"
  },
  {
    "path": "deps/jemalloc/scripts/windows/script.sh",
    "content": "#!/bin/bash\n\nset -e\n\nif [[ \"$TRAVIS_OS_NAME\" != \"windows\" ]]; then\n    echo \"Incorrect \\$TRAVIS_OS_NAME: expected windows, got $TRAVIS_OS_NAME\"\n    exit 1\nfi\n\n$build_env mingw32-make -k check\n"
  },
  {
    "path": "deps/jemalloc/src/arena.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/decay.h\"\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/rtree.h\"\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/util.h\"\n\nJEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n\n/******************************************************************************/\n/* Data. */\n\n/*\n * Define names for both unininitialized and initialized phases, so that\n * options and mallctl processing are straightforward.\n */\nconst char *percpu_arena_mode_names[] = {\n\t\"percpu\",\n\t\"phycpu\",\n\t\"disabled\",\n\t\"percpu\",\n\t\"phycpu\"\n};\npercpu_arena_mode_t opt_percpu_arena = PERCPU_ARENA_DEFAULT;\n\nssize_t opt_dirty_decay_ms = DIRTY_DECAY_MS_DEFAULT;\nssize_t opt_muzzy_decay_ms = MUZZY_DECAY_MS_DEFAULT;\n\nstatic atomic_zd_t dirty_decay_ms_default;\nstatic atomic_zd_t muzzy_decay_ms_default;\n\nemap_t arena_emap_global;\npa_central_t arena_pa_central_global;\n\ndiv_info_t arena_binind_div_info[SC_NBINS];\n\nsize_t opt_oversize_threshold = OVERSIZE_THRESHOLD_DEFAULT;\nsize_t oversize_threshold = OVERSIZE_THRESHOLD_DEFAULT;\n\nuint32_t arena_bin_offsets[SC_NBINS];\nstatic unsigned nbins_total;\n\nstatic unsigned huge_arena_ind;\n\nconst arena_config_t arena_config_default = {\n\t/* .extent_hooks = */ (extent_hooks_t *)&ehooks_default_extent_hooks,\n\t/* .metadata_use_hooks = */ true,\n};\n\n/******************************************************************************/\n/*\n * Function prototypes for static functions that are referenced prior to\n * definition.\n */\n\nstatic bool arena_decay_dirty(tsdn_t *tsdn, arena_t *arena,\n    bool is_background_thread, bool 
all);\nstatic void arena_bin_lower_slab(tsdn_t *tsdn, arena_t *arena, edata_t *slab,\n    bin_t *bin);\nstatic void\narena_maybe_do_deferred_work(tsdn_t *tsdn, arena_t *arena, decay_t *decay,\n    size_t npages_new);\n\n/******************************************************************************/\n\nvoid\narena_basic_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,\n    const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms,\n    size_t *nactive, size_t *ndirty, size_t *nmuzzy) {\n\t*nthreads += arena_nthreads_get(arena, false);\n\t*dss = dss_prec_names[arena_dss_prec_get(arena)];\n\t*dirty_decay_ms = arena_decay_ms_get(arena, extent_state_dirty);\n\t*muzzy_decay_ms = arena_decay_ms_get(arena, extent_state_muzzy);\n\tpa_shard_basic_stats_merge(&arena->pa_shard, nactive, ndirty, nmuzzy);\n}\n\nvoid\narena_stats_merge(tsdn_t *tsdn, arena_t *arena, unsigned *nthreads,\n    const char **dss, ssize_t *dirty_decay_ms, ssize_t *muzzy_decay_ms,\n    size_t *nactive, size_t *ndirty, size_t *nmuzzy, arena_stats_t *astats,\n    bin_stats_data_t *bstats, arena_stats_large_t *lstats,\n    pac_estats_t *estats, hpa_shard_stats_t *hpastats, sec_stats_t *secstats) {\n\tcassert(config_stats);\n\n\tarena_basic_stats_merge(tsdn, arena, nthreads, dss, dirty_decay_ms,\n\t    muzzy_decay_ms, nactive, ndirty, nmuzzy);\n\n\tsize_t base_allocated, base_resident, base_mapped, metadata_thp;\n\tbase_stats_get(tsdn, arena->base, &base_allocated, &base_resident,\n\t    &base_mapped, &metadata_thp);\n\tsize_t pac_mapped_sz = pac_mapped(&arena->pa_shard.pac);\n\tastats->mapped += base_mapped + pac_mapped_sz;\n\tastats->resident += base_resident;\n\n\tLOCKEDINT_MTX_LOCK(tsdn, arena->stats.mtx);\n\n\tastats->base += base_allocated;\n\tatomic_load_add_store_zu(&astats->internal, arena_internal_get(arena));\n\tastats->metadata_thp += metadata_thp;\n\n\tfor (szind_t i = 0; i < SC_NSIZES - SC_NBINS; i++) {\n\t\tuint64_t nmalloc = locked_read_u64(tsdn,\n\t\t    
LOCKEDINT_MTX(arena->stats.mtx),\n\t\t    &arena->stats.lstats[i].nmalloc);\n\t\tlocked_inc_u64_unsynchronized(&lstats[i].nmalloc, nmalloc);\n\t\tastats->nmalloc_large += nmalloc;\n\n\t\tuint64_t ndalloc = locked_read_u64(tsdn,\n\t\t    LOCKEDINT_MTX(arena->stats.mtx),\n\t\t    &arena->stats.lstats[i].ndalloc);\n\t\tlocked_inc_u64_unsynchronized(&lstats[i].ndalloc, ndalloc);\n\t\tastats->ndalloc_large += ndalloc;\n\n\t\tuint64_t nrequests = locked_read_u64(tsdn,\n\t\t    LOCKEDINT_MTX(arena->stats.mtx),\n\t\t    &arena->stats.lstats[i].nrequests);\n\t\tlocked_inc_u64_unsynchronized(&lstats[i].nrequests,\n\t\t    nmalloc + nrequests);\n\t\tastats->nrequests_large += nmalloc + nrequests;\n\n\t\t/* nfill == nmalloc for large currently. */\n\t\tlocked_inc_u64_unsynchronized(&lstats[i].nfills, nmalloc);\n\t\tastats->nfills_large += nmalloc;\n\n\t\tuint64_t nflush = locked_read_u64(tsdn,\n\t\t    LOCKEDINT_MTX(arena->stats.mtx),\n\t\t    &arena->stats.lstats[i].nflushes);\n\t\tlocked_inc_u64_unsynchronized(&lstats[i].nflushes, nflush);\n\t\tastats->nflushes_large += nflush;\n\n\t\tassert(nmalloc >= ndalloc);\n\t\tassert(nmalloc - ndalloc <= SIZE_T_MAX);\n\t\tsize_t curlextents = (size_t)(nmalloc - ndalloc);\n\t\tlstats[i].curlextents += curlextents;\n\t\tastats->allocated_large +=\n\t\t    curlextents * sz_index2size(SC_NBINS + i);\n\t}\n\n\tpa_shard_stats_merge(tsdn, &arena->pa_shard, &astats->pa_shard_stats,\n\t    estats, hpastats, secstats, &astats->resident);\n\n\tLOCKEDINT_MTX_UNLOCK(tsdn, arena->stats.mtx);\n\n\t/* Currently cached bytes and sanitizer-stashed bytes in tcache. 
*/\n\tastats->tcache_bytes = 0;\n\tastats->tcache_stashed_bytes = 0;\n\tmalloc_mutex_lock(tsdn, &arena->tcache_ql_mtx);\n\tcache_bin_array_descriptor_t *descriptor;\n\tql_foreach(descriptor, &arena->cache_bin_array_descriptor_ql, link) {\n\t\tfor (szind_t i = 0; i < nhbins; i++) {\n\t\t\tcache_bin_t *cache_bin = &descriptor->bins[i];\n\t\t\tcache_bin_sz_t ncached, nstashed;\n\t\t\tcache_bin_nitems_get_remote(cache_bin,\n\t\t\t    &tcache_bin_info[i], &ncached, &nstashed);\n\n\t\t\tastats->tcache_bytes += ncached * sz_index2size(i);\n\t\t\tastats->tcache_stashed_bytes += nstashed *\n\t\t\t    sz_index2size(i);\n\t\t}\n\t}\n\tmalloc_mutex_prof_read(tsdn,\n\t    &astats->mutex_prof_data[arena_prof_mutex_tcache_list],\n\t    &arena->tcache_ql_mtx);\n\tmalloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx);\n\n#define READ_ARENA_MUTEX_PROF_DATA(mtx, ind)\t\t\t\t\\\n    malloc_mutex_lock(tsdn, &arena->mtx);\t\t\t\t\\\n    malloc_mutex_prof_read(tsdn, &astats->mutex_prof_data[ind],\t\t\\\n        &arena->mtx);\t\t\t\t\t\t\t\\\n    malloc_mutex_unlock(tsdn, &arena->mtx);\n\n\t/* Gather per arena mutex profiling data. 
*/\n\tREAD_ARENA_MUTEX_PROF_DATA(large_mtx, arena_prof_mutex_large);\n\tREAD_ARENA_MUTEX_PROF_DATA(base->mtx,\n\t    arena_prof_mutex_base);\n#undef READ_ARENA_MUTEX_PROF_DATA\n\tpa_shard_mtx_stats_read(tsdn, &arena->pa_shard,\n\t    astats->mutex_prof_data);\n\n\tnstime_copy(&astats->uptime, &arena->create_time);\n\tnstime_update(&astats->uptime);\n\tnstime_subtract(&astats->uptime, &arena->create_time);\n\n\tfor (szind_t i = 0; i < SC_NBINS; i++) {\n\t\tfor (unsigned j = 0; j < bin_infos[i].n_shards; j++) {\n\t\t\tbin_stats_merge(tsdn, &bstats[i],\n\t\t\t    arena_get_bin(arena, i, j));\n\t\t}\n\t}\n}\n\nstatic void\narena_background_thread_inactivity_check(tsdn_t *tsdn, arena_t *arena,\n    bool is_background_thread) {\n\tif (!background_thread_enabled() || is_background_thread) {\n\t\treturn;\n\t}\n\tbackground_thread_info_t *info =\n\t    arena_background_thread_info_get(arena);\n\tif (background_thread_indefinite_sleep(info)) {\n\t\tarena_maybe_do_deferred_work(tsdn, arena,\n\t\t    &arena->pa_shard.pac.decay_dirty, 0);\n\t}\n}\n\n/*\n * React to deferred work generated by a PAI function.\n */\nvoid arena_handle_deferred_work(tsdn_t *tsdn, arena_t *arena) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (decay_immediately(&arena->pa_shard.pac.decay_dirty)) {\n\t\tarena_decay_dirty(tsdn, arena, false, true);\n\t}\n\tarena_background_thread_inactivity_check(tsdn, arena, false);\n}\n\nstatic void *\narena_slab_reg_alloc(edata_t *slab, const bin_info_t *bin_info) {\n\tvoid *ret;\n\tslab_data_t *slab_data = edata_slab_data_get(slab);\n\tsize_t regind;\n\n\tassert(edata_nfree_get(slab) > 0);\n\tassert(!bitmap_full(slab_data->bitmap, &bin_info->bitmap_info));\n\n\tregind = bitmap_sfu(slab_data->bitmap, &bin_info->bitmap_info);\n\tret = (void *)((uintptr_t)edata_addr_get(slab) +\n\t    (uintptr_t)(bin_info->reg_size * regind));\n\tedata_nfree_dec(slab);\n\treturn ret;\n}\n\nstatic 
void\narena_slab_reg_alloc_batch(edata_t *slab, const bin_info_t *bin_info,\n\t\t\t   unsigned cnt, void** ptrs) {\n\tslab_data_t *slab_data = edata_slab_data_get(slab);\n\n\tassert(edata_nfree_get(slab) >= cnt);\n\tassert(!bitmap_full(slab_data->bitmap, &bin_info->bitmap_info));\n\n#if (! defined JEMALLOC_INTERNAL_POPCOUNTL) || (defined BITMAP_USE_TREE)\n\tfor (unsigned i = 0; i < cnt; i++) {\n\t\tsize_t regind = bitmap_sfu(slab_data->bitmap,\n\t\t\t\t\t   &bin_info->bitmap_info);\n\t\t*(ptrs + i) = (void *)((uintptr_t)edata_addr_get(slab) +\n\t\t    (uintptr_t)(bin_info->reg_size * regind));\n\t}\n#else\n\tunsigned group = 0;\n\tbitmap_t g = slab_data->bitmap[group];\n\tunsigned i = 0;\n\twhile (i < cnt) {\n\t\twhile (g == 0) {\n\t\t\tg = slab_data->bitmap[++group];\n\t\t}\n\t\tsize_t shift = group << LG_BITMAP_GROUP_NBITS;\n\t\tsize_t pop = popcount_lu(g);\n\t\tif (pop > (cnt - i)) {\n\t\t\tpop = cnt - i;\n\t\t}\n\n\t\t/*\n\t\t * Load from memory locations only once, outside the\n\t\t * hot loop below.\n\t\t */\n\t\tuintptr_t base = (uintptr_t)edata_addr_get(slab);\n\t\tuintptr_t regsize = (uintptr_t)bin_info->reg_size;\n\t\twhile (pop--) {\n\t\t\tsize_t bit = cfs_lu(&g);\n\t\t\tsize_t regind = shift + bit;\n\t\t\t*(ptrs + i) = (void *)(base + regsize * regind);\n\n\t\t\ti++;\n\t\t}\n\t\tslab_data->bitmap[group] = g;\n\t}\n#endif\n\tedata_nfree_sub(slab, cnt);\n}\n\nstatic void\narena_large_malloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t usize) {\n\tszind_t index, hindex;\n\n\tcassert(config_stats);\n\n\tif (usize < SC_LARGE_MINCLASS) {\n\t\tusize = SC_LARGE_MINCLASS;\n\t}\n\tindex = sz_size2index(usize);\n\thindex = (index >= SC_NBINS) ? 
index - SC_NBINS : 0;\n\n\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(arena->stats.mtx),\n\t    &arena->stats.lstats[hindex].nmalloc, 1);\n}\n\nstatic void\narena_large_dalloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t usize) {\n\tszind_t index, hindex;\n\n\tcassert(config_stats);\n\n\tif (usize < SC_LARGE_MINCLASS) {\n\t\tusize = SC_LARGE_MINCLASS;\n\t}\n\tindex = sz_size2index(usize);\n\thindex = (index >= SC_NBINS) ? index - SC_NBINS : 0;\n\n\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(arena->stats.mtx),\n\t    &arena->stats.lstats[hindex].ndalloc, 1);\n}\n\nstatic void\narena_large_ralloc_stats_update(tsdn_t *tsdn, arena_t *arena, size_t oldusize,\n    size_t usize) {\n\tarena_large_malloc_stats_update(tsdn, arena, usize);\n\tarena_large_dalloc_stats_update(tsdn, arena, oldusize);\n}\n\nedata_t *\narena_extent_alloc_large(tsdn_t *tsdn, arena_t *arena, size_t usize,\n    size_t alignment, bool zero) {\n\tbool deferred_work_generated = false;\n\tszind_t szind = sz_size2index(usize);\n\tsize_t esize = usize + sz_large_pad;\n\n\tbool guarded = san_large_extent_decide_guard(tsdn,\n\t    arena_get_ehooks(arena), esize, alignment);\n\tedata_t *edata = pa_alloc(tsdn, &arena->pa_shard, esize, alignment,\n\t    /* slab */ false, szind, zero, guarded, &deferred_work_generated);\n\tassert(deferred_work_generated == false);\n\n\tif (edata != NULL) {\n\t\tif (config_stats) {\n\t\t\tLOCKEDINT_MTX_LOCK(tsdn, arena->stats.mtx);\n\t\t\tarena_large_malloc_stats_update(tsdn, arena, usize);\n\t\t\tLOCKEDINT_MTX_UNLOCK(tsdn, arena->stats.mtx);\n\t\t}\n\t}\n\n\tif (edata != NULL && sz_large_pad != 0) {\n\t\tarena_cache_oblivious_randomize(tsdn, arena, edata, alignment);\n\t}\n\n\treturn edata;\n}\n\nvoid\narena_extent_dalloc_large_prep(tsdn_t *tsdn, arena_t *arena, edata_t *edata) {\n\tif (config_stats) {\n\t\tLOCKEDINT_MTX_LOCK(tsdn, arena->stats.mtx);\n\t\tarena_large_dalloc_stats_update(tsdn, arena,\n\t\t    edata_usize_get(edata));\n\t\tLOCKEDINT_MTX_UNLOCK(tsdn, 
arena->stats.mtx);\n\t}\n}\n\nvoid\narena_extent_ralloc_large_shrink(tsdn_t *tsdn, arena_t *arena, edata_t *edata,\n    size_t oldusize) {\n\tsize_t usize = edata_usize_get(edata);\n\n\tif (config_stats) {\n\t\tLOCKEDINT_MTX_LOCK(tsdn, arena->stats.mtx);\n\t\tarena_large_ralloc_stats_update(tsdn, arena, oldusize, usize);\n\t\tLOCKEDINT_MTX_UNLOCK(tsdn, arena->stats.mtx);\n\t}\n}\n\nvoid\narena_extent_ralloc_large_expand(tsdn_t *tsdn, arena_t *arena, edata_t *edata,\n    size_t oldusize) {\n\tsize_t usize = edata_usize_get(edata);\n\n\tif (config_stats) {\n\t\tLOCKEDINT_MTX_LOCK(tsdn, arena->stats.mtx);\n\t\tarena_large_ralloc_stats_update(tsdn, arena, oldusize, usize);\n\t\tLOCKEDINT_MTX_UNLOCK(tsdn, arena->stats.mtx);\n\t}\n}\n\n/*\n * In situations where we're not forcing a decay (i.e. the user didn't\n * specifically request it), should we purge ourselves, or wait for the\n * background thread to get to it?\n */\nstatic pac_purge_eagerness_t\narena_decide_unforced_purge_eagerness(bool is_background_thread) {\n\tif (is_background_thread) {\n\t\treturn PAC_PURGE_ALWAYS;\n\t} else if (!is_background_thread && background_thread_enabled()) {\n\t\treturn PAC_PURGE_NEVER;\n\t} else {\n\t\treturn PAC_PURGE_ON_EPOCH_ADVANCE;\n\t}\n}\n\nbool\narena_decay_ms_set(tsdn_t *tsdn, arena_t *arena, extent_state_t state,\n    ssize_t decay_ms) {\n\tpac_purge_eagerness_t eagerness = arena_decide_unforced_purge_eagerness(\n\t    /* is_background_thread */ false);\n\treturn pa_decay_ms_set(tsdn, &arena->pa_shard, state, decay_ms,\n\t    eagerness);\n}\n\nssize_t\narena_decay_ms_get(arena_t *arena, extent_state_t state) {\n\treturn pa_decay_ms_get(&arena->pa_shard, state);\n}\n\nstatic bool\narena_decay_impl(tsdn_t *tsdn, arena_t *arena, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache,\n    bool is_background_thread, bool all) {\n\tif (all) {\n\t\tmalloc_mutex_lock(tsdn, &decay->mtx);\n\t\tpac_decay_all(tsdn, &arena->pa_shard.pac, decay, decay_stats,\n\t\t    
ecache, /* fully_decay */ all);\n\t\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\t\treturn false;\n\t}\n\n\tif (malloc_mutex_trylock(tsdn, &decay->mtx)) {\n\t\t/* No need to wait if another thread is in progress. */\n\t\treturn true;\n\t}\n\tpac_purge_eagerness_t eagerness =\n\t    arena_decide_unforced_purge_eagerness(is_background_thread);\n\tbool epoch_advanced = pac_maybe_decay_purge(tsdn, &arena->pa_shard.pac,\n\t    decay, decay_stats, ecache, eagerness);\n\tsize_t npages_new;\n\tif (epoch_advanced) {\n\t\t/* Backlog is updated on epoch advance. */\n\t\tnpages_new = decay_epoch_npages_delta(decay);\n\t}\n\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\n\tif (have_background_thread && background_thread_enabled() &&\n\t    epoch_advanced && !is_background_thread) {\n\t\tarena_maybe_do_deferred_work(tsdn, arena, decay, npages_new);\n\t}\n\n\treturn false;\n}\n\nstatic bool\narena_decay_dirty(tsdn_t *tsdn, arena_t *arena, bool is_background_thread,\n    bool all) {\n\treturn arena_decay_impl(tsdn, arena, &arena->pa_shard.pac.decay_dirty,\n\t    &arena->pa_shard.pac.stats->decay_dirty,\n\t    &arena->pa_shard.pac.ecache_dirty, is_background_thread, all);\n}\n\nstatic bool\narena_decay_muzzy(tsdn_t *tsdn, arena_t *arena, bool is_background_thread,\n    bool all) {\n\tif (pa_shard_dont_decay_muzzy(&arena->pa_shard)) {\n\t\treturn false;\n\t}\n\treturn arena_decay_impl(tsdn, arena, &arena->pa_shard.pac.decay_muzzy,\n\t    &arena->pa_shard.pac.stats->decay_muzzy,\n\t    &arena->pa_shard.pac.ecache_muzzy, is_background_thread, all);\n}\n\nvoid\narena_decay(tsdn_t *tsdn, arena_t *arena, bool is_background_thread, bool all) {\n\tif (all) {\n\t\t/*\n\t\t * We should take a purge of \"all\" to mean \"save as much memory\n\t\t * as possible\", including flushing any caches (for situations\n\t\t * like thread death, or manual purge calls).\n\t\t */\n\t\tsec_flush(tsdn, &arena->pa_shard.hpa_sec);\n\t}\n\tif (arena_decay_dirty(tsdn, arena, is_background_thread, all)) 
{\n\t\treturn;\n\t}\n\tarena_decay_muzzy(tsdn, arena, is_background_thread, all);\n}\n\nstatic bool\narena_should_decay_early(tsdn_t *tsdn, arena_t *arena, decay_t *decay,\n    background_thread_info_t *info, nstime_t *remaining_sleep,\n    size_t npages_new) {\n\tmalloc_mutex_assert_owner(tsdn, &info->mtx);\n\n\tif (malloc_mutex_trylock(tsdn, &decay->mtx)) {\n\t\treturn false;\n\t}\n\n\tif (!decay_gradually(decay)) {\n\t\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\t\treturn false;\n\t}\n\n\tnstime_init(remaining_sleep, background_thread_wakeup_time_get(info));\n\tif (nstime_compare(remaining_sleep, &decay->epoch) <= 0) {\n\t\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\t\treturn false;\n\t}\n\tnstime_subtract(remaining_sleep, &decay->epoch);\n\tif (npages_new > 0) {\n\t\tuint64_t npurge_new = decay_npages_purge_in(decay,\n\t\t    remaining_sleep, npages_new);\n\t\tinfo->npages_to_purge_new += npurge_new;\n\t}\n\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\treturn info->npages_to_purge_new >\n\t    ARENA_DEFERRED_PURGE_NPAGES_THRESHOLD;\n}\n\n/*\n * Check if deferred work needs to be done sooner than planned.\n * For decay we might want to wake up earlier because of an influx of dirty\n * pages. Rather than waiting for the previously estimated time, we\n * proactively purge those pages.\n * If the background thread sleeps indefinitely, always wake it up, because\n * some deferred work has been generated.\n */\nstatic void\narena_maybe_do_deferred_work(tsdn_t *tsdn, arena_t *arena, decay_t *decay,\n    size_t npages_new) {\n\tbackground_thread_info_t *info = arena_background_thread_info_get(\n\t    arena);\n\tif (malloc_mutex_trylock(tsdn, &info->mtx)) {\n\t\t/*\n\t\t * The background thread may hold the mutex for a long period\n\t\t * of time.  We'd like to avoid the variance on application\n\t\t * threads.  
So keep this non-blocking, and leave the work to a\n\t\t * future epoch.\n\t\t */\n\t\treturn;\n\t}\n\tif (!background_thread_is_started(info)) {\n\t\tgoto label_done;\n\t}\n\n\tnstime_t remaining_sleep;\n\tif (background_thread_indefinite_sleep(info)) {\n\t\tbackground_thread_wakeup_early(info, NULL);\n\t} else if (arena_should_decay_early(tsdn, arena, decay, info,\n\t    &remaining_sleep, npages_new)) {\n\t\tinfo->npages_to_purge_new = 0;\n\t\tbackground_thread_wakeup_early(info, &remaining_sleep);\n\t}\nlabel_done:\n\tmalloc_mutex_unlock(tsdn, &info->mtx);\n}\n\n/* Called from background threads. */\nvoid\narena_do_deferred_work(tsdn_t *tsdn, arena_t *arena) {\n\tarena_decay(tsdn, arena, true, false);\n\tpa_shard_do_deferred_work(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_slab_dalloc(tsdn_t *tsdn, arena_t *arena, edata_t *slab) {\n\tbool deferred_work_generated = false;\n\tpa_dalloc(tsdn, &arena->pa_shard, slab, &deferred_work_generated);\n\tif (deferred_work_generated) {\n\t\tarena_handle_deferred_work(tsdn, arena);\n\t}\n}\n\nstatic void\narena_bin_slabs_nonfull_insert(bin_t *bin, edata_t *slab) {\n\tassert(edata_nfree_get(slab) > 0);\n\tedata_heap_insert(&bin->slabs_nonfull, slab);\n\tif (config_stats) {\n\t\tbin->stats.nonfull_slabs++;\n\t}\n}\n\nstatic void\narena_bin_slabs_nonfull_remove(bin_t *bin, edata_t *slab) {\n\tedata_heap_remove(&bin->slabs_nonfull, slab);\n\tif (config_stats) {\n\t\tbin->stats.nonfull_slabs--;\n\t}\n}\n\nstatic edata_t *\narena_bin_slabs_nonfull_tryget(bin_t *bin) {\n\tedata_t *slab = edata_heap_remove_first(&bin->slabs_nonfull);\n\tif (slab == NULL) {\n\t\treturn NULL;\n\t}\n\tif (config_stats) {\n\t\tbin->stats.reslabs++;\n\t\tbin->stats.nonfull_slabs--;\n\t}\n\treturn slab;\n}\n\nstatic void\narena_bin_slabs_full_insert(arena_t *arena, bin_t *bin, edata_t *slab) {\n\tassert(edata_nfree_get(slab) == 0);\n\t/*\n\t *  Tracking extents is required by arena_reset, which is not allowed\n\t *  for auto arenas.  
Bypass this step to avoid touching the edata\n\t *  linkage (often results in cache misses) for auto arenas.\n\t */\n\tif (arena_is_auto(arena)) {\n\t\treturn;\n\t}\n\tedata_list_active_append(&bin->slabs_full, slab);\n}\n\nstatic void\narena_bin_slabs_full_remove(arena_t *arena, bin_t *bin, edata_t *slab) {\n\tif (arena_is_auto(arena)) {\n\t\treturn;\n\t}\n\tedata_list_active_remove(&bin->slabs_full, slab);\n}\n\nstatic void\narena_bin_reset(tsd_t *tsd, arena_t *arena, bin_t *bin) {\n\tedata_t *slab;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);\n\tif (bin->slabcur != NULL) {\n\t\tslab = bin->slabcur;\n\t\tbin->slabcur = NULL;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);\n\t\tarena_slab_dalloc(tsd_tsdn(tsd), arena, slab);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);\n\t}\n\twhile ((slab = edata_heap_remove_first(&bin->slabs_nonfull)) != NULL) {\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);\n\t\tarena_slab_dalloc(tsd_tsdn(tsd), arena, slab);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);\n\t}\n\tfor (slab = edata_list_active_first(&bin->slabs_full); slab != NULL;\n\t     slab = edata_list_active_first(&bin->slabs_full)) {\n\t\tarena_bin_slabs_full_remove(arena, bin, slab);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);\n\t\tarena_slab_dalloc(tsd_tsdn(tsd), arena, slab);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &bin->lock);\n\t}\n\tif (config_stats) {\n\t\tbin->stats.curregs = 0;\n\t\tbin->stats.curslabs = 0;\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bin->lock);\n}\n\nvoid\narena_reset(tsd_t *tsd, arena_t *arena) {\n\t/*\n\t * Locking in this function is unintuitive.  
The caller guarantees that\n\t * no concurrent operations are happening in this arena, but there are\n\t * still reasons that some locking is necessary:\n\t *\n\t * - Some of the functions in the transitive closure of calls assume\n\t *   appropriate locks are held, and in some cases these locks are\n\t *   temporarily dropped to avoid lock order reversal or deadlock due to\n\t *   reentry.\n\t * - mallctl(\"epoch\", ...) may concurrently refresh stats.  While\n\t *   strictly speaking this is a \"concurrent operation\", disallowing\n\t *   stats refreshes would impose an inconvenient burden.\n\t */\n\n\t/* Large allocations. */\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &arena->large_mtx);\n\n\tfor (edata_t *edata = edata_list_active_first(&arena->large);\n\t    edata != NULL; edata = edata_list_active_first(&arena->large)) {\n\t\tvoid *ptr = edata_base_get(edata);\n\t\tsize_t usize;\n\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &arena->large_mtx);\n\t\temap_alloc_ctx_t alloc_ctx;\n\t\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,\n\t\t    &alloc_ctx);\n\t\tassert(alloc_ctx.szind != SC_NSIZES);\n\n\t\tif (config_stats || (config_prof && opt_prof)) {\n\t\t\tusize = sz_index2size(alloc_ctx.szind);\n\t\t\tassert(usize == isalloc(tsd_tsdn(tsd), ptr));\n\t\t}\n\t\t/* Remove large allocation from prof sample set. */\n\t\tif (config_prof && opt_prof) {\n\t\t\tprof_free(tsd, ptr, usize, &alloc_ctx);\n\t\t}\n\t\tlarge_dalloc(tsd_tsdn(tsd), edata);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &arena->large_mtx);\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &arena->large_mtx);\n\n\t/* Bins. 
*/\n\tfor (unsigned i = 0; i < SC_NBINS; i++) {\n\t\tfor (unsigned j = 0; j < bin_infos[i].n_shards; j++) {\n\t\t\tarena_bin_reset(tsd, arena, arena_get_bin(arena, i, j));\n\t\t}\n\t}\n\tpa_shard_reset(tsd_tsdn(tsd), &arena->pa_shard);\n}\n\nstatic void\narena_prepare_base_deletion_sync_finish(tsd_t *tsd, malloc_mutex_t **mutexes,\n    unsigned n_mtx) {\n\tfor (unsigned i = 0; i < n_mtx; i++) {\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), mutexes[i]);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), mutexes[i]);\n\t}\n}\n\n#define ARENA_DESTROY_MAX_DELAYED_MTX 32\nstatic void\narena_prepare_base_deletion_sync(tsd_t *tsd, malloc_mutex_t *mtx,\n    malloc_mutex_t **delayed_mtx, unsigned *n_delayed) {\n\tif (!malloc_mutex_trylock(tsd_tsdn(tsd), mtx)) {\n\t\t/* No contention. */\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), mtx);\n\t\treturn;\n\t}\n\tunsigned n = *n_delayed;\n\tassert(n < ARENA_DESTROY_MAX_DELAYED_MTX);\n\t/* Add another to the batch. */\n\tdelayed_mtx[n++] = mtx;\n\n\tif (n == ARENA_DESTROY_MAX_DELAYED_MTX) {\n\t\tarena_prepare_base_deletion_sync_finish(tsd, delayed_mtx, n);\n\t\tn = 0;\n\t}\n\t*n_delayed = n;\n}\n\nstatic void\narena_prepare_base_deletion(tsd_t *tsd, base_t *base_to_destroy) {\n\t/*\n\t * In order to coalesce, emap_try_acquire_edata_neighbor will attempt to\n\t * check neighbor edata's state to determine eligibility.  This means\n\t * under certain conditions, the metadata from an arena can be accessed\n\t * w/o holding any locks from that arena.  In order to guarantee safe\n\t * memory access, the metadata and the underlying base allocator needs\n\t * to be kept alive, until all pending accesses are done.\n\t *\n\t * 1) with opt_retain, the arena boundary implies the is_head state\n\t * (tracked in the rtree leaf), and the coalesce flow will stop at the\n\t * head state branch.  
Therefore no cross-arena metadata access is\n\t * possible.\n\t *\n\t * 2) w/o opt_retain, the arena id needs to be read from the edata_t,\n\t * meaning read-only cross-arena metadata access is possible.  The\n\t * coalesce attempt will stop at the arena_id mismatch, and is always\n\t * under one of the ecache locks.  To allow safe passthrough of such\n\t * metadata accesses, the loop below will iterate through all manual\n\t * arenas' ecache locks.  As all the metadata from this base allocator\n\t * has been unlinked from the rtree, after going through all the\n\t * relevant ecache locks, it's safe to say that a) pending accesses are\n\t * all finished, and b) no new access will be generated.\n\t */\n\tif (opt_retain) {\n\t\treturn;\n\t}\n\tunsigned destroy_ind = base_ind_get(base_to_destroy);\n\tassert(destroy_ind >= manual_arena_base);\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\tmalloc_mutex_t *delayed_mtx[ARENA_DESTROY_MAX_DELAYED_MTX];\n\tunsigned n_delayed = 0, total = narenas_total_get();\n\tfor (unsigned i = 0; i < total; i++) {\n\t\tif (i == destroy_ind) {\n\t\t\tcontinue;\n\t\t}\n\t\tarena_t *arena = arena_get(tsdn, i, false);\n\t\tif (arena == NULL) {\n\t\t\tcontinue;\n\t\t}\n\t\tpac_t *pac = &arena->pa_shard.pac;\n\t\tarena_prepare_base_deletion_sync(tsd, &pac->ecache_dirty.mtx,\n\t\t    delayed_mtx, &n_delayed);\n\t\tarena_prepare_base_deletion_sync(tsd, &pac->ecache_muzzy.mtx,\n\t\t    delayed_mtx, &n_delayed);\n\t\tarena_prepare_base_deletion_sync(tsd, &pac->ecache_retained.mtx,\n\t\t    delayed_mtx, &n_delayed);\n\t}\n\tarena_prepare_base_deletion_sync_finish(tsd, delayed_mtx, n_delayed);\n}\n#undef ARENA_DESTROY_MAX_DELAYED_MTX\n\nvoid\narena_destroy(tsd_t *tsd, arena_t *arena) {\n\tassert(base_ind_get(arena->base) >= narenas_auto);\n\tassert(arena_nthreads_get(arena, false) == 0);\n\tassert(arena_nthreads_get(arena, true) == 0);\n\n\t/*\n\t * No allocations have occurred since arena_reset() was called.\n\t * Furthermore, the caller (arena_i_destroy_ctl()) 
purged all cached\n\t * extents, so only retained extents may remain and it's safe to call\n\t * pa_shard_destroy_retained.\n\t */\n\tpa_shard_destroy(tsd_tsdn(tsd), &arena->pa_shard);\n\n\t/*\n\t * Remove the arena pointer from the arenas array.  We rely on the fact\n\t * that there is no way for the application to get a dirty read from the\n\t * arenas array unless there is an inherent race in the application\n\t * involving access of an arena being concurrently destroyed.  The\n\t * application must synchronize knowledge of the arena's validity, so as\n\t * long as we use an atomic write to update the arenas array, the\n\t * application will get a clean read any time after it synchronizes\n\t * knowledge that the arena is no longer valid.\n\t */\n\tarena_set(base_ind_get(arena->base), NULL);\n\n\t/*\n\t * Destroy the base allocator, which manages all metadata ever mapped by\n\t * this arena.  The prepare function makes sure there is no longer any\n\t * pending access to the metadata in this base.\n\t */\n\tarena_prepare_base_deletion(tsd, arena->base);\n\tbase_delete(tsd_tsdn(tsd), arena->base);\n}\n\nstatic edata_t *\narena_slab_alloc(tsdn_t *tsdn, arena_t *arena, szind_t binind, unsigned binshard,\n    const bin_info_t *bin_info) {\n\tbool deferred_work_generated = false;\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tbool guarded = san_slab_extent_decide_guard(tsdn,\n\t    arena_get_ehooks(arena));\n\tedata_t *slab = pa_alloc(tsdn, &arena->pa_shard, bin_info->slab_size,\n\t    /* alignment */ PAGE, /* slab */ true, /* szind */ binind,\n\t     /* zero */ false, guarded, &deferred_work_generated);\n\n\tif (deferred_work_generated) {\n\t\tarena_handle_deferred_work(tsdn, arena);\n\t}\n\n\tif (slab == NULL) {\n\t\treturn NULL;\n\t}\n\tassert(edata_slab_get(slab));\n\n\t/* Initialize slab internals. 
*/\n\tslab_data_t *slab_data = edata_slab_data_get(slab);\n\tedata_nfree_binshard_set(slab, bin_info->nregs, binshard);\n\tbitmap_init(slab_data->bitmap, &bin_info->bitmap_info, false);\n\n\treturn slab;\n}\n\n/*\n * Before attempting the _with_fresh_slab approaches below, the _no_fresh_slab\n * variants (i.e. through slabcur and nonfull) must be tried first.\n */\nstatic void\narena_bin_refill_slabcur_with_fresh_slab(tsdn_t *tsdn, arena_t *arena,\n    bin_t *bin, szind_t binind, edata_t *fresh_slab) {\n\tmalloc_mutex_assert_owner(tsdn, &bin->lock);\n\t/* Only called after slabcur and nonfull both failed. */\n\tassert(bin->slabcur == NULL);\n\tassert(edata_heap_first(&bin->slabs_nonfull) == NULL);\n\tassert(fresh_slab != NULL);\n\n\t/* A new slab from arena_slab_alloc() */\n\tassert(edata_nfree_get(fresh_slab) == bin_infos[binind].nregs);\n\tif (config_stats) {\n\t\tbin->stats.nslabs++;\n\t\tbin->stats.curslabs++;\n\t}\n\tbin->slabcur = fresh_slab;\n}\n\n/* Refill slabcur and then alloc using the fresh slab */\nstatic void *\narena_bin_malloc_with_fresh_slab(tsdn_t *tsdn, arena_t *arena, bin_t *bin,\n    szind_t binind, edata_t *fresh_slab) {\n\tmalloc_mutex_assert_owner(tsdn, &bin->lock);\n\tarena_bin_refill_slabcur_with_fresh_slab(tsdn, arena, bin, binind,\n\t    fresh_slab);\n\n\treturn arena_slab_reg_alloc(bin->slabcur, &bin_infos[binind]);\n}\n\nstatic bool\narena_bin_refill_slabcur_no_fresh_slab(tsdn_t *tsdn, arena_t *arena,\n    bin_t *bin) {\n\tmalloc_mutex_assert_owner(tsdn, &bin->lock);\n\t/* Only called after arena_slab_reg_alloc[_batch] failed. */\n\tassert(bin->slabcur == NULL || edata_nfree_get(bin->slabcur) == 0);\n\n\tif (bin->slabcur != NULL) {\n\t\tarena_bin_slabs_full_insert(arena, bin, bin->slabcur);\n\t}\n\n\t/* Look for a usable slab. 
*/\n\tbin->slabcur = arena_bin_slabs_nonfull_tryget(bin);\n\tassert(bin->slabcur == NULL || edata_nfree_get(bin->slabcur) > 0);\n\n\treturn (bin->slabcur == NULL);\n}\n\nbin_t *\narena_bin_choose(tsdn_t *tsdn, arena_t *arena, szind_t binind,\n    unsigned *binshard_p) {\n\tunsigned binshard;\n\tif (tsdn_null(tsdn) || tsd_arena_get(tsdn_tsd(tsdn)) == NULL) {\n\t\tbinshard = 0;\n\t} else {\n\t\tbinshard = tsd_binshardsp_get(tsdn_tsd(tsdn))->binshard[binind];\n\t}\n\tassert(binshard < bin_infos[binind].n_shards);\n\tif (binshard_p != NULL) {\n\t\t*binshard_p = binshard;\n\t}\n\treturn arena_get_bin(arena, binind, binshard);\n}\n\nvoid\narena_cache_bin_fill_small(tsdn_t *tsdn, arena_t *arena,\n    cache_bin_t *cache_bin, cache_bin_info_t *cache_bin_info, szind_t binind,\n    const unsigned nfill) {\n\tassert(cache_bin_ncached_get_local(cache_bin, cache_bin_info) == 0);\n\n\tconst bin_info_t *bin_info = &bin_infos[binind];\n\n\tCACHE_BIN_PTR_ARRAY_DECLARE(ptrs, nfill);\n\tcache_bin_init_ptr_array_for_fill(cache_bin, cache_bin_info, &ptrs,\n\t    nfill);\n\t/*\n\t * Bin-local resources are used first: 1) bin->slabcur, and 2) nonfull\n\t * slabs.  After both are exhausted, new slabs will be allocated through\n\t * arena_slab_alloc().\n\t *\n\t * Bin lock is only taken / released right before / after the while(...)\n\t * refill loop, with new slab allocation (which has its own locking)\n\t * kept outside of the loop.  This setup facilitates flat combining, at\n\t * the cost of the nested loop (through goto label_refill).\n\t *\n\t * To optimize for cases with contention and limited resources\n\t * (e.g. 
hugepage-backed or non-overcommit arenas), each fill-iteration\n\t * gets one chance of slab_alloc, and a retry of bin local resources\n\t * after the slab allocation (regardless of whether slab_alloc failed,\n\t * because the bin lock is dropped during the slab allocation).\n\t *\n\t * In other words, new slab allocation is allowed, as long as there was\n\t * progress since the previous slab_alloc.  This is tracked with\n\t * made_progress below, initialized to true to jump start the first\n\t * iteration.\n\t *\n\t * Put differently, the loop will only terminate early (i.e. stop\n\t * with filled < nfill) after going through the three steps: a) bin\n\t * local exhausted, b) unlock and slab_alloc returns null, c) re-lock\n\t * and bin local fails again.\n\t */\n\tbool made_progress = true;\n\tedata_t *fresh_slab = NULL;\n\tbool alloc_and_retry = false;\n\tunsigned filled = 0;\n\tunsigned binshard;\n\tbin_t *bin = arena_bin_choose(tsdn, arena, binind, &binshard);\n\nlabel_refill:\n\tmalloc_mutex_lock(tsdn, &bin->lock);\n\n\twhile (filled < nfill) {\n\t\t/* Try batch-fill from slabcur first. */\n\t\tedata_t *slabcur = bin->slabcur;\n\t\tif (slabcur != NULL && edata_nfree_get(slabcur) > 0) {\n\t\t\tunsigned tofill = nfill - filled;\n\t\t\tunsigned nfree = edata_nfree_get(slabcur);\n\t\t\tunsigned cnt = tofill < nfree ? tofill : nfree;\n\n\t\t\tarena_slab_reg_alloc_batch(slabcur, bin_info, cnt,\n\t\t\t    &ptrs.ptr[filled]);\n\t\t\tmade_progress = true;\n\t\t\tfilled += cnt;\n\t\t\tcontinue;\n\t\t}\n\t\t/* Next try refilling slabcur from nonfull slabs. */\n\t\tif (!arena_bin_refill_slabcur_no_fresh_slab(tsdn, arena, bin)) {\n\t\t\tassert(bin->slabcur != NULL);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Then see if a new slab was reserved already. 
*/\n\t\tif (fresh_slab != NULL) {\n\t\t\tarena_bin_refill_slabcur_with_fresh_slab(tsdn, arena,\n\t\t\t    bin, binind, fresh_slab);\n\t\t\tassert(bin->slabcur != NULL);\n\t\t\tfresh_slab = NULL;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Try slab_alloc if made progress (or never did slab_alloc). */\n\t\tif (made_progress) {\n\t\t\tassert(bin->slabcur == NULL);\n\t\t\tassert(fresh_slab == NULL);\n\t\t\talloc_and_retry = true;\n\t\t\t/* Alloc a new slab then come back. */\n\t\t\tbreak;\n\t\t}\n\n\t\t/* OOM. */\n\n\t\tassert(fresh_slab == NULL);\n\t\tassert(!alloc_and_retry);\n\t\tbreak;\n\t} /* while (filled < nfill) loop. */\n\n\tif (config_stats && !alloc_and_retry) {\n\t\tbin->stats.nmalloc += filled;\n\t\tbin->stats.nrequests += cache_bin->tstats.nrequests;\n\t\tbin->stats.curregs += filled;\n\t\tbin->stats.nfills++;\n\t\tcache_bin->tstats.nrequests = 0;\n\t}\n\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\n\tif (alloc_and_retry) {\n\t\tassert(fresh_slab == NULL);\n\t\tassert(filled < nfill);\n\t\tassert(made_progress);\n\n\t\tfresh_slab = arena_slab_alloc(tsdn, arena, binind, binshard,\n\t\t    bin_info);\n\t\t/* fresh_slab NULL case handled in the for loop. */\n\n\t\talloc_and_retry = false;\n\t\tmade_progress = false;\n\t\tgoto label_refill;\n\t}\n\tassert(filled == nfill || (fresh_slab == NULL && !made_progress));\n\n\t/* Release if allocated but not used. 
*/\n\tif (fresh_slab != NULL) {\n\t\tassert(edata_nfree_get(fresh_slab) == bin_info->nregs);\n\t\tarena_slab_dalloc(tsdn, arena, fresh_slab);\n\t\tfresh_slab = NULL;\n\t}\n\n\tcache_bin_finish_fill(cache_bin, cache_bin_info, &ptrs, filled);\n\tarena_decay_tick(tsdn, arena);\n}\n\nsize_t\narena_fill_small_fresh(tsdn_t *tsdn, arena_t *arena, szind_t binind,\n    void **ptrs, size_t nfill, bool zero) {\n\tassert(binind < SC_NBINS);\n\tconst bin_info_t *bin_info = &bin_infos[binind];\n\tconst size_t nregs = bin_info->nregs;\n\tassert(nregs > 0);\n\tconst size_t usize = bin_info->reg_size;\n\n\tconst bool manual_arena = !arena_is_auto(arena);\n\tunsigned binshard;\n\tbin_t *bin = arena_bin_choose(tsdn, arena, binind, &binshard);\n\n\tsize_t nslab = 0;\n\tsize_t filled = 0;\n\tedata_t *slab = NULL;\n\tedata_list_active_t fulls;\n\tedata_list_active_init(&fulls);\n\n\twhile (filled < nfill && (slab = arena_slab_alloc(tsdn, arena, binind,\n\t    binshard, bin_info)) != NULL) {\n\t\tassert((size_t)edata_nfree_get(slab) == nregs);\n\t\t++nslab;\n\t\tsize_t batch = nfill - filled;\n\t\tif (batch > nregs) {\n\t\t\tbatch = nregs;\n\t\t}\n\t\tassert(batch > 0);\n\t\tarena_slab_reg_alloc_batch(slab, bin_info, (unsigned)batch,\n\t\t    &ptrs[filled]);\n\t\tassert(edata_addr_get(slab) == ptrs[filled]);\n\t\tif (zero) {\n\t\t\tmemset(ptrs[filled], 0, batch * usize);\n\t\t}\n\t\tfilled += batch;\n\t\tif (batch == nregs) {\n\t\t\tif (manual_arena) {\n\t\t\t\tedata_list_active_append(&fulls, slab);\n\t\t\t}\n\t\t\tslab = NULL;\n\t\t}\n\t}\n\n\tmalloc_mutex_lock(tsdn, &bin->lock);\n\t/*\n\t * Only the last slab can be non-empty, and the last slab is non-empty\n\t * iff slab != NULL.\n\t */\n\tif (slab != NULL) {\n\t\tarena_bin_lower_slab(tsdn, arena, slab, bin);\n\t}\n\tif (manual_arena) {\n\t\tedata_list_active_concat(&bin->slabs_full, &fulls);\n\t}\n\tassert(edata_list_active_empty(&fulls));\n\tif (config_stats) {\n\t\tbin->stats.nslabs += nslab;\n\t\tbin->stats.curslabs += 
nslab;\n\t\tbin->stats.nmalloc += filled;\n\t\tbin->stats.nrequests += filled;\n\t\tbin->stats.curregs += filled;\n\t}\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\n\tarena_decay_tick(tsdn, arena);\n\treturn filled;\n}\n\n/*\n * Without allocating a new slab, try arena_slab_reg_alloc() and re-fill\n * bin->slabcur if necessary.\n */\nstatic void *\narena_bin_malloc_no_fresh_slab(tsdn_t *tsdn, arena_t *arena, bin_t *bin,\n    szind_t binind) {\n\tmalloc_mutex_assert_owner(tsdn, &bin->lock);\n\tif (bin->slabcur == NULL || edata_nfree_get(bin->slabcur) == 0) {\n\t\tif (arena_bin_refill_slabcur_no_fresh_slab(tsdn, arena, bin)) {\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tassert(bin->slabcur != NULL && edata_nfree_get(bin->slabcur) > 0);\n\treturn arena_slab_reg_alloc(bin->slabcur, &bin_infos[binind]);\n}\n\nstatic void *\narena_malloc_small(tsdn_t *tsdn, arena_t *arena, szind_t binind, bool zero) {\n\tassert(binind < SC_NBINS);\n\tconst bin_info_t *bin_info = &bin_infos[binind];\n\tsize_t usize = sz_index2size(binind);\n\tunsigned binshard;\n\tbin_t *bin = arena_bin_choose(tsdn, arena, binind, &binshard);\n\n\tmalloc_mutex_lock(tsdn, &bin->lock);\n\tedata_t *fresh_slab = NULL;\n\tvoid *ret = arena_bin_malloc_no_fresh_slab(tsdn, arena, bin, binind);\n\tif (ret == NULL) {\n\t\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\t\t/******************************/\n\t\tfresh_slab = arena_slab_alloc(tsdn, arena, binind, binshard,\n\t\t    bin_info);\n\t\t/********************************/\n\t\tmalloc_mutex_lock(tsdn, &bin->lock);\n\t\t/* Retry since the lock was dropped. 
*/\n\t\tret = arena_bin_malloc_no_fresh_slab(tsdn, arena, bin, binind);\n\t\tif (ret == NULL) {\n\t\t\tif (fresh_slab == NULL) {\n\t\t\t\t/* OOM */\n\t\t\t\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tret = arena_bin_malloc_with_fresh_slab(tsdn, arena, bin,\n\t\t\t    binind, fresh_slab);\n\t\t\tfresh_slab = NULL;\n\t\t}\n\t}\n\tif (config_stats) {\n\t\tbin->stats.nmalloc++;\n\t\tbin->stats.nrequests++;\n\t\tbin->stats.curregs++;\n\t}\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\n\tif (fresh_slab != NULL) {\n\t\tarena_slab_dalloc(tsdn, arena, fresh_slab);\n\t}\n\tif (zero) {\n\t\tmemset(ret, 0, usize);\n\t}\n\tarena_decay_tick(tsdn, arena);\n\n\treturn ret;\n}\n\nvoid *\narena_malloc_hard(tsdn_t *tsdn, arena_t *arena, size_t size, szind_t ind,\n    bool zero) {\n\tassert(!tsdn_null(tsdn) || arena != NULL);\n\n\tif (likely(!tsdn_null(tsdn))) {\n\t\tarena = arena_choose_maybe_huge(tsdn_tsd(tsdn), arena, size);\n\t}\n\tif (unlikely(arena == NULL)) {\n\t\treturn NULL;\n\t}\n\n\tif (likely(size <= SC_SMALL_MAXCLASS)) {\n\t\treturn arena_malloc_small(tsdn, arena, ind, zero);\n\t}\n\treturn large_malloc(tsdn, arena, sz_index2size(ind), zero);\n}\n\nvoid *\narena_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,\n    bool zero, tcache_t *tcache) {\n\tvoid *ret;\n\n\tif (usize <= SC_SMALL_MAXCLASS) {\n\t\t/* Small; alignment doesn't require special slab placement. 
*/\n\n\t\t/* usize should be a result of sz_sa2u() */\n\t\tassert((usize & (alignment - 1)) == 0);\n\n\t\t/*\n\t\t * Small usize can't come from an alignment larger than a page.\n\t\t */\n\t\tassert(alignment <= PAGE);\n\n\t\tret = arena_malloc(tsdn, arena, usize, sz_size2index(usize),\n\t\t    zero, tcache, true);\n\t} else {\n\t\tif (likely(alignment <= CACHELINE)) {\n\t\t\tret = large_malloc(tsdn, arena, usize, zero);\n\t\t} else {\n\t\t\tret = large_palloc(tsdn, arena, usize, alignment, zero);\n\t\t}\n\t}\n\treturn ret;\n}\n\nvoid\narena_prof_promote(tsdn_t *tsdn, void *ptr, size_t usize) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\tassert(isalloc(tsdn, ptr) == SC_LARGE_MINCLASS);\n\tassert(usize <= SC_SMALL_MAXCLASS);\n\n\tif (config_opt_safety_checks) {\n\t\tsafety_check_set_redzone(ptr, usize, SC_LARGE_MINCLASS);\n\t}\n\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\n\tszind_t szind = sz_size2index(usize);\n\tedata_szind_set(edata, szind);\n\temap_remap(tsdn, &arena_emap_global, edata, szind, /* slab */ false);\n\n\tassert(isalloc(tsdn, ptr) == usize);\n}\n\nstatic size_t\narena_prof_demote(tsdn_t *tsdn, edata_t *edata, const void *ptr) {\n\tcassert(config_prof);\n\tassert(ptr != NULL);\n\n\tedata_szind_set(edata, SC_NBINS);\n\temap_remap(tsdn, &arena_emap_global, edata, SC_NBINS, /* slab */ false);\n\n\tassert(isalloc(tsdn, ptr) == SC_LARGE_MINCLASS);\n\n\treturn SC_LARGE_MINCLASS;\n}\n\nvoid\narena_dalloc_promoted(tsdn_t *tsdn, void *ptr, tcache_t *tcache,\n    bool slow_path) {\n\tcassert(config_prof);\n\tassert(opt_prof);\n\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tsize_t usize = edata_usize_get(edata);\n\tsize_t bumped_usize = arena_prof_demote(tsdn, edata, ptr);\n\tif (config_opt_safety_checks && usize < SC_LARGE_MINCLASS) {\n\t\t/*\n\t\t * Currently, we only do redzoning for small sampled\n\t\t * allocations.\n\t\t */\n\t\tassert(bumped_usize == 
SC_LARGE_MINCLASS);\n\t\tsafety_check_verify_redzone(ptr, usize, bumped_usize);\n\t}\n\tif (bumped_usize <= tcache_maxclass && tcache != NULL) {\n\t\ttcache_dalloc_large(tsdn_tsd(tsdn), tcache, ptr,\n\t\t    sz_size2index(bumped_usize), slow_path);\n\t} else {\n\t\tlarge_dalloc(tsdn, edata);\n\t}\n}\n\nstatic void\narena_dissociate_bin_slab(arena_t *arena, edata_t *slab, bin_t *bin) {\n\t/* Dissociate slab from bin. */\n\tif (slab == bin->slabcur) {\n\t\tbin->slabcur = NULL;\n\t} else {\n\t\tszind_t binind = edata_szind_get(slab);\n\t\tconst bin_info_t *bin_info = &bin_infos[binind];\n\n\t\t/*\n\t\t * The following block's conditional is necessary because if the\n\t\t * slab only contains one region, then it never gets inserted\n\t\t * into the non-full slabs heap.\n\t\t */\n\t\tif (bin_info->nregs == 1) {\n\t\t\tarena_bin_slabs_full_remove(arena, bin, slab);\n\t\t} else {\n\t\t\tarena_bin_slabs_nonfull_remove(bin, slab);\n\t\t}\n\t}\n}\n\nstatic void\narena_bin_lower_slab(tsdn_t *tsdn, arena_t *arena, edata_t *slab,\n    bin_t *bin) {\n\tassert(edata_nfree_get(slab) > 0);\n\n\t/*\n\t * Make sure that if bin->slabcur is non-NULL, it refers to the\n\t * oldest/lowest non-full slab.  It is okay to NULL slabcur out rather\n\t * than proactively keeping it pointing at the oldest/lowest non-full\n\t * slab.\n\t */\n\tif (bin->slabcur != NULL && edata_snad_comp(bin->slabcur, slab) > 0) {\n\t\t/* Switch slabcur. 
*/\n\t\tif (edata_nfree_get(bin->slabcur) > 0) {\n\t\t\tarena_bin_slabs_nonfull_insert(bin, bin->slabcur);\n\t\t} else {\n\t\t\tarena_bin_slabs_full_insert(arena, bin, bin->slabcur);\n\t\t}\n\t\tbin->slabcur = slab;\n\t\tif (config_stats) {\n\t\t\tbin->stats.reslabs++;\n\t\t}\n\t} else {\n\t\tarena_bin_slabs_nonfull_insert(bin, slab);\n\t}\n}\n\nstatic void\narena_dalloc_bin_slab_prepare(tsdn_t *tsdn, edata_t *slab, bin_t *bin) {\n\tmalloc_mutex_assert_owner(tsdn, &bin->lock);\n\n\tassert(slab != bin->slabcur);\n\tif (config_stats) {\n\t\tbin->stats.curslabs--;\n\t}\n}\n\nvoid\narena_dalloc_bin_locked_handle_newly_empty(tsdn_t *tsdn, arena_t *arena,\n    edata_t *slab, bin_t *bin) {\n\tarena_dissociate_bin_slab(arena, slab, bin);\n\tarena_dalloc_bin_slab_prepare(tsdn, slab, bin);\n}\n\nvoid\narena_dalloc_bin_locked_handle_newly_nonempty(tsdn_t *tsdn, arena_t *arena,\n    edata_t *slab, bin_t *bin) {\n\tarena_bin_slabs_full_remove(arena, bin, slab);\n\tarena_bin_lower_slab(tsdn, arena, slab, bin);\n}\n\nstatic void\narena_dalloc_bin(tsdn_t *tsdn, arena_t *arena, edata_t *edata, void *ptr) {\n\tszind_t binind = edata_szind_get(edata);\n\tunsigned binshard = edata_binshard_get(edata);\n\tbin_t *bin = arena_get_bin(arena, binind, binshard);\n\n\tmalloc_mutex_lock(tsdn, &bin->lock);\n\tarena_dalloc_bin_locked_info_t info;\n\tarena_dalloc_bin_locked_begin(&info, binind);\n\tbool ret = arena_dalloc_bin_locked_step(tsdn, arena, bin,\n\t    &info, binind, edata, ptr);\n\tarena_dalloc_bin_locked_finish(tsdn, arena, bin, &info);\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\n\tif (ret) {\n\t\tarena_slab_dalloc(tsdn, arena, edata);\n\t}\n}\n\nvoid\narena_dalloc_small(tsdn_t *tsdn, void *ptr) {\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tarena_t *arena = arena_get_from_edata(edata);\n\n\tarena_dalloc_bin(tsdn, arena, edata, ptr);\n\tarena_decay_tick(tsdn, arena);\n}\n\nbool\narena_ralloc_no_move(tsdn_t *tsdn, void *ptr, size_t oldsize, size_t size,\n  
  size_t extra, bool zero, size_t *newsize) {\n\tbool ret;\n\t/* Calls with non-zero extra had to clamp extra. */\n\tassert(extra == 0 || size + extra <= SC_LARGE_MAXCLASS);\n\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tif (unlikely(size > SC_LARGE_MAXCLASS)) {\n\t\tret = true;\n\t\tgoto done;\n\t}\n\n\tsize_t usize_min = sz_s2u(size);\n\tsize_t usize_max = sz_s2u(size + extra);\n\tif (likely(oldsize <= SC_SMALL_MAXCLASS && usize_min\n\t    <= SC_SMALL_MAXCLASS)) {\n\t\t/*\n\t\t * Avoid moving the allocation if the size class can be left the\n\t\t * same.\n\t\t */\n\t\tassert(bin_infos[sz_size2index(oldsize)].reg_size ==\n\t\t    oldsize);\n\t\tif ((usize_max > SC_SMALL_MAXCLASS\n\t\t    || sz_size2index(usize_max) != sz_size2index(oldsize))\n\t\t    && (size > oldsize || usize_max < oldsize)) {\n\t\t\tret = true;\n\t\t\tgoto done;\n\t\t}\n\n\t\tarena_t *arena = arena_get_from_edata(edata);\n\t\tarena_decay_tick(tsdn, arena);\n\t\tret = false;\n\t} else if (oldsize >= SC_LARGE_MINCLASS\n\t    && usize_max >= SC_LARGE_MINCLASS) {\n\t\tret = large_ralloc_no_move(tsdn, edata, usize_min, usize_max,\n\t\t    zero);\n\t} else {\n\t\tret = true;\n\t}\ndone:\n\tassert(edata == emap_edata_lookup(tsdn, &arena_emap_global, ptr));\n\t*newsize = edata_usize_get(edata);\n\n\treturn ret;\n}\n\nstatic void *\narena_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize,\n    size_t alignment, bool zero, tcache_t *tcache) {\n\tif (alignment == 0) {\n\t\treturn arena_malloc(tsdn, arena, usize, sz_size2index(usize),\n\t\t    zero, tcache, true);\n\t}\n\tusize = sz_sa2u(usize, alignment);\n\tif (unlikely(usize == 0 || usize > SC_LARGE_MAXCLASS)) {\n\t\treturn NULL;\n\t}\n\treturn ipalloct(tsdn, usize, alignment, zero, tcache, arena);\n}\n\nvoid *\narena_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t oldsize,\n    size_t size, size_t alignment, bool zero, tcache_t *tcache,\n    hook_ralloc_args_t *hook_args) {\n\tsize_t usize = alignment == 0 
? sz_s2u(size) : sz_sa2u(size, alignment);\n\tif (unlikely(usize == 0 || size > SC_LARGE_MAXCLASS)) {\n\t\treturn NULL;\n\t}\n\n\tif (likely(usize <= SC_SMALL_MAXCLASS)) {\n\t\t/* Try to avoid moving the allocation. */\n\t\tUNUSED size_t newsize;\n\t\tif (!arena_ralloc_no_move(tsdn, ptr, oldsize, usize, 0, zero,\n\t\t    &newsize)) {\n\t\t\thook_invoke_expand(hook_args->is_realloc\n\t\t\t    ? hook_expand_realloc : hook_expand_rallocx,\n\t\t\t    ptr, oldsize, usize, (uintptr_t)ptr,\n\t\t\t    hook_args->args);\n\t\t\treturn ptr;\n\t\t}\n\t}\n\n\tif (oldsize >= SC_LARGE_MINCLASS\n\t    && usize >= SC_LARGE_MINCLASS) {\n\t\treturn large_ralloc(tsdn, arena, ptr, usize,\n\t\t    alignment, zero, tcache, hook_args);\n\t}\n\n\t/*\n\t * size and oldsize are different enough that we need to move the\n\t * object.  In that case, fall back to allocating new space and copying.\n\t */\n\tvoid *ret = arena_ralloc_move_helper(tsdn, arena, usize, alignment,\n\t    zero, tcache);\n\tif (ret == NULL) {\n\t\treturn NULL;\n\t}\n\n\thook_invoke_alloc(hook_args->is_realloc\n\t    ? hook_alloc_realloc : hook_alloc_rallocx, ret, (uintptr_t)ret,\n\t    hook_args->args);\n\thook_invoke_dalloc(hook_args->is_realloc\n\t    ? hook_dalloc_realloc : hook_dalloc_rallocx, ptr, hook_args->args);\n\n\t/*\n\t * Junk/zero-filling was already done by\n\t * ipalloc()/arena_malloc().\n\t */\n\tsize_t copysize = (usize < oldsize) ? usize : oldsize;\n\tmemcpy(ret, ptr, copysize);\n\tisdalloct(tsdn, ptr, oldsize, tcache, NULL, true);\n\treturn ret;\n}\n\nehooks_t *\narena_get_ehooks(arena_t *arena) {\n\treturn base_ehooks_get(arena->base);\n}\n\nextent_hooks_t *\narena_set_extent_hooks(tsd_t *tsd, arena_t *arena,\n    extent_hooks_t *extent_hooks) {\n\tbackground_thread_info_t *info;\n\tif (have_background_thread) {\n\t\tinfo = arena_background_thread_info_get(arena);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t}\n\t/* Stop using the HPA now that custom hooks are installed. 
*/\n\tpa_shard_disable_hpa(tsd_tsdn(tsd), &arena->pa_shard);\n\textent_hooks_t *ret = base_extent_hooks_set(arena->base, extent_hooks);\n\tif (have_background_thread) {\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\t}\n\n\treturn ret;\n}\n\ndss_prec_t\narena_dss_prec_get(arena_t *arena) {\n\treturn (dss_prec_t)atomic_load_u(&arena->dss_prec, ATOMIC_ACQUIRE);\n}\n\nbool\narena_dss_prec_set(arena_t *arena, dss_prec_t dss_prec) {\n\tif (!have_dss) {\n\t\treturn (dss_prec != dss_prec_disabled);\n\t}\n\tatomic_store_u(&arena->dss_prec, (unsigned)dss_prec, ATOMIC_RELEASE);\n\treturn false;\n}\n\nssize_t\narena_dirty_decay_ms_default_get(void) {\n\treturn atomic_load_zd(&dirty_decay_ms_default, ATOMIC_RELAXED);\n}\n\nbool\narena_dirty_decay_ms_default_set(ssize_t decay_ms) {\n\tif (!decay_ms_valid(decay_ms)) {\n\t\treturn true;\n\t}\n\tatomic_store_zd(&dirty_decay_ms_default, decay_ms, ATOMIC_RELAXED);\n\treturn false;\n}\n\nssize_t\narena_muzzy_decay_ms_default_get(void) {\n\treturn atomic_load_zd(&muzzy_decay_ms_default, ATOMIC_RELAXED);\n}\n\nbool\narena_muzzy_decay_ms_default_set(ssize_t decay_ms) {\n\tif (!decay_ms_valid(decay_ms)) {\n\t\treturn true;\n\t}\n\tatomic_store_zd(&muzzy_decay_ms_default, decay_ms, ATOMIC_RELAXED);\n\treturn false;\n}\n\nbool\narena_retain_grow_limit_get_set(tsd_t *tsd, arena_t *arena, size_t *old_limit,\n    size_t *new_limit) {\n\tassert(opt_retain);\n\treturn pac_retain_grow_limit_get_set(tsd_tsdn(tsd),\n\t    &arena->pa_shard.pac, old_limit, new_limit);\n}\n\nunsigned\narena_nthreads_get(arena_t *arena, bool internal) {\n\treturn atomic_load_u(&arena->nthreads[internal], ATOMIC_RELAXED);\n}\n\nvoid\narena_nthreads_inc(arena_t *arena, bool internal) {\n\tatomic_fetch_add_u(&arena->nthreads[internal], 1, ATOMIC_RELAXED);\n}\n\nvoid\narena_nthreads_dec(arena_t *arena, bool internal) {\n\tatomic_fetch_sub_u(&arena->nthreads[internal], 1, ATOMIC_RELAXED);\n}\n\narena_t *\narena_new(tsdn_t *tsdn, unsigned ind, const arena_config_t 
*config) {\n\tarena_t *arena;\n\tbase_t *base;\n\tunsigned i;\n\n\tif (ind == 0) {\n\t\tbase = b0get();\n\t} else {\n\t\tbase = base_new(tsdn, ind, config->extent_hooks,\n\t\t    config->metadata_use_hooks);\n\t\tif (base == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tsize_t arena_size = sizeof(arena_t) + sizeof(bin_t) * nbins_total;\n\tarena = (arena_t *)base_alloc(tsdn, base, arena_size, CACHELINE);\n\tif (arena == NULL) {\n\t\tgoto label_error;\n\t}\n\n\tatomic_store_u(&arena->nthreads[0], 0, ATOMIC_RELAXED);\n\tatomic_store_u(&arena->nthreads[1], 0, ATOMIC_RELAXED);\n\tarena->last_thd = NULL;\n\n\tif (config_stats) {\n\t\tif (arena_stats_init(tsdn, &arena->stats)) {\n\t\t\tgoto label_error;\n\t\t}\n\n\t\tql_new(&arena->tcache_ql);\n\t\tql_new(&arena->cache_bin_array_descriptor_ql);\n\t\tif (malloc_mutex_init(&arena->tcache_ql_mtx, \"tcache_ql\",\n\t\t    WITNESS_RANK_TCACHE_QL, malloc_mutex_rank_exclusive)) {\n\t\t\tgoto label_error;\n\t\t}\n\t}\n\n\tatomic_store_u(&arena->dss_prec, (unsigned)extent_dss_prec_get(),\n\t    ATOMIC_RELAXED);\n\n\tedata_list_active_init(&arena->large);\n\tif (malloc_mutex_init(&arena->large_mtx, \"arena_large\",\n\t    WITNESS_RANK_ARENA_LARGE, malloc_mutex_rank_exclusive)) {\n\t\tgoto label_error;\n\t}\n\n\tnstime_t cur_time;\n\tnstime_init_update(&cur_time);\n\tif (pa_shard_init(tsdn, &arena->pa_shard, &arena_pa_central_global,\n\t    &arena_emap_global, base, ind, &arena->stats.pa_shard_stats,\n\t    LOCKEDINT_MTX(arena->stats.mtx), &cur_time, oversize_threshold,\n\t    arena_dirty_decay_ms_default_get(),\n\t    arena_muzzy_decay_ms_default_get())) {\n\t\tgoto label_error;\n\t}\n\n\t/* Initialize bins. */\n\tatomic_store_u(&arena->binshard_next, 0, ATOMIC_RELEASE);\n\tfor (i = 0; i < nbins_total; i++) {\n\t\tbool err = bin_init(&arena->bins[i]);\n\t\tif (err) {\n\t\t\tgoto label_error;\n\t\t}\n\t}\n\n\tarena->base = base;\n\t/* Set arena before creating background threads. 
*/\n\tarena_set(ind, arena);\n\tarena->ind = ind;\n\n\tnstime_init_update(&arena->create_time);\n\n\t/*\n\t * We turn on the HPA if configured to do so.  There are two exceptions:\n\t * - Custom extent hooks (we should only return memory allocated from\n\t *   them in that case).\n\t * - Arena 0 initialization.  In this case, we're mid-bootstrapping, and\n\t *   so arena_hpa_global is not yet initialized.\n\t */\n\tif (opt_hpa && ehooks_are_default(base_ehooks_get(base)) && ind != 0) {\n\t\thpa_shard_opts_t hpa_shard_opts = opt_hpa_opts;\n\t\thpa_shard_opts.deferral_allowed = background_thread_enabled();\n\t\tif (pa_shard_enable_hpa(tsdn, &arena->pa_shard,\n\t\t    &hpa_shard_opts, &opt_hpa_sec_opts)) {\n\t\t\tgoto label_error;\n\t\t}\n\t}\n\n\t/* We don't support reentrancy for arena 0 bootstrapping. */\n\tif (ind != 0) {\n\t\t/*\n\t\t * If we're here, then arena 0 already exists, so bootstrapping\n\t\t * is done enough that we should have tsd.\n\t\t */\n\t\tassert(!tsdn_null(tsdn));\n\t\tpre_reentrancy(tsdn_tsd(tsdn), arena);\n\t\tif (test_hooks_arena_new_hook) {\n\t\t\ttest_hooks_arena_new_hook();\n\t\t}\n\t\tpost_reentrancy(tsdn_tsd(tsdn));\n\t}\n\n\treturn arena;\nlabel_error:\n\tif (ind != 0) {\n\t\tbase_delete(tsdn, base);\n\t}\n\treturn NULL;\n}\n\narena_t *\narena_choose_huge(tsd_t *tsd) {\n\t/* huge_arena_ind can be 0 during init (will use a0). */\n\tif (huge_arena_ind == 0) {\n\t\tassert(!malloc_initialized());\n\t}\n\n\tarena_t *huge_arena = arena_get(tsd_tsdn(tsd), huge_arena_ind, false);\n\tif (huge_arena == NULL) {\n\t\t/* Create the huge arena on demand. 
*/\n\t\tassert(huge_arena_ind != 0);\n\t\thuge_arena = arena_get(tsd_tsdn(tsd), huge_arena_ind, true);\n\t\tif (huge_arena == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t\t/*\n\t\t * Purge eagerly for huge allocations, because: 1) the number\n\t\t * of huge allocations is usually small, which means ticker-based\n\t\t * decay is not reliable; and 2) less immediate reuse is\n\t\t * expected for huge allocations.\n\t\t */\n\t\tif (arena_dirty_decay_ms_default_get() > 0) {\n\t\t\tarena_decay_ms_set(tsd_tsdn(tsd), huge_arena,\n\t\t\t    extent_state_dirty, 0);\n\t\t}\n\t\tif (arena_muzzy_decay_ms_default_get() > 0) {\n\t\t\tarena_decay_ms_set(tsd_tsdn(tsd), huge_arena,\n\t\t\t    extent_state_muzzy, 0);\n\t\t}\n\t}\n\n\treturn huge_arena;\n}\n\nbool\narena_init_huge(void) {\n\tbool huge_enabled;\n\n\t/* The threshold should be a large size class. */\n\tif (opt_oversize_threshold > SC_LARGE_MAXCLASS ||\n\t    opt_oversize_threshold < SC_LARGE_MINCLASS) {\n\t\topt_oversize_threshold = 0;\n\t\toversize_threshold = SC_LARGE_MAXCLASS + PAGE;\n\t\thuge_enabled = false;\n\t} else {\n\t\t/* Reserve the index for the huge arena. 
*/\n\t\thuge_arena_ind = narenas_total_get();\n\t\toversize_threshold = opt_oversize_threshold;\n\t\thuge_enabled = true;\n\t}\n\n\treturn huge_enabled;\n}\n\nbool\narena_is_huge(unsigned arena_ind) {\n\tif (huge_arena_ind == 0) {\n\t\treturn false;\n\t}\n\treturn (arena_ind == huge_arena_ind);\n}\n\nbool\narena_boot(sc_data_t *sc_data, base_t *base, bool hpa) {\n\tarena_dirty_decay_ms_default_set(opt_dirty_decay_ms);\n\tarena_muzzy_decay_ms_default_set(opt_muzzy_decay_ms);\n\tfor (unsigned i = 0; i < SC_NBINS; i++) {\n\t\tsc_t *sc = &sc_data->sc[i];\n\t\tdiv_init(&arena_binind_div_info[i],\n\t\t    (1U << sc->lg_base) + (sc->ndelta << sc->lg_delta));\n\t}\n\n\tuint32_t cur_offset = (uint32_t)offsetof(arena_t, bins);\n\tfor (szind_t i = 0; i < SC_NBINS; i++) {\n\t\tarena_bin_offsets[i] = cur_offset;\n\t\tnbins_total += bin_infos[i].n_shards;\n\t\tcur_offset += (uint32_t)(bin_infos[i].n_shards * sizeof(bin_t));\n\t}\n\treturn pa_central_init(&arena_pa_central_global, base, hpa,\n\t    &hpa_hooks_default);\n}\n\nvoid\narena_prefork0(tsdn_t *tsdn, arena_t *arena) {\n\tpa_shard_prefork0(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_prefork1(tsdn_t *tsdn, arena_t *arena) {\n\tif (config_stats) {\n\t\tmalloc_mutex_prefork(tsdn, &arena->tcache_ql_mtx);\n\t}\n}\n\nvoid\narena_prefork2(tsdn_t *tsdn, arena_t *arena) {\n\tpa_shard_prefork2(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_prefork3(tsdn_t *tsdn, arena_t *arena) {\n\tpa_shard_prefork3(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_prefork4(tsdn_t *tsdn, arena_t *arena) {\n\tpa_shard_prefork4(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_prefork5(tsdn_t *tsdn, arena_t *arena) {\n\tpa_shard_prefork5(tsdn, &arena->pa_shard);\n}\n\nvoid\narena_prefork6(tsdn_t *tsdn, arena_t *arena) {\n\tbase_prefork(tsdn, arena->base);\n}\n\nvoid\narena_prefork7(tsdn_t *tsdn, arena_t *arena) {\n\tmalloc_mutex_prefork(tsdn, &arena->large_mtx);\n}\n\nvoid\narena_prefork8(tsdn_t *tsdn, arena_t *arena) {\n\tfor (unsigned i = 0; i < nbins_total; i++) 
{\n\t\tbin_prefork(tsdn, &arena->bins[i]);\n\t}\n}\n\nvoid\narena_postfork_parent(tsdn_t *tsdn, arena_t *arena) {\n\tfor (unsigned i = 0; i < nbins_total; i++) {\n\t\tbin_postfork_parent(tsdn, &arena->bins[i]);\n\t}\n\n\tmalloc_mutex_postfork_parent(tsdn, &arena->large_mtx);\n\tbase_postfork_parent(tsdn, arena->base);\n\tpa_shard_postfork_parent(tsdn, &arena->pa_shard);\n\tif (config_stats) {\n\t\tmalloc_mutex_postfork_parent(tsdn, &arena->tcache_ql_mtx);\n\t}\n}\n\nvoid\narena_postfork_child(tsdn_t *tsdn, arena_t *arena) {\n\tatomic_store_u(&arena->nthreads[0], 0, ATOMIC_RELAXED);\n\tatomic_store_u(&arena->nthreads[1], 0, ATOMIC_RELAXED);\n\tif (tsd_arena_get(tsdn_tsd(tsdn)) == arena) {\n\t\tarena_nthreads_inc(arena, false);\n\t}\n\tif (tsd_iarena_get(tsdn_tsd(tsdn)) == arena) {\n\t\tarena_nthreads_inc(arena, true);\n\t}\n\tif (config_stats) {\n\t\tql_new(&arena->tcache_ql);\n\t\tql_new(&arena->cache_bin_array_descriptor_ql);\n\t\ttcache_slow_t *tcache_slow = tcache_slow_get(tsdn_tsd(tsdn));\n\t\tif (tcache_slow != NULL && tcache_slow->arena == arena) {\n\t\t\ttcache_t *tcache = tcache_slow->tcache;\n\t\t\tql_elm_new(tcache_slow, link);\n\t\t\tql_tail_insert(&arena->tcache_ql, tcache_slow, link);\n\t\t\tcache_bin_array_descriptor_init(\n\t\t\t    &tcache_slow->cache_bin_array_descriptor,\n\t\t\t    tcache->bins);\n\t\t\tql_tail_insert(&arena->cache_bin_array_descriptor_ql,\n\t\t\t    &tcache_slow->cache_bin_array_descriptor, link);\n\t\t}\n\t}\n\n\tfor (unsigned i = 0; i < nbins_total; i++) {\n\t\tbin_postfork_child(tsdn, &arena->bins[i]);\n\t}\n\n\tmalloc_mutex_postfork_child(tsdn, &arena->large_mtx);\n\tbase_postfork_child(tsdn, arena->base);\n\tpa_shard_postfork_child(tsdn, &arena->pa_shard);\n\tif (config_stats) {\n\t\tmalloc_mutex_postfork_child(tsdn, &arena->tcache_ql_mtx);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/background_thread.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n\nJEMALLOC_DIAGNOSTIC_DISABLE_SPURIOUS\n\n/******************************************************************************/\n/* Data. */\n\n/* This option should be opt-in only. */\n#define BACKGROUND_THREAD_DEFAULT false\n/* Read-only after initialization. */\nbool opt_background_thread = BACKGROUND_THREAD_DEFAULT;\nsize_t opt_max_background_threads = MAX_BACKGROUND_THREAD_LIMIT + 1;\n\n/* Used for thread creation, termination and stats. */\nmalloc_mutex_t background_thread_lock;\n/* Indicates global state.  Atomic because decay reads this w/o locking. */\natomic_b_t background_thread_enabled_state;\nsize_t n_background_threads;\nsize_t max_background_threads;\n/* Thread info per-index. */\nbackground_thread_info_t *background_thread_info;\n\n/******************************************************************************/\n\n#ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER\n\nstatic int (*pthread_create_fptr)(pthread_t *__restrict, const pthread_attr_t *,\n    void *(*)(void *), void *__restrict);\n\nstatic void\npthread_create_wrapper_init(void) {\n#ifdef JEMALLOC_LAZY_LOCK\n\tif (!isthreaded) {\n\t\tisthreaded = true;\n\t}\n#endif\n}\n\nint\npthread_create_wrapper(pthread_t *__restrict thread, const pthread_attr_t *attr,\n    void *(*start_routine)(void *), void *__restrict arg) {\n\tpthread_create_wrapper_init();\n\n\treturn pthread_create_fptr(thread, attr, start_routine, arg);\n}\n#endif /* JEMALLOC_PTHREAD_CREATE_WRAPPER */\n\n#ifndef JEMALLOC_BACKGROUND_THREAD\n#define NOT_REACHED { not_reached(); }\nbool background_thread_create(tsd_t *tsd, unsigned arena_ind) NOT_REACHED\nbool background_threads_enable(tsd_t *tsd) NOT_REACHED\nbool background_threads_disable(tsd_t *tsd) NOT_REACHED\nbool background_thread_is_started(background_thread_info_t *info) NOT_REACHED\nvoid 
background_thread_wakeup_early(background_thread_info_t *info,\n    nstime_t *remaining_sleep) NOT_REACHED\nvoid background_thread_prefork0(tsdn_t *tsdn) NOT_REACHED\nvoid background_thread_prefork1(tsdn_t *tsdn) NOT_REACHED\nvoid background_thread_postfork_parent(tsdn_t *tsdn) NOT_REACHED\nvoid background_thread_postfork_child(tsdn_t *tsdn) NOT_REACHED\nbool background_thread_stats_read(tsdn_t *tsdn,\n    background_thread_stats_t *stats) NOT_REACHED\nvoid background_thread_ctl_init(tsdn_t *tsdn) NOT_REACHED\n#undef NOT_REACHED\n#else\n\nstatic bool background_thread_enabled_at_fork;\n\nstatic void\nbackground_thread_info_init(tsdn_t *tsdn, background_thread_info_t *info) {\n\tbackground_thread_wakeup_time_set(tsdn, info, 0);\n\tinfo->npages_to_purge_new = 0;\n\tif (config_stats) {\n\t\tinfo->tot_n_runs = 0;\n\t\tnstime_init_zero(&info->tot_sleep_time);\n\t}\n}\n\nstatic inline bool\nset_current_thread_affinity(int cpu) {\n#if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)\n\tcpu_set_t cpuset;\n#else\n#  ifndef __NetBSD__\n\tcpuset_t cpuset;\n#  else\n\tcpuset_t *cpuset;\n#  endif\n#endif\n\n#ifndef __NetBSD__\n\tCPU_ZERO(&cpuset);\n\tCPU_SET(cpu, &cpuset);\n#else\n\tcpuset = cpuset_create();\n#endif\n\n#if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)\n\treturn (sched_setaffinity(0, sizeof(cpu_set_t), &cpuset) != 0);\n#else\n#  ifndef __NetBSD__\n\tint ret = pthread_setaffinity_np(pthread_self(), sizeof(cpuset_t),\n\t    &cpuset);\n#  else\n\tint ret = pthread_setaffinity_np(pthread_self(), cpuset_size(cpuset),\n\t    cpuset);\n\tcpuset_destroy(cpuset);\n#  endif\n\treturn ret != 0;\n#endif\n}\n\n#define BILLION UINT64_C(1000000000)\n/* Minimal sleep interval 100 ms. 
*/\n#define BACKGROUND_THREAD_MIN_INTERVAL_NS (BILLION / 10)\n\nstatic void\nbackground_thread_sleep(tsdn_t *tsdn, background_thread_info_t *info,\n    uint64_t interval) {\n\tif (config_stats) {\n\t\tinfo->tot_n_runs++;\n\t}\n\tinfo->npages_to_purge_new = 0;\n\n\tstruct timeval tv;\n\t/* Specific clock required by timedwait. */\n\tgettimeofday(&tv, NULL);\n\tnstime_t before_sleep;\n\tnstime_init2(&before_sleep, tv.tv_sec, tv.tv_usec * 1000);\n\n\tint ret;\n\tif (interval == BACKGROUND_THREAD_INDEFINITE_SLEEP) {\n\t\tbackground_thread_wakeup_time_set(tsdn, info,\n\t\t    BACKGROUND_THREAD_INDEFINITE_SLEEP);\n\t\tret = pthread_cond_wait(&info->cond, &info->mtx.lock);\n\t\tassert(ret == 0);\n\t} else {\n\t\tassert(interval >= BACKGROUND_THREAD_MIN_INTERVAL_NS &&\n\t\t    interval <= BACKGROUND_THREAD_INDEFINITE_SLEEP);\n\t\t/* We need malloc clock (can be different from tv). */\n\t\tnstime_t next_wakeup;\n\t\tnstime_init_update(&next_wakeup);\n\t\tnstime_iadd(&next_wakeup, interval);\n\t\tassert(nstime_ns(&next_wakeup) <\n\t\t    BACKGROUND_THREAD_INDEFINITE_SLEEP);\n\t\tbackground_thread_wakeup_time_set(tsdn, info,\n\t\t    nstime_ns(&next_wakeup));\n\n\t\tnstime_t ts_wakeup;\n\t\tnstime_copy(&ts_wakeup, &before_sleep);\n\t\tnstime_iadd(&ts_wakeup, interval);\n\t\tstruct timespec ts;\n\t\tts.tv_sec = (size_t)nstime_sec(&ts_wakeup);\n\t\tts.tv_nsec = (size_t)nstime_nsec(&ts_wakeup);\n\n\t\tassert(!background_thread_indefinite_sleep(info));\n\t\tret = pthread_cond_timedwait(&info->cond, &info->mtx.lock, &ts);\n\t\tassert(ret == ETIMEDOUT || ret == 0);\n\t}\n\tif (config_stats) {\n\t\tgettimeofday(&tv, NULL);\n\t\tnstime_t after_sleep;\n\t\tnstime_init2(&after_sleep, tv.tv_sec, tv.tv_usec * 1000);\n\t\tif (nstime_compare(&after_sleep, &before_sleep) > 0) {\n\t\t\tnstime_subtract(&after_sleep, &before_sleep);\n\t\t\tnstime_add(&info->tot_sleep_time, &after_sleep);\n\t\t}\n\t}\n}\n\nstatic bool\nbackground_thread_pause_check(tsdn_t *tsdn, background_thread_info_t *info) 
{\n\tif (unlikely(info->state == background_thread_paused)) {\n\t\tmalloc_mutex_unlock(tsdn, &info->mtx);\n\t\t/* Wait on global lock to update status. */\n\t\tmalloc_mutex_lock(tsdn, &background_thread_lock);\n\t\tmalloc_mutex_unlock(tsdn, &background_thread_lock);\n\t\tmalloc_mutex_lock(tsdn, &info->mtx);\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic inline void\nbackground_work_sleep_once(tsdn_t *tsdn, background_thread_info_t *info,\n    unsigned ind) {\n\tuint64_t ns_until_deferred = BACKGROUND_THREAD_DEFERRED_MAX;\n\tunsigned narenas = narenas_total_get();\n\tbool slept_indefinitely = background_thread_indefinite_sleep(info);\n\n\tfor (unsigned i = ind; i < narenas; i += max_background_threads) {\n\t\tarena_t *arena = arena_get(tsdn, i, false);\n\t\tif (!arena) {\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t * If thread was woken up from the indefinite sleep, don't\n\t\t * do the work instantly, but rather check when the deferred\n\t\t * work that caused this thread to wake up is scheduled for.\n\t\t */\n\t\tif (!slept_indefinitely) {\n\t\t\tarena_do_deferred_work(tsdn, arena);\n\t\t}\n\t\tif (ns_until_deferred <= BACKGROUND_THREAD_MIN_INTERVAL_NS) {\n\t\t\t/* Min interval will be used. */\n\t\t\tcontinue;\n\t\t}\n\t\tuint64_t ns_arena_deferred = pa_shard_time_until_deferred_work(\n\t\t    tsdn, &arena->pa_shard);\n\t\tif (ns_arena_deferred < ns_until_deferred) {\n\t\t\tns_until_deferred = ns_arena_deferred;\n\t\t}\n\t}\n\n\tuint64_t sleep_ns;\n\tif (ns_until_deferred == BACKGROUND_THREAD_DEFERRED_MAX) {\n\t\tsleep_ns = BACKGROUND_THREAD_INDEFINITE_SLEEP;\n\t} else {\n\t\tsleep_ns =\n\t\t    (ns_until_deferred < BACKGROUND_THREAD_MIN_INTERVAL_NS)\n\t\t    ? 
BACKGROUND_THREAD_MIN_INTERVAL_NS\n\t\t    : ns_until_deferred;\n\n\t}\n\n\tbackground_thread_sleep(tsdn, info, sleep_ns);\n}\n\nstatic bool\nbackground_threads_disable_single(tsd_t *tsd, background_thread_info_t *info) {\n\tif (info == &background_thread_info[0]) {\n\t\tmalloc_mutex_assert_owner(tsd_tsdn(tsd),\n\t\t    &background_thread_lock);\n\t} else {\n\t\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd),\n\t\t    &background_thread_lock);\n\t}\n\n\tpre_reentrancy(tsd, NULL);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\tbool has_thread;\n\tassert(info->state != background_thread_paused);\n\tif (info->state == background_thread_started) {\n\t\thas_thread = true;\n\t\tinfo->state = background_thread_stopped;\n\t\tpthread_cond_signal(&info->cond);\n\t} else {\n\t\thas_thread = false;\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\n\tif (!has_thread) {\n\t\tpost_reentrancy(tsd);\n\t\treturn false;\n\t}\n\tvoid *ret;\n\tif (pthread_join(info->thread, &ret)) {\n\t\tpost_reentrancy(tsd);\n\t\treturn true;\n\t}\n\tassert(ret == NULL);\n\tn_background_threads--;\n\tpost_reentrancy(tsd);\n\n\treturn false;\n}\n\nstatic void *background_thread_entry(void *ind_arg);\n\nstatic int\nbackground_thread_create_signals_masked(pthread_t *thread,\n    const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) {\n\t/*\n\t * Mask signals during thread creation so that the thread inherits\n\t * an empty signal set.\n\t */\n\tsigset_t set;\n\tsigfillset(&set);\n\tsigset_t oldset;\n\tint mask_err = pthread_sigmask(SIG_SETMASK, &set, &oldset);\n\tif (mask_err != 0) {\n\t\treturn mask_err;\n\t}\n\tint create_err = pthread_create_wrapper(thread, attr, start_routine,\n\t    arg);\n\t/*\n\t * Restore the signal mask.  
Failure to restore the signal mask here\n\t * changes program behavior.\n\t */\n\tint restore_err = pthread_sigmask(SIG_SETMASK, &oldset, NULL);\n\tif (restore_err != 0) {\n\t\tmalloc_printf(\"<jemalloc>: background thread creation \"\n\t\t    \"failed (%d), and signal mask restoration failed \"\n\t\t    \"(%d)\\n\", create_err, restore_err);\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n\treturn create_err;\n}\n\nstatic bool\ncheck_background_thread_creation(tsd_t *tsd, unsigned *n_created,\n    bool *created_threads) {\n\tbool ret = false;\n\tif (likely(*n_created == n_background_threads)) {\n\t\treturn ret;\n\t}\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\tmalloc_mutex_unlock(tsdn, &background_thread_info[0].mtx);\n\tfor (unsigned i = 1; i < max_background_threads; i++) {\n\t\tif (created_threads[i]) {\n\t\t\tcontinue;\n\t\t}\n\t\tbackground_thread_info_t *info = &background_thread_info[i];\n\t\tmalloc_mutex_lock(tsdn, &info->mtx);\n\t\t/*\n\t\t * In case of the background_thread_paused state because of\n\t\t * arena reset, delay the creation.\n\t\t */\n\t\tbool create = (info->state == background_thread_started);\n\t\tmalloc_mutex_unlock(tsdn, &info->mtx);\n\t\tif (!create) {\n\t\t\tcontinue;\n\t\t}\n\n\t\tpre_reentrancy(tsd, NULL);\n\t\tint err = background_thread_create_signals_masked(&info->thread,\n\t\t    NULL, background_thread_entry, (void *)(uintptr_t)i);\n\t\tpost_reentrancy(tsd);\n\n\t\tif (err == 0) {\n\t\t\t(*n_created)++;\n\t\t\tcreated_threads[i] = true;\n\t\t} else {\n\t\t\tmalloc_printf(\"<jemalloc>: background thread \"\n\t\t\t    \"creation failed (%d)\\n\", err);\n\t\t\tif (opt_abort) {\n\t\t\t\tabort();\n\t\t\t}\n\t\t}\n\t\t/* Return to restart the loop since we unlocked. */\n\t\tret = true;\n\t\tbreak;\n\t}\n\tmalloc_mutex_lock(tsdn, &background_thread_info[0].mtx);\n\n\treturn ret;\n}\n\nstatic void\nbackground_thread0_work(tsd_t *tsd) {\n\t/* Thread0 is also responsible for launching / terminating threads. 
*/\n\tVARIABLE_ARRAY(bool, created_threads, max_background_threads);\n\tunsigned i;\n\tfor (i = 1; i < max_background_threads; i++) {\n\t\tcreated_threads[i] = false;\n\t}\n\t/* Start working, and create more threads when asked. */\n\tunsigned n_created = 1;\n\twhile (background_thread_info[0].state != background_thread_stopped) {\n\t\tif (background_thread_pause_check(tsd_tsdn(tsd),\n\t\t    &background_thread_info[0])) {\n\t\t\tcontinue;\n\t\t}\n\t\tif (check_background_thread_creation(tsd, &n_created,\n\t\t    (bool *)&created_threads)) {\n\t\t\tcontinue;\n\t\t}\n\t\tbackground_work_sleep_once(tsd_tsdn(tsd),\n\t\t    &background_thread_info[0], 0);\n\t}\n\n\t/*\n\t * Shut down other threads at exit.  Note that the ctl thread is holding\n\t * the global background_thread mutex and is waiting for us.\n\t */\n\tassert(!background_thread_enabled());\n\tfor (i = 1; i < max_background_threads; i++) {\n\t\tbackground_thread_info_t *info = &background_thread_info[i];\n\t\tassert(info->state != background_thread_paused);\n\t\tif (created_threads[i]) {\n\t\t\tbackground_threads_disable_single(tsd, info);\n\t\t} else {\n\t\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t\t\tif (info->state != background_thread_stopped) {\n\t\t\t\t/* The thread was not created. 
*/\n\t\t\t\tassert(info->state ==\n\t\t\t\t    background_thread_started);\n\t\t\t\tn_background_threads--;\n\t\t\t\tinfo->state = background_thread_stopped;\n\t\t\t}\n\t\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\t\t}\n\t}\n\tbackground_thread_info[0].state = background_thread_stopped;\n\tassert(n_background_threads == 1);\n}\n\nstatic void\nbackground_work(tsd_t *tsd, unsigned ind) {\n\tbackground_thread_info_t *info = &background_thread_info[ind];\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\tbackground_thread_wakeup_time_set(tsd_tsdn(tsd), info,\n\t    BACKGROUND_THREAD_INDEFINITE_SLEEP);\n\tif (ind == 0) {\n\t\tbackground_thread0_work(tsd);\n\t} else {\n\t\twhile (info->state != background_thread_stopped) {\n\t\t\tif (background_thread_pause_check(tsd_tsdn(tsd),\n\t\t\t    info)) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tbackground_work_sleep_once(tsd_tsdn(tsd), info, ind);\n\t\t}\n\t}\n\tassert(info->state == background_thread_stopped);\n\tbackground_thread_wakeup_time_set(tsd_tsdn(tsd), info, 0);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n}\n\nstatic void *\nbackground_thread_entry(void *ind_arg) {\n\tunsigned thread_ind = (unsigned)(uintptr_t)ind_arg;\n\tassert(thread_ind < max_background_threads);\n#ifdef JEMALLOC_HAVE_PTHREAD_SETNAME_NP\n\tpthread_setname_np(pthread_self(), \"jemalloc_bg_thd\");\n#elif defined(__FreeBSD__) || defined(__DragonFly__)\n\tpthread_set_name_np(pthread_self(), \"jemalloc_bg_thd\");\n#endif\n\tif (opt_percpu_arena != percpu_arena_disabled) {\n\t\tset_current_thread_affinity((int)thread_ind);\n\t}\n\t/*\n\t * Start periodic background work.  
We use internal tsd which avoids\n\t * side effects, for example triggering new arena creation (which in\n\t * turn triggers another background thread creation).\n\t */\n\tbackground_work(tsd_internal_fetch(), thread_ind);\n\tassert(pthread_equal(pthread_self(),\n\t    background_thread_info[thread_ind].thread));\n\n\treturn NULL;\n}\n\nstatic void\nbackground_thread_init(tsd_t *tsd, background_thread_info_t *info) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock);\n\tinfo->state = background_thread_started;\n\tbackground_thread_info_init(tsd_tsdn(tsd), info);\n\tn_background_threads++;\n}\n\nstatic bool\nbackground_thread_create_locked(tsd_t *tsd, unsigned arena_ind) {\n\tassert(have_background_thread);\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock);\n\n\t/* We create at most NCPUs threads. */\n\tsize_t thread_ind = arena_ind % max_background_threads;\n\tbackground_thread_info_t *info = &background_thread_info[thread_ind];\n\n\tbool need_new_thread;\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\tneed_new_thread = background_thread_enabled() &&\n\t    (info->state == background_thread_stopped);\n\tif (need_new_thread) {\n\t\tbackground_thread_init(tsd, info);\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\tif (!need_new_thread) {\n\t\treturn false;\n\t}\n\tif (arena_ind != 0) {\n\t\t/* Threads are created asynchronously by Thread 0. 
*/\n\t\tbackground_thread_info_t *t0 = &background_thread_info[0];\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &t0->mtx);\n\t\tassert(t0->state == background_thread_started);\n\t\tpthread_cond_signal(&t0->cond);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &t0->mtx);\n\n\t\treturn false;\n\t}\n\n\tpre_reentrancy(tsd, NULL);\n\t/*\n\t * To avoid complications (besides reentrancy), create internal\n\t * background threads with the underlying pthread_create.\n\t */\n\tint err = background_thread_create_signals_masked(&info->thread, NULL,\n\t    background_thread_entry, (void *)thread_ind);\n\tpost_reentrancy(tsd);\n\n\tif (err != 0) {\n\t\tmalloc_printf(\"<jemalloc>: arena 0 background thread creation \"\n\t\t    \"failed (%d)\\n\", err);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t\tinfo->state = background_thread_stopped;\n\t\tn_background_threads--;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\n/* Create a new background thread if needed. */\nbool\nbackground_thread_create(tsd_t *tsd, unsigned arena_ind) {\n\tassert(have_background_thread);\n\n\tbool ret;\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock);\n\tret = background_thread_create_locked(tsd, arena_ind);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock);\n\n\treturn ret;\n}\n\nbool\nbackground_threads_enable(tsd_t *tsd) {\n\tassert(n_background_threads == 0);\n\tassert(background_thread_enabled());\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock);\n\n\tVARIABLE_ARRAY(bool, marked, max_background_threads);\n\tunsigned nmarked;\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tmarked[i] = false;\n\t}\n\tnmarked = 0;\n\t/* Thread 0 is required and created at the end. */\n\tmarked[0] = true;\n\t/* Mark the threads we need to create for thread 0. 
*/\n\tunsigned narenas = narenas_total_get();\n\tfor (unsigned i = 1; i < narenas; i++) {\n\t\tif (marked[i % max_background_threads] ||\n\t\t    arena_get(tsd_tsdn(tsd), i, false) == NULL) {\n\t\t\tcontinue;\n\t\t}\n\t\tbackground_thread_info_t *info = &background_thread_info[\n\t\t    i % max_background_threads];\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t\tassert(info->state == background_thread_stopped);\n\t\tbackground_thread_init(tsd, info);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\t\tmarked[i % max_background_threads] = true;\n\t\tif (++nmarked == max_background_threads) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tbool err = background_thread_create_locked(tsd, 0);\n\tif (err) {\n\t\treturn true;\n\t}\n\tfor (unsigned i = 0; i < narenas; i++) {\n\t\tarena_t *arena = arena_get(tsd_tsdn(tsd), i, false);\n\t\tif (arena != NULL) {\n\t\t\tpa_shard_set_deferral_allowed(tsd_tsdn(tsd),\n\t\t\t    &arena->pa_shard, true);\n\t\t}\n\t}\n\treturn false;\n}\n\nbool\nbackground_threads_disable(tsd_t *tsd) {\n\tassert(!background_thread_enabled());\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &background_thread_lock);\n\n\t/* Thread 0 will be responsible for terminating other threads. */\n\tif (background_threads_disable_single(tsd,\n\t    &background_thread_info[0])) {\n\t\treturn true;\n\t}\n\tassert(n_background_threads == 0);\n\tunsigned narenas = narenas_total_get();\n\tfor (unsigned i = 0; i < narenas; i++) {\n\t\tarena_t *arena = arena_get(tsd_tsdn(tsd), i, false);\n\t\tif (arena != NULL) {\n\t\t\tpa_shard_set_deferral_allowed(tsd_tsdn(tsd),\n\t\t\t    &arena->pa_shard, false);\n\t\t}\n\t}\n\n\treturn false;\n}\n\nbool\nbackground_thread_is_started(background_thread_info_t *info) {\n\treturn info->state == background_thread_started;\n}\n\nvoid\nbackground_thread_wakeup_early(background_thread_info_t *info,\n    nstime_t *remaining_sleep) {\n\t/*\n\t * This is an optimization to increase batching. 
At this point\n\t * we know that background thread wakes up soon, so the time to cache\n\t * the just freed memory is bounded and low.\n\t */\n\tif (remaining_sleep != NULL && nstime_ns(remaining_sleep) <\n\t    BACKGROUND_THREAD_MIN_INTERVAL_NS) {\n\t\treturn;\n\t}\n\tpthread_cond_signal(&info->cond);\n}\n\nvoid\nbackground_thread_prefork0(tsdn_t *tsdn) {\n\tmalloc_mutex_prefork(tsdn, &background_thread_lock);\n\tbackground_thread_enabled_at_fork = background_thread_enabled();\n}\n\nvoid\nbackground_thread_prefork1(tsdn_t *tsdn) {\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tmalloc_mutex_prefork(tsdn, &background_thread_info[i].mtx);\n\t}\n}\n\nvoid\nbackground_thread_postfork_parent(tsdn_t *tsdn) {\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tmalloc_mutex_postfork_parent(tsdn,\n\t\t    &background_thread_info[i].mtx);\n\t}\n\tmalloc_mutex_postfork_parent(tsdn, &background_thread_lock);\n}\n\nvoid\nbackground_thread_postfork_child(tsdn_t *tsdn) {\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tmalloc_mutex_postfork_child(tsdn,\n\t\t    &background_thread_info[i].mtx);\n\t}\n\tmalloc_mutex_postfork_child(tsdn, &background_thread_lock);\n\tif (!background_thread_enabled_at_fork) {\n\t\treturn;\n\t}\n\n\t/* Clear background_thread state (reset to disabled for child). 
*/\n\tmalloc_mutex_lock(tsdn, &background_thread_lock);\n\tn_background_threads = 0;\n\tbackground_thread_enabled_set(tsdn, false);\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tbackground_thread_info_t *info = &background_thread_info[i];\n\t\tmalloc_mutex_lock(tsdn, &info->mtx);\n\t\tinfo->state = background_thread_stopped;\n\t\tint ret = pthread_cond_init(&info->cond, NULL);\n\t\tassert(ret == 0);\n\t\tbackground_thread_info_init(tsdn, info);\n\t\tmalloc_mutex_unlock(tsdn, &info->mtx);\n\t}\n\tmalloc_mutex_unlock(tsdn, &background_thread_lock);\n}\n\nbool\nbackground_thread_stats_read(tsdn_t *tsdn, background_thread_stats_t *stats) {\n\tassert(config_stats);\n\tmalloc_mutex_lock(tsdn, &background_thread_lock);\n\tif (!background_thread_enabled()) {\n\t\tmalloc_mutex_unlock(tsdn, &background_thread_lock);\n\t\treturn true;\n\t}\n\n\tnstime_init_zero(&stats->run_interval);\n\tmemset(&stats->max_counter_per_bg_thd, 0, sizeof(mutex_prof_data_t));\n\n\tuint64_t num_runs = 0;\n\tstats->num_threads = n_background_threads;\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tbackground_thread_info_t *info = &background_thread_info[i];\n\t\tif (malloc_mutex_trylock(tsdn, &info->mtx)) {\n\t\t\t/*\n\t\t\t * Each background thread run may take a long time;\n\t\t\t * avoid waiting on the stats if the thread is active.\n\t\t\t */\n\t\t\tcontinue;\n\t\t}\n\t\tif (info->state != background_thread_stopped) {\n\t\t\tnum_runs += info->tot_n_runs;\n\t\t\tnstime_add(&stats->run_interval, &info->tot_sleep_time);\n\t\t\tmalloc_mutex_prof_max_update(tsdn,\n\t\t\t    &stats->max_counter_per_bg_thd, &info->mtx);\n\t\t}\n\t\tmalloc_mutex_unlock(tsdn, &info->mtx);\n\t}\n\tstats->num_runs = num_runs;\n\tif (num_runs > 0) {\n\t\tnstime_idivide(&stats->run_interval, num_runs);\n\t}\n\tmalloc_mutex_unlock(tsdn, &background_thread_lock);\n\n\treturn false;\n}\n\n#undef BACKGROUND_THREAD_NPAGES_THRESHOLD\n#undef BILLION\n#undef 
BACKGROUND_THREAD_MIN_INTERVAL_NS\n\n#ifdef JEMALLOC_HAVE_DLSYM\n#include <dlfcn.h>\n#endif\n\nstatic bool\npthread_create_fptr_init(void) {\n\tif (pthread_create_fptr != NULL) {\n\t\treturn false;\n\t}\n\t/*\n\t * Try the next symbol first, because 1) when use lazy_lock we have a\n\t * wrapper for pthread_create; and 2) application may define its own\n\t * wrapper as well (and can call malloc within the wrapper).\n\t */\n#ifdef JEMALLOC_HAVE_DLSYM\n\tpthread_create_fptr = dlsym(RTLD_NEXT, \"pthread_create\");\n#else\n\tpthread_create_fptr = NULL;\n#endif\n\tif (pthread_create_fptr == NULL) {\n\t\tif (config_lazy_lock) {\n\t\t\tmalloc_write(\"<jemalloc>: Error in dlsym(RTLD_NEXT, \"\n\t\t\t    \"\\\"pthread_create\\\")\\n\");\n\t\t\tabort();\n\t\t} else {\n\t\t\t/* Fall back to the default symbol. */\n\t\t\tpthread_create_fptr = pthread_create;\n\t\t}\n\t}\n\n\treturn false;\n}\n\n/*\n * When lazy lock is enabled, we need to make sure setting isthreaded before\n * taking any background_thread locks.  
This is called early in ctl (instead of\n * wait for the pthread_create calls to trigger) because the mutex is required\n * before creating background threads.\n */\nvoid\nbackground_thread_ctl_init(tsdn_t *tsdn) {\n\tmalloc_mutex_assert_not_owner(tsdn, &background_thread_lock);\n#ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER\n\tpthread_create_fptr_init();\n\tpthread_create_wrapper_init();\n#endif\n}\n\n#endif /* defined(JEMALLOC_BACKGROUND_THREAD) */\n\nbool\nbackground_thread_boot0(void) {\n\tif (!have_background_thread && opt_background_thread) {\n\t\tmalloc_printf(\"<jemalloc>: option background_thread currently \"\n\t\t    \"supports pthread only\\n\");\n\t\treturn true;\n\t}\n#ifdef JEMALLOC_PTHREAD_CREATE_WRAPPER\n\tif ((config_lazy_lock || opt_background_thread) &&\n\t    pthread_create_fptr_init()) {\n\t\treturn true;\n\t}\n#endif\n\treturn false;\n}\n\nbool\nbackground_thread_boot1(tsdn_t *tsdn, base_t *base) {\n#ifdef JEMALLOC_BACKGROUND_THREAD\n\tassert(have_background_thread);\n\tassert(narenas_total_get() > 0);\n\n\tif (opt_max_background_threads > MAX_BACKGROUND_THREAD_LIMIT) {\n\t\topt_max_background_threads = DEFAULT_NUM_BACKGROUND_THREAD;\n\t}\n\tmax_background_threads = opt_max_background_threads;\n\n\tbackground_thread_enabled_set(tsdn, opt_background_thread);\n\tif (malloc_mutex_init(&background_thread_lock,\n\t    \"background_thread_global\",\n\t    WITNESS_RANK_BACKGROUND_THREAD_GLOBAL,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tbackground_thread_info = (background_thread_info_t *)base_alloc(tsdn,\n\t    base, opt_max_background_threads *\n\t    sizeof(background_thread_info_t), CACHELINE);\n\tif (background_thread_info == NULL) {\n\t\treturn true;\n\t}\n\n\tfor (unsigned i = 0; i < max_background_threads; i++) {\n\t\tbackground_thread_info_t *info = &background_thread_info[i];\n\t\t/* Thread mutex is rank_inclusive because of thread0. 
*/\n\t\tif (malloc_mutex_init(&info->mtx, \"background_thread\",\n\t\t    WITNESS_RANK_BACKGROUND_THREAD,\n\t\t    malloc_mutex_address_ordered)) {\n\t\t\treturn true;\n\t\t}\n\t\tif (pthread_cond_init(&info->cond, NULL)) {\n\t\t\treturn true;\n\t\t}\n\t\tmalloc_mutex_lock(tsdn, &info->mtx);\n\t\tinfo->state = background_thread_stopped;\n\t\tbackground_thread_info_init(tsdn, info);\n\t\tmalloc_mutex_unlock(tsdn, &info->mtx);\n\t}\n#endif\n\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/base.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/sz.h\"\n\n/*\n * In auto mode, arenas switch to huge pages for the base allocator on the\n * second base block.  a0 switches to thp on the 5th block (after 20 megabytes\n * of metadata), since more metadata (e.g. rtree nodes) come from a0's base.\n */\n\n#define BASE_AUTO_THP_THRESHOLD    2\n#define BASE_AUTO_THP_THRESHOLD_A0 5\n\n/******************************************************************************/\n/* Data. */\n\nstatic base_t *b0;\n\nmetadata_thp_mode_t opt_metadata_thp = METADATA_THP_DEFAULT;\n\nconst char *metadata_thp_mode_names[] = {\n\t\"disabled\",\n\t\"auto\",\n\t\"always\"\n};\n\n/******************************************************************************/\n\nstatic inline bool\nmetadata_thp_madvise(void) {\n\treturn (metadata_thp_enabled() &&\n\t    (init_system_thp_mode == thp_mode_default));\n}\n\nstatic void *\nbase_map(tsdn_t *tsdn, ehooks_t *ehooks, unsigned ind, size_t size) {\n\tvoid *addr;\n\tbool zero = true;\n\tbool commit = true;\n\n\t/* Use huge page sizes and alignment regardless of opt_metadata_thp. */\n\tassert(size == HUGEPAGE_CEILING(size));\n\tsize_t alignment = HUGEPAGE;\n\tif (ehooks_are_default(ehooks)) {\n\t\taddr = extent_alloc_mmap(NULL, size, alignment, &zero, &commit);\n\t\tif (have_madvise_huge && addr) {\n\t\t\tpages_set_thp_state(addr, size);\n\t\t}\n\t} else {\n\t\taddr = ehooks_alloc(tsdn, ehooks, NULL, size, alignment, &zero,\n\t\t    &commit);\n\t}\n\n\treturn addr;\n}\n\nstatic void\nbase_unmap(tsdn_t *tsdn, ehooks_t *ehooks, unsigned ind, void *addr,\n    size_t size) {\n\t/*\n\t * Cascade through dalloc, decommit, purge_forced, and purge_lazy,\n\t * stopping at first success.  
This cascade is performed for consistency\n\t * with the cascade in extent_dalloc_wrapper() because an application's\n\t * custom hooks may not support e.g. dalloc.  This function is only ever\n\t * called as a side effect of arena destruction, so although it might\n\t * seem pointless to do anything besides dalloc here, the application\n\t * may in fact want the end state of all associated virtual memory to be\n\t * in some consistent-but-allocated state.\n\t */\n\tif (ehooks_are_default(ehooks)) {\n\t\tif (!extent_dalloc_mmap(addr, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!pages_decommit(addr, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!pages_purge_forced(addr, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!pages_purge_lazy(addr, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\t/* Nothing worked.  This should never happen. */\n\t\tnot_reached();\n\t} else {\n\t\tif (!ehooks_dalloc(tsdn, ehooks, addr, size, true)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!ehooks_decommit(tsdn, ehooks, addr, size, 0, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!ehooks_purge_forced(tsdn, ehooks, addr, size, 0, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tif (!ehooks_purge_lazy(tsdn, ehooks, addr, size, 0, size)) {\n\t\t\tgoto label_done;\n\t\t}\n\t\t/* Nothing worked.  That's the application's problem. */\n\t}\nlabel_done:\n\tif (metadata_thp_madvise()) {\n\t\t/* Set NOHUGEPAGE after unmap to avoid kernel defrag. */\n\t\tassert(((uintptr_t)addr & HUGEPAGE_MASK) == 0 &&\n\t\t    (size & HUGEPAGE_MASK) == 0);\n\t\tpages_nohuge(addr, size);\n\t}\n}\n\nstatic void\nbase_edata_init(size_t *extent_sn_next, edata_t *edata, void *addr,\n    size_t size) {\n\tsize_t sn;\n\n\tsn = *extent_sn_next;\n\t(*extent_sn_next)++;\n\n\tedata_binit(edata, addr, size, sn);\n}\n\nstatic size_t\nbase_get_num_blocks(base_t *base, bool with_new_block) {\n\tbase_block_t *b = base->blocks;\n\tassert(b != NULL);\n\n\tsize_t n_blocks = with_new_block ? 
2 : 1;\n\twhile (b->next != NULL) {\n\t\tn_blocks++;\n\t\tb = b->next;\n\t}\n\n\treturn n_blocks;\n}\n\nstatic void\nbase_auto_thp_switch(tsdn_t *tsdn, base_t *base) {\n\tassert(opt_metadata_thp == metadata_thp_auto);\n\tmalloc_mutex_assert_owner(tsdn, &base->mtx);\n\tif (base->auto_thp_switched) {\n\t\treturn;\n\t}\n\t/* Called when adding a new block. */\n\tbool should_switch;\n\tif (base_ind_get(base) != 0) {\n\t\tshould_switch = (base_get_num_blocks(base, true) ==\n\t\t    BASE_AUTO_THP_THRESHOLD);\n\t} else {\n\t\tshould_switch = (base_get_num_blocks(base, true) ==\n\t\t    BASE_AUTO_THP_THRESHOLD_A0);\n\t}\n\tif (!should_switch) {\n\t\treturn;\n\t}\n\n\tbase->auto_thp_switched = true;\n\tassert(!config_stats || base->n_thp == 0);\n\t/* Make the initial blocks THP lazily. */\n\tbase_block_t *block = base->blocks;\n\twhile (block != NULL) {\n\t\tassert((block->size & HUGEPAGE_MASK) == 0);\n\t\tpages_huge(block, block->size);\n\t\tif (config_stats) {\n\t\t\tbase->n_thp += HUGEPAGE_CEILING(block->size -\n\t\t\t    edata_bsize_get(&block->edata)) >> LG_HUGEPAGE;\n\t\t}\n\t\tblock = block->next;\n\t\tassert(block == NULL || (base_ind_get(base) == 0));\n\t}\n}\n\nstatic void *\nbase_extent_bump_alloc_helper(edata_t *edata, size_t *gap_size, size_t size,\n    size_t alignment) {\n\tvoid *ret;\n\n\tassert(alignment == ALIGNMENT_CEILING(alignment, QUANTUM));\n\tassert(size == ALIGNMENT_CEILING(size, alignment));\n\n\t*gap_size = ALIGNMENT_CEILING((uintptr_t)edata_addr_get(edata),\n\t    alignment) - (uintptr_t)edata_addr_get(edata);\n\tret = (void *)((uintptr_t)edata_addr_get(edata) + *gap_size);\n\tassert(edata_bsize_get(edata) >= *gap_size + size);\n\tedata_binit(edata, (void *)((uintptr_t)edata_addr_get(edata) +\n\t    *gap_size + size), edata_bsize_get(edata) - *gap_size - size,\n\t    edata_sn_get(edata));\n\treturn ret;\n}\n\nstatic void\nbase_extent_bump_alloc_post(base_t *base, edata_t *edata, size_t gap_size,\n    void *addr, size_t size) {\n\tif 
(edata_bsize_get(edata) > 0) {\n\t\t/*\n\t\t * Compute the index for the largest size class that does not\n\t\t * exceed extent's size.\n\t\t */\n\t\tszind_t index_floor =\n\t\t    sz_size2index(edata_bsize_get(edata) + 1) - 1;\n\t\tedata_heap_insert(&base->avail[index_floor], edata);\n\t}\n\n\tif (config_stats) {\n\t\tbase->allocated += size;\n\t\t/*\n\t\t * Add one PAGE to base_resident for every page boundary that is\n\t\t * crossed by the new allocation. Adjust n_thp similarly when\n\t\t * metadata_thp is enabled.\n\t\t */\n\t\tbase->resident += PAGE_CEILING((uintptr_t)addr + size) -\n\t\t    PAGE_CEILING((uintptr_t)addr - gap_size);\n\t\tassert(base->allocated <= base->resident);\n\t\tassert(base->resident <= base->mapped);\n\t\tif (metadata_thp_madvise() && (opt_metadata_thp ==\n\t\t    metadata_thp_always || base->auto_thp_switched)) {\n\t\t\tbase->n_thp += (HUGEPAGE_CEILING((uintptr_t)addr + size)\n\t\t\t    - HUGEPAGE_CEILING((uintptr_t)addr - gap_size)) >>\n\t\t\t    LG_HUGEPAGE;\n\t\t\tassert(base->mapped >= base->n_thp << LG_HUGEPAGE);\n\t\t}\n\t}\n}\n\nstatic void *\nbase_extent_bump_alloc(base_t *base, edata_t *edata, size_t size,\n    size_t alignment) {\n\tvoid *ret;\n\tsize_t gap_size;\n\n\tret = base_extent_bump_alloc_helper(edata, &gap_size, size, alignment);\n\tbase_extent_bump_alloc_post(base, edata, gap_size, ret, size);\n\treturn ret;\n}\n\n/*\n * Allocate a block of virtual memory that is large enough to start with a\n * base_block_t header, followed by an object of specified size and alignment.\n * On success a pointer to the initialized base_block_t header is returned.\n */\nstatic base_block_t *\nbase_block_alloc(tsdn_t *tsdn, base_t *base, ehooks_t *ehooks, unsigned ind,\n    pszind_t *pind_last, size_t *extent_sn_next, size_t size,\n    size_t alignment) {\n\talignment = ALIGNMENT_CEILING(alignment, QUANTUM);\n\tsize_t usize = ALIGNMENT_CEILING(size, alignment);\n\tsize_t header_size = sizeof(base_block_t);\n\tsize_t gap_size = 
ALIGNMENT_CEILING(header_size, alignment) -\n\t    header_size;\n\t/*\n\t * Create increasingly larger blocks in order to limit the total number\n\t * of disjoint virtual memory ranges.  Choose the next size in the page\n\t * size class series (skipping size classes that are not a multiple of\n\t * HUGEPAGE), or a size large enough to satisfy the requested size and\n\t * alignment, whichever is larger.\n\t */\n\tsize_t min_block_size = HUGEPAGE_CEILING(sz_psz2u(header_size + gap_size\n\t    + usize));\n\tpszind_t pind_next = (*pind_last + 1 < sz_psz2ind(SC_LARGE_MAXCLASS)) ?\n\t    *pind_last + 1 : *pind_last;\n\tsize_t next_block_size = HUGEPAGE_CEILING(sz_pind2sz(pind_next));\n\tsize_t block_size = (min_block_size > next_block_size) ? min_block_size\n\t    : next_block_size;\n\tbase_block_t *block = (base_block_t *)base_map(tsdn, ehooks, ind,\n\t    block_size);\n\tif (block == NULL) {\n\t\treturn NULL;\n\t}\n\n\tif (metadata_thp_madvise()) {\n\t\tvoid *addr = (void *)block;\n\t\tassert(((uintptr_t)addr & HUGEPAGE_MASK) == 0 &&\n\t\t    (block_size & HUGEPAGE_MASK) == 0);\n\t\tif (opt_metadata_thp == metadata_thp_always) {\n\t\t\tpages_huge(addr, block_size);\n\t\t} else if (opt_metadata_thp == metadata_thp_auto &&\n\t\t    base != NULL) {\n\t\t\t/* base != NULL indicates this is not a new base. 
*/\n\t\t\tmalloc_mutex_lock(tsdn, &base->mtx);\n\t\t\tbase_auto_thp_switch(tsdn, base);\n\t\t\tif (base->auto_thp_switched) {\n\t\t\t\tpages_huge(addr, block_size);\n\t\t\t}\n\t\t\tmalloc_mutex_unlock(tsdn, &base->mtx);\n\t\t}\n\t}\n\n\t*pind_last = sz_psz2ind(block_size);\n\tblock->size = block_size;\n\tblock->next = NULL;\n\tassert(block_size >= header_size);\n\tbase_edata_init(extent_sn_next, &block->edata,\n\t    (void *)((uintptr_t)block + header_size), block_size - header_size);\n\treturn block;\n}\n\n/*\n * Allocate an extent that is at least as large as specified size, with\n * specified alignment.\n */\nstatic edata_t *\nbase_extent_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment) {\n\tmalloc_mutex_assert_owner(tsdn, &base->mtx);\n\n\tehooks_t *ehooks = base_ehooks_get_for_metadata(base);\n\t/*\n\t * Drop mutex during base_block_alloc(), because an extent hook will be\n\t * called.\n\t */\n\tmalloc_mutex_unlock(tsdn, &base->mtx);\n\tbase_block_t *block = base_block_alloc(tsdn, base, ehooks,\n\t    base_ind_get(base), &base->pind_last, &base->extent_sn_next, size,\n\t    alignment);\n\tmalloc_mutex_lock(tsdn, &base->mtx);\n\tif (block == NULL) {\n\t\treturn NULL;\n\t}\n\tblock->next = base->blocks;\n\tbase->blocks = block;\n\tif (config_stats) {\n\t\tbase->allocated += sizeof(base_block_t);\n\t\tbase->resident += PAGE_CEILING(sizeof(base_block_t));\n\t\tbase->mapped += block->size;\n\t\tif (metadata_thp_madvise() &&\n\t\t    !(opt_metadata_thp == metadata_thp_auto\n\t\t      && !base->auto_thp_switched)) {\n\t\t\tassert(base->n_thp > 0);\n\t\t\tbase->n_thp += HUGEPAGE_CEILING(sizeof(base_block_t)) >>\n\t\t\t    LG_HUGEPAGE;\n\t\t}\n\t\tassert(base->allocated <= base->resident);\n\t\tassert(base->resident <= base->mapped);\n\t\tassert(base->n_thp << LG_HUGEPAGE <= base->mapped);\n\t}\n\treturn &block->edata;\n}\n\nbase_t *\nb0get(void) {\n\treturn b0;\n}\n\nbase_t *\nbase_new(tsdn_t *tsdn, unsigned ind, const extent_hooks_t *extent_hooks,\n   
 bool metadata_use_hooks) {\n\tpszind_t pind_last = 0;\n\tsize_t extent_sn_next = 0;\n\n\t/*\n\t * The base will contain the ehooks eventually, but it itself is\n\t * allocated using them.  So we use some stack ehooks to bootstrap its\n\t * memory, and then initialize the ehooks within the base_t.\n\t */\n\tehooks_t fake_ehooks;\n\tehooks_init(&fake_ehooks, metadata_use_hooks ?\n\t    (extent_hooks_t *)extent_hooks :\n\t    (extent_hooks_t *)&ehooks_default_extent_hooks, ind);\n\n\tbase_block_t *block = base_block_alloc(tsdn, NULL, &fake_ehooks, ind,\n\t    &pind_last, &extent_sn_next, sizeof(base_t), QUANTUM);\n\tif (block == NULL) {\n\t\treturn NULL;\n\t}\n\n\tsize_t gap_size;\n\tsize_t base_alignment = CACHELINE;\n\tsize_t base_size = ALIGNMENT_CEILING(sizeof(base_t), base_alignment);\n\tbase_t *base = (base_t *)base_extent_bump_alloc_helper(&block->edata,\n\t    &gap_size, base_size, base_alignment);\n\tehooks_init(&base->ehooks, (extent_hooks_t *)extent_hooks, ind);\n\tehooks_init(&base->ehooks_base, metadata_use_hooks ?\n\t    (extent_hooks_t *)extent_hooks :\n\t    (extent_hooks_t *)&ehooks_default_extent_hooks, ind);\n\tif (malloc_mutex_init(&base->mtx, \"base\", WITNESS_RANK_BASE,\n\t    malloc_mutex_rank_exclusive)) {\n\t\tbase_unmap(tsdn, &fake_ehooks, ind, block, block->size);\n\t\treturn NULL;\n\t}\n\tbase->pind_last = pind_last;\n\tbase->extent_sn_next = extent_sn_next;\n\tbase->blocks = block;\n\tbase->auto_thp_switched = false;\n\tfor (szind_t i = 0; i < SC_NSIZES; i++) {\n\t\tedata_heap_new(&base->avail[i]);\n\t}\n\tif (config_stats) {\n\t\tbase->allocated = sizeof(base_block_t);\n\t\tbase->resident = PAGE_CEILING(sizeof(base_block_t));\n\t\tbase->mapped = block->size;\n\t\tbase->n_thp = (opt_metadata_thp == metadata_thp_always) &&\n\t\t    metadata_thp_madvise() ? 
HUGEPAGE_CEILING(sizeof(base_block_t))\n\t\t    >> LG_HUGEPAGE : 0;\n\t\tassert(base->allocated <= base->resident);\n\t\tassert(base->resident <= base->mapped);\n\t\tassert(base->n_thp << LG_HUGEPAGE <= base->mapped);\n\t}\n\tbase_extent_bump_alloc_post(base, &block->edata, gap_size, base,\n\t    base_size);\n\n\treturn base;\n}\n\nvoid\nbase_delete(tsdn_t *tsdn, base_t *base) {\n\tehooks_t *ehooks = base_ehooks_get_for_metadata(base);\n\tbase_block_t *next = base->blocks;\n\tdo {\n\t\tbase_block_t *block = next;\n\t\tnext = block->next;\n\t\tbase_unmap(tsdn, ehooks, base_ind_get(base), block,\n\t\t    block->size);\n\t} while (next != NULL);\n}\n\nehooks_t *\nbase_ehooks_get(base_t *base) {\n\treturn &base->ehooks;\n}\n\nehooks_t *\nbase_ehooks_get_for_metadata(base_t *base) {\n\treturn &base->ehooks_base;\n}\n\nextent_hooks_t *\nbase_extent_hooks_set(base_t *base, extent_hooks_t *extent_hooks) {\n\textent_hooks_t *old_extent_hooks =\n\t    ehooks_get_extent_hooks_ptr(&base->ehooks);\n\tehooks_init(&base->ehooks, extent_hooks, ehooks_ind_get(&base->ehooks));\n\treturn old_extent_hooks;\n}\n\nstatic void *\nbase_alloc_impl(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment,\n    size_t *esn) {\n\talignment = QUANTUM_CEILING(alignment);\n\tsize_t usize = ALIGNMENT_CEILING(size, alignment);\n\tsize_t asize = usize + alignment - QUANTUM;\n\n\tedata_t *edata = NULL;\n\tmalloc_mutex_lock(tsdn, &base->mtx);\n\tfor (szind_t i = sz_size2index(asize); i < SC_NSIZES; i++) {\n\t\tedata = edata_heap_remove_first(&base->avail[i]);\n\t\tif (edata != NULL) {\n\t\t\t/* Use existing space. */\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (edata == NULL) {\n\t\t/* Try to allocate more space. 
*/\n\t\tedata = base_extent_alloc(tsdn, base, usize, alignment);\n\t}\n\tvoid *ret;\n\tif (edata == NULL) {\n\t\tret = NULL;\n\t\tgoto label_return;\n\t}\n\n\tret = base_extent_bump_alloc(base, edata, usize, alignment);\n\tif (esn != NULL) {\n\t\t*esn = (size_t)edata_sn_get(edata);\n\t}\nlabel_return:\n\tmalloc_mutex_unlock(tsdn, &base->mtx);\n\treturn ret;\n}\n\n/*\n * base_alloc() returns zeroed memory, which is always demand-zeroed for the\n * auto arenas, in order to make multi-page sparse data structures such as radix\n * tree nodes efficient with respect to physical memory usage.  Upon success a\n * pointer to at least size bytes with specified alignment is returned.  Note\n * that size is rounded up to the nearest multiple of alignment to avoid false\n * sharing.\n */\nvoid *\nbase_alloc(tsdn_t *tsdn, base_t *base, size_t size, size_t alignment) {\n\treturn base_alloc_impl(tsdn, base, size, alignment, NULL);\n}\n\nedata_t *\nbase_alloc_edata(tsdn_t *tsdn, base_t *base) {\n\tsize_t esn;\n\tedata_t *edata = base_alloc_impl(tsdn, base, sizeof(edata_t),\n\t    EDATA_ALIGNMENT, &esn);\n\tif (edata == NULL) {\n\t\treturn NULL;\n\t}\n\tedata_esn_set(edata, esn);\n\treturn edata;\n}\n\nvoid\nbase_stats_get(tsdn_t *tsdn, base_t *base, size_t *allocated, size_t *resident,\n    size_t *mapped, size_t *n_thp) {\n\tcassert(config_stats);\n\n\tmalloc_mutex_lock(tsdn, &base->mtx);\n\tassert(base->allocated <= base->resident);\n\tassert(base->resident <= base->mapped);\n\t*allocated = base->allocated;\n\t*resident = base->resident;\n\t*mapped = base->mapped;\n\t*n_thp = base->n_thp;\n\tmalloc_mutex_unlock(tsdn, &base->mtx);\n}\n\nvoid\nbase_prefork(tsdn_t *tsdn, base_t *base) {\n\tmalloc_mutex_prefork(tsdn, &base->mtx);\n}\n\nvoid\nbase_postfork_parent(tsdn_t *tsdn, base_t *base) {\n\tmalloc_mutex_postfork_parent(tsdn, &base->mtx);\n}\n\nvoid\nbase_postfork_child(tsdn_t *tsdn, base_t *base) {\n\tmalloc_mutex_postfork_child(tsdn, &base->mtx);\n}\n\nbool\nbase_boot(tsdn_t 
*tsdn) {\n\tb0 = base_new(tsdn, 0, (extent_hooks_t *)&ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\treturn (b0 == NULL);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/bin.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/bin.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/witness.h\"\n\nbool\nbin_update_shard_size(unsigned bin_shard_sizes[SC_NBINS], size_t start_size,\n    size_t end_size, size_t nshards) {\n\tif (nshards > BIN_SHARDS_MAX || nshards == 0) {\n\t\treturn true;\n\t}\n\n\tif (start_size > SC_SMALL_MAXCLASS) {\n\t\treturn false;\n\t}\n\tif (end_size > SC_SMALL_MAXCLASS) {\n\t\tend_size = SC_SMALL_MAXCLASS;\n\t}\n\n\t/* Compute the index since this may happen before sz init. */\n\tszind_t ind1 = sz_size2index_compute(start_size);\n\tszind_t ind2 = sz_size2index_compute(end_size);\n\tfor (unsigned i = ind1; i <= ind2; i++) {\n\t\tbin_shard_sizes[i] = (unsigned)nshards;\n\t}\n\n\treturn false;\n}\n\nvoid\nbin_shard_sizes_boot(unsigned bin_shard_sizes[SC_NBINS]) {\n\t/* Load the default number of shards. */\n\tfor (unsigned i = 0; i < SC_NBINS; i++) {\n\t\tbin_shard_sizes[i] = N_BIN_SHARDS_DEFAULT;\n\t}\n}\n\nbool\nbin_init(bin_t *bin) {\n\tif (malloc_mutex_init(&bin->lock, \"bin\", WITNESS_RANK_BIN,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tbin->slabcur = NULL;\n\tedata_heap_new(&bin->slabs_nonfull);\n\tedata_list_active_init(&bin->slabs_full);\n\tif (config_stats) {\n\t\tmemset(&bin->stats, 0, sizeof(bin_stats_t));\n\t}\n\treturn false;\n}\n\nvoid\nbin_prefork(tsdn_t *tsdn, bin_t *bin) {\n\tmalloc_mutex_prefork(tsdn, &bin->lock);\n}\n\nvoid\nbin_postfork_parent(tsdn_t *tsdn, bin_t *bin) {\n\tmalloc_mutex_postfork_parent(tsdn, &bin->lock);\n}\n\nvoid\nbin_postfork_child(tsdn_t *tsdn, bin_t *bin) {\n\tmalloc_mutex_postfork_child(tsdn, &bin->lock);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/bin_info.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/bin_info.h\"\n\nbin_info_t bin_infos[SC_NBINS];\n\nstatic void\nbin_infos_init(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],\n    bin_info_t infos[SC_NBINS]) {\n\tfor (unsigned i = 0; i < SC_NBINS; i++) {\n\t\tbin_info_t *bin_info = &infos[i];\n\t\tsc_t *sc = &sc_data->sc[i];\n\t\tbin_info->reg_size = ((size_t)1U << sc->lg_base)\n\t\t    + ((size_t)sc->ndelta << sc->lg_delta);\n\t\tbin_info->slab_size = (sc->pgs << LG_PAGE);\n\t\tbin_info->nregs =\n\t\t    (uint32_t)(bin_info->slab_size / bin_info->reg_size);\n\t\tbin_info->n_shards = bin_shard_sizes[i];\n\t\tbitmap_info_t bitmap_info = BITMAP_INFO_INITIALIZER(\n\t\t    bin_info->nregs);\n\t\tbin_info->bitmap_info = bitmap_info;\n\t}\n}\n\nvoid\nbin_info_boot(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS]) {\n\tassert(sc_data->initialized);\n\tbin_infos_init(sc_data, bin_shard_sizes, bin_infos);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/bitmap.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n\n/******************************************************************************/\n\n#ifdef BITMAP_USE_TREE\n\nvoid\nbitmap_info_init(bitmap_info_t *binfo, size_t nbits) {\n\tunsigned i;\n\tsize_t group_count;\n\n\tassert(nbits > 0);\n\tassert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS));\n\n\t/*\n\t * Compute the number of groups necessary to store nbits bits, and\n\t * progressively work upward through the levels until reaching a level\n\t * that requires only one group.\n\t */\n\tbinfo->levels[0].group_offset = 0;\n\tgroup_count = BITMAP_BITS2GROUPS(nbits);\n\tfor (i = 1; group_count > 1; i++) {\n\t\tassert(i < BITMAP_MAX_LEVELS);\n\t\tbinfo->levels[i].group_offset = binfo->levels[i-1].group_offset\n\t\t    + group_count;\n\t\tgroup_count = BITMAP_BITS2GROUPS(group_count);\n\t}\n\tbinfo->levels[i].group_offset = binfo->levels[i-1].group_offset\n\t    + group_count;\n\tassert(binfo->levels[i].group_offset <= BITMAP_GROUPS_MAX);\n\tbinfo->nlevels = i;\n\tbinfo->nbits = nbits;\n}\n\nstatic size_t\nbitmap_info_ngroups(const bitmap_info_t *binfo) {\n\treturn binfo->levels[binfo->nlevels].group_offset;\n}\n\nvoid\nbitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo, bool fill) {\n\tsize_t extra;\n\tunsigned i;\n\n\t/*\n\t * Bits are actually inverted with regard to the external bitmap\n\t * interface.\n\t */\n\n\tif (fill) {\n\t\t/* The \"filled\" bitmap starts out with all 0 bits. */\n\t\tmemset(bitmap, 0, bitmap_size(binfo));\n\t\treturn;\n\t}\n\n\t/*\n\t * The \"empty\" bitmap starts out with all 1 bits, except for trailing\n\t * unused bits (if any).  
Note that each group uses bit 0 to correspond\n\t * to the first logical bit in the group, so extra bits are the most\n\t * significant bits of the last group.\n\t */\n\tmemset(bitmap, 0xffU, bitmap_size(binfo));\n\textra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))\n\t    & BITMAP_GROUP_NBITS_MASK;\n\tif (extra != 0) {\n\t\tbitmap[binfo->levels[1].group_offset - 1] >>= extra;\n\t}\n\tfor (i = 1; i < binfo->nlevels; i++) {\n\t\tsize_t group_count = binfo->levels[i].group_offset -\n\t\t    binfo->levels[i-1].group_offset;\n\t\textra = (BITMAP_GROUP_NBITS - (group_count &\n\t\t    BITMAP_GROUP_NBITS_MASK)) & BITMAP_GROUP_NBITS_MASK;\n\t\tif (extra != 0) {\n\t\t\tbitmap[binfo->levels[i+1].group_offset - 1] >>= extra;\n\t\t}\n\t}\n}\n\n#else /* BITMAP_USE_TREE */\n\nvoid\nbitmap_info_init(bitmap_info_t *binfo, size_t nbits) {\n\tassert(nbits > 0);\n\tassert(nbits <= (ZU(1) << LG_BITMAP_MAXBITS));\n\n\tbinfo->ngroups = BITMAP_BITS2GROUPS(nbits);\n\tbinfo->nbits = nbits;\n}\n\nstatic size_t\nbitmap_info_ngroups(const bitmap_info_t *binfo) {\n\treturn binfo->ngroups;\n}\n\nvoid\nbitmap_init(bitmap_t *bitmap, const bitmap_info_t *binfo, bool fill) {\n\tsize_t extra;\n\n\tif (fill) {\n\t\tmemset(bitmap, 0, bitmap_size(binfo));\n\t\treturn;\n\t}\n\n\tmemset(bitmap, 0xffU, bitmap_size(binfo));\n\textra = (BITMAP_GROUP_NBITS - (binfo->nbits & BITMAP_GROUP_NBITS_MASK))\n\t    & BITMAP_GROUP_NBITS_MASK;\n\tif (extra != 0) {\n\t\tbitmap[binfo->ngroups - 1] >>= extra;\n\t}\n}\n\n#endif /* BITMAP_USE_TREE */\n\nsize_t\nbitmap_size(const bitmap_info_t *binfo) {\n\treturn (bitmap_info_ngroups(binfo) << LG_SIZEOF_BITMAP);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/buf_writer.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/buf_writer.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n\nstatic void *\nbuf_writer_allocate_internal_buf(tsdn_t *tsdn, size_t buf_len) {\n#ifdef JEMALLOC_JET\n\tif (buf_len > SC_LARGE_MAXCLASS) {\n\t\treturn NULL;\n\t}\n#else\n\tassert(buf_len <= SC_LARGE_MAXCLASS);\n#endif\n\treturn iallocztm(tsdn, buf_len, sz_size2index(buf_len), false, NULL,\n\t    true, arena_get(tsdn, 0, false), true);\n}\n\nstatic void\nbuf_writer_free_internal_buf(tsdn_t *tsdn, void *buf) {\n\tif (buf != NULL) {\n\t\tidalloctm(tsdn, buf, NULL, NULL, true, true);\n\t}\n}\n\nstatic void\nbuf_writer_assert(buf_writer_t *buf_writer) {\n\tassert(buf_writer != NULL);\n\tassert(buf_writer->write_cb != NULL);\n\tif (buf_writer->buf != NULL) {\n\t\tassert(buf_writer->buf_size > 0);\n\t} else {\n\t\tassert(buf_writer->buf_size == 0);\n\t\tassert(buf_writer->internal_buf);\n\t}\n\tassert(buf_writer->buf_end <= buf_writer->buf_size);\n}\n\nbool\nbuf_writer_init(tsdn_t *tsdn, buf_writer_t *buf_writer, write_cb_t *write_cb,\n    void *cbopaque, char *buf, size_t buf_len) {\n\tif (write_cb != NULL) {\n\t\tbuf_writer->write_cb = write_cb;\n\t} else {\n\t\tbuf_writer->write_cb = je_malloc_message != NULL ?\n\t\t    je_malloc_message : wrtmessage;\n\t}\n\tbuf_writer->cbopaque = cbopaque;\n\tassert(buf_len >= 2);\n\tif (buf != NULL) {\n\t\tbuf_writer->buf = buf;\n\t\tbuf_writer->internal_buf = false;\n\t} else {\n\t\tbuf_writer->buf = buf_writer_allocate_internal_buf(tsdn,\n\t\t    buf_len);\n\t\tbuf_writer->internal_buf = true;\n\t}\n\tif (buf_writer->buf != NULL) {\n\t\tbuf_writer->buf_size = buf_len - 1; /* Allowing for '\\0'. 
*/\n\t} else {\n\t\tbuf_writer->buf_size = 0;\n\t}\n\tbuf_writer->buf_end = 0;\n\tbuf_writer_assert(buf_writer);\n\treturn buf_writer->buf == NULL;\n}\n\nvoid\nbuf_writer_flush(buf_writer_t *buf_writer) {\n\tbuf_writer_assert(buf_writer);\n\tif (buf_writer->buf == NULL) {\n\t\treturn;\n\t}\n\tbuf_writer->buf[buf_writer->buf_end] = '\\0';\n\tbuf_writer->write_cb(buf_writer->cbopaque, buf_writer->buf);\n\tbuf_writer->buf_end = 0;\n\tbuf_writer_assert(buf_writer);\n}\n\nvoid\nbuf_writer_cb(void *buf_writer_arg, const char *s) {\n\tbuf_writer_t *buf_writer = (buf_writer_t *)buf_writer_arg;\n\tbuf_writer_assert(buf_writer);\n\tif (buf_writer->buf == NULL) {\n\t\tbuf_writer->write_cb(buf_writer->cbopaque, s);\n\t\treturn;\n\t}\n\tsize_t i, slen, n;\n\tfor (i = 0, slen = strlen(s); i < slen; i += n) {\n\t\tif (buf_writer->buf_end == buf_writer->buf_size) {\n\t\t\tbuf_writer_flush(buf_writer);\n\t\t}\n\t\tsize_t s_remain = slen - i;\n\t\tsize_t buf_remain = buf_writer->buf_size - buf_writer->buf_end;\n\t\tn = s_remain < buf_remain ? 
s_remain : buf_remain;\n\t\tmemcpy(buf_writer->buf + buf_writer->buf_end, s + i, n);\n\t\tbuf_writer->buf_end += n;\n\t\tbuf_writer_assert(buf_writer);\n\t}\n\tassert(i == slen);\n}\n\nvoid\nbuf_writer_terminate(tsdn_t *tsdn, buf_writer_t *buf_writer) {\n\tbuf_writer_assert(buf_writer);\n\tbuf_writer_flush(buf_writer);\n\tif (buf_writer->internal_buf) {\n\t\tbuf_writer_free_internal_buf(tsdn, buf_writer->buf);\n\t}\n}\n\nvoid\nbuf_writer_pipe(buf_writer_t *buf_writer, read_cb_t *read_cb,\n    void *read_cbopaque) {\n\t/*\n\t * A tiny local buffer in case the buffered writer failed to allocate\n\t * at init.\n\t */\n\tstatic char backup_buf[16];\n\tstatic buf_writer_t backup_buf_writer;\n\n\tbuf_writer_assert(buf_writer);\n\tassert(read_cb != NULL);\n\tif (buf_writer->buf == NULL) {\n\t\tbuf_writer_init(TSDN_NULL, &backup_buf_writer,\n\t\t    buf_writer->write_cb, buf_writer->cbopaque, backup_buf,\n\t\t    sizeof(backup_buf));\n\t\tbuf_writer = &backup_buf_writer;\n\t}\n\tassert(buf_writer->buf != NULL);\n\tssize_t nread = 0;\n\tdo {\n\t\tbuf_writer->buf_end += nread;\n\t\tbuf_writer_assert(buf_writer);\n\t\tif (buf_writer->buf_end == buf_writer->buf_size) {\n\t\t\tbuf_writer_flush(buf_writer);\n\t\t}\n\t\tnread = read_cb(read_cbopaque,\n\t\t    buf_writer->buf + buf_writer->buf_end,\n\t\t    buf_writer->buf_size - buf_writer->buf_end);\n\t} while (nread > 0);\n\tbuf_writer_flush(buf_writer);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/cache_bin.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/cache_bin.h\"\n#include \"jemalloc/internal/safety_check.h\"\n\nvoid\ncache_bin_info_init(cache_bin_info_t *info,\n    cache_bin_sz_t ncached_max) {\n\tassert(ncached_max <= CACHE_BIN_NCACHED_MAX);\n\tsize_t stack_size = (size_t)ncached_max * sizeof(void *);\n\tassert(stack_size < ((size_t)1 << (sizeof(cache_bin_sz_t) * 8)));\n\tinfo->ncached_max = (cache_bin_sz_t)ncached_max;\n}\n\nvoid\ncache_bin_info_compute_alloc(cache_bin_info_t *infos, szind_t ninfos,\n    size_t *size, size_t *alignment) {\n\t/* For the total bin stack region (per tcache), reserve 2 more slots so\n\t * that\n\t * 1) the empty position can be safely read on the fast path before\n\t *    checking \"is_empty\"; and\n\t * 2) the cur_ptr can go beyond the empty position by 1 step safely on\n\t * the fast path (i.e. no overflow).\n\t */\n\t*size = sizeof(void *) * 2;\n\tfor (szind_t i = 0; i < ninfos; i++) {\n\t\tassert(infos[i].ncached_max > 0);\n\t\t*size += infos[i].ncached_max * sizeof(void *);\n\t}\n\n\t/*\n\t * Align to at least PAGE, to minimize the # of TLBs needed by the\n\t * smaller sizes; also helps if the larger sizes don't get used at all.\n\t */\n\t*alignment = PAGE;\n}\n\nvoid\ncache_bin_preincrement(cache_bin_info_t *infos, szind_t ninfos, void *alloc,\n    size_t *cur_offset) {\n\tif (config_debug) {\n\t\tsize_t computed_size;\n\t\tsize_t computed_alignment;\n\n\t\t/* Pointer should be as aligned as we asked for. 
*/\n\t\tcache_bin_info_compute_alloc(infos, ninfos, &computed_size,\n\t\t    &computed_alignment);\n\t\tassert(((uintptr_t)alloc & (computed_alignment - 1)) == 0);\n\t}\n\n\t*(uintptr_t *)((uintptr_t)alloc + *cur_offset) =\n\t    cache_bin_preceding_junk;\n\t*cur_offset += sizeof(void *);\n}\n\nvoid\ncache_bin_postincrement(cache_bin_info_t *infos, szind_t ninfos, void *alloc,\n    size_t *cur_offset) {\n\t*(uintptr_t *)((uintptr_t)alloc + *cur_offset) =\n\t    cache_bin_trailing_junk;\n\t*cur_offset += sizeof(void *);\n}\n\nvoid\ncache_bin_init(cache_bin_t *bin, cache_bin_info_t *info, void *alloc,\n    size_t *cur_offset) {\n\t/*\n\t * The full_position points to the lowest available space.  Allocations\n\t * will access the slots toward higher addresses (for the benefit of\n\t * adjacent prefetch).\n\t */\n\tvoid *stack_cur = (void *)((uintptr_t)alloc + *cur_offset);\n\tvoid *full_position = stack_cur;\n\tuint16_t bin_stack_size = info->ncached_max * sizeof(void *);\n\n\t*cur_offset += bin_stack_size;\n\tvoid *empty_position = (void *)((uintptr_t)alloc + *cur_offset);\n\n\t/* Init to the empty position. */\n\tbin->stack_head = (void **)empty_position;\n\tbin->low_bits_low_water = (uint16_t)(uintptr_t)bin->stack_head;\n\tbin->low_bits_full = (uint16_t)(uintptr_t)full_position;\n\tbin->low_bits_empty = (uint16_t)(uintptr_t)empty_position;\n\tcache_bin_sz_t free_spots = cache_bin_diff(bin,\n\t    bin->low_bits_full, (uint16_t)(uintptr_t)bin->stack_head,\n\t    /* racy */ false);\n\tassert(free_spots == bin_stack_size);\n\tassert(cache_bin_ncached_get_local(bin, info) == 0);\n\tassert(cache_bin_empty_position_get(bin) == empty_position);\n\n\tassert(bin_stack_size > 0 || empty_position == full_position);\n}\n\nbool\ncache_bin_still_zero_initialized(cache_bin_t *bin) {\n\treturn bin->stack_head == NULL;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/ckh.c",
    "content": "/*\n *******************************************************************************\n * Implementation of (2^1+,2) cuckoo hashing, where 2^1+ indicates that each\n * hash bucket contains 2^n cells, for n >= 1, and 2 indicates that two hash\n * functions are employed.  The original cuckoo hashing algorithm was described\n * in:\n *\n *   Pagh, R., F.F. Rodler (2004) Cuckoo Hashing.  Journal of Algorithms\n *     51(2):122-144.\n *\n * Generalization of cuckoo hashing was discussed in:\n *\n *   Erlingsson, U., M. Manasse, F. McSherry (2006) A cool and practical\n *     alternative to traditional hash tables.  In Proceedings of the 7th\n *     Workshop on Distributed Data and Structures (WDAS'06), Santa Clara, CA,\n *     January 2006.\n *\n * This implementation uses precisely two hash functions because that is the\n * fewest that can work, and supporting multiple hashes is an implementation\n * burden.  Here is a reproduction of Figure 1 from Erlingsson et al. (2006)\n * that shows approximate expected maximum load factors for various\n * configurations:\n *\n *           |         #cells/bucket         |\n *   #hashes |   1   |   2   |   4   |   8   |\n *   --------+-------+-------+-------+-------+\n *         1 | 0.006 | 0.006 | 0.03  | 0.12  |\n *         2 | 0.49  | 0.86  |>0.93< |>0.96< |\n *         3 | 0.91  | 0.97  | 0.98  | 0.999 |\n *         4 | 0.97  | 0.99  | 0.999 |       |\n *\n * The number of cells per bucket is chosen such that a bucket fits in one cache\n * line.  
So, on 32- and 64-bit systems, we use (8,2) and (4,2) cuckoo hashing,\n * respectively.\n *\n ******************************************************************************/\n#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n#include \"jemalloc/internal/ckh.h\"\n\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/hash.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/prng.h\"\n#include \"jemalloc/internal/util.h\"\n\n/******************************************************************************/\n/* Function prototypes for non-inline static functions. */\n\nstatic bool\tckh_grow(tsd_t *tsd, ckh_t *ckh);\nstatic void\tckh_shrink(tsd_t *tsd, ckh_t *ckh);\n\n/******************************************************************************/\n\n/*\n * Search bucket for key and return the cell number if found; SIZE_T_MAX\n * otherwise.\n */\nstatic size_t\nckh_bucket_search(ckh_t *ckh, size_t bucket, const void *key) {\n\tckhc_t *cell;\n\tunsigned i;\n\n\tfor (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) {\n\t\tcell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i];\n\t\tif (cell->key != NULL && ckh->keycomp(key, cell->key)) {\n\t\t\treturn (bucket << LG_CKH_BUCKET_CELLS) + i;\n\t\t}\n\t}\n\n\treturn SIZE_T_MAX;\n}\n\n/*\n * Search table for key and return cell number if found; SIZE_T_MAX otherwise.\n */\nstatic size_t\nckh_isearch(ckh_t *ckh, const void *key) {\n\tsize_t hashes[2], bucket, cell;\n\n\tassert(ckh != NULL);\n\n\tckh->hash(key, hashes);\n\n\t/* Search primary bucket. */\n\tbucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1);\n\tcell = ckh_bucket_search(ckh, bucket, key);\n\tif (cell != SIZE_T_MAX) {\n\t\treturn cell;\n\t}\n\n\t/* Search secondary bucket. 
*/\n\tbucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1);\n\tcell = ckh_bucket_search(ckh, bucket, key);\n\treturn cell;\n}\n\nstatic bool\nckh_try_bucket_insert(ckh_t *ckh, size_t bucket, const void *key,\n    const void *data) {\n\tckhc_t *cell;\n\tunsigned offset, i;\n\n\t/*\n\t * Cycle through the cells in the bucket, starting at a random position.\n\t * The randomness avoids worst-case search overhead as buckets fill up.\n\t */\n\toffset = (unsigned)prng_lg_range_u64(&ckh->prng_state,\n\t    LG_CKH_BUCKET_CELLS);\n\tfor (i = 0; i < (ZU(1) << LG_CKH_BUCKET_CELLS); i++) {\n\t\tcell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) +\n\t\t    ((i + offset) & ((ZU(1) << LG_CKH_BUCKET_CELLS) - 1))];\n\t\tif (cell->key == NULL) {\n\t\t\tcell->key = key;\n\t\t\tcell->data = data;\n\t\t\tckh->count++;\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\n/*\n * No space is available in bucket.  Randomly evict an item, then try to find an\n * alternate location for that item.  Iteratively repeat this\n * eviction/relocation procedure until either success or detection of an\n * eviction/relocation bucket cycle.\n */\nstatic bool\nckh_evict_reloc_insert(ckh_t *ckh, size_t argbucket, void const **argkey,\n    void const **argdata) {\n\tconst void *key, *data, *tkey, *tdata;\n\tckhc_t *cell;\n\tsize_t hashes[2], bucket, tbucket;\n\tunsigned i;\n\n\tbucket = argbucket;\n\tkey = *argkey;\n\tdata = *argdata;\n\twhile (true) {\n\t\t/*\n\t\t * Choose a random item within the bucket to evict.  
This is\n\t\t * critical to correct function, because without (eventually)\n\t\t * evicting all items within a bucket during iteration, it\n\t\t * would be possible to get stuck in an infinite loop if there\n\t\t * were an item for which both hashes indicated the same\n\t\t * bucket.\n\t\t */\n\t\ti = (unsigned)prng_lg_range_u64(&ckh->prng_state,\n\t\t    LG_CKH_BUCKET_CELLS);\n\t\tcell = &ckh->tab[(bucket << LG_CKH_BUCKET_CELLS) + i];\n\t\tassert(cell->key != NULL);\n\n\t\t/* Swap cell->{key,data} and {key,data} (evict). */\n\t\ttkey = cell->key; tdata = cell->data;\n\t\tcell->key = key; cell->data = data;\n\t\tkey = tkey; data = tdata;\n\n#ifdef CKH_COUNT\n\t\tckh->nrelocs++;\n#endif\n\n\t\t/* Find the alternate bucket for the evicted item. */\n\t\tckh->hash(key, hashes);\n\t\ttbucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1);\n\t\tif (tbucket == bucket) {\n\t\t\ttbucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets)\n\t\t\t    - 1);\n\t\t\t/*\n\t\t\t * It may be that (tbucket == bucket) still, if the\n\t\t\t * item's hashes both indicate this bucket.  However,\n\t\t\t * we are guaranteed to eventually escape this bucket\n\t\t\t * during iteration, assuming pseudo-random item\n\t\t\t * selection (true randomness would make infinite\n\t\t\t * looping a remote possibility).  The reason we can\n\t\t\t * never get trapped forever is that there are two\n\t\t\t * cases:\n\t\t\t *\n\t\t\t * 1) This bucket == argbucket, so we will quickly\n\t\t\t *    detect an eviction cycle and terminate.\n\t\t\t * 2) An item was evicted to this bucket from another,\n\t\t\t *    which means that at least one item in this bucket\n\t\t\t *    has hashes that indicate distinct buckets.\n\t\t\t */\n\t\t}\n\t\t/* Check for a cycle. 
*/\n\t\tif (tbucket == argbucket) {\n\t\t\t*argkey = key;\n\t\t\t*argdata = data;\n\t\t\treturn true;\n\t\t}\n\n\t\tbucket = tbucket;\n\t\tif (!ckh_try_bucket_insert(ckh, bucket, key, data)) {\n\t\t\treturn false;\n\t\t}\n\t}\n}\n\nstatic bool\nckh_try_insert(ckh_t *ckh, void const**argkey, void const**argdata) {\n\tsize_t hashes[2], bucket;\n\tconst void *key = *argkey;\n\tconst void *data = *argdata;\n\n\tckh->hash(key, hashes);\n\n\t/* Try to insert in primary bucket. */\n\tbucket = hashes[0] & ((ZU(1) << ckh->lg_curbuckets) - 1);\n\tif (!ckh_try_bucket_insert(ckh, bucket, key, data)) {\n\t\treturn false;\n\t}\n\n\t/* Try to insert in secondary bucket. */\n\tbucket = hashes[1] & ((ZU(1) << ckh->lg_curbuckets) - 1);\n\tif (!ckh_try_bucket_insert(ckh, bucket, key, data)) {\n\t\treturn false;\n\t}\n\n\t/*\n\t * Try to find a place for this item via iterative eviction/relocation.\n\t */\n\treturn ckh_evict_reloc_insert(ckh, bucket, argkey, argdata);\n}\n\n/*\n * Try to rebuild the hash table from scratch by inserting all items from the\n * old table into the new.\n */\nstatic bool\nckh_rebuild(ckh_t *ckh, ckhc_t *aTab) {\n\tsize_t count, i, nins;\n\tconst void *key, *data;\n\n\tcount = ckh->count;\n\tckh->count = 0;\n\tfor (i = nins = 0; nins < count; i++) {\n\t\tif (aTab[i].key != NULL) {\n\t\t\tkey = aTab[i].key;\n\t\t\tdata = aTab[i].data;\n\t\t\tif (ckh_try_insert(ckh, &key, &data)) {\n\t\t\t\tckh->count = count;\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tnins++;\n\t\t}\n\t}\n\n\treturn false;\n}\n\nstatic bool\nckh_grow(tsd_t *tsd, ckh_t *ckh) {\n\tbool ret;\n\tckhc_t *tab, *ttab;\n\tunsigned lg_prevbuckets, lg_curcells;\n\n#ifdef CKH_COUNT\n\tckh->ngrows++;\n#endif\n\n\t/*\n\t * It is possible (though unlikely, given well behaved hashes) that the\n\t * table will have to be doubled more than once in order to create a\n\t * usable table.\n\t */\n\tlg_prevbuckets = ckh->lg_curbuckets;\n\tlg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS;\n\twhile (true) 
{\n\t\tsize_t usize;\n\n\t\tlg_curcells++;\n\t\tusize = sz_sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);\n\t\tif (unlikely(usize == 0\n\t\t    || usize > SC_LARGE_MAXCLASS)) {\n\t\t\tret = true;\n\t\t\tgoto label_return;\n\t\t}\n\t\ttab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE,\n\t\t    true, NULL, true, arena_ichoose(tsd, NULL));\n\t\tif (tab == NULL) {\n\t\t\tret = true;\n\t\t\tgoto label_return;\n\t\t}\n\t\t/* Swap in new table. */\n\t\tttab = ckh->tab;\n\t\tckh->tab = tab;\n\t\ttab = ttab;\n\t\tckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;\n\n\t\tif (!ckh_rebuild(ckh, tab)) {\n\t\t\tidalloctm(tsd_tsdn(tsd), tab, NULL, NULL, true, true);\n\t\t\tbreak;\n\t\t}\n\n\t\t/* Rebuilding failed, so back out partially rebuilt table. */\n\t\tidalloctm(tsd_tsdn(tsd), ckh->tab, NULL, NULL, true, true);\n\t\tckh->tab = tab;\n\t\tckh->lg_curbuckets = lg_prevbuckets;\n\t}\n\n\tret = false;\nlabel_return:\n\treturn ret;\n}\n\nstatic void\nckh_shrink(tsd_t *tsd, ckh_t *ckh) {\n\tckhc_t *tab, *ttab;\n\tsize_t usize;\n\tunsigned lg_prevbuckets, lg_curcells;\n\n\t/*\n\t * It is possible (though unlikely, given well behaved hashes) that the\n\t * table rebuild will fail.\n\t */\n\tlg_prevbuckets = ckh->lg_curbuckets;\n\tlg_curcells = ckh->lg_curbuckets + LG_CKH_BUCKET_CELLS - 1;\n\tusize = sz_sa2u(sizeof(ckhc_t) << lg_curcells, CACHELINE);\n\tif (unlikely(usize == 0 || usize > SC_LARGE_MAXCLASS)) {\n\t\treturn;\n\t}\n\ttab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE, true, NULL,\n\t    true, arena_ichoose(tsd, NULL));\n\tif (tab == NULL) {\n\t\t/*\n\t\t * An OOM error isn't worth propagating, since it doesn't\n\t\t * prevent this or future operations from proceeding.\n\t\t */\n\t\treturn;\n\t}\n\t/* Swap in new table. 
*/\n\tttab = ckh->tab;\n\tckh->tab = tab;\n\ttab = ttab;\n\tckh->lg_curbuckets = lg_curcells - LG_CKH_BUCKET_CELLS;\n\n\tif (!ckh_rebuild(ckh, tab)) {\n\t\tidalloctm(tsd_tsdn(tsd), tab, NULL, NULL, true, true);\n#ifdef CKH_COUNT\n\t\tckh->nshrinks++;\n#endif\n\t\treturn;\n\t}\n\n\t/* Rebuilding failed, so back out partially rebuilt table. */\n\tidalloctm(tsd_tsdn(tsd), ckh->tab, NULL, NULL, true, true);\n\tckh->tab = tab;\n\tckh->lg_curbuckets = lg_prevbuckets;\n#ifdef CKH_COUNT\n\tckh->nshrinkfails++;\n#endif\n}\n\nbool\nckh_new(tsd_t *tsd, ckh_t *ckh, size_t minitems, ckh_hash_t *ckh_hash,\n    ckh_keycomp_t *keycomp) {\n\tbool ret;\n\tsize_t mincells, usize;\n\tunsigned lg_mincells;\n\n\tassert(minitems > 0);\n\tassert(ckh_hash != NULL);\n\tassert(keycomp != NULL);\n\n#ifdef CKH_COUNT\n\tckh->ngrows = 0;\n\tckh->nshrinks = 0;\n\tckh->nshrinkfails = 0;\n\tckh->ninserts = 0;\n\tckh->nrelocs = 0;\n#endif\n\tckh->prng_state = 42; /* Value doesn't really matter. */\n\tckh->count = 0;\n\n\t/*\n\t * Find the minimum power of 2 that is large enough to fit minitems\n\t * entries.  We are using (2+,2) cuckoo hashing, which has an expected\n\t * maximum load factor of at least ~0.86, so 0.75 is a conservative load\n\t * factor that will typically allow mincells items to fit without ever\n\t * growing the table.\n\t */\n\tassert(LG_CKH_BUCKET_CELLS > 0);\n\tmincells = ((minitems + (3 - (minitems % 3))) / 3) << 2;\n\tfor (lg_mincells = LG_CKH_BUCKET_CELLS;\n\t    (ZU(1) << lg_mincells) < mincells;\n\t    lg_mincells++) {\n\t\t/* Do nothing. 
*/\n\t}\n\tckh->lg_minbuckets = lg_mincells - LG_CKH_BUCKET_CELLS;\n\tckh->lg_curbuckets = lg_mincells - LG_CKH_BUCKET_CELLS;\n\tckh->hash = ckh_hash;\n\tckh->keycomp = keycomp;\n\n\tusize = sz_sa2u(sizeof(ckhc_t) << lg_mincells, CACHELINE);\n\tif (unlikely(usize == 0 || usize > SC_LARGE_MAXCLASS)) {\n\t\tret = true;\n\t\tgoto label_return;\n\t}\n\tckh->tab = (ckhc_t *)ipallocztm(tsd_tsdn(tsd), usize, CACHELINE, true,\n\t    NULL, true, arena_ichoose(tsd, NULL));\n\tif (ckh->tab == NULL) {\n\t\tret = true;\n\t\tgoto label_return;\n\t}\n\n\tret = false;\nlabel_return:\n\treturn ret;\n}\n\nvoid\nckh_delete(tsd_t *tsd, ckh_t *ckh) {\n\tassert(ckh != NULL);\n\n#ifdef CKH_VERBOSE\n\tmalloc_printf(\n\t    \"%s(%p): ngrows: %\"FMTu64\", nshrinks: %\"FMTu64\",\"\n\t    \" nshrinkfails: %\"FMTu64\", ninserts: %\"FMTu64\",\"\n\t    \" nrelocs: %\"FMTu64\"\\n\", __func__, ckh,\n\t    (unsigned long long)ckh->ngrows,\n\t    (unsigned long long)ckh->nshrinks,\n\t    (unsigned long long)ckh->nshrinkfails,\n\t    (unsigned long long)ckh->ninserts,\n\t    (unsigned long long)ckh->nrelocs);\n#endif\n\n\tidalloctm(tsd_tsdn(tsd), ckh->tab, NULL, NULL, true, true);\n\tif (config_debug) {\n\t\tmemset(ckh, JEMALLOC_FREE_JUNK, sizeof(ckh_t));\n\t}\n}\n\nsize_t\nckh_count(ckh_t *ckh) {\n\tassert(ckh != NULL);\n\n\treturn ckh->count;\n}\n\nbool\nckh_iter(ckh_t *ckh, size_t *tabind, void **key, void **data) {\n\tsize_t i, ncells;\n\n\tfor (i = *tabind, ncells = (ZU(1) << (ckh->lg_curbuckets +\n\t    LG_CKH_BUCKET_CELLS)); i < ncells; i++) {\n\t\tif (ckh->tab[i].key != NULL) {\n\t\t\tif (key != NULL) {\n\t\t\t\t*key = (void *)ckh->tab[i].key;\n\t\t\t}\n\t\t\tif (data != NULL) {\n\t\t\t\t*data = (void *)ckh->tab[i].data;\n\t\t\t}\n\t\t\t*tabind = i + 1;\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\nbool\nckh_insert(tsd_t *tsd, ckh_t *ckh, const void *key, const void *data) {\n\tbool ret;\n\n\tassert(ckh != NULL);\n\tassert(ckh_search(ckh, key, NULL, NULL));\n\n#ifdef 
CKH_COUNT\n\tckh->ninserts++;\n#endif\n\n\twhile (ckh_try_insert(ckh, &key, &data)) {\n\t\tif (ckh_grow(tsd, ckh)) {\n\t\t\tret = true;\n\t\t\tgoto label_return;\n\t\t}\n\t}\n\n\tret = false;\nlabel_return:\n\treturn ret;\n}\n\nbool\nckh_remove(tsd_t *tsd, ckh_t *ckh, const void *searchkey, void **key,\n    void **data) {\n\tsize_t cell;\n\n\tassert(ckh != NULL);\n\n\tcell = ckh_isearch(ckh, searchkey);\n\tif (cell != SIZE_T_MAX) {\n\t\tif (key != NULL) {\n\t\t\t*key = (void *)ckh->tab[cell].key;\n\t\t}\n\t\tif (data != NULL) {\n\t\t\t*data = (void *)ckh->tab[cell].data;\n\t\t}\n\t\tckh->tab[cell].key = NULL;\n\t\tckh->tab[cell].data = NULL; /* Not necessary. */\n\n\t\tckh->count--;\n\t\t/* Try to halve the table if it is less than 1/4 full. */\n\t\tif (ckh->count < (ZU(1) << (ckh->lg_curbuckets\n\t\t    + LG_CKH_BUCKET_CELLS - 2)) && ckh->lg_curbuckets\n\t\t    > ckh->lg_minbuckets) {\n\t\t\t/* Ignore error due to OOM. */\n\t\t\tckh_shrink(tsd, ckh);\n\t\t}\n\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\nbool\nckh_search(ckh_t *ckh, const void *searchkey, void **key, void **data) {\n\tsize_t cell;\n\n\tassert(ckh != NULL);\n\n\tcell = ckh_isearch(ckh, searchkey);\n\tif (cell != SIZE_T_MAX) {\n\t\tif (key != NULL) {\n\t\t\t*key = (void *)ckh->tab[cell].key;\n\t\t}\n\t\tif (data != NULL) {\n\t\t\t*data = (void *)ckh->tab[cell].data;\n\t\t}\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\nvoid\nckh_string_hash(const void *key, size_t r_hash[2]) {\n\thash(key, strlen((const char *)key), 0x94122f33U, r_hash);\n}\n\nbool\nckh_string_keycomp(const void *k1, const void *k2) {\n\tassert(k1 != NULL);\n\tassert(k2 != NULL);\n\n\treturn !strcmp((char *)k1, (char *)k2);\n}\n\nvoid\nckh_pointer_hash(const void *key, size_t r_hash[2]) {\n\tunion {\n\t\tconst void\t*v;\n\t\tsize_t\t\ti;\n\t} u;\n\n\tassert(sizeof(u.v) == sizeof(u.i));\n\tu.v = key;\n\thash(&u.i, sizeof(u.i), 0xd983396eU, r_hash);\n}\n\nbool\nckh_pointer_keycomp(const void *k1, const void *k2) {\n\treturn (k1 == 
k2);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/counter.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/counter.h\"\n\nbool\ncounter_accum_init(counter_accum_t *counter, uint64_t interval) {\n\tif (LOCKEDINT_MTX_INIT(counter->mtx, \"counter_accum\",\n\t    WITNESS_RANK_COUNTER_ACCUM, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tlocked_init_u64_unsynchronized(&counter->accumbytes, 0);\n\tcounter->interval = interval;\n\treturn false;\n}\n\nvoid\ncounter_prefork(tsdn_t *tsdn, counter_accum_t *counter) {\n\tLOCKEDINT_MTX_PREFORK(tsdn, counter->mtx);\n}\n\nvoid\ncounter_postfork_parent(tsdn_t *tsdn, counter_accum_t *counter) {\n\tLOCKEDINT_MTX_POSTFORK_PARENT(tsdn, counter->mtx);\n}\n\nvoid\ncounter_postfork_child(tsdn_t *tsdn, counter_accum_t *counter) {\n\tLOCKEDINT_MTX_POSTFORK_CHILD(tsdn, counter->mtx);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/ctl.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/inspect.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/nstime.h\"\n#include \"jemalloc/internal/peak_event.h\"\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_log.h\"\n#include \"jemalloc/internal/prof_recent.h\"\n#include \"jemalloc/internal/prof_stats.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/util.h\"\n\n/******************************************************************************/\n/* Data. */\n\n/*\n * ctl_mtx protects the following:\n * - ctl_stats->*\n */\nstatic malloc_mutex_t\tctl_mtx;\nstatic bool\t\tctl_initialized;\nstatic ctl_stats_t\t*ctl_stats;\nstatic ctl_arenas_t\t*ctl_arenas;\n\n/******************************************************************************/\n/* Helpers for named and indexed nodes. */\n\nstatic const ctl_named_node_t *\nctl_named_node(const ctl_node_t *node) {\n\treturn ((node->named) ? (const ctl_named_node_t *)node : NULL);\n}\n\nstatic const ctl_named_node_t *\nctl_named_children(const ctl_named_node_t *node, size_t index) {\n\tconst ctl_named_node_t *children = ctl_named_node(node->children);\n\n\treturn (children ? &children[index] : NULL);\n}\n\nstatic const ctl_indexed_node_t *\nctl_indexed_node(const ctl_node_t *node) {\n\treturn (!node->named ? (const ctl_indexed_node_t *)node : NULL);\n}\n\n/******************************************************************************/\n/* Function prototypes for non-inline static functions. 
*/\n\n#define CTL_PROTO(n)\t\t\t\t\t\t\t\\\nstatic int\tn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\t\\\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen);\n\n#define INDEX_PROTO(n)\t\t\t\t\t\t\t\\\nstatic const ctl_named_node_t\t*n##_index(tsdn_t *tsdn,\t\t\\\n    const size_t *mib, size_t miblen, size_t i);\n\nCTL_PROTO(version)\nCTL_PROTO(epoch)\nCTL_PROTO(background_thread)\nCTL_PROTO(max_background_threads)\nCTL_PROTO(thread_tcache_enabled)\nCTL_PROTO(thread_tcache_flush)\nCTL_PROTO(thread_peak_read)\nCTL_PROTO(thread_peak_reset)\nCTL_PROTO(thread_prof_name)\nCTL_PROTO(thread_prof_active)\nCTL_PROTO(thread_arena)\nCTL_PROTO(thread_allocated)\nCTL_PROTO(thread_allocatedp)\nCTL_PROTO(thread_deallocated)\nCTL_PROTO(thread_deallocatedp)\nCTL_PROTO(thread_idle)\nCTL_PROTO(config_cache_oblivious)\nCTL_PROTO(config_debug)\nCTL_PROTO(config_fill)\nCTL_PROTO(config_lazy_lock)\nCTL_PROTO(config_malloc_conf)\nCTL_PROTO(config_opt_safety_checks)\nCTL_PROTO(config_prof)\nCTL_PROTO(config_prof_libgcc)\nCTL_PROTO(config_prof_libunwind)\nCTL_PROTO(config_stats)\nCTL_PROTO(config_utrace)\nCTL_PROTO(config_xmalloc)\nCTL_PROTO(opt_abort)\nCTL_PROTO(opt_abort_conf)\nCTL_PROTO(opt_cache_oblivious)\nCTL_PROTO(opt_trust_madvise)\nCTL_PROTO(opt_confirm_conf)\nCTL_PROTO(opt_hpa)\nCTL_PROTO(opt_hpa_slab_max_alloc)\nCTL_PROTO(opt_hpa_hugification_threshold)\nCTL_PROTO(opt_hpa_hugify_delay_ms)\nCTL_PROTO(opt_hpa_min_purge_interval_ms)\nCTL_PROTO(opt_hpa_dirty_mult)\nCTL_PROTO(opt_hpa_sec_nshards)\nCTL_PROTO(opt_hpa_sec_max_alloc)\nCTL_PROTO(opt_hpa_sec_max_bytes)\nCTL_PROTO(opt_hpa_sec_bytes_after_flush)\nCTL_PROTO(opt_hpa_sec_batch_fill_extra)\nCTL_PROTO(opt_metadata_thp)\nCTL_PROTO(opt_retain)\nCTL_PROTO(opt_dss)\nCTL_PROTO(opt_narenas)\nCTL_PROTO(opt_percpu_arena)\nCTL_PROTO(opt_oversize_threshold)\nCTL_PROTO(opt_background_thread)\nCTL_PROTO(opt_mutex_max_spin)\nCTL_PROTO(opt_max_background_threads)\nCTL_PROTO(opt_dirty_decay_ms)\nCTL_PROTO(opt_muzzy_decay_ms)\nCTL
_PROTO(opt_stats_print)\nCTL_PROTO(opt_stats_print_opts)\nCTL_PROTO(opt_stats_interval)\nCTL_PROTO(opt_stats_interval_opts)\nCTL_PROTO(opt_junk)\nCTL_PROTO(opt_zero)\nCTL_PROTO(opt_utrace)\nCTL_PROTO(opt_xmalloc)\nCTL_PROTO(opt_experimental_infallible_new)\nCTL_PROTO(opt_tcache)\nCTL_PROTO(opt_tcache_max)\nCTL_PROTO(opt_tcache_nslots_small_min)\nCTL_PROTO(opt_tcache_nslots_small_max)\nCTL_PROTO(opt_tcache_nslots_large)\nCTL_PROTO(opt_lg_tcache_nslots_mul)\nCTL_PROTO(opt_tcache_gc_incr_bytes)\nCTL_PROTO(opt_tcache_gc_delay_bytes)\nCTL_PROTO(opt_lg_tcache_flush_small_div)\nCTL_PROTO(opt_lg_tcache_flush_large_div)\nCTL_PROTO(opt_thp)\nCTL_PROTO(opt_lg_extent_max_active_fit)\nCTL_PROTO(opt_prof)\nCTL_PROTO(opt_prof_prefix)\nCTL_PROTO(opt_prof_active)\nCTL_PROTO(opt_prof_thread_active_init)\nCTL_PROTO(opt_lg_prof_sample)\nCTL_PROTO(opt_lg_prof_interval)\nCTL_PROTO(opt_prof_gdump)\nCTL_PROTO(opt_prof_final)\nCTL_PROTO(opt_prof_leak)\nCTL_PROTO(opt_prof_leak_error)\nCTL_PROTO(opt_prof_accum)\nCTL_PROTO(opt_prof_recent_alloc_max)\nCTL_PROTO(opt_prof_stats)\nCTL_PROTO(opt_prof_sys_thread_name)\nCTL_PROTO(opt_prof_time_res)\nCTL_PROTO(opt_lg_san_uaf_align)\nCTL_PROTO(opt_zero_realloc)\nCTL_PROTO(tcache_create)\nCTL_PROTO(tcache_flush)\nCTL_PROTO(tcache_destroy)\nCTL_PROTO(arena_i_initialized)\nCTL_PROTO(arena_i_decay)\nCTL_PROTO(arena_i_purge)\nCTL_PROTO(arena_i_reset)\nCTL_PROTO(arena_i_destroy)\nCTL_PROTO(arena_i_dss)\nCTL_PROTO(arena_i_oversize_threshold)\nCTL_PROTO(arena_i_dirty_decay_ms)\nCTL_PROTO(arena_i_muzzy_decay_ms)\nCTL_PROTO(arena_i_extent_hooks)\nCTL_PROTO(arena_i_retain_grow_limit)\nINDEX_PROTO(arena_i)\nCTL_PROTO(arenas_bin_i_size)\nCTL_PROTO(arenas_bin_i_nregs)\nCTL_PROTO(arenas_bin_i_slab_size)\nCTL_PROTO(arenas_bin_i_nshards)\nINDEX_PROTO(arenas_bin_i)\nCTL_PROTO(arenas_lextent_i_size)\nINDEX_PROTO(arenas_lextent_i)\nCTL_PROTO(arenas_narenas)\nCTL_PROTO(arenas_dirty_decay_ms)\nCTL_PROTO(arenas_muzzy_decay_ms)\nCTL_PROTO(arenas_quantum)\nCTL_PROTO(arenas_pag
e)\nCTL_PROTO(arenas_tcache_max)\nCTL_PROTO(arenas_nbins)\nCTL_PROTO(arenas_nhbins)\nCTL_PROTO(arenas_nlextents)\nCTL_PROTO(arenas_create)\nCTL_PROTO(arenas_lookup)\nCTL_PROTO(prof_thread_active_init)\nCTL_PROTO(prof_active)\nCTL_PROTO(prof_dump)\nCTL_PROTO(prof_gdump)\nCTL_PROTO(prof_prefix)\nCTL_PROTO(prof_reset)\nCTL_PROTO(prof_interval)\nCTL_PROTO(lg_prof_sample)\nCTL_PROTO(prof_log_start)\nCTL_PROTO(prof_log_stop)\nCTL_PROTO(prof_stats_bins_i_live)\nCTL_PROTO(prof_stats_bins_i_accum)\nINDEX_PROTO(prof_stats_bins_i)\nCTL_PROTO(prof_stats_lextents_i_live)\nCTL_PROTO(prof_stats_lextents_i_accum)\nINDEX_PROTO(prof_stats_lextents_i)\nCTL_PROTO(stats_arenas_i_small_allocated)\nCTL_PROTO(stats_arenas_i_small_nmalloc)\nCTL_PROTO(stats_arenas_i_small_ndalloc)\nCTL_PROTO(stats_arenas_i_small_nrequests)\nCTL_PROTO(stats_arenas_i_small_nfills)\nCTL_PROTO(stats_arenas_i_small_nflushes)\nCTL_PROTO(stats_arenas_i_large_allocated)\nCTL_PROTO(stats_arenas_i_large_nmalloc)\nCTL_PROTO(stats_arenas_i_large_ndalloc)\nCTL_PROTO(stats_arenas_i_large_nrequests)\nCTL_PROTO(stats_arenas_i_large_nfills)\nCTL_PROTO(stats_arenas_i_large_nflushes)\nCTL_PROTO(stats_arenas_i_bins_j_nmalloc)\nCTL_PROTO(stats_arenas_i_bins_j_ndalloc)\nCTL_PROTO(stats_arenas_i_bins_j_nrequests)\nCTL_PROTO(stats_arenas_i_bins_j_curregs)\nCTL_PROTO(stats_arenas_i_bins_j_nfills)\nCTL_PROTO(stats_arenas_i_bins_j_nflushes)\nCTL_PROTO(stats_arenas_i_bins_j_nslabs)\nCTL_PROTO(stats_arenas_i_bins_j_nreslabs)\nCTL_PROTO(stats_arenas_i_bins_j_curslabs)\nCTL_PROTO(stats_arenas_i_bins_j_nonfull_slabs)\nINDEX_PROTO(stats_arenas_i_bins_j)\nCTL_PROTO(stats_arenas_i_lextents_j_nmalloc)\nCTL_PROTO(stats_arenas_i_lextents_j_ndalloc)\nCTL_PROTO(stats_arenas_i_lextents_j_nrequests)\nCTL_PROTO(stats_arenas_i_lextents_j_curlextents)\nINDEX_PROTO(stats_arenas_i_lextents_j)\nCTL_PROTO(stats_arenas_i_extents_j_ndirty)\nCTL_PROTO(stats_arenas_i_extents_j_nmuzzy)\nCTL_PROTO(stats_arenas_i_extents_j_nretained)\nCTL_PROTO(stats_arenas_i_ext
ents_j_dirty_bytes)\nCTL_PROTO(stats_arenas_i_extents_j_muzzy_bytes)\nCTL_PROTO(stats_arenas_i_extents_j_retained_bytes)\nINDEX_PROTO(stats_arenas_i_extents_j)\nCTL_PROTO(stats_arenas_i_hpa_shard_npurge_passes)\nCTL_PROTO(stats_arenas_i_hpa_shard_npurges)\nCTL_PROTO(stats_arenas_i_hpa_shard_nhugifies)\nCTL_PROTO(stats_arenas_i_hpa_shard_ndehugifies)\n\n/* We have a set of stats for full slabs. */\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_npageslabs_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_npageslabs_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_nactive_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_nactive_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_ndirty_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_full_slabs_ndirty_huge)\n\n/* A parallel set for the empty slabs. */\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_npageslabs_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_npageslabs_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_nactive_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_nactive_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_ndirty_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_empty_slabs_ndirty_huge)\n\n/*\n * And one for the slabs that are neither empty nor full, but indexed by how\n * full they are.\n 
*/\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_huge)\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_nonhuge)\nCTL_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_huge)\n\nINDEX_PROTO(stats_arenas_i_hpa_shard_nonfull_slabs_j)\nCTL_PROTO(stats_arenas_i_nthreads)\nCTL_PROTO(stats_arenas_i_uptime)\nCTL_PROTO(stats_arenas_i_dss)\nCTL_PROTO(stats_arenas_i_dirty_decay_ms)\nCTL_PROTO(stats_arenas_i_muzzy_decay_ms)\nCTL_PROTO(stats_arenas_i_pactive)\nCTL_PROTO(stats_arenas_i_pdirty)\nCTL_PROTO(stats_arenas_i_pmuzzy)\nCTL_PROTO(stats_arenas_i_mapped)\nCTL_PROTO(stats_arenas_i_retained)\nCTL_PROTO(stats_arenas_i_extent_avail)\nCTL_PROTO(stats_arenas_i_dirty_npurge)\nCTL_PROTO(stats_arenas_i_dirty_nmadvise)\nCTL_PROTO(stats_arenas_i_dirty_purged)\nCTL_PROTO(stats_arenas_i_muzzy_npurge)\nCTL_PROTO(stats_arenas_i_muzzy_nmadvise)\nCTL_PROTO(stats_arenas_i_muzzy_purged)\nCTL_PROTO(stats_arenas_i_base)\nCTL_PROTO(stats_arenas_i_internal)\nCTL_PROTO(stats_arenas_i_metadata_thp)\nCTL_PROTO(stats_arenas_i_tcache_bytes)\nCTL_PROTO(stats_arenas_i_tcache_stashed_bytes)\nCTL_PROTO(stats_arenas_i_resident)\nCTL_PROTO(stats_arenas_i_abandoned_vm)\nCTL_PROTO(stats_arenas_i_hpa_sec_bytes)\nINDEX_PROTO(stats_arenas_i)\nCTL_PROTO(stats_allocated)\nCTL_PROTO(stats_active)\nCTL_PROTO(stats_background_thread_num_threads)\nCTL_PROTO(stats_background_thread_num_runs)\nCTL_PROTO(stats_background_thread_run_interval)\nCTL_PROTO(stats_metadata)\nCTL_PROTO(stats_metadata_thp)\nCTL_PROTO(stats_resident)\nCTL_PROTO(stats_mapped)\nCTL_PROTO(stats_retained)\nCTL_PROTO(stats_zero_reallocs)\nCTL_PROTO(experimental_hooks_install)\nCTL_PROTO(experimental_hooks_remove)\nCTL_PROTO(experimental_hooks_prof_backtrace)\nCTL_PROTO(experimental_hooks_prof_dump)\nCTL_PROTO(experime
ntal_hooks_safety_check_abort)\nCTL_PROTO(experimental_thread_activity_callback)\nCTL_PROTO(experimental_utilization_query)\nCTL_PROTO(experimental_utilization_batch_query)\nCTL_PROTO(experimental_arenas_i_pactivep)\nINDEX_PROTO(experimental_arenas_i)\nCTL_PROTO(experimental_prof_recent_alloc_max)\nCTL_PROTO(experimental_prof_recent_alloc_dump)\nCTL_PROTO(experimental_batch_alloc)\nCTL_PROTO(experimental_arenas_create_ext)\n\n#define MUTEX_STATS_CTL_PROTO_GEN(n)\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_num_ops)\t\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_num_wait)\t\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_num_spin_acq)\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_num_owner_switch)\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_total_wait_time)\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_max_wait_time)\t\t\t\t\t\\\nCTL_PROTO(stats_##n##_max_num_thds)\n\n/* Global mutexes. */\n#define OP(mtx) MUTEX_STATS_CTL_PROTO_GEN(mutexes_##mtx)\nMUTEX_PROF_GLOBAL_MUTEXES\n#undef OP\n\n/* Per arena mutexes. */\n#define OP(mtx) MUTEX_STATS_CTL_PROTO_GEN(arenas_i_mutexes_##mtx)\nMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n\n/* Arena bin mutexes. */\nMUTEX_STATS_CTL_PROTO_GEN(arenas_i_bins_j_mutex)\n#undef MUTEX_STATS_CTL_PROTO_GEN\n\nCTL_PROTO(stats_mutexes_reset)\n\n/******************************************************************************/\n/* mallctl tree. 
*/\n\n#define NAME(n)\t{true},\tn\n#define CHILD(t, c)\t\t\t\t\t\t\t\\\n\tsizeof(c##_node) / sizeof(ctl_##t##_node_t),\t\t\t\\\n\t(ctl_node_t *)c##_node,\t\t\t\t\t\t\\\n\tNULL\n#define CTL(c)\t0, NULL, c##_ctl\n\n/*\n * Only handles internal indexed nodes, since there are currently no external\n * ones.\n */\n#define INDEX(i)\t{false},\ti##_index\n\nstatic const ctl_named_node_t\tthread_tcache_node[] = {\n\t{NAME(\"enabled\"),\tCTL(thread_tcache_enabled)},\n\t{NAME(\"flush\"),\t\tCTL(thread_tcache_flush)}\n};\n\nstatic const ctl_named_node_t\tthread_peak_node[] = {\n\t{NAME(\"read\"),\t\tCTL(thread_peak_read)},\n\t{NAME(\"reset\"),\t\tCTL(thread_peak_reset)},\n};\n\nstatic const ctl_named_node_t\tthread_prof_node[] = {\n\t{NAME(\"name\"),\t\tCTL(thread_prof_name)},\n\t{NAME(\"active\"),\tCTL(thread_prof_active)}\n};\n\nstatic const ctl_named_node_t\tthread_node[] = {\n\t{NAME(\"arena\"),\t\tCTL(thread_arena)},\n\t{NAME(\"allocated\"),\tCTL(thread_allocated)},\n\t{NAME(\"allocatedp\"),\tCTL(thread_allocatedp)},\n\t{NAME(\"deallocated\"),\tCTL(thread_deallocated)},\n\t{NAME(\"deallocatedp\"),\tCTL(thread_deallocatedp)},\n\t{NAME(\"tcache\"),\tCHILD(named, thread_tcache)},\n\t{NAME(\"peak\"),\t\tCHILD(named, thread_peak)},\n\t{NAME(\"prof\"),\t\tCHILD(named, thread_prof)},\n\t{NAME(\"idle\"),\t\tCTL(thread_idle)}\n};\n\nstatic const ctl_named_node_t\tconfig_node[] = {\n\t{NAME(\"cache_oblivious\"), CTL(config_cache_oblivious)},\n\t{NAME(\"debug\"),\t\tCTL(config_debug)},\n\t{NAME(\"fill\"),\t\tCTL(config_fill)},\n\t{NAME(\"lazy_lock\"),\tCTL(config_lazy_lock)},\n\t{NAME(\"malloc_conf\"),\tCTL(config_malloc_conf)},\n\t{NAME(\"opt_safety_checks\"),\tCTL(config_opt_safety_checks)},\n\t{NAME(\"prof\"),\t\tCTL(config_prof)},\n\t{NAME(\"prof_libgcc\"),\tCTL(config_prof_libgcc)},\n\t{NAME(\"prof_libunwind\"), 
CTL(config_prof_libunwind)},\n\t{NAME(\"stats\"),\t\tCTL(config_stats)},\n\t{NAME(\"utrace\"),\tCTL(config_utrace)},\n\t{NAME(\"xmalloc\"),\tCTL(config_xmalloc)}\n};\n\nstatic const ctl_named_node_t opt_node[] = {\n\t{NAME(\"abort\"),\t\tCTL(opt_abort)},\n\t{NAME(\"abort_conf\"),\tCTL(opt_abort_conf)},\n\t{NAME(\"cache_oblivious\"),\tCTL(opt_cache_oblivious)},\n\t{NAME(\"trust_madvise\"),\tCTL(opt_trust_madvise)},\n\t{NAME(\"confirm_conf\"),\tCTL(opt_confirm_conf)},\n\t{NAME(\"hpa\"),\t\tCTL(opt_hpa)},\n\t{NAME(\"hpa_slab_max_alloc\"),\tCTL(opt_hpa_slab_max_alloc)},\n\t{NAME(\"hpa_hugification_threshold\"),\n\t\tCTL(opt_hpa_hugification_threshold)},\n\t{NAME(\"hpa_hugify_delay_ms\"), CTL(opt_hpa_hugify_delay_ms)},\n\t{NAME(\"hpa_min_purge_interval_ms\"), CTL(opt_hpa_min_purge_interval_ms)},\n\t{NAME(\"hpa_dirty_mult\"), CTL(opt_hpa_dirty_mult)},\n\t{NAME(\"hpa_sec_nshards\"),\tCTL(opt_hpa_sec_nshards)},\n\t{NAME(\"hpa_sec_max_alloc\"),\tCTL(opt_hpa_sec_max_alloc)},\n\t{NAME(\"hpa_sec_max_bytes\"),\tCTL(opt_hpa_sec_max_bytes)},\n\t{NAME(\"hpa_sec_bytes_after_flush\"),\n\t\tCTL(opt_hpa_sec_bytes_after_flush)},\n\t{NAME(\"hpa_sec_batch_fill_extra\"),\n\t\tCTL(opt_hpa_sec_batch_fill_extra)},\n\t{NAME(\"metadata_thp\"),\tCTL(opt_metadata_thp)},\n\t{NAME(\"retain\"),\tCTL(opt_retain)},\n\t{NAME(\"dss\"),\t\tCTL(opt_dss)},\n\t{NAME(\"narenas\"),\tCTL(opt_narenas)},\n\t{NAME(\"percpu_arena\"),\tCTL(opt_percpu_arena)},\n\t{NAME(\"oversize_threshold\"),\tCTL(opt_oversize_threshold)},\n\t{NAME(\"mutex_max_spin\"),\tCTL(opt_mutex_max_spin)},\n\t{NAME(\"background_thread\"),\tCTL(opt_background_thread)},\n\t{NAME(\"max_background_threads\"),\tCTL(opt_max_background_threads)},\n\t{NAME(\"dirty_decay_ms\"), CTL(opt_dirty_decay_ms)},\n\t{NAME(\"muzzy_decay_ms\"), 
CTL(opt_muzzy_decay_ms)},\n\t{NAME(\"stats_print\"),\tCTL(opt_stats_print)},\n\t{NAME(\"stats_print_opts\"),\tCTL(opt_stats_print_opts)},\n\t{NAME(\"stats_interval\"),\tCTL(opt_stats_interval)},\n\t{NAME(\"stats_interval_opts\"),\tCTL(opt_stats_interval_opts)},\n\t{NAME(\"junk\"),\t\tCTL(opt_junk)},\n\t{NAME(\"zero\"),\t\tCTL(opt_zero)},\n\t{NAME(\"utrace\"),\tCTL(opt_utrace)},\n\t{NAME(\"xmalloc\"),\tCTL(opt_xmalloc)},\n\t{NAME(\"experimental_infallible_new\"),\n\t\tCTL(opt_experimental_infallible_new)},\n\t{NAME(\"tcache\"),\tCTL(opt_tcache)},\n\t{NAME(\"tcache_max\"),\tCTL(opt_tcache_max)},\n\t{NAME(\"tcache_nslots_small_min\"),\n\t\tCTL(opt_tcache_nslots_small_min)},\n\t{NAME(\"tcache_nslots_small_max\"),\n\t\tCTL(opt_tcache_nslots_small_max)},\n\t{NAME(\"tcache_nslots_large\"),\tCTL(opt_tcache_nslots_large)},\n\t{NAME(\"lg_tcache_nslots_mul\"),\tCTL(opt_lg_tcache_nslots_mul)},\n\t{NAME(\"tcache_gc_incr_bytes\"),\tCTL(opt_tcache_gc_incr_bytes)},\n\t{NAME(\"tcache_gc_delay_bytes\"),\tCTL(opt_tcache_gc_delay_bytes)},\n\t{NAME(\"lg_tcache_flush_small_div\"),\n\t\tCTL(opt_lg_tcache_flush_small_div)},\n\t{NAME(\"lg_tcache_flush_large_div\"),\n\t\tCTL(opt_lg_tcache_flush_large_div)},\n\t{NAME(\"thp\"),\t\tCTL(opt_thp)},\n\t{NAME(\"lg_extent_max_active_fit\"), CTL(opt_lg_extent_max_active_fit)},\n\t{NAME(\"prof\"),\t\tCTL(opt_prof)},\n\t{NAME(\"prof_prefix\"),\tCTL(opt_prof_prefix)},\n\t{NAME(\"prof_active\"),\tCTL(opt_prof_active)},\n\t{NAME(\"prof_thread_active_init\"), CTL(opt_prof_thread_active_init)},\n\t{NAME(\"lg_prof_sample\"), CTL(opt_lg_prof_sample)},\n\t{NAME(\"lg_prof_interval\"), 
CTL(opt_lg_prof_interval)},\n\t{NAME(\"prof_gdump\"),\tCTL(opt_prof_gdump)},\n\t{NAME(\"prof_final\"),\tCTL(opt_prof_final)},\n\t{NAME(\"prof_leak\"),\tCTL(opt_prof_leak)},\n\t{NAME(\"prof_leak_error\"),\tCTL(opt_prof_leak_error)},\n\t{NAME(\"prof_accum\"),\tCTL(opt_prof_accum)},\n\t{NAME(\"prof_recent_alloc_max\"),\tCTL(opt_prof_recent_alloc_max)},\n\t{NAME(\"prof_stats\"),\tCTL(opt_prof_stats)},\n\t{NAME(\"prof_sys_thread_name\"),\tCTL(opt_prof_sys_thread_name)},\n\t{NAME(\"prof_time_resolution\"),\tCTL(opt_prof_time_res)},\n\t{NAME(\"lg_san_uaf_align\"),\tCTL(opt_lg_san_uaf_align)},\n\t{NAME(\"zero_realloc\"),\tCTL(opt_zero_realloc)}\n};\n\nstatic const ctl_named_node_t\ttcache_node[] = {\n\t{NAME(\"create\"),\tCTL(tcache_create)},\n\t{NAME(\"flush\"),\t\tCTL(tcache_flush)},\n\t{NAME(\"destroy\"),\tCTL(tcache_destroy)}\n};\n\nstatic const ctl_named_node_t arena_i_node[] = {\n\t{NAME(\"initialized\"),\tCTL(arena_i_initialized)},\n\t{NAME(\"decay\"),\t\tCTL(arena_i_decay)},\n\t{NAME(\"purge\"),\t\tCTL(arena_i_purge)},\n\t{NAME(\"reset\"),\t\tCTL(arena_i_reset)},\n\t{NAME(\"destroy\"),\tCTL(arena_i_destroy)},\n\t{NAME(\"dss\"),\t\tCTL(arena_i_dss)},\n\t/*\n\t * Undocumented for now, since we anticipate an arena API in flux after\n\t * we cut the last 5-series release.\n\t */\n\t{NAME(\"oversize_threshold\"), CTL(arena_i_oversize_threshold)},\n\t{NAME(\"dirty_decay_ms\"), CTL(arena_i_dirty_decay_ms)},\n\t{NAME(\"muzzy_decay_ms\"), CTL(arena_i_muzzy_decay_ms)},\n\t{NAME(\"extent_hooks\"),\tCTL(arena_i_extent_hooks)},\n\t{NAME(\"retain_grow_limit\"),\tCTL(arena_i_retain_grow_limit)}\n};\nstatic const ctl_named_node_t super_arena_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, arena_i)}\n};\n\nstatic const ctl_indexed_node_t arena_node[] = {\n\t{INDEX(arena_i)}\n};\n\nstatic const ctl_named_node_t arenas_bin_i_node[] = 
{\n\t{NAME(\"size\"),\t\tCTL(arenas_bin_i_size)},\n\t{NAME(\"nregs\"),\t\tCTL(arenas_bin_i_nregs)},\n\t{NAME(\"slab_size\"),\tCTL(arenas_bin_i_slab_size)},\n\t{NAME(\"nshards\"),\tCTL(arenas_bin_i_nshards)}\n};\nstatic const ctl_named_node_t super_arenas_bin_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, arenas_bin_i)}\n};\n\nstatic const ctl_indexed_node_t arenas_bin_node[] = {\n\t{INDEX(arenas_bin_i)}\n};\n\nstatic const ctl_named_node_t arenas_lextent_i_node[] = {\n\t{NAME(\"size\"),\t\tCTL(arenas_lextent_i_size)}\n};\nstatic const ctl_named_node_t super_arenas_lextent_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, arenas_lextent_i)}\n};\n\nstatic const ctl_indexed_node_t arenas_lextent_node[] = {\n\t{INDEX(arenas_lextent_i)}\n};\n\nstatic const ctl_named_node_t arenas_node[] = {\n\t{NAME(\"narenas\"),\tCTL(arenas_narenas)},\n\t{NAME(\"dirty_decay_ms\"), CTL(arenas_dirty_decay_ms)},\n\t{NAME(\"muzzy_decay_ms\"), CTL(arenas_muzzy_decay_ms)},\n\t{NAME(\"quantum\"),\tCTL(arenas_quantum)},\n\t{NAME(\"page\"),\t\tCTL(arenas_page)},\n\t{NAME(\"tcache_max\"),\tCTL(arenas_tcache_max)},\n\t{NAME(\"nbins\"),\t\tCTL(arenas_nbins)},\n\t{NAME(\"nhbins\"),\tCTL(arenas_nhbins)},\n\t{NAME(\"bin\"),\t\tCHILD(indexed, arenas_bin)},\n\t{NAME(\"nlextents\"),\tCTL(arenas_nlextents)},\n\t{NAME(\"lextent\"),\tCHILD(indexed, arenas_lextent)},\n\t{NAME(\"create\"),\tCTL(arenas_create)},\n\t{NAME(\"lookup\"),\tCTL(arenas_lookup)}\n};\n\nstatic const ctl_named_node_t prof_stats_bins_i_node[] = {\n\t{NAME(\"live\"),\t\tCTL(prof_stats_bins_i_live)},\n\t{NAME(\"accum\"),\t\tCTL(prof_stats_bins_i_accum)}\n};\n\nstatic const ctl_named_node_t super_prof_stats_bins_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, prof_stats_bins_i)}\n};\n\nstatic const ctl_indexed_node_t prof_stats_bins_node[] = {\n\t{INDEX(prof_stats_bins_i)}\n};\n\nstatic const ctl_named_node_t prof_stats_lextents_i_node[] = 
{\n\t{NAME(\"live\"),\t\tCTL(prof_stats_lextents_i_live)},\n\t{NAME(\"accum\"),\t\tCTL(prof_stats_lextents_i_accum)}\n};\n\nstatic const ctl_named_node_t super_prof_stats_lextents_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, prof_stats_lextents_i)}\n};\n\nstatic const ctl_indexed_node_t prof_stats_lextents_node[] = {\n\t{INDEX(prof_stats_lextents_i)}\n};\n\nstatic const ctl_named_node_t\tprof_stats_node[] = {\n\t{NAME(\"bins\"),\t\tCHILD(indexed, prof_stats_bins)},\n\t{NAME(\"lextents\"),\tCHILD(indexed, prof_stats_lextents)},\n};\n\nstatic const ctl_named_node_t\tprof_node[] = {\n\t{NAME(\"thread_active_init\"), CTL(prof_thread_active_init)},\n\t{NAME(\"active\"),\tCTL(prof_active)},\n\t{NAME(\"dump\"),\t\tCTL(prof_dump)},\n\t{NAME(\"gdump\"),\t\tCTL(prof_gdump)},\n\t{NAME(\"prefix\"),\tCTL(prof_prefix)},\n\t{NAME(\"reset\"),\t\tCTL(prof_reset)},\n\t{NAME(\"interval\"),\tCTL(prof_interval)},\n\t{NAME(\"lg_sample\"),\tCTL(lg_prof_sample)},\n\t{NAME(\"log_start\"),\tCTL(prof_log_start)},\n\t{NAME(\"log_stop\"),\tCTL(prof_log_stop)},\n\t{NAME(\"stats\"),\t\tCHILD(named, prof_stats)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_small_node[] = {\n\t{NAME(\"allocated\"),\tCTL(stats_arenas_i_small_allocated)},\n\t{NAME(\"nmalloc\"),\tCTL(stats_arenas_i_small_nmalloc)},\n\t{NAME(\"ndalloc\"),\tCTL(stats_arenas_i_small_ndalloc)},\n\t{NAME(\"nrequests\"),\tCTL(stats_arenas_i_small_nrequests)},\n\t{NAME(\"nfills\"),\tCTL(stats_arenas_i_small_nfills)},\n\t{NAME(\"nflushes\"),\tCTL(stats_arenas_i_small_nflushes)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_large_node[] = {\n\t{NAME(\"allocated\"),\tCTL(stats_arenas_i_large_allocated)},\n\t{NAME(\"nmalloc\"),\tCTL(stats_arenas_i_large_nmalloc)},\n\t{NAME(\"ndalloc\"),\tCTL(stats_arenas_i_large_ndalloc)},\n\t{NAME(\"nrequests\"),\tCTL(stats_arenas_i_large_nrequests)},\n\t{NAME(\"nfills\"),\tCTL(stats_arenas_i_large_nfills)},\n\t{NAME(\"nflushes\"),\tCTL(stats_arenas_i_large_nflushes)}\n};\n\n#define 
MUTEX_PROF_DATA_NODE(prefix)\t\t\t\t\t\\\nstatic const ctl_named_node_t stats_##prefix##_node[] = {\t\t\\\n\t{NAME(\"num_ops\"),\t\t\t\t\t\t\\\n\t CTL(stats_##prefix##_num_ops)},\t\t\t\t\\\n\t{NAME(\"num_wait\"),\t\t\t\t\t\t\\\n\t CTL(stats_##prefix##_num_wait)},\t\t\t\t\\\n\t{NAME(\"num_spin_acq\"),\t\t\t\t\t\t\\\n\t CTL(stats_##prefix##_num_spin_acq)},\t\t\t\t\\\n\t{NAME(\"num_owner_switch\"),\t\t\t\t\t\\\n\t CTL(stats_##prefix##_num_owner_switch)},\t\t\t\\\n\t{NAME(\"total_wait_time\"),\t\t\t\t\t\\\n\t CTL(stats_##prefix##_total_wait_time)},\t\t\t\\\n\t{NAME(\"max_wait_time\"),\t\t\t\t\t\t\\\n\t CTL(stats_##prefix##_max_wait_time)},\t\t\t\t\\\n\t{NAME(\"max_num_thds\"),\t\t\t\t\t\t\\\n\t CTL(stats_##prefix##_max_num_thds)}\t\t\t\t\\\n\t/* Note that # of current waiting thread not provided. */\t\\\n};\n\nMUTEX_PROF_DATA_NODE(arenas_i_bins_j_mutex)\n\nstatic const ctl_named_node_t stats_arenas_i_bins_j_node[] = {\n\t{NAME(\"nmalloc\"),\tCTL(stats_arenas_i_bins_j_nmalloc)},\n\t{NAME(\"ndalloc\"),\tCTL(stats_arenas_i_bins_j_ndalloc)},\n\t{NAME(\"nrequests\"),\tCTL(stats_arenas_i_bins_j_nrequests)},\n\t{NAME(\"curregs\"),\tCTL(stats_arenas_i_bins_j_curregs)},\n\t{NAME(\"nfills\"),\tCTL(stats_arenas_i_bins_j_nfills)},\n\t{NAME(\"nflushes\"),\tCTL(stats_arenas_i_bins_j_nflushes)},\n\t{NAME(\"nslabs\"),\tCTL(stats_arenas_i_bins_j_nslabs)},\n\t{NAME(\"nreslabs\"),\tCTL(stats_arenas_i_bins_j_nreslabs)},\n\t{NAME(\"curslabs\"),\tCTL(stats_arenas_i_bins_j_curslabs)},\n\t{NAME(\"nonfull_slabs\"),\tCTL(stats_arenas_i_bins_j_nonfull_slabs)},\n\t{NAME(\"mutex\"),\t\tCHILD(named, stats_arenas_i_bins_j_mutex)}\n};\n\nstatic const ctl_named_node_t super_stats_arenas_i_bins_j_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, stats_arenas_i_bins_j)}\n};\n\nstatic const ctl_indexed_node_t stats_arenas_i_bins_node[] = {\n\t{INDEX(stats_arenas_i_bins_j)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_lextents_j_node[] = 
{\n\t{NAME(\"nmalloc\"),\tCTL(stats_arenas_i_lextents_j_nmalloc)},\n\t{NAME(\"ndalloc\"),\tCTL(stats_arenas_i_lextents_j_ndalloc)},\n\t{NAME(\"nrequests\"),\tCTL(stats_arenas_i_lextents_j_nrequests)},\n\t{NAME(\"curlextents\"),\tCTL(stats_arenas_i_lextents_j_curlextents)}\n};\nstatic const ctl_named_node_t super_stats_arenas_i_lextents_j_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, stats_arenas_i_lextents_j)}\n};\n\nstatic const ctl_indexed_node_t stats_arenas_i_lextents_node[] = {\n\t{INDEX(stats_arenas_i_lextents_j)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_extents_j_node[] = {\n\t{NAME(\"ndirty\"),\tCTL(stats_arenas_i_extents_j_ndirty)},\n\t{NAME(\"nmuzzy\"),\tCTL(stats_arenas_i_extents_j_nmuzzy)},\n\t{NAME(\"nretained\"),\tCTL(stats_arenas_i_extents_j_nretained)},\n\t{NAME(\"dirty_bytes\"),\tCTL(stats_arenas_i_extents_j_dirty_bytes)},\n\t{NAME(\"muzzy_bytes\"),\tCTL(stats_arenas_i_extents_j_muzzy_bytes)},\n\t{NAME(\"retained_bytes\"), CTL(stats_arenas_i_extents_j_retained_bytes)}\n};\n\nstatic const ctl_named_node_t super_stats_arenas_i_extents_j_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, stats_arenas_i_extents_j)}\n};\n\nstatic const ctl_indexed_node_t stats_arenas_i_extents_node[] = {\n\t{INDEX(stats_arenas_i_extents_j)}\n};\n\n#define OP(mtx)  MUTEX_PROF_DATA_NODE(arenas_i_mutexes_##mtx)\nMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n\nstatic const ctl_named_node_t stats_arenas_i_mutexes_node[] = {\n#define OP(mtx) {NAME(#mtx), CHILD(named, stats_arenas_i_mutexes_##mtx)},\nMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n};\n\nstatic const ctl_named_node_t stats_arenas_i_hpa_shard_full_slabs_node[] = 
{\n\t{NAME(\"npageslabs_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_npageslabs_nonhuge)},\n\t{NAME(\"npageslabs_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_npageslabs_huge)},\n\t{NAME(\"nactive_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_nactive_nonhuge)},\n\t{NAME(\"nactive_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_nactive_huge)},\n\t{NAME(\"ndirty_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_ndirty_nonhuge)},\n\t{NAME(\"ndirty_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_full_slabs_ndirty_huge)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_hpa_shard_empty_slabs_node[] = {\n\t{NAME(\"npageslabs_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_npageslabs_nonhuge)},\n\t{NAME(\"npageslabs_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_npageslabs_huge)},\n\t{NAME(\"nactive_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_nactive_nonhuge)},\n\t{NAME(\"nactive_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_nactive_huge)},\n\t{NAME(\"ndirty_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_ndirty_nonhuge)},\n\t{NAME(\"ndirty_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_empty_slabs_ndirty_huge)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_hpa_shard_nonfull_slabs_j_node[] = {\n\t{NAME(\"npageslabs_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_nonhuge)},\n\t{NAME(\"npageslabs_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_huge)},\n\t{NAME(\"nactive_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_nonhuge)},\n\t{NAME(\"nactive_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_huge)},\n\t{NAME(\"ndirty_nonhuge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_nonhuge)},\n\t{NAME(\"ndirty_huge\"),\n\t\tCTL(stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_huge)}\n};\n\nstatic const ctl_named_node_t super_stats_arenas_i_hpa_shard_nonfull_slabs_j_node[] = 
{\n\t{NAME(\"\"),\n\t\tCHILD(named, stats_arenas_i_hpa_shard_nonfull_slabs_j)}\n};\n\nstatic const ctl_indexed_node_t stats_arenas_i_hpa_shard_nonfull_slabs_node[] =\n{\n\t{INDEX(stats_arenas_i_hpa_shard_nonfull_slabs_j)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_hpa_shard_node[] = {\n\t{NAME(\"full_slabs\"),\tCHILD(named,\n\t    stats_arenas_i_hpa_shard_full_slabs)},\n\t{NAME(\"empty_slabs\"),\tCHILD(named,\n\t    stats_arenas_i_hpa_shard_empty_slabs)},\n\t{NAME(\"nonfull_slabs\"),\tCHILD(indexed,\n\t    stats_arenas_i_hpa_shard_nonfull_slabs)},\n\n\t{NAME(\"npurge_passes\"),\tCTL(stats_arenas_i_hpa_shard_npurge_passes)},\n\t{NAME(\"npurges\"),\tCTL(stats_arenas_i_hpa_shard_npurges)},\n\t{NAME(\"nhugifies\"),\tCTL(stats_arenas_i_hpa_shard_nhugifies)},\n\t{NAME(\"ndehugifies\"),\tCTL(stats_arenas_i_hpa_shard_ndehugifies)}\n};\n\nstatic const ctl_named_node_t stats_arenas_i_node[] = {\n\t{NAME(\"nthreads\"),\tCTL(stats_arenas_i_nthreads)},\n\t{NAME(\"uptime\"),\tCTL(stats_arenas_i_uptime)},\n\t{NAME(\"dss\"),\t\tCTL(stats_arenas_i_dss)},\n\t{NAME(\"dirty_decay_ms\"), CTL(stats_arenas_i_dirty_decay_ms)},\n\t{NAME(\"muzzy_decay_ms\"), CTL(stats_arenas_i_muzzy_decay_ms)},\n\t{NAME(\"pactive\"),\tCTL(stats_arenas_i_pactive)},\n\t{NAME(\"pdirty\"),\tCTL(stats_arenas_i_pdirty)},\n\t{NAME(\"pmuzzy\"),\tCTL(stats_arenas_i_pmuzzy)},\n\t{NAME(\"mapped\"),\tCTL(stats_arenas_i_mapped)},\n\t{NAME(\"retained\"),\tCTL(stats_arenas_i_retained)},\n\t{NAME(\"extent_avail\"),\tCTL(stats_arenas_i_extent_avail)},\n\t{NAME(\"dirty_npurge\"),\tCTL(stats_arenas_i_dirty_npurge)},\n\t{NAME(\"dirty_nmadvise\"), CTL(stats_arenas_i_dirty_nmadvise)},\n\t{NAME(\"dirty_purged\"),\tCTL(stats_arenas_i_dirty_purged)},\n\t{NAME(\"muzzy_npurge\"),\tCTL(stats_arenas_i_muzzy_npurge)},\n\t{NAME(\"muzzy_nmadvise\"), 
CTL(stats_arenas_i_muzzy_nmadvise)},\n\t{NAME(\"muzzy_purged\"),\tCTL(stats_arenas_i_muzzy_purged)},\n\t{NAME(\"base\"),\t\tCTL(stats_arenas_i_base)},\n\t{NAME(\"internal\"),\tCTL(stats_arenas_i_internal)},\n\t{NAME(\"metadata_thp\"),\tCTL(stats_arenas_i_metadata_thp)},\n\t{NAME(\"tcache_bytes\"),\tCTL(stats_arenas_i_tcache_bytes)},\n\t{NAME(\"tcache_stashed_bytes\"),\n\t    CTL(stats_arenas_i_tcache_stashed_bytes)},\n\t{NAME(\"resident\"),\tCTL(stats_arenas_i_resident)},\n\t{NAME(\"abandoned_vm\"),\tCTL(stats_arenas_i_abandoned_vm)},\n\t{NAME(\"hpa_sec_bytes\"),\tCTL(stats_arenas_i_hpa_sec_bytes)},\n\t{NAME(\"small\"),\t\tCHILD(named, stats_arenas_i_small)},\n\t{NAME(\"large\"),\t\tCHILD(named, stats_arenas_i_large)},\n\t{NAME(\"bins\"),\t\tCHILD(indexed, stats_arenas_i_bins)},\n\t{NAME(\"lextents\"),\tCHILD(indexed, stats_arenas_i_lextents)},\n\t{NAME(\"extents\"),\tCHILD(indexed, stats_arenas_i_extents)},\n\t{NAME(\"mutexes\"),\tCHILD(named, stats_arenas_i_mutexes)},\n\t{NAME(\"hpa_shard\"),\tCHILD(named, stats_arenas_i_hpa_shard)}\n};\nstatic const ctl_named_node_t super_stats_arenas_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, stats_arenas_i)}\n};\n\nstatic const ctl_indexed_node_t stats_arenas_node[] = {\n\t{INDEX(stats_arenas_i)}\n};\n\nstatic const ctl_named_node_t stats_background_thread_node[] = {\n\t{NAME(\"num_threads\"),\tCTL(stats_background_thread_num_threads)},\n\t{NAME(\"num_runs\"),\tCTL(stats_background_thread_num_runs)},\n\t{NAME(\"run_interval\"),\tCTL(stats_background_thread_run_interval)}\n};\n\n#define OP(mtx) MUTEX_PROF_DATA_NODE(mutexes_##mtx)\nMUTEX_PROF_GLOBAL_MUTEXES\n#undef OP\n\nstatic const ctl_named_node_t stats_mutexes_node[] = {\n#define OP(mtx) {NAME(#mtx), CHILD(named, stats_mutexes_##mtx)},\nMUTEX_PROF_GLOBAL_MUTEXES\n#undef OP\n\t{NAME(\"reset\"),\t\tCTL(stats_mutexes_reset)}\n};\n#undef MUTEX_PROF_DATA_NODE\n\nstatic const ctl_named_node_t stats_node[] = 
{\n\t{NAME(\"allocated\"),\tCTL(stats_allocated)},\n\t{NAME(\"active\"),\tCTL(stats_active)},\n\t{NAME(\"metadata\"),\tCTL(stats_metadata)},\n\t{NAME(\"metadata_thp\"),\tCTL(stats_metadata_thp)},\n\t{NAME(\"resident\"),\tCTL(stats_resident)},\n\t{NAME(\"mapped\"),\tCTL(stats_mapped)},\n\t{NAME(\"retained\"),\tCTL(stats_retained)},\n\t{NAME(\"background_thread\"),\n\t CHILD(named, stats_background_thread)},\n\t{NAME(\"mutexes\"),\tCHILD(named, stats_mutexes)},\n\t{NAME(\"arenas\"),\tCHILD(indexed, stats_arenas)},\n\t{NAME(\"zero_reallocs\"),\tCTL(stats_zero_reallocs)},\n};\n\nstatic const ctl_named_node_t experimental_hooks_node[] = {\n\t{NAME(\"install\"),\tCTL(experimental_hooks_install)},\n\t{NAME(\"remove\"),\tCTL(experimental_hooks_remove)},\n\t{NAME(\"prof_backtrace\"),\tCTL(experimental_hooks_prof_backtrace)},\n\t{NAME(\"prof_dump\"),\tCTL(experimental_hooks_prof_dump)},\n\t{NAME(\"safety_check_abort\"),\tCTL(experimental_hooks_safety_check_abort)},\n};\n\nstatic const ctl_named_node_t experimental_thread_node[] = {\n\t{NAME(\"activity_callback\"),\n\t\tCTL(experimental_thread_activity_callback)}\n};\n\nstatic const ctl_named_node_t experimental_utilization_node[] = {\n\t{NAME(\"query\"),\t\tCTL(experimental_utilization_query)},\n\t{NAME(\"batch_query\"),\tCTL(experimental_utilization_batch_query)}\n};\n\nstatic const ctl_named_node_t experimental_arenas_i_node[] = {\n\t{NAME(\"pactivep\"),\tCTL(experimental_arenas_i_pactivep)}\n};\nstatic const ctl_named_node_t super_experimental_arenas_i_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, experimental_arenas_i)}\n};\n\nstatic const ctl_indexed_node_t experimental_arenas_node[] = {\n\t{INDEX(experimental_arenas_i)}\n};\n\nstatic const ctl_named_node_t experimental_prof_recent_node[] = {\n\t{NAME(\"alloc_max\"),\tCTL(experimental_prof_recent_alloc_max)},\n\t{NAME(\"alloc_dump\"),\tCTL(experimental_prof_recent_alloc_dump)},\n};\n\nstatic const ctl_named_node_t experimental_node[] = 
{\n\t{NAME(\"hooks\"),\t\tCHILD(named, experimental_hooks)},\n\t{NAME(\"utilization\"),\tCHILD(named, experimental_utilization)},\n\t{NAME(\"arenas\"),\tCHILD(indexed, experimental_arenas)},\n\t{NAME(\"arenas_create_ext\"),\tCTL(experimental_arenas_create_ext)},\n\t{NAME(\"prof_recent\"),\tCHILD(named, experimental_prof_recent)},\n\t{NAME(\"batch_alloc\"),\tCTL(experimental_batch_alloc)},\n\t{NAME(\"thread\"),\tCHILD(named, experimental_thread)}\n};\n\nstatic const ctl_named_node_t\troot_node[] = {\n\t{NAME(\"version\"),\tCTL(version)},\n\t{NAME(\"epoch\"),\t\tCTL(epoch)},\n\t{NAME(\"background_thread\"),\tCTL(background_thread)},\n\t{NAME(\"max_background_threads\"),\tCTL(max_background_threads)},\n\t{NAME(\"thread\"),\tCHILD(named, thread)},\n\t{NAME(\"config\"),\tCHILD(named, config)},\n\t{NAME(\"opt\"),\t\tCHILD(named, opt)},\n\t{NAME(\"tcache\"),\tCHILD(named, tcache)},\n\t{NAME(\"arena\"),\t\tCHILD(indexed, arena)},\n\t{NAME(\"arenas\"),\tCHILD(named, arenas)},\n\t{NAME(\"prof\"),\t\tCHILD(named, prof)},\n\t{NAME(\"stats\"),\t\tCHILD(named, stats)},\n\t{NAME(\"experimental\"),\tCHILD(named, experimental)}\n};\nstatic const ctl_named_node_t super_root_node[] = {\n\t{NAME(\"\"),\t\tCHILD(named, root)}\n};\n\n#undef NAME\n#undef CHILD\n#undef CTL\n#undef INDEX\n\n/******************************************************************************/\n\n/*\n * Sets *dst + *src non-atomically.  
This is safe, since everything is\n * synchronized by the ctl mutex.\n */\nstatic void\nctl_accum_locked_u64(locked_u64_t *dst, locked_u64_t *src) {\n\tlocked_inc_u64_unsynchronized(dst,\n\t    locked_read_u64_unsynchronized(src));\n}\n\nstatic void\nctl_accum_atomic_zu(atomic_zu_t *dst, atomic_zu_t *src) {\n\tsize_t cur_dst = atomic_load_zu(dst, ATOMIC_RELAXED);\n\tsize_t cur_src = atomic_load_zu(src, ATOMIC_RELAXED);\n\tatomic_store_zu(dst, cur_dst + cur_src, ATOMIC_RELAXED);\n}\n\n/******************************************************************************/\n\nstatic unsigned\narenas_i2a_impl(size_t i, bool compat, bool validate) {\n\tunsigned a;\n\n\tswitch (i) {\n\tcase MALLCTL_ARENAS_ALL:\n\t\ta = 0;\n\t\tbreak;\n\tcase MALLCTL_ARENAS_DESTROYED:\n\t\ta = 1;\n\t\tbreak;\n\tdefault:\n\t\tif (compat && i == ctl_arenas->narenas) {\n\t\t\t/*\n\t\t\t * Provide deprecated backward compatibility for\n\t\t\t * accessing the merged stats at index narenas rather\n\t\t\t * than via MALLCTL_ARENAS_ALL.  
This is scheduled for\n\t\t\t * removal in 6.0.0.\n\t\t\t */\n\t\t\ta = 0;\n\t\t} else if (validate && i >= ctl_arenas->narenas) {\n\t\t\ta = UINT_MAX;\n\t\t} else {\n\t\t\t/*\n\t\t\t * This function should never be called for an index\n\t\t\t * more than one past the range of indices that have\n\t\t\t * initialized ctl data.\n\t\t\t */\n\t\t\tassert(i < ctl_arenas->narenas || (!validate && i ==\n\t\t\t    ctl_arenas->narenas));\n\t\t\ta = (unsigned)i + 2;\n\t\t}\n\t\tbreak;\n\t}\n\n\treturn a;\n}\n\nstatic unsigned\narenas_i2a(size_t i) {\n\treturn arenas_i2a_impl(i, true, false);\n}\n\nstatic ctl_arena_t *\narenas_i_impl(tsd_t *tsd, size_t i, bool compat, bool init) {\n\tctl_arena_t *ret;\n\n\tassert(!compat || !init);\n\n\tret = ctl_arenas->arenas[arenas_i2a_impl(i, compat, false)];\n\tif (init && ret == NULL) {\n\t\tif (config_stats) {\n\t\t\tstruct container_s {\n\t\t\t\tctl_arena_t\t\tctl_arena;\n\t\t\t\tctl_arena_stats_t\tastats;\n\t\t\t};\n\t\t\tstruct container_s *cont =\n\t\t\t    (struct container_s *)base_alloc(tsd_tsdn(tsd),\n\t\t\t    b0get(), sizeof(struct container_s), QUANTUM);\n\t\t\tif (cont == NULL) {\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tret = &cont->ctl_arena;\n\t\t\tret->astats = &cont->astats;\n\t\t} else {\n\t\t\tret = (ctl_arena_t *)base_alloc(tsd_tsdn(tsd), b0get(),\n\t\t\t    sizeof(ctl_arena_t), QUANTUM);\n\t\t\tif (ret == NULL) {\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t\tret->arena_ind = (unsigned)i;\n\t\tctl_arenas->arenas[arenas_i2a_impl(i, compat, false)] = ret;\n\t}\n\n\tassert(ret == NULL || arenas_i2a(ret->arena_ind) == arenas_i2a(i));\n\treturn ret;\n}\n\nstatic ctl_arena_t *\narenas_i(size_t i) {\n\tctl_arena_t *ret = arenas_i_impl(tsd_fetch(), i, true, false);\n\tassert(ret != NULL);\n\treturn ret;\n}\n\nstatic void\nctl_arena_clear(ctl_arena_t *ctl_arena) {\n\tctl_arena->nthreads = 0;\n\tctl_arena->dss = dss_prec_names[dss_prec_limit];\n\tctl_arena->dirty_decay_ms = -1;\n\tctl_arena->muzzy_decay_ms = 
-1;\n\tctl_arena->pactive = 0;\n\tctl_arena->pdirty = 0;\n\tctl_arena->pmuzzy = 0;\n\tif (config_stats) {\n\t\tmemset(&ctl_arena->astats->astats, 0, sizeof(arena_stats_t));\n\t\tctl_arena->astats->allocated_small = 0;\n\t\tctl_arena->astats->nmalloc_small = 0;\n\t\tctl_arena->astats->ndalloc_small = 0;\n\t\tctl_arena->astats->nrequests_small = 0;\n\t\tctl_arena->astats->nfills_small = 0;\n\t\tctl_arena->astats->nflushes_small = 0;\n\t\tmemset(ctl_arena->astats->bstats, 0, SC_NBINS *\n\t\t    sizeof(bin_stats_data_t));\n\t\tmemset(ctl_arena->astats->lstats, 0, (SC_NSIZES - SC_NBINS) *\n\t\t    sizeof(arena_stats_large_t));\n\t\tmemset(ctl_arena->astats->estats, 0, SC_NPSIZES *\n\t\t    sizeof(pac_estats_t));\n\t\tmemset(&ctl_arena->astats->hpastats, 0,\n\t\t    sizeof(hpa_shard_stats_t));\n\t\tmemset(&ctl_arena->astats->secstats, 0,\n\t\t    sizeof(sec_stats_t));\n\t}\n}\n\nstatic void\nctl_arena_stats_amerge(tsdn_t *tsdn, ctl_arena_t *ctl_arena, arena_t *arena) {\n\tunsigned i;\n\n\tif (config_stats) {\n\t\tarena_stats_merge(tsdn, arena, &ctl_arena->nthreads,\n\t\t    &ctl_arena->dss, &ctl_arena->dirty_decay_ms,\n\t\t    &ctl_arena->muzzy_decay_ms, &ctl_arena->pactive,\n\t\t    &ctl_arena->pdirty, &ctl_arena->pmuzzy,\n\t\t    &ctl_arena->astats->astats, ctl_arena->astats->bstats,\n\t\t    ctl_arena->astats->lstats, ctl_arena->astats->estats,\n\t\t    &ctl_arena->astats->hpastats, &ctl_arena->astats->secstats);\n\n\t\tfor (i = 0; i < SC_NBINS; i++) {\n\t\t\tbin_stats_t *bstats =\n\t\t\t    &ctl_arena->astats->bstats[i].stats_data;\n\t\t\tctl_arena->astats->allocated_small += bstats->curregs *\n\t\t\t    sz_index2size(i);\n\t\t\tctl_arena->astats->nmalloc_small += bstats->nmalloc;\n\t\t\tctl_arena->astats->ndalloc_small += bstats->ndalloc;\n\t\t\tctl_arena->astats->nrequests_small += bstats->nrequests;\n\t\t\tctl_arena->astats->nfills_small += bstats->nfills;\n\t\t\tctl_arena->astats->nflushes_small += bstats->nflushes;\n\t\t}\n\t} else 
{\n\t\tarena_basic_stats_merge(tsdn, arena, &ctl_arena->nthreads,\n\t\t    &ctl_arena->dss, &ctl_arena->dirty_decay_ms,\n\t\t    &ctl_arena->muzzy_decay_ms, &ctl_arena->pactive,\n\t\t    &ctl_arena->pdirty, &ctl_arena->pmuzzy);\n\t}\n}\n\nstatic void\nctl_arena_stats_sdmerge(ctl_arena_t *ctl_sdarena, ctl_arena_t *ctl_arena,\n    bool destroyed) {\n\tunsigned i;\n\n\tif (!destroyed) {\n\t\tctl_sdarena->nthreads += ctl_arena->nthreads;\n\t\tctl_sdarena->pactive += ctl_arena->pactive;\n\t\tctl_sdarena->pdirty += ctl_arena->pdirty;\n\t\tctl_sdarena->pmuzzy += ctl_arena->pmuzzy;\n\t} else {\n\t\tassert(ctl_arena->nthreads == 0);\n\t\tassert(ctl_arena->pactive == 0);\n\t\tassert(ctl_arena->pdirty == 0);\n\t\tassert(ctl_arena->pmuzzy == 0);\n\t}\n\n\tif (config_stats) {\n\t\tctl_arena_stats_t *sdstats = ctl_sdarena->astats;\n\t\tctl_arena_stats_t *astats = ctl_arena->astats;\n\n\t\tif (!destroyed) {\n\t\t\tsdstats->astats.mapped += astats->astats.mapped;\n\t\t\tsdstats->astats.pa_shard_stats.pac_stats.retained\n\t\t\t    += astats->astats.pa_shard_stats.pac_stats.retained;\n\t\t\tsdstats->astats.pa_shard_stats.edata_avail\n\t\t\t    += astats->astats.pa_shard_stats.edata_avail;\n\t\t}\n\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_dirty.npurge,\n\t\t    &astats->astats.pa_shard_stats.pac_stats.decay_dirty.npurge);\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_dirty.nmadvise,\n\t\t    &astats->astats.pa_shard_stats.pac_stats.decay_dirty.nmadvise);\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_dirty.purged,\n\t\t    &astats->astats.pa_shard_stats.pac_stats.decay_dirty.purged);\n\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_muzzy.npurge,\n\t\t    &astats->astats.pa_shard_stats.pac_stats.decay_muzzy.npurge);\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_muzzy.nmadvise,\n\t\t    
&astats->astats.pa_shard_stats.pac_stats.decay_muzzy.nmadvise);\n\t\tctl_accum_locked_u64(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.decay_muzzy.purged,\n\t\t    &astats->astats.pa_shard_stats.pac_stats.decay_muzzy.purged);\n\n#define OP(mtx) malloc_mutex_prof_merge(\t\t\t\t\\\n\t\t    &(sdstats->astats.mutex_prof_data[\t\t\t\\\n\t\t        arena_prof_mutex_##mtx]),\t\t\t\\\n\t\t    &(astats->astats.mutex_prof_data[\t\t\t\\\n\t\t        arena_prof_mutex_##mtx]));\nMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n\t\tif (!destroyed) {\n\t\t\tsdstats->astats.base += astats->astats.base;\n\t\t\tsdstats->astats.resident += astats->astats.resident;\n\t\t\tsdstats->astats.metadata_thp += astats->astats.metadata_thp;\n\t\t\tctl_accum_atomic_zu(&sdstats->astats.internal,\n\t\t\t    &astats->astats.internal);\n\t\t} else {\n\t\t\tassert(atomic_load_zu(\n\t\t\t    &astats->astats.internal, ATOMIC_RELAXED) == 0);\n\t\t}\n\n\t\tif (!destroyed) {\n\t\t\tsdstats->allocated_small += astats->allocated_small;\n\t\t} else {\n\t\t\tassert(astats->allocated_small == 0);\n\t\t}\n\t\tsdstats->nmalloc_small += astats->nmalloc_small;\n\t\tsdstats->ndalloc_small += astats->ndalloc_small;\n\t\tsdstats->nrequests_small += astats->nrequests_small;\n\t\tsdstats->nfills_small += astats->nfills_small;\n\t\tsdstats->nflushes_small += astats->nflushes_small;\n\n\t\tif (!destroyed) {\n\t\t\tsdstats->astats.allocated_large +=\n\t\t\t    astats->astats.allocated_large;\n\t\t} else {\n\t\t\tassert(astats->astats.allocated_large == 0);\n\t\t}\n\t\tsdstats->astats.nmalloc_large += astats->astats.nmalloc_large;\n\t\tsdstats->astats.ndalloc_large += astats->astats.ndalloc_large;\n\t\tsdstats->astats.nrequests_large\n\t\t    += astats->astats.nrequests_large;\n\t\tsdstats->astats.nflushes_large += astats->astats.nflushes_large;\n\t\tctl_accum_atomic_zu(\n\t\t    &sdstats->astats.pa_shard_stats.pac_stats.abandoned_vm,\n\t\t    
&astats->astats.pa_shard_stats.pac_stats.abandoned_vm);\n\n\t\tsdstats->astats.tcache_bytes += astats->astats.tcache_bytes;\n\t\tsdstats->astats.tcache_stashed_bytes +=\n\t\t    astats->astats.tcache_stashed_bytes;\n\n\t\tif (ctl_arena->arena_ind == 0) {\n\t\t\tsdstats->astats.uptime = astats->astats.uptime;\n\t\t}\n\n\t\t/* Merge bin stats. */\n\t\tfor (i = 0; i < SC_NBINS; i++) {\n\t\t\tbin_stats_t *bstats = &astats->bstats[i].stats_data;\n\t\t\tbin_stats_t *merged = &sdstats->bstats[i].stats_data;\n\t\t\tmerged->nmalloc += bstats->nmalloc;\n\t\t\tmerged->ndalloc += bstats->ndalloc;\n\t\t\tmerged->nrequests += bstats->nrequests;\n\t\t\tif (!destroyed) {\n\t\t\t\tmerged->curregs += bstats->curregs;\n\t\t\t} else {\n\t\t\t\tassert(bstats->curregs == 0);\n\t\t\t}\n\t\t\tmerged->nfills += bstats->nfills;\n\t\t\tmerged->nflushes += bstats->nflushes;\n\t\t\tmerged->nslabs += bstats->nslabs;\n\t\t\tmerged->reslabs += bstats->reslabs;\n\t\t\tif (!destroyed) {\n\t\t\t\tmerged->curslabs += bstats->curslabs;\n\t\t\t\tmerged->nonfull_slabs += bstats->nonfull_slabs;\n\t\t\t} else {\n\t\t\t\tassert(bstats->curslabs == 0);\n\t\t\t\tassert(bstats->nonfull_slabs == 0);\n\t\t\t}\n\t\t\tmalloc_mutex_prof_merge(&sdstats->bstats[i].mutex_data,\n\t\t\t    &astats->bstats[i].mutex_data);\n\t\t}\n\n\t\t/* Merge stats for large allocations. */\n\t\tfor (i = 0; i < SC_NSIZES - SC_NBINS; i++) {\n\t\t\tctl_accum_locked_u64(&sdstats->lstats[i].nmalloc,\n\t\t\t    &astats->lstats[i].nmalloc);\n\t\t\tctl_accum_locked_u64(&sdstats->lstats[i].ndalloc,\n\t\t\t    &astats->lstats[i].ndalloc);\n\t\t\tctl_accum_locked_u64(&sdstats->lstats[i].nrequests,\n\t\t\t    &astats->lstats[i].nrequests);\n\t\t\tif (!destroyed) {\n\t\t\t\tsdstats->lstats[i].curlextents +=\n\t\t\t\t    astats->lstats[i].curlextents;\n\t\t\t} else {\n\t\t\t\tassert(astats->lstats[i].curlextents == 0);\n\t\t\t}\n\t\t}\n\n\t\t/* Merge extents stats. 
*/\n\t\tfor (i = 0; i < SC_NPSIZES; i++) {\n\t\t\tsdstats->estats[i].ndirty += astats->estats[i].ndirty;\n\t\t\tsdstats->estats[i].nmuzzy += astats->estats[i].nmuzzy;\n\t\t\tsdstats->estats[i].nretained\n\t\t\t    += astats->estats[i].nretained;\n\t\t\tsdstats->estats[i].dirty_bytes\n\t\t\t    += astats->estats[i].dirty_bytes;\n\t\t\tsdstats->estats[i].muzzy_bytes\n\t\t\t    += astats->estats[i].muzzy_bytes;\n\t\t\tsdstats->estats[i].retained_bytes\n\t\t\t    += astats->estats[i].retained_bytes;\n\t\t}\n\n\t\t/* Merge HPA stats. */\n\t\thpa_shard_stats_accum(&sdstats->hpastats, &astats->hpastats);\n\t\tsec_stats_accum(&sdstats->secstats, &astats->secstats);\n\t}\n}\n\nstatic void\nctl_arena_refresh(tsdn_t *tsdn, arena_t *arena, ctl_arena_t *ctl_sdarena,\n    unsigned i, bool destroyed) {\n\tctl_arena_t *ctl_arena = arenas_i(i);\n\n\tctl_arena_clear(ctl_arena);\n\tctl_arena_stats_amerge(tsdn, ctl_arena, arena);\n\t/* Merge into sum stats as well. */\n\tctl_arena_stats_sdmerge(ctl_sdarena, ctl_arena, destroyed);\n}\n\nstatic unsigned\nctl_arena_init(tsd_t *tsd, const arena_config_t *config) {\n\tunsigned arena_ind;\n\tctl_arena_t *ctl_arena;\n\n\tif ((ctl_arena = ql_last(&ctl_arenas->destroyed, destroyed_link)) !=\n\t    NULL) {\n\t\tql_remove(&ctl_arenas->destroyed, ctl_arena, destroyed_link);\n\t\tarena_ind = ctl_arena->arena_ind;\n\t} else {\n\t\tarena_ind = ctl_arenas->narenas;\n\t}\n\n\t/* Trigger stats allocation. */\n\tif (arenas_i_impl(tsd, arena_ind, false, true) == NULL) {\n\t\treturn UINT_MAX;\n\t}\n\n\t/* Initialize new arena. 
*/\n\tif (arena_init(tsd_tsdn(tsd), arena_ind, config) == NULL) {\n\t\treturn UINT_MAX;\n\t}\n\n\tif (arena_ind == ctl_arenas->narenas) {\n\t\tctl_arenas->narenas++;\n\t}\n\n\treturn arena_ind;\n}\n\nstatic void\nctl_background_thread_stats_read(tsdn_t *tsdn) {\n\tbackground_thread_stats_t *stats = &ctl_stats->background_thread;\n\tif (!have_background_thread ||\n\t    background_thread_stats_read(tsdn, stats)) {\n\t\tmemset(stats, 0, sizeof(background_thread_stats_t));\n\t\tnstime_init_zero(&stats->run_interval);\n\t}\n\tmalloc_mutex_prof_copy(\n\t    &ctl_stats->mutex_prof_data[global_prof_mutex_max_per_bg_thd],\n\t    &stats->max_counter_per_bg_thd);\n}\n\nstatic void\nctl_refresh(tsdn_t *tsdn) {\n\tunsigned i;\n\tctl_arena_t *ctl_sarena = arenas_i(MALLCTL_ARENAS_ALL);\n\tVARIABLE_ARRAY(arena_t *, tarenas, ctl_arenas->narenas);\n\n\t/*\n\t * Clear sum stats, since they will be merged into by\n\t * ctl_arena_refresh().\n\t */\n\tctl_arena_clear(ctl_sarena);\n\n\tfor (i = 0; i < ctl_arenas->narenas; i++) {\n\t\ttarenas[i] = arena_get(tsdn, i, false);\n\t}\n\n\tfor (i = 0; i < ctl_arenas->narenas; i++) {\n\t\tctl_arena_t *ctl_arena = arenas_i(i);\n\t\tbool initialized = (tarenas[i] != NULL);\n\n\t\tctl_arena->initialized = initialized;\n\t\tif (initialized) {\n\t\t\tctl_arena_refresh(tsdn, tarenas[i], ctl_sarena, i,\n\t\t\t    false);\n\t\t}\n\t}\n\n\tif (config_stats) {\n\t\tctl_stats->allocated = ctl_sarena->astats->allocated_small +\n\t\t    ctl_sarena->astats->astats.allocated_large;\n\t\tctl_stats->active = (ctl_sarena->pactive << LG_PAGE);\n\t\tctl_stats->metadata = ctl_sarena->astats->astats.base +\n\t\t    atomic_load_zu(&ctl_sarena->astats->astats.internal,\n\t\t\tATOMIC_RELAXED);\n\t\tctl_stats->resident = ctl_sarena->astats->astats.resident;\n\t\tctl_stats->metadata_thp =\n\t\t    ctl_sarena->astats->astats.metadata_thp;\n\t\tctl_stats->mapped = ctl_sarena->astats->astats.mapped;\n\t\tctl_stats->retained = ctl_sarena->astats->astats\n\t\t    
.pa_shard_stats.pac_stats.retained;\n\n\t\tctl_background_thread_stats_read(tsdn);\n\n#define READ_GLOBAL_MUTEX_PROF_DATA(i, mtx)\t\t\t\t\\\n    malloc_mutex_lock(tsdn, &mtx);\t\t\t\t\t\\\n    malloc_mutex_prof_read(tsdn, &ctl_stats->mutex_prof_data[i], &mtx);\t\\\n    malloc_mutex_unlock(tsdn, &mtx);\n\n\t\tif (config_prof && opt_prof) {\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof, bt2gctx_mtx);\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof_thds_data, tdatas_mtx);\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof_dump, prof_dump_mtx);\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof_recent_alloc,\n\t\t\t    prof_recent_alloc_mtx);\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof_recent_dump,\n\t\t\t    prof_recent_dump_mtx);\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_prof_stats, prof_stats_mtx);\n\t\t}\n\t\tif (have_background_thread) {\n\t\t\tREAD_GLOBAL_MUTEX_PROF_DATA(\n\t\t\t    global_prof_mutex_background_thread,\n\t\t\t    background_thread_lock);\n\t\t} else {\n\t\t\tmemset(&ctl_stats->mutex_prof_data[\n\t\t\t    global_prof_mutex_background_thread], 0,\n\t\t\t    sizeof(mutex_prof_data_t));\n\t\t}\n\t\t/* We own ctl mutex already. 
*/\n\t\tmalloc_mutex_prof_read(tsdn,\n\t\t    &ctl_stats->mutex_prof_data[global_prof_mutex_ctl],\n\t\t    &ctl_mtx);\n#undef READ_GLOBAL_MUTEX_PROF_DATA\n\t}\n\tctl_arenas->epoch++;\n}\n\nstatic bool\nctl_init(tsd_t *tsd) {\n\tbool ret;\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\n\tmalloc_mutex_lock(tsdn, &ctl_mtx);\n\tif (!ctl_initialized) {\n\t\tctl_arena_t *ctl_sarena, *ctl_darena;\n\t\tunsigned i;\n\n\t\t/*\n\t\t * Allocate demand-zeroed space for pointers to the full\n\t\t * range of supported arena indices.\n\t\t */\n\t\tif (ctl_arenas == NULL) {\n\t\t\tctl_arenas = (ctl_arenas_t *)base_alloc(tsdn,\n\t\t\t    b0get(), sizeof(ctl_arenas_t), QUANTUM);\n\t\t\tif (ctl_arenas == NULL) {\n\t\t\t\tret = true;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t}\n\n\t\tif (config_stats && ctl_stats == NULL) {\n\t\t\tctl_stats = (ctl_stats_t *)base_alloc(tsdn, b0get(),\n\t\t\t    sizeof(ctl_stats_t), QUANTUM);\n\t\t\tif (ctl_stats == NULL) {\n\t\t\t\tret = true;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Allocate space for the current full range of arenas\n\t\t * here rather than doing it lazily elsewhere, in order\n\t\t * to limit when OOM-caused errors can occur.\n\t\t */\n\t\tif ((ctl_sarena = arenas_i_impl(tsd, MALLCTL_ARENAS_ALL, false,\n\t\t    true)) == NULL) {\n\t\t\tret = true;\n\t\t\tgoto label_return;\n\t\t}\n\t\tctl_sarena->initialized = true;\n\n\t\tif ((ctl_darena = arenas_i_impl(tsd, MALLCTL_ARENAS_DESTROYED,\n\t\t    false, true)) == NULL) {\n\t\t\tret = true;\n\t\t\tgoto label_return;\n\t\t}\n\t\tctl_arena_clear(ctl_darena);\n\t\t/*\n\t\t * Don't toggle ctl_darena to initialized until an arena is\n\t\t * actually destroyed, so that arena.<i>.initialized can be used\n\t\t * to query whether the stats are relevant.\n\t\t */\n\n\t\tctl_arenas->narenas = narenas_total_get();\n\t\tfor (i = 0; i < ctl_arenas->narenas; i++) {\n\t\t\tif (arenas_i_impl(tsd, i, false, true) == NULL) {\n\t\t\t\tret = true;\n\t\t\t\tgoto 
label_return;\n\t\t\t}\n\t\t}\n\n\t\tql_new(&ctl_arenas->destroyed);\n\t\tctl_refresh(tsdn);\n\n\t\tctl_initialized = true;\n\t}\n\n\tret = false;\nlabel_return:\n\tmalloc_mutex_unlock(tsdn, &ctl_mtx);\n\treturn ret;\n}\n\nstatic int\nctl_lookup(tsdn_t *tsdn, const ctl_named_node_t *starting_node,\n    const char *name, const ctl_named_node_t **ending_nodep, size_t *mibp,\n    size_t *depthp) {\n\tint ret;\n\tconst char *elm, *tdot, *dot;\n\tsize_t elen, i, j;\n\tconst ctl_named_node_t *node;\n\n\telm = name;\n\t/* Equivalent to strchrnul(). */\n\tdot = ((tdot = strchr(elm, '.')) != NULL) ? tdot : strchr(elm, '\\0');\n\telen = (size_t)((uintptr_t)dot - (uintptr_t)elm);\n\tif (elen == 0) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\tnode = starting_node;\n\tfor (i = 0; i < *depthp; i++) {\n\t\tassert(node);\n\t\tassert(node->nchildren > 0);\n\t\tif (ctl_named_node(node->children) != NULL) {\n\t\t\tconst ctl_named_node_t *pnode = node;\n\n\t\t\t/* Children are named. */\n\t\t\tfor (j = 0; j < node->nchildren; j++) {\n\t\t\t\tconst ctl_named_node_t *child =\n\t\t\t\t    ctl_named_children(node, j);\n\t\t\t\tif (strlen(child->name) == elen &&\n\t\t\t\t    strncmp(elm, child->name, elen) == 0) {\n\t\t\t\t\tnode = child;\n\t\t\t\t\tmibp[i] = j;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (node == pnode) {\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t} else {\n\t\t\tuintmax_t index;\n\t\t\tconst ctl_indexed_node_t *inode;\n\n\t\t\t/* Children are indexed. */\n\t\t\tindex = malloc_strtoumax(elm, NULL, 10);\n\t\t\tif (index == UINTMAX_MAX || index > SIZE_T_MAX) {\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\n\t\t\tinode = ctl_indexed_node(node->children);\n\t\t\tnode = inode->index(tsdn, mibp, *depthp, (size_t)index);\n\t\t\tif (node == NULL) {\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\n\t\t\tmibp[i] = (size_t)index;\n\t\t}\n\n\t\t/* Reached the end? 
*/\n\t\tif (node->ctl != NULL || *dot == '\\0') {\n\t\t\t/* Terminal node. */\n\t\t\tif (*dot != '\\0') {\n\t\t\t\t/*\n\t\t\t\t * The name contains more elements than are\n\t\t\t\t * in this path through the tree.\n\t\t\t\t */\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t\t/* Complete lookup successful. */\n\t\t\t*depthp = i + 1;\n\t\t\tbreak;\n\t\t}\n\n\t\t/* Update elm. */\n\t\telm = &dot[1];\n\t\tdot = ((tdot = strchr(elm, '.')) != NULL) ? tdot :\n\t\t    strchr(elm, '\\0');\n\t\telen = (size_t)((uintptr_t)dot - (uintptr_t)elm);\n\t}\n\tif (ending_nodep != NULL) {\n\t\t*ending_nodep = node;\n\t}\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nint\nctl_byname(tsd_t *tsd, const char *name, void *oldp, size_t *oldlenp,\n    void *newp, size_t newlen) {\n\tint ret;\n\tsize_t depth;\n\tsize_t mib[CTL_MAX_DEPTH];\n\tconst ctl_named_node_t *node;\n\n\tif (!ctl_initialized && ctl_init(tsd)) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\n\tdepth = CTL_MAX_DEPTH;\n\tret = ctl_lookup(tsd_tsdn(tsd), super_root_node, name, &node, mib,\n\t    &depth);\n\tif (ret != 0) {\n\t\tgoto label_return;\n\t}\n\n\tif (node != NULL && node->ctl) {\n\t\tret = node->ctl(tsd, mib, depth, oldp, oldlenp, newp, newlen);\n\t} else {\n\t\t/* The name refers to a partial path through the ctl tree. 
*/\n\t\tret = ENOENT;\n\t}\n\nlabel_return:\n\treturn(ret);\n}\n\nint\nctl_nametomib(tsd_t *tsd, const char *name, size_t *mibp, size_t *miblenp) {\n\tint ret;\n\n\tif (!ctl_initialized && ctl_init(tsd)) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\n\tret = ctl_lookup(tsd_tsdn(tsd), super_root_node, name, NULL, mibp,\n\t    miblenp);\nlabel_return:\n\treturn(ret);\n}\n\nstatic int\nctl_lookupbymib(tsdn_t *tsdn, const ctl_named_node_t **ending_nodep,\n    const size_t *mib, size_t miblen) {\n\tint ret;\n\n\tconst ctl_named_node_t *node = super_root_node;\n\tfor (size_t i = 0; i < miblen; i++) {\n\t\tassert(node);\n\t\tassert(node->nchildren > 0);\n\t\tif (ctl_named_node(node->children) != NULL) {\n\t\t\t/* Children are named. */\n\t\t\tif (node->nchildren <= mib[i]) {\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t\tnode = ctl_named_children(node, mib[i]);\n\t\t} else {\n\t\t\tconst ctl_indexed_node_t *inode;\n\n\t\t\t/* Indexed element. */\n\t\t\tinode = ctl_indexed_node(node->children);\n\t\t\tnode = inode->index(tsdn, mib, miblen, mib[i]);\n\t\t\tif (node == NULL) {\n\t\t\t\tret = ENOENT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t}\n\t}\n\tassert(ending_nodep != NULL);\n\t*ending_nodep = node;\n\tret = 0;\n\nlabel_return:\n\treturn(ret);\n}\n\nint\nctl_bymib(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,\n    size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tconst ctl_named_node_t *node;\n\n\tif (!ctl_initialized && ctl_init(tsd)) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\n\tret = ctl_lookupbymib(tsd_tsdn(tsd), &node, mib, miblen);\n\tif (ret != 0) {\n\t\tgoto label_return;\n\t}\n\n\t/* Call the ctl function. */\n\tif (node && node->ctl) {\n\t\tret = node->ctl(tsd, mib, miblen, oldp, oldlenp, newp, newlen);\n\t} else {\n\t\t/* Partial MIB. 
*/\n\t\tret = ENOENT;\n\t}\n\nlabel_return:\n\treturn(ret);\n}\n\nint\nctl_mibnametomib(tsd_t *tsd, size_t *mib, size_t miblen, const char *name,\n    size_t *miblenp) {\n\tint ret;\n\tconst ctl_named_node_t *node;\n\n\tif (!ctl_initialized && ctl_init(tsd)) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\n\tret = ctl_lookupbymib(tsd_tsdn(tsd), &node, mib, miblen);\n\tif (ret != 0) {\n\t\tgoto label_return;\n\t}\n\tif (node == NULL || node->ctl != NULL) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tassert(miblenp != NULL);\n\tassert(*miblenp >= miblen);\n\t*miblenp -= miblen;\n\tret = ctl_lookup(tsd_tsdn(tsd), node, name, NULL, mib + miblen,\n\t    miblenp);\n\t*miblenp += miblen;\nlabel_return:\n\treturn(ret);\n}\n\nint\nctl_bymibname(tsd_t *tsd, size_t *mib, size_t miblen, const char *name,\n    size_t *miblenp, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tconst ctl_named_node_t *node;\n\n\tif (!ctl_initialized && ctl_init(tsd)) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\n\tret = ctl_lookupbymib(tsd_tsdn(tsd), &node, mib, miblen);\n\tif (ret != 0) {\n\t\tgoto label_return;\n\t}\n\tif (node == NULL || node->ctl != NULL) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tassert(miblenp != NULL);\n\tassert(*miblenp >= miblen);\n\t*miblenp -= miblen;\n\t/*\n\t * The same node supplies the starting node and stores the ending node.\n\t */\n\tret = ctl_lookup(tsd_tsdn(tsd), node, name, &node, mib + miblen,\n\t    miblenp);\n\t*miblenp += miblen;\n\tif (ret != 0) {\n\t\tgoto label_return;\n\t}\n\n\tif (node != NULL && node->ctl) {\n\t\tret = node->ctl(tsd, mib, *miblenp, oldp, oldlenp, newp,\n\t\t    newlen);\n\t} else {\n\t\t/* The name refers to a partial path through the ctl tree. 
*/\n\t\tret = ENOENT;\n\t}\n\nlabel_return:\n\treturn(ret);\n}\n\nbool\nctl_boot(void) {\n\tif (malloc_mutex_init(&ctl_mtx, \"ctl\", WITNESS_RANK_CTL,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tctl_initialized = false;\n\n\treturn false;\n}\n\nvoid\nctl_prefork(tsdn_t *tsdn) {\n\tmalloc_mutex_prefork(tsdn, &ctl_mtx);\n}\n\nvoid\nctl_postfork_parent(tsdn_t *tsdn) {\n\tmalloc_mutex_postfork_parent(tsdn, &ctl_mtx);\n}\n\nvoid\nctl_postfork_child(tsdn_t *tsdn) {\n\tmalloc_mutex_postfork_child(tsdn, &ctl_mtx);\n}\n\nvoid\nctl_mtx_assert_held(tsdn_t *tsdn) {\n\tmalloc_mutex_assert_owner(tsdn, &ctl_mtx);\n}\n\n/******************************************************************************/\n/* *_ctl() functions. */\n\n#define READONLY()\tdo {\t\t\t\t\t\t\\\n\tif (newp != NULL || newlen != 0) {\t\t\t\t\\\n\t\tret = EPERM;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define WRITEONLY()\tdo {\t\t\t\t\t\t\\\n\tif (oldp != NULL || oldlenp != NULL) {\t\t\t\t\\\n\t\tret = EPERM;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Can read or write, but not both. */\n#define READ_XOR_WRITE()\tdo {\t\t\t\t\t\\\n\tif ((oldp != NULL && oldlenp != NULL) && (newp != NULL ||\t\\\n\t    newlen != 0)) {\t\t\t\t\t\t\\\n\t\tret = EPERM;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Can neither read nor write. */\n#define NEITHER_READ_NOR_WRITE()\tdo {\t\t\t\t\\\n\tif (oldp != NULL || oldlenp != NULL || newp != NULL ||\t\t\\\n\t    newlen != 0) {\t\t\t\t\t\t\\\n\t\tret = EPERM;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/* Verify that the space provided is enough. 
*/\n#define VERIFY_READ(t)\tdo {\t\t\t\t\t\t\\\n\tif (oldp == NULL || oldlenp == NULL || *oldlenp != sizeof(t)) {\t\\\n\t\t/* Don't dereference oldlenp if it was NULL. */\t\t\\\n\t\tif (oldlenp != NULL) {\t\t\t\t\t\\\n\t\t\t*oldlenp = 0;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tret = EINVAL;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define READ(v, t)\tdo {\t\t\t\t\t\t\\\n\tif (oldp != NULL && oldlenp != NULL) {\t\t\t\t\\\n\t\tif (*oldlenp != sizeof(t)) {\t\t\t\t\\\n\t\t\tsize_t\tcopylen = (sizeof(t) <= *oldlenp)\t\\\n\t\t\t    ? sizeof(t) : *oldlenp;\t\t\t\\\n\t\t\tmemcpy(oldp, (void *)&(v), copylen);\t\t\\\n\t\t\t*oldlenp = copylen;\t\t\t\t\\\n\t\t\tret = EINVAL;\t\t\t\t\t\\\n\t\t\tgoto label_return;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\t*(t *)oldp = (v);\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define WRITE(v, t)\tdo {\t\t\t\t\t\t\\\n\tif (newp != NULL) {\t\t\t\t\t\t\\\n\t\tif (newlen != sizeof(t)) {\t\t\t\t\\\n\t\t\tret = EINVAL;\t\t\t\t\t\\\n\t\t\tgoto label_return;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\t(v) = *(t *)newp;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define ASSURED_WRITE(v, t)\tdo {\t\t\t\t\t\\\n\tif (newp == NULL || newlen != sizeof(t)) {\t\t\t\\\n\t\tret = EINVAL;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t(v) = *(t *)newp;\t\t\t\t\t\t\\\n} while (0)\n\n#define MIB_UNSIGNED(v, i) do {\t\t\t\t\t\t\\\n\tif (mib[i] > UINT_MAX) {\t\t\t\t\t\\\n\t\tret = EFAULT;\t\t\t\t\t\t\\\n\t\tgoto label_return;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tv = (unsigned)mib[i];\t\t\t\t\t\t\\\n} while (0)\n\n/*\n * There's a lot of code duplication in the following macros due to limitations\n * in how nested cpp macros are expanded.\n */\n#define CTL_RO_CLGEN(c, l, n, v, t)\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,\t\\\n    size_t *oldlenp, void *newp, size_t newlen) {\t\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tif (!(c)) {\t\t\t\t\t\t\t\\\n\t\treturn 
ENOENT;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (l) {\t\t\t\t\t\t\t\\\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = (v);\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\tif (l) {\t\t\t\t\t\t\t\\\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n#define CTL_RO_CGEN(c, n, v, t)\t\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\t\t\t\\\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tif (!(c)) {\t\t\t\t\t\t\t\\\n\t\treturn ENOENT;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = (v);\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n#define CTL_RO_GEN(n, v, t)\t\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,\t\\\n    size_t *oldlenp, void *newp, size_t newlen) {\t\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = (v);\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n/*\n * ctl_mtx is not acquired, under the assumption that no pertinent data will\n * mutate during the call.\n */\n#define CTL_RO_NL_CGEN(c, n, v, t)\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t 
*tsd, const size_t *mib, size_t miblen,\t\t\t\\\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tif (!(c)) {\t\t\t\t\t\t\t\\\n\t\treturn ENOENT;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = (v);\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n#define CTL_RO_NL_GEN(n, v, t)\t\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\t\t\t\\\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = (v);\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n#define CTL_RO_CONFIG_GEN(n, t)\t\t\t\t\t\t\\\nstatic int\t\t\t\t\t\t\t\t\\\nn##_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\t\t\t\\\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\t\t\\\n\tint ret;\t\t\t\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tREADONLY();\t\t\t\t\t\t\t\\\n\toldval = n;\t\t\t\t\t\t\t\\\n\tREAD(oldval, t);\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tret = 0;\t\t\t\t\t\t\t\\\nlabel_return:\t\t\t\t\t\t\t\t\\\n\treturn ret;\t\t\t\t\t\t\t\\\n}\n\n/******************************************************************************/\n\nCTL_RO_NL_GEN(version, JEMALLOC_VERSION, const char *)\n\nstatic int\nepoch_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tUNUSED uint64_t newval;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\n\tWRITE(newval, uint64_t);\n\tif (newp != NULL) {\n\t\tctl_refresh(tsd_tsdn(tsd));\n\t}\n\tREAD(ctl_arenas->epoch, 
uint64_t);\n\n\tret = 0;\nlabel_return:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\n\treturn ret;\n}\n\nstatic int\nbackground_thread_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp,\n    void *newp, size_t newlen) {\n\tint ret;\n\tbool oldval;\n\n\tif (!have_background_thread) {\n\t\treturn ENOENT;\n\t}\n\tbackground_thread_ctl_init(tsd_tsdn(tsd));\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock);\n\tif (newp == NULL) {\n\t\toldval = background_thread_enabled();\n\t\tREAD(oldval, bool);\n\t} else {\n\t\tif (newlen != sizeof(bool)) {\n\t\t\tret = EINVAL;\n\t\t\tgoto label_return;\n\t\t}\n\t\toldval = background_thread_enabled();\n\t\tREAD(oldval, bool);\n\n\t\tbool newval = *(bool *)newp;\n\t\tif (newval == oldval) {\n\t\t\tret = 0;\n\t\t\tgoto label_return;\n\t\t}\n\n\t\tbackground_thread_enabled_set(tsd_tsdn(tsd), newval);\n\t\tif (newval) {\n\t\t\tif (background_threads_enable(tsd)) {\n\t\t\t\tret = EFAULT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t} else {\n\t\t\tif (background_threads_disable(tsd)) {\n\t\t\t\tret = EFAULT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t}\n\t}\n\tret = 0;\nlabel_return:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\n\n\treturn ret;\n}\n\nstatic int\nmax_background_threads_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp,\n    size_t newlen) {\n\tint ret;\n\tsize_t oldval;\n\n\tif (!have_background_thread) {\n\t\treturn ENOENT;\n\t}\n\tbackground_thread_ctl_init(tsd_tsdn(tsd));\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock);\n\tif (newp == NULL) {\n\t\toldval = max_background_threads;\n\t\tREAD(oldval, size_t);\n\t} else {\n\t\tif (newlen != sizeof(size_t)) {\n\t\t\tret = EINVAL;\n\t\t\tgoto label_return;\n\t\t}\n\t\toldval = 
max_background_threads;\n\t\tREAD(oldval, size_t);\n\n\t\tsize_t newval = *(size_t *)newp;\n\t\tif (newval == oldval) {\n\t\t\tret = 0;\n\t\t\tgoto label_return;\n\t\t}\n\t\tif (newval > opt_max_background_threads) {\n\t\t\tret = EINVAL;\n\t\t\tgoto label_return;\n\t\t}\n\n\t\tif (background_thread_enabled()) {\n\t\t\tbackground_thread_enabled_set(tsd_tsdn(tsd), false);\n\t\t\tif (background_threads_disable(tsd)) {\n\t\t\t\tret = EFAULT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t\tmax_background_threads = newval;\n\t\t\tbackground_thread_enabled_set(tsd_tsdn(tsd), true);\n\t\t\tif (background_threads_enable(tsd)) {\n\t\t\t\tret = EFAULT;\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t} else {\n\t\t\tmax_background_threads = newval;\n\t\t}\n\t}\n\tret = 0;\nlabel_return:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\n\n\treturn ret;\n}\n\n/******************************************************************************/\n\nCTL_RO_CONFIG_GEN(config_cache_oblivious, bool)\nCTL_RO_CONFIG_GEN(config_debug, bool)\nCTL_RO_CONFIG_GEN(config_fill, bool)\nCTL_RO_CONFIG_GEN(config_lazy_lock, bool)\nCTL_RO_CONFIG_GEN(config_malloc_conf, const char *)\nCTL_RO_CONFIG_GEN(config_opt_safety_checks, bool)\nCTL_RO_CONFIG_GEN(config_prof, bool)\nCTL_RO_CONFIG_GEN(config_prof_libgcc, bool)\nCTL_RO_CONFIG_GEN(config_prof_libunwind, bool)\nCTL_RO_CONFIG_GEN(config_stats, bool)\nCTL_RO_CONFIG_GEN(config_utrace, bool)\nCTL_RO_CONFIG_GEN(config_xmalloc, bool)\n\n/******************************************************************************/\n\nCTL_RO_NL_GEN(opt_abort, opt_abort, bool)\nCTL_RO_NL_GEN(opt_abort_conf, opt_abort_conf, bool)\nCTL_RO_NL_GEN(opt_cache_oblivious, opt_cache_oblivious, bool)\nCTL_RO_NL_GEN(opt_trust_madvise, opt_trust_madvise, bool)\nCTL_RO_NL_GEN(opt_confirm_conf, opt_confirm_conf, bool)\n\n/* HPA options. 
*/\nCTL_RO_NL_GEN(opt_hpa, opt_hpa, bool)\nCTL_RO_NL_GEN(opt_hpa_hugification_threshold,\n    opt_hpa_opts.hugification_threshold, size_t)\nCTL_RO_NL_GEN(opt_hpa_hugify_delay_ms, opt_hpa_opts.hugify_delay_ms, uint64_t)\nCTL_RO_NL_GEN(opt_hpa_min_purge_interval_ms, opt_hpa_opts.min_purge_interval_ms,\n    uint64_t)\n\n/*\n * This will have to change before we publicly document this option; fxp_t and\n * its representation are internal implementation details.\n */\nCTL_RO_NL_GEN(opt_hpa_dirty_mult, opt_hpa_opts.dirty_mult, fxp_t)\nCTL_RO_NL_GEN(opt_hpa_slab_max_alloc, opt_hpa_opts.slab_max_alloc, size_t)\n\n/* HPA SEC options */\nCTL_RO_NL_GEN(opt_hpa_sec_nshards, opt_hpa_sec_opts.nshards, size_t)\nCTL_RO_NL_GEN(opt_hpa_sec_max_alloc, opt_hpa_sec_opts.max_alloc, size_t)\nCTL_RO_NL_GEN(opt_hpa_sec_max_bytes, opt_hpa_sec_opts.max_bytes, size_t)\nCTL_RO_NL_GEN(opt_hpa_sec_bytes_after_flush, opt_hpa_sec_opts.bytes_after_flush,\n    size_t)\nCTL_RO_NL_GEN(opt_hpa_sec_batch_fill_extra, opt_hpa_sec_opts.batch_fill_extra,\n    size_t)\n\nCTL_RO_NL_GEN(opt_metadata_thp, metadata_thp_mode_names[opt_metadata_thp],\n    const char *)\nCTL_RO_NL_GEN(opt_retain, opt_retain, bool)\nCTL_RO_NL_GEN(opt_dss, opt_dss, const char *)\nCTL_RO_NL_GEN(opt_narenas, opt_narenas, unsigned)\nCTL_RO_NL_GEN(opt_percpu_arena, percpu_arena_mode_names[opt_percpu_arena],\n    const char *)\nCTL_RO_NL_GEN(opt_mutex_max_spin, opt_mutex_max_spin, int64_t)\nCTL_RO_NL_GEN(opt_oversize_threshold, opt_oversize_threshold, size_t)\nCTL_RO_NL_GEN(opt_background_thread, opt_background_thread, bool)\nCTL_RO_NL_GEN(opt_max_background_threads, opt_max_background_threads, size_t)\nCTL_RO_NL_GEN(opt_dirty_decay_ms, opt_dirty_decay_ms, ssize_t)\nCTL_RO_NL_GEN(opt_muzzy_decay_ms, opt_muzzy_decay_ms, ssize_t)\nCTL_RO_NL_GEN(opt_stats_print, opt_stats_print, bool)\nCTL_RO_NL_GEN(opt_stats_print_opts, opt_stats_print_opts, const char *)\nCTL_RO_NL_GEN(opt_stats_interval, opt_stats_interval, 
int64_t)
CTL_RO_NL_GEN(opt_stats_interval_opts, opt_stats_interval_opts, const char *)
CTL_RO_NL_CGEN(config_fill, opt_junk, opt_junk, const char *)
CTL_RO_NL_CGEN(config_fill, opt_zero, opt_zero, bool)
CTL_RO_NL_CGEN(config_utrace, opt_utrace, opt_utrace, bool)
CTL_RO_NL_CGEN(config_xmalloc, opt_xmalloc, opt_xmalloc, bool)
CTL_RO_NL_CGEN(config_enable_cxx, opt_experimental_infallible_new,
    opt_experimental_infallible_new, bool)
CTL_RO_NL_GEN(opt_tcache, opt_tcache, bool)
CTL_RO_NL_GEN(opt_tcache_max, opt_tcache_max, size_t)
CTL_RO_NL_GEN(opt_tcache_nslots_small_min, opt_tcache_nslots_small_min,
    unsigned)
CTL_RO_NL_GEN(opt_tcache_nslots_small_max, opt_tcache_nslots_small_max,
    unsigned)
CTL_RO_NL_GEN(opt_tcache_nslots_large, opt_tcache_nslots_large, unsigned)
CTL_RO_NL_GEN(opt_lg_tcache_nslots_mul, opt_lg_tcache_nslots_mul, ssize_t)
CTL_RO_NL_GEN(opt_tcache_gc_incr_bytes, opt_tcache_gc_incr_bytes, size_t)
CTL_RO_NL_GEN(opt_tcache_gc_delay_bytes, opt_tcache_gc_delay_bytes, size_t)
CTL_RO_NL_GEN(opt_lg_tcache_flush_small_div, opt_lg_tcache_flush_small_div,
    unsigned)
CTL_RO_NL_GEN(opt_lg_tcache_flush_large_div, opt_lg_tcache_flush_large_div,
    unsigned)
CTL_RO_NL_GEN(opt_thp, thp_mode_names[opt_thp], const char *)
CTL_RO_NL_GEN(opt_lg_extent_max_active_fit, opt_lg_extent_max_active_fit,
    size_t)
CTL_RO_NL_CGEN(config_prof, opt_prof, opt_prof, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_prefix, opt_prof_prefix, const char *)
CTL_RO_NL_CGEN(config_prof, opt_prof_active, opt_prof_active, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_thread_active_init,
    opt_prof_thread_active_init, bool)
CTL_RO_NL_CGEN(config_prof, opt_lg_prof_sample, opt_lg_prof_sample, size_t)
CTL_RO_NL_CGEN(config_prof, opt_prof_accum, opt_prof_accum, bool)
CTL_RO_NL_CGEN(config_prof, opt_lg_prof_interval, opt_lg_prof_interval, ssize_t)
CTL_RO_NL_CGEN(config_prof, opt_prof_gdump, opt_prof_gdump, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_final, opt_prof_final, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_leak, opt_prof_leak, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_leak_error, opt_prof_leak_error, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_recent_alloc_max,
    opt_prof_recent_alloc_max, ssize_t)
CTL_RO_NL_CGEN(config_prof, opt_prof_stats, opt_prof_stats, bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_sys_thread_name, opt_prof_sys_thread_name,
    bool)
CTL_RO_NL_CGEN(config_prof, opt_prof_time_res,
    prof_time_res_mode_names[opt_prof_time_res], const char *)
CTL_RO_NL_CGEN(config_uaf_detection, opt_lg_san_uaf_align,
    opt_lg_san_uaf_align, ssize_t)
CTL_RO_NL_GEN(opt_zero_realloc,
    zero_realloc_mode_names[opt_zero_realloc_action], const char *)

/******************************************************************************/

static int
thread_arena_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	arena_t *oldarena;
	unsigned newind, oldind;

	oldarena = arena_choose(tsd, NULL);
	if (oldarena == NULL) {
		return EAGAIN;
	}
	newind = oldind = arena_ind_get(oldarena);
	WRITE(newind, unsigned);
	READ(oldind, unsigned);

	if (newind != oldind) {
		arena_t *newarena;

		if (newind >= narenas_total_get()) {
			/* New arena index is out of range. */
			ret = EFAULT;
			goto label_return;
		}

		if (have_percpu_arena &&
		    PERCPU_ARENA_ENABLED(opt_percpu_arena)) {
			if (newind < percpu_arena_ind_limit(opt_percpu_arena)) {
				/*
				 * If perCPU arena is enabled, thread_arena
				 * control is not allowed for the auto arena
				 * range.
				 */
				ret = EPERM;
				goto label_return;
			}
		}

		/* Initialize arena if necessary. */
		newarena = arena_get(tsd_tsdn(tsd), newind, true);
		if (newarena == NULL) {
			ret = EAGAIN;
			goto label_return;
		}
		/* Set new arena/tcache associations. */
		arena_migrate(tsd, oldarena, newarena);
		if (tcache_available(tsd)) {
			tcache_arena_reassociate(tsd_tsdn(tsd),
			    tsd_tcache_slowp_get(tsd), tsd_tcachep_get(tsd),
			    newarena);
		}
	}

	ret = 0;
label_return:
	return ret;
}

CTL_RO_NL_GEN(thread_allocated, tsd_thread_allocated_get(tsd), uint64_t)
CTL_RO_NL_GEN(thread_allocatedp, tsd_thread_allocatedp_get(tsd), uint64_t *)
CTL_RO_NL_GEN(thread_deallocated, tsd_thread_deallocated_get(tsd), uint64_t)
CTL_RO_NL_GEN(thread_deallocatedp, tsd_thread_deallocatedp_get(tsd), uint64_t *)

static int
thread_tcache_enabled_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	bool oldval;

	oldval = tcache_enabled_get(tsd);
	if (newp != NULL) {
		if (newlen != sizeof(bool)) {
			ret = EINVAL;
			goto label_return;
		}
		tcache_enabled_set(tsd, *(bool *)newp);
	}
	READ(oldval, bool);

	ret = 0;
label_return:
	return ret;
}

static int
thread_tcache_flush_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;

	if (!tcache_available(tsd)) {
		ret = EFAULT;
		goto label_return;
	}

	NEITHER_READ_NOR_WRITE();

	tcache_flush(tsd);

	ret = 0;
label_return:
	return ret;
}

static int
thread_peak_read_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	if (!config_stats) {
		return ENOENT;
	}
	READONLY();
	peak_event_update(tsd);
	uint64_t result = peak_event_max(tsd);
	READ(result, uint64_t);
	ret = 0;
label_return:
	return ret;
}

static int
thread_peak_reset_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	if (!config_stats) {
		return ENOENT;
	}
	NEITHER_READ_NOR_WRITE();
	peak_event_zero(tsd);
	ret = 0;
label_return:
	return ret;
}

static int
thread_prof_name_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;

	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	READ_XOR_WRITE();

	if (newp != NULL) {
		if (newlen != sizeof(const char *)) {
			ret = EINVAL;
			goto label_return;
		}

		if ((ret = prof_thread_name_set(tsd, *(const char **)newp)) !=
		    0) {
			goto label_return;
		}
	} else {
		const char *oldname = prof_thread_name_get(tsd);
		READ(oldname, const char *);
	}

	ret = 0;
label_return:
	return ret;
}

static int
thread_prof_active_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	bool oldval;

	if (!config_prof) {
		return ENOENT;
	}

	oldval = opt_prof ? prof_thread_active_get(tsd) : false;
	if (newp != NULL) {
		if (!opt_prof) {
			ret = ENOENT;
			goto label_return;
		}
		if (newlen != sizeof(bool)) {
			ret = EINVAL;
			goto label_return;
		}
		if (prof_thread_active_set(tsd, *(bool *)newp)) {
			ret = EAGAIN;
			goto label_return;
		}
	}
	READ(oldval, bool);

	ret = 0;
label_return:
	return ret;
}

static int
thread_idle_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;

	NEITHER_READ_NOR_WRITE();

	if (tcache_available(tsd)) {
		tcache_flush(tsd);
	}
	/*
	 * This heuristic is perhaps not the most well-considered.  But it
	 * matches the only idling policy we have experience with in the status
	 * quo.  Over time we should investigate more principled approaches.
	 */
	if (opt_narenas > ncpus * 2) {
		arena_t *arena = arena_choose(tsd, NULL);
		if (arena != NULL) {
			arena_decay(tsd_tsdn(tsd), arena, false, true);
		}
		/*
		 * The missing arena case is not actually an error; a thread
		 * might be idle before it associates itself to one.  This is
		 * unusual, but not wrong.
		 */
	}

	ret = 0;
label_return:
	return ret;
}

/******************************************************************************/

static int
tcache_create_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned tcache_ind;

	READONLY();
	VERIFY_READ(unsigned);
	if (tcaches_create(tsd, b0get(), &tcache_ind)) {
		ret = EFAULT;
		goto label_return;
	}
	READ(tcache_ind, unsigned);

	ret = 0;
label_return:
	return ret;
}

static int
tcache_flush_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned tcache_ind;

	WRITEONLY();
	ASSURED_WRITE(tcache_ind, unsigned);
	tcaches_flush(tsd, tcache_ind);

	ret = 0;
label_return:
	return ret;
}

static int
tcache_destroy_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned tcache_ind;

	WRITEONLY();
	ASSURED_WRITE(tcache_ind, unsigned);
	tcaches_destroy(tsd, tcache_ind);

	ret = 0;
label_return:
	return ret;
}

/******************************************************************************/

static int
arena_i_initialized_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	tsdn_t *tsdn = tsd_tsdn(tsd);
	unsigned arena_ind;
	bool initialized;

	READONLY();
	MIB_UNSIGNED(arena_ind, 1);

	malloc_mutex_lock(tsdn, &ctl_mtx);
	initialized = arenas_i(arena_ind)->initialized;
	malloc_mutex_unlock(tsdn, &ctl_mtx);

	READ(initialized, bool);

	ret = 0;
label_return:
	return ret;
}

static void
arena_i_decay(tsdn_t *tsdn, unsigned arena_ind, bool all) {
	malloc_mutex_lock(tsdn, &ctl_mtx);
	{
		unsigned narenas = ctl_arenas->narenas;

		/*
		 * Access via index narenas is deprecated, and scheduled for
		 * removal in 6.0.0.
		 */
		if (arena_ind == MALLCTL_ARENAS_ALL || arena_ind == narenas) {
			unsigned i;
			VARIABLE_ARRAY(arena_t *, tarenas, narenas);

			for (i = 0; i < narenas; i++) {
				tarenas[i] = arena_get(tsdn, i, false);
			}

			/*
			 * No further need to hold ctl_mtx, since narenas and
			 * tarenas contain everything needed below.
			 */
			malloc_mutex_unlock(tsdn, &ctl_mtx);

			for (i = 0; i < narenas; i++) {
				if (tarenas[i] != NULL) {
					arena_decay(tsdn, tarenas[i], false,
					    all);
				}
			}
		} else {
			arena_t *tarena;

			assert(arena_ind < narenas);

			tarena = arena_get(tsdn, arena_ind, false);

			/* No further need to hold ctl_mtx. */
			malloc_mutex_unlock(tsdn, &ctl_mtx);

			if (tarena != NULL) {
				arena_decay(tsdn, tarena, false, all);
			}
		}
	}
}

static int
arena_i_decay_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;

	NEITHER_READ_NOR_WRITE();
	MIB_UNSIGNED(arena_ind, 1);
	arena_i_decay(tsd_tsdn(tsd), arena_ind, false);

	ret = 0;
label_return:
	return ret;
}

static int
arena_i_purge_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;

	NEITHER_READ_NOR_WRITE();
	MIB_UNSIGNED(arena_ind, 1);
	arena_i_decay(tsd_tsdn(tsd), arena_ind, true);

	ret = 0;
label_return:
	return ret;
}

static int
arena_i_reset_destroy_helper(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen, unsigned *arena_ind,
    arena_t **arena) {
	int ret;

	NEITHER_READ_NOR_WRITE();
	MIB_UNSIGNED(*arena_ind, 1);

	*arena = arena_get(tsd_tsdn(tsd), *arena_ind, false);
	if (*arena == NULL || arena_is_auto(*arena)) {
		ret = EFAULT;
		goto label_return;
	}

	ret = 0;
label_return:
	return ret;
}

static void
arena_reset_prepare_background_thread(tsd_t *tsd, unsigned arena_ind) {
	/* Temporarily disable the background thread during arena reset. */
	if (have_background_thread) {
		malloc_mutex_lock(tsd_tsdn(tsd), &background_thread_lock);
		if (background_thread_enabled()) {
			background_thread_info_t *info =
			    background_thread_info_get(arena_ind);
			assert(info->state == background_thread_started);
			malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);
			info->state = background_thread_paused;
			malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);
		}
	}
}

static void
arena_reset_finish_background_thread(tsd_t *tsd, unsigned arena_ind) {
	if (have_background_thread) {
		if (background_thread_enabled()) {
			background_thread_info_t *info =
			    background_thread_info_get(arena_ind);
			assert(info->state == background_thread_paused);
			malloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);
			info->state = background_thread_started;
			malloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);
		}
		malloc_mutex_unlock(tsd_tsdn(tsd), &background_thread_lock);
	}
}

static int
arena_i_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;
	arena_t *arena;

	ret = arena_i_reset_destroy_helper(tsd, mib, miblen, oldp, oldlenp,
	    newp, newlen, &arena_ind, &arena);
	if (ret != 0) {
		return ret;
	}

	arena_reset_prepare_background_thread(tsd, arena_ind);
	arena_reset(tsd, arena);
	arena_reset_finish_background_thread(tsd, arena_ind);

	return ret;
}

static int
arena_i_destroy_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;
	arena_t *arena;
	ctl_arena_t *ctl_darena, *ctl_arena;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);

	ret = arena_i_reset_destroy_helper(tsd, mib, miblen, oldp, oldlenp,
	    newp, newlen, &arena_ind, &arena);
	if (ret != 0) {
		goto label_return;
	}

	if (arena_nthreads_get(arena, false) != 0 || arena_nthreads_get(arena,
	    true) != 0) {
		ret = EFAULT;
		goto label_return;
	}

	arena_reset_prepare_background_thread(tsd, arena_ind);
	/* Merge stats after resetting and purging arena. */
	arena_reset(tsd, arena);
	arena_decay(tsd_tsdn(tsd), arena, false, true);
	ctl_darena = arenas_i(MALLCTL_ARENAS_DESTROYED);
	ctl_darena->initialized = true;
	ctl_arena_refresh(tsd_tsdn(tsd), arena, ctl_darena, arena_ind, true);
	/* Destroy arena. */
	arena_destroy(tsd, arena);
	ctl_arena = arenas_i(arena_ind);
	ctl_arena->initialized = false;
	/* Record arena index for later recycling via arenas.create. */
	ql_elm_new(ctl_arena, destroyed_link);
	ql_tail_insert(&ctl_arenas->destroyed, ctl_arena, destroyed_link);
	arena_reset_finish_background_thread(tsd, arena_ind);

	assert(ret == 0);
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);

	return ret;
}

static int
arena_i_dss_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	const char *dss = NULL;
	unsigned arena_ind;
	dss_prec_t dss_prec_old = dss_prec_limit;
	dss_prec_t dss_prec = dss_prec_limit;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	WRITE(dss, const char *);
	MIB_UNSIGNED(arena_ind, 1);
	if (dss != NULL) {
		int i;
		bool match = false;

		for (i = 0; i < dss_prec_limit; i++) {
			if (strcmp(dss_prec_names[i], dss) == 0) {
				dss_prec = i;
				match = true;
				break;
			}
		}

		if (!match) {
			ret = EINVAL;
			goto label_return;
		}
	}

	/*
	 * Access via index narenas is deprecated, and scheduled for removal in
	 * 6.0.0.
	 */
	if (arena_ind == MALLCTL_ARENAS_ALL || arena_ind ==
	    ctl_arenas->narenas) {
		if (dss_prec != dss_prec_limit &&
		    extent_dss_prec_set(dss_prec)) {
			ret = EFAULT;
			goto label_return;
		}
		dss_prec_old = extent_dss_prec_get();
	} else {
		arena_t *arena = arena_get(tsd_tsdn(tsd), arena_ind, false);
		if (arena == NULL || (dss_prec != dss_prec_limit &&
		    arena_dss_prec_set(arena, dss_prec))) {
			ret = EFAULT;
			goto label_return;
		}
		dss_prec_old = arena_dss_prec_get(arena);
	}

	dss = dss_prec_names[dss_prec_old];
	READ(dss, const char *);

	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
arena_i_oversize_threshold_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;

	unsigned arena_ind;
	MIB_UNSIGNED(arena_ind, 1);

	arena_t *arena = arena_get(tsd_tsdn(tsd), arena_ind, false);
	if (arena == NULL) {
		ret = EFAULT;
		goto label_return;
	}

	if (oldp != NULL && oldlenp != NULL) {
		size_t oldval = atomic_load_zu(
		    &arena->pa_shard.pac.oversize_threshold, ATOMIC_RELAXED);
		READ(oldval, size_t);
	}
	if (newp != NULL) {
		if (newlen != sizeof(size_t)) {
			ret = EINVAL;
			goto label_return;
		}
		atomic_store_zu(&arena->pa_shard.pac.oversize_threshold,
		    *(size_t *)newp, ATOMIC_RELAXED);
	}
	ret = 0;
label_return:
	return ret;
}

static int
arena_i_decay_ms_ctl_impl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen, bool dirty) {
	int ret;
	unsigned arena_ind;
	arena_t *arena;

	MIB_UNSIGNED(arena_ind, 1);
	arena = arena_get(tsd_tsdn(tsd), arena_ind, false);
	if (arena == NULL) {
		ret = EFAULT;
		goto label_return;
	}
	extent_state_t state = dirty ? extent_state_dirty : extent_state_muzzy;

	if (oldp != NULL && oldlenp != NULL) {
		size_t oldval = arena_decay_ms_get(arena, state);
		READ(oldval, ssize_t);
	}
	if (newp != NULL) {
		if (newlen != sizeof(ssize_t)) {
			ret = EINVAL;
			goto label_return;
		}
		if (arena_is_huge(arena_ind) && *(ssize_t *)newp > 0) {
			/*
			 * By default the huge arena purges eagerly.  If it is
			 * set to non-zero decay time afterwards, background
			 * thread might be needed.
			 */
			if (background_thread_create(tsd, arena_ind)) {
				ret = EFAULT;
				goto label_return;
			}
		}

		if (arena_decay_ms_set(tsd_tsdn(tsd), arena, state,
		    *(ssize_t *)newp)) {
			ret = EFAULT;
			goto label_return;
		}
	}

	ret = 0;
label_return:
	return ret;
}

static int
arena_i_dirty_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	return arena_i_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp,
	    newlen, true);
}

static int
arena_i_muzzy_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	return arena_i_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp,
	    newlen, false);
}

static int
arena_i_extent_hooks_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;
	arena_t *arena;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	MIB_UNSIGNED(arena_ind, 1);
	if (arena_ind < narenas_total_get()) {
		extent_hooks_t *old_extent_hooks;
		arena = arena_get(tsd_tsdn(tsd), arena_ind, false);
		if (arena == NULL) {
			if (arena_ind >= narenas_auto) {
				ret = EFAULT;
				goto label_return;
			}
			old_extent_hooks =
			    (extent_hooks_t *)&ehooks_default_extent_hooks;
			READ(old_extent_hooks, extent_hooks_t *);
			if (newp != NULL) {
				/* Initialize a new arena as a side effect. */
				extent_hooks_t *new_extent_hooks
				    JEMALLOC_CC_SILENCE_INIT(NULL);
				WRITE(new_extent_hooks, extent_hooks_t *);
				arena_config_t config = arena_config_default;
				config.extent_hooks = new_extent_hooks;

				arena = arena_init(tsd_tsdn(tsd), arena_ind,
				    &config);
				if (arena == NULL) {
					ret = EFAULT;
					goto label_return;
				}
			}
		} else {
			if (newp != NULL) {
				extent_hooks_t *new_extent_hooks
				    JEMALLOC_CC_SILENCE_INIT(NULL);
				WRITE(new_extent_hooks, extent_hooks_t *);
				old_extent_hooks = arena_set_extent_hooks(tsd,
				    arena, new_extent_hooks);
				READ(old_extent_hooks, extent_hooks_t *);
			} else {
				old_extent_hooks =
				    ehooks_get_extent_hooks_ptr(
					arena_get_ehooks(arena));
				READ(old_extent_hooks, extent_hooks_t *);
			}
		}
	} else {
		ret = EFAULT;
		goto label_return;
	}
	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
arena_i_retain_grow_limit_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	unsigned arena_ind;
	arena_t *arena;

	if (!opt_retain) {
		/* Only relevant when retain is enabled. */
		return ENOENT;
	}

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	MIB_UNSIGNED(arena_ind, 1);
	if (arena_ind < narenas_total_get() && (arena =
	    arena_get(tsd_tsdn(tsd), arena_ind, false)) != NULL) {
		size_t old_limit, new_limit;
		if (newp != NULL) {
			WRITE(new_limit, size_t);
		}
		bool err = arena_retain_grow_limit_get_set(tsd, arena,
		    &old_limit, newp != NULL ? &new_limit : NULL);
		if (!err) {
			READ(old_limit, size_t);
			ret = 0;
		} else {
			ret = EFAULT;
		}
	} else {
		ret = EFAULT;
	}
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static const ctl_named_node_t *
arena_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen,
    size_t i) {
	const ctl_named_node_t *ret;

	malloc_mutex_lock(tsdn, &ctl_mtx);
	switch (i) {
	case MALLCTL_ARENAS_ALL:
	case MALLCTL_ARENAS_DESTROYED:
		break;
	default:
		if (i > ctl_arenas->narenas) {
			ret = NULL;
			goto label_return;
		}
		break;
	}

	ret = super_arena_i_node;
label_return:
	malloc_mutex_unlock(tsdn, &ctl_mtx);
	return ret;
}

/******************************************************************************/

static int
arenas_narenas_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned narenas;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	READONLY();
	narenas = ctl_arenas->narenas;
	READ(narenas, unsigned);

	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
arenas_decay_ms_ctl_impl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen, bool dirty) {
	int ret;

	if (oldp != NULL && oldlenp != NULL) {
		size_t oldval = (dirty ? arena_dirty_decay_ms_default_get() :
		    arena_muzzy_decay_ms_default_get());
		READ(oldval, ssize_t);
	}
	if (newp != NULL) {
		if (newlen != sizeof(ssize_t)) {
			ret = EINVAL;
			goto label_return;
		}
		if (dirty ? arena_dirty_decay_ms_default_set(*(ssize_t *)newp)
		    : arena_muzzy_decay_ms_default_set(*(ssize_t *)newp)) {
			ret = EFAULT;
			goto label_return;
		}
	}

	ret = 0;
label_return:
	return ret;
}

static int
arenas_dirty_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	return arenas_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp,
	    newlen, true);
}

static int
arenas_muzzy_decay_ms_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	return arenas_decay_ms_ctl_impl(tsd, mib, miblen, oldp, oldlenp, newp,
	    newlen, false);
}

CTL_RO_NL_GEN(arenas_quantum, QUANTUM, size_t)
CTL_RO_NL_GEN(arenas_page, PAGE, size_t)
CTL_RO_NL_GEN(arenas_tcache_max, tcache_maxclass, size_t)
CTL_RO_NL_GEN(arenas_nbins, SC_NBINS, unsigned)
CTL_RO_NL_GEN(arenas_nhbins, nhbins, unsigned)
CTL_RO_NL_GEN(arenas_bin_i_size, bin_infos[mib[2]].reg_size, size_t)
CTL_RO_NL_GEN(arenas_bin_i_nregs, bin_infos[mib[2]].nregs, uint32_t)
CTL_RO_NL_GEN(arenas_bin_i_slab_size, bin_infos[mib[2]].slab_size, size_t)
CTL_RO_NL_GEN(arenas_bin_i_nshards, bin_infos[mib[2]].n_shards, uint32_t)
static const ctl_named_node_t *
arenas_bin_i_index(tsdn_t *tsdn, const size_t *mib,
    size_t miblen, size_t i) {
	if (i > SC_NBINS) {
		return NULL;
	}
	return super_arenas_bin_i_node;
}

CTL_RO_NL_GEN(arenas_nlextents, SC_NSIZES - SC_NBINS, unsigned)
CTL_RO_NL_GEN(arenas_lextent_i_size, sz_index2size(SC_NBINS+(szind_t)mib[2]),
    size_t)
static const ctl_named_node_t *
arenas_lextent_i_index(tsdn_t *tsdn, const size_t *mib,
    size_t miblen, size_t i) {
	if (i > SC_NSIZES - SC_NBINS) {
		return NULL;
	}
	return super_arenas_lextent_i_node;
}

static int
arenas_create_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);

	VERIFY_READ(unsigned);
	arena_config_t config = arena_config_default;
	WRITE(config.extent_hooks, extent_hooks_t *);
	if ((arena_ind = ctl_arena_init(tsd, &config)) == UINT_MAX) {
		ret = EAGAIN;
		goto label_return;
	}
	READ(arena_ind, unsigned);

	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
experimental_arenas_create_ext_ctl(tsd_t *tsd,
    const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	unsigned arena_ind;

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);

	arena_config_t config = arena_config_default;
	VERIFY_READ(unsigned);
	WRITE(config, arena_config_t);

	if ((arena_ind = ctl_arena_init(tsd, &config)) == UINT_MAX) {
		ret = EAGAIN;
		goto label_return;
	}
	READ(arena_ind, unsigned);
	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
arenas_lookup_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	unsigned arena_ind;
	void *ptr;
	edata_t *edata;
	arena_t *arena;

	ptr = NULL;
	ret = EINVAL;
	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	WRITE(ptr, void *);
	edata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr);
	if (edata == NULL) {
		goto label_return;
	}

	arena = arena_get_from_edata(edata);
	if (arena == NULL) {
		goto label_return;
	}

	arena_ind = arena_ind_get(arena);
	READ(arena_ind, unsigned);

	ret = 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

/******************************************************************************/

static int
prof_thread_active_init_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp,
    size_t newlen) {
	int ret;
	bool oldval;

	if (!config_prof) {
		return ENOENT;
	}

	if (newp != NULL) {
		if (!opt_prof) {
			ret = ENOENT;
			goto label_return;
		}
		if (newlen != sizeof(bool)) {
			ret = EINVAL;
			goto label_return;
		}
		oldval = prof_thread_active_init_set(tsd_tsdn(tsd),
		    *(bool *)newp);
	} else {
		oldval = opt_prof ? prof_thread_active_init_get(tsd_tsdn(tsd)) :
		    false;
	}
	READ(oldval, bool);

	ret = 0;
label_return:
	return ret;
}

static int
prof_active_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	bool oldval;

	if (!config_prof) {
		ret = ENOENT;
		goto label_return;
	}

	if (newp != NULL) {
		if (newlen != sizeof(bool)) {
			ret = EINVAL;
			goto label_return;
		}
		bool val = *(bool *)newp;
		if (!opt_prof) {
			if (val) {
				ret = ENOENT;
				goto label_return;
			} else {
				/* No change needed (already off). */
				oldval = false;
			}
		} else {
			oldval = prof_active_set(tsd_tsdn(tsd), val);
		}
	} else {
		oldval = opt_prof ? prof_active_get(tsd_tsdn(tsd)) : false;
	}
	READ(oldval, bool);

	ret = 0;
label_return:
	return ret;
}

static int
prof_dump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	const char *filename = NULL;

	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	WRITEONLY();
	WRITE(filename, const char *);

	if (prof_mdump(tsd, filename)) {
		ret = EFAULT;
		goto label_return;
	}

	ret = 0;
label_return:
	return ret;
}

static int
prof_gdump_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	bool oldval;

	if (!config_prof) {
		return ENOENT;
	}

	if (newp != NULL) {
		if (!opt_prof) {
			ret = ENOENT;
			goto label_return;
		}
		if (newlen != sizeof(bool)) {
			ret = EINVAL;
			goto label_return;
		}
		oldval = prof_gdump_set(tsd_tsdn(tsd), *(bool *)newp);
	} else {
		oldval = opt_prof ? prof_gdump_get(tsd_tsdn(tsd)) : false;
	}
	READ(oldval, bool);

	ret = 0;
label_return:
	return ret;
}

static int
prof_prefix_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	const char *prefix = NULL;

	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	malloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);
	WRITEONLY();
	WRITE(prefix, const char *);

	ret = prof_prefix_set(tsd_tsdn(tsd), prefix) ? EFAULT : 0;
label_return:
	malloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);
	return ret;
}

static int
prof_reset_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,
    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;
	size_t lg_sample = lg_prof_sample;

	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	WRITEONLY();
	WRITE(lg_sample, size_t);
	if (lg_sample >= (sizeof(uint64_t) << 3)) {
		lg_sample = (sizeof(uint64_t) << 3) - 1;
	}

	prof_reset(tsd, lg_sample);

	ret = 0;
label_return:
	return ret;
}

CTL_RO_NL_CGEN(config_prof, prof_interval, prof_interval, uint64_t)
CTL_RO_NL_CGEN(config_prof, lg_prof_sample, lg_prof_sample, size_t)

static int
prof_log_start_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	int ret;

	const char *filename = NULL;

	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	WRITEONLY();
	WRITE(filename, const char *);

	if (prof_log_start(tsd_tsdn(tsd), filename)) {
		ret = EFAULT;
		goto label_return;
	}

	ret = 0;
label_return:
	return ret;
}

static int
prof_log_stop_ctl(tsd_t *tsd, const size_t *mib, size_t miblen, void *oldp,
    size_t *oldlenp, void *newp, size_t newlen) {
	if (!config_prof || !opt_prof) {
		return ENOENT;
	}

	if (prof_log_stop(tsd_tsdn(tsd))) {
		return EFAULT;
	}

	return 0;
}

static int
experimental_hooks_prof_backtrace_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;

	if (oldp == NULL && newp == NULL) {
		ret = EINVAL;
		goto label_return;
	}
	if (oldp != NULL) {
		prof_backtrace_hook_t old_hook =
		    prof_backtrace_hook_get();
		READ(old_hook, prof_backtrace_hook_t);
	}
	if (newp != NULL) {
		if (!opt_prof) {
			ret = ENOENT;
			goto label_return;
		}
		prof_backtrace_hook_t new_hook JEMALLOC_CC_SILENCE_INIT(NULL);
		WRITE(new_hook, prof_backtrace_hook_t);
		if (new_hook == NULL) {
			ret = EINVAL;
			goto label_return;
		}
		prof_backtrace_hook_set(new_hook);
	}
	ret = 0;
label_return:
	return ret;
}

static int
experimental_hooks_prof_dump_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;

	if (oldp == NULL && newp == NULL) {
		ret = EINVAL;
		goto label_return;
	}
	if (oldp != NULL) {
		prof_dump_hook_t old_hook =
		    prof_dump_hook_get();
		READ(old_hook, prof_dump_hook_t);
	}
	if (newp != NULL) {
		if (!opt_prof) {
			ret = ENOENT;
			goto label_return;
		}
		prof_dump_hook_t new_hook JEMALLOC_CC_SILENCE_INIT(NULL);
		WRITE(new_hook, prof_dump_hook_t);
		prof_dump_hook_set(new_hook);
	}
	ret = 0;
label_return:
	return ret;
}

/* For integration test purpose only.  No plan to move out of experimental. */
static int
experimental_hooks_safety_check_abort_ctl(tsd_t *tsd, const size_t *mib,
    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {
	int ret;

	WRITEONLY();
	if (newp != NULL) {
		if (newlen != sizeof(safety_check_abort_hook_t)) {
			ret = EINVAL;
			goto label_return;
		}
		safety_check_abort_hook_t hook JEMALLOC_CC_SILENCE_INIT(NULL);
		WRITE(hook, safety_check_abort_hook_t);
		safety_check_set_abort(hook);
	}
	ret = 0;
label_return:
	return ret;
}

/******************************************************************************/

CTL_RO_CGEN(config_stats, stats_allocated, ctl_stats->allocated, size_t)
CTL_RO_CGEN(config_stats, stats_active, ctl_stats->active, size_t)
CTL_RO_CGEN(config_stats, stats_metadata, ctl_stats->metadata, size_t)
CTL_RO_CGEN(config_stats, stats_metadata_thp, ctl_stats->metadata_thp, size_t)
CTL_RO_CGEN(config_stats, stats_resident, ctl_stats->resident, size_t)
CTL_RO_CGEN(config_stats, stats_mapped, ctl_stats->mapped, size_t)
CTL_RO_CGEN(config_stats, stats_retained, ctl_stats->retained, size_t)

CTL_RO_CGEN(config_stats, stats_background_thread_num_threads,
    ctl_stats->background_thread.num_threads, size_t)
CTL_RO_CGEN(config_stats, stats_background_thread_num_runs,
    ctl_stats->background_thread.num_runs, uint64_t)
CTL_RO_CGEN(config_stats, stats_background_thread_run_interval,
    nstime_ns(&ctl_stats->background_thread.run_interval), uint64_t)

CTL_RO_CGEN(config_stats, stats_zero_reallocs,
    atomic_load_zu(&zero_realloc_count, ATOMIC_RELAXED), size_t)

CTL_RO_GEN(stats_arenas_i_dss, arenas_i(mib[2])->dss, const char *)
CTL_RO_GEN(stats_arenas_i_dirty_decay_ms, arenas_i(mib[2])->dirty_decay_ms,
    ssize_t)
CTL_RO_GEN(stats_arenas_i_muzzy_decay_ms, arenas_i(mib[2])->muzzy_decay_ms,
    ssize_t)
CTL_RO_GEN(stats_arenas_i_nthreads, arenas_i(mib[2])->nthreads, unsigned)
CTL_RO_GEN(stats_arenas_i_uptime,
    nstime_ns(&arenas_i(mib[2])->astats->astats.uptime), uint64_t)
CTL_RO_GEN(stats_arenas_i_pactive, arenas_i(mib[2])->pactive, size_t)
CTL_RO_GEN(stats_arenas_i_pdirty, arenas_i(mib[2])->pdirty, size_t)
CTL_RO_GEN(stats_arenas_i_pmuzzy, arenas_i(mib[2])->pmuzzy, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_mapped,
    arenas_i(mib[2])->astats->astats.mapped, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_retained,
    arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.retained, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_extent_avail,
    arenas_i(mib[2])->astats->astats.pa_shard_stats.edata_avail, size_t)

CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_npurge,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_dirty.npurge),
    uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_nmadvise,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_dirty.nmadvise),
    uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_dirty_purged,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_dirty.purged),
    uint64_t)

CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_npurge,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_muzzy.npurge),
    uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_nmadvise,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_muzzy.nmadvise),
    uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_muzzy_purged,
    locked_read_u64_unsynchronized(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.decay_muzzy.purged),
    uint64_t)

CTL_RO_CGEN(config_stats, stats_arenas_i_base,
    arenas_i(mib[2])->astats->astats.base,
    size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_internal,
    atomic_load_zu(&arenas_i(mib[2])->astats->astats.internal, ATOMIC_RELAXED),
    size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_metadata_thp,
    arenas_i(mib[2])->astats->astats.metadata_thp, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_tcache_bytes,
    arenas_i(mib[2])->astats->astats.tcache_bytes, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_tcache_stashed_bytes,
    arenas_i(mib[2])->astats->astats.tcache_stashed_bytes, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_resident,
    arenas_i(mib[2])->astats->astats.resident,
    size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_abandoned_vm,
    atomic_load_zu(
    &arenas_i(mib[2])->astats->astats.pa_shard_stats.pac_stats.abandoned_vm,
    ATOMIC_RELAXED), size_t)

CTL_RO_CGEN(config_stats, stats_arenas_i_hpa_sec_bytes,
    arenas_i(mib[2])->astats->secstats.bytes, size_t)

CTL_RO_CGEN(config_stats, stats_arenas_i_small_allocated,
    arenas_i(mib[2])->astats->allocated_small, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_small_nmalloc,
    arenas_i(mib[2])->astats->nmalloc_small, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_small_ndalloc,
    arenas_i(mib[2])->astats->ndalloc_small, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_small_nrequests,
    arenas_i(mib[2])->astats->nrequests_small, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_small_nfills,
    arenas_i(mib[2])->astats->nfills_small, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_small_nflushes,
    arenas_i(mib[2])->astats->nflushes_small, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_large_allocated,
    arenas_i(mib[2])->astats->astats.allocated_large, size_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_large_nmalloc,
    arenas_i(mib[2])->astats->astats.nmalloc_large, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_large_ndalloc,
    arenas_i(mib[2])->astats->astats.ndalloc_large, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_large_nrequests,
    arenas_i(mib[2])->astats->astats.nrequests_large, uint64_t)
/*
 * Note: "nmalloc_large" here instead of "nfills" in the read.  This is
 * intentional (large has no batch fill).
 */
CTL_RO_CGEN(config_stats, stats_arenas_i_large_nfills,
    arenas_i(mib[2])->astats->astats.nmalloc_large, uint64_t)
CTL_RO_CGEN(config_stats, stats_arenas_i_large_nflushes,
    arenas_i(mib[2])->astats->astats.nflushes_large, uint64_t)

/* Lock profiling related APIs below. */
#define RO_MUTEX_CTL_GEN(n, l)						\
CTL_RO_CGEN(config_stats, stats_##n##_num_ops,				\
    l.n_lock_ops, uint64_t)						\
CTL_RO_CGEN(config_stats, stats_##n##_num_wait,				\
    l.n_wait_times, uint64_t)						\
CTL_RO_CGEN(config_stats, stats_##n##_num_spin_acq,			\
    l.n_spin_acquired, uint64_t)					\
CTL_RO_CGEN(config_stats, stats_##n##_num_owner_switch,			\
    l.n_owner_switches, uint64_t) 					\
CTL_RO_CGEN(config_stats, stats_##n##_total_wait_time,			\
    nstime_ns(&l.tot_wait_time), uint64_t)				\
CTL_RO_CGEN(config_stats, stats_##n##_max_wait_time,			\
    nstime_ns(&l.max_wait_time), uint64_t)				\
CTL_RO_CGEN(config_stats, stats_##n##_max_num_thds,			\
    l.max_n_thds, uint32_t)

/* Global mutexes. */
#define OP(mtx)								\
    RO_MUTEX_CTL_GEN(mutexes_##mtx,					\
        ctl_stats->mutex_prof_data[global_prof_mutex_##mtx])
MUTEX_PROF_GLOBAL_MUTEXES
#undef OP

/* Per arena mutexes */
#define OP(mtx) RO_MUTEX_CTL_GEN(arenas_i_mutexes_##mtx,		\
    arenas_i(mib[2])->astats->astats.mutex_prof_data[arena_prof_mutex_##mtx])
MUTEX_PROF_ARENA_MUTEXES
#undef OP

/* tcache bin mutex */
RO_MUTEX_CTL_GEN(arenas_i_bins_j_mutex,
    arenas_i(mib[2])->astats->bstats[mib[4]].mutex_data)
#undef RO_MUTEX_CTL_GEN

/* Resets all mutex stats, including global, arena and bin mutexes. 
*/\nstatic int\nstats_mutexes_reset_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp,\n    void *newp, size_t newlen) {\n\tif (!config_stats) {\n\t\treturn ENOENT;\n\t}\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\n#define MUTEX_PROF_RESET(mtx)\t\t\t\t\t\t\\\n    malloc_mutex_lock(tsdn, &mtx);\t\t\t\t\t\\\n    malloc_mutex_prof_data_reset(tsdn, &mtx);\t\t\t\t\\\n    malloc_mutex_unlock(tsdn, &mtx);\n\n\t/* Global mutexes: ctl and prof. */\n\tMUTEX_PROF_RESET(ctl_mtx);\n\tif (have_background_thread) {\n\t\tMUTEX_PROF_RESET(background_thread_lock);\n\t}\n\tif (config_prof && opt_prof) {\n\t\tMUTEX_PROF_RESET(bt2gctx_mtx);\n\t\tMUTEX_PROF_RESET(tdatas_mtx);\n\t\tMUTEX_PROF_RESET(prof_dump_mtx);\n\t\tMUTEX_PROF_RESET(prof_recent_alloc_mtx);\n\t\tMUTEX_PROF_RESET(prof_recent_dump_mtx);\n\t\tMUTEX_PROF_RESET(prof_stats_mtx);\n\t}\n\n\t/* Per arena mutexes. */\n\tunsigned n = narenas_total_get();\n\n\tfor (unsigned i = 0; i < n; i++) {\n\t\tarena_t *arena = arena_get(tsdn, i, false);\n\t\tif (!arena) {\n\t\t\tcontinue;\n\t\t}\n\t\tMUTEX_PROF_RESET(arena->large_mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.edata_cache.mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.pac.ecache_dirty.mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.pac.ecache_muzzy.mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.pac.ecache_retained.mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.pac.decay_dirty.mtx);\n\t\tMUTEX_PROF_RESET(arena->pa_shard.pac.decay_muzzy.mtx);\n\t\tMUTEX_PROF_RESET(arena->tcache_ql_mtx);\n\t\tMUTEX_PROF_RESET(arena->base->mtx);\n\n\t\tfor (szind_t j = 0; j < SC_NBINS; j++) {\n\t\t\tfor (unsigned k = 0; k < bin_infos[j].n_shards; k++) {\n\t\t\t\tbin_t *bin = arena_get_bin(arena, j, k);\n\t\t\t\tMUTEX_PROF_RESET(bin->lock);\n\t\t\t}\n\t\t}\n\t}\n#undef MUTEX_PROF_RESET\n\treturn 0;\n}\n\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nmalloc,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nmalloc, uint64_t)\nCTL_RO_CGEN(config_stats, 
stats_arenas_i_bins_j_ndalloc,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.ndalloc, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nrequests,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nrequests, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curregs,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.curregs, size_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nfills,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nfills, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nflushes,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nflushes, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nslabs,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nslabs, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nreslabs,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.reslabs, uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_curslabs,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.curslabs, size_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_bins_j_nonfull_slabs,\n    arenas_i(mib[2])->astats->bstats[mib[4]].stats_data.nonfull_slabs, size_t)\n\nstatic const ctl_named_node_t *\nstats_arenas_i_bins_j_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t j) {\n\tif (j > SC_NBINS) {\n\t\treturn NULL;\n\t}\n\treturn super_stats_arenas_i_bins_j_node;\n}\n\nCTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_nmalloc,\n    locked_read_u64_unsynchronized(\n    &arenas_i(mib[2])->astats->lstats[mib[4]].nmalloc), uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_ndalloc,\n    locked_read_u64_unsynchronized(\n    &arenas_i(mib[2])->astats->lstats[mib[4]].ndalloc), uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_nrequests,\n    locked_read_u64_unsynchronized(\n    &arenas_i(mib[2])->astats->lstats[mib[4]].nrequests), uint64_t)\nCTL_RO_CGEN(config_stats, stats_arenas_i_lextents_j_curlextents,\n    
arenas_i(mib[2])->astats->lstats[mib[4]].curlextents, size_t)\n\nstatic const ctl_named_node_t *\nstats_arenas_i_lextents_j_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t j) {\n\tif (j > SC_NSIZES - SC_NBINS) {\n\t\treturn NULL;\n\t}\n\treturn super_stats_arenas_i_lextents_j_node;\n}\n\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_ndirty,\n        arenas_i(mib[2])->astats->estats[mib[4]].ndirty, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_nmuzzy,\n        arenas_i(mib[2])->astats->estats[mib[4]].nmuzzy, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_nretained,\n        arenas_i(mib[2])->astats->estats[mib[4]].nretained, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_dirty_bytes,\n        arenas_i(mib[2])->astats->estats[mib[4]].dirty_bytes, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_muzzy_bytes,\n        arenas_i(mib[2])->astats->estats[mib[4]].muzzy_bytes, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_extents_j_retained_bytes,\n        arenas_i(mib[2])->astats->estats[mib[4]].retained_bytes, size_t);\n\nstatic const ctl_named_node_t *\nstats_arenas_i_extents_j_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t j) {\n\tif (j >= SC_NPSIZES) {\n\t\treturn NULL;\n\t}\n\treturn super_stats_arenas_i_extents_j_node;\n}\n\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_npurge_passes,\n    arenas_i(mib[2])->astats->hpastats.nonderived_stats.npurge_passes, uint64_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_npurges,\n    arenas_i(mib[2])->astats->hpastats.nonderived_stats.npurges, uint64_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nhugifies,\n    arenas_i(mib[2])->astats->hpastats.nonderived_stats.nhugifies, uint64_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_ndehugifies,\n    arenas_i(mib[2])->astats->hpastats.nonderived_stats.ndehugifies, uint64_t);\n\n/* Full, nonhuge */\nCTL_RO_CGEN(config_stats, 
stats_arenas_i_hpa_shard_full_slabs_npageslabs_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[0].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_full_slabs_nactive_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[0].nactive, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_full_slabs_ndirty_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[0].ndirty, size_t);\n\n/* Full, huge */\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_full_slabs_npageslabs_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[1].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_full_slabs_nactive_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[1].nactive, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_full_slabs_ndirty_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.full_slabs[1].ndirty, size_t);\n\n/* Empty, nonhuge */\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_npageslabs_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[0].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_nactive_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[0].nactive, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_ndirty_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[0].ndirty, size_t);\n\n/* Empty, huge */\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_npageslabs_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[1].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_nactive_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[1].nactive, size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_empty_slabs_ndirty_huge,\n    
arenas_i(mib[2])->astats->hpastats.psset_stats.empty_slabs[1].ndirty, size_t);\n\n/* Nonfull, nonhuge */\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][0].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][0].nactive,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_nonhuge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][0].ndirty,\n    size_t);\n\n/* Nonfull, huge */\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_npageslabs_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][1].npageslabs,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_nactive_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][1].nactive,\n    size_t);\nCTL_RO_CGEN(config_stats, stats_arenas_i_hpa_shard_nonfull_slabs_j_ndirty_huge,\n    arenas_i(mib[2])->astats->hpastats.psset_stats.nonfull_slabs[mib[5]][1].ndirty,\n    size_t);\n\nstatic const ctl_named_node_t *\nstats_arenas_i_hpa_shard_nonfull_slabs_j_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t j) {\n\tif (j >= PSSET_NPSIZES) {\n\t\treturn NULL;\n\t}\n\treturn super_stats_arenas_i_hpa_shard_nonfull_slabs_j_node;\n}\n\nstatic bool\nctl_arenas_i_verify(size_t i) {\n\tsize_t a = arenas_i2a_impl(i, true, true);\n\tif (a == UINT_MAX || !ctl_arenas->arenas[a]->initialized) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic const ctl_named_node_t *\nstats_arenas_i_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t i) {\n\tconst ctl_named_node_t *ret;\n\n\tmalloc_mutex_lock(tsdn, &ctl_mtx);\n\tif (ctl_arenas_i_verify(i)) {\n\t\tret = NULL;\n\t\tgoto label_return;\n\t}\n\n\tret = 
super_stats_arenas_i_node;\nlabel_return:\n\tmalloc_mutex_unlock(tsdn, &ctl_mtx);\n\treturn ret;\n}\n\nstatic int\nexperimental_hooks_install_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tif (oldp == NULL || oldlenp == NULL|| newp == NULL) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\t/*\n\t * Note: this is a *private* struct.  This is an experimental interface;\n\t * forcing the user to know the jemalloc internals well enough to\n\t * extract the ABI hopefully ensures nobody gets too comfortable with\n\t * this API, which can change at a moment's notice.\n\t */\n\thooks_t hooks;\n\tWRITE(hooks, hooks_t);\n\tvoid *handle = hook_install(tsd_tsdn(tsd), &hooks);\n\tif (handle == NULL) {\n\t\tret = EAGAIN;\n\t\tgoto label_return;\n\t}\n\tREAD(handle, void *);\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic int\nexperimental_hooks_remove_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tWRITEONLY();\n\tvoid *handle = NULL;\n\tWRITE(handle, void *);\n\tif (handle == NULL) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\thook_remove(tsd_tsdn(tsd), handle);\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic int\nexperimental_thread_activity_callback_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\n\tif (!config_stats) {\n\t\treturn ENOENT;\n\t}\n\n\tactivity_callback_thunk_t t_old = tsd_activity_callback_thunk_get(tsd);\n\tREAD(t_old, activity_callback_thunk_t);\n\n\tif (newp != NULL) {\n\t\t/*\n\t\t * This initialization is unnecessary.  
If it's omitted, though,\n\t\t * clang gets confused and warns on the subsequent use of t_new.\n\t\t */\n\t\tactivity_callback_thunk_t t_new = {NULL, NULL};\n\t\tWRITE(t_new, activity_callback_thunk_t);\n\t\ttsd_activity_callback_thunk_set(tsd, t_new);\n\t}\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\n/*\n * Output six memory utilization entries for an input pointer, the first one of\n * type (void *) and the remaining five of type size_t, describing the following\n * (in the same order):\n *\n * (a) memory address of the extent a potential reallocation would go into,\n * == the five fields below describe the extent the pointer resides in ==\n * (b) number of free regions in the extent,\n * (c) number of regions in the extent,\n * (d) size of the extent in terms of bytes,\n * (e) total number of free regions in the bin the extent belongs to, and\n * (f) total number of regions in the bin the extent belongs to.\n *\n * Note that \"(e)\" and \"(f)\" are only available when stats are enabled;\n * otherwise their values are undefined.\n *\n * This API is mainly intended for small class allocations, where extents are\n * used as slabs.  Note that if the bin the extent belongs to is completely\n * full, \"(a)\" will be NULL.\n *\n * In case of large class allocations, \"(a)\" will be NULL, and \"(e)\" and \"(f)\"\n * will be zero (if stats are enabled; otherwise undefined).  
The other three\n * fields will be properly set though the values are trivial: \"(b)\" will be 0,\n * \"(c)\" will be 1, and \"(d)\" will be the usable size.\n *\n * The input pointer and size are respectively passed in by newp and newlen,\n * and the output fields and size are respectively oldp and *oldlenp.\n *\n * It can be beneficial to define the following macros to make it easier to\n * access the output:\n *\n * #define SLABCUR_READ(out) (*(void **)out)\n * #define COUNTS(out) ((size_t *)((void **)out + 1))\n * #define NFREE_READ(out) COUNTS(out)[0]\n * #define NREGS_READ(out) COUNTS(out)[1]\n * #define SIZE_READ(out) COUNTS(out)[2]\n * #define BIN_NFREE_READ(out) COUNTS(out)[3]\n * #define BIN_NREGS_READ(out) COUNTS(out)[4]\n *\n * and then write e.g. NFREE_READ(oldp) to fetch the output.  See the unit test\n * test_query in test/unit/extent_util.c for an example.\n *\n * For a typical defragmentation workflow making use of this API for\n * understanding the fragmentation level, please refer to the comment for\n * experimental_utilization_batch_query_ctl.\n *\n * It's up to the application how to determine the significance of\n * fragmentation relying on the outputs returned.  
Possible choices are:\n *\n * (a) if extent utilization ratio is below a certain threshold,\n * (b) if extent memory consumption is above a certain threshold,\n * (c) if extent utilization ratio is significantly below bin utilization ratio,\n * (d) if input pointer deviates a lot from potential reallocation address, or\n * (e) some selection/combination of the above.\n *\n * The caller needs to make sure that the input/output arguments are valid,\n * in particular, that the size of the output is correct, i.e.:\n *\n *     *oldlenp = sizeof(void *) + sizeof(size_t) * 5\n *\n * Otherwise, the function immediately returns EINVAL without touching anything.\n *\n * In the rare case where there's no associated extent found for the input\n * pointer, the function zeros out all output fields and returns.  Please refer\n * to the comment for experimental_utilization_batch_query_ctl to understand the\n * motivation from C++.\n */\nstatic int\nexperimental_utilization_query_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\n\tassert(sizeof(inspect_extent_util_stats_verbose_t)\n\t    == sizeof(void *) + sizeof(size_t) * 5);\n\n\tif (oldp == NULL || oldlenp == NULL\n\t    || *oldlenp != sizeof(inspect_extent_util_stats_verbose_t)\n\t    || newp == NULL) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\n\tvoid *ptr = NULL;\n\tWRITE(ptr, void *);\n\tinspect_extent_util_stats_verbose_t *util_stats\n\t    = (inspect_extent_util_stats_verbose_t *)oldp;\n\tinspect_extent_util_stats_verbose_get(tsd_tsdn(tsd), ptr,\n\t    &util_stats->nfree, &util_stats->nregs, &util_stats->size,\n\t    &util_stats->bin_nfree, &util_stats->bin_nregs,\n\t    &util_stats->slabcur_addr);\n\tret = 0;\n\nlabel_return:\n\treturn ret;\n}\n\n/*\n * Given an input array of pointers, output three memory utilization entries of\n * type size_t for each input pointer about the extent it resides in:\n *\n * (a) number of free regions in the 
extent,\n * (b) number of regions in the extent, and\n * (c) size of the extent in terms of bytes.\n *\n * This API is mainly intended for small class allocations, where extents are\n * used as slabs.  In case of large class allocations, the outputs are trivial:\n * \"(a)\" will be 0, \"(b)\" will be 1, and \"(c)\" will be the usable size.\n *\n * Note that multiple input pointers may reside on the same extent, so the\n * output fields may contain duplicates.\n *\n * The format of the input/output looks like:\n *\n * input[0]:  1st_pointer_to_query\t|  output[0]: 1st_extent_n_free_regions\n *\t\t\t\t\t|  output[1]: 1st_extent_n_regions\n *\t\t\t\t\t|  output[2]: 1st_extent_size\n * input[1]:  2nd_pointer_to_query\t|  output[3]: 2nd_extent_n_free_regions\n *\t\t\t\t\t|  output[4]: 2nd_extent_n_regions\n *\t\t\t\t\t|  output[5]: 2nd_extent_size\n * ...\t\t\t\t\t|  ...\n *\n * The input array and size are respectively passed in by newp and newlen, and\n * the output array and size are respectively oldp and *oldlenp.\n *\n * It can be beneficial to define the following macros to make it easier to\n * access the output:\n *\n * #define NFREE_READ(out, i) out[(i) * 3]\n * #define NREGS_READ(out, i) out[(i) * 3 + 1]\n * #define SIZE_READ(out, i) out[(i) * 3 + 2]\n *\n * and then write e.g. NFREE_READ(oldp, i) to fetch the output.  
See the unit\n * test test_batch in test/unit/extent_util.c for a concrete example.\n *\n * A typical workflow would be composed of the following steps:\n *\n * (1) flush tcache: mallctl(\"thread.tcache.flush\", ...)\n * (2) initialize input array of pointers to query fragmentation\n * (3) allocate output array to hold utilization statistics\n * (4) query utilization: mallctl(\"experimental.utilization.batch_query\", ...)\n * (5) (optional) decide if it's worthwhile to defragment; otherwise stop here\n * (6) disable tcache: mallctl(\"thread.tcache.enabled\", ...)\n * (7) defragment allocations with significant fragmentation, e.g.:\n *         for each allocation {\n *             if it's fragmented {\n *                 malloc(...);\n *                 memcpy(...);\n *                 free(...);\n *             }\n *         }\n * (8) enable tcache: mallctl(\"thread.tcache.enabled\", ...)\n *\n * The application can determine the significance of fragmentation themselves\n * relying on the statistics returned, both at the overall level i.e. step \"(5)\"\n * and at individual allocation level i.e. within step \"(7)\".  Possible choices\n * are:\n *\n * (a) whether memory utilization ratio is below certain threshold,\n * (b) whether memory consumption is above certain threshold, or\n * (c) some combination of the two.\n *\n * The caller needs to make sure that the input/output arrays are valid and\n * their sizes are proper as well as matched, meaning:\n *\n * (a) newlen = n_pointers * sizeof(const void *)\n * (b) *oldlenp = n_pointers * sizeof(size_t) * 3\n * (c) n_pointers > 0\n *\n * Otherwise, the function immediately returns EINVAL without touching anything.\n *\n * In the rare case where there's no associated extent found for some pointers,\n * rather than immediately terminating the computation and raising an error,\n * the function simply zeros out the corresponding output fields and continues\n * the computation until all input pointers are handled.  
The motivations of\n * such a design are as follows:\n *\n * (a) The function always either processes nothing or processes everything, and\n * never leaves the output half touched and half untouched.\n *\n * (b) It facilitates usage needs especially common in C++.  A vast variety of\n * C++ objects are instantiated with multiple dynamic memory allocations.  For\n * example, std::string and std::vector typically use at least two allocations,\n * one for the metadata and one for the actual content.  Other types may use\n * even more allocations.  When inquiring about utilization statistics, the\n * caller often wants to examine into all such allocations, especially internal\n * one(s), rather than just the topmost one.  The issue comes when some\n * implementations do certain optimizations to reduce/aggregate some internal\n * allocations, e.g. putting short strings directly into the metadata, and such\n * decisions are not known to the caller.  Therefore, we permit pointers to\n * memory usages that may not be returned by previous malloc calls, and we\n * provide the caller a convenient way to identify such cases.\n */\nstatic int\nexperimental_utilization_batch_query_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\n\tassert(sizeof(inspect_extent_util_stats_t) == sizeof(size_t) * 3);\n\n\tconst size_t len = newlen / sizeof(const void *);\n\tif (oldp == NULL || oldlenp == NULL || newp == NULL || newlen == 0\n\t    || newlen != len * sizeof(const void *)\n\t    || *oldlenp != len * sizeof(inspect_extent_util_stats_t)) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\n\tvoid **ptrs = (void **)newp;\n\tinspect_extent_util_stats_t *util_stats =\n\t    (inspect_extent_util_stats_t *)oldp;\n\tsize_t i;\n\tfor (i = 0; i < len; ++i) {\n\t\tinspect_extent_util_stats_get(tsd_tsdn(tsd), ptrs[i],\n\t\t    &util_stats[i].nfree, &util_stats[i].nregs,\n\t\t    &util_stats[i].size);\n\t}\n\tret = 
0;\n\nlabel_return:\n\treturn ret;\n}\n\nstatic const ctl_named_node_t *\nexperimental_arenas_i_index(tsdn_t *tsdn, const size_t *mib,\n    size_t miblen, size_t i) {\n\tconst ctl_named_node_t *ret;\n\n\tmalloc_mutex_lock(tsdn, &ctl_mtx);\n\tif (ctl_arenas_i_verify(i)) {\n\t\tret = NULL;\n\t\tgoto label_return;\n\t}\n\tret = super_experimental_arenas_i_node;\nlabel_return:\n\tmalloc_mutex_unlock(tsdn, &ctl_mtx);\n\treturn ret;\n}\n\nstatic int\nexperimental_arenas_i_pactivep_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tif (!config_stats) {\n\t\treturn ENOENT;\n\t}\n\tif (oldp == NULL || oldlenp == NULL || *oldlenp != sizeof(size_t *)) {\n\t\treturn EINVAL;\n\t}\n\n\tunsigned arena_ind;\n\tarena_t *arena;\n\tint ret;\n\tsize_t *pactivep;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &ctl_mtx);\n\tREADONLY();\n\tMIB_UNSIGNED(arena_ind, 2);\n\tif (arena_ind < narenas_total_get() && (arena =\n\t    arena_get(tsd_tsdn(tsd), arena_ind, false)) != NULL) {\n#if defined(JEMALLOC_GCC_ATOMIC_ATOMICS) ||\t\t\t\t\\\n    defined(JEMALLOC_GCC_SYNC_ATOMICS) || defined(_MSC_VER)\n\t\t/* Expose the underlying counter for fast read. 
*/\n\t\tpactivep = (size_t *)&(arena->pa_shard.nactive.repr);\n\t\tREAD(pactivep, size_t *);\n\t\tret = 0;\n#else\n\t\tret = EFAULT;\n#endif\n\t} else {\n\t\tret = EFAULT;\n\t}\nlabel_return:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &ctl_mtx);\n\treturn ret;\n}\n\nstatic int\nexperimental_prof_recent_alloc_max_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\n\tif (!(config_prof && opt_prof)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tssize_t old_max;\n\tif (newp != NULL) {\n\t\tif (newlen != sizeof(ssize_t)) {\n\t\t\tret = EINVAL;\n\t\t\tgoto label_return;\n\t\t}\n\t\tssize_t max = *(ssize_t *)newp;\n\t\tif (max < -1) {\n\t\t\tret = EINVAL;\n\t\t\tgoto label_return;\n\t\t}\n\t\told_max = prof_recent_alloc_max_ctl_write(tsd, max);\n\t} else {\n\t\told_max = prof_recent_alloc_max_ctl_read();\n\t}\n\tREAD(old_max, ssize_t);\n\n\tret = 0;\n\nlabel_return:\n\treturn ret;\n}\n\ntypedef struct write_cb_packet_s write_cb_packet_t;\nstruct write_cb_packet_s {\n\twrite_cb_t *write_cb;\n\tvoid *cbopaque;\n};\n\nstatic int\nexperimental_prof_recent_alloc_dump_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\n\tif (!(config_prof && opt_prof)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tassert(sizeof(write_cb_packet_t) == sizeof(void *) * 2);\n\n\tWRITEONLY();\n\twrite_cb_packet_t write_cb_packet;\n\tASSURED_WRITE(write_cb_packet, write_cb_packet_t);\n\n\tprof_recent_alloc_dump(tsd, write_cb_packet.write_cb,\n\t    write_cb_packet.cbopaque);\n\n\tret = 0;\n\nlabel_return:\n\treturn ret;\n}\n\ntypedef struct batch_alloc_packet_s batch_alloc_packet_t;\nstruct batch_alloc_packet_s {\n\tvoid **ptrs;\n\tsize_t num;\n\tsize_t size;\n\tint flags;\n};\n\nstatic int\nexperimental_batch_alloc_ctl(tsd_t *tsd, const size_t *mib,\n    size_t miblen, void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint 
ret;\n\n\tVERIFY_READ(size_t);\n\n\tbatch_alloc_packet_t batch_alloc_packet;\n\tASSURED_WRITE(batch_alloc_packet, batch_alloc_packet_t);\n\tsize_t filled = batch_alloc(batch_alloc_packet.ptrs,\n\t    batch_alloc_packet.num, batch_alloc_packet.size,\n\t    batch_alloc_packet.flags);\n\tREAD(filled, size_t);\n\n\tret = 0;\n\nlabel_return:\n\treturn ret;\n}\n\nstatic int\nprof_stats_bins_i_live_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tunsigned binind;\n\tprof_stats_t stats;\n\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tREADONLY();\n\tMIB_UNSIGNED(binind, 3);\n\tif (binind >= SC_NBINS) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\tprof_stats_get_live(tsd, (szind_t)binind, &stats);\n\tREAD(stats, prof_stats_t);\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic int\nprof_stats_bins_i_accum_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tunsigned binind;\n\tprof_stats_t stats;\n\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tREADONLY();\n\tMIB_UNSIGNED(binind, 3);\n\tif (binind >= SC_NBINS) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\tprof_stats_get_accum(tsd, (szind_t)binind, &stats);\n\tREAD(stats, prof_stats_t);\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic const ctl_named_node_t *\nprof_stats_bins_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen,\n    size_t i) {\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\treturn NULL;\n\t}\n\tif (i >= SC_NBINS) {\n\t\treturn NULL;\n\t}\n\treturn super_prof_stats_bins_i_node;\n}\n\nstatic int\nprof_stats_lextents_i_live_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tunsigned lextent_ind;\n\tprof_stats_t 
stats;\n\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tREADONLY();\n\tMIB_UNSIGNED(lextent_ind, 3);\n\tif (lextent_ind >= SC_NSIZES - SC_NBINS) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\tprof_stats_get_live(tsd, (szind_t)(lextent_ind + SC_NBINS), &stats);\n\tREAD(stats, prof_stats_t);\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic int\nprof_stats_lextents_i_accum_ctl(tsd_t *tsd, const size_t *mib, size_t miblen,\n    void *oldp, size_t *oldlenp, void *newp, size_t newlen) {\n\tint ret;\n\tunsigned lextent_ind;\n\tprof_stats_t stats;\n\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\tret = ENOENT;\n\t\tgoto label_return;\n\t}\n\n\tREADONLY();\n\tMIB_UNSIGNED(lextent_ind, 3);\n\tif (lextent_ind >= SC_NSIZES - SC_NBINS) {\n\t\tret = EINVAL;\n\t\tgoto label_return;\n\t}\n\tprof_stats_get_accum(tsd, (szind_t)(lextent_ind + SC_NBINS), &stats);\n\tREAD(stats, prof_stats_t);\n\n\tret = 0;\nlabel_return:\n\treturn ret;\n}\n\nstatic const ctl_named_node_t *\nprof_stats_lextents_i_index(tsdn_t *tsdn, const size_t *mib, size_t miblen,\n    size_t i) {\n\tif (!(config_prof && opt_prof && opt_prof_stats)) {\n\t\treturn NULL;\n\t}\n\tif (i >= SC_NSIZES - SC_NBINS) {\n\t\treturn NULL;\n\t}\n\treturn super_prof_stats_lextents_i_node;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/decay.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/decay.h\"\n\nstatic const uint64_t h_steps[SMOOTHSTEP_NSTEPS] = {\n#define STEP(step, h, x, y)\t\t\t\\\n\t\th,\n\t\tSMOOTHSTEP\n#undef STEP\n};\n\n/*\n * Generate a new deadline that is uniformly random within the next epoch after\n * the current one.\n */\nvoid\ndecay_deadline_init(decay_t *decay) {\n\tnstime_copy(&decay->deadline, &decay->epoch);\n\tnstime_add(&decay->deadline, &decay->interval);\n\tif (decay_ms_read(decay) > 0) {\n\t\tnstime_t jitter;\n\n\t\tnstime_init(&jitter, prng_range_u64(&decay->jitter_state,\n\t\t    nstime_ns(&decay->interval)));\n\t\tnstime_add(&decay->deadline, &jitter);\n\t}\n}\n\nvoid\ndecay_reinit(decay_t *decay, nstime_t *cur_time, ssize_t decay_ms) {\n\tatomic_store_zd(&decay->time_ms, decay_ms, ATOMIC_RELAXED);\n\tif (decay_ms > 0) {\n\t\tnstime_init(&decay->interval, (uint64_t)decay_ms *\n\t\t    KQU(1000000));\n\t\tnstime_idivide(&decay->interval, SMOOTHSTEP_NSTEPS);\n\t}\n\n\tnstime_copy(&decay->epoch, cur_time);\n\tdecay->jitter_state = (uint64_t)(uintptr_t)decay;\n\tdecay_deadline_init(decay);\n\tdecay->nunpurged = 0;\n\tmemset(decay->backlog, 0, SMOOTHSTEP_NSTEPS * sizeof(size_t));\n}\n\nbool\ndecay_init(decay_t *decay, nstime_t *cur_time, ssize_t decay_ms) {\n\tif (config_debug) {\n\t\tfor (size_t i = 0; i < sizeof(decay_t); i++) {\n\t\t\tassert(((char *)decay)[i] == 0);\n\t\t}\n\t\tdecay->ceil_npages = 0;\n\t}\n\tif (malloc_mutex_init(&decay->mtx, \"decay\", WITNESS_RANK_DECAY,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tdecay->purging = false;\n\tdecay_reinit(decay, cur_time, decay_ms);\n\treturn false;\n}\n\nbool\ndecay_ms_valid(ssize_t decay_ms) {\n\tif (decay_ms < -1) {\n\t\treturn false;\n\t}\n\tif (decay_ms == -1 || (uint64_t)decay_ms <= NSTIME_SEC_MAX *\n\t    KQU(1000)) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\nstatic 
void\ndecay_maybe_update_time(decay_t *decay, nstime_t *new_time) {\n\tif (unlikely(!nstime_monotonic() && nstime_compare(&decay->epoch,\n\t    new_time) > 0)) {\n\t\t/*\n\t\t * Time went backwards.  Move the epoch back in time and\n\t\t * generate a new deadline, with the expectation that time\n\t\t * typically flows forward for long enough periods of time that\n\t\t * epochs complete.  Unfortunately, this strategy is susceptible\n\t\t * to clock jitter triggering premature epoch advances, but\n\t\t * clock jitter estimation and compensation isn't feasible here\n\t\t * because calls into this code are event-driven.\n\t\t */\n\t\tnstime_copy(&decay->epoch, new_time);\n\t\tdecay_deadline_init(decay);\n\t} else {\n\t\t/* Verify that time does not go backwards. */\n\t\tassert(nstime_compare(&decay->epoch, new_time) <= 0);\n\t}\n}\n\nstatic size_t\ndecay_backlog_npages_limit(const decay_t *decay) {\n\t/*\n\t * For each element of decay_backlog, multiply by the corresponding\n\t * fixed-point smoothstep decay factor.  
Sum the products, then divide\n\t * to round down to the nearest whole number of pages.\n\t */\n\tuint64_t sum = 0;\n\tfor (unsigned i = 0; i < SMOOTHSTEP_NSTEPS; i++) {\n\t\tsum += decay->backlog[i] * h_steps[i];\n\t}\n\tsize_t npages_limit_backlog = (size_t)(sum >> SMOOTHSTEP_BFP);\n\n\treturn npages_limit_backlog;\n}\n\n/*\n * Update backlog, assuming that 'nadvance_u64' time intervals have passed.\n * Trailing 'nadvance_u64' records should be erased and 'current_npages' is\n * placed as the newest record.\n */\nstatic void\ndecay_backlog_update(decay_t *decay, uint64_t nadvance_u64,\n    size_t current_npages) {\n\tif (nadvance_u64 >= SMOOTHSTEP_NSTEPS) {\n\t\tmemset(decay->backlog, 0, (SMOOTHSTEP_NSTEPS-1) *\n\t\t    sizeof(size_t));\n\t} else {\n\t\tsize_t nadvance_z = (size_t)nadvance_u64;\n\n\t\tassert((uint64_t)nadvance_z == nadvance_u64);\n\n\t\tmemmove(decay->backlog, &decay->backlog[nadvance_z],\n\t\t    (SMOOTHSTEP_NSTEPS - nadvance_z) * sizeof(size_t));\n\t\tif (nadvance_z > 1) {\n\t\t\tmemset(&decay->backlog[SMOOTHSTEP_NSTEPS -\n\t\t\t    nadvance_z], 0, (nadvance_z-1) * sizeof(size_t));\n\t\t}\n\t}\n\n\tsize_t npages_delta = (current_npages > decay->nunpurged) ?\n\t    current_npages - decay->nunpurged : 0;\n\tdecay->backlog[SMOOTHSTEP_NSTEPS-1] = npages_delta;\n\n\tif (config_debug) {\n\t\tif (current_npages > decay->ceil_npages) {\n\t\t\tdecay->ceil_npages = current_npages;\n\t\t}\n\t\tsize_t npages_limit = decay_backlog_npages_limit(decay);\n\t\tassert(decay->ceil_npages >= npages_limit);\n\t\tif (decay->ceil_npages > npages_limit) {\n\t\t\tdecay->ceil_npages = npages_limit;\n\t\t}\n\t}\n}\n\nstatic inline bool\ndecay_deadline_reached(const decay_t *decay, const nstime_t *time) {\n\treturn (nstime_compare(&decay->deadline, time) <= 0);\n}\n\nuint64_t\ndecay_npages_purge_in(decay_t *decay, nstime_t *time, size_t npages_new) {\n\tuint64_t decay_interval_ns = decay_epoch_duration_ns(decay);\n\tsize_t n_epoch = (size_t)(nstime_ns(time) / 
decay_interval_ns);\n\n\tuint64_t npages_purge;\n\tif (n_epoch >= SMOOTHSTEP_NSTEPS) {\n\t\tnpages_purge = npages_new;\n\t} else {\n\t\tuint64_t h_steps_max = h_steps[SMOOTHSTEP_NSTEPS - 1];\n\t\tassert(h_steps_max >=\n\t\t    h_steps[SMOOTHSTEP_NSTEPS - 1 - n_epoch]);\n\t\tnpages_purge = npages_new * (h_steps_max -\n\t\t    h_steps[SMOOTHSTEP_NSTEPS - 1 - n_epoch]);\n\t\tnpages_purge >>= SMOOTHSTEP_BFP;\n\t}\n\treturn npages_purge;\n}\n\nbool\ndecay_maybe_advance_epoch(decay_t *decay, nstime_t *new_time,\n    size_t npages_current) {\n\t/* Handle possible non-monotonicity of time. */\n\tdecay_maybe_update_time(decay, new_time);\n\n\tif (!decay_deadline_reached(decay, new_time)) {\n\t\treturn false;\n\t}\n\tnstime_t delta;\n\tnstime_copy(&delta, new_time);\n\tnstime_subtract(&delta, &decay->epoch);\n\n\tuint64_t nadvance_u64 = nstime_divide(&delta, &decay->interval);\n\tassert(nadvance_u64 > 0);\n\n\t/* Add nadvance_u64 decay intervals to epoch. */\n\tnstime_copy(&delta, &decay->interval);\n\tnstime_imultiply(&delta, nadvance_u64);\n\tnstime_add(&decay->epoch, &delta);\n\n\t/* Set a new deadline. */\n\tdecay_deadline_init(decay);\n\n\t/* Update the backlog. */\n\tdecay_backlog_update(decay, nadvance_u64, npages_current);\n\n\tdecay->npages_limit = decay_backlog_npages_limit(decay);\n\tdecay->nunpurged = (decay->npages_limit > npages_current) ?\n\t    decay->npages_limit : npages_current;\n\n\treturn true;\n}\n\n/*\n * Calculate how many pages should be purged after 'interval'.\n *\n * First, calculate how many pages should remain at the moment, then subtract\n * the number of pages that should remain after 'interval'. The difference is\n * how many pages should be purged until then.\n *\n * The number of pages that should remain at a specific moment is calculated\n * like this: pages(now) = sum(backlog[i] * h_steps[i]). 
After 'interval'\n * passes, backlog would shift 'interval' positions to the left and sigmoid\n * curve would be applied starting with backlog[interval].\n *\n * The implementation doesn't directly map to the description, but it's\n * essentially the same calculation, optimized to avoid iterating over\n * [interval..SMOOTHSTEP_NSTEPS) twice.\n */\nstatic inline size_t\ndecay_npurge_after_interval(decay_t *decay, size_t interval) {\n\tsize_t i;\n\tuint64_t sum = 0;\n\tfor (i = 0; i < interval; i++) {\n\t\tsum += decay->backlog[i] * h_steps[i];\n\t}\n\tfor (; i < SMOOTHSTEP_NSTEPS; i++) {\n\t\tsum += decay->backlog[i] *\n\t\t    (h_steps[i] - h_steps[i - interval]);\n\t}\n\n\treturn (size_t)(sum >> SMOOTHSTEP_BFP);\n}\n\nuint64_t decay_ns_until_purge(decay_t *decay, size_t npages_current,\n    uint64_t npages_threshold) {\n\tif (!decay_gradually(decay)) {\n\t\treturn DECAY_UNBOUNDED_TIME_TO_PURGE;\n\t}\n\tuint64_t decay_interval_ns = decay_epoch_duration_ns(decay);\n\tassert(decay_interval_ns > 0);\n\tif (npages_current == 0) {\n\t\tunsigned i;\n\t\tfor (i = 0; i < SMOOTHSTEP_NSTEPS; i++) {\n\t\t\tif (decay->backlog[i] > 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (i == SMOOTHSTEP_NSTEPS) {\n\t\t\t/* No dirty pages recorded.  Sleep indefinitely. */\n\t\t\treturn DECAY_UNBOUNDED_TIME_TO_PURGE;\n\t\t}\n\t}\n\tif (npages_current <= npages_threshold) {\n\t\t/* Use max interval. */\n\t\treturn decay_interval_ns * SMOOTHSTEP_NSTEPS;\n\t}\n\n\t/* Minimal 2 intervals to ensure reaching next epoch deadline. 
*/\n\tsize_t lb = 2;\n\tsize_t ub = SMOOTHSTEP_NSTEPS;\n\n\tsize_t npurge_lb, npurge_ub;\n\tnpurge_lb = decay_npurge_after_interval(decay, lb);\n\tif (npurge_lb > npages_threshold) {\n\t\treturn decay_interval_ns * lb;\n\t}\n\tnpurge_ub = decay_npurge_after_interval(decay, ub);\n\tif (npurge_ub < npages_threshold) {\n\t\treturn decay_interval_ns * ub;\n\t}\n\n\tunsigned n_search = 0;\n\tsize_t target, npurge;\n\twhile ((npurge_lb + npages_threshold < npurge_ub) && (lb + 2 < ub)) {\n\t\ttarget = (lb + ub) / 2;\n\t\tnpurge = decay_npurge_after_interval(decay, target);\n\t\tif (npurge > npages_threshold) {\n\t\t\tub = target;\n\t\t\tnpurge_ub = npurge;\n\t\t} else {\n\t\t\tlb = target;\n\t\t\tnpurge_lb = npurge;\n\t\t}\n\t\tassert(n_search < lg_floor(SMOOTHSTEP_NSTEPS) + 1);\n\t\t++n_search;\n\t}\n\treturn decay_interval_ns * (ub + lb) / 2;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/div.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n#include \"jemalloc/internal/div.h\"\n\n#include \"jemalloc/internal/assert.h\"\n\n/*\n * Suppose we have n = q * d, all integers. We know n and d, and want q = n / d.\n *\n * For any k, we have (here, all division is exact; not C-style rounding):\n * floor(ceil(2^k / d) * n / 2^k) = floor((2^k + r) / d * n / 2^k), where\n * r = (-2^k) mod d.\n *\n * Expanding this out:\n * ... = floor(2^k / d * n / 2^k + r / d * n / 2^k)\n *     = floor(n / d + (r / d) * (n / 2^k)).\n *\n * The fractional part of n / d is 0 (because of the assumption that d divides n\n * exactly), so we have:\n * ... = n / d + floor((r / d) * (n / 2^k))\n *\n * So that our initial expression is equal to the quantity we seek, so long as\n * (r / d) * (n / 2^k) < 1.\n *\n * r is a remainder mod d, so r < d and r / d < 1 always. We can make\n * n / 2 ^ k < 1 by setting k = 32. This gets us a value of magic that works.\n */\n\nvoid\ndiv_init(div_info_t *div_info, size_t d) {\n\t/* Nonsensical. */\n\tassert(d != 0);\n\t/*\n\t * This would make the value of magic too high to fit into a uint32_t\n\t * (we would want magic = 2^32 exactly). This would mess with code gen\n\t * on 32-bit machines.\n\t */\n\tassert(d != 1);\n\n\tuint64_t two_to_k = ((uint64_t)1 << 32);\n\tuint32_t magic = (uint32_t)(two_to_k / d);\n\n\t/*\n\t * We want magic = ceil(2^k / d), but C gives us floor. We have to\n\t * increment it unless the result was exact (i.e. unless d is a power of\n\t * two).\n\t */\n\tif (two_to_k % d != 0) {\n\t\tmagic++;\n\t}\n\tdiv_info->magic = magic;\n#ifdef JEMALLOC_DEBUG\n\tdiv_info->d = d;\n#endif\n}\n"
  },
  {
    "path": "deps/jemalloc/src/ecache.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/san.h\"\n\nbool\necache_init(tsdn_t *tsdn, ecache_t *ecache, extent_state_t state, unsigned ind,\n    bool delay_coalesce) {\n\tif (malloc_mutex_init(&ecache->mtx, \"extents\", WITNESS_RANK_EXTENTS,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tecache->state = state;\n\tecache->ind = ind;\n\tecache->delay_coalesce = delay_coalesce;\n\teset_init(&ecache->eset, state);\n\teset_init(&ecache->guarded_eset, state);\n\n\treturn false;\n}\n\nvoid\necache_prefork(tsdn_t *tsdn, ecache_t *ecache) {\n\tmalloc_mutex_prefork(tsdn, &ecache->mtx);\n}\n\nvoid\necache_postfork_parent(tsdn_t *tsdn, ecache_t *ecache) {\n\tmalloc_mutex_postfork_parent(tsdn, &ecache->mtx);\n}\n\nvoid\necache_postfork_child(tsdn_t *tsdn, ecache_t *ecache) {\n\tmalloc_mutex_postfork_child(tsdn, &ecache->mtx);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/edata.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nph_gen(, edata_avail, edata_t, avail_link,\n    edata_esnead_comp)\nph_gen(, edata_heap, edata_t, heap_link, edata_snad_comp)\n"
  },
  {
    "path": "deps/jemalloc/src/edata_cache.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nbool\nedata_cache_init(edata_cache_t *edata_cache, base_t *base) {\n\tedata_avail_new(&edata_cache->avail);\n\t/*\n\t * This is not strictly necessary, since the edata_cache_t is only\n\t * created inside an arena, which is zeroed on creation.  But this is\n\t * handy as a safety measure.\n\t */\n\tatomic_store_zu(&edata_cache->count, 0, ATOMIC_RELAXED);\n\tif (malloc_mutex_init(&edata_cache->mtx, \"edata_cache\",\n\t    WITNESS_RANK_EDATA_CACHE, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tedata_cache->base = base;\n\treturn false;\n}\n\nedata_t *\nedata_cache_get(tsdn_t *tsdn, edata_cache_t *edata_cache) {\n\tmalloc_mutex_lock(tsdn, &edata_cache->mtx);\n\tedata_t *edata = edata_avail_first(&edata_cache->avail);\n\tif (edata == NULL) {\n\t\tmalloc_mutex_unlock(tsdn, &edata_cache->mtx);\n\t\treturn base_alloc_edata(tsdn, edata_cache->base);\n\t}\n\tedata_avail_remove(&edata_cache->avail, edata);\n\tatomic_load_sub_store_zu(&edata_cache->count, 1);\n\tmalloc_mutex_unlock(tsdn, &edata_cache->mtx);\n\treturn edata;\n}\n\nvoid\nedata_cache_put(tsdn_t *tsdn, edata_cache_t *edata_cache, edata_t *edata) {\n\tmalloc_mutex_lock(tsdn, &edata_cache->mtx);\n\tedata_avail_insert(&edata_cache->avail, edata);\n\tatomic_load_add_store_zu(&edata_cache->count, 1);\n\tmalloc_mutex_unlock(tsdn, &edata_cache->mtx);\n}\n\nvoid\nedata_cache_prefork(tsdn_t *tsdn, edata_cache_t *edata_cache) {\n\tmalloc_mutex_prefork(tsdn, &edata_cache->mtx);\n}\n\nvoid\nedata_cache_postfork_parent(tsdn_t *tsdn, edata_cache_t *edata_cache) {\n\tmalloc_mutex_postfork_parent(tsdn, &edata_cache->mtx);\n}\n\nvoid\nedata_cache_postfork_child(tsdn_t *tsdn, edata_cache_t *edata_cache) {\n\tmalloc_mutex_postfork_child(tsdn, &edata_cache->mtx);\n}\n\nvoid\nedata_cache_fast_init(edata_cache_fast_t *ecs, edata_cache_t *fallback) 
{\n\tedata_list_inactive_init(&ecs->list);\n\tecs->fallback = fallback;\n\tecs->disabled = false;\n}\n\nstatic void\nedata_cache_fast_try_fill_from_fallback(tsdn_t *tsdn,\n    edata_cache_fast_t *ecs) {\n\tedata_t *edata;\n\tmalloc_mutex_lock(tsdn, &ecs->fallback->mtx);\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL; i++) {\n\t\tedata = edata_avail_remove_first(&ecs->fallback->avail);\n\t\tif (edata == NULL) {\n\t\t\tbreak;\n\t\t}\n\t\tedata_list_inactive_append(&ecs->list, edata);\n\t\tatomic_load_sub_store_zu(&ecs->fallback->count, 1);\n\t}\n\tmalloc_mutex_unlock(tsdn, &ecs->fallback->mtx);\n}\n\nedata_t *\nedata_cache_fast_get(tsdn_t *tsdn, edata_cache_fast_t *ecs) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_EDATA_CACHE, 0);\n\n\tif (ecs->disabled) {\n\t\tassert(edata_list_inactive_first(&ecs->list) == NULL);\n\t\treturn edata_cache_get(tsdn, ecs->fallback);\n\t}\n\n\tedata_t *edata = edata_list_inactive_first(&ecs->list);\n\tif (edata != NULL) {\n\t\tedata_list_inactive_remove(&ecs->list, edata);\n\t\treturn edata;\n\t}\n\t/* Slow path; requires synchronization. */\n\tedata_cache_fast_try_fill_from_fallback(tsdn, ecs);\n\tedata = edata_list_inactive_first(&ecs->list);\n\tif (edata != NULL) {\n\t\tedata_list_inactive_remove(&ecs->list, edata);\n\t} else {\n\t\t/*\n\t\t * Slowest path (fallback was also empty); allocate something\n\t\t * new.\n\t\t */\n\t\tedata = base_alloc_edata(tsdn, ecs->fallback->base);\n\t}\n\treturn edata;\n}\n\nstatic void\nedata_cache_fast_flush_all(tsdn_t *tsdn, edata_cache_fast_t *ecs) {\n\t/*\n\t * You could imagine smarter cache management policies (like\n\t * only flushing down to some threshold in anticipation of\n\t * future get requests).  
But just flushing everything provides\n\t * a good opportunity to defrag too, and lets us share code between the\n\t * flush and disable pathways.\n\t */\n\tedata_t *edata;\n\tsize_t nflushed = 0;\n\tmalloc_mutex_lock(tsdn, &ecs->fallback->mtx);\n\twhile ((edata = edata_list_inactive_first(&ecs->list)) != NULL) {\n\t\tedata_list_inactive_remove(&ecs->list, edata);\n\t\tedata_avail_insert(&ecs->fallback->avail, edata);\n\t\tnflushed++;\n\t}\n\tatomic_load_add_store_zu(&ecs->fallback->count, nflushed);\n\tmalloc_mutex_unlock(tsdn, &ecs->fallback->mtx);\n}\n\nvoid\nedata_cache_fast_put(tsdn_t *tsdn, edata_cache_fast_t *ecs, edata_t *edata) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_EDATA_CACHE, 0);\n\n\tif (ecs->disabled) {\n\t\tassert(edata_list_inactive_first(&ecs->list) == NULL);\n\t\tedata_cache_put(tsdn, ecs->fallback, edata);\n\t\treturn;\n\t}\n\n\t/*\n\t * Prepend rather than append, to do LIFO ordering in the hopes of some\n\t * cache locality.\n\t */\n\tedata_list_inactive_prepend(&ecs->list, edata);\n}\n\nvoid\nedata_cache_fast_disable(tsdn_t *tsdn, edata_cache_fast_t *ecs) {\n\tedata_cache_fast_flush_all(tsdn, ecs);\n\tecs->disabled = true;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/ehooks.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n\nvoid\nehooks_init(ehooks_t *ehooks, extent_hooks_t *extent_hooks, unsigned ind) {\n\t/* All other hooks are optional; this one is not. */\n\tassert(extent_hooks->alloc != NULL);\n\tehooks->ind = ind;\n\tehooks_set_extent_hooks_ptr(ehooks, extent_hooks);\n}\n\n/*\n * If the caller specifies (!*zero), it is still possible to receive zeroed\n * memory, in which case *zero is toggled to true.  arena_extent_alloc() takes\n * advantage of this to avoid demanding zeroed extents, but taking advantage of\n * them if they are returned.\n */\nstatic void *\nextent_alloc_core(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, dss_prec_t dss_prec) {\n\tvoid *ret;\n\n\tassert(size != 0);\n\tassert(alignment != 0);\n\n\t/* \"primary\" dss. */\n\tif (have_dss && dss_prec == dss_prec_primary && (ret =\n\t    extent_alloc_dss(tsdn, arena, new_addr, size, alignment, zero,\n\t    commit)) != NULL) {\n\t\treturn ret;\n\t}\n\t/* mmap. */\n\tif ((ret = extent_alloc_mmap(new_addr, size, alignment, zero, commit))\n\t    != NULL) {\n\t\treturn ret;\n\t}\n\t/* \"secondary\" dss. */\n\tif (have_dss && dss_prec == dss_prec_secondary && (ret =\n\t    extent_alloc_dss(tsdn, arena, new_addr, size, alignment, zero,\n\t    commit)) != NULL) {\n\t\treturn ret;\n\t}\n\n\t/* All strategies for allocation failed. */\n\treturn NULL;\n}\n\nvoid *\nehooks_default_alloc_impl(tsdn_t *tsdn, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {\n\tarena_t *arena = arena_get(tsdn, arena_ind, false);\n\t/* NULL arena indicates arena_create. */\n\tassert(arena != NULL || alignment == HUGEPAGE);\n\tdss_prec_t dss = (arena == NULL) ? 
dss_prec_disabled :\n\t    (dss_prec_t)atomic_load_u(&arena->dss_prec, ATOMIC_RELAXED);\n\tvoid *ret = extent_alloc_core(tsdn, arena, new_addr, size, alignment,\n\t    zero, commit, dss);\n\tif (have_madvise_huge && ret) {\n\t\tpages_set_thp_state(ret, size);\n\t}\n\treturn ret;\n}\n\nstatic void *\nehooks_default_alloc(extent_hooks_t *extent_hooks, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {\n\treturn ehooks_default_alloc_impl(tsdn_fetch(), new_addr, size,\n\t    ALIGNMENT_CEILING(alignment, PAGE), zero, commit, arena_ind);\n}\n\nbool\nehooks_default_dalloc_impl(void *addr, size_t size) {\n\tif (!have_dss || !extent_in_dss(addr)) {\n\t\treturn extent_dalloc_mmap(addr, size);\n\t}\n\treturn true;\n}\n\nstatic bool\nehooks_default_dalloc(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    bool committed, unsigned arena_ind) {\n\treturn ehooks_default_dalloc_impl(addr, size);\n}\n\nvoid\nehooks_default_destroy_impl(void *addr, size_t size) {\n\tif (!have_dss || !extent_in_dss(addr)) {\n\t\tpages_unmap(addr, size);\n\t}\n}\n\nstatic void\nehooks_default_destroy(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    bool committed, unsigned arena_ind) {\n\tehooks_default_destroy_impl(addr, size);\n}\n\nbool\nehooks_default_commit_impl(void *addr, size_t offset, size_t length) {\n\treturn pages_commit((void *)((uintptr_t)addr + (uintptr_t)offset),\n\t    length);\n}\n\nstatic bool\nehooks_default_commit(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) {\n\treturn ehooks_default_commit_impl(addr, offset, length);\n}\n\nbool\nehooks_default_decommit_impl(void *addr, size_t offset, size_t length) {\n\treturn pages_decommit((void *)((uintptr_t)addr + (uintptr_t)offset),\n\t    length);\n}\n\nstatic bool\nehooks_default_decommit(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) 
{\n\treturn ehooks_default_decommit_impl(addr, offset, length);\n}\n\n#ifdef PAGES_CAN_PURGE_LAZY\nbool\nehooks_default_purge_lazy_impl(void *addr, size_t offset, size_t length) {\n\treturn pages_purge_lazy((void *)((uintptr_t)addr + (uintptr_t)offset),\n\t    length);\n}\n\nstatic bool\nehooks_default_purge_lazy(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) {\n\tassert(addr != NULL);\n\tassert((offset & PAGE_MASK) == 0);\n\tassert(length != 0);\n\tassert((length & PAGE_MASK) == 0);\n\treturn ehooks_default_purge_lazy_impl(addr, offset, length);\n}\n#endif\n\n#ifdef PAGES_CAN_PURGE_FORCED\nbool\nehooks_default_purge_forced_impl(void *addr, size_t offset, size_t length) {\n\treturn pages_purge_forced((void *)((uintptr_t)addr +\n\t    (uintptr_t)offset), length);\n}\n\nstatic bool\nehooks_default_purge_forced(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, size_t offset, size_t length, unsigned arena_ind) {\n\tassert(addr != NULL);\n\tassert((offset & PAGE_MASK) == 0);\n\tassert(length != 0);\n\tassert((length & PAGE_MASK) == 0);\n\treturn ehooks_default_purge_forced_impl(addr, offset, length);\n}\n#endif\n\nbool\nehooks_default_split_impl() {\n\tif (!maps_coalesce) {\n\t\t/*\n\t\t * Without retain, only whole regions can be purged (required by\n\t\t * MEM_RELEASE on Windows) -- therefore disallow splitting.  
See\n\t\t * comments in extent_head_no_merge().\n\t\t */\n\t\treturn !opt_retain;\n\t}\n\n\treturn false;\n}\n\nstatic bool\nehooks_default_split(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t size_a, size_t size_b, bool committed, unsigned arena_ind) {\n\treturn ehooks_default_split_impl();\n}\n\nbool\nehooks_default_merge_impl(tsdn_t *tsdn, void *addr_a, void *addr_b) {\n\tassert(addr_a < addr_b);\n\t/*\n\t * For non-DSS cases --\n\t * a) W/o maps_coalesce, merge is not always allowed (Windows):\n\t *   1) w/o retain, never merge (first branch below).\n\t *   2) with retain, only merge extents from the same VirtualAlloc\n\t *      region (in which case MEM_DECOMMIT is utilized for purging).\n\t *\n\t * b) With maps_coalesce, it's always possible to merge.\n\t *   1) w/o retain, always allow merge (only about dirty / muzzy).\n\t *   2) with retain, to preserve the SN / first-fit, merge is still\n\t *      disallowed if b is a head extent, i.e. no merging across\n\t *      different mmap regions.\n\t *\n\t * a2) and b2) are implemented in emap_try_acquire_edata_neighbor, and\n\t * sanity checked in the second branch below.\n\t */\n\tif (!maps_coalesce && !opt_retain) {\n\t\treturn true;\n\t}\n\tif (config_debug) {\n\t\tedata_t *a = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    addr_a);\n\t\tbool head_a = edata_is_head_get(a);\n\t\tedata_t *b = emap_edata_lookup(tsdn, &arena_emap_global,\n\t\t    addr_b);\n\t\tbool head_b = edata_is_head_get(b);\n\t\temap_assert_mapped(tsdn, &arena_emap_global, a);\n\t\temap_assert_mapped(tsdn, &arena_emap_global, b);\n\t\tassert(extent_neighbor_head_state_mergeable(head_a, head_b,\n\t\t    /* forward */ true));\n\t}\n\tif (have_dss && !extent_dss_mergeable(addr_a, addr_b)) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nbool\nehooks_default_merge(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a,\n    void *addr_b, size_t size_b, bool committed, unsigned arena_ind) {\n\ttsdn_t *tsdn = 
tsdn_fetch();\n\n\treturn ehooks_default_merge_impl(tsdn, addr_a, addr_b);\n}\n\nvoid\nehooks_default_zero_impl(void *addr, size_t size) {\n\t/*\n\t * By default, we try to zero out memory using OS-provided demand-zeroed\n\t * pages.  If the user has specifically requested hugepages, though, we\n\t * don't want to purge in the middle of a hugepage (which would break it\n\t * up), so we act conservatively and use memset.\n\t */\n\tbool needs_memset = true;\n\tif (opt_thp != thp_mode_always) {\n\t\tneeds_memset = pages_purge_forced(addr, size);\n\t}\n\tif (needs_memset) {\n\t\tmemset(addr, 0, size);\n\t}\n}\n\nvoid\nehooks_default_guard_impl(void *guard1, void *guard2) {\n\tpages_mark_guards(guard1, guard2);\n}\n\nvoid\nehooks_default_unguard_impl(void *guard1, void *guard2) {\n\tpages_unmark_guards(guard1, guard2);\n}\n\nconst extent_hooks_t ehooks_default_extent_hooks = {\n\tehooks_default_alloc,\n\tehooks_default_dalloc,\n\tehooks_default_destroy,\n\tehooks_default_commit,\n\tehooks_default_decommit,\n#ifdef PAGES_CAN_PURGE_LAZY\n\tehooks_default_purge_lazy,\n#else\n\tNULL,\n#endif\n#ifdef PAGES_CAN_PURGE_FORCED\n\tehooks_default_purge_forced,\n#else\n\tNULL,\n#endif\n\tehooks_default_split,\n\tehooks_default_merge\n};\n"
  },
  {
    "path": "deps/jemalloc/src/emap.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/emap.h\"\n\nenum emap_lock_result_e {\n\temap_lock_result_success,\n\temap_lock_result_failure,\n\temap_lock_result_no_extent\n};\ntypedef enum emap_lock_result_e emap_lock_result_t;\n\nbool\nemap_init(emap_t *emap, base_t *base, bool zeroed) {\n\treturn rtree_new(&emap->rtree, base, zeroed);\n}\n\nvoid\nemap_update_edata_state(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_state_t state) {\n\twitness_assert_positive_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE);\n\n\tedata_state_set(edata, state);\n\n\tEMAP_DECLARE_RTREE_CTX;\n\trtree_leaf_elm_t *elm1 = rtree_leaf_elm_lookup(tsdn, &emap->rtree,\n\t    rtree_ctx, (uintptr_t)edata_base_get(edata), /* dependent */ true,\n\t    /* init_missing */ false);\n\tassert(elm1 != NULL);\n\trtree_leaf_elm_t *elm2 = edata_size_get(edata) == PAGE ? NULL :\n\t    rtree_leaf_elm_lookup(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)edata_last_get(edata), /* dependent */ true,\n\t    /* init_missing */ false);\n\n\trtree_leaf_elm_state_update(tsdn, &emap->rtree, elm1, elm2, state);\n\n\temap_assert_mapped(tsdn, emap, edata);\n}\n\nstatic inline edata_t *\nemap_try_acquire_edata_neighbor_impl(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_pai_t pai, extent_state_t expected_state, bool forward,\n    bool expanding) {\n\twitness_assert_positive_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE);\n\tassert(!edata_guarded_get(edata));\n\tassert(!expanding || forward);\n\tassert(!edata_state_in_transition(expected_state));\n\tassert(expected_state == extent_state_dirty ||\n\t       expected_state == extent_state_muzzy ||\n\t       expected_state == extent_state_retained);\n\n\tvoid *neighbor_addr = forward ? 
edata_past_get(edata) :\n\t    edata_before_get(edata);\n\t/*\n\t * This is subtle; the rtree code asserts that its input pointer is\n\t * non-NULL, and this is a useful thing to check.  But it's possible\n\t * that edata corresponds to an address of (void *)PAGE (in practice,\n\t * this has only been observed on FreeBSD when address-space\n\t * randomization is on, but it could in principle happen anywhere).  In\n\t * this case, edata_before_get(edata) is NULL, triggering the assert.\n\t */\n\tif (neighbor_addr == NULL) {\n\t\treturn NULL;\n\t}\n\n\tEMAP_DECLARE_RTREE_CTX;\n\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, &emap->rtree,\n\t    rtree_ctx, (uintptr_t)neighbor_addr, /* dependent*/ false,\n\t    /* init_missing */ false);\n\tif (elm == NULL) {\n\t\treturn NULL;\n\t}\n\n\trtree_contents_t neighbor_contents = rtree_leaf_elm_read(tsdn,\n\t    &emap->rtree, elm, /* dependent */ true);\n\tif (!extent_can_acquire_neighbor(edata, neighbor_contents, pai,\n\t    expected_state, forward, expanding)) {\n\t\treturn NULL;\n\t}\n\n\t/* From this point, the neighbor edata can be safely acquired. */\n\tedata_t *neighbor = neighbor_contents.edata;\n\tassert(edata_state_get(neighbor) == expected_state);\n\temap_update_edata_state(tsdn, emap, neighbor, extent_state_merging);\n\tif (expanding) {\n\t\textent_assert_can_expand(edata, neighbor);\n\t} else {\n\t\textent_assert_can_coalesce(edata, neighbor);\n\t}\n\n\treturn neighbor;\n}\n\nedata_t *\nemap_try_acquire_edata_neighbor(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_pai_t pai, extent_state_t expected_state, bool forward) {\n\treturn emap_try_acquire_edata_neighbor_impl(tsdn, emap, edata, pai,\n\t    expected_state, forward, /* expand */ false);\n}\n\nedata_t *\nemap_try_acquire_edata_neighbor_expand(tsdn_t *tsdn, emap_t *emap,\n    edata_t *edata, extent_pai_t pai, extent_state_t expected_state) {\n\t/* Try expanding forward. 
*/\n\treturn emap_try_acquire_edata_neighbor_impl(tsdn, emap, edata, pai,\n\t    expected_state, /* forward */ true, /* expand */ true);\n}\n\nvoid\nemap_release_edata(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    extent_state_t new_state) {\n\tassert(emap_edata_in_transition(tsdn, emap, edata));\n\tassert(emap_edata_is_acquired(tsdn, emap, edata));\n\n\temap_update_edata_state(tsdn, emap, edata, new_state);\n}\n\nstatic bool\nemap_rtree_leaf_elms_lookup(tsdn_t *tsdn, emap_t *emap, rtree_ctx_t *rtree_ctx,\n    const edata_t *edata, bool dependent, bool init_missing,\n    rtree_leaf_elm_t **r_elm_a, rtree_leaf_elm_t **r_elm_b) {\n\t*r_elm_a = rtree_leaf_elm_lookup(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)edata_base_get(edata), dependent, init_missing);\n\tif (!dependent && *r_elm_a == NULL) {\n\t\treturn true;\n\t}\n\tassert(*r_elm_a != NULL);\n\n\t*r_elm_b = rtree_leaf_elm_lookup(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)edata_last_get(edata), dependent, init_missing);\n\tif (!dependent && *r_elm_b == NULL) {\n\t\treturn true;\n\t}\n\tassert(*r_elm_b != NULL);\n\n\treturn false;\n}\n\nstatic void\nemap_rtree_write_acquired(tsdn_t *tsdn, emap_t *emap, rtree_leaf_elm_t *elm_a,\n    rtree_leaf_elm_t *elm_b, edata_t *edata, szind_t szind, bool slab) {\n\trtree_contents_t contents;\n\tcontents.edata = edata;\n\tcontents.metadata.szind = szind;\n\tcontents.metadata.slab = slab;\n\tcontents.metadata.is_head = (edata == NULL) ? false :\n\t    edata_is_head_get(edata);\n\tcontents.metadata.state = (edata == NULL) ? 
0 : edata_state_get(edata);\n\trtree_leaf_elm_write(tsdn, &emap->rtree, elm_a, contents);\n\tif (elm_b != NULL) {\n\t\trtree_leaf_elm_write(tsdn, &emap->rtree, elm_b, contents);\n\t}\n}\n\nbool\nemap_register_boundary(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    szind_t szind, bool slab) {\n\tassert(edata_state_get(edata) == extent_state_active);\n\tEMAP_DECLARE_RTREE_CTX;\n\n\trtree_leaf_elm_t *elm_a, *elm_b;\n\tbool err = emap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, edata,\n\t    false, true, &elm_a, &elm_b);\n\tif (err) {\n\t\treturn true;\n\t}\n\tassert(rtree_leaf_elm_read(tsdn, &emap->rtree, elm_a,\n\t    /* dependent */ false).edata == NULL);\n\tassert(rtree_leaf_elm_read(tsdn, &emap->rtree, elm_b,\n\t    /* dependent */ false).edata == NULL);\n\temap_rtree_write_acquired(tsdn, emap, elm_a, elm_b, edata, szind, slab);\n\treturn false;\n}\n\n/* Invoked *after* emap_register_boundary. */\nvoid\nemap_register_interior(tsdn_t *tsdn, emap_t *emap, edata_t *edata,\n    szind_t szind) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\tassert(edata_slab_get(edata));\n\tassert(edata_state_get(edata) == extent_state_active);\n\n\tif (config_debug) {\n\t\t/* Making sure the boundary is registered already. 
*/\n\t\trtree_leaf_elm_t *elm_a, *elm_b;\n\t\tbool err = emap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx,\n\t\t    edata, /* dependent */ true, /* init_missing */ false,\n\t\t    &elm_a, &elm_b);\n\t\tassert(!err);\n\t\trtree_contents_t contents_a, contents_b;\n\t\tcontents_a = rtree_leaf_elm_read(tsdn, &emap->rtree, elm_a,\n\t\t    /* dependent */ true);\n\t\tcontents_b = rtree_leaf_elm_read(tsdn, &emap->rtree, elm_b,\n\t\t    /* dependent */ true);\n\t\tassert(contents_a.edata == edata && contents_b.edata == edata);\n\t\tassert(contents_a.metadata.slab && contents_b.metadata.slab);\n\t}\n\n\trtree_contents_t contents;\n\tcontents.edata = edata;\n\tcontents.metadata.szind = szind;\n\tcontents.metadata.slab = true;\n\tcontents.metadata.state = extent_state_active;\n\tcontents.metadata.is_head = false; /* Not allowed to access. */\n\n\tassert(edata_size_get(edata) > (2 << LG_PAGE));\n\trtree_write_range(tsdn, &emap->rtree, rtree_ctx,\n\t    (uintptr_t)edata_base_get(edata) + PAGE,\n\t    (uintptr_t)edata_last_get(edata) - PAGE, contents);\n}\n\nvoid\nemap_deregister_boundary(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\t/*\n\t * The edata must be either in an acquired state, or protected by state\n\t * based locks.\n\t */\n\tif (!emap_edata_is_acquired(tsdn, emap, edata)) {\n\t\twitness_assert_positive_depth_to_rank(\n\t\t    tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE);\n\t}\n\n\tEMAP_DECLARE_RTREE_CTX;\n\trtree_leaf_elm_t *elm_a, *elm_b;\n\n\temap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, edata,\n\t    true, false, &elm_a, &elm_b);\n\temap_rtree_write_acquired(tsdn, emap, elm_a, elm_b, NULL, SC_NSIZES,\n\t    false);\n}\n\nvoid\nemap_deregister_interior(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\tassert(edata_slab_get(edata));\n\tif (edata_size_get(edata) > (2 << LG_PAGE)) {\n\t\trtree_clear_range(tsdn, &emap->rtree, rtree_ctx,\n\t\t    (uintptr_t)edata_base_get(edata) + PAGE,\n\t\t    (uintptr_t)edata_last_get(edata) 
- PAGE);\n\t}\n}\n\nvoid\nemap_remap(tsdn_t *tsdn, emap_t *emap, edata_t *edata, szind_t szind,\n    bool slab) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\tif (szind != SC_NSIZES) {\n\t\trtree_contents_t contents;\n\t\tcontents.edata = edata;\n\t\tcontents.metadata.szind = szind;\n\t\tcontents.metadata.slab = slab;\n\t\tcontents.metadata.is_head = edata_is_head_get(edata);\n\t\tcontents.metadata.state = edata_state_get(edata);\n\n\t\trtree_write(tsdn, &emap->rtree, rtree_ctx,\n\t\t    (uintptr_t)edata_addr_get(edata), contents);\n\t\t/*\n\t\t * Recall that this is called only for active->inactive and\n\t\t * inactive->active transitions (since only active extents have\n\t\t * meaningful values for szind and slab).  Active, non-slab\n\t\t * extents only need to handle lookups at their head (on\n\t\t * deallocation), so we don't bother filling in the end\n\t\t * boundary.\n\t\t *\n\t\t * For slab extents, we do the end-mapping change.  This still\n\t\t * leaves the interior unmodified; an emap_register_interior\n\t\t * call is coming in those cases, though.\n\t\t */\n\t\tif (slab && edata_size_get(edata) > PAGE) {\n\t\t\tuintptr_t key = (uintptr_t)edata_past_get(edata)\n\t\t\t    - (uintptr_t)PAGE;\n\t\t\trtree_write(tsdn, &emap->rtree, rtree_ctx, key,\n\t\t\t    contents);\n\t\t}\n\t}\n}\n\nbool\nemap_split_prepare(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *edata, size_t size_a, edata_t *trail, size_t size_b) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\t/*\n\t * We use incorrect constants for things like arena ind, zero, ranged,\n\t * and commit state, and head status.  
This is a fake edata_t, used to\n\t * facilitate a lookup.\n\t */\n\tedata_t lead = {0};\n\tedata_init(&lead, 0U, edata_addr_get(edata), size_a, false, 0, 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\n\temap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, &lead, false, true,\n\t    &prepare->lead_elm_a, &prepare->lead_elm_b);\n\temap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, trail, false, true,\n\t    &prepare->trail_elm_a, &prepare->trail_elm_b);\n\n\tif (prepare->lead_elm_a == NULL || prepare->lead_elm_b == NULL\n\t    || prepare->trail_elm_a == NULL || prepare->trail_elm_b == NULL) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\nvoid\nemap_split_commit(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, size_t size_a, edata_t *trail, size_t size_b) {\n\t/*\n\t * We should think about not writing to the lead leaf element.  We can\n\t * get into situations where a racing realloc-like call can disagree\n\t * with a size lookup request.  
I think it's fine to declare that these\n\t * situations are race bugs, but there's an argument to be made that for\n\t * things like xallocx, a size lookup call should return either the old\n\t * size or the new size, but not anything else.\n\t */\n\temap_rtree_write_acquired(tsdn, emap, prepare->lead_elm_a,\n\t    prepare->lead_elm_b, lead, SC_NSIZES, /* slab */ false);\n\temap_rtree_write_acquired(tsdn, emap, prepare->trail_elm_a,\n\t    prepare->trail_elm_b, trail, SC_NSIZES, /* slab */ false);\n}\n\nvoid\nemap_merge_prepare(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, edata_t *trail) {\n\tEMAP_DECLARE_RTREE_CTX;\n\temap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, lead, true, false,\n\t    &prepare->lead_elm_a, &prepare->lead_elm_b);\n\temap_rtree_leaf_elms_lookup(tsdn, emap, rtree_ctx, trail, true, false,\n\t    &prepare->trail_elm_a, &prepare->trail_elm_b);\n}\n\nvoid\nemap_merge_commit(tsdn_t *tsdn, emap_t *emap, emap_prepare_t *prepare,\n    edata_t *lead, edata_t *trail) {\n\trtree_contents_t clear_contents;\n\tclear_contents.edata = NULL;\n\tclear_contents.metadata.szind = SC_NSIZES;\n\tclear_contents.metadata.slab = false;\n\tclear_contents.metadata.is_head = false;\n\tclear_contents.metadata.state = (extent_state_t)0;\n\n\tif (prepare->lead_elm_b != NULL) {\n\t\trtree_leaf_elm_write(tsdn, &emap->rtree,\n\t\t    prepare->lead_elm_b, clear_contents);\n\t}\n\n\trtree_leaf_elm_t *merged_b;\n\tif (prepare->trail_elm_b != NULL) {\n\t\trtree_leaf_elm_write(tsdn, &emap->rtree,\n\t\t    prepare->trail_elm_a, clear_contents);\n\t\tmerged_b = prepare->trail_elm_b;\n\t} else {\n\t\tmerged_b = prepare->trail_elm_a;\n\t}\n\n\temap_rtree_write_acquired(tsdn, emap, prepare->lead_elm_a, merged_b,\n\t    lead, SC_NSIZES, false);\n}\n\nvoid\nemap_do_assert_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\tEMAP_DECLARE_RTREE_CTX;\n\n\trtree_contents_t contents = rtree_read(tsdn, &emap->rtree, rtree_ctx,\n\t    
(uintptr_t)edata_base_get(edata));\n\tassert(contents.edata == edata);\n\tassert(contents.metadata.is_head == edata_is_head_get(edata));\n\tassert(contents.metadata.state == edata_state_get(edata));\n}\n\nvoid\nemap_do_assert_not_mapped(tsdn_t *tsdn, emap_t *emap, edata_t *edata) {\n\temap_full_alloc_ctx_t context1 = {0};\n\temap_full_alloc_ctx_try_lookup(tsdn, emap, edata_base_get(edata),\n\t    &context1);\n\tassert(context1.edata == NULL);\n\n\temap_full_alloc_ctx_t context2 = {0};\n\temap_full_alloc_ctx_try_lookup(tsdn, emap, edata_last_get(edata),\n\t    &context2);\n\tassert(context2.edata == NULL);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/eset.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/eset.h\"\n\n#define ESET_NPSIZES (SC_NPSIZES + 1)\n\nstatic void\neset_bin_init(eset_bin_t *bin) {\n\tedata_heap_new(&bin->heap);\n\t/*\n\t * heap_min doesn't need initialization; it gets filled in when the bin\n\t * goes from non-empty to empty.\n\t */\n}\n\nstatic void\neset_bin_stats_init(eset_bin_stats_t *bin_stats) {\n\tatomic_store_zu(&bin_stats->nextents, 0, ATOMIC_RELAXED);\n\tatomic_store_zu(&bin_stats->nbytes, 0, ATOMIC_RELAXED);\n}\n\nvoid\neset_init(eset_t *eset, extent_state_t state) {\n\tfor (unsigned i = 0; i < ESET_NPSIZES; i++) {\n\t\teset_bin_init(&eset->bins[i]);\n\t\teset_bin_stats_init(&eset->bin_stats[i]);\n\t}\n\tfb_init(eset->bitmap, ESET_NPSIZES);\n\tedata_list_inactive_init(&eset->lru);\n\teset->state = state;\n}\n\nsize_t\neset_npages_get(eset_t *eset) {\n\treturn atomic_load_zu(&eset->npages, ATOMIC_RELAXED);\n}\n\nsize_t\neset_nextents_get(eset_t *eset, pszind_t pind) {\n\treturn atomic_load_zu(&eset->bin_stats[pind].nextents, ATOMIC_RELAXED);\n}\n\nsize_t\neset_nbytes_get(eset_t *eset, pszind_t pind) {\n\treturn atomic_load_zu(&eset->bin_stats[pind].nbytes, ATOMIC_RELAXED);\n}\n\nstatic void\neset_stats_add(eset_t *eset, pszind_t pind, size_t sz) {\n\tsize_t cur = atomic_load_zu(&eset->bin_stats[pind].nextents,\n\t    ATOMIC_RELAXED);\n\tatomic_store_zu(&eset->bin_stats[pind].nextents, cur + 1,\n\t    ATOMIC_RELAXED);\n\tcur = atomic_load_zu(&eset->bin_stats[pind].nbytes, ATOMIC_RELAXED);\n\tatomic_store_zu(&eset->bin_stats[pind].nbytes, cur + sz,\n\t    ATOMIC_RELAXED);\n}\n\nstatic void\neset_stats_sub(eset_t *eset, pszind_t pind, size_t sz) {\n\tsize_t cur = atomic_load_zu(&eset->bin_stats[pind].nextents,\n\t    ATOMIC_RELAXED);\n\tatomic_store_zu(&eset->bin_stats[pind].nextents, cur - 1,\n\t    ATOMIC_RELAXED);\n\tcur = atomic_load_zu(&eset->bin_stats[pind].nbytes, 
ATOMIC_RELAXED);\n\tatomic_store_zu(&eset->bin_stats[pind].nbytes, cur - sz,\n\t    ATOMIC_RELAXED);\n}\n\nvoid\neset_insert(eset_t *eset, edata_t *edata) {\n\tassert(edata_state_get(edata) == eset->state);\n\n\tsize_t size = edata_size_get(edata);\n\tsize_t psz = sz_psz_quantize_floor(size);\n\tpszind_t pind = sz_psz2ind(psz);\n\n\tedata_cmp_summary_t edata_cmp_summary = edata_cmp_summary_get(edata);\n\tif (edata_heap_empty(&eset->bins[pind].heap)) {\n\t\tfb_set(eset->bitmap, ESET_NPSIZES, (size_t)pind);\n\t\t/* Only element is automatically the min element. */\n\t\teset->bins[pind].heap_min = edata_cmp_summary;\n\t} else {\n\t\t/*\n\t\t * There's already a min element; update the summary if we're\n\t\t * about to insert a lower one.\n\t\t */\n\t\tif (edata_cmp_summary_comp(edata_cmp_summary,\n\t\t    eset->bins[pind].heap_min) < 0) {\n\t\t\teset->bins[pind].heap_min = edata_cmp_summary;\n\t\t}\n\t}\n\tedata_heap_insert(&eset->bins[pind].heap, edata);\n\n\tif (config_stats) {\n\t\teset_stats_add(eset, pind, size);\n\t}\n\n\tedata_list_inactive_append(&eset->lru, edata);\n\tsize_t npages = size >> LG_PAGE;\n\t/*\n\t * All modifications to npages hold the mutex (as asserted above), so we\n\t * don't need an atomic fetch-add; we can get by with a load followed by\n\t * a store.\n\t */\n\tsize_t cur_eset_npages =\n\t    atomic_load_zu(&eset->npages, ATOMIC_RELAXED);\n\tatomic_store_zu(&eset->npages, cur_eset_npages + npages,\n\t    ATOMIC_RELAXED);\n}\n\nvoid\neset_remove(eset_t *eset, edata_t *edata) {\n\tassert(edata_state_get(edata) == eset->state ||\n\t    edata_state_in_transition(edata_state_get(edata)));\n\n\tsize_t size = edata_size_get(edata);\n\tsize_t psz = sz_psz_quantize_floor(size);\n\tpszind_t pind = sz_psz2ind(psz);\n\tif (config_stats) {\n\t\teset_stats_sub(eset, pind, size);\n\t}\n\n\tedata_cmp_summary_t edata_cmp_summary = edata_cmp_summary_get(edata);\n\tedata_heap_remove(&eset->bins[pind].heap, edata);\n\tif 
(edata_heap_empty(&eset->bins[pind].heap)) {\n\t\tfb_unset(eset->bitmap, ESET_NPSIZES, (size_t)pind);\n\t} else {\n\t\t/*\n\t\t * This is a little weird; we compare if the summaries are\n\t\t * equal, rather than if the edata we removed was the heap\n\t\t * minimum.  The reason why is that getting the heap minimum\n\t\t * can cause a pairing heap merge operation.  We can avoid this\n\t\t * if we only update the min if it's changed, in which case the\n\t\t * summaries of the removed element and the min element should\n\t\t * compare equal.\n\t\t */\n\t\tif (edata_cmp_summary_comp(edata_cmp_summary,\n\t\t    eset->bins[pind].heap_min) == 0) {\n\t\t\teset->bins[pind].heap_min = edata_cmp_summary_get(\n\t\t\t    edata_heap_first(&eset->bins[pind].heap));\n\t\t}\n\t}\n\tedata_list_inactive_remove(&eset->lru, edata);\n\tsize_t npages = size >> LG_PAGE;\n\t/*\n\t * As in eset_insert, we hold eset->mtx and so don't need atomic\n\t * operations for updating eset->npages.\n\t */\n\tsize_t cur_extents_npages =\n\t    atomic_load_zu(&eset->npages, ATOMIC_RELAXED);\n\tassert(cur_extents_npages >= npages);\n\tatomic_store_zu(&eset->npages,\n\t    cur_extents_npages - (size >> LG_PAGE), ATOMIC_RELAXED);\n}\n\n/*\n * Find an extent with size [min_size, max_size) to satisfy the alignment\n * requirement.  
For each size, try only the first extent in the heap.\n */\nstatic edata_t *\neset_fit_alignment(eset_t *eset, size_t min_size, size_t max_size,\n    size_t alignment) {\n\tpszind_t pind = sz_psz2ind(sz_psz_quantize_ceil(min_size));\n\tpszind_t pind_max = sz_psz2ind(sz_psz_quantize_ceil(max_size));\n\n\tfor (pszind_t i =\n\t    (pszind_t)fb_ffs(eset->bitmap, ESET_NPSIZES, (size_t)pind);\n\t    i < pind_max;\n\t    i = (pszind_t)fb_ffs(eset->bitmap, ESET_NPSIZES, (size_t)i + 1)) {\n\t\tassert(i < SC_NPSIZES);\n\t\tassert(!edata_heap_empty(&eset->bins[i].heap));\n\t\tedata_t *edata = edata_heap_first(&eset->bins[i].heap);\n\t\tuintptr_t base = (uintptr_t)edata_base_get(edata);\n\t\tsize_t candidate_size = edata_size_get(edata);\n\t\tassert(candidate_size >= min_size);\n\n\t\tuintptr_t next_align = ALIGNMENT_CEILING((uintptr_t)base,\n\t\t    PAGE_CEILING(alignment));\n\t\tif (base > next_align || base + candidate_size <= next_align) {\n\t\t\t/* Overflow or not crossing the next alignment. */\n\t\t\tcontinue;\n\t\t}\n\n\t\tsize_t leadsize = next_align - base;\n\t\tif (candidate_size - leadsize >= min_size) {\n\t\t\treturn edata;\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\n/*\n * Do first-fit extent selection, i.e. select the oldest/lowest extent that is\n * large enough.\n *\n * lg_max_fit is the (log of the) maximum ratio between the requested size and\n * the returned size that we'll allow.  This can reduce fragmentation by\n * avoiding reusing and splitting large extents for smaller sizes.  In practice,\n * it's set to opt_lg_extent_max_active_fit for the dirty eset and SC_PTR_BITS\n * for others.\n */\nstatic edata_t *\neset_first_fit(eset_t *eset, size_t size, bool exact_only,\n    unsigned lg_max_fit) {\n\tedata_t *ret = NULL;\n\tedata_cmp_summary_t ret_summ JEMALLOC_CC_SILENCE_INIT({0});\n\n\tpszind_t pind = sz_psz2ind(sz_psz_quantize_ceil(size));\n\n\tif (exact_only) {\n\t\treturn edata_heap_empty(&eset->bins[pind].heap) ? 
NULL :\n\t\t    edata_heap_first(&eset->bins[pind].heap);\n\t}\n\n\tfor (pszind_t i =\n\t    (pszind_t)fb_ffs(eset->bitmap, ESET_NPSIZES, (size_t)pind);\n\t    i < ESET_NPSIZES;\n\t    i = (pszind_t)fb_ffs(eset->bitmap, ESET_NPSIZES, (size_t)i + 1)) {\n\t\tassert(!edata_heap_empty(&eset->bins[i].heap));\n\t\tif (lg_max_fit == SC_PTR_BITS) {\n\t\t\t/*\n\t\t\t * We'll shift by this below, and shifting out all the\n\t\t\t * bits is undefined.  Decreasing is safe, since the\n\t\t\t * page size is larger than 1 byte.\n\t\t\t */\n\t\t\tlg_max_fit = SC_PTR_BITS - 1;\n\t\t}\n\t\tif ((sz_pind2sz(i) >> lg_max_fit) > size) {\n\t\t\tbreak;\n\t\t}\n\t\tif (ret == NULL || edata_cmp_summary_comp(\n\t\t    eset->bins[i].heap_min, ret_summ) < 0) {\n\t\t\t/*\n\t\t\t * We grab the edata as early as possible, even though\n\t\t\t * we might change it later.  Practically, a large\n\t\t\t * portion of eset_fit calls succeed at the first valid\n\t\t\t * index, so this doesn't cost much, and we get the\n\t\t\t * effect of prefetching the edata as early as possible.\n\t\t\t */\n\t\t\tedata_t *edata = edata_heap_first(&eset->bins[i].heap);\n\t\t\tassert(edata_size_get(edata) >= size);\n\t\t\tassert(ret == NULL || edata_snad_comp(edata, ret) < 0);\n\t\t\tassert(ret == NULL || edata_cmp_summary_comp(\n\t\t\t    eset->bins[i].heap_min,\n\t\t\t    edata_cmp_summary_get(edata)) == 0);\n\t\t\tret = edata;\n\t\t\tret_summ = eset->bins[i].heap_min;\n\t\t}\n\t\tif (i == SC_NPSIZES) {\n\t\t\tbreak;\n\t\t}\n\t\tassert(i < SC_NPSIZES);\n\t}\n\n\treturn ret;\n}\n\nedata_t *\neset_fit(eset_t *eset, size_t esize, size_t alignment, bool exact_only,\n    unsigned lg_max_fit) {\n\tsize_t max_size = esize + PAGE_CEILING(alignment) - PAGE;\n\t/* Beware size_t wrap-around. 
*/\n\tif (max_size < esize) {\n\t\treturn NULL;\n\t}\n\n\tedata_t *edata = eset_first_fit(eset, max_size, exact_only, lg_max_fit);\n\n\tif (alignment > PAGE && edata == NULL) {\n\t\t/*\n\t\t * max_size guarantees the alignment requirement but is rather\n\t\t * pessimistic.  Next we try to satisfy the aligned allocation\n\t\t * with sizes in [esize, max_size).\n\t\t */\n\t\tedata = eset_fit_alignment(eset, esize, max_size, alignment);\n\t}\n\n\treturn edata;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/exp_grow.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nvoid\nexp_grow_init(exp_grow_t *exp_grow) {\n\texp_grow->next = sz_psz2ind(HUGEPAGE);\n\texp_grow->limit = sz_psz2ind(SC_LARGE_MAXCLASS);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/extent.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/emap.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/ph.h\"\n#include \"jemalloc/internal/mutex.h\"\n\n/******************************************************************************/\n/* Data. */\n\nsize_t opt_lg_extent_max_active_fit = LG_EXTENT_MAX_ACTIVE_FIT_DEFAULT;\n\nstatic bool extent_commit_impl(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length, bool growing_retained);\nstatic bool extent_purge_lazy_impl(tsdn_t *tsdn, ehooks_t *ehooks,\n    edata_t *edata, size_t offset, size_t length, bool growing_retained);\nstatic bool extent_purge_forced_impl(tsdn_t *tsdn, ehooks_t *ehooks,\n    edata_t *edata, size_t offset, size_t length, bool growing_retained);\nstatic edata_t *extent_split_impl(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata, size_t size_a, size_t size_b, bool holding_core_locks);\nstatic bool extent_merge_impl(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *a, edata_t *b, bool holding_core_locks);\n\n/* Used exclusively for gdump triggering. 
*/\nstatic atomic_zu_t curpages;\nstatic atomic_zu_t highpages;\n\n/******************************************************************************/\n/*\n * Function prototypes for static functions that are referenced prior to\n * definition.\n */\n\nstatic void extent_deregister(tsdn_t *tsdn, pac_t *pac, edata_t *edata);\nstatic edata_t *extent_recycle(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *expand_edata, size_t usize, size_t alignment,\n    bool zero, bool *commit, bool growing_retained, bool guarded);\nstatic edata_t *extent_try_coalesce(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata, bool *coalesced);\nstatic edata_t *extent_alloc_retained(tsdn_t *tsdn, pac_t *pac,\n    ehooks_t *ehooks, edata_t *expand_edata, size_t size, size_t alignment,\n    bool zero, bool *commit, bool guarded);\n\n/******************************************************************************/\n\nsize_t\nextent_sn_next(pac_t *pac) {\n\treturn atomic_fetch_add_zu(&pac->extent_sn_next, 1, ATOMIC_RELAXED);\n}\n\nstatic inline bool\nextent_may_force_decay(pac_t *pac) {\n\treturn !(pac_decay_ms_get(pac, extent_state_dirty) == -1\n\t    || pac_decay_ms_get(pac, extent_state_muzzy) == -1);\n}\n\nstatic bool\nextent_try_delayed_coalesce(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata) {\n\temap_update_edata_state(tsdn, pac->emap, edata, extent_state_active);\n\n\tbool coalesced;\n\tedata = extent_try_coalesce(tsdn, pac, ehooks, ecache,\n\t    edata, &coalesced);\n\temap_update_edata_state(tsdn, pac->emap, edata, ecache->state);\n\n\tif (!coalesced) {\n\t\treturn true;\n\t}\n\teset_insert(&ecache->eset, edata);\n\treturn false;\n}\n\nedata_t *\necache_alloc(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *expand_edata, size_t size, size_t alignment, bool zero,\n    bool guarded) {\n\tassert(size != 0);\n\tassert(alignment != 
0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tbool commit = true;\n\tedata_t *edata = extent_recycle(tsdn, pac, ehooks, ecache, expand_edata,\n\t    size, alignment, zero, &commit, false, guarded);\n\tassert(edata == NULL || edata_pai_get(edata) == EXTENT_PAI_PAC);\n\tassert(edata == NULL || edata_guarded_get(edata) == guarded);\n\treturn edata;\n}\n\nedata_t *\necache_alloc_grow(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *expand_edata, size_t size, size_t alignment, bool zero,\n    bool guarded) {\n\tassert(size != 0);\n\tassert(alignment != 0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tbool commit = true;\n\tedata_t *edata = extent_alloc_retained(tsdn, pac, ehooks, expand_edata,\n\t    size, alignment, zero, &commit, guarded);\n\tif (edata == NULL) {\n\t\tif (opt_retain && expand_edata != NULL) {\n\t\t\t/*\n\t\t\t * When retain is enabled and trying to expand, we do\n\t\t\t * not attempt extent_alloc_wrapper which does mmap that\n\t\t\t * is very unlikely to succeed (unless it happens to be\n\t\t\t * at the end).\n\t\t\t */\n\t\t\treturn NULL;\n\t\t}\n\t\tif (guarded) {\n\t\t\t/*\n\t\t\t * Means no cached guarded extents available (and no\n\t\t\t * grow_retained was attempted).  The pac_alloc flow\n\t\t\t * will alloc regular extents to make new guarded ones.\n\t\t\t */\n\t\t\treturn NULL;\n\t\t}\n\t\tvoid *new_addr = (expand_edata == NULL) ? 
NULL :\n\t\t    edata_past_get(expand_edata);\n\t\tedata = extent_alloc_wrapper(tsdn, pac, ehooks, new_addr,\n\t\t    size, alignment, zero, &commit,\n\t\t    /* growing_retained */ false);\n\t}\n\n\tassert(edata == NULL || edata_pai_get(edata) == EXTENT_PAI_PAC);\n\treturn edata;\n}\n\nvoid\necache_dalloc(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *edata) {\n\tassert(edata_base_get(edata) != NULL);\n\tassert(edata_size_get(edata) != 0);\n\tassert(edata_pai_get(edata) == EXTENT_PAI_PAC);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tedata_addr_set(edata, edata_base_get(edata));\n\tedata_zeroed_set(edata, false);\n\n\textent_record(tsdn, pac, ehooks, ecache, edata);\n}\n\nedata_t *\necache_evict(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, size_t npages_min) {\n\tmalloc_mutex_lock(tsdn, &ecache->mtx);\n\n\t/*\n\t * Get the LRU coalesced extent, if any.  If coalescing was delayed,\n\t * the loop will iterate until the LRU extent is fully coalesced.\n\t */\n\tedata_t *edata;\n\twhile (true) {\n\t\t/* Get the LRU extent, if any. */\n\t\teset_t *eset = &ecache->eset;\n\t\tedata = edata_list_inactive_first(&eset->lru);\n\t\tif (edata == NULL) {\n\t\t\t/*\n\t\t\t * Next check if there are guarded extents.  They are\n\t\t\t * more expensive to purge (since they are not\n\t\t\t * mergeable), thus in favor of caching them longer.\n\t\t\t */\n\t\t\teset = &ecache->guarded_eset;\n\t\t\tedata = edata_list_inactive_first(&eset->lru);\n\t\t\tif (edata == NULL) {\n\t\t\t\tgoto label_return;\n\t\t\t}\n\t\t}\n\t\t/* Check the eviction limit. */\n\t\tsize_t extents_npages = ecache_npages_get(ecache);\n\t\tif (extents_npages <= npages_min) {\n\t\t\tedata = NULL;\n\t\t\tgoto label_return;\n\t\t}\n\t\teset_remove(eset, edata);\n\t\tif (!ecache->delay_coalesce || edata_guarded_get(edata)) {\n\t\t\tbreak;\n\t\t}\n\t\t/* Try to coalesce. 
*/\n\t\tif (extent_try_delayed_coalesce(tsdn, pac, ehooks, ecache,\n\t\t    edata)) {\n\t\t\tbreak;\n\t\t}\n\t\t/*\n\t\t * The LRU extent was just coalesced and the result placed in\n\t\t * the LRU at its neighbor's position.  Start over.\n\t\t */\n\t}\n\n\t/*\n\t * Either mark the extent active or deregister it to protect against\n\t * concurrent operations.\n\t */\n\tswitch (ecache->state) {\n\tcase extent_state_active:\n\t\tnot_reached();\n\tcase extent_state_dirty:\n\tcase extent_state_muzzy:\n\t\temap_update_edata_state(tsdn, pac->emap, edata,\n\t\t    extent_state_active);\n\t\tbreak;\n\tcase extent_state_retained:\n\t\textent_deregister(tsdn, pac, edata);\n\t\tbreak;\n\tdefault:\n\t\tnot_reached();\n\t}\n\nlabel_return:\n\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n\treturn edata;\n}\n\n/*\n * This can only happen when we fail to allocate a new extent struct (which\n * indicates OOM), e.g. when trying to split an existing extent.\n */\nstatic void\nextents_abandon_vm(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *edata, bool growing_retained) {\n\tsize_t sz = edata_size_get(edata);\n\tif (config_stats) {\n\t\tatomic_fetch_add_zu(&pac->stats->abandoned_vm, sz,\n\t\t    ATOMIC_RELAXED);\n\t}\n\t/*\n\t * Leak extent after making sure its pages have already been purged, so\n\t * that this is only a virtual memory leak.\n\t */\n\tif (ecache->state == extent_state_dirty) {\n\t\tif (extent_purge_lazy_impl(tsdn, ehooks, edata, 0, sz,\n\t\t    growing_retained)) {\n\t\t\textent_purge_forced_impl(tsdn, ehooks, edata, 0,\n\t\t\t    edata_size_get(edata), growing_retained);\n\t\t}\n\t}\n\tedata_cache_put(tsdn, pac->edata_cache, edata);\n}\n\nstatic void\nextent_deactivate_locked_impl(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache,\n    edata_t *edata) {\n\tmalloc_mutex_assert_owner(tsdn, &ecache->mtx);\n\tassert(edata_arena_ind_get(edata) == ecache_ind_get(ecache));\n\n\temap_update_edata_state(tsdn, pac->emap, edata, ecache->state);\n\teset_t 
*eset = edata_guarded_get(edata) ? &ecache->guarded_eset :\n\t    &ecache->eset;\n\teset_insert(eset, edata);\n}\n\nstatic void\nextent_deactivate_locked(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache,\n    edata_t *edata) {\n\tassert(edata_state_get(edata) == extent_state_active);\n\textent_deactivate_locked_impl(tsdn, pac, ecache, edata);\n}\n\nstatic void\nextent_deactivate_check_state_locked(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache,\n    edata_t *edata, extent_state_t expected_state) {\n\tassert(edata_state_get(edata) == expected_state);\n\textent_deactivate_locked_impl(tsdn, pac, ecache, edata);\n}\n\nstatic void\nextent_activate_locked(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache, eset_t *eset,\n    edata_t *edata) {\n\tassert(edata_arena_ind_get(edata) == ecache_ind_get(ecache));\n\tassert(edata_state_get(edata) == ecache->state ||\n\t    edata_state_get(edata) == extent_state_merging);\n\n\teset_remove(eset, edata);\n\temap_update_edata_state(tsdn, pac->emap, edata, extent_state_active);\n}\n\nvoid\nextent_gdump_add(tsdn_t *tsdn, const edata_t *edata) {\n\tcassert(config_prof);\n\t/* prof_gdump() requirement. 
*/\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (opt_prof && edata_state_get(edata) == extent_state_active) {\n\t\tsize_t nadd = edata_size_get(edata) >> LG_PAGE;\n\t\tsize_t cur = atomic_fetch_add_zu(&curpages, nadd,\n\t\t    ATOMIC_RELAXED) + nadd;\n\t\tsize_t high = atomic_load_zu(&highpages, ATOMIC_RELAXED);\n\t\twhile (cur > high && !atomic_compare_exchange_weak_zu(\n\t\t    &highpages, &high, cur, ATOMIC_RELAXED, ATOMIC_RELAXED)) {\n\t\t\t/*\n\t\t\t * Don't refresh cur, because it may have decreased\n\t\t\t * since this thread lost the highpages update race.\n\t\t\t * Note that high is updated in case of CAS failure.\n\t\t\t */\n\t\t}\n\t\tif (cur > high && prof_gdump_get_unlocked()) {\n\t\t\tprof_gdump(tsdn);\n\t\t}\n\t}\n}\n\nstatic void\nextent_gdump_sub(tsdn_t *tsdn, const edata_t *edata) {\n\tcassert(config_prof);\n\n\tif (opt_prof && edata_state_get(edata) == extent_state_active) {\n\t\tsize_t nsub = edata_size_get(edata) >> LG_PAGE;\n\t\tassert(atomic_load_zu(&curpages, ATOMIC_RELAXED) >= nsub);\n\t\tatomic_fetch_sub_zu(&curpages, nsub, ATOMIC_RELAXED);\n\t}\n}\n\nstatic bool\nextent_register_impl(tsdn_t *tsdn, pac_t *pac, edata_t *edata, bool gdump_add) {\n\tassert(edata_state_get(edata) == extent_state_active);\n\t/*\n\t * No locking needed, as the edata must be in active state, which\n\t * prevents other threads from accessing the edata.\n\t */\n\tif (emap_register_boundary(tsdn, pac->emap, edata, SC_NSIZES,\n\t    /* slab */ false)) {\n\t\treturn true;\n\t}\n\n\tif (config_prof && gdump_add) {\n\t\textent_gdump_add(tsdn, edata);\n\t}\n\n\treturn false;\n}\n\nstatic bool\nextent_register(tsdn_t *tsdn, pac_t *pac, edata_t *edata) {\n\treturn extent_register_impl(tsdn, pac, edata, true);\n}\n\nstatic bool\nextent_register_no_gdump_add(tsdn_t *tsdn, pac_t *pac, edata_t *edata) {\n\treturn extent_register_impl(tsdn, pac, edata, false);\n}\n\nstatic void\nextent_reregister(tsdn_t *tsdn, pac_t *pac, 
edata_t *edata) {\n\tbool err = extent_register(tsdn, pac, edata);\n\tassert(!err);\n}\n\n/*\n * Removes all pointers to the given extent from the global rtree.\n */\nstatic void\nextent_deregister_impl(tsdn_t *tsdn, pac_t *pac, edata_t *edata,\n    bool gdump) {\n\temap_deregister_boundary(tsdn, pac->emap, edata);\n\n\tif (config_prof && gdump) {\n\t\textent_gdump_sub(tsdn, edata);\n\t}\n}\n\nstatic void\nextent_deregister(tsdn_t *tsdn, pac_t *pac, edata_t *edata) {\n\textent_deregister_impl(tsdn, pac, edata, true);\n}\n\nstatic void\nextent_deregister_no_gdump_sub(tsdn_t *tsdn, pac_t *pac,\n    edata_t *edata) {\n\textent_deregister_impl(tsdn, pac, edata, false);\n}\n\n/*\n * Tries to find and remove an extent from ecache that can be used for the\n * given allocation request.\n */\nstatic edata_t *\nextent_recycle_extract(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *expand_edata, size_t size, size_t alignment,\n    bool guarded) {\n\tmalloc_mutex_assert_owner(tsdn, &ecache->mtx);\n\tassert(alignment > 0);\n\tif (config_debug && expand_edata != NULL) {\n\t\t/*\n\t\t * Non-NULL expand_edata indicates in-place expanding realloc.\n\t\t * new_addr must either refer to a non-existing extent, or to\n\t\t * the base of an extant extent, since only active slabs support\n\t\t * interior lookups (which of course cannot be recycled).\n\t\t */\n\t\tvoid *new_addr = edata_past_get(expand_edata);\n\t\tassert(PAGE_ADDR2BASE(new_addr) == new_addr);\n\t\tassert(alignment <= PAGE);\n\t}\n\n\tedata_t *edata;\n\teset_t *eset = guarded ? 
&ecache->guarded_eset : &ecache->eset;\n\tif (expand_edata != NULL) {\n\t\tedata = emap_try_acquire_edata_neighbor_expand(tsdn, pac->emap,\n\t\t    expand_edata, EXTENT_PAI_PAC, ecache->state);\n\t\tif (edata != NULL) {\n\t\t\textent_assert_can_expand(expand_edata, edata);\n\t\t\tif (edata_size_get(edata) < size) {\n\t\t\t\temap_release_edata(tsdn, pac->emap, edata,\n\t\t\t\t    ecache->state);\n\t\t\t\tedata = NULL;\n\t\t\t}\n\t\t}\n\t} else {\n\t\t/*\n\t\t * A large extent might be broken up from its original size to\n\t\t * some small size to satisfy a small request.  When that small\n\t\t * request is freed, though, it won't merge back with the larger\n\t\t * extent if delayed coalescing is on.  The large extent can\n\t\t * then no longer satisfy a request for its original size.  To\n\t\t * limit this effect, when delayed coalescing is enabled, we\n\t\t * put a cap on how big an extent we can split for a request.\n\t\t */\n\t\tunsigned lg_max_fit = ecache->delay_coalesce\n\t\t    ? (unsigned)opt_lg_extent_max_active_fit : SC_PTR_BITS;\n\n\t\t/*\n\t\t * If split and merge are not allowed (Windows w/o retain), try\n\t\t * exact fit only.\n\t\t *\n\t\t * For simplicity, splitting guarded extents is not\n\t\t * supported.  
Hence, we do only exact fit for guarded\n\t\t * allocations.\n\t\t */\n\t\tbool exact_only = (!maps_coalesce && !opt_retain) || guarded;\n\t\tedata = eset_fit(eset, size, alignment, exact_only,\n\t\t    lg_max_fit);\n\t}\n\tif (edata == NULL) {\n\t\treturn NULL;\n\t}\n\tassert(!guarded || edata_guarded_get(edata));\n\textent_activate_locked(tsdn, pac, ecache, eset, edata);\n\n\treturn edata;\n}\n\n/*\n * Given an allocation request and an extent guaranteed to be able to satisfy\n * it, this splits off lead and trail extents, leaving edata pointing to an\n * extent satisfying the allocation.\n * This function doesn't put lead or trail into any ecache; it's the caller's\n * job to ensure that they can be reused.\n */\ntypedef enum {\n\t/*\n\t * Split successfully.  lead, edata, and trail, are modified to extents\n\t * describing the ranges before, in, and after the given allocation.\n\t */\n\textent_split_interior_ok,\n\t/*\n\t * The extent can't satisfy the given allocation request.  None of the\n\t * input edata_t *s are touched.\n\t */\n\textent_split_interior_cant_alloc,\n\t/*\n\t * In a potentially invalid state.  Must leak (if *to_leak is non-NULL),\n\t * and salvage what's still salvageable (if *to_salvage is non-NULL).\n\t * None of lead, edata, or trail are valid.\n\t */\n\textent_split_interior_error\n} extent_split_interior_result_t;\n\nstatic extent_split_interior_result_t\nextent_split_interior(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    /* The result of splitting, in case of success. */\n    edata_t **edata, edata_t **lead, edata_t **trail,\n    /* The mess to clean up, in case of error. 
*/\n    edata_t **to_leak, edata_t **to_salvage,\n    edata_t *expand_edata, size_t size, size_t alignment) {\n\tsize_t leadsize = ALIGNMENT_CEILING((uintptr_t)edata_base_get(*edata),\n\t    PAGE_CEILING(alignment)) - (uintptr_t)edata_base_get(*edata);\n\tassert(expand_edata == NULL || leadsize == 0);\n\tif (edata_size_get(*edata) < leadsize + size) {\n\t\treturn extent_split_interior_cant_alloc;\n\t}\n\tsize_t trailsize = edata_size_get(*edata) - leadsize - size;\n\n\t*lead = NULL;\n\t*trail = NULL;\n\t*to_leak = NULL;\n\t*to_salvage = NULL;\n\n\t/* Split the lead. */\n\tif (leadsize != 0) {\n\t\tassert(!edata_guarded_get(*edata));\n\t\t*lead = *edata;\n\t\t*edata = extent_split_impl(tsdn, pac, ehooks, *lead, leadsize,\n\t\t    size + trailsize, /* holding_core_locks*/ true);\n\t\tif (*edata == NULL) {\n\t\t\t*to_leak = *lead;\n\t\t\t*lead = NULL;\n\t\t\treturn extent_split_interior_error;\n\t\t}\n\t}\n\n\t/* Split the trail. */\n\tif (trailsize != 0) {\n\t\tassert(!edata_guarded_get(*edata));\n\t\t*trail = extent_split_impl(tsdn, pac, ehooks, *edata, size,\n\t\t    trailsize, /* holding_core_locks */ true);\n\t\tif (*trail == NULL) {\n\t\t\t*to_leak = *edata;\n\t\t\t*to_salvage = *lead;\n\t\t\t*lead = NULL;\n\t\t\t*edata = NULL;\n\t\t\treturn extent_split_interior_error;\n\t\t}\n\t}\n\n\treturn extent_split_interior_ok;\n}\n\n/*\n * This fulfills the indicated allocation request out of the given extent (which\n * the caller should have ensured was big enough).  
If there's any unused space\n * before or after the resulting allocation, that space is given its own extent\n * and put back into ecache.\n */\nstatic edata_t *\nextent_recycle_split(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *expand_edata, size_t size, size_t alignment,\n    edata_t *edata, bool growing_retained) {\n\tassert(!edata_guarded_get(edata) || size == edata_size_get(edata));\n\tmalloc_mutex_assert_owner(tsdn, &ecache->mtx);\n\n\tedata_t *lead;\n\tedata_t *trail;\n\tedata_t *to_leak JEMALLOC_CC_SILENCE_INIT(NULL);\n\tedata_t *to_salvage JEMALLOC_CC_SILENCE_INIT(NULL);\n\n\textent_split_interior_result_t result = extent_split_interior(\n\t    tsdn, pac, ehooks, &edata, &lead, &trail, &to_leak, &to_salvage,\n\t    expand_edata, size, alignment);\n\n\tif (!maps_coalesce && result != extent_split_interior_ok\n\t    && !opt_retain) {\n\t\t/*\n\t\t * Split isn't supported (implies Windows w/o retain).  Avoid\n\t\t * leaking the extent.\n\t\t */\n\t\tassert(to_leak != NULL && lead == NULL && trail == NULL);\n\t\textent_deactivate_locked(tsdn, pac, ecache, to_leak);\n\t\treturn NULL;\n\t}\n\n\tif (result == extent_split_interior_ok) {\n\t\tif (lead != NULL) {\n\t\t\textent_deactivate_locked(tsdn, pac, ecache, lead);\n\t\t}\n\t\tif (trail != NULL) {\n\t\t\textent_deactivate_locked(tsdn, pac, ecache, trail);\n\t\t}\n\t\treturn edata;\n\t} else {\n\t\t/*\n\t\t * We should have picked an extent that was large enough to\n\t\t * fulfill our allocation request.\n\t\t */\n\t\tassert(result == extent_split_interior_error);\n\t\tif (to_salvage != NULL) {\n\t\t\textent_deregister(tsdn, pac, to_salvage);\n\t\t}\n\t\tif (to_leak != NULL) {\n\t\t\textent_deregister_no_gdump_sub(tsdn, pac, to_leak);\n\t\t\t/*\n\t\t\t * May go down the purge path (which assumes no ecache\n\t\t\t * locks).  
Only happens with OOM-caused split failures.\n\t\t\t */\n\t\t\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n\t\t\textents_abandon_vm(tsdn, pac, ehooks, ecache, to_leak,\n\t\t\t    growing_retained);\n\t\t\tmalloc_mutex_lock(tsdn, &ecache->mtx);\n\t\t}\n\t\treturn NULL;\n\t}\n\tunreachable();\n}\n\n/*\n * Tries to satisfy the given allocation request by reusing one of the extents\n * in the given ecache_t.\n */\nstatic edata_t *\nextent_recycle(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *expand_edata, size_t size, size_t alignment, bool zero,\n    bool *commit, bool growing_retained, bool guarded) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 1 : 0);\n\tassert(!guarded || expand_edata == NULL);\n\tassert(!guarded || alignment <= PAGE);\n\n\tmalloc_mutex_lock(tsdn, &ecache->mtx);\n\n\tedata_t *edata = extent_recycle_extract(tsdn, pac, ehooks, ecache,\n\t    expand_edata, size, alignment, guarded);\n\tif (edata == NULL) {\n\t\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n\t\treturn NULL;\n\t}\n\n\tedata = extent_recycle_split(tsdn, pac, ehooks, ecache, expand_edata,\n\t    size, alignment, edata, growing_retained);\n\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n\tif (edata == NULL) {\n\t\treturn NULL;\n\t}\n\n\tassert(edata_state_get(edata) == extent_state_active);\n\tif (extent_commit_zero(tsdn, ehooks, edata, *commit, zero,\n\t    growing_retained)) {\n\t\textent_record(tsdn, pac, ehooks, ecache, edata);\n\t\treturn NULL;\n\t}\n\tif (edata_committed_get(edata)) {\n\t\t/*\n\t\t * This reverses the purpose of this variable; previously it\n\t\t * was treated as an input parameter, now it turns into an\n\t\t * output parameter, reporting if the edata has actually been\n\t\t * committed.\n\t\t */\n\t\t*commit = true;\n\t}\n\treturn edata;\n}\n\n/*\n * If virtual memory is retained, create increasingly larger extents from which\n * to split requested extents in order to limit the total 
number of disjoint\n * virtual memory ranges retained by each shard.\n */\nstatic edata_t *\nextent_grow_retained(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    size_t size, size_t alignment, bool zero, bool *commit) {\n\tmalloc_mutex_assert_owner(tsdn, &pac->grow_mtx);\n\n\tsize_t alloc_size_min = size + PAGE_CEILING(alignment) - PAGE;\n\t/* Beware size_t wrap-around. */\n\tif (alloc_size_min < size) {\n\t\tgoto label_err;\n\t}\n\t/*\n\t * Find the next extent size in the series that would be large enough to\n\t * satisfy this request.\n\t */\n\tsize_t alloc_size;\n\tpszind_t exp_grow_skip;\n\tbool err = exp_grow_size_prepare(&pac->exp_grow, alloc_size_min,\n\t    &alloc_size, &exp_grow_skip);\n\tif (err) {\n\t\tgoto label_err;\n\t}\n\n\tedata_t *edata = edata_cache_get(tsdn, pac->edata_cache);\n\tif (edata == NULL) {\n\t\tgoto label_err;\n\t}\n\tbool zeroed = false;\n\tbool committed = false;\n\n\tvoid *ptr = ehooks_alloc(tsdn, ehooks, NULL, alloc_size, PAGE, &zeroed,\n\t    &committed);\n\n\tif (ptr == NULL) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t\tgoto label_err;\n\t}\n\n\tedata_init(edata, ecache_ind_get(&pac->ecache_retained), ptr,\n\t    alloc_size, false, SC_NSIZES, extent_sn_next(pac),\n\t    extent_state_active, zeroed, committed, EXTENT_PAI_PAC,\n\t    EXTENT_IS_HEAD);\n\n\tif (extent_register_no_gdump_add(tsdn, pac, edata)) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t\tgoto label_err;\n\t}\n\n\tif (edata_committed_get(edata)) {\n\t\t*commit = true;\n\t}\n\n\tedata_t *lead;\n\tedata_t *trail;\n\tedata_t *to_leak JEMALLOC_CC_SILENCE_INIT(NULL);\n\tedata_t *to_salvage JEMALLOC_CC_SILENCE_INIT(NULL);\n\n\textent_split_interior_result_t result = extent_split_interior(tsdn,\n\t    pac, ehooks, &edata, &lead, &trail, &to_leak, &to_salvage, NULL,\n\t    size, alignment);\n\n\tif (result == extent_split_interior_ok) {\n\t\tif (lead != NULL) {\n\t\t\textent_record(tsdn, pac, ehooks, &pac->ecache_retained,\n\t\t\t    
lead);\n\t\t}\n\t\tif (trail != NULL) {\n\t\t\textent_record(tsdn, pac, ehooks, &pac->ecache_retained,\n\t\t\t    trail);\n\t\t}\n\t} else {\n\t\t/*\n\t\t * We should have allocated a sufficiently large extent; the\n\t\t * cant_alloc case should not occur.\n\t\t */\n\t\tassert(result == extent_split_interior_error);\n\t\tif (to_salvage != NULL) {\n\t\t\tif (config_prof) {\n\t\t\t\textent_gdump_add(tsdn, to_salvage);\n\t\t\t}\n\t\t\textent_record(tsdn, pac, ehooks, &pac->ecache_retained,\n\t\t\t    to_salvage);\n\t\t}\n\t\tif (to_leak != NULL) {\n\t\t\textent_deregister_no_gdump_sub(tsdn, pac, to_leak);\n\t\t\textents_abandon_vm(tsdn, pac, ehooks,\n\t\t\t    &pac->ecache_retained, to_leak, true);\n\t\t}\n\t\tgoto label_err;\n\t}\n\n\tif (*commit && !edata_committed_get(edata)) {\n\t\tif (extent_commit_impl(tsdn, ehooks, edata, 0,\n\t\t    edata_size_get(edata), true)) {\n\t\t\textent_record(tsdn, pac, ehooks,\n\t\t\t    &pac->ecache_retained, edata);\n\t\t\tgoto label_err;\n\t\t}\n\t\t/* A successful commit should return zeroed memory. */\n\t\tif (config_debug) {\n\t\t\tvoid *addr = edata_addr_get(edata);\n\t\t\tsize_t *p = (size_t *)(uintptr_t)addr;\n\t\t\t/* Check the first page only. */\n\t\t\tfor (size_t i = 0; i < PAGE / sizeof(size_t); i++) {\n\t\t\t\tassert(p[i] == 0);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * Increment extent_grow_next if doing so wouldn't exceed the allowed\n\t * range.\n\t */\n\t/* All opportunities for failure are past. */\n\texp_grow_size_commit(&pac->exp_grow, exp_grow_skip);\n\tmalloc_mutex_unlock(tsdn, &pac->grow_mtx);\n\n\tif (config_prof) {\n\t\t/* Adjust gdump stats now that extent is final size. 
*/\n\t\textent_gdump_add(tsdn, edata);\n\t}\n\tif (zero && !edata_zeroed_get(edata)) {\n\t\tehooks_zero(tsdn, ehooks, edata_base_get(edata),\n\t\t    edata_size_get(edata));\n\t}\n\treturn edata;\nlabel_err:\n\tmalloc_mutex_unlock(tsdn, &pac->grow_mtx);\n\treturn NULL;\n}\n\nstatic edata_t *\nextent_alloc_retained(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *expand_edata, size_t size, size_t alignment, bool zero,\n    bool *commit, bool guarded) {\n\tassert(size != 0);\n\tassert(alignment != 0);\n\n\tmalloc_mutex_lock(tsdn, &pac->grow_mtx);\n\n\tedata_t *edata = extent_recycle(tsdn, pac, ehooks,\n\t    &pac->ecache_retained, expand_edata, size, alignment, zero, commit,\n\t    /* growing_retained */ true, guarded);\n\tif (edata != NULL) {\n\t\tmalloc_mutex_unlock(tsdn, &pac->grow_mtx);\n\t\tif (config_prof) {\n\t\t\textent_gdump_add(tsdn, edata);\n\t\t}\n\t} else if (opt_retain && expand_edata == NULL && !guarded) {\n\t\tedata = extent_grow_retained(tsdn, pac, ehooks, size,\n\t\t    alignment, zero, commit);\n\t\t/* extent_grow_retained() always releases pac->grow_mtx. */\n\t} else {\n\t\tmalloc_mutex_unlock(tsdn, &pac->grow_mtx);\n\t}\n\tmalloc_mutex_assert_not_owner(tsdn, &pac->grow_mtx);\n\n\treturn edata;\n}\n\nstatic bool\nextent_coalesce(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *inner, edata_t *outer, bool forward) {\n\textent_assert_can_coalesce(inner, outer);\n\teset_remove(&ecache->eset, outer);\n\n\tbool err = extent_merge_impl(tsdn, pac, ehooks,\n\t    forward ? inner : outer, forward ? 
outer : inner,\n\t    /* holding_core_locks */ true);\n\tif (err) {\n\t\textent_deactivate_check_state_locked(tsdn, pac, ecache, outer,\n\t\t    extent_state_merging);\n\t}\n\n\treturn err;\n}\n\nstatic edata_t *\nextent_try_coalesce_impl(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata, bool *coalesced) {\n\tassert(!edata_guarded_get(edata));\n\t/*\n\t * We avoid checking / locking inactive neighbors for large size\n\t * classes, since they are eagerly coalesced on deallocation which can\n\t * cause lock contention.\n\t */\n\t/*\n\t * Continue attempting to coalesce until failure, to protect against\n\t * races with other threads that are thwarted by this one.\n\t */\n\tbool again;\n\tdo {\n\t\tagain = false;\n\n\t\t/* Try to coalesce forward. */\n\t\tedata_t *next = emap_try_acquire_edata_neighbor(tsdn, pac->emap,\n\t\t    edata, EXTENT_PAI_PAC, ecache->state, /* forward */ true);\n\t\tif (next != NULL) {\n\t\t\tif (!extent_coalesce(tsdn, pac, ehooks, ecache, edata,\n\t\t\t    next, true)) {\n\t\t\t\tif (ecache->delay_coalesce) {\n\t\t\t\t\t/* Do minimal coalescing. */\n\t\t\t\t\t*coalesced = true;\n\t\t\t\t\treturn edata;\n\t\t\t\t}\n\t\t\t\tagain = true;\n\t\t\t}\n\t\t}\n\n\t\t/* Try to coalesce backward. */\n\t\tedata_t *prev = emap_try_acquire_edata_neighbor(tsdn, pac->emap,\n\t\t    edata, EXTENT_PAI_PAC, ecache->state, /* forward */ false);\n\t\tif (prev != NULL) {\n\t\t\tif (!extent_coalesce(tsdn, pac, ehooks, ecache, edata,\n\t\t\t    prev, false)) {\n\t\t\t\tedata = prev;\n\t\t\t\tif (ecache->delay_coalesce) {\n\t\t\t\t\t/* Do minimal coalescing. 
*/\n\t\t\t\t\t*coalesced = true;\n\t\t\t\t\treturn edata;\n\t\t\t\t}\n\t\t\t\tagain = true;\n\t\t\t}\n\t\t}\n\t} while (again);\n\n\tif (ecache->delay_coalesce) {\n\t\t*coalesced = false;\n\t}\n\treturn edata;\n}\n\nstatic edata_t *\nextent_try_coalesce(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata, bool *coalesced) {\n\treturn extent_try_coalesce_impl(tsdn, pac, ehooks, ecache, edata,\n\t    coalesced);\n}\n\nstatic edata_t *\nextent_try_coalesce_large(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    ecache_t *ecache, edata_t *edata, bool *coalesced) {\n\treturn extent_try_coalesce_impl(tsdn, pac, ehooks, ecache, edata,\n\t    coalesced);\n}\n\n/* Purge a single extent to retained / unmapped directly. */\nstatic void\nextent_maximally_purge(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata) {\n\tsize_t extent_size = edata_size_get(edata);\n\textent_dalloc_wrapper(tsdn, pac, ehooks, edata);\n\tif (config_stats) {\n\t\t/* Update stats accordingly. 
*/\n\t\tLOCKEDINT_MTX_LOCK(tsdn, *pac->stats_mtx);\n\t\tlocked_inc_u64(tsdn,\n\t\t    LOCKEDINT_MTX(*pac->stats_mtx),\n\t\t    &pac->stats->decay_dirty.nmadvise, 1);\n\t\tlocked_inc_u64(tsdn,\n\t\t    LOCKEDINT_MTX(*pac->stats_mtx),\n\t\t    &pac->stats->decay_dirty.purged,\n\t\t    extent_size >> LG_PAGE);\n\t\tLOCKEDINT_MTX_UNLOCK(tsdn, *pac->stats_mtx);\n\t\tatomic_fetch_sub_zu(&pac->stats->pac_mapped, extent_size,\n\t\t    ATOMIC_RELAXED);\n\t}\n}\n\n/*\n * Does the metadata management portions of putting an unused extent into the\n * given ecache_t (coalesces and inserts into the eset).\n */\nvoid\nextent_record(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, ecache_t *ecache,\n    edata_t *edata) {\n\tassert((ecache->state != extent_state_dirty &&\n\t    ecache->state != extent_state_muzzy) ||\n\t    !edata_zeroed_get(edata));\n\n\tmalloc_mutex_lock(tsdn, &ecache->mtx);\n\n\temap_assert_mapped(tsdn, pac->emap, edata);\n\n\tif (edata_guarded_get(edata)) {\n\t\tgoto label_skip_coalesce;\n\t}\n\tif (!ecache->delay_coalesce) {\n\t\tedata = extent_try_coalesce(tsdn, pac, ehooks, ecache, edata,\n\t\t    NULL);\n\t} else if (edata_size_get(edata) >= SC_LARGE_MINCLASS) {\n\t\tassert(ecache == &pac->ecache_dirty);\n\t\t/* Always coalesce large extents eagerly. */\n\t\tbool coalesced;\n\t\tdo {\n\t\t\tassert(edata_state_get(edata) == extent_state_active);\n\t\t\tedata = extent_try_coalesce_large(tsdn, pac, ehooks,\n\t\t\t    ecache, edata, &coalesced);\n\t\t} while (coalesced);\n\t\tif (edata_size_get(edata) >=\n\t\t    atomic_load_zu(&pac->oversize_threshold, ATOMIC_RELAXED)\n\t\t    && extent_may_force_decay(pac)) {\n\t\t\t/* Shortcut to purge the oversize extent eagerly. 
*/\n\t\t\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n\t\t\textent_maximally_purge(tsdn, pac, ehooks, edata);\n\t\t\treturn;\n\t\t}\n\t}\nlabel_skip_coalesce:\n\textent_deactivate_locked(tsdn, pac, ecache, edata);\n\n\tmalloc_mutex_unlock(tsdn, &ecache->mtx);\n}\n\nvoid\nextent_dalloc_gap(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (extent_register(tsdn, pac, edata)) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t\treturn;\n\t}\n\textent_dalloc_wrapper(tsdn, pac, ehooks, edata);\n}\n\nstatic bool\nextent_dalloc_wrapper_try(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata) {\n\tbool err;\n\n\tassert(edata_base_get(edata) != NULL);\n\tassert(edata_size_get(edata) != 0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tedata_addr_set(edata, edata_base_get(edata));\n\n\t/* Try to deallocate. */\n\terr = ehooks_dalloc(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), edata_committed_get(edata));\n\n\tif (!err) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t}\n\n\treturn err;\n}\n\nedata_t *\nextent_alloc_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    void *new_addr, size_t size, size_t alignment, bool zero, bool *commit,\n    bool growing_retained) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 
1 : 0);\n\n\tedata_t *edata = edata_cache_get(tsdn, pac->edata_cache);\n\tif (edata == NULL) {\n\t\treturn NULL;\n\t}\n\tsize_t palignment = ALIGNMENT_CEILING(alignment, PAGE);\n\tvoid *addr = ehooks_alloc(tsdn, ehooks, new_addr, size, palignment,\n\t    &zero, commit);\n\tif (addr == NULL) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t\treturn NULL;\n\t}\n\tedata_init(edata, ecache_ind_get(&pac->ecache_dirty), addr,\n\t    size, /* slab */ false, SC_NSIZES, extent_sn_next(pac),\n\t    extent_state_active, zero, *commit, EXTENT_PAI_PAC,\n\t    opt_retain ? EXTENT_IS_HEAD : EXTENT_NOT_HEAD);\n\t/*\n\t * Retained memory is not counted towards gdump.  Only if an extent is\n\t * allocated as a separate mapping, i.e. growing_retained is false, should\n\t * gdump be updated.\n\t */\n\tbool gdump_add = !growing_retained;\n\tif (extent_register_impl(tsdn, pac, edata, gdump_add)) {\n\t\tedata_cache_put(tsdn, pac->edata_cache, edata);\n\t\treturn NULL;\n\t}\n\n\treturn edata;\n}\n\nvoid\nextent_dalloc_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata) {\n\tassert(edata_pai_get(edata) == EXTENT_PAI_PAC);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\t/* Avoid calling the default extent_dalloc unless we have to. */\n\tif (!ehooks_dalloc_will_fail(ehooks)) {\n\t\t/* Remove guard pages for dalloc / unmap. */\n\t\tif (edata_guarded_get(edata)) {\n\t\t\tassert(ehooks_are_default(ehooks));\n\t\t\tsan_unguard_pages_two_sided(tsdn, ehooks, edata,\n\t\t\t    pac->emap);\n\t\t}\n\t\t/*\n\t\t * Deregister first to avoid a race with other allocating\n\t\t * threads, and reregister if deallocation fails.\n\t\t */\n\t\textent_deregister(tsdn, pac, edata);\n\t\tif (!extent_dalloc_wrapper_try(tsdn, pac, ehooks, edata)) {\n\t\t\treturn;\n\t\t}\n\t\textent_reregister(tsdn, pac, edata);\n\t}\n\n\t/* Try to decommit; purge if that fails. 
*/\n\tbool zeroed;\n\tif (!edata_committed_get(edata)) {\n\t\tzeroed = true;\n\t} else if (!extent_decommit_wrapper(tsdn, ehooks, edata, 0,\n\t    edata_size_get(edata))) {\n\t\tzeroed = true;\n\t} else if (!ehooks_purge_forced(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), 0, edata_size_get(edata))) {\n\t\tzeroed = true;\n\t} else if (edata_state_get(edata) == extent_state_muzzy ||\n\t    !ehooks_purge_lazy(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), 0, edata_size_get(edata))) {\n\t\tzeroed = false;\n\t} else {\n\t\tzeroed = false;\n\t}\n\tedata_zeroed_set(edata, zeroed);\n\n\tif (config_prof) {\n\t\textent_gdump_sub(tsdn, edata);\n\t}\n\n\textent_record(tsdn, pac, ehooks, &pac->ecache_retained, edata);\n}\n\nvoid\nextent_destroy_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata) {\n\tassert(edata_base_get(edata) != NULL);\n\tassert(edata_size_get(edata) != 0);\n\textent_state_t state = edata_state_get(edata);\n\tassert(state == extent_state_retained || state == extent_state_active);\n\tassert(emap_edata_is_acquired(tsdn, pac->emap, edata));\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\tif (edata_guarded_get(edata)) {\n\t\tassert(opt_retain);\n\t\tsan_unguard_pages_pre_destroy(tsdn, ehooks, edata, pac->emap);\n\t}\n\tedata_addr_set(edata, edata_base_get(edata));\n\n\t/* Try to destroy; silently fail otherwise. */\n\tehooks_destroy(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), edata_committed_get(edata));\n\n\tedata_cache_put(tsdn, pac->edata_cache, edata);\n}\n\nstatic bool\nextent_commit_impl(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length, bool growing_retained) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 
1 : 0);\n\tbool err = ehooks_commit(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), offset, length);\n\tedata_committed_set(edata, edata_committed_get(edata) || !err);\n\treturn err;\n}\n\nbool\nextent_commit_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length) {\n\treturn extent_commit_impl(tsdn, ehooks, edata, offset, length,\n\t    /* growing_retained */ false);\n}\n\nbool\nextent_decommit_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tbool err = ehooks_decommit(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), offset, length);\n\tedata_committed_set(edata, edata_committed_get(edata) && err);\n\treturn err;\n}\n\nstatic bool\nextent_purge_lazy_impl(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length, bool growing_retained) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 1 : 0);\n\tbool err = ehooks_purge_lazy(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), offset, length);\n\treturn err;\n}\n\nbool\nextent_purge_lazy_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length) {\n\treturn extent_purge_lazy_impl(tsdn, ehooks, edata, offset,\n\t    length, false);\n}\n\nstatic bool\nextent_purge_forced_impl(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length, bool growing_retained) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 
1 : 0);\n\tbool err = ehooks_purge_forced(tsdn, ehooks, edata_base_get(edata),\n\t    edata_size_get(edata), offset, length);\n\treturn err;\n}\n\nbool\nextent_purge_forced_wrapper(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    size_t offset, size_t length) {\n\treturn extent_purge_forced_impl(tsdn, ehooks, edata, offset, length,\n\t    false);\n}\n\n/*\n * Accepts the extent to split, and the characteristics of each side of the\n * split.  The 'a' parameters go with the 'lead' of the resulting pair of\n * extents (the lower addressed portion of the split), and the 'b' parameters go\n * with the trail (the higher addressed portion).  This makes 'extent' the lead,\n * and returns the trail (except in case of error).\n */\nstatic edata_t *\nextent_split_impl(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *edata, size_t size_a, size_t size_b, bool holding_core_locks) {\n\tassert(edata_size_get(edata) == size_a + size_b);\n\t/* Only the shrink path may split w/o holding core locks. 
*/\n\tif (holding_core_locks) {\n\t\twitness_assert_positive_depth_to_rank(\n\t\t    tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE);\n\t} else {\n\t\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t\t    WITNESS_RANK_CORE, 0);\n\t}\n\n\tif (ehooks_split_will_fail(ehooks)) {\n\t\treturn NULL;\n\t}\n\n\tedata_t *trail = edata_cache_get(tsdn, pac->edata_cache);\n\tif (trail == NULL) {\n\t\tgoto label_error_a;\n\t}\n\n\tedata_init(trail, edata_arena_ind_get(edata),\n\t    (void *)((uintptr_t)edata_base_get(edata) + size_a), size_b,\n\t    /* slab */ false, SC_NSIZES, edata_sn_get(edata),\n\t    edata_state_get(edata), edata_zeroed_get(edata),\n\t    edata_committed_get(edata), EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\temap_prepare_t prepare;\n\tbool err = emap_split_prepare(tsdn, pac->emap, &prepare, edata,\n\t    size_a, trail, size_b);\n\tif (err) {\n\t\tgoto label_error_b;\n\t}\n\n\t/*\n\t * No need to acquire trail or edata, because: 1) trail was new (just\n\t * allocated); and 2) edata is either an active allocation (the shrink\n\t * path), or in an acquired state (extracted from the ecache on the\n\t * extent_recycle_split path).\n\t */\n\tassert(emap_edata_is_acquired(tsdn, pac->emap, edata));\n\tassert(emap_edata_is_acquired(tsdn, pac->emap, trail));\n\n\terr = ehooks_split(tsdn, ehooks, edata_base_get(edata), size_a + size_b,\n\t    size_a, size_b, edata_committed_get(edata));\n\n\tif (err) {\n\t\tgoto label_error_b;\n\t}\n\n\tedata_size_set(edata, size_a);\n\temap_split_commit(tsdn, pac->emap, &prepare, edata, size_a, trail,\n\t    size_b);\n\n\treturn trail;\nlabel_error_b:\n\tedata_cache_put(tsdn, pac->edata_cache, trail);\nlabel_error_a:\n\treturn NULL;\n}\n\nedata_t *\nextent_split_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, edata_t *edata,\n    size_t size_a, size_t size_b, bool holding_core_locks) {\n\treturn extent_split_impl(tsdn, pac, ehooks, edata, size_a, size_b,\n\t    holding_core_locks);\n}\n\nstatic 
bool\nextent_merge_impl(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, edata_t *a,\n    edata_t *b, bool holding_core_locks) {\n\t/* Only the expanding path may merge w/o holding ecache locks. */\n\tif (holding_core_locks) {\n\t\twitness_assert_positive_depth_to_rank(\n\t\t    tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_CORE);\n\t} else {\n\t\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t\t    WITNESS_RANK_CORE, 0);\n\t}\n\n\tassert(edata_base_get(a) < edata_base_get(b));\n\tassert(edata_arena_ind_get(a) == edata_arena_ind_get(b));\n\tassert(edata_arena_ind_get(a) == ehooks_ind_get(ehooks));\n\temap_assert_mapped(tsdn, pac->emap, a);\n\temap_assert_mapped(tsdn, pac->emap, b);\n\n\tbool err = ehooks_merge(tsdn, ehooks, edata_base_get(a),\n\t    edata_size_get(a), edata_base_get(b), edata_size_get(b),\n\t    edata_committed_get(a));\n\n\tif (err) {\n\t\treturn true;\n\t}\n\n\t/*\n\t * The rtree writes must happen while all the relevant elements are\n\t * owned, so the following code uses decomposed helper functions rather\n\t * than extent_{,de}register() to do things in the right order.\n\t */\n\temap_prepare_t prepare;\n\temap_merge_prepare(tsdn, pac->emap, &prepare, a, b);\n\n\tassert(edata_state_get(a) == extent_state_active ||\n\t    edata_state_get(a) == extent_state_merging);\n\tedata_state_set(a, extent_state_active);\n\tedata_size_set(a, edata_size_get(a) + edata_size_get(b));\n\tedata_sn_set(a, (edata_sn_get(a) < edata_sn_get(b)) ?\n\t    edata_sn_get(a) : edata_sn_get(b));\n\tedata_zeroed_set(a, edata_zeroed_get(a) && edata_zeroed_get(b));\n\n\temap_merge_commit(tsdn, pac->emap, &prepare, a, b);\n\n\tedata_cache_put(tsdn, pac->edata_cache, b);\n\n\treturn false;\n}\n\nbool\nextent_merge_wrapper(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks,\n    edata_t *a, edata_t *b) {\n\treturn extent_merge_impl(tsdn, pac, ehooks, a, b,\n\t    /* holding_core_locks */ false);\n}\n\nbool\nextent_commit_zero(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    
bool commit, bool zero, bool growing_retained) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, growing_retained ? 1 : 0);\n\n\tif (commit && !edata_committed_get(edata)) {\n\t\tif (extent_commit_impl(tsdn, ehooks, edata, 0,\n\t\t    edata_size_get(edata), growing_retained)) {\n\t\t\treturn true;\n\t\t}\n\t}\n\tif (zero && !edata_zeroed_get(edata)) {\n\t\tvoid *addr = edata_base_get(edata);\n\t\tsize_t size = edata_size_get(edata);\n\t\tehooks_zero(tsdn, ehooks, addr, size);\n\t}\n\treturn false;\n}\n\nbool\nextent_boot(void) {\n\tassert(sizeof(slab_data_t) >= sizeof(e_prof_info_t));\n\n\tif (have_dss) {\n\t\textent_dss_boot();\n\t}\n\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/extent_dss.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/spin.h\"\n\n/******************************************************************************/\n/* Data. */\n\nconst char\t*opt_dss = DSS_DEFAULT;\n\nconst char\t*dss_prec_names[] = {\n\t\"disabled\",\n\t\"primary\",\n\t\"secondary\",\n\t\"N/A\"\n};\n\n/*\n * Current dss precedence default, used when creating new arenas.  NB: This is\n * stored as unsigned rather than dss_prec_t because in principle there's no\n * guarantee that sizeof(dss_prec_t) is the same as sizeof(unsigned), and we use\n * atomic operations to synchronize the setting.\n */\nstatic atomic_u_t\tdss_prec_default = ATOMIC_INIT(\n    (unsigned)DSS_PREC_DEFAULT);\n\n/* Base address of the DSS. */\nstatic void\t\t*dss_base;\n/* Atomic boolean indicating whether a thread is currently extending DSS. */\nstatic atomic_b_t\tdss_extending;\n/* Atomic boolean indicating whether the DSS is exhausted. */\nstatic atomic_b_t\tdss_exhausted;\n/* Atomic current upper limit on DSS addresses. 
*/\nstatic atomic_p_t\tdss_max;\n\n/******************************************************************************/\n\nstatic void *\nextent_dss_sbrk(intptr_t increment) {\n#ifdef JEMALLOC_DSS\n\treturn sbrk(increment);\n#else\n\tnot_implemented();\n\treturn NULL;\n#endif\n}\n\ndss_prec_t\nextent_dss_prec_get(void) {\n\tdss_prec_t ret;\n\n\tif (!have_dss) {\n\t\treturn dss_prec_disabled;\n\t}\n\tret = (dss_prec_t)atomic_load_u(&dss_prec_default, ATOMIC_ACQUIRE);\n\treturn ret;\n}\n\nbool\nextent_dss_prec_set(dss_prec_t dss_prec) {\n\tif (!have_dss) {\n\t\treturn (dss_prec != dss_prec_disabled);\n\t}\n\tatomic_store_u(&dss_prec_default, (unsigned)dss_prec, ATOMIC_RELEASE);\n\treturn false;\n}\n\nstatic void\nextent_dss_extending_start(void) {\n\tspin_t spinner = SPIN_INITIALIZER;\n\twhile (true) {\n\t\tbool expected = false;\n\t\tif (atomic_compare_exchange_weak_b(&dss_extending, &expected,\n\t\t    true, ATOMIC_ACQ_REL, ATOMIC_RELAXED)) {\n\t\t\tbreak;\n\t\t}\n\t\tspin_adaptive(&spinner);\n\t}\n}\n\nstatic void\nextent_dss_extending_finish(void) {\n\tassert(atomic_load_b(&dss_extending, ATOMIC_RELAXED));\n\n\tatomic_store_b(&dss_extending, false, ATOMIC_RELEASE);\n}\n\nstatic void *\nextent_dss_max_update(void *new_addr) {\n\t/*\n\t * Get the current end of the DSS as max_cur and assure that dss_max is\n\t * up to date.\n\t */\n\tvoid *max_cur = extent_dss_sbrk(0);\n\tif (max_cur == (void *)-1) {\n\t\treturn NULL;\n\t}\n\tatomic_store_p(&dss_max, max_cur, ATOMIC_RELEASE);\n\t/* Fixed new_addr can only be supported if it is at the edge of DSS. 
*/\n\tif (new_addr != NULL && max_cur != new_addr) {\n\t\treturn NULL;\n\t}\n\treturn max_cur;\n}\n\nvoid *\nextent_alloc_dss(tsdn_t *tsdn, arena_t *arena, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit) {\n\tedata_t *gap;\n\n\tcassert(have_dss);\n\tassert(size > 0);\n\tassert(alignment == ALIGNMENT_CEILING(alignment, PAGE));\n\n\t/*\n\t * sbrk() uses a signed increment argument, so take care not to\n\t * interpret a large allocation request as a negative increment.\n\t */\n\tif ((intptr_t)size < 0) {\n\t\treturn NULL;\n\t}\n\n\tgap = edata_cache_get(tsdn, &arena->pa_shard.edata_cache);\n\tif (gap == NULL) {\n\t\treturn NULL;\n\t}\n\n\textent_dss_extending_start();\n\tif (!atomic_load_b(&dss_exhausted, ATOMIC_ACQUIRE)) {\n\t\t/*\n\t\t * The loop is necessary to recover from races with other\n\t\t * threads that are using the DSS for something other than\n\t\t * malloc.\n\t\t */\n\t\twhile (true) {\n\t\t\tvoid *max_cur = extent_dss_max_update(new_addr);\n\t\t\tif (max_cur == NULL) {\n\t\t\t\tgoto label_oom;\n\t\t\t}\n\n\t\t\tbool head_state = opt_retain ? EXTENT_IS_HEAD :\n\t\t\t    EXTENT_NOT_HEAD;\n\t\t\t/*\n\t\t\t * Compute how much page-aligned gap space (if any) is\n\t\t\t * necessary to satisfy alignment.  
This space can be\n\t\t\t * recycled for later use.\n\t\t\t */\n\t\t\tvoid *gap_addr_page = (void *)(PAGE_CEILING(\n\t\t\t    (uintptr_t)max_cur));\n\t\t\tvoid *ret = (void *)ALIGNMENT_CEILING(\n\t\t\t    (uintptr_t)gap_addr_page, alignment);\n\t\t\tsize_t gap_size_page = (uintptr_t)ret -\n\t\t\t    (uintptr_t)gap_addr_page;\n\t\t\tif (gap_size_page != 0) {\n\t\t\t\tedata_init(gap, arena_ind_get(arena),\n\t\t\t\t    gap_addr_page, gap_size_page, false,\n\t\t\t\t    SC_NSIZES, extent_sn_next(\n\t\t\t\t\t&arena->pa_shard.pac),\n\t\t\t\t    extent_state_active, false, true,\n\t\t\t\t    EXTENT_PAI_PAC, head_state);\n\t\t\t}\n\t\t\t/*\n\t\t\t * Compute the address just past the end of the desired\n\t\t\t * allocation space.\n\t\t\t */\n\t\t\tvoid *dss_next = (void *)((uintptr_t)ret + size);\n\t\t\tif ((uintptr_t)ret < (uintptr_t)max_cur ||\n\t\t\t    (uintptr_t)dss_next < (uintptr_t)max_cur) {\n\t\t\t\tgoto label_oom; /* Wrap-around. */\n\t\t\t}\n\t\t\t/* Compute the increment, including subpage bytes. */\n\t\t\tvoid *gap_addr_subpage = max_cur;\n\t\t\tsize_t gap_size_subpage = (uintptr_t)ret -\n\t\t\t    (uintptr_t)gap_addr_subpage;\n\t\t\tintptr_t incr = gap_size_subpage + size;\n\n\t\t\tassert((uintptr_t)max_cur + incr == (uintptr_t)ret +\n\t\t\t    size);\n\n\t\t\t/* Try to allocate. */\n\t\t\tvoid *dss_prev = extent_dss_sbrk(incr);\n\t\t\tif (dss_prev == max_cur) {\n\t\t\t\t/* Success. 
*/\n\t\t\t\tatomic_store_p(&dss_max, dss_next,\n\t\t\t\t    ATOMIC_RELEASE);\n\t\t\t\textent_dss_extending_finish();\n\n\t\t\t\tif (gap_size_page != 0) {\n\t\t\t\t\tehooks_t *ehooks = arena_get_ehooks(\n\t\t\t\t\t    arena);\n\t\t\t\t\textent_dalloc_gap(tsdn,\n\t\t\t\t\t    &arena->pa_shard.pac, ehooks, gap);\n\t\t\t\t} else {\n\t\t\t\t\tedata_cache_put(tsdn,\n\t\t\t\t\t    &arena->pa_shard.edata_cache, gap);\n\t\t\t\t}\n\t\t\t\tif (!*commit) {\n\t\t\t\t\t*commit = pages_decommit(ret, size);\n\t\t\t\t}\n\t\t\t\tif (*zero && *commit) {\n\t\t\t\t\tedata_t edata = {0};\n\t\t\t\t\tehooks_t *ehooks = arena_get_ehooks(\n\t\t\t\t\t    arena);\n\n\t\t\t\t\tedata_init(&edata,\n\t\t\t\t\t    arena_ind_get(arena), ret, size,\n\t\t\t\t\t    size, false, SC_NSIZES,\n\t\t\t\t\t    extent_state_active, false, true,\n\t\t\t\t\t    EXTENT_PAI_PAC, head_state);\n\t\t\t\t\tif (extent_purge_forced_wrapper(tsdn,\n\t\t\t\t\t    ehooks, &edata, 0, size)) {\n\t\t\t\t\t\tmemset(ret, 0, size);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\treturn ret;\n\t\t\t}\n\t\t\t/*\n\t\t\t * Failure, whether due to OOM or a race with a raw\n\t\t\t * sbrk() call from outside the allocator.\n\t\t\t */\n\t\t\tif (dss_prev == (void *)-1) {\n\t\t\t\t/* OOM. 
*/\n\t\t\t\tatomic_store_b(&dss_exhausted, true,\n\t\t\t\t    ATOMIC_RELEASE);\n\t\t\t\tgoto label_oom;\n\t\t\t}\n\t\t}\n\t}\nlabel_oom:\n\textent_dss_extending_finish();\n\tedata_cache_put(tsdn, &arena->pa_shard.edata_cache, gap);\n\treturn NULL;\n}\n\nstatic bool\nextent_in_dss_helper(void *addr, void *max) {\n\treturn ((uintptr_t)addr >= (uintptr_t)dss_base && (uintptr_t)addr <\n\t    (uintptr_t)max);\n}\n\nbool\nextent_in_dss(void *addr) {\n\tcassert(have_dss);\n\n\treturn extent_in_dss_helper(addr, atomic_load_p(&dss_max,\n\t    ATOMIC_ACQUIRE));\n}\n\nbool\nextent_dss_mergeable(void *addr_a, void *addr_b) {\n\tvoid *max;\n\n\tcassert(have_dss);\n\n\tif ((uintptr_t)addr_a < (uintptr_t)dss_base && (uintptr_t)addr_b <\n\t    (uintptr_t)dss_base) {\n\t\treturn true;\n\t}\n\n\tmax = atomic_load_p(&dss_max, ATOMIC_ACQUIRE);\n\treturn (extent_in_dss_helper(addr_a, max) ==\n\t    extent_in_dss_helper(addr_b, max));\n}\n\nvoid\nextent_dss_boot(void) {\n\tcassert(have_dss);\n\n\tdss_base = extent_dss_sbrk(0);\n\tatomic_store_b(&dss_extending, false, ATOMIC_RELAXED);\n\tatomic_store_b(&dss_exhausted, dss_base == (void *)-1, ATOMIC_RELAXED);\n\tatomic_store_p(&dss_max, dss_base, ATOMIC_RELAXED);\n}\n\n/******************************************************************************/\n"
  },
  {
    "path": "deps/jemalloc/src/extent_mmap.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n\n/******************************************************************************/\n/* Data. */\n\nbool\topt_retain =\n#ifdef JEMALLOC_RETAIN\n    true\n#else\n    false\n#endif\n    ;\n\n/******************************************************************************/\n\nvoid *\nextent_alloc_mmap(void *new_addr, size_t size, size_t alignment, bool *zero,\n    bool *commit) {\n\tassert(alignment == ALIGNMENT_CEILING(alignment, PAGE));\n\tvoid *ret = pages_map(new_addr, size, alignment, commit);\n\tif (ret == NULL) {\n\t\treturn NULL;\n\t}\n\tassert(ret != NULL);\n\tif (*commit) {\n\t\t*zero = true;\n\t}\n\treturn ret;\n}\n\nbool\nextent_dalloc_mmap(void *addr, size_t size) {\n\tif (!opt_retain) {\n\t\tpages_unmap(addr, size);\n\t}\n\treturn opt_retain;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/fxp.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/fxp.h\"\n\nstatic bool\nfxp_isdigit(char c) {\n\treturn '0' <= c && c <= '9';\n}\n\nbool\nfxp_parse(fxp_t *result, const char *str, char **end) {\n\t/*\n\t * Using malloc_strtoumax in this method isn't as handy as you might\n\t * expect (I tried). In the fractional part, significant leading zeros\n\t * mean that you still need to do your own parsing, now with trickier\n\t * math.  In the integer part, the casting (uintmax_t to uint32_t)\n\t * forces more reasoning about bounds than just checking for overflow as\n\t * we parse.\n\t */\n\tuint32_t integer_part = 0;\n\n\tconst char *cur = str;\n\n\t/* The string must start with a digit or a decimal point. */\n\tif (*cur != '.' && !fxp_isdigit(*cur)) {\n\t\treturn true;\n\t}\n\n\twhile ('0' <= *cur && *cur <= '9') {\n\t\tinteger_part *= 10;\n\t\tinteger_part += *cur - '0';\n\t\tif (integer_part >= (1U << 16)) {\n\t\t\treturn true;\n\t\t}\n\t\tcur++;\n\t}\n\n\t/*\n\t * We've parsed all digits at the beginning of the string, without\n\t * overflow.  Either we're done, or there's a fractional part.\n\t */\n\tif (*cur != '.') {\n\t\t*result = (integer_part << 16);\n\t\tif (end != NULL) {\n\t\t\t*end = (char *)cur;\n\t\t}\n\t\treturn false;\n\t}\n\n\t/* There's a fractional part. */\n\tcur++;\n\tif (!fxp_isdigit(*cur)) {\n\t\t/* Shouldn't end on the decimal point. */\n\t\treturn true;\n\t}\n\n\t/*\n\t * We use a lot of precision for the fractional part, even though we'll\n\t * discard most of it; this lets us get exact values for the important\n\t * special case where the denominator is a small power of 2 (for\n\t * instance, 1/512 == 0.001953125 is exactly representable even with\n\t * only 16 bits of fractional precision).  
We need to left-shift by 16\n\t * before dividing so we pick the number of digits to be\n\t * floor(log10(2**48)) = 14.\n\t */\n\tuint64_t fractional_part = 0;\n\tuint64_t frac_div = 1;\n\tfor (int i = 0; i < FXP_FRACTIONAL_PART_DIGITS; i++) {\n\t\tfractional_part *= 10;\n\t\tfrac_div *= 10;\n\t\tif (fxp_isdigit(*cur)) {\n\t\t\tfractional_part += *cur - '0';\n\t\t\tcur++;\n\t\t}\n\t}\n\t/*\n\t * We only parse the first maxdigits characters, but we can still ignore\n\t * any digits after that.\n\t */\n\twhile (fxp_isdigit(*cur)) {\n\t\tcur++;\n\t}\n\n\tassert(fractional_part < frac_div);\n\tuint32_t fractional_repr = (uint32_t)(\n\t    (fractional_part << 16) / frac_div);\n\n\t/* Success! */\n\t*result = (integer_part << 16) + fractional_repr;\n\tif (end != NULL) {\n\t\t*end = (char *)cur;\n\t}\n\treturn false;\n}\n\nvoid\nfxp_print(fxp_t a, char buf[FXP_BUF_SIZE]) {\n\tuint32_t integer_part = fxp_round_down(a);\n\tuint32_t fractional_part = (a & ((1U << 16) - 1));\n\n\tint leading_fraction_zeros = 0;\n\tuint64_t fraction_digits = fractional_part;\n\tfor (int i = 0; i < FXP_FRACTIONAL_PART_DIGITS; i++) {\n\t\tif (fraction_digits < (1U << 16)\n\t\t    && fraction_digits * 10 >= (1U << 16)) {\n\t\t\tleading_fraction_zeros = i;\n\t\t}\n\t\tfraction_digits *= 10;\n\t}\n\tfraction_digits >>= 16;\n\twhile (fraction_digits > 0 && fraction_digits % 10 == 0) {\n\t\tfraction_digits /= 10;\n\t}\n\n\tsize_t printed = malloc_snprintf(buf, FXP_BUF_SIZE, \"%\"FMTu32\".\",\n\t    integer_part);\n\tfor (int i = 0; i < leading_fraction_zeros; i++) {\n\t\tbuf[printed] = '0';\n\t\tprinted++;\n\t}\n\tmalloc_snprintf(&buf[printed], FXP_BUF_SIZE - printed, \"%\"FMTu64,\n\t    fraction_digits);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/hook.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n#include \"jemalloc/internal/hook.h\"\n\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/seq.h\"\n\ntypedef struct hooks_internal_s hooks_internal_t;\nstruct hooks_internal_s {\n\thooks_t hooks;\n\tbool in_use;\n};\n\nseq_define(hooks_internal_t, hooks)\n\nstatic atomic_u_t nhooks = ATOMIC_INIT(0);\nstatic seq_hooks_t hooks[HOOK_MAX];\nstatic malloc_mutex_t hooks_mu;\n\nbool\nhook_boot() {\n\treturn malloc_mutex_init(&hooks_mu, \"hooks\", WITNESS_RANK_HOOK,\n\t    malloc_mutex_rank_exclusive);\n}\n\nstatic void *\nhook_install_locked(hooks_t *to_install) {\n\thooks_internal_t hooks_internal;\n\tfor (int i = 0; i < HOOK_MAX; i++) {\n\t\tbool success = seq_try_load_hooks(&hooks_internal, &hooks[i]);\n\t\t/* We hold mu; no concurrent access. */\n\t\tassert(success);\n\t\tif (!hooks_internal.in_use) {\n\t\t\thooks_internal.hooks = *to_install;\n\t\t\thooks_internal.in_use = true;\n\t\t\tseq_store_hooks(&hooks[i], &hooks_internal);\n\t\t\tatomic_store_u(&nhooks,\n\t\t\t    atomic_load_u(&nhooks, ATOMIC_RELAXED) + 1,\n\t\t\t    ATOMIC_RELAXED);\n\t\t\treturn &hooks[i];\n\t\t}\n\t}\n\treturn NULL;\n}\n\nvoid *\nhook_install(tsdn_t *tsdn, hooks_t *to_install) {\n\tmalloc_mutex_lock(tsdn, &hooks_mu);\n\tvoid *ret = hook_install_locked(to_install);\n\tif (ret != NULL) {\n\t\ttsd_global_slow_inc(tsdn);\n\t}\n\tmalloc_mutex_unlock(tsdn, &hooks_mu);\n\treturn ret;\n}\n\nstatic void\nhook_remove_locked(seq_hooks_t *to_remove) {\n\thooks_internal_t hooks_internal;\n\tbool success = seq_try_load_hooks(&hooks_internal, to_remove);\n\t/* We hold mu; no concurrent access. */\n\tassert(success);\n\t/* Should only remove hooks that were added. 
*/\n\tassert(hooks_internal.in_use);\n\thooks_internal.in_use = false;\n\tseq_store_hooks(to_remove, &hooks_internal);\n\tatomic_store_u(&nhooks, atomic_load_u(&nhooks, ATOMIC_RELAXED) - 1,\n\t    ATOMIC_RELAXED);\n}\n\nvoid\nhook_remove(tsdn_t *tsdn, void *opaque) {\n\tif (config_debug) {\n\t\tchar *hooks_begin = (char *)&hooks[0];\n\t\tchar *hooks_end = (char *)&hooks[HOOK_MAX];\n\t\tchar *hook = (char *)opaque;\n\t\tassert(hooks_begin <= hook && hook < hooks_end\n\t\t    && (hook - hooks_begin) % sizeof(seq_hooks_t) == 0);\n\t}\n\tmalloc_mutex_lock(tsdn, &hooks_mu);\n\thook_remove_locked((seq_hooks_t *)opaque);\n\ttsd_global_slow_dec(tsdn);\n\tmalloc_mutex_unlock(tsdn, &hooks_mu);\n}\n\n#define FOR_EACH_HOOK_BEGIN(hooks_internal_ptr)\t\t\t\t\\\nfor (int for_each_hook_counter = 0;\t\t\t\t\t\\\n    for_each_hook_counter < HOOK_MAX;\t\t\t\t\t\\\n    for_each_hook_counter++) {\t\t\t\t\t\t\\\n\tbool for_each_hook_success = seq_try_load_hooks(\t\t\\\n\t    (hooks_internal_ptr), &hooks[for_each_hook_counter]);\t\\\n\tif (!for_each_hook_success) {\t\t\t\t\t\\\n\t\tcontinue;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tif (!(hooks_internal_ptr)->in_use) {\t\t\t\t\\\n\t\tcontinue;\t\t\t\t\t\t\\\n\t}\n#define FOR_EACH_HOOK_END\t\t\t\t\t\t\\\n}\n\nstatic bool *\nhook_reentrantp() {\n\t/*\n\t * We prevent user reentrancy within hooks.  This is basically just a\n\t * thread-local bool that triggers an early-exit.\n\t *\n\t * We don't fold in_hook into reentrancy.  There are two reasons for\n\t * this:\n\t * - Right now, we turn on reentrancy during things like extent hook\n\t *   execution.  Allocating during extent hooks is not officially\n\t *   supported, but we don't want to break it for the time being.  These\n\t *   sorts of allocations should probably still be hooked, though.\n\t * - If a hook allocates, we may want it to be relatively fast (after\n\t *   all, it executes on every allocator operation).  
Turning on\n\t *   reentrancy is a fairly heavyweight mode (disabling tcache,\n\t *   redirecting to arena 0, etc.).  It's possible we may one day want\n\t *   to turn on reentrant mode here, if it proves too difficult to keep\n\t *   this working.  But that's fairly easy for us to see; OTOH, people\n\t *   not using hooks because they're too slow is easy for us to miss.\n\t *\n\t * The tricky part is that this code might get invoked even if we don't\n\t * have access to tsd.  This function mimics getting a pointer to\n\t * thread-local data, except that it might secretly return a pointer to\n\t * some global data if we know that the caller will take the early-exit\n\t * path.  If we return a bool that indicates that we are reentrant, then\n\t * the caller will go down the early exit path, leaving the global\n\t * untouched.\n\t */\n\tstatic bool in_hook_global = true;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tbool *in_hook = tsdn_in_hookp_get(tsdn);\n\tif (in_hook != NULL) {\n\t\treturn in_hook;\n\t}\n\treturn &in_hook_global;\n}\n\n#define HOOK_PROLOGUE\t\t\t\t\t\t\t\\\n\tif (likely(atomic_load_u(&nhooks, ATOMIC_RELAXED) == 0)) {\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tbool *in_hook = hook_reentrantp();\t\t\t\t\\\n\tif (*in_hook) {\t\t\t\t\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t*in_hook = true;\n\n#define HOOK_EPILOGUE\t\t\t\t\t\t\t\\\n\t*in_hook = false;\n\nvoid\nhook_invoke_alloc(hook_alloc_t type, void *result, uintptr_t result_raw,\n    uintptr_t args_raw[3]) {\n\tHOOK_PROLOGUE\n\n\thooks_internal_t hook;\n\tFOR_EACH_HOOK_BEGIN(&hook)\n\t\thook_alloc h = hook.hooks.alloc_hook;\n\t\tif (h != NULL) {\n\t\t\th(hook.hooks.extra, type, result, result_raw, args_raw);\n\t\t}\n\tFOR_EACH_HOOK_END\n\n\tHOOK_EPILOGUE\n}\n\nvoid\nhook_invoke_dalloc(hook_dalloc_t type, void *address, uintptr_t args_raw[3]) {\n\tHOOK_PROLOGUE\n\thooks_internal_t hook;\n\tFOR_EACH_HOOK_BEGIN(&hook)\n\t\thook_dalloc h = hook.hooks.dalloc_hook;\n\t\tif 
(h != NULL) {\n\t\t\th(hook.hooks.extra, type, address, args_raw);\n\t\t}\n\tFOR_EACH_HOOK_END\n\tHOOK_EPILOGUE\n}\n\nvoid\nhook_invoke_expand(hook_expand_t type, void *address, size_t old_usize,\n    size_t new_usize, uintptr_t result_raw, uintptr_t args_raw[4]) {\n\tHOOK_PROLOGUE\n\thooks_internal_t hook;\n\tFOR_EACH_HOOK_BEGIN(&hook)\n\t\thook_expand h = hook.hooks.expand_hook;\n\t\tif (h != NULL) {\n\t\t\th(hook.hooks.extra, type, address, old_usize, new_usize,\n\t\t\t    result_raw, args_raw);\n\t\t}\n\tFOR_EACH_HOOK_END\n\tHOOK_EPILOGUE\n}\n"
  },
  {
    "path": "deps/jemalloc/src/hpa.c",
"content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/hpa.h\"\n\n#include \"jemalloc/internal/fb.h\"\n#include \"jemalloc/internal/witness.h\"\n\n#define HPA_EDEN_SIZE (128 * HUGEPAGE)\n\nstatic edata_t *hpa_alloc(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t alignment, bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated);\nstatic size_t hpa_alloc_batch(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t nallocs, edata_list_active_t *results, bool *deferred_work_generated);\nstatic bool hpa_expand(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool zero, bool *deferred_work_generated);\nstatic bool hpa_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool *deferred_work_generated);\nstatic void hpa_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated);\nstatic void hpa_dalloc_batch(tsdn_t *tsdn, pai_t *self,\n    edata_list_active_t *list, bool *deferred_work_generated);\nstatic uint64_t hpa_time_until_deferred_work(tsdn_t *tsdn, pai_t *self);\n\nbool\nhpa_supported() {\n#ifdef _WIN32\n\t/*\n\t * At least until the API and implementation are somewhat settled, we\n\t * don't want to try to debug the VM subsystem on the hardest-to-test\n\t * platform.\n\t */\n\treturn false;\n#endif\n\tif (!pages_can_hugify) {\n\t\treturn false;\n\t}\n\t/*\n\t * We fundamentally rely on an address-space-hungry growth strategy for\n\t * hugepages.\n\t */\n\tif (LG_SIZEOF_PTR != 3) {\n\t\treturn false;\n\t}\n\t/*\n\t * If we couldn't detect the value of HUGEPAGE, HUGEPAGE_PAGES becomes\n\t * this sentinel value -- see the comment in pages.h.\n\t */\n\tif (HUGEPAGE_PAGES == 1) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nstatic void\nhpa_do_consistency_checks(hpa_shard_t *shard) {\n\tassert(shard->base != 
NULL);\n}\n\nbool\nhpa_central_init(hpa_central_t *central, base_t *base, const hpa_hooks_t *hooks) {\n\t/* malloc_conf processing should have filtered out these cases. */\n\tassert(hpa_supported());\n\tbool err;\n\terr = malloc_mutex_init(&central->grow_mtx, \"hpa_central_grow\",\n\t    WITNESS_RANK_HPA_CENTRAL_GROW, malloc_mutex_rank_exclusive);\n\tif (err) {\n\t\treturn true;\n\t}\n\terr = malloc_mutex_init(&central->mtx, \"hpa_central\",\n\t    WITNESS_RANK_HPA_CENTRAL, malloc_mutex_rank_exclusive);\n\tif (err) {\n\t\treturn true;\n\t}\n\tcentral->base = base;\n\tcentral->eden = NULL;\n\tcentral->eden_len = 0;\n\tcentral->age_counter = 0;\n\tcentral->hooks = *hooks;\n\treturn false;\n}\n\nstatic hpdata_t *\nhpa_alloc_ps(tsdn_t *tsdn, hpa_central_t *central) {\n\treturn (hpdata_t *)base_alloc(tsdn, central->base, sizeof(hpdata_t),\n\t    CACHELINE);\n}\n\nhpdata_t *\nhpa_central_extract(tsdn_t *tsdn, hpa_central_t *central, size_t size,\n    bool *oom) {\n\t/* Don't yet support big allocations; these should get filtered out. */\n\tassert(size <= HUGEPAGE);\n\t/*\n\t * Should only try to extract from the central allocator if the local\n\t * shard is exhausted.  We should hold the grow_mtx on that shard.\n\t */\n\twitness_assert_positive_depth_to_rank(\n\t    tsdn_witness_tsdp_get(tsdn), WITNESS_RANK_HPA_SHARD_GROW);\n\n\tmalloc_mutex_lock(tsdn, &central->grow_mtx);\n\t*oom = false;\n\n\thpdata_t *ps = NULL;\n\n\t/* Is eden a perfect fit? */\n\tif (central->eden != NULL && central->eden_len == HUGEPAGE) {\n\t\tps = hpa_alloc_ps(tsdn, central);\n\t\tif (ps == NULL) {\n\t\t\t*oom = true;\n\t\t\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\t\t\treturn NULL;\n\t\t}\n\t\thpdata_init(ps, central->eden, central->age_counter++);\n\t\tcentral->eden = NULL;\n\t\tcentral->eden_len = 0;\n\t\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\t\treturn ps;\n\t}\n\n\t/*\n\t * We're about to try to allocate from eden by splitting.  
If eden is\n\t * NULL, we have to allocate it too.  Otherwise, we just have to\n\t * allocate an hpdata_t for the new pageslab.\n\t */\n\tif (central->eden == NULL) {\n\t\t/*\n\t\t * During development, we're primarily concerned with systems\n\t\t * with overcommit.  Eventually, we should be more careful here.\n\t\t */\n\t\tbool commit = true;\n\t\t/* Allocate address space, bailing if we fail. */\n\t\tvoid *new_eden = pages_map(NULL, HPA_EDEN_SIZE, HUGEPAGE,\n\t\t    &commit);\n\t\tif (new_eden == NULL) {\n\t\t\t*oom = true;\n\t\t\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\t\t\treturn NULL;\n\t\t}\n\t\tps = hpa_alloc_ps(tsdn, central);\n\t\tif (ps == NULL) {\n\t\t\tpages_unmap(new_eden, HPA_EDEN_SIZE);\n\t\t\t*oom = true;\n\t\t\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\t\t\treturn NULL;\n\t\t}\n\t\tcentral->eden = new_eden;\n\t\tcentral->eden_len = HPA_EDEN_SIZE;\n\t} else {\n\t\t/* Eden is already nonempty; only need an hpdata_t for ps. */\n\t\tps = hpa_alloc_ps(tsdn, central);\n\t\tif (ps == NULL) {\n\t\t\t*oom = true;\n\t\t\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\tassert(ps != NULL);\n\tassert(central->eden != NULL);\n\tassert(central->eden_len > HUGEPAGE);\n\tassert(central->eden_len % HUGEPAGE == 0);\n\tassert(HUGEPAGE_ADDR2BASE(central->eden) == central->eden);\n\n\thpdata_init(ps, central->eden, central->age_counter++);\n\n\tchar *eden_char = (char *)central->eden;\n\teden_char += HUGEPAGE;\n\tcentral->eden = (void *)eden_char;\n\tcentral->eden_len -= HUGEPAGE;\n\n\tmalloc_mutex_unlock(tsdn, &central->grow_mtx);\n\n\treturn ps;\n}\n\nbool\nhpa_shard_init(hpa_shard_t *shard, hpa_central_t *central, emap_t *emap,\n    base_t *base, edata_cache_t *edata_cache, unsigned ind,\n    const hpa_shard_opts_t *opts) {\n\t/* malloc_conf processing should have filtered out these cases. 
*/\n\tassert(hpa_supported());\n\tbool err;\n\terr = malloc_mutex_init(&shard->grow_mtx, \"hpa_shard_grow\",\n\t    WITNESS_RANK_HPA_SHARD_GROW, malloc_mutex_rank_exclusive);\n\tif (err) {\n\t\treturn true;\n\t}\n\terr = malloc_mutex_init(&shard->mtx, \"hpa_shard\",\n\t    WITNESS_RANK_HPA_SHARD, malloc_mutex_rank_exclusive);\n\tif (err) {\n\t\treturn true;\n\t}\n\n\tassert(edata_cache != NULL);\n\tshard->central = central;\n\tshard->base = base;\n\tedata_cache_fast_init(&shard->ecf, edata_cache);\n\tpsset_init(&shard->psset);\n\tshard->age_counter = 0;\n\tshard->ind = ind;\n\tshard->emap = emap;\n\n\tshard->opts = *opts;\n\n\tshard->npending_purge = 0;\n\tnstime_init_zero(&shard->last_purge);\n\n\tshard->stats.npurge_passes = 0;\n\tshard->stats.npurges = 0;\n\tshard->stats.nhugifies = 0;\n\tshard->stats.ndehugifies = 0;\n\n\t/*\n\t * Fill these in last, so that if an hpa_shard gets used despite\n\t * initialization failing, we'll at least crash instead of just\n\t * operating on corrupted data.\n\t */\n\tshard->pai.alloc = &hpa_alloc;\n\tshard->pai.alloc_batch = &hpa_alloc_batch;\n\tshard->pai.expand = &hpa_expand;\n\tshard->pai.shrink = &hpa_shrink;\n\tshard->pai.dalloc = &hpa_dalloc;\n\tshard->pai.dalloc_batch = &hpa_dalloc_batch;\n\tshard->pai.time_until_deferred_work = &hpa_time_until_deferred_work;\n\n\thpa_do_consistency_checks(shard);\n\n\treturn false;\n}\n\n/*\n * Note that the stats functions here follow the usual stats naming conventions;\n * \"merge\" obtains the stats from some live object instance, while \"accum\"\n * only combines the stats from one stats object to another.  
Hence the lack of\n * locking here.\n */\nstatic void\nhpa_shard_nonderived_stats_accum(hpa_shard_nonderived_stats_t *dst,\n    hpa_shard_nonderived_stats_t *src) {\n\tdst->npurge_passes += src->npurge_passes;\n\tdst->npurges += src->npurges;\n\tdst->nhugifies += src->nhugifies;\n\tdst->ndehugifies += src->ndehugifies;\n}\n\nvoid\nhpa_shard_stats_accum(hpa_shard_stats_t *dst, hpa_shard_stats_t *src) {\n\tpsset_stats_accum(&dst->psset_stats, &src->psset_stats);\n\thpa_shard_nonderived_stats_accum(&dst->nonderived_stats,\n\t    &src->nonderived_stats);\n}\n\nvoid\nhpa_shard_stats_merge(tsdn_t *tsdn, hpa_shard_t *shard,\n    hpa_shard_stats_t *dst) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_lock(tsdn, &shard->grow_mtx);\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tpsset_stats_accum(&dst->psset_stats, &shard->psset.stats);\n\thpa_shard_nonderived_stats_accum(&dst->nonderived_stats, &shard->stats);\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\tmalloc_mutex_unlock(tsdn, &shard->grow_mtx);\n}\n\nstatic bool\nhpa_good_hugification_candidate(hpa_shard_t *shard, hpdata_t *ps) {\n\t/*\n\t * Note that this needs to be >= rather than just >, because of the\n\t * important special case in which the hugification threshold is exactly\n\t * HUGEPAGE.\n\t */\n\treturn hpdata_nactive_get(ps) * PAGE\n\t    >= shard->opts.hugification_threshold;\n}\n\nstatic size_t\nhpa_adjusted_ndirty(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\treturn psset_ndirty(&shard->psset) - shard->npending_purge;\n}\n\nstatic size_t\nhpa_ndirty_max(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tif (shard->opts.dirty_mult == (fxp_t)-1) {\n\t\treturn (size_t)-1;\n\t}\n\treturn fxp_mul_frac(psset_nactive(&shard->psset),\n\t    shard->opts.dirty_mult);\n}\n\nstatic bool\nhpa_hugify_blocked_by_ndirty(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\thpdata_t *to_hugify = 
psset_pick_hugify(&shard->psset);\n\tif (to_hugify == NULL) {\n\t\treturn false;\n\t}\n\treturn hpa_adjusted_ndirty(tsdn, shard)\n\t    + hpdata_nretained_get(to_hugify) > hpa_ndirty_max(tsdn, shard);\n}\n\nstatic bool\nhpa_should_purge(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tif (hpa_adjusted_ndirty(tsdn, shard) > hpa_ndirty_max(tsdn, shard)) {\n\t\treturn true;\n\t}\n\tif (hpa_hugify_blocked_by_ndirty(tsdn, shard)) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\nstatic void\nhpa_update_purge_hugify_eligibility(tsdn_t *tsdn, hpa_shard_t *shard,\n    hpdata_t *ps) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tif (hpdata_changing_state_get(ps)) {\n\t\thpdata_purge_allowed_set(ps, false);\n\t\thpdata_disallow_hugify(ps);\n\t\treturn;\n\t}\n\t/*\n\t * Hugepages are distinctly costly to purge, so try to avoid it unless\n\t * they're *particularly* full of dirty pages.  Eventually, we should\n\t * use a smarter / more dynamic heuristic for situations where we have\n\t * to manually hugify.\n\t *\n\t * In situations where we don't manually hugify, this problem is\n\t * reduced.  The \"bad\" situation we're trying to avoid is one that's\n\t * common in some Linux configurations (where both enabled and defrag\n\t * are set to madvise) that can lead to long latency spikes on the first\n\t * access after a hugification.  The ideal policy in such configurations\n\t * is probably time-based for both purging and hugifying; only hugify a\n\t * hugepage if it's met the criteria for some extended period of time,\n\t * and only dehugify it if it's failed to meet the criteria for an\n\t * extended period of time.  
When background threads are on, we should\n\t * try to take this hit on one of them, as well.\n\t *\n\t * I think the ideal setting is THP always enabled, and defrag set to\n\t * deferred; in that case we don't need any explicit calls on the\n\t * allocator's end at all; we just try to pack allocations in a\n\t * hugepage-friendly manner and let the OS hugify in the background.\n\t */\n\thpdata_purge_allowed_set(ps, hpdata_ndirty_get(ps) > 0);\n\tif (hpa_good_hugification_candidate(shard, ps)\n\t    && !hpdata_huge_get(ps)) {\n\t\tnstime_t now;\n\t\tshard->central->hooks.curtime(&now, /* first_reading */ true);\n\t\thpdata_allow_hugify(ps, now);\n\t}\n\t/*\n\t * Once a hugepage has become eligible for hugification, we don't mark\n\t * it as ineligible just because it stops meeting the criteria (this\n\t * could lead to situations where a hugepage that spends most of its\n\t * time meeting the criteria never quite gets hugified if there are\n\t * intervening deallocations).  The idea is that the hugification delay\n\t * will allow them to get purged, resetting their \"hugify-allowed\" bit.\n\t * If they don't get purged, then the hugification isn't hurting and\n\t * might help.  As an exception, we don't hugify hugepages that are now\n\t * empty; it definitely doesn't help there until the hugepage gets\n\t * reused, which is likely not for a while.\n\t */\n\tif (hpdata_nactive_get(ps) == 0) {\n\t\thpdata_disallow_hugify(ps);\n\t}\n}\n\nstatic bool\nhpa_shard_has_deferred_work(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\thpdata_t *to_hugify = psset_pick_hugify(&shard->psset);\n\treturn to_hugify != NULL || hpa_should_purge(tsdn, shard);\n}\n\n/* Returns whether or not we purged anything. 
*/\nstatic bool\nhpa_try_purge(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\n\thpdata_t *to_purge = psset_pick_purge(&shard->psset);\n\tif (to_purge == NULL) {\n\t\treturn false;\n\t}\n\tassert(hpdata_purge_allowed_get(to_purge));\n\tassert(!hpdata_changing_state_get(to_purge));\n\n\t/*\n\t * Don't let anyone else purge or hugify this page while\n\t * we're purging it (allocations and deallocations are\n\t * OK).\n\t */\n\tpsset_update_begin(&shard->psset, to_purge);\n\tassert(hpdata_alloc_allowed_get(to_purge));\n\thpdata_mid_purge_set(to_purge, true);\n\thpdata_purge_allowed_set(to_purge, false);\n\thpdata_disallow_hugify(to_purge);\n\t/*\n\t * Unlike with hugification (where concurrent\n\t * allocations are allowed), concurrent allocation out\n\t * of a hugepage being purged is unsafe; we might hand\n\t * out an extent for an allocation and then purge it\n\t * (clearing out user data).\n\t */\n\thpdata_alloc_allowed_set(to_purge, false);\n\tpsset_update_end(&shard->psset, to_purge);\n\n\t/* Gather all the metadata we'll need during the purge. */\n\tbool dehugify = hpdata_huge_get(to_purge);\n\thpdata_purge_state_t purge_state;\n\tsize_t num_to_purge = hpdata_purge_begin(to_purge, &purge_state);\n\n\tshard->npending_purge += num_to_purge;\n\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\n\t/* Actually do the purging, now that the lock is dropped. 
*/\n\tif (dehugify) {\n\t\tshard->central->hooks.dehugify(hpdata_addr_get(to_purge),\n\t\t    HUGEPAGE);\n\t}\n\tsize_t total_purged = 0;\n\tuint64_t purges_this_pass = 0;\n\tvoid *purge_addr;\n\tsize_t purge_size;\n\twhile (hpdata_purge_next(to_purge, &purge_state, &purge_addr,\n\t    &purge_size)) {\n\t\ttotal_purged += purge_size;\n\t\tassert(total_purged <= HUGEPAGE);\n\t\tpurges_this_pass++;\n\t\tshard->central->hooks.purge(purge_addr, purge_size);\n\t}\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\t/* The shard updates */\n\tshard->npending_purge -= num_to_purge;\n\tshard->stats.npurge_passes++;\n\tshard->stats.npurges += purges_this_pass;\n\tshard->central->hooks.curtime(&shard->last_purge,\n\t    /* first_reading */ false);\n\tif (dehugify) {\n\t\tshard->stats.ndehugifies++;\n\t}\n\n\t/* The hpdata updates. */\n\tpsset_update_begin(&shard->psset, to_purge);\n\tif (dehugify) {\n\t\thpdata_dehugify(to_purge);\n\t}\n\thpdata_purge_end(to_purge, &purge_state);\n\thpdata_mid_purge_set(to_purge, false);\n\n\thpdata_alloc_allowed_set(to_purge, true);\n\thpa_update_purge_hugify_eligibility(tsdn, shard, to_purge);\n\n\tpsset_update_end(&shard->psset, to_purge);\n\n\treturn true;\n}\n\n/* Returns whether or not we hugified anything. */\nstatic bool\nhpa_try_hugify(tsdn_t *tsdn, hpa_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\n\tif (hpa_hugify_blocked_by_ndirty(tsdn, shard)) {\n\t\treturn false;\n\t}\n\n\thpdata_t *to_hugify = psset_pick_hugify(&shard->psset);\n\tif (to_hugify == NULL) {\n\t\treturn false;\n\t}\n\tassert(hpdata_hugify_allowed_get(to_hugify));\n\tassert(!hpdata_changing_state_get(to_hugify));\n\n\t/* Make sure that it's been hugifiable for long enough. 
*/\n\tnstime_t time_hugify_allowed = hpdata_time_hugify_allowed(to_hugify);\n\tuint64_t millis = shard->central->hooks.ms_since(&time_hugify_allowed);\n\tif (millis < shard->opts.hugify_delay_ms) {\n\t\treturn false;\n\t}\n\n\t/*\n\t * Don't let anyone else purge or hugify this page while\n\t * we're hugifying it (allocations and deallocations are\n\t * OK).\n\t */\n\tpsset_update_begin(&shard->psset, to_hugify);\n\thpdata_mid_hugify_set(to_hugify, true);\n\thpdata_purge_allowed_set(to_hugify, false);\n\thpdata_disallow_hugify(to_hugify);\n\tassert(hpdata_alloc_allowed_get(to_hugify));\n\tpsset_update_end(&shard->psset, to_hugify);\n\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\n\tshard->central->hooks.hugify(hpdata_addr_get(to_hugify), HUGEPAGE);\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tshard->stats.nhugifies++;\n\n\tpsset_update_begin(&shard->psset, to_hugify);\n\thpdata_hugify(to_hugify);\n\thpdata_mid_hugify_set(to_hugify, false);\n\thpa_update_purge_hugify_eligibility(tsdn, shard, to_hugify);\n\tpsset_update_end(&shard->psset, to_hugify);\n\n\treturn true;\n}\n\n/*\n * Execution of deferred work is forced if it's triggered by an explicit\n * hpa_shard_do_deferred_work() call.\n */\nstatic void\nhpa_shard_maybe_do_deferred_work(tsdn_t *tsdn, hpa_shard_t *shard,\n    bool forced) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tif (!forced && shard->opts.deferral_allowed) {\n\t\treturn;\n\t}\n\t/*\n\t * If we're on a background thread, do work so long as there's work to\n\t * be done.  Otherwise, bound latency to not be *too* bad by doing at\n\t * most a small fixed number of operations.\n\t */\n\tbool hugified = false;\n\tbool purged = false;\n\tsize_t max_ops = (forced ? 
(size_t)-1 : 16);\n\tsize_t nops = 0;\n\tdo {\n\t\t/*\n\t\t * Always purge before hugifying, to make sure we get some\n\t\t * ability to hit our quiescence targets.\n\t\t */\n\t\tpurged = false;\n\t\twhile (hpa_should_purge(tsdn, shard) && nops < max_ops) {\n\t\t\tpurged = hpa_try_purge(tsdn, shard);\n\t\t\tif (purged) {\n\t\t\t\tnops++;\n\t\t\t}\n\t\t}\n\t\thugified = hpa_try_hugify(tsdn, shard);\n\t\tif (hugified) {\n\t\t\tnops++;\n\t\t}\n\t\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\t} while ((hugified || purged) && nops < max_ops);\n}\n\nstatic edata_t *\nhpa_try_alloc_one_no_grow(tsdn_t *tsdn, hpa_shard_t *shard, size_t size,\n    bool *oom) {\n\tbool err;\n\tedata_t *edata = edata_cache_fast_get(tsdn, &shard->ecf);\n\tif (edata == NULL) {\n\t\t*oom = true;\n\t\treturn NULL;\n\t}\n\n\thpdata_t *ps = psset_pick_alloc(&shard->psset, size);\n\tif (ps == NULL) {\n\t\tedata_cache_fast_put(tsdn, &shard->ecf, edata);\n\t\treturn NULL;\n\t}\n\n\tpsset_update_begin(&shard->psset, ps);\n\n\tif (hpdata_empty(ps)) {\n\t\t/*\n\t\t * If the pageslab used to be empty, treat it as though it's\n\t\t * brand new for fragmentation-avoidance purposes; what we're\n\t\t * trying to approximate is the age of the allocations *in* that\n\t\t * pageslab, and the allocations in the new pageslab are\n\t\t * definitionally the youngest in this hpa shard.\n\t\t */\n\t\thpdata_age_set(ps, shard->age_counter++);\n\t}\n\n\tvoid *addr = hpdata_reserve_alloc(ps, size);\n\tedata_init(edata, shard->ind, addr, size, /* slab */ false,\n\t    SC_NSIZES, /* sn */ hpdata_age_get(ps), extent_state_active,\n\t    /* zeroed */ false, /* committed */ true, EXTENT_PAI_HPA,\n\t    EXTENT_NOT_HEAD);\n\tedata_ps_set(edata, ps);\n\n\t/*\n\t * This could theoretically be moved outside of the critical section,\n\t * but that introduces the potential for a race.  
Without the lock, the\n\t * (initially nonempty, since this is the reuse pathway) pageslab we\n\t * allocated out of could become otherwise empty while the lock is\n\t * dropped.  This would force us to deal with a pageslab eviction down\n\t * the error pathway, which is a pain.\n\t */\n\terr = emap_register_boundary(tsdn, shard->emap, edata,\n\t    SC_NSIZES, /* slab */ false);\n\tif (err) {\n\t\thpdata_unreserve(ps, edata_addr_get(edata),\n\t\t    edata_size_get(edata));\n\t\t/*\n\t\t * We should arguably reset dirty state here, but this would\n\t\t * require some sort of prepare + commit functionality that's a\n\t\t * little much to deal with for now.\n\t\t *\n\t\t * We don't have a do_deferred_work down this pathway, on the\n\t\t * principle that we didn't *really* affect shard state (we\n\t\t * tweaked the stats, but our tweaks weren't really accurate).\n\t\t */\n\t\tpsset_update_end(&shard->psset, ps);\n\t\tedata_cache_fast_put(tsdn, &shard->ecf, edata);\n\t\t*oom = true;\n\t\treturn NULL;\n\t}\n\n\thpa_update_purge_hugify_eligibility(tsdn, shard, ps);\n\tpsset_update_end(&shard->psset, ps);\n\treturn edata;\n}\n\nstatic size_t\nhpa_try_alloc_batch_no_grow(tsdn_t *tsdn, hpa_shard_t *shard, size_t size,\n    bool *oom, size_t nallocs, edata_list_active_t *results,\n    bool *deferred_work_generated) {\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tsize_t nsuccess = 0;\n\tfor (; nsuccess < nallocs; nsuccess++) {\n\t\tedata_t *edata = hpa_try_alloc_one_no_grow(tsdn, shard, size,\n\t\t    oom);\n\t\tif (edata == NULL) {\n\t\t\tbreak;\n\t\t}\n\t\tedata_list_active_append(results, edata);\n\t}\n\n\thpa_shard_maybe_do_deferred_work(tsdn, shard, /* forced */ false);\n\t*deferred_work_generated = hpa_shard_has_deferred_work(tsdn, shard);\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\treturn nsuccess;\n}\n\nstatic size_t\nhpa_alloc_batch_psset(tsdn_t *tsdn, hpa_shard_t *shard, size_t size,\n    size_t nallocs, edata_list_active_t *results,\n    bool 
*deferred_work_generated) {\n\tassert(size <= shard->opts.slab_max_alloc);\n\tbool oom = false;\n\n\tsize_t nsuccess = hpa_try_alloc_batch_no_grow(tsdn, shard, size, &oom,\n\t    nallocs, results, deferred_work_generated);\n\n\tif (nsuccess == nallocs || oom) {\n\t\treturn nsuccess;\n\t}\n\n\t/*\n\t * We didn't OOM, but weren't able to fill everything requested of us;\n\t * try to grow.\n\t */\n\tmalloc_mutex_lock(tsdn, &shard->grow_mtx);\n\t/*\n\t * Check for grow races; maybe some earlier thread expanded the psset\n\t * in between when we dropped the main mutex and grabbed the grow mutex.\n\t */\n\tnsuccess += hpa_try_alloc_batch_no_grow(tsdn, shard, size, &oom,\n\t    nallocs - nsuccess, results, deferred_work_generated);\n\tif (nsuccess == nallocs || oom) {\n\t\tmalloc_mutex_unlock(tsdn, &shard->grow_mtx);\n\t\treturn nsuccess;\n\t}\n\n\t/*\n\t * Note that we don't hold shard->mtx here (while growing);\n\t * deallocations (and allocations of smaller sizes) may still succeed\n\t * while we're doing this potentially expensive system call.\n\t */\n\thpdata_t *ps = hpa_central_extract(tsdn, shard->central, size, &oom);\n\tif (ps == NULL) {\n\t\tmalloc_mutex_unlock(tsdn, &shard->grow_mtx);\n\t\treturn nsuccess;\n\t}\n\n\t/*\n\t * We got the pageslab; allocate from it.  
This does an unlock followed\n\t * by a lock on the same mutex, and holds the grow mutex while doing\n\t * deferred work, but this is an uncommon path; the simplicity is worth\n\t * it.\n\t */\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tpsset_insert(&shard->psset, ps);\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\n\tnsuccess += hpa_try_alloc_batch_no_grow(tsdn, shard, size, &oom,\n\t    nallocs - nsuccess, results, deferred_work_generated);\n\t/*\n\t * Drop grow_mtx before doing deferred work; other threads blocked on it\n\t * should be allowed to proceed while we're working.\n\t */\n\tmalloc_mutex_unlock(tsdn, &shard->grow_mtx);\n\n\treturn nsuccess;\n}\n\nstatic hpa_shard_t *\nhpa_from_pai(pai_t *self) {\n\tassert(self->alloc == &hpa_alloc);\n\tassert(self->expand == &hpa_expand);\n\tassert(self->shrink == &hpa_shrink);\n\tassert(self->dalloc == &hpa_dalloc);\n\treturn (hpa_shard_t *)self;\n}\n\nstatic size_t\nhpa_alloc_batch(tsdn_t *tsdn, pai_t *self, size_t size, size_t nallocs,\n    edata_list_active_t *results, bool *deferred_work_generated) {\n\tassert(nallocs > 0);\n\tassert((size & PAGE_MASK) == 0);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\thpa_shard_t *shard = hpa_from_pai(self);\n\n\tif (size > shard->opts.slab_max_alloc) {\n\t\treturn 0;\n\t}\n\n\tsize_t nsuccess = hpa_alloc_batch_psset(tsdn, shard, size, nallocs,\n\t    results, deferred_work_generated);\n\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\t/*\n\t * Guard the sanity checks with config_debug because the loop cannot be\n\t * proven non-circular by the compiler, even if everything within the\n\t * loop is optimized away.\n\t */\n\tif (config_debug) {\n\t\tedata_t *edata;\n\t\tql_foreach(edata, &results->head, ql_link_active) {\n\t\t\temap_assert_mapped(tsdn, shard->emap, edata);\n\t\t\tassert(edata_pai_get(edata) == EXTENT_PAI_HPA);\n\t\t\tassert(edata_state_get(edata) == 
extent_state_active);\n\t\t\tassert(edata_arena_ind_get(edata) == shard->ind);\n\t\t\tassert(edata_szind_get_maybe_invalid(edata) ==\n\t\t\t    SC_NSIZES);\n\t\t\tassert(!edata_slab_get(edata));\n\t\t\tassert(edata_committed_get(edata));\n\t\t\tassert(edata_base_get(edata) == edata_addr_get(edata));\n\t\t\tassert(edata_base_get(edata) != NULL);\n\t\t}\n\t}\n\treturn nsuccess;\n}\n\nstatic edata_t *\nhpa_alloc(tsdn_t *tsdn, pai_t *self, size_t size, size_t alignment, bool zero,\n    bool guarded, bool frequent_reuse, bool *deferred_work_generated) {\n\tassert((size & PAGE_MASK) == 0);\n\tassert(!guarded);\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\n\t/* We don't handle alignment or zeroing for now. */\n\tif (alignment > PAGE || zero) {\n\t\treturn NULL;\n\t}\n\t/*\n\t * An alloc with alignment == PAGE and zero == false is equivalent to a\n\t * batch alloc of 1.  Just do that, so we can share code.\n\t */\n\tedata_list_active_t results;\n\tedata_list_active_init(&results);\n\tsize_t nallocs = hpa_alloc_batch(tsdn, self, size, /* nallocs */ 1,\n\t    &results, deferred_work_generated);\n\tassert(nallocs == 0 || nallocs == 1);\n\tedata_t *edata = edata_list_active_first(&results);\n\treturn edata;\n}\n\nstatic bool\nhpa_expand(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool zero, bool *deferred_work_generated) {\n\t/* Expand not yet supported. */\n\treturn true;\n}\n\nstatic bool\nhpa_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool *deferred_work_generated) {\n\t/* Shrink not yet supported. 
*/\n\treturn true;\n}\n\nstatic void\nhpa_dalloc_prepare_unlocked(tsdn_t *tsdn, hpa_shard_t *shard, edata_t *edata) {\n\tmalloc_mutex_assert_not_owner(tsdn, &shard->mtx);\n\n\tassert(edata_pai_get(edata) == EXTENT_PAI_HPA);\n\tassert(edata_state_get(edata) == extent_state_active);\n\tassert(edata_arena_ind_get(edata) == shard->ind);\n\tassert(edata_szind_get_maybe_invalid(edata) == SC_NSIZES);\n\tassert(edata_committed_get(edata));\n\tassert(edata_base_get(edata) != NULL);\n\n\t/*\n\t * Another thread shouldn't be trying to touch the metadata of an\n\t * allocation being freed.  The one exception is a merge attempt from a\n\t * lower-addressed PAC extent; in this case we have a nominal race on\n\t * the edata metadata bits, but in practice the fact that the PAI bits\n\t * are different will prevent any further access.  The race is bad, but\n\t * benign in practice, and the long term plan is to track enough state\n\t * in the rtree to prevent these merge attempts in the first place.\n\t */\n\tedata_addr_set(edata, edata_base_get(edata));\n\tedata_zeroed_set(edata, false);\n\temap_deregister_boundary(tsdn, shard->emap, edata);\n}\n\nstatic void\nhpa_dalloc_locked(tsdn_t *tsdn, hpa_shard_t *shard, edata_t *edata) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\n\t/*\n\t * Release the metadata early, to avoid having to remember to do it\n\t * while we're also doing tricky purging logic.  First, we need to grab\n\t * a few bits of metadata from it.\n\t *\n\t * Note that the shard mutex protects ps's metadata too; it wouldn't be\n\t * correct to try to read most information out of it without the lock.\n\t */\n\thpdata_t *ps = edata_ps_get(edata);\n\t/* Currently, all edatas come from pageslabs. 
*/\n\tassert(ps != NULL);\n\tvoid *unreserve_addr = edata_addr_get(edata);\n\tsize_t unreserve_size = edata_size_get(edata);\n\tedata_cache_fast_put(tsdn, &shard->ecf, edata);\n\n\tpsset_update_begin(&shard->psset, ps);\n\thpdata_unreserve(ps, unreserve_addr, unreserve_size);\n\thpa_update_purge_hugify_eligibility(tsdn, shard, ps);\n\tpsset_update_end(&shard->psset, ps);\n}\n\nstatic void\nhpa_dalloc_batch(tsdn_t *tsdn, pai_t *self, edata_list_active_t *list,\n    bool *deferred_work_generated) {\n\thpa_shard_t *shard = hpa_from_pai(self);\n\n\tedata_t *edata;\n\tql_foreach(edata, &list->head, ql_link_active) {\n\t\thpa_dalloc_prepare_unlocked(tsdn, shard, edata);\n\t}\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\t/* Now, remove from the list. */\n\twhile ((edata = edata_list_active_first(list)) != NULL) {\n\t\tedata_list_active_remove(list, edata);\n\t\thpa_dalloc_locked(tsdn, shard, edata);\n\t}\n\thpa_shard_maybe_do_deferred_work(tsdn, shard, /* forced */ false);\n\t*deferred_work_generated =\n\t    hpa_shard_has_deferred_work(tsdn, shard);\n\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n}\n\nstatic void\nhpa_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated) {\n\tassert(!edata_guarded_get(edata));\n\t/* Just a dalloc_batch of size 1; this lets us share logic. 
*/\n\tedata_list_active_t dalloc_list;\n\tedata_list_active_init(&dalloc_list);\n\tedata_list_active_append(&dalloc_list, edata);\n\thpa_dalloc_batch(tsdn, self, &dalloc_list, deferred_work_generated);\n}\n\n/*\n * Calculate time until either purging or hugification ought to happen.\n * Called by background threads.\n */\nstatic uint64_t\nhpa_time_until_deferred_work(tsdn_t *tsdn, pai_t *self) {\n\thpa_shard_t *shard = hpa_from_pai(self);\n\tuint64_t time_ns = BACKGROUND_THREAD_DEFERRED_MAX;\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\n\thpdata_t *to_hugify = psset_pick_hugify(&shard->psset);\n\tif (to_hugify != NULL) {\n\t\tnstime_t time_hugify_allowed =\n\t\t    hpdata_time_hugify_allowed(to_hugify);\n\t\tuint64_t since_hugify_allowed_ms =\n\t\t    shard->central->hooks.ms_since(&time_hugify_allowed);\n\t\t/*\n\t\t * If not enough time has passed since hugification was allowed,\n\t\t * sleep for the rest.\n\t\t */\n\t\tif (since_hugify_allowed_ms < shard->opts.hugify_delay_ms) {\n\t\t\ttime_ns = shard->opts.hugify_delay_ms -\n\t\t\t    since_hugify_allowed_ms;\n\t\t\ttime_ns *= 1000 * 1000;\n\t\t} else {\n\t\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t\t\treturn BACKGROUND_THREAD_DEFERRED_MIN;\n\t\t}\n\t}\n\n\tif (hpa_should_purge(tsdn, shard)) {\n\t\t/*\n\t\t * If we haven't purged before, no need to check interval\n\t\t * between purges. 
Simply purge as soon as possible.\n\t\t */\n\t\tif (shard->stats.npurge_passes == 0) {\n\t\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t\t\treturn BACKGROUND_THREAD_DEFERRED_MIN;\n\t\t}\n\t\tuint64_t since_last_purge_ms = shard->central->hooks.ms_since(\n\t\t    &shard->last_purge);\n\n\t\tif (since_last_purge_ms < shard->opts.min_purge_interval_ms) {\n\t\t\tuint64_t until_purge_ns;\n\t\t\tuntil_purge_ns = shard->opts.min_purge_interval_ms -\n\t\t\t    since_last_purge_ms;\n\t\t\tuntil_purge_ns *= 1000 * 1000;\n\n\t\t\tif (until_purge_ns < time_ns) {\n\t\t\t\ttime_ns = until_purge_ns;\n\t\t\t}\n\t\t} else {\n\t\t\ttime_ns = BACKGROUND_THREAD_DEFERRED_MIN;\n\t\t}\n\t}\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\treturn time_ns;\n}\n\nvoid\nhpa_shard_disable(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tedata_cache_fast_disable(tsdn, &shard->ecf);\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n}\n\nstatic void\nhpa_shard_assert_stats_empty(psset_bin_stats_t *bin_stats) {\n\tassert(bin_stats->npageslabs == 0);\n\tassert(bin_stats->nactive == 0);\n}\n\nstatic void\nhpa_assert_empty(tsdn_t *tsdn, hpa_shard_t *shard, psset_t *psset) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tfor (int huge = 0; huge <= 1; huge++) {\n\t\thpa_shard_assert_stats_empty(&psset->stats.full_slabs[huge]);\n\t\tfor (pszind_t i = 0; i < PSSET_NPSIZES; i++) {\n\t\t\thpa_shard_assert_stats_empty(\n\t\t\t    &psset->stats.nonfull_slabs[i][huge]);\n\t\t}\n\t}\n}\n\nvoid\nhpa_shard_destroy(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\t/*\n\t * By the time we're here, the arena code should have dalloc'd all the\n\t * active extents, which means we should have eventually evicted\n\t * everything from the psset, so it shouldn't be able to serve even a\n\t * 1-page allocation.\n\t */\n\tif (config_debug) {\n\t\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\t\thpa_assert_empty(tsdn, shard, 
&shard->psset);\n\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t}\n\thpdata_t *ps;\n\twhile ((ps = psset_pick_alloc(&shard->psset, PAGE)) != NULL) {\n\t\t/* There should be no allocations anywhere. */\n\t\tassert(hpdata_empty(ps));\n\t\tpsset_remove(&shard->psset, ps);\n\t\tshard->central->hooks.unmap(hpdata_addr_get(ps), HUGEPAGE);\n\t}\n}\n\nvoid\nhpa_shard_set_deferral_allowed(tsdn_t *tsdn, hpa_shard_t *shard,\n    bool deferral_allowed) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tbool deferral_previously_allowed = shard->opts.deferral_allowed;\n\tshard->opts.deferral_allowed = deferral_allowed;\n\tif (deferral_previously_allowed && !deferral_allowed) {\n\t\thpa_shard_maybe_do_deferred_work(tsdn, shard,\n\t\t    /* forced */ true);\n\t}\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n}\n\nvoid\nhpa_shard_do_deferred_work(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\thpa_shard_maybe_do_deferred_work(tsdn, shard, /* forced */ true);\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n}\n\nvoid\nhpa_shard_prefork3(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_prefork(tsdn, &shard->grow_mtx);\n}\n\nvoid\nhpa_shard_prefork4(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_prefork(tsdn, &shard->mtx);\n}\n\nvoid\nhpa_shard_postfork_parent(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_postfork_parent(tsdn, &shard->grow_mtx);\n\tmalloc_mutex_postfork_parent(tsdn, &shard->mtx);\n}\n\nvoid\nhpa_shard_postfork_child(tsdn_t *tsdn, hpa_shard_t *shard) {\n\thpa_do_consistency_checks(shard);\n\n\tmalloc_mutex_postfork_child(tsdn, &shard->grow_mtx);\n\tmalloc_mutex_postfork_child(tsdn, &shard->mtx);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/hpa_hooks.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/hpa_hooks.h\"\n\nstatic void *hpa_hooks_map(size_t size);\nstatic void hpa_hooks_unmap(void *ptr, size_t size);\nstatic void hpa_hooks_purge(void *ptr, size_t size);\nstatic void hpa_hooks_hugify(void *ptr, size_t size);\nstatic void hpa_hooks_dehugify(void *ptr, size_t size);\nstatic void hpa_hooks_curtime(nstime_t *r_nstime, bool first_reading);\nstatic uint64_t hpa_hooks_ms_since(nstime_t *past_nstime);\n\nhpa_hooks_t hpa_hooks_default = {\n\t&hpa_hooks_map,\n\t&hpa_hooks_unmap,\n\t&hpa_hooks_purge,\n\t&hpa_hooks_hugify,\n\t&hpa_hooks_dehugify,\n\t&hpa_hooks_curtime,\n\t&hpa_hooks_ms_since\n};\n\nstatic void *\nhpa_hooks_map(size_t size) {\n\tbool commit = true;\n\treturn pages_map(NULL, size, HUGEPAGE, &commit);\n}\n\nstatic void\nhpa_hooks_unmap(void *ptr, size_t size) {\n\tpages_unmap(ptr, size);\n}\n\nstatic void\nhpa_hooks_purge(void *ptr, size_t size) {\n\tpages_purge_forced(ptr, size);\n}\n\nstatic void\nhpa_hooks_hugify(void *ptr, size_t size) {\n\tbool err = pages_huge(ptr, size);\n\t(void)err;\n}\n\nstatic void\nhpa_hooks_dehugify(void *ptr, size_t size) {\n\tbool err = pages_nohuge(ptr, size);\n\t(void)err;\n}\n\nstatic void\nhpa_hooks_curtime(nstime_t *r_nstime, bool first_reading) {\n\tif (first_reading) {\n\t\tnstime_init_zero(r_nstime);\n\t}\n\tnstime_update(r_nstime);\n}\n\nstatic uint64_t\nhpa_hooks_ms_since(nstime_t *past_nstime) {\n\treturn nstime_ns_since(past_nstime) / 1000 / 1000;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/hpdata.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/hpdata.h\"\n\nstatic int\nhpdata_age_comp(const hpdata_t *a, const hpdata_t *b) {\n\tuint64_t a_age = hpdata_age_get(a);\n\tuint64_t b_age = hpdata_age_get(b);\n\t/*\n\t * hpdata ages are operation counts in the psset; no two should be the\n\t * same.\n\t */\n\tassert(a_age != b_age);\n\treturn (a_age > b_age) - (a_age < b_age);\n}\n\nph_gen(, hpdata_age_heap, hpdata_t, age_link, hpdata_age_comp)\n\nvoid\nhpdata_init(hpdata_t *hpdata, void *addr, uint64_t age) {\n\thpdata_addr_set(hpdata, addr);\n\thpdata_age_set(hpdata, age);\n\thpdata->h_huge = false;\n\thpdata->h_alloc_allowed = true;\n\thpdata->h_in_psset_alloc_container = false;\n\thpdata->h_purge_allowed = false;\n\thpdata->h_hugify_allowed = false;\n\thpdata->h_in_psset_hugify_container = false;\n\thpdata->h_mid_purge = false;\n\thpdata->h_mid_hugify = false;\n\thpdata->h_updating = false;\n\thpdata->h_in_psset = false;\n\thpdata_longest_free_range_set(hpdata, HUGEPAGE_PAGES);\n\thpdata->h_nactive = 0;\n\tfb_init(hpdata->active_pages, HUGEPAGE_PAGES);\n\thpdata->h_ntouched = 0;\n\tfb_init(hpdata->touched_pages, HUGEPAGE_PAGES);\n\n\thpdata_assert_consistent(hpdata);\n}\n\nvoid *\nhpdata_reserve_alloc(hpdata_t *hpdata, size_t sz) {\n\thpdata_assert_consistent(hpdata);\n\t/*\n\t * This is a metadata change; the hpdata should therefore either not be\n\t * in the psset, or should have explicitly marked itself as being\n\t * mid-update.\n\t */\n\tassert(!hpdata->h_in_psset || hpdata->h_updating);\n\tassert(hpdata->h_alloc_allowed);\n\tassert((sz & PAGE_MASK) == 0);\n\tsize_t npages = sz >> LG_PAGE;\n\tassert(npages <= hpdata_longest_free_range_get(hpdata));\n\n\tsize_t result;\n\n\tsize_t start = 0;\n\t/*\n\t * These are dead stores, but the compiler will issue warnings on them\n\t * since it can't tell statically that found is always true below.\n\t 
*/\n\tsize_t begin = 0;\n\tsize_t len = 0;\n\n\tsize_t largest_unchosen_range = 0;\n\twhile (true) {\n\t\tbool found = fb_urange_iter(hpdata->active_pages,\n\t\t    HUGEPAGE_PAGES, start, &begin, &len);\n\t\t/*\n\t\t * A precondition to this function is that hpdata must be able\n\t\t * to serve the allocation.\n\t\t */\n\t\tassert(found);\n\t\tassert(len <= hpdata_longest_free_range_get(hpdata));\n\t\tif (len >= npages) {\n\t\t\t/*\n\t\t\t * We use first-fit within the page slabs; this gives\n\t\t\t * bounded worst-case fragmentation within a slab.  It's\n\t\t\t * not necessarily right; we could experiment with\n\t\t\t * various other options.\n\t\t\t */\n\t\t\tbreak;\n\t\t}\n\t\tif (len > largest_unchosen_range) {\n\t\t\tlargest_unchosen_range = len;\n\t\t}\n\t\tstart = begin + len;\n\t}\n\t/* We found a range; remember it. */\n\tresult = begin;\n\tfb_set_range(hpdata->active_pages, HUGEPAGE_PAGES, begin, npages);\n\thpdata->h_nactive += npages;\n\n\t/*\n\t * We might be about to dirty some memory for the first time; update our\n\t * count if so.\n\t */\n\tsize_t new_dirty = fb_ucount(hpdata->touched_pages,  HUGEPAGE_PAGES,\n\t    result, npages);\n\tfb_set_range(hpdata->touched_pages, HUGEPAGE_PAGES, result, npages);\n\thpdata->h_ntouched += new_dirty;\n\n\t/*\n\t * If we allocated out of a range that was the longest in the hpdata, it\n\t * might be the only one of that size and we'll have to adjust the\n\t * metadata.\n\t */\n\tif (len == hpdata_longest_free_range_get(hpdata)) {\n\t\tstart = begin + npages;\n\t\twhile (start < HUGEPAGE_PAGES) {\n\t\t\tbool found = fb_urange_iter(hpdata->active_pages,\n\t\t\t    HUGEPAGE_PAGES, start, &begin, &len);\n\t\t\tif (!found) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tassert(len <= hpdata_longest_free_range_get(hpdata));\n\t\t\tif (len == hpdata_longest_free_range_get(hpdata)) {\n\t\t\t\tlargest_unchosen_range = len;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (len > largest_unchosen_range) {\n\t\t\t\tlargest_unchosen_range = 
len;\n\t\t\t}\n\t\t\tstart = begin + len;\n\t\t}\n\t\thpdata_longest_free_range_set(hpdata, largest_unchosen_range);\n\t}\n\n\thpdata_assert_consistent(hpdata);\n\treturn (void *)(\n\t    (uintptr_t)hpdata_addr_get(hpdata) + (result << LG_PAGE));\n}\n\nvoid\nhpdata_unreserve(hpdata_t *hpdata, void *addr, size_t sz) {\n\thpdata_assert_consistent(hpdata);\n\t/* See the comment in reserve. */\n\tassert(!hpdata->h_in_psset || hpdata->h_updating);\n\tassert(((uintptr_t)addr & PAGE_MASK) == 0);\n\tassert((sz & PAGE_MASK) == 0);\n\tsize_t begin = ((uintptr_t)addr - (uintptr_t)hpdata_addr_get(hpdata))\n\t    >> LG_PAGE;\n\tassert(begin < HUGEPAGE_PAGES);\n\tsize_t npages = sz >> LG_PAGE;\n\tsize_t old_longest_range = hpdata_longest_free_range_get(hpdata);\n\n\tfb_unset_range(hpdata->active_pages, HUGEPAGE_PAGES, begin, npages);\n\t/* We might have just created a new, larger range. */\n\tsize_t new_begin = (fb_fls(hpdata->active_pages, HUGEPAGE_PAGES,\n\t    begin) + 1);\n\tsize_t new_end = fb_ffs(hpdata->active_pages, HUGEPAGE_PAGES,\n\t    begin + npages - 1);\n\tsize_t new_range_len = new_end - new_begin;\n\n\tif (new_range_len > old_longest_range) {\n\t\thpdata_longest_free_range_set(hpdata, new_range_len);\n\t}\n\n\thpdata->h_nactive -= npages;\n\n\thpdata_assert_consistent(hpdata);\n}\n\nsize_t\nhpdata_purge_begin(hpdata_t *hpdata, hpdata_purge_state_t *purge_state) {\n\thpdata_assert_consistent(hpdata);\n\t/*\n\t * See the comment below; we might purge any inactive extent, so it's\n\t * unsafe for any other thread to turn any inactive extent active while\n\t * we're operating on it.\n\t */\n\tassert(!hpdata_alloc_allowed_get(hpdata));\n\n\tpurge_state->npurged = 0;\n\tpurge_state->next_purge_search_begin = 0;\n\n\t/*\n\t * Initialize to_purge.\n\t *\n\t * It's possible to end up in situations where two dirty extents are\n\t * separated by a retained extent:\n\t * - 1 page allocated.\n\t * - 1 page allocated.\n\t * - 1 page allocated.\n\t *\n\t * If the middle page 
is freed and purged, and then the first and third\n\t * pages are freed, and then another purge pass happens, the hpdata\n\t * looks like this:\n\t * - 1 page dirty.\n\t * - 1 page retained.\n\t * - 1 page dirty.\n\t *\n\t * But it's safe to do a single 3-page purge.\n\t *\n\t * We do this by first computing the dirty pages, and then filling in\n\t * any gaps by extending each range in the dirty bitmap to extend until\n\t * the next active page.  This purges more pages, but the expensive part\n\t * of purging is the TLB shootdowns, rather than the kernel state\n\t * tracking; doing a little bit more of the latter is fine if it saves\n\t * us from doing some of the former.\n\t */\n\n\t/*\n\t * The dirty pages are those that are touched but not active.  Note that\n\t * in a normal-ish case, HUGEPAGE_PAGES is something like 512 and the\n\t * fb_group_t is 64 bits, so this is 64 bytes, spread across 8\n\t * fb_group_ts.\n\t */\n\tfb_group_t dirty_pages[FB_NGROUPS(HUGEPAGE_PAGES)];\n\tfb_init(dirty_pages, HUGEPAGE_PAGES);\n\tfb_bit_not(dirty_pages, hpdata->active_pages, HUGEPAGE_PAGES);\n\tfb_bit_and(dirty_pages, dirty_pages, hpdata->touched_pages,\n\t    HUGEPAGE_PAGES);\n\n\tfb_init(purge_state->to_purge, HUGEPAGE_PAGES);\n\tsize_t next_bit = 0;\n\twhile (next_bit < HUGEPAGE_PAGES) {\n\t\tsize_t next_dirty = fb_ffs(dirty_pages, HUGEPAGE_PAGES,\n\t\t    next_bit);\n\t\t/* Recall that fb_ffs returns nbits if no set bit is found. */\n\t\tif (next_dirty == HUGEPAGE_PAGES) {\n\t\t\tbreak;\n\t\t}\n\t\tsize_t next_active = fb_ffs(hpdata->active_pages,\n\t\t    HUGEPAGE_PAGES, next_dirty);\n\t\t/*\n\t\t * Don't purge past the end of the dirty extent, into retained\n\t\t * pages.  
This helps the kernel a tiny bit, but honestly it's\n\t\t * mostly helpful for testing (where we tend to write test cases\n\t\t * that think in terms of the dirty ranges).\n\t\t */\n\t\tssize_t last_dirty = fb_fls(dirty_pages, HUGEPAGE_PAGES,\n\t\t    next_active - 1);\n\t\tassert(last_dirty >= 0);\n\t\tassert((size_t)last_dirty >= next_dirty);\n\t\tassert((size_t)last_dirty - next_dirty + 1 <= HUGEPAGE_PAGES);\n\n\t\tfb_set_range(purge_state->to_purge, HUGEPAGE_PAGES, next_dirty,\n\t\t    last_dirty - next_dirty + 1);\n\t\tnext_bit = next_active + 1;\n\t}\n\n\t/* We should purge, at least, everything dirty. */\n\tsize_t ndirty = hpdata->h_ntouched - hpdata->h_nactive;\n\tpurge_state->ndirty_to_purge = ndirty;\n\tassert(ndirty <= fb_scount(\n\t    purge_state->to_purge, HUGEPAGE_PAGES, 0, HUGEPAGE_PAGES));\n\tassert(ndirty == fb_scount(dirty_pages, HUGEPAGE_PAGES, 0,\n\t    HUGEPAGE_PAGES));\n\n\thpdata_assert_consistent(hpdata);\n\n\treturn ndirty;\n}\n\nbool\nhpdata_purge_next(hpdata_t *hpdata, hpdata_purge_state_t *purge_state,\n    void **r_purge_addr, size_t *r_purge_size) {\n\t/*\n\t * Note that we don't have a consistency check here; we're accessing\n\t * hpdata without synchronization, and therefore have no right to expect\n\t * a consistent state.\n\t */\n\tassert(!hpdata_alloc_allowed_get(hpdata));\n\n\tif (purge_state->next_purge_search_begin == HUGEPAGE_PAGES) {\n\t\treturn false;\n\t}\n\tsize_t purge_begin;\n\tsize_t purge_len;\n\tbool found_range = fb_srange_iter(purge_state->to_purge, HUGEPAGE_PAGES,\n\t    purge_state->next_purge_search_begin, &purge_begin, &purge_len);\n\tif (!found_range) {\n\t\treturn false;\n\t}\n\n\t*r_purge_addr = (void *)(\n\t    (uintptr_t)hpdata_addr_get(hpdata) + purge_begin * PAGE);\n\t*r_purge_size = purge_len * PAGE;\n\n\tpurge_state->next_purge_search_begin = purge_begin + purge_len;\n\tpurge_state->npurged += purge_len;\n\tassert(purge_state->npurged <= HUGEPAGE_PAGES);\n\n\treturn 
true;\n}\n\nvoid\nhpdata_purge_end(hpdata_t *hpdata, hpdata_purge_state_t *purge_state) {\n\tassert(!hpdata_alloc_allowed_get(hpdata));\n\thpdata_assert_consistent(hpdata);\n\t/* See the comment in reserve. */\n\tassert(!hpdata->h_in_psset || hpdata->h_updating);\n\n\tassert(purge_state->npurged == fb_scount(purge_state->to_purge,\n\t    HUGEPAGE_PAGES, 0, HUGEPAGE_PAGES));\n\tassert(purge_state->npurged >= purge_state->ndirty_to_purge);\n\n\tfb_bit_not(purge_state->to_purge, purge_state->to_purge,\n\t    HUGEPAGE_PAGES);\n\tfb_bit_and(hpdata->touched_pages, hpdata->touched_pages,\n\t    purge_state->to_purge, HUGEPAGE_PAGES);\n\tassert(hpdata->h_ntouched >= purge_state->ndirty_to_purge);\n\thpdata->h_ntouched -= purge_state->ndirty_to_purge;\n\n\thpdata_assert_consistent(hpdata);\n}\n\nvoid\nhpdata_hugify(hpdata_t *hpdata) {\n\thpdata_assert_consistent(hpdata);\n\thpdata->h_huge = true;\n\tfb_set_range(hpdata->touched_pages, HUGEPAGE_PAGES, 0, HUGEPAGE_PAGES);\n\thpdata->h_ntouched = HUGEPAGE_PAGES;\n\thpdata_assert_consistent(hpdata);\n}\n\nvoid\nhpdata_dehugify(hpdata_t *hpdata) {\n\thpdata_assert_consistent(hpdata);\n\thpdata->h_huge = false;\n\thpdata_assert_consistent(hpdata);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/inspect.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nvoid\ninspect_extent_util_stats_get(tsdn_t *tsdn, const void *ptr, size_t *nfree,\n    size_t *nregs, size_t *size) {\n\tassert(ptr != NULL && nfree != NULL && nregs != NULL && size != NULL);\n\n\tconst edata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tif (unlikely(edata == NULL)) {\n\t\t*nfree = *nregs = *size = 0;\n\t\treturn;\n\t}\n\n\t*size = edata_size_get(edata);\n\tif (!edata_slab_get(edata)) {\n\t\t*nfree = 0;\n\t\t*nregs = 1;\n\t} else {\n\t\t*nfree = edata_nfree_get(edata);\n\t\t*nregs = bin_infos[edata_szind_get(edata)].nregs;\n\t\tassert(*nfree <= *nregs);\n\t\tassert(*nfree * edata_usize_get(edata) <= *size);\n\t}\n}\n\nvoid\ninspect_extent_util_stats_verbose_get(tsdn_t *tsdn, const void *ptr,\n    size_t *nfree, size_t *nregs, size_t *size, size_t *bin_nfree,\n    size_t *bin_nregs, void **slabcur_addr) {\n\tassert(ptr != NULL && nfree != NULL && nregs != NULL && size != NULL\n\t    && bin_nfree != NULL && bin_nregs != NULL && slabcur_addr != NULL);\n\n\tconst edata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\tif (unlikely(edata == NULL)) {\n\t\t*nfree = *nregs = *size = *bin_nfree = *bin_nregs = 0;\n\t\t*slabcur_addr = NULL;\n\t\treturn;\n\t}\n\n\t*size = edata_size_get(edata);\n\tif (!edata_slab_get(edata)) {\n\t\t*nfree = *bin_nfree = *bin_nregs = 0;\n\t\t*nregs = 1;\n\t\t*slabcur_addr = NULL;\n\t\treturn;\n\t}\n\n\t*nfree = edata_nfree_get(edata);\n\tconst szind_t szind = edata_szind_get(edata);\n\t*nregs = bin_infos[szind].nregs;\n\tassert(*nfree <= *nregs);\n\tassert(*nfree * edata_usize_get(edata) <= *size);\n\n\tarena_t *arena = (arena_t *)atomic_load_p(\n\t    &arenas[edata_arena_ind_get(edata)], ATOMIC_RELAXED);\n\tassert(arena != NULL);\n\tconst unsigned binshard = edata_binshard_get(edata);\n\tbin_t *bin = arena_get_bin(arena, szind, binshard);\n\n\tmalloc_mutex_lock(tsdn, 
&bin->lock);\n\tif (config_stats) {\n\t\t*bin_nregs = *nregs * bin->stats.curslabs;\n\t\tassert(*bin_nregs >= bin->stats.curregs);\n\t\t*bin_nfree = *bin_nregs - bin->stats.curregs;\n\t} else {\n\t\t*bin_nfree = *bin_nregs = 0;\n\t}\n\tedata_t *slab;\n\tif (bin->slabcur != NULL) {\n\t\tslab = bin->slabcur;\n\t} else {\n\t\tslab = edata_heap_first(&bin->slabs_nonfull);\n\t}\n\t*slabcur_addr = slab != NULL ? edata_addr_get(slab) : NULL;\n\tmalloc_mutex_unlock(tsdn, &bin->lock);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/jemalloc.c",
    "content": "#define JEMALLOC_C_\n#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/atomic.h\"\n#include \"jemalloc/internal/buf_writer.h\"\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/emap.h\"\n#include \"jemalloc/internal/extent_dss.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/fxp.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/hook.h\"\n#include \"jemalloc/internal/jemalloc_internal_types.h\"\n#include \"jemalloc/internal/log.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/nstime.h\"\n#include \"jemalloc/internal/rtree.h\"\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/sc.h\"\n#include \"jemalloc/internal/spin.h\"\n#include \"jemalloc/internal/sz.h\"\n#include \"jemalloc/internal/ticker.h\"\n#include \"jemalloc/internal/thread_event.h\"\n#include \"jemalloc/internal/util.h\"\n\n/******************************************************************************/\n/* Data. */\n\n/* Runtime configuration options. */\nconst char\t*je_malloc_conf\n#ifndef _WIN32\n    JEMALLOC_ATTR(weak)\n#endif\n    ;\n/*\n * The usual rule is that the closer to runtime you are, the higher priority\n * your configuration settings are (so the jemalloc config options get lower\n * priority than the per-binary setting, which gets lower priority than the /etc\n * setting, which gets lower priority than the environment settings).\n *\n * But it's a fairly common use case in some testing environments for a user to\n * be able to control the binary, but nothing else (e.g. a performance canary\n * uses the production OS and environment variables, but can run any binary in\n * those circumstances).  
For these use cases, it's handy to have an in-binary\n * mechanism for overriding environment variable settings, with the idea that if\n * the results are positive they get promoted to the official settings, and\n * moved from the binary to the environment variable.\n *\n * We don't actually want this to be widespread, so we'll give it a silly name\n * and not mention it in headers or documentation.\n */\nconst char\t*je_malloc_conf_2_conf_harder\n#ifndef _WIN32\n    JEMALLOC_ATTR(weak)\n#endif\n    ;\n\nbool\topt_abort =\n#ifdef JEMALLOC_DEBUG\n    true\n#else\n    false\n#endif\n    ;\nbool\topt_abort_conf =\n#ifdef JEMALLOC_DEBUG\n    true\n#else\n    false\n#endif\n    ;\n/* Intentionally default off, even with debug builds. */\nbool\topt_confirm_conf = false;\nconst char\t*opt_junk =\n#if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL))\n    \"true\"\n#else\n    \"false\"\n#endif\n    ;\nbool\topt_junk_alloc =\n#if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL))\n    true\n#else\n    false\n#endif\n    ;\nbool\topt_junk_free =\n#if (defined(JEMALLOC_DEBUG) && defined(JEMALLOC_FILL))\n    true\n#else\n    false\n#endif\n    ;\nbool\topt_trust_madvise =\n#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS\n    false\n#else\n    true\n#endif\n    ;\n\nbool opt_cache_oblivious =\n#ifdef JEMALLOC_CACHE_OBLIVIOUS\n    true\n#else\n    false\n#endif\n    ;\n\nzero_realloc_action_t opt_zero_realloc_action =\n#ifdef JEMALLOC_ZERO_REALLOC_DEFAULT_FREE\n    zero_realloc_action_free\n#else\n    zero_realloc_action_alloc\n#endif\n    ;\n\natomic_zu_t zero_realloc_count = ATOMIC_INIT(0);\n\nconst char *zero_realloc_mode_names[] = {\n\t\"alloc\",\n\t\"free\",\n\t\"abort\",\n};\n\n/*\n * These are the documented values for junk fill debugging facilities -- see the\n * man page.\n */\nstatic const uint8_t junk_alloc_byte = 0xa5;\nstatic const uint8_t junk_free_byte = 0x5a;\n\nstatic void default_junk_alloc(void *ptr, size_t usize) {\n\tmemset(ptr, junk_alloc_byte, 
usize);\n}\n\nstatic void default_junk_free(void *ptr, size_t usize) {\n\tmemset(ptr, junk_free_byte, usize);\n}\n\nvoid (*junk_alloc_callback)(void *ptr, size_t size) = &default_junk_alloc;\nvoid (*junk_free_callback)(void *ptr, size_t size) = &default_junk_free;\n\nbool\topt_utrace = false;\nbool\topt_xmalloc = false;\nbool\topt_experimental_infallible_new = false;\nbool\topt_zero = false;\nunsigned\topt_narenas = 0;\nfxp_t\t\topt_narenas_ratio = FXP_INIT_INT(4);\n\nunsigned\tncpus;\n\n/* Protects arenas initialization. */\nmalloc_mutex_t arenas_lock;\n\n/* The global hpa, and whether it's on. */\nbool opt_hpa = false;\nhpa_shard_opts_t opt_hpa_opts = HPA_SHARD_OPTS_DEFAULT;\nsec_opts_t opt_hpa_sec_opts = SEC_OPTS_DEFAULT;\n\n/*\n * Arenas that are used to service external requests.  Not all elements of the\n * arenas array are necessarily used; arenas are created lazily as needed.\n *\n * arenas[0..narenas_auto) are used for automatic multiplexing of threads and\n * arenas.  arenas[narenas_auto..narenas_total) are only used if the application\n * takes some action to create them and allocate from them.\n *\n * Points to an arena_t.\n */\nJEMALLOC_ALIGNED(CACHELINE)\natomic_p_t\t\tarenas[MALLOCX_ARENA_LIMIT];\nstatic atomic_u_t\tnarenas_total; /* Use narenas_total_*(). */\n/* Below three are read-only after initialization. */\nstatic arena_t\t\t*a0; /* arenas[0]. */\nunsigned\t\tnarenas_auto;\nunsigned\t\tmanual_arena_base;\n\nmalloc_init_t malloc_init_state = malloc_init_uninitialized;\n\n/* False should be the common case.  Set to true to trigger initialization. */\nbool\t\t\tmalloc_slow = true;\n\n/* When malloc_slow is true, set the corresponding bits for sanity check. 
*/\nenum {\n\tflag_opt_junk_alloc\t= (1U),\n\tflag_opt_junk_free\t= (1U << 1),\n\tflag_opt_zero\t\t= (1U << 2),\n\tflag_opt_utrace\t\t= (1U << 3),\n\tflag_opt_xmalloc\t= (1U << 4)\n};\nstatic uint8_t\tmalloc_slow_flags;\n\n#ifdef JEMALLOC_THREADED_INIT\n/* Used to let the initializing thread recursively allocate. */\n#  define NO_INITIALIZER\t((unsigned long)0)\n#  define INITIALIZER\t\tpthread_self()\n#  define IS_INITIALIZER\t(malloc_initializer == pthread_self())\nstatic pthread_t\t\tmalloc_initializer = NO_INITIALIZER;\n#else\n#  define NO_INITIALIZER\tfalse\n#  define INITIALIZER\t\ttrue\n#  define IS_INITIALIZER\tmalloc_initializer\nstatic bool\t\t\tmalloc_initializer = NO_INITIALIZER;\n#endif\n\n/* Used to avoid initialization races. */\n#ifdef _WIN32\n#if _WIN32_WINNT >= 0x0600\nstatic malloc_mutex_t\tinit_lock = SRWLOCK_INIT;\n#else\nstatic malloc_mutex_t\tinit_lock;\nstatic bool init_lock_initialized = false;\n\nJEMALLOC_ATTR(constructor)\nstatic void WINAPI\n_init_init_lock(void) {\n\t/*\n\t * If another constructor in the same binary is using mallctl to e.g.\n\t * set up extent hooks, it may end up running before this one, and\n\t * malloc_init_hard will crash trying to lock the uninitialized lock. 
So\n\t * we force an initialization of the lock in malloc_init_hard as well.\n\t * We don't try to care about atomicity of the accesses to the\n\t * init_lock_initialized boolean, since it really only matters early in\n\t * the process creation, before any separate thread normally starts\n\t * doing anything.\n\t */\n\tif (!init_lock_initialized) {\n\t\tmalloc_mutex_init(&init_lock, \"init\", WITNESS_RANK_INIT,\n\t\t    malloc_mutex_rank_exclusive);\n\t}\n\tinit_lock_initialized = true;\n}\n\n#ifdef _MSC_VER\n#  pragma section(\".CRT$XCU\", read)\nJEMALLOC_SECTION(\".CRT$XCU\") JEMALLOC_ATTR(used)\nstatic const void (WINAPI *init_init_lock)(void) = _init_init_lock;\n#endif\n#endif\n#else\nstatic malloc_mutex_t\tinit_lock = MALLOC_MUTEX_INITIALIZER;\n#endif\n\ntypedef struct {\n\tvoid\t*p;\t/* Input pointer (as in realloc(p, s)). */\n\tsize_t\ts;\t/* Request size. */\n\tvoid\t*r;\t/* Result pointer. */\n} malloc_utrace_t;\n\n#ifdef JEMALLOC_UTRACE\n#  define UTRACE(a, b, c) do {\t\t\t\t\t\t\\\n\tif (unlikely(opt_utrace)) {\t\t\t\t\t\\\n\t\tint utrace_serrno = errno;\t\t\t\t\\\n\t\tmalloc_utrace_t ut;\t\t\t\t\t\\\n\t\tut.p = (a);\t\t\t\t\t\t\\\n\t\tut.s = (b);\t\t\t\t\t\t\\\n\t\tut.r = (c);\t\t\t\t\t\t\\\n\t\tUTRACE_CALL(&ut, sizeof(ut));\t\t\t\t\\\n\t\terrno = utrace_serrno;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#else\n#  define UTRACE(a, b, c)\n#endif\n\n/* Whether we encountered any invalid config options. 
*/\nstatic bool had_conf_error = false;\n\n/******************************************************************************/\n/*\n * Function prototypes for static functions that are referenced prior to\n * definition.\n */\n\nstatic bool\tmalloc_init_hard_a0(void);\nstatic bool\tmalloc_init_hard(void);\n\n/******************************************************************************/\n/*\n * Begin miscellaneous support functions.\n */\n\nJEMALLOC_ALWAYS_INLINE bool\nmalloc_init_a0(void) {\n\tif (unlikely(malloc_init_state == malloc_init_uninitialized)) {\n\t\treturn malloc_init_hard_a0();\n\t}\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nmalloc_init(void) {\n\tif (unlikely(!malloc_initialized()) && malloc_init_hard()) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\n/*\n * The a0*() functions are used instead of i{d,}alloc() in situations that\n * cannot tolerate TLS variable access.\n */\n\nstatic void *\na0ialloc(size_t size, bool zero, bool is_internal) {\n\tif (unlikely(malloc_init_a0())) {\n\t\treturn NULL;\n\t}\n\n\treturn iallocztm(TSDN_NULL, size, sz_size2index(size), zero, NULL,\n\t    is_internal, arena_get(TSDN_NULL, 0, true), true);\n}\n\nstatic void\na0idalloc(void *ptr, bool is_internal) {\n\tidalloctm(TSDN_NULL, ptr, NULL, NULL, is_internal, true);\n}\n\nvoid *\na0malloc(size_t size) {\n\treturn a0ialloc(size, false, true);\n}\n\nvoid\na0dalloc(void *ptr) {\n\ta0idalloc(ptr, true);\n}\n\n/*\n * FreeBSD's libc uses the bootstrap_*() functions in bootstrap-sensitive\n * situations that cannot tolerate TLS variable access (TLS allocation and very\n * early internal data structure initialization).\n */\n\nvoid *\nbootstrap_malloc(size_t size) {\n\tif (unlikely(size == 0)) {\n\t\tsize = 1;\n\t}\n\n\treturn a0ialloc(size, false, false);\n}\n\nvoid *\nbootstrap_calloc(size_t num, size_t size) {\n\tsize_t num_size;\n\n\tnum_size = num * size;\n\tif (unlikely(num_size == 0)) {\n\t\tassert(num == 0 || size == 0);\n\t\tnum_size = 1;\n\t}\n\n\treturn 
a0ialloc(num_size, true, false);\n}\n\nvoid\nbootstrap_free(void *ptr) {\n\tif (unlikely(ptr == NULL)) {\n\t\treturn;\n\t}\n\n\ta0idalloc(ptr, false);\n}\n\nvoid\narena_set(unsigned ind, arena_t *arena) {\n\tatomic_store_p(&arenas[ind], arena, ATOMIC_RELEASE);\n}\n\nstatic void\nnarenas_total_set(unsigned narenas) {\n\tatomic_store_u(&narenas_total, narenas, ATOMIC_RELEASE);\n}\n\nstatic void\nnarenas_total_inc(void) {\n\tatomic_fetch_add_u(&narenas_total, 1, ATOMIC_RELEASE);\n}\n\nunsigned\nnarenas_total_get(void) {\n\treturn atomic_load_u(&narenas_total, ATOMIC_ACQUIRE);\n}\n\n/* Create a new arena and insert it into the arenas array at index ind. */\nstatic arena_t *\narena_init_locked(tsdn_t *tsdn, unsigned ind, const arena_config_t *config) {\n\tarena_t *arena;\n\n\tassert(ind <= narenas_total_get());\n\tif (ind >= MALLOCX_ARENA_LIMIT) {\n\t\treturn NULL;\n\t}\n\tif (ind == narenas_total_get()) {\n\t\tnarenas_total_inc();\n\t}\n\n\t/*\n\t * Another thread may have already initialized arenas[ind] if it's an\n\t * auto arena.\n\t */\n\tarena = arena_get(tsdn, ind, false);\n\tif (arena != NULL) {\n\t\tassert(arena_is_auto(arena));\n\t\treturn arena;\n\t}\n\n\t/* Actually initialize the arena. */\n\tarena = arena_new(tsdn, ind, config);\n\n\treturn arena;\n}\n\nstatic void\narena_new_create_background_thread(tsdn_t *tsdn, unsigned ind) {\n\tif (ind == 0) {\n\t\treturn;\n\t}\n\t/*\n\t * Avoid creating a new background thread just for the huge arena, which\n\t * purges eagerly by default.\n\t */\n\tif (have_background_thread && !arena_is_huge(ind)) {\n\t\tif (background_thread_create(tsdn_tsd(tsdn), ind)) {\n\t\t\tmalloc_printf(\"<jemalloc>: error in background thread \"\n\t\t\t\t      \"creation for arena %u. 
Abort.\\n\", ind);\n\t\t\tabort();\n\t\t}\n\t}\n}\n\narena_t *\narena_init(tsdn_t *tsdn, unsigned ind, const arena_config_t *config) {\n\tarena_t *arena;\n\n\tmalloc_mutex_lock(tsdn, &arenas_lock);\n\tarena = arena_init_locked(tsdn, ind, config);\n\tmalloc_mutex_unlock(tsdn, &arenas_lock);\n\n\tarena_new_create_background_thread(tsdn, ind);\n\n\treturn arena;\n}\n\nstatic void\narena_bind(tsd_t *tsd, unsigned ind, bool internal) {\n\tarena_t *arena = arena_get(tsd_tsdn(tsd), ind, false);\n\tarena_nthreads_inc(arena, internal);\n\n\tif (internal) {\n\t\ttsd_iarena_set(tsd, arena);\n\t} else {\n\t\ttsd_arena_set(tsd, arena);\n\t\tunsigned shard = atomic_fetch_add_u(&arena->binshard_next, 1,\n\t\t    ATOMIC_RELAXED);\n\t\ttsd_binshards_t *bins = tsd_binshardsp_get(tsd);\n\t\tfor (unsigned i = 0; i < SC_NBINS; i++) {\n\t\t\tassert(bin_infos[i].n_shards > 0 &&\n\t\t\t    bin_infos[i].n_shards <= BIN_SHARDS_MAX);\n\t\t\tbins->binshard[i] = shard % bin_infos[i].n_shards;\n\t\t}\n\t}\n}\n\nvoid\narena_migrate(tsd_t *tsd, arena_t *oldarena, arena_t *newarena) {\n\tassert(oldarena != NULL);\n\tassert(newarena != NULL);\n\n\tarena_nthreads_dec(oldarena, false);\n\tarena_nthreads_inc(newarena, false);\n\ttsd_arena_set(tsd, newarena);\n\n\tif (arena_nthreads_get(oldarena, false) == 0) {\n\t\t/* Purge if the old arena has no associated threads anymore. */\n\t\tarena_decay(tsd_tsdn(tsd), oldarena,\n\t\t    /* is_background_thread */ false, /* all */ true);\n\t}\n}\n\nstatic void\narena_unbind(tsd_t *tsd, unsigned ind, bool internal) {\n\tarena_t *arena;\n\n\tarena = arena_get(tsd_tsdn(tsd), ind, false);\n\tarena_nthreads_dec(arena, internal);\n\n\tif (internal) {\n\t\ttsd_iarena_set(tsd, NULL);\n\t} else {\n\t\ttsd_arena_set(tsd, NULL);\n\t}\n}\n\n/* Slow path, called only by arena_choose(). 
*/\narena_t *\narena_choose_hard(tsd_t *tsd, bool internal) {\n\tarena_t *ret JEMALLOC_CC_SILENCE_INIT(NULL);\n\n\tif (have_percpu_arena && PERCPU_ARENA_ENABLED(opt_percpu_arena)) {\n\t\tunsigned choose = percpu_arena_choose();\n\t\tret = arena_get(tsd_tsdn(tsd), choose, true);\n\t\tassert(ret != NULL);\n\t\tarena_bind(tsd, arena_ind_get(ret), false);\n\t\tarena_bind(tsd, arena_ind_get(ret), true);\n\n\t\treturn ret;\n\t}\n\n\tif (narenas_auto > 1) {\n\t\tunsigned i, j, choose[2], first_null;\n\t\tbool is_new_arena[2];\n\n\t\t/*\n\t\t * Determine binding for both non-internal and internal\n\t\t * allocation.\n\t\t *\n\t\t *   choose[0]: For application allocation.\n\t\t *   choose[1]: For internal metadata allocation.\n\t\t */\n\n\t\tfor (j = 0; j < 2; j++) {\n\t\t\tchoose[j] = 0;\n\t\t\tis_new_arena[j] = false;\n\t\t}\n\n\t\tfirst_null = narenas_auto;\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &arenas_lock);\n\t\tassert(arena_get(tsd_tsdn(tsd), 0, false) != NULL);\n\t\tfor (i = 1; i < narenas_auto; i++) {\n\t\t\tif (arena_get(tsd_tsdn(tsd), i, false) != NULL) {\n\t\t\t\t/*\n\t\t\t\t * Choose the first arena that has the lowest\n\t\t\t\t * number of threads assigned to it.\n\t\t\t\t */\n\t\t\t\tfor (j = 0; j < 2; j++) {\n\t\t\t\t\tif (arena_nthreads_get(arena_get(\n\t\t\t\t\t    tsd_tsdn(tsd), i, false), !!j) <\n\t\t\t\t\t    arena_nthreads_get(arena_get(\n\t\t\t\t\t    tsd_tsdn(tsd), choose[j], false),\n\t\t\t\t\t    !!j)) {\n\t\t\t\t\t\tchoose[j] = i;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if (first_null == narenas_auto) {\n\t\t\t\t/*\n\t\t\t\t * Record the index of the first uninitialized\n\t\t\t\t * arena, in case all extant arenas are in use.\n\t\t\t\t *\n\t\t\t\t * NB: It is possible for there to be\n\t\t\t\t * discontinuities in terms of initialized\n\t\t\t\t * versus uninitialized arenas, due to the\n\t\t\t\t * \"thread.arena\" mallctl.\n\t\t\t\t */\n\t\t\t\tfirst_null = i;\n\t\t\t}\n\t\t}\n\n\t\tfor (j = 0; j < 2; j++) {\n\t\t\tif 
(arena_nthreads_get(arena_get(tsd_tsdn(tsd),\n\t\t\t    choose[j], false), !!j) == 0 || first_null ==\n\t\t\t    narenas_auto) {\n\t\t\t\t/*\n\t\t\t\t * Use an unloaded arena, or the least loaded\n\t\t\t\t * arena if all arenas are already initialized.\n\t\t\t\t */\n\t\t\t\tif (!!j == internal) {\n\t\t\t\t\tret = arena_get(tsd_tsdn(tsd),\n\t\t\t\t\t    choose[j], false);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tarena_t *arena;\n\n\t\t\t\t/* Initialize a new arena. */\n\t\t\t\tchoose[j] = first_null;\n\t\t\t\tarena = arena_init_locked(tsd_tsdn(tsd),\n\t\t\t\t    choose[j], &arena_config_default);\n\t\t\t\tif (arena == NULL) {\n\t\t\t\t\tmalloc_mutex_unlock(tsd_tsdn(tsd),\n\t\t\t\t\t    &arenas_lock);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tis_new_arena[j] = true;\n\t\t\t\tif (!!j == internal) {\n\t\t\t\t\tret = arena;\n\t\t\t\t}\n\t\t\t}\n\t\t\tarena_bind(tsd, choose[j], !!j);\n\t\t}\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &arenas_lock);\n\n\t\tfor (j = 0; j < 2; j++) {\n\t\t\tif (is_new_arena[j]) {\n\t\t\t\tassert(choose[j] > 0);\n\t\t\t\tarena_new_create_background_thread(\n\t\t\t\t    tsd_tsdn(tsd), choose[j]);\n\t\t\t}\n\t\t}\n\n\t} else {\n\t\tret = arena_get(tsd_tsdn(tsd), 0, false);\n\t\tarena_bind(tsd, 0, false);\n\t\tarena_bind(tsd, 0, true);\n\t}\n\n\treturn ret;\n}\n\nvoid\niarena_cleanup(tsd_t *tsd) {\n\tarena_t *iarena;\n\n\tiarena = tsd_iarena_get(tsd);\n\tif (iarena != NULL) {\n\t\tarena_unbind(tsd, arena_ind_get(iarena), true);\n\t}\n}\n\nvoid\narena_cleanup(tsd_t *tsd) {\n\tarena_t *arena;\n\n\tarena = tsd_arena_get(tsd);\n\tif (arena != NULL) {\n\t\tarena_unbind(tsd, arena_ind_get(arena), false);\n\t}\n}\n\nstatic void\nstats_print_atexit(void) {\n\tif (config_stats) {\n\t\ttsdn_t *tsdn;\n\t\tunsigned narenas, i;\n\n\t\ttsdn = tsdn_fetch();\n\n\t\t/*\n\t\t * Merge stats from extant threads.  This is racy, since\n\t\t * individual threads do not lock when recording tcache stats\n\t\t * events.  
As a consequence, the final stats may be slightly\n\t\t * out of date by the time they are reported, if other threads\n\t\t * continue to allocate.\n\t\t */\n\t\tfor (i = 0, narenas = narenas_total_get(); i < narenas; i++) {\n\t\t\tarena_t *arena = arena_get(tsdn, i, false);\n\t\t\tif (arena != NULL) {\n\t\t\t\ttcache_slow_t *tcache_slow;\n\n\t\t\t\tmalloc_mutex_lock(tsdn, &arena->tcache_ql_mtx);\n\t\t\t\tql_foreach(tcache_slow, &arena->tcache_ql,\n\t\t\t\t    link) {\n\t\t\t\t\ttcache_stats_merge(tsdn,\n\t\t\t\t\t    tcache_slow->tcache, arena);\n\t\t\t\t}\n\t\t\t\tmalloc_mutex_unlock(tsdn,\n\t\t\t\t    &arena->tcache_ql_mtx);\n\t\t\t}\n\t\t}\n\t}\n\tje_malloc_stats_print(NULL, NULL, opt_stats_print_opts);\n}\n\n/*\n * Ensure that we don't hold any locks upon entry to or exit from allocator\n * code (in a \"broad\" sense that doesn't count a reentrant allocation as an\n * entrance or exit).\n */\nJEMALLOC_ALWAYS_INLINE void\ncheck_entry_exit_locking(tsdn_t *tsdn) {\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\tif (tsdn_null(tsdn)) {\n\t\treturn;\n\t}\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\t/*\n\t * It's possible we hold locks at entry/exit if we're in a nested\n\t * allocation.\n\t */\n\tint8_t reentrancy_level = tsd_reentrancy_level_get(tsd);\n\tif (reentrancy_level != 0) {\n\t\treturn;\n\t}\n\twitness_assert_lockless(tsdn_witness_tsdp_get(tsdn));\n}\n\n/*\n * End miscellaneous support functions.\n */\n/******************************************************************************/\n/*\n * Begin initialization functions.\n */\n\nstatic char *\njemalloc_secure_getenv(const char *name) {\n#ifdef JEMALLOC_HAVE_SECURE_GETENV\n\treturn secure_getenv(name);\n#else\n#  ifdef JEMALLOC_HAVE_ISSETUGID\n\tif (issetugid() != 0) {\n\t\treturn NULL;\n\t}\n#  endif\n\treturn getenv(name);\n#endif\n}\n\nstatic unsigned\nmalloc_ncpus(void) {\n\tlong result;\n\n#ifdef _WIN32\n\tSYSTEM_INFO si;\n\tGetSystemInfo(&si);\n\tresult = si.dwNumberOfProcessors;\n#elif 
defined(CPU_COUNT)\n\t/*\n\t * glibc >= 2.6 has the CPU_COUNT macro.\n\t *\n\t * glibc's sysconf() uses isspace().  glibc allocates for the first time\n\t * *before* setting up the isspace tables.  Therefore we need a\n\t * different method to get the number of CPUs.\n\t *\n\t * The getaffinity approach is also preferred when only a subset of CPUs\n\t * is available, to avoid using more arenas than necessary.\n\t */\n\t{\n#  if defined(__FreeBSD__) || defined(__DragonFly__)\n\t\tcpuset_t set;\n#  else\n\t\tcpu_set_t set;\n#  endif\n#  if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)\n\t\tsched_getaffinity(0, sizeof(set), &set);\n#  else\n\t\tpthread_getaffinity_np(pthread_self(), sizeof(set), &set);\n#  endif\n\t\tresult = CPU_COUNT(&set);\n\t}\n#else\n\tresult = sysconf(_SC_NPROCESSORS_ONLN);\n#endif\n\treturn ((result == -1) ? 1 : (unsigned)result);\n}\n\n/*\n * Ensure that the number of CPUs is deterministic, i.e. it is the same based on:\n * - sched_getaffinity()\n * - _SC_NPROCESSORS_ONLN\n * - _SC_NPROCESSORS_CONF\n * Since otherwise tricky things are possible with percpu arenas in use.\n */\nstatic bool\nmalloc_cpu_count_is_deterministic()\n{\n#ifdef _WIN32\n\treturn true;\n#else\n\tlong cpu_onln = sysconf(_SC_NPROCESSORS_ONLN);\n\tlong cpu_conf = sysconf(_SC_NPROCESSORS_CONF);\n\tif (cpu_onln != cpu_conf) {\n\t\treturn false;\n\t}\n#  if defined(CPU_COUNT)\n#    if defined(__FreeBSD__) || defined(__DragonFly__)\n\tcpuset_t set;\n#    else\n\tcpu_set_t set;\n#    endif /* __FreeBSD__ */\n#    if defined(JEMALLOC_HAVE_SCHED_SETAFFINITY)\n\tsched_getaffinity(0, sizeof(set), &set);\n#    else /* !JEMALLOC_HAVE_SCHED_SETAFFINITY */\n\tpthread_getaffinity_np(pthread_self(), sizeof(set), &set);\n#    endif /* JEMALLOC_HAVE_SCHED_SETAFFINITY */\n\tlong cpu_affinity = CPU_COUNT(&set);\n\tif (cpu_affinity != cpu_conf) {\n\t\treturn false;\n\t}\n#  endif /* CPU_COUNT */\n\treturn true;\n#endif\n}\n\nstatic void\ninit_opt_stats_opts(const char *v, size_t vlen, char *dest) 
{\n\tsize_t opts_len = strlen(dest);\n\tassert(opts_len <= stats_print_tot_num_options);\n\n\tfor (size_t i = 0; i < vlen; i++) {\n\t\tswitch (v[i]) {\n#define OPTION(o, v, d, s) case o: break;\n\t\t\tSTATS_PRINT_OPTIONS\n#undef OPTION\n\t\tdefault: continue;\n\t\t}\n\n\t\tif (strchr(dest, v[i]) != NULL) {\n\t\t\t/* Ignore repeated. */\n\t\t\tcontinue;\n\t\t}\n\n\t\tdest[opts_len++] = v[i];\n\t\tdest[opts_len] = '\\0';\n\t\tassert(opts_len <= stats_print_tot_num_options);\n\t}\n\tassert(opts_len == strlen(dest));\n}\n\n/* Reads the next size pair in a multi-sized option. */\nstatic bool\nmalloc_conf_multi_sizes_next(const char **slab_size_segment_cur,\n    size_t *vlen_left, size_t *slab_start, size_t *slab_end, size_t *new_size) {\n\tconst char *cur = *slab_size_segment_cur;\n\tchar *end;\n\tuintmax_t um;\n\n\tset_errno(0);\n\n\t/* First number, then '-' */\n\tum = malloc_strtoumax(cur, &end, 0);\n\tif (get_errno() != 0 || *end != '-') {\n\t\treturn true;\n\t}\n\t*slab_start = (size_t)um;\n\tcur = end + 1;\n\n\t/* Second number, then ':' */\n\tum = malloc_strtoumax(cur, &end, 0);\n\tif (get_errno() != 0 || *end != ':') {\n\t\treturn true;\n\t}\n\t*slab_end = (size_t)um;\n\tcur = end + 1;\n\n\t/* Last number */\n\tum = malloc_strtoumax(cur, &end, 0);\n\tif (get_errno() != 0) {\n\t\treturn true;\n\t}\n\t*new_size = (size_t)um;\n\n\t/* Consume the separator if there is one. 
*/\n\tif (*end == '|') {\n\t\tend++;\n\t}\n\n\t*vlen_left -= end - *slab_size_segment_cur;\n\t*slab_size_segment_cur = end;\n\n\treturn false;\n}\n\nstatic bool\nmalloc_conf_next(char const **opts_p, char const **k_p, size_t *klen_p,\n    char const **v_p, size_t *vlen_p) {\n\tbool accept;\n\tconst char *opts = *opts_p;\n\n\t*k_p = opts;\n\n\tfor (accept = false; !accept;) {\n\t\tswitch (*opts) {\n\t\tcase 'A': case 'B': case 'C': case 'D': case 'E': case 'F':\n\t\tcase 'G': case 'H': case 'I': case 'J': case 'K': case 'L':\n\t\tcase 'M': case 'N': case 'O': case 'P': case 'Q': case 'R':\n\t\tcase 'S': case 'T': case 'U': case 'V': case 'W': case 'X':\n\t\tcase 'Y': case 'Z':\n\t\tcase 'a': case 'b': case 'c': case 'd': case 'e': case 'f':\n\t\tcase 'g': case 'h': case 'i': case 'j': case 'k': case 'l':\n\t\tcase 'm': case 'n': case 'o': case 'p': case 'q': case 'r':\n\t\tcase 's': case 't': case 'u': case 'v': case 'w': case 'x':\n\t\tcase 'y': case 'z':\n\t\tcase '0': case '1': case '2': case '3': case '4': case '5':\n\t\tcase '6': case '7': case '8': case '9':\n\t\tcase '_':\n\t\t\topts++;\n\t\t\tbreak;\n\t\tcase ':':\n\t\t\topts++;\n\t\t\t*klen_p = (uintptr_t)opts - 1 - (uintptr_t)*k_p;\n\t\t\t*v_p = opts;\n\t\t\taccept = true;\n\t\t\tbreak;\n\t\tcase '\\0':\n\t\t\tif (opts != *opts_p) {\n\t\t\t\tmalloc_write(\"<jemalloc>: Conf string ends \"\n\t\t\t\t    \"with key\\n\");\n\t\t\t\thad_conf_error = true;\n\t\t\t}\n\t\t\treturn true;\n\t\tdefault:\n\t\t\tmalloc_write(\"<jemalloc>: Malformed conf string\\n\");\n\t\t\thad_conf_error = true;\n\t\t\treturn true;\n\t\t}\n\t}\n\n\tfor (accept = false; !accept;) {\n\t\tswitch (*opts) {\n\t\tcase ',':\n\t\t\topts++;\n\t\t\t/*\n\t\t\t * Look ahead one character here, because the next time\n\t\t\t * this function is called, it will assume that end of\n\t\t\t * input has been cleanly reached if no input remains,\n\t\t\t * but we have optimistically already consumed the\n\t\t\t * comma if one exists.\n\t\t\t */\n\t\t\tif 
(*opts == '\\0') {\n\t\t\t\tmalloc_write(\"<jemalloc>: Conf string ends \"\n\t\t\t\t    \"with comma\\n\");\n\t\t\t\thad_conf_error = true;\n\t\t\t}\n\t\t\t*vlen_p = (uintptr_t)opts - 1 - (uintptr_t)*v_p;\n\t\t\taccept = true;\n\t\t\tbreak;\n\t\tcase '\\0':\n\t\t\t*vlen_p = (uintptr_t)opts - (uintptr_t)*v_p;\n\t\t\taccept = true;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\topts++;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t*opts_p = opts;\n\treturn false;\n}\n\nstatic void\nmalloc_abort_invalid_conf(void) {\n\tassert(opt_abort_conf);\n\tmalloc_printf(\"<jemalloc>: Abort (abort_conf:true) on invalid conf \"\n\t    \"value (see above).\\n\");\n\tabort();\n}\n\nstatic void\nmalloc_conf_error(const char *msg, const char *k, size_t klen, const char *v,\n    size_t vlen) {\n\tmalloc_printf(\"<jemalloc>: %s: %.*s:%.*s\\n\", msg, (int)klen, k,\n\t    (int)vlen, v);\n\t/* If abort_conf is set, error out after processing all options. */\n\tconst char *experimental = \"experimental_\";\n\tif (strncmp(k, experimental, strlen(experimental)) == 0) {\n\t\t/* However, tolerate experimental features. */\n\t\treturn;\n\t}\n\thad_conf_error = true;\n}\n\nstatic void\nmalloc_slow_flag_init(void) {\n\t/*\n\t * Combine the runtime options into malloc_slow for fast path.  Called\n\t * after processing all the options.\n\t */\n\tmalloc_slow_flags |= (opt_junk_alloc ? flag_opt_junk_alloc : 0)\n\t    | (opt_junk_free ? flag_opt_junk_free : 0)\n\t    | (opt_zero ? flag_opt_zero : 0)\n\t    | (opt_utrace ? flag_opt_utrace : 0)\n\t    | (opt_xmalloc ? 
flag_opt_xmalloc : 0);\n\n\tmalloc_slow = (malloc_slow_flags != 0);\n}\n\n/* Number of sources for initializing malloc_conf */\n#define MALLOC_CONF_NSOURCES 5\n\nstatic const char *\nobtain_malloc_conf(unsigned which_source, char buf[PATH_MAX + 1]) {\n\tif (config_debug) {\n\t\tstatic unsigned read_source = 0;\n\t\t/*\n\t\t * Each source should only be read once, to minimize # of\n\t\t * syscalls on init.\n\t\t */\n\t\tassert(read_source++ == which_source);\n\t}\n\tassert(which_source < MALLOC_CONF_NSOURCES);\n\n\tconst char *ret;\n\tswitch (which_source) {\n\tcase 0:\n\t\tret = config_malloc_conf;\n\t\tbreak;\n\tcase 1:\n\t\tif (je_malloc_conf != NULL) {\n\t\t\t/* Use options that were compiled into the program. */\n\t\t\tret = je_malloc_conf;\n\t\t} else {\n\t\t\t/* No configuration specified. */\n\t\t\tret = NULL;\n\t\t}\n\t\tbreak;\n\tcase 2: {\n\t\tssize_t linklen = 0;\n#ifndef _WIN32\n\t\tint saved_errno = errno;\n\t\tconst char *linkname =\n#  ifdef JEMALLOC_PREFIX\n\t\t    \"/etc/\"JEMALLOC_PREFIX\"malloc.conf\"\n#  else\n\t\t    \"/etc/malloc.conf\"\n#  endif\n\t\t    ;\n\n\t\t/*\n\t\t * Try to use the contents of the \"/etc/malloc.conf\" symbolic\n\t\t * link's name.\n\t\t */\n#ifndef JEMALLOC_READLINKAT\n\t\tlinklen = readlink(linkname, buf, PATH_MAX);\n#else\n\t\tlinklen = readlinkat(AT_FDCWD, linkname, buf, PATH_MAX);\n#endif\n\t\tif (linklen == -1) {\n\t\t\t/* No configuration specified. */\n\t\t\tlinklen = 0;\n\t\t\t/* Restore errno. */\n\t\t\tset_errno(saved_errno);\n\t\t}\n#endif\n\t\tbuf[linklen] = '\\0';\n\t\tret = buf;\n\t\tbreak;\n\t} case 3: {\n\t\tconst char *envname =\n#ifdef JEMALLOC_PREFIX\n\t\t    JEMALLOC_CPREFIX\"MALLOC_CONF\"\n#else\n\t\t    \"MALLOC_CONF\"\n#endif\n\t\t    ;\n\n\t\tif ((ret = jemalloc_secure_getenv(envname)) != NULL) {\n\t\t\t/*\n\t\t\t * Do nothing; opts is already initialized to the value\n\t\t\t * of the MALLOC_CONF environment variable.\n\t\t\t */\n\t\t} else {\n\t\t\t/* No configuration specified. 
*/\n\t\t\tret = NULL;\n\t\t}\n\t\tbreak;\n\t} case 4: {\n\t\tret = je_malloc_conf_2_conf_harder;\n\t\tbreak;\n\t} default:\n\t\tnot_reached();\n\t\tret = NULL;\n\t}\n\treturn ret;\n}\n\nstatic void\nmalloc_conf_init_helper(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS],\n    bool initial_call, const char *opts_cache[MALLOC_CONF_NSOURCES],\n    char buf[PATH_MAX + 1]) {\n\tstatic const char *opts_explain[MALLOC_CONF_NSOURCES] = {\n\t\t\"string specified via --with-malloc-conf\",\n\t\t\"string pointed to by the global variable malloc_conf\",\n\t\t(\"\\\"name\\\" of the file referenced by the symbolic link named \"\n\t\t    \"/etc/malloc.conf\"),\n\t\t\"value of the environment variable MALLOC_CONF\",\n\t\t(\"string pointed to by the global variable \"\n\t\t    \"malloc_conf_2_conf_harder\"),\n\t};\n\tunsigned i;\n\tconst char *opts, *k, *v;\n\tsize_t klen, vlen;\n\n\tfor (i = 0; i < MALLOC_CONF_NSOURCES; i++) {\n\t\t/* Get runtime configuration. */\n\t\tif (initial_call) {\n\t\t\topts_cache[i] = obtain_malloc_conf(i, buf);\n\t\t}\n\t\topts = opts_cache[i];\n\t\tif (!initial_call && opt_confirm_conf) {\n\t\t\tmalloc_printf(\n\t\t\t    \"<jemalloc>: malloc_conf #%u (%s): \\\"%s\\\"\\n\",\n\t\t\t    i + 1, opts_explain[i], opts != NULL ? 
opts : \"\");\n\t\t}\n\t\tif (opts == NULL) {\n\t\t\tcontinue;\n\t\t}\n\n\t\twhile (*opts != '\\0' && !malloc_conf_next(&opts, &k, &klen, &v,\n\t\t    &vlen)) {\n\n#define CONF_ERROR(msg, k, klen, v, vlen)\t\t\t\t\\\n\t\t\tif (!initial_call) {\t\t\t\t\\\n\t\t\t\tmalloc_conf_error(\t\t\t\\\n\t\t\t\t    msg, k, klen, v, vlen);\t\t\\\n\t\t\t\tcur_opt_valid = false;\t\t\t\\\n\t\t\t}\n#define CONF_CONTINUE\t{\t\t\t\t\t\t\\\n\t\t\t\tif (!initial_call && opt_confirm_conf\t\\\n\t\t\t\t    && cur_opt_valid) {\t\t\t\\\n\t\t\t\t\tmalloc_printf(\"<jemalloc>: -- \"\t\\\n\t\t\t\t\t    \"Set conf value: %.*s:%.*s\"\t\\\n\t\t\t\t\t    \"\\n\", (int)klen, k,\t\t\\\n\t\t\t\t\t    (int)vlen, v);\t\t\\\n\t\t\t\t}\t\t\t\t\t\\\n\t\t\t\tcontinue;\t\t\t\t\\\n\t\t\t}\n#define CONF_MATCH(n)\t\t\t\t\t\t\t\\\n\t(sizeof(n)-1 == klen && strncmp(n, k, klen) == 0)\n#define CONF_MATCH_VALUE(n)\t\t\t\t\t\t\\\n\t(sizeof(n)-1 == vlen && strncmp(n, v, vlen) == 0)\n#define CONF_HANDLE_BOOL(o, n)\t\t\t\t\t\t\\\n\t\t\tif (CONF_MATCH(n)) {\t\t\t\t\\\n\t\t\t\tif (CONF_MATCH_VALUE(\"true\")) {\t\t\\\n\t\t\t\t\to = true;\t\t\t\\\n\t\t\t\t} else if (CONF_MATCH_VALUE(\"false\")) {\t\\\n\t\t\t\t\to = false;\t\t\t\\\n\t\t\t\t} else {\t\t\t\t\\\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\\\n\t\t\t\t\t    k, klen, v, vlen);\t\t\\\n\t\t\t\t}\t\t\t\t\t\\\n\t\t\t\tCONF_CONTINUE;\t\t\t\t\\\n\t\t\t}\n      /*\n       * One of the CONF_MIN macros below expands, in one of the use points,\n       * to \"unsigned integer < 0\", which is always false, triggering the\n       * GCC -Wtype-limits warning, which we disable here and re-enable below.\n       */\n      JEMALLOC_DIAGNOSTIC_PUSH\n      JEMALLOC_DIAGNOSTIC_IGNORE_TYPE_LIMITS\n\n#define CONF_DONT_CHECK_MIN(um, min)\tfalse\n#define CONF_CHECK_MIN(um, min)\t((um) < (min))\n#define CONF_DONT_CHECK_MAX(um, max)\tfalse\n#define CONF_CHECK_MAX(um, max)\t((um) > (max))\n\n#define CONF_VALUE_READ(max_t, result)\t\t\t\t\t\\\n\t      char *end;\t\t\t\t\t\t\\\n\t      
set_errno(0);\t\t\t\t\t\t\\\n\t      result = (max_t)malloc_strtoumax(v, &end, 0);\n#define CONF_VALUE_READ_FAIL()\t\t\t\t\t\t\\\n\t      (get_errno() != 0 || (uintptr_t)end - (uintptr_t)v != vlen)\n\n#define CONF_HANDLE_T(t, max_t, o, n, min, max, check_min, check_max, clip) \\\n\t\t\tif (CONF_MATCH(n)) {\t\t\t\t\\\n\t\t\t\tmax_t mv;\t\t\t\t\\\n\t\t\t\tCONF_VALUE_READ(max_t, mv)\t\t\\\n\t\t\t\tif (CONF_VALUE_READ_FAIL()) {\t\t\\\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\\\n\t\t\t\t\t    k, klen, v, vlen);\t\t\\\n\t\t\t\t} else if (clip) {\t\t\t\\\n\t\t\t\t\tif (check_min(mv, (t)(min))) {\t\\\n\t\t\t\t\t\to = (t)(min);\t\t\\\n\t\t\t\t\t} else if (\t\t\t\\\n\t\t\t\t\t    check_max(mv, (t)(max))) {\t\\\n\t\t\t\t\t\to = (t)(max);\t\t\\\n\t\t\t\t\t} else {\t\t\t\\\n\t\t\t\t\t\to = (t)mv;\t\t\\\n\t\t\t\t\t}\t\t\t\t\\\n\t\t\t\t} else {\t\t\t\t\\\n\t\t\t\t\tif (check_min(mv, (t)(min)) ||\t\\\n\t\t\t\t\t    check_max(mv, (t)(max))) {\t\\\n\t\t\t\t\t\tCONF_ERROR(\t\t\\\n\t\t\t\t\t\t    \"Out-of-range \"\t\\\n\t\t\t\t\t\t    \"conf value\",\t\\\n\t\t\t\t\t\t    k, klen, v, vlen);\t\\\n\t\t\t\t\t} else {\t\t\t\\\n\t\t\t\t\t\to = (t)mv;\t\t\\\n\t\t\t\t\t}\t\t\t\t\\\n\t\t\t\t}\t\t\t\t\t\\\n\t\t\t\tCONF_CONTINUE;\t\t\t\t\\\n\t\t\t}\n#define CONF_HANDLE_T_U(t, o, n, min, max, check_min, check_max, clip)\t\\\n\t      CONF_HANDLE_T(t, uintmax_t, o, n, min, max, check_min,\t\\\n\t\t\t    check_max, clip)\n#define CONF_HANDLE_T_SIGNED(t, o, n, min, max, check_min, check_max, clip)\\\n\t      CONF_HANDLE_T(t, intmax_t, o, n, min, max, check_min,\t\\\n\t\t\t    check_max, clip)\n\n#define CONF_HANDLE_UNSIGNED(o, n, min, max, check_min, check_max,\t\\\n    clip)\t\t\t\t\t\t\t\t\\\n\t\t\tCONF_HANDLE_T_U(unsigned, o, n, min, max,\t\\\n\t\t\t    check_min, check_max, clip)\n#define CONF_HANDLE_SIZE_T(o, n, min, max, check_min, check_max, clip)\t\\\n\t\t\tCONF_HANDLE_T_U(size_t, o, n, min, max,\t\t\\\n\t\t\t    check_min, check_max, clip)\n#define CONF_HANDLE_INT64_T(o, n, min, max, 
check_min, check_max, clip)\t\\\n\t\t\tCONF_HANDLE_T_SIGNED(int64_t, o, n, min, max,\t\\\n\t\t\t    check_min, check_max, clip)\n#define CONF_HANDLE_UINT64_T(o, n, min, max, check_min, check_max, clip)\\\n\t\t\tCONF_HANDLE_T_U(uint64_t, o, n, min, max,\t\\\n\t\t\t    check_min, check_max, clip)\n#define CONF_HANDLE_SSIZE_T(o, n, min, max)\t\t\t\t\\\n\t\t\tCONF_HANDLE_T_SIGNED(ssize_t, o, n, min, max,\t\\\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, false)\n#define CONF_HANDLE_CHAR_P(o, n, d)\t\t\t\t\t\\\n\t\t\tif (CONF_MATCH(n)) {\t\t\t\t\\\n\t\t\t\tsize_t cpylen = (vlen <=\t\t\\\n\t\t\t\t    sizeof(o)-1) ? vlen :\t\t\\\n\t\t\t\t    sizeof(o)-1;\t\t\t\\\n\t\t\t\tstrncpy(o, v, cpylen);\t\t\t\\\n\t\t\t\to[cpylen] = '\\0';\t\t\t\\\n\t\t\t\tCONF_CONTINUE;\t\t\t\t\\\n\t\t\t}\n\n\t\t\tbool cur_opt_valid = true;\n\n\t\t\tCONF_HANDLE_BOOL(opt_confirm_conf, \"confirm_conf\")\n\t\t\tif (initial_call) {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tCONF_HANDLE_BOOL(opt_abort, \"abort\")\n\t\t\tCONF_HANDLE_BOOL(opt_abort_conf, \"abort_conf\")\n\t\t\tCONF_HANDLE_BOOL(opt_trust_madvise, \"trust_madvise\")\n\t\t\tif (strncmp(\"metadata_thp\", k, klen) == 0) {\n\t\t\t\tint m;\n\t\t\t\tbool match = false;\n\t\t\t\tfor (m = 0; m < metadata_thp_mode_limit; m++) {\n\t\t\t\t\tif (strncmp(metadata_thp_mode_names[m],\n\t\t\t\t\t    v, vlen) == 0) {\n\t\t\t\t\t\topt_metadata_thp = m;\n\t\t\t\t\t\tmatch = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!match) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tCONF_HANDLE_BOOL(opt_retain, \"retain\")\n\t\t\tif (strncmp(\"dss\", k, klen) == 0) {\n\t\t\t\tint m;\n\t\t\t\tbool match = false;\n\t\t\t\tfor (m = 0; m < dss_prec_limit; m++) {\n\t\t\t\t\tif (strncmp(dss_prec_names[m], v, vlen)\n\t\t\t\t\t    == 0) {\n\t\t\t\t\t\tif (extent_dss_prec_set(m)) {\n\t\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t\t    \"Error setting dss\",\n\t\t\t\t\t\t\t    k, klen, v, 
vlen);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\topt_dss =\n\t\t\t\t\t\t\t    dss_prec_names[m];\n\t\t\t\t\t\t\tmatch = true;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!match) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (CONF_MATCH(\"narenas\")) {\n\t\t\t\tif (CONF_MATCH_VALUE(\"default\")) {\n\t\t\t\t\topt_narenas = 0;\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t} else {\n\t\t\t\t\tCONF_HANDLE_UNSIGNED(opt_narenas,\n\t\t\t\t\t    \"narenas\", 1, UINT_MAX,\n\t\t\t\t\t    CONF_CHECK_MIN, CONF_DONT_CHECK_MAX,\n\t\t\t\t\t    /* clip */ false)\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (CONF_MATCH(\"narenas_ratio\")) {\n\t\t\t\tchar *end;\n\t\t\t\tbool err = fxp_parse(&opt_narenas_ratio, v,\n\t\t\t\t    &end);\n\t\t\t\tif (err || (size_t)(end - v) != vlen) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (CONF_MATCH(\"bin_shards\")) {\n\t\t\t\tconst char *bin_shards_segment_cur = v;\n\t\t\t\tsize_t vlen_left = vlen;\n\t\t\t\tdo {\n\t\t\t\t\tsize_t size_start;\n\t\t\t\t\tsize_t size_end;\n\t\t\t\t\tsize_t nshards;\n\t\t\t\t\tbool err = malloc_conf_multi_sizes_next(\n\t\t\t\t\t    &bin_shards_segment_cur, &vlen_left,\n\t\t\t\t\t    &size_start, &size_end, &nshards);\n\t\t\t\t\tif (err || bin_update_shard_size(\n\t\t\t\t\t    bin_shard_sizes, size_start,\n\t\t\t\t\t    size_end, nshards)) {\n\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t    \"Invalid settings for \"\n\t\t\t\t\t\t    \"bin_shards\", k, klen, v,\n\t\t\t\t\t\t    vlen);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} while (vlen_left > 0);\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tCONF_HANDLE_INT64_T(opt_mutex_max_spin,\n\t\t\t    \"mutex_max_spin\", -1, INT64_MAX, CONF_CHECK_MIN,\n\t\t\t    CONF_DONT_CHECK_MAX, false);\n\t\t\tCONF_HANDLE_SSIZE_T(opt_dirty_decay_ms,\n\t\t\t    \"dirty_decay_ms\", -1, NSTIME_SEC_MAX * KQU(1000) <\n\t\t\t    
QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) :\n\t\t\t    SSIZE_MAX);\n\t\t\tCONF_HANDLE_SSIZE_T(opt_muzzy_decay_ms,\n\t\t\t    \"muzzy_decay_ms\", -1, NSTIME_SEC_MAX * KQU(1000) <\n\t\t\t    QU(SSIZE_MAX) ? NSTIME_SEC_MAX * KQU(1000) :\n\t\t\t    SSIZE_MAX);\n\t\t\tCONF_HANDLE_BOOL(opt_stats_print, \"stats_print\")\n\t\t\tif (CONF_MATCH(\"stats_print_opts\")) {\n\t\t\t\tinit_opt_stats_opts(v, vlen,\n\t\t\t\t    opt_stats_print_opts);\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tCONF_HANDLE_INT64_T(opt_stats_interval,\n\t\t\t    \"stats_interval\", -1, INT64_MAX,\n\t\t\t    CONF_CHECK_MIN, CONF_DONT_CHECK_MAX, false)\n\t\t\tif (CONF_MATCH(\"stats_interval_opts\")) {\n\t\t\t\tinit_opt_stats_opts(v, vlen,\n\t\t\t\t    opt_stats_interval_opts);\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (config_fill) {\n\t\t\t\tif (CONF_MATCH(\"junk\")) {\n\t\t\t\t\tif (CONF_MATCH_VALUE(\"true\")) {\n\t\t\t\t\t\topt_junk = \"true\";\n\t\t\t\t\t\topt_junk_alloc = opt_junk_free =\n\t\t\t\t\t\t    true;\n\t\t\t\t\t} else if (CONF_MATCH_VALUE(\"false\")) {\n\t\t\t\t\t\topt_junk = \"false\";\n\t\t\t\t\t\topt_junk_alloc = opt_junk_free =\n\t\t\t\t\t\t    false;\n\t\t\t\t\t} else if (CONF_MATCH_VALUE(\"alloc\")) {\n\t\t\t\t\t\topt_junk = \"alloc\";\n\t\t\t\t\t\topt_junk_alloc = true;\n\t\t\t\t\t\topt_junk_free = false;\n\t\t\t\t\t} else if (CONF_MATCH_VALUE(\"free\")) {\n\t\t\t\t\t\topt_junk = \"free\";\n\t\t\t\t\t\topt_junk_alloc = false;\n\t\t\t\t\t\topt_junk_free = true;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t    \"Invalid conf value\",\n\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t}\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\t\t\t\tCONF_HANDLE_BOOL(opt_zero, \"zero\")\n\t\t\t}\n\t\t\tif (config_utrace) {\n\t\t\t\tCONF_HANDLE_BOOL(opt_utrace, \"utrace\")\n\t\t\t}\n\t\t\tif (config_xmalloc) {\n\t\t\t\tCONF_HANDLE_BOOL(opt_xmalloc, \"xmalloc\")\n\t\t\t}\n\t\t\tif (config_enable_cxx) {\n\t\t\t\tCONF_HANDLE_BOOL(\n\t\t\t\t    opt_experimental_infallible_new,\n\t\t\t\t    
\"experimental_infallible_new\")\n\t\t\t}\n\n\t\t\tCONF_HANDLE_BOOL(opt_tcache, \"tcache\")\n\t\t\tCONF_HANDLE_SIZE_T(opt_tcache_max, \"tcache_max\",\n\t\t\t    0, TCACHE_MAXCLASS_LIMIT, CONF_DONT_CHECK_MIN,\n\t\t\t    CONF_CHECK_MAX, /* clip */ true)\n\t\t\tif (CONF_MATCH(\"lg_tcache_max\")) {\n\t\t\t\tsize_t m;\n\t\t\t\tCONF_VALUE_READ(size_t, m)\n\t\t\t\tif (CONF_VALUE_READ_FAIL()) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t} else {\n\t\t\t\t\t/* clip if necessary */\n\t\t\t\t\tif (m > TCACHE_LG_MAXCLASS_LIMIT) {\n\t\t\t\t\t\tm = TCACHE_LG_MAXCLASS_LIMIT;\n\t\t\t\t\t}\n\t\t\t\t\topt_tcache_max = (size_t)1 << m;\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\t/*\n\t\t\t * Anyone trying to set a value outside -16 to 16 is\n\t\t\t * deeply confused.\n\t\t\t */\n\t\t\tCONF_HANDLE_SSIZE_T(opt_lg_tcache_nslots_mul,\n\t\t\t    \"lg_tcache_nslots_mul\", -16, 16)\n\t\t\t/* Ditto with values past 2048. */\n\t\t\tCONF_HANDLE_UNSIGNED(opt_tcache_nslots_small_min,\n\t\t\t    \"tcache_nslots_small_min\", 1, 2048,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)\n\t\t\tCONF_HANDLE_UNSIGNED(opt_tcache_nslots_small_max,\n\t\t\t    \"tcache_nslots_small_max\", 1, 2048,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)\n\t\t\tCONF_HANDLE_UNSIGNED(opt_tcache_nslots_large,\n\t\t\t    \"tcache_nslots_large\", 1, 2048,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)\n\t\t\tCONF_HANDLE_SIZE_T(opt_tcache_gc_incr_bytes,\n\t\t\t    \"tcache_gc_incr_bytes\", 1024, SIZE_T_MAX,\n\t\t\t    CONF_CHECK_MIN, CONF_DONT_CHECK_MAX,\n\t\t\t    /* clip */ true)\n\t\t\tCONF_HANDLE_SIZE_T(opt_tcache_gc_delay_bytes,\n\t\t\t    \"tcache_gc_delay_bytes\", 0, SIZE_T_MAX,\n\t\t\t    CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX,\n\t\t\t    /* clip */ false)\n\t\t\tCONF_HANDLE_UNSIGNED(opt_lg_tcache_flush_small_div,\n\t\t\t    \"lg_tcache_flush_small_div\", 1, 16,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ 
true)\n\t\t\tCONF_HANDLE_UNSIGNED(opt_lg_tcache_flush_large_div,\n\t\t\t    \"lg_tcache_flush_large_div\", 1, 16,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, /* clip */ true)\n\n\t\t\t/*\n\t\t\t * The runtime option of oversize_threshold remains\n\t\t\t * undocumented.  It may be tweaked in the next major\n\t\t\t * release (6.0).  The default value 8M is rather\n\t\t\t * conservative / safe.  Tuning it further down may\n\t\t\t * improve fragmentation a bit more, but may also cause\n\t\t\t * contention on the huge arena.\n\t\t\t */\n\t\t\tCONF_HANDLE_SIZE_T(opt_oversize_threshold,\n\t\t\t    \"oversize_threshold\", 0, SC_LARGE_MAXCLASS,\n\t\t\t    CONF_DONT_CHECK_MIN, CONF_CHECK_MAX, false)\n\t\t\tCONF_HANDLE_SIZE_T(opt_lg_extent_max_active_fit,\n\t\t\t    \"lg_extent_max_active_fit\", 0,\n\t\t\t    (sizeof(size_t) << 3), CONF_DONT_CHECK_MIN,\n\t\t\t    CONF_CHECK_MAX, false)\n\n\t\t\tif (strncmp(\"percpu_arena\", k, klen) == 0) {\n\t\t\t\tbool match = false;\n\t\t\t\tfor (int m = percpu_arena_mode_names_base; m <\n\t\t\t\t    percpu_arena_mode_names_limit; m++) {\n\t\t\t\t\tif (strncmp(percpu_arena_mode_names[m],\n\t\t\t\t\t    v, vlen) == 0) {\n\t\t\t\t\t\tif (!have_percpu_arena) {\n\t\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t\t    \"No getcpu support\",\n\t\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t\t}\n\t\t\t\t\t\topt_percpu_arena = m;\n\t\t\t\t\t\tmatch = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!match) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tCONF_HANDLE_BOOL(opt_background_thread,\n\t\t\t    \"background_thread\");\n\t\t\tCONF_HANDLE_SIZE_T(opt_max_background_threads,\n\t\t\t\t\t   \"max_background_threads\", 1,\n\t\t\t\t\t   opt_max_background_threads,\n\t\t\t\t\t   CONF_CHECK_MIN, CONF_CHECK_MAX,\n\t\t\t\t\t   true);\n\t\t\tCONF_HANDLE_BOOL(opt_hpa, \"hpa\")\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_opts.slab_max_alloc,\n\t\t\t    \"hpa_slab_max_alloc\", 
PAGE, HUGEPAGE,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, true);\n\n\t\t\t/*\n\t\t\t * Accept either a ratio-based or an exact hugification\n\t\t\t * threshold.\n\t\t\t */\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_opts.hugification_threshold,\n\t\t\t    \"hpa_hugification_threshold\", PAGE, HUGEPAGE,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, true);\n\t\t\tif (CONF_MATCH(\"hpa_hugification_threshold_ratio\")) {\n\t\t\t\tfxp_t ratio;\n\t\t\t\tchar *end;\n\t\t\t\tbool err = fxp_parse(&ratio, v,\n\t\t\t\t    &end);\n\t\t\t\tif (err || (size_t)(end - v) != vlen\n\t\t\t\t    || ratio > FXP_INIT_INT(1)) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t} else {\n\t\t\t\t\topt_hpa_opts.hugification_threshold =\n\t\t\t\t\t    fxp_mul_frac(HUGEPAGE, ratio);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\n\t\t\tCONF_HANDLE_UINT64_T(\n\t\t\t    opt_hpa_opts.hugify_delay_ms, \"hpa_hugify_delay_ms\",\n\t\t\t    0, 0, CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX,\n\t\t\t    false);\n\n\t\t\tCONF_HANDLE_UINT64_T(\n\t\t\t    opt_hpa_opts.min_purge_interval_ms,\n\t\t\t    \"hpa_min_purge_interval_ms\", 0, 0,\n\t\t\t    CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false);\n\n\t\t\tif (CONF_MATCH(\"hpa_dirty_mult\")) {\n\t\t\t\tif (CONF_MATCH_VALUE(\"-1\")) {\n\t\t\t\t\topt_hpa_opts.dirty_mult = (fxp_t)-1;\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\t\t\t\tfxp_t ratio;\n\t\t\t\tchar *end;\n\t\t\t\tbool err = fxp_parse(&ratio, v,\n\t\t\t\t    &end);\n\t\t\t\tif (err || (size_t)(end - v) != vlen) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t} else {\n\t\t\t\t\topt_hpa_opts.dirty_mult = ratio;\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_sec_opts.nshards,\n\t\t\t    \"hpa_sec_nshards\", 0, 0, CONF_CHECK_MIN,\n\t\t\t    CONF_DONT_CHECK_MAX, true);\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_sec_opts.max_alloc,\n\t\t\t    \"hpa_sec_max_alloc\", PAGE, 0, CONF_CHECK_MIN,\n\t\t\t    
CONF_DONT_CHECK_MAX, true);\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_sec_opts.max_bytes,\n\t\t\t    \"hpa_sec_max_bytes\", PAGE, 0, CONF_CHECK_MIN,\n\t\t\t    CONF_DONT_CHECK_MAX, true);\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_sec_opts.bytes_after_flush,\n\t\t\t    \"hpa_sec_bytes_after_flush\", PAGE, 0,\n\t\t\t    CONF_CHECK_MIN, CONF_DONT_CHECK_MAX, true);\n\t\t\tCONF_HANDLE_SIZE_T(opt_hpa_sec_opts.batch_fill_extra,\n\t\t\t    \"hpa_sec_batch_fill_extra\", 0, HUGEPAGE_PAGES,\n\t\t\t    CONF_CHECK_MIN, CONF_CHECK_MAX, true);\n\n\t\t\tif (CONF_MATCH(\"slab_sizes\")) {\n\t\t\t\tif (CONF_MATCH_VALUE(\"default\")) {\n\t\t\t\t\tsc_data_init(sc_data);\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\t\t\t\tbool err;\n\t\t\t\tconst char *slab_size_segment_cur = v;\n\t\t\t\tsize_t vlen_left = vlen;\n\t\t\t\tdo {\n\t\t\t\t\tsize_t slab_start;\n\t\t\t\t\tsize_t slab_end;\n\t\t\t\t\tsize_t pgs;\n\t\t\t\t\terr = malloc_conf_multi_sizes_next(\n\t\t\t\t\t    &slab_size_segment_cur,\n\t\t\t\t\t    &vlen_left, &slab_start, &slab_end,\n\t\t\t\t\t    &pgs);\n\t\t\t\t\tif (!err) {\n\t\t\t\t\t\tsc_data_update_slab_size(\n\t\t\t\t\t\t    sc_data, slab_start,\n\t\t\t\t\t\t    slab_end, (int)pgs);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tCONF_ERROR(\"Invalid settings \"\n\t\t\t\t\t\t    \"for slab_sizes\",\n\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t}\n\t\t\t\t} while (!err && vlen_left > 0);\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (config_prof) {\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof, \"prof\")\n\t\t\t\tCONF_HANDLE_CHAR_P(opt_prof_prefix,\n\t\t\t\t    \"prof_prefix\", \"jeprof\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_active, \"prof_active\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_thread_active_init,\n\t\t\t\t    \"prof_thread_active_init\")\n\t\t\t\tCONF_HANDLE_SIZE_T(opt_lg_prof_sample,\n\t\t\t\t    \"lg_prof_sample\", 0, (sizeof(uint64_t) << 3)\n\t\t\t\t    - 1, CONF_DONT_CHECK_MIN, CONF_CHECK_MAX,\n\t\t\t\t    true)\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_accum, 
\"prof_accum\")\n\t\t\t\tCONF_HANDLE_SSIZE_T(opt_lg_prof_interval,\n\t\t\t\t    \"lg_prof_interval\", -1,\n\t\t\t\t    (sizeof(uint64_t) << 3) - 1)\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_gdump, \"prof_gdump\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_final, \"prof_final\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_leak, \"prof_leak\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_leak_error,\n\t\t\t\t    \"prof_leak_error\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_log, \"prof_log\")\n\t\t\t\tCONF_HANDLE_SSIZE_T(opt_prof_recent_alloc_max,\n\t\t\t\t    \"prof_recent_alloc_max\", -1, SSIZE_MAX)\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_stats, \"prof_stats\")\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_sys_thread_name,\n\t\t\t\t    \"prof_sys_thread_name\")\n\t\t\t\tif (CONF_MATCH(\"prof_time_resolution\")) {\n\t\t\t\t\tif (CONF_MATCH_VALUE(\"default\")) {\n\t\t\t\t\t\topt_prof_time_res =\n\t\t\t\t\t\t    prof_time_res_default;\n\t\t\t\t\t} else if (CONF_MATCH_VALUE(\"high\")) {\n\t\t\t\t\t\tif (!config_high_res_timer) {\n\t\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t\t    \"No high resolution\"\n\t\t\t\t\t\t\t    \" timer support\",\n\t\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\topt_prof_time_res =\n\t\t\t\t\t\t\t    prof_time_res_high;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t}\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t * Undocumented.  When set to false, don't\n\t\t\t\t * correct for an unbiasing bug in jeprof\n\t\t\t\t * attribution.  This can be handy if you want\n\t\t\t\t * to get consistent numbers from your binary\n\t\t\t\t * across different jemalloc versions, even if\n\t\t\t\t * those numbers are incorrect.  
The default is\n\t\t\t\t * true.\n\t\t\t\t */\n\t\t\t\tCONF_HANDLE_BOOL(opt_prof_unbias, \"prof_unbias\")\n\t\t\t}\n\t\t\tif (config_log) {\n\t\t\t\tif (CONF_MATCH(\"log\")) {\n\t\t\t\t\tsize_t cpylen = (\n\t\t\t\t\t    vlen <= sizeof(log_var_names) ?\n\t\t\t\t\t    vlen : sizeof(log_var_names) - 1);\n\t\t\t\t\tstrncpy(log_var_names, v, cpylen);\n\t\t\t\t\tlog_var_names[cpylen] = '\\0';\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (CONF_MATCH(\"thp\")) {\n\t\t\t\tbool match = false;\n\t\t\t\tfor (int m = 0; m < thp_mode_names_limit; m++) {\n\t\t\t\t\tif (strncmp(thp_mode_names[m],v, vlen)\n\t\t\t\t\t    == 0) {\n\t\t\t\t\t\tif (!have_madvise_huge && !have_memcntl) {\n\t\t\t\t\t\t\tCONF_ERROR(\n\t\t\t\t\t\t\t    \"No THP support\",\n\t\t\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t\t\t}\n\t\t\t\t\t\topt_thp = m;\n\t\t\t\t\t\tmatch = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!match) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (CONF_MATCH(\"zero_realloc\")) {\n\t\t\t\tif (CONF_MATCH_VALUE(\"alloc\")) {\n\t\t\t\t\topt_zero_realloc_action\n\t\t\t\t\t    = zero_realloc_action_alloc;\n\t\t\t\t} else if (CONF_MATCH_VALUE(\"free\")) {\n\t\t\t\t\topt_zero_realloc_action\n\t\t\t\t\t    = zero_realloc_action_free;\n\t\t\t\t} else if (CONF_MATCH_VALUE(\"abort\")) {\n\t\t\t\t\topt_zero_realloc_action\n\t\t\t\t\t    = zero_realloc_action_abort;\n\t\t\t\t} else {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\t\t\tif (config_uaf_detection &&\n\t\t\t    CONF_MATCH(\"lg_san_uaf_align\")) {\n\t\t\t\tssize_t a;\n\t\t\t\tCONF_VALUE_READ(ssize_t, a)\n\t\t\t\tif (CONF_VALUE_READ_FAIL() || a < -1) {\n\t\t\t\t\tCONF_ERROR(\"Invalid conf value\",\n\t\t\t\t\t    k, klen, v, vlen);\n\t\t\t\t}\n\t\t\t\tif (a == -1) {\n\t\t\t\t\topt_lg_san_uaf_align = 
-1;\n\t\t\t\t\tCONF_CONTINUE;\n\t\t\t\t}\n\n\t\t\t\t/* clip if necessary */\n\t\t\t\tssize_t max_allowed = (sizeof(size_t) << 3) - 1;\n\t\t\t\tssize_t min_allowed = LG_PAGE;\n\t\t\t\tif (a > max_allowed) {\n\t\t\t\t\ta = max_allowed;\n\t\t\t\t} else if (a < min_allowed) {\n\t\t\t\t\ta = min_allowed;\n\t\t\t\t}\n\n\t\t\t\topt_lg_san_uaf_align = a;\n\t\t\t\tCONF_CONTINUE;\n\t\t\t}\n\n\t\t\tCONF_HANDLE_SIZE_T(opt_san_guard_small,\n\t\t\t    \"san_guard_small\", 0, SIZE_T_MAX,\n\t\t\t    CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false)\n\t\t\tCONF_HANDLE_SIZE_T(opt_san_guard_large,\n\t\t\t    \"san_guard_large\", 0, SIZE_T_MAX,\n\t\t\t    CONF_DONT_CHECK_MIN, CONF_DONT_CHECK_MAX, false)\n\n\t\t\tCONF_ERROR(\"Invalid conf pair\", k, klen, v, vlen);\n#undef CONF_ERROR\n#undef CONF_CONTINUE\n#undef CONF_MATCH\n#undef CONF_MATCH_VALUE\n#undef CONF_HANDLE_BOOL\n#undef CONF_DONT_CHECK_MIN\n#undef CONF_CHECK_MIN\n#undef CONF_DONT_CHECK_MAX\n#undef CONF_CHECK_MAX\n#undef CONF_HANDLE_T\n#undef CONF_HANDLE_T_U\n#undef CONF_HANDLE_T_SIGNED\n#undef CONF_HANDLE_UNSIGNED\n#undef CONF_HANDLE_SIZE_T\n#undef CONF_HANDLE_SSIZE_T\n#undef CONF_HANDLE_CHAR_P\n    /* Re-enable diagnostic \"-Wtype-limits\" */\n    JEMALLOC_DIAGNOSTIC_POP\n\t\t}\n\t\tif (opt_abort_conf && had_conf_error) {\n\t\t\tmalloc_abort_invalid_conf();\n\t\t}\n\t}\n\tatomic_store_b(&log_init_done, true, ATOMIC_RELEASE);\n}\n\nstatic bool\nmalloc_conf_init_check_deps(void) {\n\tif (opt_prof_leak_error && !opt_prof_final) {\n\t\tmalloc_printf(\"<jemalloc>: prof_leak_error is set w/o \"\n\t\t    \"prof_final.\\n\");\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic void\nmalloc_conf_init(sc_data_t *sc_data, unsigned bin_shard_sizes[SC_NBINS]) {\n\tconst char *opts_cache[MALLOC_CONF_NSOURCES] = {NULL, NULL, NULL, NULL,\n\t\tNULL};\n\tchar buf[PATH_MAX + 1];\n\n\t/* The first call only set the confirm_conf option and opts_cache */\n\tmalloc_conf_init_helper(NULL, NULL, true, opts_cache, 
buf);\n\tmalloc_conf_init_helper(sc_data, bin_shard_sizes, false, opts_cache,\n\t    NULL);\n\tif (malloc_conf_init_check_deps()) {\n\t\t/* check_deps does warning msg only; abort below if needed. */\n\t\tif (opt_abort_conf) {\n\t\t\tmalloc_abort_invalid_conf();\n\t\t}\n\t}\n}\n\n#undef MALLOC_CONF_NSOURCES\n\nstatic bool\nmalloc_init_hard_needed(void) {\n\tif (malloc_initialized() || (IS_INITIALIZER && malloc_init_state ==\n\t    malloc_init_recursible)) {\n\t\t/*\n\t\t * Another thread initialized the allocator before this one\n\t\t * acquired init_lock, or this thread is the initializing\n\t\t * thread, and it is recursively allocating.\n\t\t */\n\t\treturn false;\n\t}\n#ifdef JEMALLOC_THREADED_INIT\n\tif (malloc_initializer != NO_INITIALIZER && !IS_INITIALIZER) {\n\t\t/* Busy-wait until the initializing thread completes. */\n\t\tspin_t spinner = SPIN_INITIALIZER;\n\t\tdo {\n\t\t\tmalloc_mutex_unlock(TSDN_NULL, &init_lock);\n\t\t\tspin_adaptive(&spinner);\n\t\t\tmalloc_mutex_lock(TSDN_NULL, &init_lock);\n\t\t} while (!malloc_initialized());\n\t\treturn false;\n\t}\n#endif\n\treturn true;\n}\n\nstatic bool\nmalloc_init_hard_a0_locked() {\n\tmalloc_initializer = INITIALIZER;\n\n\tJEMALLOC_DIAGNOSTIC_PUSH\n\tJEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n\tsc_data_t sc_data = {0};\n\tJEMALLOC_DIAGNOSTIC_POP\n\n\t/*\n\t * Ordering here is somewhat tricky; we need sc_boot() first, since that\n\t * determines what the size classes will be, and then\n\t * malloc_conf_init(), since any slab size tweaking will need to be done\n\t * before sz_boot and bin_info_boot, which assume that the values they\n\t * read out of sc_data_global are final.\n\t */\n\tsc_boot(&sc_data);\n\tunsigned bin_shard_sizes[SC_NBINS];\n\tbin_shard_sizes_boot(bin_shard_sizes);\n\t/*\n\t * prof_boot0 only initializes opt_prof_prefix.  
We need to do it before\n\t * we parse malloc_conf options, in case malloc_conf parsing overwrites\n\t * it.\n\t */\n\tif (config_prof) {\n\t\tprof_boot0();\n\t}\n\tmalloc_conf_init(&sc_data, bin_shard_sizes);\n\tsan_init(opt_lg_san_uaf_align);\n\tsz_boot(&sc_data, opt_cache_oblivious);\n\tbin_info_boot(&sc_data, bin_shard_sizes);\n\n\tif (opt_stats_print) {\n\t\t/* Print statistics at exit. */\n\t\tif (atexit(stats_print_atexit) != 0) {\n\t\t\tmalloc_write(\"<jemalloc>: Error in atexit()\\n\");\n\t\t\tif (opt_abort) {\n\t\t\t\tabort();\n\t\t\t}\n\t\t}\n\t}\n\n\tif (stats_boot()) {\n\t\treturn true;\n\t}\n\tif (pages_boot()) {\n\t\treturn true;\n\t}\n\tif (base_boot(TSDN_NULL)) {\n\t\treturn true;\n\t}\n\t/* emap_global is static, hence zeroed. */\n\tif (emap_init(&arena_emap_global, b0get(), /* zeroed */ true)) {\n\t\treturn true;\n\t}\n\tif (extent_boot()) {\n\t\treturn true;\n\t}\n\tif (ctl_boot()) {\n\t\treturn true;\n\t}\n\tif (config_prof) {\n\t\tprof_boot1();\n\t}\n\tif (opt_hpa && !hpa_supported()) {\n\t\tmalloc_printf(\"<jemalloc>: HPA not supported in the current \"\n\t\t    \"configuration; %s.\",\n\t\t    opt_abort_conf ? \"aborting\" : \"disabling\");\n\t\tif (opt_abort_conf) {\n\t\t\tmalloc_abort_invalid_conf();\n\t\t} else {\n\t\t\topt_hpa = false;\n\t\t}\n\t}\n\tif (arena_boot(&sc_data, b0get(), opt_hpa)) {\n\t\treturn true;\n\t}\n\tif (tcache_boot(TSDN_NULL, b0get())) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&arenas_lock, \"arenas\", WITNESS_RANK_ARENAS,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\thook_boot();\n\t/*\n\t * Create enough scaffolding to allow recursive allocation in\n\t * malloc_ncpus().\n\t */\n\tnarenas_auto = 1;\n\tmanual_arena_base = narenas_auto + 1;\n\tmemset(arenas, 0, sizeof(arena_t *) * narenas_auto);\n\t/*\n\t * Initialize one arena here.  
The rest are lazily created in\n\t * arena_choose_hard().\n\t */\n\tif (arena_init(TSDN_NULL, 0, &arena_config_default) == NULL) {\n\t\treturn true;\n\t}\n\ta0 = arena_get(TSDN_NULL, 0, false);\n\n\tif (opt_hpa && !hpa_supported()) {\n\t\tmalloc_printf(\"<jemalloc>: HPA not supported in the current \"\n\t\t    \"configuration; %s.\",\n\t\t    opt_abort_conf ? \"aborting\" : \"disabling\");\n\t\tif (opt_abort_conf) {\n\t\t\tmalloc_abort_invalid_conf();\n\t\t} else {\n\t\t\topt_hpa = false;\n\t\t}\n\t} else if (opt_hpa) {\n\t\thpa_shard_opts_t hpa_shard_opts = opt_hpa_opts;\n\t\thpa_shard_opts.deferral_allowed = background_thread_enabled();\n\t\tif (pa_shard_enable_hpa(TSDN_NULL, &a0->pa_shard,\n\t\t    &hpa_shard_opts, &opt_hpa_sec_opts)) {\n\t\t\treturn true;\n\t\t}\n\t}\n\n\tmalloc_init_state = malloc_init_a0_initialized;\n\n\treturn false;\n}\n\nstatic bool\nmalloc_init_hard_a0(void) {\n\tbool ret;\n\n\tmalloc_mutex_lock(TSDN_NULL, &init_lock);\n\tret = malloc_init_hard_a0_locked();\n\tmalloc_mutex_unlock(TSDN_NULL, &init_lock);\n\treturn ret;\n}\n\n/* Initialize data structures which may trigger recursive allocation. */\nstatic bool\nmalloc_init_hard_recursible(void) {\n\tmalloc_init_state = malloc_init_recursible;\n\n\tncpus = malloc_ncpus();\n\tif (opt_percpu_arena != percpu_arena_disabled) {\n\t\tbool cpu_count_is_deterministic =\n\t\t    malloc_cpu_count_is_deterministic();\n\t\tif (!cpu_count_is_deterministic) {\n\t\t\t/*\n\t\t\t * If # of CPU is not deterministic, and narenas not\n\t\t\t * specified, disables per cpu arena since it may not\n\t\t\t * detect CPU IDs properly.\n\t\t\t */\n\t\t\tif (opt_narenas == 0) {\n\t\t\t\topt_percpu_arena = percpu_arena_disabled;\n\t\t\t\tmalloc_write(\"<jemalloc>: Number of CPUs \"\n\t\t\t\t    \"detected is not deterministic. 
Per-CPU \"\n\t\t\t\t    \"arena disabled.\\n\");\n\t\t\t\tif (opt_abort_conf) {\n\t\t\t\t\tmalloc_abort_invalid_conf();\n\t\t\t\t}\n\t\t\t\tif (opt_abort) {\n\t\t\t\t\tabort();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n#if (defined(JEMALLOC_HAVE_PTHREAD_ATFORK) && !defined(JEMALLOC_MUTEX_INIT_CB) \\\n    && !defined(JEMALLOC_ZONE) && !defined(_WIN32) && \\\n    !defined(__native_client__))\n\t/* LinuxThreads' pthread_atfork() allocates. */\n\tif (pthread_atfork(jemalloc_prefork, jemalloc_postfork_parent,\n\t    jemalloc_postfork_child) != 0) {\n\t\tmalloc_write(\"<jemalloc>: Error in pthread_atfork()\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t\treturn true;\n\t}\n#endif\n\n\tif (background_thread_boot0()) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic unsigned\nmalloc_narenas_default(void) {\n\tassert(ncpus > 0);\n\t/*\n\t * For SMP systems, create more than one arena per CPU by\n\t * default.\n\t */\n\tif (ncpus > 1) {\n\t\tfxp_t fxp_ncpus = FXP_INIT_INT(ncpus);\n\t\tfxp_t goal = fxp_mul(fxp_ncpus, opt_narenas_ratio);\n\t\tuint32_t int_goal = fxp_round_nearest(goal);\n\t\tif (int_goal == 0) {\n\t\t\treturn 1;\n\t\t}\n\t\treturn int_goal;\n\t} else {\n\t\treturn 1;\n\t}\n}\n\nstatic percpu_arena_mode_t\npercpu_arena_as_initialized(percpu_arena_mode_t mode) {\n\tassert(!malloc_initialized());\n\tassert(mode <= percpu_arena_disabled);\n\n\tif (mode != percpu_arena_disabled) {\n\t\tmode += percpu_arena_mode_enabled_base;\n\t}\n\n\treturn mode;\n}\n\nstatic bool\nmalloc_init_narenas(void) {\n\tassert(ncpus > 0);\n\n\tif (opt_percpu_arena != percpu_arena_disabled) {\n\t\tif (!have_percpu_arena || malloc_getcpu() < 0) {\n\t\t\topt_percpu_arena = percpu_arena_disabled;\n\t\t\tmalloc_printf(\"<jemalloc>: perCPU arena getcpu() not \"\n\t\t\t    \"available. 
Setting narenas to %u.\\n\", opt_narenas ?\n\t\t\t    opt_narenas : malloc_narenas_default());\n\t\t\tif (opt_abort) {\n\t\t\t\tabort();\n\t\t\t}\n\t\t} else {\n\t\t\tif (ncpus >= MALLOCX_ARENA_LIMIT) {\n\t\t\t\tmalloc_printf(\"<jemalloc>: narenas w/ percpu\"\n\t\t\t\t    \"arena beyond limit (%d)\\n\", ncpus);\n\t\t\t\tif (opt_abort) {\n\t\t\t\t\tabort();\n\t\t\t\t}\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\t/* NB: opt_percpu_arena isn't fully initialized yet. */\n\t\t\tif (percpu_arena_as_initialized(opt_percpu_arena) ==\n\t\t\t    per_phycpu_arena && ncpus % 2 != 0) {\n\t\t\t\tmalloc_printf(\"<jemalloc>: invalid \"\n\t\t\t\t    \"configuration -- per physical CPU arena \"\n\t\t\t\t    \"with odd number (%u) of CPUs (no hyper \"\n\t\t\t\t    \"threading?).\\n\", ncpus);\n\t\t\t\tif (opt_abort)\n\t\t\t\t\tabort();\n\t\t\t}\n\t\t\tunsigned n = percpu_arena_ind_limit(\n\t\t\t    percpu_arena_as_initialized(opt_percpu_arena));\n\t\t\tif (opt_narenas < n) {\n\t\t\t\t/*\n\t\t\t\t * If narenas is specified with percpu_arena\n\t\t\t\t * enabled, actual narenas is set as the greater\n\t\t\t\t * of the two. percpu_arena_choose will be free\n\t\t\t\t * to use any of the arenas based on CPU\n\t\t\t\t * id. This is conservative (at a small cost)\n\t\t\t\t * but ensures correctness.\n\t\t\t\t *\n\t\t\t\t * If for some reason the ncpus determined at\n\t\t\t\t * boot is not the actual number (e.g. 
because\n\t\t\t\t * of affinity setting from numactl), reserving\n\t\t\t\t * narenas this way provides a workaround for\n\t\t\t\t * percpu_arena.\n\t\t\t\t */\n\t\t\t\topt_narenas = n;\n\t\t\t}\n\t\t}\n\t}\n\tif (opt_narenas == 0) {\n\t\topt_narenas = malloc_narenas_default();\n\t}\n\tassert(opt_narenas > 0);\n\n\tnarenas_auto = opt_narenas;\n\t/*\n\t * Limit the number of arenas to the indexing range of MALLOCX_ARENA().\n\t */\n\tif (narenas_auto >= MALLOCX_ARENA_LIMIT) {\n\t\tnarenas_auto = MALLOCX_ARENA_LIMIT - 1;\n\t\tmalloc_printf(\"<jemalloc>: Reducing narenas to limit (%d)\\n\",\n\t\t    narenas_auto);\n\t}\n\tnarenas_total_set(narenas_auto);\n\tif (arena_init_huge()) {\n\t\tnarenas_total_inc();\n\t}\n\tmanual_arena_base = narenas_total_get();\n\n\treturn false;\n}\n\nstatic void\nmalloc_init_percpu(void) {\n\topt_percpu_arena = percpu_arena_as_initialized(opt_percpu_arena);\n}\n\nstatic bool\nmalloc_init_hard_finish(void) {\n\tif (malloc_mutex_boot()) {\n\t\treturn true;\n\t}\n\n\tmalloc_init_state = malloc_init_initialized;\n\tmalloc_slow_flag_init();\n\n\treturn false;\n}\n\nstatic void\nmalloc_init_hard_cleanup(tsdn_t *tsdn, bool reentrancy_set) {\n\tmalloc_mutex_assert_owner(tsdn, &init_lock);\n\tmalloc_mutex_unlock(tsdn, &init_lock);\n\tif (reentrancy_set) {\n\t\tassert(!tsdn_null(tsdn));\n\t\ttsd_t *tsd = tsdn_tsd(tsdn);\n\t\tassert(tsd_reentrancy_level_get(tsd) > 0);\n\t\tpost_reentrancy(tsd);\n\t}\n}\n\nstatic bool\nmalloc_init_hard(void) {\n\ttsd_t *tsd;\n\n#if defined(_WIN32) && _WIN32_WINNT < 0x0600\n\t_init_init_lock();\n#endif\n\tmalloc_mutex_lock(TSDN_NULL, &init_lock);\n\n#define UNLOCK_RETURN(tsdn, ret, reentrancy)\t\t\\\n\tmalloc_init_hard_cleanup(tsdn, reentrancy);\t\\\n\treturn ret;\n\n\tif (!malloc_init_hard_needed()) {\n\t\tUNLOCK_RETURN(TSDN_NULL, false, false)\n\t}\n\n\tif (malloc_init_state != malloc_init_a0_initialized &&\n\t    malloc_init_hard_a0_locked()) {\n\t\tUNLOCK_RETURN(TSDN_NULL, true, 
false)\n\t}\n\n\tmalloc_mutex_unlock(TSDN_NULL, &init_lock);\n\t/* Recursive allocation relies on functional tsd. */\n\ttsd = malloc_tsd_boot0();\n\tif (tsd == NULL) {\n\t\treturn true;\n\t}\n\tif (malloc_init_hard_recursible()) {\n\t\treturn true;\n\t}\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &init_lock);\n\t/* Set reentrancy level to 1 during init. */\n\tpre_reentrancy(tsd, NULL);\n\t/* Initialize narenas before prof_boot2 (for allocation). */\n\tif (malloc_init_narenas()\n\t    || background_thread_boot1(tsd_tsdn(tsd), b0get())) {\n\t\tUNLOCK_RETURN(tsd_tsdn(tsd), true, true)\n\t}\n\tif (config_prof && prof_boot2(tsd, b0get())) {\n\t\tUNLOCK_RETURN(tsd_tsdn(tsd), true, true)\n\t}\n\n\tmalloc_init_percpu();\n\n\tif (malloc_init_hard_finish()) {\n\t\tUNLOCK_RETURN(tsd_tsdn(tsd), true, true)\n\t}\n\tpost_reentrancy(tsd);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &init_lock);\n\n\twitness_assert_lockless(witness_tsd_tsdn(\n\t    tsd_witness_tsdp_get_unsafe(tsd)));\n\tmalloc_tsd_boot1();\n\t/* Update TSD after tsd_boot1. */\n\ttsd = tsd_fetch();\n\tif (opt_background_thread) {\n\t\tassert(have_background_thread);\n\t\t/*\n\t\t * Need to finish init & unlock first before creating background\n\t\t * threads (pthread_create depends on malloc).  ctl_init (which\n\t\t * sets isthreaded) needs to be called without holding any lock.\n\t\t */\n\t\tbackground_thread_ctl_init(tsd_tsdn(tsd));\n\t\tif (background_thread_create(tsd, 0)) {\n\t\t\treturn true;\n\t\t}\n\t}\n#undef UNLOCK_RETURN\n\treturn false;\n}\n\n/*\n * End initialization functions.\n */\n/******************************************************************************/\n/*\n * Begin allocation-path internal functions and data structures.\n */\n\n/*\n * Settings determined by the documented behavior of the allocation functions.\n */\ntypedef struct static_opts_s static_opts_t;\nstruct static_opts_s {\n\t/* Whether or not allocation size may overflow. 
*/\n\tbool may_overflow;\n\n\t/*\n\t * Whether or not allocations (with alignment) of size 0 should be\n\t * treated as size 1.\n\t */\n\tbool bump_empty_aligned_alloc;\n\t/*\n\t * Whether to assert that allocations are not of size 0 (after any\n\t * bumping).\n\t */\n\tbool assert_nonempty_alloc;\n\n\t/*\n\t * Whether or not to modify the 'result' argument to malloc in case of\n\t * error.\n\t */\n\tbool null_out_result_on_error;\n\t/* Whether to set errno when we encounter an error condition. */\n\tbool set_errno_on_error;\n\n\t/*\n\t * The minimum valid alignment for functions requesting aligned storage.\n\t */\n\tsize_t min_alignment;\n\n\t/* The error string to use if we oom. */\n\tconst char *oom_string;\n\t/* The error string to use if the passed-in alignment is invalid. */\n\tconst char *invalid_alignment_string;\n\n\t/*\n\t * False if we're configured to skip some time-consuming operations.\n\t *\n\t * This isn't really a malloc \"behavior\", but it acts as a useful\n\t * summary of several other static (or at least, static after program\n\t * initialization) options.\n\t */\n\tbool slow;\n\t/*\n\t * Return size.\n\t */\n\tbool usize;\n};\n\nJEMALLOC_ALWAYS_INLINE void\nstatic_opts_init(static_opts_t *static_opts) {\n\tstatic_opts->may_overflow = false;\n\tstatic_opts->bump_empty_aligned_alloc = false;\n\tstatic_opts->assert_nonempty_alloc = false;\n\tstatic_opts->null_out_result_on_error = false;\n\tstatic_opts->set_errno_on_error = false;\n\tstatic_opts->min_alignment = 0;\n\tstatic_opts->oom_string = \"\";\n\tstatic_opts->invalid_alignment_string = \"\";\n\tstatic_opts->slow = false;\n\tstatic_opts->usize = false;\n}\n\n/*\n * These correspond to the macros in jemalloc/jemalloc_macros.h.  Broadly, we\n * should have one constant here per magic value there.  
Note however that the\n * representations need not be related.\n */\n#define TCACHE_IND_NONE ((unsigned)-1)\n#define TCACHE_IND_AUTOMATIC ((unsigned)-2)\n#define ARENA_IND_AUTOMATIC ((unsigned)-1)\n\ntypedef struct dynamic_opts_s dynamic_opts_t;\nstruct dynamic_opts_s {\n\tvoid **result;\n\tsize_t usize;\n\tsize_t num_items;\n\tsize_t item_size;\n\tsize_t alignment;\n\tbool zero;\n\tunsigned tcache_ind;\n\tunsigned arena_ind;\n};\n\nJEMALLOC_ALWAYS_INLINE void\ndynamic_opts_init(dynamic_opts_t *dynamic_opts) {\n\tdynamic_opts->result = NULL;\n\tdynamic_opts->usize = 0;\n\tdynamic_opts->num_items = 0;\n\tdynamic_opts->item_size = 0;\n\tdynamic_opts->alignment = 0;\n\tdynamic_opts->zero = false;\n\tdynamic_opts->tcache_ind = TCACHE_IND_AUTOMATIC;\n\tdynamic_opts->arena_ind = ARENA_IND_AUTOMATIC;\n}\n\n/*\n * ind parameter is optional and is only checked and filled if alignment == 0;\n * return true if result is out of range.\n */\nJEMALLOC_ALWAYS_INLINE bool\naligned_usize_get(size_t size, size_t alignment, size_t *usize, szind_t *ind,\n    bool bump_empty_aligned_alloc) {\n\tassert(usize != NULL);\n\tif (alignment == 0) {\n\t\tif (ind != NULL) {\n\t\t\t*ind = sz_size2index(size);\n\t\t\tif (unlikely(*ind >= SC_NSIZES)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\t*usize = sz_index2size(*ind);\n\t\t\tassert(*usize > 0 && *usize <= SC_LARGE_MAXCLASS);\n\t\t\treturn false;\n\t\t}\n\t\t*usize = sz_s2u(size);\n\t} else {\n\t\tif (bump_empty_aligned_alloc && unlikely(size == 0)) {\n\t\t\tsize = 1;\n\t\t}\n\t\t*usize = sz_sa2u(size, alignment);\n\t}\n\tif (unlikely(*usize == 0 || *usize > SC_LARGE_MAXCLASS)) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nzero_get(bool guarantee, bool slow) {\n\tif (config_fill && slow && unlikely(opt_zero)) {\n\t\treturn true;\n\t} else {\n\t\treturn guarantee;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE tcache_t *\ntcache_get_from_ind(tsd_t *tsd, unsigned tcache_ind, bool slow, bool is_alloc) {\n\ttcache_t *tcache;\n\tif 
(tcache_ind == TCACHE_IND_AUTOMATIC) {\n\t\tif (likely(!slow)) {\n\t\t\t/* Getting tcache ptr unconditionally. */\n\t\t\ttcache = tsd_tcachep_get(tsd);\n\t\t\tassert(tcache == tcache_get(tsd));\n\t\t} else if (is_alloc ||\n\t\t    likely(tsd_reentrancy_level_get(tsd) == 0)) {\n\t\t\ttcache = tcache_get(tsd);\n\t\t} else {\n\t\t\ttcache = NULL;\n\t\t}\n\t} else {\n\t\t/*\n\t\t * Should not specify tcache on deallocation path when being\n\t\t * reentrant.\n\t\t */\n\t\tassert(is_alloc || tsd_reentrancy_level_get(tsd) == 0 ||\n\t\t    tsd_state_nocleanup(tsd));\n\t\tif (tcache_ind == TCACHE_IND_NONE) {\n\t\t\ttcache = NULL;\n\t\t} else {\n\t\t\ttcache = tcaches_get(tsd, tcache_ind);\n\t\t}\n\t}\n\treturn tcache;\n}\n\n/* Return true if a manual arena is specified and arena_get() OOMs. */\nJEMALLOC_ALWAYS_INLINE bool\narena_get_from_ind(tsd_t *tsd, unsigned arena_ind, arena_t **arena_p) {\n\tif (arena_ind == ARENA_IND_AUTOMATIC) {\n\t\t/*\n\t\t * In case of automatic arena management, we defer arena\n\t\t * computation until as late as we can, hoping to fill the\n\t\t * allocation out of the tcache.\n\t\t */\n\t\t*arena_p = NULL;\n\t} else {\n\t\t*arena_p = arena_get(tsd_tsdn(tsd), arena_ind, true);\n\t\tif (unlikely(*arena_p == NULL) && arena_ind >= narenas_auto) {\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/* ind is ignored if dopts->alignment > 0. */\nJEMALLOC_ALWAYS_INLINE void *\nimalloc_no_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd,\n    size_t size, size_t usize, szind_t ind) {\n\t/* Fill in the tcache. */\n\ttcache_t *tcache = tcache_get_from_ind(tsd, dopts->tcache_ind,\n\t    sopts->slow, /* is_alloc */ true);\n\n\t/* Fill in the arena. 
*/\n\tarena_t *arena;\n\tif (arena_get_from_ind(tsd, dopts->arena_ind, &arena)) {\n\t\treturn NULL;\n\t}\n\n\tif (unlikely(dopts->alignment != 0)) {\n\t\treturn ipalloct(tsd_tsdn(tsd), usize, dopts->alignment,\n\t\t    dopts->zero, tcache, arena);\n\t}\n\n\treturn iallocztm(tsd_tsdn(tsd), size, ind, dopts->zero, tcache, false,\n\t    arena, sopts->slow);\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nimalloc_sample(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd,\n    size_t usize, szind_t ind) {\n\tvoid *ret;\n\n\t/*\n\t * For small allocations, sampling bumps the usize.  If so, we allocate\n\t * from the ind_large bucket.\n\t */\n\tszind_t ind_large;\n\tsize_t bumped_usize = usize;\n\n\tdopts->alignment = prof_sample_align(dopts->alignment);\n\tif (usize <= SC_SMALL_MAXCLASS) {\n\t\tassert(((dopts->alignment == 0) ?\n\t\t    sz_s2u(SC_LARGE_MINCLASS) :\n\t\t    sz_sa2u(SC_LARGE_MINCLASS, dopts->alignment))\n\t\t\t== SC_LARGE_MINCLASS);\n\t\tind_large = sz_size2index(SC_LARGE_MINCLASS);\n\t\tbumped_usize = sz_s2u(SC_LARGE_MINCLASS);\n\t\tret = imalloc_no_sample(sopts, dopts, tsd, bumped_usize,\n\t\t    bumped_usize, ind_large);\n\t\tif (unlikely(ret == NULL)) {\n\t\t\treturn NULL;\n\t\t}\n\t\tarena_prof_promote(tsd_tsdn(tsd), ret, usize);\n\t} else {\n\t\tret = imalloc_no_sample(sopts, dopts, tsd, usize, usize, ind);\n\t}\n\tassert(prof_sample_aligned(ret));\n\n\treturn ret;\n}\n\n/*\n * Returns true if the allocation will overflow, and false otherwise.  Sets\n * *size to the product either way.\n */\nJEMALLOC_ALWAYS_INLINE bool\ncompute_size_with_overflow(bool may_overflow, dynamic_opts_t *dopts,\n    size_t *size) {\n\t/*\n\t * This function is just num_items * item_size, except that we may have\n\t * to check for overflow.\n\t */\n\n\tif (!may_overflow) {\n\t\tassert(dopts->num_items == 1);\n\t\t*size = dopts->item_size;\n\t\treturn false;\n\t}\n\n\t/* A size_t with its high-half bits all set to 1. 
*/\n\tstatic const size_t high_bits = SIZE_T_MAX << (sizeof(size_t) * 8 / 2);\n\n\t*size = dopts->item_size * dopts->num_items;\n\n\tif (unlikely(*size == 0)) {\n\t\treturn (dopts->num_items != 0 && dopts->item_size != 0);\n\t}\n\n\t/*\n\t * We got a non-zero size, but we don't know if we overflowed to get\n\t * there.  To avoid having to do a divide, we'll be clever and note that\n\t * if both A and B can be represented in N/2 bits, then their product\n\t * can be represented in N bits (without the possibility of overflow).\n\t */\n\tif (likely((high_bits & (dopts->num_items | dopts->item_size)) == 0)) {\n\t\treturn false;\n\t}\n\tif (likely(*size / dopts->item_size == dopts->num_items)) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nJEMALLOC_ALWAYS_INLINE int\nimalloc_body(static_opts_t *sopts, dynamic_opts_t *dopts, tsd_t *tsd) {\n\t/* Where the actual allocated memory will live. */\n\tvoid *allocation = NULL;\n\t/* Filled in by compute_size_with_overflow below. */\n\tsize_t size = 0;\n\t/*\n\t * The zero initialization for ind is actually a dead store, in that its\n\t * value is reset before any branch on its value is taken.  Sometimes\n\t * though, it's convenient to pass it as an argument before this point.\n\t * To avoid undefined behavior then, we initialize it with a dummy\n\t * store.\n\t */\n\tszind_t ind = 0;\n\t/* usize will always be properly initialized. */\n\tsize_t usize;\n\n\t/* Reentrancy is only checked on slow path. */\n\tint8_t reentrancy_level;\n\n\t/* Compute the amount of memory the user wants. */\n\tif (unlikely(compute_size_with_overflow(sopts->may_overflow, dopts,\n\t    &size))) {\n\t\tgoto label_oom;\n\t}\n\n\tif (unlikely(dopts->alignment < sopts->min_alignment\n\t    || (dopts->alignment & (dopts->alignment - 1)) != 0)) {\n\t\tgoto label_invalid_alignment;\n\t}\n\n\t/* This is the beginning of the \"core\" algorithm. 
*/\n\tdopts->zero = zero_get(dopts->zero, sopts->slow);\n\tif (aligned_usize_get(size, dopts->alignment, &usize, &ind,\n\t    sopts->bump_empty_aligned_alloc)) {\n\t\tgoto label_oom;\n\t}\n\tdopts->usize = usize;\n\t/* Validate the user input. */\n\tif (sopts->assert_nonempty_alloc) {\n\t\tassert (size != 0);\n\t}\n\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\t/*\n\t * If we need to handle reentrancy, we can do it out of a\n\t * known-initialized arena (i.e. arena 0).\n\t */\n\treentrancy_level = tsd_reentrancy_level_get(tsd);\n\tif (sopts->slow && unlikely(reentrancy_level > 0)) {\n\t\t/*\n\t\t * We should never specify particular arenas or tcaches from\n\t\t * within our internal allocations.\n\t\t */\n\t\tassert(dopts->tcache_ind == TCACHE_IND_AUTOMATIC ||\n\t\t    dopts->tcache_ind == TCACHE_IND_NONE);\n\t\tassert(dopts->arena_ind == ARENA_IND_AUTOMATIC);\n\t\tdopts->tcache_ind = TCACHE_IND_NONE;\n\t\t/* We know that arena 0 has already been initialized. */\n\t\tdopts->arena_ind = 0;\n\t}\n\n\t/*\n\t * If dopts->alignment > 0, then ind is still 0, but usize was computed\n\t * in the previous if statement.  Down the positive alignment path,\n\t * imalloc_no_sample and imalloc_sample will ignore ind.\n\t */\n\n\t/* If profiling is on, get our profiling context. 
*/\n\tif (config_prof && opt_prof) {\n\t\tbool prof_active = prof_active_get_unlocked();\n\t\tbool sample_event = te_prof_sample_event_lookahead(tsd, usize);\n\t\tprof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active,\n\t\t    sample_event);\n\n\t\temap_alloc_ctx_t alloc_ctx;\n\t\tif (likely((uintptr_t)tctx == (uintptr_t)1U)) {\n\t\t\talloc_ctx.slab = (usize <= SC_SMALL_MAXCLASS);\n\t\t\tallocation = imalloc_no_sample(\n\t\t\t    sopts, dopts, tsd, usize, usize, ind);\n\t\t} else if ((uintptr_t)tctx > (uintptr_t)1U) {\n\t\t\tallocation = imalloc_sample(\n\t\t\t    sopts, dopts, tsd, usize, ind);\n\t\t\talloc_ctx.slab = false;\n\t\t} else {\n\t\t\tallocation = NULL;\n\t\t}\n\n\t\tif (unlikely(allocation == NULL)) {\n\t\t\tprof_alloc_rollback(tsd, tctx);\n\t\t\tgoto label_oom;\n\t\t}\n\t\tprof_malloc(tsd, allocation, size, usize, &alloc_ctx, tctx);\n\t} else {\n\t\tassert(!opt_prof);\n\t\tallocation = imalloc_no_sample(sopts, dopts, tsd, size, usize,\n\t\t    ind);\n\t\tif (unlikely(allocation == NULL)) {\n\t\t\tgoto label_oom;\n\t\t}\n\t}\n\n\t/*\n\t * Allocation has been done at this point.  We still have some\n\t * post-allocation work to do though.\n\t */\n\n\tthread_alloc_event(tsd, usize);\n\n\tassert(dopts->alignment == 0\n\t    || ((uintptr_t)allocation & (dopts->alignment - 1)) == ZU(0));\n\n\tassert(usize == isalloc(tsd_tsdn(tsd), allocation));\n\n\tif (config_fill && sopts->slow && !dopts->zero\n\t    && unlikely(opt_junk_alloc)) {\n\t\tjunk_alloc_callback(allocation, usize);\n\t}\n\n\tif (sopts->slow) {\n\t\tUTRACE(0, size, allocation);\n\t}\n\n\t/* Success! 
*/\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\t*dopts->result = allocation;\n\treturn 0;\n\nlabel_oom:\n\tif (unlikely(sopts->slow) && config_xmalloc && unlikely(opt_xmalloc)) {\n\t\tmalloc_write(sopts->oom_string);\n\t\tabort();\n\t}\n\n\tif (sopts->slow) {\n\t\tUTRACE(NULL, size, NULL);\n\t}\n\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tif (sopts->set_errno_on_error) {\n\t\tset_errno(ENOMEM);\n\t}\n\n\tif (sopts->null_out_result_on_error) {\n\t\t*dopts->result = NULL;\n\t}\n\n\treturn ENOMEM;\n\n\t/*\n\t * This label is only jumped to by one goto; we move it out of line\n\t * anyway to avoid obscuring the non-error paths, and for symmetry with\n\t * the oom case.\n\t */\nlabel_invalid_alignment:\n\tif (config_xmalloc && unlikely(opt_xmalloc)) {\n\t\tmalloc_write(sopts->invalid_alignment_string);\n\t\tabort();\n\t}\n\n\tif (sopts->set_errno_on_error) {\n\t\tset_errno(EINVAL);\n\t}\n\n\tif (sopts->slow) {\n\t\tUTRACE(NULL, size, NULL);\n\t}\n\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tif (sopts->null_out_result_on_error) {\n\t\t*dopts->result = NULL;\n\t}\n\n\treturn EINVAL;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nimalloc_init_check(static_opts_t *sopts, dynamic_opts_t *dopts) {\n\tif (unlikely(!malloc_initialized()) && unlikely(malloc_init())) {\n\t\tif (config_xmalloc && unlikely(opt_xmalloc)) {\n\t\t\tmalloc_write(sopts->oom_string);\n\t\t\tabort();\n\t\t}\n\t\tUTRACE(NULL, dopts->num_items * dopts->item_size, NULL);\n\t\tset_errno(ENOMEM);\n\t\t*dopts->result = NULL;\n\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\n/* Returns the errno-style error code of the allocation. */\nJEMALLOC_ALWAYS_INLINE int\nimalloc(static_opts_t *sopts, dynamic_opts_t *dopts) {\n\tif (tsd_get_allocates() && !imalloc_init_check(sopts, dopts)) {\n\t\treturn ENOMEM;\n\t}\n\n\t/* We always need the tsd.  Let's grab it right away. */\n\ttsd_t *tsd = tsd_fetch();\n\tassert(tsd);\n\tif (likely(tsd_fast(tsd))) {\n\t\t/* Fast and common path. 
*/\n\t\ttsd_assert_fast(tsd);\n\t\tsopts->slow = false;\n\t\treturn imalloc_body(sopts, dopts, tsd);\n\t} else {\n\t\tif (!tsd_get_allocates() && !imalloc_init_check(sopts, dopts)) {\n\t\t\treturn ENOMEM;\n\t\t}\n\n\t\tsopts->slow = true;\n\t\treturn imalloc_body(sopts, dopts, tsd);\n\t}\n}\n\nJEMALLOC_NOINLINE\nvoid *\nmalloc_default(size_t size, size_t *usize) {\n\tvoid *ret;\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\t/*\n\t * This variant has a logging hook on exit but not on entry.  It's\n\t * called only by je_malloc, below, which emits the entry one for us\n\t * (and, if it calls us, does so only via tail call).\n\t */\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.null_out_result_on_error = true;\n\tsopts.set_errno_on_error = true;\n\tsopts.oom_string = \"<jemalloc>: Error in malloc(): out of memory\\n\";\n\n\tdopts.result = &ret;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\n\timalloc(&sopts, &dopts);\n\t/*\n\t * Note that this branch gets optimized away -- it immediately follows\n\t * the check on tsd_fast that sets sopts.slow.\n\t */\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {size};\n\t\thook_invoke_alloc(hook_alloc_malloc, ret, (uintptr_t)ret, args);\n\t}\n\n\tLOG(\"core.malloc.exit\", \"result: %p\", ret);\n\n\tif (usize) *usize = dopts.usize;\n\treturn ret;\n}\n\n/******************************************************************************/\n/*\n * Begin malloc(3)-compatible functions.\n */\n\nstatic inline void *je_malloc_internal(size_t size, size_t *usize) {\n\treturn imalloc_fastpath(size, &malloc_default, usize);\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1)\nje_malloc(size_t size) {\n\treturn je_malloc_internal(size, NULL);\n}\n\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\nJEMALLOC_ATTR(nonnull(1))\nje_posix_memalign(void **memptr, size_t alignment, size_t size) {\n\tint ret;\n\tstatic_opts_t 
sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.posix_memalign.entry\", \"mem ptr: %p, alignment: %zu, \"\n\t    \"size: %zu\", memptr, alignment, size);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.bump_empty_aligned_alloc = true;\n\tsopts.min_alignment = sizeof(void *);\n\tsopts.oom_string =\n\t    \"<jemalloc>: Error allocating aligned memory: out of memory\\n\";\n\tsopts.invalid_alignment_string =\n\t    \"<jemalloc>: Error allocating aligned memory: invalid alignment\\n\";\n\n\tdopts.result = memptr;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tdopts.alignment = alignment;\n\n\tret = imalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {(uintptr_t)memptr, (uintptr_t)alignment,\n\t\t\t(uintptr_t)size};\n\t\thook_invoke_alloc(hook_alloc_posix_memalign, *memptr,\n\t\t    (uintptr_t)ret, args);\n\t}\n\n\tLOG(\"core.posix_memalign.exit\", \"result: %d, alloc ptr: %p\", ret,\n\t    *memptr);\n\n\treturn ret;\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(2)\nje_aligned_alloc(size_t alignment, size_t size) {\n\tvoid *ret;\n\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.aligned_alloc.entry\", \"alignment: %zu, size: %zu\\n\",\n\t    alignment, size);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.bump_empty_aligned_alloc = true;\n\tsopts.null_out_result_on_error = true;\n\tsopts.set_errno_on_error = true;\n\tsopts.min_alignment = 1;\n\tsopts.oom_string =\n\t    \"<jemalloc>: Error allocating aligned memory: out of memory\\n\";\n\tsopts.invalid_alignment_string =\n\t    \"<jemalloc>: Error allocating aligned memory: invalid alignment\\n\";\n\n\tdopts.result = &ret;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tdopts.alignment = alignment;\n\n\timalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {(uintptr_t)alignment, 
(uintptr_t)size};\n\t\thook_invoke_alloc(hook_alloc_aligned_alloc, ret,\n\t\t    (uintptr_t)ret, args);\n\t}\n\n\tLOG(\"core.aligned_alloc.exit\", \"result: %p\", ret);\n\n\treturn ret;\n}\n\nstatic void *je_calloc_internal(size_t num, size_t size, size_t *usize) {\n\tvoid *ret;\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.calloc.entry\", \"num: %zu, size: %zu\\n\", num, size);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.may_overflow = true;\n\tsopts.null_out_result_on_error = true;\n\tsopts.set_errno_on_error = true;\n\tsopts.oom_string = \"<jemalloc>: Error in calloc(): out of memory\\n\";\n\n\tdopts.result = &ret;\n\tdopts.num_items = num;\n\tdopts.item_size = size;\n\tdopts.zero = true;\n\n\timalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {(uintptr_t)num, (uintptr_t)size};\n\t\thook_invoke_alloc(hook_alloc_calloc, ret, (uintptr_t)ret, args);\n\t}\n\n\tLOG(\"core.calloc.exit\", \"result: %p\", ret);\n\n\tif (usize) *usize = dopts.usize;\n\treturn ret;\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2)\nje_calloc(size_t num, size_t size) {\n\treturn je_calloc_internal(num, size, NULL);\n}\n\nJEMALLOC_ALWAYS_INLINE void\nifree(tsd_t *tsd, void *ptr, tcache_t *tcache, bool slow_path, size_t *usable) {\n\tif (!slow_path) {\n\t\ttsd_assert_fast(tsd);\n\t}\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tif (tsd_reentrancy_level_get(tsd) != 0) {\n\t\tassert(slow_path);\n\t}\n\n\tassert(ptr != NULL);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,\n\t    &alloc_ctx);\n\tassert(alloc_ctx.szind != SC_NSIZES);\n\n\tsize_t usize = sz_index2size(alloc_ctx.szind);\n\tif (config_prof && opt_prof) {\n\t\tprof_free(tsd, ptr, usize, &alloc_ctx);\n\t}\n\n\tif (likely(!slow_path)) {\n\t\tidalloctm(tsd_tsdn(tsd), ptr, tcache, 
&alloc_ctx, false,\n\t\t    false);\n\t} else {\n\t\tif (config_fill && slow_path && opt_junk_free) {\n\t\t\tjunk_free_callback(ptr, usize);\n\t\t}\n\t\tidalloctm(tsd_tsdn(tsd), ptr, tcache, &alloc_ctx, false,\n\t\t    true);\n\t}\n\tthread_dalloc_event(tsd, usize);\n\tif (usable) *usable = usize;\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nmaybe_check_alloc_ctx(tsd_t *tsd, void *ptr, emap_alloc_ctx_t *alloc_ctx) {\n\tif (config_opt_size_checks) {\n\t\temap_alloc_ctx_t dbg_ctx;\n\t\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,\n\t\t    &dbg_ctx);\n\t\tif (alloc_ctx->szind != dbg_ctx.szind) {\n\t\t\tsafety_check_fail_sized_dealloc(\n\t\t\t    /* current_dealloc */ true, ptr,\n\t\t\t    /* true_size */ sz_index2size(dbg_ctx.szind),\n\t\t\t    /* input_size */ sz_index2size(alloc_ctx->szind));\n\t\t\treturn true;\n\t\t}\n\t\tif (alloc_ctx->slab != dbg_ctx.slab) {\n\t\t\tsafety_check_fail(\n\t\t\t    \"Internal heap corruption detected: \"\n\t\t\t    \"mismatch in slab bit\");\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\nJEMALLOC_ALWAYS_INLINE void\nisfree(tsd_t *tsd, void *ptr, size_t usize, tcache_t *tcache, bool slow_path) {\n\tif (!slow_path) {\n\t\ttsd_assert_fast(tsd);\n\t}\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tif (tsd_reentrancy_level_get(tsd) != 0) {\n\t\tassert(slow_path);\n\t}\n\n\tassert(ptr != NULL);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\n\temap_alloc_ctx_t alloc_ctx;\n\tif (!config_prof) {\n\t\talloc_ctx.szind = sz_size2index(usize);\n\t\talloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);\n\t} else {\n\t\tif (likely(!prof_sample_aligned(ptr))) {\n\t\t\t/*\n\t\t\t * When the ptr is not page aligned, it was not sampled.\n\t\t\t * usize can be trusted to determine szind and slab.\n\t\t\t */\n\t\t\talloc_ctx.szind = sz_size2index(usize);\n\t\t\talloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);\n\t\t} else if (opt_prof) {\n\t\t\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global,\n\t\t\t    ptr, &alloc_ctx);\n\n\t\t\tif 
(config_opt_safety_checks) {\n\t\t\t\t/* Small alloc may have !slab (sampled). */\n\t\t\t\tif (unlikely(alloc_ctx.szind !=\n\t\t\t\t    sz_size2index(usize))) {\n\t\t\t\t\tsafety_check_fail_sized_dealloc(\n\t\t\t\t\t    /* current_dealloc */ true, ptr,\n\t\t\t\t\t    /* true_size */ sz_index2size(\n\t\t\t\t\t    alloc_ctx.szind),\n\t\t\t\t\t    /* input_size */ usize);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\talloc_ctx.szind = sz_size2index(usize);\n\t\t\talloc_ctx.slab = (alloc_ctx.szind < SC_NBINS);\n\t\t}\n\t}\n\tbool fail = maybe_check_alloc_ctx(tsd, ptr, &alloc_ctx);\n\tif (fail) {\n\t\t/*\n\t\t * This is a heap corruption bug.  In real life we'll crash; for\n\t\t * the unit test we just want to avoid breaking anything too\n\t\t * badly to get a test result out.  Let's leak instead of trying\n\t\t * to free.\n\t\t */\n\t\treturn;\n\t}\n\n\tif (config_prof && opt_prof) {\n\t\tprof_free(tsd, ptr, usize, &alloc_ctx);\n\t}\n\tif (likely(!slow_path)) {\n\t\tisdalloct(tsd_tsdn(tsd), ptr, usize, tcache, &alloc_ctx,\n\t\t    false);\n\t} else {\n\t\tif (config_fill && slow_path && opt_junk_free) {\n\t\t\tjunk_free_callback(ptr, usize);\n\t\t}\n\t\tisdalloct(tsd_tsdn(tsd), ptr, usize, tcache, &alloc_ctx,\n\t\t    true);\n\t}\n\tthread_dalloc_event(tsd, usize);\n}\n\nJEMALLOC_NOINLINE\nvoid\nfree_default(void *ptr, size_t *usize) {\n\tUTRACE(ptr, 0, 0);\n\tif (likely(ptr != NULL)) {\n\t\t/*\n\t\t * We avoid setting up tsd fully (e.g. tcache, arena binding)\n\t\t * based on only free() calls -- other activities trigger the\n\t\t * minimal to full transition.  
This is because free() may\n\t\t * happen during thread shutdown after tls deallocation: if a\n\t\t * thread never had any malloc activities until then, a\n\t\t * fully-setup tsd won't be destructed properly.\n\t\t */\n\t\ttsd_t *tsd = tsd_fetch_min();\n\t\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\t\tif (likely(tsd_fast(tsd))) {\n\t\t\ttcache_t *tcache = tcache_get_from_ind(tsd,\n\t\t\t    TCACHE_IND_AUTOMATIC, /* slow */ false,\n\t\t\t    /* is_alloc */ false);\n\t\t\tifree(tsd, ptr, tcache, /* slow */ false, usize);\n\t\t} else {\n\t\t\ttcache_t *tcache = tcache_get_from_ind(tsd,\n\t\t\t    TCACHE_IND_AUTOMATIC, /* slow */ true,\n\t\t\t    /* is_alloc */ false);\n\t\t\tuintptr_t args_raw[3] = {(uintptr_t)ptr};\n\t\t\thook_invoke_dalloc(hook_dalloc_free, ptr, args_raw);\n\t\t\tifree(tsd, ptr, tcache, /* slow */ true, usize);\n\t\t}\n\n\t\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE bool\nfree_fastpath_nonfast_aligned(void *ptr, bool check_prof) {\n\t/*\n\t * free_fastpath does not handle two uncommon cases: 1) sampled profiled\n\t * objects and 2) sampled junk & stash for use-after-free detection.\n\t * Both have special alignments which are used to escape the fastpath.\n\t *\n\t * prof_sample is page-aligned, which covers the UAF check when both\n\t * are enabled (the assertion below).  We avoid redundant checks since\n\t * this is on the fastpath -- at most one runtime branch from this.\n\t */\n\tif (config_debug && cache_bin_nonfast_aligned(ptr)) {\n\t\tassert(prof_sample_aligned(ptr));\n\t}\n\n\tif (config_prof && check_prof) {\n\t\t/* When prof is enabled, the prof_sample alignment is enough. */\n\t\tif (prof_sample_aligned(ptr)) {\n\t\t\treturn true;\n\t\t} else {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tif (config_uaf_detection) {\n\t\tif (cache_bin_nonfast_aligned(ptr)) {\n\t\t\treturn true;\n\t\t} else {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn false;\n}\n\n/* Returns whether or not the free attempt was successful. 
*/\nJEMALLOC_ALWAYS_INLINE\nbool free_fastpath(void *ptr, size_t size, bool size_hint, size_t *usable_size) {\n\ttsd_t *tsd = tsd_get(false);\n\t/* The branch gets optimized away unless tsd_get_allocates(). */\n\tif (unlikely(tsd == NULL)) {\n\t\treturn false;\n\t}\n\t/*\n\t *  The tsd_fast() / initialized checks are folded into the branch\n\t *  testing (deallocated_after >= threshold) later in this function.\n\t *  The threshold will be set to 0 when !tsd_fast.\n\t */\n\tassert(tsd_fast(tsd) ||\n\t    *tsd_thread_deallocated_next_event_fastp_get_unsafe(tsd) == 0);\n\n\temap_alloc_ctx_t alloc_ctx;\n\tif (!size_hint) {\n\t\tbool err = emap_alloc_ctx_try_lookup_fast(tsd,\n\t\t    &arena_emap_global, ptr, &alloc_ctx);\n\n\t\t/* Note: profiled objects will have alloc_ctx.slab set */\n\t\tif (unlikely(err || !alloc_ctx.slab ||\n\t\t    free_fastpath_nonfast_aligned(ptr,\n\t\t    /* check_prof */ false))) {\n\t\t\treturn false;\n\t\t}\n\t\tassert(alloc_ctx.szind != SC_NSIZES);\n\t} else {\n\t\t/*\n\t\t * Check for both sizes that are too large, and for sampled /\n\t\t * special aligned objects.  The alignment check will also check\n\t\t * for null ptr.\n\t\t */\n\t\tif (unlikely(size > SC_LOOKUP_MAXCLASS ||\n\t\t    free_fastpath_nonfast_aligned(ptr,\n\t\t    /* check_prof */ true))) {\n\t\t\treturn false;\n\t\t}\n\t\talloc_ctx.szind = sz_size2index_lookup(size);\n\t\t/* Max lookup class must be small. */\n\t\tassert(alloc_ctx.szind < SC_NBINS);\n\t\t/* This is a dead store, except when opt size checking is on. */\n\t\talloc_ctx.slab = true;\n\t}\n\t/*\n\t * Currently the fastpath only handles small sizes.  The branch on\n\t * SC_LOOKUP_MAXCLASS makes sure of it.  This lets us avoid checking\n\t * tcache szind upper limit (i.e. 
tcache_maxclass) as well.\n\t */\n\tassert(alloc_ctx.slab);\n\n\tuint64_t deallocated, threshold;\n\tte_free_fastpath_ctx(tsd, &deallocated, &threshold);\n\n\tsize_t usize = sz_index2size(alloc_ctx.szind);\n\tuint64_t deallocated_after = deallocated + usize;\n\t/*\n\t * Check for events and tsd non-nominal (fast_threshold will be set to\n\t * 0) in a single branch.  Note that this handles the uninitialized case\n\t * as well (TSD init will be triggered on the non-fastpath).  Therefore\n\t * anything that depends on a functional TSD (e.g. the alloc_ctx sanity\n\t * check below) needs to be after this branch.\n\t */\n\tif (unlikely(deallocated_after >= threshold)) {\n\t\treturn false;\n\t}\n\tassert(tsd_fast(tsd));\n\tbool fail = maybe_check_alloc_ctx(tsd, ptr, &alloc_ctx);\n\tif (fail) {\n\t\t/* See the comment in isfree. */\n\t\tif (usable_size) *usable_size = usize;\n\t\treturn true;\n\t}\n\n\ttcache_t *tcache = tcache_get_from_ind(tsd, TCACHE_IND_AUTOMATIC,\n\t    /* slow */ false, /* is_alloc */ false);\n\tcache_bin_t *bin = &tcache->bins[alloc_ctx.szind];\n\n\t/*\n\t * If junking were enabled, this is where we would do it.  It's not\n\t * though, since we ensured above that we're on the fast path.  
Assert\n\t * that to double-check.\n\t */\n\tassert(!opt_junk_free);\n\n\tif (!cache_bin_dalloc_easy(bin, ptr)) {\n\t\treturn false;\n\t}\n\n\t*tsd_thread_deallocatedp_get(tsd) = deallocated_after;\n\n\tif (usable_size) *usable_size = usize;\n\treturn true;\n}\n\nstatic inline void je_free_internal(void *ptr, size_t *usize) {\n\tLOG(\"core.free.entry\", \"ptr: %p\", ptr);\n\n\tif (!free_fastpath(ptr, 0, false, usize)) {\n\t\tfree_default(ptr, usize);\n\t}\n\n\tLOG(\"core.free.exit\", \"\");\n}\n\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\nje_free(void *ptr) {\n\tje_free_internal(ptr, NULL);\n}\n\n/*\n * End malloc(3)-compatible functions.\n */\n/******************************************************************************/\n/*\n * Begin non-standard override functions.\n */\n\n#ifdef JEMALLOC_OVERRIDE_MEMALIGN\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc)\nje_memalign(size_t alignment, size_t size) {\n\tvoid *ret;\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.memalign.entry\", \"alignment: %zu, size: %zu\\n\", alignment,\n\t    size);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.min_alignment = 1;\n\tsopts.oom_string =\n\t    \"<jemalloc>: Error allocating aligned memory: out of memory\\n\";\n\tsopts.invalid_alignment_string =\n\t    \"<jemalloc>: Error allocating aligned memory: invalid alignment\\n\";\n\tsopts.null_out_result_on_error = true;\n\n\tdopts.result = &ret;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tdopts.alignment = alignment;\n\n\timalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {alignment, size};\n\t\thook_invoke_alloc(hook_alloc_memalign, ret, (uintptr_t)ret,\n\t\t    args);\n\t}\n\n\tLOG(\"core.memalign.exit\", \"result: %p\", ret);\n\treturn ret;\n}\n#endif\n\n#ifdef JEMALLOC_OVERRIDE_VALLOC\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW 
*\nJEMALLOC_ATTR(malloc)\nje_valloc(size_t size) {\n\tvoid *ret;\n\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.valloc.entry\", \"size: %zu\\n\", size);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.null_out_result_on_error = true;\n\tsopts.min_alignment = PAGE;\n\tsopts.oom_string =\n\t    \"<jemalloc>: Error allocating aligned memory: out of memory\\n\";\n\tsopts.invalid_alignment_string =\n\t    \"<jemalloc>: Error allocating aligned memory: invalid alignment\\n\";\n\n\tdopts.result = &ret;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tdopts.alignment = PAGE;\n\n\timalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {size};\n\t\thook_invoke_alloc(hook_alloc_valloc, ret, (uintptr_t)ret, args);\n\t}\n\n\tLOG(\"core.valloc.exit\", \"result: %p\\n\", ret);\n\treturn ret;\n}\n#endif\n\n#if defined(JEMALLOC_IS_MALLOC) && defined(JEMALLOC_GLIBC_MALLOC_HOOK)\n/*\n * glibc provides the RTLD_DEEPBIND flag for dlopen which can make it possible\n * to inconsistently reference libc's malloc(3)-compatible functions\n * (https://bugzilla.mozilla.org/show_bug.cgi?id=493541).\n *\n * These definitions interpose hooks in glibc.  
The functions are actually\n * passed an extra argument for the caller return address, which will be\n * ignored.\n */\n#include <features.h> // defines __GLIBC__ if we are compiling against glibc\n\nJEMALLOC_EXPORT void (*__free_hook)(void *ptr) = je_free;\nJEMALLOC_EXPORT void *(*__malloc_hook)(size_t size) = je_malloc;\nJEMALLOC_EXPORT void *(*__realloc_hook)(void *ptr, size_t size) = je_realloc;\n#  ifdef JEMALLOC_GLIBC_MEMALIGN_HOOK\nJEMALLOC_EXPORT void *(*__memalign_hook)(size_t alignment, size_t size) =\n    je_memalign;\n#  endif\n\n#  ifdef __GLIBC__\n/*\n * To enable static linking with glibc, the libc specific malloc interface must\n * be implemented also, so none of glibc's malloc.o functions are added to the\n * link.\n */\n#    define ALIAS(je_fn)\t__attribute__((alias (#je_fn), used))\n/* To force macro expansion of je_ prefix before stringification. */\n#    define PREALIAS(je_fn)\tALIAS(je_fn)\n#    ifdef JEMALLOC_OVERRIDE___LIBC_CALLOC\nvoid *__libc_calloc(size_t n, size_t size) PREALIAS(je_calloc);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___LIBC_FREE\nvoid __libc_free(void* ptr) PREALIAS(je_free);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___LIBC_MALLOC\nvoid *__libc_malloc(size_t size) PREALIAS(je_malloc);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___LIBC_MEMALIGN\nvoid *__libc_memalign(size_t align, size_t s) PREALIAS(je_memalign);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___LIBC_REALLOC\nvoid *__libc_realloc(void* ptr, size_t size) PREALIAS(je_realloc);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___LIBC_VALLOC\nvoid *__libc_valloc(size_t size) PREALIAS(je_valloc);\n#    endif\n#    ifdef JEMALLOC_OVERRIDE___POSIX_MEMALIGN\nint __posix_memalign(void** r, size_t a, size_t s) PREALIAS(je_posix_memalign);\n#    endif\n#    undef PREALIAS\n#    undef ALIAS\n#  endif\n#endif\n\n/*\n * End non-standard override functions.\n */\n/******************************************************************************/\n/*\n * Begin non-standard functions.\n 
*/\n\nJEMALLOC_ALWAYS_INLINE unsigned\nmallocx_tcache_get(int flags) {\n\tif (likely((flags & MALLOCX_TCACHE_MASK) == 0)) {\n\t\treturn TCACHE_IND_AUTOMATIC;\n\t} else if ((flags & MALLOCX_TCACHE_MASK) == MALLOCX_TCACHE_NONE) {\n\t\treturn TCACHE_IND_NONE;\n\t} else {\n\t\treturn MALLOCX_TCACHE_GET(flags);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE unsigned\nmallocx_arena_get(int flags) {\n\tif (unlikely((flags & MALLOCX_ARENA_MASK) != 0)) {\n\t\treturn MALLOCX_ARENA_GET(flags);\n\t} else {\n\t\treturn ARENA_IND_AUTOMATIC;\n\t}\n}\n\n#ifdef JEMALLOC_EXPERIMENTAL_SMALLOCX_API\n\n#define JEMALLOC_SMALLOCX_CONCAT_HELPER(x, y) x ## y\n#define JEMALLOC_SMALLOCX_CONCAT_HELPER2(x, y)  \\\n  JEMALLOC_SMALLOCX_CONCAT_HELPER(x, y)\n\ntypedef struct {\n\tvoid *ptr;\n\tsize_t size;\n} smallocx_return_t;\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nsmallocx_return_t JEMALLOC_NOTHROW\n/*\n * The attribute JEMALLOC_ATTR(malloc) cannot be used due to:\n *  - https://gcc.gnu.org/bugzilla/show_bug.cgi?id=86488\n */\nJEMALLOC_SMALLOCX_CONCAT_HELPER2(je_smallocx_, JEMALLOC_VERSION_GID_IDENT)\n  (size_t size, int flags) {\n\t/*\n\t * Note: the attribute JEMALLOC_ALLOC_SIZE(1) cannot be\n\t * used here because it makes writing beyond the `size`\n\t * of the `ptr` undefined behavior, but the objective\n\t * of this function is to allow writing beyond `size`\n\t * up to `smallocx_return_t::size`.\n\t */\n\tsmallocx_return_t ret;\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.smallocx.entry\", \"size: %zu, flags: %d\", size, flags);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.assert_nonempty_alloc = true;\n\tsopts.null_out_result_on_error = true;\n\tsopts.oom_string = \"<jemalloc>: Error in mallocx(): out of memory\\n\";\n\tsopts.usize = true;\n\n\tdopts.result = &ret.ptr;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tif (unlikely(flags != 0)) {\n\t\tdopts.alignment = MALLOCX_ALIGN_GET(flags);\n\t\tdopts.zero = 
MALLOCX_ZERO_GET(flags);\n\t\tdopts.tcache_ind = mallocx_tcache_get(flags);\n\t\tdopts.arena_ind = mallocx_arena_get(flags);\n\t}\n\n\timalloc(&sopts, &dopts);\n\tassert(dopts.usize == je_nallocx(size, flags));\n\tret.size = dopts.usize;\n\n\tLOG(\"core.smallocx.exit\", \"result: %p, size: %zu\", ret.ptr, ret.size);\n\treturn ret;\n}\n#undef JEMALLOC_SMALLOCX_CONCAT_HELPER\n#undef JEMALLOC_SMALLOCX_CONCAT_HELPER2\n#endif\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1)\nje_mallocx(size_t size, int flags) {\n\tvoid *ret;\n\tstatic_opts_t sopts;\n\tdynamic_opts_t dopts;\n\n\tLOG(\"core.mallocx.entry\", \"size: %zu, flags: %d\", size, flags);\n\n\tstatic_opts_init(&sopts);\n\tdynamic_opts_init(&dopts);\n\n\tsopts.assert_nonempty_alloc = true;\n\tsopts.null_out_result_on_error = true;\n\tsopts.oom_string = \"<jemalloc>: Error in mallocx(): out of memory\\n\";\n\n\tdopts.result = &ret;\n\tdopts.num_items = 1;\n\tdopts.item_size = size;\n\tif (unlikely(flags != 0)) {\n\t\tdopts.alignment = MALLOCX_ALIGN_GET(flags);\n\t\tdopts.zero = MALLOCX_ZERO_GET(flags);\n\t\tdopts.tcache_ind = mallocx_tcache_get(flags);\n\t\tdopts.arena_ind = mallocx_arena_get(flags);\n\t}\n\n\timalloc(&sopts, &dopts);\n\tif (sopts.slow) {\n\t\tuintptr_t args[3] = {size, flags};\n\t\thook_invoke_alloc(hook_alloc_mallocx, ret, (uintptr_t)ret,\n\t\t    args);\n\t}\n\n\tLOG(\"core.mallocx.exit\", \"result: %p\", ret);\n\treturn ret;\n}\n\nstatic void *\nirallocx_prof_sample(tsdn_t *tsdn, void *old_ptr, size_t old_usize,\n    size_t usize, size_t alignment, bool zero, tcache_t *tcache, arena_t *arena,\n    prof_tctx_t *tctx, hook_ralloc_args_t *hook_args) {\n\tvoid *p;\n\n\tif (tctx == NULL) {\n\t\treturn NULL;\n\t}\n\n\talignment = prof_sample_align(alignment);\n\tif (usize <= SC_SMALL_MAXCLASS) {\n\t\tp = iralloct(tsdn, old_ptr, old_usize,\n\t\t    SC_LARGE_MINCLASS, alignment, zero, tcache,\n\t\t    arena, 
hook_args);\n\t\tif (p == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t\tarena_prof_promote(tsdn, p, usize);\n\t} else {\n\t\tp = iralloct(tsdn, old_ptr, old_usize, usize, alignment, zero,\n\t\t    tcache, arena, hook_args);\n\t}\n\tassert(prof_sample_aligned(p));\n\n\treturn p;\n}\n\nJEMALLOC_ALWAYS_INLINE void *\nirallocx_prof(tsd_t *tsd, void *old_ptr, size_t old_usize, size_t size,\n    size_t alignment, size_t usize, bool zero, tcache_t *tcache,\n    arena_t *arena, emap_alloc_ctx_t *alloc_ctx,\n    hook_ralloc_args_t *hook_args) {\n\tprof_info_t old_prof_info;\n\tprof_info_get_and_reset_recent(tsd, old_ptr, alloc_ctx, &old_prof_info);\n\tbool prof_active = prof_active_get_unlocked();\n\tbool sample_event = te_prof_sample_event_lookahead(tsd, usize);\n\tprof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active, sample_event);\n\tvoid *p;\n\tif (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {\n\t\tp = irallocx_prof_sample(tsd_tsdn(tsd), old_ptr, old_usize,\n\t\t    usize, alignment, zero, tcache, arena, tctx, hook_args);\n\t} else {\n\t\tp = iralloct(tsd_tsdn(tsd), old_ptr, old_usize, size, alignment,\n\t\t    zero, tcache, arena, hook_args);\n\t}\n\tif (unlikely(p == NULL)) {\n\t\tprof_alloc_rollback(tsd, tctx);\n\t\treturn NULL;\n\t}\n\tassert(usize == isalloc(tsd_tsdn(tsd), p));\n\tprof_realloc(tsd, p, size, usize, tctx, prof_active, old_ptr,\n\t    old_usize, &old_prof_info, sample_event);\n\n\treturn p;\n}\n\nstatic void *\ndo_rallocx(void *ptr, size_t size, int flags, bool is_realloc, size_t *old_usable_size, size_t *new_usable_size) {\n\tvoid *p;\n\ttsd_t *tsd;\n\tsize_t usize;\n\tsize_t old_usize;\n\tsize_t alignment = MALLOCX_ALIGN_GET(flags);\n\tarena_t *arena;\n\n\tassert(ptr != NULL);\n\tassert(size != 0);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\ttsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tbool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);\n\n\tunsigned arena_ind = mallocx_arena_get(flags);\n\tif 
(arena_get_from_ind(tsd, arena_ind, &arena)) {\n\t\tgoto label_oom;\n\t}\n\n\tunsigned tcache_ind = mallocx_tcache_get(flags);\n\ttcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind,\n\t    /* slow */ true, /* is_alloc */ true);\n\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,\n\t    &alloc_ctx);\n\tassert(alloc_ctx.szind != SC_NSIZES);\n\told_usize = sz_index2size(alloc_ctx.szind);\n\tassert(old_usize == isalloc(tsd_tsdn(tsd), ptr));\n\tif (aligned_usize_get(size, alignment, &usize, NULL, false)) {\n\t\tgoto label_oom;\n\t}\n\n\thook_ralloc_args_t hook_args = {is_realloc, {(uintptr_t)ptr, size,\n\t\tflags, 0}};\n\tif (config_prof && opt_prof) {\n\t\tp = irallocx_prof(tsd, ptr, old_usize, size, alignment, usize,\n\t\t    zero, tcache, arena, &alloc_ctx, &hook_args);\n\t\tif (unlikely(p == NULL)) {\n\t\t\tgoto label_oom;\n\t\t}\n\t} else {\n\t\tp = iralloct(tsd_tsdn(tsd), ptr, old_usize, size, alignment,\n\t\t    zero, tcache, arena, &hook_args);\n\t\tif (unlikely(p == NULL)) {\n\t\t\tgoto label_oom;\n\t\t}\n\t\tassert(usize == isalloc(tsd_tsdn(tsd), p));\n\t}\n\tassert(alignment == 0 || ((uintptr_t)p & (alignment - 1)) == ZU(0));\n\tthread_alloc_event(tsd, usize);\n\tthread_dalloc_event(tsd, old_usize);\n\n\tUTRACE(ptr, size, p);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tif (config_fill && unlikely(opt_junk_alloc) && usize > old_usize\n\t    && !zero) {\n\t\tsize_t excess_len = usize - old_usize;\n\t\tvoid *excess_start = (void *)((uintptr_t)p + old_usize);\n\t\tjunk_alloc_callback(excess_start, excess_len);\n\t}\n\n\tif (old_usable_size) *old_usable_size = old_usize;\n\tif (new_usable_size) *new_usable_size = usize;\n\treturn p;\nlabel_oom:\n\tif (config_xmalloc && unlikely(opt_xmalloc)) {\n\t\tmalloc_write(\"<jemalloc>: Error in rallocx(): out of memory\\n\");\n\t\tabort();\n\t}\n\tUTRACE(ptr, size, 0);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\treturn NULL;\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR 
JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ALLOC_SIZE(2)\nje_rallocx(void *ptr, size_t size, int flags) {\n\tLOG(\"core.rallocx.entry\", \"ptr: %p, size: %zu, flags: %d\", ptr,\n\t    size, flags);\n\tvoid *ret = do_rallocx(ptr, size, flags, false, NULL, NULL);\n\tLOG(\"core.rallocx.exit\", \"result: %p\", ret);\n\treturn ret;\n}\n\nstatic void *\ndo_realloc_nonnull_zero(void *ptr, size_t *old_usize, size_t *new_usize) {\n\tif (config_stats) {\n\t\tatomic_fetch_add_zu(&zero_realloc_count, 1, ATOMIC_RELAXED);\n\t}\n\tif (opt_zero_realloc_action == zero_realloc_action_alloc) {\n\t\t/*\n\t\t * The user might have gotten an alloc setting while expecting a\n\t\t * free setting.  If that's the case, we at least try to\n\t\t * reduce the harm, and turn off the tcache while allocating, so\n\t\t * that we'll get a true first fit.\n\t\t */\n\t\treturn do_rallocx(ptr, 1, MALLOCX_TCACHE_NONE, true, old_usize, new_usize);\n\t} else if (opt_zero_realloc_action == zero_realloc_action_free) {\n\t\tUTRACE(ptr, 0, 0);\n\t\ttsd_t *tsd = tsd_fetch();\n\t\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\t\ttcache_t *tcache = tcache_get_from_ind(tsd,\n\t\t    TCACHE_IND_AUTOMATIC, /* slow */ true,\n\t\t    /* is_alloc */ false);\n\t\tuintptr_t args[3] = {(uintptr_t)ptr, 0};\n\t\thook_invoke_dalloc(hook_dalloc_realloc, ptr, args);\n\t\tsize_t usize;\n\t\tifree(tsd, ptr, tcache, true, &usize);\n\t\tif (old_usize) *old_usize = usize;\n\t\tif (new_usize) *new_usize = 0;\n\n\t\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\t\treturn NULL;\n\t} else {\n\t\tsafety_check_fail(\"Called realloc(non-null-ptr, 0) with \"\n\t\t    \"zero_realloc:abort set\\n\");\n\t\t/* In real code, this will never run; the safety check failure\n\t\t * will call abort.  
In the unit test, we just want to bail out\n\t\t * without corrupting internal state that the test needs to\n\t\t * finish.\n\t\t */\n\t\treturn NULL;\n\t}\n}\n\nstatic inline void *je_realloc_internal(void *ptr, size_t size, size_t *old_usize, size_t *new_usize) {\n\tLOG(\"core.realloc.entry\", \"ptr: %p, size: %zu\\n\", ptr, size);\n\n\tif (likely(ptr != NULL && size != 0)) {\n\t\tvoid *ret = do_rallocx(ptr, size, 0, true, old_usize, new_usize);\n\t\tLOG(\"core.realloc.exit\", \"result: %p\", ret);\n\t\treturn ret;\n\t} else if (ptr != NULL && size == 0) {\n\t\tvoid *ret = do_realloc_nonnull_zero(ptr, old_usize, new_usize);\n\t\tLOG(\"core.realloc.exit\", \"result: %p\", ret);\n\t\treturn ret;\n\t} else {\n\t\t/* realloc(NULL, size) is equivalent to malloc(size). */\n\t\tvoid *ret;\n\n\t\tstatic_opts_t sopts;\n\t\tdynamic_opts_t dopts;\n\n\t\tstatic_opts_init(&sopts);\n\t\tdynamic_opts_init(&dopts);\n\n\t\tsopts.null_out_result_on_error = true;\n\t\tsopts.set_errno_on_error = true;\n\t\tsopts.oom_string =\n\t\t    \"<jemalloc>: Error in realloc(): out of memory\\n\";\n\n\t\tdopts.result = &ret;\n\t\tdopts.num_items = 1;\n\t\tdopts.item_size = size;\n\n\t\timalloc(&sopts, &dopts);\n\t\tif (sopts.slow) {\n\t\t\tuintptr_t args[3] = {(uintptr_t)ptr, size};\n\t\t\thook_invoke_alloc(hook_alloc_realloc, ret,\n\t\t\t    (uintptr_t)ret, args);\n\t\t}\n\t\tLOG(\"core.realloc.exit\", \"result: %p\", ret);\n\t\tif (old_usize) *old_usize = 0;\n\t\tif (new_usize) *new_usize = dopts.usize;\n\t\treturn ret;\n\t}\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ALLOC_SIZE(2)\nje_realloc(void *ptr, size_t size) {\n\treturn je_realloc_internal(ptr, size, NULL, NULL);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nixallocx_helper(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size,\n    size_t extra, size_t alignment, bool zero) {\n\tsize_t newsize;\n\n\tif (ixalloc(tsdn, ptr, old_usize, size, extra, alignment, zero,\n\t    &newsize)) 
{\n\t\treturn old_usize;\n\t}\n\n\treturn newsize;\n}\n\nstatic size_t\nixallocx_prof_sample(tsdn_t *tsdn, void *ptr, size_t old_usize, size_t size,\n    size_t extra, size_t alignment, bool zero, prof_tctx_t *tctx) {\n\t/* Sampled allocation needs to be page aligned. */\n\tif (tctx == NULL || !prof_sample_aligned(ptr)) {\n\t\treturn old_usize;\n\t}\n\n\treturn ixallocx_helper(tsdn, ptr, old_usize, size, extra, alignment,\n\t    zero);\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\nixallocx_prof(tsd_t *tsd, void *ptr, size_t old_usize, size_t size,\n    size_t extra, size_t alignment, bool zero, emap_alloc_ctx_t *alloc_ctx) {\n\t/*\n\t * old_prof_info is only used for asserting that the profiling info\n\t * isn't changed by the ixalloc() call.\n\t */\n\tprof_info_t old_prof_info;\n\tprof_info_get(tsd, ptr, alloc_ctx, &old_prof_info);\n\n\t/*\n\t * usize isn't knowable before ixalloc() returns when extra is non-zero.\n\t * Therefore, compute its maximum possible value and use that in\n\t * prof_alloc_prep() to decide whether to capture a backtrace.\n\t * prof_realloc() will use the actual usize to decide whether to sample.\n\t */\n\tsize_t usize_max;\n\tif (aligned_usize_get(size + extra, alignment, &usize_max, NULL,\n\t    false)) {\n\t\t/*\n\t\t * usize_max is out of range, and chances are that allocation\n\t\t * will fail, but use the maximum possible value and carry on\n\t\t * with prof_alloc_prep(), just in case allocation succeeds.\n\t\t */\n\t\tusize_max = SC_LARGE_MAXCLASS;\n\t}\n\tbool prof_active = prof_active_get_unlocked();\n\tbool sample_event = te_prof_sample_event_lookahead(tsd, usize_max);\n\tprof_tctx_t *tctx = prof_alloc_prep(tsd, prof_active, sample_event);\n\n\tsize_t usize;\n\tif (unlikely((uintptr_t)tctx != (uintptr_t)1U)) {\n\t\tusize = ixallocx_prof_sample(tsd_tsdn(tsd), ptr, old_usize,\n\t\t    size, extra, alignment, zero, tctx);\n\t} else {\n\t\tusize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size,\n\t\t    extra, alignment, 
zero);\n\t}\n\n\t/*\n\t * At this point we can still safely get the original profiling\n\t * information associated with the ptr, because (a) the edata_t object\n\t * associated with the ptr still lives and (b) the profiling info\n\t * fields are not touched.  \"(a)\" is asserted in the outer je_xallocx()\n\t * function, and \"(b)\" is indirectly verified below by checking that\n\t * the alloc_tctx field is unchanged.\n\t */\n\tprof_info_t prof_info;\n\tif (usize == old_usize) {\n\t\tprof_info_get(tsd, ptr, alloc_ctx, &prof_info);\n\t\tprof_alloc_rollback(tsd, tctx);\n\t} else {\n\t\tprof_info_get_and_reset_recent(tsd, ptr, alloc_ctx, &prof_info);\n\t\tassert(usize <= usize_max);\n\t\tsample_event = te_prof_sample_event_lookahead(tsd, usize);\n\t\tprof_realloc(tsd, ptr, size, usize, tctx, prof_active, ptr,\n\t\t    old_usize, &prof_info, sample_event);\n\t}\n\n\tassert(old_prof_info.alloc_tctx == prof_info.alloc_tctx);\n\treturn usize;\n}\n\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\nje_xallocx(void *ptr, size_t size, size_t extra, int flags) {\n\ttsd_t *tsd;\n\tsize_t usize, old_usize;\n\tsize_t alignment = MALLOCX_ALIGN_GET(flags);\n\tbool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);\n\n\tLOG(\"core.xallocx.entry\", \"ptr: %p, size: %zu, extra: %zu, \"\n\t    \"flags: %d\", ptr, size, extra, flags);\n\n\tassert(ptr != NULL);\n\tassert(size != 0);\n\tassert(SIZE_T_MAX - size >= extra);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\ttsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\t/*\n\t * old_edata is only for verifying that xallocx() keeps the edata_t\n\t * object associated with the ptr (though the content of the edata_t\n\t * object can be changed).\n\t */\n\tedata_t *old_edata = emap_edata_lookup(tsd_tsdn(tsd),\n\t    &arena_emap_global, ptr);\n\n\temap_alloc_ctx_t alloc_ctx;\n\temap_alloc_ctx_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr,\n\t    &alloc_ctx);\n\tassert(alloc_ctx.szind != SC_NSIZES);\n\told_usize = 
sz_index2size(alloc_ctx.szind);\n\tassert(old_usize == isalloc(tsd_tsdn(tsd), ptr));\n\t/*\n\t * The API explicitly absolves itself of protecting against (size +\n\t * extra) numerical overflow, but we may need to clamp extra to avoid\n\t * exceeding SC_LARGE_MAXCLASS.\n\t *\n\t * Ordinarily, size limit checking is handled deeper down, but here we\n\t * have to check as part of (size + extra) clamping, since we need the\n\t * clamped value in the above helper functions.\n\t */\n\tif (unlikely(size > SC_LARGE_MAXCLASS)) {\n\t\tusize = old_usize;\n\t\tgoto label_not_resized;\n\t}\n\tif (unlikely(SC_LARGE_MAXCLASS - size < extra)) {\n\t\textra = SC_LARGE_MAXCLASS - size;\n\t}\n\n\tif (config_prof && opt_prof) {\n\t\tusize = ixallocx_prof(tsd, ptr, old_usize, size, extra,\n\t\t    alignment, zero, &alloc_ctx);\n\t} else {\n\t\tusize = ixallocx_helper(tsd_tsdn(tsd), ptr, old_usize, size,\n\t\t    extra, alignment, zero);\n\t}\n\n\t/*\n\t * xallocx() should keep using the same edata_t object (though its\n\t * content can be changed).\n\t */\n\tassert(emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global, ptr)\n\t    == old_edata);\n\n\tif (unlikely(usize == old_usize)) {\n\t\tgoto label_not_resized;\n\t}\n\tthread_alloc_event(tsd, usize);\n\tthread_dalloc_event(tsd, old_usize);\n\n\tif (config_fill && unlikely(opt_junk_alloc) && usize > old_usize &&\n\t    !zero) {\n\t\tsize_t excess_len = usize - old_usize;\n\t\tvoid *excess_start = (void *)((uintptr_t)ptr + old_usize);\n\t\tjunk_alloc_callback(excess_start, excess_len);\n\t}\nlabel_not_resized:\n\tif (unlikely(!tsd_fast(tsd))) {\n\t\tuintptr_t args[4] = {(uintptr_t)ptr, size, extra, flags};\n\t\thook_invoke_expand(hook_expand_xallocx, ptr, old_usize,\n\t\t    usize, (uintptr_t)usize, args);\n\t}\n\n\tUTRACE(ptr, size, ptr);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tLOG(\"core.xallocx.exit\", \"result: %zu\", usize);\n\treturn usize;\n}\n\nJEMALLOC_EXPORT size_t 
JEMALLOC_NOTHROW\nJEMALLOC_ATTR(pure)\nje_sallocx(const void *ptr, int flags) {\n\tsize_t usize;\n\ttsdn_t *tsdn;\n\n\tLOG(\"core.sallocx.entry\", \"ptr: %p, flags: %d\", ptr, flags);\n\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\tassert(ptr != NULL);\n\n\ttsdn = tsdn_fetch();\n\tcheck_entry_exit_locking(tsdn);\n\n\tif (config_debug || force_ivsalloc) {\n\t\tusize = ivsalloc(tsdn, ptr);\n\t\tassert(force_ivsalloc || usize != 0);\n\t} else {\n\t\tusize = isalloc(tsdn, ptr);\n\t}\n\n\tcheck_entry_exit_locking(tsdn);\n\n\tLOG(\"core.sallocx.exit\", \"result: %zu\", usize);\n\treturn usize;\n}\n\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\nje_dallocx(void *ptr, int flags) {\n\tLOG(\"core.dallocx.entry\", \"ptr: %p, flags: %d\", ptr, flags);\n\n\tassert(ptr != NULL);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\n\ttsd_t *tsd = tsd_fetch_min();\n\tbool fast = tsd_fast(tsd);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tunsigned tcache_ind = mallocx_tcache_get(flags);\n\ttcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind, !fast,\n\t    /* is_alloc */ false);\n\n\tUTRACE(ptr, 0, 0);\n\tif (likely(fast)) {\n\t\ttsd_assert_fast(tsd);\n\t\tifree(tsd, ptr, tcache, false, NULL);\n\t} else {\n\t\tuintptr_t args_raw[3] = {(uintptr_t)ptr, flags};\n\t\thook_invoke_dalloc(hook_dalloc_dallocx, ptr, args_raw);\n\t\tifree(tsd, ptr, tcache, true, NULL);\n\t}\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tLOG(\"core.dallocx.exit\", \"\");\n}\n\nJEMALLOC_ALWAYS_INLINE size_t\ninallocx(tsdn_t *tsdn, size_t size, int flags) {\n\tcheck_entry_exit_locking(tsdn);\n\tsize_t usize;\n\t/* In case of out of range, let the user see it rather than fail. 
*/\n\taligned_usize_get(size, MALLOCX_ALIGN_GET(flags), &usize, NULL, false);\n\tcheck_entry_exit_locking(tsdn);\n\treturn usize;\n}\n\nJEMALLOC_NOINLINE void\nsdallocx_default(void *ptr, size_t size, int flags) {\n\tassert(ptr != NULL);\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\n\ttsd_t *tsd = tsd_fetch_min();\n\tbool fast = tsd_fast(tsd);\n\tsize_t usize = inallocx(tsd_tsdn(tsd), size, flags);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tunsigned tcache_ind = mallocx_tcache_get(flags);\n\ttcache_t *tcache = tcache_get_from_ind(tsd, tcache_ind, !fast,\n\t    /* is_alloc */ false);\n\n\tUTRACE(ptr, 0, 0);\n\tif (likely(fast)) {\n\t\ttsd_assert_fast(tsd);\n\t\tisfree(tsd, ptr, usize, tcache, false);\n\t} else {\n\t\tuintptr_t args_raw[3] = {(uintptr_t)ptr, size, flags};\n\t\thook_invoke_dalloc(hook_dalloc_sdallocx, ptr, args_raw);\n\t\tisfree(tsd, ptr, usize, tcache, true);\n\t}\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n}\n\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\nje_sdallocx(void *ptr, size_t size, int flags) {\n\tLOG(\"core.sdallocx.entry\", \"ptr: %p, size: %zu, flags: %d\", ptr,\n\t\tsize, flags);\n\n\tif (flags != 0 || !free_fastpath(ptr, size, true, NULL)) {\n\t\tsdallocx_default(ptr, size, flags);\n\t}\n\n\tLOG(\"core.sdallocx.exit\", \"\");\n}\n\nvoid JEMALLOC_NOTHROW\nje_sdallocx_noflags(void *ptr, size_t size) {\n\tLOG(\"core.sdallocx.entry\", \"ptr: %p, size: %zu, flags: 0\", ptr,\n\t\tsize);\n\n\tif (!free_fastpath(ptr, size, true, NULL)) {\n\t\tsdallocx_default(ptr, size, 0);\n\t}\n\n\tLOG(\"core.sdallocx.exit\", \"\");\n}\n\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\nJEMALLOC_ATTR(pure)\nje_nallocx(size_t size, int flags) {\n\tsize_t usize;\n\ttsdn_t *tsdn;\n\n\tassert(size != 0);\n\n\tif (unlikely(malloc_init())) {\n\t\tLOG(\"core.nallocx.exit\", \"result: %zu\", ZU(0));\n\t\treturn 0;\n\t}\n\n\ttsdn = tsdn_fetch();\n\tcheck_entry_exit_locking(tsdn);\n\n\tusize = inallocx(tsdn, size, flags);\n\tif (unlikely(usize > SC_LARGE_MAXCLASS)) 
{\n\t\tLOG(\"core.nallocx.exit\", \"result: %zu\", ZU(0));\n\t\treturn 0;\n\t}\n\n\tcheck_entry_exit_locking(tsdn);\n\tLOG(\"core.nallocx.exit\", \"result: %zu\", usize);\n\treturn usize;\n}\n\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\nje_mallctl(const char *name, void *oldp, size_t *oldlenp, void *newp,\n    size_t newlen) {\n\tint ret;\n\ttsd_t *tsd;\n\n\tLOG(\"core.mallctl.entry\", \"name: %s\", name);\n\n\tif (unlikely(malloc_init())) {\n\t\tLOG(\"core.mallctl.exit\", \"result: %d\", EAGAIN);\n\t\treturn EAGAIN;\n\t}\n\n\ttsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tret = ctl_byname(tsd, name, oldp, oldlenp, newp, newlen);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tLOG(\"core.mallctl.exit\", \"result: %d\", ret);\n\treturn ret;\n}\n\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\nje_mallctlnametomib(const char *name, size_t *mibp, size_t *miblenp) {\n\tint ret;\n\n\tLOG(\"core.mallctlnametomib.entry\", \"name: %s\", name);\n\n\tif (unlikely(malloc_init())) {\n\t\tLOG(\"core.mallctlnametomib.exit\", \"result: %d\", EAGAIN);\n\t\treturn EAGAIN;\n\t}\n\n\ttsd_t *tsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tret = ctl_nametomib(tsd, name, mibp, miblenp);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tLOG(\"core.mallctlnametomib.exit\", \"result: %d\", ret);\n\treturn ret;\n}\n\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\nje_mallctlbymib(const size_t *mib, size_t miblen, void *oldp, size_t *oldlenp,\n  void *newp, size_t newlen) {\n\tint ret;\n\ttsd_t *tsd;\n\n\tLOG(\"core.mallctlbymib.entry\", \"\");\n\n\tif (unlikely(malloc_init())) {\n\t\tLOG(\"core.mallctlbymib.exit\", \"result: %d\", EAGAIN);\n\t\treturn EAGAIN;\n\t}\n\n\ttsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tret = ctl_bymib(tsd, mib, miblen, oldp, oldlenp, newp, newlen);\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tLOG(\"core.mallctlbymib.exit\", \"result: %d\", ret);\n\treturn ret;\n}\n\n#define STATS_PRINT_BUFSIZE 65536\nJEMALLOC_EXPORT void 
JEMALLOC_NOTHROW\nje_malloc_stats_print(void (*write_cb)(void *, const char *), void *cbopaque,\n    const char *opts) {\n\ttsdn_t *tsdn;\n\n\tLOG(\"core.malloc_stats_print.entry\", \"\");\n\n\ttsdn = tsdn_fetch();\n\tcheck_entry_exit_locking(tsdn);\n\n\tif (config_debug) {\n\t\tstats_print(write_cb, cbopaque, opts);\n\t} else {\n\t\tbuf_writer_t buf_writer;\n\t\tbuf_writer_init(tsdn, &buf_writer, write_cb, cbopaque, NULL,\n\t\t    STATS_PRINT_BUFSIZE);\n\t\tstats_print(buf_writer_cb, &buf_writer, opts);\n\t\tbuf_writer_terminate(tsdn, &buf_writer);\n\t}\n\n\tcheck_entry_exit_locking(tsdn);\n\tLOG(\"core.malloc_stats_print.exit\", \"\");\n}\n#undef STATS_PRINT_BUFSIZE\n\nJEMALLOC_ALWAYS_INLINE size_t\nje_malloc_usable_size_impl(JEMALLOC_USABLE_SIZE_CONST void *ptr) {\n\tassert(malloc_initialized() || IS_INITIALIZER);\n\n\ttsdn_t *tsdn = tsdn_fetch();\n\tcheck_entry_exit_locking(tsdn);\n\n\tsize_t ret;\n\tif (unlikely(ptr == NULL)) {\n\t\tret = 0;\n\t} else {\n\t\tif (config_debug || force_ivsalloc) {\n\t\t\tret = ivsalloc(tsdn, ptr);\n\t\t\tassert(force_ivsalloc || ret != 0);\n\t\t} else {\n\t\t\tret = isalloc(tsdn, ptr);\n\t\t}\n\t}\n\tcheck_entry_exit_locking(tsdn);\n\n\treturn ret;\n}\n\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\nje_malloc_usable_size(JEMALLOC_USABLE_SIZE_CONST void *ptr) {\n\tLOG(\"core.malloc_usable_size.entry\", \"ptr: %p\", ptr);\n\n\tsize_t ret = je_malloc_usable_size_impl(ptr);\n\n\tLOG(\"core.malloc_usable_size.exit\", \"result: %zu\", ret);\n\treturn ret;\n}\n\n#ifdef JEMALLOC_HAVE_MALLOC_SIZE\nJEMALLOC_EXPORT size_t JEMALLOC_NOTHROW\nje_malloc_size(const void *ptr) {\n\tLOG(\"core.malloc_size.entry\", \"ptr: %p\", ptr);\n\n\tsize_t ret = je_malloc_usable_size_impl(ptr);\n\n\tLOG(\"core.malloc_size.exit\", \"result: %zu\", ret);\n\treturn ret;\n}\n#endif\n\nstatic void\nbatch_alloc_prof_sample_assert(tsd_t *tsd, size_t batch, size_t usize) {\n\tassert(config_prof && opt_prof);\n\tbool prof_sample_event = 
te_prof_sample_event_lookahead(tsd,\n\t    batch * usize);\n\tassert(!prof_sample_event);\n\tsize_t surplus;\n\tprof_sample_event = te_prof_sample_event_lookahead_surplus(tsd,\n\t    (batch + 1) * usize, &surplus);\n\tassert(prof_sample_event);\n\tassert(surplus < usize);\n}\n\nsize_t\nbatch_alloc(void **ptrs, size_t num, size_t size, int flags) {\n\tLOG(\"core.batch_alloc.entry\",\n\t    \"ptrs: %p, num: %zu, size: %zu, flags: %d\", ptrs, num, size, flags);\n\n\ttsd_t *tsd = tsd_fetch();\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\n\tsize_t filled = 0;\n\n\tif (unlikely(tsd == NULL || tsd_reentrancy_level_get(tsd) > 0)) {\n\t\tgoto label_done;\n\t}\n\n\tsize_t alignment = MALLOCX_ALIGN_GET(flags);\n\tsize_t usize;\n\tif (aligned_usize_get(size, alignment, &usize, NULL, false)) {\n\t\tgoto label_done;\n\t}\n\tszind_t ind = sz_size2index(usize);\n\tbool zero = zero_get(MALLOCX_ZERO_GET(flags), /* slow */ true);\n\n\t/*\n\t * The cache bin and arena will be lazily initialized; it's hard to\n\t * know in advance whether each of them needs to be initialized.\n\t */\n\tcache_bin_t *bin = NULL;\n\tarena_t *arena = NULL;\n\n\tsize_t nregs = 0;\n\tif (likely(ind < SC_NBINS)) {\n\t\tnregs = bin_infos[ind].nregs;\n\t\tassert(nregs > 0);\n\t}\n\n\twhile (filled < num) {\n\t\tsize_t batch = num - filled;\n\t\tsize_t surplus = SIZE_MAX; /* Dead store. 
*/\n\t\tbool prof_sample_event = config_prof && opt_prof\n\t\t    && prof_active_get_unlocked()\n\t\t    && te_prof_sample_event_lookahead_surplus(tsd,\n\t\t    batch * usize, &surplus);\n\n\t\tif (prof_sample_event) {\n\t\t\t/*\n\t\t\t * Adjust so that the batch does not trigger prof\n\t\t\t * sampling.\n\t\t\t */\n\t\t\tbatch -= surplus / usize + 1;\n\t\t\tbatch_alloc_prof_sample_assert(tsd, batch, usize);\n\t\t}\n\n\t\tsize_t progress = 0;\n\n\t\tif (likely(ind < SC_NBINS) && batch >= nregs) {\n\t\t\tif (arena == NULL) {\n\t\t\t\tunsigned arena_ind = mallocx_arena_get(flags);\n\t\t\t\tif (arena_get_from_ind(tsd, arena_ind,\n\t\t\t\t    &arena)) {\n\t\t\t\t\tgoto label_done;\n\t\t\t\t}\n\t\t\t\tif (arena == NULL) {\n\t\t\t\t\tarena = arena_choose(tsd, NULL);\n\t\t\t\t}\n\t\t\t\tif (unlikely(arena == NULL)) {\n\t\t\t\t\tgoto label_done;\n\t\t\t\t}\n\t\t\t}\n\t\t\tsize_t arena_batch = batch - batch % nregs;\n\t\t\tsize_t n = arena_fill_small_fresh(tsd_tsdn(tsd), arena,\n\t\t\t    ind, ptrs + filled, arena_batch, zero);\n\t\t\tprogress += n;\n\t\t\tfilled += n;\n\t\t}\n\n\t\tif (likely(ind < nhbins) && progress < batch) {\n\t\t\tif (bin == NULL) {\n\t\t\t\tunsigned tcache_ind = mallocx_tcache_get(flags);\n\t\t\t\ttcache_t *tcache = tcache_get_from_ind(tsd,\n\t\t\t\t    tcache_ind, /* slow */ true,\n\t\t\t\t    /* is_alloc */ true);\n\t\t\t\tif (tcache != NULL) {\n\t\t\t\t\tbin = &tcache->bins[ind];\n\t\t\t\t}\n\t\t\t}\n\t\t\t/*\n\t\t\t * If we don't have a tcache bin, we don't want to\n\t\t\t * immediately give up, because there's the possibility\n\t\t\t * that the user explicitly requested to bypass the\n\t\t\t * tcache, or that the user explicitly turned off the\n\t\t\t * tcache; in such cases, we go through the slow path,\n\t\t\t * i.e. 
the mallocx() call at the end of the while loop.\n\t\t\t */\n\t\t\tif (bin != NULL) {\n\t\t\t\tsize_t bin_batch = batch - progress;\n\t\t\t\t/*\n\t\t\t\t * n can be less than bin_batch, meaning that\n\t\t\t\t * the cache bin does not have enough memory.\n\t\t\t\t * In such cases, we rely on the slow path,\n\t\t\t\t * i.e. the mallocx() call at the end of the\n\t\t\t\t * while loop, to fill in the cache, and in the\n\t\t\t\t * next iteration of the while loop, the tcache\n\t\t\t\t * will contain a lot of memory, and we can\n\t\t\t\t * harvest them here.  Compared to the\n\t\t\t\t * alternative approach where we directly go to\n\t\t\t\t * the arena bins here, the overhead of our\n\t\t\t\t * current approach should usually be minimal,\n\t\t\t\t * since we never try to fetch more memory than\n\t\t\t\t * what a slab contains via the tcache.  An\n\t\t\t\t * additional benefit is that the tcache will\n\t\t\t\t * not be empty for the next allocation request.\n\t\t\t\t */\n\t\t\t\tsize_t n = cache_bin_alloc_batch(bin, bin_batch,\n\t\t\t\t    ptrs + filled);\n\t\t\t\tif (config_stats) {\n\t\t\t\t\tbin->tstats.nrequests += n;\n\t\t\t\t}\n\t\t\t\tif (zero) {\n\t\t\t\t\tfor (size_t i = 0; i < n; ++i) {\n\t\t\t\t\t\tmemset(ptrs[filled + i], 0,\n\t\t\t\t\t\t    usize);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (config_prof && opt_prof\n\t\t\t\t    && unlikely(ind >= SC_NBINS)) {\n\t\t\t\t\tfor (size_t i = 0; i < n; ++i) {\n\t\t\t\t\t\tprof_tctx_reset_sampled(tsd,\n\t\t\t\t\t\t    ptrs[filled + i]);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tprogress += n;\n\t\t\t\tfilled += n;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * For thread events other than prof sampling, trigger them as\n\t\t * if there's a single allocation of size (n * usize).  
This is\n\t\t * fine because:\n\t\t * (a) these events do not alter the allocation itself, and\n\t\t * (b) it's possible that some event would have been triggered\n\t\t *     multiple times, instead of only once, if the allocations\n\t\t *     were handled individually, but it would do no harm (or\n\t\t *     even be beneficial) to coalesce the triggerings.\n\t\t */\n\t\tthread_alloc_event(tsd, progress * usize);\n\n\t\tif (progress < batch || prof_sample_event) {\n\t\t\tvoid *p = je_mallocx(size, flags);\n\t\t\tif (p == NULL) { /* OOM */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (progress == batch) {\n\t\t\t\tassert(prof_sampled(tsd, p));\n\t\t\t}\n\t\t\tptrs[filled++] = p;\n\t\t}\n\t}\n\nlabel_done:\n\tcheck_entry_exit_locking(tsd_tsdn(tsd));\n\tLOG(\"core.batch_alloc.exit\", \"result: %zu\", filled);\n\treturn filled;\n}\n\n/*\n * End non-standard functions.\n */\n/******************************************************************************/\n/*\n * The following functions are used by threading libraries for protection of\n * malloc during fork().\n */\n\n/*\n * If an application creates a thread before doing any allocation in the main\n * thread, then calls fork(2) in the main thread followed by memory allocation\n * in the child process, a race can occur that results in deadlock within the\n * child: the main thread may have forked while the created thread had\n * partially initialized the allocator.  Ordinarily jemalloc prevents\n * fork/malloc races via the following functions it registers during\n * initialization using pthread_atfork(), but of course that does no good if\n * the allocator isn't fully initialized at fork time.  The following library\n * constructor is a partial solution to this problem.  
It may still be possible\n * to trigger the deadlock described above, but doing so would involve forking\n * via a library constructor that runs before jemalloc's runs.\n */\n#ifndef JEMALLOC_JET\nJEMALLOC_ATTR(constructor)\nstatic void\njemalloc_constructor(void) {\n\tmalloc_init();\n}\n#endif\n\n#ifndef JEMALLOC_MUTEX_INIT_CB\nvoid\njemalloc_prefork(void)\n#else\nJEMALLOC_EXPORT void\n_malloc_prefork(void)\n#endif\n{\n\ttsd_t *tsd;\n\tunsigned i, j, narenas;\n\tarena_t *arena;\n\n#ifdef JEMALLOC_MUTEX_INIT_CB\n\tif (!malloc_initialized()) {\n\t\treturn;\n\t}\n#endif\n\tassert(malloc_initialized());\n\n\ttsd = tsd_fetch();\n\n\tnarenas = narenas_total_get();\n\n\twitness_prefork(tsd_witness_tsdp_get(tsd));\n\t/* Acquire all mutexes in a safe order. */\n\tctl_prefork(tsd_tsdn(tsd));\n\ttcache_prefork(tsd_tsdn(tsd));\n\tmalloc_mutex_prefork(tsd_tsdn(tsd), &arenas_lock);\n\tif (have_background_thread) {\n\t\tbackground_thread_prefork0(tsd_tsdn(tsd));\n\t}\n\tprof_prefork0(tsd_tsdn(tsd));\n\tif (have_background_thread) {\n\t\tbackground_thread_prefork1(tsd_tsdn(tsd));\n\t}\n\t/* Break arena prefork into stages to preserve lock order. 
*/\n\tfor (i = 0; i < 9; i++) {\n\t\tfor (j = 0; j < narenas; j++) {\n\t\t\tif ((arena = arena_get(tsd_tsdn(tsd), j, false)) !=\n\t\t\t    NULL) {\n\t\t\t\tswitch (i) {\n\t\t\t\tcase 0:\n\t\t\t\t\tarena_prefork0(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 1:\n\t\t\t\t\tarena_prefork1(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 2:\n\t\t\t\t\tarena_prefork2(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 3:\n\t\t\t\t\tarena_prefork3(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 4:\n\t\t\t\t\tarena_prefork4(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 5:\n\t\t\t\t\tarena_prefork5(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 6:\n\t\t\t\t\tarena_prefork6(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 7:\n\t\t\t\t\tarena_prefork7(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 8:\n\t\t\t\t\tarena_prefork8(tsd_tsdn(tsd), arena);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault: not_reached();\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t}\n\tprof_prefork1(tsd_tsdn(tsd));\n\tstats_prefork(tsd_tsdn(tsd));\n\ttsd_prefork(tsd);\n}\n\n#ifndef JEMALLOC_MUTEX_INIT_CB\nvoid\njemalloc_postfork_parent(void)\n#else\nJEMALLOC_EXPORT void\n_malloc_postfork(void)\n#endif\n{\n\ttsd_t *tsd;\n\tunsigned i, narenas;\n\n#ifdef JEMALLOC_MUTEX_INIT_CB\n\tif (!malloc_initialized()) {\n\t\treturn;\n\t}\n#endif\n\tassert(malloc_initialized());\n\n\ttsd = tsd_fetch();\n\n\ttsd_postfork_parent(tsd);\n\n\twitness_postfork_parent(tsd_witness_tsdp_get(tsd));\n\t/* Release all mutexes, now that fork() has completed. 
*/\n\tstats_postfork_parent(tsd_tsdn(tsd));\n\tfor (i = 0, narenas = narenas_total_get(); i < narenas; i++) {\n\t\tarena_t *arena;\n\n\t\tif ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) {\n\t\t\tarena_postfork_parent(tsd_tsdn(tsd), arena);\n\t\t}\n\t}\n\tprof_postfork_parent(tsd_tsdn(tsd));\n\tif (have_background_thread) {\n\t\tbackground_thread_postfork_parent(tsd_tsdn(tsd));\n\t}\n\tmalloc_mutex_postfork_parent(tsd_tsdn(tsd), &arenas_lock);\n\ttcache_postfork_parent(tsd_tsdn(tsd));\n\tctl_postfork_parent(tsd_tsdn(tsd));\n}\n\nvoid\njemalloc_postfork_child(void) {\n\ttsd_t *tsd;\n\tunsigned i, narenas;\n\n\tassert(malloc_initialized());\n\n\ttsd = tsd_fetch();\n\n\ttsd_postfork_child(tsd);\n\n\twitness_postfork_child(tsd_witness_tsdp_get(tsd));\n\t/* Release all mutexes, now that fork() has completed. */\n\tstats_postfork_child(tsd_tsdn(tsd));\n\tfor (i = 0, narenas = narenas_total_get(); i < narenas; i++) {\n\t\tarena_t *arena;\n\n\t\tif ((arena = arena_get(tsd_tsdn(tsd), i, false)) != NULL) {\n\t\t\tarena_postfork_child(tsd_tsdn(tsd), arena);\n\t\t}\n\t}\n\tprof_postfork_child(tsd_tsdn(tsd));\n\tif (have_background_thread) {\n\t\tbackground_thread_postfork_child(tsd_tsdn(tsd));\n\t}\n\tmalloc_mutex_postfork_child(tsd_tsdn(tsd), &arenas_lock);\n\ttcache_postfork_child(tsd_tsdn(tsd));\n\tctl_postfork_child(tsd_tsdn(tsd));\n}\n\n/******************************************************************************/\n\n/* Helps the application decide if a pointer is worth re-allocating in order to reduce fragmentation.\n * returns 1 if the allocation should be moved, and 0 if the allocation be kept.\n * If the application decides to re-allocate it should use MALLOCX_TCACHE_NONE when doing so. 
*/\nJEMALLOC_EXPORT int JEMALLOC_NOTHROW\nget_defrag_hint(void* ptr) {\n\tassert(ptr != NULL);\n\treturn iget_defrag_hint(TSDN_NULL, ptr);\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE(1)\nmalloc_with_usize(size_t size, size_t *usize) {\n\treturn je_malloc_internal(size, usize);\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ATTR(malloc) JEMALLOC_ALLOC_SIZE2(1, 2)\ncalloc_with_usize(size_t num, size_t size, size_t *usize) {\n\treturn je_calloc_internal(num, size, usize);\n}\n\nJEMALLOC_EXPORT JEMALLOC_ALLOCATOR JEMALLOC_RESTRICT_RETURN\nvoid JEMALLOC_NOTHROW *\nJEMALLOC_ALLOC_SIZE(2)\nrealloc_with_usize(void *ptr, size_t size, size_t *old_usize, size_t *new_usize) {\n\treturn je_realloc_internal(ptr, size, old_usize, new_usize);\n}\n\nJEMALLOC_EXPORT void JEMALLOC_NOTHROW\nfree_with_usize(void *ptr, size_t *usize) {\n\tje_free_internal(ptr, usize);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/jemalloc_cpp.cpp",
    "content": "#include <mutex>\n#include <new>\n\n#define JEMALLOC_CPP_CPP_\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#ifdef __cplusplus\n}\n#endif\n\n// All operators in this file are exported.\n\n// Possibly alias hidden versions of malloc and sdallocx to avoid an extra plt\n// thunk?\n//\n// extern __typeof (sdallocx) sdallocx_int\n//  __attribute ((alias (\"sdallocx\"),\n//\t\tvisibility (\"hidden\")));\n//\n// ... but it needs to work with jemalloc namespaces.\n\nvoid\t*operator new(std::size_t size);\nvoid\t*operator new[](std::size_t size);\nvoid\t*operator new(std::size_t size, const std::nothrow_t &) noexcept;\nvoid\t*operator new[](std::size_t size, const std::nothrow_t &) noexcept;\nvoid\toperator delete(void *ptr) noexcept;\nvoid\toperator delete[](void *ptr) noexcept;\nvoid\toperator delete(void *ptr, const std::nothrow_t &) noexcept;\nvoid\toperator delete[](void *ptr, const std::nothrow_t &) noexcept;\n\n#if __cpp_sized_deallocation >= 201309\n/* C++14's sized-delete operators. */\nvoid\toperator delete(void *ptr, std::size_t size) noexcept;\nvoid\toperator delete[](void *ptr, std::size_t size) noexcept;\n#endif\n\n#if __cpp_aligned_new >= 201606\n/* C++17's over-aligned operators. 
*/\nvoid\t*operator new(std::size_t size, std::align_val_t);\nvoid\t*operator new(std::size_t size, std::align_val_t, const std::nothrow_t &) noexcept;\nvoid\t*operator new[](std::size_t size, std::align_val_t);\nvoid\t*operator new[](std::size_t size, std::align_val_t, const std::nothrow_t &) noexcept;\nvoid\toperator delete(void* ptr, std::align_val_t) noexcept;\nvoid\toperator delete(void* ptr, std::align_val_t, const std::nothrow_t &) noexcept;\nvoid\toperator delete(void* ptr, std::size_t size, std::align_val_t al) noexcept;\nvoid\toperator delete[](void* ptr, std::align_val_t) noexcept;\nvoid\toperator delete[](void* ptr, std::align_val_t, const std::nothrow_t &) noexcept;\nvoid\toperator delete[](void* ptr, std::size_t size, std::align_val_t al) noexcept;\n#endif\n\nJEMALLOC_NOINLINE\nstatic void *\nhandleOOM(std::size_t size, bool nothrow) {\n\tif (opt_experimental_infallible_new) {\n\t\tsafety_check_fail(\"<jemalloc>: Allocation failed and \"\n\t\t    \"opt.experimental_infallible_new is true. 
Aborting.\\n\");\n\t\treturn nullptr;\n\t}\n\n\tvoid *ptr = nullptr;\n\n\twhile (ptr == nullptr) {\n\t\tstd::new_handler handler;\n\t\t// GCC-4.8 and clang 4.0 do not have std::get_new_handler.\n\t\t{\n\t\t\tstatic std::mutex mtx;\n\t\t\tstd::lock_guard<std::mutex> lock(mtx);\n\n\t\t\thandler = std::set_new_handler(nullptr);\n\t\t\tstd::set_new_handler(handler);\n\t\t}\n\t\tif (handler == nullptr)\n\t\t\tbreak;\n\n\t\ttry {\n\t\t\thandler();\n\t\t} catch (const std::bad_alloc &) {\n\t\t\tbreak;\n\t\t}\n\n\t\tptr = je_malloc(size);\n\t}\n\n\tif (ptr == nullptr && !nothrow)\n\t\tstd::__throw_bad_alloc();\n\treturn ptr;\n}\n\ntemplate <bool IsNoExcept>\nJEMALLOC_NOINLINE\nstatic void *\nfallback_impl(std::size_t size, std::size_t *usize) noexcept(IsNoExcept) {\n\tvoid *ptr = malloc_default(size, NULL);\n\tif (likely(ptr != nullptr)) {\n\t\treturn ptr;\n\t}\n\treturn handleOOM(size, IsNoExcept);\n}\n\ntemplate <bool IsNoExcept>\nJEMALLOC_ALWAYS_INLINE\nvoid *\nnewImpl(std::size_t size) noexcept(IsNoExcept) {\n\treturn imalloc_fastpath(size, &fallback_impl<IsNoExcept>, NULL);\n}\n\nvoid *\noperator new(std::size_t size) {\n\treturn newImpl<false>(size);\n}\n\nvoid *\noperator new[](std::size_t size) {\n\treturn newImpl<false>(size);\n}\n\nvoid *\noperator new(std::size_t size, const std::nothrow_t &) noexcept {\n\treturn newImpl<true>(size);\n}\n\nvoid *\noperator new[](std::size_t size, const std::nothrow_t &) noexcept {\n\treturn newImpl<true>(size);\n}\n\n#if __cpp_aligned_new >= 201606\n\ntemplate <bool IsNoExcept>\nJEMALLOC_ALWAYS_INLINE\nvoid *\nalignedNewImpl(std::size_t size, std::align_val_t alignment) noexcept(IsNoExcept) {\n\tvoid *ptr = je_aligned_alloc(static_cast<std::size_t>(alignment), size);\n\tif (likely(ptr != nullptr)) {\n\t\treturn ptr;\n\t}\n\n\treturn handleOOM(size, IsNoExcept);\n}\n\nvoid *\noperator new(std::size_t size, std::align_val_t alignment) {\n\treturn alignedNewImpl<false>(size, alignment);\n}\n\nvoid *\noperator new[](std::size_t 
size, std::align_val_t alignment) {\n\treturn alignedNewImpl<false>(size, alignment);\n}\n\nvoid *\noperator new(std::size_t size, std::align_val_t alignment, const std::nothrow_t &) noexcept {\n\treturn alignedNewImpl<true>(size, alignment);\n}\n\nvoid *\noperator new[](std::size_t size, std::align_val_t alignment, const std::nothrow_t &) noexcept {\n\treturn alignedNewImpl<true>(size, alignment);\n}\n\n#endif  // __cpp_aligned_new\n\nvoid\noperator delete(void *ptr) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete[](void *ptr) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete(void *ptr, const std::nothrow_t &) noexcept {\n\tje_free(ptr);\n}\n\nvoid operator delete[](void *ptr, const std::nothrow_t &) noexcept {\n\tje_free(ptr);\n}\n\n#if __cpp_sized_deallocation >= 201309\n\nJEMALLOC_ALWAYS_INLINE\nvoid\nsizedDeleteImpl(void* ptr, std::size_t size) noexcept {\n\tif (unlikely(ptr == nullptr)) {\n\t\treturn;\n\t}\n\tje_sdallocx_noflags(ptr, size);\n}\n\nvoid\noperator delete(void *ptr, std::size_t size) noexcept {\n\tsizedDeleteImpl(ptr, size);\n}\n\nvoid\noperator delete[](void *ptr, std::size_t size) noexcept {\n\tsizedDeleteImpl(ptr, size);\n}\n\n#endif  // __cpp_sized_deallocation\n\n#if __cpp_aligned_new >= 201606\n\nJEMALLOC_ALWAYS_INLINE\nvoid\nalignedSizedDeleteImpl(void* ptr, std::size_t size, std::align_val_t alignment) noexcept {\n\tif (config_debug) {\n\t\tassert(((size_t)alignment & ((size_t)alignment - 1)) == 0);\n\t}\n\tif (unlikely(ptr == nullptr)) {\n\t\treturn;\n\t}\n\tje_sdallocx(ptr, size, MALLOCX_ALIGN(alignment));\n}\n\nvoid\noperator delete(void* ptr, std::align_val_t) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete[](void* ptr, std::align_val_t) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete(void* ptr, std::align_val_t, const std::nothrow_t&) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete[](void* ptr, std::align_val_t, const std::nothrow_t&) noexcept {\n\tje_free(ptr);\n}\n\nvoid\noperator delete(void* 
ptr, std::size_t size, std::align_val_t alignment) noexcept {\n\talignedSizedDeleteImpl(ptr, size, alignment);\n}\n\nvoid\noperator delete[](void* ptr, std::size_t size, std::align_val_t alignment) noexcept {\n\talignedSizedDeleteImpl(ptr, size, alignment);\n}\n\n#endif  // __cpp_aligned_new\n"
  },
  {
    "path": "deps/jemalloc/src/large.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/emap.h\"\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/prof_recent.h\"\n#include \"jemalloc/internal/util.h\"\n\n/******************************************************************************/\n\nvoid *\nlarge_malloc(tsdn_t *tsdn, arena_t *arena, size_t usize, bool zero) {\n\tassert(usize == sz_s2u(usize));\n\n\treturn large_palloc(tsdn, arena, usize, CACHELINE, zero);\n}\n\nvoid *\nlarge_palloc(tsdn_t *tsdn, arena_t *arena, size_t usize, size_t alignment,\n    bool zero) {\n\tsize_t ausize;\n\tedata_t *edata;\n\tUNUSED bool idump JEMALLOC_CC_SILENCE_INIT(false);\n\n\tassert(!tsdn_null(tsdn) || arena != NULL);\n\n\tausize = sz_sa2u(usize, alignment);\n\tif (unlikely(ausize == 0 || ausize > SC_LARGE_MAXCLASS)) {\n\t\treturn NULL;\n\t}\n\n\tif (likely(!tsdn_null(tsdn))) {\n\t\tarena = arena_choose_maybe_huge(tsdn_tsd(tsdn), arena, usize);\n\t}\n\tif (unlikely(arena == NULL) || (edata = arena_extent_alloc_large(tsdn,\n\t    arena, usize, alignment, zero)) == NULL) {\n\t\treturn NULL;\n\t}\n\n\t/* See comments in arena_bin_slabs_full_insert(). */\n\tif (!arena_is_auto(arena)) {\n\t\t/* Insert edata into large. 
*/\n\t\tmalloc_mutex_lock(tsdn, &arena->large_mtx);\n\t\tedata_list_active_append(&arena->large, edata);\n\t\tmalloc_mutex_unlock(tsdn, &arena->large_mtx);\n\t}\n\n\tarena_decay_tick(tsdn, arena);\n\treturn edata_addr_get(edata);\n}\n\nstatic bool\nlarge_ralloc_no_move_shrink(tsdn_t *tsdn, edata_t *edata, size_t usize) {\n\tarena_t *arena = arena_get_from_edata(edata);\n\tehooks_t *ehooks = arena_get_ehooks(arena);\n\tsize_t old_size = edata_size_get(edata);\n\tsize_t old_usize = edata_usize_get(edata);\n\n\tassert(old_usize > usize);\n\n\tif (ehooks_split_will_fail(ehooks)) {\n\t\treturn true;\n\t}\n\n\tbool deferred_work_generated = false;\n\tbool err = pa_shrink(tsdn, &arena->pa_shard, edata, old_size,\n\t    usize + sz_large_pad, sz_size2index(usize),\n\t    &deferred_work_generated);\n\tif (err) {\n\t\treturn true;\n\t}\n\tif (deferred_work_generated) {\n\t\tarena_handle_deferred_work(tsdn, arena);\n\t}\n\tarena_extent_ralloc_large_shrink(tsdn, arena, edata, old_usize);\n\n\treturn false;\n}\n\nstatic bool\nlarge_ralloc_no_move_expand(tsdn_t *tsdn, edata_t *edata, size_t usize,\n    bool zero) {\n\tarena_t *arena = arena_get_from_edata(edata);\n\n\tsize_t old_size = edata_size_get(edata);\n\tsize_t old_usize = edata_usize_get(edata);\n\tsize_t new_size = usize + sz_large_pad;\n\n\tszind_t szind = sz_size2index(usize);\n\n\tbool deferred_work_generated = false;\n\tbool err = pa_expand(tsdn, &arena->pa_shard, edata, old_size, new_size,\n\t    szind, zero, &deferred_work_generated);\n\n\tif (deferred_work_generated) {\n\t\tarena_handle_deferred_work(tsdn, arena);\n\t}\n\n\tif (err) {\n\t\treturn true;\n\t}\n\n\tif (zero) {\n\t\tif (opt_cache_oblivious) {\n\t\t\tassert(sz_large_pad == PAGE);\n\t\t\t/*\n\t\t\t * Zero the trailing bytes of the original allocation's\n\t\t\t * last page, since they are in an indeterminate state.\n\t\t\t * There will always be trailing bytes, because ptr's\n\t\t\t * offset from the beginning of the extent is a multiple\n\t\t\t * of 
CACHELINE in [0 .. PAGE).\n\t\t\t */\n\t\t\tvoid *zbase = (void *)\n\t\t\t    ((uintptr_t)edata_addr_get(edata) + old_usize);\n\t\t\tvoid *zpast = PAGE_ADDR2BASE((void *)((uintptr_t)zbase +\n\t\t\t    PAGE));\n\t\t\tsize_t nzero = (uintptr_t)zpast - (uintptr_t)zbase;\n\t\t\tassert(nzero > 0);\n\t\t\tmemset(zbase, 0, nzero);\n\t\t}\n\t}\n\tarena_extent_ralloc_large_expand(tsdn, arena, edata, old_usize);\n\n\treturn false;\n}\n\nbool\nlarge_ralloc_no_move(tsdn_t *tsdn, edata_t *edata, size_t usize_min,\n    size_t usize_max, bool zero) {\n\tsize_t oldusize = edata_usize_get(edata);\n\n\t/* The following should have been caught by callers. */\n\tassert(usize_min > 0 && usize_max <= SC_LARGE_MAXCLASS);\n\t/* Both allocation sizes must be large to avoid a move. */\n\tassert(oldusize >= SC_LARGE_MINCLASS\n\t    && usize_max >= SC_LARGE_MINCLASS);\n\n\tif (usize_max > oldusize) {\n\t\t/* Attempt to expand the allocation in-place. */\n\t\tif (!large_ralloc_no_move_expand(tsdn, edata, usize_max,\n\t\t    zero)) {\n\t\t\tarena_decay_tick(tsdn, arena_get_from_edata(edata));\n\t\t\treturn false;\n\t\t}\n\t\t/* Try again, this time with usize_min. */\n\t\tif (usize_min < usize_max && usize_min > oldusize &&\n\t\t    large_ralloc_no_move_expand(tsdn, edata, usize_min, zero)) {\n\t\t\tarena_decay_tick(tsdn, arena_get_from_edata(edata));\n\t\t\treturn false;\n\t\t}\n\t}\n\n\t/*\n\t * Avoid moving the allocation if the existing extent size accommodates\n\t * the new size.\n\t */\n\tif (oldusize >= usize_min && oldusize <= usize_max) {\n\t\tarena_decay_tick(tsdn, arena_get_from_edata(edata));\n\t\treturn false;\n\t}\n\n\t/* Attempt to shrink the allocation in-place. 
*/\n\tif (oldusize > usize_max) {\n\t\tif (!large_ralloc_no_move_shrink(tsdn, edata, usize_max)) {\n\t\t\tarena_decay_tick(tsdn, arena_get_from_edata(edata));\n\t\t\treturn false;\n\t\t}\n\t}\n\treturn true;\n}\n\nstatic void *\nlarge_ralloc_move_helper(tsdn_t *tsdn, arena_t *arena, size_t usize,\n    size_t alignment, bool zero) {\n\tif (alignment <= CACHELINE) {\n\t\treturn large_malloc(tsdn, arena, usize, zero);\n\t}\n\treturn large_palloc(tsdn, arena, usize, alignment, zero);\n}\n\nvoid *\nlarge_ralloc(tsdn_t *tsdn, arena_t *arena, void *ptr, size_t usize,\n    size_t alignment, bool zero, tcache_t *tcache,\n    hook_ralloc_args_t *hook_args) {\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\n\tsize_t oldusize = edata_usize_get(edata);\n\t/* The following should have been caught by callers. */\n\tassert(usize > 0 && usize <= SC_LARGE_MAXCLASS);\n\t/* Both allocation sizes must be large to avoid a move. */\n\tassert(oldusize >= SC_LARGE_MINCLASS\n\t    && usize >= SC_LARGE_MINCLASS);\n\n\t/* Try to avoid moving the allocation. */\n\tif (!large_ralloc_no_move(tsdn, edata, usize, usize, zero)) {\n\t\thook_invoke_expand(hook_args->is_realloc\n\t\t    ? hook_expand_realloc : hook_expand_rallocx, ptr, oldusize,\n\t\t    usize, (uintptr_t)ptr, hook_args->args);\n\t\treturn edata_addr_get(edata);\n\t}\n\n\t/*\n\t * usize and old size are different enough that we need to use a\n\t * different size class.  In that case, fall back to allocating new\n\t * space and copying.\n\t */\n\tvoid *ret = large_ralloc_move_helper(tsdn, arena, usize, alignment,\n\t    zero);\n\tif (ret == NULL) {\n\t\treturn NULL;\n\t}\n\n\thook_invoke_alloc(hook_args->is_realloc\n\t    ? hook_alloc_realloc : hook_alloc_rallocx, ret, (uintptr_t)ret,\n\t    hook_args->args);\n\thook_invoke_dalloc(hook_args->is_realloc\n\t    ? hook_dalloc_realloc : hook_dalloc_rallocx, ptr, hook_args->args);\n\n\tsize_t copysize = (usize < oldusize) ? 
usize : oldusize;\n\tmemcpy(ret, edata_addr_get(edata), copysize);\n\tisdalloct(tsdn, edata_addr_get(edata), oldusize, tcache, NULL, true);\n\treturn ret;\n}\n\n/*\n * locked indicates whether the arena's large_mtx is currently held.\n */\nstatic void\nlarge_dalloc_prep_impl(tsdn_t *tsdn, arena_t *arena, edata_t *edata,\n    bool locked) {\n\tif (!locked) {\n\t\t/* See comments in arena_bin_slabs_full_insert(). */\n\t\tif (!arena_is_auto(arena)) {\n\t\t\tmalloc_mutex_lock(tsdn, &arena->large_mtx);\n\t\t\tedata_list_active_remove(&arena->large, edata);\n\t\t\tmalloc_mutex_unlock(tsdn, &arena->large_mtx);\n\t\t}\n\t} else {\n\t\t/* Only hold the large_mtx if necessary. */\n\t\tif (!arena_is_auto(arena)) {\n\t\t\tmalloc_mutex_assert_owner(tsdn, &arena->large_mtx);\n\t\t\tedata_list_active_remove(&arena->large, edata);\n\t\t}\n\t}\n\tarena_extent_dalloc_large_prep(tsdn, arena, edata);\n}\n\nstatic void\nlarge_dalloc_finish_impl(tsdn_t *tsdn, arena_t *arena, edata_t *edata) {\n\tbool deferred_work_generated = false;\n\tpa_dalloc(tsdn, &arena->pa_shard, edata, &deferred_work_generated);\n\tif (deferred_work_generated) {\n\t\tarena_handle_deferred_work(tsdn, arena);\n\t}\n}\n\nvoid\nlarge_dalloc_prep_locked(tsdn_t *tsdn, edata_t *edata) {\n\tlarge_dalloc_prep_impl(tsdn, arena_get_from_edata(edata), edata, true);\n}\n\nvoid\nlarge_dalloc_finish(tsdn_t *tsdn, edata_t *edata) {\n\tlarge_dalloc_finish_impl(tsdn, arena_get_from_edata(edata), edata);\n}\n\nvoid\nlarge_dalloc(tsdn_t *tsdn, edata_t *edata) {\n\tarena_t *arena = arena_get_from_edata(edata);\n\tlarge_dalloc_prep_impl(tsdn, arena, edata, false);\n\tlarge_dalloc_finish_impl(tsdn, arena, edata);\n\tarena_decay_tick(tsdn, arena);\n}\n\nsize_t\nlarge_salloc(tsdn_t *tsdn, const edata_t *edata) {\n\treturn edata_usize_get(edata);\n}\n\nvoid\nlarge_prof_info_get(tsd_t *tsd, edata_t *edata, prof_info_t *prof_info,\n    bool reset_recent) {\n\tassert(prof_info != NULL);\n\n\tprof_tctx_t *alloc_tctx = 
edata_prof_tctx_get(edata);\n\tprof_info->alloc_tctx = alloc_tctx;\n\n\tif ((uintptr_t)alloc_tctx > (uintptr_t)1U) {\n\t\tnstime_copy(&prof_info->alloc_time,\n\t\t    edata_prof_alloc_time_get(edata));\n\t\tprof_info->alloc_size = edata_prof_alloc_size_get(edata);\n\t\tif (reset_recent) {\n\t\t\t/*\n\t\t\t * Reset the pointer on the recent allocation record,\n\t\t\t * so that this allocation is recorded as released.\n\t\t\t */\n\t\t\tprof_recent_alloc_reset(tsd, edata);\n\t\t}\n\t}\n}\n\nstatic void\nlarge_prof_tctx_set(edata_t *edata, prof_tctx_t *tctx) {\n\tedata_prof_tctx_set(edata, tctx);\n}\n\nvoid\nlarge_prof_tctx_reset(edata_t *edata) {\n\tlarge_prof_tctx_set(edata, (prof_tctx_t *)(uintptr_t)1U);\n}\n\nvoid\nlarge_prof_info_set(edata_t *edata, prof_tctx_t *tctx, size_t size) {\n\tnstime_t t;\n\tnstime_prof_init_update(&t);\n\tedata_prof_alloc_time_set(edata, &t);\n\tedata_prof_alloc_size_set(edata, size);\n\tedata_prof_recent_alloc_init(edata);\n\tlarge_prof_tctx_set(edata, tctx);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/log.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/log.h\"\n\nchar log_var_names[JEMALLOC_LOG_VAR_BUFSIZE];\natomic_b_t log_init_done = ATOMIC_INIT(false);\n\n/*\n * Returns a pointer to the first character past the end of the current\n * segment, i.e. the next '|' delimiter or the terminating '\\0'.\n */\nstatic const char *\nlog_var_extract_segment(const char* segment_begin) {\n\tconst char *end;\n\tfor (end = segment_begin; *end != '\\0' && *end != '|'; end++) {\n\t}\n\treturn end;\n}\n\nstatic bool\nlog_var_matches_segment(const char *segment_begin, const char *segment_end,\n    const char *log_var_begin, const char *log_var_end) {\n\tassert(segment_begin <= segment_end);\n\tassert(log_var_begin < log_var_end);\n\n\tptrdiff_t segment_len = segment_end - segment_begin;\n\tptrdiff_t log_var_len = log_var_end - log_var_begin;\n\t/* The special '.' segment matches everything. */\n\tif (segment_len == 1 && *segment_begin == '.') {\n\t\treturn true;\n\t}\n\tif (segment_len == log_var_len) {\n\t\treturn strncmp(segment_begin, log_var_begin, segment_len) == 0;\n\t} else if (segment_len < log_var_len) {\n\t\treturn strncmp(segment_begin, log_var_begin, segment_len) == 0\n\t\t    && log_var_begin[segment_len] == '.';\n\t} else {\n\t\treturn false;\n\t}\n}\n\nunsigned\nlog_var_update_state(log_var_t *log_var) {\n\tconst char *log_var_begin = log_var->name;\n\tconst char *log_var_end = log_var->name + strlen(log_var->name);\n\n\t/* Pointer to the beginning of the current segment. */\n\tconst char *segment_begin = log_var_names;\n\n\t/*\n\t * If log_init_done is false, we haven't parsed the malloc conf yet.  
To\n\t * avoid log-spew, we default to not displaying anything.\n\t */\n\tif (!atomic_load_b(&log_init_done, ATOMIC_ACQUIRE)) {\n\t\treturn LOG_INITIALIZED_NOT_ENABLED;\n\t}\n\n\twhile (true) {\n\t\tconst char *segment_end = log_var_extract_segment(\n\t\t    segment_begin);\n\t\tassert(segment_end < log_var_names + JEMALLOC_LOG_VAR_BUFSIZE);\n\t\tif (log_var_matches_segment(segment_begin, segment_end,\n\t\t    log_var_begin, log_var_end)) {\n\t\t\tatomic_store_u(&log_var->state, LOG_ENABLED,\n\t\t\t    ATOMIC_RELAXED);\n\t\t\treturn LOG_ENABLED;\n\t\t}\n\t\tif (*segment_end == '\\0') {\n\t\t\t/* Hit the end of the segment string with no match. */\n\t\t\tatomic_store_u(&log_var->state,\n\t\t\t    LOG_INITIALIZED_NOT_ENABLED, ATOMIC_RELAXED);\n\t\t\treturn LOG_INITIALIZED_NOT_ENABLED;\n\t\t}\n\t\t/* Otherwise, skip the delimiter and continue. */\n\t\tsegment_begin = segment_end + 1;\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/malloc_io.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/util.h\"\n\n#ifdef assert\n#  undef assert\n#endif\n#ifdef not_reached\n#  undef not_reached\n#endif\n#ifdef not_implemented\n#  undef not_implemented\n#endif\n#ifdef assert_not_implemented\n#  undef assert_not_implemented\n#endif\n\n/*\n * Define simple versions of assertion macros that won't recurse in case\n * of assertion failures in malloc_*printf().\n */\n#define assert(e) do {\t\t\t\t\t\t\t\\\n\tif (config_debug && !(e)) {\t\t\t\t\t\\\n\t\tmalloc_write(\"<jemalloc>: Failed assertion\\n\");\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define not_reached() do {\t\t\t\t\t\t\\\n\tif (config_debug) {\t\t\t\t\t\t\\\n\t\tmalloc_write(\"<jemalloc>: Unreachable code reached\\n\");\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tunreachable();\t\t\t\t\t\t\t\\\n} while (0)\n\n#define not_implemented() do {\t\t\t\t\t\t\\\n\tif (config_debug) {\t\t\t\t\t\t\\\n\t\tmalloc_write(\"<jemalloc>: Not implemented\\n\");\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define assert_not_implemented(e) do {\t\t\t\t\t\\\n\tif (unlikely(config_debug && !(e))) {\t\t\t\t\\\n\t\tnot_implemented();\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n/******************************************************************************/\n/* Function prototypes for non-inline static functions. 
*/\n\n#define U2S_BUFSIZE ((1U << (LG_SIZEOF_INTMAX_T + 3)) + 1)\nstatic char *u2s(uintmax_t x, unsigned base, bool uppercase, char *s,\n    size_t *slen_p);\n#define D2S_BUFSIZE (1 + U2S_BUFSIZE)\nstatic char *d2s(intmax_t x, char sign, char *s, size_t *slen_p);\n#define O2S_BUFSIZE (1 + U2S_BUFSIZE)\nstatic char *o2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p);\n#define X2S_BUFSIZE (2 + U2S_BUFSIZE)\nstatic char *x2s(uintmax_t x, bool alt_form, bool uppercase, char *s,\n    size_t *slen_p);\n\n/******************************************************************************/\n\n/* malloc_message() setup. */\nvoid\nwrtmessage(void *cbopaque, const char *s) {\n\tmalloc_write_fd(STDERR_FILENO, s, strlen(s));\n}\n\nJEMALLOC_EXPORT void\t(*je_malloc_message)(void *, const char *s);\n\n/*\n * Wrapper around malloc_message() that avoids the need for\n * je_malloc_message(...) throughout the code.\n */\nvoid\nmalloc_write(const char *s) {\n\tif (je_malloc_message != NULL) {\n\t\tje_malloc_message(NULL, s);\n\t} else {\n\t\twrtmessage(NULL, s);\n\t}\n}\n\n/*\n * glibc provides a non-standard strerror_r() when _GNU_SOURCE is defined, so\n * provide a wrapper.\n */\nint\nbuferror(int err, char *buf, size_t buflen) {\n#ifdef _WIN32\n\tFormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, NULL, err, 0,\n\t    (LPSTR)buf, (DWORD)buflen, NULL);\n\treturn 0;\n#elif defined(JEMALLOC_STRERROR_R_RETURNS_CHAR_WITH_GNU_SOURCE) && defined(_GNU_SOURCE)\n\tchar *b = strerror_r(err, buf, buflen);\n\tif (b != buf) {\n\t\tstrncpy(buf, b, buflen);\n\t\tbuf[buflen-1] = '\\0';\n\t}\n\treturn 0;\n#else\n\treturn strerror_r(err, buf, buflen);\n#endif\n}\n\nuintmax_t\nmalloc_strtoumax(const char *restrict nptr, char **restrict endptr, int base) {\n\tuintmax_t ret, digit;\n\tunsigned b;\n\tbool neg;\n\tconst char *p, *ns;\n\n\tp = nptr;\n\tif (base < 0 || base == 1 || base > 36) {\n\t\tns = p;\n\t\tset_errno(EINVAL);\n\t\tret = UINTMAX_MAX;\n\t\tgoto label_return;\n\t}\n\tb = base;\n\n\t/* Swallow 
leading whitespace and get sign, if any. */\n\tneg = false;\n\twhile (true) {\n\t\tswitch (*p) {\n\t\tcase '\\t': case '\\n': case '\\v': case '\\f': case '\\r': case ' ':\n\t\t\tp++;\n\t\t\tbreak;\n\t\tcase '-':\n\t\t\tneg = true;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase '+':\n\t\t\tp++;\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tdefault:\n\t\t\tgoto label_prefix;\n\t\t}\n\t}\n\n\t/* Get prefix, if any. */\n\tlabel_prefix:\n\t/*\n\t * Note where the first non-whitespace/sign character is so that it is\n\t * possible to tell whether any digits are consumed (e.g., \"  0\" vs.\n\t * \"  -x\").\n\t */\n\tns = p;\n\tif (*p == '0') {\n\t\tswitch (p[1]) {\n\t\tcase '0': case '1': case '2': case '3': case '4': case '5':\n\t\tcase '6': case '7':\n\t\t\tif (b == 0) {\n\t\t\t\tb = 8;\n\t\t\t}\n\t\t\tif (b == 8) {\n\t\t\t\tp++;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 'X': case 'x':\n\t\t\tswitch (p[2]) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\tcase 'A': case 'B': case 'C': case 'D': case 'E':\n\t\t\tcase 'F':\n\t\t\tcase 'a': case 'b': case 'c': case 'd': case 'e':\n\t\t\tcase 'f':\n\t\t\t\tif (b == 0) {\n\t\t\t\t\tb = 16;\n\t\t\t\t}\n\t\t\t\tif (b == 16) {\n\t\t\t\t\tp += 2;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tp++;\n\t\t\tret = 0;\n\t\t\tgoto label_return;\n\t\t}\n\t}\n\tif (b == 0) {\n\t\tb = 10;\n\t}\n\n\t/* Convert. */\n\tret = 0;\n\twhile ((*p >= '0' && *p <= '9' && (digit = *p - '0') < b)\n\t    || (*p >= 'A' && *p <= 'Z' && (digit = 10 + *p - 'A') < b)\n\t    || (*p >= 'a' && *p <= 'z' && (digit = 10 + *p - 'a') < b)) {\n\t\tuintmax_t pret = ret;\n\t\tret *= b;\n\t\tret += digit;\n\t\tif (ret < pret) {\n\t\t\t/* Overflow. */\n\t\t\tset_errno(ERANGE);\n\t\t\tret = UINTMAX_MAX;\n\t\t\tgoto label_return;\n\t\t}\n\t\tp++;\n\t}\n\tif (neg) {\n\t\tret = (uintmax_t)(-((intmax_t)ret));\n\t}\n\n\tif (p == ns) {\n\t\t/* No conversion performed. 
*/\n\t\tset_errno(EINVAL);\n\t\tret = UINTMAX_MAX;\n\t\tgoto label_return;\n\t}\n\nlabel_return:\n\tif (endptr != NULL) {\n\t\tif (p == ns) {\n\t\t\t/* No characters were converted. */\n\t\t\t*endptr = (char *)nptr;\n\t\t} else {\n\t\t\t*endptr = (char *)p;\n\t\t}\n\t}\n\treturn ret;\n}\n\nstatic char *\nu2s(uintmax_t x, unsigned base, bool uppercase, char *s, size_t *slen_p) {\n\tunsigned i;\n\n\ti = U2S_BUFSIZE - 1;\n\ts[i] = '\\0';\n\tswitch (base) {\n\tcase 10:\n\t\tdo {\n\t\t\ti--;\n\t\t\ts[i] = \"0123456789\"[x % (uint64_t)10];\n\t\t\tx /= (uint64_t)10;\n\t\t} while (x > 0);\n\t\tbreak;\n\tcase 16: {\n\t\tconst char *digits = (uppercase)\n\t\t    ? \"0123456789ABCDEF\"\n\t\t    : \"0123456789abcdef\";\n\n\t\tdo {\n\t\t\ti--;\n\t\t\ts[i] = digits[x & 0xf];\n\t\t\tx >>= 4;\n\t\t} while (x > 0);\n\t\tbreak;\n\t} default: {\n\t\tconst char *digits = (uppercase)\n\t\t    ? \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n\t\t    : \"0123456789abcdefghijklmnopqrstuvwxyz\";\n\n\t\tassert(base >= 2 && base <= 36);\n\t\tdo {\n\t\t\ti--;\n\t\t\ts[i] = digits[x % (uint64_t)base];\n\t\t\tx /= (uint64_t)base;\n\t\t} while (x > 0);\n\t}}\n\n\t*slen_p = U2S_BUFSIZE - 1 - i;\n\treturn &s[i];\n}\n\nstatic char *\nd2s(intmax_t x, char sign, char *s, size_t *slen_p) {\n\tbool neg;\n\n\tif ((neg = (x < 0))) {\n\t\tx = -x;\n\t}\n\ts = u2s(x, 10, false, s, slen_p);\n\tif (neg) {\n\t\tsign = '-';\n\t}\n\tswitch (sign) {\n\tcase '-':\n\t\tif (!neg) {\n\t\t\tbreak;\n\t\t}\n\t\tJEMALLOC_FALLTHROUGH;\n\tcase ' ':\n\tcase '+':\n\t\ts--;\n\t\t(*slen_p)++;\n\t\t*s = sign;\n\t\tbreak;\n\tdefault: not_reached();\n\t}\n\treturn s;\n}\n\nstatic char *\no2s(uintmax_t x, bool alt_form, char *s, size_t *slen_p) {\n\ts = u2s(x, 8, false, s, slen_p);\n\tif (alt_form && *s != '0') {\n\t\ts--;\n\t\t(*slen_p)++;\n\t\t*s = '0';\n\t}\n\treturn s;\n}\n\nstatic char *\nx2s(uintmax_t x, bool alt_form, bool uppercase, char *s, size_t *slen_p) {\n\ts = u2s(x, 16, uppercase, s, slen_p);\n\tif (alt_form) {\n\t\ts -= 
2;\n\t\t(*slen_p) += 2;\n\t\tmemcpy(s, uppercase ? \"0X\" : \"0x\", 2);\n\t}\n\treturn s;\n}\n\nJEMALLOC_COLD\nsize_t\nmalloc_vsnprintf(char *str, size_t size, const char *format, va_list ap) {\n\tsize_t i;\n\tconst char *f;\n\n#define APPEND_C(c) do {\t\t\t\t\t\t\\\n\tif (i < size) {\t\t\t\t\t\t\t\\\n\t\tstr[i] = (c);\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ti++;\t\t\t\t\t\t\t\t\\\n} while (0)\n#define APPEND_S(s, slen) do {\t\t\t\t\t\t\\\n\tif (i < size) {\t\t\t\t\t\t\t\\\n\t\tsize_t cpylen = (slen <= size - i) ? slen : size - i;\t\\\n\t\tmemcpy(&str[i], s, cpylen);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\ti += slen;\t\t\t\t\t\t\t\\\n} while (0)\n#define APPEND_PADDED_S(s, slen, width, left_justify) do {\t\t\\\n\t/* Left padding. */\t\t\t\t\t\t\\\n\tsize_t pad_len = (width == -1) ? 0 : ((slen < (size_t)width) ?\t\\\n\t    (size_t)width - slen : 0);\t\t\t\t\t\\\n\tif (!left_justify && pad_len != 0) {\t\t\t\t\\\n\t\tsize_t j;\t\t\t\t\t\t\\\n\t\tfor (j = 0; j < pad_len; j++) {\t\t\t\t\\\n\t\t\tif (pad_zero) {\t\t\t\t\t\\\n\t\t\t\tAPPEND_C('0');\t\t\t\t\\\n\t\t\t} else {\t\t\t\t\t\\\n\t\t\t\tAPPEND_C(' ');\t\t\t\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t/* Value. */\t\t\t\t\t\t\t\\\n\tAPPEND_S(s, slen);\t\t\t\t\t\t\\\n\t/* Right padding. */\t\t\t\t\t\t\\\n\tif (left_justify && pad_len != 0) {\t\t\t\t\\\n\t\tsize_t j;\t\t\t\t\t\t\\\n\t\tfor (j = 0; j < pad_len; j++) {\t\t\t\t\\\n\t\t\tAPPEND_C(' ');\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n#define GET_ARG_NUMERIC(val, len) do {\t\t\t\t\t\\\n\tswitch ((unsigned char)len) {\t\t\t\t\t\\\n\tcase '?':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, int);\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase '?' 
| 0x80:\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, unsigned int);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'l':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, long);\t\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'l' | 0x80:\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, unsigned long);\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'q':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, long long);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'q' | 0x80:\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, unsigned long long);\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'j':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, intmax_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'j' | 0x80:\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, uintmax_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 't':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, ptrdiff_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'z':\t\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, ssize_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'z' | 0x80:\t\t\t\t\t\t\\\n\t\tval = va_arg(ap, size_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tcase 'p': /* Synthetic; used for %p. */\t\t\t\t\\\n\t\tval = va_arg(ap, uintptr_t);\t\t\t\t\\\n\t\tbreak;\t\t\t\t\t\t\t\\\n\tdefault:\t\t\t\t\t\t\t\\\n\t\tnot_reached();\t\t\t\t\t\t\\\n\t\tval = 0;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n\ti = 0;\n\tf = format;\n\twhile (true) {\n\t\tswitch (*f) {\n\t\tcase '\\0': goto label_out;\n\t\tcase '%': {\n\t\t\tbool alt_form = false;\n\t\t\tbool left_justify = false;\n\t\t\tbool plus_space = false;\n\t\t\tbool plus_plus = false;\n\t\t\tint prec = -1;\n\t\t\tint width = -1;\n\t\t\tunsigned char len = '?';\n\t\t\tchar *s;\n\t\t\tsize_t slen;\n\t\t\tbool first_width_digit = true;\n\t\t\tbool pad_zero = false;\n\n\t\t\tf++;\n\t\t\t/* Flags. 
*/\n\t\t\twhile (true) {\n\t\t\t\tswitch (*f) {\n\t\t\t\tcase '#':\n\t\t\t\t\tassert(!alt_form);\n\t\t\t\t\talt_form = true;\n\t\t\t\t\tbreak;\n\t\t\t\tcase '-':\n\t\t\t\t\tassert(!left_justify);\n\t\t\t\t\tleft_justify = true;\n\t\t\t\t\tbreak;\n\t\t\t\tcase ' ':\n\t\t\t\t\tassert(!plus_space);\n\t\t\t\t\tplus_space = true;\n\t\t\t\t\tbreak;\n\t\t\t\tcase '+':\n\t\t\t\t\tassert(!plus_plus);\n\t\t\t\t\tplus_plus = true;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault: goto label_width;\n\t\t\t\t}\n\t\t\t\tf++;\n\t\t\t}\n\t\t\t/* Width. */\n\t\t\tlabel_width:\n\t\t\tswitch (*f) {\n\t\t\tcase '*':\n\t\t\t\twidth = va_arg(ap, int);\n\t\t\t\tf++;\n\t\t\t\tif (width < 0) {\n\t\t\t\t\tleft_justify = true;\n\t\t\t\t\twidth = -width;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase '0':\n\t\t\t\tif (first_width_digit) {\n\t\t\t\t\tpad_zero = true;\n\t\t\t\t}\n\t\t\t\tJEMALLOC_FALLTHROUGH;\n\t\t\tcase '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9': {\n\t\t\t\tuintmax_t uwidth;\n\t\t\t\tset_errno(0);\n\t\t\t\tuwidth = malloc_strtoumax(f, (char **)&f, 10);\n\t\t\t\tassert(uwidth != UINTMAX_MAX || get_errno() !=\n\t\t\t\t    ERANGE);\n\t\t\t\twidth = (int)uwidth;\n\t\t\t\tfirst_width_digit = false;\n\t\t\t\tbreak;\n\t\t\t} default:\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t/* Width/precision separator. */\n\t\t\tif (*f == '.') {\n\t\t\t\tf++;\n\t\t\t} else {\n\t\t\t\tgoto label_length;\n\t\t\t}\n\t\t\t/* Precision. */\n\t\t\tswitch (*f) {\n\t\t\tcase '*':\n\t\t\t\tprec = va_arg(ap, int);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9': {\n\t\t\t\tuintmax_t uprec;\n\t\t\t\tset_errno(0);\n\t\t\t\tuprec = malloc_strtoumax(f, (char **)&f, 10);\n\t\t\t\tassert(uprec != UINTMAX_MAX || get_errno() !=\n\t\t\t\t    ERANGE);\n\t\t\t\tprec = (int)uprec;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tdefault: break;\n\t\t\t}\n\t\t\t/* Length. 
 */\n\t\t\tlabel_length:\n\t\t\tswitch (*f) {\n\t\t\tcase 'l':\n\t\t\t\tf++;\n\t\t\t\tif (*f == 'l') {\n\t\t\t\t\tlen = 'q';\n\t\t\t\t\tf++;\n\t\t\t\t} else {\n\t\t\t\t\tlen = 'l';\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'q': case 'j': case 't': case 'z':\n\t\t\t\tlen = *f;\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\tdefault: break;\n\t\t\t}\n\t\t\t/* Conversion specifier. */\n\t\t\tswitch (*f) {\n\t\t\tcase '%':\n\t\t\t\t/* %% */\n\t\t\t\tAPPEND_C(*f);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\tcase 'd': case 'i': {\n\t\t\t\tintmax_t val JEMALLOC_CC_SILENCE_INIT(0);\n\t\t\t\tchar buf[D2S_BUFSIZE];\n\n\t\t\t\t/*\n\t\t\t\t * Outputting negative, zero-padded numbers\n\t\t\t\t * would require a nontrivial rework of the\n\t\t\t\t * interaction between the width and padding\n\t\t\t\t * (since 0 padding goes between the '-' and the\n\t\t\t\t * number, while ' ' padding goes either before\n\t\t\t\t * the - or after the number).  Since we\n\t\t\t\t * currently don't ever need 0-padded negative\n\t\t\t\t * numbers, just don't bother supporting it.\n\t\t\t\t */\n\t\t\t\tassert(!pad_zero);\n\n\t\t\t\tGET_ARG_NUMERIC(val, len);\n\t\t\t\ts = d2s(val, (plus_plus ? 
'+' : (plus_space ?\n\t\t\t\t    ' ' : '-')), buf, &slen);\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} case 'o': {\n\t\t\t\tuintmax_t val JEMALLOC_CC_SILENCE_INIT(0);\n\t\t\t\tchar buf[O2S_BUFSIZE];\n\n\t\t\t\tGET_ARG_NUMERIC(val, len | 0x80);\n\t\t\t\ts = o2s(val, alt_form, buf, &slen);\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} case 'u': {\n\t\t\t\tuintmax_t val JEMALLOC_CC_SILENCE_INIT(0);\n\t\t\t\tchar buf[U2S_BUFSIZE];\n\n\t\t\t\tGET_ARG_NUMERIC(val, len | 0x80);\n\t\t\t\ts = u2s(val, 10, false, buf, &slen);\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} case 'x': case 'X': {\n\t\t\t\tuintmax_t val JEMALLOC_CC_SILENCE_INIT(0);\n\t\t\t\tchar buf[X2S_BUFSIZE];\n\n\t\t\t\tGET_ARG_NUMERIC(val, len | 0x80);\n\t\t\t\ts = x2s(val, alt_form, *f == 'X', buf, &slen);\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} case 'c': {\n\t\t\t\tunsigned char val;\n\t\t\t\tchar buf[2];\n\n\t\t\t\tassert(len == '?' || len == 'l');\n\t\t\t\tassert_not_implemented(len != 'l');\n\t\t\t\tval = va_arg(ap, int);\n\t\t\t\tbuf[0] = val;\n\t\t\t\tbuf[1] = '\\0';\n\t\t\t\tAPPEND_PADDED_S(buf, 1, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} case 's':\n\t\t\t\tassert(len == '?' || len == 'l');\n\t\t\t\tassert_not_implemented(len != 'l');\n\t\t\t\ts = va_arg(ap, char *);\n\t\t\t\tslen = (prec < 0) ? 
strlen(s) : (size_t)prec;\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\tcase 'p': {\n\t\t\t\tuintmax_t val;\n\t\t\t\tchar buf[X2S_BUFSIZE];\n\n\t\t\t\tGET_ARG_NUMERIC(val, 'p');\n\t\t\t\ts = x2s(val, true, false, buf, &slen);\n\t\t\t\tAPPEND_PADDED_S(s, slen, width, left_justify);\n\t\t\t\tf++;\n\t\t\t\tbreak;\n\t\t\t} default: not_reached();\n\t\t\t}\n\t\t\tbreak;\n\t\t} default: {\n\t\t\tAPPEND_C(*f);\n\t\t\tf++;\n\t\t\tbreak;\n\t\t}}\n\t}\n\tlabel_out:\n\tif (i < size) {\n\t\tstr[i] = '\\0';\n\t} else {\n\t\tstr[size - 1] = '\\0';\n\t}\n\n#undef APPEND_C\n#undef APPEND_S\n#undef APPEND_PADDED_S\n#undef GET_ARG_NUMERIC\n\treturn i;\n}\n\nJEMALLOC_FORMAT_PRINTF(3, 4)\nsize_t\nmalloc_snprintf(char *str, size_t size, const char *format, ...) {\n\tsize_t ret;\n\tva_list ap;\n\n\tva_start(ap, format);\n\tret = malloc_vsnprintf(str, size, format, ap);\n\tva_end(ap);\n\n\treturn ret;\n}\n\nvoid\nmalloc_vcprintf(write_cb_t *write_cb, void *cbopaque, const char *format,\n    va_list ap) {\n\tchar buf[MALLOC_PRINTF_BUFSIZE];\n\n\tif (write_cb == NULL) {\n\t\t/*\n\t\t * The caller did not provide an alternate write_cb callback\n\t\t * function, so use the default one.  malloc_write() is an\n\t\t * inline function, so use malloc_message() directly here.\n\t\t */\n\t\twrite_cb = (je_malloc_message != NULL) ? je_malloc_message :\n\t\t    wrtmessage;\n\t}\n\n\tmalloc_vsnprintf(buf, sizeof(buf), format, ap);\n\twrite_cb(cbopaque, buf);\n}\n\n/*\n * Print to a callback function in such a way as to (hopefully) avoid memory\n * allocation.\n */\nJEMALLOC_FORMAT_PRINTF(3, 4)\nvoid\nmalloc_cprintf(write_cb_t *write_cb, void *cbopaque, const char *format, ...) {\n\tva_list ap;\n\n\tva_start(ap, format);\n\tmalloc_vcprintf(write_cb, cbopaque, format, ap);\n\tva_end(ap);\n}\n\n/* Print to stderr in such a way as to avoid memory allocation. */\nJEMALLOC_FORMAT_PRINTF(1, 2)\nvoid\nmalloc_printf(const char *format, ...) 
{\n\tva_list ap;\n\n\tva_start(ap, format);\n\tmalloc_vcprintf(NULL, NULL, format, ap);\n\tva_end(ap);\n}\n\n/*\n * Restore normal assertion macros, in order to make it possible to compile all\n * C files as a single concatenation.\n */\n#undef assert\n#undef not_reached\n#undef not_implemented\n#undef assert_not_implemented\n#include \"jemalloc/internal/assert.h\"\n"
  },
  {
    "path": "deps/jemalloc/src/mutex.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/spin.h\"\n\n#ifndef _CRT_SPINCOUNT\n#define _CRT_SPINCOUNT 4000\n#endif\n\n/*\n * Based on benchmark results, a fixed spin with this amount of retries works\n * well for our critical sections.\n */\nint64_t opt_mutex_max_spin = 600;\n\n/******************************************************************************/\n/* Data. */\n\n#ifdef JEMALLOC_LAZY_LOCK\nbool isthreaded = false;\n#endif\n#ifdef JEMALLOC_MUTEX_INIT_CB\nstatic bool\t\tpostpone_init = true;\nstatic malloc_mutex_t\t*postponed_mutexes = NULL;\n#endif\n\n/******************************************************************************/\n/*\n * We intercept pthread_create() calls in order to toggle isthreaded if the\n * process goes multi-threaded.\n */\n\n#if defined(JEMALLOC_LAZY_LOCK) && !defined(_WIN32)\nJEMALLOC_EXPORT int\npthread_create(pthread_t *__restrict thread,\n    const pthread_attr_t *__restrict attr, void *(*start_routine)(void *),\n    void *__restrict arg) {\n\treturn pthread_create_wrapper(thread, attr, start_routine, arg);\n}\n#endif\n\n/******************************************************************************/\n\n#ifdef JEMALLOC_MUTEX_INIT_CB\nJEMALLOC_EXPORT int\t_pthread_mutex_init_calloc_cb(pthread_mutex_t *mutex,\n    void *(calloc_cb)(size_t, size_t));\n#endif\n\nvoid\nmalloc_mutex_lock_slow(malloc_mutex_t *mutex) {\n\tmutex_prof_data_t *data = &mutex->prof_data;\n\tnstime_t before;\n\n\tif (ncpus == 1) {\n\t\tgoto label_spin_done;\n\t}\n\n\tint cnt = 0;\n\tdo {\n\t\tspin_cpu_spinwait();\n\t\tif (!atomic_load_b(&mutex->locked, ATOMIC_RELAXED)\n                    && !malloc_mutex_trylock_final(mutex)) {\n\t\t\tdata->n_spin_acquired++;\n\t\t\treturn;\n\t\t}\n\t} while (cnt++ < opt_mutex_max_spin || opt_mutex_max_spin == -1);\n\n\tif 
(!config_stats) {\n\t\t/* Only spin is useful when stats is off. */\n\t\tmalloc_mutex_lock_final(mutex);\n\t\treturn;\n\t}\nlabel_spin_done:\n\tnstime_init_update(&before);\n\t/* Copy before to after to avoid clock skews. */\n\tnstime_t after;\n\tnstime_copy(&after, &before);\n\tuint32_t n_thds = atomic_fetch_add_u32(&data->n_waiting_thds, 1,\n\t    ATOMIC_RELAXED) + 1;\n\t/* One last try as above two calls may take quite some cycles. */\n\tif (!malloc_mutex_trylock_final(mutex)) {\n\t\tatomic_fetch_sub_u32(&data->n_waiting_thds, 1, ATOMIC_RELAXED);\n\t\tdata->n_spin_acquired++;\n\t\treturn;\n\t}\n\n\t/* True slow path. */\n\tmalloc_mutex_lock_final(mutex);\n\t/* Update more slow-path only counters. */\n\tatomic_fetch_sub_u32(&data->n_waiting_thds, 1, ATOMIC_RELAXED);\n\tnstime_update(&after);\n\n\tnstime_t delta;\n\tnstime_copy(&delta, &after);\n\tnstime_subtract(&delta, &before);\n\n\tdata->n_wait_times++;\n\tnstime_add(&data->tot_wait_time, &delta);\n\tif (nstime_compare(&data->max_wait_time, &delta) < 0) {\n\t\tnstime_copy(&data->max_wait_time, &delta);\n\t}\n\tif (n_thds > data->max_n_thds) {\n\t\tdata->max_n_thds = n_thds;\n\t}\n}\n\nstatic void\nmutex_prof_data_init(mutex_prof_data_t *data) {\n\tmemset(data, 0, sizeof(mutex_prof_data_t));\n\tnstime_init_zero(&data->max_wait_time);\n\tnstime_init_zero(&data->tot_wait_time);\n\tdata->prev_owner = NULL;\n}\n\nvoid\nmalloc_mutex_prof_data_reset(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\tmalloc_mutex_assert_owner(tsdn, mutex);\n\tmutex_prof_data_init(&mutex->prof_data);\n}\n\nstatic int\nmutex_addr_comp(const witness_t *witness1, void *mutex1,\n    const witness_t *witness2, void *mutex2) {\n\tassert(mutex1 != NULL);\n\tassert(mutex2 != NULL);\n\tuintptr_t mu1int = (uintptr_t)mutex1;\n\tuintptr_t mu2int = (uintptr_t)mutex2;\n\tif (mu1int < mu2int) {\n\t\treturn -1;\n\t} else if (mu1int == mu2int) {\n\t\treturn 0;\n\t} else {\n\t\treturn 1;\n\t}\n}\n\nbool\nmalloc_mutex_init(malloc_mutex_t *mutex, const char 
*name,\n    witness_rank_t rank, malloc_mutex_lock_order_t lock_order) {\n\tmutex_prof_data_init(&mutex->prof_data);\n#ifdef _WIN32\n#  if _WIN32_WINNT >= 0x0600\n\tInitializeSRWLock(&mutex->lock);\n#  else\n\tif (!InitializeCriticalSectionAndSpinCount(&mutex->lock,\n\t    _CRT_SPINCOUNT)) {\n\t\treturn true;\n\t}\n#  endif\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n       mutex->lock = OS_UNFAIR_LOCK_INIT;\n#elif (defined(JEMALLOC_MUTEX_INIT_CB))\n\tif (postpone_init) {\n\t\tmutex->postponed_next = postponed_mutexes;\n\t\tpostponed_mutexes = mutex;\n\t} else {\n\t\tif (_pthread_mutex_init_calloc_cb(&mutex->lock,\n\t\t    bootstrap_calloc) != 0) {\n\t\t\treturn true;\n\t\t}\n\t}\n#else\n\tpthread_mutexattr_t attr;\n\n\tif (pthread_mutexattr_init(&attr) != 0) {\n\t\treturn true;\n\t}\n\tpthread_mutexattr_settype(&attr, MALLOC_MUTEX_TYPE);\n\tif (pthread_mutex_init(&mutex->lock, &attr) != 0) {\n\t\tpthread_mutexattr_destroy(&attr);\n\t\treturn true;\n\t}\n\tpthread_mutexattr_destroy(&attr);\n#endif\n\tif (config_debug) {\n\t\tmutex->lock_order = lock_order;\n\t\tif (lock_order == malloc_mutex_address_ordered) {\n\t\t\twitness_init(&mutex->witness, name, rank,\n\t\t\t    mutex_addr_comp, mutex);\n\t\t} else {\n\t\t\twitness_init(&mutex->witness, name, rank, NULL, NULL);\n\t\t}\n\t}\n\treturn false;\n}\n\nvoid\nmalloc_mutex_prefork(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\tmalloc_mutex_lock(tsdn, mutex);\n}\n\nvoid\nmalloc_mutex_postfork_parent(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n\tmalloc_mutex_unlock(tsdn, mutex);\n}\n\nvoid\nmalloc_mutex_postfork_child(tsdn_t *tsdn, malloc_mutex_t *mutex) {\n#ifdef JEMALLOC_MUTEX_INIT_CB\n\tmalloc_mutex_unlock(tsdn, mutex);\n#else\n\tif (malloc_mutex_init(mutex, mutex->witness.name,\n\t    mutex->witness.rank, mutex->lock_order)) {\n\t\tmalloc_printf(\"<jemalloc>: Error re-initializing mutex in \"\n\t\t    \"child\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n#endif\n}\n\nbool\nmalloc_mutex_boot(void) {\n#ifdef 
JEMALLOC_MUTEX_INIT_CB\n\tpostpone_init = false;\n\twhile (postponed_mutexes != NULL) {\n\t\tif (_pthread_mutex_init_calloc_cb(&postponed_mutexes->lock,\n\t\t    bootstrap_calloc) != 0) {\n\t\t\treturn true;\n\t\t}\n\t\tpostponed_mutexes = postponed_mutexes->postponed_next;\n\t}\n#endif\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/nstime.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/nstime.h\"\n\n#include \"jemalloc/internal/assert.h\"\n\n#define BILLION\tUINT64_C(1000000000)\n#define MILLION\tUINT64_C(1000000)\n\nstatic void\nnstime_set_initialized(nstime_t *time) {\n#ifdef JEMALLOC_DEBUG\n\ttime->magic = NSTIME_MAGIC;\n#endif\n}\n\nstatic void\nnstime_assert_initialized(const nstime_t *time) {\n#ifdef JEMALLOC_DEBUG\n\t/*\n\t * Some parts (e.g. stats) rely on memset to zero initialize.  Treat\n\t * these as valid initialization.\n\t */\n\tassert(time->magic == NSTIME_MAGIC ||\n\t    (time->magic == 0 && time->ns == 0));\n#endif\n}\n\nstatic void\nnstime_pair_assert_initialized(const nstime_t *t1, const nstime_t *t2) {\n\tnstime_assert_initialized(t1);\n\tnstime_assert_initialized(t2);\n}\n\nstatic void\nnstime_initialize_operand(nstime_t *time) {\n\t/*\n\t * Operations like nstime_add may have the initial operand being zero\n\t * initialized (covered by the assert below).  
Full-initialize needed\n\t * before changing it to non-zero.\n\t */\n\tnstime_assert_initialized(time);\n\tnstime_set_initialized(time);\n}\n\nvoid\nnstime_init(nstime_t *time, uint64_t ns) {\n\tnstime_set_initialized(time);\n\ttime->ns = ns;\n}\n\nvoid\nnstime_init2(nstime_t *time, uint64_t sec, uint64_t nsec) {\n\tnstime_set_initialized(time);\n\ttime->ns = sec * BILLION + nsec;\n}\n\nuint64_t\nnstime_ns(const nstime_t *time) {\n\tnstime_assert_initialized(time);\n\treturn time->ns;\n}\n\nuint64_t\nnstime_msec(const nstime_t *time) {\n\tnstime_assert_initialized(time);\n\treturn time->ns / MILLION;\n}\n\nuint64_t\nnstime_sec(const nstime_t *time) {\n\tnstime_assert_initialized(time);\n\treturn time->ns / BILLION;\n}\n\nuint64_t\nnstime_nsec(const nstime_t *time) {\n\tnstime_assert_initialized(time);\n\treturn time->ns % BILLION;\n}\n\nvoid\nnstime_copy(nstime_t *time, const nstime_t *source) {\n\t/* Source is required to be initialized. */\n\tnstime_assert_initialized(source);\n\t*time = *source;\n\tnstime_assert_initialized(time);\n}\n\nint\nnstime_compare(const nstime_t *a, const nstime_t *b) {\n\tnstime_pair_assert_initialized(a, b);\n\treturn (a->ns > b->ns) - (a->ns < b->ns);\n}\n\nvoid\nnstime_add(nstime_t *time, const nstime_t *addend) {\n\tnstime_pair_assert_initialized(time, addend);\n\tassert(UINT64_MAX - time->ns >= addend->ns);\n\n\tnstime_initialize_operand(time);\n\ttime->ns += addend->ns;\n}\n\nvoid\nnstime_iadd(nstime_t *time, uint64_t addend) {\n\tnstime_assert_initialized(time);\n\tassert(UINT64_MAX - time->ns >= addend);\n\n\tnstime_initialize_operand(time);\n\ttime->ns += addend;\n}\n\nvoid\nnstime_subtract(nstime_t *time, const nstime_t *subtrahend) {\n\tnstime_pair_assert_initialized(time, subtrahend);\n\tassert(nstime_compare(time, subtrahend) >= 0);\n\n\t/* No initialize operand -- subtraction must be initialized. 
*/\n\ttime->ns -= subtrahend->ns;\n}\n\nvoid\nnstime_isubtract(nstime_t *time, uint64_t subtrahend) {\n\tnstime_assert_initialized(time);\n\tassert(time->ns >= subtrahend);\n\n\t/* No initialize operand -- subtraction must be initialized. */\n\ttime->ns -= subtrahend;\n}\n\nvoid\nnstime_imultiply(nstime_t *time, uint64_t multiplier) {\n\tnstime_assert_initialized(time);\n\tassert((((time->ns | multiplier) & (UINT64_MAX << (sizeof(uint64_t) <<\n\t    2))) == 0) || ((time->ns * multiplier) / multiplier == time->ns));\n\n\tnstime_initialize_operand(time);\n\ttime->ns *= multiplier;\n}\n\nvoid\nnstime_idivide(nstime_t *time, uint64_t divisor) {\n\tnstime_assert_initialized(time);\n\tassert(divisor != 0);\n\n\tnstime_initialize_operand(time);\n\ttime->ns /= divisor;\n}\n\nuint64_t\nnstime_divide(const nstime_t *time, const nstime_t *divisor) {\n\tnstime_pair_assert_initialized(time, divisor);\n\tassert(divisor->ns != 0);\n\n\t/* No initialize operand -- *time itself remains unchanged. */\n\treturn time->ns / divisor->ns;\n}\n\n/* Returns time since *past, w/o updating *past. 
*/\nuint64_t\nnstime_ns_since(const nstime_t *past) {\n\tnstime_assert_initialized(past);\n\n\tnstime_t now;\n\tnstime_copy(&now, past);\n\tnstime_update(&now);\n\n\tassert(nstime_compare(&now, past) >= 0);\n\treturn now.ns - past->ns;\n}\n\n#ifdef _WIN32\n#  define NSTIME_MONOTONIC true\nstatic void\nnstime_get(nstime_t *time) {\n\tFILETIME ft;\n\tuint64_t ticks_100ns;\n\n\tGetSystemTimeAsFileTime(&ft);\n\tticks_100ns = (((uint64_t)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;\n\n\tnstime_init(time, ticks_100ns * 100);\n}\n#elif defined(JEMALLOC_HAVE_CLOCK_MONOTONIC_COARSE)\n#  define NSTIME_MONOTONIC true\nstatic void\nnstime_get(nstime_t *time) {\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC_COARSE, &ts);\n\tnstime_init2(time, ts.tv_sec, ts.tv_nsec);\n}\n#elif defined(JEMALLOC_HAVE_CLOCK_MONOTONIC)\n#  define NSTIME_MONOTONIC true\nstatic void\nnstime_get(nstime_t *time) {\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_MONOTONIC, &ts);\n\tnstime_init2(time, ts.tv_sec, ts.tv_nsec);\n}\n#elif defined(JEMALLOC_HAVE_MACH_ABSOLUTE_TIME)\n#  define NSTIME_MONOTONIC true\nstatic void\nnstime_get(nstime_t *time) {\n\tnstime_init(time, mach_absolute_time());\n}\n#else\n#  define NSTIME_MONOTONIC false\nstatic void\nnstime_get(nstime_t *time) {\n\tstruct timeval tv;\n\n\tgettimeofday(&tv, NULL);\n\tnstime_init2(time, tv.tv_sec, tv.tv_usec * 1000);\n}\n#endif\n\nstatic bool\nnstime_monotonic_impl(void) {\n\treturn NSTIME_MONOTONIC;\n#undef NSTIME_MONOTONIC\n}\nnstime_monotonic_t *JET_MUTABLE nstime_monotonic = nstime_monotonic_impl;\n\nprof_time_res_t opt_prof_time_res =\n\tprof_time_res_default;\n\nconst char *prof_time_res_mode_names[] = {\n\t\"default\",\n\t\"high\",\n};\n\n\nstatic void\nnstime_get_realtime(nstime_t *time) {\n#if defined(JEMALLOC_HAVE_CLOCK_REALTIME) && !defined(_WIN32)\n\tstruct timespec ts;\n\n\tclock_gettime(CLOCK_REALTIME, &ts);\n\tnstime_init2(time, ts.tv_sec, ts.tv_nsec);\n#else\n\tunreachable();\n#endif\n}\n\nstatic 
void\nnstime_prof_update_impl(nstime_t *time) {\n\tnstime_t old_time;\n\n\tnstime_copy(&old_time, time);\n\n\tif (opt_prof_time_res == prof_time_res_high) {\n\t\tnstime_get_realtime(time);\n\t} else {\n\t\tnstime_get(time);\n\t}\n}\nnstime_prof_update_t *JET_MUTABLE nstime_prof_update = nstime_prof_update_impl;\n\nstatic void\nnstime_update_impl(nstime_t *time) {\n\tnstime_t old_time;\n\n\tnstime_copy(&old_time, time);\n\tnstime_get(time);\n\n\t/* Handle non-monotonic clocks. */\n\tif (unlikely(nstime_compare(&old_time, time) > 0)) {\n\t\tnstime_copy(time, &old_time);\n\t}\n}\nnstime_update_t *JET_MUTABLE nstime_update = nstime_update_impl;\n\nvoid\nnstime_init_update(nstime_t *time) {\n\tnstime_init_zero(time);\n\tnstime_update(time);\n}\n\nvoid\nnstime_prof_init_update(nstime_t *time) {\n\tnstime_init_zero(time);\n\tnstime_prof_update(time);\n}\n\n\n"
  },
  {
    "path": "deps/jemalloc/src/pa.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/hpa.h\"\n\nstatic void\npa_nactive_add(pa_shard_t *shard, size_t add_pages) {\n\tatomic_fetch_add_zu(&shard->nactive, add_pages, ATOMIC_RELAXED);\n}\n\nstatic void\npa_nactive_sub(pa_shard_t *shard, size_t sub_pages) {\n\tassert(atomic_load_zu(&shard->nactive, ATOMIC_RELAXED) >= sub_pages);\n\tatomic_fetch_sub_zu(&shard->nactive, sub_pages, ATOMIC_RELAXED);\n}\n\nbool\npa_central_init(pa_central_t *central, base_t *base, bool hpa,\n    hpa_hooks_t *hpa_hooks) {\n\tbool err;\n\tif (hpa) {\n\t\terr = hpa_central_init(&central->hpa, base, hpa_hooks);\n\t\tif (err) {\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\nbool\npa_shard_init(tsdn_t *tsdn, pa_shard_t *shard, pa_central_t *central,\n    emap_t *emap, base_t *base, unsigned ind, pa_shard_stats_t *stats,\n    malloc_mutex_t *stats_mtx, nstime_t *cur_time,\n    size_t pac_oversize_threshold, ssize_t dirty_decay_ms,\n    ssize_t muzzy_decay_ms) {\n\t/* This will change eventually, but for now it should hold. 
*/\n\tassert(base_ind_get(base) == ind);\n\tif (edata_cache_init(&shard->edata_cache, base)) {\n\t\treturn true;\n\t}\n\n\tif (pac_init(tsdn, &shard->pac, base, emap, &shard->edata_cache,\n\t    cur_time, pac_oversize_threshold, dirty_decay_ms, muzzy_decay_ms,\n\t    &stats->pac_stats, stats_mtx)) {\n\t\treturn true;\n\t}\n\n\tshard->ind = ind;\n\n\tshard->ever_used_hpa = false;\n\tatomic_store_b(&shard->use_hpa, false, ATOMIC_RELAXED);\n\n\tatomic_store_zu(&shard->nactive, 0, ATOMIC_RELAXED);\n\n\tshard->stats_mtx = stats_mtx;\n\tshard->stats = stats;\n\tmemset(shard->stats, 0, sizeof(*shard->stats));\n\n\tshard->central = central;\n\tshard->emap = emap;\n\tshard->base = base;\n\n\treturn false;\n}\n\nbool\npa_shard_enable_hpa(tsdn_t *tsdn, pa_shard_t *shard,\n    const hpa_shard_opts_t *hpa_opts, const sec_opts_t *hpa_sec_opts) {\n\tif (hpa_shard_init(&shard->hpa_shard, &shard->central->hpa, shard->emap,\n\t    shard->base, &shard->edata_cache, shard->ind, hpa_opts)) {\n\t\treturn true;\n\t}\n\tif (sec_init(tsdn, &shard->hpa_sec, shard->base, &shard->hpa_shard.pai,\n\t    hpa_sec_opts)) {\n\t\treturn true;\n\t}\n\tshard->ever_used_hpa = true;\n\tatomic_store_b(&shard->use_hpa, true, ATOMIC_RELAXED);\n\n\treturn false;\n}\n\nvoid\npa_shard_disable_hpa(tsdn_t *tsdn, pa_shard_t *shard) {\n\tatomic_store_b(&shard->use_hpa, false, ATOMIC_RELAXED);\n\tif (shard->ever_used_hpa) {\n\t\tsec_disable(tsdn, &shard->hpa_sec);\n\t\thpa_shard_disable(tsdn, &shard->hpa_shard);\n\t}\n}\n\nvoid\npa_shard_reset(tsdn_t *tsdn, pa_shard_t *shard) {\n\tatomic_store_zu(&shard->nactive, 0, ATOMIC_RELAXED);\n\tif (shard->ever_used_hpa) {\n\t\tsec_flush(tsdn, &shard->hpa_sec);\n\t}\n}\n\nstatic bool\npa_shard_uses_hpa(pa_shard_t *shard) {\n\treturn atomic_load_b(&shard->use_hpa, ATOMIC_RELAXED);\n}\n\nvoid\npa_shard_destroy(tsdn_t *tsdn, pa_shard_t *shard) {\n\tpac_destroy(tsdn, &shard->pac);\n\tif (shard->ever_used_hpa) {\n\t\tsec_flush(tsdn, &shard->hpa_sec);\n\t\thpa_shard_disable(tsdn, 
&shard->hpa_shard);\n\t}\n}\n\nstatic pai_t *\npa_get_pai(pa_shard_t *shard, edata_t *edata) {\n\treturn (edata_pai_get(edata) == EXTENT_PAI_PAC\n\t    ? &shard->pac.pai : &shard->hpa_sec.pai);\n}\n\nedata_t *\npa_alloc(tsdn_t *tsdn, pa_shard_t *shard, size_t size, size_t alignment,\n    bool slab, szind_t szind, bool zero, bool guarded,\n    bool *deferred_work_generated) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tassert(!guarded || alignment <= PAGE);\n\n\tedata_t *edata = NULL;\n\tif (!guarded && pa_shard_uses_hpa(shard)) {\n\t\tedata = pai_alloc(tsdn, &shard->hpa_sec.pai, size, alignment,\n\t\t    zero, /* guarded */ false, slab, deferred_work_generated);\n\t}\n\t/*\n\t * Fall back to the PAC if the HPA is off or couldn't serve the given\n\t * allocation request.\n\t */\n\tif (edata == NULL) {\n\t\tedata = pai_alloc(tsdn, &shard->pac.pai, size, alignment, zero,\n\t\t    guarded, slab, deferred_work_generated);\n\t}\n\tif (edata != NULL) {\n\t\tassert(edata_size_get(edata) == size);\n\t\tpa_nactive_add(shard, size >> LG_PAGE);\n\t\temap_remap(tsdn, shard->emap, edata, szind, slab);\n\t\tedata_szind_set(edata, szind);\n\t\tedata_slab_set(edata, slab);\n\t\tif (slab && (size > 2 * PAGE)) {\n\t\t\temap_register_interior(tsdn, shard->emap, edata, szind);\n\t\t}\n\t\tassert(edata_arena_ind_get(edata) == shard->ind);\n\t}\n\treturn edata;\n}\n\nbool\npa_expand(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,\n    size_t new_size, szind_t szind, bool zero, bool *deferred_work_generated) {\n\tassert(new_size > old_size);\n\tassert(edata_size_get(edata) == old_size);\n\tassert((new_size & PAGE_MASK) == 0);\n\tif (edata_guarded_get(edata)) {\n\t\treturn true;\n\t}\n\tsize_t expand_amount = new_size - old_size;\n\n\tpai_t *pai = pa_get_pai(shard, edata);\n\n\tbool error = pai_expand(tsdn, pai, edata, old_size, new_size, zero,\n\t    deferred_work_generated);\n\tif (error) {\n\t\treturn 
true;\n\t}\n\n\tpa_nactive_add(shard, expand_amount >> LG_PAGE);\n\tedata_szind_set(edata, szind);\n\temap_remap(tsdn, shard->emap, edata, szind, /* slab */ false);\n\treturn false;\n}\n\nbool\npa_shrink(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata, size_t old_size,\n    size_t new_size, szind_t szind, bool *deferred_work_generated) {\n\tassert(new_size < old_size);\n\tassert(edata_size_get(edata) == old_size);\n\tassert((new_size & PAGE_MASK) == 0);\n\tif (edata_guarded_get(edata)) {\n\t\treturn true;\n\t}\n\tsize_t shrink_amount = old_size - new_size;\n\n\tpai_t *pai = pa_get_pai(shard, edata);\n\tbool error = pai_shrink(tsdn, pai, edata, old_size, new_size,\n\t    deferred_work_generated);\n\tif (error) {\n\t\treturn true;\n\t}\n\tpa_nactive_sub(shard, shrink_amount >> LG_PAGE);\n\n\tedata_szind_set(edata, szind);\n\temap_remap(tsdn, shard->emap, edata, szind, /* slab */ false);\n\treturn false;\n}\n\nvoid\npa_dalloc(tsdn_t *tsdn, pa_shard_t *shard, edata_t *edata,\n    bool *deferred_work_generated) {\n\temap_remap(tsdn, shard->emap, edata, SC_NSIZES, /* slab */ false);\n\tif (edata_slab_get(edata)) {\n\t\temap_deregister_interior(tsdn, shard->emap, edata);\n\t\t/*\n\t\t * The slab state of the extent isn't cleared.  It may be used\n\t\t * by the pai implementation, e.g. 
to make caching decisions.\n\t\t */\n\t}\n\tedata_addr_set(edata, edata_base_get(edata));\n\tedata_szind_set(edata, SC_NSIZES);\n\tpa_nactive_sub(shard, edata_size_get(edata) >> LG_PAGE);\n\tpai_t *pai = pa_get_pai(shard, edata);\n\tpai_dalloc(tsdn, pai, edata, deferred_work_generated);\n}\n\nbool\npa_shard_retain_grow_limit_get_set(tsdn_t *tsdn, pa_shard_t *shard,\n    size_t *old_limit, size_t *new_limit) {\n\treturn pac_retain_grow_limit_get_set(tsdn, &shard->pac, old_limit,\n\t    new_limit);\n}\n\nbool\npa_decay_ms_set(tsdn_t *tsdn, pa_shard_t *shard, extent_state_t state,\n    ssize_t decay_ms, pac_purge_eagerness_t eagerness) {\n\treturn pac_decay_ms_set(tsdn, &shard->pac, state, decay_ms, eagerness);\n}\n\nssize_t\npa_decay_ms_get(pa_shard_t *shard, extent_state_t state) {\n\treturn pac_decay_ms_get(&shard->pac, state);\n}\n\nvoid\npa_shard_set_deferral_allowed(tsdn_t *tsdn, pa_shard_t *shard,\n    bool deferral_allowed) {\n\tif (pa_shard_uses_hpa(shard)) {\n\t\thpa_shard_set_deferral_allowed(tsdn, &shard->hpa_shard,\n\t\t    deferral_allowed);\n\t}\n}\n\nvoid\npa_shard_do_deferred_work(tsdn_t *tsdn, pa_shard_t *shard) {\n\tif (pa_shard_uses_hpa(shard)) {\n\t\thpa_shard_do_deferred_work(tsdn, &shard->hpa_shard);\n\t}\n}\n\n/*\n * Get time until next deferred work ought to happen. If there are multiple\n * things that have been deferred, this function calculates the time until\n * the soonest of those things.\n */\nuint64_t\npa_shard_time_until_deferred_work(tsdn_t *tsdn, pa_shard_t *shard) {\n\tuint64_t time = pai_time_until_deferred_work(tsdn, &shard->pac.pai);\n\tif (time == BACKGROUND_THREAD_DEFERRED_MIN) {\n\t\treturn time;\n\t}\n\n\tif (pa_shard_uses_hpa(shard)) {\n\t\tuint64_t hpa =\n\t\t    pai_time_until_deferred_work(tsdn, &shard->hpa_shard.pai);\n\t\tif (hpa < time) {\n\t\t\ttime = hpa;\n\t\t}\n\t}\n\treturn time;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/pa_extra.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n/*\n * This file is logically part of the PA module.  While pa.c contains the core\n * allocator functionality, this file contains boring integration functionality;\n * things like the pre- and post- fork handlers, and stats merging for CTL\n * refreshes.\n */\n\nvoid\npa_shard_prefork0(tsdn_t *tsdn, pa_shard_t *shard) {\n\tmalloc_mutex_prefork(tsdn, &shard->pac.decay_dirty.mtx);\n\tmalloc_mutex_prefork(tsdn, &shard->pac.decay_muzzy.mtx);\n}\n\nvoid\npa_shard_prefork2(tsdn_t *tsdn, pa_shard_t *shard) {\n\tif (shard->ever_used_hpa) {\n\t\tsec_prefork2(tsdn, &shard->hpa_sec);\n\t}\n}\n\nvoid\npa_shard_prefork3(tsdn_t *tsdn, pa_shard_t *shard) {\n\tmalloc_mutex_prefork(tsdn, &shard->pac.grow_mtx);\n\tif (shard->ever_used_hpa) {\n\t\thpa_shard_prefork3(tsdn, &shard->hpa_shard);\n\t}\n}\n\nvoid\npa_shard_prefork4(tsdn_t *tsdn, pa_shard_t *shard) {\n\tecache_prefork(tsdn, &shard->pac.ecache_dirty);\n\tecache_prefork(tsdn, &shard->pac.ecache_muzzy);\n\tecache_prefork(tsdn, &shard->pac.ecache_retained);\n\tif (shard->ever_used_hpa) {\n\t\thpa_shard_prefork4(tsdn, &shard->hpa_shard);\n\t}\n}\n\nvoid\npa_shard_prefork5(tsdn_t *tsdn, pa_shard_t *shard) {\n\tedata_cache_prefork(tsdn, &shard->edata_cache);\n}\n\nvoid\npa_shard_postfork_parent(tsdn_t *tsdn, pa_shard_t *shard) {\n\tedata_cache_postfork_parent(tsdn, &shard->edata_cache);\n\tecache_postfork_parent(tsdn, &shard->pac.ecache_dirty);\n\tecache_postfork_parent(tsdn, &shard->pac.ecache_muzzy);\n\tecache_postfork_parent(tsdn, &shard->pac.ecache_retained);\n\tmalloc_mutex_postfork_parent(tsdn, &shard->pac.grow_mtx);\n\tmalloc_mutex_postfork_parent(tsdn, &shard->pac.decay_dirty.mtx);\n\tmalloc_mutex_postfork_parent(tsdn, &shard->pac.decay_muzzy.mtx);\n\tif (shard->ever_used_hpa) {\n\t\tsec_postfork_parent(tsdn, &shard->hpa_sec);\n\t\thpa_shard_postfork_parent(tsdn, 
&shard->hpa_shard);\n\t}\n}\n\nvoid\npa_shard_postfork_child(tsdn_t *tsdn, pa_shard_t *shard) {\n\tedata_cache_postfork_child(tsdn, &shard->edata_cache);\n\tecache_postfork_child(tsdn, &shard->pac.ecache_dirty);\n\tecache_postfork_child(tsdn, &shard->pac.ecache_muzzy);\n\tecache_postfork_child(tsdn, &shard->pac.ecache_retained);\n\tmalloc_mutex_postfork_child(tsdn, &shard->pac.grow_mtx);\n\tmalloc_mutex_postfork_child(tsdn, &shard->pac.decay_dirty.mtx);\n\tmalloc_mutex_postfork_child(tsdn, &shard->pac.decay_muzzy.mtx);\n\tif (shard->ever_used_hpa) {\n\t\tsec_postfork_child(tsdn, &shard->hpa_sec);\n\t\thpa_shard_postfork_child(tsdn, &shard->hpa_shard);\n\t}\n}\n\nvoid\npa_shard_basic_stats_merge(pa_shard_t *shard, size_t *nactive, size_t *ndirty,\n    size_t *nmuzzy) {\n\t*nactive += atomic_load_zu(&shard->nactive, ATOMIC_RELAXED);\n\t*ndirty += ecache_npages_get(&shard->pac.ecache_dirty);\n\t*nmuzzy += ecache_npages_get(&shard->pac.ecache_muzzy);\n}\n\nvoid\npa_shard_stats_merge(tsdn_t *tsdn, pa_shard_t *shard,\n    pa_shard_stats_t *pa_shard_stats_out, pac_estats_t *estats_out,\n    hpa_shard_stats_t *hpa_stats_out, sec_stats_t *sec_stats_out,\n    size_t *resident) {\n\tcassert(config_stats);\n\n\tpa_shard_stats_out->pac_stats.retained +=\n\t    ecache_npages_get(&shard->pac.ecache_retained) << LG_PAGE;\n\tpa_shard_stats_out->edata_avail += atomic_load_zu(\n\t    &shard->edata_cache.count, ATOMIC_RELAXED);\n\n\tsize_t resident_pgs = 0;\n\tresident_pgs += atomic_load_zu(&shard->nactive, ATOMIC_RELAXED);\n\tresident_pgs += ecache_npages_get(&shard->pac.ecache_dirty);\n\t*resident += (resident_pgs << LG_PAGE);\n\n\t/* Dirty decay stats */\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_dirty.npurge,\n\t    locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_dirty.npurge));\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_dirty.nmadvise,\n\t    locked_read_u64(tsdn, 
LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_dirty.nmadvise));\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_dirty.purged,\n\t    locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_dirty.purged));\n\n\t/* Muzzy decay stats */\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_muzzy.npurge,\n\t    locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_muzzy.npurge));\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_muzzy.nmadvise,\n\t    locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_muzzy.nmadvise));\n\tlocked_inc_u64_unsynchronized(\n\t    &pa_shard_stats_out->pac_stats.decay_muzzy.purged,\n\t    locked_read_u64(tsdn, LOCKEDINT_MTX(*shard->stats_mtx),\n\t    &shard->pac.stats->decay_muzzy.purged));\n\n\tatomic_load_add_store_zu(&pa_shard_stats_out->pac_stats.abandoned_vm,\n\t    atomic_load_zu(&shard->pac.stats->abandoned_vm, ATOMIC_RELAXED));\n\n\tfor (pszind_t i = 0; i < SC_NPSIZES; i++) {\n\t\tsize_t dirty, muzzy, retained, dirty_bytes, muzzy_bytes,\n\t\t    retained_bytes;\n\t\tdirty = ecache_nextents_get(&shard->pac.ecache_dirty, i);\n\t\tmuzzy = ecache_nextents_get(&shard->pac.ecache_muzzy, i);\n\t\tretained = ecache_nextents_get(&shard->pac.ecache_retained, i);\n\t\tdirty_bytes = ecache_nbytes_get(&shard->pac.ecache_dirty, i);\n\t\tmuzzy_bytes = ecache_nbytes_get(&shard->pac.ecache_muzzy, i);\n\t\tretained_bytes = ecache_nbytes_get(&shard->pac.ecache_retained,\n\t\t    i);\n\n\t\testats_out[i].ndirty = dirty;\n\t\testats_out[i].nmuzzy = muzzy;\n\t\testats_out[i].nretained = retained;\n\t\testats_out[i].dirty_bytes = dirty_bytes;\n\t\testats_out[i].muzzy_bytes = muzzy_bytes;\n\t\testats_out[i].retained_bytes = retained_bytes;\n\t}\n\n\tif (shard->ever_used_hpa) {\n\t\thpa_shard_stats_merge(tsdn, &shard->hpa_shard, 
hpa_stats_out);\n\t\tsec_stats_merge(tsdn, &shard->hpa_sec, sec_stats_out);\n\t}\n}\n\nstatic void\npa_shard_mtx_stats_read_single(tsdn_t *tsdn, mutex_prof_data_t *mutex_prof_data,\n    malloc_mutex_t *mtx, int ind) {\n\tmalloc_mutex_lock(tsdn, mtx);\n\tmalloc_mutex_prof_read(tsdn, &mutex_prof_data[ind], mtx);\n\tmalloc_mutex_unlock(tsdn, mtx);\n}\n\nvoid\npa_shard_mtx_stats_read(tsdn_t *tsdn, pa_shard_t *shard,\n    mutex_prof_data_t mutex_prof_data[mutex_prof_num_arena_mutexes]) {\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->edata_cache.mtx, arena_prof_mutex_extent_avail);\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->pac.ecache_dirty.mtx, arena_prof_mutex_extents_dirty);\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->pac.ecache_muzzy.mtx, arena_prof_mutex_extents_muzzy);\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->pac.ecache_retained.mtx, arena_prof_mutex_extents_retained);\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->pac.decay_dirty.mtx, arena_prof_mutex_decay_dirty);\n\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t    &shard->pac.decay_muzzy.mtx, arena_prof_mutex_decay_muzzy);\n\n\tif (shard->ever_used_hpa) {\n\t\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t\t    &shard->hpa_shard.mtx, arena_prof_mutex_hpa_shard);\n\t\tpa_shard_mtx_stats_read_single(tsdn, mutex_prof_data,\n\t\t    &shard->hpa_shard.grow_mtx,\n\t\t    arena_prof_mutex_hpa_shard_grow);\n\t\tsec_mutex_stats_read(tsdn, &shard->hpa_sec,\n\t\t    &mutex_prof_data[arena_prof_mutex_hpa_sec]);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/pac.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/pac.h\"\n#include \"jemalloc/internal/san.h\"\n\nstatic edata_t *pac_alloc_impl(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t alignment, bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated);\nstatic bool pac_expand_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool zero, bool *deferred_work_generated);\nstatic bool pac_shrink_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool *deferred_work_generated);\nstatic void pac_dalloc_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated);\nstatic uint64_t pac_time_until_deferred_work(tsdn_t *tsdn, pai_t *self);\n\nstatic inline void\npac_decay_data_get(pac_t *pac, extent_state_t state,\n    decay_t **r_decay, pac_decay_stats_t **r_decay_stats, ecache_t **r_ecache) {\n\tswitch(state) {\n\tcase extent_state_dirty:\n\t\t*r_decay = &pac->decay_dirty;\n\t\t*r_decay_stats = &pac->stats->decay_dirty;\n\t\t*r_ecache = &pac->ecache_dirty;\n\t\treturn;\n\tcase extent_state_muzzy:\n\t\t*r_decay = &pac->decay_muzzy;\n\t\t*r_decay_stats = &pac->stats->decay_muzzy;\n\t\t*r_ecache = &pac->ecache_muzzy;\n\t\treturn;\n\tdefault:\n\t\tunreachable();\n\t}\n}\n\nbool\npac_init(tsdn_t *tsdn, pac_t *pac, base_t *base, emap_t *emap,\n    edata_cache_t *edata_cache, nstime_t *cur_time,\n    size_t pac_oversize_threshold, ssize_t dirty_decay_ms,\n    ssize_t muzzy_decay_ms, pac_stats_t *pac_stats, malloc_mutex_t *stats_mtx) {\n\tunsigned ind = base_ind_get(base);\n\t/*\n\t * Delay coalescing for dirty extents despite the disruptive effect on\n\t * memory layout for best-fit extent allocation, since cached extents\n\t * are likely to be reused soon after deallocation, and the cost of\n\t * merging/splitting extents is non-trivial.\n\t */\n\tif 
(ecache_init(tsdn, &pac->ecache_dirty, extent_state_dirty, ind,\n\t    /* delay_coalesce */ true)) {\n\t\treturn true;\n\t}\n\t/*\n\t * Coalesce muzzy extents immediately, because operations on them are in\n\t * the critical path much less often than for dirty extents.\n\t */\n\tif (ecache_init(tsdn, &pac->ecache_muzzy, extent_state_muzzy, ind,\n\t    /* delay_coalesce */ false)) {\n\t\treturn true;\n\t}\n\t/*\n\t * Coalesce retained extents immediately, in part because they will\n\t * never be evicted (and therefore there's no opportunity for delayed\n\t * coalescing), but also because operations on retained extents are not\n\t * in the critical path.\n\t */\n\tif (ecache_init(tsdn, &pac->ecache_retained, extent_state_retained,\n\t    ind, /* delay_coalesce */ false)) {\n\t\treturn true;\n\t}\n\texp_grow_init(&pac->exp_grow);\n\tif (malloc_mutex_init(&pac->grow_mtx, \"extent_grow\",\n\t    WITNESS_RANK_EXTENT_GROW, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tatomic_store_zu(&pac->oversize_threshold, pac_oversize_threshold,\n\t    ATOMIC_RELAXED);\n\tif (decay_init(&pac->decay_dirty, cur_time, dirty_decay_ms)) {\n\t\treturn true;\n\t}\n\tif (decay_init(&pac->decay_muzzy, cur_time, muzzy_decay_ms)) {\n\t\treturn true;\n\t}\n\tif (san_bump_alloc_init(&pac->sba)) {\n\t\treturn true;\n\t}\n\n\tpac->base = base;\n\tpac->emap = emap;\n\tpac->edata_cache = edata_cache;\n\tpac->stats = pac_stats;\n\tpac->stats_mtx = stats_mtx;\n\tatomic_store_zu(&pac->extent_sn_next, 0, ATOMIC_RELAXED);\n\n\tpac->pai.alloc = &pac_alloc_impl;\n\tpac->pai.alloc_batch = &pai_alloc_batch_default;\n\tpac->pai.expand = &pac_expand_impl;\n\tpac->pai.shrink = &pac_shrink_impl;\n\tpac->pai.dalloc = &pac_dalloc_impl;\n\tpac->pai.dalloc_batch = &pai_dalloc_batch_default;\n\tpac->pai.time_until_deferred_work = &pac_time_until_deferred_work;\n\n\treturn false;\n}\n\nstatic inline bool\npac_may_have_muzzy(pac_t *pac) {\n\treturn pac_decay_ms_get(pac, extent_state_muzzy) != 0;\n}\n\nstatic 
edata_t *\npac_alloc_real(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, size_t size,\n    size_t alignment, bool zero, bool guarded) {\n\tassert(!guarded || alignment <= PAGE);\n\n\tedata_t *edata = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_dirty,\n\t    NULL, size, alignment, zero, guarded);\n\n\tif (edata == NULL && pac_may_have_muzzy(pac)) {\n\t\tedata = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_muzzy,\n\t\t    NULL, size, alignment, zero, guarded);\n\t}\n\tif (edata == NULL) {\n\t\tedata = ecache_alloc_grow(tsdn, pac, ehooks,\n\t\t    &pac->ecache_retained, NULL, size, alignment, zero,\n\t\t    guarded);\n\t\tif (config_stats && edata != NULL) {\n\t\t\tatomic_fetch_add_zu(&pac->stats->pac_mapped, size,\n\t\t\t    ATOMIC_RELAXED);\n\t\t}\n\t}\n\n\treturn edata;\n}\n\nstatic edata_t *\npac_alloc_new_guarded(tsdn_t *tsdn, pac_t *pac, ehooks_t *ehooks, size_t size,\n    size_t alignment, bool zero, bool frequent_reuse) {\n\tassert(alignment <= PAGE);\n\n\tedata_t *edata;\n\tif (san_bump_enabled() && frequent_reuse) {\n\t\tedata = san_bump_alloc(tsdn, &pac->sba, pac, ehooks, size,\n\t\t    zero);\n\t} else {\n\t\tsize_t size_with_guards = san_two_side_guarded_sz(size);\n\t\t/* Alloc a non-guarded extent first.*/\n\t\tedata = pac_alloc_real(tsdn, pac, ehooks, size_with_guards,\n\t\t    /* alignment */ PAGE, zero, /* guarded */ false);\n\t\tif (edata != NULL) {\n\t\t\t/* Add guards around it. 
*/\n\t\t\tassert(edata_size_get(edata) == size_with_guards);\n\t\t\tsan_guard_pages_two_sided(tsdn, ehooks, edata,\n\t\t\t    pac->emap, true);\n\t\t}\n\t}\n\tassert(edata == NULL || (edata_guarded_get(edata) &&\n\t    edata_size_get(edata) == size));\n\n\treturn edata;\n}\n\nstatic edata_t *\npac_alloc_impl(tsdn_t *tsdn, pai_t *self, size_t size, size_t alignment,\n    bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated) {\n\tpac_t *pac = (pac_t *)self;\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\tedata_t *edata = NULL;\n\t/*\n\t * The condition is an optimization - not frequently reused guarded\n\t * allocations are never put in the ecache.  pac_alloc_real also\n\t * doesn't grow retained for guarded allocations.  So pac_alloc_real\n\t * for such allocations would always return NULL.\n\t * */\n\tif (!guarded || frequent_reuse) {\n\t\tedata =\tpac_alloc_real(tsdn, pac, ehooks, size, alignment,\n\t\t    zero, guarded);\n\t}\n\tif (edata == NULL && guarded) {\n\t\t/* No cached guarded extents; creating a new one. 
*/\n\t\tedata = pac_alloc_new_guarded(tsdn, pac, ehooks, size,\n\t\t    alignment, zero, frequent_reuse);\n\t}\n\n\treturn edata;\n}\n\nstatic bool\npac_expand_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool zero, bool *deferred_work_generated) {\n\tpac_t *pac = (pac_t *)self;\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\tsize_t mapped_add = 0;\n\tsize_t expand_amount = new_size - old_size;\n\n\tif (ehooks_merge_will_fail(ehooks)) {\n\t\treturn true;\n\t}\n\tedata_t *trail = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_dirty,\n\t    edata, expand_amount, PAGE, zero, /* guarded*/ false);\n\tif (trail == NULL) {\n\t\ttrail = ecache_alloc(tsdn, pac, ehooks, &pac->ecache_muzzy,\n\t\t    edata, expand_amount, PAGE, zero, /* guarded*/ false);\n\t}\n\tif (trail == NULL) {\n\t\ttrail = ecache_alloc_grow(tsdn, pac, ehooks,\n\t\t    &pac->ecache_retained, edata, expand_amount, PAGE, zero,\n\t\t    /* guarded */ false);\n\t\tmapped_add = expand_amount;\n\t}\n\tif (trail == NULL) {\n\t\treturn true;\n\t}\n\tif (extent_merge_wrapper(tsdn, pac, ehooks, edata, trail)) {\n\t\textent_dalloc_wrapper(tsdn, pac, ehooks, trail);\n\t\treturn true;\n\t}\n\tif (config_stats && mapped_add > 0) {\n\t\tatomic_fetch_add_zu(&pac->stats->pac_mapped, mapped_add,\n\t\t    ATOMIC_RELAXED);\n\t}\n\treturn false;\n}\n\nstatic bool\npac_shrink_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool *deferred_work_generated) {\n\tpac_t *pac = (pac_t *)self;\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\tsize_t shrink_amount = old_size - new_size;\n\n\tif (ehooks_split_will_fail(ehooks)) {\n\t\treturn true;\n\t}\n\n\tedata_t *trail = extent_split_wrapper(tsdn, pac, ehooks, edata,\n\t    new_size, shrink_amount, /* holding_core_locks */ false);\n\tif (trail == NULL) {\n\t\treturn true;\n\t}\n\tecache_dalloc(tsdn, pac, ehooks, &pac->ecache_dirty, trail);\n\t*deferred_work_generated = true;\n\treturn false;\n}\n\nstatic 
void\npac_dalloc_impl(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated) {\n\tpac_t *pac = (pac_t *)self;\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\tif (edata_guarded_get(edata)) {\n\t\t/*\n\t\t * Because cached guarded extents do exact fit only, large\n\t\t * guarded extents are restored on dalloc eagerly (otherwise\n\t\t * they will not be reused efficiently).  Slab sizes have a\n\t\t * limited number of size classes, and tend to cycle faster.\n\t\t *\n\t\t * In the case where coalesce is restrained (VirtualFree on\n\t\t * Windows), guarded extents are also not cached -- otherwise\n\t\t * during arena destroy / reset, the retained extents would not\n\t\t * be whole regions (i.e. they are split between regular and\n\t\t * guarded).\n\t\t */\n\t\tif (!edata_slab_get(edata) || !maps_coalesce) {\n\t\t\tassert(edata_size_get(edata) >= SC_LARGE_MINCLASS ||\n\t\t\t    !maps_coalesce);\n\t\t\tsan_unguard_pages_two_sided(tsdn, ehooks, edata,\n\t\t\t    pac->emap);\n\t\t}\n\t}\n\n\tecache_dalloc(tsdn, pac, ehooks, &pac->ecache_dirty, edata);\n\t/* Purging of deallocated pages is deferred */\n\t*deferred_work_generated = true;\n}\n\nstatic inline uint64_t\npac_ns_until_purge(tsdn_t *tsdn, decay_t *decay, size_t npages) {\n\tif (malloc_mutex_trylock(tsdn, &decay->mtx)) {\n\t\t/* Use minimal interval if decay is contended. 
*/\n\t\treturn BACKGROUND_THREAD_DEFERRED_MIN;\n\t}\n\tuint64_t result = decay_ns_until_purge(decay, npages,\n\t    ARENA_DEFERRED_PURGE_NPAGES_THRESHOLD);\n\n\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\treturn result;\n}\n\nstatic uint64_t\npac_time_until_deferred_work(tsdn_t *tsdn, pai_t *self) {\n\tuint64_t time;\n\tpac_t *pac = (pac_t *)self;\n\n\ttime = pac_ns_until_purge(tsdn,\n\t    &pac->decay_dirty,\n\t    ecache_npages_get(&pac->ecache_dirty));\n\tif (time == BACKGROUND_THREAD_DEFERRED_MIN) {\n\t\treturn time;\n\t}\n\n\tuint64_t muzzy = pac_ns_until_purge(tsdn,\n\t    &pac->decay_muzzy,\n\t    ecache_npages_get(&pac->ecache_muzzy));\n\tif (muzzy < time) {\n\t\ttime = muzzy;\n\t}\n\treturn time;\n}\n\nbool\npac_retain_grow_limit_get_set(tsdn_t *tsdn, pac_t *pac, size_t *old_limit,\n    size_t *new_limit) {\n\tpszind_t new_ind JEMALLOC_CC_SILENCE_INIT(0);\n\tif (new_limit != NULL) {\n\t\tsize_t limit = *new_limit;\n\t\t/* Grow no more than the new limit. */\n\t\tif ((new_ind = sz_psz2ind(limit + 1) - 1) >= SC_NPSIZES) {\n\t\t\treturn true;\n\t\t}\n\t}\n\n\tmalloc_mutex_lock(tsdn, &pac->grow_mtx);\n\tif (old_limit != NULL) {\n\t\t*old_limit = sz_pind2sz(pac->exp_grow.limit);\n\t}\n\tif (new_limit != NULL) {\n\t\tpac->exp_grow.limit = new_ind;\n\t}\n\tmalloc_mutex_unlock(tsdn, &pac->grow_mtx);\n\n\treturn false;\n}\n\nstatic size_t\npac_stash_decayed(tsdn_t *tsdn, pac_t *pac, ecache_t *ecache,\n    size_t npages_limit, size_t npages_decay_max,\n    edata_list_inactive_t *result) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 0);\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\t/* Stash extents according to npages_limit. 
*/\n\tsize_t nstashed = 0;\n\twhile (nstashed < npages_decay_max) {\n\t\tedata_t *edata = ecache_evict(tsdn, pac, ehooks, ecache,\n\t\t    npages_limit);\n\t\tif (edata == NULL) {\n\t\t\tbreak;\n\t\t}\n\t\tedata_list_inactive_append(result, edata);\n\t\tnstashed += edata_size_get(edata) >> LG_PAGE;\n\t}\n\treturn nstashed;\n}\n\nstatic size_t\npac_decay_stashed(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay,\n    edata_list_inactive_t *decay_extents) {\n\tbool err;\n\n\tsize_t nmadvise = 0;\n\tsize_t nunmapped = 0;\n\tsize_t npurged = 0;\n\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\n\tbool try_muzzy = !fully_decay\n\t    && pac_decay_ms_get(pac, extent_state_muzzy) != 0;\n\n\tfor (edata_t *edata = edata_list_inactive_first(decay_extents); edata !=\n\t    NULL; edata = edata_list_inactive_first(decay_extents)) {\n\t\tedata_list_inactive_remove(decay_extents, edata);\n\n\t\tsize_t size = edata_size_get(edata);\n\t\tsize_t npages = size >> LG_PAGE;\n\n\t\tnmadvise++;\n\t\tnpurged += npages;\n\n\t\tswitch (ecache->state) {\n\t\tcase extent_state_active:\n\t\t\tnot_reached();\n\t\tcase extent_state_dirty:\n\t\t\tif (try_muzzy) {\n\t\t\t\terr = extent_purge_lazy_wrapper(tsdn, ehooks,\n\t\t\t\t    edata, /* offset */ 0, size);\n\t\t\t\tif (!err) {\n\t\t\t\t\tecache_dalloc(tsdn, pac, ehooks,\n\t\t\t\t\t    &pac->ecache_muzzy, edata);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tJEMALLOC_FALLTHROUGH;\n\t\tcase extent_state_muzzy:\n\t\t\textent_dalloc_wrapper(tsdn, pac, ehooks, edata);\n\t\t\tnunmapped += npages;\n\t\t\tbreak;\n\t\tcase extent_state_retained:\n\t\tdefault:\n\t\t\tnot_reached();\n\t\t}\n\t}\n\n\tif (config_stats) {\n\t\tLOCKEDINT_MTX_LOCK(tsdn, *pac->stats_mtx);\n\t\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(*pac->stats_mtx),\n\t\t    &decay_stats->npurge, 1);\n\t\tlocked_inc_u64(tsdn, LOCKEDINT_MTX(*pac->stats_mtx),\n\t\t    &decay_stats->nmadvise, nmadvise);\n\t\tlocked_inc_u64(tsdn, 
LOCKEDINT_MTX(*pac->stats_mtx),\n\t\t    &decay_stats->purged, npurged);\n\t\tLOCKEDINT_MTX_UNLOCK(tsdn, *pac->stats_mtx);\n\t\tatomic_fetch_sub_zu(&pac->stats->pac_mapped,\n\t\t    nunmapped << LG_PAGE, ATOMIC_RELAXED);\n\t}\n\n\treturn npurged;\n}\n\n/*\n * npages_limit: Decay at most npages_decay_max pages without violating the\n * invariant: (ecache_npages_get(ecache) >= npages_limit).  We need an upper\n * bound on number of pages in order to prevent unbounded growth (namely in\n * stashed), otherwise unbounded new pages could be added to extents during the\n * current decay run, so that the purging thread never finishes.\n */\nstatic void\npac_decay_to_limit(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay,\n    size_t npages_limit, size_t npages_decay_max) {\n\twitness_assert_depth_to_rank(tsdn_witness_tsdp_get(tsdn),\n\t    WITNESS_RANK_CORE, 1);\n\n\tif (decay->purging || npages_decay_max == 0) {\n\t\treturn;\n\t}\n\tdecay->purging = true;\n\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\n\tedata_list_inactive_t decay_extents;\n\tedata_list_inactive_init(&decay_extents);\n\tsize_t npurge = pac_stash_decayed(tsdn, pac, ecache, npages_limit,\n\t    npages_decay_max, &decay_extents);\n\tif (npurge != 0) {\n\t\tsize_t npurged = pac_decay_stashed(tsdn, pac, decay,\n\t\t    decay_stats, ecache, fully_decay, &decay_extents);\n\t\tassert(npurged == npurge);\n\t}\n\n\tmalloc_mutex_lock(tsdn, &decay->mtx);\n\tdecay->purging = false;\n}\n\nvoid\npac_decay_all(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache, bool fully_decay) {\n\tmalloc_mutex_assert_owner(tsdn, &decay->mtx);\n\tpac_decay_to_limit(tsdn, pac, decay, decay_stats, ecache, fully_decay,\n\t    /* npages_limit */ 0, ecache_npages_get(ecache));\n}\n\nstatic void\npac_decay_try_purge(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache,\n    size_t 
current_npages, size_t npages_limit) {\n\tif (current_npages > npages_limit) {\n\t\tpac_decay_to_limit(tsdn, pac, decay, decay_stats, ecache,\n\t\t    /* fully_decay */ false, npages_limit,\n\t\t    current_npages - npages_limit);\n\t}\n}\n\nbool\npac_maybe_decay_purge(tsdn_t *tsdn, pac_t *pac, decay_t *decay,\n    pac_decay_stats_t *decay_stats, ecache_t *ecache,\n    pac_purge_eagerness_t eagerness) {\n\tmalloc_mutex_assert_owner(tsdn, &decay->mtx);\n\n\t/* Purge all or nothing if the option is disabled. */\n\tssize_t decay_ms = decay_ms_read(decay);\n\tif (decay_ms <= 0) {\n\t\tif (decay_ms == 0) {\n\t\t\tpac_decay_to_limit(tsdn, pac, decay, decay_stats,\n\t\t\t    ecache, /* fully_decay */ false,\n\t\t\t    /* npages_limit */ 0, ecache_npages_get(ecache));\n\t\t}\n\t\treturn false;\n\t}\n\n\t/*\n\t * If the deadline has been reached, advance to the current epoch and\n\t * purge to the new limit if necessary.  Note that dirty pages created\n\t * during the current epoch are not subject to purge until a future\n\t * epoch, so as a result purging only happens during epoch advances, or\n\t * being triggered by background threads (scheduled event).\n\t */\n\tnstime_t time;\n\tnstime_init_update(&time);\n\tsize_t npages_current = ecache_npages_get(ecache);\n\tbool epoch_advanced = decay_maybe_advance_epoch(decay, &time,\n\t    npages_current);\n\tif (eagerness == PAC_PURGE_ALWAYS\n\t    || (epoch_advanced && eagerness == PAC_PURGE_ON_EPOCH_ADVANCE)) {\n\t\tsize_t npages_limit = decay_npages_limit_get(decay);\n\t\tpac_decay_try_purge(tsdn, pac, decay, decay_stats, ecache,\n\t\t    npages_current, npages_limit);\n\t}\n\n\treturn epoch_advanced;\n}\n\nbool\npac_decay_ms_set(tsdn_t *tsdn, pac_t *pac, extent_state_t state,\n    ssize_t decay_ms, pac_purge_eagerness_t eagerness) {\n\tdecay_t *decay;\n\tpac_decay_stats_t *decay_stats;\n\tecache_t *ecache;\n\tpac_decay_data_get(pac, state, &decay, &decay_stats, &ecache);\n\n\tif (!decay_ms_valid(decay_ms)) {\n\t\treturn 
true;\n\t}\n\n\tmalloc_mutex_lock(tsdn, &decay->mtx);\n\t/*\n\t * Restart decay backlog from scratch, which may cause many dirty pages\n\t * to be immediately purged.  It would conceptually be possible to map\n\t * the old backlog onto the new backlog, but there is no justification\n\t * for such complexity since decay_ms changes are intended to be\n\t * infrequent, either between the {-1, 0, >0} states, or a one-time\n\t * arbitrary change during initial arena configuration.\n\t */\n\tnstime_t cur_time;\n\tnstime_init_update(&cur_time);\n\tdecay_reinit(decay, &cur_time, decay_ms);\n\tpac_maybe_decay_purge(tsdn, pac, decay, decay_stats, ecache, eagerness);\n\tmalloc_mutex_unlock(tsdn, &decay->mtx);\n\n\treturn false;\n}\n\nssize_t\npac_decay_ms_get(pac_t *pac, extent_state_t state) {\n\tdecay_t *decay;\n\tpac_decay_stats_t *decay_stats;\n\tecache_t *ecache;\n\tpac_decay_data_get(pac, state, &decay, &decay_stats, &ecache);\n\treturn decay_ms_read(decay);\n}\n\nvoid\npac_reset(tsdn_t *tsdn, pac_t *pac) {\n\t/*\n\t * No-op for now; purging is still done at the arena-level.  It should\n\t * get moved in here, though.\n\t */\n\t(void)tsdn;\n\t(void)pac;\n}\n\nvoid\npac_destroy(tsdn_t *tsdn, pac_t *pac) {\n\tassert(ecache_npages_get(&pac->ecache_dirty) == 0);\n\tassert(ecache_npages_get(&pac->ecache_muzzy) == 0);\n\t/*\n\t * Iterate over the retained extents and destroy them.  This gives the\n\t * extent allocator underlying the extent hooks an opportunity to unmap\n\t * all retained memory without having to keep its own metadata\n\t * structures.  
In practice, virtual memory for dss-allocated extents is\n\t * leaked here, so best practice is to avoid dss for arenas to be\n\t * destroyed, or provide custom extent hooks that track retained\n\t * dss-based extents for later reuse.\n\t */\n\tehooks_t *ehooks = pac_ehooks_get(pac);\n\tedata_t *edata;\n\twhile ((edata = ecache_evict(tsdn, pac, ehooks,\n\t    &pac->ecache_retained, 0)) != NULL) {\n\t\textent_destroy_wrapper(tsdn, pac, ehooks, edata);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/pages.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n#include \"jemalloc/internal/pages.h\"\n\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n\n#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT\n#include <sys/sysctl.h>\n#ifdef __FreeBSD__\n#include <vm/vm_param.h>\n#endif\n#endif\n#ifdef __NetBSD__\n#include <sys/bitops.h>\t/* ilog2 */\n#endif\n#ifdef JEMALLOC_HAVE_VM_MAKE_TAG\n#define PAGES_FD_TAG VM_MAKE_TAG(101U)\n#else\n#define PAGES_FD_TAG -1\n#endif\n\n/******************************************************************************/\n/* Data. */\n\n/* Actual operating system page size, detected during bootstrap, <= PAGE. */\nstatic size_t\tos_page;\n\n#ifndef _WIN32\n#  define PAGES_PROT_COMMIT (PROT_READ | PROT_WRITE)\n#  define PAGES_PROT_DECOMMIT (PROT_NONE)\nstatic int\tmmap_flags;\n#endif\nstatic bool\tos_overcommits;\n\nconst char *thp_mode_names[] = {\n\t\"default\",\n\t\"always\",\n\t\"never\",\n\t\"not supported\"\n};\nthp_mode_t opt_thp = THP_MODE_DEFAULT;\nthp_mode_t init_system_thp_mode;\n\n/* Runtime support for lazy purge. Irrelevant when !pages_can_purge_lazy. 
*/\nstatic bool pages_can_purge_lazy_runtime = true;\n\n#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS\nstatic int madvise_dont_need_zeros_is_faulty = -1;\n/**\n * Check that MADV_DONTNEED will actually zero pages on subsequent access.\n *\n * Since qemu does not support this, yet [1], and you can get very tricky\n * assert if you will run program with jemalloc in use under qemu:\n *\n *     <jemalloc>: ../contrib/jemalloc/src/extent.c:1195: Failed assertion: \"p[i] == 0\"\n *\n *   [1]: https://patchwork.kernel.org/patch/10576637/\n */\nstatic int madvise_MADV_DONTNEED_zeroes_pages()\n{\n\tint works = -1;\n\tsize_t size = PAGE;\n\n\tvoid * addr = mmap(NULL, size, PROT_READ|PROT_WRITE,\n\t    MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);\n\n\tif (addr == MAP_FAILED) {\n\t\tmalloc_write(\"<jemalloc>: Cannot allocate memory for \"\n\t\t    \"MADV_DONTNEED check\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n\n\tmemset(addr, 'A', size);\n\tif (madvise(addr, size, MADV_DONTNEED) == 0) {\n\t\tworks = memchr(addr, 'A', size) == NULL;\n\t} else {\n\t\t/*\n\t\t * If madvise() does not support MADV_DONTNEED, then we can\n\t\t * call it anyway, and use it's return code.\n\t\t */\n\t\tworks = 1;\n\t}\n\n\tif (munmap(addr, size) != 0) {\n\t\tmalloc_write(\"<jemalloc>: Cannot deallocate memory for \"\n\t\t    \"MADV_DONTNEED check\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n\n\treturn works;\n}\n#endif\n\n/******************************************************************************/\n/*\n * Function prototypes for static functions that are referenced prior to\n * definition.\n */\n\nstatic void os_pages_unmap(void *addr, size_t size);\n\n/******************************************************************************/\n\nstatic void *\nos_pages_map(void *addr, size_t size, size_t alignment, bool *commit) {\n\tassert(ALIGNMENT_ADDR2BASE(addr, os_page) == addr);\n\tassert(ALIGNMENT_CEILING(size, os_page) == size);\n\tassert(size != 0);\n\n\tif (os_overcommits) 
{\n\t\t*commit = true;\n\t}\n\n\tvoid *ret;\n#ifdef _WIN32\n\t/*\n\t * If VirtualAlloc can't allocate at the given address when one is\n\t * given, it fails and returns NULL.\n\t */\n\tret = VirtualAlloc(addr, size, MEM_RESERVE | (*commit ? MEM_COMMIT : 0),\n\t    PAGE_READWRITE);\n#else\n\t/*\n\t * We don't use MAP_FIXED here, because it can cause the *replacement*\n\t * of existing mappings, and we only want to create new mappings.\n\t */\n\t{\n#ifdef __NetBSD__\n\t\t/*\n\t\t * On NetBSD PAGE for a platform is defined to the\n\t\t * maximum page size of all machine architectures\n\t\t * for that platform, so that we can use the same\n\t\t * binaries across all machine architectures.\n\t\t */\n\t\tif (alignment > os_page || PAGE > os_page) {\n\t\t\tunsigned int a = ilog2(MAX(alignment, PAGE));\n\t\t\tmmap_flags |= MAP_ALIGNED(a);\n\t\t}\n#endif\n\t\tint prot = *commit ? PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;\n\n\t\tret = mmap(addr, size, prot, mmap_flags, PAGES_FD_TAG, 0);\n\t}\n\tassert(ret != NULL);\n\n\tif (ret == MAP_FAILED) {\n\t\tret = NULL;\n\t} else if (addr != NULL && ret != addr) {\n\t\t/*\n\t\t * We succeeded in mapping memory, but not in the right place.\n\t\t */\n\t\tos_pages_unmap(ret, size);\n\t\tret = NULL;\n\t}\n#endif\n\tassert(ret == NULL || (addr == NULL && ret != addr) || (addr != NULL &&\n\t    ret == addr));\n\treturn ret;\n}\n\nstatic void *\nos_pages_trim(void *addr, size_t alloc_size, size_t leadsize, size_t size,\n    bool *commit) {\n\tvoid *ret = (void *)((uintptr_t)addr + leadsize);\n\n\tassert(alloc_size >= leadsize + size);\n#ifdef _WIN32\n\tos_pages_unmap(addr, alloc_size);\n\tvoid *new_addr = os_pages_map(ret, size, PAGE, commit);\n\tif (new_addr == ret) {\n\t\treturn ret;\n\t}\n\tif (new_addr != NULL) {\n\t\tos_pages_unmap(new_addr, size);\n\t}\n\treturn NULL;\n#else\n\tsize_t trailsize = alloc_size - leadsize - size;\n\n\tif (leadsize != 0) {\n\t\tos_pages_unmap(addr, leadsize);\n\t}\n\tif (trailsize != 0) 
{\n\t\tos_pages_unmap((void *)((uintptr_t)ret + size), trailsize);\n\t}\n\treturn ret;\n#endif\n}\n\nstatic void\nos_pages_unmap(void *addr, size_t size) {\n\tassert(ALIGNMENT_ADDR2BASE(addr, os_page) == addr);\n\tassert(ALIGNMENT_CEILING(size, os_page) == size);\n\n#ifdef _WIN32\n\tif (VirtualFree(addr, 0, MEM_RELEASE) == 0)\n#else\n\tif (munmap(addr, size) == -1)\n#endif\n\t{\n\t\tchar buf[BUFERROR_BUF];\n\n\t\tbuferror(get_errno(), buf, sizeof(buf));\n\t\tmalloc_printf(\"<jemalloc>: Error in \"\n#ifdef _WIN32\n\t\t    \"VirtualFree\"\n#else\n\t\t    \"munmap\"\n#endif\n\t\t    \"(): %s\\n\", buf);\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t}\n}\n\nstatic void *\npages_map_slow(size_t size, size_t alignment, bool *commit) {\n\tsize_t alloc_size = size + alignment - os_page;\n\t/* Beware size_t wrap-around. */\n\tif (alloc_size < size) {\n\t\treturn NULL;\n\t}\n\n\tvoid *ret;\n\tdo {\n\t\tvoid *pages = os_pages_map(NULL, alloc_size, alignment, commit);\n\t\tif (pages == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t\tsize_t leadsize = ALIGNMENT_CEILING((uintptr_t)pages, alignment)\n\t\t    - (uintptr_t)pages;\n\t\tret = os_pages_trim(pages, alloc_size, leadsize, size, commit);\n\t} while (ret == NULL);\n\n\tassert(ret != NULL);\n\tassert(PAGE_ADDR2BASE(ret) == ret);\n\treturn ret;\n}\n\nvoid *\npages_map(void *addr, size_t size, size_t alignment, bool *commit) {\n\tassert(alignment >= PAGE);\n\tassert(ALIGNMENT_ADDR2BASE(addr, alignment) == addr);\n\n#if defined(__FreeBSD__) && defined(MAP_EXCL)\n\t/*\n\t * FreeBSD has mechanisms both to mmap at specific address without\n\t * touching existing mappings, and to mmap with specific alignment.\n\t */\n\t{\n\t\tif (os_overcommits) {\n\t\t\t*commit = true;\n\t\t}\n\n\t\tint prot = *commit ? 
PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;\n\t\tint flags = mmap_flags;\n\n\t\tif (addr != NULL) {\n\t\t\tflags |= MAP_FIXED | MAP_EXCL;\n\t\t} else {\n\t\t\tunsigned alignment_bits = ffs_zu(alignment);\n\t\t\tassert(alignment_bits > 0);\n\t\t\tflags |= MAP_ALIGNED(alignment_bits);\n\t\t}\n\n\t\tvoid *ret = mmap(addr, size, prot, flags, -1, 0);\n\t\tif (ret == MAP_FAILED) {\n\t\t\tret = NULL;\n\t\t}\n\n\t\treturn ret;\n\t}\n#endif\n\t/*\n\t * Ideally, there would be a way to specify alignment to mmap() (like\n\t * NetBSD has), but in the absence of such a feature, we have to work\n\t * hard to efficiently create aligned mappings.  The reliable, but\n\t * slow method is to create a mapping that is over-sized, then trim the\n\t * excess.  However, that always results in one or two calls to\n\t * os_pages_unmap(), and it can leave holes in the process's virtual\n\t * memory map if memory grows downward.\n\t *\n\t * Optimistically try mapping precisely the right amount before falling\n\t * back to the slow method, with the expectation that the optimistic\n\t * approach works most of the time.\n\t */\n\n\tvoid *ret = os_pages_map(addr, size, os_page, commit);\n\tif (ret == NULL || ret == addr) {\n\t\treturn ret;\n\t}\n\tassert(addr == NULL);\n\tif (ALIGNMENT_ADDR2OFFSET(ret, alignment) != 0) {\n\t\tos_pages_unmap(ret, size);\n\t\treturn pages_map_slow(size, alignment, commit);\n\t}\n\n\tassert(PAGE_ADDR2BASE(ret) == ret);\n\treturn ret;\n}\n\nvoid\npages_unmap(void *addr, size_t size) {\n\tassert(PAGE_ADDR2BASE(addr) == addr);\n\tassert(PAGE_CEILING(size) == size);\n\n\tos_pages_unmap(addr, size);\n}\n\nstatic bool\nos_pages_commit(void *addr, size_t size, bool commit) {\n\tassert(PAGE_ADDR2BASE(addr) == addr);\n\tassert(PAGE_CEILING(size) == size);\n\n#ifdef _WIN32\n\treturn (commit ? (addr != VirtualAlloc(addr, size, MEM_COMMIT,\n\t    PAGE_READWRITE)) : (!VirtualFree(addr, size, MEM_DECOMMIT)));\n#else\n\t{\n\t\tint prot = commit ? 
PAGES_PROT_COMMIT : PAGES_PROT_DECOMMIT;\n\t\tvoid *result = mmap(addr, size, prot, mmap_flags | MAP_FIXED,\n\t\t    PAGES_FD_TAG, 0);\n\t\tif (result == MAP_FAILED) {\n\t\t\treturn true;\n\t\t}\n\t\tif (result != addr) {\n\t\t\t/*\n\t\t\t * We succeeded in mapping memory, but not in the right\n\t\t\t * place.\n\t\t\t */\n\t\t\tos_pages_unmap(result, size);\n\t\t\treturn true;\n\t\t}\n\t\treturn false;\n\t}\n#endif\n}\n\nstatic bool\npages_commit_impl(void *addr, size_t size, bool commit) {\n\tif (os_overcommits) {\n\t\treturn true;\n\t}\n\n\treturn os_pages_commit(addr, size, commit);\n}\n\nbool\npages_commit(void *addr, size_t size) {\n\treturn pages_commit_impl(addr, size, true);\n}\n\nbool\npages_decommit(void *addr, size_t size) {\n\treturn pages_commit_impl(addr, size, false);\n}\n\nvoid\npages_mark_guards(void *head, void *tail) {\n\tassert(head != NULL || tail != NULL);\n\tassert(head == NULL || tail == NULL ||\n\t    (uintptr_t)head < (uintptr_t)tail);\n#ifdef JEMALLOC_HAVE_MPROTECT\n\tif (head != NULL) {\n\t\tmprotect(head, PAGE, PROT_NONE);\n\t}\n\tif (tail != NULL) {\n\t\tmprotect(tail, PAGE, PROT_NONE);\n\t}\n#else\n\t/* Decommit sets to PROT_NONE / MEM_DECOMMIT. */\n\tif (head != NULL) {\n\t\tos_pages_commit(head, PAGE, false);\n\t}\n\tif (tail != NULL) {\n\t\tos_pages_commit(tail, PAGE, false);\n\t}\n#endif\n}\n\nvoid\npages_unmark_guards(void *head, void *tail) {\n\tassert(head != NULL || tail != NULL);\n\tassert(head == NULL || tail == NULL ||\n\t    (uintptr_t)head < (uintptr_t)tail);\n#ifdef JEMALLOC_HAVE_MPROTECT\n\tbool head_and_tail = (head != NULL) && (tail != NULL);\n\tsize_t range = head_and_tail ?\n\t    (uintptr_t)tail - (uintptr_t)head + PAGE :\n\t    SIZE_T_MAX;\n\t/*\n\t * The amount of work that the kernel does in mprotect depends on the\n\t * range argument.  
SC_LARGE_MINCLASS is an arbitrary threshold chosen\n\t * to prevent kernel from doing too much work that would outweigh the\n\t * savings of performing one less system call.\n\t */\n\tbool ranged_mprotect = head_and_tail && range <= SC_LARGE_MINCLASS;\n\tif (ranged_mprotect) {\n\t\tmprotect(head, range, PROT_READ | PROT_WRITE);\n\t} else {\n\t\tif (head != NULL) {\n\t\t\tmprotect(head, PAGE, PROT_READ | PROT_WRITE);\n\t\t}\n\t\tif (tail != NULL) {\n\t\t\tmprotect(tail, PAGE, PROT_READ | PROT_WRITE);\n\t\t}\n\t}\n#else\n\tif (head != NULL) {\n\t\tos_pages_commit(head, PAGE, true);\n\t}\n\tif (tail != NULL) {\n\t\tos_pages_commit(tail, PAGE, true);\n\t}\n#endif\n}\n\nbool\npages_purge_lazy(void *addr, size_t size) {\n\tassert(ALIGNMENT_ADDR2BASE(addr, os_page) == addr);\n\tassert(PAGE_CEILING(size) == size);\n\n\tif (!pages_can_purge_lazy) {\n\t\treturn true;\n\t}\n\tif (!pages_can_purge_lazy_runtime) {\n\t\t/*\n\t\t * Built with lazy purge enabled, but detected it was not\n\t\t * supported on the current system.\n\t\t */\n\t\treturn true;\n\t}\n\n#ifdef _WIN32\n\tVirtualAlloc(addr, size, MEM_RESET, PAGE_READWRITE);\n\treturn false;\n#elif defined(JEMALLOC_PURGE_MADVISE_FREE)\n\treturn (madvise(addr, size,\n#  ifdef MADV_FREE\n\t    MADV_FREE\n#  else\n\t    JEMALLOC_MADV_FREE\n#  endif\n\t    ) != 0);\n#elif defined(JEMALLOC_PURGE_MADVISE_DONTNEED) && \\\n    !defined(JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS)\n\treturn (madvise(addr, size, MADV_DONTNEED) != 0);\n#elif defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED) && \\\n    !defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED_ZEROS)\n\treturn (posix_madvise(addr, size, POSIX_MADV_DONTNEED) != 0);\n#else\n\tnot_reached();\n#endif\n}\n\nbool\npages_purge_forced(void *addr, size_t size) {\n\tassert(PAGE_ADDR2BASE(addr) == addr);\n\tassert(PAGE_CEILING(size) == size);\n\n\tif (!pages_can_purge_forced) {\n\t\treturn true;\n\t}\n\n#if defined(JEMALLOC_PURGE_MADVISE_DONTNEED) && \\\n    
defined(JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS)\n\treturn (unlikely(madvise_dont_need_zeros_is_faulty) ||\n\t    madvise(addr, size, MADV_DONTNEED) != 0);\n#elif defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED) && \\\n    defined(JEMALLOC_PURGE_POSIX_MADVISE_DONTNEED_ZEROS)\n\treturn (unlikely(madvise_dont_need_zeros_is_faulty) ||\n\t    posix_madvise(addr, size, POSIX_MADV_DONTNEED) != 0);\n#elif defined(JEMALLOC_MAPS_COALESCE)\n\t/* Try to overlay a new demand-zeroed mapping. */\n\treturn pages_commit(addr, size);\n#else\n\tnot_reached();\n#endif\n}\n\nstatic bool\npages_huge_impl(void *addr, size_t size, bool aligned) {\n\tif (aligned) {\n\t\tassert(HUGEPAGE_ADDR2BASE(addr) == addr);\n\t\tassert(HUGEPAGE_CEILING(size) == size);\n\t}\n#if defined(JEMALLOC_HAVE_MADVISE_HUGE)\n\treturn (madvise(addr, size, MADV_HUGEPAGE) != 0);\n#elif defined(JEMALLOC_HAVE_MEMCNTL)\n\tstruct memcntl_mha m = {0};\n\tm.mha_cmd = MHA_MAPSIZE_VA;\n\tm.mha_pagesize = HUGEPAGE;\n\treturn (memcntl(addr, size, MC_HAT_ADVISE, (caddr_t)&m, 0, 0) == 0);\n#else\n\treturn true;\n#endif\n}\n\nbool\npages_huge(void *addr, size_t size) {\n\treturn pages_huge_impl(addr, size, true);\n}\n\nstatic bool\npages_huge_unaligned(void *addr, size_t size) {\n\treturn pages_huge_impl(addr, size, false);\n}\n\nstatic bool\npages_nohuge_impl(void *addr, size_t size, bool aligned) {\n\tif (aligned) {\n\t\tassert(HUGEPAGE_ADDR2BASE(addr) == addr);\n\t\tassert(HUGEPAGE_CEILING(size) == size);\n\t}\n\n#ifdef JEMALLOC_HAVE_MADVISE_HUGE\n\treturn (madvise(addr, size, MADV_NOHUGEPAGE) != 0);\n#else\n\treturn false;\n#endif\n}\n\nbool\npages_nohuge(void *addr, size_t size) {\n\treturn pages_nohuge_impl(addr, size, true);\n}\n\nstatic bool\npages_nohuge_unaligned(void *addr, size_t size) {\n\treturn pages_nohuge_impl(addr, size, false);\n}\n\nbool\npages_dontdump(void *addr, size_t size) {\n\tassert(PAGE_ADDR2BASE(addr) == addr);\n\tassert(PAGE_CEILING(size) == size);\n#if defined(JEMALLOC_MADVISE_DONTDUMP)\n\treturn 
madvise(addr, size, MADV_DONTDUMP) != 0;\n#elif defined(JEMALLOC_MADVISE_NOCORE)\n\treturn madvise(addr, size, MADV_NOCORE) != 0;\n#else\n\treturn false;\n#endif\n}\n\nbool\npages_dodump(void *addr, size_t size) {\n\tassert(PAGE_ADDR2BASE(addr) == addr);\n\tassert(PAGE_CEILING(size) == size);\n#if defined(JEMALLOC_MADVISE_DONTDUMP)\n\treturn madvise(addr, size, MADV_DODUMP) != 0;\n#elif defined(JEMALLOC_MADVISE_NOCORE)\n\treturn madvise(addr, size, MADV_CORE) != 0;\n#else\n\treturn false;\n#endif\n}\n\n\nstatic size_t\nos_page_detect(void) {\n#ifdef _WIN32\n\tSYSTEM_INFO si;\n\tGetSystemInfo(&si);\n\treturn si.dwPageSize;\n#elif defined(__FreeBSD__)\n\t/*\n\t * This returns the value obtained from\n\t * the auxv vector, avoiding a syscall.\n\t */\n\treturn getpagesize();\n#else\n\tlong result = sysconf(_SC_PAGESIZE);\n\tif (result == -1) {\n\t\treturn LG_PAGE;\n\t}\n\treturn (size_t)result;\n#endif\n}\n\n#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT\nstatic bool\nos_overcommits_sysctl(void) {\n\tint vm_overcommit;\n\tsize_t sz;\n\n\tsz = sizeof(vm_overcommit);\n#if defined(__FreeBSD__) && defined(VM_OVERCOMMIT)\n\tint mib[2];\n\n\tmib[0] = CTL_VM;\n\tmib[1] = VM_OVERCOMMIT;\n\tif (sysctl(mib, 2, &vm_overcommit, &sz, NULL, 0) != 0) {\n\t\treturn false; /* Error. */\n\t}\n#else\n\tif (sysctlbyname(\"vm.overcommit\", &vm_overcommit, &sz, NULL, 0) != 0) {\n\t\treturn false; /* Error. 
*/\n\t}\n#endif\n\n\treturn ((vm_overcommit & 0x3) == 0);\n}\n#endif\n\n#ifdef JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY\n/*\n * Use syscall(2) rather than {open,read,close}(2) when possible to avoid\n * reentry during bootstrapping if another library has interposed system call\n * wrappers.\n */\nstatic bool\nos_overcommits_proc(void) {\n\tint fd;\n\tchar buf[1];\n\n#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_open)\n\t#if defined(O_CLOEXEC)\n\t\tfd = (int)syscall(SYS_open, \"/proc/sys/vm/overcommit_memory\", O_RDONLY |\n\t\t\tO_CLOEXEC);\n\t#else\n\t\tfd = (int)syscall(SYS_open, \"/proc/sys/vm/overcommit_memory\", O_RDONLY);\n\t\tif (fd != -1) {\n\t\t\tfcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);\n\t\t}\n\t#endif\n#elif defined(JEMALLOC_USE_SYSCALL) && defined(SYS_openat)\n\t#if defined(O_CLOEXEC)\n\t\tfd = (int)syscall(SYS_openat,\n\t\t\tAT_FDCWD, \"/proc/sys/vm/overcommit_memory\", O_RDONLY | O_CLOEXEC);\n\t#else\n\t\tfd = (int)syscall(SYS_openat,\n\t\t\tAT_FDCWD, \"/proc/sys/vm/overcommit_memory\", O_RDONLY);\n\t\tif (fd != -1) {\n\t\t\tfcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);\n\t\t}\n\t#endif\n#else\n\t#if defined(O_CLOEXEC)\n\t\tfd = open(\"/proc/sys/vm/overcommit_memory\", O_RDONLY | O_CLOEXEC);\n\t#else\n\t\tfd = open(\"/proc/sys/vm/overcommit_memory\", O_RDONLY);\n\t\tif (fd != -1) {\n\t\t\tfcntl(fd, F_SETFD, fcntl(fd, F_GETFD) | FD_CLOEXEC);\n\t\t}\n\t#endif\n#endif\n\n\tif (fd == -1) {\n\t\treturn false; /* Error. */\n\t}\n\n\tssize_t nread = malloc_read_fd(fd, &buf, sizeof(buf));\n#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_close)\n\tsyscall(SYS_close, fd);\n#else\n\tclose(fd);\n#endif\n\n\tif (nread < 1) {\n\t\treturn false; /* Error. 
*/\n\t}\n\t/*\n\t * /proc/sys/vm/overcommit_memory meanings:\n\t * 0: Heuristic overcommit.\n\t * 1: Always overcommit.\n\t * 2: Never overcommit.\n\t */\n\treturn (buf[0] == '0' || buf[0] == '1');\n}\n#endif\n\nvoid\npages_set_thp_state (void *ptr, size_t size) {\n\tif (opt_thp == thp_mode_default || opt_thp == init_system_thp_mode) {\n\t\treturn;\n\t}\n\tassert(opt_thp != thp_mode_not_supported &&\n\t    init_system_thp_mode != thp_mode_not_supported);\n\n\tif (opt_thp == thp_mode_always\n\t    && init_system_thp_mode != thp_mode_never) {\n\t\tassert(init_system_thp_mode == thp_mode_default);\n\t\tpages_huge_unaligned(ptr, size);\n\t} else if (opt_thp == thp_mode_never) {\n\t\tassert(init_system_thp_mode == thp_mode_default ||\n\t\t    init_system_thp_mode == thp_mode_always);\n\t\tpages_nohuge_unaligned(ptr, size);\n\t}\n}\n\nstatic void\ninit_thp_state(void) {\n\tif (!have_madvise_huge && !have_memcntl) {\n\t\tif (metadata_thp_enabled() && opt_abort) {\n\t\t\tmalloc_write(\"<jemalloc>: no MADV_HUGEPAGE support\\n\");\n\t\t\tabort();\n\t\t}\n\t\tgoto label_error;\n\t}\n#if defined(JEMALLOC_HAVE_MADVISE_HUGE)\n\tstatic const char sys_state_madvise[] = \"always [madvise] never\\n\";\n\tstatic const char sys_state_always[] = \"[always] madvise never\\n\";\n\tstatic const char sys_state_never[] = \"always madvise [never]\\n\";\n\tchar buf[sizeof(sys_state_madvise)];\n\n#if defined(JEMALLOC_USE_SYSCALL) && defined(SYS_open)\n\tint fd = (int)syscall(SYS_open,\n\t    \"/sys/kernel/mm/transparent_hugepage/enabled\", O_RDONLY);\n#elif defined(JEMALLOC_USE_SYSCALL) && defined(SYS_openat)\n\tint fd = (int)syscall(SYS_openat,\n\t\t    AT_FDCWD, \"/sys/kernel/mm/transparent_hugepage/enabled\", O_RDONLY);\n#else\n\tint fd = open(\"/sys/kernel/mm/transparent_hugepage/enabled\", O_RDONLY);\n#endif\n\tif (fd == -1) {\n\t\tgoto label_error;\n\t}\n\n\tssize_t nread = malloc_read_fd(fd, &buf, sizeof(buf));\n#if defined(JEMALLOC_USE_SYSCALL) && 
defined(SYS_close)\n\tsyscall(SYS_close, fd);\n#else\n\tclose(fd);\n#endif\n\n        if (nread < 0) {\n\t\tgoto label_error;\n        }\n\n\tif (strncmp(buf, sys_state_madvise, (size_t)nread) == 0) {\n\t\tinit_system_thp_mode = thp_mode_default;\n\t} else if (strncmp(buf, sys_state_always, (size_t)nread) == 0) {\n\t\tinit_system_thp_mode = thp_mode_always;\n\t} else if (strncmp(buf, sys_state_never, (size_t)nread) == 0) {\n\t\tinit_system_thp_mode = thp_mode_never;\n\t} else {\n\t\tgoto label_error;\n\t}\n\treturn;\n#elif defined(JEMALLOC_HAVE_MEMCNTL)\n\tinit_system_thp_mode = thp_mode_default;\n\treturn;\n#endif\nlabel_error:\n\topt_thp = init_system_thp_mode = thp_mode_not_supported;\n}\n\nbool\npages_boot(void) {\n\tos_page = os_page_detect();\n\tif (os_page > PAGE) {\n\t\tmalloc_write(\"<jemalloc>: Unsupported system page size\\n\");\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t\treturn true;\n\t}\n\n#ifdef JEMALLOC_PURGE_MADVISE_DONTNEED_ZEROS\n\tif (!opt_trust_madvise) {\n\t\tmadvise_dont_need_zeros_is_faulty = !madvise_MADV_DONTNEED_zeroes_pages();\n\t\tif (madvise_dont_need_zeros_is_faulty) {\n\t\t\tmalloc_write(\"<jemalloc>: MADV_DONTNEED does not work (memset will be used instead)\\n\");\n\t\t\tmalloc_write(\"<jemalloc>: (This is the expected behaviour if you are running under QEMU)\\n\");\n\t\t}\n\t} else {\n\t\t/* In case opt_trust_madvise is disable,\n\t\t * do not do runtime check */\n\t\tmadvise_dont_need_zeros_is_faulty = 0;\n\t}\n#endif\n\n#ifndef _WIN32\n\tmmap_flags = MAP_PRIVATE | MAP_ANON;\n#endif\n\n#ifdef JEMALLOC_SYSCTL_VM_OVERCOMMIT\n\tos_overcommits = os_overcommits_sysctl();\n#elif defined(JEMALLOC_PROC_SYS_VM_OVERCOMMIT_MEMORY)\n\tos_overcommits = os_overcommits_proc();\n#  ifdef MAP_NORESERVE\n\tif (os_overcommits) {\n\t\tmmap_flags |= MAP_NORESERVE;\n\t}\n#  endif\n#elif defined(__NetBSD__)\n\tos_overcommits = true;\n#else\n\tos_overcommits = false;\n#endif\n\n\tinit_thp_state();\n\n#ifdef __FreeBSD__\n\t/*\n\t * FreeBSD doesn't 
need the check; madvise(2) is known to work.\n\t */\n#else\n\t/* Detect lazy purge runtime support. */\n\tif (pages_can_purge_lazy) {\n\t\tbool committed = false;\n\t\tvoid *madv_free_page = os_pages_map(NULL, PAGE, PAGE, &committed);\n\t\tif (madv_free_page == NULL) {\n\t\t\treturn true;\n\t\t}\n\t\tassert(pages_can_purge_lazy_runtime);\n\t\tif (pages_purge_lazy(madv_free_page, PAGE)) {\n\t\t\tpages_can_purge_lazy_runtime = false;\n\t\t}\n\t\tos_pages_unmap(madv_free_page, PAGE);\n\t}\n#endif\n\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/pai.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nsize_t\npai_alloc_batch_default(tsdn_t *tsdn, pai_t *self, size_t size, size_t nallocs,\n    edata_list_active_t *results, bool *deferred_work_generated) {\n\tfor (size_t i = 0; i < nallocs; i++) {\n\t\tbool deferred_by_alloc = false;\n\t\tedata_t *edata = pai_alloc(tsdn, self, size, PAGE,\n\t\t    /* zero */ false, /* guarded */ false,\n\t\t    /* frequent_reuse */ false, &deferred_by_alloc);\n\t\t*deferred_work_generated |= deferred_by_alloc;\n\t\tif (edata == NULL) {\n\t\t\treturn i;\n\t\t}\n\t\tedata_list_active_append(results, edata);\n\t}\n\treturn nallocs;\n}\n\nvoid\npai_dalloc_batch_default(tsdn_t *tsdn, pai_t *self,\n    edata_list_active_t *list, bool *deferred_work_generated) {\n\tedata_t *edata;\n\twhile ((edata = edata_list_active_first(list)) != NULL) {\n\t\tbool deferred_by_dalloc = false;\n\t\tedata_list_active_remove(list, edata);\n\t\tpai_dalloc(tsdn, self, edata, &deferred_by_dalloc);\n\t\t*deferred_work_generated |= deferred_by_dalloc;\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/peak_event.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/peak_event.h\"\n\n#include \"jemalloc/internal/activity_callback.h\"\n#include \"jemalloc/internal/peak.h\"\n\n/*\n * Update every 64K by default.  We're not exposing this as a configuration\n * option for now; we don't want to bind ourselves too tightly to any particular\n * performance requirements for small values, or guarantee that we'll even be\n * able to provide fine-grained accuracy.\n */\n#define PEAK_EVENT_WAIT (64 * 1024)\n\n/* Update the peak with current tsd state. */\nvoid\npeak_event_update(tsd_t *tsd) {\n\tuint64_t alloc = tsd_thread_allocated_get(tsd);\n\tuint64_t dalloc = tsd_thread_deallocated_get(tsd);\n\tpeak_t *peak = tsd_peakp_get(tsd);\n\tpeak_update(peak, alloc, dalloc);\n}\n\nstatic void\npeak_event_activity_callback(tsd_t *tsd) {\n\tactivity_callback_thunk_t *thunk = tsd_activity_callback_thunkp_get(\n\t    tsd);\n\tuint64_t alloc = tsd_thread_allocated_get(tsd);\n\tuint64_t dalloc = tsd_thread_deallocated_get(tsd);\n\tif (thunk->callback != NULL) {\n\t\tthunk->callback(thunk->uctx, alloc, dalloc);\n\t}\n}\n\n/* Set current state to zero. 
*/\nvoid\npeak_event_zero(tsd_t *tsd) {\n\tuint64_t alloc = tsd_thread_allocated_get(tsd);\n\tuint64_t dalloc = tsd_thread_deallocated_get(tsd);\n\tpeak_t *peak = tsd_peakp_get(tsd);\n\tpeak_set_zero(peak, alloc, dalloc);\n}\n\nuint64_t\npeak_event_max(tsd_t *tsd) {\n\tpeak_t *peak = tsd_peakp_get(tsd);\n\treturn peak_max(peak);\n}\n\nuint64_t\npeak_alloc_new_event_wait(tsd_t *tsd) {\n\treturn PEAK_EVENT_WAIT;\n}\n\nuint64_t\npeak_alloc_postponed_event_wait(tsd_t *tsd) {\n\treturn TE_MIN_START_WAIT;\n}\n\nvoid\npeak_alloc_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tpeak_event_update(tsd);\n\tpeak_event_activity_callback(tsd);\n}\n\nuint64_t\npeak_dalloc_new_event_wait(tsd_t *tsd) {\n\treturn PEAK_EVENT_WAIT;\n}\n\nuint64_t\npeak_dalloc_postponed_event_wait(tsd_t *tsd) {\n\treturn TE_MIN_START_WAIT;\n}\n\nvoid\npeak_dalloc_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tpeak_event_update(tsd);\n\tpeak_event_activity_callback(tsd);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/prof.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/counter.h\"\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_log.h\"\n#include \"jemalloc/internal/prof_recent.h\"\n#include \"jemalloc/internal/prof_stats.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n#include \"jemalloc/internal/prof_hook.h\"\n#include \"jemalloc/internal/thread_event.h\"\n\n/*\n * This file implements the profiling \"APIs\" needed by other parts of jemalloc,\n * and also manages the relevant \"operational\" data, mainly options and mutexes;\n * the core profiling data structures are encapsulated in prof_data.c.\n */\n\n/******************************************************************************/\n\n/* Data. */\n\nbool opt_prof = false;\nbool opt_prof_active = true;\nbool opt_prof_thread_active_init = true;\nsize_t opt_lg_prof_sample = LG_PROF_SAMPLE_DEFAULT;\nssize_t opt_lg_prof_interval = LG_PROF_INTERVAL_DEFAULT;\nbool opt_prof_gdump = false;\nbool opt_prof_final = false;\nbool opt_prof_leak = false;\nbool opt_prof_leak_error = false;\nbool opt_prof_accum = false;\nchar opt_prof_prefix[PROF_DUMP_FILENAME_LEN];\nbool opt_prof_sys_thread_name = false;\nbool opt_prof_unbias = true;\n\n/* Accessed via prof_sample_event_handler(). 
*/\nstatic counter_accum_t prof_idump_accumulated;\n\n/*\n * Initialized as opt_prof_active, and accessed via\n * prof_active_[gs]et{_unlocked,}().\n */\nbool prof_active_state;\nstatic malloc_mutex_t prof_active_mtx;\n\n/*\n * Initialized as opt_prof_thread_active_init, and accessed via\n * prof_thread_active_init_[gs]et().\n */\nstatic bool prof_thread_active_init;\nstatic malloc_mutex_t prof_thread_active_init_mtx;\n\n/*\n * Initialized as opt_prof_gdump, and accessed via\n * prof_gdump_[gs]et{_unlocked,}().\n */\nbool prof_gdump_val;\nstatic malloc_mutex_t prof_gdump_mtx;\n\nuint64_t prof_interval = 0;\n\nsize_t lg_prof_sample;\n\nstatic uint64_t next_thr_uid;\nstatic malloc_mutex_t next_thr_uid_mtx;\n\n/* Do not dump any profiles until bootstrapping is complete. */\nbool prof_booted = false;\n\n/* Logically a prof_backtrace_hook_t. */\natomic_p_t prof_backtrace_hook;\n\n/* Logically a prof_dump_hook_t. */\natomic_p_t prof_dump_hook;\n\n/******************************************************************************/\n\nvoid\nprof_alloc_rollback(tsd_t *tsd, prof_tctx_t *tctx) {\n\tcassert(config_prof);\n\n\tif (tsd_reentrancy_level_get(tsd) > 0) {\n\t\tassert((uintptr_t)tctx == (uintptr_t)1U);\n\t\treturn;\n\t}\n\n\tif ((uintptr_t)tctx > (uintptr_t)1U) {\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);\n\t\ttctx->prepared = false;\n\t\tprof_tctx_try_destroy(tsd, tctx);\n\t}\n}\n\nvoid\nprof_malloc_sample_object(tsd_t *tsd, const void *ptr, size_t size,\n    size_t usize, prof_tctx_t *tctx) {\n\tcassert(config_prof);\n\n\tif (opt_prof_sys_thread_name) {\n\t\tprof_sys_thread_name_fetch(tsd);\n\t}\n\n\tedata_t *edata = emap_edata_lookup(tsd_tsdn(tsd), &arena_emap_global,\n\t    ptr);\n\tprof_info_set(tsd, edata, tctx, size);\n\n\tszind_t szind = sz_size2index(usize);\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);\n\t/*\n\t * We need to do these map lookups while holding the lock, to avoid the\n\t * possibility of races with prof_reset calls, 
which update the map and\n\t * then acquire the lock.  This actually still leaves a data race on the\n\t * contents of the unbias map, but we have not yet gone through and\n\t * atomic-ified the prof module, and compilers are not yet causing us\n\t * issues.  The key thing is to make sure that, if we read garbage data,\n\t * the prof_reset call is about to mark our tctx as expired before any\n\t * dumping of our corrupted output is attempted.\n\t */\n\tsize_t shifted_unbiased_cnt = prof_shifted_unbiased_cnt[szind];\n\tsize_t unbiased_bytes = prof_unbiased_sz[szind];\n\ttctx->cnts.curobjs++;\n\ttctx->cnts.curobjs_shifted_unbiased += shifted_unbiased_cnt;\n\ttctx->cnts.curbytes += usize;\n\ttctx->cnts.curbytes_unbiased += unbiased_bytes;\n\tif (opt_prof_accum) {\n\t\ttctx->cnts.accumobjs++;\n\t\ttctx->cnts.accumobjs_shifted_unbiased += shifted_unbiased_cnt;\n\t\ttctx->cnts.accumbytes += usize;\n\t\ttctx->cnts.accumbytes_unbiased += unbiased_bytes;\n\t}\n\tbool record_recent = prof_recent_alloc_prepare(tsd, tctx);\n\ttctx->prepared = false;\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock);\n\tif (record_recent) {\n\t\tassert(tctx == edata_prof_tctx_get(edata));\n\t\tprof_recent_alloc(tsd, edata, size, usize);\n\t}\n\n\tif (opt_prof_stats) {\n\t\tprof_stats_inc(tsd, szind, size);\n\t}\n}\n\nvoid\nprof_free_sampled_object(tsd_t *tsd, size_t usize, prof_info_t *prof_info) {\n\tcassert(config_prof);\n\n\tassert(prof_info != NULL);\n\tprof_tctx_t *tctx = prof_info->alloc_tctx;\n\tassert((uintptr_t)tctx > (uintptr_t)1U);\n\n\tszind_t szind = sz_size2index(usize);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);\n\n\tassert(tctx->cnts.curobjs > 0);\n\tassert(tctx->cnts.curbytes >= usize);\n\t/*\n\t * It's not correct to do equivalent asserts for unbiased bytes, because\n\t * of the potential for races with prof.reset calls.  
The map contents\n\t * should really be atomic, but we have not atomic-ified the prof module\n\t * yet.\n\t */\n\ttctx->cnts.curobjs--;\n\ttctx->cnts.curobjs_shifted_unbiased -= prof_shifted_unbiased_cnt[szind];\n\ttctx->cnts.curbytes -= usize;\n\ttctx->cnts.curbytes_unbiased -= prof_unbiased_sz[szind];\n\n\tprof_try_log(tsd, usize, prof_info);\n\n\tprof_tctx_try_destroy(tsd, tctx);\n\n\tif (opt_prof_stats) {\n\t\tprof_stats_dec(tsd, szind, prof_info->alloc_size);\n\t}\n}\n\nprof_tctx_t *\nprof_tctx_create(tsd_t *tsd) {\n\tif (!tsd_nominal(tsd) || tsd_reentrancy_level_get(tsd) > 0) {\n\t\treturn NULL;\n\t}\n\n\tprof_tdata_t *tdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn NULL;\n\t}\n\n\tprof_bt_t bt;\n\tbt_init(&bt, tdata->vec);\n\tprof_backtrace(tsd, &bt);\n\treturn prof_lookup(tsd, &bt);\n}\n\n/*\n * The bodies of this function and prof_leakcheck() are compiled out unless heap\n * profiling is enabled, so that it is possible to compile jemalloc with\n * floating point support completely disabled.  Avoiding floating point code is\n * important on memory-constrained systems, but it also enables a workaround for\n * versions of glibc that don't properly save/restore floating point registers\n * during dynamic lazy symbol loading (which internally calls into whatever\n * malloc implementation happens to be integrated into the application).  Note\n * that some compilers (e.g.  
gcc 4.8) may use floating point registers for fast\n * memory moves, so jemalloc must be compiled with such optimizations disabled\n * (e.g.\n * -mno-sse) in order for the workaround to be complete.\n */\nuint64_t\nprof_sample_new_event_wait(tsd_t *tsd) {\n#ifdef JEMALLOC_PROF\n\tif (lg_prof_sample == 0) {\n\t\treturn TE_MIN_START_WAIT;\n\t}\n\n\t/*\n\t * Compute sample interval as a geometrically distributed random\n\t * variable with mean (2^lg_prof_sample).\n\t *\n\t *                      __        __\n\t *                      |  log(u)  |                     1\n\t * bytes_until_sample = | -------- |, where p = ---------------\n\t *                      | log(1-p) |             lg_prof_sample\n\t *                                              2\n\t *\n\t * For more information on the math, see:\n\t *\n\t *   Non-Uniform Random Variate Generation\n\t *   Luc Devroye\n\t *   Springer-Verlag, New York, 1986\n\t *   pp 500\n\t *   (http://luc.devroye.org/rnbookindex.html)\n\t *\n\t * In the actual computation, there's a non-zero probability that our\n\t * pseudo random number generator generates an exact 0, and to avoid\n\t * log(0), we set u to 1.0 in case r is 0.  Therefore u effectively is\n\t * uniformly distributed in (0, 1] instead of [0, 1).  Further, rather\n\t * than taking the ceiling, we take the floor and then add 1, since\n\t * otherwise bytes_until_sample would be 0 if u is exactly 1.0.\n\t */\n\tuint64_t r = prng_lg_range_u64(tsd_prng_statep_get(tsd), 53);\n\tdouble u = (r == 0U) ? 1.0 : (double)r * (1.0/9007199254740992.0L);\n\treturn (uint64_t)(log(u) /\n\t    log(1.0 - (1.0 / (double)((uint64_t)1U << lg_prof_sample))))\n\t    + (uint64_t)1U;\n#else\n\tnot_reached();\n\treturn TE_MAX_START_WAIT;\n#endif\n}\n\nuint64_t\nprof_sample_postponed_event_wait(tsd_t *tsd) {\n\t/*\n\t * The postponed wait time for prof sample event is computed as if we\n\t * want a new wait time (i.e. as if the event were triggered).  
If we\n\t * instead postpone to the immediate next allocation, like how we're\n\t * handling the other events, then we can have sampling bias, if e.g.\n\t * the allocation immediately following a reentrancy always comes from\n\t * the same stack trace.\n\t */\n\treturn prof_sample_new_event_wait(tsd);\n}\n\nvoid\nprof_sample_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tcassert(config_prof);\n\tassert(elapsed > 0 && elapsed != TE_INVALID_ELAPSED);\n\tif (prof_interval == 0 || !prof_active_get_unlocked()) {\n\t\treturn;\n\t}\n\tif (counter_accum(tsd_tsdn(tsd), &prof_idump_accumulated, elapsed)) {\n\t\tprof_idump(tsd_tsdn(tsd));\n\t}\n}\n\nstatic void\nprof_fdump(void) {\n\ttsd_t *tsd;\n\n\tcassert(config_prof);\n\tassert(opt_prof_final);\n\n\tif (!prof_booted) {\n\t\treturn;\n\t}\n\ttsd = tsd_fetch();\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_fdump_impl(tsd);\n}\n\nstatic bool\nprof_idump_accum_init(void) {\n\tcassert(config_prof);\n\n\treturn counter_accum_init(&prof_idump_accumulated, prof_interval);\n}\n\nvoid\nprof_idump(tsdn_t *tsdn) {\n\ttsd_t *tsd;\n\tprof_tdata_t *tdata;\n\n\tcassert(config_prof);\n\n\tif (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {\n\t\treturn;\n\t}\n\ttsd = tsdn_tsd(tsdn);\n\tif (tsd_reentrancy_level_get(tsd) > 0) {\n\t\treturn;\n\t}\n\n\ttdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn;\n\t}\n\tif (tdata->enq) {\n\t\ttdata->enq_idump = true;\n\t\treturn;\n\t}\n\n\tprof_idump_impl(tsd);\n}\n\nbool\nprof_mdump(tsd_t *tsd, const char *filename) {\n\tcassert(config_prof);\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tif (!opt_prof || !prof_booted) {\n\t\treturn true;\n\t}\n\n\treturn prof_mdump_impl(tsd, filename);\n}\n\nvoid\nprof_gdump(tsdn_t *tsdn) {\n\ttsd_t *tsd;\n\tprof_tdata_t *tdata;\n\n\tcassert(config_prof);\n\n\tif (!prof_booted || tsdn_null(tsdn) || !prof_active_get_unlocked()) {\n\t\treturn;\n\t}\n\ttsd = tsdn_tsd(tsdn);\n\tif (tsd_reentrancy_level_get(tsd) > 0) 
{\n\t\treturn;\n\t}\n\n\ttdata = prof_tdata_get(tsd, false);\n\tif (tdata == NULL) {\n\t\treturn;\n\t}\n\tif (tdata->enq) {\n\t\ttdata->enq_gdump = true;\n\t\treturn;\n\t}\n\n\tprof_gdump_impl(tsd);\n}\n\nstatic uint64_t\nprof_thr_uid_alloc(tsdn_t *tsdn) {\n\tuint64_t thr_uid;\n\n\tmalloc_mutex_lock(tsdn, &next_thr_uid_mtx);\n\tthr_uid = next_thr_uid;\n\tnext_thr_uid++;\n\tmalloc_mutex_unlock(tsdn, &next_thr_uid_mtx);\n\n\treturn thr_uid;\n}\n\nprof_tdata_t *\nprof_tdata_init(tsd_t *tsd) {\n\treturn prof_tdata_init_impl(tsd, prof_thr_uid_alloc(tsd_tsdn(tsd)), 0,\n\t    NULL, prof_thread_active_init_get(tsd_tsdn(tsd)));\n}\n\nprof_tdata_t *\nprof_tdata_reinit(tsd_t *tsd, prof_tdata_t *tdata) {\n\tuint64_t thr_uid = tdata->thr_uid;\n\tuint64_t thr_discrim = tdata->thr_discrim + 1;\n\tchar *thread_name = (tdata->thread_name != NULL) ?\n\t    prof_thread_name_alloc(tsd, tdata->thread_name) : NULL;\n\tbool active = tdata->active;\n\n\tprof_tdata_detach(tsd, tdata);\n\treturn prof_tdata_init_impl(tsd, thr_uid, thr_discrim, thread_name,\n\t    active);\n}\n\nvoid\nprof_tdata_cleanup(tsd_t *tsd) {\n\tprof_tdata_t *tdata;\n\n\tif (!config_prof) {\n\t\treturn;\n\t}\n\n\ttdata = tsd_prof_tdata_get(tsd);\n\tif (tdata != NULL) {\n\t\tprof_tdata_detach(tsd, tdata);\n\t}\n}\n\nbool\nprof_active_get(tsdn_t *tsdn) {\n\tbool prof_active_current;\n\n\tprof_active_assert();\n\tmalloc_mutex_lock(tsdn, &prof_active_mtx);\n\tprof_active_current = prof_active_state;\n\tmalloc_mutex_unlock(tsdn, &prof_active_mtx);\n\treturn prof_active_current;\n}\n\nbool\nprof_active_set(tsdn_t *tsdn, bool active) {\n\tbool prof_active_old;\n\n\tprof_active_assert();\n\tmalloc_mutex_lock(tsdn, &prof_active_mtx);\n\tprof_active_old = prof_active_state;\n\tprof_active_state = active;\n\tmalloc_mutex_unlock(tsdn, &prof_active_mtx);\n\tprof_active_assert();\n\treturn prof_active_old;\n}\n\nconst char *\nprof_thread_name_get(tsd_t *tsd) {\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t 
*tdata;\n\n\ttdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn \"\";\n\t}\n\treturn (tdata->thread_name != NULL ? tdata->thread_name : \"\");\n}\n\nint\nprof_thread_name_set(tsd_t *tsd, const char *thread_name) {\n\tif (opt_prof_sys_thread_name) {\n\t\treturn ENOENT;\n\t} else {\n\t\treturn prof_thread_name_set_impl(tsd, thread_name);\n\t}\n}\n\nbool\nprof_thread_active_get(tsd_t *tsd) {\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t *tdata;\n\n\ttdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn false;\n\t}\n\treturn tdata->active;\n}\n\nbool\nprof_thread_active_set(tsd_t *tsd, bool active) {\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t *tdata;\n\n\ttdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn true;\n\t}\n\ttdata->active = active;\n\treturn false;\n}\n\nbool\nprof_thread_active_init_get(tsdn_t *tsdn) {\n\tbool active_init;\n\n\tmalloc_mutex_lock(tsdn, &prof_thread_active_init_mtx);\n\tactive_init = prof_thread_active_init;\n\tmalloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx);\n\treturn active_init;\n}\n\nbool\nprof_thread_active_init_set(tsdn_t *tsdn, bool active_init) {\n\tbool active_init_old;\n\n\tmalloc_mutex_lock(tsdn, &prof_thread_active_init_mtx);\n\tactive_init_old = prof_thread_active_init;\n\tprof_thread_active_init = active_init;\n\tmalloc_mutex_unlock(tsdn, &prof_thread_active_init_mtx);\n\treturn active_init_old;\n}\n\nbool\nprof_gdump_get(tsdn_t *tsdn) {\n\tbool prof_gdump_current;\n\n\tmalloc_mutex_lock(tsdn, &prof_gdump_mtx);\n\tprof_gdump_current = prof_gdump_val;\n\tmalloc_mutex_unlock(tsdn, &prof_gdump_mtx);\n\treturn prof_gdump_current;\n}\n\nbool\nprof_gdump_set(tsdn_t *tsdn, bool gdump) {\n\tbool prof_gdump_old;\n\n\tmalloc_mutex_lock(tsdn, &prof_gdump_mtx);\n\tprof_gdump_old = prof_gdump_val;\n\tprof_gdump_val = gdump;\n\tmalloc_mutex_unlock(tsdn, &prof_gdump_mtx);\n\treturn 
prof_gdump_old;\n}\n\nvoid\nprof_backtrace_hook_set(prof_backtrace_hook_t hook) {\n\tatomic_store_p(&prof_backtrace_hook, hook, ATOMIC_RELEASE);\n}\n\nprof_backtrace_hook_t\nprof_backtrace_hook_get() {\n\treturn (prof_backtrace_hook_t)atomic_load_p(&prof_backtrace_hook,\n\t    ATOMIC_ACQUIRE);\n}\n\nvoid\nprof_dump_hook_set(prof_dump_hook_t hook) {\n\tatomic_store_p(&prof_dump_hook, hook, ATOMIC_RELEASE);\n}\n\nprof_dump_hook_t\nprof_dump_hook_get() {\n\treturn (prof_dump_hook_t)atomic_load_p(&prof_dump_hook,\n\t    ATOMIC_ACQUIRE);\n}\n\nvoid\nprof_boot0(void) {\n\tcassert(config_prof);\n\n\tmemcpy(opt_prof_prefix, PROF_PREFIX_DEFAULT,\n\t    sizeof(PROF_PREFIX_DEFAULT));\n}\n\nvoid\nprof_boot1(void) {\n\tcassert(config_prof);\n\n\t/*\n\t * opt_prof must be in its final state before any arenas are\n\t * initialized, so this function must be executed early.\n\t */\n\tif (opt_prof_leak_error && !opt_prof_leak) {\n\t\topt_prof_leak = true;\n\t}\n\n\tif (opt_prof_leak && !opt_prof) {\n\t\t/*\n\t\t * Enable opt_prof, but in such a way that profiles are never\n\t\t * automatically dumped.\n\t\t */\n\t\topt_prof = true;\n\t\topt_prof_gdump = false;\n\t} else if (opt_prof) {\n\t\tif (opt_lg_prof_interval >= 0) {\n\t\t\tprof_interval = (((uint64_t)1U) <<\n\t\t\t    opt_lg_prof_interval);\n\t\t}\n\t}\n}\n\nbool\nprof_boot2(tsd_t *tsd, base_t *base) {\n\tcassert(config_prof);\n\n\t/*\n\t * Initialize the global mutexes unconditionally to maintain correct\n\t * stats when opt_prof is false.\n\t */\n\tif (malloc_mutex_init(&prof_active_mtx, \"prof_active\",\n\t    WITNESS_RANK_PROF_ACTIVE, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&prof_gdump_mtx, \"prof_gdump\",\n\t    WITNESS_RANK_PROF_GDUMP, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&prof_thread_active_init_mtx,\n\t    \"prof_thread_active_init\", WITNESS_RANK_PROF_THREAD_ACTIVE_INIT,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn 
true;\n\t}\n\tif (malloc_mutex_init(&bt2gctx_mtx, \"prof_bt2gctx\",\n\t    WITNESS_RANK_PROF_BT2GCTX, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&tdatas_mtx, \"prof_tdatas\",\n\t    WITNESS_RANK_PROF_TDATAS, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&next_thr_uid_mtx, \"prof_next_thr_uid\",\n\t    WITNESS_RANK_PROF_NEXT_THR_UID, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&prof_stats_mtx, \"prof_stats\",\n\t    WITNESS_RANK_PROF_STATS, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&prof_dump_filename_mtx,\n\t    \"prof_dump_filename\", WITNESS_RANK_PROF_DUMP_FILENAME,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\tif (malloc_mutex_init(&prof_dump_mtx, \"prof_dump\",\n\t    WITNESS_RANK_PROF_DUMP, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tif (opt_prof) {\n\t\tlg_prof_sample = opt_lg_prof_sample;\n\t\tprof_unbias_map_init();\n\t\tprof_active_state = opt_prof_active;\n\t\tprof_gdump_val = opt_prof_gdump;\n\t\tprof_thread_active_init = opt_prof_thread_active_init;\n\n\t\tif (prof_data_init(tsd)) {\n\t\t\treturn true;\n\t\t}\n\n\t\tnext_thr_uid = 0;\n\t\tif (prof_idump_accum_init()) {\n\t\t\treturn true;\n\t\t}\n\n\t\tif (opt_prof_final && opt_prof_prefix[0] != '\\0' &&\n\t\t    atexit(prof_fdump) != 0) {\n\t\t\tmalloc_write(\"<jemalloc>: Error in atexit()\\n\");\n\t\t\tif (opt_abort) {\n\t\t\t\tabort();\n\t\t\t}\n\t\t}\n\n\t\tif (prof_log_init(tsd)) {\n\t\t\treturn true;\n\t\t}\n\n\t\tif (prof_recent_init()) {\n\t\t\treturn true;\n\t\t}\n\n\t\tprof_base = base;\n\n\t\tgctx_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), base,\n\t\t    PROF_NCTX_LOCKS * sizeof(malloc_mutex_t), CACHELINE);\n\t\tif (gctx_locks == NULL) {\n\t\t\treturn true;\n\t\t}\n\t\tfor (unsigned i = 0; i < PROF_NCTX_LOCKS; i++) {\n\t\t\tif (malloc_mutex_init(&gctx_locks[i], \"prof_gctx\",\n\t\t\t    
WITNESS_RANK_PROF_GCTX,\n\t\t\t    malloc_mutex_rank_exclusive)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\n\t\ttdata_locks = (malloc_mutex_t *)base_alloc(tsd_tsdn(tsd), base,\n\t\t    PROF_NTDATA_LOCKS * sizeof(malloc_mutex_t), CACHELINE);\n\t\tif (tdata_locks == NULL) {\n\t\t\treturn true;\n\t\t}\n\t\tfor (unsigned i = 0; i < PROF_NTDATA_LOCKS; i++) {\n\t\t\tif (malloc_mutex_init(&tdata_locks[i], \"prof_tdata\",\n\t\t\t    WITNESS_RANK_PROF_TDATA,\n\t\t\t    malloc_mutex_rank_exclusive)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\n\t\tprof_unwind_init();\n\t\tprof_hooks_init();\n\t}\n\tprof_booted = true;\n\n\treturn false;\n}\n\nvoid\nprof_prefork0(tsdn_t *tsdn) {\n\tif (config_prof && opt_prof) {\n\t\tunsigned i;\n\n\t\tmalloc_mutex_prefork(tsdn, &prof_dump_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &bt2gctx_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &tdatas_mtx);\n\t\tfor (i = 0; i < PROF_NTDATA_LOCKS; i++) {\n\t\t\tmalloc_mutex_prefork(tsdn, &tdata_locks[i]);\n\t\t}\n\t\tmalloc_mutex_prefork(tsdn, &log_mtx);\n\t\tfor (i = 0; i < PROF_NCTX_LOCKS; i++) {\n\t\t\tmalloc_mutex_prefork(tsdn, &gctx_locks[i]);\n\t\t}\n\t\tmalloc_mutex_prefork(tsdn, &prof_recent_dump_mtx);\n\t}\n}\n\nvoid\nprof_prefork1(tsdn_t *tsdn) {\n\tif (config_prof && opt_prof) {\n\t\tcounter_prefork(tsdn, &prof_idump_accumulated);\n\t\tmalloc_mutex_prefork(tsdn, &prof_active_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &prof_dump_filename_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &prof_gdump_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &prof_recent_alloc_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &prof_stats_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &next_thr_uid_mtx);\n\t\tmalloc_mutex_prefork(tsdn, &prof_thread_active_init_mtx);\n\t}\n}\n\nvoid\nprof_postfork_parent(tsdn_t *tsdn) {\n\tif (config_prof && opt_prof) {\n\t\tunsigned i;\n\n\t\tmalloc_mutex_postfork_parent(tsdn,\n\t\t    &prof_thread_active_init_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &next_thr_uid_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, 
&prof_stats_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_recent_alloc_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_gdump_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_dump_filename_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_active_mtx);\n\t\tcounter_postfork_parent(tsdn, &prof_idump_accumulated);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_recent_dump_mtx);\n\t\tfor (i = 0; i < PROF_NCTX_LOCKS; i++) {\n\t\t\tmalloc_mutex_postfork_parent(tsdn, &gctx_locks[i]);\n\t\t}\n\t\tmalloc_mutex_postfork_parent(tsdn, &log_mtx);\n\t\tfor (i = 0; i < PROF_NTDATA_LOCKS; i++) {\n\t\t\tmalloc_mutex_postfork_parent(tsdn, &tdata_locks[i]);\n\t\t}\n\t\tmalloc_mutex_postfork_parent(tsdn, &tdatas_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &bt2gctx_mtx);\n\t\tmalloc_mutex_postfork_parent(tsdn, &prof_dump_mtx);\n\t}\n}\n\nvoid\nprof_postfork_child(tsdn_t *tsdn) {\n\tif (config_prof && opt_prof) {\n\t\tunsigned i;\n\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_thread_active_init_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &next_thr_uid_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_stats_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_recent_alloc_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_gdump_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_dump_filename_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_active_mtx);\n\t\tcounter_postfork_child(tsdn, &prof_idump_accumulated);\n\t\tmalloc_mutex_postfork_child(tsdn, &prof_recent_dump_mtx);\n\t\tfor (i = 0; i < PROF_NCTX_LOCKS; i++) {\n\t\t\tmalloc_mutex_postfork_child(tsdn, &gctx_locks[i]);\n\t\t}\n\t\tmalloc_mutex_postfork_child(tsdn, &log_mtx);\n\t\tfor (i = 0; i < PROF_NTDATA_LOCKS; i++) {\n\t\t\tmalloc_mutex_postfork_child(tsdn, &tdata_locks[i]);\n\t\t}\n\t\tmalloc_mutex_postfork_child(tsdn, &tdatas_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, &bt2gctx_mtx);\n\t\tmalloc_mutex_postfork_child(tsdn, 
&prof_dump_mtx);\n\t}\n}\n\n/******************************************************************************/\n"
  },
  {
    "path": "deps/jemalloc/src/prof_data.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/ckh.h\"\n#include \"jemalloc/internal/hash.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/prof_data.h\"\n\n/*\n * This file defines and manages the core profiling data structures.\n *\n * Conceptually, profiling data can be imagined as a table with three columns:\n * thread, stack trace, and current allocation size.  (When prof_accum is on,\n * there's one additional column which is the cumulative allocation size.)\n *\n * Implementation wise, each thread maintains a hash recording the stack trace\n * to allocation size correspondences, which are basically the individual rows\n * in the table.  In addition, two global \"indices\" are built to make data\n * aggregation efficient (for dumping): bt2gctx and tdatas, which are basically\n * the \"grouped by stack trace\" and \"grouped by thread\" views of the same table,\n * respectively.  Note that the allocation size is only aggregated to the two\n * indices at dumping time, so as to optimize for performance.\n */\n\n/******************************************************************************/\n\nmalloc_mutex_t bt2gctx_mtx;\nmalloc_mutex_t tdatas_mtx;\nmalloc_mutex_t prof_dump_mtx;\n\n/*\n * Table of mutexes that are shared among gctx's.  These are leaf locks, so\n * there is no problem with using them for more than one gctx at the same time.\n * The primary motivation for this sharing though is that gctx's are ephemeral,\n * and destroying mutexes causes complications for systems that allocate when\n * creating/destroying mutexes.\n */\nmalloc_mutex_t *gctx_locks;\nstatic atomic_u_t cum_gctxs; /* Atomic counter. */\n\n/*\n * Table of mutexes that are shared among tdata's.  
No operations require\n * holding multiple tdata locks, so there is no problem with using them for more\n * than one tdata at the same time, even though a gctx lock may be acquired\n * while holding a tdata lock.\n */\nmalloc_mutex_t *tdata_locks;\n\n/*\n * Global hash of (prof_bt_t *)-->(prof_gctx_t *).  This is the master data\n * structure that knows about all backtraces currently captured.\n */\nstatic ckh_t bt2gctx;\n\n/*\n * Tree of all extant prof_tdata_t structures, regardless of state,\n * {attached,detached,expired}.\n */\nstatic prof_tdata_tree_t tdatas;\n\nsize_t prof_unbiased_sz[PROF_SC_NSIZES];\nsize_t prof_shifted_unbiased_cnt[PROF_SC_NSIZES];\n\n/******************************************************************************/\n/* Red-black trees. */\n\nstatic int\nprof_tctx_comp(const prof_tctx_t *a, const prof_tctx_t *b) {\n\tuint64_t a_thr_uid = a->thr_uid;\n\tuint64_t b_thr_uid = b->thr_uid;\n\tint ret = (a_thr_uid > b_thr_uid) - (a_thr_uid < b_thr_uid);\n\tif (ret == 0) {\n\t\tuint64_t a_thr_discrim = a->thr_discrim;\n\t\tuint64_t b_thr_discrim = b->thr_discrim;\n\t\tret = (a_thr_discrim > b_thr_discrim) - (a_thr_discrim <\n\t\t    b_thr_discrim);\n\t\tif (ret == 0) {\n\t\t\tuint64_t a_tctx_uid = a->tctx_uid;\n\t\t\tuint64_t b_tctx_uid = b->tctx_uid;\n\t\t\tret = (a_tctx_uid > b_tctx_uid) - (a_tctx_uid <\n\t\t\t    b_tctx_uid);\n\t\t}\n\t}\n\treturn ret;\n}\n\nrb_gen(static UNUSED, tctx_tree_, prof_tctx_tree_t, prof_tctx_t,\n    tctx_link, prof_tctx_comp)\n\nstatic int\nprof_gctx_comp(const prof_gctx_t *a, const prof_gctx_t *b) {\n\tunsigned a_len = a->bt.len;\n\tunsigned b_len = b->bt.len;\n\tunsigned comp_len = (a_len < b_len) ? 
a_len : b_len;\n\tint ret = memcmp(a->bt.vec, b->bt.vec, comp_len * sizeof(void *));\n\tif (ret == 0) {\n\t\tret = (a_len > b_len) - (a_len < b_len);\n\t}\n\treturn ret;\n}\n\nrb_gen(static UNUSED, gctx_tree_, prof_gctx_tree_t, prof_gctx_t, dump_link,\n    prof_gctx_comp)\n\nstatic int\nprof_tdata_comp(const prof_tdata_t *a, const prof_tdata_t *b) {\n\tint ret;\n\tuint64_t a_uid = a->thr_uid;\n\tuint64_t b_uid = b->thr_uid;\n\n\tret = ((a_uid > b_uid) - (a_uid < b_uid));\n\tif (ret == 0) {\n\t\tuint64_t a_discrim = a->thr_discrim;\n\t\tuint64_t b_discrim = b->thr_discrim;\n\n\t\tret = ((a_discrim > b_discrim) - (a_discrim < b_discrim));\n\t}\n\treturn ret;\n}\n\nrb_gen(static UNUSED, tdata_tree_, prof_tdata_tree_t, prof_tdata_t, tdata_link,\n    prof_tdata_comp)\n\n/******************************************************************************/\n\nstatic malloc_mutex_t *\nprof_gctx_mutex_choose(void) {\n\tunsigned ngctxs = atomic_fetch_add_u(&cum_gctxs, 1, ATOMIC_RELAXED);\n\n\treturn &gctx_locks[(ngctxs - 1) % PROF_NCTX_LOCKS];\n}\n\nstatic malloc_mutex_t *\nprof_tdata_mutex_choose(uint64_t thr_uid) {\n\treturn &tdata_locks[thr_uid % PROF_NTDATA_LOCKS];\n}\n\nbool\nprof_data_init(tsd_t *tsd) {\n\ttdata_tree_new(&tdatas);\n\treturn ckh_new(tsd, &bt2gctx, PROF_CKH_MINITEMS,\n\t    prof_bt_hash, prof_bt_keycomp);\n}\n\nstatic void\nprof_enter(tsd_t *tsd, prof_tdata_t *tdata) {\n\tcassert(config_prof);\n\tassert(tdata == prof_tdata_get(tsd, false));\n\n\tif (tdata != NULL) {\n\t\tassert(!tdata->enq);\n\t\ttdata->enq = true;\n\t}\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);\n}\n\nstatic void\nprof_leave(tsd_t *tsd, prof_tdata_t *tdata) {\n\tcassert(config_prof);\n\tassert(tdata == prof_tdata_get(tsd, false));\n\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);\n\n\tif (tdata != NULL) {\n\t\tbool idump, gdump;\n\n\t\tassert(tdata->enq);\n\t\ttdata->enq = false;\n\t\tidump = tdata->enq_idump;\n\t\ttdata->enq_idump = false;\n\t\tgdump = 
tdata->enq_gdump;\n\t\ttdata->enq_gdump = false;\n\n\t\tif (idump) {\n\t\t\tprof_idump(tsd_tsdn(tsd));\n\t\t}\n\t\tif (gdump) {\n\t\t\tprof_gdump(tsd_tsdn(tsd));\n\t\t}\n\t}\n}\n\nstatic prof_gctx_t *\nprof_gctx_create(tsdn_t *tsdn, prof_bt_t *bt) {\n\t/*\n\t * Create a single allocation that has space for vec of length bt->len.\n\t */\n\tsize_t size = offsetof(prof_gctx_t, vec) + (bt->len * sizeof(void *));\n\tprof_gctx_t *gctx = (prof_gctx_t *)iallocztm(tsdn, size,\n\t    sz_size2index(size), false, NULL, true, arena_get(TSDN_NULL, 0, true),\n\t    true);\n\tif (gctx == NULL) {\n\t\treturn NULL;\n\t}\n\tgctx->lock = prof_gctx_mutex_choose();\n\t/*\n\t * Set nlimbo to 1, in order to avoid a race condition with\n\t * prof_tctx_destroy()/prof_gctx_try_destroy().\n\t */\n\tgctx->nlimbo = 1;\n\ttctx_tree_new(&gctx->tctxs);\n\t/* Duplicate bt. */\n\tmemcpy(gctx->vec, bt->vec, bt->len * sizeof(void *));\n\tgctx->bt.vec = gctx->vec;\n\tgctx->bt.len = bt->len;\n\treturn gctx;\n}\n\nstatic void\nprof_gctx_try_destroy(tsd_t *tsd, prof_tdata_t *tdata_self,\n    prof_gctx_t *gctx) {\n\tcassert(config_prof);\n\n\t/*\n\t * Check that gctx is still unused by any thread cache before destroying\n\t * it.  prof_lookup() increments gctx->nlimbo in order to avoid a race\n\t * condition with this function, as does prof_tctx_destroy() in order to\n\t * avoid a race between the main body of prof_tctx_destroy() and entry\n\t * into this function.\n\t */\n\tprof_enter(tsd, tdata_self);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);\n\tassert(gctx->nlimbo != 0);\n\tif (tctx_tree_empty(&gctx->tctxs) && gctx->nlimbo == 1) {\n\t\t/* Remove gctx from bt2gctx. */\n\t\tif (ckh_remove(tsd, &bt2gctx, &gctx->bt, NULL, NULL)) {\n\t\t\tnot_reached();\n\t\t}\n\t\tprof_leave(tsd, tdata_self);\n\t\t/* Destroy gctx. 
*/\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\t\tidalloctm(tsd_tsdn(tsd), gctx, NULL, NULL, true, true);\n\t} else {\n\t\t/*\n\t\t * Compensate for increment in prof_tctx_destroy() or\n\t\t * prof_lookup().\n\t\t */\n\t\tgctx->nlimbo--;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\t\tprof_leave(tsd, tdata_self);\n\t}\n}\n\nstatic bool\nprof_gctx_should_destroy(prof_gctx_t *gctx) {\n\tif (opt_prof_accum) {\n\t\treturn false;\n\t}\n\tif (!tctx_tree_empty(&gctx->tctxs)) {\n\t\treturn false;\n\t}\n\tif (gctx->nlimbo != 0) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nstatic bool\nprof_lookup_global(tsd_t *tsd, prof_bt_t *bt, prof_tdata_t *tdata,\n    void **p_btkey, prof_gctx_t **p_gctx, bool *p_new_gctx) {\n\tunion {\n\t\tprof_gctx_t\t*p;\n\t\tvoid\t\t*v;\n\t} gctx, tgctx;\n\tunion {\n\t\tprof_bt_t\t*p;\n\t\tvoid\t\t*v;\n\t} btkey;\n\tbool new_gctx;\n\n\tprof_enter(tsd, tdata);\n\tif (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {\n\t\t/* bt has never been seen before.  Insert it. */\n\t\tprof_leave(tsd, tdata);\n\t\ttgctx.p = prof_gctx_create(tsd_tsdn(tsd), bt);\n\t\tif (tgctx.v == NULL) {\n\t\t\treturn true;\n\t\t}\n\t\tprof_enter(tsd, tdata);\n\t\tif (ckh_search(&bt2gctx, bt, &btkey.v, &gctx.v)) {\n\t\t\tgctx.p = tgctx.p;\n\t\t\tbtkey.p = &gctx.p->bt;\n\t\t\tif (ckh_insert(tsd, &bt2gctx, btkey.v, gctx.v)) {\n\t\t\t\t/* OOM. 
*/\n\t\t\t\tprof_leave(tsd, tdata);\n\t\t\t\tidalloctm(tsd_tsdn(tsd), gctx.v, NULL, NULL,\n\t\t\t\t    true, true);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tnew_gctx = true;\n\t\t} else {\n\t\t\tnew_gctx = false;\n\t\t}\n\t} else {\n\t\ttgctx.v = NULL;\n\t\tnew_gctx = false;\n\t}\n\n\tif (!new_gctx) {\n\t\t/*\n\t\t * Increment nlimbo, in order to avoid a race condition with\n\t\t * prof_tctx_destroy()/prof_gctx_try_destroy().\n\t\t */\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), gctx.p->lock);\n\t\tgctx.p->nlimbo++;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx.p->lock);\n\t\tnew_gctx = false;\n\n\t\tif (tgctx.v != NULL) {\n\t\t\t/* Lost race to insert. */\n\t\t\tidalloctm(tsd_tsdn(tsd), tgctx.v, NULL, NULL, true,\n\t\t\t    true);\n\t\t}\n\t}\n\tprof_leave(tsd, tdata);\n\n\t*p_btkey = btkey.v;\n\t*p_gctx = gctx.p;\n\t*p_new_gctx = new_gctx;\n\treturn false;\n}\n\nprof_tctx_t *\nprof_lookup(tsd_t *tsd, prof_bt_t *bt) {\n\tunion {\n\t\tprof_tctx_t\t*p;\n\t\tvoid\t\t*v;\n\t} ret;\n\tprof_tdata_t *tdata;\n\tbool not_found;\n\n\tcassert(config_prof);\n\n\ttdata = prof_tdata_get(tsd, false);\n\tassert(tdata != NULL);\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);\n\tnot_found = ckh_search(&tdata->bt2tctx, bt, NULL, &ret.v);\n\tif (!not_found) { /* Note double negative! */\n\t\tret.p->prepared = true;\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);\n\tif (not_found) {\n\t\tvoid *btkey;\n\t\tprof_gctx_t *gctx;\n\t\tbool new_gctx, error;\n\n\t\t/*\n\t\t * This thread's cache lacks bt.  Look for it in the global\n\t\t * cache.\n\t\t */\n\t\tif (prof_lookup_global(tsd, bt, tdata, &btkey, &gctx,\n\t\t    &new_gctx)) {\n\t\t\treturn NULL;\n\t\t}\n\n\t\t/* Link a prof_tctx_t into gctx for this thread. 
*/\n\t\tret.v = iallocztm(tsd_tsdn(tsd), sizeof(prof_tctx_t),\n\t\t    sz_size2index(sizeof(prof_tctx_t)), false, NULL, true,\n\t\t    arena_ichoose(tsd, NULL), true);\n\t\tif (ret.p == NULL) {\n\t\t\tif (new_gctx) {\n\t\t\t\tprof_gctx_try_destroy(tsd, tdata, gctx);\n\t\t\t}\n\t\t\treturn NULL;\n\t\t}\n\t\tret.p->tdata = tdata;\n\t\tret.p->thr_uid = tdata->thr_uid;\n\t\tret.p->thr_discrim = tdata->thr_discrim;\n\t\tret.p->recent_count = 0;\n\t\tmemset(&ret.p->cnts, 0, sizeof(prof_cnt_t));\n\t\tret.p->gctx = gctx;\n\t\tret.p->tctx_uid = tdata->tctx_uid_next++;\n\t\tret.p->prepared = true;\n\t\tret.p->state = prof_tctx_state_initializing;\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);\n\t\terror = ckh_insert(tsd, &tdata->bt2tctx, btkey, ret.v);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);\n\t\tif (error) {\n\t\t\tif (new_gctx) {\n\t\t\t\tprof_gctx_try_destroy(tsd, tdata, gctx);\n\t\t\t}\n\t\t\tidalloctm(tsd_tsdn(tsd), ret.v, NULL, NULL, true, true);\n\t\t\treturn NULL;\n\t\t}\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);\n\t\tret.p->state = prof_tctx_state_nominal;\n\t\ttctx_tree_insert(&gctx->tctxs, ret.p);\n\t\tgctx->nlimbo--;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\t}\n\n\treturn ret.p;\n}\n\n/* Used in unit tests. */\nstatic prof_tdata_t *\nprof_tdata_count_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,\n    void *arg) {\n\tsize_t *tdata_count = (size_t *)arg;\n\n\t(*tdata_count)++;\n\n\treturn NULL;\n}\n\n/* Used in unit tests. */\nsize_t\nprof_tdata_count(void) {\n\tsize_t tdata_count = 0;\n\ttsdn_t *tsdn;\n\n\ttsdn = tsdn_fetch();\n\tmalloc_mutex_lock(tsdn, &tdatas_mtx);\n\ttdata_tree_iter(&tdatas, NULL, prof_tdata_count_iter,\n\t    (void *)&tdata_count);\n\tmalloc_mutex_unlock(tsdn, &tdatas_mtx);\n\n\treturn tdata_count;\n}\n\n/* Used in unit tests. 
*/\nsize_t\nprof_bt_count(void) {\n\tsize_t bt_count;\n\ttsd_t *tsd;\n\tprof_tdata_t *tdata;\n\n\ttsd = tsd_fetch();\n\ttdata = prof_tdata_get(tsd, false);\n\tif (tdata == NULL) {\n\t\treturn 0;\n\t}\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &bt2gctx_mtx);\n\tbt_count = ckh_count(&bt2gctx);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &bt2gctx_mtx);\n\n\treturn bt_count;\n}\n\nchar *\nprof_thread_name_alloc(tsd_t *tsd, const char *thread_name) {\n\tchar *ret;\n\tsize_t size;\n\n\tif (thread_name == NULL) {\n\t\treturn NULL;\n\t}\n\n\tsize = strlen(thread_name) + 1;\n\tif (size == 1) {\n\t\treturn \"\";\n\t}\n\n\tret = iallocztm(tsd_tsdn(tsd), size, sz_size2index(size), false, NULL,\n\t    true, arena_get(TSDN_NULL, 0, true), true);\n\tif (ret == NULL) {\n\t\treturn NULL;\n\t}\n\tmemcpy(ret, thread_name, size);\n\treturn ret;\n}\n\nint\nprof_thread_name_set_impl(tsd_t *tsd, const char *thread_name) {\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t *tdata;\n\tunsigned i;\n\tchar *s;\n\n\ttdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn EAGAIN;\n\t}\n\n\t/* Validate input. */\n\tif (thread_name == NULL) {\n\t\treturn EFAULT;\n\t}\n\tfor (i = 0; thread_name[i] != '\\0'; i++) {\n\t\tchar c = thread_name[i];\n\t\tif (!isgraph(c) && !isblank(c)) {\n\t\t\treturn EFAULT;\n\t\t}\n\t}\n\n\ts = prof_thread_name_alloc(tsd, thread_name);\n\tif (s == NULL) {\n\t\treturn EAGAIN;\n\t}\n\n\tif (tdata->thread_name != NULL) {\n\t\tidalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,\n\t\t    true);\n\t\ttdata->thread_name = NULL;\n\t}\n\tif (strlen(s) > 0) {\n\t\ttdata->thread_name = s;\n\t}\n\treturn 0;\n}\n\nJEMALLOC_FORMAT_PRINTF(3, 4)\nstatic void\nprof_dump_printf(write_cb_t *prof_dump_write, void *cbopaque,\n    const char *format, ...) 
{\n\tva_list ap;\n\tchar buf[PROF_PRINTF_BUFSIZE];\n\n\tva_start(ap, format);\n\tmalloc_vsnprintf(buf, sizeof(buf), format, ap);\n\tva_end(ap);\n\tprof_dump_write(cbopaque, buf);\n}\n\n/*\n * Casting a double to a uint64_t may not necessarily be in range; this can be\n * UB.  I don't think this is practically possible with the cur counters, but\n * plausibly could be with the accum counters.\n */\n#ifdef JEMALLOC_PROF\nstatic uint64_t\nprof_double_uint64_cast(double d) {\n\t/*\n\t * Note: UINT64_MAX + 1 is exactly representable as a double on all\n\t * reasonable platforms (certainly those we'll support).  Writing this\n\t * as !(a < b) instead of (a >= b) means that we're NaN-safe.\n\t */\n\tdouble rounded = round(d);\n\tif (!(rounded < (double)UINT64_MAX)) {\n\t\treturn UINT64_MAX;\n\t}\n\treturn (uint64_t)rounded;\n}\n#endif\n\nvoid prof_unbias_map_init() {\n\t/* See the comment in prof_sample_new_event_wait */\n#ifdef JEMALLOC_PROF\n\tfor (szind_t i = 0; i < SC_NSIZES; i++) {\n\t\tdouble sz = (double)sz_index2size(i);\n\t\tdouble rate = (double)(ZU(1) << lg_prof_sample);\n\t\tdouble div_val = 1.0 - exp(-sz / rate);\n\t\tdouble unbiased_sz = sz / div_val;\n\t\t/*\n\t\t * The \"true\" right value for the unbiased count is\n\t\t * 1.0/(1 - exp(-sz/rate)).  The problem is, we keep the counts\n\t\t * as integers (for a variety of reasons -- rounding errors\n\t\t * could trigger asserts, and not all libcs can properly handle\n\t\t * floating point arithmetic during malloc calls inside libc).\n\t\t * Rounding to an integer, though, can lead to rounding errors\n\t\t * of over 30% for sizes close to the sampling rate.  So\n\t\t * instead, we multiply by a constant, dividing the maximum\n\t\t * possible roundoff error by that constant.  
To avoid overflow\n\t\t * in summing up size_t values, the largest safe constant we can\n\t\t * pick is the size of the smallest allocation.\n\t\t */\n\t\tdouble cnt_shift = (double)(ZU(1) << SC_LG_TINY_MIN);\n\t\tdouble shifted_unbiased_cnt = cnt_shift / div_val;\n\t\tprof_unbiased_sz[i] = (size_t)round(unbiased_sz);\n\t\tprof_shifted_unbiased_cnt[i] = (size_t)round(\n\t\t    shifted_unbiased_cnt);\n\t}\n#else\n\tunreachable();\n#endif\n}\n\n/*\n * The unbiasing story is long.  The jeprof unbiasing logic was copied from\n * pprof.  Both shared an issue: they unbiased using the average size of the\n * allocations at a particular stack trace.  This can work out OK if allocations\n * are mostly of the same size given some stack, but not otherwise.  We now\n * internally track what the unbiased results ought to be.  We can't just report\n * them as they are though; they'll still go through the jeprof unbiasing\n * process.  Instead, we figure out what values we can feed *into* jeprof's\n * unbiasing mechanism that will lead to getting the right values out.\n *\n * It'll unbias count and aggregate size as:\n *\n *   c_out = c_in * 1/(1-exp(-s_in/c_in/R)\n *   s_out = s_in * 1/(1-exp(-s_in/c_in/R)\n *\n * We want to solve for the values of c_in and s_in that will\n * give the c_out and s_out that we've computed internally.\n *\n * Let's do a change of variables (both to make the math easier and to make it\n * easier to write):\n *   x = s_in / c_in\n *   y = s_in\n *   k = 1/R.\n *\n * Then\n *   c_out = y/x * 1/(1-exp(-k*x))\n *   s_out = y * 1/(1-exp(-k*x))\n *\n * The first equation gives:\n *   y = x * c_out * (1-exp(-k*x))\n * The second gives:\n *   y = s_out * (1-exp(-k*x))\n * So we have\n *   x = s_out / c_out.\n * And all the other values fall out from that.\n *\n * This is all a fair bit of work.  The thing we get out of it is that we don't\n * break backwards compatibility with jeprof (and the various tools that have\n * copied its unbiasing logic).  
Eventually, we anticipate a v3 heap profile\n * dump format based on JSON, at which point I think much of this logic can get\n * cleaned up (since we'll be taking a compatibility break there anyways).\n */\nstatic void\nprof_do_unbias(uint64_t c_out_shifted_i, uint64_t s_out_i, uint64_t *r_c_in,\n    uint64_t *r_s_in) {\n#ifdef JEMALLOC_PROF\n\tif (c_out_shifted_i == 0 || s_out_i == 0) {\n\t\t*r_c_in = 0;\n\t\t*r_s_in = 0;\n\t\treturn;\n\t}\n\t/*\n\t * See the note in prof_unbias_map_init() to see why we take c_out in a\n\t * shifted form.\n\t */\n\tdouble c_out = (double)c_out_shifted_i\n\t    / (double)(ZU(1) << SC_LG_TINY_MIN);\n\tdouble s_out = (double)s_out_i;\n\tdouble R = (double)(ZU(1) << lg_prof_sample);\n\n\tdouble x = s_out / c_out;\n\tdouble y = s_out * (1.0 - exp(-x / R));\n\n\tdouble c_in = y / x;\n\tdouble s_in = y;\n\n\t*r_c_in = prof_double_uint64_cast(c_in);\n\t*r_s_in = prof_double_uint64_cast(s_in);\n#else\n\tunreachable();\n#endif\n}\n\nstatic void\nprof_dump_print_cnts(write_cb_t *prof_dump_write, void *cbopaque,\n    const prof_cnt_t *cnts) {\n\tuint64_t curobjs;\n\tuint64_t curbytes;\n\tuint64_t accumobjs;\n\tuint64_t accumbytes;\n\tif (opt_prof_unbias) {\n\t\tprof_do_unbias(cnts->curobjs_shifted_unbiased,\n\t\t    cnts->curbytes_unbiased, &curobjs, &curbytes);\n\t\tprof_do_unbias(cnts->accumobjs_shifted_unbiased,\n\t\t    cnts->accumbytes_unbiased, &accumobjs, &accumbytes);\n\t} else {\n\t\tcurobjs = cnts->curobjs;\n\t\tcurbytes = cnts->curbytes;\n\t\taccumobjs = cnts->accumobjs;\n\t\taccumbytes = cnts->accumbytes;\n\t}\n\tprof_dump_printf(prof_dump_write, cbopaque,\n\t    \"%\"FMTu64\": %\"FMTu64\" [%\"FMTu64\": %\"FMTu64\"]\",\n\t    curobjs, curbytes, accumobjs, accumbytes);\n}\n\nstatic void\nprof_tctx_merge_tdata(tsdn_t *tsdn, prof_tctx_t *tctx, prof_tdata_t *tdata) {\n\tmalloc_mutex_assert_owner(tsdn, tctx->tdata->lock);\n\n\tmalloc_mutex_lock(tsdn, tctx->gctx->lock);\n\n\tswitch (tctx->state) {\n\tcase 
prof_tctx_state_initializing:\n\t\tmalloc_mutex_unlock(tsdn, tctx->gctx->lock);\n\t\treturn;\n\tcase prof_tctx_state_nominal:\n\t\ttctx->state = prof_tctx_state_dumping;\n\t\tmalloc_mutex_unlock(tsdn, tctx->gctx->lock);\n\n\t\tmemcpy(&tctx->dump_cnts, &tctx->cnts, sizeof(prof_cnt_t));\n\n\t\ttdata->cnt_summed.curobjs += tctx->dump_cnts.curobjs;\n\t\ttdata->cnt_summed.curobjs_shifted_unbiased\n\t\t    += tctx->dump_cnts.curobjs_shifted_unbiased;\n\t\ttdata->cnt_summed.curbytes += tctx->dump_cnts.curbytes;\n\t\ttdata->cnt_summed.curbytes_unbiased\n\t\t    += tctx->dump_cnts.curbytes_unbiased;\n\t\tif (opt_prof_accum) {\n\t\t\ttdata->cnt_summed.accumobjs +=\n\t\t\t    tctx->dump_cnts.accumobjs;\n\t\t\ttdata->cnt_summed.accumobjs_shifted_unbiased +=\n\t\t\t    tctx->dump_cnts.accumobjs_shifted_unbiased;\n\t\t\ttdata->cnt_summed.accumbytes +=\n\t\t\t    tctx->dump_cnts.accumbytes;\n\t\t\ttdata->cnt_summed.accumbytes_unbiased +=\n\t\t\t    tctx->dump_cnts.accumbytes_unbiased;\n\t\t}\n\t\tbreak;\n\tcase prof_tctx_state_dumping:\n\tcase prof_tctx_state_purgatory:\n\t\tnot_reached();\n\t}\n}\n\nstatic void\nprof_tctx_merge_gctx(tsdn_t *tsdn, prof_tctx_t *tctx, prof_gctx_t *gctx) {\n\tmalloc_mutex_assert_owner(tsdn, gctx->lock);\n\n\tgctx->cnt_summed.curobjs += tctx->dump_cnts.curobjs;\n\tgctx->cnt_summed.curobjs_shifted_unbiased\n\t    += tctx->dump_cnts.curobjs_shifted_unbiased;\n\tgctx->cnt_summed.curbytes += tctx->dump_cnts.curbytes;\n\tgctx->cnt_summed.curbytes_unbiased += tctx->dump_cnts.curbytes_unbiased;\n\tif (opt_prof_accum) {\n\t\tgctx->cnt_summed.accumobjs += tctx->dump_cnts.accumobjs;\n\t\tgctx->cnt_summed.accumobjs_shifted_unbiased\n\t\t    += tctx->dump_cnts.accumobjs_shifted_unbiased;\n\t\tgctx->cnt_summed.accumbytes += tctx->dump_cnts.accumbytes;\n\t\tgctx->cnt_summed.accumbytes_unbiased\n\t\t    += tctx->dump_cnts.accumbytes_unbiased;\n\t}\n}\n\nstatic prof_tctx_t *\nprof_tctx_merge_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {\n\ttsdn_t 
*tsdn = (tsdn_t *)arg;\n\n\tmalloc_mutex_assert_owner(tsdn, tctx->gctx->lock);\n\n\tswitch (tctx->state) {\n\tcase prof_tctx_state_nominal:\n\t\t/* New since dumping started; ignore. */\n\t\tbreak;\n\tcase prof_tctx_state_dumping:\n\tcase prof_tctx_state_purgatory:\n\t\tprof_tctx_merge_gctx(tsdn, tctx, tctx->gctx);\n\t\tbreak;\n\tdefault:\n\t\tnot_reached();\n\t}\n\n\treturn NULL;\n}\n\ntypedef struct prof_dump_iter_arg_s prof_dump_iter_arg_t;\nstruct prof_dump_iter_arg_s {\n\ttsdn_t *tsdn;\n\twrite_cb_t *prof_dump_write;\n\tvoid *cbopaque;\n};\n\nstatic prof_tctx_t *\nprof_tctx_dump_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *opaque) {\n\tprof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;\n\tmalloc_mutex_assert_owner(arg->tsdn, tctx->gctx->lock);\n\n\tswitch (tctx->state) {\n\tcase prof_tctx_state_initializing:\n\tcase prof_tctx_state_nominal:\n\t\t/* Not captured by this dump. */\n\t\tbreak;\n\tcase prof_tctx_state_dumping:\n\tcase prof_tctx_state_purgatory:\n\t\tprof_dump_printf(arg->prof_dump_write, arg->cbopaque,\n\t\t    \"  t%\"FMTu64\": \", tctx->thr_uid);\n\t\tprof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,\n\t\t    &tctx->dump_cnts);\n\t\targ->prof_dump_write(arg->cbopaque, \"\\n\");\n\t\tbreak;\n\tdefault:\n\t\tnot_reached();\n\t}\n\treturn NULL;\n}\n\nstatic prof_tctx_t *\nprof_tctx_finish_iter(prof_tctx_tree_t *tctxs, prof_tctx_t *tctx, void *arg) {\n\ttsdn_t *tsdn = (tsdn_t *)arg;\n\tprof_tctx_t *ret;\n\n\tmalloc_mutex_assert_owner(tsdn, tctx->gctx->lock);\n\n\tswitch (tctx->state) {\n\tcase prof_tctx_state_nominal:\n\t\t/* New since dumping started; ignore. 
*/\n\t\tbreak;\n\tcase prof_tctx_state_dumping:\n\t\ttctx->state = prof_tctx_state_nominal;\n\t\tbreak;\n\tcase prof_tctx_state_purgatory:\n\t\tret = tctx;\n\t\tgoto label_return;\n\tdefault:\n\t\tnot_reached();\n\t}\n\n\tret = NULL;\nlabel_return:\n\treturn ret;\n}\n\nstatic void\nprof_dump_gctx_prep(tsdn_t *tsdn, prof_gctx_t *gctx, prof_gctx_tree_t *gctxs) {\n\tcassert(config_prof);\n\n\tmalloc_mutex_lock(tsdn, gctx->lock);\n\n\t/*\n\t * Increment nlimbo so that gctx won't go away before dump.\n\t * Additionally, link gctx into the dump list so that it is included in\n\t * prof_dump()'s second pass.\n\t */\n\tgctx->nlimbo++;\n\tgctx_tree_insert(gctxs, gctx);\n\n\tmemset(&gctx->cnt_summed, 0, sizeof(prof_cnt_t));\n\n\tmalloc_mutex_unlock(tsdn, gctx->lock);\n}\n\ntypedef struct prof_gctx_merge_iter_arg_s prof_gctx_merge_iter_arg_t;\nstruct prof_gctx_merge_iter_arg_s {\n\ttsdn_t *tsdn;\n\tsize_t *leak_ngctx;\n};\n\nstatic prof_gctx_t *\nprof_gctx_merge_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {\n\tprof_gctx_merge_iter_arg_t *arg = (prof_gctx_merge_iter_arg_t *)opaque;\n\n\tmalloc_mutex_lock(arg->tsdn, gctx->lock);\n\ttctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_merge_iter,\n\t    (void *)arg->tsdn);\n\tif (gctx->cnt_summed.curobjs != 0) {\n\t\t(*arg->leak_ngctx)++;\n\t}\n\tmalloc_mutex_unlock(arg->tsdn, gctx->lock);\n\n\treturn NULL;\n}\n\nstatic void\nprof_gctx_finish(tsd_t *tsd, prof_gctx_tree_t *gctxs) {\n\tprof_tdata_t *tdata = prof_tdata_get(tsd, false);\n\tprof_gctx_t *gctx;\n\n\t/*\n\t * Standard tree iteration won't work here, because as soon as we\n\t * decrement gctx->nlimbo and unlock gctx, another thread can\n\t * concurrently destroy it, which will corrupt the tree.  
Therefore,\n\t * tear down the tree one node at a time during iteration.\n\t */\n\twhile ((gctx = gctx_tree_first(gctxs)) != NULL) {\n\t\tgctx_tree_remove(gctxs, gctx);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);\n\t\t{\n\t\t\tprof_tctx_t *next;\n\n\t\t\tnext = NULL;\n\t\t\tdo {\n\t\t\t\tprof_tctx_t *to_destroy =\n\t\t\t\t    tctx_tree_iter(&gctx->tctxs, next,\n\t\t\t\t    prof_tctx_finish_iter,\n\t\t\t\t    (void *)tsd_tsdn(tsd));\n\t\t\t\tif (to_destroy != NULL) {\n\t\t\t\t\tnext = tctx_tree_next(&gctx->tctxs,\n\t\t\t\t\t    to_destroy);\n\t\t\t\t\ttctx_tree_remove(&gctx->tctxs,\n\t\t\t\t\t    to_destroy);\n\t\t\t\t\tidalloctm(tsd_tsdn(tsd), to_destroy,\n\t\t\t\t\t    NULL, NULL, true, true);\n\t\t\t\t} else {\n\t\t\t\t\tnext = NULL;\n\t\t\t\t}\n\t\t\t} while (next != NULL);\n\t\t}\n\t\tgctx->nlimbo--;\n\t\tif (prof_gctx_should_destroy(gctx)) {\n\t\t\tgctx->nlimbo++;\n\t\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\t\t\tprof_gctx_try_destroy(tsd, tdata, gctx);\n\t\t} else {\n\t\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\t\t}\n\t}\n}\n\ntypedef struct prof_tdata_merge_iter_arg_s prof_tdata_merge_iter_arg_t;\nstruct prof_tdata_merge_iter_arg_s {\n\ttsdn_t *tsdn;\n\tprof_cnt_t *cnt_all;\n};\n\nstatic prof_tdata_t *\nprof_tdata_merge_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,\n    void *opaque) {\n\tprof_tdata_merge_iter_arg_t *arg =\n\t    (prof_tdata_merge_iter_arg_t *)opaque;\n\n\tmalloc_mutex_lock(arg->tsdn, tdata->lock);\n\tif (!tdata->expired) {\n\t\tsize_t tabind;\n\t\tunion {\n\t\t\tprof_tctx_t\t*p;\n\t\t\tvoid\t\t*v;\n\t\t} tctx;\n\n\t\ttdata->dumping = true;\n\t\tmemset(&tdata->cnt_summed, 0, sizeof(prof_cnt_t));\n\t\tfor (tabind = 0; !ckh_iter(&tdata->bt2tctx, &tabind, NULL,\n\t\t    &tctx.v);) {\n\t\t\tprof_tctx_merge_tdata(arg->tsdn, tctx.p, tdata);\n\t\t}\n\n\t\targ->cnt_all->curobjs += tdata->cnt_summed.curobjs;\n\t\targ->cnt_all->curobjs_shifted_unbiased\n\t\t    += 
tdata->cnt_summed.curobjs_shifted_unbiased;\n\t\targ->cnt_all->curbytes += tdata->cnt_summed.curbytes;\n\t\targ->cnt_all->curbytes_unbiased\n\t\t    += tdata->cnt_summed.curbytes_unbiased;\n\t\tif (opt_prof_accum) {\n\t\t\targ->cnt_all->accumobjs += tdata->cnt_summed.accumobjs;\n\t\t\targ->cnt_all->accumobjs_shifted_unbiased\n\t\t\t    += tdata->cnt_summed.accumobjs_shifted_unbiased;\n\t\t\targ->cnt_all->accumbytes +=\n\t\t\t    tdata->cnt_summed.accumbytes;\n\t\t\targ->cnt_all->accumbytes_unbiased +=\n\t\t\t    tdata->cnt_summed.accumbytes_unbiased;\n\t\t}\n\t} else {\n\t\ttdata->dumping = false;\n\t}\n\tmalloc_mutex_unlock(arg->tsdn, tdata->lock);\n\n\treturn NULL;\n}\n\nstatic prof_tdata_t *\nprof_tdata_dump_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,\n    void *opaque) {\n\tif (!tdata->dumping) {\n\t\treturn NULL;\n\t}\n\n\tprof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;\n\tprof_dump_printf(arg->prof_dump_write, arg->cbopaque, \"  t%\"FMTu64\": \",\n\t    tdata->thr_uid);\n\tprof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,\n\t    &tdata->cnt_summed);\n\tif (tdata->thread_name != NULL) {\n\t\targ->prof_dump_write(arg->cbopaque, \" \");\n\t\targ->prof_dump_write(arg->cbopaque, tdata->thread_name);\n\t}\n\targ->prof_dump_write(arg->cbopaque, \"\\n\");\n\treturn NULL;\n}\n\nstatic void\nprof_dump_header(prof_dump_iter_arg_t *arg, const prof_cnt_t *cnt_all) {\n\tprof_dump_printf(arg->prof_dump_write, arg->cbopaque,\n\t    \"heap_v2/%\"FMTu64\"\\n  t*: \", ((uint64_t)1U << lg_prof_sample));\n\tprof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque, cnt_all);\n\targ->prof_dump_write(arg->cbopaque, \"\\n\");\n\n\tmalloc_mutex_lock(arg->tsdn, &tdatas_mtx);\n\ttdata_tree_iter(&tdatas, NULL, prof_tdata_dump_iter, arg);\n\tmalloc_mutex_unlock(arg->tsdn, &tdatas_mtx);\n}\n\nstatic void\nprof_dump_gctx(prof_dump_iter_arg_t *arg, prof_gctx_t *gctx,\n    const prof_bt_t *bt, prof_gctx_tree_t *gctxs) 
{\n\tcassert(config_prof);\n\tmalloc_mutex_assert_owner(arg->tsdn, gctx->lock);\n\n\t/* Avoid dumping gctx's that have no useful data. */\n\tif ((!opt_prof_accum && gctx->cnt_summed.curobjs == 0) ||\n\t    (opt_prof_accum && gctx->cnt_summed.accumobjs == 0)) {\n\t\tassert(gctx->cnt_summed.curobjs == 0);\n\t\tassert(gctx->cnt_summed.curbytes == 0);\n\t\t/*\n\t\t * These asserts would not be correct -- see the comment on races\n\t\t * in prof.c\n\t\t * assert(gctx->cnt_summed.curobjs_unbiased == 0);\n\t\t * assert(gctx->cnt_summed.curbytes_unbiased == 0);\n\t\t */\n\t\tassert(gctx->cnt_summed.accumobjs == 0);\n\t\tassert(gctx->cnt_summed.accumobjs_shifted_unbiased == 0);\n\t\tassert(gctx->cnt_summed.accumbytes == 0);\n\t\tassert(gctx->cnt_summed.accumbytes_unbiased == 0);\n\t\treturn;\n\t}\n\n\targ->prof_dump_write(arg->cbopaque, \"@\");\n\tfor (unsigned i = 0; i < bt->len; i++) {\n\t\tprof_dump_printf(arg->prof_dump_write, arg->cbopaque,\n\t\t    \" %#\"FMTxPTR, (uintptr_t)bt->vec[i]);\n\t}\n\n\targ->prof_dump_write(arg->cbopaque, \"\\n  t*: \");\n\tprof_dump_print_cnts(arg->prof_dump_write, arg->cbopaque,\n\t    &gctx->cnt_summed);\n\targ->prof_dump_write(arg->cbopaque, \"\\n\");\n\n\ttctx_tree_iter(&gctx->tctxs, NULL, prof_tctx_dump_iter, arg);\n}\n\n/*\n * See prof_sample_new_event_wait() comment for why the body of this function\n * is conditionally compiled.\n */\nstatic void\nprof_leakcheck(const prof_cnt_t *cnt_all, size_t leak_ngctx) {\n#ifdef JEMALLOC_PROF\n\t/*\n\t * Scaling is equivalent to AdjustSamples() in jeprof, but the result\n\t * may differ slightly from what jeprof reports, because here we scale\n\t * the summary values, whereas jeprof scales each context individually\n\t * and reports the sums of the scaled values.\n\t */\n\tif (cnt_all->curbytes != 0) {\n\t\tdouble sample_period = (double)((uint64_t)1 << lg_prof_sample);\n\t\tdouble ratio = (((double)cnt_all->curbytes) /\n\t\t    (double)cnt_all->curobjs) / sample_period;\n\t\tdouble 
scale_factor = 1.0 / (1.0 - exp(-ratio));\n\t\tuint64_t curbytes = (uint64_t)round(((double)cnt_all->curbytes)\n\t\t    * scale_factor);\n\t\tuint64_t curobjs = (uint64_t)round(((double)cnt_all->curobjs) *\n\t\t    scale_factor);\n\n\t\tmalloc_printf(\"<jemalloc>: Leak approximation summary: ~%\"FMTu64\n\t\t    \" byte%s, ~%\"FMTu64\" object%s, >= %zu context%s\\n\",\n\t\t    curbytes, (curbytes != 1) ? \"s\" : \"\", curobjs, (curobjs !=\n\t\t    1) ? \"s\" : \"\", leak_ngctx, (leak_ngctx != 1) ? \"s\" : \"\");\n\t\tmalloc_printf(\n\t\t    \"<jemalloc>: Run jeprof on dump output for leak detail\\n\");\n\t\tif (opt_prof_leak_error) {\n\t\t\tmalloc_printf(\n\t\t\t    \"<jemalloc>: Exiting with error code because memory\"\n\t\t\t    \" leaks were detected\\n\");\n\t\t\t/*\n\t\t\t * Use _exit() with underscore to avoid calling atexit()\n\t\t\t * and entering an endless cycle.\n\t\t\t */\n\t\t\t_exit(1);\n\t\t}\n\t}\n#endif\n}\n\nstatic prof_gctx_t *\nprof_gctx_dump_iter(prof_gctx_tree_t *gctxs, prof_gctx_t *gctx, void *opaque) {\n\tprof_dump_iter_arg_t *arg = (prof_dump_iter_arg_t *)opaque;\n\tmalloc_mutex_lock(arg->tsdn, gctx->lock);\n\tprof_dump_gctx(arg, gctx, &gctx->bt, gctxs);\n\tmalloc_mutex_unlock(arg->tsdn, gctx->lock);\n\treturn NULL;\n}\n\nstatic void\nprof_dump_prep(tsd_t *tsd, prof_tdata_t *tdata, prof_cnt_t *cnt_all,\n    size_t *leak_ngctx, prof_gctx_tree_t *gctxs) {\n\tsize_t tabind;\n\tunion {\n\t\tprof_gctx_t\t*p;\n\t\tvoid\t\t*v;\n\t} gctx;\n\n\tprof_enter(tsd, tdata);\n\n\t/*\n\t * Put gctx's in limbo and clear their counters in preparation for\n\t * summing.\n\t */\n\tgctx_tree_new(gctxs);\n\tfor (tabind = 0; !ckh_iter(&bt2gctx, &tabind, NULL, &gctx.v);) {\n\t\tprof_dump_gctx_prep(tsd_tsdn(tsd), gctx.p, gctxs);\n\t}\n\n\t/*\n\t * Iterate over tdatas, and for the non-expired ones snapshot their tctx\n\t * stats and merge them into the associated gctx's.\n\t */\n\tmemset(cnt_all, 0, sizeof(prof_cnt_t));\n\tprof_tdata_merge_iter_arg_t 
prof_tdata_merge_iter_arg = {tsd_tsdn(tsd),\n\t    cnt_all};\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);\n\ttdata_tree_iter(&tdatas, NULL, prof_tdata_merge_iter,\n\t    &prof_tdata_merge_iter_arg);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);\n\n\t/* Merge tctx stats into gctx's. */\n\t*leak_ngctx = 0;\n\tprof_gctx_merge_iter_arg_t prof_gctx_merge_iter_arg = {tsd_tsdn(tsd),\n\t    leak_ngctx};\n\tgctx_tree_iter(gctxs, NULL, prof_gctx_merge_iter,\n\t    &prof_gctx_merge_iter_arg);\n\n\tprof_leave(tsd, tdata);\n}\n\nvoid\nprof_dump_impl(tsd_t *tsd, write_cb_t *prof_dump_write, void *cbopaque,\n    prof_tdata_t *tdata, bool leakcheck) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_dump_mtx);\n\tprof_cnt_t cnt_all;\n\tsize_t leak_ngctx;\n\tprof_gctx_tree_t gctxs;\n\tprof_dump_prep(tsd, tdata, &cnt_all, &leak_ngctx, &gctxs);\n\tprof_dump_iter_arg_t prof_dump_iter_arg = {tsd_tsdn(tsd),\n\t    prof_dump_write, cbopaque};\n\tprof_dump_header(&prof_dump_iter_arg, &cnt_all);\n\tgctx_tree_iter(&gctxs, NULL, prof_gctx_dump_iter, &prof_dump_iter_arg);\n\tprof_gctx_finish(tsd, &gctxs);\n\tif (leakcheck) {\n\t\tprof_leakcheck(&cnt_all, leak_ngctx);\n\t}\n}\n\n/* Used in unit tests. 
*/\nvoid\nprof_cnt_all(prof_cnt_t *cnt_all) {\n\ttsd_t *tsd = tsd_fetch();\n\tprof_tdata_t *tdata = prof_tdata_get(tsd, false);\n\tif (tdata == NULL) {\n\t\tmemset(cnt_all, 0, sizeof(prof_cnt_t));\n\t} else {\n\t\tsize_t leak_ngctx;\n\t\tprof_gctx_tree_t gctxs;\n\t\tprof_dump_prep(tsd, tdata, cnt_all, &leak_ngctx, &gctxs);\n\t\tprof_gctx_finish(tsd, &gctxs);\n\t}\n}\n\nvoid\nprof_bt_hash(const void *key, size_t r_hash[2]) {\n\tprof_bt_t *bt = (prof_bt_t *)key;\n\n\tcassert(config_prof);\n\n\thash(bt->vec, bt->len * sizeof(void *), 0x94122f33U, r_hash);\n}\n\nbool\nprof_bt_keycomp(const void *k1, const void *k2) {\n\tconst prof_bt_t *bt1 = (prof_bt_t *)k1;\n\tconst prof_bt_t *bt2 = (prof_bt_t *)k2;\n\n\tcassert(config_prof);\n\n\tif (bt1->len != bt2->len) {\n\t\treturn false;\n\t}\n\treturn (memcmp(bt1->vec, bt2->vec, bt1->len * sizeof(void *)) == 0);\n}\n\nprof_tdata_t *\nprof_tdata_init_impl(tsd_t *tsd, uint64_t thr_uid, uint64_t thr_discrim,\n    char *thread_name, bool active) {\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t *tdata;\n\n\tcassert(config_prof);\n\n\t/* Initialize an empty cache for this thread. 
*/\n\ttdata = (prof_tdata_t *)iallocztm(tsd_tsdn(tsd), sizeof(prof_tdata_t),\n\t    sz_size2index(sizeof(prof_tdata_t)), false, NULL, true,\n\t    arena_get(TSDN_NULL, 0, true), true);\n\tif (tdata == NULL) {\n\t\treturn NULL;\n\t}\n\n\ttdata->lock = prof_tdata_mutex_choose(thr_uid);\n\ttdata->thr_uid = thr_uid;\n\ttdata->thr_discrim = thr_discrim;\n\ttdata->thread_name = thread_name;\n\ttdata->attached = true;\n\ttdata->expired = false;\n\ttdata->tctx_uid_next = 0;\n\n\tif (ckh_new(tsd, &tdata->bt2tctx, PROF_CKH_MINITEMS, prof_bt_hash,\n\t    prof_bt_keycomp)) {\n\t\tidalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, true);\n\t\treturn NULL;\n\t}\n\n\ttdata->enq = false;\n\ttdata->enq_idump = false;\n\ttdata->enq_gdump = false;\n\n\ttdata->dumping = false;\n\ttdata->active = active;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);\n\ttdata_tree_insert(&tdatas, tdata);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);\n\n\treturn tdata;\n}\n\nstatic bool\nprof_tdata_should_destroy_unlocked(prof_tdata_t *tdata, bool even_if_attached) {\n\tif (tdata->attached && !even_if_attached) {\n\t\treturn false;\n\t}\n\tif (ckh_count(&tdata->bt2tctx) != 0) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nstatic bool\nprof_tdata_should_destroy(tsdn_t *tsdn, prof_tdata_t *tdata,\n    bool even_if_attached) {\n\tmalloc_mutex_assert_owner(tsdn, tdata->lock);\n\n\treturn prof_tdata_should_destroy_unlocked(tdata, even_if_attached);\n}\n\nstatic void\nprof_tdata_destroy_locked(tsd_t *tsd, prof_tdata_t *tdata,\n    bool even_if_attached) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &tdatas_mtx);\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), tdata->lock);\n\n\ttdata_tree_remove(&tdatas, tdata);\n\n\tassert(prof_tdata_should_destroy_unlocked(tdata, even_if_attached));\n\n\tif (tdata->thread_name != NULL) {\n\t\tidalloctm(tsd_tsdn(tsd), tdata->thread_name, NULL, NULL, true,\n\t\t    true);\n\t}\n\tckh_delete(tsd, &tdata->bt2tctx);\n\tidalloctm(tsd_tsdn(tsd), tdata, NULL, NULL, true, 
true);\n}\n\nstatic void\nprof_tdata_destroy(tsd_t *tsd, prof_tdata_t *tdata, bool even_if_attached) {\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);\n\tprof_tdata_destroy_locked(tsd, tdata, even_if_attached);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);\n}\n\nvoid\nprof_tdata_detach(tsd_t *tsd, prof_tdata_t *tdata) {\n\tbool destroy_tdata;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), tdata->lock);\n\tif (tdata->attached) {\n\t\tdestroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd), tdata,\n\t\t    true);\n\t\t/*\n\t\t * Only detach if !destroy_tdata, because detaching would allow\n\t\t * another thread to win the race to destroy tdata.\n\t\t */\n\t\tif (!destroy_tdata) {\n\t\t\ttdata->attached = false;\n\t\t}\n\t\ttsd_prof_tdata_set(tsd, NULL);\n\t} else {\n\t\tdestroy_tdata = false;\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);\n\tif (destroy_tdata) {\n\t\tprof_tdata_destroy(tsd, tdata, true);\n\t}\n}\n\nstatic bool\nprof_tdata_expire(tsdn_t *tsdn, prof_tdata_t *tdata) {\n\tbool destroy_tdata;\n\n\tmalloc_mutex_lock(tsdn, tdata->lock);\n\tif (!tdata->expired) {\n\t\ttdata->expired = true;\n\t\tdestroy_tdata = prof_tdata_should_destroy(tsdn, tdata, false);\n\t} else {\n\t\tdestroy_tdata = false;\n\t}\n\tmalloc_mutex_unlock(tsdn, tdata->lock);\n\n\treturn destroy_tdata;\n}\n\nstatic prof_tdata_t *\nprof_tdata_reset_iter(prof_tdata_tree_t *tdatas_ptr, prof_tdata_t *tdata,\n    void *arg) {\n\ttsdn_t *tsdn = (tsdn_t *)arg;\n\n\treturn (prof_tdata_expire(tsdn, tdata) ? 
tdata : NULL);\n}\n\nvoid\nprof_reset(tsd_t *tsd, size_t lg_sample) {\n\tprof_tdata_t *next;\n\n\tassert(lg_sample < (sizeof(uint64_t) << 3));\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tdatas_mtx);\n\n\tlg_prof_sample = lg_sample;\n\tprof_unbias_map_init();\n\n\tnext = NULL;\n\tdo {\n\t\tprof_tdata_t *to_destroy = tdata_tree_iter(&tdatas, next,\n\t\t    prof_tdata_reset_iter, (void *)tsd);\n\t\tif (to_destroy != NULL) {\n\t\t\tnext = tdata_tree_next(&tdatas, to_destroy);\n\t\t\tprof_tdata_destroy_locked(tsd, to_destroy, false);\n\t\t} else {\n\t\t\tnext = NULL;\n\t\t}\n\t} while (next != NULL);\n\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tdatas_mtx);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx);\n}\n\nstatic bool\nprof_tctx_should_destroy(tsd_t *tsd, prof_tctx_t *tctx) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\n\tif (opt_prof_accum) {\n\t\treturn false;\n\t}\n\tif (tctx->cnts.curobjs != 0) {\n\t\treturn false;\n\t}\n\tif (tctx->prepared) {\n\t\treturn false;\n\t}\n\tif (tctx->recent_count != 0) {\n\t\treturn false;\n\t}\n\treturn true;\n}\n\nstatic void\nprof_tctx_destroy(tsd_t *tsd, prof_tctx_t *tctx) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\n\tassert(tctx->cnts.curobjs == 0);\n\tassert(tctx->cnts.curbytes == 0);\n\t/*\n\t * These asserts are not correct -- see the comment about races in\n\t * prof.c\n\t *\n\t * assert(tctx->cnts.curobjs_shifted_unbiased == 0);\n\t * assert(tctx->cnts.curbytes_unbiased == 0);\n\t */\n\tassert(!opt_prof_accum);\n\tassert(tctx->cnts.accumobjs == 0);\n\tassert(tctx->cnts.accumbytes == 0);\n\t/*\n\t * These ones are, since accumbyte counts never go down.  
Either\n\t * prof_accum is off (in which case these should never have changed from\n\t * their initial value of zero), or it's on (in which case we shouldn't\n\t * be destroying this tctx).\n\t */\n\tassert(tctx->cnts.accumobjs_shifted_unbiased == 0);\n\tassert(tctx->cnts.accumbytes_unbiased == 0);\n\n\tprof_gctx_t *gctx = tctx->gctx;\n\n\t{\n\t\tprof_tdata_t *tdata = tctx->tdata;\n\t\ttctx->tdata = NULL;\n\t\tckh_remove(tsd, &tdata->bt2tctx, &gctx->bt, NULL, NULL);\n\t\tbool destroy_tdata = prof_tdata_should_destroy(tsd_tsdn(tsd),\n\t\t    tdata, false);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), tdata->lock);\n\t\tif (destroy_tdata) {\n\t\t\tprof_tdata_destroy(tsd, tdata, false);\n\t\t}\n\t}\n\n\tbool destroy_tctx, destroy_gctx;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), gctx->lock);\n\tswitch (tctx->state) {\n\tcase prof_tctx_state_nominal:\n\t\ttctx_tree_remove(&gctx->tctxs, tctx);\n\t\tdestroy_tctx = true;\n\t\tif (prof_gctx_should_destroy(gctx)) {\n\t\t\t/*\n\t\t\t * Increment gctx->nlimbo in order to keep another\n\t\t\t * thread from winning the race to destroy gctx while\n\t\t\t * this one has gctx->lock dropped.  Without this, it\n\t\t\t * would be possible for another thread to:\n\t\t\t *\n\t\t\t * 1) Sample an allocation associated with gctx.\n\t\t\t * 2) Deallocate the sampled object.\n\t\t\t * 3) Successfully prof_gctx_try_destroy(gctx).\n\t\t\t *\n\t\t\t * The result would be that gctx no longer exists by the\n\t\t\t * time this thread accesses it in\n\t\t\t * prof_gctx_try_destroy().\n\t\t\t */\n\t\t\tgctx->nlimbo++;\n\t\t\tdestroy_gctx = true;\n\t\t} else {\n\t\t\tdestroy_gctx = false;\n\t\t}\n\t\tbreak;\n\tcase prof_tctx_state_dumping:\n\t\t/*\n\t\t * A dumping thread needs tctx to remain valid until dumping\n\t\t * has finished.  
Change state such that the dumping thread will\n\t\t * complete destruction during a late dump iteration phase.\n\t\t */\n\t\ttctx->state = prof_tctx_state_purgatory;\n\t\tdestroy_tctx = false;\n\t\tdestroy_gctx = false;\n\t\tbreak;\n\tdefault:\n\t\tnot_reached();\n\t\tdestroy_tctx = false;\n\t\tdestroy_gctx = false;\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), gctx->lock);\n\tif (destroy_gctx) {\n\t\tprof_gctx_try_destroy(tsd, prof_tdata_get(tsd, false), gctx);\n\t}\n\tif (destroy_tctx) {\n\t\tidalloctm(tsd_tsdn(tsd), tctx, NULL, NULL, true, true);\n\t}\n}\n\nvoid\nprof_tctx_try_destroy(tsd_t *tsd, prof_tctx_t *tctx) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\tif (prof_tctx_should_destroy(tsd, tctx)) {\n\t\t/* tctx->tdata->lock will be released in prof_tctx_destroy(). */\n\t\tprof_tctx_destroy(tsd, tctx);\n\t} else {\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), tctx->tdata->lock);\n\t}\n}\n\n/******************************************************************************/\n"
  },
  {
    "path": "deps/jemalloc/src/prof_log.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/buf_writer.h\"\n#include \"jemalloc/internal/ckh.h\"\n#include \"jemalloc/internal/emitter.h\"\n#include \"jemalloc/internal/hash.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_log.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n\nbool opt_prof_log = false;\ntypedef enum prof_logging_state_e prof_logging_state_t;\nenum prof_logging_state_e {\n\tprof_logging_state_stopped,\n\tprof_logging_state_started,\n\tprof_logging_state_dumping\n};\n\n/*\n * - stopped: log_start never called, or previous log_stop has completed.\n * - started: log_start called, log_stop not called yet. Allocations are logged.\n * - dumping: log_stop called but not finished; samples are not logged anymore.\n */\nprof_logging_state_t prof_logging_state = prof_logging_state_stopped;\n\n/* Used in unit tests. */\nstatic bool prof_log_dummy = false;\n\n/* Incremented for every log file that is output. */\nstatic uint64_t log_seq = 0;\nstatic char log_filename[\n    /* Minimize memory bloat for non-prof builds. */\n#ifdef JEMALLOC_PROF\n    PATH_MAX +\n#endif\n    1];\n\n/* Timestamp for most recent call to log_start(). */\nstatic nstime_t log_start_timestamp;\n\n/* Increment these when adding to the log_bt and log_thr linked lists. */\nstatic size_t log_bt_index = 0;\nstatic size_t log_thr_index = 0;\n\n/* Linked list node definitions. These are only used in this file. */\ntypedef struct prof_bt_node_s prof_bt_node_t;\n\nstruct prof_bt_node_s {\n\tprof_bt_node_t *next;\n\tsize_t index;\n\tprof_bt_t bt;\n\t/* Variable size backtrace vector pointed to by bt. 
*/\n\tvoid *vec[1];\n};\n\ntypedef struct prof_thr_node_s prof_thr_node_t;\n\nstruct prof_thr_node_s {\n\tprof_thr_node_t *next;\n\tsize_t index;\n\tuint64_t thr_uid;\n\t/* Variable size based on thr_name_sz. */\n\tchar name[1];\n};\n\ntypedef struct prof_alloc_node_s prof_alloc_node_t;\n\n/* This is output when logging sampled allocations. */\nstruct prof_alloc_node_s {\n\tprof_alloc_node_t *next;\n\t/* Indices into an array of thread data. */\n\tsize_t alloc_thr_ind;\n\tsize_t free_thr_ind;\n\n\t/* Indices into an array of backtraces. */\n\tsize_t alloc_bt_ind;\n\tsize_t free_bt_ind;\n\n\tuint64_t alloc_time_ns;\n\tuint64_t free_time_ns;\n\n\tsize_t usize;\n};\n\n/*\n * Created on the first call to prof_try_log and deleted on prof_log_stop.\n * These are the backtraces and threads that have already been logged by an\n * allocation.\n */\nstatic bool log_tables_initialized = false;\nstatic ckh_t log_bt_node_set;\nstatic ckh_t log_thr_node_set;\n\n/* Store linked lists for logged data. */\nstatic prof_bt_node_t *log_bt_first = NULL;\nstatic prof_bt_node_t *log_bt_last = NULL;\nstatic prof_thr_node_t *log_thr_first = NULL;\nstatic prof_thr_node_t *log_thr_last = NULL;\nstatic prof_alloc_node_t *log_alloc_first = NULL;\nstatic prof_alloc_node_t *log_alloc_last = NULL;\n\n/* Protects the prof_logging_state and any log_{...} variable. */\nmalloc_mutex_t log_mtx;\n\n/******************************************************************************/\n/*\n * Function prototypes for static functions that are referenced prior to\n * definition.\n */\n\n/* Hashtable functions for log_bt_node_set and log_thr_node_set. 
*/\nstatic void prof_thr_node_hash(const void *key, size_t r_hash[2]);\nstatic bool prof_thr_node_keycomp(const void *k1, const void *k2);\nstatic void prof_bt_node_hash(const void *key, size_t r_hash[2]);\nstatic bool prof_bt_node_keycomp(const void *k1, const void *k2);\n\n/******************************************************************************/\n\nstatic size_t\nprof_log_bt_index(tsd_t *tsd, prof_bt_t *bt) {\n\tassert(prof_logging_state == prof_logging_state_started);\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &log_mtx);\n\n\tprof_bt_node_t dummy_node;\n\tdummy_node.bt = *bt;\n\tprof_bt_node_t *node;\n\n\t/* See if this backtrace is already cached in the table. */\n\tif (ckh_search(&log_bt_node_set, (void *)(&dummy_node),\n\t    (void **)(&node), NULL)) {\n\t\tsize_t sz = offsetof(prof_bt_node_t, vec) +\n\t\t\t        (bt->len * sizeof(void *));\n\t\tprof_bt_node_t *new_node = (prof_bt_node_t *)\n\t\t    iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL,\n\t\t    true, arena_get(TSDN_NULL, 0, true), true);\n\t\tif (log_bt_first == NULL) {\n\t\t\tlog_bt_first = new_node;\n\t\t\tlog_bt_last = new_node;\n\t\t} else {\n\t\t\tlog_bt_last->next = new_node;\n\t\t\tlog_bt_last = new_node;\n\t\t}\n\n\t\tnew_node->next = NULL;\n\t\tnew_node->index = log_bt_index;\n\t\t/*\n\t\t * Copy the backtrace: bt is inside a tdata or gctx, which\n\t\t * might die before prof_log_stop is called.\n\t\t */\n\t\tnew_node->bt.len = bt->len;\n\t\tmemcpy(new_node->vec, bt->vec, bt->len * sizeof(void *));\n\t\tnew_node->bt.vec = new_node->vec;\n\n\t\tlog_bt_index++;\n\t\tckh_insert(tsd, &log_bt_node_set, (void *)new_node, NULL);\n\t\treturn new_node->index;\n\t} else {\n\t\treturn node->index;\n\t}\n}\n\nstatic size_t\nprof_log_thr_index(tsd_t *tsd, uint64_t thr_uid, const char *name) {\n\tassert(prof_logging_state == prof_logging_state_started);\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &log_mtx);\n\n\tprof_thr_node_t dummy_node;\n\tdummy_node.thr_uid = 
thr_uid;\n\tprof_thr_node_t *node;\n\n\t/* See if this thread is already cached in the table. */\n\tif (ckh_search(&log_thr_node_set, (void *)(&dummy_node),\n\t    (void **)(&node), NULL)) {\n\t\tsize_t sz = offsetof(prof_thr_node_t, name) + strlen(name) + 1;\n\t\tprof_thr_node_t *new_node = (prof_thr_node_t *)\n\t\t    iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL,\n\t\t    true, arena_get(TSDN_NULL, 0, true), true);\n\t\tif (log_thr_first == NULL) {\n\t\t\tlog_thr_first = new_node;\n\t\t\tlog_thr_last = new_node;\n\t\t} else {\n\t\t\tlog_thr_last->next = new_node;\n\t\t\tlog_thr_last = new_node;\n\t\t}\n\n\t\tnew_node->next = NULL;\n\t\tnew_node->index = log_thr_index;\n\t\tnew_node->thr_uid = thr_uid;\n\t\tstrcpy(new_node->name, name);\n\n\t\tlog_thr_index++;\n\t\tckh_insert(tsd, &log_thr_node_set, (void *)new_node, NULL);\n\t\treturn new_node->index;\n\t} else {\n\t\treturn node->index;\n\t}\n}\n\nJEMALLOC_COLD\nvoid\nprof_try_log(tsd_t *tsd, size_t usize, prof_info_t *prof_info) {\n\tcassert(config_prof);\n\tprof_tctx_t *tctx = prof_info->alloc_tctx;\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\n\tprof_tdata_t *cons_tdata = prof_tdata_get(tsd, false);\n\tif (cons_tdata == NULL) {\n\t\t/*\n\t\t * We decide not to log these allocations. 
cons_tdata will be\n\t\t * NULL only when the current thread is in a weird state (e.g.\n\t\t * it's being destroyed).\n\t\t */\n\t\treturn;\n\t}\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &log_mtx);\n\n\tif (prof_logging_state != prof_logging_state_started) {\n\t\tgoto label_done;\n\t}\n\n\tif (!log_tables_initialized) {\n\t\tbool err1 = ckh_new(tsd, &log_bt_node_set, PROF_CKH_MINITEMS,\n\t\t\t\tprof_bt_node_hash, prof_bt_node_keycomp);\n\t\tbool err2 = ckh_new(tsd, &log_thr_node_set, PROF_CKH_MINITEMS,\n\t\t\t\tprof_thr_node_hash, prof_thr_node_keycomp);\n\t\tif (err1 || err2) {\n\t\t\tgoto label_done;\n\t\t}\n\t\tlog_tables_initialized = true;\n\t}\n\n\tnstime_t alloc_time = prof_info->alloc_time;\n\tnstime_t free_time;\n\tnstime_prof_init_update(&free_time);\n\n\tsize_t sz = sizeof(prof_alloc_node_t);\n\tprof_alloc_node_t *new_node = (prof_alloc_node_t *)\n\t    iallocztm(tsd_tsdn(tsd), sz, sz_size2index(sz), false, NULL, true,\n\t    arena_get(TSDN_NULL, 0, true), true);\n\n\tconst char *prod_thr_name = (tctx->tdata->thread_name == NULL)?\n\t\t\t\t        \"\" : tctx->tdata->thread_name;\n\tconst char *cons_thr_name = prof_thread_name_get(tsd);\n\n\tprof_bt_t bt;\n\t/* Initialize the backtrace, using the buffer in tdata to store it. */\n\tbt_init(&bt, cons_tdata->vec);\n\tprof_backtrace(tsd, &bt);\n\tprof_bt_t *cons_bt = &bt;\n\n\t/* We haven't destroyed tctx yet, so gctx should be good to read. 
*/\n\tprof_bt_t *prod_bt = &tctx->gctx->bt;\n\n\tnew_node->next = NULL;\n\tnew_node->alloc_thr_ind = prof_log_thr_index(tsd, tctx->tdata->thr_uid,\n\t\t\t\t      prod_thr_name);\n\tnew_node->free_thr_ind = prof_log_thr_index(tsd, cons_tdata->thr_uid,\n\t\t\t\t     cons_thr_name);\n\tnew_node->alloc_bt_ind = prof_log_bt_index(tsd, prod_bt);\n\tnew_node->free_bt_ind = prof_log_bt_index(tsd, cons_bt);\n\tnew_node->alloc_time_ns = nstime_ns(&alloc_time);\n\tnew_node->free_time_ns = nstime_ns(&free_time);\n\tnew_node->usize = usize;\n\n\tif (log_alloc_first == NULL) {\n\t\tlog_alloc_first = new_node;\n\t\tlog_alloc_last = new_node;\n\t} else {\n\t\tlog_alloc_last->next = new_node;\n\t\tlog_alloc_last = new_node;\n\t}\n\nlabel_done:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &log_mtx);\n}\n\nstatic void\nprof_bt_node_hash(const void *key, size_t r_hash[2]) {\n\tconst prof_bt_node_t *bt_node = (prof_bt_node_t *)key;\n\tprof_bt_hash((void *)(&bt_node->bt), r_hash);\n}\n\nstatic bool\nprof_bt_node_keycomp(const void *k1, const void *k2) {\n\tconst prof_bt_node_t *bt_node1 = (prof_bt_node_t *)k1;\n\tconst prof_bt_node_t *bt_node2 = (prof_bt_node_t *)k2;\n\treturn prof_bt_keycomp((void *)(&bt_node1->bt),\n\t    (void *)(&bt_node2->bt));\n}\n\nstatic void\nprof_thr_node_hash(const void *key, size_t r_hash[2]) {\n\tconst prof_thr_node_t *thr_node = (prof_thr_node_t *)key;\n\thash(&thr_node->thr_uid, sizeof(uint64_t), 0x94122f35U, r_hash);\n}\n\nstatic bool\nprof_thr_node_keycomp(const void *k1, const void *k2) {\n\tconst prof_thr_node_t *thr_node1 = (prof_thr_node_t *)k1;\n\tconst prof_thr_node_t *thr_node2 = (prof_thr_node_t *)k2;\n\treturn thr_node1->thr_uid == thr_node2->thr_uid;\n}\n\n/* Used in unit tests. */\nsize_t\nprof_log_bt_count(void) {\n\tcassert(config_prof);\n\tsize_t cnt = 0;\n\tprof_bt_node_t *node = log_bt_first;\n\twhile (node != NULL) {\n\t\tcnt++;\n\t\tnode = node->next;\n\t}\n\treturn cnt;\n}\n\n/* Used in unit tests. 
*/\nsize_t\nprof_log_alloc_count(void) {\n\tcassert(config_prof);\n\tsize_t cnt = 0;\n\tprof_alloc_node_t *node = log_alloc_first;\n\twhile (node != NULL) {\n\t\tcnt++;\n\t\tnode = node->next;\n\t}\n\treturn cnt;\n}\n\n/* Used in unit tests. */\nsize_t\nprof_log_thr_count(void) {\n\tcassert(config_prof);\n\tsize_t cnt = 0;\n\tprof_thr_node_t *node = log_thr_first;\n\twhile (node != NULL) {\n\t\tcnt++;\n\t\tnode = node->next;\n\t}\n\treturn cnt;\n}\n\n/* Used in unit tests. */\nbool\nprof_log_is_logging(void) {\n\tcassert(config_prof);\n\treturn prof_logging_state == prof_logging_state_started;\n}\n\n/* Used in unit tests. */\nbool\nprof_log_rep_check(void) {\n\tcassert(config_prof);\n\tif (prof_logging_state == prof_logging_state_stopped\n\t    && log_tables_initialized) {\n\t\treturn true;\n\t}\n\n\tif (log_bt_last != NULL && log_bt_last->next != NULL) {\n\t\treturn true;\n\t}\n\tif (log_thr_last != NULL && log_thr_last->next != NULL) {\n\t\treturn true;\n\t}\n\tif (log_alloc_last != NULL && log_alloc_last->next != NULL) {\n\t\treturn true;\n\t}\n\n\tsize_t bt_count = prof_log_bt_count();\n\tsize_t thr_count = prof_log_thr_count();\n\tsize_t alloc_count = prof_log_alloc_count();\n\n\tif (prof_logging_state == prof_logging_state_stopped) {\n\t\tif (bt_count != 0 || thr_count != 0 || alloc_count != 0) {\n\t\t\treturn true;\n\t\t}\n\t}\n\n\tprof_alloc_node_t *node = log_alloc_first;\n\twhile (node != NULL) {\n\t\tif (node->alloc_bt_ind >= bt_count) {\n\t\t\treturn true;\n\t\t}\n\t\tif (node->free_bt_ind >= bt_count) {\n\t\t\treturn true;\n\t\t}\n\t\tif (node->alloc_thr_ind >= thr_count) {\n\t\t\treturn true;\n\t\t}\n\t\tif (node->free_thr_ind >= thr_count) {\n\t\t\treturn true;\n\t\t}\n\t\tif (node->alloc_time_ns > node->free_time_ns) {\n\t\t\treturn true;\n\t\t}\n\t\tnode = node->next;\n\t}\n\n\treturn false;\n}\n\n/* Used in unit tests. 
*/\nvoid\nprof_log_dummy_set(bool new_value) {\n\tcassert(config_prof);\n\tprof_log_dummy = new_value;\n}\n\n/* Used as an atexit function to stop logging on exit. */\nstatic void\nprof_log_stop_final(void) {\n\ttsd_t *tsd = tsd_fetch();\n\tprof_log_stop(tsd_tsdn(tsd));\n}\n\nJEMALLOC_COLD\nbool\nprof_log_start(tsdn_t *tsdn, const char *filename) {\n\tcassert(config_prof);\n\n\tif (!opt_prof) {\n\t\treturn true;\n\t}\n\n\tbool ret = false;\n\n\tmalloc_mutex_lock(tsdn, &log_mtx);\n\n\tstatic bool prof_log_atexit_called = false;\n\tif (!prof_log_atexit_called) {\n\t\tprof_log_atexit_called = true;\n\t\tif (atexit(prof_log_stop_final) != 0) {\n\t\t\tmalloc_write(\"<jemalloc>: Error in atexit() \"\n\t\t\t    \"for logging\\n\");\n\t\t\tif (opt_abort) {\n\t\t\t\tabort();\n\t\t\t}\n\t\t\tret = true;\n\t\t\tgoto label_done;\n\t\t}\n\t}\n\n\tif (prof_logging_state != prof_logging_state_stopped) {\n\t\tret = true;\n\t} else if (filename == NULL) {\n\t\t/* Make default name. */\n\t\tprof_get_default_filename(tsdn, log_filename, log_seq);\n\t\tlog_seq++;\n\t\tprof_logging_state = prof_logging_state_started;\n\t} else if (strlen(filename) >= PROF_DUMP_FILENAME_LEN) {\n\t\tret = true;\n\t} else {\n\t\tstrcpy(log_filename, filename);\n\t\tprof_logging_state = prof_logging_state_started;\n\t}\n\n\tif (!ret) {\n\t\tnstime_prof_init_update(&log_start_timestamp);\n\t}\nlabel_done:\n\tmalloc_mutex_unlock(tsdn, &log_mtx);\n\n\treturn ret;\n}\n\nstruct prof_emitter_cb_arg_s {\n\tint fd;\n\tssize_t ret;\n};\n\nstatic void\nprof_emitter_write_cb(void *opaque, const char *to_write) {\n\tstruct prof_emitter_cb_arg_s *arg =\n\t    (struct prof_emitter_cb_arg_s *)opaque;\n\tsize_t bytes = strlen(to_write);\n\tif (prof_log_dummy) {\n\t\treturn;\n\t}\n\targ->ret = malloc_write_fd(arg->fd, to_write, bytes);\n}\n\n/*\n * prof_log_emit_{...} goes through the appropriate linked list, emitting each\n * node to the json and deallocating it.\n */\nstatic void\nprof_log_emit_threads(tsd_t *tsd, 
emitter_t *emitter) {\n\temitter_json_array_kv_begin(emitter, \"threads\");\n\tprof_thr_node_t *thr_node = log_thr_first;\n\tprof_thr_node_t *thr_old_node;\n\twhile (thr_node != NULL) {\n\t\temitter_json_object_begin(emitter);\n\n\t\temitter_json_kv(emitter, \"thr_uid\", emitter_type_uint64,\n\t\t    &thr_node->thr_uid);\n\n\t\tchar *thr_name = thr_node->name;\n\n\t\temitter_json_kv(emitter, \"thr_name\", emitter_type_string,\n\t\t    &thr_name);\n\n\t\temitter_json_object_end(emitter);\n\t\tthr_old_node = thr_node;\n\t\tthr_node = thr_node->next;\n\t\tidalloctm(tsd_tsdn(tsd), thr_old_node, NULL, NULL, true, true);\n\t}\n\temitter_json_array_end(emitter);\n}\n\nstatic void\nprof_log_emit_traces(tsd_t *tsd, emitter_t *emitter) {\n\temitter_json_array_kv_begin(emitter, \"stack_traces\");\n\tprof_bt_node_t *bt_node = log_bt_first;\n\tprof_bt_node_t *bt_old_node;\n\t/*\n\t * Calculate how many hex digits we need: twice the number of bytes,\n\t * two for \"0x\", and then one more for terminating '\\0'.\n\t */\n\tchar buf[2 * sizeof(intptr_t) + 3];\n\tsize_t buf_sz = sizeof(buf);\n\twhile (bt_node != NULL) {\n\t\temitter_json_array_begin(emitter);\n\t\tsize_t i;\n\t\tfor (i = 0; i < bt_node->bt.len; i++) {\n\t\t\tmalloc_snprintf(buf, buf_sz, \"%p\", bt_node->bt.vec[i]);\n\t\t\tchar *trace_str = buf;\n\t\t\temitter_json_value(emitter, emitter_type_string,\n\t\t\t    &trace_str);\n\t\t}\n\t\temitter_json_array_end(emitter);\n\n\t\tbt_old_node = bt_node;\n\t\tbt_node = bt_node->next;\n\t\tidalloctm(tsd_tsdn(tsd), bt_old_node, NULL, NULL, true, true);\n\t}\n\temitter_json_array_end(emitter);\n}\n\nstatic void\nprof_log_emit_allocs(tsd_t *tsd, emitter_t *emitter) {\n\temitter_json_array_kv_begin(emitter, \"allocations\");\n\tprof_alloc_node_t *alloc_node = log_alloc_first;\n\tprof_alloc_node_t *alloc_old_node;\n\twhile (alloc_node != NULL) {\n\t\temitter_json_object_begin(emitter);\n\n\t\temitter_json_kv(emitter, \"alloc_thread\", emitter_type_size,\n\t\t    
&alloc_node->alloc_thr_ind);\n\n\t\temitter_json_kv(emitter, \"free_thread\", emitter_type_size,\n\t\t    &alloc_node->free_thr_ind);\n\n\t\temitter_json_kv(emitter, \"alloc_trace\", emitter_type_size,\n\t\t    &alloc_node->alloc_bt_ind);\n\n\t\temitter_json_kv(emitter, \"free_trace\", emitter_type_size,\n\t\t    &alloc_node->free_bt_ind);\n\n\t\temitter_json_kv(emitter, \"alloc_timestamp\",\n\t\t    emitter_type_uint64, &alloc_node->alloc_time_ns);\n\n\t\temitter_json_kv(emitter, \"free_timestamp\", emitter_type_uint64,\n\t\t    &alloc_node->free_time_ns);\n\n\t\temitter_json_kv(emitter, \"usize\", emitter_type_uint64,\n\t\t    &alloc_node->usize);\n\n\t\temitter_json_object_end(emitter);\n\n\t\talloc_old_node = alloc_node;\n\t\talloc_node = alloc_node->next;\n\t\tidalloctm(tsd_tsdn(tsd), alloc_old_node, NULL, NULL, true,\n\t\t    true);\n\t}\n\temitter_json_array_end(emitter);\n}\n\nstatic void\nprof_log_emit_metadata(emitter_t *emitter) {\n\temitter_json_object_kv_begin(emitter, \"info\");\n\n\tnstime_t now;\n\n\tnstime_prof_init_update(&now);\n\tuint64_t ns = nstime_ns(&now) - nstime_ns(&log_start_timestamp);\n\temitter_json_kv(emitter, \"duration\", emitter_type_uint64, &ns);\n\n\tchar *vers = JEMALLOC_VERSION;\n\temitter_json_kv(emitter, \"version\",\n\t    emitter_type_string, &vers);\n\n\temitter_json_kv(emitter, \"lg_sample_rate\",\n\t    emitter_type_int, &lg_prof_sample);\n\n\tconst char *res_type = prof_time_res_mode_names[opt_prof_time_res];\n\temitter_json_kv(emitter, \"prof_time_resolution\", emitter_type_string,\n\t    &res_type);\n\n\tint pid = prof_getpid();\n\temitter_json_kv(emitter, \"pid\", emitter_type_int, &pid);\n\n\temitter_json_object_end(emitter);\n}\n\n#define PROF_LOG_STOP_BUFSIZE PROF_DUMP_BUFSIZE\nJEMALLOC_COLD\nbool\nprof_log_stop(tsdn_t *tsdn) {\n\tcassert(config_prof);\n\tif (!opt_prof || !prof_booted) {\n\t\treturn true;\n\t}\n\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\tmalloc_mutex_lock(tsdn, &log_mtx);\n\n\tif (prof_logging_state != 
prof_logging_state_started) {\n\t\tmalloc_mutex_unlock(tsdn, &log_mtx);\n\t\treturn true;\n\t}\n\n\t/*\n\t * Set the state to dumping. We'll set it to stopped when we're done.\n\t * Since other threads won't be able to start/stop/log when the state is\n\t * dumping, we don't have to hold the lock during the whole method.\n\t */\n\tprof_logging_state = prof_logging_state_dumping;\n\tmalloc_mutex_unlock(tsdn, &log_mtx);\n\n\n\temitter_t emitter;\n\n\t/* Create a file. */\n\n\tint fd;\n\tif (prof_log_dummy) {\n\t\tfd = 0;\n\t} else {\n\t\tfd = creat(log_filename, 0644);\n\t}\n\n\tif (fd == -1) {\n\t\tmalloc_printf(\"<jemalloc>: creat() for log file \\\"%s\\\"\"\n\t\t\t      \" failed with %d\\n\", log_filename, errno);\n\t\tif (opt_abort) {\n\t\t\tabort();\n\t\t}\n\t\treturn true;\n\t}\n\n\tstruct prof_emitter_cb_arg_s arg;\n\targ.fd = fd;\n\n\tbuf_writer_t buf_writer;\n\tbuf_writer_init(tsdn, &buf_writer, prof_emitter_write_cb, &arg, NULL,\n\t    PROF_LOG_STOP_BUFSIZE);\n\temitter_init(&emitter, emitter_output_json_compact, buf_writer_cb,\n\t    &buf_writer);\n\n\temitter_begin(&emitter);\n\tprof_log_emit_metadata(&emitter);\n\tprof_log_emit_threads(tsd, &emitter);\n\tprof_log_emit_traces(tsd, &emitter);\n\tprof_log_emit_allocs(tsd, &emitter);\n\temitter_end(&emitter);\n\n\tbuf_writer_terminate(tsdn, &buf_writer);\n\n\t/* Reset global state. 
*/\n\tif (log_tables_initialized) {\n\t\tckh_delete(tsd, &log_bt_node_set);\n\t\tckh_delete(tsd, &log_thr_node_set);\n\t}\n\tlog_tables_initialized = false;\n\tlog_bt_index = 0;\n\tlog_thr_index = 0;\n\tlog_bt_first = NULL;\n\tlog_bt_last = NULL;\n\tlog_thr_first = NULL;\n\tlog_thr_last = NULL;\n\tlog_alloc_first = NULL;\n\tlog_alloc_last = NULL;\n\n\tmalloc_mutex_lock(tsdn, &log_mtx);\n\tprof_logging_state = prof_logging_state_stopped;\n\tmalloc_mutex_unlock(tsdn, &log_mtx);\n\n\tif (prof_log_dummy) {\n\t\treturn false;\n\t}\n\treturn close(fd) || arg.ret == -1;\n}\n#undef PROF_LOG_STOP_BUFSIZE\n\nJEMALLOC_COLD\nbool\nprof_log_init(tsd_t *tsd) {\n\tcassert(config_prof);\n\tif (malloc_mutex_init(&log_mtx, \"prof_log\",\n\t    WITNESS_RANK_PROF_LOG, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tif (opt_prof_log) {\n\t\tprof_log_start(tsd_tsdn(tsd), NULL);\n\t}\n\n\treturn false;\n}\n\n/******************************************************************************/\n"
  },
  {
    "path": "deps/jemalloc/src/prof_recent.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/buf_writer.h\"\n#include \"jemalloc/internal/emitter.h\"\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_recent.h\"\n\nssize_t opt_prof_recent_alloc_max = PROF_RECENT_ALLOC_MAX_DEFAULT;\nmalloc_mutex_t prof_recent_alloc_mtx; /* Protects the fields below */\nstatic atomic_zd_t prof_recent_alloc_max;\nstatic ssize_t prof_recent_alloc_count = 0;\nprof_recent_list_t prof_recent_alloc_list;\n\nmalloc_mutex_t prof_recent_dump_mtx; /* Protects dumping. */\n\nstatic void\nprof_recent_alloc_max_init() {\n\tatomic_store_zd(&prof_recent_alloc_max, opt_prof_recent_alloc_max,\n\t    ATOMIC_RELAXED);\n}\n\nstatic inline ssize_t\nprof_recent_alloc_max_get_no_lock() {\n\treturn atomic_load_zd(&prof_recent_alloc_max, ATOMIC_RELAXED);\n}\n\nstatic inline ssize_t\nprof_recent_alloc_max_get(tsd_t *tsd) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\treturn prof_recent_alloc_max_get_no_lock();\n}\n\nstatic inline ssize_t\nprof_recent_alloc_max_update(tsd_t *tsd, ssize_t max) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tssize_t old_max = prof_recent_alloc_max_get(tsd);\n\tatomic_store_zd(&prof_recent_alloc_max, max, ATOMIC_RELAXED);\n\treturn old_max;\n}\n\nstatic prof_recent_t *\nprof_recent_allocate_node(tsdn_t *tsdn) {\n\treturn (prof_recent_t *)iallocztm(tsdn, sizeof(prof_recent_t),\n\t    sz_size2index(sizeof(prof_recent_t)), false, NULL, true,\n\t    arena_get(tsdn, 0, false), true);\n}\n\nstatic void\nprof_recent_free_node(tsdn_t *tsdn, prof_recent_t *node) {\n\tassert(node != NULL);\n\tassert(isalloc(tsdn, node) == sz_s2u(sizeof(prof_recent_t)));\n\tidalloctm(tsdn, node, NULL, NULL, true, true);\n}\n\nstatic inline void\nincrement_recent_count(tsd_t *tsd, prof_tctx_t *tctx) 
{\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\t++tctx->recent_count;\n\tassert(tctx->recent_count > 0);\n}\n\nbool\nprof_recent_alloc_prepare(tsd_t *tsd, prof_tctx_t *tctx) {\n\tcassert(config_prof);\n\tassert(opt_prof && prof_booted);\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\n\t/*\n\t * Check whether last-N mode is turned on without trying to acquire the\n\t * lock, so as to optimize for the following two scenarios:\n\t * (1) Last-N mode is switched off;\n\t * (2) Dumping, during which last-N mode is temporarily turned off so\n\t *     as not to block sampled allocations.\n\t */\n\tif (prof_recent_alloc_max_get_no_lock() == 0) {\n\t\treturn false;\n\t}\n\n\t/*\n\t * Increment recent_count to hold the tctx so that it won't be gone\n\t * even after tctx->tdata->lock is released.  This acts as a\n\t * \"placeholder\"; the real recording of the allocation requires a lock\n\t * on prof_recent_alloc_mtx and is done in prof_recent_alloc (when\n\t * tctx->tdata->lock has been released).\n\t */\n\tincrement_recent_count(tsd, tctx);\n\treturn true;\n}\n\nstatic void\ndecrement_recent_count(tsd_t *tsd, prof_tctx_t *tctx) {\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tassert(tctx != NULL);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), tctx->tdata->lock);\n\tassert(tctx->recent_count > 0);\n\t--tctx->recent_count;\n\tprof_tctx_try_destroy(tsd, tctx);\n}\n\nstatic inline edata_t *\nprof_recent_alloc_edata_get_no_lock(const prof_recent_t *n) {\n\treturn (edata_t *)atomic_load_p(&n->alloc_edata, ATOMIC_ACQUIRE);\n}\n\nedata_t *\nprof_recent_alloc_edata_get_no_lock_test(const prof_recent_t *n) {\n\tcassert(config_prof);\n\treturn prof_recent_alloc_edata_get_no_lock(n);\n}\n\nstatic inline edata_t *\nprof_recent_alloc_edata_get(tsd_t *tsd, const prof_recent_t *n) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\treturn 
prof_recent_alloc_edata_get_no_lock(n);\n}\n\nstatic void\nprof_recent_alloc_edata_set(tsd_t *tsd, prof_recent_t *n, edata_t *edata) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tatomic_store_p(&n->alloc_edata, edata, ATOMIC_RELEASE);\n}\n\nvoid\nedata_prof_recent_alloc_init(edata_t *edata) {\n\tcassert(config_prof);\n\tedata_prof_recent_alloc_set_dont_call_directly(edata, NULL);\n}\n\nstatic inline prof_recent_t *\nedata_prof_recent_alloc_get_no_lock(const edata_t *edata) {\n\tcassert(config_prof);\n\treturn edata_prof_recent_alloc_get_dont_call_directly(edata);\n}\n\nprof_recent_t *\nedata_prof_recent_alloc_get_no_lock_test(const edata_t *edata) {\n\tcassert(config_prof);\n\treturn edata_prof_recent_alloc_get_no_lock(edata);\n}\n\nstatic inline prof_recent_t *\nedata_prof_recent_alloc_get(tsd_t *tsd, const edata_t *edata) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tprof_recent_t *recent_alloc =\n\t    edata_prof_recent_alloc_get_no_lock(edata);\n\tassert(recent_alloc == NULL ||\n\t    prof_recent_alloc_edata_get(tsd, recent_alloc) == edata);\n\treturn recent_alloc;\n}\n\nstatic prof_recent_t *\nedata_prof_recent_alloc_update_internal(tsd_t *tsd, edata_t *edata,\n    prof_recent_t *recent_alloc) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tprof_recent_t *old_recent_alloc =\n\t    edata_prof_recent_alloc_get(tsd, edata);\n\tedata_prof_recent_alloc_set_dont_call_directly(edata, recent_alloc);\n\treturn old_recent_alloc;\n}\n\nstatic void\nedata_prof_recent_alloc_set(tsd_t *tsd, edata_t *edata,\n    prof_recent_t *recent_alloc) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tassert(recent_alloc != NULL);\n\tprof_recent_t *old_recent_alloc =\n\t    edata_prof_recent_alloc_update_internal(tsd, edata, recent_alloc);\n\tassert(old_recent_alloc == NULL);\n\tprof_recent_alloc_edata_set(tsd, recent_alloc, edata);\n}\n\nstatic 
void\nedata_prof_recent_alloc_reset(tsd_t *tsd, edata_t *edata,\n    prof_recent_t *recent_alloc) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tassert(recent_alloc != NULL);\n\tprof_recent_t *old_recent_alloc =\n\t    edata_prof_recent_alloc_update_internal(tsd, edata, NULL);\n\tassert(old_recent_alloc == recent_alloc);\n\tassert(edata == prof_recent_alloc_edata_get(tsd, recent_alloc));\n\tprof_recent_alloc_edata_set(tsd, recent_alloc, NULL);\n}\n\n/*\n * This function should be called right before an allocation is released, so\n * that the associated recent allocation record can contain the following\n * information:\n * (1) The allocation is released;\n * (2) The time of the deallocation; and\n * (3) The prof_tctx associated with the deallocation.\n */\nvoid\nprof_recent_alloc_reset(tsd_t *tsd, edata_t *edata) {\n\tcassert(config_prof);\n\t/*\n\t * Check whether the recent allocation record still exists without\n\t * trying to acquire the lock.\n\t */\n\tif (edata_prof_recent_alloc_get_no_lock(edata) == NULL) {\n\t\treturn;\n\t}\n\n\tprof_tctx_t *dalloc_tctx = prof_tctx_create(tsd);\n\t/*\n\t * In case dalloc_tctx is NULL, e.g. due to OOM, we will not record the\n\t * deallocation time / tctx, which is handled later, after we check\n\t * again when holding the lock.\n\t */\n\n\tif (dalloc_tctx != NULL) {\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), dalloc_tctx->tdata->lock);\n\t\tincrement_recent_count(tsd, dalloc_tctx);\n\t\tdalloc_tctx->prepared = false;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), dalloc_tctx->tdata->lock);\n\t}\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\t/* Check again after acquiring the lock.  
*/\n\tprof_recent_t *recent = edata_prof_recent_alloc_get(tsd, edata);\n\tif (recent != NULL) {\n\t\tassert(nstime_equals_zero(&recent->dalloc_time));\n\t\tassert(recent->dalloc_tctx == NULL);\n\t\tif (dalloc_tctx != NULL) {\n\t\t\tnstime_prof_update(&recent->dalloc_time);\n\t\t\trecent->dalloc_tctx = dalloc_tctx;\n\t\t\tdalloc_tctx = NULL;\n\t\t}\n\t\tedata_prof_recent_alloc_reset(tsd, edata, recent);\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\n\tif (dalloc_tctx != NULL) {\n\t\t/* We lost the race - the allocation record was just gone. */\n\t\tdecrement_recent_count(tsd, dalloc_tctx);\n\t}\n}\n\nstatic void\nprof_recent_alloc_evict_edata(tsd_t *tsd, prof_recent_t *recent_alloc) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tedata_t *edata = prof_recent_alloc_edata_get(tsd, recent_alloc);\n\tif (edata != NULL) {\n\t\tedata_prof_recent_alloc_reset(tsd, edata, recent_alloc);\n\t}\n}\n\nstatic bool\nprof_recent_alloc_is_empty(tsd_t *tsd) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tif (ql_empty(&prof_recent_alloc_list)) {\n\t\tassert(prof_recent_alloc_count == 0);\n\t\treturn true;\n\t} else {\n\t\tassert(prof_recent_alloc_count > 0);\n\t\treturn false;\n\t}\n}\n\nstatic void\nprof_recent_alloc_assert_count(tsd_t *tsd) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\tssize_t count = 0;\n\tprof_recent_t *n;\n\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t++count;\n\t}\n\tassert(count == prof_recent_alloc_count);\n\tassert(prof_recent_alloc_max_get(tsd) == -1 ||\n\t    count <= prof_recent_alloc_max_get(tsd));\n}\n\nvoid\nprof_recent_alloc(tsd_t *tsd, edata_t *edata, size_t size, size_t usize) {\n\tcassert(config_prof);\n\tassert(edata != NULL);\n\tprof_tctx_t *tctx = edata_prof_tctx_get(edata);\n\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), tctx->tdata->lock);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), 
&prof_recent_alloc_mtx);\n\tprof_recent_alloc_assert_count(tsd);\n\n\t/*\n\t * Reserve a new prof_recent_t node if needed.  If needed, we release\n\t * the prof_recent_alloc_mtx lock and allocate.  Then, rather than\n\t * immediately checking for OOM, we regain the lock and try to make use\n\t * of the reserve node if needed.  There are six scenarios:\n\t *\n\t *          \\ now | no need | need but OOMed | need and allocated\n\t *     later \\    |         |                |\n\t *    ------------------------------------------------------------\n\t *     no need    |   (1)   |      (2)       |         (3)\n\t *    ------------------------------------------------------------\n\t *     need       |   (4)   |      (5)       |         (6)\n\t *\n\t * First, \"(4)\" never happens, because we don't release the lock in the\n\t * middle if there's no need for a new node; in such cases \"(1)\" always\n\t * takes place, which is trivial.\n\t *\n\t * Out of the remaining four scenarios, \"(6)\" is the common case and is\n\t * trivial.  \"(5)\" is also trivial, in which case we'll rollback the\n\t * effect of prof_recent_alloc_prepare() as expected.\n\t *\n\t * \"(2)\" / \"(3)\" occurs when the need for a new node is gone after we\n\t * regain the lock.  If the new node is successfully allocated, i.e. in\n\t * the case of \"(3)\", we'll release it in the end; otherwise, i.e. 
in\n\t * the case of \"(2)\", we do nothing - we're lucky that the OOM ends up\n\t * doing no harm at all.\n\t *\n\t * Therefore, the only performance cost of the \"release lock\" ->\n\t * \"allocate\" -> \"regain lock\" design is the \"(3)\" case, but it happens\n\t * very rarely, so the cost is relatively small compared to the gain of\n\t * not having to have the lock order of prof_recent_alloc_mtx above all\n\t * the allocation locks.\n\t */\n\tprof_recent_t *reserve = NULL;\n\tif (prof_recent_alloc_max_get(tsd) == -1 ||\n\t    prof_recent_alloc_count < prof_recent_alloc_max_get(tsd)) {\n\t\tassert(prof_recent_alloc_max_get(tsd) != 0);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\t\treserve = prof_recent_allocate_node(tsd_tsdn(tsd));\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\t\tprof_recent_alloc_assert_count(tsd);\n\t}\n\n\tif (prof_recent_alloc_max_get(tsd) == 0) {\n\t\tassert(prof_recent_alloc_is_empty(tsd));\n\t\tgoto label_rollback;\n\t}\n\n\tprof_tctx_t *old_alloc_tctx, *old_dalloc_tctx;\n\tif (prof_recent_alloc_count == prof_recent_alloc_max_get(tsd)) {\n\t\t/* If upper limit is reached, rotate the head. */\n\t\tassert(prof_recent_alloc_max_get(tsd) != -1);\n\t\tassert(!prof_recent_alloc_is_empty(tsd));\n\t\tprof_recent_t *head = ql_first(&prof_recent_alloc_list);\n\t\told_alloc_tctx = head->alloc_tctx;\n\t\tassert(old_alloc_tctx != NULL);\n\t\told_dalloc_tctx = head->dalloc_tctx;\n\t\tprof_recent_alloc_evict_edata(tsd, head);\n\t\tql_rotate(&prof_recent_alloc_list, link);\n\t} else {\n\t\t/* Otherwise make use of the new node. 
*/\n\t\tassert(prof_recent_alloc_max_get(tsd) == -1 ||\n\t\t    prof_recent_alloc_count < prof_recent_alloc_max_get(tsd));\n\t\tif (reserve == NULL) {\n\t\t\tgoto label_rollback;\n\t\t}\n\t\tql_elm_new(reserve, link);\n\t\tql_tail_insert(&prof_recent_alloc_list, reserve, link);\n\t\treserve = NULL;\n\t\told_alloc_tctx = NULL;\n\t\told_dalloc_tctx = NULL;\n\t\t++prof_recent_alloc_count;\n\t}\n\n\t/* Fill content into the tail node. */\n\tprof_recent_t *tail = ql_last(&prof_recent_alloc_list, link);\n\tassert(tail != NULL);\n\ttail->size = size;\n\ttail->usize = usize;\n\tnstime_copy(&tail->alloc_time, edata_prof_alloc_time_get(edata));\n\ttail->alloc_tctx = tctx;\n\tnstime_init_zero(&tail->dalloc_time);\n\ttail->dalloc_tctx = NULL;\n\tedata_prof_recent_alloc_set(tsd, edata, tail);\n\n\tassert(!prof_recent_alloc_is_empty(tsd));\n\tprof_recent_alloc_assert_count(tsd);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\n\tif (reserve != NULL) {\n\t\tprof_recent_free_node(tsd_tsdn(tsd), reserve);\n\t}\n\n\t/*\n\t * Asynchronously handle the tctx of the old node, so that there's no\n\t * simultaneous holdings of prof_recent_alloc_mtx and tdata->lock.\n\t * In the worst case this may delay the tctx release but it's better\n\t * than holding prof_recent_alloc_mtx for longer.\n\t */\n\tif (old_alloc_tctx != NULL) {\n\t\tdecrement_recent_count(tsd, old_alloc_tctx);\n\t}\n\tif (old_dalloc_tctx != NULL) {\n\t\tdecrement_recent_count(tsd, old_dalloc_tctx);\n\t}\n\treturn;\n\nlabel_rollback:\n\tassert(edata_prof_recent_alloc_get(tsd, edata) == NULL);\n\tprof_recent_alloc_assert_count(tsd);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tif (reserve != NULL) {\n\t\tprof_recent_free_node(tsd_tsdn(tsd), reserve);\n\t}\n\tdecrement_recent_count(tsd, tctx);\n}\n\nssize_t\nprof_recent_alloc_max_ctl_read() {\n\tcassert(config_prof);\n\t/* Don't bother to acquire the lock. 
*/\n\treturn prof_recent_alloc_max_get_no_lock();\n}\n\nstatic void\nprof_recent_alloc_restore_locked(tsd_t *tsd, prof_recent_list_t *to_delete) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tssize_t max = prof_recent_alloc_max_get(tsd);\n\tif (max == -1 || prof_recent_alloc_count <= max) {\n\t\t/* Easy case - no need to alter the list. */\n\t\tql_new(to_delete);\n\t\tprof_recent_alloc_assert_count(tsd);\n\t\treturn;\n\t}\n\n\tprof_recent_t *node;\n\tql_foreach(node, &prof_recent_alloc_list, link) {\n\t\tif (prof_recent_alloc_count == max) {\n\t\t\tbreak;\n\t\t}\n\t\tprof_recent_alloc_evict_edata(tsd, node);\n\t\t--prof_recent_alloc_count;\n\t}\n\tassert(prof_recent_alloc_count == max);\n\n\tql_move(to_delete, &prof_recent_alloc_list);\n\tif (max == 0) {\n\t\tassert(node == NULL);\n\t} else {\n\t\tassert(node != NULL);\n\t\tql_split(to_delete, node, &prof_recent_alloc_list, link);\n\t}\n\tassert(!ql_empty(to_delete));\n\tprof_recent_alloc_assert_count(tsd);\n}\n\nstatic void\nprof_recent_alloc_async_cleanup(tsd_t *tsd, prof_recent_list_t *to_delete) {\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), &prof_recent_dump_mtx);\n\tmalloc_mutex_assert_not_owner(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\twhile (!ql_empty(to_delete)) {\n\t\tprof_recent_t *node = ql_first(to_delete);\n\t\tql_remove(to_delete, node, link);\n\t\tdecrement_recent_count(tsd, node->alloc_tctx);\n\t\tif (node->dalloc_tctx != NULL) {\n\t\t\tdecrement_recent_count(tsd, node->dalloc_tctx);\n\t\t}\n\t\tprof_recent_free_node(tsd_tsdn(tsd), node);\n\t}\n}\n\nssize_t\nprof_recent_alloc_max_ctl_write(tsd_t *tsd, ssize_t max) {\n\tcassert(config_prof);\n\tassert(max >= -1);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tprof_recent_alloc_assert_count(tsd);\n\tconst ssize_t old_max = prof_recent_alloc_max_update(tsd, max);\n\tprof_recent_list_t to_delete;\n\tprof_recent_alloc_restore_locked(tsd, &to_delete);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), 
&prof_recent_alloc_mtx);\n\tprof_recent_alloc_async_cleanup(tsd, &to_delete);\n\treturn old_max;\n}\n\nstatic void\nprof_recent_alloc_dump_bt(emitter_t *emitter, prof_tctx_t *tctx) {\n\tchar bt_buf[2 * sizeof(intptr_t) + 3];\n\tchar *s = bt_buf;\n\tassert(tctx != NULL);\n\tprof_bt_t *bt = &tctx->gctx->bt;\n\tfor (size_t i = 0; i < bt->len; ++i) {\n\t\tmalloc_snprintf(bt_buf, sizeof(bt_buf), \"%p\", bt->vec[i]);\n\t\temitter_json_value(emitter, emitter_type_string, &s);\n\t}\n}\n\nstatic void\nprof_recent_alloc_dump_node(emitter_t *emitter, prof_recent_t *node) {\n\temitter_json_object_begin(emitter);\n\n\temitter_json_kv(emitter, \"size\", emitter_type_size, &node->size);\n\temitter_json_kv(emitter, \"usize\", emitter_type_size, &node->usize);\n\tbool released = prof_recent_alloc_edata_get_no_lock(node) == NULL;\n\temitter_json_kv(emitter, \"released\", emitter_type_bool, &released);\n\n\temitter_json_kv(emitter, \"alloc_thread_uid\", emitter_type_uint64,\n\t    &node->alloc_tctx->thr_uid);\n\tprof_tdata_t *alloc_tdata = node->alloc_tctx->tdata;\n\tassert(alloc_tdata != NULL);\n\tif (alloc_tdata->thread_name != NULL) {\n\t\temitter_json_kv(emitter, \"alloc_thread_name\",\n\t\t    emitter_type_string, &alloc_tdata->thread_name);\n\t}\n\tuint64_t alloc_time_ns = nstime_ns(&node->alloc_time);\n\temitter_json_kv(emitter, \"alloc_time\", emitter_type_uint64,\n\t    &alloc_time_ns);\n\temitter_json_array_kv_begin(emitter, \"alloc_trace\");\n\tprof_recent_alloc_dump_bt(emitter, node->alloc_tctx);\n\temitter_json_array_end(emitter);\n\n\tif (released && node->dalloc_tctx != NULL) {\n\t\temitter_json_kv(emitter, \"dalloc_thread_uid\",\n\t\t    emitter_type_uint64, &node->dalloc_tctx->thr_uid);\n\t\tprof_tdata_t *dalloc_tdata = node->dalloc_tctx->tdata;\n\t\tassert(dalloc_tdata != NULL);\n\t\tif (dalloc_tdata->thread_name != NULL) {\n\t\t\temitter_json_kv(emitter, \"dalloc_thread_name\",\n\t\t\t    emitter_type_string, 
&dalloc_tdata->thread_name);\n\t\t}\n\t\tassert(!nstime_equals_zero(&node->dalloc_time));\n\t\tuint64_t dalloc_time_ns = nstime_ns(&node->dalloc_time);\n\t\temitter_json_kv(emitter, \"dalloc_time\", emitter_type_uint64,\n\t\t    &dalloc_time_ns);\n\t\temitter_json_array_kv_begin(emitter, \"dalloc_trace\");\n\t\tprof_recent_alloc_dump_bt(emitter, node->dalloc_tctx);\n\t\temitter_json_array_end(emitter);\n\t}\n\n\temitter_json_object_end(emitter);\n}\n\n#define PROF_RECENT_PRINT_BUFSIZE 65536\nJEMALLOC_COLD\nvoid\nprof_recent_alloc_dump(tsd_t *tsd, write_cb_t *write_cb, void *cbopaque) {\n\tcassert(config_prof);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_dump_mtx);\n\tbuf_writer_t buf_writer;\n\tbuf_writer_init(tsd_tsdn(tsd), &buf_writer, write_cb, cbopaque, NULL,\n\t    PROF_RECENT_PRINT_BUFSIZE);\n\temitter_t emitter;\n\temitter_init(&emitter, emitter_output_json_compact, buf_writer_cb,\n\t    &buf_writer);\n\tprof_recent_list_t temp_list;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tprof_recent_alloc_assert_count(tsd);\n\tssize_t dump_max = prof_recent_alloc_max_get(tsd);\n\tql_move(&temp_list, &prof_recent_alloc_list);\n\tssize_t dump_count = prof_recent_alloc_count;\n\tprof_recent_alloc_count = 0;\n\tprof_recent_alloc_assert_count(tsd);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\n\temitter_begin(&emitter);\n\tuint64_t sample_interval = (uint64_t)1U << lg_prof_sample;\n\temitter_json_kv(&emitter, \"sample_interval\", emitter_type_uint64,\n\t    &sample_interval);\n\temitter_json_kv(&emitter, \"recent_alloc_max\", emitter_type_ssize,\n\t    &dump_max);\n\temitter_json_array_kv_begin(&emitter, \"recent_alloc\");\n\tprof_recent_t *node;\n\tql_foreach(node, &temp_list, link) {\n\t\tprof_recent_alloc_dump_node(&emitter, node);\n\t}\n\temitter_json_array_end(&emitter);\n\temitter_end(&emitter);\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\tprof_recent_alloc_assert_count(tsd);\n\tql_concat(&temp_list, 
&prof_recent_alloc_list, link);\n\tql_move(&prof_recent_alloc_list, &temp_list);\n\tprof_recent_alloc_count += dump_count;\n\tprof_recent_alloc_restore_locked(tsd, &temp_list);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_alloc_mtx);\n\n\tbuf_writer_terminate(tsd_tsdn(tsd), &buf_writer);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_recent_dump_mtx);\n\n\tprof_recent_alloc_async_cleanup(tsd, &temp_list);\n}\n#undef PROF_RECENT_PRINT_BUFSIZE\n\nbool\nprof_recent_init() {\n\tcassert(config_prof);\n\tprof_recent_alloc_max_init();\n\n\tif (malloc_mutex_init(&prof_recent_alloc_mtx, \"prof_recent_alloc\",\n\t    WITNESS_RANK_PROF_RECENT_ALLOC, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tif (malloc_mutex_init(&prof_recent_dump_mtx, \"prof_recent_dump\",\n\t    WITNESS_RANK_PROF_RECENT_DUMP, malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\tql_new(&prof_recent_alloc_list);\n\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/prof_stats.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/prof_stats.h\"\n\nbool opt_prof_stats = false;\nmalloc_mutex_t prof_stats_mtx;\nstatic prof_stats_t prof_stats_live[PROF_SC_NSIZES];\nstatic prof_stats_t prof_stats_accum[PROF_SC_NSIZES];\n\nstatic void\nprof_stats_enter(tsd_t *tsd, szind_t ind) {\n\tassert(opt_prof && opt_prof_stats);\n\tassert(ind < SC_NSIZES);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_stats_mtx);\n}\n\nstatic void\nprof_stats_leave(tsd_t *tsd) {\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_stats_mtx);\n}\n\nvoid\nprof_stats_inc(tsd_t *tsd, szind_t ind, size_t size) {\n\tcassert(config_prof);\n\tprof_stats_enter(tsd, ind);\n\tprof_stats_live[ind].req_sum += size;\n\tprof_stats_live[ind].count++;\n\tprof_stats_accum[ind].req_sum += size;\n\tprof_stats_accum[ind].count++;\n\tprof_stats_leave(tsd);\n}\n\nvoid\nprof_stats_dec(tsd_t *tsd, szind_t ind, size_t size) {\n\tcassert(config_prof);\n\tprof_stats_enter(tsd, ind);\n\tprof_stats_live[ind].req_sum -= size;\n\tprof_stats_live[ind].count--;\n\tprof_stats_leave(tsd);\n}\n\nvoid\nprof_stats_get_live(tsd_t *tsd, szind_t ind, prof_stats_t *stats) {\n\tcassert(config_prof);\n\tprof_stats_enter(tsd, ind);\n\tmemcpy(stats, &prof_stats_live[ind], sizeof(prof_stats_t));\n\tprof_stats_leave(tsd);\n}\n\nvoid\nprof_stats_get_accum(tsd_t *tsd, szind_t ind, prof_stats_t *stats) {\n\tcassert(config_prof);\n\tprof_stats_enter(tsd, ind);\n\tmemcpy(stats, &prof_stats_accum[ind], sizeof(prof_stats_t));\n\tprof_stats_leave(tsd);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/prof_sys.c",
    "content": "#define JEMALLOC_PROF_SYS_C_\n#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/buf_writer.h\"\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n\n#ifdef JEMALLOC_PROF_LIBUNWIND\n#define UNW_LOCAL_ONLY\n#include <libunwind.h>\n#endif\n\n#ifdef JEMALLOC_PROF_LIBGCC\n/*\n * We have a circular dependency -- jemalloc_internal.h tells us if we should\n * use libgcc's unwinding functionality, but after we've included that, we've\n * already hooked _Unwind_Backtrace.  We'll temporarily disable hooking.\n */\n#undef _Unwind_Backtrace\n#include <unwind.h>\n#define _Unwind_Backtrace JEMALLOC_TEST_HOOK(_Unwind_Backtrace, test_hooks_libc_hook)\n#endif\n\n/******************************************************************************/\n\nmalloc_mutex_t prof_dump_filename_mtx;\n\nbool prof_do_mock = false;\n\nstatic uint64_t prof_dump_seq;\nstatic uint64_t prof_dump_iseq;\nstatic uint64_t prof_dump_mseq;\nstatic uint64_t prof_dump_useq;\n\nstatic char *prof_prefix = NULL;\n\n/* The fallback allocator profiling functionality will use. 
*/\nbase_t *prof_base;\n\nvoid\nbt_init(prof_bt_t *bt, void **vec) {\n\tcassert(config_prof);\n\n\tbt->vec = vec;\n\tbt->len = 0;\n}\n\n#ifdef JEMALLOC_PROF_LIBUNWIND\nstatic void\nprof_backtrace_impl(void **vec, unsigned *len, unsigned max_len) {\n\tint nframes;\n\n\tcassert(config_prof);\n\tassert(*len == 0);\n\tassert(vec != NULL);\n\tassert(max_len == PROF_BT_MAX);\n\n\tnframes = unw_backtrace(vec, PROF_BT_MAX);\n\tif (nframes <= 0) {\n\t\treturn;\n\t}\n\t*len = nframes;\n}\n#elif (defined(JEMALLOC_PROF_LIBGCC))\nstatic _Unwind_Reason_Code\nprof_unwind_init_callback(struct _Unwind_Context *context, void *arg) {\n\tcassert(config_prof);\n\n\treturn _URC_NO_REASON;\n}\n\nstatic _Unwind_Reason_Code\nprof_unwind_callback(struct _Unwind_Context *context, void *arg) {\n\tprof_unwind_data_t *data = (prof_unwind_data_t *)arg;\n\tvoid *ip;\n\n\tcassert(config_prof);\n\n\tip = (void *)_Unwind_GetIP(context);\n\tif (ip == NULL) {\n\t\treturn _URC_END_OF_STACK;\n\t}\n\tdata->vec[*data->len] = ip;\n\t(*data->len)++;\n\tif (*data->len == data->max) {\n\t\treturn _URC_END_OF_STACK;\n\t}\n\n\treturn _URC_NO_REASON;\n}\n\nstatic void\nprof_backtrace_impl(void **vec, unsigned *len, unsigned max_len) {\n\tprof_unwind_data_t data = {vec, len, max_len};\n\n\tcassert(config_prof);\n\tassert(vec != NULL);\n\tassert(max_len == PROF_BT_MAX);\n\n\t_Unwind_Backtrace(prof_unwind_callback, &data);\n}\n#elif (defined(JEMALLOC_PROF_GCC))\nstatic void\nprof_backtrace_impl(void **vec, unsigned *len, unsigned max_len) {\n#define BT_FRAME(i)\t\t\t\t\t\t\t\\\n\tif ((i) < max_len) {\t\t\t\t\t\t\\\n\t\tvoid *p;\t\t\t\t\t\t\\\n\t\tif (__builtin_frame_address(i) == 0) {\t\t\t\\\n\t\t\treturn;\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tp = __builtin_return_address(i);\t\t\t\\\n\t\tif (p == NULL) {\t\t\t\t\t\\\n\t\t\treturn;\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tvec[(i)] = p;\t\t\t\t\t\t\\\n\t\t*len = (i) + 1;\t\t\t\t\t\t\\\n\t} else 
{\t\t\t\t\t\t\t\\\n\t\treturn;\t\t\t\t\t\t\t\\\n\t}\n\n\tcassert(config_prof);\n\tassert(vec != NULL);\n\tassert(max_len == PROF_BT_MAX);\n\n\tBT_FRAME(0)\n\tBT_FRAME(1)\n\tBT_FRAME(2)\n\tBT_FRAME(3)\n\tBT_FRAME(4)\n\tBT_FRAME(5)\n\tBT_FRAME(6)\n\tBT_FRAME(7)\n\tBT_FRAME(8)\n\tBT_FRAME(9)\n\n\tBT_FRAME(10)\n\tBT_FRAME(11)\n\tBT_FRAME(12)\n\tBT_FRAME(13)\n\tBT_FRAME(14)\n\tBT_FRAME(15)\n\tBT_FRAME(16)\n\tBT_FRAME(17)\n\tBT_FRAME(18)\n\tBT_FRAME(19)\n\n\tBT_FRAME(20)\n\tBT_FRAME(21)\n\tBT_FRAME(22)\n\tBT_FRAME(23)\n\tBT_FRAME(24)\n\tBT_FRAME(25)\n\tBT_FRAME(26)\n\tBT_FRAME(27)\n\tBT_FRAME(28)\n\tBT_FRAME(29)\n\n\tBT_FRAME(30)\n\tBT_FRAME(31)\n\tBT_FRAME(32)\n\tBT_FRAME(33)\n\tBT_FRAME(34)\n\tBT_FRAME(35)\n\tBT_FRAME(36)\n\tBT_FRAME(37)\n\tBT_FRAME(38)\n\tBT_FRAME(39)\n\n\tBT_FRAME(40)\n\tBT_FRAME(41)\n\tBT_FRAME(42)\n\tBT_FRAME(43)\n\tBT_FRAME(44)\n\tBT_FRAME(45)\n\tBT_FRAME(46)\n\tBT_FRAME(47)\n\tBT_FRAME(48)\n\tBT_FRAME(49)\n\n\tBT_FRAME(50)\n\tBT_FRAME(51)\n\tBT_FRAME(52)\n\tBT_FRAME(53)\n\tBT_FRAME(54)\n\tBT_FRAME(55)\n\tBT_FRAME(56)\n\tBT_FRAME(57)\n\tBT_FRAME(58)\n\tBT_FRAME(59)\n\n\tBT_FRAME(60)\n\tBT_FRAME(61)\n\tBT_FRAME(62)\n\tBT_FRAME(63)\n\tBT_FRAME(64)\n\tBT_FRAME(65)\n\tBT_FRAME(66)\n\tBT_FRAME(67)\n\tBT_FRAME(68)\n\tBT_FRAME(69)\n\n\tBT_FRAME(70)\n\tBT_FRAME(71)\n\tBT_FRAME(72)\n\tBT_FRAME(73)\n\tBT_FRAME(74)\n\tBT_FRAME(75)\n\tBT_FRAME(76)\n\tBT_FRAME(77)\n\tBT_FRAME(78)\n\tBT_FRAME(79)\n\n\tBT_FRAME(80)\n\tBT_FRAME(81)\n\tBT_FRAME(82)\n\tBT_FRAME(83)\n\tBT_FRAME(84)\n\tBT_FRAME(85)\n\tBT_FRAME(86)\n\tBT_FRAME(87)\n\tBT_FRAME(88)\n\tBT_FRAME(89)\n\n\tBT_FRAME(90)\n\tBT_FRAME(91)\n\tBT_FRAME(92)\n\tBT_FRAME(93)\n\tBT_FRAME(94)\n\tBT_FRAME(95)\n\tBT_FRAME(96)\n\tBT_FRAME(97)\n\tBT_FRAME(98)\n\tBT_FRAME(99)\n\n\tBT_FRAME(100)\n\tBT_FRAME(101)\n\tBT_FRAME(102)\n\tBT_FRAME(103)\n\tBT_FRAME(104)\n\tBT_FRAME(105)\n\tBT_FRAME(106)\n\tBT_FRAME(107)\n\tBT_FRAME(108)\n\tBT_FRAME(109)\n\n\tBT_FRAME(110)\n\tBT_FRAME(111)\n\tBT_FRAME(112)\n\tBT_FRAME(113)\n\tBT_FRAM
E(114)\n\tBT_FRAME(115)\n\tBT_FRAME(116)\n\tBT_FRAME(117)\n\tBT_FRAME(118)\n\tBT_FRAME(119)\n\n\tBT_FRAME(120)\n\tBT_FRAME(121)\n\tBT_FRAME(122)\n\tBT_FRAME(123)\n\tBT_FRAME(124)\n\tBT_FRAME(125)\n\tBT_FRAME(126)\n\tBT_FRAME(127)\n#undef BT_FRAME\n}\n#else\nstatic void\nprof_backtrace_impl(void **vec, unsigned *len, unsigned max_len) {\n\tcassert(config_prof);\n\tnot_reached();\n}\n#endif\n\nvoid\nprof_backtrace(tsd_t *tsd, prof_bt_t *bt) {\n\tcassert(config_prof);\n\tprof_backtrace_hook_t prof_backtrace_hook = prof_backtrace_hook_get();\n\tassert(prof_backtrace_hook != NULL);\n\n\tpre_reentrancy(tsd, NULL);\n\tprof_backtrace_hook(bt->vec, &bt->len, PROF_BT_MAX);\n\tpost_reentrancy(tsd);\n}\n\nvoid\nprof_hooks_init() {\n\tprof_backtrace_hook_set(&prof_backtrace_impl);\n\tprof_dump_hook_set(NULL);\n}\n\nvoid\nprof_unwind_init() {\n#ifdef JEMALLOC_PROF_LIBGCC\n\t/*\n\t * Cause the backtracing machinery to allocate its internal\n\t * state before enabling profiling.\n\t */\n\t_Unwind_Backtrace(prof_unwind_init_callback, NULL);\n#endif\n}\n\nstatic int\nprof_sys_thread_name_read_impl(char *buf, size_t limit) {\n#if defined(JEMALLOC_HAVE_PTHREAD_GETNAME_NP)\n\treturn pthread_getname_np(pthread_self(), buf, limit);\n#elif defined(JEMALLOC_HAVE_PTHREAD_GET_NAME_NP)\n\tpthread_get_name_np(pthread_self(), buf, limit);\n\treturn 0;\n#else\n\treturn ENOSYS;\n#endif\n}\nprof_sys_thread_name_read_t *JET_MUTABLE prof_sys_thread_name_read =\n    prof_sys_thread_name_read_impl;\n\nvoid\nprof_sys_thread_name_fetch(tsd_t *tsd) {\n#define THREAD_NAME_MAX_LEN 16\n\tchar buf[THREAD_NAME_MAX_LEN];\n\tif (!prof_sys_thread_name_read(buf, THREAD_NAME_MAX_LEN)) {\n\t\tprof_thread_name_set_impl(tsd, buf);\n\t}\n#undef THREAD_NAME_MAX_LEN\n}\n\nint\nprof_getpid(void) {\n#ifdef _WIN32\n\treturn GetCurrentProcessId();\n#else\n\treturn getpid();\n#endif\n}\n\n/*\n * This buffer is rather large for stack allocation, so use a single buffer for\n * all profile dumps; protected by prof_dump_mtx.\n 
*/\nstatic char prof_dump_buf[PROF_DUMP_BUFSIZE];\n\ntypedef struct prof_dump_arg_s prof_dump_arg_t;\nstruct prof_dump_arg_s {\n\t/*\n\t * Whether error should be handled locally: if true, then we print out\n\t * error message as well as abort (if opt_abort is true) when an error\n\t * occurred, and we also report the error back to the caller in the end;\n\t * if false, then we only report the error back to the caller in the\n\t * end.\n\t */\n\tconst bool handle_error_locally;\n\t/*\n\t * Whether there has been an error in the dumping process, which could\n\t * have happened either in file opening or in file writing.  When an\n\t * error has already occurred, we will stop further writing to the file.\n\t */\n\tbool error;\n\t/* File descriptor of the dump file. */\n\tint prof_dump_fd;\n};\n\nstatic void\nprof_dump_check_possible_error(prof_dump_arg_t *arg, bool err_cond,\n    const char *format, ...) {\n\tassert(!arg->error);\n\tif (!err_cond) {\n\t\treturn;\n\t}\n\n\targ->error = true;\n\tif (!arg->handle_error_locally) {\n\t\treturn;\n\t}\n\n\tva_list ap;\n\tchar buf[PROF_PRINTF_BUFSIZE];\n\tva_start(ap, format);\n\tmalloc_vsnprintf(buf, sizeof(buf), format, ap);\n\tva_end(ap);\n\tmalloc_write(buf);\n\n\tif (opt_abort) {\n\t\tabort();\n\t}\n}\n\nstatic int\nprof_dump_open_file_impl(const char *filename, int mode) {\n\treturn creat(filename, mode);\n}\nprof_dump_open_file_t *JET_MUTABLE prof_dump_open_file =\n    prof_dump_open_file_impl;\n\nstatic void\nprof_dump_open(prof_dump_arg_t *arg, const char *filename) {\n\targ->prof_dump_fd = prof_dump_open_file(filename, 0644);\n\tprof_dump_check_possible_error(arg, arg->prof_dump_fd == -1,\n\t    \"<jemalloc>: failed to open \\\"%s\\\"\\n\", filename);\n}\n\nprof_dump_write_file_t *JET_MUTABLE prof_dump_write_file = malloc_write_fd;\n\nstatic void\nprof_dump_flush(void *opaque, const char *s) {\n\tcassert(config_prof);\n\tprof_dump_arg_t *arg = (prof_dump_arg_t *)opaque;\n\tif (!arg->error) {\n\t\tssize_t err = 
prof_dump_write_file(arg->prof_dump_fd, s,\n\t\t    strlen(s));\n\t\tprof_dump_check_possible_error(arg, err == -1,\n\t\t    \"<jemalloc>: failed to write during heap profile flush\\n\");\n\t}\n}\n\nstatic void\nprof_dump_close(prof_dump_arg_t *arg) {\n\tif (arg->prof_dump_fd != -1) {\n\t\tclose(arg->prof_dump_fd);\n\t}\n}\n\n#ifndef _WIN32\nJEMALLOC_FORMAT_PRINTF(1, 2)\nstatic int\nprof_open_maps_internal(const char *format, ...) {\n\tint mfd;\n\tva_list ap;\n\tchar filename[PATH_MAX + 1];\n\n\tva_start(ap, format);\n\tmalloc_vsnprintf(filename, sizeof(filename), format, ap);\n\tva_end(ap);\n\n#if defined(O_CLOEXEC)\n\tmfd = open(filename, O_RDONLY | O_CLOEXEC);\n#else\n\tmfd = open(filename, O_RDONLY);\n\tif (mfd != -1) {\n\t\tfcntl(mfd, F_SETFD, fcntl(mfd, F_GETFD) | FD_CLOEXEC);\n\t}\n#endif\n\n\treturn mfd;\n}\n#endif\n\nstatic int\nprof_dump_open_maps_impl() {\n\tint mfd;\n\n\tcassert(config_prof);\n#if defined(__FreeBSD__) || defined(__DragonFly__)\n\tmfd = prof_open_maps_internal(\"/proc/curproc/map\");\n#elif defined(_WIN32)\n\tmfd = -1; // Not implemented\n#else\n\tint pid = prof_getpid();\n\n\tmfd = prof_open_maps_internal(\"/proc/%d/task/%d/maps\", pid, pid);\n\tif (mfd == -1) {\n\t\tmfd = prof_open_maps_internal(\"/proc/%d/maps\", pid);\n\t}\n#endif\n\treturn mfd;\n}\nprof_dump_open_maps_t *JET_MUTABLE prof_dump_open_maps =\n    prof_dump_open_maps_impl;\n\nstatic ssize_t\nprof_dump_read_maps_cb(void *read_cbopaque, void *buf, size_t limit) {\n\tint mfd = *(int *)read_cbopaque;\n\tassert(mfd != -1);\n\treturn malloc_read_fd(mfd, buf, limit);\n}\n\nstatic void\nprof_dump_maps(buf_writer_t *buf_writer) {\n\tint mfd = prof_dump_open_maps();\n\tif (mfd == -1) {\n\t\treturn;\n\t}\n\n\tbuf_writer_cb(buf_writer, \"\\nMAPPED_LIBRARIES:\\n\");\n\tbuf_writer_pipe(buf_writer, prof_dump_read_maps_cb, &mfd);\n\tclose(mfd);\n}\n\nstatic bool\nprof_dump(tsd_t *tsd, bool propagate_err, const char *filename,\n    bool leakcheck) 
{\n\tcassert(config_prof);\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\n\tprof_tdata_t * tdata = prof_tdata_get(tsd, true);\n\tif (tdata == NULL) {\n\t\treturn true;\n\t}\n\n\tprof_dump_arg_t arg = {/* handle_error_locally */ !propagate_err,\n\t    /* error */ false, /* prof_dump_fd */ -1};\n\n\tpre_reentrancy(tsd, NULL);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_mtx);\n\n\tprof_dump_open(&arg, filename);\n\tbuf_writer_t buf_writer;\n\tbool err = buf_writer_init(tsd_tsdn(tsd), &buf_writer, prof_dump_flush,\n\t    &arg, prof_dump_buf, PROF_DUMP_BUFSIZE);\n\tassert(!err);\n\tprof_dump_impl(tsd, buf_writer_cb, &buf_writer, tdata, leakcheck);\n\tprof_dump_maps(&buf_writer);\n\tbuf_writer_terminate(tsd_tsdn(tsd), &buf_writer);\n\tprof_dump_close(&arg);\n\n\tprof_dump_hook_t dump_hook = prof_dump_hook_get();\n\tif (dump_hook != NULL) {\n\t\tdump_hook(filename);\n\t}\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_mtx);\n\tpost_reentrancy(tsd);\n\n\treturn arg.error;\n}\n\n/*\n * If profiling is off, then PROF_DUMP_FILENAME_LEN is 1, so we'll end up\n * calling strncpy with a size of 0, which triggers a -Wstringop-truncation\n * warning (strncpy can never actually be called in this case, since we bail out\n * much earlier when config_prof is false).  This function works around the\n * warning to let us leave the warning on.\n */\nstatic inline void\nprof_strncpy(char *UNUSED dest, const char *UNUSED src, size_t UNUSED size) {\n\tcassert(config_prof);\n#ifdef JEMALLOC_PROF\n\tstrncpy(dest, src, size);\n#endif\n}\n\nstatic const char *\nprof_prefix_get(tsdn_t* tsdn) {\n\tmalloc_mutex_assert_owner(tsdn, &prof_dump_filename_mtx);\n\n\treturn prof_prefix == NULL ? 
opt_prof_prefix : prof_prefix;\n}\n\nstatic bool\nprof_prefix_is_empty(tsdn_t *tsdn) {\n\tmalloc_mutex_lock(tsdn, &prof_dump_filename_mtx);\n\tbool ret = (prof_prefix_get(tsdn)[0] == '\\0');\n\tmalloc_mutex_unlock(tsdn, &prof_dump_filename_mtx);\n\treturn ret;\n}\n\n#define DUMP_FILENAME_BUFSIZE (PATH_MAX + 1)\n#define VSEQ_INVALID UINT64_C(0xffffffffffffffff)\nstatic void\nprof_dump_filename(tsd_t *tsd, char *filename, char v, uint64_t vseq) {\n\tcassert(config_prof);\n\n\tassert(tsd_reentrancy_level_get(tsd) == 0);\n\tconst char *prefix = prof_prefix_get(tsd_tsdn(tsd));\n\n\tif (vseq != VSEQ_INVALID) {\n\t        /* \"<prefix>.<pid>.<seq>.v<vseq>.heap\" */\n\t\tmalloc_snprintf(filename, DUMP_FILENAME_BUFSIZE,\n\t\t    \"%s.%d.%\"FMTu64\".%c%\"FMTu64\".heap\", prefix, prof_getpid(),\n\t\t    prof_dump_seq, v, vseq);\n\t} else {\n\t        /* \"<prefix>.<pid>.<seq>.<v>.heap\" */\n\t\tmalloc_snprintf(filename, DUMP_FILENAME_BUFSIZE,\n\t\t    \"%s.%d.%\"FMTu64\".%c.heap\", prefix, prof_getpid(),\n\t\t    prof_dump_seq, v);\n\t}\n\tprof_dump_seq++;\n}\n\nvoid\nprof_get_default_filename(tsdn_t *tsdn, char *filename, uint64_t ind) {\n\tmalloc_mutex_lock(tsdn, &prof_dump_filename_mtx);\n\tmalloc_snprintf(filename, PROF_DUMP_FILENAME_LEN,\n\t    \"%s.%d.%\"FMTu64\".json\", prof_prefix_get(tsdn), prof_getpid(), ind);\n\tmalloc_mutex_unlock(tsdn, &prof_dump_filename_mtx);\n}\n\nvoid\nprof_fdump_impl(tsd_t *tsd) {\n\tchar filename[DUMP_FILENAME_BUFSIZE];\n\n\tassert(!prof_prefix_is_empty(tsd_tsdn(tsd)));\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\tprof_dump_filename(tsd, filename, 'f', VSEQ_INVALID);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\tprof_dump(tsd, false, filename, opt_prof_leak);\n}\n\nbool\nprof_prefix_set(tsdn_t *tsdn, const char *prefix) {\n\tcassert(config_prof);\n\tctl_mtx_assert_held(tsdn);\n\tmalloc_mutex_lock(tsdn, &prof_dump_filename_mtx);\n\tif (prof_prefix == NULL) {\n\t\tmalloc_mutex_unlock(tsdn, 
&prof_dump_filename_mtx);\n\t\t/* Everything is still guarded by ctl_mtx. */\n\t\tchar *buffer = base_alloc(tsdn, prof_base,\n\t\t    PROF_DUMP_FILENAME_LEN, QUANTUM);\n\t\tif (buffer == NULL) {\n\t\t\treturn true;\n\t\t}\n\t\tmalloc_mutex_lock(tsdn, &prof_dump_filename_mtx);\n\t\tprof_prefix = buffer;\n\t}\n\tassert(prof_prefix != NULL);\n\n\tprof_strncpy(prof_prefix, prefix, PROF_DUMP_FILENAME_LEN - 1);\n\tprof_prefix[PROF_DUMP_FILENAME_LEN - 1] = '\\0';\n\tmalloc_mutex_unlock(tsdn, &prof_dump_filename_mtx);\n\n\treturn false;\n}\n\nvoid\nprof_idump_impl(tsd_t *tsd) {\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\tif (prof_prefix_get(tsd_tsdn(tsd))[0] == '\\0') {\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\t\treturn;\n\t}\n\tchar filename[PATH_MAX + 1];\n\tprof_dump_filename(tsd, filename, 'i', prof_dump_iseq);\n\tprof_dump_iseq++;\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\tprof_dump(tsd, false, filename, false);\n}\n\nbool\nprof_mdump_impl(tsd_t *tsd, const char *filename) {\n\tchar filename_buf[DUMP_FILENAME_BUFSIZE];\n\tif (filename == NULL) {\n\t\t/* No filename specified, so automatically generate one. 
*/\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\t\tif (prof_prefix_get(tsd_tsdn(tsd))[0] == '\\0') {\n\t\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\t\t\treturn true;\n\t\t}\n\t\tprof_dump_filename(tsd, filename_buf, 'm', prof_dump_mseq);\n\t\tprof_dump_mseq++;\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &prof_dump_filename_mtx);\n\t\tfilename = filename_buf;\n\t}\n\treturn prof_dump(tsd, true, filename, false);\n}\n\nvoid\nprof_gdump_impl(tsd_t *tsd) {\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\tmalloc_mutex_lock(tsdn, &prof_dump_filename_mtx);\n\tif (prof_prefix_get(tsdn)[0] == '\\0') {\n\t\tmalloc_mutex_unlock(tsdn, &prof_dump_filename_mtx);\n\t\treturn;\n\t}\n\tchar filename[DUMP_FILENAME_BUFSIZE];\n\tprof_dump_filename(tsd, filename, 'u', prof_dump_useq);\n\tprof_dump_useq++;\n\tmalloc_mutex_unlock(tsdn, &prof_dump_filename_mtx);\n\tprof_dump(tsd, false, filename, false);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/psset.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/psset.h\"\n\n#include \"jemalloc/internal/fb.h\"\n\nvoid\npsset_init(psset_t *psset) {\n\tfor (unsigned i = 0; i < PSSET_NPSIZES; i++) {\n\t\thpdata_age_heap_new(&psset->pageslabs[i]);\n\t}\n\tfb_init(psset->pageslab_bitmap, PSSET_NPSIZES);\n\tmemset(&psset->merged_stats, 0, sizeof(psset->merged_stats));\n\tmemset(&psset->stats, 0, sizeof(psset->stats));\n\thpdata_empty_list_init(&psset->empty);\n\tfor (int i = 0; i < PSSET_NPURGE_LISTS; i++) {\n\t\thpdata_purge_list_init(&psset->to_purge[i]);\n\t}\n\tfb_init(psset->purge_bitmap, PSSET_NPURGE_LISTS);\n\thpdata_hugify_list_init(&psset->to_hugify);\n}\n\nstatic void\npsset_bin_stats_accum(psset_bin_stats_t *dst, psset_bin_stats_t *src) {\n\tdst->npageslabs += src->npageslabs;\n\tdst->nactive += src->nactive;\n\tdst->ndirty += src->ndirty;\n}\n\nvoid\npsset_stats_accum(psset_stats_t *dst, psset_stats_t *src) {\n\tpsset_bin_stats_accum(&dst->full_slabs[0], &src->full_slabs[0]);\n\tpsset_bin_stats_accum(&dst->full_slabs[1], &src->full_slabs[1]);\n\tpsset_bin_stats_accum(&dst->empty_slabs[0], &src->empty_slabs[0]);\n\tpsset_bin_stats_accum(&dst->empty_slabs[1], &src->empty_slabs[1]);\n\tfor (pszind_t i = 0; i < PSSET_NPSIZES; i++) {\n\t\tpsset_bin_stats_accum(&dst->nonfull_slabs[i][0],\n\t\t    &src->nonfull_slabs[i][0]);\n\t\tpsset_bin_stats_accum(&dst->nonfull_slabs[i][1],\n\t\t    &src->nonfull_slabs[i][1]);\n\t}\n}\n\n/*\n * The stats maintenance strategy is to remove a pageslab's contribution to the\n * stats when we call psset_update_begin, and re-add it (to a potentially new\n * bin) when we call psset_update_end.\n */\nJEMALLOC_ALWAYS_INLINE void\npsset_bin_stats_insert_remove(psset_t *psset, psset_bin_stats_t *binstats,\n    hpdata_t *ps, bool insert) {\n\tsize_t mul = insert ? 
(size_t)1 : (size_t)-1;\n\tsize_t huge_idx = (size_t)hpdata_huge_get(ps);\n\n\tbinstats[huge_idx].npageslabs += mul * 1;\n\tbinstats[huge_idx].nactive += mul * hpdata_nactive_get(ps);\n\tbinstats[huge_idx].ndirty += mul * hpdata_ndirty_get(ps);\n\n\tpsset->merged_stats.npageslabs += mul * 1;\n\tpsset->merged_stats.nactive += mul * hpdata_nactive_get(ps);\n\tpsset->merged_stats.ndirty += mul * hpdata_ndirty_get(ps);\n\n\tif (config_debug) {\n\t\tpsset_bin_stats_t check_stats = {0};\n\t\tfor (size_t huge = 0; huge <= 1; huge++) {\n\t\t\tpsset_bin_stats_accum(&check_stats,\n\t\t\t    &psset->stats.full_slabs[huge]);\n\t\t\tpsset_bin_stats_accum(&check_stats,\n\t\t\t    &psset->stats.empty_slabs[huge]);\n\t\t\tfor (pszind_t pind = 0; pind < PSSET_NPSIZES; pind++) {\n\t\t\t\tpsset_bin_stats_accum(&check_stats,\n\t\t\t\t    &psset->stats.nonfull_slabs[pind][huge]);\n\t\t\t}\n\t\t}\n\t\tassert(psset->merged_stats.npageslabs\n\t\t    == check_stats.npageslabs);\n\t\tassert(psset->merged_stats.nactive == check_stats.nactive);\n\t\tassert(psset->merged_stats.ndirty == check_stats.ndirty);\n\t}\n}\n\nstatic void\npsset_bin_stats_insert(psset_t *psset, psset_bin_stats_t *binstats,\n    hpdata_t *ps) {\n\tpsset_bin_stats_insert_remove(psset, binstats, ps, true);\n}\n\nstatic void\npsset_bin_stats_remove(psset_t *psset, psset_bin_stats_t *binstats,\n    hpdata_t *ps) {\n\tpsset_bin_stats_insert_remove(psset, binstats, ps, false);\n}\n\nstatic void\npsset_hpdata_heap_remove(psset_t *psset, pszind_t pind, hpdata_t *ps) {\n\thpdata_age_heap_remove(&psset->pageslabs[pind], ps);\n\tif (hpdata_age_heap_empty(&psset->pageslabs[pind])) {\n\t\tfb_unset(psset->pageslab_bitmap, PSSET_NPSIZES, (size_t)pind);\n\t}\n}\n\nstatic void\npsset_hpdata_heap_insert(psset_t *psset, pszind_t pind, hpdata_t *ps) {\n\tif (hpdata_age_heap_empty(&psset->pageslabs[pind])) {\n\t\tfb_set(psset->pageslab_bitmap, PSSET_NPSIZES, (size_t)pind);\n\t}\n\thpdata_age_heap_insert(&psset->pageslabs[pind], 
ps);\n}\n\nstatic void\npsset_stats_insert(psset_t* psset, hpdata_t *ps) {\n\tif (hpdata_empty(ps)) {\n\t\tpsset_bin_stats_insert(psset, psset->stats.empty_slabs, ps);\n\t} else if (hpdata_full(ps)) {\n\t\tpsset_bin_stats_insert(psset, psset->stats.full_slabs, ps);\n\t} else {\n\t\tsize_t longest_free_range = hpdata_longest_free_range_get(ps);\n\n\t\tpszind_t pind = sz_psz2ind(sz_psz_quantize_floor(\n\t\t    longest_free_range << LG_PAGE));\n\t\tassert(pind < PSSET_NPSIZES);\n\n\t\tpsset_bin_stats_insert(psset, psset->stats.nonfull_slabs[pind],\n\t\t    ps);\n\t}\n}\n\nstatic void\npsset_stats_remove(psset_t *psset, hpdata_t *ps) {\n\tif (hpdata_empty(ps)) {\n\t\tpsset_bin_stats_remove(psset, psset->stats.empty_slabs, ps);\n\t} else if (hpdata_full(ps)) {\n\t\tpsset_bin_stats_remove(psset, psset->stats.full_slabs, ps);\n\t} else {\n\t\tsize_t longest_free_range = hpdata_longest_free_range_get(ps);\n\n\t\tpszind_t pind = sz_psz2ind(sz_psz_quantize_floor(\n\t\t    longest_free_range << LG_PAGE));\n\t\tassert(pind < PSSET_NPSIZES);\n\n\t\tpsset_bin_stats_remove(psset, psset->stats.nonfull_slabs[pind],\n\t\t    ps);\n\t}\n}\n\n/*\n * Put ps into some container so that it can be found during future allocation\n * requests.\n */\nstatic void\npsset_alloc_container_insert(psset_t *psset, hpdata_t *ps) {\n\tassert(!hpdata_in_psset_alloc_container_get(ps));\n\thpdata_in_psset_alloc_container_set(ps, true);\n\tif (hpdata_empty(ps)) {\n\t\t/*\n\t\t * This prepend, paired with popping the head in psset_fit,\n\t\t * means we implement LIFO ordering for the empty slabs set,\n\t\t * which seems reasonable.\n\t\t */\n\t\thpdata_empty_list_prepend(&psset->empty, ps);\n\t} else if (hpdata_full(ps)) {\n\t\t/*\n\t\t * We don't need to keep track of the full slabs; we're never\n\t\t * going to return them from a psset_pick_alloc call.\n\t\t */\n\t} else {\n\t\tsize_t longest_free_range = hpdata_longest_free_range_get(ps);\n\n\t\tpszind_t pind = sz_psz2ind(sz_psz_quantize_floor(\n\t\t   
 longest_free_range << LG_PAGE));\n\t\tassert(pind < PSSET_NPSIZES);\n\n\t\tpsset_hpdata_heap_insert(psset, pind, ps);\n\t}\n}\n\n/* Remove ps from those collections. */\nstatic void\npsset_alloc_container_remove(psset_t *psset, hpdata_t *ps) {\n\tassert(hpdata_in_psset_alloc_container_get(ps));\n\thpdata_in_psset_alloc_container_set(ps, false);\n\n\tif (hpdata_empty(ps)) {\n\t\thpdata_empty_list_remove(&psset->empty, ps);\n\t} else if (hpdata_full(ps)) {\n\t\t/* Same as above -- do nothing in this case. */\n\t} else {\n\t\tsize_t longest_free_range = hpdata_longest_free_range_get(ps);\n\n\t\tpszind_t pind = sz_psz2ind(sz_psz_quantize_floor(\n\t\t    longest_free_range << LG_PAGE));\n\t\tassert(pind < PSSET_NPSIZES);\n\n\t\tpsset_hpdata_heap_remove(psset, pind, ps);\n\t}\n}\n\nstatic size_t\npsset_purge_list_ind(hpdata_t *ps) {\n\tsize_t ndirty = hpdata_ndirty_get(ps);\n\t/* Shouldn't have something with no dirty pages purgeable. */\n\tassert(ndirty > 0);\n\t/*\n\t * Higher indices correspond to lists we'd like to purge earlier; make\n\t * the two highest indices correspond to empty lists, which we attempt\n\t * to purge before purging any non-empty list.  This has two advantages:\n\t * - Empty page slabs are the least likely to get reused (we'll only\n\t *   pick them for an allocation if we have no other choice).\n\t * - Empty page slabs can purge every dirty page they contain in a\n\t *   single call, which is not usually the case.\n\t *\n\t * We purge hugeified empty slabs before nonhugeified ones, on the basis\n\t * that they are fully dirty, while nonhugified slabs might not be, so\n\t * we free up more pages more easily.\n\t */\n\tif (hpdata_nactive_get(ps) == 0) {\n\t\tif (hpdata_huge_get(ps)) {\n\t\t\treturn PSSET_NPURGE_LISTS - 1;\n\t\t} else {\n\t\t\treturn PSSET_NPURGE_LISTS - 2;\n\t\t}\n\t}\n\n\tpszind_t pind = sz_psz2ind(sz_psz_quantize_floor(ndirty << LG_PAGE));\n\t/*\n\t * For non-empty slabs, we may reuse them again.  
Prefer purging\n\t * non-hugeified slabs before hugeified ones then, among pages of\n\t * similar dirtiness.  We still get some benefit from the hugification.\n\t */\n\treturn (size_t)pind * 2 + (hpdata_huge_get(ps) ? 0 : 1);\n}\n\nstatic void\npsset_maybe_remove_purge_list(psset_t *psset, hpdata_t *ps) {\n\t/*\n\t * Remove the hpdata from its purge list (if it's in one).  Even if it's\n\t * going to stay in the same one, by appending it during\n\t * psset_update_end, we move it to the end of its queue, so that we\n\t * purge LRU within a given dirtiness bucket.\n\t */\n\tif (hpdata_purge_allowed_get(ps)) {\n\t\tsize_t ind = psset_purge_list_ind(ps);\n\t\thpdata_purge_list_t *purge_list = &psset->to_purge[ind];\n\t\thpdata_purge_list_remove(purge_list, ps);\n\t\tif (hpdata_purge_list_empty(purge_list)) {\n\t\t\tfb_unset(psset->purge_bitmap, PSSET_NPURGE_LISTS, ind);\n\t\t}\n\t}\n}\n\nstatic void\npsset_maybe_insert_purge_list(psset_t *psset, hpdata_t *ps) {\n\tif (hpdata_purge_allowed_get(ps)) {\n\t\tsize_t ind = psset_purge_list_ind(ps);\n\t\thpdata_purge_list_t *purge_list = &psset->to_purge[ind];\n\t\tif (hpdata_purge_list_empty(purge_list)) {\n\t\t\tfb_set(psset->purge_bitmap, PSSET_NPURGE_LISTS, ind);\n\t\t}\n\t\thpdata_purge_list_append(purge_list, ps);\n\t}\n\n}\n\nvoid\npsset_update_begin(psset_t *psset, hpdata_t *ps) {\n\thpdata_assert_consistent(ps);\n\tassert(hpdata_in_psset_get(ps));\n\thpdata_updating_set(ps, true);\n\tpsset_stats_remove(psset, ps);\n\tif (hpdata_in_psset_alloc_container_get(ps)) {\n\t\t/*\n\t\t * Some metadata updates can break alloc container invariants\n\t\t * (e.g. the longest free range determines the hpdata_heap_t the\n\t\t * pageslab lives in).\n\t\t */\n\t\tassert(hpdata_alloc_allowed_get(ps));\n\t\tpsset_alloc_container_remove(psset, ps);\n\t}\n\tpsset_maybe_remove_purge_list(psset, ps);\n\t/*\n\t * We don't update presence in the hugify list; we try to keep it FIFO,\n\t * even in the presence of other metadata updates.  
We'll update\n\t * presence at the end of the metadata update if necessary.\n\t */\n}\n\nvoid\npsset_update_end(psset_t *psset, hpdata_t *ps) {\n\tassert(hpdata_in_psset_get(ps));\n\thpdata_updating_set(ps, false);\n\tpsset_stats_insert(psset, ps);\n\n\t/*\n\t * The update begin should have removed ps from whatever alloc container\n\t * it was in.\n\t */\n\tassert(!hpdata_in_psset_alloc_container_get(ps));\n\tif (hpdata_alloc_allowed_get(ps)) {\n\t\tpsset_alloc_container_insert(psset, ps);\n\t}\n\tpsset_maybe_insert_purge_list(psset, ps);\n\n\tif (hpdata_hugify_allowed_get(ps)\n\t    && !hpdata_in_psset_hugify_container_get(ps)) {\n\t\thpdata_in_psset_hugify_container_set(ps, true);\n\t\thpdata_hugify_list_append(&psset->to_hugify, ps);\n\t} else if (!hpdata_hugify_allowed_get(ps)\n\t    && hpdata_in_psset_hugify_container_get(ps)) {\n\t\thpdata_in_psset_hugify_container_set(ps, false);\n\t\thpdata_hugify_list_remove(&psset->to_hugify, ps);\n\t}\n\thpdata_assert_consistent(ps);\n}\n\nhpdata_t *\npsset_pick_alloc(psset_t *psset, size_t size) {\n\tassert((size & PAGE_MASK) == 0);\n\tassert(size <= HUGEPAGE);\n\n\tpszind_t min_pind = sz_psz2ind(sz_psz_quantize_ceil(size));\n\tpszind_t pind = (pszind_t)fb_ffs(psset->pageslab_bitmap, PSSET_NPSIZES,\n\t    (size_t)min_pind);\n\tif (pind == PSSET_NPSIZES) {\n\t\treturn hpdata_empty_list_first(&psset->empty);\n\t}\n\thpdata_t *ps = hpdata_age_heap_first(&psset->pageslabs[pind]);\n\tif (ps == NULL) {\n\t\treturn NULL;\n\t}\n\n\thpdata_assert_consistent(ps);\n\n\treturn ps;\n}\n\nhpdata_t *\npsset_pick_purge(psset_t *psset) {\n\tssize_t ind_ssz = fb_fls(psset->purge_bitmap, PSSET_NPURGE_LISTS,\n\t    PSSET_NPURGE_LISTS - 1);\n\tif (ind_ssz < 0) {\n\t\treturn NULL;\n\t}\n\tpszind_t ind = (pszind_t)ind_ssz;\n\tassert(ind < PSSET_NPURGE_LISTS);\n\thpdata_t *ps = hpdata_purge_list_first(&psset->to_purge[ind]);\n\tassert(ps != NULL);\n\treturn ps;\n}\n\nhpdata_t *\npsset_pick_hugify(psset_t *psset) {\n\treturn 
hpdata_hugify_list_first(&psset->to_hugify);\n}\n\nvoid\npsset_insert(psset_t *psset, hpdata_t *ps) {\n\thpdata_in_psset_set(ps, true);\n\n\tpsset_stats_insert(psset, ps);\n\tif (hpdata_alloc_allowed_get(ps)) {\n\t\tpsset_alloc_container_insert(psset, ps);\n\t}\n\tpsset_maybe_insert_purge_list(psset, ps);\n\n\tif (hpdata_hugify_allowed_get(ps)) {\n\t\thpdata_in_psset_hugify_container_set(ps, true);\n\t\thpdata_hugify_list_append(&psset->to_hugify, ps);\n\t}\n}\n\nvoid\npsset_remove(psset_t *psset, hpdata_t *ps) {\n\thpdata_in_psset_set(ps, false);\n\n\tpsset_stats_remove(psset, ps);\n\tif (hpdata_in_psset_alloc_container_get(ps)) {\n\t\tpsset_alloc_container_remove(psset, ps);\n\t}\n\tpsset_maybe_remove_purge_list(psset, ps);\n\tif (hpdata_in_psset_hugify_container_get(ps)) {\n\t\thpdata_in_psset_hugify_container_set(ps, false);\n\t\thpdata_hugify_list_remove(&psset->to_hugify, ps);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/rtree.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/mutex.h\"\n\n/*\n * Only the most significant bits of keys passed to rtree_{read,write}() are\n * used.\n */\nbool\nrtree_new(rtree_t *rtree, base_t *base, bool zeroed) {\n#ifdef JEMALLOC_JET\n\tif (!zeroed) {\n\t\tmemset(rtree, 0, sizeof(rtree_t)); /* Clear root. */\n\t}\n#else\n\tassert(zeroed);\n#endif\n\trtree->base = base;\n\n\tif (malloc_mutex_init(&rtree->init_lock, \"rtree\", WITNESS_RANK_RTREE,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nstatic rtree_node_elm_t *\nrtree_node_alloc(tsdn_t *tsdn, rtree_t *rtree, size_t nelms) {\n\treturn (rtree_node_elm_t *)base_alloc(tsdn, rtree->base,\n\t    nelms * sizeof(rtree_node_elm_t), CACHELINE);\n}\n\nstatic rtree_leaf_elm_t *\nrtree_leaf_alloc(tsdn_t *tsdn, rtree_t *rtree, size_t nelms) {\n\treturn (rtree_leaf_elm_t *)base_alloc(tsdn, rtree->base,\n\t    nelms * sizeof(rtree_leaf_elm_t), CACHELINE);\n}\n\nstatic rtree_node_elm_t *\nrtree_node_init(tsdn_t *tsdn, rtree_t *rtree, unsigned level,\n    atomic_p_t *elmp) {\n\tmalloc_mutex_lock(tsdn, &rtree->init_lock);\n\t/*\n\t * If *elmp is non-null, then it was initialized with the init lock\n\t * held, so we can get by with 'relaxed' here.\n\t */\n\trtree_node_elm_t *node = atomic_load_p(elmp, ATOMIC_RELAXED);\n\tif (node == NULL) {\n\t\tnode = rtree_node_alloc(tsdn, rtree, ZU(1) <<\n\t\t    rtree_levels[level].bits);\n\t\tif (node == NULL) {\n\t\t\tmalloc_mutex_unlock(tsdn, &rtree->init_lock);\n\t\t\treturn NULL;\n\t\t}\n\t\t/*\n\t\t * Even though we hold the lock, a later reader might not; we\n\t\t * need release semantics.\n\t\t */\n\t\tatomic_store_p(elmp, node, ATOMIC_RELEASE);\n\t}\n\tmalloc_mutex_unlock(tsdn, &rtree->init_lock);\n\n\treturn node;\n}\n\nstatic rtree_leaf_elm_t *\nrtree_leaf_init(tsdn_t *tsdn, rtree_t 
*rtree, atomic_p_t *elmp) {\n\tmalloc_mutex_lock(tsdn, &rtree->init_lock);\n\t/*\n\t * If *elmp is non-null, then it was initialized with the init lock\n\t * held, so we can get by with 'relaxed' here.\n\t */\n\trtree_leaf_elm_t *leaf = atomic_load_p(elmp, ATOMIC_RELAXED);\n\tif (leaf == NULL) {\n\t\tleaf = rtree_leaf_alloc(tsdn, rtree, ZU(1) <<\n\t\t    rtree_levels[RTREE_HEIGHT-1].bits);\n\t\tif (leaf == NULL) {\n\t\t\tmalloc_mutex_unlock(tsdn, &rtree->init_lock);\n\t\t\treturn NULL;\n\t\t}\n\t\t/*\n\t\t * Even though we hold the lock, a later reader might not; we\n\t\t * need release semantics.\n\t\t */\n\t\tatomic_store_p(elmp, leaf, ATOMIC_RELEASE);\n\t}\n\tmalloc_mutex_unlock(tsdn, &rtree->init_lock);\n\n\treturn leaf;\n}\n\nstatic bool\nrtree_node_valid(rtree_node_elm_t *node) {\n\treturn ((uintptr_t)node != (uintptr_t)0);\n}\n\nstatic bool\nrtree_leaf_valid(rtree_leaf_elm_t *leaf) {\n\treturn ((uintptr_t)leaf != (uintptr_t)0);\n}\n\nstatic rtree_node_elm_t *\nrtree_child_node_tryread(rtree_node_elm_t *elm, bool dependent) {\n\trtree_node_elm_t *node;\n\n\tif (dependent) {\n\t\tnode = (rtree_node_elm_t *)atomic_load_p(&elm->child,\n\t\t    ATOMIC_RELAXED);\n\t} else {\n\t\tnode = (rtree_node_elm_t *)atomic_load_p(&elm->child,\n\t\t    ATOMIC_ACQUIRE);\n\t}\n\n\tassert(!dependent || node != NULL);\n\treturn node;\n}\n\nstatic rtree_node_elm_t *\nrtree_child_node_read(tsdn_t *tsdn, rtree_t *rtree, rtree_node_elm_t *elm,\n    unsigned level, bool dependent) {\n\trtree_node_elm_t *node;\n\n\tnode = rtree_child_node_tryread(elm, dependent);\n\tif (!dependent && unlikely(!rtree_node_valid(node))) {\n\t\tnode = rtree_node_init(tsdn, rtree, level + 1, &elm->child);\n\t}\n\tassert(!dependent || node != NULL);\n\treturn node;\n}\n\nstatic rtree_leaf_elm_t *\nrtree_child_leaf_tryread(rtree_node_elm_t *elm, bool dependent) {\n\trtree_leaf_elm_t *leaf;\n\n\tif (dependent) {\n\t\tleaf = (rtree_leaf_elm_t *)atomic_load_p(&elm->child,\n\t\t    ATOMIC_RELAXED);\n\t} else 
{\n\t\tleaf = (rtree_leaf_elm_t *)atomic_load_p(&elm->child,\n\t\t    ATOMIC_ACQUIRE);\n\t}\n\n\tassert(!dependent || leaf != NULL);\n\treturn leaf;\n}\n\nstatic rtree_leaf_elm_t *\nrtree_child_leaf_read(tsdn_t *tsdn, rtree_t *rtree, rtree_node_elm_t *elm,\n    unsigned level, bool dependent) {\n\trtree_leaf_elm_t *leaf;\n\n\tleaf = rtree_child_leaf_tryread(elm, dependent);\n\tif (!dependent && unlikely(!rtree_leaf_valid(leaf))) {\n\t\tleaf = rtree_leaf_init(tsdn, rtree, &elm->child);\n\t}\n\tassert(!dependent || leaf != NULL);\n\treturn leaf;\n}\n\nrtree_leaf_elm_t *\nrtree_leaf_elm_lookup_hard(tsdn_t *tsdn, rtree_t *rtree, rtree_ctx_t *rtree_ctx,\n    uintptr_t key, bool dependent, bool init_missing) {\n\trtree_node_elm_t *node;\n\trtree_leaf_elm_t *leaf;\n#if RTREE_HEIGHT > 1\n\tnode = rtree->root;\n#else\n\tleaf = rtree->root;\n#endif\n\n\tif (config_debug) {\n\t\tuintptr_t leafkey = rtree_leafkey(key);\n\t\tfor (unsigned i = 0; i < RTREE_CTX_NCACHE; i++) {\n\t\t\tassert(rtree_ctx->cache[i].leafkey != leafkey);\n\t\t}\n\t\tfor (unsigned i = 0; i < RTREE_CTX_NCACHE_L2; i++) {\n\t\t\tassert(rtree_ctx->l2_cache[i].leafkey != leafkey);\n\t\t}\n\t}\n\n#define RTREE_GET_CHILD(level) {\t\t\t\t\t\\\n\t\tassert(level < RTREE_HEIGHT-1);\t\t\t\t\\\n\t\tif (level != 0 && !dependent &&\t\t\t\t\\\n\t\t    unlikely(!rtree_node_valid(node))) {\t\t\\\n\t\t\treturn NULL;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tuintptr_t subkey = rtree_subkey(key, level);\t\t\\\n\t\tif (level + 2 < RTREE_HEIGHT) {\t\t\t\t\\\n\t\t\tnode = init_missing ?\t\t\t\t\\\n\t\t\t    rtree_child_node_read(tsdn, rtree,\t\t\\\n\t\t\t    &node[subkey], level, dependent) :\t\t\\\n\t\t\t    rtree_child_node_tryread(&node[subkey],\t\\\n\t\t\t    dependent);\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tleaf = init_missing ?\t\t\t\t\\\n\t\t\t    rtree_child_leaf_read(tsdn, rtree,\t\t\\\n\t\t\t    &node[subkey], level, dependent) :\t\t\\\n\t\t\t    rtree_child_leaf_tryread(&node[subkey],\t\\\n\t\t\t    
dependent);\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\n\t/*\n\t * Cache replacement upon hard lookup (i.e. L1 & L2 rtree cache miss):\n\t * (1) evict last entry in L2 cache; (2) move the collision slot from L1\n\t * cache down to L2; and 3) fill L1.\n\t */\n#define RTREE_GET_LEAF(level) {\t\t\t\t\t\t\\\n\t\tassert(level == RTREE_HEIGHT-1);\t\t\t\\\n\t\tif (!dependent && unlikely(!rtree_leaf_valid(leaf))) {\t\\\n\t\t\treturn NULL;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tif (RTREE_CTX_NCACHE_L2 > 1) {\t\t\t\t\\\n\t\t\tmemmove(&rtree_ctx->l2_cache[1],\t\t\\\n\t\t\t    &rtree_ctx->l2_cache[0],\t\t\t\\\n\t\t\t    sizeof(rtree_ctx_cache_elm_t) *\t\t\\\n\t\t\t    (RTREE_CTX_NCACHE_L2 - 1));\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tsize_t slot = rtree_cache_direct_map(key);\t\t\\\n\t\trtree_ctx->l2_cache[0].leafkey =\t\t\t\\\n\t\t    rtree_ctx->cache[slot].leafkey;\t\t\t\\\n\t\trtree_ctx->l2_cache[0].leaf =\t\t\t\t\\\n\t\t    rtree_ctx->cache[slot].leaf;\t\t\t\\\n\t\tuintptr_t leafkey = rtree_leafkey(key);\t\t\t\\\n\t\trtree_ctx->cache[slot].leafkey = leafkey;\t\t\\\n\t\trtree_ctx->cache[slot].leaf = leaf;\t\t\t\\\n\t\tuintptr_t subkey = rtree_subkey(key, level);\t\t\\\n\t\treturn &leaf[subkey];\t\t\t\t\t\\\n\t}\n\tif (RTREE_HEIGHT > 1) {\n\t\tRTREE_GET_CHILD(0)\n\t}\n\tif (RTREE_HEIGHT > 2) {\n\t\tRTREE_GET_CHILD(1)\n\t}\n\tif (RTREE_HEIGHT > 3) {\n\t\tfor (unsigned i = 2; i < RTREE_HEIGHT-1; i++) {\n\t\t\tRTREE_GET_CHILD(i)\n\t\t}\n\t}\n\tRTREE_GET_LEAF(RTREE_HEIGHT-1)\n#undef RTREE_GET_CHILD\n#undef RTREE_GET_LEAF\n\tnot_reached();\n}\n\nvoid\nrtree_ctx_data_init(rtree_ctx_t *ctx) {\n\tfor (unsigned i = 0; i < RTREE_CTX_NCACHE; i++) {\n\t\trtree_ctx_cache_elm_t *cache = &ctx->cache[i];\n\t\tcache->leafkey = RTREE_LEAFKEY_INVALID;\n\t\tcache->leaf = NULL;\n\t}\n\tfor (unsigned i = 0; i < RTREE_CTX_NCACHE_L2; i++) {\n\t\trtree_ctx_cache_elm_t *cache = &ctx->l2_cache[i];\n\t\tcache->leafkey = RTREE_LEAFKEY_INVALID;\n\t\tcache->leaf = NULL;\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/safety_check.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\nstatic safety_check_abort_hook_t safety_check_abort;\n\nvoid safety_check_fail_sized_dealloc(bool current_dealloc, const void *ptr,\n    size_t true_size, size_t input_size) {\n\tchar *src = current_dealloc ? \"the current pointer being freed\" :\n\t    \"in thread cache, possibly from previous deallocations\";\n\n\tsafety_check_fail(\"<jemalloc>: size mismatch detected (true size %zu \"\n\t    \"vs input size %zu), likely caused by application sized \"\n\t    \"deallocation bugs (source address: %p, %s). Suggest building with \"\n\t    \"--enable-debug or address sanitizer for debugging. Abort.\\n\",\n\t    true_size, input_size, ptr, src);\n}\n\nvoid safety_check_set_abort(safety_check_abort_hook_t abort_fn) {\n\tsafety_check_abort = abort_fn;\n}\n\nvoid safety_check_fail(const char *format, ...) {\n\tchar buf[MALLOC_PRINTF_BUFSIZE];\n\n\tva_list ap;\n\tva_start(ap, format);\n\tmalloc_vsnprintf(buf, MALLOC_PRINTF_BUFSIZE, format, ap);\n\tva_end(ap);\n\n\tif (safety_check_abort == NULL) {\n\t\tmalloc_write(buf);\n\t\tabort();\n\t} else {\n\t\tsafety_check_abort(buf);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/san.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/tsd.h\"\n\n/* The sanitizer options. */\nsize_t opt_san_guard_large = SAN_GUARD_LARGE_EVERY_N_EXTENTS_DEFAULT;\nsize_t opt_san_guard_small = SAN_GUARD_SMALL_EVERY_N_EXTENTS_DEFAULT;\n\n/* Aligned (-1 is off) ptrs will be junked & stashed on dealloc. */\nssize_t opt_lg_san_uaf_align = SAN_LG_UAF_ALIGN_DEFAULT;\n\n/*\n *  Initialized in san_init().  When disabled, the mask is set to (uintptr_t)-1\n *  to always fail the nonfast_align check.\n */\nuintptr_t san_cache_bin_nonfast_mask = SAN_CACHE_BIN_NONFAST_MASK_DEFAULT;\n\nstatic inline void\nsan_find_guarded_addr(edata_t *edata, uintptr_t *guard1, uintptr_t *guard2,\n    uintptr_t *addr, size_t size, bool left, bool right) {\n\tassert(!edata_guarded_get(edata));\n\tassert(size % PAGE == 0);\n\t*addr = (uintptr_t)edata_base_get(edata);\n\tif (left) {\n\t\t*guard1 = *addr;\n\t\t*addr += SAN_PAGE_GUARD;\n\t} else {\n\t\t*guard1 = 0;\n\t}\n\n\tif (right) {\n\t\t*guard2 = *addr + size;\n\t} else {\n\t\t*guard2 = 0;\n\t}\n}\n\nstatic inline void\nsan_find_unguarded_addr(edata_t *edata, uintptr_t *guard1, uintptr_t *guard2,\n    uintptr_t *addr, size_t size, bool left, bool right) {\n\tassert(edata_guarded_get(edata));\n\tassert(size % PAGE == 0);\n\t*addr = (uintptr_t)edata_base_get(edata);\n\tif (right) {\n\t\t*guard2 = *addr + size;\n\t} else {\n\t\t*guard2 = 0;\n\t}\n\n\tif (left) {\n\t\t*guard1 = *addr - SAN_PAGE_GUARD;\n\t\tassert(*guard1 != 0);\n\t\t*addr = *guard1;\n\t} else {\n\t\t*guard1 = 0;\n\t}\n}\n\nvoid\nsan_guard_pages(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata, emap_t *emap,\n    bool left, bool right, bool remap) {\n\tassert(left || right);\n\tif (remap) {\n\t\temap_deregister_boundary(tsdn, emap, edata);\n\t}\n\n\tsize_t 
size_with_guards = edata_size_get(edata);\n\tsize_t usize = (left && right)\n\t    ? san_two_side_unguarded_sz(size_with_guards)\n\t    : san_one_side_unguarded_sz(size_with_guards);\n\n\tuintptr_t guard1, guard2, addr;\n\tsan_find_guarded_addr(edata, &guard1, &guard2, &addr, usize, left,\n\t    right);\n\n\tassert(edata_state_get(edata) == extent_state_active);\n\tehooks_guard(tsdn, ehooks, (void *)guard1, (void *)guard2);\n\n\t/* Update the guarded addr and usable size of the edata. */\n\tedata_size_set(edata, usize);\n\tedata_addr_set(edata, (void *)addr);\n\tedata_guarded_set(edata, true);\n\n\tif (remap) {\n\t\temap_register_boundary(tsdn, emap, edata, SC_NSIZES,\n\t\t    /* slab */ false);\n\t}\n}\n\nstatic void\nsan_unguard_pages_impl(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap, bool left, bool right, bool remap) {\n\tassert(left || right);\n\t/* Remove the inner boundary which no longer exists. */\n\tif (remap) {\n\t\tassert(edata_state_get(edata) == extent_state_active);\n\t\temap_deregister_boundary(tsdn, emap, edata);\n\t} else {\n\t\tassert(edata_state_get(edata) == extent_state_retained);\n\t}\n\n\tsize_t size = edata_size_get(edata);\n\tsize_t size_with_guards = (left && right)\n\t    ? san_two_side_guarded_sz(size)\n\t    : san_one_side_guarded_sz(size);\n\n\tuintptr_t guard1, guard2, addr;\n\tsan_find_unguarded_addr(edata, &guard1, &guard2, &addr, size, left,\n\t    right);\n\n\tehooks_unguard(tsdn, ehooks, (void *)guard1, (void *)guard2);\n\n\t/* Update the true addr and usable size of the edata. 
*/\n\tedata_size_set(edata, size_with_guards);\n\tedata_addr_set(edata, (void *)addr);\n\tedata_guarded_set(edata, false);\n\n\t/*\n\t * Then re-register the outer boundary including the guards, if\n\t * requested.\n\t */\n\tif (remap) {\n\t\temap_register_boundary(tsdn, emap, edata, SC_NSIZES,\n\t\t    /* slab */ false);\n\t}\n}\n\nvoid\nsan_unguard_pages(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap, bool left, bool right) {\n\tsan_unguard_pages_impl(tsdn, ehooks, edata, emap, left, right,\n\t    /* remap */ true);\n}\n\nvoid\nsan_unguard_pages_pre_destroy(tsdn_t *tsdn, ehooks_t *ehooks, edata_t *edata,\n    emap_t *emap) {\n\temap_assert_not_mapped(tsdn, emap, edata);\n\t/*\n\t * We don't want to touch the emap of about-to-be-destroyed extents, as\n\t * they have been unmapped upon eviction from the retained ecache. Also,\n\t * we unguard the extents to the right, because retained extents only\n\t * own their right guard page per san_bump_alloc's logic.\n\t */\n\tsan_unguard_pages_impl(tsdn, ehooks, edata, emap, /* left */ false,\n\t    /* right */ true, /* remap */ false);\n}\n\nstatic bool\nsan_stashed_corrupted(void *ptr, size_t size) {\n\tif (san_junk_ptr_should_slow()) {\n\t\tfor (size_t i = 0; i < size; i++) {\n\t\t\tif (((char *)ptr)[i] != (char)uaf_detect_junk) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t}\n\t\treturn false;\n\t}\n\n\tvoid *first, *mid, *last;\n\tsan_junk_ptr_locations(ptr, size, &first, &mid, &last);\n\tif (*(uintptr_t *)first != uaf_detect_junk ||\n\t    *(uintptr_t *)mid != uaf_detect_junk ||\n\t    *(uintptr_t *)last != uaf_detect_junk) {\n\t\treturn true;\n\t}\n\n\treturn false;\n}\n\nvoid\nsan_check_stashed_ptrs(void **ptrs, size_t nstashed, size_t usize) {\n\t/*\n\t * Verify that the junk-filled & stashed pointers remain unchanged, to\n\t * detect write-after-free.\n\t */\n\tfor (size_t n = 0; n < nstashed; n++) {\n\t\tvoid *stashed = ptrs[n];\n\t\tassert(stashed != 
NULL);\n\t\tassert(cache_bin_nonfast_aligned(stashed));\n\t\tif (unlikely(san_stashed_corrupted(stashed, usize))) {\n\t\t\tsafety_check_fail(\"<jemalloc>: Write-after-free \"\n\t\t\t    \"detected on deallocated pointer %p (size %zu).\\n\",\n\t\t\t    stashed, usize);\n\t\t}\n\t}\n}\n\nvoid\ntsd_san_init(tsd_t *tsd) {\n\t*tsd_san_extents_until_guard_smallp_get(tsd) = opt_san_guard_small;\n\t*tsd_san_extents_until_guard_largep_get(tsd) = opt_san_guard_large;\n}\n\nvoid\nsan_init(ssize_t lg_san_uaf_align) {\n\tassert(lg_san_uaf_align == -1 || lg_san_uaf_align >= LG_PAGE);\n\tif (lg_san_uaf_align == -1) {\n\t\tsan_cache_bin_nonfast_mask = (uintptr_t)-1;\n\t\treturn;\n\t}\n\n\tsan_cache_bin_nonfast_mask = ((uintptr_t)1 << lg_san_uaf_align) - 1;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/san_bump.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/san_bump.h\"\n#include \"jemalloc/internal/pac.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/ehooks.h\"\n#include \"jemalloc/internal/edata_cache.h\"\n\nstatic bool\nsan_bump_grow_locked(tsdn_t *tsdn, san_bump_alloc_t *sba, pac_t *pac,\n    ehooks_t *ehooks, size_t size);\n\nedata_t *\nsan_bump_alloc(tsdn_t *tsdn, san_bump_alloc_t* sba, pac_t *pac,\n    ehooks_t *ehooks, size_t size, bool zero) {\n\tassert(san_bump_enabled());\n\n\tedata_t* to_destroy;\n\tsize_t guarded_size = san_one_side_guarded_sz(size);\n\n\tmalloc_mutex_lock(tsdn, &sba->mtx);\n\n\tif (sba->curr_reg == NULL ||\n\t    edata_size_get(sba->curr_reg) < guarded_size) {\n\t\t/*\n\t\t * If the current region can't accommodate the allocation,\n\t\t * try replacing it with a larger one and destroy current if the\n\t\t * replacement succeeds.\n\t\t */\n\t\tto_destroy = sba->curr_reg;\n\t\tbool err = san_bump_grow_locked(tsdn, sba, pac, ehooks,\n\t\t    guarded_size);\n\t\tif (err) {\n\t\t\tgoto label_err;\n\t\t}\n\t} else {\n\t\tto_destroy = NULL;\n\t}\n\tassert(guarded_size <= edata_size_get(sba->curr_reg));\n\tsize_t trail_size = edata_size_get(sba->curr_reg) - guarded_size;\n\n\tedata_t* edata;\n\tif (trail_size != 0) {\n\t\tedata_t* curr_reg_trail = extent_split_wrapper(tsdn, pac,\n\t\t    ehooks, sba->curr_reg, guarded_size, trail_size,\n\t\t    /* holding_core_locks */ true);\n\t\tif (curr_reg_trail == NULL) {\n\t\t\tgoto label_err;\n\t\t}\n\t\tedata = sba->curr_reg;\n\t\tsba->curr_reg = curr_reg_trail;\n\t} else {\n\t\tedata = sba->curr_reg;\n\t\tsba->curr_reg = NULL;\n\t}\n\n\tmalloc_mutex_unlock(tsdn, &sba->mtx);\n\n\tassert(!edata_guarded_get(edata));\n\tassert(sba->curr_reg == NULL || !edata_guarded_get(sba->curr_reg));\n\tassert(to_destroy == NULL || !edata_guarded_get(to_destroy));\n\n\tif (to_destroy != 
NULL) {\n\t\textent_destroy_wrapper(tsdn, pac, ehooks, to_destroy);\n\t}\n\n\tsan_guard_pages(tsdn, ehooks, edata, pac->emap, /* left */ false,\n\t    /* right */ true, /* remap */ true);\n\n\tif (extent_commit_zero(tsdn, ehooks, edata, /* commit */ true, zero,\n\t    /* growing_retained */ false)) {\n\t\textent_record(tsdn, pac, ehooks, &pac->ecache_retained,\n\t\t    edata);\n\t\treturn NULL;\n\t}\n\n\tif (config_prof) {\n\t\textent_gdump_add(tsdn, edata);\n\t}\n\n\treturn edata;\nlabel_err:\n\tmalloc_mutex_unlock(tsdn, &sba->mtx);\n\treturn NULL;\n}\n\nstatic bool\nsan_bump_grow_locked(tsdn_t *tsdn, san_bump_alloc_t *sba, pac_t *pac,\n    ehooks_t *ehooks, size_t size) {\n\tmalloc_mutex_assert_owner(tsdn, &sba->mtx);\n\n\tbool committed = false, zeroed = false;\n\tsize_t alloc_size = size > SBA_RETAINED_ALLOC_SIZE ? size :\n\t    SBA_RETAINED_ALLOC_SIZE;\n\tassert((alloc_size & PAGE_MASK) == 0);\n\tsba->curr_reg = extent_alloc_wrapper(tsdn, pac, ehooks, NULL,\n\t    alloc_size, PAGE, zeroed, &committed,\n\t    /* growing_retained */ true);\n\tif (sba->curr_reg == NULL) {\n\t\treturn true;\n\t}\n\treturn false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/sc.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/bit_util.h\"\n#include \"jemalloc/internal/bitmap.h\"\n#include \"jemalloc/internal/pages.h\"\n#include \"jemalloc/internal/sc.h\"\n\n/*\n * This module computes the size classes used to satisfy allocations.  The logic\n * here was ported more or less line-by-line from a shell script, and because of\n * that is not the most idiomatic C.  Eventually we should fix this, but for now\n * at least the damage is compartmentalized to this file.\n */\n\nsize_t\nreg_size_compute(int lg_base, int lg_delta, int ndelta) {\n\treturn (ZU(1) << lg_base) + (ZU(ndelta) << lg_delta);\n}\n\n/* Returns the number of pages in the slab. */\nstatic int\nslab_size(int lg_page, int lg_base, int lg_delta, int ndelta) {\n\tsize_t page = (ZU(1) << lg_page);\n\tsize_t reg_size = reg_size_compute(lg_base, lg_delta, ndelta);\n\n\tsize_t try_slab_size = page;\n\tsize_t try_nregs = try_slab_size / reg_size;\n\tsize_t perfect_slab_size = 0;\n\tbool perfect = false;\n\t/*\n\t * This loop continues until we find the least common multiple of the\n\t * page size and size class size.  Size classes are all of the form\n\t * base + ndelta * delta == (ndelta + base/delta) * delta, which is\n\t * (ndelta + ngroup) * delta.  The way we choose slabbing strategies\n\t * means that delta is at most the page size and ndelta < ngroup.  
So\n\t * the loop executes for at most 2 * ngroup - 1 iterations, which is\n\t * also the bound on the number of pages in a slab chosen by default.\n\t * With the current default settings, this is at most 7.\n\t */\n\twhile (!perfect) {\n\t\tperfect_slab_size = try_slab_size;\n\t\tsize_t perfect_nregs = try_nregs;\n\t\ttry_slab_size += page;\n\t\ttry_nregs = try_slab_size / reg_size;\n\t\tif (perfect_slab_size == perfect_nregs * reg_size) {\n\t\t\tperfect = true;\n\t\t}\n\t}\n\treturn (int)(perfect_slab_size / page);\n}\n\nstatic void\nsize_class(\n    /* Output. */\n    sc_t *sc,\n    /* Configuration decisions. */\n    int lg_max_lookup, int lg_page, int lg_ngroup,\n    /* Inputs specific to the size class. */\n    int index, int lg_base, int lg_delta, int ndelta) {\n\tsc->index = index;\n\tsc->lg_base = lg_base;\n\tsc->lg_delta = lg_delta;\n\tsc->ndelta = ndelta;\n\tsize_t size = reg_size_compute(lg_base, lg_delta, ndelta);\n\tsc->psz = (size % (ZU(1) << lg_page) == 0);\n\tif (index == 0) {\n\t\tassert(!sc->psz);\n\t}\n\tif (size < (ZU(1) << (lg_page + lg_ngroup))) {\n\t\tsc->bin = true;\n\t\tsc->pgs = slab_size(lg_page, lg_base, lg_delta, ndelta);\n\t} else {\n\t\tsc->bin = false;\n\t\tsc->pgs = 0;\n\t}\n\tif (size <= (ZU(1) << lg_max_lookup)) {\n\t\tsc->lg_delta_lookup = lg_delta;\n\t} else {\n\t\tsc->lg_delta_lookup = 0;\n\t}\n}\n\nstatic void\nsize_classes(\n    /* Output. */\n    sc_data_t *sc_data,\n    /* Determined by the system. */\n    size_t lg_ptr_size, int lg_quantum,\n    /* Configuration decisions. */\n    int lg_tiny_min, int lg_max_lookup, int lg_page, int lg_ngroup) {\n\tint ptr_bits = (1 << lg_ptr_size) * 8;\n\tint ngroup = (1 << lg_ngroup);\n\tint ntiny = 0;\n\tint nlbins = 0;\n\tint lg_tiny_maxclass = (unsigned)-1;\n\tint nbins = 0;\n\tint npsizes = 0;\n\n\tint index = 0;\n\n\tint ndelta = 0;\n\tint lg_base = lg_tiny_min;\n\tint lg_delta = lg_base;\n\n\t/* Outputs that we update as we go. 
*/\n\tsize_t lookup_maxclass = 0;\n\tsize_t small_maxclass = 0;\n\tint lg_large_minclass = 0;\n\tsize_t large_maxclass = 0;\n\n\t/* Tiny size classes. */\n\twhile (lg_base < lg_quantum) {\n\t\tsc_t *sc = &sc_data->sc[index];\n\t\tsize_class(sc, lg_max_lookup, lg_page, lg_ngroup, index,\n\t\t    lg_base, lg_delta, ndelta);\n\t\tif (sc->lg_delta_lookup != 0) {\n\t\t\tnlbins = index + 1;\n\t\t}\n\t\tif (sc->psz) {\n\t\t\tnpsizes++;\n\t\t}\n\t\tif (sc->bin) {\n\t\t\tnbins++;\n\t\t}\n\t\tntiny++;\n\t\t/* Final written value is correct. */\n\t\tlg_tiny_maxclass = lg_base;\n\t\tindex++;\n\t\tlg_delta = lg_base;\n\t\tlg_base++;\n\t}\n\n\t/* First non-tiny (pseudo) group. */\n\tif (ntiny != 0) {\n\t\tsc_t *sc = &sc_data->sc[index];\n\t\t/*\n\t\t * See the note in sc.h; the first non-tiny size class has an\n\t\t * unusual encoding.\n\t\t */\n\t\tlg_base--;\n\t\tndelta = 1;\n\t\tsize_class(sc, lg_max_lookup, lg_page, lg_ngroup, index,\n\t\t    lg_base, lg_delta, ndelta);\n\t\tindex++;\n\t\tlg_base++;\n\t\tlg_delta++;\n\t\tif (sc->psz) {\n\t\t\tnpsizes++;\n\t\t}\n\t\tif (sc->bin) {\n\t\t\tnbins++;\n\t\t}\n\t}\n\twhile (ndelta < ngroup) {\n\t\tsc_t *sc = &sc_data->sc[index];\n\t\tsize_class(sc, lg_max_lookup, lg_page, lg_ngroup, index,\n\t\t    lg_base, lg_delta, ndelta);\n\t\tindex++;\n\t\tndelta++;\n\t\tif (sc->psz) {\n\t\t\tnpsizes++;\n\t\t}\n\t\tif (sc->bin) {\n\t\t\tnbins++;\n\t\t}\n\t}\n\n\t/* All remaining groups. */\n\tlg_base = lg_base + lg_ngroup;\n\twhile (lg_base < ptr_bits - 1) {\n\t\tndelta = 1;\n\t\tint ndelta_limit;\n\t\tif (lg_base == ptr_bits - 2) {\n\t\t\tndelta_limit = ngroup - 1;\n\t\t} else {\n\t\t\tndelta_limit = ngroup;\n\t\t}\n\t\twhile (ndelta <= ndelta_limit) {\n\t\t\tsc_t *sc = &sc_data->sc[index];\n\t\t\tsize_class(sc, lg_max_lookup, lg_page, lg_ngroup, index,\n\t\t\t    lg_base, lg_delta, ndelta);\n\t\t\tif (sc->lg_delta_lookup != 0) {\n\t\t\t\tnlbins = index + 1;\n\t\t\t\t/* Final written value is correct. 
*/\n\t\t\t\tlookup_maxclass = (ZU(1) << lg_base)\n\t\t\t\t    + (ZU(ndelta) << lg_delta);\n\t\t\t}\n\t\t\tif (sc->psz) {\n\t\t\t\tnpsizes++;\n\t\t\t}\n\t\t\tif (sc->bin) {\n\t\t\t\tnbins++;\n\t\t\t\t/* Final written value is correct. */\n\t\t\t\tsmall_maxclass = (ZU(1) << lg_base)\n\t\t\t\t    + (ZU(ndelta) << lg_delta);\n\t\t\t\tif (lg_ngroup > 0) {\n\t\t\t\t\tlg_large_minclass = lg_base + 1;\n\t\t\t\t} else {\n\t\t\t\t\tlg_large_minclass = lg_base + 2;\n\t\t\t\t}\n\t\t\t}\n\t\t\tlarge_maxclass = (ZU(1) << lg_base)\n\t\t\t    + (ZU(ndelta) << lg_delta);\n\t\t\tindex++;\n\t\t\tndelta++;\n\t\t}\n\t\tlg_base++;\n\t\tlg_delta++;\n\t}\n\t/* Additional outputs. */\n\tint nsizes = index;\n\tunsigned lg_ceil_nsizes = lg_ceil(nsizes);\n\n\t/* Fill in the output data. */\n\tsc_data->ntiny = ntiny;\n\tsc_data->nlbins = nlbins;\n\tsc_data->nbins = nbins;\n\tsc_data->nsizes = nsizes;\n\tsc_data->lg_ceil_nsizes = lg_ceil_nsizes;\n\tsc_data->npsizes = npsizes;\n\tsc_data->lg_tiny_maxclass = lg_tiny_maxclass;\n\tsc_data->lookup_maxclass = lookup_maxclass;\n\tsc_data->small_maxclass = small_maxclass;\n\tsc_data->lg_large_minclass = lg_large_minclass;\n\tsc_data->large_minclass = (ZU(1) << lg_large_minclass);\n\tsc_data->large_maxclass = large_maxclass;\n\n\t/*\n\t * We compute these values in two ways:\n\t *   - Incrementally, as above.\n\t *   - In macros, in sc.h.\n\t * The computation is easier when done incrementally, but putting it in\n\t * a constant makes it available to the fast paths without having to\n\t * touch the extra global cacheline.  
We assert, however, that the two\n\t * computations are equivalent.\n\t */\n\tassert(sc_data->npsizes == SC_NPSIZES);\n\tassert(sc_data->lg_tiny_maxclass == SC_LG_TINY_MAXCLASS);\n\tassert(sc_data->small_maxclass == SC_SMALL_MAXCLASS);\n\tassert(sc_data->large_minclass == SC_LARGE_MINCLASS);\n\tassert(sc_data->lg_large_minclass == SC_LG_LARGE_MINCLASS);\n\tassert(sc_data->large_maxclass == SC_LARGE_MAXCLASS);\n\n\t/*\n\t * In the allocation fastpath, we want to assume that we can\n\t * unconditionally subtract the requested allocation size from\n\t * a ssize_t, and detect passing through 0 correctly.  This\n\t * results in optimal generated code.  For this to work, the\n\t * maximum allocation size must be less than SSIZE_MAX.\n\t */\n\tassert(SC_LARGE_MAXCLASS < SSIZE_MAX);\n}\n\nvoid\nsc_data_init(sc_data_t *sc_data) {\n\tsize_classes(sc_data, LG_SIZEOF_PTR, LG_QUANTUM, SC_LG_TINY_MIN,\n\t    SC_LG_MAX_LOOKUP, LG_PAGE, SC_LG_NGROUP);\n\n\tsc_data->initialized = true;\n}\n\nstatic void\nsc_data_update_sc_slab_size(sc_t *sc, size_t reg_size, size_t pgs_guess) {\n\tsize_t min_pgs = reg_size / PAGE;\n\tif (reg_size % PAGE != 0) {\n\t\tmin_pgs++;\n\t}\n\t/*\n\t * BITMAP_MAXBITS is actually determined by putting the smallest\n\t * possible size-class on one page, so this can never be 0.\n\t */\n\tsize_t max_pgs = BITMAP_MAXBITS * reg_size / PAGE;\n\n\tassert(min_pgs <= max_pgs);\n\tassert(min_pgs > 0);\n\tassert(max_pgs >= 1);\n\tif (pgs_guess < min_pgs) {\n\t\tsc->pgs = (int)min_pgs;\n\t} else if (pgs_guess > max_pgs) {\n\t\tsc->pgs = (int)max_pgs;\n\t} else {\n\t\tsc->pgs = (int)pgs_guess;\n\t}\n}\n\nvoid\nsc_data_update_slab_size(sc_data_t *data, size_t begin, size_t end, int pgs) {\n\tassert(data->initialized);\n\tfor (int i = 0; i < data->nsizes; i++) {\n\t\tsc_t *sc = &data->sc[i];\n\t\tif (!sc->bin) {\n\t\t\tbreak;\n\t\t}\n\t\tsize_t reg_size = reg_size_compute(sc->lg_base, sc->lg_delta,\n\t\t    sc->ndelta);\n\t\tif (begin <= reg_size && reg_size <= end) 
{\n\t\t\tsc_data_update_sc_slab_size(sc, reg_size, pgs);\n\t\t}\n\t}\n}\n\nvoid\nsc_boot(sc_data_t *data) {\n\tsc_data_init(data);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/sec.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/sec.h\"\n\nstatic edata_t *sec_alloc(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t alignment, bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated);\nstatic bool sec_expand(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool zero, bool *deferred_work_generated);\nstatic bool sec_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool *deferred_work_generated);\nstatic void sec_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated);\n\nstatic void\nsec_bin_init(sec_bin_t *bin) {\n\tbin->being_batch_filled = false;\n\tbin->bytes_cur = 0;\n\tedata_list_active_init(&bin->freelist);\n}\n\nbool\nsec_init(tsdn_t *tsdn, sec_t *sec, base_t *base, pai_t *fallback,\n    const sec_opts_t *opts) {\n\tassert(opts->max_alloc >= PAGE);\n\n\tsize_t max_alloc = PAGE_FLOOR(opts->max_alloc);\n\tpszind_t npsizes = sz_psz2ind(max_alloc) + 1;\n\n\tsize_t sz_shards = opts->nshards * sizeof(sec_shard_t);\n\tsize_t sz_bins = opts->nshards * (size_t)npsizes * sizeof(sec_bin_t);\n\tsize_t sz_alloc = sz_shards + sz_bins;\n\tvoid *dynalloc = base_alloc(tsdn, base, sz_alloc, CACHELINE);\n\tif (dynalloc == NULL) {\n\t\treturn true;\n\t}\n\tsec_shard_t *shard_cur = (sec_shard_t *)dynalloc;\n\tsec->shards = shard_cur;\n\tsec_bin_t *bin_cur = (sec_bin_t *)&shard_cur[opts->nshards];\n\t/* Just for asserts, below. 
*/\n\tsec_bin_t *bin_start = bin_cur;\n\n\tfor (size_t i = 0; i < opts->nshards; i++) {\n\t\tsec_shard_t *shard = shard_cur;\n\t\tshard_cur++;\n\t\tbool err = malloc_mutex_init(&shard->mtx, \"sec_shard\",\n\t\t    WITNESS_RANK_SEC_SHARD, malloc_mutex_rank_exclusive);\n\t\tif (err) {\n\t\t\treturn true;\n\t\t}\n\t\tshard->enabled = true;\n\t\tshard->bins = bin_cur;\n\t\tfor (pszind_t j = 0; j < npsizes; j++) {\n\t\t\tsec_bin_init(&shard->bins[j]);\n\t\t\tbin_cur++;\n\t\t}\n\t\tshard->bytes_cur = 0;\n\t\tshard->to_flush_next = 0;\n\t}\n\t/*\n\t * Should have exactly matched the bin_start to the first unused byte\n\t * after the shards.\n\t */\n\tassert((void *)shard_cur == (void *)bin_start);\n\t/* And the last bin to use up the last bytes of the allocation. */\n\tassert((char *)bin_cur == ((char *)dynalloc + sz_alloc));\n\tsec->fallback = fallback;\n\n\n\tsec->opts = *opts;\n\tsec->npsizes = npsizes;\n\n\t/*\n\t * Initialize these last so that an improper use of an SEC whose\n\t * initialization failed will segfault in an easy-to-spot way.\n\t */\n\tsec->pai.alloc = &sec_alloc;\n\tsec->pai.alloc_batch = &pai_alloc_batch_default;\n\tsec->pai.expand = &sec_expand;\n\tsec->pai.shrink = &sec_shrink;\n\tsec->pai.dalloc = &sec_dalloc;\n\tsec->pai.dalloc_batch = &pai_dalloc_batch_default;\n\n\treturn false;\n}\n\nstatic sec_shard_t *\nsec_shard_pick(tsdn_t *tsdn, sec_t *sec) {\n\t/*\n\t * Eventually, we should implement affinity, tracking source shard using\n\t * the edata_t's newly freed up fields.  For now, just randomly\n\t * distribute across all shards.\n\t */\n\tif (tsdn_null(tsdn)) {\n\t\treturn &sec->shards[0];\n\t}\n\ttsd_t *tsd = tsdn_tsd(tsdn);\n\tuint8_t *idxp = tsd_sec_shardp_get(tsd);\n\tif (*idxp == (uint8_t)-1) {\n\t\t/*\n\t\t * First use; initialize using the trick from Daniel Lemire's\n\t\t * \"A fast alternative to the modulo reduction.  
Use a 64 bit\n\t\t * number to store 32 bits, since we'll deliberately overflow\n\t\t * when we multiply by the number of shards.\n\t\t */\n\t\tuint64_t rand32 = prng_lg_range_u64(tsd_prng_statep_get(tsd), 32);\n\t\tuint32_t idx =\n\t\t    (uint32_t)((rand32 * (uint64_t)sec->opts.nshards) >> 32);\n\t\tassert(idx < (uint32_t)sec->opts.nshards);\n\t\t*idxp = (uint8_t)idx;\n\t}\n\treturn &sec->shards[*idxp];\n}\n\n/*\n * Perhaps surprisingly, this can be called on the alloc pathways; if we hit an\n * empty cache, we'll try to fill it, which can push the shard over its limit.\n */\nstatic void\nsec_flush_some_and_unlock(tsdn_t *tsdn, sec_t *sec, sec_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tedata_list_active_t to_flush;\n\tedata_list_active_init(&to_flush);\n\twhile (shard->bytes_cur > sec->opts.bytes_after_flush) {\n\t\t/* Pick a victim. */\n\t\tsec_bin_t *bin = &shard->bins[shard->to_flush_next];\n\n\t\t/* Update our victim-picking state. */\n\t\tshard->to_flush_next++;\n\t\tif (shard->to_flush_next == sec->npsizes) {\n\t\t\tshard->to_flush_next = 0;\n\t\t}\n\n\t\tassert(shard->bytes_cur >= bin->bytes_cur);\n\t\tif (bin->bytes_cur != 0) {\n\t\t\tshard->bytes_cur -= bin->bytes_cur;\n\t\t\tbin->bytes_cur = 0;\n\t\t\tedata_list_active_concat(&to_flush, &bin->freelist);\n\t\t}\n\t\t/*\n\t\t * Either bin->bytes_cur was 0, in which case we didn't touch\n\t\t * the bin list but it should be empty anyway (or else we\n\t\t * missed a bytes_cur update on a list modification), or it\n\t\t * *wasn't* 0 and we emptied it ourselves.  
Either way, it should\n\t\t * be empty now.\n\t\t */\n\t\tassert(edata_list_active_empty(&bin->freelist));\n\t}\n\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\tbool deferred_work_generated = false;\n\tpai_dalloc_batch(tsdn, sec->fallback, &to_flush,\n\t    &deferred_work_generated);\n}\n\nstatic edata_t *\nsec_shard_alloc_locked(tsdn_t *tsdn, sec_t *sec, sec_shard_t *shard,\n    sec_bin_t *bin) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tif (!shard->enabled) {\n\t\treturn NULL;\n\t}\n\tedata_t *edata = edata_list_active_first(&bin->freelist);\n\tif (edata != NULL) {\n\t\tedata_list_active_remove(&bin->freelist, edata);\n\t\tassert(edata_size_get(edata) <= bin->bytes_cur);\n\t\tbin->bytes_cur -= edata_size_get(edata);\n\t\tassert(edata_size_get(edata) <= shard->bytes_cur);\n\t\tshard->bytes_cur -= edata_size_get(edata);\n\t}\n\treturn edata;\n}\n\nstatic edata_t *\nsec_batch_fill_and_alloc(tsdn_t *tsdn, sec_t *sec, sec_shard_t *shard,\n    sec_bin_t *bin, size_t size) {\n\tmalloc_mutex_assert_not_owner(tsdn, &shard->mtx);\n\n\tedata_list_active_t result;\n\tedata_list_active_init(&result);\n\tbool deferred_work_generated = false;\n\tsize_t nalloc = pai_alloc_batch(tsdn, sec->fallback, size,\n\t    1 + sec->opts.batch_fill_extra, &result, &deferred_work_generated);\n\n\tedata_t *ret = edata_list_active_first(&result);\n\tif (ret != NULL) {\n\t\tedata_list_active_remove(&result, ret);\n\t}\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tbin->being_batch_filled = false;\n\t/*\n\t * Handle the easy case first: nothing to cache.  Note that this can\n\t * only happen in case of OOM, since sec_alloc checks the expected\n\t * number of allocs, and doesn't bother going down the batch_fill\n\t * pathway if there won't be anything left to cache.  
So to be in this\n\t * code path, we must have asked for > 1 alloc, but only gotten 1 back.\n\t */\n\tif (nalloc <= 1) {\n\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t\treturn ret;\n\t}\n\n\tsize_t new_cached_bytes = (nalloc - 1) * size;\n\n\tedata_list_active_concat(&bin->freelist, &result);\n\tbin->bytes_cur += new_cached_bytes;\n\tshard->bytes_cur += new_cached_bytes;\n\n\tif (shard->bytes_cur > sec->opts.max_bytes) {\n\t\tsec_flush_some_and_unlock(tsdn, sec, shard);\n\t} else {\n\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t}\n\n\treturn ret;\n}\n\nstatic edata_t *\nsec_alloc(tsdn_t *tsdn, pai_t *self, size_t size, size_t alignment, bool zero,\n    bool guarded, bool frequent_reuse, bool *deferred_work_generated) {\n\tassert((size & PAGE_MASK) == 0);\n\tassert(!guarded);\n\n\tsec_t *sec = (sec_t *)self;\n\n\tif (zero || alignment > PAGE || sec->opts.nshards == 0\n\t    || size > sec->opts.max_alloc) {\n\t\treturn pai_alloc(tsdn, sec->fallback, size, alignment, zero,\n\t\t    /* guarded */ false, frequent_reuse,\n\t\t    deferred_work_generated);\n\t}\n\tpszind_t pszind = sz_psz2ind(size);\n\tassert(pszind < sec->npsizes);\n\n\tsec_shard_t *shard = sec_shard_pick(tsdn, sec);\n\tsec_bin_t *bin = &shard->bins[pszind];\n\tbool do_batch_fill = false;\n\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tedata_t *edata = sec_shard_alloc_locked(tsdn, sec, shard, bin);\n\tif (edata == NULL) {\n\t\tif (!bin->being_batch_filled\n\t\t    && sec->opts.batch_fill_extra > 0) {\n\t\t\tbin->being_batch_filled = true;\n\t\t\tdo_batch_fill = true;\n\t\t}\n\t}\n\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\tif (edata == NULL) {\n\t\tif (do_batch_fill) {\n\t\t\tedata = sec_batch_fill_and_alloc(tsdn, sec, shard, bin,\n\t\t\t    size);\n\t\t} else {\n\t\t\tedata = pai_alloc(tsdn, sec->fallback, size, alignment,\n\t\t\t    zero, /* guarded */ false, frequent_reuse,\n\t\t\t    deferred_work_generated);\n\t\t}\n\t}\n\treturn edata;\n}\n\nstatic bool\nsec_expand(tsdn_t *tsdn, pai_t *self, 
edata_t *edata, size_t old_size,\n    size_t new_size, bool zero, bool *deferred_work_generated) {\n\tsec_t *sec = (sec_t *)self;\n\treturn pai_expand(tsdn, sec->fallback, edata, old_size, new_size, zero,\n\t    deferred_work_generated);\n}\n\nstatic bool\nsec_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata, size_t old_size,\n    size_t new_size, bool *deferred_work_generated) {\n\tsec_t *sec = (sec_t *)self;\n\treturn pai_shrink(tsdn, sec->fallback, edata, old_size, new_size,\n\t    deferred_work_generated);\n}\n\nstatic void\nsec_flush_all_locked(tsdn_t *tsdn, sec_t *sec, sec_shard_t *shard) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tshard->bytes_cur = 0;\n\tedata_list_active_t to_flush;\n\tedata_list_active_init(&to_flush);\n\tfor (pszind_t i = 0; i < sec->npsizes; i++) {\n\t\tsec_bin_t *bin = &shard->bins[i];\n\t\tbin->bytes_cur = 0;\n\t\tedata_list_active_concat(&to_flush, &bin->freelist);\n\t}\n\n\t/*\n\t * Ordinarily we would try to avoid doing the batch deallocation while\n\t * holding the shard mutex, but the flush_all pathways only happen when\n\t * we're disabling the HPA or resetting the arena, both of which are\n\t * rare pathways.\n\t */\n\tbool deferred_work_generated = false;\n\tpai_dalloc_batch(tsdn, sec->fallback, &to_flush,\n\t    &deferred_work_generated);\n}\n\nstatic void\nsec_shard_dalloc_and_unlock(tsdn_t *tsdn, sec_t *sec, sec_shard_t *shard,\n    edata_t *edata) {\n\tmalloc_mutex_assert_owner(tsdn, &shard->mtx);\n\tassert(shard->bytes_cur <= sec->opts.max_bytes);\n\tsize_t size = edata_size_get(edata);\n\tpszind_t pszind = sz_psz2ind(size);\n\tassert(pszind < sec->npsizes);\n\t/*\n\t * Prepending here results in LIFO allocation per bin, which seems\n\t * reasonable.\n\t */\n\tsec_bin_t *bin = &shard->bins[pszind];\n\tedata_list_active_prepend(&bin->freelist, edata);\n\tbin->bytes_cur += size;\n\tshard->bytes_cur += size;\n\tif (shard->bytes_cur > sec->opts.max_bytes) {\n\t\t/*\n\t\t * We've exceeded the shard limit.  
We make two nods in the\n\t\t * direction of fragmentation avoidance: we flush everything in\n\t\t * the shard, rather than one particular bin, and we hold the\n\t\t * lock while flushing (in case one of the extents we flush is\n\t\t * highly preferred from a fragmentation-avoidance perspective\n\t\t * in the backing allocator).  This has the extra advantage of\n\t\t * not requiring advanced cache balancing strategies.\n\t\t */\n\t\tsec_flush_some_and_unlock(tsdn, sec, shard);\n\t\tmalloc_mutex_assert_not_owner(tsdn, &shard->mtx);\n\t} else {\n\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t}\n}\n\nstatic void\nsec_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated) {\n\tsec_t *sec = (sec_t *)self;\n\tif (sec->opts.nshards == 0\n\t    || edata_size_get(edata) > sec->opts.max_alloc) {\n\t\tpai_dalloc(tsdn, sec->fallback, edata,\n\t\t    deferred_work_generated);\n\t\treturn;\n\t}\n\tsec_shard_t *shard = sec_shard_pick(tsdn, sec);\n\tmalloc_mutex_lock(tsdn, &shard->mtx);\n\tif (shard->enabled) {\n\t\tsec_shard_dalloc_and_unlock(tsdn, sec, shard, edata);\n\t} else {\n\t\tmalloc_mutex_unlock(tsdn, &shard->mtx);\n\t\tpai_dalloc(tsdn, sec->fallback, edata,\n\t\t    deferred_work_generated);\n\t}\n}\n\nvoid\nsec_flush(tsdn_t *tsdn, sec_t *sec) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_lock(tsdn, &sec->shards[i].mtx);\n\t\tsec_flush_all_locked(tsdn, sec, &sec->shards[i]);\n\t\tmalloc_mutex_unlock(tsdn, &sec->shards[i].mtx);\n\t}\n}\n\nvoid\nsec_disable(tsdn_t *tsdn, sec_t *sec) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_lock(tsdn, &sec->shards[i].mtx);\n\t\tsec->shards[i].enabled = false;\n\t\tsec_flush_all_locked(tsdn, sec, &sec->shards[i]);\n\t\tmalloc_mutex_unlock(tsdn, &sec->shards[i].mtx);\n\t}\n}\n\nvoid\nsec_stats_merge(tsdn_t *tsdn, sec_t *sec, sec_stats_t *stats) {\n\tsize_t sum = 0;\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\t/*\n\t\t * We could save these 
lock acquisitions by making bytes_cur\n\t\t * atomic, but stats collection is rare anyways and we expect\n\t\t * the number and type of stats to get more interesting.\n\t\t */\n\t\tmalloc_mutex_lock(tsdn, &sec->shards[i].mtx);\n\t\tsum += sec->shards[i].bytes_cur;\n\t\tmalloc_mutex_unlock(tsdn, &sec->shards[i].mtx);\n\t}\n\tstats->bytes += sum;\n}\n\nvoid\nsec_mutex_stats_read(tsdn_t *tsdn, sec_t *sec,\n    mutex_prof_data_t *mutex_prof_data) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_lock(tsdn, &sec->shards[i].mtx);\n\t\tmalloc_mutex_prof_accum(tsdn, mutex_prof_data,\n\t\t    &sec->shards[i].mtx);\n\t\tmalloc_mutex_unlock(tsdn, &sec->shards[i].mtx);\n\t}\n}\n\nvoid\nsec_prefork2(tsdn_t *tsdn, sec_t *sec) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_prefork(tsdn, &sec->shards[i].mtx);\n\t}\n}\n\nvoid\nsec_postfork_parent(tsdn_t *tsdn, sec_t *sec) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_postfork_parent(tsdn, &sec->shards[i].mtx);\n\t}\n}\n\nvoid\nsec_postfork_child(tsdn_t *tsdn, sec_t *sec) {\n\tfor (size_t i = 0; i < sec->opts.nshards; i++) {\n\t\tmalloc_mutex_postfork_child(tsdn, &sec->shards[i].mtx);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/stats.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/emitter.h\"\n#include \"jemalloc/internal/fxp.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/mutex_prof.h\"\n#include \"jemalloc/internal/prof_stats.h\"\n\nconst char *global_mutex_names[mutex_prof_num_global_mutexes] = {\n#define OP(mtx) #mtx,\n\tMUTEX_PROF_GLOBAL_MUTEXES\n#undef OP\n};\n\nconst char *arena_mutex_names[mutex_prof_num_arena_mutexes] = {\n#define OP(mtx) #mtx,\n\tMUTEX_PROF_ARENA_MUTEXES\n#undef OP\n};\n\n#define CTL_GET(n, v, t) do {\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\txmallctl(n, (void *)v, &sz, NULL, 0);\t\t\t\t\\\n} while (0)\n\n#define CTL_LEAF_PREPARE(mib, miblen, name) do {\t\t\t\\\n\tassert(miblen < CTL_MAX_DEPTH);\t\t\t\t\t\\\n\tsize_t miblen_new = CTL_MAX_DEPTH;\t\t\t\t\\\n\txmallctlmibnametomib(mib, miblen, name, &miblen_new);\t\t\\\n\tassert(miblen_new > miblen);\t\t\t\t\t\\\n} while (0)\n\n#define CTL_LEAF(mib, miblen, leaf, v, t) do {\t\t\t\\\n\tassert(miblen < CTL_MAX_DEPTH);\t\t\t\t\t\\\n\tsize_t miblen_new = CTL_MAX_DEPTH;\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\txmallctlbymibname(mib, miblen, leaf, &miblen_new, (void *)v,\t\\\n\t    &sz, NULL, 0);\t\t\t\t\t\t\\\n\tassert(miblen_new == miblen + 1);\t\t\t\t\\\n} while (0)\n\n#define CTL_M2_GET(n, i, v, t) do {\t\t\t\t\t\\\n\tsize_t mib[CTL_MAX_DEPTH];\t\t\t\t\t\\\n\tsize_t miblen = sizeof(mib) / sizeof(size_t);\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\txmallctlnametomib(n, mib, &miblen);\t\t\t\t\\\n\tmib[2] = (i);\t\t\t\t\t\t\t\\\n\txmallctlbymib(mib, miblen, (void *)v, &sz, NULL, 0);\t\t\\\n} while (0)\n\n/******************************************************************************/\n/* Data. 
*/\n\nbool opt_stats_print = false;\nchar opt_stats_print_opts[stats_print_tot_num_options+1] = \"\";\n\nint64_t opt_stats_interval = STATS_INTERVAL_DEFAULT;\nchar opt_stats_interval_opts[stats_print_tot_num_options+1] = \"\";\n\nstatic counter_accum_t stats_interval_accumulated;\n/* Per thread batch accum size for stats_interval. */\nstatic uint64_t stats_interval_accum_batch;\n\n/******************************************************************************/\n\nstatic uint64_t\nrate_per_second(uint64_t value, uint64_t uptime_ns) {\n\tuint64_t billion = 1000000000;\n\tif (uptime_ns == 0 || value == 0) {\n\t\treturn 0;\n\t}\n\tif (uptime_ns < billion) {\n\t\treturn value;\n\t} else {\n\t\tuint64_t uptime_s = uptime_ns / billion;\n\t\treturn value / uptime_s;\n\t}\n}\n\n/* Calculate x.yyy and output a string (takes a fixed sized char array). */\nstatic bool\nget_rate_str(uint64_t dividend, uint64_t divisor, char str[6]) {\n\tif (divisor == 0 || dividend > divisor) {\n\t\t/* The rate is not supposed to be greater than 1. 
*/\n\t\treturn true;\n\t}\n\tif (dividend > 0) {\n\t\tassert(UINT64_MAX / dividend >= 1000);\n\t}\n\n\tunsigned n = (unsigned)((dividend * 1000) / divisor);\n\tif (n < 10) {\n\t\tmalloc_snprintf(str, 6, \"0.00%u\", n);\n\t} else if (n < 100) {\n\t\tmalloc_snprintf(str, 6, \"0.0%u\", n);\n\t} else if (n < 1000) {\n\t\tmalloc_snprintf(str, 6, \"0.%u\", n);\n\t} else {\n\t\tmalloc_snprintf(str, 6, \"1\");\n\t}\n\n\treturn false;\n}\n\nstatic void\nmutex_stats_init_cols(emitter_row_t *row, const char *table_name,\n    emitter_col_t *name,\n    emitter_col_t col_uint64_t[mutex_prof_num_uint64_t_counters],\n    emitter_col_t col_uint32_t[mutex_prof_num_uint32_t_counters]) {\n\tmutex_prof_uint64_t_counter_ind_t k_uint64_t = 0;\n\tmutex_prof_uint32_t_counter_ind_t k_uint32_t = 0;\n\n\temitter_col_t *col;\n\n\tif (name != NULL) {\n\t\temitter_col_init(name, row);\n\t\tname->justify = emitter_justify_left;\n\t\tname->width = 21;\n\t\tname->type = emitter_type_title;\n\t\tname->str_val = table_name;\n\t}\n\n#define WIDTH_uint32_t 12\n#define WIDTH_uint64_t 16\n#define OP(counter, counter_type, human, derived, base_counter)\t\t\\\n\tcol = &col_##counter_type[k_##counter_type];\t\t\t\\\n\t++k_##counter_type;\t\t\t\t\t\t\\\n\temitter_col_init(col, row);\t\t\t\t\t\\\n\tcol->justify = emitter_justify_right;\t\t\t\t\\\n\tcol->width = derived ? 
8 : WIDTH_##counter_type;\t\t\\\n\tcol->type = emitter_type_title;\t\t\t\t\t\\\n\tcol->str_val = human;\n\tMUTEX_PROF_COUNTERS\n#undef OP\n#undef WIDTH_uint32_t\n#undef WIDTH_uint64_t\n\tcol_uint64_t[mutex_counter_total_wait_time_ps].width = 10;\n}\n\nstatic void\nmutex_stats_read_global(size_t mib[], size_t miblen, const char *name,\n    emitter_col_t *col_name,\n    emitter_col_t col_uint64_t[mutex_prof_num_uint64_t_counters],\n    emitter_col_t col_uint32_t[mutex_prof_num_uint32_t_counters],\n    uint64_t uptime) {\n\tCTL_LEAF_PREPARE(mib, miblen, name);\n\tsize_t miblen_name = miblen + 1;\n\n\tcol_name->str_val = name;\n\n\temitter_col_t *dst;\n#define EMITTER_TYPE_uint32_t emitter_type_uint32\n#define EMITTER_TYPE_uint64_t emitter_type_uint64\n#define OP(counter, counter_type, human, derived, base_counter)\t\t\\\n\tdst = &col_##counter_type[mutex_counter_##counter];\t\t\\\n\tdst->type = EMITTER_TYPE_##counter_type;\t\t\t\\\n\tif (!derived) {\t\t\t\t\t\t\t\\\n\t\tCTL_LEAF(mib, miblen_name, #counter,\t\t\t\\\n\t\t    (counter_type *)&dst->bool_val, counter_type);\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\temitter_col_t *base =\t\t\t\t\t\\\n\t\t    &col_##counter_type[mutex_counter_##base_counter];\t\\\n\t\tdst->counter_type##_val =\t\t\t\t\\\n\t\t    (counter_type)rate_per_second(\t\t\t\\\n\t\t    base->counter_type##_val, uptime);\t\t\t\\\n\t}\n\tMUTEX_PROF_COUNTERS\n#undef OP\n#undef EMITTER_TYPE_uint32_t\n#undef EMITTER_TYPE_uint64_t\n}\n\nstatic void\nmutex_stats_read_arena(size_t mib[], size_t miblen, const char *name,\n    emitter_col_t *col_name,\n    emitter_col_t col_uint64_t[mutex_prof_num_uint64_t_counters],\n    emitter_col_t col_uint32_t[mutex_prof_num_uint32_t_counters],\n    uint64_t uptime) {\n\tCTL_LEAF_PREPARE(mib, miblen, name);\n\tsize_t miblen_name = miblen + 1;\n\n\tcol_name->str_val = name;\n\n\temitter_col_t *dst;\n#define EMITTER_TYPE_uint32_t emitter_type_uint32\n#define EMITTER_TYPE_uint64_t emitter_type_uint64\n#define OP(counter, 
counter_type, human, derived, base_counter)\t\t\\\n\tdst = &col_##counter_type[mutex_counter_##counter];\t\t\\\n\tdst->type = EMITTER_TYPE_##counter_type;\t\t\t\\\n\tif (!derived) {\t\t\t\t\t\t\t\\\n\t\tCTL_LEAF(mib, miblen_name, #counter,\t\t\t\\\n\t\t    (counter_type *)&dst->bool_val, counter_type);\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\temitter_col_t *base =\t\t\t\t\t\\\n\t\t    &col_##counter_type[mutex_counter_##base_counter];\t\\\n\t\tdst->counter_type##_val =\t\t\t\t\\\n\t\t    (counter_type)rate_per_second(\t\t\t\\\n\t\t    base->counter_type##_val, uptime);\t\t\t\\\n\t}\n\tMUTEX_PROF_COUNTERS\n#undef OP\n#undef EMITTER_TYPE_uint32_t\n#undef EMITTER_TYPE_uint64_t\n}\n\nstatic void\nmutex_stats_read_arena_bin(size_t mib[], size_t miblen,\n    emitter_col_t col_uint64_t[mutex_prof_num_uint64_t_counters],\n    emitter_col_t col_uint32_t[mutex_prof_num_uint32_t_counters],\n    uint64_t uptime) {\n\tCTL_LEAF_PREPARE(mib, miblen, \"mutex\");\n\tsize_t miblen_mutex = miblen + 1;\n\n\temitter_col_t *dst;\n\n#define EMITTER_TYPE_uint32_t emitter_type_uint32\n#define EMITTER_TYPE_uint64_t emitter_type_uint64\n#define OP(counter, counter_type, human, derived, base_counter)\t\t\\\n\tdst = &col_##counter_type[mutex_counter_##counter];\t\t\\\n\tdst->type = EMITTER_TYPE_##counter_type;\t\t\t\\\n\tif (!derived) {\t\t\t\t\t\t\t\\\n\t\tCTL_LEAF(mib, miblen_mutex, #counter,\t\t\t\\\n\t\t    (counter_type *)&dst->bool_val, counter_type);\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\temitter_col_t *base =\t\t\t\t\t\\\n\t\t    &col_##counter_type[mutex_counter_##base_counter];\t\\\n\t\tdst->counter_type##_val =\t\t\t\t\\\n\t\t    (counter_type)rate_per_second(\t\t\t\\\n\t\t    base->counter_type##_val, uptime);\t\t\t\\\n\t}\n\tMUTEX_PROF_COUNTERS\n#undef OP\n#undef EMITTER_TYPE_uint32_t\n#undef EMITTER_TYPE_uint64_t\n}\n\n/* \"row\" can be NULL to avoid emitting in table mode. 
*/\nstatic void\nmutex_stats_emit(emitter_t *emitter, emitter_row_t *row,\n    emitter_col_t col_uint64_t[mutex_prof_num_uint64_t_counters],\n    emitter_col_t col_uint32_t[mutex_prof_num_uint32_t_counters]) {\n\tif (row != NULL) {\n\t\temitter_table_row(emitter, row);\n\t}\n\n\tmutex_prof_uint64_t_counter_ind_t k_uint64_t = 0;\n\tmutex_prof_uint32_t_counter_ind_t k_uint32_t = 0;\n\n\temitter_col_t *col;\n\n#define EMITTER_TYPE_uint32_t emitter_type_uint32\n#define EMITTER_TYPE_uint64_t emitter_type_uint64\n#define OP(counter, type, human, derived, base_counter)\t\t\\\n\tif (!derived) {                    \\\n\t\tcol = &col_##type[k_##type];                        \\\n\t\t++k_##type;                            \\\n\t\temitter_json_kv(emitter, #counter, EMITTER_TYPE_##type,        \\\n\t\t    (const void *)&col->bool_val); \\\n\t}\n\tMUTEX_PROF_COUNTERS;\n#undef OP\n#undef EMITTER_TYPE_uint32_t\n#undef EMITTER_TYPE_uint64_t\n}\n\n#define COL_DECLARE(column_name)\t\t\t\t\t\\\n\temitter_col_t col_##column_name;\n\n#define COL_INIT(row_name, column_name, left_or_right, col_width, etype)\\\n\temitter_col_init(&col_##column_name, &row_name);\t\t\\\n\tcol_##column_name.justify = emitter_justify_##left_or_right;\t\\\n\tcol_##column_name.width = col_width;\t\t\t\t\\\n\tcol_##column_name.type = emitter_type_##etype;\n\n#define COL(row_name, column_name, left_or_right, col_width, etype)\t\\\n\tCOL_DECLARE(column_name);\t\t\t\t\t\\\n\tCOL_INIT(row_name, column_name, left_or_right, col_width, etype)\n\n#define COL_HDR_DECLARE(column_name)\t\t\t\t\t\\\n\tCOL_DECLARE(column_name);\t\t\t\t\t\\\n\temitter_col_t header_##column_name;\n\n#define COL_HDR_INIT(row_name, column_name, human, left_or_right,\t\\\n\tcol_width, etype)\t\t\t\t\t\t\\\n\tCOL_INIT(row_name, column_name, left_or_right, col_width, etype)\\\n\temitter_col_init(&header_##column_name, &header_##row_name);\t\\\n\theader_##column_name.justify = emitter_justify_##left_or_right;\t\\\n\theader_##column_name.width = 
col_width;\t\t\t\t\\\n\theader_##column_name.type = emitter_type_title;\t\t\t\\\n\theader_##column_name.str_val = human ? human : #column_name;\n\n#define COL_HDR(row_name, column_name, human, left_or_right, col_width,\t\\\n    etype)\t\t\t\t\t\t\t\t\\\n\tCOL_HDR_DECLARE(column_name)\t\t\t\t\t\\\n\tCOL_HDR_INIT(row_name, column_name, human, left_or_right,\t\\\n\t    col_width, etype)\n\nJEMALLOC_COLD\nstatic void\nstats_arena_bins_print(emitter_t *emitter, bool mutex, unsigned i,\n    uint64_t uptime) {\n\tsize_t page;\n\tbool in_gap, in_gap_prev;\n\tunsigned nbins, j;\n\n\tCTL_GET(\"arenas.page\", &page, size_t);\n\n\tCTL_GET(\"arenas.nbins\", &nbins, unsigned);\n\n\temitter_row_t header_row;\n\temitter_row_init(&header_row);\n\n\temitter_row_t row;\n\temitter_row_init(&row);\n\n\tbool prof_stats_on = config_prof && opt_prof && opt_prof_stats\n\t    && i == MALLCTL_ARENAS_ALL;\n\n\tCOL_HDR(row, size, NULL, right, 20, size)\n\tCOL_HDR(row, ind, NULL, right, 4, unsigned)\n\tCOL_HDR(row, allocated, NULL, right, 13, uint64)\n\tCOL_HDR(row, nmalloc, NULL, right, 13, uint64)\n\tCOL_HDR(row, nmalloc_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, ndalloc, NULL, right, 13, uint64)\n\tCOL_HDR(row, ndalloc_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, nrequests, NULL, right, 13, uint64)\n\tCOL_HDR(row, nrequests_ps, \"(#/sec)\", right, 10, uint64)\n\tCOL_HDR_DECLARE(prof_live_requested);\n\tCOL_HDR_DECLARE(prof_live_count);\n\tCOL_HDR_DECLARE(prof_accum_requested);\n\tCOL_HDR_DECLARE(prof_accum_count);\n\tif (prof_stats_on) {\n\t\tCOL_HDR_INIT(row, prof_live_requested, NULL, right, 21, uint64)\n\t\tCOL_HDR_INIT(row, prof_live_count, NULL, right, 17, uint64)\n\t\tCOL_HDR_INIT(row, prof_accum_requested, NULL, right, 21, uint64)\n\t\tCOL_HDR_INIT(row, prof_accum_count, NULL, right, 17, uint64)\n\t}\n\tCOL_HDR(row, nshards, NULL, right, 9, unsigned)\n\tCOL_HDR(row, curregs, NULL, right, 13, size)\n\tCOL_HDR(row, curslabs, NULL, right, 13, size)\n\tCOL_HDR(row, 
nonfull_slabs, NULL, right, 15, size)\n\tCOL_HDR(row, regs, NULL, right, 5, unsigned)\n\tCOL_HDR(row, pgs, NULL, right, 4, size)\n\t/* To buffer a right- and left-justified column. */\n\tCOL_HDR(row, justify_spacer, NULL, right, 1, title)\n\tCOL_HDR(row, util, NULL, right, 6, title)\n\tCOL_HDR(row, nfills, NULL, right, 13, uint64)\n\tCOL_HDR(row, nfills_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, nflushes, NULL, right, 13, uint64)\n\tCOL_HDR(row, nflushes_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, nslabs, NULL, right, 13, uint64)\n\tCOL_HDR(row, nreslabs, NULL, right, 13, uint64)\n\tCOL_HDR(row, nreslabs_ps, \"(#/sec)\", right, 8, uint64)\n\n\t/* Don't want to actually print the name. */\n\theader_justify_spacer.str_val = \" \";\n\tcol_justify_spacer.str_val = \" \";\n\n\temitter_col_t col_mutex64[mutex_prof_num_uint64_t_counters];\n\temitter_col_t col_mutex32[mutex_prof_num_uint32_t_counters];\n\n\temitter_col_t header_mutex64[mutex_prof_num_uint64_t_counters];\n\temitter_col_t header_mutex32[mutex_prof_num_uint32_t_counters];\n\n\tif (mutex) {\n\t\tmutex_stats_init_cols(&row, NULL, NULL, col_mutex64,\n\t\t    col_mutex32);\n\t\tmutex_stats_init_cols(&header_row, NULL, NULL, header_mutex64,\n\t\t    header_mutex32);\n\t}\n\n\t/*\n\t * We print a \"bins:\" header as part of the table row; we need to adjust\n\t * the header size column to compensate.\n\t */\n\theader_size.width -=5;\n\temitter_table_printf(emitter, \"bins:\");\n\temitter_table_row(emitter, &header_row);\n\temitter_json_array_kv_begin(emitter, \"bins\");\n\n\tsize_t stats_arenas_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 0, \"stats.arenas\");\n\tstats_arenas_mib[2] = i;\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 3, \"bins\");\n\n\tsize_t arenas_bin_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(arenas_bin_mib, 0, \"arenas.bin\");\n\n\tsize_t prof_stats_mib[CTL_MAX_DEPTH];\n\tif (prof_stats_on) {\n\t\tCTL_LEAF_PREPARE(prof_stats_mib, 0, \"prof.stats.bins\");\n\t}\n\n\tfor (j = 0, 
in_gap = false; j < nbins; j++) {\n\t\tuint64_t nslabs;\n\t\tsize_t reg_size, slab_size, curregs;\n\t\tsize_t curslabs;\n\t\tsize_t nonfull_slabs;\n\t\tuint32_t nregs, nshards;\n\t\tuint64_t nmalloc, ndalloc, nrequests, nfills, nflushes;\n\t\tuint64_t nreslabs;\n\t\tprof_stats_t prof_live;\n\t\tprof_stats_t prof_accum;\n\n\t\tstats_arenas_mib[4] = j;\n\t\tarenas_bin_mib[2] = j;\n\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nslabs\", &nslabs, uint64_t);\n\n\t\tif (prof_stats_on) {\n\t\t\tprof_stats_mib[3] = j;\n\t\t\tCTL_LEAF(prof_stats_mib, 4, \"live\", &prof_live,\n\t\t\t    prof_stats_t);\n\t\t\tCTL_LEAF(prof_stats_mib, 4, \"accum\", &prof_accum,\n\t\t\t    prof_stats_t);\n\t\t}\n\n\t\tin_gap_prev = in_gap;\n\t\tif (prof_stats_on) {\n\t\t\tin_gap = (nslabs == 0 && prof_accum.count == 0);\n\t\t} else {\n\t\t\tin_gap = (nslabs == 0);\n\t\t}\n\n\t\tif (in_gap_prev && !in_gap) {\n\t\t\temitter_table_printf(emitter,\n\t\t\t    \"                     ---\\n\");\n\t\t}\n\n\t\tif (in_gap && !emitter_outputs_json(emitter)) {\n\t\t\tcontinue;\n\t\t}\n\n\t\tCTL_LEAF(arenas_bin_mib, 3, \"size\", &reg_size, size_t);\n\t\tCTL_LEAF(arenas_bin_mib, 3, \"nregs\", &nregs, uint32_t);\n\t\tCTL_LEAF(arenas_bin_mib, 3, \"slab_size\", &slab_size, size_t);\n\t\tCTL_LEAF(arenas_bin_mib, 3, \"nshards\", &nshards, uint32_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nmalloc\", &nmalloc, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"ndalloc\", &ndalloc, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"curregs\", &curregs, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nrequests\", &nrequests,\n\t\t    uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nfills\", &nfills, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nflushes\", &nflushes, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nreslabs\", &nreslabs, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"curslabs\", &curslabs, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nonfull_slabs\", &nonfull_slabs,\n\t\t    size_t);\n\n\t\tif (mutex) 
{\n\t\t\tmutex_stats_read_arena_bin(stats_arenas_mib, 5,\n\t\t\t    col_mutex64, col_mutex32, uptime);\n\t\t}\n\n\t\temitter_json_object_begin(emitter);\n\t\temitter_json_kv(emitter, \"nmalloc\", emitter_type_uint64,\n\t\t    &nmalloc);\n\t\temitter_json_kv(emitter, \"ndalloc\", emitter_type_uint64,\n\t\t    &ndalloc);\n\t\temitter_json_kv(emitter, \"curregs\", emitter_type_size,\n\t\t    &curregs);\n\t\temitter_json_kv(emitter, \"nrequests\", emitter_type_uint64,\n\t\t    &nrequests);\n\t\tif (prof_stats_on) {\n\t\t\temitter_json_kv(emitter, \"prof_live_requested\",\n\t\t\t    emitter_type_uint64, &prof_live.req_sum);\n\t\t\temitter_json_kv(emitter, \"prof_live_count\",\n\t\t\t    emitter_type_uint64, &prof_live.count);\n\t\t\temitter_json_kv(emitter, \"prof_accum_requested\",\n\t\t\t    emitter_type_uint64, &prof_accum.req_sum);\n\t\t\temitter_json_kv(emitter, \"prof_accum_count\",\n\t\t\t    emitter_type_uint64, &prof_accum.count);\n\t\t}\n\t\temitter_json_kv(emitter, \"nfills\", emitter_type_uint64,\n\t\t    &nfills);\n\t\temitter_json_kv(emitter, \"nflushes\", emitter_type_uint64,\n\t\t    &nflushes);\n\t\temitter_json_kv(emitter, \"nreslabs\", emitter_type_uint64,\n\t\t    &nreslabs);\n\t\temitter_json_kv(emitter, \"curslabs\", emitter_type_size,\n\t\t    &curslabs);\n\t\temitter_json_kv(emitter, \"nonfull_slabs\", emitter_type_size,\n\t\t    &nonfull_slabs);\n\t\tif (mutex) {\n\t\t\temitter_json_object_kv_begin(emitter, \"mutex\");\n\t\t\tmutex_stats_emit(emitter, NULL, col_mutex64,\n\t\t\t    col_mutex32);\n\t\t\temitter_json_object_end(emitter);\n\t\t}\n\t\temitter_json_object_end(emitter);\n\n\t\tsize_t availregs = nregs * curslabs;\n\t\tchar util[6];\n\t\tif (get_rate_str((uint64_t)curregs, (uint64_t)availregs, util))\n\t\t{\n\t\t\tif (availregs == 0) {\n\t\t\t\tmalloc_snprintf(util, sizeof(util), \"1\");\n\t\t\t} else if (curregs > availregs) {\n\t\t\t\t/*\n\t\t\t\t * Race detected: the counters were read in\n\t\t\t\t * separate mallctl calls and 
concurrent\n\t\t\t\t * operations happened in between.  In this case\n\t\t\t\t * no meaningful utilization can be computed.\n\t\t\t\t */\n\t\t\t\tmalloc_snprintf(util, sizeof(util), \" race\");\n\t\t\t} else {\n\t\t\t\tnot_reached();\n\t\t\t}\n\t\t}\n\n\t\tcol_size.size_val = reg_size;\n\t\tcol_ind.unsigned_val = j;\n\t\tcol_allocated.size_val = curregs * reg_size;\n\t\tcol_nmalloc.uint64_val = nmalloc;\n\t\tcol_nmalloc_ps.uint64_val = rate_per_second(nmalloc, uptime);\n\t\tcol_ndalloc.uint64_val = ndalloc;\n\t\tcol_ndalloc_ps.uint64_val = rate_per_second(ndalloc, uptime);\n\t\tcol_nrequests.uint64_val = nrequests;\n\t\tcol_nrequests_ps.uint64_val = rate_per_second(nrequests, uptime);\n\t\tif (prof_stats_on) {\n\t\t\tcol_prof_live_requested.uint64_val = prof_live.req_sum;\n\t\t\tcol_prof_live_count.uint64_val = prof_live.count;\n\t\t\tcol_prof_accum_requested.uint64_val =\n\t\t\t    prof_accum.req_sum;\n\t\t\tcol_prof_accum_count.uint64_val = prof_accum.count;\n\t\t}\n\t\tcol_nshards.unsigned_val = nshards;\n\t\tcol_curregs.size_val = curregs;\n\t\tcol_curslabs.size_val = curslabs;\n\t\tcol_nonfull_slabs.size_val = nonfull_slabs;\n\t\tcol_regs.unsigned_val = nregs;\n\t\tcol_pgs.size_val = slab_size / page;\n\t\tcol_util.str_val = util;\n\t\tcol_nfills.uint64_val = nfills;\n\t\tcol_nfills_ps.uint64_val = rate_per_second(nfills, uptime);\n\t\tcol_nflushes.uint64_val = nflushes;\n\t\tcol_nflushes_ps.uint64_val = rate_per_second(nflushes, uptime);\n\t\tcol_nslabs.uint64_val = nslabs;\n\t\tcol_nreslabs.uint64_val = nreslabs;\n\t\tcol_nreslabs_ps.uint64_val = rate_per_second(nreslabs, uptime);\n\n\t\t/*\n\t\t * Note that mutex columns were initialized above, if mutex ==\n\t\t * true.\n\t\t */\n\n\t\temitter_table_row(emitter, &row);\n\t}\n\temitter_json_array_end(emitter); /* Close \"bins\". 
*/\n\n\tif (in_gap) {\n\t\temitter_table_printf(emitter, \"                     ---\\n\");\n\t}\n}\n\nJEMALLOC_COLD\nstatic void\nstats_arena_lextents_print(emitter_t *emitter, unsigned i, uint64_t uptime) {\n\tunsigned nbins, nlextents, j;\n\tbool in_gap, in_gap_prev;\n\n\tCTL_GET(\"arenas.nbins\", &nbins, unsigned);\n\tCTL_GET(\"arenas.nlextents\", &nlextents, unsigned);\n\n\temitter_row_t header_row;\n\temitter_row_init(&header_row);\n\temitter_row_t row;\n\temitter_row_init(&row);\n\n\tbool prof_stats_on = config_prof && opt_prof && opt_prof_stats\n\t    && i == MALLCTL_ARENAS_ALL;\n\n\tCOL_HDR(row, size, NULL, right, 20, size)\n\tCOL_HDR(row, ind, NULL, right, 4, unsigned)\n\tCOL_HDR(row, allocated, NULL, right, 13, size)\n\tCOL_HDR(row, nmalloc, NULL, right, 13, uint64)\n\tCOL_HDR(row, nmalloc_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, ndalloc, NULL, right, 13, uint64)\n\tCOL_HDR(row, ndalloc_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR(row, nrequests, NULL, right, 13, uint64)\n\tCOL_HDR(row, nrequests_ps, \"(#/sec)\", right, 8, uint64)\n\tCOL_HDR_DECLARE(prof_live_requested)\n\tCOL_HDR_DECLARE(prof_live_count)\n\tCOL_HDR_DECLARE(prof_accum_requested)\n\tCOL_HDR_DECLARE(prof_accum_count)\n\tif (prof_stats_on) {\n\t\tCOL_HDR_INIT(row, prof_live_requested, NULL, right, 21, uint64)\n\t\tCOL_HDR_INIT(row, prof_live_count, NULL, right, 17, uint64)\n\t\tCOL_HDR_INIT(row, prof_accum_requested, NULL, right, 21, uint64)\n\t\tCOL_HDR_INIT(row, prof_accum_count, NULL, right, 17, uint64)\n\t}\n\tCOL_HDR(row, curlextents, NULL, right, 13, size)\n\n\t/* As with bins, we label the large extents table. 
*/\n\theader_size.width -= 6;\n\temitter_table_printf(emitter, \"large:\");\n\temitter_table_row(emitter, &header_row);\n\temitter_json_array_kv_begin(emitter, \"lextents\");\n\n\tsize_t stats_arenas_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 0, \"stats.arenas\");\n\tstats_arenas_mib[2] = i;\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 3, \"lextents\");\n\n\tsize_t arenas_lextent_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(arenas_lextent_mib, 0, \"arenas.lextent\");\n\n\tsize_t prof_stats_mib[CTL_MAX_DEPTH];\n\tif (prof_stats_on) {\n\t\tCTL_LEAF_PREPARE(prof_stats_mib, 0, \"prof.stats.lextents\");\n\t}\n\n\tfor (j = 0, in_gap = false; j < nlextents; j++) {\n\t\tuint64_t nmalloc, ndalloc, nrequests;\n\t\tsize_t lextent_size, curlextents;\n\t\tprof_stats_t prof_live;\n\t\tprof_stats_t prof_accum;\n\n\t\tstats_arenas_mib[4] = j;\n\t\tarenas_lextent_mib[2] = j;\n\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nmalloc\", &nmalloc, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"ndalloc\", &ndalloc, uint64_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nrequests\", &nrequests,\n\t\t    uint64_t);\n\n\t\tin_gap_prev = in_gap;\n\t\tin_gap = (nrequests == 0);\n\n\t\tif (in_gap_prev && !in_gap) {\n\t\t\temitter_table_printf(emitter,\n\t\t\t    \"                     ---\\n\");\n\t\t}\n\n\t\tCTL_LEAF(arenas_lextent_mib, 3, \"size\", &lextent_size, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"curlextents\", &curlextents,\n\t\t    size_t);\n\n\t\tif (prof_stats_on) {\n\t\t\tprof_stats_mib[3] = j;\n\t\t\tCTL_LEAF(prof_stats_mib, 4, \"live\", &prof_live,\n\t\t\t    prof_stats_t);\n\t\t\tCTL_LEAF(prof_stats_mib, 4, \"accum\", &prof_accum,\n\t\t\t    prof_stats_t);\n\t\t}\n\n\t\temitter_json_object_begin(emitter);\n\t\tif (prof_stats_on) {\n\t\t\temitter_json_kv(emitter, \"prof_live_requested\",\n\t\t\t    emitter_type_uint64, &prof_live.req_sum);\n\t\t\temitter_json_kv(emitter, \"prof_live_count\",\n\t\t\t    emitter_type_uint64, &prof_live.count);\n\t\t\temitter_json_kv(emitter, 
\"prof_accum_requested\",\n\t\t\t    emitter_type_uint64, &prof_accum.req_sum);\n\t\t\temitter_json_kv(emitter, \"prof_accum_count\",\n\t\t\t    emitter_type_uint64, &prof_accum.count);\n\t\t}\n\t\temitter_json_kv(emitter, \"curlextents\", emitter_type_size,\n\t\t    &curlextents);\n\t\temitter_json_object_end(emitter);\n\n\t\tcol_size.size_val = lextent_size;\n\t\tcol_ind.unsigned_val = nbins + j;\n\t\tcol_allocated.size_val = curlextents * lextent_size;\n\t\tcol_nmalloc.uint64_val = nmalloc;\n\t\tcol_nmalloc_ps.uint64_val = rate_per_second(nmalloc, uptime);\n\t\tcol_ndalloc.uint64_val = ndalloc;\n\t\tcol_ndalloc_ps.uint64_val = rate_per_second(ndalloc, uptime);\n\t\tcol_nrequests.uint64_val = nrequests;\n\t\tcol_nrequests_ps.uint64_val = rate_per_second(nrequests, uptime);\n\t\tif (prof_stats_on) {\n\t\t\tcol_prof_live_requested.uint64_val = prof_live.req_sum;\n\t\t\tcol_prof_live_count.uint64_val = prof_live.count;\n\t\t\tcol_prof_accum_requested.uint64_val =\n\t\t\t    prof_accum.req_sum;\n\t\t\tcol_prof_accum_count.uint64_val = prof_accum.count;\n\t\t}\n\t\tcol_curlextents.size_val = curlextents;\n\n\t\tif (!in_gap) {\n\t\t\temitter_table_row(emitter, &row);\n\t\t}\n\t}\n\temitter_json_array_end(emitter); /* Close \"lextents\". 
*/\n\tif (in_gap) {\n\t\temitter_table_printf(emitter, \"                     ---\\n\");\n\t}\n}\n\nJEMALLOC_COLD\nstatic void\nstats_arena_extents_print(emitter_t *emitter, unsigned i) {\n\tunsigned j;\n\tbool in_gap, in_gap_prev;\n\temitter_row_t header_row;\n\temitter_row_init(&header_row);\n\temitter_row_t row;\n\temitter_row_init(&row);\n\n\tCOL_HDR(row, size, NULL, right, 20, size)\n\tCOL_HDR(row, ind, NULL, right, 4, unsigned)\n\tCOL_HDR(row, ndirty, NULL, right, 13, size)\n\tCOL_HDR(row, dirty, NULL, right, 13, size)\n\tCOL_HDR(row, nmuzzy, NULL, right, 13, size)\n\tCOL_HDR(row, muzzy, NULL, right, 13, size)\n\tCOL_HDR(row, nretained, NULL, right, 13, size)\n\tCOL_HDR(row, retained, NULL, right, 13, size)\n\tCOL_HDR(row, ntotal, NULL, right, 13, size)\n\tCOL_HDR(row, total, NULL, right, 13, size)\n\n\t/* Label this section. */\n\theader_size.width -= 8;\n\temitter_table_printf(emitter, \"extents:\");\n\temitter_table_row(emitter, &header_row);\n\temitter_json_array_kv_begin(emitter, \"extents\");\n\n\tsize_t stats_arenas_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 0, \"stats.arenas\");\n\tstats_arenas_mib[2] = i;\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 3, \"extents\");\n\n\tin_gap = false;\n\tfor (j = 0; j < SC_NPSIZES; j++) {\n\t\tsize_t ndirty, nmuzzy, nretained, total, dirty_bytes,\n\t\t    muzzy_bytes, retained_bytes, total_bytes;\n\t\tstats_arenas_mib[4] = j;\n\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"ndirty\", &ndirty, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nmuzzy\", &nmuzzy, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"nretained\", &nretained, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"dirty_bytes\", &dirty_bytes,\n\t\t    size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"muzzy_bytes\", &muzzy_bytes,\n\t\t    size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 5, \"retained_bytes\",\n\t\t    &retained_bytes, size_t);\n\n\t\ttotal = ndirty + nmuzzy + nretained;\n\t\ttotal_bytes = dirty_bytes + muzzy_bytes + retained_bytes;\n\n\t\tin_gap_prev = 
in_gap;\n\t\tin_gap = (total == 0);\n\n\t\tif (in_gap_prev && !in_gap) {\n\t\t\temitter_table_printf(emitter,\n\t\t\t    \"                     ---\\n\");\n\t\t}\n\n\t\temitter_json_object_begin(emitter);\n\t\temitter_json_kv(emitter, \"ndirty\", emitter_type_size, &ndirty);\n\t\temitter_json_kv(emitter, \"nmuzzy\", emitter_type_size, &nmuzzy);\n\t\temitter_json_kv(emitter, \"nretained\", emitter_type_size,\n\t\t    &nretained);\n\n\t\temitter_json_kv(emitter, \"dirty_bytes\", emitter_type_size,\n\t\t    &dirty_bytes);\n\t\temitter_json_kv(emitter, \"muzzy_bytes\", emitter_type_size,\n\t\t    &muzzy_bytes);\n\t\temitter_json_kv(emitter, \"retained_bytes\", emitter_type_size,\n\t\t    &retained_bytes);\n\t\temitter_json_object_end(emitter);\n\n\t\tcol_size.size_val = sz_pind2sz(j);\n\t\tcol_ind.size_val = j;\n\t\tcol_ndirty.size_val = ndirty;\n\t\tcol_dirty.size_val = dirty_bytes;\n\t\tcol_nmuzzy.size_val = nmuzzy;\n\t\tcol_muzzy.size_val = muzzy_bytes;\n\t\tcol_nretained.size_val = nretained;\n\t\tcol_retained.size_val = retained_bytes;\n\t\tcol_ntotal.size_val = total;\n\t\tcol_total.size_val = total_bytes;\n\n\t\tif (!in_gap) {\n\t\t\temitter_table_row(emitter, &row);\n\t\t}\n\t}\n\temitter_json_array_end(emitter); /* Close \"extents\". 
*/\n\tif (in_gap) {\n\t\temitter_table_printf(emitter, \"                     ---\\n\");\n\t}\n}\n\nstatic void\nstats_arena_hpa_shard_print(emitter_t *emitter, unsigned i, uint64_t uptime) {\n\temitter_row_t header_row;\n\temitter_row_init(&header_row);\n\temitter_row_t row;\n\temitter_row_init(&row);\n\n\tuint64_t npurge_passes;\n\tuint64_t npurges;\n\tuint64_t nhugifies;\n\tuint64_t ndehugifies;\n\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.npurge_passes\",\n\t    i, &npurge_passes, uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.npurges\",\n\t    i, &npurges, uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.nhugifies\",\n\t    i, &nhugifies, uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.ndehugifies\",\n\t    i, &ndehugifies, uint64_t);\n\n\tsize_t npageslabs_huge;\n\tsize_t nactive_huge;\n\tsize_t ndirty_huge;\n\n\tsize_t npageslabs_nonhuge;\n\tsize_t nactive_nonhuge;\n\tsize_t ndirty_nonhuge;\n\tsize_t nretained_nonhuge;\n\n\tsize_t sec_bytes;\n\tCTL_M2_GET(\"stats.arenas.0.hpa_sec_bytes\", i, &sec_bytes, size_t);\n\temitter_kv(emitter, \"sec_bytes\", \"Bytes in small extent cache\",\n\t    emitter_type_size, &sec_bytes);\n\n\t/* First, global stats. 
*/\n\temitter_table_printf(emitter,\n\t    \"HPA shard stats:\\n\"\n\t    \"  Purge passes: %\" FMTu64 \" (%\" FMTu64 \" / sec)\\n\"\n\t    \"  Purges: %\" FMTu64 \" (%\" FMTu64 \" / sec)\\n\"\n\t    \"  Hugeifies: %\" FMTu64 \" (%\" FMTu64 \" / sec)\\n\"\n\t    \"  Dehugifies: %\" FMTu64 \" (%\" FMTu64 \" / sec)\\n\"\n\t    \"\\n\",\n\t    npurge_passes, rate_per_second(npurge_passes, uptime),\n\t    npurges, rate_per_second(npurges, uptime),\n\t    nhugifies, rate_per_second(nhugifies, uptime),\n\t    ndehugifies, rate_per_second(ndehugifies, uptime));\n\n\temitter_json_object_kv_begin(emitter, \"hpa_shard\");\n\temitter_json_kv(emitter, \"npurge_passes\", emitter_type_uint64,\n\t    &npurge_passes);\n\temitter_json_kv(emitter, \"npurges\", emitter_type_uint64,\n\t    &npurges);\n\temitter_json_kv(emitter, \"nhugifies\", emitter_type_uint64,\n\t    &nhugifies);\n\temitter_json_kv(emitter, \"ndehugifies\", emitter_type_uint64,\n\t    &ndehugifies);\n\n\t/* Next, full slab stats. */\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.npageslabs_huge\",\n\t    i, &npageslabs_huge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.nactive_huge\",\n\t    i, &nactive_huge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.ndirty_huge\",\n\t    i, &ndirty_huge, size_t);\n\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.npageslabs_nonhuge\",\n\t    i, &npageslabs_nonhuge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.nactive_nonhuge\",\n\t    i, &nactive_nonhuge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.full_slabs.ndirty_nonhuge\",\n\t    i, &ndirty_nonhuge, size_t);\n\tnretained_nonhuge = npageslabs_nonhuge * HUGEPAGE_PAGES\n\t    - nactive_nonhuge - ndirty_nonhuge;\n\n\temitter_table_printf(emitter,\n\t    \"  In full slabs:\\n\"\n\t    \"      npageslabs: %zu huge, %zu nonhuge\\n\"\n\t    \"      nactive: %zu huge, %zu nonhuge \\n\"\n\t    \"      ndirty: %zu huge, %zu nonhuge \\n\"\n\t    \"      nretained: 0 huge, 
%zu nonhuge \\n\",\n\t    npageslabs_huge, npageslabs_nonhuge,\n\t    nactive_huge, nactive_nonhuge,\n\t    ndirty_huge, ndirty_nonhuge,\n\t    nretained_nonhuge);\n\n\temitter_json_object_kv_begin(emitter, \"full_slabs\");\n\temitter_json_kv(emitter, \"npageslabs_huge\", emitter_type_size,\n\t    &npageslabs_huge);\n\temitter_json_kv(emitter, \"nactive_huge\", emitter_type_size,\n\t    &nactive_huge);\n\temitter_json_kv(emitter, \"ndirty_huge\", emitter_type_size,\n\t    &ndirty_huge);\n\temitter_json_kv(emitter, \"npageslabs_nonhuge\", emitter_type_size,\n\t    &npageslabs_nonhuge);\n\temitter_json_kv(emitter, \"nactive_nonhuge\", emitter_type_size,\n\t    &nactive_nonhuge);\n\temitter_json_kv(emitter, \"ndirty_nonhuge\", emitter_type_size,\n\t    &ndirty_nonhuge);\n\temitter_json_object_end(emitter); /* End \"full_slabs\" */\n\n\t/* Next, empty slab stats. */\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.npageslabs_huge\",\n\t    i, &npageslabs_huge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.nactive_huge\",\n\t    i, &nactive_huge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.ndirty_huge\",\n\t    i, &ndirty_huge, size_t);\n\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.npageslabs_nonhuge\",\n\t    i, &npageslabs_nonhuge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.nactive_nonhuge\",\n\t    i, &nactive_nonhuge, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.hpa_shard.empty_slabs.ndirty_nonhuge\",\n\t    i, &ndirty_nonhuge, size_t);\n\tnretained_nonhuge = npageslabs_nonhuge * HUGEPAGE_PAGES\n\t    - nactive_nonhuge - ndirty_nonhuge;\n\n\temitter_table_printf(emitter,\n\t    \"  In empty slabs:\\n\"\n\t    \"      npageslabs: %zu huge, %zu nonhuge\\n\"\n\t    \"      nactive: %zu huge, %zu nonhuge \\n\"\n\t    \"      ndirty: %zu huge, %zu nonhuge \\n\"\n\t    \"      nretained: 0 huge, %zu nonhuge \\n\"\n\t    \"\\n\",\n\t    npageslabs_huge, npageslabs_nonhuge,\n\t    nactive_huge, 
nactive_nonhuge,\n\t    ndirty_huge, ndirty_nonhuge,\n\t    nretained_nonhuge);\n\n\temitter_json_object_kv_begin(emitter, \"empty_slabs\");\n\temitter_json_kv(emitter, \"npageslabs_huge\", emitter_type_size,\n\t    &npageslabs_huge);\n\temitter_json_kv(emitter, \"nactive_huge\", emitter_type_size,\n\t    &nactive_huge);\n\temitter_json_kv(emitter, \"ndirty_huge\", emitter_type_size,\n\t    &ndirty_huge);\n\temitter_json_kv(emitter, \"npageslabs_nonhuge\", emitter_type_size,\n\t    &npageslabs_nonhuge);\n\temitter_json_kv(emitter, \"nactive_nonhuge\", emitter_type_size,\n\t    &nactive_nonhuge);\n\temitter_json_kv(emitter, \"ndirty_nonhuge\", emitter_type_size,\n\t    &ndirty_nonhuge);\n\temitter_json_object_end(emitter); /* End \"empty_slabs\" */\n\n\tCOL_HDR(row, size, NULL, right, 20, size)\n\tCOL_HDR(row, ind, NULL, right, 4, unsigned)\n\tCOL_HDR(row, npageslabs_huge, NULL, right, 16, size)\n\tCOL_HDR(row, nactive_huge, NULL, right, 16, size)\n\tCOL_HDR(row, ndirty_huge, NULL, right, 16, size)\n\tCOL_HDR(row, npageslabs_nonhuge, NULL, right, 20, size)\n\tCOL_HDR(row, nactive_nonhuge, NULL, right, 20, size)\n\tCOL_HDR(row, ndirty_nonhuge, NULL, right, 20, size)\n\tCOL_HDR(row, nretained_nonhuge, NULL, right, 20, size)\n\n\tsize_t stats_arenas_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 0, \"stats.arenas\");\n\tstats_arenas_mib[2] = i;\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 3, \"hpa_shard.nonfull_slabs\");\n\n\temitter_table_row(emitter, &header_row);\n\temitter_json_array_kv_begin(emitter, \"nonfull_slabs\");\n\tbool in_gap = false;\n\tfor (pszind_t j = 0; j < PSSET_NPSIZES && j < SC_NPSIZES; j++) {\n\t\tstats_arenas_mib[5] = j;\n\n\t\tCTL_LEAF(stats_arenas_mib, 6, \"npageslabs_huge\",\n\t\t    &npageslabs_huge, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 6, \"nactive_huge\",\n\t\t    &nactive_huge, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 6, \"ndirty_huge\",\n\t\t    &ndirty_huge, size_t);\n\n\t\tCTL_LEAF(stats_arenas_mib, 6, 
\"npageslabs_nonhuge\",\n\t\t    &npageslabs_nonhuge, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 6, \"nactive_nonhuge\",\n\t\t    &nactive_nonhuge, size_t);\n\t\tCTL_LEAF(stats_arenas_mib, 6, \"ndirty_nonhuge\",\n\t\t    &ndirty_nonhuge, size_t);\n\t\tnretained_nonhuge = npageslabs_nonhuge * HUGEPAGE_PAGES\n\t\t    - nactive_nonhuge - ndirty_nonhuge;\n\n\t\tbool in_gap_prev = in_gap;\n\t\tin_gap = (npageslabs_huge == 0 && npageslabs_nonhuge == 0);\n\t\tif (in_gap_prev && !in_gap) {\n\t\t\temitter_table_printf(emitter,\n\t\t\t    \"                     ---\\n\");\n\t\t}\n\n\t\tcol_size.size_val = sz_pind2sz(j);\n\t\tcol_ind.size_val = j;\n\t\tcol_npageslabs_huge.size_val = npageslabs_huge;\n\t\tcol_nactive_huge.size_val = nactive_huge;\n\t\tcol_ndirty_huge.size_val = ndirty_huge;\n\t\tcol_npageslabs_nonhuge.size_val = npageslabs_nonhuge;\n\t\tcol_nactive_nonhuge.size_val = nactive_nonhuge;\n\t\tcol_ndirty_nonhuge.size_val = ndirty_nonhuge;\n\t\tcol_nretained_nonhuge.size_val = nretained_nonhuge;\n\t\tif (!in_gap) {\n\t\t\temitter_table_row(emitter, &row);\n\t\t}\n\n\t\temitter_json_object_begin(emitter);\n\t\temitter_json_kv(emitter, \"npageslabs_huge\", emitter_type_size,\n\t\t    &npageslabs_huge);\n\t\temitter_json_kv(emitter, \"nactive_huge\", emitter_type_size,\n\t\t    &nactive_huge);\n\t\temitter_json_kv(emitter, \"ndirty_huge\", emitter_type_size,\n\t\t    &ndirty_huge);\n\t\temitter_json_kv(emitter, \"npageslabs_nonhuge\", emitter_type_size,\n\t\t    &npageslabs_nonhuge);\n\t\temitter_json_kv(emitter, \"nactive_nonhuge\", emitter_type_size,\n\t\t    &nactive_nonhuge);\n\t\temitter_json_kv(emitter, \"ndirty_nonhuge\", emitter_type_size,\n\t\t    &ndirty_nonhuge);\n\t\temitter_json_object_end(emitter);\n\t}\n\temitter_json_array_end(emitter); /* End \"nonfull_slabs\" */\n\temitter_json_object_end(emitter); /* End \"hpa_shard\" */\n\tif (in_gap) {\n\t\temitter_table_printf(emitter, \"                     ---\\n\");\n\t}\n}\n\nstatic 
void\nstats_arena_mutexes_print(emitter_t *emitter, unsigned arena_ind, uint64_t uptime) {\n\temitter_row_t row;\n\temitter_col_t col_name;\n\temitter_col_t col64[mutex_prof_num_uint64_t_counters];\n\temitter_col_t col32[mutex_prof_num_uint32_t_counters];\n\n\temitter_row_init(&row);\n\tmutex_stats_init_cols(&row, \"\", &col_name, col64, col32);\n\n\temitter_json_object_kv_begin(emitter, \"mutexes\");\n\temitter_table_row(emitter, &row);\n\n\tsize_t stats_arenas_mib[CTL_MAX_DEPTH];\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 0, \"stats.arenas\");\n\tstats_arenas_mib[2] = arena_ind;\n\tCTL_LEAF_PREPARE(stats_arenas_mib, 3, \"mutexes\");\n\n\tfor (mutex_prof_arena_ind_t i = 0; i < mutex_prof_num_arena_mutexes;\n\t    i++) {\n\t\tconst char *name = arena_mutex_names[i];\n\t\temitter_json_object_kv_begin(emitter, name);\n\t\tmutex_stats_read_arena(stats_arenas_mib, 4, name, &col_name,\n\t\t    col64, col32, uptime);\n\t\tmutex_stats_emit(emitter, &row, col64, col32);\n\t\temitter_json_object_end(emitter); /* Close the mutex dict. */\n\t}\n\temitter_json_object_end(emitter); /* End \"mutexes\". 
*/\n}\n\nJEMALLOC_COLD\nstatic void\nstats_arena_print(emitter_t *emitter, unsigned i, bool bins, bool large,\n    bool mutex, bool extents, bool hpa) {\n\tunsigned nthreads;\n\tconst char *dss;\n\tssize_t dirty_decay_ms, muzzy_decay_ms;\n\tsize_t page, pactive, pdirty, pmuzzy, mapped, retained;\n\tsize_t base, internal, resident, metadata_thp, extent_avail;\n\tuint64_t dirty_npurge, dirty_nmadvise, dirty_purged;\n\tuint64_t muzzy_npurge, muzzy_nmadvise, muzzy_purged;\n\tsize_t small_allocated;\n\tuint64_t small_nmalloc, small_ndalloc, small_nrequests, small_nfills,\n\t    small_nflushes;\n\tsize_t large_allocated;\n\tuint64_t large_nmalloc, large_ndalloc, large_nrequests, large_nfills,\n\t    large_nflushes;\n\tsize_t tcache_bytes, tcache_stashed_bytes, abandoned_vm;\n\tuint64_t uptime;\n\n\tCTL_GET(\"arenas.page\", &page, size_t);\n\n\tCTL_M2_GET(\"stats.arenas.0.nthreads\", i, &nthreads, unsigned);\n\temitter_kv(emitter, \"nthreads\", \"assigned threads\",\n\t    emitter_type_unsigned, &nthreads);\n\n\tCTL_M2_GET(\"stats.arenas.0.uptime\", i, &uptime, uint64_t);\n\temitter_kv(emitter, \"uptime_ns\", \"uptime\", emitter_type_uint64,\n\t    &uptime);\n\n\tCTL_M2_GET(\"stats.arenas.0.dss\", i, &dss, const char *);\n\temitter_kv(emitter, \"dss\", \"dss allocation precedence\",\n\t    emitter_type_string, &dss);\n\n\tCTL_M2_GET(\"stats.arenas.0.dirty_decay_ms\", i, &dirty_decay_ms,\n\t    ssize_t);\n\tCTL_M2_GET(\"stats.arenas.0.muzzy_decay_ms\", i, &muzzy_decay_ms,\n\t    ssize_t);\n\tCTL_M2_GET(\"stats.arenas.0.pactive\", i, &pactive, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.pdirty\", i, &pdirty, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.pmuzzy\", i, &pmuzzy, size_t);\n\tCTL_M2_GET(\"stats.arenas.0.dirty_npurge\", i, &dirty_npurge, uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.dirty_nmadvise\", i, &dirty_nmadvise,\n\t    uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.dirty_purged\", i, &dirty_purged, uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.muzzy_npurge\", i, &muzzy_npurge, 
uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.muzzy_nmadvise\", i, &muzzy_nmadvise,\n\t    uint64_t);\n\tCTL_M2_GET(\"stats.arenas.0.muzzy_purged\", i, &muzzy_purged, uint64_t);\n\n\temitter_row_t decay_row;\n\temitter_row_init(&decay_row);\n\n\t/* JSON-style emission. */\n\temitter_json_kv(emitter, \"dirty_decay_ms\", emitter_type_ssize,\n\t    &dirty_decay_ms);\n\temitter_json_kv(emitter, \"muzzy_decay_ms\", emitter_type_ssize,\n\t    &muzzy_decay_ms);\n\n\temitter_json_kv(emitter, \"pactive\", emitter_type_size, &pactive);\n\temitter_json_kv(emitter, \"pdirty\", emitter_type_size, &pdirty);\n\temitter_json_kv(emitter, \"pmuzzy\", emitter_type_size, &pmuzzy);\n\n\temitter_json_kv(emitter, \"dirty_npurge\", emitter_type_uint64,\n\t    &dirty_npurge);\n\temitter_json_kv(emitter, \"dirty_nmadvise\", emitter_type_uint64,\n\t    &dirty_nmadvise);\n\temitter_json_kv(emitter, \"dirty_purged\", emitter_type_uint64,\n\t    &dirty_purged);\n\n\temitter_json_kv(emitter, \"muzzy_npurge\", emitter_type_uint64,\n\t    &muzzy_npurge);\n\temitter_json_kv(emitter, \"muzzy_nmadvise\", emitter_type_uint64,\n\t    &muzzy_nmadvise);\n\temitter_json_kv(emitter, \"muzzy_purged\", emitter_type_uint64,\n\t    &muzzy_purged);\n\n\t/* Table-style emission. */\n\tCOL(decay_row, decay_type, right, 9, title);\n\tcol_decay_type.str_val = \"decaying:\";\n\n\tCOL(decay_row, decay_time, right, 6, title);\n\tcol_decay_time.str_val = \"time\";\n\n\tCOL(decay_row, decay_npages, right, 13, title);\n\tcol_decay_npages.str_val = \"npages\";\n\n\tCOL(decay_row, decay_sweeps, right, 13, title);\n\tcol_decay_sweeps.str_val = \"sweeps\";\n\n\tCOL(decay_row, decay_madvises, right, 13, title);\n\tcol_decay_madvises.str_val = \"madvises\";\n\n\tCOL(decay_row, decay_purged, right, 13, title);\n\tcol_decay_purged.str_val = \"purged\";\n\n\t/* Title row. */\n\temitter_table_row(emitter, &decay_row);\n\n\t/* Dirty row. 
*/\n\tcol_decay_type.str_val = \"dirty:\";\n\n\tif (dirty_decay_ms >= 0) {\n\t\tcol_decay_time.type = emitter_type_ssize;\n\t\tcol_decay_time.ssize_val = dirty_decay_ms;\n\t} else {\n\t\tcol_decay_time.type = emitter_type_title;\n\t\tcol_decay_time.str_val = \"N/A\";\n\t}\n\n\tcol_decay_npages.type = emitter_type_size;\n\tcol_decay_npages.size_val = pdirty;\n\n\tcol_decay_sweeps.type = emitter_type_uint64;\n\tcol_decay_sweeps.uint64_val = dirty_npurge;\n\n\tcol_decay_madvises.type = emitter_type_uint64;\n\tcol_decay_madvises.uint64_val = dirty_nmadvise;\n\n\tcol_decay_purged.type = emitter_type_uint64;\n\tcol_decay_purged.uint64_val = dirty_purged;\n\n\temitter_table_row(emitter, &decay_row);\n\n\t/* Muzzy row. */\n\tcol_decay_type.str_val = \"muzzy:\";\n\n\tif (muzzy_decay_ms >= 0) {\n\t\tcol_decay_time.type = emitter_type_ssize;\n\t\tcol_decay_time.ssize_val = muzzy_decay_ms;\n\t} else {\n\t\tcol_decay_time.type = emitter_type_title;\n\t\tcol_decay_time.str_val = \"N/A\";\n\t}\n\n\tcol_decay_npages.type = emitter_type_size;\n\tcol_decay_npages.size_val = pmuzzy;\n\n\tcol_decay_sweeps.type = emitter_type_uint64;\n\tcol_decay_sweeps.uint64_val = muzzy_npurge;\n\n\tcol_decay_madvises.type = emitter_type_uint64;\n\tcol_decay_madvises.uint64_val = muzzy_nmadvise;\n\n\tcol_decay_purged.type = emitter_type_uint64;\n\tcol_decay_purged.uint64_val = muzzy_purged;\n\n\temitter_table_row(emitter, &decay_row);\n\n\t/* Small / large / total allocation counts. 
*/\n\temitter_row_t alloc_count_row;\n\temitter_row_init(&alloc_count_row);\n\n\tCOL(alloc_count_row, count_title, left, 21, title);\n\tcol_count_title.str_val = \"\";\n\n\tCOL(alloc_count_row, count_allocated, right, 16, title);\n\tcol_count_allocated.str_val = \"allocated\";\n\n\tCOL(alloc_count_row, count_nmalloc, right, 16, title);\n\tcol_count_nmalloc.str_val = \"nmalloc\";\n\tCOL(alloc_count_row, count_nmalloc_ps, right, 10, title);\n\tcol_count_nmalloc_ps.str_val = \"(#/sec)\";\n\n\tCOL(alloc_count_row, count_ndalloc, right, 16, title);\n\tcol_count_ndalloc.str_val = \"ndalloc\";\n\tCOL(alloc_count_row, count_ndalloc_ps, right, 10, title);\n\tcol_count_ndalloc_ps.str_val = \"(#/sec)\";\n\n\tCOL(alloc_count_row, count_nrequests, right, 16, title);\n\tcol_count_nrequests.str_val = \"nrequests\";\n\tCOL(alloc_count_row, count_nrequests_ps, right, 10, title);\n\tcol_count_nrequests_ps.str_val = \"(#/sec)\";\n\n\tCOL(alloc_count_row, count_nfills, right, 16, title);\n\tcol_count_nfills.str_val = \"nfill\";\n\tCOL(alloc_count_row, count_nfills_ps, right, 10, title);\n\tcol_count_nfills_ps.str_val = \"(#/sec)\";\n\n\tCOL(alloc_count_row, count_nflushes, right, 16, title);\n\tcol_count_nflushes.str_val = \"nflush\";\n\tCOL(alloc_count_row, count_nflushes_ps, right, 10, title);\n\tcol_count_nflushes_ps.str_val = \"(#/sec)\";\n\n\temitter_table_row(emitter, &alloc_count_row);\n\n\tcol_count_nmalloc_ps.type = emitter_type_uint64;\n\tcol_count_ndalloc_ps.type = emitter_type_uint64;\n\tcol_count_nrequests_ps.type = emitter_type_uint64;\n\tcol_count_nfills_ps.type = emitter_type_uint64;\n\tcol_count_nflushes_ps.type = emitter_type_uint64;\n\n#define GET_AND_EMIT_ALLOC_STAT(small_or_large, name, valtype)\t\t\\\n\tCTL_M2_GET(\"stats.arenas.0.\" #small_or_large \".\" #name, i,\t\\\n\t    &small_or_large##_##name, valtype##_t);\t\t\t\\\n\temitter_json_kv(emitter, #name, emitter_type_##valtype,\t\t\\\n\t    &small_or_large##_##name);\t\t\t\t\t\\\n\tcol_count_##name.type = 
emitter_type_##valtype;\t\t\\\n\tcol_count_##name.valtype##_val = small_or_large##_##name;\n\n\temitter_json_object_kv_begin(emitter, \"small\");\n\tcol_count_title.str_val = \"small:\";\n\n\tGET_AND_EMIT_ALLOC_STAT(small, allocated, size)\n\tGET_AND_EMIT_ALLOC_STAT(small, nmalloc, uint64)\n\tcol_count_nmalloc_ps.uint64_val =\n\t    rate_per_second(col_count_nmalloc.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(small, ndalloc, uint64)\n\tcol_count_ndalloc_ps.uint64_val =\n\t    rate_per_second(col_count_ndalloc.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(small, nrequests, uint64)\n\tcol_count_nrequests_ps.uint64_val =\n\t    rate_per_second(col_count_nrequests.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(small, nfills, uint64)\n\tcol_count_nfills_ps.uint64_val =\n\t    rate_per_second(col_count_nfills.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(small, nflushes, uint64)\n\tcol_count_nflushes_ps.uint64_val =\n\t    rate_per_second(col_count_nflushes.uint64_val, uptime);\n\n\temitter_table_row(emitter, &alloc_count_row);\n\temitter_json_object_end(emitter); /* Close \"small\". 
*/\n\n\temitter_json_object_kv_begin(emitter, \"large\");\n\tcol_count_title.str_val = \"large:\";\n\n\tGET_AND_EMIT_ALLOC_STAT(large, allocated, size)\n\tGET_AND_EMIT_ALLOC_STAT(large, nmalloc, uint64)\n\tcol_count_nmalloc_ps.uint64_val =\n\t    rate_per_second(col_count_nmalloc.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(large, ndalloc, uint64)\n\tcol_count_ndalloc_ps.uint64_val =\n\t    rate_per_second(col_count_ndalloc.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(large, nrequests, uint64)\n\tcol_count_nrequests_ps.uint64_val =\n\t    rate_per_second(col_count_nrequests.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(large, nfills, uint64)\n\tcol_count_nfills_ps.uint64_val =\n\t    rate_per_second(col_count_nfills.uint64_val, uptime);\n\tGET_AND_EMIT_ALLOC_STAT(large, nflushes, uint64)\n\tcol_count_nflushes_ps.uint64_val =\n\t    rate_per_second(col_count_nflushes.uint64_val, uptime);\n\n\temitter_table_row(emitter, &alloc_count_row);\n\temitter_json_object_end(emitter); /* Close \"large\". */\n\n#undef GET_AND_EMIT_ALLOC_STAT\n\n\t/* Aggregated small + large stats are emitted only in table mode. 
*/\n\tcol_count_title.str_val = \"total:\";\n\tcol_count_allocated.size_val = small_allocated + large_allocated;\n\tcol_count_nmalloc.uint64_val = small_nmalloc + large_nmalloc;\n\tcol_count_ndalloc.uint64_val = small_ndalloc + large_ndalloc;\n\tcol_count_nrequests.uint64_val = small_nrequests + large_nrequests;\n\tcol_count_nfills.uint64_val = small_nfills + large_nfills;\n\tcol_count_nflushes.uint64_val = small_nflushes + large_nflushes;\n\tcol_count_nmalloc_ps.uint64_val =\n\t    rate_per_second(col_count_nmalloc.uint64_val, uptime);\n\tcol_count_ndalloc_ps.uint64_val =\n\t    rate_per_second(col_count_ndalloc.uint64_val, uptime);\n\tcol_count_nrequests_ps.uint64_val =\n\t    rate_per_second(col_count_nrequests.uint64_val, uptime);\n\tcol_count_nfills_ps.uint64_val =\n\t    rate_per_second(col_count_nfills.uint64_val, uptime);\n\tcol_count_nflushes_ps.uint64_val =\n\t    rate_per_second(col_count_nflushes.uint64_val, uptime);\n\temitter_table_row(emitter, &alloc_count_row);\n\n\temitter_row_t mem_count_row;\n\temitter_row_init(&mem_count_row);\n\n\temitter_col_t mem_count_title;\n\temitter_col_init(&mem_count_title, &mem_count_row);\n\tmem_count_title.justify = emitter_justify_left;\n\tmem_count_title.width = 21;\n\tmem_count_title.type = emitter_type_title;\n\tmem_count_title.str_val = \"\";\n\n\temitter_col_t mem_count_val;\n\temitter_col_init(&mem_count_val, &mem_count_row);\n\tmem_count_val.justify = emitter_justify_right;\n\tmem_count_val.width = 16;\n\tmem_count_val.type = emitter_type_title;\n\tmem_count_val.str_val = \"\";\n\n\temitter_table_row(emitter, &mem_count_row);\n\tmem_count_val.type = emitter_type_size;\n\n\t/* Active count in bytes is emitted only in table mode. 
*/\n\tmem_count_title.str_val = \"active:\";\n\tmem_count_val.size_val = pactive * page;\n\temitter_table_row(emitter, &mem_count_row);\n\n#define GET_AND_EMIT_MEM_STAT(stat)\t\t\t\t\t\\\n\tCTL_M2_GET(\"stats.arenas.0.\"#stat, i, &stat, size_t);\t\t\\\n\temitter_json_kv(emitter, #stat, emitter_type_size, &stat);\t\\\n\tmem_count_title.str_val = #stat\":\";\t\t\t\t\\\n\tmem_count_val.size_val = stat;\t\t\t\t\t\\\n\temitter_table_row(emitter, &mem_count_row);\n\n\tGET_AND_EMIT_MEM_STAT(mapped)\n\tGET_AND_EMIT_MEM_STAT(retained)\n\tGET_AND_EMIT_MEM_STAT(base)\n\tGET_AND_EMIT_MEM_STAT(internal)\n\tGET_AND_EMIT_MEM_STAT(metadata_thp)\n\tGET_AND_EMIT_MEM_STAT(tcache_bytes)\n\tGET_AND_EMIT_MEM_STAT(tcache_stashed_bytes)\n\tGET_AND_EMIT_MEM_STAT(resident)\n\tGET_AND_EMIT_MEM_STAT(abandoned_vm)\n\tGET_AND_EMIT_MEM_STAT(extent_avail)\n#undef GET_AND_EMIT_MEM_STAT\n\n\tif (mutex) {\n\t\tstats_arena_mutexes_print(emitter, i, uptime);\n\t}\n\tif (bins) {\n\t\tstats_arena_bins_print(emitter, mutex, i, uptime);\n\t}\n\tif (large) {\n\t\tstats_arena_lextents_print(emitter, i, uptime);\n\t}\n\tif (extents) {\n\t\tstats_arena_extents_print(emitter, i);\n\t}\n\tif (hpa) {\n\t\tstats_arena_hpa_shard_print(emitter, i, uptime);\n\t}\n}\n\nJEMALLOC_COLD\nstatic void\nstats_general_print(emitter_t *emitter) {\n\tconst char *cpv;\n\tbool bv, bv2;\n\tunsigned uv;\n\tuint32_t u32v;\n\tuint64_t u64v;\n\tint64_t i64v;\n\tssize_t ssv, ssv2;\n\tsize_t sv, bsz, usz, u32sz, u64sz, i64sz, ssz, sssz, cpsz;\n\n\tbsz = sizeof(bool);\n\tusz = sizeof(unsigned);\n\tssz = sizeof(size_t);\n\tsssz = sizeof(ssize_t);\n\tcpsz = sizeof(const char *);\n\tu32sz = sizeof(uint32_t);\n\ti64sz = sizeof(int64_t);\n\tu64sz = sizeof(uint64_t);\n\n\tCTL_GET(\"version\", &cpv, const char *);\n\temitter_kv(emitter, \"version\", \"Version\", emitter_type_string, &cpv);\n\n\t/* config. 
*/\n\temitter_dict_begin(emitter, \"config\", \"Build-time option settings\");\n#define CONFIG_WRITE_BOOL(name)\t\t\t\t\t\t\\\n\tdo {\t\t\t\t\t\t\t\t\\\n\t\tCTL_GET(\"config.\"#name, &bv, bool);\t\t\t\\\n\t\temitter_kv(emitter, #name, \"config.\"#name,\t\t\\\n\t\t    emitter_type_bool, &bv);\t\t\t\t\\\n\t} while (0)\n\n\tCONFIG_WRITE_BOOL(cache_oblivious);\n\tCONFIG_WRITE_BOOL(debug);\n\tCONFIG_WRITE_BOOL(fill);\n\tCONFIG_WRITE_BOOL(lazy_lock);\n\temitter_kv(emitter, \"malloc_conf\", \"config.malloc_conf\",\n\t    emitter_type_string, &config_malloc_conf);\n\n\tCONFIG_WRITE_BOOL(opt_safety_checks);\n\tCONFIG_WRITE_BOOL(prof);\n\tCONFIG_WRITE_BOOL(prof_libgcc);\n\tCONFIG_WRITE_BOOL(prof_libunwind);\n\tCONFIG_WRITE_BOOL(stats);\n\tCONFIG_WRITE_BOOL(utrace);\n\tCONFIG_WRITE_BOOL(xmalloc);\n#undef CONFIG_WRITE_BOOL\n\temitter_dict_end(emitter); /* Close \"config\" dict. */\n\n\t/* opt. */\n#define OPT_WRITE(name, var, size, emitter_type)\t\t\t\\\n\tif (je_mallctl(\"opt.\"name, (void *)&var, &size, NULL, 0) ==\t\\\n\t    0) {\t\t\t\t\t\t\t\\\n\t\temitter_kv(emitter, name, \"opt.\"name, emitter_type,\t\\\n\t\t    &var);\t\t\t\t\t\t\\\n\t}\n\n#define OPT_WRITE_MUTABLE(name, var1, var2, size, emitter_type,\t\t\\\n    altname)\t\t\t\t\t\t\t\t\\\n\tif (je_mallctl(\"opt.\"name, (void *)&var1, &size, NULL, 0) ==\t\\\n\t    0 && je_mallctl(altname, (void *)&var2, &size, NULL, 0)\t\\\n\t    == 0) {\t\t\t\t\t\t\t\\\n\t\temitter_kv_note(emitter, name, \"opt.\"name,\t\t\\\n\t\t    emitter_type, &var1, altname, emitter_type,\t\t\\\n\t\t    &var2);\t\t\t\t\t\t\\\n\t}\n\n#define OPT_WRITE_BOOL(name) OPT_WRITE(name, bv, bsz, emitter_type_bool)\n#define OPT_WRITE_BOOL_MUTABLE(name, altname)\t\t\t\t\\\n\tOPT_WRITE_MUTABLE(name, bv, bv2, bsz, emitter_type_bool, altname)\n\n#define OPT_WRITE_UNSIGNED(name)\t\t\t\t\t\\\n\tOPT_WRITE(name, uv, usz, emitter_type_unsigned)\n\n#define OPT_WRITE_INT64(name)\t\t\t\t\t\t\\\n\tOPT_WRITE(name, i64v, i64sz, emitter_type_int64)\n#define 
OPT_WRITE_UINT64(name)\t\t\t\t\t\t\\\n\tOPT_WRITE(name, u64v, u64sz, emitter_type_uint64)\n\n#define OPT_WRITE_SIZE_T(name)\t\t\t\t\t\t\\\n\tOPT_WRITE(name, sv, ssz, emitter_type_size)\n#define OPT_WRITE_SSIZE_T(name)\t\t\t\t\t\t\\\n\tOPT_WRITE(name, ssv, sssz, emitter_type_ssize)\n#define OPT_WRITE_SSIZE_T_MUTABLE(name, altname)\t\t\t\\\n\tOPT_WRITE_MUTABLE(name, ssv, ssv2, sssz, emitter_type_ssize,\t\\\n\t    altname)\n\n#define OPT_WRITE_CHAR_P(name)\t\t\t\t\t\t\\\n\tOPT_WRITE(name, cpv, cpsz, emitter_type_string)\n\n\temitter_dict_begin(emitter, \"opt\", \"Run-time option settings\");\n\n\tOPT_WRITE_BOOL(\"abort\")\n\tOPT_WRITE_BOOL(\"abort_conf\")\n\tOPT_WRITE_BOOL(\"cache_oblivious\")\n\tOPT_WRITE_BOOL(\"confirm_conf\")\n\tOPT_WRITE_BOOL(\"retain\")\n\tOPT_WRITE_CHAR_P(\"dss\")\n\tOPT_WRITE_UNSIGNED(\"narenas\")\n\tOPT_WRITE_CHAR_P(\"percpu_arena\")\n\tOPT_WRITE_SIZE_T(\"oversize_threshold\")\n\tOPT_WRITE_BOOL(\"hpa\")\n\tOPT_WRITE_SIZE_T(\"hpa_slab_max_alloc\")\n\tOPT_WRITE_SIZE_T(\"hpa_hugification_threshold\")\n\tOPT_WRITE_UINT64(\"hpa_hugify_delay_ms\")\n\tOPT_WRITE_UINT64(\"hpa_min_purge_interval_ms\")\n\tif (je_mallctl(\"opt.hpa_dirty_mult\", (void *)&u32v, &u32sz, NULL, 0)\n\t    == 0) {\n\t\t/*\n\t\t * We cheat a little and \"know\" the secret meaning of this\n\t\t * representation.\n\t\t */\n\t\tif (u32v == (uint32_t)-1) {\n\t\t\tconst char *neg1 = \"-1\";\n\t\t\temitter_kv(emitter, \"hpa_dirty_mult\",\n\t\t\t    \"opt.hpa_dirty_mult\", emitter_type_string, &neg1);\n\t\t} else {\n\t\t\tchar buf[FXP_BUF_SIZE];\n\t\t\tfxp_print(u32v, buf);\n\t\t\tconst char *bufp = buf;\n\t\t\temitter_kv(emitter, \"hpa_dirty_mult\",\n\t\t\t    \"opt.hpa_dirty_mult\", emitter_type_string, 
&bufp);\n\t\t}\n\t}\n\tOPT_WRITE_SIZE_T(\"hpa_sec_nshards\")\n\tOPT_WRITE_SIZE_T(\"hpa_sec_max_alloc\")\n\tOPT_WRITE_SIZE_T(\"hpa_sec_max_bytes\")\n\tOPT_WRITE_SIZE_T(\"hpa_sec_bytes_after_flush\")\n\tOPT_WRITE_SIZE_T(\"hpa_sec_batch_fill_extra\")\n\tOPT_WRITE_CHAR_P(\"metadata_thp\")\n\tOPT_WRITE_INT64(\"mutex_max_spin\")\n\tOPT_WRITE_BOOL_MUTABLE(\"background_thread\", \"background_thread\")\n\tOPT_WRITE_SSIZE_T_MUTABLE(\"dirty_decay_ms\", \"arenas.dirty_decay_ms\")\n\tOPT_WRITE_SSIZE_T_MUTABLE(\"muzzy_decay_ms\", \"arenas.muzzy_decay_ms\")\n\tOPT_WRITE_SIZE_T(\"lg_extent_max_active_fit\")\n\tOPT_WRITE_CHAR_P(\"junk\")\n\tOPT_WRITE_BOOL(\"zero\")\n\tOPT_WRITE_BOOL(\"utrace\")\n\tOPT_WRITE_BOOL(\"xmalloc\")\n\tOPT_WRITE_BOOL(\"experimental_infallible_new\")\n\tOPT_WRITE_BOOL(\"tcache\")\n\tOPT_WRITE_SIZE_T(\"tcache_max\")\n\tOPT_WRITE_UNSIGNED(\"tcache_nslots_small_min\")\n\tOPT_WRITE_UNSIGNED(\"tcache_nslots_small_max\")\n\tOPT_WRITE_UNSIGNED(\"tcache_nslots_large\")\n\tOPT_WRITE_SSIZE_T(\"lg_tcache_nslots_mul\")\n\tOPT_WRITE_SIZE_T(\"tcache_gc_incr_bytes\")\n\tOPT_WRITE_SIZE_T(\"tcache_gc_delay_bytes\")\n\tOPT_WRITE_UNSIGNED(\"lg_tcache_flush_small_div\")\n\tOPT_WRITE_UNSIGNED(\"lg_tcache_flush_large_div\")\n\tOPT_WRITE_CHAR_P(\"thp\")\n\tOPT_WRITE_BOOL(\"prof\")\n\tOPT_WRITE_CHAR_P(\"prof_prefix\")\n\tOPT_WRITE_BOOL_MUTABLE(\"prof_active\", \"prof.active\")\n\tOPT_WRITE_BOOL_MUTABLE(\"prof_thread_active_init\",\n\t    \"prof.thread_active_init\")\n\tOPT_WRITE_SSIZE_T_MUTABLE(\"lg_prof_sample\", 
\"prof.lg_sample\")\n\tOPT_WRITE_BOOL(\"prof_accum\")\n\tOPT_WRITE_SSIZE_T(\"lg_prof_interval\")\n\tOPT_WRITE_BOOL(\"prof_gdump\")\n\tOPT_WRITE_BOOL(\"prof_final\")\n\tOPT_WRITE_BOOL(\"prof_leak\")\n\tOPT_WRITE_BOOL(\"prof_leak_error\")\n\tOPT_WRITE_BOOL(\"stats_print\")\n\tOPT_WRITE_CHAR_P(\"stats_print_opts\")\n\tOPT_WRITE_INT64(\"stats_interval\")\n\tOPT_WRITE_CHAR_P(\"stats_interval_opts\")\n\tOPT_WRITE_CHAR_P(\"zero_realloc\")\n\n\temitter_dict_end(emitter);\n\n#undef OPT_WRITE\n#undef OPT_WRITE_MUTABLE\n#undef OPT_WRITE_BOOL\n#undef OPT_WRITE_BOOL_MUTABLE\n#undef OPT_WRITE_UNSIGNED\n#undef OPT_WRITE_SSIZE_T\n#undef OPT_WRITE_SSIZE_T_MUTABLE\n#undef OPT_WRITE_CHAR_P\n\n\t/* prof. */\n\tif (config_prof) {\n\t\temitter_dict_begin(emitter, \"prof\", \"Profiling settings\");\n\n\t\tCTL_GET(\"prof.thread_active_init\", &bv, bool);\n\t\temitter_kv(emitter, \"thread_active_init\",\n\t\t    \"prof.thread_active_init\", emitter_type_bool, &bv);\n\n\t\tCTL_GET(\"prof.active\", &bv, bool);\n\t\temitter_kv(emitter, \"active\", \"prof.active\", emitter_type_bool,\n\t\t    &bv);\n\n\t\tCTL_GET(\"prof.gdump\", &bv, bool);\n\t\temitter_kv(emitter, \"gdump\", \"prof.gdump\", emitter_type_bool,\n\t\t    &bv);\n\n\t\tCTL_GET(\"prof.interval\", &u64v, uint64_t);\n\t\temitter_kv(emitter, \"interval\", \"prof.interval\",\n\t\t    emitter_type_uint64, &u64v);\n\n\t\tCTL_GET(\"prof.lg_sample\", &ssv, ssize_t);\n\t\temitter_kv(emitter, \"lg_sample\", \"prof.lg_sample\",\n\t\t    emitter_type_ssize, &ssv);\n\n\t\temitter_dict_end(emitter); /* Close \"prof\". */\n\t}\n\n\t/* arenas. 
*/\n\t/*\n\t * The json output sticks arena info into an \"arenas\" dict; the table\n\t * output puts them at the top-level.\n\t */\n\temitter_json_object_kv_begin(emitter, \"arenas\");\n\n\tCTL_GET(\"arenas.narenas\", &uv, unsigned);\n\temitter_kv(emitter, \"narenas\", \"Arenas\", emitter_type_unsigned, &uv);\n\n\t/*\n\t * Decay settings are emitted only in json mode; in table mode, they're\n\t * emitted as notes with the opt output, above.\n\t */\n\tCTL_GET(\"arenas.dirty_decay_ms\", &ssv, ssize_t);\n\temitter_json_kv(emitter, \"dirty_decay_ms\", emitter_type_ssize, &ssv);\n\n\tCTL_GET(\"arenas.muzzy_decay_ms\", &ssv, ssize_t);\n\temitter_json_kv(emitter, \"muzzy_decay_ms\", emitter_type_ssize, &ssv);\n\n\tCTL_GET(\"arenas.quantum\", &sv, size_t);\n\temitter_kv(emitter, \"quantum\", \"Quantum size\", emitter_type_size, &sv);\n\n\tCTL_GET(\"arenas.page\", &sv, size_t);\n\temitter_kv(emitter, \"page\", \"Page size\", emitter_type_size, &sv);\n\n\tif (je_mallctl(\"arenas.tcache_max\", (void *)&sv, &ssz, NULL, 0) == 0) {\n\t\temitter_kv(emitter, \"tcache_max\",\n\t\t    \"Maximum thread-cached size class\", emitter_type_size, &sv);\n\t}\n\n\tunsigned arenas_nbins;\n\tCTL_GET(\"arenas.nbins\", &arenas_nbins, unsigned);\n\temitter_kv(emitter, \"nbins\", \"Number of bin size classes\",\n\t    emitter_type_unsigned, &arenas_nbins);\n\n\tunsigned arenas_nhbins;\n\tCTL_GET(\"arenas.nhbins\", &arenas_nhbins, unsigned);\n\temitter_kv(emitter, \"nhbins\", \"Number of thread-cache bin size classes\",\n\t    emitter_type_unsigned, &arenas_nhbins);\n\n\t/*\n\t * We do enough mallctls in a loop that we actually want to omit them\n\t * (not just omit the printing).\n\t */\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_array_kv_begin(emitter, \"bin\");\n\t\tsize_t arenas_bin_mib[CTL_MAX_DEPTH];\n\t\tCTL_LEAF_PREPARE(arenas_bin_mib, 0, \"arenas.bin\");\n\t\tfor (unsigned i = 0; i < arenas_nbins; i++) {\n\t\t\tarenas_bin_mib[2] = 
i;\n\t\t\temitter_json_object_begin(emitter);\n\n\t\t\tCTL_LEAF(arenas_bin_mib, 3, \"size\", &sv, size_t);\n\t\t\temitter_json_kv(emitter, \"size\", emitter_type_size,\n\t\t\t    &sv);\n\n\t\t\tCTL_LEAF(arenas_bin_mib, 3, \"nregs\", &u32v, uint32_t);\n\t\t\temitter_json_kv(emitter, \"nregs\", emitter_type_uint32,\n\t\t\t    &u32v);\n\n\t\t\tCTL_LEAF(arenas_bin_mib, 3, \"slab_size\", &sv, size_t);\n\t\t\temitter_json_kv(emitter, \"slab_size\", emitter_type_size,\n\t\t\t    &sv);\n\n\t\t\tCTL_LEAF(arenas_bin_mib, 3, \"nshards\", &u32v, uint32_t);\n\t\t\temitter_json_kv(emitter, \"nshards\", emitter_type_uint32,\n\t\t\t    &u32v);\n\n\t\t\temitter_json_object_end(emitter);\n\t\t}\n\t\temitter_json_array_end(emitter); /* Close \"bin\". */\n\t}\n\n\tunsigned nlextents;\n\tCTL_GET(\"arenas.nlextents\", &nlextents, unsigned);\n\temitter_kv(emitter, \"nlextents\", \"Number of large size classes\",\n\t    emitter_type_unsigned, &nlextents);\n\n\tif (emitter_outputs_json(emitter)) {\n\t\temitter_json_array_kv_begin(emitter, \"lextent\");\n\t\tsize_t arenas_lextent_mib[CTL_MAX_DEPTH];\n\t\tCTL_LEAF_PREPARE(arenas_lextent_mib, 0, \"arenas.lextent\");\n\t\tfor (unsigned i = 0; i < nlextents; i++) {\n\t\t\tarenas_lextent_mib[2] = i;\n\t\t\temitter_json_object_begin(emitter);\n\n\t\t\tCTL_LEAF(arenas_lextent_mib, 3, \"size\", &sv, size_t);\n\t\t\temitter_json_kv(emitter, \"size\", emitter_type_size,\n\t\t\t    &sv);\n\n\t\t\temitter_json_object_end(emitter);\n\t\t}\n\t\temitter_json_array_end(emitter); /* Close \"lextent\". */\n\t}\n\n\temitter_json_object_end(emitter); /* Close \"arenas\" */\n}\n\nJEMALLOC_COLD\nstatic void\nstats_print_helper(emitter_t *emitter, bool merged, bool destroyed,\n    bool unmerged, bool bins, bool large, bool mutex, bool extents, bool hpa) {\n\t/*\n\t * These should be deleted.  
We keep them around for a while, to aid in\n\t * the transition to the emitter code.\n\t */\n\tsize_t allocated, active, metadata, metadata_thp, resident, mapped,\n\t    retained;\n\tsize_t num_background_threads;\n\tsize_t zero_reallocs;\n\tuint64_t background_thread_num_runs, background_thread_run_interval;\n\n\tCTL_GET(\"stats.allocated\", &allocated, size_t);\n\tCTL_GET(\"stats.active\", &active, size_t);\n\tCTL_GET(\"stats.metadata\", &metadata, size_t);\n\tCTL_GET(\"stats.metadata_thp\", &metadata_thp, size_t);\n\tCTL_GET(\"stats.resident\", &resident, size_t);\n\tCTL_GET(\"stats.mapped\", &mapped, size_t);\n\tCTL_GET(\"stats.retained\", &retained, size_t);\n\n\tCTL_GET(\"stats.zero_reallocs\", &zero_reallocs, size_t);\n\n\tif (have_background_thread) {\n\t\tCTL_GET(\"stats.background_thread.num_threads\",\n\t\t    &num_background_threads, size_t);\n\t\tCTL_GET(\"stats.background_thread.num_runs\",\n\t\t    &background_thread_num_runs, uint64_t);\n\t\tCTL_GET(\"stats.background_thread.run_interval\",\n\t\t    &background_thread_run_interval, uint64_t);\n\t} else {\n\t\tnum_background_threads = 0;\n\t\tbackground_thread_num_runs = 0;\n\t\tbackground_thread_run_interval = 0;\n\t}\n\n\t/* Generic global stats. 
*/\n\temitter_json_object_kv_begin(emitter, \"stats\");\n\temitter_json_kv(emitter, \"allocated\", emitter_type_size, &allocated);\n\temitter_json_kv(emitter, \"active\", emitter_type_size, &active);\n\temitter_json_kv(emitter, \"metadata\", emitter_type_size, &metadata);\n\temitter_json_kv(emitter, \"metadata_thp\", emitter_type_size,\n\t    &metadata_thp);\n\temitter_json_kv(emitter, \"resident\", emitter_type_size, &resident);\n\temitter_json_kv(emitter, \"mapped\", emitter_type_size, &mapped);\n\temitter_json_kv(emitter, \"retained\", emitter_type_size, &retained);\n\temitter_json_kv(emitter, \"zero_reallocs\", emitter_type_size,\n\t    &zero_reallocs);\n\n\temitter_table_printf(emitter, \"Allocated: %zu, active: %zu, \"\n\t    \"metadata: %zu (n_thp %zu), resident: %zu, mapped: %zu, \"\n\t    \"retained: %zu\\n\", allocated, active, metadata, metadata_thp,\n\t    resident, mapped, retained);\n\n\t/* Strange behaviors */\n\temitter_table_printf(emitter,\n\t    \"Count of realloc(non-null-ptr, 0) calls: %zu\\n\", zero_reallocs);\n\n\t/* Background thread stats. */\n\temitter_json_object_kv_begin(emitter, \"background_thread\");\n\temitter_json_kv(emitter, \"num_threads\", emitter_type_size,\n\t    &num_background_threads);\n\temitter_json_kv(emitter, \"num_runs\", emitter_type_uint64,\n\t    &background_thread_num_runs);\n\temitter_json_kv(emitter, \"run_interval\", emitter_type_uint64,\n\t    &background_thread_run_interval);\n\temitter_json_object_end(emitter); /* Close \"background_thread\". 
*/\n\n\temitter_table_printf(emitter, \"Background threads: %zu, \"\n\t    \"num_runs: %\"FMTu64\", run_interval: %\"FMTu64\" ns\\n\",\n\t    num_background_threads, background_thread_num_runs,\n\t    background_thread_run_interval);\n\n\tif (mutex) {\n\t\temitter_row_t row;\n\t\temitter_col_t name;\n\t\temitter_col_t col64[mutex_prof_num_uint64_t_counters];\n\t\temitter_col_t col32[mutex_prof_num_uint32_t_counters];\n\t\tuint64_t uptime;\n\n\t\temitter_row_init(&row);\n\t\tmutex_stats_init_cols(&row, \"\", &name, col64, col32);\n\n\t\temitter_table_row(emitter, &row);\n\t\temitter_json_object_kv_begin(emitter, \"mutexes\");\n\n\t\tCTL_M2_GET(\"stats.arenas.0.uptime\", 0, &uptime, uint64_t);\n\n\t\tsize_t stats_mutexes_mib[CTL_MAX_DEPTH];\n\t\tCTL_LEAF_PREPARE(stats_mutexes_mib, 0, \"stats.mutexes\");\n\t\tfor (int i = 0; i < mutex_prof_num_global_mutexes; i++) {\n\t\t\tmutex_stats_read_global(stats_mutexes_mib, 2,\n\t\t\t    global_mutex_names[i], &name, col64, col32, uptime);\n\t\t\temitter_json_object_kv_begin(emitter, global_mutex_names[i]);\n\t\t\tmutex_stats_emit(emitter, &row, col64, col32);\n\t\t\temitter_json_object_end(emitter);\n\t\t}\n\n\t\temitter_json_object_end(emitter); /* Close \"mutexes\". */\n\t}\n\n\temitter_json_object_end(emitter); /* Close \"stats\". 
*/\n\n\tif (merged || destroyed || unmerged) {\n\t\tunsigned narenas;\n\n\t\temitter_json_object_kv_begin(emitter, \"stats.arenas\");\n\n\t\tCTL_GET(\"arenas.narenas\", &narenas, unsigned);\n\t\tsize_t mib[3];\n\t\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\t\tsize_t sz;\n\t\tVARIABLE_ARRAY(bool, initialized, narenas);\n\t\tbool destroyed_initialized;\n\t\tunsigned i, j, ninitialized;\n\n\t\txmallctlnametomib(\"arena.0.initialized\", mib, &miblen);\n\t\tfor (i = ninitialized = 0; i < narenas; i++) {\n\t\t\tmib[1] = i;\n\t\t\tsz = sizeof(bool);\n\t\t\txmallctlbymib(mib, miblen, &initialized[i], &sz,\n\t\t\t    NULL, 0);\n\t\t\tif (initialized[i]) {\n\t\t\t\tninitialized++;\n\t\t\t}\n\t\t}\n\t\tmib[1] = MALLCTL_ARENAS_DESTROYED;\n\t\tsz = sizeof(bool);\n\t\txmallctlbymib(mib, miblen, &destroyed_initialized, &sz,\n\t\t    NULL, 0);\n\n\t\t/* Merged stats. */\n\t\tif (merged && (ninitialized > 1 || !unmerged)) {\n\t\t\t/* Print merged arena stats. */\n\t\t\temitter_table_printf(emitter, \"Merged arenas stats:\\n\");\n\t\t\temitter_json_object_kv_begin(emitter, \"merged\");\n\t\t\tstats_arena_print(emitter, MALLCTL_ARENAS_ALL, bins,\n\t\t\t    large, mutex, extents, hpa);\n\t\t\temitter_json_object_end(emitter); /* Close \"merged\". */\n\t\t}\n\n\t\t/* Destroyed stats. */\n\t\tif (destroyed_initialized && destroyed) {\n\t\t\t/* Print destroyed arena stats. */\n\t\t\temitter_table_printf(emitter,\n\t\t\t    \"Destroyed arenas stats:\\n\");\n\t\t\temitter_json_object_kv_begin(emitter, \"destroyed\");\n\t\t\tstats_arena_print(emitter, MALLCTL_ARENAS_DESTROYED,\n\t\t\t    bins, large, mutex, extents, hpa);\n\t\t\temitter_json_object_end(emitter); /* Close \"destroyed\". */\n\t\t}\n\n\t\t/* Unmerged stats. 
*/\n\t\tif (unmerged) {\n\t\t\tfor (i = j = 0; i < narenas; i++) {\n\t\t\t\tif (initialized[i]) {\n\t\t\t\t\tchar arena_ind_str[20];\n\t\t\t\t\tmalloc_snprintf(arena_ind_str,\n\t\t\t\t\t    sizeof(arena_ind_str), \"%u\", i);\n\t\t\t\t\temitter_json_object_kv_begin(emitter,\n\t\t\t\t\t    arena_ind_str);\n\t\t\t\t\temitter_table_printf(emitter,\n\t\t\t\t\t    \"arenas[%s]:\\n\", arena_ind_str);\n\t\t\t\t\tstats_arena_print(emitter, i, bins,\n\t\t\t\t\t    large, mutex, extents, hpa);\n\t\t\t\t\t/* Close \"<arena-ind>\". */\n\t\t\t\t\temitter_json_object_end(emitter);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\temitter_json_object_end(emitter); /* Close \"stats.arenas\". */\n\t}\n}\n\nvoid\nstats_print(write_cb_t *write_cb, void *cbopaque, const char *opts) {\n\tint err;\n\tuint64_t epoch;\n\tsize_t u64sz;\n#define OPTION(o, v, d, s) bool v = d;\n\tSTATS_PRINT_OPTIONS\n#undef OPTION\n\n\t/*\n\t * Refresh stats, in case mallctl() was called by the application.\n\t *\n\t * Check for OOM here, since refreshing the ctl cache can trigger\n\t * allocation.  In practice, none of the subsequent mallctl()-related\n\t * calls in this function will cause OOM if this one succeeds.\n\t * */\n\tepoch = 1;\n\tu64sz = sizeof(uint64_t);\n\terr = je_mallctl(\"epoch\", (void *)&epoch, &u64sz, (void *)&epoch,\n\t    sizeof(uint64_t));\n\tif (err != 0) {\n\t\tif (err == EAGAIN) {\n\t\t\tmalloc_write(\"<jemalloc>: Memory allocation failure in \"\n\t\t\t    \"mallctl(\\\"epoch\\\", ...)\\n\");\n\t\t\treturn;\n\t\t}\n\t\tmalloc_write(\"<jemalloc>: Failure in mallctl(\\\"epoch\\\", \"\n\t\t    \"...)\\n\");\n\t\tabort();\n\t}\n\n\tif (opts != NULL) {\n\t\tfor (unsigned i = 0; opts[i] != '\\0'; i++) {\n\t\t\tswitch (opts[i]) {\n#define OPTION(o, v, d, s) case o: v = s; break;\n\t\t\t\tSTATS_PRINT_OPTIONS\n#undef OPTION\n\t\t\tdefault:;\n\t\t\t}\n\t\t}\n\t}\n\n\temitter_t emitter;\n\temitter_init(&emitter,\n\t    json ? 
emitter_output_json_compact : emitter_output_table,\n\t    write_cb, cbopaque);\n\temitter_begin(&emitter);\n\temitter_table_printf(&emitter, \"___ Begin jemalloc statistics ___\\n\");\n\temitter_json_object_kv_begin(&emitter, \"jemalloc\");\n\n\tif (general) {\n\t\tstats_general_print(&emitter);\n\t}\n\tif (config_stats) {\n\t\tstats_print_helper(&emitter, merged, destroyed, unmerged,\n\t\t    bins, large, mutex, extents, hpa);\n\t}\n\n\temitter_json_object_end(&emitter); /* Closes the \"jemalloc\" dict. */\n\temitter_table_printf(&emitter, \"--- End jemalloc statistics ---\\n\");\n\temitter_end(&emitter);\n}\n\nuint64_t\nstats_interval_new_event_wait(tsd_t *tsd) {\n\treturn stats_interval_accum_batch;\n}\n\nuint64_t\nstats_interval_postponed_event_wait(tsd_t *tsd) {\n\treturn TE_MIN_START_WAIT;\n}\n\nvoid\nstats_interval_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tassert(elapsed > 0 && elapsed != TE_INVALID_ELAPSED);\n\tif (counter_accum(tsd_tsdn(tsd), &stats_interval_accumulated,\n\t    elapsed)) {\n\t\tje_malloc_stats_print(NULL, NULL, opt_stats_interval_opts);\n\t}\n}\n\nbool\nstats_boot(void) {\n\tuint64_t stats_interval;\n\tif (opt_stats_interval < 0) {\n\t\tassert(opt_stats_interval == -1);\n\t\tstats_interval = 0;\n\t\tstats_interval_accum_batch = 0;\n\t} else{\n\t\t/* See comments in stats.h */\n\t\tstats_interval = (opt_stats_interval > 0) ?\n\t\t    opt_stats_interval : 1;\n\t\tuint64_t batch = stats_interval >>\n\t\t    STATS_INTERVAL_ACCUM_LG_BATCH_SIZE;\n\t\tif (batch > STATS_INTERVAL_ACCUM_BATCH_MAX) {\n\t\t\tbatch = STATS_INTERVAL_ACCUM_BATCH_MAX;\n\t\t} else if (batch == 0) {\n\t\t\tbatch = 1;\n\t\t}\n\t\tstats_interval_accum_batch = batch;\n\t}\n\n\treturn counter_accum_init(&stats_interval_accumulated, stats_interval);\n}\n\nvoid\nstats_prefork(tsdn_t *tsdn) {\n\tcounter_prefork(tsdn, &stats_interval_accumulated);\n}\n\nvoid\nstats_postfork_parent(tsdn_t *tsdn) {\n\tcounter_postfork_parent(tsdn, 
&stats_interval_accumulated);\n}\n\nvoid\nstats_postfork_child(tsdn_t *tsdn) {\n\tcounter_postfork_child(tsdn, &stats_interval_accumulated);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/sz.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n#include \"jemalloc/internal/sz.h\"\n\nJEMALLOC_ALIGNED(CACHELINE)\nsize_t sz_pind2sz_tab[SC_NPSIZES+1];\nsize_t sz_large_pad;\n\nsize_t\nsz_psz_quantize_floor(size_t size) {\n\tsize_t ret;\n\tpszind_t pind;\n\n\tassert(size > 0);\n\tassert((size & PAGE_MASK) == 0);\n\n\tpind = sz_psz2ind(size - sz_large_pad + 1);\n\tif (pind == 0) {\n\t\t/*\n\t\t * Avoid underflow.  This short-circuit would also do the right\n\t\t * thing for all sizes in the range for which there are\n\t\t * PAGE-spaced size classes, but it's simplest to just handle\n\t\t * the one case that would cause erroneous results.\n\t\t */\n\t\treturn size;\n\t}\n\tret = sz_pind2sz(pind - 1) + sz_large_pad;\n\tassert(ret <= size);\n\treturn ret;\n}\n\nsize_t\nsz_psz_quantize_ceil(size_t size) {\n\tsize_t ret;\n\n\tassert(size > 0);\n\tassert(size - sz_large_pad <= SC_LARGE_MAXCLASS);\n\tassert((size & PAGE_MASK) == 0);\n\n\tret = sz_psz_quantize_floor(size);\n\tif (ret < size) {\n\t\t/*\n\t\t * Skip a quantization that may have an adequately large extent,\n\t\t * because under-sized extents may be mixed in.  This only\n\t\t * happens when an unusual size is requested, i.e. 
for aligned\n\t\t * allocation, and is just one of several places where linear\n\t\t * search would potentially find sufficiently aligned available\n\t\t * memory somewhere lower.\n\t\t */\n\t\tret = sz_pind2sz(sz_psz2ind(ret - sz_large_pad + 1)) +\n\t\t    sz_large_pad;\n\t}\n\treturn ret;\n}\n\nstatic void\nsz_boot_pind2sz_tab(const sc_data_t *sc_data) {\n\tint pind = 0;\n\tfor (unsigned i = 0; i < SC_NSIZES; i++) {\n\t\tconst sc_t *sc = &sc_data->sc[i];\n\t\tif (sc->psz) {\n\t\t\tsz_pind2sz_tab[pind] = (ZU(1) << sc->lg_base)\n\t\t\t    + (ZU(sc->ndelta) << sc->lg_delta);\n\t\t\tpind++;\n\t\t}\n\t}\n\tfor (int i = pind; i <= (int)SC_NPSIZES; i++) {\n\t\tsz_pind2sz_tab[pind] = sc_data->large_maxclass + PAGE;\n\t}\n}\n\nJEMALLOC_ALIGNED(CACHELINE)\nsize_t sz_index2size_tab[SC_NSIZES];\n\nstatic void\nsz_boot_index2size_tab(const sc_data_t *sc_data) {\n\tfor (unsigned i = 0; i < SC_NSIZES; i++) {\n\t\tconst sc_t *sc = &sc_data->sc[i];\n\t\tsz_index2size_tab[i] = (ZU(1) << sc->lg_base)\n\t\t    + (ZU(sc->ndelta) << (sc->lg_delta));\n\t}\n}\n\n/*\n * To keep this table small, we divide sizes by the tiny min size, which gives\n * the smallest interval for which the result can change.\n */\nJEMALLOC_ALIGNED(CACHELINE)\nuint8_t sz_size2index_tab[(SC_LOOKUP_MAXCLASS >> SC_LG_TINY_MIN) + 1];\n\nstatic void\nsz_boot_size2index_tab(const sc_data_t *sc_data) {\n\tsize_t dst_max = (SC_LOOKUP_MAXCLASS >> SC_LG_TINY_MIN) + 1;\n\tsize_t dst_ind = 0;\n\tfor (unsigned sc_ind = 0; sc_ind < SC_NSIZES && dst_ind < dst_max;\n\t    sc_ind++) {\n\t\tconst sc_t *sc = &sc_data->sc[sc_ind];\n\t\tsize_t sz = (ZU(1) << sc->lg_base)\n\t\t    + (ZU(sc->ndelta) << sc->lg_delta);\n\t\tsize_t max_ind = ((sz + (ZU(1) << SC_LG_TINY_MIN) - 1)\n\t\t\t\t   >> SC_LG_TINY_MIN);\n\t\tfor (; dst_ind <= max_ind && dst_ind < dst_max; dst_ind++) {\n\t\t\tsz_size2index_tab[dst_ind] = sc_ind;\n\t\t}\n\t}\n}\n\nvoid\nsz_boot(const sc_data_t *sc_data, bool cache_oblivious) {\n\tsz_large_pad = cache_oblivious ? 
PAGE : 0;\n\tsz_boot_pind2sz_tab(sc_data);\n\tsz_boot_index2size_tab(sc_data);\n\tsz_boot_size2index_tab(sc_data);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/tcache.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/safety_check.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/sc.h\"\n\n/******************************************************************************/\n/* Data. */\n\nbool opt_tcache = true;\n\n/* tcache_maxclass is set to 32KB by default.  */\nsize_t opt_tcache_max = ((size_t)1) << 15;\n\n/* Reasonable defaults for min and max values. */\nunsigned opt_tcache_nslots_small_min = 20;\nunsigned opt_tcache_nslots_small_max = 200;\nunsigned opt_tcache_nslots_large = 20;\n\n/*\n * We attempt to make the number of slots in a tcache bin for a given size class\n * equal to the number of objects in a slab times some multiplier.  By default,\n * the multiplier is 2 (i.e. we set the maximum number of objects in the tcache\n * to twice the number of objects in a slab).\n * This is bounded by some other constraints as well, like the fact that it\n * must be even, must be less than opt_tcache_nslots_small_max, etc..\n */\nssize_t\topt_lg_tcache_nslots_mul = 1;\n\n/*\n * Number of allocation bytes between tcache incremental GCs.  Again, this\n * default just seems to work well; more tuning is possible.\n */\nsize_t opt_tcache_gc_incr_bytes = 65536;\n\n/*\n * With default settings, we may end up flushing small bins frequently with\n * small flush amounts.  To limit this tendency, we can set a number of bytes to\n * \"delay\" by.  If we try to flush N M-byte items, we decrease that size-class's\n * delay by N * M.  So, if delay is 1024 and we're looking at the 64-byte size\n * class, we won't do any flushing until we've been asked to flush 1024/64 == 16\n * items.  This can happen in any configuration (i.e. 
being asked to flush 16\n * items once, or 4 items 4 times).\n *\n * Practically, this is stored as a count of items in a uint8_t, so the\n * effective maximum value for a size class is 255 * sz.\n */\nsize_t opt_tcache_gc_delay_bytes = 0;\n\n/*\n * When a cache bin is flushed because it's full, how much of it do we flush?\n * By default, we flush half the maximum number of items.\n */\nunsigned opt_lg_tcache_flush_small_div = 1;\nunsigned opt_lg_tcache_flush_large_div = 1;\n\ncache_bin_info_t\t*tcache_bin_info;\n\n/* Total stack size required (per tcache).  Include the padding above. */\nstatic size_t tcache_bin_alloc_size;\nstatic size_t tcache_bin_alloc_alignment;\n\n/* Number of cache bins enabled, including both large and small. */\nunsigned\t\tnhbins;\n/* Max size class to be cached (can be small or large). */\nsize_t\t\t\ttcache_maxclass;\n\ntcaches_t\t\t*tcaches;\n\n/* Index of first element within tcaches that has never been used. */\nstatic unsigned\t\ttcaches_past;\n\n/* Head of singly linked list tracking available tcaches elements. */\nstatic tcaches_t\t*tcaches_avail;\n\n/* Protects tcaches{,_past,_avail}. 
*/\nstatic malloc_mutex_t\ttcaches_mtx;\n\n/******************************************************************************/\n\nsize_t\ntcache_salloc(tsdn_t *tsdn, const void *ptr) {\n\treturn arena_salloc(tsdn, ptr);\n}\n\nuint64_t\ntcache_gc_new_event_wait(tsd_t *tsd) {\n\treturn opt_tcache_gc_incr_bytes;\n}\n\nuint64_t\ntcache_gc_postponed_event_wait(tsd_t *tsd) {\n\treturn TE_MIN_START_WAIT;\n}\n\nuint64_t\ntcache_gc_dalloc_new_event_wait(tsd_t *tsd) {\n\treturn opt_tcache_gc_incr_bytes;\n}\n\nuint64_t\ntcache_gc_dalloc_postponed_event_wait(tsd_t *tsd) {\n\treturn TE_MIN_START_WAIT;\n}\n\nstatic uint8_t\ntcache_gc_item_delay_compute(szind_t szind) {\n\tassert(szind < SC_NBINS);\n\tsize_t sz = sz_index2size(szind);\n\tsize_t item_delay = opt_tcache_gc_delay_bytes / sz;\n\tsize_t delay_max = ZU(1)\n\t    << (sizeof(((tcache_slow_t *)NULL)->bin_flush_delay_items[0]) * 8);\n\tif (item_delay >= delay_max) {\n\t\titem_delay = delay_max - 1;\n\t}\n\treturn (uint8_t)item_delay;\n}\n\nstatic void\ntcache_gc_small(tsd_t *tsd, tcache_slow_t *tcache_slow, tcache_t *tcache,\n    szind_t szind) {\n\t/* Aim to flush 3/4 of items below low-water. */\n\tassert(szind < SC_NBINS);\n\n\tcache_bin_t *cache_bin = &tcache->bins[szind];\n\tcache_bin_sz_t ncached = cache_bin_ncached_get_local(cache_bin,\n\t    &tcache_bin_info[szind]);\n\tcache_bin_sz_t low_water = cache_bin_low_water_get(cache_bin,\n\t    &tcache_bin_info[szind]);\n\tassert(!tcache_slow->bin_refilled[szind]);\n\n\tsize_t nflush = low_water - (low_water >> 2);\n\tif (nflush < tcache_slow->bin_flush_delay_items[szind]) {\n\t\t/* Workaround for a conversion warning. 
*/\n\t\tuint8_t nflush_uint8 = (uint8_t)nflush;\n\t\tassert(sizeof(tcache_slow->bin_flush_delay_items[0]) ==\n\t\t    sizeof(nflush_uint8));\n\t\ttcache_slow->bin_flush_delay_items[szind] -= nflush_uint8;\n\t\treturn;\n\t} else {\n\t\ttcache_slow->bin_flush_delay_items[szind]\n\t\t    = tcache_gc_item_delay_compute(szind);\n\t}\n\n\ttcache_bin_flush_small(tsd, tcache, cache_bin, szind,\n\t    (unsigned)(ncached - nflush));\n\n\t/*\n\t * Reduce fill count by 2X.  Limit lg_fill_div such that\n\t * the fill count is always at least 1.\n\t */\n\tif ((cache_bin_info_ncached_max(&tcache_bin_info[szind])\n\t    >> (tcache_slow->lg_fill_div[szind] + 1)) >= 1) {\n\t\ttcache_slow->lg_fill_div[szind]++;\n\t}\n}\n\nstatic void\ntcache_gc_large(tsd_t *tsd, tcache_slow_t *tcache_slow, tcache_t *tcache,\n    szind_t szind) {\n\t/* Like the small GC; flush 3/4 of untouched items. */\n\tassert(szind >= SC_NBINS);\n\tcache_bin_t *cache_bin = &tcache->bins[szind];\n\tcache_bin_sz_t ncached = cache_bin_ncached_get_local(cache_bin,\n\t    &tcache_bin_info[szind]);\n\tcache_bin_sz_t low_water = cache_bin_low_water_get(cache_bin,\n\t    &tcache_bin_info[szind]);\n\ttcache_bin_flush_large(tsd, tcache, cache_bin, szind,\n\t    (unsigned)(ncached - low_water + (low_water >> 2)));\n}\n\nstatic void\ntcache_event(tsd_t *tsd) {\n\ttcache_t *tcache = tcache_get(tsd);\n\tif (tcache == NULL) {\n\t\treturn;\n\t}\n\n\ttcache_slow_t *tcache_slow = tsd_tcache_slowp_get(tsd);\n\tszind_t szind = tcache_slow->next_gc_bin;\n\tbool is_small = (szind < SC_NBINS);\n\tcache_bin_t *cache_bin = &tcache->bins[szind];\n\n\ttcache_bin_flush_stashed(tsd, tcache, cache_bin, szind, is_small);\n\n\tcache_bin_sz_t low_water = cache_bin_low_water_get(cache_bin,\n\t    &tcache_bin_info[szind]);\n\tif (low_water > 0) {\n\t\tif (is_small) {\n\t\t\ttcache_gc_small(tsd, tcache_slow, tcache, szind);\n\t\t} else {\n\t\t\ttcache_gc_large(tsd, tcache_slow, tcache, szind);\n\t\t}\n\t} else if (is_small && 
tcache_slow->bin_refilled[szind]) {\n\t\tassert(low_water == 0);\n\t\t/*\n\t\t * Increase fill count by 2X for small bins.  Make sure\n\t\t * lg_fill_div stays greater than 0.\n\t\t */\n\t\tif (tcache_slow->lg_fill_div[szind] > 1) {\n\t\t\ttcache_slow->lg_fill_div[szind]--;\n\t\t}\n\t\ttcache_slow->bin_refilled[szind] = false;\n\t}\n\tcache_bin_low_water_set(cache_bin);\n\n\ttcache_slow->next_gc_bin++;\n\tif (tcache_slow->next_gc_bin == nhbins) {\n\t\ttcache_slow->next_gc_bin = 0;\n\t}\n}\n\nvoid\ntcache_gc_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tassert(elapsed == TE_INVALID_ELAPSED);\n\ttcache_event(tsd);\n}\n\nvoid\ntcache_gc_dalloc_event_handler(tsd_t *tsd, uint64_t elapsed) {\n\tassert(elapsed == TE_INVALID_ELAPSED);\n\ttcache_event(tsd);\n}\n\nvoid *\ntcache_alloc_small_hard(tsdn_t *tsdn, arena_t *arena,\n    tcache_t *tcache, cache_bin_t *cache_bin, szind_t binind,\n    bool *tcache_success) {\n\ttcache_slow_t *tcache_slow = tcache->tcache_slow;\n\tvoid *ret;\n\n\tassert(tcache_slow->arena != NULL);\n\tunsigned nfill = cache_bin_info_ncached_max(&tcache_bin_info[binind])\n\t    >> tcache_slow->lg_fill_div[binind];\n\tarena_cache_bin_fill_small(tsdn, arena, cache_bin,\n\t    &tcache_bin_info[binind], binind, nfill);\n\ttcache_slow->bin_refilled[binind] = true;\n\tret = cache_bin_alloc(cache_bin, tcache_success);\n\n\treturn ret;\n}\n\nstatic const void *\ntcache_bin_flush_ptr_getter(void *arr_ctx, size_t ind) {\n\tcache_bin_ptr_array_t *arr = (cache_bin_ptr_array_t *)arr_ctx;\n\treturn arr->ptr[ind];\n}\n\nstatic void\ntcache_bin_flush_metadata_visitor(void *szind_sum_ctx,\n    emap_full_alloc_ctx_t *alloc_ctx) {\n\tsize_t *szind_sum = (size_t *)szind_sum_ctx;\n\t*szind_sum -= alloc_ctx->szind;\n\tutil_prefetch_write_range(alloc_ctx->edata, sizeof(edata_t));\n}\n\nJEMALLOC_NOINLINE static void\ntcache_bin_flush_size_check_fail(cache_bin_ptr_array_t *arr, szind_t szind,\n    size_t nptrs, emap_batch_lookup_result_t *edatas) {\n\tbool found_mismatch = 
false;\n\tfor (size_t i = 0; i < nptrs; i++) {\n\t\tszind_t true_szind = edata_szind_get(edatas[i].edata);\n\t\tif (true_szind != szind) {\n\t\t\tfound_mismatch = true;\n\t\t\tsafety_check_fail_sized_dealloc(\n\t\t\t    /* current_dealloc */ false,\n\t\t\t    /* ptr */ tcache_bin_flush_ptr_getter(arr, i),\n\t\t\t    /* true_size */ sz_index2size(true_szind),\n\t\t\t    /* input_size */ sz_index2size(szind));\n\t\t}\n\t}\n\tassert(found_mismatch);\n}\n\nstatic void\ntcache_bin_flush_edatas_lookup(tsd_t *tsd, cache_bin_ptr_array_t *arr,\n    szind_t binind, size_t nflush, emap_batch_lookup_result_t *edatas) {\n\n\t/*\n\t * This gets compiled away when config_opt_safety_checks is false.\n\t * Checks for sized deallocation bugs, failing early rather than\n\t * corrupting metadata.\n\t */\n\tsize_t szind_sum = binind * nflush;\n\temap_edata_lookup_batch(tsd, &arena_emap_global, nflush,\n\t    &tcache_bin_flush_ptr_getter, (void *)arr,\n\t    &tcache_bin_flush_metadata_visitor, (void *)&szind_sum,\n\t    edatas);\n\tif (config_opt_safety_checks && unlikely(szind_sum != 0)) {\n\t\ttcache_bin_flush_size_check_fail(arr, binind, nflush, edatas);\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE bool\ntcache_bin_flush_match(edata_t *edata, unsigned cur_arena_ind,\n    unsigned cur_binshard, bool small) {\n\tif (small) {\n\t\treturn edata_arena_ind_get(edata) == cur_arena_ind\n\t\t    && edata_binshard_get(edata) == cur_binshard;\n\t} else {\n\t\treturn edata_arena_ind_get(edata) == cur_arena_ind;\n\t}\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntcache_bin_flush_impl(tsd_t *tsd, tcache_t *tcache, cache_bin_t *cache_bin,\n    szind_t binind, cache_bin_ptr_array_t *ptrs, unsigned nflush, bool small) {\n\ttcache_slow_t *tcache_slow = tcache->tcache_slow;\n\t/*\n\t * A couple lookup calls take tsdn; declare it once for convenience\n\t * instead of calling tsd_tsdn(tsd) all the time.\n\t */\n\ttsdn_t *tsdn = tsd_tsdn(tsd);\n\n\tif (small) {\n\t\tassert(binind < SC_NBINS);\n\t} else {\n\t\tassert(binind < 
nhbins);\n\t}\n\tarena_t *tcache_arena = tcache_slow->arena;\n\tassert(tcache_arena != NULL);\n\n\t/*\n\t * Variable length array must have > 0 length; the last element is never\n\t * touched (it's just included to satisfy the no-zero-length rule).\n\t */\n\tVARIABLE_ARRAY(emap_batch_lookup_result_t, item_edata, nflush + 1);\n\ttcache_bin_flush_edatas_lookup(tsd, ptrs, binind, nflush, item_edata);\n\n\t/*\n\t * The slabs where we freed the last remaining object in the slab (and\n\t * so need to free the slab itself).\n\t * Used only if small == true.\n\t */\n\tunsigned dalloc_count = 0;\n\tVARIABLE_ARRAY(edata_t *, dalloc_slabs, nflush + 1);\n\n\t/*\n\t * We're about to grab a bunch of locks.  If one of them happens to be\n\t * the one guarding the arena-level stats counters we flush our\n\t * thread-local ones to, we do so under one critical section.\n\t */\n\tbool merged_stats = false;\n\twhile (nflush > 0) {\n\t\t/* Lock the arena, or bin, associated with the first object. */\n\t\tedata_t *edata = item_edata[0].edata;\n\t\tunsigned cur_arena_ind = edata_arena_ind_get(edata);\n\t\tarena_t *cur_arena = arena_get(tsdn, cur_arena_ind, false);\n\n\t\t/*\n\t\t * These assignments are always overwritten when small is true,\n\t\t * and their values are always ignored when small is false, but\n\t\t * to avoid the technical UB when we pass them as parameters, we\n\t\t * need to initialize them.\n\t\t */\n\t\tunsigned cur_binshard = 0;\n\t\tbin_t *cur_bin = NULL;\n\t\tif (small) {\n\t\t\tcur_binshard = edata_binshard_get(edata);\n\t\t\tcur_bin = arena_get_bin(cur_arena, binind,\n\t\t\t    cur_binshard);\n\t\t\tassert(cur_binshard < bin_infos[binind].n_shards);\n\t\t\t/*\n\t\t\t * If you're looking at profiles, you might think this\n\t\t\t * is a good place to prefetch the bin stats, which are\n\t\t\t * often a cache miss.  
This turns out not to be\n\t\t\t * helpful on the workloads we've looked at, with moving\n\t\t\t * the bin stats next to the lock seeming to do better.\n\t\t\t */\n\t\t}\n\n\t\tif (small) {\n\t\t\tmalloc_mutex_lock(tsdn, &cur_bin->lock);\n\t\t}\n\t\tif (!small && !arena_is_auto(cur_arena)) {\n\t\t\tmalloc_mutex_lock(tsdn, &cur_arena->large_mtx);\n\t\t}\n\n\t\t/*\n\t\t * If we acquired the right lock and have some stats to flush,\n\t\t * flush them.\n\t\t */\n\t\tif (config_stats && tcache_arena == cur_arena\n\t\t    && !merged_stats) {\n\t\t\tmerged_stats = true;\n\t\t\tif (small) {\n\t\t\t\tcur_bin->stats.nflushes++;\n\t\t\t\tcur_bin->stats.nrequests +=\n\t\t\t\t    cache_bin->tstats.nrequests;\n\t\t\t\tcache_bin->tstats.nrequests = 0;\n\t\t\t} else {\n\t\t\t\tarena_stats_large_flush_nrequests_add(tsdn,\n\t\t\t\t    &tcache_arena->stats, binind,\n\t\t\t\t    cache_bin->tstats.nrequests);\n\t\t\t\tcache_bin->tstats.nrequests = 0;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Large allocations need special prep done.  Afterwards, we can\n\t\t * drop the large lock.\n\t\t */\n\t\tif (!small) {\n\t\t\tfor (unsigned i = 0; i < nflush; i++) {\n\t\t\t\tvoid *ptr = ptrs->ptr[i];\n\t\t\t\tedata = item_edata[i].edata;\n\t\t\t\tassert(ptr != NULL && edata != NULL);\n\n\t\t\t\tif (tcache_bin_flush_match(edata, cur_arena_ind,\n\t\t\t\t    cur_binshard, small)) {\n\t\t\t\t\tlarge_dalloc_prep_locked(tsdn,\n\t\t\t\t\t    edata);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (!small && !arena_is_auto(cur_arena)) {\n\t\t\tmalloc_mutex_unlock(tsdn, &cur_arena->large_mtx);\n\t\t}\n\n\t\t/* Deallocate whatever we can. */\n\t\tunsigned ndeferred = 0;\n\t\t/* Init only to avoid used-uninitialized warning. 
*/\n\t\tarena_dalloc_bin_locked_info_t dalloc_bin_info = {0};\n\t\tif (small) {\n\t\t\tarena_dalloc_bin_locked_begin(&dalloc_bin_info, binind);\n\t\t}\n\t\tfor (unsigned i = 0; i < nflush; i++) {\n\t\t\tvoid *ptr = ptrs->ptr[i];\n\t\t\tedata = item_edata[i].edata;\n\t\t\tassert(ptr != NULL && edata != NULL);\n\t\t\tif (!tcache_bin_flush_match(edata, cur_arena_ind,\n\t\t\t    cur_binshard, small)) {\n\t\t\t\t/*\n\t\t\t\t * The object was allocated either via a\n\t\t\t\t * different arena, or a different bin in this\n\t\t\t\t * arena.  Either way, stash the object so that\n\t\t\t\t * it can be handled in a future pass.\n\t\t\t\t */\n\t\t\t\tptrs->ptr[ndeferred] = ptr;\n\t\t\t\titem_edata[ndeferred].edata = edata;\n\t\t\t\tndeferred++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (small) {\n\t\t\t\tif (arena_dalloc_bin_locked_step(tsdn,\n\t\t\t\t    cur_arena, cur_bin, &dalloc_bin_info,\n\t\t\t\t    binind, edata, ptr)) {\n\t\t\t\t\tdalloc_slabs[dalloc_count] = edata;\n\t\t\t\t\tdalloc_count++;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (large_dalloc_safety_checks(edata, ptr,\n\t\t\t\t    binind)) {\n\t\t\t\t\t/* See the comment in isfree. */\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tlarge_dalloc_finish(tsdn, edata);\n\t\t\t}\n\t\t}\n\n\t\tif (small) {\n\t\t\tarena_dalloc_bin_locked_finish(tsdn, cur_arena, cur_bin,\n\t\t\t    &dalloc_bin_info);\n\t\t\tmalloc_mutex_unlock(tsdn, &cur_bin->lock);\n\t\t}\n\t\tarena_decay_ticks(tsdn, cur_arena, nflush - ndeferred);\n\t\tnflush = ndeferred;\n\t}\n\n\t/* Handle all deferred slab dalloc. 
*/\n\tassert(small || dalloc_count == 0);\n\tfor (unsigned i = 0; i < dalloc_count; i++) {\n\t\tedata_t *slab = dalloc_slabs[i];\n\t\tarena_slab_dalloc(tsdn, arena_get_from_edata(slab), slab);\n\n\t}\n\n\tif (config_stats && !merged_stats) {\n\t\tif (small) {\n\t\t\t/*\n\t\t\t * The flush loop didn't happen to flush to this\n\t\t\t * thread's arena, so the stats didn't get merged.\n\t\t\t * Manually do so now.\n\t\t\t */\n\t\t\tbin_t *bin = arena_bin_choose(tsdn, tcache_arena,\n\t\t\t    binind, NULL);\n\t\t\tmalloc_mutex_lock(tsdn, &bin->lock);\n\t\t\tbin->stats.nflushes++;\n\t\t\tbin->stats.nrequests += cache_bin->tstats.nrequests;\n\t\t\tcache_bin->tstats.nrequests = 0;\n\t\t\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\t\t} else {\n\t\t\tarena_stats_large_flush_nrequests_add(tsdn,\n\t\t\t    &tcache_arena->stats, binind,\n\t\t\t    cache_bin->tstats.nrequests);\n\t\t\tcache_bin->tstats.nrequests = 0;\n\t\t}\n\t}\n\n}\n\nJEMALLOC_ALWAYS_INLINE void\ntcache_bin_flush_bottom(tsd_t *tsd, tcache_t *tcache, cache_bin_t *cache_bin,\n    szind_t binind, unsigned rem, bool small) {\n\ttcache_bin_flush_stashed(tsd, tcache, cache_bin, binind, small);\n\n\tcache_bin_sz_t ncached = cache_bin_ncached_get_local(cache_bin,\n\t    &tcache_bin_info[binind]);\n\tassert((cache_bin_sz_t)rem <= ncached);\n\tunsigned nflush = ncached - rem;\n\n\tCACHE_BIN_PTR_ARRAY_DECLARE(ptrs, nflush);\n\tcache_bin_init_ptr_array_for_flush(cache_bin, &tcache_bin_info[binind],\n\t    &ptrs, nflush);\n\n\ttcache_bin_flush_impl(tsd, tcache, cache_bin, binind, &ptrs, nflush,\n\t    small);\n\n\tcache_bin_finish_flush(cache_bin, &tcache_bin_info[binind], &ptrs,\n\t    ncached - rem);\n}\n\nvoid\ntcache_bin_flush_small(tsd_t *tsd, tcache_t *tcache, cache_bin_t *cache_bin,\n    szind_t binind, unsigned rem) {\n\ttcache_bin_flush_bottom(tsd, tcache, cache_bin, binind, rem, true);\n}\n\nvoid\ntcache_bin_flush_large(tsd_t *tsd, tcache_t *tcache, cache_bin_t *cache_bin,\n    szind_t binind, unsigned rem) 
{\n\ttcache_bin_flush_bottom(tsd, tcache, cache_bin, binind, rem, false);\n}\n\n/*\n * Flushing stashed happens when 1) tcache fill, 2) tcache flush, or 3) tcache\n * GC event.  This makes sure that the stashed items do not hold memory for too\n * long, and new buffers can only be allocated when nothing is stashed.\n *\n * The downside is, the time between stash and flush may be relatively short,\n * especially when the request rate is high.  It lowers the chance of detecting\n * write-after-free -- however that is a delayed detection anyway, and is less\n * of a focus than the memory overhead.\n */\nvoid\ntcache_bin_flush_stashed(tsd_t *tsd, tcache_t *tcache, cache_bin_t *cache_bin,\n    szind_t binind, bool is_small) {\n\tcache_bin_info_t *info = &tcache_bin_info[binind];\n\t/*\n\t * The two below are for assertion only.  The content of original cached\n\t * items remain unchanged -- the stashed items reside on the other end\n\t * of the stack.  Checking the stack head and ncached to verify.\n\t */\n\tvoid *head_content = *cache_bin->stack_head;\n\tcache_bin_sz_t orig_cached = cache_bin_ncached_get_local(cache_bin,\n\t    info);\n\n\tcache_bin_sz_t nstashed = cache_bin_nstashed_get_local(cache_bin, info);\n\tassert(orig_cached + nstashed <= cache_bin_info_ncached_max(info));\n\tif (nstashed == 0) {\n\t\treturn;\n\t}\n\n\tCACHE_BIN_PTR_ARRAY_DECLARE(ptrs, nstashed);\n\tcache_bin_init_ptr_array_for_stashed(cache_bin, binind, info, &ptrs,\n\t    nstashed);\n\tsan_check_stashed_ptrs(ptrs.ptr, nstashed, sz_index2size(binind));\n\ttcache_bin_flush_impl(tsd, tcache, cache_bin, binind, &ptrs, nstashed,\n\t    is_small);\n\tcache_bin_finish_flush_stashed(cache_bin, info);\n\n\tassert(cache_bin_nstashed_get_local(cache_bin, info) == 0);\n\tassert(cache_bin_ncached_get_local(cache_bin, info) == orig_cached);\n\tassert(head_content == *cache_bin->stack_head);\n}\n\nvoid\ntcache_arena_associate(tsdn_t *tsdn, tcache_slow_t *tcache_slow,\n    tcache_t *tcache, arena_t *arena) 
{\n\tassert(tcache_slow->arena == NULL);\n\ttcache_slow->arena = arena;\n\n\tif (config_stats) {\n\t\t/* Link into list of extant tcaches. */\n\t\tmalloc_mutex_lock(tsdn, &arena->tcache_ql_mtx);\n\n\t\tql_elm_new(tcache_slow, link);\n\t\tql_tail_insert(&arena->tcache_ql, tcache_slow, link);\n\t\tcache_bin_array_descriptor_init(\n\t\t    &tcache_slow->cache_bin_array_descriptor, tcache->bins);\n\t\tql_tail_insert(&arena->cache_bin_array_descriptor_ql,\n\t\t    &tcache_slow->cache_bin_array_descriptor, link);\n\n\t\tmalloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx);\n\t}\n}\n\nstatic void\ntcache_arena_dissociate(tsdn_t *tsdn, tcache_slow_t *tcache_slow,\n    tcache_t *tcache) {\n\tarena_t *arena = tcache_slow->arena;\n\tassert(arena != NULL);\n\tif (config_stats) {\n\t\t/* Unlink from list of extant tcaches. */\n\t\tmalloc_mutex_lock(tsdn, &arena->tcache_ql_mtx);\n\t\tif (config_debug) {\n\t\t\tbool in_ql = false;\n\t\t\ttcache_slow_t *iter;\n\t\t\tql_foreach(iter, &arena->tcache_ql, link) {\n\t\t\t\tif (iter == tcache_slow) {\n\t\t\t\t\tin_ql = true;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tassert(in_ql);\n\t\t}\n\t\tql_remove(&arena->tcache_ql, tcache_slow, link);\n\t\tql_remove(&arena->cache_bin_array_descriptor_ql,\n\t\t    &tcache_slow->cache_bin_array_descriptor, link);\n\t\ttcache_stats_merge(tsdn, tcache_slow->tcache, arena);\n\t\tmalloc_mutex_unlock(tsdn, &arena->tcache_ql_mtx);\n\t}\n\ttcache_slow->arena = NULL;\n}\n\nvoid\ntcache_arena_reassociate(tsdn_t *tsdn, tcache_slow_t *tcache_slow,\n    tcache_t *tcache, arena_t *arena) {\n\ttcache_arena_dissociate(tsdn, tcache_slow, tcache);\n\ttcache_arena_associate(tsdn, tcache_slow, tcache, arena);\n}\n\nbool\ntsd_tcache_enabled_data_init(tsd_t *tsd) {\n\t/* Called upon tsd initialization. */\n\ttsd_tcache_enabled_set(tsd, opt_tcache);\n\ttsd_slow_update(tsd);\n\n\tif (opt_tcache) {\n\t\t/* Trigger tcache init. 
*/\n\t\ttsd_tcache_data_init(tsd);\n\t}\n\n\treturn false;\n}\n\nstatic void\ntcache_init(tsd_t *tsd, tcache_slow_t *tcache_slow, tcache_t *tcache,\n    void *mem) {\n\ttcache->tcache_slow = tcache_slow;\n\ttcache_slow->tcache = tcache;\n\n\tmemset(&tcache_slow->link, 0, sizeof(ql_elm(tcache_t)));\n\ttcache_slow->next_gc_bin = 0;\n\ttcache_slow->arena = NULL;\n\ttcache_slow->dyn_alloc = mem;\n\n\t/*\n\t * We reserve cache bins for all small size classes, even if some may\n\t * not get used (i.e. bins higher than nhbins).  This allows the fast\n\t * and common paths to access cache bin metadata safely w/o worrying\n\t * about which ones are disabled.\n\t */\n\tunsigned n_reserved_bins = nhbins < SC_NBINS ? SC_NBINS : nhbins;\n\tmemset(tcache->bins, 0, sizeof(cache_bin_t) * n_reserved_bins);\n\n\tsize_t cur_offset = 0;\n\tcache_bin_preincrement(tcache_bin_info, nhbins, mem,\n\t    &cur_offset);\n\tfor (unsigned i = 0; i < nhbins; i++) {\n\t\tif (i < SC_NBINS) {\n\t\t\ttcache_slow->lg_fill_div[i] = 1;\n\t\t\ttcache_slow->bin_refilled[i] = false;\n\t\t\ttcache_slow->bin_flush_delay_items[i]\n\t\t\t    = tcache_gc_item_delay_compute(i);\n\t\t}\n\t\tcache_bin_t *cache_bin = &tcache->bins[i];\n\t\tcache_bin_init(cache_bin, &tcache_bin_info[i], mem,\n\t\t    &cur_offset);\n\t}\n\t/*\n\t * For small size classes beyond tcache_maxclass (i.e. nhbins < NBINS),\n\t * their cache bins are initialized to a state to safely and efficiently\n\t * fail all fastpath alloc / free, so that no additional check around\n\t * nhbins is needed on fastpath.\n\t */\n\tfor (unsigned i = nhbins; i < SC_NBINS; i++) {\n\t\t/* Disabled small bins. 
*/\n\t\tcache_bin_t *cache_bin = &tcache->bins[i];\n\t\tvoid *fake_stack = mem;\n\t\tsize_t fake_offset = 0;\n\n\t\tcache_bin_init(cache_bin, &tcache_bin_info[i], fake_stack,\n\t\t    &fake_offset);\n\t\tassert(tcache_small_bin_disabled(i, cache_bin));\n\t}\n\n\tcache_bin_postincrement(tcache_bin_info, nhbins, mem,\n\t    &cur_offset);\n\t/* Sanity check that the whole stack is used. */\n\tassert(cur_offset == tcache_bin_alloc_size);\n}\n\n/* Initialize auto tcache (embedded in TSD). */\nbool\ntsd_tcache_data_init(tsd_t *tsd) {\n\ttcache_slow_t *tcache_slow = tsd_tcache_slowp_get_unsafe(tsd);\n\ttcache_t *tcache = tsd_tcachep_get_unsafe(tsd);\n\n\tassert(cache_bin_still_zero_initialized(&tcache->bins[0]));\n\tsize_t alignment = tcache_bin_alloc_alignment;\n\tsize_t size = sz_sa2u(tcache_bin_alloc_size, alignment);\n\n\tvoid *mem = ipallocztm(tsd_tsdn(tsd), size, alignment, true, NULL,\n\t    true, arena_get(TSDN_NULL, 0, true));\n\tif (mem == NULL) {\n\t\treturn true;\n\t}\n\n\ttcache_init(tsd, tcache_slow, tcache, mem);\n\t/*\n\t * Initialization is a bit tricky here.  After malloc init is done, all\n\t * threads can rely on arena_choose and associate tcache accordingly.\n\t * However, the thread that does actual malloc bootstrapping relies on\n\t * functional tsd, and it can only rely on a0.  In that case, we\n\t * associate its tcache to a0 temporarily, and later on\n\t * arena_choose_hard() will re-associate properly.\n\t */\n\ttcache_slow->arena = NULL;\n\tarena_t *arena;\n\tif (!malloc_initialized()) {\n\t\t/* If in initialization, assign to a0. */\n\t\tarena = arena_get(tsd_tsdn(tsd), 0, false);\n\t\ttcache_arena_associate(tsd_tsdn(tsd), tcache_slow, tcache,\n\t\t    arena);\n\t} else {\n\t\tarena = arena_choose(tsd, NULL);\n\t\t/* This may happen if thread.tcache.enabled is used. 
*/\n\t\tif (tcache_slow->arena == NULL) {\n\t\t\ttcache_arena_associate(tsd_tsdn(tsd), tcache_slow,\n\t\t\t    tcache, arena);\n\t\t}\n\t}\n\tassert(arena == tcache_slow->arena);\n\n\treturn false;\n}\n\n/* Create a manual tcache for the tcache.create mallctl. */\ntcache_t *\ntcache_create_explicit(tsd_t *tsd) {\n\t/*\n\t * We place the cache bin stacks, then the tcache_t, then a pointer to\n\t * the beginning of the whole allocation (for freeing).  This makes sure\n\t * the cache bins have the requested alignment.\n\t */\n\tsize_t size = tcache_bin_alloc_size + sizeof(tcache_t)\n\t    + sizeof(tcache_slow_t);\n\t/* Naturally align the pointer stacks. */\n\tsize = PTR_CEILING(size);\n\tsize = sz_sa2u(size, tcache_bin_alloc_alignment);\n\n\tvoid *mem = ipallocztm(tsd_tsdn(tsd), size, tcache_bin_alloc_alignment,\n\t    true, NULL, true, arena_get(TSDN_NULL, 0, true));\n\tif (mem == NULL) {\n\t\treturn NULL;\n\t}\n\ttcache_t *tcache = (void *)((uintptr_t)mem + tcache_bin_alloc_size);\n\ttcache_slow_t *tcache_slow =\n\t    (void *)((uintptr_t)mem + tcache_bin_alloc_size + sizeof(tcache_t));\n\ttcache_init(tsd, tcache_slow, tcache, mem);\n\n\ttcache_arena_associate(tsd_tsdn(tsd), tcache_slow, tcache,\n\t    arena_ichoose(tsd, NULL));\n\n\treturn tcache;\n}\n\nstatic void\ntcache_flush_cache(tsd_t *tsd, tcache_t *tcache) {\n\ttcache_slow_t *tcache_slow = tcache->tcache_slow;\n\tassert(tcache_slow->arena != NULL);\n\n\tfor (unsigned i = 0; i < nhbins; i++) {\n\t\tcache_bin_t *cache_bin = &tcache->bins[i];\n\t\tif (i < SC_NBINS) {\n\t\t\ttcache_bin_flush_small(tsd, tcache, cache_bin, i, 0);\n\t\t} else {\n\t\t\ttcache_bin_flush_large(tsd, tcache, cache_bin, i, 0);\n\t\t}\n\t\tif (config_stats) {\n\t\t\tassert(cache_bin->tstats.nrequests == 0);\n\t\t}\n\t}\n}\n\nvoid\ntcache_flush(tsd_t *tsd) {\n\tassert(tcache_available(tsd));\n\ttcache_flush_cache(tsd, tsd_tcachep_get(tsd));\n}\n\nstatic void\ntcache_destroy(tsd_t *tsd, tcache_t *tcache, bool tsd_tcache) {\n\ttcache_slow_t 
*tcache_slow = tcache->tcache_slow;\n\ttcache_flush_cache(tsd, tcache);\n\tarena_t *arena = tcache_slow->arena;\n\ttcache_arena_dissociate(tsd_tsdn(tsd), tcache_slow, tcache);\n\n\tif (tsd_tcache) {\n\t\tcache_bin_t *cache_bin = &tcache->bins[0];\n\t\tcache_bin_assert_empty(cache_bin, &tcache_bin_info[0]);\n\t}\n\tidalloctm(tsd_tsdn(tsd), tcache_slow->dyn_alloc, NULL, NULL, true,\n\t    true);\n\n\t/*\n\t * The deallocation and tcache flush above may not trigger decay since\n\t * we are on the tcache shutdown path (potentially with non-nominal\n\t * tsd).  Manually trigger decay to avoid pathological cases.  Also\n\t * include arena 0 because the tcache array is allocated from it.\n\t */\n\tarena_decay(tsd_tsdn(tsd), arena_get(tsd_tsdn(tsd), 0, false),\n\t    false, false);\n\n\tif (arena_nthreads_get(arena, false) == 0 &&\n\t    !background_thread_enabled()) {\n\t\t/* Force purging when no threads assigned to the arena anymore. */\n\t\tarena_decay(tsd_tsdn(tsd), arena,\n\t\t    /* is_background_thread */ false, /* all */ true);\n\t} else {\n\t\tarena_decay(tsd_tsdn(tsd), arena,\n\t\t    /* is_background_thread */ false, /* all */ false);\n\t}\n}\n\n/* For auto tcache (embedded in TSD) only. */\nvoid\ntcache_cleanup(tsd_t *tsd) {\n\ttcache_t *tcache = tsd_tcachep_get(tsd);\n\tif (!tcache_available(tsd)) {\n\t\tassert(tsd_tcache_enabled_get(tsd) == false);\n\t\tassert(cache_bin_still_zero_initialized(&tcache->bins[0]));\n\t\treturn;\n\t}\n\tassert(tsd_tcache_enabled_get(tsd));\n\tassert(!cache_bin_still_zero_initialized(&tcache->bins[0]));\n\n\ttcache_destroy(tsd, tcache, true);\n\tif (config_debug) {\n\t\t/*\n\t\t * For debug testing only, we want to pretend we're still in the\n\t\t * zero-initialized state.\n\t\t */\n\t\tmemset(tcache->bins, 0, sizeof(cache_bin_t) * nhbins);\n\t}\n}\n\nvoid\ntcache_stats_merge(tsdn_t *tsdn, tcache_t *tcache, arena_t *arena) {\n\tcassert(config_stats);\n\n\t/* Merge and reset tcache stats. 
*/\n\tfor (unsigned i = 0; i < nhbins; i++) {\n\t\tcache_bin_t *cache_bin = &tcache->bins[i];\n\t\tif (i < SC_NBINS) {\n\t\t\tbin_t *bin = arena_bin_choose(tsdn, arena, i, NULL);\n\t\t\tmalloc_mutex_lock(tsdn, &bin->lock);\n\t\t\tbin->stats.nrequests += cache_bin->tstats.nrequests;\n\t\t\tmalloc_mutex_unlock(tsdn, &bin->lock);\n\t\t} else {\n\t\t\tarena_stats_large_flush_nrequests_add(tsdn,\n\t\t\t    &arena->stats, i, cache_bin->tstats.nrequests);\n\t\t}\n\t\tcache_bin->tstats.nrequests = 0;\n\t}\n}\n\nstatic bool\ntcaches_create_prep(tsd_t *tsd, base_t *base) {\n\tbool err;\n\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &tcaches_mtx);\n\n\tif (tcaches == NULL) {\n\t\ttcaches = base_alloc(tsd_tsdn(tsd), base,\n\t\t    sizeof(tcache_t *) * (MALLOCX_TCACHE_MAX+1), CACHELINE);\n\t\tif (tcaches == NULL) {\n\t\t\terr = true;\n\t\t\tgoto label_return;\n\t\t}\n\t}\n\n\tif (tcaches_avail == NULL && tcaches_past > MALLOCX_TCACHE_MAX) {\n\t\terr = true;\n\t\tgoto label_return;\n\t}\n\n\terr = false;\nlabel_return:\n\treturn err;\n}\n\nbool\ntcaches_create(tsd_t *tsd, base_t *base, unsigned *r_ind) {\n\twitness_assert_depth(tsdn_witness_tsdp_get(tsd_tsdn(tsd)), 0);\n\n\tbool err;\n\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx);\n\n\tif (tcaches_create_prep(tsd, base)) {\n\t\terr = true;\n\t\tgoto label_return;\n\t}\n\n\ttcache_t *tcache = tcache_create_explicit(tsd);\n\tif (tcache == NULL) {\n\t\terr = true;\n\t\tgoto label_return;\n\t}\n\n\ttcaches_t *elm;\n\tif (tcaches_avail != NULL) {\n\t\telm = tcaches_avail;\n\t\ttcaches_avail = tcaches_avail->next;\n\t\telm->tcache = tcache;\n\t\t*r_ind = (unsigned)(elm - tcaches);\n\t} else {\n\t\telm = &tcaches[tcaches_past];\n\t\telm->tcache = tcache;\n\t\t*r_ind = tcaches_past;\n\t\ttcaches_past++;\n\t}\n\n\terr = false;\nlabel_return:\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx);\n\twitness_assert_depth(tsdn_witness_tsdp_get(tsd_tsdn(tsd)), 0);\n\treturn err;\n}\n\nstatic tcache_t *\ntcaches_elm_remove(tsd_t *tsd, 
tcaches_t *elm, bool allow_reinit) {\n\tmalloc_mutex_assert_owner(tsd_tsdn(tsd), &tcaches_mtx);\n\n\tif (elm->tcache == NULL) {\n\t\treturn NULL;\n\t}\n\ttcache_t *tcache = elm->tcache;\n\tif (allow_reinit) {\n\t\telm->tcache = TCACHES_ELM_NEED_REINIT;\n\t} else {\n\t\telm->tcache = NULL;\n\t}\n\n\tif (tcache == TCACHES_ELM_NEED_REINIT) {\n\t\treturn NULL;\n\t}\n\treturn tcache;\n}\n\nvoid\ntcaches_flush(tsd_t *tsd, unsigned ind) {\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx);\n\ttcache_t *tcache = tcaches_elm_remove(tsd, &tcaches[ind], true);\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx);\n\tif (tcache != NULL) {\n\t\t/* Destroy the tcache; recreate in tcaches_get() if needed. */\n\t\ttcache_destroy(tsd, tcache, false);\n\t}\n}\n\nvoid\ntcaches_destroy(tsd_t *tsd, unsigned ind) {\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tcaches_mtx);\n\ttcaches_t *elm = &tcaches[ind];\n\ttcache_t *tcache = tcaches_elm_remove(tsd, elm, false);\n\telm->next = tcaches_avail;\n\ttcaches_avail = elm;\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tcaches_mtx);\n\tif (tcache != NULL) {\n\t\ttcache_destroy(tsd, tcache, false);\n\t}\n}\n\nstatic unsigned\ntcache_ncached_max_compute(szind_t szind) {\n\tif (szind >= SC_NBINS) {\n\t\tassert(szind < nhbins);\n\t\treturn opt_tcache_nslots_large;\n\t}\n\tunsigned slab_nregs = bin_infos[szind].nregs;\n\n\t/* We may modify these values; start with the opt versions. 
*/\n\tunsigned nslots_small_min = opt_tcache_nslots_small_min;\n\tunsigned nslots_small_max = opt_tcache_nslots_small_max;\n\n\t/*\n\t * Clamp values to meet our constraints -- even, nonzero, min < max, and\n\t * suitable for a cache bin size.\n\t */\n\tif (opt_tcache_nslots_small_max > CACHE_BIN_NCACHED_MAX) {\n\t\tnslots_small_max = CACHE_BIN_NCACHED_MAX;\n\t}\n\tif (nslots_small_min % 2 != 0) {\n\t\tnslots_small_min++;\n\t}\n\tif (nslots_small_max % 2 != 0) {\n\t\tnslots_small_max--;\n\t}\n\tif (nslots_small_min < 2) {\n\t\tnslots_small_min = 2;\n\t}\n\tif (nslots_small_max < 2) {\n\t\tnslots_small_max = 2;\n\t}\n\tif (nslots_small_min > nslots_small_max) {\n\t\tnslots_small_min = nslots_small_max;\n\t}\n\n\tunsigned candidate;\n\tif (opt_lg_tcache_nslots_mul < 0) {\n\t\tcandidate = slab_nregs >> (-opt_lg_tcache_nslots_mul);\n\t} else {\n\t\tcandidate = slab_nregs << opt_lg_tcache_nslots_mul;\n\t}\n\tif (candidate % 2 != 0) {\n\t\t/*\n\t\t * We need the candidate size to be even -- we assume that we\n\t\t * can divide by two and get a positive number (e.g. when\n\t\t * flushing).\n\t\t */\n\t\t++candidate;\n\t}\n\tif (candidate <= nslots_small_min) {\n\t\treturn nslots_small_min;\n\t} else if (candidate <= nslots_small_max) {\n\t\treturn candidate;\n\t} else {\n\t\treturn nslots_small_max;\n\t}\n}\n\nbool\ntcache_boot(tsdn_t *tsdn, base_t *base) {\n\ttcache_maxclass = sz_s2u(opt_tcache_max);\n\tassert(tcache_maxclass <= TCACHE_MAXCLASS_LIMIT);\n\tnhbins = sz_size2index(tcache_maxclass) + 1;\n\n\tif (malloc_mutex_init(&tcaches_mtx, \"tcaches\", WITNESS_RANK_TCACHES,\n\t    malloc_mutex_rank_exclusive)) {\n\t\treturn true;\n\t}\n\n\t/* Initialize tcache_bin_info.  See comments in tcache_init(). */\n\tunsigned n_reserved_bins = nhbins < SC_NBINS ? 
SC_NBINS : nhbins;\n\tsize_t size = n_reserved_bins * sizeof(cache_bin_info_t);\n\ttcache_bin_info = (cache_bin_info_t *)base_alloc(tsdn, base, size,\n\t    CACHELINE);\n\tif (tcache_bin_info == NULL) {\n\t\treturn true;\n\t}\n\n\tfor (szind_t i = 0; i < nhbins; i++) {\n\t\tunsigned ncached_max = tcache_ncached_max_compute(i);\n\t\tcache_bin_info_init(&tcache_bin_info[i], ncached_max);\n\t}\n\tfor (szind_t i = nhbins; i < SC_NBINS; i++) {\n\t\t/* Disabled small bins. */\n\t\tcache_bin_info_init(&tcache_bin_info[i], 0);\n\t\tassert(tcache_small_bin_disabled(i, NULL));\n\t}\n\n\tcache_bin_info_compute_alloc(tcache_bin_info, nhbins,\n\t    &tcache_bin_alloc_size, &tcache_bin_alloc_alignment);\n\n\treturn false;\n}\n\nvoid\ntcache_prefork(tsdn_t *tsdn) {\n\tmalloc_mutex_prefork(tsdn, &tcaches_mtx);\n}\n\nvoid\ntcache_postfork_parent(tsdn_t *tsdn) {\n\tmalloc_mutex_postfork_parent(tsdn, &tcaches_mtx);\n}\n\nvoid\ntcache_postfork_child(tsdn_t *tsdn) {\n\tmalloc_mutex_postfork_child(tsdn, &tcaches_mtx);\n}\n\nvoid tcache_assert_initialized(tcache_t *tcache) {\n\tassert(!cache_bin_still_zero_initialized(&tcache->bins[0]));\n}\n"
  },
  {
    "path": "deps/jemalloc/src/test_hooks.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n\n/*\n * The hooks are a little bit screwy -- they're not genuinely exported in the\n * sense that we want them available to end-users, but we do want them visible\n * from outside the generated library, so that we can use them in test code.\n */\nJEMALLOC_EXPORT\nvoid (*test_hooks_arena_new_hook)() = NULL;\n\nJEMALLOC_EXPORT\nvoid (*test_hooks_libc_hook)() = NULL;\n"
  },
  {
    "path": "deps/jemalloc/src/thread_event.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/thread_event.h\"\n\n/*\n * Signatures for event specific functions.  These functions should be defined\n * by the modules owning each event.  The signatures here verify that the\n * definitions follow the right format.\n *\n * The first two are functions computing new / postponed event wait time.  New\n * event wait time is the time till the next event if an event is currently\n * being triggered; postponed event wait time is the time till the next event\n * if an event should be triggered but needs to be postponed, e.g. when the TSD\n * is not nominal or during reentrancy.\n *\n * The third is the event handler function, which is called whenever an event\n * is triggered.  The parameter is the elapsed time since the last time an\n * event of the same type was triggered.\n */\n#define E(event, condition_unused, is_alloc_event_unused)\t\t\\\nuint64_t event##_new_event_wait(tsd_t *tsd);\t\t\t\t\\\nuint64_t event##_postponed_event_wait(tsd_t *tsd);\t\t\t\\\nvoid event##_event_handler(tsd_t *tsd, uint64_t elapsed);\n\nITERATE_OVER_ALL_EVENTS\n#undef E\n\n/* Signatures for internal functions fetching elapsed time. 
*/\n#define E(event, condition_unused, is_alloc_event_unused)\t\t\\\nstatic uint64_t event##_fetch_elapsed(tsd_t *tsd);\n\nITERATE_OVER_ALL_EVENTS\n#undef E\n\nstatic uint64_t\ntcache_gc_fetch_elapsed(tsd_t *tsd) {\n\treturn TE_INVALID_ELAPSED;\n}\n\nstatic uint64_t\ntcache_gc_dalloc_fetch_elapsed(tsd_t *tsd) {\n\treturn TE_INVALID_ELAPSED;\n}\n\nstatic uint64_t\nprof_sample_fetch_elapsed(tsd_t *tsd) {\n\tuint64_t last_event = thread_allocated_last_event_get(tsd);\n\tuint64_t last_sample_event = prof_sample_last_event_get(tsd);\n\tprof_sample_last_event_set(tsd, last_event);\n\treturn last_event - last_sample_event;\n}\n\nstatic uint64_t\nstats_interval_fetch_elapsed(tsd_t *tsd) {\n\tuint64_t last_event = thread_allocated_last_event_get(tsd);\n\tuint64_t last_stats_event = stats_interval_last_event_get(tsd);\n\tstats_interval_last_event_set(tsd, last_event);\n\treturn last_event - last_stats_event;\n}\n\nstatic uint64_t\npeak_alloc_fetch_elapsed(tsd_t *tsd) {\n\treturn TE_INVALID_ELAPSED;\n}\n\nstatic uint64_t\npeak_dalloc_fetch_elapsed(tsd_t *tsd) {\n\treturn TE_INVALID_ELAPSED;\n}\n\n/* Per event facilities done. 
*/\n\nstatic bool\nte_ctx_has_active_events(te_ctx_t *ctx) {\n\tassert(config_debug);\n#define E(event, condition, alloc_event)\t\t\t       \\\n\tif (condition && alloc_event == ctx->is_alloc) {\t       \\\n\t\treturn true;\t\t\t\t\t       \\\n\t}\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\treturn false;\n}\n\nstatic uint64_t\nte_next_event_compute(tsd_t *tsd, bool is_alloc) {\n\tuint64_t wait = TE_MAX_START_WAIT;\n#define E(event, condition, alloc_event)\t\t\t\t\\\n\tif (is_alloc == alloc_event && condition) {\t\t\t\\\n\t\tuint64_t event_wait =\t\t\t\t\t\\\n\t\t    event##_event_wait_get(tsd);\t\t\t\\\n\t\tassert(event_wait <= TE_MAX_START_WAIT);\t\t\\\n\t\tif (event_wait > 0U && event_wait < wait) {\t\t\\\n\t\t\twait = event_wait;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\n\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\tassert(wait <= TE_MAX_START_WAIT);\n\treturn wait;\n}\n\nstatic void\nte_assert_invariants_impl(tsd_t *tsd, te_ctx_t *ctx) {\n\tuint64_t current_bytes = te_ctx_current_bytes_get(ctx);\n\tuint64_t last_event = te_ctx_last_event_get(ctx);\n\tuint64_t next_event = te_ctx_next_event_get(ctx);\n\tuint64_t next_event_fast = te_ctx_next_event_fast_get(ctx);\n\n\tassert(last_event != next_event);\n\tif (next_event > TE_NEXT_EVENT_FAST_MAX || !tsd_fast(tsd)) {\n\t\tassert(next_event_fast == 0U);\n\t} else {\n\t\tassert(next_event_fast == next_event);\n\t}\n\n\t/* The subtraction is intentionally susceptible to underflow. */\n\tuint64_t interval = next_event - last_event;\n\n\t/* The subtraction is intentionally susceptible to underflow. */\n\tassert(current_bytes - last_event < interval);\n\tuint64_t min_wait = te_next_event_compute(tsd, te_ctx_is_alloc(ctx));\n\t/*\n\t * next_event should have been pushed up only except when no event is\n\t * on and the TSD is just initialized.  
The last_event == 0U guard\n\t * below is stronger than needed, but having an exactly accurate guard\n\t * is more complicated to implement.\n\t */\n\tassert((!te_ctx_has_active_events(ctx) && last_event == 0U) ||\n\t    interval == min_wait ||\n\t    (interval < min_wait && interval == TE_MAX_INTERVAL));\n}\n\nvoid\nte_assert_invariants_debug(tsd_t *tsd) {\n\tte_ctx_t ctx;\n\tte_ctx_get(tsd, &ctx, true);\n\tte_assert_invariants_impl(tsd, &ctx);\n\n\tte_ctx_get(tsd, &ctx, false);\n\tte_assert_invariants_impl(tsd, &ctx);\n}\n\n/*\n * Synchronization around the fast threshold in tsd --\n * There are two threads to consider in the synchronization here:\n * - The owner of the tsd being updated by a slow path change\n * - The remote thread, doing that slow path change.\n *\n * As a design constraint, we want to ensure that a slow-path transition cannot\n * be ignored for arbitrarily long, and that if the remote thread causes a\n * slow-path transition and then communicates with the owner thread that it has\n * occurred, then the owner will go down the slow path on the next allocator\n * operation (i.e. we don't want to just wait until the owner hits its slow\n * path reset condition on its own).\n *\n * Here's our strategy to do that:\n *\n * The remote thread will update the slow-path stores to TSD variables, issue a\n * SEQ_CST fence, and then update the TSD next_event_fast counter. The owner\n * thread will update next_event_fast, issue a SEQ_CST fence, and then check\n * its TSD to see if it's on the slow path.\n *\n * This is fairly straightforward when 64-bit atomics are supported. Assume that\n * the remote fence is sandwiched between two owner fences in the reset pathway.\n * The case where there is no preceding or trailing owner fence (i.e. because\n * the owner thread is near the beginning or end of its life) can be analyzed\n * similarly. 
The owner store to next_event_fast preceding the earlier owner\n * fence will be earlier in coherence order than the remote store to it, so that\n * the owner thread will go down the slow path once the store becomes visible to\n * it, which is no later than the time of the second fence.\n *\n * The case where we don't support 64-bit atomics is trickier, since word\n * tearing is possible. We'll repeat the same analysis, and look at the two\n * owner fences sandwiching the remote fence. The next_event_fast stores done\n * alongside the earlier owner fence cannot overwrite any of the remote stores\n * (since they precede the earlier owner fence in sb, which precedes the remote\n * fence in sc, which precedes the remote stores in sb). After the second owner\n * fence there will be a re-check of the slow-path variables anyways, so the\n * \"owner will notice that it's on the slow path eventually\" guarantee is\n * satisfied. To make sure that the out-of-band-messaging constraint is as well,\n * note that either the message passing is sequenced before the second owner\n * fence (in which case the remote stores happen before the second set of owner\n * stores, so malloc sees a value of zero for next_event_fast and goes down the\n * slow path), or it is not (in which case the owner sees the tsd slow-path\n * writes on its previous update). 
This leaves open the possibility that the\n * remote thread will (at some arbitrary point in the future) zero out one half\n * of the owner thread's next_event_fast, but that's always safe (it just sends\n * it down the slow path earlier).\n */\nstatic void\nte_ctx_next_event_fast_update(te_ctx_t *ctx) {\n\tuint64_t next_event = te_ctx_next_event_get(ctx);\n\tuint64_t next_event_fast = (next_event <= TE_NEXT_EVENT_FAST_MAX) ?\n\t    next_event : 0U;\n\tte_ctx_next_event_fast_set(ctx, next_event_fast);\n}\n\nvoid\nte_recompute_fast_threshold(tsd_t *tsd) {\n\tif (tsd_state_get(tsd) != tsd_state_nominal) {\n\t\t/* Check first because this is also called on purgatory. */\n\t\tte_next_event_fast_set_non_nominal(tsd);\n\t\treturn;\n\t}\n\n\tte_ctx_t ctx;\n\tte_ctx_get(tsd, &ctx, true);\n\tte_ctx_next_event_fast_update(&ctx);\n\tte_ctx_get(tsd, &ctx, false);\n\tte_ctx_next_event_fast_update(&ctx);\n\n\tatomic_fence(ATOMIC_SEQ_CST);\n\tif (tsd_state_get(tsd) != tsd_state_nominal) {\n\t\tte_next_event_fast_set_non_nominal(tsd);\n\t}\n}\n\nstatic void\nte_adjust_thresholds_helper(tsd_t *tsd, te_ctx_t *ctx,\n    uint64_t wait) {\n\t/*\n\t * The next threshold based on future events can only be adjusted after\n\t * progressing the last_event counter (which is set to current).\n\t */\n\tassert(te_ctx_current_bytes_get(ctx) == te_ctx_last_event_get(ctx));\n\tassert(wait <= TE_MAX_START_WAIT);\n\n\tuint64_t next_event = te_ctx_last_event_get(ctx) + (wait <=\n\t    TE_MAX_INTERVAL ? 
wait : TE_MAX_INTERVAL);\n\tte_ctx_next_event_set(tsd, ctx, next_event);\n}\n\nstatic uint64_t\nte_clip_event_wait(uint64_t event_wait) {\n\tassert(event_wait > 0U);\n\tif (TE_MIN_START_WAIT > 1U &&\n\t    unlikely(event_wait < TE_MIN_START_WAIT)) {\n\t\tevent_wait = TE_MIN_START_WAIT;\n\t}\n\tif (TE_MAX_START_WAIT < UINT64_MAX &&\n\t    unlikely(event_wait > TE_MAX_START_WAIT)) {\n\t\tevent_wait = TE_MAX_START_WAIT;\n\t}\n\treturn event_wait;\n}\n\nvoid\nte_event_trigger(tsd_t *tsd, te_ctx_t *ctx) {\n\t/* usize has already been added to thread_allocated. */\n\tuint64_t bytes_after = te_ctx_current_bytes_get(ctx);\n\t/* The subtraction is intentionally susceptible to underflow. */\n\tuint64_t accumbytes = bytes_after - te_ctx_last_event_get(ctx);\n\n\tte_ctx_last_event_set(ctx, bytes_after);\n\n\tbool allow_event_trigger = tsd_nominal(tsd) &&\n\t    tsd_reentrancy_level_get(tsd) == 0;\n\tbool is_alloc = ctx->is_alloc;\n\tuint64_t wait = TE_MAX_START_WAIT;\n\n#define E(event, condition, alloc_event)\t\t\t\t\\\n\tbool is_##event##_triggered = false;\t\t\t\t\\\n\tif (is_alloc == alloc_event && condition) {\t\t\t\\\n\t\tuint64_t event_wait = event##_event_wait_get(tsd);\t\\\n\t\tassert(event_wait <= TE_MAX_START_WAIT);\t\t\\\n\t\tif (event_wait > accumbytes) {\t\t\t\t\\\n\t\t\tevent_wait -= accumbytes;\t\t\t\\\n\t\t} else if (!allow_event_trigger) {\t\t\t\\\n\t\t\tevent_wait = event##_postponed_event_wait(tsd);\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tis_##event##_triggered = true;\t\t\t\\\n\t\t\tevent_wait = event##_new_event_wait(tsd);\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tevent_wait = te_clip_event_wait(event_wait);\t\t\\\n\t\tevent##_event_wait_set(tsd, event_wait);\t\t\\\n\t\tif (event_wait < wait) {\t\t\t\t\\\n\t\t\twait = event_wait;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\n\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\n\tassert(wait <= TE_MAX_START_WAIT);\n\tte_adjust_thresholds_helper(tsd, ctx, wait);\n\tte_assert_invariants(tsd);\n\n#define E(event, condition, 
alloc_event)\t\t\t\t\\\n\tif (is_alloc == alloc_event && condition &&\t\t\t\\\n\t    is_##event##_triggered) {\t\t\t\t\t\\\n\t\tassert(allow_event_trigger);\t\t\t\t\\\n\t\tuint64_t elapsed = event##_fetch_elapsed(tsd);\t\t\\\n\t\tevent##_event_handler(tsd, elapsed);\t\t\t\\\n\t}\n\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\n\tte_assert_invariants(tsd);\n}\n\nstatic void\nte_init(tsd_t *tsd, bool is_alloc) {\n\tte_ctx_t ctx;\n\tte_ctx_get(tsd, &ctx, is_alloc);\n\t/*\n\t * Reset the last event to current, which starts the events from a clean\n\t * state.  This is necessary when re-initializing the tsd event counters.\n\t *\n\t * The event counters maintain a relationship with the current bytes:\n\t * last_event <= current < next_event.  When a reinit happens (e.g.\n\t * reincarnated tsd), the last event needs progressing because all\n\t * events start fresh from the current bytes.\n\t */\n\tte_ctx_last_event_set(&ctx, te_ctx_current_bytes_get(&ctx));\n\n\tuint64_t wait = TE_MAX_START_WAIT;\n#define E(event, condition, alloc_event)\t\t\t\t\\\n\tif (is_alloc == alloc_event && condition) {\t\t\t\\\n\t\tuint64_t event_wait = event##_new_event_wait(tsd);\t\\\n\t\tevent_wait = te_clip_event_wait(event_wait);\t\t\\\n\t\tevent##_event_wait_set(tsd, event_wait);\t\t\\\n\t\tif (event_wait < wait) {\t\t\t\t\\\n\t\t\twait = event_wait;\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\n\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\tte_adjust_thresholds_helper(tsd, &ctx, wait);\n}\n\nvoid\ntsd_te_init(tsd_t *tsd) {\n\t/* Make sure the bytes accumulated on event_trigger cannot overflow. */\n\tassert(TE_MAX_INTERVAL <= UINT64_MAX - SC_LARGE_MAXCLASS + 1);\n\tte_init(tsd, true);\n\tte_init(tsd, false);\n\tte_assert_invariants(tsd);\n}\n"
  },
  {
    "path": "deps/jemalloc/src/ticker.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n/*\n * To avoid using floating point math down core paths (still necessary because\n * versions of the glibc dynamic loader that did not preserve xmm registers are\n * still somewhat common, requiring us to be compilable with -mno-sse), and also\n * to avoid generally expensive library calls, we use a precomputed table of\n * values.  We want to sample U uniformly on [0, 1], and then compute\n * ceil(log(u)/log(1-1/nticks)).  We're mostly interested in the case where\n * nticks is reasonably big, so 1/log(1-1/nticks) is well-approximated by\n * -nticks.\n *\n * To compute log(u), we sample an integer in [1, 64] and divide, then just look\n * up results in a table.  As a space-compression mechanism, we store these as\n * uint8_t by dividing the range (255) by the highest-magnitude value the log\n * can take on, and using that as a multiplier.  We then have to divide by that\n * multiplier at the end of the computation.\n *\n * The values here are computed in src/ticker.py\n */\n\nconst uint8_t ticker_geom_table[1 << TICKER_GEOM_NBITS] = {\n\t254, 211, 187, 169, 156, 144, 135, 127,\n\t120, 113, 107, 102, 97, 93, 89, 85,\n\t81, 77, 74, 71, 68, 65, 62, 60,\n\t57, 55, 53, 50, 48, 46, 44, 42,\n\t40, 39, 37, 35, 33, 32, 30, 29,\n\t27, 26, 24, 23, 21, 20, 19, 18,\n\t16, 15, 14, 13, 12, 10, 9, 8,\n\t7, 6, 5, 4, 3, 2, 1, 0\n};\n"
  },
  {
    "path": "deps/jemalloc/src/ticker.py",
    "content": "#!/usr/bin/env python3\n\nimport math\n\n# Must match TICKER_GEOM_NBITS\nlg_table_size = 6\ntable_size = 2**lg_table_size\nbyte_max = 255\nmul = math.floor(-byte_max/math.log(1 / table_size))\nvalues = [round(-mul * math.log(i / table_size))\n\tfor i in range(1, table_size+1)]\nprint(\"mul =\", mul)\nprint(\"values:\")\nfor i in range(table_size // 8):\n\tprint(\", \".join((str(x) for x in values[i*8 : i*8 + 8])))\n"
  },
  {
    "path": "deps/jemalloc/src/tsd.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/mutex.h\"\n#include \"jemalloc/internal/rtree.h\"\n\n/******************************************************************************/\n/* Data. */\n\n/* TSD_INITIALIZER triggers \"-Wmissing-field-initializer\" */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_MISSING_STRUCT_FIELD_INITIALIZERS\n\n#ifdef JEMALLOC_MALLOC_THREAD_CLEANUP\nJEMALLOC_TSD_TYPE_ATTR(tsd_t) tsd_tls = TSD_INITIALIZER;\nJEMALLOC_TSD_TYPE_ATTR(bool) JEMALLOC_TLS_MODEL tsd_initialized = false;\nbool tsd_booted = false;\n#elif (defined(JEMALLOC_TLS))\nJEMALLOC_TSD_TYPE_ATTR(tsd_t) tsd_tls = TSD_INITIALIZER;\npthread_key_t tsd_tsd;\nbool tsd_booted = false;\n#elif (defined(_WIN32))\nDWORD tsd_tsd;\ntsd_wrapper_t tsd_boot_wrapper = {false, TSD_INITIALIZER};\nbool tsd_booted = false;\n#else\n\n/*\n * This contains a mutex, but it's pretty convenient to allow the mutex code to\n * have a dependency on tsd.  So we define the struct here, and only refer to it\n * by pointer in the header.\n */\nstruct tsd_init_head_s {\n\tql_head(tsd_init_block_t) blocks;\n\tmalloc_mutex_t lock;\n};\n\npthread_key_t tsd_tsd;\ntsd_init_head_t\ttsd_init_head = {\n\tql_head_initializer(blocks),\n\tMALLOC_MUTEX_INITIALIZER\n};\n\ntsd_wrapper_t tsd_boot_wrapper = {\n\tfalse,\n\tTSD_INITIALIZER\n};\nbool tsd_booted = false;\n#endif\n\nJEMALLOC_DIAGNOSTIC_POP\n\n/******************************************************************************/\n\n/* A list of all the tsds in the nominal state. */\ntypedef ql_head(tsd_t) tsd_list_t;\nstatic tsd_list_t tsd_nominal_tsds = ql_head_initializer(tsd_nominal_tsds);\nstatic malloc_mutex_t tsd_nominal_tsds_lock;\n\n/* How many slow-path-enabling features are turned on. 
*/\nstatic atomic_u32_t tsd_global_slow_count = ATOMIC_INIT(0);\n\nstatic bool\ntsd_in_nominal_list(tsd_t *tsd) {\n\ttsd_t *tsd_list;\n\tbool found = false;\n\t/*\n\t * We don't know that tsd is nominal; it might not be safe to get data\n\t * out of it here.\n\t */\n\tmalloc_mutex_lock(TSDN_NULL, &tsd_nominal_tsds_lock);\n\tql_foreach(tsd_list, &tsd_nominal_tsds, TSD_MANGLE(tsd_link)) {\n\t\tif (tsd == tsd_list) {\n\t\t\tfound = true;\n\t\t\tbreak;\n\t\t}\n\t}\n\tmalloc_mutex_unlock(TSDN_NULL, &tsd_nominal_tsds_lock);\n\treturn found;\n}\n\nstatic void\ntsd_add_nominal(tsd_t *tsd) {\n\tassert(!tsd_in_nominal_list(tsd));\n\tassert(tsd_state_get(tsd) <= tsd_state_nominal_max);\n\tql_elm_new(tsd, TSD_MANGLE(tsd_link));\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n\tql_tail_insert(&tsd_nominal_tsds, tsd, TSD_MANGLE(tsd_link));\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n}\n\nstatic void\ntsd_remove_nominal(tsd_t *tsd) {\n\tassert(tsd_in_nominal_list(tsd));\n\tassert(tsd_state_get(tsd) <= tsd_state_nominal_max);\n\tmalloc_mutex_lock(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n\tql_remove(&tsd_nominal_tsds, tsd, TSD_MANGLE(tsd_link));\n\tmalloc_mutex_unlock(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n}\n\nstatic void\ntsd_force_recompute(tsdn_t *tsdn) {\n\t/*\n\t * The stores to tsd->state here need to synchronize with the exchange\n\t * in tsd_slow_update.\n\t */\n\tatomic_fence(ATOMIC_RELEASE);\n\tmalloc_mutex_lock(tsdn, &tsd_nominal_tsds_lock);\n\ttsd_t *remote_tsd;\n\tql_foreach(remote_tsd, &tsd_nominal_tsds, TSD_MANGLE(tsd_link)) {\n\t\tassert(tsd_atomic_load(&remote_tsd->state, ATOMIC_RELAXED)\n\t\t    <= tsd_state_nominal_max);\n\t\ttsd_atomic_store(&remote_tsd->state,\n\t\t    tsd_state_nominal_recompute, ATOMIC_RELAXED);\n\t\t/* See comments in te_recompute_fast_threshold(). 
*/\n\t\tatomic_fence(ATOMIC_SEQ_CST);\n\t\tte_next_event_fast_set_non_nominal(remote_tsd);\n\t}\n\tmalloc_mutex_unlock(tsdn, &tsd_nominal_tsds_lock);\n}\n\nvoid\ntsd_global_slow_inc(tsdn_t *tsdn) {\n\tatomic_fetch_add_u32(&tsd_global_slow_count, 1, ATOMIC_RELAXED);\n\t/*\n\t * We unconditionally force a recompute, even if the global slow count\n\t * was already positive.  If we didn't, then it would be possible for us\n\t * to return to the user, have the user synchronize externally with some\n\t * other thread, and then have that other thread not have picked up the\n\t * update yet (since the original incrementing thread might still be\n\t * making its way through the tsd list).\n\t */\n\ttsd_force_recompute(tsdn);\n}\n\nvoid tsd_global_slow_dec(tsdn_t *tsdn) {\n\tatomic_fetch_sub_u32(&tsd_global_slow_count, 1, ATOMIC_RELAXED);\n\t/* See the note in ..._inc(). */\n\ttsd_force_recompute(tsdn);\n}\n\nstatic bool\ntsd_local_slow(tsd_t *tsd) {\n\treturn !tsd_tcache_enabled_get(tsd)\n\t    || tsd_reentrancy_level_get(tsd) > 0;\n}\n\nbool\ntsd_global_slow() {\n\treturn atomic_load_u32(&tsd_global_slow_count, ATOMIC_RELAXED) > 0;\n}\n\n/******************************************************************************/\n\nstatic uint8_t\ntsd_state_compute(tsd_t *tsd) {\n\tif (!tsd_nominal(tsd)) {\n\t\treturn tsd_state_get(tsd);\n\t}\n\t/* We're in *a* nominal state; but which one? */\n\tif (malloc_slow || tsd_local_slow(tsd) || tsd_global_slow()) {\n\t\treturn tsd_state_nominal_slow;\n\t} else {\n\t\treturn tsd_state_nominal;\n\t}\n}\n\nvoid\ntsd_slow_update(tsd_t *tsd) {\n\tuint8_t old_state;\n\tdo {\n\t\tuint8_t new_state = tsd_state_compute(tsd);\n\t\told_state = tsd_atomic_exchange(&tsd->state, new_state,\n\t\t    ATOMIC_ACQUIRE);\n\t} while (old_state == tsd_state_nominal_recompute);\n\n\tte_recompute_fast_threshold(tsd);\n}\n\nvoid\ntsd_state_set(tsd_t *tsd, uint8_t new_state) {\n\t/* Only the tsd module can change the state *to* recompute. 
*/\n\tassert(new_state != tsd_state_nominal_recompute);\n\tuint8_t old_state = tsd_atomic_load(&tsd->state, ATOMIC_RELAXED);\n\tif (old_state > tsd_state_nominal_max) {\n\t\t/*\n\t\t * Not currently in the nominal list, but it might need to be\n\t\t * inserted there.\n\t\t */\n\t\tassert(!tsd_in_nominal_list(tsd));\n\t\ttsd_atomic_store(&tsd->state, new_state, ATOMIC_RELAXED);\n\t\tif (new_state <= tsd_state_nominal_max) {\n\t\t\ttsd_add_nominal(tsd);\n\t\t}\n\t} else {\n\t\t/*\n\t\t * We're currently nominal.  If the new state is non-nominal,\n\t\t * great; we take ourselves off the list and just enter the new\n\t\t * state.\n\t\t */\n\t\tassert(tsd_in_nominal_list(tsd));\n\t\tif (new_state > tsd_state_nominal_max) {\n\t\t\ttsd_remove_nominal(tsd);\n\t\t\ttsd_atomic_store(&tsd->state, new_state,\n\t\t\t    ATOMIC_RELAXED);\n\t\t} else {\n\t\t\t/*\n\t\t\t * This is the tricky case.  We're transitioning from\n\t\t\t * one nominal state to another.  The caller can't know\n\t\t\t * about any races that are occurring at the same time,\n\t\t\t * so we always have to recompute no matter what.\n\t\t\t */\n\t\t\ttsd_slow_update(tsd);\n\t\t}\n\t}\n\tte_recompute_fast_threshold(tsd);\n}\n\nstatic void\ntsd_prng_state_init(tsd_t *tsd) {\n\t/*\n\t * A nondeterministic seed based on the address of tsd reduces\n\t * the likelihood of lockstep non-uniform cache index\n\t * utilization among identical concurrent processes, but at the\n\t * cost of test repeatability.  For debug builds, instead use a\n\t * deterministic seed.\n\t */\n\t*tsd_prng_statep_get(tsd) = config_debug ? 0 :\n\t    (uint64_t)(uintptr_t)tsd;\n}\n\nstatic bool\ntsd_data_init(tsd_t *tsd) {\n\t/*\n\t * We initialize the rtree context first (before the tcache), since the\n\t * tcache initialization depends on it.\n\t */\n\trtree_ctx_data_init(tsd_rtree_ctxp_get_unsafe(tsd));\n\ttsd_prng_state_init(tsd);\n\ttsd_te_init(tsd); /* event_init may use the prng state above. 
*/\n\ttsd_san_init(tsd);\n\treturn tsd_tcache_enabled_data_init(tsd);\n}\n\nstatic void\nassert_tsd_data_cleanup_done(tsd_t *tsd) {\n\tassert(!tsd_nominal(tsd));\n\tassert(!tsd_in_nominal_list(tsd));\n\tassert(*tsd_arenap_get_unsafe(tsd) == NULL);\n\tassert(*tsd_iarenap_get_unsafe(tsd) == NULL);\n\tassert(*tsd_tcache_enabledp_get_unsafe(tsd) == false);\n\tassert(*tsd_prof_tdatap_get_unsafe(tsd) == NULL);\n}\n\nstatic bool\ntsd_data_init_nocleanup(tsd_t *tsd) {\n\tassert(tsd_state_get(tsd) == tsd_state_reincarnated ||\n\t    tsd_state_get(tsd) == tsd_state_minimal_initialized);\n\t/*\n\t * During reincarnation, there is no guarantee that the cleanup function\n\t * will be called (deallocation may happen after all tsd destructors).\n\t * We set up tsd in a way that no cleanup is needed.\n\t */\n\trtree_ctx_data_init(tsd_rtree_ctxp_get_unsafe(tsd));\n\t*tsd_tcache_enabledp_get_unsafe(tsd) = false;\n\t*tsd_reentrancy_levelp_get(tsd) = 1;\n\ttsd_prng_state_init(tsd);\n\ttsd_te_init(tsd); /* event_init may use the prng state above. */\n\ttsd_san_init(tsd);\n\tassert_tsd_data_cleanup_done(tsd);\n\n\treturn false;\n}\n\ntsd_t *\ntsd_fetch_slow(tsd_t *tsd, bool minimal) {\n\tassert(!tsd_fast(tsd));\n\n\tif (tsd_state_get(tsd) == tsd_state_nominal_slow) {\n\t\t/*\n\t\t * On slow path but no work needed.  Note that we can't\n\t\t * necessarily *assert* that we're slow, because we might be\n\t\t * slow because of an asynchronous modification to global state,\n\t\t * which might be asynchronously modified *back*.\n\t\t */\n\t} else if (tsd_state_get(tsd) == tsd_state_nominal_recompute) {\n\t\ttsd_slow_update(tsd);\n\t} else if (tsd_state_get(tsd) == tsd_state_uninitialized) {\n\t\tif (!minimal) {\n\t\t\tif (tsd_booted) {\n\t\t\t\ttsd_state_set(tsd, tsd_state_nominal);\n\t\t\t\ttsd_slow_update(tsd);\n\t\t\t\t/* Trigger cleanup handler registration. 
*/\n\t\t\t\ttsd_set(tsd);\n\t\t\t\ttsd_data_init(tsd);\n\t\t\t}\n\t\t} else {\n\t\t\ttsd_state_set(tsd, tsd_state_minimal_initialized);\n\t\t\ttsd_set(tsd);\n\t\t\ttsd_data_init_nocleanup(tsd);\n\t\t}\n\t} else if (tsd_state_get(tsd) == tsd_state_minimal_initialized) {\n\t\tif (!minimal) {\n\t\t\t/* Switch to fully initialized. */\n\t\t\ttsd_state_set(tsd, tsd_state_nominal);\n\t\t\tassert(*tsd_reentrancy_levelp_get(tsd) >= 1);\n\t\t\t(*tsd_reentrancy_levelp_get(tsd))--;\n\t\t\ttsd_slow_update(tsd);\n\t\t\ttsd_data_init(tsd);\n\t\t} else {\n\t\t\tassert_tsd_data_cleanup_done(tsd);\n\t\t}\n\t} else if (tsd_state_get(tsd) == tsd_state_purgatory) {\n\t\ttsd_state_set(tsd, tsd_state_reincarnated);\n\t\ttsd_set(tsd);\n\t\ttsd_data_init_nocleanup(tsd);\n\t} else {\n\t\tassert(tsd_state_get(tsd) == tsd_state_reincarnated);\n\t}\n\n\treturn tsd;\n}\n\nvoid *\nmalloc_tsd_malloc(size_t size) {\n\treturn a0malloc(CACHELINE_CEILING(size));\n}\n\nvoid\nmalloc_tsd_dalloc(void *wrapper) {\n\ta0dalloc(wrapper);\n}\n\n#if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32)\nstatic unsigned ncleanups;\nstatic malloc_tsd_cleanup_t cleanups[MALLOC_TSD_CLEANUPS_MAX];\n\n#ifndef _WIN32\nJEMALLOC_EXPORT\n#endif\nvoid\n_malloc_thread_cleanup(void) {\n\tbool pending[MALLOC_TSD_CLEANUPS_MAX], again;\n\tunsigned i;\n\n\tfor (i = 0; i < ncleanups; i++) {\n\t\tpending[i] = true;\n\t}\n\n\tdo {\n\t\tagain = false;\n\t\tfor (i = 0; i < ncleanups; i++) {\n\t\t\tif (pending[i]) {\n\t\t\t\tpending[i] = cleanups[i]();\n\t\t\t\tif (pending[i]) {\n\t\t\t\t\tagain = true;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} while (again);\n}\n\n#ifndef _WIN32\nJEMALLOC_EXPORT\n#endif\nvoid\n_malloc_tsd_cleanup_register(bool (*f)(void)) {\n\tassert(ncleanups < MALLOC_TSD_CLEANUPS_MAX);\n\tcleanups[ncleanups] = f;\n\tncleanups++;\n}\n\n#endif\n\nstatic void\ntsd_do_data_cleanup(tsd_t *tsd) 
{\n\tprof_tdata_cleanup(tsd);\n\tiarena_cleanup(tsd);\n\tarena_cleanup(tsd);\n\ttcache_cleanup(tsd);\n\twitnesses_cleanup(tsd_witness_tsdp_get_unsafe(tsd));\n\t*tsd_reentrancy_levelp_get(tsd) = 1;\n}\n\nvoid\ntsd_cleanup(void *arg) {\n\ttsd_t *tsd = (tsd_t *)arg;\n\n\tswitch (tsd_state_get(tsd)) {\n\tcase tsd_state_uninitialized:\n\t\t/* Do nothing. */\n\t\tbreak;\n\tcase tsd_state_minimal_initialized:\n\t\t/* This implies the thread only did free() in its life time. */\n\t\t/* Fall through. */\n\tcase tsd_state_reincarnated:\n\t\t/*\n\t\t * Reincarnated means another destructor deallocated memory\n\t\t * after the destructor was called.  Cleanup isn't required but\n\t\t * is still called for testing and completeness.\n\t\t */\n\t\tassert_tsd_data_cleanup_done(tsd);\n\t\tJEMALLOC_FALLTHROUGH;\n\tcase tsd_state_nominal:\n\tcase tsd_state_nominal_slow:\n\t\ttsd_do_data_cleanup(tsd);\n\t\ttsd_state_set(tsd, tsd_state_purgatory);\n\t\ttsd_set(tsd);\n\t\tbreak;\n\tcase tsd_state_purgatory:\n\t\t/*\n\t\t * The previous time this destructor was called, we set the\n\t\t * state to tsd_state_purgatory so that other destructors\n\t\t * wouldn't cause re-creation of the tsd.  
This time, do\n\t\t * nothing, and do not request another callback.\n\t\t */\n\t\tbreak;\n\tdefault:\n\t\tnot_reached();\n\t}\n#ifdef JEMALLOC_JET\n\ttest_callback_t test_callback = *tsd_test_callbackp_get_unsafe(tsd);\n\tint *data = tsd_test_datap_get_unsafe(tsd);\n\tif (test_callback != NULL) {\n\t\ttest_callback(data);\n\t}\n#endif\n}\n\ntsd_t *\nmalloc_tsd_boot0(void) {\n\ttsd_t *tsd;\n\n#if defined(JEMALLOC_MALLOC_THREAD_CLEANUP) || defined(_WIN32)\n\tncleanups = 0;\n#endif\n\tif (malloc_mutex_init(&tsd_nominal_tsds_lock, \"tsd_nominal_tsds_lock\",\n\t    WITNESS_RANK_OMIT, malloc_mutex_rank_exclusive)) {\n\t\treturn NULL;\n\t}\n\tif (tsd_boot0()) {\n\t\treturn NULL;\n\t}\n\ttsd = tsd_fetch();\n\treturn tsd;\n}\n\nvoid\nmalloc_tsd_boot1(void) {\n\ttsd_boot1();\n\ttsd_t *tsd = tsd_fetch();\n\t/* malloc_slow has been set properly.  Update tsd_slow. */\n\ttsd_slow_update(tsd);\n}\n\n#ifdef _WIN32\nstatic BOOL WINAPI\n_tls_callback(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved) {\n\tswitch (fdwReason) {\n#ifdef JEMALLOC_LAZY_LOCK\n\tcase DLL_THREAD_ATTACH:\n\t\tisthreaded = true;\n\t\tbreak;\n#endif\n\tcase DLL_THREAD_DETACH:\n\t\t_malloc_thread_cleanup();\n\t\tbreak;\n\tdefault:\n\t\tbreak;\n\t}\n\treturn true;\n}\n\n/*\n * We need to be able to say \"read\" here (in the \"pragma section\"), but have\n * hooked \"read\". 
We won't read for the rest of the file, so we can get away\n * with unhooking.\n */\n#ifdef read\n#  undef read\n#endif\n\n#ifdef _MSC_VER\n#  ifdef _M_IX86\n#    pragma comment(linker, \"/INCLUDE:__tls_used\")\n#    pragma comment(linker, \"/INCLUDE:_tls_callback\")\n#  else\n#    pragma comment(linker, \"/INCLUDE:_tls_used\")\n#    pragma comment(linker, \"/INCLUDE:\" STRINGIFY(tls_callback) )\n#  endif\n#  pragma section(\".CRT$XLY\",long,read)\n#endif\nJEMALLOC_SECTION(\".CRT$XLY\") JEMALLOC_ATTR(used)\nBOOL\t(WINAPI *const tls_callback)(HINSTANCE hinstDLL,\n    DWORD fdwReason, LPVOID lpvReserved) = _tls_callback;\n#endif\n\n#if (!defined(JEMALLOC_MALLOC_THREAD_CLEANUP) && !defined(JEMALLOC_TLS) && \\\n    !defined(_WIN32))\nvoid *\ntsd_init_check_recursion(tsd_init_head_t *head, tsd_init_block_t *block) {\n\tpthread_t self = pthread_self();\n\ttsd_init_block_t *iter;\n\n\t/* Check whether this thread has already inserted into the list. */\n\tmalloc_mutex_lock(TSDN_NULL, &head->lock);\n\tql_foreach(iter, &head->blocks, link) {\n\t\tif (iter->thread == self) {\n\t\t\tmalloc_mutex_unlock(TSDN_NULL, &head->lock);\n\t\t\treturn iter->data;\n\t\t}\n\t}\n\t/* Insert block into list. 
*/\n\tql_elm_new(block, link);\n\tblock->thread = self;\n\tql_tail_insert(&head->blocks, block, link);\n\tmalloc_mutex_unlock(TSDN_NULL, &head->lock);\n\treturn NULL;\n}\n\nvoid\ntsd_init_finish(tsd_init_head_t *head, tsd_init_block_t *block) {\n\tmalloc_mutex_lock(TSDN_NULL, &head->lock);\n\tql_remove(&head->blocks, block, link);\n\tmalloc_mutex_unlock(TSDN_NULL, &head->lock);\n}\n#endif\n\nvoid\ntsd_prefork(tsd_t *tsd) {\n\tmalloc_mutex_prefork(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n}\n\nvoid\ntsd_postfork_parent(tsd_t *tsd) {\n\tmalloc_mutex_postfork_parent(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n}\n\nvoid\ntsd_postfork_child(tsd_t *tsd) {\n\tmalloc_mutex_postfork_child(tsd_tsdn(tsd), &tsd_nominal_tsds_lock);\n\tql_new(&tsd_nominal_tsds);\n\n\tif (tsd_state_get(tsd) <= tsd_state_nominal_max) {\n\t\ttsd_add_nominal(tsd);\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/src/witness.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n#include \"jemalloc/internal/malloc_io.h\"\n\nvoid\nwitness_init(witness_t *witness, const char *name, witness_rank_t rank,\n    witness_comp_t *comp, void *opaque) {\n\twitness->name = name;\n\twitness->rank = rank;\n\twitness->comp = comp;\n\twitness->opaque = opaque;\n}\n\nstatic void\nwitness_print_witness(witness_t *w, unsigned n) {\n\tassert(n > 0);\n\tif (n == 1) {\n\t\tmalloc_printf(\" %s(%u)\", w->name, w->rank);\n\t} else {\n\t\tmalloc_printf(\" %s(%u)X%u\", w->name, w->rank, n);\n\t}\n}\n\nstatic void\nwitness_print_witnesses(const witness_list_t *witnesses) {\n\twitness_t *w, *last = NULL;\n\tunsigned n = 0;\n\tql_foreach(w, witnesses, link) {\n\t\tif (last != NULL && w->rank > last->rank) {\n\t\t\tassert(w->name != last->name);\n\t\t\twitness_print_witness(last, n);\n\t\t\tn = 0;\n\t\t} else if (last != NULL) {\n\t\t\tassert(w->rank == last->rank);\n\t\t\tassert(w->name == last->name);\n\t\t}\n\t\tlast = w;\n\t\t++n;\n\t}\n\tif (last != NULL) {\n\t\twitness_print_witness(last, n);\n\t}\n}\n\nstatic void\nwitness_lock_error_impl(const witness_list_t *witnesses,\n    const witness_t *witness) {\n\tmalloc_printf(\"<jemalloc>: Lock rank order reversal:\");\n\twitness_print_witnesses(witnesses);\n\tmalloc_printf(\" %s(%u)\\n\", witness->name, witness->rank);\n\tabort();\n}\nwitness_lock_error_t *JET_MUTABLE witness_lock_error = witness_lock_error_impl;\n\nstatic void\nwitness_owner_error_impl(const witness_t *witness) {\n\tmalloc_printf(\"<jemalloc>: Should own %s(%u)\\n\", witness->name,\n\t    witness->rank);\n\tabort();\n}\nwitness_owner_error_t *JET_MUTABLE witness_owner_error =\n    witness_owner_error_impl;\n\nstatic void\nwitness_not_owner_error_impl(const witness_t *witness) {\n\tmalloc_printf(\"<jemalloc>: Should not own %s(%u)\\n\", witness->name,\n\t    
witness->rank);\n\tabort();\n}\nwitness_not_owner_error_t *JET_MUTABLE witness_not_owner_error =\n    witness_not_owner_error_impl;\n\nstatic void\nwitness_depth_error_impl(const witness_list_t *witnesses,\n    witness_rank_t rank_inclusive, unsigned depth) {\n\tmalloc_printf(\"<jemalloc>: Should own %u lock%s of rank >= %u:\", depth,\n\t    (depth != 1) ?  \"s\" : \"\", rank_inclusive);\n\twitness_print_witnesses(witnesses);\n\tmalloc_printf(\"\\n\");\n\tabort();\n}\nwitness_depth_error_t *JET_MUTABLE witness_depth_error =\n    witness_depth_error_impl;\n\nvoid\nwitnesses_cleanup(witness_tsd_t *witness_tsd) {\n\twitness_assert_lockless(witness_tsd_tsdn(witness_tsd));\n\n\t/* Do nothing. */\n}\n\nvoid\nwitness_prefork(witness_tsd_t *witness_tsd) {\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\twitness_tsd->forking = true;\n}\n\nvoid\nwitness_postfork_parent(witness_tsd_t *witness_tsd) {\n\tif (!config_debug) {\n\t\treturn;\n\t}\n\twitness_tsd->forking = false;\n}\n\nvoid\nwitness_postfork_child(witness_tsd_t *witness_tsd) {\n\tif (!config_debug) {\n\t\treturn;\n\t}\n#ifndef JEMALLOC_MUTEX_INIT_CB\n\twitness_list_t *witnesses;\n\n\twitnesses = &witness_tsd->witnesses;\n\tql_new(witnesses);\n#endif\n\twitness_tsd->forking = false;\n}\n"
  },
  {
    "path": "deps/jemalloc/src/zone.c",
    "content": "#include \"jemalloc/internal/jemalloc_preamble.h\"\n#include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n#include \"jemalloc/internal/assert.h\"\n\n#ifndef JEMALLOC_ZONE\n#  error \"This source file is for zones on Darwin (OS X).\"\n#endif\n\n/* Definitions of the following structs in malloc/malloc.h might be too old\n * for the built binary to run on newer versions of OSX. So use the newest\n * possible version of those structs.\n */\ntypedef struct _malloc_zone_t {\n\tvoid *reserved1;\n\tvoid *reserved2;\n\tsize_t (*size)(struct _malloc_zone_t *, const void *);\n\tvoid *(*malloc)(struct _malloc_zone_t *, size_t);\n\tvoid *(*calloc)(struct _malloc_zone_t *, size_t, size_t);\n\tvoid *(*valloc)(struct _malloc_zone_t *, size_t);\n\tvoid (*free)(struct _malloc_zone_t *, void *);\n\tvoid *(*realloc)(struct _malloc_zone_t *, void *, size_t);\n\tvoid (*destroy)(struct _malloc_zone_t *);\n\tconst char *zone_name;\n\tunsigned (*batch_malloc)(struct _malloc_zone_t *, size_t, void **, unsigned);\n\tvoid (*batch_free)(struct _malloc_zone_t *, void **, unsigned);\n\tstruct malloc_introspection_t *introspect;\n\tunsigned version;\n\tvoid *(*memalign)(struct _malloc_zone_t *, size_t, size_t);\n\tvoid (*free_definite_size)(struct _malloc_zone_t *, void *, size_t);\n\tsize_t (*pressure_relief)(struct _malloc_zone_t *, size_t);\n} malloc_zone_t;\n\ntypedef struct {\n\tvm_address_t address;\n\tvm_size_t size;\n} vm_range_t;\n\ntypedef struct malloc_statistics_t {\n\tunsigned blocks_in_use;\n\tsize_t size_in_use;\n\tsize_t max_size_in_use;\n\tsize_t size_allocated;\n} malloc_statistics_t;\n\ntypedef kern_return_t memory_reader_t(task_t, vm_address_t, vm_size_t, void **);\n\ntypedef void vm_range_recorder_t(task_t, void *, unsigned type, vm_range_t *, unsigned);\n\ntypedef struct malloc_introspection_t {\n\tkern_return_t (*enumerator)(task_t, void *, unsigned, vm_address_t, memory_reader_t, vm_range_recorder_t);\n\tsize_t (*good_size)(malloc_zone_t *, 
size_t);\n\tboolean_t (*check)(malloc_zone_t *);\n\tvoid (*print)(malloc_zone_t *, boolean_t);\n\tvoid (*log)(malloc_zone_t *, void *);\n\tvoid (*force_lock)(malloc_zone_t *);\n\tvoid (*force_unlock)(malloc_zone_t *);\n\tvoid (*statistics)(malloc_zone_t *, malloc_statistics_t *);\n\tboolean_t (*zone_locked)(malloc_zone_t *);\n\tboolean_t (*enable_discharge_checking)(malloc_zone_t *);\n\tboolean_t (*disable_discharge_checking)(malloc_zone_t *);\n\tvoid (*discharge)(malloc_zone_t *, void *);\n#ifdef __BLOCKS__\n\tvoid (*enumerate_discharged_pointers)(malloc_zone_t *, void (^)(void *, void *));\n#else\n\tvoid *enumerate_unavailable_without_blocks;\n#endif\n\tvoid (*reinit_lock)(malloc_zone_t *);\n} malloc_introspection_t;\n\nextern kern_return_t malloc_get_all_zones(task_t, memory_reader_t, vm_address_t **, unsigned *);\n\nextern malloc_zone_t *malloc_default_zone(void);\n\nextern void malloc_zone_register(malloc_zone_t *zone);\n\nextern void malloc_zone_unregister(malloc_zone_t *zone);\n\n/*\n * The malloc_default_purgeable_zone() function is only available on >= 10.6.\n * We need to check whether it is present at runtime, thus the weak_import.\n */\nextern malloc_zone_t *malloc_default_purgeable_zone(void)\nJEMALLOC_ATTR(weak_import);\n\n/******************************************************************************/\n/* Data. */\n\nstatic malloc_zone_t *default_zone, *purgeable_zone;\nstatic malloc_zone_t jemalloc_zone;\nstatic struct malloc_introspection_t jemalloc_zone_introspect;\nstatic pid_t zone_force_lock_pid = -1;\n\n/******************************************************************************/\n/* Function prototypes for non-inline static functions. 
*/\n\nstatic size_t\tzone_size(malloc_zone_t *zone, const void *ptr);\nstatic void\t*zone_malloc(malloc_zone_t *zone, size_t size);\nstatic void\t*zone_calloc(malloc_zone_t *zone, size_t num, size_t size);\nstatic void\t*zone_valloc(malloc_zone_t *zone, size_t size);\nstatic void\tzone_free(malloc_zone_t *zone, void *ptr);\nstatic void\t*zone_realloc(malloc_zone_t *zone, void *ptr, size_t size);\nstatic void\t*zone_memalign(malloc_zone_t *zone, size_t alignment,\n    size_t size);\nstatic void\tzone_free_definite_size(malloc_zone_t *zone, void *ptr,\n    size_t size);\nstatic void\tzone_destroy(malloc_zone_t *zone);\nstatic unsigned\tzone_batch_malloc(struct _malloc_zone_t *zone, size_t size,\n    void **results, unsigned num_requested);\nstatic void\tzone_batch_free(struct _malloc_zone_t *zone,\n    void **to_be_freed, unsigned num_to_be_freed);\nstatic size_t\tzone_pressure_relief(struct _malloc_zone_t *zone, size_t goal);\nstatic size_t\tzone_good_size(malloc_zone_t *zone, size_t size);\nstatic kern_return_t\tzone_enumerator(task_t task, void *data, unsigned type_mask,\n    vm_address_t zone_address, memory_reader_t reader,\n    vm_range_recorder_t recorder);\nstatic boolean_t\tzone_check(malloc_zone_t *zone);\nstatic void\tzone_print(malloc_zone_t *zone, boolean_t verbose);\nstatic void\tzone_log(malloc_zone_t *zone, void *address);\nstatic void\tzone_force_lock(malloc_zone_t *zone);\nstatic void\tzone_force_unlock(malloc_zone_t *zone);\nstatic void\tzone_statistics(malloc_zone_t *zone,\n    malloc_statistics_t *stats);\nstatic boolean_t\tzone_locked(malloc_zone_t *zone);\nstatic void\tzone_reinit_lock(malloc_zone_t *zone);\n\n/******************************************************************************/\n/*\n * Functions.\n */\n\nstatic size_t\nzone_size(malloc_zone_t *zone, const void *ptr) {\n\t/*\n\t * There appear to be places within Darwin (such as setenv(3)) that\n\t * cause calls to this function with pointers that *no* zone owns.  
If\n\t * we knew that all pointers were owned by *some* zone, we could split\n\t * our zone into two parts, and use one as the default allocator and\n\t * the other as the default deallocator/reallocator.  Since that will\n\t * not work in practice, we must check all pointers to assure that they\n\t * reside within a mapped extent before determining size.\n\t */\n\treturn ivsalloc(tsdn_fetch(), ptr);\n}\n\nstatic void *\nzone_malloc(malloc_zone_t *zone, size_t size) {\n\treturn je_malloc(size);\n}\n\nstatic void *\nzone_calloc(malloc_zone_t *zone, size_t num, size_t size) {\n\treturn je_calloc(num, size);\n}\n\nstatic void *\nzone_valloc(malloc_zone_t *zone, size_t size) {\n\tvoid *ret = NULL; /* Assignment avoids useless compiler warning. */\n\n\tje_posix_memalign(&ret, PAGE, size);\n\n\treturn ret;\n}\n\nstatic void\nzone_free(malloc_zone_t *zone, void *ptr) {\n\tif (ivsalloc(tsdn_fetch(), ptr) != 0) {\n\t\tje_free(ptr);\n\t\treturn;\n\t}\n\n\tfree(ptr);\n}\n\nstatic void *\nzone_realloc(malloc_zone_t *zone, void *ptr, size_t size) {\n\tif (ivsalloc(tsdn_fetch(), ptr) != 0) {\n\t\treturn je_realloc(ptr, size);\n\t}\n\n\treturn realloc(ptr, size);\n}\n\nstatic void *\nzone_memalign(malloc_zone_t *zone, size_t alignment, size_t size) {\n\tvoid *ret = NULL; /* Assignment avoids useless compiler warning. */\n\n\tje_posix_memalign(&ret, alignment, size);\n\n\treturn ret;\n}\n\nstatic void\nzone_free_definite_size(malloc_zone_t *zone, void *ptr, size_t size) {\n\tsize_t alloc_size;\n\n\talloc_size = ivsalloc(tsdn_fetch(), ptr);\n\tif (alloc_size != 0) {\n\t\tassert(alloc_size == size);\n\t\tje_free(ptr);\n\t\treturn;\n\t}\n\n\tfree(ptr);\n}\n\nstatic void\nzone_destroy(malloc_zone_t *zone) {\n\t/* This function should never be called. 
*/\n\tnot_reached();\n}\n\nstatic unsigned\nzone_batch_malloc(struct _malloc_zone_t *zone, size_t size, void **results,\n    unsigned num_requested) {\n\tunsigned i;\n\n\tfor (i = 0; i < num_requested; i++) {\n\t\tresults[i] = je_malloc(size);\n\t\tif (!results[i])\n\t\t\tbreak;\n\t}\n\n\treturn i;\n}\n\nstatic void\nzone_batch_free(struct _malloc_zone_t *zone, void **to_be_freed,\n    unsigned num_to_be_freed) {\n\tunsigned i;\n\n\tfor (i = 0; i < num_to_be_freed; i++) {\n\t\tzone_free(zone, to_be_freed[i]);\n\t\tto_be_freed[i] = NULL;\n\t}\n}\n\nstatic size_t\nzone_pressure_relief(struct _malloc_zone_t *zone, size_t goal) {\n\treturn 0;\n}\n\nstatic size_t\nzone_good_size(malloc_zone_t *zone, size_t size) {\n\tif (size == 0) {\n\t\tsize = 1;\n\t}\n\treturn sz_s2u(size);\n}\n\nstatic kern_return_t\nzone_enumerator(task_t task, void *data, unsigned type_mask,\n    vm_address_t zone_address, memory_reader_t reader,\n    vm_range_recorder_t recorder) {\n\treturn KERN_SUCCESS;\n}\n\nstatic boolean_t\nzone_check(malloc_zone_t *zone) {\n\treturn true;\n}\n\nstatic void\nzone_print(malloc_zone_t *zone, boolean_t verbose) {\n}\n\nstatic void\nzone_log(malloc_zone_t *zone, void *address) {\n}\n\nstatic void\nzone_force_lock(malloc_zone_t *zone) {\n\tif (isthreaded) {\n\t\t/*\n\t\t * See the note in zone_force_unlock, below, to see why we need\n\t\t * this.\n\t\t */\n\t\tassert(zone_force_lock_pid == -1);\n\t\tzone_force_lock_pid = getpid();\n\t\tjemalloc_prefork();\n\t}\n}\n\nstatic void\nzone_force_unlock(malloc_zone_t *zone) {\n\t/*\n\t * zone_force_lock and zone_force_unlock are the entry points to the\n\t * forking machinery on OS X.  The tricky thing is, the child is not\n\t * allowed to unlock mutexes locked in the parent, even if owned by the\n\t * forking thread (and the mutex type we use in OS X will fail an assert\n\t * if we try).  In the child, we can get away with reinitializing all\n\t * the mutexes, which has the effect of unlocking them.  
In the parent,\n\t * doing this would mean we wouldn't wake any waiters blocked on the\n\t * mutexes we unlock.  So, we record the pid of the current thread in\n\t * zone_force_lock, and use that to detect if we're in the parent or\n\t * child here, to decide which unlock logic we need.\n\t */\n\tif (isthreaded) {\n\t\tassert(zone_force_lock_pid != -1);\n\t\tif (getpid() == zone_force_lock_pid) {\n\t\t\tjemalloc_postfork_parent();\n\t\t} else {\n\t\t\tjemalloc_postfork_child();\n\t\t}\n\t\tzone_force_lock_pid = -1;\n\t}\n}\n\nstatic void\nzone_statistics(malloc_zone_t *zone, malloc_statistics_t *stats) {\n\t/* We make no effort to actually fill the values */\n\tstats->blocks_in_use = 0;\n\tstats->size_in_use = 0;\n\tstats->max_size_in_use = 0;\n\tstats->size_allocated = 0;\n}\n\nstatic boolean_t\nzone_locked(malloc_zone_t *zone) {\n\t/* Pretend no lock is being held */\n\treturn false;\n}\n\nstatic void\nzone_reinit_lock(malloc_zone_t *zone) {\n\t/* As of OSX 10.12, this function is only used when force_unlock would\n\t * be used if the zone version were < 9. So just use force_unlock. 
*/\n\tzone_force_unlock(zone);\n}\n\nstatic void\nzone_init(void) {\n\tjemalloc_zone.size = zone_size;\n\tjemalloc_zone.malloc = zone_malloc;\n\tjemalloc_zone.calloc = zone_calloc;\n\tjemalloc_zone.valloc = zone_valloc;\n\tjemalloc_zone.free = zone_free;\n\tjemalloc_zone.realloc = zone_realloc;\n\tjemalloc_zone.destroy = zone_destroy;\n\tjemalloc_zone.zone_name = \"jemalloc_zone\";\n\tjemalloc_zone.batch_malloc = zone_batch_malloc;\n\tjemalloc_zone.batch_free = zone_batch_free;\n\tjemalloc_zone.introspect = &jemalloc_zone_introspect;\n\tjemalloc_zone.version = 9;\n\tjemalloc_zone.memalign = zone_memalign;\n\tjemalloc_zone.free_definite_size = zone_free_definite_size;\n\tjemalloc_zone.pressure_relief = zone_pressure_relief;\n\n\tjemalloc_zone_introspect.enumerator = zone_enumerator;\n\tjemalloc_zone_introspect.good_size = zone_good_size;\n\tjemalloc_zone_introspect.check = zone_check;\n\tjemalloc_zone_introspect.print = zone_print;\n\tjemalloc_zone_introspect.log = zone_log;\n\tjemalloc_zone_introspect.force_lock = zone_force_lock;\n\tjemalloc_zone_introspect.force_unlock = zone_force_unlock;\n\tjemalloc_zone_introspect.statistics = zone_statistics;\n\tjemalloc_zone_introspect.zone_locked = zone_locked;\n\tjemalloc_zone_introspect.enable_discharge_checking = NULL;\n\tjemalloc_zone_introspect.disable_discharge_checking = NULL;\n\tjemalloc_zone_introspect.discharge = NULL;\n#ifdef __BLOCKS__\n\tjemalloc_zone_introspect.enumerate_discharged_pointers = NULL;\n#else\n\tjemalloc_zone_introspect.enumerate_unavailable_without_blocks = NULL;\n#endif\n\tjemalloc_zone_introspect.reinit_lock = zone_reinit_lock;\n}\n\nstatic malloc_zone_t *\nzone_default_get(void) {\n\tmalloc_zone_t **zones = NULL;\n\tunsigned int num_zones = 0;\n\n\t/*\n\t * On OSX 10.12, malloc_default_zone returns a special zone that is not\n\t * present in the list of registered zones. 
That zone uses a \"lite zone\"\n\t * if one is present (apparently enabled when malloc stack logging is\n\t * enabled), or the first registered zone otherwise. In practice this\n\t * means unless malloc stack logging is enabled, the first registered\n\t * zone is the default.  So get the list of zones to get the first one,\n\t * instead of relying on malloc_default_zone.\n\t */\n\tif (KERN_SUCCESS != malloc_get_all_zones(0, NULL,\n\t    (vm_address_t**)&zones, &num_zones)) {\n\t\t/*\n\t\t * Reset the value in case the failure happened after it was\n\t\t * set.\n\t\t */\n\t\tnum_zones = 0;\n\t}\n\n\tif (num_zones) {\n\t\treturn zones[0];\n\t}\n\n\treturn malloc_default_zone();\n}\n\n/* As written, this function can only promote jemalloc_zone. */\nstatic void\nzone_promote(void) {\n\tmalloc_zone_t *zone;\n\n\tdo {\n\t\t/*\n\t\t * Unregister and reregister the default zone.  On OSX >= 10.6,\n\t\t * unregistering takes the last registered zone and places it\n\t\t * at the location of the specified zone.  Unregistering the\n\t\t * default zone thus makes the last registered one the default.\n\t\t * On OSX < 10.6, unregistering shifts all registered zones.\n\t\t * The first registered zone then becomes the default.\n\t\t */\n\t\tmalloc_zone_unregister(default_zone);\n\t\tmalloc_zone_register(default_zone);\n\n\t\t/*\n\t\t * On OSX 10.6, having the default purgeable zone appear before\n\t\t * the default zone makes some things crash because it thinks it\n\t\t * owns the default zone allocated pointers.  We thus\n\t\t * unregister/re-register it in order to ensure it's always\n\t\t * after the default zone.  On OSX < 10.6, there is no purgeable\n\t\t * zone, so this does nothing.  On OSX >= 10.6, unregistering\n\t\t * replaces the purgeable zone with the last registered zone\n\t\t * above, i.e. the default zone.  
Registering it again then puts\n\t\t * it at the end, obviously after the default zone.\n\t\t */\n\t\tif (purgeable_zone != NULL) {\n\t\t\tmalloc_zone_unregister(purgeable_zone);\n\t\t\tmalloc_zone_register(purgeable_zone);\n\t\t}\n\n\t\tzone = zone_default_get();\n\t} while (zone != &jemalloc_zone);\n}\n\nJEMALLOC_ATTR(constructor)\nvoid\nzone_register(void) {\n\t/*\n\t * If something else replaced the system default zone allocator, don't\n\t * register jemalloc's.\n\t */\n\tdefault_zone = zone_default_get();\n\tif (!default_zone->zone_name || strcmp(default_zone->zone_name,\n\t    \"DefaultMallocZone\") != 0) {\n\t\treturn;\n\t}\n\n\t/*\n\t * The default purgeable zone is created lazily by OSX's libc.  It uses\n\t * the default zone when it is created for \"small\" allocations\n\t * (< 15 KiB), but assumes the default zone is a scalable_zone.  This\n\t * obviously fails when the default zone is the jemalloc zone, so\n\t * malloc_default_purgeable_zone() is called beforehand so that the\n\t * default purgeable zone is created when the default zone is still\n\t * a scalable_zone.  As purgeable zones only exist on >= 10.6, we need\n\t * to check for the existence of malloc_default_purgeable_zone() at\n\t * run time.\n\t */\n\tpurgeable_zone = (malloc_default_purgeable_zone == NULL) ? NULL :\n\t    malloc_default_purgeable_zone();\n\n\t/* Register the custom zone.  At this point it won't be the default. */\n\tzone_init();\n\tmalloc_zone_register(&jemalloc_zone);\n\n\t/* Promote the custom zone to be default. */\n\tzone_promote();\n}\n"
  },
  {
    "path": "deps/jemalloc/test/analyze/prof_bias.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * This is a helper utility, only meant to be run manually (and, for example,\n * doesn't check for failures, try to skip execution in non-prof modes, etc.).\n * It runs, allocates objects of two different sizes from the same stack trace,\n * and exits.\n *\n * The idea is that some human operator will run it like:\n *     MALLOC_CONF=\"prof:true,prof_final:true\" test/analyze/prof_bias\n * and manually inspect the results.\n *\n * The results should be:\n * jeprof --text test/analyze/prof_bias --inuse_space jeprof.<pid>.0.f.heap:\n * \taround 1024 MB\n * jeprof --text test/analyze/prof_bias --inuse_objects jeprof.<pid>.0.f.heap:\n * \taround 33554448 = 16 + 32 * 1024 * 1024\n *\n * And, if prof_accum is on:\n * jeprof --text test/analyze/prof_bias --alloc_space jeprof.<pid>.0.f.heap:\n *     around 2048 MB\n * jeprof --text test/analyze/prof_bias --alloc_objects jeprof.<pid>.0.f.heap:\n * \taround 67108896 = 2 * (16 + 32 * 1024 * 1024)\n */\n\nstatic void\nmock_backtrace(void **vec, unsigned *len, unsigned max_len) {\n\t*len = 4;\n\tvec[0] = (void *)0x111;\n\tvec[1] = (void *)0x222;\n\tvec[2] = (void *)0x333;\n\tvec[3] = (void *)0x444;\n}\n\nstatic void\ndo_allocs(size_t sz, size_t cnt, bool do_frees) {\n\tfor (size_t i = 0; i < cnt; i++) {\n\t\tvoid *ptr = mallocx(sz, 0);\n\t\tassert_ptr_not_null(ptr, \"Unexpected mallocx failure\");\n\t\tif (do_frees) {\n\t\t\tdallocx(ptr, 0);\n\t\t}\n\t}\n}\n\nint\nmain(void) {\n\tsize_t lg_prof_sample_local = 19;\n\tint err = mallctl(\"prof.reset\", NULL, NULL,\n\t    (void *)&lg_prof_sample_local, sizeof(lg_prof_sample_local));\n\tassert(err == 0);\n\n\tprof_backtrace_hook_set(mock_backtrace);\n\tdo_allocs(16, 32 * 1024 * 1024, /* do_frees */ true);\n\tdo_allocs(32 * 1024* 1024, 16, /* do_frees */ true);\n\tdo_allocs(16, 32 * 1024 * 1024, /* do_frees */ false);\n\tdo_allocs(32 * 1024* 1024, 16, /* do_frees */ false);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "deps/jemalloc/test/analyze/rand.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/******************************************************************************/\n\n/*\n * General purpose tool for examining random number distributions.\n *\n * Input -\n * (a) a random number generator, and\n * (b) the buckets:\n *     (1) number of buckets,\n *     (2) width of each bucket, in log scale,\n *     (3) expected mean and stddev of the count of random numbers in each\n *         bucket, and\n * (c) number of iterations to invoke the generator.\n *\n * The program generates the specified amount of random numbers, and assess how\n * well they conform to the expectations: for each bucket, output -\n * (a) the (given) expected mean and stddev,\n * (b) the actual count and any interesting level of deviation:\n *     (1) ~68% buckets should show no interesting deviation, meaning a\n *         deviation less than stddev from the expectation;\n *     (2) ~27% buckets should show '+' / '-', meaning a deviation in the range\n *         of [stddev, 2 * stddev) from the expectation;\n *     (3) ~4% buckets should show '++' / '--', meaning a deviation in the\n *         range of [2 * stddev, 3 * stddev) from the expectation; and\n *     (4) less than 0.3% buckets should show more than two '+'s / '-'s.\n *\n * Technical remarks:\n * (a) The generator is expected to output uint64_t numbers, so you might need\n *     to define a wrapper.\n * (b) The buckets must be of equal width and the lowest bucket starts at\n *     [0, 2^lg_bucket_width - 1).\n * (c) Any generated number >= n_bucket * 2^lg_bucket_width will be counted\n *     towards the last bucket; the expected mean and stddev provided should\n *     also reflect that.\n * (d) The number of iterations is advised to be determined so that the bucket\n *     with the minimal expected proportion gets a sufficient count.\n */\n\nstatic void\nfill(size_t a[], const size_t n, const size_t k) {\n\tfor (size_t i = 0; i < n; ++i) {\n\t\ta[i] = k;\n\t}\n}\n\nstatic 
void\ncollect_buckets(uint64_t (*gen)(void *), void *opaque, size_t buckets[],\n    const size_t n_bucket, const size_t lg_bucket_width, const size_t n_iter) {\n\tfor (size_t i = 0; i < n_iter; ++i) {\n\t\tuint64_t num = gen(opaque);\n\t\tuint64_t bucket_id = num >> lg_bucket_width;\n\t\tif (bucket_id >= n_bucket) {\n\t\t\tbucket_id = n_bucket - 1;\n\t\t}\n\t\t++buckets[bucket_id];\n\t}\n}\n\nstatic void\nprint_buckets(const size_t buckets[], const size_t means[],\n    const size_t stddevs[], const size_t n_bucket) {\n\tfor (size_t i = 0; i < n_bucket; ++i) {\n\t\tmalloc_printf(\"%zu:\\tmean = %zu,\\tstddev = %zu,\\tbucket = %zu\",\n\t\t    i, means[i], stddevs[i], buckets[i]);\n\n\t\t/* Make sure there's no overflow. */\n\t\tassert(buckets[i] + stddevs[i] >= stddevs[i]);\n\t\tassert(means[i] + stddevs[i] >= stddevs[i]);\n\n\t\tif (buckets[i] + stddevs[i] <= means[i]) {\n\t\t\tmalloc_write(\" \");\n\t\t\tfor (size_t t = means[i] - buckets[i]; t >= stddevs[i];\n\t\t\t    t -= stddevs[i]) {\n\t\t\t\tmalloc_write(\"-\");\n\t\t\t}\n\t\t} else if (buckets[i] >= means[i] + stddevs[i]) {\n\t\t\tmalloc_write(\" \");\n\t\t\tfor (size_t t = buckets[i] - means[i]; t >= stddevs[i];\n\t\t\t    t -= stddevs[i]) {\n\t\t\t\tmalloc_write(\"+\");\n\t\t\t}\n\t\t}\n\t\tmalloc_write(\"\\n\");\n\t}\n}\n\nstatic void\nbucket_analysis(uint64_t (*gen)(void *), void *opaque, size_t buckets[],\n    const size_t means[], const size_t stddevs[], const size_t n_bucket,\n    const size_t lg_bucket_width, const size_t n_iter) {\n\tfor (size_t i = 1; i <= 3; ++i) {\n\t\tmalloc_printf(\"round %zu\\n\", i);\n\t\tfill(buckets, n_bucket, 0);\n\t\tcollect_buckets(gen, opaque, buckets, n_bucket,\n\t\t    lg_bucket_width, n_iter);\n\t\tprint_buckets(buckets, means, stddevs, n_bucket);\n\t}\n}\n\n/* (Recommended) minimal bucket mean. */\n#define MIN_BUCKET_MEAN 10000\n\n/******************************************************************************/\n\n/* Uniform random number generator. 
*/\n\ntypedef struct uniform_gen_arg_s uniform_gen_arg_t;\nstruct uniform_gen_arg_s {\n\tuint64_t state;\n\tconst unsigned lg_range;\n};\n\nstatic uint64_t\nuniform_gen(void *opaque) {\n\tuniform_gen_arg_t *arg = (uniform_gen_arg_t *)opaque;\n\treturn prng_lg_range_u64(&arg->state, arg->lg_range);\n}\n\nTEST_BEGIN(test_uniform) {\n#define LG_N_BUCKET 5\n#define N_BUCKET (1 << LG_N_BUCKET)\n\n#define QUOTIENT_CEIL(n, d) (((n) - 1) / (d) + 1)\n\n\tconst unsigned lg_range_test = 25;\n\n\t/*\n\t * Mathematical tricks to guarantee that both mean and stddev are\n\t * integers, and that the minimal bucket mean is at least\n\t * MIN_BUCKET_MEAN.\n\t */\n\tconst size_t q = 1 << QUOTIENT_CEIL(LG_CEIL(QUOTIENT_CEIL(\n\t    MIN_BUCKET_MEAN, N_BUCKET * (N_BUCKET - 1))), 2);\n\tconst size_t stddev = (N_BUCKET - 1) * q;\n\tconst size_t mean = N_BUCKET * stddev * q;\n\tconst size_t n_iter = N_BUCKET * mean;\n\n\tsize_t means[N_BUCKET];\n\tfill(means, N_BUCKET, mean);\n\tsize_t stddevs[N_BUCKET];\n\tfill(stddevs, N_BUCKET, stddev);\n\n\tuniform_gen_arg_t arg = {(uint64_t)(uintptr_t)&lg_range_test,\n\t    lg_range_test};\n\tsize_t buckets[N_BUCKET];\n\tassert_zu_ge(lg_range_test, LG_N_BUCKET, \"\");\n\tconst size_t lg_bucket_width = lg_range_test - LG_N_BUCKET;\n\n\tbucket_analysis(uniform_gen, &arg, buckets, means, stddevs,\n\t    N_BUCKET, lg_bucket_width, n_iter);\n\n#undef LG_N_BUCKET\n#undef N_BUCKET\n#undef QUOTIENT_CEIL\n}\nTEST_END\n\n/******************************************************************************/\n\n/* Geometric random number generator; compiled only when prof is on. */\n\n#ifdef JEMALLOC_PROF\n\n/*\n * Fills geometric proportions and returns the minimal proportion.  
See\n * comments in test_prof_sample for explanations for n_divide.\n */\nstatic double\nfill_geometric_proportions(double proportions[], const size_t n_bucket,\n    const size_t n_divide) {\n\tassert(n_bucket > 0);\n\tassert(n_divide > 0);\n\tdouble x = 1.;\n\tfor (size_t i = 0; i < n_bucket; ++i) {\n\t\tif (i == n_bucket - 1) {\n\t\t\tproportions[i] = x;\n\t\t} else {\n\t\t\tdouble y = x * exp(-1. / n_divide);\n\t\t\tproportions[i] = x - y;\n\t\t\tx = y;\n\t\t}\n\t}\n\t/*\n\t * The minimal proportion is the smaller one of the last two\n\t * proportions for geometric distribution.\n\t */\n\tdouble min_proportion = proportions[n_bucket - 1];\n\tif (n_bucket >= 2 && proportions[n_bucket - 2] < min_proportion) {\n\t\tmin_proportion = proportions[n_bucket - 2];\n\t}\n\treturn min_proportion;\n}\n\nstatic size_t\nround_to_nearest(const double x) {\n\treturn (size_t)(x + .5);\n}\n\nstatic void\nfill_references(size_t means[], size_t stddevs[], const double proportions[],\n    const size_t n_bucket, const size_t n_iter) {\n\tfor (size_t i = 0; i < n_bucket; ++i) {\n\t\tdouble x = n_iter * proportions[i];\n\t\tmeans[i] = round_to_nearest(x);\n\t\tstddevs[i] = round_to_nearest(sqrt(x * (1. - proportions[i])));\n\t}\n}\n\nstatic uint64_t\nprof_sample_gen(void *opaque) {\n\treturn prof_sample_new_event_wait((tsd_t *)opaque) - 1;\n}\n\n#endif /* JEMALLOC_PROF */\n\nTEST_BEGIN(test_prof_sample) {\n\ttest_skip_if(!config_prof);\n#ifdef JEMALLOC_PROF\n\n/* Number of divisions within [0, mean). */\n#define LG_N_DIVIDE 3\n#define N_DIVIDE (1 << LG_N_DIVIDE)\n\n/* Coverage of buckets in terms of multiples of mean. 
*/\n#define LG_N_MULTIPLY 2\n#define N_GEO_BUCKET (N_DIVIDE << LG_N_MULTIPLY)\n\n\ttest_skip_if(!opt_prof);\n\n\tsize_t lg_prof_sample_test = 25;\n\n\tsize_t lg_prof_sample_orig = lg_prof_sample;\n\tassert_d_eq(mallctl(\"prof.reset\", NULL, NULL, &lg_prof_sample_test,\n\t    sizeof(size_t)), 0, \"\");\n\tmalloc_printf(\"lg_prof_sample = %zu\\n\", lg_prof_sample_test);\n\n\tdouble proportions[N_GEO_BUCKET + 1];\n\tconst double min_proportion = fill_geometric_proportions(proportions,\n\t    N_GEO_BUCKET + 1, N_DIVIDE);\n\tconst size_t n_iter = round_to_nearest(MIN_BUCKET_MEAN /\n\t    min_proportion);\n\tsize_t means[N_GEO_BUCKET + 1];\n\tsize_t stddevs[N_GEO_BUCKET + 1];\n\tfill_references(means, stddevs, proportions, N_GEO_BUCKET + 1, n_iter);\n\n\ttsd_t *tsd = tsd_fetch();\n\tassert_ptr_not_null(tsd, \"\");\n\tsize_t buckets[N_GEO_BUCKET + 1];\n\tassert_zu_ge(lg_prof_sample, LG_N_DIVIDE, \"\");\n\tconst size_t lg_bucket_width = lg_prof_sample - LG_N_DIVIDE;\n\n\tbucket_analysis(prof_sample_gen, tsd, buckets, means, stddevs,\n\t    N_GEO_BUCKET + 1, lg_bucket_width, n_iter);\n\n\tassert_d_eq(mallctl(\"prof.reset\", NULL, NULL, &lg_prof_sample_orig,\n\t    sizeof(size_t)), 0, \"\");\n\n#undef LG_N_DIVIDE\n#undef N_DIVIDE\n#undef LG_N_MULTIPLY\n#undef N_GEO_BUCKET\n\n#endif /* JEMALLOC_PROF */\n}\nTEST_END\n\n/******************************************************************************/\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_uniform,\n\t    test_prof_sample);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/analyze/sizes.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include <stdio.h>\n\n/*\n * Print the sizes of various important core data structures.  OK, I guess this\n * isn't really a \"stress\" test, but it does give useful information about\n * low-level performance characteristics, as the other things in this directory\n * do.\n */\n\nstatic void\ndo_print(const char *name, size_t sz_bytes) {\n\tconst char *sizes[] = {\"bytes\", \"KB\", \"MB\", \"GB\", \"TB\", \"PB\", \"EB\",\n\t\t\"ZB\"};\n\tsize_t sizes_max = sizeof(sizes)/sizeof(sizes[0]);\n\n\tsize_t ind = 0;\n\tdouble sz = sz_bytes;\n\twhile (sz >= 1024 && ind < sizes_max - 1) {\n\t\tsz /= 1024;\n\t\tind++;\n\t}\n\tif (ind == 0) {\n\t\tprintf(\"%-20s: %zu bytes\\n\", name, sz_bytes);\n\t} else {\n\t\tprintf(\"%-20s: %f %s\\n\", name, sz, sizes[ind]);\n\t}\n}\n\nint\nmain() {\n#define P(type)\t\t\t\t\t\t\t\t\\\n\tdo_print(#type, sizeof(type))\n\tP(arena_t);\n\tP(arena_stats_t);\n\tP(base_t);\n\tP(decay_t);\n\tP(edata_t);\n\tP(ecache_t);\n\tP(eset_t);\n\tP(malloc_mutex_t);\n\tP(prof_tctx_t);\n\tP(prof_gctx_t);\n\tP(prof_tdata_t);\n\tP(rtree_t);\n\tP(rtree_leaf_elm_t);\n\tP(slab_data_t);\n\tP(tcache_t);\n\tP(tcache_slow_t);\n\tP(tsd_t);\n#undef P\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-alti.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n/**\n * @file SFMT-alti.h\n *\n * @brief SIMD oriented Fast Mersenne Twister(SFMT)\n * pseudorandom number generator\n *\n * @author Mutsuo Saito (Hiroshima University)\n * @author Makoto Matsumoto (Hiroshima University)\n *\n * Copyright (C) 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n * University. All rights reserved.\n *\n * The new BSD License is applied to this software.\n * see LICENSE.txt\n */\n\n#ifndef SFMT_ALTI_H\n#define SFMT_ALTI_H\n\n/**\n * This function represents the recursion formula in AltiVec and BIG ENDIAN.\n * @param a a 128-bit part of the internal state array\n * @param b a 128-bit part of the internal state array\n * @param c a 128-bit part of the internal state array\n * @param d a 128-bit part of the internal state array\n * @return output\n */\nJEMALLOC_ALWAYS_INLINE\nvector unsigned int vec_recursion(vector unsigned int a,\n\t\t\t\t\t\tvector unsigned int b,\n\t\t\t\t\t\tvector unsigned int c,\n\t\t\t\t\t\tvector unsigned int d) {\n\n    const vector unsigned int sl1 = ALTI_SL1;\n    const vector unsigned int sr1 = ALTI_SR1;\n#ifdef ONLY64\n    const vector unsigned int mask = ALTI_MSK64;\n    const vector unsigned char perm_sl = ALTI_SL2_PERM64;\n    const vector unsigned char perm_sr = ALTI_SR2_PERM64;\n#else\n    const vector unsigned int mask = ALTI_MSK;\n    const vector unsigned char perm_sl = ALTI_SL2_PERM;\n    const vector unsigned char perm_sr = ALTI_SR2_PERM;\n#endif\n    
vector unsigned int v, w, x, y, z;\n    x = vec_perm(a, (vector unsigned int)perm_sl, perm_sl);\n    v = a;\n    y = vec_sr(b, sr1);\n    z = vec_perm(c, (vector unsigned int)perm_sr, perm_sr);\n    w = vec_sl(d, sl1);\n    z = vec_xor(z, w);\n    y = vec_and(y, mask);\n    v = vec_xor(v, x);\n    z = vec_xor(z, y);\n    z = vec_xor(z, v);\n    return z;\n}\n\n/**\n * This function fills the internal state array with pseudorandom\n * integers.\n */\nstatic inline void gen_rand_all(sfmt_t *ctx) {\n    int i;\n    vector unsigned int r, r1, r2;\n\n    r1 = ctx->sfmt[N - 2].s;\n    r2 = ctx->sfmt[N - 1].s;\n    for (i = 0; i < N - POS1; i++) {\n\tr = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1].s, r1, r2);\n\tctx->sfmt[i].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (; i < N; i++) {\n\tr = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1 - N].s, r1, r2);\n\tctx->sfmt[i].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n}\n\n/**\n * This function fills the user-specified array with pseudorandom\n * integers.\n *\n * @param array a 128-bit array to be filled by pseudorandom numbers.\n * @param size number of 128-bit pseudorandom numbers to be generated.\n */\nstatic inline void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) {\n    int i, j;\n    vector unsigned int r, r1, r2;\n\n    r1 = ctx->sfmt[N - 2].s;\n    r2 = ctx->sfmt[N - 1].s;\n    for (i = 0; i < N - POS1; i++) {\n\tr = vec_recursion(ctx->sfmt[i].s, ctx->sfmt[i + POS1].s, r1, r2);\n\tarray[i].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (; i < N; i++) {\n\tr = vec_recursion(ctx->sfmt[i].s, array[i + POS1 - N].s, r1, r2);\n\tarray[i].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n    /* main loop */\n    for (; i < size - N; i++) {\n\tr = vec_recursion(array[i - N].s, array[i + POS1 - N].s, r1, r2);\n\tarray[i].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (j = 0; j < 2 * N - size; j++) {\n\tctx->sfmt[j].s = array[j + size - N].s;\n    }\n    for (; i < size; i++) {\n\tr = vec_recursion(array[i - N].s, array[i + POS1 - N].s, r1, r2);\n\tarray[i].s = r;\n\tctx->sfmt[j++].s = r;\n\tr1 = r2;\n\tr2 = r;\n    }\n}\n\n#ifndef ONLY64\n#if defined(__APPLE__)\n#define ALTI_SWAP (vector unsigned char) \\\n\t(4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11)\n#else\n#define ALTI_SWAP {4, 5, 6, 7, 0, 1, 2, 3, 12, 13, 14, 15, 8, 9, 10, 11}\n#endif\n/**\n * This function swaps the high and low 32 bits of 64-bit integers in the\n * user-specified array.\n *\n * @param array a 128-bit array to be swapped.\n * @param size size of 128-bit array.\n */\nstatic inline void swap(w128_t *array, int size) {\n    int i;\n    const vector unsigned char perm = ALTI_SWAP;\n\n    for (i = 0; i < size; i++) {\n\tarray[i].s = vec_perm(array[i].s, (vector unsigned int)perm, perm);\n    }\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS_H\n#define SFMT_PARAMS_H\n\n#if !defined(MEXP)\n#ifdef __GNUC__\n  #warning \"MEXP is not defined. I assume MEXP is 19937.\"\n#endif\n  #define MEXP 19937\n#endif\n/*-----------------\n  BASIC DEFINITIONS\n  -----------------*/\n/** Mersenne Exponent. The period of the sequence \n *  is a multiple of 2^MEXP-1.\n * #define MEXP 19937 */\n/** SFMT generator has an internal state array of 128-bit integers,\n * and N is its size. */\n#define N (MEXP / 128 + 1)\n/** N32 is the size of internal state array when regarded as an array\n * of 32-bit integers.*/\n#define N32 (N * 4)\n/** N64 is the size of internal state array when regarded as an array\n * of 64-bit integers.*/\n#define N64 (N * 2)\n\n/*----------------------\n  the parameters of SFMT\n  following definitions are in paramsXXXX.h file.\n  ----------------------*/\n/** the pick up position of the array.\n#define POS1 122 \n*/\n\n/** the parameter of shift left as four 32-bit registers.\n#define SL1 18\n */\n\n/** the parameter of shift left as one 128-bit register. \n * The 128-bit integer is shifted by (SL2 * 8) bits. \n#define SL2 1 \n*/\n\n/** the parameter of shift right as four 32-bit registers.\n#define SR1 11\n*/\n\n/** the parameter of shift right as one 128-bit register. \n * The 128-bit integer is shifted by (SR2 * 8) bits. \n#define SR2 1 \n*/\n\n/** A bitmask, used in the recursion.  
These parameters are introduced\n * to break symmetry of SIMD.\n#define MSK1 0xdfffffefU\n#define MSK2 0xddfecb7fU\n#define MSK3 0xbffaffffU\n#define MSK4 0xbffffff6U \n*/\n\n/** These definitions are part of a 128-bit period certification vector.\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0xc98e126aU\n*/\n\n#if MEXP == 607\n  #include \"test/SFMT-params607.h\"\n#elif MEXP == 1279\n  #include \"test/SFMT-params1279.h\"\n#elif MEXP == 2281\n  #include \"test/SFMT-params2281.h\"\n#elif MEXP == 4253\n  #include \"test/SFMT-params4253.h\"\n#elif MEXP == 11213\n  #include \"test/SFMT-params11213.h\"\n#elif MEXP == 19937\n  #include \"test/SFMT-params19937.h\"\n#elif MEXP == 44497\n  #include \"test/SFMT-params44497.h\"\n#elif MEXP == 86243\n  #include \"test/SFMT-params86243.h\"\n#elif MEXP == 132049\n  #include \"test/SFMT-params132049.h\"\n#elif MEXP == 216091\n  #include \"test/SFMT-params216091.h\"\n#else\n#ifdef __GNUC__\n  #error \"MEXP is not valid.\"\n  #undef MEXP\n#else\n  #undef MEXP\n#endif\n\n#endif\n\n#endif /* SFMT_PARAMS_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params11213.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS11213_H\n#define SFMT_PARAMS11213_H\n\n#define POS1\t68\n#define SL1\t14\n#define SL2\t3\n#define SR1\t7\n#define SR2\t3\n#define MSK1\t0xeffff7fbU\n#define MSK2\t0xffffffefU\n#define MSK3\t0xdfdfbfffU\n#define MSK4\t0x7fffdbfdU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0xe8148000U\n#define PARITY4\t0xd0c7afa3U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10}\n    #define ALTI_SL2_PERM64\t{3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2}\n    #define ALTI_SR2_PERM\t{5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12}\n    #define ALTI_SR2_PERM64\t{13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-11213:68-14-3-7-3:effff7fb-ffffffef-dfdfbfff-7fffdbfd\"\n\n#endif /* SFMT_PARAMS11213_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params1279.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS1279_H\n#define SFMT_PARAMS1279_H\n\n#define POS1\t7\n#define SL1\t14\n#define SL2\t3\n#define SR1\t5\n#define SR2\t1\n#define MSK1\t0xf7fefffdU\n#define MSK2\t0x7fefcfffU\n#define MSK3\t0xaff3ef3fU\n#define MSK4\t0xb5ffff7fU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0x20000000U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10}\n    #define ALTI_SL2_PERM64\t{3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-1279:7-14-3-5-1:f7fefffd-7fefcfff-aff3ef3f-b5ffff7f\"\n\n#endif /* SFMT_PARAMS1279_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params132049.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS132049_H\n#define SFMT_PARAMS132049_H\n\n#define POS1\t110\n#define SL1\t19\n#define SL2\t1\n#define SR1\t21\n#define SR2\t1\n#define MSK1\t0xffffbb5fU\n#define MSK2\t0xfb6ebf95U\n#define MSK3\t0xfffefffaU\n#define MSK4\t0xcff77fffU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0xcb520000U\n#define PARITY4\t0xc7e91c7dU\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8}\n    #define ALTI_SL2_PERM64\t{1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-132049:110-19-1-21-1:ffffbb5f-fb6ebf95-fffefffa-cff77fff\"\n\n#endif /* SFMT_PARAMS132049_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params19937.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS19937_H\n#define SFMT_PARAMS19937_H\n\n#define POS1\t122\n#define SL1\t18\n#define SL2\t1\n#define SR1\t11\n#define SR2\t1\n#define MSK1\t0xdfffffefU\n#define MSK2\t0xddfecb7fU\n#define MSK3\t0xbffaffffU\n#define MSK4\t0xbffffff6U\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0x13c9e684U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8}\n    #define ALTI_SL2_PERM64\t{1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-19937:122-18-1-11-1:dfffffef-ddfecb7f-bffaffff-bffffff6\"\n\n#endif /* SFMT_PARAMS19937_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params216091.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS216091_H\n#define SFMT_PARAMS216091_H\n\n#define POS1\t627\n#define SL1\t11\n#define SL2\t3\n#define SR1\t10\n#define SR2\t1\n#define MSK1\t0xbff7bff7U\n#define MSK2\t0xbfffffffU\n#define MSK3\t0xbffffa7fU\n#define MSK4\t0xffddfbfbU\n#define PARITY1\t0xf8000001U\n#define PARITY2\t0x89e80709U\n#define PARITY3\t0x3bd2b64bU\n#define PARITY4\t0x0c64b1e4U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10}\n    #define ALTI_SL2_PERM64\t{3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-216091:627-11-3-10-1:bff7bff7-bfffffff-bffffa7f-ffddfbfb\"\n\n#endif /* SFMT_PARAMS216091_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params2281.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS2281_H\n#define SFMT_PARAMS2281_H\n\n#define POS1\t12\n#define SL1\t19\n#define SL2\t1\n#define SR1\t5\n#define SR2\t1\n#define MSK1\t0xbff7ffbfU\n#define MSK2\t0xfdfffffeU\n#define MSK3\t0xf7ffef7fU\n#define MSK4\t0xf2f7cbbfU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0x41dfa600U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8}\n    #define ALTI_SL2_PERM64\t{1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-2281:12-19-1-5-1:bff7ffbf-fdfffffe-f7ffef7f-f2f7cbbf\"\n\n#endif /* SFMT_PARAMS2281_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params4253.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS4253_H\n#define SFMT_PARAMS4253_H\n\n#define POS1\t17\n#define SL1\t20\n#define SL2\t1\n#define SR1\t7\n#define SR2\t1\n#define MSK1\t0x9f7bffffU\n#define MSK2\t0x9fffff5fU\n#define MSK3\t0x3efffffbU\n#define MSK4\t0xfffff7bbU\n#define PARITY1\t0xa8000001U\n#define PARITY2\t0xaf5390a3U\n#define PARITY3\t0xb740b3f8U\n#define PARITY4\t0x6c11486dU\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8}\n    #define ALTI_SL2_PERM64\t{1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-4253:17-20-1-7-1:9f7bffff-9fffff5f-3efffffb-fffff7bb\"\n\n#endif /* SFMT_PARAMS4253_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params44497.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS44497_H\n#define SFMT_PARAMS44497_H\n\n#define POS1\t330\n#define SL1\t5\n#define SL2\t3\n#define SR1\t9\n#define SR2\t3\n#define MSK1\t0xeffffffbU\n#define MSK2\t0xdfbebfffU\n#define MSK3\t0xbfbf7befU\n#define MSK4\t0x9ffd7bffU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0xa3ac4000U\n#define PARITY4\t0xecc1327aU\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10}\n    #define ALTI_SL2_PERM64\t{3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2}\n    #define ALTI_SR2_PERM\t{5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12}\n    #define ALTI_SR2_PERM64\t{13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-44497:330-5-3-9-3:effffffb-dfbebfff-bfbf7bef-9ffd7bff\"\n\n#endif /* SFMT_PARAMS44497_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params607.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS607_H\n#define SFMT_PARAMS607_H\n\n#define POS1\t2\n#define SL1\t15\n#define SL2\t3\n#define SR1\t13\n#define SR2\t3\n#define MSK1\t0xfdff37ffU\n#define MSK2\t0xef7f3f7dU\n#define MSK3\t0xff777b7dU\n#define MSK4\t0x7ff7fb2fU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0x5986f054U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{3,21,21,21,7,0,1,2,11,4,5,6,15,8,9,10}\n    #define ALTI_SL2_PERM64\t{3,4,5,6,7,29,29,29,11,12,13,14,15,0,1,2}\n    #define ALTI_SR2_PERM\t{5,6,7,0,9,10,11,4,13,14,15,8,19,19,19,12}\n    #define ALTI_SR2_PERM64\t{13,14,15,0,1,2,3,4,19,19,19,8,9,10,11,12}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-607:2-15-3-13-3:fdff37ff-ef7f3f7d-ff777b7d-7ff7fb2f\"\n\n#endif /* SFMT_PARAMS607_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-params86243.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#ifndef SFMT_PARAMS86243_H\n#define SFMT_PARAMS86243_H\n\n#define POS1\t366\n#define SL1\t6\n#define SL2\t7\n#define SR1\t19\n#define SR2\t1\n#define MSK1\t0xfdbffbffU\n#define MSK2\t0xbff7ff3fU\n#define MSK3\t0xfd77efffU\n#define MSK4\t0xbf9ff3ffU\n#define PARITY1\t0x00000001U\n#define PARITY2\t0x00000000U\n#define PARITY3\t0x00000000U\n#define PARITY4\t0xe9528d85U\n\n\n/* PARAMETERS FOR ALTIVEC */\n#if defined(__APPLE__)\t/* For OSX */\n    #define ALTI_SL1\t(vector unsigned int)(SL1, SL1, SL1, SL1)\n    #define ALTI_SR1\t(vector unsigned int)(SR1, SR1, SR1, SR1)\n    #define ALTI_MSK\t(vector unsigned int)(MSK1, MSK2, MSK3, MSK4)\n    #define ALTI_MSK64 \\\n\t(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)\n    #define ALTI_SL2_PERM \\\n\t(vector unsigned char)(25,25,25,25,3,25,25,25,7,0,1,2,11,4,5,6)\n    #define ALTI_SL2_PERM64 \\\n\t(vector unsigned char)(7,25,25,25,25,25,25,25,15,0,1,2,3,4,5,6)\n    #define ALTI_SR2_PERM \\\n\t(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)\n    #define ALTI_SR2_PERM64 \\\n\t(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)\n#else\t/* For OTHER OSs(Linux?) 
*/\n    #define ALTI_SL1\t{SL1, SL1, SL1, SL1}\n    #define ALTI_SR1\t{SR1, SR1, SR1, SR1}\n    #define ALTI_MSK\t{MSK1, MSK2, MSK3, MSK4}\n    #define ALTI_MSK64\t{MSK2, MSK1, MSK4, MSK3}\n    #define ALTI_SL2_PERM\t{25,25,25,25,3,25,25,25,7,0,1,2,11,4,5,6}\n    #define ALTI_SL2_PERM64\t{7,25,25,25,25,25,25,25,15,0,1,2,3,4,5,6}\n    #define ALTI_SR2_PERM\t{7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}\n    #define ALTI_SR2_PERM64\t{15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}\n#endif\t/* For OSX */\n#define IDSTR\t\"SFMT-86243:366-6-7-19-1:fdbffbff-bff7ff3f-fd77efff-bf9ff3ff\"\n\n#endif /* SFMT_PARAMS86243_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT-sse2.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n/**\n * @file  SFMT-sse2.h\n * @brief SIMD oriented Fast Mersenne Twister(SFMT) for Intel SSE2\n *\n * @author Mutsuo Saito (Hiroshima University)\n * @author Makoto Matsumoto (Hiroshima University)\n *\n * @note We assume LITTLE ENDIAN in this file\n *\n * Copyright (C) 2006, 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n * University. All rights reserved.\n *\n * The new BSD License is applied to this software, see LICENSE.txt\n */\n\n#ifndef SFMT_SSE2_H\n#define SFMT_SSE2_H\n\n/**\n * This function represents the recursion formula.\n * @param a a 128-bit part of the interal state array\n * @param b a 128-bit part of the interal state array\n * @param c a 128-bit part of the interal state array\n * @param d a 128-bit part of the interal state array\n * @param mask 128-bit mask\n * @return output\n */\nJEMALLOC_ALWAYS_INLINE __m128i mm_recursion(__m128i *a, __m128i *b,\n\t\t\t\t   __m128i c, __m128i d, __m128i mask) {\n    __m128i v, x, y, z;\n\n    x = _mm_load_si128(a);\n    y = _mm_srli_epi32(*b, SR1);\n    z = _mm_srli_si128(c, SR2);\n    v = _mm_slli_epi32(d, SL1);\n    z = _mm_xor_si128(z, x);\n    z = _mm_xor_si128(z, v);\n    x = _mm_slli_si128(x, SL2);\n    y = _mm_and_si128(y, mask);\n    z = _mm_xor_si128(z, x);\n    z = _mm_xor_si128(z, y);\n    return z;\n}\n\n/**\n * This function fills the internal state array with pseudorandom\n * integers.\n */\nstatic inline void gen_rand_all(sfmt_t *ctx) 
{\n    int i;\n    __m128i r, r1, r2, mask;\n    mask = _mm_set_epi32(MSK4, MSK3, MSK2, MSK1);\n\n    r1 = _mm_load_si128(&ctx->sfmt[N - 2].si);\n    r2 = _mm_load_si128(&ctx->sfmt[N - 1].si);\n    for (i = 0; i < N - POS1; i++) {\n\tr = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1].si, r1, r2,\n\t  mask);\n\t_mm_store_si128(&ctx->sfmt[i].si, r);\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (; i < N; i++) {\n\tr = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1 - N].si, r1, r2,\n\t  mask);\n\t_mm_store_si128(&ctx->sfmt[i].si, r);\n\tr1 = r2;\n\tr2 = r;\n    }\n}\n\n/**\n * This function fills the user-specified array with pseudorandom\n * integers.\n *\n * @param array an 128-bit array to be filled by pseudorandom numbers.\n * @param size number of 128-bit pesudorandom numbers to be generated.\n */\nstatic inline void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) {\n    int i, j;\n    __m128i r, r1, r2, mask;\n    mask = _mm_set_epi32(MSK4, MSK3, MSK2, MSK1);\n\n    r1 = _mm_load_si128(&ctx->sfmt[N - 2].si);\n    r2 = _mm_load_si128(&ctx->sfmt[N - 1].si);\n    for (i = 0; i < N - POS1; i++) {\n\tr = mm_recursion(&ctx->sfmt[i].si, &ctx->sfmt[i + POS1].si, r1, r2,\n\t  mask);\n\t_mm_store_si128(&array[i].si, r);\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (; i < N; i++) {\n\tr = mm_recursion(&ctx->sfmt[i].si, &array[i + POS1 - N].si, r1, r2,\n\t  mask);\n\t_mm_store_si128(&array[i].si, r);\n\tr1 = r2;\n\tr2 = r;\n    }\n    /* main loop */\n    for (; i < size - N; i++) {\n\tr = mm_recursion(&array[i - N].si, &array[i + POS1 - N].si, r1, r2,\n\t\t\t mask);\n\t_mm_store_si128(&array[i].si, r);\n\tr1 = r2;\n\tr2 = r;\n    }\n    for (j = 0; j < 2 * N - size; j++) {\n\tr = _mm_load_si128(&array[j + size - N].si);\n\t_mm_store_si128(&ctx->sfmt[j].si, r);\n    }\n    for (; i < size; i++) {\n\tr = mm_recursion(&array[i - N].si, &array[i + POS1 - N].si, r1, r2,\n\t\t\t mask);\n\t_mm_store_si128(&array[i].si, r);\n\t_mm_store_si128(&ctx->sfmt[j++].si, r);\n\tr1 = 
r2;\n\tr2 = r;\n    }\n}\n\n#endif\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/SFMT.h",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n/** \n * @file SFMT.h \n *\n * @brief SIMD oriented Fast Mersenne Twister(SFMT) pseudorandom\n * number generator\n *\n * @author Mutsuo Saito (Hiroshima University)\n * @author Makoto Matsumoto (Hiroshima University)\n *\n * Copyright (C) 2006, 2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n * University. All rights reserved.\n *\n * The new BSD License is applied to this software.\n * see LICENSE.txt\n *\n * @note We assume that your system has inttypes.h.  
If your system\n * doesn't have inttypes.h, you have to typedef uint32_t and uint64_t,\n * and you have to define PRIu64 and PRIx64 in this file as follows:\n * @verbatim\n typedef unsigned int uint32_t\n typedef unsigned long long uint64_t  \n #define PRIu64 \"llu\"\n #define PRIx64 \"llx\"\n@endverbatim\n * uint32_t must be exactly 32-bit unsigned integer type (no more, no\n * less), and uint64_t must be exactly 64-bit unsigned integer type.\n * PRIu64 and PRIx64 are used for printf function to print 64-bit\n * unsigned int and 64-bit unsigned int in hexadecimal format.\n */\n\n#ifndef SFMT_H\n#define SFMT_H\n\ntypedef struct sfmt_s sfmt_t;\n\nuint32_t gen_rand32(sfmt_t *ctx);\nuint32_t gen_rand32_range(sfmt_t *ctx, uint32_t limit);\nuint64_t gen_rand64(sfmt_t *ctx);\nuint64_t gen_rand64_range(sfmt_t *ctx, uint64_t limit);\nvoid fill_array32(sfmt_t *ctx, uint32_t *array, int size);\nvoid fill_array64(sfmt_t *ctx, uint64_t *array, int size);\nsfmt_t *init_gen_rand(uint32_t seed);\nsfmt_t *init_by_array(uint32_t *init_key, int key_length);\nvoid fini_gen_rand(sfmt_t *ctx);\nconst char *get_idstring(void);\nint get_min_array_size32(void);\nint get_min_array_size64(void);\n\n/* These real versions are due to Isaku Wada */\n/** generates a random number on [0,1]-real-interval */\nstatic inline double to_real1(uint32_t v) {\n    return v * (1.0/4294967295.0); \n    /* divided by 2^32-1 */ \n}\n\n/** generates a random number on [0,1]-real-interval */\nstatic inline double genrand_real1(sfmt_t *ctx) {\n    return to_real1(gen_rand32(ctx));\n}\n\n/** generates a random number on [0,1)-real-interval */\nstatic inline double to_real2(uint32_t v) {\n    return v * (1.0/4294967296.0); \n    /* divided by 2^32 */\n}\n\n/** generates a random number on [0,1)-real-interval */\nstatic inline double genrand_real2(sfmt_t *ctx) {\n    return to_real2(gen_rand32(ctx));\n}\n\n/** generates a random number on (0,1)-real-interval */\nstatic inline double to_real3(uint32_t v) {\n    
return (((double)v) + 0.5)*(1.0/4294967296.0); \n    /* divided by 2^32 */\n}\n\n/** generates a random number on (0,1)-real-interval */\nstatic inline double genrand_real3(sfmt_t *ctx) {\n    return to_real3(gen_rand32(ctx));\n}\n/** These real versions are due to Isaku Wada */\n\n/** generates a random number on [0,1) with 53-bit resolution*/\nstatic inline double to_res53(uint64_t v) {\n    return v * (1.0/18446744073709551616.0L);\n}\n\n/** generates a random number on [0,1) with 53-bit resolution from two\n * 32 bit integers */\nstatic inline double to_res53_mix(uint32_t x, uint32_t y) {\n    return to_res53(x | ((uint64_t)y << 32));\n}\n\n/** generates a random number on [0,1) with 53-bit resolution\n */\nstatic inline double genrand_res53(sfmt_t *ctx) {\n    return to_res53(gen_rand64(ctx));\n}\n\n/** generates a random number on [0,1) with 53-bit resolution\n    using 32bit integer.\n */\nstatic inline double genrand_res53_mix(sfmt_t *ctx) {\n    uint32_t x, y;\n\n    x = gen_rand32(ctx);\n    y = gen_rand32(ctx);\n    return to_res53_mix(x, y);\n}\n#endif\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/arena_util.h",
    "content": "static inline unsigned\ndo_arena_create(ssize_t dirty_decay_ms, ssize_t muzzy_decay_ms) {\n\tunsigned arena_ind;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\n\texpect_d_eq(mallctlnametomib(\"arena.0.dirty_decay_ms\", mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL,\n\t    (void *)&dirty_decay_ms, sizeof(dirty_decay_ms)), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n\n\texpect_d_eq(mallctlnametomib(\"arena.0.muzzy_decay_ms\", mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL,\n\t    (void *)&muzzy_decay_ms, sizeof(muzzy_decay_ms)), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n\n\treturn arena_ind;\n}\n\nstatic inline void\ndo_arena_destroy(unsigned arena_ind) {\n\t/* \n\t * For convenience, flush tcache in case there are cached items.\n\t * However not assert success since the tcache may be disabled.\n\t */\n\tmallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.destroy\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nstatic inline void\ndo_epoch(void) {\n\tuint64_t epoch = 1;\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n}\n\nstatic inline void\ndo_purge(unsigned arena_ind) {\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.purge\", mib, &miblen), 
0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nstatic inline void\ndo_decay(unsigned arena_ind) {\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.decay\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nstatic inline uint64_t\nget_arena_npurge_impl(const char *mibname, unsigned arena_ind) {\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(mibname, mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[2] = (size_t)arena_ind;\n\tuint64_t npurge = 0;\n\tsize_t sz = sizeof(npurge);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&npurge, &sz, NULL, 0),\n\t    config_stats ? 
0 : ENOENT, \"Unexpected mallctlbymib() failure\");\n\treturn npurge;\n}\n\nstatic inline uint64_t\nget_arena_dirty_npurge(unsigned arena_ind) {\n\tdo_epoch();\n\treturn get_arena_npurge_impl(\"stats.arenas.0.dirty_npurge\", arena_ind);\n}\n\nstatic inline uint64_t\nget_arena_dirty_purged(unsigned arena_ind) {\n\tdo_epoch();\n\treturn get_arena_npurge_impl(\"stats.arenas.0.dirty_purged\", arena_ind);\n}\n\nstatic inline uint64_t\nget_arena_muzzy_npurge(unsigned arena_ind) {\n\tdo_epoch();\n\treturn get_arena_npurge_impl(\"stats.arenas.0.muzzy_npurge\", arena_ind);\n}\n\nstatic inline uint64_t\nget_arena_npurge(unsigned arena_ind) {\n\tdo_epoch();\n\treturn get_arena_npurge_impl(\"stats.arenas.0.dirty_npurge\", arena_ind) +\n\t    get_arena_npurge_impl(\"stats.arenas.0.muzzy_npurge\", arena_ind);\n}\n\nstatic inline size_t\nget_arena_pdirty(unsigned arena_ind) {\n\tdo_epoch();\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"stats.arenas.0.pdirty\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[2] = (size_t)arena_ind;\n\tsize_t pdirty;\n\tsize_t sz = sizeof(pdirty);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&pdirty, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n\treturn pdirty;\n}\n\nstatic inline size_t\nget_arena_pmuzzy(unsigned arena_ind) {\n\tdo_epoch();\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"stats.arenas.0.pmuzzy\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[2] = (size_t)arena_ind;\n\tsize_t pmuzzy;\n\tsize_t sz = sizeof(pmuzzy);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&pmuzzy, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n\treturn pmuzzy;\n}\n\nstatic inline void *\ndo_mallocx(size_t size, int flags) {\n\tvoid *p = mallocx(size, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\treturn p;\n}\n\nstatic inline 
void\ngenerate_dirty(unsigned arena_ind, size_t size) {\n\tint flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\tvoid *p = do_mallocx(size, flags);\n\tdallocx(p, flags);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/bench.h",
    "content": "static inline void\ntime_func(timedelta_t *timer, uint64_t nwarmup, uint64_t niter,\n    void (*func)(void)) {\n\tuint64_t i;\n\n\tfor (i = 0; i < nwarmup; i++) {\n\t\tfunc();\n\t}\n\ttimer_start(timer);\n\tfor (i = 0; i < niter; i++) {\n\t\tfunc();\n\t}\n\ttimer_stop(timer);\n}\n\n#define FMT_NSECS_BUF_SIZE 100\n/* Print nanoseconds / iter into the buffer \"buf\". */\nstatic inline void\nfmt_nsecs(uint64_t usec, uint64_t iters, char *buf) {\n\tuint64_t nsec = usec * 1000;\n\t/* We'll display 3 digits after the decimal point. */\n\tuint64_t nsec1000 = nsec * 1000;\n\tuint64_t nsecs_per_iter1000 = nsec1000 / iters;\n\tuint64_t intpart = nsecs_per_iter1000 / 1000;\n\tuint64_t fracpart = nsecs_per_iter1000 % 1000;\n\tmalloc_snprintf(buf, FMT_NSECS_BUF_SIZE, \"%\"FMTu64\".%03\"FMTu64, intpart,\n\t    fracpart);\n}\n\nstatic inline void\ncompare_funcs(uint64_t nwarmup, uint64_t niter, const char *name_a,\n    void (*func_a), const char *name_b, void (*func_b)) {\n\ttimedelta_t timer_a, timer_b;\n\tchar ratio_buf[6];\n\tvoid *p;\n\n\tp = mallocx(1, 0);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected mallocx() failure\");\n\t\treturn;\n\t}\n\n\ttime_func(&timer_a, nwarmup, niter, func_a);\n\ttime_func(&timer_b, nwarmup, niter, func_b);\n\n\tuint64_t usec_a = timer_usec(&timer_a);\n\tchar buf_a[FMT_NSECS_BUF_SIZE];\n\tfmt_nsecs(usec_a, niter, buf_a);\n\n\tuint64_t usec_b = timer_usec(&timer_b);\n\tchar buf_b[FMT_NSECS_BUF_SIZE];\n\tfmt_nsecs(usec_b, niter, buf_b);\n\n\ttimer_ratio(&timer_a, &timer_b, ratio_buf, sizeof(ratio_buf));\n\tmalloc_printf(\"%\"FMTu64\" iterations, %s=%\"FMTu64\"us (%s ns/iter), \"\n\t    \"%s=%\"FMTu64\"us (%s ns/iter), ratio=1:%s\\n\",\n\t    niter, name_a, usec_a, buf_a, name_b, usec_b, buf_b, ratio_buf);\n\n\tdallocx(p, 0);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/bgthd.h",
    "content": "/*\n * Shared utility for checking if background_thread is enabled, which affects\n * the purging behavior and assumptions in some tests.\n */\n\nstatic inline bool\nis_background_thread_enabled(void) {\n\tbool enabled;\n\tsize_t sz = sizeof(bool);\n\tint ret = mallctl(\"background_thread\", (void *)&enabled, &sz, NULL,0);\n\tif (ret == ENOENT) {\n\t\treturn false;\n\t}\n\tassert_d_eq(ret, 0, \"Unexpected mallctl error\");\n\n\treturn enabled;\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/btalloc.h",
    "content": "/* btalloc() provides a mechanism for allocating via permuted backtraces. */\nvoid\t*btalloc(size_t size, unsigned bits);\n\n#define btalloc_n_proto(n)\t\t\t\t\t\t\\\nvoid\t*btalloc_##n(size_t size, unsigned bits);\nbtalloc_n_proto(0)\nbtalloc_n_proto(1)\n\n#define btalloc_n_gen(n)\t\t\t\t\t\t\\\nvoid *\t\t\t\t\t\t\t\t\t\\\nbtalloc_##n(size_t size, unsigned bits) {\t\t\t\t\\\n\tvoid *p;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tif (bits == 0) {\t\t\t\t\t\t\\\n\t\tp = mallocx(size, 0);\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tswitch (bits & 0x1U) {\t\t\t\t\t\\\n\t\tcase 0:\t\t\t\t\t\t\t\\\n\t\t\tp = (btalloc_0(size, bits >> 1));\t\t\\\n\t\t\tbreak;\t\t\t\t\t\t\\\n\t\tcase 1:\t\t\t\t\t\t\t\\\n\t\t\tp = (btalloc_1(size, bits >> 1));\t\t\\\n\t\t\tbreak;\t\t\t\t\t\t\\\n\t\tdefault: not_reached();\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t/* Intentionally sabotage tail call optimization. */\t\t\\\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\t\t\\\n\treturn p;\t\t\t\t\t\t\t\\\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/extent_hooks.h",
    "content": "/*\n * Boilerplate code used for testing extent hooks via interception and\n * passthrough.\n */\n\nstatic void\t*extent_alloc_hook(extent_hooks_t *extent_hooks, void *new_addr,\n    size_t size, size_t alignment, bool *zero, bool *commit,\n    unsigned arena_ind);\nstatic bool\textent_dalloc_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, bool committed, unsigned arena_ind);\nstatic void\textent_destroy_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, bool committed, unsigned arena_ind);\nstatic bool\textent_commit_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, size_t offset, size_t length, unsigned arena_ind);\nstatic bool\textent_decommit_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, size_t offset, size_t length, unsigned arena_ind);\nstatic bool\textent_purge_lazy_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, size_t offset, size_t length, unsigned arena_ind);\nstatic bool\textent_purge_forced_hook(extent_hooks_t *extent_hooks,\n    void *addr, size_t size, size_t offset, size_t length, unsigned arena_ind);\nstatic bool\textent_split_hook(extent_hooks_t *extent_hooks, void *addr,\n    size_t size, size_t size_a, size_t size_b, bool committed,\n    unsigned arena_ind);\nstatic bool\textent_merge_hook(extent_hooks_t *extent_hooks, void *addr_a,\n    size_t size_a, void *addr_b, size_t size_b, bool committed,\n    unsigned arena_ind);\n\nstatic extent_hooks_t *default_hooks;\nstatic extent_hooks_t hooks = {\n\textent_alloc_hook,\n\textent_dalloc_hook,\n\textent_destroy_hook,\n\textent_commit_hook,\n\textent_decommit_hook,\n\textent_purge_lazy_hook,\n\textent_purge_forced_hook,\n\textent_split_hook,\n\textent_merge_hook\n};\n\n/* Control whether hook functions pass calls through to default hooks. 
*/\nstatic bool try_alloc = true;\nstatic bool try_dalloc = true;\nstatic bool try_destroy = true;\nstatic bool try_commit = true;\nstatic bool try_decommit = true;\nstatic bool try_purge_lazy = true;\nstatic bool try_purge_forced = true;\nstatic bool try_split = true;\nstatic bool try_merge = true;\n\n/* Set to false prior to operations, then introspect after operations. */\nstatic bool called_alloc;\nstatic bool called_dalloc;\nstatic bool called_destroy;\nstatic bool called_commit;\nstatic bool called_decommit;\nstatic bool called_purge_lazy;\nstatic bool called_purge_forced;\nstatic bool called_split;\nstatic bool called_merge;\n\n/* Set to false prior to operations, then introspect after operations. */\nstatic bool did_alloc;\nstatic bool did_dalloc;\nstatic bool did_destroy;\nstatic bool did_commit;\nstatic bool did_decommit;\nstatic bool did_purge_lazy;\nstatic bool did_purge_forced;\nstatic bool did_split;\nstatic bool did_merge;\n\n#if 0\n#  define TRACE_HOOK(fmt, ...) malloc_printf(fmt, __VA_ARGS__)\n#else\n#  define TRACE_HOOK(fmt, ...)\n#endif\n\nstatic void *\nextent_alloc_hook(extent_hooks_t *extent_hooks, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {\n\tvoid *ret;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, new_addr=%p, size=%zu, alignment=%zu, \"\n\t    \"*zero=%s, *commit=%s, arena_ind=%u)\\n\", __func__, extent_hooks,\n\t    new_addr, size, alignment, *zero ?  
\"true\" : \"false\", *commit ?\n\t    \"true\" : \"false\", arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->alloc, extent_alloc_hook,\n\t    \"Wrong hook function\");\n\tcalled_alloc = true;\n\tif (!try_alloc) {\n\t\treturn NULL;\n\t}\n\tret = default_hooks->alloc(default_hooks, new_addr, size, alignment,\n\t    zero, commit, 0);\n\tdid_alloc = (ret != NULL);\n\treturn ret;\n}\n\nstatic bool\nextent_dalloc_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    bool committed, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, committed=%s, \"\n\t    \"arena_ind=%u)\\n\", __func__, extent_hooks, addr, size, committed ?\n\t    \"true\" : \"false\", arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->dalloc, extent_dalloc_hook,\n\t    \"Wrong hook function\");\n\tcalled_dalloc = true;\n\tif (!try_dalloc) {\n\t\treturn true;\n\t}\n\terr = default_hooks->dalloc(default_hooks, addr, size, committed, 0);\n\tdid_dalloc = !err;\n\treturn err;\n}\n\nstatic void\nextent_destroy_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    bool committed, unsigned arena_ind) {\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, committed=%s, \"\n\t    \"arena_ind=%u)\\n\", __func__, extent_hooks, addr, size, committed ?\n\t    \"true\" : \"false\", arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->destroy, extent_destroy_hook,\n\t    \"Wrong hook function\");\n\tcalled_destroy = true;\n\tif (!try_destroy) {\n\t\treturn;\n\t}\n\tdefault_hooks->destroy(default_hooks, addr, size, committed, 0);\n\tdid_destroy = true;\n}\n\nstatic bool\nextent_commit_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t 
offset, size_t length, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, \"\n\t    \"length=%zu, arena_ind=%u)\\n\", __func__, extent_hooks, addr, size,\n\t    offset, length, arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->commit, extent_commit_hook,\n\t    \"Wrong hook function\");\n\tcalled_commit = true;\n\tif (!try_commit) {\n\t\treturn true;\n\t}\n\terr = default_hooks->commit(default_hooks, addr, size, offset, length,\n\t    0);\n\tdid_commit = !err;\n\treturn err;\n}\n\nstatic bool\nextent_decommit_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, \"\n\t    \"length=%zu, arena_ind=%u)\\n\", __func__, extent_hooks, addr, size,\n\t    offset, length, arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->decommit, extent_decommit_hook,\n\t    \"Wrong hook function\");\n\tcalled_decommit = true;\n\tif (!try_decommit) {\n\t\treturn true;\n\t}\n\terr = default_hooks->decommit(default_hooks, addr, size, offset, length,\n\t    0);\n\tdid_decommit = !err;\n\treturn err;\n}\n\nstatic bool\nextent_purge_lazy_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, \"\n\t    \"length=%zu arena_ind=%u)\\n\", __func__, extent_hooks, addr, size,\n\t    offset, length, arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->purge_lazy, extent_purge_lazy_hook,\n\t    \"Wrong hook function\");\n\tcalled_purge_lazy = true;\n\tif 
(!try_purge_lazy) {\n\t\treturn true;\n\t}\n\terr = default_hooks->purge_lazy == NULL ||\n\t    default_hooks->purge_lazy(default_hooks, addr, size, offset, length,\n\t    0);\n\tdid_purge_lazy = !err;\n\treturn err;\n}\n\nstatic bool\nextent_purge_forced_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t offset, size_t length, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, offset=%zu, \"\n\t    \"length=%zu arena_ind=%u)\\n\", __func__, extent_hooks, addr, size,\n\t    offset, length, arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->purge_forced, extent_purge_forced_hook,\n\t    \"Wrong hook function\");\n\tcalled_purge_forced = true;\n\tif (!try_purge_forced) {\n\t\treturn true;\n\t}\n\terr = default_hooks->purge_forced == NULL ||\n\t    default_hooks->purge_forced(default_hooks, addr, size, offset,\n\t    length, 0);\n\tdid_purge_forced = !err;\n\treturn err;\n}\n\nstatic bool\nextent_split_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t size_a, size_t size_b, bool committed, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, size_a=%zu, \"\n\t    \"size_b=%zu, committed=%s, arena_ind=%u)\\n\", __func__, extent_hooks,\n\t    addr, size, size_a, size_b, committed ? 
\"true\" : \"false\",\n\t    arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->split, extent_split_hook,\n\t    \"Wrong hook function\");\n\tcalled_split = true;\n\tif (!try_split) {\n\t\treturn true;\n\t}\n\terr = (default_hooks->split == NULL ||\n\t    default_hooks->split(default_hooks, addr, size, size_a, size_b,\n\t    committed, 0));\n\tdid_split = !err;\n\treturn err;\n}\n\nstatic bool\nextent_merge_hook(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a,\n    void *addr_b, size_t size_b, bool committed, unsigned arena_ind) {\n\tbool err;\n\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr_a=%p, size_a=%zu, addr_b=%p \"\n\t    \"size_b=%zu, committed=%s, arena_ind=%u)\\n\", __func__, extent_hooks,\n\t    addr_a, size_a, addr_b, size_b, committed ? \"true\" : \"false\",\n\t    arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->merge, extent_merge_hook,\n\t    \"Wrong hook function\");\n\texpect_ptr_eq((void *)((uintptr_t)addr_a + size_a), addr_b,\n\t    \"Extents not mergeable\");\n\tcalled_merge = true;\n\tif (!try_merge) {\n\t\treturn true;\n\t}\n\terr = (default_hooks->merge == NULL ||\n\t    default_hooks->merge(default_hooks, addr_a, size_a, addr_b, size_b,\n\t    committed, 0));\n\tdid_merge = !err;\n\treturn err;\n}\n\nstatic void\nextent_hooks_prep(void) {\n\tsize_t sz;\n\n\tsz = sizeof(default_hooks);\n\texpect_d_eq(mallctl(\"arena.0.extent_hooks\", (void *)&default_hooks, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl() error\");\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/jemalloc_test.h.in",
    "content": "#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <limits.h>\n#ifndef SIZE_T_MAX\n#  define SIZE_T_MAX\tSIZE_MAX\n#endif\n#include <stdlib.h>\n#include <stdarg.h>\n#include <stdbool.h>\n#include <errno.h>\n#include <math.h>\n#include <string.h>\n#ifdef _WIN32\n#  include \"msvc_compat/strings.h\"\n#endif\n\n#ifdef _WIN32\n#  include <windows.h>\n#  include \"msvc_compat/windows_extra.h\"\n#else\n#  include <pthread.h>\n#endif\n\n#include \"test/jemalloc_test_defs.h\"\n\n#if defined(JEMALLOC_OSATOMIC)\n#  include <libkern/OSAtomic.h>\n#endif\n\n#if defined(HAVE_ALTIVEC) && !defined(__APPLE__)\n#  include <altivec.h>\n#endif\n#ifdef HAVE_SSE2\n#  include <emmintrin.h>\n#endif\n\n/******************************************************************************/\n/*\n * For unit tests and analytics tests, expose all public and private interfaces.\n */\n#if defined(JEMALLOC_UNIT_TEST) || defined (JEMALLOC_ANALYZE_TEST)\n#  define JEMALLOC_JET\n#  define JEMALLOC_MANGLE\n#  include \"jemalloc/internal/jemalloc_preamble.h\"\n#  include \"jemalloc/internal/jemalloc_internal_includes.h\"\n\n/******************************************************************************/\n/*\n * For integration tests, expose the public jemalloc interfaces, but only\n * expose the minimum necessary internal utility code (to avoid re-implementing\n * essentially identical code within the test infrastructure).\n */\n#elif defined(JEMALLOC_INTEGRATION_TEST) || \\\n    defined(JEMALLOC_INTEGRATION_CPP_TEST)\n#  define JEMALLOC_MANGLE\n#  include \"jemalloc/jemalloc@install_suffix@.h\"\n#  include \"jemalloc/internal/jemalloc_internal_defs.h\"\n#  include \"jemalloc/internal/jemalloc_internal_macros.h\"\n\nstatic const bool config_debug =\n#ifdef JEMALLOC_DEBUG\n    true\n#else\n    false\n#endif\n    ;\n\n#  define JEMALLOC_N(n) @private_namespace@##n\n#  include \"jemalloc/internal/private_namespace.h\"\n#  include \"jemalloc/internal/test_hooks.h\"\n\n/* Hermetic headers. 
*/\n#  include \"jemalloc/internal/assert.h\"\n#  include \"jemalloc/internal/malloc_io.h\"\n#  include \"jemalloc/internal/nstime.h\"\n#  include \"jemalloc/internal/util.h\"\n\n/* Non-hermetic headers. */\n#  include \"jemalloc/internal/qr.h\"\n#  include \"jemalloc/internal/ql.h\"\n\n/******************************************************************************/\n/*\n * For stress tests, expose the public jemalloc interfaces with name mangling\n * so that they can be tested as e.g. malloc() and free().  Also expose the\n * public jemalloc interfaces with jet_ prefixes, so that stress tests can use\n * a separate allocator for their internal data structures.\n */\n#elif defined(JEMALLOC_STRESS_TEST)\n#  include \"jemalloc/jemalloc@install_suffix@.h\"\n\n#  include \"jemalloc/jemalloc_protos_jet.h\"\n\n#  define JEMALLOC_JET\n#  include \"jemalloc/internal/jemalloc_preamble.h\"\n#  include \"jemalloc/internal/jemalloc_internal_includes.h\"\n#  include \"jemalloc/internal/public_unnamespace.h\"\n#  undef JEMALLOC_JET\n\n#  include \"jemalloc/jemalloc_rename.h\"\n#  define JEMALLOC_MANGLE\n#  ifdef JEMALLOC_STRESS_TESTLIB\n#    include \"jemalloc/jemalloc_mangle_jet.h\"\n#  else\n#    include \"jemalloc/jemalloc_mangle.h\"\n#  endif\n\n/******************************************************************************/\n/*\n * This header does dangerous things, the effects of which only test code\n * should be subject to.\n */\n#else\n#  error \"This header cannot be included outside a testing context\"\n#endif\n\n/******************************************************************************/\n/*\n * Common test utilities.\n */\n#include \"test/btalloc.h\"\n#include \"test/math.h\"\n#include \"test/mtx.h\"\n#include \"test/mq.h\"\n#include \"test/sleep.h\"\n#include \"test/test.h\"\n#include \"test/timer.h\"\n#include \"test/thd.h\"\n#include \"test/bgthd.h\"\n#define MEXP 19937\n#include \"test/SFMT.h\"\n\n#ifndef JEMALLOC_HAVE_MALLOC_SIZE\n#define TEST_MALLOC_SIZE 
malloc_usable_size\n#else\n#define TEST_MALLOC_SIZE malloc_size\n#endif\n/******************************************************************************/\n/*\n * Define always-enabled assertion macros, so that test assertions execute even\n * if assertions are disabled in the library code.\n */\n#undef assert\n#undef not_reached\n#undef not_implemented\n#undef expect_not_implemented\n\n#define assert(e) do {\t\t\t\t\t\t\t\\\n\tif (!(e)) {\t\t\t\t\t\t\t\\\n\t\tmalloc_printf(\t\t\t\t\t\t\\\n\t\t    \"<jemalloc>: %s:%d: Failed assertion: \\\"%s\\\"\\n\",\t\\\n\t\t    __FILE__, __LINE__, #e);\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define not_reached() do {\t\t\t\t\t\t\\\n\tmalloc_printf(\t\t\t\t\t\t\t\\\n\t    \"<jemalloc>: %s:%d: Unreachable code reached\\n\",\t\t\\\n\t    __FILE__, __LINE__);\t\t\t\t\t\\\n\tabort();\t\t\t\t\t\t\t\\\n} while (0)\n\n#define not_implemented() do {\t\t\t\t\t\t\\\n\tmalloc_printf(\"<jemalloc>: %s:%d: Not implemented\\n\",\t\t\\\n\t    __FILE__, __LINE__);\t\t\t\t\t\\\n\tabort();\t\t\t\t\t\t\t\\\n} while (0)\n\n#define expect_not_implemented(e) do {\t\t\t\t\t\\\n\tif (!(e)) {\t\t\t\t\t\t\t\\\n\t\tnot_implemented();\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/jemalloc_test_defs.h.in",
    "content": "#include \"jemalloc/internal/jemalloc_internal_defs.h\"\n#include \"jemalloc/internal/jemalloc_internal_decls.h\"\n\n/*\n * For use by SFMT.  configure.ac doesn't actually define HAVE_SSE2 because its\n * dependencies are notoriously unportable in practice.\n */\n#undef HAVE_SSE2\n#undef HAVE_ALTIVEC\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/math.h",
    "content": "/*\n * Compute the natural log of Gamma(x), accurate to 10 decimal places.\n *\n * This implementation is based on:\n *\n *   Pike, M.C., I.D. Hill (1966) Algorithm 291: Logarithm of Gamma function\n *   [S14].  Communications of the ACM 9(9):684.\n */\nstatic inline double\nln_gamma(double x) {\n\tdouble f, z;\n\n\tassert(x > 0.0);\n\n\tif (x < 7.0) {\n\t\tf = 1.0;\n\t\tz = x;\n\t\twhile (z < 7.0) {\n\t\t\tf *= z;\n\t\t\tz += 1.0;\n\t\t}\n\t\tx = z;\n\t\tf = -log(f);\n\t} else {\n\t\tf = 0.0;\n\t}\n\n\tz = 1.0 / (x * x);\n\n\treturn f + (x-0.5) * log(x) - x + 0.918938533204673 +\n\t    (((-0.000595238095238 * z + 0.000793650793651) * z -\n\t    0.002777777777778) * z + 0.083333333333333) / x;\n}\n\n/*\n * Compute the incomplete Gamma ratio for [0..x], where p is the shape\n * parameter, and ln_gamma_p is ln_gamma(p).\n *\n * This implementation is based on:\n *\n *   Bhattacharjee, G.P. (1970) Algorithm AS 32: The incomplete Gamma integral.\n *   Applied Statistics 19:285-287.\n */\nstatic inline double\ni_gamma(double x, double p, double ln_gamma_p) {\n\tdouble acu, factor, oflo, gin, term, rn, a, b, an, dif;\n\tdouble pn[6];\n\tunsigned i;\n\n\tassert(p > 0.0);\n\tassert(x >= 0.0);\n\n\tif (x == 0.0) {\n\t\treturn 0.0;\n\t}\n\n\tacu = 1.0e-10;\n\toflo = 1.0e30;\n\tgin = 0.0;\n\tfactor = exp(p * log(x) - x - ln_gamma_p);\n\n\tif (x <= 1.0 || x < p) {\n\t\t/* Calculation by series expansion. */\n\t\tgin = 1.0;\n\t\tterm = 1.0;\n\t\trn = p;\n\n\t\twhile (true) {\n\t\t\trn += 1.0;\n\t\t\tterm *= x / rn;\n\t\t\tgin += term;\n\t\t\tif (term <= acu) {\n\t\t\t\tgin *= factor / p;\n\t\t\t\treturn gin;\n\t\t\t}\n\t\t}\n\t} else {\n\t\t/* Calculation by continued fraction. 
*/\n\t\ta = 1.0 - p;\n\t\tb = a + x + 1.0;\n\t\tterm = 0.0;\n\t\tpn[0] = 1.0;\n\t\tpn[1] = x;\n\t\tpn[2] = x + 1.0;\n\t\tpn[3] = x * b;\n\t\tgin = pn[2] / pn[3];\n\n\t\twhile (true) {\n\t\t\ta += 1.0;\n\t\t\tb += 2.0;\n\t\t\tterm += 1.0;\n\t\t\tan = a * term;\n\t\t\tfor (i = 0; i < 2; i++) {\n\t\t\t\tpn[i+4] = b * pn[i+2] - an * pn[i];\n\t\t\t}\n\t\t\tif (pn[5] != 0.0) {\n\t\t\t\trn = pn[4] / pn[5];\n\t\t\t\tdif = fabs(gin - rn);\n\t\t\t\tif (dif <= acu && dif <= acu * rn) {\n\t\t\t\t\tgin = 1.0 - factor * gin;\n\t\t\t\t\treturn gin;\n\t\t\t\t}\n\t\t\t\tgin = rn;\n\t\t\t}\n\t\t\tfor (i = 0; i < 4; i++) {\n\t\t\t\tpn[i] = pn[i+2];\n\t\t\t}\n\n\t\t\tif (fabs(pn[4]) >= oflo) {\n\t\t\t\tfor (i = 0; i < 4; i++) {\n\t\t\t\t\tpn[i] /= oflo;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/*\n * Given a value p in [0..1] of the lower tail area of the normal distribution,\n * compute the limit on the definite integral from [-inf..z] that satisfies p,\n * accurate to 16 decimal places.\n *\n * This implementation is based on:\n *\n *   Wichura, M.J. (1988) Algorithm AS 241: The percentage points of the normal\n *   distribution.  Applied Statistics 37(3):477-484.\n */\nstatic inline double\npt_norm(double p) {\n\tdouble q, r, ret;\n\n\tassert(p > 0.0 && p < 1.0);\n\n\tq = p - 0.5;\n\tif (fabs(q) <= 0.425) {\n\t\t/* p close to 1/2. 
*/\n\t\tr = 0.180625 - q * q;\n\t\treturn q * (((((((2.5090809287301226727e3 * r +\n\t\t    3.3430575583588128105e4) * r + 6.7265770927008700853e4) * r\n\t\t    + 4.5921953931549871457e4) * r + 1.3731693765509461125e4) *\n\t\t    r + 1.9715909503065514427e3) * r + 1.3314166789178437745e2)\n\t\t    * r + 3.3871328727963666080e0) /\n\t\t    (((((((5.2264952788528545610e3 * r +\n\t\t    2.8729085735721942674e4) * r + 3.9307895800092710610e4) * r\n\t\t    + 2.1213794301586595867e4) * r + 5.3941960214247511077e3) *\n\t\t    r + 6.8718700749205790830e2) * r + 4.2313330701600911252e1)\n\t\t    * r + 1.0);\n\t} else {\n\t\tif (q < 0.0) {\n\t\t\tr = p;\n\t\t} else {\n\t\t\tr = 1.0 - p;\n\t\t}\n\t\tassert(r > 0.0);\n\n\t\tr = sqrt(-log(r));\n\t\tif (r <= 5.0) {\n\t\t\t/* p neither close to 1/2 nor 0 or 1. */\n\t\t\tr -= 1.6;\n\t\t\tret = ((((((((7.74545014278341407640e-4 * r +\n\t\t\t    2.27238449892691845833e-2) * r +\n\t\t\t    2.41780725177450611770e-1) * r +\n\t\t\t    1.27045825245236838258e0) * r +\n\t\t\t    3.64784832476320460504e0) * r +\n\t\t\t    5.76949722146069140550e0) * r +\n\t\t\t    4.63033784615654529590e0) * r +\n\t\t\t    1.42343711074968357734e0) /\n\t\t\t    (((((((1.05075007164441684324e-9 * r +\n\t\t\t    5.47593808499534494600e-4) * r +\n\t\t\t    1.51986665636164571966e-2)\n\t\t\t    * r + 1.48103976427480074590e-1) * r +\n\t\t\t    6.89767334985100004550e-1) * r +\n\t\t\t    1.67638483018380384940e0) * r +\n\t\t\t    2.05319162663775882187e0) * r + 1.0));\n\t\t} else {\n\t\t\t/* p near 0 or 1. 
*/\n\t\t\tr -= 5.0;\n\t\t\tret = ((((((((2.01033439929228813265e-7 * r +\n\t\t\t    2.71155556874348757815e-5) * r +\n\t\t\t    1.24266094738807843860e-3) * r +\n\t\t\t    2.65321895265761230930e-2) * r +\n\t\t\t    2.96560571828504891230e-1) * r +\n\t\t\t    1.78482653991729133580e0) * r +\n\t\t\t    5.46378491116411436990e0) * r +\n\t\t\t    6.65790464350110377720e0) /\n\t\t\t    (((((((2.04426310338993978564e-15 * r +\n\t\t\t    1.42151175831644588870e-7) * r +\n\t\t\t    1.84631831751005468180e-5) * r +\n\t\t\t    7.86869131145613259100e-4) * r +\n\t\t\t    1.48753612908506148525e-2) * r +\n\t\t\t    1.36929880922735805310e-1) * r +\n\t\t\t    5.99832206555887937690e-1)\n\t\t\t    * r + 1.0));\n\t\t}\n\t\tif (q < 0.0) {\n\t\t\tret = -ret;\n\t\t}\n\t\treturn ret;\n\t}\n}\n\n/*\n * Given a value p in [0..1] of the lower tail area of the Chi^2 distribution\n * with df degrees of freedom, where ln_gamma_df_2 is ln_gamma(df/2.0), compute\n * the upper limit on the definite integral from [0..z] that satisfies p,\n * accurate to 12 decimal places.\n *\n * This implementation is based on:\n *\n *   Best, D.J., D.E. Roberts (1975) Algorithm AS 91: The percentage points of\n *   the Chi^2 distribution.  Applied Statistics 24(3):385-388.\n *\n *   Shea, B.L. (1991) Algorithm AS R85: A remark on AS 91: The percentage\n *   points of the Chi^2 distribution.  Applied Statistics 40(1):233-235.\n */\nstatic inline double\npt_chi2(double p, double df, double ln_gamma_df_2) {\n\tdouble e, aa, xx, c, ch, a, q, p1, p2, t, x, b, s1, s2, s3, s4, s5, s6;\n\tunsigned i;\n\n\tassert(p >= 0.0 && p < 1.0);\n\tassert(df > 0.0);\n\n\te = 5.0e-7;\n\taa = 0.6931471805;\n\n\txx = 0.5 * df;\n\tc = xx - 1.0;\n\n\tif (df < -1.24 * log(p)) {\n\t\t/* Starting approximation for small Chi^2. 
*/\n\t\tch = pow(p * xx * exp(ln_gamma_df_2 + xx * aa), 1.0 / xx);\n\t\tif (ch - e < 0.0) {\n\t\t\treturn ch;\n\t\t}\n\t} else {\n\t\tif (df > 0.32) {\n\t\t\tx = pt_norm(p);\n\t\t\t/*\n\t\t\t * Starting approximation using Wilson and Hilferty\n\t\t\t * estimate.\n\t\t\t */\n\t\t\tp1 = 0.222222 / df;\n\t\t\tch = df * pow(x * sqrt(p1) + 1.0 - p1, 3.0);\n\t\t\t/* Starting approximation for p tending to 1. */\n\t\t\tif (ch > 2.2 * df + 6.0) {\n\t\t\t\tch = -2.0 * (log(1.0 - p) - c * log(0.5 * ch) +\n\t\t\t\t    ln_gamma_df_2);\n\t\t\t}\n\t\t} else {\n\t\t\tch = 0.4;\n\t\t\ta = log(1.0 - p);\n\t\t\twhile (true) {\n\t\t\t\tq = ch;\n\t\t\t\tp1 = 1.0 + ch * (4.67 + ch);\n\t\t\t\tp2 = ch * (6.73 + ch * (6.66 + ch));\n\t\t\t\tt = -0.5 + (4.67 + 2.0 * ch) / p1 - (6.73 + ch\n\t\t\t\t    * (13.32 + 3.0 * ch)) / p2;\n\t\t\t\tch -= (1.0 - exp(a + ln_gamma_df_2 + 0.5 * ch +\n\t\t\t\t    c * aa) * p2 / p1) / t;\n\t\t\t\tif (fabs(q / ch - 1.0) - 0.01 <= 0.0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (i = 0; i < 20; i++) {\n\t\t/* Calculation of seven-term Taylor series. 
*/\n\t\tq = ch;\n\t\tp1 = 0.5 * ch;\n\t\tif (p1 < 0.0) {\n\t\t\treturn -1.0;\n\t\t}\n\t\tp2 = p - i_gamma(p1, xx, ln_gamma_df_2);\n\t\tt = p2 * exp(xx * aa + ln_gamma_df_2 + p1 - c * log(ch));\n\t\tb = t / ch;\n\t\ta = 0.5 * t - b * c;\n\t\ts1 = (210.0 + a * (140.0 + a * (105.0 + a * (84.0 + a * (70.0 +\n\t\t    60.0 * a))))) / 420.0;\n\t\ts2 = (420.0 + a * (735.0 + a * (966.0 + a * (1141.0 + 1278.0 *\n\t\t    a)))) / 2520.0;\n\t\ts3 = (210.0 + a * (462.0 + a * (707.0 + 932.0 * a))) / 2520.0;\n\t\ts4 = (252.0 + a * (672.0 + 1182.0 * a) + c * (294.0 + a *\n\t\t    (889.0 + 1740.0 * a))) / 5040.0;\n\t\ts5 = (84.0 + 264.0 * a + c * (175.0 + 606.0 * a)) / 2520.0;\n\t\ts6 = (120.0 + c * (346.0 + 127.0 * c)) / 5040.0;\n\t\tch += t * (1.0 + 0.5 * t * s1 - b * c * (s1 - b * (s2 - b * (s3\n\t\t    - b * (s4 - b * (s5 - b * s6))))));\n\t\tif (fabs(q / ch - 1.0) <= e) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\treturn ch;\n}\n\n/*\n * Given a value p in [0..1] and Gamma distribution shape and scale parameters,\n * compute the upper limit on the definite integral from [0..z] that satisfies\n * p.\n */\nstatic inline double\npt_gamma(double p, double shape, double scale, double ln_gamma_shape) {\n\treturn pt_chi2(p, shape * 2.0, ln_gamma_shape) * 0.5 * scale;\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/mq.h",
    "content": "#include \"test/sleep.h\"\n\n/*\n * Simple templated message queue implementation that relies on only mutexes for\n * synchronization (which reduces portability issues).  Given the following\n * setup:\n *\n *   typedef struct mq_msg_s mq_msg_t;\n *   struct mq_msg_s {\n *           mq_msg(mq_msg_t) link;\n *           [message data]\n *   };\n *   mq_gen(, mq_, mq_t, mq_msg_t, link)\n *\n * The API is as follows:\n *\n *   bool mq_init(mq_t *mq);\n *   void mq_fini(mq_t *mq);\n *   unsigned mq_count(mq_t *mq);\n *   mq_msg_t *mq_tryget(mq_t *mq);\n *   mq_msg_t *mq_get(mq_t *mq);\n *   void mq_put(mq_t *mq, mq_msg_t *msg);\n *\n * The message queue linkage embedded in each message is to be treated as\n * externally opaque (no need to initialize or clean up externally).  mq_fini()\n * does not perform any cleanup of messages, since it knows nothing of their\n * payloads.\n */\n#define mq_msg(a_mq_msg_type)\tql_elm(a_mq_msg_type)\n\n#define mq_gen(a_attr, a_prefix, a_mq_type, a_mq_msg_type, a_field)\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tmtx_t\t\t\tlock;\t\t\t\t\t\\\n\tql_head(a_mq_msg_type)\tmsgs;\t\t\t\t\t\\\n\tunsigned\t\tcount;\t\t\t\t\t\\\n} a_mq_type;\t\t\t\t\t\t\t\t\\\na_attr bool\t\t\t\t\t\t\t\t\\\na_prefix##init(a_mq_type *mq) {\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tif (mtx_init(&mq->lock)) {\t\t\t\t\t\\\n\t\treturn true;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tql_new(&mq->msgs);\t\t\t\t\t\t\\\n\tmq->count = 0;\t\t\t\t\t\t\t\\\n\treturn false;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##fini(a_mq_type *mq) {\t\t\t\t\t\t\\\n\tmtx_fini(&mq->lock);\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr unsigned\t\t\t\t\t\t\t\t\\\na_prefix##count(a_mq_type *mq) {\t\t\t\t\t\\\n\tunsigned count;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tmtx_lock(&mq->lock);\t\t\t\t\t\t\\\n\tcount = mq->count;\t\t\t\t\t\t\\\n\tmtx_unlock(&mq->lock);\t\t\t\t\t\t\\\n\treturn count;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr 
a_mq_msg_type *\t\t\t\t\t\t\t\\\na_prefix##tryget(a_mq_type *mq) {\t\t\t\t\t\\\n\ta_mq_msg_type *msg;\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tmtx_lock(&mq->lock);\t\t\t\t\t\t\\\n\tmsg = ql_first(&mq->msgs);\t\t\t\t\t\\\n\tif (msg != NULL) {\t\t\t\t\t\t\\\n\t\tql_head_remove(&mq->msgs, a_mq_msg_type, a_field);\t\\\n\t\tmq->count--;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\tmtx_unlock(&mq->lock);\t\t\t\t\t\t\\\n\treturn msg;\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr a_mq_msg_type *\t\t\t\t\t\t\t\\\na_prefix##get(a_mq_type *mq) {\t\t\t\t\t\t\\\n\ta_mq_msg_type *msg;\t\t\t\t\t\t\\\n\tunsigned ns;\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tmsg = a_prefix##tryget(mq);\t\t\t\t\t\\\n\tif (msg != NULL) {\t\t\t\t\t\t\\\n\t\treturn msg;\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tns = 1;\t\t\t\t\t\t\t\t\\\n\twhile (true) {\t\t\t\t\t\t\t\\\n\t\tsleep_ns(ns);\t\t\t\t\t\t\\\n\t\tmsg = a_prefix##tryget(mq);\t\t\t\t\\\n\t\tif (msg != NULL) {\t\t\t\t\t\\\n\t\t\treturn msg;\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t\tif (ns < 1000*1000*1000) {\t\t\t\t\\\n\t\t\t/* Double sleep time, up to max 1 second. */\t\\\n\t\t\tns <<= 1;\t\t\t\t\t\\\n\t\t\tif (ns > 1000*1000*1000) {\t\t\t\\\n\t\t\t\tns = 1000*1000*1000;\t\t\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n}\t\t\t\t\t\t\t\t\t\\\na_attr void\t\t\t\t\t\t\t\t\\\na_prefix##put(a_mq_type *mq, a_mq_msg_type *msg) {\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tmtx_lock(&mq->lock);\t\t\t\t\t\t\\\n\tql_elm_new(msg, a_field);\t\t\t\t\t\\\n\tql_tail_insert(&mq->msgs, msg, a_field);\t\t\t\\\n\tmq->count++;\t\t\t\t\t\t\t\\\n\tmtx_unlock(&mq->lock);\t\t\t\t\t\t\\\n}\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/mtx.h",
    "content": "/*\n * mtx is a slightly simplified version of malloc_mutex.  This code duplication\n * is unfortunate, but there are allocator bootstrapping considerations that\n * would leak into the test infrastructure if malloc_mutex were used directly\n * in tests.\n */\n\ntypedef struct {\n#ifdef _WIN32\n\tCRITICAL_SECTION\tlock;\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n\tos_unfair_lock\t\tlock;\n#else\n\tpthread_mutex_t\t\tlock;\n#endif\n} mtx_t;\n\nbool\tmtx_init(mtx_t *mtx);\nvoid\tmtx_fini(mtx_t *mtx);\nvoid\tmtx_lock(mtx_t *mtx);\nvoid\tmtx_unlock(mtx_t *mtx);\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/nbits.h",
    "content": "#ifndef TEST_NBITS_H\n#define TEST_NBITS_H\n\n/* Interesting bitmap counts to test. */\n\n#define NBITS_TAB \\\n    NB( 1) \\\n    NB( 2) \\\n    NB( 3) \\\n    NB( 4) \\\n    NB( 5) \\\n    NB( 6) \\\n    NB( 7) \\\n    NB( 8) \\\n    NB( 9) \\\n    NB(10) \\\n    NB(11) \\\n    NB(12) \\\n    NB(13) \\\n    NB(14) \\\n    NB(15) \\\n    NB(16) \\\n    NB(17) \\\n    NB(18) \\\n    NB(19) \\\n    NB(20) \\\n    NB(21) \\\n    NB(22) \\\n    NB(23) \\\n    NB(24) \\\n    NB(25) \\\n    NB(26) \\\n    NB(27) \\\n    NB(28) \\\n    NB(29) \\\n    NB(30) \\\n    NB(31) \\\n    NB(32) \\\n    \\\n    NB(33) \\\n    NB(34) \\\n    NB(35) \\\n    NB(36) \\\n    NB(37) \\\n    NB(38) \\\n    NB(39) \\\n    NB(40) \\\n    NB(41) \\\n    NB(42) \\\n    NB(43) \\\n    NB(44) \\\n    NB(45) \\\n    NB(46) \\\n    NB(47) \\\n    NB(48) \\\n    NB(49) \\\n    NB(50) \\\n    NB(51) \\\n    NB(52) \\\n    NB(53) \\\n    NB(54) \\\n    NB(55) \\\n    NB(56) \\\n    NB(57) \\\n    NB(58) \\\n    NB(59) \\\n    NB(60) \\\n    NB(61) \\\n    NB(62) \\\n    NB(63) \\\n    NB(64) \\\n    NB(65) \\\n    NB(66) \\\n    NB(67) \\\n    \\\n    NB(126) \\\n    NB(127) \\\n    NB(128) \\\n    NB(129) \\\n    NB(130) \\\n    \\\n    NB(254) \\\n    NB(255) \\\n    NB(256) \\\n    NB(257) \\\n    NB(258) \\\n    \\\n    NB(510) \\\n    NB(511) \\\n    NB(512) \\\n    NB(513) \\\n    NB(514) \\\n    \\\n    NB(1022) \\\n    NB(1023) \\\n    NB(1024) \\\n    NB(1025) \\\n    NB(1026) \\\n    \\\n    NB(2048) \\\n    \\\n    NB(4094) \\\n    NB(4095) \\\n    NB(4096) \\\n    NB(4097) \\\n    NB(4098) \\\n    \\\n    NB(8192) \\\n    NB(16384)\n\n#endif /* TEST_NBITS_H */\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/san.h",
    "content": "#if defined(JEMALLOC_UAF_DETECTION) || defined(JEMALLOC_DEBUG)\n#  define TEST_SAN_UAF_ALIGN_ENABLE \"lg_san_uaf_align:12\"\n#  define TEST_SAN_UAF_ALIGN_DISABLE \"lg_san_uaf_align:-1\"\n#else\n#  define TEST_SAN_UAF_ALIGN_ENABLE \"\"\n#  define TEST_SAN_UAF_ALIGN_DISABLE \"\"\n#endif\n\nstatic inline bool\nextent_is_guarded(tsdn_t *tsdn, void *ptr) {\n\tedata_t *edata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\treturn edata_guarded_get(edata);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/sleep.h",
    "content": "void sleep_ns(unsigned ns);\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/test.h",
    "content": "#define ASSERT_BUFSIZE\t256\n\n#define verify_cmp(may_abort, t, a, b, cmp, neg_cmp, pri, ...) do {\t\\\n\tconst t a_ = (a);\t\t\t\t\t\t\\\n\tconst t b_ = (b);\t\t\t\t\t\t\\\n\tif (!(a_ cmp b_)) {\t\t\t\t\t\t\\\n\t\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tchar message[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\\\n\t\t    \"%s:%s:%d: Failed assertion: \"\t\t\t\\\n\t\t    \"(%s) \" #cmp \" (%s) --> \"\t\t\t\t\\\n\t\t    \"%\" pri \" \" #neg_cmp \" %\" pri \": \",\t\t\t\\\n\t\t    __func__, __FILE__, __LINE__,\t\t\t\\\n\t\t    #a, #b, a_, b_);\t\t\t\t\t\\\n\t\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\\\n\t\tif (may_abort) {\t\t\t\t\t\\\n\t\t\tabort();\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tp_test_fail(prefix, message);\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define expect_cmp(t, a, b, cmp, neg_cmp, pri, ...) verify_cmp(false,\t\\\n    t, a, b, cmp, neg_cmp, pri, __VA_ARGS__)\n\n#define expect_ptr_eq(a, b, ...)\texpect_cmp(void *, a, b, ==,\t\\\n    !=, \"p\", __VA_ARGS__)\n#define expect_ptr_ne(a, b, ...)\texpect_cmp(void *, a, b, !=,\t\\\n    ==, \"p\", __VA_ARGS__)\n#define expect_ptr_null(a, ...)\t\texpect_cmp(void *, a, NULL, ==,\t\\\n    !=, \"p\", __VA_ARGS__)\n#define expect_ptr_not_null(a, ...)\texpect_cmp(void *, a, NULL, !=,\t\\\n    ==, \"p\", __VA_ARGS__)\n\n#define expect_c_eq(a, b, ...)\texpect_cmp(char, a, b, ==, !=, \"c\", __VA_ARGS__)\n#define expect_c_ne(a, b, ...)\texpect_cmp(char, a, b, !=, ==, \"c\", __VA_ARGS__)\n#define expect_c_lt(a, b, ...)\texpect_cmp(char, a, b, <, >=, \"c\", __VA_ARGS__)\n#define expect_c_le(a, b, ...)\texpect_cmp(char, a, b, <=, >, \"c\", __VA_ARGS__)\n#define expect_c_ge(a, b, ...)\texpect_cmp(char, a, b, >=, <, \"c\", __VA_ARGS__)\n#define expect_c_gt(a, b, ...)\texpect_cmp(char, a, b, >, <=, \"c\", __VA_ARGS__)\n\n#define expect_x_eq(a, b, ...)\texpect_cmp(int, a, b, ==, !=, \"#x\", __VA_ARGS__)\n#define 
expect_x_ne(a, b, ...)\texpect_cmp(int, a, b, !=, ==, \"#x\", __VA_ARGS__)\n#define expect_x_lt(a, b, ...)\texpect_cmp(int, a, b, <, >=, \"#x\", __VA_ARGS__)\n#define expect_x_le(a, b, ...)\texpect_cmp(int, a, b, <=, >, \"#x\", __VA_ARGS__)\n#define expect_x_ge(a, b, ...)\texpect_cmp(int, a, b, >=, <, \"#x\", __VA_ARGS__)\n#define expect_x_gt(a, b, ...)\texpect_cmp(int, a, b, >, <=, \"#x\", __VA_ARGS__)\n\n#define expect_d_eq(a, b, ...)\texpect_cmp(int, a, b, ==, !=, \"d\", __VA_ARGS__)\n#define expect_d_ne(a, b, ...)\texpect_cmp(int, a, b, !=, ==, \"d\", __VA_ARGS__)\n#define expect_d_lt(a, b, ...)\texpect_cmp(int, a, b, <, >=, \"d\", __VA_ARGS__)\n#define expect_d_le(a, b, ...)\texpect_cmp(int, a, b, <=, >, \"d\", __VA_ARGS__)\n#define expect_d_ge(a, b, ...)\texpect_cmp(int, a, b, >=, <, \"d\", __VA_ARGS__)\n#define expect_d_gt(a, b, ...)\texpect_cmp(int, a, b, >, <=, \"d\", __VA_ARGS__)\n\n#define expect_u_eq(a, b, ...)\texpect_cmp(int, a, b, ==, !=, \"u\", __VA_ARGS__)\n#define expect_u_ne(a, b, ...)\texpect_cmp(int, a, b, !=, ==, \"u\", __VA_ARGS__)\n#define expect_u_lt(a, b, ...)\texpect_cmp(int, a, b, <, >=, \"u\", __VA_ARGS__)\n#define expect_u_le(a, b, ...)\texpect_cmp(int, a, b, <=, >, \"u\", __VA_ARGS__)\n#define expect_u_ge(a, b, ...)\texpect_cmp(int, a, b, >=, <, \"u\", __VA_ARGS__)\n#define expect_u_gt(a, b, ...)\texpect_cmp(int, a, b, >, <=, \"u\", __VA_ARGS__)\n\n#define expect_ld_eq(a, b, ...)\texpect_cmp(long, a, b, ==,\t\\\n    !=, \"ld\", __VA_ARGS__)\n#define expect_ld_ne(a, b, ...)\texpect_cmp(long, a, b, !=,\t\\\n    ==, \"ld\", __VA_ARGS__)\n#define expect_ld_lt(a, b, ...)\texpect_cmp(long, a, b, <,\t\\\n    >=, \"ld\", __VA_ARGS__)\n#define expect_ld_le(a, b, ...)\texpect_cmp(long, a, b, <=,\t\\\n    >, \"ld\", __VA_ARGS__)\n#define expect_ld_ge(a, b, ...)\texpect_cmp(long, a, b, >=,\t\\\n    <, \"ld\", __VA_ARGS__)\n#define expect_ld_gt(a, b, ...)\texpect_cmp(long, a, b, >,\t\\\n    <=, \"ld\", __VA_ARGS__)\n\n#define expect_lu_eq(a, b, 
...)\texpect_cmp(unsigned long,\t\\\n    a, b, ==, !=, \"lu\", __VA_ARGS__)\n#define expect_lu_ne(a, b, ...)\texpect_cmp(unsigned long,\t\\\n    a, b, !=, ==, \"lu\", __VA_ARGS__)\n#define expect_lu_lt(a, b, ...)\texpect_cmp(unsigned long,\t\\\n    a, b, <, >=, \"lu\", __VA_ARGS__)\n#define expect_lu_le(a, b, ...)\texpect_cmp(unsigned long,\t\\\n    a, b, <=, >, \"lu\", __VA_ARGS__)\n#define expect_lu_ge(a, b, ...)\texpect_cmp(unsigned long,\t\\\n    a, b, >=, <, \"lu\", __VA_ARGS__)\n#define expect_lu_gt(a, b, ...)\texpect_cmp(unsigned long,\t\\\n    a, b, >, <=, \"lu\", __VA_ARGS__)\n\n#define expect_qd_eq(a, b, ...)\texpect_cmp(long long, a, b, ==,\t\\\n    !=, \"qd\", __VA_ARGS__)\n#define expect_qd_ne(a, b, ...)\texpect_cmp(long long, a, b, !=,\t\\\n    ==, \"qd\", __VA_ARGS__)\n#define expect_qd_lt(a, b, ...)\texpect_cmp(long long, a, b, <,\t\\\n    >=, \"qd\", __VA_ARGS__)\n#define expect_qd_le(a, b, ...)\texpect_cmp(long long, a, b, <=,\t\\\n    >, \"qd\", __VA_ARGS__)\n#define expect_qd_ge(a, b, ...)\texpect_cmp(long long, a, b, >=,\t\\\n    <, \"qd\", __VA_ARGS__)\n#define expect_qd_gt(a, b, ...)\texpect_cmp(long long, a, b, >,\t\\\n    <=, \"qd\", __VA_ARGS__)\n\n#define expect_qu_eq(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, ==, !=, \"qu\", __VA_ARGS__)\n#define expect_qu_ne(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, !=, ==, \"qu\", __VA_ARGS__)\n#define expect_qu_lt(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, <, >=, \"qu\", __VA_ARGS__)\n#define expect_qu_le(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, <=, >, \"qu\", __VA_ARGS__)\n#define expect_qu_ge(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, >=, <, \"qu\", __VA_ARGS__)\n#define expect_qu_gt(a, b, ...)\texpect_cmp(unsigned long long,\t\\\n    a, b, >, <=, \"qu\", __VA_ARGS__)\n\n#define expect_jd_eq(a, b, ...)\texpect_cmp(intmax_t, a, b, ==,\t\\\n    !=, \"jd\", __VA_ARGS__)\n#define expect_jd_ne(a, b, ...)\texpect_cmp(intmax_t, 
a, b, !=,\t\\\n    ==, \"jd\", __VA_ARGS__)\n#define expect_jd_lt(a, b, ...)\texpect_cmp(intmax_t, a, b, <,\t\\\n    >=, \"jd\", __VA_ARGS__)\n#define expect_jd_le(a, b, ...)\texpect_cmp(intmax_t, a, b, <=,\t\\\n    >, \"jd\", __VA_ARGS__)\n#define expect_jd_ge(a, b, ...)\texpect_cmp(intmax_t, a, b, >=,\t\\\n    <, \"jd\", __VA_ARGS__)\n#define expect_jd_gt(a, b, ...)\texpect_cmp(intmax_t, a, b, >,\t\\\n    <=, \"jd\", __VA_ARGS__)\n\n#define expect_ju_eq(a, b, ...)\texpect_cmp(uintmax_t, a, b, ==,\t\\\n    !=, \"ju\", __VA_ARGS__)\n#define expect_ju_ne(a, b, ...)\texpect_cmp(uintmax_t, a, b, !=,\t\\\n    ==, \"ju\", __VA_ARGS__)\n#define expect_ju_lt(a, b, ...)\texpect_cmp(uintmax_t, a, b, <,\t\\\n    >=, \"ju\", __VA_ARGS__)\n#define expect_ju_le(a, b, ...)\texpect_cmp(uintmax_t, a, b, <=,\t\\\n    >, \"ju\", __VA_ARGS__)\n#define expect_ju_ge(a, b, ...)\texpect_cmp(uintmax_t, a, b, >=,\t\\\n    <, \"ju\", __VA_ARGS__)\n#define expect_ju_gt(a, b, ...)\texpect_cmp(uintmax_t, a, b, >,\t\\\n    <=, \"ju\", __VA_ARGS__)\n\n#define expect_zd_eq(a, b, ...)\texpect_cmp(ssize_t, a, b, ==,\t\\\n    !=, \"zd\", __VA_ARGS__)\n#define expect_zd_ne(a, b, ...)\texpect_cmp(ssize_t, a, b, !=,\t\\\n    ==, \"zd\", __VA_ARGS__)\n#define expect_zd_lt(a, b, ...)\texpect_cmp(ssize_t, a, b, <,\t\\\n    >=, \"zd\", __VA_ARGS__)\n#define expect_zd_le(a, b, ...)\texpect_cmp(ssize_t, a, b, <=,\t\\\n    >, \"zd\", __VA_ARGS__)\n#define expect_zd_ge(a, b, ...)\texpect_cmp(ssize_t, a, b, >=,\t\\\n    <, \"zd\", __VA_ARGS__)\n#define expect_zd_gt(a, b, ...)\texpect_cmp(ssize_t, a, b, >,\t\\\n    <=, \"zd\", __VA_ARGS__)\n\n#define expect_zu_eq(a, b, ...)\texpect_cmp(size_t, a, b, ==,\t\\\n    !=, \"zu\", __VA_ARGS__)\n#define expect_zu_ne(a, b, ...)\texpect_cmp(size_t, a, b, !=,\t\\\n    ==, \"zu\", __VA_ARGS__)\n#define expect_zu_lt(a, b, ...)\texpect_cmp(size_t, a, b, <,\t\\\n    >=, \"zu\", __VA_ARGS__)\n#define expect_zu_le(a, b, ...)\texpect_cmp(size_t, a, b, <=,\t\\\n    >, \"zu\", 
__VA_ARGS__)\n#define expect_zu_ge(a, b, ...)\texpect_cmp(size_t, a, b, >=,\t\\\n    <, \"zu\", __VA_ARGS__)\n#define expect_zu_gt(a, b, ...)\texpect_cmp(size_t, a, b, >,\t\\\n    <=, \"zu\", __VA_ARGS__)\n\n#define expect_d32_eq(a, b, ...)\texpect_cmp(int32_t, a, b, ==,\t\\\n    !=, FMTd32, __VA_ARGS__)\n#define expect_d32_ne(a, b, ...)\texpect_cmp(int32_t, a, b, !=,\t\\\n    ==, FMTd32, __VA_ARGS__)\n#define expect_d32_lt(a, b, ...)\texpect_cmp(int32_t, a, b, <,\t\\\n    >=, FMTd32, __VA_ARGS__)\n#define expect_d32_le(a, b, ...)\texpect_cmp(int32_t, a, b, <=,\t\\\n    >, FMTd32, __VA_ARGS__)\n#define expect_d32_ge(a, b, ...)\texpect_cmp(int32_t, a, b, >=,\t\\\n    <, FMTd32, __VA_ARGS__)\n#define expect_d32_gt(a, b, ...)\texpect_cmp(int32_t, a, b, >,\t\\\n    <=, FMTd32, __VA_ARGS__)\n\n#define expect_u32_eq(a, b, ...)\texpect_cmp(uint32_t, a, b, ==,\t\\\n    !=, FMTu32, __VA_ARGS__)\n#define expect_u32_ne(a, b, ...)\texpect_cmp(uint32_t, a, b, !=,\t\\\n    ==, FMTu32, __VA_ARGS__)\n#define expect_u32_lt(a, b, ...)\texpect_cmp(uint32_t, a, b, <,\t\\\n    >=, FMTu32, __VA_ARGS__)\n#define expect_u32_le(a, b, ...)\texpect_cmp(uint32_t, a, b, <=,\t\\\n    >, FMTu32, __VA_ARGS__)\n#define expect_u32_ge(a, b, ...)\texpect_cmp(uint32_t, a, b, >=,\t\\\n    <, FMTu32, __VA_ARGS__)\n#define expect_u32_gt(a, b, ...)\texpect_cmp(uint32_t, a, b, >,\t\\\n    <=, FMTu32, __VA_ARGS__)\n\n#define expect_d64_eq(a, b, ...)\texpect_cmp(int64_t, a, b, ==,\t\\\n    !=, FMTd64, __VA_ARGS__)\n#define expect_d64_ne(a, b, ...)\texpect_cmp(int64_t, a, b, !=,\t\\\n    ==, FMTd64, __VA_ARGS__)\n#define expect_d64_lt(a, b, ...)\texpect_cmp(int64_t, a, b, <,\t\\\n    >=, FMTd64, __VA_ARGS__)\n#define expect_d64_le(a, b, ...)\texpect_cmp(int64_t, a, b, <=,\t\\\n    >, FMTd64, __VA_ARGS__)\n#define expect_d64_ge(a, b, ...)\texpect_cmp(int64_t, a, b, >=,\t\\\n    <, FMTd64, __VA_ARGS__)\n#define expect_d64_gt(a, b, ...)\texpect_cmp(int64_t, a, b, >,\t\\\n    <=, FMTd64, __VA_ARGS__)\n\n#define 
expect_u64_eq(a, b, ...)\texpect_cmp(uint64_t, a, b, ==,\t\\\n    !=, FMTu64, __VA_ARGS__)\n#define expect_u64_ne(a, b, ...)\texpect_cmp(uint64_t, a, b, !=,\t\\\n    ==, FMTu64, __VA_ARGS__)\n#define expect_u64_lt(a, b, ...)\texpect_cmp(uint64_t, a, b, <,\t\\\n    >=, FMTu64, __VA_ARGS__)\n#define expect_u64_le(a, b, ...)\texpect_cmp(uint64_t, a, b, <=,\t\\\n    >, FMTu64, __VA_ARGS__)\n#define expect_u64_ge(a, b, ...)\texpect_cmp(uint64_t, a, b, >=,\t\\\n    <, FMTu64, __VA_ARGS__)\n#define expect_u64_gt(a, b, ...)\texpect_cmp(uint64_t, a, b, >,\t\\\n    <=, FMTu64, __VA_ARGS__)\n\n#define verify_b_eq(may_abort, a, b, ...) do {\t\t\t\t\\\n\tbool a_ = (a);\t\t\t\t\t\t\t\\\n\tbool b_ = (b);\t\t\t\t\t\t\t\\\n\tif (!(a_ == b_)) {\t\t\t\t\t\t\\\n\t\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tchar message[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\\\n\t\t    \"%s:%s:%d: Failed assertion: \"\t\t\t\\\n\t\t    \"(%s) == (%s) --> %s != %s: \",\t\t\t\\\n\t\t    __func__, __FILE__, __LINE__,\t\t\t\\\n\t\t    #a, #b, a_ ? \"true\" : \"false\",\t\t\t\\\n\t\t    b_ ? \"true\" : \"false\");\t\t\t\t\\\n\t\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\\\n\t\tif (may_abort) {\t\t\t\t\t\\\n\t\t\tabort();\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tp_test_fail(prefix, message);\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define verify_b_ne(may_abort, a, b, ...) do {\t\t\t\t\\\n\tbool a_ = (a);\t\t\t\t\t\t\t\\\n\tbool b_ = (b);\t\t\t\t\t\t\t\\\n\tif (!(a_ != b_)) {\t\t\t\t\t\t\\\n\t\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tchar message[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\\\n\t\t    \"%s:%s:%d: Failed assertion: \"\t\t\t\\\n\t\t    \"(%s) != (%s) --> %s == %s: \",\t\t\t\\\n\t\t    __func__, __FILE__, __LINE__,\t\t\t\\\n\t\t    #a, #b, a_ ? \"true\" : \"false\",\t\t\t\\\n\t\t    b_ ? 
\"true\" : \"false\");\t\t\t\t\\\n\t\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\\\n\t\tif (may_abort) {\t\t\t\t\t\\\n\t\t\tabort();\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tp_test_fail(prefix, message);\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define expect_b_eq(a, b, ...)\tverify_b_eq(false, a, b, __VA_ARGS__)\n#define expect_b_ne(a, b, ...)\tverify_b_ne(false, a, b, __VA_ARGS__)\n\n#define expect_true(a, ...)\texpect_b_eq(a, true, __VA_ARGS__)\n#define expect_false(a, ...)\texpect_b_eq(a, false, __VA_ARGS__)\n\n#define verify_str_eq(may_abort, a, b, ...) do {\t\t\t\\\n\tif (strcmp((a), (b))) {\t\t\t\t\t\t\\\n\t\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tchar message[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\\\n\t\t    \"%s:%s:%d: Failed assertion: \"\t\t\t\\\n\t\t    \"(%s) same as (%s) --> \"\t\t\t\t\\\n\t\t    \"\\\"%s\\\" differs from \\\"%s\\\": \",\t\t\t\\\n\t\t    __func__, __FILE__, __LINE__, #a, #b, a, b);\t\\\n\t\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\\\n\t\tif (may_abort) {\t\t\t\t\t\\\n\t\t\tabort();\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tp_test_fail(prefix, message);\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define verify_str_ne(may_abort, a, b, ...) 
do {\t\t\t\\\n\tif (!strcmp((a), (b))) {\t\t\t\t\t\\\n\t\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tchar message[ASSERT_BUFSIZE];\t\t\t\t\\\n\t\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\\\n\t\t    \"%s:%s:%d: Failed assertion: \"\t\t\t\\\n\t\t    \"(%s) differs from (%s) --> \"\t\t\t\\\n\t\t    \"\\\"%s\\\" same as \\\"%s\\\": \",\t\t\t\t\\\n\t\t    __func__, __FILE__, __LINE__, #a, #b, a, b);\t\\\n\t\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\\\n\t\tif (may_abort) {\t\t\t\t\t\\\n\t\t\tabort();\t\t\t\t\t\\\n\t\t} else {\t\t\t\t\t\t\\\n\t\t\tp_test_fail(prefix, message);\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define expect_str_eq(a, b, ...) verify_str_eq(false, a, b, __VA_ARGS__)\n#define expect_str_ne(a, b, ...) verify_str_ne(false, a, b, __VA_ARGS__)\n\n#define verify_not_reached(may_abort, ...) do {\t\t\t\t\\\n\tchar prefix[ASSERT_BUFSIZE];\t\t\t\t\t\\\n\tchar message[ASSERT_BUFSIZE];\t\t\t\t\t\\\n\tmalloc_snprintf(prefix, sizeof(prefix),\t\t\t\t\\\n\t    \"%s:%s:%d: Unreachable code reached: \",\t\t\t\\\n\t    __func__, __FILE__, __LINE__);\t\t\t\t\\\n\tmalloc_snprintf(message, sizeof(message), __VA_ARGS__);\t\t\\\n\tif (may_abort) {\t\t\t\t\t\t\\\n\t\tabort();\t\t\t\t\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\tp_test_fail(prefix, message);\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define expect_not_reached(...) verify_not_reached(false, __VA_ARGS__)\n\n#define assert_cmp(t, a, b, cmp, neg_cmp, pri, ...) 
verify_cmp(true,\t\\\n    t, a, b, cmp, neg_cmp, pri, __VA_ARGS__)\n\n#define assert_ptr_eq(a, b, ...)\tassert_cmp(void *, a, b, ==,\t\\\n    !=, \"p\", __VA_ARGS__)\n#define assert_ptr_ne(a, b, ...)\tassert_cmp(void *, a, b, !=,\t\\\n    ==, \"p\", __VA_ARGS__)\n#define assert_ptr_null(a, ...)\t\tassert_cmp(void *, a, NULL, ==,\t\\\n    !=, \"p\", __VA_ARGS__)\n#define assert_ptr_not_null(a, ...)\tassert_cmp(void *, a, NULL, !=,\t\\\n    ==, \"p\", __VA_ARGS__)\n\n#define assert_c_eq(a, b, ...)\tassert_cmp(char, a, b, ==, !=, \"c\", __VA_ARGS__)\n#define assert_c_ne(a, b, ...)\tassert_cmp(char, a, b, !=, ==, \"c\", __VA_ARGS__)\n#define assert_c_lt(a, b, ...)\tassert_cmp(char, a, b, <, >=, \"c\", __VA_ARGS__)\n#define assert_c_le(a, b, ...)\tassert_cmp(char, a, b, <=, >, \"c\", __VA_ARGS__)\n#define assert_c_ge(a, b, ...)\tassert_cmp(char, a, b, >=, <, \"c\", __VA_ARGS__)\n#define assert_c_gt(a, b, ...)\tassert_cmp(char, a, b, >, <=, \"c\", __VA_ARGS__)\n\n#define assert_x_eq(a, b, ...)\tassert_cmp(int, a, b, ==, !=, \"#x\", __VA_ARGS__)\n#define assert_x_ne(a, b, ...)\tassert_cmp(int, a, b, !=, ==, \"#x\", __VA_ARGS__)\n#define assert_x_lt(a, b, ...)\tassert_cmp(int, a, b, <, >=, \"#x\", __VA_ARGS__)\n#define assert_x_le(a, b, ...)\tassert_cmp(int, a, b, <=, >, \"#x\", __VA_ARGS__)\n#define assert_x_ge(a, b, ...)\tassert_cmp(int, a, b, >=, <, \"#x\", __VA_ARGS__)\n#define assert_x_gt(a, b, ...)\tassert_cmp(int, a, b, >, <=, \"#x\", __VA_ARGS__)\n\n#define assert_d_eq(a, b, ...)\tassert_cmp(int, a, b, ==, !=, \"d\", __VA_ARGS__)\n#define assert_d_ne(a, b, ...)\tassert_cmp(int, a, b, !=, ==, \"d\", __VA_ARGS__)\n#define assert_d_lt(a, b, ...)\tassert_cmp(int, a, b, <, >=, \"d\", __VA_ARGS__)\n#define assert_d_le(a, b, ...)\tassert_cmp(int, a, b, <=, >, \"d\", __VA_ARGS__)\n#define assert_d_ge(a, b, ...)\tassert_cmp(int, a, b, >=, <, \"d\", __VA_ARGS__)\n#define assert_d_gt(a, b, ...)\tassert_cmp(int, a, b, >, <=, \"d\", __VA_ARGS__)\n\n#define assert_u_eq(a, b, 
...)\tassert_cmp(int, a, b, ==, !=, \"u\", __VA_ARGS__)\n#define assert_u_ne(a, b, ...)\tassert_cmp(int, a, b, !=, ==, \"u\", __VA_ARGS__)\n#define assert_u_lt(a, b, ...)\tassert_cmp(int, a, b, <, >=, \"u\", __VA_ARGS__)\n#define assert_u_le(a, b, ...)\tassert_cmp(int, a, b, <=, >, \"u\", __VA_ARGS__)\n#define assert_u_ge(a, b, ...)\tassert_cmp(int, a, b, >=, <, \"u\", __VA_ARGS__)\n#define assert_u_gt(a, b, ...)\tassert_cmp(int, a, b, >, <=, \"u\", __VA_ARGS__)\n\n#define assert_ld_eq(a, b, ...)\tassert_cmp(long, a, b, ==,\t\\\n    !=, \"ld\", __VA_ARGS__)\n#define assert_ld_ne(a, b, ...)\tassert_cmp(long, a, b, !=,\t\\\n    ==, \"ld\", __VA_ARGS__)\n#define assert_ld_lt(a, b, ...)\tassert_cmp(long, a, b, <,\t\\\n    >=, \"ld\", __VA_ARGS__)\n#define assert_ld_le(a, b, ...)\tassert_cmp(long, a, b, <=,\t\\\n    >, \"ld\", __VA_ARGS__)\n#define assert_ld_ge(a, b, ...)\tassert_cmp(long, a, b, >=,\t\\\n    <, \"ld\", __VA_ARGS__)\n#define assert_ld_gt(a, b, ...)\tassert_cmp(long, a, b, >,\t\\\n    <=, \"ld\", __VA_ARGS__)\n\n#define assert_lu_eq(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, ==, !=, \"lu\", __VA_ARGS__)\n#define assert_lu_ne(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, !=, ==, \"lu\", __VA_ARGS__)\n#define assert_lu_lt(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, <, >=, \"lu\", __VA_ARGS__)\n#define assert_lu_le(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, <=, >, \"lu\", __VA_ARGS__)\n#define assert_lu_ge(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, >=, <, \"lu\", __VA_ARGS__)\n#define assert_lu_gt(a, b, ...)\tassert_cmp(unsigned long,\t\\\n    a, b, >, <=, \"lu\", __VA_ARGS__)\n\n#define assert_qd_eq(a, b, ...)\tassert_cmp(long long, a, b, ==,\t\\\n    !=, \"qd\", __VA_ARGS__)\n#define assert_qd_ne(a, b, ...)\tassert_cmp(long long, a, b, !=,\t\\\n    ==, \"qd\", __VA_ARGS__)\n#define assert_qd_lt(a, b, ...)\tassert_cmp(long long, a, b, <,\t\\\n    >=, \"qd\", __VA_ARGS__)\n#define assert_qd_le(a, b, 
...)\tassert_cmp(long long, a, b, <=,\t\\\n    >, \"qd\", __VA_ARGS__)\n#define assert_qd_ge(a, b, ...)\tassert_cmp(long long, a, b, >=,\t\\\n    <, \"qd\", __VA_ARGS__)\n#define assert_qd_gt(a, b, ...)\tassert_cmp(long long, a, b, >,\t\\\n    <=, \"qd\", __VA_ARGS__)\n\n#define assert_qu_eq(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, ==, !=, \"qu\", __VA_ARGS__)\n#define assert_qu_ne(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, !=, ==, \"qu\", __VA_ARGS__)\n#define assert_qu_lt(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, <, >=, \"qu\", __VA_ARGS__)\n#define assert_qu_le(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, <=, >, \"qu\", __VA_ARGS__)\n#define assert_qu_ge(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, >=, <, \"qu\", __VA_ARGS__)\n#define assert_qu_gt(a, b, ...)\tassert_cmp(unsigned long long,\t\\\n    a, b, >, <=, \"qu\", __VA_ARGS__)\n\n#define assert_jd_eq(a, b, ...)\tassert_cmp(intmax_t, a, b, ==,\t\\\n    !=, \"jd\", __VA_ARGS__)\n#define assert_jd_ne(a, b, ...)\tassert_cmp(intmax_t, a, b, !=,\t\\\n    ==, \"jd\", __VA_ARGS__)\n#define assert_jd_lt(a, b, ...)\tassert_cmp(intmax_t, a, b, <,\t\\\n    >=, \"jd\", __VA_ARGS__)\n#define assert_jd_le(a, b, ...)\tassert_cmp(intmax_t, a, b, <=,\t\\\n    >, \"jd\", __VA_ARGS__)\n#define assert_jd_ge(a, b, ...)\tassert_cmp(intmax_t, a, b, >=,\t\\\n    <, \"jd\", __VA_ARGS__)\n#define assert_jd_gt(a, b, ...)\tassert_cmp(intmax_t, a, b, >,\t\\\n    <=, \"jd\", __VA_ARGS__)\n\n#define assert_ju_eq(a, b, ...)\tassert_cmp(uintmax_t, a, b, ==,\t\\\n    !=, \"ju\", __VA_ARGS__)\n#define assert_ju_ne(a, b, ...)\tassert_cmp(uintmax_t, a, b, !=,\t\\\n    ==, \"ju\", __VA_ARGS__)\n#define assert_ju_lt(a, b, ...)\tassert_cmp(uintmax_t, a, b, <,\t\\\n    >=, \"ju\", __VA_ARGS__)\n#define assert_ju_le(a, b, ...)\tassert_cmp(uintmax_t, a, b, <=,\t\\\n    >, \"ju\", __VA_ARGS__)\n#define assert_ju_ge(a, b, ...)\tassert_cmp(uintmax_t, a, b, >=,\t\\\n    <, \"ju\", 
__VA_ARGS__)\n#define assert_ju_gt(a, b, ...)\tassert_cmp(uintmax_t, a, b, >,\t\\\n    <=, \"ju\", __VA_ARGS__)\n\n#define assert_zd_eq(a, b, ...)\tassert_cmp(ssize_t, a, b, ==,\t\\\n    !=, \"zd\", __VA_ARGS__)\n#define assert_zd_ne(a, b, ...)\tassert_cmp(ssize_t, a, b, !=,\t\\\n    ==, \"zd\", __VA_ARGS__)\n#define assert_zd_lt(a, b, ...)\tassert_cmp(ssize_t, a, b, <,\t\\\n    >=, \"zd\", __VA_ARGS__)\n#define assert_zd_le(a, b, ...)\tassert_cmp(ssize_t, a, b, <=,\t\\\n    >, \"zd\", __VA_ARGS__)\n#define assert_zd_ge(a, b, ...)\tassert_cmp(ssize_t, a, b, >=,\t\\\n    <, \"zd\", __VA_ARGS__)\n#define assert_zd_gt(a, b, ...)\tassert_cmp(ssize_t, a, b, >,\t\\\n    <=, \"zd\", __VA_ARGS__)\n\n#define assert_zu_eq(a, b, ...)\tassert_cmp(size_t, a, b, ==,\t\\\n    !=, \"zu\", __VA_ARGS__)\n#define assert_zu_ne(a, b, ...)\tassert_cmp(size_t, a, b, !=,\t\\\n    ==, \"zu\", __VA_ARGS__)\n#define assert_zu_lt(a, b, ...)\tassert_cmp(size_t, a, b, <,\t\\\n    >=, \"zu\", __VA_ARGS__)\n#define assert_zu_le(a, b, ...)\tassert_cmp(size_t, a, b, <=,\t\\\n    >, \"zu\", __VA_ARGS__)\n#define assert_zu_ge(a, b, ...)\tassert_cmp(size_t, a, b, >=,\t\\\n    <, \"zu\", __VA_ARGS__)\n#define assert_zu_gt(a, b, ...)\tassert_cmp(size_t, a, b, >,\t\\\n    <=, \"zu\", __VA_ARGS__)\n\n#define assert_d32_eq(a, b, ...)\tassert_cmp(int32_t, a, b, ==,\t\\\n    !=, FMTd32, __VA_ARGS__)\n#define assert_d32_ne(a, b, ...)\tassert_cmp(int32_t, a, b, !=,\t\\\n    ==, FMTd32, __VA_ARGS__)\n#define assert_d32_lt(a, b, ...)\tassert_cmp(int32_t, a, b, <,\t\\\n    >=, FMTd32, __VA_ARGS__)\n#define assert_d32_le(a, b, ...)\tassert_cmp(int32_t, a, b, <=,\t\\\n    >, FMTd32, __VA_ARGS__)\n#define assert_d32_ge(a, b, ...)\tassert_cmp(int32_t, a, b, >=,\t\\\n    <, FMTd32, __VA_ARGS__)\n#define assert_d32_gt(a, b, ...)\tassert_cmp(int32_t, a, b, >,\t\\\n    <=, FMTd32, __VA_ARGS__)\n\n#define assert_u32_eq(a, b, ...)\tassert_cmp(uint32_t, a, b, ==,\t\\\n    !=, FMTu32, __VA_ARGS__)\n#define assert_u32_ne(a, 
b, ...)\tassert_cmp(uint32_t, a, b, !=,\t\\\n    ==, FMTu32, __VA_ARGS__)\n#define assert_u32_lt(a, b, ...)\tassert_cmp(uint32_t, a, b, <,\t\\\n    >=, FMTu32, __VA_ARGS__)\n#define assert_u32_le(a, b, ...)\tassert_cmp(uint32_t, a, b, <=,\t\\\n    >, FMTu32, __VA_ARGS__)\n#define assert_u32_ge(a, b, ...)\tassert_cmp(uint32_t, a, b, >=,\t\\\n    <, FMTu32, __VA_ARGS__)\n#define assert_u32_gt(a, b, ...)\tassert_cmp(uint32_t, a, b, >,\t\\\n    <=, FMTu32, __VA_ARGS__)\n\n#define assert_d64_eq(a, b, ...)\tassert_cmp(int64_t, a, b, ==,\t\\\n    !=, FMTd64, __VA_ARGS__)\n#define assert_d64_ne(a, b, ...)\tassert_cmp(int64_t, a, b, !=,\t\\\n    ==, FMTd64, __VA_ARGS__)\n#define assert_d64_lt(a, b, ...)\tassert_cmp(int64_t, a, b, <,\t\\\n    >=, FMTd64, __VA_ARGS__)\n#define assert_d64_le(a, b, ...)\tassert_cmp(int64_t, a, b, <=,\t\\\n    >, FMTd64, __VA_ARGS__)\n#define assert_d64_ge(a, b, ...)\tassert_cmp(int64_t, a, b, >=,\t\\\n    <, FMTd64, __VA_ARGS__)\n#define assert_d64_gt(a, b, ...)\tassert_cmp(int64_t, a, b, >,\t\\\n    <=, FMTd64, __VA_ARGS__)\n\n#define assert_u64_eq(a, b, ...)\tassert_cmp(uint64_t, a, b, ==,\t\\\n    !=, FMTu64, __VA_ARGS__)\n#define assert_u64_ne(a, b, ...)\tassert_cmp(uint64_t, a, b, !=,\t\\\n    ==, FMTu64, __VA_ARGS__)\n#define assert_u64_lt(a, b, ...)\tassert_cmp(uint64_t, a, b, <,\t\\\n    >=, FMTu64, __VA_ARGS__)\n#define assert_u64_le(a, b, ...)\tassert_cmp(uint64_t, a, b, <=,\t\\\n    >, FMTu64, __VA_ARGS__)\n#define assert_u64_ge(a, b, ...)\tassert_cmp(uint64_t, a, b, >=,\t\\\n    <, FMTu64, __VA_ARGS__)\n#define assert_u64_gt(a, b, ...)\tassert_cmp(uint64_t, a, b, >,\t\\\n    <=, FMTu64, __VA_ARGS__)\n\n#define assert_b_eq(a, b, ...)\tverify_b_eq(true, a, b, __VA_ARGS__)\n#define assert_b_ne(a, b, ...)\tverify_b_ne(true, a, b, __VA_ARGS__)\n\n#define assert_true(a, ...)\tassert_b_eq(a, true, __VA_ARGS__)\n#define assert_false(a, ...)\tassert_b_eq(a, false, __VA_ARGS__)\n\n#define assert_str_eq(a, b, ...) 
verify_str_eq(true, a, b, __VA_ARGS__)\n#define assert_str_ne(a, b, ...) verify_str_ne(true, a, b, __VA_ARGS__)\n\n#define assert_not_reached(...) verify_not_reached(true, __VA_ARGS__)\n\n/*\n * If this enum changes, corresponding changes in test/test.sh.in are also\n * necessary.\n */\ntypedef enum {\n\ttest_status_pass = 0,\n\ttest_status_skip = 1,\n\ttest_status_fail = 2,\n\n\ttest_status_count = 3\n} test_status_t;\n\ntypedef void (test_t)(void);\n\n#define TEST_BEGIN(f)\t\t\t\t\t\t\t\\\nstatic void\t\t\t\t\t\t\t\t\\\nf(void) {\t\t\t\t\t\t\t\t\\\n\tp_test_init(#f);\n\n#define TEST_END\t\t\t\t\t\t\t\\\n\tgoto label_test_end;\t\t\t\t\t\t\\\nlabel_test_end:\t\t\t\t\t\t\t\t\\\n\tp_test_fini();\t\t\t\t\t\t\t\\\n}\n\n#define test(...)\t\t\t\t\t\t\t\\\n\tp_test(__VA_ARGS__, NULL)\n\n#define test_no_reentrancy(...)\t\t\t\t\t\t\t\\\n\tp_test_no_reentrancy(__VA_ARGS__, NULL)\n\n#define test_no_malloc_init(...)\t\t\t\t\t\\\n\tp_test_no_malloc_init(__VA_ARGS__, NULL)\n\n#define test_skip_if(e) do {\t\t\t\t\t\t\\\n\tif (e) {\t\t\t\t\t\t\t\\\n\t\ttest_skip(\"%s:%s:%d: Test skipped: (%s)\",\t\t\\\n\t\t    __func__, __FILE__, __LINE__, #e);\t\t\t\\\n\t\tgoto label_test_end;\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\nbool test_is_reentrant();\n\nvoid\ttest_skip(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);\nvoid\ttest_fail(const char *format, ...) JEMALLOC_FORMAT_PRINTF(1, 2);\n\n/* For private use by macros. */\ntest_status_t\tp_test(test_t *t, ...);\ntest_status_t\tp_test_no_reentrancy(test_t *t, ...);\ntest_status_t\tp_test_no_malloc_init(test_t *t, ...);\nvoid\tp_test_init(const char *name);\nvoid\tp_test_fini(void);\nvoid\tp_test_fail(const char *prefix, const char *message);\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/thd.h",
    "content": "/* Abstraction layer for threading in tests. */\n#ifdef _WIN32\ntypedef HANDLE thd_t;\n#else\ntypedef pthread_t thd_t;\n#endif\n\nvoid\tthd_create(thd_t *thd, void *(*proc)(void *), void *arg);\nvoid\tthd_join(thd_t thd, void **ret);\n"
  },
  {
    "path": "deps/jemalloc/test/include/test/timer.h",
    "content": "/* Simple timer, for use in benchmark reporting. */\n\ntypedef struct {\n\tnstime_t t0;\n\tnstime_t t1;\n} timedelta_t;\n\nvoid\ttimer_start(timedelta_t *timer);\nvoid\ttimer_stop(timedelta_t *timer);\nuint64_t\ttimer_usec(const timedelta_t *timer);\nvoid\ttimer_ratio(timedelta_t *a, timedelta_t *b, char *buf, size_t buflen);\n"
  },
  {
    "path": "deps/jemalloc/test/integration/MALLOCX_ARENA.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define NTHREADS 10\n\nstatic bool have_dss =\n#ifdef JEMALLOC_DSS\n    true\n#else\n    false\n#endif\n    ;\n\nvoid *\nthd_start(void *arg) {\n\tunsigned thread_ind = (unsigned)(uintptr_t)arg;\n\tunsigned arena_ind;\n\tvoid *p;\n\tsize_t sz;\n\n\tsz = sizeof(arena_ind);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Error in arenas.create\");\n\n\tif (thread_ind % 4 != 3) {\n\t\tsize_t mib[3];\n\t\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\t\tconst char *dss_precs[] = {\"disabled\", \"primary\", \"secondary\"};\n\t\tunsigned prec_ind = thread_ind %\n\t\t    (sizeof(dss_precs)/sizeof(char*));\n\t\tconst char *dss = dss_precs[prec_ind];\n\t\tint expected_err = (have_dss || prec_ind == 0) ? 0 : EFAULT;\n\t\texpect_d_eq(mallctlnametomib(\"arena.0.dss\", mib, &miblen), 0,\n\t\t    \"Error in mallctlnametomib()\");\n\t\tmib[1] = arena_ind;\n\t\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, (void *)&dss,\n\t\t    sizeof(const char *)), expected_err,\n\t\t    \"Error in mallctlbymib()\");\n\t}\n\n\tp = mallocx(1, MALLOCX_ARENA(arena_ind));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tdallocx(p, 0);\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_MALLOCX_ARENA) {\n\tthd_t thds[NTHREADS];\n\tunsigned i;\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_create(&thds[i], thd_start,\n\t\t    (void *)(uintptr_t)i);\n\t}\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_MALLOCX_ARENA);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/aligned_alloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define MAXALIGN (((size_t)1) << 23)\n\n/*\n * On systems which can't merge extents, tests that call this function generate\n * a lot of dirty memory very quickly.  Purging between cycles mitigates\n * potential OOM on e.g. 32-bit Windows.\n */\nstatic void\npurge(void) {\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl error\");\n}\n\nTEST_BEGIN(test_alignment_errors) {\n\tsize_t alignment;\n\tvoid *p;\n\n\talignment = 0;\n\tset_errno(0);\n\tp = aligned_alloc(alignment, 1);\n\texpect_false(p != NULL || get_errno() != EINVAL,\n\t    \"Expected error for invalid alignment %zu\", alignment);\n\n\tfor (alignment = sizeof(size_t); alignment < MAXALIGN;\n\t    alignment <<= 1) {\n\t\tset_errno(0);\n\t\tp = aligned_alloc(alignment + 1, 1);\n\t\texpect_false(p != NULL || get_errno() != EINVAL,\n\t\t    \"Expected error for invalid alignment %zu\",\n\t\t    alignment + 1);\n\t}\n}\nTEST_END\n\n\n/*\n * GCC \"-Walloc-size-larger-than\" warning detects when one of the memory\n * allocation functions is called with a size larger than the maximum size that\n * they support. 
Here we want to explicitly test that the allocation functions\n * do indeed fail properly when this is the case, which triggers the warning.\n * Therefore we disable the warning for these tests.\n */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n\nTEST_BEGIN(test_oom_errors) {\n\tsize_t alignment, size;\n\tvoid *p;\n\n#if LG_SIZEOF_PTR == 3\n\talignment = UINT64_C(0x8000000000000000);\n\tsize      = UINT64_C(0x8000000000000000);\n#else\n\talignment = 0x80000000LU;\n\tsize      = 0x80000000LU;\n#endif\n\tset_errno(0);\n\tp = aligned_alloc(alignment, size);\n\texpect_false(p != NULL || get_errno() != ENOMEM,\n\t    \"Expected error for aligned_alloc(%zu, %zu)\",\n\t    alignment, size);\n\n#if LG_SIZEOF_PTR == 3\n\talignment = UINT64_C(0x4000000000000000);\n\tsize      = UINT64_C(0xc000000000000001);\n#else\n\talignment = 0x40000000LU;\n\tsize      = 0xc0000001LU;\n#endif\n\tset_errno(0);\n\tp = aligned_alloc(alignment, size);\n\texpect_false(p != NULL || get_errno() != ENOMEM,\n\t    \"Expected error for aligned_alloc(%zu, %zu)\",\n\t    alignment, size);\n\n\talignment = 0x10LU;\n#if LG_SIZEOF_PTR == 3\n\tsize = UINT64_C(0xfffffffffffffff0);\n#else\n\tsize = 0xfffffff0LU;\n#endif\n\tset_errno(0);\n\tp = aligned_alloc(alignment, size);\n\texpect_false(p != NULL || get_errno() != ENOMEM,\n\t    \"Expected error for aligned_alloc(&p, %zu, %zu)\",\n\t    alignment, size);\n}\nTEST_END\n\n/* Re-enable the \"-Walloc-size-larger-than=\" warning */\nJEMALLOC_DIAGNOSTIC_POP\n\nTEST_BEGIN(test_alignment_and_size) {\n#define NITER 4\n\tsize_t alignment, size, total;\n\tunsigned i;\n\tvoid *ps[NITER];\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tps[i] = NULL;\n\t}\n\n\tfor (alignment = 8;\n\t    alignment <= MAXALIGN;\n\t    alignment <<= 1) {\n\t\ttotal = 0;\n\t\tfor (size = 1;\n\t\t    size < 3 * alignment && size < (1U << 31);\n\t\t    size += (alignment >> (LG_SIZEOF_PTR-1)) - 1) {\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tps[i] = 
aligned_alloc(alignment, size);\n\t\t\t\tif (ps[i] == NULL) {\n\t\t\t\t\tchar buf[BUFERROR_BUF];\n\n\t\t\t\t\tbuferror(get_errno(), buf, sizeof(buf));\n\t\t\t\t\ttest_fail(\n\t\t\t\t\t    \"Error for alignment=%zu, \"\n\t\t\t\t\t    \"size=%zu (%#zx): %s\",\n\t\t\t\t\t    alignment, size, size, buf);\n\t\t\t\t}\n\t\t\t\ttotal += TEST_MALLOC_SIZE(ps[i]);\n\t\t\t\tif (total >= (MAXALIGN << 1)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tif (ps[i] != NULL) {\n\t\t\t\t\tfree(ps[i]);\n\t\t\t\t\tps[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpurge();\n\t}\n#undef NITER\n}\nTEST_END\n\nTEST_BEGIN(test_zero_alloc) {\n\tvoid *res = aligned_alloc(8, 0);\n\tassert(res);\n\tsize_t usable = TEST_MALLOC_SIZE(res);\n\tassert(usable > 0);\n\tfree(res);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_alignment_errors,\n\t    test_oom_errors,\n\t    test_alignment_and_size,\n\t    test_zero_alloc);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/allocated.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic const bool config_stats =\n#ifdef JEMALLOC_STATS\n    true\n#else\n    false\n#endif\n    ;\n\nvoid *\nthd_start(void *arg) {\n\tint err;\n\tvoid *p;\n\tuint64_t a0, a1, d0, d1;\n\tuint64_t *ap0, *ap1, *dp0, *dp1;\n\tsize_t sz, usize;\n\n\tsz = sizeof(a0);\n\tif ((err = mallctl(\"thread.allocated\", (void *)&a0, &sz, NULL, 0))) {\n\t\tif (err == ENOENT) {\n\t\t\tgoto label_ENOENT;\n\t\t}\n\t\ttest_fail(\"%s(): Error in mallctl(): %s\", __func__,\n\t\t    strerror(err));\n\t}\n\tsz = sizeof(ap0);\n\tif ((err = mallctl(\"thread.allocatedp\", (void *)&ap0, &sz, NULL, 0))) {\n\t\tif (err == ENOENT) {\n\t\t\tgoto label_ENOENT;\n\t\t}\n\t\ttest_fail(\"%s(): Error in mallctl(): %s\", __func__,\n\t\t    strerror(err));\n\t}\n\texpect_u64_eq(*ap0, a0,\n\t    \"\\\"thread.allocatedp\\\" should provide a pointer to internal \"\n\t    \"storage\");\n\n\tsz = sizeof(d0);\n\tif ((err = mallctl(\"thread.deallocated\", (void *)&d0, &sz, NULL, 0))) {\n\t\tif (err == ENOENT) {\n\t\t\tgoto label_ENOENT;\n\t\t}\n\t\ttest_fail(\"%s(): Error in mallctl(): %s\", __func__,\n\t\t    strerror(err));\n\t}\n\tsz = sizeof(dp0);\n\tif ((err = mallctl(\"thread.deallocatedp\", (void *)&dp0, &sz, NULL,\n\t    0))) {\n\t\tif (err == ENOENT) {\n\t\t\tgoto label_ENOENT;\n\t\t}\n\t\ttest_fail(\"%s(): Error in mallctl(): %s\", __func__,\n\t\t    strerror(err));\n\t}\n\texpect_u64_eq(*dp0, d0,\n\t    \"\\\"thread.deallocatedp\\\" should provide a pointer to internal \"\n\t    \"storage\");\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected malloc() error\");\n\n\tsz = sizeof(a1);\n\tmallctl(\"thread.allocated\", (void *)&a1, &sz, NULL, 0);\n\tsz = sizeof(ap1);\n\tmallctl(\"thread.allocatedp\", (void *)&ap1, &sz, NULL, 0);\n\texpect_u64_eq(*ap1, a1,\n\t    \"Dereferenced \\\"thread.allocatedp\\\" value should equal \"\n\t    \"\\\"thread.allocated\\\" value\");\n\texpect_ptr_eq(ap0, ap1,\n\t    \"Pointer returned by 
\\\"thread.allocatedp\\\" should not change\");\n\n\tusize = TEST_MALLOC_SIZE(p);\n\texpect_u64_le(a0 + usize, a1,\n\t    \"Allocated memory counter should increase by at least the amount \"\n\t    \"explicitly allocated\");\n\n\tfree(p);\n\n\tsz = sizeof(d1);\n\tmallctl(\"thread.deallocated\", (void *)&d1, &sz, NULL, 0);\n\tsz = sizeof(dp1);\n\tmallctl(\"thread.deallocatedp\", (void *)&dp1, &sz, NULL, 0);\n\texpect_u64_eq(*dp1, d1,\n\t    \"Dereferenced \\\"thread.deallocatedp\\\" value should equal \"\n\t    \"\\\"thread.deallocated\\\" value\");\n\texpect_ptr_eq(dp0, dp1,\n\t    \"Pointer returned by \\\"thread.deallocatedp\\\" should not change\");\n\n\texpect_u64_le(d0 + usize, d1,\n\t    \"Deallocated memory counter should increase by at least the amount \"\n\t    \"explicitly deallocated\");\n\n\treturn NULL;\nlabel_ENOENT:\n\texpect_false(config_stats,\n\t    \"ENOENT should only be returned if stats are disabled\");\n\ttest_skip(\"\\\"thread.allocated\\\" mallctl not available\");\n\treturn NULL;\n}\n\nTEST_BEGIN(test_main_thread) {\n\tthd_start(NULL);\n}\nTEST_END\n\nTEST_BEGIN(test_subthread) {\n\tthd_t thd;\n\n\tthd_create(&thd, thd_start, NULL);\n\tthd_join(thd, NULL);\n}\nTEST_END\n\nint\nmain(void) {\n\t/* Run tests multiple times to check for bad interactions. */\n\treturn test(\n\t    test_main_thread,\n\t    test_subthread,\n\t    test_main_thread,\n\t    test_subthread,\n\t    test_main_thread);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/cpp/basic.cpp",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_basic) {\n\tauto foo = new long(4);\n\texpect_ptr_not_null(foo, \"Unexpected new[] failure\");\n\tdelete foo;\n\t// Test nullptr handling.\n\tfoo = nullptr;\n\tdelete foo;\n\n\tauto bar = new long;\n\texpect_ptr_not_null(bar, \"Unexpected new failure\");\n\tdelete bar;\n\t// Test nullptr handling.\n\tbar = nullptr;\n\tdelete bar;\n}\nTEST_END\n\nint\nmain() {\n\treturn test(\n\t    test_basic);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/cpp/infallible_new_false.cpp",
    "content": "#include <memory>\n\n#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_failing_alloc) {\n\tbool saw_exception = false;\n\ttry {\n\t\t/* Too big of an allocation to succeed. */\n\t\tvoid *volatile ptr = ::operator new((size_t)-1);\n\t\t(void)ptr;\n\t} catch (...) {\n\t\tsaw_exception = true;\n\t}\n\texpect_true(saw_exception, \"Didn't get a failure\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_failing_alloc);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/integration/cpp/infallible_new_false.sh",
    "content": "#!/bin/sh\n\nXMALLOC_STR=\"\"\nif [ \"x${enable_xmalloc}\" = \"x1\" ] ; then\n  XMALLOC_STR=\"xmalloc:false,\"\nfi\n\nexport MALLOC_CONF=\"${XMALLOC_STR}experimental_infallible_new:false\"\n"
  },
  {
    "path": "deps/jemalloc/test/integration/cpp/infallible_new_true.cpp",
    "content": "#include <stdio.h>\n\n#include \"test/jemalloc_test.h\"\n\n/*\n * We can't test C++ in unit tests.  In order to intercept abort, use a secret\n * safety check abort hook in integration tests.\n */\ntypedef void (*abort_hook_t)(const char *message);\nbool fake_abort_called;\nvoid fake_abort(const char *message) {\n\tif (strcmp(message, \"<jemalloc>: Allocation failed and \"\n\t    \"opt.experimental_infallible_new is true. Aborting.\\n\") != 0) {\n\t\tabort();\n\t}\n\tfake_abort_called = true;\n}\n\nstatic bool\nown_operator_new(void) {\n\tuint64_t before, after;\n\tsize_t sz = sizeof(before);\n\n\t/* thread.allocated is always available, even w/o config_stats. */\n\texpect_d_eq(mallctl(\"thread.allocated\", (void *)&before, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl failure reading stats\");\n\tvoid *volatile ptr = ::operator new((size_t)8);\n\texpect_ptr_not_null(ptr, \"Unexpected allocation failure\");\n\texpect_d_eq(mallctl(\"thread.allocated\", (void *)&after, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl failure reading stats\");\n\n\treturn (after != before);\n}\n\nTEST_BEGIN(test_failing_alloc) {\n\tabort_hook_t abort_hook = &fake_abort;\n\texpect_d_eq(mallctl(\"experimental.hooks.safety_check_abort\", NULL, NULL,\n\t    (void *)&abort_hook, sizeof(abort_hook)), 0,\n\t    \"Unexpected mallctl failure setting abort hook\");\n\n\t/*\n\t * Not owning operator new is only expected to happen on MinGW which\n\t * does not support operator new / delete replacement.\n\t */\n#ifdef _WIN32\n\ttest_skip_if(!own_operator_new());\n#else\n\texpect_true(own_operator_new(), \"No operator new overload\");\n#endif\n\tvoid *volatile ptr = (void *)1;\n\ttry {\n\t\t/* Too big of an allocation to succeed. */\n\t\tptr = ::operator new((size_t)-1);\n\t} catch (...) 
{\n\t\tabort();\n\t}\n\texpect_ptr_null(ptr, \"Allocation should have failed\");\n\texpect_b_eq(fake_abort_called, true, \"Abort hook not invoked\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_failing_alloc);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/integration/cpp/infallible_new_true.sh",
    "content": "#!/bin/sh\n\nXMALLOC_STR=\"\"\nif [ \"x${enable_xmalloc}\" = \"x1\" ] ; then\n  XMALLOC_STR=\"xmalloc:false,\"\nfi\n\nexport MALLOC_CONF=\"${XMALLOC_STR}experimental_infallible_new:true\"\n"
  },
  {
    "path": "deps/jemalloc/test/integration/extent.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"test/extent_hooks.h\"\n\n#include \"jemalloc/internal/arena_types.h\"\n\nstatic void\ntest_extent_body(unsigned arena_ind) {\n\tvoid *p;\n\tsize_t large0, large1, large2, sz;\n\tsize_t purge_mib[3];\n\tsize_t purge_miblen;\n\tint flags;\n\tbool xallocx_success_a, xallocx_success_b, xallocx_success_c;\n\n\tflags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\n\t/* Get large size classes. */\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large0, &sz, NULL,\n\t    0), 0, \"Unexpected arenas.lextent.0.size failure\");\n\texpect_d_eq(mallctl(\"arenas.lextent.1.size\", (void *)&large1, &sz, NULL,\n\t    0), 0, \"Unexpected arenas.lextent.1.size failure\");\n\texpect_d_eq(mallctl(\"arenas.lextent.2.size\", (void *)&large2, &sz, NULL,\n\t    0), 0, \"Unexpected arenas.lextent.2.size failure\");\n\n\t/* Test dalloc/decommit/purge cascade. */\n\tpurge_miblen = sizeof(purge_mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.purge\", purge_mib, &purge_miblen),\n\t    0, \"Unexpected mallctlnametomib() failure\");\n\tpurge_mib[1] = (size_t)arena_ind;\n\tcalled_alloc = false;\n\ttry_alloc = true;\n\ttry_dalloc = false;\n\ttry_decommit = false;\n\tp = mallocx(large0 * 2, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\texpect_true(called_alloc, \"Expected alloc call\");\n\tcalled_dalloc = false;\n\tcalled_decommit = false;\n\tdid_purge_lazy = false;\n\tdid_purge_forced = false;\n\tcalled_split = false;\n\txallocx_success_a = (xallocx(p, large0, 0, flags) == large0);\n\texpect_d_eq(mallctlbymib(purge_mib, purge_miblen, NULL, NULL, NULL, 0),\n\t    0, \"Unexpected arena.%u.purge error\", arena_ind);\n\tif (xallocx_success_a) {\n\t\texpect_true(called_dalloc, \"Expected dalloc call\");\n\t\texpect_true(called_decommit, \"Expected decommit call\");\n\t\texpect_true(did_purge_lazy || did_purge_forced,\n\t\t    \"Expected 
purge\");\n\t\texpect_true(called_split, \"Expected split call\");\n\t}\n\tdallocx(p, flags);\n\ttry_dalloc = true;\n\n\t/* Test decommit/commit and observe split/merge. */\n\ttry_dalloc = false;\n\ttry_decommit = true;\n\tp = mallocx(large0 * 2, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tdid_decommit = false;\n\tdid_commit = false;\n\tcalled_split = false;\n\tdid_split = false;\n\tdid_merge = false;\n\txallocx_success_b = (xallocx(p, large0, 0, flags) == large0);\n\texpect_d_eq(mallctlbymib(purge_mib, purge_miblen, NULL, NULL, NULL, 0),\n\t    0, \"Unexpected arena.%u.purge error\", arena_ind);\n\tif (xallocx_success_b) {\n\t\texpect_true(did_split, \"Expected split\");\n\t}\n\txallocx_success_c = (xallocx(p, large0 * 2, 0, flags) == large0 * 2);\n\tif (did_split) {\n\t\texpect_b_eq(did_decommit, did_commit,\n\t\t    \"Expected decommit/commit match\");\n\t}\n\tif (xallocx_success_b && xallocx_success_c) {\n\t\texpect_true(did_merge, \"Expected merge\");\n\t}\n\tdallocx(p, flags);\n\ttry_dalloc = true;\n\ttry_decommit = false;\n\n\t/* Make sure non-large allocation succeeds. */\n\tp = mallocx(42, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tdallocx(p, flags);\n}\n\nstatic void\ntest_manual_hook_auto_arena(void) {\n\tunsigned narenas;\n\tsize_t old_size, new_size, sz;\n\tsize_t hooks_mib[3];\n\tsize_t hooks_miblen;\n\textent_hooks_t *new_hooks, *old_hooks;\n\n\textent_hooks_prep();\n\n\tsz = sizeof(unsigned);\n\t/* Get number of auto arenas. */\n\texpect_d_eq(mallctl(\"opt.narenas\", (void *)&narenas, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\tif (narenas == 1) {\n\t\treturn;\n\t}\n\n\t/* Install custom extent hooks on arena 1 (might not be initialized). 
*/\n\thooks_miblen = sizeof(hooks_mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.extent_hooks\", hooks_mib,\n\t    &hooks_miblen), 0, \"Unexpected mallctlnametomib() failure\");\n\thooks_mib[1] = 1;\n\told_size = sizeof(extent_hooks_t *);\n\tnew_hooks = &hooks;\n\tnew_size = sizeof(extent_hooks_t *);\n\texpect_d_eq(mallctlbymib(hooks_mib, hooks_miblen, (void *)&old_hooks,\n\t    &old_size, (void *)&new_hooks, new_size), 0,\n\t    \"Unexpected extent_hooks error\");\n\tstatic bool auto_arena_created = false;\n\tif (old_hooks != &hooks) {\n\t\texpect_b_eq(auto_arena_created, false,\n\t\t    \"Expected auto arena 1 created only once.\");\n\t\tauto_arena_created = true;\n\t}\n}\n\nstatic void\ntest_manual_hook_body(void) {\n\tunsigned arena_ind;\n\tsize_t old_size, new_size, sz;\n\tsize_t hooks_mib[3];\n\tsize_t hooks_miblen;\n\textent_hooks_t *new_hooks, *old_hooks;\n\n\textent_hooks_prep();\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\n\t/* Install custom extent hooks. 
*/\n\thooks_miblen = sizeof(hooks_mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.extent_hooks\", hooks_mib,\n\t    &hooks_miblen), 0, \"Unexpected mallctlnametomib() failure\");\n\thooks_mib[1] = (size_t)arena_ind;\n\told_size = sizeof(extent_hooks_t *);\n\tnew_hooks = &hooks;\n\tnew_size = sizeof(extent_hooks_t *);\n\texpect_d_eq(mallctlbymib(hooks_mib, hooks_miblen, (void *)&old_hooks,\n\t    &old_size, (void *)&new_hooks, new_size), 0,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->alloc, extent_alloc_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->dalloc, extent_dalloc_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->commit, extent_commit_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->decommit, extent_decommit_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->purge_lazy, extent_purge_lazy_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->purge_forced, extent_purge_forced_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->split, extent_split_hook,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_ne(old_hooks->merge, extent_merge_hook,\n\t    \"Unexpected extent_hooks error\");\n\n\tif (!is_background_thread_enabled()) {\n\t\ttest_extent_body(arena_ind);\n\t}\n\n\t/* Restore extent hooks. 
*/\n\texpect_d_eq(mallctlbymib(hooks_mib, hooks_miblen, NULL, NULL,\n\t    (void *)&old_hooks, new_size), 0, \"Unexpected extent_hooks error\");\n\texpect_d_eq(mallctlbymib(hooks_mib, hooks_miblen, (void *)&old_hooks,\n\t    &old_size, NULL, 0), 0, \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks, default_hooks, \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->alloc, default_hooks->alloc,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->dalloc, default_hooks->dalloc,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->commit, default_hooks->commit,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->decommit, default_hooks->decommit,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->purge_lazy, default_hooks->purge_lazy,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->purge_forced, default_hooks->purge_forced,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->split, default_hooks->split,\n\t    \"Unexpected extent_hooks error\");\n\texpect_ptr_eq(old_hooks->merge, default_hooks->merge,\n\t    \"Unexpected extent_hooks error\");\n}\n\nTEST_BEGIN(test_extent_manual_hook) {\n\ttest_manual_hook_auto_arena();\n\ttest_manual_hook_body();\n\n\t/* Test failure paths. 
*/\n\ttry_split = false;\n\ttest_manual_hook_body();\n\ttry_merge = false;\n\ttest_manual_hook_body();\n\ttry_purge_lazy = false;\n\ttry_purge_forced = false;\n\ttest_manual_hook_body();\n\n\ttry_split = try_merge = try_purge_lazy = try_purge_forced = true;\n}\nTEST_END\n\nTEST_BEGIN(test_extent_auto_hook) {\n\tunsigned arena_ind;\n\tsize_t new_size, sz;\n\textent_hooks_t *new_hooks;\n\n\textent_hooks_prep();\n\n\tsz = sizeof(unsigned);\n\tnew_hooks = &hooks;\n\tnew_size = sizeof(extent_hooks_t *);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz,\n\t    (void *)&new_hooks, new_size), 0, \"Unexpected mallctl() failure\");\n\n\ttest_skip_if(is_background_thread_enabled());\n\ttest_extent_body(arena_ind);\n}\nTEST_END\n\nstatic void\ntest_arenas_create_ext_base(arena_config_t config,\n\tbool expect_hook_data, bool expect_hook_metadata)\n{\n\tunsigned arena, arena1;\n\tvoid *ptr;\n\tsize_t sz = sizeof(unsigned);\n\n\textent_hooks_prep();\n\n\tcalled_alloc = false;\n\texpect_d_eq(mallctl(\"experimental.arenas_create_ext\",\n\t    (void *)&arena, &sz, &config, sizeof(arena_config_t)), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_b_eq(called_alloc, expect_hook_metadata,\n\t    \"expected hook metadata alloc mismatch\");\n\n\tcalled_alloc = false;\n\tptr = mallocx(42, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE);\n\texpect_b_eq(called_alloc, expect_hook_data,\n\t    \"expected hook data alloc mismatch\");\n\n\texpect_ptr_not_null(ptr, \"Unexpected mallocx() failure\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena1, &sz, &ptr, sizeof(ptr)),\n\t    0, \"Unexpected mallctl() failure\");\n\texpect_u_eq(arena, arena1, \"Unexpected arena index\");\n\tdallocx(ptr, 0);\n}\n\nTEST_BEGIN(test_arenas_create_ext_with_ehooks_no_metadata) {\n\tarena_config_t config;\n\tconfig.extent_hooks = &hooks;\n\tconfig.metadata_use_hooks = false;\n\n\ttest_arenas_create_ext_base(config, true, 
false);\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_create_ext_with_ehooks_with_metadata) {\n\tarena_config_t config;\n\tconfig.extent_hooks = &hooks;\n\tconfig.metadata_use_hooks = true;\n\n\ttest_arenas_create_ext_base(config, true, true);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_extent_manual_hook,\n\t    test_extent_auto_hook,\n\t    test_arenas_create_ext_with_ehooks_no_metadata,\n\t    test_arenas_create_ext_with_ehooks_with_metadata);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/extent.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"junk:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/integration/malloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_zero_alloc) {\n\tvoid *res = malloc(0);\n\tassert(res);\n\tsize_t usable = TEST_MALLOC_SIZE(res);\n\tassert(usable > 0);\n\tfree(res);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_zero_alloc);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/mallocx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic unsigned\nget_nsizes_impl(const char *cmd) {\n\tunsigned ret;\n\tsize_t z;\n\n\tz = sizeof(unsigned);\n\texpect_d_eq(mallctl(cmd, (void *)&ret, &z, NULL, 0), 0,\n\t    \"Unexpected mallctl(\\\"%s\\\", ...) failure\", cmd);\n\n\treturn ret;\n}\n\nstatic unsigned\nget_nlarge(void) {\n\treturn get_nsizes_impl(\"arenas.nlextents\");\n}\n\nstatic size_t\nget_size_impl(const char *cmd, size_t ind) {\n\tsize_t ret;\n\tsize_t z;\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&ret, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\", %zu], ...) failure\", cmd, ind);\n\n\treturn ret;\n}\n\nstatic size_t\nget_large_size(size_t ind) {\n\treturn get_size_impl(\"arenas.lextent.0.size\", ind);\n}\n\n/*\n * On systems which can't merge extents, tests that call this function generate\n * a lot of dirty memory very quickly.  Purging between cycles mitigates\n * potential OOM on e.g. 32-bit Windows.\n */\nstatic void\npurge(void) {\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl error\");\n}\n\n/*\n * GCC \"-Walloc-size-larger-than\" warning detects when one of the memory\n * allocation functions is called with a size larger than the maximum size that\n * they support. 
Here we want to explicitly test that the allocation functions\n * do indeed fail properly when this is the case, which triggers the warning.\n * Therefore we disable the warning for these tests.\n */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n\nTEST_BEGIN(test_overflow) {\n\tsize_t largemax;\n\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\texpect_ptr_null(mallocx(largemax+1, 0),\n\t    \"Expected OOM for mallocx(size=%#zx, 0)\", largemax+1);\n\n\texpect_ptr_null(mallocx(ZU(PTRDIFF_MAX)+1, 0),\n\t    \"Expected OOM for mallocx(size=%#zx, 0)\", ZU(PTRDIFF_MAX)+1);\n\n\texpect_ptr_null(mallocx(SIZE_T_MAX, 0),\n\t    \"Expected OOM for mallocx(size=%#zx, 0)\", SIZE_T_MAX);\n\n\texpect_ptr_null(mallocx(1, MALLOCX_ALIGN(ZU(PTRDIFF_MAX)+1)),\n\t    \"Expected OOM for mallocx(size=1, MALLOCX_ALIGN(%#zx))\",\n\t    ZU(PTRDIFF_MAX)+1);\n}\nTEST_END\n\nstatic void *\nremote_alloc(void *arg) {\n\tunsigned arena;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\tsize_t large_sz;\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large_sz, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\n\tvoid *ptr = mallocx(large_sz, MALLOCX_ARENA(arena)\n\t    | MALLOCX_TCACHE_NONE);\n\tvoid **ret = (void **)arg;\n\t*ret = ptr;\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_remote_free) {\n\tthd_t thd;\n\tvoid *ret;\n\tthd_create(&thd, remote_alloc, (void *)&ret);\n\tthd_join(thd, NULL);\n\texpect_ptr_not_null(ret, \"Unexpected mallocx failure\");\n\n\t/* Avoid TCACHE_NONE to explicitly test tcache_flush(). 
*/\n\tdallocx(ret, 0);\n\tmallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_oom) {\n\tsize_t largemax;\n\tbool oom;\n\tvoid *ptrs[3];\n\tunsigned i;\n\n\t/*\n\t * It should be impossible to allocate three objects that each consume\n\t * nearly half the virtual address space.\n\t */\n\tlargemax = get_large_size(get_nlarge()-1);\n\toom = false;\n\tfor (i = 0; i < sizeof(ptrs) / sizeof(void *); i++) {\n\t\tptrs[i] = mallocx(largemax, MALLOCX_ARENA(0));\n\t\tif (ptrs[i] == NULL) {\n\t\t\toom = true;\n\t\t}\n\t}\n\texpect_true(oom,\n\t    \"Expected OOM during series of calls to mallocx(size=%zu, 0)\",\n\t    largemax);\n\tfor (i = 0; i < sizeof(ptrs) / sizeof(void *); i++) {\n\t\tif (ptrs[i] != NULL) {\n\t\t\tdallocx(ptrs[i], 0);\n\t\t}\n\t}\n\tpurge();\n\n#if LG_SIZEOF_PTR == 3\n\texpect_ptr_null(mallocx(0x8000000000000000ULL,\n\t    MALLOCX_ALIGN(0x8000000000000000ULL)),\n\t    \"Expected OOM for mallocx()\");\n\texpect_ptr_null(mallocx(0x8000000000000000ULL,\n\t    MALLOCX_ALIGN(0x80000000)),\n\t    \"Expected OOM for mallocx()\");\n#else\n\texpect_ptr_null(mallocx(0x80000000UL, MALLOCX_ALIGN(0x80000000UL)),\n\t    \"Expected OOM for mallocx()\");\n#endif\n}\nTEST_END\n\n/* Re-enable the \"-Walloc-size-larger-than=\" warning */\nJEMALLOC_DIAGNOSTIC_POP\n\nTEST_BEGIN(test_basic) {\n#define MAXSZ (((size_t)1) << 23)\n\tsize_t sz;\n\n\tfor (sz = 1; sz < MAXSZ; sz = nallocx(sz, 0) + 1) {\n\t\tsize_t nsz, rsz;\n\t\tvoid *p;\n\t\tnsz = nallocx(sz, 0);\n\t\texpect_zu_ne(nsz, 0, \"Unexpected nallocx() error\");\n\t\tp = mallocx(sz, 0);\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected mallocx(size=%zx, flags=0) error\", sz);\n\t\trsz = sallocx(p, 0);\n\t\texpect_zu_ge(rsz, sz, \"Real size smaller than expected\");\n\t\texpect_zu_eq(nsz, rsz, \"nallocx()/sallocx() size mismatch\");\n\t\tdallocx(p, 0);\n\n\t\tp = mallocx(sz, 0);\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected mallocx(size=%zx, flags=0) error\", sz);\n\t\tdallocx(p, 
0);\n\n\t\tnsz = nallocx(sz, MALLOCX_ZERO);\n\t\texpect_zu_ne(nsz, 0, \"Unexpected nallocx() error\");\n\t\tp = mallocx(sz, MALLOCX_ZERO);\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected mallocx(size=%zx, flags=MALLOCX_ZERO) error\",\n\t\t    nsz);\n\t\trsz = sallocx(p, 0);\n\t\texpect_zu_eq(nsz, rsz, \"nallocx()/sallocx() rsize mismatch\");\n\t\tdallocx(p, 0);\n\t\tpurge();\n\t}\n#undef MAXSZ\n}\nTEST_END\n\nTEST_BEGIN(test_alignment_and_size) {\n\tconst char *percpu_arena;\n\tsize_t sz = sizeof(percpu_arena);\n\n\tif(mallctl(\"opt.percpu_arena\", (void *)&percpu_arena, &sz, NULL, 0) ||\n\t    strcmp(percpu_arena, \"disabled\") != 0) {\n\t\ttest_skip(\"test_alignment_and_size skipped: \"\n\t\t    \"not working with percpu arena.\");\n\t};\n#define MAXALIGN (((size_t)1) << 23)\n#define NITER 4\n\tsize_t nsz, rsz, alignment, total;\n\tunsigned i;\n\tvoid *ps[NITER];\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tps[i] = NULL;\n\t}\n\n\tfor (alignment = 8;\n\t    alignment <= MAXALIGN;\n\t    alignment <<= 1) {\n\t\ttotal = 0;\n\t\tfor (sz = 1;\n\t\t    sz < 3 * alignment && sz < (1U << 31);\n\t\t    sz += (alignment >> (LG_SIZEOF_PTR-1)) - 1) {\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tnsz = nallocx(sz, MALLOCX_ALIGN(alignment) |\n\t\t\t\t    MALLOCX_ZERO | MALLOCX_ARENA(0));\n\t\t\t\texpect_zu_ne(nsz, 0,\n\t\t\t\t    \"nallocx() error for alignment=%zu, \"\n\t\t\t\t    \"size=%zu (%#zx)\", alignment, sz, sz);\n\t\t\t\tps[i] = mallocx(sz, MALLOCX_ALIGN(alignment) |\n\t\t\t\t    MALLOCX_ZERO | MALLOCX_ARENA(0));\n\t\t\t\texpect_ptr_not_null(ps[i],\n\t\t\t\t    \"mallocx() error for alignment=%zu, \"\n\t\t\t\t    \"size=%zu (%#zx)\", alignment, sz, sz);\n\t\t\t\trsz = sallocx(ps[i], 0);\n\t\t\t\texpect_zu_ge(rsz, sz,\n\t\t\t\t    \"Real size smaller than expected for \"\n\t\t\t\t    \"alignment=%zu, size=%zu\", alignment, sz);\n\t\t\t\texpect_zu_eq(nsz, rsz,\n\t\t\t\t    \"nallocx()/sallocx() size mismatch for \"\n\t\t\t\t    \"alignment=%zu, size=%zu\", alignment, 
sz);\n\t\t\t\texpect_ptr_null(\n\t\t\t\t    (void *)((uintptr_t)ps[i] & (alignment-1)),\n\t\t\t\t    \"%p inadequately aligned for\"\n\t\t\t\t    \" alignment=%zu, size=%zu\", ps[i],\n\t\t\t\t    alignment, sz);\n\t\t\t\ttotal += rsz;\n\t\t\t\tif (total >= (MAXALIGN << 1)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tif (ps[i] != NULL) {\n\t\t\t\t\tdallocx(ps[i], 0);\n\t\t\t\t\tps[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpurge();\n\t}\n#undef MAXALIGN\n#undef NITER\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_overflow,\n\t    test_oom,\n\t    test_remote_free,\n\t    test_basic,\n\t    test_alignment_and_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/mallocx.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"junk:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/integration/overflow.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * GCC \"-Walloc-size-larger-than\" warning detects when one of the memory\n * allocation functions is called with a size larger than the maximum size that\n * they support. Here we want to explicitly test that the allocation functions\n * do indeed fail properly when this is the case, which triggers the warning.\n * Therefore we disable the warning for these tests.\n */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n\nTEST_BEGIN(test_overflow) {\n\tunsigned nlextents;\n\tsize_t mib[4];\n\tsize_t sz, miblen, max_size_class;\n\tvoid *p;\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.nlextents\", (void *)&nlextents, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() error\");\n\n\tmiblen = sizeof(mib) / sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arenas.lextent.0.size\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() error\");\n\tmib[2] = nlextents - 1;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&max_size_class, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctlbymib() error\");\n\n\texpect_ptr_null(malloc(max_size_class + 1),\n\t    \"Expected OOM due to over-sized allocation request\");\n\texpect_ptr_null(malloc(SIZE_T_MAX),\n\t    \"Expected OOM due to over-sized allocation request\");\n\n\texpect_ptr_null(calloc(1, max_size_class + 1),\n\t    \"Expected OOM due to over-sized allocation request\");\n\texpect_ptr_null(calloc(1, SIZE_T_MAX),\n\t    \"Expected OOM due to over-sized allocation request\");\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected malloc() OOM\");\n\texpect_ptr_null(realloc(p, max_size_class + 1),\n\t    \"Expected OOM due to over-sized allocation request\");\n\texpect_ptr_null(realloc(p, SIZE_T_MAX),\n\t    \"Expected OOM due to over-sized allocation request\");\n\tfree(p);\n}\nTEST_END\n\n/* Re-enable the \"-Walloc-size-larger-than=\" warning */\nJEMALLOC_DIAGNOSTIC_POP\n\nint\nmain(void) 
{\n\treturn test(\n\t    test_overflow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/posix_memalign.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define MAXALIGN (((size_t)1) << 23)\n\n/*\n * On systems which can't merge extents, tests that call this function generate\n * a lot of dirty memory very quickly.  Purging between cycles mitigates\n * potential OOM on e.g. 32-bit Windows.\n */\nstatic void\npurge(void) {\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl error\");\n}\n\nTEST_BEGIN(test_alignment_errors) {\n\tsize_t alignment;\n\tvoid *p;\n\n\tfor (alignment = 0; alignment < sizeof(void *); alignment++) {\n\t\texpect_d_eq(posix_memalign(&p, alignment, 1), EINVAL,\n\t\t    \"Expected error for invalid alignment %zu\",\n\t\t    alignment);\n\t}\n\n\tfor (alignment = sizeof(size_t); alignment < MAXALIGN;\n\t    alignment <<= 1) {\n\t\texpect_d_ne(posix_memalign(&p, alignment + 1, 1), 0,\n\t\t    \"Expected error for invalid alignment %zu\",\n\t\t    alignment + 1);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_oom_errors) {\n\tsize_t alignment, size;\n\tvoid *p;\n\n#if LG_SIZEOF_PTR == 3\n\talignment = UINT64_C(0x8000000000000000);\n\tsize      = UINT64_C(0x8000000000000000);\n#else\n\talignment = 0x80000000LU;\n\tsize      = 0x80000000LU;\n#endif\n\texpect_d_ne(posix_memalign(&p, alignment, size), 0,\n\t    \"Expected error for posix_memalign(&p, %zu, %zu)\",\n\t    alignment, size);\n\n#if LG_SIZEOF_PTR == 3\n\talignment = UINT64_C(0x4000000000000000);\n\tsize      = UINT64_C(0xc000000000000001);\n#else\n\talignment = 0x40000000LU;\n\tsize      = 0xc0000001LU;\n#endif\n\texpect_d_ne(posix_memalign(&p, alignment, size), 0,\n\t    \"Expected error for posix_memalign(&p, %zu, %zu)\",\n\t    alignment, size);\n\n\talignment = 0x10LU;\n#if LG_SIZEOF_PTR == 3\n\tsize = UINT64_C(0xfffffffffffffff0);\n#else\n\tsize = 0xfffffff0LU;\n#endif\n\texpect_d_ne(posix_memalign(&p, alignment, size), 0,\n\t    \"Expected error for posix_memalign(&p, %zu, %zu)\",\n\t    alignment, 
size);\n}\nTEST_END\n\nTEST_BEGIN(test_alignment_and_size) {\n#define NITER 4\n\tsize_t alignment, size, total;\n\tunsigned i;\n\tint err;\n\tvoid *ps[NITER];\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tps[i] = NULL;\n\t}\n\n\tfor (alignment = 8;\n\t    alignment <= MAXALIGN;\n\t    alignment <<= 1) {\n\t\ttotal = 0;\n\t\tfor (size = 0;\n\t\t    size < 3 * alignment && size < (1U << 31);\n\t\t    size += ((size == 0) ? 1 :\n\t\t    (alignment >> (LG_SIZEOF_PTR-1)) - 1)) {\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\terr = posix_memalign(&ps[i],\n\t\t\t\t    alignment, size);\n\t\t\t\tif (err) {\n\t\t\t\t\tchar buf[BUFERROR_BUF];\n\n\t\t\t\t\tbuferror(get_errno(), buf, sizeof(buf));\n\t\t\t\t\ttest_fail(\n\t\t\t\t\t    \"Error for alignment=%zu, \"\n\t\t\t\t\t    \"size=%zu (%#zx): %s\",\n\t\t\t\t\t    alignment, size, size, buf);\n\t\t\t\t}\n\t\t\t\ttotal += TEST_MALLOC_SIZE(ps[i]);\n\t\t\t\tif (total >= (MAXALIGN << 1)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tif (ps[i] != NULL) {\n\t\t\t\t\tfree(ps[i]);\n\t\t\t\t\tps[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpurge();\n\t}\n#undef NITER\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_alignment_errors,\n\t    test_oom_errors,\n\t    test_alignment_and_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/rallocx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic unsigned\nget_nsizes_impl(const char *cmd) {\n\tunsigned ret;\n\tsize_t z;\n\n\tz = sizeof(unsigned);\n\texpect_d_eq(mallctl(cmd, (void *)&ret, &z, NULL, 0), 0,\n\t    \"Unexpected mallctl(\\\"%s\\\", ...) failure\", cmd);\n\n\treturn ret;\n}\n\nstatic unsigned\nget_nlarge(void) {\n\treturn get_nsizes_impl(\"arenas.nlextents\");\n}\n\nstatic size_t\nget_size_impl(const char *cmd, size_t ind) {\n\tsize_t ret;\n\tsize_t z;\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&ret, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\", %zu], ...) failure\", cmd, ind);\n\n\treturn ret;\n}\n\nstatic size_t\nget_large_size(size_t ind) {\n\treturn get_size_impl(\"arenas.lextent.0.size\", ind);\n}\n\nTEST_BEGIN(test_grow_and_shrink) {\n\t/*\n\t * Use volatile to workaround buffer overflow false positives\n\t * (-D_FORTIFY_SOURCE=3).\n\t */\n\tvoid *volatile p, *volatile q;\n\tsize_t tsz;\n#define NCYCLES 3\n\tunsigned i, j;\n#define NSZS 1024\n\tsize_t szs[NSZS];\n#define MAXSZ ZU(12 * 1024 * 1024)\n\n\tp = mallocx(1, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tszs[0] = sallocx(p, 0);\n\n\tfor (i = 0; i < NCYCLES; i++) {\n\t\tfor (j = 1; j < NSZS && szs[j-1] < MAXSZ; j++) {\n\t\t\tq = rallocx(p, szs[j-1]+1, 0);\n\t\t\texpect_ptr_not_null(q,\n\t\t\t    \"Unexpected rallocx() error for size=%zu-->%zu\",\n\t\t\t    szs[j-1], szs[j-1]+1);\n\t\t\tszs[j] = sallocx(q, 0);\n\t\t\texpect_zu_ne(szs[j], szs[j-1]+1,\n\t\t\t    \"Expected size to be at least: %zu\", szs[j-1]+1);\n\t\t\tp = q;\n\t\t}\n\n\t\tfor (j--; j > 0; j--) {\n\t\t\tq = rallocx(p, szs[j-1], 0);\n\t\t\texpect_ptr_not_null(q,\n\t\t\t    \"Unexpected rallocx() error for size=%zu-->%zu\",\n\t\t\t    szs[j], 
szs[j-1]);\n\t\t\ttsz = sallocx(q, 0);\n\t\t\texpect_zu_eq(tsz, szs[j-1],\n\t\t\t    \"Expected size=%zu, got size=%zu\", szs[j-1], tsz);\n\t\t\tp = q;\n\t\t}\n\t}\n\n\tdallocx(p, 0);\n#undef MAXSZ\n#undef NSZS\n#undef NCYCLES\n}\nTEST_END\n\nstatic bool\nvalidate_fill(void *p, uint8_t c, size_t offset, size_t len) {\n\tbool ret = false;\n\t/*\n\t * Use volatile to workaround buffer overflow false positives\n\t * (-D_FORTIFY_SOURCE=3).\n\t */\n\tuint8_t *volatile buf = (uint8_t *)p;\n\tsize_t i;\n\n\tfor (i = 0; i < len; i++) {\n\t\tuint8_t b = buf[offset+i];\n\t\tif (b != c) {\n\t\t\ttest_fail(\"Allocation at %p (len=%zu) contains %#x \"\n\t\t\t    \"rather than %#x at offset %zu\", p, len, b, c,\n\t\t\t    offset+i);\n\t\t\tret = true;\n\t\t}\n\t}\n\n\treturn ret;\n}\n\nTEST_BEGIN(test_zero) {\n\t/*\n\t * Use volatile to workaround buffer overflow false positives\n\t * (-D_FORTIFY_SOURCE=3).\n\t */\n\tvoid *volatile p, *volatile q;\n\tsize_t psz, qsz, i, j;\n\tsize_t start_sizes[] = {1, 3*1024, 63*1024, 4095*1024};\n#define FILL_BYTE 0xaaU\n#define RANGE 2048\n\n\tfor (i = 0; i < sizeof(start_sizes)/sizeof(size_t); i++) {\n\t\tsize_t start_size = start_sizes[i];\n\t\tp = mallocx(start_size, MALLOCX_ZERO);\n\t\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\t\tpsz = sallocx(p, 0);\n\n\t\texpect_false(validate_fill(p, 0, 0, psz),\n\t\t    \"Expected zeroed memory\");\n\t\tmemset(p, FILL_BYTE, psz);\n\t\texpect_false(validate_fill(p, FILL_BYTE, 0, psz),\n\t\t    \"Expected filled memory\");\n\n\t\tfor (j = 1; j < RANGE; j++) {\n\t\t\tq = rallocx(p, start_size+j, MALLOCX_ZERO);\n\t\t\texpect_ptr_not_null(q, \"Unexpected rallocx() error\");\n\t\t\tqsz = sallocx(q, 0);\n\t\t\tif (q != p || qsz != psz) {\n\t\t\t\texpect_false(validate_fill(q, FILL_BYTE, 0,\n\t\t\t\t    psz), \"Expected filled memory\");\n\t\t\t\texpect_false(validate_fill(q, 0, psz, qsz-psz),\n\t\t\t\t    \"Expected zeroed memory\");\n\t\t\t}\n\t\t\tif (psz != qsz) {\n\t\t\t\tmemset((void 
*)((uintptr_t)q+psz), FILL_BYTE,\n\t\t\t\t    qsz-psz);\n\t\t\t\tpsz = qsz;\n\t\t\t}\n\t\t\tp = q;\n\t\t}\n\t\texpect_false(validate_fill(p, FILL_BYTE, 0, psz),\n\t\t    \"Expected filled memory\");\n\t\tdallocx(p, 0);\n\t}\n#undef FILL_BYTE\n}\nTEST_END\n\nTEST_BEGIN(test_align) {\n\tvoid *p, *q;\n\tsize_t align;\n#define MAX_ALIGN (ZU(1) << 25)\n\n\talign = ZU(1);\n\tp = mallocx(1, MALLOCX_ALIGN(align));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\tfor (align <<= 1; align <= MAX_ALIGN; align <<= 1) {\n\t\tq = rallocx(p, 1, MALLOCX_ALIGN(align));\n\t\texpect_ptr_not_null(q,\n\t\t    \"Unexpected rallocx() error for align=%zu\", align);\n\t\texpect_ptr_null(\n\t\t    (void *)((uintptr_t)q & (align-1)),\n\t\t    \"%p inadequately aligned for align=%zu\",\n\t\t    q, align);\n\t\tp = q;\n\t}\n\tdallocx(p, 0);\n#undef MAX_ALIGN\n}\nTEST_END\n\nTEST_BEGIN(test_align_enum) {\n/* Span both small sizes and large sizes. */\n#define LG_MIN 12\n#define LG_MAX 15\n\tfor (size_t lg_align = LG_MIN; lg_align <= LG_MAX; ++lg_align) {\n\t\tfor (size_t lg_size = LG_MIN; lg_size <= LG_MAX; ++lg_size) {\n\t\t\tsize_t size = 1 << lg_size;\n\t\t\tfor (size_t lg_align_next = LG_MIN;\n\t\t\t    lg_align_next <= LG_MAX; ++lg_align_next) {\n\t\t\t\tint flags = MALLOCX_LG_ALIGN(lg_align);\n\t\t\t\tvoid *p = mallocx(1, flags);\n\t\t\t\tassert_ptr_not_null(p,\n\t\t\t\t    \"Unexpected mallocx() error\");\n\t\t\t\tassert_zu_eq(nallocx(1, flags),\n\t\t\t\t    TEST_MALLOC_SIZE(p),\n\t\t\t\t    \"Wrong mallocx() usable size\");\n\t\t\t\tint flags_next =\n\t\t\t\t    MALLOCX_LG_ALIGN(lg_align_next);\n\t\t\t\tp = rallocx(p, size, flags_next);\n\t\t\t\tassert_ptr_not_null(p,\n\t\t\t\t    \"Unexpected rallocx() error\");\n\t\t\t\texpect_zu_eq(nallocx(size, flags_next),\n\t\t\t\t    TEST_MALLOC_SIZE(p),\n\t\t\t\t    \"Wrong rallocx() usable size\");\n\t\t\t\tfree(p);\n\t\t\t}\n\t\t}\n\t}\n#undef LG_MAX\n#undef LG_MIN\n}\nTEST_END\n\nTEST_BEGIN(test_lg_align_and_zero) {\n\t/*\n\t * 
Use volatile to workaround buffer overflow false positives\n\t * (-D_FORTIFY_SOURCE=3).\n\t */\n\tvoid *volatile p, *volatile q;\n\tunsigned lg_align;\n\tsize_t sz;\n#define MAX_LG_ALIGN 25\n#define MAX_VALIDATE (ZU(1) << 22)\n\n\tlg_align = 0;\n\tp = mallocx(1, MALLOCX_LG_ALIGN(lg_align)|MALLOCX_ZERO);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\tfor (lg_align++; lg_align <= MAX_LG_ALIGN; lg_align++) {\n\t\tq = rallocx(p, 1, MALLOCX_LG_ALIGN(lg_align)|MALLOCX_ZERO);\n\t\texpect_ptr_not_null(q,\n\t\t    \"Unexpected rallocx() error for lg_align=%u\", lg_align);\n\t\texpect_ptr_null(\n\t\t    (void *)((uintptr_t)q & ((ZU(1) << lg_align)-1)),\n\t\t    \"%p inadequately aligned for lg_align=%u\", q, lg_align);\n\t\tsz = sallocx(q, 0);\n\t\tif ((sz << 1) <= MAX_VALIDATE) {\n\t\t\texpect_false(validate_fill(q, 0, 0, sz),\n\t\t\t    \"Expected zeroed memory\");\n\t\t} else {\n\t\t\texpect_false(validate_fill(q, 0, 0, MAX_VALIDATE),\n\t\t\t    \"Expected zeroed memory\");\n\t\t\texpect_false(validate_fill(\n\t\t\t    (void *)((uintptr_t)q+sz-MAX_VALIDATE),\n\t\t\t    0, 0, MAX_VALIDATE), \"Expected zeroed memory\");\n\t\t}\n\t\tp = q;\n\t}\n\tdallocx(p, 0);\n#undef MAX_VALIDATE\n#undef MAX_LG_ALIGN\n}\nTEST_END\n\n/*\n * GCC \"-Walloc-size-larger-than\" warning detects when one of the memory\n * allocation functions is called with a size larger than the maximum size that\n * they support. 
Here we want to explicitly test that the allocation functions\n * do indeed fail properly when this is the case, which triggers the warning.\n * Therefore we disable the warning for these tests.\n */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n\nTEST_BEGIN(test_overflow) {\n\tsize_t largemax;\n\tvoid *p;\n\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\tp = mallocx(1, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\texpect_ptr_null(rallocx(p, largemax+1, 0),\n\t    \"Expected OOM for rallocx(p, size=%#zx, 0)\", largemax+1);\n\n\texpect_ptr_null(rallocx(p, ZU(PTRDIFF_MAX)+1, 0),\n\t    \"Expected OOM for rallocx(p, size=%#zx, 0)\", ZU(PTRDIFF_MAX)+1);\n\n\texpect_ptr_null(rallocx(p, SIZE_T_MAX, 0),\n\t    \"Expected OOM for rallocx(p, size=%#zx, 0)\", SIZE_T_MAX);\n\n\texpect_ptr_null(rallocx(p, 1, MALLOCX_ALIGN(ZU(PTRDIFF_MAX)+1)),\n\t    \"Expected OOM for rallocx(p, size=1, MALLOCX_ALIGN(%#zx))\",\n\t    ZU(PTRDIFF_MAX)+1);\n\n\tdallocx(p, 0);\n}\nTEST_END\n\n/* Re-enable the \"-Walloc-size-larger-than=\" warning */\nJEMALLOC_DIAGNOSTIC_POP\n\nint\nmain(void) {\n\treturn test(\n\t    test_grow_and_shrink,\n\t    test_zero,\n\t    test_align,\n\t    test_align_enum,\n\t    test_lg_align_and_zero,\n\t    test_overflow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/sdallocx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define MAXALIGN (((size_t)1) << 22)\n#define NITER 3\n\nTEST_BEGIN(test_basic) {\n\tvoid *ptr = mallocx(64, 0);\n\tsdallocx(ptr, 64, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_alignment_and_size) {\n\tsize_t nsz, sz, alignment, total;\n\tunsigned i;\n\tvoid *ps[NITER];\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tps[i] = NULL;\n\t}\n\n\tfor (alignment = 8;\n\t    alignment <= MAXALIGN;\n\t    alignment <<= 1) {\n\t\ttotal = 0;\n\t\tfor (sz = 1;\n\t\t    sz < 3 * alignment && sz < (1U << 31);\n\t\t    sz += (alignment >> (LG_SIZEOF_PTR-1)) - 1) {\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tnsz = nallocx(sz, MALLOCX_ALIGN(alignment) |\n\t\t\t\t    MALLOCX_ZERO);\n\t\t\t\tps[i] = mallocx(sz, MALLOCX_ALIGN(alignment) |\n\t\t\t\t    MALLOCX_ZERO);\n\t\t\t\ttotal += nsz;\n\t\t\t\tif (total >= (MAXALIGN << 1)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tif (ps[i] != NULL) {\n\t\t\t\t\tsdallocx(ps[i], sz,\n\t\t\t\t\t    MALLOCX_ALIGN(alignment));\n\t\t\t\t\tps[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_basic,\n\t    test_alignment_and_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/slab_sizes.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/* Note that this test relies on the unusual slab sizes set in slab_sizes.sh. */\n\nTEST_BEGIN(test_slab_sizes) {\n\tunsigned nbins;\n\tsize_t page;\n\tsize_t sizemib[4];\n\tsize_t slabmib[4];\n\tsize_t len;\n\n\tlen = sizeof(nbins);\n\texpect_d_eq(mallctl(\"arenas.nbins\", &nbins, &len, NULL, 0), 0,\n\t    \"nbins mallctl failure\");\n\n\tlen = sizeof(page);\n\texpect_d_eq(mallctl(\"arenas.page\", &page, &len, NULL, 0), 0,\n\t    \"page mallctl failure\");\n\n\tlen = 4;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.size\", sizemib, &len), 0,\n\t    \"bin size mallctlnametomib failure\");\n\n\tlen = 4;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.slab_size\", slabmib, &len),\n\t    0, \"slab size mallctlnametomib failure\");\n\n\tsize_t biggest_slab_seen = 0;\n\n\tfor (unsigned i = 0; i < nbins; i++) {\n\t\tsize_t bin_size;\n\t\tsize_t slab_size;\n\t\tlen = sizeof(size_t);\n\t\tsizemib[2] = i;\n\t\tslabmib[2] = i;\n\t\texpect_d_eq(mallctlbymib(sizemib, 4, (void *)&bin_size, &len,\n\t\t    NULL, 0), 0, \"bin size mallctlbymib failure\");\n\n\t\tlen = sizeof(size_t);\n\t\texpect_d_eq(mallctlbymib(slabmib, 4, (void *)&slab_size, &len,\n\t\t    NULL, 0), 0, \"slab size mallctlbymib failure\");\n\n\t\tif (bin_size < 100) {\n\t\t\t/*\n\t\t\t * Then we should be as close to 17 as possible.  
Since\n\t\t\t * not all page sizes are valid (because of bitmap\n\t\t\t * limitations on the number of items in a slab), we\n\t\t\t * should at least make sure that the number of pages\n\t\t\t * goes up.\n\t\t\t */\n\t\t\texpect_zu_ge(slab_size, biggest_slab_seen,\n\t\t\t    \"Slab sizes should go up\");\n\t\t\tbiggest_slab_seen = slab_size;\n\t\t} else if (\n\t\t    (100 <= bin_size && bin_size < 128)\n\t\t    || (128 < bin_size && bin_size <= 200)) {\n\t\t\texpect_zu_eq(slab_size, page,\n\t\t\t    \"Forced-small slabs should be small\");\n\t\t} else if (bin_size == 128) {\n\t\t\texpect_zu_eq(slab_size, 2 * page,\n\t\t\t    \"Forced-2-page slab should be 2 pages\");\n\t\t} else if (200 < bin_size && bin_size <= 4096) {\n\t\t\texpect_zu_ge(slab_size, biggest_slab_seen,\n\t\t\t    \"Slab sizes should go up\");\n\t\t\tbiggest_slab_seen = slab_size;\n\t\t}\n\t}\n\t/*\n\t * For any reasonable configuration, 17 pages should be a valid slab\n\t * size for 4096-byte items.\n\t */\n\texpect_zu_eq(biggest_slab_seen, 17 * page, \"Didn't hit page target\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_slab_sizes);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/slab_sizes.sh",
    "content": "#!/bin/sh\n\n# Some screwy-looking slab sizes.\nexport MALLOC_CONF=\"slab_sizes:1-4096:17|100-200:1|128-128:2\"\n"
  },
  {
    "path": "deps/jemalloc/test/integration/smallocx.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"jemalloc/jemalloc_macros.h\"\n\n#define STR_HELPER(x) #x\n#define STR(x) STR_HELPER(x)\n\n#ifndef JEMALLOC_VERSION_GID_IDENT\n  #error \"JEMALLOC_VERSION_GID_IDENT not defined\"\n#endif\n\n#define JOIN(x, y) x ## y\n#define JOIN2(x, y) JOIN(x, y)\n#define smallocx JOIN2(smallocx_, JEMALLOC_VERSION_GID_IDENT)\n\ntypedef struct {\n\tvoid *ptr;\n\tsize_t size;\n} smallocx_return_t;\n\nextern smallocx_return_t\nsmallocx(size_t size, int flags);\n\nstatic unsigned\nget_nsizes_impl(const char *cmd) {\n\tunsigned ret;\n\tsize_t z;\n\n\tz = sizeof(unsigned);\n\texpect_d_eq(mallctl(cmd, (void *)&ret, &z, NULL, 0), 0,\n\t    \"Unexpected mallctl(\\\"%s\\\", ...) failure\", cmd);\n\n\treturn ret;\n}\n\nstatic unsigned\nget_nlarge(void) {\n\treturn get_nsizes_impl(\"arenas.nlextents\");\n}\n\nstatic size_t\nget_size_impl(const char *cmd, size_t ind) {\n\tsize_t ret;\n\tsize_t z;\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&ret, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\", %zu], ...) failure\", cmd, ind);\n\n\treturn ret;\n}\n\nstatic size_t\nget_large_size(size_t ind) {\n\treturn get_size_impl(\"arenas.lextent.0.size\", ind);\n}\n\n/*\n * On systems which can't merge extents, tests that call this function generate\n * a lot of dirty memory very quickly.  Purging between cycles mitigates\n * potential OOM on e.g. 32-bit Windows.\n */\nstatic void\npurge(void) {\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl error\");\n}\n\n/*\n * GCC \"-Walloc-size-larger-than\" warning detects when one of the memory\n * allocation functions is called with a size larger than the maximum size that\n * they support. 
Here we want to explicitly test that the allocation functions\n * do indeed fail properly when this is the case, which triggers the warning.\n * Therefore we disable the warning for these tests.\n */\nJEMALLOC_DIAGNOSTIC_PUSH\nJEMALLOC_DIAGNOSTIC_IGNORE_ALLOC_SIZE_LARGER_THAN\n\nTEST_BEGIN(test_overflow) {\n\tsize_t largemax;\n\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\texpect_ptr_null(smallocx(largemax+1, 0).ptr,\n\t    \"Expected OOM for smallocx(size=%#zx, 0)\", largemax+1);\n\n\texpect_ptr_null(smallocx(ZU(PTRDIFF_MAX)+1, 0).ptr,\n\t    \"Expected OOM for smallocx(size=%#zx, 0)\", ZU(PTRDIFF_MAX)+1);\n\n\texpect_ptr_null(smallocx(SIZE_T_MAX, 0).ptr,\n\t    \"Expected OOM for smallocx(size=%#zx, 0)\", SIZE_T_MAX);\n\n\texpect_ptr_null(smallocx(1, MALLOCX_ALIGN(ZU(PTRDIFF_MAX)+1)).ptr,\n\t    \"Expected OOM for smallocx(size=1, MALLOCX_ALIGN(%#zx))\",\n\t    ZU(PTRDIFF_MAX)+1);\n}\nTEST_END\n\nstatic void *\nremote_alloc(void *arg) {\n\tunsigned arena;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\tsize_t large_sz;\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large_sz, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\n\tsmallocx_return_t r\n\t    = smallocx(large_sz, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE);\n\tvoid *ptr = r.ptr;\n\texpect_zu_eq(r.size,\n\t    nallocx(large_sz, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE),\n\t    \"Expected smalloc(size,flags).size == nallocx(size,flags)\");\n\tvoid **ret = (void **)arg;\n\t*ret = ptr;\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_remote_free) {\n\tthd_t thd;\n\tvoid *ret;\n\tthd_create(&thd, remote_alloc, (void *)&ret);\n\tthd_join(thd, NULL);\n\texpect_ptr_not_null(ret, \"Unexpected smallocx failure\");\n\n\t/* Avoid TCACHE_NONE to explicitly test tcache_flush(). 
*/\n\tdallocx(ret, 0);\n\tmallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_oom) {\n\tsize_t largemax;\n\tbool oom;\n\tvoid *ptrs[3];\n\tunsigned i;\n\n\t/*\n\t * It should be impossible to allocate three objects that each consume\n\t * nearly half the virtual address space.\n\t */\n\tlargemax = get_large_size(get_nlarge()-1);\n\toom = false;\n\tfor (i = 0; i < sizeof(ptrs) / sizeof(void *); i++) {\n\t\tptrs[i] = smallocx(largemax, 0).ptr;\n\t\tif (ptrs[i] == NULL) {\n\t\t\toom = true;\n\t\t}\n\t}\n\texpect_true(oom,\n\t    \"Expected OOM during series of calls to smallocx(size=%zu, 0)\",\n\t    largemax);\n\tfor (i = 0; i < sizeof(ptrs) / sizeof(void *); i++) {\n\t\tif (ptrs[i] != NULL) {\n\t\t\tdallocx(ptrs[i], 0);\n\t\t}\n\t}\n\tpurge();\n\n#if LG_SIZEOF_PTR == 3\n\texpect_ptr_null(smallocx(0x8000000000000000ULL,\n\t    MALLOCX_ALIGN(0x8000000000000000ULL)).ptr,\n\t    \"Expected OOM for smallocx()\");\n\texpect_ptr_null(smallocx(0x8000000000000000ULL,\n\t    MALLOCX_ALIGN(0x80000000)).ptr,\n\t    \"Expected OOM for smallocx()\");\n#else\n\texpect_ptr_null(smallocx(0x80000000UL, MALLOCX_ALIGN(0x80000000UL)).ptr,\n\t    \"Expected OOM for smallocx()\");\n#endif\n}\nTEST_END\n\n/* Re-enable the \"-Walloc-size-larger-than=\" warning */\nJEMALLOC_DIAGNOSTIC_POP\n\nTEST_BEGIN(test_basic) {\n#define MAXSZ (((size_t)1) << 23)\n\tsize_t sz;\n\n\tfor (sz = 1; sz < MAXSZ; sz = nallocx(sz, 0) + 1) {\n\t\tsmallocx_return_t ret;\n\t\tsize_t nsz, rsz, smz;\n\t\tvoid *p;\n\t\tnsz = nallocx(sz, 0);\n\t\texpect_zu_ne(nsz, 0, \"Unexpected nallocx() error\");\n\t\tret = smallocx(sz, 0);\n\t\tp = ret.ptr;\n\t\tsmz = ret.size;\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected smallocx(size=%zx, flags=0) error\", sz);\n\t\trsz = sallocx(p, 0);\n\t\texpect_zu_ge(rsz, sz, \"Real size smaller than expected\");\n\t\texpect_zu_eq(nsz, rsz, \"nallocx()/sallocx() size mismatch\");\n\t\texpect_zu_eq(nsz, smz, \"nallocx()/smallocx() size 
mismatch\");\n\t\tdallocx(p, 0);\n\n\t\tret = smallocx(sz, 0);\n\t\tp = ret.ptr;\n\t\tsmz = ret.size;\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected smallocx(size=%zx, flags=0) error\", sz);\n\t\tdallocx(p, 0);\n\n\t\tnsz = nallocx(sz, MALLOCX_ZERO);\n\t\texpect_zu_ne(nsz, 0, \"Unexpected nallocx() error\");\n\t\texpect_zu_ne(smz, 0, \"Unexpected smallocx() error\");\n\t\tret = smallocx(sz, MALLOCX_ZERO);\n\t\tp = ret.ptr;\n\t\texpect_ptr_not_null(p,\n\t\t    \"Unexpected smallocx(size=%zx, flags=MALLOCX_ZERO) error\",\n\t\t    nsz);\n\t\trsz = sallocx(p, 0);\n\t\texpect_zu_eq(nsz, rsz, \"nallocx()/sallocx() rsize mismatch\");\n\t\texpect_zu_eq(nsz, smz, \"nallocx()/smallocx() size mismatch\");\n\t\tdallocx(p, 0);\n\t\tpurge();\n\t}\n#undef MAXSZ\n}\nTEST_END\n\nTEST_BEGIN(test_alignment_and_size) {\n\tconst char *percpu_arena;\n\tsize_t sz = sizeof(percpu_arena);\n\n\tif(mallctl(\"opt.percpu_arena\", (void *)&percpu_arena, &sz, NULL, 0) ||\n\t    strcmp(percpu_arena, \"disabled\") != 0) {\n\t\ttest_skip(\"test_alignment_and_size skipped: \"\n\t\t    \"not working with percpu arena.\");\n\t};\n#define MAXALIGN (((size_t)1) << 23)\n#define NITER 4\n\tsize_t nsz, rsz, smz, alignment, total;\n\tunsigned i;\n\tvoid *ps[NITER];\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tps[i] = NULL;\n\t}\n\n\tfor (alignment = 8;\n\t    alignment <= MAXALIGN;\n\t    alignment <<= 1) {\n\t\ttotal = 0;\n\t\tfor (sz = 1;\n\t\t    sz < 3 * alignment && sz < (1U << 31);\n\t\t    sz += (alignment >> (LG_SIZEOF_PTR-1)) - 1) {\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tnsz = nallocx(sz, MALLOCX_ALIGN(alignment) |\n\t\t\t\t    MALLOCX_ZERO);\n\t\t\t\texpect_zu_ne(nsz, 0,\n\t\t\t\t    \"nallocx() error for alignment=%zu, \"\n\t\t\t\t    \"size=%zu (%#zx)\", alignment, sz, sz);\n\t\t\t\tsmallocx_return_t ret\n\t\t\t\t    = smallocx(sz, MALLOCX_ALIGN(alignment) | MALLOCX_ZERO);\n\t\t\t\tps[i] = ret.ptr;\n\t\t\t\texpect_ptr_not_null(ps[i],\n\t\t\t\t    \"smallocx() error for alignment=%zu, 
\"\n\t\t\t\t    \"size=%zu (%#zx)\", alignment, sz, sz);\n\t\t\t\trsz = sallocx(ps[i], 0);\n\t\t\t\tsmz = ret.size;\n\t\t\t\texpect_zu_ge(rsz, sz,\n\t\t\t\t    \"Real size smaller than expected for \"\n\t\t\t\t    \"alignment=%zu, size=%zu\", alignment, sz);\n\t\t\t\texpect_zu_eq(nsz, rsz,\n\t\t\t\t    \"nallocx()/sallocx() size mismatch for \"\n\t\t\t\t    \"alignment=%zu, size=%zu\", alignment, sz);\n\t\t\t\texpect_zu_eq(nsz, smz,\n\t\t\t\t    \"nallocx()/smallocx() size mismatch for \"\n\t\t\t\t    \"alignment=%zu, size=%zu\", alignment, sz);\n\t\t\t\texpect_ptr_null(\n\t\t\t\t    (void *)((uintptr_t)ps[i] & (alignment-1)),\n\t\t\t\t    \"%p inadequately aligned for\"\n\t\t\t\t    \" alignment=%zu, size=%zu\", ps[i],\n\t\t\t\t    alignment, sz);\n\t\t\t\ttotal += rsz;\n\t\t\t\tif (total >= (MAXALIGN << 1)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; i < NITER; i++) {\n\t\t\t\tif (ps[i] != NULL) {\n\t\t\t\t\tdallocx(ps[i], 0);\n\t\t\t\t\tps[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpurge();\n\t}\n#undef MAXALIGN\n#undef NITER\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_overflow,\n\t    test_oom,\n\t    test_remote_free,\n\t    test_basic,\n\t    test_alignment_and_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/smallocx.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n    export MALLOC_CONF=\"junk:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/integration/thread_arena.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define NTHREADS 10\n\nvoid *\nthd_start(void *arg) {\n\tunsigned main_arena_ind = *(unsigned *)arg;\n\tvoid *p;\n\tunsigned arena_ind;\n\tsize_t size;\n\tint err;\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Error in malloc()\");\n\tfree(p);\n\n\tsize = sizeof(arena_ind);\n\tif ((err = mallctl(\"thread.arena\", (void *)&arena_ind, &size,\n\t    (void *)&main_arena_ind, sizeof(main_arena_ind)))) {\n\t\tchar buf[BUFERROR_BUF];\n\n\t\tbuferror(err, buf, sizeof(buf));\n\t\ttest_fail(\"Error in mallctl(): %s\", buf);\n\t}\n\n\tsize = sizeof(arena_ind);\n\tif ((err = mallctl(\"thread.arena\", (void *)&arena_ind, &size, NULL,\n\t    0))) {\n\t\tchar buf[BUFERROR_BUF];\n\n\t\tbuferror(err, buf, sizeof(buf));\n\t\ttest_fail(\"Error in mallctl(): %s\", buf);\n\t}\n\texpect_u_eq(arena_ind, main_arena_ind,\n\t    \"Arena index should be same as for main thread\");\n\n\treturn NULL;\n}\n\nstatic void\nmallctl_failure(int err) {\n\tchar buf[BUFERROR_BUF];\n\n\tbuferror(err, buf, sizeof(buf));\n\ttest_fail(\"Error in mallctl(): %s\", buf);\n}\n\nTEST_BEGIN(test_thread_arena) {\n\tvoid *p;\n\tint err;\n\tthd_t thds[NTHREADS];\n\tunsigned i;\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Error in malloc()\");\n\n\tunsigned arena_ind, old_arena_ind;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Arena creation failure\");\n\n\tsize_t size = sizeof(arena_ind);\n\tif ((err = mallctl(\"thread.arena\", (void *)&old_arena_ind, &size,\n\t    (void *)&arena_ind, sizeof(arena_ind))) != 0) {\n\t\tmallctl_failure(err);\n\t}\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_create(&thds[i], thd_start,\n\t\t    (void *)&arena_ind);\n\t}\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tintptr_t join_ret;\n\t\tthd_join(thds[i], (void *)&join_ret);\n\t\texpect_zd_eq(join_ret, 0, \"Unexpected thread join error\");\n\t}\n\tfree(p);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn 
test(\n\t    test_thread_arena);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/thread_tcache_enabled.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nvoid *\nthd_start(void *arg) {\n\tbool e0, e1;\n\tsize_t sz = sizeof(bool);\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\n\tif (e0) {\n\t\te1 = false;\n\t\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\t\texpect_true(e0, \"tcache should be enabled\");\n\t}\n\n\te1 = true;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_false(e0, \"tcache should be disabled\");\n\n\te1 = true;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_true(e0, \"tcache should be enabled\");\n\n\te1 = false;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_true(e0, \"tcache should be enabled\");\n\n\te1 = false;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_false(e0, \"tcache should be disabled\");\n\n\tfree(malloc(1));\n\te1 = true;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_false(e0, \"tcache should be disabled\");\n\n\tfree(malloc(1));\n\te1 = true;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_true(e0, \"tcache should be enabled\");\n\n\tfree(malloc(1));\n\te1 = false;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void *)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_true(e0, \"tcache should be enabled\");\n\n\tfree(malloc(1));\n\te1 = false;\n\texpect_d_eq(mallctl(\"thread.tcache.enabled\", (void 
*)&e0, &sz,\n\t    (void *)&e1, sz), 0, \"Unexpected mallctl() error\");\n\texpect_false(e0, \"tcache should be disabled\");\n\n\tfree(malloc(1));\n\treturn NULL;\n}\n\nTEST_BEGIN(test_main_thread) {\n\tthd_start(NULL);\n}\nTEST_END\n\nTEST_BEGIN(test_subthread) {\n\tthd_t thd;\n\n\tthd_create(&thd, thd_start, NULL);\n\tthd_join(thd, NULL);\n}\nTEST_END\n\nint\nmain(void) {\n\t/* Run tests multiple times to check for bad interactions. */\n\treturn test(\n\t    test_main_thread,\n\t    test_subthread,\n\t    test_main_thread,\n\t    test_subthread,\n\t    test_main_thread);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/xallocx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * Use a separate arena for xallocx() extension/contraction tests so that\n * internal allocation e.g. by heap profiling can't interpose allocations where\n * xallocx() would ordinarily be able to extend.\n */\nstatic unsigned\narena_ind(void) {\n\tstatic unsigned ind = 0;\n\n\tif (ind == 0) {\n\t\tsize_t sz = sizeof(ind);\n\t\texpect_d_eq(mallctl(\"arenas.create\", (void *)&ind, &sz, NULL,\n\t\t    0), 0, \"Unexpected mallctl failure creating arena\");\n\t}\n\n\treturn ind;\n}\n\nTEST_BEGIN(test_same_size) {\n\tvoid *p;\n\tsize_t sz, tsz;\n\n\tp = mallocx(42, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tsz = sallocx(p, 0);\n\n\ttsz = xallocx(p, sz, 0, 0);\n\texpect_zu_eq(tsz, sz, \"Unexpected size change: %zu --> %zu\", sz, tsz);\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_extra_no_move) {\n\tvoid *p;\n\tsize_t sz, tsz;\n\n\tp = mallocx(42, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tsz = sallocx(p, 0);\n\n\ttsz = xallocx(p, sz, sz-42, 0);\n\texpect_zu_eq(tsz, sz, \"Unexpected size change: %zu --> %zu\", sz, tsz);\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_no_move_fail) {\n\tvoid *p;\n\tsize_t sz, tsz;\n\n\tp = mallocx(42, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\tsz = sallocx(p, 0);\n\n\ttsz = xallocx(p, sz + 5, 0, 0);\n\texpect_zu_eq(tsz, sz, \"Unexpected size change: %zu --> %zu\", sz, tsz);\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nstatic unsigned\nget_nsizes_impl(const char *cmd) {\n\tunsigned ret;\n\tsize_t z;\n\n\tz = sizeof(unsigned);\n\texpect_d_eq(mallctl(cmd, (void *)&ret, &z, NULL, 0), 0,\n\t    \"Unexpected mallctl(\\\"%s\\\", ...) 
failure\", cmd);\n\n\treturn ret;\n}\n\nstatic unsigned\nget_nsmall(void) {\n\treturn get_nsizes_impl(\"arenas.nbins\");\n}\n\nstatic unsigned\nget_nlarge(void) {\n\treturn get_nsizes_impl(\"arenas.nlextents\");\n}\n\nstatic size_t\nget_size_impl(const char *cmd, size_t ind) {\n\tsize_t ret;\n\tsize_t z;\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&ret, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\", %zu], ...) failure\", cmd, ind);\n\n\treturn ret;\n}\n\nstatic size_t\nget_small_size(size_t ind) {\n\treturn get_size_impl(\"arenas.bin.0.size\", ind);\n}\n\nstatic size_t\nget_large_size(size_t ind) {\n\treturn get_size_impl(\"arenas.lextent.0.size\", ind);\n}\n\nTEST_BEGIN(test_size) {\n\tsize_t small0, largemax;\n\tvoid *p;\n\n\t/* Get size classes. */\n\tsmall0 = get_small_size(0);\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\tp = mallocx(small0, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\t/* Test smallest supported size. */\n\texpect_zu_eq(xallocx(p, 1, 0, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\n\t/* Test largest supported size. */\n\texpect_zu_le(xallocx(p, largemax, 0, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\t/* Test size overflow. */\n\texpect_zu_le(xallocx(p, largemax+1, 0, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, SIZE_T_MAX, 0, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_size_extra_overflow) {\n\tsize_t small0, largemax;\n\tvoid *p;\n\n\t/* Get size classes. 
*/\n\tsmall0 = get_small_size(0);\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\tp = mallocx(small0, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\t/* Test overflows that can be resolved by clamping extra. */\n\texpect_zu_le(xallocx(p, largemax-1, 2, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, largemax, 1, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\t/* Test overflow such that largemax-size underflows. */\n\texpect_zu_le(xallocx(p, largemax+1, 2, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, largemax+2, 3, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, SIZE_T_MAX-2, 2, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, SIZE_T_MAX-1, 1, 0), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_extra_small) {\n\tsize_t small0, small1, largemax;\n\tvoid *p;\n\n\t/* Get size classes. */\n\tsmall0 = get_small_size(0);\n\tsmall1 = get_small_size(1);\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\tp = mallocx(small0, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\texpect_zu_eq(xallocx(p, small1, 0, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\n\texpect_zu_eq(xallocx(p, small1, 0, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\n\texpect_zu_eq(xallocx(p, small0, small1 - small0, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\n\t/* Test size+extra overflow. */\n\texpect_zu_eq(xallocx(p, small0, largemax - small0 + 1, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_eq(xallocx(p, small0, SIZE_T_MAX - small0, 0), small0,\n\t    \"Unexpected xallocx() behavior\");\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_extra_large) {\n\tint flags = MALLOCX_ARENA(arena_ind());\n\tsize_t smallmax, large1, large2, large3, largemax;\n\tvoid *p;\n\n\t/* Get size classes. 
*/\n\tsmallmax = get_small_size(get_nsmall()-1);\n\tlarge1 = get_large_size(1);\n\tlarge2 = get_large_size(2);\n\tlarge3 = get_large_size(3);\n\tlargemax = get_large_size(get_nlarge()-1);\n\n\tp = mallocx(large3, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\n\texpect_zu_eq(xallocx(p, large3, 0, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\t/* Test size decrease with zero extra. */\n\texpect_zu_ge(xallocx(p, large1, 0, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_ge(xallocx(p, smallmax, 0, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\n\tif (xallocx(p, large3, 0, flags) != large3) {\n\t\tp = rallocx(p, large3, flags);\n\t\texpect_ptr_not_null(p, \"Unexpected rallocx() failure\");\n\t}\n\t/* Test size decrease with non-zero extra. */\n\texpect_zu_eq(xallocx(p, large1, large3 - large1, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_eq(xallocx(p, large2, large3 - large2, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_ge(xallocx(p, large1, large2 - large1, flags), large2,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_ge(xallocx(p, smallmax, large1 - smallmax, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\n\texpect_zu_ge(xallocx(p, large1, 0, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\t/* Test size increase with zero extra. */\n\texpect_zu_le(xallocx(p, large3, 0, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\texpect_zu_le(xallocx(p, largemax+1, 0, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\n\texpect_zu_ge(xallocx(p, large1, 0, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\t/* Test size increase with non-zero extra. 
*/\n\texpect_zu_le(xallocx(p, large1, SIZE_T_MAX - large1, flags), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\texpect_zu_ge(xallocx(p, large1, 0, flags), large1,\n\t    \"Unexpected xallocx() behavior\");\n\t/* Test size increase with non-zero extra. */\n\texpect_zu_le(xallocx(p, large1, large3 - large1, flags), large3,\n\t    \"Unexpected xallocx() behavior\");\n\n\tif (xallocx(p, large3, 0, flags) != large3) {\n\t\tp = rallocx(p, large3, flags);\n\t\texpect_ptr_not_null(p, \"Unexpected rallocx() failure\");\n\t}\n\t/* Test size+extra overflow. */\n\texpect_zu_le(xallocx(p, large3, largemax - large3 + 1, flags), largemax,\n\t    \"Unexpected xallocx() behavior\");\n\n\tdallocx(p, flags);\n}\nTEST_END\n\nstatic void\nprint_filled_extents(const void *p, uint8_t c, size_t len) {\n\tconst uint8_t *pc = (const uint8_t *)p;\n\tsize_t i, range0;\n\tuint8_t c0;\n\n\tmalloc_printf(\"  p=%p, c=%#x, len=%zu:\", p, c, len);\n\trange0 = 0;\n\tc0 = pc[0];\n\tfor (i = 0; i < len; i++) {\n\t\tif (pc[i] != c0) {\n\t\t\tmalloc_printf(\" %#x[%zu..%zu)\", c0, range0, i);\n\t\t\trange0 = i;\n\t\t\tc0 = pc[i];\n\t\t}\n\t}\n\tmalloc_printf(\" %#x[%zu..%zu)\\n\", c0, range0, i);\n}\n\nstatic bool\nvalidate_fill(const void *p, uint8_t c, size_t offset, size_t len) {\n\tconst uint8_t *pc = (const uint8_t *)p;\n\tbool err;\n\tsize_t i;\n\n\tfor (i = offset, err = false; i < offset+len; i++) {\n\t\tif (pc[i] != c) {\n\t\t\terr = true;\n\t\t}\n\t}\n\n\tif (err) {\n\t\tprint_filled_extents(p, c, offset + len);\n\t}\n\n\treturn err;\n}\n\nstatic void\ntest_zero(size_t szmin, size_t szmax) {\n\tint flags = MALLOCX_ARENA(arena_ind()) | MALLOCX_ZERO;\n\tsize_t sz, nsz;\n\tvoid *p;\n#define FILL_BYTE 0x7aU\n\n\tsz = szmax;\n\tp = mallocx(sz, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() error\");\n\texpect_false(validate_fill(p, 0x00, 0, sz), \"Memory not filled: sz=%zu\",\n\t    sz);\n\n\t/*\n\t * Fill with non-zero so that non-debug builds are more likely to detect\n\t * 
errors.\n\t */\n\tmemset(p, FILL_BYTE, sz);\n\texpect_false(validate_fill(p, FILL_BYTE, 0, sz),\n\t    \"Memory not filled: sz=%zu\", sz);\n\n\t/* Shrink in place so that we can expect growing in place to succeed. */\n\tsz = szmin;\n\tif (xallocx(p, sz, 0, flags) != sz) {\n\t\tp = rallocx(p, sz, flags);\n\t\texpect_ptr_not_null(p, \"Unexpected rallocx() failure\");\n\t}\n\texpect_false(validate_fill(p, FILL_BYTE, 0, sz),\n\t    \"Memory not filled: sz=%zu\", sz);\n\n\tfor (sz = szmin; sz < szmax; sz = nsz) {\n\t\tnsz = nallocx(sz+1, flags);\n\t\tif (xallocx(p, sz+1, 0, flags) != nsz) {\n\t\t\tp = rallocx(p, sz+1, flags);\n\t\t\texpect_ptr_not_null(p, \"Unexpected rallocx() failure\");\n\t\t}\n\t\texpect_false(validate_fill(p, FILL_BYTE, 0, sz),\n\t\t    \"Memory not filled: sz=%zu\", sz);\n\t\texpect_false(validate_fill(p, 0x00, sz, nsz-sz),\n\t\t    \"Memory not filled: sz=%zu, nsz-sz=%zu\", sz, nsz-sz);\n\t\tmemset((void *)((uintptr_t)p + sz), FILL_BYTE, nsz-sz);\n\t\texpect_false(validate_fill(p, FILL_BYTE, 0, nsz),\n\t\t    \"Memory not filled: nsz=%zu\", nsz);\n\t}\n\n\tdallocx(p, flags);\n}\n\nTEST_BEGIN(test_zero_large) {\n\tsize_t large0, large1;\n\n\t/* Get size classes. */\n\tlarge0 = get_large_size(0);\n\tlarge1 = get_large_size(1);\n\n\ttest_zero(large1, large0 * 2);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_same_size,\n\t    test_extra_no_move,\n\t    test_no_move_fail,\n\t    test_size,\n\t    test_size_extra_overflow,\n\t    test_extra_small,\n\t    test_extra_large,\n\t    test_zero_large);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/integration/xallocx.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"junk:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/src/SFMT.c",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n/**\n * @file  SFMT.c\n * @brief SIMD oriented Fast Mersenne Twister(SFMT)\n *\n * @author Mutsuo Saito (Hiroshima University)\n * @author Makoto Matsumoto (Hiroshima University)\n *\n * Copyright (C) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n * University. All rights reserved.\n *\n * The new BSD License is applied to this software, see LICENSE.txt\n */\n#define SFMT_C_\n#include \"test/jemalloc_test.h\"\n#include \"test/SFMT-params.h\"\n\n#if defined(JEMALLOC_BIG_ENDIAN) && !defined(BIG_ENDIAN64)\n#define BIG_ENDIAN64 1\n#endif\n#if defined(__BIG_ENDIAN__) && !defined(__amd64) && !defined(BIG_ENDIAN64)\n#define BIG_ENDIAN64 1\n#endif\n#if defined(HAVE_ALTIVEC) && !defined(BIG_ENDIAN64)\n#define BIG_ENDIAN64 1\n#endif\n#if defined(ONLY64) && !defined(BIG_ENDIAN64)\n  #if defined(__GNUC__)\n    #error \"-DONLY64 must be specified with -DBIG_ENDIAN64\"\n  #endif\n#undef ONLY64\n#endif\n/*------------------------------------------------------\n  128-bit SIMD data type for Altivec, SSE2 or standard C\n  ------------------------------------------------------*/\n#if defined(HAVE_ALTIVEC)\n/** 128-bit data structure */\nunion W128_T {\n    vector unsigned int s;\n    uint32_t u[4];\n};\n/** 128-bit data type */\ntypedef union W128_T w128_t;\n\n#elif defined(HAVE_SSE2)\n/** 128-bit data structure */\nunion W128_T {\n    __m128i si;\n    uint32_t u[4];\n};\n/** 128-bit data type */\ntypedef union W128_T 
w128_t;\n\n#else\n\n/** 128-bit data structure */\nstruct W128_T {\n    uint32_t u[4];\n};\n/** 128-bit data type */\ntypedef struct W128_T w128_t;\n\n#endif\n\nstruct sfmt_s {\n    /** the 128-bit internal state array */\n    w128_t sfmt[N];\n    /** index counter to the 32-bit internal state array */\n    int idx;\n    /** a flag: it is 0 if and only if the internal state is not yet\n     * initialized. */\n    int initialized;\n};\n\n/*--------------------------------------\n  FILE GLOBAL VARIABLES\n  internal state, index counter and flag\n  --------------------------------------*/\n\n/** a parity check vector which certificate the period of 2^{MEXP} */\nstatic uint32_t parity[4] = {PARITY1, PARITY2, PARITY3, PARITY4};\n\n/*----------------\n  STATIC FUNCTIONS\n  ----------------*/\nstatic inline int idxof(int i);\n#if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2))\nstatic inline void rshift128(w128_t *out,  w128_t const *in, int shift);\nstatic inline void lshift128(w128_t *out,  w128_t const *in, int shift);\n#endif\nstatic inline void gen_rand_all(sfmt_t *ctx);\nstatic inline void gen_rand_array(sfmt_t *ctx, w128_t *array, int size);\nstatic inline uint32_t func1(uint32_t x);\nstatic inline uint32_t func2(uint32_t x);\nstatic void period_certification(sfmt_t *ctx);\n#if defined(BIG_ENDIAN64) && !defined(ONLY64)\nstatic inline void swap(w128_t *array, int size);\n#endif\n\n#if defined(HAVE_ALTIVEC)\n  #include \"test/SFMT-alti.h\"\n#elif defined(HAVE_SSE2)\n  #include \"test/SFMT-sse2.h\"\n#endif\n\n/**\n * This function simulate a 64-bit index of LITTLE ENDIAN\n * in BIG ENDIAN machine.\n */\n#ifdef ONLY64\nstatic inline int idxof(int i) {\n    return i ^ 1;\n}\n#else\nstatic inline int idxof(int i) {\n    return i;\n}\n#endif\n/**\n * This function simulates SIMD 128-bit right shift by the standard C.\n * The 128-bit integer given in in is shifted by (shift * 8) bits.\n * This function simulates the LITTLE ENDIAN SIMD.\n * @param out the output of this 
function\n * @param in the 128-bit data to be shifted\n * @param shift the shift value\n */\n#if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2))\n#ifdef ONLY64\nstatic inline void rshift128(w128_t *out, w128_t const *in, int shift) {\n    uint64_t th, tl, oh, ol;\n\n    th = ((uint64_t)in->u[2] << 32) | ((uint64_t)in->u[3]);\n    tl = ((uint64_t)in->u[0] << 32) | ((uint64_t)in->u[1]);\n\n    oh = th >> (shift * 8);\n    ol = tl >> (shift * 8);\n    ol |= th << (64 - shift * 8);\n    out->u[0] = (uint32_t)(ol >> 32);\n    out->u[1] = (uint32_t)ol;\n    out->u[2] = (uint32_t)(oh >> 32);\n    out->u[3] = (uint32_t)oh;\n}\n#else\nstatic inline void rshift128(w128_t *out, w128_t const *in, int shift) {\n    uint64_t th, tl, oh, ol;\n\n    th = ((uint64_t)in->u[3] << 32) | ((uint64_t)in->u[2]);\n    tl = ((uint64_t)in->u[1] << 32) | ((uint64_t)in->u[0]);\n\n    oh = th >> (shift * 8);\n    ol = tl >> (shift * 8);\n    ol |= th << (64 - shift * 8);\n    out->u[1] = (uint32_t)(ol >> 32);\n    out->u[0] = (uint32_t)ol;\n    out->u[3] = (uint32_t)(oh >> 32);\n    out->u[2] = (uint32_t)oh;\n}\n#endif\n/**\n * This function simulates SIMD 128-bit left shift by the standard C.\n * The 128-bit integer given in in is shifted by (shift * 8) bits.\n * This function simulates the LITTLE ENDIAN SIMD.\n * @param out the output of this function\n * @param in the 128-bit data to be shifted\n * @param shift the shift value\n */\n#ifdef ONLY64\nstatic inline void lshift128(w128_t *out, w128_t const *in, int shift) {\n    uint64_t th, tl, oh, ol;\n\n    th = ((uint64_t)in->u[2] << 32) | ((uint64_t)in->u[3]);\n    tl = ((uint64_t)in->u[0] << 32) | ((uint64_t)in->u[1]);\n\n    oh = th << (shift * 8);\n    ol = tl << (shift * 8);\n    oh |= tl >> (64 - shift * 8);\n    out->u[0] = (uint32_t)(ol >> 32);\n    out->u[1] = (uint32_t)ol;\n    out->u[2] = (uint32_t)(oh >> 32);\n    out->u[3] = (uint32_t)oh;\n}\n#else\nstatic inline void lshift128(w128_t *out, w128_t const *in, int shift) {\n    
uint64_t th, tl, oh, ol;\n\n    th = ((uint64_t)in->u[3] << 32) | ((uint64_t)in->u[2]);\n    tl = ((uint64_t)in->u[1] << 32) | ((uint64_t)in->u[0]);\n\n    oh = th << (shift * 8);\n    ol = tl << (shift * 8);\n    oh |= tl >> (64 - shift * 8);\n    out->u[1] = (uint32_t)(ol >> 32);\n    out->u[0] = (uint32_t)ol;\n    out->u[3] = (uint32_t)(oh >> 32);\n    out->u[2] = (uint32_t)oh;\n}\n#endif\n#endif\n\n/**\n * This function represents the recursion formula.\n * @param r output\n * @param a a 128-bit part of the internal state array\n * @param b a 128-bit part of the internal state array\n * @param c a 128-bit part of the internal state array\n * @param d a 128-bit part of the internal state array\n */\n#if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2))\n#ifdef ONLY64\nstatic inline void do_recursion(w128_t *r, w128_t *a, w128_t *b, w128_t *c,\n\t\t\t\tw128_t *d) {\n    w128_t x;\n    w128_t y;\n\n    lshift128(&x, a, SL2);\n    rshift128(&y, c, SR2);\n    r->u[0] = a->u[0] ^ x.u[0] ^ ((b->u[0] >> SR1) & MSK2) ^ y.u[0]\n\t^ (d->u[0] << SL1);\n    r->u[1] = a->u[1] ^ x.u[1] ^ ((b->u[1] >> SR1) & MSK1) ^ y.u[1]\n\t^ (d->u[1] << SL1);\n    r->u[2] = a->u[2] ^ x.u[2] ^ ((b->u[2] >> SR1) & MSK4) ^ y.u[2]\n\t^ (d->u[2] << SL1);\n    r->u[3] = a->u[3] ^ x.u[3] ^ ((b->u[3] >> SR1) & MSK3) ^ y.u[3]\n\t^ (d->u[3] << SL1);\n}\n#else\nstatic inline void do_recursion(w128_t *r, w128_t *a, w128_t *b, w128_t *c,\n\t\t\t\tw128_t *d) {\n    w128_t x;\n    w128_t y;\n\n    lshift128(&x, a, SL2);\n    rshift128(&y, c, SR2);\n    r->u[0] = a->u[0] ^ x.u[0] ^ ((b->u[0] >> SR1) & MSK1) ^ y.u[0]\n\t^ (d->u[0] << SL1);\n    r->u[1] = a->u[1] ^ x.u[1] ^ ((b->u[1] >> SR1) & MSK2) ^ y.u[1]\n\t^ (d->u[1] << SL1);\n    r->u[2] = a->u[2] ^ x.u[2] ^ ((b->u[2] >> SR1) & MSK3) ^ y.u[2]\n\t^ (d->u[2] << SL1);\n    r->u[3] = a->u[3] ^ x.u[3] ^ ((b->u[3] >> SR1) & MSK4) ^ y.u[3]\n\t^ (d->u[3] << SL1);\n}\n#endif\n#endif\n\n#if (!defined(HAVE_ALTIVEC)) && (!defined(HAVE_SSE2))\n/**\n * This function 
fills the internal state array with pseudorandom\n * integers.\n */\nstatic inline void gen_rand_all(sfmt_t *ctx) {\n    int i;\n    w128_t *r1, *r2;\n\n    r1 = &ctx->sfmt[N - 2];\n    r2 = &ctx->sfmt[N - 1];\n    for (i = 0; i < N - POS1; i++) {\n\tdo_recursion(&ctx->sfmt[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1], r1,\n\t  r2);\n\tr1 = r2;\n\tr2 = &ctx->sfmt[i];\n    }\n    for (; i < N; i++) {\n\tdo_recursion(&ctx->sfmt[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1 - N], r1,\n\t  r2);\n\tr1 = r2;\n\tr2 = &ctx->sfmt[i];\n    }\n}\n\n/**\n * This function fills the user-specified array with pseudorandom\n * integers.\n *\n * @param array an 128-bit array to be filled by pseudorandom numbers.\n * @param size number of 128-bit pseudorandom numbers to be generated.\n */\nstatic inline void gen_rand_array(sfmt_t *ctx, w128_t *array, int size) {\n    int i, j;\n    w128_t *r1, *r2;\n\n    r1 = &ctx->sfmt[N - 2];\n    r2 = &ctx->sfmt[N - 1];\n    for (i = 0; i < N - POS1; i++) {\n\tdo_recursion(&array[i], &ctx->sfmt[i], &ctx->sfmt[i + POS1], r1, r2);\n\tr1 = r2;\n\tr2 = &array[i];\n    }\n    for (; i < N; i++) {\n\tdo_recursion(&array[i], &ctx->sfmt[i], &array[i + POS1 - N], r1, r2);\n\tr1 = r2;\n\tr2 = &array[i];\n    }\n    for (; i < size - N; i++) {\n\tdo_recursion(&array[i], &array[i - N], &array[i + POS1 - N], r1, r2);\n\tr1 = r2;\n\tr2 = &array[i];\n    }\n    for (j = 0; j < 2 * N - size; j++) {\n\tctx->sfmt[j] = array[j + size - N];\n    }\n    for (; i < size; i++, j++) {\n\tdo_recursion(&array[i], &array[i - N], &array[i + POS1 - N], r1, r2);\n\tr1 = r2;\n\tr2 = &array[i];\n\tctx->sfmt[j] = array[i];\n    }\n}\n#endif\n\n#if defined(BIG_ENDIAN64) && !defined(ONLY64) && !defined(HAVE_ALTIVEC)\nstatic inline void swap(w128_t *array, int size) {\n    int i;\n    uint32_t x, y;\n\n    for (i = 0; i < size; i++) {\n\tx = array[i].u[0];\n\ty = array[i].u[2];\n\tarray[i].u[0] = array[i].u[1];\n\tarray[i].u[2] = array[i].u[3];\n\tarray[i].u[1] = x;\n\tarray[i].u[3] = y;\n    
}\n}\n#endif\n/**\n * This function represents a function used in the initialization\n * by init_by_array\n * @param x 32-bit integer\n * @return 32-bit integer\n */\nstatic uint32_t func1(uint32_t x) {\n    return (x ^ (x >> 27)) * (uint32_t)1664525UL;\n}\n\n/**\n * This function represents a function used in the initialization\n * by init_by_array\n * @param x 32-bit integer\n * @return 32-bit integer\n */\nstatic uint32_t func2(uint32_t x) {\n    return (x ^ (x >> 27)) * (uint32_t)1566083941UL;\n}\n\n/**\n * This function certificate the period of 2^{MEXP}\n */\nstatic void period_certification(sfmt_t *ctx) {\n    int inner = 0;\n    int i, j;\n    uint32_t work;\n    uint32_t *psfmt32 = &ctx->sfmt[0].u[0];\n\n    for (i = 0; i < 4; i++)\n\tinner ^= psfmt32[idxof(i)] & parity[i];\n    for (i = 16; i > 0; i >>= 1)\n\tinner ^= inner >> i;\n    inner &= 1;\n    /* check OK */\n    if (inner == 1) {\n\treturn;\n    }\n    /* check NG, and modification */\n    for (i = 0; i < 4; i++) {\n\twork = 1;\n\tfor (j = 0; j < 32; j++) {\n\t    if ((work & parity[i]) != 0) {\n\t\tpsfmt32[idxof(i)] ^= work;\n\t\treturn;\n\t    }\n\t    work = work << 1;\n\t}\n    }\n}\n\n/*----------------\n  PUBLIC FUNCTIONS\n  ----------------*/\n/**\n * This function returns the identification string.\n * The string shows the word size, the Mersenne exponent,\n * and all parameters of this generator.\n */\nconst char *get_idstring(void) {\n    return IDSTR;\n}\n\n/**\n * This function returns the minimum size of array used for \\b\n * fill_array32() function.\n * @return minimum size of array used for fill_array32() function.\n */\nint get_min_array_size32(void) {\n    return N32;\n}\n\n/**\n * This function returns the minimum size of array used for \\b\n * fill_array64() function.\n * @return minimum size of array used for fill_array64() function.\n */\nint get_min_array_size64(void) {\n    return N64;\n}\n\n#ifndef ONLY64\n/**\n * This function generates and returns 32-bit pseudorandom 
number.\n * init_gen_rand or init_by_array must be called before this function.\n * @return 32-bit pseudorandom number\n */\nuint32_t gen_rand32(sfmt_t *ctx) {\n    uint32_t r;\n    uint32_t *psfmt32 = &ctx->sfmt[0].u[0];\n\n    assert(ctx->initialized);\n    if (ctx->idx >= N32) {\n\tgen_rand_all(ctx);\n\tctx->idx = 0;\n    }\n    r = psfmt32[ctx->idx++];\n    return r;\n}\n\n/* Generate a random integer in [0..limit). */\nuint32_t gen_rand32_range(sfmt_t *ctx, uint32_t limit) {\n    uint32_t ret, above;\n\n    above = 0xffffffffU - (0xffffffffU % limit);\n    while (1) {\n\tret = gen_rand32(ctx);\n\tif (ret < above) {\n\t    ret %= limit;\n\t    break;\n\t}\n    }\n    return ret;\n}\n#endif\n/**\n * This function generates and returns 64-bit pseudorandom number.\n * init_gen_rand or init_by_array must be called before this function.\n * The function gen_rand64 should not be called after gen_rand32,\n * unless an initialization is again executed.\n * @return 64-bit pseudorandom number\n */\nuint64_t gen_rand64(sfmt_t *ctx) {\n#if defined(BIG_ENDIAN64) && !defined(ONLY64)\n    uint32_t r1, r2;\n    uint32_t *psfmt32 = &ctx->sfmt[0].u[0];\n#else\n    uint64_t r;\n    uint64_t *psfmt64 = (uint64_t *)&ctx->sfmt[0].u[0];\n#endif\n\n    assert(ctx->initialized);\n    assert(ctx->idx % 2 == 0);\n\n    if (ctx->idx >= N32) {\n\tgen_rand_all(ctx);\n\tctx->idx = 0;\n    }\n#if defined(BIG_ENDIAN64) && !defined(ONLY64)\n    r1 = psfmt32[ctx->idx];\n    r2 = psfmt32[ctx->idx + 1];\n    ctx->idx += 2;\n    return ((uint64_t)r2 << 32) | r1;\n#else\n    r = psfmt64[ctx->idx / 2];\n    ctx->idx += 2;\n    return r;\n#endif\n}\n\n/* Generate a random integer in [0..limit). 
*/\nuint64_t gen_rand64_range(sfmt_t *ctx, uint64_t limit) {\n    uint64_t ret, above;\n\n    above = KQU(0xffffffffffffffff) - (KQU(0xffffffffffffffff) % limit);\n    while (1) {\n\tret = gen_rand64(ctx);\n\tif (ret < above) {\n\t    ret %= limit;\n\t    break;\n\t}\n    }\n    return ret;\n}\n\n#ifndef ONLY64\n/**\n * This function generates pseudorandom 32-bit integers in the\n * specified array[] by one call. The number of pseudorandom integers\n * is specified by the argument size, which must be at least 624 and a\n * multiple of four.  The generation by this function is much faster\n * than the following gen_rand function.\n *\n * For initialization, init_gen_rand or init_by_array must be called\n * before the first call of this function. This function can not be\n * used after calling gen_rand function, without initialization.\n *\n * @param array an array where pseudorandom 32-bit integers are filled\n * by this function.  The pointer to the array must be \\b \"aligned\"\n * (namely, must be a multiple of 16) in the SIMD version, since it\n * refers to the address of a 128-bit integer.  In the standard C\n * version, the pointer is arbitrary.\n *\n * @param size the number of 32-bit pseudorandom integers to be\n * generated.  size must be a multiple of 4, and greater than or equal\n * to (MEXP / 128 + 1) * 4.\n *\n * @note \\b memalign or \\b posix_memalign is available to get aligned\n * memory. Mac OSX doesn't have these functions, but \\b malloc of OSX\n * returns the pointer to the aligned memory block.\n */\nvoid fill_array32(sfmt_t *ctx, uint32_t *array, int size) {\n    assert(ctx->initialized);\n    assert(ctx->idx == N32);\n    assert(size % 4 == 0);\n    assert(size >= N32);\n\n    gen_rand_array(ctx, (w128_t *)array, size / 4);\n    ctx->idx = N32;\n}\n#endif\n\n/**\n * This function generates pseudorandom 64-bit integers in the\n * specified array[] by one call. 
The number of pseudorandom integers\n * is specified by the argument size, which must be at least 312 and a\n * multiple of two.  The generation by this function is much faster\n * than the following gen_rand function.\n *\n * For initialization, init_gen_rand or init_by_array must be called\n * before the first call of this function. This function can not be\n * used after calling gen_rand function, without initialization.\n *\n * @param array an array where pseudorandom 64-bit integers are filled\n * by this function.  The pointer to the array must be \"aligned\"\n * (namely, must be a multiple of 16) in the SIMD version, since it\n * refers to the address of a 128-bit integer.  In the standard C\n * version, the pointer is arbitrary.\n *\n * @param size the number of 64-bit pseudorandom integers to be\n * generated.  size must be a multiple of 2, and greater than or equal\n * to (MEXP / 128 + 1) * 2\n *\n * @note \\b memalign or \\b posix_memalign is available to get aligned\n * memory. 
Mac OSX doesn't have these functions, but \\b malloc of OSX\n * returns the pointer to the aligned memory block.\n */\nvoid fill_array64(sfmt_t *ctx, uint64_t *array, int size) {\n    assert(ctx->initialized);\n    assert(ctx->idx == N32);\n    assert(size % 2 == 0);\n    assert(size >= N64);\n\n    gen_rand_array(ctx, (w128_t *)array, size / 2);\n    ctx->idx = N32;\n\n#if defined(BIG_ENDIAN64) && !defined(ONLY64)\n    swap((w128_t *)array, size /2);\n#endif\n}\n\n/**\n * This function initializes the internal state array with a 32-bit\n * integer seed.\n *\n * @param seed a 32-bit integer used as the seed.\n */\nsfmt_t *init_gen_rand(uint32_t seed) {\n    void *p;\n    sfmt_t *ctx;\n    int i;\n    uint32_t *psfmt32;\n\n    if (posix_memalign(&p, sizeof(w128_t), sizeof(sfmt_t)) != 0) {\n\treturn NULL;\n    }\n    ctx = (sfmt_t *)p;\n    psfmt32 = &ctx->sfmt[0].u[0];\n\n    psfmt32[idxof(0)] = seed;\n    for (i = 1; i < N32; i++) {\n\tpsfmt32[idxof(i)] = 1812433253UL * (psfmt32[idxof(i - 1)]\n\t\t\t\t\t    ^ (psfmt32[idxof(i - 1)] >> 30))\n\t    + i;\n    }\n    ctx->idx = N32;\n    period_certification(ctx);\n    ctx->initialized = 1;\n\n    return ctx;\n}\n\n/**\n * This function initializes the internal state array,\n * with an array of 32-bit integers used as the seeds\n * @param init_key the array of 32-bit integers, used as a seed.\n * @param key_length the length of init_key.\n */\nsfmt_t *init_by_array(uint32_t *init_key, int key_length) {\n    void *p;\n    sfmt_t *ctx;\n    int i, j, count;\n    uint32_t r;\n    int lag;\n    int mid;\n    int size = N * 4;\n    uint32_t *psfmt32;\n\n    if (posix_memalign(&p, sizeof(w128_t), sizeof(sfmt_t)) != 0) {\n\treturn NULL;\n    }\n    ctx = (sfmt_t *)p;\n    psfmt32 = &ctx->sfmt[0].u[0];\n\n    if (size >= 623) {\n\tlag = 11;\n    } else if (size >= 68) {\n\tlag = 7;\n    } else if (size >= 39) {\n\tlag = 5;\n    } else {\n\tlag = 3;\n    }\n    mid = (size - lag) / 2;\n\n    memset(ctx->sfmt, 0x8b, 
sizeof(ctx->sfmt));\n    if (key_length + 1 > N32) {\n\tcount = key_length + 1;\n    } else {\n\tcount = N32;\n    }\n    r = func1(psfmt32[idxof(0)] ^ psfmt32[idxof(mid)]\n\t      ^ psfmt32[idxof(N32 - 1)]);\n    psfmt32[idxof(mid)] += r;\n    r += key_length;\n    psfmt32[idxof(mid + lag)] += r;\n    psfmt32[idxof(0)] = r;\n\n    count--;\n    for (i = 1, j = 0; (j < count) && (j < key_length); j++) {\n\tr = func1(psfmt32[idxof(i)] ^ psfmt32[idxof((i + mid) % N32)]\n\t\t  ^ psfmt32[idxof((i + N32 - 1) % N32)]);\n\tpsfmt32[idxof((i + mid) % N32)] += r;\n\tr += init_key[j] + i;\n\tpsfmt32[idxof((i + mid + lag) % N32)] += r;\n\tpsfmt32[idxof(i)] = r;\n\ti = (i + 1) % N32;\n    }\n    for (; j < count; j++) {\n\tr = func1(psfmt32[idxof(i)] ^ psfmt32[idxof((i + mid) % N32)]\n\t\t  ^ psfmt32[idxof((i + N32 - 1) % N32)]);\n\tpsfmt32[idxof((i + mid) % N32)] += r;\n\tr += i;\n\tpsfmt32[idxof((i + mid + lag) % N32)] += r;\n\tpsfmt32[idxof(i)] = r;\n\ti = (i + 1) % N32;\n    }\n    for (j = 0; j < N32; j++) {\n\tr = func2(psfmt32[idxof(i)] + psfmt32[idxof((i + mid) % N32)]\n\t\t  + psfmt32[idxof((i + N32 - 1) % N32)]);\n\tpsfmt32[idxof((i + mid) % N32)] ^= r;\n\tr -= i;\n\tpsfmt32[idxof((i + mid + lag) % N32)] ^= r;\n\tpsfmt32[idxof(i)] = r;\n\ti = (i + 1) % N32;\n    }\n\n    ctx->idx = N32;\n    period_certification(ctx);\n    ctx->initialized = 1;\n\n    return ctx;\n}\n\nvoid fini_gen_rand(sfmt_t *ctx) {\n    assert(ctx != NULL);\n\n    ctx->initialized = 0;\n    free(ctx);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/src/btalloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nvoid *\nbtalloc(size_t size, unsigned bits) {\n\treturn btalloc_0(size, bits);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/src/btalloc_0.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nbtalloc_n_gen(0)\n"
  },
  {
    "path": "deps/jemalloc/test/src/btalloc_1.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nbtalloc_n_gen(1)\n"
  },
  {
    "path": "deps/jemalloc/test/src/math.c",
    "content": "#define MATH_C_\n#include \"test/jemalloc_test.h\"\n"
  },
  {
    "path": "deps/jemalloc/test/src/mtx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#ifndef _CRT_SPINCOUNT\n#define _CRT_SPINCOUNT 4000\n#endif\n\nbool\nmtx_init(mtx_t *mtx) {\n#ifdef _WIN32\n\tif (!InitializeCriticalSectionAndSpinCount(&mtx->lock,\n\t    _CRT_SPINCOUNT)) {\n\t\treturn true;\n\t}\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n\tmtx->lock = OS_UNFAIR_LOCK_INIT;\n#else\n\tpthread_mutexattr_t attr;\n\n\tif (pthread_mutexattr_init(&attr) != 0) {\n\t\treturn true;\n\t}\n\tpthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);\n\tif (pthread_mutex_init(&mtx->lock, &attr) != 0) {\n\t\tpthread_mutexattr_destroy(&attr);\n\t\treturn true;\n\t}\n\tpthread_mutexattr_destroy(&attr);\n#endif\n\treturn false;\n}\n\nvoid\nmtx_fini(mtx_t *mtx) {\n#ifdef _WIN32\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n#else\n\tpthread_mutex_destroy(&mtx->lock);\n#endif\n}\n\nvoid\nmtx_lock(mtx_t *mtx) {\n#ifdef _WIN32\n\tEnterCriticalSection(&mtx->lock);\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n\tos_unfair_lock_lock(&mtx->lock);\n#else\n\tpthread_mutex_lock(&mtx->lock);\n#endif\n}\n\nvoid\nmtx_unlock(mtx_t *mtx) {\n#ifdef _WIN32\n\tLeaveCriticalSection(&mtx->lock);\n#elif (defined(JEMALLOC_OS_UNFAIR_LOCK))\n\tos_unfair_lock_unlock(&mtx->lock);\n#else\n\tpthread_mutex_unlock(&mtx->lock);\n#endif\n}\n"
  },
  {
    "path": "deps/jemalloc/test/src/sleep.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * Sleep for approximately ns nanoseconds.  No lower or upper bound on sleep\n * time is guaranteed.\n */\nvoid\nsleep_ns(unsigned ns) {\n\tassert(ns <= 1000*1000*1000);\n\n#ifdef _WIN32\n\tSleep(ns / 1000 / 1000);\n#else\n\t{\n\t\tstruct timespec timeout;\n\n\t\tif (ns < 1000*1000*1000) {\n\t\t\ttimeout.tv_sec = 0;\n\t\t\ttimeout.tv_nsec = ns;\n\t\t} else {\n\t\t\ttimeout.tv_sec = 1;\n\t\t\ttimeout.tv_nsec = 0;\n\t\t}\n\t\tnanosleep(&timeout, NULL);\n\t}\n#endif\n}\n"
  },
  {
    "path": "deps/jemalloc/test/src/test.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/* Test status state. */\n\nstatic unsigned\t\ttest_count = 0;\nstatic test_status_t\ttest_counts[test_status_count] = {0, 0, 0};\nstatic test_status_t\ttest_status = test_status_pass;\nstatic const char *\ttest_name = \"\";\n\n/* Reentrancy testing helpers. */\n\n#define NUM_REENTRANT_ALLOCS 20\ntypedef enum {\n\tnon_reentrant = 0,\n\tlibc_reentrant = 1,\n\tarena_new_reentrant = 2\n} reentrancy_t;\nstatic reentrancy_t reentrancy;\n\nstatic bool libc_hook_ran = false;\nstatic bool arena_new_hook_ran = false;\n\nstatic const char *\nreentrancy_t_str(reentrancy_t r) {\n\tswitch (r) {\n\tcase non_reentrant:\n\t\treturn \"non-reentrant\";\n\tcase libc_reentrant:\n\t\treturn \"libc-reentrant\";\n\tcase arena_new_reentrant:\n\t\treturn \"arena_new-reentrant\";\n\tdefault:\n\t\tunreachable();\n\t}\n}\n\nstatic void\ndo_hook(bool *hook_ran, void (**hook)()) {\n\t*hook_ran = true;\n\t*hook = NULL;\n\n\tsize_t alloc_size = 1;\n\tfor (int i = 0; i < NUM_REENTRANT_ALLOCS; i++) {\n\t\tfree(malloc(alloc_size));\n\t\talloc_size *= 2;\n\t}\n}\n\nstatic void\nlibc_reentrancy_hook() {\n\tdo_hook(&libc_hook_ran, &test_hooks_libc_hook);\n}\n\nstatic void\narena_new_reentrancy_hook() {\n\tdo_hook(&arena_new_hook_ran, &test_hooks_arena_new_hook);\n}\n\n/* Actual test infrastructure. */\nbool\ntest_is_reentrant() {\n\treturn reentrancy != non_reentrant;\n}\n\nJEMALLOC_FORMAT_PRINTF(1, 2)\nvoid\ntest_skip(const char *format, ...) {\n\tva_list ap;\n\n\tva_start(ap, format);\n\tmalloc_vcprintf(NULL, NULL, format, ap);\n\tva_end(ap);\n\tmalloc_printf(\"\\n\");\n\ttest_status = test_status_skip;\n}\n\nJEMALLOC_FORMAT_PRINTF(1, 2)\nvoid\ntest_fail(const char *format, ...) 
{\n\tva_list ap;\n\n\tva_start(ap, format);\n\tmalloc_vcprintf(NULL, NULL, format, ap);\n\tva_end(ap);\n\tmalloc_printf(\"\\n\");\n\ttest_status = test_status_fail;\n}\n\nstatic const char *\ntest_status_string(test_status_t current_status) {\n\tswitch (current_status) {\n\tcase test_status_pass: return \"pass\";\n\tcase test_status_skip: return \"skip\";\n\tcase test_status_fail: return \"fail\";\n\tdefault: not_reached();\n\t}\n}\n\nvoid\np_test_init(const char *name) {\n\ttest_count++;\n\ttest_status = test_status_pass;\n\ttest_name = name;\n}\n\nvoid\np_test_fini(void) {\n\ttest_counts[test_status]++;\n\tmalloc_printf(\"%s (%s): %s\\n\", test_name, reentrancy_t_str(reentrancy),\n\t    test_status_string(test_status));\n}\n\nstatic void\ncheck_global_slow(test_status_t *status) {\n#ifdef JEMALLOC_UNIT_TEST\n\t/*\n\t * This check needs to peek into tsd internals, which is why it's only\n\t * exposed in unit tests.\n\t */\n\tif (tsd_global_slow()) {\n\t\tmalloc_printf(\"Testing increased global slow count\\n\");\n\t\t*status = test_status_fail;\n\t}\n#endif\n}\n\nstatic test_status_t\np_test_impl(bool do_malloc_init, bool do_reentrant, test_t *t, va_list ap) {\n\ttest_status_t ret;\n\n\tif (do_malloc_init) {\n\t\t/*\n\t\t * Make sure initialization occurs prior to running tests.\n\t\t * Tests are special because they may use internal facilities\n\t\t * prior to triggering initialization as a side effect of\n\t\t * calling into the public API.\n\t\t */\n\t\tif (nallocx(1, 0) == 0) {\n\t\t\tmalloc_printf(\"Initialization error\");\n\t\t\treturn test_status_fail;\n\t\t}\n\t}\n\n\tret = test_status_pass;\n\tfor (; t != NULL; t = va_arg(ap, test_t *)) {\n\t\t/* Non-reentrant run. */\n\t\treentrancy = non_reentrant;\n\t\ttest_hooks_arena_new_hook = test_hooks_libc_hook = NULL;\n\t\tt();\n\t\tif (test_status > ret) {\n\t\t\tret = test_status;\n\t\t}\n\t\tcheck_global_slow(&ret);\n\t\t/* Reentrant run. 
*/\n\t\tif (do_reentrant) {\n\t\t\treentrancy = libc_reentrant;\n\t\t\ttest_hooks_arena_new_hook = NULL;\n\t\t\ttest_hooks_libc_hook = &libc_reentrancy_hook;\n\t\t\tt();\n\t\t\tif (test_status > ret) {\n\t\t\t\tret = test_status;\n\t\t\t}\n\t\t\tcheck_global_slow(&ret);\n\n\t\t\treentrancy = arena_new_reentrant;\n\t\t\ttest_hooks_libc_hook = NULL;\n\t\t\ttest_hooks_arena_new_hook = &arena_new_reentrancy_hook;\n\t\t\tt();\n\t\t\tif (test_status > ret) {\n\t\t\t\tret = test_status;\n\t\t\t}\n\t\t\tcheck_global_slow(&ret);\n\t\t}\n\t}\n\n\tmalloc_printf(\"--- %s: %u/%u, %s: %u/%u, %s: %u/%u ---\\n\",\n\t    test_status_string(test_status_pass),\n\t    test_counts[test_status_pass], test_count,\n\t    test_status_string(test_status_skip),\n\t    test_counts[test_status_skip], test_count,\n\t    test_status_string(test_status_fail),\n\t    test_counts[test_status_fail], test_count);\n\n\treturn ret;\n}\n\ntest_status_t\np_test(test_t *t, ...) {\n\ttest_status_t ret;\n\tva_list ap;\n\n\tret = test_status_pass;\n\tva_start(ap, t);\n\tret = p_test_impl(true, true, t, ap);\n\tva_end(ap);\n\n\treturn ret;\n}\n\ntest_status_t\np_test_no_reentrancy(test_t *t, ...) {\n\ttest_status_t ret;\n\tva_list ap;\n\n\tret = test_status_pass;\n\tva_start(ap, t);\n\tret = p_test_impl(true, false, t, ap);\n\tva_end(ap);\n\n\treturn ret;\n}\n\ntest_status_t\np_test_no_malloc_init(test_t *t, ...) {\n\ttest_status_t ret;\n\tva_list ap;\n\n\tret = test_status_pass;\n\tva_start(ap, t);\n\t/*\n\t * We also omit reentrancy from bootstrapping tests, since we don't\n\t * (yet) care about general reentrancy during bootstrapping.\n\t */\n\tret = p_test_impl(false, false, t, ap);\n\tva_end(ap);\n\n\treturn ret;\n}\n\nvoid\np_test_fail(const char *prefix, const char *message) {\n\tmalloc_cprintf(NULL, NULL, \"%s%s\\n\", prefix, message);\n\ttest_status = test_status_fail;\n}\n"
  },
  {
    "path": "deps/jemalloc/test/src/thd.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#ifdef _WIN32\nvoid\nthd_create(thd_t *thd, void *(*proc)(void *), void *arg) {\n\tLPTHREAD_START_ROUTINE routine = (LPTHREAD_START_ROUTINE)proc;\n\t*thd = CreateThread(NULL, 0, routine, arg, 0, NULL);\n\tif (*thd == NULL) {\n\t\ttest_fail(\"Error in CreateThread()\\n\");\n\t}\n}\n\nvoid\nthd_join(thd_t thd, void **ret) {\n\tif (WaitForSingleObject(thd, INFINITE) == WAIT_OBJECT_0 && ret) {\n\t\tDWORD exit_code;\n\t\tGetExitCodeThread(thd, (LPDWORD) &exit_code);\n\t\t*ret = (void *)(uintptr_t)exit_code;\n\t}\n}\n\n#else\nvoid\nthd_create(thd_t *thd, void *(*proc)(void *), void *arg) {\n\tif (pthread_create(thd, NULL, proc, arg) != 0) {\n\t\ttest_fail(\"Error in pthread_create()\\n\");\n\t}\n}\n\nvoid\nthd_join(thd_t thd, void **ret) {\n\tpthread_join(thd, ret);\n}\n#endif\n"
  },
  {
    "path": "deps/jemalloc/test/src/timer.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nvoid\ntimer_start(timedelta_t *timer) {\n\tnstime_init_update(&timer->t0);\n}\n\nvoid\ntimer_stop(timedelta_t *timer) {\n\tnstime_copy(&timer->t1, &timer->t0);\n\tnstime_update(&timer->t1);\n}\n\nuint64_t\ntimer_usec(const timedelta_t *timer) {\n\tnstime_t delta;\n\n\tnstime_copy(&delta, &timer->t1);\n\tnstime_subtract(&delta, &timer->t0);\n\treturn nstime_ns(&delta) / 1000;\n}\n\nvoid\ntimer_ratio(timedelta_t *a, timedelta_t *b, char *buf, size_t buflen) {\n\tuint64_t t0 = timer_usec(a);\n\tuint64_t t1 = timer_usec(b);\n\tuint64_t mult;\n\tsize_t i = 0;\n\tsize_t j, n;\n\n\t/* Whole. */\n\tn = malloc_snprintf(&buf[i], buflen-i, \"%\"FMTu64, t0 / t1);\n\ti += n;\n\tif (i >= buflen) {\n\t\treturn;\n\t}\n\tmult = 1;\n\tfor (j = 0; j < n; j++) {\n\t\tmult *= 10;\n\t}\n\n\t/* Decimal. */\n\tn = malloc_snprintf(&buf[i], buflen-i, \".\");\n\ti += n;\n\n\t/* Fraction. */\n\twhile (i < buflen-1) {\n\t\tuint64_t round = (i+1 == buflen-1 && ((t0 * mult * 10 / t1) % 10\n\t\t    >= 5)) ? 1 : 0;\n\t\tn = malloc_snprintf(&buf[i], buflen-i,\n\t\t    \"%\"FMTu64, (t0 * mult / t1) % 10 + round);\n\t\ti += n;\n\t\tmult *= 10;\n\t}\n}\n"
  },
  {
    "path": "deps/jemalloc/test/stress/batch_alloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/bench.h\"\n\n#define MIBLEN 8\nstatic size_t mib[MIBLEN];\nstatic size_t miblen = MIBLEN;\n\n#define TINY_BATCH 10\n#define TINY_BATCH_ITER (10 * 1000 * 1000)\n#define HUGE_BATCH (1000 * 1000)\n#define HUGE_BATCH_ITER 100\n#define LEN (100 * 1000 * 1000)\nstatic void *batch_ptrs[LEN];\nstatic size_t batch_ptrs_next = 0;\nstatic void *item_ptrs[LEN];\nstatic size_t item_ptrs_next = 0;\n\n#define SIZE 7\n\ntypedef struct batch_alloc_packet_s batch_alloc_packet_t;\nstruct batch_alloc_packet_s {\n\tvoid **ptrs;\n\tsize_t num;\n\tsize_t size;\n\tint flags;\n};\n\nstatic void\nbatch_alloc_wrapper(size_t batch) {\n\tbatch_alloc_packet_t batch_alloc_packet =\n\t    {batch_ptrs + batch_ptrs_next, batch, SIZE, 0};\n\tsize_t filled;\n\tsize_t len = sizeof(size_t);\n\tassert_d_eq(mallctlbymib(mib, miblen, &filled, &len,\n\t    &batch_alloc_packet, sizeof(batch_alloc_packet)), 0, \"\");\n\tassert_zu_eq(filled, batch, \"\");\n}\n\nstatic void\nitem_alloc_wrapper(size_t batch) {\n\tfor (size_t i = item_ptrs_next, end = i + batch; i < end; ++i) {\n\t\titem_ptrs[i] = malloc(SIZE);\n\t}\n}\n\nstatic void\nrelease_and_clear(void **ptrs, size_t len) {\n\tfor (size_t i = 0; i < len; ++i) {\n\t\tvoid *p = ptrs[i];\n\t\tassert_ptr_not_null(p, \"allocation failed\");\n\t\tsdallocx(p, SIZE, 0);\n\t\tptrs[i] = NULL;\n\t}\n}\n\nstatic void\nbatch_alloc_without_free(size_t batch) {\n\tbatch_alloc_wrapper(batch);\n\tbatch_ptrs_next += batch;\n}\n\nstatic void\nitem_alloc_without_free(size_t batch) {\n\titem_alloc_wrapper(batch);\n\titem_ptrs_next += batch;\n}\n\nstatic void\nbatch_alloc_with_free(size_t batch) {\n\tbatch_alloc_wrapper(batch);\n\trelease_and_clear(batch_ptrs + batch_ptrs_next, batch);\n\tbatch_ptrs_next += batch;\n}\n\nstatic void\nitem_alloc_with_free(size_t batch) {\n\titem_alloc_wrapper(batch);\n\trelease_and_clear(item_ptrs + item_ptrs_next, batch);\n\titem_ptrs_next += batch;\n}\n\nstatic 
void\ncompare_without_free(size_t batch, size_t iter,\n    void (*batch_alloc_without_free_func)(void),\n    void (*item_alloc_without_free_func)(void)) {\n\tassert(batch_ptrs_next == 0);\n\tassert(item_ptrs_next == 0);\n\tassert(batch * iter <= LEN);\n\tfor (size_t i = 0; i < iter; ++i) {\n\t\tbatch_alloc_without_free_func();\n\t\titem_alloc_without_free_func();\n\t}\n\trelease_and_clear(batch_ptrs, batch_ptrs_next);\n\tbatch_ptrs_next = 0;\n\trelease_and_clear(item_ptrs, item_ptrs_next);\n\titem_ptrs_next = 0;\n\tcompare_funcs(0, iter,\n\t    \"batch allocation\", batch_alloc_without_free_func,\n\t    \"item allocation\", item_alloc_without_free_func);\n\trelease_and_clear(batch_ptrs, batch_ptrs_next);\n\tbatch_ptrs_next = 0;\n\trelease_and_clear(item_ptrs, item_ptrs_next);\n\titem_ptrs_next = 0;\n}\n\nstatic void\ncompare_with_free(size_t batch, size_t iter,\n    void (*batch_alloc_with_free_func)(void),\n    void (*item_alloc_with_free_func)(void)) {\n\tassert(batch_ptrs_next == 0);\n\tassert(item_ptrs_next == 0);\n\tassert(batch * iter <= LEN);\n\tfor (size_t i = 0; i < iter; ++i) {\n\t\tbatch_alloc_with_free_func();\n\t\titem_alloc_with_free_func();\n\t}\n\tbatch_ptrs_next = 0;\n\titem_ptrs_next = 0;\n\tcompare_funcs(0, iter,\n\t    \"batch allocation\", batch_alloc_with_free_func,\n\t    \"item allocation\", item_alloc_with_free_func);\n\tbatch_ptrs_next = 0;\n\titem_ptrs_next = 0;\n}\n\nstatic void\nbatch_alloc_without_free_tiny() {\n\tbatch_alloc_without_free(TINY_BATCH);\n}\n\nstatic void\nitem_alloc_without_free_tiny() {\n\titem_alloc_without_free(TINY_BATCH);\n}\n\nTEST_BEGIN(test_tiny_batch_without_free) {\n\tcompare_without_free(TINY_BATCH, TINY_BATCH_ITER,\n\t    batch_alloc_without_free_tiny, item_alloc_without_free_tiny);\n}\nTEST_END\n\nstatic void\nbatch_alloc_with_free_tiny() {\n\tbatch_alloc_with_free(TINY_BATCH);\n}\n\nstatic void\nitem_alloc_with_free_tiny() {\n\titem_alloc_with_free(TINY_BATCH);\n}\n\nTEST_BEGIN(test_tiny_batch_with_free) 
{\n\tcompare_with_free(TINY_BATCH, TINY_BATCH_ITER,\n\t    batch_alloc_with_free_tiny, item_alloc_with_free_tiny);\n}\nTEST_END\n\nstatic void\nbatch_alloc_without_free_huge() {\n\tbatch_alloc_without_free(HUGE_BATCH);\n}\n\nstatic void\nitem_alloc_without_free_huge() {\n\titem_alloc_without_free(HUGE_BATCH);\n}\n\nTEST_BEGIN(test_huge_batch_without_free) {\n\tcompare_without_free(HUGE_BATCH, HUGE_BATCH_ITER,\n\t    batch_alloc_without_free_huge, item_alloc_without_free_huge);\n}\nTEST_END\n\nstatic void\nbatch_alloc_with_free_huge() {\n\tbatch_alloc_with_free(HUGE_BATCH);\n}\n\nstatic void\nitem_alloc_with_free_huge() {\n\titem_alloc_with_free(HUGE_BATCH);\n}\n\nTEST_BEGIN(test_huge_batch_with_free) {\n\tcompare_with_free(HUGE_BATCH, HUGE_BATCH_ITER,\n\t    batch_alloc_with_free_huge, item_alloc_with_free_huge);\n}\nTEST_END\n\nint main(void) {\n\tassert_d_eq(mallctlnametomib(\"experimental.batch_alloc\", mib, &miblen),\n\t    0, \"\");\n\treturn test_no_reentrancy(\n\t    test_tiny_batch_without_free,\n\t    test_tiny_batch_with_free,\n\t    test_huge_batch_without_free,\n\t    test_huge_batch_with_free);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/stress/fill_flush.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/bench.h\"\n\n#define SMALL_ALLOC_SIZE 128\n#define LARGE_ALLOC_SIZE SC_LARGE_MINCLASS\n#define NALLOCS 1000\n\n/*\n * We make this volatile so the 1-at-a-time variants can't leave the allocation\n * in a register, just to try to get the cache behavior closer.\n */\nvoid *volatile allocs[NALLOCS];\n\nstatic void\narray_alloc_dalloc_small(void) {\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tvoid *p = mallocx(SMALL_ALLOC_SIZE, 0);\n\t\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\t\tallocs[i] = p;\n\t}\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tsdallocx(allocs[i], SMALL_ALLOC_SIZE, 0);\n\t}\n}\n\nstatic void\nitem_alloc_dalloc_small(void) {\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tvoid *p = mallocx(SMALL_ALLOC_SIZE, 0);\n\t\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\t\tallocs[i] = p;\n\t\tsdallocx(allocs[i], SMALL_ALLOC_SIZE, 0);\n\t}\n}\n\nTEST_BEGIN(test_array_vs_item_small) {\n\tcompare_funcs(1 * 1000, 10 * 1000,\n\t    \"array of small allocations\", array_alloc_dalloc_small,\n\t    \"small item allocation\", item_alloc_dalloc_small);\n}\nTEST_END\n\nstatic void\narray_alloc_dalloc_large(void) {\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tvoid *p = mallocx(LARGE_ALLOC_SIZE, 0);\n\t\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\t\tallocs[i] = p;\n\t}\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tsdallocx(allocs[i], LARGE_ALLOC_SIZE, 0);\n\t}\n}\n\nstatic void\nitem_alloc_dalloc_large(void) {\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tvoid *p = mallocx(LARGE_ALLOC_SIZE, 0);\n\t\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\t\tallocs[i] = p;\n\t\tsdallocx(allocs[i], LARGE_ALLOC_SIZE, 0);\n\t}\n}\n\nTEST_BEGIN(test_array_vs_item_large) {\n\tcompare_funcs(100, 1000,\n\t    \"array of large allocations\", array_alloc_dalloc_large,\n\t    \"large item allocation\", item_alloc_dalloc_large);\n}\nTEST_END\n\nint main(void) {\n\treturn test_no_reentrancy(\n\t    
test_array_vs_item_small,\n\t    test_array_vs_item_large);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/stress/hookbench.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic void\nnoop_alloc_hook(void *extra, hook_alloc_t type, void *result,\n    uintptr_t result_raw, uintptr_t args_raw[3]) {\n}\n\nstatic void\nnoop_dalloc_hook(void *extra, hook_dalloc_t type, void *address,\n    uintptr_t args_raw[3]) {\n}\n\nstatic void\nnoop_expand_hook(void *extra, hook_expand_t type, void *address,\n    size_t old_usize, size_t new_usize, uintptr_t result_raw,\n    uintptr_t args_raw[4]) {\n}\n\nstatic void\nmalloc_free_loop(int iters) {\n\tfor (int i = 0; i < iters; i++) {\n\t\tvoid *p = mallocx(1, 0);\n\t\tfree(p);\n\t}\n}\n\nstatic void\ntest_hooked(int iters) {\n\thooks_t hooks = {&noop_alloc_hook, &noop_dalloc_hook, &noop_expand_hook,\n\t\tNULL};\n\n\tint err;\n\tvoid *handles[HOOK_MAX];\n\tsize_t sz = sizeof(handles[0]);\n\n\tfor (int i = 0; i < HOOK_MAX; i++) {\n\t\terr = mallctl(\"experimental.hooks.install\", &handles[i],\n\t\t    &sz, &hooks, sizeof(hooks));\n\t\tassert(err == 0);\n\n\t\ttimedelta_t timer;\n\t\ttimer_start(&timer);\n\t\tmalloc_free_loop(iters);\n\t\ttimer_stop(&timer);\n\t\tmalloc_printf(\"With %d hook%s: %\"FMTu64\"us\\n\", i + 1,\n\t\t    i + 1 == 1 ? \"\" : \"s\", timer_usec(&timer));\n\t}\n\tfor (int i = 0; i < HOOK_MAX; i++) {\n\t\terr = mallctl(\"experimental.hooks.remove\", NULL, NULL,\n\t\t    &handles[i], sizeof(handles[i]));\n\t\tassert(err == 0);\n\t}\n}\n\nstatic void\ntest_unhooked(int iters) {\n\ttimedelta_t timer;\n\ttimer_start(&timer);\n\tmalloc_free_loop(iters);\n\ttimer_stop(&timer);\n\n\tmalloc_printf(\"Without hooks: %\"FMTu64\"us\\n\", timer_usec(&timer));\n}\n\nint\nmain(void) {\n\t/* Initialize */\n\tfree(mallocx(1, 0));\n\tint iters = 10 * 1000 * 1000;\n\tmalloc_printf(\"Benchmarking hooks with %d iterations:\\n\", iters);\n\ttest_hooked(iters);\n\ttest_unhooked(iters);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/stress/large_microbench.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/bench.h\"\n\nstatic void\nlarge_mallocx_free(void) {\n\t/*\n\t * We go a bit larger than the large minclass on its own to better\n\t * expose costs from things like zeroing.\n\t */\n\tvoid *p = mallocx(SC_LARGE_MINCLASS, MALLOCX_TCACHE_NONE);\n\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\tfree(p);\n}\n\nstatic void\nsmall_mallocx_free(void) {\n\tvoid *p = mallocx(16, 0);\n\tassert_ptr_not_null(p, \"mallocx shouldn't fail\");\n\tfree(p);\n}\n\nTEST_BEGIN(test_large_vs_small) {\n\tcompare_funcs(100*1000, 1*1000*1000, \"large mallocx\",\n\t    large_mallocx_free, \"small mallocx\", small_mallocx_free);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_large_vs_small);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/stress/mallctl.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/bench.h\"\n\nstatic void\nmallctl_short(void) {\n\tconst char *version;\n\tsize_t sz = sizeof(version);\n\tint err = mallctl(\"version\", &version, &sz, NULL, 0);\n\tassert_d_eq(err, 0, \"mallctl failure\");\n}\n\nsize_t mib_short[1];\n\nstatic void\nmallctlbymib_short(void) {\n\tsize_t miblen = sizeof(mib_short)/sizeof(mib_short[0]);\n\tconst char *version;\n\tsize_t sz = sizeof(version);\n\tint err = mallctlbymib(mib_short, miblen, &version, &sz, NULL, 0);\n\tassert_d_eq(err, 0, \"mallctlbymib failure\");\n}\n\nTEST_BEGIN(test_mallctl_vs_mallctlbymib_short) {\n\tsize_t miblen = sizeof(mib_short)/sizeof(mib_short[0]);\n\n\tint err = mallctlnametomib(\"version\", mib_short, &miblen);\n\tassert_d_eq(err, 0, \"mallctlnametomib failure\");\n\tcompare_funcs(10*1000*1000, 10*1000*1000, \"mallctl_short\",\n\t    mallctl_short, \"mallctlbymib_short\", mallctlbymib_short);\n}\nTEST_END\n\nstatic void\nmallctl_long(void) {\n\tuint64_t nmalloc;\n\tsize_t sz = sizeof(nmalloc);\n\tint err = mallctl(\"stats.arenas.0.bins.0.nmalloc\", &nmalloc, &sz, NULL,\n\t    0);\n\tassert_d_eq(err, 0, \"mallctl failure\");\n}\n\nsize_t mib_long[6];\n\nstatic void\nmallctlbymib_long(void) {\n\tsize_t miblen = sizeof(mib_long)/sizeof(mib_long[0]);\n\tuint64_t nmalloc;\n\tsize_t sz = sizeof(nmalloc);\n\tint err = mallctlbymib(mib_long, miblen, &nmalloc, &sz, NULL, 0);\n\tassert_d_eq(err, 0, \"mallctlbymib failure\");\n}\n\nTEST_BEGIN(test_mallctl_vs_mallctlbymib_long) {\n\t/*\n\t * We want to use the longest mallctl we have; that needs stats support\n\t * to be allowed.\n\t */\n\ttest_skip_if(!config_stats);\n\n\tsize_t miblen = sizeof(mib_long)/sizeof(mib_long[0]);\n\tint err = mallctlnametomib(\"stats.arenas.0.bins.0.nmalloc\", mib_long,\n\t    &miblen);\n\tassert_d_eq(err, 0, \"mallctlnametomib failure\");\n\tcompare_funcs(10*1000*1000, 10*1000*1000, \"mallctl_long\",\n\t    mallctl_long, \"mallctlbymib_long\", 
mallctlbymib_long);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_mallctl_vs_mallctlbymib_short,\n\t    test_mallctl_vs_mallctlbymib_long);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/stress/microbench.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/bench.h\"\n\nstatic void\nmalloc_free(void) {\n\t/* The compiler can optimize away free(malloc(1))! */\n\tvoid *p = malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tfree(p);\n}\n\nstatic void\nmallocx_free(void) {\n\tvoid *p = mallocx(1, 0);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected mallocx() failure\");\n\t\treturn;\n\t}\n\tfree(p);\n}\n\nTEST_BEGIN(test_malloc_vs_mallocx) {\n\tcompare_funcs(10*1000*1000, 100*1000*1000, \"malloc\",\n\t    malloc_free, \"mallocx\", mallocx_free);\n}\nTEST_END\n\nstatic void\nmalloc_dallocx(void) {\n\tvoid *p = malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tdallocx(p, 0);\n}\n\nstatic void\nmalloc_sdallocx(void) {\n\tvoid *p = malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tsdallocx(p, 1, 0);\n}\n\nTEST_BEGIN(test_free_vs_dallocx) {\n\tcompare_funcs(10*1000*1000, 100*1000*1000, \"free\", malloc_free,\n\t    \"dallocx\", malloc_dallocx);\n}\nTEST_END\n\nTEST_BEGIN(test_dallocx_vs_sdallocx) {\n\tcompare_funcs(10*1000*1000, 100*1000*1000, \"dallocx\", malloc_dallocx,\n\t    \"sdallocx\", malloc_sdallocx);\n}\nTEST_END\n\nstatic void\nmalloc_mus_free(void) {\n\tvoid *p;\n\n\tp = malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tTEST_MALLOC_SIZE(p);\n\tfree(p);\n}\n\nstatic void\nmalloc_sallocx_free(void) {\n\tvoid *p;\n\n\tp = malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tif (sallocx(p, 0) < 1) {\n\t\ttest_fail(\"Unexpected sallocx() failure\");\n\t}\n\tfree(p);\n}\n\nTEST_BEGIN(test_mus_vs_sallocx) {\n\tcompare_funcs(10*1000*1000, 100*1000*1000, \"malloc_usable_size\",\n\t    malloc_mus_free, \"sallocx\", malloc_sallocx_free);\n}\nTEST_END\n\nstatic void\nmalloc_nallocx_free(void) {\n\tvoid *p;\n\n\tp = 
malloc(1);\n\tif (p == NULL) {\n\t\ttest_fail(\"Unexpected malloc() failure\");\n\t\treturn;\n\t}\n\tif (nallocx(1, 0) < 1) {\n\t\ttest_fail(\"Unexpected nallocx() failure\");\n\t}\n\tfree(p);\n}\n\nTEST_BEGIN(test_sallocx_vs_nallocx) {\n\tcompare_funcs(10*1000*1000, 100*1000*1000, \"sallocx\",\n\t    malloc_sallocx_free, \"nallocx\", malloc_nallocx_free);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_malloc_vs_mallocx,\n\t    test_free_vs_dallocx,\n\t    test_dallocx_vs_sdallocx,\n\t    test_mus_vs_sallocx,\n\t    test_sallocx_vs_nallocx);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/test.sh.in",
    "content": "#!/bin/sh\n\ncase @abi@ in\n  macho)\n    export DYLD_FALLBACK_LIBRARY_PATH=\"@objroot@lib\"\n    ;;\n  pecoff)\n    export PATH=\"${PATH}:@objroot@lib\"\n    ;;\n  *)\n    ;;\nesac\n\n# Make a copy of the @JEMALLOC_CPREFIX@MALLOC_CONF passed in to this script, so\n# it can be repeatedly concatenated with per test settings.\nexport MALLOC_CONF_ALL=${@JEMALLOC_CPREFIX@MALLOC_CONF}\n# Concatenate the individual test's MALLOC_CONF and MALLOC_CONF_ALL.\nexport_malloc_conf() {\n  if [ \"x${MALLOC_CONF}\" != \"x\" -a \"x${MALLOC_CONF_ALL}\" != \"x\" ] ; then\n    export @JEMALLOC_CPREFIX@MALLOC_CONF=\"${MALLOC_CONF},${MALLOC_CONF_ALL}\"\n  else\n    export @JEMALLOC_CPREFIX@MALLOC_CONF=\"${MALLOC_CONF}${MALLOC_CONF_ALL}\"\n  fi\n}\n\n# Corresponds to test_status_t.\npass_code=0\nskip_code=1\nfail_code=2\n\npass_count=0\nskip_count=0\nfail_count=0\nfor t in $@; do\n  if [ $pass_count -ne 0 -o $skip_count -ne 0 -o $fail_count != 0 ] ; then\n    echo\n  fi\n  echo \"=== ${t} ===\"\n  if [ -e \"@srcroot@${t}.sh\" ] ; then\n    # Source the shell script corresponding to the test in a subshell and\n    # execute the test.  This allows the shell script to set MALLOC_CONF, which\n    # is then used to set @JEMALLOC_CPREFIX@MALLOC_CONF (thus allowing the\n    # per test shell script to ignore the @JEMALLOC_CPREFIX@ detail).\n    enable_fill=@enable_fill@ \\\n    enable_prof=@enable_prof@ \\\n    . 
@srcroot@${t}.sh && \\\n    export_malloc_conf && \\\n    $JEMALLOC_TEST_PREFIX ${t}@exe@ @abs_srcroot@ @abs_objroot@\n  else\n    export MALLOC_CONF= && \\\n    export_malloc_conf && \\\n    $JEMALLOC_TEST_PREFIX ${t}@exe@ @abs_srcroot@ @abs_objroot@\n  fi\n  result_code=$?\n  case ${result_code} in\n    ${pass_code})\n      pass_count=$((pass_count+1))\n      ;;\n    ${skip_code})\n      skip_count=$((skip_count+1))\n      ;;\n    ${fail_code})\n      fail_count=$((fail_count+1))\n      ;;\n    *)\n      echo \"Test harness error: ${t} w/ MALLOC_CONF=\\\"${MALLOC_CONF}\\\"\" 1>&2\n      echo \"Use prefix to debug, e.g. JEMALLOC_TEST_PREFIX=\\\"gdb --args\\\" sh test/test.sh ${t}\" 1>&2\n      exit 1\n  esac\ndone\n\ntotal_count=`expr ${pass_count} + ${skip_count} + ${fail_count}`\necho\necho \"Test suite summary: pass: ${pass_count}/${total_count}, skip: ${skip_count}/${total_count}, fail: ${fail_count}/${total_count}\"\n\nif [ ${fail_count} -eq 0 ] ; then\n  exit 0\nelse\n  exit 1\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/SFMT.c",
    "content": "/*\n * This file derives from SFMT 1.3.3\n * (http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html), which was\n * released under the terms of the following license:\n *\n *   Copyright (c) 2006,2007 Mutsuo Saito, Makoto Matsumoto and Hiroshima\n *   University. All rights reserved.\n *\n *   Redistribution and use in source and binary forms, with or without\n *   modification, are permitted provided that the following conditions are\n *   met:\n *\n *       * Redistributions of source code must retain the above copyright\n *         notice, this list of conditions and the following disclaimer.\n *       * Redistributions in binary form must reproduce the above\n *         copyright notice, this list of conditions and the following\n *         disclaimer in the documentation and/or other materials provided\n *         with the distribution.\n *       * Neither the name of the Hiroshima University nor the names of\n *         its contributors may be used to endorse or promote products\n *         derived from this software without specific prior written\n *         permission.\n *\n *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n#include \"test/jemalloc_test.h\"\n\n#define BLOCK_SIZE 10000\n#define BLOCK_SIZE64 (BLOCK_SIZE / 2)\n#define COUNT_1 1000\n#define COUNT_2 700\n\nstatic const uint32_t init_gen_rand_32_expected[] = {\n\t3440181298U, 1564997079U, 1510669302U, 2930277156U, 1452439940U,\n\t3796268453U,  423124208U, 2143818589U, 3827219408U, 2987036003U,\n\t2674978610U, 1536842514U, 2027035537U, 2534897563U, 1686527725U,\n\t 545368292U, 1489013321U, 1370534252U, 4231012796U, 3994803019U,\n\t1764869045U,  824597505U,  862581900U, 2469764249U,  812862514U,\n\t 359318673U,  116957936U, 3367389672U, 2327178354U, 1898245200U,\n\t3206507879U, 2378925033U, 1040214787U, 2524778605U, 3088428700U,\n\t1417665896U,  964324147U, 2282797708U, 2456269299U,  313400376U,\n\t2245093271U, 1015729427U, 2694465011U, 3246975184U, 1992793635U,\n\t 463679346U, 3721104591U, 3475064196U,  856141236U, 1499559719U,\n\t3522818941U, 3721533109U, 1954826617U, 1282044024U, 1543279136U,\n\t1301863085U, 2669145051U, 4221477354U, 3896016841U, 3392740262U,\n\t 462466863U, 1037679449U, 1228140306U,  922298197U, 1205109853U,\n\t1872938061U, 3102547608U, 2742766808U, 1888626088U, 4028039414U,\n\t 157593879U, 1136901695U, 4038377686U, 3572517236U, 4231706728U,\n\t2997311961U, 1189931652U, 3981543765U, 2826166703U,   87159245U,\n\t1721379072U, 3897926942U, 1790395498U, 2569178939U, 1047368729U,\n\t2340259131U, 3144212906U, 2301169789U, 2442885464U, 
3034046771U,\n\t3667880593U, 3935928400U, 2372805237U, 1666397115U, 2460584504U,\n\t 513866770U, 3810869743U, 2147400037U, 2792078025U, 2941761810U,\n\t3212265810U,  984692259U,  346590253U, 1804179199U, 3298543443U,\n\t 750108141U, 2880257022U,  243310542U, 1869036465U, 1588062513U,\n\t2983949551U, 1931450364U, 4034505847U, 2735030199U, 1628461061U,\n\t2539522841U,  127965585U, 3992448871U,  913388237U,  559130076U,\n\t1202933193U, 4087643167U, 2590021067U, 2256240196U, 1746697293U,\n\t1013913783U, 1155864921U, 2715773730U,  915061862U, 1948766573U,\n\t2322882854U, 3761119102U, 1343405684U, 3078711943U, 3067431651U,\n\t3245156316U, 3588354584U, 3484623306U, 3899621563U, 4156689741U,\n\t3237090058U, 3880063844U,  862416318U, 4039923869U, 2303788317U,\n\t3073590536U,  701653667U, 2131530884U, 3169309950U, 2028486980U,\n\t 747196777U, 3620218225U,  432016035U, 1449580595U, 2772266392U,\n\t 444224948U, 1662832057U, 3184055582U, 3028331792U, 1861686254U,\n\t1104864179U,  342430307U, 1350510923U, 3024656237U, 1028417492U,\n\t2870772950U,  290847558U, 3675663500U,  508431529U, 4264340390U,\n\t2263569913U, 1669302976U,  519511383U, 2706411211U, 3764615828U,\n\t3883162495U, 4051445305U, 2412729798U, 3299405164U, 3991911166U,\n\t2348767304U, 2664054906U, 3763609282U,  593943581U, 3757090046U,\n\t2075338894U, 2020550814U, 4287452920U, 4290140003U, 1422957317U,\n\t2512716667U, 2003485045U, 2307520103U, 2288472169U, 3940751663U,\n\t4204638664U, 2892583423U, 1710068300U, 3904755993U, 2363243951U,\n\t3038334120U,  547099465U,  771105860U, 3199983734U, 4282046461U,\n\t2298388363U,  934810218U, 2837827901U, 3952500708U, 2095130248U,\n\t3083335297U,   26885281U, 3932155283U, 1531751116U, 1425227133U,\n\t 495654159U, 3279634176U, 3855562207U, 3957195338U, 4159985527U,\n\t 893375062U, 1875515536U, 1327247422U, 3754140693U, 1028923197U,\n\t1729880440U,  805571298U,  448971099U, 2726757106U, 2749436461U,\n\t2485987104U,  175337042U, 3235477922U, 3882114302U, 2020970972U,\n\t 
943926109U, 2762587195U, 1904195558U, 3452650564U,  108432281U,\n\t3893463573U, 3977583081U, 2636504348U, 1110673525U, 3548479841U,\n\t4258854744U,  980047703U, 4057175418U, 3890008292U,  145653646U,\n\t3141868989U, 3293216228U, 1194331837U, 1254570642U, 3049934521U,\n\t2868313360U, 2886032750U, 1110873820U,  279553524U, 3007258565U,\n\t1104807822U, 3186961098U,  315764646U, 2163680838U, 3574508994U,\n\t3099755655U,  191957684U, 3642656737U, 3317946149U, 3522087636U,\n\t 444526410U,  779157624U, 1088229627U, 1092460223U, 1856013765U,\n\t3659877367U,  368270451U,  503570716U, 3000984671U, 2742789647U,\n\t 928097709U, 2914109539U,  308843566U, 2816161253U, 3667192079U,\n\t2762679057U, 3395240989U, 2928925038U, 1491465914U, 3458702834U,\n\t3787782576U, 2894104823U, 1296880455U, 1253636503U,  989959407U,\n\t2291560361U, 2776790436U, 1913178042U, 1584677829U,  689637520U,\n\t1898406878U,  688391508U, 3385234998U,  845493284U, 1943591856U,\n\t2720472050U,  222695101U, 1653320868U, 2904632120U, 4084936008U,\n\t1080720688U, 3938032556U,  387896427U, 2650839632U,   99042991U,\n\t1720913794U, 1047186003U, 1877048040U, 2090457659U,  517087501U,\n\t4172014665U, 2129713163U, 2413533132U, 2760285054U, 4129272496U,\n\t1317737175U, 2309566414U, 2228873332U, 3889671280U, 1110864630U,\n\t3576797776U, 2074552772U,  832002644U, 3097122623U, 2464859298U,\n\t2679603822U, 1667489885U, 3237652716U, 1478413938U, 1719340335U,\n\t2306631119U,  639727358U, 3369698270U,  226902796U, 2099920751U,\n\t1892289957U, 2201594097U, 3508197013U, 3495811856U, 3900381493U,\n\t 841660320U, 3974501451U, 3360949056U, 1676829340U,  728899254U,\n\t2047809627U, 2390948962U,  670165943U, 3412951831U, 4189320049U,\n\t1911595255U, 2055363086U,  507170575U,  418219594U, 4141495280U,\n\t2692088692U, 4203630654U, 3540093932U,  791986533U, 2237921051U,\n\t2526864324U, 2956616642U, 1394958700U, 1983768223U, 1893373266U,\n\t 591653646U,  228432437U, 1611046598U, 3007736357U, 1040040725U,\n\t2726180733U, 2789804360U, 
4263568405U,  829098158U, 3847722805U,\n\t1123578029U, 1804276347U,  997971319U, 4203797076U, 4185199713U,\n\t2811733626U, 2343642194U, 2985262313U, 1417930827U, 3759587724U,\n\t1967077982U, 1585223204U, 1097475516U, 1903944948U,  740382444U,\n\t1114142065U, 1541796065U, 1718384172U, 1544076191U, 1134682254U,\n\t3519754455U, 2866243923U,  341865437U,  645498576U, 2690735853U,\n\t1046963033U, 2493178460U, 1187604696U, 1619577821U,  488503634U,\n\t3255768161U, 2306666149U, 1630514044U, 2377698367U, 2751503746U,\n\t3794467088U, 1796415981U, 3657173746U,  409136296U, 1387122342U,\n\t1297726519U,  219544855U, 4270285558U,  437578827U, 1444698679U,\n\t2258519491U,  963109892U, 3982244073U, 3351535275U,  385328496U,\n\t1804784013U,  698059346U, 3920535147U,  708331212U,  784338163U,\n\t 785678147U, 1238376158U, 1557298846U, 2037809321U,  271576218U,\n\t4145155269U, 1913481602U, 2763691931U,  588981080U, 1201098051U,\n\t3717640232U, 1509206239U,  662536967U, 3180523616U, 1133105435U,\n\t2963500837U, 2253971215U, 3153642623U, 1066925709U, 2582781958U,\n\t3034720222U, 1090798544U, 2942170004U, 4036187520U,  686972531U,\n\t2610990302U, 2641437026U, 1837562420U,  722096247U, 1315333033U,\n\t2102231203U, 3402389208U, 3403698140U, 1312402831U, 2898426558U,\n\t 814384596U,  385649582U, 1916643285U, 1924625106U, 2512905582U,\n\t2501170304U, 4275223366U, 2841225246U, 1467663688U, 3563567847U,\n\t2969208552U,  884750901U,  102992576U,  227844301U, 3681442994U,\n\t3502881894U, 4034693299U, 1166727018U, 1697460687U, 1737778332U,\n\t1787161139U, 1053003655U, 1215024478U, 2791616766U, 2525841204U,\n\t1629323443U,    3233815U, 2003823032U, 3083834263U, 2379264872U,\n\t3752392312U, 1287475550U, 3770904171U, 3004244617U, 1502117784U,\n\t 918698423U, 2419857538U, 3864502062U, 1751322107U, 2188775056U,\n\t4018728324U,  983712955U,  440071928U, 3710838677U, 2001027698U,\n\t3994702151U,   22493119U, 3584400918U, 3446253670U, 4254789085U,\n\t1405447860U, 1240245579U, 1800644159U, 1661363424U, 
3278326132U,\n\t3403623451U,   67092802U, 2609352193U, 3914150340U, 1814842761U,\n\t3610830847U,  591531412U, 3880232807U, 1673505890U, 2585326991U,\n\t1678544474U, 3148435887U, 3457217359U, 1193226330U, 2816576908U,\n\t 154025329U,  121678860U, 1164915738U,  973873761U,  269116100U,\n\t  52087970U,  744015362U,  498556057U,   94298882U, 1563271621U,\n\t2383059628U, 4197367290U, 3958472990U, 2592083636U, 2906408439U,\n\t1097742433U, 3924840517U,  264557272U, 2292287003U, 3203307984U,\n\t4047038857U, 3820609705U, 2333416067U, 1839206046U, 3600944252U,\n\t3412254904U,  583538222U, 2390557166U, 4140459427U, 2810357445U,\n\t 226777499U, 2496151295U, 2207301712U, 3283683112U,  611630281U,\n\t1933218215U, 3315610954U, 3889441987U, 3719454256U, 3957190521U,\n\t1313998161U, 2365383016U, 3146941060U, 1801206260U,  796124080U,\n\t2076248581U, 1747472464U, 3254365145U,  595543130U, 3573909503U,\n\t3758250204U, 2020768540U, 2439254210U,   93368951U, 3155792250U,\n\t2600232980U, 3709198295U, 3894900440U, 2971850836U, 1578909644U,\n\t1443493395U, 2581621665U, 3086506297U, 2443465861U,  558107211U,\n\t1519367835U,  249149686U,  908102264U, 2588765675U, 1232743965U,\n\t1001330373U, 3561331654U, 2259301289U, 1564977624U, 3835077093U,\n\t 727244906U, 4255738067U, 1214133513U, 2570786021U, 3899704621U,\n\t1633861986U, 1636979509U, 1438500431U,   58463278U, 2823485629U,\n\t2297430187U, 2926781924U, 3371352948U, 1864009023U, 2722267973U,\n\t1444292075U,  437703973U, 1060414512U,  189705863U,  910018135U,\n\t4077357964U,  884213423U, 2644986052U, 3973488374U, 1187906116U,\n\t2331207875U,  780463700U, 3713351662U, 3854611290U,  412805574U,\n\t2978462572U, 2176222820U,  829424696U, 2790788332U, 2750819108U,\n\t1594611657U, 3899878394U, 3032870364U, 1702887682U, 1948167778U,\n\t  14130042U,  192292500U,  947227076U,   90719497U, 3854230320U,\n\t 784028434U, 2142399787U, 1563449646U, 2844400217U,  819143172U,\n\t2883302356U, 2328055304U, 1328532246U, 2603885363U, 3375188924U,\n\t 
933941291U, 3627039714U, 2129697284U, 2167253953U, 2506905438U,\n\t1412424497U, 2981395985U, 1418359660U, 2925902456U,   52752784U,\n\t3713667988U, 3924669405U,  648975707U, 1145520213U, 4018650664U,\n\t3805915440U, 2380542088U, 2013260958U, 3262572197U, 2465078101U,\n\t1114540067U, 3728768081U, 2396958768U,  590672271U,  904818725U,\n\t4263660715U,  700754408U, 1042601829U, 4094111823U, 4274838909U,\n\t2512692617U, 2774300207U, 2057306915U, 3470942453U,   99333088U,\n\t1142661026U, 2889931380U,   14316674U, 2201179167U,  415289459U,\n\t 448265759U, 3515142743U, 3254903683U,  246633281U, 1184307224U,\n\t2418347830U, 2092967314U, 2682072314U, 2558750234U, 2000352263U,\n\t1544150531U,  399010405U, 1513946097U,  499682937U,  461167460U,\n\t3045570638U, 1633669705U,  851492362U, 4052801922U, 2055266765U,\n\t 635556996U,  368266356U, 2385737383U, 3218202352U, 2603772408U,\n\t 349178792U,  226482567U, 3102426060U, 3575998268U, 2103001871U,\n\t3243137071U,  225500688U, 1634718593U, 4283311431U, 4292122923U,\n\t3842802787U,  811735523U,  105712518U,  663434053U, 1855889273U,\n\t2847972595U, 1196355421U, 2552150115U, 4254510614U, 3752181265U,\n\t3430721819U, 3828705396U, 3436287905U, 3441964937U, 4123670631U,\n\t 353001539U,  459496439U, 3799690868U, 1293777660U, 2761079737U,\n\t 498096339U, 3398433374U, 4080378380U, 2304691596U, 2995729055U,\n\t4134660419U, 3903444024U, 3576494993U,  203682175U, 3321164857U,\n\t2747963611U,   79749085U, 2992890370U, 1240278549U, 1772175713U,\n\t2111331972U, 2655023449U, 1683896345U, 2836027212U, 3482868021U,\n\t2489884874U,  756853961U, 2298874501U, 4013448667U, 4143996022U,\n\t2948306858U, 4132920035U, 1283299272U,  995592228U, 3450508595U,\n\t1027845759U, 1766942720U, 3861411826U, 1446861231U,   95974993U,\n\t3502263554U, 1487532194U,  601502472U, 4129619129U,  250131773U,\n\t2050079547U, 3198903947U, 3105589778U, 4066481316U, 3026383978U,\n\t2276901713U,  365637751U, 2260718426U, 1394775634U, 1791172338U,\n\t2690503163U, 2952737846U, 
1568710462U,  732623190U, 2980358000U,\n\t1053631832U, 1432426951U, 3229149635U, 1854113985U, 3719733532U,\n\t3204031934U,  735775531U,  107468620U, 3734611984U,  631009402U,\n\t3083622457U, 4109580626U,  159373458U, 1301970201U, 4132389302U,\n\t1293255004U,  847182752U, 4170022737U,   96712900U, 2641406755U,\n\t1381727755U,  405608287U, 4287919625U, 1703554290U, 3589580244U,\n\t2911403488U,    2166565U, 2647306451U, 2330535117U, 1200815358U,\n\t1165916754U,  245060911U, 4040679071U, 3684908771U, 2452834126U,\n\t2486872773U, 2318678365U, 2940627908U, 1837837240U, 3447897409U,\n\t4270484676U, 1495388728U, 3754288477U, 4204167884U, 1386977705U,\n\t2692224733U, 3076249689U, 4109568048U, 4170955115U, 4167531356U,\n\t4020189950U, 4261855038U, 3036907575U, 3410399885U, 3076395737U,\n\t1046178638U,  144496770U,  230725846U, 3349637149U,   17065717U,\n\t2809932048U, 2054581785U, 3608424964U, 3259628808U,  134897388U,\n\t3743067463U,  257685904U, 3795656590U, 1562468719U, 3589103904U,\n\t3120404710U,  254684547U, 2653661580U, 3663904795U, 2631942758U,\n\t1063234347U, 2609732900U, 2332080715U, 3521125233U, 1180599599U,\n\t1935868586U, 4110970440U,  296706371U, 2128666368U, 1319875791U,\n\t1570900197U, 3096025483U, 1799882517U, 1928302007U, 1163707758U,\n\t1244491489U, 3533770203U,  567496053U, 2757924305U, 2781639343U,\n\t2818420107U,  560404889U, 2619609724U, 4176035430U, 2511289753U,\n\t2521842019U, 3910553502U, 2926149387U, 3302078172U, 4237118867U,\n\t 330725126U,  367400677U,  888239854U,  545570454U, 4259590525U,\n\t 134343617U, 1102169784U, 1647463719U, 3260979784U, 1518840883U,\n\t3631537963U, 3342671457U, 1301549147U, 2083739356U,  146593792U,\n\t3217959080U,  652755743U, 2032187193U, 3898758414U, 1021358093U,\n\t4037409230U, 2176407931U, 3427391950U, 2883553603U,  985613827U,\n\t3105265092U, 3423168427U, 3387507672U,  467170288U, 2141266163U,\n\t3723870208U,  916410914U, 1293987799U, 2652584950U,  769160137U,\n\t3205292896U, 1561287359U, 1684510084U, 3136055621U, 
3765171391U,\n\t 639683232U, 2639569327U, 1218546948U, 4263586685U, 3058215773U,\n\t2352279820U,  401870217U, 2625822463U, 1529125296U, 2981801895U,\n\t1191285226U, 4027725437U, 3432700217U, 4098835661U,  971182783U,\n\t2443861173U, 3881457123U, 3874386651U,  457276199U, 2638294160U,\n\t4002809368U,  421169044U, 1112642589U, 3076213779U, 3387033971U,\n\t2499610950U, 3057240914U, 1662679783U,  461224431U, 1168395933U\n};\nstatic const uint32_t init_by_array_32_expected[] = {\n\t2920711183U, 3885745737U, 3501893680U,  856470934U, 1421864068U,\n\t 277361036U, 1518638004U, 2328404353U, 3355513634U,   64329189U,\n\t1624587673U, 3508467182U, 2481792141U, 3706480799U, 1925859037U,\n\t2913275699U,  882658412U,  384641219U,  422202002U, 1873384891U,\n\t2006084383U, 3924929912U, 1636718106U, 3108838742U, 1245465724U,\n\t4195470535U,  779207191U, 1577721373U, 1390469554U, 2928648150U,\n\t 121399709U, 3170839019U, 4044347501U,  953953814U, 3821710850U,\n\t3085591323U, 3666535579U, 3577837737U, 2012008410U, 3565417471U,\n\t4044408017U,  433600965U, 1637785608U, 1798509764U,  860770589U,\n\t3081466273U, 3982393409U, 2451928325U, 3437124742U, 4093828739U,\n\t3357389386U, 2154596123U,  496568176U, 2650035164U, 2472361850U,\n\t   3438299U, 2150366101U, 1577256676U, 3802546413U, 1787774626U,\n\t4078331588U, 3706103141U,  170391138U, 3806085154U, 1680970100U,\n\t1961637521U, 3316029766U,  890610272U, 1453751581U, 1430283664U,\n\t3051057411U, 3597003186U,  542563954U, 3796490244U, 1690016688U,\n\t3448752238U,  440702173U,  347290497U, 1121336647U, 2540588620U,\n\t 280881896U, 2495136428U,  213707396U,   15104824U, 2946180358U,\n\t 659000016U,  566379385U, 2614030979U, 2855760170U,  334526548U,\n\t2315569495U, 2729518615U,  564745877U, 1263517638U, 3157185798U,\n\t1604852056U, 1011639885U, 2950579535U, 2524219188U,  312951012U,\n\t1528896652U, 1327861054U, 2846910138U, 3966855905U, 2536721582U,\n\t 855353911U, 1685434729U, 3303978929U, 1624872055U, 4020329649U,\n\t3164802143U, 
1642802700U, 1957727869U, 1792352426U, 3334618929U,\n\t2631577923U, 3027156164U,  842334259U, 3353446843U, 1226432104U,\n\t1742801369U, 3552852535U, 3471698828U, 1653910186U, 3380330939U,\n\t2313782701U, 3351007196U, 2129839995U, 1800682418U, 4085884420U,\n\t1625156629U, 3669701987U,  615211810U, 3294791649U, 4131143784U,\n\t2590843588U, 3207422808U, 3275066464U,  561592872U, 3957205738U,\n\t3396578098U,   48410678U, 3505556445U, 1005764855U, 3920606528U,\n\t2936980473U, 2378918600U, 2404449845U, 1649515163U,  701203563U,\n\t3705256349U,   83714199U, 3586854132U,  922978446U, 2863406304U,\n\t3523398907U, 2606864832U, 2385399361U, 3171757816U, 4262841009U,\n\t3645837721U, 1169579486U, 3666433897U, 3174689479U, 1457866976U,\n\t3803895110U, 3346639145U, 1907224409U, 1978473712U, 1036712794U,\n\t 980754888U, 1302782359U, 1765252468U,  459245755U, 3728923860U,\n\t1512894209U, 2046491914U,  207860527U,  514188684U, 2288713615U,\n\t1597354672U, 3349636117U, 2357291114U, 3995796221U,  945364213U,\n\t1893326518U, 3770814016U, 1691552714U, 2397527410U,  967486361U,\n\t 776416472U, 4197661421U,  951150819U, 1852770983U, 4044624181U,\n\t1399439738U, 4194455275U, 2284037669U, 1550734958U, 3321078108U,\n\t1865235926U, 2912129961U, 2664980877U, 1357572033U, 2600196436U,\n\t2486728200U, 2372668724U, 1567316966U, 2374111491U, 1839843570U,\n\t  20815612U, 3727008608U, 3871996229U,  824061249U, 1932503978U,\n\t3404541726U,  758428924U, 2609331364U, 1223966026U, 1299179808U,\n\t 648499352U, 2180134401U,  880821170U, 3781130950U,  113491270U,\n\t1032413764U, 4185884695U, 2490396037U, 1201932817U, 4060951446U,\n\t4165586898U, 1629813212U, 2887821158U,  415045333U,  628926856U,\n\t2193466079U, 3391843445U, 2227540681U, 1907099846U, 2848448395U,\n\t1717828221U, 1372704537U, 1707549841U, 2294058813U, 2101214437U,\n\t2052479531U, 1695809164U, 3176587306U, 2632770465U,   81634404U,\n\t1603220563U,  644238487U,  302857763U,  897352968U, 2613146653U,\n\t1391730149U, 4245717312U, 4191828749U, 
1948492526U, 2618174230U,\n\t3992984522U, 2178852787U, 3596044509U, 3445573503U, 2026614616U,\n\t 915763564U, 3415689334U, 2532153403U, 3879661562U, 2215027417U,\n\t3111154986U, 2929478371U,  668346391U, 1152241381U, 2632029711U,\n\t3004150659U, 2135025926U,  948690501U, 2799119116U, 4228829406U,\n\t1981197489U, 4209064138U,  684318751U, 3459397845U,  201790843U,\n\t4022541136U, 3043635877U,  492509624U, 3263466772U, 1509148086U,\n\t 921459029U, 3198857146U,  705479721U, 3835966910U, 3603356465U,\n\t 576159741U, 1742849431U,  594214882U, 2055294343U, 3634861861U,\n\t 449571793U, 3246390646U, 3868232151U, 1479156585U, 2900125656U,\n\t2464815318U, 3960178104U, 1784261920U,   18311476U, 3627135050U,\n\t 644609697U,  424968996U,  919890700U, 2986824110U,  816423214U,\n\t4003562844U, 1392714305U, 1757384428U, 2569030598U,  995949559U,\n\t3875659880U, 2933807823U, 2752536860U, 2993858466U, 4030558899U,\n\t2770783427U, 2775406005U, 2777781742U, 1931292655U,  472147933U,\n\t3865853827U, 2726470545U, 2668412860U, 2887008249U,  408979190U,\n\t3578063323U, 3242082049U, 1778193530U,   27981909U, 2362826515U,\n\t 389875677U, 1043878156U,  581653903U, 3830568952U,  389535942U,\n\t3713523185U, 2768373359U, 2526101582U, 1998618197U, 1160859704U,\n\t3951172488U, 1098005003U,  906275699U, 3446228002U, 2220677963U,\n\t2059306445U,  132199571U,  476838790U, 1868039399U, 3097344807U,\n\t 857300945U,  396345050U, 2835919916U, 1782168828U, 1419519470U,\n\t4288137521U,  819087232U,  596301494U,  872823172U, 1526888217U,\n\t 805161465U, 1116186205U, 2829002754U, 2352620120U,  620121516U,\n\t 354159268U, 3601949785U,  209568138U, 1352371732U, 2145977349U,\n\t4236871834U, 1539414078U, 3558126206U, 3224857093U, 4164166682U,\n\t3817553440U, 3301780278U, 2682696837U, 3734994768U, 1370950260U,\n\t1477421202U, 2521315749U, 1330148125U, 1261554731U, 2769143688U,\n\t3554756293U, 4235882678U, 3254686059U, 3530579953U, 1215452615U,\n\t3574970923U, 4057131421U,  589224178U, 1000098193U,  
171190718U,\n\t2521852045U, 2351447494U, 2284441580U, 2646685513U, 3486933563U,\n\t3789864960U, 1190528160U, 1702536782U, 1534105589U, 4262946827U,\n\t2726686826U, 3584544841U, 2348270128U, 2145092281U, 2502718509U,\n\t1027832411U, 3571171153U, 1287361161U, 4011474411U, 3241215351U,\n\t2419700818U,  971242709U, 1361975763U, 1096842482U, 3271045537U,\n\t  81165449U,  612438025U, 3912966678U, 1356929810U,  733545735U,\n\t 537003843U, 1282953084U,  884458241U,  588930090U, 3930269801U,\n\t2961472450U, 1219535534U, 3632251943U,  268183903U, 1441240533U,\n\t3653903360U, 3854473319U, 2259087390U, 2548293048U, 2022641195U,\n\t2105543911U, 1764085217U, 3246183186U,  482438805U,  888317895U,\n\t2628314765U, 2466219854U,  717546004U, 2322237039U,  416725234U,\n\t1544049923U, 1797944973U, 3398652364U, 3111909456U,  485742908U,\n\t2277491072U, 1056355088U, 3181001278U,  129695079U, 2693624550U,\n\t1764438564U, 3797785470U,  195503713U, 3266519725U, 2053389444U,\n\t1961527818U, 3400226523U, 3777903038U, 2597274307U, 4235851091U,\n\t4094406648U, 2171410785U, 1781151386U, 1378577117U,  654643266U,\n\t3424024173U, 3385813322U,  679385799U,  479380913U,  681715441U,\n\t3096225905U,  276813409U, 3854398070U, 2721105350U,  831263315U,\n\t3276280337U, 2628301522U, 3984868494U, 1466099834U, 2104922114U,\n\t1412672743U,  820330404U, 3491501010U,  942735832U,  710652807U,\n\t3972652090U,  679881088U,   40577009U, 3705286397U, 2815423480U,\n\t3566262429U,  663396513U, 3777887429U, 4016670678U,  404539370U,\n\t1142712925U, 1140173408U, 2913248352U, 2872321286U,  263751841U,\n\t3175196073U, 3162557581U, 2878996619U,   75498548U, 3836833140U,\n\t3284664959U, 1157523805U,  112847376U,  207855609U, 1337979698U,\n\t1222578451U,  157107174U,  901174378U, 3883717063U, 1618632639U,\n\t1767889440U, 4264698824U, 1582999313U,  884471997U, 2508825098U,\n\t3756370771U, 2457213553U, 3565776881U, 3709583214U,  915609601U,\n\t 460833524U, 1091049576U,   85522880U,    2553251U,  132102809U,\n\t2429882442U, 
2562084610U, 1386507633U, 4112471229U,   21965213U,\n\t1981516006U, 2418435617U, 3054872091U, 4251511224U, 2025783543U,\n\t1916911512U, 2454491136U, 3938440891U, 3825869115U, 1121698605U,\n\t3463052265U,  802340101U, 1912886800U, 4031997367U, 3550640406U,\n\t1596096923U,  610150600U,  431464457U, 2541325046U,  486478003U,\n\t 739704936U, 2862696430U, 3037903166U, 1129749694U, 2611481261U,\n\t1228993498U,  510075548U, 3424962587U, 2458689681U,  818934833U,\n\t4233309125U, 1608196251U, 3419476016U, 1858543939U, 2682166524U,\n\t3317854285U,  631986188U, 3008214764U,  613826412U, 3567358221U,\n\t3512343882U, 1552467474U, 3316162670U, 1275841024U, 4142173454U,\n\t 565267881U,  768644821U,  198310105U, 2396688616U, 1837659011U,\n\t 203429334U,  854539004U, 4235811518U, 3338304926U, 3730418692U,\n\t3852254981U, 3032046452U, 2329811860U, 2303590566U, 2696092212U,\n\t3894665932U,  145835667U,  249563655U, 1932210840U, 2431696407U,\n\t3312636759U,  214962629U, 2092026914U, 3020145527U, 4073039873U,\n\t2739105705U, 1308336752U,  855104522U, 2391715321U,   67448785U,\n\t 547989482U,  854411802U, 3608633740U,  431731530U,  537375589U,\n\t3888005760U,  696099141U,  397343236U, 1864511780U,   44029739U,\n\t1729526891U, 1993398655U, 2010173426U, 2591546756U,  275223291U,\n\t1503900299U, 4217765081U, 2185635252U, 1122436015U, 3550155364U,\n\t 681707194U, 3260479338U,  933579397U, 2983029282U, 2505504587U,\n\t2667410393U, 2962684490U, 4139721708U, 2658172284U, 2452602383U,\n\t2607631612U, 1344296217U, 3075398709U, 2949785295U, 1049956168U,\n\t3917185129U, 2155660174U, 3280524475U, 1503827867U,  674380765U,\n\t1918468193U, 3843983676U,  634358221U, 2538335643U, 1873351298U,\n\t3368723763U, 2129144130U, 3203528633U, 3087174986U, 2691698871U,\n\t2516284287U,   24437745U, 1118381474U, 2816314867U, 2448576035U,\n\t4281989654U,  217287825U,  165872888U, 2628995722U, 3533525116U,\n\t2721669106U,  872340568U, 3429930655U, 3309047304U, 3916704967U,\n\t3270160355U, 1348884255U, 1634797670U,  
881214967U, 4259633554U,\n\t 174613027U, 1103974314U, 1625224232U, 2678368291U, 1133866707U,\n\t3853082619U, 4073196549U, 1189620777U,  637238656U,  930241537U,\n\t4042750792U, 3842136042U, 2417007212U, 2524907510U, 1243036827U,\n\t1282059441U, 3764588774U, 1394459615U, 2323620015U, 1166152231U,\n\t3307479609U, 3849322257U, 3507445699U, 4247696636U,  758393720U,\n\t 967665141U, 1095244571U, 1319812152U,  407678762U, 2640605208U,\n\t2170766134U, 3663594275U, 4039329364U, 2512175520U,  725523154U,\n\t2249807004U, 3312617979U, 2414634172U, 1278482215U,  349206484U,\n\t1573063308U, 1196429124U, 3873264116U, 2400067801U,  268795167U,\n\t 226175489U, 2961367263U, 1968719665U,   42656370U, 1010790699U,\n\t 561600615U, 2422453992U, 3082197735U, 1636700484U, 3977715296U,\n\t3125350482U, 3478021514U, 2227819446U, 1540868045U, 3061908980U,\n\t1087362407U, 3625200291U,  361937537U,  580441897U, 1520043666U,\n\t2270875402U, 1009161260U, 2502355842U, 4278769785U,  473902412U,\n\t1057239083U, 1905829039U, 1483781177U, 2080011417U, 1207494246U,\n\t1806991954U, 2194674403U, 3455972205U,  807207678U, 3655655687U,\n\t 674112918U,  195425752U, 3917890095U, 1874364234U, 1837892715U,\n\t3663478166U, 1548892014U, 2570748714U, 2049929836U, 2167029704U,\n\t 697543767U, 3499545023U, 3342496315U, 1725251190U, 3561387469U,\n\t2905606616U, 1580182447U, 3934525927U, 4103172792U, 1365672522U,\n\t1534795737U, 3308667416U, 2841911405U, 3943182730U, 4072020313U,\n\t3494770452U, 3332626671U,   55327267U,  478030603U,  411080625U,\n\t3419529010U, 1604767823U, 3513468014U,  570668510U,  913790824U,\n\t2283967995U,  695159462U, 3825542932U, 4150698144U, 1829758699U,\n\t 202895590U, 1609122645U, 1267651008U, 2910315509U, 2511475445U,\n\t2477423819U, 3932081579U,  900879979U, 2145588390U, 2670007504U,\n\t 580819444U, 1864996828U, 2526325979U, 1019124258U,  815508628U,\n\t2765933989U, 1277301341U, 3006021786U,  855540956U,  288025710U,\n\t1919594237U, 2331223864U,  177452412U, 2475870369U, 
2689291749U,\n\t 865194284U,  253432152U, 2628531804U, 2861208555U, 2361597573U,\n\t1653952120U, 1039661024U, 2159959078U, 3709040440U, 3564718533U,\n\t2596878672U, 2041442161U,   31164696U, 2662962485U, 3665637339U,\n\t1678115244U, 2699839832U, 3651968520U, 3521595541U,  458433303U,\n\t2423096824U,   21831741U,  380011703U, 2498168716U,  861806087U,\n\t1673574843U, 4188794405U, 2520563651U, 2632279153U, 2170465525U,\n\t4171949898U, 3886039621U, 1661344005U, 3424285243U,  992588372U,\n\t2500984144U, 2993248497U, 3590193895U, 1535327365U,  515645636U,\n\t 131633450U, 3729760261U, 1613045101U, 3254194278U,   15889678U,\n\t1493590689U,  244148718U, 2991472662U, 1401629333U,  777349878U,\n\t2501401703U, 4285518317U, 3794656178U,  955526526U, 3442142820U,\n\t3970298374U,  736025417U, 2737370764U, 1271509744U,  440570731U,\n\t 136141826U, 1596189518U,  923399175U,  257541519U, 3505774281U,\n\t2194358432U, 2518162991U, 1379893637U, 2667767062U, 3748146247U,\n\t1821712620U, 3923161384U, 1947811444U, 2392527197U, 4127419685U,\n\t1423694998U, 4156576871U, 1382885582U, 3420127279U, 3617499534U,\n\t2994377493U, 4038063986U, 1918458672U, 2983166794U, 4200449033U,\n\t 353294540U, 1609232588U,  243926648U, 2332803291U,  507996832U,\n\t2392838793U, 4075145196U, 2060984340U, 4287475136U,   88232602U,\n\t2491531140U, 4159725633U, 2272075455U,  759298618U,  201384554U,\n\t 838356250U, 1416268324U,  674476934U,   90795364U,  141672229U,\n\t3660399588U, 4196417251U, 3249270244U, 3774530247U,   59587265U,\n\t3683164208U,   19392575U, 1463123697U, 1882205379U,  293780489U,\n\t2553160622U, 2933904694U,  675638239U, 2851336944U, 1435238743U,\n\t2448730183U,  804436302U, 2119845972U,  322560608U, 4097732704U,\n\t2987802540U,  641492617U, 2575442710U, 4217822703U, 3271835300U,\n\t2836418300U, 3739921620U, 2138378768U, 2879771855U, 4294903423U,\n\t3121097946U, 2603440486U, 2560820391U, 1012930944U, 2313499967U,\n\t 584489368U, 3431165766U,  897384869U, 2062537737U, 
2847889234U,\n\t3742362450U, 2951174585U, 4204621084U, 1109373893U, 3668075775U,\n\t2750138839U, 3518055702U,  733072558U, 4169325400U,  788493625U\n};\nstatic const uint64_t init_gen_rand_64_expected[] = {\n\tKQU(16924766246869039260), KQU( 8201438687333352714),\n\tKQU( 2265290287015001750), KQU(18397264611805473832),\n\tKQU( 3375255223302384358), KQU( 6345559975416828796),\n\tKQU(18229739242790328073), KQU( 7596792742098800905),\n\tKQU(  255338647169685981), KQU( 2052747240048610300),\n\tKQU(18328151576097299343), KQU(12472905421133796567),\n\tKQU(11315245349717600863), KQU(16594110197775871209),\n\tKQU(15708751964632456450), KQU(10452031272054632535),\n\tKQU(11097646720811454386), KQU( 4556090668445745441),\n\tKQU(17116187693090663106), KQU(14931526836144510645),\n\tKQU( 9190752218020552591), KQU( 9625800285771901401),\n\tKQU(13995141077659972832), KQU( 5194209094927829625),\n\tKQU( 4156788379151063303), KQU( 8523452593770139494),\n\tKQU(14082382103049296727), KQU( 2462601863986088483),\n\tKQU( 3030583461592840678), KQU( 5221622077872827681),\n\tKQU( 3084210671228981236), KQU(13956758381389953823),\n\tKQU(13503889856213423831), KQU(15696904024189836170),\n\tKQU( 4612584152877036206), KQU( 6231135538447867881),\n\tKQU(10172457294158869468), KQU( 6452258628466708150),\n\tKQU(14044432824917330221), KQU(  370168364480044279),\n\tKQU(10102144686427193359), KQU(  667870489994776076),\n\tKQU( 2732271956925885858), KQU(18027788905977284151),\n\tKQU(15009842788582923859), KQU( 7136357960180199542),\n\tKQU(15901736243475578127), KQU(16951293785352615701),\n\tKQU(10551492125243691632), KQU(17668869969146434804),\n\tKQU(13646002971174390445), KQU( 9804471050759613248),\n\tKQU( 5511670439655935493), KQU(18103342091070400926),\n\tKQU(17224512747665137533), KQU(15534627482992618168),\n\tKQU( 1423813266186582647), KQU(15821176807932930024),\n\tKQU(   30323369733607156), KQU(11599382494723479403),\n\tKQU(  653856076586810062), KQU( 
3176437395144899659),\n\tKQU(14028076268147963917), KQU(16156398271809666195),\n\tKQU( 3166955484848201676), KQU( 5746805620136919390),\n\tKQU(17297845208891256593), KQU(11691653183226428483),\n\tKQU(17900026146506981577), KQU(15387382115755971042),\n\tKQU(16923567681040845943), KQU( 8039057517199388606),\n\tKQU(11748409241468629263), KQU(  794358245539076095),\n\tKQU(13438501964693401242), KQU(14036803236515618962),\n\tKQU( 5252311215205424721), KQU(17806589612915509081),\n\tKQU( 6802767092397596006), KQU(14212120431184557140),\n\tKQU( 1072951366761385712), KQU(13098491780722836296),\n\tKQU( 9466676828710797353), KQU(12673056849042830081),\n\tKQU(12763726623645357580), KQU(16468961652999309493),\n\tKQU(15305979875636438926), KQU(17444713151223449734),\n\tKQU( 5692214267627883674), KQU(13049589139196151505),\n\tKQU(  880115207831670745), KQU( 1776529075789695498),\n\tKQU(16695225897801466485), KQU(10666901778795346845),\n\tKQU( 6164389346722833869), KQU( 2863817793264300475),\n\tKQU( 9464049921886304754), KQU( 3993566636740015468),\n\tKQU( 9983749692528514136), KQU(16375286075057755211),\n\tKQU(16042643417005440820), KQU(11445419662923489877),\n\tKQU( 7999038846885158836), KQU( 6721913661721511535),\n\tKQU( 5363052654139357320), KQU( 1817788761173584205),\n\tKQU(13290974386445856444), KQU( 4650350818937984680),\n\tKQU( 8219183528102484836), KQU( 1569862923500819899),\n\tKQU( 4189359732136641860), KQU(14202822961683148583),\n\tKQU( 4457498315309429058), KQU(13089067387019074834),\n\tKQU(11075517153328927293), KQU(10277016248336668389),\n\tKQU( 7070509725324401122), KQU(17808892017780289380),\n\tKQU(13143367339909287349), KQU( 1377743745360085151),\n\tKQU( 5749341807421286485), KQU(14832814616770931325),\n\tKQU( 7688820635324359492), KQU(10960474011539770045),\n\tKQU(   81970066653179790), KQU(12619476072607878022),\n\tKQU( 4419566616271201744), KQU(15147917311750568503),\n\tKQU( 5549739182852706345), KQU( 7308198397975204770),\n\tKQU(13580425496671289278), 
KQU(17070764785210130301),\n\tKQU( 8202832846285604405), KQU( 6873046287640887249),\n\tKQU( 6927424434308206114), KQU( 6139014645937224874),\n\tKQU(10290373645978487639), KQU(15904261291701523804),\n\tKQU( 9628743442057826883), KQU(18383429096255546714),\n\tKQU( 4977413265753686967), KQU( 7714317492425012869),\n\tKQU( 9025232586309926193), KQU(14627338359776709107),\n\tKQU(14759849896467790763), KQU(10931129435864423252),\n\tKQU( 4588456988775014359), KQU(10699388531797056724),\n\tKQU(  468652268869238792), KQU( 5755943035328078086),\n\tKQU( 2102437379988580216), KQU( 9986312786506674028),\n\tKQU( 2654207180040945604), KQU( 8726634790559960062),\n\tKQU(  100497234871808137), KQU( 2800137176951425819),\n\tKQU( 6076627612918553487), KQU( 5780186919186152796),\n\tKQU( 8179183595769929098), KQU( 6009426283716221169),\n\tKQU( 2796662551397449358), KQU( 1756961367041986764),\n\tKQU( 6972897917355606205), KQU(14524774345368968243),\n\tKQU( 2773529684745706940), KQU( 4853632376213075959),\n\tKQU( 4198177923731358102), KQU( 8271224913084139776),\n\tKQU( 2741753121611092226), KQU(16782366145996731181),\n\tKQU(15426125238972640790), KQU(13595497100671260342),\n\tKQU( 3173531022836259898), KQU( 6573264560319511662),\n\tKQU(18041111951511157441), KQU( 2351433581833135952),\n\tKQU( 3113255578908173487), KQU( 1739371330877858784),\n\tKQU(16046126562789165480), KQU( 8072101652214192925),\n\tKQU(15267091584090664910), KQU( 9309579200403648940),\n\tKQU( 5218892439752408722), KQU(14492477246004337115),\n\tKQU(17431037586679770619), KQU( 7385248135963250480),\n\tKQU( 9580144956565560660), KQU( 4919546228040008720),\n\tKQU(15261542469145035584), KQU(18233297270822253102),\n\tKQU( 5453248417992302857), KQU( 9309519155931460285),\n\tKQU(10342813012345291756), KQU(15676085186784762381),\n\tKQU(15912092950691300645), KQU( 9371053121499003195),\n\tKQU( 9897186478226866746), KQU(14061858287188196327),\n\tKQU(  122575971620788119), KQU(12146750969116317754),\n\tKQU( 4438317272813245201), KQU( 
8332576791009527119),\n\tKQU(13907785691786542057), KQU(10374194887283287467),\n\tKQU( 2098798755649059566), KQU( 3416235197748288894),\n\tKQU( 8688269957320773484), KQU( 7503964602397371571),\n\tKQU(16724977015147478236), KQU( 9461512855439858184),\n\tKQU(13259049744534534727), KQU( 3583094952542899294),\n\tKQU( 8764245731305528292), KQU(13240823595462088985),\n\tKQU(13716141617617910448), KQU(18114969519935960955),\n\tKQU( 2297553615798302206), KQU( 4585521442944663362),\n\tKQU(17776858680630198686), KQU( 4685873229192163363),\n\tKQU(  152558080671135627), KQU(15424900540842670088),\n\tKQU(13229630297130024108), KQU(17530268788245718717),\n\tKQU(16675633913065714144), KQU( 3158912717897568068),\n\tKQU(15399132185380087288), KQU( 7401418744515677872),\n\tKQU(13135412922344398535), KQU( 6385314346100509511),\n\tKQU(13962867001134161139), KQU(10272780155442671999),\n\tKQU(12894856086597769142), KQU(13340877795287554994),\n\tKQU(12913630602094607396), KQU(12543167911119793857),\n\tKQU(17343570372251873096), KQU(10959487764494150545),\n\tKQU( 6966737953093821128), KQU(13780699135496988601),\n\tKQU( 4405070719380142046), KQU(14923788365607284982),\n\tKQU( 2869487678905148380), KQU( 6416272754197188403),\n\tKQU(15017380475943612591), KQU( 1995636220918429487),\n\tKQU( 3402016804620122716), KQU(15800188663407057080),\n\tKQU(11362369990390932882), KQU(15262183501637986147),\n\tKQU(10239175385387371494), KQU( 9352042420365748334),\n\tKQU( 1682457034285119875), KQU( 1724710651376289644),\n\tKQU( 2038157098893817966), KQU( 9897825558324608773),\n\tKQU( 1477666236519164736), KQU(16835397314511233640),\n\tKQU(10370866327005346508), KQU(10157504370660621982),\n\tKQU(12113904045335882069), KQU(13326444439742783008),\n\tKQU(11302769043000765804), KQU(13594979923955228484),\n\tKQU(11779351762613475968), KQU( 3786101619539298383),\n\tKQU( 8021122969180846063), KQU(15745904401162500495),\n\tKQU(10762168465993897267), KQU(13552058957896319026),\n\tKQU(11200228655252462013), KQU( 
5035370357337441226),\n\tKQU( 7593918984545500013), KQU( 5418554918361528700),\n\tKQU( 4858270799405446371), KQU( 9974659566876282544),\n\tKQU(18227595922273957859), KQU( 2772778443635656220),\n\tKQU(14285143053182085385), KQU( 9939700992429600469),\n\tKQU(12756185904545598068), KQU( 2020783375367345262),\n\tKQU(   57026775058331227), KQU(  950827867930065454),\n\tKQU( 6602279670145371217), KQU( 2291171535443566929),\n\tKQU( 5832380724425010313), KQU( 1220343904715982285),\n\tKQU(17045542598598037633), KQU(15460481779702820971),\n\tKQU(13948388779949365130), KQU(13975040175430829518),\n\tKQU(17477538238425541763), KQU(11104663041851745725),\n\tKQU(15860992957141157587), KQU(14529434633012950138),\n\tKQU( 2504838019075394203), KQU( 7512113882611121886),\n\tKQU( 4859973559980886617), KQU( 1258601555703250219),\n\tKQU(15594548157514316394), KQU( 4516730171963773048),\n\tKQU(11380103193905031983), KQU( 6809282239982353344),\n\tKQU(18045256930420065002), KQU( 2453702683108791859),\n\tKQU(  977214582986981460), KQU( 2006410402232713466),\n\tKQU( 6192236267216378358), KQU( 3429468402195675253),\n\tKQU(18146933153017348921), KQU(17369978576367231139),\n\tKQU( 1246940717230386603), KQU(11335758870083327110),\n\tKQU(14166488801730353682), KQU( 9008573127269635732),\n\tKQU(10776025389820643815), KQU(15087605441903942962),\n\tKQU( 1359542462712147922), KQU(13898874411226454206),\n\tKQU(17911176066536804411), KQU( 9435590428600085274),\n\tKQU(  294488509967864007), KQU( 8890111397567922046),\n\tKQU( 7987823476034328778), KQU(13263827582440967651),\n\tKQU( 7503774813106751573), KQU(14974747296185646837),\n\tKQU( 8504765037032103375), KQU(17340303357444536213),\n\tKQU( 7704610912964485743), KQU( 8107533670327205061),\n\tKQU( 9062969835083315985), KQU(16968963142126734184),\n\tKQU(12958041214190810180), KQU( 2720170147759570200),\n\tKQU( 2986358963942189566), KQU(14884226322219356580),\n\tKQU(  286224325144368520), KQU(11313800433154279797),\n\tKQU(18366849528439673248), 
KQU(17899725929482368789),\n\tKQU( 3730004284609106799), KQU( 1654474302052767205),\n\tKQU( 5006698007047077032), KQU( 8196893913601182838),\n\tKQU(15214541774425211640), KQU(17391346045606626073),\n\tKQU( 8369003584076969089), KQU( 3939046733368550293),\n\tKQU(10178639720308707785), KQU( 2180248669304388697),\n\tKQU(   62894391300126322), KQU( 9205708961736223191),\n\tKQU( 6837431058165360438), KQU( 3150743890848308214),\n\tKQU(17849330658111464583), KQU(12214815643135450865),\n\tKQU(13410713840519603402), KQU( 3200778126692046802),\n\tKQU(13354780043041779313), KQU(  800850022756886036),\n\tKQU(15660052933953067433), KQU( 6572823544154375676),\n\tKQU(11030281857015819266), KQU(12682241941471433835),\n\tKQU(11654136407300274693), KQU( 4517795492388641109),\n\tKQU( 9757017371504524244), KQU(17833043400781889277),\n\tKQU(12685085201747792227), KQU(10408057728835019573),\n\tKQU(   98370418513455221), KQU( 6732663555696848598),\n\tKQU(13248530959948529780), KQU( 3530441401230622826),\n\tKQU(18188251992895660615), KQU( 1847918354186383756),\n\tKQU( 1127392190402660921), KQU(11293734643143819463),\n\tKQU( 3015506344578682982), KQU(13852645444071153329),\n\tKQU( 2121359659091349142), KQU( 1294604376116677694),\n\tKQU( 5616576231286352318), KQU( 7112502442954235625),\n\tKQU(11676228199551561689), KQU(12925182803007305359),\n\tKQU( 7852375518160493082), KQU( 1136513130539296154),\n\tKQU( 5636923900916593195), KQU( 3221077517612607747),\n\tKQU(17784790465798152513), KQU( 3554210049056995938),\n\tKQU(17476839685878225874), KQU( 3206836372585575732),\n\tKQU( 2765333945644823430), KQU(10080070903718799528),\n\tKQU( 5412370818878286353), KQU( 9689685887726257728),\n\tKQU( 8236117509123533998), KQU( 1951139137165040214),\n\tKQU( 4492205209227980349), KQU(16541291230861602967),\n\tKQU( 1424371548301437940), KQU( 9117562079669206794),\n\tKQU(14374681563251691625), KQU(13873164030199921303),\n\tKQU( 6680317946770936731), KQU(15586334026918276214),\n\tKQU(10896213950976109802), KQU( 
9506261949596413689),\n\tKQU( 9903949574308040616), KQU( 6038397344557204470),\n\tKQU(  174601465422373648), KQU(15946141191338238030),\n\tKQU(17142225620992044937), KQU( 7552030283784477064),\n\tKQU( 2947372384532947997), KQU(  510797021688197711),\n\tKQU( 4962499439249363461), KQU(   23770320158385357),\n\tKQU(  959774499105138124), KQU( 1468396011518788276),\n\tKQU( 2015698006852312308), KQU( 4149400718489980136),\n\tKQU( 5992916099522371188), KQU(10819182935265531076),\n\tKQU(16189787999192351131), KQU(  342833961790261950),\n\tKQU(12470830319550495336), KQU(18128495041912812501),\n\tKQU( 1193600899723524337), KQU( 9056793666590079770),\n\tKQU( 2154021227041669041), KQU( 4963570213951235735),\n\tKQU( 4865075960209211409), KQU( 2097724599039942963),\n\tKQU( 2024080278583179845), KQU(11527054549196576736),\n\tKQU(10650256084182390252), KQU( 4808408648695766755),\n\tKQU( 1642839215013788844), KQU(10607187948250398390),\n\tKQU( 7076868166085913508), KQU(  730522571106887032),\n\tKQU(12500579240208524895), KQU( 4484390097311355324),\n\tKQU(15145801330700623870), KQU( 8055827661392944028),\n\tKQU( 5865092976832712268), KQU(15159212508053625143),\n\tKQU( 3560964582876483341), KQU( 4070052741344438280),\n\tKQU( 6032585709886855634), KQU(15643262320904604873),\n\tKQU( 2565119772293371111), KQU(  318314293065348260),\n\tKQU(15047458749141511872), KQU( 7772788389811528730),\n\tKQU( 7081187494343801976), KQU( 6465136009467253947),\n\tKQU(10425940692543362069), KQU(  554608190318339115),\n\tKQU(14796699860302125214), KQU( 1638153134431111443),\n\tKQU(10336967447052276248), KQU( 8412308070396592958),\n\tKQU( 4004557277152051226), KQU( 8143598997278774834),\n\tKQU(16413323996508783221), KQU(13139418758033994949),\n\tKQU( 9772709138335006667), KQU( 2818167159287157659),\n\tKQU(17091740573832523669), KQU(14629199013130751608),\n\tKQU(18268322711500338185), KQU( 8290963415675493063),\n\tKQU( 8830864907452542588), KQU( 1614839084637494849),\n\tKQU(14855358500870422231), KQU( 
3472996748392519937),\n\tKQU(15317151166268877716), KQU( 5825895018698400362),\n\tKQU(16730208429367544129), KQU(10481156578141202800),\n\tKQU( 4746166512382823750), KQU(12720876014472464998),\n\tKQU( 8825177124486735972), KQU(13733447296837467838),\n\tKQU( 6412293741681359625), KQU( 8313213138756135033),\n\tKQU(11421481194803712517), KQU( 7997007691544174032),\n\tKQU( 6812963847917605930), KQU( 9683091901227558641),\n\tKQU(14703594165860324713), KQU( 1775476144519618309),\n\tKQU( 2724283288516469519), KQU(  717642555185856868),\n\tKQU( 8736402192215092346), KQU(11878800336431381021),\n\tKQU( 4348816066017061293), KQU( 6115112756583631307),\n\tKQU( 9176597239667142976), KQU(12615622714894259204),\n\tKQU(10283406711301385987), KQU( 5111762509485379420),\n\tKQU( 3118290051198688449), KQU( 7345123071632232145),\n\tKQU( 9176423451688682359), KQU( 4843865456157868971),\n\tKQU(12008036363752566088), KQU(12058837181919397720),\n\tKQU( 2145073958457347366), KQU( 1526504881672818067),\n\tKQU( 3488830105567134848), KQU(13208362960674805143),\n\tKQU( 4077549672899572192), KQU( 7770995684693818365),\n\tKQU( 1398532341546313593), KQU(12711859908703927840),\n\tKQU( 1417561172594446813), KQU(17045191024194170604),\n\tKQU( 4101933177604931713), KQU(14708428834203480320),\n\tKQU(17447509264469407724), KQU(14314821973983434255),\n\tKQU(17990472271061617265), KQU( 5087756685841673942),\n\tKQU(12797820586893859939), KQU( 1778128952671092879),\n\tKQU( 3535918530508665898), KQU( 9035729701042481301),\n\tKQU(14808661568277079962), KQU(14587345077537747914),\n\tKQU(11920080002323122708), KQU( 6426515805197278753),\n\tKQU( 3295612216725984831), KQU(11040722532100876120),\n\tKQU(12305952936387598754), KQU(16097391899742004253),\n\tKQU( 4908537335606182208), KQU(12446674552196795504),\n\tKQU(16010497855816895177), KQU( 9194378874788615551),\n\tKQU( 3382957529567613384), KQU( 5154647600754974077),\n\tKQU( 9801822865328396141), KQU( 9023662173919288143),\n\tKQU(17623115353825147868), KQU( 
8238115767443015816),\n\tKQU(15811444159859002560), KQU( 9085612528904059661),\n\tKQU( 6888601089398614254), KQU(  258252992894160189),\n\tKQU( 6704363880792428622), KQU( 6114966032147235763),\n\tKQU(11075393882690261875), KQU( 8797664238933620407),\n\tKQU( 5901892006476726920), KQU( 5309780159285518958),\n\tKQU(14940808387240817367), KQU(14642032021449656698),\n\tKQU( 9808256672068504139), KQU( 3670135111380607658),\n\tKQU(11211211097845960152), KQU( 1474304506716695808),\n\tKQU(15843166204506876239), KQU( 7661051252471780561),\n\tKQU(10170905502249418476), KQU( 7801416045582028589),\n\tKQU( 2763981484737053050), KQU( 9491377905499253054),\n\tKQU(16201395896336915095), KQU( 9256513756442782198),\n\tKQU( 5411283157972456034), KQU( 5059433122288321676),\n\tKQU( 4327408006721123357), KQU( 9278544078834433377),\n\tKQU( 7601527110882281612), KQU(11848295896975505251),\n\tKQU(12096998801094735560), KQU(14773480339823506413),\n\tKQU(15586227433895802149), KQU(12786541257830242872),\n\tKQU( 6904692985140503067), KQU( 5309011515263103959),\n\tKQU(12105257191179371066), KQU(14654380212442225037),\n\tKQU( 2556774974190695009), KQU( 4461297399927600261),\n\tKQU(14888225660915118646), KQU(14915459341148291824),\n\tKQU( 2738802166252327631), KQU( 6047155789239131512),\n\tKQU(12920545353217010338), KQU(10697617257007840205),\n\tKQU( 2751585253158203504), KQU(13252729159780047496),\n\tKQU(14700326134672815469), KQU(14082527904374600529),\n\tKQU(16852962273496542070), KQU(17446675504235853907),\n\tKQU(15019600398527572311), KQU(12312781346344081551),\n\tKQU(14524667935039810450), KQU( 5634005663377195738),\n\tKQU(11375574739525000569), KQU( 2423665396433260040),\n\tKQU( 5222836914796015410), KQU( 4397666386492647387),\n\tKQU( 4619294441691707638), KQU(  665088602354770716),\n\tKQU(13246495665281593610), KQU( 6564144270549729409),\n\tKQU(10223216188145661688), KQU( 3961556907299230585),\n\tKQU(11543262515492439914), KQU(16118031437285993790),\n\tKQU( 7143417964520166465), 
KQU(13295053515909486772),\n\tKQU(   40434666004899675), KQU(17127804194038347164),\n\tKQU( 8599165966560586269), KQU( 8214016749011284903),\n\tKQU(13725130352140465239), KQU( 5467254474431726291),\n\tKQU( 7748584297438219877), KQU(16933551114829772472),\n\tKQU( 2169618439506799400), KQU( 2169787627665113463),\n\tKQU(17314493571267943764), KQU(18053575102911354912),\n\tKQU(11928303275378476973), KQU(11593850925061715550),\n\tKQU(17782269923473589362), KQU( 3280235307704747039),\n\tKQU( 6145343578598685149), KQU(17080117031114086090),\n\tKQU(18066839902983594755), KQU( 6517508430331020706),\n\tKQU( 8092908893950411541), KQU(12558378233386153732),\n\tKQU( 4476532167973132976), KQU(16081642430367025016),\n\tKQU( 4233154094369139361), KQU( 8693630486693161027),\n\tKQU(11244959343027742285), KQU(12273503967768513508),\n\tKQU(14108978636385284876), KQU( 7242414665378826984),\n\tKQU( 6561316938846562432), KQU( 8601038474994665795),\n\tKQU(17532942353612365904), KQU(17940076637020912186),\n\tKQU( 7340260368823171304), KQU( 7061807613916067905),\n\tKQU(10561734935039519326), KQU(17990796503724650862),\n\tKQU( 6208732943911827159), KQU(  359077562804090617),\n\tKQU(14177751537784403113), KQU(10659599444915362902),\n\tKQU(15081727220615085833), KQU(13417573895659757486),\n\tKQU(15513842342017811524), KQU(11814141516204288231),\n\tKQU( 1827312513875101814), KQU( 2804611699894603103),\n\tKQU(17116500469975602763), KQU(12270191815211952087),\n\tKQU(12256358467786024988), KQU(18435021722453971267),\n\tKQU(  671330264390865618), KQU(  476504300460286050),\n\tKQU(16465470901027093441), KQU( 4047724406247136402),\n\tKQU( 1322305451411883346), KQU( 1388308688834322280),\n\tKQU( 7303989085269758176), KQU( 9323792664765233642),\n\tKQU( 4542762575316368936), KQU(17342696132794337618),\n\tKQU( 4588025054768498379), KQU(13415475057390330804),\n\tKQU(17880279491733405570), KQU(10610553400618620353),\n\tKQU( 3180842072658960139), KQU(13002966655454270120),\n\tKQU( 1665301181064982826), KQU( 
7083673946791258979),\n\tKQU(  190522247122496820), KQU(17388280237250677740),\n\tKQU( 8430770379923642945), KQU(12987180971921668584),\n\tKQU( 2311086108365390642), KQU( 2870984383579822345),\n\tKQU(14014682609164653318), KQU(14467187293062251484),\n\tKQU(  192186361147413298), KQU(15171951713531796524),\n\tKQU( 9900305495015948728), KQU(17958004775615466344),\n\tKQU(14346380954498606514), KQU(18040047357617407096),\n\tKQU( 5035237584833424532), KQU(15089555460613972287),\n\tKQU( 4131411873749729831), KQU( 1329013581168250330),\n\tKQU(10095353333051193949), KQU(10749518561022462716),\n\tKQU( 9050611429810755847), KQU(15022028840236655649),\n\tKQU( 8775554279239748298), KQU(13105754025489230502),\n\tKQU(15471300118574167585), KQU(   89864764002355628),\n\tKQU( 8776416323420466637), KQU( 5280258630612040891),\n\tKQU( 2719174488591862912), KQU( 7599309137399661994),\n\tKQU(15012887256778039979), KQU(14062981725630928925),\n\tKQU(12038536286991689603), KQU( 7089756544681775245),\n\tKQU(10376661532744718039), KQU( 1265198725901533130),\n\tKQU(13807996727081142408), KQU( 2935019626765036403),\n\tKQU( 7651672460680700141), KQU( 3644093016200370795),\n\tKQU( 2840982578090080674), KQU(17956262740157449201),\n\tKQU(18267979450492880548), KQU(11799503659796848070),\n\tKQU( 9942537025669672388), KQU(11886606816406990297),\n\tKQU( 5488594946437447576), KQU( 7226714353282744302),\n\tKQU( 3784851653123877043), KQU(  878018453244803041),\n\tKQU(12110022586268616085), KQU(  734072179404675123),\n\tKQU(11869573627998248542), KQU(  469150421297783998),\n\tKQU(  260151124912803804), KQU(11639179410120968649),\n\tKQU( 9318165193840846253), KQU(12795671722734758075),\n\tKQU(15318410297267253933), KQU(  691524703570062620),\n\tKQU( 5837129010576994601), KQU(15045963859726941052),\n\tKQU( 5850056944932238169), KQU(12017434144750943807),\n\tKQU( 7447139064928956574), KQU( 3101711812658245019),\n\tKQU(16052940704474982954), KQU(18195745945986994042),\n\tKQU( 8932252132785575659), 
KQU(13390817488106794834),\n\tKQU(11582771836502517453), KQU( 4964411326683611686),\n\tKQU( 2195093981702694011), KQU(14145229538389675669),\n\tKQU(16459605532062271798), KQU(  866316924816482864),\n\tKQU( 4593041209937286377), KQU( 8415491391910972138),\n\tKQU( 4171236715600528969), KQU(16637569303336782889),\n\tKQU( 2002011073439212680), KQU(17695124661097601411),\n\tKQU( 4627687053598611702), KQU( 7895831936020190403),\n\tKQU( 8455951300917267802), KQU( 2923861649108534854),\n\tKQU( 8344557563927786255), KQU( 6408671940373352556),\n\tKQU(12210227354536675772), KQU(14294804157294222295),\n\tKQU(10103022425071085127), KQU(10092959489504123771),\n\tKQU( 6554774405376736268), KQU(12629917718410641774),\n\tKQU( 6260933257596067126), KQU( 2460827021439369673),\n\tKQU( 2541962996717103668), KQU(  597377203127351475),\n\tKQU( 5316984203117315309), KQU( 4811211393563241961),\n\tKQU(13119698597255811641), KQU( 8048691512862388981),\n\tKQU(10216818971194073842), KQU( 4612229970165291764),\n\tKQU(10000980798419974770), KQU( 6877640812402540687),\n\tKQU( 1488727563290436992), KQU( 2227774069895697318),\n\tKQU(11237754507523316593), KQU(13478948605382290972),\n\tKQU( 1963583846976858124), KQU( 5512309205269276457),\n\tKQU( 3972770164717652347), KQU( 3841751276198975037),\n\tKQU(10283343042181903117), KQU( 8564001259792872199),\n\tKQU(16472187244722489221), KQU( 8953493499268945921),\n\tKQU( 3518747340357279580), KQU( 4003157546223963073),\n\tKQU( 3270305958289814590), KQU( 3966704458129482496),\n\tKQU( 8122141865926661939), KQU(14627734748099506653),\n\tKQU(13064426990862560568), KQU( 2414079187889870829),\n\tKQU( 5378461209354225306), KQU(10841985740128255566),\n\tKQU(  538582442885401738), KQU( 7535089183482905946),\n\tKQU(16117559957598879095), KQU( 8477890721414539741),\n\tKQU( 1459127491209533386), KQU(17035126360733620462),\n\tKQU( 8517668552872379126), KQU(10292151468337355014),\n\tKQU(17081267732745344157), KQU(13751455337946087178),\n\tKQU(14026945459523832966), KQU( 
6653278775061723516),\n\tKQU(10619085543856390441), KQU( 2196343631481122885),\n\tKQU(10045966074702826136), KQU(10082317330452718282),\n\tKQU( 5920859259504831242), KQU( 9951879073426540617),\n\tKQU( 7074696649151414158), KQU(15808193543879464318),\n\tKQU( 7385247772746953374), KQU( 3192003544283864292),\n\tKQU(18153684490917593847), KQU(12423498260668568905),\n\tKQU(10957758099756378169), KQU(11488762179911016040),\n\tKQU( 2099931186465333782), KQU(11180979581250294432),\n\tKQU( 8098916250668367933), KQU( 3529200436790763465),\n\tKQU(12988418908674681745), KQU( 6147567275954808580),\n\tKQU( 3207503344604030989), KQU(10761592604898615360),\n\tKQU(  229854861031893504), KQU( 8809853962667144291),\n\tKQU(13957364469005693860), KQU( 7634287665224495886),\n\tKQU(12353487366976556874), KQU( 1134423796317152034),\n\tKQU( 2088992471334107068), KQU( 7393372127190799698),\n\tKQU( 1845367839871058391), KQU(  207922563987322884),\n\tKQU(11960870813159944976), KQU(12182120053317317363),\n\tKQU(17307358132571709283), KQU(13871081155552824936),\n\tKQU(18304446751741566262), KQU( 7178705220184302849),\n\tKQU(10929605677758824425), KQU(16446976977835806844),\n\tKQU(13723874412159769044), KQU( 6942854352100915216),\n\tKQU( 1726308474365729390), KQU( 2150078766445323155),\n\tKQU(15345558947919656626), KQU(12145453828874527201),\n\tKQU( 2054448620739726849), KQU( 2740102003352628137),\n\tKQU(11294462163577610655), KQU(  756164283387413743),\n\tKQU(17841144758438810880), KQU(10802406021185415861),\n\tKQU( 8716455530476737846), KQU( 6321788834517649606),\n\tKQU(14681322910577468426), KQU(17330043563884336387),\n\tKQU(12701802180050071614), KQU(14695105111079727151),\n\tKQU( 5112098511654172830), KQU( 4957505496794139973),\n\tKQU( 8270979451952045982), KQU(12307685939199120969),\n\tKQU(12425799408953443032), KQU( 8376410143634796588),\n\tKQU(16621778679680060464), KQU( 3580497854566660073),\n\tKQU( 1122515747803382416), KQU(  857664980960597599),\n\tKQU( 6343640119895925918), 
KQU(12878473260854462891),\n\tKQU(10036813920765722626), KQU(14451335468363173812),\n\tKQU( 5476809692401102807), KQU(16442255173514366342),\n\tKQU(13060203194757167104), KQU(14354124071243177715),\n\tKQU(15961249405696125227), KQU(13703893649690872584),\n\tKQU(  363907326340340064), KQU( 6247455540491754842),\n\tKQU(12242249332757832361), KQU(  156065475679796717),\n\tKQU( 9351116235749732355), KQU( 4590350628677701405),\n\tKQU( 1671195940982350389), KQU(13501398458898451905),\n\tKQU( 6526341991225002255), KQU( 1689782913778157592),\n\tKQU( 7439222350869010334), KQU(13975150263226478308),\n\tKQU(11411961169932682710), KQU(17204271834833847277),\n\tKQU(  541534742544435367), KQU( 6591191931218949684),\n\tKQU( 2645454775478232486), KQU( 4322857481256485321),\n\tKQU( 8477416487553065110), KQU(12902505428548435048),\n\tKQU(  971445777981341415), KQU(14995104682744976712),\n\tKQU( 4243341648807158063), KQU( 8695061252721927661),\n\tKQU( 5028202003270177222), KQU( 2289257340915567840),\n\tKQU(13870416345121866007), KQU(13994481698072092233),\n\tKQU( 6912785400753196481), KQU( 2278309315841980139),\n\tKQU( 4329765449648304839), KQU( 5963108095785485298),\n\tKQU( 4880024847478722478), KQU(16015608779890240947),\n\tKQU( 1866679034261393544), KQU(  914821179919731519),\n\tKQU( 9643404035648760131), KQU( 2418114953615593915),\n\tKQU(  944756836073702374), KQU(15186388048737296834),\n\tKQU( 7723355336128442206), KQU( 7500747479679599691),\n\tKQU(18013961306453293634), KQU( 2315274808095756456),\n\tKQU(13655308255424029566), KQU(17203800273561677098),\n\tKQU( 1382158694422087756), KQU( 5090390250309588976),\n\tKQU(  517170818384213989), KQU( 1612709252627729621),\n\tKQU( 1330118955572449606), KQU(  300922478056709885),\n\tKQU(18115693291289091987), KQU(13491407109725238321),\n\tKQU(15293714633593827320), KQU( 5151539373053314504),\n\tKQU( 5951523243743139207), KQU(14459112015249527975),\n\tKQU( 5456113959000700739), KQU( 3877918438464873016),\n\tKQU(12534071654260163555), 
KQU(15871678376893555041),\n\tKQU(11005484805712025549), KQU(16353066973143374252),\n\tKQU( 4358331472063256685), KQU( 8268349332210859288),\n\tKQU(12485161590939658075), KQU(13955993592854471343),\n\tKQU( 5911446886848367039), KQU(14925834086813706974),\n\tKQU( 6590362597857994805), KQU( 1280544923533661875),\n\tKQU( 1637756018947988164), KQU( 4734090064512686329),\n\tKQU(16693705263131485912), KQU( 6834882340494360958),\n\tKQU( 8120732176159658505), KQU( 2244371958905329346),\n\tKQU(10447499707729734021), KQU( 7318742361446942194),\n\tKQU( 8032857516355555296), KQU(14023605983059313116),\n\tKQU( 1032336061815461376), KQU( 9840995337876562612),\n\tKQU( 9869256223029203587), KQU(12227975697177267636),\n\tKQU(12728115115844186033), KQU( 7752058479783205470),\n\tKQU(  729733219713393087), KQU(12954017801239007622)\n};\nstatic const uint64_t init_by_array_64_expected[] = {\n\tKQU( 2100341266307895239), KQU( 8344256300489757943),\n\tKQU(15687933285484243894), KQU( 8268620370277076319),\n\tKQU(12371852309826545459), KQU( 8800491541730110238),\n\tKQU(18113268950100835773), KQU( 2886823658884438119),\n\tKQU( 3293667307248180724), KQU( 9307928143300172731),\n\tKQU( 7688082017574293629), KQU(  900986224735166665),\n\tKQU( 9977972710722265039), KQU( 6008205004994830552),\n\tKQU(  546909104521689292), KQU( 7428471521869107594),\n\tKQU(14777563419314721179), KQU(16116143076567350053),\n\tKQU( 5322685342003142329), KQU( 4200427048445863473),\n\tKQU( 4693092150132559146), KQU(13671425863759338582),\n\tKQU( 6747117460737639916), KQU( 4732666080236551150),\n\tKQU( 5912839950611941263), KQU( 3903717554504704909),\n\tKQU( 2615667650256786818), KQU(10844129913887006352),\n\tKQU(13786467861810997820), KQU(14267853002994021570),\n\tKQU(13767807302847237439), KQU(16407963253707224617),\n\tKQU( 4802498363698583497), KQU( 2523802839317209764),\n\tKQU( 3822579397797475589), KQU( 8950320572212130610),\n\tKQU( 3745623504978342534), KQU(16092609066068482806),\n\tKQU( 9817016950274642398), 
KQU(10591660660323829098),\n\tKQU(11751606650792815920), KQU( 5122873818577122211),\n\tKQU(17209553764913936624), KQU( 6249057709284380343),\n\tKQU(15088791264695071830), KQU(15344673071709851930),\n\tKQU( 4345751415293646084), KQU( 2542865750703067928),\n\tKQU(13520525127852368784), KQU(18294188662880997241),\n\tKQU( 3871781938044881523), KQU( 2873487268122812184),\n\tKQU(15099676759482679005), KQU(15442599127239350490),\n\tKQU( 6311893274367710888), KQU( 3286118760484672933),\n\tKQU( 4146067961333542189), KQU(13303942567897208770),\n\tKQU( 8196013722255630418), KQU( 4437815439340979989),\n\tKQU(15433791533450605135), KQU( 4254828956815687049),\n\tKQU( 1310903207708286015), KQU(10529182764462398549),\n\tKQU(14900231311660638810), KQU( 9727017277104609793),\n\tKQU( 1821308310948199033), KQU(11628861435066772084),\n\tKQU( 9469019138491546924), KQU( 3145812670532604988),\n\tKQU( 9938468915045491919), KQU( 1562447430672662142),\n\tKQU(13963995266697989134), KQU( 3356884357625028695),\n\tKQU( 4499850304584309747), KQU( 8456825817023658122),\n\tKQU(10859039922814285279), KQU( 8099512337972526555),\n\tKQU(  348006375109672149), KQU(11919893998241688603),\n\tKQU( 1104199577402948826), KQU(16689191854356060289),\n\tKQU(10992552041730168078), KQU( 7243733172705465836),\n\tKQU( 5668075606180319560), KQU(18182847037333286970),\n\tKQU( 4290215357664631322), KQU( 4061414220791828613),\n\tKQU(13006291061652989604), KQU( 7140491178917128798),\n\tKQU(12703446217663283481), KQU( 5500220597564558267),\n\tKQU(10330551509971296358), KQU(15958554768648714492),\n\tKQU( 5174555954515360045), KQU( 1731318837687577735),\n\tKQU( 3557700801048354857), KQU(13764012341928616198),\n\tKQU(13115166194379119043), KQU( 7989321021560255519),\n\tKQU( 2103584280905877040), KQU( 9230788662155228488),\n\tKQU(16396629323325547654), KQU(  657926409811318051),\n\tKQU(15046700264391400727), KQU( 5120132858771880830),\n\tKQU( 7934160097989028561), KQU( 6963121488531976245),\n\tKQU(17412329602621742089), 
KQU(15144843053931774092),\n\tKQU(17204176651763054532), KQU(13166595387554065870),\n\tKQU( 8590377810513960213), KQU( 5834365135373991938),\n\tKQU( 7640913007182226243), KQU( 3479394703859418425),\n\tKQU(16402784452644521040), KQU( 4993979809687083980),\n\tKQU(13254522168097688865), KQU(15643659095244365219),\n\tKQU( 5881437660538424982), KQU(11174892200618987379),\n\tKQU(  254409966159711077), KQU(17158413043140549909),\n\tKQU( 3638048789290376272), KQU( 1376816930299489190),\n\tKQU( 4622462095217761923), KQU(15086407973010263515),\n\tKQU(13253971772784692238), KQU( 5270549043541649236),\n\tKQU(11182714186805411604), KQU(12283846437495577140),\n\tKQU( 5297647149908953219), KQU(10047451738316836654),\n\tKQU( 4938228100367874746), KQU(12328523025304077923),\n\tKQU( 3601049438595312361), KQU( 9313624118352733770),\n\tKQU(13322966086117661798), KQU(16660005705644029394),\n\tKQU(11337677526988872373), KQU(13869299102574417795),\n\tKQU(15642043183045645437), KQU( 3021755569085880019),\n\tKQU( 4979741767761188161), KQU(13679979092079279587),\n\tKQU( 3344685842861071743), KQU(13947960059899588104),\n\tKQU(  305806934293368007), KQU( 5749173929201650029),\n\tKQU(11123724852118844098), KQU(15128987688788879802),\n\tKQU(15251651211024665009), KQU( 7689925933816577776),\n\tKQU(16732804392695859449), KQU(17087345401014078468),\n\tKQU(14315108589159048871), KQU( 4820700266619778917),\n\tKQU(16709637539357958441), KQU( 4936227875177351374),\n\tKQU( 2137907697912987247), KQU(11628565601408395420),\n\tKQU( 2333250549241556786), KQU( 5711200379577778637),\n\tKQU( 5170680131529031729), KQU(12620392043061335164),\n\tKQU(   95363390101096078), KQU( 5487981914081709462),\n\tKQU( 1763109823981838620), KQU( 3395861271473224396),\n\tKQU( 1300496844282213595), KQU( 6894316212820232902),\n\tKQU(10673859651135576674), KQU( 5911839658857903252),\n\tKQU(17407110743387299102), KQU( 8257427154623140385),\n\tKQU(11389003026741800267), KQU( 4070043211095013717),\n\tKQU(11663806997145259025), 
KQU(15265598950648798210),\n\tKQU(  630585789434030934), KQU( 3524446529213587334),\n\tKQU( 7186424168495184211), KQU(10806585451386379021),\n\tKQU(11120017753500499273), KQU( 1586837651387701301),\n\tKQU(17530454400954415544), KQU( 9991670045077880430),\n\tKQU( 7550997268990730180), KQU( 8640249196597379304),\n\tKQU( 3522203892786893823), KQU(10401116549878854788),\n\tKQU(13690285544733124852), KQU( 8295785675455774586),\n\tKQU(15535716172155117603), KQU( 3112108583723722511),\n\tKQU(17633179955339271113), KQU(18154208056063759375),\n\tKQU( 1866409236285815666), KQU(13326075895396412882),\n\tKQU( 8756261842948020025), KQU( 6281852999868439131),\n\tKQU(15087653361275292858), KQU(10333923911152949397),\n\tKQU( 5265567645757408500), KQU(12728041843210352184),\n\tKQU( 6347959327507828759), KQU(  154112802625564758),\n\tKQU(18235228308679780218), KQU( 3253805274673352418),\n\tKQU( 4849171610689031197), KQU(17948529398340432518),\n\tKQU(13803510475637409167), KQU(13506570190409883095),\n\tKQU(15870801273282960805), KQU( 8451286481299170773),\n\tKQU( 9562190620034457541), KQU( 8518905387449138364),\n\tKQU(12681306401363385655), KQU( 3788073690559762558),\n\tKQU( 5256820289573487769), KQU( 2752021372314875467),\n\tKQU( 6354035166862520716), KQU( 4328956378309739069),\n\tKQU(  449087441228269600), KQU( 5533508742653090868),\n\tKQU( 1260389420404746988), KQU(18175394473289055097),\n\tKQU( 1535467109660399420), KQU( 8818894282874061442),\n\tKQU(12140873243824811213), KQU(15031386653823014946),\n\tKQU( 1286028221456149232), KQU( 6329608889367858784),\n\tKQU( 9419654354945132725), KQU( 6094576547061672379),\n\tKQU(17706217251847450255), KQU( 1733495073065878126),\n\tKQU(16918923754607552663), KQU( 8881949849954945044),\n\tKQU(12938977706896313891), KQU(14043628638299793407),\n\tKQU(18393874581723718233), KQU( 6886318534846892044),\n\tKQU(14577870878038334081), KQU(13541558383439414119),\n\tKQU(13570472158807588273), KQU(18300760537910283361),\n\tKQU(  818368572800609205), KQU( 
1417000585112573219),\n\tKQU(12337533143867683655), KQU(12433180994702314480),\n\tKQU(  778190005829189083), KQU(13667356216206524711),\n\tKQU( 9866149895295225230), KQU(11043240490417111999),\n\tKQU( 1123933826541378598), KQU( 6469631933605123610),\n\tKQU(14508554074431980040), KQU(13918931242962026714),\n\tKQU( 2870785929342348285), KQU(14786362626740736974),\n\tKQU(13176680060902695786), KQU( 9591778613541679456),\n\tKQU( 9097662885117436706), KQU(  749262234240924947),\n\tKQU( 1944844067793307093), KQU( 4339214904577487742),\n\tKQU( 8009584152961946551), KQU(16073159501225501777),\n\tKQU( 3335870590499306217), KQU(17088312653151202847),\n\tKQU( 3108893142681931848), KQU(16636841767202792021),\n\tKQU(10423316431118400637), KQU( 8008357368674443506),\n\tKQU(11340015231914677875), KQU(17687896501594936090),\n\tKQU(15173627921763199958), KQU(  542569482243721959),\n\tKQU(15071714982769812975), KQU( 4466624872151386956),\n\tKQU( 1901780715602332461), KQU( 9822227742154351098),\n\tKQU( 1479332892928648780), KQU( 6981611948382474400),\n\tKQU( 7620824924456077376), KQU(14095973329429406782),\n\tKQU( 7902744005696185404), KQU(15830577219375036920),\n\tKQU(10287076667317764416), KQU(12334872764071724025),\n\tKQU( 4419302088133544331), KQU(14455842851266090520),\n\tKQU(12488077416504654222), KQU( 7953892017701886766),\n\tKQU( 6331484925529519007), KQU( 4902145853785030022),\n\tKQU(17010159216096443073), KQU(11945354668653886087),\n\tKQU(15112022728645230829), KQU(17363484484522986742),\n\tKQU( 4423497825896692887), KQU( 8155489510809067471),\n\tKQU(  258966605622576285), KQU( 5462958075742020534),\n\tKQU( 6763710214913276228), KQU( 2368935183451109054),\n\tKQU(14209506165246453811), KQU( 2646257040978514881),\n\tKQU( 3776001911922207672), KQU( 1419304601390147631),\n\tKQU(14987366598022458284), KQU( 3977770701065815721),\n\tKQU(  730820417451838898), KQU( 3982991703612885327),\n\tKQU( 2803544519671388477), KQU(17067667221114424649),\n\tKQU( 2922555119737867166), KQU( 
1989477584121460932),\n\tKQU(15020387605892337354), KQU( 9293277796427533547),\n\tKQU(10722181424063557247), KQU(16704542332047511651),\n\tKQU( 5008286236142089514), KQU(16174732308747382540),\n\tKQU(17597019485798338402), KQU(13081745199110622093),\n\tKQU( 8850305883842258115), KQU(12723629125624589005),\n\tKQU( 8140566453402805978), KQU(15356684607680935061),\n\tKQU(14222190387342648650), KQU(11134610460665975178),\n\tKQU( 1259799058620984266), KQU(13281656268025610041),\n\tKQU(  298262561068153992), KQU(12277871700239212922),\n\tKQU(13911297774719779438), KQU(16556727962761474934),\n\tKQU(17903010316654728010), KQU( 9682617699648434744),\n\tKQU(14757681836838592850), KQU( 1327242446558524473),\n\tKQU(11126645098780572792), KQU( 1883602329313221774),\n\tKQU( 2543897783922776873), KQU(15029168513767772842),\n\tKQU(12710270651039129878), KQU(16118202956069604504),\n\tKQU(15010759372168680524), KQU( 2296827082251923948),\n\tKQU(10793729742623518101), KQU(13829764151845413046),\n\tKQU(17769301223184451213), KQU( 3118268169210783372),\n\tKQU(17626204544105123127), KQU( 7416718488974352644),\n\tKQU(10450751996212925994), KQU( 9352529519128770586),\n\tKQU(  259347569641110140), KQU( 8048588892269692697),\n\tKQU( 1774414152306494058), KQU(10669548347214355622),\n\tKQU(13061992253816795081), KQU(18432677803063861659),\n\tKQU( 8879191055593984333), KQU(12433753195199268041),\n\tKQU(14919392415439730602), KQU( 6612848378595332963),\n\tKQU( 6320986812036143628), KQU(10465592420226092859),\n\tKQU( 4196009278962570808), KQU( 3747816564473572224),\n\tKQU(17941203486133732898), KQU( 2350310037040505198),\n\tKQU( 5811779859134370113), KQU(10492109599506195126),\n\tKQU( 7699650690179541274), KQU( 1954338494306022961),\n\tKQU(14095816969027231152), KQU( 5841346919964852061),\n\tKQU(14945969510148214735), KQU( 3680200305887550992),\n\tKQU( 6218047466131695792), KQU( 8242165745175775096),\n\tKQU(11021371934053307357), KQU( 1265099502753169797),\n\tKQU( 4644347436111321718), KQU( 
3609296916782832859),\n\tKQU( 8109807992218521571), KQU(18387884215648662020),\n\tKQU(14656324896296392902), KQU(17386819091238216751),\n\tKQU(17788300878582317152), KQU( 7919446259742399591),\n\tKQU( 4466613134576358004), KQU(12928181023667938509),\n\tKQU(13147446154454932030), KQU(16552129038252734620),\n\tKQU( 8395299403738822450), KQU(11313817655275361164),\n\tKQU(  434258809499511718), KQU( 2074882104954788676),\n\tKQU( 7929892178759395518), KQU( 9006461629105745388),\n\tKQU( 5176475650000323086), KQU(11128357033468341069),\n\tKQU(12026158851559118955), KQU(14699716249471156500),\n\tKQU(  448982497120206757), KQU( 4156475356685519900),\n\tKQU( 6063816103417215727), KQU(10073289387954971479),\n\tKQU( 8174466846138590962), KQU( 2675777452363449006),\n\tKQU( 9090685420572474281), KQU( 6659652652765562060),\n\tKQU(12923120304018106621), KQU(11117480560334526775),\n\tKQU(  937910473424587511), KQU( 1838692113502346645),\n\tKQU(11133914074648726180), KQU( 7922600945143884053),\n\tKQU(13435287702700959550), KQU( 5287964921251123332),\n\tKQU(11354875374575318947), KQU(17955724760748238133),\n\tKQU(13728617396297106512), KQU( 4107449660118101255),\n\tKQU( 1210269794886589623), KQU(11408687205733456282),\n\tKQU( 4538354710392677887), KQU(13566803319341319267),\n\tKQU(17870798107734050771), KQU( 3354318982568089135),\n\tKQU( 9034450839405133651), KQU(13087431795753424314),\n\tKQU(  950333102820688239), KQU( 1968360654535604116),\n\tKQU(16840551645563314995), KQU( 8867501803892924995),\n\tKQU(11395388644490626845), KQU( 1529815836300732204),\n\tKQU(13330848522996608842), KQU( 1813432878817504265),\n\tKQU( 2336867432693429560), KQU(15192805445973385902),\n\tKQU( 2528593071076407877), KQU(  128459777936689248),\n\tKQU( 9976345382867214866), KQU( 6208885766767996043),\n\tKQU(14982349522273141706), KQU( 3099654362410737822),\n\tKQU(13776700761947297661), KQU( 8806185470684925550),\n\tKQU( 8151717890410585321), KQU(  640860591588072925),\n\tKQU(14592096303937307465), KQU( 
9056472419613564846),\n\tKQU(14861544647742266352), KQU(12703771500398470216),\n\tKQU( 3142372800384138465), KQU( 6201105606917248196),\n\tKQU(18337516409359270184), KQU(15042268695665115339),\n\tKQU(15188246541383283846), KQU(12800028693090114519),\n\tKQU( 5992859621101493472), KQU(18278043971816803521),\n\tKQU( 9002773075219424560), KQU( 7325707116943598353),\n\tKQU( 7930571931248040822), KQU( 5645275869617023448),\n\tKQU( 7266107455295958487), KQU( 4363664528273524411),\n\tKQU(14313875763787479809), KQU(17059695613553486802),\n\tKQU( 9247761425889940932), KQU(13704726459237593128),\n\tKQU( 2701312427328909832), KQU(17235532008287243115),\n\tKQU(14093147761491729538), KQU( 6247352273768386516),\n\tKQU( 8268710048153268415), KQU( 7985295214477182083),\n\tKQU(15624495190888896807), KQU( 3772753430045262788),\n\tKQU( 9133991620474991698), KQU( 5665791943316256028),\n\tKQU( 7551996832462193473), KQU(13163729206798953877),\n\tKQU( 9263532074153846374), KQU( 1015460703698618353),\n\tKQU(17929874696989519390), KQU(18257884721466153847),\n\tKQU(16271867543011222991), KQU( 3905971519021791941),\n\tKQU(16814488397137052085), KQU( 1321197685504621613),\n\tKQU( 2870359191894002181), KQU(14317282970323395450),\n\tKQU(13663920845511074366), KQU( 2052463995796539594),\n\tKQU(14126345686431444337), KQU( 1727572121947022534),\n\tKQU(17793552254485594241), KQU( 6738857418849205750),\n\tKQU( 1282987123157442952), KQU(16655480021581159251),\n\tKQU( 6784587032080183866), KQU(14726758805359965162),\n\tKQU( 7577995933961987349), KQU(12539609320311114036),\n\tKQU(10789773033385439494), KQU( 8517001497411158227),\n\tKQU(10075543932136339710), KQU(14838152340938811081),\n\tKQU( 9560840631794044194), KQU(17445736541454117475),\n\tKQU(10633026464336393186), KQU(15705729708242246293),\n\tKQU( 1117517596891411098), KQU( 4305657943415886942),\n\tKQU( 4948856840533979263), KQU(16071681989041789593),\n\tKQU(13723031429272486527), KQU( 7639567622306509462),\n\tKQU(12670424537483090390), KQU( 
9715223453097197134),\n\tKQU( 5457173389992686394), KQU(  289857129276135145),\n\tKQU(17048610270521972512), KQU(  692768013309835485),\n\tKQU(14823232360546632057), KQU(18218002361317895936),\n\tKQU( 3281724260212650204), KQU(16453957266549513795),\n\tKQU( 8592711109774511881), KQU(  929825123473369579),\n\tKQU(15966784769764367791), KQU( 9627344291450607588),\n\tKQU(10849555504977813287), KQU( 9234566913936339275),\n\tKQU( 6413807690366911210), KQU(10862389016184219267),\n\tKQU(13842504799335374048), KQU( 1531994113376881174),\n\tKQU( 2081314867544364459), KQU(16430628791616959932),\n\tKQU( 8314714038654394368), KQU( 9155473892098431813),\n\tKQU(12577843786670475704), KQU( 4399161106452401017),\n\tKQU( 1668083091682623186), KQU( 1741383777203714216),\n\tKQU( 2162597285417794374), KQU(15841980159165218736),\n\tKQU( 1971354603551467079), KQU( 1206714764913205968),\n\tKQU( 4790860439591272330), KQU(14699375615594055799),\n\tKQU( 8374423871657449988), KQU(10950685736472937738),\n\tKQU(  697344331343267176), KQU(10084998763118059810),\n\tKQU(12897369539795983124), KQU(12351260292144383605),\n\tKQU( 1268810970176811234), KQU( 7406287800414582768),\n\tKQU(  516169557043807831), KQU( 5077568278710520380),\n\tKQU( 3828791738309039304), KQU( 7721974069946943610),\n\tKQU( 3534670260981096460), KQU( 4865792189600584891),\n\tKQU(16892578493734337298), KQU( 9161499464278042590),\n\tKQU(11976149624067055931), KQU(13219479887277343990),\n\tKQU(14161556738111500680), KQU(14670715255011223056),\n\tKQU( 4671205678403576558), KQU(12633022931454259781),\n\tKQU(14821376219869187646), KQU(  751181776484317028),\n\tKQU( 2192211308839047070), KQU(11787306362361245189),\n\tKQU(10672375120744095707), KQU( 4601972328345244467),\n\tKQU(15457217788831125879), KQU( 8464345256775460809),\n\tKQU(10191938789487159478), KQU( 6184348739615197613),\n\tKQU(11425436778806882100), KQU( 2739227089124319793),\n\tKQU(  461464518456000551), KQU( 4689850170029177442),\n\tKQU( 6120307814374078625), 
KQU(11153579230681708671),\n\tKQU( 7891721473905347926), KQU(10281646937824872400),\n\tKQU( 3026099648191332248), KQU( 8666750296953273818),\n\tKQU(14978499698844363232), KQU(13303395102890132065),\n\tKQU( 8182358205292864080), KQU(10560547713972971291),\n\tKQU(11981635489418959093), KQU( 3134621354935288409),\n\tKQU(11580681977404383968), KQU(14205530317404088650),\n\tKQU( 5997789011854923157), KQU(13659151593432238041),\n\tKQU(11664332114338865086), KQU( 7490351383220929386),\n\tKQU( 7189290499881530378), KQU(15039262734271020220),\n\tKQU( 2057217285976980055), KQU(  555570804905355739),\n\tKQU(11235311968348555110), KQU(13824557146269603217),\n\tKQU(16906788840653099693), KQU( 7222878245455661677),\n\tKQU( 5245139444332423756), KQU( 4723748462805674292),\n\tKQU(12216509815698568612), KQU(17402362976648951187),\n\tKQU(17389614836810366768), KQU( 4880936484146667711),\n\tKQU( 9085007839292639880), KQU(13837353458498535449),\n\tKQU(11914419854360366677), KQU(16595890135313864103),\n\tKQU( 6313969847197627222), KQU(18296909792163910431),\n\tKQU(10041780113382084042), KQU( 2499478551172884794),\n\tKQU(11057894246241189489), KQU( 9742243032389068555),\n\tKQU(12838934582673196228), KQU(13437023235248490367),\n\tKQU(13372420669446163240), KQU( 6752564244716909224),\n\tKQU( 7157333073400313737), KQU(12230281516370654308),\n\tKQU( 1182884552219419117), KQU( 2955125381312499218),\n\tKQU(10308827097079443249), KQU( 1337648572986534958),\n\tKQU(16378788590020343939), KQU(  108619126514420935),\n\tKQU( 3990981009621629188), KQU( 5460953070230946410),\n\tKQU( 9703328329366531883), KQU(13166631489188077236),\n\tKQU( 1104768831213675170), KQU( 3447930458553877908),\n\tKQU( 8067172487769945676), KQU( 5445802098190775347),\n\tKQU( 3244840981648973873), KQU(17314668322981950060),\n\tKQU( 5006812527827763807), KQU(18158695070225526260),\n\tKQU( 2824536478852417853), KQU(13974775809127519886),\n\tKQU( 9814362769074067392), KQU(17276205156374862128),\n\tKQU(11361680725379306967), KQU( 
3422581970382012542),\n\tKQU(11003189603753241266), KQU(11194292945277862261),\n\tKQU( 6839623313908521348), KQU(11935326462707324634),\n\tKQU( 1611456788685878444), KQU(13112620989475558907),\n\tKQU(  517659108904450427), KQU(13558114318574407624),\n\tKQU(15699089742731633077), KQU( 4988979278862685458),\n\tKQU( 8111373583056521297), KQU( 3891258746615399627),\n\tKQU( 8137298251469718086), KQU(12748663295624701649),\n\tKQU( 4389835683495292062), KQU( 5775217872128831729),\n\tKQU( 9462091896405534927), KQU( 8498124108820263989),\n\tKQU( 8059131278842839525), KQU(10503167994254090892),\n\tKQU(11613153541070396656), KQU(18069248738504647790),\n\tKQU(  570657419109768508), KQU( 3950574167771159665),\n\tKQU( 5514655599604313077), KQU( 2908460854428484165),\n\tKQU(10777722615935663114), KQU(12007363304839279486),\n\tKQU( 9800646187569484767), KQU( 8795423564889864287),\n\tKQU(14257396680131028419), KQU( 6405465117315096498),\n\tKQU( 7939411072208774878), KQU(17577572378528990006),\n\tKQU(14785873806715994850), KQU(16770572680854747390),\n\tKQU(18127549474419396481), KQU(11637013449455757750),\n\tKQU(14371851933996761086), KQU( 3601181063650110280),\n\tKQU( 4126442845019316144), KQU(10198287239244320669),\n\tKQU(18000169628555379659), KQU(18392482400739978269),\n\tKQU( 6219919037686919957), KQU( 3610085377719446052),\n\tKQU( 2513925039981776336), KQU(16679413537926716955),\n\tKQU(12903302131714909434), KQU( 5581145789762985009),\n\tKQU(12325955044293303233), KQU(17216111180742141204),\n\tKQU( 6321919595276545740), KQU( 3507521147216174501),\n\tKQU( 9659194593319481840), KQU(11473976005975358326),\n\tKQU(14742730101435987026), KQU(  492845897709954780),\n\tKQU(16976371186162599676), KQU(17712703422837648655),\n\tKQU( 9881254778587061697), KQU( 8413223156302299551),\n\tKQU( 1563841828254089168), KQU( 9996032758786671975),\n\tKQU(  138877700583772667), KQU(13003043368574995989),\n\tKQU( 4390573668650456587), KQU( 8610287390568126755),\n\tKQU(15126904974266642199), KQU( 
6703637238986057662),\n\tKQU( 2873075592956810157), KQU( 6035080933946049418),\n\tKQU(13382846581202353014), KQU( 7303971031814642463),\n\tKQU(18418024405307444267), KQU( 5847096731675404647),\n\tKQU( 4035880699639842500), KQU(11525348625112218478),\n\tKQU( 3041162365459574102), KQU( 2604734487727986558),\n\tKQU(15526341771636983145), KQU(14556052310697370254),\n\tKQU(12997787077930808155), KQU( 9601806501755554499),\n\tKQU(11349677952521423389), KQU(14956777807644899350),\n\tKQU(16559736957742852721), KQU(12360828274778140726),\n\tKQU( 6685373272009662513), KQU(16932258748055324130),\n\tKQU(15918051131954158508), KQU( 1692312913140790144),\n\tKQU(  546653826801637367), KQU( 5341587076045986652),\n\tKQU(14975057236342585662), KQU(12374976357340622412),\n\tKQU(10328833995181940552), KQU(12831807101710443149),\n\tKQU(10548514914382545716), KQU( 2217806727199715993),\n\tKQU(12627067369242845138), KQU( 4598965364035438158),\n\tKQU(  150923352751318171), KQU(14274109544442257283),\n\tKQU( 4696661475093863031), KQU( 1505764114384654516),\n\tKQU(10699185831891495147), KQU( 2392353847713620519),\n\tKQU( 3652870166711788383), KQU( 8640653276221911108),\n\tKQU( 3894077592275889704), KQU( 4918592872135964845),\n\tKQU(16379121273281400789), KQU(12058465483591683656),\n\tKQU(11250106829302924945), KQU( 1147537556296983005),\n\tKQU( 6376342756004613268), KQU(14967128191709280506),\n\tKQU(18007449949790627628), KQU( 9497178279316537841),\n\tKQU( 7920174844809394893), KQU(10037752595255719907),\n\tKQU(15875342784985217697), KQU(15311615921712850696),\n\tKQU( 9552902652110992950), KQU(14054979450099721140),\n\tKQU( 5998709773566417349), KQU(18027910339276320187),\n\tKQU( 8223099053868585554), KQU( 7842270354824999767),\n\tKQU( 4896315688770080292), KQU(12969320296569787895),\n\tKQU( 2674321489185759961), KQU( 4053615936864718439),\n\tKQU(11349775270588617578), KQU( 4743019256284553975),\n\tKQU( 5602100217469723769), KQU(14398995691411527813),\n\tKQU( 7412170493796825470), KQU(  
836262406131744846),\n\tKQU( 8231086633845153022), KQU( 5161377920438552287),\n\tKQU( 8828731196169924949), KQU(16211142246465502680),\n\tKQU( 3307990879253687818), KQU( 5193405406899782022),\n\tKQU( 8510842117467566693), KQU( 6070955181022405365),\n\tKQU(14482950231361409799), KQU(12585159371331138077),\n\tKQU( 3511537678933588148), KQU( 2041849474531116417),\n\tKQU(10944936685095345792), KQU(18303116923079107729),\n\tKQU( 2720566371239725320), KQU( 4958672473562397622),\n\tKQU( 3032326668253243412), KQU(13689418691726908338),\n\tKQU( 1895205511728843996), KQU( 8146303515271990527),\n\tKQU(16507343500056113480), KQU(  473996939105902919),\n\tKQU( 9897686885246881481), KQU(14606433762712790575),\n\tKQU( 6732796251605566368), KQU( 1399778120855368916),\n\tKQU(  935023885182833777), KQU(16066282816186753477),\n\tKQU( 7291270991820612055), KQU(17530230393129853844),\n\tKQU(10223493623477451366), KQU(15841725630495676683),\n\tKQU(17379567246435515824), KQU( 8588251429375561971),\n\tKQU(18339511210887206423), KQU(17349587430725976100),\n\tKQU(12244876521394838088), KQU( 6382187714147161259),\n\tKQU(12335807181848950831), KQU(16948885622305460665),\n\tKQU(13755097796371520506), KQU(14806740373324947801),\n\tKQU( 4828699633859287703), KQU( 8209879281452301604),\n\tKQU(12435716669553736437), KQU(13970976859588452131),\n\tKQU( 6233960842566773148), KQU(12507096267900505759),\n\tKQU( 1198713114381279421), KQU(14989862731124149015),\n\tKQU(15932189508707978949), KQU( 2526406641432708722),\n\tKQU(   29187427817271982), KQU( 1499802773054556353),\n\tKQU(10816638187021897173), KQU( 5436139270839738132),\n\tKQU( 6659882287036010082), KQU( 2154048955317173697),\n\tKQU(10887317019333757642), KQU(16281091802634424955),\n\tKQU(10754549879915384901), KQU(10760611745769249815),\n\tKQU( 2161505946972504002), KQU( 5243132808986265107),\n\tKQU(10129852179873415416), KQU(  710339480008649081),\n\tKQU( 7802129453068808528), KQU(17967213567178907213),\n\tKQU(15730859124668605599), 
KQU(13058356168962376502),\n\tKQU( 3701224985413645909), KQU(14464065869149109264),\n\tKQU( 9959272418844311646), KQU(10157426099515958752),\n\tKQU(14013736814538268528), KQU(17797456992065653951),\n\tKQU(17418878140257344806), KQU(15457429073540561521),\n\tKQU( 2184426881360949378), KQU( 2062193041154712416),\n\tKQU( 8553463347406931661), KQU( 4913057625202871854),\n\tKQU( 2668943682126618425), KQU(17064444737891172288),\n\tKQU( 4997115903913298637), KQU(12019402608892327416),\n\tKQU(17603584559765897352), KQU(11367529582073647975),\n\tKQU( 8211476043518436050), KQU( 8676849804070323674),\n\tKQU(18431829230394475730), KQU(10490177861361247904),\n\tKQU( 9508720602025651349), KQU( 7409627448555722700),\n\tKQU( 5804047018862729008), KQU(11943858176893142594),\n\tKQU(11908095418933847092), KQU( 5415449345715887652),\n\tKQU( 1554022699166156407), KQU( 9073322106406017161),\n\tKQU( 7080630967969047082), KQU(18049736940860732943),\n\tKQU(12748714242594196794), KQU( 1226992415735156741),\n\tKQU(17900981019609531193), KQU(11720739744008710999),\n\tKQU( 3006400683394775434), KQU(11347974011751996028),\n\tKQU( 3316999628257954608), KQU( 8384484563557639101),\n\tKQU(18117794685961729767), KQU( 1900145025596618194),\n\tKQU(17459527840632892676), KQU( 5634784101865710994),\n\tKQU( 7918619300292897158), KQU( 3146577625026301350),\n\tKQU( 9955212856499068767), KQU( 1873995843681746975),\n\tKQU( 1561487759967972194), KQU( 8322718804375878474),\n\tKQU(11300284215327028366), KQU( 4667391032508998982),\n\tKQU( 9820104494306625580), KQU(17922397968599970610),\n\tKQU( 1784690461886786712), KQU(14940365084341346821),\n\tKQU( 5348719575594186181), KQU(10720419084507855261),\n\tKQU(14210394354145143274), KQU( 2426468692164000131),\n\tKQU(16271062114607059202), KQU(14851904092357070247),\n\tKQU( 6524493015693121897), KQU( 9825473835127138531),\n\tKQU(14222500616268569578), KQU(15521484052007487468),\n\tKQU(14462579404124614699), KQU(11012375590820665520),\n\tKQU(11625327350536084927), 
KQU(14452017765243785417),\n\tKQU( 9989342263518766305), KQU( 3640105471101803790),\n\tKQU( 4749866455897513242), KQU(13963064946736312044),\n\tKQU(10007416591973223791), KQU(18314132234717431115),\n\tKQU( 3286596588617483450), KQU( 7726163455370818765),\n\tKQU( 7575454721115379328), KQU( 5308331576437663422),\n\tKQU(18288821894903530934), KQU( 8028405805410554106),\n\tKQU(15744019832103296628), KQU(  149765559630932100),\n\tKQU( 6137705557200071977), KQU(14513416315434803615),\n\tKQU(11665702820128984473), KQU(  218926670505601386),\n\tKQU( 6868675028717769519), KQU(15282016569441512302),\n\tKQU( 5707000497782960236), KQU( 6671120586555079567),\n\tKQU( 2194098052618985448), KQU(16849577895477330978),\n\tKQU(12957148471017466283), KQU( 1997805535404859393),\n\tKQU( 1180721060263860490), KQU(13206391310193756958),\n\tKQU(12980208674461861797), KQU( 3825967775058875366),\n\tKQU(17543433670782042631), KQU( 1518339070120322730),\n\tKQU(16344584340890991669), KQU( 2611327165318529819),\n\tKQU(11265022723283422529), KQU( 4001552800373196817),\n\tKQU(14509595890079346161), KQU( 3528717165416234562),\n\tKQU(18153222571501914072), KQU( 9387182977209744425),\n\tKQU(10064342315985580021), KQU(11373678413215253977),\n\tKQU( 2308457853228798099), KQU( 9729042942839545302),\n\tKQU( 7833785471140127746), KQU( 6351049900319844436),\n\tKQU(14454610627133496067), KQU(12533175683634819111),\n\tKQU(15570163926716513029), KQU(13356980519185762498)\n};\n\nTEST_BEGIN(test_gen_rand_32) {\n\tuint32_t array32[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16));\n\tuint32_t array32_2[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16));\n\tint i;\n\tuint32_t r32;\n\tsfmt_t *ctx;\n\n\texpect_d_le(get_min_array_size32(), BLOCK_SIZE,\n\t    \"Array size too small\");\n\tctx = init_gen_rand(1234);\n\tfill_array32(ctx, array32, BLOCK_SIZE);\n\tfill_array32(ctx, array32_2, BLOCK_SIZE);\n\tfini_gen_rand(ctx);\n\n\tctx = init_gen_rand(1234);\n\tfor (i = 0; i < BLOCK_SIZE; i++) {\n\t\tif (i < COUNT_1) 
{\n\t\t\texpect_u32_eq(array32[i], init_gen_rand_32_expected[i],\n\t\t\t    \"Output mismatch for i=%d\", i);\n\t\t}\n\t\tr32 = gen_rand32(ctx);\n\t\texpect_u32_eq(r32, array32[i],\n\t\t    \"Mismatch at array32[%d]=%x, gen=%x\", i, array32[i], r32);\n\t}\n\tfor (i = 0; i < COUNT_2; i++) {\n\t\tr32 = gen_rand32(ctx);\n\t\texpect_u32_eq(r32, array32_2[i],\n\t\t    \"Mismatch at array32_2[%d]=%x, gen=%x\", i, array32_2[i],\n\t\t    r32);\n\t}\n\tfini_gen_rand(ctx);\n}\nTEST_END\n\nTEST_BEGIN(test_by_array_32) {\n\tuint32_t array32[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16));\n\tuint32_t array32_2[BLOCK_SIZE] JEMALLOC_ATTR(aligned(16));\n\tint i;\n\tuint32_t ini[4] = {0x1234, 0x5678, 0x9abc, 0xdef0};\n\tuint32_t r32;\n\tsfmt_t *ctx;\n\n\texpect_d_le(get_min_array_size32(), BLOCK_SIZE,\n\t    \"Array size too small\");\n\tctx = init_by_array(ini, 4);\n\tfill_array32(ctx, array32, BLOCK_SIZE);\n\tfill_array32(ctx, array32_2, BLOCK_SIZE);\n\tfini_gen_rand(ctx);\n\n\tctx = init_by_array(ini, 4);\n\tfor (i = 0; i < BLOCK_SIZE; i++) {\n\t\tif (i < COUNT_1) {\n\t\t\texpect_u32_eq(array32[i], init_by_array_32_expected[i],\n\t\t\t    \"Output mismatch for i=%d\", i);\n\t\t}\n\t\tr32 = gen_rand32(ctx);\n\t\texpect_u32_eq(r32, array32[i],\n\t\t    \"Mismatch at array32[%d]=%x, gen=%x\", i, array32[i], r32);\n\t}\n\tfor (i = 0; i < COUNT_2; i++) {\n\t\tr32 = gen_rand32(ctx);\n\t\texpect_u32_eq(r32, array32_2[i],\n\t\t    \"Mismatch at array32_2[%d]=%x, gen=%x\", i, array32_2[i],\n\t\t    r32);\n\t}\n\tfini_gen_rand(ctx);\n}\nTEST_END\n\nTEST_BEGIN(test_gen_rand_64) {\n\tuint64_t array64[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16));\n\tuint64_t array64_2[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16));\n\tint i;\n\tuint64_t r;\n\tsfmt_t *ctx;\n\n\texpect_d_le(get_min_array_size64(), BLOCK_SIZE64,\n\t    \"Array size too small\");\n\tctx = init_gen_rand(4321);\n\tfill_array64(ctx, array64, BLOCK_SIZE64);\n\tfill_array64(ctx, array64_2, BLOCK_SIZE64);\n\tfini_gen_rand(ctx);\n\n\tctx = 
init_gen_rand(4321);\n\tfor (i = 0; i < BLOCK_SIZE64; i++) {\n\t\tif (i < COUNT_1) {\n\t\t\texpect_u64_eq(array64[i], init_gen_rand_64_expected[i],\n\t\t\t    \"Output mismatch for i=%d\", i);\n\t\t}\n\t\tr = gen_rand64(ctx);\n\t\texpect_u64_eq(r, array64[i],\n\t\t    \"Mismatch at array64[%d]=%\"FMTx64\", gen=%\"FMTx64, i,\n\t\t    array64[i], r);\n\t}\n\tfor (i = 0; i < COUNT_2; i++) {\n\t\tr = gen_rand64(ctx);\n\t\texpect_u64_eq(r, array64_2[i],\n\t\t    \"Mismatch at array64_2[%d]=%\"FMTx64\" gen=%\"FMTx64\"\", i,\n\t\t    array64_2[i], r);\n\t}\n\tfini_gen_rand(ctx);\n}\nTEST_END\n\nTEST_BEGIN(test_by_array_64) {\n\tuint64_t array64[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16));\n\tuint64_t array64_2[BLOCK_SIZE64] JEMALLOC_ATTR(aligned(16));\n\tint i;\n\tuint64_t r;\n\tuint32_t ini[] = {5, 4, 3, 2, 1};\n\tsfmt_t *ctx;\n\n\texpect_d_le(get_min_array_size64(), BLOCK_SIZE64,\n\t    \"Array size too small\");\n\tctx = init_by_array(ini, 5);\n\tfill_array64(ctx, array64, BLOCK_SIZE64);\n\tfill_array64(ctx, array64_2, BLOCK_SIZE64);\n\tfini_gen_rand(ctx);\n\n\tctx = init_by_array(ini, 5);\n\tfor (i = 0; i < BLOCK_SIZE64; i++) {\n\t\tif (i < COUNT_1) {\n\t\t\texpect_u64_eq(array64[i], init_by_array_64_expected[i],\n\t\t\t    \"Output mismatch for i=%d\", i);\n\t\t}\n\t\tr = gen_rand64(ctx);\n\t\texpect_u64_eq(r, array64[i],\n\t\t    \"Mismatch at array64[%d]=%\"FMTx64\" gen=%\"FMTx64, i,\n\t\t    array64[i], r);\n\t}\n\tfor (i = 0; i < COUNT_2; i++) {\n\t\tr = gen_rand64(ctx);\n\t\texpect_u64_eq(r, array64_2[i],\n\t\t    \"Mismatch at array64_2[%d]=%\"FMTx64\" gen=%\"FMTx64, i,\n\t\t    array64_2[i], r);\n\t}\n\tfini_gen_rand(ctx);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_gen_rand_32,\n\t    test_by_array_32,\n\t    test_gen_rand_64,\n\t    test_by_array_64);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/a0.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_a0) {\n\tvoid *p;\n\n\tp = a0malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected a0malloc() error\");\n\ta0dalloc(p);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_malloc_init(\n\t    test_a0);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/arena_decay.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/arena_util.h\"\n\n#include \"jemalloc/internal/ticker.h\"\n\nstatic nstime_monotonic_t *nstime_monotonic_orig;\nstatic nstime_update_t *nstime_update_orig;\n\nstatic unsigned nupdates_mock;\nstatic nstime_t time_mock;\nstatic bool monotonic_mock;\n\nstatic bool\nnstime_monotonic_mock(void) {\n\treturn monotonic_mock;\n}\n\nstatic void\nnstime_update_mock(nstime_t *time) {\n\tnupdates_mock++;\n\tif (monotonic_mock) {\n\t\tnstime_copy(time, &time_mock);\n\t}\n}\n\nTEST_BEGIN(test_decay_ticks) {\n\ttest_skip_if(is_background_thread_enabled());\n\ttest_skip_if(opt_hpa);\n\n\tticker_geom_t *decay_ticker;\n\tunsigned tick0, tick1, arena_ind;\n\tsize_t sz, large0;\n\tvoid *p;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large0, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\n\t/* Set up a manually managed arena for test. */\n\tarena_ind = do_arena_create(0, 0);\n\n\t/* Migrate to the new arena, and get the ticker. */\n\tunsigned old_arena_ind;\n\tsize_t sz_arena_ind = sizeof(old_arena_ind);\n\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind,\n\t    &sz_arena_ind, (void *)&arena_ind, sizeof(arena_ind)), 0,\n\t    \"Unexpected mallctl() failure\");\n\tdecay_ticker = tsd_arena_decay_tickerp_get(tsd_fetch());\n\texpect_ptr_not_null(decay_ticker,\n\t    \"Unexpected failure getting decay ticker\");\n\n\t/*\n\t * Test the standard APIs using a large size class, since we can't\n\t * control tcache interactions for small size classes (except by\n\t * completely disabling tcache for the entire test program).\n\t */\n\n\t/* malloc(). */\n\ttick0 = ticker_geom_read(decay_ticker);\n\tp = malloc(large0);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during malloc()\");\n\t/* free(). 
*/\n\ttick0 = ticker_geom_read(decay_ticker);\n\tfree(p);\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during free()\");\n\n\t/* calloc(). */\n\ttick0 = ticker_geom_read(decay_ticker);\n\tp = calloc(1, large0);\n\texpect_ptr_not_null(p, \"Unexpected calloc() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during calloc()\");\n\tfree(p);\n\n\t/* posix_memalign(). */\n\ttick0 = ticker_geom_read(decay_ticker);\n\texpect_d_eq(posix_memalign(&p, sizeof(size_t), large0), 0,\n\t    \"Unexpected posix_memalign() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0,\n\t    \"Expected ticker to tick during posix_memalign()\");\n\tfree(p);\n\n\t/* aligned_alloc(). */\n\ttick0 = ticker_geom_read(decay_ticker);\n\tp = aligned_alloc(sizeof(size_t), large0);\n\texpect_ptr_not_null(p, \"Unexpected aligned_alloc() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0,\n\t    \"Expected ticker to tick during aligned_alloc()\");\n\tfree(p);\n\n\t/* realloc(). */\n\t/* Allocate. */\n\ttick0 = ticker_geom_read(decay_ticker);\n\tp = realloc(NULL, large0);\n\texpect_ptr_not_null(p, \"Unexpected realloc() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during realloc()\");\n\t/* Reallocate. */\n\ttick0 = ticker_geom_read(decay_ticker);\n\tp = realloc(p, large0);\n\texpect_ptr_not_null(p, \"Unexpected realloc() failure\");\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during realloc()\");\n\t/* Deallocate. 
*/\n\ttick0 = ticker_geom_read(decay_ticker);\n\trealloc(p, 0);\n\ttick1 = ticker_geom_read(decay_ticker);\n\texpect_u32_ne(tick1, tick0, \"Expected ticker to tick during realloc()\");\n\n\t/*\n\t * Test the *allocx() APIs using large and small size classes, with\n\t * tcache explicitly disabled.\n\t */\n\t{\n\t\tunsigned i;\n\t\tsize_t allocx_sizes[2];\n\t\tallocx_sizes[0] = large0;\n\t\tallocx_sizes[1] = 1;\n\n\t\tfor (i = 0; i < sizeof(allocx_sizes) / sizeof(size_t); i++) {\n\t\t\tsz = allocx_sizes[i];\n\n\t\t\t/* mallocx(). */\n\t\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\t\tp = mallocx(sz, MALLOCX_TCACHE_NONE);\n\t\t\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\t\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\t\texpect_u32_ne(tick1, tick0,\n\t\t\t    \"Expected ticker to tick during mallocx() (sz=%zu)\",\n\t\t\t    sz);\n\t\t\t/* rallocx(). */\n\t\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\t\tp = rallocx(p, sz, MALLOCX_TCACHE_NONE);\n\t\t\texpect_ptr_not_null(p, \"Unexpected rallocx() failure\");\n\t\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\t\texpect_u32_ne(tick1, tick0,\n\t\t\t    \"Expected ticker to tick during rallocx() (sz=%zu)\",\n\t\t\t    sz);\n\t\t\t/* xallocx(). */\n\t\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\t\txallocx(p, sz, 0, MALLOCX_TCACHE_NONE);\n\t\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\t\texpect_u32_ne(tick1, tick0,\n\t\t\t    \"Expected ticker to tick during xallocx() (sz=%zu)\",\n\t\t\t    sz);\n\t\t\t/* dallocx(). */\n\t\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\t\tdallocx(p, MALLOCX_TCACHE_NONE);\n\t\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\t\texpect_u32_ne(tick1, tick0,\n\t\t\t    \"Expected ticker to tick during dallocx() (sz=%zu)\",\n\t\t\t    sz);\n\t\t\t/* sdallocx(). 
*/\n\t\t\tp = mallocx(sz, MALLOCX_TCACHE_NONE);\n\t\t\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\t\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\t\tsdallocx(p, sz, MALLOCX_TCACHE_NONE);\n\t\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\t\texpect_u32_ne(tick1, tick0,\n\t\t\t    \"Expected ticker to tick during sdallocx() \"\n\t\t\t    \"(sz=%zu)\", sz);\n\t\t}\n\t}\n\n\t/*\n\t * Test tcache fill/flush interactions for large and small size classes,\n\t * using an explicit tcache.\n\t */\n\tunsigned tcache_ind, i;\n\tsize_t tcache_sizes[2];\n\ttcache_sizes[0] = large0;\n\ttcache_sizes[1] = 1;\n\n\tsize_t tcache_max, sz_tcache_max;\n\tsz_tcache_max = sizeof(tcache_max);\n\texpect_d_eq(mallctl(\"arenas.tcache_max\", (void *)&tcache_max,\n\t    &sz_tcache_max, NULL, 0), 0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"tcache.create\", (void *)&tcache_ind, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\n\tfor (i = 0; i < sizeof(tcache_sizes) / sizeof(size_t); i++) {\n\t\tsz = tcache_sizes[i];\n\n\t\t/* tcache fill. */\n\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\tp = mallocx(sz, MALLOCX_TCACHE(tcache_ind));\n\t\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\t\ttick1 = ticker_geom_read(decay_ticker);\n\t\texpect_u32_ne(tick1, tick0,\n\t\t    \"Expected ticker to tick during tcache fill \"\n\t\t    \"(sz=%zu)\", sz);\n\t\t/* tcache flush. */\n\t\tdallocx(p, MALLOCX_TCACHE(tcache_ind));\n\t\ttick0 = ticker_geom_read(decay_ticker);\n\t\texpect_d_eq(mallctl(\"tcache.flush\", NULL, NULL,\n\t\t    (void *)&tcache_ind, sizeof(unsigned)), 0,\n\t\t    \"Unexpected mallctl failure\");\n\t\ttick1 = ticker_geom_read(decay_ticker);\n\n\t\t/* Will only tick if it's in tcache. 
*/\n\t\texpect_u32_ne(tick1, tick0,\n\t\t    \"Expected ticker to tick during tcache flush (sz=%zu)\", sz);\n\t}\n}\nTEST_END\n\nstatic void\ndecay_ticker_helper(unsigned arena_ind, int flags, bool dirty, ssize_t dt,\n    uint64_t dirty_npurge0, uint64_t muzzy_npurge0, bool terminate_asap) {\n#define NINTERVALS 101\n\tnstime_t time, update_interval, decay_ms, deadline;\n\n\tnstime_init_update(&time);\n\n\tnstime_init2(&decay_ms, dt, 0);\n\tnstime_copy(&deadline, &time);\n\tnstime_add(&deadline, &decay_ms);\n\n\tnstime_init2(&update_interval, dt, 0);\n\tnstime_idivide(&update_interval, NINTERVALS);\n\n\t/*\n\t * Keep q's slab from being deallocated during the looping below.  If a\n\t * cached slab were to repeatedly come and go during looping, it could\n\t * prevent the decay backlog ever becoming empty.\n\t */\n\tvoid *p = do_mallocx(1, flags);\n\tuint64_t dirty_npurge1, muzzy_npurge1;\n\tdo {\n\t\tfor (unsigned i = 0; i < ARENA_DECAY_NTICKS_PER_UPDATE / 2;\n\t\t    i++) {\n\t\t\tvoid *q = do_mallocx(1, flags);\n\t\t\tdallocx(q, flags);\n\t\t}\n\t\tdirty_npurge1 = get_arena_dirty_npurge(arena_ind);\n\t\tmuzzy_npurge1 = get_arena_muzzy_npurge(arena_ind);\n\n\t\tnstime_add(&time_mock, &update_interval);\n\t\tnstime_update(&time);\n\t} while (nstime_compare(&time, &deadline) <= 0 && ((dirty_npurge1 ==\n\t    dirty_npurge0 && muzzy_npurge1 == muzzy_npurge0) ||\n\t    !terminate_asap));\n\tdallocx(p, flags);\n\n\tif (config_stats) {\n\t\texpect_u64_gt(dirty_npurge1 + muzzy_npurge1, dirty_npurge0 +\n\t\t    muzzy_npurge0, \"Expected purging to occur\");\n\t}\n#undef NINTERVALS\n}\n\nTEST_BEGIN(test_decay_ticker) {\n\ttest_skip_if(is_background_thread_enabled());\n\ttest_skip_if(opt_hpa);\n#define NPS 2048\n\tssize_t ddt = opt_dirty_decay_ms;\n\tssize_t mdt = opt_muzzy_decay_ms;\n\tunsigned arena_ind = do_arena_create(ddt, mdt);\n\tint flags = (MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE);\n\tvoid *ps[NPS];\n\n\t/*\n\t * Allocate a bunch of large objects, pause the 
clock, deallocate every\n\t * other object (to fragment virtual memory), restore the clock, then\n\t * [md]allocx() in a tight loop while advancing time rapidly to verify\n\t * the ticker triggers purging.\n\t */\n\tsize_t large;\n\tsize_t sz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\n\tdo_purge(arena_ind);\n\tuint64_t dirty_npurge0 = get_arena_dirty_npurge(arena_ind);\n\tuint64_t muzzy_npurge0 = get_arena_muzzy_npurge(arena_ind);\n\n\tfor (unsigned i = 0; i < NPS; i++) {\n\t\tps[i] = do_mallocx(large, flags);\n\t}\n\n\tnupdates_mock = 0;\n\tnstime_init_update(&time_mock);\n\tmonotonic_mock = true;\n\n\tnstime_monotonic_orig = nstime_monotonic;\n\tnstime_update_orig = nstime_update;\n\tnstime_monotonic = nstime_monotonic_mock;\n\tnstime_update = nstime_update_mock;\n\n\tfor (unsigned i = 0; i < NPS; i += 2) {\n\t\tdallocx(ps[i], flags);\n\t\tunsigned nupdates0 = nupdates_mock;\n\t\tdo_decay(arena_ind);\n\t\texpect_u_gt(nupdates_mock, nupdates0,\n\t\t    \"Expected nstime_update() to be called\");\n\t}\n\n\tdecay_ticker_helper(arena_ind, flags, true, ddt, dirty_npurge0,\n\t    muzzy_npurge0, true);\n\tdecay_ticker_helper(arena_ind, flags, false, ddt+mdt, dirty_npurge0,\n\t    muzzy_npurge0, false);\n\n\tdo_arena_destroy(arena_ind);\n\n\tnstime_monotonic = nstime_monotonic_orig;\n\tnstime_update = nstime_update_orig;\n#undef NPS\n}\nTEST_END\n\nTEST_BEGIN(test_decay_nonmonotonic) {\n\ttest_skip_if(is_background_thread_enabled());\n\ttest_skip_if(opt_hpa);\n#define NPS (SMOOTHSTEP_NSTEPS + 1)\n\tint flags = (MALLOCX_ARENA(0) | MALLOCX_TCACHE_NONE);\n\tvoid *ps[NPS];\n\tuint64_t npurge0 = 0;\n\tuint64_t npurge1 = 0;\n\tsize_t sz, large0;\n\tunsigned i, nupdates0;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large0, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, 
NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\tdo_epoch();\n\tsz = sizeof(uint64_t);\n\tnpurge0 = get_arena_npurge(0);\n\n\tnupdates_mock = 0;\n\tnstime_init_update(&time_mock);\n\tmonotonic_mock = false;\n\n\tnstime_monotonic_orig = nstime_monotonic;\n\tnstime_update_orig = nstime_update;\n\tnstime_monotonic = nstime_monotonic_mock;\n\tnstime_update = nstime_update_mock;\n\n\tfor (i = 0; i < NPS; i++) {\n\t\tps[i] = mallocx(large0, flags);\n\t\texpect_ptr_not_null(ps[i], \"Unexpected mallocx() failure\");\n\t}\n\n\tfor (i = 0; i < NPS; i++) {\n\t\tdallocx(ps[i], flags);\n\t\tnupdates0 = nupdates_mock;\n\t\texpect_d_eq(mallctl(\"arena.0.decay\", NULL, NULL, NULL, 0), 0,\n\t\t    \"Unexpected arena.0.decay failure\");\n\t\texpect_u_gt(nupdates_mock, nupdates0,\n\t\t    \"Expected nstime_update() to be called\");\n\t}\n\n\tdo_epoch();\n\tsz = sizeof(uint64_t);\n\tnpurge1 = get_arena_npurge(0);\n\n\tif (config_stats) {\n\t\texpect_u64_eq(npurge0, npurge1, \"Unexpected purging occurred\");\n\t}\n\n\tnstime_monotonic = nstime_monotonic_orig;\n\tnstime_update = nstime_update_orig;\n#undef NPS\n}\nTEST_END\n\nTEST_BEGIN(test_decay_now) {\n\ttest_skip_if(is_background_thread_enabled());\n\ttest_skip_if(opt_hpa);\n\n\tunsigned arena_ind = do_arena_create(0, 0);\n\texpect_zu_eq(get_arena_pdirty(arena_ind), 0, \"Unexpected dirty pages\");\n\texpect_zu_eq(get_arena_pmuzzy(arena_ind), 0, \"Unexpected muzzy pages\");\n\tsize_t sizes[] = {16, PAGE<<2, HUGEPAGE<<2};\n\t/* Verify that dirty/muzzy pages never linger after deallocation. 
*/\n\tfor (unsigned i = 0; i < sizeof(sizes)/sizeof(size_t); i++) {\n\t\tsize_t size = sizes[i];\n\t\tgenerate_dirty(arena_ind, size);\n\t\texpect_zu_eq(get_arena_pdirty(arena_ind), 0,\n\t\t    \"Unexpected dirty pages\");\n\t\texpect_zu_eq(get_arena_pmuzzy(arena_ind), 0,\n\t\t    \"Unexpected muzzy pages\");\n\t}\n\tdo_arena_destroy(arena_ind);\n}\nTEST_END\n\nTEST_BEGIN(test_decay_never) {\n\ttest_skip_if(is_background_thread_enabled() || !config_stats);\n\ttest_skip_if(opt_hpa);\n\n\tunsigned arena_ind = do_arena_create(-1, -1);\n\tint flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\texpect_zu_eq(get_arena_pdirty(arena_ind), 0, \"Unexpected dirty pages\");\n\texpect_zu_eq(get_arena_pmuzzy(arena_ind), 0, \"Unexpected muzzy pages\");\n\tsize_t sizes[] = {16, PAGE<<2, HUGEPAGE<<2};\n\tvoid *ptrs[sizeof(sizes)/sizeof(size_t)];\n\tfor (unsigned i = 0; i < sizeof(sizes)/sizeof(size_t); i++) {\n\t\tptrs[i] = do_mallocx(sizes[i], flags);\n\t}\n\t/* Verify that each deallocation generates additional dirty pages. */\n\tsize_t pdirty_prev = get_arena_pdirty(arena_ind);\n\tsize_t pmuzzy_prev = get_arena_pmuzzy(arena_ind);\n\texpect_zu_eq(pdirty_prev, 0, \"Unexpected dirty pages\");\n\texpect_zu_eq(pmuzzy_prev, 0, \"Unexpected muzzy pages\");\n\tfor (unsigned i = 0; i < sizeof(sizes)/sizeof(size_t); i++) {\n\t\tdallocx(ptrs[i], flags);\n\t\tsize_t pdirty = get_arena_pdirty(arena_ind);\n\t\tsize_t pmuzzy = get_arena_pmuzzy(arena_ind);\n\t\texpect_zu_gt(pdirty + (size_t)get_arena_dirty_purged(arena_ind),\n\t\t    pdirty_prev, \"Expected dirty pages to increase.\");\n\t\texpect_zu_eq(pmuzzy, 0, \"Unexpected muzzy pages\");\n\t\tpdirty_prev = pdirty;\n\t}\n\tdo_arena_destroy(arena_ind);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_decay_ticks,\n\t    test_decay_ticker,\n\t    test_decay_nonmonotonic,\n\t    test_decay_now,\n\t    test_decay_never);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/arena_decay.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"dirty_decay_ms:1000,muzzy_decay_ms:1000,tcache_max:1024\"\n"
  },
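The harness script above pins both decay horizons to one second so purging is observable on a test timescale. The same knobs explain the two arena tests: `test_decay_now` creates an arena with decay times of 0 (dirty pages are purged immediately, so none linger), and `test_decay_never` uses -1 (decay disabled, so dirty pages only accumulate). A one-off equivalent of the script, where the exported value can be passed to any jemalloc-linked binary:

```shell
# Same configuration as arena_decay.sh, set for a single invocation.
# 0 would purge immediately; -1 would disable decay-driven purging.
export MALLOC_CONF="dirty_decay_ms:1000,muzzy_decay_ms:1000,tcache_max:1024"
echo "$MALLOC_CONF"
```

`MALLOC_CONF` is read at process start, so it must be set before the program under test is exec'd.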
  {
    "path": "deps/jemalloc/test/unit/arena_reset.c",
    "content": "#ifndef ARENA_RESET_PROF_C_\n#include \"test/jemalloc_test.h\"\n#endif\n\n#include \"jemalloc/internal/extent_mmap.h\"\n#include \"jemalloc/internal/rtree.h\"\n\n#include \"test/extent_hooks.h\"\n\nstatic unsigned\nget_nsizes_impl(const char *cmd) {\n\tunsigned ret;\n\tsize_t z;\n\n\tz = sizeof(unsigned);\n\texpect_d_eq(mallctl(cmd, (void *)&ret, &z, NULL, 0), 0,\n\t    \"Unexpected mallctl(\\\"%s\\\", ...) failure\", cmd);\n\n\treturn ret;\n}\n\nstatic unsigned\nget_nsmall(void) {\n\treturn get_nsizes_impl(\"arenas.nbins\");\n}\n\nstatic unsigned\nget_nlarge(void) {\n\treturn get_nsizes_impl(\"arenas.nlextents\");\n}\n\nstatic size_t\nget_size_impl(const char *cmd, size_t ind) {\n\tsize_t ret;\n\tsize_t z;\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&ret, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\", %zu], ...) failure\", cmd, ind);\n\n\treturn ret;\n}\n\nstatic size_t\nget_small_size(size_t ind) {\n\treturn get_size_impl(\"arenas.bin.0.size\", ind);\n}\n\nstatic size_t\nget_large_size(size_t ind) {\n\treturn get_size_impl(\"arenas.lextent.0.size\", ind);\n}\n\n/* Like ivsalloc(), but safe to call on discarded allocations. 
*/\nstatic size_t\nvsalloc(tsdn_t *tsdn, const void *ptr) {\n\temap_full_alloc_ctx_t full_alloc_ctx;\n\tbool missing = emap_full_alloc_ctx_try_lookup(tsdn, &arena_emap_global,\n\t    ptr, &full_alloc_ctx);\n\tif (missing) {\n\t\treturn 0;\n\t}\n\n\tif (full_alloc_ctx.edata == NULL) {\n\t\treturn 0;\n\t}\n\tif (edata_state_get(full_alloc_ctx.edata) != extent_state_active) {\n\t\treturn 0;\n\t}\n\n\tif (full_alloc_ctx.szind == SC_NSIZES) {\n\t\treturn 0;\n\t}\n\n\treturn sz_index2size(full_alloc_ctx.szind);\n}\n\nstatic unsigned\ndo_arena_create(extent_hooks_t *h) {\n\tunsigned arena_ind;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz,\n\t    (void *)(h != NULL ? &h : NULL), (h != NULL ? sizeof(h) : 0)), 0,\n\t    \"Unexpected mallctl() failure\");\n\treturn arena_ind;\n}\n\nstatic void\ndo_arena_reset_pre(unsigned arena_ind, void ***ptrs, unsigned *nptrs) {\n#define NLARGE\t32\n\tunsigned nsmall, nlarge, i;\n\tsize_t sz;\n\tint flags;\n\ttsdn_t *tsdn;\n\n\tflags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\n\tnsmall = get_nsmall();\n\tnlarge = get_nlarge() > NLARGE ? NLARGE : get_nlarge();\n\t*nptrs = nsmall + nlarge;\n\t*ptrs = (void **)malloc(*nptrs * sizeof(void *));\n\texpect_ptr_not_null(*ptrs, \"Unexpected malloc() failure\");\n\n\t/* Allocate objects with a wide range of sizes. */\n\tfor (i = 0; i < nsmall; i++) {\n\t\tsz = get_small_size(i);\n\t\t(*ptrs)[i] = mallocx(sz, flags);\n\t\texpect_ptr_not_null((*ptrs)[i],\n\t\t    \"Unexpected mallocx(%zu, %#x) failure\", sz, flags);\n\t}\n\tfor (i = 0; i < nlarge; i++) {\n\t\tsz = get_large_size(i);\n\t\t(*ptrs)[nsmall + i] = mallocx(sz, flags);\n\t\texpect_ptr_not_null((*ptrs)[i],\n\t\t    \"Unexpected mallocx(%zu, %#x) failure\", sz, flags);\n\t}\n\n\ttsdn = tsdn_fetch();\n\n\t/* Verify allocations. 
*/\n\tfor (i = 0; i < *nptrs; i++) {\n\t\texpect_zu_gt(ivsalloc(tsdn, (*ptrs)[i]), 0,\n\t\t    \"Allocation should have queryable size\");\n\t}\n}\n\nstatic void\ndo_arena_reset_post(void **ptrs, unsigned nptrs, unsigned arena_ind) {\n\ttsdn_t *tsdn;\n\tunsigned i;\n\n\ttsdn = tsdn_fetch();\n\n\tif (have_background_thread) {\n\t\tmalloc_mutex_lock(tsdn,\n\t\t    &background_thread_info_get(arena_ind)->mtx);\n\t}\n\t/* Verify allocations no longer exist. */\n\tfor (i = 0; i < nptrs; i++) {\n\t\texpect_zu_eq(vsalloc(tsdn, ptrs[i]), 0,\n\t\t    \"Allocation should no longer exist\");\n\t}\n\tif (have_background_thread) {\n\t\tmalloc_mutex_unlock(tsdn,\n\t\t    &background_thread_info_get(arena_ind)->mtx);\n\t}\n\n\tfree(ptrs);\n}\n\nstatic void\ndo_arena_reset_destroy(const char *name, unsigned arena_ind) {\n\tsize_t mib[3];\n\tsize_t miblen;\n\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(name, mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nstatic void\ndo_arena_reset(unsigned arena_ind) {\n\tdo_arena_reset_destroy(\"arena.0.reset\", arena_ind);\n}\n\nstatic void\ndo_arena_destroy(unsigned arena_ind) {\n\tdo_arena_reset_destroy(\"arena.0.destroy\", arena_ind);\n}\n\nTEST_BEGIN(test_arena_reset) {\n\tunsigned arena_ind;\n\tvoid **ptrs;\n\tunsigned nptrs;\n\n\tarena_ind = do_arena_create(NULL);\n\tdo_arena_reset_pre(arena_ind, &ptrs, &nptrs);\n\tdo_arena_reset(arena_ind);\n\tdo_arena_reset_post(ptrs, nptrs, arena_ind);\n}\nTEST_END\n\nstatic bool\narena_i_initialized(unsigned arena_ind, bool refresh) {\n\tbool initialized;\n\tsize_t mib[3];\n\tsize_t miblen, sz;\n\n\tif (refresh) {\n\t\tuint64_t epoch = 1;\n\t\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch,\n\t\t    sizeof(epoch)), 0, \"Unexpected mallctl() failure\");\n\t}\n\n\tmiblen = 
sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.initialized\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\tsz = sizeof(initialized);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&initialized, &sz, NULL,\n\t    0), 0, \"Unexpected mallctlbymib() failure\");\n\n\treturn initialized;\n}\n\nTEST_BEGIN(test_arena_destroy_initial) {\n\texpect_false(arena_i_initialized(MALLCTL_ARENAS_DESTROYED, false),\n\t    \"Destroyed arena stats should not be initialized\");\n}\nTEST_END\n\nTEST_BEGIN(test_arena_destroy_hooks_default) {\n\tunsigned arena_ind, arena_ind_another, arena_ind_prev;\n\tvoid **ptrs;\n\tunsigned nptrs;\n\n\tarena_ind = do_arena_create(NULL);\n\tdo_arena_reset_pre(arena_ind, &ptrs, &nptrs);\n\n\texpect_false(arena_i_initialized(arena_ind, false),\n\t    \"Arena stats should not be initialized\");\n\texpect_true(arena_i_initialized(arena_ind, true),\n\t    \"Arena stats should be initialized\");\n\n\t/*\n\t * Create another arena before destroying one, to better verify arena\n\t * index reuse.\n\t */\n\tarena_ind_another = do_arena_create(NULL);\n\n\tdo_arena_destroy(arena_ind);\n\n\texpect_false(arena_i_initialized(arena_ind, true),\n\t    \"Arena stats should not be initialized\");\n\texpect_true(arena_i_initialized(MALLCTL_ARENAS_DESTROYED, false),\n\t    \"Destroyed arena stats should be initialized\");\n\n\tdo_arena_reset_post(ptrs, nptrs, arena_ind);\n\n\tarena_ind_prev = arena_ind;\n\tarena_ind = do_arena_create(NULL);\n\tdo_arena_reset_pre(arena_ind, &ptrs, &nptrs);\n\texpect_u_eq(arena_ind, arena_ind_prev,\n\t    \"Arena index should have been recycled\");\n\tdo_arena_destroy(arena_ind);\n\tdo_arena_reset_post(ptrs, nptrs, arena_ind);\n\n\tdo_arena_destroy(arena_ind_another);\n\n\t/* Try arena.create with custom hooks. 
*/\n\tsize_t sz = sizeof(extent_hooks_t *);\n\textent_hooks_t *a0_default_hooks;\n\texpect_d_eq(mallctl(\"arena.0.extent_hooks\", (void *)&a0_default_hooks,\n\t    &sz, NULL, 0), 0, \"Unexpected mallctlnametomib() failure\");\n\n\t/* Default impl; but wrapped as \"customized\". */\n\textent_hooks_t new_hooks = *a0_default_hooks;\n\textent_hooks_t *hook = &new_hooks;\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz,\n\t    (void *)&hook, sizeof(void *)), 0,\n\t    \"Unexpected mallctl() failure\");\n\tdo_arena_destroy(arena_ind);\n}\nTEST_END\n\n/*\n * Actually unmap extents, regardless of opt_retain, so that attempts to access\n * a destroyed arena's memory will segfault.\n */\nstatic bool\nextent_dalloc_unmap(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    bool committed, unsigned arena_ind) {\n\tTRACE_HOOK(\"%s(extent_hooks=%p, addr=%p, size=%zu, committed=%s, \"\n\t    \"arena_ind=%u)\\n\", __func__, extent_hooks, addr, size, committed ?\n\t    \"true\" : \"false\", arena_ind);\n\texpect_ptr_eq(extent_hooks, &hooks,\n\t    \"extent_hooks should be same as pointer used to set hooks\");\n\texpect_ptr_eq(extent_hooks->dalloc, extent_dalloc_unmap,\n\t    \"Wrong hook function\");\n\tcalled_dalloc = true;\n\tif (!try_dalloc) {\n\t\treturn true;\n\t}\n\tdid_dalloc = true;\n\tif (!maps_coalesce && opt_retain) {\n\t\treturn true;\n\t}\n\tpages_unmap(addr, size);\n\treturn false;\n}\n\nstatic extent_hooks_t hooks_orig;\n\nstatic extent_hooks_t hooks_unmap = {\n\textent_alloc_hook,\n\textent_dalloc_unmap, /* dalloc */\n\textent_destroy_hook,\n\textent_commit_hook,\n\textent_decommit_hook,\n\textent_purge_lazy_hook,\n\textent_purge_forced_hook,\n\textent_split_hook,\n\textent_merge_hook\n};\n\nTEST_BEGIN(test_arena_destroy_hooks_unmap) {\n\tunsigned arena_ind;\n\tvoid **ptrs;\n\tunsigned nptrs;\n\n\textent_hooks_prep();\n\tif (maps_coalesce) {\n\t\ttry_decommit = false;\n\t}\n\tmemcpy(&hooks_orig, &hooks, 
sizeof(extent_hooks_t));\n\tmemcpy(&hooks, &hooks_unmap, sizeof(extent_hooks_t));\n\n\tdid_alloc = false;\n\tarena_ind = do_arena_create(&hooks);\n\tdo_arena_reset_pre(arena_ind, &ptrs, &nptrs);\n\n\texpect_true(did_alloc, \"Expected alloc\");\n\n\texpect_false(arena_i_initialized(arena_ind, false),\n\t    \"Arena stats should not be initialized\");\n\texpect_true(arena_i_initialized(arena_ind, true),\n\t    \"Arena stats should be initialized\");\n\n\tdid_dalloc = false;\n\tdo_arena_destroy(arena_ind);\n\texpect_true(did_dalloc, \"Expected dalloc\");\n\n\texpect_false(arena_i_initialized(arena_ind, true),\n\t    \"Arena stats should not be initialized\");\n\texpect_true(arena_i_initialized(MALLCTL_ARENAS_DESTROYED, false),\n\t    \"Destroyed arena stats should be initialized\");\n\n\tdo_arena_reset_post(ptrs, nptrs, arena_ind);\n\n\tmemcpy(&hooks, &hooks_orig, sizeof(extent_hooks_t));\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_arena_reset,\n\t    test_arena_destroy_initial,\n\t    test_arena_destroy_hooks_default,\n\t    test_arena_destroy_hooks_unmap);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/arena_reset_prof.c",
    "content": "#include \"test/jemalloc_test.h\"\n#define ARENA_RESET_PROF_C_\n\n#include \"arena_reset.c\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/arena_reset_prof.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"prof:true,lg_prof_sample:0\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/atomic.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * We *almost* have consistent short names (e.g. \"u32\" for uint32_t, \"b\" for\n * bool, etc.  The one exception is that the short name for void * is \"p\" in\n * some places and \"ptr\" in others.  In the long run it would be nice to unify\n * these, but in the short run we'll use this shim.\n */\n#define expect_p_eq expect_ptr_eq\n\n/*\n * t: the non-atomic type, like \"uint32_t\".\n * ta: the short name for the type, like \"u32\".\n * val[1,2,3]: Values of the given type.  The CAS tests use val2 for expected,\n * and val3 for desired.\n */\n\n#define DO_TESTS(t, ta, val1, val2, val3) do {\t\t\t\t\\\n\tt val;\t\t\t\t\t\t\t\t\\\n\tt expected;\t\t\t\t\t\t\t\\\n\tbool success;\t\t\t\t\t\t\t\\\n\t/* This (along with the load below) also tests ATOMIC_LOAD. */\t\\\n\tatomic_##ta##_t atom = ATOMIC_INIT(val1);\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* ATOMIC_INIT and load. */\t\t\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1, val, \"Load or init failed\");\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Store. */\t\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tatomic_store_##ta(&atom, val2, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val2, val, \"Store failed\");\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Exchange. */\t\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_exchange_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val, \"Exchange returned invalid value\");\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val2, val, \"Exchange store invalid value\");\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* \t\t\t\t\t\t\t\t\\\n\t * Weak CAS.  
Spurious failures are allowed, so we loop a few\t\\\n\t * times.\t\t\t\t\t\t\t\\\n\t */\t\t\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tsuccess = false;\t\t\t\t\t\t\\\n\tfor (int retry = 0; retry < 10 && !success; retry++) {\t\t\\\n\t\texpected = val2;\t\t\t\t\t\\\n\t\tsuccess = atomic_compare_exchange_weak_##ta(&atom,\t\\\n\t\t    &expected, val3, ATOMIC_RELAXED, ATOMIC_RELAXED);\t\\\n\t\texpect_##ta##_eq(val1, expected, \t\t\t\\\n\t\t    \"CAS should update expected\");\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\texpect_b_eq(val1 == val2, success,\t\t\t\t\\\n\t    \"Weak CAS did the wrong state update\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\tif (success) {\t\t\t\t\t\t\t\\\n\t\texpect_##ta##_eq(val3, val,\t\t\t\t\\\n\t\t    \"Successful CAS should update atomic\");\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\texpect_##ta##_eq(val1, val,\t\t\t\t\\\n\t\t    \"Unsuccessful CAS should not update atomic\");\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Strong CAS. */\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\texpected = val2;\t\t\t\t\t\t\\\n\tsuccess = atomic_compare_exchange_strong_##ta(&atom, &expected,\t\\\n\t    val3, ATOMIC_RELAXED, ATOMIC_RELAXED);\t\t\t\\\n\texpect_b_eq(val1 == val2, success,\t\t\t\t\\\n\t    \"Strong CAS did the wrong state update\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\tif (success) {\t\t\t\t\t\t\t\\\n\t\texpect_##ta##_eq(val3, val,\t\t\t\t\\\n\t\t    \"Successful CAS should update atomic\");\t\t\\\n\t} else {\t\t\t\t\t\t\t\\\n\t\texpect_##ta##_eq(val1, val,\t\t\t\t\\\n\t\t    \"Unsuccessful CAS should not update atomic\");\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define DO_INTEGER_TESTS(t, ta, val1, val2) do {\t\t\t\\\n\tatomic_##ta##_t atom;\t\t\t\t\t\t\\\n\tt val;\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Fetch-add. 
*/\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_fetch_add_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val,\t\t\t\t\t\\\n\t    \"Fetch-add should return previous value\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1 + val2, val,\t\t\t\t\\\n\t    \"Fetch-add should update atomic\");\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Fetch-sub. */\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_fetch_sub_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val,\t\t\t\t\t\\\n\t    \"Fetch-sub should return previous value\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1 - val2, val,\t\t\t\t\\\n\t    \"Fetch-sub should update atomic\");\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Fetch-and. */\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_fetch_and_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val,\t\t\t\t\t\\\n\t    \"Fetch-and should return previous value\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1 & val2, val,\t\t\t\t\\\n\t    \"Fetch-and should update atomic\");\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Fetch-or. */\t\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_fetch_or_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val,\t\t\t\t\t\\\n\t    \"Fetch-or should return previous value\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1 | val2, val,\t\t\t\t\\\n\t    \"Fetch-or should update atomic\");\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t/* Fetch-xor. 
*/\t\t\t\t\t\t\\\n\tatomic_store_##ta(&atom, val1, ATOMIC_RELAXED);\t\t\t\\\n\tval = atomic_fetch_xor_##ta(&atom, val2, ATOMIC_RELAXED);\t\\\n\texpect_##ta##_eq(val1, val,\t\t\t\t\t\\\n\t    \"Fetch-xor should return previous value\");\t\t\t\\\n\tval = atomic_load_##ta(&atom, ATOMIC_RELAXED);\t\t\t\\\n\texpect_##ta##_eq(val1 ^ val2, val,\t\t\t\t\\\n\t    \"Fetch-xor should update atomic\");\t\t\t\t\\\n} while (0)\n\n#define TEST_STRUCT(t, ta)\t\t\t\t\t\t\\\ntypedef struct {\t\t\t\t\t\t\t\\\n\tt val1;\t\t\t\t\t\t\t\t\\\n\tt val2;\t\t\t\t\t\t\t\t\\\n\tt val3;\t\t\t\t\t\t\t\t\\\n} ta##_test_t;\n\n#define TEST_CASES(t) {\t\t\t\t\t\t\t\\\n\t{(t)-1, (t)-1, (t)-2},\t\t\t\t\t\t\\\n\t{(t)-1, (t) 0, (t)-2},\t\t\t\t\t\t\\\n\t{(t)-1, (t) 1, (t)-2},\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t{(t) 0, (t)-1, (t)-2},\t\t\t\t\t\t\\\n\t{(t) 0, (t) 0, (t)-2},\t\t\t\t\t\t\\\n\t{(t) 0, (t) 1, (t)-2},\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t{(t) 1, (t)-1, (t)-2},\t\t\t\t\t\t\\\n\t{(t) 1, (t) 0, (t)-2},\t\t\t\t\t\t\\\n\t{(t) 1, (t) 1, (t)-2},\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\t{(t)0, (t)-(1 << 22), (t)-2},\t\t\t\t\t\\\n\t{(t)0, (t)(1 << 22), (t)-2},\t\t\t\t\t\\\n\t{(t)(1 << 22), (t)-(1 << 22), (t)-2},\t\t\t\t\\\n\t{(t)(1 << 22), (t)(1 << 22), (t)-2}\t\t\t\t\\\n}\n\n#define TEST_BODY(t, ta) do {\t\t\t\t\t\t\\\n\tconst ta##_test_t tests[] = TEST_CASES(t);\t\t\t\\\n\tfor (unsigned i = 0; i < sizeof(tests)/sizeof(tests[0]); i++) {\t\\\n\t\tta##_test_t test = tests[i];\t\t\t\t\\\n\t\tDO_TESTS(t, ta, test.val1, test.val2, test.val3);\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\n#define INTEGER_TEST_BODY(t, ta) do {\t\t\t\t\t\\\n\tconst ta##_test_t tests[] = TEST_CASES(t);\t\t\t\\\n\tfor (unsigned i = 0; i < sizeof(tests)/sizeof(tests[0]); i++) {\t\\\n\t\tta##_test_t test = tests[i];\t\t\t\t\\\n\t\tDO_TESTS(t, ta, test.val1, test.val2, test.val3);\t\\\n\t\tDO_INTEGER_TESTS(t, ta, test.val1, test.val2);\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\nTEST_STRUCT(uint64_t, 
u64);\nTEST_BEGIN(test_atomic_u64) {\n#if !(LG_SIZEOF_PTR == 3 || LG_SIZEOF_INT == 3)\n\ttest_skip(\"64-bit atomic operations not supported\");\n#else\n\tINTEGER_TEST_BODY(uint64_t, u64);\n#endif\n}\nTEST_END\n\n\nTEST_STRUCT(uint32_t, u32);\nTEST_BEGIN(test_atomic_u32) {\n\tINTEGER_TEST_BODY(uint32_t, u32);\n}\nTEST_END\n\nTEST_STRUCT(void *, p);\nTEST_BEGIN(test_atomic_p) {\n\tTEST_BODY(void *, p);\n}\nTEST_END\n\nTEST_STRUCT(size_t, zu);\nTEST_BEGIN(test_atomic_zu) {\n\tINTEGER_TEST_BODY(size_t, zu);\n}\nTEST_END\n\nTEST_STRUCT(ssize_t, zd);\nTEST_BEGIN(test_atomic_zd) {\n\tINTEGER_TEST_BODY(ssize_t, zd);\n}\nTEST_END\n\n\nTEST_STRUCT(unsigned, u);\nTEST_BEGIN(test_atomic_u) {\n\tINTEGER_TEST_BODY(unsigned, u);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_atomic_u64,\n\t    test_atomic_u32,\n\t    test_atomic_p,\n\t    test_atomic_zu,\n\t    test_atomic_zd,\n\t    test_atomic_u);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/background_thread.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/util.h\"\n\nstatic void\ntest_switch_background_thread_ctl(bool new_val) {\n\tbool e0, e1;\n\tsize_t sz = sizeof(bool);\n\n\te1 = new_val;\n\texpect_d_eq(mallctl(\"background_thread\", (void *)&e0, &sz,\n\t    &e1, sz), 0, \"Unexpected mallctl() failure\");\n\texpect_b_eq(e0, !e1,\n\t    \"background_thread should be %d before.\\n\", !e1);\n\tif (e1) {\n\t\texpect_zu_gt(n_background_threads, 0,\n\t\t    \"Number of background threads should be non zero.\\n\");\n\t} else {\n\t\texpect_zu_eq(n_background_threads, 0,\n\t\t    \"Number of background threads should be zero.\\n\");\n\t}\n}\n\nstatic void\ntest_repeat_background_thread_ctl(bool before) {\n\tbool e0, e1;\n\tsize_t sz = sizeof(bool);\n\n\te1 = before;\n\texpect_d_eq(mallctl(\"background_thread\", (void *)&e0, &sz,\n\t    &e1, sz), 0, \"Unexpected mallctl() failure\");\n\texpect_b_eq(e0, before,\n\t    \"background_thread should be %d.\\n\", before);\n\tif (e1) {\n\t\texpect_zu_gt(n_background_threads, 0,\n\t\t    \"Number of background threads should be non zero.\\n\");\n\t} else {\n\t\texpect_zu_eq(n_background_threads, 0,\n\t\t    \"Number of background threads should be zero.\\n\");\n\t}\n}\n\nTEST_BEGIN(test_background_thread_ctl) {\n\ttest_skip_if(!have_background_thread);\n\n\tbool e0, e1;\n\tsize_t sz = sizeof(bool);\n\n\texpect_d_eq(mallctl(\"opt.background_thread\", (void *)&e0, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctl(\"background_thread\", (void *)&e1, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\texpect_b_eq(e0, e1,\n\t    \"Default and opt.background_thread does not match.\\n\");\n\tif (e0) {\n\t\ttest_switch_background_thread_ctl(false);\n\t}\n\texpect_zu_eq(n_background_threads, 0,\n\t    \"Number of background threads should be 0.\\n\");\n\n\tfor (unsigned i = 0; i < 4; i++) 
{\n\t\ttest_switch_background_thread_ctl(true);\n\t\ttest_repeat_background_thread_ctl(true);\n\t\ttest_repeat_background_thread_ctl(true);\n\n\t\ttest_switch_background_thread_ctl(false);\n\t\ttest_repeat_background_thread_ctl(false);\n\t\ttest_repeat_background_thread_ctl(false);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_background_thread_running) {\n\ttest_skip_if(!have_background_thread);\n\ttest_skip_if(!config_stats);\n\n#if defined(JEMALLOC_BACKGROUND_THREAD)\n\ttsd_t *tsd = tsd_fetch();\n\tbackground_thread_info_t *info = &background_thread_info[0];\n\n\ttest_repeat_background_thread_ctl(false);\n\ttest_switch_background_thread_ctl(true);\n\texpect_b_eq(info->state, background_thread_started,\n\t    \"Background_thread did not start.\\n\");\n\n\tnstime_t start;\n\tnstime_init_update(&start);\n\n\tbool ran = false;\n\twhile (true) {\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t\tif (info->tot_n_runs > 0) {\n\t\t\tran = true;\n\t\t}\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\t\tif (ran) {\n\t\t\tbreak;\n\t\t}\n\n\t\tnstime_t now;\n\t\tnstime_init_update(&now);\n\t\tnstime_subtract(&now, &start);\n\t\texpect_u64_lt(nstime_sec(&now), 1000,\n\t\t    \"Background threads did not run for 1000 seconds.\");\n\t\tsleep(1);\n\t}\n\ttest_switch_background_thread_ctl(false);\n#endif\n}\nTEST_END\n\nint\nmain(void) {\n\t/* Background_thread creation tests reentrancy naturally. */\n\treturn test_no_reentrancy(\n\t    test_background_thread_ctl,\n\t    test_background_thread_running);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/background_thread_enable.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nconst char *malloc_conf = \"background_thread:false,narenas:1,max_background_threads:20\";\n\nstatic unsigned\nmax_test_narenas(void) {\n\t/*\n\t * 10 here is somewhat arbitrary, except insofar as we want to ensure\n\t * that the number of background threads is smaller than the number of\n\t * arenas.  I'll ragequit long before we have to spin up 10 threads per\n\t * cpu to handle background purging, so this is a conservative\n\t * approximation.\n\t */\n\tunsigned ret = 10 * ncpus;\n\t/* Limit the max to avoid VM exhaustion on 32-bit . */\n\tif (ret > 512) {\n\t\tret = 512;\n\t}\n\n\treturn ret;\n}\n\nTEST_BEGIN(test_deferred) {\n\ttest_skip_if(!have_background_thread);\n\n\tunsigned id;\n\tsize_t sz_u = sizeof(unsigned);\n\n\tfor (unsigned i = 0; i < max_test_narenas(); i++) {\n\t\texpect_d_eq(mallctl(\"arenas.create\", &id, &sz_u, NULL, 0), 0,\n\t\t    \"Failed to create arena\");\n\t}\n\n\tbool enable = true;\n\tsize_t sz_b = sizeof(bool);\n\texpect_d_eq(mallctl(\"background_thread\", NULL, NULL, &enable, sz_b), 0,\n\t    \"Failed to enable background threads\");\n\tenable = false;\n\texpect_d_eq(mallctl(\"background_thread\", NULL, NULL, &enable, sz_b), 0,\n\t    \"Failed to disable background threads\");\n}\nTEST_END\n\nTEST_BEGIN(test_max_background_threads) {\n\ttest_skip_if(!have_background_thread);\n\n\tsize_t max_n_thds;\n\tsize_t opt_max_n_thds;\n\tsize_t sz_m = sizeof(max_n_thds);\n\texpect_d_eq(mallctl(\"opt.max_background_threads\",\n\t    &opt_max_n_thds, &sz_m, NULL, 0), 0,\n\t    \"Failed to get opt.max_background_threads\");\n\texpect_d_eq(mallctl(\"max_background_threads\", &max_n_thds, &sz_m, NULL,\n\t    0), 0, \"Failed to get max background threads\");\n\texpect_zu_eq(opt_max_n_thds, max_n_thds,\n\t    \"max_background_threads and \"\n\t    \"opt.max_background_threads should match\");\n\texpect_d_eq(mallctl(\"max_background_threads\", NULL, NULL, &max_n_thds,\n\t    sz_m), 0, \"Failed to 
set max background threads\");\n\n\tunsigned id;\n\tsize_t sz_u = sizeof(unsigned);\n\n\tfor (unsigned i = 0; i < max_test_narenas(); i++) {\n\t\texpect_d_eq(mallctl(\"arenas.create\", &id, &sz_u, NULL, 0), 0,\n\t\t    \"Failed to create arena\");\n\t}\n\n\tbool enable = true;\n\tsize_t sz_b = sizeof(bool);\n\texpect_d_eq(mallctl(\"background_thread\", NULL, NULL, &enable, sz_b), 0,\n\t    \"Failed to enable background threads\");\n\texpect_zu_eq(n_background_threads, max_n_thds,\n\t    \"Number of background threads should not change.\\n\");\n\tsize_t new_max_thds = max_n_thds - 1;\n\tif (new_max_thds > 0) {\n\t\texpect_d_eq(mallctl(\"max_background_threads\", NULL, NULL,\n\t\t    &new_max_thds, sz_m), 0,\n\t\t    \"Failed to set max background threads\");\n\t\texpect_zu_eq(n_background_threads, new_max_thds,\n\t\t    \"Number of background threads should decrease by 1.\\n\");\n\t}\n\tnew_max_thds = 1;\n\texpect_d_eq(mallctl(\"max_background_threads\", NULL, NULL, &new_max_thds,\n\t    sz_m), 0, \"Failed to set max background threads\");\n\texpect_zu_eq(n_background_threads, new_max_thds,\n\t    \"Number of background threads should be 1.\\n\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t\ttest_deferred,\n\t\ttest_max_background_threads);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/base.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"test/extent_hooks.h\"\n\nstatic extent_hooks_t hooks_null = {\n\textent_alloc_hook,\n\tNULL, /* dalloc */\n\tNULL, /* destroy */\n\tNULL, /* commit */\n\tNULL, /* decommit */\n\tNULL, /* purge_lazy */\n\tNULL, /* purge_forced */\n\tNULL, /* split */\n\tNULL /* merge */\n};\n\nstatic extent_hooks_t hooks_not_null = {\n\textent_alloc_hook,\n\textent_dalloc_hook,\n\textent_destroy_hook,\n\tNULL, /* commit */\n\textent_decommit_hook,\n\textent_purge_lazy_hook,\n\textent_purge_forced_hook,\n\tNULL, /* split */\n\tNULL /* merge */\n};\n\nTEST_BEGIN(test_base_hooks_default) {\n\tbase_t *base;\n\tsize_t allocated0, allocated1, resident, mapped, n_thp;\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tbase = base_new(tsdn, 0,\n\t    (extent_hooks_t *)&ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\n\tif (config_stats) {\n\t\tbase_stats_get(tsdn, base, &allocated0, &resident, &mapped,\n\t\t    &n_thp);\n\t\texpect_zu_ge(allocated0, sizeof(base_t),\n\t\t    \"Base header should count as allocated\");\n\t\tif (opt_metadata_thp == metadata_thp_always) {\n\t\t\texpect_zu_gt(n_thp, 0,\n\t\t\t    \"Base should have 1 THP at least.\");\n\t\t}\n\t}\n\n\texpect_ptr_not_null(base_alloc(tsdn, base, 42, 1),\n\t    \"Unexpected base_alloc() failure\");\n\n\tif (config_stats) {\n\t\tbase_stats_get(tsdn, base, &allocated1, &resident, &mapped,\n\t\t    &n_thp);\n\t\texpect_zu_ge(allocated1 - allocated0, 42,\n\t\t    \"At least 42 bytes were allocated by base_alloc()\");\n\t}\n\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\nTEST_BEGIN(test_base_hooks_null) {\n\textent_hooks_t hooks_orig;\n\tbase_t *base;\n\tsize_t allocated0, allocated1, resident, mapped, n_thp;\n\n\textent_hooks_prep();\n\ttry_dalloc = false;\n\ttry_destroy = true;\n\ttry_decommit = false;\n\ttry_purge_lazy = false;\n\ttry_purge_forced = false;\n\tmemcpy(&hooks_orig, &hooks, sizeof(extent_hooks_t));\n\tmemcpy(&hooks, &hooks_null, 
sizeof(extent_hooks_t));\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tbase = base_new(tsdn, 0, &hooks, /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new() failure\");\n\n\tif (config_stats) {\n\t\tbase_stats_get(tsdn, base, &allocated0, &resident, &mapped,\n\t\t    &n_thp);\n\t\texpect_zu_ge(allocated0, sizeof(base_t),\n\t\t    \"Base header should count as allocated\");\n\t\tif (opt_metadata_thp == metadata_thp_always) {\n\t\t\texpect_zu_gt(n_thp, 0,\n\t\t\t    \"Base should have 1 THP at least.\");\n\t\t}\n\t}\n\n\texpect_ptr_not_null(base_alloc(tsdn, base, 42, 1),\n\t    \"Unexpected base_alloc() failure\");\n\n\tif (config_stats) {\n\t\tbase_stats_get(tsdn, base, &allocated1, &resident, &mapped,\n\t\t    &n_thp);\n\t\texpect_zu_ge(allocated1 - allocated0, 42,\n\t\t    \"At least 42 bytes were allocated by base_alloc()\");\n\t}\n\n\tbase_delete(tsdn, base);\n\n\tmemcpy(&hooks, &hooks_orig, sizeof(extent_hooks_t));\n}\nTEST_END\n\nTEST_BEGIN(test_base_hooks_not_null) {\n\textent_hooks_t hooks_orig;\n\tbase_t *base;\n\tvoid *p, *q, *r, *r_exp;\n\n\textent_hooks_prep();\n\ttry_dalloc = false;\n\ttry_destroy = true;\n\ttry_decommit = false;\n\ttry_purge_lazy = false;\n\ttry_purge_forced = false;\n\tmemcpy(&hooks_orig, &hooks, sizeof(extent_hooks_t));\n\tmemcpy(&hooks, &hooks_not_null, sizeof(extent_hooks_t));\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tdid_alloc = false;\n\tbase = base_new(tsdn, 0, &hooks, /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new() failure\");\n\texpect_true(did_alloc, \"Expected alloc\");\n\n\t/*\n\t * Check for tight packing at specified alignment under simple\n\t * conditions.\n\t */\n\t{\n\t\tconst size_t alignments[] = {\n\t\t\t1,\n\t\t\tQUANTUM,\n\t\t\tQUANTUM << 1,\n\t\t\tCACHELINE,\n\t\t\tCACHELINE << 1,\n\t\t};\n\t\tunsigned i;\n\n\t\tfor (i = 0; i < sizeof(alignments) / sizeof(size_t); i++) {\n\t\t\tsize_t alignment = alignments[i];\n\t\t\tsize_t align_ceil = 
ALIGNMENT_CEILING(alignment,\n\t\t\t    QUANTUM);\n\t\t\tp = base_alloc(tsdn, base, 1, alignment);\n\t\t\texpect_ptr_not_null(p,\n\t\t\t    \"Unexpected base_alloc() failure\");\n\t\t\texpect_ptr_eq(p,\n\t\t\t    (void *)(ALIGNMENT_CEILING((uintptr_t)p,\n\t\t\t    alignment)), \"Expected quantum alignment\");\n\t\t\tq = base_alloc(tsdn, base, alignment, alignment);\n\t\t\texpect_ptr_not_null(q,\n\t\t\t    \"Unexpected base_alloc() failure\");\n\t\t\texpect_ptr_eq((void *)((uintptr_t)p + align_ceil), q,\n\t\t\t    \"Minimal allocation should take up %zu bytes\",\n\t\t\t    align_ceil);\n\t\t\tr = base_alloc(tsdn, base, 1, alignment);\n\t\t\texpect_ptr_not_null(r,\n\t\t\t    \"Unexpected base_alloc() failure\");\n\t\t\texpect_ptr_eq((void *)((uintptr_t)q + align_ceil), r,\n\t\t\t    \"Minimal allocation should take up %zu bytes\",\n\t\t\t    align_ceil);\n\t\t}\n\t}\n\n\t/*\n\t * Allocate an object that cannot fit in the first block, then verify\n\t * that the first block's remaining space is considered for subsequent\n\t * allocation.\n\t */\n\texpect_zu_ge(edata_bsize_get(&base->blocks->edata), QUANTUM,\n\t    \"Remainder insufficient for test\");\n\t/* Use up all but one quantum of block. 
*/\n\twhile (edata_bsize_get(&base->blocks->edata) > QUANTUM) {\n\t\tp = base_alloc(tsdn, base, QUANTUM, QUANTUM);\n\t\texpect_ptr_not_null(p, \"Unexpected base_alloc() failure\");\n\t}\n\tr_exp = edata_addr_get(&base->blocks->edata);\n\texpect_zu_eq(base->extent_sn_next, 1, \"One extant block expected\");\n\tq = base_alloc(tsdn, base, QUANTUM + 1, QUANTUM);\n\texpect_ptr_not_null(q, \"Unexpected base_alloc() failure\");\n\texpect_ptr_ne(q, r_exp, \"Expected allocation from new block\");\n\texpect_zu_eq(base->extent_sn_next, 2, \"Two extant blocks expected\");\n\tr = base_alloc(tsdn, base, QUANTUM, QUANTUM);\n\texpect_ptr_not_null(r, \"Unexpected base_alloc() failure\");\n\texpect_ptr_eq(r, r_exp, \"Expected allocation from first block\");\n\texpect_zu_eq(base->extent_sn_next, 2, \"Two extant blocks expected\");\n\n\t/*\n\t * Check for proper alignment support when normal blocks are too small.\n\t */\n\t{\n\t\tconst size_t alignments[] = {\n\t\t\tHUGEPAGE,\n\t\t\tHUGEPAGE << 1\n\t\t};\n\t\tunsigned i;\n\n\t\tfor (i = 0; i < sizeof(alignments) / sizeof(size_t); i++) {\n\t\t\tsize_t alignment = alignments[i];\n\t\t\tp = base_alloc(tsdn, base, QUANTUM, alignment);\n\t\t\texpect_ptr_not_null(p,\n\t\t\t    \"Unexpected base_alloc() failure\");\n\t\t\texpect_ptr_eq(p,\n\t\t\t    (void *)(ALIGNMENT_CEILING((uintptr_t)p,\n\t\t\t    alignment)), \"Expected %zu-byte alignment\",\n\t\t\t    alignment);\n\t\t}\n\t}\n\n\tcalled_dalloc = called_destroy = called_decommit = called_purge_lazy =\n\t    called_purge_forced = false;\n\tbase_delete(tsdn, base);\n\texpect_true(called_dalloc, \"Expected dalloc call\");\n\texpect_true(!called_destroy, \"Unexpected destroy call\");\n\texpect_true(called_decommit, \"Expected decommit call\");\n\texpect_true(called_purge_lazy, \"Expected purge_lazy call\");\n\texpect_true(called_purge_forced, \"Expected purge_forced call\");\n\n\ttry_dalloc = true;\n\ttry_destroy = true;\n\ttry_decommit = true;\n\ttry_purge_lazy = true;\n\ttry_purge_forced = 
true;\n\tmemcpy(&hooks, &hooks_orig, sizeof(extent_hooks_t));\n}\nTEST_END\n\nTEST_BEGIN(test_base_ehooks_get_for_metadata_default_hook) {\n\textent_hooks_prep();\n\tmemcpy(&hooks, &hooks_not_null, sizeof(extent_hooks_t));\n\tbase_t *base;\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tbase = base_new(tsdn, 0, &hooks, /* metadata_use_hooks */ false);\n\tehooks_t *ehooks = base_ehooks_get_for_metadata(base);\n\texpect_true(ehooks_are_default(ehooks),\n\t\t\"Expected default extent hook functions pointer\");\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\nTEST_BEGIN(test_base_ehooks_get_for_metadata_custom_hook) {\n\textent_hooks_prep();\n\tmemcpy(&hooks, &hooks_not_null, sizeof(extent_hooks_t));\n\tbase_t *base;\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tbase = base_new(tsdn, 0, &hooks, /* metadata_use_hooks */ true);\n\tehooks_t *ehooks = base_ehooks_get_for_metadata(base);\n\texpect_ptr_eq(&hooks, ehooks_get_extent_hooks_ptr(ehooks),\n\t\t\"Expected user-specified extent hook functions pointer\");\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_base_hooks_default,\n\t    test_base_hooks_null,\n\t    test_base_hooks_not_null,\n\t    test_base_ehooks_get_for_metadata_default_hook,\n\t    test_base_ehooks_get_for_metadata_custom_hook);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/batch_alloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define BATCH_MAX ((1U << 16) + 1024)\nstatic void *global_ptrs[BATCH_MAX];\n\n#define PAGE_ALIGNED(ptr) (((uintptr_t)ptr & PAGE_MASK) == 0)\n\nstatic void\nverify_batch_basic(tsd_t *tsd, void **ptrs, size_t batch, size_t usize,\n    bool zero) {\n\tfor (size_t i = 0; i < batch; ++i) {\n\t\tvoid *p = ptrs[i];\n\t\texpect_zu_eq(isalloc(tsd_tsdn(tsd), p), usize, \"\");\n\t\tif (zero) {\n\t\t\tfor (size_t k = 0; k < usize; ++k) {\n\t\t\t\texpect_true(*((unsigned char *)p + k) == 0, \"\");\n\t\t\t}\n\t\t}\n\t}\n}\n\nstatic void\nverify_batch_locality(tsd_t *tsd, void **ptrs, size_t batch, size_t usize,\n    arena_t *arena, unsigned nregs) {\n\tif (config_prof && opt_prof) {\n\t\t/*\n\t\t * Checking batch locality when prof is on is feasible but\n\t\t * complicated, while checking the non-prof case suffices for\n\t\t * unit-test purpose.\n\t\t */\n\t\treturn;\n\t}\n\tfor (size_t i = 0, j = 0; i < batch; ++i, ++j) {\n\t\tif (j == nregs) {\n\t\t\tj = 0;\n\t\t}\n\t\tif (j == 0 && batch - i < nregs) {\n\t\t\tbreak;\n\t\t}\n\t\tvoid *p = ptrs[i];\n\t\texpect_ptr_eq(iaalloc(tsd_tsdn(tsd), p), arena, \"\");\n\t\tif (j == 0) {\n\t\t\texpect_true(PAGE_ALIGNED(p), \"\");\n\t\t\tcontinue;\n\t\t}\n\t\tassert(i > 0);\n\t\tvoid *q = ptrs[i - 1];\n\t\texpect_true((uintptr_t)p > (uintptr_t)q\n\t\t    && (size_t)((uintptr_t)p - (uintptr_t)q) == usize, \"\");\n\t}\n}\n\nstatic void\nrelease_batch(void **ptrs, size_t batch, size_t size) {\n\tfor (size_t i = 0; i < batch; ++i) {\n\t\tsdallocx(ptrs[i], size, 0);\n\t}\n}\n\ntypedef struct batch_alloc_packet_s batch_alloc_packet_t;\nstruct batch_alloc_packet_s {\n\tvoid **ptrs;\n\tsize_t num;\n\tsize_t size;\n\tint flags;\n};\n\nstatic size_t\nbatch_alloc_wrapper(void **ptrs, size_t num, size_t size, int flags) {\n\tbatch_alloc_packet_t batch_alloc_packet = {ptrs, num, size, flags};\n\tsize_t filled;\n\tsize_t len = sizeof(size_t);\n\tassert_d_eq(mallctl(\"experimental.batch_alloc\", 
&filled, &len,\n\t    &batch_alloc_packet, sizeof(batch_alloc_packet)), 0, \"\");\n\treturn filled;\n}\n\nstatic void\ntest_wrapper(size_t size, size_t alignment, bool zero, unsigned arena_flag) {\n\ttsd_t *tsd = tsd_fetch();\n\tassert(tsd != NULL);\n\tconst size_t usize =\n\t    (alignment != 0 ? sz_sa2u(size, alignment) : sz_s2u(size));\n\tconst szind_t ind = sz_size2index(usize);\n\tconst bin_info_t *bin_info = &bin_infos[ind];\n\tconst unsigned nregs = bin_info->nregs;\n\tassert(nregs > 0);\n\tarena_t *arena;\n\tif (arena_flag != 0) {\n\t\tarena = arena_get(tsd_tsdn(tsd), MALLOCX_ARENA_GET(arena_flag),\n\t\t    false);\n\t} else {\n\t\tarena = arena_choose(tsd, NULL);\n\t}\n\tassert(arena != NULL);\n\tint flags = arena_flag;\n\tif (alignment != 0) {\n\t\tflags |= MALLOCX_ALIGN(alignment);\n\t}\n\tif (zero) {\n\t\tflags |= MALLOCX_ZERO;\n\t}\n\n\t/*\n\t * Allocate for the purpose of bootstrapping arena_tdata, so that the\n\t * change in bin stats won't contaminate the stats to be verified below.\n\t */\n\tvoid *p = mallocx(size, flags | MALLOCX_TCACHE_NONE);\n\n\tfor (size_t i = 0; i < 4; ++i) {\n\t\tsize_t base = 0;\n\t\tif (i == 1) {\n\t\t\tbase = nregs;\n\t\t} else if (i == 2) {\n\t\t\tbase = nregs * 2;\n\t\t} else if (i == 3) {\n\t\t\tbase = (1 << 16);\n\t\t}\n\t\tfor (int j = -1; j <= 1; ++j) {\n\t\t\tif (base == 0 && j == -1) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tsize_t batch = base + (size_t)j;\n\t\t\tassert(batch < BATCH_MAX);\n\t\t\tsize_t filled = batch_alloc_wrapper(global_ptrs, batch,\n\t\t\t    size, flags);\n\t\t\tassert_zu_eq(filled, batch, \"\");\n\t\t\tverify_batch_basic(tsd, global_ptrs, batch, usize,\n\t\t\t    zero);\n\t\t\tverify_batch_locality(tsd, global_ptrs, batch, usize,\n\t\t\t    arena, nregs);\n\t\t\trelease_batch(global_ptrs, batch, usize);\n\t\t}\n\t}\n\n\tfree(p);\n}\n\nTEST_BEGIN(test_batch_alloc) {\n\ttest_wrapper(11, 0, false, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_batch_alloc_zero) {\n\ttest_wrapper(11, 0, true, 
0);\n}\nTEST_END\n\nTEST_BEGIN(test_batch_alloc_aligned) {\n\ttest_wrapper(7, 16, false, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_batch_alloc_manual_arena) {\n\tunsigned arena_ind;\n\tsize_t len_unsigned = sizeof(unsigned);\n\tassert_d_eq(mallctl(\"arenas.create\", &arena_ind, &len_unsigned, NULL,\n\t    0), 0, \"\");\n\ttest_wrapper(11, 0, false, MALLOCX_ARENA(arena_ind));\n}\nTEST_END\n\nTEST_BEGIN(test_batch_alloc_large) {\n\tsize_t size = SC_LARGE_MINCLASS;\n\tfor (size_t batch = 0; batch < 4; ++batch) {\n\t\tassert(batch < BATCH_MAX);\n\t\tsize_t filled = batch_alloc(global_ptrs, batch, size, 0);\n\t\tassert_zu_eq(filled, batch, \"\");\n\t\trelease_batch(global_ptrs, batch, size);\n\t}\n\tsize = tcache_maxclass + 1;\n\tfor (size_t batch = 0; batch < 4; ++batch) {\n\t\tassert(batch < BATCH_MAX);\n\t\tsize_t filled = batch_alloc(global_ptrs, batch, size, 0);\n\t\tassert_zu_eq(filled, batch, \"\");\n\t\trelease_batch(global_ptrs, batch, size);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_batch_alloc,\n\t    test_batch_alloc_zero,\n\t    test_batch_alloc_aligned,\n\t    test_batch_alloc_manual_arena,\n\t    test_batch_alloc_large);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/batch_alloc.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"tcache_gc_incr_bytes:2147483648\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/batch_alloc_prof.c",
    "content": "#include \"batch_alloc.c\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/batch_alloc_prof.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"prof:true,lg_prof_sample:14\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/binshard.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/* Config -- \"narenas:1,bin_shards:1-160:16|129-512:4|256-256:8\" */\n\n#define NTHREADS 16\n#define REMOTE_NALLOC 256\n\nstatic void *\nthd_producer(void *varg) {\n\tvoid **mem = varg;\n\tunsigned arena, i;\n\tsize_t sz;\n\n\tsz = sizeof(arena);\n\t/* Remote arena. */\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\tfor (i = 0; i < REMOTE_NALLOC / 2; i++) {\n\t\tmem[i] = mallocx(1, MALLOCX_TCACHE_NONE | MALLOCX_ARENA(arena));\n\t}\n\n\t/* Remote bin. */\n\tfor (; i < REMOTE_NALLOC; i++) {\n\t\tmem[i] = mallocx(1, MALLOCX_TCACHE_NONE | MALLOCX_ARENA(0));\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_producer_consumer) {\n\tthd_t thds[NTHREADS];\n\tvoid *mem[NTHREADS][REMOTE_NALLOC];\n\tunsigned i;\n\n\t/* Create producer threads to allocate. */\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_create(&thds[i], thd_producer, mem[i]);\n\t}\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n\t/* Remote deallocation by the current thread. */\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tfor (unsigned j = 0; j < REMOTE_NALLOC; j++) {\n\t\t\texpect_ptr_not_null(mem[i][j],\n\t\t\t    \"Unexpected remote allocation failure\");\n\t\t\tdallocx(mem[i][j], 0);\n\t\t}\n\t}\n}\nTEST_END\n\nstatic void *\nthd_start(void *varg) {\n\tvoid *ptr, *ptr2;\n\tedata_t *edata;\n\tunsigned shard1, shard2;\n\n\ttsdn_t *tsdn = tsdn_fetch();\n\t/* Try triggering allocations from sharded bins. 
*/\n\tfor (unsigned i = 0; i < 1024; i++) {\n\t\tptr = mallocx(1, MALLOCX_TCACHE_NONE);\n\t\tptr2 = mallocx(129, MALLOCX_TCACHE_NONE);\n\n\t\tedata = emap_edata_lookup(tsdn, &arena_emap_global, ptr);\n\t\tshard1 = edata_binshard_get(edata);\n\t\tdallocx(ptr, 0);\n\t\texpect_u_lt(shard1, 16, \"Unexpected bin shard used\");\n\n\t\tedata = emap_edata_lookup(tsdn, &arena_emap_global, ptr2);\n\t\tshard2 = edata_binshard_get(edata);\n\t\tdallocx(ptr2, 0);\n\t\texpect_u_lt(shard2, 4, \"Unexpected bin shard used\");\n\n\t\tif (shard1 > 0 || shard2 > 0) {\n\t\t\t/* Triggered sharded bin usage. */\n\t\t\treturn (void *)(uintptr_t)shard1;\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_bin_shard_mt) {\n\ttest_skip_if(have_percpu_arena &&\n\t    PERCPU_ARENA_ENABLED(opt_percpu_arena));\n\n\tthd_t thds[NTHREADS];\n\tunsigned i;\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_create(&thds[i], thd_start, NULL);\n\t}\n\tbool sharded = false;\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tvoid *ret;\n\t\tthd_join(thds[i], &ret);\n\t\tif (ret != NULL) {\n\t\t\tsharded = true;\n\t\t}\n\t}\n\texpect_b_eq(sharded, true, \"Did not find sharded bins\");\n}\nTEST_END\n\nTEST_BEGIN(test_bin_shard) {\n\tunsigned nbins, i;\n\tsize_t mib[4], mib2[4];\n\tsize_t miblen, miblen2, len;\n\n\tlen = sizeof(nbins);\n\texpect_d_eq(mallctl(\"arenas.nbins\", (void *)&nbins, &len, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tmiblen = 4;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.nshards\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmiblen2 = 4;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.size\", mib2, &miblen2), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\n\tfor (i = 0; i < nbins; i++) {\n\t\tuint32_t nshards;\n\t\tsize_t size, sz1, sz2;\n\n\t\tmib[2] = i;\n\t\tsz1 = sizeof(nshards);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&nshards, &sz1,\n\t\t    NULL, 0), 0, \"Unexpected mallctlbymib() failure\");\n\n\t\tmib2[2] = i;\n\t\tsz2 = 
sizeof(size);\n\t\texpect_d_eq(mallctlbymib(mib2, miblen2, (void *)&size, &sz2,\n\t\t    NULL, 0), 0, \"Unexpected mallctlbymib() failure\");\n\n\t\tif (size >= 1 && size <= 128) {\n\t\t\texpect_u_eq(nshards, 16, \"Unexpected nshards\");\n\t\t} else if (size == 256) {\n\t\t\texpect_u_eq(nshards, 8, \"Unexpected nshards\");\n\t\t} else if (size > 128 && size <= 512) {\n\t\t\texpect_u_eq(nshards, 4, \"Unexpected nshards\");\n\t\t} else {\n\t\t\texpect_u_eq(nshards, 1, \"Unexpected nshards\");\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_bin_shard,\n\t    test_bin_shard_mt,\n\t    test_producer_consumer);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/binshard.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"narenas:1,bin_shards:1-160:16|129-512:4|256-256:8\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/bit_util.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/bit_util.h\"\n\n#define TEST_POW2_CEIL(t, suf, pri) do {\t\t\t\t\\\n\tunsigned i, pow2;\t\t\t\t\t\t\\\n\tt x;\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\texpect_##suf##_eq(pow2_ceil_##suf(0), 0, \"Unexpected result\");\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tfor (i = 0; i < sizeof(t) * 8; i++) {\t\t\t\t\\\n\t\texpect_##suf##_eq(pow2_ceil_##suf(((t)1) << i), ((t)1)\t\\\n\t\t    << i, \"Unexpected result\");\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tfor (i = 2; i < sizeof(t) * 8; i++) {\t\t\t\t\\\n\t\texpect_##suf##_eq(pow2_ceil_##suf((((t)1) << i) - 1),\t\\\n\t\t    ((t)1) << i, \"Unexpected result\");\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tfor (i = 0; i < sizeof(t) * 8 - 1; i++) {\t\t\t\\\n\t\texpect_##suf##_eq(pow2_ceil_##suf((((t)1) << i) + 1),\t\\\n\t\t    ((t)1) << (i+1), \"Unexpected result\");\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n\t\t\t\t\t\t\t\t\t\\\n\tfor (pow2 = 1; pow2 < 25; pow2++) {\t\t\t\t\\\n\t\tfor (x = (((t)1) << (pow2-1)) + 1; x <= ((t)1) << pow2;\t\\\n\t\t    x++) {\t\t\t\t\t\t\\\n\t\t\texpect_##suf##_eq(pow2_ceil_##suf(x),\t\t\\\n\t\t\t    ((t)1) << pow2,\t\t\t\t\\\n\t\t\t    \"Unexpected result, x=%\"pri, x);\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\nTEST_BEGIN(test_pow2_ceil_u64) {\n\tTEST_POW2_CEIL(uint64_t, u64, FMTu64);\n}\nTEST_END\n\nTEST_BEGIN(test_pow2_ceil_u32) {\n\tTEST_POW2_CEIL(uint32_t, u32, FMTu32);\n}\nTEST_END\n\nTEST_BEGIN(test_pow2_ceil_zu) {\n\tTEST_POW2_CEIL(size_t, zu, \"zu\");\n}\nTEST_END\n\nvoid\nexpect_lg_ceil_range(size_t input, unsigned answer) {\n\tif (input == 1) {\n\t\texpect_u_eq(0, answer, \"Got %u as lg_ceil of 1\", answer);\n\t\treturn;\n\t}\n\texpect_zu_le(input, (ZU(1) << answer),\n\t    \"Got %u as lg_ceil of %zu\", answer, input);\n\texpect_zu_gt(input, (ZU(1) << (answer - 1)),\n\t    \"Got %u as lg_ceil of %zu\", answer, input);\n}\n\nvoid\nexpect_lg_floor_range(size_t input, unsigned 
answer) {\n\tif (input == 1) {\n\t\texpect_u_eq(0, answer, \"Got %u as lg_floor of 1\", answer);\n\t\treturn;\n\t}\n\texpect_zu_ge(input, (ZU(1) << answer),\n\t    \"Got %u as lg_floor of %zu\", answer, input);\n\texpect_zu_lt(input, (ZU(1) << (answer + 1)),\n\t    \"Got %u as lg_floor of %zu\", answer, input);\n}\n\nTEST_BEGIN(test_lg_ceil_floor) {\n\tfor (size_t i = 1; i < 10 * 1000 * 1000; i++) {\n\t\texpect_lg_ceil_range(i, lg_ceil(i));\n\t\texpect_lg_ceil_range(i, LG_CEIL(i));\n\t\texpect_lg_floor_range(i, lg_floor(i));\n\t\texpect_lg_floor_range(i, LG_FLOOR(i));\n\t}\n\tfor (int i = 10; i < 8 * (1 << LG_SIZEOF_PTR) - 5; i++) {\n\t\tfor (size_t j = 0; j < (1 << 4); j++) {\n\t\t\tsize_t num1 = ((size_t)1 << i)\n\t\t\t    - j * ((size_t)1 << (i - 4));\n\t\t\tsize_t num2 = ((size_t)1 << i)\n\t\t\t    + j * ((size_t)1 << (i - 4));\n\t\t\texpect_zu_ne(num1, 0, \"Invalid lg argument\");\n\t\t\texpect_zu_ne(num2, 0, \"Invalid lg argument\");\n\t\t\texpect_lg_ceil_range(num1, lg_ceil(num1));\n\t\t\texpect_lg_ceil_range(num1, LG_CEIL(num1));\n\t\t\texpect_lg_ceil_range(num2, lg_ceil(num2));\n\t\t\texpect_lg_ceil_range(num2, LG_CEIL(num2));\n\n\t\t\texpect_lg_floor_range(num1, lg_floor(num1));\n\t\t\texpect_lg_floor_range(num1, LG_FLOOR(num1));\n\t\t\texpect_lg_floor_range(num2, lg_floor(num2));\n\t\t\texpect_lg_floor_range(num2, LG_FLOOR(num2));\n\t\t}\n\t}\n}\nTEST_END\n\n#define TEST_FFS(t, suf, test_suf, pri) do {\t\t\t\t\\\n\tfor (unsigned i = 0; i < sizeof(t) * 8; i++) {\t\t\t\\\n\t\tfor (unsigned j = 0; j <= i; j++) {\t\t\t\\\n\t\t\tfor (unsigned k = 0; k <= j; k++) {\t\t\\\n\t\t\t\tt x = (t)1 << i;\t\t\t\\\n\t\t\t\tx |= (t)1 << j;\t\t\t\t\\\n\t\t\t\tx |= (t)1 << k;\t\t\t\t\\\n\t\t\t\texpect_##test_suf##_eq(ffs_##suf(x), k,\t\\\n\t\t\t\t    \"Unexpected result, x=%\"pri, x);\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while(0)\n\nTEST_BEGIN(test_ffs_u) {\n\tTEST_FFS(unsigned, u, u,\"u\");\n}\nTEST_END\n\nTEST_BEGIN(test_ffs_lu) 
{\n\tTEST_FFS(unsigned long, lu, lu, \"lu\");\n}\nTEST_END\n\nTEST_BEGIN(test_ffs_llu) {\n\tTEST_FFS(unsigned long long, llu, qd, \"llu\");\n}\nTEST_END\n\nTEST_BEGIN(test_ffs_u32) {\n\tTEST_FFS(uint32_t, u32, u32, FMTu32);\n}\nTEST_END\n\nTEST_BEGIN(test_ffs_u64) {\n\tTEST_FFS(uint64_t, u64, u64, FMTu64);\n}\nTEST_END\n\nTEST_BEGIN(test_ffs_zu) {\n\tTEST_FFS(size_t, zu, zu, \"zu\");\n}\nTEST_END\n\n#define TEST_FLS(t, suf, test_suf, pri) do {\t\t\t\t\\\n\tfor (unsigned i = 0; i < sizeof(t) * 8; i++) {\t\t\t\\\n\t\tfor (unsigned j = 0; j <= i; j++) {\t\t\t\\\n\t\t\tfor (unsigned k = 0; k <= j; k++) {\t\t\\\n\t\t\t\tt x = (t)1 << i;\t\t\t\\\n\t\t\t\tx |= (t)1 << j;\t\t\t\t\\\n\t\t\t\tx |= (t)1 << k;\t\t\t\t\\\n\t\t\t\texpect_##test_suf##_eq(fls_##suf(x), i,\t\\\n\t\t\t\t    \"Unexpected result, x=%\"pri, x);\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while(0)\n\nTEST_BEGIN(test_fls_u) {\n\tTEST_FLS(unsigned, u, u,\"u\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_lu) {\n\tTEST_FLS(unsigned long, lu, lu, \"lu\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_llu) {\n\tTEST_FLS(unsigned long long, llu, qd, \"llu\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_u32) {\n\tTEST_FLS(uint32_t, u32, u32, FMTu32);\n}\nTEST_END\n\nTEST_BEGIN(test_fls_u64) {\n\tTEST_FLS(uint64_t, u64, u64, FMTu64);\n}\nTEST_END\n\nTEST_BEGIN(test_fls_zu) {\n\tTEST_FLS(size_t, zu, zu, \"zu\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_u_slow) {\n\tTEST_FLS(unsigned, u_slow, u,\"u\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_lu_slow) {\n\tTEST_FLS(unsigned long, lu_slow, lu, \"lu\");\n}\nTEST_END\n\nTEST_BEGIN(test_fls_llu_slow) {\n\tTEST_FLS(unsigned long long, llu_slow, qd, \"llu\");\n}\nTEST_END\n\nstatic unsigned\npopcount_byte(unsigned byte) {\n\tint count = 0;\n\tfor (int i = 0; i < 8; i++) {\n\t\tif ((byte & (1 << i)) != 0) {\n\t\t\tcount++;\n\t\t}\n\t}\n\treturn count;\n}\n\nstatic uint64_t\nexpand_byte_to_mask(unsigned byte) {\n\tuint64_t result = 0;\n\tfor (int i = 0; i < 8; i++) {\n\t\tif 
((byte & (1 << i)) != 0) {\n\t\t\tresult |= ((uint64_t)0xFF << (i * 8));\n\t\t}\n\t}\n\treturn result;\n}\n\n#define TEST_POPCOUNT(t, suf, pri_hex) do {\t\t\t\t\\\n\tt bmul = (t)0x0101010101010101ULL;\t\t\t\t\\\n\tfor (unsigned i = 0; i < (1 << sizeof(t)); i++) {\t\t\\\n\t\tfor (unsigned j = 0; j < 256; j++) {\t\t\t\\\n\t\t\t/*\t\t\t\t\t\t\\\n\t\t\t * Replicate the byte j into various\t\t\\\n\t\t\t * bytes of the integer (as indicated by the\t\\\n\t\t\t * mask in i), and ensure that the popcount of\t\\\n\t\t\t * the result is popcount(i) * popcount(j)\t\\\n\t\t\t */\t\t\t\t\t\t\\\n\t\t\tt mask = (t)expand_byte_to_mask(i);\t\t\\\n\t\t\tt x = (bmul * j) & mask;\t\t\t\\\n\t\t\texpect_u_eq(\t\t\t\t\t\\\n\t\t\t    popcount_byte(i) * popcount_byte(j),\t\\\n\t\t\t    popcount_##suf(x),\t\t\t\t\\\n\t\t\t    \"Unexpected result, x=0x%\"pri_hex, x);\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\nTEST_BEGIN(test_popcount_u) {\n\tTEST_POPCOUNT(unsigned, u, \"x\");\n}\nTEST_END\n\nTEST_BEGIN(test_popcount_u_slow) {\n\tTEST_POPCOUNT(unsigned, u_slow, \"x\");\n}\nTEST_END\n\nTEST_BEGIN(test_popcount_lu) {\n\tTEST_POPCOUNT(unsigned long, lu, \"lx\");\n}\nTEST_END\n\nTEST_BEGIN(test_popcount_lu_slow) {\n\tTEST_POPCOUNT(unsigned long, lu_slow, \"lx\");\n}\nTEST_END\n\nTEST_BEGIN(test_popcount_llu) {\n\tTEST_POPCOUNT(unsigned long long, llu, \"llx\");\n}\nTEST_END\n\nTEST_BEGIN(test_popcount_llu_slow) {\n\tTEST_POPCOUNT(unsigned long long, llu_slow, \"llx\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_pow2_ceil_u64,\n\t    test_pow2_ceil_u32,\n\t    test_pow2_ceil_zu,\n\t    test_lg_ceil_floor,\n\t    test_ffs_u,\n\t    test_ffs_lu,\n\t    test_ffs_llu,\n\t    test_ffs_u32,\n\t    test_ffs_u64,\n\t    test_ffs_zu,\n\t    test_fls_u,\n\t    test_fls_lu,\n\t    test_fls_llu,\n\t    test_fls_u32,\n\t    test_fls_u64,\n\t    test_fls_zu,\n\t    test_fls_u_slow,\n\t    test_fls_lu_slow,\n\t    test_fls_llu_slow,\n\t    
test_popcount_u,\n\t    test_popcount_u_slow,\n\t    test_popcount_lu,\n\t    test_popcount_lu_slow,\n\t    test_popcount_llu,\n\t    test_popcount_llu_slow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/bitmap.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"test/nbits.h\"\n\nstatic void\ntest_bitmap_initializer_body(const bitmap_info_t *binfo, size_t nbits) {\n\tbitmap_info_t binfo_dyn;\n\tbitmap_info_init(&binfo_dyn, nbits);\n\n\texpect_zu_eq(bitmap_size(binfo), bitmap_size(&binfo_dyn),\n\t    \"Unexpected difference between static and dynamic initialization, \"\n\t    \"nbits=%zu\", nbits);\n\texpect_zu_eq(binfo->nbits, binfo_dyn.nbits,\n\t    \"Unexpected difference between static and dynamic initialization, \"\n\t    \"nbits=%zu\", nbits);\n#ifdef BITMAP_USE_TREE\n\texpect_u_eq(binfo->nlevels, binfo_dyn.nlevels,\n\t    \"Unexpected difference between static and dynamic initialization, \"\n\t    \"nbits=%zu\", nbits);\n\t{\n\t\tunsigned i;\n\n\t\tfor (i = 0; i < binfo->nlevels; i++) {\n\t\t\texpect_zu_eq(binfo->levels[i].group_offset,\n\t\t\t    binfo_dyn.levels[i].group_offset,\n\t\t\t    \"Unexpected difference between static and dynamic \"\n\t\t\t    \"initialization, nbits=%zu, level=%u\", nbits, i);\n\t\t}\n\t}\n#else\n\texpect_zu_eq(binfo->ngroups, binfo_dyn.ngroups,\n\t    \"Unexpected difference between static and dynamic initialization\");\n#endif\n}\n\nTEST_BEGIN(test_bitmap_initializer) {\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tif (nbits <= BITMAP_MAXBITS) {\t\t\t\t\\\n\t\t\tbitmap_info_t binfo =\t\t\t\t\\\n\t\t\t    BITMAP_INFO_INITIALIZER(nbits);\t\t\\\n\t\t\ttest_bitmap_initializer_body(&binfo, nbits);\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic size_t\ntest_bitmap_size_body(const bitmap_info_t *binfo, size_t nbits,\n    size_t prev_size) {\n\tsize_t size = bitmap_size(binfo);\n\texpect_zu_ge(size, (nbits >> 3),\n\t    \"Bitmap size is smaller than expected\");\n\texpect_zu_ge(size, prev_size, \"Bitmap size is smaller than expected\");\n\treturn size;\n}\n\nTEST_BEGIN(test_bitmap_size) {\n\tsize_t nbits, prev_size;\n\n\tprev_size = 0;\n\tfor (nbits = 1; nbits <= BITMAP_MAXBITS; nbits++) 
{\n\t\tbitmap_info_t binfo;\n\t\tbitmap_info_init(&binfo, nbits);\n\t\tprev_size = test_bitmap_size_body(&binfo, nbits, prev_size);\n\t}\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tbitmap_info_t binfo = BITMAP_INFO_INITIALIZER(nbits);\t\\\n\t\tprev_size = test_bitmap_size_body(&binfo, nbits,\t\\\n\t\t    prev_size);\t\t\t\t\t\t\\\n\t}\n\tprev_size = 0;\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ntest_bitmap_init_body(const bitmap_info_t *binfo, size_t nbits) {\n\tsize_t i;\n\tbitmap_t *bitmap = (bitmap_t *)malloc(bitmap_size(binfo));\n\texpect_ptr_not_null(bitmap, \"Unexpected malloc() failure\");\n\n\tbitmap_init(bitmap, binfo, false);\n\tfor (i = 0; i < nbits; i++) {\n\t\texpect_false(bitmap_get(bitmap, binfo, i),\n\t\t    \"Bit should be unset\");\n\t}\n\n\tbitmap_init(bitmap, binfo, true);\n\tfor (i = 0; i < nbits; i++) {\n\t\texpect_true(bitmap_get(bitmap, binfo, i), \"Bit should be set\");\n\t}\n\n\tfree(bitmap);\n}\n\nTEST_BEGIN(test_bitmap_init) {\n\tsize_t nbits;\n\n\tfor (nbits = 1; nbits <= BITMAP_MAXBITS; nbits++) {\n\t\tbitmap_info_t binfo;\n\t\tbitmap_info_init(&binfo, nbits);\n\t\ttest_bitmap_init_body(&binfo, nbits);\n\t}\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tbitmap_info_t binfo = BITMAP_INFO_INITIALIZER(nbits);\t\\\n\t\ttest_bitmap_init_body(&binfo, nbits);\t\t\t\\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ntest_bitmap_set_body(const bitmap_info_t *binfo, size_t nbits) {\n\tsize_t i;\n\tbitmap_t *bitmap = (bitmap_t *)malloc(bitmap_size(binfo));\n\texpect_ptr_not_null(bitmap, \"Unexpected malloc() failure\");\n\tbitmap_init(bitmap, binfo, false);\n\n\tfor (i = 0; i < nbits; i++) {\n\t\tbitmap_set(bitmap, binfo, i);\n\t}\n\texpect_true(bitmap_full(bitmap, binfo), \"All bits should be set\");\n\tfree(bitmap);\n}\n\nTEST_BEGIN(test_bitmap_set) {\n\tsize_t nbits;\n\n\tfor (nbits = 1; nbits <= BITMAP_MAXBITS; nbits++) {\n\t\tbitmap_info_t binfo;\n\t\tbitmap_info_init(&binfo, nbits);\n\t\ttest_bitmap_set_body(&binfo, 
nbits);\n\t}\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tbitmap_info_t binfo = BITMAP_INFO_INITIALIZER(nbits);\t\\\n\t\ttest_bitmap_set_body(&binfo, nbits);\t\t\t\\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ntest_bitmap_unset_body(const bitmap_info_t *binfo, size_t nbits) {\n\tsize_t i;\n\tbitmap_t *bitmap = (bitmap_t *)malloc(bitmap_size(binfo));\n\texpect_ptr_not_null(bitmap, \"Unexpected malloc() failure\");\n\tbitmap_init(bitmap, binfo, false);\n\n\tfor (i = 0; i < nbits; i++) {\n\t\tbitmap_set(bitmap, binfo, i);\n\t}\n\texpect_true(bitmap_full(bitmap, binfo), \"All bits should be set\");\n\tfor (i = 0; i < nbits; i++) {\n\t\tbitmap_unset(bitmap, binfo, i);\n\t}\n\tfor (i = 0; i < nbits; i++) {\n\t\tbitmap_set(bitmap, binfo, i);\n\t}\n\texpect_true(bitmap_full(bitmap, binfo), \"All bits should be set\");\n\tfree(bitmap);\n}\n\nTEST_BEGIN(test_bitmap_unset) {\n\tsize_t nbits;\n\n\tfor (nbits = 1; nbits <= BITMAP_MAXBITS; nbits++) {\n\t\tbitmap_info_t binfo;\n\t\tbitmap_info_init(&binfo, nbits);\n\t\ttest_bitmap_unset_body(&binfo, nbits);\n\t}\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tbitmap_info_t binfo = BITMAP_INFO_INITIALIZER(nbits);\t\\\n\t\ttest_bitmap_unset_body(&binfo, nbits);\t\t\t\\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ntest_bitmap_xfu_body(const bitmap_info_t *binfo, size_t nbits) {\n\tbitmap_t *bitmap = (bitmap_t *)malloc(bitmap_size(binfo));\n\texpect_ptr_not_null(bitmap, \"Unexpected malloc() failure\");\n\tbitmap_init(bitmap, binfo, false);\n\n\t/* Iteratively set bits starting at the beginning. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, 0), i,\n\t\t    \"First unset bit should be just after previous first unset \"\n\t\t    \"bit\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, (i > 0) ? 
i-1 : i), i,\n\t\t    \"First unset bit should be just after previous first unset \"\n\t\t    \"bit\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i), i,\n\t\t    \"First unset bit should be just after previous first unset \"\n\t\t    \"bit\");\n\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i,\n\t\t    \"First unset bit should be just after previous first unset \"\n\t\t    \"bit\");\n\t}\n\texpect_true(bitmap_full(bitmap, binfo), \"All bits should be set\");\n\n\t/*\n\t * Iteratively unset bits starting at the end, and verify that\n\t * bitmap_sfu() reaches the unset bits.\n\t */\n\tfor (size_t i = nbits - 1; i < nbits; i--) { /* (nbits..0] */\n\t\tbitmap_unset(bitmap, binfo, i);\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, 0), i,\n\t\t    \"First unset bit should the bit previously unset\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, (i > 0) ? i-1 : i), i,\n\t\t    \"First unset bit should the bit previously unset\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i), i,\n\t\t    \"First unset bit should the bit previously unset\");\n\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i,\n\t\t    \"First unset bit should the bit previously unset\");\n\t\tbitmap_unset(bitmap, binfo, i);\n\t}\n\texpect_false(bitmap_get(bitmap, binfo, 0), \"Bit should be unset\");\n\n\t/*\n\t * Iteratively set bits starting at the beginning, and verify that\n\t * bitmap_sfu() looks past them.\n\t */\n\tfor (size_t i = 1; i < nbits; i++) {\n\t\tbitmap_set(bitmap, binfo, i - 1);\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, 0), i,\n\t\t    \"First unset bit should be just after the bit previously \"\n\t\t    \"set\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, (i > 0) ? 
i-1 : i), i,\n\t\t    \"First unset bit should be just after the bit previously \"\n\t\t    \"set\");\n\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i), i,\n\t\t    \"First unset bit should be just after the bit previously \"\n\t\t    \"set\");\n\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i,\n\t\t    \"First unset bit should be just after the bit previously \"\n\t\t    \"set\");\n\t\tbitmap_unset(bitmap, binfo, i);\n\t}\n\texpect_zu_eq(bitmap_ffu(bitmap, binfo, 0), nbits - 1,\n\t    \"First unset bit should be the last bit\");\n\texpect_zu_eq(bitmap_ffu(bitmap, binfo, (nbits > 1) ? nbits-2 : nbits-1),\n\t    nbits - 1, \"First unset bit should be the last bit\");\n\texpect_zu_eq(bitmap_ffu(bitmap, binfo, nbits - 1), nbits - 1,\n\t    \"First unset bit should be the last bit\");\n\texpect_zu_eq(bitmap_sfu(bitmap, binfo), nbits - 1,\n\t    \"First unset bit should be the last bit\");\n\texpect_true(bitmap_full(bitmap, binfo), \"All bits should be set\");\n\n\t/*\n\t * Bubble a \"usu\" pattern through the bitmap and verify that\n\t * bitmap_ffu() finds the correct bit for all five min_bit cases.\n\t */\n\tif (nbits >= 3) {\n\t\tfor (size_t i = 0; i < nbits-2; i++) {\n\t\t\tbitmap_unset(bitmap, binfo, i);\n\t\t\tbitmap_unset(bitmap, binfo, i+2);\n\t\t\tif (i > 0) {\n\t\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i-1), i,\n\t\t\t\t    \"Unexpected first unset bit\");\n\t\t\t}\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i), i,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i+1), i+2,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i+2), i+2,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\tif (i + 3 < nbits) {\n\t\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i+3),\n\t\t\t\t    nbits, \"Unexpected first unset bit\");\n\t\t\t}\n\t\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i+2,\n\t\t\t    
\"Unexpected first unset bit\");\n\t\t}\n\t}\n\n\t/*\n\t * Unset the last bit, bubble another unset bit through the bitmap, and\n\t * verify that bitmap_ffu() finds the correct bit for all four min_bit\n\t * cases.\n\t */\n\tif (nbits >= 3) {\n\t\tbitmap_unset(bitmap, binfo, nbits-1);\n\t\tfor (size_t i = 0; i < nbits-1; i++) {\n\t\t\tbitmap_unset(bitmap, binfo, i);\n\t\t\tif (i > 0) {\n\t\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i-1), i,\n\t\t\t\t    \"Unexpected first unset bit\");\n\t\t\t}\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i), i,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, i+1), nbits-1,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t\texpect_zu_eq(bitmap_ffu(bitmap, binfo, nbits-1),\n\t\t\t    nbits-1, \"Unexpected first unset bit\");\n\n\t\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), i,\n\t\t\t    \"Unexpected first unset bit\");\n\t\t}\n\t\texpect_zu_eq(bitmap_sfu(bitmap, binfo), nbits-1,\n\t\t    \"Unexpected first unset bit\");\n\t}\n\n\tfree(bitmap);\n}\n\nTEST_BEGIN(test_bitmap_xfu) {\n\tsize_t nbits, nbits_max;\n\n\t/* The test is O(n^2); large page sizes may slow down too much. */\n\tnbits_max = BITMAP_MAXBITS > 512 ? 512 : BITMAP_MAXBITS;\n\tfor (nbits = 1; nbits <= nbits_max; nbits++) {\n\t\tbitmap_info_t binfo;\n\t\tbitmap_info_init(&binfo, nbits);\n\t\ttest_bitmap_xfu_body(&binfo, nbits);\n\t}\n#define NB(nbits) {\t\t\t\t\t\t\t\\\n\t\tbitmap_info_t binfo = BITMAP_INFO_INITIALIZER(nbits);\t\\\n\t\ttest_bitmap_xfu_body(&binfo, nbits);\t\t\t\\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_bitmap_initializer,\n\t    test_bitmap_size,\n\t    test_bitmap_init,\n\t    test_bitmap_set,\n\t    test_bitmap_unset,\n\t    test_bitmap_xfu);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/buf_writer.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/buf_writer.h\"\n\n#define TEST_BUF_SIZE 16\n#define UNIT_MAX (TEST_BUF_SIZE * 3)\n\nstatic size_t test_write_len;\nstatic char test_buf[TEST_BUF_SIZE];\nstatic uint64_t arg;\nstatic uint64_t arg_store;\n\nstatic void\ntest_write_cb(void *cbopaque, const char *s) {\n\tsize_t prev_test_write_len = test_write_len;\n\ttest_write_len += strlen(s); /* only increase the length */\n\targ_store = *(uint64_t *)cbopaque; /* only pass along the argument */\n\tassert_zu_le(prev_test_write_len, test_write_len,\n\t    \"Test write overflowed\");\n}\n\nstatic void\ntest_buf_writer_body(tsdn_t *tsdn, buf_writer_t *buf_writer) {\n\tchar s[UNIT_MAX + 1];\n\tsize_t n_unit, remain, i;\n\tssize_t unit;\n\n\tassert(buf_writer->buf != NULL);\n\tmemset(s, 'a', UNIT_MAX);\n\targ = 4; /* Starting value of random argument. */\n\targ_store = arg;\n\tfor (unit = UNIT_MAX; unit >= 0; --unit) {\n\t\t/* unit keeps decreasing, so strlen(s) is always unit. */\n\t\ts[unit] = '\\0';\n\t\tfor (n_unit = 1; n_unit <= 3; ++n_unit) {\n\t\t\ttest_write_len = 0;\n\t\t\tremain = 0;\n\t\t\tfor (i = 1; i <= n_unit; ++i) {\n\t\t\t\targ = prng_lg_range_u64(&arg, 64);\n\t\t\t\tbuf_writer_cb(buf_writer, s);\n\t\t\t\tremain += unit;\n\t\t\t\tif (remain > buf_writer->buf_size) {\n\t\t\t\t\t/* Flushes should have happened. */\n\t\t\t\t\tassert_u64_eq(arg_store, arg, \"Call \"\n\t\t\t\t\t    \"back argument didn't get through\");\n\t\t\t\t\tremain %= buf_writer->buf_size;\n\t\t\t\t\tif (remain == 0) {\n\t\t\t\t\t\t/* Last flush should be lazy. 
*/\n\t\t\t\t\t\tremain += buf_writer->buf_size;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tassert_zu_eq(test_write_len + remain, i * unit,\n\t\t\t\t    \"Incorrect length after writing %zu strings\"\n\t\t\t\t    \" of length %zu\", i, unit);\n\t\t\t}\n\t\t\tbuf_writer_flush(buf_writer);\n\t\t\texpect_zu_eq(test_write_len, n_unit * unit,\n\t\t\t    \"Incorrect length after flushing at the end of\"\n\t\t\t    \" writing %zu strings of length %zu\", n_unit, unit);\n\t\t}\n\t}\n\tbuf_writer_terminate(tsdn, buf_writer);\n}\n\nTEST_BEGIN(test_buf_write_static) {\n\tbuf_writer_t buf_writer;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tassert_false(buf_writer_init(tsdn, &buf_writer, test_write_cb, &arg,\n\t    test_buf, TEST_BUF_SIZE),\n\t    \"buf_writer_init() should not encounter error on static buffer\");\n\ttest_buf_writer_body(tsdn, &buf_writer);\n}\nTEST_END\n\nTEST_BEGIN(test_buf_write_dynamic) {\n\tbuf_writer_t buf_writer;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tassert_false(buf_writer_init(tsdn, &buf_writer, test_write_cb, &arg,\n\t    NULL, TEST_BUF_SIZE), \"buf_writer_init() should not OOM\");\n\ttest_buf_writer_body(tsdn, &buf_writer);\n}\nTEST_END\n\nTEST_BEGIN(test_buf_write_oom) {\n\tbuf_writer_t buf_writer;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tassert_true(buf_writer_init(tsdn, &buf_writer, test_write_cb, &arg,\n\t    NULL, SC_LARGE_MAXCLASS + 1), \"buf_writer_init() should OOM\");\n\tassert(buf_writer.buf == NULL);\n\n\tchar s[UNIT_MAX + 1];\n\tsize_t n_unit, i;\n\tssize_t unit;\n\n\tmemset(s, 'a', UNIT_MAX);\n\targ = 4; /* Starting value of random argument. */\n\targ_store = arg;\n\tfor (unit = UNIT_MAX; unit >= 0; unit -= UNIT_MAX / 4) {\n\t\t/* unit keeps decreasing, so strlen(s) is always unit. 
*/\n\t\ts[unit] = '\\0';\n\t\tfor (n_unit = 1; n_unit <= 3; ++n_unit) {\n\t\t\ttest_write_len = 0;\n\t\t\tfor (i = 1; i <= n_unit; ++i) {\n\t\t\t\targ = prng_lg_range_u64(&arg, 64);\n\t\t\t\tbuf_writer_cb(&buf_writer, s);\n\t\t\t\tassert_u64_eq(arg_store, arg,\n\t\t\t\t    \"Call back argument didn't get through\");\n\t\t\t\tassert_zu_eq(test_write_len, i * unit,\n\t\t\t\t    \"Incorrect length after writing %zu strings\"\n\t\t\t\t    \" of length %zu\", i, unit);\n\t\t\t}\n\t\t\tbuf_writer_flush(&buf_writer);\n\t\t\texpect_zu_eq(test_write_len, n_unit * unit,\n\t\t\t    \"Incorrect length after flushing at the end of\"\n\t\t\t    \" writing %zu strings of length %zu\", n_unit, unit);\n\t\t}\n\t}\n\tbuf_writer_terminate(tsdn, &buf_writer);\n}\nTEST_END\n\nstatic int test_read_count;\nstatic size_t test_read_len;\nstatic uint64_t arg_sum;\n\nssize_t\ntest_read_cb(void *cbopaque, void *buf, size_t limit) {\n\tstatic uint64_t rand = 4;\n\n\targ_sum += *(uint64_t *)cbopaque;\n\tassert_zu_gt(limit, 0, \"Limit for read_cb must be positive\");\n\t--test_read_count;\n\tif (test_read_count == 0) {\n\t\treturn -1;\n\t} else {\n\t\tsize_t read_len = limit;\n\t\tif (limit > 1) {\n\t\t\trand = prng_range_u64(&rand, (uint64_t)limit);\n\t\t\tread_len -= (size_t)rand;\n\t\t}\n\t\tassert(read_len > 0);\n\t\tmemset(buf, 'a', read_len);\n\t\tsize_t prev_test_read_len = test_read_len;\n\t\ttest_read_len += read_len;\n\t\tassert_zu_le(prev_test_read_len, test_read_len,\n\t\t    \"Test read overflowed\");\n\t\treturn read_len;\n\t}\n}\n\nstatic void\ntest_buf_writer_pipe_body(tsdn_t *tsdn, buf_writer_t *buf_writer) {\n\targ = 4; /* Starting value of random argument. 
*/\n\tfor (int count = 5; count > 0; --count) {\n\t\targ = prng_lg_range_u64(&arg, 64);\n\t\targ_sum = 0;\n\t\ttest_read_count = count;\n\t\ttest_read_len = 0;\n\t\ttest_write_len = 0;\n\t\tbuf_writer_pipe(buf_writer, test_read_cb, &arg);\n\t\tassert(test_read_count == 0);\n\t\texpect_u64_eq(arg_sum, arg * count, \"\");\n\t\texpect_zu_eq(test_write_len, test_read_len,\n\t\t    \"Write length should be equal to read length\");\n\t}\n\tbuf_writer_terminate(tsdn, buf_writer);\n}\n\nTEST_BEGIN(test_buf_write_pipe) {\n\tbuf_writer_t buf_writer;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tassert_false(buf_writer_init(tsdn, &buf_writer, test_write_cb, &arg,\n\t    test_buf, TEST_BUF_SIZE),\n\t    \"buf_writer_init() should not encounter error on static buffer\");\n\ttest_buf_writer_pipe_body(tsdn, &buf_writer);\n}\nTEST_END\n\nTEST_BEGIN(test_buf_write_pipe_oom) {\n\tbuf_writer_t buf_writer;\n\ttsdn_t *tsdn = tsdn_fetch();\n\tassert_true(buf_writer_init(tsdn, &buf_writer, test_write_cb, &arg,\n\t    NULL, SC_LARGE_MAXCLASS + 1), \"buf_writer_init() should OOM\");\n\ttest_buf_writer_pipe_body(tsdn, &buf_writer);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_buf_write_static,\n\t    test_buf_write_dynamic,\n\t    test_buf_write_oom,\n\t    test_buf_write_pipe,\n\t    test_buf_write_pipe_oom);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/cache_bin.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic void\ndo_fill_test(cache_bin_t *bin, cache_bin_info_t *info, void **ptrs,\n    cache_bin_sz_t ncached_max, cache_bin_sz_t nfill_attempt,\n    cache_bin_sz_t nfill_succeed) {\n\tbool success;\n\tvoid *ptr;\n\tassert_true(cache_bin_ncached_get_local(bin, info) == 0, \"\");\n\tCACHE_BIN_PTR_ARRAY_DECLARE(arr, nfill_attempt);\n\tcache_bin_init_ptr_array_for_fill(bin, info, &arr, nfill_attempt);\n\tfor (cache_bin_sz_t i = 0; i < nfill_succeed; i++) {\n\t\tarr.ptr[i] = &ptrs[i];\n\t}\n\tcache_bin_finish_fill(bin, info, &arr, nfill_succeed);\n\texpect_true(cache_bin_ncached_get_local(bin, info) == nfill_succeed,\n\t    \"\");\n\tcache_bin_low_water_set(bin);\n\n\tfor (cache_bin_sz_t i = 0; i < nfill_succeed; i++) {\n\t\tptr = cache_bin_alloc(bin, &success);\n\t\texpect_true(success, \"\");\n\t\texpect_ptr_eq(ptr, (void *)&ptrs[i],\n\t\t    \"Should pop in order filled\");\n\t\texpect_true(cache_bin_low_water_get(bin, info)\n\t\t    == nfill_succeed - i - 1, \"\");\n\t}\n\texpect_true(cache_bin_ncached_get_local(bin, info) == 0, \"\");\n\texpect_true(cache_bin_low_water_get(bin, info) == 0, \"\");\n}\n\nstatic void\ndo_flush_test(cache_bin_t *bin, cache_bin_info_t *info, void **ptrs,\n    cache_bin_sz_t nfill, cache_bin_sz_t nflush) {\n\tbool success;\n\tassert_true(cache_bin_ncached_get_local(bin, info) == 0, \"\");\n\n\tfor (cache_bin_sz_t i = 0; i < nfill; i++) {\n\t\tsuccess = cache_bin_dalloc_easy(bin, &ptrs[i]);\n\t\texpect_true(success, \"\");\n\t}\n\n\tCACHE_BIN_PTR_ARRAY_DECLARE(arr, nflush);\n\tcache_bin_init_ptr_array_for_flush(bin, info, &arr, nflush);\n\tfor (cache_bin_sz_t i = 0; i < nflush; i++) {\n\t\texpect_ptr_eq(arr.ptr[i], &ptrs[nflush - i - 1], \"\");\n\t}\n\tcache_bin_finish_flush(bin, info, &arr, nflush);\n\n\texpect_true(cache_bin_ncached_get_local(bin, info) == nfill - nflush,\n\t    \"\");\n\twhile (cache_bin_ncached_get_local(bin, info) > 0) {\n\t\tcache_bin_alloc(bin, 
&success);\n\t}\n}\n\nstatic void\ndo_batch_alloc_test(cache_bin_t *bin, cache_bin_info_t *info, void **ptrs,\n    cache_bin_sz_t nfill, size_t batch) {\n\tassert_true(cache_bin_ncached_get_local(bin, info) == 0, \"\");\n\tCACHE_BIN_PTR_ARRAY_DECLARE(arr, nfill);\n\tcache_bin_init_ptr_array_for_fill(bin, info, &arr, nfill);\n\tfor (cache_bin_sz_t i = 0; i < nfill; i++) {\n\t\tarr.ptr[i] = &ptrs[i];\n\t}\n\tcache_bin_finish_fill(bin, info, &arr, nfill);\n\tassert_true(cache_bin_ncached_get_local(bin, info) == nfill, \"\");\n\tcache_bin_low_water_set(bin);\n\n\tvoid **out = malloc((batch + 1) * sizeof(void *));\n\tsize_t n = cache_bin_alloc_batch(bin, batch, out);\n\tassert_true(n == ((size_t)nfill < batch ? (size_t)nfill : batch), \"\");\n\tfor (cache_bin_sz_t i = 0; i < (cache_bin_sz_t)n; i++) {\n\t\texpect_ptr_eq(out[i], &ptrs[i], \"\");\n\t}\n\texpect_true(cache_bin_low_water_get(bin, info) == nfill -\n\t    (cache_bin_sz_t)n, \"\");\n\twhile (cache_bin_ncached_get_local(bin, info) > 0) {\n\t\tbool success;\n\t\tcache_bin_alloc(bin, &success);\n\t}\n\tfree(out);\n}\n\nstatic void\ntest_bin_init(cache_bin_t *bin, cache_bin_info_t *info) {\n\tsize_t size;\n\tsize_t alignment;\n\tcache_bin_info_compute_alloc(info, 1, &size, &alignment);\n\tvoid *mem = mallocx(size, MALLOCX_ALIGN(alignment));\n\tassert_ptr_not_null(mem, \"Unexpected mallocx failure\");\n\n\tsize_t cur_offset = 0;\n\tcache_bin_preincrement(info, 1, mem, &cur_offset);\n\tcache_bin_init(bin, info, mem, &cur_offset);\n\tcache_bin_postincrement(info, 1, mem, &cur_offset);\n\tassert_zu_eq(cur_offset, size, \"Should use all requested memory\");\n}\n\nTEST_BEGIN(test_cache_bin) {\n\tconst int ncached_max = 100;\n\tbool success;\n\tvoid *ptr;\n\n\tcache_bin_info_t info;\n\tcache_bin_info_init(&info, ncached_max);\n\tcache_bin_t bin;\n\ttest_bin_init(&bin, &info);\n\n\t/* Initialize to empty; should then have 0 elements. 
*/\n\texpect_d_eq(ncached_max, cache_bin_info_ncached_max(&info), \"\");\n\texpect_true(cache_bin_ncached_get_local(&bin, &info) == 0, \"\");\n\texpect_true(cache_bin_low_water_get(&bin, &info) == 0, \"\");\n\n\tptr = cache_bin_alloc_easy(&bin, &success);\n\texpect_false(success, \"Shouldn't successfully allocate when empty\");\n\texpect_ptr_null(ptr, \"Shouldn't get a non-null pointer on failure\");\n\n\tptr = cache_bin_alloc(&bin, &success);\n\texpect_false(success, \"Shouldn't successfully allocate when empty\");\n\texpect_ptr_null(ptr, \"Shouldn't get a non-null pointer on failure\");\n\n\t/*\n\t * We allocate one more item than ncached_max, so we can test cache bin\n\t * exhaustion.\n\t */\n\tvoid **ptrs = mallocx(sizeof(void *) * (ncached_max + 1), 0);\n\tassert_ptr_not_null(ptrs, \"Unexpected mallocx failure\");\n\tfor  (cache_bin_sz_t i = 0; i < ncached_max; i++) {\n\t\texpect_true(cache_bin_ncached_get_local(&bin, &info) == i, \"\");\n\t\tsuccess = cache_bin_dalloc_easy(&bin, &ptrs[i]);\n\t\texpect_true(success,\n\t\t    \"Should be able to dalloc into a non-full cache bin.\");\n\t\texpect_true(cache_bin_low_water_get(&bin, &info) == 0,\n\t\t    \"Pushes and pops shouldn't change low water of zero.\");\n\t}\n\texpect_true(cache_bin_ncached_get_local(&bin, &info) == ncached_max,\n\t    \"\");\n\tsuccess = cache_bin_dalloc_easy(&bin, &ptrs[ncached_max]);\n\texpect_false(success, \"Shouldn't be able to dalloc into a full bin.\");\n\n\tcache_bin_low_water_set(&bin);\n\n\tfor (cache_bin_sz_t i = 0; i < ncached_max; i++) {\n\t\texpect_true(cache_bin_low_water_get(&bin, &info)\n\t\t    == ncached_max - i, \"\");\n\t\texpect_true(cache_bin_ncached_get_local(&bin, &info)\n\t\t    == ncached_max - i, \"\");\n\t\t/*\n\t\t * This should fail -- the easy variant can't change the low\n\t\t * water mark.\n\t\t */\n\t\tptr = cache_bin_alloc_easy(&bin, &success);\n\t\texpect_ptr_null(ptr, \"\");\n\t\texpect_false(success, 
\"\");\n\t\texpect_true(cache_bin_low_water_get(&bin, &info)\n\t\t    == ncached_max - i, \"\");\n\t\texpect_true(cache_bin_ncached_get_local(&bin, &info)\n\t\t    == ncached_max - i, \"\");\n\n\t\t/* This should succeed, though. */\n\t\tptr = cache_bin_alloc(&bin, &success);\n\t\texpect_true(success, \"\");\n\t\texpect_ptr_eq(ptr, &ptrs[ncached_max - i - 1],\n\t\t    \"Alloc should pop in stack order\");\n\t\texpect_true(cache_bin_low_water_get(&bin, &info)\n\t\t    == ncached_max - i - 1, \"\");\n\t\texpect_true(cache_bin_ncached_get_local(&bin, &info)\n\t\t    == ncached_max - i - 1, \"\");\n\t}\n\t/* Now we're empty -- all alloc attempts should fail. */\n\texpect_true(cache_bin_ncached_get_local(&bin, &info) == 0, \"\");\n\tptr = cache_bin_alloc_easy(&bin, &success);\n\texpect_ptr_null(ptr, \"\");\n\texpect_false(success, \"\");\n\tptr = cache_bin_alloc(&bin, &success);\n\texpect_ptr_null(ptr, \"\");\n\texpect_false(success, \"\");\n\n\tfor (cache_bin_sz_t i = 0; i < ncached_max / 2; i++) {\n\t\tcache_bin_dalloc_easy(&bin, &ptrs[i]);\n\t}\n\tcache_bin_low_water_set(&bin);\n\n\tfor (cache_bin_sz_t i = ncached_max / 2; i < ncached_max; i++) {\n\t\tcache_bin_dalloc_easy(&bin, &ptrs[i]);\n\t}\n\texpect_true(cache_bin_ncached_get_local(&bin, &info) == ncached_max,\n\t    \"\");\n\tfor (cache_bin_sz_t i = ncached_max - 1; i >= ncached_max / 2; i--) {\n\t\t/*\n\t\t * Size is bigger than low water -- the reduced version should\n\t\t * succeed.\n\t\t */\n\t\tptr = cache_bin_alloc_easy(&bin, &success);\n\t\texpect_true(success, \"\");\n\t\texpect_ptr_eq(ptr, &ptrs[i], \"\");\n\t}\n\t/* But now, we've hit low-water. */\n\tptr = cache_bin_alloc_easy(&bin, &success);\n\texpect_false(success, \"\");\n\texpect_ptr_null(ptr, \"\");\n\n\t/* We're going to test filling -- we must be empty to start. */\n\twhile (cache_bin_ncached_get_local(&bin, &info)) {\n\t\tcache_bin_alloc(&bin, &success);\n\t\texpect_true(success, \"\");\n\t}\n\n\t/* Test fill. 
*/\n\t/* Try to fill all, succeed fully. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max, ncached_max);\n\t/* Try to fill all, succeed partially. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max,\n\t    ncached_max / 2);\n\t/* Try to fill all, fail completely. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max, 0);\n\n\t/* Try to fill some, succeed fully. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max / 2,\n\t    ncached_max / 2);\n\t/* Try to fill some, succeed partially. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max / 2,\n\t    ncached_max / 4);\n\t/* Try to fill some, fail completely. */\n\tdo_fill_test(&bin, &info, ptrs, ncached_max, ncached_max / 2, 0);\n\n\tdo_flush_test(&bin, &info, ptrs, ncached_max, ncached_max);\n\tdo_flush_test(&bin, &info, ptrs, ncached_max, ncached_max / 2);\n\tdo_flush_test(&bin, &info, ptrs, ncached_max, 0);\n\tdo_flush_test(&bin, &info, ptrs, ncached_max / 2, ncached_max / 2);\n\tdo_flush_test(&bin, &info, ptrs, ncached_max / 2, ncached_max / 4);\n\tdo_flush_test(&bin, &info, ptrs, ncached_max / 2, 0);\n\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, ncached_max);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, ncached_max * 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, ncached_max / 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, 1);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max, 0);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2,\n\t    ncached_max / 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2, ncached_max);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2,\n\t    ncached_max / 4);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2, 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2, 1);\n\tdo_batch_alloc_test(&bin, &info, ptrs, ncached_max / 2, 0);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 2, 
ncached_max);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 2, 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 2, 1);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 2, 0);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 1, 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 1, 1);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 1, 0);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 0, 2);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 0, 1);\n\tdo_batch_alloc_test(&bin, &info, ptrs, 0, 0);\n\n\tfree(ptrs);\n}\nTEST_END\n\nstatic void\ndo_flush_stashed_test(cache_bin_t *bin, cache_bin_info_t *info, void **ptrs,\n    cache_bin_sz_t nfill, cache_bin_sz_t nstash) {\n\texpect_true(cache_bin_ncached_get_local(bin, info) == 0,\n\t    \"Bin not empty\");\n\texpect_true(cache_bin_nstashed_get_local(bin, info) == 0,\n\t    \"Bin not empty\");\n\texpect_true(nfill + nstash <= info->ncached_max, \"Exceeded max\");\n\n\tbool ret;\n\t/* Fill */\n\tfor (cache_bin_sz_t i = 0; i < nfill; i++) {\n\t\tret = cache_bin_dalloc_easy(bin, &ptrs[i]);\n\t\texpect_true(ret, \"Unexpected fill failure\");\n\t}\n\texpect_true(cache_bin_ncached_get_local(bin, info) == nfill,\n\t    \"Wrong cached count\");\n\n\t/* Stash */\n\tfor (cache_bin_sz_t i = 0; i < nstash; i++) {\n\t\tret = cache_bin_stash(bin, &ptrs[i + nfill]);\n\t\texpect_true(ret, \"Unexpected stash failure\");\n\t}\n\texpect_true(cache_bin_nstashed_get_local(bin, info) == nstash,\n\t    \"Wrong stashed count\");\n\n\tif (nfill + nstash == info->ncached_max) {\n\t\tret = cache_bin_dalloc_easy(bin, &ptrs[0]);\n\t\texpect_false(ret, \"Should not dalloc into a full bin\");\n\t\tret = cache_bin_stash(bin, &ptrs[0]);\n\t\texpect_false(ret, \"Should not stash into a full bin\");\n\t}\n\n\t/* Alloc filled ones */\n\tfor (cache_bin_sz_t i = 0; i < nfill; i++) {\n\t\tvoid *ptr = cache_bin_alloc(bin, &ret);\n\t\texpect_true(ret, \"Unexpected alloc failure\");\n\t\t/* Verify it's not from the stashed range. 
*/\n\t\texpect_true((uintptr_t)ptr < (uintptr_t)&ptrs[nfill],\n\t\t    \"Should not alloc stashed ptrs\");\n\t}\n\texpect_true(cache_bin_ncached_get_local(bin, info) == 0,\n\t    \"Wrong cached count\");\n\texpect_true(cache_bin_nstashed_get_local(bin, info) == nstash,\n\t    \"Wrong stashed count\");\n\n\tcache_bin_alloc(bin, &ret);\n\texpect_false(ret, \"Should not alloc stashed\");\n\n\t/* Clear stashed ones */\n\tcache_bin_finish_flush_stashed(bin, info);\n\texpect_true(cache_bin_ncached_get_local(bin, info) == 0,\n\t    \"Wrong cached count\");\n\texpect_true(cache_bin_nstashed_get_local(bin, info) == 0,\n\t    \"Wrong stashed count\");\n\n\tcache_bin_alloc(bin, &ret);\n\texpect_false(ret, \"Should not alloc from empty bin\");\n}\n\nTEST_BEGIN(test_cache_bin_stash) {\n\tconst int ncached_max = 100;\n\n\tcache_bin_t bin;\n\tcache_bin_info_t info;\n\tcache_bin_info_init(&info, ncached_max);\n\ttest_bin_init(&bin, &info);\n\n\t/*\n\t * The content of this array is not accessed; instead the interior\n\t * addresses are used to insert / stash into the bins as test pointers.\n\t */\n\tvoid **ptrs = mallocx(sizeof(void *) * (ncached_max + 1), 0);\n\tassert_ptr_not_null(ptrs, \"Unexpected mallocx failure\");\n\tbool ret;\n\tfor (cache_bin_sz_t i = 0; i < ncached_max; i++) {\n\t\texpect_true(cache_bin_ncached_get_local(&bin, &info) ==\n\t\t    (i / 2 + i % 2), \"Wrong ncached value\");\n\t\texpect_true(cache_bin_nstashed_get_local(&bin, &info) == i / 2,\n\t\t    \"Wrong nstashed value\");\n\t\tif (i % 2 == 0) {\n\t\t\tcache_bin_dalloc_easy(&bin, &ptrs[i]);\n\t\t} else {\n\t\t\tret = cache_bin_stash(&bin, &ptrs[i]);\n\t\t\texpect_true(ret, \"Should be able to stash into a \"\n\t\t\t    \"non-full cache bin\");\n\t\t}\n\t}\n\tret = cache_bin_dalloc_easy(&bin, &ptrs[0]);\n\texpect_false(ret, \"Should not dalloc into a full cache bin\");\n\tret = cache_bin_stash(&bin, &ptrs[0]);\n\texpect_false(ret, \"Should not stash into a full cache bin\");\n\tfor (cache_bin_sz_t i = 0; 
i < ncached_max; i++) {\n\t\tvoid *ptr = cache_bin_alloc(&bin, &ret);\n\t\tif (i < ncached_max / 2) {\n\t\t\texpect_true(ret, \"Should be able to alloc\");\n\t\t\tuintptr_t diff = ((uintptr_t)ptr - (uintptr_t)&ptrs[0])\n\t\t\t    / sizeof(void *);\n\t\t\texpect_true(diff % 2 == 0, \"Should be able to alloc\");\n\t\t} else {\n\t\t\texpect_false(ret, \"Should not alloc stashed\");\n\t\t\texpect_true(cache_bin_nstashed_get_local(&bin, &info) ==\n\t\t\t    ncached_max / 2, \"Wrong nstashed value\");\n\t\t}\n\t}\n\n\ttest_bin_init(&bin, &info);\n\tdo_flush_stashed_test(&bin, &info, ptrs, ncached_max, 0);\n\tdo_flush_stashed_test(&bin, &info, ptrs, 0, ncached_max);\n\tdo_flush_stashed_test(&bin, &info, ptrs, ncached_max / 2, ncached_max / 2);\n\tdo_flush_stashed_test(&bin, &info, ptrs, ncached_max / 4, ncached_max / 2);\n\tdo_flush_stashed_test(&bin, &info, ptrs, ncached_max / 2, ncached_max / 4);\n\tdo_flush_stashed_test(&bin, &info, ptrs, ncached_max / 4, ncached_max / 4);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(test_cache_bin,\n\t\ttest_cache_bin_stash);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/ckh.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_new_delete) {\n\ttsd_t *tsd;\n\tckh_t ckh;\n\n\ttsd = tsd_fetch();\n\n\texpect_false(ckh_new(tsd, &ckh, 2, ckh_string_hash,\n\t    ckh_string_keycomp), \"Unexpected ckh_new() error\");\n\tckh_delete(tsd, &ckh);\n\n\texpect_false(ckh_new(tsd, &ckh, 3, ckh_pointer_hash,\n\t    ckh_pointer_keycomp), \"Unexpected ckh_new() error\");\n\tckh_delete(tsd, &ckh);\n}\nTEST_END\n\nTEST_BEGIN(test_count_insert_search_remove) {\n\ttsd_t *tsd;\n\tckh_t ckh;\n\tconst char *strs[] = {\n\t    \"a string\",\n\t    \"A string\",\n\t    \"a string.\",\n\t    \"A string.\"\n\t};\n\tconst char *missing = \"A string not in the hash table.\";\n\tsize_t i;\n\n\ttsd = tsd_fetch();\n\n\texpect_false(ckh_new(tsd, &ckh, 2, ckh_string_hash,\n\t    ckh_string_keycomp), \"Unexpected ckh_new() error\");\n\texpect_zu_eq(ckh_count(&ckh), 0,\n\t    \"ckh_count() should return %zu, but it returned %zu\", ZU(0),\n\t    ckh_count(&ckh));\n\n\t/* Insert. */\n\tfor (i = 0; i < sizeof(strs)/sizeof(const char *); i++) {\n\t\tckh_insert(tsd, &ckh, strs[i], strs[i]);\n\t\texpect_zu_eq(ckh_count(&ckh), i+1,\n\t\t    \"ckh_count() should return %zu, but it returned %zu\", i+1,\n\t\t    ckh_count(&ckh));\n\t}\n\n\t/* Search. */\n\tfor (i = 0; i < sizeof(strs)/sizeof(const char *); i++) {\n\t\tunion {\n\t\t\tvoid *p;\n\t\t\tconst char *s;\n\t\t} k, v;\n\t\tvoid **kp, **vp;\n\t\tconst char *ks, *vs;\n\n\t\tkp = (i & 1) ? &k.p : NULL;\n\t\tvp = (i & 2) ? &v.p : NULL;\n\t\tk.p = NULL;\n\t\tv.p = NULL;\n\t\texpect_false(ckh_search(&ckh, strs[i], kp, vp),\n\t\t    \"Unexpected ckh_search() error\");\n\n\t\tks = (i & 1) ? strs[i] : (const char *)NULL;\n\t\tvs = (i & 2) ? 
strs[i] : (const char *)NULL;\n\t\texpect_ptr_eq((void *)ks, (void *)k.s, \"Key mismatch, i=%zu\",\n\t\t    i);\n\t\texpect_ptr_eq((void *)vs, (void *)v.s, \"Value mismatch, i=%zu\",\n\t\t    i);\n\t}\n\texpect_true(ckh_search(&ckh, missing, NULL, NULL),\n\t    \"Unexpected ckh_search() success\");\n\n\t/* Remove. */\n\tfor (i = 0; i < sizeof(strs)/sizeof(const char *); i++) {\n\t\tunion {\n\t\t\tvoid *p;\n\t\t\tconst char *s;\n\t\t} k, v;\n\t\tvoid **kp, **vp;\n\t\tconst char *ks, *vs;\n\n\t\tkp = (i & 1) ? &k.p : NULL;\n\t\tvp = (i & 2) ? &v.p : NULL;\n\t\tk.p = NULL;\n\t\tv.p = NULL;\n\t\texpect_false(ckh_remove(tsd, &ckh, strs[i], kp, vp),\n\t\t    \"Unexpected ckh_remove() error\");\n\n\t\tks = (i & 1) ? strs[i] : (const char *)NULL;\n\t\tvs = (i & 2) ? strs[i] : (const char *)NULL;\n\t\texpect_ptr_eq((void *)ks, (void *)k.s, \"Key mismatch, i=%zu\",\n\t\t    i);\n\t\texpect_ptr_eq((void *)vs, (void *)v.s, \"Value mismatch, i=%zu\",\n\t\t    i);\n\t\texpect_zu_eq(ckh_count(&ckh),\n\t\t    sizeof(strs)/sizeof(const char *) - i - 1,\n\t\t    \"ckh_count() should return %zu, but it returned %zu\",\n\t\t        sizeof(strs)/sizeof(const char *) - i - 1,\n\t\t    ckh_count(&ckh));\n\t}\n\n\tckh_delete(tsd, &ckh);\n}\nTEST_END\n\nTEST_BEGIN(test_insert_iter_remove) {\n#define NITEMS ZU(1000)\n\ttsd_t *tsd;\n\tckh_t ckh;\n\tvoid **p[NITEMS];\n\tvoid *q, *r;\n\tsize_t i;\n\n\ttsd = tsd_fetch();\n\n\texpect_false(ckh_new(tsd, &ckh, 2, ckh_pointer_hash,\n\t    ckh_pointer_keycomp), \"Unexpected ckh_new() error\");\n\n\tfor (i = 0; i < NITEMS; i++) {\n\t\tp[i] = mallocx(i+1, 0);\n\t\texpect_ptr_not_null(p[i], \"Unexpected mallocx() failure\");\n\t}\n\n\tfor (i = 0; i < NITEMS; i++) {\n\t\tsize_t j;\n\n\t\tfor (j = i; j < NITEMS; j++) {\n\t\t\texpect_false(ckh_insert(tsd, &ckh, p[j], p[j]),\n\t\t\t    \"Unexpected ckh_insert() failure\");\n\t\t\texpect_false(ckh_search(&ckh, p[j], &q, &r),\n\t\t\t    \"Unexpected ckh_search() failure\");\n\t\t\texpect_ptr_eq(p[j], q, 
\"Key pointer mismatch\");\n\t\t\texpect_ptr_eq(p[j], r, \"Value pointer mismatch\");\n\t\t}\n\n\t\texpect_zu_eq(ckh_count(&ckh), NITEMS,\n\t\t    \"ckh_count() should return %zu, but it returned %zu\",\n\t\t    NITEMS, ckh_count(&ckh));\n\n\t\tfor (j = i + 1; j < NITEMS; j++) {\n\t\t\texpect_false(ckh_search(&ckh, p[j], NULL, NULL),\n\t\t\t    \"Unexpected ckh_search() failure\");\n\t\t\texpect_false(ckh_remove(tsd, &ckh, p[j], &q, &r),\n\t\t\t    \"Unexpected ckh_remove() failure\");\n\t\t\texpect_ptr_eq(p[j], q, \"Key pointer mismatch\");\n\t\t\texpect_ptr_eq(p[j], r, \"Value pointer mismatch\");\n\t\t\texpect_true(ckh_search(&ckh, p[j], NULL, NULL),\n\t\t\t    \"Unexpected ckh_search() success\");\n\t\t\texpect_true(ckh_remove(tsd, &ckh, p[j], &q, &r),\n\t\t\t    \"Unexpected ckh_remove() success\");\n\t\t}\n\n\t\t{\n\t\t\tbool seen[NITEMS];\n\t\t\tsize_t tabind;\n\n\t\t\tmemset(seen, 0, sizeof(seen));\n\n\t\t\tfor (tabind = 0; !ckh_iter(&ckh, &tabind, &q, &r);) {\n\t\t\t\tsize_t k;\n\n\t\t\t\texpect_ptr_eq(q, r, \"Key and val not equal\");\n\n\t\t\t\tfor (k = 0; k < NITEMS; k++) {\n\t\t\t\t\tif (p[k] == q) {\n\t\t\t\t\t\texpect_false(seen[k],\n\t\t\t\t\t\t    \"Item %zu already seen\", k);\n\t\t\t\t\t\tseen[k] = true;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor (j = 0; j < i + 1; j++) {\n\t\t\t\texpect_true(seen[j], \"Item %zu not seen\", j);\n\t\t\t}\n\t\t\tfor (; j < NITEMS; j++) {\n\t\t\t\texpect_false(seen[j], \"Item %zu seen\", j);\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (i = 0; i < NITEMS; i++) {\n\t\texpect_false(ckh_search(&ckh, p[i], NULL, NULL),\n\t\t    \"Unexpected ckh_search() failure\");\n\t\texpect_false(ckh_remove(tsd, &ckh, p[i], &q, &r),\n\t\t    \"Unexpected ckh_remove() failure\");\n\t\texpect_ptr_eq(p[i], q, \"Key pointer mismatch\");\n\t\texpect_ptr_eq(p[i], r, \"Value pointer mismatch\");\n\t\texpect_true(ckh_search(&ckh, p[i], NULL, NULL),\n\t\t    \"Unexpected ckh_search() success\");\n\t\texpect_true(ckh_remove(tsd, &ckh, 
p[i], &q, &r),\n\t\t    \"Unexpected ckh_remove() success\");\n\t\tdallocx(p[i], 0);\n\t}\n\n\texpect_zu_eq(ckh_count(&ckh), 0,\n\t    \"ckh_count() should return %zu, but it returned %zu\",\n\t    ZU(0), ckh_count(&ckh));\n\tckh_delete(tsd, &ckh);\n#undef NITEMS\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_new_delete,\n\t    test_count_insert_search_remove,\n\t    test_insert_iter_remove);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/counter.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic const uint64_t interval = 1 << 20;\n\nTEST_BEGIN(test_counter_accum) {\n\tuint64_t increment = interval >> 4;\n\tunsigned n = interval / increment;\n\tuint64_t accum = 0;\n\n\tcounter_accum_t c;\n\tcounter_accum_init(&c, interval);\n\n\ttsd_t *tsd = tsd_fetch();\n\tbool trigger;\n\tfor (unsigned i = 0; i < n; i++) {\n\t\ttrigger = counter_accum(tsd_tsdn(tsd), &c, increment);\n\t\taccum += increment;\n\t\tif (accum < interval) {\n\t\t\texpect_b_eq(trigger, false, \"Should not trigger\");\n\t\t} else {\n\t\t\texpect_b_eq(trigger, true, \"Should have triggered\");\n\t\t}\n\t}\n\texpect_b_eq(trigger, true, \"Should have triggered\");\n}\nTEST_END\n\nvoid\nexpect_counter_value(counter_accum_t *c, uint64_t v) {\n\tuint64_t accum = locked_read_u64_unsynchronized(&c->accumbytes);\n\texpect_u64_eq(accum, v, \"Counter value mismatch\");\n}\n\n#define N_THDS (16)\n#define N_ITER_THD (1 << 12)\n#define ITER_INCREMENT (interval >> 4)\n\nstatic void *\nthd_start(void *varg) {\n\tcounter_accum_t *c = (counter_accum_t *)varg;\n\n\ttsd_t *tsd = tsd_fetch();\n\tbool trigger;\n\tuintptr_t n_triggered = 0;\n\tfor (unsigned i = 0; i < N_ITER_THD; i++) {\n\t\ttrigger = counter_accum(tsd_tsdn(tsd), c, ITER_INCREMENT);\n\t\tn_triggered += trigger ? 1 : 0;\n\t}\n\n\treturn (void *)n_triggered;\n}\n\n\nTEST_BEGIN(test_counter_mt) {\n\tcounter_accum_t shared_c;\n\tcounter_accum_init(&shared_c, interval);\n\n\tthd_t thds[N_THDS];\n\tunsigned i;\n\tfor (i = 0; i < N_THDS; i++) {\n\t\tthd_create(&thds[i], thd_start, (void *)&shared_c);\n\t}\n\n\tuint64_t sum = 0;\n\tfor (i = 0; i < N_THDS; i++) {\n\t\tvoid *ret;\n\t\tthd_join(thds[i], &ret);\n\t\tsum += (uintptr_t)ret;\n\t}\n\texpect_u64_eq(sum, N_THDS * N_ITER_THD / (interval / ITER_INCREMENT),\n\t    \"Incorrect number of triggers\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_counter_accum,\n\t    test_counter_mt);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/decay.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/decay.h\"\n\nTEST_BEGIN(test_decay_init) {\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tssize_t decay_ms = 1000;\n\tassert_true(decay_ms_valid(decay_ms), \"\");\n\n\texpect_false(decay_init(&decay, &curtime, decay_ms),\n\t    \"Failed to initialize decay\");\n\texpect_zd_eq(decay_ms_read(&decay), decay_ms,\n\t    \"Decay_ms was initialized incorrectly\");\n\texpect_u64_ne(decay_epoch_duration_ns(&decay), 0,\n\t    \"Epoch duration was initialized incorrectly\");\n}\nTEST_END\n\nTEST_BEGIN(test_decay_ms_valid) {\n\texpect_false(decay_ms_valid(-7),\n\t    \"Misclassified negative decay as valid\");\n\texpect_true(decay_ms_valid(-1),\n\t    \"Misclassified -1 (never decay) as invalid decay\");\n\texpect_true(decay_ms_valid(8943),\n\t    \"Misclassified valid decay\");\n\tif (SSIZE_MAX > NSTIME_SEC_MAX) {\n\t\texpect_false(\n\t\t    decay_ms_valid((ssize_t)(NSTIME_SEC_MAX * KQU(1000) + 39)),\n\t\t    \"Misclassified too large decay\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_decay_npages_purge_in) {\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tuint64_t decay_ms = 1000;\n\tnstime_t decay_nstime;\n\tnstime_init(&decay_nstime, decay_ms * 1000 * 1000);\n\texpect_false(decay_init(&decay, &curtime, (ssize_t)decay_ms),\n\t    \"Failed to initialize decay\");\n\n\tsize_t new_pages = 100;\n\n\tnstime_t time;\n\tnstime_copy(&time, &decay_nstime);\n\texpect_u64_eq(decay_npages_purge_in(&decay, &time, new_pages),\n\t    new_pages, \"Not all pages are expected to decay in decay_ms\");\n\n\tnstime_init(&time, 0);\n\texpect_u64_eq(decay_npages_purge_in(&decay, &time, new_pages), 0,\n\t    \"More than zero pages are expected to instantly decay\");\n\n\tnstime_copy(&time, &decay_nstime);\n\tnstime_idivide(&time, 2);\n\texpect_u64_eq(decay_npages_purge_in(&decay, &time, 
new_pages),\n\t    new_pages / 2, \"Not half of pages decay in half the decay period\");\n}\nTEST_END\n\nTEST_BEGIN(test_decay_maybe_advance_epoch) {\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tuint64_t decay_ms = 1000;\n\n\tbool err = decay_init(&decay, &curtime, (ssize_t)decay_ms);\n\texpect_false(err, \"\");\n\n\tbool advanced;\n\tadvanced = decay_maybe_advance_epoch(&decay, &curtime, 0);\n\texpect_false(advanced, \"Epoch advanced while time didn't\");\n\n\tnstime_t interval;\n\tnstime_init(&interval, decay_epoch_duration_ns(&decay));\n\n\tnstime_add(&curtime, &interval);\n\tadvanced = decay_maybe_advance_epoch(&decay, &curtime, 0);\n\texpect_false(advanced, \"Epoch advanced after first interval\");\n\n\tnstime_add(&curtime, &interval);\n\tadvanced = decay_maybe_advance_epoch(&decay, &curtime, 0);\n\texpect_true(advanced, \"Epoch didn't advance after two intervals\");\n}\nTEST_END\n\nTEST_BEGIN(test_decay_empty) {\n\t/* If we never have any decaying pages, npages_limit should be 0. 
*/\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tuint64_t decay_ms = 1000;\n\tuint64_t decay_ns = decay_ms * 1000 * 1000;\n\n\tbool err = decay_init(&decay, &curtime, (ssize_t)decay_ms);\n\tassert_false(err, \"\");\n\n\tuint64_t time_between_calls = decay_epoch_duration_ns(&decay) / 5;\n\tint nepochs = 0;\n\tfor (uint64_t i = 0; i < decay_ns / time_between_calls * 10; i++) {\n\t\tsize_t dirty_pages = 0;\n\t\tnstime_init(&curtime, i * time_between_calls);\n\t\tbool epoch_advanced = decay_maybe_advance_epoch(&decay,\n\t\t    &curtime, dirty_pages);\n\t\tif (epoch_advanced) {\n\t\t\tnepochs++;\n\t\t\texpect_zu_eq(decay_npages_limit_get(&decay), 0,\n\t\t\t    \"Unexpectedly increased npages_limit\");\n\t\t}\n\t}\n\texpect_d_gt(nepochs, 0, \"Epochs never advanced\");\n}\nTEST_END\n\n/*\n * Verify that npages_limit correctly decays as the time goes.\n *\n * During first 'nepoch_init' epochs, add new dirty pages.\n * After that, let them decay and verify npages_limit decreases.\n * Then proceed with another 'nepoch_init' epochs and check that\n * all dirty pages are flushed out of backlog, bringing npages_limit\n * down to zero.\n */\nTEST_BEGIN(test_decay) {\n\tconst uint64_t nepoch_init = 10;\n\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tuint64_t decay_ms = 1000;\n\tuint64_t decay_ns = decay_ms * 1000 * 1000;\n\n\tbool err = decay_init(&decay, &curtime, (ssize_t)decay_ms);\n\tassert_false(err, \"\");\n\n\texpect_zu_eq(decay_npages_limit_get(&decay), 0,\n\t    \"Empty decay returned nonzero npages_limit\");\n\n\tnstime_t epochtime;\n\tnstime_init(&epochtime, decay_epoch_duration_ns(&decay));\n\n\tconst size_t dirty_pages_per_epoch = 1000;\n\tsize_t dirty_pages = 0;\n\tuint64_t epoch_ns = decay_epoch_duration_ns(&decay);\n\tbool epoch_advanced = false;\n\n\t/* Populate backlog with some dirty pages */\n\tfor (uint64_t i = 0; i < 
nepoch_init; i++) {\n\t\tnstime_add(&curtime, &epochtime);\n\t\tdirty_pages += dirty_pages_per_epoch;\n\t\tepoch_advanced |= decay_maybe_advance_epoch(&decay, &curtime,\n\t\t    dirty_pages);\n\t}\n\texpect_true(epoch_advanced, \"Epoch never advanced\");\n\n\tsize_t npages_limit = decay_npages_limit_get(&decay);\n\texpect_zu_gt(npages_limit, 0, \"npages_limit is incorrectly equal \"\n\t    \"to zero after dirty pages have been added\");\n\n\t/* Keep dirty pages unchanged and verify that npages_limit decreases */\n\tfor (uint64_t i = nepoch_init; i * epoch_ns < decay_ns; ++i) {\n\t\tnstime_add(&curtime, &epochtime);\n\t\tepoch_advanced = decay_maybe_advance_epoch(&decay, &curtime,\n\t\t    dirty_pages);\n\t\tif (epoch_advanced) {\n\t\t\tsize_t npages_limit_new = decay_npages_limit_get(&decay);\n\t\t\texpect_zu_lt(npages_limit_new, npages_limit,\n\t\t\t    \"npages_limit failed to decay\");\n\n\t\t\tnpages_limit = npages_limit_new;\n\t\t}\n\t}\n\n\texpect_zu_gt(npages_limit, 0, \"npages_limit decayed to zero earlier \"\n\t    \"than decay_ms since last dirty page was added\");\n\n\t/* Completely push all dirty pages out of the backlog */\n\tepoch_advanced = false;\n\tfor (uint64_t i = 0; i < nepoch_init; i++) {\n\t\tnstime_add(&curtime, &epochtime);\n\t\tepoch_advanced |= decay_maybe_advance_epoch(&decay, &curtime,\n\t\t    dirty_pages);\n\t}\n\texpect_true(epoch_advanced, \"Epoch never advanced\");\n\n\tnpages_limit = decay_npages_limit_get(&decay);\n\texpect_zu_eq(npages_limit, 0, \"npages_limit didn't decay to 0 after \"\n\t    \"decay_ms since last bump in dirty pages\");\n}\nTEST_END\n\nTEST_BEGIN(test_decay_ns_until_purge) {\n\tconst uint64_t nepoch_init = 10;\n\n\tdecay_t decay;\n\tmemset(&decay, 0, sizeof(decay));\n\n\tnstime_t curtime;\n\tnstime_init(&curtime, 0);\n\n\tuint64_t decay_ms = 1000;\n\tuint64_t decay_ns = decay_ms * 1000 * 1000;\n\n\tbool err = decay_init(&decay, &curtime, (ssize_t)decay_ms);\n\tassert_false(err, \"\");\n\n\tnstime_t 
epochtime;\n\tnstime_init(&epochtime, decay_epoch_duration_ns(&decay));\n\n\tuint64_t ns_until_purge_empty = decay_ns_until_purge(&decay, 0, 0);\n\texpect_u64_eq(ns_until_purge_empty, DECAY_UNBOUNDED_TIME_TO_PURGE,\n\t    \"Failed to return unbounded wait time for zero threshold\");\n\n\tconst size_t dirty_pages_per_epoch = 1000;\n\tsize_t dirty_pages = 0;\n\tbool epoch_advanced = false;\n\tfor (uint64_t i = 0; i < nepoch_init; i++) {\n\t\tnstime_add(&curtime, &epochtime);\n\t\tdirty_pages += dirty_pages_per_epoch;\n\t\tepoch_advanced |= decay_maybe_advance_epoch(&decay, &curtime,\n\t\t    dirty_pages);\n\t}\n\texpect_true(epoch_advanced, \"Epoch never advanced\");\n\n\tuint64_t ns_until_purge_all = decay_ns_until_purge(&decay,\n\t    dirty_pages, dirty_pages);\n\texpect_u64_ge(ns_until_purge_all, decay_ns,\n\t    \"Incorrectly calculated time to purge all pages\");\n\n\tuint64_t ns_until_purge_none = decay_ns_until_purge(&decay,\n\t    dirty_pages, 0);\n\texpect_u64_eq(ns_until_purge_none, decay_epoch_duration_ns(&decay) * 2,\n\t    \"Incorrectly calculated time to purge 0 pages\");\n\n\tuint64_t npages_threshold = dirty_pages / 2;\n\tuint64_t ns_until_purge_half = decay_ns_until_purge(&decay,\n\t    dirty_pages, npages_threshold);\n\n\tnstime_t waittime;\n\tnstime_init(&waittime, ns_until_purge_half);\n\tnstime_add(&curtime, &waittime);\n\n\tdecay_maybe_advance_epoch(&decay, &curtime, dirty_pages);\n\tsize_t npages_limit = decay_npages_limit_get(&decay);\n\texpect_zu_lt(npages_limit, dirty_pages,\n\t    \"npages_limit failed to decrease after waiting\");\n\tsize_t expected = dirty_pages - npages_limit;\n\tint deviation = abs((int)expected - (int)(npages_threshold));\n\texpect_d_lt(deviation, (int)(npages_threshold / 2),\n\t    \"After waiting, number of pages is out of the expected interval \"\n\t    \"[0.5 * npages_threshold .. 
1.5 * npages_threshold]\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_decay_init,\n\t    test_decay_ms_valid,\n\t    test_decay_npages_purge_in,\n\t    test_decay_maybe_advance_epoch,\n\t    test_decay_empty,\n\t    test_decay,\n\t    test_decay_ns_until_purge);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/div.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/div.h\"\n\nTEST_BEGIN(test_div_exhaustive) {\n\tfor (size_t divisor = 2; divisor < 1000 * 1000; ++divisor) {\n\t\tdiv_info_t div_info;\n\t\tdiv_init(&div_info, divisor);\n\t\tsize_t max = 1000 * divisor;\n\t\tif (max < 1000 * 1000) {\n\t\t\tmax = 1000 * 1000;\n\t\t}\n\t\tfor (size_t dividend = 0; dividend < 1000 * divisor;\n\t\t    dividend += divisor) {\n\t\t\tsize_t quotient = div_compute(\n\t\t\t    &div_info, dividend);\n\t\t\texpect_zu_eq(dividend, quotient * divisor,\n\t\t\t    \"With divisor = %zu, dividend = %zu, \"\n\t\t\t    \"got quotient %zu\", divisor, dividend, quotient);\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_div_exhaustive);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/double_free.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/san.h\"\n\n#include \"jemalloc/internal/safety_check.h\"\n\nbool fake_abort_called;\nvoid fake_abort(const char *message) {\n\t(void)message;\n\tfake_abort_called = true;\n}\n\nvoid\ntest_large_double_free_pre(void) {\n\tsafety_check_set_abort(&fake_abort);\n\tfake_abort_called = false;\n}\n\nvoid\ntest_large_double_free_post() {\n\texpect_b_eq(fake_abort_called, true, \"Double-free check didn't fire.\");\n\tsafety_check_set_abort(NULL);\n}\n\nTEST_BEGIN(test_large_double_free_tcache) {\n\ttest_skip_if(!config_opt_safety_checks);\n\t/*\n\t * Skip debug builds, since too many assertions will be triggered with\n\t * double-free before hitting the one we are interested in.\n\t */\n\ttest_skip_if(config_debug);\n\n\ttest_large_double_free_pre();\n\tchar *ptr = malloc(SC_LARGE_MINCLASS);\n\tbool guarded = extent_is_guarded(tsdn_fetch(), ptr);\n\tfree(ptr);\n\tif (!guarded) {\n\t\tfree(ptr);\n\t} else {\n\t\t/*\n\t\t * Skip because guarded extents may unguard immediately on\n\t\t * deallocation, in which case the second free will crash before\n\t\t * reaching the intended safety check.\n\t\t */\n\t\tfake_abort_called = true;\n\t}\n\tmallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n\ttest_large_double_free_post();\n}\nTEST_END\n\nTEST_BEGIN(test_large_double_free_no_tcache) {\n\ttest_skip_if(!config_opt_safety_checks);\n\ttest_skip_if(config_debug);\n\n\ttest_large_double_free_pre();\n\tchar *ptr = mallocx(SC_LARGE_MINCLASS, MALLOCX_TCACHE_NONE);\n\tbool guarded = extent_is_guarded(tsdn_fetch(), ptr);\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\tif (!guarded) {\n\t\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\t} else {\n\t\t/*\n\t\t * Skip because guarded extents may unguard immediately on\n\t\t * deallocation, in which case the second free will crash before\n\t\t * reaching the intended safety check.\n\t\t */\n\t\tfake_abort_called = true;\n\t}\n\ttest_large_double_free_post();\n}\nTEST_END\n\nint\nmain(void) 
{\n\treturn test(test_large_double_free_no_tcache,\n\t    test_large_double_free_tcache);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/double_free.h",
    "content": "\n"
  },
  {
    "path": "deps/jemalloc/test/unit/edata_cache.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/edata_cache.h\"\n\nstatic void\ntest_edata_cache_init(edata_cache_t *edata_cache) {\n\tbase_t *base = base_new(TSDN_NULL, /* ind */ 1,\n\t    &ehooks_default_extent_hooks, /* metadata_use_hooks */ true);\n\tassert_ptr_not_null(base, \"\");\n\tbool err = edata_cache_init(edata_cache, base);\n\tassert_false(err, \"\");\n}\n\nstatic void\ntest_edata_cache_destroy(edata_cache_t *edata_cache) {\n\tbase_delete(TSDN_NULL, edata_cache->base);\n}\n\nTEST_BEGIN(test_edata_cache) {\n\tedata_cache_t ec;\n\ttest_edata_cache_init(&ec);\n\n\t/* Get one */\n\tedata_t *ed1 = edata_cache_get(TSDN_NULL, &ec);\n\tassert_ptr_not_null(ed1, \"\");\n\n\t/* Cache should be empty */\n\tassert_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\t/* Get another */\n\tedata_t *ed2 = edata_cache_get(TSDN_NULL, &ec);\n\tassert_ptr_not_null(ed2, \"\");\n\n\t/* Still empty */\n\tassert_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\t/* Put one back, and the cache should now have one item */\n\tedata_cache_put(TSDN_NULL, &ec, ed1);\n\tassert_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 1, \"\");\n\n\t/* Reallocating should reuse the item, and leave an empty cache. 
*/\n\tedata_t *ed1_again = edata_cache_get(TSDN_NULL, &ec);\n\tassert_ptr_eq(ed1, ed1_again, \"\");\n\tassert_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\ttest_edata_cache_destroy(&ec);\n}\nTEST_END\n\nstatic size_t\necf_count(edata_cache_fast_t *ecf) {\n\tsize_t count = 0;\n\tedata_t *cur;\n\tql_foreach(cur, &ecf->list.head, ql_link_inactive) {\n\t\tcount++;\n\t}\n\treturn count;\n}\n\nTEST_BEGIN(test_edata_cache_fast_simple) {\n\tedata_cache_t ec;\n\tedata_cache_fast_t ecf;\n\n\ttest_edata_cache_init(&ec);\n\tedata_cache_fast_init(&ecf, &ec);\n\n\tedata_t *ed1 = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_ptr_not_null(ed1, \"\");\n\texpect_zu_eq(ecf_count(&ecf), 0, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\tedata_t *ed2 = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_ptr_not_null(ed2, \"\");\n\texpect_zu_eq(ecf_count(&ecf), 0, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\tedata_cache_fast_put(TSDN_NULL, &ecf, ed1);\n\texpect_zu_eq(ecf_count(&ecf), 1, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\tedata_cache_fast_put(TSDN_NULL, &ecf, ed2);\n\texpect_zu_eq(ecf_count(&ecf), 2, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\t/* LIFO ordering. 
*/\n\texpect_ptr_eq(ed2, edata_cache_fast_get(TSDN_NULL, &ecf), \"\");\n\texpect_zu_eq(ecf_count(&ecf), 1, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\texpect_ptr_eq(ed1, edata_cache_fast_get(TSDN_NULL, &ecf), \"\");\n\texpect_zu_eq(ecf_count(&ecf), 0, \"\");\n\texpect_zu_eq(atomic_load_zu(&ec.count, ATOMIC_RELAXED), 0, \"\");\n\n\ttest_edata_cache_destroy(&ec);\n}\nTEST_END\n\nTEST_BEGIN(test_edata_cache_fill) {\n\tedata_cache_t ec;\n\tedata_cache_fast_t ecf;\n\n\ttest_edata_cache_init(&ec);\n\tedata_cache_fast_init(&ecf, &ec);\n\n\tedata_t *allocs[EDATA_CACHE_FAST_FILL * 2];\n\n\t/*\n\t * If the fallback cache can't satisfy the request, we shouldn't do\n\t * extra allocations until compelled to.  Put half the fill goal in the\n\t * fallback.\n\t */\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL / 2; i++) {\n\t\tallocs[i] = edata_cache_get(TSDN_NULL, &ec);\n\t}\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL / 2; i++) {\n\t\tedata_cache_put(TSDN_NULL, &ec, allocs[i]);\n\t}\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL / 2,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\n\tallocs[0] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL / 2 - 1, ecf_count(&ecf),\n\t    \"Should have grabbed all edatas available but no more.\");\n\n\tfor (int i = 1; i < EDATA_CACHE_FAST_FILL / 2; i++) {\n\t\tallocs[i] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\t\texpect_ptr_not_null(allocs[i], \"\");\n\t}\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\n\t/* When forced, we should alloc from the base. 
*/\n\tedata_t *edata = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_ptr_not_null(edata, \"\");\n\texpect_zu_eq(0, ecf_count(&ecf), \"Allocated more than necessary\");\n\texpect_zu_eq(0, atomic_load_zu(&ec.count, ATOMIC_RELAXED),\n\t    \"Allocated more than necessary\");\n\n\t/*\n\t * We should correctly fill in the common case where the fallback isn't\n\t * exhausted, too.\n\t */\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL * 2; i++) {\n\t\tallocs[i] = edata_cache_get(TSDN_NULL, &ec);\n\t\texpect_ptr_not_null(allocs[i], \"\");\n\t}\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL * 2; i++) {\n\t\tedata_cache_put(TSDN_NULL, &ec, allocs[i]);\n\t}\n\n\tallocs[0] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL - 1, ecf_count(&ecf), \"\");\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\tfor (int i = 1; i < EDATA_CACHE_FAST_FILL; i++) {\n\t\texpect_zu_eq(EDATA_CACHE_FAST_FILL - i, ecf_count(&ecf), \"\");\n\t\texpect_zu_eq(EDATA_CACHE_FAST_FILL,\n\t\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\t\tallocs[i] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\t\texpect_ptr_not_null(allocs[i], \"\");\n\t}\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\n\tallocs[0] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL - 1, ecf_count(&ecf), \"\");\n\texpect_zu_eq(0, atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\tfor (int i = 1; i < EDATA_CACHE_FAST_FILL; i++) {\n\t\texpect_zu_eq(EDATA_CACHE_FAST_FILL - i, ecf_count(&ecf), \"\");\n\t\texpect_zu_eq(0, atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\t\tallocs[i] = edata_cache_fast_get(TSDN_NULL, &ecf);\n\t\texpect_ptr_not_null(allocs[i], \"\");\n\t}\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\texpect_zu_eq(0, atomic_load_zu(&ec.count, ATOMIC_RELAXED), 
\"\");\n\n\ttest_edata_cache_destroy(&ec);\n}\nTEST_END\n\nTEST_BEGIN(test_edata_cache_disable) {\n\tedata_cache_t ec;\n\tedata_cache_fast_t ecf;\n\n\ttest_edata_cache_init(&ec);\n\tedata_cache_fast_init(&ecf, &ec);\n\n\tfor (int i = 0; i < EDATA_CACHE_FAST_FILL; i++) {\n\t\tedata_t *edata = edata_cache_get(TSDN_NULL, &ec);\n\t\texpect_ptr_not_null(edata, \"\");\n\t\tedata_cache_fast_put(TSDN_NULL, &ecf, edata);\n\t}\n\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL, ecf_count(&ecf), \"\");\n\texpect_zu_eq(0, atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"\");\n\n\tedata_cache_fast_disable(TSDN_NULL, &ecf);\n\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED), \"Disabling should flush\");\n\n\tedata_t *edata = edata_cache_fast_get(TSDN_NULL, &ecf);\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL - 1,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED),\n\t    \"Disabled ecf should forward on get\");\n\n\tedata_cache_fast_put(TSDN_NULL, &ecf, edata);\n\texpect_zu_eq(0, ecf_count(&ecf), \"\");\n\texpect_zu_eq(EDATA_CACHE_FAST_FILL,\n\t    atomic_load_zu(&ec.count, ATOMIC_RELAXED),\n\t    \"Disabled ecf should forward on put\");\n\n\ttest_edata_cache_destroy(&ec);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_edata_cache,\n\t    test_edata_cache_fast_simple,\n\t    test_edata_cache_fill,\n\t    test_edata_cache_disable);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/emitter.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"jemalloc/internal/emitter.h\"\n\n/*\n * This is so useful for debugging and feature work, we'll leave printing\n * functionality committed but disabled by default.\n */\n/* Print the text as it will appear. */\nstatic bool print_raw = false;\n/* Print the text escaped, so it can be copied back into the test case. */\nstatic bool print_escaped = false;\n\ntypedef struct buf_descriptor_s buf_descriptor_t;\nstruct buf_descriptor_s {\n\tchar *buf;\n\tsize_t len;\n\tbool mid_quote;\n};\n\n/*\n * Forwards all writes to the passed-in buf_v (which should be cast from a\n * buf_descriptor_t *).\n */\nstatic void\nforwarding_cb(void *buf_descriptor_v, const char *str) {\n\tbuf_descriptor_t *buf_descriptor = (buf_descriptor_t *)buf_descriptor_v;\n\n\tif (print_raw) {\n\t\tmalloc_printf(\"%s\", str);\n\t}\n\tif (print_escaped) {\n\t\tconst char *it = str;\n\t\twhile (*it != '\\0') {\n\t\t\tif (!buf_descriptor->mid_quote) {\n\t\t\t\tmalloc_printf(\"\\\"\");\n\t\t\t\tbuf_descriptor->mid_quote = true;\n\t\t\t}\n\t\t\tswitch (*it) {\n\t\t\tcase '\\\\':\n\t\t\t\tmalloc_printf(\"\\\\\");\n\t\t\t\tbreak;\n\t\t\tcase '\\\"':\n\t\t\t\tmalloc_printf(\"\\\\\\\"\");\n\t\t\t\tbreak;\n\t\t\tcase '\\t':\n\t\t\t\tmalloc_printf(\"\\\\t\");\n\t\t\t\tbreak;\n\t\t\tcase '\\n':\n\t\t\t\tmalloc_printf(\"\\\\n\\\"\\n\");\n\t\t\t\tbuf_descriptor->mid_quote = false;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tmalloc_printf(\"%c\", *it);\n\t\t\t}\n\t\t\tit++;\n\t\t}\n\t}\n\n\tsize_t written = malloc_snprintf(buf_descriptor->buf,\n\t    buf_descriptor->len, \"%s\", str);\n\texpect_zu_eq(written, strlen(str), \"Buffer overflow!\");\n\tbuf_descriptor->buf += written;\n\tbuf_descriptor->len -= written;\n\texpect_zu_gt(buf_descriptor->len, 0, \"Buffer out of space!\");\n}\n\nstatic void\nexpect_emit_output(void (*emit_fn)(emitter_t *),\n    const char *expected_json_output,\n    const char *expected_json_compact_output,\n    const char 
*expected_table_output) {\n\temitter_t emitter;\n\tchar buf[MALLOC_PRINTF_BUFSIZE];\n\tbuf_descriptor_t buf_descriptor;\n\n\tbuf_descriptor.buf = buf;\n\tbuf_descriptor.len = MALLOC_PRINTF_BUFSIZE;\n\tbuf_descriptor.mid_quote = false;\n\n\temitter_init(&emitter, emitter_output_json, &forwarding_cb,\n\t    &buf_descriptor);\n\t(*emit_fn)(&emitter);\n\texpect_str_eq(expected_json_output, buf, \"json output failure\");\n\n\tbuf_descriptor.buf = buf;\n\tbuf_descriptor.len = MALLOC_PRINTF_BUFSIZE;\n\tbuf_descriptor.mid_quote = false;\n\n\temitter_init(&emitter, emitter_output_json_compact, &forwarding_cb,\n\t    &buf_descriptor);\n\t(*emit_fn)(&emitter);\n\texpect_str_eq(expected_json_compact_output, buf,\n\t    \"compact json output failure\");\n\n\tbuf_descriptor.buf = buf;\n\tbuf_descriptor.len = MALLOC_PRINTF_BUFSIZE;\n\tbuf_descriptor.mid_quote = false;\n\n\temitter_init(&emitter, emitter_output_table, &forwarding_cb,\n\t    &buf_descriptor);\n\t(*emit_fn)(&emitter);\n\texpect_str_eq(expected_table_output, buf, \"table output failure\");\n}\n\nstatic void\nemit_dict(emitter_t *emitter) {\n\tbool b_false = false;\n\tbool b_true = true;\n\tint i_123 = 123;\n\tconst char *str = \"a string\";\n\n\temitter_begin(emitter);\n\temitter_dict_begin(emitter, \"foo\", \"This is the foo table:\");\n\temitter_kv(emitter, \"abc\", \"ABC\", emitter_type_bool, &b_false);\n\temitter_kv(emitter, \"def\", \"DEF\", emitter_type_bool, &b_true);\n\temitter_kv_note(emitter, \"ghi\", \"GHI\", emitter_type_int, &i_123,\n\t    \"note_key1\", emitter_type_string, &str);\n\temitter_kv_note(emitter, \"jkl\", \"JKL\", emitter_type_string, &str,\n\t    \"note_key2\", emitter_type_bool, &b_false);\n\temitter_dict_end(emitter);\n\temitter_end(emitter);\n}\n\nstatic const char *dict_json =\n\"{\\n\"\n\"\\t\\\"foo\\\": {\\n\"\n\"\\t\\t\\\"abc\\\": false,\\n\"\n\"\\t\\t\\\"def\\\": true,\\n\"\n\"\\t\\t\\\"ghi\\\": 123,\\n\"\n\"\\t\\t\\\"jkl\\\": \\\"a string\\\"\\n\"\n\"\\t}\\n\"\n\"}\\n\";\nstatic 
const char *dict_json_compact =\n\"{\"\n\t\"\\\"foo\\\":{\"\n\t\t\"\\\"abc\\\":false,\"\n\t\t\"\\\"def\\\":true,\"\n\t\t\"\\\"ghi\\\":123,\"\n\t\t\"\\\"jkl\\\":\\\"a string\\\"\"\n\t\"}\"\n\"}\";\nstatic const char *dict_table =\n\"This is the foo table:\\n\"\n\"  ABC: false\\n\"\n\"  DEF: true\\n\"\n\"  GHI: 123 (note_key1: \\\"a string\\\")\\n\"\n\"  JKL: \\\"a string\\\" (note_key2: false)\\n\";\n\nstatic void\nemit_table_printf(emitter_t *emitter) {\n\temitter_begin(emitter);\n\temitter_table_printf(emitter, \"Table note 1\\n\");\n\temitter_table_printf(emitter, \"Table note 2 %s\\n\",\n\t    \"with format string\");\n\temitter_end(emitter);\n}\n\nstatic const char *table_printf_json =\n\"{\\n\"\n\"}\\n\";\nstatic const char *table_printf_json_compact = \"{}\";\nstatic const char *table_printf_table =\n\"Table note 1\\n\"\n\"Table note 2 with format string\\n\";\n\nstatic void emit_nested_dict(emitter_t *emitter) {\n\tint val = 123;\n\temitter_begin(emitter);\n\temitter_dict_begin(emitter, \"json1\", \"Dict 1\");\n\temitter_dict_begin(emitter, \"json2\", \"Dict 2\");\n\temitter_kv(emitter, \"primitive\", \"A primitive\", emitter_type_int, &val);\n\temitter_dict_end(emitter); /* Close 2 */\n\temitter_dict_begin(emitter, \"json3\", \"Dict 3\");\n\temitter_dict_end(emitter); /* Close 3 */\n\temitter_dict_end(emitter); /* Close 1 */\n\temitter_dict_begin(emitter, \"json4\", \"Dict 4\");\n\temitter_kv(emitter, \"primitive\", \"Another primitive\",\n\t    emitter_type_int, &val);\n\temitter_dict_end(emitter); /* Close 4 */\n\temitter_end(emitter);\n}\n\nstatic const char *nested_dict_json =\n\"{\\n\"\n\"\\t\\\"json1\\\": {\\n\"\n\"\\t\\t\\\"json2\\\": {\\n\"\n\"\\t\\t\\t\\\"primitive\\\": 123\\n\"\n\"\\t\\t},\\n\"\n\"\\t\\t\\\"json3\\\": {\\n\"\n\"\\t\\t}\\n\"\n\"\\t},\\n\"\n\"\\t\\\"json4\\\": {\\n\"\n\"\\t\\t\\\"primitive\\\": 123\\n\"\n\"\\t}\\n\"\n\"}\\n\";\nstatic const char *nested_dict_json_compact 
=\n\"{\"\n\t\"\\\"json1\\\":{\"\n\t\t\"\\\"json2\\\":{\"\n\t\t\t\"\\\"primitive\\\":123\"\n\t\t\"},\"\n\t\t\"\\\"json3\\\":{\"\n\t\t\"}\"\n\t\"},\"\n\t\"\\\"json4\\\":{\"\n\t\t\"\\\"primitive\\\":123\"\n\t\"}\"\n\"}\";\nstatic const char *nested_dict_table =\n\"Dict 1\\n\"\n\"  Dict 2\\n\"\n\"    A primitive: 123\\n\"\n\"  Dict 3\\n\"\n\"Dict 4\\n\"\n\"  Another primitive: 123\\n\";\n\nstatic void\nemit_types(emitter_t *emitter) {\n\tbool b = false;\n\tint i = -123;\n\tunsigned u = 123;\n\tssize_t zd = -456;\n\tsize_t zu = 456;\n\tconst char *str = \"string\";\n\tuint32_t u32 = 789;\n\tuint64_t u64 = 10000000000ULL;\n\n\temitter_begin(emitter);\n\temitter_kv(emitter, \"k1\", \"K1\", emitter_type_bool, &b);\n\temitter_kv(emitter, \"k2\", \"K2\", emitter_type_int, &i);\n\temitter_kv(emitter, \"k3\", \"K3\", emitter_type_unsigned, &u);\n\temitter_kv(emitter, \"k4\", \"K4\", emitter_type_ssize, &zd);\n\temitter_kv(emitter, \"k5\", \"K5\", emitter_type_size, &zu);\n\temitter_kv(emitter, \"k6\", \"K6\", emitter_type_string, &str);\n\temitter_kv(emitter, \"k7\", \"K7\", emitter_type_uint32, &u32);\n\temitter_kv(emitter, \"k8\", \"K8\", emitter_type_uint64, &u64);\n\t/*\n\t * We don't test the title type, since it's only used for tables.  
It's\n\t * tested in the emitter_table_row tests.\n\t */\n\temitter_end(emitter);\n}\n\nstatic const char *types_json =\n\"{\\n\"\n\"\\t\\\"k1\\\": false,\\n\"\n\"\\t\\\"k2\\\": -123,\\n\"\n\"\\t\\\"k3\\\": 123,\\n\"\n\"\\t\\\"k4\\\": -456,\\n\"\n\"\\t\\\"k5\\\": 456,\\n\"\n\"\\t\\\"k6\\\": \\\"string\\\",\\n\"\n\"\\t\\\"k7\\\": 789,\\n\"\n\"\\t\\\"k8\\\": 10000000000\\n\"\n\"}\\n\";\nstatic const char *types_json_compact =\n\"{\"\n\t\"\\\"k1\\\":false,\"\n\t\"\\\"k2\\\":-123,\"\n\t\"\\\"k3\\\":123,\"\n\t\"\\\"k4\\\":-456,\"\n\t\"\\\"k5\\\":456,\"\n\t\"\\\"k6\\\":\\\"string\\\",\"\n\t\"\\\"k7\\\":789,\"\n\t\"\\\"k8\\\":10000000000\"\n\"}\";\nstatic const char *types_table =\n\"K1: false\\n\"\n\"K2: -123\\n\"\n\"K3: 123\\n\"\n\"K4: -456\\n\"\n\"K5: 456\\n\"\n\"K6: \\\"string\\\"\\n\"\n\"K7: 789\\n\"\n\"K8: 10000000000\\n\";\n\nstatic void\nemit_modal(emitter_t *emitter) {\n\tint val = 123;\n\temitter_begin(emitter);\n\temitter_dict_begin(emitter, \"j0\", \"T0\");\n\temitter_json_key(emitter, \"j1\");\n\temitter_json_object_begin(emitter);\n\temitter_kv(emitter, \"i1\", \"I1\", emitter_type_int, &val);\n\temitter_json_kv(emitter, \"i2\", emitter_type_int, &val);\n\temitter_table_kv(emitter, \"I3\", emitter_type_int, &val);\n\temitter_table_dict_begin(emitter, \"T1\");\n\temitter_kv(emitter, \"i4\", \"I4\", emitter_type_int, &val);\n\temitter_json_object_end(emitter); /* Close j1 */\n\temitter_kv(emitter, \"i5\", \"I5\", emitter_type_int, &val);\n\temitter_table_dict_end(emitter); /* Close T1 */\n\temitter_kv(emitter, \"i6\", \"I6\", emitter_type_int, &val);\n\temitter_dict_end(emitter); /* Close j0 / T0 */\n\temitter_end(emitter);\n}\n\nconst char *modal_json =\n\"{\\n\"\n\"\\t\\\"j0\\\": {\\n\"\n\"\\t\\t\\\"j1\\\": {\\n\"\n\"\\t\\t\\t\\\"i1\\\": 123,\\n\"\n\"\\t\\t\\t\\\"i2\\\": 123,\\n\"\n\"\\t\\t\\t\\\"i4\\\": 123\\n\"\n\"\\t\\t},\\n\"\n\"\\t\\t\\\"i5\\\": 123,\\n\"\n\"\\t\\t\\\"i6\\\": 123\\n\"\n\"\\t}\\n\"\n\"}\\n\";\nconst char *modal_json_compact 
=\n\"{\"\n\t\"\\\"j0\\\":{\"\n\t\t\"\\\"j1\\\":{\"\n\t\t\t\"\\\"i1\\\":123,\"\n\t\t\t\"\\\"i2\\\":123,\"\n\t\t\t\"\\\"i4\\\":123\"\n\t\t\"},\"\n\t\t\"\\\"i5\\\":123,\"\n\t\t\"\\\"i6\\\":123\"\n\t\"}\"\n\"}\";\nconst char *modal_table =\n\"T0\\n\"\n\"  I1: 123\\n\"\n\"  I3: 123\\n\"\n\"  T1\\n\"\n\"    I4: 123\\n\"\n\"    I5: 123\\n\"\n\"  I6: 123\\n\";\n\nstatic void\nemit_json_array(emitter_t *emitter) {\n\tint ival = 123;\n\n\temitter_begin(emitter);\n\temitter_json_key(emitter, \"dict\");\n\temitter_json_object_begin(emitter);\n\temitter_json_key(emitter, \"arr\");\n\temitter_json_array_begin(emitter);\n\temitter_json_object_begin(emitter);\n\temitter_json_kv(emitter, \"foo\", emitter_type_int, &ival);\n\temitter_json_object_end(emitter); /* Close arr[0] */\n\t/* arr[1] and arr[2] are primitives. */\n\temitter_json_value(emitter, emitter_type_int, &ival);\n\temitter_json_value(emitter, emitter_type_int, &ival);\n\temitter_json_object_begin(emitter);\n\temitter_json_kv(emitter, \"bar\", emitter_type_int, &ival);\n\temitter_json_kv(emitter, \"baz\", emitter_type_int, &ival);\n\temitter_json_object_end(emitter); /* Close arr[3]. */\n\temitter_json_array_end(emitter); /* Close arr. */\n\temitter_json_object_end(emitter); /* Close dict. 
*/\n\temitter_end(emitter);\n}\n\nstatic const char *json_array_json =\n\"{\\n\"\n\"\\t\\\"dict\\\": {\\n\"\n\"\\t\\t\\\"arr\\\": [\\n\"\n\"\\t\\t\\t{\\n\"\n\"\\t\\t\\t\\t\\\"foo\\\": 123\\n\"\n\"\\t\\t\\t},\\n\"\n\"\\t\\t\\t123,\\n\"\n\"\\t\\t\\t123,\\n\"\n\"\\t\\t\\t{\\n\"\n\"\\t\\t\\t\\t\\\"bar\\\": 123,\\n\"\n\"\\t\\t\\t\\t\\\"baz\\\": 123\\n\"\n\"\\t\\t\\t}\\n\"\n\"\\t\\t]\\n\"\n\"\\t}\\n\"\n\"}\\n\";\nstatic const char *json_array_json_compact =\n\"{\"\n\t\"\\\"dict\\\":{\"\n\t\t\"\\\"arr\\\":[\"\n\t\t\t\"{\"\n\t\t\t\t\"\\\"foo\\\":123\"\n\t\t\t\"},\"\n\t\t\t\"123,\"\n\t\t\t\"123,\"\n\t\t\t\"{\"\n\t\t\t\t\"\\\"bar\\\":123,\"\n\t\t\t\t\"\\\"baz\\\":123\"\n\t\t\t\"}\"\n\t\t\"]\"\n\t\"}\"\n\"}\";\nstatic const char *json_array_table = \"\";\n\nstatic void\nemit_json_nested_array(emitter_t *emitter) {\n\tint ival = 123;\n\tchar *sval = \"foo\";\n\temitter_begin(emitter);\n\temitter_json_array_begin(emitter);\n\t\temitter_json_array_begin(emitter);\n\t\temitter_json_value(emitter, emitter_type_int, &ival);\n\t\temitter_json_value(emitter, emitter_type_string, &sval);\n\t\temitter_json_value(emitter, emitter_type_int, &ival);\n\t\temitter_json_value(emitter, emitter_type_string, &sval);\n\t\temitter_json_array_end(emitter);\n\t\temitter_json_array_begin(emitter);\n\t\temitter_json_value(emitter, emitter_type_int, &ival);\n\t\temitter_json_array_end(emitter);\n\t\temitter_json_array_begin(emitter);\n\t\temitter_json_value(emitter, emitter_type_string, &sval);\n\t\temitter_json_value(emitter, emitter_type_int, &ival);\n\t\temitter_json_array_end(emitter);\n\t\temitter_json_array_begin(emitter);\n\t\temitter_json_array_end(emitter);\n\temitter_json_array_end(emitter);\n\temitter_end(emitter);\n}\n\nstatic const char *json_nested_array_json 
=\n\"{\\n\"\n\"\\t[\\n\"\n\"\\t\\t[\\n\"\n\"\\t\\t\\t123,\\n\"\n\"\\t\\t\\t\\\"foo\\\",\\n\"\n\"\\t\\t\\t123,\\n\"\n\"\\t\\t\\t\\\"foo\\\"\\n\"\n\"\\t\\t],\\n\"\n\"\\t\\t[\\n\"\n\"\\t\\t\\t123\\n\"\n\"\\t\\t],\\n\"\n\"\\t\\t[\\n\"\n\"\\t\\t\\t\\\"foo\\\",\\n\"\n\"\\t\\t\\t123\\n\"\n\"\\t\\t],\\n\"\n\"\\t\\t[\\n\"\n\"\\t\\t]\\n\"\n\"\\t]\\n\"\n\"}\\n\";\nstatic const char *json_nested_array_json_compact =\n\"{\"\n\t\"[\"\n\t\t\"[\"\n\t\t\t\"123,\"\n\t\t\t\"\\\"foo\\\",\"\n\t\t\t\"123,\"\n\t\t\t\"\\\"foo\\\"\"\n\t\t\"],\"\n\t\t\"[\"\n\t\t\t\"123\"\n\t\t\"],\"\n\t\t\"[\"\n\t\t\t\"\\\"foo\\\",\"\n\t\t\t\"123\"\n\t\t\"],\"\n\t\t\"[\"\n\t\t\"]\"\n\t\"]\"\n\"}\";\nstatic const char *json_nested_array_table = \"\";\n\nstatic void\nemit_table_row(emitter_t *emitter) {\n\temitter_begin(emitter);\n\temitter_row_t row;\n\temitter_col_t abc = {emitter_justify_left, 10, emitter_type_title, {0}, {0, 0}};\n\tabc.str_val = \"ABC title\";\n\temitter_col_t def = {emitter_justify_right, 15, emitter_type_title, {0}, {0, 0}};\n\tdef.str_val = \"DEF title\";\n\temitter_col_t ghi = {emitter_justify_right, 5, emitter_type_title, {0}, {0, 0}};\n\tghi.str_val = \"GHI\";\n\n\temitter_row_init(&row);\n\temitter_col_init(&abc, &row);\n\temitter_col_init(&def, &row);\n\temitter_col_init(&ghi, &row);\n\n\temitter_table_row(emitter, &row);\n\n\tabc.type = emitter_type_int;\n\tdef.type = emitter_type_bool;\n\tghi.type = emitter_type_int;\n\n\tabc.int_val = 123;\n\tdef.bool_val = true;\n\tghi.int_val = 456;\n\temitter_table_row(emitter, &row);\n\n\tabc.int_val = 789;\n\tdef.bool_val = false;\n\tghi.int_val = 1011;\n\temitter_table_row(emitter, &row);\n\n\tabc.type = emitter_type_string;\n\tabc.str_val = \"a string\";\n\tdef.bool_val = false;\n\tghi.type = emitter_type_title;\n\tghi.str_val = \"ghi\";\n\temitter_table_row(emitter, &row);\n\n\temitter_end(emitter);\n}\n\nstatic const char *table_row_json =\n\"{\\n\"\n\"}\\n\";\nstatic const char *table_row_json_compact = \"{}\";\nstatic const char 
*table_row_table =\n\"ABC title       DEF title  GHI\\n\"\n\"123                  true  456\\n\"\n\"789                 false 1011\\n\"\n\"\\\"a string\\\"          false  ghi\\n\";\n\n#define GENERATE_TEST(feature)\t\t\t\t\t\\\nTEST_BEGIN(test_##feature) {\t\t\t\t\t\\\n\texpect_emit_output(emit_##feature, feature##_json,\t\\\n\t    feature##_json_compact, feature##_table);\t\t\\\n}\t\t\t\t\t\t\t\t\\\nTEST_END\n\nGENERATE_TEST(dict)\nGENERATE_TEST(table_printf)\nGENERATE_TEST(nested_dict)\nGENERATE_TEST(types)\nGENERATE_TEST(modal)\nGENERATE_TEST(json_array)\nGENERATE_TEST(json_nested_array)\nGENERATE_TEST(table_row)\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_dict,\n\t    test_table_printf,\n\t    test_nested_dict,\n\t    test_types,\n\t    test_modal,\n\t    test_json_array,\n\t    test_json_nested_array,\n\t    test_table_row);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/extent_quantize.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_small_extent_size) {\n\tunsigned nbins, i;\n\tsize_t sz, extent_size;\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\n\t/*\n\t * Iterate over all small size classes, get their extent sizes, and\n\t * verify that the quantized size is the same as the extent size.\n\t */\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.nbins\", (void *)&nbins, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.slab_size\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib failure\");\n\tfor (i = 0; i < nbins; i++) {\n\t\tmib[2] = i;\n\t\tsz = sizeof(size_t);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&extent_size, &sz,\n\t\t    NULL, 0), 0, \"Unexpected mallctlbymib failure\");\n\t\texpect_zu_eq(extent_size,\n\t\t    sz_psz_quantize_floor(extent_size),\n\t\t    \"Small extent quantization should be a no-op \"\n\t\t    \"(extent_size=%zu)\", extent_size);\n\t\texpect_zu_eq(extent_size,\n\t\t    sz_psz_quantize_ceil(extent_size),\n\t\t    \"Small extent quantization should be a no-op \"\n\t\t    \"(extent_size=%zu)\", extent_size);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_large_extent_size) {\n\tbool cache_oblivious;\n\tunsigned nlextents, i;\n\tsize_t sz, extent_size_prev, ceil_prev;\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\n\t/*\n\t * Iterate over all large size classes, get their extent sizes, and\n\t * verify that the quantized size is the same as the extent size.\n\t */\n\n\tsz = sizeof(bool);\n\texpect_d_eq(mallctl(\"opt.cache_oblivious\", (void *)&cache_oblivious,\n\t    &sz, NULL, 0), 0, \"Unexpected mallctl failure\");\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.nlextents\", (void *)&nlextents, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\n\texpect_d_eq(mallctlnametomib(\"arenas.lextent.0.size\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib 
failure\");\n\tfor (i = 0; i < nlextents; i++) {\n\t\tsize_t lextent_size, extent_size, floor, ceil;\n\n\t\tmib[2] = i;\n\t\tsz = sizeof(size_t);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&lextent_size,\n\t\t    &sz, NULL, 0), 0, \"Unexpected mallctlbymib failure\");\n\t\textent_size = cache_oblivious ? lextent_size + PAGE :\n\t\t    lextent_size;\n\t\tfloor = sz_psz_quantize_floor(extent_size);\n\t\tceil = sz_psz_quantize_ceil(extent_size);\n\n\t\texpect_zu_eq(extent_size, floor,\n\t\t    \"Extent quantization should be a no-op for precise size \"\n\t\t    \"(lextent_size=%zu, extent_size=%zu)\", lextent_size,\n\t\t    extent_size);\n\t\texpect_zu_eq(extent_size, ceil,\n\t\t    \"Extent quantization should be a no-op for precise size \"\n\t\t    \"(lextent_size=%zu, extent_size=%zu)\", lextent_size,\n\t\t    extent_size);\n\n\t\tif (i > 0) {\n\t\t\texpect_zu_eq(extent_size_prev,\n\t\t\t    sz_psz_quantize_floor(extent_size - PAGE),\n\t\t\t    \"Floor should be a precise size\");\n\t\t\tif (extent_size_prev < ceil_prev) {\n\t\t\t\texpect_zu_eq(ceil_prev, extent_size,\n\t\t\t\t    \"Ceiling should be a precise size \"\n\t\t\t\t    \"(extent_size_prev=%zu, ceil_prev=%zu, \"\n\t\t\t\t    \"extent_size=%zu)\", extent_size_prev,\n\t\t\t\t    ceil_prev, extent_size);\n\t\t\t}\n\t\t}\n\t\tif (i + 1 < nlextents) {\n\t\t\textent_size_prev = floor;\n\t\t\tceil_prev = sz_psz_quantize_ceil(extent_size +\n\t\t\t    PAGE);\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_monotonic) {\n#define SZ_MAX\tZU(4 * 1024 * 1024)\n\tunsigned i;\n\tsize_t floor_prev, ceil_prev;\n\n\tfloor_prev = 0;\n\tceil_prev = 0;\n\tfor (i = 1; i <= SZ_MAX >> LG_PAGE; i++) {\n\t\tsize_t extent_size, floor, ceil;\n\n\t\textent_size = i << LG_PAGE;\n\t\tfloor = sz_psz_quantize_floor(extent_size);\n\t\tceil = sz_psz_quantize_ceil(extent_size);\n\n\t\texpect_zu_le(floor, extent_size,\n\t\t    \"Floor should be <= (floor=%zu, extent_size=%zu, ceil=%zu)\",\n\t\t    floor, extent_size, 
ceil);\n\t\texpect_zu_ge(ceil, extent_size,\n\t\t    \"Ceiling should be >= (floor=%zu, extent_size=%zu, \"\n\t\t    \"ceil=%zu)\", floor, extent_size, ceil);\n\n\t\texpect_zu_le(floor_prev, floor, \"Floor should be monotonic \"\n\t\t    \"(floor_prev=%zu, floor=%zu, extent_size=%zu, ceil=%zu)\",\n\t\t    floor_prev, floor, extent_size, ceil);\n\t\texpect_zu_le(ceil_prev, ceil, \"Ceiling should be monotonic \"\n\t\t    \"(floor=%zu, extent_size=%zu, ceil_prev=%zu, ceil=%zu)\",\n\t\t    floor, extent_size, ceil_prev, ceil);\n\n\t\tfloor_prev = floor;\n\t\tceil_prev = ceil;\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_small_extent_size,\n\t    test_large_extent_size,\n\t    test_monotonic);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/fb.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/fb.h\"\n#include \"test/nbits.h\"\n\nstatic void\ndo_test_init(size_t nbits) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb = malloc(sz);\n\t/* Junk fb's contents. */\n\tmemset(fb, 99, sz);\n\tfb_init(fb, nbits);\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_false(fb_get(fb, nbits, i),\n\t\t    \"bitmap should start empty\");\n\t}\n\tfree(fb);\n}\n\nTEST_BEGIN(test_fb_init) {\n#define NB(nbits) \\\n\tdo_test_init(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ndo_test_get_set_unset(size_t nbits) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb = malloc(sz);\n\tfb_init(fb, nbits);\n\t/* Set the bits divisible by 3. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif (i % 3 == 0) {\n\t\t\tfb_set(fb, nbits, i);\n\t\t}\n\t}\n\t/* Check them. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_b_eq(i % 3 == 0, fb_get(fb, nbits, i),\n\t\t    \"Unexpected bit at position %zu\", i);\n\t}\n\t/* Unset those divisible by 5. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif (i % 5 == 0) {\n\t\t\tfb_unset(fb, nbits, i);\n\t\t}\n\t}\n\t/* Check them. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_b_eq(i % 3 == 0 && i % 5 != 0, fb_get(fb, nbits, i),\n\t\t    \"Unexpected bit at position %zu\", i);\n\t}\n\tfree(fb);\n}\n\nTEST_BEGIN(test_get_set_unset) {\n#define NB(nbits) \\\n\tdo_test_get_set_unset(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic ssize_t\nfind_3_5_compute(ssize_t i, size_t nbits, bool bit, bool forward) {\n\tfor(; i < (ssize_t)nbits && i >= 0; i += (forward ? 1 : -1)) {\n\t\tbool expected_bit = i % 3 == 0 || i % 5 == 0;\n\t\tif (expected_bit == bit) {\n\t\t\treturn i;\n\t\t}\n\t}\n\treturn forward ? 
(ssize_t)nbits : (ssize_t)-1;\n}\n\nstatic void\ndo_test_search_simple(size_t nbits) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb = malloc(sz);\n\tfb_init(fb, nbits);\n\n\t/* We pick multiples of 3 or 5. */\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif (i % 3 == 0) {\n\t\t\tfb_set(fb, nbits, i);\n\t\t}\n\t\t/* This tests double-setting a little, too. */\n\t\tif (i % 5 == 0) {\n\t\t\tfb_set(fb, nbits, i);\n\t\t}\n\t}\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tsize_t ffs_compute = find_3_5_compute(i, nbits, true, true);\n\t\tsize_t ffs_search = fb_ffs(fb, nbits, i);\n\t\texpect_zu_eq(ffs_compute, ffs_search, \"ffs mismatch at %zu\", i);\n\n\t\tssize_t fls_compute = find_3_5_compute(i, nbits, true, false);\n\t\tsize_t fls_search = fb_fls(fb, nbits, i);\n\t\texpect_zu_eq(fls_compute, fls_search, \"fls mismatch at %zu\", i);\n\n\t\tsize_t ffu_compute = find_3_5_compute(i, nbits, false, true);\n\t\tsize_t ffu_search = fb_ffu(fb, nbits, i);\n\t\texpect_zu_eq(ffu_compute, ffu_search, \"ffu mismatch at %zu\", i);\n\n\t\tsize_t flu_compute = find_3_5_compute(i, nbits, false, false);\n\t\tsize_t flu_search = fb_flu(fb, nbits, i);\n\t\texpect_zu_eq(flu_compute, flu_search, \"flu mismatch at %zu\", i);\n\t}\n\n\tfree(fb);\n}\n\nTEST_BEGIN(test_search_simple) {\n#define NB(nbits) \\\n\tdo_test_search_simple(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\nexpect_exhaustive_results(fb_group_t *mostly_full, fb_group_t *mostly_empty,\n    size_t nbits, size_t special_bit, size_t position) {\n\tif (position < special_bit) {\n\t\texpect_zu_eq(special_bit, fb_ffs(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(-1, fb_fls(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(position, fb_ffu(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position, 
fb_flu(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\n\t\texpect_zu_eq(position, fb_ffs(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position, fb_fls(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(special_bit, fb_ffu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(-1, fb_flu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t} else if (position == special_bit) {\n\t\texpect_zu_eq(special_bit, fb_ffs(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(special_bit, fb_fls(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(position + 1, fb_ffu(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position - 1, fb_flu(mostly_empty, nbits,\n\t\t    position), \"mismatch at %zu, %zu\", position, special_bit);\n\n\t\texpect_zu_eq(position + 1, fb_ffs(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position - 1, fb_fls(mostly_full, nbits,\n\t\t    position), \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(position, fb_ffu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position, fb_flu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t} else {\n\t\t/* position > special_bit. 
*/\n\t\texpect_zu_eq(nbits, fb_ffs(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(special_bit, fb_fls(mostly_empty, nbits,\n\t\t    position), \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(position, fb_ffu(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position, fb_flu(mostly_empty, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\n\t\texpect_zu_eq(position, fb_ffs(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(position, fb_fls(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zu_eq(nbits, fb_ffu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t\texpect_zd_eq(special_bit, fb_flu(mostly_full, nbits, position),\n\t\t    \"mismatch at %zu, %zu\", position, special_bit);\n\t}\n}\n\nstatic void\ndo_test_search_exhaustive(size_t nbits) {\n\t/* This test is quadratic; let's not get too big. 
*/\n\tif (nbits > 1000) {\n\t\treturn;\n\t}\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *empty = malloc(sz);\n\tfb_init(empty, nbits);\n\tfb_group_t *full = malloc(sz);\n\tfb_init(full, nbits);\n\tfb_set_range(full, nbits, 0, nbits);\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfb_set(empty, nbits, i);\n\t\tfb_unset(full, nbits, i);\n\n\t\tfor (size_t j = 0; j < nbits; j++) {\n\t\t\texpect_exhaustive_results(full, empty, nbits, i, j);\n\t\t}\n\t\tfb_unset(empty, nbits, i);\n\t\tfb_set(full, nbits, i);\n\t}\n\n\tfree(empty);\n\tfree(full);\n}\n\nTEST_BEGIN(test_search_exhaustive) {\n#define NB(nbits) \\\n\tdo_test_search_exhaustive(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nTEST_BEGIN(test_range_simple) {\n\t/*\n\t * Just pick a constant big enough to have nontrivial middle sizes, and\n\t * big enough that usages of things like weirdnum (below) near the\n\t * beginning fit comfortably into the beginning of the bitmap.\n\t */\n\tsize_t nbits = 64 * 10;\n\tsize_t ngroups = FB_NGROUPS(nbits);\n\tfb_group_t *fb = malloc(sizeof(fb_group_t) * ngroups);\n\tfb_init(fb, nbits);\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif (i % 2 == 0) {\n\t\t\tfb_set_range(fb, nbits, i, 1);\n\t\t}\n\t}\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_b_eq(i % 2 == 0, fb_get(fb, nbits, i),\n\t\t    \"mismatch at position %zu\", i);\n\t}\n\tfb_set_range(fb, nbits, 0, nbits / 2);\n\tfb_unset_range(fb, nbits, nbits / 2, nbits / 2);\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_b_eq(i < nbits / 2, fb_get(fb, nbits, i),\n\t\t    \"mismatch at position %zu\", i);\n\t}\n\n\tstatic const size_t weirdnum = 7;\n\tfb_set_range(fb, nbits, 0, nbits);\n\tfb_unset_range(fb, nbits, weirdnum, FB_GROUP_BITS + weirdnum);\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_b_eq(7 <= i && i <= 2 * weirdnum + FB_GROUP_BITS - 1,\n\t\t    !fb_get(fb, nbits, i), \"mismatch at position %zu\", i);\n\t}\n\tfree(fb);\n}\nTEST_END\n\nstatic 
void\ndo_test_empty_full_exhaustive(size_t nbits) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *empty = malloc(sz);\n\tfb_init(empty, nbits);\n\tfb_group_t *full = malloc(sz);\n\tfb_init(full, nbits);\n\tfb_set_range(full, nbits, 0, nbits);\n\n\texpect_true(fb_full(full, nbits), \"\");\n\texpect_false(fb_empty(full, nbits), \"\");\n\texpect_false(fb_full(empty, nbits), \"\");\n\texpect_true(fb_empty(empty, nbits), \"\");\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfb_set(empty, nbits, i);\n\t\tfb_unset(full, nbits, i);\n\n\t\texpect_false(fb_empty(empty, nbits), \"error at bit %zu\", i);\n\t\tif (nbits != 1) {\n\t\t\texpect_false(fb_full(empty, nbits),\n\t\t\t    \"error at bit %zu\", i);\n\t\t\texpect_false(fb_empty(full, nbits),\n\t\t\t    \"error at bit %zu\", i);\n\t\t} else {\n\t\t\texpect_true(fb_full(empty, nbits),\n\t\t\t    \"error at bit %zu\", i);\n\t\t\texpect_true(fb_empty(full, nbits),\n\t\t\t    \"error at bit %zu\", i);\n\t\t}\n\t\texpect_false(fb_full(full, nbits), \"error at bit %zu\", i);\n\n\t\tfb_unset(empty, nbits, i);\n\t\tfb_set(full, nbits, i);\n\t}\n\n\tfree(empty);\n\tfree(full);\n}\n\nTEST_BEGIN(test_empty_full) {\n#define NB(nbits) \\\n\tdo_test_empty_full_exhaustive(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\n/*\n * This tests both iter_range and the longest range functionality, which is\n * built closely on top of it.\n */\nTEST_BEGIN(test_iter_range_simple) {\n\tsize_t set_limit = 30;\n\tsize_t nbits = 100;\n\tfb_group_t fb[FB_NGROUPS(100)];\n\n\tfb_init(fb, nbits);\n\n\t/*\n\t * Failing to initialize these can lead to build failures with -Wall;\n\t * the compiler can't prove that they're set.\n\t */\n\tsize_t begin = (size_t)-1;\n\tsize_t len = (size_t)-1;\n\tbool result;\n\n\t/* A set of checks with only the first set_limit bits *set*. 
*/\n\tfb_set_range(fb, nbits, 0, set_limit);\n\texpect_zu_eq(set_limit, fb_srange_longest(fb, nbits),\n\t    \"Incorrect longest set range\");\n\texpect_zu_eq(nbits - set_limit, fb_urange_longest(fb, nbits),\n\t    \"Incorrect longest unset range\");\n\tfor (size_t i = 0; i < set_limit; i++) {\n\t\tresult = fb_srange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(i, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(set_limit - i, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(set_limit, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(nbits - set_limit, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_srange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(0, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(i + 1, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_false(result, \"Should not have found a range at %zu\", i);\n\t}\n\tfor (size_t i = set_limit; i < nbits; i++) {\n\t\tresult = fb_srange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_false(result, \"Should not have found a range at %zu\", i);\n\n\t\tresult = fb_urange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(i, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(nbits - i, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_srange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(0, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(set_limit, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at 
%zu\", i);\n\t\texpect_zu_eq(set_limit, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(i - set_limit + 1, len, \"Incorrect len at %zu\", i);\n\t}\n\n\t/* A set of checks with only the first set_limit bits *unset*. */\n\tfb_unset_range(fb, nbits, 0, set_limit);\n\tfb_set_range(fb, nbits, set_limit, nbits - set_limit);\n\texpect_zu_eq(nbits - set_limit, fb_srange_longest(fb, nbits),\n\t    \"Incorrect longest set range\");\n\texpect_zu_eq(set_limit, fb_urange_longest(fb, nbits),\n\t    \"Incorrect longest unset range\");\n\tfor (size_t i = 0; i < set_limit; i++) {\n\t\tresult = fb_srange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(set_limit, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(nbits - set_limit, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(i, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(set_limit - i, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_srange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_false(result, \"Should not have found a range at %zu\", i);\n\n\t\tresult = fb_urange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(0, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(i + 1, len, \"Incorrect len at %zu\", i);\n\t}\n\tfor (size_t i = set_limit; i < nbits; i++) {\n\t\tresult = fb_srange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(i, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(nbits - i, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_iter(fb, nbits, i, &begin, &len);\n\t\texpect_false(result, \"Should not have found a range at %zu\", i);\n\n\t\tresult = fb_srange_riter(fb, nbits, i, &begin, 
&len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(set_limit, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(i - set_limit + 1, len, \"Incorrect len at %zu\", i);\n\n\t\tresult = fb_urange_riter(fb, nbits, i, &begin, &len);\n\t\texpect_true(result, \"Should have found a range at %zu\", i);\n\t\texpect_zu_eq(0, begin, \"Incorrect begin at %zu\", i);\n\t\texpect_zu_eq(set_limit, len, \"Incorrect len at %zu\", i);\n\t}\n\n}\nTEST_END\n\n/*\n * Doing this bit-by-bit is too slow for a real implementation, but for testing\n * code, it's easy to get right.  In the exhaustive tests, we'll compare the\n * (fast but tricky) real implementation against the (slow but simple) testing\n * one.\n */\nstatic bool\nfb_iter_simple(fb_group_t *fb, size_t nbits, size_t start, size_t *r_begin,\n    size_t *r_len, bool val, bool forward) {\n\tssize_t stride = (forward ? (ssize_t)1 : (ssize_t)-1);\n\tssize_t range_begin = (ssize_t)start;\n\tfor (; range_begin != (ssize_t)nbits && range_begin != -1;\n\t    range_begin += stride) {\n\t\tif (fb_get(fb, nbits, range_begin) == val) {\n\t\t\tssize_t range_end = range_begin;\n\t\t\tfor (; range_end != (ssize_t)nbits && range_end != -1;\n\t\t\t    range_end += stride) {\n\t\t\t\tif (fb_get(fb, nbits, range_end) != val) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (forward) {\n\t\t\t\t*r_begin = range_begin;\n\t\t\t\t*r_len = range_end - range_begin;\n\t\t\t} else {\n\t\t\t\t*r_begin = range_end + 1;\n\t\t\t\t*r_len = range_begin - range_end;\n\t\t\t}\n\t\t\treturn true;\n\t\t}\n\t}\n\treturn false;\n}\n\n/* Similar, but for finding longest ranges. 
*/\nstatic size_t\nfb_range_longest_simple(fb_group_t *fb, size_t nbits, bool val) {\n\tsize_t longest_so_far = 0;\n\tfor (size_t begin = 0; begin < nbits; begin++) {\n\t\tif (fb_get(fb, nbits, begin) != val) {\n\t\t\tcontinue;\n\t\t}\n\t\tsize_t end = begin + 1;\n\t\tfor (; end < nbits; end++) {\n\t\t\tif (fb_get(fb, nbits, end) != val) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (end - begin > longest_so_far) {\n\t\t\tlongest_so_far = end - begin;\n\t\t}\n\t}\n\treturn longest_so_far;\n}\n\nstatic void\nexpect_iter_results_at(fb_group_t *fb, size_t nbits, size_t pos,\n    bool val, bool forward) {\n\tbool iter_res;\n\tsize_t iter_begin JEMALLOC_CC_SILENCE_INIT(0);\n\tsize_t iter_len JEMALLOC_CC_SILENCE_INIT(0);\n\tif (val) {\n\t\tif (forward) {\n\t\t\titer_res = fb_srange_iter(fb, nbits, pos,\n\t\t\t    &iter_begin, &iter_len);\n\t\t} else {\n\t\t\titer_res = fb_srange_riter(fb, nbits, pos,\n\t\t\t    &iter_begin, &iter_len);\n\t\t}\n\t} else {\n\t\tif (forward) {\n\t\t\titer_res = fb_urange_iter(fb, nbits, pos,\n\t\t\t    &iter_begin, &iter_len);\n\t\t} else {\n\t\t\titer_res = fb_urange_riter(fb, nbits, pos,\n\t\t\t    &iter_begin, &iter_len);\n\t\t}\n\t}\n\n\tbool simple_iter_res;\n\t/*\n\t * These are dead stores, but the compiler can't always figure that out\n\t * statically, and warns on the uninitialized variable.\n\t */\n\tsize_t simple_iter_begin = 0;\n\tsize_t simple_iter_len = 0;\n\tsimple_iter_res = fb_iter_simple(fb, nbits, pos, &simple_iter_begin,\n\t    &simple_iter_len, val, forward);\n\n\texpect_b_eq(iter_res, simple_iter_res, \"Result mismatch at %zu\", pos);\n\tif (iter_res && simple_iter_res) {\n\t\tassert_zu_eq(iter_begin, simple_iter_begin,\n\t\t    \"Begin mismatch at %zu\", pos);\n\t\texpect_zu_eq(iter_len, simple_iter_len,\n\t\t    \"Length mismatch at %zu\", pos);\n\t}\n}\n\nstatic void\nexpect_iter_results(fb_group_t *fb, size_t nbits) {\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\texpect_iter_results_at(fb, nbits, i, false, 
false);\n\t\texpect_iter_results_at(fb, nbits, i, false, true);\n\t\texpect_iter_results_at(fb, nbits, i, true, false);\n\t\texpect_iter_results_at(fb, nbits, i, true, true);\n\t}\n\texpect_zu_eq(fb_range_longest_simple(fb, nbits, true),\n\t    fb_srange_longest(fb, nbits), \"Longest range mismatch\");\n\texpect_zu_eq(fb_range_longest_simple(fb, nbits, false),\n\t    fb_urange_longest(fb, nbits), \"Longest range mismatch\");\n}\n\nstatic void\nset_pattern_3(fb_group_t *fb, size_t nbits, bool zero_val) {\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif ((i % 6 < 3 && zero_val) || (i % 6 >= 3 && !zero_val)) {\n\t\t\tfb_set(fb, nbits, i);\n\t\t} else {\n\t\t\tfb_unset(fb, nbits, i);\n\t\t}\n\t}\n}\n\nstatic void\ndo_test_iter_range_exhaustive(size_t nbits) {\n\t/* This test is also pretty slow. */\n\tif (nbits > 1000) {\n\t\treturn;\n\t}\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb = malloc(sz);\n\tfb_init(fb, nbits);\n\n\tset_pattern_3(fb, nbits, /* zero_val */ true);\n\texpect_iter_results(fb, nbits);\n\n\tset_pattern_3(fb, nbits, /* zero_val */ false);\n\texpect_iter_results(fb, nbits);\n\n\tfb_set_range(fb, nbits, 0, nbits);\n\tfb_unset_range(fb, nbits, 0, nbits / 2 == 0 ? 1 : nbits / 2);\n\texpect_iter_results(fb, nbits);\n\n\tfb_unset_range(fb, nbits, 0, nbits);\n\tfb_set_range(fb, nbits, 0, nbits / 2 == 0 ? 1: nbits / 2);\n\texpect_iter_results(fb, nbits);\n\n\tfree(fb);\n}\n\n/*\n * Like test_iter_range_simple, this tests both iteration and longest-range\n * computation.\n */\nTEST_BEGIN(test_iter_range_exhaustive) {\n#define NB(nbits) \\\n\tdo_test_iter_range_exhaustive(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\n/*\n * If all set bits in the bitmap are contiguous, in [set_start, set_end),\n * returns the number of set bits in [scount_start, scount_end).\n */\nstatic size_t\nscount_contiguous(size_t set_start, size_t set_end, size_t scount_start,\n    size_t scount_end) {\n\t/* No overlap. 
*/\n\tif (set_end <= scount_start || scount_end <= set_start) {\n\t\treturn 0;\n\t}\n\t/* set range contains scount range */\n\tif (set_start <= scount_start && set_end >= scount_end) {\n\t\treturn scount_end - scount_start;\n\t}\n\t/* scount range contains set range. */\n\tif (scount_start <= set_start && scount_end >= set_end) {\n\t\treturn set_end - set_start;\n\t}\n\t/* Partial overlap, with set range starting first. */\n\tif (set_start < scount_start && set_end < scount_end) {\n\t\treturn set_end - scount_start;\n\t}\n\t/* Partial overlap, with scount range starting first. */\n\tif (scount_start < set_start && scount_end < set_end) {\n\t\treturn scount_end - set_start;\n\t}\n\t/*\n\t * Trigger an assert failure; the above list should have been\n\t * exhaustive.\n\t */\n\tunreachable();\n}\n\nstatic size_t\nucount_contiguous(size_t set_start, size_t set_end, size_t ucount_start,\n    size_t ucount_end) {\n\t/* No overlap. */\n\tif (set_end <= ucount_start || ucount_end <= set_start) {\n\t\treturn ucount_end - ucount_start;\n\t}\n\t/* set range contains ucount range */\n\tif (set_start <= ucount_start && set_end >= ucount_end) {\n\t\treturn 0;\n\t}\n\t/* ucount range contains set range. */\n\tif (ucount_start <= set_start && ucount_end >= set_end) {\n\t\treturn (ucount_end - ucount_start) - (set_end - set_start);\n\t}\n\t/* Partial overlap, with set range starting first. */\n\tif (set_start < ucount_start && set_end < ucount_end) {\n\t\treturn ucount_end - set_end;\n\t}\n\t/* Partial overlap, with ucount range starting first. 
*/\n\tif (ucount_start < set_start && ucount_end < set_end) {\n\t\treturn set_start - ucount_start;\n\t}\n\t/*\n\t * Trigger an assert failure; the above list should have been\n\t * exhaustive.\n\t */\n\tunreachable();\n}\n\nstatic void\nexpect_count_match_contiguous(fb_group_t *fb, size_t nbits, size_t set_start,\n    size_t set_end) {\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfor (size_t j = i + 1; j <= nbits; j++) {\n\t\t\tsize_t cnt = j - i;\n\t\t\tsize_t scount_expected = scount_contiguous(set_start,\n\t\t\t    set_end, i, j);\n\t\t\tsize_t scount_computed = fb_scount(fb, nbits, i, cnt);\n\t\t\texpect_zu_eq(scount_expected, scount_computed,\n\t\t\t    \"fb_scount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with bits set in [%zu, %zu)\",\n\t\t\t    nbits, i, cnt, set_start, set_end);\n\n\t\t\tsize_t ucount_expected = ucount_contiguous(set_start,\n\t\t\t    set_end, i, j);\n\t\t\tsize_t ucount_computed = fb_ucount(fb, nbits, i, cnt);\n\t\t\tassert_zu_eq(ucount_expected, ucount_computed,\n\t\t\t    \"fb_ucount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with bits set in [%zu, %zu)\",\n\t\t\t    nbits, i, cnt, set_start, set_end);\n\n\t\t}\n\t}\n}\n\nstatic void\ndo_test_count_contiguous(size_t nbits) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb = malloc(sz);\n\n\tfb_init(fb, nbits);\n\n\texpect_count_match_contiguous(fb, nbits, 0, 0);\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfb_set(fb, nbits, i);\n\t\texpect_count_match_contiguous(fb, nbits, 0, i + 1);\n\t}\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfb_unset(fb, nbits, i);\n\t\texpect_count_match_contiguous(fb, nbits, i + 1, nbits);\n\t}\n\n\tfree(fb);\n}\n\nTEST_BEGIN(test_count_contiguous_simple) {\n\tenum {nbits = 300};\n\tfb_group_t fb[FB_NGROUPS(nbits)];\n\tfb_init(fb, nbits);\n\t/* Just an arbitrary number. 
*/\n\tsize_t start = 23;\n\n\tfb_set_range(fb, nbits, start, 30 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 30);\n\n\tfb_set_range(fb, nbits, start, 40 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 40);\n\n\tfb_set_range(fb, nbits, start, 70 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 70);\n\n\tfb_set_range(fb, nbits, start, 120 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 120);\n\n\tfb_set_range(fb, nbits, start, 150 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 150);\n\n\tfb_set_range(fb, nbits, start, 200 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 200);\n\n\tfb_set_range(fb, nbits, start, 290 - start);\n\texpect_count_match_contiguous(fb, nbits, start, 290);\n}\nTEST_END\n\nTEST_BEGIN(test_count_contiguous) {\n#define NB(nbits) \\\n\t/* This test is *particularly* slow in debug builds. */ \\\n\tif ((!config_debug && nbits < 300) || nbits < 150) { \\\n\t\tdo_test_count_contiguous(nbits); \\\n\t}\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\nexpect_count_match_alternating(fb_group_t *fb_even, fb_group_t *fb_odd,\n    size_t nbits) {\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tfor (size_t j = i + 1; j <= nbits; j++) {\n\t\t\tsize_t cnt = j - i;\n\t\t\tsize_t odd_scount = cnt / 2\n\t\t\t    + (size_t)(cnt % 2 == 1 && i % 2 == 1);\n\t\t\tsize_t odd_scount_computed = fb_scount(fb_odd, nbits,\n\t\t\t    i, j - i);\n\t\t\tassert_zu_eq(odd_scount, odd_scount_computed,\n\t\t\t    \"fb_scount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with alternating bits set.\",\n\t\t\t    nbits, i, j - i);\n\n\t\t\tsize_t odd_ucount = cnt / 2\n\t\t\t    + (size_t)(cnt % 2 == 1 && i % 2 == 0);\n\t\t\tsize_t odd_ucount_computed = fb_ucount(fb_odd, nbits,\n\t\t\t    i, j - i);\n\t\t\tassert_zu_eq(odd_ucount, odd_ucount_computed,\n\t\t\t    \"fb_ucount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with alternating bits set.\",\n\t\t\t    nbits, i, j - 
i);\n\n\t\t\tsize_t even_scount = cnt / 2\n\t\t\t    + (size_t)(cnt % 2 == 1 && i % 2 == 0);\n\t\t\tsize_t even_scount_computed = fb_scount(fb_even, nbits,\n\t\t\t    i, j - i);\n\t\t\tassert_zu_eq(even_scount, even_scount_computed,\n\t\t\t    \"fb_scount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with alternating bits set.\",\n\t\t\t    nbits, i, j - i);\n\n\t\t\tsize_t even_ucount = cnt / 2\n\t\t\t    + (size_t)(cnt % 2 == 1 && i % 2 == 1);\n\t\t\tsize_t even_ucount_computed = fb_ucount(fb_even, nbits,\n\t\t\t    i, j - i);\n\t\t\tassert_zu_eq(even_ucount, even_ucount_computed,\n\t\t\t    \"fb_ucount error with nbits=%zu, start=%zu, \"\n\t\t\t    \"cnt=%zu, with alternating bits set.\",\n\t\t\t    nbits, i, j - i);\n\t\t}\n\t}\n}\n\nstatic void\ndo_test_count_alternating(size_t nbits) {\n\tif (nbits > 1000) {\n\t\treturn;\n\t}\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb_even = malloc(sz);\n\tfb_group_t *fb_odd = malloc(sz);\n\n\tfb_init(fb_even, nbits);\n\tfb_init(fb_odd, nbits);\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tif (i % 2 == 0) {\n\t\t\tfb_set(fb_even, nbits, i);\n\t\t} else {\n\t\t\tfb_set(fb_odd, nbits, i);\n\t\t}\n\t}\n\n\texpect_count_match_alternating(fb_even, fb_odd, nbits);\n\n\tfree(fb_even);\n\tfree(fb_odd);\n}\n\nTEST_BEGIN(test_count_alternating) {\n#define NB(nbits) \\\n\tdo_test_count_alternating(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic void\ndo_test_bit_op(size_t nbits, bool (*op)(bool a, bool b),\n    void (*fb_op)(fb_group_t *dst, fb_group_t *src1, fb_group_t *src2, size_t nbits)) {\n\tsize_t sz = FB_NGROUPS(nbits) * sizeof(fb_group_t);\n\tfb_group_t *fb1 = malloc(sz);\n\tfb_group_t *fb2 = malloc(sz);\n\tfb_group_t *fb_result = malloc(sz);\n\tfb_init(fb1, nbits);\n\tfb_init(fb2, nbits);\n\tfb_init(fb_result, nbits);\n\n\t/* Just two random numbers. 
*/\n\tconst uint64_t prng_init1 = (uint64_t)0X4E9A9DE6A35691CDULL;\n\tconst uint64_t prng_init2 = (uint64_t)0X7856E396B063C36EULL;\n\n\tuint64_t prng1 = prng_init1;\n\tuint64_t prng2 = prng_init2;\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tbool bit1 = ((prng1 & (1ULL << (i % 64))) != 0);\n\t\tbool bit2 = ((prng2 & (1ULL << (i % 64))) != 0);\n\n\t\tif (bit1) {\n\t\t\tfb_set(fb1, nbits, i);\n\t\t}\n\t\tif (bit2) {\n\t\t\tfb_set(fb2, nbits, i);\n\t\t}\n\n\t\tif (i % 64 == 0) {\n\t\t\tprng1 = prng_state_next_u64(prng1);\n\t\t\tprng2 = prng_state_next_u64(prng2);\n\t\t}\n\t}\n\n\tfb_op(fb_result, fb1, fb2, nbits);\n\n\t/* Reset the prngs to replay them. */\n\tprng1 = prng_init1;\n\tprng2 = prng_init2;\n\n\tfor (size_t i = 0; i < nbits; i++) {\n\t\tbool bit1 = ((prng1 & (1ULL << (i % 64))) != 0);\n\t\tbool bit2 = ((prng2 & (1ULL << (i % 64))) != 0);\n\n\t\t/* Original bitmaps shouldn't change. */\n\t\texpect_b_eq(bit1, fb_get(fb1, nbits, i), \"difference at bit %zu\", i);\n\t\texpect_b_eq(bit2, fb_get(fb2, nbits, i), \"difference at bit %zu\", i);\n\n\t\t/* New one should be the result of applying op. */\n\t\texpect_b_eq(op(bit1, bit2), fb_get(fb_result, nbits, i),\n\t\t    \"difference at bit %zu\", i);\n\n\t\t/* Update the same way we did last time. 
*/\n\t\tif (i % 64 == 0) {\n\t\t\tprng1 = prng_state_next_u64(prng1);\n\t\t\tprng2 = prng_state_next_u64(prng2);\n\t\t}\n\t}\n\n\tfree(fb1);\n\tfree(fb2);\n\tfree(fb_result);\n}\n\nstatic bool\nbinary_and(bool a, bool b) {\n\treturn a & b;\n}\n\nstatic void\ndo_test_bit_and(size_t nbits) {\n\tdo_test_bit_op(nbits, &binary_and, &fb_bit_and);\n}\n\nTEST_BEGIN(test_bit_and) {\n#define NB(nbits) \\\n\tdo_test_bit_and(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic bool\nbinary_or(bool a, bool b) {\n\treturn a | b;\n}\n\nstatic void\ndo_test_bit_or(size_t nbits) {\n\tdo_test_bit_op(nbits, &binary_or, &fb_bit_or);\n}\n\nTEST_BEGIN(test_bit_or) {\n#define NB(nbits) \\\n\tdo_test_bit_or(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nstatic bool\nbinary_not(bool a, bool b) {\n\t(void)b;\n\treturn !a;\n}\n\nstatic void\nfb_bit_not_shim(fb_group_t *dst, fb_group_t *src1, fb_group_t *src2,\n    size_t nbits) {\n\t(void)src2;\n\tfb_bit_not(dst, src1, nbits);\n}\n\nstatic void\ndo_test_bit_not(size_t nbits) {\n\tdo_test_bit_op(nbits, &binary_not, &fb_bit_not_shim);\n}\n\nTEST_BEGIN(test_bit_not) {\n#define NB(nbits) \\\n\tdo_test_bit_not(nbits);\n\tNBITS_TAB\n#undef NB\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_fb_init,\n\t    test_get_set_unset,\n\t    test_search_simple,\n\t    test_search_exhaustive,\n\t    test_range_simple,\n\t    test_empty_full,\n\t    test_iter_range_simple,\n\t    test_iter_range_exhaustive,\n\t    test_count_contiguous_simple,\n\t    test_count_contiguous,\n\t    test_count_alternating,\n\t    test_bit_and,\n\t    test_bit_or,\n\t    test_bit_not);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/fork.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#ifndef _WIN32\n#include <sys/wait.h>\n#endif\n\n#ifndef _WIN32\nstatic void\nwait_for_child_exit(int pid) {\n\tint status;\n\twhile (true) {\n\t\tif (waitpid(pid, &status, 0) == -1) {\n\t\t\ttest_fail(\"Unexpected waitpid() failure.\");\n\t\t}\n\t\tif (WIFSIGNALED(status)) {\n\t\t\ttest_fail(\"Unexpected child termination due to \"\n\t\t\t    \"signal %d\", WTERMSIG(status));\n\t\t\tbreak;\n\t\t}\n\t\tif (WIFEXITED(status)) {\n\t\t\tif (WEXITSTATUS(status) != 0) {\n\t\t\t\ttest_fail(\"Unexpected child exit value %d\",\n\t\t\t\t    WEXITSTATUS(status));\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t}\n}\n#endif\n\nTEST_BEGIN(test_fork) {\n#ifndef _WIN32\n\tvoid *p;\n\tpid_t pid;\n\n\t/* Set up a manually managed arena for test. */\n\tunsigned arena_ind;\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\n\t/* Migrate to the new arena. */\n\tunsigned old_arena_ind;\n\tsz = sizeof(old_arena_ind);\n\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind, &sz,\n\t    (void *)&arena_ind, sizeof(arena_ind)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\n\tpid = fork();\n\n\tfree(p);\n\n\tp = malloc(64);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\tfree(p);\n\n\tif (pid == -1) {\n\t\t/* Error. */\n\t\ttest_fail(\"Unexpected fork() failure\");\n\t} else if (pid == 0) {\n\t\t/* Child. */\n\t\t_exit(0);\n\t} else {\n\t\twait_for_child_exit(pid);\n\t}\n#else\n\ttest_skip(\"fork(2) is irrelevant to Windows\");\n#endif\n}\nTEST_END\n\n#ifndef _WIN32\nstatic void *\ndo_fork_thd(void *arg) {\n\tmalloc(1);\n\tint pid = fork();\n\tif (pid == -1) {\n\t\t/* Error. */\n\t\ttest_fail(\"Unexpected fork() failure\");\n\t} else if (pid == 0) {\n\t\t/* Child. 
*/\n\t\tchar *args[] = {\"true\", NULL};\n\t\texecvp(args[0], args);\n\t\ttest_fail(\"Exec failed\");\n\t} else {\n\t\t/* Parent */\n\t\twait_for_child_exit(pid);\n\t}\n\treturn NULL;\n}\n#endif\n\n#ifndef _WIN32\nstatic void\ndo_test_fork_multithreaded() {\n\tthd_t child;\n\tthd_create(&child, do_fork_thd, NULL);\n\tdo_fork_thd(NULL);\n\tthd_join(child, NULL);\n}\n#endif\n\nTEST_BEGIN(test_fork_multithreaded) {\n#ifndef _WIN32\n\t/*\n\t * We've seen bugs involving hanging on arenas_lock (though the same\n\t * class of bugs can happen on any mutex).  The bugs are intermittent\n\t * though, so we want to run the test multiple times.  Since we hold the\n\t * arenas lock only early in the process lifetime, we can't just run\n\t * this test in a loop (since, after all the arenas are initialized, we\n\t * won't acquire arenas_lock any further).  We therefore repeat the test\n\t * with multiple processes.\n\t */\n\tfor (int i = 0; i < 100; i++) {\n\t\tint pid = fork();\n\t\tif (pid == -1) {\n\t\t\t/* Error. */\n\t\t\ttest_fail(\"Unexpected fork() failure\");\n\t\t} else if (pid == 0) {\n\t\t\t/* Child. */\n\t\t\tdo_test_fork_multithreaded();\n\t\t\t_exit(0);\n\t\t} else {\n\t\t\twait_for_child_exit(pid);\n\t\t}\n\t}\n#else\n\ttest_skip(\"fork(2) is irrelevant to Windows\");\n#endif\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_fork,\n\t    test_fork_multithreaded);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/fxp.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/fxp.h\"\n\nstatic double\nfxp2double(fxp_t a) {\n\tdouble intpart = (double)(a >> 16);\n\tdouble fracpart = (double)(a & ((1U << 16) - 1)) / (1U << 16);\n\treturn intpart + fracpart;\n}\n\n/* Is a close to b? */\nstatic bool\ndouble_close(double a, double b) {\n\t/*\n\t * Our implementation doesn't try for precision.  Correspondingly, don't\n\t * enforce it too strenuously here; accept values that are close in\n\t * either relative or absolute terms.\n\t */\n\treturn fabs(a - b) < 0.01 || fabs(a - b) / a < 0.01;\n}\n\nstatic bool\nfxp_close(fxp_t a, fxp_t b) {\n\treturn double_close(fxp2double(a), fxp2double(b));\n}\n\nstatic fxp_t\nxparse_fxp(const char *str) {\n\tfxp_t result;\n\tbool err = fxp_parse(&result, str, NULL);\n\tassert_false(err, \"Invalid fxp string: %s\", str);\n\treturn result;\n}\n\nstatic void\nexpect_parse_accurate(const char *str, const char *parse_str) {\n\tdouble true_val = strtod(str, NULL);\n\tfxp_t fxp_val;\n\tchar *end;\n\tbool err = fxp_parse(&fxp_val, parse_str, &end);\n\texpect_false(err, \"Unexpected parse failure\");\n\texpect_ptr_eq(parse_str + strlen(str), end,\n\t    \"Didn't parse whole string\");\n\texpect_true(double_close(fxp2double(fxp_val), true_val),\n\t    \"Misparsed %s\", str);\n}\n\nstatic void\nparse_valid_trial(const char *str) {\n\t/* The value it parses should be correct. 
*/\n\texpect_parse_accurate(str, str);\n\tchar buf[100];\n\tsnprintf(buf, sizeof(buf), \"%swith_some_trailing_text\", str);\n\texpect_parse_accurate(str, buf);\n\tsnprintf(buf, sizeof(buf), \"%s with a space\", str);\n\texpect_parse_accurate(str, buf);\n\tsnprintf(buf, sizeof(buf), \"%s,in_a_malloc_conf_string:1\", str);\n\texpect_parse_accurate(str, buf);\n}\n\nTEST_BEGIN(test_parse_valid) {\n\tparse_valid_trial(\"0\");\n\tparse_valid_trial(\"1\");\n\tparse_valid_trial(\"2\");\n\tparse_valid_trial(\"100\");\n\tparse_valid_trial(\"345\");\n\tparse_valid_trial(\"00000000123\");\n\tparse_valid_trial(\"00000000987\");\n\n\tparse_valid_trial(\"0.0\");\n\tparse_valid_trial(\"0.00000000000456456456\");\n\tparse_valid_trial(\"100.00000000000456456456\");\n\n\tparse_valid_trial(\"123.1\");\n\tparse_valid_trial(\"123.01\");\n\tparse_valid_trial(\"123.001\");\n\tparse_valid_trial(\"123.0001\");\n\tparse_valid_trial(\"123.00001\");\n\tparse_valid_trial(\"123.000001\");\n\tparse_valid_trial(\"123.0000001\");\n\n\tparse_valid_trial(\".0\");\n\tparse_valid_trial(\".1\");\n\tparse_valid_trial(\".01\");\n\tparse_valid_trial(\".001\");\n\tparse_valid_trial(\".0001\");\n\tparse_valid_trial(\".00001\");\n\tparse_valid_trial(\".000001\");\n\n\tparse_valid_trial(\".1\");\n\tparse_valid_trial(\".10\");\n\tparse_valid_trial(\".100\");\n\tparse_valid_trial(\".1000\");\n\tparse_valid_trial(\".100000\");\n}\nTEST_END\n\nstatic void\nexpect_parse_failure(const char *str) {\n\tfxp_t result = FXP_INIT_INT(333);\n\tchar *end = (void *)0x123;\n\tbool err = fxp_parse(&result, str, &end);\n\texpect_true(err, \"Expected a parse error on: %s\", str);\n\texpect_ptr_eq((void *)0x123, end,\n\t    \"Parse error shouldn't change results\");\n\texpect_u32_eq(result, FXP_INIT_INT(333),\n\t    \"Parse error shouldn't change results\");\n}\n\nTEST_BEGIN(test_parse_invalid) 
{\n\texpect_parse_failure(\"123.\");\n\texpect_parse_failure(\"3.a\");\n\texpect_parse_failure(\".a\");\n\texpect_parse_failure(\"a.1\");\n\texpect_parse_failure(\"a\");\n\t/* A valid string, but one that overflows. */\n\texpect_parse_failure(\"123456789\");\n\texpect_parse_failure(\"0000000123456789\");\n\texpect_parse_failure(\"1000000\");\n}\nTEST_END\n\nstatic void\nexpect_init_percent(unsigned percent, const char *str) {\n\tfxp_t result_init = FXP_INIT_PERCENT(percent);\n\tfxp_t result_parse = xparse_fxp(str);\n\texpect_u32_eq(result_init, result_parse,\n\t    \"Expect representations of FXP_INIT_PERCENT(%u) and \"\n\t    \"fxp_parse(\\\"%s\\\") to be equal; got %x and %x\",\n\t    percent, str, result_init, result_parse);\n\n}\n\n/*\n * Every other test uses either parsing or FXP_INIT_INT; it gets tested in those\n * ways.  We need a one-off for the percent-based initialization, though.\n */\nTEST_BEGIN(test_init_percent) {\n\texpect_init_percent(100, \"1\");\n\texpect_init_percent(75, \".75\");\n\texpect_init_percent(1, \".01\");\n\texpect_init_percent(50, \".5\");\n}\nTEST_END\n\nstatic void\nexpect_add(const char *astr, const char *bstr, const char* resultstr) {\n\tfxp_t a = xparse_fxp(astr);\n\tfxp_t b = xparse_fxp(bstr);\n\tfxp_t result = xparse_fxp(resultstr);\n\texpect_true(fxp_close(fxp_add(a, b), result),\n\t    \"Expected %s + %s == %s\", astr, bstr, resultstr);\n}\n\nTEST_BEGIN(test_add_simple) {\n\texpect_add(\"0\", \"0\", \"0\");\n\texpect_add(\"0\", \"1\", \"1\");\n\texpect_add(\"1\", \"1\", \"2\");\n\texpect_add(\"1.5\", \"1.5\", \"3\");\n\texpect_add(\"0.1\", \"0.1\", \"0.2\");\n\texpect_add(\"123\", \"456\", \"579\");\n}\nTEST_END\n\nstatic void\nexpect_sub(const char *astr, const char *bstr, const char* resultstr) {\n\tfxp_t a = xparse_fxp(astr);\n\tfxp_t b = xparse_fxp(bstr);\n\tfxp_t result = xparse_fxp(resultstr);\n\texpect_true(fxp_close(fxp_sub(a, b), result),\n\t    \"Expected %s - %s == %s\", astr, bstr, 
resultstr);\n}\n\nTEST_BEGIN(test_sub_simple) {\n\texpect_sub(\"0\", \"0\", \"0\");\n\texpect_sub(\"1\", \"0\", \"1\");\n\texpect_sub(\"1\", \"1\", \"0\");\n\texpect_sub(\"3.5\", \"1.5\", \"2\");\n\texpect_sub(\"0.3\", \"0.1\", \"0.2\");\n\texpect_sub(\"456\", \"123\", \"333\");\n}\nTEST_END\n\nstatic void\nexpect_mul(const char *astr, const char *bstr, const char* resultstr) {\n\tfxp_t a = xparse_fxp(astr);\n\tfxp_t b = xparse_fxp(bstr);\n\tfxp_t result = xparse_fxp(resultstr);\n\texpect_true(fxp_close(fxp_mul(a, b), result),\n\t    \"Expected %s * %s == %s\", astr, bstr, resultstr);\n}\n\nTEST_BEGIN(test_mul_simple) {\n\texpect_mul(\"0\", \"0\", \"0\");\n\texpect_mul(\"1\", \"0\", \"0\");\n\texpect_mul(\"1\", \"1\", \"1\");\n\texpect_mul(\"1.5\", \"1.5\", \"2.25\");\n\texpect_mul(\"100.0\", \"10\", \"1000\");\n\texpect_mul(\".1\", \"10\", \"1\");\n}\nTEST_END\n\nstatic void\nexpect_div(const char *astr, const char *bstr, const char* resultstr) {\n\tfxp_t a = xparse_fxp(astr);\n\tfxp_t b = xparse_fxp(bstr);\n\tfxp_t result = xparse_fxp(resultstr);\n\texpect_true(fxp_close(fxp_div(a, b), result),\n\t    \"Expected %s / %s == %s\", astr, bstr, resultstr);\n}\n\nTEST_BEGIN(test_div_simple) {\n\texpect_div(\"1\", \"1\", \"1\");\n\texpect_div(\"0\", \"1\", \"0\");\n\texpect_div(\"2\", \"1\", \"2\");\n\texpect_div(\"3\", \"2\", \"1.5\");\n\texpect_div(\"3\", \"1.5\", \"2\");\n\texpect_div(\"10\", \".1\", \"100\");\n\texpect_div(\"123\", \"456\", \".2697368421\");\n}\nTEST_END\n\nstatic void\nexpect_round(const char *str, uint32_t rounded_down, uint32_t rounded_nearest) {\n\tfxp_t fxp = xparse_fxp(str);\n\tuint32_t fxp_rounded_down = fxp_round_down(fxp);\n\tuint32_t fxp_rounded_nearest = fxp_round_nearest(fxp);\n\texpect_u32_eq(rounded_down, fxp_rounded_down,\n\t    \"Mistake rounding %s down\", str);\n\texpect_u32_eq(rounded_nearest, fxp_rounded_nearest,\n\t    \"Mistake rounding %s to nearest\", str);\n}\n\nTEST_BEGIN(test_round_simple) {\n\texpect_round(\"1.5\", 1, 
2);\n\texpect_round(\"0\", 0, 0);\n\texpect_round(\"0.1\", 0, 0);\n\texpect_round(\"0.4\", 0, 0);\n\texpect_round(\"0.40000\", 0, 0);\n\texpect_round(\"0.5\", 0, 1);\n\texpect_round(\"0.6\", 0, 1);\n\texpect_round(\"123\", 123, 123);\n\texpect_round(\"123.4\", 123, 123);\n\texpect_round(\"123.5\", 123, 124);\n}\nTEST_END\n\nstatic void\nexpect_mul_frac(size_t a, const char *fracstr, size_t expected) {\n\tfxp_t frac = xparse_fxp(fracstr);\n\tsize_t result = fxp_mul_frac(a, frac);\n\texpect_true(double_close(expected, result),\n\t    \"Expected %zu * %s == %zu (fracmul); got %zu\", a, fracstr,\n\t    expected, result);\n}\n\nTEST_BEGIN(test_mul_frac_simple) {\n\texpect_mul_frac(SIZE_MAX, \"1.0\", SIZE_MAX);\n\texpect_mul_frac(SIZE_MAX, \".75\", SIZE_MAX / 4 * 3);\n\texpect_mul_frac(SIZE_MAX, \".5\", SIZE_MAX / 2);\n\texpect_mul_frac(SIZE_MAX, \".25\", SIZE_MAX / 4);\n\texpect_mul_frac(1U << 16, \"1.0\", 1U << 16);\n\texpect_mul_frac(1U << 30, \"0.5\", 1U << 29);\n\texpect_mul_frac(1U << 30, \"0.25\", 1U << 28);\n\texpect_mul_frac(1U << 30, \"0.125\", 1U << 27);\n\texpect_mul_frac((1U << 30) + 1, \"0.125\", 1U << 27);\n\texpect_mul_frac(100, \"0.25\", 25);\n\texpect_mul_frac(1000 * 1000, \"0.001\", 1000);\n}\nTEST_END\n\nstatic void\nexpect_print(const char *str) {\n\tfxp_t fxp = xparse_fxp(str);\n\tchar buf[FXP_BUF_SIZE];\n\tfxp_print(fxp, buf);\n\texpect_d_eq(0, strcmp(str, buf), \"Couldn't round-trip print %s\", str);\n}\n\nTEST_BEGIN(test_print_simple) {\n\texpect_print(\"0.0\");\n\texpect_print(\"1.0\");\n\texpect_print(\"2.0\");\n\texpect_print(\"123.0\");\n\t/*\n\t * We hit the possibility of roundoff errors whenever the fractional\n\t * component isn't a round binary number; only check these here (we\n\t * round-trip properly in the stress test).\n\t */\n\texpect_print(\"1.5\");\n\texpect_print(\"3.375\");\n\texpect_print(\"0.25\");\n\texpect_print(\"0.125\");\n\t/* 1 / 2**14 */\n\texpect_print(\"0.00006103515625\");\n}\nTEST_END\n\nTEST_BEGIN(test_stress) 
{\n\tconst char *numbers[] = {\n\t\t\"0.0\", \"0.1\", \"0.2\", \"0.3\", \"0.4\",\n\t\t\"0.5\", \"0.6\", \"0.7\", \"0.8\", \"0.9\",\n\n\t\t\"1.0\", \"1.1\", \"1.2\", \"1.3\", \"1.4\",\n\t\t\"1.5\", \"1.6\", \"1.7\", \"1.8\", \"1.9\",\n\n\t\t\"2.0\", \"2.1\", \"2.2\", \"2.3\", \"2.4\",\n\t\t\"2.5\", \"2.6\", \"2.7\", \"2.8\", \"2.9\",\n\n\t\t\"17.0\", \"17.1\", \"17.2\", \"17.3\", \"17.4\",\n\t\t\"17.5\", \"17.6\", \"17.7\", \"17.8\", \"17.9\",\n\n\t\t\"18.0\", \"18.1\", \"18.2\", \"18.3\", \"18.4\",\n\t\t\"18.5\", \"18.6\", \"18.7\", \"18.8\", \"18.9\",\n\n\t\t\"123.0\", \"123.1\", \"123.2\", \"123.3\", \"123.4\",\n\t\t\"123.5\", \"123.6\", \"123.7\", \"123.8\", \"123.9\",\n\n\t\t\"124.0\", \"124.1\", \"124.2\", \"124.3\", \"124.4\",\n\t\t\"124.5\", \"124.6\", \"124.7\", \"124.8\", \"124.9\",\n\n\t\t\"125.0\", \"125.1\", \"125.2\", \"125.3\", \"125.4\",\n\t\t\"125.5\", \"125.6\", \"125.7\", \"125.8\", \"125.9\"};\n\tsize_t numbers_len = sizeof(numbers)/sizeof(numbers[0]);\n\tfor (size_t i = 0; i < numbers_len; i++) {\n\t\tfxp_t fxp_a = xparse_fxp(numbers[i]);\n\t\tdouble double_a = strtod(numbers[i], NULL);\n\n\t\tuint32_t fxp_rounded_down = fxp_round_down(fxp_a);\n\t\tuint32_t fxp_rounded_nearest = fxp_round_nearest(fxp_a);\n\t\tuint32_t double_rounded_down = (uint32_t)double_a;\n\t\tuint32_t double_rounded_nearest = (uint32_t)round(double_a);\n\n\t\texpect_u32_eq(double_rounded_down, fxp_rounded_down,\n\t\t    \"Incorrectly rounded down %s\", numbers[i]);\n\t\texpect_u32_eq(double_rounded_nearest, fxp_rounded_nearest,\n\t\t    \"Incorrectly rounded-to-nearest %s\", numbers[i]);\n\n\t\tfor (size_t j = 0; j < numbers_len; j++) {\n\t\t\tfxp_t fxp_b = xparse_fxp(numbers[j]);\n\t\t\tdouble double_b = strtod(numbers[j], NULL);\n\n\t\t\tfxp_t fxp_sum = fxp_add(fxp_a, fxp_b);\n\t\t\tdouble double_sum = double_a + double_b;\n\t\t\texpect_true(\n\t\t\t    double_close(fxp2double(fxp_sum), double_sum),\n\t\t\t    \"Miscomputed %s + %s\", numbers[i], numbers[j]);\n\n\t\t\tif 
(double_a > double_b) {\n\t\t\t\tfxp_t fxp_diff = fxp_sub(fxp_a, fxp_b);\n\t\t\t\tdouble double_diff = double_a - double_b;\n\t\t\t\texpect_true(\n\t\t\t\t    double_close(fxp2double(fxp_diff),\n\t\t\t\t    double_diff),\n\t\t\t\t    \"Miscomputed %s - %s\", numbers[i],\n\t\t\t\t    numbers[j]);\n\t\t\t}\n\n\t\t\tfxp_t fxp_prod = fxp_mul(fxp_a, fxp_b);\n\t\t\tdouble double_prod = double_a * double_b;\n\t\t\texpect_true(\n\t\t\t    double_close(fxp2double(fxp_prod), double_prod),\n\t\t\t    \"Miscomputed %s * %s\", numbers[i], numbers[j]);\n\n\t\t\tif (double_b != 0.0) {\n\t\t\t\tfxp_t fxp_quot = fxp_div(fxp_a, fxp_b);\n\t\t\t\tdouble double_quot = double_a / double_b;\n\t\t\t\texpect_true(\n\t\t\t\t    double_close(fxp2double(fxp_quot),\n\t\t\t\t    double_quot),\n\t\t\t\t    \"Miscomputed %s / %s\", numbers[i],\n\t\t\t\t    numbers[j]);\n\t\t\t}\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_parse_valid,\n\t    test_parse_invalid,\n\t    test_init_percent,\n\t    test_add_simple,\n\t    test_sub_simple,\n\t    test_mul_simple,\n\t    test_div_simple,\n\t    test_round_simple,\n\t    test_mul_frac_simple,\n\t    test_print_simple,\n\t    test_stress);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hash.c",
    "content": "/*\n * This file is based on code that is part of SMHasher\n * (https://code.google.com/p/smhasher/), and is subject to the MIT license\n * (http://www.opensource.org/licenses/mit-license.php).  Both email addresses\n * associated with the source code's revision history belong to Austin Appleby,\n * and the revision history ranges from 2010 to 2012.  Therefore the copyright\n * and license are here taken to be:\n *\n * Copyright (c) 2010-2012 Austin Appleby\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n * THE SOFTWARE.\n */\n\n#include \"test/jemalloc_test.h\"\n#include \"jemalloc/internal/hash.h\"\n\ntypedef enum {\n\thash_variant_x86_32,\n\thash_variant_x86_128,\n\thash_variant_x64_128\n} hash_variant_t;\n\nstatic int\nhash_variant_bits(hash_variant_t variant) {\n\tswitch (variant) {\n\tcase hash_variant_x86_32: return 32;\n\tcase hash_variant_x86_128: return 128;\n\tcase hash_variant_x64_128: return 128;\n\tdefault: not_reached();\n\t}\n}\n\nstatic const char *\nhash_variant_string(hash_variant_t variant) {\n\tswitch (variant) {\n\tcase hash_variant_x86_32: return \"hash_x86_32\";\n\tcase hash_variant_x86_128: return \"hash_x86_128\";\n\tcase hash_variant_x64_128: return \"hash_x64_128\";\n\tdefault: not_reached();\n\t}\n}\n\n#define KEY_SIZE\t256\nstatic void\nhash_variant_verify_key(hash_variant_t variant, uint8_t *key) {\n\tconst int hashbytes = hash_variant_bits(variant) / 8;\n\tconst int hashes_size = hashbytes * 256;\n\tVARIABLE_ARRAY(uint8_t, hashes, hashes_size);\n\tVARIABLE_ARRAY(uint8_t, final, hashbytes);\n\tunsigned i;\n\tuint32_t computed, expected;\n\n\tmemset(key, 0, KEY_SIZE);\n\tmemset(hashes, 0, hashes_size);\n\tmemset(final, 0, hashbytes);\n\n\t/*\n\t * Hash keys of the form {0}, {0,1}, {0,1,2}, ..., {0,1,...,255} as the\n\t * seed.\n\t */\n\tfor (i = 0; i < 256; i++) {\n\t\tkey[i] = (uint8_t)i;\n\t\tswitch (variant) {\n\t\tcase hash_variant_x86_32: {\n\t\t\tuint32_t out;\n\t\t\tout = hash_x86_32(key, i, 256-i);\n\t\t\tmemcpy(&hashes[i*hashbytes], &out, hashbytes);\n\t\t\tbreak;\n\t\t} case hash_variant_x86_128: {\n\t\t\tuint64_t out[2];\n\t\t\thash_x86_128(key, i, 256-i, out);\n\t\t\tmemcpy(&hashes[i*hashbytes], out, hashbytes);\n\t\t\tbreak;\n\t\t} case hash_variant_x64_128: 
{\n\t\t\tuint64_t out[2];\n\t\t\thash_x64_128(key, i, 256-i, out);\n\t\t\tmemcpy(&hashes[i*hashbytes], out, hashbytes);\n\t\t\tbreak;\n\t\t} default: not_reached();\n\t\t}\n\t}\n\n\t/* Hash the result array. */\n\tswitch (variant) {\n\tcase hash_variant_x86_32: {\n\t\tuint32_t out = hash_x86_32(hashes, hashes_size, 0);\n\t\tmemcpy(final, &out, sizeof(out));\n\t\tbreak;\n\t} case hash_variant_x86_128: {\n\t\tuint64_t out[2];\n\t\thash_x86_128(hashes, hashes_size, 0, out);\n\t\tmemcpy(final, out, sizeof(out));\n\t\tbreak;\n\t} case hash_variant_x64_128: {\n\t\tuint64_t out[2];\n\t\thash_x64_128(hashes, hashes_size, 0, out);\n\t\tmemcpy(final, out, sizeof(out));\n\t\tbreak;\n\t} default: not_reached();\n\t}\n\n\tcomputed = (final[0] << 0) | (final[1] << 8) | (final[2] << 16) |\n\t    (final[3] << 24);\n\n\tswitch (variant) {\n#ifdef JEMALLOC_BIG_ENDIAN\n\tcase hash_variant_x86_32: expected = 0x6213303eU; break;\n\tcase hash_variant_x86_128: expected = 0x266820caU; break;\n\tcase hash_variant_x64_128: expected = 0xcc622b6fU; break;\n#else\n\tcase hash_variant_x86_32: expected = 0xb0f57ee3U; break;\n\tcase hash_variant_x86_128: expected = 0xb3ece62aU; break;\n\tcase hash_variant_x64_128: expected = 0x6384ba69U; break;\n#endif\n\tdefault: not_reached();\n\t}\n\n\texpect_u32_eq(computed, expected,\n\t    \"Hash mismatch for %s(): expected %#x but got %#x\",\n\t    hash_variant_string(variant), expected, computed);\n}\n\nstatic void\nhash_variant_verify(hash_variant_t variant) {\n#define MAX_ALIGN\t16\n\tuint8_t key[KEY_SIZE + (MAX_ALIGN - 1)];\n\tunsigned i;\n\n\tfor (i = 0; i < MAX_ALIGN; i++) {\n\t\thash_variant_verify_key(variant, &key[i]);\n\t}\n#undef MAX_ALIGN\n}\n#undef KEY_SIZE\n\nTEST_BEGIN(test_hash_x86_32) {\n\thash_variant_verify(hash_variant_x86_32);\n}\nTEST_END\n\nTEST_BEGIN(test_hash_x86_128) {\n\thash_variant_verify(hash_variant_x86_128);\n}\nTEST_END\n\nTEST_BEGIN(test_hash_x64_128) 
{\n\thash_variant_verify(hash_variant_x64_128);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_hash_x86_32,\n\t    test_hash_x86_128,\n\t    test_hash_x64_128);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hook.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/hook.h\"\n\nstatic void *arg_extra;\nstatic int arg_type;\nstatic void *arg_result;\nstatic void *arg_address;\nstatic size_t arg_old_usize;\nstatic size_t arg_new_usize;\nstatic uintptr_t arg_result_raw;\nstatic uintptr_t arg_args_raw[4];\n\nstatic int call_count = 0;\n\nstatic void\nreset_args() {\n\targ_extra = NULL;\n\targ_type = 12345;\n\targ_result = NULL;\n\targ_address = NULL;\n\targ_old_usize = 0;\n\targ_new_usize = 0;\n\targ_result_raw = 0;\n\tmemset(arg_args_raw, 77, sizeof(arg_args_raw));\n}\n\nstatic void\nalloc_free_size(size_t sz) {\n\tvoid *ptr = mallocx(sz, 0);\n\tfree(ptr);\n\tptr = mallocx(sz, 0);\n\tfree(ptr);\n\tptr = mallocx(sz, MALLOCX_TCACHE_NONE);\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n}\n\n/*\n * We want to support a degree of user reentrancy.  This tests a variety of\n * allocation scenarios.\n */\nstatic void\nbe_reentrant() {\n\t/* Let's make sure the tcache is non-empty if enabled. */\n\talloc_free_size(1);\n\talloc_free_size(1024);\n\talloc_free_size(64 * 1024);\n\talloc_free_size(256 * 1024);\n\talloc_free_size(1024 * 1024);\n\n\t/* Some reallocation. 
*/\n\tvoid *ptr = mallocx(129, 0);\n\tptr = rallocx(ptr, 130, 0);\n\tfree(ptr);\n\n\tptr = mallocx(2 * 1024 * 1024, 0);\n\tfree(ptr);\n\tptr = mallocx(1 * 1024 * 1024, 0);\n\tptr = rallocx(ptr, 2 * 1024 * 1024, 0);\n\tfree(ptr);\n\n\tptr = mallocx(1, 0);\n\tptr = rallocx(ptr, 1000, 0);\n\tfree(ptr);\n}\n\nstatic void\nset_args_raw(uintptr_t *args_raw, int nargs) {\n\tmemcpy(arg_args_raw, args_raw, sizeof(uintptr_t) * nargs);\n}\n\nstatic void\nexpect_args_raw(uintptr_t *args_raw_expected, int nargs) {\n\tint cmp = memcmp(args_raw_expected, arg_args_raw,\n\t    sizeof(uintptr_t) * nargs);\n\texpect_d_eq(cmp, 0, \"Raw args mismatch\");\n}\n\nstatic void\nreset() {\n\tcall_count = 0;\n\treset_args();\n}\n\nstatic void\ntest_alloc_hook(void *extra, hook_alloc_t type, void *result,\n    uintptr_t result_raw, uintptr_t args_raw[3]) {\n\tcall_count++;\n\targ_extra = extra;\n\targ_type = (int)type;\n\targ_result = result;\n\targ_result_raw = result_raw;\n\tset_args_raw(args_raw, 3);\n\tbe_reentrant();\n}\n\nstatic void\ntest_dalloc_hook(void *extra, hook_dalloc_t type, void *address,\n    uintptr_t args_raw[3]) {\n\tcall_count++;\n\targ_extra = extra;\n\targ_type = (int)type;\n\targ_address = address;\n\tset_args_raw(args_raw, 3);\n\tbe_reentrant();\n}\n\nstatic void\ntest_expand_hook(void *extra, hook_expand_t type, void *address,\n    size_t old_usize, size_t new_usize, uintptr_t result_raw,\n    uintptr_t args_raw[4]) {\n\tcall_count++;\n\targ_extra = extra;\n\targ_type = (int)type;\n\targ_address = address;\n\targ_old_usize = old_usize;\n\targ_new_usize = new_usize;\n\targ_result_raw = result_raw;\n\tset_args_raw(args_raw, 4);\n\tbe_reentrant();\n}\n\nTEST_BEGIN(test_hooks_basic) {\n\t/* Just verify that the hooks record their arguments correctly. 
*/\n\thooks_t hooks = {\n\t\t&test_alloc_hook, &test_dalloc_hook, &test_expand_hook,\n\t\t(void *)111};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\tuintptr_t args_raw[4] = {10, 20, 30, 40};\n\n\t/* Alloc */\n\treset_args();\n\thook_invoke_alloc(hook_alloc_posix_memalign, (void *)222, 333,\n\t    args_raw);\n\texpect_ptr_eq(arg_extra, (void *)111, \"Passed wrong user pointer\");\n\texpect_d_eq((int)hook_alloc_posix_memalign, arg_type,\n\t    \"Passed wrong alloc type\");\n\texpect_ptr_eq((void *)222, arg_result, \"Passed wrong result address\");\n\texpect_u64_eq(333, arg_result_raw, \"Passed wrong result\");\n\texpect_args_raw(args_raw, 3);\n\n\t/* Dalloc */\n\treset_args();\n\thook_invoke_dalloc(hook_dalloc_sdallocx, (void *)222, args_raw);\n\texpect_d_eq((int)hook_dalloc_sdallocx, arg_type,\n\t    \"Passed wrong dalloc type\");\n\texpect_ptr_eq((void *)111, arg_extra, \"Passed wrong user pointer\");\n\texpect_ptr_eq((void *)222, arg_address, \"Passed wrong address\");\n\texpect_args_raw(args_raw, 3);\n\n\t/* Expand */\n\treset_args();\n\thook_invoke_expand(hook_expand_xallocx, (void *)222, 333, 444, 555,\n\t    args_raw);\n\texpect_d_eq((int)hook_expand_xallocx, arg_type,\n\t    \"Passed wrong expand type\");\n\texpect_ptr_eq((void *)111, arg_extra, \"Passed wrong user pointer\");\n\texpect_ptr_eq((void *)222, arg_address, \"Passed wrong address\");\n\texpect_zu_eq(333, arg_old_usize, \"Passed wrong old usize\");\n\texpect_zu_eq(444, arg_new_usize, \"Passed wrong new usize\");\n\texpect_zu_eq(555, arg_result_raw, \"Passed wrong result\");\n\texpect_args_raw(args_raw, 4);\n\n\thook_remove(TSDN_NULL, handle);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_null) {\n\t/* Null hooks should be ignored, not crash. 
*/\n\thooks_t hooks1 = {NULL, NULL, NULL, NULL};\n\thooks_t hooks2 = {&test_alloc_hook, NULL, NULL, NULL};\n\thooks_t hooks3 = {NULL, &test_dalloc_hook, NULL, NULL};\n\thooks_t hooks4 = {NULL, NULL, &test_expand_hook, NULL};\n\n\tvoid *handle1 = hook_install(TSDN_NULL, &hooks1);\n\tvoid *handle2 = hook_install(TSDN_NULL, &hooks2);\n\tvoid *handle3 = hook_install(TSDN_NULL, &hooks3);\n\tvoid *handle4 = hook_install(TSDN_NULL, &hooks4);\n\n\texpect_ptr_ne(handle1, NULL, \"Hook installation failed\");\n\texpect_ptr_ne(handle2, NULL, \"Hook installation failed\");\n\texpect_ptr_ne(handle3, NULL, \"Hook installation failed\");\n\texpect_ptr_ne(handle4, NULL, \"Hook installation failed\");\n\n\tuintptr_t args_raw[4] = {10, 20, 30, 40};\n\n\tcall_count = 0;\n\thook_invoke_alloc(hook_alloc_malloc, NULL, 0, args_raw);\n\texpect_d_eq(call_count, 1, \"Called wrong number of times\");\n\n\tcall_count = 0;\n\thook_invoke_dalloc(hook_dalloc_free, NULL, args_raw);\n\texpect_d_eq(call_count, 1, \"Called wrong number of times\");\n\n\tcall_count = 0;\n\thook_invoke_expand(hook_expand_realloc, NULL, 0, 0, 0, args_raw);\n\texpect_d_eq(call_count, 1, \"Called wrong number of times\");\n\n\thook_remove(TSDN_NULL, handle1);\n\thook_remove(TSDN_NULL, handle2);\n\thook_remove(TSDN_NULL, handle3);\n\thook_remove(TSDN_NULL, handle4);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_remove) {\n\thooks_t hooks = {&test_alloc_hook, NULL, NULL, NULL};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\tcall_count = 0;\n\tuintptr_t args_raw[4] = {10, 20, 30, 40};\n\thook_invoke_alloc(hook_alloc_malloc, NULL, 0, args_raw);\n\texpect_d_eq(call_count, 1, \"Hook not invoked\");\n\n\tcall_count = 0;\n\thook_remove(TSDN_NULL, handle);\n\thook_invoke_alloc(hook_alloc_malloc, NULL, 0, NULL);\n\texpect_d_eq(call_count, 0, \"Hook invoked after removal\");\n\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_alloc_simple) {\n\t/* \"Simple\" in the sense that we're 
not in a realloc variant. */\n\thooks_t hooks = {&test_alloc_hook, NULL, NULL, (void *)123};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\n\t/* Stop malloc from being optimized away. */\n\tvolatile int err;\n\tvoid *volatile ptr;\n\n\t/* malloc */\n\treset();\n\tptr = malloc(1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_malloc, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[0], \"Wrong argument\");\n\tfree(ptr);\n\n\t/* posix_memalign */\n\treset();\n\terr = posix_memalign((void **)&ptr, 1024, 1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_posix_memalign,\n\t    \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)err, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)&ptr, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)1024, arg_args_raw[1], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[2], \"Wrong argument\");\n\tfree(ptr);\n\n\t/* aligned_alloc */\n\treset();\n\tptr = aligned_alloc(1024, 1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_aligned_alloc,\n\t    \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)1024, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n\n\t/* calloc 
*/\n\treset();\n\tptr = calloc(11, 13);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_calloc, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)11, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)13, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n\n\t/* memalign */\n#ifdef JEMALLOC_OVERRIDE_MEMALIGN\n\treset();\n\tptr = memalign(1024, 1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_memalign, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)1024, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n#endif /* JEMALLOC_OVERRIDE_MEMALIGN */\n\n\t/* valloc */\n#ifdef JEMALLOC_OVERRIDE_VALLOC\n\treset();\n\tptr = valloc(1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_valloc, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[0], \"Wrong argument\");\n\tfree(ptr);\n#endif /* JEMALLOC_OVERRIDE_VALLOC */\n\n\t/* mallocx */\n\treset();\n\tptr = mallocx(1, MALLOCX_LG_ALIGN(10));\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_mallocx, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong 
result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)MALLOCX_LG_ALIGN(10), arg_args_raw[1],\n\t    \"Wrong flags\");\n\tfree(ptr);\n\n\thook_remove(TSDN_NULL, handle);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_dalloc_simple) {\n\t/* \"Simple\" in the sense that we're not in a realloc variant. */\n\thooks_t hooks = {NULL, &test_dalloc_hook, NULL, (void *)123};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\n\tvoid *volatile ptr;\n\n\t/* free() */\n\treset();\n\tptr = malloc(1);\n\tfree(ptr);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_dalloc_free, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong pointer freed\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong raw arg\");\n\n\t/* dallocx() */\n\treset();\n\tptr = malloc(1);\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_dalloc_dallocx, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong pointer freed\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong raw arg\");\n\texpect_u64_eq((uintptr_t)MALLOCX_TCACHE_NONE, arg_args_raw[1],\n\t    \"Wrong raw arg\");\n\n\t/* sdallocx() */\n\treset();\n\tptr = malloc(1);\n\tsdallocx(ptr, 1, MALLOCX_TCACHE_NONE);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_dalloc_sdallocx, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong pointer freed\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong raw arg\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[1], \"Wrong raw 
arg\");\n\texpect_u64_eq((uintptr_t)MALLOCX_TCACHE_NONE, arg_args_raw[2],\n\t    \"Wrong raw arg\");\n\n\thook_remove(TSDN_NULL, handle);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_expand_simple) {\n\t/* \"Simple\" in the sense that we're not in a realloc variant. */\n\thooks_t hooks = {NULL, NULL, &test_expand_hook, (void *)123};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\n\tvoid *volatile ptr;\n\n\t/* xallocx() */\n\treset();\n\tptr = malloc(1);\n\tsize_t new_usize = xallocx(ptr, 100, 200, MALLOCX_TCACHE_NONE);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_expand_xallocx, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong pointer expanded\");\n\texpect_u64_eq(arg_old_usize, nallocx(1, 0), \"Wrong old usize\");\n\texpect_u64_eq(arg_new_usize, sallocx(ptr, 0), \"Wrong new usize\");\n\texpect_u64_eq(new_usize, arg_result_raw, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong arg\");\n\texpect_u64_eq(100, arg_args_raw[1], \"Wrong arg\");\n\texpect_u64_eq(200, arg_args_raw[2], \"Wrong arg\");\n\texpect_u64_eq(MALLOCX_TCACHE_NONE, arg_args_raw[3], \"Wrong arg\");\n\n\thook_remove(TSDN_NULL, handle);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_realloc_as_malloc_or_free) {\n\thooks_t hooks = {&test_alloc_hook, &test_dalloc_hook,\n\t\t&test_expand_hook, (void *)123};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\n\tvoid *volatile ptr;\n\n\t/* realloc(NULL, size) as malloc */\n\treset();\n\tptr = realloc(NULL, 1);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_realloc, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, 
(uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)NULL, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)1, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n\n\t/* realloc(ptr, 0) as free */\n\tif (opt_zero_realloc_action == zero_realloc_action_free) {\n\t\tptr = malloc(1);\n\t\treset();\n\t\trealloc(ptr, 0);\n\t\texpect_d_eq(call_count, 1, \"Hook not called\");\n\t\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\t\texpect_d_eq(arg_type, (int)hook_dalloc_realloc,\n\t\t    \"Wrong hook type\");\n\t\texpect_ptr_eq(ptr, arg_address,\n\t\t    \"Wrong pointer freed\");\n\t\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0],\n\t\t    \"Wrong raw arg\");\n\t\texpect_u64_eq((uintptr_t)0, arg_args_raw[1],\n\t\t    \"Wrong raw arg\");\n\t}\n\n\t/* realloc(NULL, 0) as malloc(0) */\n\treset();\n\tptr = realloc(NULL, 0);\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, (int)hook_alloc_realloc, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_result, \"Wrong result\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)NULL, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)0, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n\n\thook_remove(TSDN_NULL, handle);\n}\nTEST_END\n\nstatic void\ndo_realloc_test(void *(*ralloc)(void *, size_t, int), int flags,\n    int expand_type, int dalloc_type) {\n\thooks_t hooks = {&test_alloc_hook, &test_dalloc_hook,\n\t\t&test_expand_hook, (void *)123};\n\tvoid *handle = hook_install(TSDN_NULL, &hooks);\n\texpect_ptr_ne(handle, NULL, \"Hook installation failed\");\n\n\tvoid *volatile ptr;\n\tvoid *volatile ptr2;\n\n\t/* Realloc in-place, small. 
*/\n\tptr = malloc(129);\n\treset();\n\tptr2 = ralloc(ptr, 130, flags);\n\texpect_ptr_eq(ptr, ptr2, \"Small realloc moved\");\n\n\texpect_d_eq(call_count, 1, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, expand_type, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong address\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)130, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr);\n\n\t/*\n\t * Realloc in-place, large.  Since we can't guarantee the large case\n\t * across all platforms, we stay resilient to moving results.\n\t */\n\tptr = malloc(2 * 1024 * 1024);\n\tfree(ptr);\n\tptr2 = malloc(1 * 1024 * 1024);\n\treset();\n\tptr = ralloc(ptr2, 2 * 1024 * 1024, flags);\n\t/* ptr is the new address, ptr2 is the old address. */\n\tif (ptr == ptr2) {\n\t\texpect_d_eq(call_count, 1, \"Hook not called\");\n\t\texpect_d_eq(arg_type, expand_type, \"Wrong hook type\");\n\t} else {\n\t\texpect_d_eq(call_count, 2, \"Wrong hooks called\");\n\t\texpect_ptr_eq(ptr, arg_result, \"Wrong address\");\n\t\texpect_d_eq(arg_type, dalloc_type, \"Wrong hook type\");\n\t}\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_ptr_eq(ptr2, arg_address, \"Wrong address\");\n\texpect_u64_eq((uintptr_t)ptr, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)ptr2, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)2 * 1024 * 1024, arg_args_raw[1],\n\t    \"Wrong argument\");\n\tfree(ptr);\n\n\t/* Realloc with move, small. 
*/\n\tptr = malloc(8);\n\treset();\n\tptr2 = ralloc(ptr, 128, flags);\n\texpect_ptr_ne(ptr, ptr2, \"Small realloc didn't move\");\n\n\texpect_d_eq(call_count, 2, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, dalloc_type, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong address\");\n\texpect_ptr_eq(ptr2, arg_result, \"Wrong address\");\n\texpect_u64_eq((uintptr_t)ptr2, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)128, arg_args_raw[1], \"Wrong argument\");\n\tfree(ptr2);\n\n\t/* Realloc with move, large. */\n\tptr = malloc(1);\n\treset();\n\tptr2 = ralloc(ptr, 2 * 1024 * 1024, flags);\n\texpect_ptr_ne(ptr, ptr2, \"Large realloc didn't move\");\n\n\texpect_d_eq(call_count, 2, \"Hook not called\");\n\texpect_ptr_eq(arg_extra, (void *)123, \"Wrong extra\");\n\texpect_d_eq(arg_type, dalloc_type, \"Wrong hook type\");\n\texpect_ptr_eq(ptr, arg_address, \"Wrong address\");\n\texpect_ptr_eq(ptr2, arg_result, \"Wrong address\");\n\texpect_u64_eq((uintptr_t)ptr2, (uintptr_t)arg_result_raw,\n\t    \"Wrong raw result\");\n\texpect_u64_eq((uintptr_t)ptr, arg_args_raw[0], \"Wrong argument\");\n\texpect_u64_eq((uintptr_t)2 * 1024 * 1024, arg_args_raw[1],\n\t    \"Wrong argument\");\n\tfree(ptr2);\n\n\thook_remove(TSDN_NULL, handle);\n}\n\nstatic void *\nrealloc_wrapper(void *ptr, size_t size, UNUSED int flags) {\n\treturn realloc(ptr, size);\n}\n\nTEST_BEGIN(test_hooks_realloc) {\n\tdo_realloc_test(&realloc_wrapper, 0, hook_expand_realloc,\n\t    hook_dalloc_realloc);\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_rallocx) {\n\tdo_realloc_test(&rallocx, MALLOCX_TCACHE_NONE, hook_expand_rallocx,\n\t    hook_dalloc_rallocx);\n}\nTEST_END\n\nint\nmain(void) {\n\t/* We assert on call counts. 
*/\n\treturn test_no_reentrancy(\n\t    test_hooks_basic,\n\t    test_hooks_null,\n\t    test_hooks_remove,\n\t    test_hooks_alloc_simple,\n\t    test_hooks_dalloc_simple,\n\t    test_hooks_expand_simple,\n\t    test_hooks_realloc_as_malloc_or_free,\n\t    test_hooks_realloc,\n\t    test_hooks_rallocx);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hpa.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/hpa.h\"\n#include \"jemalloc/internal/nstime.h\"\n\n#define SHARD_IND 111\n\n#define ALLOC_MAX (HUGEPAGE / 4)\n\ntypedef struct test_data_s test_data_t;\nstruct test_data_s {\n\t/*\n\t * Must be the first member -- we convert back and forth between the\n\t * test_data_t and the hpa_shard_t;\n\t */\n\thpa_shard_t shard;\n\thpa_central_t central;\n\tbase_t *base;\n\tedata_cache_t shard_edata_cache;\n\n\temap_t emap;\n};\n\nstatic hpa_shard_opts_t test_hpa_shard_opts_default = {\n\t/* slab_max_alloc */\n\tALLOC_MAX,\n\t/* hugification threshold */\n\tHUGEPAGE,\n\t/* dirty_mult */\n\tFXP_INIT_PERCENT(25),\n\t/* deferral_allowed */\n\tfalse,\n\t/* hugify_delay_ms */\n\t10 * 1000,\n};\n\nstatic hpa_shard_t *\ncreate_test_data(hpa_hooks_t *hooks, hpa_shard_opts_t *opts) {\n\tbool err;\n\tbase_t *base = base_new(TSDN_NULL, /* ind */ SHARD_IND,\n\t    &ehooks_default_extent_hooks, /* metadata_use_hooks */ true);\n\tassert_ptr_not_null(base, \"\");\n\n\ttest_data_t *test_data = malloc(sizeof(test_data_t));\n\tassert_ptr_not_null(test_data, \"\");\n\n\ttest_data->base = base;\n\n\terr = edata_cache_init(&test_data->shard_edata_cache, base);\n\tassert_false(err, \"\");\n\n\terr = emap_init(&test_data->emap, test_data->base, /* zeroed */ false);\n\tassert_false(err, \"\");\n\n\terr = hpa_central_init(&test_data->central, test_data->base, hooks);\n\tassert_false(err, \"\");\n\n\terr = hpa_shard_init(&test_data->shard, &test_data->central,\n\t    &test_data->emap, test_data->base, &test_data->shard_edata_cache,\n\t    SHARD_IND, opts);\n\tassert_false(err, \"\");\n\n\treturn (hpa_shard_t *)test_data;\n}\n\nstatic void\ndestroy_test_data(hpa_shard_t *shard) {\n\ttest_data_t *test_data = (test_data_t *)shard;\n\tbase_delete(TSDN_NULL, test_data->base);\n\tfree(test_data);\n}\n\nTEST_BEGIN(test_alloc_max) {\n\ttest_skip_if(!hpa_supported());\n\n\thpa_shard_t *shard = 
create_test_data(&hpa_hooks_default,\n\t    &test_hpa_shard_opts_default);\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\n\tedata_t *edata;\n\n\t/* Small max */\n\tbool deferred_work_generated = false;\n\tedata = pai_alloc(tsdn, &shard->pai, ALLOC_MAX, PAGE, false, false,\n\t    false, &deferred_work_generated);\n\texpect_ptr_not_null(edata, \"Allocation of small max failed\");\n\tedata = pai_alloc(tsdn, &shard->pai, ALLOC_MAX + PAGE, PAGE, false,\n\t    false, false, &deferred_work_generated);\n\texpect_ptr_null(edata, \"Allocation of larger than small max succeeded\");\n\n\tdestroy_test_data(shard);\n}\nTEST_END\n\ntypedef struct mem_contents_s mem_contents_t;\nstruct mem_contents_s {\n\tuintptr_t my_addr;\n\tsize_t size;\n\tedata_t *my_edata;\n\trb_node(mem_contents_t) link;\n};\n\nstatic int\nmem_contents_cmp(const mem_contents_t *a, const mem_contents_t *b) {\n\treturn (a->my_addr > b->my_addr) - (a->my_addr < b->my_addr);\n}\n\ntypedef rb_tree(mem_contents_t) mem_tree_t;\nrb_gen(static, mem_tree_, mem_tree_t, mem_contents_t, link,\n    mem_contents_cmp);\n\nstatic void\nnode_assert_ordered(mem_contents_t *a, mem_contents_t *b) {\n\tassert_zu_lt(a->my_addr, a->my_addr + a->size, \"Overflow\");\n\tassert_zu_le(a->my_addr + a->size, b->my_addr, \"\");\n}\n\nstatic void\nnode_check(mem_tree_t *tree, mem_contents_t *contents) {\n\tedata_t *edata = contents->my_edata;\n\tassert_ptr_eq(contents, (void *)contents->my_addr, \"\");\n\tassert_ptr_eq(contents, edata_base_get(edata), \"\");\n\tassert_zu_eq(contents->size, edata_size_get(edata), \"\");\n\tassert_ptr_eq(contents->my_edata, edata, \"\");\n\n\tmem_contents_t *next = mem_tree_next(tree, contents);\n\tif (next != NULL) {\n\t\tnode_assert_ordered(contents, next);\n\t}\n\tmem_contents_t *prev = mem_tree_prev(tree, contents);\n\tif (prev != NULL) {\n\t\tnode_assert_ordered(prev, contents);\n\t}\n}\n\nstatic void\nnode_insert(mem_tree_t *tree, edata_t *edata, size_t npages) {\n\tmem_contents_t *contents = 
(mem_contents_t *)edata_base_get(edata);\n\tcontents->my_addr = (uintptr_t)edata_base_get(edata);\n\tcontents->size = edata_size_get(edata);\n\tcontents->my_edata = edata;\n\tmem_tree_insert(tree, contents);\n\tnode_check(tree, contents);\n}\n\nstatic void\nnode_remove(mem_tree_t *tree, edata_t *edata) {\n\tmem_contents_t *contents = (mem_contents_t *)edata_base_get(edata);\n\tnode_check(tree, contents);\n\tmem_tree_remove(tree, contents);\n}\n\nTEST_BEGIN(test_stress) {\n\ttest_skip_if(!hpa_supported());\n\n\thpa_shard_t *shard = create_test_data(&hpa_hooks_default,\n\t    &test_hpa_shard_opts_default);\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\n\tconst size_t nlive_edatas_max = 500;\n\tsize_t nlive_edatas = 0;\n\tedata_t **live_edatas = calloc(nlive_edatas_max, sizeof(edata_t *));\n\t/*\n\t * Nothing special about this constant; we're only fixing it for\n\t * consistency across runs.\n\t */\n\tsize_t prng_state = (size_t)0x76999ffb014df07c;\n\n\tmem_tree_t tree;\n\tmem_tree_new(&tree);\n\n\tbool deferred_work_generated = false;\n\n\tfor (size_t i = 0; i < 100 * 1000; i++) {\n\t\tsize_t operation = prng_range_zu(&prng_state, 2);\n\t\tif (operation == 0) {\n\t\t\t/* Alloc */\n\t\t\tif (nlive_edatas == nlive_edatas_max) {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * We make sure to get an even balance of small and\n\t\t\t * large allocations.\n\t\t\t */\n\t\t\tsize_t npages_min = 1;\n\t\t\tsize_t npages_max = ALLOC_MAX / PAGE;\n\t\t\tsize_t npages = npages_min + prng_range_zu(&prng_state,\n\t\t\t    npages_max - npages_min);\n\t\t\tedata_t *edata = pai_alloc(tsdn, &shard->pai,\n\t\t\t    npages * PAGE, PAGE, false, false, false,\n\t\t\t    &deferred_work_generated);\n\t\t\tassert_ptr_not_null(edata,\n\t\t\t    \"Unexpected allocation failure\");\n\t\t\tlive_edatas[nlive_edatas] = edata;\n\t\t\tnlive_edatas++;\n\t\t\tnode_insert(&tree, edata, npages);\n\t\t} else {\n\t\t\t/* Free. 
*/\n\t\t\tif (nlive_edatas == 0) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tsize_t victim = prng_range_zu(&prng_state, nlive_edatas);\n\t\t\tedata_t *to_free = live_edatas[victim];\n\t\t\tlive_edatas[victim] = live_edatas[nlive_edatas - 1];\n\t\t\tnlive_edatas--;\n\t\t\tnode_remove(&tree, to_free);\n\t\t\tpai_dalloc(tsdn, &shard->pai, to_free,\n\t\t\t    &deferred_work_generated);\n\t\t}\n\t}\n\n\tsize_t ntreenodes = 0;\n\tfor (mem_contents_t *contents = mem_tree_first(&tree); contents != NULL;\n\t    contents = mem_tree_next(&tree, contents)) {\n\t\tntreenodes++;\n\t\tnode_check(&tree, contents);\n\t}\n\texpect_zu_eq(ntreenodes, nlive_edatas, \"\");\n\n\t/*\n\t * Test hpa_shard_destroy, which requires as a precondition that all its\n\t * extents have been deallocated.\n\t */\n\tfor (size_t i = 0; i < nlive_edatas; i++) {\n\t\tedata_t *to_free = live_edatas[i];\n\t\tnode_remove(&tree, to_free);\n\t\tpai_dalloc(tsdn, &shard->pai, to_free,\n\t\t    &deferred_work_generated);\n\t}\n\thpa_shard_destroy(tsdn, shard);\n\n\tfree(live_edatas);\n\tdestroy_test_data(shard);\n}\nTEST_END\n\nstatic void\nexpect_contiguous(edata_t **edatas, size_t nedatas) {\n\tfor (size_t i = 0; i < nedatas; i++) {\n\t\tsize_t expected = (size_t)edata_base_get(edatas[0])\n\t\t    + i * PAGE;\n\t\texpect_zu_eq(expected, (size_t)edata_base_get(edatas[i]),\n\t\t    \"Mismatch at index %zu\", i);\n\t}\n}\n\nTEST_BEGIN(test_alloc_dalloc_batch) {\n\ttest_skip_if(!hpa_supported());\n\n\thpa_shard_t *shard = create_test_data(&hpa_hooks_default,\n\t    &test_hpa_shard_opts_default);\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\n\tbool deferred_work_generated = false;\n\n\tenum {NALLOCS = 8};\n\n\tedata_t *allocs[NALLOCS];\n\t/*\n\t * Allocate a mix of ways; first half from regular alloc, second half\n\t * from alloc_batch.\n\t */\n\tfor (size_t i = 0; i < NALLOCS / 2; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &shard->pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false,\n\t\t    /* frequent_reuse */ 
false, &deferred_work_generated);\n\t\texpect_ptr_not_null(allocs[i], \"Unexpected alloc failure\");\n\t}\n\tedata_list_active_t allocs_list;\n\tedata_list_active_init(&allocs_list);\n\tsize_t nsuccess = pai_alloc_batch(tsdn, &shard->pai, PAGE, NALLOCS / 2,\n\t    &allocs_list, &deferred_work_generated);\n\texpect_zu_eq(NALLOCS / 2, nsuccess, \"Unexpected oom\");\n\tfor (size_t i = NALLOCS / 2; i < NALLOCS; i++) {\n\t\tallocs[i] = edata_list_active_first(&allocs_list);\n\t\tedata_list_active_remove(&allocs_list, allocs[i]);\n\t}\n\n\t/*\n\t * Should have allocated them contiguously, despite the differing\n\t * methods used.\n\t */\n\tvoid *orig_base = edata_base_get(allocs[0]);\n\texpect_contiguous(allocs, NALLOCS);\n\n\t/*\n\t * Batch dalloc the first half, individually deallocate the second half.\n\t */\n\tfor (size_t i = 0; i < NALLOCS / 2; i++) {\n\t\tedata_list_active_append(&allocs_list, allocs[i]);\n\t}\n\tpai_dalloc_batch(tsdn, &shard->pai, &allocs_list,\n\t    &deferred_work_generated);\n\tfor (size_t i = NALLOCS / 2; i < NALLOCS; i++) {\n\t\tpai_dalloc(tsdn, &shard->pai, allocs[i],\n\t\t    &deferred_work_generated);\n\t}\n\n\t/* Reallocate (individually), and ensure reuse and contiguity. 
*/\n\tfor (size_t i = 0; i < NALLOCS; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &shard->pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_not_null(allocs[i], \"Unexpected alloc failure.\");\n\t}\n\tvoid *new_base = edata_base_get(allocs[0]);\n\texpect_ptr_eq(orig_base, new_base,\n\t    \"Failed to reuse the allocated memory.\");\n\texpect_contiguous(allocs, NALLOCS);\n\n\tdestroy_test_data(shard);\n}\nTEST_END\n\nstatic uintptr_t defer_bump_ptr = HUGEPAGE * 123;\nstatic void *\ndefer_test_map(size_t size) {\n\tvoid *result = (void *)defer_bump_ptr;\n\tdefer_bump_ptr += size;\n\treturn result;\n}\n\nstatic void\ndefer_test_unmap(void *ptr, size_t size) {\n\t(void)ptr;\n\t(void)size;\n}\n\nstatic bool defer_purge_called = false;\nstatic void\ndefer_test_purge(void *ptr, size_t size) {\n\t(void)ptr;\n\t(void)size;\n\tdefer_purge_called = true;\n}\n\nstatic bool defer_hugify_called = false;\nstatic void\ndefer_test_hugify(void *ptr, size_t size) {\n\tdefer_hugify_called = true;\n}\n\nstatic bool defer_dehugify_called = false;\nstatic void\ndefer_test_dehugify(void *ptr, size_t size) {\n\tdefer_dehugify_called = true;\n}\n\nstatic nstime_t defer_curtime;\nstatic void\ndefer_test_curtime(nstime_t *r_time, bool first_reading) {\n\t*r_time = defer_curtime;\n}\n\nstatic uint64_t\ndefer_test_ms_since(nstime_t *past_time) {\n\treturn (nstime_ns(&defer_curtime) - nstime_ns(past_time)) / 1000 / 1000;\n}\n\nTEST_BEGIN(test_defer_time) {\n\ttest_skip_if(!hpa_supported());\n\n\thpa_hooks_t hooks;\n\thooks.map = &defer_test_map;\n\thooks.unmap = &defer_test_unmap;\n\thooks.purge = &defer_test_purge;\n\thooks.hugify = &defer_test_hugify;\n\thooks.dehugify = &defer_test_dehugify;\n\thooks.curtime = &defer_test_curtime;\n\thooks.ms_since = &defer_test_ms_since;\n\n\thpa_shard_opts_t opts = test_hpa_shard_opts_default;\n\topts.deferral_allowed = true;\n\n\thpa_shard_t *shard = 
create_test_data(&hooks, &opts);\n\n\tbool deferred_work_generated = false;\n\n\tnstime_init(&defer_curtime, 0);\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tedata_t *edatas[HUGEPAGE_PAGES];\n\tfor (int i = 0; i < (int)HUGEPAGE_PAGES; i++) {\n\t\tedatas[i] = pai_alloc(tsdn, &shard->pai, PAGE, PAGE, false,\n\t\t    false, false, &deferred_work_generated);\n\t\texpect_ptr_not_null(edatas[i], \"Unexpected null edata\");\n\t}\n\thpa_shard_do_deferred_work(tsdn, shard);\n\texpect_false(defer_hugify_called, \"Hugified too early\");\n\n\t/* Hugification delay is set to 10 seconds in options. */\n\tnstime_init2(&defer_curtime, 11, 0);\n\thpa_shard_do_deferred_work(tsdn, shard);\n\texpect_true(defer_hugify_called, \"Failed to hugify\");\n\n\tdefer_hugify_called = false;\n\n\t/* Purge.  Recall that dirty_mult is .25. */\n\tfor (int i = 0; i < (int)HUGEPAGE_PAGES / 2; i++) {\n\t\tpai_dalloc(tsdn, &shard->pai, edatas[i],\n\t\t    &deferred_work_generated);\n\t}\n\n\thpa_shard_do_deferred_work(tsdn, shard);\n\n\texpect_false(defer_hugify_called, \"Hugified too early\");\n\texpect_true(defer_dehugify_called, \"Should have dehugified\");\n\texpect_true(defer_purge_called, \"Should have purged\");\n\tdefer_hugify_called = false;\n\tdefer_dehugify_called = false;\n\tdefer_purge_called = false;\n\n\t/*\n\t * Refill the page.  We now meet the hugification threshold; we should\n\t * be marked for pending hugify.\n\t */\n\tfor (int i = 0; i < (int)HUGEPAGE_PAGES / 2; i++) {\n\t\tedatas[i] = pai_alloc(tsdn, &shard->pai, PAGE, PAGE, false,\n\t\t    false, false, &deferred_work_generated);\n\t\texpect_ptr_not_null(edatas[i], \"Unexpected null edata\");\n\t}\n\t/*\n\t * We would be ineligible for hugification, had we not already met the\n\t * threshold before dipping below it.\n\t */\n\tpai_dalloc(tsdn, &shard->pai, edatas[0],\n\t    &deferred_work_generated);\n\t/* Wait for the threshold again. 
*/\n\tnstime_init2(&defer_curtime, 22, 0);\n\thpa_shard_do_deferred_work(tsdn, shard);\n\texpect_true(defer_hugify_called, \"Hugified too early\");\n\texpect_false(defer_dehugify_called, \"Unexpected dehugify\");\n\texpect_false(defer_purge_called, \"Unexpected purge\");\n\n\tdestroy_test_data(shard);\n}\nTEST_END\n\nint\nmain(void) {\n\t/*\n\t * These trigger unused-function warnings on CI runs, even if declared\n\t * with static inline.\n\t */\n\t(void)mem_tree_empty;\n\t(void)mem_tree_last;\n\t(void)mem_tree_search;\n\t(void)mem_tree_nsearch;\n\t(void)mem_tree_psearch;\n\t(void)mem_tree_iter;\n\t(void)mem_tree_reverse_iter;\n\t(void)mem_tree_destroy;\n\treturn test_no_reentrancy(\n\t    test_alloc_max,\n\t    test_stress,\n\t    test_alloc_dalloc_batch,\n\t    test_defer_time);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hpa_background_thread.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/sleep.h\"\n\nstatic void\nsleep_for_background_thread_interval() {\n\t/*\n\t * The sleep interval set in our .sh file is 50ms.  So it likely will\n\t * run if we sleep for four times that.\n\t */\n\tsleep_ns(200 * 1000 * 1000);\n}\n\nstatic unsigned\ncreate_arena() {\n\tunsigned arena_ind;\n\tsize_t sz;\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 2),\n\t    0, \"Unexpected mallctl() failure\");\n\treturn arena_ind;\n}\n\nstatic size_t\nget_empty_ndirty(unsigned arena_ind) {\n\tint err;\n\tsize_t ndirty_huge;\n\tsize_t ndirty_nonhuge;\n\tuint64_t epoch = 1;\n\tsize_t sz = sizeof(epoch);\n\terr = je_mallctl(\"epoch\", (void *)&epoch, &sz, (void *)&epoch,\n\t    sizeof(epoch));\n\texpect_d_eq(0, err, \"Unexpected mallctl() failure\");\n\n\tsize_t mib[6];\n\tsize_t miblen = sizeof(mib)/sizeof(mib[0]);\n\terr = mallctlnametomib(\n\t    \"stats.arenas.0.hpa_shard.empty_slabs.ndirty_nonhuge\", mib,\n\t    &miblen);\n\texpect_d_eq(0, err, \"Unexpected mallctlnametomib() failure\");\n\n\tsz = sizeof(ndirty_nonhuge);\n\tmib[2] = arena_ind;\n\terr = mallctlbymib(mib, miblen, &ndirty_nonhuge, &sz, NULL, 0);\n\texpect_d_eq(0, err, \"Unexpected mallctlbymib() failure\");\n\n\terr = mallctlnametomib(\n\t    \"stats.arenas.0.hpa_shard.empty_slabs.ndirty_huge\", mib,\n\t    &miblen);\n\texpect_d_eq(0, err, \"Unexpected mallctlnametomib() failure\");\n\n\tsz = sizeof(ndirty_huge);\n\tmib[2] = arena_ind;\n\terr = mallctlbymib(mib, miblen, &ndirty_huge, &sz, NULL, 0);\n\texpect_d_eq(0, err, \"Unexpected mallctlbymib() failure\");\n\n\treturn ndirty_huge + ndirty_nonhuge;\n}\n\nstatic void\nset_background_thread_enabled(bool enabled) {\n\tint err;\n\terr = je_mallctl(\"background_thread\", NULL, NULL, &enabled,\n\t    sizeof(enabled));\n\texpect_d_eq(0, err, \"Unexpected mallctl failure\");\n}\n\nstatic void\nwait_until_thread_is_enabled(unsigned arena_id) 
{\n\ttsd_t* tsd = tsd_fetch();\n\n\tbool sleeping = false;\n\tint iterations = 0;\n\tdo {\n\t\tbackground_thread_info_t *info =\n\t\t    background_thread_info_get(arena_id);\n\t\tmalloc_mutex_lock(tsd_tsdn(tsd), &info->mtx);\n\t\tmalloc_mutex_unlock(tsd_tsdn(tsd), &info->mtx);\n\t\tsleeping = background_thread_indefinite_sleep(info);\n\t\tassert_d_lt(iterations, UINT64_C(1000000),\n\t\t    \"Waiting for a thread to start for too long\");\n\t} while (!sleeping);\n}\n\nstatic void\nexpect_purging(unsigned arena_ind, bool expect_deferred) {\n\tsize_t empty_ndirty;\n\n\tempty_ndirty = get_empty_ndirty(arena_ind);\n\texpect_zu_eq(0, empty_ndirty, \"Expected arena to start unused.\");\n\n\t/*\n\t * It's possible that we get unlucky with our stats collection timing,\n\t * and the background thread runs in between the deallocation and the\n\t * stats collection.  So we retry 10 times, and see if we *ever* see\n\t * deferred reclamation.\n\t */\n\tbool observed_dirty_page = false;\n\tfor (int i = 0; i < 10; i++) {\n\t\tvoid *ptr = mallocx(PAGE,\n\t\t    MALLOCX_TCACHE_NONE | MALLOCX_ARENA(arena_ind));\n\t\tempty_ndirty = get_empty_ndirty(arena_ind);\n\t\texpect_zu_eq(0, empty_ndirty, \"All pages should be active\");\n\t\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\t\tempty_ndirty = get_empty_ndirty(arena_ind);\n\t\tif (expect_deferred) {\n\t\t\texpect_true(empty_ndirty == 0 || empty_ndirty == 1 ||\n\t\t\t    opt_prof, \"Unexpected extra dirty page count: %zu\",\n\t\t\t    empty_ndirty);\n\t\t} else {\n\t\t\tassert_zu_eq(0, empty_ndirty,\n\t\t\t    \"Saw dirty pages without deferred purging\");\n\t\t}\n\t\tif (empty_ndirty > 0) {\n\t\t\tobserved_dirty_page = true;\n\t\t\tbreak;\n\t\t}\n\t}\n\texpect_b_eq(expect_deferred, observed_dirty_page, \"\");\n\n\t/*\n\t * Under high concurrency / heavy test load (e.g. using run_test.sh),\n\t * the background thread may not get scheduled for a longer period of\n\t * time.  
Retry 100 times max before bailing out.\n\t */\n\tunsigned retry = 0;\n\twhile ((empty_ndirty = get_empty_ndirty(arena_ind)) > 0 &&\n\t    expect_deferred && (retry++ < 100)) {\n\t\tsleep_for_background_thread_interval();\n\t}\n\n\texpect_zu_eq(0, empty_ndirty, \"Should have seen a background purge\");\n}\n\nTEST_BEGIN(test_hpa_background_thread_purges) {\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(!hpa_supported());\n\ttest_skip_if(!have_background_thread);\n\t/* Skip since guarded pages cannot be allocated from hpa. */\n\ttest_skip_if(san_guard_enabled());\n\n\tunsigned arena_ind = create_arena();\n\t/*\n\t * Our .sh sets dirty mult to 0, so all dirty pages should get purged\n\t * any time any thread frees.\n\t */\n\texpect_purging(arena_ind, /* expect_deferred */ true);\n}\nTEST_END\n\nTEST_BEGIN(test_hpa_background_thread_enable_disable) {\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(!hpa_supported());\n\ttest_skip_if(!have_background_thread);\n\t/* Skip since guarded pages cannot be allocated from hpa. */\n\ttest_skip_if(san_guard_enabled());\n\n\tunsigned arena_ind = create_arena();\n\n\tset_background_thread_enabled(false);\n\texpect_purging(arena_ind, false);\n\n\tset_background_thread_enabled(true);\n\twait_until_thread_is_enabled(arena_ind);\n\texpect_purging(arena_ind, true);\n}\nTEST_END\n\nint\nmain(void) {\n\t/*\n\t * OK, this is a sort of nasty hack.  We don't want to add *another*\n\t * config option for HPA (the intent is that it becomes available on\n\t * more platforms over time, and we're trying to prune back config\n\t * options generally.  But we'll get initialization errors on other\n\t * platforms if we set hpa:true in the MALLOC_CONF (even if we set\n\t * abort_conf:false as well).  
So we reach into the internals and set\n\t * them directly, but only if we know that we're actually going to do\n\t * something nontrivial in the tests.\n\t */\n\tif (config_stats && hpa_supported() && have_background_thread) {\n\t\topt_hpa = true;\n\t\topt_background_thread = true;\n\t}\n\treturn test_no_reentrancy(\n\t    test_hpa_background_thread_purges,\n\t    test_hpa_background_thread_enable_disable);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hpa_background_thread.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"hpa_dirty_mult:0,hpa_min_purge_interval_ms:50,hpa_sec_nshards:0\"\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/hpdata.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define HPDATA_ADDR ((void *)(10 * HUGEPAGE))\n#define HPDATA_AGE 123\n\nTEST_BEGIN(test_reserve_alloc) {\n\thpdata_t hpdata;\n\thpdata_init(&hpdata, HPDATA_ADDR, HPDATA_AGE);\n\n\t/* Allocating a page at a time, we should do first fit. */\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\texpect_true(hpdata_consistent(&hpdata), \"\");\n\t\texpect_zu_eq(HUGEPAGE_PAGES - i,\n\t\t    hpdata_longest_free_range_get(&hpdata), \"\");\n\t\tvoid *alloc = hpdata_reserve_alloc(&hpdata, PAGE);\n\t\texpect_ptr_eq((char *)HPDATA_ADDR + i * PAGE, alloc, \"\");\n\t\texpect_true(hpdata_consistent(&hpdata), \"\");\n\t}\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(0, hpdata_longest_free_range_get(&hpdata), \"\");\n\n\t/*\n\t * Build up a bigger free-range, 2 pages at a time, until we've got 6\n\t * adjacent free pages total.  Pages 8-13 should be unreserved after\n\t * this.\n\t */\n\thpdata_unreserve(&hpdata, (char *)HPDATA_ADDR + 10 * PAGE, 2 * PAGE);\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(2, hpdata_longest_free_range_get(&hpdata), \"\");\n\n\thpdata_unreserve(&hpdata, (char *)HPDATA_ADDR + 12 * PAGE, 2 * PAGE);\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(4, hpdata_longest_free_range_get(&hpdata), \"\");\n\n\thpdata_unreserve(&hpdata, (char *)HPDATA_ADDR + 8 * PAGE, 2 * PAGE);\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(6, hpdata_longest_free_range_get(&hpdata), \"\");\n\n\t/*\n\t * Leave page 14 reserved, but free page 15 (this test the case where\n\t * unreserving combines two ranges).\n\t */\n\thpdata_unreserve(&hpdata, (char *)HPDATA_ADDR + 15 * PAGE, PAGE);\n\t/*\n\t * Longest free range shouldn't change; we've got a free range of size\n\t * 6, then a reserved page, then another free range.\n\t */\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(6, hpdata_longest_free_range_get(&hpdata), \"\");\n\n\t/* After 
freeing page 14, the two ranges get combined. */\n\thpdata_unreserve(&hpdata, (char *)HPDATA_ADDR + 14 * PAGE, PAGE);\n\texpect_true(hpdata_consistent(&hpdata), \"\");\n\texpect_zu_eq(8, hpdata_longest_free_range_get(&hpdata), \"\");\n}\nTEST_END\n\nTEST_BEGIN(test_purge_simple) {\n\thpdata_t hpdata;\n\thpdata_init(&hpdata, HPDATA_ADDR, HPDATA_AGE);\n\n\tvoid *alloc = hpdata_reserve_alloc(&hpdata, HUGEPAGE_PAGES / 2 * PAGE);\n\texpect_ptr_eq(alloc, HPDATA_ADDR, \"\");\n\n\t/* Create HUGEPAGE_PAGES / 4 dirty inactive pages at the beginning. */\n\thpdata_unreserve(&hpdata, alloc, HUGEPAGE_PAGES / 4 * PAGE);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), HUGEPAGE_PAGES / 2, \"\");\n\n\thpdata_alloc_allowed_set(&hpdata, false);\n\thpdata_purge_state_t purge_state;\n\tsize_t to_purge = hpdata_purge_begin(&hpdata, &purge_state);\n\texpect_zu_eq(HUGEPAGE_PAGES / 4, to_purge, \"\");\n\n\tvoid *purge_addr;\n\tsize_t purge_size;\n\tbool got_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_true(got_result, \"\");\n\texpect_ptr_eq(HPDATA_ADDR, purge_addr, \"\");\n\texpect_zu_eq(HUGEPAGE_PAGES / 4 * PAGE, purge_size, \"\");\n\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_false(got_result, \"Unexpected additional purge range: \"\n\t    \"extent at %p of size %zu\", purge_addr, purge_size);\n\n\thpdata_purge_end(&hpdata, &purge_state);\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), HUGEPAGE_PAGES / 4, \"\");\n}\nTEST_END\n\n/*\n * We only test intervening dalloc's not intervening allocs; the latter are\n * disallowed as a purging precondition (because they interfere with purging\n * across a retained extent, saving a purge call).\n */\nTEST_BEGIN(test_purge_intervening_dalloc) {\n\thpdata_t hpdata;\n\thpdata_init(&hpdata, HPDATA_ADDR, HPDATA_AGE);\n\n\t/* Allocate the first 3/4 of the pages. 
*/\n\tvoid *alloc = hpdata_reserve_alloc(&hpdata, 3 * HUGEPAGE_PAGES / 4  * PAGE);\n\texpect_ptr_eq(alloc, HPDATA_ADDR, \"\");\n\n\t/* Free the first 1/4 and the third 1/4 of the pages. */\n\thpdata_unreserve(&hpdata, alloc, HUGEPAGE_PAGES / 4 * PAGE);\n\thpdata_unreserve(&hpdata,\n\t    (void *)((uintptr_t)alloc + 2 * HUGEPAGE_PAGES / 4 * PAGE),\n\t    HUGEPAGE_PAGES / 4 * PAGE);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), 3 * HUGEPAGE_PAGES / 4, \"\");\n\n\thpdata_alloc_allowed_set(&hpdata, false);\n\thpdata_purge_state_t purge_state;\n\tsize_t to_purge = hpdata_purge_begin(&hpdata, &purge_state);\n\texpect_zu_eq(HUGEPAGE_PAGES / 2, to_purge, \"\");\n\n\tvoid *purge_addr;\n\tsize_t purge_size;\n\t/* First purge. */\n\tbool got_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_true(got_result, \"\");\n\texpect_ptr_eq(HPDATA_ADDR, purge_addr, \"\");\n\texpect_zu_eq(HUGEPAGE_PAGES / 4 * PAGE, purge_size, \"\");\n\n\t/* Deallocate the second 1/4 before the second purge occurs. */\n\thpdata_unreserve(&hpdata,\n\t    (void *)((uintptr_t)alloc + 1 * HUGEPAGE_PAGES / 4 * PAGE),\n\t    HUGEPAGE_PAGES / 4 * PAGE);\n\n\t/* Now continue purging. 
*/\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_true(got_result, \"\");\n\texpect_ptr_eq(\n\t    (void *)((uintptr_t)alloc + 2 * HUGEPAGE_PAGES / 4 * PAGE),\n\t    purge_addr, \"\");\n\texpect_zu_ge(HUGEPAGE_PAGES / 4 * PAGE, purge_size, \"\");\n\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_false(got_result, \"Unexpected additional purge range: \"\n\t    \"extent at %p of size %zu\", purge_addr, purge_size);\n\n\thpdata_purge_end(&hpdata, &purge_state);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), HUGEPAGE_PAGES / 4, \"\");\n}\nTEST_END\n\nTEST_BEGIN(test_purge_over_retained) {\n\tvoid *purge_addr;\n\tsize_t purge_size;\n\n\thpdata_t hpdata;\n\thpdata_init(&hpdata, HPDATA_ADDR, HPDATA_AGE);\n\n\t/* Allocate the first 3/4 of the pages. */\n\tvoid *alloc = hpdata_reserve_alloc(&hpdata, 3 * HUGEPAGE_PAGES / 4  * PAGE);\n\texpect_ptr_eq(alloc, HPDATA_ADDR, \"\");\n\n\t/* Free the second quarter. */\n\tvoid *second_quarter =\n\t    (void *)((uintptr_t)alloc + HUGEPAGE_PAGES / 4 * PAGE);\n\thpdata_unreserve(&hpdata, second_quarter, HUGEPAGE_PAGES / 4 * PAGE);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), 3 * HUGEPAGE_PAGES / 4, \"\");\n\n\t/* Purge the second quarter. 
*/\n\thpdata_alloc_allowed_set(&hpdata, false);\n\thpdata_purge_state_t purge_state;\n\tsize_t to_purge_dirty = hpdata_purge_begin(&hpdata, &purge_state);\n\texpect_zu_eq(HUGEPAGE_PAGES / 4, to_purge_dirty, \"\");\n\n\tbool got_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_true(got_result, \"\");\n\texpect_ptr_eq(second_quarter, purge_addr, \"\");\n\texpect_zu_eq(HUGEPAGE_PAGES / 4 * PAGE, purge_size, \"\");\n\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_false(got_result, \"Unexpected additional purge range: \"\n\t    \"extent at %p of size %zu\", purge_addr, purge_size);\n\thpdata_purge_end(&hpdata, &purge_state);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), HUGEPAGE_PAGES / 2, \"\");\n\n\t/* Free the first and third quarter. */\n\thpdata_unreserve(&hpdata, HPDATA_ADDR, HUGEPAGE_PAGES / 4 * PAGE);\n\thpdata_unreserve(&hpdata,\n\t    (void *)((uintptr_t)alloc + 2 * HUGEPAGE_PAGES / 4 * PAGE),\n\t    HUGEPAGE_PAGES / 4 * PAGE);\n\n\t/*\n\t * Purge again.  The second quarter is retained, so we can safely\n\t * re-purge it.  
We expect a single purge of 3/4 of the hugepage,\n\t * purging half its pages.\n\t */\n\tto_purge_dirty = hpdata_purge_begin(&hpdata, &purge_state);\n\texpect_zu_eq(HUGEPAGE_PAGES / 2, to_purge_dirty, \"\");\n\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_true(got_result, \"\");\n\texpect_ptr_eq(HPDATA_ADDR, purge_addr, \"\");\n\texpect_zu_eq(3 * HUGEPAGE_PAGES / 4 * PAGE, purge_size, \"\");\n\n\tgot_result = hpdata_purge_next(&hpdata, &purge_state, &purge_addr,\n\t    &purge_size);\n\texpect_false(got_result, \"Unexpected additional purge range: \"\n\t    \"extent at %p of size %zu\", purge_addr, purge_size);\n\thpdata_purge_end(&hpdata, &purge_state);\n\n\texpect_zu_eq(hpdata_ntouched_get(&hpdata), 0, \"\");\n}\nTEST_END\n\nTEST_BEGIN(test_hugify) {\n\thpdata_t hpdata;\n\thpdata_init(&hpdata, HPDATA_ADDR, HPDATA_AGE);\n\n\tvoid *alloc = hpdata_reserve_alloc(&hpdata, HUGEPAGE / 2);\n\texpect_ptr_eq(alloc, HPDATA_ADDR, \"\");\n\n\texpect_zu_eq(HUGEPAGE_PAGES / 2, hpdata_ntouched_get(&hpdata), \"\");\n\n\thpdata_hugify(&hpdata);\n\n\t/* Hugeifying should have increased the dirty page count. */\n\texpect_zu_eq(HUGEPAGE_PAGES, hpdata_ntouched_get(&hpdata), \"\");\n}\nTEST_END\n\nint main(void) {\n\treturn test_no_reentrancy(\n\t    test_reserve_alloc,\n\t    test_purge_simple,\n\t    test_purge_intervening_dalloc,\n\t    test_purge_over_retained,\n\t    test_hugify);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/huge.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/* Threshold: 2 << 20 = 2097152. */\nconst char *malloc_conf = \"oversize_threshold:2097152\";\n\n#define HUGE_SZ (2 << 20)\n#define SMALL_SZ (8)\n\nTEST_BEGIN(huge_bind_thread) {\n\tunsigned arena1, arena2;\n\tsize_t sz = sizeof(unsigned);\n\n\t/* Bind to a manual arena. */\n\texpect_d_eq(mallctl(\"arenas.create\", &arena1, &sz, NULL, 0), 0,\n\t    \"Failed to create arena\");\n\texpect_d_eq(mallctl(\"thread.arena\", NULL, NULL, &arena1,\n\t    sizeof(arena1)), 0, \"Fail to bind thread\");\n\n\tvoid *ptr = mallocx(HUGE_SZ, 0);\n\texpect_ptr_not_null(ptr, \"Fail to allocate huge size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &ptr,\n\t    sizeof(ptr)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_eq(arena1, arena2, \"Wrong arena used after binding\");\n\tdallocx(ptr, 0);\n\n\t/* Switch back to arena 0. */\n\ttest_skip_if(have_percpu_arena &&\n\t    PERCPU_ARENA_ENABLED(opt_percpu_arena));\n\tarena2 = 0;\n\texpect_d_eq(mallctl(\"thread.arena\", NULL, NULL, &arena2,\n\t    sizeof(arena2)), 0, \"Fail to bind thread\");\n\tptr = mallocx(SMALL_SZ, MALLOCX_TCACHE_NONE);\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &ptr,\n\t    sizeof(ptr)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_eq(arena2, 0, \"Wrong arena used after binding\");\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\n\t/* Then huge allocation should use the huge arena. 
*/\n\tptr = mallocx(HUGE_SZ, 0);\n\texpect_ptr_not_null(ptr, \"Fail to allocate huge size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &ptr,\n\t    sizeof(ptr)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_ne(arena2, 0, \"Wrong arena used after binding\");\n\texpect_u_ne(arena1, arena2, \"Wrong arena used after binding\");\n\tdallocx(ptr, 0);\n}\nTEST_END\n\nTEST_BEGIN(huge_mallocx) {\n\tunsigned arena1, arena2;\n\tsize_t sz = sizeof(unsigned);\n\n\texpect_d_eq(mallctl(\"arenas.create\", &arena1, &sz, NULL, 0), 0,\n\t    \"Failed to create arena\");\n\tvoid *huge = mallocx(HUGE_SZ, MALLOCX_ARENA(arena1));\n\texpect_ptr_not_null(huge, \"Fail to allocate huge size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &huge,\n\t    sizeof(huge)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_eq(arena1, arena2, \"Wrong arena used for mallocx\");\n\tdallocx(huge, MALLOCX_ARENA(arena1));\n\n\tvoid *huge2 = mallocx(HUGE_SZ, 0);\n\texpect_ptr_not_null(huge, \"Fail to allocate huge size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &huge2,\n\t    sizeof(huge2)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_ne(arena1, arena2,\n\t    \"Huge allocation should not come from the manual arena.\");\n\texpect_u_ne(arena2, 0,\n\t    \"Huge allocation should not come from the arena 0.\");\n\tdallocx(huge2, 0);\n}\nTEST_END\n\nTEST_BEGIN(huge_allocation) {\n\tunsigned arena1, arena2;\n\n\tvoid *ptr = mallocx(HUGE_SZ, 0);\n\texpect_ptr_not_null(ptr, \"Fail to allocate huge size\");\n\tsize_t sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena1, &sz, &ptr, sizeof(ptr)),\n\t    0, \"Unexpected mallctl() failure\");\n\texpect_u_gt(arena1, 0, \"Huge allocation should not come from arena 0\");\n\tdallocx(ptr, 0);\n\n\tptr = mallocx(HUGE_SZ >> 1, 0);\n\texpect_ptr_not_null(ptr, \"Fail to allocate half huge size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &ptr,\n\t    sizeof(ptr)), 0, \"Unexpected mallctl() 
failure\");\n\texpect_u_ne(arena1, arena2, \"Wrong arena used for half huge\");\n\tdallocx(ptr, 0);\n\n\tptr = mallocx(SMALL_SZ, MALLOCX_TCACHE_NONE);\n\texpect_ptr_not_null(ptr, \"Fail to allocate small size\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena2, &sz, &ptr,\n\t    sizeof(ptr)), 0, \"Unexpected mallctl() failure\");\n\texpect_u_ne(arena1, arena2,\n\t    \"Huge and small should be from different arenas\");\n\tdallocx(ptr, 0);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    huge_allocation,\n\t    huge_mallocx,\n\t    huge_bind_thread);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/inspect.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define TEST_UTIL_EINVAL(node, a, b, c, d, why_inval) do {\t\t\\\n\tassert_d_eq(mallctl(\"experimental.utilization.\" node,\t\t\\\n\t    a, b, c, d), EINVAL, \"Should fail when \" why_inval);\t\\\n\tassert_zu_eq(out_sz, out_sz_ref,\t\t\t\t\\\n\t    \"Output size touched when given invalid arguments\");\t\\\n\tassert_d_eq(memcmp(out, out_ref, out_sz_ref), 0,\t\t\\\n\t    \"Output content touched when given invalid arguments\");\t\\\n} while (0)\n\n#define TEST_UTIL_QUERY_EINVAL(a, b, c, d, why_inval)\t\t\t\\\n\tTEST_UTIL_EINVAL(\"query\", a, b, c, d, why_inval)\n#define TEST_UTIL_BATCH_EINVAL(a, b, c, d, why_inval)\t\t\t\\\n\tTEST_UTIL_EINVAL(\"batch_query\", a, b, c, d, why_inval)\n\n#define TEST_UTIL_VALID(node) do {\t\t\t\t\t\\\n        assert_d_eq(mallctl(\"experimental.utilization.\" node,\t\t\\\n\t    out, &out_sz, in, in_sz), 0,\t\t\t\t\\\n\t    \"Should return 0 on correct arguments\");\t\t\t\\\n        expect_zu_eq(out_sz, out_sz_ref, \"incorrect output size\");\t\\\n\texpect_d_ne(memcmp(out, out_ref, out_sz_ref), 0,\t\t\\\n\t    \"Output content should be changed\");\t\t\t\\\n} while (0)\n\n#define TEST_UTIL_BATCH_VALID TEST_UTIL_VALID(\"batch_query\")\n\n#define TEST_MAX_SIZE (1 << 20)\n\nTEST_BEGIN(test_query) {\n\tsize_t sz;\n\t/*\n\t * Select some sizes that can span both small and large sizes, and are\n\t * numerically unrelated to any size boundaries.\n\t */\n\tfor (sz = 7; sz <= TEST_MAX_SIZE && sz <= SC_LARGE_MAXCLASS;\n\t    sz += (sz <= SC_SMALL_MAXCLASS ? 
1009 : 99989)) {\n\t\tvoid *p = mallocx(sz, 0);\n\t\tvoid **in = &p;\n\t\tsize_t in_sz = sizeof(const void *);\n\t\tsize_t out_sz = sizeof(void *) + sizeof(size_t) * 5;\n\t\tvoid *out = mallocx(out_sz, 0);\n\t\tvoid *out_ref = mallocx(out_sz, 0);\n\t\tsize_t out_sz_ref = out_sz;\n\n\t\tassert_ptr_not_null(p,\n\t\t    \"test pointer allocation failed\");\n\t\tassert_ptr_not_null(out,\n\t\t    \"test output allocation failed\");\n\t\tassert_ptr_not_null(out_ref,\n\t\t    \"test reference output allocation failed\");\n\n#define SLABCUR_READ(out) (*(void **)out)\n#define COUNTS(out) ((size_t *)((void **)out + 1))\n#define NFREE_READ(out) COUNTS(out)[0]\n#define NREGS_READ(out) COUNTS(out)[1]\n#define SIZE_READ(out) COUNTS(out)[2]\n#define BIN_NFREE_READ(out) COUNTS(out)[3]\n#define BIN_NREGS_READ(out) COUNTS(out)[4]\n\n\t\tSLABCUR_READ(out) = NULL;\n\t\tNFREE_READ(out) = NREGS_READ(out) = SIZE_READ(out) = -1;\n\t\tBIN_NFREE_READ(out) = BIN_NREGS_READ(out) = -1;\n\t\tmemcpy(out_ref, out, out_sz);\n\n\t\t/* Test invalid argument(s) errors */\n\t\tTEST_UTIL_QUERY_EINVAL(NULL, &out_sz, in, in_sz,\n\t\t    \"old is NULL\");\n\t\tTEST_UTIL_QUERY_EINVAL(out, NULL, in, in_sz,\n\t\t    \"oldlenp is NULL\");\n\t\tTEST_UTIL_QUERY_EINVAL(out, &out_sz, NULL, in_sz,\n\t\t    \"newp is NULL\");\n\t\tTEST_UTIL_QUERY_EINVAL(out, &out_sz, in, 0,\n\t\t    \"newlen is zero\");\n\t\tin_sz -= 1;\n\t\tTEST_UTIL_QUERY_EINVAL(out, &out_sz, in, in_sz,\n\t\t    \"invalid newlen\");\n\t\tin_sz += 1;\n\t\tout_sz_ref = out_sz -= 2 * sizeof(size_t);\n\t\tTEST_UTIL_QUERY_EINVAL(out, &out_sz, in, in_sz,\n\t\t    \"invalid *oldlenp\");\n\t\tout_sz_ref = out_sz += 2 * sizeof(size_t);\n\n\t\t/* Examine output for valid call */\n\t\tTEST_UTIL_VALID(\"query\");\n\t\texpect_zu_le(sz, SIZE_READ(out),\n\t\t    \"Extent size should be at least allocation size\");\n\t\texpect_zu_eq(SIZE_READ(out) & (PAGE - 1), 0,\n\t\t    \"Extent size should be a multiple of page size\");\n\n\t\t/*\n\t\t * We don't do much bin 
checking if prof is on, since profiling\n\t\t * can produce extents that are for small size classes but not\n\t\t * slabs, which interferes with things like region counts.\n\t\t */\n\t\tif (!opt_prof && sz <= SC_SMALL_MAXCLASS) {\n\t\t\texpect_zu_le(NFREE_READ(out), NREGS_READ(out),\n\t\t\t    \"Extent free count exceeded region count\");\n\t\t\texpect_zu_le(NREGS_READ(out), SIZE_READ(out),\n\t\t\t    \"Extent region count exceeded size\");\n\t\t\texpect_zu_ne(NREGS_READ(out), 0,\n\t\t\t    \"Extent region count must be positive\");\n\t\t\texpect_true(NFREE_READ(out) == 0 || (SLABCUR_READ(out)\n\t\t\t    != NULL && SLABCUR_READ(out) <= p),\n\t\t\t    \"Allocation should follow first fit principle\");\n\n\t\t\tif (config_stats) {\n\t\t\t\texpect_zu_le(BIN_NFREE_READ(out),\n\t\t\t\t    BIN_NREGS_READ(out),\n\t\t\t\t    \"Bin free count exceeded region count\");\n\t\t\t\texpect_zu_ne(BIN_NREGS_READ(out), 0,\n\t\t\t\t    \"Bin region count must be positive\");\n\t\t\t\texpect_zu_le(NFREE_READ(out),\n\t\t\t\t    BIN_NFREE_READ(out),\n\t\t\t\t    \"Extent free count exceeded bin free count\");\n\t\t\t\texpect_zu_le(NREGS_READ(out),\n\t\t\t\t    BIN_NREGS_READ(out),\n\t\t\t\t    \"Extent region count exceeded \"\n\t\t\t\t    \"bin region count\");\n\t\t\t\texpect_zu_eq(BIN_NREGS_READ(out)\n\t\t\t\t    % NREGS_READ(out), 0,\n\t\t\t\t    \"Bin region count isn't a multiple of \"\n\t\t\t\t    \"extent region count\");\n\t\t\t\texpect_zu_le(\n\t\t\t\t    BIN_NFREE_READ(out) - NFREE_READ(out),\n\t\t\t\t    BIN_NREGS_READ(out) - NREGS_READ(out),\n\t\t\t\t    \"Free count in other extents in the bin \"\n\t\t\t\t    \"exceeded region count in other extents \"\n\t\t\t\t    \"in the bin\");\n\t\t\t\texpect_zu_le(NREGS_READ(out) - NFREE_READ(out),\n\t\t\t\t    BIN_NREGS_READ(out) - BIN_NFREE_READ(out),\n\t\t\t\t    \"Extent utilized count exceeded \"\n\t\t\t\t    \"bin utilized count\");\n\t\t\t}\n\t\t} else if (sz > SC_SMALL_MAXCLASS) {\n\t\t\texpect_zu_eq(NFREE_READ(out), 
0,\n\t\t\t    \"Extent free count should be zero\");\n\t\t\texpect_zu_eq(NREGS_READ(out), 1,\n\t\t\t    \"Extent region count should be one\");\n\t\t\texpect_ptr_null(SLABCUR_READ(out),\n\t\t\t    \"Current slab must be null for large size classes\");\n\t\t\tif (config_stats) {\n\t\t\t\texpect_zu_eq(BIN_NFREE_READ(out), 0,\n\t\t\t\t    \"Bin free count must be zero for \"\n\t\t\t\t    \"large sizes\");\n\t\t\t\texpect_zu_eq(BIN_NREGS_READ(out), 0,\n\t\t\t\t    \"Bin region count must be zero for \"\n\t\t\t\t    \"large sizes\");\n\t\t\t}\n\t\t}\n\n#undef BIN_NREGS_READ\n#undef BIN_NFREE_READ\n#undef SIZE_READ\n#undef NREGS_READ\n#undef NFREE_READ\n#undef COUNTS\n#undef SLABCUR_READ\n\n\t\tfree(out_ref);\n\t\tfree(out);\n\t\tfree(p);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_batch) {\n\tsize_t sz;\n\t/*\n\t * Select some sizes that can span both small and large sizes, and are\n\t * numerically unrelated to any size boundaries.\n\t */\n\tfor (sz = 17; sz <= TEST_MAX_SIZE && sz <= SC_LARGE_MAXCLASS;\n\t    sz += (sz <= SC_SMALL_MAXCLASS ? 
1019 : 99991)) {\n\t\tvoid *p = mallocx(sz, 0);\n\t\tvoid *q = mallocx(sz, 0);\n\t\tvoid *in[] = {p, q};\n\t\tsize_t in_sz = sizeof(const void *) * 2;\n\t\tsize_t out[] = {-1, -1, -1, -1, -1, -1};\n\t\tsize_t out_sz = sizeof(size_t) * 6;\n\t\tsize_t out_ref[] = {-1, -1, -1, -1, -1, -1};\n\t\tsize_t out_sz_ref = out_sz;\n\n\t\tassert_ptr_not_null(p, \"test pointer allocation failed\");\n\t\tassert_ptr_not_null(q, \"test pointer allocation failed\");\n\n\t\t/* Test invalid argument(s) errors */\n\t\tTEST_UTIL_BATCH_EINVAL(NULL, &out_sz, in, in_sz,\n\t\t    \"old is NULL\");\n\t\tTEST_UTIL_BATCH_EINVAL(out, NULL, in, in_sz,\n\t\t    \"oldlenp is NULL\");\n\t\tTEST_UTIL_BATCH_EINVAL(out, &out_sz, NULL, in_sz,\n\t\t    \"newp is NULL\");\n\t\tTEST_UTIL_BATCH_EINVAL(out, &out_sz, in, 0,\n\t\t    \"newlen is zero\");\n\t\tin_sz -= 1;\n\t\tTEST_UTIL_BATCH_EINVAL(out, &out_sz, in, in_sz,\n\t\t    \"newlen is not an exact multiple\");\n\t\tin_sz += 1;\n\t\tout_sz_ref = out_sz -= 2 * sizeof(size_t);\n\t\tTEST_UTIL_BATCH_EINVAL(out, &out_sz, in, in_sz,\n\t\t    \"*oldlenp is not an exact multiple\");\n\t\tout_sz_ref = out_sz += 2 * sizeof(size_t);\n\t\tin_sz -= sizeof(const void *);\n\t\tTEST_UTIL_BATCH_EINVAL(out, &out_sz, in, in_sz,\n\t\t    \"*oldlenp and newlen do not match\");\n\t\tin_sz += sizeof(const void *);\n\n\t/* Examine output for valid calls */\n#define TEST_EQUAL_REF(i, message) \\\n\tassert_d_eq(memcmp(out + (i) * 3, out_ref + (i) * 3, 3), 0, message)\n\n#define NFREE_READ(out, i) out[(i) * 3]\n#define NREGS_READ(out, i) out[(i) * 3 + 1]\n#define SIZE_READ(out, i) out[(i) * 3 + 2]\n\n\t\tout_sz_ref = out_sz /= 2;\n\t\tin_sz /= 2;\n\t\tTEST_UTIL_BATCH_VALID;\n\t\texpect_zu_le(sz, SIZE_READ(out, 0),\n\t\t    \"Extent size should be at least allocation size\");\n\t\texpect_zu_eq(SIZE_READ(out, 0) & (PAGE - 1), 0,\n\t\t    \"Extent size should be a multiple of page size\");\n\t\t/*\n\t\t * See the corresponding comment in test_query; profiling breaks\n\t\t * our 
slab count expectations.\n\t\t */\n\t\tif (sz <= SC_SMALL_MAXCLASS && !opt_prof) {\n\t\t\texpect_zu_le(NFREE_READ(out, 0), NREGS_READ(out, 0),\n\t\t\t    \"Extent free count exceeded region count\");\n\t\t\texpect_zu_le(NREGS_READ(out, 0), SIZE_READ(out, 0),\n\t\t\t    \"Extent region count exceeded size\");\n\t\t\texpect_zu_ne(NREGS_READ(out, 0), 0,\n\t\t\t    \"Extent region count must be positive\");\n\t\t} else if (sz > SC_SMALL_MAXCLASS) {\n\t\t\texpect_zu_eq(NFREE_READ(out, 0), 0,\n\t\t\t    \"Extent free count should be zero\");\n\t\t\texpect_zu_eq(NREGS_READ(out, 0), 1,\n\t\t\t    \"Extent region count should be one\");\n\t\t}\n\t\tTEST_EQUAL_REF(1,\n\t\t    \"Should not overwrite content beyond what's needed\");\n\t\tin_sz *= 2;\n\t\tout_sz_ref = out_sz *= 2;\n\n\t\tmemcpy(out_ref, out, 3 * sizeof(size_t));\n\t\tTEST_UTIL_BATCH_VALID;\n\t\tTEST_EQUAL_REF(0, \"Statistics should be stable across calls\");\n\t\tif (sz <= SC_SMALL_MAXCLASS) {\n\t\t\texpect_zu_le(NFREE_READ(out, 1), NREGS_READ(out, 1),\n\t\t\t    \"Extent free count exceeded region count\");\n\t\t} else {\n\t\t\texpect_zu_eq(NFREE_READ(out, 0), 0,\n\t\t\t    \"Extent free count should be zero\");\n\t\t}\n\t\texpect_zu_eq(NREGS_READ(out, 0), NREGS_READ(out, 1),\n\t\t    \"Extent region count should be same for same region size\");\n\t\texpect_zu_eq(SIZE_READ(out, 0), SIZE_READ(out, 1),\n\t\t    \"Extent size should be same for same region size\");\n\n#undef SIZE_READ\n#undef NREGS_READ\n#undef NFREE_READ\n\n#undef TEST_EQUAL_REF\n\n\t\tfree(q);\n\t\tfree(p);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\tassert_zu_lt(SC_SMALL_MAXCLASS + 100000, TEST_MAX_SIZE,\n\t    \"Test case cannot cover large classes\");\n\treturn test(test_query, test_batch);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/inspect.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define arraylen(arr) (sizeof(arr)/sizeof(arr[0]))\nstatic size_t ptr_ind;\nstatic void *volatile ptrs[100];\nstatic void *last_junked_ptr;\nstatic size_t last_junked_usize;\n\nstatic void\nreset() {\n\tptr_ind = 0;\n\tlast_junked_ptr = NULL;\n\tlast_junked_usize = 0;\n}\n\nstatic void\ntest_junk(void *ptr, size_t usize) {\n\tlast_junked_ptr = ptr;\n\tlast_junked_usize = usize;\n}\n\nstatic void\ndo_allocs(size_t size, bool zero, size_t lg_align) {\n#define JUNK_ALLOC(...)\t\t\t\t\t\t\t\\\n\tdo {\t\t\t\t\t\t\t\t\\\n\t\tassert(ptr_ind + 1 < arraylen(ptrs));\t\t\t\\\n\t\tvoid *ptr = __VA_ARGS__;\t\t\t\t\\\n\t\tassert_ptr_not_null(ptr, \"\");\t\t\t\t\\\n\t\tptrs[ptr_ind++] = ptr;\t\t\t\t\t\\\n\t\tif (opt_junk_alloc && !zero) {\t\t\t\t\\\n\t\t\texpect_ptr_eq(ptr, last_junked_ptr, \"\");\t\\\n\t\t\texpect_zu_eq(last_junked_usize,\t\t\t\\\n\t\t\t    TEST_MALLOC_SIZE(ptr), \"\");\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t} while (0)\n\tif (!zero && lg_align == 0) {\n\t\tJUNK_ALLOC(malloc(size));\n\t}\n\tif (!zero) {\n\t\tJUNK_ALLOC(aligned_alloc(1 << lg_align, size));\n\t}\n#ifdef JEMALLOC_OVERRIDE_MEMALIGN\n\tif (!zero) {\n\t\tJUNK_ALLOC(je_memalign(1 << lg_align, size));\n\t}\n#endif\n#ifdef JEMALLOC_OVERRIDE_VALLOC\n\tif (!zero && lg_align == LG_PAGE) {\n\t\tJUNK_ALLOC(je_valloc(size));\n\t}\n#endif\n\tint zero_flag = zero ? 
MALLOCX_ZERO : 0;\n\tJUNK_ALLOC(mallocx(size, zero_flag | MALLOCX_LG_ALIGN(lg_align)));\n\tJUNK_ALLOC(mallocx(size, zero_flag | MALLOCX_LG_ALIGN(lg_align)\n\t    | MALLOCX_TCACHE_NONE));\n\tif (lg_align >= LG_SIZEOF_PTR) {\n\t\tvoid *memalign_result;\n\t\tint err = posix_memalign(&memalign_result, (1 << lg_align),\n\t\t    size);\n\t\tassert_d_eq(err, 0, \"\");\n\t\tJUNK_ALLOC(memalign_result);\n\t}\n}\n\nTEST_BEGIN(test_junk_alloc_free) {\n\tbool zerovals[] = {false, true};\n\tsize_t sizevals[] = {\n\t\t1, 8, 100, 1000, 100*1000\n\t/*\n\t * Memory allocation failure is a real possibility in 32-bit mode.\n\t * Rather than try to check in the face of resource exhaustion, we just\n\t * rely more on the 64-bit tests.  This is a little bit white-box-y in\n\t * the sense that this is only a good test strategy if we know that the\n\t * junk pathways don't touch interact with the allocation selection\n\t * mechanisms; but this is in fact the case.\n\t */\n#if LG_SIZEOF_PTR == 3\n\t\t    , 10 * 1000 * 1000\n#endif\n\t};\n\tsize_t lg_alignvals[] = {\n\t\t0, 4, 10, 15, 16, LG_PAGE\n#if LG_SIZEOF_PTR == 3\n\t\t    , 20, 24\n#endif\n\t};\n\n#define JUNK_FREE(...)\t\t\t\t\t\t\t\\\n\tdo {\t\t\t\t\t\t\t\t\\\n\t\tdo_allocs(size, zero, lg_align);\t\t\t\\\n\t\tfor (size_t n = 0; n < ptr_ind; n++) {\t\t\t\\\n\t\t\tvoid *ptr = ptrs[n];\t\t\t\t\\\n\t\t\t__VA_ARGS__;\t\t\t\t\t\\\n\t\t\tif (opt_junk_free) {\t\t\t\t\\\n\t\t\t\tassert_ptr_eq(ptr, last_junked_ptr,\t\\\n\t\t\t\t    \"\");\t\t\t\t\\\n\t\t\t\tassert_zu_eq(usize, last_junked_usize,\t\\\n\t\t\t\t    \"\");\t\t\t\t\\\n\t\t\t}\t\t\t\t\t\t\\\n\t\t\treset();\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t} while (0)\n\tfor (size_t i = 0; i < arraylen(zerovals); i++) {\n\t\tfor (size_t j = 0; j < arraylen(sizevals); j++) {\n\t\t\tfor (size_t k = 0; k < arraylen(lg_alignvals); k++) {\n\t\t\t\tbool zero = zerovals[i];\n\t\t\t\tsize_t size = sizevals[j];\n\t\t\t\tsize_t lg_align = lg_alignvals[k];\n\t\t\t\tsize_t usize = nallocx(size,\n\t\t\t\t  
  MALLOCX_LG_ALIGN(lg_align));\n\n\t\t\t\tJUNK_FREE(free(ptr));\n\t\t\t\tJUNK_FREE(dallocx(ptr, 0));\n\t\t\t\tJUNK_FREE(dallocx(ptr, MALLOCX_TCACHE_NONE));\n\t\t\t\tJUNK_FREE(dallocx(ptr, MALLOCX_LG_ALIGN(\n\t\t\t\t    lg_align)));\n\t\t\t\tJUNK_FREE(sdallocx(ptr, usize, MALLOCX_LG_ALIGN(\n\t\t\t\t    lg_align)));\n\t\t\t\tJUNK_FREE(sdallocx(ptr, usize,\n\t\t\t\t    MALLOCX_TCACHE_NONE | MALLOCX_LG_ALIGN(lg_align)));\n\t\t\t\tif (opt_zero_realloc_action\n\t\t\t\t    == zero_realloc_action_free) {\n\t\t\t\t\tJUNK_FREE(realloc(ptr, 0));\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_realloc_expand) {\n\tchar *volatile ptr;\n\tchar *volatile expanded;\n\n\ttest_skip_if(!opt_junk_alloc);\n\n\t/* Realloc */\n\tptr = malloc(SC_SMALL_MAXCLASS);\n\texpanded = realloc(ptr, SC_LARGE_MINCLASS);\n\texpect_ptr_eq(last_junked_ptr, &expanded[SC_SMALL_MAXCLASS], \"\");\n\texpect_zu_eq(last_junked_usize,\n\t    SC_LARGE_MINCLASS - SC_SMALL_MAXCLASS, \"\");\n\tfree(expanded);\n\n\t/* rallocx(..., 0) */\n\tptr = malloc(SC_SMALL_MAXCLASS);\n\texpanded = rallocx(ptr, SC_LARGE_MINCLASS, 0);\n\texpect_ptr_eq(last_junked_ptr, &expanded[SC_SMALL_MAXCLASS], \"\");\n\texpect_zu_eq(last_junked_usize,\n\t    SC_LARGE_MINCLASS - SC_SMALL_MAXCLASS, \"\");\n\tfree(expanded);\n\n\t/* rallocx(..., nonzero) */\n\tptr = malloc(SC_SMALL_MAXCLASS);\n\texpanded = rallocx(ptr, SC_LARGE_MINCLASS, MALLOCX_TCACHE_NONE);\n\texpect_ptr_eq(last_junked_ptr, &expanded[SC_SMALL_MAXCLASS], \"\");\n\texpect_zu_eq(last_junked_usize,\n\t    SC_LARGE_MINCLASS - SC_SMALL_MAXCLASS, \"\");\n\tfree(expanded);\n\n\t/* rallocx(..., MALLOCX_ZERO) */\n\tptr = malloc(SC_SMALL_MAXCLASS);\n\tlast_junked_ptr = (void *)-1;\n\tlast_junked_usize = (size_t)-1;\n\texpanded = rallocx(ptr, SC_LARGE_MINCLASS, MALLOCX_ZERO);\n\texpect_ptr_eq(last_junked_ptr, (void *)-1, \"\");\n\texpect_zu_eq(last_junked_usize, (size_t)-1, \"\");\n\tfree(expanded);\n\n\t/*\n\t * Unfortunately, testing xallocx reliably is difficult to do 
portably\n\t * (since allocations can be expanded / not expanded differently on\n\t * different platforms.  We rely on manual inspection there -- the\n\t * xallocx pathway is easy to inspect, though.\n\t *\n\t * Likewise, we don't test the shrinking pathways.  It's difficult to do\n\t * so consistently (because of the risk of split failure or memory\n\t * exhaustion, in which case no junking should happen).  This is fine\n\t * -- junking is a best-effort debug mechanism in the first place.\n\t */\n}\nTEST_END\n\nint\nmain(void) {\n\tjunk_alloc_callback = &test_junk;\n\tjunk_free_callback = &test_junk;\n\t/*\n\t * We check the last pointer junked.  If a reentrant call happens, that\n\t * might be an internal allocation.\n\t */\n\treturn test_no_reentrancy(\n\t    test_junk_alloc_free,\n\t    test_realloc_expand);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"abort:false,zero:false,junk:true\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk_alloc.c",
    "content": "#include \"junk.c\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk_alloc.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"abort:false,zero:false,junk:alloc\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk_free.c",
    "content": "#include \"junk.c\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/junk_free.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"abort:false,zero:false,junk:free\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/log.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/log.h\"\n\nstatic void\nupdate_log_var_names(const char *names) {\n\tstrncpy(log_var_names, names, sizeof(log_var_names));\n}\n\nstatic void\nexpect_no_logging(const char *names) {\n\tlog_var_t log_l1 = LOG_VAR_INIT(\"l1\");\n\tlog_var_t log_l2 = LOG_VAR_INIT(\"l2\");\n\tlog_var_t log_l2_a = LOG_VAR_INIT(\"l2.a\");\n\n\tupdate_log_var_names(names);\n\n\tint count = 0;\n\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(log_l1)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1)\n\n\t\tlog_do_begin(log_l2)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2)\n\n\t\tlog_do_begin(log_l2_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2_a)\n\t}\n\texpect_d_eq(count, 0, \"Disabled logging not ignored!\");\n}\n\nTEST_BEGIN(test_log_disabled) {\n\ttest_skip_if(!config_log);\n\tatomic_store_b(&log_init_done, true, ATOMIC_RELAXED);\n\texpect_no_logging(\"\");\n\texpect_no_logging(\"abc\");\n\texpect_no_logging(\"a.b.c\");\n\texpect_no_logging(\"l12\");\n\texpect_no_logging(\"l123|a456|b789\");\n\texpect_no_logging(\"|||\");\n}\nTEST_END\n\nTEST_BEGIN(test_log_enabled_direct) {\n\ttest_skip_if(!config_log);\n\tatomic_store_b(&log_init_done, true, ATOMIC_RELAXED);\n\tlog_var_t log_l1 = LOG_VAR_INIT(\"l1\");\n\tlog_var_t log_l1_a = LOG_VAR_INIT(\"l1.a\");\n\tlog_var_t log_l2 = LOG_VAR_INIT(\"l2\");\n\n\tint count;\n\n\tcount = 0;\n\tupdate_log_var_names(\"l1\");\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(log_l1)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1)\n\t}\n\texpect_d_eq(count, 10, \"Mis-logged!\");\n\n\tcount = 0;\n\tupdate_log_var_names(\"l1.a\");\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(log_l1_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1_a)\n\t}\n\texpect_d_eq(count, 10, \"Mis-logged!\");\n\n\tcount = 0;\n\tupdate_log_var_names(\"l1.a|abc|l2|def\");\n\tfor (int i = 0; i < 10; i++) 
{\n\t\tlog_do_begin(log_l1_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1_a)\n\n\t\tlog_do_begin(log_l2)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2)\n\t}\n\texpect_d_eq(count, 20, \"Mis-logged!\");\n}\nTEST_END\n\nTEST_BEGIN(test_log_enabled_indirect) {\n\ttest_skip_if(!config_log);\n\tatomic_store_b(&log_init_done, true, ATOMIC_RELAXED);\n\tupdate_log_var_names(\"l0|l1|abc|l2.b|def\");\n\n\t/* On. */\n\tlog_var_t log_l1 = LOG_VAR_INIT(\"l1\");\n\t/* Off. */\n\tlog_var_t log_l1a = LOG_VAR_INIT(\"l1a\");\n\t/* On. */\n\tlog_var_t log_l1_a = LOG_VAR_INIT(\"l1.a\");\n\t/* Off. */\n\tlog_var_t log_l2_a = LOG_VAR_INIT(\"l2.a\");\n\t/* On. */\n\tlog_var_t log_l2_b_a = LOG_VAR_INIT(\"l2.b.a\");\n\t/* On. */\n\tlog_var_t log_l2_b_b = LOG_VAR_INIT(\"l2.b.b\");\n\n\t/* 4 are on total, so should sum to 40. */\n\tint count = 0;\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(log_l1)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1)\n\n\t\tlog_do_begin(log_l1a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1a)\n\n\t\tlog_do_begin(log_l1_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l1_a)\n\n\t\tlog_do_begin(log_l2_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2_a)\n\n\t\tlog_do_begin(log_l2_b_a)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2_b_a)\n\n\t\tlog_do_begin(log_l2_b_b)\n\t\t\tcount++;\n\t\tlog_do_end(log_l2_b_b)\n\t}\n\n\texpect_d_eq(count, 40, \"Mis-logged!\");\n}\nTEST_END\n\nTEST_BEGIN(test_log_enabled_global) {\n\ttest_skip_if(!config_log);\n\tatomic_store_b(&log_init_done, true, ATOMIC_RELAXED);\n\tupdate_log_var_names(\"abc|.|def\");\n\n\tlog_var_t log_l1 = LOG_VAR_INIT(\"l1\");\n\tlog_var_t log_l2_a_a = LOG_VAR_INIT(\"l2.a.a\");\n\n\tint count = 0;\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(log_l1)\n\t\t    count++;\n\t\tlog_do_end(log_l1)\n\n\t\tlog_do_begin(log_l2_a_a)\n\t\t    count++;\n\t\tlog_do_end(log_l2_a_a)\n\t}\n\texpect_d_eq(count, 20, \"Mis-logged!\");\n}\nTEST_END\n\nTEST_BEGIN(test_logs_if_no_init) {\n\ttest_skip_if(!config_log);\n\tatomic_store_b(&log_init_done, false, 
ATOMIC_RELAXED);\n\n\tlog_var_t l = LOG_VAR_INIT(\"definitely.not.enabled\");\n\n\tint count = 0;\n\tfor (int i = 0; i < 10; i++) {\n\t\tlog_do_begin(l)\n\t\t\tcount++;\n\t\tlog_do_end(l)\n\t}\n\texpect_d_eq(count, 0, \"Logging shouldn't happen if not initialized.\");\n}\nTEST_END\n\n/*\n * This really just checks to make sure that this usage compiles; we don't have\n * any test code to run.\n */\nTEST_BEGIN(test_log_only_format_string) {\n\tif (false) {\n\t\tLOG(\"log_str\", \"No arguments follow this format string.\");\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_log_disabled,\n\t    test_log_enabled_direct,\n\t    test_log_enabled_indirect,\n\t    test_log_enabled_global,\n\t    test_logs_if_no_init,\n\t    test_log_only_format_string);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/mallctl.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/ctl.h\"\n#include \"jemalloc/internal/hook.h\"\n#include \"jemalloc/internal/util.h\"\n\nTEST_BEGIN(test_mallctl_errors) {\n\tuint64_t epoch;\n\tsize_t sz;\n\n\texpect_d_eq(mallctl(\"no_such_name\", NULL, NULL, NULL, 0), ENOENT,\n\t    \"mallctl() should return ENOENT for non-existent names\");\n\n\texpect_d_eq(mallctl(\"version\", NULL, NULL, \"0.0.0\", strlen(\"0.0.0\")),\n\t    EPERM, \"mallctl() should return EPERM on attempt to write \"\n\t    \"read-only value\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch,\n\t    sizeof(epoch)-1), EINVAL,\n\t    \"mallctl() should return EINVAL for input size mismatch\");\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch,\n\t    sizeof(epoch)+1), EINVAL,\n\t    \"mallctl() should return EINVAL for input size mismatch\");\n\n\tsz = sizeof(epoch)-1;\n\texpect_d_eq(mallctl(\"epoch\", (void *)&epoch, &sz, NULL, 0), EINVAL,\n\t    \"mallctl() should return EINVAL for output size mismatch\");\n\tsz = sizeof(epoch)+1;\n\texpect_d_eq(mallctl(\"epoch\", (void *)&epoch, &sz, NULL, 0), EINVAL,\n\t    \"mallctl() should return EINVAL for output size mismatch\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlnametomib_errors) {\n\tsize_t mib[1];\n\tsize_t miblen;\n\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"no_such_name\", mib, &miblen), ENOENT,\n\t    \"mallctlnametomib() should return ENOENT for non-existent names\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlbymib_errors) {\n\tuint64_t epoch;\n\tsize_t sz;\n\tsize_t mib[1];\n\tsize_t miblen;\n\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"version\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, \"0.0.0\",\n\t    strlen(\"0.0.0\")), EPERM, \"mallctl() should return EPERM on \"\n\t    \"attempt to write read-only value\");\n\n\tmiblen = 
sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"epoch\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, (void *)&epoch,\n\t    sizeof(epoch)-1), EINVAL,\n\t    \"mallctlbymib() should return EINVAL for input size mismatch\");\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, (void *)&epoch,\n\t    sizeof(epoch)+1), EINVAL,\n\t    \"mallctlbymib() should return EINVAL for input size mismatch\");\n\n\tsz = sizeof(epoch)-1;\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&epoch, &sz, NULL, 0),\n\t    EINVAL,\n\t    \"mallctlbymib() should return EINVAL for output size mismatch\");\n\tsz = sizeof(epoch)+1;\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&epoch, &sz, NULL, 0),\n\t    EINVAL,\n\t    \"mallctlbymib() should return EINVAL for output size mismatch\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctl_read_write) {\n\tuint64_t old_epoch, new_epoch;\n\tsize_t sz = sizeof(old_epoch);\n\n\t/* Blind. */\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(sz, sizeof(old_epoch), \"Unexpected output size\");\n\n\t/* Read. */\n\texpect_d_eq(mallctl(\"epoch\", (void *)&old_epoch, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(sz, sizeof(old_epoch), \"Unexpected output size\");\n\n\t/* Write. */\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&new_epoch,\n\t    sizeof(new_epoch)), 0, \"Unexpected mallctl() failure\");\n\texpect_zu_eq(sz, sizeof(old_epoch), \"Unexpected output size\");\n\n\t/* Read+write. 
*/\n\texpect_d_eq(mallctl(\"epoch\", (void *)&old_epoch, &sz,\n\t    (void *)&new_epoch, sizeof(new_epoch)), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(sz, sizeof(old_epoch), \"Unexpected output size\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlnametomib_short_mib) {\n\tsize_t mib[4];\n\tsize_t miblen;\n\n\tmiblen = 3;\n\tmib[3] = 42;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.nregs\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\texpect_zu_eq(miblen, 3, \"Unexpected mib output length\");\n\texpect_zu_eq(mib[3], 42,\n\t    \"mallctlnametomib() wrote past the end of the input mib\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlnametomib_short_name) {\n\tsize_t mib[4];\n\tsize_t miblen;\n\n\tmiblen = 4;\n\tmib[3] = 42;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\texpect_zu_eq(miblen, 3, \"Unexpected mib output length\");\n\texpect_zu_eq(mib[3], 42,\n\t    \"mallctlnametomib() wrote past the end of the input mib\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlmibnametomib) {\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\tuint32_t result, result_ref;\n\tsize_t len_result = sizeof(uint32_t);\n\n\ttsd_t *tsd = tsd_fetch();\n\n\t/* Error cases */\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 0, \"bob\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 0, \"9999\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\n\t/* Valid case. */\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 0, \"arenas\", &miblen), 0, \"\");\n\tassert_zu_eq(miblen, 1, \"\");\n\tmiblen = 4;\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 1, \"bin\", &miblen), 0, \"\");\n\tassert_zu_eq(miblen, 2, \"\");\n\texpect_d_eq(mallctlbymib(mib, miblen, &result, &len_result, NULL, 0),\n\t    ENOENT, \"mallctlbymib() should fail on partial path\");\n\n\t/* Error cases. 
*/\n\tmiblen = 4;\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 2, \"bob\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 2, \"9999\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\n\t/* Valid case. */\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 2, \"0\", &miblen), 0, \"\");\n\tassert_zu_eq(miblen, 3, \"\");\n\texpect_d_eq(mallctlbymib(mib, miblen, &result, &len_result, NULL, 0),\n\t    ENOENT, \"mallctlbymib() should fail on partial path\");\n\n\t/* Error cases. */\n\tmiblen = 4;\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 3, \"bob\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 3, \"9999\", &miblen), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\n\t/* Valid case. */\n\tassert_d_eq(ctl_mibnametomib(tsd, mib, 3, \"nregs\", &miblen), 0, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\tassert_d_eq(mallctlbymib(mib, miblen, &result, &len_result, NULL, 0),\n\t    0, \"Unexpected mallctlbymib() failure\");\n\tassert_d_eq(mallctl(\"arenas.bin.0.nregs\", &result_ref, &len_result,\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\texpect_zu_eq(result, result_ref,\n\t    \"mallctlbymib() and mallctl() returned different result\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctlbymibname) {\n\tsize_t mib[4];\n\tsize_t miblen = 4;\n\tuint32_t result, result_ref;\n\tsize_t len_result = sizeof(uint32_t);\n\n\ttsd_t *tsd = tsd_fetch();\n\n\t/* Error cases. */\n\n\tassert_d_eq(mallctlnametomib(\"arenas\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tassert_zu_eq(miblen, 1, \"\");\n\n\tmiblen = 4;\n\tassert_d_eq(ctl_bymibname(tsd, mib, 1, \"bin.0\", &miblen,\n\t    &result, &len_result, NULL, 0), ENOENT, \"\");\n\tmiblen = 4;\n\tassert_d_eq(ctl_bymibname(tsd, mib, 1, \"bin.0.bob\", &miblen,\n\t    &result, &len_result, NULL, 0), ENOENT, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\n\t/* Valid cases. 
*/\n\n\tassert_d_eq(mallctl(\"arenas.bin.0.nregs\", &result_ref, &len_result,\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\tmiblen = 4;\n\n\tassert_d_eq(ctl_bymibname(tsd, mib, 0, \"arenas.bin.0.nregs\", &miblen,\n\t    &result, &len_result, NULL, 0), 0, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\texpect_zu_eq(result, result_ref, \"Unexpected result\");\n\n\tassert_d_eq(ctl_bymibname(tsd, mib, 1, \"bin.0.nregs\", &miblen, &result,\n\t    &len_result, NULL, 0), 0, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\texpect_zu_eq(result, result_ref, \"Unexpected result\");\n\n\tassert_d_eq(ctl_bymibname(tsd, mib, 2, \"0.nregs\", &miblen, &result,\n\t    &len_result, NULL, 0), 0, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\texpect_zu_eq(result, result_ref, \"Unexpected result\");\n\n\tassert_d_eq(ctl_bymibname(tsd, mib, 3, \"nregs\", &miblen, &result,\n\t    &len_result, NULL, 0), 0, \"\");\n\tassert_zu_eq(miblen, 4, \"\");\n\texpect_zu_eq(result, result_ref, \"Unexpected result\");\n}\nTEST_END\n\nTEST_BEGIN(test_mallctl_config) {\n#define TEST_MALLCTL_CONFIG(config, t) do {\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(oldval);\t\t\t\t\t\\\n\texpect_d_eq(mallctl(\"config.\"#config, (void *)&oldval, &sz,\t\\\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\t\t\\\n\texpect_b_eq(oldval, config_##config, \"Incorrect config value\");\t\\\n\texpect_zu_eq(sz, sizeof(oldval), \"Unexpected output size\");\t\\\n} while (0)\n\n\tTEST_MALLCTL_CONFIG(cache_oblivious, bool);\n\tTEST_MALLCTL_CONFIG(debug, bool);\n\tTEST_MALLCTL_CONFIG(fill, bool);\n\tTEST_MALLCTL_CONFIG(lazy_lock, bool);\n\tTEST_MALLCTL_CONFIG(malloc_conf, const char *);\n\tTEST_MALLCTL_CONFIG(prof, bool);\n\tTEST_MALLCTL_CONFIG(prof_libgcc, bool);\n\tTEST_MALLCTL_CONFIG(prof_libunwind, bool);\n\tTEST_MALLCTL_CONFIG(stats, bool);\n\tTEST_MALLCTL_CONFIG(utrace, bool);\n\tTEST_MALLCTL_CONFIG(xmalloc, bool);\n\n#undef TEST_MALLCTL_CONFIG\n}\nTEST_END\n\nTEST_BEGIN(test_mallctl_opt) {\n\tbool 
config_always = true;\n\n#define TEST_MALLCTL_OPT(t, opt, config) do {\t\t\t\t\\\n\tt oldval;\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(oldval);\t\t\t\t\t\\\n\tint expected = config_##config ? 0 : ENOENT;\t\t\t\\\n\tint result = mallctl(\"opt.\"#opt, (void *)&oldval, &sz, NULL,\t\\\n\t    0);\t\t\t\t\t\t\t\t\\\n\texpect_d_eq(result, expected,\t\t\t\t\t\\\n\t    \"Unexpected mallctl() result for opt.\"#opt);\t\t\\\n\texpect_zu_eq(sz, sizeof(oldval), \"Unexpected output size\");\t\\\n} while (0)\n\n\tTEST_MALLCTL_OPT(bool, abort, always);\n\tTEST_MALLCTL_OPT(bool, abort_conf, always);\n\tTEST_MALLCTL_OPT(bool, cache_oblivious, always);\n\tTEST_MALLCTL_OPT(bool, trust_madvise, always);\n\tTEST_MALLCTL_OPT(bool, confirm_conf, always);\n\tTEST_MALLCTL_OPT(const char *, metadata_thp, always);\n\tTEST_MALLCTL_OPT(bool, retain, always);\n\tTEST_MALLCTL_OPT(const char *, dss, always);\n\tTEST_MALLCTL_OPT(bool, hpa, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_slab_max_alloc, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_sec_nshards, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_sec_max_alloc, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_sec_max_bytes, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_sec_bytes_after_flush, always);\n\tTEST_MALLCTL_OPT(size_t, hpa_sec_batch_fill_extra, always);\n\tTEST_MALLCTL_OPT(unsigned, narenas, always);\n\tTEST_MALLCTL_OPT(const char *, percpu_arena, always);\n\tTEST_MALLCTL_OPT(size_t, oversize_threshold, always);\n\tTEST_MALLCTL_OPT(bool, background_thread, always);\n\tTEST_MALLCTL_OPT(ssize_t, dirty_decay_ms, always);\n\tTEST_MALLCTL_OPT(ssize_t, muzzy_decay_ms, always);\n\tTEST_MALLCTL_OPT(bool, stats_print, always);\n\tTEST_MALLCTL_OPT(const char *, stats_print_opts, always);\n\tTEST_MALLCTL_OPT(int64_t, stats_interval, always);\n\tTEST_MALLCTL_OPT(const char *, stats_interval_opts, always);\n\tTEST_MALLCTL_OPT(const char *, junk, fill);\n\tTEST_MALLCTL_OPT(bool, zero, fill);\n\tTEST_MALLCTL_OPT(bool, utrace, utrace);\n\tTEST_MALLCTL_OPT(bool, xmalloc, 
xmalloc);\n\tTEST_MALLCTL_OPT(bool, tcache, always);\n\tTEST_MALLCTL_OPT(size_t, lg_extent_max_active_fit, always);\n\tTEST_MALLCTL_OPT(size_t, tcache_max, always);\n\tTEST_MALLCTL_OPT(const char *, thp, always);\n\tTEST_MALLCTL_OPT(const char *, zero_realloc, always);\n\tTEST_MALLCTL_OPT(bool, prof, prof);\n\tTEST_MALLCTL_OPT(const char *, prof_prefix, prof);\n\tTEST_MALLCTL_OPT(bool, prof_active, prof);\n\tTEST_MALLCTL_OPT(ssize_t, lg_prof_sample, prof);\n\tTEST_MALLCTL_OPT(bool, prof_accum, prof);\n\tTEST_MALLCTL_OPT(ssize_t, lg_prof_interval, prof);\n\tTEST_MALLCTL_OPT(bool, prof_gdump, prof);\n\tTEST_MALLCTL_OPT(bool, prof_final, prof);\n\tTEST_MALLCTL_OPT(bool, prof_leak, prof);\n\tTEST_MALLCTL_OPT(bool, prof_leak_error, prof);\n\tTEST_MALLCTL_OPT(ssize_t, prof_recent_alloc_max, prof);\n\tTEST_MALLCTL_OPT(bool, prof_stats, prof);\n\tTEST_MALLCTL_OPT(bool, prof_sys_thread_name, prof);\n\tTEST_MALLCTL_OPT(ssize_t, lg_san_uaf_align, uaf_detection);\n\n#undef TEST_MALLCTL_OPT\n}\nTEST_END\n\nTEST_BEGIN(test_manpage_example) {\n\tunsigned nbins, i;\n\tsize_t mib[4];\n\tsize_t len, miblen;\n\n\tlen = sizeof(nbins);\n\texpect_d_eq(mallctl(\"arenas.nbins\", (void *)&nbins, &len, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tmiblen = 4;\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.size\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tfor (i = 0; i < nbins; i++) {\n\t\tsize_t bin_size;\n\n\t\tmib[2] = i;\n\t\tlen = sizeof(bin_size);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&bin_size, &len,\n\t\t    NULL, 0), 0, \"Unexpected mallctlbymib() failure\");\n\t\t/* Do something with bin_size... */\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_tcache_none) {\n\ttest_skip_if(!opt_tcache);\n\n\t/* Allocate p and q. 
*/\n\tvoid *p0 = mallocx(42, 0);\n\texpect_ptr_not_null(p0, \"Unexpected mallocx() failure\");\n\tvoid *q = mallocx(42, 0);\n\texpect_ptr_not_null(q, \"Unexpected mallocx() failure\");\n\n\t/* Deallocate p and q, but bypass the tcache for q. */\n\tdallocx(p0, 0);\n\tdallocx(q, MALLOCX_TCACHE_NONE);\n\n\t/* Make sure that tcache-based allocation returns p, not q. */\n\tvoid *p1 = mallocx(42, 0);\n\texpect_ptr_not_null(p1, \"Unexpected mallocx() failure\");\n\tif (!opt_prof && !san_uaf_detection_enabled()) {\n\t\texpect_ptr_eq(p0, p1,\n\t\t    \"Expected tcache to allocate cached region\");\n\t}\n\n\t/* Clean up. */\n\tdallocx(p1, MALLOCX_TCACHE_NONE);\n}\nTEST_END\n\nTEST_BEGIN(test_tcache) {\n#define NTCACHES\t10\n\tunsigned tis[NTCACHES];\n\tvoid *ps[NTCACHES];\n\tvoid *qs[NTCACHES];\n\tunsigned i;\n\tsize_t sz, psz, qsz;\n\n\tpsz = 42;\n\tqsz = nallocx(psz, 0) + 1;\n\n\t/* Create tcaches. */\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tsz = sizeof(unsigned);\n\t\texpect_d_eq(mallctl(\"tcache.create\", (void *)&tis[i], &sz, NULL,\n\t\t    0), 0, \"Unexpected mallctl() failure, i=%u\", i);\n\t}\n\n\t/* Exercise tcache ID recycling. */\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\texpect_d_eq(mallctl(\"tcache.destroy\", NULL, NULL,\n\t\t    (void *)&tis[i], sizeof(unsigned)), 0,\n\t\t    \"Unexpected mallctl() failure, i=%u\", i);\n\t}\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tsz = sizeof(unsigned);\n\t\texpect_d_eq(mallctl(\"tcache.create\", (void *)&tis[i], &sz, NULL,\n\t\t    0), 0, \"Unexpected mallctl() failure, i=%u\", i);\n\t}\n\n\t/* Flush empty tcaches. */\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\texpect_d_eq(mallctl(\"tcache.flush\", NULL, NULL, (void *)&tis[i],\n\t\t    sizeof(unsigned)), 0, \"Unexpected mallctl() failure, i=%u\",\n\t\t    i);\n\t}\n\n\t/* Cache some allocations. 
*/\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tps[i] = mallocx(psz, MALLOCX_TCACHE(tis[i]));\n\t\texpect_ptr_not_null(ps[i], \"Unexpected mallocx() failure, i=%u\",\n\t\t    i);\n\t\tdallocx(ps[i], MALLOCX_TCACHE(tis[i]));\n\n\t\tqs[i] = mallocx(qsz, MALLOCX_TCACHE(tis[i]));\n\t\texpect_ptr_not_null(qs[i], \"Unexpected mallocx() failure, i=%u\",\n\t\t    i);\n\t\tdallocx(qs[i], MALLOCX_TCACHE(tis[i]));\n\t}\n\n\t/* Verify that tcaches allocate cached regions. */\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tvoid *p0 = ps[i];\n\t\tps[i] = mallocx(psz, MALLOCX_TCACHE(tis[i]));\n\t\texpect_ptr_not_null(ps[i], \"Unexpected mallocx() failure, i=%u\",\n\t\t    i);\n\t\tif (!san_uaf_detection_enabled()) {\n\t\t\texpect_ptr_eq(ps[i], p0, \"Expected mallocx() to \"\n\t\t\t    \"allocate cached region, i=%u\", i);\n\t\t}\n\t}\n\n\t/* Verify that reallocation uses cached regions. */\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tvoid *q0 = qs[i];\n\t\tqs[i] = rallocx(ps[i], qsz, MALLOCX_TCACHE(tis[i]));\n\t\texpect_ptr_not_null(qs[i], \"Unexpected rallocx() failure, i=%u\",\n\t\t    i);\n\t\tif (!san_uaf_detection_enabled()) {\n\t\t\texpect_ptr_eq(qs[i], q0, \"Expected rallocx() to \"\n\t\t\t    \"allocate cached region, i=%u\", i);\n\t\t}\n\t\t/* Avoid undefined behavior in case of test failure. */\n\t\tif (qs[i] == NULL) {\n\t\t\tqs[i] = ps[i];\n\t\t}\n\t}\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\tdallocx(qs[i], MALLOCX_TCACHE(tis[i]));\n\t}\n\n\t/* Flush some non-empty tcaches. */\n\tfor (i = 0; i < NTCACHES/2; i++) {\n\t\texpect_d_eq(mallctl(\"tcache.flush\", NULL, NULL, (void *)&tis[i],\n\t\t    sizeof(unsigned)), 0, \"Unexpected mallctl() failure, i=%u\",\n\t\t    i);\n\t}\n\n\t/* Destroy tcaches. 
*/\n\tfor (i = 0; i < NTCACHES; i++) {\n\t\texpect_d_eq(mallctl(\"tcache.destroy\", NULL, NULL,\n\t\t    (void *)&tis[i], sizeof(unsigned)), 0,\n\t\t    \"Unexpected mallctl() failure, i=%u\", i);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_thread_arena) {\n\tunsigned old_arena_ind, new_arena_ind, narenas;\n\n\tconst char *opa;\n\tsize_t sz = sizeof(opa);\n\texpect_d_eq(mallctl(\"opt.percpu_arena\", (void *)&opa, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\tif (opt_oversize_threshold != 0) {\n\t\tnarenas--;\n\t}\n\texpect_u_eq(narenas, opt_narenas, \"Number of arenas incorrect\");\n\n\tif (strcmp(opa, \"disabled\") == 0) {\n\t\tnew_arena_ind = narenas - 1;\n\t\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind, &sz,\n\t\t    (void *)&new_arena_ind, sizeof(unsigned)), 0,\n\t\t    \"Unexpected mallctl() failure\");\n\t\tnew_arena_ind = 0;\n\t\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind, &sz,\n\t\t    (void *)&new_arena_ind, sizeof(unsigned)), 0,\n\t\t    \"Unexpected mallctl() failure\");\n\t} else {\n\t\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind, &sz,\n\t\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\t\tnew_arena_ind = percpu_arena_ind_limit(opt_percpu_arena) - 1;\n\t\tif (old_arena_ind != new_arena_ind) {\n\t\t\texpect_d_eq(mallctl(\"thread.arena\",\n\t\t\t    (void *)&old_arena_ind, &sz, (void *)&new_arena_ind,\n\t\t\t    sizeof(unsigned)), EPERM, \"thread.arena ctl \"\n\t\t\t    \"should not be allowed with percpu arena\");\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_initialized) {\n\tunsigned narenas, i;\n\tsize_t sz;\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\tbool initialized;\n\n\tsz = sizeof(narenas);\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() 
failure\");\n\n\texpect_d_eq(mallctlnametomib(\"arena.0.initialized\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tfor (i = 0; i < narenas; i++) {\n\t\tmib[1] = i;\n\t\tsz = sizeof(initialized);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, &initialized, &sz, NULL,\n\t\t    0), 0, \"Unexpected mallctl() failure\");\n\t}\n\n\tmib[1] = MALLCTL_ARENAS_ALL;\n\tsz = sizeof(initialized);\n\texpect_d_eq(mallctlbymib(mib, miblen, &initialized, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_true(initialized,\n\t    \"Merged arena statistics should always be initialized\");\n\n\t/* Equivalent to the above but using mallctl() directly. */\n\tsz = sizeof(initialized);\n\texpect_d_eq(mallctl(\n\t    \"arena.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".initialized\",\n\t    (void *)&initialized, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_true(initialized,\n\t    \"Merged arena statistics should always be initialized\");\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_dirty_decay_ms) {\n\tssize_t dirty_decay_ms, orig_dirty_decay_ms, prev_dirty_decay_ms;\n\tsize_t sz = sizeof(ssize_t);\n\n\texpect_d_eq(mallctl(\"arena.0.dirty_decay_ms\",\n\t    (void *)&orig_dirty_decay_ms, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tdirty_decay_ms = -2;\n\texpect_d_eq(mallctl(\"arena.0.dirty_decay_ms\", NULL, NULL,\n\t    (void *)&dirty_decay_ms, sizeof(ssize_t)), EFAULT,\n\t    \"Unexpected mallctl() success\");\n\n\tdirty_decay_ms = 0x7fffffff;\n\texpect_d_eq(mallctl(\"arena.0.dirty_decay_ms\", NULL, NULL,\n\t    (void *)&dirty_decay_ms, sizeof(ssize_t)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tfor (prev_dirty_decay_ms = dirty_decay_ms, dirty_decay_ms = -1;\n\t    dirty_decay_ms < 20; prev_dirty_decay_ms = dirty_decay_ms,\n\t    dirty_decay_ms++) {\n\t\tssize_t old_dirty_decay_ms;\n\n\t\texpect_d_eq(mallctl(\"arena.0.dirty_decay_ms\",\n\t\t    (void *)&old_dirty_decay_ms, &sz, (void *)&dirty_decay_ms,\n\t\t 
   sizeof(ssize_t)), 0, \"Unexpected mallctl() failure\");\n\t\texpect_zd_eq(old_dirty_decay_ms, prev_dirty_decay_ms,\n\t\t    \"Unexpected old arena.0.dirty_decay_ms\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_muzzy_decay_ms) {\n\tssize_t muzzy_decay_ms, orig_muzzy_decay_ms, prev_muzzy_decay_ms;\n\tsize_t sz = sizeof(ssize_t);\n\n\texpect_d_eq(mallctl(\"arena.0.muzzy_decay_ms\",\n\t    (void *)&orig_muzzy_decay_ms, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tmuzzy_decay_ms = -2;\n\texpect_d_eq(mallctl(\"arena.0.muzzy_decay_ms\", NULL, NULL,\n\t    (void *)&muzzy_decay_ms, sizeof(ssize_t)), EFAULT,\n\t    \"Unexpected mallctl() success\");\n\n\tmuzzy_decay_ms = 0x7fffffff;\n\texpect_d_eq(mallctl(\"arena.0.muzzy_decay_ms\", NULL, NULL,\n\t    (void *)&muzzy_decay_ms, sizeof(ssize_t)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tfor (prev_muzzy_decay_ms = muzzy_decay_ms, muzzy_decay_ms = -1;\n\t    muzzy_decay_ms < 20; prev_muzzy_decay_ms = muzzy_decay_ms,\n\t    muzzy_decay_ms++) {\n\t\tssize_t old_muzzy_decay_ms;\n\n\t\texpect_d_eq(mallctl(\"arena.0.muzzy_decay_ms\",\n\t\t    (void *)&old_muzzy_decay_ms, &sz, (void *)&muzzy_decay_ms,\n\t\t    sizeof(ssize_t)), 0, \"Unexpected mallctl() failure\");\n\t\texpect_zd_eq(old_muzzy_decay_ms, prev_muzzy_decay_ms,\n\t\t    \"Unexpected old arena.0.muzzy_decay_ms\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_purge) {\n\tunsigned narenas;\n\tsize_t sz = sizeof(unsigned);\n\tsize_t mib[3];\n\tsize_t miblen = 3;\n\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctlnametomib(\"arena.0.purge\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = narenas;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() 
failure\");\n\n\tmib[1] = MALLCTL_ARENAS_ALL;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_decay) {\n\tunsigned narenas;\n\tsize_t sz = sizeof(unsigned);\n\tsize_t mib[3];\n\tsize_t miblen = 3;\n\n\texpect_d_eq(mallctl(\"arena.0.decay\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctlnametomib(\"arena.0.decay\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = narenas;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n\n\tmib[1] = MALLCTL_ARENAS_ALL;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_dss) {\n\tconst char *dss_prec_old, *dss_prec_new;\n\tsize_t sz = sizeof(dss_prec_old);\n\tsize_t mib[3];\n\tsize_t miblen;\n\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.dss\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() error\");\n\n\tdss_prec_new = \"disabled\";\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_old, &sz,\n\t    (void *)&dss_prec_new, sizeof(dss_prec_new)), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_str_ne(dss_prec_old, \"primary\",\n\t    \"Unexpected default for dss precedence\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_new, &sz,\n\t    (void *)&dss_prec_old, sizeof(dss_prec_old)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_old, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() failure\");\n\texpect_str_ne(dss_prec_old, \"primary\",\n\t    \"Unexpected value for dss precedence\");\n\n\tmib[1] = narenas_total_get();\n\tdss_prec_new = 
\"disabled\";\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_old, &sz,\n\t    (void *)&dss_prec_new, sizeof(dss_prec_new)), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_str_ne(dss_prec_old, \"primary\",\n\t    \"Unexpected default for dss precedence\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_new, &sz,\n\t    (void *)&dss_prec_old, sizeof(dss_prec_new)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&dss_prec_old, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() failure\");\n\texpect_str_ne(dss_prec_old, \"primary\",\n\t    \"Unexpected value for dss precedence\");\n}\nTEST_END\n\nTEST_BEGIN(test_arena_i_retain_grow_limit) {\n\tsize_t old_limit, new_limit, default_limit;\n\tsize_t mib[3];\n\tsize_t miblen;\n\n\tbool retain_enabled;\n\tsize_t sz = sizeof(retain_enabled);\n\texpect_d_eq(mallctl(\"opt.retain\", &retain_enabled, &sz, NULL, 0),\n\t    0, \"Unexpected mallctl() failure\");\n\ttest_skip_if(!retain_enabled);\n\n\tsz = sizeof(default_limit);\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.retain_grow_limit\", mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib() error\");\n\n\texpect_d_eq(mallctlbymib(mib, miblen, &default_limit, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(default_limit, SC_LARGE_MAXCLASS,\n\t    \"Unexpected default for retain_grow_limit\");\n\n\tnew_limit = PAGE - 1;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &new_limit,\n\t    sizeof(new_limit)), EFAULT, \"Unexpected mallctl() success\");\n\n\tnew_limit = PAGE + 1;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &new_limit,\n\t    sizeof(new_limit)), 0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctlbymib(mib, miblen, &old_limit, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(old_limit, PAGE,\n\t    \"Unexpected value for retain_grow_limit\");\n\n\t/* Expect grow less than 
psize class 10. */\n\tnew_limit = sz_pind2sz(10) - 1;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &new_limit,\n\t    sizeof(new_limit)), 0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctlbymib(mib, miblen, &old_limit, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_zu_eq(old_limit, sz_pind2sz(9),\n\t    \"Unexpected value for retain_grow_limit\");\n\n\t/* Restore to default. */\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, &default_limit,\n\t    sizeof(default_limit)), 0, \"Unexpected mallctl() failure\");\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_dirty_decay_ms) {\n\tssize_t dirty_decay_ms, orig_dirty_decay_ms, prev_dirty_decay_ms;\n\tsize_t sz = sizeof(ssize_t);\n\n\texpect_d_eq(mallctl(\"arenas.dirty_decay_ms\",\n\t    (void *)&orig_dirty_decay_ms, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tdirty_decay_ms = -2;\n\texpect_d_eq(mallctl(\"arenas.dirty_decay_ms\", NULL, NULL,\n\t    (void *)&dirty_decay_ms, sizeof(ssize_t)), EFAULT,\n\t    \"Unexpected mallctl() success\");\n\n\tdirty_decay_ms = 0x7fffffff;\n\texpect_d_eq(mallctl(\"arenas.dirty_decay_ms\", NULL, NULL,\n\t    (void *)&dirty_decay_ms, sizeof(ssize_t)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tfor (prev_dirty_decay_ms = dirty_decay_ms, dirty_decay_ms = -1;\n\t    dirty_decay_ms < 20; prev_dirty_decay_ms = dirty_decay_ms,\n\t    dirty_decay_ms++) {\n\t\tssize_t old_dirty_decay_ms;\n\n\t\texpect_d_eq(mallctl(\"arenas.dirty_decay_ms\",\n\t\t    (void *)&old_dirty_decay_ms, &sz, (void *)&dirty_decay_ms,\n\t\t    sizeof(ssize_t)), 0, \"Unexpected mallctl() failure\");\n\t\texpect_zd_eq(old_dirty_decay_ms, prev_dirty_decay_ms,\n\t\t    \"Unexpected old arenas.dirty_decay_ms\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_muzzy_decay_ms) {\n\tssize_t muzzy_decay_ms, orig_muzzy_decay_ms, prev_muzzy_decay_ms;\n\tsize_t sz = sizeof(ssize_t);\n\n\texpect_d_eq(mallctl(\"arenas.muzzy_decay_ms\",\n\t    (void *)&orig_muzzy_decay_ms, &sz, NULL, 
0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tmuzzy_decay_ms = -2;\n\texpect_d_eq(mallctl(\"arenas.muzzy_decay_ms\", NULL, NULL,\n\t    (void *)&muzzy_decay_ms, sizeof(ssize_t)), EFAULT,\n\t    \"Unexpected mallctl() success\");\n\n\tmuzzy_decay_ms = 0x7fffffff;\n\texpect_d_eq(mallctl(\"arenas.muzzy_decay_ms\", NULL, NULL,\n\t    (void *)&muzzy_decay_ms, sizeof(ssize_t)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tfor (prev_muzzy_decay_ms = muzzy_decay_ms, muzzy_decay_ms = -1;\n\t    muzzy_decay_ms < 20; prev_muzzy_decay_ms = muzzy_decay_ms,\n\t    muzzy_decay_ms++) {\n\t\tssize_t old_muzzy_decay_ms;\n\n\t\texpect_d_eq(mallctl(\"arenas.muzzy_decay_ms\",\n\t\t    (void *)&old_muzzy_decay_ms, &sz, (void *)&muzzy_decay_ms,\n\t\t    sizeof(ssize_t)), 0, \"Unexpected mallctl() failure\");\n\t\texpect_zd_eq(old_muzzy_decay_ms, prev_muzzy_decay_ms,\n\t\t    \"Unexpected old arenas.muzzy_decay_ms\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_constants) {\n#define TEST_ARENAS_CONSTANT(t, name, expected) do {\t\t\t\\\n\tt name;\t\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\texpect_d_eq(mallctl(\"arenas.\"#name, (void *)&name, &sz, NULL,\t\\\n\t    0), 0, \"Unexpected mallctl() failure\");\t\t\t\\\n\texpect_zu_eq(name, expected, \"Incorrect \"#name\" size\");\t\t\\\n} while (0)\n\n\tTEST_ARENAS_CONSTANT(size_t, quantum, QUANTUM);\n\tTEST_ARENAS_CONSTANT(size_t, page, PAGE);\n\tTEST_ARENAS_CONSTANT(unsigned, nbins, SC_NBINS);\n\tTEST_ARENAS_CONSTANT(unsigned, nlextents, SC_NSIZES - SC_NBINS);\n\n#undef TEST_ARENAS_CONSTANT\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_bin_constants) {\n#define TEST_ARENAS_BIN_CONSTANT(t, name, expected) do {\t\t\\\n\tt name;\t\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\texpect_d_eq(mallctl(\"arenas.bin.0.\"#name, (void *)&name, &sz,\t\\\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\t\t\\\n\texpect_zu_eq(name, expected, \"Incorrect \"#name\" size\");\t\t\\\n} while 
(0)\n\n\tTEST_ARENAS_BIN_CONSTANT(size_t, size, bin_infos[0].reg_size);\n\tTEST_ARENAS_BIN_CONSTANT(uint32_t, nregs, bin_infos[0].nregs);\n\tTEST_ARENAS_BIN_CONSTANT(size_t, slab_size,\n\t    bin_infos[0].slab_size);\n\tTEST_ARENAS_BIN_CONSTANT(uint32_t, nshards, bin_infos[0].n_shards);\n\n#undef TEST_ARENAS_BIN_CONSTANT\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_lextent_constants) {\n#define TEST_ARENAS_LEXTENT_CONSTANT(t, name, expected) do {\t\t\\\n\tt name;\t\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\texpect_d_eq(mallctl(\"arenas.lextent.0.\"#name, (void *)&name,\t\\\n\t    &sz, NULL, 0), 0, \"Unexpected mallctl() failure\");\t\t\\\n\texpect_zu_eq(name, expected, \"Incorrect \"#name\" size\");\t\t\\\n} while (0)\n\n\tTEST_ARENAS_LEXTENT_CONSTANT(size_t, size,\n\t    SC_LARGE_MINCLASS);\n\n#undef TEST_ARENAS_LEXTENT_CONSTANT\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_create) {\n\tunsigned narenas_before, arena, narenas_after;\n\tsize_t sz = sizeof(unsigned);\n\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas_before, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\texpect_d_eq(mallctl(\"arenas.narenas\", (void *)&narenas_after, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() failure\");\n\n\texpect_u_eq(narenas_before+1, narenas_after,\n\t    \"Unexpected number of arenas before versus after extension\");\n\texpect_u_eq(arena, narenas_after-1, \"Unexpected arena index\");\n}\nTEST_END\n\nTEST_BEGIN(test_arenas_lookup) {\n\tunsigned arena, arena1;\n\tvoid *ptr;\n\tsize_t sz = sizeof(unsigned);\n\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\tptr = mallocx(42, MALLOCX_ARENA(arena) | MALLOCX_TCACHE_NONE);\n\texpect_ptr_not_null(ptr, \"Unexpected mallocx() failure\");\n\texpect_d_eq(mallctl(\"arenas.lookup\", &arena1, &sz, &ptr, sizeof(ptr)),\n\t 
   0, \"Unexpected mallctl() failure\");\n\texpect_u_eq(arena, arena1, \"Unexpected arena index\");\n\tdallocx(ptr, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_prof_active) {\n\t/*\n\t * If config_prof is off, then the test for prof_active in\n\t * test_mallctl_opt was already enough.\n\t */\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(opt_prof);\n\n\tbool active, old;\n\tsize_t len = sizeof(bool);\n\n\tactive = true;\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, &active, len), ENOENT,\n\t    \"Setting prof_active to true should fail when opt_prof is off\");\n\told = true;\n\texpect_d_eq(mallctl(\"prof.active\", &old, &len, &active, len), ENOENT,\n\t    \"Setting prof_active to true should fail when opt_prof is off\");\n\texpect_true(old, \"old value should not be touched when mallctl fails\");\n\tactive = false;\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, &active, len), 0,\n\t    \"Setting prof_active to false should succeed when opt_prof is off\");\n\texpect_d_eq(mallctl(\"prof.active\", &old, &len, &active, len), 0,\n\t    \"Setting prof_active to false should succeed when opt_prof is off\");\n\texpect_false(old, \"prof_active should be false when opt_prof is off\");\n}\nTEST_END\n\nTEST_BEGIN(test_stats_arenas) {\n#define TEST_STATS_ARENAS(t, name) do {\t\t\t\t\t\\\n\tt name;\t\t\t\t\t\t\t\t\\\n\tsize_t sz = sizeof(t);\t\t\t\t\t\t\\\n\texpect_d_eq(mallctl(\"stats.arenas.0.\"#name, (void *)&name, &sz,\t\\\n\t    NULL, 0), 0, \"Unexpected mallctl() failure\");\t\t\\\n} while (0)\n\n\tTEST_STATS_ARENAS(unsigned, nthreads);\n\tTEST_STATS_ARENAS(const char *, dss);\n\tTEST_STATS_ARENAS(ssize_t, dirty_decay_ms);\n\tTEST_STATS_ARENAS(ssize_t, muzzy_decay_ms);\n\tTEST_STATS_ARENAS(size_t, pactive);\n\tTEST_STATS_ARENAS(size_t, pdirty);\n\n#undef TEST_STATS_ARENAS\n}\nTEST_END\n\nstatic void\nalloc_hook(void *extra, UNUSED hook_alloc_t type, UNUSED void *result,\n    UNUSED uintptr_t result_raw, UNUSED uintptr_t args_raw[3]) {\n\t*(bool *)extra = 
true;\n}\n\nstatic void\ndalloc_hook(void *extra, UNUSED hook_dalloc_t type,\n    UNUSED void *address, UNUSED uintptr_t args_raw[3]) {\n\t*(bool *)extra = true;\n}\n\nTEST_BEGIN(test_hooks) {\n\tbool hook_called = false;\n\thooks_t hooks = {&alloc_hook, &dalloc_hook, NULL, &hook_called};\n\tvoid *handle = NULL;\n\tsize_t sz = sizeof(handle);\n\tint err = mallctl(\"experimental.hooks.install\", &handle, &sz, &hooks,\n\t    sizeof(hooks));\n\texpect_d_eq(err, 0, \"Hook installation failed\");\n\texpect_ptr_ne(handle, NULL, \"Hook installation gave null handle\");\n\tvoid *ptr = mallocx(1, 0);\n\texpect_true(hook_called, \"Alloc hook not called\");\n\thook_called = false;\n\tfree(ptr);\n\texpect_true(hook_called, \"Free hook not called\");\n\n\terr = mallctl(\"experimental.hooks.remove\", NULL, NULL, &handle,\n\t    sizeof(handle));\n\texpect_d_eq(err, 0, \"Hook removal failed\");\n\thook_called = false;\n\tptr = mallocx(1, 0);\n\tfree(ptr);\n\texpect_false(hook_called, \"Hook called after removal\");\n}\nTEST_END\n\nTEST_BEGIN(test_hooks_exhaustion) {\n\tbool hook_called = false;\n\thooks_t hooks = {&alloc_hook, &dalloc_hook, NULL, &hook_called};\n\n\tvoid *handle;\n\tvoid *handles[HOOK_MAX];\n\tsize_t sz = sizeof(handle);\n\tint err;\n\tfor (int i = 0; i < HOOK_MAX; i++) {\n\t\thandle = NULL;\n\t\terr = mallctl(\"experimental.hooks.install\", &handle, &sz,\n\t\t    &hooks, sizeof(hooks));\n\t\texpect_d_eq(err, 0, \"Error installing hooks\");\n\t\texpect_ptr_ne(handle, NULL, \"Got NULL handle\");\n\t\thandles[i] = handle;\n\t}\n\terr = mallctl(\"experimental.hooks.install\", &handle, &sz, &hooks,\n\t    sizeof(hooks));\n\texpect_d_eq(err, EAGAIN, \"Should have failed hook installation\");\n\tfor (int i = 0; i < HOOK_MAX; i++) {\n\t\terr = mallctl(\"experimental.hooks.remove\", NULL, NULL,\n\t\t    &handles[i], sizeof(handles[i]));\n\t\texpect_d_eq(err, 0, \"Hook removal failed\");\n\t}\n\t/* Insertion failed, but then we removed some; it should work now. 
*/\n\thandle = NULL;\n\terr = mallctl(\"experimental.hooks.install\", &handle, &sz, &hooks,\n\t    sizeof(hooks));\n\texpect_d_eq(err, 0, \"Hook insertion failed\");\n\texpect_ptr_ne(handle, NULL, \"Got NULL handle\");\n\terr = mallctl(\"experimental.hooks.remove\", NULL, NULL, &handle,\n\t    sizeof(handle));\n\texpect_d_eq(err, 0, \"Hook removal failed\");\n}\nTEST_END\n\nTEST_BEGIN(test_thread_idle) {\n\t/*\n\t * We're cheating a little bit in this test, and inferring things about\n\t * implementation internals (like tcache details).  We have to;\n\t * thread.idle has no guaranteed effects.  We need stats to make these\n\t * inferences.\n\t */\n\ttest_skip_if(!config_stats);\n\n\tint err;\n\tsize_t sz;\n\tsize_t miblen;\n\n\tbool tcache_enabled = false;\n\tsz = sizeof(tcache_enabled);\n\terr = mallctl(\"thread.tcache.enabled\", &tcache_enabled, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\ttest_skip_if(!tcache_enabled);\n\n\tsize_t tcache_max;\n\tsz = sizeof(tcache_max);\n\terr = mallctl(\"arenas.tcache_max\", &tcache_max, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\ttest_skip_if(tcache_max == 0);\n\n\tunsigned arena_ind;\n\tsz = sizeof(arena_ind);\n\terr = mallctl(\"thread.arena\", &arena_ind, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\n\t/* We're going to do an allocation of size 1, which we know is small. 
*/\n\tsize_t mib[5];\n\tmiblen = sizeof(mib)/sizeof(mib[0]);\n\terr = mallctlnametomib(\"stats.arenas.0.small.ndalloc\", mib, &miblen);\n\texpect_d_eq(err, 0, \"\");\n\tmib[2] = arena_ind;\n\n\t/*\n\t * This alloc and dalloc should leave something in the tcache, in a\n\t * small size's cache bin.\n\t */\n\tvoid *ptr = mallocx(1, 0);\n\tdallocx(ptr, 0);\n\n\tuint64_t epoch;\n\terr = mallctl(\"epoch\", NULL, NULL, &epoch, sizeof(epoch));\n\texpect_d_eq(err, 0, \"\");\n\n\tuint64_t small_dalloc_pre_idle;\n\tsz = sizeof(small_dalloc_pre_idle);\n\terr = mallctlbymib(mib, miblen, &small_dalloc_pre_idle, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\n\terr = mallctl(\"thread.idle\", NULL, NULL, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\n\terr = mallctl(\"epoch\", NULL, NULL, &epoch, sizeof(epoch));\n\texpect_d_eq(err, 0, \"\");\n\n\tuint64_t small_dalloc_post_idle;\n\tsz = sizeof(small_dalloc_post_idle);\n\terr = mallctlbymib(mib, miblen, &small_dalloc_post_idle, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\n\texpect_u64_lt(small_dalloc_pre_idle, small_dalloc_post_idle,\n\t    \"Purge didn't flush the tcache\");\n}\nTEST_END\n\nTEST_BEGIN(test_thread_peak) {\n\ttest_skip_if(!config_stats);\n\n\t/*\n\t * We don't commit to any stable amount of accuracy for peak tracking\n\t * (in practice, when this test was written, we made sure to be within\n\t * 100k).  
   But 10MB is big for more or less any definition of big.\n\t */\n\tsize_t big_size = 10 * 1024 * 1024;\n\tsize_t small_size = 256;\n\n\tvoid *ptr;\n\tint err;\n\tsize_t sz;\n\tuint64_t peak;\n\tsz = sizeof(uint64_t);\n\n\terr = mallctl(\"thread.peak.reset\", NULL, NULL, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\tptr = mallocx(SC_SMALL_MAXCLASS, 0);\n\terr = mallctl(\"thread.peak.read\", &peak, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\texpect_u64_eq(peak, SC_SMALL_MAXCLASS, \"Missed an update\");\n\tfree(ptr);\n\terr = mallctl(\"thread.peak.read\", &peak, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\texpect_u64_eq(peak, SC_SMALL_MAXCLASS, \"Freeing changed peak\");\n\tptr = mallocx(big_size, 0);\n\tfree(ptr);\n\t/*\n\t * The peak should have hit big_size in the last two lines, even though\n\t * the net allocated bytes has since dropped back down to zero.  We\n\t * should have noticed the peak change without having done any mallctl\n\t * calls while net allocated bytes was high.\n\t */\n\terr = mallctl(\"thread.peak.read\", &peak, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\texpect_u64_ge(peak, big_size, \"Missed a peak change.\");\n\n\t/* Allocate big_size, but using small allocations. 
*/\n\tsize_t nallocs = big_size / small_size;\n\tvoid **ptrs = calloc(nallocs, sizeof(void *));\n\terr = mallctl(\"thread.peak.reset\", NULL, NULL, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\terr = mallctl(\"thread.peak.read\", &peak, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\texpect_u64_eq(0, peak, \"Missed a reset.\");\n\tfor (size_t i = 0; i < nallocs; i++) {\n\t\tptrs[i] = mallocx(small_size, 0);\n\t}\n\tfor (size_t i = 0; i < nallocs; i++) {\n\t\tfree(ptrs[i]);\n\t}\n\terr = mallctl(\"thread.peak.read\", &peak, &sz, NULL, 0);\n\texpect_d_eq(err, 0, \"\");\n\t/*\n\t * We don't guarantee exactness; make sure we're within 10% of the peak,\n\t * though.\n\t */\n\texpect_u64_ge(peak, nallocx(small_size, 0) * nallocs * 9 / 10,\n\t    \"Missed some peak changes.\");\n\texpect_u64_le(peak, nallocx(small_size, 0) * nallocs * 11 / 10,\n\t    \"Overcounted peak changes.\");\n\tfree(ptrs);\n}\nTEST_END\n\ntypedef struct activity_test_data_s activity_test_data_t;\nstruct activity_test_data_s {\n\tuint64_t obtained_alloc;\n\tuint64_t obtained_dalloc;\n};\n\nstatic void\nactivity_test_callback(void *uctx, uint64_t alloc, uint64_t dalloc) {\n\tactivity_test_data_t *test_data = (activity_test_data_t *)uctx;\n\ttest_data->obtained_alloc = alloc;\n\ttest_data->obtained_dalloc = dalloc;\n}\n\nTEST_BEGIN(test_thread_activity_callback) {\n\ttest_skip_if(!config_stats);\n\n\tconst size_t big_size = 10 * 1024 * 1024;\n\tvoid *ptr;\n\tint err;\n\tsize_t sz;\n\n\tuint64_t *allocatedp;\n\tuint64_t *deallocatedp;\n\tsz = sizeof(allocatedp);\n\terr = mallctl(\"thread.allocatedp\", &allocatedp, &sz, NULL, 0);\n\tassert_d_eq(0, err, \"\");\n\terr = mallctl(\"thread.deallocatedp\", &deallocatedp, &sz, NULL, 0);\n\tassert_d_eq(0, err, \"\");\n\n\tactivity_callback_thunk_t old_thunk = {(activity_callback_t)111,\n\t\t(void *)222};\n\n\tactivity_test_data_t test_data = {333, 444};\n\tactivity_callback_thunk_t new_thunk =\n\t    {&activity_test_callback, &test_data};\n\n\tsz = 
sizeof(old_thunk);\n\terr = mallctl(\"experimental.thread.activity_callback\", &old_thunk, &sz,\n\t    &new_thunk, sizeof(new_thunk));\n\tassert_d_eq(0, err, \"\");\n\n\texpect_true(old_thunk.callback == NULL, \"Callback already installed\");\n\texpect_true(old_thunk.uctx == NULL, \"Callback data already installed\");\n\n\tptr = mallocx(big_size, 0);\n\texpect_u64_eq(test_data.obtained_alloc, *allocatedp, \"\");\n\texpect_u64_eq(test_data.obtained_dalloc, *deallocatedp, \"\");\n\n\tfree(ptr);\n\texpect_u64_eq(test_data.obtained_alloc, *allocatedp, \"\");\n\texpect_u64_eq(test_data.obtained_dalloc, *deallocatedp, \"\");\n\n\tsz = sizeof(old_thunk);\n\tnew_thunk = (activity_callback_thunk_t){ NULL, NULL };\n\terr = mallctl(\"experimental.thread.activity_callback\", &old_thunk, &sz,\n\t    &new_thunk, sizeof(new_thunk));\n\tassert_d_eq(0, err, \"\");\n\n\texpect_true(old_thunk.callback == &activity_test_callback, \"\");\n\texpect_true(old_thunk.uctx == &test_data, \"\");\n\n\t/* Inserting NULL should have turned off tracking. 
*/\n\ttest_data.obtained_alloc = 333;\n\ttest_data.obtained_dalloc = 444;\n\tptr = mallocx(big_size, 0);\n\tfree(ptr);\n\texpect_u64_eq(333, test_data.obtained_alloc, \"\");\n\texpect_u64_eq(444, test_data.obtained_dalloc, \"\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_mallctl_errors,\n\t    test_mallctlnametomib_errors,\n\t    test_mallctlbymib_errors,\n\t    test_mallctl_read_write,\n\t    test_mallctlnametomib_short_mib,\n\t    test_mallctlnametomib_short_name,\n\t    test_mallctlmibnametomib,\n\t    test_mallctlbymibname,\n\t    test_mallctl_config,\n\t    test_mallctl_opt,\n\t    test_manpage_example,\n\t    test_tcache_none,\n\t    test_tcache,\n\t    test_thread_arena,\n\t    test_arena_i_initialized,\n\t    test_arena_i_dirty_decay_ms,\n\t    test_arena_i_muzzy_decay_ms,\n\t    test_arena_i_purge,\n\t    test_arena_i_decay,\n\t    test_arena_i_dss,\n\t    test_arena_i_retain_grow_limit,\n\t    test_arenas_dirty_decay_ms,\n\t    test_arenas_muzzy_decay_ms,\n\t    test_arenas_constants,\n\t    test_arenas_bin_constants,\n\t    test_arenas_lextent_constants,\n\t    test_arenas_create,\n\t    test_arenas_lookup,\n\t    test_prof_active,\n\t    test_stats_arenas,\n\t    test_hooks,\n\t    test_hooks_exhaustion,\n\t    test_thread_idle,\n\t    test_thread_peak,\n\t    test_thread_activity_callback);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/malloc_conf_2.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nconst char *malloc_conf = \"dirty_decay_ms:1000\";\nconst char *malloc_conf_2_conf_harder = \"dirty_decay_ms:1234\";\n\nTEST_BEGIN(test_malloc_conf_2) {\n#ifdef _WIN32\n\tbool windows = true;\n#else\n\tbool windows = false;\n#endif\n\t/* Windows doesn't support weak symbol linker trickery. */\n\ttest_skip_if(windows);\n\n\tssize_t dirty_decay_ms;\n\tsize_t sz = sizeof(dirty_decay_ms);\n\n\tint err = mallctl(\"opt.dirty_decay_ms\", &dirty_decay_ms, &sz, NULL, 0);\n\tassert_d_eq(err, 0, \"Unexpected mallctl failure\");\n\texpect_zd_eq(dirty_decay_ms, 1234,\n\t    \"malloc_conf_2 setting didn't take effect\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_malloc_conf_2);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/malloc_conf_2.sh",
    "content": "export MALLOC_CONF=\"dirty_decay_ms:500\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/malloc_io.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_malloc_strtoumax_no_endptr) {\n\tint err;\n\n\tset_errno(0);\n\texpect_ju_eq(malloc_strtoumax(\"0\", NULL, 0), 0, \"Unexpected result\");\n\terr = get_errno();\n\texpect_d_eq(err, 0, \"Unexpected failure\");\n}\nTEST_END\n\nTEST_BEGIN(test_malloc_strtoumax) {\n\tstruct test_s {\n\t\tconst char *input;\n\t\tconst char *expected_remainder;\n\t\tint base;\n\t\tint expected_errno;\n\t\tconst char *expected_errno_name;\n\t\tuintmax_t expected_x;\n\t};\n#define ERR(e)\t\te, #e\n#define KUMAX(x)\t((uintmax_t)x##ULL)\n#define KSMAX(x)\t((uintmax_t)(intmax_t)x##LL)\n\tstruct test_s tests[] = {\n\t\t{\"0\",\t\t\"0\",\t-1,\tERR(EINVAL),\tUINTMAX_MAX},\n\t\t{\"0\",\t\t\"0\",\t1,\tERR(EINVAL),\tUINTMAX_MAX},\n\t\t{\"0\",\t\t\"0\",\t37,\tERR(EINVAL),\tUINTMAX_MAX},\n\n\t\t{\"\",\t\t\"\",\t0,\tERR(EINVAL),\tUINTMAX_MAX},\n\t\t{\"+\",\t\t\"+\",\t0,\tERR(EINVAL),\tUINTMAX_MAX},\n\t\t{\"++3\",\t\t\"++3\",\t0,\tERR(EINVAL),\tUINTMAX_MAX},\n\t\t{\"-\",\t\t\"-\",\t0,\tERR(EINVAL),\tUINTMAX_MAX},\n\n\t\t{\"42\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(42)},\n\t\t{\"+42\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(42)},\n\t\t{\"-42\",\t\t\"\",\t0,\tERR(0),\t\tKSMAX(-42)},\n\t\t{\"042\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(042)},\n\t\t{\"+042\",\t\"\",\t0,\tERR(0),\t\tKUMAX(042)},\n\t\t{\"-042\",\t\"\",\t0,\tERR(0),\t\tKSMAX(-042)},\n\t\t{\"0x42\",\t\"\",\t0,\tERR(0),\t\tKUMAX(0x42)},\n\t\t{\"+0x42\",\t\"\",\t0,\tERR(0),\t\tKUMAX(0x42)},\n\t\t{\"-0x42\",\t\"\",\t0,\tERR(0),\t\tKSMAX(-0x42)},\n\n\t\t{\"0\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"1\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(1)},\n\n\t\t{\"42\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(42)},\n\t\t{\" 42\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(42)},\n\t\t{\"42 \",\t\t\" 
\",\t0,\tERR(0),\t\tKUMAX(42)},\n\t\t{\"0x\",\t\t\"x\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"42x\",\t\t\"x\",\t0,\tERR(0),\t\tKUMAX(42)},\n\n\t\t{\"07\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(7)},\n\t\t{\"010\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(8)},\n\t\t{\"08\",\t\t\"8\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"0_\",\t\t\"_\",\t0,\tERR(0),\t\tKUMAX(0)},\n\n\t\t{\"0x\",\t\t\"x\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"0X\",\t\t\"X\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"0xg\",\t\t\"xg\",\t0,\tERR(0),\t\tKUMAX(0)},\n\t\t{\"0XA\",\t\t\"\",\t0,\tERR(0),\t\tKUMAX(10)},\n\n\t\t{\"010\",\t\t\"\",\t10,\tERR(0),\t\tKUMAX(10)},\n\t\t{\"0x3\",\t\t\"x3\",\t10,\tERR(0),\t\tKUMAX(0)},\n\n\t\t{\"12\",\t\t\"2\",\t2,\tERR(0),\t\tKUMAX(1)},\n\t\t{\"78\",\t\t\"8\",\t8,\tERR(0),\t\tKUMAX(7)},\n\t\t{\"9a\",\t\t\"a\",\t10,\tERR(0),\t\tKUMAX(9)},\n\t\t{\"9A\",\t\t\"A\",\t10,\tERR(0),\t\tKUMAX(9)},\n\t\t{\"fg\",\t\t\"g\",\t16,\tERR(0),\t\tKUMAX(15)},\n\t\t{\"FG\",\t\t\"G\",\t16,\tERR(0),\t\tKUMAX(15)},\n\t\t{\"0xfg\",\t\"g\",\t16,\tERR(0),\t\tKUMAX(15)},\n\t\t{\"0XFG\",\t\"G\",\t16,\tERR(0),\t\tKUMAX(15)},\n\t\t{\"z_\",\t\t\"_\",\t36,\tERR(0),\t\tKUMAX(35)},\n\t\t{\"Z_\",\t\t\"_\",\t36,\tERR(0),\t\tKUMAX(35)}\n\t};\n#undef ERR\n#undef KUMAX\n#undef KSMAX\n\tunsigned i;\n\n\tfor (i = 0; i < sizeof(tests)/sizeof(struct test_s); i++) {\n\t\tstruct test_s *test = &tests[i];\n\t\tint err;\n\t\tuintmax_t result;\n\t\tchar *remainder;\n\n\t\tset_errno(0);\n\t\tresult = malloc_strtoumax(test->input, &remainder, test->base);\n\t\terr = get_errno();\n\t\texpect_d_eq(err, test->expected_errno,\n\t\t    \"Expected errno %s for \\\"%s\\\", base %d\",\n\t\t    test->expected_errno_name, test->input, test->base);\n\t\texpect_str_eq(remainder, test->expected_remainder,\n\t\t    \"Unexpected remainder for \\\"%s\\\", base %d\",\n\t\t    test->input, test->base);\n\t\tif (err == 0) {\n\t\t\texpect_ju_eq(result, test->expected_x,\n\t\t\t    \"Unexpected result for \\\"%s\\\", base %d\",\n\t\t\t    test->input, 
test->base);\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_malloc_snprintf_truncated) {\n#define BUFLEN\t15\n\tchar buf[BUFLEN];\n\tsize_t result;\n\tsize_t len;\n#define TEST(expected_str_untruncated, ...) do {\t\t\t\\\n\tresult = malloc_snprintf(buf, len, __VA_ARGS__);\t\t\\\n\texpect_d_eq(strncmp(buf, expected_str_untruncated, len-1), 0,\t\\\n\t    \"Unexpected string inequality (\\\"%s\\\" vs \\\"%s\\\")\",\t\t\\\n\t    buf, expected_str_untruncated);\t\t\t\t\\\n\texpect_zu_eq(result, strlen(expected_str_untruncated),\t\t\\\n\t    \"Unexpected result\");\t\t\t\t\t\\\n} while (0)\n\n\tfor (len = 1; len < BUFLEN; len++) {\n\t\tTEST(\"012346789\",\t\"012346789\");\n\t\tTEST(\"a0123b\",\t\t\"a%sb\", \"0123\");\n\t\tTEST(\"a01234567\",\t\"a%s%s\", \"0123\", \"4567\");\n\t\tTEST(\"a0123  \",\t\t\"a%-6s\", \"0123\");\n\t\tTEST(\"a  0123\",\t\t\"a%6s\", \"0123\");\n\t\tTEST(\"a   012\",\t\t\"a%6.3s\", \"0123\");\n\t\tTEST(\"a   012\",\t\t\"a%*.*s\", 6, 3, \"0123\");\n\t\tTEST(\"a 123b\",\t\t\"a% db\", 123);\n\t\tTEST(\"a123b\",\t\t\"a%-db\", 123);\n\t\tTEST(\"a-123b\",\t\t\"a%-db\", -123);\n\t\tTEST(\"a+123b\",\t\t\"a%+db\", 123);\n\t}\n#undef BUFLEN\n#undef TEST\n}\nTEST_END\n\nTEST_BEGIN(test_malloc_snprintf) {\n#define BUFLEN\t128\n\tchar buf[BUFLEN];\n\tsize_t result;\n#define TEST(expected_str, ...) 
do {\t\t\t\t\t\\\n\tresult = malloc_snprintf(buf, sizeof(buf), __VA_ARGS__);\t\\\n\texpect_str_eq(buf, expected_str, \"Unexpected output\");\t\t\\\n\texpect_zu_eq(result, strlen(expected_str), \"Unexpected result\");\\\n} while (0)\n\n\tTEST(\"hello\", \"hello\");\n\n\tTEST(\"50%, 100%\", \"50%%, %d%%\", 100);\n\n\tTEST(\"a0123b\", \"a%sb\", \"0123\");\n\n\tTEST(\"a 0123b\", \"a%5sb\", \"0123\");\n\tTEST(\"a 0123b\", \"a%*sb\", 5, \"0123\");\n\n\tTEST(\"a0123 b\", \"a%-5sb\", \"0123\");\n\tTEST(\"a0123b\", \"a%*sb\", -1, \"0123\");\n\tTEST(\"a0123 b\", \"a%*sb\", -5, \"0123\");\n\tTEST(\"a0123 b\", \"a%-*sb\", -5, \"0123\");\n\n\tTEST(\"a012b\", \"a%.3sb\", \"0123\");\n\tTEST(\"a012b\", \"a%.*sb\", 3, \"0123\");\n\tTEST(\"a0123b\", \"a%.*sb\", -3, \"0123\");\n\n\tTEST(\"a  012b\", \"a%5.3sb\", \"0123\");\n\tTEST(\"a  012b\", \"a%5.*sb\", 3, \"0123\");\n\tTEST(\"a  012b\", \"a%*.3sb\", 5, \"0123\");\n\tTEST(\"a  012b\", \"a%*.*sb\", 5, 3, \"0123\");\n\tTEST(\"a 0123b\", \"a%*.*sb\", 5, -3, \"0123\");\n\n\tTEST(\"_abcd_\", \"_%x_\", 0xabcd);\n\tTEST(\"_0xabcd_\", \"_%#x_\", 0xabcd);\n\tTEST(\"_1234_\", \"_%o_\", 01234);\n\tTEST(\"_01234_\", \"_%#o_\", 01234);\n\tTEST(\"_1234_\", \"_%u_\", 1234);\n\tTEST(\"01234\", \"%05u\", 1234);\n\n\tTEST(\"_1234_\", \"_%d_\", 1234);\n\tTEST(\"_ 1234_\", \"_% d_\", 1234);\n\tTEST(\"_+1234_\", \"_%+d_\", 1234);\n\tTEST(\"_-1234_\", \"_%d_\", -1234);\n\tTEST(\"_-1234_\", \"_% d_\", -1234);\n\tTEST(\"_-1234_\", \"_%+d_\", -1234);\n\n\t/*\n\t * Morally, we should test these too, but 0-padded signed types are not\n\t * yet supported.\n\t *\n\t * TEST(\"01234\", \"%05\", 1234);\n\t * TEST(\"-1234\", \"%05d\", -1234);\n\t * TEST(\"-01234\", \"%06d\", -1234);\n\t*/\n\n\tTEST(\"_-1234_\", \"_%d_\", -1234);\n\tTEST(\"_1234_\", \"_%d_\", 1234);\n\tTEST(\"_-1234_\", \"_%i_\", -1234);\n\tTEST(\"_1234_\", \"_%i_\", 1234);\n\tTEST(\"_01234_\", \"_%#o_\", 01234);\n\tTEST(\"_1234_\", \"_%u_\", 1234);\n\tTEST(\"_0x1234abc_\", \"_%#x_\", 
0x1234abc);\n\tTEST(\"_0X1234ABC_\", \"_%#X_\", 0x1234abc);\n\tTEST(\"_c_\", \"_%c_\", 'c');\n\tTEST(\"_string_\", \"_%s_\", \"string\");\n\tTEST(\"_0x42_\", \"_%p_\", ((void *)0x42));\n\n\tTEST(\"_-1234_\", \"_%ld_\", ((long)-1234));\n\tTEST(\"_1234_\", \"_%ld_\", ((long)1234));\n\tTEST(\"_-1234_\", \"_%li_\", ((long)-1234));\n\tTEST(\"_1234_\", \"_%li_\", ((long)1234));\n\tTEST(\"_01234_\", \"_%#lo_\", ((long)01234));\n\tTEST(\"_1234_\", \"_%lu_\", ((long)1234));\n\tTEST(\"_0x1234abc_\", \"_%#lx_\", ((long)0x1234abc));\n\tTEST(\"_0X1234ABC_\", \"_%#lX_\", ((long)0x1234ABC));\n\n\tTEST(\"_-1234_\", \"_%lld_\", ((long long)-1234));\n\tTEST(\"_1234_\", \"_%lld_\", ((long long)1234));\n\tTEST(\"_-1234_\", \"_%lli_\", ((long long)-1234));\n\tTEST(\"_1234_\", \"_%lli_\", ((long long)1234));\n\tTEST(\"_01234_\", \"_%#llo_\", ((long long)01234));\n\tTEST(\"_1234_\", \"_%llu_\", ((long long)1234));\n\tTEST(\"_0x1234abc_\", \"_%#llx_\", ((long long)0x1234abc));\n\tTEST(\"_0X1234ABC_\", \"_%#llX_\", ((long long)0x1234ABC));\n\n\tTEST(\"_-1234_\", \"_%qd_\", ((long long)-1234));\n\tTEST(\"_1234_\", \"_%qd_\", ((long long)1234));\n\tTEST(\"_-1234_\", \"_%qi_\", ((long long)-1234));\n\tTEST(\"_1234_\", \"_%qi_\", ((long long)1234));\n\tTEST(\"_01234_\", \"_%#qo_\", ((long long)01234));\n\tTEST(\"_1234_\", \"_%qu_\", ((long long)1234));\n\tTEST(\"_0x1234abc_\", \"_%#qx_\", ((long long)0x1234abc));\n\tTEST(\"_0X1234ABC_\", \"_%#qX_\", ((long long)0x1234ABC));\n\n\tTEST(\"_-1234_\", \"_%jd_\", ((intmax_t)-1234));\n\tTEST(\"_1234_\", \"_%jd_\", ((intmax_t)1234));\n\tTEST(\"_-1234_\", \"_%ji_\", ((intmax_t)-1234));\n\tTEST(\"_1234_\", \"_%ji_\", ((intmax_t)1234));\n\tTEST(\"_01234_\", \"_%#jo_\", ((intmax_t)01234));\n\tTEST(\"_1234_\", \"_%ju_\", ((intmax_t)1234));\n\tTEST(\"_0x1234abc_\", \"_%#jx_\", ((intmax_t)0x1234abc));\n\tTEST(\"_0X1234ABC_\", \"_%#jX_\", ((intmax_t)0x1234ABC));\n\n\tTEST(\"_1234_\", \"_%td_\", ((ptrdiff_t)1234));\n\tTEST(\"_-1234_\", \"_%td_\", 
((ptrdiff_t)-1234));\n\tTEST(\"_1234_\", \"_%ti_\", ((ptrdiff_t)1234));\n\tTEST(\"_-1234_\", \"_%ti_\", ((ptrdiff_t)-1234));\n\n\tTEST(\"_-1234_\", \"_%zd_\", ((ssize_t)-1234));\n\tTEST(\"_1234_\", \"_%zd_\", ((ssize_t)1234));\n\tTEST(\"_-1234_\", \"_%zi_\", ((ssize_t)-1234));\n\tTEST(\"_1234_\", \"_%zi_\", ((ssize_t)1234));\n\tTEST(\"_01234_\", \"_%#zo_\", ((ssize_t)01234));\n\tTEST(\"_1234_\", \"_%zu_\", ((ssize_t)1234));\n\tTEST(\"_0x1234abc_\", \"_%#zx_\", ((ssize_t)0x1234abc));\n\tTEST(\"_0X1234ABC_\", \"_%#zX_\", ((ssize_t)0x1234ABC));\n#undef BUFLEN\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_malloc_strtoumax_no_endptr,\n\t    test_malloc_strtoumax,\n\t    test_malloc_snprintf_truncated,\n\t    test_malloc_snprintf);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/math.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define MAX_REL_ERR 1.0e-9\n#define MAX_ABS_ERR 1.0e-9\n\n#include <float.h>\n\n#ifdef __PGI\n#undef INFINITY\n#endif\n\n#ifndef INFINITY\n#define INFINITY (DBL_MAX + DBL_MAX)\n#endif\n\nstatic bool\ndouble_eq_rel(double a, double b, double max_rel_err, double max_abs_err) {\n\tdouble rel_err;\n\n\tif (fabs(a - b) < max_abs_err) {\n\t\treturn true;\n\t}\n\trel_err = (fabs(b) > fabs(a)) ? fabs((a-b)/b) : fabs((a-b)/a);\n\treturn (rel_err < max_rel_err);\n}\n\nstatic uint64_t\nfactorial(unsigned x) {\n\tuint64_t ret = 1;\n\tunsigned i;\n\n\tfor (i = 2; i <= x; i++) {\n\t\tret *= (uint64_t)i;\n\t}\n\n\treturn ret;\n}\n\nTEST_BEGIN(test_ln_gamma_factorial) {\n\tunsigned x;\n\n\t/* exp(ln_gamma(x)) == (x-1)! for integer x. */\n\tfor (x = 1; x <= 21; x++) {\n\t\texpect_true(double_eq_rel(exp(ln_gamma(x)),\n\t\t    (double)factorial(x-1), MAX_REL_ERR, MAX_ABS_ERR),\n\t\t    \"Incorrect factorial result for x=%u\", x);\n\t}\n}\nTEST_END\n\n/* Expected ln_gamma([0.0..100.0] increment=0.25). 
*/\nstatic const double ln_gamma_misc_expected[] = {\n\tINFINITY,\n\t1.28802252469807743, 0.57236494292470008, 0.20328095143129538,\n\t0.00000000000000000, -0.09827183642181320, -0.12078223763524518,\n\t-0.08440112102048555, 0.00000000000000000, 0.12487171489239651,\n\t0.28468287047291918, 0.47521466691493719, 0.69314718055994529,\n\t0.93580193110872523, 1.20097360234707429, 1.48681557859341718,\n\t1.79175946922805496, 2.11445692745037128, 2.45373657084244234,\n\t2.80857141857573644, 3.17805383034794575, 3.56137591038669710,\n\t3.95781396761871651, 4.36671603662228680, 4.78749174278204581,\n\t5.21960398699022932, 5.66256205985714178, 6.11591589143154568,\n\t6.57925121201010121, 7.05218545073853953, 7.53436423675873268,\n\t8.02545839631598312, 8.52516136106541467, 9.03318691960512332,\n\t9.54926725730099690, 10.07315123968123949, 10.60460290274525086,\n\t11.14340011995171231, 11.68933342079726856, 12.24220494005076176,\n\t12.80182748008146909, 13.36802367147604720, 13.94062521940376342,\n\t14.51947222506051816, 15.10441257307551943, 15.69530137706046524,\n\t16.29200047656724237, 16.89437797963419285, 17.50230784587389010,\n\t18.11566950571089407, 18.73434751193644843, 19.35823122022435427,\n\t19.98721449566188468, 20.62119544270163018, 21.26007615624470048,\n\t21.90376249182879320, 22.55216385312342098, 23.20519299513386002,\n\t23.86276584168908954, 24.52480131594137802, 25.19122118273868338,\n\t25.86194990184851861, 26.53691449111561340, 27.21604439872720604,\n\t27.89927138384089389, 28.58652940490193828, 29.27775451504081516,\n\t29.97288476399884871, 30.67186010608067548, 31.37462231367769050,\n\t32.08111489594735843, 32.79128302226991565, 33.50507345013689076,\n\t34.22243445715505317, 34.94331577687681545, 35.66766853819134298,\n\t36.39544520803305261, 37.12659953718355865, 37.86108650896109395,\n\t38.59886229060776230, 39.33988418719949465, 40.08411059791735198,\n\t40.83150097453079752, 41.58201578195490100, 42.33561646075348506,\n\t43.09226539146988699, 
43.85192586067515208, 44.61456202863158893,\n\t45.38013889847690052, 46.14862228684032885, 46.91997879580877395,\n\t47.69417578616628361, 48.47118135183522014, 49.25096429545256882,\n\t50.03349410501914463, 50.81874093156324790, 51.60667556776436982,\n\t52.39726942748592364, 53.19049452616926743, 53.98632346204390586,\n\t54.78472939811231157, 55.58568604486942633, 56.38916764371992940,\n\t57.19514895105859864, 58.00360522298051080, 58.81451220059079787,\n\t59.62784609588432261, 60.44358357816834371, 61.26170176100199427,\n\t62.08217818962842927, 62.90499082887649962, 63.73011805151035958,\n\t64.55753862700632340, 65.38723171073768015, 66.21917683354901385,\n\t67.05335389170279825, 67.88974313718154008, 68.72832516833013017,\n\t69.56908092082363737, 70.41199165894616385, 71.25703896716800045,\n\t72.10420474200799390, 72.95347118416940191, 73.80482079093779646,\n\t74.65823634883015814, 75.51370092648485866, 76.37119786778275454,\n\t77.23071078519033961, 78.09222355331530707, 78.95572030266725960,\n\t79.82118541361435859, 80.68860351052903468, 81.55795945611502873,\n\t82.42923834590904164, 83.30242550295004378, 84.17750647261028973,\n\t85.05446701758152983, 85.93329311301090456, 86.81397094178107920,\n\t87.69648688992882057, 88.58082754219766741, 89.46697967771913795,\n\t90.35493026581838194, 91.24466646193963015, 92.13617560368709292,\n\t93.02944520697742803, 93.92446296229978486, 94.82121673107967297,\n\t95.71969454214321615, 96.61988458827809723, 97.52177522288820910,\n\t98.42535495673848800, 99.33061245478741341, 100.23753653310367895,\n\t101.14611615586458981, 102.05634043243354370, 102.96819861451382394,\n\t103.88168009337621811, 104.79677439715833032, 105.71347118823287303,\n\t106.63176026064346047, 107.55163153760463501, 108.47307506906540198,\n\t109.39608102933323153, 110.32063971475740516, 111.24674154146920557,\n\t112.17437704317786995, 113.10353686902013237, 114.03421178146170689,\n\t114.96639265424990128, 115.90007047041454769, 
116.83523632031698014,\n\t117.77188139974506953, 118.70999700805310795, 119.64957454634490830,\n\t120.59060551569974962, 121.53308151543865279, 122.47699424143097247,\n\t123.42233548443955726, 124.36909712850338394, 125.31727114935689826,\n\t126.26684961288492559, 127.21782467361175861, 128.17018857322420899,\n\t129.12393363912724453, 130.07905228303084755, 131.03553699956862033,\n\t131.99338036494577864, 132.95257503561629164, 133.91311374698926784,\n\t134.87498931216194364, 135.83819462068046846, 136.80272263732638294,\n\t137.76856640092901785, 138.73571902320256299, 139.70417368760718091,\n\t140.67392364823425055, 141.64496222871400732, 142.61728282114600574,\n\t143.59087888505104047, 144.56574394634486680, 145.54187159633210058,\n\t146.51925549072063859, 147.49788934865566148, 148.47776695177302031,\n\t149.45888214327129617, 150.44122882700193600, 151.42480096657754984,\n\t152.40959258449737490, 153.39559776128982094, 154.38281063467164245,\n\t155.37122539872302696, 156.36083630307879844, 157.35163765213474107,\n\t158.34362380426921391, 159.33678917107920370, 160.33112821663092973,\n\t161.32663545672428995, 162.32330545817117695, 163.32113283808695314,\n\t164.32011226319519892, 165.32023844914485267, 166.32150615984036790,\n\t167.32391020678358018, 168.32744544842768164, 169.33210678954270634,\n\t170.33788918059275375, 171.34478761712384198, 172.35279713916281707,\n\t173.36191283062726143, 174.37212981874515094, 175.38344327348534080,\n\t176.39584840699734514, 177.40934047306160437, 178.42391476654847793,\n\t179.43956662288721304, 180.45629141754378111, 181.47408456550741107,\n\t182.49294152078630304, 183.51285777591152737, 184.53382886144947861,\n\t185.55585034552262869, 186.57891783333786861, 187.60302696672312095,\n\t188.62817342367162610, 189.65435291789341932, 190.68156119837468054,\n\t191.70979404894376330, 192.73904728784492590, 193.76931676731820176,\n\t194.80059837318714244, 195.83288802445184729, 196.86618167288995096,\n\t197.90047530266301123, 
198.93576492992946214, 199.97204660246373464,\n\t201.00931639928148797, 202.04757043027063901, 203.08680483582807597,\n\t204.12701578650228385, 205.16819948264117102, 206.21035215404597807,\n\t207.25347005962987623, 208.29754948708190909, 209.34258675253678916,\n\t210.38857820024875878, 211.43552020227099320, 212.48340915813977858,\n\t213.53224149456323744, 214.58201366511514152, 215.63272214993284592,\n\t216.68436345542014010, 217.73693411395422004, 218.79043068359703739,\n\t219.84484974781133815, 220.90018791517996988, 221.95644181913033322,\n\t223.01360811766215875, 224.07168349307951871, 225.13066465172661879,\n\t226.19054832372759734, 227.25133126272962159, 228.31301024565024704,\n\t229.37558207242807384, 230.43904356577689896, 231.50339157094342113,\n\t232.56862295546847008, 233.63473460895144740, 234.70172344281823484,\n\t235.76958639009222907, 236.83832040516844586, 237.90792246359117712,\n\t238.97838956183431947, 240.04971871708477238, 241.12190696702904802,\n\t242.19495136964280846, 243.26884900298270509, 244.34359696498191283,\n\t245.41919237324782443, 246.49563236486270057, 247.57291409618682110,\n\t248.65103474266476269, 249.72999149863338175, 250.80978157713354904,\n\t251.89040220972316320, 252.97185064629374551, 254.05412415488834199,\n\t255.13722002152300661, 256.22113555000953511, 257.30586806178126835,\n\t258.39141489572085675, 259.47777340799029844, 260.56494097186322279,\n\t261.65291497755913497, 262.74169283208021852, 263.83127195904967266,\n\t264.92164979855277807, 266.01282380697938379, 267.10479145686849733,\n\t268.19755023675537586, 269.29109765101975427, 270.38543121973674488,\n\t271.48054847852881721, 272.57644697842033565, 273.67312428569374561,\n\t274.77057798174683967, 275.86880566295326389, 276.96780494052313770,\n\t278.06757344036617496, 279.16810880295668085, 280.26940868320008349,\n\t281.37147075030043197, 282.47429268763045229, 283.57787219260217171,\n\t284.68220697654078322, 285.78729476455760050, 
286.89313329542699194,\n\t287.99972032146268930, 289.10705360839756395, 290.21513093526289140,\n\t291.32395009427028754, 292.43350889069523646, 293.54380514276073200,\n\t294.65483668152336350, 295.76660135076059532, 296.87909700685889902,\n\t297.99232151870342022, 299.10627276756946458, 300.22094864701409733,\n\t301.33634706277030091, 302.45246593264130297, 303.56930318639643929,\n\t304.68685676566872189, 305.80512462385280514, 306.92410472600477078,\n\t308.04379504874236773, 309.16419358014690033, 310.28529831966631036,\n\t311.40710727801865687, 312.52961847709792664, 313.65282994987899201,\n\t314.77673974032603610, 315.90134590329950015, 317.02664650446632777,\n\t318.15263962020929966, 319.27932333753892635, 320.40669575400545455,\n\t321.53475497761127144, 322.66349912672620803, 323.79292633000159185,\n\t324.92303472628691452, 326.05382246454587403, 327.18528770377525916,\n\t328.31742861292224234, 329.45024337080525356, 330.58373016603343331,\n\t331.71788719692847280, 332.85271267144611329, 333.98820480709991898,\n\t335.12436183088397001, 336.26118197919845443, 337.39866349777429377,\n\t338.53680464159958774, 339.67560367484657036, 340.81505887079896411,\n\t341.95516851178109619, 343.09593088908627578, 344.23734430290727460,\n\t345.37940706226686416, 346.52211748494903532, 347.66547389743118401,\n\t348.80947463481720661, 349.95411804077025408, 351.09940246744753267,\n\t352.24532627543504759, 353.39188783368263103, 354.53908551944078908,\n\t355.68691771819692349, 356.83538282361303118, 357.98447923746385868,\n\t359.13420536957539753\n};\n\nTEST_BEGIN(test_ln_gamma_misc) {\n\tunsigned i;\n\n\tfor (i = 1; i < sizeof(ln_gamma_misc_expected)/sizeof(double); i++) {\n\t\tdouble x = (double)i * 0.25;\n\t\texpect_true(double_eq_rel(ln_gamma(x),\n\t\t    ln_gamma_misc_expected[i], MAX_REL_ERR, MAX_ABS_ERR),\n\t\t    \"Incorrect ln_gamma result for i=%u\", i);\n\t}\n}\nTEST_END\n\n/* Expected pt_norm([0.01..0.99] increment=0.01). 
*/\nstatic const double pt_norm_expected[] = {\n\t-INFINITY,\n\t-2.32634787404084076, -2.05374891063182252, -1.88079360815125085,\n\t-1.75068607125216946, -1.64485362695147264, -1.55477359459685305,\n\t-1.47579102817917063, -1.40507156030963221, -1.34075503369021654,\n\t-1.28155156554460081, -1.22652812003661049, -1.17498679206608991,\n\t-1.12639112903880045, -1.08031934081495606, -1.03643338949378938,\n\t-0.99445788320975281, -0.95416525314619416, -0.91536508784281390,\n\t-0.87789629505122846, -0.84162123357291418, -0.80642124701824025,\n\t-0.77219321418868492, -0.73884684918521371, -0.70630256284008752,\n\t-0.67448975019608171, -0.64334540539291685, -0.61281299101662701,\n\t-0.58284150727121620, -0.55338471955567281, -0.52440051270804067,\n\t-0.49585034734745320, -0.46769879911450812, -0.43991316567323380,\n\t-0.41246312944140462, -0.38532046640756751, -0.35845879325119373,\n\t-0.33185334643681652, -0.30548078809939738, -0.27931903444745404,\n\t-0.25334710313579978, -0.22754497664114931, -0.20189347914185077,\n\t-0.17637416478086135, -0.15096921549677725, -0.12566134685507399,\n\t-0.10043372051146975, -0.07526986209982976, -0.05015358346473352,\n\t-0.02506890825871106, 0.00000000000000000, 0.02506890825871106,\n\t0.05015358346473366, 0.07526986209982990, 0.10043372051146990,\n\t0.12566134685507413, 0.15096921549677739, 0.17637416478086146,\n\t0.20189347914185105, 0.22754497664114931, 0.25334710313579978,\n\t0.27931903444745404, 0.30548078809939738, 0.33185334643681652,\n\t0.35845879325119373, 0.38532046640756762, 0.41246312944140484,\n\t0.43991316567323391, 0.46769879911450835, 0.49585034734745348,\n\t0.52440051270804111, 0.55338471955567303, 0.58284150727121620,\n\t0.61281299101662701, 0.64334540539291685, 0.67448975019608171,\n\t0.70630256284008752, 0.73884684918521371, 0.77219321418868492,\n\t0.80642124701824036, 0.84162123357291441, 0.87789629505122879,\n\t0.91536508784281423, 0.95416525314619460, 0.99445788320975348,\n\t1.03643338949378938, 
1.08031934081495606, 1.12639112903880045,\n\t1.17498679206608991, 1.22652812003661049, 1.28155156554460081,\n\t1.34075503369021654, 1.40507156030963265, 1.47579102817917085,\n\t1.55477359459685394, 1.64485362695147308, 1.75068607125217102,\n\t1.88079360815125041, 2.05374891063182208, 2.32634787404084076\n};\n\nTEST_BEGIN(test_pt_norm) {\n\tunsigned i;\n\n\tfor (i = 1; i < sizeof(pt_norm_expected)/sizeof(double); i++) {\n\t\tdouble p = (double)i * 0.01;\n\t\texpect_true(double_eq_rel(pt_norm(p), pt_norm_expected[i],\n\t\t    MAX_REL_ERR, MAX_ABS_ERR),\n\t\t    \"Incorrect pt_norm result for i=%u\", i);\n\t}\n}\nTEST_END\n\n/*\n * Expected pt_chi2(p=[0.01..0.99] increment=0.07,\n *                  df={0.1, 1.1, 10.1, 100.1, 1000.1}).\n */\nstatic const double pt_chi2_df[] = {0.1, 1.1, 10.1, 100.1, 1000.1};\nstatic const double pt_chi2_expected[] = {\n\t1.168926411457320e-40, 1.347680397072034e-22, 3.886980416666260e-17,\n\t8.245951724356564e-14, 2.068936347497604e-11, 1.562561743309233e-09,\n\t5.459543043426564e-08, 1.114775688149252e-06, 1.532101202364371e-05,\n\t1.553884683726585e-04, 1.239396954915939e-03, 8.153872320255721e-03,\n\t4.631183739647523e-02, 2.473187311701327e-01, 2.175254800183617e+00,\n\n\t0.0003729887888876379, 0.0164409238228929513, 0.0521523015190650113,\n\t0.1064701372271216612, 0.1800913735793082115, 0.2748704281195626931,\n\t0.3939246282787986497, 0.5420727552260817816, 0.7267265822221973259,\n\t0.9596554296000253670, 1.2607440376386165326, 1.6671185084541604304,\n\t2.2604828984738705167, 3.2868613342148607082, 6.9298574921692139839,\n\n\t2.606673548632508, 4.602913725294877, 5.646152813924212,\n\t6.488971315540869, 7.249823275816285, 7.977314231410841,\n\t8.700354939944047, 9.441728024225892, 10.224338321374127,\n\t11.076435368801061, 12.039320937038386, 13.183878752697167,\n\t14.657791935084575, 16.885728216339373, 23.361991680031817,\n\n\t70.14844087392152, 80.92379498849355, 85.53325420085891,\n\t88.94433120715347, 91.83732712857017, 
94.46719943606301,\n\t96.96896479994635, 99.43412843510363, 101.94074719829733,\n\t104.57228644307247, 107.43900093448734, 110.71844673417287,\n\t114.76616819871325, 120.57422505959563, 135.92318818757556,\n\n\t899.0072447849649, 937.9271278858220, 953.8117189560207,\n\t965.3079371501154, 974.8974061207954, 983.4936235182347,\n\t991.5691170518946, 999.4334123954690, 1007.3391826856553,\n\t1015.5445154999951, 1024.3777075619569, 1034.3538789836223,\n\t1046.4872561869577, 1063.5717461999654, 1107.0741966053859\n};\n\nTEST_BEGIN(test_pt_chi2) {\n\tunsigned i, j;\n\tunsigned e = 0;\n\n\tfor (i = 0; i < sizeof(pt_chi2_df)/sizeof(double); i++) {\n\t\tdouble df = pt_chi2_df[i];\n\t\tdouble ln_gamma_df = ln_gamma(df * 0.5);\n\t\tfor (j = 1; j < 100; j += 7) {\n\t\t\tdouble p = (double)j * 0.01;\n\t\t\texpect_true(double_eq_rel(pt_chi2(p, df, ln_gamma_df),\n\t\t\t    pt_chi2_expected[e], MAX_REL_ERR, MAX_ABS_ERR),\n\t\t\t    \"Incorrect pt_chi2 result for i=%u, j=%u\", i, j);\n\t\t\te++;\n\t\t}\n\t}\n}\nTEST_END\n\n/*\n * Expected pt_gamma(p=[0.1..0.99] increment=0.07,\n *                   shape=[0.5..3.0] increment=0.5).\n */\nstatic const double pt_gamma_shape[] = {0.5, 1.0, 1.5, 2.0, 2.5, 3.0};\nstatic const double pt_gamma_expected[] = {\n\t7.854392895485103e-05, 5.043466107888016e-03, 1.788288957794883e-02,\n\t3.900956150232906e-02, 6.913847560638034e-02, 1.093710833465766e-01,\n\t1.613412523825817e-01, 2.274682115597864e-01, 3.114117323127083e-01,\n\t4.189466220207417e-01, 5.598106789059246e-01, 7.521856146202706e-01,\n\t1.036125427911119e+00, 1.532450860038180e+00, 3.317448300510606e+00,\n\n\t0.01005033585350144, 0.08338160893905107, 0.16251892949777497,\n\t0.24846135929849966, 0.34249030894677596, 0.44628710262841947,\n\t0.56211891815354142, 0.69314718055994529, 0.84397007029452920,\n\t1.02165124753198167, 1.23787435600161766, 1.51412773262977574,\n\t1.89711998488588196, 2.52572864430825783, 4.60517018598809091,\n\n\t0.05741590094955853, 0.24747378084860744, 
0.39888572212236084,\n\t0.54394139997444901, 0.69048812513915159, 0.84311389861296104,\n\t1.00580622221479898, 1.18298694218766931, 1.38038096305861213,\n\t1.60627736383027453, 1.87396970522337947, 2.20749220408081070,\n\t2.65852391865854942, 3.37934630984842244, 5.67243336507218476,\n\n\t0.1485547402532659, 0.4657458011640391, 0.6832386130709406,\n\t0.8794297834672100, 1.0700752852474524, 1.2629614217350744,\n\t1.4638400448580779, 1.6783469900166610, 1.9132338090606940,\n\t2.1778589228618777, 2.4868823970010991, 2.8664695666264195,\n\t3.3724415436062114, 4.1682658512758071, 6.6383520679938108,\n\n\t0.2771490383641385, 0.7195001279643727, 0.9969081732265243,\n\t1.2383497880608061, 1.4675206597269927, 1.6953064251816552,\n\t1.9291243435606809, 2.1757300955477641, 2.4428032131216391,\n\t2.7406534569230616, 3.0851445039665513, 3.5043101122033367,\n\t4.0575997065264637, 4.9182956424675286, 7.5431362346944937,\n\n\t0.4360451650782932, 0.9983600902486267, 1.3306365880734528,\n\t1.6129750834753802, 1.8767241606994294, 2.1357032436097660,\n\t2.3988853336865565, 2.6740603137235603, 2.9697561737517959,\n\t3.2971457713883265, 3.6731795898504660, 4.1275751617770631,\n\t4.7230515633946677, 5.6417477865306020, 8.4059469148854635\n};\n\nTEST_BEGIN(test_pt_gamma_shape) {\n\tunsigned i, j;\n\tunsigned e = 0;\n\n\tfor (i = 0; i < sizeof(pt_gamma_shape)/sizeof(double); i++) {\n\t\tdouble shape = pt_gamma_shape[i];\n\t\tdouble ln_gamma_shape = ln_gamma(shape);\n\t\tfor (j = 1; j < 100; j += 7) {\n\t\t\tdouble p = (double)j * 0.01;\n\t\t\texpect_true(double_eq_rel(pt_gamma(p, shape, 1.0,\n\t\t\t    ln_gamma_shape), pt_gamma_expected[e], MAX_REL_ERR,\n\t\t\t    MAX_ABS_ERR),\n\t\t\t    \"Incorrect pt_gamma result for i=%u, j=%u\", i, j);\n\t\t\te++;\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_pt_gamma_scale) {\n\tdouble shape = 1.0;\n\tdouble ln_gamma_shape = ln_gamma(shape);\n\n\texpect_true(double_eq_rel(\n\t    pt_gamma(0.5, shape, 1.0, ln_gamma_shape) * 10.0,\n\t    pt_gamma(0.5, 
shape, 10.0, ln_gamma_shape), MAX_REL_ERR,\n\t    MAX_ABS_ERR),\n\t    \"Scale should be trivially equivalent to external multiplication\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_ln_gamma_factorial,\n\t    test_ln_gamma_misc,\n\t    test_pt_norm,\n\t    test_pt_chi2,\n\t    test_pt_gamma_shape,\n\t    test_pt_gamma_scale);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/mpsc_queue.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/mpsc_queue.h\"\n\ntypedef struct elem_s elem_t;\ntypedef ql_head(elem_t) elem_list_t;\ntypedef mpsc_queue(elem_t) elem_mpsc_queue_t;\nstruct elem_s {\n\tint thread;\n\tint idx;\n\tql_elm(elem_t) link;\n};\n\n/* Include both proto and gen to make sure they match up. */\nmpsc_queue_proto(static, elem_mpsc_queue_, elem_mpsc_queue_t, elem_t,\n    elem_list_t);\nmpsc_queue_gen(static, elem_mpsc_queue_, elem_mpsc_queue_t, elem_t,\n    elem_list_t, link);\n\nstatic void\ninit_elems_simple(elem_t *elems, int nelems, int thread) {\n\tfor (int i = 0; i < nelems; i++) {\n\t\telems[i].thread = thread;\n\t\telems[i].idx = i;\n\t\tql_elm_new(&elems[i], link);\n\t}\n}\n\nstatic void\ncheck_elems_simple(elem_list_t *list, int nelems, int thread) {\n\telem_t *elem;\n\tint next_idx = 0;\n\tql_foreach(elem, list, link) {\n\t\texpect_d_lt(next_idx, nelems, \"Too many list items\");\n\t\texpect_d_eq(thread, elem->thread, \"\");\n\t\texpect_d_eq(next_idx, elem->idx, \"List out of order\");\n\t\tnext_idx++;\n\t}\n}\n\nTEST_BEGIN(test_simple) {\n\tenum {NELEMS = 10};\n\telem_t elems[NELEMS];\n\telem_list_t list;\n\telem_mpsc_queue_t queue;\n\n\t/* Pop empty queue onto empty list -> empty list */\n\tql_new(&list);\n\telem_mpsc_queue_new(&queue);\n\telem_mpsc_queue_pop_batch(&queue, &list);\n\texpect_true(ql_empty(&list), \"\");\n\n\t/* Pop empty queue onto nonempty list -> list unchanged */\n\tql_new(&list);\n\telem_mpsc_queue_new(&queue);\n\tinit_elems_simple(elems, NELEMS, 0);\n\tfor (int i = 0; i < NELEMS; i++) {\n\t\tql_tail_insert(&list, &elems[i], link);\n\t}\n\telem_mpsc_queue_pop_batch(&queue, &list);\n\tcheck_elems_simple(&list, NELEMS, 0);\n\n\t/* Pop nonempty queue onto empty list -> list takes queue contents */\n\tql_new(&list);\n\telem_mpsc_queue_new(&queue);\n\tinit_elems_simple(elems, NELEMS, 0);\n\tfor (int i = 0; i < NELEMS; i++) {\n\t\telem_mpsc_queue_push(&queue, 
&elems[i]);\n\t}\n\telem_mpsc_queue_pop_batch(&queue, &list);\n\tcheck_elems_simple(&list, NELEMS, 0);\n\n\t/* Pop nonempty queue onto nonempty list -> list gains queue contents */\n\tql_new(&list);\n\telem_mpsc_queue_new(&queue);\n\tinit_elems_simple(elems, NELEMS, 0);\n\tfor (int i = 0; i < NELEMS / 2; i++) {\n\t\tql_tail_insert(&list, &elems[i], link);\n\t}\n\tfor (int i = NELEMS / 2; i < NELEMS; i++) {\n\t\telem_mpsc_queue_push(&queue, &elems[i]);\n\t}\n\telem_mpsc_queue_pop_batch(&queue, &list);\n\tcheck_elems_simple(&list, NELEMS, 0);\n\n}\nTEST_END\n\nTEST_BEGIN(test_push_single_or_batch) {\n\tenum {\n\t\tBATCH_MAX = 10,\n\t\t/*\n\t\t * We'll push i items one-at-a-time, then i items as a batch,\n\t\t * then i items as a batch again, as i ranges from 1 to\n\t\t * BATCH_MAX.  So we need 3 times the sum of the numbers from 1\n\t\t * to BATCH_MAX elements total.\n\t\t */\n\t\tNELEMS = 3 * BATCH_MAX * (BATCH_MAX - 1) / 2\n\t};\n\telem_t elems[NELEMS];\n\tinit_elems_simple(elems, NELEMS, 0);\n\telem_list_t list;\n\tql_new(&list);\n\telem_mpsc_queue_t queue;\n\telem_mpsc_queue_new(&queue);\n\tint next_idx = 0;\n\tfor (int i = 1; i < 10; i++) {\n\t\t/* Push i items 1 at a time. */\n\t\tfor (int j = 0; j < i; j++) {\n\t\t\telem_mpsc_queue_push(&queue, &elems[next_idx]);\n\t\t\tnext_idx++;\n\t\t}\n\t\t/* Push i items in batch. */\n\t\tfor (int j = 0; j < i; j++) {\n\t\t\tql_tail_insert(&list, &elems[next_idx], link);\n\t\t\tnext_idx++;\n\t\t}\n\t\telem_mpsc_queue_push_batch(&queue, &list);\n\t\texpect_true(ql_empty(&list), \"Batch push should empty source\");\n\t\t/*\n\t\t * Push i items in batch, again.  
This tests two batches\n\t\t * proceeding one after the other.\n\t\t */\n\t\tfor (int j = 0; j < i; j++) {\n\t\t\tql_tail_insert(&list, &elems[next_idx], link);\n\t\t\tnext_idx++;\n\t\t}\n\t\telem_mpsc_queue_push_batch(&queue, &list);\n\t\texpect_true(ql_empty(&list), \"Batch push should empty source\");\n\t}\n\texpect_d_eq(NELEMS, next_idx, \"Miscomputed number of elems to push.\");\n\n\texpect_true(ql_empty(&list), \"\");\n\telem_mpsc_queue_pop_batch(&queue, &list);\n\tcheck_elems_simple(&list, NELEMS, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_multi_op) {\n\tenum {NELEMS = 20};\n\telem_t elems[NELEMS];\n\tinit_elems_simple(elems, NELEMS, 0);\n\telem_list_t push_list;\n\tql_new(&push_list);\n\telem_list_t result_list;\n\tql_new(&result_list);\n\telem_mpsc_queue_t queue;\n\telem_mpsc_queue_new(&queue);\n\n\tint next_idx = 0;\n\t/* Push first quarter 1-at-a-time. */\n\tfor (int i = 0; i < NELEMS / 4; i++) {\n\t\telem_mpsc_queue_push(&queue, &elems[next_idx]);\n\t\tnext_idx++;\n\t}\n\t/* Push second quarter in batch. */\n\tfor (int i = NELEMS / 4; i < NELEMS / 2; i++) {\n\t\tql_tail_insert(&push_list, &elems[next_idx], link);\n\t\tnext_idx++;\n\t}\n\telem_mpsc_queue_push_batch(&queue, &push_list);\n\t/* Batch pop all pushed elements. */\n\telem_mpsc_queue_pop_batch(&queue, &result_list);\n\t/* Push third quarter in batch. */\n\tfor (int i = NELEMS / 2; i < 3 * NELEMS / 4; i++) {\n\t\tql_tail_insert(&push_list, &elems[next_idx], link);\n\t\tnext_idx++;\n\t}\n\telem_mpsc_queue_push_batch(&queue, &push_list);\n\t/* Push last quarter one-at-a-time. */\n\tfor (int i = 3 * NELEMS / 4; i < NELEMS; i++) {\n\t\telem_mpsc_queue_push(&queue, &elems[next_idx]);\n\t\tnext_idx++;\n\t}\n\t/* Pop them again.  Order of existing list should be preserved. 
*/\n\telem_mpsc_queue_pop_batch(&queue, &result_list);\n\n\tcheck_elems_simple(&result_list, NELEMS, 0);\n\n}\nTEST_END\n\ntypedef struct pusher_arg_s pusher_arg_t;\nstruct pusher_arg_s {\n\telem_mpsc_queue_t *queue;\n\tint thread;\n\telem_t *elems;\n\tint nelems;\n};\n\ntypedef struct popper_arg_s popper_arg_t;\nstruct popper_arg_s {\n\telem_mpsc_queue_t *queue;\n\tint npushers;\n\tint nelems_per_pusher;\n\tint *pusher_counts;\n};\n\nstatic void *\nthd_pusher(void *void_arg) {\n\tpusher_arg_t *arg = (pusher_arg_t *)void_arg;\n\tint next_idx = 0;\n\twhile (next_idx < arg->nelems) {\n\t\t/* Push 10 items in batch. */\n\t\telem_list_t list;\n\t\tql_new(&list);\n\t\tint limit = next_idx + 10;\n\t\twhile (next_idx < arg->nelems && next_idx < limit) {\n\t\t\tql_tail_insert(&list, &arg->elems[next_idx], link);\n\t\t\tnext_idx++;\n\t\t}\n\t\telem_mpsc_queue_push_batch(arg->queue, &list);\n\t\t/* Push 10 items one-at-a-time. */\n\t\tlimit = next_idx + 10;\n\t\twhile (next_idx < arg->nelems && next_idx < limit) {\n\t\t\telem_mpsc_queue_push(arg->queue, &arg->elems[next_idx]);\n\t\t\tnext_idx++;\n\t\t}\n\n\t}\n\treturn NULL;\n}\n\nstatic void *\nthd_popper(void *void_arg) {\n\tpopper_arg_t *arg = (popper_arg_t *)void_arg;\n\tint done_pushers = 0;\n\twhile (done_pushers < arg->npushers) {\n\t\telem_list_t list;\n\t\tql_new(&list);\n\t\telem_mpsc_queue_pop_batch(arg->queue, &list);\n\t\telem_t *elem;\n\t\tql_foreach(elem, &list, link) {\n\t\t\tint thread = elem->thread;\n\t\t\tint idx = elem->idx;\n\t\t\texpect_d_eq(arg->pusher_counts[thread], idx,\n\t\t\t    \"Thread's pushes reordered\");\n\t\t\targ->pusher_counts[thread]++;\n\t\t\tif (arg->pusher_counts[thread]\n\t\t\t    == arg->nelems_per_pusher) {\n\t\t\t\tdone_pushers++;\n\t\t\t}\n\t\t}\n\t}\n\treturn NULL;\n}\n\nTEST_BEGIN(test_multiple_threads) {\n\tenum {\n\t\tNPUSHERS = 4,\n\t\tNELEMS_PER_PUSHER = 1000*1000,\n\t};\n\tthd_t pushers[NPUSHERS];\n\tpusher_arg_t pusher_arg[NPUSHERS];\n\n\tthd_t popper;\n\tpopper_arg_t 
popper_arg;\n\n\telem_mpsc_queue_t queue;\n\telem_mpsc_queue_new(&queue);\n\n\telem_t *elems = calloc(NPUSHERS * NELEMS_PER_PUSHER, sizeof(elem_t));\n\telem_t *elem_iter = elems;\n\tfor (int i = 0; i < NPUSHERS; i++) {\n\t\tpusher_arg[i].queue = &queue;\n\t\tpusher_arg[i].thread = i;\n\t\tpusher_arg[i].elems = elem_iter;\n\t\tpusher_arg[i].nelems = NELEMS_PER_PUSHER;\n\n\t\tinit_elems_simple(elem_iter, NELEMS_PER_PUSHER, i);\n\t\telem_iter += NELEMS_PER_PUSHER;\n\t}\n\tpopper_arg.queue = &queue;\n\tpopper_arg.npushers = NPUSHERS;\n\tpopper_arg.nelems_per_pusher = NELEMS_PER_PUSHER;\n\tint pusher_counts[NPUSHERS] = {0};\n\tpopper_arg.pusher_counts = pusher_counts;\n\n\tthd_create(&popper, thd_popper, (void *)&popper_arg);\n\tfor (int i = 0; i < NPUSHERS; i++) {\n\t\tthd_create(&pushers[i], thd_pusher, &pusher_arg[i]);\n\t}\n\n\tthd_join(popper, NULL);\n\tfor (int i = 0; i < NPUSHERS; i++) {\n\t\tthd_join(pushers[i], NULL);\n\t}\n\n\tfor (int i = 0; i < NPUSHERS; i++) {\n\t\texpect_d_eq(NELEMS_PER_PUSHER, pusher_counts[i], \"\");\n\t}\n\n\tfree(elems);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_simple,\n\t    test_push_single_or_batch,\n\t    test_multi_op,\n\t    test_multiple_threads);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/mq.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define NSENDERS\t3\n#define NMSGS\t\t100000\n\ntypedef struct mq_msg_s mq_msg_t;\nstruct mq_msg_s {\n\tmq_msg(mq_msg_t)\tlink;\n};\nmq_gen(static, mq_, mq_t, mq_msg_t, link)\n\nTEST_BEGIN(test_mq_basic) {\n\tmq_t mq;\n\tmq_msg_t msg;\n\n\texpect_false(mq_init(&mq), \"Unexpected mq_init() failure\");\n\texpect_u_eq(mq_count(&mq), 0, \"mq should be empty\");\n\texpect_ptr_null(mq_tryget(&mq),\n\t    \"mq_tryget() should fail when the queue is empty\");\n\n\tmq_put(&mq, &msg);\n\texpect_u_eq(mq_count(&mq), 1, \"mq should contain one message\");\n\texpect_ptr_eq(mq_tryget(&mq), &msg, \"mq_tryget() should return msg\");\n\n\tmq_put(&mq, &msg);\n\texpect_ptr_eq(mq_get(&mq), &msg, \"mq_get() should return msg\");\n\n\tmq_fini(&mq);\n}\nTEST_END\n\nstatic void *\nthd_receiver_start(void *arg) {\n\tmq_t *mq = (mq_t *)arg;\n\tunsigned i;\n\n\tfor (i = 0; i < (NSENDERS * NMSGS); i++) {\n\t\tmq_msg_t *msg = mq_get(mq);\n\t\texpect_ptr_not_null(msg, \"mq_get() should never return NULL\");\n\t\tdallocx(msg, 0);\n\t}\n\treturn NULL;\n}\n\nstatic void *\nthd_sender_start(void *arg) {\n\tmq_t *mq = (mq_t *)arg;\n\tunsigned i;\n\n\tfor (i = 0; i < NMSGS; i++) {\n\t\tmq_msg_t *msg;\n\t\tvoid *p;\n\t\tp = mallocx(sizeof(mq_msg_t), 0);\n\t\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\t\tmsg = (mq_msg_t *)p;\n\t\tmq_put(mq, msg);\n\t}\n\treturn NULL;\n}\n\nTEST_BEGIN(test_mq_threaded) {\n\tmq_t mq;\n\tthd_t receiver;\n\tthd_t senders[NSENDERS];\n\tunsigned i;\n\n\texpect_false(mq_init(&mq), \"Unexpected mq_init() failure\");\n\n\tthd_create(&receiver, thd_receiver_start, (void *)&mq);\n\tfor (i = 0; i < NSENDERS; i++) {\n\t\tthd_create(&senders[i], thd_sender_start, (void *)&mq);\n\t}\n\n\tthd_join(receiver, NULL);\n\tfor (i = 0; i < NSENDERS; i++) {\n\t\tthd_join(senders[i], NULL);\n\t}\n\n\tmq_fini(&mq);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_mq_basic,\n\t    test_mq_threaded);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/mtx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define NTHREADS\t2\n#define NINCRS\t\t2000000\n\nTEST_BEGIN(test_mtx_basic) {\n\tmtx_t mtx;\n\n\texpect_false(mtx_init(&mtx), \"Unexpected mtx_init() failure\");\n\tmtx_lock(&mtx);\n\tmtx_unlock(&mtx);\n\tmtx_fini(&mtx);\n}\nTEST_END\n\ntypedef struct {\n\tmtx_t\t\tmtx;\n\tunsigned\tx;\n} thd_start_arg_t;\n\nstatic void *\nthd_start(void *varg) {\n\tthd_start_arg_t *arg = (thd_start_arg_t *)varg;\n\tunsigned i;\n\n\tfor (i = 0; i < NINCRS; i++) {\n\t\tmtx_lock(&arg->mtx);\n\t\targ->x++;\n\t\tmtx_unlock(&arg->mtx);\n\t}\n\treturn NULL;\n}\n\nTEST_BEGIN(test_mtx_race) {\n\tthd_start_arg_t arg;\n\tthd_t thds[NTHREADS];\n\tunsigned i;\n\n\texpect_false(mtx_init(&arg.mtx), \"Unexpected mtx_init() failure\");\n\targ.x = 0;\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_create(&thds[i], thd_start, (void *)&arg);\n\t}\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n\texpect_u_eq(arg.x, NTHREADS * NINCRS,\n\t    \"Race-related counter corruption\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_mtx_basic,\n\t    test_mtx_race);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/nstime.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define BILLION\tUINT64_C(1000000000)\n\nTEST_BEGIN(test_nstime_init) {\n\tnstime_t nst;\n\n\tnstime_init(&nst, 42000000043);\n\texpect_u64_eq(nstime_ns(&nst), 42000000043, \"ns incorrectly read\");\n\texpect_u64_eq(nstime_sec(&nst), 42, \"sec incorrectly read\");\n\texpect_u64_eq(nstime_nsec(&nst), 43, \"nsec incorrectly read\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_init2) {\n\tnstime_t nst;\n\n\tnstime_init2(&nst, 42, 43);\n\texpect_u64_eq(nstime_sec(&nst), 42, \"sec incorrectly read\");\n\texpect_u64_eq(nstime_nsec(&nst), 43, \"nsec incorrectly read\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_copy) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_init_zero(&nstb);\n\tnstime_copy(&nstb, &nsta);\n\texpect_u64_eq(nstime_sec(&nstb), 42, \"sec incorrectly copied\");\n\texpect_u64_eq(nstime_nsec(&nstb), 43, \"nsec incorrectly copied\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_compare) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0, \"Times should be equal\");\n\texpect_d_eq(nstime_compare(&nstb, &nsta), 0, \"Times should be equal\");\n\n\tnstime_init2(&nstb, 42, 42);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 1,\n\t    \"nsta should be greater than nstb\");\n\texpect_d_eq(nstime_compare(&nstb, &nsta), -1,\n\t    \"nstb should be less than nsta\");\n\n\tnstime_init2(&nstb, 42, 44);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), -1,\n\t    \"nsta should be less than nstb\");\n\texpect_d_eq(nstime_compare(&nstb, &nsta), 1,\n\t    \"nstb should be greater than nsta\");\n\n\tnstime_init2(&nstb, 41, BILLION - 1);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 1,\n\t    \"nsta should be greater than nstb\");\n\texpect_d_eq(nstime_compare(&nstb, &nsta), -1,\n\t    \"nstb should be less than nsta\");\n\n\tnstime_init2(&nstb, 43, 0);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), -1,\n\t    \"nsta should be less than 
nstb\");\n\texpect_d_eq(nstime_compare(&nstb, &nsta), 1,\n\t    \"nstb should be greater than nsta\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_add) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_add(&nsta, &nstb);\n\tnstime_init2(&nstb, 84, 86);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect addition result\");\n\n\tnstime_init2(&nsta, 42, BILLION - 1);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_add(&nsta, &nstb);\n\tnstime_init2(&nstb, 85, BILLION - 2);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect addition result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_iadd) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, BILLION - 1);\n\tnstime_iadd(&nsta, 1);\n\tnstime_init2(&nstb, 43, 0);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect addition result\");\n\n\tnstime_init2(&nsta, 42, 1);\n\tnstime_iadd(&nsta, BILLION + 1);\n\tnstime_init2(&nstb, 43, 2);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect addition result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_subtract) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_subtract(&nsta, &nstb);\n\tnstime_init_zero(&nstb);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect subtraction result\");\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_init2(&nstb, 41, 44);\n\tnstime_subtract(&nsta, &nstb);\n\tnstime_init2(&nstb, 0, BILLION - 1);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect subtraction result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_isubtract) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_isubtract(&nsta, 42*BILLION + 43);\n\tnstime_init_zero(&nstb);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect subtraction result\");\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_isubtract(&nsta, 41*BILLION + 44);\n\tnstime_init2(&nstb, 0, BILLION - 
1);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect subtraction result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_imultiply) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_imultiply(&nsta, 10);\n\tnstime_init2(&nstb, 420, 430);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect multiplication result\");\n\n\tnstime_init2(&nsta, 42, 666666666);\n\tnstime_imultiply(&nsta, 3);\n\tnstime_init2(&nstb, 127, 999999998);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect multiplication result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_idivide) {\n\tnstime_t nsta, nstb;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_imultiply(&nsta, 10);\n\tnstime_idivide(&nsta, 10);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect division result\");\n\n\tnstime_init2(&nsta, 42, 666666666);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_imultiply(&nsta, 3);\n\tnstime_idivide(&nsta, 3);\n\texpect_d_eq(nstime_compare(&nsta, &nstb), 0,\n\t    \"Incorrect division result\");\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_divide) {\n\tnstime_t nsta, nstb, nstc;\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_imultiply(&nsta, 10);\n\texpect_u64_eq(nstime_divide(&nsta, &nstb), 10,\n\t    \"Incorrect division result\");\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_imultiply(&nsta, 10);\n\tnstime_init(&nstc, 1);\n\tnstime_add(&nsta, &nstc);\n\texpect_u64_eq(nstime_divide(&nsta, &nstb), 10,\n\t    \"Incorrect division result\");\n\n\tnstime_init2(&nsta, 42, 43);\n\tnstime_copy(&nstb, &nsta);\n\tnstime_imultiply(&nsta, 10);\n\tnstime_init(&nstc, 1);\n\tnstime_subtract(&nsta, &nstc);\n\texpect_u64_eq(nstime_divide(&nsta, &nstb), 9,\n\t    \"Incorrect division result\");\n}\nTEST_END\n\nvoid\ntest_nstime_since_once(nstime_t *t) {\n\tnstime_t old_t;\n\tnstime_copy(&old_t, t);\n\n\tuint64_t ns_since = 
nstime_ns_since(t);\n\tnstime_update(t);\n\n\tnstime_t new_t;\n\tnstime_copy(&new_t, t);\n\tnstime_subtract(&new_t, &old_t);\n\n\texpect_u64_ge(nstime_ns(&new_t), ns_since,\n\t    \"Incorrect time since result\");\n}\n\nTEST_BEGIN(test_nstime_ns_since) {\n\tnstime_t t;\n\n\tnstime_init_update(&t);\n\tfor (uint64_t i = 0; i < 10000; i++) {\n\t\t/* Keeps updating t and verifies ns_since is valid. */\n\t\ttest_nstime_since_once(&t);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_nstime_monotonic) {\n\tnstime_monotonic();\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_nstime_init,\n\t    test_nstime_init2,\n\t    test_nstime_copy,\n\t    test_nstime_compare,\n\t    test_nstime_add,\n\t    test_nstime_iadd,\n\t    test_nstime_subtract,\n\t    test_nstime_isubtract,\n\t    test_nstime_imultiply,\n\t    test_nstime_idivide,\n\t    test_nstime_divide,\n\t    test_nstime_ns_since,\n\t    test_nstime_monotonic);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/oversize_threshold.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/ctl.h\"\n\nstatic void\narena_mallctl(const char *mallctl_str, unsigned arena, void *oldp,\n    size_t *oldlen, void *newp, size_t newlen) {\n\tint err;\n\tchar buf[100];\n\tmalloc_snprintf(buf, sizeof(buf), mallctl_str, arena);\n\n\terr = mallctl(buf, oldp, oldlen, newp, newlen);\n\texpect_d_eq(0, err, \"Mallctl failed; %s\", buf);\n}\n\nTEST_BEGIN(test_oversize_threshold_get_set) {\n\tint err;\n\tsize_t old_threshold;\n\tsize_t new_threshold;\n\tsize_t threshold_sz = sizeof(old_threshold);\n\n\tunsigned arena;\n\tsize_t arena_sz = sizeof(arena);\n\terr = mallctl(\"arenas.create\", (void *)&arena, &arena_sz, NULL, 0);\n\texpect_d_eq(0, err, \"Arena creation failed\");\n\n\t/* Just a write. */\n\tnew_threshold = 1024 * 1024;\n\tarena_mallctl(\"arena.%u.oversize_threshold\", arena, NULL, NULL,\n\t    &new_threshold, threshold_sz);\n\n\t/* Read and write */\n\tnew_threshold = 2 * 1024 * 1024;\n\tarena_mallctl(\"arena.%u.oversize_threshold\", arena, &old_threshold,\n\t    &threshold_sz, &new_threshold, threshold_sz);\n\texpect_zu_eq(1024 * 1024, old_threshold, \"Should have read old value\");\n\n\t/* Just a read */\n\tarena_mallctl(\"arena.%u.oversize_threshold\", arena, &old_threshold,\n\t    &threshold_sz, NULL, 0);\n\texpect_zu_eq(2 * 1024 * 1024, old_threshold, \"Should have read old value\");\n}\nTEST_END\n\nstatic size_t max_purged = 0;\nstatic bool\npurge_forced_record_max(extent_hooks_t* hooks, void *addr, size_t sz,\n    size_t offset, size_t length, unsigned arena_ind) {\n\tif (length > max_purged) {\n\t\tmax_purged = length;\n\t}\n\treturn false;\n}\n\nstatic bool\ndalloc_record_max(extent_hooks_t *extent_hooks, void *addr, size_t sz,\n    bool comitted, unsigned arena_ind) {\n\tif (sz > max_purged) {\n\t\tmax_purged = sz;\n\t}\n\treturn false;\n}\n\nextent_hooks_t max_recording_extent_hooks;\n\nTEST_BEGIN(test_oversize_threshold) {\n\tmax_recording_extent_hooks = 
ehooks_default_extent_hooks;\n\tmax_recording_extent_hooks.purge_forced = &purge_forced_record_max;\n\tmax_recording_extent_hooks.dalloc = &dalloc_record_max;\n\n\textent_hooks_t *extent_hooks = &max_recording_extent_hooks;\n\n\tint err;\n\n\tunsigned arena;\n\tsize_t arena_sz = sizeof(arena);\n\terr = mallctl(\"arenas.create\", (void *)&arena, &arena_sz, NULL, 0);\n\texpect_d_eq(0, err, \"Arena creation failed\");\n\tarena_mallctl(\"arena.%u.extent_hooks\", arena, NULL, NULL, &extent_hooks,\n\t    sizeof(extent_hooks));\n\n\t/*\n\t * This test will fundamentally race with purging, since we're going to\n\t * check the dirty stats to see if our oversized allocation got purged.\n\t * We don't want other purging to happen accidentally.  We can't just\n\t * disable purging entirely, though, since that will also disable\n\t * oversize purging.  Just set purging intervals to be very large.\n\t */\n\tssize_t decay_ms = 100 * 1000;\n\tssize_t decay_ms_sz = sizeof(decay_ms);\n\tarena_mallctl(\"arena.%u.dirty_decay_ms\", arena, NULL, NULL, &decay_ms,\n\t    decay_ms_sz);\n\tarena_mallctl(\"arena.%u.muzzy_decay_ms\", arena, NULL, NULL, &decay_ms,\n\t    decay_ms_sz);\n\n\t/* Clean everything out. */\n\tarena_mallctl(\"arena.%u.purge\", arena, NULL, NULL, NULL, 0);\n\tmax_purged = 0;\n\n\t/* Set threshold to 1MB. */\n\tsize_t threshold = 1024 * 1024;\n\tsize_t threshold_sz = sizeof(threshold);\n\tarena_mallctl(\"arena.%u.oversize_threshold\", arena, NULL, NULL,\n\t    &threshold, threshold_sz);\n\n\t/* Allocating and freeing half a megabyte should leave them dirty. */\n\tvoid *ptr = mallocx(512 * 1024, MALLOCX_ARENA(arena));\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\tif (!is_background_thread_enabled()) {\n\t\texpect_zu_lt(max_purged, 512 * 1024, \"Expected no 512k purge\");\n\t}\n\n\t/* Purge again to reset everything out. 
*/\n\tarena_mallctl(\"arena.%u.purge\", arena, NULL, NULL, NULL, 0);\n\tmax_purged = 0;\n\n\t/*\n\t * Allocating and freeing 2 megabytes should have them purged because of\n\t * the oversize threshold.\n\t */\n\tptr = mallocx(2 * 1024 * 1024, MALLOCX_ARENA(arena));\n\tdallocx(ptr, MALLOCX_TCACHE_NONE);\n\texpect_zu_ge(max_purged, 2 * 1024 * 1024, \"Expected a 2MB purge\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_oversize_threshold_get_set,\n\t    test_oversize_threshold);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/pa.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/pa.h\"\n\nstatic void *\nalloc_hook(extent_hooks_t *extent_hooks, void *new_addr, size_t size,\n    size_t alignment, bool *zero, bool *commit, unsigned arena_ind) {\n\tvoid *ret = pages_map(new_addr, size, alignment, commit);\n\treturn ret;\n}\n\nstatic bool\nmerge_hook(extent_hooks_t *extent_hooks, void *addr_a, size_t size_a,\n    void *addr_b, size_t size_b, bool committed, unsigned arena_ind) {\n\treturn !maps_coalesce;\n}\n\nstatic bool\nsplit_hook(extent_hooks_t *extent_hooks, void *addr, size_t size,\n    size_t size_a, size_t size_b, bool committed, unsigned arena_ind) {\n\treturn !maps_coalesce;\n}\n\nstatic void\ninit_test_extent_hooks(extent_hooks_t *hooks) {\n\t/*\n\t * The default hooks are mostly fine for testing.  A few of them,\n\t * though, access globals (alloc for dss setting in an arena, split and\n\t * merge touch the global emap to find head state.  The first of these\n\t * can be fixed by keeping that state with the hooks, where it logically\n\t * belongs.  
The second, though, we can only fix when we use the extent\n\t * hook API.\n\t */\n\tmemcpy(hooks, &ehooks_default_extent_hooks, sizeof(extent_hooks_t));\n\thooks->alloc = &alloc_hook;\n\thooks->merge = &merge_hook;\n\thooks->split = &split_hook;\n}\n\ntypedef struct test_data_s test_data_t;\nstruct test_data_s {\n\tpa_shard_t shard;\n\tpa_central_t central;\n\tbase_t *base;\n\temap_t emap;\n\tpa_shard_stats_t stats;\n\tmalloc_mutex_t stats_mtx;\n\textent_hooks_t hooks;\n};\n\ntest_data_t *init_test_data(ssize_t dirty_decay_ms, ssize_t muzzy_decay_ms) {\n\ttest_data_t *test_data = calloc(1, sizeof(test_data_t));\n\tassert_ptr_not_null(test_data, \"\");\n\tinit_test_extent_hooks(&test_data->hooks);\n\n\tbase_t *base = base_new(TSDN_NULL, /* ind */ 1, &test_data->hooks,\n\t    /* metadata_use_hooks */ true);\n\tassert_ptr_not_null(base, \"\");\n\n\ttest_data->base = base;\n\tbool err = emap_init(&test_data->emap, test_data->base,\n\t    /* zeroed */ true);\n\tassert_false(err, \"\");\n\n\tnstime_t time;\n\tnstime_init(&time, 0);\n\n\terr = pa_central_init(&test_data->central, base, opt_hpa,\n\t    &hpa_hooks_default);\n\tassert_false(err, \"\");\n\n\tconst size_t pa_oversize_threshold = 8 * 1024 * 1024;\n\terr = pa_shard_init(TSDN_NULL, &test_data->shard, &test_data->central,\n\t    &test_data->emap, test_data->base, /* ind */ 1, &test_data->stats,\n\t    &test_data->stats_mtx, &time, pa_oversize_threshold, dirty_decay_ms,\n\t    muzzy_decay_ms);\n\tassert_false(err, \"\");\n\n\treturn test_data;\n}\n\nvoid destroy_test_data(test_data_t *data) {\n\tbase_delete(TSDN_NULL, data->base);\n\tfree(data);\n}\n\nstatic void *\ndo_alloc_free_purge(void *arg) {\n\ttest_data_t *test_data = (test_data_t *)arg;\n\tfor (int i = 0; i < 10 * 1000; i++) {\n\t\tbool deferred_work_generated = false;\n\t\tedata_t *edata = pa_alloc(TSDN_NULL, &test_data->shard, PAGE,\n\t\t    PAGE, /* slab */ false, /* szind */ 0, /* zero */ false,\n\t\t    /* guarded */ false, 
&deferred_work_generated);\n\t\tassert_ptr_not_null(edata, \"\");\n\t\tpa_dalloc(TSDN_NULL, &test_data->shard, edata,\n\t\t    &deferred_work_generated);\n\t\tmalloc_mutex_lock(TSDN_NULL,\n\t\t    &test_data->shard.pac.decay_dirty.mtx);\n\t\tpac_decay_all(TSDN_NULL, &test_data->shard.pac,\n\t\t    &test_data->shard.pac.decay_dirty,\n\t\t    &test_data->shard.pac.stats->decay_dirty,\n\t\t    &test_data->shard.pac.ecache_dirty, true);\n\t\tmalloc_mutex_unlock(TSDN_NULL,\n\t\t    &test_data->shard.pac.decay_dirty.mtx);\n\t}\n\treturn NULL;\n}\n\nTEST_BEGIN(test_alloc_free_purge_thds) {\n\ttest_data_t *test_data = init_test_data(0, 0);\n\tthd_t thds[4];\n\tfor (int i = 0; i < 4; i++) {\n\t\tthd_create(&thds[i], do_alloc_free_purge, test_data);\n\t}\n\tfor (int i = 0; i < 4; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_alloc_free_purge_thds);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/pack.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * Size class that is a divisor of the page size, ideally 4+ regions per run.\n */\n#if LG_PAGE <= 14\n#define SZ\t(ZU(1) << (LG_PAGE - 2))\n#else\n#define SZ\tZU(4096)\n#endif\n\n/*\n * Number of slabs to consume at high water mark.  Should be at least 2 so that\n * if mmap()ed memory grows downward, downward growth of mmap()ed memory is\n * tested.\n */\n#define NSLABS\t8\n\nstatic unsigned\nbinind_compute(void) {\n\tsize_t sz;\n\tunsigned nbins, i;\n\n\tsz = sizeof(nbins);\n\texpect_d_eq(mallctl(\"arenas.nbins\", (void *)&nbins, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\n\tfor (i = 0; i < nbins; i++) {\n\t\tsize_t mib[4];\n\t\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\t\tsize_t size;\n\n\t\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.size\", mib,\n\t\t    &miblen), 0, \"Unexpected mallctlnametomb failure\");\n\t\tmib[2] = (size_t)i;\n\n\t\tsz = sizeof(size);\n\t\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&size, &sz, NULL,\n\t\t    0), 0, \"Unexpected mallctlbymib failure\");\n\t\tif (size == SZ) {\n\t\t\treturn i;\n\t\t}\n\t}\n\n\ttest_fail(\"Unable to compute nregs_per_run\");\n\treturn 0;\n}\n\nstatic size_t\nnregs_per_run_compute(void) {\n\tuint32_t nregs;\n\tsize_t sz;\n\tunsigned binind = binind_compute();\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\n\texpect_d_eq(mallctlnametomib(\"arenas.bin.0.nregs\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomb failure\");\n\tmib[2] = (size_t)binind;\n\tsz = sizeof(nregs);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&nregs, &sz, NULL,\n\t    0), 0, \"Unexpected mallctlbymib failure\");\n\treturn nregs;\n}\n\nstatic unsigned\narenas_create_mallctl(void) {\n\tunsigned arena_ind;\n\tsize_t sz;\n\n\tsz = sizeof(arena_ind);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Error in arenas.create\");\n\n\treturn arena_ind;\n}\n\nstatic void\narena_reset_mallctl(unsigned 
arena_ind) {\n\tsize_t mib[3];\n\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\n\texpect_d_eq(mallctlnametomib(\"arena.0.reset\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)arena_ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nTEST_BEGIN(test_pack) {\n\tbool prof_enabled;\n\tsize_t sz = sizeof(prof_enabled);\n\tif (mallctl(\"opt.prof\", (void *)&prof_enabled, &sz, NULL, 0) == 0) {\n\t\ttest_skip_if(prof_enabled);\n\t}\n\n\tunsigned arena_ind = arenas_create_mallctl();\n\tsize_t nregs_per_run = nregs_per_run_compute();\n\tsize_t nregs = nregs_per_run * NSLABS;\n\tVARIABLE_ARRAY(void *, ptrs, nregs);\n\tsize_t i, j, offset;\n\n\t/* Fill matrix. */\n\tfor (i = offset = 0; i < NSLABS; i++) {\n\t\tfor (j = 0; j < nregs_per_run; j++) {\n\t\t\tvoid *p = mallocx(SZ, MALLOCX_ARENA(arena_ind) |\n\t\t\t    MALLOCX_TCACHE_NONE);\n\t\t\texpect_ptr_not_null(p,\n\t\t\t    \"Unexpected mallocx(%zu, MALLOCX_ARENA(%u) |\"\n\t\t\t    \" MALLOCX_TCACHE_NONE) failure, run=%zu, reg=%zu\",\n\t\t\t    SZ, arena_ind, i, j);\n\t\t\tptrs[(i * nregs_per_run) + j] = p;\n\t\t}\n\t}\n\n\t/*\n\t * Free all but one region of each run, but rotate which region is\n\t * preserved, so that subsequent allocations exercise the within-run\n\t * layout policy.\n\t */\n\toffset = 0;\n\tfor (i = offset = 0;\n\t    i < NSLABS;\n\t    i++, offset = (offset + 1) % nregs_per_run) {\n\t\tfor (j = 0; j < nregs_per_run; j++) {\n\t\t\tvoid *p = ptrs[(i * nregs_per_run) + j];\n\t\t\tif (offset == j) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tdallocx(p, MALLOCX_ARENA(arena_ind) |\n\t\t\t    MALLOCX_TCACHE_NONE);\n\t\t}\n\t}\n\n\t/*\n\t * Logically refill matrix, skipping preserved regions and verifying\n\t * that the matrix is unmodified.\n\t */\n\toffset = 0;\n\tfor (i = offset = 0;\n\t    i < NSLABS;\n\t    i++, offset = (offset + 1) % nregs_per_run) {\n\t\tfor (j = 0; j < nregs_per_run; j++) 
{\n\t\t\tvoid *p;\n\n\t\t\tif (offset == j) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tp = mallocx(SZ, MALLOCX_ARENA(arena_ind) |\n\t\t\t    MALLOCX_TCACHE_NONE);\n\t\t\texpect_ptr_eq(p, ptrs[(i * nregs_per_run) + j],\n\t\t\t    \"Unexpected refill discrepancy, run=%zu, reg=%zu\\n\",\n\t\t\t    i, j);\n\t\t}\n\t}\n\n\t/* Clean up. */\n\tarena_reset_mallctl(arena_ind);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_pack);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/pack.sh",
    "content": "#!/bin/sh\n\n# Immediately purge to minimize fragmentation.\nexport MALLOC_CONF=\"dirty_decay_ms:0,muzzy_decay_ms:0\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/pages.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_pages_huge) {\n\tsize_t alloc_size;\n\tbool commit;\n\tvoid *pages, *hugepage;\n\n\talloc_size = HUGEPAGE * 2 - PAGE;\n\tcommit = true;\n\tpages = pages_map(NULL, alloc_size, PAGE, &commit);\n\texpect_ptr_not_null(pages, \"Unexpected pages_map() error\");\n\n\tif (init_system_thp_mode == thp_mode_default) {\n\t    hugepage = (void *)(ALIGNMENT_CEILING((uintptr_t)pages, HUGEPAGE));\n\t    expect_b_ne(pages_huge(hugepage, HUGEPAGE), have_madvise_huge,\n\t        \"Unexpected pages_huge() result\");\n\t    expect_false(pages_nohuge(hugepage, HUGEPAGE),\n\t        \"Unexpected pages_nohuge() result\");\n\t}\n\n\tpages_unmap(pages, alloc_size);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_pages_huge);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/peak.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/peak.h\"\n\nTEST_BEGIN(test_peak) {\n\tpeak_t peak = PEAK_INITIALIZER;\n\texpect_u64_eq(0, peak_max(&peak),\n\t    \"Peak should be zero at initialization\");\n\tpeak_update(&peak, 100, 50);\n\texpect_u64_eq(50, peak_max(&peak),\n\t    \"Missed update\");\n\tpeak_update(&peak, 100, 100);\n\texpect_u64_eq(50, peak_max(&peak), \"Dallocs shouldn't change peak\");\n\tpeak_update(&peak, 100, 200);\n\texpect_u64_eq(50, peak_max(&peak), \"Dallocs shouldn't change peak\");\n\tpeak_update(&peak, 200, 200);\n\texpect_u64_eq(50, peak_max(&peak), \"Haven't reached peak again\");\n\tpeak_update(&peak, 300, 200);\n\texpect_u64_eq(100, peak_max(&peak), \"Missed an update.\");\n\tpeak_set_zero(&peak, 300, 200);\n\texpect_u64_eq(0, peak_max(&peak), \"No effect from zeroing\");\n\tpeak_update(&peak, 300, 300);\n\texpect_u64_eq(0, peak_max(&peak), \"Dalloc shouldn't change peak\");\n\tpeak_update(&peak, 400, 300);\n\texpect_u64_eq(0, peak_max(&peak), \"Should still be net negative\");\n\tpeak_update(&peak, 500, 300);\n\texpect_u64_eq(100, peak_max(&peak), \"Missed an update.\");\n\t/*\n\t * Above, we set to zero while a net allocator; let's try as a\n\t * net-deallocator.\n\t */\n\tpeak_set_zero(&peak, 600, 700);\n\texpect_u64_eq(0, peak_max(&peak), \"No effect from zeroing.\");\n\tpeak_update(&peak, 600, 800);\n\texpect_u64_eq(0, peak_max(&peak), \"Dalloc shouldn't change peak.\");\n\tpeak_update(&peak, 700, 800);\n\texpect_u64_eq(0, peak_max(&peak), \"Should still be net negative.\");\n\tpeak_update(&peak, 800, 800);\n\texpect_u64_eq(100, peak_max(&peak), \"Missed an update.\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_peak);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/ph.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/ph.h\"\n\ntypedef struct node_s node_t;\nph_structs(heap, node_t);\n\nstruct node_s {\n#define NODE_MAGIC 0x9823af7e\n\tuint32_t magic;\n\theap_link_t link;\n\tuint64_t key;\n};\n\nstatic int\nnode_cmp(const node_t *a, const node_t *b) {\n\tint ret;\n\n\tret = (a->key > b->key) - (a->key < b->key);\n\tif (ret == 0) {\n\t\t/*\n\t\t * Duplicates are not allowed in the heap, so force an\n\t\t * arbitrary ordering for non-identical items with equal keys.\n\t\t */\n\t\tret = (((uintptr_t)a) > ((uintptr_t)b))\n\t\t    - (((uintptr_t)a) < ((uintptr_t)b));\n\t}\n\treturn ret;\n}\n\nstatic int\nnode_cmp_magic(const node_t *a, const node_t *b) {\n\n\texpect_u32_eq(a->magic, NODE_MAGIC, \"Bad magic\");\n\texpect_u32_eq(b->magic, NODE_MAGIC, \"Bad magic\");\n\n\treturn node_cmp(a, b);\n}\n\nph_gen(static, heap, node_t, link, node_cmp_magic);\n\nstatic node_t *\nnode_next_get(const node_t *node) {\n\treturn phn_next_get((node_t *)node, offsetof(node_t, link));\n}\n\nstatic node_t *\nnode_prev_get(const node_t *node) {\n\treturn phn_prev_get((node_t *)node, offsetof(node_t, link));\n}\n\nstatic node_t *\nnode_lchild_get(const node_t *node) {\n\treturn phn_lchild_get((node_t *)node, offsetof(node_t, link));\n}\n\nstatic void\nnode_print(const node_t *node, unsigned depth) {\n\tunsigned i;\n\tnode_t *leftmost_child, *sibling;\n\n\tfor (i = 0; i < depth; i++) {\n\t\tmalloc_printf(\"\\t\");\n\t}\n\tmalloc_printf(\"%2\"FMTu64\"\\n\", node->key);\n\n\tleftmost_child = node_lchild_get(node);\n\tif (leftmost_child == NULL) {\n\t\treturn;\n\t}\n\tnode_print(leftmost_child, depth + 1);\n\n\tfor (sibling = node_next_get(leftmost_child); sibling !=\n\t    NULL; sibling = node_next_get(sibling)) {\n\t\tnode_print(sibling, depth + 1);\n\t}\n}\n\nstatic void\nheap_print(const heap_t *heap) {\n\tnode_t *auxelm;\n\n\tmalloc_printf(\"vvv heap %p vvv\\n\", heap);\n\tif (heap->ph.root == NULL) {\n\t\tgoto 
label_return;\n\t}\n\n\tnode_print(heap->ph.root, 0);\n\n\tfor (auxelm = node_next_get(heap->ph.root); auxelm != NULL;\n\t    auxelm = node_next_get(auxelm)) {\n\t\texpect_ptr_eq(node_next_get(node_prev_get(auxelm)), auxelm,\n\t\t    \"auxelm's prev doesn't link to auxelm\");\n\t\tnode_print(auxelm, 0);\n\t}\n\nlabel_return:\n\tmalloc_printf(\"^^^ heap %p ^^^\\n\", heap);\n}\n\nstatic unsigned\nnode_validate(const node_t *node, const node_t *parent) {\n\tunsigned nnodes = 1;\n\tnode_t *leftmost_child, *sibling;\n\n\tif (parent != NULL) {\n\t\texpect_d_ge(node_cmp_magic(node, parent), 0,\n\t\t    \"Child is less than parent\");\n\t}\n\n\tleftmost_child = node_lchild_get(node);\n\tif (leftmost_child == NULL) {\n\t\treturn nnodes;\n\t}\n\texpect_ptr_eq(node_prev_get(leftmost_child),\n\t    (void *)node, \"Leftmost child does not link to node\");\n\tnnodes += node_validate(leftmost_child, node);\n\n\tfor (sibling = node_next_get(leftmost_child); sibling !=\n\t    NULL; sibling = node_next_get(sibling)) {\n\t\texpect_ptr_eq(node_next_get(node_prev_get(sibling)), sibling,\n\t\t    \"sibling's prev doesn't link to sibling\");\n\t\tnnodes += node_validate(sibling, node);\n\t}\n\treturn nnodes;\n}\n\nstatic unsigned\nheap_validate(const heap_t *heap) {\n\tunsigned nnodes = 0;\n\tnode_t *auxelm;\n\n\tif (heap->ph.root == NULL) {\n\t\tgoto label_return;\n\t}\n\n\tnnodes += node_validate(heap->ph.root, NULL);\n\n\tfor (auxelm = node_next_get(heap->ph.root); auxelm != NULL;\n\t    auxelm = node_next_get(auxelm)) {\n\t\texpect_ptr_eq(node_next_get(node_prev_get(auxelm)), auxelm,\n\t\t    \"auxelm's prev doesn't link to auxelm\");\n\t\tnnodes += node_validate(auxelm, NULL);\n\t}\n\nlabel_return:\n\tif (false) {\n\t\theap_print(heap);\n\t}\n\treturn nnodes;\n}\n\nTEST_BEGIN(test_ph_empty) {\n\theap_t heap;\n\n\theap_new(&heap);\n\texpect_true(heap_empty(&heap), \"Heap should be empty\");\n\texpect_ptr_null(heap_first(&heap), \"Unexpected 
node\");\n\texpect_ptr_null(heap_any(&heap), \"Unexpected node\");\n}\nTEST_END\n\nstatic void\nnode_remove(heap_t *heap, node_t *node) {\n\theap_remove(heap, node);\n\n\tnode->magic = 0;\n}\n\nstatic node_t *\nnode_remove_first(heap_t *heap) {\n\tnode_t *node = heap_remove_first(heap);\n\tnode->magic = 0;\n\treturn node;\n}\n\nstatic node_t *\nnode_remove_any(heap_t *heap) {\n\tnode_t *node = heap_remove_any(heap);\n\tnode->magic = 0;\n\treturn node;\n}\n\nTEST_BEGIN(test_ph_random) {\n#define NNODES 25\n#define NBAGS 250\n#define SEED 42\n\tsfmt_t *sfmt;\n\tuint64_t bag[NNODES];\n\theap_t heap;\n\tnode_t nodes[NNODES];\n\tunsigned i, j, k;\n\n\tsfmt = init_gen_rand(SEED);\n\tfor (i = 0; i < NBAGS; i++) {\n\t\tswitch (i) {\n\t\tcase 0:\n\t\t\t/* Insert in order. */\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = j;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\t/* Insert in reverse order. */\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = NNODES - j - 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = gen_rand64_range(sfmt, NNODES);\n\t\t\t}\n\t\t}\n\n\t\tfor (j = 1; j <= NNODES; j++) {\n\t\t\t/* Initialize heap and nodes. */\n\t\t\theap_new(&heap);\n\t\t\texpect_u_eq(heap_validate(&heap), 0,\n\t\t\t    \"Incorrect node count\");\n\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\tnodes[k].magic = NODE_MAGIC;\n\t\t\t\tnodes[k].key = bag[k];\n\t\t\t}\n\n\t\t\t/* Insert nodes. */\n\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\theap_insert(&heap, &nodes[k]);\n\t\t\t\tif (i % 13 == 12) {\n\t\t\t\t\texpect_ptr_not_null(heap_any(&heap),\n\t\t\t\t\t    \"Heap should not be empty\");\n\t\t\t\t\t/* Trigger merging. */\n\t\t\t\t\texpect_ptr_not_null(heap_first(&heap),\n\t\t\t\t\t    \"Heap should not be empty\");\n\t\t\t\t}\n\t\t\t\texpect_u_eq(heap_validate(&heap), k + 1,\n\t\t\t\t    \"Incorrect node count\");\n\t\t\t}\n\n\t\t\texpect_false(heap_empty(&heap),\n\t\t\t    \"Heap should not be empty\");\n\n\t\t\t/* Remove nodes. 
*/\n\t\t\tswitch (i % 6) {\n\t\t\tcase 0:\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k,\n\t\t\t\t\t    \"Incorrect node count\");\n\t\t\t\t\tnode_remove(&heap, &nodes[k]);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k\n\t\t\t\t\t    - 1, \"Incorrect node count\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 1:\n\t\t\t\tfor (k = j; k > 0; k--) {\n\t\t\t\t\tnode_remove(&heap, &nodes[k-1]);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), k - 1,\n\t\t\t\t\t    \"Incorrect node count\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 2: {\n\t\t\t\tnode_t *prev = NULL;\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\tnode_t *node = node_remove_first(&heap);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k\n\t\t\t\t\t    - 1, \"Incorrect node count\");\n\t\t\t\t\tif (prev != NULL) {\n\t\t\t\t\t\texpect_d_ge(node_cmp(node,\n\t\t\t\t\t\t    prev), 0,\n\t\t\t\t\t\t    \"Bad removal order\");\n\t\t\t\t\t}\n\t\t\t\t\tprev = node;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} case 3: {\n\t\t\t\tnode_t *prev = NULL;\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\tnode_t *node = heap_first(&heap);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k,\n\t\t\t\t\t    \"Incorrect node count\");\n\t\t\t\t\tif (prev != NULL) {\n\t\t\t\t\t\texpect_d_ge(node_cmp(node,\n\t\t\t\t\t\t    prev), 0,\n\t\t\t\t\t\t    \"Bad removal order\");\n\t\t\t\t\t}\n\t\t\t\t\tnode_remove(&heap, node);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k\n\t\t\t\t\t    - 1, \"Incorrect node count\");\n\t\t\t\t\tprev = node;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} case 4: {\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\tnode_remove_any(&heap);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k\n\t\t\t\t\t    - 1, \"Incorrect node count\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} case 5: {\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\tnode_t *node = heap_any(&heap);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k,\n\t\t\t\t\t    \"Incorrect node count\");\n\t\t\t\t\tnode_remove(&heap, 
node);\n\t\t\t\t\texpect_u_eq(heap_validate(&heap), j - k\n\t\t\t\t\t    - 1, \"Incorrect node count\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} default:\n\t\t\t\tnot_reached();\n\t\t\t}\n\n\t\t\texpect_ptr_null(heap_first(&heap),\n\t\t\t    \"Heap should be empty\");\n\t\t\texpect_ptr_null(heap_any(&heap),\n\t\t\t    \"Heap should be empty\");\n\t\t\texpect_true(heap_empty(&heap), \"Heap should be empty\");\n\t\t}\n\t}\n\tfini_gen_rand(sfmt);\n#undef NNODES\n#undef SEED\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_ph_empty,\n\t    test_ph_random);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prng.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_prng_lg_range_u32) {\n\tuint32_t sa, sb;\n\tuint32_t ra, rb;\n\tunsigned lg_range;\n\n\tsa = 42;\n\tra = prng_lg_range_u32(&sa, 32);\n\tsa = 42;\n\trb = prng_lg_range_u32(&sa, 32);\n\texpect_u32_eq(ra, rb,\n\t    \"Repeated generation should produce repeated results\");\n\n\tsb = 42;\n\trb = prng_lg_range_u32(&sb, 32);\n\texpect_u32_eq(ra, rb,\n\t    \"Equivalent generation should produce equivalent results\");\n\n\tsa = 42;\n\tra = prng_lg_range_u32(&sa, 32);\n\trb = prng_lg_range_u32(&sa, 32);\n\texpect_u32_ne(ra, rb,\n\t    \"Full-width results must not immediately repeat\");\n\n\tsa = 42;\n\tra = prng_lg_range_u32(&sa, 32);\n\tfor (lg_range = 31; lg_range > 0; lg_range--) {\n\t\tsb = 42;\n\t\trb = prng_lg_range_u32(&sb, lg_range);\n\t\texpect_u32_eq((rb & (UINT32_C(0xffffffff) << lg_range)),\n\t\t    0, \"High order bits should be 0, lg_range=%u\", lg_range);\n\t\texpect_u32_eq(rb, (ra >> (32 - lg_range)),\n\t\t    \"Expected high order bits of full-width result, \"\n\t\t    \"lg_range=%u\", lg_range);\n\t}\n\n}\nTEST_END\n\nTEST_BEGIN(test_prng_lg_range_u64) {\n\tuint64_t sa, sb, ra, rb;\n\tunsigned lg_range;\n\n\tsa = 42;\n\tra = prng_lg_range_u64(&sa, 64);\n\tsa = 42;\n\trb = prng_lg_range_u64(&sa, 64);\n\texpect_u64_eq(ra, rb,\n\t    \"Repeated generation should produce repeated results\");\n\n\tsb = 42;\n\trb = prng_lg_range_u64(&sb, 64);\n\texpect_u64_eq(ra, rb,\n\t    \"Equivalent generation should produce equivalent results\");\n\n\tsa = 42;\n\tra = prng_lg_range_u64(&sa, 64);\n\trb = prng_lg_range_u64(&sa, 64);\n\texpect_u64_ne(ra, rb,\n\t    \"Full-width results must not immediately repeat\");\n\n\tsa = 42;\n\tra = prng_lg_range_u64(&sa, 64);\n\tfor (lg_range = 63; lg_range > 0; lg_range--) {\n\t\tsb = 42;\n\t\trb = prng_lg_range_u64(&sb, lg_range);\n\t\texpect_u64_eq((rb & (UINT64_C(0xffffffffffffffff) << lg_range)),\n\t\t    0, \"High order bits should be 0, lg_range=%u\", 
lg_range);\n\t\texpect_u64_eq(rb, (ra >> (64 - lg_range)),\n\t\t    \"Expected high order bits of full-width result, \"\n\t\t    \"lg_range=%u\", lg_range);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_prng_lg_range_zu) {\n\tsize_t sa, sb;\n\tsize_t ra, rb;\n\tunsigned lg_range;\n\n\tsa = 42;\n\tra = prng_lg_range_zu(&sa, ZU(1) << (3 + LG_SIZEOF_PTR));\n\tsa = 42;\n\trb = prng_lg_range_zu(&sa, ZU(1) << (3 + LG_SIZEOF_PTR));\n\texpect_zu_eq(ra, rb,\n\t    \"Repeated generation should produce repeated results\");\n\n\tsb = 42;\n\trb = prng_lg_range_zu(&sb, ZU(1) << (3 + LG_SIZEOF_PTR));\n\texpect_zu_eq(ra, rb,\n\t    \"Equivalent generation should produce equivalent results\");\n\n\tsa = 42;\n\tra = prng_lg_range_zu(&sa, ZU(1) << (3 + LG_SIZEOF_PTR));\n\trb = prng_lg_range_zu(&sa, ZU(1) << (3 + LG_SIZEOF_PTR));\n\texpect_zu_ne(ra, rb,\n\t    \"Full-width results must not immediately repeat\");\n\n\tsa = 42;\n\tra = prng_lg_range_zu(&sa, ZU(1) << (3 + LG_SIZEOF_PTR));\n\tfor (lg_range = (ZU(1) << (3 + LG_SIZEOF_PTR)) - 1; lg_range > 0;\n\t    lg_range--) {\n\t\tsb = 42;\n\t\trb = prng_lg_range_zu(&sb, lg_range);\n\t\texpect_zu_eq((rb & (SIZE_T_MAX << lg_range)),\n\t\t    0, \"High order bits should be 0, lg_range=%u\", lg_range);\n\t\texpect_zu_eq(rb, (ra >> ((ZU(1) << (3 + LG_SIZEOF_PTR)) -\n\t\t    lg_range)), \"Expected high order bits of full-width \"\n\t\t    \"result, lg_range=%u\", lg_range);\n\t}\n\n}\nTEST_END\n\nTEST_BEGIN(test_prng_range_u32) {\n\tuint32_t range;\n\n\tconst uint32_t max_range = 10000000;\n\tconst uint32_t range_step = 97;\n\tconst unsigned nreps = 10;\n\n\tfor (range = 2; range < max_range; range += range_step) {\n\t\tuint32_t s;\n\t\tunsigned rep;\n\n\t\ts = range;\n\t\tfor (rep = 0; rep < nreps; rep++) {\n\t\t\tuint32_t r = prng_range_u32(&s, range);\n\n\t\t\texpect_u32_lt(r, range, \"Out of range\");\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_prng_range_u64) {\n\tuint64_t range;\n\n\tconst uint64_t max_range = 10000000;\n\tconst uint64_t 
range_step = 97;\n\tconst unsigned nreps = 10;\n\n\tfor (range = 2; range < max_range; range += range_step) {\n\t\tuint64_t s;\n\t\tunsigned rep;\n\n\t\ts = range;\n\t\tfor (rep = 0; rep < nreps; rep++) {\n\t\t\tuint64_t r = prng_range_u64(&s, range);\n\n\t\t\texpect_u64_lt(r, range, \"Out of range\");\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_prng_range_zu) {\n\tsize_t range;\n\n\tconst size_t max_range = 10000000;\n\tconst size_t range_step = 97;\n\tconst unsigned nreps = 10;\n\n\n\tfor (range = 2; range < max_range; range += range_step) {\n\t\tsize_t s;\n\t\tunsigned rep;\n\n\t\ts = range;\n\t\tfor (rep = 0; rep < nreps; rep++) {\n\t\t\tsize_t r = prng_range_zu(&s, range);\n\n\t\t\texpect_zu_lt(r, range, \"Out of range\");\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_prng_lg_range_u32,\n\t    test_prng_lg_range_u64,\n\t    test_prng_lg_range_zu,\n\t    test_prng_range_u32,\n\t    test_prng_range_u64,\n\t    test_prng_range_zu);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_accum.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n\n#define NTHREADS\t\t4\n#define NALLOCS_PER_THREAD\t50\n#define DUMP_INTERVAL\t\t1\n#define BT_COUNT_CHECK_INTERVAL\t5\n\nstatic int\nprof_dump_open_file_intercept(const char *filename, int mode) {\n\tint fd;\n\n\tfd = open(\"/dev/null\", O_WRONLY);\n\tassert_d_ne(fd, -1, \"Unexpected open() failure\");\n\n\treturn fd;\n}\n\nstatic void *\nalloc_from_permuted_backtrace(unsigned thd_ind, unsigned iteration) {\n\treturn btalloc(1, thd_ind*NALLOCS_PER_THREAD + iteration);\n}\n\nstatic void *\nthd_start(void *varg) {\n\tunsigned thd_ind = *(unsigned *)varg;\n\tsize_t bt_count_prev, bt_count;\n\tunsigned i_prev, i;\n\n\ti_prev = 0;\n\tbt_count_prev = 0;\n\tfor (i = 0; i < NALLOCS_PER_THREAD; i++) {\n\t\tvoid *p = alloc_from_permuted_backtrace(thd_ind, i);\n\t\tdallocx(p, 0);\n\t\tif (i % DUMP_INTERVAL == 0) {\n\t\t\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, NULL, 0),\n\t\t\t    0, \"Unexpected error while dumping heap profile\");\n\t\t}\n\n\t\tif (i % BT_COUNT_CHECK_INTERVAL == 0 ||\n\t\t    i+1 == NALLOCS_PER_THREAD) {\n\t\t\tbt_count = prof_bt_count();\n\t\t\texpect_zu_le(bt_count_prev+(i-i_prev), bt_count,\n\t\t\t    \"Expected larger backtrace count increase\");\n\t\t\ti_prev = i;\n\t\t\tbt_count_prev = bt_count;\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_idump) {\n\tbool active;\n\tthd_t thds[NTHREADS];\n\tunsigned thd_args[NTHREADS];\n\tunsigned i;\n\n\ttest_skip_if(!config_prof);\n\n\tactive = true;\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, (void *)&active,\n\t    sizeof(active)), 0,\n\t    \"Unexpected mallctl failure while activating profiling\");\n\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_args[i] = i;\n\t\tthd_create(&thds[i], thd_start, (void *)&thd_args[i]);\n\t}\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], 
NULL);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_idump);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_accum.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_accum:true,prof_active:false,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_active.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_data.h\"\n\nstatic void\nmallctl_bool_get(const char *name, bool expected, const char *func, int line) {\n\tbool old;\n\tsize_t sz;\n\n\tsz = sizeof(old);\n\texpect_d_eq(mallctl(name, (void *)&old, &sz, NULL, 0), 0,\n\t    \"%s():%d: Unexpected mallctl failure reading %s\", func, line, name);\n\texpect_b_eq(old, expected, \"%s():%d: Unexpected %s value\", func, line,\n\t    name);\n}\n\nstatic void\nmallctl_bool_set(const char *name, bool old_expected, bool val_new,\n    const char *func, int line) {\n\tbool old;\n\tsize_t sz;\n\n\tsz = sizeof(old);\n\texpect_d_eq(mallctl(name, (void *)&old, &sz, (void *)&val_new,\n\t    sizeof(val_new)), 0,\n\t    \"%s():%d: Unexpected mallctl failure reading/writing %s\", func,\n\t    line, name);\n\texpect_b_eq(old, old_expected, \"%s():%d: Unexpected %s value\", func,\n\t    line, name);\n}\n\nstatic void\nmallctl_prof_active_get_impl(bool prof_active_old_expected, const char *func,\n    int line) {\n\tmallctl_bool_get(\"prof.active\", prof_active_old_expected, func, line);\n}\n#define mallctl_prof_active_get(a)\t\t\t\t\t\\\n\tmallctl_prof_active_get_impl(a, __func__, __LINE__)\n\nstatic void\nmallctl_prof_active_set_impl(bool prof_active_old_expected,\n    bool prof_active_new, const char *func, int line) {\n\tmallctl_bool_set(\"prof.active\", prof_active_old_expected,\n\t    prof_active_new, func, line);\n}\n#define mallctl_prof_active_set(a, b)\t\t\t\t\t\\\n\tmallctl_prof_active_set_impl(a, b, __func__, __LINE__)\n\nstatic void\nmallctl_thread_prof_active_get_impl(bool thread_prof_active_old_expected,\n    const char *func, int line) {\n\tmallctl_bool_get(\"thread.prof.active\", thread_prof_active_old_expected,\n\t    func, line);\n}\n#define mallctl_thread_prof_active_get(a)\t\t\t\t\\\n\tmallctl_thread_prof_active_get_impl(a, __func__, __LINE__)\n\nstatic void\nmallctl_thread_prof_active_set_impl(bool thread_prof_active_old_expected,\n    
bool thread_prof_active_new, const char *func, int line) {\n\tmallctl_bool_set(\"thread.prof.active\", thread_prof_active_old_expected,\n\t    thread_prof_active_new, func, line);\n}\n#define mallctl_thread_prof_active_set(a, b)\t\t\t\t\\\n\tmallctl_thread_prof_active_set_impl(a, b, __func__, __LINE__)\n\nstatic void\nprof_sampling_probe_impl(bool expect_sample, const char *func, int line) {\n\tvoid *p;\n\tsize_t expected_backtraces = expect_sample ? 1 : 0;\n\n\texpect_zu_eq(prof_bt_count(), 0, \"%s():%d: Expected 0 backtraces\", func,\n\t    line);\n\tp = mallocx(1, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\texpect_zu_eq(prof_bt_count(), expected_backtraces,\n\t    \"%s():%d: Unexpected backtrace count\", func, line);\n\tdallocx(p, 0);\n}\n#define prof_sampling_probe(a)\t\t\t\t\t\t\\\n\tprof_sampling_probe_impl(a, __func__, __LINE__)\n\nTEST_BEGIN(test_prof_active) {\n\ttest_skip_if(!config_prof);\n\n\tmallctl_prof_active_get(true);\n\tmallctl_thread_prof_active_get(false);\n\n\tmallctl_prof_active_set(true, true);\n\tmallctl_thread_prof_active_set(false, false);\n\t/* prof.active, !thread.prof.active. */\n\tprof_sampling_probe(false);\n\n\tmallctl_prof_active_set(true, false);\n\tmallctl_thread_prof_active_set(false, false);\n\t/* !prof.active, !thread.prof.active. */\n\tprof_sampling_probe(false);\n\n\tmallctl_prof_active_set(false, false);\n\tmallctl_thread_prof_active_set(false, true);\n\t/* !prof.active, thread.prof.active. */\n\tprof_sampling_probe(false);\n\n\tmallctl_prof_active_set(false, true);\n\tmallctl_thread_prof_active_set(true, true);\n\t/* prof.active, thread.prof.active. */\n\tprof_sampling_probe(true);\n\n\t/* Restore settings. */\n\tmallctl_prof_active_set(true, true);\n\tmallctl_thread_prof_active_set(true, false);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_prof_active);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_active.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,prof_thread_active_init:false,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_gdump.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_sys.h\"\n\nstatic bool did_prof_dump_open;\n\nstatic int\nprof_dump_open_file_intercept(const char *filename, int mode) {\n\tint fd;\n\n\tdid_prof_dump_open = true;\n\n\tfd = open(\"/dev/null\", O_WRONLY);\n\tassert_d_ne(fd, -1, \"Unexpected open() failure\");\n\n\treturn fd;\n}\n\nTEST_BEGIN(test_gdump) {\n\ttest_skip_if(opt_hpa);\n\tbool active, gdump, gdump_old;\n\tvoid *p, *q, *r, *s;\n\tsize_t sz;\n\n\ttest_skip_if(!config_prof);\n\n\tactive = true;\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, (void *)&active,\n\t    sizeof(active)), 0,\n\t    \"Unexpected mallctl failure while activating profiling\");\n\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\n\tdid_prof_dump_open = false;\n\tp = mallocx((1U << SC_LG_LARGE_MINCLASS), 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\texpect_true(did_prof_dump_open, \"Expected a profile dump\");\n\n\tdid_prof_dump_open = false;\n\tq = mallocx((1U << SC_LG_LARGE_MINCLASS), 0);\n\texpect_ptr_not_null(q, \"Unexpected mallocx() failure\");\n\texpect_true(did_prof_dump_open, \"Expected a profile dump\");\n\n\tgdump = false;\n\tsz = sizeof(gdump_old);\n\texpect_d_eq(mallctl(\"prof.gdump\", (void *)&gdump_old, &sz,\n\t    (void *)&gdump, sizeof(gdump)), 0,\n\t    \"Unexpected mallctl failure while disabling prof.gdump\");\n\tassert(gdump_old);\n\tdid_prof_dump_open = false;\n\tr = mallocx((1U << SC_LG_LARGE_MINCLASS), 0);\n\texpect_ptr_not_null(q, \"Unexpected mallocx() failure\");\n\texpect_false(did_prof_dump_open, \"Unexpected profile dump\");\n\n\tgdump = true;\n\tsz = sizeof(gdump_old);\n\texpect_d_eq(mallctl(\"prof.gdump\", (void *)&gdump_old, &sz,\n\t    (void *)&gdump, sizeof(gdump)), 0,\n\t    \"Unexpected mallctl failure while enabling prof.gdump\");\n\tassert(!gdump_old);\n\tdid_prof_dump_open = false;\n\ts = mallocx((1U << SC_LG_LARGE_MINCLASS), 0);\n\texpect_ptr_not_null(q, \"Unexpected 
mallocx() failure\");\n\texpect_true(did_prof_dump_open, \"Expected a profile dump\");\n\n\tdallocx(p, 0);\n\tdallocx(q, 0);\n\tdallocx(r, 0);\n\tdallocx(s, 0);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_gdump);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_gdump.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:false,prof_gdump:true\"\nfi\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_hook.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nconst char *dump_filename = \"/dev/null\";\n\nprof_backtrace_hook_t default_hook;\n\nbool mock_bt_hook_called = false;\nbool mock_dump_hook_called = false;\n\nvoid\nmock_bt_hook(void **vec, unsigned *len, unsigned max_len) {\n\t*len = max_len;\n\tfor (unsigned i = 0; i < max_len; ++i) {\n\t\tvec[i] = (void *)((uintptr_t)i);\n\t}\n\tmock_bt_hook_called = true;\n}\n\nvoid\nmock_bt_augmenting_hook(void **vec, unsigned *len, unsigned max_len) {\n\tdefault_hook(vec, len, max_len);\n\texpect_u_gt(*len, 0, \"Default backtrace hook returned empty backtrace\");\n\texpect_u_lt(*len, max_len,\n\t    \"Default backtrace hook returned too large backtrace\");\n\n\t/* Add a separator between default frames and augmented */\n\tvec[*len] = (void *)0x030303030;\n\t(*len)++;\n\n\t/* Add more stack frames */\n\tfor (unsigned i = 0; i < 3; ++i) {\n\t\tif (*len == max_len) {\n\t\t\tbreak;\n\t\t}\n\t\tvec[*len] = (void *)((uintptr_t)i);\n\t\t(*len)++;\n\t}\n\n\n\tmock_bt_hook_called = true;\n}\n\nvoid\nmock_dump_hook(const char *filename) {\n\tmock_dump_hook_called = true;\n\texpect_str_eq(filename, dump_filename,\n\t    \"Incorrect file name passed to the dump hook\");\n}\n\nTEST_BEGIN(test_prof_backtrace_hook_replace) {\n\n\ttest_skip_if(!config_prof);\n\n\tmock_bt_hook_called = false;\n\n\tvoid *p0 = mallocx(1, 0);\n\tassert_ptr_not_null(p0, \"Failed to allocate\");\n\n\texpect_false(mock_bt_hook_called, \"Called mock hook before it's set\");\n\n\tprof_backtrace_hook_t null_hook = NULL;\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_backtrace\",\n\t    NULL, 0, (void *)&null_hook,  sizeof(null_hook)),\n\t\tEINVAL, \"Incorrectly allowed NULL backtrace hook\");\n\n\tsize_t default_hook_sz = sizeof(prof_backtrace_hook_t);\n\tprof_backtrace_hook_t hook = &mock_bt_hook;\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_backtrace\",\n\t    (void *)&default_hook, &default_hook_sz, (void *)&hook,\n\t    sizeof(hook)), 0, \"Unexpected 
mallctl failure setting hook\");\n\n\tvoid *p1 = mallocx(1, 0);\n\tassert_ptr_not_null(p1, \"Failed to allocate\");\n\n\texpect_true(mock_bt_hook_called, \"Didn't call mock hook\");\n\n\tprof_backtrace_hook_t current_hook;\n\tsize_t current_hook_sz = sizeof(prof_backtrace_hook_t);\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_backtrace\",\n\t    (void *)&current_hook, &current_hook_sz, (void *)&default_hook,\n\t    sizeof(default_hook)), 0,\n\t    \"Unexpected mallctl failure resetting hook to default\");\n\n\texpect_ptr_eq(current_hook, hook,\n\t    \"Hook returned by mallctl is not equal to mock hook\");\n\n\tdallocx(p1, 0);\n\tdallocx(p0, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_prof_backtrace_hook_augment) {\n\n\ttest_skip_if(!config_prof);\n\n\tmock_bt_hook_called = false;\n\n\tvoid *p0 = mallocx(1, 0);\n\tassert_ptr_not_null(p0, \"Failed to allocate\");\n\n\texpect_false(mock_bt_hook_called, \"Called mock hook before it's set\");\n\n\tsize_t default_hook_sz = sizeof(prof_backtrace_hook_t);\n\tprof_backtrace_hook_t hook = &mock_bt_augmenting_hook;\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_backtrace\",\n\t    (void *)&default_hook, &default_hook_sz, (void *)&hook,\n\t    sizeof(hook)), 0, \"Unexpected mallctl failure setting hook\");\n\n\tvoid *p1 = mallocx(1, 0);\n\tassert_ptr_not_null(p1, \"Failed to allocate\");\n\n\texpect_true(mock_bt_hook_called, \"Didn't call mock hook\");\n\n\tprof_backtrace_hook_t current_hook;\n\tsize_t current_hook_sz = sizeof(prof_backtrace_hook_t);\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_backtrace\",\n\t    (void *)&current_hook, &current_hook_sz, (void *)&default_hook,\n\t    sizeof(default_hook)), 0,\n\t    \"Unexpected mallctl failure resetting hook to default\");\n\n\texpect_ptr_eq(current_hook, hook,\n\t    \"Hook returned by mallctl is not equal to mock hook\");\n\n\tdallocx(p1, 0);\n\tdallocx(p0, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_prof_dump_hook) {\n\n\ttest_skip_if(!config_prof);\n\n\tmock_dump_hook_called = 
false;\n\n\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, (void *)&dump_filename,\n\t    sizeof(dump_filename)), 0, \"Failed to dump heap profile\");\n\n\texpect_false(mock_dump_hook_called, \"Called dump hook before it's set\");\n\n\tsize_t default_hook_sz = sizeof(prof_dump_hook_t);\n\tprof_dump_hook_t hook = &mock_dump_hook;\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_dump\",\n\t    (void *)&default_hook, &default_hook_sz, (void *)&hook,\n\t    sizeof(hook)), 0, \"Unexpected mallctl failure setting hook\");\n\n\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, (void *)&dump_filename,\n\t    sizeof(dump_filename)), 0, \"Failed to dump heap profile\");\n\n\texpect_true(mock_dump_hook_called, \"Didn't call mock hook\");\n\n\tprof_dump_hook_t current_hook;\n\tsize_t current_hook_sz = sizeof(prof_dump_hook_t);\n\texpect_d_eq(mallctl(\"experimental.hooks.prof_dump\",\n\t    (void *)&current_hook, &current_hook_sz, (void *)&default_hook,\n\t    sizeof(default_hook)), 0,\n\t    \"Unexpected mallctl failure resetting hook to default\");\n\n\texpect_ptr_eq(current_hook, hook,\n\t    \"Hook returned by mallctl is not equal to mock hook\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_prof_backtrace_hook_replace,\n\t    test_prof_backtrace_hook_augment,\n\t    test_prof_dump_hook);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_hook.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0\"\nfi\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_idump.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_sys.h\"\n\n#define TEST_PREFIX \"test_prefix\"\n\nstatic bool did_prof_dump_open;\n\nstatic int\nprof_dump_open_file_intercept(const char *filename, int mode) {\n\tint fd;\n\n\tdid_prof_dump_open = true;\n\n\tconst char filename_prefix[] = TEST_PREFIX \".\";\n\texpect_d_eq(strncmp(filename_prefix, filename, sizeof(filename_prefix)\n\t    - 1), 0, \"Dump file name should start with \\\"\" TEST_PREFIX \".\\\"\");\n\n\tfd = open(\"/dev/null\", O_WRONLY);\n\tassert_d_ne(fd, -1, \"Unexpected open() failure\");\n\n\treturn fd;\n}\n\nTEST_BEGIN(test_idump) {\n\tbool active;\n\tvoid *p;\n\n\tconst char *test_prefix = TEST_PREFIX;\n\n\ttest_skip_if(!config_prof);\n\n\tactive = true;\n\n\texpect_d_eq(mallctl(\"prof.prefix\", NULL, NULL, (void *)&test_prefix,\n\t    sizeof(test_prefix)), 0,\n\t    \"Unexpected mallctl failure while overwriting dump prefix\");\n\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, (void *)&active,\n\t    sizeof(active)), 0,\n\t    \"Unexpected mallctl failure while activating profiling\");\n\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\n\tdid_prof_dump_open = false;\n\tp = mallocx(1, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\tdallocx(p, 0);\n\texpect_true(did_prof_dump_open, \"Expected a profile dump\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_idump);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_idump.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"tcache:false\"\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"${MALLOC_CONF},prof:true,prof_accum:true,prof_active:false,lg_prof_sample:0,lg_prof_interval:0\"\nfi\n\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_log.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"jemalloc/internal/prof_log.h\"\n\n#define N_PARAM 100\n#define N_THREADS 10\n\nstatic void expect_rep() {\n\texpect_b_eq(prof_log_rep_check(), false, \"Rep check failed\");\n}\n\nstatic void expect_log_empty() {\n\texpect_zu_eq(prof_log_bt_count(), 0,\n\t    \"The log has backtraces; it isn't empty\");\n\texpect_zu_eq(prof_log_thr_count(), 0,\n\t    \"The log has threads; it isn't empty\");\n\texpect_zu_eq(prof_log_alloc_count(), 0,\n\t    \"The log has allocations; it isn't empty\");\n}\n\nvoid *buf[N_PARAM];\n\nstatic void f() {\n\tint i;\n\tfor (i = 0; i < N_PARAM; i++) {\n\t\tbuf[i] = malloc(100);\n\t}\n\tfor (i = 0; i < N_PARAM; i++) {\n\t\tfree(buf[i]);\n\t}\n}\n\nTEST_BEGIN(test_prof_log_many_logs) {\n\tint i;\n\n\ttest_skip_if(!config_prof);\n\n\tfor (i = 0; i < N_PARAM; i++) {\n\t\texpect_b_eq(prof_log_is_logging(), false,\n\t\t    \"Logging shouldn't have started yet\");\n\t\texpect_d_eq(mallctl(\"prof.log_start\", NULL, NULL, NULL, 0), 0,\n\t\t    \"Unexpected mallctl failure when starting logging\");\n\t\texpect_b_eq(prof_log_is_logging(), true,\n\t\t    \"Logging should be started by now\");\n\t\texpect_log_empty();\n\t\texpect_rep();\n\t\tf();\n\t\texpect_zu_eq(prof_log_thr_count(), 1, \"Wrong thread count\");\n\t\texpect_rep();\n\t\texpect_b_eq(prof_log_is_logging(), true,\n\t\t    \"Logging should still be on\");\n\t\texpect_d_eq(mallctl(\"prof.log_stop\", NULL, NULL, NULL, 0), 0,\n\t\t    \"Unexpected mallctl failure when stopping logging\");\n\t\texpect_b_eq(prof_log_is_logging(), false,\n\t\t    \"Logging should have turned off\");\n\t}\n}\nTEST_END\n\nthd_t thr_buf[N_THREADS];\n\nstatic void *f_thread(void *unused) {\n\tint i;\n\tfor (i = 0; i < N_PARAM; i++) {\n\t\tvoid *p = malloc(100);\n\t\tmemset(p, 100, 1);\n\t\tfree(p);\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_prof_log_many_threads) {\n\n\ttest_skip_if(!config_prof);\n\n\tint i;\n\texpect_d_eq(mallctl(\"prof.log_start\", 
NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl failure when starting logging\");\n\tfor (i = 0; i < N_THREADS; i++) {\n\t\tthd_create(&thr_buf[i], &f_thread, NULL);\n\t}\n\n\tfor (i = 0; i < N_THREADS; i++) {\n\t\tthd_join(thr_buf[i], NULL);\n\t}\n\texpect_zu_eq(prof_log_thr_count(), N_THREADS,\n\t    \"Wrong number of thread entries\");\n\texpect_rep();\n\texpect_d_eq(mallctl(\"prof.log_stop\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl failure when stopping logging\");\n}\nTEST_END\n\nstatic void f3() {\n\tvoid *p = malloc(100);\n\tfree(p);\n}\n\nstatic void f1() {\n\tvoid *p = malloc(100);\n\tf3();\n\tfree(p);\n}\n\nstatic void f2() {\n\tvoid *p = malloc(100);\n\tfree(p);\n}\n\nTEST_BEGIN(test_prof_log_many_traces) {\n\n\ttest_skip_if(!config_prof);\n\n\texpect_d_eq(mallctl(\"prof.log_start\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl failure when starting logging\");\n\tint i;\n\texpect_rep();\n\texpect_log_empty();\n\tfor (i = 0; i < N_PARAM; i++) {\n\t\texpect_rep();\n\t\tf1();\n\t\texpect_rep();\n\t\tf2();\n\t\texpect_rep();\n\t\tf3();\n\t\texpect_rep();\n\t}\n\t/*\n\t * There should be 8 total backtraces: two for malloc/free in f1(), two\n\t * for malloc/free in f2(), two for malloc/free in f3(), and then two\n\t * for malloc/free in f1()'s call to f3().  However compiler\n\t * optimizations such as loop unrolling might generate more call sites.\n\t * So >= 8 traces are expected.\n\t */\n\texpect_zu_ge(prof_log_bt_count(), 8,\n\t    \"Expect at least 8 backtraces given sample workload\");\n\texpect_d_eq(mallctl(\"prof.log_stop\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl failure when stopping logging\");\n}\nTEST_END\n\nint\nmain(void) {\n\tif (config_prof) {\n\t\tprof_log_dummy_set(true);\n\t}\n\treturn test_no_reentrancy(\n\t    test_prof_log_many_logs,\n\t    test_prof_log_many_traces,\n\t    test_prof_log_many_threads);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_log.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_mdump.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_sys.h\"\n\nstatic const char *test_filename = \"test_filename\";\nstatic bool did_prof_dump_open;\n\nstatic int\nprof_dump_open_file_intercept(const char *filename, int mode) {\n\tint fd;\n\n\tdid_prof_dump_open = true;\n\n\t/*\n\t * Stronger than a strcmp() - verifying that we internally directly use\n\t * the caller supplied char pointer.\n\t */\n\texpect_ptr_eq(filename, test_filename,\n\t    \"Dump file name should be \\\"%s\\\"\", test_filename);\n\n\tfd = open(\"/dev/null\", O_WRONLY);\n\tassert_d_ne(fd, -1, \"Unexpected open() failure\");\n\n\treturn fd;\n}\n\nTEST_BEGIN(test_mdump_normal) {\n\ttest_skip_if(!config_prof);\n\n\tprof_dump_open_file_t *open_file_orig = prof_dump_open_file;\n\n\tvoid *p = mallocx(1, 0);\n\tassert_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\tdid_prof_dump_open = false;\n\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, (void *)&test_filename,\n\t    sizeof(test_filename)), 0,\n\t    \"Unexpected mallctl failure while dumping\");\n\texpect_true(did_prof_dump_open, \"Expected a profile dump\");\n\n\tdallocx(p, 0);\n\n\tprof_dump_open_file = open_file_orig;\n}\nTEST_END\n\nstatic int\nprof_dump_open_file_error(const char *filename, int mode) {\n\treturn -1;\n}\n\n/*\n * In the context of test_mdump_output_error, prof_dump_write_file_count is the\n * total number of times prof_dump_write_file_error() is expected to be called.\n * In the context of test_mdump_maps_error, prof_dump_write_file_count is the\n * total number of times prof_dump_write_file_error() is expected to be called\n * starting from the one that contains an 'M' (beginning the \"MAPPED_LIBRARIES\"\n * header).\n */\nstatic int prof_dump_write_file_count;\n\nstatic ssize_t\nprof_dump_write_file_error(int fd, const void *s, size_t len) {\n\t--prof_dump_write_file_count;\n\n\texpect_d_ge(prof_dump_write_file_count, 
0,\n\t    \"Write is called after error occurs\");\n\n\tif (prof_dump_write_file_count == 0) {\n\t\treturn -1;\n\t} else {\n\t\t/*\n\t\t * Any non-negative number indicates success, and for\n\t\t * simplicity we just use 0.  When prof_dump_write_file_count\n\t\t * is positive, it means that we haven't reached the write that\n\t\t * we want to fail; when prof_dump_write_file_count is\n\t\t * negative, it means that we've already violated the\n\t\t * expect_d_ge(prof_dump_write_file_count, 0) statement above,\n\t\t * but instead of aborting, we continue the rest of the test,\n\t\t * and we indicate that all the writes after the failed write\n\t\t * are successful.\n\t\t */\n\t\treturn 0;\n\t}\n}\n\nstatic void\nexpect_write_failure(int count) {\n\tprof_dump_write_file_count = count;\n\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, (void *)&test_filename,\n\t    sizeof(test_filename)), EFAULT, \"Dump should err\");\n\texpect_d_eq(prof_dump_write_file_count, 0,\n\t    \"Dumping stopped after a wrong number of writes\");\n}\n\nTEST_BEGIN(test_mdump_output_error) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_debug);\n\n\tprof_dump_open_file_t *open_file_orig = prof_dump_open_file;\n\tprof_dump_write_file_t *write_file_orig = prof_dump_write_file;\n\n\tprof_dump_write_file = prof_dump_write_file_error;\n\n\tvoid *p = mallocx(1, 0);\n\tassert_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\t/*\n\t * When opening the dump file fails, there shouldn't be any write, and\n\t * mallctl() should return failure.\n\t */\n\tprof_dump_open_file = prof_dump_open_file_error;\n\texpect_write_failure(0);\n\n\t/*\n\t * When the n-th write fails, there shouldn't be any more write, and\n\t * mallctl() should return failure.\n\t */\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\texpect_write_failure(1); /* First write fails. */\n\texpect_write_failure(2); /* Second write fails. 
*/\n\n\tdallocx(p, 0);\n\n\tprof_dump_open_file = open_file_orig;\n\tprof_dump_write_file = write_file_orig;\n}\nTEST_END\n\nstatic int\nprof_dump_open_maps_error() {\n\treturn -1;\n}\n\nstatic bool started_piping_maps_file;\n\nstatic ssize_t\nprof_dump_write_maps_file_error(int fd, const void *s, size_t len) {\n\t/* The main dump doesn't contain any capital 'M'. */\n\tif (!started_piping_maps_file && strchr(s, 'M') != NULL) {\n\t\tstarted_piping_maps_file = true;\n\t}\n\n\tif (started_piping_maps_file) {\n\t\treturn prof_dump_write_file_error(fd, s, len);\n\t} else {\n\t\t/* Return success when we haven't started piping maps. */\n\t\treturn 0;\n\t}\n}\n\nstatic void\nexpect_maps_write_failure(int count) {\n\tint mfd = prof_dump_open_maps();\n\tif (mfd == -1) {\n\t\t/* No need to continue if we just can't find the maps file. */\n\t\treturn;\n\t}\n\tclose(mfd);\n\tstarted_piping_maps_file = false;\n\texpect_write_failure(count);\n\texpect_true(started_piping_maps_file, \"Should start piping maps\");\n}\n\nTEST_BEGIN(test_mdump_maps_error) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_debug);\n\n\tprof_dump_open_file_t *open_file_orig = prof_dump_open_file;\n\tprof_dump_write_file_t *write_file_orig = prof_dump_write_file;\n\tprof_dump_open_maps_t *open_maps_orig = prof_dump_open_maps;\n\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\tprof_dump_write_file = prof_dump_write_maps_file_error;\n\n\tvoid *p = mallocx(1, 0);\n\tassert_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\t/*\n\t * When opening the maps file fails, there shouldn't be any maps write,\n\t * and mallctl() should return success.\n\t */\n\tprof_dump_open_maps = prof_dump_open_maps_error;\n\tstarted_piping_maps_file = false;\n\tprof_dump_write_file_count = 0;\n\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, (void *)&test_filename,\n\t    sizeof(test_filename)), 0,\n\t    \"mallctl should not fail in case of maps file opening 
failure\");\n\texpect_false(started_piping_maps_file, \"Shouldn't start piping maps\");\n\texpect_d_eq(prof_dump_write_file_count, 0,\n\t    \"Dumping stopped after a wrong number of writes\");\n\n\t/*\n\t * When the n-th maps write fails (given that we are able to find the\n\t * maps file), there shouldn't be any more maps write, and mallctl()\n\t * should return failure.\n\t */\n\tprof_dump_open_maps = open_maps_orig;\n\texpect_maps_write_failure(1); /* First write fails. */\n\texpect_maps_write_failure(2); /* Second write fails. */\n\n\tdallocx(p, 0);\n\n\tprof_dump_open_file = open_file_orig;\n\tprof_dump_write_file = write_file_orig;\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_mdump_normal,\n\t    test_mdump_output_error,\n\t    test_mdump_maps_error);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_mdump.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,lg_prof_sample:0\"\nfi\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_recent.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_recent.h\"\n\n/* As specified in the shell script */\n#define OPT_ALLOC_MAX 3\n\n/* Invariant before and after every test (when config_prof is on) */\nstatic void\nconfirm_prof_setup() {\n\t/* Options */\n\tassert_true(opt_prof, \"opt_prof not on\");\n\tassert_true(opt_prof_active, \"opt_prof_active not on\");\n\tassert_zd_eq(opt_prof_recent_alloc_max, OPT_ALLOC_MAX,\n\t    \"opt_prof_recent_alloc_max not set correctly\");\n\n\t/* Dynamics */\n\tassert_true(prof_active_state, \"prof_active not on\");\n\tassert_zd_eq(prof_recent_alloc_max_ctl_read(), OPT_ALLOC_MAX,\n\t    \"prof_recent_alloc_max not set correctly\");\n}\n\nTEST_BEGIN(test_confirm_setup) {\n\ttest_skip_if(!config_prof);\n\tconfirm_prof_setup();\n}\nTEST_END\n\nTEST_BEGIN(test_prof_recent_off) {\n\ttest_skip_if(config_prof);\n\n\tconst ssize_t past_ref = 0, future_ref = 0;\n\tconst size_t len_ref = sizeof(ssize_t);\n\n\tssize_t past = past_ref, future = future_ref;\n\tsize_t len = len_ref;\n\n#define ASSERT_SHOULD_FAIL(opt, a, b, c, d) do {\t\t\t\\\n\tassert_d_eq(mallctl(\"experimental.prof_recent.\" opt, a, b, c,\t\\\n\t    d), ENOENT, \"Should return ENOENT when config_prof is off\");\\\n\tassert_zd_eq(past, past_ref, \"output was touched\");\t\t\\\n\tassert_zu_eq(len, len_ref, \"output length was touched\");\t\\\n\tassert_zd_eq(future, future_ref, \"input was touched\");\t\t\\\n} while (0)\n\n\tASSERT_SHOULD_FAIL(\"alloc_max\", NULL, NULL, NULL, 0);\n\tASSERT_SHOULD_FAIL(\"alloc_max\", &past, &len, NULL, 0);\n\tASSERT_SHOULD_FAIL(\"alloc_max\", NULL, NULL, &future, len);\n\tASSERT_SHOULD_FAIL(\"alloc_max\", &past, &len, &future, len);\n\n#undef ASSERT_SHOULD_FAIL\n}\nTEST_END\n\nTEST_BEGIN(test_prof_recent_on) {\n\ttest_skip_if(!config_prof);\n\n\tssize_t past, future;\n\tsize_t len = sizeof(ssize_t);\n\n\tconfirm_prof_setup();\n\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, 
NULL, 0), 0, \"no-op mallctl should be allowed\");\n\tconfirm_prof_setup();\n\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    &past, &len, NULL, 0), 0, \"Read error\");\n\texpect_zd_eq(past, OPT_ALLOC_MAX, \"Wrong read result\");\n\tfuture = OPT_ALLOC_MAX + 1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, len), 0, \"Write error\");\n\tfuture = -1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    &past, &len, &future, len), 0, \"Read/write error\");\n\texpect_zd_eq(past, OPT_ALLOC_MAX + 1, \"Wrong read result\");\n\tfuture = -2;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    &past, &len, &future, len), EINVAL,\n\t    \"Invalid write should return EINVAL\");\n\texpect_zd_eq(past, OPT_ALLOC_MAX + 1,\n\t    \"Output should not be touched given invalid write\");\n\tfuture = OPT_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    &past, &len, &future, len), 0, \"Read/write error\");\n\texpect_zd_eq(past, -1, \"Wrong read result\");\n\tfuture = OPT_ALLOC_MAX + 2;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    &past, &len, &future, len * 2), EINVAL,\n\t    \"Invalid write should return EINVAL\");\n\texpect_zd_eq(past, -1,\n\t    \"Output should not be touched given invalid write\");\n\n\tconfirm_prof_setup();\n}\nTEST_END\n\n/* Reproducible sequence of request sizes */\n#define NTH_REQ_SIZE(n) ((n) * 97 + 101)\n\nstatic void\nconfirm_malloc(void *p) {\n\tassert_ptr_not_null(p, \"malloc failed unexpectedly\");\n\tedata_t *e = emap_edata_lookup(TSDN_NULL, &arena_emap_global, p);\n\tassert_ptr_not_null(e, \"NULL edata for living pointer\");\n\tprof_recent_t *n = edata_prof_recent_alloc_get_no_lock_test(e);\n\tassert_ptr_not_null(n, \"Record in edata should not be NULL\");\n\texpect_ptr_not_null(n->alloc_tctx,\n\t    \"alloc_tctx in record should not be NULL\");\n\texpect_ptr_eq(e, 
prof_recent_alloc_edata_get_no_lock_test(n),\n\t    \"edata pointer in record is not correct\");\n\texpect_ptr_null(n->dalloc_tctx, \"dalloc_tctx in record should be NULL\");\n}\n\nstatic void\nconfirm_record_size(prof_recent_t *n, unsigned kth) {\n\texpect_zu_eq(n->size, NTH_REQ_SIZE(kth),\n\t    \"Recorded allocation size is wrong\");\n}\n\nstatic void\nconfirm_record_living(prof_recent_t *n) {\n\texpect_ptr_not_null(n->alloc_tctx,\n\t    \"alloc_tctx in record should not be NULL\");\n\tedata_t *edata = prof_recent_alloc_edata_get_no_lock_test(n);\n\tassert_ptr_not_null(edata,\n\t    \"Recorded edata should not be NULL for living pointer\");\n\texpect_ptr_eq(n, edata_prof_recent_alloc_get_no_lock_test(edata),\n\t    \"Record in edata is not correct\");\n\texpect_ptr_null(n->dalloc_tctx, \"dalloc_tctx in record should be NULL\");\n}\n\nstatic void\nconfirm_record_released(prof_recent_t *n) {\n\texpect_ptr_not_null(n->alloc_tctx,\n\t    \"alloc_tctx in record should not be NULL\");\n\texpect_ptr_null(prof_recent_alloc_edata_get_no_lock_test(n),\n\t    \"Recorded edata should be NULL for released pointer\");\n\texpect_ptr_not_null(n->dalloc_tctx,\n\t    \"dalloc_tctx in record should not be NULL for released pointer\");\n}\n\nTEST_BEGIN(test_prof_recent_alloc) {\n\ttest_skip_if(!config_prof);\n\n\tbool b;\n\tunsigned i, c;\n\tsize_t req_size;\n\tvoid *p;\n\tprof_recent_t *n;\n\tssize_t future;\n\n\tconfirm_prof_setup();\n\n\t/*\n\t * First batch of 2 * OPT_ALLOC_MAX allocations.  
After the\n\t * (OPT_ALLOC_MAX - 1)'th allocation the recorded allocations should\n\t * always be the last OPT_ALLOC_MAX allocations coming from here.\n\t */\n\tfor (i = 0; i < 2 * OPT_ALLOC_MAX; ++i) {\n\t\treq_size = NTH_REQ_SIZE(i);\n\t\tp = malloc(req_size);\n\t\tconfirm_malloc(p);\n\t\tif (i < OPT_ALLOC_MAX - 1) {\n\t\t\tassert_false(ql_empty(&prof_recent_alloc_list),\n\t\t\t    \"Empty recent allocation\");\n\t\t\tfree(p);\n\t\t\t/*\n\t\t\t * The recorded allocations may still include some\n\t\t\t * other allocations before the test run started,\n\t\t\t * so keep allocating without checking anything.\n\t\t\t */\n\t\t\tcontinue;\n\t\t}\n\t\tc = 0;\n\t\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t\t++c;\n\t\t\tconfirm_record_size(n, i + c - OPT_ALLOC_MAX);\n\t\t\tif (c == OPT_ALLOC_MAX) {\n\t\t\t\tconfirm_record_living(n);\n\t\t\t} else {\n\t\t\t\tconfirm_record_released(n);\n\t\t\t}\n\t\t}\n\t\tassert_u_eq(c, OPT_ALLOC_MAX,\n\t\t    \"Incorrect total number of allocations\");\n\t\tfree(p);\n\t}\n\n\tconfirm_prof_setup();\n\n\tb = false;\n\tassert_d_eq(mallctl(\"prof.active\", NULL, NULL, &b, sizeof(bool)), 0,\n\t    \"mallctl for turning off prof_active failed\");\n\n\t/*\n\t * Second batch of OPT_ALLOC_MAX allocations.  Since prof_active is\n\t * turned off, this batch shouldn't be recorded.\n\t */\n\tfor (; i < 3 * OPT_ALLOC_MAX; ++i) {\n\t\treq_size = NTH_REQ_SIZE(i);\n\t\tp = malloc(req_size);\n\t\tassert_ptr_not_null(p, \"malloc failed unexpectedly\");\n\t\tc = 0;\n\t\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t\tconfirm_record_size(n, c + OPT_ALLOC_MAX);\n\t\t\tconfirm_record_released(n);\n\t\t\t++c;\n\t\t}\n\t\tassert_u_eq(c, OPT_ALLOC_MAX,\n\t\t    \"Incorrect total number of allocations\");\n\t\tfree(p);\n\t}\n\n\tb = true;\n\tassert_d_eq(mallctl(\"prof.active\", NULL, NULL, &b, sizeof(bool)), 0,\n\t    \"mallctl for turning on prof_active failed\");\n\n\tconfirm_prof_setup();\n\n\t/*\n\t * Third batch of OPT_ALLOC_MAX allocations.  
Since prof_active is\n\t * turned back on, they should be recorded, and in the list of recorded\n\t * allocations they should follow the first batch rather than the\n\t * second batch.\n\t */\n\tfor (; i < 4 * OPT_ALLOC_MAX; ++i) {\n\t\treq_size = NTH_REQ_SIZE(i);\n\t\tp = malloc(req_size);\n\t\tconfirm_malloc(p);\n\t\tc = 0;\n\t\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t\t++c;\n\t\t\tconfirm_record_size(n,\n\t\t\t    /* Is the allocation from the third batch? */\n\t\t\t    i + c - OPT_ALLOC_MAX >= 3 * OPT_ALLOC_MAX ?\n\t\t\t    /* If yes, then it's just recorded. */\n\t\t\t    i + c - OPT_ALLOC_MAX :\n\t\t\t    /*\n\t\t\t     * Otherwise, it should come from the first batch\n\t\t\t     * instead of the second batch.\n\t\t\t     */\n\t\t\t    i + c - 2 * OPT_ALLOC_MAX);\n\t\t\tif (c == OPT_ALLOC_MAX) {\n\t\t\t\tconfirm_record_living(n);\n\t\t\t} else {\n\t\t\t\tconfirm_record_released(n);\n\t\t\t}\n\t\t}\n\t\tassert_u_eq(c, OPT_ALLOC_MAX,\n\t\t    \"Incorrect total number of allocations\");\n\t\tfree(p);\n\t}\n\n\t/* Increasing the limit shouldn't alter the list of records. 
*/\n\tfuture = OPT_ALLOC_MAX + 1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tc = 0;\n\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\tconfirm_record_size(n, c + 3 * OPT_ALLOC_MAX);\n\t\tconfirm_record_released(n);\n\t\t++c;\n\t}\n\tassert_u_eq(c, OPT_ALLOC_MAX,\n\t    \"Incorrect total number of allocations\");\n\n\t/*\n\t * Decreasing the limit shouldn't alter the list of records as long as\n\t * the new limit is still no less than the length of the list.\n\t */\n\tfuture = OPT_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tc = 0;\n\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\tconfirm_record_size(n, c + 3 * OPT_ALLOC_MAX);\n\t\tconfirm_record_released(n);\n\t\t++c;\n\t}\n\tassert_u_eq(c, OPT_ALLOC_MAX,\n\t    \"Incorrect total number of allocations\");\n\n\t/*\n\t * Decreasing the limit should shorten the list of records if the new\n\t * limit is less than the length of the list.\n\t */\n\tfuture = OPT_ALLOC_MAX - 1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tc = 0;\n\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t++c;\n\t\tconfirm_record_size(n, c + 3 * OPT_ALLOC_MAX);\n\t\tconfirm_record_released(n);\n\t}\n\tassert_u_eq(c, OPT_ALLOC_MAX - 1,\n\t    \"Incorrect total number of allocations\");\n\n\t/* Setting to unlimited shouldn't alter the list of records. 
*/\n\tfuture = -1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tc = 0;\n\tql_foreach(n, &prof_recent_alloc_list, link) {\n\t\t++c;\n\t\tconfirm_record_size(n, c + 3 * OPT_ALLOC_MAX);\n\t\tconfirm_record_released(n);\n\t}\n\tassert_u_eq(c, OPT_ALLOC_MAX - 1,\n\t    \"Incorrect total number of allocations\");\n\n\t/* Downshift to only one record. */\n\tfuture = 1;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tassert_false(ql_empty(&prof_recent_alloc_list), \"Recent list is empty\");\n\tn = ql_first(&prof_recent_alloc_list);\n\tconfirm_record_size(n, 4 * OPT_ALLOC_MAX - 1);\n\tconfirm_record_released(n);\n\tn = ql_next(&prof_recent_alloc_list, n, link);\n\tassert_ptr_null(n, \"Recent list should only contain one record\");\n\n\t/* Completely turn off. */\n\tfuture = 0;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tassert_true(ql_empty(&prof_recent_alloc_list),\n\t    \"Recent list should be empty\");\n\n\t/* Restore the settings. 
*/\n\tfuture = OPT_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tassert_true(ql_empty(&prof_recent_alloc_list),\n\t    \"Recent list should be empty\");\n\n\tconfirm_prof_setup();\n}\nTEST_END\n\n#undef NTH_REQ_SIZE\n\n#define DUMP_OUT_SIZE 4096\nstatic char dump_out[DUMP_OUT_SIZE];\nstatic size_t dump_out_len = 0;\n\nstatic void\ntest_dump_write_cb(void *not_used, const char *str) {\n\tsize_t len = strlen(str);\n\tassert(dump_out_len + len < DUMP_OUT_SIZE);\n\tmemcpy(dump_out + dump_out_len, str, len + 1);\n\tdump_out_len += len;\n}\n\nstatic void\ncall_dump() {\n\tstatic void *in[2] = {test_dump_write_cb, NULL};\n\tdump_out_len = 0;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_dump\",\n\t    NULL, NULL, in, sizeof(in)), 0, \"Dump mallctl raised error\");\n}\n\ntypedef struct {\n\tsize_t size;\n\tsize_t usize;\n\tbool released;\n} confirm_record_t;\n\n#define DUMP_ERROR \"Dump output is wrong\"\n\nstatic void\nconfirm_record(const char *template, const confirm_record_t *records,\n    const size_t n_records) {\n\tstatic const char *types[2] = {\"alloc\", \"dalloc\"};\n\tstatic char buf[64];\n\n\t/*\n\t * The template string would be in the form of:\n\t * \"{...,\\\"recent_alloc\\\":[]}\",\n\t * and dump_out would be in the form of:\n\t * \"{...,\\\"recent_alloc\\\":[...]}\".\n\t * Using \"- 2\" serves to cut right before the ending \"]}\".\n\t */\n\tassert_d_eq(memcmp(dump_out, template, strlen(template) - 2), 0,\n\t    DUMP_ERROR);\n\tassert_d_eq(memcmp(dump_out + strlen(dump_out) - 2,\n\t    template + strlen(template) - 2, 2), 0, DUMP_ERROR);\n\n\tconst char *start = dump_out + strlen(template) - 2;\n\tconst char *end = dump_out + strlen(dump_out) - 2;\n\tconst confirm_record_t *record;\n\tfor (record = records; record < records + n_records; ++record) {\n\n#define ASSERT_CHAR(c) do {\t\t\t\t\t\t\\\n\tassert_true(start < end, 
DUMP_ERROR);\t\t\t\t\\\n\tassert_c_eq(*start++, c, DUMP_ERROR);\t\t\t\t\\\n} while (0)\n\n#define ASSERT_STR(s) do {\t\t\t\t\t\t\\\n\tconst size_t len = strlen(s);\t\t\t\t\t\\\n\tassert_true(start + len <= end, DUMP_ERROR);\t\t\t\\\n\tassert_d_eq(memcmp(start, s, len), 0, DUMP_ERROR);\t\t\\\n\tstart += len;\t\t\t\t\t\t\t\\\n} while (0)\n\n#define ASSERT_FORMATTED_STR(s, ...) do {\t\t\t\t\\\n\tmalloc_snprintf(buf, sizeof(buf), s, __VA_ARGS__);\t\t\\\n\tASSERT_STR(buf);\t\t\t\t\t\t\\\n} while (0)\n\n\t\tif (record != records) {\n\t\t\tASSERT_CHAR(',');\n\t\t}\n\n\t\tASSERT_CHAR('{');\n\n\t\tASSERT_STR(\"\\\"size\\\"\");\n\t\tASSERT_CHAR(':');\n\t\tASSERT_FORMATTED_STR(\"%zu\", record->size);\n\t\tASSERT_CHAR(',');\n\n\t\tASSERT_STR(\"\\\"usize\\\"\");\n\t\tASSERT_CHAR(':');\n\t\tASSERT_FORMATTED_STR(\"%zu\", record->usize);\n\t\tASSERT_CHAR(',');\n\n\t\tASSERT_STR(\"\\\"released\\\"\");\n\t\tASSERT_CHAR(':');\n\t\tASSERT_STR(record->released ? \"true\" : \"false\");\n\t\tASSERT_CHAR(',');\n\n\t\tconst char **type = types;\n\t\twhile (true) {\n\t\t\tASSERT_FORMATTED_STR(\"\\\"%s_thread_uid\\\"\", *type);\n\t\t\tASSERT_CHAR(':');\n\t\t\twhile (isdigit(*start)) {\n\t\t\t\t++start;\n\t\t\t}\n\t\t\tASSERT_CHAR(',');\n\n\t\t\tif (opt_prof_sys_thread_name) {\n\t\t\t\tASSERT_FORMATTED_STR(\"\\\"%s_thread_name\\\"\",\n\t\t\t\t    *type);\n\t\t\t\tASSERT_CHAR(':');\n\t\t\t\tASSERT_CHAR('\"');\n\t\t\t\twhile (*start != '\"') {\n\t\t\t\t\t++start;\n\t\t\t\t}\n\t\t\t\tASSERT_CHAR('\"');\n\t\t\t\tASSERT_CHAR(',');\n\t\t\t}\n\n\t\t\tASSERT_FORMATTED_STR(\"\\\"%s_time\\\"\", *type);\n\t\t\tASSERT_CHAR(':');\n\t\t\twhile (isdigit(*start)) {\n\t\t\t\t++start;\n\t\t\t}\n\t\t\tASSERT_CHAR(',');\n\n\t\t\tASSERT_FORMATTED_STR(\"\\\"%s_trace\\\"\", *type);\n\t\t\tASSERT_CHAR(':');\n\t\t\tASSERT_CHAR('[');\n\t\t\twhile (isdigit(*start) || *start == 'x' ||\n\t\t\t    (*start >= 'a' && *start <= 'f') ||\n\t\t\t    *start == '\\\"' || *start == ',') 
{\n\t\t\t\t++start;\n\t\t\t}\n\t\t\tASSERT_CHAR(']');\n\n\t\t\tif (strcmp(*type, \"dalloc\") == 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tassert(strcmp(*type, \"alloc\") == 0);\n\t\t\tif (!record->released) {\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tASSERT_CHAR(',');\n\t\t\t++type;\n\t\t}\n\n\t\tASSERT_CHAR('}');\n\n#undef ASSERT_FORMATTED_STR\n#undef ASSERT_STR\n#undef ASSERT_CHAR\n\n\t}\n\tassert_ptr_eq(record, records + n_records, DUMP_ERROR);\n\tassert_ptr_eq(start, end, DUMP_ERROR);\n}\n\nTEST_BEGIN(test_prof_recent_alloc_dump) {\n\ttest_skip_if(!config_prof);\n\n\tconfirm_prof_setup();\n\n\tssize_t future;\n\tvoid *p, *q;\n\tconfirm_record_t records[2];\n\n\tassert_zu_eq(lg_prof_sample, (size_t)0,\n\t    \"lg_prof_sample not set correctly\");\n\n\tfuture = 0;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tcall_dump();\n\texpect_str_eq(dump_out, \"{\\\"sample_interval\\\":1,\"\n\t    \"\\\"recent_alloc_max\\\":0,\\\"recent_alloc\\\":[]}\", DUMP_ERROR);\n\n\tfuture = 2;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tcall_dump();\n\tconst char *template = \"{\\\"sample_interval\\\":1,\"\n\t    \"\\\"recent_alloc_max\\\":2,\\\"recent_alloc\\\":[]}\";\n\texpect_str_eq(dump_out, template, DUMP_ERROR);\n\n\tp = malloc(7);\n\tcall_dump();\n\trecords[0].size = 7;\n\trecords[0].usize = sz_s2u(7);\n\trecords[0].released = false;\n\tconfirm_record(template, records, 1);\n\n\tq = mallocx(17, MALLOCX_ALIGN(128));\n\tcall_dump();\n\trecords[1].size = 17;\n\trecords[1].usize = sz_sa2u(17, 128);\n\trecords[1].released = false;\n\tconfirm_record(template, records, 2);\n\n\tfree(q);\n\tcall_dump();\n\trecords[1].released = true;\n\tconfirm_record(template, records, 2);\n\n\tfree(p);\n\tcall_dump();\n\trecords[0].released = true;\n\tconfirm_record(template, records, 2);\n\n\tfuture = 
OPT_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &future, sizeof(ssize_t)), 0, \"Write error\");\n\tconfirm_prof_setup();\n}\nTEST_END\n\n#undef DUMP_ERROR\n#undef DUMP_OUT_SIZE\n\n#define N_THREADS 8\n#define N_PTRS 512\n#define N_CTLS 8\n#define N_ITERS 2048\n#define STRESS_ALLOC_MAX 4096\n\ntypedef struct {\n\tthd_t thd;\n\tsize_t id;\n\tvoid *ptrs[N_PTRS];\n\tsize_t count;\n} thd_data_t;\n\nstatic thd_data_t thd_data[N_THREADS];\nstatic ssize_t test_max;\n\nstatic void\ntest_write_cb(void *cbopaque, const char *str) {\n\tsleep_ns(1000 * 1000);\n}\n\nstatic void *\nf_thread(void *arg) {\n\tconst size_t thd_id = *(size_t *)arg;\n\tthd_data_t *data_p = thd_data + thd_id;\n\tassert(data_p->id == thd_id);\n\tdata_p->count = 0;\n\tuint64_t rand = (uint64_t)thd_id;\n\ttsd_t *tsd = tsd_fetch();\n\tassert(test_max > 1);\n\tssize_t last_max = -1;\n\tfor (int i = 0; i < N_ITERS; i++) {\n\t\trand = prng_range_u64(&rand, N_PTRS + N_CTLS * 5);\n\t\tassert(data_p->count <= N_PTRS);\n\t\tif (rand < data_p->count) {\n\t\t\tassert(data_p->count > 0);\n\t\t\tif (rand != data_p->count - 1) {\n\t\t\t\tassert(data_p->count > 1);\n\t\t\t\tvoid *temp = data_p->ptrs[rand];\n\t\t\t\tdata_p->ptrs[rand] =\n\t\t\t\t    data_p->ptrs[data_p->count - 1];\n\t\t\t\tdata_p->ptrs[data_p->count - 1] = temp;\n\t\t\t}\n\t\t\tfree(data_p->ptrs[--data_p->count]);\n\t\t} else if (rand < N_PTRS) {\n\t\t\tassert(data_p->count < N_PTRS);\n\t\t\tdata_p->ptrs[data_p->count++] = malloc(1);\n\t\t} else if (rand % 5 == 0) {\n\t\t\tprof_recent_alloc_dump(tsd, test_write_cb, NULL);\n\t\t} else if (rand % 5 == 1) {\n\t\t\tlast_max = prof_recent_alloc_max_ctl_read();\n\t\t} else if (rand % 5 == 2) {\n\t\t\tlast_max =\n\t\t\t    prof_recent_alloc_max_ctl_write(tsd, test_max * 2);\n\t\t} else if (rand % 5 == 3) {\n\t\t\tlast_max =\n\t\t\t    prof_recent_alloc_max_ctl_write(tsd, test_max);\n\t\t} else {\n\t\t\tassert(rand % 5 == 4);\n\t\t\tlast_max =\n\t\t\t    
prof_recent_alloc_max_ctl_write(tsd, test_max / 2);\n\t\t}\n\t\tassert_zd_ge(last_max, -1, \"Illegal last-N max\");\n\t}\n\n\twhile (data_p->count > 0) {\n\t\tfree(data_p->ptrs[--data_p->count]);\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_prof_recent_stress) {\n\ttest_skip_if(!config_prof);\n\n\tconfirm_prof_setup();\n\n\ttest_max = OPT_ALLOC_MAX;\n\tfor (size_t i = 0; i < N_THREADS; i++) {\n\t\tthd_data_t *data_p = thd_data + i;\n\t\tdata_p->id = i;\n\t\tthd_create(&data_p->thd, &f_thread, &data_p->id);\n\t}\n\tfor (size_t i = 0; i < N_THREADS; i++) {\n\t\tthd_data_t *data_p = thd_data + i;\n\t\tthd_join(data_p->thd, NULL);\n\t}\n\n\ttest_max = STRESS_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &test_max, sizeof(ssize_t)), 0, \"Write error\");\n\tfor (size_t i = 0; i < N_THREADS; i++) {\n\t\tthd_data_t *data_p = thd_data + i;\n\t\tdata_p->id = i;\n\t\tthd_create(&data_p->thd, &f_thread, &data_p->id);\n\t}\n\tfor (size_t i = 0; i < N_THREADS; i++) {\n\t\tthd_data_t *data_p = thd_data + i;\n\t\tthd_join(data_p->thd, NULL);\n\t}\n\n\ttest_max = OPT_ALLOC_MAX;\n\tassert_d_eq(mallctl(\"experimental.prof_recent.alloc_max\",\n\t    NULL, NULL, &test_max, sizeof(ssize_t)), 0, \"Write error\");\n\tconfirm_prof_setup();\n}\nTEST_END\n\n#undef STRESS_ALLOC_MAX\n#undef N_ITERS\n#undef N_PTRS\n#undef N_THREADS\n\nint\nmain(void) {\n\treturn test(\n\t    test_confirm_setup,\n\t    test_prof_recent_off,\n\t    test_prof_recent_on,\n\t    test_prof_recent_alloc,\n\t    test_prof_recent_alloc_dump,\n\t    test_prof_recent_stress);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_recent.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0,prof_recent_alloc_max:3\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_reset.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_data.h\"\n#include \"jemalloc/internal/prof_sys.h\"\n\nstatic int\nprof_dump_open_file_intercept(const char *filename, int mode) {\n\tint fd;\n\n\tfd = open(\"/dev/null\", O_WRONLY);\n\tassert_d_ne(fd, -1, \"Unexpected open() failure\");\n\n\treturn fd;\n}\n\nstatic void\nset_prof_active(bool active) {\n\texpect_d_eq(mallctl(\"prof.active\", NULL, NULL, (void *)&active,\n\t    sizeof(active)), 0, \"Unexpected mallctl failure\");\n}\n\nstatic size_t\nget_lg_prof_sample(void) {\n\tsize_t ret;\n\tsize_t sz = sizeof(size_t);\n\n\texpect_d_eq(mallctl(\"prof.lg_sample\", (void *)&ret, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure while reading profiling sample rate\");\n\treturn ret;\n}\n\nstatic void\ndo_prof_reset(size_t lg_prof_sample_input) {\n\texpect_d_eq(mallctl(\"prof.reset\", NULL, NULL,\n\t    (void *)&lg_prof_sample_input, sizeof(size_t)), 0,\n\t    \"Unexpected mallctl failure while resetting profile data\");\n\texpect_zu_eq(lg_prof_sample_input, get_lg_prof_sample(),\n\t    \"Expected profile sample rate change\");\n}\n\nTEST_BEGIN(test_prof_reset_basic) {\n\tsize_t lg_prof_sample_orig, lg_prof_sample_cur, lg_prof_sample_next;\n\tsize_t sz;\n\tunsigned i;\n\n\ttest_skip_if(!config_prof);\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"opt.lg_prof_sample\", (void *)&lg_prof_sample_orig,\n\t    &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure while reading profiling sample rate\");\n\texpect_zu_eq(lg_prof_sample_orig, 0,\n\t    \"Unexpected profiling sample rate\");\n\tlg_prof_sample_cur = get_lg_prof_sample();\n\texpect_zu_eq(lg_prof_sample_orig, lg_prof_sample_cur,\n\t    \"Unexpected disagreement between \\\"opt.lg_prof_sample\\\" and \"\n\t    \"\\\"prof.lg_sample\\\"\");\n\n\t/* Test simple resets. 
*/\n\tfor (i = 0; i < 2; i++) {\n\t\texpect_d_eq(mallctl(\"prof.reset\", NULL, NULL, NULL, 0), 0,\n\t\t    \"Unexpected mallctl failure while resetting profile data\");\n\t\tlg_prof_sample_cur = get_lg_prof_sample();\n\t\texpect_zu_eq(lg_prof_sample_orig, lg_prof_sample_cur,\n\t\t    \"Unexpected profile sample rate change\");\n\t}\n\n\t/* Test resets with prof.lg_sample changes. */\n\tlg_prof_sample_next = 1;\n\tfor (i = 0; i < 2; i++) {\n\t\tdo_prof_reset(lg_prof_sample_next);\n\t\tlg_prof_sample_cur = get_lg_prof_sample();\n\t\texpect_zu_eq(lg_prof_sample_cur, lg_prof_sample_next,\n\t\t    \"Expected profile sample rate change\");\n\t\tlg_prof_sample_next = lg_prof_sample_orig;\n\t}\n\n\t/* Make sure the test code restored prof.lg_sample. */\n\tlg_prof_sample_cur = get_lg_prof_sample();\n\texpect_zu_eq(lg_prof_sample_orig, lg_prof_sample_cur,\n\t    \"Unexpected disagreement between \\\"opt.lg_prof_sample\\\" and \"\n\t    \"\\\"prof.lg_sample\\\"\");\n}\nTEST_END\n\nTEST_BEGIN(test_prof_reset_cleanup) {\n\ttest_skip_if(!config_prof);\n\n\tset_prof_active(true);\n\n\texpect_zu_eq(prof_bt_count(), 0, \"Expected 0 backtraces\");\n\tvoid *p = mallocx(1, 0);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\texpect_zu_eq(prof_bt_count(), 1, \"Expected 1 backtrace\");\n\n\tprof_cnt_t cnt_all;\n\tprof_cnt_all(&cnt_all);\n\texpect_u64_eq(cnt_all.curobjs, 1, \"Expected 1 allocation\");\n\n\texpect_d_eq(mallctl(\"prof.reset\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected error while resetting heap profile data\");\n\tprof_cnt_all(&cnt_all);\n\texpect_u64_eq(cnt_all.curobjs, 0, \"Expected 0 allocations\");\n\texpect_zu_eq(prof_bt_count(), 1, \"Expected 1 backtrace\");\n\n\tdallocx(p, 0);\n\texpect_zu_eq(prof_bt_count(), 0, \"Expected 0 backtraces\");\n\n\tset_prof_active(false);\n}\nTEST_END\n\n#define NTHREADS\t\t4\n#define NALLOCS_PER_THREAD\t(1U << 13)\n#define OBJ_RING_BUF_COUNT\t1531\n#define RESET_INTERVAL\t\t(1U << 10)\n#define 
DUMP_INTERVAL\t\t3677\nstatic void *\nthd_start(void *varg) {\n\tunsigned thd_ind = *(unsigned *)varg;\n\tunsigned i;\n\tvoid *objs[OBJ_RING_BUF_COUNT];\n\n\tmemset(objs, 0, sizeof(objs));\n\n\tfor (i = 0; i < NALLOCS_PER_THREAD; i++) {\n\t\tif (i % RESET_INTERVAL == 0) {\n\t\t\texpect_d_eq(mallctl(\"prof.reset\", NULL, NULL, NULL, 0),\n\t\t\t    0, \"Unexpected error while resetting heap profile \"\n\t\t\t    \"data\");\n\t\t}\n\n\t\tif (i % DUMP_INTERVAL == 0) {\n\t\t\texpect_d_eq(mallctl(\"prof.dump\", NULL, NULL, NULL, 0),\n\t\t\t    0, \"Unexpected error while dumping heap profile\");\n\t\t}\n\n\t\t{\n\t\t\tvoid **pp = &objs[i % OBJ_RING_BUF_COUNT];\n\t\t\tif (*pp != NULL) {\n\t\t\t\tdallocx(*pp, 0);\n\t\t\t\t*pp = NULL;\n\t\t\t}\n\t\t\t*pp = btalloc(1, thd_ind*NALLOCS_PER_THREAD + i);\n\t\t\texpect_ptr_not_null(*pp,\n\t\t\t    \"Unexpected btalloc() failure\");\n\t\t}\n\t}\n\n\t/* Clean up any remaining objects. */\n\tfor (i = 0; i < OBJ_RING_BUF_COUNT; i++) {\n\t\tvoid **pp = &objs[i % OBJ_RING_BUF_COUNT];\n\t\tif (*pp != NULL) {\n\t\t\tdallocx(*pp, 0);\n\t\t\t*pp = NULL;\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_prof_reset) {\n\tsize_t lg_prof_sample_orig;\n\tthd_t thds[NTHREADS];\n\tunsigned thd_args[NTHREADS];\n\tunsigned i;\n\tsize_t bt_count, tdata_count;\n\n\ttest_skip_if(!config_prof);\n\n\tbt_count = prof_bt_count();\n\texpect_zu_eq(bt_count, 0,\n\t    \"Unexpected pre-existing backtraces\");\n\ttdata_count = prof_tdata_count();\n\n\tlg_prof_sample_orig = get_lg_prof_sample();\n\tdo_prof_reset(5);\n\n\tset_prof_active(true);\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_args[i] = i;\n\t\tthd_create(&thds[i], thd_start, (void *)&thd_args[i]);\n\t}\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n\n\texpect_zu_eq(prof_bt_count(), bt_count,\n\t    \"Unexpected backtrace count change\");\n\texpect_zu_eq(prof_tdata_count(), tdata_count,\n\t    \"Unexpected remaining tdata 
structures\");\n\n\tset_prof_active(false);\n\n\tdo_prof_reset(lg_prof_sample_orig);\n}\nTEST_END\n#undef NTHREADS\n#undef NALLOCS_PER_THREAD\n#undef OBJ_RING_BUF_COUNT\n#undef RESET_INTERVAL\n#undef DUMP_INTERVAL\n\n/* Test sampling at the same allocation site across resets. */\n#define NITER 10\nTEST_BEGIN(test_xallocx) {\n\tsize_t lg_prof_sample_orig;\n\tunsigned i;\n\tvoid *ptrs[NITER];\n\n\ttest_skip_if(!config_prof);\n\n\tlg_prof_sample_orig = get_lg_prof_sample();\n\tset_prof_active(true);\n\n\t/* Reset profiling. */\n\tdo_prof_reset(0);\n\n\tfor (i = 0; i < NITER; i++) {\n\t\tvoid *p;\n\t\tsize_t sz, nsz;\n\n\t\t/* Reset profiling. */\n\t\tdo_prof_reset(0);\n\n\t\t/* Allocate small object (which will be promoted). */\n\t\tp = ptrs[i] = mallocx(1, 0);\n\t\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\t\t/* Reset profiling. */\n\t\tdo_prof_reset(0);\n\n\t\t/* Perform successful xallocx(). */\n\t\tsz = sallocx(p, 0);\n\t\texpect_zu_eq(xallocx(p, sz, 0, 0), sz,\n\t\t    \"Unexpected xallocx() failure\");\n\n\t\t/* Perform unsuccessful xallocx(). */\n\t\tnsz = nallocx(sz+1, 0);\n\t\texpect_zu_eq(xallocx(p, nsz, 0, 0), sz,\n\t\t    \"Unexpected xallocx() success\");\n\t}\n\n\tfor (i = 0; i < NITER; i++) {\n\t\t/* dallocx. */\n\t\tdallocx(ptrs[i], 0);\n\t}\n\n\tset_prof_active(false);\n\tdo_prof_reset(lg_prof_sample_orig);\n}\nTEST_END\n#undef NITER\n\nint\nmain(void) {\n\t/* Intercept dumping prior to running any tests. */\n\tprof_dump_open_file = prof_dump_open_file_intercept;\n\n\treturn test_no_reentrancy(\n\t    test_prof_reset_basic,\n\t    test_prof_reset_cleanup,\n\t    test_prof_reset,\n\t    test_xallocx);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_reset.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:false,lg_prof_sample:0,prof_recent_alloc_max:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_stats.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define N_PTRS 3\n\nstatic void\ntest_combinations(szind_t ind, size_t sizes_array[N_PTRS],\n    int flags_array[N_PTRS]) {\n#define MALLCTL_STR_LEN 64\n\tassert(opt_prof && opt_prof_stats);\n\n\tchar mallctl_live_str[MALLCTL_STR_LEN];\n\tchar mallctl_accum_str[MALLCTL_STR_LEN];\n\tif (ind < SC_NBINS) {\n\t\tmalloc_snprintf(mallctl_live_str, MALLCTL_STR_LEN,\n\t\t    \"prof.stats.bins.%u.live\", (unsigned)ind);\n\t\tmalloc_snprintf(mallctl_accum_str, MALLCTL_STR_LEN,\n\t\t    \"prof.stats.bins.%u.accum\", (unsigned)ind);\n\t} else {\n\t\tmalloc_snprintf(mallctl_live_str, MALLCTL_STR_LEN,\n\t\t    \"prof.stats.lextents.%u.live\", (unsigned)(ind - SC_NBINS));\n\t\tmalloc_snprintf(mallctl_accum_str, MALLCTL_STR_LEN,\n\t\t    \"prof.stats.lextents.%u.accum\", (unsigned)(ind - SC_NBINS));\n\t}\n\n\tsize_t stats_len = 2 * sizeof(uint64_t);\n\n\tuint64_t live_stats_orig[2];\n\tassert_d_eq(mallctl(mallctl_live_str, &live_stats_orig, &stats_len,\n\t    NULL, 0), 0, \"\");\n\tuint64_t accum_stats_orig[2];\n\tassert_d_eq(mallctl(mallctl_accum_str, &accum_stats_orig, &stats_len,\n\t    NULL, 0), 0, \"\");\n\n\tvoid *ptrs[N_PTRS];\n\n\tuint64_t live_req_sum = 0;\n\tuint64_t live_count = 0;\n\tuint64_t accum_req_sum = 0;\n\tuint64_t accum_count = 0;\n\n\tfor (size_t i = 0; i < N_PTRS; ++i) {\n\t\tsize_t sz = sizes_array[i];\n\t\tint flags = flags_array[i];\n\t\tvoid *p = mallocx(sz, flags);\n\t\tassert_ptr_not_null(p, \"malloc() failed\");\n\t\tassert(TEST_MALLOC_SIZE(p) == sz_index2size(ind));\n\t\tptrs[i] = p;\n\t\tlive_req_sum += sz;\n\t\tlive_count++;\n\t\taccum_req_sum += sz;\n\t\taccum_count++;\n\t\tuint64_t live_stats[2];\n\t\tassert_d_eq(mallctl(mallctl_live_str, &live_stats, &stats_len,\n\t\t    NULL, 0), 0, \"\");\n\t\texpect_u64_eq(live_stats[0] - live_stats_orig[0],\n\t\t    live_req_sum, \"\");\n\t\texpect_u64_eq(live_stats[1] - live_stats_orig[1],\n\t\t    live_count, \"\");\n\t\tuint64_t 
accum_stats[2];\n\t\tassert_d_eq(mallctl(mallctl_accum_str, &accum_stats, &stats_len,\n\t\t    NULL, 0), 0, \"\");\n\t\texpect_u64_eq(accum_stats[0] - accum_stats_orig[0],\n\t\t    accum_req_sum, \"\");\n\t\texpect_u64_eq(accum_stats[1] - accum_stats_orig[1],\n\t\t    accum_count, \"\");\n\t}\n\n\tfor (size_t i = 0; i < N_PTRS; ++i) {\n\t\tsize_t sz = sizes_array[i];\n\t\tint flags = flags_array[i];\n\t\tsdallocx(ptrs[i], sz, flags);\n\t\tlive_req_sum -= sz;\n\t\tlive_count--;\n\t\tuint64_t live_stats[2];\n\t\tassert_d_eq(mallctl(mallctl_live_str, &live_stats, &stats_len,\n\t\t    NULL, 0), 0, \"\");\n\t\texpect_u64_eq(live_stats[0] - live_stats_orig[0],\n\t\t    live_req_sum, \"\");\n\t\texpect_u64_eq(live_stats[1] - live_stats_orig[1],\n\t\t    live_count, \"\");\n\t\tuint64_t accum_stats[2];\n\t\tassert_d_eq(mallctl(mallctl_accum_str, &accum_stats, &stats_len,\n\t\t    NULL, 0), 0, \"\");\n\t\texpect_u64_eq(accum_stats[0] - accum_stats_orig[0],\n\t\t    accum_req_sum, \"\");\n\t\texpect_u64_eq(accum_stats[1] - accum_stats_orig[1],\n\t\t    accum_count, \"\");\n\t}\n#undef MALLCTL_STR_LEN\n}\n\nstatic void\ntest_szind_wrapper(szind_t ind) {\n\tsize_t sizes_array[N_PTRS];\n\tint flags_array[N_PTRS];\n\tfor (size_t i = 0, sz = sz_index2size(ind) - N_PTRS; i < N_PTRS;\n\t    ++i, ++sz) {\n\t\tsizes_array[i] = sz;\n\t\tflags_array[i] = 0;\n\t}\n\ttest_combinations(ind, sizes_array, flags_array);\n}\n\nTEST_BEGIN(test_prof_stats) {\n\ttest_skip_if(!config_prof);\n\ttest_szind_wrapper(0);\n\ttest_szind_wrapper(1);\n\ttest_szind_wrapper(2);\n\ttest_szind_wrapper(SC_NBINS);\n\ttest_szind_wrapper(SC_NBINS + 1);\n\ttest_szind_wrapper(SC_NBINS + 2);\n}\nTEST_END\n\nstatic void\ntest_szind_aligned_wrapper(szind_t ind, unsigned lg_align) {\n\tsize_t sizes_array[N_PTRS];\n\tint flags_array[N_PTRS];\n\tint flags = MALLOCX_LG_ALIGN(lg_align);\n\tfor (size_t i = 0, sz = sz_index2size(ind) - N_PTRS; i < N_PTRS;\n\t    ++i, ++sz) {\n\t\tsizes_array[i] = sz;\n\t\tflags_array[i] = 
flags;\n\t}\n\ttest_combinations(\n\t    sz_size2index(sz_sa2u(sz_index2size(ind), 1 << lg_align)),\n\t    sizes_array, flags_array);\n}\n\nTEST_BEGIN(test_prof_stats_aligned) {\n\ttest_skip_if(!config_prof);\n\tfor (szind_t ind = 0; ind < 10; ++ind) {\n\t\tfor (unsigned lg_align = 0; lg_align < 10; ++lg_align) {\n\t\t\ttest_szind_aligned_wrapper(ind, lg_align);\n\t\t}\n\t}\n\tfor (szind_t ind = SC_NBINS - 5; ind < SC_NBINS + 5; ++ind) {\n\t\tfor (unsigned lg_align = SC_LG_LARGE_MINCLASS - 5;\n\t\t    lg_align < SC_LG_LARGE_MINCLASS + 5; ++lg_align) {\n\t\t\ttest_szind_aligned_wrapper(ind, lg_align);\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_prof_stats,\n\t    test_prof_stats_aligned);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_stats.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0,prof_stats:true\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_sys_thread_name.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_sys.h\"\n\nstatic const char *test_thread_name = \"test_name\";\n\nstatic int\ntest_prof_sys_thread_name_read_error(char *buf, size_t limit) {\n\treturn ENOSYS;\n}\n\nstatic int\ntest_prof_sys_thread_name_read(char *buf, size_t limit) {\n\tassert(strlen(test_thread_name) < limit);\n\tstrncpy(buf, test_thread_name, limit);\n\treturn 0;\n}\n\nstatic int\ntest_prof_sys_thread_name_read_clear(char *buf, size_t limit) {\n\tassert(limit > 0);\n\tbuf[0] = '\\0';\n\treturn 0;\n}\n\nTEST_BEGIN(test_prof_sys_thread_name) {\n\ttest_skip_if(!config_prof);\n\n\tbool oldval;\n\tsize_t sz = sizeof(oldval);\n\tassert_d_eq(mallctl(\"opt.prof_sys_thread_name\", &oldval, &sz, NULL, 0),\n\t    0, \"mallctl failed\");\n\tassert_true(oldval, \"option was not set correctly\");\n\n\tconst char *thread_name;\n\tsz = sizeof(thread_name);\n\tassert_d_eq(mallctl(\"thread.prof.name\", &thread_name, &sz, NULL, 0), 0,\n\t    \"mallctl read for thread name should not fail\");\n\texpect_str_eq(thread_name, \"\", \"Initial thread name should be empty\");\n\n\tthread_name = test_thread_name;\n\tassert_d_eq(mallctl(\"thread.prof.name\", NULL, NULL, &thread_name, sz),\n\t    ENOENT, \"mallctl write for thread name should fail\");\n\tassert_ptr_eq(thread_name, test_thread_name,\n\t    \"Thread name should not be touched\");\n\n\tprof_sys_thread_name_read = test_prof_sys_thread_name_read_error;\n\tvoid *p = malloc(1);\n\tfree(p);\n\tassert_d_eq(mallctl(\"thread.prof.name\", &thread_name, &sz, NULL, 0), 0,\n\t    \"mallctl read for thread name should not fail\");\n\tassert_str_eq(thread_name, \"\",\n\t    \"Thread name should stay the same if the system call fails\");\n\n\tprof_sys_thread_name_read = test_prof_sys_thread_name_read;\n\tp = malloc(1);\n\tfree(p);\n\tassert_d_eq(mallctl(\"thread.prof.name\", &thread_name, &sz, NULL, 0), 0,\n\t    \"mallctl read for thread name should not 
fail\");\n\tassert_str_eq(thread_name, test_thread_name,\n\t    \"Thread name should be changed if the system call succeeds\");\n\n\tprof_sys_thread_name_read = test_prof_sys_thread_name_read_clear;\n\tp = malloc(1);\n\tfree(p);\n\tassert_d_eq(mallctl(\"thread.prof.name\", &thread_name, &sz, NULL, 0), 0,\n\t    \"mallctl read for thread name should not fail\");\n\texpect_str_eq(thread_name, \"\", \"Thread name should be updated if the \"\n\t    \"system call returns a different name\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_prof_sys_thread_name);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_sys_thread_name.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0,prof_sys_thread_name:true\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_tctx.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/prof_data.h\"\n\nTEST_BEGIN(test_prof_realloc) {\n\ttsd_t *tsd;\n\tint flags;\n\tvoid *p, *q;\n\tprof_info_t prof_info_p, prof_info_q;\n\tprof_cnt_t cnt_0, cnt_1, cnt_2, cnt_3;\n\n\ttest_skip_if(!config_prof);\n\n\ttsd = tsd_fetch();\n\tflags = MALLOCX_TCACHE_NONE;\n\n\tprof_cnt_all(&cnt_0);\n\tp = mallocx(1024, flags);\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\tprof_info_get(tsd, p, NULL, &prof_info_p);\n\texpect_ptr_ne(prof_info_p.alloc_tctx, (prof_tctx_t *)(uintptr_t)1U,\n\t    \"Expected valid tctx\");\n\tprof_cnt_all(&cnt_1);\n\texpect_u64_eq(cnt_0.curobjs + 1, cnt_1.curobjs,\n\t    \"Allocation should have increased sample size\");\n\n\tq = rallocx(p, 2048, flags);\n\texpect_ptr_ne(p, q, \"Expected move\");\n\texpect_ptr_not_null(q, \"Unexpected rallocx() failure\");\n\tprof_info_get(tsd, q, NULL, &prof_info_q);\n\texpect_ptr_ne(prof_info_q.alloc_tctx, (prof_tctx_t *)(uintptr_t)1U,\n\t    \"Expected valid tctx\");\n\tprof_cnt_all(&cnt_2);\n\texpect_u64_eq(cnt_1.curobjs, cnt_2.curobjs,\n\t    \"Reallocation should not have changed sample size\");\n\n\tdallocx(q, flags);\n\tprof_cnt_all(&cnt_3);\n\texpect_u64_eq(cnt_0.curobjs, cnt_3.curobjs,\n\t    \"Sample size should have returned to base level\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_prof_realloc);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_tctx.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_thread_name.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic void\nmallctl_thread_name_get_impl(const char *thread_name_expected, const char *func,\n    int line) {\n\tconst char *thread_name_old;\n\tsize_t sz;\n\n\tsz = sizeof(thread_name_old);\n\texpect_d_eq(mallctl(\"thread.prof.name\", (void *)&thread_name_old, &sz,\n\t    NULL, 0), 0,\n\t    \"%s():%d: Unexpected mallctl failure reading thread.prof.name\",\n\t    func, line);\n\texpect_str_eq(thread_name_old, thread_name_expected,\n\t    \"%s():%d: Unexpected thread.prof.name value\", func, line);\n}\n#define mallctl_thread_name_get(a)\t\t\t\t\t\\\n\tmallctl_thread_name_get_impl(a, __func__, __LINE__)\n\nstatic void\nmallctl_thread_name_set_impl(const char *thread_name, const char *func,\n    int line) {\n\texpect_d_eq(mallctl(\"thread.prof.name\", NULL, NULL,\n\t    (void *)&thread_name, sizeof(thread_name)), 0,\n\t    \"%s():%d: Unexpected mallctl failure writing thread.prof.name\",\n\t    func, line);\n\tmallctl_thread_name_get_impl(thread_name, func, line);\n}\n#define mallctl_thread_name_set(a)\t\t\t\t\t\\\n\tmallctl_thread_name_set_impl(a, __func__, __LINE__)\n\nTEST_BEGIN(test_prof_thread_name_validation) {\n\tconst char *thread_name;\n\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(opt_prof_sys_thread_name);\n\n\tmallctl_thread_name_get(\"\");\n\tmallctl_thread_name_set(\"hi there\");\n\n\t/* NULL input shouldn't be allowed. */\n\tthread_name = NULL;\n\texpect_d_eq(mallctl(\"thread.prof.name\", NULL, NULL,\n\t    (void *)&thread_name, sizeof(thread_name)), EFAULT,\n\t    \"Unexpected mallctl result writing \\\"%s\\\" to thread.prof.name\",\n\t    thread_name);\n\n\t/* '\\n' shouldn't be allowed. */\n\tthread_name = \"hi\\nthere\";\n\texpect_d_eq(mallctl(\"thread.prof.name\", NULL, NULL,\n\t    (void *)&thread_name, sizeof(thread_name)), EFAULT,\n\t    \"Unexpected mallctl result writing \\\"%s\\\" to thread.prof.name\",\n\t    thread_name);\n\n\t/* Simultaneous read/write shouldn't be allowed. 
*/\n\t{\n\t\tconst char *thread_name_old;\n\t\tsize_t sz;\n\n\t\tsz = sizeof(thread_name_old);\n\t\texpect_d_eq(mallctl(\"thread.prof.name\",\n\t\t    (void *)&thread_name_old, &sz, (void *)&thread_name,\n\t\t    sizeof(thread_name)), EPERM,\n\t\t    \"Unexpected mallctl result writing \\\"%s\\\" to \"\n\t\t    \"thread.prof.name\", thread_name);\n\t}\n\n\tmallctl_thread_name_set(\"\");\n}\nTEST_END\n\n#define NTHREADS\t4\n#define NRESET\t\t25\nstatic void *\nthd_start(void *varg) {\n\tunsigned thd_ind = *(unsigned *)varg;\n\tchar thread_name[16] = \"\";\n\tunsigned i;\n\n\tmalloc_snprintf(thread_name, sizeof(thread_name), \"thread %u\", thd_ind);\n\n\tmallctl_thread_name_get(\"\");\n\tmallctl_thread_name_set(thread_name);\n\n\tfor (i = 0; i < NRESET; i++) {\n\t\texpect_d_eq(mallctl(\"prof.reset\", NULL, NULL, NULL, 0), 0,\n\t\t    \"Unexpected error while resetting heap profile data\");\n\t\tmallctl_thread_name_get(thread_name);\n\t}\n\n\tmallctl_thread_name_set(thread_name);\n\tmallctl_thread_name_set(\"\");\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_prof_thread_name_threaded) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(opt_prof_sys_thread_name);\n\n\tthd_t thds[NTHREADS];\n\tunsigned thd_args[NTHREADS];\n\tunsigned i;\n\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_args[i] = i;\n\t\tthd_create(&thds[i], thd_start, (void *)&thd_args[i]);\n\t}\n\tfor (i = 0; i < NTHREADS; i++) {\n\t\tthd_join(thds[i], NULL);\n\t}\n}\nTEST_END\n#undef NTHREADS\n#undef NRESET\n\nint\nmain(void) {\n\treturn test(\n\t    test_prof_thread_name_validation,\n\t    test_prof_thread_name_threaded);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/prof_thread_name.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/psset.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/psset.h\"\n\n#define PAGESLAB_ADDR ((void *)(1234 * HUGEPAGE))\n#define PAGESLAB_AGE 5678\n\n#define ALLOC_ARENA_IND 111\n#define ALLOC_ESN 222\n\nstatic void\nedata_init_test(edata_t *edata) {\n\tmemset(edata, 0, sizeof(*edata));\n\tedata_arena_ind_set(edata, ALLOC_ARENA_IND);\n\tedata_esn_set(edata, ALLOC_ESN);\n}\n\nstatic void\ntest_psset_fake_purge(hpdata_t *ps) {\n\thpdata_purge_state_t purge_state;\n\thpdata_alloc_allowed_set(ps, false);\n\thpdata_purge_begin(ps, &purge_state);\n\tvoid *addr;\n\tsize_t size;\n\twhile (hpdata_purge_next(ps, &purge_state, &addr, &size)) {\n\t}\n\thpdata_purge_end(ps, &purge_state);\n\thpdata_alloc_allowed_set(ps, true);\n}\n\nstatic void\ntest_psset_alloc_new(psset_t *psset, hpdata_t *ps, edata_t *r_edata,\n    size_t size) {\n\thpdata_assert_empty(ps);\n\n\ttest_psset_fake_purge(ps);\n\n\tpsset_insert(psset, ps);\n\tpsset_update_begin(psset, ps);\n\n        void *addr = hpdata_reserve_alloc(ps, size);\n        edata_init(r_edata, edata_arena_ind_get(r_edata), addr, size,\n\t    /* slab */ false, SC_NSIZES, /* sn */ 0, extent_state_active,\n            /* zeroed */ false, /* committed */ true, EXTENT_PAI_HPA,\n            EXTENT_NOT_HEAD);\n        edata_ps_set(r_edata, ps);\n\tpsset_update_end(psset, ps);\n}\n\nstatic bool\ntest_psset_alloc_reuse(psset_t *psset, edata_t *r_edata, size_t size) {\n\thpdata_t *ps = psset_pick_alloc(psset, size);\n\tif (ps == NULL) {\n\t\treturn true;\n\t}\n\tpsset_update_begin(psset, ps);\n\tvoid *addr = hpdata_reserve_alloc(ps, size);\n\tedata_init(r_edata, edata_arena_ind_get(r_edata), addr, size,\n\t    /* slab */ false, SC_NSIZES, /* sn */ 0, extent_state_active,\n\t    /* zeroed */ false, /* committed */ true, EXTENT_PAI_HPA,\n\t    EXTENT_NOT_HEAD);\n\tedata_ps_set(r_edata, ps);\n\tpsset_update_end(psset, ps);\n\treturn false;\n}\n\nstatic hpdata_t *\ntest_psset_dalloc(psset_t *psset, edata_t *edata) 
{\n\thpdata_t *ps = edata_ps_get(edata);\n\tpsset_update_begin(psset, ps);\n\thpdata_unreserve(ps, edata_addr_get(edata), edata_size_get(edata));\n\tpsset_update_end(psset, ps);\n\tif (hpdata_empty(ps)) {\n\t\tpsset_remove(psset, ps);\n\t\treturn ps;\n\t} else {\n\t\treturn NULL;\n\t}\n}\n\nstatic void\nedata_expect(edata_t *edata, size_t page_offset, size_t page_cnt) {\n\t/*\n\t * Note that allocations should get the arena ind of their home\n\t * arena, *not* the arena ind of the pageslab allocator.\n\t */\n\texpect_u_eq(ALLOC_ARENA_IND, edata_arena_ind_get(edata),\n\t    \"Arena ind changed\");\n\texpect_ptr_eq(\n\t    (void *)((uintptr_t)PAGESLAB_ADDR + (page_offset << LG_PAGE)),\n\t    edata_addr_get(edata), \"Didn't allocate in order\");\n\texpect_zu_eq(page_cnt << LG_PAGE, edata_size_get(edata), \"\");\n\texpect_false(edata_slab_get(edata), \"\");\n\texpect_u_eq(SC_NSIZES, edata_szind_get_maybe_invalid(edata),\n\t    \"\");\n\texpect_u64_eq(0, edata_sn_get(edata), \"\");\n\texpect_d_eq(edata_state_get(edata), extent_state_active, \"\");\n\texpect_false(edata_zeroed_get(edata), \"\");\n\texpect_true(edata_committed_get(edata), \"\");\n\texpect_d_eq(EXTENT_PAI_HPA, edata_pai_get(edata), \"\");\n\texpect_false(edata_is_head_get(edata), \"\");\n}\n\nTEST_BEGIN(test_empty) {\n\tbool err;\n\thpdata_t pageslab;\n\thpdata_init(&pageslab, PAGESLAB_ADDR, PAGESLAB_AGE);\n\n\tedata_t alloc;\n\tedata_init_test(&alloc);\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\t/* An empty psset should fail allocations. 
*/\n\terr = test_psset_alloc_reuse(&psset, &alloc, PAGE);\n\texpect_true(err, \"Empty psset succeeded in an allocation.\");\n}\nTEST_END\n\nTEST_BEGIN(test_fill) {\n\tbool err;\n\n\thpdata_t pageslab;\n\thpdata_init(&pageslab, PAGESLAB_ADDR, PAGESLAB_AGE);\n\n\tedata_t alloc[HUGEPAGE_PAGES];\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\tedata_init_test(&alloc[0]);\n\ttest_psset_alloc_new(&psset, &pageslab, &alloc[0], PAGE);\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES; i++) {\n\t\tedata_init_test(&alloc[i]);\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t}\n\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tedata_t *edata = &alloc[i];\n\t\tedata_expect(edata, i, 1);\n\t}\n\n\t/* The pageslab is now full, so the psset has no free pages to offer. */\n\tedata_t extra_alloc;\n\tedata_init_test(&extra_alloc);\n\terr = test_psset_alloc_reuse(&psset, &extra_alloc, PAGE);\n\texpect_true(err, \"Alloc succeeded even though psset should be empty\");\n}\nTEST_END\n\nTEST_BEGIN(test_reuse) {\n\tbool err;\n\thpdata_t *ps;\n\n\thpdata_t pageslab;\n\thpdata_init(&pageslab, PAGESLAB_ADDR, PAGESLAB_AGE);\n\n\tedata_t alloc[HUGEPAGE_PAGES];\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\tedata_init_test(&alloc[0]);\n\ttest_psset_alloc_new(&psset, &pageslab, &alloc[0], PAGE);\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES; i++) {\n\t\tedata_init_test(&alloc[i]);\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t}\n\n\t/* Free odd indices. */\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tif (i % 2 == 0) {\n\t\t\tcontinue;\n\t\t}\n\t\tps = test_psset_dalloc(&psset, &alloc[i]);\n\t\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\t}\n\t/* Realloc into them. 
*/\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tif (i % 2 == 0) {\n\t\t\tcontinue;\n\t\t}\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t\tedata_expect(&alloc[i], i, 1);\n\t}\n\t/* Now, free the pages at indices 0 or 1 mod 4. */\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tif (i % 4 > 1) {\n\t\t\tcontinue;\n\t\t}\n\t\tps = test_psset_dalloc(&psset, &alloc[i]);\n\t\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\t}\n\t/* And realloc 2-page allocations into them. */\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tif (i % 4 != 0) {\n\t\t\tcontinue;\n\t\t}\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], 2 * PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t\tedata_expect(&alloc[i], i, 2);\n\t}\n\t/* Free all the 2-page allocations. */\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES; i++) {\n\t\tif (i % 4 != 0) {\n\t\t\tcontinue;\n\t\t}\n\t\tps = test_psset_dalloc(&psset, &alloc[i]);\n\t\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\t}\n\t/*\n\t * Free up a 1-page hole next to a 2-page hole, but somewhere in the\n\t * middle of the pageslab.  Index 11 should be right before such a hole\n\t * (since 12 % 4 == 0).\n\t */\n\tsize_t index_of_3 = 11;\n\tps = test_psset_dalloc(&psset, &alloc[index_of_3]);\n\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\terr = test_psset_alloc_reuse(&psset, &alloc[index_of_3], 3 * PAGE);\n\texpect_false(err, \"Should have been able to find alloc.\");\n\tedata_expect(&alloc[index_of_3], index_of_3, 3);\n\n\t/*\n\t * Free up a 4-page hole at the end.  
Recall that the pages at offsets 0\n\t * and 1 mod 4 were freed above, so we just have to free the last\n\t * allocations.\n\t */\n\tps = test_psset_dalloc(&psset, &alloc[HUGEPAGE_PAGES - 1]);\n\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\tps = test_psset_dalloc(&psset, &alloc[HUGEPAGE_PAGES - 2]);\n\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\n\t/* Make sure we can satisfy an allocation at the very end of a slab. */\n\tsize_t index_of_4 = HUGEPAGE_PAGES - 4;\n\terr = test_psset_alloc_reuse(&psset, &alloc[index_of_4], 4 * PAGE);\n\texpect_false(err, \"Should have been able to find alloc.\");\n\tedata_expect(&alloc[index_of_4], index_of_4, 4);\n}\nTEST_END\n\nTEST_BEGIN(test_evict) {\n\tbool err;\n\thpdata_t *ps;\n\n\thpdata_t pageslab;\n\thpdata_init(&pageslab, PAGESLAB_ADDR, PAGESLAB_AGE);\n\n\tedata_t alloc[HUGEPAGE_PAGES];\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\t/* Alloc the whole slab. */\n\tedata_init_test(&alloc[0]);\n\ttest_psset_alloc_new(&psset, &pageslab, &alloc[0], PAGE);\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES; i++) {\n\t\tedata_init_test(&alloc[i]);\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Unexpected allocation failure\");\n\t}\n\n\t/* Dealloc the whole slab, going forwards. 
*/\n\tfor (size_t i = 0; i < HUGEPAGE_PAGES - 1; i++) {\n\t\tps = test_psset_dalloc(&psset, &alloc[i]);\n\t\texpect_ptr_null(ps, \"Nonempty pageslab evicted\");\n\t}\n\tps = test_psset_dalloc(&psset, &alloc[HUGEPAGE_PAGES - 1]);\n\texpect_ptr_eq(&pageslab, ps, \"Empty pageslab not evicted.\");\n\n\terr = test_psset_alloc_reuse(&psset, &alloc[0], PAGE);\n\texpect_true(err, \"psset should be empty.\");\n}\nTEST_END\n\nTEST_BEGIN(test_multi_pageslab) {\n\tbool err;\n\thpdata_t *ps;\n\n\thpdata_t pageslab[2];\n\thpdata_init(&pageslab[0], PAGESLAB_ADDR, PAGESLAB_AGE);\n\thpdata_init(&pageslab[1],\n\t    (void *)((uintptr_t)PAGESLAB_ADDR + HUGEPAGE),\n\t    PAGESLAB_AGE + 1);\n\n\tedata_t alloc[2][HUGEPAGE_PAGES];\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\t/* Insert both slabs. */\n\tedata_init_test(&alloc[0][0]);\n\ttest_psset_alloc_new(&psset, &pageslab[0], &alloc[0][0], PAGE);\n\tedata_init_test(&alloc[1][0]);\n\ttest_psset_alloc_new(&psset, &pageslab[1], &alloc[1][0], PAGE);\n\n\t/* Fill them both up; make sure we do so in first-fit order. */\n\tfor (size_t i = 0; i < 2; i++) {\n\t\tfor (size_t j = 1; j < HUGEPAGE_PAGES; j++) {\n\t\t\tedata_init_test(&alloc[i][j]);\n\t\t\terr = test_psset_alloc_reuse(&psset, &alloc[i][j], PAGE);\n\t\t\texpect_false(err,\n\t\t\t    \"Nonempty psset failed page allocation.\");\n\t\t\tassert_ptr_eq(&pageslab[i], edata_ps_get(&alloc[i][j]),\n\t\t\t    \"Didn't pick pageslabs in first-fit\");\n\t\t}\n\t}\n\n\t/*\n\t * Free up a 2-page hole in the earlier slab, and a 1-page one in the\n\t * later one.  
We should still pick the later one.\n\t */\n\tps = test_psset_dalloc(&psset, &alloc[0][0]);\n\texpect_ptr_null(ps, \"Unexpected eviction\");\n\tps = test_psset_dalloc(&psset, &alloc[0][1]);\n\texpect_ptr_null(ps, \"Unexpected eviction\");\n\tps = test_psset_dalloc(&psset, &alloc[1][0]);\n\texpect_ptr_null(ps, \"Unexpected eviction\");\n\terr = test_psset_alloc_reuse(&psset, &alloc[0][0], PAGE);\n\texpect_ptr_eq(&pageslab[1], edata_ps_get(&alloc[0][0]),\n\t    \"Should have picked the fuller pageslab\");\n\n\t/*\n\t * Now both slabs have 1-page holes. Free up a second one in the later\n\t * slab.\n\t */\n\tps = test_psset_dalloc(&psset, &alloc[1][1]);\n\texpect_ptr_null(ps, \"Unexpected eviction\");\n\n\t/*\n\t * We should be able to allocate a 2-page object, even though an earlier\n\t * size class is nonempty.\n\t */\n\terr = test_psset_alloc_reuse(&psset, &alloc[1][0], 2 * PAGE);\n\texpect_false(err, \"Allocation should have succeeded\");\n}\nTEST_END\n\nstatic void\nstats_expect_empty(psset_bin_stats_t *stats) {\n\tassert_zu_eq(0, stats->npageslabs,\n\t    \"Supposedly empty bin had positive npageslabs\");\n\texpect_zu_eq(0, stats->nactive,\n\t    \"Supposedly empty bin had positive nactive\");\n}\n\nstatic void\nstats_expect(psset_t *psset, size_t nactive) {\n\tif (nactive == HUGEPAGE_PAGES) {\n\t\texpect_zu_eq(1, psset->stats.full_slabs[0].npageslabs,\n\t\t    \"Expected a full slab\");\n\t\texpect_zu_eq(HUGEPAGE_PAGES,\n\t\t    psset->stats.full_slabs[0].nactive,\n\t\t    \"Should have exactly filled the bin\");\n\t} else {\n\t\tstats_expect_empty(&psset->stats.full_slabs[0]);\n\t}\n\tsize_t ninactive = HUGEPAGE_PAGES - nactive;\n\tpszind_t nonempty_pind = PSSET_NPSIZES;\n\tif (ninactive != 0 && ninactive < HUGEPAGE_PAGES) {\n\t\tnonempty_pind = sz_psz2ind(sz_psz_quantize_floor(\n\t\t    ninactive << LG_PAGE));\n\t}\n\tfor (pszind_t i = 0; i < PSSET_NPSIZES; i++) {\n\t\tif (i == nonempty_pind) {\n\t\t\tassert_zu_eq(1,\n\t\t\t    
psset->stats.nonfull_slabs[i][0].npageslabs,\n\t\t\t    \"Should have found a slab\");\n\t\t\texpect_zu_eq(nactive,\n\t\t\t    psset->stats.nonfull_slabs[i][0].nactive,\n\t\t\t    \"Mismatch in active pages\");\n\t\t} else {\n\t\t\tstats_expect_empty(&psset->stats.nonfull_slabs[i][0]);\n\t\t}\n\t}\n\texpect_zu_eq(nactive, psset_nactive(psset), \"\");\n}\n\nTEST_BEGIN(test_stats) {\n\tbool err;\n\n\thpdata_t pageslab;\n\thpdata_init(&pageslab, PAGESLAB_ADDR, PAGESLAB_AGE);\n\n\tedata_t alloc[HUGEPAGE_PAGES];\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\tstats_expect(&psset, 0);\n\n\tedata_init_test(&alloc[0]);\n\ttest_psset_alloc_new(&psset, &pageslab, &alloc[0], PAGE);\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES; i++) {\n\t\tstats_expect(&psset, i);\n\t\tedata_init_test(&alloc[i]);\n\t\terr = test_psset_alloc_reuse(&psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t}\n\tstats_expect(&psset, HUGEPAGE_PAGES);\n\thpdata_t *ps;\n\tfor (ssize_t i = HUGEPAGE_PAGES - 1; i >= 0; i--) {\n\t\tps = test_psset_dalloc(&psset, &alloc[i]);\n\t\texpect_true((ps == NULL) == (i != 0),\n\t\t    \"test_psset_dalloc should only evict a slab on the last \"\n\t\t    \"free\");\n\t\tstats_expect(&psset, i);\n\t}\n\n\ttest_psset_alloc_new(&psset, &pageslab, &alloc[0], PAGE);\n\tstats_expect(&psset, 1);\n\tpsset_update_begin(&psset, &pageslab);\n\tstats_expect(&psset, 0);\n\tpsset_update_end(&psset, &pageslab);\n\tstats_expect(&psset, 1);\n}\nTEST_END\n\n/*\n * Fills in and inserts two pageslabs, with the first better than the second,\n * and each fully allocated (into the allocations in allocs and worse_allocs,\n * each of which should be HUGEPAGE_PAGES long), except for a single free page\n * at the end.\n *\n * (There's nothing magic about these numbers; it's just useful to share the\n * setup between the oldest fit and the insert/remove test).\n */\nstatic void\ninit_test_pageslabs(psset_t *psset, hpdata_t *pageslab,\n    hpdata_t 
*worse_pageslab, edata_t *alloc, edata_t *worse_alloc) {\n\tbool err;\n\n\thpdata_init(pageslab, (void *)(10 * HUGEPAGE), PAGESLAB_AGE);\n\t/*\n\t * This pageslab would be better from an address-first-fit POV, but\n\t * worse from an age POV.\n\t */\n\thpdata_init(worse_pageslab, (void *)(9 * HUGEPAGE), PAGESLAB_AGE + 1);\n\n\tpsset_init(psset);\n\n\tedata_init_test(&alloc[0]);\n\ttest_psset_alloc_new(psset, pageslab, &alloc[0], PAGE);\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES; i++) {\n\t\tedata_init_test(&alloc[i]);\n\t\terr = test_psset_alloc_reuse(psset, &alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t\texpect_ptr_eq(pageslab, edata_ps_get(&alloc[i]),\n\t\t    \"Allocated from the wrong pageslab\");\n\t}\n\n\tedata_init_test(&worse_alloc[0]);\n\ttest_psset_alloc_new(psset, worse_pageslab, &worse_alloc[0], PAGE);\n\texpect_ptr_eq(worse_pageslab, edata_ps_get(&worse_alloc[0]),\n\t    \"Allocated from the wrong pageslab\");\n\t/*\n\t * Make the two pssets otherwise indistinguishable; all full except for\n\t * a single page.\n\t */\n\tfor (size_t i = 1; i < HUGEPAGE_PAGES - 1; i++) {\n\t\tedata_init_test(&worse_alloc[i]);\n\t\terr = test_psset_alloc_reuse(psset, &worse_alloc[i], PAGE);\n\t\texpect_false(err, \"Nonempty psset failed page allocation.\");\n\t\texpect_ptr_eq(worse_pageslab, edata_ps_get(&worse_alloc[i]),\n\t\t    \"Allocated from the wrong pageslab\");\n\t}\n\n\t/* Deallocate the last page from the older pageslab. */\n\thpdata_t *evicted = test_psset_dalloc(psset,\n\t    &alloc[HUGEPAGE_PAGES - 1]);\n\texpect_ptr_null(evicted, \"Unexpected eviction\");\n}\n\nTEST_BEGIN(test_oldest_fit) {\n\tbool err;\n\tedata_t alloc[HUGEPAGE_PAGES];\n\tedata_t worse_alloc[HUGEPAGE_PAGES];\n\n\thpdata_t pageslab;\n\thpdata_t worse_pageslab;\n\n\tpsset_t psset;\n\n\tinit_test_pageslabs(&psset, &pageslab, &worse_pageslab, alloc,\n\t    worse_alloc);\n\n\t/* The edata should come from the better pageslab. 
*/\n\tedata_t test_edata;\n\tedata_init_test(&test_edata);\n\terr = test_psset_alloc_reuse(&psset, &test_edata, PAGE);\n\texpect_false(err, \"Nonempty psset failed page allocation\");\n\texpect_ptr_eq(&pageslab, edata_ps_get(&test_edata),\n\t    \"Allocated from the wrong pageslab\");\n}\nTEST_END\n\nTEST_BEGIN(test_insert_remove) {\n\tbool err;\n\thpdata_t *ps;\n\tedata_t alloc[HUGEPAGE_PAGES];\n\tedata_t worse_alloc[HUGEPAGE_PAGES];\n\n\thpdata_t pageslab;\n\thpdata_t worse_pageslab;\n\n\tpsset_t psset;\n\n\tinit_test_pageslabs(&psset, &pageslab, &worse_pageslab, alloc,\n\t    worse_alloc);\n\n\t/* Remove better; should still be able to alloc from worse. */\n\tpsset_update_begin(&psset, &pageslab);\n\terr = test_psset_alloc_reuse(&psset, &worse_alloc[HUGEPAGE_PAGES - 1],\n\t    PAGE);\n\texpect_false(err, \"Removal should still leave an empty page\");\n\texpect_ptr_eq(&worse_pageslab,\n\t    edata_ps_get(&worse_alloc[HUGEPAGE_PAGES - 1]),\n\t    \"Allocated out of wrong ps\");\n\n\t/*\n\t * After deallocating the previous alloc and reinserting better, it\n\t * should be preferred for future allocations.\n\t */\n\tps = test_psset_dalloc(&psset, &worse_alloc[HUGEPAGE_PAGES - 1]);\n\texpect_ptr_null(ps, \"Incorrect eviction of nonempty pageslab\");\n\tpsset_update_end(&psset, &pageslab);\n\terr = test_psset_alloc_reuse(&psset, &alloc[HUGEPAGE_PAGES - 1], PAGE);\n\texpect_false(err, \"psset should be nonempty\");\n\texpect_ptr_eq(&pageslab, edata_ps_get(&alloc[HUGEPAGE_PAGES - 1]),\n\t    \"Removal/reinsertion shouldn't change ordering\");\n\t/*\n\t * After deallocating and removing both, allocations should fail.\n\t */\n\tps = test_psset_dalloc(&psset, &alloc[HUGEPAGE_PAGES - 1]);\n\texpect_ptr_null(ps, \"Incorrect eviction\");\n\tpsset_update_begin(&psset, &pageslab);\n\tpsset_update_begin(&psset, &worse_pageslab);\n\terr = test_psset_alloc_reuse(&psset, &alloc[HUGEPAGE_PAGES - 1], PAGE);\n\texpect_true(err, \"psset should be empty, but an alloc 
succeeded\");\n}\nTEST_END\n\nTEST_BEGIN(test_purge_prefers_nonhuge) {\n\t/*\n\t * All else being equal, we should prefer purging non-huge pages over\n\t * huge ones for non-empty extents.\n\t */\n\n\t/* Nothing magic about this constant. */\n\tenum {\n\t\tNHP = 23,\n\t};\n\thpdata_t *hpdata;\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\thpdata_t hpdata_huge[NHP];\n\tuintptr_t huge_begin = (uintptr_t)&hpdata_huge[0];\n\tuintptr_t huge_end = (uintptr_t)&hpdata_huge[NHP];\n\thpdata_t hpdata_nonhuge[NHP];\n\tuintptr_t nonhuge_begin = (uintptr_t)&hpdata_nonhuge[0];\n\tuintptr_t nonhuge_end = (uintptr_t)&hpdata_nonhuge[NHP];\n\n\tfor (size_t i = 0; i < NHP; i++) {\n\t\thpdata_init(&hpdata_huge[i], (void *)((10 + i) * HUGEPAGE),\n\t\t    123 + i);\n\t\tpsset_insert(&psset, &hpdata_huge[i]);\n\n\t\thpdata_init(&hpdata_nonhuge[i],\n\t\t    (void *)((10 + NHP + i) * HUGEPAGE),\n\t\t    456 + i);\n\t\tpsset_insert(&psset, &hpdata_nonhuge[i]);\n\t}\n\tfor (int i = 0; i < 2 * NHP; i++) {\n\t\thpdata = psset_pick_alloc(&psset, HUGEPAGE * 3 / 4);\n\t\tpsset_update_begin(&psset, hpdata);\n\t\tvoid *ptr;\n\t\tptr = hpdata_reserve_alloc(hpdata, HUGEPAGE * 3 / 4);\n\t\t/* Ignore the first alloc, which will stick around. */\n\t\t(void)ptr;\n\t\t/*\n\t\t * The second alloc is to dirty the pages; free it immediately\n\t\t * after allocating.\n\t\t */\n\t\tptr = hpdata_reserve_alloc(hpdata, HUGEPAGE / 4);\n\t\thpdata_unreserve(hpdata, ptr, HUGEPAGE / 4);\n\n\t\tif (huge_begin <= (uintptr_t)hpdata\n\t\t    && (uintptr_t)hpdata < huge_end) {\n\t\t\thpdata_hugify(hpdata);\n\t\t}\n\n\t\thpdata_purge_allowed_set(hpdata, true);\n\t\tpsset_update_end(&psset, hpdata);\n\t}\n\n\t/*\n\t * We've got a bunch of 1/4-dirty hpdatas.  
It should give us all the\n\t * non-huge ones to purge, then all the huge ones, then refuse to purge\n\t * further.\n\t */\n\tfor (int i = 0; i < NHP; i++) {\n\t\thpdata = psset_pick_purge(&psset);\n\t\tassert_true(nonhuge_begin <= (uintptr_t)hpdata\n\t\t    && (uintptr_t)hpdata < nonhuge_end, \"\");\n\t\tpsset_update_begin(&psset, hpdata);\n\t\ttest_psset_fake_purge(hpdata);\n\t\thpdata_purge_allowed_set(hpdata, false);\n\t\tpsset_update_end(&psset, hpdata);\n\t}\n\tfor (int i = 0; i < NHP; i++) {\n\t\thpdata = psset_pick_purge(&psset);\n\t\texpect_true(huge_begin <= (uintptr_t)hpdata\n\t\t    && (uintptr_t)hpdata < huge_end, \"\");\n\t\tpsset_update_begin(&psset, hpdata);\n\t\thpdata_dehugify(hpdata);\n\t\ttest_psset_fake_purge(hpdata);\n\t\thpdata_purge_allowed_set(hpdata, false);\n\t\tpsset_update_end(&psset, hpdata);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_purge_prefers_empty) {\n\tvoid *ptr;\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\thpdata_t hpdata_empty;\n\thpdata_t hpdata_nonempty;\n\thpdata_init(&hpdata_empty, (void *)(10 * HUGEPAGE), 123);\n\tpsset_insert(&psset, &hpdata_empty);\n\thpdata_init(&hpdata_nonempty, (void *)(11 * HUGEPAGE), 456);\n\tpsset_insert(&psset, &hpdata_nonempty);\n\n\tpsset_update_begin(&psset, &hpdata_empty);\n\tptr = hpdata_reserve_alloc(&hpdata_empty, PAGE);\n\texpect_ptr_eq(hpdata_addr_get(&hpdata_empty), ptr, \"\");\n\thpdata_unreserve(&hpdata_empty, ptr, PAGE);\n\thpdata_purge_allowed_set(&hpdata_empty, true);\n\tpsset_update_end(&psset, &hpdata_empty);\n\n\tpsset_update_begin(&psset, &hpdata_nonempty);\n\tptr = hpdata_reserve_alloc(&hpdata_nonempty, 10 * PAGE);\n\texpect_ptr_eq(hpdata_addr_get(&hpdata_nonempty), ptr, \"\");\n\thpdata_unreserve(&hpdata_nonempty, ptr, 9 * PAGE);\n\thpdata_purge_allowed_set(&hpdata_nonempty, true);\n\tpsset_update_end(&psset, &hpdata_nonempty);\n\n\t/*\n\t * The nonempty slab has 9 dirty pages, while the empty one has only 1.\n\t * We should still pick the empty one for purging.\n\t 
*/\n\thpdata_t *to_purge = psset_pick_purge(&psset);\n\texpect_ptr_eq(&hpdata_empty, to_purge, \"\");\n}\nTEST_END\n\nTEST_BEGIN(test_purge_prefers_empty_huge) {\n\tvoid *ptr;\n\n\tpsset_t psset;\n\tpsset_init(&psset);\n\n\tenum { NHP = 10 };\n\n\thpdata_t hpdata_huge[NHP];\n\thpdata_t hpdata_nonhuge[NHP];\n\n\tuintptr_t cur_addr = 100 * HUGEPAGE;\n\tuint64_t cur_age = 123;\n\tfor (int i = 0; i < NHP; i++) {\n\t\thpdata_init(&hpdata_huge[i], (void *)cur_addr, cur_age);\n\t\tcur_addr += HUGEPAGE;\n\t\tcur_age++;\n\t\tpsset_insert(&psset, &hpdata_huge[i]);\n\n\t\thpdata_init(&hpdata_nonhuge[i], (void *)cur_addr, cur_age);\n\t\tcur_addr += HUGEPAGE;\n\t\tcur_age++;\n\t\tpsset_insert(&psset, &hpdata_nonhuge[i]);\n\n\t\t/*\n\t\t * Make hpdata_huge[i] fully dirty, empty, purgeable, and\n\t\t * huge.\n\t\t */\n\t\tpsset_update_begin(&psset, &hpdata_huge[i]);\n\t\tptr = hpdata_reserve_alloc(&hpdata_huge[i], HUGEPAGE);\n\t\texpect_ptr_eq(hpdata_addr_get(&hpdata_huge[i]), ptr, \"\");\n\t\thpdata_hugify(&hpdata_huge[i]);\n\t\thpdata_unreserve(&hpdata_huge[i], ptr, HUGEPAGE);\n\t\thpdata_purge_allowed_set(&hpdata_huge[i], true);\n\t\tpsset_update_end(&psset, &hpdata_huge[i]);\n\n\t\t/*\n\t\t * Make hpdata_nonhuge[i] fully dirty, empty, purgeable, and\n\t\t * non-huge.\n\t\t */\n\t\tpsset_update_begin(&psset, &hpdata_nonhuge[i]);\n\t\tptr = hpdata_reserve_alloc(&hpdata_nonhuge[i], HUGEPAGE);\n\t\texpect_ptr_eq(hpdata_addr_get(&hpdata_nonhuge[i]), ptr, \"\");\n\t\thpdata_unreserve(&hpdata_nonhuge[i], ptr, HUGEPAGE);\n\t\thpdata_purge_allowed_set(&hpdata_nonhuge[i], true);\n\t\tpsset_update_end(&psset, &hpdata_nonhuge[i]);\n\t}\n\n\t/*\n\t * We have a bunch of empty slabs, half huge, half nonhuge, inserted in\n\t * alternating order.  
We should pop all the huge ones before popping\n\t * any of the non-huge ones for purging.\n\t */\n\tfor (int i = 0; i < NHP; i++) {\n\t\thpdata_t *to_purge = psset_pick_purge(&psset);\n\t\texpect_ptr_eq(&hpdata_huge[i], to_purge, \"\");\n\t\tpsset_update_begin(&psset, to_purge);\n\t\thpdata_purge_allowed_set(to_purge, false);\n\t\tpsset_update_end(&psset, to_purge);\n\t}\n\tfor (int i = 0; i < NHP; i++) {\n\t\thpdata_t *to_purge = psset_pick_purge(&psset);\n\t\texpect_ptr_eq(&hpdata_nonhuge[i], to_purge, \"\");\n\t\tpsset_update_begin(&psset, to_purge);\n\t\thpdata_purge_allowed_set(to_purge, false);\n\t\tpsset_update_end(&psset, to_purge);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_empty,\n\t    test_fill,\n\t    test_reuse,\n\t    test_evict,\n\t    test_multi_pageslab,\n\t    test_stats,\n\t    test_oldest_fit,\n\t    test_insert_remove,\n\t    test_purge_prefers_nonhuge,\n\t    test_purge_prefers_empty,\n\t    test_purge_prefers_empty_huge);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/ql.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/ql.h\"\n\n/* Number of ring entries, in [2..26]. */\n#define NENTRIES 9\n\ntypedef struct list_s list_t;\ntypedef ql_head(list_t) list_head_t;\n\nstruct list_s {\n\tql_elm(list_t) link;\n\tchar id;\n};\n\nstatic void\ntest_empty_list(list_head_t *head) {\n\tlist_t *t;\n\tunsigned i;\n\n\texpect_true(ql_empty(head), \"Unexpected element for empty list\");\n\texpect_ptr_null(ql_first(head), \"Unexpected element for empty list\");\n\texpect_ptr_null(ql_last(head, link),\n\t    \"Unexpected element for empty list\");\n\n\ti = 0;\n\tql_foreach(t, head, link) {\n\t\ti++;\n\t}\n\texpect_u_eq(i, 0, \"Unexpected element for empty list\");\n\n\ti = 0;\n\tql_reverse_foreach(t, head, link) {\n\t\ti++;\n\t}\n\texpect_u_eq(i, 0, \"Unexpected element for empty list\");\n}\n\nTEST_BEGIN(test_ql_empty) {\n\tlist_head_t head;\n\n\tql_new(&head);\n\ttest_empty_list(&head);\n}\nTEST_END\n\nstatic void\ninit_entries(list_t *entries, unsigned nentries) {\n\tunsigned i;\n\n\tfor (i = 0; i < nentries; i++) {\n\t\tentries[i].id = 'a' + i;\n\t\tql_elm_new(&entries[i], link);\n\t}\n}\n\nstatic void\ntest_entries_list(list_head_t *head, list_t *entries, unsigned nentries) {\n\tlist_t *t;\n\tunsigned i;\n\n\texpect_false(ql_empty(head), \"List should not be empty\");\n\texpect_c_eq(ql_first(head)->id, entries[0].id, \"Element id mismatch\");\n\texpect_c_eq(ql_last(head, link)->id, entries[nentries-1].id,\n\t    \"Element id mismatch\");\n\n\ti = 0;\n\tql_foreach(t, head, link) {\n\t\texpect_c_eq(t->id, entries[i].id, \"Element id mismatch\");\n\t\ti++;\n\t}\n\n\ti = 0;\n\tql_reverse_foreach(t, head, link) {\n\t\texpect_c_eq(t->id, entries[nentries-i-1].id,\n\t\t    \"Element id mismatch\");\n\t\ti++;\n\t}\n\n\tfor (i = 0; i < nentries-1; i++) {\n\t\tt = ql_next(head, &entries[i], link);\n\t\texpect_c_eq(t->id, entries[i+1].id, \"Element id mismatch\");\n\t}\n\texpect_ptr_null(ql_next(head, &entries[nentries-1], 
link),\n\t    \"Unexpected element\");\n\n\texpect_ptr_null(ql_prev(head, &entries[0], link), \"Unexpected element\");\n\tfor (i = 1; i < nentries; i++) {\n\t\tt = ql_prev(head, &entries[i], link);\n\t\texpect_c_eq(t->id, entries[i-1].id, \"Element id mismatch\");\n\t}\n}\n\nTEST_BEGIN(test_ql_tail_insert) {\n\tlist_head_t head;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_tail_insert(&head, &entries[i], link);\n\t}\n\n\ttest_entries_list(&head, entries, NENTRIES);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_tail_remove) {\n\tlist_head_t head;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_tail_insert(&head, &entries[i], link);\n\t}\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\ttest_entries_list(&head, entries, NENTRIES-i);\n\t\tql_tail_remove(&head, list_t, link);\n\t}\n\ttest_empty_list(&head);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_head_insert) {\n\tlist_head_t head;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_head_insert(&head, &entries[NENTRIES-i-1], link);\n\t}\n\n\ttest_entries_list(&head, entries, NENTRIES);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_head_remove) {\n\tlist_head_t head;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_head_insert(&head, &entries[NENTRIES-i-1], link);\n\t}\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\ttest_entries_list(&head, &entries[i], NENTRIES-i);\n\t\tql_head_remove(&head, list_t, link);\n\t}\n\ttest_empty_list(&head);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_insert) {\n\tlist_head_t head;\n\tlist_t entries[8];\n\tlist_t *a, *b, *c, *d, *e, *f, *g, 
*h;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\ta = &entries[0];\n\tb = &entries[1];\n\tc = &entries[2];\n\td = &entries[3];\n\te = &entries[4];\n\tf = &entries[5];\n\tg = &entries[6];\n\th = &entries[7];\n\n\t/*\n\t * ql_remove(), ql_before_insert(), and ql_after_insert() are used\n\t * internally by other macros that are already tested, so there's no\n\t * need to test them completely.  However, insertion/deletion from the\n\t * middle of lists is not otherwise tested; do so here.\n\t */\n\tql_tail_insert(&head, f, link);\n\tql_before_insert(&head, f, b, link);\n\tql_before_insert(&head, f, c, link);\n\tql_after_insert(f, h, link);\n\tql_after_insert(f, g, link);\n\tql_before_insert(&head, b, a, link);\n\tql_after_insert(c, d, link);\n\tql_before_insert(&head, f, e, link);\n\n\ttest_entries_list(&head, entries, sizeof(entries)/sizeof(list_t));\n}\nTEST_END\n\nstatic void\ntest_concat_split_entries(list_t *entries, unsigned nentries_a,\n    unsigned nentries_b) {\n\tinit_entries(entries, nentries_a + nentries_b);\n\n\tlist_head_t head_a;\n\tql_new(&head_a);\n\tfor (unsigned i = 0; i < nentries_a; i++) {\n\t\tql_tail_insert(&head_a, &entries[i], link);\n\t}\n\tif (nentries_a == 0) {\n\t\ttest_empty_list(&head_a);\n\t} else {\n\t\ttest_entries_list(&head_a, entries, nentries_a);\n\t}\n\n\tlist_head_t head_b;\n\tql_new(&head_b);\n\tfor (unsigned i = 0; i < nentries_b; i++) {\n\t\tql_tail_insert(&head_b, &entries[nentries_a + i], link);\n\t}\n\tif (nentries_b == 0) {\n\t\ttest_empty_list(&head_b);\n\t} else {\n\t\ttest_entries_list(&head_b, entries + nentries_a, nentries_b);\n\t}\n\n\tql_concat(&head_a, &head_b, link);\n\tif (nentries_a + nentries_b == 0) {\n\t\ttest_empty_list(&head_a);\n\t} else {\n\t\ttest_entries_list(&head_a, entries, nentries_a + nentries_b);\n\t}\n\ttest_empty_list(&head_b);\n\n\tif (nentries_b == 0) {\n\t\treturn;\n\t}\n\n\tlist_head_t head_c;\n\tql_split(&head_a, &entries[nentries_a], &head_c, link);\n\tif 
(nentries_a == 0) {\n\t\ttest_empty_list(&head_a);\n\t} else {\n\t\ttest_entries_list(&head_a, entries, nentries_a);\n\t}\n\ttest_entries_list(&head_c, entries + nentries_a, nentries_b);\n}\n\nTEST_BEGIN(test_ql_concat_split) {\n\tlist_t entries[NENTRIES];\n\n\ttest_concat_split_entries(entries, 0, 0);\n\n\ttest_concat_split_entries(entries, 0, 1);\n\ttest_concat_split_entries(entries, 1, 0);\n\n\ttest_concat_split_entries(entries, 0, NENTRIES);\n\ttest_concat_split_entries(entries, 1, NENTRIES - 1);\n\ttest_concat_split_entries(entries, NENTRIES / 2,\n\t    NENTRIES - NENTRIES / 2);\n\ttest_concat_split_entries(entries, NENTRIES - 1, 1);\n\ttest_concat_split_entries(entries, NENTRIES, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_rotate) {\n\tlist_head_t head;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head);\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_tail_insert(&head, &entries[i], link);\n\t}\n\n\tchar head_id = ql_first(&head)->id;\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tassert_c_eq(ql_first(&head)->id, head_id, \"\");\n\t\tql_rotate(&head, link);\n\t\tassert_c_eq(ql_last(&head, link)->id, head_id, \"\");\n\t\thead_id++;\n\t}\n\ttest_entries_list(&head, entries, NENTRIES);\n}\nTEST_END\n\nTEST_BEGIN(test_ql_move) {\n\tlist_head_t head_dest, head_src;\n\tlist_t entries[NENTRIES];\n\tunsigned i;\n\n\tql_new(&head_src);\n\tql_move(&head_dest, &head_src);\n\ttest_empty_list(&head_src);\n\ttest_empty_list(&head_dest);\n\n\tinit_entries(entries, sizeof(entries)/sizeof(list_t));\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tql_tail_insert(&head_src, &entries[i], link);\n\t}\n\tql_move(&head_dest, &head_src);\n\ttest_empty_list(&head_src);\n\ttest_entries_list(&head_dest, entries, NENTRIES);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_ql_empty,\n\t    test_ql_tail_insert,\n\t    test_ql_tail_remove,\n\t    test_ql_head_insert,\n\t    test_ql_head_remove,\n\t    test_ql_insert,\n\t    
test_ql_concat_split,\n\t    test_ql_rotate,\n\t    test_ql_move);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/qr.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/qr.h\"\n\n/* Number of ring entries, in [2..26]. */\n#define NENTRIES 9\n/* Split index, in [1..NENTRIES). */\n#define SPLIT_INDEX 5\n\ntypedef struct ring_s ring_t;\n\nstruct ring_s {\n\tqr(ring_t) link;\n\tchar id;\n};\n\nstatic void\ninit_entries(ring_t *entries) {\n\tunsigned i;\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tqr_new(&entries[i], link);\n\t\tentries[i].id = 'a' + i;\n\t}\n}\n\nstatic void\ntest_independent_entries(ring_t *entries) {\n\tring_t *t;\n\tunsigned i, j;\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_foreach(t, &entries[i], link) {\n\t\t\tj++;\n\t\t}\n\t\texpect_u_eq(j, 1,\n\t\t    \"Iteration over single-element ring should visit precisely \"\n\t\t    \"one element\");\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_reverse_foreach(t, &entries[i], link) {\n\t\t\tj++;\n\t\t}\n\t\texpect_u_eq(j, 1,\n\t\t    \"Iteration over single-element ring should visit precisely \"\n\t\t    \"one element\");\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_next(&entries[i], link);\n\t\texpect_ptr_eq(t, &entries[i],\n\t\t    \"Next element in single-element ring should be same as \"\n\t\t    \"current element\");\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_prev(&entries[i], link);\n\t\texpect_ptr_eq(t, &entries[i],\n\t\t    \"Previous element in single-element ring should be same as \"\n\t\t    \"current element\");\n\t}\n}\n\nTEST_BEGIN(test_qr_one) {\n\tring_t entries[NENTRIES];\n\n\tinit_entries(entries);\n\ttest_independent_entries(entries);\n}\nTEST_END\n\nstatic void\ntest_entries_ring(ring_t *entries) {\n\tring_t *t;\n\tunsigned i, j;\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_foreach(t, &entries[i], link) {\n\t\t\texpect_c_eq(t->id, entries[(i+j) % NENTRIES].id,\n\t\t\t    \"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_reverse_foreach(t, &entries[i], link) 
{\n\t\t\texpect_c_eq(t->id, entries[(NENTRIES+i-j-1) %\n\t\t\t    NENTRIES].id, \"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_next(&entries[i], link);\n\t\texpect_c_eq(t->id, entries[(i+1) % NENTRIES].id,\n\t\t    \"Element id mismatch\");\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_prev(&entries[i], link);\n\t\texpect_c_eq(t->id, entries[(NENTRIES+i-1) % NENTRIES].id,\n\t\t    \"Element id mismatch\");\n\t}\n}\n\nTEST_BEGIN(test_qr_after_insert) {\n\tring_t entries[NENTRIES];\n\tunsigned i;\n\n\tinit_entries(entries);\n\tfor (i = 1; i < NENTRIES; i++) {\n\t\tqr_after_insert(&entries[i - 1], &entries[i], link);\n\t}\n\ttest_entries_ring(entries);\n}\nTEST_END\n\nTEST_BEGIN(test_qr_remove) {\n\tring_t entries[NENTRIES];\n\tring_t *t;\n\tunsigned i, j;\n\n\tinit_entries(entries);\n\tfor (i = 1; i < NENTRIES; i++) {\n\t\tqr_after_insert(&entries[i - 1], &entries[i], link);\n\t}\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_foreach(t, &entries[i], link) {\n\t\t\texpect_c_eq(t->id, entries[i+j].id,\n\t\t\t    \"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t\tj = 0;\n\t\tqr_reverse_foreach(t, &entries[i], link) {\n\t\t\texpect_c_eq(t->id, entries[NENTRIES - 1 - j].id,\n\t\t\t\"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t\tqr_remove(&entries[i], link);\n\t}\n\ttest_independent_entries(entries);\n}\nTEST_END\n\nTEST_BEGIN(test_qr_before_insert) {\n\tring_t entries[NENTRIES];\n\tring_t *t;\n\tunsigned i, j;\n\n\tinit_entries(entries);\n\tfor (i = 1; i < NENTRIES; i++) {\n\t\tqr_before_insert(&entries[i - 1], &entries[i], link);\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_foreach(t, &entries[i], link) {\n\t\t\texpect_c_eq(t->id, entries[(NENTRIES+i-j) %\n\t\t\t    NENTRIES].id, \"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_reverse_foreach(t, &entries[i], link) {\n\t\t\texpect_c_eq(t->id, entries[(i+j+1) % 
NENTRIES].id,\n\t\t\t    \"Element id mismatch\");\n\t\t\tj++;\n\t\t}\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_next(&entries[i], link);\n\t\texpect_c_eq(t->id, entries[(NENTRIES+i-1) % NENTRIES].id,\n\t\t    \"Element id mismatch\");\n\t}\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tt = qr_prev(&entries[i], link);\n\t\texpect_c_eq(t->id, entries[(i+1) % NENTRIES].id,\n\t\t    \"Element id mismatch\");\n\t}\n}\nTEST_END\n\nstatic void\ntest_split_entries(ring_t *entries) {\n\tring_t *t;\n\tunsigned i, j;\n\n\tfor (i = 0; i < NENTRIES; i++) {\n\t\tj = 0;\n\t\tqr_foreach(t, &entries[i], link) {\n\t\t\tif (i < SPLIT_INDEX) {\n\t\t\t\texpect_c_eq(t->id,\n\t\t\t\t    entries[(i+j) % SPLIT_INDEX].id,\n\t\t\t\t    \"Element id mismatch\");\n\t\t\t} else {\n\t\t\t\texpect_c_eq(t->id, entries[(i+j-SPLIT_INDEX) %\n\t\t\t\t    (NENTRIES-SPLIT_INDEX) + SPLIT_INDEX].id,\n\t\t\t\t    \"Element id mismatch\");\n\t\t\t}\n\t\t\tj++;\n\t\t}\n\t}\n}\n\nTEST_BEGIN(test_qr_meld_split) {\n\tring_t entries[NENTRIES];\n\tunsigned i;\n\n\tinit_entries(entries);\n\tfor (i = 1; i < NENTRIES; i++) {\n\t\tqr_after_insert(&entries[i - 1], &entries[i], link);\n\t}\n\n\tqr_split(&entries[0], &entries[SPLIT_INDEX], link);\n\ttest_split_entries(entries);\n\n\tqr_meld(&entries[0], &entries[SPLIT_INDEX], link);\n\ttest_entries_ring(entries);\n\n\tqr_meld(&entries[0], &entries[SPLIT_INDEX], link);\n\ttest_split_entries(entries);\n\n\tqr_split(&entries[0], &entries[SPLIT_INDEX], link);\n\ttest_entries_ring(entries);\n\n\tqr_split(&entries[0], &entries[0], link);\n\ttest_entries_ring(entries);\n\n\tqr_meld(&entries[0], &entries[0], link);\n\ttest_entries_ring(entries);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_qr_one,\n\t    test_qr_after_insert,\n\t    test_qr_remove,\n\t    test_qr_before_insert,\n\t    test_qr_meld_split);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/rb.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include <stdlib.h>\n\n#include \"jemalloc/internal/rb.h\"\n\n#define rbtn_black_height(a_type, a_field, a_rbt, r_height) do {\t\\\n\ta_type *rbp_bh_t;\t\t\t\t\t\t\\\n\tfor (rbp_bh_t = (a_rbt)->rbt_root, (r_height) = 0; rbp_bh_t !=\t\\\n\t    NULL; rbp_bh_t = rbtn_left_get(a_type, a_field,\t\t\\\n\t    rbp_bh_t)) {\t\t\t\t\t\t\\\n\t\tif (!rbtn_red_get(a_type, a_field, rbp_bh_t)) {\t\t\\\n\t\t(r_height)++;\t\t\t\t\t\t\\\n\t\t}\t\t\t\t\t\t\t\\\n\t}\t\t\t\t\t\t\t\t\\\n} while (0)\n\nstatic bool summarize_always_returns_true = false;\n\ntypedef struct node_s node_t;\nstruct node_s {\n#define NODE_MAGIC 0x9823af7e\n\tuint32_t magic;\n\trb_node(node_t) link;\n\t/* Order used by nodes. */\n\tuint64_t key;\n\t/*\n\t * Our made-up summary property is \"specialness\", with summarization\n\t * taking the max.\n\t */\n\tuint64_t specialness;\n\n\t/*\n\t * Used by some of the test randomization to avoid double-removing\n\t * nodes.\n\t */\n\tbool mid_remove;\n\n\t/*\n\t * To test searching functionality, we want to temporarily weaken the\n\t * ordering to allow non-equal nodes that nevertheless compare equal.\n\t */\n\tbool allow_duplicates;\n\n\t/*\n\t * In check_consistency, it's handy to know a node's rank in the tree;\n\t * this tracks it (but only there; not all tests use this).\n\t */\n\tint rank;\n\tint filtered_rank;\n\n\t/*\n\t * Replicate the internal structure of the tree, to make sure the\n\t * implementation doesn't miss any updates.\n\t */\n\tconst node_t *summary_lchild;\n\tconst node_t *summary_rchild;\n\tuint64_t summary_max_specialness;\n};\n\nstatic int\nnode_cmp(const node_t *a, const node_t *b) {\n\tint ret;\n\n\texpect_u32_eq(a->magic, NODE_MAGIC, \"Bad magic\");\n\texpect_u32_eq(b->magic, NODE_MAGIC, \"Bad magic\");\n\n\tret = (a->key > b->key) - (a->key < b->key);\n\tif (ret == 0 && !a->allow_duplicates) {\n\t\t/*\n\t\t * Duplicates are not allowed in the tree, so force an\n\t\t * arbitrary ordering 
for non-identical items with equal keys,\n\t\t * unless the user is searching and wants to allow the\n\t\t * duplicate.\n\t\t */\n\t\tret = (((uintptr_t)a) > ((uintptr_t)b))\n\t\t    - (((uintptr_t)a) < ((uintptr_t)b));\n\t}\n\treturn ret;\n}\n\nstatic uint64_t\nnode_subtree_specialness(node_t *n, const node_t *lchild,\n    const node_t *rchild) {\n\tuint64_t subtree_specialness = n->specialness;\n\tif (lchild != NULL\n\t    && lchild->summary_max_specialness > subtree_specialness) {\n\t\tsubtree_specialness = lchild->summary_max_specialness;\n\t}\n\tif (rchild != NULL\n\t    && rchild->summary_max_specialness > subtree_specialness) {\n\t\tsubtree_specialness = rchild->summary_max_specialness;\n\t}\n\treturn subtree_specialness;\n}\n\nstatic bool\nnode_summarize(node_t *a, const node_t *lchild, const node_t *rchild) {\n\tuint64_t new_summary_max_specialness = node_subtree_specialness(\n\t    a, lchild, rchild);\n\tbool changed = (a->summary_lchild != lchild)\n\t    || (a->summary_rchild != rchild)\n\t    || (new_summary_max_specialness != a->summary_max_specialness);\n\ta->summary_max_specialness = new_summary_max_specialness;\n\ta->summary_lchild = lchild;\n\ta->summary_rchild = rchild;\n\treturn changed || summarize_always_returns_true;\n}\n\ntypedef rb_tree(node_t) tree_t;\nrb_summarized_proto(static, tree_, tree_t, node_t);\nrb_summarized_gen(static, tree_, tree_t, node_t, link, node_cmp,\n    node_summarize);\n\nstatic bool\nspecialness_filter_node(void *ctx, node_t *node) {\n\tuint64_t specialness = *(uint64_t *)ctx;\n\treturn node->specialness >= specialness;\n}\n\nstatic bool\nspecialness_filter_subtree(void *ctx, node_t *node) {\n\tuint64_t specialness = *(uint64_t *)ctx;\n\treturn node->summary_max_specialness >= specialness;\n}\n\nstatic node_t *\ntree_iterate_cb(tree_t *tree, node_t *node, void *data) {\n\tunsigned *i = (unsigned *)data;\n\tnode_t *search_node;\n\n\texpect_u32_eq(node->magic, NODE_MAGIC, \"Bad magic\");\n\n\t/* Test rb_search(). 
*/\n\tsearch_node = tree_search(tree, node);\n\texpect_ptr_eq(search_node, node,\n\t    \"tree_search() returned unexpected node\");\n\n\t/* Test rb_nsearch(). */\n\tsearch_node = tree_nsearch(tree, node);\n\texpect_ptr_eq(search_node, node,\n\t    \"tree_nsearch() returned unexpected node\");\n\n\t/* Test rb_psearch(). */\n\tsearch_node = tree_psearch(tree, node);\n\texpect_ptr_eq(search_node, node,\n\t    \"tree_psearch() returned unexpected node\");\n\n\t(*i)++;\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_rb_empty) {\n\ttree_t tree;\n\tnode_t key;\n\n\ttree_new(&tree);\n\n\texpect_true(tree_empty(&tree), \"Tree should be empty\");\n\texpect_ptr_null(tree_first(&tree), \"Unexpected node\");\n\texpect_ptr_null(tree_last(&tree), \"Unexpected node\");\n\n\tkey.key = 0;\n\tkey.magic = NODE_MAGIC;\n\texpect_ptr_null(tree_search(&tree, &key), \"Unexpected node\");\n\n\tkey.key = 0;\n\tkey.magic = NODE_MAGIC;\n\texpect_ptr_null(tree_nsearch(&tree, &key), \"Unexpected node\");\n\n\tkey.key = 0;\n\tkey.magic = NODE_MAGIC;\n\texpect_ptr_null(tree_psearch(&tree, &key), \"Unexpected node\");\n\n\tunsigned nodes = 0;\n\ttree_iter_filtered(&tree, NULL, &tree_iterate_cb,\n\t    &nodes, &specialness_filter_node, &specialness_filter_subtree,\n\t    NULL);\n\texpect_u_eq(0, nodes, \"\");\n\n\tnodes = 0;\n\ttree_reverse_iter_filtered(&tree, NULL, &tree_iterate_cb,\n\t    &nodes, &specialness_filter_node, &specialness_filter_subtree,\n\t    NULL);\n\texpect_u_eq(0, nodes, \"\");\n\n\texpect_ptr_null(tree_first_filtered(&tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, NULL), \"\");\n\texpect_ptr_null(tree_last_filtered(&tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, NULL), \"\");\n\n\tkey.key = 0;\n\tkey.magic = NODE_MAGIC;\n\texpect_ptr_null(tree_search_filtered(&tree, &key,\n\t    &specialness_filter_node, &specialness_filter_subtree, NULL), \"\");\n\texpect_ptr_null(tree_nsearch_filtered(&tree, &key,\n\t    &specialness_filter_node, 
&specialness_filter_subtree, NULL), \"\");\n\texpect_ptr_null(tree_psearch_filtered(&tree, &key,\n\t    &specialness_filter_node, &specialness_filter_subtree, NULL), \"\");\n}\nTEST_END\n\nstatic unsigned\ntree_recurse(node_t *node, unsigned black_height, unsigned black_depth) {\n\tunsigned ret = 0;\n\tnode_t *left_node;\n\tnode_t *right_node;\n\n\tif (node == NULL) {\n\t\treturn ret;\n\t}\n\n\tleft_node = rbtn_left_get(node_t, link, node);\n\tright_node = rbtn_right_get(node_t, link, node);\n\n\texpect_ptr_eq(left_node, node->summary_lchild,\n\t    \"summary missed a tree update\");\n\texpect_ptr_eq(right_node, node->summary_rchild,\n\t    \"summary missed a tree update\");\n\n\tuint64_t expected_subtree_specialness = node_subtree_specialness(node,\n\t    left_node, right_node);\n\texpect_u64_eq(expected_subtree_specialness,\n\t    node->summary_max_specialness, \"Incorrect summary\");\n\n\tif (!rbtn_red_get(node_t, link, node)) {\n\t\tblack_depth++;\n\t}\n\n\t/* Red nodes must be interleaved with black nodes. */\n\tif (rbtn_red_get(node_t, link, node)) {\n\t\tif (left_node != NULL) {\n\t\t\texpect_false(rbtn_red_get(node_t, link, left_node),\n\t\t\t\t\"Node should be black\");\n\t\t}\n\t\tif (right_node != NULL) {\n\t\t\texpect_false(rbtn_red_get(node_t, link, right_node),\n\t\t\t    \"Node should be black\");\n\t\t}\n\t}\n\n\t/* Self. */\n\texpect_u32_eq(node->magic, NODE_MAGIC, \"Bad magic\");\n\n\t/* Left subtree. */\n\tif (left_node != NULL) {\n\t\tret += tree_recurse(left_node, black_height, black_depth);\n\t} else {\n\t\tret += (black_depth != black_height);\n\t}\n\n\t/* Right subtree. 
*/\n\tif (right_node != NULL) {\n\t\tret += tree_recurse(right_node, black_height, black_depth);\n\t} else {\n\t\tret += (black_depth != black_height);\n\t}\n\n\treturn ret;\n}\n\nstatic unsigned\ntree_iterate(tree_t *tree) {\n\tunsigned i;\n\n\ti = 0;\n\ttree_iter(tree, NULL, tree_iterate_cb, (void *)&i);\n\n\treturn i;\n}\n\nstatic unsigned\ntree_iterate_reverse(tree_t *tree) {\n\tunsigned i;\n\n\ti = 0;\n\ttree_reverse_iter(tree, NULL, tree_iterate_cb, (void *)&i);\n\n\treturn i;\n}\n\nstatic void\nnode_remove(tree_t *tree, node_t *node, unsigned nnodes) {\n\tnode_t *search_node;\n\tunsigned black_height, imbalances;\n\n\ttree_remove(tree, node);\n\n\t/* Test rb_nsearch(). */\n\tsearch_node = tree_nsearch(tree, node);\n\tif (search_node != NULL) {\n\t\texpect_u64_ge(search_node->key, node->key,\n\t\t    \"Key ordering error\");\n\t}\n\n\t/* Test rb_psearch(). */\n\tsearch_node = tree_psearch(tree, node);\n\tif (search_node != NULL) {\n\t\texpect_u64_le(search_node->key, node->key,\n\t\t    \"Key ordering error\");\n\t}\n\n\tnode->magic = 0;\n\n\trbtn_black_height(node_t, link, tree, black_height);\n\timbalances = tree_recurse(tree->rbt_root, black_height, 0);\n\texpect_u_eq(imbalances, 0, \"Tree is unbalanced\");\n\texpect_u_eq(tree_iterate(tree), nnodes-1,\n\t    \"Unexpected node iteration count\");\n\texpect_u_eq(tree_iterate_reverse(tree), nnodes-1,\n\t    \"Unexpected node iteration count\");\n}\n\nstatic node_t *\nremove_iterate_cb(tree_t *tree, node_t *node, void *data) {\n\tunsigned *nnodes = (unsigned *)data;\n\tnode_t *ret = tree_next(tree, node);\n\n\tnode_remove(tree, node, *nnodes);\n\n\treturn ret;\n}\n\nstatic node_t *\nremove_reverse_iterate_cb(tree_t *tree, node_t *node, void *data) {\n\tunsigned *nnodes = (unsigned *)data;\n\tnode_t *ret = tree_prev(tree, node);\n\n\tnode_remove(tree, node, *nnodes);\n\n\treturn ret;\n}\n\nstatic void\ndestroy_cb(node_t *node, void *data) {\n\tunsigned *nnodes = (unsigned *)data;\n\n\texpect_u_gt(*nnodes, 0, 
\"Destruction removed too many nodes\");\n\t(*nnodes)--;\n}\n\nTEST_BEGIN(test_rb_random) {\n\tenum {\n\t\tNNODES = 25,\n\t\tNBAGS = 500,\n\t\tSEED = 42\n\t};\n\tsfmt_t *sfmt;\n\tuint64_t bag[NNODES];\n\ttree_t tree;\n\tnode_t nodes[NNODES];\n\tunsigned i, j, k, black_height, imbalances;\n\n\tsfmt = init_gen_rand(SEED);\n\tfor (i = 0; i < NBAGS; i++) {\n\t\tswitch (i) {\n\t\tcase 0:\n\t\t\t/* Insert in order. */\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = j;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\t/* Insert in reverse order. */\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = NNODES - j - 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tfor (j = 0; j < NNODES; j++) {\n\t\t\t\tbag[j] = gen_rand64_range(sfmt, NNODES);\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * We alternate test behavior with a period of 2 here, and a\n\t\t * period of 5 down below, so there's no cycle in which certain\n\t\t * combinations get omitted.\n\t\t */\n\t\tsummarize_always_returns_true = (i % 2 == 0);\n\n\t\tfor (j = 1; j <= NNODES; j++) {\n\t\t\t/* Initialize tree and nodes. */\n\t\t\ttree_new(&tree);\n\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\tnodes[k].magic = NODE_MAGIC;\n\t\t\t\tnodes[k].key = bag[k];\n\t\t\t\tnodes[k].specialness = gen_rand64_range(sfmt,\n\t\t\t\t    NNODES);\n\t\t\t\tnodes[k].mid_remove = false;\n\t\t\t\tnodes[k].allow_duplicates = false;\n\t\t\t\tnodes[k].summary_lchild = NULL;\n\t\t\t\tnodes[k].summary_rchild = NULL;\n\t\t\t\tnodes[k].summary_max_specialness = 0;\n\t\t\t}\n\n\t\t\t/* Insert nodes. 
*/\n\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\ttree_insert(&tree, &nodes[k]);\n\n\t\t\t\trbtn_black_height(node_t, link, &tree,\n\t\t\t\t    black_height);\n\t\t\t\timbalances = tree_recurse(tree.rbt_root,\n\t\t\t\t    black_height, 0);\n\t\t\t\texpect_u_eq(imbalances, 0,\n\t\t\t\t    \"Tree is unbalanced\");\n\n\t\t\t\texpect_u_eq(tree_iterate(&tree), k+1,\n\t\t\t\t    \"Unexpected node iteration count\");\n\t\t\t\texpect_u_eq(tree_iterate_reverse(&tree), k+1,\n\t\t\t\t    \"Unexpected node iteration count\");\n\n\t\t\t\texpect_false(tree_empty(&tree),\n\t\t\t\t    \"Tree should not be empty\");\n\t\t\t\texpect_ptr_not_null(tree_first(&tree),\n\t\t\t\t    \"Tree should not be empty\");\n\t\t\t\texpect_ptr_not_null(tree_last(&tree),\n\t\t\t\t    \"Tree should not be empty\");\n\n\t\t\t\ttree_next(&tree, &nodes[k]);\n\t\t\t\ttree_prev(&tree, &nodes[k]);\n\t\t\t}\n\n\t\t\t/* Remove nodes. */\n\t\t\tswitch (i % 5) {\n\t\t\tcase 0:\n\t\t\t\tfor (k = 0; k < j; k++) {\n\t\t\t\t\tnode_remove(&tree, &nodes[k], j - k);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 1:\n\t\t\t\tfor (k = j; k > 0; k--) {\n\t\t\t\t\tnode_remove(&tree, &nodes[k-1], k);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 2: {\n\t\t\t\tnode_t *start;\n\t\t\t\tunsigned nnodes = j;\n\n\t\t\t\tstart = NULL;\n\t\t\t\tdo {\n\t\t\t\t\tstart = tree_iter(&tree, start,\n\t\t\t\t\t    remove_iterate_cb, (void *)&nnodes);\n\t\t\t\t\tnnodes--;\n\t\t\t\t} while (start != NULL);\n\t\t\t\texpect_u_eq(nnodes, 0,\n\t\t\t\t    \"Removal terminated early\");\n\t\t\t\tbreak;\n\t\t\t} case 3: {\n\t\t\t\tnode_t *start;\n\t\t\t\tunsigned nnodes = j;\n\n\t\t\t\tstart = NULL;\n\t\t\t\tdo {\n\t\t\t\t\tstart = tree_reverse_iter(&tree, start,\n\t\t\t\t\t    remove_reverse_iterate_cb,\n\t\t\t\t\t    (void *)&nnodes);\n\t\t\t\t\tnnodes--;\n\t\t\t\t} while (start != NULL);\n\t\t\t\texpect_u_eq(nnodes, 0,\n\t\t\t\t    \"Removal terminated early\");\n\t\t\t\tbreak;\n\t\t\t} case 4: {\n\t\t\t\tunsigned nnodes = j;\n\t\t\t\ttree_destroy(&tree, 
destroy_cb, &nnodes);\n\t\t\t\texpect_u_eq(nnodes, 0,\n\t\t\t\t    \"Destruction terminated early\");\n\t\t\t\tbreak;\n\t\t\t} default:\n\t\t\t\tnot_reached();\n\t\t\t}\n\t\t}\n\t}\n\tfini_gen_rand(sfmt);\n}\nTEST_END\n\nstatic void\nexpect_simple_consistency(tree_t *tree, uint64_t specialness,\n    bool expected_empty, node_t *expected_first, node_t *expected_last) {\n\tbool empty;\n\tnode_t *first;\n\tnode_t *last;\n\n\tempty = tree_empty_filtered(tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, &specialness);\n\texpect_b_eq(expected_empty, empty, \"\");\n\n\tfirst = tree_first_filtered(tree,\n\t    &specialness_filter_node, &specialness_filter_subtree,\n\t    (void *)&specialness);\n\texpect_ptr_eq(expected_first, first, \"\");\n\n\tlast = tree_last_filtered(tree,\n\t    &specialness_filter_node, &specialness_filter_subtree,\n\t    (void *)&specialness);\n\texpect_ptr_eq(expected_last, last, \"\");\n}\n\nTEST_BEGIN(test_rb_filter_simple) {\n\tenum {FILTER_NODES = 10};\n\tnode_t nodes[FILTER_NODES];\n\tfor (unsigned i = 0; i < FILTER_NODES; i++) {\n\t\tnodes[i].magic = NODE_MAGIC;\n\t\tnodes[i].key = i;\n\t\tif (i == 0) {\n\t\t\tnodes[i].specialness = 0;\n\t\t} else {\n\t\t\tnodes[i].specialness = ffs_u(i);\n\t\t}\n\t\tnodes[i].mid_remove = false;\n\t\tnodes[i].allow_duplicates = false;\n\t\tnodes[i].summary_lchild = NULL;\n\t\tnodes[i].summary_rchild = NULL;\n\t\tnodes[i].summary_max_specialness = 0;\n\t}\n\n\tsummarize_always_returns_true = false;\n\n\ttree_t tree;\n\ttree_new(&tree);\n\n\t/* Should be empty */\n\texpect_simple_consistency(&tree, /* specialness */ 0, /* empty */ true,\n\t    /* first */ NULL, /* last */ NULL);\n\n\t/* Fill in just the odd nodes. */\n\tfor (int i = 1; i < FILTER_NODES; i += 2) {\n\t\ttree_insert(&tree, &nodes[i]);\n\t}\n\n\t/* A search for an odd node should succeed. 
*/\n\texpect_simple_consistency(&tree, /* specialness */ 0, /* empty */ false,\n\t    /* first */ &nodes[1], /* last */ &nodes[9]);\n\n\t/* But a search for an even one should fail. */\n\texpect_simple_consistency(&tree, /* specialness */ 1, /* empty */ true,\n\t    /* first */ NULL, /* last */ NULL);\n\n\t/* Now we add an even. */\n\ttree_insert(&tree, &nodes[4]);\n\texpect_simple_consistency(&tree, /* specialness */ 1, /* empty */ false,\n\t    /* first */ &nodes[4], /* last */ &nodes[4]);\n\n\t/* A smaller even, and a larger even. */\n\ttree_insert(&tree, &nodes[2]);\n\ttree_insert(&tree, &nodes[8]);\n\n\t/*\n\t * A first-search (resp. last-search) for an even should switch to the\n\t * lower (higher) one, now that it's been added.\n\t */\n\texpect_simple_consistency(&tree, /* specialness */ 1, /* empty */ false,\n\t    /* first */ &nodes[2], /* last */ &nodes[8]);\n\n\t/*\n\t * If we remove 2, a first-search should go back to 4, while a\n\t * last-search should remain unchanged.\n\t */\n\ttree_remove(&tree, &nodes[2]);\n\texpect_simple_consistency(&tree, /* specialness */ 1, /* empty */ false,\n\t    /* first */ &nodes[4], /* last */ &nodes[8]);\n\n\t/* Reinsert 2, then find it again. */\n\ttree_insert(&tree, &nodes[2]);\n\texpect_simple_consistency(&tree, /* specialness */ 1, /* empty */ false,\n\t    /* first */ &nodes[2], /* last */ &nodes[8]);\n\n\t/* Searching for a multiple of 4 should not have changed. 
*/\n\texpect_simple_consistency(&tree, /* specialness */ 2, /* empty */ false,\n\t    /* first */ &nodes[4], /* last */ &nodes[8]);\n\n\t/* And a multiple of 8 */\n\texpect_simple_consistency(&tree, /* specialness */ 3, /* empty */ false,\n\t    /* first */ &nodes[8], /* last */ &nodes[8]);\n\n\t/* But not a multiple of 16 */\n\texpect_simple_consistency(&tree, /* specialness */ 4, /* empty */ true,\n\t    /* first */ NULL, /* last */ NULL);\n}\nTEST_END\n\ntypedef struct iter_ctx_s iter_ctx_t;\nstruct iter_ctx_s {\n\tint ncalls;\n\tnode_t *last_node;\n\n\tint ncalls_max;\n\tbool forward;\n};\n\nstatic node_t *\ntree_iterate_filtered_cb(tree_t *tree, node_t *node, void *arg) {\n\titer_ctx_t *ctx = (iter_ctx_t *)arg;\n\tctx->ncalls++;\n\texpect_u64_ge(node->specialness, 1,\n\t    \"Should only invoke cb on nodes that pass the filter\");\n\tif (ctx->last_node != NULL) {\n\t\tif (ctx->forward) {\n\t\t\texpect_d_lt(node_cmp(ctx->last_node, node), 0,\n\t\t\t    \"Incorrect iteration order\");\n\t\t} else {\n\t\t\texpect_d_gt(node_cmp(ctx->last_node, node), 0,\n\t\t\t    \"Incorrect iteration order\");\n\t\t}\n\t}\n\tctx->last_node = node;\n\tif (ctx->ncalls == ctx->ncalls_max) {\n\t\treturn node;\n\t}\n\treturn NULL;\n}\n\nstatic int\nqsort_node_cmp(const void *ap, const void *bp) {\n\tnode_t *a = *(node_t **)ap;\n\tnode_t *b = *(node_t **)bp;\n\treturn node_cmp(a, b);\n}\n\n#define UPDATE_TEST_MAX 100\nstatic void\ncheck_consistency(tree_t *tree, node_t nodes[UPDATE_TEST_MAX], int nnodes) {\n\tuint64_t specialness = 1;\n\n\tbool empty;\n\tbool real_empty = true;\n\tnode_t *first;\n\tnode_t *real_first = NULL;\n\tnode_t *last;\n\tnode_t *real_last = NULL;\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tif (nodes[i].specialness >= specialness) {\n\t\t\treal_empty = false;\n\t\t\tif (real_first == NULL\n\t\t\t    || node_cmp(&nodes[i], real_first) < 0) {\n\t\t\t\treal_first = &nodes[i];\n\t\t\t}\n\t\t\tif (real_last == NULL\n\t\t\t    || node_cmp(&nodes[i], real_last) > 0) 
{\n\t\t\t\treal_last = &nodes[i];\n\t\t\t}\n\t\t}\n\t}\n\n\tempty = tree_empty_filtered(tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, &specialness);\n\texpect_b_eq(real_empty, empty, \"\");\n\n\tfirst = tree_first_filtered(tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, &specialness);\n\texpect_ptr_eq(real_first, first, \"\");\n\n\tlast = tree_last_filtered(tree, &specialness_filter_node,\n\t    &specialness_filter_subtree, &specialness);\n\texpect_ptr_eq(real_last, last, \"\");\n\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tnode_t *next_filtered;\n\t\tnode_t *real_next_filtered = NULL;\n\t\tnode_t *prev_filtered;\n\t\tnode_t *real_prev_filtered = NULL;\n\t\tfor (int j = 0; j < nnodes; j++) {\n\t\t\tif (nodes[j].specialness < specialness) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (node_cmp(&nodes[j], &nodes[i]) < 0\n\t\t\t    && (real_prev_filtered == NULL\n\t\t\t    || node_cmp(&nodes[j], real_prev_filtered) > 0)) {\n\t\t\t\treal_prev_filtered = &nodes[j];\n\t\t\t}\n\t\t\tif (node_cmp(&nodes[j], &nodes[i]) > 0\n\t\t\t    && (real_next_filtered == NULL\n\t\t\t    || node_cmp(&nodes[j], real_next_filtered) < 0)) {\n\t\t\t\treal_next_filtered = &nodes[j];\n\t\t\t}\n\t\t}\n\t\tnext_filtered = tree_next_filtered(tree, &nodes[i],\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_next_filtered, next_filtered, \"\");\n\n\t\tprev_filtered = tree_prev_filtered(tree, &nodes[i],\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_prev_filtered, prev_filtered, \"\");\n\n\t\tnode_t *search_filtered;\n\t\tnode_t *real_search_filtered;\n\t\tnode_t *nsearch_filtered;\n\t\tnode_t *real_nsearch_filtered;\n\t\tnode_t *psearch_filtered;\n\t\tnode_t *real_psearch_filtered;\n\n\t\t/*\n\t\t * search, nsearch, psearch from a node before nodes[i] in the\n\t\t * ordering.\n\t\t */\n\t\tnode_t before;\n\t\tbefore.magic = 
NODE_MAGIC;\n\t\tbefore.key = nodes[i].key - 1;\n\t\tbefore.allow_duplicates = false;\n\t\treal_search_filtered = NULL;\n\t\tsearch_filtered = tree_search_filtered(tree, &before,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_search_filtered, search_filtered, \"\");\n\n\t\treal_nsearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : real_next_filtered);\n\t\tnsearch_filtered = tree_nsearch_filtered(tree, &before,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_nsearch_filtered, nsearch_filtered, \"\");\n\n\t\treal_psearch_filtered = real_prev_filtered;\n\t\tpsearch_filtered = tree_psearch_filtered(tree, &before,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_psearch_filtered, psearch_filtered, \"\");\n\n\t\t/* search, nsearch, psearch from nodes[i] */\n\t\treal_search_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : NULL);\n\t\tsearch_filtered = tree_search_filtered(tree, &nodes[i],\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_search_filtered, search_filtered, \"\");\n\n\t\treal_nsearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : real_next_filtered);\n\t\tnsearch_filtered = tree_nsearch_filtered(tree, &nodes[i],\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_nsearch_filtered, nsearch_filtered, \"\");\n\n\t\treal_psearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : real_prev_filtered);\n\t\tpsearch_filtered = tree_psearch_filtered(tree, &nodes[i],\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_psearch_filtered, psearch_filtered, \"\");\n\n\t\t/*\n\t\t * search, nsearch, psearch from a 
node equivalent to but\n\t\t * distinct from nodes[i].\n\t\t */\n\t\tnode_t equiv;\n\t\tequiv.magic = NODE_MAGIC;\n\t\tequiv.key = nodes[i].key;\n\t\tequiv.allow_duplicates = true;\n\t\treal_search_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : NULL);\n\t\tsearch_filtered = tree_search_filtered(tree, &equiv,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_search_filtered, search_filtered, \"\");\n\n\t\treal_nsearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : real_next_filtered);\n\t\tnsearch_filtered = tree_nsearch_filtered(tree, &equiv,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_nsearch_filtered, nsearch_filtered, \"\");\n\n\t\treal_psearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : real_prev_filtered);\n\t\tpsearch_filtered = tree_psearch_filtered(tree, &equiv,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_psearch_filtered, psearch_filtered, \"\");\n\n\t\t/*\n\t\t * search, nsearch, psearch from a node after nodes[i] in the\n\t\t * ordering.\n\t\t */\n\t\tnode_t after;\n\t\tafter.magic = NODE_MAGIC;\n\t\tafter.key = nodes[i].key + 1;\n\t\tafter.allow_duplicates = false;\n\t\treal_search_filtered = NULL;\n\t\tsearch_filtered = tree_search_filtered(tree, &after,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_search_filtered, search_filtered, \"\");\n\n\t\treal_nsearch_filtered = real_next_filtered;\n\t\tnsearch_filtered = tree_nsearch_filtered(tree, &after,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_nsearch_filtered, nsearch_filtered, \"\");\n\n\t\treal_psearch_filtered = (nodes[i].specialness >= specialness ?\n\t\t    &nodes[i] : 
real_prev_filtered);\n\t\tpsearch_filtered = tree_psearch_filtered(tree, &after,\n\t\t    &specialness_filter_node, &specialness_filter_subtree,\n\t\t    &specialness);\n\t\texpect_ptr_eq(real_psearch_filtered, psearch_filtered, \"\");\n\t}\n\n\t/* Filtered iteration test setup. */\n\tint nspecial = 0;\n\tnode_t *sorted_nodes[UPDATE_TEST_MAX];\n\tnode_t *sorted_filtered_nodes[UPDATE_TEST_MAX];\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tsorted_nodes[i] = &nodes[i];\n\t}\n\tqsort(sorted_nodes, nnodes, sizeof(node_t *), &qsort_node_cmp);\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tsorted_nodes[i]->rank = i;\n\t\tsorted_nodes[i]->filtered_rank = nspecial;\n\t\tif (sorted_nodes[i]->specialness >= 1) {\n\t\t\tsorted_filtered_nodes[nspecial] = sorted_nodes[i];\n\t\t\tnspecial++;\n\t\t}\n\t}\n\n\tnode_t *iter_result;\n\n\titer_ctx_t ctx;\n\tctx.ncalls = 0;\n\tctx.last_node = NULL;\n\tctx.ncalls_max = INT_MAX;\n\tctx.forward = true;\n\n\t/* Filtered forward iteration from the beginning. */\n\titer_result = tree_iter_filtered(tree, NULL, &tree_iterate_filtered_cb,\n\t    &ctx, &specialness_filter_node, &specialness_filter_subtree,\n\t    &specialness);\n\texpect_ptr_null(iter_result, \"\");\n\texpect_d_eq(nspecial, ctx.ncalls, \"\");\n\t/* Filtered forward iteration from a starting point. 
*/\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tctx.ncalls = 0;\n\t\tctx.last_node = NULL;\n\t\titer_result = tree_iter_filtered(tree, &nodes[i],\n\t\t    &tree_iterate_filtered_cb, &ctx, &specialness_filter_node,\n\t\t    &specialness_filter_subtree, &specialness);\n\t\texpect_ptr_null(iter_result, \"\");\n\t\texpect_d_eq(nspecial - nodes[i].filtered_rank, ctx.ncalls, \"\");\n\t}\n\t/* Filtered forward iteration from the beginning, with stopping */\n\tfor (int i = 0; i < nspecial; i++) {\n\t\tctx.ncalls = 0;\n\t\tctx.last_node = NULL;\n\t\tctx.ncalls_max = i + 1;\n\t\titer_result = tree_iter_filtered(tree, NULL,\n\t\t    &tree_iterate_filtered_cb, &ctx, &specialness_filter_node,\n\t\t    &specialness_filter_subtree, &specialness);\n\t\texpect_ptr_eq(sorted_filtered_nodes[i], iter_result, \"\");\n\t\texpect_d_eq(ctx.ncalls, i + 1, \"\");\n\t}\n\t/* Filtered forward iteration from a starting point, with stopping. */\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tfor (int j = 0; j < nspecial - nodes[i].filtered_rank; j++) {\n\t\t\tctx.ncalls = 0;\n\t\t\tctx.last_node = NULL;\n\t\t\tctx.ncalls_max = j + 1;\n\t\t\titer_result = tree_iter_filtered(tree, &nodes[i],\n\t\t\t    &tree_iterate_filtered_cb, &ctx,\n\t\t\t    &specialness_filter_node,\n\t\t\t    &specialness_filter_subtree, &specialness);\n\t\t\texpect_d_eq(j + 1, ctx.ncalls, \"\");\n\t\t\texpect_ptr_eq(sorted_filtered_nodes[\n\t\t\t    nodes[i].filtered_rank + j], iter_result, \"\");\n\t\t}\n\t}\n\n\t/* Backwards iteration. */\n\tctx.ncalls = 0;\n\tctx.last_node = NULL;\n\tctx.ncalls_max = INT_MAX;\n\tctx.forward = false;\n\n\t/* Filtered backward iteration from the end. */\n\titer_result = tree_reverse_iter_filtered(tree, NULL,\n\t    &tree_iterate_filtered_cb, &ctx, &specialness_filter_node,\n\t    &specialness_filter_subtree, &specialness);\n\texpect_ptr_null(iter_result, \"\");\n\texpect_d_eq(nspecial, ctx.ncalls, \"\");\n\t/* Filtered backward iteration from a starting point. 
*/\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tctx.ncalls = 0;\n\t\tctx.last_node = NULL;\n\t\titer_result = tree_reverse_iter_filtered(tree, &nodes[i],\n\t\t    &tree_iterate_filtered_cb, &ctx, &specialness_filter_node,\n\t\t    &specialness_filter_subtree, &specialness);\n\t\texpect_ptr_null(iter_result, \"\");\n\t\tint surplus_rank = (nodes[i].specialness >= 1 ? 1 : 0);\n\t\texpect_d_eq(nodes[i].filtered_rank + surplus_rank, ctx.ncalls,\n\t\t    \"\");\n\t}\n\t/* Filtered backward iteration from the end, with stopping */\n\tfor (int i = 0; i < nspecial; i++) {\n\t\tctx.ncalls = 0;\n\t\tctx.last_node = NULL;\n\t\tctx.ncalls_max = i + 1;\n\t\titer_result = tree_reverse_iter_filtered(tree, NULL,\n\t\t    &tree_iterate_filtered_cb, &ctx, &specialness_filter_node,\n\t\t    &specialness_filter_subtree, &specialness);\n\t\texpect_ptr_eq(sorted_filtered_nodes[nspecial - i - 1],\n\t\t    iter_result, \"\");\n\t\texpect_d_eq(ctx.ncalls, i + 1, \"\");\n\t}\n\t/* Filtered backward iteration from a starting point, with stopping. */\n\tfor (int i = 0; i < nnodes; i++) {\n\t\tint surplus_rank = (nodes[i].specialness >= 1 ? 
1 : 0);\n\t\tfor (int j = 0; j < nodes[i].filtered_rank + surplus_rank;\n\t\t    j++) {\n\t\t\tctx.ncalls = 0;\n\t\t\tctx.last_node = NULL;\n\t\t\tctx.ncalls_max = j + 1;\n\t\t\titer_result = tree_reverse_iter_filtered(tree,\n\t\t\t    &nodes[i], &tree_iterate_filtered_cb, &ctx,\n\t\t\t    &specialness_filter_node,\n\t\t\t    &specialness_filter_subtree, &specialness);\n\t\t\texpect_d_eq(j + 1, ctx.ncalls, \"\");\n\t\t\texpect_ptr_eq(sorted_filtered_nodes[\n\t\t\t    nodes[i].filtered_rank - j - 1 + surplus_rank],\n\t\t\t    iter_result, \"\");\n\t\t}\n\t}\n}\n\nstatic void\ndo_update_search_test(int nnodes, int ntrees, int nremovals,\n    int nupdates) {\n\tnode_t nodes[UPDATE_TEST_MAX];\n\tassert(nnodes <= UPDATE_TEST_MAX);\n\n\tsfmt_t *sfmt = init_gen_rand(12345);\n\tfor (int i = 0; i < ntrees; i++) {\n\t\ttree_t tree;\n\t\ttree_new(&tree);\n\t\tfor (int j = 0; j < nnodes; j++) {\n\t\t\tnodes[j].magic = NODE_MAGIC;\n\t\t\t/*\n\t\t\t * In consistency checking, we increment or decrement a\n\t\t\t * key and assume that the result is not a key in the\n\t\t\t * tree.  
This isn't a *real* concern with 64-bit keys\n\t\t\t * and a good PRNG, but why not be correct anyways?\n\t\t\t */\n\t\t\tnodes[j].key = 2 * gen_rand64(sfmt);\n\t\t\tnodes[j].specialness = 0;\n\t\t\tnodes[j].mid_remove = false;\n\t\t\tnodes[j].allow_duplicates = false;\n\t\t\tnodes[j].summary_lchild = NULL;\n\t\t\tnodes[j].summary_rchild = NULL;\n\t\t\tnodes[j].summary_max_specialness = 0;\n\t\t\ttree_insert(&tree, &nodes[j]);\n\t\t}\n\t\tfor (int j = 0; j < nremovals; j++) {\n\t\t\tint victim = (int)gen_rand64_range(sfmt, nnodes);\n\t\t\tif (!nodes[victim].mid_remove) {\n\t\t\t\ttree_remove(&tree, &nodes[victim]);\n\t\t\t\tnodes[victim].mid_remove = true;\n\t\t\t}\n\t\t}\n\t\tfor (int j = 0; j < nnodes; j++) {\n\t\t\tif (nodes[j].mid_remove) {\n\t\t\t\tnodes[j].mid_remove = false;\n\t\t\t\tnodes[j].key = 2 * gen_rand64(sfmt);\n\t\t\t\ttree_insert(&tree, &nodes[j]);\n\t\t\t}\n\t\t}\n\t\tfor (int j = 0; j < nupdates; j++) {\n\t\t\tuint32_t ind = gen_rand32_range(sfmt, nnodes);\n\t\t\tnodes[ind].specialness = 1 - nodes[ind].specialness;\n\t\t\ttree_update_summaries(&tree, &nodes[ind]);\n\t\t\tcheck_consistency(&tree, nodes, nnodes);\n\t\t}\n\t}\n}\n\nTEST_BEGIN(test_rb_update_search) {\n\tsummarize_always_returns_true = false;\n\tdo_update_search_test(2, 100, 3, 50);\n\tdo_update_search_test(5, 100, 3, 50);\n\tdo_update_search_test(12, 100, 5, 1000);\n\tdo_update_search_test(100, 1, 50, 500);\n}\nTEST_END\n\ntypedef rb_tree(node_t) unsummarized_tree_t;\nrb_gen(static UNUSED, unsummarized_tree_, unsummarized_tree_t, node_t, link,\n    node_cmp);\n\nstatic node_t *\nunsummarized_tree_iterate_cb(unsummarized_tree_t *tree, node_t *node,\n    void *data) {\n\tunsigned *i = (unsigned *)data;\n\t(*i)++;\n\treturn NULL;\n}\n/*\n * The unsummarized and summarized functionality is implemented via the same\n * functions; we don't really need to do much more than test that we can exclude\n * the filtered functionality without anything breaking.\n 
*/\nTEST_BEGIN(test_rb_unsummarized) {\n\tunsummarized_tree_t tree;\n\tunsummarized_tree_new(&tree);\n\tunsigned nnodes = 0;\n\tunsummarized_tree_iter(&tree, NULL, &unsummarized_tree_iterate_cb,\n\t    &nnodes);\n\texpect_u_eq(0, nnodes, \"\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_rb_empty,\n\t    test_rb_random,\n\t    test_rb_filter_simple,\n\t    test_rb_update_search,\n\t    test_rb_unsummarized);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/retained.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/spin.h\"\n\nstatic unsigned\t\tarena_ind;\nstatic size_t\t\tsz;\nstatic size_t\t\tesz;\n#define NEPOCHS\t\t8\n#define PER_THD_NALLOCS\t1\nstatic atomic_u_t\tepoch;\nstatic atomic_u_t\tnfinished;\n\nstatic unsigned\ndo_arena_create(extent_hooks_t *h) {\n\tunsigned new_arena_ind;\n\tsize_t ind_sz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&new_arena_ind, &ind_sz,\n\t    (void *)(h != NULL ? &h : NULL), (h != NULL ? sizeof(h) : 0)), 0,\n\t    \"Unexpected mallctl() failure\");\n\treturn new_arena_ind;\n}\n\nstatic void\ndo_arena_destroy(unsigned ind) {\n\tsize_t mib[3];\n\tsize_t miblen;\n\n\tmiblen = sizeof(mib)/sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arena.0.destroy\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() failure\");\n\tmib[1] = (size_t)ind;\n\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctlbymib() failure\");\n}\n\nstatic void\ndo_refresh(void) {\n\tuint64_t refresh_epoch = 1;\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&refresh_epoch,\n\t    sizeof(refresh_epoch)), 0, \"Unexpected mallctl() failure\");\n}\n\nstatic size_t\ndo_get_size_impl(const char *cmd, unsigned ind) {\n\tsize_t mib[4];\n\tsize_t miblen = sizeof(mib) / sizeof(size_t);\n\tsize_t z = sizeof(size_t);\n\n\texpect_d_eq(mallctlnametomib(cmd, mib, &miblen),\n\t    0, \"Unexpected mallctlnametomib(\\\"%s\\\", ...) failure\", cmd);\n\tmib[2] = ind;\n\tsize_t size;\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&size, &z, NULL, 0),\n\t    0, \"Unexpected mallctlbymib([\\\"%s\\\"], ...) 
failure\", cmd);\n\n\treturn size;\n}\n\nstatic size_t\ndo_get_active(unsigned ind) {\n\treturn do_get_size_impl(\"stats.arenas.0.pactive\", ind) * PAGE;\n}\n\nstatic size_t\ndo_get_mapped(unsigned ind) {\n\treturn do_get_size_impl(\"stats.arenas.0.mapped\", ind);\n}\n\nstatic void *\nthd_start(void *arg) {\n\tfor (unsigned next_epoch = 1; next_epoch < NEPOCHS; next_epoch++) {\n\t\t/* Busy-wait for next epoch. */\n\t\tunsigned cur_epoch;\n\t\tspin_t spinner = SPIN_INITIALIZER;\n\t\twhile ((cur_epoch = atomic_load_u(&epoch, ATOMIC_ACQUIRE)) !=\n\t\t    next_epoch) {\n\t\t\tspin_adaptive(&spinner);\n\t\t}\n\t\texpect_u_eq(cur_epoch, next_epoch, \"Unexpected epoch\");\n\n\t\t/*\n\t\t * Allocate.  The main thread will reset the arena, so there's\n\t\t * no need to deallocate.\n\t\t */\n\t\tfor (unsigned i = 0; i < PER_THD_NALLOCS; i++) {\n\t\t\tvoid *p = mallocx(sz, MALLOCX_ARENA(arena_ind) |\n\t\t\t    MALLOCX_TCACHE_NONE\n\t\t\t    );\n\t\t\texpect_ptr_not_null(p,\n\t\t\t    \"Unexpected mallocx() failure\\n\");\n\t\t}\n\n\t\t/* Let the main thread know we've finished this iteration. */\n\t\tatomic_fetch_add_u(&nfinished, 1, ATOMIC_RELEASE);\n\t}\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_retained) {\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(opt_hpa);\n\n\tarena_ind = do_arena_create(NULL);\n\tsz = nallocx(HUGEPAGE, 0);\n\tsize_t guard_sz = san_guard_enabled() ? SAN_PAGE_GUARDS_SIZE : 0;\n\tesz = sz + sz_large_pad + guard_sz;\n\n\tatomic_store_u(&epoch, 0, ATOMIC_RELAXED);\n\n\tunsigned nthreads = ncpus * 2;\n\tif (LG_SIZEOF_PTR < 3 && nthreads > 16) {\n\t\tnthreads = 16; /* 32-bit platform could run out of vaddr. */\n\t}\n\tVARIABLE_ARRAY(thd_t, threads, nthreads);\n\tfor (unsigned i = 0; i < nthreads; i++) {\n\t\tthd_create(&threads[i], thd_start, NULL);\n\t}\n\n\tfor (unsigned e = 1; e < NEPOCHS; e++) {\n\t\tatomic_store_u(&nfinished, 0, ATOMIC_RELEASE);\n\t\tatomic_store_u(&epoch, e, ATOMIC_RELEASE);\n\n\t\t/* Wait for threads to finish allocating. 
*/\n\t\tspin_t spinner = SPIN_INITIALIZER;\n\t\twhile (atomic_load_u(&nfinished, ATOMIC_ACQUIRE) < nthreads) {\n\t\t\tspin_adaptive(&spinner);\n\t\t}\n\n\t\t/*\n\t\t * Assert that retained is no more than the sum of size classes\n\t\t * that should have been used to satisfy the worker threads'\n\t\t * requests, discounting per growth fragmentation.\n\t\t */\n\t\tdo_refresh();\n\n\t\tsize_t allocated = (esz - guard_sz) * nthreads *\n\t\t    PER_THD_NALLOCS;\n\t\tsize_t active = do_get_active(arena_ind);\n\t\texpect_zu_le(allocated, active, \"Unexpected active memory\");\n\t\tsize_t mapped = do_get_mapped(arena_ind);\n\t\texpect_zu_le(active, mapped, \"Unexpected mapped memory\");\n\n\t\tarena_t *arena = arena_get(tsdn_fetch(), arena_ind, false);\n\t\tsize_t usable = 0;\n\t\tsize_t fragmented = 0;\n\t\tfor (pszind_t pind = sz_psz2ind(HUGEPAGE); pind <\n\t\t    arena->pa_shard.pac.exp_grow.next; pind++) {\n\t\t\tsize_t psz = sz_pind2sz(pind);\n\t\t\tsize_t psz_fragmented = psz % esz;\n\t\t\tsize_t psz_usable = psz - psz_fragmented;\n\t\t\t/*\n\t\t\t * Only consider size classes that wouldn't be skipped.\n\t\t\t */\n\t\t\tif (psz_usable > 0) {\n\t\t\t\texpect_zu_lt(usable, allocated,\n\t\t\t\t    \"Excessive retained memory \"\n\t\t\t\t    \"(%#zx[+%#zx] > %#zx)\", usable, psz_usable,\n\t\t\t\t    allocated);\n\t\t\t\tfragmented += psz_fragmented;\n\t\t\t\tusable += psz_usable;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Clean up arena.  Destroying and recreating the arena\n\t\t * is simpler than specifying extent hooks that deallocate\n\t\t * (rather than retaining) during reset.\n\t\t */\n\t\tdo_arena_destroy(arena_ind);\n\t\texpect_u_eq(do_arena_create(NULL), arena_ind,\n\t\t    \"Unexpected arena index\");\n\t}\n\n\tfor (unsigned i = 0; i < nthreads; i++) {\n\t\tthd_join(threads[i], NULL);\n\t}\n\n\tdo_arena_destroy(arena_ind);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_retained);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/rtree.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/rtree.h\"\n\n#define INVALID_ARENA_IND ((1U << MALLOCX_ARENA_BITS) - 1)\n\n/* Potentially too large to safely place on the stack. */\nrtree_t test_rtree;\n\nTEST_BEGIN(test_rtree_read_empty) {\n\ttsdn_t *tsdn;\n\n\ttsdn = tsdn_fetch();\n\n\tbase_t *base = base_new(tsdn, 0, &ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new failure\");\n\n\trtree_t *rtree = &test_rtree;\n\trtree_ctx_t rtree_ctx;\n\trtree_ctx_data_init(&rtree_ctx);\n\texpect_false(rtree_new(rtree, base, false),\n\t    \"Unexpected rtree_new() failure\");\n\trtree_contents_t contents;\n\texpect_true(rtree_read_independent(tsdn, rtree, &rtree_ctx, PAGE,\n\t    &contents), \"rtree_read_independent() should fail on empty rtree.\");\n\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\n#undef NTHREADS\n#undef NITERS\n#undef SEED\n\nstatic edata_t *\nalloc_edata(void) {\n\tvoid *ret = mallocx(sizeof(edata_t), MALLOCX_ALIGN(EDATA_ALIGNMENT));\n\tassert_ptr_not_null(ret, \"Unexpected mallocx() failure\");\n\n\treturn ret;\n}\n\nTEST_BEGIN(test_rtree_extrema) {\n\tedata_t *edata_a, *edata_b;\n\tedata_a = alloc_edata();\n\tedata_b = alloc_edata();\n\tedata_init(edata_a, INVALID_ARENA_IND, NULL, SC_LARGE_MINCLASS,\n\t    false, sz_size2index(SC_LARGE_MINCLASS), 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\tedata_init(edata_b, INVALID_ARENA_IND, NULL, 0, false, SC_NSIZES, 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\n\ttsdn_t *tsdn = tsdn_fetch();\n\n\tbase_t *base = base_new(tsdn, 0, &ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new failure\");\n\n\trtree_t *rtree = &test_rtree;\n\trtree_ctx_t rtree_ctx;\n\trtree_ctx_data_init(&rtree_ctx);\n\texpect_false(rtree_new(rtree, base, false),\n\t    \"Unexpected rtree_new() 
failure\");\n\n\trtree_contents_t contents_a;\n\tcontents_a.edata = edata_a;\n\tcontents_a.metadata.szind = edata_szind_get(edata_a);\n\tcontents_a.metadata.slab = edata_slab_get(edata_a);\n\tcontents_a.metadata.is_head = edata_is_head_get(edata_a);\n\tcontents_a.metadata.state = edata_state_get(edata_a);\n\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, PAGE, contents_a),\n\t    \"Unexpected rtree_write() failure\");\n\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, PAGE, contents_a),\n\t    \"Unexpected rtree_write() failure\");\n\trtree_contents_t read_contents_a = rtree_read(tsdn, rtree, &rtree_ctx,\n\t    PAGE);\n\texpect_true(contents_a.edata == read_contents_a.edata\n\t    && contents_a.metadata.szind == read_contents_a.metadata.szind\n\t    && contents_a.metadata.slab == read_contents_a.metadata.slab\n\t    && contents_a.metadata.is_head == read_contents_a.metadata.is_head\n\t    && contents_a.metadata.state == read_contents_a.metadata.state,\n\t    \"rtree_read() should return previously set value\");\n\n\trtree_contents_t contents_b;\n\tcontents_b.edata = edata_b;\n\tcontents_b.metadata.szind = edata_szind_get_maybe_invalid(edata_b);\n\tcontents_b.metadata.slab = edata_slab_get(edata_b);\n\tcontents_b.metadata.is_head = edata_is_head_get(edata_b);\n\tcontents_b.metadata.state = edata_state_get(edata_b);\n\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, ~((uintptr_t)0),\n\t    contents_b), \"Unexpected rtree_write() failure\");\n\trtree_contents_t read_contents_b = rtree_read(tsdn, rtree, &rtree_ctx,\n\t    ~((uintptr_t)0));\n\tassert_true(contents_b.edata == read_contents_b.edata\n\t    && contents_b.metadata.szind == read_contents_b.metadata.szind\n\t    && contents_b.metadata.slab == read_contents_b.metadata.slab\n\t    && contents_b.metadata.is_head == read_contents_b.metadata.is_head\n\t    && contents_b.metadata.state == read_contents_b.metadata.state,\n\t    \"rtree_read() should return previously set value\");\n\n\tbase_delete(tsdn, 
base);\n}\nTEST_END\n\nTEST_BEGIN(test_rtree_bits) {\n\ttsdn_t *tsdn = tsdn_fetch();\n\tbase_t *base = base_new(tsdn, 0, &ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new failure\");\n\n\tuintptr_t keys[] = {PAGE, PAGE + 1,\n\t    PAGE + (((uintptr_t)1) << LG_PAGE) - 1};\n\tedata_t *edata_c = alloc_edata();\n\tedata_init(edata_c, INVALID_ARENA_IND, NULL, 0, false, SC_NSIZES, 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\n\trtree_t *rtree = &test_rtree;\n\trtree_ctx_t rtree_ctx;\n\trtree_ctx_data_init(&rtree_ctx);\n\texpect_false(rtree_new(rtree, base, false),\n\t    \"Unexpected rtree_new() failure\");\n\n\tfor (unsigned i = 0; i < sizeof(keys)/sizeof(uintptr_t); i++) {\n\t\trtree_contents_t contents;\n\t\tcontents.edata = edata_c;\n\t\tcontents.metadata.szind = SC_NSIZES;\n\t\tcontents.metadata.slab = false;\n\t\tcontents.metadata.is_head = false;\n\t\tcontents.metadata.state = extent_state_active;\n\n\t\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, keys[i],\n\t\t    contents), \"Unexpected rtree_write() failure\");\n\t\tfor (unsigned j = 0; j < sizeof(keys)/sizeof(uintptr_t); j++) {\n\t\t\texpect_ptr_eq(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t\t    keys[j]).edata, edata_c,\n\t\t\t    \"rtree_edata_read() should return previously set \"\n\t\t\t    \"value and ignore insignificant key bits; i=%u, \"\n\t\t\t    \"j=%u, set key=%#\"FMTxPTR\", get key=%#\"FMTxPTR, i,\n\t\t\t    j, keys[i], keys[j]);\n\t\t}\n\t\texpect_ptr_null(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    (((uintptr_t)2) << LG_PAGE)).edata,\n\t\t    \"Only leftmost rtree leaf should be set; i=%u\", i);\n\t\trtree_clear(tsdn, rtree, &rtree_ctx, keys[i]);\n\t}\n\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\nTEST_BEGIN(test_rtree_random) {\n#define NSET 16\n#define SEED 42\n\tsfmt_t *sfmt = init_gen_rand(SEED);\n\ttsdn_t *tsdn = tsdn_fetch();\n\n\tbase_t *base = base_new(tsdn, 0, 
&ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new failure\");\n\n\tuintptr_t keys[NSET];\n\trtree_t *rtree = &test_rtree;\n\trtree_ctx_t rtree_ctx;\n\trtree_ctx_data_init(&rtree_ctx);\n\n\tedata_t *edata_d = alloc_edata();\n\tedata_init(edata_d, INVALID_ARENA_IND, NULL, 0, false, SC_NSIZES, 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\n\texpect_false(rtree_new(rtree, base, false),\n\t    \"Unexpected rtree_new() failure\");\n\n\tfor (unsigned i = 0; i < NSET; i++) {\n\t\tkeys[i] = (uintptr_t)gen_rand64(sfmt);\n\t\trtree_leaf_elm_t *elm = rtree_leaf_elm_lookup(tsdn, rtree,\n\t\t    &rtree_ctx, keys[i], false, true);\n\t\texpect_ptr_not_null(elm,\n\t\t    \"Unexpected rtree_leaf_elm_lookup() failure\");\n\t\trtree_contents_t contents;\n\t\tcontents.edata = edata_d;\n\t\tcontents.metadata.szind = SC_NSIZES;\n\t\tcontents.metadata.slab = false;\n\t\tcontents.metadata.is_head = false;\n\t\tcontents.metadata.state = edata_state_get(edata_d);\n\t\trtree_leaf_elm_write(tsdn, rtree, elm, contents);\n\t\texpect_ptr_eq(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    keys[i]).edata, edata_d,\n\t\t    \"rtree_edata_read() should return previously set value\");\n\t}\n\tfor (unsigned i = 0; i < NSET; i++) {\n\t\texpect_ptr_eq(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    keys[i]).edata, edata_d,\n\t\t    \"rtree_edata_read() should return previously set value, \"\n\t\t    \"i=%u\", i);\n\t}\n\n\tfor (unsigned i = 0; i < NSET; i++) {\n\t\trtree_clear(tsdn, rtree, &rtree_ctx, keys[i]);\n\t\texpect_ptr_null(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    keys[i]).edata,\n\t\t   \"rtree_edata_read() should return previously set value\");\n\t}\n\tfor (unsigned i = 0; i < NSET; i++) {\n\t\texpect_ptr_null(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    keys[i]).edata,\n\t\t    \"rtree_edata_read() should return previously set value\");\n\t}\n\n\tbase_delete(tsdn, 
base);\n\tfini_gen_rand(sfmt);\n#undef NSET\n#undef SEED\n}\nTEST_END\n\nstatic void\ntest_rtree_range_write(tsdn_t *tsdn, rtree_t *rtree, uintptr_t start,\n    uintptr_t end) {\n\trtree_ctx_t rtree_ctx;\n\trtree_ctx_data_init(&rtree_ctx);\n\n\tedata_t *edata_e = alloc_edata();\n\tedata_init(edata_e, INVALID_ARENA_IND, NULL, 0, false, SC_NSIZES, 0,\n\t    extent_state_active, false, false, EXTENT_PAI_PAC, EXTENT_NOT_HEAD);\n\trtree_contents_t contents;\n\tcontents.edata = edata_e;\n\tcontents.metadata.szind = SC_NSIZES;\n\tcontents.metadata.slab = false;\n\tcontents.metadata.is_head = false;\n\tcontents.metadata.state = extent_state_active;\n\n\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, start,\n\t    contents), \"Unexpected rtree_write() failure\");\n\texpect_false(rtree_write(tsdn, rtree, &rtree_ctx, end,\n\t    contents), \"Unexpected rtree_write() failure\");\n\n\trtree_write_range(tsdn, rtree, &rtree_ctx, start, end, contents);\n\tfor (uintptr_t i = 0; i < ((end - start) >> LG_PAGE); i++) {\n\t\texpect_ptr_eq(rtree_read(tsdn, rtree, &rtree_ctx,\n\t\t    start + (i << LG_PAGE)).edata, edata_e,\n\t\t    \"rtree_edata_read() should return previously set value\");\n\t}\n\trtree_clear_range(tsdn, rtree, &rtree_ctx, start, end);\n\trtree_leaf_elm_t *elm;\n\tfor (uintptr_t i = 0; i < ((end - start) >> LG_PAGE); i++) {\n\t\telm = rtree_leaf_elm_lookup(tsdn, rtree, &rtree_ctx,\n\t\t    start + (i << LG_PAGE), false, false);\n\t\texpect_ptr_not_null(elm, \"Should have been initialized.\");\n\t\texpect_ptr_null(rtree_leaf_elm_read(tsdn, rtree, elm,\n\t\t    false).edata, \"Should have been cleared.\");\n\t}\n}\n\nTEST_BEGIN(test_rtree_range) {\n\ttsdn_t *tsdn = tsdn_fetch();\n\tbase_t *base = base_new(tsdn, 0, &ehooks_default_extent_hooks,\n\t    /* metadata_use_hooks */ true);\n\texpect_ptr_not_null(base, \"Unexpected base_new failure\");\n\n\trtree_t *rtree = &test_rtree;\n\texpect_false(rtree_new(rtree, base, false),\n\t    \"Unexpected rtree_new() 
failure\");\n\n\t/* Not crossing rtree node boundary first. */\n\tuintptr_t start = ZU(1) << rtree_leaf_maskbits();\n\tuintptr_t end = start + (ZU(100) << LG_PAGE);\n\ttest_rtree_range_write(tsdn, rtree, start, end);\n\n\t/* Crossing rtree node boundary. */\n\tstart = (ZU(1) << rtree_leaf_maskbits()) - (ZU(10) << LG_PAGE);\n\tend = start + (ZU(100) << LG_PAGE);\n\tassert_ptr_ne((void *)rtree_leafkey(start), (void *)rtree_leafkey(end),\n\t    \"The range should span across two rtree nodes\");\n\ttest_rtree_range_write(tsdn, rtree, start, end);\n\n\tbase_delete(tsdn, base);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_rtree_read_empty,\n\t    test_rtree_extrema,\n\t    test_rtree_bits,\n\t    test_rtree_random,\n\t    test_rtree_range);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/safety_check.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/safety_check.h\"\n\n/*\n * Note that we get called through safety_check.sh, which turns on sampling for\n * everything.\n */\n\nbool fake_abort_called;\nvoid fake_abort(const char *message) {\n\t(void)message;\n\tfake_abort_called = true;\n}\n\nstatic void\nbuffer_overflow_write(char *ptr, size_t size) {\n\t/* Avoid overflow warnings. */\n\tvolatile size_t idx = size;\n\tptr[idx] = 0;\n}\n\nTEST_BEGIN(test_malloc_free_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! */\n\tchar* ptr = malloc(128);\n\tbuffer_overflow_write(ptr, 128);\n\tfree(ptr);\n\tsafety_check_set_abort(NULL);\n\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n}\nTEST_END\n\nTEST_BEGIN(test_mallocx_dallocx_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! */\n\tchar* ptr = mallocx(128, 0);\n\tbuffer_overflow_write(ptr, 128);\n\tdallocx(ptr, 0);\n\tsafety_check_set_abort(NULL);\n\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n}\nTEST_END\n\nTEST_BEGIN(test_malloc_sdallocx_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! */\n\tchar* ptr = malloc(128);\n\tbuffer_overflow_write(ptr, 128);\n\tsdallocx(ptr, 128, 0);\n\tsafety_check_set_abort(NULL);\n\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n}\nTEST_END\n\nTEST_BEGIN(test_realloc_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! 
*/\n\tchar* ptr = malloc(128);\n\tbuffer_overflow_write(ptr, 128);\n\tptr = realloc(ptr, 129);\n\tsafety_check_set_abort(NULL);\n\tfree(ptr);\n\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n}\nTEST_END\n\nTEST_BEGIN(test_rallocx_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! */\n\tchar* ptr = malloc(128);\n\tbuffer_overflow_write(ptr, 128);\n\tptr = rallocx(ptr, 129, 0);\n\tsafety_check_set_abort(NULL);\n\tfree(ptr);\n\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n}\nTEST_END\n\nTEST_BEGIN(test_xallocx_overflow) {\n\ttest_skip_if(!config_prof);\n\ttest_skip_if(!config_opt_safety_checks);\n\n\tsafety_check_set_abort(&fake_abort);\n\t/* Buffer overflow! */\n\tchar* ptr = malloc(128);\n\tbuffer_overflow_write(ptr, 128);\n\tsize_t result = xallocx(ptr, 129, 0, 0);\n\texpect_zu_eq(result, 128, \"\");\n\tfree(ptr);\n\texpect_b_eq(fake_abort_called, true, \"Redzone check didn't fire.\");\n\tfake_abort_called = false;\n\tsafety_check_set_abort(NULL);\n}\nTEST_END\n\nTEST_BEGIN(test_realloc_no_overflow) {\n\tchar* ptr = malloc(128);\n\tptr = realloc(ptr, 256);\n\tptr[128] = 0;\n\tptr[255] = 0;\n\tfree(ptr);\n\n\tptr = malloc(128);\n\tptr = realloc(ptr, 64);\n\tptr[63] = 0;\n\tptr[0] = 0;\n\tfree(ptr);\n}\nTEST_END\n\nTEST_BEGIN(test_rallocx_no_overflow) {\n\tchar* ptr = malloc(128);\n\tptr = rallocx(ptr, 256, 0);\n\tptr[128] = 0;\n\tptr[255] = 0;\n\tfree(ptr);\n\n\tptr = malloc(128);\n\tptr = rallocx(ptr, 64, 0);\n\tptr[63] = 0;\n\tptr[0] = 0;\n\tfree(ptr);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_malloc_free_overflow,\n\t    test_mallocx_dallocx_overflow,\n\t    test_malloc_sdallocx_overflow,\n\t    test_realloc_overflow,\n\t    test_rallocx_overflow,\n\t    test_xallocx_overflow,\n\t    test_realloc_no_overflow,\n\t    
test_rallocx_no_overflow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/safety_check.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,prof_active:true,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/san.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/arena_util.h\"\n#include \"test/san.h\"\n\n#include \"jemalloc/internal/san.h\"\n\nstatic void\nverify_extent_guarded(tsdn_t *tsdn, void *ptr) {\n\texpect_true(extent_is_guarded(tsdn, ptr),\n\t    \"All extents should be guarded.\");\n}\n\n#define MAX_SMALL_ALLOCATIONS 4096\nvoid *small_alloc[MAX_SMALL_ALLOCATIONS];\n\n/*\n * This test allocates page sized slabs and checks that every two slabs have\n * at least one page in between them. That page is supposed to be the guard\n * page.\n */\nTEST_BEGIN(test_guarded_small) {\n\ttest_skip_if(opt_prof);\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tunsigned npages = 16, pages_found = 0, ends_found = 0;\n\tVARIABLE_ARRAY(uintptr_t, pages, npages);\n\n\t/* Allocate to get sanitized pointers. */\n\tsize_t slab_sz = PAGE;\n\tsize_t sz = slab_sz / 8;\n\tunsigned n_alloc = 0;\n\twhile (n_alloc < MAX_SMALL_ALLOCATIONS) {\n\t\tvoid *ptr = malloc(sz);\n\t\texpect_ptr_not_null(ptr, \"Unexpected malloc() failure\");\n\t\tsmall_alloc[n_alloc] = ptr;\n\t\tverify_extent_guarded(tsdn, ptr);\n\t\tif ((uintptr_t)ptr % PAGE == 0) {\n\t\t\tassert_u_lt(pages_found, npages,\n\t\t\t    \"Unexpectedly large number of page aligned allocs\");\n\t\t\tpages[pages_found++] = (uintptr_t)ptr;\n\t\t}\n\t\tif (((uintptr_t)ptr + (uintptr_t)sz) % PAGE == 0) {\n\t\t\tends_found++;\n\t\t}\n\t\tn_alloc++;\n\t\tif (pages_found == npages && ends_found == npages) {\n\t\t\tbreak;\n\t\t}\n\t}\n\t/* Should have found the ptrs being checked for overflow and underflow. */\n\texpect_u_eq(pages_found, npages, \"Could not find the expected pages.\");\n\texpect_u_eq(ends_found, npages, \"Could not find the expected pages.\");\n\n\t/* Verify the pages are not contiguous, i.e. separated by guards. 
*/\n\tfor (unsigned i = 0; i < npages - 1; i++) {\n\t\tfor (unsigned j = i + 1; j < npages; j++) {\n\t\t\tuintptr_t ptr_diff = pages[i] > pages[j] ?\n\t\t\t    pages[i] - pages[j] : pages[j] - pages[i];\n\t\t\texpect_zu_ge((size_t)ptr_diff, slab_sz + PAGE,\n\t\t\t    \"There should be at least one page between \"\n\t\t\t    \"guarded slabs\");\n\t\t}\n\t}\n\n\tfor (unsigned i = 0; i < n_alloc + 1; i++) {\n\t\tfree(small_alloc[i]);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_guarded_large) {\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tunsigned nlarge = 32;\n\tVARIABLE_ARRAY(uintptr_t, large, nlarge);\n\n\t/* Allocate to get sanitized pointers. */\n\tsize_t large_sz = SC_LARGE_MINCLASS;\n\tfor (unsigned i = 0; i < nlarge; i++) {\n\t\tvoid *ptr = malloc(large_sz);\n\t\tverify_extent_guarded(tsdn, ptr);\n\t\texpect_ptr_not_null(ptr, \"Unexpected malloc() failure\");\n\t\tlarge[i] = (uintptr_t)ptr;\n\t}\n\n\t/* Verify the pages are not contiguous, i.e. separated by guards. */\n\tfor (unsigned i = 0; i < nlarge; i++) {\n\t\tfor (unsigned j = i + 1; j < nlarge; j++) {\n\t\t\tuintptr_t ptr_diff = large[i] > large[j] ?\n\t\t\t    large[i] - large[j] : large[j] - large[i];\n\t\t\texpect_zu_ge((size_t)ptr_diff, large_sz + 2 * PAGE,\n\t\t\t    \"There should be at least two pages between \"\n\t\t\t    \"guarded large allocations\");\n\t\t}\n\t}\n\n\tfor (unsigned i = 0; i < nlarge; i++) {\n\t\tfree((void *)large[i]);\n\t}\n}\nTEST_END\n\nstatic void\nverify_pdirty(unsigned arena_ind, uint64_t expected) {\n\tuint64_t pdirty = get_arena_pdirty(arena_ind);\n\texpect_u64_eq(pdirty, expected / PAGE,\n\t    \"Unexpected dirty page amount.\");\n}\n\nstatic void\nverify_pmuzzy(unsigned arena_ind, uint64_t expected) {\n\tuint64_t pmuzzy = get_arena_pmuzzy(arena_ind);\n\texpect_u64_eq(pmuzzy, expected / PAGE,\n\t    \"Unexpected muzzy page amount.\");\n}\n\nTEST_BEGIN(test_guarded_decay) {\n\tunsigned arena_ind = do_arena_create(-1, 
-1);\n\tdo_decay(arena_ind);\n\tdo_purge(arena_ind);\n\n\tverify_pdirty(arena_ind, 0);\n\tverify_pmuzzy(arena_ind, 0);\n\n\t/* Verify that guarded extents are counted as dirty. */\n\tsize_t sz1 = PAGE, sz2 = PAGE * 2;\n\t/* W/o maps_coalesce, guarded extents are unguarded eagerly. */\n\tsize_t add_guard_size = maps_coalesce ? 0 : SAN_PAGE_GUARDS_SIZE;\n\tgenerate_dirty(arena_ind, sz1);\n\tverify_pdirty(arena_ind, sz1 + add_guard_size);\n\tverify_pmuzzy(arena_ind, 0);\n\n\t/* Should reuse the first extent. */\n\tgenerate_dirty(arena_ind, sz1);\n\tverify_pdirty(arena_ind, sz1 + add_guard_size);\n\tverify_pmuzzy(arena_ind, 0);\n\n\t/* Should not reuse; expect new dirty pages. */\n\tgenerate_dirty(arena_ind, sz2);\n\tverify_pdirty(arena_ind, sz1 + sz2 + 2 * add_guard_size);\n\tverify_pmuzzy(arena_ind, 0);\n\n\ttsdn_t *tsdn = tsd_tsdn(tsd_fetch());\n\tint flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\n\t/* Should reuse dirty extents for the two mallocx. */\n\tvoid *p1 = do_mallocx(sz1, flags);\n\tverify_extent_guarded(tsdn, p1);\n\tverify_pdirty(arena_ind, sz2 + add_guard_size);\n\n\tvoid *p2 = do_mallocx(sz2, flags);\n\tverify_extent_guarded(tsdn, p2);\n\tverify_pdirty(arena_ind, 0);\n\tverify_pmuzzy(arena_ind, 0);\n\n\tdallocx(p1, flags);\n\tverify_pdirty(arena_ind, sz1 + add_guard_size);\n\tdallocx(p2, flags);\n\tverify_pdirty(arena_ind, sz1 + sz2 + 2 * add_guard_size);\n\tverify_pmuzzy(arena_ind, 0);\n\n\tdo_purge(arena_ind);\n\tverify_pdirty(arena_ind, 0);\n\tverify_pmuzzy(arena_ind, 0);\n\n\tif (config_stats) {\n\t\texpect_u64_eq(get_arena_npurge(arena_ind), 1,\n\t\t    \"Expected purging to occur\");\n\t\texpect_u64_eq(get_arena_dirty_npurge(arena_ind), 1,\n\t\t    \"Expected purging to occur\");\n\t\texpect_u64_eq(get_arena_dirty_purged(arena_ind),\n\t\t    (sz1 + sz2 + 2 * add_guard_size) / PAGE,\n\t\t    \"Expected purging to occur\");\n\t\texpect_u64_eq(get_arena_muzzy_npurge(arena_ind), 0,\n\t\t    \"Expected purging to occur\");\n\t}\n\n\tif (opt_retain) 
{\n\t\t/*\n\t\t * With retain, guarded extents are not mergable and will be\n\t\t * cached in ecache_retained.  They should be reused.\n\t\t */\n\t\tvoid *new_p1 = do_mallocx(sz1, flags);\n\t\tverify_extent_guarded(tsdn, p1);\n\t\texpect_ptr_eq(p1, new_p1, \"Expect to reuse p1\");\n\n\t\tvoid *new_p2 = do_mallocx(sz2, flags);\n\t\tverify_extent_guarded(tsdn, p2);\n\t\texpect_ptr_eq(p2, new_p2, \"Expect to reuse p2\");\n\n\t\tdallocx(new_p1, flags);\n\t\tverify_pdirty(arena_ind, sz1 + add_guard_size);\n\t\tdallocx(new_p2, flags);\n\t\tverify_pdirty(arena_ind, sz1 + sz2 + 2 * add_guard_size);\n\t\tverify_pmuzzy(arena_ind, 0);\n\t}\n\n\tdo_arena_destroy(arena_ind);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_guarded_small,\n\t    test_guarded_large,\n\t    test_guarded_decay);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/san.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"san_guard_large:1,san_guard_small:1\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/san_bump.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/arena_util.h\"\n\n#include \"jemalloc/internal/arena_structs.h\"\n#include \"jemalloc/internal/san_bump.h\"\n\nTEST_BEGIN(test_san_bump_alloc) {\n\ttest_skip_if(!maps_coalesce || !opt_retain);\n\n\ttsdn_t *tsdn = tsdn_fetch();\n\n\tsan_bump_alloc_t sba;\n\tsan_bump_alloc_init(&sba);\n\n\tunsigned arena_ind = do_arena_create(0, 0);\n\tassert_u_ne(arena_ind, UINT_MAX, \"Failed to create an arena\");\n\n\tarena_t *arena = arena_get(tsdn, arena_ind, false);\n\tpac_t *pac = &arena->pa_shard.pac;\n\n\tsize_t alloc_size = PAGE * 16;\n\tsize_t alloc_n = alloc_size / sizeof(unsigned);\n\tedata_t* edata = san_bump_alloc(tsdn, &sba, pac, pac_ehooks_get(pac),\n\t    alloc_size, /* zero */ false);\n\n\texpect_ptr_not_null(edata, \"Failed to allocate edata\");\n\texpect_u_eq(edata_arena_ind_get(edata), arena_ind,\n\t    \"Edata was assigned an incorrect arena id\");\n\texpect_zu_eq(edata_size_get(edata), alloc_size,\n\t    \"Allocated edata of incorrect size\");\n\texpect_false(edata_slab_get(edata),\n\t    \"Bump allocator incorrectly assigned 'slab' to true\");\n\texpect_true(edata_committed_get(edata), \"Edata is not committed\");\n\n\tvoid *ptr = edata_addr_get(edata);\n\texpect_ptr_not_null(ptr, \"Edata was assigned an invalid address\");\n\t/* Test that memory is allocated; no guard pages are misplaced */\n\tfor (unsigned i = 0; i < alloc_n; ++i) {\n\t\t((unsigned *)ptr)[i] = 1;\n\t}\n\n\tsize_t alloc_size2 = PAGE * 28;\n\tsize_t alloc_n2 = alloc_size / sizeof(unsigned);\n\tedata_t *edata2 = san_bump_alloc(tsdn, &sba, pac, pac_ehooks_get(pac),\n\t    alloc_size2, /* zero */ true);\n\n\texpect_ptr_not_null(edata2, \"Failed to allocate edata\");\n\texpect_u_eq(edata_arena_ind_get(edata2), arena_ind,\n\t    \"Edata was assigned an incorrect arena id\");\n\texpect_zu_eq(edata_size_get(edata2), alloc_size2,\n\t    \"Allocated edata of incorrect size\");\n\texpect_false(edata_slab_get(edata2),\n\t    \"Bump 
allocator incorrectly assigned 'slab' to true\");\n\texpect_true(edata_committed_get(edata2), \"Edata is not committed\");\n\n\tvoid *ptr2 = edata_addr_get(edata2);\n\texpect_ptr_not_null(ptr2, \"Edata was assigned an invalid address\");\n\n\tuintptr_t ptrdiff = ptr2 > ptr ? (uintptr_t)ptr2 - (uintptr_t)ptr\n\t    : (uintptr_t)ptr - (uintptr_t)ptr2;\n\tsize_t between_allocs = (size_t)ptrdiff - alloc_size;\n\n\texpect_zu_ge(between_allocs, PAGE,\n\t    \"Guard page between allocs is missing\");\n\n\tfor (unsigned i = 0; i < alloc_n2; ++i) {\n\t\texpect_u_eq(((unsigned *)ptr2)[i], 0, \"Memory is not zeroed\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_large_alloc_size) {\n\ttest_skip_if(!maps_coalesce || !opt_retain);\n\n\ttsdn_t *tsdn = tsdn_fetch();\n\n\tsan_bump_alloc_t sba;\n\tsan_bump_alloc_init(&sba);\n\n\tunsigned arena_ind = do_arena_create(0, 0);\n\tassert_u_ne(arena_ind, UINT_MAX, \"Failed to create an arena\");\n\n\tarena_t *arena = arena_get(tsdn, arena_ind, false);\n\tpac_t *pac = &arena->pa_shard.pac;\n\n\tsize_t alloc_size = SBA_RETAINED_ALLOC_SIZE * 2;\n\tedata_t* edata = san_bump_alloc(tsdn, &sba, pac, pac_ehooks_get(pac),\n\t    alloc_size, /* zero */ false);\n\texpect_u_eq(edata_arena_ind_get(edata), arena_ind,\n\t    \"Edata was assigned an incorrect arena id\");\n\texpect_zu_eq(edata_size_get(edata), alloc_size,\n\t    \"Allocated edata of incorrect size\");\n\texpect_false(edata_slab_get(edata),\n\t    \"Bump allocator incorrectly assigned 'slab' to true\");\n\texpect_true(edata_committed_get(edata), \"Edata is not committed\");\n\n\tvoid *ptr = edata_addr_get(edata);\n\texpect_ptr_not_null(ptr, \"Edata was assigned an invalid address\");\n\t/* Test that memory is allocated; no guard pages are misplaced */\n\tfor (unsigned i = 0; i < alloc_size / PAGE; ++i) {\n\t\t*((char *)ptr + PAGE * i) = 1;\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_san_bump_alloc,\n\t    test_large_alloc_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/sc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_update_slab_size) {\n\tsc_data_t data;\n\tmemset(&data, 0, sizeof(data));\n\tsc_data_init(&data);\n\tsc_t *tiny = &data.sc[0];\n\tsize_t tiny_size = (ZU(1) << tiny->lg_base)\n\t    + (ZU(tiny->ndelta) << tiny->lg_delta);\n\tsize_t pgs_too_big = (tiny_size * BITMAP_MAXBITS + PAGE - 1) / PAGE + 1;\n\tsc_data_update_slab_size(&data, tiny_size, tiny_size, (int)pgs_too_big);\n\texpect_zu_lt((size_t)tiny->pgs, pgs_too_big, \"Allowed excessive pages\");\n\n\tsc_data_update_slab_size(&data, 1, 10 * PAGE, 1);\n\tfor (int i = 0; i < data.nbins; i++) {\n\t\tsc_t *sc = &data.sc[i];\n\t\tsize_t reg_size = (ZU(1) << sc->lg_base)\n\t\t    + (ZU(sc->ndelta) << sc->lg_delta);\n\t\tif (reg_size <= PAGE) {\n\t\t\texpect_d_eq(sc->pgs, 1, \"Ignored valid page size hint\");\n\t\t} else {\n\t\t\texpect_d_gt(sc->pgs, 1,\n\t\t\t    \"Allowed invalid page size hint\");\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_update_slab_size);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/sec.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/sec.h\"\n\ntypedef struct pai_test_allocator_s pai_test_allocator_t;\nstruct pai_test_allocator_s {\n\tpai_t pai;\n\tbool alloc_fail;\n\tsize_t alloc_count;\n\tsize_t alloc_batch_count;\n\tsize_t dalloc_count;\n\tsize_t dalloc_batch_count;\n\t/*\n\t * We use a simple bump allocator as the implementation.  This isn't\n\t * *really* correct, since we may allow expansion into a subsequent\n\t * allocation, but it's not like the SEC is really examining the\n\t * pointers it gets back; this is mostly just helpful for debugging.\n\t */\n\tuintptr_t next_ptr;\n\tsize_t expand_count;\n\tbool expand_return_value;\n\tsize_t shrink_count;\n\tbool shrink_return_value;\n};\n\nstatic void\ntest_sec_init(sec_t *sec, pai_t *fallback, size_t nshards, size_t max_alloc,\n    size_t max_bytes) {\n\tsec_opts_t opts;\n\topts.nshards = 1;\n\topts.max_alloc = max_alloc;\n\topts.max_bytes = max_bytes;\n\t/*\n\t * Just choose reasonable defaults for these; most tests don't care so\n\t * long as they're something reasonable.\n\t */\n\topts.bytes_after_flush = max_bytes / 2;\n\topts.batch_fill_extra = 4;\n\n\t/*\n\t * We end up leaking this base, but that's fine; this test is\n\t * short-running, and SECs are arena-scoped in reality.\n\t */\n\tbase_t *base = base_new(TSDN_NULL, /* ind */ 123,\n\t    &ehooks_default_extent_hooks, /* metadata_use_hooks */ true);\n\n\tbool err = sec_init(TSDN_NULL, sec, base, fallback, &opts);\n\tassert_false(err, \"Unexpected initialization failure\");\n\tassert_u_ge(sec->npsizes, 0, \"Zero size classes allowed for caching\");\n}\n\nstatic inline edata_t *\npai_test_allocator_alloc(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t alignment, bool zero, bool guarded, bool frequent_reuse,\n    bool *deferred_work_generated) {\n\tassert(!guarded);\n\tpai_test_allocator_t *ta = (pai_test_allocator_t *)self;\n\tif (ta->alloc_fail) {\n\t\treturn NULL;\n\t}\n\tedata_t *edata = 
malloc(sizeof(edata_t));\n\tassert_ptr_not_null(edata, \"\");\n\tta->next_ptr += alignment - 1;\n\tedata_init(edata, /* arena_ind */ 0,\n\t    (void *)(ta->next_ptr & ~(alignment - 1)), size,\n\t    /* slab */ false,\n\t    /* szind */ 0, /* sn */ 1, extent_state_active, /* zero */ zero,\n\t    /* committed */ true, /* ranged */ false, EXTENT_NOT_HEAD);\n\tta->next_ptr += size;\n\tta->alloc_count++;\n\treturn edata;\n}\n\nstatic inline size_t\npai_test_allocator_alloc_batch(tsdn_t *tsdn, pai_t *self, size_t size,\n    size_t nallocs, edata_list_active_t *results,\n    bool *deferred_work_generated) {\n\tpai_test_allocator_t *ta = (pai_test_allocator_t *)self;\n\tif (ta->alloc_fail) {\n\t\treturn 0;\n\t}\n\tfor (size_t i = 0; i < nallocs; i++) {\n\t\tedata_t *edata = malloc(sizeof(edata_t));\n\t\tassert_ptr_not_null(edata, \"\");\n\t\tedata_init(edata, /* arena_ind */ 0,\n\t\t    (void *)ta->next_ptr, size,\n\t\t    /* slab */ false, /* szind */ 0, /* sn */ 1,\n\t\t    extent_state_active, /* zero */ false, /* committed */ true,\n\t\t    /* ranged */ false, EXTENT_NOT_HEAD);\n\t\tta->next_ptr += size;\n\t\tta->alloc_batch_count++;\n\t\tedata_list_active_append(results, edata);\n\t}\n\treturn nallocs;\n}\n\nstatic bool\npai_test_allocator_expand(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool zero,\n    bool *deferred_work_generated) {\n\tpai_test_allocator_t *ta = (pai_test_allocator_t *)self;\n\tta->expand_count++;\n\treturn ta->expand_return_value;\n}\n\nstatic bool\npai_test_allocator_shrink(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    size_t old_size, size_t new_size, bool *deferred_work_generated) {\n\tpai_test_allocator_t *ta = (pai_test_allocator_t *)self;\n\tta->shrink_count++;\n\treturn ta->shrink_return_value;\n}\n\nstatic void\npai_test_allocator_dalloc(tsdn_t *tsdn, pai_t *self, edata_t *edata,\n    bool *deferred_work_generated) {\n\tpai_test_allocator_t *ta = (pai_test_allocator_t 
*)self;\n\tta->dalloc_count++;\n\tfree(edata);\n}\n\nstatic void\npai_test_allocator_dalloc_batch(tsdn_t *tsdn, pai_t *self,\n    edata_list_active_t *list, bool *deferred_work_generated) {\n\tpai_test_allocator_t *ta = (pai_test_allocator_t *)self;\n\n\tedata_t *edata;\n\twhile ((edata = edata_list_active_first(list)) != NULL) {\n\t\tedata_list_active_remove(list, edata);\n\t\tta->dalloc_batch_count++;\n\t\tfree(edata);\n\t}\n}\n\nstatic inline void\npai_test_allocator_init(pai_test_allocator_t *ta) {\n\tta->alloc_fail = false;\n\tta->alloc_count = 0;\n\tta->alloc_batch_count = 0;\n\tta->dalloc_count = 0;\n\tta->dalloc_batch_count = 0;\n\t/* Just don't start the edata at 0. */\n\tta->next_ptr = 10 * PAGE;\n\tta->expand_count = 0;\n\tta->expand_return_value = false;\n\tta->shrink_count = 0;\n\tta->shrink_return_value = false;\n\tta->pai.alloc = &pai_test_allocator_alloc;\n\tta->pai.alloc_batch = &pai_test_allocator_alloc_batch;\n\tta->pai.expand = &pai_test_allocator_expand;\n\tta->pai.shrink = &pai_test_allocator_shrink;\n\tta->pai.dalloc = &pai_test_allocator_dalloc;\n\tta->pai.dalloc_batch = &pai_test_allocator_dalloc_batch;\n}\n\nTEST_BEGIN(test_reuse) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/*\n\t * We can't use the \"real\" tsd, since we malloc within the test\n\t * allocator hooks; we'd get lock inversion crashes.  Eventually, we\n\t * should have a way to mock tsds, but for now just don't do any\n\t * lock-order checking.\n\t */\n\ttsdn_t *tsdn = TSDN_NULL;\n\t/*\n\t * 11 allocs apiece of 1-PAGE and 2-PAGE objects means that we should be\n\t * able to get to 33 pages in the cache before triggering a flush.  
We\n\t * set the flush limit to twice this amount, to avoid accidentally\n\t * triggering a flush caused by the batch-allocation down the cache fill\n\t * pathway disrupting ordering.\n\t */\n\tenum { NALLOCS = 11 };\n\tedata_t *one_page[NALLOCS];\n\tedata_t *two_page[NALLOCS];\n\tbool deferred_work_generated = false;\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ 2 * PAGE,\n\t    /* max_bytes */ 2 * (NALLOCS * PAGE + NALLOCS * 2 * PAGE));\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tone_page[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_not_null(one_page[i], \"Unexpected alloc failure\");\n\t\ttwo_page[i] = pai_alloc(tsdn, &sec.pai, 2 * PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_not_null(two_page[i], \"Unexpected alloc failure\");\n\t}\n\texpect_zu_eq(0, ta.alloc_count, \"Should be using batch allocs\");\n\tsize_t max_allocs = ta.alloc_count + ta.alloc_batch_count;\n\texpect_zu_le(2 * NALLOCS, max_allocs,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of deallocations\");\n\t/*\n\t * Free in a different order than we allocated, to make sure free-list\n\t * separation works correctly.\n\t */\n\tfor (int i = NALLOCS - 1; i >= 0; i--) {\n\t\tpai_dalloc(tsdn, &sec.pai, one_page[i],\n\t\t    &deferred_work_generated);\n\t}\n\tfor (int i = NALLOCS - 1; i >= 0; i--) {\n\t\tpai_dalloc(tsdn, &sec.pai, two_page[i],\n\t\t    &deferred_work_generated);\n\t}\n\texpect_zu_eq(max_allocs, ta.alloc_count + ta.alloc_batch_count,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of deallocations\");\n\t/*\n\t * Check that the n'th most recently deallocated extent is returned for\n\t * the n'th alloc request of a given size.\n\t */\n\tfor 
(int i = 0; i < NALLOCS; i++) {\n\t\tedata_t *alloc1 = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\tedata_t *alloc2 = pai_alloc(tsdn, &sec.pai, 2 * PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_eq(one_page[i], alloc1,\n\t\t    \"Got unexpected allocation\");\n\t\texpect_ptr_eq(two_page[i], alloc2,\n\t\t    \"Got unexpected allocation\");\n\t}\n\texpect_zu_eq(max_allocs, ta.alloc_count + ta.alloc_batch_count,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of allocations\");\n}\nTEST_END\n\n\nTEST_BEGIN(test_auto_flush) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/* See the note above -- we can't use the real tsd. */\n\ttsdn_t *tsdn = TSDN_NULL;\n\t/*\n\t * 10-allocs apiece of 1-PAGE and 2-PAGE objects means that we should be\n\t * able to get to 30 pages in the cache before triggering a flush.  
The\n\t * choice of NALLOCS here is chosen to match the batch allocation\n\t * default (4 extra + 1 == 5; so 10 allocations leaves the cache exactly\n\t * empty, even in the presence of batch allocation on fill).\n\t * Eventually, once our allocation batching strategies become smarter,\n\t * this should change.\n\t */\n\tenum { NALLOCS = 10 };\n\tedata_t *extra_alloc;\n\tedata_t *allocs[NALLOCS];\n\tbool deferred_work_generated = false;\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ PAGE,\n\t    /* max_bytes */ NALLOCS * PAGE);\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_not_null(allocs[i], \"Unexpected alloc failure\");\n\t}\n\textra_alloc = pai_alloc(tsdn, &sec.pai, PAGE, PAGE, /* zero */ false,\n\t    /* guarded */ false, /* frequent_reuse */ false,\n\t    &deferred_work_generated);\n\texpect_ptr_not_null(extra_alloc, \"Unexpected alloc failure\");\n\tsize_t max_allocs = ta.alloc_count + ta.alloc_batch_count;\n\texpect_zu_le(NALLOCS + 1, max_allocs,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of allocations\");\n\t/* Free until the SEC is full, but should not have flushed yet. */\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[i], &deferred_work_generated);\n\t}\n\texpect_zu_le(NALLOCS + 1, max_allocs,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of allocations\");\n\t/*\n\t * Free the extra allocation; this should trigger a flush.  
The internal\n\t * flushing logic is allowed to get complicated; for now, we rely on our\n\t * whitebox knowledge of the fact that the SEC flushes bins in their\n\t * entirety when it decides to do so, and it has only one bin active\n\t * right now.\n\t */\n\tpai_dalloc(tsdn, &sec.pai, extra_alloc, &deferred_work_generated);\n\texpect_zu_eq(max_allocs, ta.alloc_count + ta.alloc_batch_count,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of (non-batch) deallocations\");\n\texpect_zu_eq(NALLOCS + 1, ta.dalloc_batch_count,\n\t    \"Incorrect number of batch deallocations\");\n}\nTEST_END\n\n/*\n * A disable and a flush are *almost* equivalent; the only difference is what\n * happens afterwards; disabling disallows all future caching as well.\n */\nstatic void\ndo_disable_flush_test(bool is_disable) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/* See the note above -- we can't use the real tsd. */\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tenum { NALLOCS = 11 };\n\tedata_t *allocs[NALLOCS];\n\tbool deferred_work_generated = false;\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ PAGE,\n\t    /* max_bytes */ NALLOCS * PAGE);\n\tfor (int i = 0; i < NALLOCS; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_ptr_not_null(allocs[i], \"Unexpected alloc failure\");\n\t}\n\t/* Free all but the last alloc. 
*/\n\tfor (int i = 0; i < NALLOCS - 1; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[i], &deferred_work_generated);\n\t}\n\tsize_t max_allocs = ta.alloc_count + ta.alloc_batch_count;\n\n\texpect_zu_le(NALLOCS, max_allocs, \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of allocations\");\n\n\tif (is_disable) {\n\t\tsec_disable(tsdn, &sec);\n\t} else {\n\t\tsec_flush(tsdn, &sec);\n\t}\n\n\texpect_zu_eq(max_allocs, ta.alloc_count + ta.alloc_batch_count,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(0, ta.dalloc_count,\n\t    \"Incorrect number of (non-batch) deallocations\");\n\texpect_zu_le(NALLOCS - 1, ta.dalloc_batch_count,\n\t    \"Incorrect number of batch deallocations\");\n\tsize_t old_dalloc_batch_count = ta.dalloc_batch_count;\n\n\t/*\n\t * If we free into a disabled SEC, it should forward to the fallback.\n\t * Otherwise, the SEC should accept the allocation.\n\t */\n\tpai_dalloc(tsdn, &sec.pai, allocs[NALLOCS - 1],\n\t    &deferred_work_generated);\n\n\texpect_zu_eq(max_allocs, ta.alloc_count + ta.alloc_batch_count,\n\t    \"Incorrect number of allocations\");\n\texpect_zu_eq(is_disable ? 1 : 0, ta.dalloc_count,\n\t    \"Incorrect number of (non-batch) deallocations\");\n\texpect_zu_eq(old_dalloc_batch_count, ta.dalloc_batch_count,\n\t    \"Incorrect number of batch deallocations\");\n}\n\nTEST_BEGIN(test_disable) {\n\tdo_disable_flush_test(/* is_disable */ true);\n}\nTEST_END\n\nTEST_BEGIN(test_flush) {\n\tdo_disable_flush_test(/* is_disable */ false);\n}\nTEST_END\n\nTEST_BEGIN(test_max_alloc_respected) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/* See the note above -- we can't use the real tsd. 
*/\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tsize_t max_alloc = 2 * PAGE;\n\tsize_t attempted_alloc = 3 * PAGE;\n\n\tbool deferred_work_generated = false;\n\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, max_alloc,\n\t    /* max_bytes */ 1000 * PAGE);\n\n\tfor (size_t i = 0; i < 100; i++) {\n\t\texpect_zu_eq(i, ta.alloc_count,\n\t\t    \"Incorrect number of allocations\");\n\t\texpect_zu_eq(i, ta.dalloc_count,\n\t\t    \"Incorrect number of deallocations\");\n\t\tedata_t *edata = pai_alloc(tsdn, &sec.pai, attempted_alloc,\n\t\t    PAGE, /* zero */ false, /* guarded */ false,\n\t\t    /* frequent_reuse */ false, &deferred_work_generated);\n\t\texpect_ptr_not_null(edata, \"Unexpected alloc failure\");\n\t\texpect_zu_eq(i + 1, ta.alloc_count,\n\t\t    \"Incorrect number of allocations\");\n\t\texpect_zu_eq(i, ta.dalloc_count,\n\t\t    \"Incorrect number of deallocations\");\n\t\tpai_dalloc(tsdn, &sec.pai, edata, &deferred_work_generated);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_expand_shrink_delegate) {\n\t/*\n\t * Expand and shrink shouldn't affect sec state; they should just\n\t * delegate to the fallback PAI.\n\t */\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/* See the note above -- we can't use the real tsd. 
*/\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tbool deferred_work_generated = false;\n\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ 10 * PAGE,\n\t    /* max_bytes */ 1000 * PAGE);\n\tedata_t *edata = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */ false,\n\t    &deferred_work_generated);\n\texpect_ptr_not_null(edata, \"Unexpected alloc failure\");\n\n\tbool err = pai_expand(tsdn, &sec.pai, edata, PAGE, 4 * PAGE,\n\t    /* zero */ false, &deferred_work_generated);\n\texpect_false(err, \"Unexpected expand failure\");\n\texpect_zu_eq(1, ta.expand_count, \"\");\n\tta.expand_return_value = true;\n\terr = pai_expand(tsdn, &sec.pai, edata, 4 * PAGE, 3 * PAGE,\n\t    /* zero */ false, &deferred_work_generated);\n\texpect_true(err, \"Unexpected expand success\");\n\texpect_zu_eq(2, ta.expand_count, \"\");\n\n\terr = pai_shrink(tsdn, &sec.pai, edata, 4 * PAGE, 2 * PAGE,\n\t    &deferred_work_generated);\n\texpect_false(err, \"Unexpected shrink failure\");\n\texpect_zu_eq(1, ta.shrink_count, \"\");\n\tta.shrink_return_value = true;\n\terr = pai_shrink(tsdn, &sec.pai, edata, 2 * PAGE, PAGE,\n\t    &deferred_work_generated);\n\texpect_true(err, \"Unexpected shrink success\");\n\texpect_zu_eq(2, ta.shrink_count, \"\");\n}\nTEST_END\n\nTEST_BEGIN(test_nshards_0) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\t/* See the note above -- we can't use the real tsd. 
*/\n\ttsdn_t *tsdn = TSDN_NULL;\n\tbase_t *base = base_new(TSDN_NULL, /* ind */ 123,\n\t    &ehooks_default_extent_hooks, /* metadata_use_hooks */ true);\n\n\tsec_opts_t opts = SEC_OPTS_DEFAULT;\n\topts.nshards = 0;\n\tsec_init(TSDN_NULL, &sec, base, &ta.pai, &opts);\n\n\tbool deferred_work_generated = false;\n\tedata_t *edata = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */ false,\n\t    &deferred_work_generated);\n\tpai_dalloc(tsdn, &sec.pai, edata, &deferred_work_generated);\n\n\t/* Both operations should have gone directly to the fallback. */\n\texpect_zu_eq(1, ta.alloc_count, \"\");\n\texpect_zu_eq(1, ta.dalloc_count, \"\");\n}\nTEST_END\n\nstatic void\nexpect_stats_pages(tsdn_t *tsdn, sec_t *sec, size_t npages) {\n\tsec_stats_t stats;\n\t/*\n\t * Check that the stats merging accumulates rather than overwrites by\n\t * putting some (made up) data there to begin with.\n\t */\n\tstats.bytes = 123;\n\tsec_stats_merge(tsdn, sec, &stats);\n\tassert_zu_le(npages * PAGE + 123, stats.bytes, \"\");\n}\n\nTEST_BEGIN(test_stats_simple) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\n\t/* See the note above -- we can't use the real tsd. */\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tenum {\n\t\tNITERS = 100,\n\t\tFLUSH_PAGES = 20,\n\t};\n\n\tbool deferred_work_generated = false;\n\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ PAGE,\n\t    /* max_bytes */ FLUSH_PAGES * PAGE);\n\n\tedata_t *allocs[FLUSH_PAGES];\n\tfor (size_t i = 0; i < FLUSH_PAGES; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_stats_pages(tsdn, &sec, 0);\n\t}\n\n\t/* Increase and decrease, without flushing. 
*/\n\tfor (size_t i = 0; i < NITERS; i++) {\n\t\tfor (size_t j = 0; j < FLUSH_PAGES / 2; j++) {\n\t\t\tpai_dalloc(tsdn, &sec.pai, allocs[j],\n\t\t\t    &deferred_work_generated);\n\t\t\texpect_stats_pages(tsdn, &sec, j + 1);\n\t\t}\n\t\tfor (size_t j = 0; j < FLUSH_PAGES / 2; j++) {\n\t\t\tallocs[j] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t\t    /* zero */ false, /* guarded */ false,\n\t\t\t    /* frequent_reuse */ false,\n\t\t\t    &deferred_work_generated);\n\t\t\texpect_stats_pages(tsdn, &sec, FLUSH_PAGES / 2 - j - 1);\n\t\t}\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_stats_auto_flush) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\n\t/* See the note above -- we can't use the real tsd. */\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tenum {\n\t\tFLUSH_PAGES = 10,\n\t};\n\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ PAGE,\n\t    /* max_bytes */ FLUSH_PAGES * PAGE);\n\n\tedata_t *extra_alloc0;\n\tedata_t *extra_alloc1;\n\tedata_t *allocs[2 * FLUSH_PAGES];\n\n\tbool deferred_work_generated = false;\n\n\textra_alloc0 = pai_alloc(tsdn, &sec.pai, PAGE, PAGE, /* zero */ false,\n\t    /* guarded */ false, /* frequent_reuse */ false,\n\t    &deferred_work_generated);\n\textra_alloc1 = pai_alloc(tsdn, &sec.pai, PAGE, PAGE, /* zero */ false,\n\t    /* guarded */ false, /* frequent_reuse */ false,\n\t    &deferred_work_generated);\n\n\tfor (size_t i = 0; i < 2 * FLUSH_PAGES; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t}\n\n\tfor (size_t i = 0; i < FLUSH_PAGES; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[i], &deferred_work_generated);\n\t}\n\tpai_dalloc(tsdn, &sec.pai, extra_alloc0, &deferred_work_generated);\n\n\t/* Flush the remaining pages; stats should still work. 
*/\n\tfor (size_t i = 0; i < FLUSH_PAGES; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[FLUSH_PAGES + i],\n\t\t    &deferred_work_generated);\n\t}\n\n\tpai_dalloc(tsdn, &sec.pai, extra_alloc1, &deferred_work_generated);\n\n\texpect_stats_pages(tsdn, &sec, ta.alloc_count + ta.alloc_batch_count\n\t    - ta.dalloc_count - ta.dalloc_batch_count);\n}\nTEST_END\n\nTEST_BEGIN(test_stats_manual_flush) {\n\tpai_test_allocator_t ta;\n\tpai_test_allocator_init(&ta);\n\tsec_t sec;\n\n\t/* See the note above -- we can't use the real tsd. */\n\ttsdn_t *tsdn = TSDN_NULL;\n\n\tenum {\n\t\tFLUSH_PAGES = 10,\n\t};\n\n\ttest_sec_init(&sec, &ta.pai, /* nshards */ 1, /* max_alloc */ PAGE,\n\t    /* max_bytes */ FLUSH_PAGES * PAGE);\n\n\tbool deferred_work_generated = false;\n\tedata_t *allocs[FLUSH_PAGES];\n\tfor (size_t i = 0; i < FLUSH_PAGES; i++) {\n\t\tallocs[i] = pai_alloc(tsdn, &sec.pai, PAGE, PAGE,\n\t\t    /* zero */ false, /* guarded */ false, /* frequent_reuse */\n\t\t    false, &deferred_work_generated);\n\t\texpect_stats_pages(tsdn, &sec, 0);\n\t}\n\n\t/* Dalloc the first half of the allocations. */\n\tfor (size_t i = 0; i < FLUSH_PAGES / 2; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[i], &deferred_work_generated);\n\t\texpect_stats_pages(tsdn, &sec, i + 1);\n\t}\n\n\tsec_flush(tsdn, &sec);\n\texpect_stats_pages(tsdn, &sec, 0);\n\n\t/* Flush the remaining pages. */\n\tfor (size_t i = 0; i < FLUSH_PAGES / 2; i++) {\n\t\tpai_dalloc(tsdn, &sec.pai, allocs[FLUSH_PAGES / 2 + i],\n\t\t    &deferred_work_generated);\n\t\texpect_stats_pages(tsdn, &sec, i + 1);\n\t}\n\tsec_disable(tsdn, &sec);\n\texpect_stats_pages(tsdn, &sec, 0);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_reuse,\n\t    test_auto_flush,\n\t    test_disable,\n\t    test_flush,\n\t    test_max_alloc_respected,\n\t    test_expand_shrink_delegate,\n\t    test_nshards_0,\n\t    test_stats_simple,\n\t    test_stats_auto_flush,\n\t    test_stats_manual_flush);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/seq.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/seq.h\"\n\ntypedef struct data_s data_t;\nstruct data_s {\n\tint arr[10];\n};\n\nstatic void\nset_data(data_t *data, int num) {\n\tfor (int i = 0; i < 10; i++) {\n\t\tdata->arr[i] = num;\n\t}\n}\n\nstatic void\nexpect_data(data_t *data) {\n\tint num = data->arr[0];\n\tfor (int i = 0; i < 10; i++) {\n\t\texpect_d_eq(num, data->arr[i], \"Data consistency error\");\n\t}\n}\n\nseq_define(data_t, data)\n\ntypedef struct thd_data_s thd_data_t;\nstruct thd_data_s {\n\tseq_data_t data;\n};\n\nstatic void *\nseq_reader_thd(void *arg) {\n\tthd_data_t *thd_data = (thd_data_t *)arg;\n\tint iter = 0;\n\tdata_t local_data;\n\twhile (iter < 1000 * 1000 - 1) {\n\t\tbool success = seq_try_load_data(&local_data, &thd_data->data);\n\t\tif (success) {\n\t\t\texpect_data(&local_data);\n\t\t\texpect_d_le(iter, local_data.arr[0],\n\t\t\t    \"Seq read went back in time.\");\n\t\t\titer = local_data.arr[0];\n\t\t}\n\t}\n\treturn NULL;\n}\n\nstatic void *\nseq_writer_thd(void *arg) {\n\tthd_data_t *thd_data = (thd_data_t *)arg;\n\tdata_t local_data;\n\tmemset(&local_data, 0, sizeof(local_data));\n\tfor (int i = 0; i < 1000 * 1000; i++) {\n\t\tset_data(&local_data, i);\n\t\tseq_store_data(&thd_data->data, &local_data);\n\t}\n\treturn NULL;\n}\n\nTEST_BEGIN(test_seq_threaded) {\n\tthd_data_t thd_data;\n\tmemset(&thd_data, 0, sizeof(thd_data));\n\n\tthd_t reader;\n\tthd_t writer;\n\n\tthd_create(&reader, seq_reader_thd, &thd_data);\n\tthd_create(&writer, seq_writer_thd, &thd_data);\n\n\tthd_join(reader, NULL);\n\tthd_join(writer, NULL);\n}\nTEST_END\n\nTEST_BEGIN(test_seq_simple) {\n\tdata_t data;\n\tseq_data_t seq;\n\tmemset(&seq, 0, sizeof(seq));\n\tfor (int i = 0; i < 1000 * 1000; i++) {\n\t\tset_data(&data, i);\n\t\tseq_store_data(&seq, &data);\n\t\tset_data(&data, 0);\n\t\tbool success = seq_try_load_data(&data, &seq);\n\t\texpect_b_eq(success, true, \"Failed non-racing 
read\");\n\t\texpect_data(&data);\n\t}\n}\nTEST_END\n\nint main(void) {\n\treturn test_no_reentrancy(\n\t    test_seq_simple,\n\t    test_seq_threaded);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/size_check.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/safety_check.h\"\n\nbool fake_abort_called;\nvoid fake_abort(const char *message) {\n\t(void)message;\n\tfake_abort_called = true;\n}\n\n#define SMALL_SIZE1 SC_SMALL_MAXCLASS\n#define SMALL_SIZE2 (SC_SMALL_MAXCLASS / 2)\n\n#define LARGE_SIZE1 SC_LARGE_MINCLASS\n#define LARGE_SIZE2 (LARGE_SIZE1 * 2)\n\nvoid *\ntest_invalid_size_pre(size_t sz) {\n\tsafety_check_set_abort(&fake_abort);\n\n\tfake_abort_called = false;\n\tvoid *ptr = malloc(sz);\n\tassert_ptr_not_null(ptr, \"Unexpected failure\");\n\n\treturn ptr;\n}\n\nvoid\ntest_invalid_size_post(void) {\n\texpect_true(fake_abort_called, \"Safety check didn't fire\");\n\tsafety_check_set_abort(NULL);\n}\n\nTEST_BEGIN(test_invalid_size_sdallocx) {\n\ttest_skip_if(!config_opt_size_checks);\n\n\tvoid *ptr = test_invalid_size_pre(SMALL_SIZE1);\n\tsdallocx(ptr, SMALL_SIZE2, 0);\n\ttest_invalid_size_post();\n\n\tptr = test_invalid_size_pre(LARGE_SIZE1);\n\tsdallocx(ptr, LARGE_SIZE2, 0);\n\ttest_invalid_size_post();\n}\nTEST_END\n\nTEST_BEGIN(test_invalid_size_sdallocx_nonzero_flag) {\n\ttest_skip_if(!config_opt_size_checks);\n\n\tvoid *ptr = test_invalid_size_pre(SMALL_SIZE1);\n\tsdallocx(ptr, SMALL_SIZE2, MALLOCX_TCACHE_NONE);\n\ttest_invalid_size_post();\n\n\tptr = test_invalid_size_pre(LARGE_SIZE1);\n\tsdallocx(ptr, LARGE_SIZE2, MALLOCX_TCACHE_NONE);\n\ttest_invalid_size_post();\n}\nTEST_END\n\nTEST_BEGIN(test_invalid_size_sdallocx_noflags) {\n\ttest_skip_if(!config_opt_size_checks);\n\n\tvoid *ptr = test_invalid_size_pre(SMALL_SIZE1);\n\tje_sdallocx_noflags(ptr, SMALL_SIZE2);\n\ttest_invalid_size_post();\n\n\tptr = test_invalid_size_pre(LARGE_SIZE1);\n\tje_sdallocx_noflags(ptr, LARGE_SIZE2);\n\ttest_invalid_size_post();\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_invalid_size_sdallocx,\n\t    test_invalid_size_sdallocx_nonzero_flag,\n\t    test_invalid_size_sdallocx_noflags);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/size_check.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:false\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/size_classes.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic size_t\nget_max_size_class(void) {\n\tunsigned nlextents;\n\tsize_t mib[4];\n\tsize_t sz, miblen, max_size_class;\n\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.nlextents\", (void *)&nlextents, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() error\");\n\n\tmiblen = sizeof(mib) / sizeof(size_t);\n\texpect_d_eq(mallctlnametomib(\"arenas.lextent.0.size\", mib, &miblen), 0,\n\t    \"Unexpected mallctlnametomib() error\");\n\tmib[2] = nlextents - 1;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctlbymib(mib, miblen, (void *)&max_size_class, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctlbymib() error\");\n\n\treturn max_size_class;\n}\n\nTEST_BEGIN(test_size_classes) {\n\tsize_t size_class, max_size_class;\n\tszind_t index, max_index;\n\n\tmax_size_class = get_max_size_class();\n\tmax_index = sz_size2index(max_size_class);\n\n\tfor (index = 0, size_class = sz_index2size(index); index < max_index ||\n\t    size_class < max_size_class; index++, size_class =\n\t    sz_index2size(index)) {\n\t\texpect_true(index < max_index,\n\t\t    \"Loop conditionals should be equivalent; index=%u, \"\n\t\t    \"size_class=%zu (%#zx)\", index, size_class, size_class);\n\t\texpect_true(size_class < max_size_class,\n\t\t    \"Loop conditionals should be equivalent; index=%u, \"\n\t\t    \"size_class=%zu (%#zx)\", index, size_class, size_class);\n\n\t\texpect_u_eq(index, sz_size2index(size_class),\n\t\t    \"sz_size2index() does not reverse sz_index2size(): index=%u\"\n\t\t    \" --> size_class=%zu --> index=%u --> size_class=%zu\",\n\t\t    index, size_class, sz_size2index(size_class),\n\t\t    sz_index2size(sz_size2index(size_class)));\n\t\texpect_zu_eq(size_class,\n\t\t    sz_index2size(sz_size2index(size_class)),\n\t\t    \"sz_index2size() does not reverse sz_size2index(): index=%u\"\n\t\t    \" --> size_class=%zu --> index=%u --> size_class=%zu\",\n\t\t    index, size_class, sz_size2index(size_class),\n\t\t    
sz_index2size(sz_size2index(size_class)));\n\n\t\texpect_u_eq(index+1, sz_size2index(size_class+1),\n\t\t    \"Next size_class does not round up properly\");\n\n\t\texpect_zu_eq(size_class, (index > 0) ?\n\t\t    sz_s2u(sz_index2size(index-1)+1) : sz_s2u(1),\n\t\t    \"sz_s2u() does not round up to size class\");\n\t\texpect_zu_eq(size_class, sz_s2u(size_class-1),\n\t\t    \"sz_s2u() does not round up to size class\");\n\t\texpect_zu_eq(size_class, sz_s2u(size_class),\n\t\t    \"sz_s2u() does not compute same size class\");\n\t\texpect_zu_eq(sz_s2u(size_class+1), sz_index2size(index+1),\n\t\t    \"sz_s2u() does not round up to next size class\");\n\t}\n\n\texpect_u_eq(index, sz_size2index(sz_index2size(index)),\n\t    \"sz_size2index() does not reverse sz_index2size()\");\n\texpect_zu_eq(max_size_class, sz_index2size(\n\t    sz_size2index(max_size_class)),\n\t    \"sz_index2size() does not reverse sz_size2index()\");\n\n\texpect_zu_eq(size_class, sz_s2u(sz_index2size(index-1)+1),\n\t    \"sz_s2u() does not round up to size class\");\n\texpect_zu_eq(size_class, sz_s2u(size_class-1),\n\t    \"sz_s2u() does not round up to size class\");\n\texpect_zu_eq(size_class, sz_s2u(size_class),\n\t    \"sz_s2u() does not compute same size class\");\n}\nTEST_END\n\nTEST_BEGIN(test_psize_classes) {\n\tsize_t size_class, max_psz;\n\tpszind_t pind, max_pind;\n\n\tmax_psz = get_max_size_class() + PAGE;\n\tmax_pind = sz_psz2ind(max_psz);\n\n\tfor (pind = 0, size_class = sz_pind2sz(pind);\n\t    pind < max_pind || size_class < max_psz;\n\t    pind++, size_class = sz_pind2sz(pind)) {\n\t\texpect_true(pind < max_pind,\n\t\t    \"Loop conditionals should be equivalent; pind=%u, \"\n\t\t    \"size_class=%zu (%#zx)\", pind, size_class, size_class);\n\t\texpect_true(size_class < max_psz,\n\t\t    \"Loop conditionals should be equivalent; pind=%u, \"\n\t\t    \"size_class=%zu (%#zx)\", pind, size_class, size_class);\n\n\t\texpect_u_eq(pind, sz_psz2ind(size_class),\n\t\t    \"sz_psz2ind() 
does not reverse sz_pind2sz(): pind=%u -->\"\n\t\t    \" size_class=%zu --> pind=%u --> size_class=%zu\", pind,\n\t\t    size_class, sz_psz2ind(size_class),\n\t\t    sz_pind2sz(sz_psz2ind(size_class)));\n\t\texpect_zu_eq(size_class, sz_pind2sz(sz_psz2ind(size_class)),\n\t\t    \"sz_pind2sz() does not reverse sz_psz2ind(): pind=%u -->\"\n\t\t    \" size_class=%zu --> pind=%u --> size_class=%zu\", pind,\n\t\t    size_class, sz_psz2ind(size_class),\n\t\t    sz_pind2sz(sz_psz2ind(size_class)));\n\n\t\tif (size_class == SC_LARGE_MAXCLASS) {\n\t\t\texpect_u_eq(SC_NPSIZES, sz_psz2ind(size_class + 1),\n\t\t\t    \"Next size_class does not round up properly\");\n\t\t} else {\n\t\t\texpect_u_eq(pind + 1, sz_psz2ind(size_class + 1),\n\t\t\t    \"Next size_class does not round up properly\");\n\t\t}\n\n\t\texpect_zu_eq(size_class, (pind > 0) ?\n\t\t    sz_psz2u(sz_pind2sz(pind-1)+1) : sz_psz2u(1),\n\t\t    \"sz_psz2u() does not round up to size class\");\n\t\texpect_zu_eq(size_class, sz_psz2u(size_class-1),\n\t\t    \"sz_psz2u() does not round up to size class\");\n\t\texpect_zu_eq(size_class, sz_psz2u(size_class),\n\t\t    \"sz_psz2u() does not compute same size class\");\n\t\texpect_zu_eq(sz_psz2u(size_class+1), sz_pind2sz(pind+1),\n\t\t    \"sz_psz2u() does not round up to next size class\");\n\t}\n\n\texpect_u_eq(pind, sz_psz2ind(sz_pind2sz(pind)),\n\t    \"sz_psz2ind() does not reverse sz_pind2sz()\");\n\texpect_zu_eq(max_psz, sz_pind2sz(sz_psz2ind(max_psz)),\n\t    \"sz_pind2sz() does not reverse sz_psz2ind()\");\n\n\texpect_zu_eq(size_class, sz_psz2u(sz_pind2sz(pind-1)+1),\n\t    \"sz_psz2u() does not round up to size class\");\n\texpect_zu_eq(size_class, sz_psz2u(size_class-1),\n\t    \"sz_psz2u() does not round up to size class\");\n\texpect_zu_eq(size_class, sz_psz2u(size_class),\n\t    \"sz_psz2u() does not compute same size class\");\n}\nTEST_END\n\nTEST_BEGIN(test_overflow) {\n\tsize_t max_size_class, max_psz;\n\n\tmax_size_class = get_max_size_class();\n\tmax_psz 
= max_size_class + PAGE;\n\n\texpect_u_eq(sz_size2index(max_size_class+1), SC_NSIZES,\n\t    \"sz_size2index() should return NSIZES on overflow\");\n\texpect_u_eq(sz_size2index(ZU(PTRDIFF_MAX)+1), SC_NSIZES,\n\t    \"sz_size2index() should return NSIZES on overflow\");\n\texpect_u_eq(sz_size2index(SIZE_T_MAX), SC_NSIZES,\n\t    \"sz_size2index() should return NSIZES on overflow\");\n\n\texpect_zu_eq(sz_s2u(max_size_class+1), 0,\n\t    \"sz_s2u() should return 0 for unsupported size\");\n\texpect_zu_eq(sz_s2u(ZU(PTRDIFF_MAX)+1), 0,\n\t    \"sz_s2u() should return 0 for unsupported size\");\n\texpect_zu_eq(sz_s2u(SIZE_T_MAX), 0,\n\t    \"sz_s2u() should return 0 on overflow\");\n\n\texpect_u_eq(sz_psz2ind(max_size_class+1), SC_NPSIZES,\n\t    \"sz_psz2ind() should return NPSIZES on overflow\");\n\texpect_u_eq(sz_psz2ind(ZU(PTRDIFF_MAX)+1), SC_NPSIZES,\n\t    \"sz_psz2ind() should return NPSIZES on overflow\");\n\texpect_u_eq(sz_psz2ind(SIZE_T_MAX), SC_NPSIZES,\n\t    \"sz_psz2ind() should return NPSIZES on overflow\");\n\n\texpect_zu_eq(sz_psz2u(max_size_class+1), max_psz,\n\t    \"sz_psz2u() should return (LARGE_MAXCLASS + PAGE) for unsupported\"\n\t    \" size\");\n\texpect_zu_eq(sz_psz2u(ZU(PTRDIFF_MAX)+1), max_psz,\n\t    \"sz_psz2u() should return (LARGE_MAXCLASS + PAGE) for unsupported \"\n\t    \"size\");\n\texpect_zu_eq(sz_psz2u(SIZE_T_MAX), max_psz,\n\t    \"sz_psz2u() should return (LARGE_MAXCLASS + PAGE) on overflow\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_size_classes,\n\t    test_psize_classes,\n\t    test_overflow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/slab.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define INVALID_ARENA_IND ((1U << MALLOCX_ARENA_BITS) - 1)\n\nTEST_BEGIN(test_arena_slab_regind) {\n\tszind_t binind;\n\n\tfor (binind = 0; binind < SC_NBINS; binind++) {\n\t\tsize_t regind;\n\t\tedata_t slab;\n\t\tconst bin_info_t *bin_info = &bin_infos[binind];\n\t\tedata_init(&slab, INVALID_ARENA_IND,\n\t\t    mallocx(bin_info->slab_size, MALLOCX_LG_ALIGN(LG_PAGE)),\n\t\t    bin_info->slab_size, true,\n\t\t    binind, 0, extent_state_active, false, true, EXTENT_PAI_PAC,\n\t\t    EXTENT_NOT_HEAD);\n\t\texpect_ptr_not_null(edata_addr_get(&slab),\n\t\t    \"Unexpected malloc() failure\");\n\t\tarena_dalloc_bin_locked_info_t dalloc_info;\n\t\tarena_dalloc_bin_locked_begin(&dalloc_info, binind);\n\t\tfor (regind = 0; regind < bin_info->nregs; regind++) {\n\t\t\tvoid *reg = (void *)((uintptr_t)edata_addr_get(&slab) +\n\t\t\t    (bin_info->reg_size * regind));\n\t\t\texpect_zu_eq(arena_slab_regind(&dalloc_info, binind,\n\t\t\t    &slab, reg),\n\t\t\t    regind,\n\t\t\t    \"Incorrect region index computed for size %zu\",\n\t\t\t    bin_info->reg_size);\n\t\t}\n\t\tfree(edata_addr_get(&slab));\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_arena_slab_regind);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/smoothstep.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic const uint64_t smoothstep_tab[] = {\n#define STEP(step, h, x, y)\t\t\t\\\n\th,\n\tSMOOTHSTEP\n#undef STEP\n};\n\nTEST_BEGIN(test_smoothstep_integral) {\n\tuint64_t sum, min, max;\n\tunsigned i;\n\n\t/*\n\t * The integral of smoothstep in the [0..1] range equals 1/2.  Verify\n\t * that the fixed point representation's integral is no more than\n\t * rounding error distant from 1/2.  Regarding rounding, each table\n\t * element is rounded down to the nearest fixed point value, so the\n\t * integral may be off by as much as SMOOTHSTEP_NSTEPS ulps.\n\t */\n\tsum = 0;\n\tfor (i = 0; i < SMOOTHSTEP_NSTEPS; i++) {\n\t\tsum += smoothstep_tab[i];\n\t}\n\n\tmax = (KQU(1) << (SMOOTHSTEP_BFP-1)) * (SMOOTHSTEP_NSTEPS+1);\n\tmin = max - SMOOTHSTEP_NSTEPS;\n\n\texpect_u64_ge(sum, min,\n\t    \"Integral too small, even accounting for truncation\");\n\texpect_u64_le(sum, max, \"Integral exceeds 1/2\");\n\tif (false) {\n\t\tmalloc_printf(\"%\"FMTu64\" ulps under 1/2 (limit %d)\\n\",\n\t\t    max - sum, SMOOTHSTEP_NSTEPS);\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_smoothstep_monotonic) {\n\tuint64_t prev_h;\n\tunsigned i;\n\n\t/*\n\t * The smoothstep function is monotonic in [0..1], i.e. its slope is\n\t * non-negative.  In practice we want to parametrize table generation\n\t * such that piecewise slope is greater than zero, but do not require\n\t * that here.\n\t */\n\tprev_h = 0;\n\tfor (i = 0; i < SMOOTHSTEP_NSTEPS; i++) {\n\t\tuint64_t h = smoothstep_tab[i];\n\t\texpect_u64_ge(h, prev_h, \"Piecewise non-monotonic, i=%u\", i);\n\t\tprev_h = h;\n\t}\n\texpect_u64_eq(smoothstep_tab[SMOOTHSTEP_NSTEPS-1],\n\t    (KQU(1) << SMOOTHSTEP_BFP), \"Last step must equal 1\");\n}\nTEST_END\n\nTEST_BEGIN(test_smoothstep_slope) {\n\tuint64_t prev_h, prev_delta;\n\tunsigned i;\n\n\t/*\n\t * The smoothstep slope strictly increases until x=0.5, and then\n\t * strictly decreases until x=1.0.  
Verify the slightly weaker\n\t * requirement of monotonicity, so that inadequate table precision does\n\t * not cause false test failures.\n\t */\n\tprev_h = 0;\n\tprev_delta = 0;\n\tfor (i = 0; i < SMOOTHSTEP_NSTEPS / 2 + SMOOTHSTEP_NSTEPS % 2; i++) {\n\t\tuint64_t h = smoothstep_tab[i];\n\t\tuint64_t delta = h - prev_h;\n\t\texpect_u64_ge(delta, prev_delta,\n\t\t    \"Slope must monotonically increase in 0.0 <= x <= 0.5, \"\n\t\t    \"i=%u\", i);\n\t\tprev_h = h;\n\t\tprev_delta = delta;\n\t}\n\n\tprev_h = KQU(1) << SMOOTHSTEP_BFP;\n\tprev_delta = 0;\n\tfor (i = SMOOTHSTEP_NSTEPS-1; i >= SMOOTHSTEP_NSTEPS / 2; i--) {\n\t\tuint64_t h = smoothstep_tab[i];\n\t\tuint64_t delta = prev_h - h;\n\t\texpect_u64_ge(delta, prev_delta,\n\t\t    \"Slope must monotonically decrease in 0.5 <= x <= 1.0, \"\n\t\t    \"i=%u\", i);\n\t\tprev_h = h;\n\t\tprev_delta = delta;\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_smoothstep_integral,\n\t    test_smoothstep_monotonic,\n\t    test_smoothstep_slope);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/spin.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/spin.h\"\n\nTEST_BEGIN(test_spin) {\n\tspin_t spinner = SPIN_INITIALIZER;\n\n\tfor (unsigned i = 0; i < 100; i++) {\n\t\tspin_adaptive(&spinner);\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_spin);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/stats.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#define STRINGIFY_HELPER(x) #x\n#define STRINGIFY(x) STRINGIFY_HELPER(x)\n\nTEST_BEGIN(test_stats_summary) {\n\tsize_t sz, allocated, active, resident, mapped;\n\tint expected = config_stats ? 0 : ENOENT;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.allocated\", (void *)&allocated, &sz, NULL,\n\t    0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.active\", (void *)&active, &sz, NULL, 0),\n\t    expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.resident\", (void *)&resident, &sz, NULL, 0),\n\t    expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.mapped\", (void *)&mapped, &sz, NULL, 0),\n\t    expected, \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_zu_le(allocated, active,\n\t\t    \"allocated should be no larger than active\");\n\t\texpect_zu_lt(active, resident,\n\t\t    \"active should be less than resident\");\n\t\texpect_zu_lt(active, mapped,\n\t\t    \"active should be less than mapped\");\n\t}\n}\nTEST_END\n\nTEST_BEGIN(test_stats_large) {\n\tvoid *p;\n\tuint64_t epoch;\n\tsize_t allocated;\n\tuint64_t nmalloc, ndalloc, nrequests;\n\tsize_t sz;\n\tint expected = config_stats ? 
0 : ENOENT;\n\n\tp = mallocx(SC_SMALL_MAXCLASS + 1, MALLOCX_ARENA(0));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.allocated\",\n\t    (void *)&allocated, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(uint64_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.nmalloc\", (void *)&nmalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.ndalloc\", (void *)&ndalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.nrequests\",\n\t    (void *)&nrequests, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_zu_gt(allocated, 0,\n\t\t    \"allocated should be greater than zero\");\n\t\texpect_u64_ge(nmalloc, ndalloc,\n\t\t    \"nmalloc should be at least as large as ndalloc\");\n\t\texpect_u64_le(nmalloc, nrequests,\n\t\t    \"nmalloc should be no larger than nrequests\");\n\t}\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_stats_arenas_summary) {\n\tvoid *little, *large;\n\tuint64_t epoch;\n\tsize_t sz;\n\tint expected = config_stats ? 0 : ENOENT;\n\tsize_t mapped;\n\tuint64_t dirty_npurge, dirty_nmadvise, dirty_purged;\n\tuint64_t muzzy_npurge, muzzy_nmadvise, muzzy_purged;\n\n\tlittle = mallocx(SC_SMALL_MAXCLASS, MALLOCX_ARENA(0));\n\texpect_ptr_not_null(little, \"Unexpected mallocx() failure\");\n\tlarge = mallocx((1U << SC_LG_LARGE_MINCLASS),\n\t    MALLOCX_ARENA(0));\n\texpect_ptr_not_null(large, \"Unexpected mallocx() failure\");\n\n\tdallocx(little, 0);\n\tdallocx(large, 0);\n\n\texpect_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    opt_tcache ? 
0 : EFAULT, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"arena.0.purge\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.mapped\", (void *)&mapped, &sz, NULL,\n\t    0), expected, \"Unexpected mallctl() result\");\n\n\tsz = sizeof(uint64_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.dirty_npurge\",\n\t    (void *)&dirty_npurge, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.dirty_nmadvise\",\n\t    (void *)&dirty_nmadvise, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.dirty_purged\",\n\t    (void *)&dirty_purged, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.muzzy_npurge\",\n\t    (void *)&muzzy_npurge, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.muzzy_nmadvise\",\n\t    (void *)&muzzy_nmadvise, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.muzzy_purged\",\n\t    (void *)&muzzy_purged, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\tif (!is_background_thread_enabled() && !opt_hpa) {\n\t\t\texpect_u64_gt(dirty_npurge + muzzy_npurge, 0,\n\t\t\t    \"At least one purge should have occurred\");\n\t\t}\n\t\texpect_u64_le(dirty_nmadvise, dirty_purged,\n\t\t    \"dirty_nmadvise should be no greater than dirty_purged\");\n\t\texpect_u64_le(muzzy_nmadvise, muzzy_purged,\n\t\t    \"muzzy_nmadvise should be no greater than muzzy_purged\");\n\t}\n}\nTEST_END\n\nvoid *\nthd_start(void *arg) {\n\treturn NULL;\n}\n\nstatic void\nno_lazy_lock(void) {\n\tthd_t thd;\n\n\tthd_create(&thd, thd_start, 
NULL);\n\tthd_join(thd, NULL);\n}\n\nTEST_BEGIN(test_stats_arenas_small) {\n\tvoid *p;\n\tsize_t sz, allocated;\n\tuint64_t epoch, nmalloc, ndalloc, nrequests;\n\tint expected = config_stats ? 0 : ENOENT;\n\n\tno_lazy_lock(); /* Lazy locking would dodge tcache testing. */\n\n\tp = mallocx(SC_SMALL_MAXCLASS, MALLOCX_ARENA(0));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\texpect_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    opt_tcache ? 0 : EFAULT, \"Unexpected mallctl() result\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.small.allocated\",\n\t    (void *)&allocated, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(uint64_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.small.nmalloc\", (void *)&nmalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.small.ndalloc\", (void *)&ndalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.small.nrequests\",\n\t    (void *)&nrequests, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_zu_gt(allocated, 0,\n\t\t    \"allocated should be greater than zero\");\n\t\texpect_u64_gt(nmalloc, 0,\n\t\t    \"nmalloc should be greater than zero\");\n\t\texpect_u64_ge(nmalloc, ndalloc,\n\t\t    \"nmalloc should be at least as large as ndalloc\");\n\t\texpect_u64_gt(nrequests, 0,\n\t\t    \"nrequests should be greater than zero\");\n\t}\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_stats_arenas_large) {\n\tvoid *p;\n\tsize_t sz, allocated;\n\tuint64_t epoch, nmalloc, ndalloc;\n\tint expected = config_stats ? 
0 : ENOENT;\n\n\tp = mallocx((1U << SC_LG_LARGE_MINCLASS), MALLOCX_ARENA(0));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.allocated\",\n\t    (void *)&allocated, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(uint64_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.nmalloc\", (void *)&nmalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.large.ndalloc\", (void *)&ndalloc,\n\t    &sz, NULL, 0), expected, \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_zu_gt(allocated, 0,\n\t\t    \"allocated should be greater than zero\");\n\t\texpect_u64_gt(nmalloc, 0,\n\t\t    \"nmalloc should be greater than zero\");\n\t\texpect_u64_ge(nmalloc, ndalloc,\n\t\t    \"nmalloc should be at least as large as ndalloc\");\n\t}\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nstatic void\ngen_mallctl_str(char *cmd, char *name, unsigned arena_ind) {\n\tsprintf(cmd, \"stats.arenas.%u.bins.0.%s\", arena_ind, name);\n}\n\nTEST_BEGIN(test_stats_arenas_bins) {\n\tvoid *p;\n\tsize_t sz, curslabs, curregs, nonfull_slabs;\n\tuint64_t epoch, nmalloc, ndalloc, nrequests, nfills, nflushes;\n\tuint64_t nslabs, nreslabs;\n\tint expected = config_stats ? 0 : ENOENT;\n\n\t/* Make sure allocation below isn't satisfied by tcache. */\n\texpect_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    opt_tcache ? 
0 : EFAULT, \"Unexpected mallctl() result\");\n\n\tunsigned arena_ind, old_arena_ind;\n\tsz = sizeof(unsigned);\n\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind, &sz, NULL, 0),\n\t    0, \"Arena creation failure\");\n\tsz = sizeof(arena_ind);\n\texpect_d_eq(mallctl(\"thread.arena\", (void *)&old_arena_ind, &sz,\n\t    (void *)&arena_ind, sizeof(arena_ind)), 0,\n\t    \"Unexpected mallctl() failure\");\n\n\tp = malloc(bin_infos[0].reg_size);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\n\texpect_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    opt_tcache ? 0 : EFAULT, \"Unexpected mallctl() result\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tchar cmd[128];\n\tsz = sizeof(uint64_t);\n\tgen_mallctl_str(cmd, \"nmalloc\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nmalloc, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tgen_mallctl_str(cmd, \"ndalloc\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&ndalloc, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tgen_mallctl_str(cmd, \"nrequests\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nrequests, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(size_t);\n\tgen_mallctl_str(cmd, \"curregs\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&curregs, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tsz = sizeof(uint64_t);\n\tgen_mallctl_str(cmd, \"nfills\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nfills, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tgen_mallctl_str(cmd, \"nflushes\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nflushes, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tgen_mallctl_str(cmd, \"nslabs\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nslabs, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() 
result\");\n\tgen_mallctl_str(cmd, \"nreslabs\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nreslabs, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(size_t);\n\tgen_mallctl_str(cmd, \"curslabs\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&curslabs, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tgen_mallctl_str(cmd, \"nonfull_slabs\", arena_ind);\n\texpect_d_eq(mallctl(cmd, (void *)&nonfull_slabs, &sz, NULL, 0),\n\t    expected, \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_u64_gt(nmalloc, 0,\n\t\t    \"nmalloc should be greater than zero\");\n\t\texpect_u64_ge(nmalloc, ndalloc,\n\t\t    \"nmalloc should be at least as large as ndalloc\");\n\t\texpect_u64_gt(nrequests, 0,\n\t\t    \"nrequests should be greater than zero\");\n\t\texpect_zu_gt(curregs, 0,\n\t\t    \"allocated should be greater than zero\");\n\t\tif (opt_tcache) {\n\t\t\texpect_u64_gt(nfills, 0,\n\t\t\t    \"At least one fill should have occurred\");\n\t\t\texpect_u64_gt(nflushes, 0,\n\t\t\t    \"At least one flush should have occurred\");\n\t\t}\n\t\texpect_u64_gt(nslabs, 0,\n\t\t    \"At least one slab should have been allocated\");\n\t\texpect_zu_gt(curslabs, 0,\n\t\t    \"At least one slab should be currently allocated\");\n\t\texpect_zu_eq(nonfull_slabs, 0,\n\t\t    \"slabs_nonfull should be empty\");\n\t}\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_stats_arenas_lextents) {\n\tvoid *p;\n\tuint64_t epoch, nmalloc, ndalloc;\n\tsize_t curlextents, sz, hsize;\n\tint expected = config_stats ? 
0 : ENOENT;\n\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&hsize, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() failure\");\n\n\tp = mallocx(hsize, MALLOCX_ARENA(0));\n\texpect_ptr_not_null(p, \"Unexpected mallocx() failure\");\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsz = sizeof(uint64_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.lextents.0.nmalloc\",\n\t    (void *)&nmalloc, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\texpect_d_eq(mallctl(\"stats.arenas.0.lextents.0.ndalloc\",\n\t    (void *)&ndalloc, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\tsz = sizeof(size_t);\n\texpect_d_eq(mallctl(\"stats.arenas.0.lextents.0.curlextents\",\n\t    (void *)&curlextents, &sz, NULL, 0), expected,\n\t    \"Unexpected mallctl() result\");\n\n\tif (config_stats) {\n\t\texpect_u64_gt(nmalloc, 0,\n\t\t    \"nmalloc should be greater than zero\");\n\t\texpect_u64_ge(nmalloc, ndalloc,\n\t\t    \"nmalloc should be at least as large as ndalloc\");\n\t\texpect_u64_gt(curlextents, 0,\n\t\t    \"At least one extent should be currently allocated\");\n\t}\n\n\tdallocx(p, 0);\n}\nTEST_END\n\nstatic void\ntest_tcache_bytes_for_usize(size_t usize) {\n\tuint64_t epoch;\n\tsize_t tcache_bytes, tcache_stashed_bytes;\n\tsize_t sz = sizeof(tcache_bytes);\n\n\tvoid *ptr = mallocx(usize, 0);\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\tassert_d_eq(mallctl(\n\t    \"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".tcache_bytes\",\n\t    &tcache_bytes, &sz, NULL, 0), 0, \"Unexpected mallctl failure\");\n\tassert_d_eq(mallctl(\n\t    \"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL)\n\t    \".tcache_stashed_bytes\", &tcache_stashed_bytes, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\tsize_t tcache_bytes_before = tcache_bytes + 
tcache_stashed_bytes;\n\tdallocx(ptr, 0);\n\n\texpect_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\tassert_d_eq(mallctl(\n\t    \"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".tcache_bytes\",\n\t    &tcache_bytes, &sz, NULL, 0), 0, \"Unexpected mallctl failure\");\n\tassert_d_eq(mallctl(\n\t    \"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL)\n\t    \".tcache_stashed_bytes\", &tcache_stashed_bytes, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\tsize_t tcache_bytes_after = tcache_bytes + tcache_stashed_bytes;\n\tassert_zu_eq(tcache_bytes_after - tcache_bytes_before,\n\t    usize, \"Incorrectly attributed a free\");\n}\n\nTEST_BEGIN(test_stats_tcache_bytes_small) {\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(!opt_tcache);\n\ttest_skip_if(opt_tcache_max < SC_SMALL_MAXCLASS);\n\n\ttest_tcache_bytes_for_usize(SC_SMALL_MAXCLASS);\n}\nTEST_END\n\nTEST_BEGIN(test_stats_tcache_bytes_large) {\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(!opt_tcache);\n\ttest_skip_if(opt_tcache_max < SC_LARGE_MINCLASS);\n\n\ttest_tcache_bytes_for_usize(SC_LARGE_MINCLASS);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test_no_reentrancy(\n\t    test_stats_summary,\n\t    test_stats_large,\n\t    test_stats_arenas_summary,\n\t    test_stats_arenas_small,\n\t    test_stats_arenas_large,\n\t    test_stats_arenas_bins,\n\t    test_stats_arenas_lextents,\n\t    test_stats_tcache_bytes_small,\n\t    test_stats_tcache_bytes_large);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/stats_print.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/util.h\"\n\ntypedef enum {\n\tTOKEN_TYPE_NONE,\n\tTOKEN_TYPE_ERROR,\n\tTOKEN_TYPE_EOI,\n\tTOKEN_TYPE_NULL,\n\tTOKEN_TYPE_FALSE,\n\tTOKEN_TYPE_TRUE,\n\tTOKEN_TYPE_LBRACKET,\n\tTOKEN_TYPE_RBRACKET,\n\tTOKEN_TYPE_LBRACE,\n\tTOKEN_TYPE_RBRACE,\n\tTOKEN_TYPE_COLON,\n\tTOKEN_TYPE_COMMA,\n\tTOKEN_TYPE_STRING,\n\tTOKEN_TYPE_NUMBER\n} token_type_t;\n\ntypedef struct parser_s parser_t;\ntypedef struct {\n\tparser_t\t*parser;\n\ttoken_type_t\ttoken_type;\n\tsize_t\t\tpos;\n\tsize_t\t\tlen;\n\tsize_t\t\tline;\n\tsize_t\t\tcol;\n} token_t;\n\nstruct parser_s {\n\tbool verbose;\n\tchar\t*buf; /* '\\0'-terminated. */\n\tsize_t\tlen; /* Number of characters preceding '\\0' in buf. */\n\tsize_t\tpos;\n\tsize_t\tline;\n\tsize_t\tcol;\n\ttoken_t\ttoken;\n};\n\nstatic void\ntoken_init(token_t *token, parser_t *parser, token_type_t token_type,\n    size_t pos, size_t len, size_t line, size_t col) {\n\ttoken->parser = parser;\n\ttoken->token_type = token_type;\n\ttoken->pos = pos;\n\ttoken->len = len;\n\ttoken->line = line;\n\ttoken->col = col;\n}\n\nstatic void\ntoken_error(token_t *token) {\n\tif (!token->parser->verbose) {\n\t\treturn;\n\t}\n\tswitch (token->token_type) {\n\tcase TOKEN_TYPE_NONE:\n\t\tnot_reached();\n\tcase TOKEN_TYPE_ERROR:\n\t\tmalloc_printf(\"%zu:%zu: Unexpected character in token: \",\n\t\t    token->line, token->col);\n\t\tbreak;\n\tdefault:\n\t\tmalloc_printf(\"%zu:%zu: Unexpected token: \", token->line,\n\t\t    token->col);\n\t\tbreak;\n\t}\n\tUNUSED ssize_t err = malloc_write_fd(STDERR_FILENO,\n\t    &token->parser->buf[token->pos], token->len);\n\tmalloc_printf(\"\\n\");\n}\n\nstatic void\nparser_init(parser_t *parser, bool verbose) {\n\tparser->verbose = verbose;\n\tparser->buf = NULL;\n\tparser->len = 0;\n\tparser->pos = 0;\n\tparser->line = 1;\n\tparser->col = 0;\n}\n\nstatic void\nparser_fini(parser_t *parser) {\n\tif (parser->buf != NULL) {\n\t\tdallocx(parser->buf, 
MALLOCX_TCACHE_NONE);\n\t}\n}\n\nstatic bool\nparser_append(parser_t *parser, const char *str) {\n\tsize_t len = strlen(str);\n\tchar *buf = (parser->buf == NULL) ? mallocx(len + 1,\n\t    MALLOCX_TCACHE_NONE) : rallocx(parser->buf, parser->len + len + 1,\n\t    MALLOCX_TCACHE_NONE);\n\tif (buf == NULL) {\n\t\treturn true;\n\t}\n\tmemcpy(&buf[parser->len], str, len + 1);\n\tparser->buf = buf;\n\tparser->len += len;\n\treturn false;\n}\n\nstatic bool\nparser_tokenize(parser_t *parser) {\n\tenum {\n\t\tSTATE_START,\n\t\tSTATE_EOI,\n\t\tSTATE_N, STATE_NU, STATE_NUL, STATE_NULL,\n\t\tSTATE_F, STATE_FA, STATE_FAL, STATE_FALS, STATE_FALSE,\n\t\tSTATE_T, STATE_TR, STATE_TRU, STATE_TRUE,\n\t\tSTATE_LBRACKET,\n\t\tSTATE_RBRACKET,\n\t\tSTATE_LBRACE,\n\t\tSTATE_RBRACE,\n\t\tSTATE_COLON,\n\t\tSTATE_COMMA,\n\t\tSTATE_CHARS,\n\t\tSTATE_CHAR_ESCAPE,\n\t\tSTATE_CHAR_U, STATE_CHAR_UD, STATE_CHAR_UDD, STATE_CHAR_UDDD,\n\t\tSTATE_STRING,\n\t\tSTATE_MINUS,\n\t\tSTATE_LEADING_ZERO,\n\t\tSTATE_DIGITS,\n\t\tSTATE_DECIMAL,\n\t\tSTATE_FRAC_DIGITS,\n\t\tSTATE_EXP,\n\t\tSTATE_EXP_SIGN,\n\t\tSTATE_EXP_DIGITS,\n\t\tSTATE_ACCEPT\n\t} state = STATE_START;\n\tsize_t token_pos JEMALLOC_CC_SILENCE_INIT(0);\n\tsize_t token_line JEMALLOC_CC_SILENCE_INIT(1);\n\tsize_t token_col JEMALLOC_CC_SILENCE_INIT(0);\n\n\texpect_zu_le(parser->pos, parser->len,\n\t    \"Position is past end of buffer\");\n\n\twhile (state != STATE_ACCEPT) {\n\t\tchar c = parser->buf[parser->pos];\n\n\t\tswitch (state) {\n\t\tcase STATE_START:\n\t\t\ttoken_pos = parser->pos;\n\t\t\ttoken_line = parser->line;\n\t\t\ttoken_col = parser->col;\n\t\t\tswitch (c) {\n\t\t\tcase ' ': case '\\b': case '\\n': case '\\r': case '\\t':\n\t\t\t\tbreak;\n\t\t\tcase '\\0':\n\t\t\t\tstate = STATE_EOI;\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\tstate = STATE_N;\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tstate = STATE_F;\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tstate = STATE_T;\n\t\t\t\tbreak;\n\t\t\tcase '[':\n\t\t\t\tstate = 
STATE_LBRACKET;\n\t\t\t\tbreak;\n\t\t\tcase ']':\n\t\t\t\tstate = STATE_RBRACKET;\n\t\t\t\tbreak;\n\t\t\tcase '{':\n\t\t\t\tstate = STATE_LBRACE;\n\t\t\t\tbreak;\n\t\t\tcase '}':\n\t\t\t\tstate = STATE_RBRACE;\n\t\t\t\tbreak;\n\t\t\tcase ':':\n\t\t\t\tstate = STATE_COLON;\n\t\t\t\tbreak;\n\t\t\tcase ',':\n\t\t\t\tstate = STATE_COMMA;\n\t\t\t\tbreak;\n\t\t\tcase '\"':\n\t\t\t\tstate = STATE_CHARS;\n\t\t\t\tbreak;\n\t\t\tcase '-':\n\t\t\t\tstate = STATE_MINUS;\n\t\t\t\tbreak;\n\t\t\tcase '0':\n\t\t\t\tstate = STATE_LEADING_ZERO;\n\t\t\t\tbreak;\n\t\t\tcase '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tstate = STATE_DIGITS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_EOI:\n\t\t\ttoken_init(&parser->token, parser,\n\t\t\t    TOKEN_TYPE_EOI, token_pos, parser->pos -\n\t\t\t    token_pos, token_line, token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_N:\n\t\t\tswitch (c) {\n\t\t\tcase 'u':\n\t\t\t\tstate = STATE_NU;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_NU:\n\t\t\tswitch (c) {\n\t\t\tcase 'l':\n\t\t\t\tstate = STATE_NUL;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_NUL:\n\t\t\tswitch (c) {\n\t\t\tcase 'l':\n\t\t\t\tstate = STATE_NULL;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, 
token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_NULL:\n\t\t\tswitch (c) {\n\t\t\tcase ' ': case '\\b': case '\\n': case '\\r': case '\\t':\n\t\t\tcase '\\0':\n\t\t\tcase '[': case ']': case '{': case '}': case ':':\n\t\t\tcase ',':\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_NULL,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_F:\n\t\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\tstate = STATE_FA;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_FA:\n\t\t\tswitch (c) {\n\t\t\tcase 'l':\n\t\t\t\tstate = STATE_FAL;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_FAL:\n\t\t\tswitch (c) {\n\t\t\tcase 's':\n\t\t\t\tstate = STATE_FALS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_FALS:\n\t\t\tswitch (c) {\n\t\t\tcase 'e':\n\t\t\t\tstate = STATE_FALSE;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_FALSE:\n\t\t\tswitch (c) {\n\t\t\tcase ' ': case '\\b': 
case '\\n': case '\\r': case '\\t':\n\t\t\tcase '\\0':\n\t\t\tcase '[': case ']': case '{': case '}': case ':':\n\t\t\tcase ',':\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\ttoken_init(&parser->token, parser,\n\t\t\t    TOKEN_TYPE_FALSE, token_pos, parser->pos -\n\t\t\t    token_pos, token_line, token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_T:\n\t\t\tswitch (c) {\n\t\t\tcase 'r':\n\t\t\t\tstate = STATE_TR;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_TR:\n\t\t\tswitch (c) {\n\t\t\tcase 'u':\n\t\t\t\tstate = STATE_TRU;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_TRU:\n\t\t\tswitch (c) {\n\t\t\tcase 'e':\n\t\t\t\tstate = STATE_TRUE;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_TRUE:\n\t\t\tswitch (c) {\n\t\t\tcase ' ': case '\\b': case '\\n': case '\\r': case '\\t':\n\t\t\tcase '\\0':\n\t\t\tcase '[': case ']': case '{': case '}': case ':':\n\t\t\tcase ',':\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_TRUE,\n\t\t\t    token_pos, parser->pos - token_pos, 
token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_LBRACKET:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_LBRACKET,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_RBRACKET:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_RBRACKET,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_LBRACE:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_LBRACE,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_RBRACE:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_RBRACE,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_COLON:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_COLON,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_COMMA:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_COMMA,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_CHARS:\n\t\t\tswitch (c) {\n\t\t\tcase '\\\\':\n\t\t\t\tstate = STATE_CHAR_ESCAPE;\n\t\t\t\tbreak;\n\t\t\tcase '\"':\n\t\t\t\tstate = STATE_STRING;\n\t\t\t\tbreak;\n\t\t\tcase 0x00: case 0x01: case 0x02: case 0x03: case 0x04:\n\t\t\tcase 0x05: case 0x06: case 0x07: case 0x08: case 0x09:\n\t\t\tcase 0x0a: case 0x0b: case 0x0c: case 0x0d: case 0x0e:\n\t\t\tcase 0x0f: case 0x10: case 0x11: case 0x12: case 0x13:\n\t\t\tcase 0x14: case 0x15: case 0x16: case 0x17: case 0x18:\n\t\t\tcase 0x19: case 0x1a: case 0x1b: case 0x1c: case 0x1d:\n\t\t\tcase 0x1e: case 0x1f:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, 
token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\tdefault:\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_CHAR_ESCAPE:\n\t\t\tswitch (c) {\n\t\t\tcase '\"': case '\\\\': case '/': case 'b': case 'n':\n\t\t\tcase 'r': case 't':\n\t\t\t\tstate = STATE_CHARS;\n\t\t\t\tbreak;\n\t\t\tcase 'u':\n\t\t\t\tstate = STATE_CHAR_U;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_CHAR_U:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\tcase 'a': case 'b': case 'c': case 'd': case 'e':\n\t\t\tcase 'f':\n\t\t\tcase 'A': case 'B': case 'C': case 'D': case 'E':\n\t\t\tcase 'F':\n\t\t\t\tstate = STATE_CHAR_UD;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_CHAR_UD:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\tcase 'a': case 'b': case 'c': case 'd': case 'e':\n\t\t\tcase 'f':\n\t\t\tcase 'A': case 'B': case 'C': case 'D': case 'E':\n\t\t\tcase 'F':\n\t\t\t\tstate = STATE_CHAR_UDD;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_CHAR_UDD:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\tcase 'a': case 'b': case 'c': case 'd': case 'e':\n\t\t\tcase 'f':\n\t\t\tcase 
'A': case 'B': case 'C': case 'D': case 'E':\n\t\t\tcase 'F':\n\t\t\t\tstate = STATE_CHAR_UDDD;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_CHAR_UDDD:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\tcase 'a': case 'b': case 'c': case 'd': case 'e':\n\t\t\tcase 'f':\n\t\t\tcase 'A': case 'B': case 'C': case 'D': case 'E':\n\t\t\tcase 'F':\n\t\t\t\tstate = STATE_CHARS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_STRING:\n\t\t\ttoken_init(&parser->token, parser, TOKEN_TYPE_STRING,\n\t\t\t    token_pos, parser->pos - token_pos, token_line,\n\t\t\t    token_col);\n\t\t\tstate = STATE_ACCEPT;\n\t\t\tbreak;\n\t\tcase STATE_MINUS:\n\t\t\tswitch (c) {\n\t\t\tcase '0':\n\t\t\t\tstate = STATE_LEADING_ZERO;\n\t\t\t\tbreak;\n\t\t\tcase '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tstate = STATE_DIGITS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_LEADING_ZERO:\n\t\t\tswitch (c) {\n\t\t\tcase '.':\n\t\t\t\tstate = STATE_DECIMAL;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_NUMBER, token_pos, parser->pos -\n\t\t\t\t    token_pos, token_line, token_col);\n\t\t\t\tstate = STATE_ACCEPT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_DIGITS:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case 
'1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tbreak;\n\t\t\tcase '.':\n\t\t\t\tstate = STATE_DECIMAL;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_NUMBER, token_pos, parser->pos -\n\t\t\t\t    token_pos, token_line, token_col);\n\t\t\t\tstate = STATE_ACCEPT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_DECIMAL:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tstate = STATE_FRAC_DIGITS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_FRAC_DIGITS:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tbreak;\n\t\t\tcase 'e': case 'E':\n\t\t\t\tstate = STATE_EXP;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_NUMBER, token_pos, parser->pos -\n\t\t\t\t    token_pos, token_line, token_col);\n\t\t\t\tstate = STATE_ACCEPT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_EXP:\n\t\t\tswitch (c) {\n\t\t\tcase '-': case '+':\n\t\t\t\tstate = STATE_EXP_SIGN;\n\t\t\t\tbreak;\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tstate = STATE_EXP_DIGITS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_EXP_SIGN:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tstate = 
STATE_EXP_DIGITS;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_ERROR, token_pos, parser->pos + 1\n\t\t\t\t    - token_pos, token_line, token_col);\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STATE_EXP_DIGITS:\n\t\t\tswitch (c) {\n\t\t\tcase '0': case '1': case '2': case '3': case '4':\n\t\t\tcase '5': case '6': case '7': case '8': case '9':\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\ttoken_init(&parser->token, parser,\n\t\t\t\t    TOKEN_TYPE_NUMBER, token_pos, parser->pos -\n\t\t\t\t    token_pos, token_line, token_col);\n\t\t\t\tstate = STATE_ACCEPT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tnot_reached();\n\t\t}\n\n\t\tif (state != STATE_ACCEPT) {\n\t\t\tif (c == '\\n') {\n\t\t\t\tparser->line++;\n\t\t\t\tparser->col = 0;\n\t\t\t} else {\n\t\t\t\tparser->col++;\n\t\t\t}\n\t\t\tparser->pos++;\n\t\t}\n\t}\n\treturn false;\n}\n\nstatic bool\tparser_parse_array(parser_t *parser);\nstatic bool\tparser_parse_object(parser_t *parser);\n\nstatic bool\nparser_parse_value(parser_t *parser) {\n\tswitch (parser->token.token_type) {\n\tcase TOKEN_TYPE_NULL:\n\tcase TOKEN_TYPE_FALSE:\n\tcase TOKEN_TYPE_TRUE:\n\tcase TOKEN_TYPE_STRING:\n\tcase TOKEN_TYPE_NUMBER:\n\t\treturn false;\n\tcase TOKEN_TYPE_LBRACE:\n\t\treturn parser_parse_object(parser);\n\tcase TOKEN_TYPE_LBRACKET:\n\t\treturn parser_parse_array(parser);\n\tdefault:\n\t\treturn true;\n\t}\n\tnot_reached();\n}\n\nstatic bool\nparser_parse_pair(parser_t *parser) {\n\texpect_d_eq(parser->token.token_type, TOKEN_TYPE_STRING,\n\t    \"Pair should start with string\");\n\tif (parser_tokenize(parser)) {\n\t\treturn true;\n\t}\n\tswitch (parser->token.token_type) {\n\tcase TOKEN_TYPE_COLON:\n\t\tif (parser_tokenize(parser)) {\n\t\t\treturn true;\n\t\t}\n\t\treturn parser_parse_value(parser);\n\tdefault:\n\t\treturn true;\n\t}\n}\n\nstatic bool\nparser_parse_values(parser_t *parser) {\n\tif (parser_parse_value(parser)) {\n\t\treturn 
true;\n\t}\n\n\twhile (true) {\n\t\tif (parser_tokenize(parser)) {\n\t\t\treturn true;\n\t\t}\n\t\tswitch (parser->token.token_type) {\n\t\tcase TOKEN_TYPE_COMMA:\n\t\t\tif (parser_tokenize(parser)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tif (parser_parse_value(parser)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase TOKEN_TYPE_RBRACKET:\n\t\t\treturn false;\n\t\tdefault:\n\t\t\treturn true;\n\t\t}\n\t}\n}\n\nstatic bool\nparser_parse_array(parser_t *parser) {\n\texpect_d_eq(parser->token.token_type, TOKEN_TYPE_LBRACKET,\n\t    \"Array should start with [\");\n\tif (parser_tokenize(parser)) {\n\t\treturn true;\n\t}\n\tswitch (parser->token.token_type) {\n\tcase TOKEN_TYPE_RBRACKET:\n\t\treturn false;\n\tdefault:\n\t\treturn parser_parse_values(parser);\n\t}\n\tnot_reached();\n}\n\nstatic bool\nparser_parse_pairs(parser_t *parser) {\n\texpect_d_eq(parser->token.token_type, TOKEN_TYPE_STRING,\n\t    \"Object should start with string\");\n\tif (parser_parse_pair(parser)) {\n\t\treturn true;\n\t}\n\n\twhile (true) {\n\t\tif (parser_tokenize(parser)) {\n\t\t\treturn true;\n\t\t}\n\t\tswitch (parser->token.token_type) {\n\t\tcase TOKEN_TYPE_COMMA:\n\t\t\tif (parser_tokenize(parser)) {\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tswitch (parser->token.token_type) {\n\t\t\tcase TOKEN_TYPE_STRING:\n\t\t\t\tif (parser_parse_pair(parser)) {\n\t\t\t\t\treturn true;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\treturn true;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase TOKEN_TYPE_RBRACE:\n\t\t\treturn false;\n\t\tdefault:\n\t\t\treturn true;\n\t\t}\n\t}\n}\n\nstatic bool\nparser_parse_object(parser_t *parser) {\n\texpect_d_eq(parser->token.token_type, TOKEN_TYPE_LBRACE,\n\t    \"Object should start with {\");\n\tif (parser_tokenize(parser)) {\n\t\treturn true;\n\t}\n\tswitch (parser->token.token_type) {\n\tcase TOKEN_TYPE_STRING:\n\t\treturn parser_parse_pairs(parser);\n\tcase TOKEN_TYPE_RBRACE:\n\t\treturn false;\n\tdefault:\n\t\treturn true;\n\t}\n\tnot_reached();\n}\n\nstatic 
bool\nparser_parse(parser_t *parser) {\n\tif (parser_tokenize(parser)) {\n\t\tgoto label_error;\n\t}\n\tif (parser_parse_value(parser)) {\n\t\tgoto label_error;\n\t}\n\n\tif (parser_tokenize(parser)) {\n\t\tgoto label_error;\n\t}\n\tswitch (parser->token.token_type) {\n\tcase TOKEN_TYPE_EOI:\n\t\treturn false;\n\tdefault:\n\t\tgoto label_error;\n\t}\n\tnot_reached();\n\nlabel_error:\n\ttoken_error(&parser->token);\n\treturn true;\n}\n\nTEST_BEGIN(test_json_parser) {\n\tsize_t i;\n\tconst char *invalid_inputs[] = {\n\t\t/* Tokenizer error case tests. */\n\t\t\"{ \\\"string\\\": X }\",\n\t\t\"{ \\\"string\\\": nXll }\",\n\t\t\"{ \\\"string\\\": nuXl }\",\n\t\t\"{ \\\"string\\\": nulX }\",\n\t\t\"{ \\\"string\\\": nullX }\",\n\t\t\"{ \\\"string\\\": fXlse }\",\n\t\t\"{ \\\"string\\\": faXse }\",\n\t\t\"{ \\\"string\\\": falXe }\",\n\t\t\"{ \\\"string\\\": falsX }\",\n\t\t\"{ \\\"string\\\": falseX }\",\n\t\t\"{ \\\"string\\\": tXue }\",\n\t\t\"{ \\\"string\\\": trXe }\",\n\t\t\"{ \\\"string\\\": truX }\",\n\t\t\"{ \\\"string\\\": trueX }\",\n\t\t\"{ \\\"string\\\": \\\"\\n\\\" }\",\n\t\t\"{ \\\"string\\\": \\\"\\\\z\\\" }\",\n\t\t\"{ \\\"string\\\": \\\"\\\\uX000\\\" }\",\n\t\t\"{ \\\"string\\\": \\\"\\\\u0X00\\\" }\",\n\t\t\"{ \\\"string\\\": \\\"\\\\u00X0\\\" }\",\n\t\t\"{ \\\"string\\\": \\\"\\\\u000X\\\" }\",\n\t\t\"{ \\\"string\\\": -X }\",\n\t\t\"{ \\\"string\\\": 0.X }\",\n\t\t\"{ \\\"string\\\": 0.0eX }\",\n\t\t\"{ \\\"string\\\": 0.0e+X }\",\n\n\t\t/* Parser error test cases. */\n\t\t\"{\\\"string\\\": }\",\n\t\t\"{\\\"string\\\" }\",\n\t\t\"{\\\"string\\\": [ 0 }\",\n\t\t\"{\\\"string\\\": {\\\"a\\\":0, 1 } }\",\n\t\t\"{\\\"string\\\": {\\\"a\\\":0: } }\",\n\t\t\"{\",\n\t\t\"{}{\",\n\t};\n\tconst char *valid_inputs[] = {\n\t\t/* Token tests. 
*/\n\t\t\"null\",\n\t\t\"false\",\n\t\t\"true\",\n\t\t\"{}\",\n\t\t\"{\\\"a\\\": 0}\",\n\t\t\"[]\",\n\t\t\"[0, 1]\",\n\t\t\"0\",\n\t\t\"1\",\n\t\t\"10\",\n\t\t\"-10\",\n\t\t\"10.23\",\n\t\t\"10.23e4\",\n\t\t\"10.23e-4\",\n\t\t\"10.23e+4\",\n\t\t\"10.23E4\",\n\t\t\"10.23E-4\",\n\t\t\"10.23E+4\",\n\t\t\"-10.23\",\n\t\t\"-10.23e4\",\n\t\t\"-10.23e-4\",\n\t\t\"-10.23e+4\",\n\t\t\"-10.23E4\",\n\t\t\"-10.23E-4\",\n\t\t\"-10.23E+4\",\n\t\t\"\\\"value\\\"\",\n\t\t\"\\\" \\\\\\\" \\\\/ \\\\b \\\\n \\\\r \\\\t \\\\u0abc \\\\u1DEF \\\"\",\n\n\t\t/* Parser test with various nesting. */\n\t\t\"{\\\"a\\\":null, \\\"b\\\":[1,[{\\\"c\\\":2},3]], \\\"d\\\":{\\\"e\\\":true}}\",\n\t};\n\n\tfor (i = 0; i < sizeof(invalid_inputs)/sizeof(const char *); i++) {\n\t\tconst char *input = invalid_inputs[i];\n\t\tparser_t parser;\n\t\tparser_init(&parser, false);\n\t\texpect_false(parser_append(&parser, input),\n\t\t    \"Unexpected input appending failure\");\n\t\texpect_true(parser_parse(&parser),\n\t\t    \"Unexpected parse success for input: %s\", input);\n\t\tparser_fini(&parser);\n\t}\n\n\tfor (i = 0; i < sizeof(valid_inputs)/sizeof(const char *); i++) {\n\t\tconst char *input = valid_inputs[i];\n\t\tparser_t parser;\n\t\tparser_init(&parser, true);\n\t\texpect_false(parser_append(&parser, input),\n\t\t    \"Unexpected input appending failure\");\n\t\texpect_false(parser_parse(&parser),\n\t\t    \"Unexpected parse error for input: %s\", input);\n\t\tparser_fini(&parser);\n\t}\n}\nTEST_END\n\nvoid\nwrite_cb(void *opaque, const char *str) {\n\tparser_t *parser = (parser_t *)opaque;\n\tif (parser_append(parser, str)) {\n\t\ttest_fail(\"Unexpected input appending failure\");\n\t}\n}\n\nTEST_BEGIN(test_stats_print_json) {\n\tconst char *opts[] = 
{\n\t\t\"J\",\n\t\t\"Jg\",\n\t\t\"Jm\",\n\t\t\"Jd\",\n\t\t\"Jmd\",\n\t\t\"Jgd\",\n\t\t\"Jgm\",\n\t\t\"Jgmd\",\n\t\t\"Ja\",\n\t\t\"Jb\",\n\t\t\"Jl\",\n\t\t\"Jx\",\n\t\t\"Jbl\",\n\t\t\"Jal\",\n\t\t\"Jab\",\n\t\t\"Jabl\",\n\t\t\"Jax\",\n\t\t\"Jbx\",\n\t\t\"Jlx\",\n\t\t\"Jablx\",\n\t\t\"Jgmdablx\",\n\t};\n\tunsigned arena_ind, i;\n\n\tfor (i = 0; i < 3; i++) {\n\t\tunsigned j;\n\n\t\tswitch (i) {\n\t\tcase 0:\n\t\t\tbreak;\n\t\tcase 1: {\n\t\t\tsize_t sz = sizeof(arena_ind);\n\t\t\texpect_d_eq(mallctl(\"arenas.create\", (void *)&arena_ind,\n\t\t\t    &sz, NULL, 0), 0, \"Unexpected mallctl failure\");\n\t\t\tbreak;\n\t\t} case 2: {\n\t\t\tsize_t mib[3];\n\t\t\tsize_t miblen = sizeof(mib)/sizeof(size_t);\n\t\t\texpect_d_eq(mallctlnametomib(\"arena.0.destroy\",\n\t\t\t    mib, &miblen), 0,\n\t\t\t    \"Unexpected mallctlnametomib failure\");\n\t\t\tmib[1] = arena_ind;\n\t\t\texpect_d_eq(mallctlbymib(mib, miblen, NULL, NULL, NULL,\n\t\t\t    0), 0, \"Unexpected mallctlbymib failure\");\n\t\t\tbreak;\n\t\t} default:\n\t\t\tnot_reached();\n\t\t}\n\n\t\tfor (j = 0; j < sizeof(opts)/sizeof(const char *); j++) {\n\t\t\tparser_t parser;\n\n\t\t\tparser_init(&parser, true);\n\t\t\tmalloc_stats_print(write_cb, (void *)&parser, opts[j]);\n\t\t\texpect_false(parser_parse(&parser),\n\t\t\t    \"Unexpected parse error, opts=\\\"%s\\\"\", opts[j]);\n\t\t\tparser_fini(&parser);\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_json_parser,\n\t    test_stats_print_json);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/sz.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_sz_psz2ind) {\n\t/*\n\t * Testing page size classes which reside prior to the regular group\n\t * with all size classes divisible by page size.\n\t * For x86_64 Linux, it's 4096, 8192, 12288, 16384, with corresponding\n\t * pszind 0, 1, 2 and 3.\n\t */\n\tfor (size_t i = 0; i < SC_NGROUP; i++) {\n\t\tfor (size_t psz = i * PAGE + 1; psz <= (i + 1) * PAGE; psz++) {\n\t\t\tpszind_t ind = sz_psz2ind(psz);\n\t\t\texpect_zu_eq(ind, i, \"Got %u as sz_psz2ind of %zu\", ind,\n\t\t\t    psz);\n\t\t}\n\t}\n\n\tsc_data_t data;\n\tmemset(&data, 0, sizeof(data));\n\tsc_data_init(&data);\n\t/*\n\t * 'base' is the base of the first regular group with all size classes\n\t * divisible by page size.\n\t * For x86_64 Linux, it's 16384, and base_ind is 36.\n\t */\n\tsize_t base_psz = 1 << (SC_LG_NGROUP + LG_PAGE);\n\tsize_t base_ind = 0;\n\twhile (base_ind < SC_NSIZES &&\n\t    reg_size_compute(data.sc[base_ind].lg_base,\n\t\tdata.sc[base_ind].lg_delta,\n\t\tdata.sc[base_ind].ndelta) < base_psz) {\n\t\tbase_ind++;\n\t}\n\texpect_zu_eq(\n\t    reg_size_compute(data.sc[base_ind].lg_base,\n\t\tdata.sc[base_ind].lg_delta, data.sc[base_ind].ndelta),\n\t    base_psz, \"Size class equal to %zu not found\", base_psz);\n\t/*\n\t * Test different sizes falling into groups after the 'base'. 
The\n\t * increment is PAGE / 3 for execution speed.\n\t */\n\tbase_ind -= SC_NGROUP;\n\tfor (size_t psz = base_psz; psz <= 64 * 1024 * 1024; psz += PAGE / 3) {\n\t\tpszind_t ind = sz_psz2ind(psz);\n\t\tsc_t gt_sc = data.sc[ind + base_ind];\n\t\texpect_zu_gt(psz,\n\t\t    reg_size_compute(gt_sc.lg_base, gt_sc.lg_delta,\n\t\t\tgt_sc.ndelta),\n\t\t    \"Got %u as sz_psz2ind of %zu\", ind, psz);\n\t\tsc_t le_sc = data.sc[ind + base_ind + 1];\n\t\texpect_zu_le(psz,\n\t\t    reg_size_compute(le_sc.lg_base, le_sc.lg_delta,\n\t\t\tle_sc.ndelta),\n\t\t    \"Got %u as sz_psz2ind of %zu\", ind, psz);\n\t}\n\n\tpszind_t max_ind = sz_psz2ind(SC_LARGE_MAXCLASS + 1);\n\texpect_lu_eq(max_ind, SC_NPSIZES,\n\t    \"Got %u as sz_psz2ind of %llu\", max_ind, SC_LARGE_MAXCLASS);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(test_sz_psz2ind);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/tcache_max.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/san.h\"\n\nconst char *malloc_conf = TEST_SAN_UAF_ALIGN_DISABLE;\n\nenum {\n\talloc_option_start = 0,\n\tuse_malloc = 0,\n\tuse_mallocx,\n\talloc_option_end\n};\n\nenum {\n\tdalloc_option_start = 0,\n\tuse_free = 0,\n\tuse_dallocx,\n\tuse_sdallocx,\n\tdalloc_option_end\n};\n\nstatic unsigned alloc_option, dalloc_option;\nstatic size_t tcache_max;\n\nstatic void *\nalloc_func(size_t sz) {\n\tvoid *ret;\n\n\tswitch (alloc_option) {\n\tcase use_malloc:\n\t\tret = malloc(sz);\n\t\tbreak;\n\tcase use_mallocx:\n\t\tret = mallocx(sz, 0);\n\t\tbreak;\n\tdefault:\n\t\tunreachable();\n\t}\n\texpect_ptr_not_null(ret, \"Unexpected malloc / mallocx failure\");\n\n\treturn ret;\n}\n\nstatic void\ndalloc_func(void *ptr, size_t sz) {\n\tswitch (dalloc_option) {\n\tcase use_free:\n\t\tfree(ptr);\n\t\tbreak;\n\tcase use_dallocx:\n\t\tdallocx(ptr, 0);\n\t\tbreak;\n\tcase use_sdallocx:\n\t\tsdallocx(ptr, sz, 0);\n\t\tbreak;\n\tdefault:\n\t\tunreachable();\n\t}\n}\n\nstatic size_t\ntcache_bytes_read(void) {\n\tuint64_t epoch;\n\tassert_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsize_t tcache_bytes;\n\tsize_t sz = sizeof(tcache_bytes);\n\tassert_d_eq(mallctl(\n\t    \"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL) \".tcache_bytes\",\n\t    &tcache_bytes, &sz, NULL, 0), 0, \"Unexpected mallctl failure\");\n\n\treturn tcache_bytes;\n}\n\nstatic void\ntcache_bytes_check_update(size_t *prev, ssize_t diff) {\n\tsize_t tcache_bytes = tcache_bytes_read();\n\texpect_zu_eq(tcache_bytes, *prev + diff, \"tcache bytes not expected\");\n\n\t*prev += diff;\n}\n\nstatic void\ntest_tcache_bytes_alloc(size_t alloc_size) {\n\texpect_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0), 0,\n\t    \"Unexpected tcache flush failure\");\n\n\tsize_t usize = sz_s2u(alloc_size);\n\t/* No change is expected if usize is outside of tcache_max range. 
*/\n\tbool cached = (usize <= tcache_max);\n\tssize_t diff = cached ? usize : 0;\n\n\tvoid *ptr1 = alloc_func(alloc_size);\n\tvoid *ptr2 = alloc_func(alloc_size);\n\n\tsize_t bytes = tcache_bytes_read();\n\tdalloc_func(ptr2, alloc_size);\n\t/* Expect tcache_bytes increase after dalloc */\n\ttcache_bytes_check_update(&bytes, diff);\n\n\tdalloc_func(ptr1, alloc_size);\n\t/* Expect tcache_bytes increase again */\n\ttcache_bytes_check_update(&bytes, diff);\n\n\tvoid *ptr3 = alloc_func(alloc_size);\n\tif (cached) {\n\t\texpect_ptr_eq(ptr1, ptr3, \"Unexpected cached ptr\");\n\t}\n\t/* Expect tcache_bytes decrease after alloc */\n\ttcache_bytes_check_update(&bytes, -diff);\n\n\tvoid *ptr4 = alloc_func(alloc_size);\n\tif (cached) {\n\t\texpect_ptr_eq(ptr2, ptr4, \"Unexpected cached ptr\");\n\t}\n\t/* Expect tcache_bytes decrease again */\n\ttcache_bytes_check_update(&bytes, -diff);\n\n\tdalloc_func(ptr3, alloc_size);\n\ttcache_bytes_check_update(&bytes, diff);\n\tdalloc_func(ptr4, alloc_size);\n\ttcache_bytes_check_update(&bytes, diff);\n}\n\nstatic void\ntest_tcache_max_impl(void) {\n\tsize_t sz;\n\tsz = sizeof(tcache_max);\n\tassert_d_eq(mallctl(\"arenas.tcache_max\", (void *)&tcache_max,\n\t    &sz, NULL, 0), 0, \"Unexpected mallctl() failure\");\n\n\t/* opt.tcache_max set to 1024 in tcache_max.sh */\n\texpect_zu_eq(tcache_max, 1024, \"tcache_max not expected\");\n\n\ttest_tcache_bytes_alloc(1);\n\ttest_tcache_bytes_alloc(tcache_max - 1);\n\ttest_tcache_bytes_alloc(tcache_max);\n\ttest_tcache_bytes_alloc(tcache_max + 1);\n\n\ttest_tcache_bytes_alloc(PAGE - 1);\n\ttest_tcache_bytes_alloc(PAGE);\n\ttest_tcache_bytes_alloc(PAGE + 1);\n\n\tsize_t large;\n\tsz = sizeof(large);\n\tassert_d_eq(mallctl(\"arenas.lextent.0.size\", (void *)&large, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl() failure\");\n\n\ttest_tcache_bytes_alloc(large - 1);\n\ttest_tcache_bytes_alloc(large);\n\ttest_tcache_bytes_alloc(large + 1);\n}\n\nTEST_BEGIN(test_tcache_max) 
{\n\ttest_skip_if(!config_stats);\n\ttest_skip_if(!opt_tcache);\n\ttest_skip_if(opt_prof);\n\ttest_skip_if(san_uaf_detection_enabled());\n\n\tfor (alloc_option = alloc_option_start;\n\t     alloc_option < alloc_option_end;\n\t     alloc_option++) {\n\t\tfor (dalloc_option = dalloc_option_start;\n\t\t     dalloc_option < dalloc_option_end;\n\t\t     dalloc_option++) {\n\t\t\ttest_tcache_max_impl();\n\t\t}\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(test_tcache_max);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/tcache_max.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"tcache_max:1024\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/test_hooks.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic bool hook_called = false;\n\nstatic void\nhook() {\n\thook_called = true;\n}\n\nstatic int\nfunc_to_hook(int arg1, int arg2) {\n\treturn arg1 + arg2;\n}\n\n#define func_to_hook JEMALLOC_TEST_HOOK(func_to_hook, test_hooks_libc_hook)\n\nTEST_BEGIN(unhooked_call) {\n\ttest_hooks_libc_hook = NULL;\n\thook_called = false;\n\texpect_d_eq(3, func_to_hook(1, 2), \"Hooking changed return value.\");\n\texpect_false(hook_called, \"Nulling out hook didn't take.\");\n}\nTEST_END\n\nTEST_BEGIN(hooked_call) {\n\ttest_hooks_libc_hook = &hook;\n\thook_called = false;\n\texpect_d_eq(3, func_to_hook(1, 2), \"Hooking changed return value.\");\n\texpect_true(hook_called, \"Hook should have executed.\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    unhooked_call,\n\t    hooked_call);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/thread_event.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nTEST_BEGIN(test_next_event_fast) {\n\ttsd_t *tsd = tsd_fetch();\n\tte_ctx_t ctx;\n\tte_ctx_get(tsd, &ctx, true);\n\n\tte_ctx_last_event_set(&ctx, 0);\n\tte_ctx_current_bytes_set(&ctx, TE_NEXT_EVENT_FAST_MAX - 8U);\n\tte_ctx_next_event_set(tsd, &ctx, TE_NEXT_EVENT_FAST_MAX);\n#define E(event, condition, is_alloc)\t\t\t\t\t\\\n\tif (is_alloc && condition) {\t\t\t\t\t\\\n\t\tevent##_event_wait_set(tsd, TE_NEXT_EVENT_FAST_MAX);\t\\\n\t}\n\tITERATE_OVER_ALL_EVENTS\n#undef E\n\n\t/* Test next_event_fast rolling back to 0. */\n\tvoid *p = malloc(16U);\n\tassert_ptr_not_null(p, \"malloc() failed\");\n\tfree(p);\n\n\t/* Test next_event_fast resuming to be equal to next_event. */\n\tvoid *q = malloc(SC_LOOKUP_MAXCLASS);\n\tassert_ptr_not_null(q, \"malloc() failed\");\n\tfree(q);\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_next_event_fast);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/thread_event.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_prof}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"prof:true,lg_prof_sample:0\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/ticker.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include \"jemalloc/internal/ticker.h\"\n\nTEST_BEGIN(test_ticker_tick) {\n#define NREPS 2\n#define NTICKS 3\n\tticker_t ticker;\n\tint32_t i, j;\n\n\tticker_init(&ticker, NTICKS);\n\tfor (i = 0; i < NREPS; i++) {\n\t\tfor (j = 0; j < NTICKS; j++) {\n\t\t\texpect_u_eq(ticker_read(&ticker), NTICKS - j,\n\t\t\t    \"Unexpected ticker value (i=%d, j=%d)\", i, j);\n\t\t\texpect_false(ticker_tick(&ticker),\n\t\t\t    \"Unexpected ticker fire (i=%d, j=%d)\", i, j);\n\t\t}\n\t\texpect_u32_eq(ticker_read(&ticker), 0,\n\t\t    \"Expected ticker depletion\");\n\t\texpect_true(ticker_tick(&ticker),\n\t\t    \"Expected ticker fire (i=%d)\", i);\n\t\texpect_u32_eq(ticker_read(&ticker), NTICKS,\n\t\t    \"Expected ticker reset\");\n\t}\n#undef NTICKS\n}\nTEST_END\n\nTEST_BEGIN(test_ticker_ticks) {\n#define NTICKS 3\n\tticker_t ticker;\n\n\tticker_init(&ticker, NTICKS);\n\n\texpect_u_eq(ticker_read(&ticker), NTICKS, \"Unexpected ticker value\");\n\texpect_false(ticker_ticks(&ticker, NTICKS), \"Unexpected ticker fire\");\n\texpect_u_eq(ticker_read(&ticker), 0, \"Unexpected ticker value\");\n\texpect_true(ticker_ticks(&ticker, NTICKS), \"Expected ticker fire\");\n\texpect_u_eq(ticker_read(&ticker), NTICKS, \"Unexpected ticker value\");\n\n\texpect_true(ticker_ticks(&ticker, NTICKS + 1), \"Expected ticker fire\");\n\texpect_u_eq(ticker_read(&ticker), NTICKS, \"Unexpected ticker value\");\n#undef NTICKS\n}\nTEST_END\n\nTEST_BEGIN(test_ticker_copy) {\n#define NTICKS 3\n\tticker_t ta, tb;\n\n\tticker_init(&ta, NTICKS);\n\tticker_copy(&tb, &ta);\n\texpect_u_eq(ticker_read(&tb), NTICKS, \"Unexpected ticker value\");\n\texpect_true(ticker_ticks(&tb, NTICKS + 1), \"Expected ticker fire\");\n\texpect_u_eq(ticker_read(&tb), NTICKS, \"Unexpected ticker value\");\n\n\tticker_tick(&ta);\n\tticker_copy(&tb, &ta);\n\texpect_u_eq(ticker_read(&tb), NTICKS - 1, \"Unexpected ticker value\");\n\texpect_true(ticker_ticks(&tb, NTICKS), \"Expected 
ticker fire\");\n\texpect_u_eq(ticker_read(&tb), NTICKS, \"Unexpected ticker value\");\n#undef NTICKS\n}\nTEST_END\n\nTEST_BEGIN(test_ticker_geom) {\n\tconst int32_t ticks = 100;\n\tconst uint64_t niters = 100 * 1000;\n\n\tticker_geom_t ticker;\n\tticker_geom_init(&ticker, ticks);\n\tuint64_t total_ticks = 0;\n\t/* Just some random constant. */\n\tuint64_t prng_state = 0x343219f93496db9fULL;\n\tfor (uint64_t i = 0; i < niters; i++) {\n\t\twhile(!ticker_geom_tick(&ticker, &prng_state)) {\n\t\t\ttotal_ticks++;\n\t\t}\n\t}\n\t/*\n\t * In fact, with this choice of random seed and the PRNG implementation\n\t * used at the time this was tested, total_ticks is 95.1% of the\n\t * expected ticks.\n\t */\n\texpect_u64_ge(total_ticks , niters * ticks * 9 / 10,\n\t    \"Mean off by > 10%%\");\n\texpect_u64_le(total_ticks , niters * ticks * 11 / 10,\n\t    \"Mean off by > 10%%\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_ticker_tick,\n\t    test_ticker_ticks,\n\t    test_ticker_copy,\n\t    test_ticker_geom);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/tsd.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n/*\n * If we're e.g. in debug mode, we *never* enter the fast path, and so shouldn't\n * be asserting that we're on one.\n */\nstatic bool originally_fast;\nstatic int data_cleanup_count;\n\nvoid\ndata_cleanup(int *data) {\n\tif (data_cleanup_count == 0) {\n\t\texpect_x_eq(*data, MALLOC_TSD_TEST_DATA_INIT,\n\t\t    \"Argument passed into cleanup function should match tsd \"\n\t\t    \"value\");\n\t}\n\t++data_cleanup_count;\n\n\t/*\n\t * Allocate during cleanup for two rounds, in order to assure that\n\t * jemalloc's internal tsd reinitialization happens.\n\t */\n\tbool reincarnate = false;\n\tswitch (*data) {\n\tcase MALLOC_TSD_TEST_DATA_INIT:\n\t\t*data = 1;\n\t\treincarnate = true;\n\t\tbreak;\n\tcase 1:\n\t\t*data = 2;\n\t\treincarnate = true;\n\t\tbreak;\n\tcase 2:\n\t\treturn;\n\tdefault:\n\t\tnot_reached();\n\t}\n\n\tif (reincarnate) {\n\t\tvoid *p = mallocx(1, 0);\n\t\texpect_ptr_not_null(p, \"Unexpeced mallocx() failure\");\n\t\tdallocx(p, 0);\n\t}\n}\n\nstatic void *\nthd_start(void *arg) {\n\tint d = (int)(uintptr_t)arg;\n\tvoid *p;\n\n\t/*\n\t * Test free before tsd init -- the free fast path (which does not\n\t * explicitly check for NULL) has to tolerate this case, and fall back\n\t * to free_default.\n\t */\n\tfree(NULL);\n\n\ttsd_t *tsd = tsd_fetch();\n\texpect_x_eq(tsd_test_data_get(tsd), MALLOC_TSD_TEST_DATA_INIT,\n\t    \"Initial tsd get should return initialization value\");\n\n\tp = malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\n\ttsd_test_data_set(tsd, d);\n\texpect_x_eq(tsd_test_data_get(tsd), d,\n\t    \"After tsd set, tsd get should return value that was set\");\n\n\td = 0;\n\texpect_x_eq(tsd_test_data_get(tsd), (int)(uintptr_t)arg,\n\t    \"Resetting local data should have no effect on tsd\");\n\n\ttsd_test_callback_set(tsd, &data_cleanup);\n\n\tfree(p);\n\treturn NULL;\n}\n\nTEST_BEGIN(test_tsd_main_thread) {\n\tthd_start((void 
*)(uintptr_t)0xa5f3e329);\n}\nTEST_END\n\nTEST_BEGIN(test_tsd_sub_thread) {\n\tthd_t thd;\n\n\tdata_cleanup_count = 0;\n\tthd_create(&thd, thd_start, (void *)MALLOC_TSD_TEST_DATA_INIT);\n\tthd_join(thd, NULL);\n\t/*\n\t * We reincarnate twice in the data cleanup, so it should execute at\n\t * least 3 times.\n\t */\n\texpect_x_ge(data_cleanup_count, 3,\n\t    \"Cleanup function should have executed multiple times.\");\n}\nTEST_END\n\nstatic void *\nthd_start_reincarnated(void *arg) {\n\ttsd_t *tsd = tsd_fetch();\n\tassert(tsd);\n\n\tvoid *p = malloc(1);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\n\t/* Manually trigger reincarnation. */\n\texpect_ptr_not_null(tsd_arena_get(tsd),\n\t    \"Should have tsd arena set.\");\n\ttsd_cleanup((void *)tsd);\n\texpect_ptr_null(*tsd_arenap_get_unsafe(tsd),\n\t    \"TSD arena should have been cleared.\");\n\texpect_u_eq(tsd_state_get(tsd), tsd_state_purgatory,\n\t    \"TSD state should be purgatory\\n\");\n\n\tfree(p);\n\texpect_u_eq(tsd_state_get(tsd), tsd_state_reincarnated,\n\t    \"TSD state should be reincarnated\\n\");\n\tp = mallocx(1, MALLOCX_TCACHE_NONE);\n\texpect_ptr_not_null(p, \"Unexpected malloc() failure\");\n\texpect_ptr_null(*tsd_arenap_get_unsafe(tsd),\n\t    \"Should not have tsd arena set after reincarnation.\");\n\n\tfree(p);\n\ttsd_cleanup((void *)tsd);\n\texpect_ptr_null(*tsd_arenap_get_unsafe(tsd),\n\t    \"TSD arena should have been cleared after 2nd cleanup.\");\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_tsd_reincarnation) {\n\tthd_t thd;\n\tthd_create(&thd, thd_start_reincarnated, NULL);\n\tthd_join(thd, NULL);\n}\nTEST_END\n\ntypedef struct {\n\tatomic_u32_t phase;\n\tatomic_b_t error;\n} global_slow_data_t;\n\nstatic void *\nthd_start_global_slow(void *arg) {\n\t/* PHASE 0 */\n\tglobal_slow_data_t *data = (global_slow_data_t *)arg;\n\tfree(mallocx(1, 0));\n\n\ttsd_t *tsd = tsd_fetch();\n\t/*\n\t * No global slowness has happened yet; there was an error if we were\n\t * originally fast 
but aren't now.\n\t */\n\tatomic_store_b(&data->error, originally_fast && !tsd_fast(tsd),\n\t    ATOMIC_SEQ_CST);\n\tatomic_store_u32(&data->phase, 1, ATOMIC_SEQ_CST);\n\n\t/* PHASE 2 */\n\twhile (atomic_load_u32(&data->phase, ATOMIC_SEQ_CST) != 2) {\n\t}\n\tfree(mallocx(1, 0));\n\tatomic_store_b(&data->error, tsd_fast(tsd), ATOMIC_SEQ_CST);\n\tatomic_store_u32(&data->phase, 3, ATOMIC_SEQ_CST);\n\n\t/* PHASE 4 */\n\twhile (atomic_load_u32(&data->phase, ATOMIC_SEQ_CST) != 4) {\n\t}\n\tfree(mallocx(1, 0));\n\tatomic_store_b(&data->error, tsd_fast(tsd), ATOMIC_SEQ_CST);\n\tatomic_store_u32(&data->phase, 5, ATOMIC_SEQ_CST);\n\n\t/* PHASE 6 */\n\twhile (atomic_load_u32(&data->phase, ATOMIC_SEQ_CST) != 6) {\n\t}\n\tfree(mallocx(1, 0));\n\t/* Only one decrement so far. */\n\tatomic_store_b(&data->error, tsd_fast(tsd), ATOMIC_SEQ_CST);\n\tatomic_store_u32(&data->phase, 7, ATOMIC_SEQ_CST);\n\n\t/* PHASE 8 */\n\twhile (atomic_load_u32(&data->phase, ATOMIC_SEQ_CST) != 8) {\n\t}\n\tfree(mallocx(1, 0));\n\t/*\n\t * Both decrements happened; we should be fast again (if we ever\n\t * were)\n\t */\n\tatomic_store_b(&data->error, originally_fast && !tsd_fast(tsd),\n\t    ATOMIC_SEQ_CST);\n\tatomic_store_u32(&data->phase, 9, ATOMIC_SEQ_CST);\n\n\treturn NULL;\n}\n\nTEST_BEGIN(test_tsd_global_slow) {\n\tglobal_slow_data_t data = {ATOMIC_INIT(0), ATOMIC_INIT(false)};\n\t/*\n\t * Note that the \"mallocx\" here (vs. 
malloc) is important, since the\n\t * compiler is allowed to optimize away free(malloc(1)) but not\n\t * free(mallocx(1)).\n\t */\n\tfree(mallocx(1, 0));\n\ttsd_t *tsd = tsd_fetch();\n\toriginally_fast = tsd_fast(tsd);\n\n\tthd_t thd;\n\tthd_create(&thd, thd_start_global_slow, (void *)&data.phase);\n\t/* PHASE 1 */\n\twhile (atomic_load_u32(&data.phase, ATOMIC_SEQ_CST) != 1) {\n\t\t/*\n\t\t * We don't have a portable condvar/semaphore mechanism.\n\t\t * Spin-wait.\n\t\t */\n\t}\n\texpect_false(atomic_load_b(&data.error, ATOMIC_SEQ_CST), \"\");\n\ttsd_global_slow_inc(tsd_tsdn(tsd));\n\tfree(mallocx(1, 0));\n\texpect_false(tsd_fast(tsd), \"\");\n\tatomic_store_u32(&data.phase, 2, ATOMIC_SEQ_CST);\n\n\t/* PHASE 3 */\n\twhile (atomic_load_u32(&data.phase, ATOMIC_SEQ_CST) != 3) {\n\t}\n\texpect_false(atomic_load_b(&data.error, ATOMIC_SEQ_CST), \"\");\n\t/* Increase again, so that we can test multiple fast/slow changes. */\n\ttsd_global_slow_inc(tsd_tsdn(tsd));\n\tatomic_store_u32(&data.phase, 4, ATOMIC_SEQ_CST);\n\tfree(mallocx(1, 0));\n\texpect_false(tsd_fast(tsd), \"\");\n\n\t/* PHASE 5 */\n\twhile (atomic_load_u32(&data.phase, ATOMIC_SEQ_CST) != 5) {\n\t}\n\texpect_false(atomic_load_b(&data.error, ATOMIC_SEQ_CST), \"\");\n\ttsd_global_slow_dec(tsd_tsdn(tsd));\n\tatomic_store_u32(&data.phase, 6, ATOMIC_SEQ_CST);\n\t/* We only decreased once; things should still be slow. */\n\tfree(mallocx(1, 0));\n\texpect_false(tsd_fast(tsd), \"\");\n\n\t/* PHASE 7 */\n\twhile (atomic_load_u32(&data.phase, ATOMIC_SEQ_CST) != 7) {\n\t}\n\texpect_false(atomic_load_b(&data.error, ATOMIC_SEQ_CST), \"\");\n\ttsd_global_slow_dec(tsd_tsdn(tsd));\n\tatomic_store_u32(&data.phase, 8, ATOMIC_SEQ_CST);\n\t/* We incremented and then decremented twice; we should be fast now. 
*/\n\tfree(mallocx(1, 0));\n\texpect_true(!originally_fast || tsd_fast(tsd), \"\");\n\n\t/* PHASE 9 */\n\twhile (atomic_load_u32(&data.phase, ATOMIC_SEQ_CST) != 9) {\n\t}\n\texpect_false(atomic_load_b(&data.error, ATOMIC_SEQ_CST), \"\");\n\n\tthd_join(thd, NULL);\n}\nTEST_END\n\nint\nmain(void) {\n\t/* Ensure tsd bootstrapped. */\n\tif (nallocx(1, 0) == 0) {\n\t\tmalloc_printf(\"Initialization error\");\n\t\treturn test_status_fail;\n\t}\n\n\treturn test_no_reentrancy(\n\t    test_tsd_main_thread,\n\t    test_tsd_sub_thread,\n\t    test_tsd_reincarnation,\n\t    test_tsd_global_slow);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/uaf.c",
    "content": "#include \"test/jemalloc_test.h\"\n#include \"test/arena_util.h\"\n#include \"test/san.h\"\n\n#include \"jemalloc/internal/cache_bin.h\"\n#include \"jemalloc/internal/san.h\"\n#include \"jemalloc/internal/safety_check.h\"\n\nconst char *malloc_conf = TEST_SAN_UAF_ALIGN_ENABLE;\n\nstatic size_t san_uaf_align;\n\nstatic bool fake_abort_called;\nvoid fake_abort(const char *message) {\n\t(void)message;\n\tfake_abort_called = true;\n}\n\nstatic void\ntest_write_after_free_pre(void) {\n\tsafety_check_set_abort(&fake_abort);\n\tfake_abort_called = false;\n}\n\nstatic void\ntest_write_after_free_post(void) {\n\tassert_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    0, \"Unexpected tcache flush failure\");\n\texpect_true(fake_abort_called, \"Use-after-free check didn't fire.\");\n\tsafety_check_set_abort(NULL);\n}\n\nstatic bool\nuaf_detection_enabled(void) {\n\tif (!config_uaf_detection || !san_uaf_detection_enabled()) {\n\t\treturn false;\n\t}\n\n\tssize_t lg_san_uaf_align;\n\tsize_t sz = sizeof(lg_san_uaf_align);\n\tassert_d_eq(mallctl(\"opt.lg_san_uaf_align\", &lg_san_uaf_align, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\tif (lg_san_uaf_align < 0) {\n\t\treturn false;\n\t}\n\tassert_zd_ge(lg_san_uaf_align, LG_PAGE, \"san_uaf_align out of range\");\n\tsan_uaf_align = (size_t)1 << lg_san_uaf_align;\n\n\tbool tcache_enabled;\n\tsz = sizeof(tcache_enabled);\n\tassert_d_eq(mallctl(\"thread.tcache.enabled\", &tcache_enabled, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\tif (!tcache_enabled) {\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\nstatic size_t\nread_tcache_stashed_bytes(unsigned arena_ind) {\n\tif (!config_stats) {\n\t\treturn 0;\n\t}\n\n\tuint64_t epoch;\n\tassert_d_eq(mallctl(\"epoch\", NULL, NULL, (void *)&epoch, sizeof(epoch)),\n\t    0, \"Unexpected mallctl() failure\");\n\n\tsize_t tcache_stashed_bytes;\n\tsize_t sz = sizeof(tcache_stashed_bytes);\n\tassert_d_eq(mallctl(\n\t    
\"stats.arenas.\" STRINGIFY(MALLCTL_ARENAS_ALL)\n\t    \".tcache_stashed_bytes\", &tcache_stashed_bytes, &sz, NULL, 0), 0,\n\t    \"Unexpected mallctl failure\");\n\n\treturn tcache_stashed_bytes;\n}\n\nstatic void\ntest_use_after_free(size_t alloc_size, bool write_after_free) {\n\tvoid *ptr = (void *)(uintptr_t)san_uaf_align;\n\tassert_true(cache_bin_nonfast_aligned(ptr), \"Wrong alignment\");\n\tptr = (void *)((uintptr_t)123 * (uintptr_t)san_uaf_align);\n\tassert_true(cache_bin_nonfast_aligned(ptr), \"Wrong alignment\");\n\tptr = (void *)((uintptr_t)san_uaf_align + 1);\n\tassert_false(cache_bin_nonfast_aligned(ptr), \"Wrong alignment\");\n\n\t/*\n\t * Disable purging (-1) so that all dirty pages remain committed, to\n\t * make use-after-free tolerable.\n\t */\n\tunsigned arena_ind = do_arena_create(-1, -1);\n\tint flags = MALLOCX_ARENA(arena_ind) | MALLOCX_TCACHE_NONE;\n\n\tsize_t n_max = san_uaf_align * 2;\n\tvoid **items = mallocx(n_max * sizeof(void *), flags);\n\tassert_ptr_not_null(items, \"Unexpected mallocx failure\");\n\n\tbool found = false;\n\tsize_t iter = 0;\n\tchar magic = 's';\n\tassert_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t    0, \"Unexpected tcache flush failure\");\n\twhile (!found) {\n\t\tptr = mallocx(alloc_size, flags);\n\t\tassert_ptr_not_null(ptr, \"Unexpected mallocx failure\");\n\n\t\tfound = cache_bin_nonfast_aligned(ptr);\n\t\t*(char *)ptr = magic;\n\t\titems[iter] = ptr;\n\t\tassert_zu_lt(iter++, n_max, \"No aligned ptr found\");\n\t}\n\n\tif (write_after_free) {\n\t\ttest_write_after_free_pre();\n\t}\n\tbool junked = false;\n\twhile (iter-- != 0) {\n\t\tchar *volatile mem = items[iter];\n\t\tassert_c_eq(*mem, magic, \"Unexpected memory content\");\n\t\tsize_t stashed_before = read_tcache_stashed_bytes(arena_ind);\n\t\tfree(mem);\n\t\tif (*mem != magic) {\n\t\t\tjunked = true;\n\t\t\tassert_c_eq(*mem, (char)uaf_detect_junk,\n\t\t\t    \"Unexpected junk-filling bytes\");\n\t\t\tif (write_after_free) 
{\n\t\t\t\t*(char *)mem = magic + 1;\n\t\t\t}\n\n\t\t\tsize_t stashed_after = read_tcache_stashed_bytes(\n\t\t\t    arena_ind);\n\t\t\t/*\n\t\t\t * An edge case is the deallocation above triggering the\n\t\t\t * tcache GC event, in which case the stashed pointers\n\t\t\t * may get flushed immediately, before returning from\n\t\t\t * free().  Treat these cases as checked already.\n\t\t\t */\n\t\t\tif (stashed_after <= stashed_before) {\n\t\t\t\tfake_abort_called = true;\n\t\t\t}\n\t\t}\n\t\t/* Flush tcache (including stashed). */\n\t\tassert_d_eq(mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0),\n\t\t    0, \"Unexpected tcache flush failure\");\n\t}\n\texpect_true(junked, \"Aligned ptr not junked\");\n\tif (write_after_free) {\n\t\ttest_write_after_free_post();\n\t}\n\n\tdallocx(items, flags);\n\tdo_arena_destroy(arena_ind);\n}\n\nTEST_BEGIN(test_read_after_free) {\n\ttest_skip_if(!uaf_detection_enabled());\n\n\ttest_use_after_free(sizeof(void *), /* write_after_free */ false);\n\ttest_use_after_free(sizeof(void *) + 1, /* write_after_free */ false);\n\ttest_use_after_free(16, /* write_after_free */ false);\n\ttest_use_after_free(20, /* write_after_free */ false);\n\ttest_use_after_free(32, /* write_after_free */ false);\n\ttest_use_after_free(33, /* write_after_free */ false);\n\ttest_use_after_free(48, /* write_after_free */ false);\n\ttest_use_after_free(64, /* write_after_free */ false);\n\ttest_use_after_free(65, /* write_after_free */ false);\n\ttest_use_after_free(129, /* write_after_free */ false);\n\ttest_use_after_free(255, /* write_after_free */ false);\n\ttest_use_after_free(256, /* write_after_free */ false);\n}\nTEST_END\n\nTEST_BEGIN(test_write_after_free) {\n\ttest_skip_if(!uaf_detection_enabled());\n\n\ttest_use_after_free(sizeof(void *), /* write_after_free */ true);\n\ttest_use_after_free(sizeof(void *) + 1, /* write_after_free */ true);\n\ttest_use_after_free(16, /* write_after_free */ true);\n\ttest_use_after_free(20, /* write_after_free */ 
true);\n\ttest_use_after_free(32, /* write_after_free */ true);\n\ttest_use_after_free(33, /* write_after_free */ true);\n\ttest_use_after_free(48, /* write_after_free */ true);\n\ttest_use_after_free(64, /* write_after_free */ true);\n\ttest_use_after_free(65, /* write_after_free */ true);\n\ttest_use_after_free(129, /* write_after_free */ true);\n\ttest_use_after_free(255, /* write_after_free */ true);\n\ttest_use_after_free(256, /* write_after_free */ true);\n}\nTEST_END\n\nstatic bool\ncheck_allocated_intact(void **allocated, size_t n_alloc) {\n\tfor (unsigned i = 0; i < n_alloc; i++) {\n\t\tvoid *ptr = *(void **)allocated[i];\n\t\tbool found = false;\n\t\tfor (unsigned j = 0; j < n_alloc; j++) {\n\t\t\tif (ptr == allocated[j]) {\n\t\t\t\tfound = true;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (!found) {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\nTEST_BEGIN(test_use_after_free_integration) {\n\ttest_skip_if(!uaf_detection_enabled());\n\n\tunsigned arena_ind = do_arena_create(-1, -1);\n\tint flags = MALLOCX_ARENA(arena_ind);\n\n\tsize_t n_alloc = san_uaf_align * 2;\n\tvoid **allocated = mallocx(n_alloc * sizeof(void *), flags);\n\tassert_ptr_not_null(allocated, \"Unexpected mallocx failure\");\n\n\tfor (unsigned i = 0; i < n_alloc; i++) {\n\t\tallocated[i] = mallocx(sizeof(void *) * 8, flags);\n\t\tassert_ptr_not_null(allocated[i], \"Unexpected mallocx failure\");\n\t\tif (i > 0) {\n\t\t\t/* Emulate a circular list. 
*/\n\t\t\t*(void **)allocated[i] = allocated[i - 1];\n\t\t}\n\t}\n\t*(void **)allocated[0] = allocated[n_alloc - 1];\n\texpect_true(check_allocated_intact(allocated, n_alloc),\n\t    \"Allocated data corrupted\");\n\n\tfor (unsigned i = 0; i < n_alloc; i++) {\n\t\tfree(allocated[i]);\n\t}\n\t/* Read-after-free */\n\texpect_false(check_allocated_intact(allocated, n_alloc),\n\t    \"Junk-filling not detected\");\n\n\ttest_write_after_free_pre();\n\tfor (unsigned i = 0; i < n_alloc; i++) {\n\t\tallocated[i] = mallocx(sizeof(void *), flags);\n\t\tassert_ptr_not_null(allocated[i], \"Unexpected mallocx failure\");\n\t\t*(void **)allocated[i] = (void *)(uintptr_t)i;\n\t}\n\t/* Write-after-free */\n\tfor (unsigned i = 0; i < n_alloc; i++) {\n\t\tfree(allocated[i]);\n\t\t*(void **)allocated[i] = NULL;\n\t}\n\ttest_write_after_free_post();\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_read_after_free,\n\t    test_write_after_free,\n\t    test_use_after_free_integration);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/witness.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic witness_lock_error_t *witness_lock_error_orig;\nstatic witness_owner_error_t *witness_owner_error_orig;\nstatic witness_not_owner_error_t *witness_not_owner_error_orig;\nstatic witness_depth_error_t *witness_depth_error_orig;\n\nstatic bool saw_lock_error;\nstatic bool saw_owner_error;\nstatic bool saw_not_owner_error;\nstatic bool saw_depth_error;\n\nstatic void\nwitness_lock_error_intercept(const witness_list_t *witnesses,\n    const witness_t *witness) {\n\tsaw_lock_error = true;\n}\n\nstatic void\nwitness_owner_error_intercept(const witness_t *witness) {\n\tsaw_owner_error = true;\n}\n\nstatic void\nwitness_not_owner_error_intercept(const witness_t *witness) {\n\tsaw_not_owner_error = true;\n}\n\nstatic void\nwitness_depth_error_intercept(const witness_list_t *witnesses,\n    witness_rank_t rank_inclusive, unsigned depth) {\n\tsaw_depth_error = true;\n}\n\nstatic int\nwitness_comp(const witness_t *a, void *oa, const witness_t *b, void *ob) {\n\texpect_u_eq(a->rank, b->rank, \"Witnesses should have equal rank\");\n\n\tassert(oa == (void *)a);\n\tassert(ob == (void *)b);\n\n\treturn strcmp(a->name, b->name);\n}\n\nstatic int\nwitness_comp_reverse(const witness_t *a, void *oa, const witness_t *b,\n    void *ob) {\n\texpect_u_eq(a->rank, b->rank, \"Witnesses should have equal rank\");\n\n\tassert(oa == (void *)a);\n\tassert(ob == (void *)b);\n\n\treturn -strcmp(a->name, b->name);\n}\n\nTEST_BEGIN(test_witness) {\n\twitness_t a, b;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)1U, 0);\n\n\twitness_init(&a, \"a\", 1, NULL, NULL);\n\twitness_assert_not_owner(&witness_tsdn, &a);\n\twitness_lock(&witness_tsdn, &a);\n\twitness_assert_owner(&witness_tsdn, &a);\n\twitness_assert_depth(&witness_tsdn, 
1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)1U, 1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)2U, 0);\n\n\twitness_init(&b, \"b\", 2, NULL, NULL);\n\twitness_assert_not_owner(&witness_tsdn, &b);\n\twitness_lock(&witness_tsdn, &b);\n\twitness_assert_owner(&witness_tsdn, &b);\n\twitness_assert_depth(&witness_tsdn, 2);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)1U, 2);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)2U, 1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)3U, 0);\n\n\twitness_unlock(&witness_tsdn, &a);\n\twitness_assert_depth(&witness_tsdn, 1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)1U, 1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)2U, 1);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)3U, 0);\n\twitness_unlock(&witness_tsdn, &b);\n\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\twitness_assert_depth_to_rank(&witness_tsdn, (witness_rank_t)1U, 0);\n}\nTEST_END\n\nTEST_BEGIN(test_witness_comp) {\n\twitness_t a, b, c, d;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_init(&a, \"a\", 1, witness_comp, &a);\n\twitness_assert_not_owner(&witness_tsdn, &a);\n\twitness_lock(&witness_tsdn, &a);\n\twitness_assert_owner(&witness_tsdn, &a);\n\twitness_assert_depth(&witness_tsdn, 1);\n\n\twitness_init(&b, \"b\", 1, witness_comp, &b);\n\twitness_assert_not_owner(&witness_tsdn, &b);\n\twitness_lock(&witness_tsdn, &b);\n\twitness_assert_owner(&witness_tsdn, &b);\n\twitness_assert_depth(&witness_tsdn, 2);\n\twitness_unlock(&witness_tsdn, &b);\n\twitness_assert_depth(&witness_tsdn, 1);\n\n\twitness_lock_error_orig = witness_lock_error;\n\twitness_lock_error = witness_lock_error_intercept;\n\tsaw_lock_error = false;\n\n\twitness_init(&c, \"c\", 1, witness_comp_reverse, 
&c);\n\twitness_assert_not_owner(&witness_tsdn, &c);\n\texpect_false(saw_lock_error, \"Unexpected witness lock error\");\n\twitness_lock(&witness_tsdn, &c);\n\texpect_true(saw_lock_error, \"Expected witness lock error\");\n\twitness_unlock(&witness_tsdn, &c);\n\twitness_assert_depth(&witness_tsdn, 1);\n\n\tsaw_lock_error = false;\n\n\twitness_init(&d, \"d\", 1, NULL, NULL);\n\twitness_assert_not_owner(&witness_tsdn, &d);\n\texpect_false(saw_lock_error, \"Unexpected witness lock error\");\n\twitness_lock(&witness_tsdn, &d);\n\texpect_true(saw_lock_error, \"Expected witness lock error\");\n\twitness_unlock(&witness_tsdn, &d);\n\twitness_assert_depth(&witness_tsdn, 1);\n\n\twitness_unlock(&witness_tsdn, &a);\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_lock_error = witness_lock_error_orig;\n}\nTEST_END\n\nTEST_BEGIN(test_witness_reversal) {\n\twitness_t a, b;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_lock_error_orig = witness_lock_error;\n\twitness_lock_error = witness_lock_error_intercept;\n\tsaw_lock_error = false;\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_init(&a, \"a\", 1, NULL, NULL);\n\twitness_init(&b, \"b\", 2, NULL, NULL);\n\n\twitness_lock(&witness_tsdn, &b);\n\twitness_assert_depth(&witness_tsdn, 1);\n\texpect_false(saw_lock_error, \"Unexpected witness lock error\");\n\twitness_lock(&witness_tsdn, &a);\n\texpect_true(saw_lock_error, \"Expected witness lock error\");\n\n\twitness_unlock(&witness_tsdn, &a);\n\twitness_assert_depth(&witness_tsdn, 1);\n\twitness_unlock(&witness_tsdn, &b);\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_lock_error = witness_lock_error_orig;\n}\nTEST_END\n\nTEST_BEGIN(test_witness_recursive) {\n\twitness_t a;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_not_owner_error_orig = witness_not_owner_error;\n\twitness_not_owner_error = 
witness_not_owner_error_intercept;\n\tsaw_not_owner_error = false;\n\n\twitness_lock_error_orig = witness_lock_error;\n\twitness_lock_error = witness_lock_error_intercept;\n\tsaw_lock_error = false;\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_init(&a, \"a\", 1, NULL, NULL);\n\n\twitness_lock(&witness_tsdn, &a);\n\texpect_false(saw_lock_error, \"Unexpected witness lock error\");\n\texpect_false(saw_not_owner_error, \"Unexpected witness not owner error\");\n\twitness_lock(&witness_tsdn, &a);\n\texpect_true(saw_lock_error, \"Expected witness lock error\");\n\texpect_true(saw_not_owner_error, \"Expected witness not owner error\");\n\n\twitness_unlock(&witness_tsdn, &a);\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_not_owner_error = witness_not_owner_error_orig;\n\twitness_lock_error = witness_lock_error_orig;\n}\nTEST_END\n\nTEST_BEGIN(test_witness_unlock_not_owned) {\n\twitness_t a;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_owner_error_orig = witness_owner_error;\n\twitness_owner_error = witness_owner_error_intercept;\n\tsaw_owner_error = false;\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_init(&a, \"a\", 1, NULL, NULL);\n\n\texpect_false(saw_owner_error, \"Unexpected owner error\");\n\twitness_unlock(&witness_tsdn, &a);\n\texpect_true(saw_owner_error, \"Expected owner error\");\n\n\twitness_assert_lockless(&witness_tsdn);\n\n\twitness_owner_error = witness_owner_error_orig;\n}\nTEST_END\n\nTEST_BEGIN(test_witness_depth) {\n\twitness_t a;\n\twitness_tsdn_t witness_tsdn = { WITNESS_TSD_INITIALIZER };\n\n\ttest_skip_if(!config_debug);\n\n\twitness_depth_error_orig = witness_depth_error;\n\twitness_depth_error = witness_depth_error_intercept;\n\tsaw_depth_error = false;\n\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\n\twitness_init(&a, \"a\", 1, NULL, NULL);\n\n\texpect_false(saw_depth_error, \"Unexpected depth 
error\");\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\n\twitness_lock(&witness_tsdn, &a);\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\texpect_true(saw_depth_error, \"Expected depth error\");\n\n\twitness_unlock(&witness_tsdn, &a);\n\n\twitness_assert_lockless(&witness_tsdn);\n\twitness_assert_depth(&witness_tsdn, 0);\n\n\twitness_depth_error = witness_depth_error_orig;\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_witness,\n\t    test_witness_comp,\n\t    test_witness_reversal,\n\t    test_witness_recursive,\n\t    test_witness_unlock_not_owned,\n\t    test_witness_depth);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic void\ntest_zero(size_t sz_min, size_t sz_max) {\n\tuint8_t *s;\n\tsize_t sz_prev, sz, i;\n#define MAGIC\t((uint8_t)0x61)\n\n\tsz_prev = 0;\n\ts = (uint8_t *)mallocx(sz_min, 0);\n\texpect_ptr_not_null((void *)s, \"Unexpected mallocx() failure\");\n\n\tfor (sz = sallocx(s, 0); sz <= sz_max;\n\t    sz_prev = sz, sz = sallocx(s, 0)) {\n\t\tif (sz_prev > 0) {\n\t\t\texpect_u_eq(s[0], MAGIC,\n\t\t\t    \"Previously allocated byte %zu/%zu is corrupted\",\n\t\t\t    ZU(0), sz_prev);\n\t\t\texpect_u_eq(s[sz_prev-1], MAGIC,\n\t\t\t    \"Previously allocated byte %zu/%zu is corrupted\",\n\t\t\t    sz_prev-1, sz_prev);\n\t\t}\n\n\t\tfor (i = sz_prev; i < sz; i++) {\n\t\t\texpect_u_eq(s[i], 0x0,\n\t\t\t    \"Newly allocated byte %zu/%zu isn't zero-filled\",\n\t\t\t    i, sz);\n\t\t\ts[i] = MAGIC;\n\t\t}\n\n\t\tif (xallocx(s, sz+1, 0, 0) == sz) {\n\t\t\ts = (uint8_t *)rallocx(s, sz+1, 0);\n\t\t\texpect_ptr_not_null((void *)s,\n\t\t\t    \"Unexpected rallocx() failure\");\n\t\t}\n\t}\n\n\tdallocx(s, 0);\n#undef MAGIC\n}\n\nTEST_BEGIN(test_zero_small) {\n\ttest_skip_if(!config_fill);\n\ttest_zero(1, SC_SMALL_MAXCLASS - 1);\n}\nTEST_END\n\nTEST_BEGIN(test_zero_large) {\n\ttest_skip_if(!config_fill);\n\ttest_zero(SC_SMALL_MAXCLASS + 1, 1U << (SC_LG_LARGE_MINCLASS + 1));\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_zero_small,\n\t    test_zero_large);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero.sh",
    "content": "#!/bin/sh\n\nif [ \"x${enable_fill}\" = \"x1\" ] ; then\n  export MALLOC_CONF=\"abort:false,junk:false,zero:true\"\nfi\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_abort.c",
    "content": "#include \"test/jemalloc_test.h\"\n\n#include <signal.h>\n\nstatic bool abort_called = false;\n\nvoid set_abort_called() {\n\tabort_called = true;\n};\n\nTEST_BEGIN(test_realloc_abort) {\n\tabort_called = false;\n\tsafety_check_set_abort(&set_abort_called);\n\tvoid *ptr = mallocx(42, 0);\n\texpect_ptr_not_null(ptr, \"Unexpected mallocx error\");\n\tptr = realloc(ptr, 0);\n\texpect_true(abort_called, \"Realloc with zero size didn't abort\");\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_realloc_abort);\n}\n\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_abort.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"zero_realloc:abort\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_alloc.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic uint64_t\nallocated() {\n\tif (!config_stats) {\n\t\treturn 0;\n\t}\n\tuint64_t allocated;\n\tsize_t sz = sizeof(allocated);\n\texpect_d_eq(mallctl(\"thread.allocated\", (void *)&allocated, &sz, NULL,\n\t    0), 0, \"Unexpected mallctl failure\");\n\treturn allocated;\n}\n\nstatic uint64_t\ndeallocated() {\n\tif (!config_stats) {\n\t\treturn 0;\n\t}\n\tuint64_t deallocated;\n\tsize_t sz = sizeof(deallocated);\n\texpect_d_eq(mallctl(\"thread.deallocated\", (void *)&deallocated, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\treturn deallocated;\n}\n\nTEST_BEGIN(test_realloc_alloc) {\n\tvoid *ptr = mallocx(1, 0);\n\texpect_ptr_not_null(ptr, \"Unexpected mallocx error\");\n\tuint64_t allocated_before = allocated();\n\tuint64_t deallocated_before = deallocated();\n\tptr = realloc(ptr, 0);\n\tuint64_t allocated_after = allocated();\n\tuint64_t deallocated_after = deallocated();\n\tif (config_stats) {\n\t\texpect_u64_lt(allocated_before, allocated_after,\n\t\t    \"Unexpected stats change\");\n\t\texpect_u64_lt(deallocated_before, deallocated_after,\n\t\t    \"Unexpected stats change\");\n\t}\n\tdallocx(ptr, 0);\n}\nTEST_END\nint\nmain(void) {\n\treturn test(\n\t    test_realloc_alloc);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_alloc.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"zero_realloc:alloc\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_free.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic uint64_t\ndeallocated() {\n\tif (!config_stats) {\n\t\treturn 0;\n\t}\n\tuint64_t deallocated;\n\tsize_t sz = sizeof(deallocated);\n\texpect_d_eq(mallctl(\"thread.deallocated\", (void *)&deallocated, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\treturn deallocated;\n}\n\nTEST_BEGIN(test_realloc_free) {\n\tvoid *ptr = mallocx(42, 0);\n\texpect_ptr_not_null(ptr, \"Unexpected mallocx error\");\n\tuint64_t deallocated_before = deallocated();\n\tptr = realloc(ptr, 0);\n\tuint64_t deallocated_after = deallocated();\n\texpect_ptr_null(ptr, \"Realloc didn't free\");\n\tif (config_stats) {\n\t\texpect_u64_gt(deallocated_after, deallocated_before,\n\t\t    \"Realloc didn't free\");\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\treturn test(\n\t    test_realloc_free);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_realloc_free.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"zero_realloc:free\"\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_reallocs.c",
    "content": "#include \"test/jemalloc_test.h\"\n\nstatic size_t\nzero_reallocs() {\n\tif (!config_stats) {\n\t\treturn 0;\n\t}\n\tsize_t count = 12345;\n\tsize_t sz = sizeof(count);\n\n\texpect_d_eq(mallctl(\"stats.zero_reallocs\", (void *)&count, &sz,\n\t    NULL, 0), 0, \"Unexpected mallctl failure\");\n\treturn count;\n}\n\nTEST_BEGIN(test_zero_reallocs) {\n\ttest_skip_if(!config_stats);\n\n\tfor (size_t i = 0; i < 100; ++i) {\n\t\tvoid *ptr = mallocx(i * i + 1, 0);\n\t\texpect_ptr_not_null(ptr, \"Unexpected mallocx error\");\n\t\tsize_t count = zero_reallocs();\n\t\texpect_zu_eq(i, count, \"Incorrect zero realloc count\");\n\t\tptr = realloc(ptr, 0);\n\t\texpect_ptr_null(ptr, \"Realloc didn't free\");\n\t\tcount = zero_reallocs();\n\t\texpect_zu_eq(i + 1, count, \"Realloc didn't adjust count\");\n\t}\n}\nTEST_END\n\nint\nmain(void) {\n\t/*\n\t * We expect explicit counts; reentrant tests run multiple times, so\n\t * counts leak across runs.\n\t */\n\treturn test_no_reentrancy(\n\t    test_zero_reallocs);\n}\n"
  },
  {
    "path": "deps/jemalloc/test/unit/zero_reallocs.sh",
    "content": "#!/bin/sh\n\nexport MALLOC_CONF=\"zero_realloc:free\"\n"
  },
  {
    "path": "deps/linenoise/.gitignore",
    "content": "linenoise_example\n*.dSYM\nhistory.txt\n"
  },
  {
    "path": "deps/linenoise/Makefile",
    "content": "STD=\nWARN= -Wall\nOPT= -Os\n\nR_CFLAGS= $(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS)\nR_LDFLAGS= $(LDFLAGS)\nDEBUG= -g\n\nR_CC=$(CC) $(R_CFLAGS)\nR_LD=$(CC) $(R_LDFLAGS)\n\nlinenoise.o: linenoise.h linenoise.c\n\nlinenoise_example: linenoise.o example.o\n\t$(R_LD) -o $@ $^\n\n.c.o:\n\t$(R_CC) -c $<\n\nclean:\n\trm -f linenoise_example *.o\n"
  },
  {
    "path": "deps/linenoise/README.markdown",
    "content": "# Linenoise\n\nA minimal, zero-config, BSD licensed, readline replacement used in Redis,\nMongoDB, and Android.\n\n* Single and multi line editing mode with the usual key bindings implemented.\n* History handling.\n* Completion.\n* Hints (suggestions at the right of the prompt as you type).\n* About 1,100 lines of BSD license source code.\n* Only uses a subset of VT100 escapes (ANSI.SYS compatible).\n\n## Can a line editing library be 20k lines of code?\n\nLine editing with some support for history is a really important feature for command line utilities. Instead of retyping almost the same stuff again and again it's just much better to hit the up arrow and edit on syntax errors, or in order to try a slightly different command. But apparently code dealing with terminals is some sort of Black Magic: readline is 30k lines of code, libedit 20k. Is it reasonable to link small utilities to huge libraries just to get a minimal support for line editing?\n\nSo what usually happens is either:\n\n * Large programs with configure scripts disabling line editing if readline is not present in the system, or not supporting it at all since readline is GPL licensed and libedit (the BSD clone) is not as known and available as readline is (Real world example of this problem: Tclsh).\n * Smaller programs not using a configure script not supporting line editing at all (A problem we had with Redis-cli for instance).\n \nThe result is a pollution of binaries without line editing support.\n\nSo I spent more or less two hours doing a reality check resulting in this little library: is it *really* needed for a line editing library to be 20k lines of code? Apparently not, it is possibe to get a very small, zero configuration, trivial to embed library, that solves the problem. Smaller programs will just include this, supporting line editing out of the box. 
Larger programs may use this little library or just check with configure whether readline/libedit is available, resorting to Linenoise if not.\n\n## Terminals, in 2010.\n\nApparently almost every terminal you can happen to use today has some kind of support for basic VT100 escape sequences. So I tried to write a lib using just very basic VT100 features. The resulting library appears to work everywhere I tried to use it, and now can work even on ANSI.SYS compatible terminals, since no\nVT220 specific sequences are used anymore.\n\nThe library is currently about 1100 lines of code. In order to use it in your project just look at the *example.c* file in the source distribution, it is trivial. Linenoise is BSD code, so you can use it both in free software and commercial software.\n\n## Tested with...\n\n * Linux text only console ($TERM = linux)\n * Linux KDE terminal application ($TERM = xterm)\n * Linux xterm ($TERM = xterm)\n * Linux Buildroot ($TERM = vt100)\n * Mac OS X iTerm ($TERM = xterm)\n * Mac OS X default Terminal.app ($TERM = xterm)\n * OpenBSD 4.5 through an OSX Terminal.app ($TERM = screen)\n * IBM AIX 6.1\n * FreeBSD xterm ($TERM = xterm)\n * ANSI.SYS\n * Emacs comint mode ($TERM = dumb)\n\nPlease test it everywhere you can and report back!\n\n## Let's push this forward!\n\nPatches should be provided respecting the Linenoise sensibility for small,\neasy to understand code.\n\nSend feedback to antirez at gmail\n\n# The API\n\nLinenoise is very easy to use, and reading the example shipped with the\nlibrary should get you up to speed ASAP. Here is a list of API calls\nand how to use them.\n\n    char *linenoise(const char *prompt);\n\nThis is the main Linenoise call: it shows the user a prompt with line editing\nand history capabilities. The prompt you specify is used as a prompt, that is,\nit will be printed to the left of the cursor. 
The library returns a buffer\nwith the line composed by the user, or NULL on end of file or when there\nis an out of memory condition.\n\nWhen a tty is detected (the user is actually typing into a terminal session)\nthe maximum editable line length is `LINENOISE_MAX_LINE`. When instead the\nstandard input is not a tty, which happens every time you redirect a file\nto a program, or use it in a Unix pipeline, there are no limits to the\nlength of the line that can be returned.\n\nThe returned line should be freed with the `free()` standard system call.\nHowever sometimes it could happen that your program uses a different dynamic\nallocation library, so you may also use `linenoiseFree` to make sure the\nline is freed with the same allocator it was created with.\n\nThe canonical loop used by a program using Linenoise will be something like\nthis:\n\n    while((line = linenoise(\"hello> \")) != NULL) {\n        printf(\"You wrote: %s\\n\", line);\n        linenoiseFree(line); /* Or just free(line) if you use libc malloc. */\n    }\n\n## Single line VS multi line editing\n\nBy default, Linenoise uses single line editing, that is, a single row on the\nscreen will be used, and as the user types more, the text will scroll towards\nthe left to make room. 
This works if your program is one where the user is\nunlikely to write a lot of text, otherwise multi line editing, where multiple\nscreen rows are used, can be a lot more comfortable.\n\nIn order to enable multi line editing use the following API call:\n\n    linenoiseSetMultiLine(1);\n\nYou can disable it using `0` as argument.\n\n## History\n\nLinenoise supports history, so that the user does not have to retype\nthe same things again and again, but can use the down and up arrows in order\nto search and re-edit already inserted lines of text.\n\nThe following are the history API calls:\n\n    int linenoiseHistoryAdd(const char *line, int is_sensitive);\n    int linenoiseHistorySetMaxLen(int len);\n    int linenoiseHistorySave(const char *filename);\n    int linenoiseHistoryLoad(const char *filename);\n\nUse `linenoiseHistoryAdd` every time you want to add a new element\nto the top of the history (it will be the first one the user will see when\nusing the up arrow).\n\nNote that for history to work, you have to set a length for the history\n(which is zero by default, so history will be disabled if you don't set\na proper one). This is accomplished using the `linenoiseHistorySetMaxLen`\nfunction.\n\nLinenoise has direct support for persisting the history into a history\nfile. The functions `linenoiseHistorySave` and `linenoiseHistoryLoad` do\njust that. Both functions return -1 on error and 0 on success.\n\n## Mask mode\n\nSometimes it is useful to allow the user to type passwords or other\nsecrets that should not be displayed. 
For such situations linenoise supports\na \"mask mode\" that will just replace the characters the user is typing\nwith `*` characters, like in the following example:\n\n    $ ./linenoise_example\n    hello> get mykey\n    echo: 'get mykey'\n    hello> /mask\n    hello> *********\n\nYou can enable and disable mask mode using the following two functions:\n\n    void linenoiseMaskModeEnable(void);\n    void linenoiseMaskModeDisable(void);\n\n## Completion\n\nLinenoise supports completion, which is the ability to complete the user\ninput when she or he presses the `<TAB>` key.\n\nIn order to use completion, you need to register a completion callback, which\nis called every time the user presses `<TAB>`. Your callback will return a\nlist of items that are completions for the current string.\n\nThe following is an example of registering a completion callback:\n\n    linenoiseSetCompletionCallback(completion);\n\nThe completion callback must be a function returning `void` and taking as input\na `const char` pointer, which is the line the user has typed so far, and\na `linenoiseCompletions` object pointer, which is used as the argument of\n`linenoiseAddCompletion` in order to add completions inside the callback.\nAn example will make it clearer:\n\n    void completion(const char *buf, linenoiseCompletions *lc) {\n        if (buf[0] == 'h') {\n            linenoiseAddCompletion(lc,\"hello\");\n            linenoiseAddCompletion(lc,\"hello there\");\n        }\n    }\n\nBasically in your completion callback, you inspect the input, and return\na list of items that are good completions by using `linenoiseAddCompletion`.\n\nIf you want to test the completion feature, compile the example program\nwith `make`, run it, type `h` and press `<TAB>`.\n\n## Hints\n\nLinenoise has a feature called *hints* which is very useful when you\nuse Linenoise in order to implement a REPL (Read Eval Print Loop) for\na program that accepts commands and arguments, but may also be useful in\nother 
conditions.\n\nThe feature shows, on the right of the cursor, as the user types, hints that\nmay be useful. The hints can be displayed using a different color from the\ntext the user is typing, and can also be bold.\n\nFor example as the user starts to type `\"git remote add\"`, with hints it's\npossible to show on the right of the prompt a string `<name> <url>`.\n\nThe feature works similarly to the completion feature, using a callback.\nTo register the callback we use:\n\n    linenoiseSetHintsCallback(hints);\n\nThe callback itself is implemented like this:\n\n    char *hints(const char *buf, int *color, int *bold) {\n        if (!strcasecmp(buf,\"git remote add\")) {\n            *color = 35;\n            *bold = 0;\n            return \" <name> <url>\";\n        }\n        return NULL;\n    }\n\nThe callback function returns the string that should be displayed or NULL\nif no hint is available for the text the user currently typed. The returned\nstring will be trimmed as needed depending on the number of columns available\non the screen.\n\nIt is possible to return a dynamically allocated string, by also registering\na function to deallocate the hint string once used:\n\n    void linenoiseSetFreeHintsCallback(linenoiseFreeHintsCallback *);\n\nThe free hint callback will just receive the pointer and free the string\nas needed (depending on how the hints callback allocated it).\n\nAs you can see in the example above, a `color` (in xterm color terminal codes)\ncan be provided together with a `bold` attribute. If no color is set, the\ncurrent terminal foreground color is used. If no bold attribute is set,\nnon-bold text is printed.\n\nColor codes are:\n\n    red = 31\n    green = 32\n    yellow = 33\n    blue = 34\n    magenta = 35\n    cyan = 36\n    white = 37\n\n## Screen handling\n\nSometimes you may want to clear the screen as a result of something the\nuser typed. 
You can do this by calling the following function:\n\n    void linenoiseClearScreen(void);\n\n## Related projects\n\n* [Linenoise NG](https://github.com/arangodb/linenoise-ng) is a fork of Linenoise that aims to add more advanced features like UTF-8 support and Windows support. It uses C++ instead of C as the development language.\n* [Linenoise-swift](https://github.com/andybest/linenoise-swift) is a reimplementation of Linenoise written in Swift.\n"
  },
  {
    "path": "deps/linenoise/example.c",
    "content": "#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"linenoise.h\"\n\n\nvoid completion(const char *buf, linenoiseCompletions *lc) {\n    if (buf[0] == 'h') {\n        linenoiseAddCompletion(lc,\"hello\");\n        linenoiseAddCompletion(lc,\"hello there\");\n    }\n}\n\nchar *hints(const char *buf, int *color, int *bold) {\n    if (!strcasecmp(buf,\"hello\")) {\n        *color = 35;\n        *bold = 0;\n        return \" World\";\n    }\n    return NULL;\n}\n\nint main(int argc, char **argv) {\n    char *line;\n    char *prgname = argv[0];\n\n    /* Parse options, with --multiline we enable multi line editing. */\n    while(argc > 1) {\n        argc--;\n        argv++;\n        if (!strcmp(*argv,\"--multiline\")) {\n            linenoiseSetMultiLine(1);\n            printf(\"Multi-line mode enabled.\\n\");\n        } else if (!strcmp(*argv,\"--keycodes\")) {\n            linenoisePrintKeyCodes();\n            exit(0);\n        } else {\n            fprintf(stderr, \"Usage: %s [--multiline] [--keycodes]\\n\", prgname);\n            exit(1);\n        }\n    }\n\n    /* Set the completion callback. This will be called every time the\n     * user uses the <tab> key. */\n    linenoiseSetCompletionCallback(completion);\n    linenoiseSetHintsCallback(hints);\n\n    /* Load history from file. The history file is just a plain text file\n     * where entries are separated by newlines. */\n    linenoiseHistoryLoad(\"history.txt\"); /* Load the history at startup */\n\n    /* Now this is the main loop of the typical linenoise-based application.\n     * The call to linenoise() will block as long as the user types something\n     * and presses enter.\n     *\n     * The typed string is returned as a malloc() allocated string by\n     * linenoise, so the user needs to free() it. */\n    \n    while((line = linenoise(\"hello> \")) != NULL) {\n        /* Do something with the string. 
*/\n        if (line[0] != '\\0' && line[0] != '/') {\n            printf(\"echo: '%s'\\n\", line);\n            linenoiseHistoryAdd(line, 0); /* Add to the history (not sensitive). */\n            linenoiseHistorySave(\"history.txt\"); /* Save the history on disk. */\n        } else if (!strncmp(line,\"/historylen\",11)) {\n            /* The \"/historylen\" command will change the history len. */\n            int len = atoi(line+11);\n            linenoiseHistorySetMaxLen(len);\n        } else if (!strncmp(line, \"/mask\", 5)) {\n            linenoiseMaskModeEnable();\n        } else if (!strncmp(line, \"/unmask\", 7)) {\n            linenoiseMaskModeDisable();\n        } else if (line[0] == '/') {\n            printf(\"Unrecognized command: %s\\n\", line);\n        }\n        free(line);\n    }\n    return 0;\n}\n"
  },
  {
    "path": "deps/linenoise/linenoise.c",
    "content": "/* linenoise.c -- guerrilla line editing library against the idea that a\n * line editing lib needs to be 20,000 lines of C code.\n *\n * You can find the latest source code at:\n *\n *   http://github.com/antirez/linenoise\n *\n * Does a number of crazy assumptions that happen to be true in 99.9999% of\n * the 2010 UNIX computers around.\n *\n * ------------------------------------------------------------------------\n *\n * Copyright (c) 2010-2016, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2013, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are\n * met:\n *\n *  *  Redistributions of source code must retain the above copyright\n *     notice, this list of conditions and the following disclaimer.\n *\n *  *  Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * ------------------------------------------------------------------------\n *\n * References:\n * - http://invisible-island.net/xterm/ctlseqs/ctlseqs.html\n * - http://www.3waylabs.com/nw/WWW/products/wizcon/vt220.html\n *\n * Todo list:\n * - Filter bogus Ctrl+<char> combinations.\n * - Win32 support\n *\n * Bloat:\n * - History search like Ctrl+r in readline?\n *\n * List of escape sequences used by this program, we do everything just\n * with three sequences. In order to be so cheap we may have some\n * flickering effect with some slow terminal, but the lesser sequences\n * the more compatible.\n *\n * EL (Erase Line)\n *    Sequence: ESC [ n K\n *    Effect: if n is 0 or missing, clear from cursor to end of line\n *    Effect: if n is 1, clear from beginning of line to cursor\n *    Effect: if n is 2, clear entire line\n *\n * CUF (CUrsor Forward)\n *    Sequence: ESC [ n C\n *    Effect: moves cursor forward n chars\n *\n * CUB (CUrsor Backward)\n *    Sequence: ESC [ n D\n *    Effect: moves cursor backward n chars\n *\n * The following is used to get the terminal width if getting\n * the width with the TIOCGWINSZ ioctl fails\n *\n * DSR (Device Status Report)\n *    Sequence: ESC [ 6 n\n *    Effect: reports the current cusor position as ESC [ n ; m R\n *            where n is the row and m is the column\n *\n * When multi line mode is enabled, we also use an additional escape\n * sequence. 
However multi line editing is disabled by default.\n *\n * CUU (Cursor Up)\n *    Sequence: ESC [ n A\n *    Effect: moves cursor up of n chars.\n *\n * CUD (Cursor Down)\n *    Sequence: ESC [ n B\n *    Effect: moves cursor down of n chars.\n *\n * When linenoiseClearScreen() is called, two additional escape sequences\n * are used in order to clear the screen and position the cursor at home\n * position.\n *\n * CUP (Cursor position)\n *    Sequence: ESC [ H\n *    Effect: moves the cursor to upper left corner\n *\n * ED (Erase display)\n *    Sequence: ESC [ 2 J\n *    Effect: clear the whole screen\n *\n */\n\n#define _DEFAULT_SOURCE /* For fchmod() */\n#define _BSD_SOURCE     /* For fchmod() */\n#include <termios.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <errno.h>\n#include <string.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <sys/ioctl.h>\n#include <unistd.h>\n#include <assert.h>\n#include \"linenoise.h\"\n\n#define SEQ_BUFFER_MAX_LENGTH 8\n#define LINENOISE_DEFAULT_HISTORY_MAX_LEN 100\n#define LINENOISE_MAX_LINE 4096\nstatic char *unsupported_term[] = {\"dumb\",\"cons25\",\"emacs\",NULL};\nstatic linenoiseCompletionCallback *completionCallback = NULL;\nstatic linenoiseHintsCallback *hintsCallback = NULL;\nstatic linenoiseFreeHintsCallback *freeHintsCallback = NULL;\n\nstatic struct termios orig_termios; /* In order to restore at exit.*/\nstatic int maskmode = 0; /* Show \"***\" instead of input. For passwords. */\nstatic int rawmode = 0; /* For atexit() function to check if restore is needed*/\nstatic int mlmode = 0;  /* Multi line mode. Default is single line. */\nstatic int atexit_registered = 0; /* Register atexit just 1 time. 
*/\nstatic int history_max_len = LINENOISE_DEFAULT_HISTORY_MAX_LEN;\nstatic int history_len = 0;\nstatic char **history = NULL;\nstatic int *history_sensitive = NULL; /* An array records whether each line in\n                                       * history is sensitive. */\n\nstatic int reverse_search_mode_enabled = 0;\nstatic int reverse_search_direction = 0; /* 1 means forward, -1 means backward. */\nstatic int cycle_to_next_search = 0; /* indicates whether to continue the search with CTRL+S or CTRL+R. */\nstatic char search_result[LINENOISE_MAX_LINE];\nstatic char search_result_friendly[LINENOISE_MAX_LINE];\nstatic int search_result_history_index = 0;\nstatic int search_result_start_offset = 0;\nstatic int ignore_once_hint = 0; /* Flag to ignore hint once, preventing it from interfering\n                                  * with search results right after exiting search mode. */\n\n/* The linenoiseState structure represents the state during line editing.\n * We pass this state to functions implementing specific editing\n * functionalities. */\nstruct linenoiseState {\n    int ifd;            /* Terminal stdin file descriptor. */\n    int ofd;            /* Terminal stdout file descriptor. */\n    char *buf;          /* Edited line buffer. */\n    size_t buflen;      /* Edited line buffer size. */\n    const char *origin_prompt; /* Original prompt, used to restore when exiting search mode. */\n    const char *prompt; /* Prompt to display. */\n    size_t plen;        /* Prompt length. */\n    size_t pos;         /* Current cursor position. */\n    size_t oldpos;      /* Previous refresh cursor position. */\n    size_t len;         /* Current edited line length. */\n    size_t cols;        /* Number of columns in terminal. */\n    size_t maxrows;     /* Maximum num of rows used so far (multiline mode) */\n    int history_index;  /* The history index we are currently editing. */\n};\n\ntypedef struct {\n    int len;                /* Length of the result string. 
*/\n    char *result;           /* Search result string. */\n    int search_term_index;  /* Position of the search term in the history record. */\n    int search_term_len;    /* Length of the search term. */\n} linenoiseHistorySearchResult;\n\nenum KEY_ACTION{\n\tKEY_NULL = 0,\t    /* NULL */\n\tCTRL_A = 1,         /* Ctrl+a */\n\tCTRL_B = 2,         /* Ctrl-b */\n\tCTRL_C = 3,         /* Ctrl-c */\n\tCTRL_D = 4,         /* Ctrl-d */\n\tCTRL_E = 5,         /* Ctrl-e */\n\tCTRL_F = 6,         /* Ctrl-f */\n\tCTRL_G = 7,         /* Ctrl-g */\n\tCTRL_H = 8,         /* Ctrl-h */\n\tTAB = 9,            /* Tab */\n\tNL = 10,            /* Enter typed before raw mode was enabled */\n\tCTRL_K = 11,        /* Ctrl+k */\n\tCTRL_L = 12,        /* Ctrl+l */\n\tENTER = 13,         /* Enter */\n\tCTRL_N = 14,        /* Ctrl-n */\n\tCTRL_P = 16,        /* Ctrl-p */\n\tCTRL_R = 18,        /* Ctrl-r */\n\tCTRL_S = 19,        /* Ctrl-s */\n\tCTRL_T = 20,        /* Ctrl-t */\n\tCTRL_U = 21,        /* Ctrl+u */\n\tCTRL_W = 23,        /* Ctrl+w */\n\tESC = 27,           /* Escape */\n\tBACKSPACE =  127    /* Backspace */\n};\n\nstatic void linenoiseAtExit(void);\nint linenoiseHistoryAdd(const char *line, int is_sensitive);\nstatic void refreshLine(struct linenoiseState *l);\nstatic void refreshSearchResult(struct linenoiseState *ls);\n\nstatic inline void resetSearchResult(void) {\n    memset(search_result, 0, sizeof(search_result));\n    memset(search_result_friendly, 0, sizeof(search_result_friendly));\n}\n\n/* Debugging macro. */\n#if 0\nFILE *lndebug_fp = NULL;\n#define lndebug(...) 
\\\n    do { \\\n        if (lndebug_fp == NULL) { \\\n            lndebug_fp = fopen(\"/tmp/lndebug.txt\",\"a\"); \\\n            fprintf(lndebug_fp, \\\n            \"[%d %d %d] p: %d, rows: %d, rpos: %d, max: %d, oldmax: %d\\n\", \\\n            (int)l->len,(int)l->pos,(int)l->oldpos,plen,rows,rpos, \\\n            (int)l->maxrows,old_rows); \\\n        } \\\n        fprintf(lndebug_fp, \", \" __VA_ARGS__); \\\n        fflush(lndebug_fp); \\\n    } while (0)\n#else\n#define lndebug(fmt, ...)\n#endif\n\n/* ======================= Low level terminal handling ====================== */\n\n/* Enable \"mask mode\". When it is enabled, instead of the input that\n * the user is typing, the terminal will just display a corresponding\n * number of asterisks, like \"****\". This is useful for passwords and other\n * secrets that should not be displayed. */\nvoid linenoiseMaskModeEnable(void) {\n    maskmode = 1;\n}\n\n/* Disable mask mode. */\nvoid linenoiseMaskModeDisable(void) {\n    maskmode = 0;\n}\n\n/* Set if to use or not the multi line mode. */\nvoid linenoiseSetMultiLine(int ml) {\n    mlmode = ml;\n}\n\n#define REVERSE_SEARCH_PROMPT(direction) ((direction) == -1 ? \"(reverse-i-search): \" : \"(i-search): \")\n\n/* Enables the reverse search mode and refreshes the prompt. */\nstatic void enableReverseSearchMode(struct linenoiseState *l) {\n    assert(reverse_search_mode_enabled != 1);\n    reverse_search_mode_enabled = 1;\n    l->origin_prompt = l->prompt;\n    l->prompt = REVERSE_SEARCH_PROMPT(reverse_search_direction);\n    refreshLine(l);\n}\n\n/* This function disables the reverse search mode and returns the terminal to its original state.\n * If the 'discard' parameter is true, it discards the user's input search keyword and search result.\n * Otherwise, it copies the search result into 'buf', If there is no search result, it copies the\n * input search keyword instead. 
*/\nstatic void disableReverseSearchMode(struct linenoiseState *l, char *buf, size_t buflen, int discard) {\n    if (discard) {\n        buf[0] = '\\0';\n        l->pos = l->len = 0;\n    } else {\n        ignore_once_hint = 1;\n        if (strlen(search_result)) {\n            strncpy(buf, search_result, buflen);\n            buf[buflen-1] = '\\0';\n            l->pos = l->len = strlen(buf);\n        }\n    }\n\n    /* Reset the state to non-search state. */\n    reverse_search_mode_enabled = 0;\n    l->prompt = l->origin_prompt;\n    resetSearchResult();\n    refreshLine(l);\n}\n\n/* Return true if the terminal name is in the list of terminals we know are\n * not able to understand basic escape sequences. */\nstatic int isUnsupportedTerm(void) {\n    char *term = getenv(\"TERM\");\n    int j;\n\n    if (term == NULL) return 0;\n    for (j = 0; unsupported_term[j]; j++)\n        if (!strcasecmp(term,unsupported_term[j])) return 1;\n    return 0;\n}\n\n/* Raw mode: 1960's magic. */\nstatic int enableRawMode(int fd) {\n    if (getenv(\"FAKETTY_WITH_PROMPT\") != NULL) {\n        return 0;\n    }\n\n    struct termios raw;\n\n    if (!isatty(STDIN_FILENO)) goto fatal;\n    if (!atexit_registered) {\n        atexit(linenoiseAtExit);\n        atexit_registered = 1;\n    }\n    if (tcgetattr(fd,&orig_termios) == -1) goto fatal;\n\n    raw = orig_termios;  /* modify the original mode */\n    /* input modes: no break, no CR to NL, no parity check, no strip char,\n     * no start/stop output control. 
*/\n    raw.c_iflag &= ~(BRKINT | ICRNL | INPCK | ISTRIP | IXON);\n    /* output modes - disable post processing */\n    raw.c_oflag &= ~(OPOST);\n    /* control modes - set 8 bit chars */\n    raw.c_cflag |= (CS8);\n    /* local modes - choing off, canonical off, no extended functions,\n     * no signal chars (^Z,^C) */\n    raw.c_lflag &= ~(ECHO | ICANON | IEXTEN | ISIG);\n    /* control chars - set return condition: min number of bytes and timer.\n     * We want read to return every single byte, without timeout. */\n    raw.c_cc[VMIN] = 1; raw.c_cc[VTIME] = 0; /* 1 byte, no timer */\n\n    /* put terminal in raw mode */\n    if (tcsetattr(fd,TCSANOW,&raw) < 0) goto fatal;\n    rawmode = 1;\n    return 0;\n\nfatal:\n    errno = ENOTTY;\n    return -1;\n}\n\nstatic void disableRawMode(int fd) {\n    /* Don't even check the return value as it's too late. */\n    if (rawmode && tcsetattr(fd,TCSANOW,&orig_termios) != -1)\n        rawmode = 0;\n}\n\n/* Use the ESC [6n escape sequence to query the horizontal cursor position\n * and return it. On error -1 is returned, on success the position of the\n * cursor. */\nstatic int getCursorPosition(int ifd, int ofd) {\n    char buf[32];\n    int cols, rows;\n    unsigned int i = 0;\n\n    /* Report cursor location */\n    if (write(ofd, \"\\x1b[6n\", 4) != 4) return -1;\n\n    /* Read the response: ESC [ rows ; cols R */\n    while (i < sizeof(buf)-1) {\n        if (read(ifd,buf+i,1) != 1) break;\n        if (buf[i] == 'R') break;\n        i++;\n    }\n    buf[i] = '\\0';\n\n    /* Parse it. */\n    if (buf[0] != ESC || buf[1] != '[') return -1;\n    if (sscanf(buf+2,\"%d;%d\",&rows,&cols) != 2) return -1;\n    return cols;\n}\n\n/* Try to get the number of columns in the current terminal, or assume 80\n * if it fails. 
*/\nstatic int getColumns(int ifd, int ofd) {\n    if (getenv(\"FAKETTY_WITH_PROMPT\") != NULL) {\n        goto failed;\n    }\n    struct winsize ws;\n\n    if (ioctl(1, TIOCGWINSZ, &ws) == -1 || ws.ws_col == 0) {\n        /* ioctl() failed. Try to query the terminal itself. */\n        int start, cols;\n\n        /* Get the initial position so we can restore it later. */\n        start = getCursorPosition(ifd,ofd);\n        if (start == -1) goto failed;\n\n        /* Go to right margin and get position. */\n        if (write(ofd,\"\\x1b[999C\",6) != 6) goto failed;\n        cols = getCursorPosition(ifd,ofd);\n        if (cols == -1) goto failed;\n\n        /* Restore position. */\n        if (cols > start) {\n            char seq[32];\n            snprintf(seq,32,\"\\x1b[%dD\",cols-start);\n            if (write(ofd,seq,strlen(seq)) == -1) {\n                /* Can't recover... */\n            }\n        }\n        return cols;\n    } else {\n        return ws.ws_col;\n    }\n\nfailed:\n    return 80;\n}\n\n/* Clear the screen. Used to handle ctrl+l */\nvoid linenoiseClearScreen(void) {\n    if (write(STDOUT_FILENO,\"\\x1b[H\\x1b[2J\",7) <= 0) {\n        /* nothing to do, just to avoid warning. */\n    }\n}\n\n/* Beep, used for completion when there is nothing to complete or when all\n * the choices were already shown. */\nstatic void linenoiseBeep(void) {\n    fprintf(stderr, \"\\x7\");\n    fflush(stderr);\n}\n\n/* ============================== Completion ================================ */\n\n/* Free a list of completion option populated by linenoiseAddCompletion(). 
*/\nstatic void freeCompletions(linenoiseCompletions *lc) {\n    size_t i;\n    for (i = 0; i < lc->len; i++)\n        free(lc->cvec[i]);\n    if (lc->cvec != NULL)\n        free(lc->cvec);\n}\n\n/* This is an helper function for linenoiseEdit() and is called when the\n * user types the <tab> key in order to complete the string currently in the\n * input.\n *\n * The state of the editing is encapsulated into the pointed linenoiseState\n * structure as described in the structure definition. */\nstatic int completeLine(struct linenoiseState *ls) {\n    linenoiseCompletions lc = { 0, NULL };\n    int nread, nwritten;\n    char c = 0;\n\n    completionCallback(ls->buf,&lc);\n    if (lc.len == 0) {\n        linenoiseBeep();\n    } else {\n        size_t stop = 0, i = 0;\n\n        while(!stop) {\n            /* Show completion or original buffer */\n            if (i < lc.len) {\n                struct linenoiseState saved = *ls;\n\n                ls->len = ls->pos = strlen(lc.cvec[i]);\n                ls->buf = lc.cvec[i];\n                refreshLine(ls);\n                ls->len = saved.len;\n                ls->pos = saved.pos;\n                ls->buf = saved.buf;\n            } else {\n                refreshLine(ls);\n            }\n\n            nread = read(ls->ifd,&c,1);\n            if (nread <= 0) {\n                freeCompletions(&lc);\n                return -1;\n            }\n\n            switch(c) {\n                case 9: /* tab */\n                    i = (i+1) % (lc.len+1);\n                    if (i == lc.len) linenoiseBeep();\n                    break;\n                case 27: /* escape */\n                    /* Re-show original buffer */\n                    if (i < lc.len) refreshLine(ls);\n                    stop = 1;\n                    break;\n                default:\n                    /* Update buffer and return */\n                    if (i < lc.len) {\n                        nwritten = 
snprintf(ls->buf,ls->buflen,\"%s\",lc.cvec[i]);\n                        ls->len = ls->pos = nwritten;\n                    }\n                    stop = 1;\n                    break;\n            }\n        }\n    }\n\n    freeCompletions(&lc);\n    return c; /* Return last read character */\n}\n\n/* Register a callback function to be called for tab-completion. */\nvoid linenoiseSetCompletionCallback(linenoiseCompletionCallback *fn) {\n    completionCallback = fn;\n}\n\n/* Register a hints function to be called to show hints to the user at the\n * right of the prompt. */\nvoid linenoiseSetHintsCallback(linenoiseHintsCallback *fn) {\n    hintsCallback = fn;\n}\n\n/* Register a function to free the hints returned by the hints callback\n * registered with linenoiseSetHintsCallback(). */\nvoid linenoiseSetFreeHintsCallback(linenoiseFreeHintsCallback *fn) {\n    freeHintsCallback = fn;\n}\n\n/* This function is used by the callback function registered by the user\n * in order to add completion options given the input string when the\n * user typed <tab>. See the example.c source code for a very easy to\n * understand example. */\nvoid linenoiseAddCompletion(linenoiseCompletions *lc, const char *str) {\n    size_t len = strlen(str);\n    char *copy, **cvec;\n\n    copy = malloc(len+1);\n    if (copy == NULL) return;\n    memcpy(copy,str,len+1);\n    cvec = realloc(lc->cvec,sizeof(char*)*(lc->len+1));\n    if (cvec == NULL) {\n        free(copy);\n        return;\n    }\n    lc->cvec = cvec;\n    lc->cvec[lc->len++] = copy;\n}\n\n/* =========================== Line editing ================================= */\n\n/* We define a very simple \"append buffer\" structure, that is a heap\n * allocated string that we can append to. This is useful in order to\n * write all the escape sequences in a buffer and flush them to the standard\n * output in a single call, to avoid flickering effects. 
*/\nstruct abuf {\n    char *b;\n    int len;\n};\n\nstatic void abInit(struct abuf *ab) {\n    ab->b = NULL;\n    ab->len = 0;\n}\n\nstatic void abAppend(struct abuf *ab, const char *s, int len) {\n    char *new = realloc(ab->b,ab->len+len);\n\n    if (new == NULL) return;\n    memcpy(new+ab->len,s,len);\n    ab->b = new;\n    ab->len += len;\n}\n\nstatic void abFree(struct abuf *ab) {\n    free(ab->b);\n}\n\n/* Helper of refreshSingleLine() and refreshMultiLine() to show hints\n * to the right of the prompt. */\nvoid refreshShowHints(struct abuf *ab, struct linenoiseState *l, int plen) {\n    char seq[64];\n\n    /* Show hints when not in reverse search mode and not instructed to ignore once. */\n    if (reverse_search_mode_enabled || ignore_once_hint) {\n        ignore_once_hint = 0;\n        return;\n    }\n\n    if (hintsCallback && plen+l->len < l->cols) {\n        int color = -1, bold = 0;\n        char *hint = hintsCallback(l->buf,&color,&bold);\n        if (hint) {\n            int hintlen = strlen(hint);\n            int hintmaxlen = l->cols-(plen+l->len);\n            if (hintlen > hintmaxlen) hintlen = hintmaxlen;\n            if (bold == 1 && color == -1) color = 37;\n            if (color != -1 || bold != 0)\n                snprintf(seq,64,\"\\033[%d;%d;49m\",bold,color);\n            else\n                seq[0] = '\\0';\n            abAppend(ab,seq,strlen(seq));\n            abAppend(ab,hint,hintlen);\n            if (color != -1 || bold != 0)\n                abAppend(ab,\"\\033[0m\",4);\n            /* Call the function to free the hint returned. */\n            if (freeHintsCallback) freeHintsCallback(hint);\n        }\n    }\n}\n\n/* Single line low level line refresh.\n *\n * Rewrite the currently edited line accordingly to the buffer content,\n * cursor position, and number of columns of the terminal. 
*/\nstatic void refreshSingleLine(struct linenoiseState *l) {\n    char seq[64];\n    size_t plen = strlen(l->prompt);\n    int fd = l->ofd;\n    char *buf = l->buf;\n    size_t len = l->len;\n    size_t pos = l->pos;\n    struct abuf ab;\n\n    while((plen+pos) >= l->cols) {\n        buf++;\n        len--;\n        pos--;\n    }\n    while (plen+len > l->cols) {\n        len--;\n    }\n\n    abInit(&ab);\n    /* Cursor to left edge */\n    snprintf(seq,64,\"\\r\");\n    abAppend(&ab,seq,strlen(seq));\n    /* Write the prompt and the current buffer content */\n    abAppend(&ab,l->prompt,strlen(l->prompt));\n    if (maskmode == 1) {\n        while (len--) abAppend(&ab,\"*\",1);\n    } else {\n        abAppend(&ab,buf,len);\n    }\n    /* Show hits if any. */\n    refreshShowHints(&ab,l,plen);\n    /* Erase to right */\n    snprintf(seq,64,\"\\x1b[0K\");\n    abAppend(&ab,seq,strlen(seq));\n    /* Move cursor to original position. */\n    snprintf(seq,64,\"\\r\\x1b[%dC\", (int)(pos+plen));\n    abAppend(&ab,seq,strlen(seq));\n    if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */\n    abFree(&ab);\n}\n\n/* Multi line low level line refresh.\n *\n * Rewrite the currently edited line accordingly to the buffer content,\n * cursor position, and number of columns of the terminal. */\nstatic void refreshMultiLine(struct linenoiseState *l) {\n    char seq[64];\n    int plen = strlen(l->prompt);\n    int rows = (plen+l->len+l->cols-1)/l->cols; /* rows used by current buf. */\n    int rpos = (plen+l->oldpos+l->cols)/l->cols; /* cursor relative row. */\n    int rpos2; /* rpos after refresh. */\n    int col; /* colum position, zero-based. */\n    int old_rows = l->maxrows;\n    int fd = l->ofd, j;\n    struct abuf ab;\n\n    /* Update maxrows if needed. */\n    if (rows > (int)l->maxrows) l->maxrows = rows;\n\n    /* First step: clear all the lines used before. To do so start by\n     * going to the last row. 
*/\n    abInit(&ab);\n    if (old_rows-rpos > 0) {\n        lndebug(\"go down %d\", old_rows-rpos);\n        snprintf(seq,64,\"\\x1b[%dB\", old_rows-rpos);\n        abAppend(&ab,seq,strlen(seq));\n    }\n\n    /* Now for every row clear it, go up. */\n    for (j = 0; j < old_rows-1; j++) {\n        lndebug(\"clear+up\");\n        snprintf(seq,64,\"\\r\\x1b[0K\\x1b[1A\");\n        abAppend(&ab,seq,strlen(seq));\n    }\n\n    /* Clean the top line. */\n    lndebug(\"clear\");\n    snprintf(seq,64,\"\\r\\x1b[0K\");\n    abAppend(&ab,seq,strlen(seq));\n\n    /* Write the prompt and the current buffer content */\n    abAppend(&ab,l->prompt,strlen(l->prompt));\n    if (maskmode == 1) {\n        unsigned int i;\n        for (i = 0; i < l->len; i++) abAppend(&ab,\"*\",1);\n    } else {\n        refreshSearchResult(l);\n        if (strlen(search_result) > 0) {\n            abAppend(&ab, search_result_friendly, strlen(search_result_friendly));\n        } else {\n            abAppend(&ab,l->buf,l->len);\n        }\n    }\n\n    /* Show hints if any. */\n    refreshShowHints(&ab,l,plen);\n\n    /* If we are at the very end of the screen with our prompt, we need to\n     * emit a newline and move the prompt to the first column. */\n    if (l->pos &&\n        l->pos == l->len &&\n        (l->pos+plen) % l->cols == 0)\n    {\n        lndebug(\"<newline>\");\n        abAppend(&ab,\"\\n\",1);\n        snprintf(seq,64,\"\\r\");\n        abAppend(&ab,seq,strlen(seq));\n        rows++;\n        if (rows > (int)l->maxrows) l->maxrows = rows;\n    }\n\n    /* Move cursor to right position. */\n    rpos2 = (plen+l->pos+l->cols)/l->cols; /* current cursor relative row. */\n    lndebug(\"rpos2 %d\", rpos2);\n\n    /* Go up till we reach the expected position. */\n    if (rows-rpos2 > 0) {\n        lndebug(\"go-up %d\", rows-rpos2);\n        snprintf(seq,64,\"\\x1b[%dA\", rows-rpos2);\n        abAppend(&ab,seq,strlen(seq));\n    }\n\n    /* Set column. 
*/\n    col = (plen+(int)l->pos) % (int)l->cols;\n    if (strlen(search_result) > 0) {\n        col += search_result_start_offset;\n    }\n    lndebug(\"set col %d\", 1+col);\n    if (col)\n        snprintf(seq,64,\"\\r\\x1b[%dC\", col);\n    else\n        snprintf(seq,64,\"\\r\");\n    abAppend(&ab,seq,strlen(seq));\n\n    lndebug(\"\\n\");\n    l->oldpos = l->pos;\n\n    if (write(fd,ab.b,ab.len) == -1) {} /* Can't recover from write error. */\n    abFree(&ab);\n}\n\n/* Calls the two low level functions refreshSingleLine() or\n * refreshMultiLine() according to the selected mode. */\nstatic void refreshLine(struct linenoiseState *l) {\n    if (mlmode)\n        refreshMultiLine(l);\n    else\n        refreshSingleLine(l);\n}\n\n/* Insert the character 'c' at cursor current position.\n *\n * On error writing to the terminal -1 is returned, otherwise 0. */\nint linenoiseEditInsert(struct linenoiseState *l, char c) {\n    if (l->len < l->buflen) {\n        if (l->len == l->pos) {\n            l->buf[l->pos] = c;\n            l->pos++;\n            l->len++;\n            l->buf[l->len] = '\\0';\n            if ((!mlmode && l->plen+l->len < l->cols && !hintsCallback)) {\n                /* Avoid a full update of the line in the\n                 * trivial case. */\n                char d = (maskmode==1) ? '*' : c;\n                if (write(l->ofd,&d,1) == -1) return -1;\n            } else {\n                refreshLine(l);\n            }\n        } else {\n            memmove(l->buf+l->pos+1,l->buf+l->pos,l->len-l->pos);\n            l->buf[l->pos] = c;\n            l->len++;\n            l->pos++;\n            l->buf[l->len] = '\\0';\n            refreshLine(l);\n        }\n    }\n    return 0;\n}\n\n/* Move cursor on the left. */\nvoid linenoiseEditMoveLeft(struct linenoiseState *l) {\n    if (l->pos > 0) {\n        l->pos--;\n        refreshLine(l);\n    }\n}\n\n/* Move cursor on the right. 
*/\nvoid linenoiseEditMoveRight(struct linenoiseState *l) {\n    if (l->pos != l->len) {\n        l->pos++;\n        refreshLine(l);\n    }\n}\n\n/* Consider letters/digits/underscore as “word”; others as delimiters. */\nstatic int isWordChar(char c) {\n    return (c == '_' || (c >= '0' && c <= '9') ||\n            (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z'));\n}\n\nstatic void linenoiseEditMoveWordLeft(struct linenoiseState *l) {\n    if (l->pos == 0) return;\n    /* Skip any delimiters, then move left over the previous word */\n    while (l->pos > 0 && !isWordChar(l->buf[l->pos - 1])) l->pos--;\n    /* Then move to the start of that word */\n    while (l->pos > 0 && isWordChar(l->buf[l->pos - 1])) l->pos--;\n    refreshLine(l);\n}\n\nstatic void linenoiseEditMoveWordRight(struct linenoiseState *l) {\n    if (l->pos == l->len) return;\n    /* Skip the current word to the right */\n    while (l->pos < l->len && isWordChar(l->buf[l->pos])) l->pos++;\n    /* Then skip any delimiters to reach the next word */\n    while (l->pos < l->len && !isWordChar(l->buf[l->pos])) l->pos++;\n    refreshLine(l);\n}\n\n/* Move cursor to the start of the line. */\nvoid linenoiseEditMoveHome(struct linenoiseState *l) {\n    if (l->pos != 0) {\n        l->pos = 0;\n        refreshLine(l);\n    }\n}\n\n/* Move cursor to the end of the line. */\nvoid linenoiseEditMoveEnd(struct linenoiseState *l) {\n    if (l->pos != l->len) {\n        l->pos = l->len;\n        refreshLine(l);\n    }\n}\n\n/* Substitute the currently edited line with the next or previous history\n * entry as specified by 'dir'. */\n#define LINENOISE_HISTORY_NEXT 0\n#define LINENOISE_HISTORY_PREV 1\nvoid linenoiseEditHistoryNext(struct linenoiseState *l, int dir) {\n    if (history_len > 1) {\n        /* Update the current history entry before to\n         * overwrite it with the next one. 
*/\n        free(history[history_len - 1 - l->history_index]);\n        history[history_len - 1 - l->history_index] = strdup(l->buf);\n        /* Show the new entry */\n        l->history_index += (dir == LINENOISE_HISTORY_PREV) ? 1 : -1;\n        if (l->history_index < 0) {\n            l->history_index = 0;\n            return;\n        } else if (l->history_index >= history_len) {\n            l->history_index = history_len-1;\n            return;\n        }\n        strncpy(l->buf,history[history_len - 1 - l->history_index],l->buflen);\n        l->buf[l->buflen-1] = '\\0';\n        l->len = l->pos = strlen(l->buf);\n        refreshLine(l);\n    }\n}\n\n/* Delete the character at the right of the cursor without altering the cursor\n * position. Basically this is what happens with the \"Delete\" keyboard key. */\nvoid linenoiseEditDelete(struct linenoiseState *l) {\n    if (l->len > 0 && l->pos < l->len) {\n        memmove(l->buf+l->pos,l->buf+l->pos+1,l->len-l->pos-1);\n        l->len--;\n        l->buf[l->len] = '\\0';\n        refreshLine(l);\n    }\n}\n\n/* Backspace implementation. */\nvoid linenoiseEditBackspace(struct linenoiseState *l) {\n    if (l->pos > 0 && l->len > 0) {\n        memmove(l->buf+l->pos-1,l->buf+l->pos,l->len-l->pos);\n        l->pos--;\n        l->len--;\n        l->buf[l->len] = '\\0';\n        refreshLine(l);\n    }\n}\n\n/* Delete the previous word, maintaining the cursor at the start of the\n * current word. 
*/\nvoid linenoiseEditDeletePrevWord(struct linenoiseState *l) {\n    size_t old_pos = l->pos;\n    size_t diff;\n\n    while (l->pos > 0 && l->buf[l->pos-1] == ' ')\n        l->pos--;\n    while (l->pos > 0 && l->buf[l->pos-1] != ' ')\n        l->pos--;\n    diff = old_pos - l->pos;\n    memmove(l->buf+l->pos,l->buf+old_pos,l->len-old_pos+1);\n    l->len -= diff;\n    refreshLine(l);\n}\n\n/* This function is the core of the line editing capability of linenoise.\n * It expects 'fd' to be already in \"raw mode\" so that every key pressed\n * will be returned ASAP to read().\n *\n * The resulting string is put into 'buf' when the user types enter, or\n * when ctrl+d is typed.\n *\n * The function returns the length of the current buffer. */\nstatic int linenoiseEdit(int stdin_fd, int stdout_fd, char *buf, size_t buflen, const char *prompt)\n{\n    struct linenoiseState l;\n\n    /* Populate the linenoise state that we pass to functions implementing\n     * specific editing functionalities. */\n    l.ifd = stdin_fd;\n    l.ofd = stdout_fd;\n    l.buf = buf;\n    l.buflen = buflen;\n    l.prompt = prompt;\n    l.plen = strlen(prompt);\n    l.oldpos = l.pos = 0;\n    l.len = 0;\n    l.cols = getColumns(stdin_fd, stdout_fd);\n    l.maxrows = 0;\n    l.history_index = 0;\n\n    /* Buffer starts empty. */\n    l.buf[0] = '\\0';\n    l.buflen--; /* Make sure there is always space for the nulterm */\n\n    /* The latest history entry is always our current buffer, that\n     * initially is just an empty string. */\n    linenoiseHistoryAdd(\"\", 0);\n\n    if (write(l.ofd,prompt,l.plen) == -1) return -1;\n    while(1) {\n        char c;\n        int nread;\n        char seq[3];\n\n        nread = read(l.ifd,&c,1);\n        if (nread <= 0) return l.len;\n\n        /* Only autocomplete when the callback is set. It returns < 0 when\n         * there was an error reading from fd. Otherwise it will return the\n         * character that should be handled next. 
*/\n        if (c == TAB && completionCallback != NULL && !reverse_search_mode_enabled) {\n            c = completeLine(&l);\n            /* Return on errors */\n            if (c < 0) return l.len;\n            /* Read next character when 0 */\n            if (c == 0) continue;\n        }\n\n        switch(c) {\n        case NL:       /* enter, typed before raw mode was enabled */\n            break;\n        case TAB:\n            if (reverse_search_mode_enabled) disableReverseSearchMode(&l, buf, buflen, 0);\n            break;\n        case ENTER:    /* enter */\n            history_len--;\n            free(history[history_len]);\n            if (mlmode) linenoiseEditMoveEnd(&l);\n            if (hintsCallback) {\n                /* Force a refresh without hints to leave the previous\n                 * line as the user typed it after a newline. */\n                linenoiseHintsCallback *hc = hintsCallback;\n                hintsCallback = NULL;\n                refreshLine(&l);\n                hintsCallback = hc;\n            }\n\n            if (reverse_search_mode_enabled) disableReverseSearchMode(&l, buf, buflen, 0);\n            return (int)l.len;\n        case CTRL_C:     /* ctrl-c */\n            if (reverse_search_mode_enabled) {\n                disableReverseSearchMode(&l, buf, buflen, 1);\n                break;\n            }\n            errno = EAGAIN;\n            return -1;\n        case BACKSPACE:   /* backspace */\n        case 8:     /* ctrl-h */\n            linenoiseEditBackspace(&l);\n            break;\n        case CTRL_D:     /* ctrl-d, remove char at right of cursor, or if the\n                            line is empty, act as end-of-file. 
*/\n            if (l.len > 0) {\n                linenoiseEditDelete(&l);\n            } else {\n                history_len--;\n                free(history[history_len]);\n                return -1;\n            }\n            break;\n        case CTRL_T:    /* ctrl-t, swaps current character with previous. */\n            if (l.pos > 0 && l.pos < l.len) {\n                int aux = buf[l.pos-1];\n                buf[l.pos-1] = buf[l.pos];\n                buf[l.pos] = aux;\n                if (l.pos != l.len-1) l.pos++;\n                refreshLine(&l);\n            }\n            break;\n        case CTRL_B:     /* ctrl-b */\n            linenoiseEditMoveLeft(&l);\n            break;\n        case CTRL_F:     /* ctrl-f */\n            linenoiseEditMoveRight(&l);\n            break;\n        case CTRL_P:    /* ctrl-p */\n            linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_PREV);\n            break;\n        case CTRL_R:\n        case CTRL_S:\n            reverse_search_direction = c == CTRL_R ? -1 : 1;\n            if (reverse_search_mode_enabled) {\n                /* cycle search results */\n                cycle_to_next_search = 1;\n                l.prompt = REVERSE_SEARCH_PROMPT(reverse_search_direction);\n                refreshLine(&l);\n                break;\n            }\n            buf[0] = '\\0';\n            l.pos = l.len = 0;\n            enableReverseSearchMode(&l);\n            break;\n        case CTRL_G:\n            if (reverse_search_mode_enabled) disableReverseSearchMode(&l, buf, buflen, 1);\n            break;\n        case CTRL_N:    /* ctrl-n */\n            linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_NEXT);\n            break;\n        case ESC:    /* escape sequence */\n            /* Read the next two bytes representing the escape sequence.\n             * Use two calls to handle slow terminals returning the two\n             * chars at different times. 
*/\n            if (read(l.ifd,seq,1) == -1) break;\n\n            /* Handle Meta-b / Meta-f directly because it's a 2-byte sequence */\n            if (seq[0] == 'b' || seq[0] == 'f') {\n                if (reverse_search_mode_enabled) {\n                    disableReverseSearchMode(&l, buf, buflen, 1);\n                    break;\n                }\n                if (seq[0] == 'b') linenoiseEditMoveWordLeft(&l);   /* ESC b → word left */\n                else linenoiseEditMoveWordRight(&l);                /* ESC f → word right */\n                break;\n            }\n\n            if (read(l.ifd,seq+1,1) == -1) break;\n\n            if (reverse_search_mode_enabled) {\n                disableReverseSearchMode(&l, buf, buflen, 1);\n                break;\n            }\n\n            /* ESC [ sequences. */\n            if (seq[0] == '[') {\n                if (seq[1] >= '0' && seq[1] <= '9') {\n                    /* Extended escape, read additional bytes.\n                     * Examples: ESC [1;5C  ESC [3~ */\n                    char seq_buffer[SEQ_BUFFER_MAX_LENGTH];\n                    int i = 0;\n                    seq_buffer[i++] = seq[1];\n\n                    /* Read more bytes until we see a CSI final byte (range @..~).\n                     * Use SEQ_BUFFER_MAX_LENGTH-1 to reserve one position for '\\0'. 
*/\n                    char seq_char;\n                    while (i < SEQ_BUFFER_MAX_LENGTH - 1 && read(l.ifd, &seq_char, 1) == 1) {\n                        seq_buffer[i++] = seq_char;\n                        if (seq_char >= '@' && seq_char <= '~') break; /* CSI final byte */\n                    }\n                    seq_buffer[i] = '\\0';\n\n                    /* The exact key mapping behavior depends on your keyboard/terminal setup.\n                     * For example, in MacOS terminal you can go to the profile keyboard setting\n                     * to see or configure the current mapping.\n                     *\n                     * Take action `[1;5C` (Ctrl + →) or `[1;3D` (Alt + ←) as examples:\n                     * [ indicates a CSI (Control Sequence Introducer), telling the terminal\n                     *    \"What follows is a control command, not text\"\n                     * 1 is how many units to move (default is 1 if omitted)\n                     * ; is the separator between parameters\n                     * 5 is the modifier mask for Ctrl. Other possible values include 1 (no modifier),\n                     *     2 (Shift), 3 (Alt), 4 (Shift + Alt), 6 (Shift + Ctrl), 7 (Alt + Ctrl), etc.\n                     * C is the cursor right command. 
Other commands include A (cursor up), B (cursor\n                     *     down), D (cursor left).\n                     */\n\n                    /* Word left: Ctrl + ← (modifier 5) or Alt + ← (modifier 3) */\n                    if (strcmp(seq_buffer, \"1;5D\") == 0 || strcmp(seq_buffer, \"1;3D\") == 0) {\n                        linenoiseEditMoveWordLeft(&l);\n                        break;\n                    }\n                    /* Word right: Ctrl + → (modifier 5) or Alt + → (modifier 3) */\n                    if (strcmp(seq_buffer, \"1;5C\") == 0 || strcmp(seq_buffer, \"1;3C\") == 0) {\n                        linenoiseEditMoveWordRight(&l);\n                        break;\n                    }\n                    /* Delete key */\n                    if (strcmp(seq_buffer, \"3~\") == 0) {\n                        linenoiseEditDelete(&l);\n                        break;\n                    }\n                } else {\n                    switch(seq[1]) {\n                    case 'A': /* Up */\n                        linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_PREV);\n                        break;\n                    case 'B': /* Down */\n                        linenoiseEditHistoryNext(&l, LINENOISE_HISTORY_NEXT);\n                        break;\n                    case 'C': /* Right */\n                        linenoiseEditMoveRight(&l);\n                        break;\n                    case 'D': /* Left */\n                        linenoiseEditMoveLeft(&l);\n                        break;\n                    case 'H': /* Home */\n                        linenoiseEditMoveHome(&l);\n                        break;\n                    case 'F': /* End*/\n                        linenoiseEditMoveEnd(&l);\n                        break;\n                    }\n                }\n            }\n\n            /* ESC O sequences. 
*/\n            else if (seq[0] == 'O') {\n                switch(seq[1]) {\n                case 'H': /* Home */\n                    linenoiseEditMoveHome(&l);\n                    break;\n                case 'F': /* End*/\n                    linenoiseEditMoveEnd(&l);\n                    break;\n                }\n            }\n            break;\n        default:\n            if (linenoiseEditInsert(&l,c)) return -1;\n            break;\n        case CTRL_U: /* Ctrl+u, delete the whole line. */\n            buf[0] = '\\0';\n            l.pos = l.len = 0;\n            refreshLine(&l);\n            break;\n        case CTRL_K: /* Ctrl+k, delete from current to end of line. */\n            buf[l.pos] = '\\0';\n            l.len = l.pos;\n            refreshLine(&l);\n            break;\n        case CTRL_A: /* Ctrl+a, go to the start of the line */\n            linenoiseEditMoveHome(&l);\n            break;\n        case CTRL_E: /* ctrl+e, go to the end of the line */\n            linenoiseEditMoveEnd(&l);\n            break;\n        case CTRL_L: /* ctrl+l, clear screen */\n            linenoiseClearScreen();\n            refreshLine(&l);\n            break;\n        case CTRL_W: /* ctrl+w, delete previous word */\n            linenoiseEditDeletePrevWord(&l);\n            break;\n        }\n    }\n    return l.len;\n}\n\n/* This special mode is used by linenoise in order to print scan codes\n * on screen for debugging / development purposes. It is implemented\n * by the linenoise_example program using the --keycodes option. */\nvoid linenoisePrintKeyCodes(void) {\n    char quit[4];\n\n    printf(\"Linenoise key codes debugging mode.\\n\"\n            \"Press keys to see scan codes. 
Type 'quit' at any time to exit.\\n\");\n    if (enableRawMode(STDIN_FILENO) == -1) return;\n    memset(quit,' ',4);\n    while(1) {\n        char c;\n        int nread;\n\n        nread = read(STDIN_FILENO,&c,1);\n        if (nread <= 0) continue;\n        memmove(quit,quit+1,sizeof(quit)-1); /* shift string to left. */\n        quit[sizeof(quit)-1] = c; /* Insert current char on the right. */\n        if (memcmp(quit,\"quit\",sizeof(quit)) == 0) break;\n\n        printf(\"'%c' %02x (%d) (type quit to exit)\\n\",\n            isprint(c) ? c : '?', (int)c, (int)c);\n        printf(\"\\r\"); /* Go left edge manually, we are in raw mode. */\n        fflush(stdout);\n    }\n    disableRawMode(STDIN_FILENO);\n}\n\n/* This function calls the line editing function linenoiseEdit() using\n * the STDIN file descriptor set in raw mode. */\nstatic int linenoiseRaw(char *buf, size_t buflen, const char *prompt) {\n    int count;\n\n    if (buflen == 0) {\n        errno = EINVAL;\n        return -1;\n    }\n\n    if (enableRawMode(STDIN_FILENO) == -1) return -1;\n    count = linenoiseEdit(STDIN_FILENO, STDOUT_FILENO, buf, buflen, prompt);\n    disableRawMode(STDIN_FILENO);\n    printf(\"\\n\");\n    return count;\n}\n\n/* This function is called when linenoise() is called with the standard\n * input file descriptor not attached to a TTY. So for example when the\n * program using linenoise is called in pipe or with a file redirected\n * to its standard input. In this case, we want to be able to return the\n * line regardless of its length (by default we are limited to 4k). 
*/\nstatic char *linenoiseNoTTY(void) {\n    char *line = NULL;\n    size_t len = 0, maxlen = 0;\n\n    while(1) {\n        if (len == maxlen) {\n            if (maxlen == 0) maxlen = 16;\n            maxlen *= 2;\n            char *oldval = line;\n            line = realloc(line,maxlen);\n            if (line == NULL) {\n                if (oldval) free(oldval);\n                return NULL;\n            }\n        }\n        int c = fgetc(stdin);\n        if (c == EOF || c == '\\n') {\n            if (c == EOF && len == 0) {\n                free(line);\n                return NULL;\n            } else {\n                line[len] = '\\0';\n                return line;\n            }\n        } else {\n            line[len] = c;\n            len++;\n        }\n    }\n}\n\n/* The high level function that is the main API of the linenoise library.\n * This function checks if the terminal has basic capabilities, just checking\n * for a blacklist of stupid terminals, and later either calls the line\n * editing function or uses dummy fgets() so that you will be able to type\n * something even in the most desperate of the conditions. */\nchar *linenoise(const char *prompt) {\n    char buf[LINENOISE_MAX_LINE] = {0};\n    int count;\n\n    if (getenv(\"FAKETTY_WITH_PROMPT\") == NULL && !isatty(STDIN_FILENO)) {\n        /* Not a tty: read from file / pipe. In this mode we don't want any\n         * limit to the line size, so we call a function to handle that. 
*/\n        return linenoiseNoTTY();\n    } else if (getenv(\"FAKETTY_WITH_PROMPT\") == NULL && isUnsupportedTerm()) {\n        size_t len;\n\n        printf(\"%s\",prompt);\n        fflush(stdout);\n        if (fgets(buf,LINENOISE_MAX_LINE,stdin) == NULL) return NULL;\n        len = strlen(buf);\n        while(len && (buf[len-1] == '\\n' || buf[len-1] == '\\r')) {\n            len--;\n            buf[len] = '\\0';\n        }\n        return strdup(buf);\n    } else {\n        count = linenoiseRaw(buf,LINENOISE_MAX_LINE,prompt);\n        if (count == -1) return NULL;\n        return strdup(buf);\n    }\n}\n\n/* This is just a wrapper the user may want to call in order to make sure\n * the linenoise returned buffer is freed with the same allocator it was\n * created with. Useful when the main program is using an alternative\n * allocator. */\nvoid linenoiseFree(void *ptr) {\n    free(ptr);\n}\n\n/* ================================ History ================================= */\n\n/* Frees the history, but does not reset it. Only used when we have to\n * exit() to avoid memory leaks being reported by valgrind & co. */\nstatic void freeHistory(void) {\n    if (history) {\n        int j;\n\n        for (j = 0; j < history_len; j++)\n            free(history[j]);\n        free(history);\n        free(history_sensitive);\n    }\n}\n\n/* At exit we'll try to fix the terminal to the initial conditions. */\nstatic void linenoiseAtExit(void) {\n    disableRawMode(STDIN_FILENO);\n    freeHistory();\n}\n\n/* This is the API call to add a new entry in the linenoise history.\n * It uses a fixed array of char pointers that are shifted (memmoved)\n * when the history max length is reached in order to remove the older\n * entry and make room for the new one, so it is not exactly suitable for huge\n * histories, but will work well for a few hundred entries.\n *\n * Using a circular buffer is smarter, but a bit more complex to handle. 
*/\nint linenoiseHistoryAdd(const char *line, int is_sensitive) {\n    char *linecopy;\n\n    if (history_max_len == 0) return 0;\n\n    /* Initialization on first call. */\n    if (history == NULL) {\n        history = malloc(sizeof(char*)*history_max_len);\n        if (history == NULL) return 0;\n        history_sensitive = malloc(sizeof(int)*history_max_len);\n        if (history_sensitive == NULL) {\n            free(history);\n            history = NULL;\n            return 0;\n        }\n        memset(history,0,(sizeof(char*)*history_max_len));\n        memset(history_sensitive,0,(sizeof(int)*history_max_len));\n    }\n\n    /* Don't add duplicated lines. */\n    if (history_len && !strcmp(history[history_len-1], line)) return 0;\n\n    /* Add an heap allocated copy of the line in the history.\n     * If we reached the max length, remove the older line. */\n    linecopy = strdup(line);\n    if (!linecopy) return 0;\n    if (history_len == history_max_len) {\n        free(history[0]);\n        memmove(history,history+1,sizeof(char*)*(history_max_len-1));\n        memmove(history_sensitive,history_sensitive+1,sizeof(int)*(history_max_len-1));\n        history_len--;\n    }\n    history[history_len] = linecopy;\n    history_sensitive[history_len] = is_sensitive;\n    history_len++;\n    return 1;\n}\n\n/* Set the maximum length for the history. This function can be called even\n * if there is already some history, the function will make sure to retain\n * just the latest 'len' elements if the new history length value is smaller\n * than the amount of items already inside the history. 
*/\nint linenoiseHistorySetMaxLen(int len) {\n    char **new;\n    int *new_sensitive;\n\n    if (len < 1) return 0;\n    if (history) {\n        int tocopy = history_len;\n\n        new = malloc(sizeof(char*)*len);\n        if (new == NULL) return 0;\n        new_sensitive = malloc(sizeof(int)*len);\n        if (new_sensitive == NULL) {\n            free(new);\n            return 0;\n        }\n\n        /* If we can't copy everything, free the elements we'll not use. */\n        if (len < tocopy) {\n            int j;\n\n            for (j = 0; j < tocopy-len; j++) free(history[j]);\n            tocopy = len;\n        }\n        memset(new,0,sizeof(char*)*len);\n        memset(new_sensitive,0,sizeof(int)*len);\n        memcpy(new,history+(history_len-tocopy), sizeof(char*)*tocopy);\n        memcpy(new_sensitive,history_sensitive+(history_len-tocopy), sizeof(int)*tocopy);\n        free(history);\n        free(history_sensitive);\n        history = new;\n        history_sensitive = new_sensitive;\n    }\n    history_max_len = len;\n    if (history_len > history_max_len)\n        history_len = history_max_len;\n    return 1;\n}\n\n/* Save the history in the specified file. On success 0 is returned\n * otherwise -1 is returned. */\nint linenoiseHistorySave(const char *filename) {\n    mode_t old_umask = umask(S_IXUSR|S_IRWXG|S_IRWXO);\n    FILE *fp;\n    int j;\n\n    fp = fopen(filename,\"w\");\n    umask(old_umask);\n    if (fp == NULL) return -1;\n    fchmod(fileno(fp),S_IRUSR|S_IWUSR);\n    for (j = 0; j < history_len; j++)\n        if (!history_sensitive[j]) fprintf(fp,\"%s\\n\",history[j]);\n    fclose(fp);\n    return 0;\n}\n\n/* Load the history from the specified file. If the file does not exist\n * zero is returned and no operation is performed.\n *\n * If the file exists and the operation succeeded 0 is returned, otherwise\n * on error -1 is returned. 
*/\nint linenoiseHistoryLoad(const char *filename) {\n    FILE *fp = fopen(filename,\"r\");\n    char buf[LINENOISE_MAX_LINE];\n\n    if (fp == NULL) return -1;\n\n    while (fgets(buf,LINENOISE_MAX_LINE,fp) != NULL) {\n        char *p;\n\n        p = strchr(buf,'\\r');\n        if (!p) p = strchr(buf,'\\n');\n        if (p) *p = '\\0';\n        linenoiseHistoryAdd(buf, 0);\n    }\n    fclose(fp);\n    return 0;\n}\n\n/* This function updates the search index based on the direction of the search.\n * Returns 0 if the beginning or end of the history is reached, otherwise, returns 1. */\nstatic int setNextSearchIndex(int *i) {\n    if (reverse_search_direction == 1) {\n        if (*i == history_len-1) return 0;\n        *i = *i + 1;\n    } else {\n        if (*i <= 0) return 0;\n        *i = *i - 1;\n    }\n    return 1;\n}\n\nlinenoiseHistorySearchResult searchInHistory(char *search_term) {\n    linenoiseHistorySearchResult result = {0};\n\n    if (!history_len || !strlen(search_term)) return result;\n\n    int i = cycle_to_next_search ? search_result_history_index :\n        (reverse_search_direction == -1 ? history_len-1 : 0);\n    \n    while (1) {\n        char *found = strstr(history[i], search_term);\n        \n        /* check if we found the same string at another index when cycling, this would be annoying to cycle through\n         * as it might appear that cycling isn't working */\n        int strings_are_the_same = cycle_to_next_search && strcmp(history[i], history[search_result_history_index]) == 0; \n\n        if (found && !strings_are_the_same) {\n            int haystack_index = found - history[i];\n            result.result = history[i];\n            result.len = strlen(history[i]);\n            result.search_term_index = haystack_index;\n            result.search_term_len = strlen(search_term);\n            search_result_history_index = i;\n            break;\n        }\n\n        /* Exit if reached the end. 
*/\n        if (!setNextSearchIndex(&i)) break;\n    }\n\n    return result;\n}\n\nstatic void refreshSearchResult(struct linenoiseState *ls) {\n   if (!reverse_search_mode_enabled) {\n        return;\n    }\n\n    linenoiseHistorySearchResult sr = searchInHistory(ls->buf);\n    int found = sr.result && sr.len;\n\n    /* If the search term has not changed and we are cycling to the next search result\n     * (using CTRL+R or CTRL+S), there is no need to reset the old search result. */\n    if (!cycle_to_next_search || found)\n        resetSearchResult();\n    cycle_to_next_search = 0;\n\n    if (found) {\n        char *bold = \"\\x1B[1m\";\n        char *normal = \"\\x1B[0m\";\n\n        int size_needed = sr.search_term_index + sr.search_term_len + sr.len -\n            (sr.search_term_index+sr.search_term_len) + sizeof(normal) + sizeof(bold) + sizeof(normal);\n        if (size_needed > sizeof(search_result_friendly) - 1) {\n            return;\n        }\n\n        /* Allocate memory for the prefix, match, and suffix strings, one extra byte for `\\0`. */\n        char *prefix = calloc(sizeof(char), sr.search_term_index + 1);\n        char *match = calloc(sizeof(char), sr.search_term_len + 1);\n        char *suffix = calloc(sizeof(char), sr.len - (sr.search_term_index+sr.search_term_len) + 1);\n\n        memcpy(prefix, sr.result, sr.search_term_index);\n        memcpy(match, sr.result + sr.search_term_index, sr.search_term_len);\n        memcpy(suffix, sr.result + sr.search_term_index + sr.search_term_len,\n               sr.len - (sr.search_term_index+sr.search_term_len));\n        sprintf(search_result, \"%s%s%s\", prefix, match, suffix);\n        sprintf(search_result_friendly, \"%s%s%s%s%s%s\", normal, prefix, bold, match, normal, suffix);\n\n        free(prefix);\n        free(match);\n        free(suffix);\n\n        search_result_start_offset = sr.search_term_index;\n    }\n}\n"
  },
  {
    "path": "deps/linenoise/linenoise.h",
    "content": "/* linenoise.h -- VERSION 1.0\n *\n * Guerrilla line editing library against the idea that a line editing lib\n * needs to be 20,000 lines of C code.\n *\n * See linenoise.c for more information.\n *\n * ------------------------------------------------------------------------\n *\n * Copyright (c) 2010-2014, Salvatore Sanfilippo <antirez at gmail dot com>\n * Copyright (c) 2010-2013, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n *\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are\n * met:\n *\n *  *  Redistributions of source code must retain the above copyright\n *     notice, this list of conditions and the following disclaimer.\n *\n *  *  Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n * A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n * HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __LINENOISE_H\n#define __LINENOISE_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\ntypedef struct linenoiseCompletions {\n  size_t len;\n  char **cvec;\n} linenoiseCompletions;\n\ntypedef void(linenoiseCompletionCallback)(const char *, linenoiseCompletions *);\ntypedef char*(linenoiseHintsCallback)(const char *, int *color, int *bold);\ntypedef void(linenoiseFreeHintsCallback)(void *);\nvoid linenoiseSetCompletionCallback(linenoiseCompletionCallback *);\nvoid linenoiseSetHintsCallback(linenoiseHintsCallback *);\nvoid linenoiseSetFreeHintsCallback(linenoiseFreeHintsCallback *);\nvoid linenoiseAddCompletion(linenoiseCompletions *, const char *);\n\nchar *linenoise(const char *prompt);\nvoid linenoiseFree(void *ptr);\nint linenoiseHistoryAdd(const char *line, int is_sensitive);\nint linenoiseHistorySetMaxLen(int len);\nint linenoiseHistorySave(const char *filename);\nint linenoiseHistoryLoad(const char *filename);\nvoid linenoiseClearScreen(void);\nvoid linenoiseSetMultiLine(int ml);\nvoid linenoisePrintKeyCodes(void);\nvoid linenoiseMaskModeEnable(void);\nvoid linenoiseMaskModeDisable(void);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* __LINENOISE_H */\n"
  },
  {
    "path": "deps/lua/COPYRIGHT",
    "content": "Lua License\n-----------\n\nLua is licensed under the terms of the MIT license reproduced below.\nThis means that Lua is free software and can be used for both academic\nand commercial purposes at absolutely no cost.\n\nFor details and rationale, see http://www.lua.org/license.html .\n\n===============================================================================\n\nCopyright (C) 1994-2012 Lua.org, PUC-Rio.\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n\n===============================================================================\n\n(end of COPYRIGHT)\n"
  },
  {
    "path": "deps/lua/HISTORY",
    "content": "HISTORY for Lua 5.1\n\n* Changes from version 5.0 to 5.1\n  -------------------------------\n  Language:\n  + new module system.\n  + new semantics for control variables of fors.\n  + new semantics for setn/getn.\n  + new syntax/semantics for varargs.\n  + new long strings and comments.\n  + new `mod' operator (`%')\n  + new length operator #t\n  + metatables for all types\n  API:\n  + new functions: lua_createtable, lua_get(set)field, lua_push(to)integer.\n  + user supplies memory allocator (lua_open becomes lua_newstate).\n  + luaopen_* functions must be called through Lua.\n  Implementation:\n  + new configuration scheme via luaconf.h.\n  + incremental garbage collection.\n  + better handling of end-of-line in the lexer.\n  + fully reentrant parser (new Lua function `load')\n  + better support for 64-bit machines.\n  + native loadlib support for Mac OS X.\n  + standard distribution in only one library (lualib.a merged into lua.a)\n\n* Changes from version 4.0 to 5.0\n  -------------------------------\n  Language:\n  + lexical scoping.\n  + Lua coroutines.\n  + standard libraries now packaged in tables.\n  + tags replaced by metatables and tag methods replaced by metamethods,\n    stored in metatables.\n  + proper tail calls.\n  + each function can have its own global table, which can be shared.\n  + new __newindex metamethod, called when we insert a new key into a table.\n  + new block comments: --[[ ... ]].\n  + new generic for.\n  + new weak tables.\n  + new boolean type.\n  + new syntax \"local function\".\n  + (f()) returns the first value returned by f.\n  + {f()} fills a table with all values returned by f.\n  + \\n ignored in [[\\n .\n  + fixed and-or priorities.\n  + more general syntax for function definition (e.g. function a.x.y:f()...end).\n  + more general syntax for function calls (e.g. 
(print or write)(9)).\n  + new functions (time/date, tmpfile, unpack, require, load*, etc.).\n  API:\n  + chunks are loaded by using lua_load; new luaL_loadfile and luaL_loadbuffer.\n  + introduced lightweight userdata, a simple \"void*\" without a metatable.\n  + new error handling protocol: the core no longer prints error messages;\n    all errors are reported to the caller on the stack.\n  + new lua_atpanic for host cleanup.\n  + new, signal-safe, hook scheme.\n  Implementation:\n  + new license: MIT.\n  + new, faster, register-based virtual machine.\n  + support for external multithreading and coroutines.\n  + new and consistent error message format.\n  + the core no longer needs \"stdio.h\" for anything (except for a single\n    use of sprintf to convert numbers to strings).\n  + lua.c now runs the environment variable LUA_INIT, if present. It can\n    be \"@filename\", to run a file, or the chunk itself.\n  + support for user extensions in lua.c.\n    sample implementation given for command line editing.\n  + new dynamic loading library, active by default on several platforms.\n  + safe garbage-collector metamethods.\n  + precompiled bytecodes checked for integrity (secure binary dostring).\n  + strings are fully aligned.\n  + position capture in string.find.\n  + read('*l') can read lines with embedded zeros.\n\n* Changes from version 3.2 to 4.0\n  -------------------------------\n  Language:\n  + new \"break\" and \"for\" statements (both numerical and for tables).\n  + uniform treatment of globals: globals are now stored in a Lua table.\n  + improved error messages.\n  + no more '$debug': full speed *and* full debug information.\n  + new read form: read(N) for next N bytes.\n  + general read patterns now deprecated.\n    (still available with -DCOMPAT_READPATTERNS.)\n  + all return values are passed as arguments for the last function\n    (old semantics still available with -DLUA_COMPAT_ARGRET)\n  + garbage collection tag methods for tables now 
deprecated.\n  + there is now only one tag method for order.\n  API:\n  + New API: fully re-entrant, simpler, and more efficient.\n  + New debug API.\n  Implementation:\n  + faster than ever: cleaner virtual machine and new hashing algorithm.\n  + non-recursive garbage-collector algorithm.\n  + reduced memory usage for programs with many strings.\n  + improved treatment for memory allocation errors.\n  + improved support for 16-bit machines (we hope).\n  + code now compiles unmodified as both ANSI C and C++.\n  + numbers in bases other than 10 are converted using strtoul.\n  + new -f option in Lua to support #! scripts.\n  + luac can now combine text and binaries.\n\n* Changes from version 3.1 to 3.2\n  -------------------------------\n  + redirected all output in Lua's core to _ERRORMESSAGE and _ALERT.\n  + increased limit on the number of constants and globals per function\n    (from 2^16 to 2^24).\n  + debugging info (lua_debug and hooks) moved into lua_state and new API\n    functions provided to get and set this info.\n  + new debug lib gives full debugging access within Lua.\n  + new table functions \"foreachi\", \"sort\", \"tinsert\", \"tremove\", \"getn\".\n  + new io functions \"flush\", \"seek\".\n\n* Changes from version 3.0 to 3.1\n  -------------------------------\n  + NEW FEATURE: anonymous functions with closures (via \"upvalues\").\n  + new syntax:\n    - local variables in chunks.\n    - better scope control with DO block END.\n    - constructors can now be also written: { record-part; list-part }.\n    - more general syntax for function calls and lvalues, e.g.:\n      f(x).y=1\n      o:f(x,y):g(z)\n      f\"string\" is sugar for f(\"string\")\n  + strings may now contain arbitrary binary data (e.g., embedded zeros).\n  + major code re-organization and clean-up; reduced module interdependecies.\n  + no arbitrary limits on the total number of constants and globals.\n  + support for multiple global contexts.\n  + better syntax error messages.\n  + 
new traversal functions \"foreach\" and \"foreachvar\".\n  + the default for numbers is now double.\n    changing it to use floats or longs is easy.\n  + complete debug information stored in pre-compiled chunks.\n  + sample interpreter now prompts user when run interactively, and also\n    handles control-C interruptions gracefully.\n\n* Changes from version 2.5 to 3.0\n  -------------------------------\n  + NEW CONCEPT: \"tag methods\".\n    Tag methods replace fallbacks as the meta-mechanism for extending the\n    semantics of Lua. Whereas fallbacks had a global nature, tag methods\n    work on objects having the same tag (e.g., groups of tables).\n    Existing code that uses fallbacks should work without change.\n  + new, general syntax for constructors {[exp] = exp, ... }.\n  + support for handling variable number of arguments in functions (varargs).\n  + support for conditional compilation ($if ... $else ... $end).\n  + cleaner semantics in API simplifies host code.\n  + better support for writing libraries (auxlib.h).\n  + better type checking and error messages in the standard library.\n  + luac can now also undump.\n\n* Changes from version 2.4 to 2.5\n  -------------------------------\n  + io and string libraries are now based on pattern matching;\n    the old libraries are still available for compatibility\n  + dofile and dostring can now return values (via return statement)\n  + better support for 16- and 64-bit machines\n  + expanded documentation, with more examples\n\n* Changes from version 2.2 to 2.4\n  -------------------------------\n  + external compiler creates portable binary files that can be loaded faster\n  + interface for debugging and profiling\n  + new \"getglobal\" fallback\n  + new functions for handling references to Lua objects\n  + new functions in standard lib\n  + only one copy of each string is stored\n  + expanded documentation, with more examples\n\n* Changes from version 2.1 to 2.2\n  -------------------------------\n  + 
functions now may be declared with any \"lvalue\" as a name\n  + garbage collection of functions\n  + support for pipes\n\n* Changes from version 1.1 to 2.1\n  -------------------------------\n  + object-oriented support\n  + fallbacks\n  + simplified syntax for tables\n  + many internal improvements\n\n(end of HISTORY)\n"
  },
  {
    "path": "deps/lua/INSTALL",
    "content": "INSTALL for Lua 5.1\n\n* Building Lua\n  ------------\n  Lua is built in the src directory, but the build process can be\n  controlled from the top-level Makefile.\n\n  Building Lua on Unix systems should be very easy. First do \"make\" and\n  see if your platform is listed. If so, just do \"make xxx\", where xxx\n  is your platform name. The platforms currently supported are:\n    aix ansi bsd freebsd generic linux macosx mingw posix solaris\n\n  If your platform is not listed, try the closest one or posix, generic,\n  ansi, in this order.\n\n  See below for customization instructions and for instructions on how\n  to build with other Windows compilers.\n\n  If you want to check that Lua has been built correctly, do \"make test\"\n  after building Lua. Also, have a look at the example programs in test.\n\n* Installing Lua\n  --------------\n  Once you have built Lua, you may want to install it in an official\n  place in your system. In this case, do \"make install\". The official\n  place and the way to install files are defined in Makefile. You must\n  have the right permissions to install files.\n\n  If you want to build and install Lua in one step, do \"make xxx install\",\n  where xxx is your platform name.\n\n  If you want to install Lua locally, then do \"make local\". This will\n  create directories bin, include, lib, man, and install Lua there as\n  follows:\n\n    bin:\tlua luac\n    include:\tlua.h luaconf.h lualib.h lauxlib.h lua.hpp\n    lib:\tliblua.a\n    man/man1:\tlua.1 luac.1\n\n  These are the only directories you need for development.\n\n  There are man pages for lua and luac, in both nroff and html, and a\n  reference manual in html in doc, some sample code in test, and some\n  useful stuff in etc. 
You don't need these directories for development.\n\n  If you want to install Lua locally, but in some other directory, do\n  \"make install INSTALL_TOP=xxx\", where xxx is your chosen directory.\n\n  See below for instructions for Windows and other systems.\n\n* Customization\n  -------------\n  Three things can be customized by editing a file:\n    - Where and how to install Lua -- edit Makefile.\n    - How to build Lua -- edit src/Makefile.\n    - Lua features -- edit src/luaconf.h.\n\n  You don't actually need to edit the Makefiles because you may set the\n  relevant variables when invoking make.\n\n  On the other hand, if you need to select some Lua features, you'll need\n  to edit src/luaconf.h. The edited file will be the one installed, and\n  it will be used by any Lua clients that you build, to ensure consistency.\n\n  We strongly recommend that you enable dynamic loading. This is done\n  automatically for all platforms listed above that have this feature\n  (and also Windows). See src/luaconf.h and also src/Makefile.\n\n* Building Lua on Windows and other systems\n  -----------------------------------------\n  If you're not using the usual Unix tools, then the instructions for\n  building Lua depend on the compiler you use. You'll need to create\n  projects (or whatever your compiler uses) for building the library,\n  the interpreter, and the compiler, as follows:\n\n  library:\tlapi.c lcode.c ldebug.c ldo.c ldump.c lfunc.c lgc.c llex.c\n\t\tlmem.c lobject.c lopcodes.c lparser.c lstate.c lstring.c\n\t\tltable.c ltm.c lundump.c lvm.c lzio.c\n\t\tlauxlib.c lbaselib.c ldblib.c liolib.c lmathlib.c loslib.c\n\t\tltablib.c lstrlib.c loadlib.c linit.c\n\n  interpreter:\tlibrary, lua.c\n\n  compiler:\tlibrary, luac.c print.c\n\n  If you use Visual Studio .NET, you can use etc/luavs.bat in its\n  \"Command Prompt\".\n\n  If all you want is to build the Lua interpreter, you may put all .c files\n  in a single project, except for luac.c and print.c. 
Or just use etc/all.c.\n\n  To use Lua as a library in your own programs, you'll need to know how to\n  create and use libraries with your compiler.\n\n  As mentioned above, you may edit luaconf.h to select some features before\n  building Lua.\n\n(end of INSTALL)\n"
  },
  {
    "path": "deps/lua/Makefile",
    "content": "# makefile for installing Lua\n# see INSTALL for installation instructions\n# see src/Makefile and src/luaconf.h for further customization\n\n# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================\n\n# Your platform. See PLATS for possible values.\nPLAT= none\n\n# Where to install. The installation starts in the src and doc directories,\n# so take care if INSTALL_TOP is not an absolute path.\nINSTALL_TOP= /usr/local\nINSTALL_BIN= $(INSTALL_TOP)/bin\nINSTALL_INC= $(INSTALL_TOP)/include\nINSTALL_LIB= $(INSTALL_TOP)/lib\nINSTALL_MAN= $(INSTALL_TOP)/man/man1\n#\n# You probably want to make INSTALL_LMOD and INSTALL_CMOD consistent with\n# LUA_ROOT, LUA_LDIR, and LUA_CDIR in luaconf.h (and also with etc/lua.pc).\nINSTALL_LMOD= $(INSTALL_TOP)/share/lua/$V\nINSTALL_CMOD= $(INSTALL_TOP)/lib/lua/$V\n\n# How to install. If your install program does not support \"-p\", then you\n# may have to run ranlib on the installed liblua.a (do \"make ranlib\").\nINSTALL= install -p\nINSTALL_EXEC= $(INSTALL) -m 0755\nINSTALL_DATA= $(INSTALL) -m 0644\n#\n# If you don't have install you can use cp instead.\n# INSTALL= cp -p\n# INSTALL_EXEC= $(INSTALL)\n# INSTALL_DATA= $(INSTALL)\n\n# Utilities.\nMKDIR= mkdir -p\nRANLIB= ranlib\n\n# == END OF USER SETTINGS. 
NO NEED TO CHANGE ANYTHING BELOW THIS LINE =========\n\n# Convenience platforms targets.\nPLATS= aix ansi bsd freebsd generic linux macosx mingw posix solaris\n\n# What to install.\nTO_BIN= lua luac\nTO_INC= lua.h luaconf.h lualib.h lauxlib.h ../etc/lua.hpp\nTO_LIB= liblua.a\nTO_MAN= lua.1 luac.1\n\n# Lua version and release.\nV= 5.1\nR= 5.1.5\n\nall:\t$(PLAT)\n\n$(PLATS) clean:\n\tcd src && $(MAKE) $@\n\ntest:\tdummy\n\tsrc/lua test/hello.lua\n\ninstall: dummy\n\tcd src && $(MKDIR) $(INSTALL_BIN) $(INSTALL_INC) $(INSTALL_LIB) $(INSTALL_MAN) $(INSTALL_LMOD) $(INSTALL_CMOD)\n\tcd src && $(INSTALL_EXEC) $(TO_BIN) $(INSTALL_BIN)\n\tcd src && $(INSTALL_DATA) $(TO_INC) $(INSTALL_INC)\n\tcd src && $(INSTALL_DATA) $(TO_LIB) $(INSTALL_LIB)\n\tcd doc && $(INSTALL_DATA) $(TO_MAN) $(INSTALL_MAN)\n\nranlib:\n\tcd src && cd $(INSTALL_LIB) && $(RANLIB) $(TO_LIB)\n\nlocal:\n\t$(MAKE) install INSTALL_TOP=..\n\nnone:\n\t@echo \"Please do\"\n\t@echo \"   make PLATFORM\"\n\t@echo \"where PLATFORM is one of these:\"\n\t@echo \"   $(PLATS)\"\n\t@echo \"See INSTALL for complete instructions.\"\n\n# make may get confused with test/ and INSTALL in a case-insensitive OS\ndummy:\n\n# echo config parameters\necho:\n\t@echo \"\"\n\t@echo \"These are the parameters currently set in src/Makefile to build Lua $R:\"\n\t@echo \"\"\n\t@cd src && $(MAKE) -s echo\n\t@echo \"\"\n\t@echo \"These are the parameters currently set in Makefile to install Lua $R:\"\n\t@echo \"\"\n\t@echo \"PLAT = $(PLAT)\"\n\t@echo \"INSTALL_TOP = $(INSTALL_TOP)\"\n\t@echo \"INSTALL_BIN = $(INSTALL_BIN)\"\n\t@echo \"INSTALL_INC = $(INSTALL_INC)\"\n\t@echo \"INSTALL_LIB = $(INSTALL_LIB)\"\n\t@echo \"INSTALL_MAN = $(INSTALL_MAN)\"\n\t@echo \"INSTALL_LMOD = $(INSTALL_LMOD)\"\n\t@echo \"INSTALL_CMOD = $(INSTALL_CMOD)\"\n\t@echo \"INSTALL_EXEC = $(INSTALL_EXEC)\"\n\t@echo \"INSTALL_DATA = $(INSTALL_DATA)\"\n\t@echo \"\"\n\t@echo \"See also src/luaconf.h .\"\n\t@echo \"\"\n\n# echo private config parameters\npecho:\n\t@echo \"V = 
$(V)\"\n\t@echo \"R = $(R)\"\n\t@echo \"TO_BIN = $(TO_BIN)\"\n\t@echo \"TO_INC = $(TO_INC)\"\n\t@echo \"TO_LIB = $(TO_LIB)\"\n\t@echo \"TO_MAN = $(TO_MAN)\"\n\n# echo config parameters as Lua code\n# uncomment the last sed expression if you want nil instead of empty strings\nlecho:\n\t@echo \"-- installation parameters for Lua $R\"\n\t@echo \"VERSION = '$V'\"\n\t@echo \"RELEASE = '$R'\"\n\t@$(MAKE) echo | grep = | sed -e 's/= /= \"/' -e 's/$$/\"/' #-e 's/\"\"/nil/'\n\t@echo \"-- EOF\"\n\n# list targets that do not create files (but not all makes understand .PHONY)\n.PHONY: all $(PLATS) clean test install local none dummy echo pecho lecho\n\n# (end of Makefile)\n"
  },
  {
    "path": "deps/lua/README",
    "content": "README for Lua 5.1\n\nSee INSTALL for installation instructions.\nSee HISTORY for a summary of changes since the last released version.\n\n* What is Lua?\n  ------------\n  Lua is a powerful, light-weight programming language designed for extending\n  applications. Lua is also frequently used as a general-purpose, stand-alone\n  language. Lua is free software.\n\n  For complete information, visit Lua's web site at http://www.lua.org/ .\n  For an executive summary, see http://www.lua.org/about.html .\n\n  Lua has been used in many different projects around the world.\n  For a short list, see http://www.lua.org/uses.html .\n\n* Availability\n  ------------\n  Lua is freely available for both academic and commercial purposes.\n  See COPYRIGHT and http://www.lua.org/license.html for details.\n  Lua can be downloaded at http://www.lua.org/download.html .\n\n* Installation\n  ------------\n  Lua is implemented in pure ANSI C, and compiles unmodified in all known\n  platforms that have an ANSI C compiler. In most Unix-like platforms, simply\n  do \"make\" with a suitable target. See INSTALL for detailed instructions.\n\n* Origin\n  ------\n  Lua is developed at Lua.org, a laboratory of the Department of Computer\n  Science of PUC-Rio (the Pontifical Catholic University of Rio de Janeiro\n  in Brazil).\n  For more information about the authors, see http://www.lua.org/authors.html .\n\n(end of README)\n"
  },
  {
    "path": "deps/lua/doc/contents.html",
    "content": "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n<HTML>\n<HEAD>\n<TITLE>Lua 5.1 Reference Manual - contents</TITLE>\n<LINK REL=\"stylesheet\" TYPE=\"text/css\" HREF=\"lua.css\">\n<META HTTP-EQUIV=\"content-type\" CONTENT=\"text/html; charset=utf-8\">\n<STYLE TYPE=\"text/css\">\nul {\n\tlist-style-type: none ;\n}\n</STYLE>\n</HEAD>\n\n<BODY>\n\n<HR>\n<H1>\n<A HREF=\"http://www.lua.org/\"><IMG SRC=\"logo.gif\" ALT=\"\" BORDER=0></A>\nLua 5.1 Reference Manual\n</H1>\n\n<P>\nThe reference manual is the official definition of the Lua language.\nFor a complete introduction to Lua programming, see the book\n<A HREF=\"http://www.lua.org/docs.html#pil\">Programming in Lua</A>.\n\n<P>\nThis manual is also available as a book:\n<BLOCKQUOTE>\n<A HREF=\"http://www.amazon.com/exec/obidos/ASIN/8590379833/lua-indexmanual-20\">\n<IMG SRC=\"cover.png\" ALT=\"\" TITLE=\"buy from Amazon\" BORDER=1 ALIGN=\"left\" HSPACE=12>\n</A>\n<B>Lua 5.1 Reference Manual</B>\n<BR>by R. Ierusalimschy, L. H. de Figueiredo, W. 
Celes\n<BR>Lua.org, August 2006\n<BR>ISBN 85-903798-3-3\n<BR CLEAR=\"all\">\n</BLOCKQUOTE>\n\n<P>\n<A HREF=\"http://www.amazon.com/exec/obidos/ASIN/8590379833/lua-indexmanual-20\">Buy a copy</A>\nof this book and\n<A HREF=\"http://www.lua.org/donations.html\">help to support</A>\nthe Lua project.\n\n<P>\n<A HREF=\"manual.html\">start</A>\n&middot;\n<A HREF=\"#contents\">contents</A>\n&middot;\n<A HREF=\"#index\">index</A>\n&middot;\n<A HREF=\"http://www.lua.org/manual/\">other versions</A>\n<HR>\n<SMALL>\nCopyright &copy; 2006&ndash;2012 Lua.org, PUC-Rio.\nFreely available under the terms of the\n<A HREF=\"http://www.lua.org/license.html\">Lua license</A>.\n</SMALL>\n\n<H2><A NAME=\"contents\">Contents</A></H2>\n<UL style=\"padding: 0\">\n<LI><A HREF=\"manual.html\">1 &ndash; Introduction</A>\n<P>\n<LI><A HREF=\"manual.html#2\">2 &ndash; The Language</A>\n<UL>\n<LI><A HREF=\"manual.html#2.1\">2.1 &ndash; Lexical Conventions</A>\n<LI><A HREF=\"manual.html#2.2\">2.2 &ndash; Values and Types</A>\n<UL>\n<LI><A HREF=\"manual.html#2.2.1\">2.2.1 &ndash; Coercion</A>\n</UL>\n<LI><A HREF=\"manual.html#2.3\">2.3 &ndash; Variables</A>\n<LI><A HREF=\"manual.html#2.4\">2.4 &ndash; Statements</A>\n<UL>\n<LI><A HREF=\"manual.html#2.4.1\">2.4.1 &ndash; Chunks</A>\n<LI><A HREF=\"manual.html#2.4.2\">2.4.2 &ndash; Blocks</A>\n<LI><A HREF=\"manual.html#2.4.3\">2.4.3 &ndash; Assignment</A>\n<LI><A HREF=\"manual.html#2.4.4\">2.4.4 &ndash; Control Structures</A>\n<LI><A HREF=\"manual.html#2.4.5\">2.4.5 &ndash; For Statement</A>\n<LI><A HREF=\"manual.html#2.4.6\">2.4.6 &ndash; Function Calls as Statements</A>\n<LI><A HREF=\"manual.html#2.4.7\">2.4.7 &ndash; Local Declarations</A>\n</UL>\n<LI><A HREF=\"manual.html#2.5\">2.5 &ndash; Expressions</A>\n<UL>\n<LI><A HREF=\"manual.html#2.5.1\">2.5.1 &ndash; Arithmetic Operators</A>\n<LI><A HREF=\"manual.html#2.5.2\">2.5.2 &ndash; Relational Operators</A>\n<LI><A HREF=\"manual.html#2.5.3\">2.5.3 &ndash; Logical Operators</A>\n<LI><A 
HREF=\"manual.html#2.5.4\">2.5.4 &ndash; Concatenation</A>\n<LI><A HREF=\"manual.html#2.5.5\">2.5.5 &ndash; The Length Operator</A>\n<LI><A HREF=\"manual.html#2.5.6\">2.5.6 &ndash; Precedence</A>\n<LI><A HREF=\"manual.html#2.5.7\">2.5.7 &ndash; Table Constructors</A>\n<LI><A HREF=\"manual.html#2.5.8\">2.5.8 &ndash; Function Calls</A>\n<LI><A HREF=\"manual.html#2.5.9\">2.5.9 &ndash; Function Definitions</A>\n</UL>\n<LI><A HREF=\"manual.html#2.6\">2.6 &ndash; Visibility Rules</A>\n<LI><A HREF=\"manual.html#2.7\">2.7 &ndash; Error Handling</A>\n<LI><A HREF=\"manual.html#2.8\">2.8 &ndash; Metatables</A>\n<LI><A HREF=\"manual.html#2.9\">2.9 &ndash; Environments</A>\n<LI><A HREF=\"manual.html#2.10\">2.10 &ndash; Garbage Collection</A>\n<UL>\n<LI><A HREF=\"manual.html#2.10.1\">2.10.1 &ndash; Garbage-Collection Metamethods</A>\n<LI><A HREF=\"manual.html#2.10.2\">2.10.2 &ndash; Weak Tables</A>\n</UL>\n<LI><A HREF=\"manual.html#2.11\">2.11 &ndash; Coroutines</A>\n</UL>\n<P>\n<LI><A HREF=\"manual.html#3\">3 &ndash; The Application Program Interface</A>\n<UL>\n<LI><A HREF=\"manual.html#3.1\">3.1 &ndash; The Stack</A>\n<LI><A HREF=\"manual.html#3.2\">3.2 &ndash; Stack Size</A>\n<LI><A HREF=\"manual.html#3.3\">3.3 &ndash; Pseudo-Indices</A>\n<LI><A HREF=\"manual.html#3.4\">3.4 &ndash; C Closures</A>\n<LI><A HREF=\"manual.html#3.5\">3.5 &ndash; Registry</A>\n<LI><A HREF=\"manual.html#3.6\">3.6 &ndash; Error Handling in C</A>\n<LI><A HREF=\"manual.html#3.7\">3.7 &ndash; Functions and Types</A>\n<LI><A HREF=\"manual.html#3.8\">3.8 &ndash; The Debug Interface</A>\n</UL>\n<P>\n<LI><A HREF=\"manual.html#4\">4 &ndash; The Auxiliary Library</A>\n<UL>\n<LI><A HREF=\"manual.html#4.1\">4.1 &ndash; Functions and Types</A>\n</UL>\n<P>\n<LI><A HREF=\"manual.html#5\">5 &ndash; Standard Libraries</A>\n<UL>\n<LI><A HREF=\"manual.html#5.1\">5.1 &ndash; Basic Functions</A>\n<LI><A HREF=\"manual.html#5.2\">5.2 &ndash; Coroutine Manipulation</A>\n<LI><A HREF=\"manual.html#5.3\">5.3 &ndash; 
Modules</A>\n<LI><A HREF=\"manual.html#5.4\">5.4 &ndash; String Manipulation</A>\n<UL>\n<LI><A HREF=\"manual.html#5.4.1\">5.4.1 &ndash; Patterns</A>\n</UL>\n<LI><A HREF=\"manual.html#5.5\">5.5 &ndash; Table Manipulation</A>\n<LI><A HREF=\"manual.html#5.6\">5.6 &ndash; Mathematical Functions</A>\n<LI><A HREF=\"manual.html#5.7\">5.7 &ndash; Input and Output Facilities</A>\n<LI><A HREF=\"manual.html#5.8\">5.8 &ndash; Operating System Facilities</A>\n<LI><A HREF=\"manual.html#5.9\">5.9 &ndash; The Debug Library</A>\n</UL>\n<P>\n<LI><A HREF=\"manual.html#6\">6 &ndash; Lua Stand-alone</A>\n<P>\n<LI><A HREF=\"manual.html#7\">7 &ndash; Incompatibilities with the Previous Version</A>\n<UL>\n<LI><A HREF=\"manual.html#7.1\">7.1 &ndash; Changes in the Language</A>\n<LI><A HREF=\"manual.html#7.2\">7.2 &ndash; Changes in the Libraries</A>\n<LI><A HREF=\"manual.html#7.3\">7.3 &ndash; Changes in the API</A>\n</UL>\n<P>\n<LI><A HREF=\"manual.html#8\">8 &ndash; The Complete Syntax of Lua</A>\n</UL>\n\n<H2><A NAME=\"index\">Index</A></H2>\n<TABLE WIDTH=\"100%\">\n<TR VALIGN=\"top\">\n<TD>\n<H3><A NAME=\"functions\">Lua functions</A></H3>\n<A HREF=\"manual.html#pdf-_G\">_G</A><BR>\n<A HREF=\"manual.html#pdf-_VERSION\">_VERSION</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-assert\">assert</A><BR>\n<A HREF=\"manual.html#pdf-collectgarbage\">collectgarbage</A><BR>\n<A HREF=\"manual.html#pdf-dofile\">dofile</A><BR>\n<A HREF=\"manual.html#pdf-error\">error</A><BR>\n<A HREF=\"manual.html#pdf-getfenv\">getfenv</A><BR>\n<A HREF=\"manual.html#pdf-getmetatable\">getmetatable</A><BR>\n<A HREF=\"manual.html#pdf-ipairs\">ipairs</A><BR>\n<A HREF=\"manual.html#pdf-load\">load</A><BR>\n<A HREF=\"manual.html#pdf-loadfile\">loadfile</A><BR>\n<A HREF=\"manual.html#pdf-loadstring\">loadstring</A><BR>\n<A HREF=\"manual.html#pdf-module\">module</A><BR>\n<A HREF=\"manual.html#pdf-next\">next</A><BR>\n<A HREF=\"manual.html#pdf-pairs\">pairs</A><BR>\n<A HREF=\"manual.html#pdf-pcall\">pcall</A><BR>\n<A 
HREF=\"manual.html#pdf-print\">print</A><BR>\n<A HREF=\"manual.html#pdf-rawequal\">rawequal</A><BR>\n<A HREF=\"manual.html#pdf-rawget\">rawget</A><BR>\n<A HREF=\"manual.html#pdf-rawset\">rawset</A><BR>\n<A HREF=\"manual.html#pdf-require\">require</A><BR>\n<A HREF=\"manual.html#pdf-select\">select</A><BR>\n<A HREF=\"manual.html#pdf-setfenv\">setfenv</A><BR>\n<A HREF=\"manual.html#pdf-setmetatable\">setmetatable</A><BR>\n<A HREF=\"manual.html#pdf-tonumber\">tonumber</A><BR>\n<A HREF=\"manual.html#pdf-tostring\">tostring</A><BR>\n<A HREF=\"manual.html#pdf-type\">type</A><BR>\n<A HREF=\"manual.html#pdf-unpack\">unpack</A><BR>\n<A HREF=\"manual.html#pdf-xpcall\">xpcall</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-coroutine.create\">coroutine.create</A><BR>\n<A HREF=\"manual.html#pdf-coroutine.resume\">coroutine.resume</A><BR>\n<A HREF=\"manual.html#pdf-coroutine.running\">coroutine.running</A><BR>\n<A HREF=\"manual.html#pdf-coroutine.status\">coroutine.status</A><BR>\n<A HREF=\"manual.html#pdf-coroutine.wrap\">coroutine.wrap</A><BR>\n<A HREF=\"manual.html#pdf-coroutine.yield\">coroutine.yield</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-debug.debug\">debug.debug</A><BR>\n<A HREF=\"manual.html#pdf-debug.getfenv\">debug.getfenv</A><BR>\n<A HREF=\"manual.html#pdf-debug.gethook\">debug.gethook</A><BR>\n<A HREF=\"manual.html#pdf-debug.getinfo\">debug.getinfo</A><BR>\n<A HREF=\"manual.html#pdf-debug.getlocal\">debug.getlocal</A><BR>\n<A HREF=\"manual.html#pdf-debug.getmetatable\">debug.getmetatable</A><BR>\n<A HREF=\"manual.html#pdf-debug.getregistry\">debug.getregistry</A><BR>\n<A HREF=\"manual.html#pdf-debug.getupvalue\">debug.getupvalue</A><BR>\n<A HREF=\"manual.html#pdf-debug.setfenv\">debug.setfenv</A><BR>\n<A HREF=\"manual.html#pdf-debug.sethook\">debug.sethook</A><BR>\n<A HREF=\"manual.html#pdf-debug.setlocal\">debug.setlocal</A><BR>\n<A HREF=\"manual.html#pdf-debug.setmetatable\">debug.setmetatable</A><BR>\n<A 
HREF=\"manual.html#pdf-debug.setupvalue\">debug.setupvalue</A><BR>\n<A HREF=\"manual.html#pdf-debug.traceback\">debug.traceback</A><BR>\n\n</TD>\n<TD>\n<H3>&nbsp;</H3>\n<A HREF=\"manual.html#pdf-file:close\">file:close</A><BR>\n<A HREF=\"manual.html#pdf-file:flush\">file:flush</A><BR>\n<A HREF=\"manual.html#pdf-file:lines\">file:lines</A><BR>\n<A HREF=\"manual.html#pdf-file:read\">file:read</A><BR>\n<A HREF=\"manual.html#pdf-file:seek\">file:seek</A><BR>\n<A HREF=\"manual.html#pdf-file:setvbuf\">file:setvbuf</A><BR>\n<A HREF=\"manual.html#pdf-file:write\">file:write</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-io.close\">io.close</A><BR>\n<A HREF=\"manual.html#pdf-io.flush\">io.flush</A><BR>\n<A HREF=\"manual.html#pdf-io.input\">io.input</A><BR>\n<A HREF=\"manual.html#pdf-io.lines\">io.lines</A><BR>\n<A HREF=\"manual.html#pdf-io.open\">io.open</A><BR>\n<A HREF=\"manual.html#pdf-io.output\">io.output</A><BR>\n<A HREF=\"manual.html#pdf-io.popen\">io.popen</A><BR>\n<A HREF=\"manual.html#pdf-io.read\">io.read</A><BR>\n<A HREF=\"manual.html#pdf-io.stderr\">io.stderr</A><BR>\n<A HREF=\"manual.html#pdf-io.stdin\">io.stdin</A><BR>\n<A HREF=\"manual.html#pdf-io.stdout\">io.stdout</A><BR>\n<A HREF=\"manual.html#pdf-io.tmpfile\">io.tmpfile</A><BR>\n<A HREF=\"manual.html#pdf-io.type\">io.type</A><BR>\n<A HREF=\"manual.html#pdf-io.write\">io.write</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-math.abs\">math.abs</A><BR>\n<A HREF=\"manual.html#pdf-math.acos\">math.acos</A><BR>\n<A HREF=\"manual.html#pdf-math.asin\">math.asin</A><BR>\n<A HREF=\"manual.html#pdf-math.atan\">math.atan</A><BR>\n<A HREF=\"manual.html#pdf-math.atan2\">math.atan2</A><BR>\n<A HREF=\"manual.html#pdf-math.ceil\">math.ceil</A><BR>\n<A HREF=\"manual.html#pdf-math.cos\">math.cos</A><BR>\n<A HREF=\"manual.html#pdf-math.cosh\">math.cosh</A><BR>\n<A HREF=\"manual.html#pdf-math.deg\">math.deg</A><BR>\n<A HREF=\"manual.html#pdf-math.exp\">math.exp</A><BR>\n<A HREF=\"manual.html#pdf-math.floor\">math.floor</A><BR>\n<A 
HREF=\"manual.html#pdf-math.fmod\">math.fmod</A><BR>\n<A HREF=\"manual.html#pdf-math.frexp\">math.frexp</A><BR>\n<A HREF=\"manual.html#pdf-math.huge\">math.huge</A><BR>\n<A HREF=\"manual.html#pdf-math.ldexp\">math.ldexp</A><BR>\n<A HREF=\"manual.html#pdf-math.log\">math.log</A><BR>\n<A HREF=\"manual.html#pdf-math.log10\">math.log10</A><BR>\n<A HREF=\"manual.html#pdf-math.max\">math.max</A><BR>\n<A HREF=\"manual.html#pdf-math.min\">math.min</A><BR>\n<A HREF=\"manual.html#pdf-math.modf\">math.modf</A><BR>\n<A HREF=\"manual.html#pdf-math.pi\">math.pi</A><BR>\n<A HREF=\"manual.html#pdf-math.pow\">math.pow</A><BR>\n<A HREF=\"manual.html#pdf-math.rad\">math.rad</A><BR>\n<A HREF=\"manual.html#pdf-math.random\">math.random</A><BR>\n<A HREF=\"manual.html#pdf-math.randomseed\">math.randomseed</A><BR>\n<A HREF=\"manual.html#pdf-math.sin\">math.sin</A><BR>\n<A HREF=\"manual.html#pdf-math.sinh\">math.sinh</A><BR>\n<A HREF=\"manual.html#pdf-math.sqrt\">math.sqrt</A><BR>\n<A HREF=\"manual.html#pdf-math.tan\">math.tan</A><BR>\n<A HREF=\"manual.html#pdf-math.tanh\">math.tanh</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-os.clock\">os.clock</A><BR>\n<A HREF=\"manual.html#pdf-os.date\">os.date</A><BR>\n<A HREF=\"manual.html#pdf-os.difftime\">os.difftime</A><BR>\n<A HREF=\"manual.html#pdf-os.execute\">os.execute</A><BR>\n<A HREF=\"manual.html#pdf-os.exit\">os.exit</A><BR>\n<A HREF=\"manual.html#pdf-os.getenv\">os.getenv</A><BR>\n<A HREF=\"manual.html#pdf-os.remove\">os.remove</A><BR>\n<A HREF=\"manual.html#pdf-os.rename\">os.rename</A><BR>\n<A HREF=\"manual.html#pdf-os.setlocale\">os.setlocale</A><BR>\n<A HREF=\"manual.html#pdf-os.time\">os.time</A><BR>\n<A HREF=\"manual.html#pdf-os.tmpname\">os.tmpname</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-package.cpath\">package.cpath</A><BR>\n<A HREF=\"manual.html#pdf-package.loaded\">package.loaded</A><BR>\n<A HREF=\"manual.html#pdf-package.loaders\">package.loaders</A><BR>\n<A HREF=\"manual.html#pdf-package.loadlib\">package.loadlib</A><BR>\n<A 
HREF=\"manual.html#pdf-package.path\">package.path</A><BR>\n<A HREF=\"manual.html#pdf-package.preload\">package.preload</A><BR>\n<A HREF=\"manual.html#pdf-package.seeall\">package.seeall</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-string.byte\">string.byte</A><BR>\n<A HREF=\"manual.html#pdf-string.char\">string.char</A><BR>\n<A HREF=\"manual.html#pdf-string.dump\">string.dump</A><BR>\n<A HREF=\"manual.html#pdf-string.find\">string.find</A><BR>\n<A HREF=\"manual.html#pdf-string.format\">string.format</A><BR>\n<A HREF=\"manual.html#pdf-string.gmatch\">string.gmatch</A><BR>\n<A HREF=\"manual.html#pdf-string.gsub\">string.gsub</A><BR>\n<A HREF=\"manual.html#pdf-string.len\">string.len</A><BR>\n<A HREF=\"manual.html#pdf-string.lower\">string.lower</A><BR>\n<A HREF=\"manual.html#pdf-string.match\">string.match</A><BR>\n<A HREF=\"manual.html#pdf-string.rep\">string.rep</A><BR>\n<A HREF=\"manual.html#pdf-string.reverse\">string.reverse</A><BR>\n<A HREF=\"manual.html#pdf-string.sub\">string.sub</A><BR>\n<A HREF=\"manual.html#pdf-string.upper\">string.upper</A><BR>\n<P>\n\n<A HREF=\"manual.html#pdf-table.concat\">table.concat</A><BR>\n<A HREF=\"manual.html#pdf-table.insert\">table.insert</A><BR>\n<A HREF=\"manual.html#pdf-table.maxn\">table.maxn</A><BR>\n<A HREF=\"manual.html#pdf-table.remove\">table.remove</A><BR>\n<A HREF=\"manual.html#pdf-table.sort\">table.sort</A><BR>\n\n</TD>\n<TD>\n<H3>C API</H3>\n<A HREF=\"manual.html#lua_Alloc\">lua_Alloc</A><BR>\n<A HREF=\"manual.html#lua_CFunction\">lua_CFunction</A><BR>\n<A HREF=\"manual.html#lua_Debug\">lua_Debug</A><BR>\n<A HREF=\"manual.html#lua_Hook\">lua_Hook</A><BR>\n<A HREF=\"manual.html#lua_Integer\">lua_Integer</A><BR>\n<A HREF=\"manual.html#lua_Number\">lua_Number</A><BR>\n<A HREF=\"manual.html#lua_Reader\">lua_Reader</A><BR>\n<A HREF=\"manual.html#lua_State\">lua_State</A><BR>\n<A HREF=\"manual.html#lua_Writer\">lua_Writer</A><BR>\n<P>\n\n<A HREF=\"manual.html#lua_atpanic\">lua_atpanic</A><BR>\n<A 
HREF=\"manual.html#lua_call\">lua_call</A><BR>\n<A HREF=\"manual.html#lua_checkstack\">lua_checkstack</A><BR>\n<A HREF=\"manual.html#lua_close\">lua_close</A><BR>\n<A HREF=\"manual.html#lua_concat\">lua_concat</A><BR>\n<A HREF=\"manual.html#lua_cpcall\">lua_cpcall</A><BR>\n<A HREF=\"manual.html#lua_createtable\">lua_createtable</A><BR>\n<A HREF=\"manual.html#lua_dump\">lua_dump</A><BR>\n<A HREF=\"manual.html#lua_equal\">lua_equal</A><BR>\n<A HREF=\"manual.html#lua_error\">lua_error</A><BR>\n<A HREF=\"manual.html#lua_gc\">lua_gc</A><BR>\n<A HREF=\"manual.html#lua_getallocf\">lua_getallocf</A><BR>\n<A HREF=\"manual.html#lua_getfenv\">lua_getfenv</A><BR>\n<A HREF=\"manual.html#lua_getfield\">lua_getfield</A><BR>\n<A HREF=\"manual.html#lua_getglobal\">lua_getglobal</A><BR>\n<A HREF=\"manual.html#lua_gethook\">lua_gethook</A><BR>\n<A HREF=\"manual.html#lua_gethookcount\">lua_gethookcount</A><BR>\n<A HREF=\"manual.html#lua_gethookmask\">lua_gethookmask</A><BR>\n<A HREF=\"manual.html#lua_getinfo\">lua_getinfo</A><BR>\n<A HREF=\"manual.html#lua_getlocal\">lua_getlocal</A><BR>\n<A HREF=\"manual.html#lua_getmetatable\">lua_getmetatable</A><BR>\n<A HREF=\"manual.html#lua_getstack\">lua_getstack</A><BR>\n<A HREF=\"manual.html#lua_gettable\">lua_gettable</A><BR>\n<A HREF=\"manual.html#lua_gettop\">lua_gettop</A><BR>\n<A HREF=\"manual.html#lua_getupvalue\">lua_getupvalue</A><BR>\n<A HREF=\"manual.html#lua_insert\">lua_insert</A><BR>\n<A HREF=\"manual.html#lua_isboolean\">lua_isboolean</A><BR>\n<A HREF=\"manual.html#lua_iscfunction\">lua_iscfunction</A><BR>\n<A HREF=\"manual.html#lua_isfunction\">lua_isfunction</A><BR>\n<A HREF=\"manual.html#lua_islightuserdata\">lua_islightuserdata</A><BR>\n<A HREF=\"manual.html#lua_isnil\">lua_isnil</A><BR>\n<A HREF=\"manual.html#lua_isnone\">lua_isnone</A><BR>\n<A HREF=\"manual.html#lua_isnoneornil\">lua_isnoneornil</A><BR>\n<A HREF=\"manual.html#lua_isnumber\">lua_isnumber</A><BR>\n<A HREF=\"manual.html#lua_isstring\">lua_isstring</A><BR>\n<A 
HREF=\"manual.html#lua_istable\">lua_istable</A><BR>\n<A HREF=\"manual.html#lua_isthread\">lua_isthread</A><BR>\n<A HREF=\"manual.html#lua_isuserdata\">lua_isuserdata</A><BR>\n<A HREF=\"manual.html#lua_lessthan\">lua_lessthan</A><BR>\n<A HREF=\"manual.html#lua_load\">lua_load</A><BR>\n<A HREF=\"manual.html#lua_newstate\">lua_newstate</A><BR>\n<A HREF=\"manual.html#lua_newtable\">lua_newtable</A><BR>\n<A HREF=\"manual.html#lua_newthread\">lua_newthread</A><BR>\n<A HREF=\"manual.html#lua_newuserdata\">lua_newuserdata</A><BR>\n<A HREF=\"manual.html#lua_next\">lua_next</A><BR>\n<A HREF=\"manual.html#lua_objlen\">lua_objlen</A><BR>\n<A HREF=\"manual.html#lua_pcall\">lua_pcall</A><BR>\n<A HREF=\"manual.html#lua_pop\">lua_pop</A><BR>\n<A HREF=\"manual.html#lua_pushboolean\">lua_pushboolean</A><BR>\n<A HREF=\"manual.html#lua_pushcclosure\">lua_pushcclosure</A><BR>\n<A HREF=\"manual.html#lua_pushcfunction\">lua_pushcfunction</A><BR>\n<A HREF=\"manual.html#lua_pushfstring\">lua_pushfstring</A><BR>\n<A HREF=\"manual.html#lua_pushinteger\">lua_pushinteger</A><BR>\n<A HREF=\"manual.html#lua_pushlightuserdata\">lua_pushlightuserdata</A><BR>\n<A HREF=\"manual.html#lua_pushliteral\">lua_pushliteral</A><BR>\n<A HREF=\"manual.html#lua_pushlstring\">lua_pushlstring</A><BR>\n<A HREF=\"manual.html#lua_pushnil\">lua_pushnil</A><BR>\n<A HREF=\"manual.html#lua_pushnumber\">lua_pushnumber</A><BR>\n<A HREF=\"manual.html#lua_pushstring\">lua_pushstring</A><BR>\n<A HREF=\"manual.html#lua_pushthread\">lua_pushthread</A><BR>\n<A HREF=\"manual.html#lua_pushvalue\">lua_pushvalue</A><BR>\n<A HREF=\"manual.html#lua_pushvfstring\">lua_pushvfstring</A><BR>\n<A HREF=\"manual.html#lua_rawequal\">lua_rawequal</A><BR>\n<A HREF=\"manual.html#lua_rawget\">lua_rawget</A><BR>\n<A HREF=\"manual.html#lua_rawgeti\">lua_rawgeti</A><BR>\n<A HREF=\"manual.html#lua_rawset\">lua_rawset</A><BR>\n<A HREF=\"manual.html#lua_rawseti\">lua_rawseti</A><BR>\n<A HREF=\"manual.html#lua_register\">lua_register</A><BR>\n<A 
HREF=\"manual.html#lua_remove\">lua_remove</A><BR>\n<A HREF=\"manual.html#lua_replace\">lua_replace</A><BR>\n<A HREF=\"manual.html#lua_resume\">lua_resume</A><BR>\n<A HREF=\"manual.html#lua_setallocf\">lua_setallocf</A><BR>\n<A HREF=\"manual.html#lua_setfenv\">lua_setfenv</A><BR>\n<A HREF=\"manual.html#lua_setfield\">lua_setfield</A><BR>\n<A HREF=\"manual.html#lua_setglobal\">lua_setglobal</A><BR>\n<A HREF=\"manual.html#lua_sethook\">lua_sethook</A><BR>\n<A HREF=\"manual.html#lua_setlocal\">lua_setlocal</A><BR>\n<A HREF=\"manual.html#lua_setmetatable\">lua_setmetatable</A><BR>\n<A HREF=\"manual.html#lua_settable\">lua_settable</A><BR>\n<A HREF=\"manual.html#lua_settop\">lua_settop</A><BR>\n<A HREF=\"manual.html#lua_setupvalue\">lua_setupvalue</A><BR>\n<A HREF=\"manual.html#lua_status\">lua_status</A><BR>\n<A HREF=\"manual.html#lua_toboolean\">lua_toboolean</A><BR>\n<A HREF=\"manual.html#lua_tocfunction\">lua_tocfunction</A><BR>\n<A HREF=\"manual.html#lua_tointeger\">lua_tointeger</A><BR>\n<A HREF=\"manual.html#lua_tolstring\">lua_tolstring</A><BR>\n<A HREF=\"manual.html#lua_tonumber\">lua_tonumber</A><BR>\n<A HREF=\"manual.html#lua_topointer\">lua_topointer</A><BR>\n<A HREF=\"manual.html#lua_tostring\">lua_tostring</A><BR>\n<A HREF=\"manual.html#lua_tothread\">lua_tothread</A><BR>\n<A HREF=\"manual.html#lua_touserdata\">lua_touserdata</A><BR>\n<A HREF=\"manual.html#lua_type\">lua_type</A><BR>\n<A HREF=\"manual.html#lua_typename\">lua_typename</A><BR>\n<A HREF=\"manual.html#lua_upvalueindex\">lua_upvalueindex</A><BR>\n<A HREF=\"manual.html#lua_xmove\">lua_xmove</A><BR>\n<A HREF=\"manual.html#lua_yield\">lua_yield</A><BR>\n\n</TD>\n<TD>\n<H3>auxiliary library</H3>\n<A HREF=\"manual.html#luaL_Buffer\">luaL_Buffer</A><BR>\n<A HREF=\"manual.html#luaL_Reg\">luaL_Reg</A><BR>\n<P>\n\n<A HREF=\"manual.html#luaL_addchar\">luaL_addchar</A><BR>\n<A HREF=\"manual.html#luaL_addlstring\">luaL_addlstring</A><BR>\n<A HREF=\"manual.html#luaL_addsize\">luaL_addsize</A><BR>\n<A 
HREF=\"manual.html#luaL_addstring\">luaL_addstring</A><BR>\n<A HREF=\"manual.html#luaL_addvalue\">luaL_addvalue</A><BR>\n<A HREF=\"manual.html#luaL_argcheck\">luaL_argcheck</A><BR>\n<A HREF=\"manual.html#luaL_argerror\">luaL_argerror</A><BR>\n<A HREF=\"manual.html#luaL_buffinit\">luaL_buffinit</A><BR>\n<A HREF=\"manual.html#luaL_callmeta\">luaL_callmeta</A><BR>\n<A HREF=\"manual.html#luaL_checkany\">luaL_checkany</A><BR>\n<A HREF=\"manual.html#luaL_checkint\">luaL_checkint</A><BR>\n<A HREF=\"manual.html#luaL_checkinteger\">luaL_checkinteger</A><BR>\n<A HREF=\"manual.html#luaL_checklong\">luaL_checklong</A><BR>\n<A HREF=\"manual.html#luaL_checklstring\">luaL_checklstring</A><BR>\n<A HREF=\"manual.html#luaL_checknumber\">luaL_checknumber</A><BR>\n<A HREF=\"manual.html#luaL_checkoption\">luaL_checkoption</A><BR>\n<A HREF=\"manual.html#luaL_checkstack\">luaL_checkstack</A><BR>\n<A HREF=\"manual.html#luaL_checkstring\">luaL_checkstring</A><BR>\n<A HREF=\"manual.html#luaL_checktype\">luaL_checktype</A><BR>\n<A HREF=\"manual.html#luaL_checkudata\">luaL_checkudata</A><BR>\n<A HREF=\"manual.html#luaL_dofile\">luaL_dofile</A><BR>\n<A HREF=\"manual.html#luaL_dostring\">luaL_dostring</A><BR>\n<A HREF=\"manual.html#luaL_error\">luaL_error</A><BR>\n<A HREF=\"manual.html#luaL_getmetafield\">luaL_getmetafield</A><BR>\n<A HREF=\"manual.html#luaL_getmetatable\">luaL_getmetatable</A><BR>\n<A HREF=\"manual.html#luaL_gsub\">luaL_gsub</A><BR>\n<A HREF=\"manual.html#luaL_loadbuffer\">luaL_loadbuffer</A><BR>\n<A HREF=\"manual.html#luaL_loadfile\">luaL_loadfile</A><BR>\n<A HREF=\"manual.html#luaL_loadstring\">luaL_loadstring</A><BR>\n<A HREF=\"manual.html#luaL_newmetatable\">luaL_newmetatable</A><BR>\n<A HREF=\"manual.html#luaL_newstate\">luaL_newstate</A><BR>\n<A HREF=\"manual.html#luaL_openlibs\">luaL_openlibs</A><BR>\n<A HREF=\"manual.html#luaL_optint\">luaL_optint</A><BR>\n<A HREF=\"manual.html#luaL_optinteger\">luaL_optinteger</A><BR>\n<A 
HREF=\"manual.html#luaL_optlong\">luaL_optlong</A><BR>\n<A HREF=\"manual.html#luaL_optlstring\">luaL_optlstring</A><BR>\n<A HREF=\"manual.html#luaL_optnumber\">luaL_optnumber</A><BR>\n<A HREF=\"manual.html#luaL_optstring\">luaL_optstring</A><BR>\n<A HREF=\"manual.html#luaL_prepbuffer\">luaL_prepbuffer</A><BR>\n<A HREF=\"manual.html#luaL_pushresult\">luaL_pushresult</A><BR>\n<A HREF=\"manual.html#luaL_ref\">luaL_ref</A><BR>\n<A HREF=\"manual.html#luaL_register\">luaL_register</A><BR>\n<A HREF=\"manual.html#luaL_typename\">luaL_typename</A><BR>\n<A HREF=\"manual.html#luaL_typerror\">luaL_typerror</A><BR>\n<A HREF=\"manual.html#luaL_unref\">luaL_unref</A><BR>\n<A HREF=\"manual.html#luaL_where\">luaL_where</A><BR>\n\n</TD>\n</TR>\n</TABLE>\n<P>\n\n<HR>\n<SMALL CLASS=\"footer\">\nLast update:\nMon Feb 13 18:53:32 BRST 2012\n</SMALL>\n<!--\nLast change: revised for Lua 5.1.5\n-->\n\n</BODY>\n</HTML>\n"
  },
  {
    "path": "deps/lua/doc/lua.1",
    "content": ".\\\" $Id: lua.man,v 1.11 2006/01/06 16:03:34 lhf Exp $\n.TH LUA 1 \"$Date: 2006/01/06 16:03:34 $\"\n.SH NAME\nlua \\- Lua interpreter\n.SH SYNOPSIS\n.B lua\n[\n.I options\n]\n[\n.I script\n[\n.I args\n]\n]\n.SH DESCRIPTION\n.B lua\nis the stand-alone Lua interpreter.\nIt loads and executes Lua programs,\neither in textual source form or\nin precompiled binary form.\n(Precompiled binaries are output by\n.BR luac ,\nthe Lua compiler.)\n.B lua\ncan be used as a batch interpreter and also interactively.\n.LP\nThe given\n.I options\n(see below)\nare executed and then\nthe Lua program in file\n.I script\nis loaded and executed.\nThe given\n.I args\nare available to\n.I script\nas strings in a global table named\n.BR arg .\nIf these arguments contain spaces or other characters special to the shell,\nthen they should be quoted\n(but note that the quotes will be removed by the shell).\nThe arguments in\n.B arg\nstart at 0,\nwhich contains the string\n.RI ' script '.\nThe index of the last argument is stored in\n.BR arg.n .\nThe arguments given in the command line before\n.IR script ,\nincluding the name of the interpreter,\nare available in negative indices in\n.BR arg .\n.LP\nAt the very start,\nbefore even handling the command line,\n.B lua\nexecutes the contents of the environment variable\n.BR LUA_INIT ,\nif it is defined.\nIf the value of\n.B LUA_INIT\nis of the form\n.RI '@ filename ',\nthen\n.I filename\nis executed.\nOtherwise, the string is assumed to be a Lua statement and is executed.\n.LP\nOptions start with\n.B '\\-'\nand are described below.\nYou can use\n.B \"'\\--'\"\nto signal the end of options.\n.LP\nIf no arguments are given,\nthen\n.B \"\\-v \\-i\"\nis assumed when the standard input is a terminal;\notherwise,\n.B \"\\-\"\nis assumed.\n.LP\nIn interactive mode,\n.B lua\nprompts the user,\nreads lines from the standard input,\nand executes them as they are read.\nIf a line does not contain a complete statement,\nthen a secondary prompt 
is displayed and\nlines are read until a complete statement is formed or\na syntax error is found.\nSo, one way to interrupt the reading of an incomplete statement is\nto force a syntax error:\nadding a\n.B ';' \nin the middle of a statement is a sure way of forcing a syntax error\n(except inside multiline strings and comments; these must be closed explicitly).\nIf a line starts with\n.BR '=' ,\nthen\n.B lua\ndisplays the values of all the expressions in the remainder of the\nline. The expressions must be separated by commas.\nThe primary prompt is the value of the global variable\n.BR _PROMPT ,\nif this value is a string;\notherwise, the default prompt is used.\nSimilarly, the secondary prompt is the value of the global variable\n.BR _PROMPT2 .\nSo,\nto change the prompts,\nset the corresponding variable to a string of your choice.\nYou can do that after calling the interpreter\nor on the command line\n(but in this case you have to be careful with quotes\nif the prompt string contains a space; otherwise you may confuse the shell.)\nThe default prompts are \"> \" and \">> \".\n.SH OPTIONS\n.TP\n.B \\-\nload and execute the standard input as a file,\nthat is,\nnot interactively,\neven when the standard input is a terminal.\n.TP\n.BI \\-e \" stat\"\nexecute statement\n.IR stat .\nYou need to quote\n.I stat \nif it contains spaces, quotes,\nor other characters special to the shell.\n.TP\n.B \\-i\nenter interactive mode after\n.I script\nis executed.\n.TP\n.BI \\-l \" name\"\ncall\n.BI require(' name ')\nbefore executing\n.IR script .\nTypically used to load libraries.\n.TP\n.B \\-v\nshow version information.\n.SH \"SEE ALSO\"\n.BR luac (1)\n.br\nhttp://www.lua.org/\n.SH DIAGNOSTICS\nError messages should be self explanatory.\n.SH AUTHORS\nR. Ierusalimschy,\nL. H. de Figueiredo,\nand\nW. Celes\n.\\\" EOF\n"
  },
  {
    "path": "deps/lua/doc/lua.css",
    "content": "body {\n\tcolor: #000000 ;\n\tbackground-color: #FFFFFF ;\n\tfont-family: Helvetica, Arial, sans-serif ;\n\ttext-align: justify ;\n\tmargin-right: 30px ;\n\tmargin-left: 30px ;\n}\n\nh1, h2, h3, h4 {\n\tfont-family: Verdana, Geneva, sans-serif ;\n\tfont-weight: normal ;\n\tfont-style: italic ;\n}\n\nh2 {\n\tpadding-top: 0.4em ;\n\tpadding-bottom: 0.4em ;\n\tpadding-left: 30px ;\n\tpadding-right: 30px ;\n\tmargin-left: -30px ;\n\tbackground-color: #E0E0FF ;\n}\n\nh3 {\n\tpadding-left: 0.5em ;\n\tborder-left: solid #E0E0FF 1em ;\n}\n\ntable h3 {\n\tpadding-left: 0px ;\n\tborder-left: none ;\n}\n\na:link {\n\tcolor: #000080 ;\n\tbackground-color: inherit ;\n\ttext-decoration: none ;\n}\n\na:visited {\n\tbackground-color: inherit ;\n\ttext-decoration: none ;\n}\n\na:link:hover, a:visited:hover {\n\tcolor: #000080 ;\n\tbackground-color: #E0E0FF ;\n}\n\na:link:active, a:visited:active {\n\tcolor: #FF0000 ;\n}\n\nhr {\n\tborder: 0 ;\n\theight: 1px ;\n\tcolor: #a0a0a0 ;\n\tbackground-color: #a0a0a0 ;\n}\n\n:target {\n\tbackground-color: #F8F8F8 ;\n\tpadding: 8px ;\n\tborder: solid #a0a0a0 2px ;\n}\n\n.footer {\n\tcolor: gray ;\n\tfont-size: small ;\n}\n\ninput[type=text] {\n\tborder: solid #a0a0a0 2px ;\n\tborder-radius: 2em ;\n\t-moz-border-radius: 2em ;\n\tbackground-image: url('images/search.png') ;\n\tbackground-repeat: no-repeat;\n\tbackground-position: 4px center ;\n\tpadding-left: 20px ;\n\theight: 2em ;\n}\n\n"
  },
  {
    "path": "deps/lua/doc/lua.html",
    "content": "<!-- $Id: lua.man,v 1.11 2006/01/06 16:03:34 lhf Exp $ -->\n<HTML>\n<HEAD>\n<TITLE>LUA man page</TITLE>\n<LINK REL=\"stylesheet\" TYPE=\"text/css\" HREF=\"lua.css\">\n</HEAD>\n\n<BODY BGCOLOR=\"#FFFFFF\">\n\n<H2>NAME</H2>\nlua - Lua interpreter\n<H2>SYNOPSIS</H2>\n<B>lua</B>\n[\n<I>options</I>\n]\n[\n<I>script</I>\n[\n<I>args</I>\n]\n]\n<H2>DESCRIPTION</H2>\n<B>lua</B>\nis the stand-alone Lua interpreter.\nIt loads and executes Lua programs,\neither in textual source form or\nin precompiled binary form.\n(Precompiled binaries are output by\n<B>luac</B>,\nthe Lua compiler.)\n<B>lua</B>\ncan be used as a batch interpreter and also interactively.\n<P>\nThe given\n<I>options</I>\n(see below)\nare executed and then\nthe Lua program in file\n<I>script</I>\nis loaded and executed.\nThe given\n<I>args</I>\nare available to\n<I>script</I>\nas strings in a global table named\n<B>arg</B>.\nIf these arguments contain spaces or other characters special to the shell,\nthen they should be quoted\n(but note that the quotes will be removed by the shell).\nThe arguments in\n<B>arg</B>\nstart at 0,\nwhich contains the string\n'<I>script</I>'.\nThe index of the last argument is stored in\n<B>arg.n</B>.\nThe arguments given in the command line before\n<I>script</I>,\nincluding the name of the interpreter,\nare available in negative indices in\n<B>arg</B>.\n<P>\nAt the very start,\nbefore even handling the command line,\n<B>lua</B>\nexecutes the contents of the environment variable\n<B>LUA_INIT</B>,\nif it is defined.\nIf the value of\n<B>LUA_INIT</B>\nis of the form\n'@<I>filename</I>',\nthen\n<I>filename</I>\nis executed.\nOtherwise, the string is assumed to be a Lua statement and is executed.\n<P>\nOptions start with\n<B>'-'</B>\nand are described below.\nYou can use\n<B>'--'</B>\nto signal the end of options.\n<P>\nIf no arguments are given,\nthen\n<B>\"-v -i\"</B>\nis assumed when the standard input is a terminal;\notherwise,\n<B>\"-\"</B>\nis assumed.\n<P>\nIn 
interactive mode,\n<B>lua</B>\nprompts the user,\nreads lines from the standard input,\nand executes them as they are read.\nIf a line does not contain a complete statement,\nthen a secondary prompt is displayed and\nlines are read until a complete statement is formed or\na syntax error is found.\nSo, one way to interrupt the reading of an incomplete statement is\nto force a syntax error:\nadding a\n<B>';'</B>\nin the middle of a statement is a sure way of forcing a syntax error\n(except inside multiline strings and comments; these must be closed explicitly).\nIf a line starts with\n<B>'='</B>,\nthen\n<B>lua</B>\ndisplays the values of all the expressions in the remainder of the\nline. The expressions must be separated by commas.\nThe primary prompt is the value of the global variable\n<B>_PROMPT</B>,\nif this value is a string;\notherwise, the default prompt is used.\nSimilarly, the secondary prompt is the value of the global variable\n<B>_PROMPT2</B>.\nSo,\nto change the prompts,\nset the corresponding variable to a string of your choice.\nYou can do that after calling the interpreter\nor on the command line\n(but in this case you have to be careful with quotes\nif the prompt string contains a space; otherwise you may confuse the shell.)\nThe default prompts are \"&gt; \" and \"&gt;&gt; \".\n<H2>OPTIONS</H2>\n<P>\n<B>-</B>\nload and execute the standard input as a file,\nthat is,\nnot interactively,\neven when the standard input is a terminal.\n<P>\n<B>-e </B><I>stat</I>\nexecute statement\n<I>stat</I>.\nYou need to quote\n<I>stat </I>\nif it contains spaces, quotes,\nor other characters special to the shell.\n<P>\n<B>-i</B>\nenter interactive mode after\n<I>script</I>\nis executed.\n<P>\n<B>-l </B><I>name</I>\ncall\n<B>require</B>('<I>name</I>')\nbefore executing\n<I>script</I>.\nTypically used to load libraries.\n<P>\n<B>-v</B>\nshow version information.\n<H2>SEE ALSO</H2>\n<B>luac</B>(1)\n<BR>\n<A 
HREF=\"http://www.lua.org/\">http://www.lua.org/</A>\n<H2>DIAGNOSTICS</H2>\nError messages should be self explanatory.\n<H2>AUTHORS</H2>\nR. Ierusalimschy,\nL. H. de Figueiredo,\nand\nW. Celes\n<!-- EOF -->\n</BODY>\n</HTML>\n"
  },
  {
    "path": "deps/lua/doc/luac.1",
    "content": ".\\\" $Id: luac.man,v 1.28 2006/01/06 16:03:34 lhf Exp $\n.TH LUAC 1 \"$Date: 2006/01/06 16:03:34 $\"\n.SH NAME\nluac \\- Lua compiler\n.SH SYNOPSIS\n.B luac\n[\n.I options\n] [\n.I filenames\n]\n.SH DESCRIPTION\n.B luac\nis the Lua compiler.\nIt translates programs written in the Lua programming language\ninto binary files that can be later loaded and executed.\n.LP\nThe main advantages of precompiling chunks are:\nfaster loading,\nprotecting source code from accidental user changes,\nand\noff-line syntax checking.\n.LP\nPre-compiling does not imply faster execution\nbecause in Lua chunks are always compiled into bytecodes before being executed.\n.B luac\nsimply allows those bytecodes to be saved in a file for later execution.\n.LP\nPre-compiled chunks are not necessarily smaller than the corresponding source.\nThe main goal in pre-compiling is faster loading.\n.LP\nThe binary files created by\n.B luac\nare portable only among architectures with the same word size and byte order.\n.LP\n.B luac\nproduces a single output file containing the bytecodes\nfor all source files given.\nBy default,\nthe output file is named\n.BR luac.out ,\nbut you can change this with the\n.B \\-o\noption.\n.LP\nIn the command line,\nyou can mix\ntext files containing Lua source and\nbinary files containing precompiled chunks.\nThis is useful to combine several precompiled chunks,\neven from different (but compatible) platforms,\ninto a single precompiled chunk.\n.LP\nYou can use\n.B \"'\\-'\"\nto indicate the standard input as a source file\nand\n.B \"'\\--'\"\nto signal the end of options\n(that is,\nall remaining arguments will be treated as files even if they start with\n.BR \"'\\-'\" ).\n.LP\nThe internal format of the binary files produced by\n.B luac\nis likely to change when a new version of Lua is released.\nSo,\nsave the source files of all Lua programs that you precompile.\n.LP\n.SH OPTIONS\nOptions must be separate.\n.TP\n.B \\-l\nproduce a listing of the 
compiled bytecode for Lua's virtual machine.\nListing bytecodes is useful to learn about Lua's virtual machine.\nIf no files are given, then\n.B luac\nloads\n.B luac.out\nand lists its contents.\n.TP\n.BI \\-o \" file\"\noutput to\n.IR file ,\ninstead of the default\n.BR luac.out .\n(You can use\n.B \"'\\-'\"\nfor standard output,\nbut not on platforms that open standard output in text mode.)\nThe output file may be a source file because\nall files are loaded before the output file is written.\nBe careful not to overwrite precious files.\n.TP\n.B \\-p\nload files but do not generate any output file.\nUsed mainly for syntax checking and for testing precompiled chunks:\ncorrupted files will probably generate errors when loaded.\nLua always performs a thorough integrity test on precompiled chunks.\nBytecode that passes this test is completely safe,\nin the sense that it will not break the interpreter.\nHowever,\nthere is no guarantee that such code does anything sensible.\n(None can be given, because the halting problem is unsolvable.)\nIf no files are given, then\n.B luac\nloads\n.B luac.out\nand tests its contents.\nNo messages are displayed if the file passes the integrity test.\n.TP\n.B \\-s\nstrip debug information before writing the output file.\nThis saves some space in very large chunks,\nbut if errors occur when running a stripped chunk,\nthen the error messages may not contain the full information they usually do.\nFor instance,\nline numbers and names of local variables are lost.\n.TP\n.B \\-v\nshow version information.\n.SH FILES\n.TP 15\n.B luac.out\ndefault output file\n.SH \"SEE ALSO\"\n.BR lua (1)\n.br\nhttp://www.lua.org/\n.SH DIAGNOSTICS\nError messages should be self explanatory.\n.SH AUTHORS\nL. H. de Figueiredo,\nR. Ierusalimschy and\nW. Celes\n.\\\" EOF\n"
  },
  {
    "path": "deps/lua/doc/luac.html",
    "content": "<!-- $Id: luac.man,v 1.28 2006/01/06 16:03:34 lhf Exp $ -->\n<HTML>\n<HEAD>\n<TITLE>LUAC man page</TITLE>\n<LINK REL=\"stylesheet\" TYPE=\"text/css\" HREF=\"lua.css\">\n</HEAD>\n\n<BODY BGCOLOR=\"#FFFFFF\">\n\n<H2>NAME</H2>\nluac - Lua compiler\n<H2>SYNOPSIS</H2>\n<B>luac</B>\n[\n<I>options</I>\n] [\n<I>filenames</I>\n]\n<H2>DESCRIPTION</H2>\n<B>luac</B>\nis the Lua compiler.\nIt translates programs written in the Lua programming language\ninto binary files that can be later loaded and executed.\n<P>\nThe main advantages of precompiling chunks are:\nfaster loading,\nprotecting source code from accidental user changes,\nand\noff-line syntax checking.\n<P>\nPrecompiling does not imply faster execution\nbecause in Lua chunks are always compiled into bytecodes before being executed.\n<B>luac</B>\nsimply allows those bytecodes to be saved in a file for later execution.\n<P>\nPrecompiled chunks are not necessarily smaller than the corresponding source.\nThe main goal in precompiling is faster loading.\n<P>\nThe binary files created by\n<B>luac</B>\nare portable only among architectures with the same word size and byte order.\n<P>\n<B>luac</B>\nproduces a single output file containing the bytecodes\nfor all source files given.\nBy default,\nthe output file is named\n<B>luac.out</B>,\nbut you can change this with the\n<B>-o</B>\noption.\n<P>\nIn the command line,\nyou can mix\ntext files containing Lua source and\nbinary files containing precompiled chunks.\nThis is useful because several precompiled chunks,\neven from different (but compatible) platforms,\ncan be combined into a single precompiled chunk.\n<P>\nYou can use\n<B>'-'</B>\nto indicate the standard input as a source file\nand\n<B>'--'</B>\nto signal the end of options\n(that is,\nall remaining arguments will be treated as files even if they start with\n<B>'-'</B>).\n<P>\nThe internal format of the binary files produced by\n<B>luac</B>\nis likely to change when a new version of Lua is 
released.\nSo,\nsave the source files of all Lua programs that you precompile.\n<P>\n<H2>OPTIONS</H2>\nOptions must be separate.\n<P>\n<B>-l</B>\nproduce a listing of the compiled bytecode for Lua's virtual machine.\nListing bytecodes is useful to learn about Lua's virtual machine.\nIf no files are given, then\n<B>luac</B>\nloads\n<B>luac.out</B>\nand lists its contents.\n<P>\n<B>-o </B><I>file</I>\noutput to\n<I>file</I>,\ninstead of the default\n<B>luac.out</B>.\n(You can use\n<B>'-'</B>\nfor standard output,\nbut not on platforms that open standard output in text mode.)\nThe output file may be a source file because\nall files are loaded before the output file is written.\nBe careful not to overwrite precious files.\n<P>\n<B>-p</B>\nload files but do not generate any output file.\nUsed mainly for syntax checking and for testing precompiled chunks:\ncorrupted files will probably generate errors when loaded.\nLua always performs a thorough integrity test on precompiled chunks.\nBytecode that passes this test is completely safe,\nin the sense that it will not break the interpreter.\nHowever,\nthere is no guarantee that such code does anything sensible.\n(None can be given, because the halting problem is unsolvable.)\nIf no files are given, then\n<B>luac</B>\nloads\n<B>luac.out</B>\nand tests its contents.\nNo messages are displayed if the file passes the integrity test.\n<P>\n<B>-s</B>\nstrip debug information before writing the output file.\nThis saves some space in very large chunks,\nbut if errors occur when running a stripped chunk,\nthen the error messages may not contain the full information they usually do.\nFor instance,\nline numbers and names of local variables are lost.\n<P>\n<B>-v</B>\nshow version information.\n<H2>FILES</H2>\n<P>\n<B>luac.out</B>\ndefault output file\n<H2>SEE ALSO</H2>\n<B>lua</B>(1)\n<BR>\n<A HREF=\"http://www.lua.org/\">http://www.lua.org/</A>\n<H2>DIAGNOSTICS</H2>\nError messages should be self explanatory.\n<H2>AUTHORS</H2>\nL. H. 
de Figueiredo,\nR. Ierusalimschy and\nW. Celes\n<!-- EOF -->\n</BODY>\n</HTML>\n"
  },
  {
    "path": "deps/lua/doc/manual.css",
    "content": "h3 code {\n\tfont-family: inherit ;\n\tfont-size: inherit ;\n}\n\npre, code {\n\tfont-size: 12pt ;\n}\n\nspan.apii {\n\tfloat: right ;\n\tfont-family: inherit ;\n\tfont-style: normal ;\n\tfont-size: small ;\n\tcolor: gray ;\n}\n\np+h1, ul+h1 {\n\tpadding-top: 0.4em ;\n\tpadding-bottom: 0.4em ;\n\tpadding-left: 30px ;\n\tmargin-left: -30px ;\n\tbackground-color: #E0E0FF ;\n}\n"
  },
  {
    "path": "deps/lua/doc/manual.html",
    "content": "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n<html>\n\n<head>\n<title>Lua 5.1 Reference Manual</title>\n<link rel=\"stylesheet\" type=\"text/css\" href=\"lua.css\">\n<link rel=\"stylesheet\" type=\"text/css\" href=\"manual.css\">\n<META HTTP-EQUIV=\"content-type\" CONTENT=\"text/html; charset=iso-8859-1\">\n</head>\n\n<body>\n\n<hr>\n<h1>\n<a href=\"http://www.lua.org/\"><img src=\"logo.gif\" alt=\"\" border=\"0\"></a>\nLua 5.1 Reference Manual\n</h1>\n\nby Roberto Ierusalimschy, Luiz Henrique de Figueiredo, Waldemar Celes\n<p>\n<small>\nCopyright &copy; 2006&ndash;2012 Lua.org, PUC-Rio.\nFreely available under the terms of the\n<a href=\"http://www.lua.org/license.html\">Lua license</a>.\n</small>\n<hr>\n<p>\n\n<a href=\"contents.html#contents\">contents</A>\n&middot;\n<a href=\"contents.html#index\">index</A>\n&middot;\n<A HREF=\"http://www.lua.org/manual/\">other versions</A>\n\n<!-- ====================================================================== -->\n<p>\n\n<!-- $Id: manual.of,v 1.49.1.2 2012/01/13 20:23:26 roberto Exp $ -->\n\n\n\n\n<h1>1 - <a name=\"1\">Introduction</a></h1>\n\n<p>\nLua is an extension programming language designed to support\ngeneral procedural programming with data description\nfacilities.\nIt also offers good support for object-oriented programming,\nfunctional programming, and data-driven programming.\nLua is intended to be used as a powerful, light-weight\nscripting language for any program that needs one.\nLua is implemented as a library, written in <em>clean</em> C\n(that is, in the common subset of ANSI&nbsp;C and C++).\n\n\n<p>\nBeing an extension language, Lua has no notion of a \"main\" program:\nit only works <em>embedded</em> in a host client,\ncalled the <em>embedding program</em> or simply the <em>host</em>.\nThis host program can invoke functions to execute a piece of Lua code,\ncan write and read Lua variables,\nand can register C&nbsp;functions to be called by Lua code.\nThrough 
the use of C&nbsp;functions, Lua can be augmented to cope with\na wide range of different domains,\nthus creating customized programming languages sharing a syntactical framework.\nThe Lua distribution includes a sample host program called <code>lua</code>,\nwhich uses the Lua library to offer a complete, stand-alone Lua interpreter.\n\n\n<p>\nLua is free software,\nand is provided as usual with no guarantees,\nas stated in its license.\nThe implementation described in this manual is available\nat Lua's official web site, <code>www.lua.org</code>.\n\n\n<p>\nLike any other reference manual,\nthis document is dry in places.\nFor a discussion of the decisions behind the design of Lua,\nsee the technical papers available at Lua's web site.\nFor a detailed introduction to programming in Lua,\nsee Roberto's book, <em>Programming in Lua (Second Edition)</em>.\n\n\n\n<h1>2 - <a name=\"2\">The Language</a></h1>\n\n<p>\nThis section describes the lexis, the syntax, and the semantics of Lua.\nIn other words,\nthis section describes\nwhich tokens are valid,\nhow they can be combined,\nand what their combinations mean.\n\n\n<p>\nThe language constructs will be explained using the usual extended BNF notation,\nin which\n{<em>a</em>}&nbsp;means&nbsp;0 or more <em>a</em>'s, and\n[<em>a</em>]&nbsp;means an optional <em>a</em>.\nNon-terminals are shown like non-terminal,\nkeywords are shown like <b>kword</b>,\nand other terminal symbols are shown like `<b>=</b>&acute;.\nThe complete syntax of Lua can be found in <a href=\"#8\">&sect;8</a>\nat the end of this manual.\n\n\n\n<h2>2.1 - <a name=\"2.1\">Lexical Conventions</a></h2>\n\n<p>\n<em>Names</em>\n(also called <em>identifiers</em>)\nin Lua can be any string of letters,\ndigits, and underscores,\nnot beginning with a digit.\nThis coincides with the definition of names in most languages.\n(The definition of letter depends on the current locale:\nany character considered alphabetic by the current locale\ncan be used in an 
identifier.)\nIdentifiers are used to name variables and table fields.\n\n\n<p>\nThe following <em>keywords</em> are reserved\nand cannot be used as names:\n\n\n<pre>\n     and       break     do        else      elseif\n     end       false     for       function  if\n     in        local     nil       not       or\n     repeat    return    then      true      until     while\n</pre>\n\n<p>\nLua is a case-sensitive language:\n<code>and</code> is a reserved word, but <code>And</code> and <code>AND</code>\nare two different, valid names.\nAs a convention, names starting with an underscore followed by\nuppercase letters (such as <a href=\"#pdf-_VERSION\"><code>_VERSION</code></a>)\nare reserved for internal global variables used by Lua.\n\n\n<p>\nThe following strings denote other tokens:\n\n<pre>\n     +     -     *     /     %     ^     #\n     ==    ~=    &lt;=    &gt;=    &lt;     &gt;     =\n     (     )     {     }     [     ]\n     ;     :     ,     .     ..    ...\n</pre>\n\n<p>\n<em>Literal strings</em>\ncan be delimited by matching single or double quotes,\nand can contain the following C-like escape sequences:\n'<code>\\a</code>' (bell),\n'<code>\\b</code>' (backspace),\n'<code>\\f</code>' (form feed),\n'<code>\\n</code>' (newline),\n'<code>\\r</code>' (carriage return),\n'<code>\\t</code>' (horizontal tab),\n'<code>\\v</code>' (vertical tab),\n'<code>\\\\</code>' (backslash),\n'<code>\\\"</code>' (quotation mark [double quote]),\nand '<code>\\'</code>' (apostrophe [single quote]).\nMoreover, a backslash followed by a real newline\nresults in a newline in the string.\nA character in a string can also be specified by its numerical value\nusing the escape sequence <code>\\<em>ddd</em></code>,\nwhere <em>ddd</em> is a sequence of up to three decimal digits.\n(Note that if a numerical escape is to be followed by a digit,\nit must be expressed using exactly three digits.)\nStrings in Lua can contain any 8-bit value, including embedded zeros,\nwhich can be 
specified as '<code>\\0</code>'.\n\n\n<p>\nLiteral strings can also be defined using a long format\nenclosed by <em>long brackets</em>.\nWe define an <em>opening long bracket of level <em>n</em></em> as an opening\nsquare bracket followed by <em>n</em> equal signs followed by another\nopening square bracket.\nSo, an opening long bracket of level&nbsp;0 is written as <code>[[</code>,\nan opening long bracket of level&nbsp;1 is written as <code>[=[</code>,\nand so on.\nA <em>closing long bracket</em> is defined similarly;\nfor instance, a closing long bracket of level&nbsp;4 is written as <code>]====]</code>.\nA long string starts with an opening long bracket of any level and\nends at the first closing long bracket of the same level.\nLiterals in this bracketed form can run for several lines,\ndo not interpret any escape sequences,\nand ignore long brackets of any other level.\nThey can contain anything except a closing bracket of the proper level.\n\n\n<p>\nFor convenience,\nwhen the opening long bracket is immediately followed by a newline,\nthe newline is not included in the string.\nAs an example, in a system using ASCII\n(in which '<code>a</code>' is coded as&nbsp;97,\nnewline is coded as&nbsp;10, and '<code>1</code>' is coded as&nbsp;49),\nthe five literal strings below denote the same string:\n\n<pre>\n     a = 'alo\\n123\"'\n     a = \"alo\\n123\\\"\"\n     a = '\\97lo\\10\\04923\"'\n     a = [[alo\n     123\"]]\n     a = [==[\n     alo\n     123\"]==]\n</pre>\n\n<p>\nA <em>numerical constant</em> can be written with an optional decimal part\nand an optional decimal exponent.\nLua also accepts integer hexadecimal constants,\nby prefixing them with <code>0x</code>.\nExamples of valid numerical constants are\n\n<pre>\n     3   3.0   3.1416   314.16e-2   0.31416E1   0xff   0x56\n</pre>\n\n<p>\nA <em>comment</em> starts with a double hyphen (<code>--</code>)\nanywhere outside a string.\nIf the text immediately after <code>--</code> is not an opening long 
bracket,\nthe comment is a <em>short comment</em>,\nwhich runs until the end of the line.\nOtherwise, it is a <em>long comment</em>,\nwhich runs until the corresponding closing long bracket.\nLong comments are frequently used to disable code temporarily.\n\n\n\n\n\n<h2>2.2 - <a name=\"2.2\">Values and Types</a></h2>\n\n<p>\nLua is a <em>dynamically typed language</em>.\nThis means that\nvariables do not have types; only values do.\nThere are no type definitions in the language.\nAll values carry their own type.\n\n\n<p>\nAll values in Lua are <em>first-class values</em>.\nThis means that all values can be stored in variables,\npassed as arguments to other functions, and returned as results.\n\n\n<p>\nThere are eight basic types in Lua:\n<em>nil</em>, <em>boolean</em>, <em>number</em>,\n<em>string</em>, <em>function</em>, <em>userdata</em>,\n<em>thread</em>, and <em>table</em>.\n<em>Nil</em> is the type of the value <b>nil</b>,\nwhose main property is to be different from any other value;\nit usually represents the absence of a useful value.\n<em>Boolean</em> is the type of the values <b>false</b> and <b>true</b>.\nBoth <b>nil</b> and <b>false</b> make a condition false;\nany other value makes it true.\n<em>Number</em> represents real (double-precision floating-point) numbers.\n(It is easy to build Lua interpreters that use other\ninternal representations for numbers,\nsuch as single-precision float or long integers;\nsee file <code>luaconf.h</code>.)\n<em>String</em> represents arrays of characters.\n\nLua is 8-bit clean:\nstrings can contain any 8-bit character,\nincluding embedded zeros ('<code>\\0</code>') (see <a href=\"#2.1\">&sect;2.1</a>).\n\n\n<p>\nLua can call (and manipulate) functions written in Lua and\nfunctions written in C\n(see <a href=\"#2.5.8\">&sect;2.5.8</a>).\n\n\n<p>\nThe type <em>userdata</em> is provided to allow arbitrary C&nbsp;data to\nbe stored in Lua variables.\nThis type corresponds to a block of raw memory\nand has no pre-defined 
operations in Lua,\nexcept assignment and identity test.\nHowever, by using <em>metatables</em>,\nthe programmer can define operations for userdata values\n(see <a href=\"#2.8\">&sect;2.8</a>).\nUserdata values cannot be created or modified in Lua,\nonly through the C&nbsp;API.\nThis guarantees the integrity of data owned by the host program.\n\n\n<p>\nThe type <em>thread</em> represents independent threads of execution\nand it is used to implement coroutines (see <a href=\"#2.11\">&sect;2.11</a>).\nDo not confuse Lua threads with operating-system threads.\nLua supports coroutines on all systems,\neven those that do not support threads.\n\n\n<p>\nThe type <em>table</em> implements associative arrays,\nthat is, arrays that can be indexed not only with numbers,\nbut with any value (except <b>nil</b>).\nTables can be <em>heterogeneous</em>;\nthat is, they can contain values of all types (except <b>nil</b>).\nTables are the sole data structuring mechanism in Lua;\nthey can be used to represent ordinary arrays,\nsymbol tables, sets, records, graphs, trees, etc.\nTo represent records, Lua uses the field name as an index.\nThe language supports this representation by\nproviding <code>a.name</code> as syntactic sugar for <code>a[\"name\"]</code>.\nThere are several convenient ways to create tables in Lua\n(see <a href=\"#2.5.7\">&sect;2.5.7</a>).\n\n\n<p>\nLike indices,\nthe value of a table field can be of any type (except <b>nil</b>).\nIn particular,\nbecause functions are first-class values,\ntable fields can contain functions.\nThus tables can also carry <em>methods</em> (see <a href=\"#2.5.9\">&sect;2.5.9</a>).\n\n\n<p>\nTables, functions, threads, and (full) userdata values are <em>objects</em>:\nvariables do not actually <em>contain</em> these values,\nonly <em>references</em> to them.\nAssignment, parameter passing, and function returns\nalways manipulate references to such values;\nthese operations do not imply any kind of copy.\n\n\n<p>\nThe library function <a 
href=\"#pdf-type\"><code>type</code></a> returns a string describing the type\nof a given value.\n\n\n\n<h3>2.2.1 - <a name=\"2.2.1\">Coercion</a></h3>\n\n<p>\nLua provides automatic conversion between\nstring and number values at run time.\nAny arithmetic operation applied to a string tries to convert\nthis string to a number, following the usual conversion rules.\nConversely, whenever a number is used where a string is expected,\nthe number is converted to a string, in a reasonable format.\nFor complete control over how numbers are converted to strings,\nuse the <code>format</code> function from the string library\n(see <a href=\"#pdf-string.format\"><code>string.format</code></a>).\n\n\n\n\n\n\n\n<h2>2.3 - <a name=\"2.3\">Variables</a></h2>\n\n<p>\nVariables are places that store values.\n\nThere are three kinds of variables in Lua:\nglobal variables, local variables, and table fields.\n\n\n<p>\nA single name can denote a global variable or a local variable\n(or a function's formal parameter,\nwhich is a particular kind of local variable):\n\n<pre>\n\tvar ::= Name\n</pre><p>\nName denotes identifiers, as defined in <a href=\"#2.1\">&sect;2.1</a>.\n\n\n<p>\nAny variable is assumed to be global unless explicitly declared\nas a local (see <a href=\"#2.4.7\">&sect;2.4.7</a>).\nLocal variables are <em>lexically scoped</em>:\nlocal variables can be freely accessed by functions\ndefined inside their scope (see <a href=\"#2.6\">&sect;2.6</a>).\n\n\n<p>\nBefore the first assignment to a variable, its value is <b>nil</b>.\n\n\n<p>\nSquare brackets are used to index a table:\n\n<pre>\n\tvar ::= prefixexp `<b>[</b>&acute; exp `<b>]</b>&acute;\n</pre><p>\nThe meaning of accesses to global variables \nand table fields can be changed via metatables.\nAn access to an indexed variable <code>t[i]</code> is equivalent to\na call <code>gettable_event(t,i)</code>.\n(See <a href=\"#2.8\">&sect;2.8</a> for a complete description of the\n<code>gettable_event</code> function.\nThis 
function is not defined or callable in Lua.\nWe use it here only for explanatory purposes.)\n\n\n<p>\nThe syntax <code>var.Name</code> is just syntactic sugar for\n<code>var[\"Name\"]</code>:\n\n<pre>\n\tvar ::= prefixexp `<b>.</b>&acute; Name\n</pre>\n\n<p>\nAll global variables live as fields in ordinary Lua tables,\ncalled <em>environment tables</em> or simply\n<em>environments</em> (see <a href=\"#2.9\">&sect;2.9</a>).\nEach function has its own reference to an environment,\nso that all global variables in this function\nwill refer to this environment table.\nWhen a function is created,\nit inherits the environment from the function that created it.\nTo get the environment table of a Lua function,\nyou call <a href=\"#pdf-getfenv\"><code>getfenv</code></a>.\nTo replace it,\nyou call <a href=\"#pdf-setfenv\"><code>setfenv</code></a>.\n(You can only manipulate the environment of C&nbsp;functions\nthrough the debug library; (see <a href=\"#5.9\">&sect;5.9</a>).)\n\n\n<p>\nAn access to a global variable <code>x</code>\nis equivalent to <code>_env.x</code>,\nwhich in turn is equivalent to\n\n<pre>\n     gettable_event(_env, \"x\")\n</pre><p>\nwhere <code>_env</code> is the environment of the running function.\n(See <a href=\"#2.8\">&sect;2.8</a> for a complete description of the\n<code>gettable_event</code> function.\nThis function is not defined or callable in Lua.\nSimilarly, the <code>_env</code> variable is not defined in Lua.\nWe use them here only for explanatory purposes.)\n\n\n\n\n\n<h2>2.4 - <a name=\"2.4\">Statements</a></h2>\n\n<p>\nLua supports an almost conventional set of statements,\nsimilar to those in Pascal or C.\nThis set includes\nassignments, control structures, function calls,\nand variable declarations.\n\n\n\n<h3>2.4.1 - <a name=\"2.4.1\">Chunks</a></h3>\n\n<p>\nThe unit of execution of Lua is called a <em>chunk</em>.\nA chunk is simply a sequence of statements,\nwhich are executed sequentially.\nEach statement can be optionally followed by a 
semicolon:\n\n<pre>\n\tchunk ::= {stat [`<b>;</b>&acute;]}\n</pre><p>\nThere are no empty statements and thus '<code>;;</code>' is not legal.\n\n\n<p>\nLua handles a chunk as the body of an anonymous function \nwith a variable number of arguments\n(see <a href=\"#2.5.9\">&sect;2.5.9</a>).\nAs such, chunks can define local variables,\nreceive arguments, and return values.\n\n\n<p>\nA chunk can be stored in a file or in a string inside the host program.\nTo execute a chunk,\nLua first pre-compiles the chunk into instructions for a virtual machine,\nand then it executes the compiled code\nwith an interpreter for the virtual machine.\n\n\n<p>\nChunks can also be pre-compiled into binary form;\nsee program <code>luac</code> for details.\nPrograms in source and compiled forms are interchangeable;\nLua automatically detects the file type and acts accordingly.\n\n\n\n\n\n\n<h3>2.4.2 - <a name=\"2.4.2\">Blocks</a></h3><p>\nA block is a list of statements;\nsyntactically, a block is the same as a chunk:\n\n<pre>\n\tblock ::= chunk\n</pre>\n\n<p>\nA block can be explicitly delimited to produce a single statement:\n\n<pre>\n\tstat ::= <b>do</b> block <b>end</b>\n</pre><p>\nExplicit blocks are useful\nto control the scope of variable declarations.\nExplicit blocks are also sometimes used to\nadd a <b>return</b> or <b>break</b> statement in the middle\nof another block (see <a href=\"#2.4.4\">&sect;2.4.4</a>).\n\n\n\n\n\n<h3>2.4.3 - <a name=\"2.4.3\">Assignment</a></h3>\n\n<p>\nLua allows multiple assignments.\nTherefore, the syntax for assignment\ndefines a list of variables on the left side\nand a list of expressions on the right side.\nThe elements in both lists are separated by commas:\n\n<pre>\n\tstat ::= varlist `<b>=</b>&acute; explist\n\tvarlist ::= var {`<b>,</b>&acute; var}\n\texplist ::= exp {`<b>,</b>&acute; exp}\n</pre><p>\nExpressions are discussed in <a href=\"#2.5\">&sect;2.5</a>.\n\n\n<p>\nBefore the assignment,\nthe list of values is <em>adjusted</em> to the 
length of\nthe list of variables.\nIf there are more values than needed,\nthe excess values are thrown away.\nIf there are fewer values than needed,\nthe list is extended with as many  <b>nil</b>'s as needed.\nIf the list of expressions ends with a function call,\nthen all values returned by that call enter the list of values,\nbefore the adjustment\n(except when the call is enclosed in parentheses; see <a href=\"#2.5\">&sect;2.5</a>).\n\n\n<p>\nThe assignment statement first evaluates all its expressions\nand only then are the assignments performed.\nThus the code\n\n<pre>\n     i = 3\n     i, a[i] = i+1, 20\n</pre><p>\nsets <code>a[3]</code> to 20, without affecting <code>a[4]</code>\nbecause the <code>i</code> in <code>a[i]</code> is evaluated (to 3)\nbefore it is assigned&nbsp;4.\nSimilarly, the line\n\n<pre>\n     x, y = y, x\n</pre><p>\nexchanges the values of <code>x</code> and <code>y</code>,\nand\n\n<pre>\n     x, y, z = y, z, x\n</pre><p>\ncyclically permutes the values of <code>x</code>, <code>y</code>, and <code>z</code>.\n\n\n<p>\nThe meaning of assignments to global variables\nand table fields can be changed via metatables.\nAn assignment to an indexed variable <code>t[i] = val</code> is equivalent to\n<code>settable_event(t,i,val)</code>.\n(See <a href=\"#2.8\">&sect;2.8</a> for a complete description of the\n<code>settable_event</code> function.\nThis function is not defined or callable in Lua.\nWe use it here only for explanatory purposes.)\n\n\n<p>\nAn assignment to a global variable <code>x = val</code>\nis equivalent to the assignment\n<code>_env.x = val</code>,\nwhich in turn is equivalent to\n\n<pre>\n     settable_event(_env, \"x\", val)\n</pre><p>\nwhere <code>_env</code> is the environment of the running function.\n(The <code>_env</code> variable is not defined in Lua.\nWe use it here only for explanatory purposes.)\n\n\n\n\n\n<h3>2.4.4 - <a name=\"2.4.4\">Control Structures</a></h3><p>\nThe control structures\n<b>if</b>, <b>while</b>, and 
<b>repeat</b> have the usual meaning and\nfamiliar syntax:\n\n\n\n\n<pre>\n\tstat ::= <b>while</b> exp <b>do</b> block <b>end</b>\n\tstat ::= <b>repeat</b> block <b>until</b> exp\n\tstat ::= <b>if</b> exp <b>then</b> block {<b>elseif</b> exp <b>then</b> block} [<b>else</b> block] <b>end</b>\n</pre><p>\nLua also has a <b>for</b> statement, in two flavors (see <a href=\"#2.4.5\">&sect;2.4.5</a>).\n\n\n<p>\nThe condition expression of a\ncontrol structure can return any value.\nBoth <b>false</b> and <b>nil</b> are considered false.\nAll values different from <b>nil</b> and <b>false</b> are considered true\n(in particular, the number 0 and the empty string are also true).\n\n\n<p>\nIn the <b>repeat</b>&ndash;<b>until</b> loop,\nthe inner block does not end at the <b>until</b> keyword,\nbut only after the condition.\nSo, the condition can refer to local variables\ndeclared inside the loop block.\n\n\n<p>\nThe <b>return</b> statement is used to return values\nfrom a function or a chunk (which is just a function).\n\nFunctions and chunks can return more than one value,\nand so the syntax for the <b>return</b> statement is\n\n<pre>\n\tstat ::= <b>return</b> [explist]\n</pre>\n\n<p>\nThe <b>break</b> statement is used to terminate the execution of a\n<b>while</b>, <b>repeat</b>, or <b>for</b> loop,\nskipping to the next statement after the loop:\n\n\n<pre>\n\tstat ::= <b>break</b>\n</pre><p>\nA <b>break</b> ends the innermost enclosing loop.\n\n\n<p>\nThe <b>return</b> and <b>break</b>\nstatements can only be written as the <em>last</em> statement of a block.\nIf it is really necessary to <b>return</b> or <b>break</b> in the\nmiddle of a block,\nthen an explicit inner block can be used,\nas in the idioms\n<code>do return end</code> and <code>do break end</code>,\nbecause now <b>return</b> and <b>break</b> are the last statements in\ntheir (inner) blocks.\n\n\n\n\n\n<h3>2.4.5 - <a name=\"2.4.5\">For Statement</a></h3>\n\n<p>\n\nThe <b>for</b> statement has two forms:\none 
numeric and one generic.\n\n\n<p>\nThe numeric <b>for</b> loop repeats a block of code while a\ncontrol variable runs through an arithmetic progression.\nIt has the following syntax:\n\n<pre>\n\tstat ::= <b>for</b> Name `<b>=</b>&acute; exp `<b>,</b>&acute; exp [`<b>,</b>&acute; exp] <b>do</b> block <b>end</b>\n</pre><p>\nThe <em>block</em> is repeated for <em>name</em> starting at the value of\nthe first <em>exp</em>, until it passes the second <em>exp</em> by steps of the\nthird <em>exp</em>.\nMore precisely, a <b>for</b> statement like\n\n<pre>\n     for v = <em>e1</em>, <em>e2</em>, <em>e3</em> do <em>block</em> end\n</pre><p>\nis equivalent to the code:\n\n<pre>\n     do\n       local <em>var</em>, <em>limit</em>, <em>step</em> = tonumber(<em>e1</em>), tonumber(<em>e2</em>), tonumber(<em>e3</em>)\n       if not (<em>var</em> and <em>limit</em> and <em>step</em>) then error() end\n       while (<em>step</em> &gt; 0 and <em>var</em> &lt;= <em>limit</em>) or (<em>step</em> &lt;= 0 and <em>var</em> &gt;= <em>limit</em>) do\n         local v = <em>var</em>\n         <em>block</em>\n         <em>var</em> = <em>var</em> + <em>step</em>\n       end\n     end\n</pre><p>\nNote the following:\n\n<ul>\n\n<li>\nAll three control expressions are evaluated only once,\nbefore the loop starts.\nThey must all result in numbers.\n</li>\n\n<li>\n<code><em>var</em></code>, <code><em>limit</em></code>, and <code><em>step</em></code> are invisible variables.\nThe names shown here are for explanatory purposes only.\n</li>\n\n<li>\nIf the third expression (the step) is absent,\nthen a step of&nbsp;1 is used.\n</li>\n\n<li>\nYou can use <b>break</b> to exit a <b>for</b> loop.\n</li>\n\n<li>\nThe loop variable <code>v</code> is local to the loop;\nyou cannot use its value after the <b>for</b> ends or is broken.\nIf you need this value,\nassign it to another variable before breaking or exiting the loop.\n</li>\n\n</ul>\n\n<p>\nThe generic <b>for</b> statement works over 
functions,\ncalled <em>iterators</em>.\nOn each iteration, the iterator function is called to produce a new value,\nstopping when this new value is <b>nil</b>.\nThe generic <b>for</b> loop has the following syntax:\n\n<pre>\n\tstat ::= <b>for</b> namelist <b>in</b> explist <b>do</b> block <b>end</b>\n\tnamelist ::= Name {`<b>,</b>&acute; Name}\n</pre><p>\nA <b>for</b> statement like\n\n<pre>\n     for <em>var_1</em>, &middot;&middot;&middot;, <em>var_n</em> in <em>explist</em> do <em>block</em> end\n</pre><p>\nis equivalent to the code:\n\n<pre>\n     do\n       local <em>f</em>, <em>s</em>, <em>var</em> = <em>explist</em>\n       while true do\n         local <em>var_1</em>, &middot;&middot;&middot;, <em>var_n</em> = <em>f</em>(<em>s</em>, <em>var</em>)\n         <em>var</em> = <em>var_1</em>\n         if <em>var</em> == nil then break end\n         <em>block</em>\n       end\n     end\n</pre><p>\nNote the following:\n\n<ul>\n\n<li>\n<code><em>explist</em></code> is evaluated only once.\nIts results are an <em>iterator</em> function,\na <em>state</em>,\nand an initial value for the first <em>iterator variable</em>.\n</li>\n\n<li>\n<code><em>f</em></code>, <code><em>s</em></code>, and <code><em>var</em></code> are invisible variables.\nThe names are here for explanatory purposes only.\n</li>\n\n<li>\nYou can use <b>break</b> to exit a <b>for</b> loop.\n</li>\n\n<li>\nThe loop variables <code><em>var_i</em></code> are local to the loop;\nyou cannot use their values after the <b>for</b> ends.\nIf you need these values,\nthen assign them to other variables before breaking or exiting the loop.\n</li>\n\n</ul>\n\n\n\n\n<h3>2.4.6 - <a name=\"2.4.6\">Function Calls as Statements</a></h3><p>\nTo allow possible side-effects,\nfunction calls can be executed as statements:\n\n<pre>\n\tstat ::= functioncall\n</pre><p>\nIn this case, all returned values are thrown away.\nFunction calls are explained in <a href=\"#2.5.8\">&sect;2.5.8</a>.\n\n\n\n\n\n<h3>2.4.7 - <a 
name=\"2.4.7\">Local Declarations</a></h3><p>\nLocal variables can be declared anywhere inside a block.\nThe declaration can include an initial assignment:\n\n<pre>\n\tstat ::= <b>local</b> namelist [`<b>=</b>&acute; explist]\n</pre><p>\nIf present, an initial assignment has the same semantics\nof a multiple assignment (see <a href=\"#2.4.3\">&sect;2.4.3</a>).\nOtherwise, all variables are initialized with <b>nil</b>.\n\n\n<p>\nA chunk is also a block (see <a href=\"#2.4.1\">&sect;2.4.1</a>),\nand so local variables can be declared in a chunk outside any explicit block.\nThe scope of such local variables extends until the end of the chunk.\n\n\n<p>\nThe visibility rules for local variables are explained in <a href=\"#2.6\">&sect;2.6</a>.\n\n\n\n\n\n\n\n<h2>2.5 - <a name=\"2.5\">Expressions</a></h2>\n\n<p>\nThe basic expressions in Lua are the following:\n\n<pre>\n\texp ::= prefixexp\n\texp ::= <b>nil</b> | <b>false</b> | <b>true</b>\n\texp ::= Number\n\texp ::= String\n\texp ::= function\n\texp ::= tableconstructor\n\texp ::= `<b>...</b>&acute;\n\texp ::= exp binop exp\n\texp ::= unop exp\n\tprefixexp ::= var | functioncall | `<b>(</b>&acute; exp `<b>)</b>&acute;\n</pre>\n\n<p>\nNumbers and literal strings are explained in <a href=\"#2.1\">&sect;2.1</a>;\nvariables are explained in <a href=\"#2.3\">&sect;2.3</a>;\nfunction definitions are explained in <a href=\"#2.5.9\">&sect;2.5.9</a>;\nfunction calls are explained in <a href=\"#2.5.8\">&sect;2.5.8</a>;\ntable constructors are explained in <a href=\"#2.5.7\">&sect;2.5.7</a>.\nVararg expressions,\ndenoted by three dots ('<code>...</code>'), can only be used when\ndirectly inside a vararg function;\nthey are explained in <a href=\"#2.5.9\">&sect;2.5.9</a>.\n\n\n<p>\nBinary operators comprise arithmetic operators (see <a href=\"#2.5.1\">&sect;2.5.1</a>),\nrelational operators (see <a href=\"#2.5.2\">&sect;2.5.2</a>), logical operators (see <a href=\"#2.5.3\">&sect;2.5.3</a>),\nand the concatenation operator (see <a 
href=\"#2.5.4\">&sect;2.5.4</a>).\nUnary operators comprise the unary minus (see <a href=\"#2.5.1\">&sect;2.5.1</a>),\nthe unary <b>not</b> (see <a href=\"#2.5.3\">&sect;2.5.3</a>),\nand the unary <em>length operator</em> (see <a href=\"#2.5.5\">&sect;2.5.5</a>).\n\n\n<p>\nBoth function calls and vararg expressions can result in multiple values.\nIf an expression is used as a statement\n(only possible for function calls (see <a href=\"#2.4.6\">&sect;2.4.6</a>)),\nthen its return list is adjusted to zero elements,\nthus discarding all returned values.\nIf an expression is used as the last (or the only) element\nof a list of expressions,\nthen no adjustment is made\n(unless the call is enclosed in parentheses).\nIn all other contexts,\nLua adjusts the result list to one element,\ndiscarding all values except the first one.\n\n\n<p>\nHere are some examples:\n\n<pre>\n     f()                -- adjusted to 0 results\n     g(f(), x)          -- f() is adjusted to 1 result\n     g(x, f())          -- g gets x plus all results from f()\n     a,b,c = f(), x     -- f() is adjusted to 1 result (c gets nil)\n     a,b = ...          -- a gets the first vararg parameter, b gets\n                        -- the second (both a and b can get nil if there\n                        -- is no corresponding vararg parameter)\n     \n     a,b,c = x, f()     -- f() is adjusted to 2 results\n     a,b,c = f()        -- f() is adjusted to 3 results\n     return f()         -- returns all results from f()\n     return ...         
-- returns all received vararg parameters\n     return x,y,f()     -- returns x, y, and all results from f()\n     {f()}              -- creates a list with all results from f()\n     {...}              -- creates a list with all vararg parameters\n     {f(), nil}         -- f() is adjusted to 1 result\n</pre>\n\n<p>\nAny expression enclosed in parentheses always results in only one value.\nThus,\n<code>(f(x,y,z))</code> is always a single value,\neven if <code>f</code> returns several values.\n(The value of <code>(f(x,y,z))</code> is the first value returned by <code>f</code>\nor <b>nil</b> if <code>f</code> does not return any values.)\n\n\n\n<h3>2.5.1 - <a name=\"2.5.1\">Arithmetic Operators</a></h3><p>\nLua supports the usual arithmetic operators:\nthe binary <code>+</code> (addition),\n<code>-</code> (subtraction), <code>*</code> (multiplication),\n<code>/</code> (division), <code>%</code> (modulo), and <code>^</code> (exponentiation);\nand unary <code>-</code> (negation).\nIf the operands are numbers, or strings that can be converted to\nnumbers (see <a href=\"#2.2.1\">&sect;2.2.1</a>),\nthen all operations have the usual meaning.\nExponentiation works for any exponent.\nFor instance, <code>x^(-0.5)</code> computes the inverse of the square root of <code>x</code>.\nModulo is defined as\n\n<pre>\n     a % b == a - math.floor(a/b)*b\n</pre><p>\nThat is, it is the remainder of a division that rounds\nthe quotient towards minus infinity.\n\n\n\n\n\n<h3>2.5.2 - <a name=\"2.5.2\">Relational Operators</a></h3><p>\nThe relational operators in Lua are\n\n<pre>\n     ==    ~=    &lt;     &gt;     &lt;=    &gt;=\n</pre><p>\nThese operators always result in <b>false</b> or <b>true</b>.\n\n\n<p>\nEquality (<code>==</code>) first compares the type of its operands.\nIf the types are different, then the result is <b>false</b>.\nOtherwise, the values of the operands are compared.\nNumbers and strings are compared in the usual way.\nObjects (tables, userdata, threads, and 
functions)\nare compared by <em>reference</em>:\ntwo objects are considered equal only if they are the <em>same</em> object.\nEvery time you create a new object\n(a table, userdata, thread, or function),\nthis new object is different from any previously existing object.\n\n\n<p>\nYou can change the way that Lua compares tables and userdata \nby using the \"eq\" metamethod (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n<p>\nThe conversion rules of <a href=\"#2.2.1\">&sect;2.2.1</a>\n<em>do not</em> apply to equality comparisons.\nThus, <code>\"0\"==0</code> evaluates to <b>false</b>,\nand <code>t[0]</code> and <code>t[\"0\"]</code> denote different\nentries in a table.\n\n\n<p>\nThe operator <code>~=</code> is exactly the negation of equality (<code>==</code>).\n\n\n<p>\nThe order operators work as follows.\nIf both arguments are numbers, then they are compared as such.\nOtherwise, if both arguments are strings,\nthen their values are compared according to the current locale.\nOtherwise, Lua tries to call the \"lt\" or the \"le\"\nmetamethod (see <a href=\"#2.8\">&sect;2.8</a>).\nA comparison <code>a &gt; b</code> is translated to <code>b &lt; a</code>\nand <code>a &gt;= b</code> is translated to <code>b &lt;= a</code>.\n\n\n\n\n\n<h3>2.5.3 - <a name=\"2.5.3\">Logical Operators</a></h3><p>\nThe logical operators in Lua are\n<b>and</b>, <b>or</b>, and <b>not</b>.\nLike the control structures (see <a href=\"#2.4.4\">&sect;2.4.4</a>),\nall logical operators consider both <b>false</b> and <b>nil</b> as false\nand anything else as true.\n\n\n<p>\nThe negation operator <b>not</b> always returns <b>false</b> or <b>true</b>.\nThe conjunction operator <b>and</b> returns its first argument\nif this value is <b>false</b> or <b>nil</b>;\notherwise, <b>and</b> returns its second argument.\nThe disjunction operator <b>or</b> returns its first argument\nif this value is different from <b>nil</b> and <b>false</b>;\notherwise, <b>or</b> returns its second argument.\nBoth <b>and</b> and 
<b>or</b> use short-cut evaluation;\nthat is,\nthe second operand is evaluated only if necessary.\nHere are some examples:\n\n<pre>\n     10 or 20            --&gt; 10\n     10 or error()       --&gt; 10\n     nil or \"a\"          --&gt; \"a\"\n     nil and 10          --&gt; nil\n     false and error()   --&gt; false\n     false and nil       --&gt; false\n     false or nil        --&gt; nil\n     10 and 20           --&gt; 20\n</pre><p>\n(In this manual,\n<code>--&gt;</code> indicates the result of the preceding expression.)\n\n\n\n\n\n<h3>2.5.4 - <a name=\"2.5.4\">Concatenation</a></h3><p>\nThe string concatenation operator in Lua is\ndenoted by two dots ('<code>..</code>').\nIf both operands are strings or numbers, then they are converted to\nstrings according to the rules mentioned in <a href=\"#2.2.1\">&sect;2.2.1</a>.\nOtherwise, the \"concat\" metamethod is called (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n\n\n\n<h3>2.5.5 - <a name=\"2.5.5\">The Length Operator</a></h3>\n\n<p>\nThe length operator is denoted by the unary operator <code>#</code>.\nThe length of a string is its number of bytes\n(that is, the usual meaning of string length when each\ncharacter is one byte).\n\n\n<p>\nThe length of a table <code>t</code> is defined to be any\ninteger index <code>n</code>\nsuch that <code>t[n]</code> is not <b>nil</b> and <code>t[n+1]</code> is <b>nil</b>;\nmoreover, if <code>t[1]</code> is <b>nil</b>, <code>n</code> can be zero.\nFor a regular array, with non-nil values from 1 to a given <code>n</code>,\nits length is exactly that <code>n</code>,\nthe index of its last value.\nIf the array has \"holes\"\n(that is, <b>nil</b> values between other non-nil values),\nthen <code>#t</code> can be any of the indices that\ndirectly precedes a <b>nil</b> value\n(that is, it may consider any such <b>nil</b> value as the end of\nthe array). 
\n\n\n\n\n\n<h3>2.5.6 - <a name=\"2.5.6\">Precedence</a></h3><p>\nOperator precedence in Lua follows the table below,\nfrom lower to higher priority:\n\n<pre>\n     or\n     and\n     &lt;     &gt;     &lt;=    &gt;=    ~=    ==\n     ..\n     +     -\n     *     /     %\n     not   #     - (unary)\n     ^\n</pre><p>\nAs usual,\nyou can use parentheses to change the precedences of an expression.\nThe concatenation ('<code>..</code>') and exponentiation ('<code>^</code>')\noperators are right associative.\nAll other binary operators are left associative.\n\n\n\n\n\n<h3>2.5.7 - <a name=\"2.5.7\">Table Constructors</a></h3><p>\nTable constructors are expressions that create tables.\nEvery time a constructor is evaluated, a new table is created.\nA constructor can be used to create an empty table\nor to create a table and initialize some of its fields.\nThe general syntax for constructors is\n\n<pre>\n\ttableconstructor ::= `<b>{</b>&acute; [fieldlist] `<b>}</b>&acute;\n\tfieldlist ::= field {fieldsep field} [fieldsep]\n\tfield ::= `<b>[</b>&acute; exp `<b>]</b>&acute; `<b>=</b>&acute; exp | Name `<b>=</b>&acute; exp | exp\n\tfieldsep ::= `<b>,</b>&acute; | `<b>;</b>&acute;\n</pre>\n\n<p>\nEach field of the form <code>[exp1] = exp2</code> adds to the new table an entry\nwith key <code>exp1</code> and value <code>exp2</code>.\nA field of the form <code>name = exp</code> is equivalent to\n<code>[\"name\"] = exp</code>.\nFinally, fields of the form <code>exp</code> are equivalent to\n<code>[i] = exp</code>, where <code>i</code> are consecutive numerical integers,\nstarting with 1.\nFields in the other formats do not affect this counting.\nFor example,\n\n<pre>\n     a = { [f(1)] = g; \"x\", \"y\"; x = 1, f(x), [30] = 23; 45 }\n</pre><p>\nis equivalent to\n\n<pre>\n     do\n       local t = {}\n       t[f(1)] = g\n       t[1] = \"x\"         -- 1st exp\n       t[2] = \"y\"         -- 2nd exp\n       t.x = 1            -- t[\"x\"] = 1\n       t[3] = f(x)        -- 3rd exp\n 
      t[30] = 23\n       t[4] = 45          -- 4th exp\n       a = t\n     end\n</pre>\n\n<p>\nIf the last field in the list has the form <code>exp</code>\nand the expression is a function call or a vararg expression,\nthen all values returned by this expression enter the list consecutively\n(see <a href=\"#2.5.8\">&sect;2.5.8</a>).\nTo avoid this,\nenclose the function call or the vararg expression\nin parentheses (see <a href=\"#2.5\">&sect;2.5</a>).\n\n\n<p>\nThe field list can have an optional trailing separator,\nas a convenience for machine-generated code.\n\n\n\n\n\n<h3>2.5.8 - <a name=\"2.5.8\">Function Calls</a></h3><p>\nA function call in Lua has the following syntax:\n\n<pre>\n\tfunctioncall ::= prefixexp args\n</pre><p>\nIn a function call,\nfirst prefixexp and args are evaluated.\nIf the value of prefixexp has type <em>function</em>,\nthen this function is called\nwith the given arguments.\nOtherwise, the prefixexp \"call\" metamethod is called,\nhaving as first parameter the value of prefixexp,\nfollowed by the original call arguments\n(see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n<p>\nThe form\n\n<pre>\n\tfunctioncall ::= prefixexp `<b>:</b>&acute; Name args\n</pre><p>\ncan be used to call \"methods\".\nA call <code>v:name(<em>args</em>)</code>\nis syntactic sugar for <code>v.name(v,<em>args</em>)</code>,\nexcept that <code>v</code> is evaluated only once.\n\n\n<p>\nArguments have the following syntax:\n\n<pre>\n\targs ::= `<b>(</b>&acute; [explist] `<b>)</b>&acute;\n\targs ::= tableconstructor\n\targs ::= String\n</pre><p>\nAll argument expressions are evaluated before the call.\nA call of the form <code>f{<em>fields</em>}</code> is\nsyntactic sugar for <code>f({<em>fields</em>})</code>;\nthat is, the argument list is a single new table.\nA call of the form <code>f'<em>string</em>'</code>\n(or <code>f\"<em>string</em>\"</code> or <code>f[[<em>string</em>]]</code>)\nis syntactic sugar for <code>f('<em>string</em>')</code>;\nthat is, the argument list is 
a single literal string.\n\n\n<p>\nAs an exception to the free-format syntax of Lua,\nyou cannot put a line break before the '<code>(</code>' in a function call.\nThis restriction avoids some ambiguities in the language.\nIf you write\n\n<pre>\n     a = f\n     (g).x(a)\n</pre><p>\nLua would see that as a single statement, <code>a = f(g).x(a)</code>.\nSo, if you want two statements, you must add a semi-colon between them.\nIf you actually want to call <code>f</code>,\nyou must remove the line break before <code>(g)</code>.\n\n\n<p>\nA call of the form <code>return</code> <em>functioncall</em> is called\na <em>tail call</em>.\nLua implements <em>proper tail calls</em>\n(or <em>proper tail recursion</em>):\nin a tail call,\nthe called function reuses the stack entry of the calling function.\nTherefore, there is no limit on the number of nested tail calls that\na program can execute.\nHowever, a tail call erases any debug information about the\ncalling function.\nNote that a tail call only happens with a particular syntax,\nwhere the <b>return</b> has one single function call as argument;\nthis syntax makes the calling function return exactly\nthe returns of the called function.\nSo, none of the following examples are tail calls:\n\n<pre>\n     return (f(x))        -- results adjusted to 1\n     return 2 * f(x)\n     return x, f(x)       -- additional results\n     f(x); return         -- results discarded\n     return x or f(x)     -- results adjusted to 1\n</pre>\n\n\n\n\n<h3>2.5.9 - <a name=\"2.5.9\">Function Definitions</a></h3>\n\n<p>\nThe syntax for function definition is\n\n<pre>\n\tfunction ::= <b>function</b> funcbody\n\tfuncbody ::= `<b>(</b>&acute; [parlist] `<b>)</b>&acute; block <b>end</b>\n</pre>\n\n<p>\nThe following syntactic sugar simplifies function definitions:\n\n<pre>\n\tstat ::= <b>function</b> funcname funcbody\n\tstat ::= <b>local</b> <b>function</b> Name funcbody\n\tfuncname ::= Name {`<b>.</b>&acute; Name} [`<b>:</b>&acute; 
Name]\n</pre><p>\nThe statement\n\n<pre>\n     function f () <em>body</em> end\n</pre><p>\ntranslates to\n\n<pre>\n     f = function () <em>body</em> end\n</pre><p>\nThe statement\n\n<pre>\n     function t.a.b.c.f () <em>body</em> end\n</pre><p>\ntranslates to\n\n<pre>\n     t.a.b.c.f = function () <em>body</em> end\n</pre><p>\nThe statement\n\n<pre>\n     local function f () <em>body</em> end\n</pre><p>\ntranslates to\n\n<pre>\n     local f; f = function () <em>body</em> end\n</pre><p>\n<em>not</em> to\n\n<pre>\n     local f = function () <em>body</em> end\n</pre><p>\n(This only makes a difference when the body of the function\ncontains references to <code>f</code>.)\n\n\n<p>\nA function definition is an executable expression,\nwhose value has type <em>function</em>.\nWhen Lua pre-compiles a chunk,\nall its function bodies are pre-compiled too.\nThen, whenever Lua executes the function definition,\nthe function is <em>instantiated</em> (or <em>closed</em>).\nThis function instance (or <em>closure</em>)\nis the final value of the expression.\nDifferent instances of the same function\ncan refer to different  external local variables\nand can have different environment tables.\n\n\n<p>\nParameters act as local variables that are\ninitialized with the argument values:\n\n<pre>\n\tparlist ::= namelist [`<b>,</b>&acute; `<b>...</b>&acute;] | `<b>...</b>&acute;\n</pre><p>\nWhen a function is called,\nthe list of arguments is adjusted to\nthe length of the list of parameters,\nunless the function is a variadic or <em>vararg function</em>,\nwhich is\nindicated by three dots ('<code>...</code>') at the end of its parameter list.\nA vararg function does not adjust its argument list;\ninstead, it collects all extra arguments and supplies them\nto the function through a <em>vararg expression</em>,\nwhich is also written as three dots.\nThe value of this expression is a list of all actual extra arguments,\nsimilar to a function with multiple results.\nIf a vararg expression is 
used inside another expression\nor in the middle of a list of expressions,\nthen its return list is adjusted to one element.\nIf the expression is used as the last element of a list of expressions,\nthen no adjustment is made\n(unless that last expression is enclosed in parentheses).\n\n\n<p>\nAs an example, consider the following definitions:\n\n<pre>\n     function f(a, b) end\n     function g(a, b, ...) end\n     function r() return 1,2,3 end\n</pre><p>\nThen, we have the following mapping from arguments to parameters and\nto the vararg expression:\n\n<pre>\n     CALL            PARAMETERS\n     \n     f(3)             a=3, b=nil\n     f(3, 4)          a=3, b=4\n     f(3, 4, 5)       a=3, b=4\n     f(r(), 10)       a=1, b=10\n     f(r())           a=1, b=2\n     \n     g(3)             a=3, b=nil, ... --&gt;  (nothing)\n     g(3, 4)          a=3, b=4,   ... --&gt;  (nothing)\n     g(3, 4, 5, 8)    a=3, b=4,   ... --&gt;  5  8\n     g(5, r())        a=5, b=1,   ... --&gt;  2  3\n</pre>\n\n<p>\nResults are returned using the <b>return</b> statement (see <a href=\"#2.4.4\">&sect;2.4.4</a>).\nIf control reaches the end of a function\nwithout encountering a <b>return</b> statement,\nthen the function returns with no results.\n\n\n<p>\nThe <em>colon</em> syntax\nis used for defining <em>methods</em>,\nthat is, functions that have an implicit extra parameter <code>self</code>.\nThus, the statement\n\n<pre>\n     function t.a.b.c:f (<em>params</em>) <em>body</em> end\n</pre><p>\nis syntactic sugar for\n\n<pre>\n     t.a.b.c.f = function (self, <em>params</em>) <em>body</em> end\n</pre>\n\n\n\n\n\n\n<h2>2.6 - <a name=\"2.6\">Visibility Rules</a></h2>\n\n<p>\n\nLua is a lexically scoped language.\nThe scope of variables begins at the first statement <em>after</em>\ntheir declaration and lasts until the end of the innermost block that\nincludes the declaration.\nConsider the following example:\n\n<pre>\n     x = 10                -- global variable\n     do                 
   -- new block\n       local x = x         -- new 'x', with value 10\n       print(x)            --&gt; 10\n       x = x+1\n       do                  -- another block\n         local x = x+1     -- another 'x'\n         print(x)          --&gt; 12\n       end\n       print(x)            --&gt; 11\n     end\n     print(x)              --&gt; 10  (the global one)\n</pre>\n\n<p>\nNotice that, in a declaration like <code>local x = x</code>,\nthe new <code>x</code> being declared is not in scope yet,\nand so the second <code>x</code> refers to the outside variable.\n\n\n<p>\nBecause of the lexical scoping rules,\nlocal variables can be freely accessed by functions\ndefined inside their scope.\nA local variable used by an inner function is called\nan <em>upvalue</em>, or <em>external local variable</em>,\ninside the inner function.\n\n\n<p>\nNotice that each execution of a <b>local</b> statement\ndefines new local variables.\nConsider the following example:\n\n<pre>\n     a = {}\n     local x = 20\n     for i=1,10 do\n       local y = 0\n       a[i] = function () y=y+1; return x+y end\n     end\n</pre><p>\nThe loop creates ten closures\n(that is, ten instances of the anonymous function).\nEach of these closures uses a different <code>y</code> variable,\nwhile all of them share the same <code>x</code>.\n\n\n\n\n\n<h2>2.7 - <a name=\"2.7\">Error Handling</a></h2>\n\n<p>\nBecause Lua is an embedded extension language,\nall Lua actions start from C&nbsp;code in the host program\ncalling a function from the Lua library (see <a href=\"#lua_pcall\"><code>lua_pcall</code></a>).\nWhenever an error occurs during Lua compilation or execution,\ncontrol returns to C,\nwhich can take appropriate measures\n(such as printing an error message).\n\n\n<p>\nLua code can explicitly generate an error by calling the\n<a href=\"#pdf-error\"><code>error</code></a> function.\nIf you need to catch errors in Lua,\nyou can use the <a href=\"#pdf-pcall\"><code>pcall</code></a> 
function.\n\n\n\n\n\n<h2>2.8 - <a name=\"2.8\">Metatables</a></h2>\n\n<p>\nEvery value in Lua can have a <em>metatable</em>.\nThis <em>metatable</em> is an ordinary Lua table\nthat defines the behavior of the original value\nunder certain special operations.\nYou can change several aspects of the behavior\nof operations over a value by setting specific fields in its metatable.\nFor instance, when a non-numeric value is the operand of an addition,\nLua checks for a function in the field <code>\"__add\"</code> in its metatable.\nIf it finds one,\nLua calls this function to perform the addition.\n\n\n<p>\nWe call the keys in a metatable <em>events</em>\nand the values <em>metamethods</em>.\nIn the previous example, the event is <code>\"add\"</code> \nand the metamethod is the function that performs the addition.\n\n\n<p>\nYou can query the metatable of any value\nthrough the <a href=\"#pdf-getmetatable\"><code>getmetatable</code></a> function.\n\n\n<p>\nYou can replace the metatable of tables\nthrough the <a href=\"#pdf-setmetatable\"><code>setmetatable</code></a>\nfunction.\nYou cannot change the metatable of other types from Lua\n(except by using the debug library);\nyou must use the C&nbsp;API for that.\n\n\n<p>\nTables and full userdata have individual metatables\n(although multiple tables and userdata can share their metatables).\nValues of all other types share one single metatable per type;\nthat is, there is one single metatable for all numbers,\none for all strings, etc.\n\n\n<p>\nA metatable controls how an object behaves in arithmetic operations,\norder comparisons, concatenation, length operation, and indexing.\nA metatable also can define a function to be called when a userdata\nis garbage collected.\nFor each of these operations Lua associates a specific key\ncalled an <em>event</em>.\nWhen Lua performs one of these operations over a value,\nit checks whether this value has a metatable with the corresponding event.\nIf so, the value associated with that 
key (the metamethod)\ncontrols how Lua will perform the operation.\n\n\n<p>\nMetatables control the operations listed next.\nEach operation is identified by its corresponding name.\nThe key for each operation is a string with its name prefixed by\ntwo underscores, '<code>__</code>';\nfor instance, the key for operation \"add\" is the\nstring <code>\"__add\"</code>.\nThe semantics of these operations is better explained by a Lua function\ndescribing how the interpreter executes the operation.\n\n\n<p>\nThe code shown here in Lua is only illustrative;\nthe real behavior is hard coded in the interpreter\nand it is much more efficient than this simulation.\nAll functions used in these descriptions\n(<a href=\"#pdf-rawget\"><code>rawget</code></a>, <a href=\"#pdf-tonumber\"><code>tonumber</code></a>, etc.)\nare described in <a href=\"#5.1\">&sect;5.1</a>.\nIn particular, to retrieve the metamethod of a given object,\nwe use the expression\n\n<pre>\n     metatable(obj)[event]\n</pre><p>\nThis should be read as\n\n<pre>\n     rawget(getmetatable(obj) or {}, event)\n</pre><p>\n\nThat is, the access to a metamethod does not invoke other metamethods,\nand the access to objects with no metatables does not fail\n(it simply results in <b>nil</b>).\n\n\n\n<ul>\n\n<li><b>\"add\":</b>\nthe <code>+</code> operation.\n\n\n\n<p>\nThe function <code>getbinhandler</code> below defines how Lua chooses a handler\nfor a binary operation.\nFirst, Lua tries the first operand.\nIf its type does not define a handler for the operation,\nthen Lua tries the second operand.\n\n<pre>\n     function getbinhandler (op1, op2, event)\n       return metatable(op1)[event] or metatable(op2)[event]\n     end\n</pre><p>\nBy using this function,\nthe behavior of the <code>op1 + op2</code> is\n\n<pre>\n     function add_event (op1, op2)\n       local o1, o2 = tonumber(op1), tonumber(op2)\n       if o1 and o2 then  -- both operands are numeric?\n         return o1 + o2   -- '+' here is the primitive 'add'\n   
    else  -- at least one of the operands is not numeric\n         local h = getbinhandler(op1, op2, \"__add\")\n         if h then\n           -- call the handler with both operands\n           return (h(op1, op2))\n         else  -- no handler available: default behavior\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\n</li>\n\n<li><b>\"sub\":</b>\nthe <code>-</code> operation.\n\nBehavior similar to the \"add\" operation.\n</li>\n\n<li><b>\"mul\":</b>\nthe <code>*</code> operation.\n\nBehavior similar to the \"add\" operation.\n</li>\n\n<li><b>\"div\":</b>\nthe <code>/</code> operation.\n\nBehavior similar to the \"add\" operation.\n</li>\n\n<li><b>\"mod\":</b>\nthe <code>%</code> operation.\n\nBehavior similar to the \"add\" operation,\nwith the operation\n<code>o1 - floor(o1/o2)*o2</code> as the primitive operation.\n</li>\n\n<li><b>\"pow\":</b>\nthe <code>^</code> (exponentiation) operation.\n\nBehavior similar to the \"add\" operation,\nwith the function <code>pow</code> (from the C&nbsp;math library)\nas the primitive operation.\n</li>\n\n<li><b>\"unm\":</b>\nthe unary <code>-</code> operation.\n\n\n<pre>\n     function unm_event (op)\n       local o = tonumber(op)\n       if o then  -- operand is numeric?\n         return -o  -- '-' here is the primitive 'unm'\n       else  -- the operand is not numeric.\n         -- Try to get a handler from the operand\n         local h = metatable(op).__unm\n         if h then\n           -- call the handler with the operand\n           return (h(op))\n         else  -- no handler available: default behavior\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\n</li>\n\n<li><b>\"concat\":</b>\nthe <code>..</code> (concatenation) operation.\n\n\n<pre>\n     function concat_event (op1, op2)\n       if (type(op1) == \"string\" or type(op1) == \"number\") and\n          (type(op2) == \"string\" or type(op2) == \"number\") then\n         return 
op1 .. op2  -- primitive string concatenation\n       else\n         local h = getbinhandler(op1, op2, \"__concat\")\n         if h then\n           return (h(op1, op2))\n         else\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\n</li>\n\n<li><b>\"len\":</b>\nthe <code>#</code> operation.\n\n\n<pre>\n     function len_event (op)\n       if type(op) == \"string\" then\n         return strlen(op)         -- primitive string length\n       elseif type(op) == \"table\" then\n         return #op                -- primitive table length\n       else\n         local h = metatable(op).__len\n         if h then\n           -- call the handler with the operand\n           return (h(op))\n         else  -- no handler available: default behavior\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\nSee <a href=\"#2.5.5\">&sect;2.5.5</a> for a description of the length of a table.\n</li>\n\n<li><b>\"eq\":</b>\nthe <code>==</code> operation.\n\nThe function <code>getcomphandler</code> defines how Lua chooses a metamethod\nfor comparison operators.\nA metamethod is selected only when both objects\nbeing compared have the same type\nand the same metamethod for the selected operation.\n\n<pre>\n     function getcomphandler (op1, op2, event)\n       if type(op1) ~= type(op2) then return nil end\n       local mm1 = metatable(op1)[event]\n       local mm2 = metatable(op2)[event]\n       if mm1 == mm2 then return mm1 else return nil end\n     end\n</pre><p>\nThe \"eq\" event is defined as follows:\n\n<pre>\n     function eq_event (op1, op2)\n       if type(op1) ~= type(op2) then  -- different types?\n         return false   -- different objects\n       end\n       if op1 == op2 then   -- primitive equal?\n         return true   -- objects are equal\n       end\n       -- try metamethod\n       local h = getcomphandler(op1, op2, \"__eq\")\n       if h then\n         return (h(op1, op2))\n       else\n   
      return false\n       end\n     end\n</pre><p>\n<code>a ~= b</code> is equivalent to <code>not (a == b)</code>.\n</li>\n\n<li><b>\"lt\":</b>\nthe <code>&lt;</code> operation.\n\n\n<pre>\n     function lt_event (op1, op2)\n       if type(op1) == \"number\" and type(op2) == \"number\" then\n         return op1 &lt; op2   -- numeric comparison\n       elseif type(op1) == \"string\" and type(op2) == \"string\" then\n         return op1 &lt; op2   -- lexicographic comparison\n       else\n         local h = getcomphandler(op1, op2, \"__lt\")\n         if h then\n           return (h(op1, op2))\n         else\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\n<code>a &gt; b</code> is equivalent to <code>b &lt; a</code>.\n</li>\n\n<li><b>\"le\":</b>\nthe <code>&lt;=</code> operation.\n\n\n<pre>\n     function le_event (op1, op2)\n       if type(op1) == \"number\" and type(op2) == \"number\" then\n         return op1 &lt;= op2   -- numeric comparison\n       elseif type(op1) == \"string\" and type(op2) == \"string\" then\n         return op1 &lt;= op2   -- lexicographic comparison\n       else\n         local h = getcomphandler(op1, op2, \"__le\")\n         if h then\n           return (h(op1, op2))\n         else\n           h = getcomphandler(op1, op2, \"__lt\")\n           if h then\n             return not h(op2, op1)\n           else\n             error(&middot;&middot;&middot;)\n           end\n         end\n       end\n     end\n</pre><p>\n<code>a &gt;= b</code> is equivalent to <code>b &lt;= a</code>.\nNote that, in the absence of a \"le\" metamethod,\nLua tries the \"lt\", assuming that <code>a &lt;= b</code> is\nequivalent to <code>not (b &lt; a)</code>.\n</li>\n\n<li><b>\"index\":</b>\nThe indexing access <code>table[key]</code>.\n\n\n<pre>\n     function gettable_event (table, key)\n       local h\n       if type(table) == \"table\" then\n         local v = rawget(table, key)\n         if v ~= nil then return v 
end\n         h = metatable(table).__index\n         if h == nil then return nil end\n       else\n         h = metatable(table).__index\n         if h == nil then\n           error(&middot;&middot;&middot;)\n         end\n       end\n       if type(h) == \"function\" then\n         return (h(table, key))     -- call the handler\n       else return h[key]           -- or repeat operation on it\n       end\n     end\n</pre><p>\n</li>\n\n<li><b>\"newindex\":</b>\nThe indexing assignment <code>table[key] = value</code>.\n\n\n<pre>\n     function settable_event (table, key, value)\n       local h\n       if type(table) == \"table\" then\n         local v = rawget(table, key)\n         if v ~= nil then rawset(table, key, value); return end\n         h = metatable(table).__newindex\n         if h == nil then rawset(table, key, value); return end\n       else\n         h = metatable(table).__newindex\n         if h == nil then\n           error(&middot;&middot;&middot;)\n         end\n       end\n       if type(h) == \"function\" then\n         h(table, key,value)           -- call the handler\n       else h[key] = value             -- or repeat operation on it\n       end\n     end\n</pre><p>\n</li>\n\n<li><b>\"call\":</b>\ncalled when Lua calls a value.\n\n\n<pre>\n     function function_event (func, ...)\n       if type(func) == \"function\" then\n         return func(...)   
-- primitive call\n       else\n         local h = metatable(func).__call\n         if h then\n           return h(func, ...)\n         else\n           error(&middot;&middot;&middot;)\n         end\n       end\n     end\n</pre><p>\n</li>\n\n</ul>\n\n\n\n\n<h2>2.9 - <a name=\"2.9\">Environments</a></h2>\n\n<p>\nBesides metatables,\nobjects of types thread, function, and userdata\nhave another table associated with them,\ncalled their <em>environment</em>.\nLike metatables, environments are regular tables and\nmultiple objects can share the same environment.\n\n\n<p>\nThreads are created sharing the environment of the creating thread.\nUserdata and C&nbsp;functions are created sharing the environment\nof the creating C&nbsp;function.\nNon-nested Lua functions\n(created by <a href=\"#pdf-loadfile\"><code>loadfile</code></a>, <a href=\"#pdf-loadstring\"><code>loadstring</code></a> or <a href=\"#pdf-load\"><code>load</code></a>)\nare created sharing the environment of the creating thread.\nNested Lua functions are created sharing the environment of\nthe creating Lua function.\n\n\n<p>\nEnvironments associated with userdata have no meaning for Lua.\nThey are only a convenience for programmers,\na way to associate a table with a userdata.\n\n\n<p>\nEnvironments associated with threads are called\n<em>global environments</em>.\nThey are used as the default environment for threads and\nnon-nested Lua functions created by the thread\nand can be directly accessed by C&nbsp;code (see <a href=\"#3.3\">&sect;3.3</a>).\n\n\n<p>\nThe environment associated with a C&nbsp;function can be directly\naccessed by C&nbsp;code (see <a href=\"#3.3\">&sect;3.3</a>).\nIt is used as the default environment for other C&nbsp;functions\nand userdata created by the function.\n\n\n<p>\nEnvironments associated with Lua functions are used to resolve\nall accesses to global variables within the function (see <a href=\"#2.3\">&sect;2.3</a>).\nThey are used as the default environment for nested Lua 
functions\ncreated by the function.\n\n\n<p>\nYou can change the environment of a Lua function or the\nrunning thread by calling <a href=\"#pdf-setfenv\"><code>setfenv</code></a>.\nYou can get the environment of a Lua function or the running thread\nby calling <a href=\"#pdf-getfenv\"><code>getfenv</code></a>.\nTo manipulate the environment of other objects\n(userdata, C&nbsp;functions, other threads) you must\nuse the C&nbsp;API.\n\n\n\n\n\n<h2>2.10 - <a name=\"2.10\">Garbage Collection</a></h2>\n\n<p>\nLua performs automatic memory management.\nThis means that\nyou have to worry neither about allocating memory for new objects\nnor about freeing it when the objects are no longer needed.\nLua manages memory automatically by running\na <em>garbage collector</em> from time to time\nto collect all <em>dead objects</em>\n(that is, objects that are no longer accessible from Lua).\nAll memory used by Lua is subject to automatic management:\ntables, userdata, functions, threads, strings, etc.\n\n\n<p>\nLua implements an incremental mark-and-sweep collector.\nIt uses two numbers to control its garbage-collection cycles:\nthe <em>garbage-collector pause</em> and\nthe <em>garbage-collector step multiplier</em>.\nBoth use percentage points as units\n(so that a value of 100 means an internal value of 1).\n\n\n<p>\nThe garbage-collector pause\ncontrols how long the collector waits before starting a new cycle.\nLarger values make the collector less aggressive.\nValues smaller than 100 mean the collector will not wait to\nstart a new cycle.\nA value of 200 means that the collector waits for the total memory in use\nto double before starting a new cycle.\n\n\n<p>\nThe step multiplier\ncontrols the speed of the collector relative to\nmemory allocation.\nLarger values make the collector more aggressive but also increase\nthe size of each incremental step.\nValues smaller than 100 make the collector too slow and\ncan result in the collector never finishing a cycle.\nThe 
default, 200, means that the collector runs at \"twice\"\nthe speed of memory allocation.\n\n\n<p>\nYou can change these numbers by calling <a href=\"#lua_gc\"><code>lua_gc</code></a> in C\nor <a href=\"#pdf-collectgarbage\"><code>collectgarbage</code></a> in Lua.\nWith these functions you can also control \nthe collector directly (e.g., stop and restart it).\n\n\n\n<h3>2.10.1 - <a name=\"2.10.1\">Garbage-Collection Metamethods</a></h3>\n\n<p>\nUsing the C&nbsp;API,\nyou can set garbage-collector metamethods for userdata (see <a href=\"#2.8\">&sect;2.8</a>).\nThese metamethods are also called <em>finalizers</em>.\nFinalizers allow you to coordinate Lua's garbage collection\nwith external resource management\n(such as closing files, network or database connections,\nor freeing your own memory).\n\n\n<p>\nGarbage userdata with a field <code>__gc</code> in their metatables are not\ncollected immediately by the garbage collector.\nInstead, Lua puts them in a list.\nAfter the collection,\nLua does the equivalent of the following function\nfor each userdata in that list:\n\n<pre>\n     function gc_event (udata)\n       local h = metatable(udata).__gc\n       if h then\n         h(udata)\n       end\n     end\n</pre>\n\n<p>\nAt the end of each garbage-collection cycle,\nthe finalizers for userdata are called in <em>reverse</em>\norder of their creation,\namong those collected in that cycle.\nThat is, the first finalizer to be called is the one associated\nwith the userdata created last in the program.\nThe userdata itself is freed only in the next garbage-collection cycle.\n\n\n\n\n\n<h3>2.10.2 - <a name=\"2.10.2\">Weak Tables</a></h3>\n\n<p>\nA <em>weak table</em> is a table whose elements are\n<em>weak references</em>.\nA weak reference is ignored by the garbage collector.\nIn other words,\nif the only references to an object are weak references,\nthen the garbage collector will collect this object.\n\n\n<p>\nA weak table can have weak keys, weak values, or both.\nA 
table with weak keys allows the collection of its keys,\nbut prevents the collection of its values.\nA table with both weak keys and weak values allows the collection of\nboth keys and values.\nIn any case, if either the key or the value is collected,\nthe whole pair is removed from the table.\nThe weakness of a table is controlled by the\n<code>__mode</code> field of its metatable.\nIf the <code>__mode</code> field is a string containing the character&nbsp;'<code>k</code>',\nthe keys in the table are weak.\nIf <code>__mode</code> contains '<code>v</code>',\nthe values in the table are weak.\n\n\n<p>\nAfter you use a table as a metatable,\nyou should not change the value of its <code>__mode</code> field.\nOtherwise, the weak behavior of the tables controlled by this\nmetatable is undefined.\n\n\n\n\n\n\n\n<h2>2.11 - <a name=\"2.11\">Coroutines</a></h2>\n\n<p>\nLua supports coroutines,\nalso called <em>collaborative multithreading</em>.\nA coroutine in Lua represents an independent thread of execution.\nUnlike threads in multithread systems, however,\na coroutine only suspends its execution by explicitly calling\na yield function.\n\n\n<p>\nYou create a coroutine with a call to <a href=\"#pdf-coroutine.create\"><code>coroutine.create</code></a>.\nIts sole argument is a function\nthat is the main function of the coroutine.\nThe <code>create</code> function only creates a new coroutine and\nreturns a handle to it (an object of type <em>thread</em>);\nit does not start the coroutine execution.\n\n\n<p>\nWhen you first call <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a>,\npassing as its first argument\na thread returned by <a href=\"#pdf-coroutine.create\"><code>coroutine.create</code></a>,\nthe coroutine starts its execution,\nat the first line of its main function.\nExtra arguments passed to <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a> are passed on\nto the coroutine main function.\nAfter the coroutine starts running,\nit 
runs until it terminates or <em>yields</em>.\n\n\n<p>\nA coroutine can terminate its execution in two ways:\nnormally, when its main function returns\n(explicitly or implicitly, after the last instruction);\nand abnormally, if there is an unprotected error.\nIn the first case, <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a> returns <b>true</b>,\nplus any values returned by the coroutine main function.\nIn case of errors, <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a> returns <b>false</b>\nplus an error message.\n\n\n<p>\nA coroutine yields by calling <a href=\"#pdf-coroutine.yield\"><code>coroutine.yield</code></a>.\nWhen a coroutine yields,\nthe corresponding <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a> returns immediately,\neven if the yield happens inside nested function calls\n(that is, not in the main function,\nbut in a function directly or indirectly called by the main function).\nIn the case of a yield, <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a> also returns <b>true</b>,\nplus any values passed to <a href=\"#pdf-coroutine.yield\"><code>coroutine.yield</code></a>.\nThe next time you resume the same coroutine,\nit continues its execution from the point where it yielded,\nwith the call to <a href=\"#pdf-coroutine.yield\"><code>coroutine.yield</code></a> returning any extra\narguments passed to <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a>.\n\n\n<p>\nLike <a href=\"#pdf-coroutine.create\"><code>coroutine.create</code></a>,\nthe <a href=\"#pdf-coroutine.wrap\"><code>coroutine.wrap</code></a> function also creates a coroutine,\nbut instead of returning the coroutine itself,\nit returns a function that, when called, resumes the coroutine.\nAny arguments passed to this function\ngo as extra arguments to <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a>.\n<a href=\"#pdf-coroutine.wrap\"><code>coroutine.wrap</code></a> returns all the 
values returned by <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a>,\nexcept the first one (the boolean error code).\nUnlike <a href=\"#pdf-coroutine.resume\"><code>coroutine.resume</code></a>,\n<a href=\"#pdf-coroutine.wrap\"><code>coroutine.wrap</code></a> does not catch errors;\nany error is propagated to the caller.\n\n\n<p>\nAs an example,\nconsider the following code:\n\n<pre>\n     function foo (a)\n       print(\"foo\", a)\n       return coroutine.yield(2*a)\n     end\n     \n     co = coroutine.create(function (a,b)\n           print(\"co-body\", a, b)\n           local r = foo(a+1)\n           print(\"co-body\", r)\n           local r, s = coroutine.yield(a+b, a-b)\n           print(\"co-body\", r, s)\n           return b, \"end\"\n     end)\n            \n     print(\"main\", coroutine.resume(co, 1, 10))\n     print(\"main\", coroutine.resume(co, \"r\"))\n     print(\"main\", coroutine.resume(co, \"x\", \"y\"))\n     print(\"main\", coroutine.resume(co, \"x\", \"y\"))\n</pre><p>\nWhen you run it, it produces the following output:\n\n<pre>\n     co-body 1       10\n     foo     2\n     \n     main    true    4\n     co-body r\n     main    true    11      -9\n     co-body x       y\n     main    true    10      end\n     main    false   cannot resume dead coroutine\n</pre>\n\n\n\n\n<h1>3 - <a name=\"3\">The Application Program Interface</a></h1>\n\n<p>\n\nThis section describes the C&nbsp;API for Lua, that is,\nthe set of C&nbsp;functions available to the host program to communicate\nwith Lua.\nAll API functions and related types and constants\nare declared in the header file <a name=\"pdf-lua.h\"><code>lua.h</code></a>.\n\n\n<p>\nEven when we use the term \"function\",\nany facility in the API may be provided as a macro instead.\nAll such macros use each of their arguments exactly once\n(except for the first argument, which is always a Lua state),\nand so do not generate any hidden side-effects.\n\n\n<p>\nAs in most 
C&nbsp;libraries,\nthe Lua API functions do not check their arguments for validity or consistency.\nHowever, you can change this behavior by compiling Lua\nwith a proper definition for the macro <a name=\"pdf-luai_apicheck\"><code>luai_apicheck</code></a>,\nin file <code>luaconf.h</code>.\n\n\n\n<h2>3.1 - <a name=\"3.1\">The Stack</a></h2>\n\n<p>\nLua uses a <em>virtual stack</em> to pass values to and from C.\nEach element in this stack represents a Lua value\n(<b>nil</b>, number, string, etc.).\n\n\n<p>\nWhenever Lua calls C, the called function gets a new stack,\nwhich is independent of previous stacks and of stacks of\nC&nbsp;functions that are still active.\nThis stack initially contains any arguments to the C&nbsp;function\nand it is where the C&nbsp;function pushes its results\nto be returned to the caller (see <a href=\"#lua_CFunction\"><code>lua_CFunction</code></a>).\n\n\n<p>\nFor convenience,\nmost query operations in the API do not follow a strict stack discipline.\nInstead, they can refer to any element in the stack\nby using an <em>index</em>:\nA positive index represents an <em>absolute</em> stack position\n(starting at&nbsp;1);\na negative index represents an <em>offset</em> relative to the top of the stack.\nMore specifically, if the stack has <em>n</em> elements,\nthen index&nbsp;1 represents the first element\n(that is, the element that was pushed onto the stack first)\nand\nindex&nbsp;<em>n</em> represents the last element;\nindex&nbsp;-1 also represents the last element\n(that is, the element at the&nbsp;top)\nand index <em>-n</em> represents the first element.\nWe say that an index is <em>valid</em>\nif it lies between&nbsp;1 and the stack top\n(that is, if <code>1 &le; abs(index) &le; top</code>).\n\n\n<p>\nFor instance, assuming <code>L</code> is a Lua state whose stack is empty,\nafter the following calls\n\n<pre>\n     lua_pushnil(L);              /* stack: nil            */\n     lua_pushnumber(L, 10);       /* stack: nil 10         */\n     lua_pushstring(L, \"hello\");  /* stack: nil 10 \"hello\" */\n</pre><p>\nthe top is&nbsp;3;\nindices 3 and&nbsp;-1 both refer to the string <code>\"hello\"</code>,\nwhile indices 1 and&nbsp;-3 both refer to <b>nil</b>.\n\n\n\n\n\n<h2>3.2 - <a name=\"3.2\">Stack Size</a></h2>\n\n<p>\nWhen you interact with Lua API,\nyou are responsible for ensuring consistency.\nIn particular,\n<em>you are responsible for controlling stack overflow</em>.\nYou can use the 
function <a href=\"#lua_checkstack\"><code>lua_checkstack</code></a>\nto grow the stack size.\n\n\n<p>\nWhenever Lua calls C,\nit ensures that at least <a name=\"pdf-LUA_MINSTACK\"><code>LUA_MINSTACK</code></a> stack positions are available.\n<code>LUA_MINSTACK</code> is defined as 20,\nso that usually you do not have to worry about stack space\nunless your code has loops pushing elements onto the stack.\n\n\n<p>\nMost query functions accept as indices any value inside the\navailable stack space, that is, indices up to the maximum stack size\nyou have set through <a href=\"#lua_checkstack\"><code>lua_checkstack</code></a>.\nSuch indices are called <em>acceptable indices</em>.\nMore formally, we define an <em>acceptable index</em>\nas follows:\n\n<pre>\n     (index &lt; 0 &amp;&amp; abs(index) &lt;= top) ||\n     (index &gt; 0 &amp;&amp; index &lt;= stackspace)\n</pre><p>\nNote that 0 is never an acceptable index.\n\n\n\n\n\n<h2>3.3 - <a name=\"3.3\">Pseudo-Indices</a></h2>\n\n<p>\nUnless otherwise noted,\nany function that accepts valid indices can also be called with\n<em>pseudo-indices</em>,\nwhich represent some Lua values that are accessible to C&nbsp;code\nbut which are not in the stack.\nPseudo-indices are used to access the thread environment,\nthe function environment,\nthe registry,\nand the upvalues of a C&nbsp;function (see <a href=\"#3.4\">&sect;3.4</a>).\n\n\n<p>\nThe thread environment (where global variables live) is\nalways at pseudo-index <a name=\"pdf-LUA_GLOBALSINDEX\"><code>LUA_GLOBALSINDEX</code></a>.\nThe environment of the running C&nbsp;function is always\nat pseudo-index <a name=\"pdf-LUA_ENVIRONINDEX\"><code>LUA_ENVIRONINDEX</code></a>.\n\n\n<p>\nTo access and change the value of global variables,\nyou can use regular table operations over an environment table.\nFor instance, to access the value of a global variable, do\n\n<pre>\n     lua_getfield(L, LUA_GLOBALSINDEX, varname);\n</pre>\n\n\n\n\n<h2>3.4 - <a name=\"3.4\">C 
Closures</a></h2>\n\n<p>\nWhen a C&nbsp;function is created,\nit is possible to associate some values with it,\nthus creating a <em>C&nbsp;closure</em>;\nthese values are called <em>upvalues</em> and are\naccessible to the function whenever it is called\n(see <a href=\"#lua_pushcclosure\"><code>lua_pushcclosure</code></a>).\n\n\n<p>\nWhenever a C&nbsp;function is called,\nits upvalues are located at specific pseudo-indices.\nThese pseudo-indices are produced by the macro\n<a name=\"lua_upvalueindex\"><code>lua_upvalueindex</code></a>.\nThe first value associated with a function is at position\n<code>lua_upvalueindex(1)</code>, and so on.\nAny access to <code>lua_upvalueindex(<em>n</em>)</code>,\nwhere <em>n</em> is greater than the number of upvalues of the\ncurrent function (but not greater than 256),\nproduces an acceptable (but invalid) index.\n\n\n<p>\nAs a sketch,\na C&nbsp;function can keep private state in an upvalue.\nThe following counter returns a new value on each call\nand updates its single upvalue:\n\n<pre>\n     static int counter (lua_State *L) {\n       lua_Number n = lua_tonumber(L, lua_upvalueindex(1));\n       lua_pushnumber(L, n + 1);             /* new value */\n       lua_pushvalue(L, -1);                 /* duplicate it */\n       lua_replace(L, lua_upvalueindex(1));  /* update upvalue */\n       return 1;                             /* return new value */\n     }\n     \n     /* creation: push the initial value, then close over it */\n     lua_pushnumber(L, 0);\n     lua_pushcclosure(L, counter, 1);\n</pre>\n\n\n\n\n\n<h2>3.5 - <a name=\"3.5\">Registry</a></h2>\n\n<p>\nLua provides a <em>registry</em>,\na pre-defined table that can be used by any C&nbsp;code to\nstore whatever Lua value it needs to store.\nThis table is always located at pseudo-index\n<a name=\"pdf-LUA_REGISTRYINDEX\"><code>LUA_REGISTRYINDEX</code></a>.\nAny C&nbsp;library can store data into this table,\nbut it should take care to choose keys different from those used\nby other libraries, to avoid collisions.\nTypically, you should use as key a string containing your library name\nor a light userdata with the address of a C&nbsp;object in your code.\n\n\n<p>\nFor example, a library could keep a private table in the registry\nunder its own name (the key <code>\"mylib\"</code> here is only illustrative):\n\n<pre>\n     lua_newtable(L);\n     lua_setfield(L, LUA_REGISTRYINDEX, \"mylib\");  /* registry[\"mylib\"] = {} */\n     lua_getfield(L, LUA_REGISTRYINDEX, \"mylib\");  /* push it back onto the stack */\n</pre>\n\n<p>\nThe integer keys in the registry are used by the reference mechanism,\nimplemented by the auxiliary library,\nand therefore should not be used for other purposes.\n\n\n\n\n\n<h2>3.6 - <a name=\"3.6\">Error Handling in C</a></h2>\n\n<p>\nInternally, Lua uses the C <code>longjmp</code> facility to handle errors.\n(You can also choose to use exceptions if you use C++;\nsee file <code>luaconf.h</code>.)\nWhen Lua faces any error\n(such as memory allocation errors, type errors, syntax errors,\nand runtime 
errors)\nit <em>raises</em> an error;\nthat is, it does a long jump.\nA <em>protected environment</em> uses <code>setjmp</code>\nto set a recover point;\nany error jumps to the most recent active recover point.\n\n\n<p>\nMost functions in the API can throw an error,\nfor instance due to a memory allocation error.\nThe documentation for each function indicates whether\nit can throw errors.\n\n\n<p>\nInside a C&nbsp;function you can throw an error by calling <a href=\"#lua_error\"><code>lua_error</code></a>.\n\n\n\n\n\n<h2>3.7 - <a name=\"3.7\">Functions and Types</a></h2>\n\n<p>\nHere we list all functions and types from the C&nbsp;API in\nalphabetical order.\nEach function has an indicator like this:\n<span class=\"apii\">[-o, +p, <em>x</em>]</span>\n\n\n<p>\nThe first field, <code>o</code>,\nis how many elements the function pops from the stack.\nThe second field, <code>p</code>,\nis how many elements the function pushes onto the stack.\n(Any function always pushes its results after popping its arguments.)\nA field in the form <code>x|y</code> means the function can push (or pop)\n<code>x</code> or <code>y</code> elements,\ndepending on the situation;\nan interrogation mark '<code>?</code>' means that\nwe cannot know how many elements the function pops/pushes\nby looking only at its arguments\n(e.g., they may depend on what is on the stack).\nThe third field, <code>x</code>,\ntells whether the function may throw errors:\n'<code>-</code>' means the function never throws any error;\n'<code>m</code>' means the function may throw an error\nonly due to not enough memory;\n'<code>e</code>' means the function may throw other kinds of errors;\n'<code>v</code>' means the function may throw an error on purpose.\n\n\n\n<hr><h3><a name=\"lua_Alloc\"><code>lua_Alloc</code></a></h3>\n<pre>typedef void * (*lua_Alloc) (void *ud,\n                             void *ptr,\n                             size_t osize,\n                             size_t nsize);</pre>\n\n<p>\nThe type 
of the memory-allocation function used by Lua states.\nThe allocator function must provide a\nfunctionality similar to <code>realloc</code>,\nbut not exactly the same.\nIts arguments are\n<code>ud</code>, an opaque pointer passed to <a href=\"#lua_newstate\"><code>lua_newstate</code></a>;\n<code>ptr</code>, a pointer to the block being allocated/reallocated/freed;\n<code>osize</code>, the original size of the block;\n<code>nsize</code>, the new size of the block.\n<code>ptr</code> is <code>NULL</code> if and only if <code>osize</code> is zero.\nWhen <code>nsize</code> is zero, the allocator must return <code>NULL</code>;\nif <code>osize</code> is not zero,\nit should free the block pointed to by <code>ptr</code>.\nWhen <code>nsize</code> is not zero, the allocator returns <code>NULL</code>\nif and only if it cannot fill the request.\nWhen <code>nsize</code> is not zero and <code>osize</code> is zero,\nthe allocator should behave like <code>malloc</code>.\nWhen <code>nsize</code> and <code>osize</code> are not zero,\nthe allocator behaves like <code>realloc</code>.\nLua assumes that the allocator never fails when\n<code>osize &gt;= nsize</code>.\n\n\n<p>\nHere is a simple implementation for the allocator function.\nIt is used in the auxiliary library by <a href=\"#luaL_newstate\"><code>luaL_newstate</code></a>.\n\n<pre>\n     static void *l_alloc (void *ud, void *ptr, size_t osize,\n                                                size_t nsize) {\n       (void)ud;  (void)osize;  /* not used */\n       if (nsize == 0) {\n         free(ptr);\n         return NULL;\n       }\n       else\n         return realloc(ptr, nsize);\n     }\n</pre><p>\nThis code assumes\nthat <code>free(NULL)</code> has no effect and that\n<code>realloc(NULL, size)</code> is equivalent to <code>malloc(size)</code>.\nANSI&nbsp;C ensures both behaviors.\n\n\n\n\n\n<hr><h3><a name=\"lua_atpanic\"><code>lua_atpanic</code></a></h3><p>\n<span class=\"apii\">[-0, +0, 
<em>-</em>]</span>\n<pre>lua_CFunction lua_atpanic (lua_State *L, lua_CFunction panicf);</pre>\n\n<p>\nSets a new panic function and returns the old one.\n\n\n<p>\nIf an error happens outside any protected environment,\nLua calls a <em>panic function</em>\nand then calls <code>exit(EXIT_FAILURE)</code>,\nthus exiting the host application.\nYour panic function can avoid this exit by\nnever returning (e.g., doing a long jump).\n\n\n<p>\nThe panic function can access the error message at the top of the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_call\"><code>lua_call</code></a></h3><p>\n<span class=\"apii\">[-(nargs + 1), +nresults, <em>e</em>]</span>\n<pre>void lua_call (lua_State *L, int nargs, int nresults);</pre>\n\n<p>\nCalls a function.\n\n\n<p>\nTo call a function you must use the following protocol:\nfirst, the function to be called is pushed onto the stack;\nthen, the arguments to the function are pushed\nin direct order;\nthat is, the first argument is pushed first.\nFinally you call <a href=\"#lua_call\"><code>lua_call</code></a>;\n<code>nargs</code> is the number of arguments that you pushed onto the stack.\nAll arguments and the function value are popped from the stack\nwhen the function is called.\nThe function results are pushed onto the stack when the function returns.\nThe number of results is adjusted to <code>nresults</code>,\nunless <code>nresults</code> is <a name=\"pdf-LUA_MULTRET\"><code>LUA_MULTRET</code></a>.\nIn this case, <em>all</em> results from the function are pushed.\nLua takes care that the returned values fit into the stack space.\nThe function results are pushed onto the stack in direct order\n(the first result is pushed first),\nso that after the call the last result is on the top of the stack.\n\n\n<p>\nAny error inside the called function is propagated upwards\n(with a <code>longjmp</code>).\n\n\n<p>\nThe following example shows how the host program can do the\nequivalent to this Lua code:\n\n<pre>\n     a = f(\"how\", t.x, 
14)\n</pre><p>\nHere it is in&nbsp;C:\n\n<pre>\n     lua_getfield(L, LUA_GLOBALSINDEX, \"f\"); /* function to be called */\n     lua_pushstring(L, \"how\");                        /* 1st argument */\n     lua_getfield(L, LUA_GLOBALSINDEX, \"t\");   /* table to be indexed */\n     lua_getfield(L, -1, \"x\");        /* push result of t.x (2nd arg) */\n     lua_remove(L, -2);                  /* remove 't' from the stack */\n     lua_pushinteger(L, 14);                          /* 3rd argument */\n     lua_call(L, 3, 1);     /* call 'f' with 3 arguments and 1 result */\n     lua_setfield(L, LUA_GLOBALSINDEX, \"a\");        /* set global 'a' */\n</pre><p>\nNote that the code above is \"balanced\":\nat its end, the stack is back to its original configuration.\nThis is considered good programming practice.\n\n\n\n\n\n<hr><h3><a name=\"lua_CFunction\"><code>lua_CFunction</code></a></h3>\n<pre>typedef int (*lua_CFunction) (lua_State *L);</pre>\n\n<p>\nType for C&nbsp;functions.\n\n\n<p>\nIn order to communicate properly with Lua,\na C&nbsp;function must use the following protocol,\nwhich defines the way parameters and results are passed:\na C&nbsp;function receives its arguments from Lua in its stack\nin direct order (the first argument is pushed first).\nSo, when the function starts,\n<code>lua_gettop(L)</code> returns the number of arguments received by the function.\nThe first argument (if any) is at index 1\nand its last argument is at index <code>lua_gettop(L)</code>.\nTo return values to Lua, a C&nbsp;function just pushes them onto the stack,\nin direct order (the first result is pushed first),\nand returns the number of results.\nAny other value in the stack below the results will be properly\ndiscarded by Lua.\nLike a Lua function, a C&nbsp;function called by Lua can also return\nmany results.\n\n\n<p>\nAs an example, the following function receives a variable number\nof numerical arguments and returns their average and sum:\n\n<pre>\n     static int foo (lua_State 
*L) {\n       int n = lua_gettop(L);    /* number of arguments */\n       lua_Number sum = 0;\n       int i;\n       for (i = 1; i &lt;= n; i++) {\n         if (!lua_isnumber(L, i)) {\n           lua_pushstring(L, \"incorrect argument\");\n           lua_error(L);\n         }\n         sum += lua_tonumber(L, i);\n       }\n       lua_pushnumber(L, sum/n);        /* first result */\n       lua_pushnumber(L, sum);         /* second result */\n       return 2;                   /* number of results */\n     }\n</pre>\n\n\n\n\n<hr><h3><a name=\"lua_checkstack\"><code>lua_checkstack</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>int lua_checkstack (lua_State *L, int extra);</pre>\n\n<p>\nEnsures that there are at least <code>extra</code> free stack slots in the stack.\nIt returns false if it cannot grow the stack to that size.\nThis function never shrinks the stack;\nif the stack is already larger than the new size,\nit is left unchanged.\n\n\n\n\n\n<hr><h3><a name=\"lua_close\"><code>lua_close</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>void lua_close (lua_State *L);</pre>\n\n<p>\nDestroys all objects in the given Lua state\n(calling the corresponding garbage-collection metamethods, if any)\nand frees all dynamic memory used by this state.\nOn several platforms, you may not need to call this function,\nbecause all resources are naturally released when the host program ends.\nOn the other hand, long-running programs,\nsuch as a daemon or a web server,\nmight need to release states as soon as they are not needed,\nto avoid growing too large.\n\n\n\n\n\n<hr><h3><a name=\"lua_concat\"><code>lua_concat</code></a></h3><p>\n<span class=\"apii\">[-n, +1, <em>e</em>]</span>\n<pre>void lua_concat (lua_State *L, int n);</pre>\n\n<p>\nConcatenates the <code>n</code> values at the top of the stack,\npops them, and leaves the result at the top.\nIf <code>n</code>&nbsp;is&nbsp;1, the result is the single value on the 
stack\n(that is, the function does nothing);\nif <code>n</code> is 0, the result is the empty string.\nConcatenation is performed following the usual semantics of Lua\n(see <a href=\"#2.5.4\">&sect;2.5.4</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_cpcall\"><code>lua_cpcall</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>-</em>]</span>\n<pre>int lua_cpcall (lua_State *L, lua_CFunction func, void *ud);</pre>\n\n<p>\nCalls the C&nbsp;function <code>func</code> in protected mode.\n<code>func</code> starts with only one element in its stack,\na light userdata containing <code>ud</code>.\nIn case of errors,\n<a href=\"#lua_cpcall\"><code>lua_cpcall</code></a> returns the same error codes as <a href=\"#lua_pcall\"><code>lua_pcall</code></a>,\nplus the error object on the top of the stack;\notherwise, it returns zero, and does not change the stack.\nAll values returned by <code>func</code> are discarded.\n\n\n\n\n\n<hr><h3><a name=\"lua_createtable\"><code>lua_createtable</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_createtable (lua_State *L, int narr, int nrec);</pre>\n\n<p>\nCreates a new empty table and pushes it onto the stack.\nThe new table has space pre-allocated\nfor <code>narr</code> array elements and <code>nrec</code> non-array elements.\nThis pre-allocation is useful when you know exactly how many elements\nthe table will have.\nOtherwise you can use the function <a href=\"#lua_newtable\"><code>lua_newtable</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"lua_dump\"><code>lua_dump</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>int lua_dump (lua_State *L, lua_Writer writer, void *data);</pre>\n\n<p>\nDumps a function as a binary chunk.\nReceives a Lua function on the top of the stack\nand produces a binary chunk that,\nif loaded again,\nresults in a function equivalent to the one dumped.\nAs it produces parts of the chunk,\n<a href=\"#lua_dump\"><code>lua_dump</code></a> calls function 
<code>writer</code> (see <a href=\"#lua_Writer\"><code>lua_Writer</code></a>)\nwith the given <code>data</code>\nto write them.\n\n\n<p>\nThe value returned is the error code returned by the last\ncall to the writer;\n0&nbsp;means no errors.\n\n\n<p>\nThis function does not pop the Lua function from the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_equal\"><code>lua_equal</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>e</em>]</span>\n<pre>int lua_equal (lua_State *L, int index1, int index2);</pre>\n\n<p>\nReturns 1 if the two values in acceptable indices <code>index1</code> and\n<code>index2</code> are equal,\nfollowing the semantics of the Lua <code>==</code> operator\n(that is, may call metamethods).\nOtherwise returns&nbsp;0.\nAlso returns&nbsp;0 if any of the indices is non valid.\n\n\n\n\n\n<hr><h3><a name=\"lua_error\"><code>lua_error</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>v</em>]</span>\n<pre>int lua_error (lua_State *L);</pre>\n\n<p>\nGenerates a Lua error.\nThe error message (which can actually be a Lua value of any type)\nmust be on the stack top.\nThis function does a long jump,\nand therefore never returns.\n(see <a href=\"#luaL_error\"><code>luaL_error</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_gc\"><code>lua_gc</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>e</em>]</span>\n<pre>int lua_gc (lua_State *L, int what, int data);</pre>\n\n<p>\nControls the garbage collector.\n\n\n<p>\nThis function performs several tasks,\naccording to the value of the parameter <code>what</code>:\n\n<ul>\n\n<li><b><code>LUA_GCSTOP</code>:</b>\nstops the garbage collector.\n</li>\n\n<li><b><code>LUA_GCRESTART</code>:</b>\nrestarts the garbage collector.\n</li>\n\n<li><b><code>LUA_GCCOLLECT</code>:</b>\nperforms a full garbage-collection cycle.\n</li>\n\n<li><b><code>LUA_GCCOUNT</code>:</b>\nreturns the current amount of memory (in Kbytes) in use by Lua.\n</li>\n\n<li><b><code>LUA_GCCOUNTB</code>:</b>\nreturns the remainder of dividing the current 
amount of bytes of\nmemory in use by Lua by 1024.\n</li>\n\n<li><b><code>LUA_GCSTEP</code>:</b>\nperforms an incremental step of garbage collection.\nThe step \"size\" is controlled by <code>data</code>\n(larger values mean more steps) in a non-specified way.\nIf you want to control the step size\nyou must experimentally tune the value of <code>data</code>.\nThe function returns 1 if the step finished a\ngarbage-collection cycle.\n</li>\n\n<li><b><code>LUA_GCSETPAUSE</code>:</b>\nsets <code>data</code> as the new value\nfor the <em>pause</em> of the collector (see <a href=\"#2.10\">&sect;2.10</a>).\nThe function returns the previous value of the pause.\n</li>\n\n<li><b><code>LUA_GCSETSTEPMUL</code>:</b>\nsets <code>data</code> as the new value for the <em>step multiplier</em> of\nthe collector (see <a href=\"#2.10\">&sect;2.10</a>).\nThe function returns the previous value of the step multiplier.\n</li>\n\n</ul>\n\n\n\n\n<hr><h3><a name=\"lua_getallocf\"><code>lua_getallocf</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_Alloc lua_getallocf (lua_State *L, void **ud);</pre>\n\n<p>\nReturns the memory-allocation function of a given state.\nIf <code>ud</code> is not <code>NULL</code>, Lua stores in <code>*ud</code> the\nopaque pointer passed to <a href=\"#lua_newstate\"><code>lua_newstate</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"lua_getfenv\"><code>lua_getfenv</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_getfenv (lua_State *L, int index);</pre>\n\n<p>\nPushes onto the stack the environment table of\nthe value at the given index.\n\n\n\n\n\n<hr><h3><a name=\"lua_getfield\"><code>lua_getfield</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>e</em>]</span>\n<pre>void lua_getfield (lua_State *L, int index, const char *k);</pre>\n\n<p>\nPushes onto the stack the value <code>t[k]</code>,\nwhere <code>t</code> is the value at the given valid index.\nAs in Lua, this function may trigger a 
metamethod\nfor the \"index\" event (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_getglobal\"><code>lua_getglobal</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>e</em>]</span>\n<pre>void lua_getglobal (lua_State *L, const char *name);</pre>\n\n<p>\nPushes onto the stack the value of the global <code>name</code>.\nIt is defined as a macro:\n\n<pre>\n     #define lua_getglobal(L,s)  lua_getfield(L, LUA_GLOBALSINDEX, s)\n</pre>\n\n\n\n\n<hr><h3><a name=\"lua_getmetatable\"><code>lua_getmetatable</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>-</em>]</span>\n<pre>int lua_getmetatable (lua_State *L, int index);</pre>\n\n<p>\nPushes onto the stack the metatable of the value at the given\nacceptable index.\nIf the index is not valid,\nor if the value does not have a metatable,\nthe function returns&nbsp;0 and pushes nothing on the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_gettable\"><code>lua_gettable</code></a></h3><p>\n<span class=\"apii\">[-1, +1, <em>e</em>]</span>\n<pre>void lua_gettable (lua_State *L, int index);</pre>\n\n<p>\nPushes onto the stack the value <code>t[k]</code>,\nwhere <code>t</code> is the value at the given valid index\nand <code>k</code> is the value at the top of the stack.\n\n\n<p>\nThis function pops the key from the stack\n(putting the resulting value in its place).\nAs in Lua, this function may trigger a metamethod\nfor the \"index\" event (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_gettop\"><code>lua_gettop</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_gettop (lua_State *L);</pre>\n\n<p>\nReturns the index of the top element in the stack.\nBecause indices start at&nbsp;1,\nthis result is equal to the number of elements in the stack\n(and so 0&nbsp;means an empty stack).\n\n\n\n\n\n<hr><h3><a name=\"lua_insert\"><code>lua_insert</code></a></h3><p>\n<span class=\"apii\">[-1, +1, <em>-</em>]</span>\n<pre>void lua_insert (lua_State *L, int 
index);</pre>\n\n<p>\nMoves the top element into the given valid index,\nshifting up the elements above this index to open space.\nCannot be called with a pseudo-index,\nbecause a pseudo-index is not an actual stack position.\n\n\n\n\n\n<hr><h3><a name=\"lua_Integer\"><code>lua_Integer</code></a></h3>\n<pre>typedef ptrdiff_t lua_Integer;</pre>\n\n<p>\nThe type used by the Lua API to represent integral values.\n\n\n<p>\nBy default it is a <code>ptrdiff_t</code>,\nwhich is usually the largest signed integral type the machine handles\n\"comfortably\".\n\n\n\n\n\n<hr><h3><a name=\"lua_isboolean\"><code>lua_isboolean</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isboolean (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index has type boolean,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_iscfunction\"><code>lua_iscfunction</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_iscfunction (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a C&nbsp;function,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isfunction\"><code>lua_isfunction</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isfunction (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a function\n(either C or Lua), and 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_islightuserdata\"><code>lua_islightuserdata</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_islightuserdata (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a light userdata,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isnil\"><code>lua_isnil</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isnil (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given 
acceptable index is <b>nil</b>,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isnone\"><code>lua_isnone</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isnone (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the given acceptable index is not valid\n(that is, it refers to an element outside the current stack),\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isnoneornil\"><code>lua_isnoneornil</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isnoneornil (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the given acceptable index is not valid\n(that is, it refers to an element outside the current stack)\nor if the value at this index is <b>nil</b>,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isnumber\"><code>lua_isnumber</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isnumber (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a number\nor a string convertible to a number,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isstring\"><code>lua_isstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isstring (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a string\nor a number (which is always convertible to a string),\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_istable\"><code>lua_istable</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_istable (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a table,\nand 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isthread\"><code>lua_isthread</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isthread (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a thread,\nand 
0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_isuserdata\"><code>lua_isuserdata</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_isuserdata (lua_State *L, int index);</pre>\n\n<p>\nReturns 1 if the value at the given acceptable index is a userdata\n(either full or light), and 0&nbsp;otherwise.\n\n\n\n\n\n<hr><h3><a name=\"lua_lessthan\"><code>lua_lessthan</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>e</em>]</span>\n<pre>int lua_lessthan (lua_State *L, int index1, int index2);</pre>\n\n<p>\nReturns 1 if the value at acceptable index <code>index1</code> is smaller\nthan the value at acceptable index <code>index2</code>,\nfollowing the semantics of the Lua <code>&lt;</code> operator\n(that is, may call metamethods).\nOtherwise returns&nbsp;0.\nAlso returns&nbsp;0 if any of the indices is non valid.\n\n\n\n\n\n<hr><h3><a name=\"lua_load\"><code>lua_load</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>int lua_load (lua_State *L,\n              lua_Reader reader,\n              void *data,\n              const char *chunkname);</pre>\n\n<p>\nLoads a Lua chunk.\nIf there are no errors,\n<a href=\"#lua_load\"><code>lua_load</code></a> pushes the compiled chunk as a Lua\nfunction on top of the stack.\nOtherwise, it pushes an error message.\nThe return values of <a href=\"#lua_load\"><code>lua_load</code></a> are:\n\n<ul>\n\n<li><b>0:</b> no errors;</li>\n\n<li><b><a name=\"pdf-LUA_ERRSYNTAX\"><code>LUA_ERRSYNTAX</code></a>:</b>\nsyntax error during pre-compilation;</li>\n\n<li><b><a href=\"#pdf-LUA_ERRMEM\"><code>LUA_ERRMEM</code></a>:</b>\nmemory allocation error.</li>\n\n</ul>\n\n<p>\nThis function only loads a chunk;\nit does not run it.\n\n\n<p>\n<a href=\"#lua_load\"><code>lua_load</code></a> automatically detects whether the chunk is text or binary,\nand loads it accordingly (see program <code>luac</code>).\n\n\n<p>\nThe <a href=\"#lua_load\"><code>lua_load</code></a> function uses a 
user-supplied <code>reader</code> function\nto read the chunk (see <a href=\"#lua_Reader\"><code>lua_Reader</code></a>).\nThe <code>data</code> argument is an opaque value passed to the reader function.\n\n\n<p>\nThe <code>chunkname</code> argument gives a name to the chunk,\nwhich is used for error messages and in debug information (see <a href=\"#3.8\">&sect;3.8</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_newstate\"><code>lua_newstate</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_State *lua_newstate (lua_Alloc f, void *ud);</pre>\n\n<p>\nCreates a new, independent state.\nReturns <code>NULL</code> if cannot create the state\n(due to lack of memory).\nThe argument <code>f</code> is the allocator function;\nLua does all memory allocation for this state through this function.\nThe second argument, <code>ud</code>, is an opaque pointer that Lua\nsimply passes to the allocator in every call.\n\n\n\n\n\n<hr><h3><a name=\"lua_newtable\"><code>lua_newtable</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_newtable (lua_State *L);</pre>\n\n<p>\nCreates a new empty table and pushes it onto the stack.\nIt is equivalent to <code>lua_createtable(L, 0, 0)</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_newthread\"><code>lua_newthread</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>lua_State *lua_newthread (lua_State *L);</pre>\n\n<p>\nCreates a new thread, pushes it on the stack,\nand returns a pointer to a <a href=\"#lua_State\"><code>lua_State</code></a> that represents this new thread.\nThe new state returned by this function shares with the original state\nall global objects (such as tables),\nbut has an independent execution stack.\n\n\n<p>\nThere is no explicit function to close or to destroy a thread.\nThreads are subject to garbage collection,\nlike any Lua object.\n\n\n\n\n\n<hr><h3><a name=\"lua_newuserdata\"><code>lua_newuserdata</code></a></h3><p>\n<span class=\"apii\">[-0, +1, 
<em>m</em>]</span>\n<pre>void *lua_newuserdata (lua_State *L, size_t size);</pre>\n\n<p>\nThis function allocates a new block of memory with the given size,\npushes onto the stack a new full userdata with the block address,\nand returns this address.\n\n\n<p>\nUserdata represent C&nbsp;values in Lua.\nA <em>full userdata</em> represents a block of memory.\nIt is an object (like a table):\nyou must create it, it can have its own metatable,\nand you can detect when it is being collected.\nA full userdata is only equal to itself (under raw equality).\n\n\n<p>\nWhen Lua collects a full userdata with a <code>gc</code> metamethod,\nLua calls the metamethod and marks the userdata as finalized.\nWhen this userdata is collected again then\nLua frees its corresponding memory.\n\n\n\n\n\n<hr><h3><a name=\"lua_next\"><code>lua_next</code></a></h3><p>\n<span class=\"apii\">[-1, +(2|0), <em>e</em>]</span>\n<pre>int lua_next (lua_State *L, int index);</pre>\n\n<p>\nPops a key from the stack,\nand pushes a key-value pair from the table at the given index\n(the \"next\" pair after the given key).\nIf there are no more elements in the table,\nthen <a href=\"#lua_next\"><code>lua_next</code></a> returns 0 (and pushes nothing).\n\n\n<p>\nA typical traversal looks like this:\n\n<pre>\n     /* table is in the stack at index 't' */\n     lua_pushnil(L);  /* first key */\n     while (lua_next(L, t) != 0) {\n       /* uses 'key' (at index -2) and 'value' (at index -1) */\n       printf(\"%s - %s\\n\",\n              lua_typename(L, lua_type(L, -2)),\n              lua_typename(L, lua_type(L, -1)));\n       /* removes 'value'; keeps 'key' for next iteration */\n       lua_pop(L, 1);\n     }\n</pre>\n\n<p>\nWhile traversing a table,\ndo not call <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> directly on a key,\nunless you know that the key is actually a string.\nRecall that <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> <em>changes</em>\nthe value at the given 
index;\nthis confuses the next call to <a href=\"#lua_next\"><code>lua_next</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"lua_Number\"><code>lua_Number</code></a></h3>\n<pre>typedef double lua_Number;</pre>\n\n<p>\nThe type of numbers in Lua.\nBy default, it is double, but that can be changed in <code>luaconf.h</code>.\n\n\n<p>\nThrough the configuration file you can change\nLua to operate with another type for numbers (e.g., float or long).\n\n\n\n\n\n<hr><h3><a name=\"lua_objlen\"><code>lua_objlen</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>size_t lua_objlen (lua_State *L, int index);</pre>\n\n<p>\nReturns the \"length\" of the value at the given acceptable index:\nfor strings, this is the string length;\nfor tables, this is the result of the length operator ('<code>#</code>');\nfor userdata, this is the size of the block of memory allocated\nfor the userdata;\nfor other values, it is&nbsp;0.\n\n\n\n\n\n<hr><h3><a name=\"lua_pcall\"><code>lua_pcall</code></a></h3><p>\n<span class=\"apii\">[-(nargs + 1), +(nresults|1), <em>-</em>]</span>\n<pre>int lua_pcall (lua_State *L, int nargs, int nresults, int errfunc);</pre>\n\n<p>\nCalls a function in protected mode.\n\n\n<p>\nBoth <code>nargs</code> and <code>nresults</code> have the same meaning as\nin <a href=\"#lua_call\"><code>lua_call</code></a>.\nIf there are no errors during the call,\n<a href=\"#lua_pcall\"><code>lua_pcall</code></a> behaves exactly like <a href=\"#lua_call\"><code>lua_call</code></a>.\nHowever, if there is any error,\n<a href=\"#lua_pcall\"><code>lua_pcall</code></a> catches it,\npushes a single value on the stack (the error message),\nand returns an error code.\nLike <a href=\"#lua_call\"><code>lua_call</code></a>,\n<a href=\"#lua_pcall\"><code>lua_pcall</code></a> always removes the function\nand its arguments from the stack.\n\n\n<p>\nIf <code>errfunc</code> is 0,\nthen the error message returned on the stack\nis exactly the original error message.\nOtherwise, 
<code>errfunc</code> is the stack index of an\n<em>error handler function</em>.\n(In the current implementation, this index cannot be a pseudo-index.)\nIn case of runtime errors,\nthis function will be called with the error message\nand its return value will be the message returned on the stack by <a href=\"#lua_pcall\"><code>lua_pcall</code></a>.\n\n\n<p>\nTypically, the error handler function is used to add more debug\ninformation to the error message, such as a stack traceback.\nSuch information cannot be gathered after the return of <a href=\"#lua_pcall\"><code>lua_pcall</code></a>,\nsince by then the stack has unwound.\n\n\n<p>\nThe <a href=\"#lua_pcall\"><code>lua_pcall</code></a> function returns 0 in case of success\nor one of the following error codes\n(defined in <code>lua.h</code>):\n\n<ul>\n\n<li><b><a name=\"pdf-LUA_ERRRUN\"><code>LUA_ERRRUN</code></a>:</b>\na runtime error.\n</li>\n\n<li><b><a name=\"pdf-LUA_ERRMEM\"><code>LUA_ERRMEM</code></a>:</b>\nmemory allocation error.\nFor such errors, Lua does not call the error handler function.\n</li>\n\n<li><b><a name=\"pdf-LUA_ERRERR\"><code>LUA_ERRERR</code></a>:</b>\nerror while running the error handler function.\n</li>\n\n</ul>\n\n\n\n\n<hr><h3><a name=\"lua_pop\"><code>lua_pop</code></a></h3><p>\n<span class=\"apii\">[-n, +0, <em>-</em>]</span>\n<pre>void lua_pop (lua_State *L, int n);</pre>\n\n<p>\nPops <code>n</code> elements from the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushboolean\"><code>lua_pushboolean</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_pushboolean (lua_State *L, int b);</pre>\n\n<p>\nPushes a boolean value with value <code>b</code> onto the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushcclosure\"><code>lua_pushcclosure</code></a></h3><p>\n<span class=\"apii\">[-n, +1, <em>m</em>]</span>\n<pre>void lua_pushcclosure (lua_State *L, lua_CFunction fn, int n);</pre>\n\n<p>\nPushes a new C&nbsp;closure onto the stack.\n\n\n<p>\nWhen a C&nbsp;function 
is created,\nit is possible to associate some values with it,\nthus creating a C&nbsp;closure (see <a href=\"#3.4\">&sect;3.4</a>);\nthese values are then accessible to the function whenever it is called.\nTo associate values with a C&nbsp;function,\nfirst these values should be pushed onto the stack\n(when there are multiple values, the first value is pushed first).\nThen <a href=\"#lua_pushcclosure\"><code>lua_pushcclosure</code></a>\nis called to create and push the C&nbsp;function onto the stack,\nwith the argument <code>n</code> telling how many values should be\nassociated with the function.\n<a href=\"#lua_pushcclosure\"><code>lua_pushcclosure</code></a> also pops these values from the stack.\n\n\n<p>\nThe maximum value for <code>n</code> is 255.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushcfunction\"><code>lua_pushcfunction</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_pushcfunction (lua_State *L, lua_CFunction f);</pre>\n\n<p>\nPushes a C&nbsp;function onto the stack.\nThis function receives a pointer to a C function\nand pushes onto the stack a Lua value of type <code>function</code> that,\nwhen called, invokes the corresponding C&nbsp;function.\n\n\n<p>\nAny function to be registered in Lua must\nfollow the correct protocol to receive its parameters\nand return its results (see <a href=\"#lua_CFunction\"><code>lua_CFunction</code></a>).\n\n\n<p>\n<code>lua_pushcfunction</code> is defined as a macro:\n\n<pre>\n     #define lua_pushcfunction(L,f)  lua_pushcclosure(L,f,0)\n</pre>\n\n\n\n\n<hr><h3><a name=\"lua_pushfstring\"><code>lua_pushfstring</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>const char *lua_pushfstring (lua_State *L, const char *fmt, ...);</pre>\n\n<p>\nPushes onto the stack a formatted string\nand returns a pointer to this string.\nIt is similar to the C&nbsp;function <code>sprintf</code>,\nbut has some important differences:\n\n<ul>\n\n<li>\nYou do not have to allocate space for 
the result:\nthe result is a Lua string and Lua takes care of memory allocation\n(and deallocation, through garbage collection).\n</li>\n\n<li>\nThe conversion specifiers are quite restricted.\nThere are no flags, widths, or precisions.\nThe conversion specifiers can only be\n'<code>%%</code>' (inserts a '<code>%</code>' in the string),\n'<code>%s</code>' (inserts a zero-terminated string, with no size restrictions),\n'<code>%f</code>' (inserts a <a href=\"#lua_Number\"><code>lua_Number</code></a>),\n'<code>%p</code>' (inserts a pointer as a hexadecimal numeral),\n'<code>%d</code>' (inserts an <code>int</code>), and\n'<code>%c</code>' (inserts an <code>int</code> as a character).\n</li>\n\n</ul>\n\n\n\n\n<hr><h3><a name=\"lua_pushinteger\"><code>lua_pushinteger</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_pushinteger (lua_State *L, lua_Integer n);</pre>\n\n<p>\nPushes a number with value <code>n</code> onto the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushlightuserdata\"><code>lua_pushlightuserdata</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_pushlightuserdata (lua_State *L, void *p);</pre>\n\n<p>\nPushes a light userdata onto the stack.\n\n\n<p>\nUserdata represent C&nbsp;values in Lua.\nA <em>light userdata</em> represents a pointer.\nIt is a value (like a number):\nyou do not create it, it has no individual metatable,\nand it is not collected (as it was never created).\nA light userdata is equal to \"any\"\nlight userdata with the same C&nbsp;address.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushliteral\"><code>lua_pushliteral</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_pushliteral (lua_State *L, const char *s);</pre>\n\n<p>\nThis macro is equivalent to <a href=\"#lua_pushlstring\"><code>lua_pushlstring</code></a>,\nbut can be used only when <code>s</code> is a literal string.\nIn these cases, it automatically provides the string 
length.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushlstring\"><code>lua_pushlstring</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_pushlstring (lua_State *L, const char *s, size_t len);</pre>\n\n<p>\nPushes the string pointed to by <code>s</code> with size <code>len</code>\nonto the stack.\nLua makes (or reuses) an internal copy of the given string,\nso the memory at <code>s</code> can be freed or reused immediately after\nthe function returns.\nThe string can contain embedded zeros.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushnil\"><code>lua_pushnil</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_pushnil (lua_State *L);</pre>\n\n<p>\nPushes a nil value onto the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushnumber\"><code>lua_pushnumber</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_pushnumber (lua_State *L, lua_Number n);</pre>\n\n<p>\nPushes a number with value <code>n</code> onto the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushstring\"><code>lua_pushstring</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void lua_pushstring (lua_State *L, const char *s);</pre>\n\n<p>\nPushes the zero-terminated string pointed to by <code>s</code>\nonto the stack.\nLua makes (or reuses) an internal copy of the given string,\nso the memory at <code>s</code> can be freed or reused immediately after\nthe function returns.\nThe string cannot contain embedded zeros;\nit is assumed to end at the first zero.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushthread\"><code>lua_pushthread</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>int lua_pushthread (lua_State *L);</pre>\n\n<p>\nPushes the thread represented by <code>L</code> onto the stack.\nReturns 1 if this thread is the main thread of its state.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushvalue\"><code>lua_pushvalue</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void 
lua_pushvalue (lua_State *L, int index);</pre>\n\n<p>\nPushes a copy of the element at the given valid index\nonto the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_pushvfstring\"><code>lua_pushvfstring</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>const char *lua_pushvfstring (lua_State *L,\n                              const char *fmt,\n                              va_list argp);</pre>\n\n<p>\nEquivalent to <a href=\"#lua_pushfstring\"><code>lua_pushfstring</code></a>, except that it receives a <code>va_list</code>\ninstead of a variable number of arguments.\n\n\n\n\n\n<hr><h3><a name=\"lua_rawequal\"><code>lua_rawequal</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_rawequal (lua_State *L, int index1, int index2);</pre>\n\n<p>\nReturns 1 if the two values in acceptable indices <code>index1</code> and\n<code>index2</code> are primitively equal\n(that is, without calling metamethods).\nOtherwise returns&nbsp;0.\nAlso returns&nbsp;0 if any of the indices is non-valid.\n\n\n\n\n\n<hr><h3><a name=\"lua_rawget\"><code>lua_rawget</code></a></h3><p>\n<span class=\"apii\">[-1, +1, <em>-</em>]</span>\n<pre>void lua_rawget (lua_State *L, int index);</pre>\n\n<p>\nSimilar to <a href=\"#lua_gettable\"><code>lua_gettable</code></a>, but does a raw access\n(i.e., without metamethods).\n\n\n\n\n\n<hr><h3><a name=\"lua_rawgeti\"><code>lua_rawgeti</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void lua_rawgeti (lua_State *L, int index, int n);</pre>\n\n<p>\nPushes onto the stack the value <code>t[n]</code>,\nwhere <code>t</code> is the value at the given valid index.\nThe access is raw;\nthat is, it does not invoke metamethods.\n\n\n\n\n\n<hr><h3><a name=\"lua_rawset\"><code>lua_rawset</code></a></h3><p>\n<span class=\"apii\">[-2, +0, <em>m</em>]</span>\n<pre>void lua_rawset (lua_State *L, int index);</pre>\n\n<p>\nSimilar to <a href=\"#lua_settable\"><code>lua_settable</code></a>, but does a 
raw assignment\n(i.e., without metamethods).\n\n\n\n\n\n<hr><h3><a name=\"lua_rawseti\"><code>lua_rawseti</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>m</em>]</span>\n<pre>void lua_rawseti (lua_State *L, int index, int n);</pre>\n\n<p>\nDoes the equivalent of <code>t[n] = v</code>,\nwhere <code>t</code> is the value at the given valid index\nand <code>v</code> is the value at the top of the stack.\n\n\n<p>\nThis function pops the value from the stack.\nThe assignment is raw;\nthat is, it does not invoke metamethods.\n\n\n\n\n\n<hr><h3><a name=\"lua_Reader\"><code>lua_Reader</code></a></h3>\n<pre>typedef const char * (*lua_Reader) (lua_State *L,\n                                    void *data,\n                                    size_t *size);</pre>\n\n<p>\nThe reader function used by <a href=\"#lua_load\"><code>lua_load</code></a>.\nEvery time it needs another piece of the chunk,\n<a href=\"#lua_load\"><code>lua_load</code></a> calls the reader,\npassing along its <code>data</code> parameter.\nThe reader must return a pointer to a block of memory\nwith a new piece of the chunk\nand set <code>size</code> to the block size.\nThe block must exist until the reader function is called again.\nTo signal the end of the chunk,\nthe reader must return <code>NULL</code> or set <code>size</code> to zero.\nThe reader function may return pieces of any size greater than zero.\n\n\n\n\n\n<hr><h3><a name=\"lua_register\"><code>lua_register</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>e</em>]</span>\n<pre>void lua_register (lua_State *L,\n                   const char *name,\n                   lua_CFunction f);</pre>\n\n<p>\nSets the C function <code>f</code> as the new value of global <code>name</code>.\nIt is defined as a macro:\n\n<pre>\n     #define lua_register(L,n,f) \\\n            (lua_pushcfunction(L, f), lua_setglobal(L, n))\n</pre>\n\n\n\n\n<hr><h3><a name=\"lua_remove\"><code>lua_remove</code></a></h3><p>\n<span class=\"apii\">[-1, +0, 
<em>-</em>]</span>\n<pre>void lua_remove (lua_State *L, int index);</pre>\n\n<p>\nRemoves the element at the given valid index,\nshifting down the elements above this index to fill the gap.\nCannot be called with a pseudo-index,\nbecause a pseudo-index is not an actual stack position.\n\n\n\n\n\n<hr><h3><a name=\"lua_replace\"><code>lua_replace</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>-</em>]</span>\n<pre>void lua_replace (lua_State *L, int index);</pre>\n\n<p>\nMoves the top element into the given position (and pops it),\nwithout shifting any element\n(therefore replacing the value at the given position).\n\n\n\n\n\n<hr><h3><a name=\"lua_resume\"><code>lua_resume</code></a></h3><p>\n<span class=\"apii\">[-?, +?, <em>-</em>]</span>\n<pre>int lua_resume (lua_State *L, int narg);</pre>\n\n<p>\nStarts and resumes a coroutine in a given thread.\n\n\n<p>\nTo start a coroutine, you first create a new thread\n(see <a href=\"#lua_newthread\"><code>lua_newthread</code></a>);\nthen you push onto its stack the main function plus any arguments;\nthen you call <a href=\"#lua_resume\"><code>lua_resume</code></a>,\nwith <code>narg</code> being the number of arguments.\nThis call returns when the coroutine suspends or finishes its execution.\nWhen it returns, the stack contains all values passed to <a href=\"#lua_yield\"><code>lua_yield</code></a>,\nor all values returned by the body function.\n<a href=\"#lua_resume\"><code>lua_resume</code></a> returns\n<a href=\"#pdf-LUA_YIELD\"><code>LUA_YIELD</code></a> if the coroutine yields,\n0 if the coroutine finishes its execution\nwithout errors,\nor an error code in case of errors (see <a href=\"#lua_pcall\"><code>lua_pcall</code></a>).\nIn case of errors,\nthe stack is not unwound,\nso you can use the debug API over it.\nThe error message is on the top of the stack.\nTo restart a coroutine, you put on its stack only the values to\nbe passed as results from <code>yield</code>,\nand then call <a 
href=\"#lua_resume\"><code>lua_resume</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"lua_setallocf\"><code>lua_setallocf</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>void lua_setallocf (lua_State *L, lua_Alloc f, void *ud);</pre>\n\n<p>\nChanges the allocator function of a given state to <code>f</code>\nwith user data <code>ud</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_setfenv\"><code>lua_setfenv</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>-</em>]</span>\n<pre>int lua_setfenv (lua_State *L, int index);</pre>\n\n<p>\nPops a table from the stack and sets it as\nthe new environment for the value at the given index.\nIf the value at the given index is\nneither a function nor a thread nor a userdata,\n<a href=\"#lua_setfenv\"><code>lua_setfenv</code></a> returns 0.\nOtherwise it returns 1.\n\n\n\n\n\n<hr><h3><a name=\"lua_setfield\"><code>lua_setfield</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>e</em>]</span>\n<pre>void lua_setfield (lua_State *L, int index, const char *k);</pre>\n\n<p>\nDoes the equivalent of <code>t[k] = v</code>,\nwhere <code>t</code> is the value at the given valid index\nand <code>v</code> is the value at the top of the stack.\n\n\n<p>\nThis function pops the value from the stack.\nAs in Lua, this function may trigger a metamethod\nfor the \"newindex\" event (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_setglobal\"><code>lua_setglobal</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>e</em>]</span>\n<pre>void lua_setglobal (lua_State *L, const char *name);</pre>\n\n<p>\nPops a value from the stack and\nsets it as the new value of global <code>name</code>.\nIt is defined as a macro:\n\n<pre>\n     #define lua_setglobal(L,s)   lua_setfield(L, LUA_GLOBALSINDEX, s)\n</pre>\n\n\n\n\n<hr><h3><a name=\"lua_setmetatable\"><code>lua_setmetatable</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>-</em>]</span>\n<pre>int lua_setmetatable (lua_State *L, int index);</pre>\n\n<p>\nPops a 
table from the stack and\nsets it as the new metatable for the value at the given\nacceptable index.\n\n\n\n\n\n<hr><h3><a name=\"lua_settable\"><code>lua_settable</code></a></h3><p>\n<span class=\"apii\">[-2, +0, <em>e</em>]</span>\n<pre>void lua_settable (lua_State *L, int index);</pre>\n\n<p>\nDoes the equivalent of <code>t[k] = v</code>,\nwhere <code>t</code> is the value at the given valid index,\n<code>v</code> is the value at the top of the stack,\nand <code>k</code> is the value just below the top.\n\n\n<p>\nThis function pops both the key and the value from the stack.\nAs in Lua, this function may trigger a metamethod\nfor the \"newindex\" event (see <a href=\"#2.8\">&sect;2.8</a>).\n\n\n\n\n\n<hr><h3><a name=\"lua_settop\"><code>lua_settop</code></a></h3><p>\n<span class=\"apii\">[-?, +?, <em>-</em>]</span>\n<pre>void lua_settop (lua_State *L, int index);</pre>\n\n<p>\nAccepts any acceptable index, or&nbsp;0,\nand sets the stack top to this index.\nIf the new top is larger than the old one,\nthen the new elements are filled with <b>nil</b>.\nIf <code>index</code> is&nbsp;0, then all stack elements are removed.\n\n\n\n\n\n<hr><h3><a name=\"lua_State\"><code>lua_State</code></a></h3>\n<pre>typedef struct lua_State lua_State;</pre>\n\n<p>\nOpaque structure that keeps the whole state of a Lua interpreter.\nThe Lua library is fully reentrant:\nit has no global variables.\nAll information about a state is kept in this structure.\n\n\n<p>\nA pointer to this state must be passed as the first argument to\nevery function in the library, except to <a href=\"#lua_newstate\"><code>lua_newstate</code></a>,\nwhich creates a Lua state from scratch.\n\n\n\n\n\n<hr><h3><a name=\"lua_status\"><code>lua_status</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_status (lua_State *L);</pre>\n\n<p>\nReturns the status of the thread <code>L</code>.\n\n\n<p>\nThe status can be 0 for a normal thread,\nan error code if the thread finished its 
execution with an error,\nor <a name=\"pdf-LUA_YIELD\"><code>LUA_YIELD</code></a> if the thread is suspended.\n\n\n\n\n\n<hr><h3><a name=\"lua_toboolean\"><code>lua_toboolean</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_toboolean (lua_State *L, int index);</pre>\n\n<p>\nConverts the Lua value at the given acceptable index to a C&nbsp;boolean\nvalue (0&nbsp;or&nbsp;1).\nLike all tests in Lua,\n<a href=\"#lua_toboolean\"><code>lua_toboolean</code></a> returns 1 for any Lua value\ndifferent from <b>false</b> and <b>nil</b>;\notherwise it returns 0.\nIt also returns 0 when called with a non-valid index.\n(If you want to accept only actual boolean values,\nuse <a href=\"#lua_isboolean\"><code>lua_isboolean</code></a> to test the value's type.)\n\n\n\n\n\n<hr><h3><a name=\"lua_tocfunction\"><code>lua_tocfunction</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_CFunction lua_tocfunction (lua_State *L, int index);</pre>\n\n<p>\nConverts a value at the given acceptable index to a C&nbsp;function.\nThat value must be a C&nbsp;function;\notherwise, returns <code>NULL</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_tointeger\"><code>lua_tointeger</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_Integer lua_tointeger (lua_State *L, int index);</pre>\n\n<p>\nConverts the Lua value at the given acceptable index\nto the signed integral type <a href=\"#lua_Integer\"><code>lua_Integer</code></a>.\nThe Lua value must be a number or a string convertible to a number\n(see <a href=\"#2.2.1\">&sect;2.2.1</a>);\notherwise, <a href=\"#lua_tointeger\"><code>lua_tointeger</code></a> returns&nbsp;0.\n\n\n<p>\nIf the number is not an integer,\nit is truncated in some non-specified way.\n\n\n\n\n\n<hr><h3><a name=\"lua_tolstring\"><code>lua_tolstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>const char *lua_tolstring (lua_State *L, int index, size_t 
*len);</pre>\n\n<p>\nConverts the Lua value at the given acceptable index to a C&nbsp;string.\nIf <code>len</code> is not <code>NULL</code>,\nit also sets <code>*len</code> with the string length.\nThe Lua value must be a string or a number;\notherwise, the function returns <code>NULL</code>.\nIf the value is a number,\nthen <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> also\n<em>changes the actual value in the stack to a string</em>.\n(This change confuses <a href=\"#lua_next\"><code>lua_next</code></a>\nwhen <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> is applied to keys during a table traversal.)\n\n\n<p>\n<a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> returns a fully aligned pointer\nto a string inside the Lua state.\nThis string always has a zero ('<code>\\0</code>')\nafter its last character (as in&nbsp;C),\nbut can contain other zeros in its body.\nBecause Lua has garbage collection,\nthere is no guarantee that the pointer returned by <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a>\nwill be valid after the corresponding value is removed from the stack.\n\n\n\n\n\n<hr><h3><a name=\"lua_tonumber\"><code>lua_tonumber</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_Number lua_tonumber (lua_State *L, int index);</pre>\n\n<p>\nConverts the Lua value at the given acceptable index\nto the C&nbsp;type <a href=\"#lua_Number\"><code>lua_Number</code></a> (see <a href=\"#lua_Number\"><code>lua_Number</code></a>).\nThe Lua value must be a number or a string convertible to a number\n(see <a href=\"#2.2.1\">&sect;2.2.1</a>);\notherwise, <a href=\"#lua_tonumber\"><code>lua_tonumber</code></a> returns&nbsp;0.\n\n\n\n\n\n<hr><h3><a name=\"lua_topointer\"><code>lua_topointer</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>const void *lua_topointer (lua_State *L, int index);</pre>\n\n<p>\nConverts the value at the given acceptable index to a generic\nC&nbsp;pointer 
(<code>void*</code>).\nThe value can be a userdata, a table, a thread, or a function;\notherwise, <a href=\"#lua_topointer\"><code>lua_topointer</code></a> returns <code>NULL</code>.\nDifferent objects will give different pointers.\nThere is no way to convert the pointer back to its original value.\n\n\n<p>\nTypically this function is used only for debug information.\n\n\n\n\n\n<hr><h3><a name=\"lua_tostring\"><code>lua_tostring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>const char *lua_tostring (lua_State *L, int index);</pre>\n\n<p>\nEquivalent to <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> with <code>len</code> equal to <code>NULL</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_tothread\"><code>lua_tothread</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_State *lua_tothread (lua_State *L, int index);</pre>\n\n<p>\nConverts the value at the given acceptable index to a Lua thread\n(represented as <code>lua_State*</code>).\nThis value must be a thread;\notherwise, the function returns <code>NULL</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_touserdata\"><code>lua_touserdata</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>void *lua_touserdata (lua_State *L, int index);</pre>\n\n<p>\nIf the value at the given acceptable index is a full userdata,\nreturns its block address.\nIf the value is a light userdata,\nreturns its pointer.\nOtherwise, returns <code>NULL</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_type\"><code>lua_type</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_type (lua_State *L, int index);</pre>\n\n<p>\nReturns the type of the value in the given acceptable index,\nor <code>LUA_TNONE</code> for a non-valid index\n(that is, an index to an \"empty\" stack position).\nThe types returned by <a href=\"#lua_type\"><code>lua_type</code></a> are coded by the following constants\ndefined in 
<code>lua.h</code>:\n<code>LUA_TNIL</code>,\n<code>LUA_TNUMBER</code>,\n<code>LUA_TBOOLEAN</code>,\n<code>LUA_TSTRING</code>,\n<code>LUA_TTABLE</code>,\n<code>LUA_TFUNCTION</code>,\n<code>LUA_TUSERDATA</code>,\n<code>LUA_TTHREAD</code>,\nand\n<code>LUA_TLIGHTUSERDATA</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_typename\"><code>lua_typename</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>const char *lua_typename  (lua_State *L, int tp);</pre>\n\n<p>\nReturns the name of the type encoded by the value <code>tp</code>,\nwhich must be one of the values returned by <a href=\"#lua_type\"><code>lua_type</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"lua_Writer\"><code>lua_Writer</code></a></h3>\n<pre>typedef int (*lua_Writer) (lua_State *L,\n                           const void* p,\n                           size_t sz,\n                           void* ud);</pre>\n\n<p>\nThe type of the writer function used by <a href=\"#lua_dump\"><code>lua_dump</code></a>.\nEvery time it produces another piece of the chunk,\n<a href=\"#lua_dump\"><code>lua_dump</code></a> calls the writer,\npassing along the buffer to be written (<code>p</code>),\nits size (<code>sz</code>),\nand the <code>data</code> parameter supplied to <a href=\"#lua_dump\"><code>lua_dump</code></a>.\n\n\n<p>\nThe writer returns an error code:\n0&nbsp;means no errors;\nany other value means an error and stops <a href=\"#lua_dump\"><code>lua_dump</code></a> from\ncalling the writer again.\n\n\n\n\n\n<hr><h3><a name=\"lua_xmove\"><code>lua_xmove</code></a></h3><p>\n<span class=\"apii\">[-?, +?, <em>-</em>]</span>\n<pre>void lua_xmove (lua_State *from, lua_State *to, int n);</pre>\n\n<p>\nExchanges values between different threads of the <em>same</em> global state.\n\n\n<p>\nThis function pops <code>n</code> values from the stack <code>from</code>,\nand pushes them onto the stack <code>to</code>.\n\n\n\n\n\n<hr><h3><a name=\"lua_yield\"><code>lua_yield</code></a></h3><p>\n<span class=\"apii\">[-?, +?, 
<em>-</em>]</span>\n<pre>int lua_yield  (lua_State *L, int nresults);</pre>\n\n<p>\nYields a coroutine.\n\n\n<p>\nThis function should only be called as the\nreturn expression of a C&nbsp;function, as follows:\n\n<pre>\n     return lua_yield (L, nresults);\n</pre><p>\nWhen a C&nbsp;function calls <a href=\"#lua_yield\"><code>lua_yield</code></a> in that way,\nthe running coroutine suspends its execution,\nand the call to <a href=\"#lua_resume\"><code>lua_resume</code></a> that started this coroutine returns.\nThe parameter <code>nresults</code> is the number of values from the stack\nthat are passed as results to <a href=\"#lua_resume\"><code>lua_resume</code></a>.\n\n\n\n\n\n\n\n<h2>3.8 - <a name=\"3.8\">The Debug Interface</a></h2>\n\n<p>\nLua has no built-in debugging facilities.\nInstead, it offers a special interface\nby means of functions and <em>hooks</em>.\nThis interface allows the construction of different\nkinds of debuggers, profilers, and other tools\nthat need \"inside information\" from the interpreter.\n\n\n\n<hr><h3><a name=\"lua_Debug\"><code>lua_Debug</code></a></h3>\n<pre>typedef struct lua_Debug {\n  int event;\n  const char *name;           /* (n) */\n  const char *namewhat;       /* (n) */\n  const char *what;           /* (S) */\n  const char *source;         /* (S) */\n  int currentline;            /* (l) */\n  int nups;                   /* (u) number of upvalues */\n  int linedefined;            /* (S) */\n  int lastlinedefined;        /* (S) */\n  char short_src[LUA_IDSIZE]; /* (S) */\n  /* private part */\n  <em>other fields</em>\n} lua_Debug;</pre>\n\n<p>\nA structure used to carry different pieces of\ninformation about an active function.\n<a href=\"#lua_getstack\"><code>lua_getstack</code></a> fills only the private part\nof this structure, for later use.\nTo fill the other fields of <a href=\"#lua_Debug\"><code>lua_Debug</code></a> with useful information,\ncall <a href=\"#lua_getinfo\"><code>lua_getinfo</code></a>.\n\n\n<p>\nThe 
fields of <a href=\"#lua_Debug\"><code>lua_Debug</code></a> have the following meaning:\n\n<ul>\n\n<li><b><code>source</code>:</b>\nIf the function was defined in a string,\nthen <code>source</code> is that string.\nIf the function was defined in a file,\nthen <code>source</code> starts with a '<code>@</code>' followed by the file name.\n</li>\n\n<li><b><code>short_src</code>:</b>\na \"printable\" version of <code>source</code>, to be used in error messages.\n</li>\n\n<li><b><code>linedefined</code>:</b>\nthe line number where the definition of the function starts.\n</li>\n\n<li><b><code>lastlinedefined</code>:</b>\nthe line number where the definition of the function ends.\n</li>\n\n<li><b><code>what</code>:</b>\nthe string <code>\"Lua\"</code> if the function is a Lua function,\n<code>\"C\"</code> if it is a C&nbsp;function,\n<code>\"main\"</code> if it is the main part of a chunk,\nand <code>\"tail\"</code> if it was a function that did a tail call.\nIn the latter case,\nLua has no other information about the function.\n</li>\n\n<li><b><code>currentline</code>:</b>\nthe current line where the given function is executing.\nWhen no line information is available,\n<code>currentline</code> is set to -1.\n</li>\n\n<li><b><code>name</code>:</b>\na reasonable name for the given function.\nBecause functions in Lua are first-class values,\nthey do not have a fixed name:\nsome functions can be the value of multiple global variables,\nwhile others can be stored only in a table field.\nThe <code>lua_getinfo</code> function checks how the function was\ncalled to find a suitable name.\nIf it cannot find a name,\nthen <code>name</code> is set to <code>NULL</code>.\n</li>\n\n<li><b><code>namewhat</code>:</b>\nexplains the <code>name</code> field.\nThe value of <code>namewhat</code> can be\n<code>\"global\"</code>, <code>\"local\"</code>, <code>\"method\"</code>,\n<code>\"field\"</code>, <code>\"upvalue\"</code>, or <code>\"\"</code> (the empty string),\naccording to how the 
function was called.\n(Lua uses the empty string when no other option seems to apply.)\n</li>\n\n<li><b><code>nups</code>:</b>\nthe number of upvalues of the function.\n</li>\n\n</ul>\n\n\n\n\n<hr><h3><a name=\"lua_gethook\"><code>lua_gethook</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_Hook lua_gethook (lua_State *L);</pre>\n\n<p>\nReturns the current hook function.\n\n\n\n\n\n<hr><h3><a name=\"lua_gethookcount\"><code>lua_gethookcount</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_gethookcount (lua_State *L);</pre>\n\n<p>\nReturns the current hook count.\n\n\n\n\n\n<hr><h3><a name=\"lua_gethookmask\"><code>lua_gethookmask</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_gethookmask (lua_State *L);</pre>\n\n<p>\nReturns the current hook mask.\n\n\n\n\n\n<hr><h3><a name=\"lua_getinfo\"><code>lua_getinfo</code></a></h3><p>\n<span class=\"apii\">[-(0|1), +(0|1|2), <em>m</em>]</span>\n<pre>int lua_getinfo (lua_State *L, const char *what, lua_Debug *ar);</pre>\n\n<p>\nReturns information about a specific function or function invocation.\n\n\n<p>\nTo get information about a function invocation,\nthe parameter <code>ar</code> must be a valid activation record that was\nfilled by a previous call to <a href=\"#lua_getstack\"><code>lua_getstack</code></a> or\ngiven as argument to a hook (see <a href=\"#lua_Hook\"><code>lua_Hook</code></a>).\n\n\n<p>\nTo get information about a function, you push it onto the stack\nand start the <code>what</code> string with the character '<code>&gt;</code>'.\n(In that case,\n<code>lua_getinfo</code> pops the function from the top of the stack.)\nFor instance, to know in which line a function <code>f</code> was defined,\nyou can write the following code:\n\n<pre>\n     lua_Debug ar;\n     lua_getfield(L, LUA_GLOBALSINDEX, \"f\");  /* get global 'f' */\n     lua_getinfo(L, \"&gt;S\", &amp;ar);\n     printf(\"%d\\n\", 
ar.linedefined);\n</pre>\n\n<p>\nEach character in the string <code>what</code>\nselects some fields of the structure <code>ar</code> to be filled or\na value to be pushed on the stack:\n\n<ul>\n\n<li><b>'<code>n</code>':</b> fills in the fields <code>name</code> and <code>namewhat</code>;\n</li>\n\n<li><b>'<code>S</code>':</b>\nfills in the fields <code>source</code>, <code>short_src</code>,\n<code>linedefined</code>, <code>lastlinedefined</code>, and <code>what</code>;\n</li>\n\n<li><b>'<code>l</code>':</b> fills in the field <code>currentline</code>;\n</li>\n\n<li><b>'<code>u</code>':</b> fills in the field <code>nups</code>;\n</li>\n\n<li><b>'<code>f</code>':</b>\npushes onto the stack the function that is\nrunning at the given level;\n</li>\n\n<li><b>'<code>L</code>':</b>\npushes onto the stack a table whose indices are the\nnumbers of the lines that are valid for the function.\n(A <em>valid line</em> is a line with some associated code,\nthat is, a line where you can put a breakpoint.\nNon-valid lines include empty lines and comments.)\n</li>\n\n</ul>\n\n<p>\nThis function returns 0 on error\n(for instance, an invalid option in <code>what</code>).\n\n\n\n\n\n<hr><h3><a name=\"lua_getlocal\"><code>lua_getlocal</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>-</em>]</span>\n<pre>const char *lua_getlocal (lua_State *L, lua_Debug *ar, int n);</pre>\n\n<p>\nGets information about a local variable of a given activation record.\nThe parameter <code>ar</code> must be a valid activation record that was\nfilled by a previous call to <a href=\"#lua_getstack\"><code>lua_getstack</code></a> or\ngiven as argument to a hook (see <a href=\"#lua_Hook\"><code>lua_Hook</code></a>).\nThe index <code>n</code> selects which local variable to inspect\n(1 is the first parameter or active local variable, and so on,\nuntil the last active local variable).\n<a href=\"#lua_getlocal\"><code>lua_getlocal</code></a> pushes the variable's value onto the stack\nand returns its 
name.\n\n\n<p>\nVariable names starting with '<code>(</code>' (open parenthesis)\nrepresent internal variables\n(loop control variables, temporaries, and C&nbsp;function locals).\n\n\n<p>\nReturns <code>NULL</code> (and pushes nothing)\nwhen the index is greater than\nthe number of active local variables.\n\n\n\n\n\n<hr><h3><a name=\"lua_getstack\"><code>lua_getstack</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_getstack (lua_State *L, int level, lua_Debug *ar);</pre>\n\n<p>\nGets information about the interpreter runtime stack.\n\n\n<p>\nThis function fills parts of a <a href=\"#lua_Debug\"><code>lua_Debug</code></a> structure with\nan identification of the <em>activation record</em>\nof the function executing at a given level.\nLevel&nbsp;0 is the current running function,\nwhereas level <em>n+1</em> is the function that has called level <em>n</em>.\nWhen there are no errors, <a href=\"#lua_getstack\"><code>lua_getstack</code></a> returns 1;\nwhen called with a level greater than the stack depth,\nit returns 0.\n\n\n\n\n\n<hr><h3><a name=\"lua_getupvalue\"><code>lua_getupvalue</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>-</em>]</span>\n<pre>const char *lua_getupvalue (lua_State *L, int funcindex, int n);</pre>\n\n<p>\nGets information about a closure's upvalue.\n(For Lua functions,\nupvalues are the external local variables that the function uses,\nand that are consequently included in its closure.)\n<a href=\"#lua_getupvalue\"><code>lua_getupvalue</code></a> gets the upvalue with index <code>n</code>,\npushes the upvalue's value onto the stack,\nand returns its name.\n<code>funcindex</code> points to the closure in the stack.\n(Upvalues have no particular order,\nas they are active through the whole function.\nSo, they are numbered in an arbitrary order.)\n\n\n<p>\nReturns <code>NULL</code> (and pushes nothing)\nwhen the index is greater than the number of upvalues.\nFor C&nbsp;functions, this function uses the 
empty string <code>\"\"</code>\nas a name for all upvalues.\n\n\n\n\n\n<hr><h3><a name=\"lua_Hook\"><code>lua_Hook</code></a></h3>\n<pre>typedef void (*lua_Hook) (lua_State *L, lua_Debug *ar);</pre>\n\n<p>\nType for debugging hook functions.\n\n\n<p>\nWhenever a hook is called, its <code>ar</code> argument has its field\n<code>event</code> set to the specific event that triggered the hook.\nLua identifies these events with the following constants:\n<a name=\"pdf-LUA_HOOKCALL\"><code>LUA_HOOKCALL</code></a>, <a name=\"pdf-LUA_HOOKRET\"><code>LUA_HOOKRET</code></a>,\n<a name=\"pdf-LUA_HOOKTAILRET\"><code>LUA_HOOKTAILRET</code></a>, <a name=\"pdf-LUA_HOOKLINE\"><code>LUA_HOOKLINE</code></a>,\nand <a name=\"pdf-LUA_HOOKCOUNT\"><code>LUA_HOOKCOUNT</code></a>.\nMoreover, for line events, the field <code>currentline</code> is also set.\nTo get the value of any other field in <code>ar</code>,\nthe hook must call <a href=\"#lua_getinfo\"><code>lua_getinfo</code></a>.\nFor return events, <code>event</code> can be <code>LUA_HOOKRET</code>,\nthe normal value, or <code>LUA_HOOKTAILRET</code>.\nIn the latter case, Lua is simulating a return from\na function that did a tail call;\nin this case, it is useless to call <a href=\"#lua_getinfo\"><code>lua_getinfo</code></a>.\n\n\n<p>\nWhile Lua is running a hook, it disables other calls to hooks.\nTherefore, if a hook calls back Lua to execute a function or a chunk,\nthis execution occurs without any calls to hooks.\n\n\n\n\n\n<hr><h3><a name=\"lua_sethook\"><code>lua_sethook</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>int lua_sethook (lua_State *L, lua_Hook f, int mask, int count);</pre>\n\n<p>\nSets the debugging hook function.\n\n\n<p>\nArgument <code>f</code> is the hook function.\n<code>mask</code> specifies on which events the hook will be called:\nit is formed by a bitwise or of the constants\n<a name=\"pdf-LUA_MASKCALL\"><code>LUA_MASKCALL</code></a>,\n<a 
name=\"pdf-LUA_MASKRET\"><code>LUA_MASKRET</code></a>,\n<a name=\"pdf-LUA_MASKLINE\"><code>LUA_MASKLINE</code></a>,\nand <a name=\"pdf-LUA_MASKCOUNT\"><code>LUA_MASKCOUNT</code></a>.\nThe <code>count</code> argument is only meaningful when the mask\nincludes <code>LUA_MASKCOUNT</code>.\nFor each event, the hook is called as explained below:\n\n<ul>\n\n<li><b>The call hook:</b> is called when the interpreter calls a function.\nThe hook is called just after Lua enters the new function,\nbefore the function gets its arguments.\n</li>\n\n<li><b>The return hook:</b> is called when the interpreter returns from a function.\nThe hook is called just before Lua leaves the function.\nYou have no access to the values to be returned by the function.\n</li>\n\n<li><b>The line hook:</b> is called when the interpreter is about to\nstart the execution of a new line of code,\nor when it jumps back in the code (even to the same line).\n(This event only happens while Lua is executing a Lua function.)\n</li>\n\n<li><b>The count hook:</b> is called after the interpreter executes every\n<code>count</code> instructions.\n(This event only happens while Lua is executing a Lua function.)\n</li>\n\n</ul>\n\n<p>\nA hook is disabled by setting <code>mask</code> to zero.\n\n\n\n\n\n<hr><h3><a name=\"lua_setlocal\"><code>lua_setlocal</code></a></h3><p>\n<span class=\"apii\">[-(0|1), +0, <em>-</em>]</span>\n<pre>const char *lua_setlocal (lua_State *L, lua_Debug *ar, int n);</pre>\n\n<p>\nSets the value of a local variable of a given activation record.\nParameters <code>ar</code> and <code>n</code> are as in <a href=\"#lua_getlocal\"><code>lua_getlocal</code></a>\n(see <a href=\"#lua_getlocal\"><code>lua_getlocal</code></a>).\n<a href=\"#lua_setlocal\"><code>lua_setlocal</code></a> assigns the value at the top of the stack\nto the variable and returns its name.\nIt also pops the value from the stack.\n\n\n<p>\nReturns <code>NULL</code> (and pops nothing)\nwhen the index is greater than\nthe number 
of active local variables.\n\n\n\n\n\n<hr><h3><a name=\"lua_setupvalue\"><code>lua_setupvalue</code></a></h3><p>\n<span class=\"apii\">[-(0|1), +0, <em>-</em>]</span>\n<pre>const char *lua_setupvalue (lua_State *L, int funcindex, int n);</pre>\n\n<p>\nSets the value of a closure's upvalue.\nIt assigns the value at the top of the stack\nto the upvalue and returns its name.\nIt also pops the value from the stack.\nParameters <code>funcindex</code> and <code>n</code> are as in <a href=\"#lua_getupvalue\"><code>lua_getupvalue</code></a>\n(see <a href=\"#lua_getupvalue\"><code>lua_getupvalue</code></a>).\n\n\n<p>\nReturns <code>NULL</code> (and pops nothing)\nwhen the index is greater than the number of upvalues.\n\n\n\n\n\n\n\n<h1>4 - <a name=\"4\">The Auxiliary Library</a></h1>\n\n<p>\n\nThe <em>auxiliary library</em> provides several convenient functions\nto interface C with Lua.\nWhile the basic API provides the primitive functions for all \ninteractions between C and Lua,\nthe auxiliary library provides higher-level functions for some\ncommon tasks.\n\n\n<p>\nAll functions from the auxiliary library\nare defined in header file <code>lauxlib.h</code> and\nhave a prefix <code>luaL_</code>.\n\n\n<p>\nAll functions in the auxiliary library are built on\ntop of the basic API,\nand so they provide nothing that cannot be done with this API.\n\n\n<p>\nSeveral functions in the auxiliary library are used to\ncheck C&nbsp;function arguments.\nTheir names are always <code>luaL_check*</code> or <code>luaL_opt*</code>.\nAll of these functions throw an error if the check is not satisfied.\nBecause the error message is formatted for arguments\n(e.g., \"<code>bad argument #1</code>\"),\nyou should not use these functions for other stack values.\n\n\n\n<h2>4.1 - <a name=\"4.1\">Functions and Types</a></h2>\n\n<p>\nHere we list all functions and types from the auxiliary library\nin alphabetical order.\n\n\n\n<hr><h3><a 
name=\"luaL_addchar\"><code>luaL_addchar</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>void luaL_addchar (luaL_Buffer *B, char c);</pre>\n\n<p>\nAdds the character <code>c</code> to the buffer <code>B</code>\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"luaL_addlstring\"><code>luaL_addlstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>void luaL_addlstring (luaL_Buffer *B, const char *s, size_t l);</pre>\n\n<p>\nAdds the string pointed to by <code>s</code> with length <code>l</code> to\nthe buffer <code>B</code>\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\nThe string may contain embedded zeros.\n\n\n\n\n\n<hr><h3><a name=\"luaL_addsize\"><code>luaL_addsize</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>void luaL_addsize (luaL_Buffer *B, size_t n);</pre>\n\n<p>\nAdds to the buffer <code>B</code> (see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>)\na string of length <code>n</code> previously copied to the\nbuffer area (see <a href=\"#luaL_prepbuffer\"><code>luaL_prepbuffer</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"luaL_addstring\"><code>luaL_addstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>void luaL_addstring (luaL_Buffer *B, const char *s);</pre>\n\n<p>\nAdds the zero-terminated string pointed to by <code>s</code>\nto the buffer <code>B</code>\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\nThe string may not contain embedded zeros.\n\n\n\n\n\n<hr><h3><a name=\"luaL_addvalue\"><code>luaL_addvalue</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>m</em>]</span>\n<pre>void luaL_addvalue (luaL_Buffer *B);</pre>\n\n<p>\nAdds the value at the top of the stack\nto the buffer <code>B</code>\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\nPops the value.\n\n\n<p>\nThis is the only function on string buffers that can (and must)\nbe called with an 
extra element on the stack,\nwhich is the value to be added to the buffer.\n\n\n\n\n\n<hr><h3><a name=\"luaL_argcheck\"><code>luaL_argcheck</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>void luaL_argcheck (lua_State *L,\n                    int cond,\n                    int narg,\n                    const char *extramsg);</pre>\n\n<p>\nChecks whether <code>cond</code> is true.\nIf not, raises an error with the following message,\nwhere <code>func</code> is retrieved from the call stack:\n\n<pre>\n     bad argument #&lt;narg&gt; to &lt;func&gt; (&lt;extramsg&gt;)\n</pre>\n\n\n\n\n<hr><h3><a name=\"luaL_argerror\"><code>luaL_argerror</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_argerror (lua_State *L, int narg, const char *extramsg);</pre>\n\n<p>\nRaises an error with the following message,\nwhere <code>func</code> is retrieved from the call stack:\n\n<pre>\n     bad argument #&lt;narg&gt; to &lt;func&gt; (&lt;extramsg&gt;)\n</pre>\n\n<p>\nThis function never returns,\nbut it is an idiom to use it in C&nbsp;functions\nas <code>return luaL_argerror(<em>args</em>)</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_Buffer\"><code>luaL_Buffer</code></a></h3>\n<pre>typedef struct luaL_Buffer luaL_Buffer;</pre>\n\n<p>\nType for a <em>string buffer</em>.\n\n\n<p>\nA string buffer allows C&nbsp;code to build Lua strings piecemeal.\nIts pattern of use is as follows:\n\n<ul>\n\n<li>First you declare a variable <code>b</code> of type <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>.</li>\n\n<li>Then you initialize it with a call <code>luaL_buffinit(L, &amp;b)</code>.</li>\n\n<li>\nThen you add string pieces to the buffer calling any of\nthe <code>luaL_add*</code> functions.\n</li>\n\n<li>\nYou finish by calling <code>luaL_pushresult(&amp;b)</code>.\nThis call leaves the final string on the top of the stack.\n</li>\n\n</ul>\n\n<p>\nDuring its normal operation,\na string buffer uses a variable number of stack 
slots.\nSo, while using a buffer, you cannot assume that you know where\nthe top of the stack is.\nYou can use the stack between successive calls to buffer operations\nas long as that use is balanced;\nthat is,\nwhen you call a buffer operation,\nthe stack is at the same level\nit was immediately after the previous buffer operation.\n(The only exception to this rule is <a href=\"#luaL_addvalue\"><code>luaL_addvalue</code></a>.)\nAfter calling <a href=\"#luaL_pushresult\"><code>luaL_pushresult</code></a> the stack is back to its\nlevel when the buffer was initialized,\nplus the final string on its top.\n\n\n\n\n\n<hr><h3><a name=\"luaL_buffinit\"><code>luaL_buffinit</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>void luaL_buffinit (lua_State *L, luaL_Buffer *B);</pre>\n\n<p>\nInitializes a buffer <code>B</code>.\nThis function does not allocate any space;\nthe buffer must be declared as a variable\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"luaL_callmeta\"><code>luaL_callmeta</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>e</em>]</span>\n<pre>int luaL_callmeta (lua_State *L, int obj, const char *e);</pre>\n\n<p>\nCalls a metamethod.\n\n\n<p>\nIf the object at index <code>obj</code> has a metatable and this\nmetatable has a field <code>e</code>,\nthis function calls this field and passes the object as its only argument.\nIn this case this function returns 1 and pushes onto the\nstack the value returned by the call.\nIf there is no metatable or no metamethod,\nthis function returns 0 (without pushing any value on the stack).\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkany\"><code>luaL_checkany</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>void luaL_checkany (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function has an argument\nof any type (including <b>nil</b>) at position <code>narg</code>.\n\n\n\n\n\n<hr><h3><a 
name=\"luaL_checkint\"><code>luaL_checkint</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_checkint (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a number\nand returns this number cast to an <code>int</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkinteger\"><code>luaL_checkinteger</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>lua_Integer luaL_checkinteger (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a number\nand returns this number cast to a <a href=\"#lua_Integer\"><code>lua_Integer</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checklong\"><code>luaL_checklong</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>long luaL_checklong (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a number\nand returns this number cast to a <code>long</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checklstring\"><code>luaL_checklstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>const char *luaL_checklstring (lua_State *L, int narg, size_t *l);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a string\nand returns this string;\nif <code>l</code> is not <code>NULL</code> fills <code>*l</code>\nwith the string's length.\n\n\n<p>\nThis function uses <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> to get its result,\nso all conversions and caveats of that function apply here.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checknumber\"><code>luaL_checknumber</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>lua_Number luaL_checknumber (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a number\nand returns this number.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkoption\"><code>luaL_checkoption</code></a></h3><p>\n<span 
class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_checkoption (lua_State *L,\n                      int narg,\n                      const char *def,\n                      const char *const lst[]);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a string and\nsearches for this string in the array <code>lst</code>\n(which must be NULL-terminated).\nReturns the index in the array where the string was found.\nRaises an error if the argument is not a string or\nif the string cannot be found.\n\n\n<p>\nIf <code>def</code> is not <code>NULL</code>,\nthe function uses <code>def</code> as a default value when\nthere is no argument <code>narg</code> or if this argument is <b>nil</b>.\n\n\n<p>\nThis is a useful function for mapping strings to C&nbsp;enums.\n(The usual convention in Lua libraries is\nto use strings instead of numbers to select options.)\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkstack\"><code>luaL_checkstack</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>void luaL_checkstack (lua_State *L, int sz, const char *msg);</pre>\n\n<p>\nGrows the stack size to <code>top + sz</code> elements,\nraising an error if the stack cannot grow to that size.\n<code>msg</code> is an additional text to go into the error message.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkstring\"><code>luaL_checkstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>const char *luaL_checkstring (lua_State *L, int narg);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a string\nand returns this string.\n\n\n<p>\nThis function uses <a href=\"#lua_tolstring\"><code>lua_tolstring</code></a> to get its result,\nso all conversions and caveats of that function apply here.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checktype\"><code>luaL_checktype</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>void luaL_checktype (lua_State *L, int narg, int t);</pre>\n\n<p>\nChecks whether the 
function argument <code>narg</code> has type <code>t</code>.\nSee <a href=\"#lua_type\"><code>lua_type</code></a> for the encoding of types for <code>t</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_checkudata\"><code>luaL_checkudata</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>void *luaL_checkudata (lua_State *L, int narg, const char *tname);</pre>\n\n<p>\nChecks whether the function argument <code>narg</code> is a userdata\nof the type <code>tname</code> (see <a href=\"#luaL_newmetatable\"><code>luaL_newmetatable</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"luaL_dofile\"><code>luaL_dofile</code></a></h3><p>\n<span class=\"apii\">[-0, +?, <em>m</em>]</span>\n<pre>int luaL_dofile (lua_State *L, const char *filename);</pre>\n\n<p>\nLoads and runs the given file.\nIt is defined as the following macro:\n\n<pre>\n     (luaL_loadfile(L, filename) || lua_pcall(L, 0, LUA_MULTRET, 0))\n</pre><p>\nIt returns 0 if there are no errors\nor 1 in case of errors.\n\n\n\n\n\n<hr><h3><a name=\"luaL_dostring\"><code>luaL_dostring</code></a></h3><p>\n<span class=\"apii\">[-0, +?, <em>m</em>]</span>\n<pre>int luaL_dostring (lua_State *L, const char *str);</pre>\n\n<p>\nLoads and runs the given string.\nIt is defined as the following macro:\n\n<pre>\n     (luaL_loadstring(L, str) || lua_pcall(L, 0, LUA_MULTRET, 0))\n</pre><p>\nIt returns 0 if there are no errors\nor 1 in case of errors.\n\n\n\n\n\n<hr><h3><a name=\"luaL_error\"><code>luaL_error</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_error (lua_State *L, const char *fmt, ...);</pre>\n\n<p>\nRaises an error.\nThe error message format is given by <code>fmt</code>\nplus any extra arguments,\nfollowing the same rules of <a href=\"#lua_pushfstring\"><code>lua_pushfstring</code></a>.\nIt also adds at the beginning of the message the file name and\nthe line number where the error occurred,\nif this information is available.\n\n\n<p>\nThis function never returns,\nbut it is an 
idiom to use it in C&nbsp;functions\nas <code>return luaL_error(<em>args</em>)</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_getmetafield\"><code>luaL_getmetafield</code></a></h3><p>\n<span class=\"apii\">[-0, +(0|1), <em>m</em>]</span>\n<pre>int luaL_getmetafield (lua_State *L, int obj, const char *e);</pre>\n\n<p>\nPushes onto the stack the field <code>e</code> from the metatable\nof the object at index <code>obj</code>.\nIf the object does not have a metatable,\nor if the metatable does not have this field,\nreturns 0 and pushes nothing.\n\n\n\n\n\n<hr><h3><a name=\"luaL_getmetatable\"><code>luaL_getmetatable</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>-</em>]</span>\n<pre>void luaL_getmetatable (lua_State *L, const char *tname);</pre>\n\n<p>\nPushes onto the stack the metatable associated with name <code>tname</code>\nin the registry (see <a href=\"#luaL_newmetatable\"><code>luaL_newmetatable</code></a>).\n\n\n\n\n\n<hr><h3><a name=\"luaL_gsub\"><code>luaL_gsub</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>const char *luaL_gsub (lua_State *L,\n                       const char *s,\n                       const char *p,\n                       const char *r);</pre>\n\n<p>\nCreates a copy of string <code>s</code> by replacing\nany occurrence of the string <code>p</code>\nwith the string <code>r</code>.\nPushes the resulting string on the stack and returns it.\n\n\n\n\n\n<hr><h3><a name=\"luaL_loadbuffer\"><code>luaL_loadbuffer</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>int luaL_loadbuffer (lua_State *L,\n                     const char *buff,\n                     size_t sz,\n                     const char *name);</pre>\n\n<p>\nLoads a buffer as a Lua chunk.\nThis function uses <a href=\"#lua_load\"><code>lua_load</code></a> to load the chunk in the\nbuffer pointed to by <code>buff</code> with size <code>sz</code>.\n\n\n<p>\nThis function returns the same results as <a 
href=\"#lua_load\"><code>lua_load</code></a>.\n<code>name</code> is the chunk name,\nused for debug information and error messages.\n\n\n\n\n\n<hr><h3><a name=\"luaL_loadfile\"><code>luaL_loadfile</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>int luaL_loadfile (lua_State *L, const char *filename);</pre>\n\n<p>\nLoads a file as a Lua chunk.\nThis function uses <a href=\"#lua_load\"><code>lua_load</code></a> to load the chunk in the file\nnamed <code>filename</code>.\nIf <code>filename</code> is <code>NULL</code>,\nthen it loads from the standard input.\nThe first line in the file is ignored if it starts with a <code>#</code>.\n\n\n<p>\nThis function returns the same results as <a href=\"#lua_load\"><code>lua_load</code></a>,\nbut it has an extra error code <a name=\"pdf-LUA_ERRFILE\"><code>LUA_ERRFILE</code></a>\nif it cannot open/read the file.\n\n\n<p>\nAs <a href=\"#lua_load\"><code>lua_load</code></a>, this function only loads the chunk;\nit does not run it.\n\n\n\n\n\n<hr><h3><a name=\"luaL_loadstring\"><code>luaL_loadstring</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>int luaL_loadstring (lua_State *L, const char *s);</pre>\n\n<p>\nLoads a string as a Lua chunk.\nThis function uses <a href=\"#lua_load\"><code>lua_load</code></a> to load the chunk in\nthe zero-terminated string <code>s</code>.\n\n\n<p>\nThis function returns the same results as <a href=\"#lua_load\"><code>lua_load</code></a>.\n\n\n<p>\nAlso as <a href=\"#lua_load\"><code>lua_load</code></a>, this function only loads the chunk;\nit does not run it.\n\n\n\n\n\n<hr><h3><a name=\"luaL_newmetatable\"><code>luaL_newmetatable</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>int luaL_newmetatable (lua_State *L, const char *tname);</pre>\n\n<p>\nIf the registry already has the key <code>tname</code>,\nreturns 0.\nOtherwise,\ncreates a new table to be used as a metatable for userdata,\nadds it to the registry with key 
<code>tname</code>,\nand returns 1.\n\n\n<p>\nIn both cases pushes onto the stack the final value associated\nwith <code>tname</code> in the registry.\n\n\n\n\n\n<hr><h3><a name=\"luaL_newstate\"><code>luaL_newstate</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>lua_State *luaL_newstate (void);</pre>\n\n<p>\nCreates a new Lua state.\nIt calls <a href=\"#lua_newstate\"><code>lua_newstate</code></a> with an\nallocator based on the standard&nbsp;C <code>realloc</code> function\nand then sets a panic function (see <a href=\"#lua_atpanic\"><code>lua_atpanic</code></a>) that prints\nan error message to the standard error output in case of fatal\nerrors.\n\n\n<p>\nReturns the new state,\nor <code>NULL</code> if there is a memory allocation error.\n\n\n\n\n\n<hr><h3><a name=\"luaL_openlibs\"><code>luaL_openlibs</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>m</em>]</span>\n<pre>void luaL_openlibs (lua_State *L);</pre>\n\n<p>\nOpens all standard Lua libraries into the given state.\n\n\n\n\n\n<hr><h3><a name=\"luaL_optint\"><code>luaL_optint</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_optint (lua_State *L, int narg, int d);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a number,\nreturns this number cast to an <code>int</code>.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n\n\n\n<hr><h3><a name=\"luaL_optinteger\"><code>luaL_optinteger</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>lua_Integer luaL_optinteger (lua_State *L,\n                             int narg,\n                             lua_Integer d);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a number,\nreturns this number cast to a <a href=\"#lua_Integer\"><code>lua_Integer</code></a>.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n\n\n\n<hr><h3><a 
name=\"luaL_optlong\"><code>luaL_optlong</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>long luaL_optlong (lua_State *L, int narg, long d);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a number,\nreturns this number cast to a <code>long</code>.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n\n\n\n<hr><h3><a name=\"luaL_optlstring\"><code>luaL_optlstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>const char *luaL_optlstring (lua_State *L,\n                             int narg,\n                             const char *d,\n                             size_t *l);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a string,\nreturns this string.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n<p>\nIf <code>l</code> is not <code>NULL</code>,\nfills the position <code>*l</code> with the results's length.\n\n\n\n\n\n<hr><h3><a name=\"luaL_optnumber\"><code>luaL_optnumber</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>lua_Number luaL_optnumber (lua_State *L, int narg, lua_Number d);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a number,\nreturns this number.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n\n\n\n<hr><h3><a name=\"luaL_optstring\"><code>luaL_optstring</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>const char *luaL_optstring (lua_State *L,\n                            int narg,\n                            const char *d);</pre>\n\n<p>\nIf the function argument <code>narg</code> is a string,\nreturns this string.\nIf this argument is absent or is <b>nil</b>,\nreturns <code>d</code>.\nOtherwise, raises an error.\n\n\n\n\n\n<hr><h3><a name=\"luaL_prepbuffer\"><code>luaL_prepbuffer</code></a></h3><p>\n<span class=\"apii\">[-0, +0, 
<em>-</em>]</span>\n<pre>char *luaL_prepbuffer (luaL_Buffer *B);</pre>\n\n<p>\nReturns the address of a space of size <a name=\"pdf-LUAL_BUFFERSIZE\"><code>LUAL_BUFFERSIZE</code></a>\nwhere you can copy a string to be added to buffer <code>B</code>\n(see <a href=\"#luaL_Buffer\"><code>luaL_Buffer</code></a>).\nAfter copying the string into this space you must call\n<a href=\"#luaL_addsize\"><code>luaL_addsize</code></a> with the size of the string to actually add \nit to the buffer.\n\n\n\n\n\n<hr><h3><a name=\"luaL_pushresult\"><code>luaL_pushresult</code></a></h3><p>\n<span class=\"apii\">[-?, +1, <em>m</em>]</span>\n<pre>void luaL_pushresult (luaL_Buffer *B);</pre>\n\n<p>\nFinishes the use of buffer <code>B</code> leaving the final string on\nthe top of the stack.\n\n\n\n\n\n<hr><h3><a name=\"luaL_ref\"><code>luaL_ref</code></a></h3><p>\n<span class=\"apii\">[-1, +0, <em>m</em>]</span>\n<pre>int luaL_ref (lua_State *L, int t);</pre>\n\n<p>\nCreates and returns a <em>reference</em>,\nin the table at index <code>t</code>,\nfor the object at the top of the stack (and pops the object).\n\n\n<p>\nA reference is a unique integer key.\nAs long as you do not manually add integer keys into table <code>t</code>,\n<a href=\"#luaL_ref\"><code>luaL_ref</code></a> ensures the uniqueness of the key it returns.\nYou can retrieve an object referred to by reference <code>r</code>\nby calling <code>lua_rawgeti(L, t, r)</code>.\nFunction <a href=\"#luaL_unref\"><code>luaL_unref</code></a> frees a reference and its associated object.\n\n\n<p>\nIf the object at the top of the stack is <b>nil</b>,\n<a href=\"#luaL_ref\"><code>luaL_ref</code></a> returns the constant <a name=\"pdf-LUA_REFNIL\"><code>LUA_REFNIL</code></a>.\nThe constant <a name=\"pdf-LUA_NOREF\"><code>LUA_NOREF</code></a> is guaranteed to be different\nfrom any reference returned by <a href=\"#luaL_ref\"><code>luaL_ref</code></a>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_Reg\"><code>luaL_Reg</code></a></h3>\n<pre>typedef struct 
luaL_Reg {\n  const char *name;\n  lua_CFunction func;\n} luaL_Reg;</pre>\n\n<p>\nType for arrays of functions to be registered by\n<a href=\"#luaL_register\"><code>luaL_register</code></a>.\n<code>name</code> is the function name and <code>func</code> is a pointer to\nthe function.\nAny array of <a href=\"#luaL_Reg\"><code>luaL_Reg</code></a> must end with a sentinel entry\nin which both <code>name</code> and <code>func</code> are <code>NULL</code>.\n\n\n\n\n\n<hr><h3><a name=\"luaL_register\"><code>luaL_register</code></a></h3><p>\n<span class=\"apii\">[-(0|1), +1, <em>m</em>]</span>\n<pre>void luaL_register (lua_State *L,\n                    const char *libname,\n                    const luaL_Reg *l);</pre>\n\n<p>\nOpens a library.\n\n\n<p>\nWhen called with <code>libname</code> equal to <code>NULL</code>,\nit simply registers all functions in the list <code>l</code>\n(see <a href=\"#luaL_Reg\"><code>luaL_Reg</code></a>) into the table on the top of the stack.\n\n\n<p>\nWhen called with a non-null <code>libname</code>,\n<code>luaL_register</code> creates a new table <code>t</code>,\nsets it as the value of the global variable <code>libname</code>,\nsets it as the value of <code>package.loaded[libname]</code>,\nand registers on it all functions in the list <code>l</code>.\nIf there is a table in <code>package.loaded[libname]</code> or in\nvariable <code>libname</code>,\nreuses this table instead of creating a new one.\n\n\n<p>\nIn any case the function leaves the table\non the top of the stack.\n\n\n\n\n\n<hr><h3><a name=\"luaL_typename\"><code>luaL_typename</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>const char *luaL_typename (lua_State *L, int index);</pre>\n\n<p>\nReturns the name of the type of the value at the given index.\n\n\n\n\n\n<hr><h3><a name=\"luaL_typerror\"><code>luaL_typerror</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>v</em>]</span>\n<pre>int luaL_typerror (lua_State *L, int narg, const char 
*tname);</pre>\n\n<p>\nGenerates an error with a message like the following:\n\n<pre>\n     <em>location</em>: bad argument <em>narg</em> to '<em>func</em>' (<em>tname</em> expected, got <em>rt</em>)\n</pre><p>\nwhere <code><em>location</em></code> is produced by <a href=\"#luaL_where\"><code>luaL_where</code></a>,\n<code><em>func</em></code> is the name of the current function,\nand <code><em>rt</em></code> is the type name of the actual argument.\n\n\n\n\n\n<hr><h3><a name=\"luaL_unref\"><code>luaL_unref</code></a></h3><p>\n<span class=\"apii\">[-0, +0, <em>-</em>]</span>\n<pre>void luaL_unref (lua_State *L, int t, int ref);</pre>\n\n<p>\nReleases reference <code>ref</code> from the table at index <code>t</code>\n(see <a href=\"#luaL_ref\"><code>luaL_ref</code></a>).\nThe entry is removed from the table,\nso that the referred object can be collected.\nThe reference <code>ref</code> is also freed to be used again.\n\n\n<p>\nIf <code>ref</code> is <a href=\"#pdf-LUA_NOREF\"><code>LUA_NOREF</code></a> or <a href=\"#pdf-LUA_REFNIL\"><code>LUA_REFNIL</code></a>,\n<a href=\"#luaL_unref\"><code>luaL_unref</code></a> does nothing.\n\n\n\n\n\n<hr><h3><a name=\"luaL_where\"><code>luaL_where</code></a></h3><p>\n<span class=\"apii\">[-0, +1, <em>m</em>]</span>\n<pre>void luaL_where (lua_State *L, int lvl);</pre>\n\n<p>\nPushes onto the stack a string identifying the current position\nof the control at level <code>lvl</code> in the call stack.\nTypically this string has the following format:\n\n<pre>\n     <em>chunkname</em>:<em>currentline</em>:\n</pre><p>\nLevel&nbsp;0 is the running function,\nlevel&nbsp;1 is the function that called the running function,\netc.\n\n\n<p>\nThis function is used to build a prefix for error messages.\n\n\n\n\n\n\n\n<h1>5 - <a name=\"5\">Standard Libraries</a></h1>\n\n<p>\nThe standard Lua libraries provide useful functions\nthat are implemented directly through the C&nbsp;API.\nSome of these functions provide essential services to the 
language\n(e.g., <a href=\"#pdf-type\"><code>type</code></a> and <a href=\"#pdf-getmetatable\"><code>getmetatable</code></a>);\nothers provide access to \"outside\" services (e.g., I/O);\nand others could be implemented in Lua itself,\nbut are quite useful or have critical performance requirements that\ndeserve an implementation in C (e.g., <a href=\"#pdf-table.sort\"><code>table.sort</code></a>).\n\n\n<p>\nAll libraries are implemented through the official C&nbsp;API\nand are provided as separate C&nbsp;modules.\nCurrently, Lua has the following standard libraries:\n\n<ul>\n\n<li>basic library, which includes the coroutine sub-library;</li>\n\n<li>package library;</li>\n\n<li>string manipulation;</li>\n\n<li>table manipulation;</li>\n\n<li>mathematical functions (sin, log, etc.);</li>\n\n<li>input and output;</li>\n\n<li>operating system facilities;</li>\n\n<li>debug facilities.</li>\n\n</ul><p>\nExcept for the basic and package libraries,\neach library provides all its functions as fields of a global table\nor as methods of its objects.\n\n\n<p>\nTo have access to these libraries,\nthe C&nbsp;host program should call the <a href=\"#luaL_openlibs\"><code>luaL_openlibs</code></a> function,\nwhich opens all standard libraries.\nAlternatively,\nit can open them individually by calling\n<a name=\"pdf-luaopen_base\"><code>luaopen_base</code></a> (for the basic library),\n<a name=\"pdf-luaopen_package\"><code>luaopen_package</code></a> (for the package library),\n<a name=\"pdf-luaopen_string\"><code>luaopen_string</code></a> (for the string library),\n<a name=\"pdf-luaopen_table\"><code>luaopen_table</code></a> (for the table library),\n<a name=\"pdf-luaopen_math\"><code>luaopen_math</code></a> (for the mathematical library),\n<a name=\"pdf-luaopen_io\"><code>luaopen_io</code></a> (for the I/O library),\n<a name=\"pdf-luaopen_os\"><code>luaopen_os</code></a> (for the Operating System library),\nand <a name=\"pdf-luaopen_debug\"><code>luaopen_debug</code></a> (for the 
debug library).\nThese functions are declared in <a name=\"pdf-lualib.h\"><code>lualib.h</code></a>\nand should not be called directly:\nyou must call them like any other Lua C&nbsp;function,\ne.g., by using <a href=\"#lua_call\"><code>lua_call</code></a>.\n\n\n\n<h2>5.1 - <a name=\"5.1\">Basic Functions</a></h2>\n\n<p>\nThe basic library provides some core functions to Lua.\nIf you do not include this library in your application,\nyou should check carefully whether you need to provide \nimplementations for some of its facilities.\n\n\n<p>\n<hr><h3><a name=\"pdf-assert\"><code>assert (v [, message])</code></a></h3>\nIssues an  error when\nthe value of its argument <code>v</code> is false (i.e., <b>nil</b> or <b>false</b>);\notherwise, returns all its arguments.\n<code>message</code> is an error message;\nwhen absent, it defaults to \"assertion failed!\"\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-collectgarbage\"><code>collectgarbage ([opt [, arg]])</code></a></h3>\n\n\n<p>\nThis function is a generic interface to the garbage collector.\nIt performs different functions according to its first argument, <code>opt</code>:\n\n<ul>\n\n<li><b>\"collect\":</b>\nperforms a full garbage-collection cycle.\nThis is the default option.\n</li>\n\n<li><b>\"stop\":</b>\nstops the garbage collector.\n</li>\n\n<li><b>\"restart\":</b>\nrestarts the garbage collector.\n</li>\n\n<li><b>\"count\":</b>\nreturns the total memory in use by Lua (in Kbytes).\n</li>\n\n<li><b>\"step\":</b>\nperforms a garbage-collection step.\nThe step \"size\" is controlled by <code>arg</code>\n(larger values mean more steps) in a non-specified way.\nIf you want to control the step size\nyou must experimentally tune the value of <code>arg</code>.\nReturns <b>true</b> if the step finished a collection cycle.\n</li>\n\n<li><b>\"setpause\":</b>\nsets <code>arg</code> as the new value for the <em>pause</em> of\nthe collector (see <a href=\"#2.10\">&sect;2.10</a>).\nReturns the previous value for 
<em>pause</em>.\n</li>\n\n<li><b>\"setstepmul\":</b>\nsets <code>arg</code> as the new value for the <em>step multiplier</em> of\nthe collector (see <a href=\"#2.10\">&sect;2.10</a>).\nReturns the previous value for the <em>step multiplier</em>.\n</li>\n\n</ul>\n\n\n\n<p>\n<hr><h3><a name=\"pdf-dofile\"><code>dofile ([filename])</code></a></h3>\nOpens the named file and executes its contents as a Lua chunk.\nWhen called without arguments,\n<code>dofile</code> executes the contents of the standard input (<code>stdin</code>).\nReturns all values returned by the chunk.\nIn case of errors, <code>dofile</code> propagates the error\nto its caller (that is, <code>dofile</code> does not run in protected mode).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-error\"><code>error (message [, level])</code></a></h3>\nTerminates the last protected function called\nand returns <code>message</code> as the error message.\nFunction <code>error</code> never returns.\n\n\n<p>\nUsually, <code>error</code> adds some information about the error position\nat the beginning of the message.\nThe <code>level</code> argument specifies how to get the error position.\nWith level&nbsp;1 (the default), the error position is where the\n<code>error</code> function was called.\nLevel&nbsp;2 points the error to where the function\nthat called <code>error</code> was called; and so on.\nPassing a level&nbsp;0 avoids the addition of error position information\nto the message.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-_G\"><code>_G</code></a></h3>\nA global variable (not a function) that\nholds the global environment (that is, <code>_G._G = _G</code>).\nLua itself does not use this variable;\nchanging its value does not affect any environment,\nnor vice-versa.\n(Use <a href=\"#pdf-setfenv\"><code>setfenv</code></a> to change environments.)\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-getfenv\"><code>getfenv ([f])</code></a></h3>\nReturns the current environment in use by the function.\n<code>f</code> can be a Lua function or a number\nthat 
specifies the function at that stack level:\nLevel&nbsp;1 is the function calling <code>getfenv</code>.\nIf the given function is not a Lua function,\nor if <code>f</code> is 0,\n<code>getfenv</code> returns the global environment.\nThe default for <code>f</code> is 1.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-getmetatable\"><code>getmetatable (object)</code></a></h3>\n\n\n<p>\nIf <code>object</code> does not have a metatable, returns <b>nil</b>.\nOtherwise,\nif the object's metatable has a <code>\"__metatable\"</code> field,\nreturns the associated value.\nOtherwise, returns the metatable of the given object.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-ipairs\"><code>ipairs (t)</code></a></h3>\n\n\n<p>\nReturns three values: an iterator function, the table <code>t</code>, and 0,\nso that the construction\n\n<pre>\n     for i,v in ipairs(t) do <em>body</em> end\n</pre><p>\nwill iterate over the pairs (<code>1,t[1]</code>), (<code>2,t[2]</code>), &middot;&middot;&middot;,\nup to the first integer key absent from the table.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-load\"><code>load (func [, chunkname])</code></a></h3>\n\n\n<p>\nLoads a chunk using function <code>func</code> to get its pieces.\nEach call to <code>func</code> must return a string that concatenates\nwith previous results.\nA return of an empty string, <b>nil</b>, or no value signals the end of the chunk.\n\n\n<p>\nIf there are no errors, \nreturns the compiled chunk as a function;\notherwise, returns <b>nil</b> plus the error message.\nThe environment of the returned function is the global environment.\n\n\n<p>\n<code>chunkname</code> is used as the chunk name for error messages\nand debug information.\nWhen absent,\nit defaults to \"<code>=(load)</code>\".\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-loadfile\"><code>loadfile ([filename])</code></a></h3>\n\n\n<p>\nSimilar to <a href=\"#pdf-load\"><code>load</code></a>,\nbut gets the chunk from file <code>filename</code>\nor from the standard input,\nif no file name is 
given.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-loadstring\"><code>loadstring (string [, chunkname])</code></a></h3>\n\n\n<p>\nSimilar to <a href=\"#pdf-load\"><code>load</code></a>,\nbut gets the chunk from the given string.\n\n\n<p>\nTo load and run a given string, use the idiom\n\n<pre>\n     assert(loadstring(s))()\n</pre>\n\n<p>\nWhen absent,\n<code>chunkname</code> defaults to the given string.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-next\"><code>next (table [, index])</code></a></h3>\n\n\n<p>\nAllows a program to traverse all fields of a table.\nIts first argument is a table and its second argument\nis an index in this table.\n<code>next</code> returns the next index of the table\nand its associated value.\nWhen called with <b>nil</b> as its second argument,\n<code>next</code> returns an initial index\nand its associated value.\nWhen called with the last index,\nor with <b>nil</b> in an empty table,\n<code>next</code> returns <b>nil</b>.\nIf the second argument is absent, then it is interpreted as <b>nil</b>.\nIn particular,\nyou can use <code>next(t)</code> to check whether a table is empty.\n\n\n<p>\nThe order in which the indices are enumerated is not specified,\n<em>even for numeric indices</em>.\n(To traverse a table in numeric order,\nuse a numerical <b>for</b> or the <a href=\"#pdf-ipairs\"><code>ipairs</code></a> function.)\n\n\n<p>\nThe behavior of <code>next</code> is <em>undefined</em> if,\nduring the traversal,\nyou assign any value to a non-existent field in the table.\nYou may however modify existing fields.\nIn particular, you may clear existing fields.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-pairs\"><code>pairs (t)</code></a></h3>\n\n\n<p>\nReturns three values: the <a href=\"#pdf-next\"><code>next</code></a> function, the table <code>t</code>, and <b>nil</b>,\nso that the construction\n\n<pre>\n     for k,v in pairs(t) do <em>body</em> end\n</pre><p>\nwill iterate over all key&ndash;value pairs of table <code>t</code>.\n\n\n<p>\nSee function <a 
href=\"#pdf-next\"><code>next</code></a> for the caveats of modifying\nthe table during its traversal.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-pcall\"><code>pcall (f, arg1, &middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nCalls function <code>f</code> with\nthe given arguments in <em>protected mode</em>.\nThis means that any error inside&nbsp;<code>f</code> is not propagated;\ninstead, <code>pcall</code> catches the error\nand returns a status code.\nIts first result is the status code (a boolean),\nwhich is true if the call succeeds without errors.\nIn such a case, <code>pcall</code> also returns all results from the call,\nafter this first result.\nIn case of any error, <code>pcall</code> returns <b>false</b> plus the error message.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-print\"><code>print (&middot;&middot;&middot;)</code></a></h3>\nReceives any number of arguments,\nand prints their values to <code>stdout</code>,\nusing the <a href=\"#pdf-tostring\"><code>tostring</code></a> function to convert them to strings.\n<code>print</code> is not intended for formatted output,\nbut only as a quick way to show a value,\ntypically for debugging.\nFor formatted output, use <a href=\"#pdf-string.format\"><code>string.format</code></a>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-rawequal\"><code>rawequal (v1, v2)</code></a></h3>\nChecks whether <code>v1</code> is equal to <code>v2</code>,\nwithout invoking any metamethod.\nReturns a boolean.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-rawget\"><code>rawget (table, index)</code></a></h3>\nGets the real value of <code>table[index]</code>,\nwithout invoking any metamethod.\n<code>table</code> must be a table;\n<code>index</code> may be any value.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-rawset\"><code>rawset (table, index, value)</code></a></h3>\nSets the real value of <code>table[index]</code> to <code>value</code>,\nwithout invoking any metamethod.\n<code>table</code> must be a table,\n<code>index</code> any value different from <b>nil</b>,\nand 
<code>value</code> any Lua value.\n\n\n<p>\nThis function returns <code>table</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-select\"><code>select (index, &middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nIf <code>index</code> is a number,\nreturns all arguments after argument number <code>index</code>.\nOtherwise, <code>index</code> must be the string <code>\"#\"</code>,\nand <code>select</code> returns the total number of extra arguments it received.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-setfenv\"><code>setfenv (f, table)</code></a></h3>\n\n\n<p>\nSets the environment to be used by the given function.\n<code>f</code> can be a Lua function or a number\nthat specifies the function at that stack level:\nLevel&nbsp;1 is the function calling <code>setfenv</code>.\n<code>setfenv</code> returns the given function.\n\n\n<p>\nAs a special case, when <code>f</code> is 0 <code>setfenv</code> changes\nthe environment of the running thread.\nIn this case, <code>setfenv</code> returns no values.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-setmetatable\"><code>setmetatable (table, metatable)</code></a></h3>\n\n\n<p>\nSets the metatable for the given table.\n(You cannot change the metatable of other types from Lua, only from&nbsp;C.)\nIf <code>metatable</code> is <b>nil</b>,\nremoves the metatable of the given table.\nIf the original metatable has a <code>\"__metatable\"</code> field,\nraises an error.\n\n\n<p>\nThis function returns <code>table</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-tonumber\"><code>tonumber (e [, base])</code></a></h3>\nTries to convert its argument to a number.\nIf the argument is already a number or a string convertible\nto a number, then <code>tonumber</code> returns this number;\notherwise, it returns <b>nil</b>.\n\n\n<p>\nAn optional argument specifies the base to interpret the numeral.\nThe base may be any integer between 2 and 36, inclusive.\nIn bases above&nbsp;10, the letter '<code>A</code>' (in either upper or lower case)\nrepresents&nbsp;10, 
'<code>B</code>' represents&nbsp;11, and so forth,\nwith '<code>Z</code>' representing 35.\nIn base 10 (the default), the number can have a decimal part,\nas well as an optional exponent part (see <a href=\"#2.1\">&sect;2.1</a>).\nIn other bases, only unsigned integers are accepted.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-tostring\"><code>tostring (e)</code></a></h3>\nReceives an argument of any type and\nconverts it to a string in a reasonable format.\nFor complete control of how numbers are converted,\nuse <a href=\"#pdf-string.format\"><code>string.format</code></a>.\n\n\n<p>\nIf the metatable of <code>e</code> has a <code>\"__tostring\"</code> field,\nthen <code>tostring</code> calls the corresponding value\nwith <code>e</code> as argument,\nand uses the result of the call as its result.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-type\"><code>type (v)</code></a></h3>\nReturns the type of its only argument, coded as a string.\nThe possible results of this function are\n\"<code>nil</code>\" (a string, not the value <b>nil</b>),\n\"<code>number</code>\",\n\"<code>string</code>\",\n\"<code>boolean</code>\",\n\"<code>table</code>\",\n\"<code>function</code>\",\n\"<code>thread</code>\",\nand \"<code>userdata</code>\".\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-unpack\"><code>unpack (list [, i [, j]])</code></a></h3>\nReturns the elements from the given table.\nThis function is equivalent to\n\n<pre>\n     return list[i], list[i+1], &middot;&middot;&middot;, list[j]\n</pre><p>\nexcept that the above code can be written only for a fixed number\nof elements.\nBy default, <code>i</code> is&nbsp;1 and <code>j</code> is the length of the list,\nas defined by the length operator (see <a href=\"#2.5.5\">&sect;2.5.5</a>).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-_VERSION\"><code>_VERSION</code></a></h3>\nA global variable (not a function) that\nholds a string containing the current interpreter version.\nThe current value of this variable is \"<code>Lua 5.1</code>\".\n\n\n\n\n<p>\n<hr><h3><a 
name=\"pdf-xpcall\"><code>xpcall (f, err)</code></a></h3>\n\n\n<p>\nThis function is similar to <a href=\"#pdf-pcall\"><code>pcall</code></a>,\nexcept that you can set a new error handler.\n\n\n<p>\n<code>xpcall</code> calls function <code>f</code> in protected mode,\nusing <code>err</code> as the error handler.\nAny error inside <code>f</code> is not propagated;\ninstead, <code>xpcall</code> catches the error,\ncalls the <code>err</code> function with the original error object,\nand returns a status code.\nIts first result is the status code (a boolean),\nwhich is true if the call succeeds without errors.\nIn this case, <code>xpcall</code> also returns all results from the call,\nafter this first result.\nIn case of any error,\n<code>xpcall</code> returns <b>false</b> plus the result from <code>err</code>.\n\n\n\n\n\n\n\n<h2>5.2 - <a name=\"5.2\">Coroutine Manipulation</a></h2>\n\n<p>\nThe operations related to coroutines comprise a sub-library of\nthe basic library and come inside the table <a name=\"pdf-coroutine\"><code>coroutine</code></a>.\nSee <a href=\"#2.11\">&sect;2.11</a> for a general description of coroutines.\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.create\"><code>coroutine.create (f)</code></a></h3>\n\n\n<p>\nCreates a new coroutine, with body <code>f</code>.\n<code>f</code> must be a Lua function.\nReturns this new coroutine,\nan object with type <code>\"thread\"</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.resume\"><code>coroutine.resume (co [, val1, &middot;&middot;&middot;])</code></a></h3>\n\n\n<p>\nStarts or continues the execution of coroutine <code>co</code>.\nThe first time you resume a coroutine,\nit starts running its body.\nThe values <code>val1</code>, &middot;&middot;&middot; are passed\nas the arguments to the body function.\nIf the coroutine has yielded,\n<code>resume</code> restarts it;\nthe values <code>val1</code>, &middot;&middot;&middot; are passed\nas the results from the yield.\n\n\n<p>\nIf the coroutine runs without 
any errors,\n<code>resume</code> returns <b>true</b> plus any values passed to <code>yield</code>\n(if the coroutine yields) or any values returned by the body function\n(if the coroutine terminates).\nIf there is any error,\n<code>resume</code> returns <b>false</b> plus the error message.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.running\"><code>coroutine.running ()</code></a></h3>\n\n\n<p>\nReturns the running coroutine,\nor <b>nil</b> when called by the main thread.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.status\"><code>coroutine.status (co)</code></a></h3>\n\n\n<p>\nReturns the status of coroutine <code>co</code>, as a string:\n<code>\"running\"</code>,\nif the coroutine is running (that is, it called <code>status</code>);\n<code>\"suspended\"</code>, if the coroutine is suspended in a call to <code>yield</code>,\nor if it has not started running yet;\n<code>\"normal\"</code> if the coroutine is active but not running\n(that is, it has resumed another coroutine);\nand <code>\"dead\"</code> if the coroutine has finished its body function,\nor if it has stopped with an error.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.wrap\"><code>coroutine.wrap (f)</code></a></h3>\n\n\n<p>\nCreates a new coroutine, with body <code>f</code>.\n<code>f</code> must be a Lua function.\nReturns a function that resumes the coroutine each time it is called.\nAny arguments passed to the function behave as the\nextra arguments to <code>resume</code>.\nReturns the same values returned by <code>resume</code>,\nexcept the first boolean.\nIn case of error, propagates the error.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-coroutine.yield\"><code>coroutine.yield (&middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nSuspends the execution of the calling coroutine.\nThe coroutine cannot be running a C&nbsp;function,\na metamethod, or an iterator.\nAny arguments to <code>yield</code> are passed as extra results to <code>resume</code>.\n\n\n\n\n\n\n\n<h2>5.3 - <a 
name=\"5.3\">Modules</a></h2>\n\n<p>\nThe package library provides basic\nfacilities for loading and building modules in Lua.\nIt exports two of its functions directly in the global environment:\n<a href=\"#pdf-require\"><code>require</code></a> and <a href=\"#pdf-module\"><code>module</code></a>.\nEverything else is exported in a table <a name=\"pdf-package\"><code>package</code></a>.\n\n\n<p>\n<hr><h3><a name=\"pdf-module\"><code>module (name [, &middot;&middot;&middot;])</code></a></h3>\n\n\n<p>\nCreates a module.\nIf there is a table in <code>package.loaded[name]</code>,\nthis table is the module.\nOtherwise, if there is a global table <code>t</code> with the given name,\nthis table is the module.\nOtherwise creates a new table <code>t</code> and\nsets it as the value of the global <code>name</code> and\nthe value of <code>package.loaded[name]</code>.\nThis function also initializes <code>t._NAME</code> with the given name,\n<code>t._M</code> with the module (<code>t</code> itself),\nand <code>t._PACKAGE</code> with the package name\n(the full module name minus last component; see below).\nFinally, <code>module</code> sets <code>t</code> as the new environment\nof the current function and the new value of <code>package.loaded[name]</code>,\nso that <a href=\"#pdf-require\"><code>require</code></a> returns <code>t</code>.\n\n\n<p>\nIf <code>name</code> is a compound name\n(that is, one with components separated by dots),\n<code>module</code> creates (or reuses, if they already exist)\ntables for each component.\nFor instance, if <code>name</code> is <code>a.b.c</code>,\nthen <code>module</code> stores the module table in field <code>c</code> of\nfield <code>b</code> of global <code>a</code>.\n\n\n<p>\nThis function can receive optional <em>options</em> after\nthe module name,\nwhere each option is a function to be applied over the module.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-require\"><code>require (modname)</code></a></h3>\n\n\n<p>\nLoads the given module.\nThe 
function starts by looking into the <a href=\"#pdf-package.loaded\"><code>package.loaded</code></a> table\nto determine whether <code>modname</code> is already loaded.\nIf it is, then <code>require</code> returns the value stored\nat <code>package.loaded[modname]</code>.\nOtherwise, it tries to find a <em>loader</em> for the module.\n\n\n<p>\nTo find a loader,\n<code>require</code> is guided by the <a href=\"#pdf-package.loaders\"><code>package.loaders</code></a> array.\nBy changing this array,\nwe can change how <code>require</code> looks for a module.\nThe following explanation is based on the default configuration\nfor <a href=\"#pdf-package.loaders\"><code>package.loaders</code></a>.\n\n\n<p>\nFirst <code>require</code> queries <code>package.preload[modname]</code>.\nIf it has a value,\nthis value (which should be a function) is the loader.\nOtherwise <code>require</code> searches for a Lua loader using the\npath stored in <a href=\"#pdf-package.path\"><code>package.path</code></a>.\nIf that also fails, it searches for a C&nbsp;loader using the\npath stored in <a href=\"#pdf-package.cpath\"><code>package.cpath</code></a>.\nIf that also fails,\nit tries an <em>all-in-one</em> loader (see <a href=\"#pdf-package.loaders\"><code>package.loaders</code></a>).\n\n\n<p>\nOnce a loader is found,\n<code>require</code> calls the loader with a single argument, <code>modname</code>.\nIf the loader returns any value,\n<code>require</code> assigns the returned value to <code>package.loaded[modname]</code>.\nIf the loader returns no value and\nhas not assigned any value to <code>package.loaded[modname]</code>,\nthen <code>require</code> assigns <b>true</b> to this entry.\nIn any case, <code>require</code> returns the\nfinal value of <code>package.loaded[modname]</code>.\n\n\n<p>\nIf there is any error loading or running the module,\nor if it cannot find any loader for the module,\nthen <code>require</code> signals an error. 
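\n<p>\nAs an illustration (the module name <code>hello</code> here is arbitrary),\nthe following chunk supplies a loader through <code>package.preload</code>\nand shows that <code>require</code> stores the loader's result in\n<code>package.loaded</code> and reuses it on later calls:\n\n<pre>\n     package.preload[\"hello\"] = function (modname)\n       return { greet = function () return \"hello\" end }\n     end\n\n     local m = require(\"hello\")\n     assert(m == package.loaded[\"hello\"])\n     assert(require(\"hello\") == m)   -- the loader runs only once\n</pre>\n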
\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-package.cpath\"><code>package.cpath</code></a></h3>\n\n\n<p>\nThe path used by <a href=\"#pdf-require\"><code>require</code></a> to search for a C&nbsp;loader.\n\n\n<p>\nLua initializes the C&nbsp;path <a href=\"#pdf-package.cpath\"><code>package.cpath</code></a> in the same way\nit initializes the Lua path <a href=\"#pdf-package.path\"><code>package.path</code></a>,\nusing the environment variable <a name=\"pdf-LUA_CPATH\"><code>LUA_CPATH</code></a>\nor a default path defined in <code>luaconf.h</code>.\n\n\n\n\n<p>\n\n<hr><h3><a name=\"pdf-package.loaded\"><code>package.loaded</code></a></h3>\n\n\n<p>\nA table used by <a href=\"#pdf-require\"><code>require</code></a> to control which\nmodules are already loaded.\nWhen you require a module <code>modname</code> and\n<code>package.loaded[modname]</code> is not false,\n<a href=\"#pdf-require\"><code>require</code></a> simply returns the value stored there.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-package.loaders\"><code>package.loaders</code></a></h3>\n\n\n<p>\nA table used by <a href=\"#pdf-require\"><code>require</code></a> to control how to load modules.\n\n\n<p>\nEach entry in this table is a <em>searcher function</em>.\nWhen looking for a module,\n<a href=\"#pdf-require\"><code>require</code></a> calls each of these searchers in ascending order,\nwith the module name (the argument given to <a href=\"#pdf-require\"><code>require</code></a>) as its\nsole parameter.\nThe function can return another function (the module <em>loader</em>)\nor a string explaining why it did not find that module\n(or <b>nil</b> if it has nothing to say).\nLua initializes this table with four functions.\n\n\n<p>\nThe first searcher simply looks for a loader in the\n<a href=\"#pdf-package.preload\"><code>package.preload</code></a> table.\n\n\n<p>\nThe second searcher looks for a loader as a Lua library,\nusing the path stored at <a href=\"#pdf-package.path\"><code>package.path</code></a>.\nA path is a 
sequence of <em>templates</em> separated by semicolons.\nFor each template,\nthe searcher will replace each question\nmark in the template with <code>filename</code>,\nwhich is the module name with each dot replaced by a\n\"directory separator\" (such as \"<code>/</code>\" in Unix);\nthen it will try to open the resulting file name.\nSo, for instance, if the Lua path is the string\n\n<pre>\n     \"./?.lua;./?.lc;/usr/local/?/init.lua\"\n</pre><p>\nthe search for a Lua file for module <code>foo</code>\nwill try to open the files\n<code>./foo.lua</code>, <code>./foo.lc</code>, and\n<code>/usr/local/foo/init.lua</code>, in that order.\n\n\n<p>\nThe third searcher looks for a loader as a C&nbsp;library,\nusing the path given by the variable <a href=\"#pdf-package.cpath\"><code>package.cpath</code></a>.\nFor instance,\nif the C&nbsp;path is the string\n\n<pre>\n     \"./?.so;./?.dll;/usr/local/?/init.so\"\n</pre><p>\nthe searcher for module <code>foo</code>\nwill try to open the files <code>./foo.so</code>, <code>./foo.dll</code>,\nand <code>/usr/local/foo/init.so</code>, in that order.\nOnce it finds a C&nbsp;library,\nthis searcher first uses a dynamic link facility to link the\napplication with the library.\nThen it tries to find a C&nbsp;function inside the library to\nbe used as the loader.\nThe name of this C&nbsp;function is the string \"<code>luaopen_</code>\"\nconcatenated with a copy of the module name where each dot\nis replaced by an underscore.\nMoreover, if the module name has a hyphen,\nits prefix up to (and including) the first hyphen is removed.\nFor instance, if the module name is <code>a.v1-b.c</code>,\nthe function name will be <code>luaopen_b_c</code>.\n\n\n<p>\nThe fourth searcher tries an <em>all-in-one loader</em>.\nIt searches the C&nbsp;path for a library for\nthe root name of the given module.\nFor instance, when requiring <code>a.b.c</code>,\nit will search for a C&nbsp;library for <code>a</code>.\nIf found, it looks into it for an open 
function for\nthe submodule;\nin our example, that would be <code>luaopen_a_b_c</code>.\nWith this facility, a package can pack several C&nbsp;submodules\ninto one single library,\nwith each submodule keeping its original open function.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-package.loadlib\"><code>package.loadlib (libname, funcname)</code></a></h3>\n\n\n<p>\nDynamically links the host program with the C&nbsp;library <code>libname</code>.\nInside this library, looks for a function <code>funcname</code>\nand returns this function as a C&nbsp;function.\n(So, <code>funcname</code> must follow the protocol (see <a href=\"#lua_CFunction\"><code>lua_CFunction</code></a>)).\n\n\n<p>\nThis is a low-level function.\nIt completely bypasses the package and module system.\nUnlike <a href=\"#pdf-require\"><code>require</code></a>,\nit does not perform any path searching and\ndoes not automatically add extensions.\n<code>libname</code> must be the complete file name of the C&nbsp;library,\nincluding if necessary a path and extension.\n<code>funcname</code> must be the exact name exported by the C&nbsp;library\n(which may depend on the C&nbsp;compiler and linker used).\n\n\n<p>\nThis function is not supported by ANSI C.\nAs such, it is only available on some platforms\n(Windows, Linux, Mac OS X, Solaris, BSD,\nplus other Unix systems that support the <code>dlfcn</code> standard).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-package.path\"><code>package.path</code></a></h3>\n\n\n<p>\nThe path used by <a href=\"#pdf-require\"><code>require</code></a> to search for a Lua loader.\n\n\n<p>\nAt start-up, Lua initializes this variable with\nthe value of the environment variable <a name=\"pdf-LUA_PATH\"><code>LUA_PATH</code></a> or\nwith a default path defined in <code>luaconf.h</code>,\nif the environment variable is not defined.\nAny \"<code>;;</code>\" in the value of the environment variable\nis replaced by the default path.\n\n\n\n\n<p>\n<hr><h3><a 
name=\"pdf-package.preload\"><code>package.preload</code></a></h3>\n\n\n<p>\nA table to store loaders for specific modules\n(see <a href=\"#pdf-require\"><code>require</code></a>).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-package.seeall\"><code>package.seeall (module)</code></a></h3>\n\n\n<p>\nSets a metatable for <code>module</code> with\nits <code>__index</code> field referring to the global environment,\nso that this module inherits values\nfrom the global environment.\nTo be used as an option to function <a href=\"#pdf-module\"><code>module</code></a>.\n\n\n\n\n\n\n\n<h2>5.4 - <a name=\"5.4\">String Manipulation</a></h2>\n\n<p>\nThis library provides generic functions for string manipulation,\nsuch as finding and extracting substrings, and pattern matching.\nWhen indexing a string in Lua, the first character is at position&nbsp;1\n(not at&nbsp;0, as in C).\nIndices are allowed to be negative and are interpreted as indexing backwards,\nfrom the end of the string.\nThus, the last character is at position -1, and so on.\n\n\n<p>\nThe string library provides all its functions inside the table\n<a name=\"pdf-string\"><code>string</code></a>.\nIt also sets a metatable for strings\nwhere the <code>__index</code> field points to the <code>string</code> table.\nTherefore, you can use the string functions in object-oriented style.\nFor instance, <code>string.byte(s, i)</code>\ncan be written as <code>s:byte(i)</code>.\n\n\n<p>\nThe string library assumes one-byte character encodings.\n\n\n<p>\n<hr><h3><a name=\"pdf-string.byte\"><code>string.byte (s [, i [, j]])</code></a></h3>\nReturns the internal numerical codes of the characters <code>s[i]</code>,\n<code>s[i+1]</code>, &middot;&middot;&middot;, <code>s[j]</code>.\nThe default value for <code>i</code> is&nbsp;1;\nthe default value for <code>j</code> is&nbsp;<code>i</code>.\n\n\n<p>\nNote that numerical codes are not necessarily portable across platforms.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.char\"><code>string.char 
(&middot;&middot;&middot;)</code></a></h3>\nReceives zero or more integers.\nReturns a string with length equal to the number of arguments,\nin which each character has the internal numerical code equal\nto its corresponding argument.\n\n\n<p>\nNote that numerical codes are not necessarily portable across platforms.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.dump\"><code>string.dump (function)</code></a></h3>\n\n\n<p>\nReturns a string containing a binary representation of the given function,\nso that a later <a href=\"#pdf-loadstring\"><code>loadstring</code></a> on this string returns\na copy of the function.\n<code>function</code> must be a Lua function without upvalues.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.find\"><code>string.find (s, pattern [, init [, plain]])</code></a></h3>\nLooks for the first match of\n<code>pattern</code> in the string <code>s</code>.\nIf it finds a match, then <code>find</code> returns the indices of&nbsp;<code>s</code>\nwhere this occurrence starts and ends;\notherwise, it returns <b>nil</b>.\nA third, optional numerical argument <code>init</code> specifies\nwhere to start the search;\nits default value is&nbsp;1 and can be negative.\nA value of <b>true</b> as a fourth, optional argument <code>plain</code>\nturns off the pattern matching facilities,\nso the function does a plain \"find substring\" operation,\nwith no characters in <code>pattern</code> being considered \"magic\".\nNote that if <code>plain</code> is given, then <code>init</code> must be given as well.\n\n\n<p>\nIf the pattern has captures,\nthen in a successful match\nthe captured values are also returned,\nafter the two indices.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.format\"><code>string.format (formatstring, &middot;&middot;&middot;)</code></a></h3>\nReturns a formatted version of its variable number of arguments\nfollowing the description given in its first argument (which must be a string).\nThe format string follows the same rules as the 
<code>printf</code> family of\nstandard C&nbsp;functions.\nThe only differences are that the options/modifiers\n<code>*</code>, <code>l</code>, <code>L</code>, <code>n</code>, <code>p</code>,\nand <code>h</code> are not supported\nand that there is an extra option, <code>q</code>.\nThe <code>q</code> option formats a string in a form suitable to be safely read\nback by the Lua interpreter:\nthe string is written between double quotes,\nand all double quotes, newlines, embedded zeros,\nand backslashes in the string\nare correctly escaped when written.\nFor instance, the call\n\n<pre>\n     string.format('%q', 'a string with \"quotes\" and \\n new line')\n</pre><p>\nwill produce the string:\n\n<pre>\n     \"a string with \\\"quotes\\\" and \\\n      new line\"\n</pre>\n\n<p>\nThe options <code>c</code>, <code>d</code>, <code>E</code>, <code>e</code>, <code>f</code>,\n<code>g</code>, <code>G</code>, <code>i</code>, <code>o</code>, <code>u</code>, <code>X</code>, and <code>x</code> all\nexpect a number as argument,\nwhereas <code>q</code> and <code>s</code> expect a string.\n\n\n<p>\nThis function does not accept string values\ncontaining embedded zeros,\nexcept as arguments to the <code>q</code> option.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.gmatch\"><code>string.gmatch (s, pattern)</code></a></h3>\nReturns an iterator function that,\neach time it is called,\nreturns the next captures from <code>pattern</code> over string <code>s</code>.\nIf <code>pattern</code> specifies no captures,\nthen the whole match is produced in each call.\n\n\n<p>\nAs an example, the following loop\n\n<pre>\n     s = \"hello world from Lua\"\n     for w in string.gmatch(s, \"%a+\") do\n       print(w)\n     end\n</pre><p>\nwill iterate over all the words from string <code>s</code>,\nprinting one per line.\nThe next example collects all pairs <code>key=value</code> from the\ngiven string into a table:\n\n<pre>\n     t = {}\n     s = \"from=world, to=Lua\"\n     for k, v in 
string.gmatch(s, \"(%w+)=(%w+)\") do\n       t[k] = v\n     end\n</pre>\n\n<p>\nFor this function, a '<code>^</code>' at the start of a pattern does not\nwork as an anchor, as this would prevent the iteration.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.gsub\"><code>string.gsub (s, pattern, repl [, n])</code></a></h3>\nReturns a copy of <code>s</code>\nin which all (or the first <code>n</code>, if given)\noccurrences of the <code>pattern</code> have been\nreplaced by a replacement string specified by <code>repl</code>,\nwhich can be a string, a table, or a function.\n<code>gsub</code> also returns, as its second value,\nthe total number of matches that occurred.\n\n\n<p>\nIf <code>repl</code> is a string, then its value is used for replacement.\nThe character&nbsp;<code>%</code> works as an escape character:\nany sequence in <code>repl</code> of the form <code>%<em>n</em></code>,\nwith <em>n</em> between 1 and 9,\nstands for the value of the <em>n</em>-th captured substring (see below).\nThe sequence <code>%0</code> stands for the whole match.\nThe sequence <code>%%</code> stands for a single&nbsp;<code>%</code>.\n\n\n<p>\nIf <code>repl</code> is a table, then the table is queried for every match,\nusing the first capture as the key;\nif the pattern specifies no captures,\nthen the whole match is used as the key.\n\n\n<p>\nIf <code>repl</code> is a function, then this function is called every time a\nmatch occurs, with all captured substrings passed as arguments,\nin order;\nif the pattern specifies no captures,\nthen the whole match is passed as a sole argument.\n\n\n<p>\nIf the value returned by the table query or by the function call\nis a string or a number,\nthen it is used as the replacement string;\notherwise, if it is <b>false</b> or <b>nil</b>,\nthen there is no replacement\n(that is, the original match is kept in the string).\n\n\n<p>\nHere are some examples:\n\n<pre>\n     x = string.gsub(\"hello world\", \"(%w+)\", \"%1 %1\")\n     --&gt; x=\"hello hello 
world world\"\n     \n     x = string.gsub(\"hello world\", \"%w+\", \"%0 %0\", 1)\n     --&gt; x=\"hello hello world\"\n     \n     x = string.gsub(\"hello world from Lua\", \"(%w+)%s*(%w+)\", \"%2 %1\")\n     --&gt; x=\"world hello Lua from\"\n     \n     x = string.gsub(\"home = $HOME, user = $USER\", \"%$(%w+)\", os.getenv)\n     --&gt; x=\"home = /home/roberto, user = roberto\"\n     \n     x = string.gsub(\"4+5 = $return 4+5$\", \"%$(.-)%$\", function (s)\n           return loadstring(s)()\n         end)\n     --&gt; x=\"4+5 = 9\"\n     \n     local t = {name=\"lua\", version=\"5.1\"}\n     x = string.gsub(\"$name-$version.tar.gz\", \"%$(%w+)\", t)\n     --&gt; x=\"lua-5.1.tar.gz\"\n</pre>\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.len\"><code>string.len (s)</code></a></h3>\nReceives a string and returns its length.\nThe empty string <code>\"\"</code> has length 0.\nEmbedded zeros are counted,\nso <code>\"a\\000bc\\000\"</code> has length 5.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.lower\"><code>string.lower (s)</code></a></h3>\nReceives a string and returns a copy of this string with all\nuppercase letters changed to lowercase.\nAll other characters are left unchanged.\nThe definition of what an uppercase letter is depends on the current locale.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.match\"><code>string.match (s, pattern [, init])</code></a></h3>\nLooks for the first <em>match</em> of\n<code>pattern</code> in the string <code>s</code>.\nIf it finds one, then <code>match</code> returns\nthe captures from the pattern;\notherwise it returns <b>nil</b>.\nIf <code>pattern</code> specifies no captures,\nthen the whole match is returned.\nA third, optional numerical argument <code>init</code> specifies\nwhere to start the search;\nits default value is&nbsp;1 and can be negative.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.rep\"><code>string.rep (s, n)</code></a></h3>\nReturns a string that is the concatenation of <code>n</code> copies of\nthe string 
<code>s</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.reverse\"><code>string.reverse (s)</code></a></h3>\nReturns a string that is the string <code>s</code> reversed.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.sub\"><code>string.sub (s, i [, j])</code></a></h3>\nReturns the substring of <code>s</code> that\nstarts at <code>i</code>  and continues until <code>j</code>;\n<code>i</code> and <code>j</code> can be negative.\nIf <code>j</code> is absent, then it is assumed to be equal to -1\n(which is the same as the string length).\nIn particular,\nthe call <code>string.sub(s,1,j)</code> returns a prefix of <code>s</code>\nwith length <code>j</code>,\nand <code>string.sub(s, -i)</code> returns a suffix of <code>s</code>\nwith length <code>i</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-string.upper\"><code>string.upper (s)</code></a></h3>\nReceives a string and returns a copy of this string with all\nlowercase letters changed to uppercase.\nAll other characters are left unchanged.\nThe definition of what a lowercase letter is depends on the current locale.\n\n\n\n<h3>5.4.1 - <a name=\"5.4.1\">Patterns</a></h3>\n\n\n<h4>Character Class:</h4><p>\nA <em>character class</em> is used to represent a set of characters.\nThe following combinations are allowed in describing a character class:\n\n<ul>\n\n<li><b><em>x</em>:</b>\n(where <em>x</em> is not one of the <em>magic characters</em>\n<code>^$()%.[]*+-?</code>)\nrepresents the character <em>x</em> itself.\n</li>\n\n<li><b><code>.</code>:</b> (a dot) represents all characters.</li>\n\n<li><b><code>%a</code>:</b> represents all letters.</li>\n\n<li><b><code>%c</code>:</b> represents all control characters.</li>\n\n<li><b><code>%d</code>:</b> represents all digits.</li>\n\n<li><b><code>%l</code>:</b> represents all lowercase letters.</li>\n\n<li><b><code>%p</code>:</b> represents all punctuation characters.</li>\n\n<li><b><code>%s</code>:</b> represents all space characters.</li>\n\n<li><b><code>%u</code>:</b> represents all 
uppercase letters.</li>\n\n<li><b><code>%w</code>:</b> represents all alphanumeric characters.</li>\n\n<li><b><code>%x</code>:</b> represents all hexadecimal digits.</li>\n\n<li><b><code>%z</code>:</b> represents the character with representation 0.</li>\n\n<li><b><code>%<em>x</em></code>:</b> (where <em>x</em> is any non-alphanumeric character)\nrepresents the character <em>x</em>.\nThis is the standard way to escape the magic characters.\nAny punctuation character (even the non magic)\ncan be preceded by a '<code>%</code>'\nwhen used to represent itself in a pattern.\n</li>\n\n<li><b><code>[<em>set</em>]</code>:</b>\nrepresents the class which is the union of all\ncharacters in <em>set</em>.\nA range of characters can be specified by\nseparating the end characters of the range with a '<code>-</code>'.\nAll classes <code>%</code><em>x</em> described above can also be used as\ncomponents in <em>set</em>.\nAll other characters in <em>set</em> represent themselves.\nFor example, <code>[%w_]</code> (or <code>[_%w]</code>)\nrepresents all alphanumeric characters plus the underscore,\n<code>[0-7]</code> represents the octal digits,\nand <code>[0-7%l%-]</code> represents the octal digits plus\nthe lowercase letters plus the '<code>-</code>' character.\n\n\n<p>\nThe interaction between ranges and classes is not defined.\nTherefore, patterns like <code>[%a-z]</code> or <code>[a-%%]</code>\nhave no meaning.\n</li>\n\n<li><b><code>[^<em>set</em>]</code>:</b>\nrepresents the complement of <em>set</em>,\nwhere <em>set</em> is interpreted as above.\n</li>\n\n</ul><p>\nFor all classes represented by single letters (<code>%a</code>, <code>%c</code>, etc.),\nthe corresponding uppercase letter represents the complement of the class.\nFor instance, <code>%S</code> represents all non-space characters.\n\n\n<p>\nThe definitions of letter, space, and other character groups\ndepend on the current locale.\nIn particular, the class <code>[a-z]</code> may not be equivalent to 
<code>%l</code>.\n\n\n\n\n\n<h4>Pattern Item:</h4><p>\nA <em>pattern item</em> can be\n\n<ul>\n\n<li>\na single character class,\nwhich matches any single character in the class;\n</li>\n\n<li>\na single character class followed by '<code>*</code>',\nwhich matches 0 or more repetitions of characters in the class.\nThese repetition items will always match the longest possible sequence;\n</li>\n\n<li>\na single character class followed by '<code>+</code>',\nwhich matches 1 or more repetitions of characters in the class.\nThese repetition items will always match the longest possible sequence;\n</li>\n\n<li>\na single character class followed by '<code>-</code>',\nwhich also matches 0 or more repetitions of characters in the class.\nUnlike '<code>*</code>',\nthese repetition items will always match the <em>shortest</em> possible sequence;\n</li>\n\n<li>\na single character class followed by '<code>?</code>',\nwhich matches 0 or 1 occurrence of a character in the class;\n</li>\n\n<li>\n<code>%<em>n</em></code>, for <em>n</em> between 1 and 9;\nsuch item matches a substring equal to the <em>n</em>-th captured string\n(see below);\n</li>\n\n<li>\n<code>%b<em>xy</em></code>, where <em>x</em> and <em>y</em> are two distinct characters;\nsuch item matches strings that start with&nbsp;<em>x</em>, end with&nbsp;<em>y</em>,\nand where the <em>x</em> and <em>y</em> are <em>balanced</em>.\nThis means that, if one reads the string from left to right,\ncounting <em>+1</em> for an <em>x</em> and <em>-1</em> for a <em>y</em>,\nthe ending <em>y</em> is the first <em>y</em> where the count reaches 0.\nFor instance, the item <code>%b()</code> matches expressions with\nbalanced parentheses.\n</li>\n\n</ul>\n\n\n\n\n<h4>Pattern:</h4><p>\nA <em>pattern</em> is a sequence of pattern items.\nA '<code>^</code>' at the beginning of a pattern anchors the match at the\nbeginning of the subject string.\nA '<code>$</code>' at the end of a pattern anchors the match at the\nend of the subject 
string.\nAt other positions,\n'<code>^</code>' and '<code>$</code>' have no special meaning and represent themselves.\n\n\n\n\n\n<h4>Captures:</h4><p>\nA pattern can contain sub-patterns enclosed in parentheses;\nthey describe <em>captures</em>.\nWhen a match succeeds, the substrings of the subject string\nthat match captures are stored (<em>captured</em>) for future use.\nCaptures are numbered according to their left parentheses.\nFor instance, in the pattern <code>\"(a*(.)%w(%s*))\"</code>,\nthe part of the string matching <code>\"a*(.)%w(%s*)\"</code> is\nstored as the first capture (and therefore has number&nbsp;1);\nthe character matching \"<code>.</code>\" is captured with number&nbsp;2,\nand the part matching \"<code>%s*</code>\" has number&nbsp;3.\n\n\n<p>\nAs a special case, the empty capture <code>()</code> captures\nthe current string position (a number).\nFor instance, if we apply the pattern <code>\"()aa()\"</code> on the\nstring <code>\"flaaap\"</code>, there will be two captures: 3&nbsp;and&nbsp;5.\n\n\n<p>\nA pattern cannot contain embedded zeros.  
Use <code>%z</code> instead.\n\n\n\n\n\n\n\n\n\n\n\n<h2>5.5 - <a name=\"5.5\">Table Manipulation</a></h2><p>\nThis library provides generic functions for table manipulation.\nIt provides all its functions inside the table <a name=\"pdf-table\"><code>table</code></a>.\n\n\n<p>\nMost functions in the table library assume that the table\nrepresents an array or a list.\nFor these functions, when we talk about the \"length\" of a table\nwe mean the result of the length operator.\n\n\n<p>\n<hr><h3><a name=\"pdf-table.concat\"><code>table.concat (table [, sep [, i [, j]]])</code></a></h3>\nGiven an array where all elements are strings or numbers,\nreturns <code>table[i]..sep..table[i+1] &middot;&middot;&middot; sep..table[j]</code>.\nThe default value for <code>sep</code> is the empty string,\nthe default for <code>i</code> is 1,\nand the default for <code>j</code> is the length of the table.\nIf <code>i</code> is greater than <code>j</code>, returns the empty string.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-table.insert\"><code>table.insert (table, [pos,] value)</code></a></h3>\n\n\n<p>\nInserts element <code>value</code> at position <code>pos</code> in <code>table</code>,\nshifting up other elements to open space, if necessary.\nThe default value for <code>pos</code> is <code>n+1</code>,\nwhere <code>n</code> is the length of the table (see <a href=\"#2.5.5\">&sect;2.5.5</a>),\nso that a call <code>table.insert(t,x)</code> inserts <code>x</code> at the end\nof table <code>t</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-table.maxn\"><code>table.maxn (table)</code></a></h3>\n\n\n<p>\nReturns the largest positive numerical index of the given table,\nor zero if the table has no positive numerical indices.\n(To do its job this function does a linear traversal of\nthe whole table.) 
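\n\n<p>\nAs an example of the functions described so far, the following chunk\nbuilds a small array with <code>table.insert</code>,\njoins it with <code>table.concat</code>,\nand then shows how <code>table.maxn</code> sees an index set\nbeyond the array part:\n\n<pre>\n     local t = {}\n     table.insert(t, \"a\")          -- appends at the end: t = {\"a\"}\n     table.insert(t, \"b\")          -- t = {\"a\", \"b\"}\n     table.insert(t, 1, \"z\")       -- inserts at position 1: t = {\"z\", \"a\", \"b\"}\n     print(table.concat(t, \"-\"))   --&gt; z-a-b\n     t[10] = true\n     print(table.maxn(t))          --&gt; 10\n</pre><p>\nNote that after the assignment <code>t[10] = true</code>\nthe length operator may still report&nbsp;3,\nwhile <code>table.maxn</code> traverses the whole table\nand finds the index&nbsp;10.\n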
\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-table.remove\"><code>table.remove (table [, pos])</code></a></h3>\n\n\n<p>\nRemoves from <code>table</code> the element at position <code>pos</code>,\nshifting down other elements to close the space, if necessary.\nReturns the value of the removed element.\nThe default value for <code>pos</code> is <code>n</code>,\nwhere <code>n</code> is the length of the table,\nso that a call <code>table.remove(t)</code> removes the last element\nof table <code>t</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-table.sort\"><code>table.sort (table [, comp])</code></a></h3>\nSorts table elements in a given order, <em>in-place</em>,\nfrom <code>table[1]</code> to <code>table[n]</code>,\nwhere <code>n</code> is the length of the table.\nIf <code>comp</code> is given,\nthen it must be a function that receives two table elements,\nand returns true\nwhen the first is less than the second\n(so that <code>not comp(a[i+1],a[i])</code> will be true after the sort).\nIf <code>comp</code> is not given,\nthen the standard Lua operator <code>&lt;</code> is used instead.\n\n\n<p>\nThe sort algorithm is not stable;\nthat is, elements considered equal by the given order\nmay have their relative positions changed by the sort.\n\n\n\n\n\n\n\n<h2>5.6 - <a name=\"5.6\">Mathematical Functions</a></h2>\n\n<p>\nThis library is an interface to the standard C&nbsp;math library.\nIt provides all its functions inside the table <a name=\"pdf-math\"><code>math</code></a>.\n\n\n<p>\n<hr><h3><a name=\"pdf-math.abs\"><code>math.abs (x)</code></a></h3>\n\n\n<p>\nReturns the absolute value of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.acos\"><code>math.acos (x)</code></a></h3>\n\n\n<p>\nReturns the arc cosine of <code>x</code> (in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.asin\"><code>math.asin (x)</code></a></h3>\n\n\n<p>\nReturns the arc sine of <code>x</code> (in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.atan\"><code>math.atan 
(x)</code></a></h3>\n\n\n<p>\nReturns the arc tangent of <code>x</code> (in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.atan2\"><code>math.atan2 (y, x)</code></a></h3>\n\n\n<p>\nReturns the arc tangent of <code>y/x</code> (in radians),\nbut uses the signs of both parameters to find the\nquadrant of the result.\n(It also handles correctly the case of <code>x</code> being zero.)\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.ceil\"><code>math.ceil (x)</code></a></h3>\n\n\n<p>\nReturns the smallest integer larger than or equal to <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.cos\"><code>math.cos (x)</code></a></h3>\n\n\n<p>\nReturns the cosine of <code>x</code> (assumed to be in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.cosh\"><code>math.cosh (x)</code></a></h3>\n\n\n<p>\nReturns the hyperbolic cosine of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.deg\"><code>math.deg (x)</code></a></h3>\n\n\n<p>\nReturns the angle <code>x</code> (given in radians) in degrees.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.exp\"><code>math.exp (x)</code></a></h3>\n\n\n<p>\nReturns the value <em>e<sup>x</sup></em>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.floor\"><code>math.floor (x)</code></a></h3>\n\n\n<p>\nReturns the largest integer smaller than or equal to <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.fmod\"><code>math.fmod (x, y)</code></a></h3>\n\n\n<p>\nReturns the remainder of the division of <code>x</code> by <code>y</code>\nthat rounds the quotient towards zero.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.frexp\"><code>math.frexp (x)</code></a></h3>\n\n\n<p>\nReturns <code>m</code> and <code>e</code> such that <em>x = m2<sup>e</sup></em>,\n<code>e</code> is an integer and the absolute value of <code>m</code> is\nin the range <em>[0.5, 1)</em>\n(or zero when <code>x</code> is zero).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.huge\"><code>math.huge</code></a></h3>\n\n\n<p>\nThe value <code>HUGE_VAL</code>,\na value larger than or equal to any 
other numerical value.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.ldexp\"><code>math.ldexp (m, e)</code></a></h3>\n\n\n<p>\nReturns <em>m2<sup>e</sup></em> (<code>e</code> should be an integer).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.log\"><code>math.log (x)</code></a></h3>\n\n\n<p>\nReturns the natural logarithm of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.log10\"><code>math.log10 (x)</code></a></h3>\n\n\n<p>\nReturns the base-10 logarithm of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.max\"><code>math.max (x, &middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nReturns the maximum value among its arguments.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.min\"><code>math.min (x, &middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nReturns the minimum value among its arguments.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.modf\"><code>math.modf (x)</code></a></h3>\n\n\n<p>\nReturns two numbers,\nthe integral part of <code>x</code> and the fractional part of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.pi\"><code>math.pi</code></a></h3>\n\n\n<p>\nThe value of <em>pi</em>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.pow\"><code>math.pow (x, y)</code></a></h3>\n\n\n<p>\nReturns <em>x<sup>y</sup></em>.\n(You can also use the expression <code>x^y</code> to compute this value.)\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.rad\"><code>math.rad (x)</code></a></h3>\n\n\n<p>\nReturns the angle <code>x</code> (given in degrees) in radians.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.random\"><code>math.random ([m [, n]])</code></a></h3>\n\n\n<p>\nThis function is an interface to the simple\npseudo-random generator function <code>rand</code> provided by ANSI&nbsp;C.\n(No guarantees can be given for its statistical properties.)\n\n\n<p>\nWhen called without arguments,\nreturns a uniform pseudo-random real number\nin the range <em>[0,1)</em>.  
\nWhen called with an integer number <code>m</code>,\n<code>math.random</code> returns\na uniform pseudo-random integer in the range <em>[1, m]</em>.\nWhen called with two integer numbers <code>m</code> and <code>n</code>,\n<code>math.random</code> returns a uniform pseudo-random\ninteger in the range <em>[m, n]</em>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.randomseed\"><code>math.randomseed (x)</code></a></h3>\n\n\n<p>\nSets <code>x</code> as the \"seed\"\nfor the pseudo-random generator:\nequal seeds produce equal sequences of numbers.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.sin\"><code>math.sin (x)</code></a></h3>\n\n\n<p>\nReturns the sine of <code>x</code> (assumed to be in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.sinh\"><code>math.sinh (x)</code></a></h3>\n\n\n<p>\nReturns the hyperbolic sine of <code>x</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.sqrt\"><code>math.sqrt (x)</code></a></h3>\n\n\n<p>\nReturns the square root of <code>x</code>.\n(You can also use the expression <code>x^0.5</code> to compute this value.)\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.tan\"><code>math.tan (x)</code></a></h3>\n\n\n<p>\nReturns the tangent of <code>x</code> (assumed to be in radians).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-math.tanh\"><code>math.tanh (x)</code></a></h3>\n\n\n<p>\nReturns the hyperbolic tangent of <code>x</code>.\n\n\n\n\n\n\n\n<h2>5.7 - <a name=\"5.7\">Input and Output Facilities</a></h2>\n\n<p>\nThe I/O library provides two different styles for file manipulation.\nThe first one uses implicit file descriptors;\nthat is, there are operations to set a default input file and a\ndefault output file,\nand all input/output operations are over these default files.\nThe second style uses explicit file descriptors.\n\n\n<p>\nWhen using implicit file descriptors,\nall operations are supplied by table <a name=\"pdf-io\"><code>io</code></a>.\nWhen using explicit file descriptors,\nthe operation <a href=\"#pdf-io.open\"><code>io.open</code></a> returns a 
file descriptor\nand then all operations are supplied as methods of the file descriptor.\n\n\n<p>\nThe table <code>io</code> also provides\nthree predefined file descriptors with their usual meanings from C:\n<a name=\"pdf-io.stdin\"><code>io.stdin</code></a>, <a name=\"pdf-io.stdout\"><code>io.stdout</code></a>, and <a name=\"pdf-io.stderr\"><code>io.stderr</code></a>.\nThe I/O library never closes these files.\n\n\n<p>\nUnless otherwise stated,\nall I/O functions return <b>nil</b> on failure\n(plus an error message as a second result and\na system-dependent error code as a third result)\nand some value different from <b>nil</b> on success.\n\n\n<p>\n<hr><h3><a name=\"pdf-io.close\"><code>io.close ([file])</code></a></h3>\n\n\n<p>\nEquivalent to <code>file:close()</code>.\nWithout a <code>file</code>, closes the default output file.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.flush\"><code>io.flush ()</code></a></h3>\n\n\n<p>\nEquivalent to <code>file:flush</code> over the default output file.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.input\"><code>io.input ([file])</code></a></h3>\n\n\n<p>\nWhen called with a file name, it opens the named file (in text mode),\nand sets its handle as the default input file.\nWhen called with a file handle,\nit simply sets this file handle as the default input file.\nWhen called without parameters,\nit returns the current default input file.\n\n\n<p>\nIn case of errors this function raises the error,\ninstead of returning an error code.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.lines\"><code>io.lines ([filename])</code></a></h3>\n\n\n<p>\nOpens the given file name in read mode\nand returns an iterator function that,\neach time it is called,\nreturns a new line from the file.\nTherefore, the construction\n\n<pre>\n     for line in io.lines(filename) do <em>body</em> end\n</pre><p>\nwill iterate over all lines of the file.\nWhen the iterator function detects the end of file,\nit returns <b>nil</b> (to finish the loop) and automatically closes 
the file.\n\n\n<p>\nThe call <code>io.lines()</code> (with no file name) is equivalent\nto <code>io.input():lines()</code>;\nthat is, it iterates over the lines of the default input file.\nIn this case it does not close the file when the loop ends.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.open\"><code>io.open (filename [, mode])</code></a></h3>\n\n\n<p>\nThis function opens a file,\nin the mode specified in the string <code>mode</code>.\nIt returns a new file handle,\nor, in case of errors, <b>nil</b> plus an error message.\n\n\n<p>\nThe <code>mode</code> string can be any of the following:\n\n<ul>\n<li><b>\"r\":</b> read mode (the default);</li>\n<li><b>\"w\":</b> write mode;</li>\n<li><b>\"a\":</b> append mode;</li>\n<li><b>\"r+\":</b> update mode, all previous data is preserved;</li>\n<li><b>\"w+\":</b> update mode, all previous data is erased;</li>\n<li><b>\"a+\":</b> append update mode, previous data is preserved,\n  writing is only allowed at the end of file.</li>\n</ul><p>\nThe <code>mode</code> string can also have a '<code>b</code>' at the end,\nwhich is needed in some systems to open the file in binary mode.\nThis string is exactly what is used in the\nstandard&nbsp;C function <code>fopen</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.output\"><code>io.output ([file])</code></a></h3>\n\n\n<p>\nSimilar to <a href=\"#pdf-io.input\"><code>io.input</code></a>, but operates over the default output file.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.popen\"><code>io.popen (prog [, mode])</code></a></h3>\n\n\n<p>\nStarts program <code>prog</code> in a separated process and returns\na file handle that you can use to read data from this program\n(if <code>mode</code> is <code>\"r\"</code>, the default)\nor to write data to this program\n(if <code>mode</code> is <code>\"w\"</code>).\n\n\n<p>\nThis function is system dependent and is not available\non all platforms.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.read\"><code>io.read 
(&middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nEquivalent to <code>io.input():read</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.tmpfile\"><code>io.tmpfile ()</code></a></h3>\n\n\n<p>\nReturns a handle for a temporary file.\nThis file is opened in update mode\nand it is automatically removed when the program ends.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.type\"><code>io.type (obj)</code></a></h3>\n\n\n<p>\nChecks whether <code>obj</code> is a valid file handle.\nReturns the string <code>\"file\"</code> if <code>obj</code> is an open file handle,\n<code>\"closed file\"</code> if <code>obj</code> is a closed file handle,\nor <b>nil</b> if <code>obj</code> is not a file handle.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-io.write\"><code>io.write (&middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nEquivalent to <code>io.output():write</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:close\"><code>file:close ()</code></a></h3>\n\n\n<p>\nCloses <code>file</code>.\nNote that files are automatically closed when\ntheir handles are garbage collected,\nbut that takes an unpredictable amount of time to happen.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:flush\"><code>file:flush ()</code></a></h3>\n\n\n<p>\nSaves any written data to <code>file</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:lines\"><code>file:lines ()</code></a></h3>\n\n\n<p>\nReturns an iterator function that,\neach time it is called,\nreturns a new line from the file.\nTherefore, the construction\n\n<pre>\n     for line in file:lines() do <em>body</em> end\n</pre><p>\nwill iterate over all lines of the file.\n(Unlike <a href=\"#pdf-io.lines\"><code>io.lines</code></a>, this function does not close the file\nwhen the loop ends.)\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:read\"><code>file:read (&middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nReads the file <code>file</code>,\naccording to the given formats, which specify what to read.\nFor each format,\nthe function returns a string (or a number) with the 
characters read,\nor <b>nil</b> if it cannot read data with the specified format.\nWhen called without formats,\nit uses a default format that reads the entire next line\n(see below).\n\n\n<p>\nThe available formats are\n\n<ul>\n\n<li><b>\"*n\":</b>\nreads a number;\nthis is the only format that returns a number instead of a string.\n</li>\n\n<li><b>\"*a\":</b>\nreads the whole file, starting at the current position.\nOn end of file, it returns the empty string.\n</li>\n\n<li><b>\"*l\":</b>\nreads the next line (skipping the end of line),\nreturning <b>nil</b> on end of file.\nThis is the default format.\n</li>\n\n<li><b><em>number</em>:</b>\nreads a string with up to this number of characters,\nreturning <b>nil</b> on end of file.\nIf number is zero,\nit reads nothing and returns an empty string,\nor <b>nil</b> on end of file.\n</li>\n\n</ul>\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:seek\"><code>file:seek ([whence] [, offset])</code></a></h3>\n\n\n<p>\nSets and gets the file position,\nmeasured from the beginning of the file,\nto the position given by <code>offset</code> plus a base\nspecified by the string <code>whence</code>, as follows:\n\n<ul>\n<li><b>\"set\":</b> base is position 0 (beginning of the file);</li>\n<li><b>\"cur\":</b> base is current position;</li>\n<li><b>\"end\":</b> base is end of file;</li>\n</ul><p>\nIn case of success, function <code>seek</code> returns the final file position,\nmeasured in bytes from the beginning of the file.\nIf this function fails, it returns <b>nil</b>,\nplus a string describing the error.\n\n\n<p>\nThe default value for <code>whence</code> is <code>\"cur\"</code>,\nand for <code>offset</code> is 0.\nTherefore, the call <code>file:seek()</code> returns the current\nfile position, without changing it;\nthe call <code>file:seek(\"set\")</code> sets the position to the\nbeginning of the file (and returns 0);\nand the call <code>file:seek(\"end\")</code> sets the position to the\nend of the file, and returns its 
size.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:setvbuf\"><code>file:setvbuf (mode [, size])</code></a></h3>\n\n\n<p>\nSets the buffering mode for an output file.\nThere are three available modes:\n\n<ul>\n\n<li><b>\"no\":</b>\nno buffering; the result of any output operation appears immediately.\n</li>\n\n<li><b>\"full\":</b>\nfull buffering; output operation is performed only\nwhen the buffer is full (or when you explicitly <code>flush</code> the file\n(see <a href=\"#pdf-io.flush\"><code>io.flush</code></a>)).\n</li>\n\n<li><b>\"line\":</b>\nline buffering; output is buffered until a newline is output\nor there is any input from some special files\n(such as a terminal device).\n</li>\n\n</ul><p>\nFor the last two cases, <code>size</code>\nspecifies the size of the buffer, in bytes.\nThe default is an appropriate size.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-file:write\"><code>file:write (&middot;&middot;&middot;)</code></a></h3>\n\n\n<p>\nWrites the value of each of its arguments to\nthe <code>file</code>.\nThe arguments must be strings or numbers.\nTo write other values,\nuse <a href=\"#pdf-tostring\"><code>tostring</code></a> or <a href=\"#pdf-string.format\"><code>string.format</code></a> before <code>write</code>.\n\n\n\n\n\n\n\n<h2>5.8 - <a name=\"5.8\">Operating System Facilities</a></h2>\n\n<p>\nThis library is implemented through table <a name=\"pdf-os\"><code>os</code></a>.\n\n\n<p>\n<hr><h3><a name=\"pdf-os.clock\"><code>os.clock ()</code></a></h3>\n\n\n<p>\nReturns an approximation of the amount in seconds of CPU time\nused by the program.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.date\"><code>os.date ([format [, time]])</code></a></h3>\n\n\n<p>\nReturns a string or a table containing date and time,\nformatted according to the given string <code>format</code>.\n\n\n<p>\nIf the <code>time</code> argument is present,\nthis is the time to be formatted\n(see the <a href=\"#pdf-os.time\"><code>os.time</code></a> function for a description of this 
value).\nOtherwise, <code>date</code> formats the current time.\n\n\n<p>\nIf <code>format</code> starts with '<code>!</code>',\nthen the date is formatted in Coordinated Universal Time.\nAfter this optional character,\nif <code>format</code> is the string \"<code>*t</code>\",\nthen <code>date</code> returns a table with the following fields:\n<code>year</code> (four digits), <code>month</code> (1--12), <code>day</code> (1--31),\n<code>hour</code> (0--23), <code>min</code> (0--59), <code>sec</code> (0--61),\n<code>wday</code> (weekday, Sunday is&nbsp;1),\n<code>yday</code> (day of the year),\nand <code>isdst</code> (daylight saving flag, a boolean).\n\n\n<p>\nIf <code>format</code> is not \"<code>*t</code>\",\nthen <code>date</code> returns the date as a string,\nformatted according to the same rules as the C&nbsp;function <code>strftime</code>.\n\n\n<p>\nWhen called without arguments,\n<code>date</code> returns a reasonable date and time representation that depends on\nthe host system and on the current locale\n(that is, <code>os.date()</code> is equivalent to <code>os.date(\"%c\")</code>).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.difftime\"><code>os.difftime (t2, t1)</code></a></h3>\n\n\n<p>\nReturns the number of seconds from time <code>t1</code> to time <code>t2</code>.\nIn POSIX, Windows, and some other systems,\nthis value is exactly <code>t2</code><em>-</em><code>t1</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.execute\"><code>os.execute ([command])</code></a></h3>\n\n\n<p>\nThis function is equivalent to the C&nbsp;function <code>system</code>.\nIt passes <code>command</code> to be executed by an operating system shell.\nIt returns a status code, which is system-dependent.\nIf <code>command</code> is absent, then it returns nonzero if a shell is available\nand zero otherwise.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.exit\"><code>os.exit ([code])</code></a></h3>\n\n\n<p>\nCalls the C&nbsp;function <code>exit</code>,\nwith an optional <code>code</code>,\nto 
terminate the host program.\nThe default value for <code>code</code> is the success code.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.getenv\"><code>os.getenv (varname)</code></a></h3>\n\n\n<p>\nReturns the value of the process environment variable <code>varname</code>,\nor <b>nil</b> if the variable is not defined.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.remove\"><code>os.remove (filename)</code></a></h3>\n\n\n<p>\nDeletes the file or directory with the given name.\nDirectories must be empty to be removed.\nIf this function fails, it returns <b>nil</b>,\nplus a string describing the error.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.rename\"><code>os.rename (oldname, newname)</code></a></h3>\n\n\n<p>\nRenames file or directory named <code>oldname</code> to <code>newname</code>.\nIf this function fails, it returns <b>nil</b>,\nplus a string describing the error.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.setlocale\"><code>os.setlocale (locale [, category])</code></a></h3>\n\n\n<p>\nSets the current locale of the program.\n<code>locale</code> is a string specifying a locale;\n<code>category</code> is an optional string describing which category to change:\n<code>\"all\"</code>, <code>\"collate\"</code>, <code>\"ctype\"</code>,\n<code>\"monetary\"</code>, <code>\"numeric\"</code>, or <code>\"time\"</code>;\nthe default category is <code>\"all\"</code>.\nThe function returns the name of the new locale,\nor <b>nil</b> if the request cannot be honored.\n\n\n<p>\nIf <code>locale</code> is the empty string,\nthe current locale is set to an implementation-defined native locale.\nIf <code>locale</code> is the string \"<code>C</code>\",\nthe current locale is set to the standard C locale.\n\n\n<p>\nWhen called with <b>nil</b> as the first argument,\nthis function only returns the name of the current locale\nfor the given category.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.time\"><code>os.time ([table])</code></a></h3>\n\n\n<p>\nReturns the current time when called without arguments,\nor a 
time representing the date and time specified by the given table.\nThis table must have fields <code>year</code>, <code>month</code>, and <code>day</code>,\nand may have fields <code>hour</code>, <code>min</code>, <code>sec</code>, and <code>isdst</code>\n(for a description of these fields, see the <a href=\"#pdf-os.date\"><code>os.date</code></a> function).\n\n\n<p>\nThe returned value is a number, whose meaning depends on your system.\nIn POSIX, Windows, and some other systems, this number counts the number\nof seconds since some given start time (the \"epoch\").\nIn other systems, the meaning is not specified,\nand the number returned by <code>time</code> can be used only as an argument to\n<code>date</code> and <code>difftime</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-os.tmpname\"><code>os.tmpname ()</code></a></h3>\n\n\n<p>\nReturns a string with a file name that can\nbe used for a temporary file.\nThe file must be explicitly opened before its use\nand explicitly removed when no longer needed.\n\n\n<p>\nOn some systems (POSIX),\nthis function also creates a file with that name,\nto avoid security risks.\n(Someone else might create the file with wrong permissions\nin the time between getting the name and creating the file.)\nYou still have to open the file to use it\nand to remove it (even if you do not use it).\n\n\n<p>\nWhen possible,\nyou may prefer to use <a href=\"#pdf-io.tmpfile\"><code>io.tmpfile</code></a>,\nwhich automatically removes the file when the program ends.\n\n\n\n\n\n\n\n<h2>5.9 - <a name=\"5.9\">The Debug Library</a></h2>\n\n<p>\nThis library provides\nthe functionality of the debug interface to Lua programs.\nYou should exert care when using this library.\nThe functions provided here should be used exclusively for debugging\nand similar tasks, such as profiling.\nPlease resist the temptation to use them as a\nusual programming tool:\nthey can be very slow.\nMoreover, several of these functions\nviolate some assumptions about Lua 
code\n(e.g., that variables local to a function\ncannot be accessed from outside or\nthat userdata metatables cannot be changed by Lua code)\nand therefore can compromise otherwise secure code.\n\n\n<p>\nAll functions in this library are provided\ninside the <a name=\"pdf-debug\"><code>debug</code></a> table.\nAll functions that operate over a thread\nhave an optional first argument which is the\nthread to operate over.\nThe default is always the current thread.\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.debug\"><code>debug.debug ()</code></a></h3>\n\n\n<p>\nEnters an interactive mode with the user,\nrunning each string that the user enters.\nUsing simple commands and other debug facilities,\nthe user can inspect global and local variables,\nchange their values, evaluate expressions, and so on.\nA line containing only the word <code>cont</code> finishes this function,\nso that the caller continues its execution.\n\n\n<p>\nNote that commands for <code>debug.debug</code> are not lexically nested\nwithin any function, and so have no direct access to local variables.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.getfenv\"><code>debug.getfenv (o)</code></a></h3>\nReturns the environment of object <code>o</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.gethook\"><code>debug.gethook ([thread])</code></a></h3>\n\n\n<p>\nReturns the current hook settings of the thread, as three values:\nthe current hook function, the current hook mask,\nand the current hook count\n(as set by the <a href=\"#pdf-debug.sethook\"><code>debug.sethook</code></a> function).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.getinfo\"><code>debug.getinfo ([thread,] function [, what])</code></a></h3>\n\n\n<p>\nReturns a table with information about a function.\nYou can give the function directly,\nor you can give a number as the value of <code>function</code>,\nwhich means the function running at level <code>function</code> of the call stack\nof the given thread:\nlevel&nbsp;0 is the current function 
(<code>getinfo</code> itself);\nlevel&nbsp;1 is the function that called <code>getinfo</code>;\nand so on.\nIf <code>function</code> is a number larger than the number of active functions,\nthen <code>getinfo</code> returns <b>nil</b>.\n\n\n<p>\nThe returned table can contain all the fields returned by <a href=\"#lua_getinfo\"><code>lua_getinfo</code></a>,\nwith the string <code>what</code> describing which fields to fill in.\nThe default for <code>what</code> is to get all information available,\nexcept the table of valid lines.\nIf present,\nthe option '<code>f</code>'\nadds a field named <code>func</code> with the function itself.\nIf present,\nthe option '<code>L</code>'\nadds a field named <code>activelines</code> with the table of\nvalid lines.\n\n\n<p>\nFor instance, the expression <code>debug.getinfo(1,\"n\").name</code> returns\na table with a name for the current function,\nif a reasonable name can be found,\nand the expression <code>debug.getinfo(print)</code>\nreturns a table with all available information\nabout the <a href=\"#pdf-print\"><code>print</code></a> function.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.getlocal\"><code>debug.getlocal ([thread,] level, local)</code></a></h3>\n\n\n<p>\nThis function returns the name and the value of the local variable\nwith index <code>local</code> of the function at level <code>level</code> of the stack.\n(The first parameter or local variable has index&nbsp;1, and so on,\nuntil the last active local variable.)\nThe function returns <b>nil</b> if there is no local\nvariable with the given index,\nand raises an error when called with a <code>level</code> out of range.\n(You can call <a href=\"#pdf-debug.getinfo\"><code>debug.getinfo</code></a> to check whether the level is valid.)\n\n\n<p>\nVariable names starting with '<code>(</code>' (open parentheses)\nrepresent internal variables\n(loop control variables, temporaries, and C&nbsp;function locals).\n\n\n\n\n<p>\n<hr><h3><a 
name=\"pdf-debug.getmetatable\"><code>debug.getmetatable (object)</code></a></h3>\n\n\n<p>\nReturns the metatable of the given <code>object</code>\nor <b>nil</b> if it does not have a metatable.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.getregistry\"><code>debug.getregistry ()</code></a></h3>\n\n\n<p>\nReturns the registry table (see <a href=\"#3.5\">&sect;3.5</a>).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.getupvalue\"><code>debug.getupvalue (func, up)</code></a></h3>\n\n\n<p>\nThis function returns the name and the value of the upvalue\nwith index <code>up</code> of the function <code>func</code>.\nThe function returns <b>nil</b> if there is no upvalue with the given index.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.setfenv\"><code>debug.setfenv (object, table)</code></a></h3>\n\n\n<p>\nSets the environment of the given <code>object</code> to the given <code>table</code>.\nReturns <code>object</code>.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.sethook\"><code>debug.sethook ([thread,] hook, mask [, count])</code></a></h3>\n\n\n<p>\nSets the given function as a hook.\nThe string <code>mask</code> and the number <code>count</code> describe\nwhen the hook will be called.\nThe string mask may have the following characters,\nwith the given meaning:\n\n<ul>\n<li><b><code>\"c\"</code>:</b> the hook is called every time Lua calls a function;</li>\n<li><b><code>\"r\"</code>:</b> the hook is called every time Lua returns from a function;</li>\n<li><b><code>\"l\"</code>:</b> the hook is called every time Lua enters a new line of code.</li>\n</ul><p>\nWith a <code>count</code> different from zero,\nthe hook is called after every <code>count</code> instructions.\n\n\n<p>\nWhen called without arguments,\n<a href=\"#pdf-debug.sethook\"><code>debug.sethook</code></a> turns off the hook.\n\n\n<p>\nWhen the hook is called, its first parameter is a string\ndescribing the event that has triggered its call:\n<code>\"call\"</code>, <code>\"return\"</code> (or <code>\"tail 
return\"</code>,\nwhen simulating a return from a tail call),\n<code>\"line\"</code>, and <code>\"count\"</code>.\nFor line events,\nthe hook also gets the new line number as its second parameter.\nInside a hook,\nyou can call <code>getinfo</code> with level&nbsp;2 to get more information about\nthe running function\n(level&nbsp;0 is the <code>getinfo</code> function,\nand level&nbsp;1 is the hook function),\nunless the event is <code>\"tail return\"</code>.\nIn this case, Lua is only simulating the return,\nand a call to <code>getinfo</code> will return invalid data.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.setlocal\"><code>debug.setlocal ([thread,] level, local, value)</code></a></h3>\n\n\n<p>\nThis function assigns the value <code>value</code> to the local variable\nwith index <code>local</code> of the function at level <code>level</code> of the stack.\nThe function returns <b>nil</b> if there is no local\nvariable with the given index,\nand raises an error when called with a <code>level</code> out of range.\n(You can call <code>getinfo</code> to check whether the level is valid.)\nOtherwise, it returns the name of the local variable.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.setmetatable\"><code>debug.setmetatable (object, table)</code></a></h3>\n\n\n<p>\nSets the metatable for the given <code>object</code> to the given <code>table</code>\n(which can be <b>nil</b>).\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.setupvalue\"><code>debug.setupvalue (func, up, value)</code></a></h3>\n\n\n<p>\nThis function assigns the value <code>value</code> to the upvalue\nwith index <code>up</code> of the function <code>func</code>.\nThe function returns <b>nil</b> if there is no upvalue\nwith the given index.\nOtherwise, it returns the name of the upvalue.\n\n\n\n\n<p>\n<hr><h3><a name=\"pdf-debug.traceback\"><code>debug.traceback ([thread,] [message [, level]])</code></a></h3>\n\n\n<p>\nReturns a string with a traceback of the call stack.\nAn optional <code>message</code> string 
is appended\nat the beginning of the traceback.\nAn optional <code>level</code> number tells at which level\nto start the traceback\n(default is 1, the function calling <code>traceback</code>).\n\n\n\n\n\n\n\n<h1>6 - <a name=\"6\">Lua Stand-alone</a></h1>\n\n<p>\nAlthough Lua has been designed as an extension language,\nto be embedded in a host C&nbsp;program,\nit is also frequently used as a stand-alone language.\nAn interpreter for Lua as a stand-alone language,\ncalled simply <code>lua</code>,\nis provided with the standard distribution.\nThe stand-alone interpreter includes\nall standard libraries, including the debug library.\nIts usage is:\n\n<pre>\n     lua [options] [script [args]]\n</pre><p>\nThe options are:\n\n<ul>\n<li><b><code>-e <em>stat</em></code>:</b> executes string <em>stat</em>;</li>\n<li><b><code>-l <em>mod</em></code>:</b> \"requires\" <em>mod</em>;</li>\n<li><b><code>-i</code>:</b> enters interactive mode after running <em>script</em>;</li>\n<li><b><code>-v</code>:</b> prints version information;</li>\n<li><b><code>--</code>:</b> stops handling options;</li>\n<li><b><code>-</code>:</b> executes <code>stdin</code> as a file and stops handling options.</li>\n</ul><p>\nAfter handling its options, <code>lua</code> runs the given <em>script</em>,\npassing to it the given <em>args</em> as string arguments.\nWhen called without arguments,\n<code>lua</code> behaves as <code>lua -v -i</code>\nwhen the standard input (<code>stdin</code>) is a terminal,\nand as <code>lua -</code> otherwise.\n\n\n<p>\nBefore running any argument,\nthe interpreter checks for an environment variable <a name=\"pdf-LUA_INIT\"><code>LUA_INIT</code></a>.\nIf its format is <code>@<em>filename</em></code>,\nthen <code>lua</code> executes the file.\nOtherwise, <code>lua</code> executes the string itself.\n\n\n<p>\nAll options are handled in order, except <code>-i</code>.\nFor instance, an invocation like\n\n<pre>\n     $ lua -e'a=1' -e 'print(a)' script.lua\n</pre><p>\nwill first 
set <code>a</code> to 1, then print the value of <code>a</code> (which is '<code>1</code>'),\nand finally run the file <code>script.lua</code> with no arguments.\n(Here <code>$</code> is the shell prompt. Your prompt may be different.)\n\n\n<p>\nBefore starting to run the script,\n<code>lua</code> collects all arguments in the command line\nin a global table called <code>arg</code>.\nThe script name is stored at index 0,\nthe first argument after the script name goes to index 1,\nand so on.\nAny arguments before the script name\n(that is, the interpreter name plus the options)\ngo to negative indices.\nFor instance, in the call\n\n<pre>\n     $ lua -la b.lua t1 t2\n</pre><p>\nthe interpreter first runs the file <code>a.lua</code>,\nthen creates a table\n\n<pre>\n     arg = { [-2] = \"lua\", [-1] = \"-la\",\n             [0] = \"b.lua\",\n             [1] = \"t1\", [2] = \"t2\" }\n</pre><p>\nand finally runs the file <code>b.lua</code>.\nThe script is called with <code>arg[1]</code>, <code>arg[2]</code>, &middot;&middot;&middot;\nas arguments;\nit can also access these arguments with the vararg expression '<code>...</code>'.\n\n\n<p>\nIn interactive mode,\nif you write an incomplete statement,\nthe interpreter waits for its completion\nby issuing a different prompt.\n\n\n<p>\nIf the global variable <a name=\"pdf-_PROMPT\"><code>_PROMPT</code></a> contains a string,\nthen its value is used as the prompt.\nSimilarly, if the global variable <a name=\"pdf-_PROMPT2\"><code>_PROMPT2</code></a> contains a string,\nits value is used as the secondary prompt\n(issued during incomplete statements).\nTherefore, both prompts can be changed directly on the command line\nor in any Lua programs by assigning to <code>_PROMPT</code>.\nSee the next example:\n\n<pre>\n     $ lua -e\"_PROMPT='myprompt&gt; '\" -i\n</pre><p>\n(The outer pair of quotes is for the shell,\nthe inner pair is for Lua.)\nNote the use of <code>-i</code> to enter interactive mode;\notherwise,\nthe program would 
just end silently\nright after the assignment to <code>_PROMPT</code>.\n\n\n<p>\nTo allow the use of Lua as a\nscript interpreter in Unix systems,\nthe stand-alone interpreter skips\nthe first line of a chunk if it starts with <code>#</code>.\nTherefore, Lua scripts can be made into executable programs\nby using <code>chmod +x</code> and the&nbsp;<code>#!</code> form,\nas in\n\n<pre>\n     #!/usr/local/bin/lua\n</pre><p>\n(Of course,\nthe location of the Lua interpreter may be different in your machine.\nIf <code>lua</code> is in your <code>PATH</code>,\nthen \n\n<pre>\n     #!/usr/bin/env lua\n</pre><p>\nis a more portable solution.) \n\n\n\n<h1>7 - <a name=\"7\">Incompatibilities with the Previous Version</a></h1>\n\n<p>\nHere we list the incompatibilities that you may find when moving a program\nfrom Lua&nbsp;5.0 to Lua&nbsp;5.1.\nYou can avoid most of the incompatibilities compiling Lua with\nappropriate options (see file <code>luaconf.h</code>).\nHowever,\nall these compatibility options will be removed in the next version of Lua.\n\n\n\n<h2>7.1 - <a name=\"7.1\">Changes in the Language</a></h2>\n<ul>\n\n<li>\nThe vararg system changed from the pseudo-argument <code>arg</code> with a\ntable with the extra arguments to the vararg expression.\n(See compile-time option <code>LUA_COMPAT_VARARG</code> in <code>luaconf.h</code>.)\n</li>\n\n<li>\nThere was a subtle change in the scope of the implicit\nvariables of the <b>for</b> statement and for the <b>repeat</b> statement.\n</li>\n\n<li>\nThe long string/long comment syntax (<code>[[<em>string</em>]]</code>)\ndoes not allow nesting.\nYou can use the new syntax (<code>[=[<em>string</em>]=]</code>) in these cases.\n(See compile-time option <code>LUA_COMPAT_LSTR</code> in <code>luaconf.h</code>.)\n</li>\n\n</ul>\n\n\n\n\n<h2>7.2 - <a name=\"7.2\">Changes in the Libraries</a></h2>\n<ul>\n\n<li>\nFunction <code>string.gfind</code> was renamed <a href=\"#pdf-string.gmatch\"><code>string.gmatch</code></a>.\n(See 
compile-time option <code>LUA_COMPAT_GFIND</code> in <code>luaconf.h</code>.)\n</li>\n\n<li>\nWhen <a href=\"#pdf-string.gsub\"><code>string.gsub</code></a> is called with a function as its\nthird argument,\nwhenever this function returns <b>nil</b> or <b>false</b> the\nreplacement string is the whole match,\ninstead of the empty string.\n</li>\n\n<li>\nFunction <code>table.setn</code> was deprecated.\nFunction <code>table.getn</code> corresponds\nto the new length operator (<code>#</code>);\nuse the operator instead of the function.\n(See compile-time option <code>LUA_COMPAT_GETN</code> in <code>luaconf.h</code>.)\n</li>\n\n<li>\nFunction <code>loadlib</code> was renamed <a href=\"#pdf-package.loadlib\"><code>package.loadlib</code></a>.\n(See compile-time option <code>LUA_COMPAT_LOADLIB</code> in <code>luaconf.h</code>.)\n</li>\n\n<li>\nFunction <code>math.mod</code> was renamed <a href=\"#pdf-math.fmod\"><code>math.fmod</code></a>.\n(See compile-time option <code>LUA_COMPAT_MOD</code> in <code>luaconf.h</code>.)\n</li>\n\n<li>\nFunctions <code>table.foreach</code> and <code>table.foreachi</code> are deprecated.\nYou can use a for loop with <code>pairs</code> or <code>ipairs</code> instead.\n</li>\n\n<li>\nThere were substantial changes in function <a href=\"#pdf-require\"><code>require</code></a> due to\nthe new module system.\nHowever, the new behavior is mostly compatible with the old,\nbut <code>require</code> gets the path from <a href=\"#pdf-package.path\"><code>package.path</code></a> instead\nof from <code>LUA_PATH</code>.\n</li>\n\n<li>\nFunction <a href=\"#pdf-collectgarbage\"><code>collectgarbage</code></a> has different arguments.\nFunction <code>gcinfo</code> is deprecated;\nuse <code>collectgarbage(\"count\")</code> instead.\n</li>\n\n</ul>\n\n\n\n\n<h2>7.3 - <a name=\"7.3\">Changes in the API</a></h2>\n<ul>\n\n<li>\nThe <code>luaopen_*</code> functions (to open libraries)\ncannot be called directly,\nlike a regular C function.\nThey must be called 
through Lua,\nlike a Lua function.\n</li>\n\n<li>\nFunction <code>lua_open</code> was replaced by <a href=\"#lua_newstate\"><code>lua_newstate</code></a> to\nallow the user to set a memory-allocation function.\nYou can use <a href=\"#luaL_newstate\"><code>luaL_newstate</code></a> from the standard library to\ncreate a state with a standard allocation function\n(based on <code>realloc</code>).\n</li>\n\n<li>\nFunctions <code>luaL_getn</code> and <code>luaL_setn</code>\n(from the auxiliary library) are deprecated.\nUse <a href=\"#lua_objlen\"><code>lua_objlen</code></a> instead of <code>luaL_getn</code>\nand nothing instead of <code>luaL_setn</code>.\n</li>\n\n<li>\nFunction <code>luaL_openlib</code> was replaced by <a href=\"#luaL_register\"><code>luaL_register</code></a>.\n</li>\n\n<li>\nFunction <code>luaL_checkudata</code> now throws an error when the given value\nis not a userdata of the expected type.\n(In Lua&nbsp;5.0 it returned <code>NULL</code>.)\n</li>\n\n</ul>\n\n\n\n\n<h1>8 - <a name=\"8\">The Complete Syntax of Lua</a></h1>\n\n<p>\nHere is the complete syntax of Lua in extended BNF.\n(It does not describe operator precedences.)\n\n\n\n\n<pre>\n\n\tchunk ::= {stat [`<b>;</b>&acute;]} [laststat [`<b>;</b>&acute;]]\n\n\tblock ::= chunk\n\n\tstat ::=  varlist `<b>=</b>&acute; explist | \n\t\t functioncall | \n\t\t <b>do</b> block <b>end</b> | \n\t\t <b>while</b> exp <b>do</b> block <b>end</b> | \n\t\t <b>repeat</b> block <b>until</b> exp | \n\t\t <b>if</b> exp <b>then</b> block {<b>elseif</b> exp <b>then</b> block} [<b>else</b> block] <b>end</b> | \n\t\t <b>for</b> Name `<b>=</b>&acute; exp `<b>,</b>&acute; exp [`<b>,</b>&acute; exp] <b>do</b> block <b>end</b> | \n\t\t <b>for</b> namelist <b>in</b> explist <b>do</b> block <b>end</b> | \n\t\t <b>function</b> funcname funcbody | \n\t\t <b>local</b> <b>function</b> Name funcbody | \n\t\t <b>local</b> namelist [`<b>=</b>&acute; explist] \n\n\tlaststat ::= <b>return</b> [explist] | <b>break</b>\n\n\tfuncname ::= 
Name {`<b>.</b>&acute; Name} [`<b>:</b>&acute; Name]\n\n\tvarlist ::= var {`<b>,</b>&acute; var}\n\n\tvar ::=  Name | prefixexp `<b>[</b>&acute; exp `<b>]</b>&acute; | prefixexp `<b>.</b>&acute; Name \n\n\tnamelist ::= Name {`<b>,</b>&acute; Name}\n\n\texplist ::= {exp `<b>,</b>&acute;} exp\n\n\texp ::=  <b>nil</b> | <b>false</b> | <b>true</b> | Number | String | `<b>...</b>&acute; | function | \n\t\t prefixexp | tableconstructor | exp binop exp | unop exp \n\n\tprefixexp ::= var | functioncall | `<b>(</b>&acute; exp `<b>)</b>&acute;\n\n\tfunctioncall ::=  prefixexp args | prefixexp `<b>:</b>&acute; Name args \n\n\targs ::=  `<b>(</b>&acute; [explist] `<b>)</b>&acute; | tableconstructor | String \n\n\tfunction ::= <b>function</b> funcbody\n\n\tfuncbody ::= `<b>(</b>&acute; [parlist] `<b>)</b>&acute; block <b>end</b>\n\n\tparlist ::= namelist [`<b>,</b>&acute; `<b>...</b>&acute;] | `<b>...</b>&acute;\n\n\ttableconstructor ::= `<b>{</b>&acute; [fieldlist] `<b>}</b>&acute;\n\n\tfieldlist ::= field {fieldsep field} [fieldsep]\n\n\tfield ::= `<b>[</b>&acute; exp `<b>]</b>&acute; `<b>=</b>&acute; exp | Name `<b>=</b>&acute; exp | exp\n\n\tfieldsep ::= `<b>,</b>&acute; | `<b>;</b>&acute;\n\n\tbinop ::= `<b>+</b>&acute; | `<b>-</b>&acute; | `<b>*</b>&acute; | `<b>/</b>&acute; | `<b>^</b>&acute; | `<b>%</b>&acute; | `<b>..</b>&acute; | \n\t\t `<b>&lt;</b>&acute; | `<b>&lt;=</b>&acute; | `<b>&gt;</b>&acute; | `<b>&gt;=</b>&acute; | `<b>==</b>&acute; | `<b>~=</b>&acute; | \n\t\t <b>and</b> | <b>or</b>\n\n\tunop ::= `<b>-</b>&acute; | <b>not</b> | `<b>#</b>&acute;\n\n</pre>\n\n<p>\n\n\n\n\n\n\n\n<HR>\n<SMALL CLASS=\"footer\">\nLast update:\nMon Feb 13 18:54:19 BRST 2012\n</SMALL>\n<!--\nLast change: revised for Lua 5.1.5\n-->\n\n</body></html>\n\n"
  },
  {
    "path": "deps/lua/doc/readme.html",
    "content": "<HTML>\n<HEAD>\n<TITLE>Lua documentation</TITLE>\n<LINK REL=\"stylesheet\" TYPE=\"text/css\" HREF=\"lua.css\">\n</HEAD>\n\n<BODY>\n\n<HR>\n<H1>\n<A HREF=\"http://www.lua.org/\"><IMG SRC=\"logo.gif\" ALT=\"Lua\" BORDER=0></A>\nDocumentation\n</H1>\n\nThis is the documentation included in the source distribution of Lua 5.1.5.\n\n<UL>\n<LI><A HREF=\"contents.html\">Reference manual</A>\n<LI><A HREF=\"lua.html\">lua man page</A>\n<LI><A HREF=\"luac.html\">luac man page</A>\n<LI><A HREF=\"../README\">lua/README</A>\n<LI><A HREF=\"../etc/README\">lua/etc/README</A>\n<LI><A HREF=\"../test/README\">lua/test/README</A>\n</UL>\n\nLua's\n<A HREF=\"http://www.lua.org/\">official web site</A>\ncontains updated documentation,\nespecially the\n<A HREF=\"http://www.lua.org/manual/5.1/\">reference manual</A>.\n<P>\n\n<HR>\n<SMALL>\nLast update:\nFri Feb  3 09:44:42 BRST 2012\n</SMALL>\n\n</BODY>\n</HTML>\n"
  },
  {
    "path": "deps/lua/etc/Makefile",
    "content": "# makefile for Lua etc\n\nTOP= ..\nLIB= $(TOP)/src\nINC= $(TOP)/src\nBIN= $(TOP)/src\nSRC= $(TOP)/src\nTST= $(TOP)/test\n\nCC= gcc\nCFLAGS= -O2 -Wall -I$(INC) $(MYCFLAGS)\nMYCFLAGS= \nMYLDFLAGS= -Wl,-E\nMYLIBS= -lm\n#MYLIBS= -lm -Wl,-E -ldl -lreadline -lhistory -lncurses\nRM= rm -f\n\ndefault:\n\t@echo 'Please choose a target: min noparser one strict clean'\n\nmin:\tmin.c\n\t$(CC) $(CFLAGS) $@.c -L$(LIB) -llua $(MYLIBS)\n\techo 'print\"Hello there!\"' | ./a.out\n\nnoparser: noparser.o\n\t$(CC) noparser.o $(SRC)/lua.o -L$(LIB) -llua $(MYLIBS)\n\t$(BIN)/luac $(TST)/hello.lua\n\t-./a.out luac.out\n\t-./a.out -e'a=1'\n\none:\n\t$(CC) $(CFLAGS) all.c $(MYLIBS)\n\t./a.out $(TST)/hello.lua\n\nstrict:\n\t-$(BIN)/lua -e 'print(a);b=2'\n\t-$(BIN)/lua -lstrict -e 'print(a)'\n\t-$(BIN)/lua -e 'function f() b=2 end f()'\n\t-$(BIN)/lua -lstrict -e 'function f() b=2 end f()'\n\nclean:\n\t$(RM) a.out core core.* *.o luac.out\n\n.PHONY:\tdefault min noparser one strict clean\n"
  },
  {
    "path": "deps/lua/etc/README",
    "content": "This directory contains some useful files and code.\nUnlike the code in ../src, everything here is in the public domain.\n\nIf any of the makes fail, you're probably not using the same libraries\nused to build Lua. Set MYLIBS in Makefile accordingly.\n\nall.c\n\tFull Lua interpreter in a single file.\n\tDo \"make one\" for a demo.\n\nlua.hpp\n\tLua header files for C++ using 'extern \"C\"'.\n\nlua.ico\n\tA Lua icon for Windows (and web sites: save as favicon.ico).\n\tDrawn by hand by Markus Gritsch <gritsch@iue.tuwien.ac.at>.\n\nlua.pc\n\tpkg-config data for Lua\n\nluavs.bat\n\tScript to build Lua under \"Visual Studio .NET Command Prompt\".\n\tRun it from the toplevel as etc\\luavs.bat.\n\nmin.c\n\tA minimal Lua interpreter.\n\tGood for learning and for starting your own.\n\tDo \"make min\" for a demo.\n\nnoparser.c\n\tLinking with noparser.o avoids loading the parsing modules in lualib.a.\n\tDo \"make noparser\" for a demo.\n\nstrict.lua\n\tTraps uses of undeclared global variables.\n\tDo \"make strict\" for a demo.\n\n"
  },
  {
    "path": "deps/lua/etc/all.c",
    "content": "/*\n* all.c -- Lua core, libraries and interpreter in a single file\n*/\n\n#define luaall_c\n\n#include \"lapi.c\"\n#include \"lcode.c\"\n#include \"ldebug.c\"\n#include \"ldo.c\"\n#include \"ldump.c\"\n#include \"lfunc.c\"\n#include \"lgc.c\"\n#include \"llex.c\"\n#include \"lmem.c\"\n#include \"lobject.c\"\n#include \"lopcodes.c\"\n#include \"lparser.c\"\n#include \"lstate.c\"\n#include \"lstring.c\"\n#include \"ltable.c\"\n#include \"ltm.c\"\n#include \"lundump.c\"\n#include \"lvm.c\"\n#include \"lzio.c\"\n\n#include \"lauxlib.c\"\n#include \"lbaselib.c\"\n#include \"ldblib.c\"\n#include \"liolib.c\"\n#include \"linit.c\"\n#include \"lmathlib.c\"\n#include \"loadlib.c\"\n#include \"loslib.c\"\n#include \"lstrlib.c\"\n#include \"ltablib.c\"\n\n#include \"lua.c\"\n"
  },
  {
    "path": "deps/lua/etc/lua.hpp",
    "content": "// lua.hpp\n// Lua header files for C++\n// <<extern \"C\">> not supplied automatically because Lua also compiles as C++\n\nextern \"C\" {\n#include \"lua.h\"\n#include \"lualib.h\"\n#include \"lauxlib.h\"\n}\n"
  },
  {
    "path": "deps/lua/etc/lua.pc",
    "content": "# lua.pc -- pkg-config data for Lua\n\n# vars from install Makefile\n\n# grep '^V=' ../Makefile\nV= 5.1\n# grep '^R=' ../Makefile\nR= 5.1.5\n\n# grep '^INSTALL_.*=' ../Makefile | sed 's/INSTALL_TOP/prefix/'\nprefix= /usr/local\nINSTALL_BIN= ${prefix}/bin\nINSTALL_INC= ${prefix}/include\nINSTALL_LIB= ${prefix}/lib\nINSTALL_MAN= ${prefix}/man/man1\nINSTALL_LMOD= ${prefix}/share/lua/${V}\nINSTALL_CMOD= ${prefix}/lib/lua/${V}\n\n# canonical vars\nexec_prefix=${prefix}\nlibdir=${exec_prefix}/lib\nincludedir=${prefix}/include\n\nName: Lua\nDescription: An Extensible Extension Language\nVersion: ${R}\nRequires: \nLibs: -L${libdir} -llua -lm\nCflags: -I${includedir}\n\n# (end of lua.pc)\n"
  },
  {
    "path": "deps/lua/etc/luavs.bat",
    "content": "@rem Script to build Lua under \"Visual Studio .NET Command Prompt\".\r\n@rem Do not run from this directory; run it from the toplevel: etc\\luavs.bat .\r\n@rem It creates lua51.dll, lua51.lib, lua.exe, and luac.exe in src.\r\n@rem (contributed by David Manura and Mike Pall)\r\n\r\n@setlocal\r\n@set MYCOMPILE=cl /nologo /MD /O2 /W3 /c /D_CRT_SECURE_NO_DEPRECATE\r\n@set MYLINK=link /nologo\r\n@set MYMT=mt /nologo\r\n\r\ncd src\r\n%MYCOMPILE% /DLUA_BUILD_AS_DLL l*.c\r\ndel lua.obj luac.obj\r\n%MYLINK% /DLL /out:lua51.dll l*.obj\r\nif exist lua51.dll.manifest^\r\n  %MYMT% -manifest lua51.dll.manifest -outputresource:lua51.dll;2\r\n%MYCOMPILE% /DLUA_BUILD_AS_DLL lua.c\r\n%MYLINK% /out:lua.exe lua.obj lua51.lib\r\nif exist lua.exe.manifest^\r\n  %MYMT% -manifest lua.exe.manifest -outputresource:lua.exe\r\n%MYCOMPILE% l*.c print.c\r\ndel lua.obj linit.obj lbaselib.obj ldblib.obj liolib.obj lmathlib.obj^\r\n    loslib.obj ltablib.obj lstrlib.obj loadlib.obj\r\n%MYLINK% /out:luac.exe *.obj\r\nif exist luac.exe.manifest^\r\n  %MYMT% -manifest luac.exe.manifest -outputresource:luac.exe\r\ndel *.obj *.manifest\r\ncd ..\r\n"
  },
  {
    "path": "deps/lua/etc/min.c",
    "content": "/*\n* min.c -- a minimal Lua interpreter\n* loads stdin only with minimal error handling.\n* no interaction, and no standard library, only a \"print\" function.\n*/\n\n#include <stdio.h>\n\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\nstatic int print(lua_State *L)\n{\n int n=lua_gettop(L);\n int i;\n for (i=1; i<=n; i++)\n {\n  if (i>1) printf(\"\\t\");\n  if (lua_isstring(L,i))\n   printf(\"%s\",lua_tostring(L,i));\n  else if (lua_isnil(L,i))\n   printf(\"%s\",\"nil\");\n  else if (lua_isboolean(L,i))\n   printf(\"%s\",lua_toboolean(L,i) ? \"true\" : \"false\");\n  else\n   printf(\"%s:%p\",luaL_typename(L,i),lua_topointer(L,i));\n }\n printf(\"\\n\");\n return 0;\n}\n\nint main(void)\n{\n lua_State *L=lua_open();\n lua_register(L,\"print\",print);\n if (luaL_dofile(L,NULL)!=0) fprintf(stderr,\"%s\\n\",lua_tostring(L,-1));\n lua_close(L);\n return 0;\n}\n"
  },
  {
    "path": "deps/lua/etc/noparser.c",
    "content": "/*\n* The code below can be used to make a Lua core that does not contain the\n* parsing modules (lcode, llex, lparser), which represent 35% of the total core.\n* You'll only be able to load binary files and strings, precompiled with luac.\n* (Of course, you'll have to build luac with the original parsing modules!)\n*\n* To use this module, simply compile it (\"make noparser\" does that) and list\n* its object file before the Lua libraries. The linker should then not load\n* the parsing modules. To try it, do \"make luab\".\n*\n* If you also want to avoid the dump module (ldump.o), define NODUMP.\n* #define NODUMP\n*/\n\n#define LUA_CORE\n\n#include \"llex.h\"\n#include \"lparser.h\"\n#include \"lzio.h\"\n\nLUAI_FUNC void luaX_init (lua_State *L) {\n  UNUSED(L);\n}\n\nLUAI_FUNC Proto *luaY_parser (lua_State *L, ZIO *z, Mbuffer *buff, const char *name) {\n  UNUSED(z);\n  UNUSED(buff);\n  UNUSED(name);\n  lua_pushliteral(L,\"parser not loaded\");\n  lua_error(L);\n  return NULL;\n}\n\n#ifdef NODUMP\n#include \"lundump.h\"\n\nLUAI_FUNC int luaU_dump (lua_State* L, const Proto* f, lua_Writer w, void* data, int strip) {\n  UNUSED(f);\n  UNUSED(w);\n  UNUSED(data);\n  UNUSED(strip);\n#if 1\n  UNUSED(L);\n  return 0;\n#else\n  lua_pushliteral(L,\"dumper not loaded\");\n  lua_error(L);\n#endif\n}\n#endif\n"
  },
  {
    "path": "deps/lua/etc/strict.lua",
    "content": "--\n-- strict.lua\n-- checks uses of undeclared global variables\n-- All global variables must be 'declared' through a regular assignment\n-- (even assigning nil will do) in a main chunk before being used\n-- anywhere or assigned to inside a function.\n--\n\nlocal getinfo, error, rawset, rawget = debug.getinfo, error, rawset, rawget\n\nlocal mt = getmetatable(_G)\nif mt == nil then\n  mt = {}\n  setmetatable(_G, mt)\nend\n\nmt.__declared = {}\n\nlocal function what ()\n  local d = getinfo(3, \"S\")\n  return d and d.what or \"C\"\nend\n\nmt.__newindex = function (t, n, v)\n  if not mt.__declared[n] then\n    local w = what()\n    if w ~= \"main\" and w ~= \"C\" then\n      error(\"assign to undeclared variable '\"..n..\"'\", 2)\n    end\n    mt.__declared[n] = true\n  end\n  rawset(t, n, v)\nend\n  \nmt.__index = function (t, n)\n  if not mt.__declared[n] and what() ~= \"C\" then\n    error(\"variable '\"..n..\"' is not declared\", 2)\n  end\n  return rawget(t, n)\nend\n\n"
  },
  {
    "path": "deps/lua/src/Makefile",
    "content": "# makefile for building Lua\n# see ../INSTALL for installation instructions\n# see ../Makefile and luaconf.h for further customization\n\n# == CHANGE THE SETTINGS BELOW TO SUIT YOUR ENVIRONMENT =======================\n\n# Your platform. See PLATS for possible values.\nPLAT= none\n\nCC?= gcc\nCFLAGS= -O2 -Wall $(MYCFLAGS)\nAR= ar rcu\nRANLIB= ranlib\nRM= rm -f\nLIBS= -lm $(MYLIBS)\n\nMYCFLAGS=\nMYLDFLAGS=\nMYLIBS=\n\n# == END OF USER SETTINGS. NO NEED TO CHANGE ANYTHING BELOW THIS LINE =========\n\nPLATS= aix ansi bsd freebsd generic linux macosx mingw posix solaris\n\nLUA_A=\tliblua.a\nCORE_O=\tlapi.o lcode.o ldebug.o ldo.o ldump.o lfunc.o lgc.o llex.o lmem.o \\\n\tlobject.o lopcodes.o lparser.o lstate.o lstring.o ltable.o ltm.o  \\\n\tlundump.o lvm.o lzio.o strbuf.o fpconv.o\nLIB_O=\tlauxlib.o lbaselib.o ldblib.o liolib.o lmathlib.o loslib.o ltablib.o \\\n\tlstrlib.o loadlib.o linit.o lua_cjson.o lua_struct.o lua_cmsgpack.o \\\n\tlua_bit.o\n\nLUA_T=\tlua\nLUA_O=\tlua.o\n\nLUAC_T=\tluac\nLUAC_O=\tluac.o print.o\n\nALL_O= $(CORE_O) $(LIB_O) $(LUA_O) $(LUAC_O)\nALL_T= $(LUA_A) $(LUA_T) $(LUAC_T)\nALL_A= $(LUA_A)\n\ndefault: $(PLAT)\n\nall:\t$(ALL_T)\n\no:\t$(ALL_O)\n\na:\t$(ALL_A)\n\n$(LUA_A): $(CORE_O) $(LIB_O)\n\t$(AR) $@ $(CORE_O) $(LIB_O)\t# DLL needs all object files\n\t$(RANLIB) $@\n\n$(LUA_T): $(LUA_O) $(LUA_A)\n\t$(CC) -o $@ $(MYLDFLAGS) $(LUA_O) $(LUA_A) $(LIBS)\n\n$(LUAC_T): $(LUAC_O) $(LUA_A)\n\t$(CC) -o $@ $(MYLDFLAGS) $(LUAC_O) $(LUA_A) $(LIBS)\n\nclean:\n\t$(RM) $(ALL_T) $(ALL_O)\n\ndepend:\n\t@$(CC) $(CFLAGS) -MM l*.c print.c\n\necho:\n\t@echo \"PLAT = $(PLAT)\"\n\t@echo \"CC = $(CC)\"\n\t@echo \"CFLAGS = $(CFLAGS)\"\n\t@echo \"AR = $(AR)\"\n\t@echo \"RANLIB = $(RANLIB)\"\n\t@echo \"RM = $(RM)\"\n\t@echo \"MYCFLAGS = $(MYCFLAGS)\"\n\t@echo \"MYLDFLAGS = $(MYLDFLAGS)\"\n\t@echo \"MYLIBS = $(MYLIBS)\"\n\n# convenience targets for popular platforms\n\nnone:\n\t@echo \"Please choose a platform:\"\n\t@echo \"   $(PLATS)\"\n\naix:\n\t$(MAKE) 
all CC=\"xlc\" CFLAGS=\"-O2 -DLUA_USE_POSIX -DLUA_USE_DLOPEN\" MYLIBS=\"-ldl\" MYLDFLAGS=\"-brtl -bexpall\"\n\nansi:\n\t$(MAKE) all MYCFLAGS=-DLUA_ANSI\n\nbsd:\n\t$(MAKE) all MYCFLAGS=\"-DLUA_USE_POSIX -DLUA_USE_DLOPEN\" MYLIBS=\"-Wl,-E\"\n\nfreebsd:\n\t$(MAKE) all MYCFLAGS=\"-DLUA_USE_LINUX\" MYLIBS=\"-Wl,-E -lreadline\"\n\ngeneric:\n\t$(MAKE) all MYCFLAGS=\n\nlinux:\n\t$(MAKE) all MYCFLAGS=-DLUA_USE_LINUX MYLIBS=\"-Wl,-E -ldl -lreadline -lhistory -lncurses\"\n\nmacosx:\n\t$(MAKE) all MYCFLAGS=-DLUA_USE_LINUX MYLIBS=\"-lreadline\"\n# use this on Mac OS X 10.3-\n#\t$(MAKE) all MYCFLAGS=-DLUA_USE_MACOSX\n\nmingw:\n\t$(MAKE) \"LUA_A=lua51.dll\" \"LUA_T=lua.exe\" \\\n\t\"AR=$(CC) -shared -o\" \"RANLIB=strip --strip-unneeded\" \\\n\t\"MYCFLAGS=-DLUA_BUILD_AS_DLL\" \"MYLIBS=\" \"MYLDFLAGS=-s\" lua.exe\n\t$(MAKE) \"LUAC_T=luac.exe\" luac.exe\n\nposix:\n\t$(MAKE) all MYCFLAGS=-DLUA_USE_POSIX\n\nsolaris:\n\t$(MAKE) all MYCFLAGS=\"-DLUA_USE_POSIX -DLUA_USE_DLOPEN\" MYLIBS=\"-ldl\"\n\n# list targets that do not create files (but not all makes understand .PHONY)\n.PHONY: all $(PLATS) default o a clean depend echo none\n\n# DO NOT DELETE\n\nlapi.o: lapi.c lua.h luaconf.h lapi.h lobject.h llimits.h ldebug.h \\\n  lstate.h ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h lstring.h ltable.h \\\n  lundump.h lvm.h\nlauxlib.o: lauxlib.c lua.h luaconf.h lauxlib.h\nlbaselib.o: lbaselib.c lua.h luaconf.h lauxlib.h lualib.h\nlcode.o: lcode.c lua.h luaconf.h lcode.h llex.h lobject.h llimits.h \\\n  lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h ldo.h lgc.h \\\n  ltable.h\nldblib.o: ldblib.c lua.h luaconf.h lauxlib.h lualib.h\nldebug.o: ldebug.c lua.h luaconf.h lapi.h lobject.h llimits.h lcode.h \\\n  llex.h lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h ldo.h \\\n  lfunc.h lstring.h lgc.h ltable.h lvm.h\nldo.o: ldo.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h ltm.h \\\n  lzio.h lmem.h ldo.h lfunc.h lgc.h lopcodes.h lparser.h lstring.h \\\n  ltable.h lundump.h 
lvm.h\nldump.o: ldump.c lua.h luaconf.h lobject.h llimits.h lstate.h ltm.h \\\n  lzio.h lmem.h lundump.h\nlfunc.o: lfunc.c lua.h luaconf.h lfunc.h lobject.h llimits.h lgc.h lmem.h \\\n  lstate.h ltm.h lzio.h\nlgc.o: lgc.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h ltm.h \\\n  lzio.h lmem.h ldo.h lfunc.h lgc.h lstring.h ltable.h\nlinit.o: linit.c lua.h luaconf.h lualib.h lauxlib.h\nliolib.o: liolib.c lua.h luaconf.h lauxlib.h lualib.h\nllex.o: llex.c lua.h luaconf.h ldo.h lobject.h llimits.h lstate.h ltm.h \\\n  lzio.h lmem.h llex.h lparser.h lstring.h lgc.h ltable.h\nlmathlib.o: lmathlib.c lua.h luaconf.h lauxlib.h lualib.h\nlmem.o: lmem.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h \\\n  ltm.h lzio.h lmem.h ldo.h\nloadlib.o: loadlib.c lua.h luaconf.h lauxlib.h lualib.h\nlobject.o: lobject.c lua.h luaconf.h ldo.h lobject.h llimits.h lstate.h \\\n  ltm.h lzio.h lmem.h lstring.h lgc.h lvm.h\nlopcodes.o: lopcodes.c lopcodes.h llimits.h lua.h luaconf.h\nloslib.o: loslib.c lua.h luaconf.h lauxlib.h lualib.h\nlparser.o: lparser.c lua.h luaconf.h lcode.h llex.h lobject.h llimits.h \\\n  lzio.h lmem.h lopcodes.h lparser.h ldebug.h lstate.h ltm.h ldo.h \\\n  lfunc.h lstring.h lgc.h ltable.h\nlstate.o: lstate.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h \\\n  ltm.h lzio.h lmem.h ldo.h lfunc.h lgc.h llex.h lstring.h ltable.h\nlstring.o: lstring.c lua.h luaconf.h lmem.h llimits.h lobject.h lstate.h \\\n  ltm.h lzio.h lstring.h lgc.h\nlstrlib.o: lstrlib.c lua.h luaconf.h lauxlib.h lualib.h\nltable.o: ltable.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h \\\n  ltm.h lzio.h lmem.h ldo.h lgc.h ltable.h\nltablib.o: ltablib.c lua.h luaconf.h lauxlib.h lualib.h\nltm.o: ltm.c lua.h luaconf.h lobject.h llimits.h lstate.h ltm.h lzio.h \\\n  lmem.h lstring.h lgc.h ltable.h\nlua.o: lua.c lua.h luaconf.h lauxlib.h lualib.h\nluac.o: luac.c lua.h luaconf.h lauxlib.h ldo.h lobject.h llimits.h \\\n  lstate.h ltm.h lzio.h lmem.h lfunc.h lopcodes.h 
lstring.h lgc.h \\\n  lundump.h\nlundump.o: lundump.c lua.h luaconf.h ldebug.h lstate.h lobject.h \\\n  llimits.h ltm.h lzio.h lmem.h ldo.h lfunc.h lstring.h lgc.h lundump.h\nlvm.o: lvm.c lua.h luaconf.h ldebug.h lstate.h lobject.h llimits.h ltm.h \\\n  lzio.h lmem.h ldo.h lfunc.h lgc.h lopcodes.h lstring.h ltable.h lvm.h\nlzio.o: lzio.c lua.h luaconf.h llimits.h lmem.h lstate.h lobject.h ltm.h \\\n  lzio.h\nprint.o: print.c ldebug.h lstate.h lua.h luaconf.h lobject.h llimits.h \\\n  ltm.h lzio.h lmem.h lopcodes.h lundump.h\n\n# (end of Makefile)\n"
  },
  {
    "path": "deps/lua/src/fpconv.c",
    "content": "/* fpconv - Floating point conversion routines\n *\n * Copyright (c) 2011-2012  Mark Pulford <mark@kyne.com.au>\n *\n * Permission is hereby granted, free of charge, to any person obtaining\n * a copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to\n * permit persons to whom the Software is furnished to do so, subject to\n * the following conditions:\n *\n * The above copyright notice and this permission notice shall be\n * included in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n */\n\n/* JSON uses a '.' decimal separator. strtod() / sprintf() under C libraries\n * with locale support will break when the decimal separator is a comma.\n *\n * fpconv_* will around these issues with a translation buffer if required.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n#include <string.h>\n\n#include \"fpconv.h\"\n\n/* Lua CJSON assumes the locale is the same for all threads within a\n * process and doesn't change after initialisation.\n *\n * This avoids the need for per thread storage or expensive checks\n * for call. 
*/\nstatic char locale_decimal_point = '.';\n\n/* In theory multibyte decimal_points are possible, but\n * Lua CJSON only supports UTF-8 and known locales only have\n * single byte decimal points ([.,]).\n *\n * localeconv() may not be thread safe (=>crash), and nl_langinfo() is\n * not supported on some platforms. Use sprintf() instead - if the\n * locale does change, at least Lua CJSON won't crash. */\nstatic void fpconv_update_locale()\n{\n    char buf[8];\n\n    snprintf(buf, sizeof(buf), \"%g\", 0.5);\n\n    /* Failing this test might imply the platform has a buggy dtoa\n     * implementation or wide characters */\n    if (buf[0] != '0' || buf[2] != '5' || buf[3] != 0) {\n        fprintf(stderr, \"Error: wide characters found or printf() bug.\");\n        abort();\n    }\n\n    locale_decimal_point = buf[1];\n}\n\n/* Check for a valid number character: [-+0-9a-yA-Y.]\n * Eg: -0.6e+5, infinity, 0xF0.F0pF0\n *\n * Used to find the probable end of a number. It doesn't matter if\n * invalid characters are counted - strtod() will find the valid\n * number if it exists.  The risk is that slightly more memory might\n * be allocated before a parse error occurs. */\nstatic inline int valid_number_character(char ch)\n{\n    char lower_ch;\n\n    if ('0' <= ch && ch <= '9')\n        return 1;\n    if (ch == '-' || ch == '+' || ch == '.')\n        return 1;\n\n    /* Hex digits, exponent (e), base (p), \"infinity\",.. */\n    lower_ch = ch | 0x20;\n    if ('a' <= lower_ch && lower_ch <= 'y')\n        return 1;\n\n    return 0;\n}\n\n/* Calculate the size of the buffer required for a strtod locale\n * conversion. */\nstatic int strtod_buffer_size(const char *s)\n{\n    const char *p = s;\n\n    while (valid_number_character(*p))\n        p++;\n\n    return p - s;\n}\n\n/* Similar to strtod(), but must be passed the current locale's decimal point\n * character. 
Guaranteed to be called at the start of any valid number in a string */\ndouble fpconv_strtod(const char *nptr, char **endptr)\n{\n    char localbuf[FPCONV_G_FMT_BUFSIZE];\n    char *buf, *endbuf, *dp;\n    int buflen;\n    double value;\n\n    /* System strtod() is fine when decimal point is '.' */\n    if (locale_decimal_point == '.')\n        return strtod(nptr, endptr);\n\n    buflen = strtod_buffer_size(nptr);\n    if (!buflen) {\n        /* No valid characters found, standard strtod() return */\n        *endptr = (char *)nptr;\n        return 0;\n    }\n\n    /* Duplicate number into buffer */\n    if (buflen >= FPCONV_G_FMT_BUFSIZE) {\n        /* Handle unusually large numbers */\n        buf = malloc(buflen + 1);\n        if (!buf) {\n            fprintf(stderr, \"Out of memory\");\n            abort();\n        }\n    } else {\n        /* This is the common case.. */\n        buf = localbuf;\n    }\n    memcpy(buf, nptr, buflen);\n    buf[buflen] = 0;\n\n    /* Update decimal point character if found */\n    dp = strchr(buf, '.');\n    if (dp)\n        *dp = locale_decimal_point;\n\n    value = strtod(buf, &endbuf);\n    *endptr = (char *)&nptr[endbuf - buf];\n    if (buflen >= FPCONV_G_FMT_BUFSIZE)\n        free(buf);\n\n    return value;\n}\n\n/* \"fmt\" must point to a buffer of at least 6 characters */\nstatic void set_number_format(char *fmt, int precision)\n{\n    int d1, d2, i;\n\n    assert(1 <= precision && precision <= 14);\n\n    /* Create printf format (%.14g) from precision */\n    d1 = precision / 10;\n    d2 = precision % 10;\n    fmt[0] = '%';\n    fmt[1] = '.';\n    i = 2;\n    if (d1) {\n        fmt[i++] = '0' + d1;\n    }\n    fmt[i++] = '0' + d2;\n    fmt[i++] = 'g';\n    fmt[i] = 0;\n}\n\n/* Assumes there is always at least 32 characters available in the target buffer */\nint fpconv_g_fmt(char *str, double num, int precision)\n{\n    char buf[FPCONV_G_FMT_BUFSIZE];\n    char fmt[6];\n    int len;\n    char *b;\n\n    
set_number_format(fmt, precision);\n\n    /* Pass through when decimal point character is dot. */\n    if (locale_decimal_point == '.')\n        return snprintf(str, FPCONV_G_FMT_BUFSIZE, fmt, num);\n\n    /* snprintf() to a buffer then translate for other decimal point characters */\n    len = snprintf(buf, FPCONV_G_FMT_BUFSIZE, fmt, num);\n\n    /* Copy into target location. Translate decimal point if required */\n    b = buf;\n    do {\n        *str++ = (*b == locale_decimal_point ? '.' : *b);\n    } while(*b++);\n\n    return len;\n}\n\nvoid fpconv_init()\n{\n    fpconv_update_locale();\n}\n\n/* vi:ai et sw=4 ts=4:\n */\n"
  },
  {
    "path": "deps/lua/src/fpconv.h",
    "content": "/* Lua CJSON floating point conversion routines */\n\n/* Buffer required to store the largest string representation of a double.\n *\n * Longest double printed with %.14g is 21 characters long:\n * -1.7976931348623e+308 */\n# define FPCONV_G_FMT_BUFSIZE   32\n\n#ifdef USE_INTERNAL_FPCONV\nstatic inline void fpconv_init()\n{\n    /* Do nothing - not required */\n}\n#else\nextern void fpconv_init();\n#endif\n\nextern int fpconv_g_fmt(char*, double, int);\nextern double fpconv_strtod(const char*, char**);\n\n/* vi:ai et sw=4 ts=4:\n */\n"
  },
  {
    "path": "deps/lua/src/lapi.c",
    "content": "/*\n** $Id: lapi.c,v 2.55.1.5 2008/07/04 18:41:18 roberto Exp $\n** Lua API\n** See Copyright Notice in lua.h\n*/\n\n\n#include <assert.h>\n#include <math.h>\n#include <stdarg.h>\n#include <string.h>\n\n#define lapi_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lapi.h\"\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n#include \"lundump.h\"\n#include \"lvm.h\"\n\n\n\nconst char lua_ident[] =\n  \"$Lua: \" LUA_RELEASE \" \" LUA_COPYRIGHT \" $\\n\"\n  \"$Authors: \" LUA_AUTHORS \" $\\n\"\n  \"$URL: www.lua.org $\\n\";\n\n\n\n#define api_checknelems(L, n)\tapi_check(L, (n) <= (L->top - L->base))\n\n#define api_checkvalidindex(L, i)\tapi_check(L, (i) != luaO_nilobject)\n\n#define api_incr_top(L)   {api_check(L, L->top < L->ci->top); L->top++;}\n\n\n\nstatic TValue *index2adr (lua_State *L, int idx) {\n  if (idx > 0) {\n    TValue *o = L->base + (idx - 1);\n    api_check(L, idx <= L->ci->top - L->base);\n    if (o >= L->top) return cast(TValue *, luaO_nilobject);\n    else return o;\n  }\n  else if (idx > LUA_REGISTRYINDEX) {\n    api_check(L, idx != 0 && -idx <= L->top - L->base);\n    return L->top + idx;\n  }\n  else switch (idx) {  /* pseudo-indices */\n    case LUA_REGISTRYINDEX: return registry(L);\n    case LUA_ENVIRONINDEX: {\n      Closure *func = curr_func(L);\n      sethvalue(L, &L->env, func->c.env);\n      return &L->env;\n    }\n    case LUA_GLOBALSINDEX: return gt(L);\n    default: {\n      Closure *func = curr_func(L);\n      idx = LUA_GLOBALSINDEX - idx;\n      return (idx <= func->c.nupvalues)\n                ? &func->c.upvalue[idx-1]\n                : cast(TValue *, luaO_nilobject);\n    }\n  }\n}\n\n\nstatic Table *getcurrenv (lua_State *L) {\n  if (L->ci == L->base_ci)  /* no enclosing function? 
*/\n    return hvalue(gt(L));  /* use global table as environment */\n  else {\n    Closure *func = curr_func(L);\n    return func->c.env;\n  }\n}\n\n\nvoid luaA_pushobject (lua_State *L, const TValue *o) {\n  setobj2s(L, L->top, o);\n  api_incr_top(L);\n}\n\n\nLUA_API int lua_checkstack (lua_State *L, int size) {\n  int res = 1;\n  lua_lock(L);\n  if (size > LUAI_MAXCSTACK || (L->top - L->base + size) > LUAI_MAXCSTACK)\n    res = 0;  /* stack overflow */\n  else if (size > 0) {\n    luaD_checkstack(L, size);\n    if (L->ci->top < L->top + size)\n      L->ci->top = L->top + size;\n  }\n  lua_unlock(L);\n  return res;\n}\n\n\nLUA_API void lua_xmove (lua_State *from, lua_State *to, int n) {\n  int i;\n  if (from == to) return;\n  lua_lock(to);\n  api_checknelems(from, n);\n  api_check(from, G(from) == G(to));\n  api_check(from, to->ci->top - to->top >= n);\n  from->top -= n;\n  for (i = 0; i < n; i++) {\n    setobj2s(to, to->top++, from->top + i);\n  }\n  lua_unlock(to);\n}\n\n\nLUA_API void lua_setlevel (lua_State *from, lua_State *to) {\n  to->nCcalls = from->nCcalls;\n}\n\n\nLUA_API lua_CFunction lua_atpanic (lua_State *L, lua_CFunction panicf) {\n  lua_CFunction old;\n  lua_lock(L);\n  old = G(L)->panic;\n  G(L)->panic = panicf;\n  lua_unlock(L);\n  return old;\n}\n\n\nLUA_API lua_State *lua_newthread (lua_State *L) {\n  lua_State *L1;\n  lua_lock(L);\n  luaC_checkGC(L);\n  L1 = luaE_newthread(L);\n  setthvalue(L, L->top, L1);\n  api_incr_top(L);\n  lua_unlock(L);\n  luai_userstatethread(L, L1);\n  return L1;\n}\n\n\n\n/*\n** basic stack manipulation\n*/\n\n\nLUA_API int lua_gettop (lua_State *L) {\n  return cast_int(L->top - L->base);\n}\n\n\nLUA_API void lua_settop (lua_State *L, int idx) {\n  lua_lock(L);\n  if (idx >= 0) {\n    api_check(L, idx <= L->stack_last - L->base);\n    while (L->top < L->base + idx)\n      setnilvalue(L->top++);\n    L->top = L->base + idx;\n  }\n  else {\n    api_check(L, -(idx+1) <= (L->top - L->base));\n    L->top += idx+1;  /* 
`subtract' index (index is negative) */\n  }\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_remove (lua_State *L, int idx) {\n  StkId p;\n  lua_lock(L);\n  p = index2adr(L, idx);\n  api_checkvalidindex(L, p);\n  while (++p < L->top) setobjs2s(L, p-1, p);\n  L->top--;\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_insert (lua_State *L, int idx) {\n  StkId p;\n  StkId q;\n  lua_lock(L);\n  p = index2adr(L, idx);\n  api_checkvalidindex(L, p);\n  for (q = L->top; q>p; q--) setobjs2s(L, q, q-1);\n  setobjs2s(L, p, L->top);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_replace (lua_State *L, int idx) {\n  StkId o;\n  lua_lock(L);\n  /* explicit test for incompatible code */\n  if (idx == LUA_ENVIRONINDEX && L->ci == L->base_ci)\n    luaG_runerror(L, \"no calling environment\");\n  api_checknelems(L, 1);\n  o = index2adr(L, idx);\n  api_checkvalidindex(L, o);\n  if (idx == LUA_ENVIRONINDEX) {\n    Closure *func = curr_func(L);\n    api_check(L, ttistable(L->top - 1)); \n    func->c.env = hvalue(L->top - 1);\n    luaC_barrier(L, func, L->top - 1);\n  }\n  else {\n    setobj(L, o, L->top - 1);\n    if (idx < LUA_GLOBALSINDEX)  /* function upvalue? */\n      luaC_barrier(L, curr_func(L), L->top - 1);\n  }\n  L->top--;\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushvalue (lua_State *L, int idx) {\n  lua_lock(L);\n  setobj2s(L, L->top, index2adr(L, idx));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\n\n/*\n** access functions (stack -> C)\n*/\n\n\nLUA_API int lua_type (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  return (o == luaO_nilobject) ? LUA_TNONE : ttype(o);\n}\n\n\nLUA_API const char *lua_typename (lua_State *L, int t) {\n  UNUSED(L);\n  return (t == LUA_TNONE) ? 
\"no value\" : luaT_typenames[t];\n}\n\n\nLUA_API int lua_iscfunction (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  return iscfunction(o);\n}\n\n\nLUA_API int lua_isnumber (lua_State *L, int idx) {\n  TValue n;\n  const TValue *o = index2adr(L, idx);\n  return tonumber(o, &n);\n}\n\n\nLUA_API int lua_isstring (lua_State *L, int idx) {\n  int t = lua_type(L, idx);\n  return (t == LUA_TSTRING || t == LUA_TNUMBER);\n}\n\n\nLUA_API int lua_isuserdata (lua_State *L, int idx) {\n  const TValue *o = index2adr(L, idx);\n  return (ttisuserdata(o) || ttislightuserdata(o));\n}\n\n\nLUA_API int lua_rawequal (lua_State *L, int index1, int index2) {\n  StkId o1 = index2adr(L, index1);\n  StkId o2 = index2adr(L, index2);\n  return (o1 == luaO_nilobject || o2 == luaO_nilobject) ? 0\n         : luaO_rawequalObj(o1, o2);\n}\n\n\nLUA_API int lua_equal (lua_State *L, int index1, int index2) {\n  StkId o1, o2;\n  int i;\n  lua_lock(L);  /* may call tag method */\n  o1 = index2adr(L, index1);\n  o2 = index2adr(L, index2);\n  i = (o1 == luaO_nilobject || o2 == luaO_nilobject) ? 0 : equalobj(L, o1, o2);\n  lua_unlock(L);\n  return i;\n}\n\n\nLUA_API int lua_lessthan (lua_State *L, int index1, int index2) {\n  StkId o1, o2;\n  int i;\n  lua_lock(L);  /* may call tag method */\n  o1 = index2adr(L, index1);\n  o2 = index2adr(L, index2);\n  i = (o1 == luaO_nilobject || o2 == luaO_nilobject) ? 
0\n       : luaV_lessthan(L, o1, o2);\n  lua_unlock(L);\n  return i;\n}\n\n\n\nLUA_API lua_Number lua_tonumber (lua_State *L, int idx) {\n  TValue n;\n  const TValue *o = index2adr(L, idx);\n  if (tonumber(o, &n))\n    return nvalue(o);\n  else\n    return 0;\n}\n\n\nLUA_API lua_Integer lua_tointeger (lua_State *L, int idx) {\n  TValue n;\n  const TValue *o = index2adr(L, idx);\n  if (tonumber(o, &n)) {\n    lua_Integer res;\n    lua_Number num = nvalue(o);\n    lua_number2integer(res, num);\n    return res;\n  }\n  else\n    return 0;\n}\n\n\nLUA_API int lua_toboolean (lua_State *L, int idx) {\n  const TValue *o = index2adr(L, idx);\n  return !l_isfalse(o);\n}\n\n\nLUA_API const char *lua_tolstring (lua_State *L, int idx, size_t *len) {\n  StkId o = index2adr(L, idx);\n  if (!ttisstring(o)) {\n    lua_lock(L);  /* `luaV_tostring' may create a new string */\n    if (!luaV_tostring(L, o)) {  /* conversion failed? */\n      if (len != NULL) *len = 0;\n      lua_unlock(L);\n      return NULL;\n    }\n    luaC_checkGC(L);\n    o = index2adr(L, idx);  /* previous call may reallocate the stack */\n    lua_unlock(L);\n  }\n  if (len != NULL) *len = tsvalue(o)->len;\n  return svalue(o);\n}\n\n\nLUA_API size_t lua_objlen (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  switch (ttype(o)) {\n    case LUA_TSTRING: return tsvalue(o)->len;\n    case LUA_TUSERDATA: return uvalue(o)->len;\n    case LUA_TTABLE: return luaH_getn(hvalue(o));\n    case LUA_TNUMBER: {\n      size_t l;\n      lua_lock(L);  /* `luaV_tostring' may create a new string */\n      l = (luaV_tostring(L, o) ? tsvalue(o)->len : 0);\n      lua_unlock(L);\n      return l;\n    }\n    default: return 0;\n  }\n}\n\n\nLUA_API lua_CFunction lua_tocfunction (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  return (!iscfunction(o)) ? 
NULL : clvalue(o)->c.f;\n}\n\n\nLUA_API void *lua_touserdata (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  switch (ttype(o)) {\n    case LUA_TUSERDATA: return (rawuvalue(o) + 1);\n    case LUA_TLIGHTUSERDATA: return pvalue(o);\n    default: return NULL;\n  }\n}\n\n\nLUA_API lua_State *lua_tothread (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  return (!ttisthread(o)) ? NULL : thvalue(o);\n}\n\n\nLUA_API const void *lua_topointer (lua_State *L, int idx) {\n  StkId o = index2adr(L, idx);\n  switch (ttype(o)) {\n    case LUA_TTABLE: return hvalue(o);\n    case LUA_TFUNCTION: return clvalue(o);\n    case LUA_TTHREAD: return thvalue(o);\n    case LUA_TUSERDATA:\n    case LUA_TLIGHTUSERDATA:\n      return lua_touserdata(L, idx);\n    default: return NULL;\n  }\n}\n\n\n\n/*\n** push functions (C -> stack)\n*/\n\n\nLUA_API void lua_pushnil (lua_State *L) {\n  lua_lock(L);\n  setnilvalue(L->top);\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushnumber (lua_State *L, lua_Number n) {\n  lua_lock(L);\n  setnvalue(L->top, n);\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushinteger (lua_State *L, lua_Integer n) {\n  lua_lock(L);\n  setnvalue(L->top, cast_num(n));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushlstring (lua_State *L, const char *s, size_t len) {\n  lua_lock(L);\n  luaC_checkGC(L);\n  setsvalue2s(L, L->top, luaS_newlstr(L, s, len));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushstring (lua_State *L, const char *s) {\n  if (s == NULL)\n    lua_pushnil(L);\n  else\n    lua_pushlstring(L, s, strlen(s));\n}\n\n\nLUA_API const char *lua_pushvfstring (lua_State *L, const char *fmt,\n                                      va_list argp) {\n  const char *ret;\n  lua_lock(L);\n  luaC_checkGC(L);\n  ret = luaO_pushvfstring(L, fmt, argp);\n  lua_unlock(L);\n  return ret;\n}\n\n\nLUA_API const char *lua_pushfstring (lua_State *L, const char *fmt, ...) 
{\n  const char *ret;\n  va_list argp;\n  lua_lock(L);\n  luaC_checkGC(L);\n  va_start(argp, fmt);\n  ret = luaO_pushvfstring(L, fmt, argp);\n  va_end(argp);\n  lua_unlock(L);\n  return ret;\n}\n\n\nLUA_API void lua_pushcclosure (lua_State *L, lua_CFunction fn, int n) {\n  Closure *cl;\n  lua_lock(L);\n  luaC_checkGC(L);\n  api_checknelems(L, n);\n  cl = luaF_newCclosure(L, n, getcurrenv(L));\n  cl->c.f = fn;\n  L->top -= n;\n  while (n--)\n    setobj2n(L, &cl->c.upvalue[n], L->top+n);\n  setclvalue(L, L->top, cl);\n  lua_assert(iswhite(obj2gco(cl)));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushboolean (lua_State *L, int b) {\n  lua_lock(L);\n  setbvalue(L->top, (b != 0));  /* ensure that true is 1 */\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_pushlightuserdata (lua_State *L, void *p) {\n  lua_lock(L);\n  setpvalue(L->top, p);\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API int lua_pushthread (lua_State *L) {\n  lua_lock(L);\n  setthvalue(L, L->top, L);\n  api_incr_top(L);\n  lua_unlock(L);\n  return (G(L)->mainthread == L);\n}\n\n\n\n/*\n** get functions (Lua -> stack)\n*/\n\n\nLUA_API void lua_gettable (lua_State *L, int idx) {\n  StkId t;\n  lua_lock(L);\n  t = index2adr(L, idx);\n  api_checkvalidindex(L, t);\n  luaV_gettable(L, t, L->top - 1, L->top - 1);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_getfield (lua_State *L, int idx, const char *k) {\n  StkId t;\n  TValue key;\n  lua_lock(L);\n  t = index2adr(L, idx);\n  api_checkvalidindex(L, t);\n  setsvalue(L, &key, luaS_new(L, k));\n  luaV_gettable(L, t, &key, L->top);\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_rawget (lua_State *L, int idx) {\n  StkId t;\n  lua_lock(L);\n  t = index2adr(L, idx);\n  api_check(L, ttistable(t));\n  setobj2s(L, L->top - 1, luaH_get(hvalue(t), L->top - 1));\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_rawgeti (lua_State *L, int idx, int n) {\n  StkId o;\n  lua_lock(L);\n  o = index2adr(L, idx);\n  api_check(L, 
ttistable(o));\n  setobj2s(L, L->top, luaH_getnum(hvalue(o), n));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_createtable (lua_State *L, int narray, int nrec) {\n  lua_lock(L);\n  luaC_checkGC(L);\n  sethvalue(L, L->top, luaH_new(L, narray, nrec));\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\nLUA_API int lua_getmetatable (lua_State *L, int objindex) {\n  const TValue *obj;\n  Table *mt = NULL;\n  int res;\n  lua_lock(L);\n  obj = index2adr(L, objindex);\n  switch (ttype(obj)) {\n    case LUA_TTABLE:\n      mt = hvalue(obj)->metatable;\n      break;\n    case LUA_TUSERDATA:\n      mt = uvalue(obj)->metatable;\n      break;\n    default:\n      mt = G(L)->mt[ttype(obj)];\n      break;\n  }\n  if (mt == NULL)\n    res = 0;\n  else {\n    sethvalue(L, L->top, mt);\n    api_incr_top(L);\n    res = 1;\n  }\n  lua_unlock(L);\n  return res;\n}\n\n\nLUA_API void lua_getfenv (lua_State *L, int idx) {\n  StkId o;\n  lua_lock(L);\n  o = index2adr(L, idx);\n  api_checkvalidindex(L, o);\n  switch (ttype(o)) {\n    case LUA_TFUNCTION:\n      sethvalue(L, L->top, clvalue(o)->c.env);\n      break;\n    case LUA_TUSERDATA:\n      sethvalue(L, L->top, uvalue(o)->env);\n      break;\n    case LUA_TTHREAD:\n      setobj2s(L, L->top,  gt(thvalue(o)));\n      break;\n    default:\n      setnilvalue(L->top);\n      break;\n  }\n  api_incr_top(L);\n  lua_unlock(L);\n}\n\n\n/*\n** set functions (stack -> Lua)\n*/\n\n\nLUA_API void lua_settable (lua_State *L, int idx) {\n  StkId t;\n  lua_lock(L);\n  api_checknelems(L, 2);\n  t = index2adr(L, idx);\n  api_checkvalidindex(L, t);\n  luaV_settable(L, t, L->top - 2, L->top - 1);\n  L->top -= 2;  /* pop index and value */\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_setfield (lua_State *L, int idx, const char *k) {\n  StkId t;\n  TValue key;\n  lua_lock(L);\n  api_checknelems(L, 1);\n  t = index2adr(L, idx);\n  api_checkvalidindex(L, t);\n  setsvalue(L, &key, luaS_new(L, k));\n  luaV_settable(L, t, &key, L->top - 1);\n  
L->top--;  /* pop value */\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_rawset (lua_State *L, int idx) {\n  StkId t;\n  lua_lock(L);\n  api_checknelems(L, 2);\n  t = index2adr(L, idx);\n  api_check(L, ttistable(t));\n  if (hvalue(t)->readonly)\n    luaG_runerror(L, \"Attempt to modify a readonly table\");\n  setobj2t(L, luaH_set(L, hvalue(t), L->top-2), L->top-1);\n  luaC_barriert(L, hvalue(t), L->top-1);\n  L->top -= 2;\n  lua_unlock(L);\n}\n\n\nLUA_API void lua_rawseti (lua_State *L, int idx, int n) {\n  StkId o;\n  lua_lock(L);\n  api_checknelems(L, 1);\n  o = index2adr(L, idx);\n  api_check(L, ttistable(o));\n  if (hvalue(o)->readonly)\n    luaG_runerror(L, \"Attempt to modify a readonly table\");\n  setobj2t(L, luaH_setnum(L, hvalue(o), n), L->top-1);\n  luaC_barriert(L, hvalue(o), L->top-1);\n  L->top--;\n  lua_unlock(L);\n}\n\n\nLUA_API int lua_setmetatable (lua_State *L, int objindex) {\n  TValue *obj;\n  Table *mt;\n  lua_lock(L);\n  api_checknelems(L, 1);\n  obj = index2adr(L, objindex);\n  api_checkvalidindex(L, obj);\n  if (ttisnil(L->top - 1))\n    mt = NULL;\n  else {\n    api_check(L, ttistable(L->top - 1));\n    mt = hvalue(L->top - 1);\n  }\n  switch (ttype(obj)) {\n    case LUA_TTABLE: {\n      if (hvalue(obj)->readonly)\n        luaG_runerror(L, \"Attempt to modify a readonly table\");\n      hvalue(obj)->metatable = mt;\n      if (mt)\n        luaC_objbarriert(L, hvalue(obj), mt);\n      break;\n    }\n    case LUA_TUSERDATA: {\n      uvalue(obj)->metatable = mt;\n      if (mt)\n        luaC_objbarrier(L, rawuvalue(obj), mt);\n      break;\n    }\n    default: {\n      G(L)->mt[ttype(obj)] = mt;\n      break;\n    }\n  }\n  L->top--;\n  lua_unlock(L);\n  return 1;\n}\n\n\nLUA_API int lua_setfenv (lua_State *L, int idx) {\n  StkId o;\n  int res = 1;\n  lua_lock(L);\n  api_checknelems(L, 1);\n  o = index2adr(L, idx);\n  api_checkvalidindex(L, o);\n  api_check(L, ttistable(L->top - 1));\n  switch (ttype(o)) {\n    case LUA_TFUNCTION:\n      
clvalue(o)->c.env = hvalue(L->top - 1);\n      break;\n    case LUA_TUSERDATA:\n      uvalue(o)->env = hvalue(L->top - 1);\n      break;\n    case LUA_TTHREAD:\n      sethvalue(L, gt(thvalue(o)), hvalue(L->top - 1));\n      break;\n    default:\n      res = 0;\n      break;\n  }\n  if (res) luaC_objbarrier(L, gcvalue(o), hvalue(L->top - 1));\n  L->top--;\n  lua_unlock(L);\n  return res;\n}\n\n\n/*\n** `load' and `call' functions (run Lua code)\n*/\n\n\n#define adjustresults(L,nres) \\\n    { if (nres == LUA_MULTRET && L->top >= L->ci->top) L->ci->top = L->top; }\n\n\n#define checkresults(L,na,nr) \\\n     api_check(L, (nr) == LUA_MULTRET || (L->ci->top - L->top >= (nr) - (na)))\n\t\n\nLUA_API void lua_call (lua_State *L, int nargs, int nresults) {\n  StkId func;\n  lua_lock(L);\n  api_checknelems(L, nargs+1);\n  checkresults(L, nargs, nresults);\n  func = L->top - (nargs+1);\n  luaD_call(L, func, nresults);\n  adjustresults(L, nresults);\n  lua_unlock(L);\n}\n\n\n\n/*\n** Execute a protected call.\n*/\nstruct CallS {  /* data to `f_call' */\n  StkId func;\n  int nresults;\n};\n\n\nstatic void f_call (lua_State *L, void *ud) {\n  struct CallS *c = cast(struct CallS *, ud);\n  luaD_call(L, c->func, c->nresults);\n}\n\n\n\nLUA_API int lua_pcall (lua_State *L, int nargs, int nresults, int errfunc) {\n  struct CallS c;\n  int status;\n  ptrdiff_t func;\n  lua_lock(L);\n  api_checknelems(L, nargs+1);\n  checkresults(L, nargs, nresults);\n  if (errfunc == 0)\n    func = 0;\n  else {\n    StkId o = index2adr(L, errfunc);\n    api_checkvalidindex(L, o);\n    func = savestack(L, o);\n  }\n  c.func = L->top - (nargs+1);  /* function to be called */\n  c.nresults = nresults;\n  status = luaD_pcall(L, f_call, &c, savestack(L, c.func), func);\n  adjustresults(L, nresults);\n  lua_unlock(L);\n  return status;\n}\n\n\n/*\n** Execute a protected C call.\n*/\nstruct CCallS {  /* data to `f_Ccall' */\n  lua_CFunction func;\n  void *ud;\n};\n\n\nstatic void f_Ccall (lua_State *L, void 
*ud) {\n  struct CCallS *c = cast(struct CCallS *, ud);\n  Closure *cl;\n  cl = luaF_newCclosure(L, 0, getcurrenv(L));\n  cl->c.f = c->func;\n  setclvalue(L, L->top, cl);  /* push function */\n  api_incr_top(L);\n  setpvalue(L->top, c->ud);  /* push only argument */\n  api_incr_top(L);\n  luaD_call(L, L->top - 2, 0);\n}\n\n\nLUA_API int lua_cpcall (lua_State *L, lua_CFunction func, void *ud) {\n  struct CCallS c;\n  int status;\n  lua_lock(L);\n  c.func = func;\n  c.ud = ud;\n  status = luaD_pcall(L, f_Ccall, &c, savestack(L, L->top), 0);\n  lua_unlock(L);\n  return status;\n}\n\n\nLUA_API int lua_load (lua_State *L, lua_Reader reader, void *data,\n                      const char *chunkname) {\n  ZIO z;\n  int status;\n  lua_lock(L);\n  if (!chunkname) chunkname = \"?\";\n  luaZ_init(L, &z, reader, data);\n  status = luaD_protectedparser(L, &z, chunkname);\n  lua_unlock(L);\n  return status;\n}\n\n\nLUA_API int lua_dump (lua_State *L, lua_Writer writer, void *data) {\n  int status;\n  TValue *o;\n  lua_lock(L);\n  api_checknelems(L, 1);\n  o = L->top - 1;\n  if (isLfunction(o))\n    status = luaU_dump(L, clvalue(o)->l.p, writer, data, 0);\n  else\n    status = 1;\n  lua_unlock(L);\n  return status;\n}\n\n\nLUA_API int  lua_status (lua_State *L) {\n  return L->status;\n}\n\n\n/*\n** Garbage-collection function\n*/\n\nLUA_API int lua_gc (lua_State *L, int what, int data) {\n  int res = 0;\n  global_State *g;\n  lua_lock(L);\n  g = G(L);\n  switch (what) {\n    case LUA_GCSTOP: {\n      g->GCthreshold = MAX_LUMEM;\n      break;\n    }\n    case LUA_GCRESTART: {\n      g->GCthreshold = g->totalbytes;\n      break;\n    }\n    case LUA_GCCOLLECT: {\n      luaC_fullgc(L);\n      break;\n    }\n    case LUA_GCCOUNT: {\n      /* GC values are expressed in Kbytes: #bytes/2^10 */\n      res = cast_int(g->totalbytes >> 10);\n      break;\n    }\n    case LUA_GCCOUNTB: {\n      res = cast_int(g->totalbytes & 0x3ff);\n      break;\n    }\n    case LUA_GCSTEP: {\n      lu_mem a 
= (cast(lu_mem, data) << 10);\n      if (a <= g->totalbytes)\n        g->GCthreshold = g->totalbytes - a;\n      else\n        g->GCthreshold = 0;\n      while (g->GCthreshold <= g->totalbytes) {\n        luaC_step(L);\n        if (g->gcstate == GCSpause) {  /* end of cycle? */\n          res = 1;  /* signal it */\n          break;\n        }\n      }\n      break;\n    }\n    case LUA_GCSETPAUSE: {\n      res = g->gcpause;\n      g->gcpause = data;\n      break;\n    }\n    case LUA_GCSETSTEPMUL: {\n      res = g->gcstepmul;\n      g->gcstepmul = data;\n      break;\n    }\n    default: res = -1;  /* invalid option */\n  }\n  lua_unlock(L);\n  return res;\n}\n\n\n\n/*\n** miscellaneous functions\n*/\n\n\nLUA_API int lua_error (lua_State *L) {\n  lua_lock(L);\n  api_checknelems(L, 1);\n  luaG_errormsg(L);\n  lua_unlock(L);\n  return 0;  /* to avoid warnings */\n}\n\n\nLUA_API int lua_next (lua_State *L, int idx) {\n  StkId t;\n  int more;\n  lua_lock(L);\n  t = index2adr(L, idx);\n  api_check(L, ttistable(t));\n  more = luaH_next(L, hvalue(t), L->top - 1);\n  if (more) {\n    api_incr_top(L);\n  }\n  else  /* no more elements */\n    L->top -= 1;  /* remove key */\n  lua_unlock(L);\n  return more;\n}\n\n\nLUA_API void lua_concat (lua_State *L, int n) {\n  lua_lock(L);\n  api_checknelems(L, n);\n  if (n >= 2) {\n    luaC_checkGC(L);\n    luaV_concat(L, n, cast_int(L->top - L->base) - 1);\n    L->top -= (n-1);\n  }\n  else if (n == 0) {  /* push empty string */\n    setsvalue2s(L, L->top, luaS_newlstr(L, \"\", 0));\n    api_incr_top(L);\n  }\n  /* else n == 1; nothing to do */\n  lua_unlock(L);\n}\n\n\nLUA_API lua_Alloc lua_getallocf (lua_State *L, void **ud) {\n  lua_Alloc f;\n  lua_lock(L);\n  if (ud) *ud = G(L)->ud;\n  f = G(L)->frealloc;\n  lua_unlock(L);\n  return f;\n}\n\n\nLUA_API void lua_setallocf (lua_State *L, lua_Alloc f, void *ud) {\n  lua_lock(L);\n  G(L)->ud = ud;\n  G(L)->frealloc = f;\n  lua_unlock(L);\n}\n\n\nLUA_API void *lua_newuserdata (lua_State 
*L, size_t size) {\n  Udata *u;\n  lua_lock(L);\n  luaC_checkGC(L);\n  u = luaS_newudata(L, size, getcurrenv(L));\n  setuvalue(L, L->top, u);\n  api_incr_top(L);\n  lua_unlock(L);\n  return u + 1;\n}\n\n\n\n\nstatic const char *aux_upvalue (StkId fi, int n, TValue **val) {\n  Closure *f;\n  if (!ttisfunction(fi)) return NULL;\n  f = clvalue(fi);\n  if (f->c.isC) {\n    if (!(1 <= n && n <= f->c.nupvalues)) return NULL;\n    *val = &f->c.upvalue[n-1];\n    return \"\";\n  }\n  else {\n    Proto *p = f->l.p;\n    if (!(1 <= n && n <= p->sizeupvalues)) return NULL;\n    *val = f->l.upvals[n-1]->v;\n    return getstr(p->upvalues[n-1]);\n  }\n}\n\n\nLUA_API const char *lua_getupvalue (lua_State *L, int funcindex, int n) {\n  const char *name;\n  TValue *val;\n  lua_lock(L);\n  name = aux_upvalue(index2adr(L, funcindex), n, &val);\n  if (name) {\n    setobj2s(L, L->top, val);\n    api_incr_top(L);\n  }\n  lua_unlock(L);\n  return name;\n}\n\n\nLUA_API const char *lua_setupvalue (lua_State *L, int funcindex, int n) {\n  const char *name;\n  TValue *val;\n  StkId fi;\n  lua_lock(L);\n  fi = index2adr(L, funcindex);\n  api_checknelems(L, 1);\n  name = aux_upvalue(fi, n, &val);\n  if (name) {\n    L->top--;\n    setobj(L, val, L->top);\n    luaC_barrier(L, clvalue(fi), L->top);\n  }\n  lua_unlock(L);\n  return name;\n}\n\nLUA_API void lua_enablereadonlytable (lua_State *L, int objindex, int enabled) {\n  const TValue* o = index2adr(L, objindex);\n  api_check(L, ttistable(o));\n  Table* t = hvalue(o);\n  api_check(L, t != hvalue(registry(L)));\n  t->readonly = enabled;\n}\n\nLUA_API int lua_isreadonlytable (lua_State *L, int objindex) {\n    const TValue* o = index2adr(L, objindex);\n  api_check(L, ttistable(o));\n  Table* t = hvalue(o);\n  api_check(L, t != hvalue(registry(L)));\n  return t->readonly;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lapi.h",
    "content": "/*\n** $Id: lapi.h,v 2.2.1.1 2007/12/27 13:02:25 roberto Exp $\n** Auxiliary functions from Lua API\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lapi_h\n#define lapi_h\n\n\n#include \"lobject.h\"\n\n\nLUAI_FUNC void luaA_pushobject (lua_State *L, const TValue *o);\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lauxlib.c",
    "content": "/*\n** $Id: lauxlib.c,v 1.159.1.3 2008/01/21 13:20:51 roberto Exp $\n** Auxiliary functions for building Lua libraries\n** See Copyright Notice in lua.h\n*/\n\n\n#include <ctype.h>\n#include <errno.h>\n#include <stdarg.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n\n/* This file uses only the official API of Lua.\n** Any function declared here could be written as an application function.\n*/\n\n#define lauxlib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n\n\n#define FREELIST_REF\t0\t/* free list of references */\n\n\n/* convert a stack index to positive */\n#define abs_index(L, i)\t\t((i) > 0 || (i) <= LUA_REGISTRYINDEX ? (i) : \\\n\t\t\t\t\tlua_gettop(L) + (i) + 1)\n\n\n/*\n** {======================================================\n** Error-report functions\n** =======================================================\n*/\n\n\nLUALIB_API int luaL_argerror (lua_State *L, int narg, const char *extramsg) {\n  lua_Debug ar;\n  if (!lua_getstack(L, 0, &ar))  /* no stack frame? */\n    return luaL_error(L, \"bad argument #%d (%s)\", narg, extramsg);\n  lua_getinfo(L, \"n\", &ar);\n  if (strcmp(ar.namewhat, \"method\") == 0) {\n    narg--;  /* do not count `self' */\n    if (narg == 0)  /* error is in the self argument itself? 
*/\n      return luaL_error(L, \"calling \" LUA_QS \" on bad self (%s)\",\n                           ar.name, extramsg);\n  }\n  if (ar.name == NULL)\n    ar.name = \"?\";\n  return luaL_error(L, \"bad argument #%d to \" LUA_QS \" (%s)\",\n                        narg, ar.name, extramsg);\n}\n\n\nLUALIB_API int luaL_typerror (lua_State *L, int narg, const char *tname) {\n  const char *msg = lua_pushfstring(L, \"%s expected, got %s\",\n                                    tname, luaL_typename(L, narg));\n  return luaL_argerror(L, narg, msg);\n}\n\n\nstatic void tag_error (lua_State *L, int narg, int tag) {\n  luaL_typerror(L, narg, lua_typename(L, tag));\n}\n\n\nLUALIB_API void luaL_where (lua_State *L, int level) {\n  lua_Debug ar;\n  if (lua_getstack(L, level, &ar)) {  /* check function at level */\n    lua_getinfo(L, \"Sl\", &ar);  /* get info about it */\n    if (ar.currentline > 0) {  /* is there info? */\n      lua_pushfstring(L, \"%s:%d: \", ar.short_src, ar.currentline);\n      return;\n    }\n  }\n  lua_pushliteral(L, \"\");  /* else, no information available... */\n}\n\n\nLUALIB_API int luaL_error (lua_State *L, const char *fmt, ...) {\n  va_list argp;\n  va_start(argp, fmt);\n  luaL_where(L, 1);\n  lua_pushvfstring(L, fmt, argp);\n  va_end(argp);\n  lua_concat(L, 2);\n  return lua_error(L);\n}\n\n/* }====================================================== */\n\n\nLUALIB_API int luaL_checkoption (lua_State *L, int narg, const char *def,\n                                 const char *const lst[]) {\n  const char *name = (def) ? 
luaL_optstring(L, narg, def) :\n                             luaL_checkstring(L, narg);\n  int i;\n  for (i=0; lst[i]; i++)\n    if (strcmp(lst[i], name) == 0)\n      return i;\n  return luaL_argerror(L, narg,\n                       lua_pushfstring(L, \"invalid option \" LUA_QS, name));\n}\n\n\nLUALIB_API int luaL_newmetatable (lua_State *L, const char *tname) {\n  lua_getfield(L, LUA_REGISTRYINDEX, tname);  /* get registry.name */\n  if (!lua_isnil(L, -1))  /* name already in use? */\n    return 0;  /* leave previous value on top, but return 0 */\n  lua_pop(L, 1);\n  lua_newtable(L);  /* create metatable */\n  lua_pushvalue(L, -1);\n  lua_setfield(L, LUA_REGISTRYINDEX, tname);  /* registry.name = metatable */\n  return 1;\n}\n\n\nLUALIB_API void *luaL_checkudata (lua_State *L, int ud, const char *tname) {\n  void *p = lua_touserdata(L, ud);\n  if (p != NULL) {  /* value is a userdata? */\n    if (lua_getmetatable(L, ud)) {  /* does it have a metatable? */\n      lua_getfield(L, LUA_REGISTRYINDEX, tname);  /* get correct metatable */\n      if (lua_rawequal(L, -1, -2)) {  /* does it have the correct mt? 
*/\n        lua_pop(L, 2);  /* remove both metatables */\n        return p;\n      }\n    }\n  }\n  luaL_typerror(L, ud, tname);  /* else error */\n  return NULL;  /* to avoid warnings */\n}\n\n\nLUALIB_API void luaL_checkstack (lua_State *L, int space, const char *mes) {\n  if (!lua_checkstack(L, space))\n    luaL_error(L, \"stack overflow (%s)\", mes);\n}\n\n\nLUALIB_API void luaL_checktype (lua_State *L, int narg, int t) {\n  if (lua_type(L, narg) != t)\n    tag_error(L, narg, t);\n}\n\n\nLUALIB_API void luaL_checkany (lua_State *L, int narg) {\n  if (lua_type(L, narg) == LUA_TNONE)\n    luaL_argerror(L, narg, \"value expected\");\n}\n\n\nLUALIB_API const char *luaL_checklstring (lua_State *L, int narg, size_t *len) {\n  const char *s = lua_tolstring(L, narg, len);\n  if (!s) tag_error(L, narg, LUA_TSTRING);\n  return s;\n}\n\n\nLUALIB_API const char *luaL_optlstring (lua_State *L, int narg,\n                                        const char *def, size_t *len) {\n  if (lua_isnoneornil(L, narg)) {\n    if (len)\n      *len = (def ? 
strlen(def) : 0);\n    return def;\n  }\n  else return luaL_checklstring(L, narg, len);\n}\n\n\nLUALIB_API lua_Number luaL_checknumber (lua_State *L, int narg) {\n  lua_Number d = lua_tonumber(L, narg);\n  if (d == 0 && !lua_isnumber(L, narg))  /* avoid extra test when d is not 0 */\n    tag_error(L, narg, LUA_TNUMBER);\n  return d;\n}\n\n\nLUALIB_API lua_Number luaL_optnumber (lua_State *L, int narg, lua_Number def) {\n  return luaL_opt(L, luaL_checknumber, narg, def);\n}\n\n\nLUALIB_API lua_Integer luaL_checkinteger (lua_State *L, int narg) {\n  lua_Integer d = lua_tointeger(L, narg);\n  if (d == 0 && !lua_isnumber(L, narg))  /* avoid extra test when d is not 0 */\n    tag_error(L, narg, LUA_TNUMBER);\n  return d;\n}\n\n\nLUALIB_API lua_Integer luaL_optinteger (lua_State *L, int narg,\n                                                      lua_Integer def) {\n  return luaL_opt(L, luaL_checkinteger, narg, def);\n}\n\n\nLUALIB_API int luaL_getmetafield (lua_State *L, int obj, const char *event) {\n  if (!lua_getmetatable(L, obj))  /* no metatable? */\n    return 0;\n  lua_pushstring(L, event);\n  lua_rawget(L, -2);\n  if (lua_isnil(L, -1)) {\n    lua_pop(L, 2);  /* remove metatable and metafield */\n    return 0;\n  }\n  else {\n    lua_remove(L, -2);  /* remove only metatable */\n    return 1;\n  }\n}\n\n\nLUALIB_API int luaL_callmeta (lua_State *L, int obj, const char *event) {\n  obj = abs_index(L, obj);\n  if (!luaL_getmetafield(L, obj, event))  /* no metafield? 
*/\n    return 0;\n  lua_pushvalue(L, obj);\n  lua_call(L, 1, 1);\n  return 1;\n}\n\n\nLUALIB_API void (luaL_register) (lua_State *L, const char *libname,\n                                const luaL_Reg *l) {\n  luaI_openlib(L, libname, l, 0);\n}\n\n\nstatic int libsize (const luaL_Reg *l) {\n  int size = 0;\n  for (; l->name; l++) size++;\n  return size;\n}\n\n\nLUALIB_API void luaI_openlib (lua_State *L, const char *libname,\n                              const luaL_Reg *l, int nup) {\n  if (libname) {\n    int size = libsize(l);\n    /* check whether lib already exists */\n    luaL_findtable(L, LUA_REGISTRYINDEX, \"_LOADED\", 1);\n    lua_getfield(L, -1, libname);  /* get _LOADED[libname] */\n    if (!lua_istable(L, -1)) {  /* not found? */\n      lua_pop(L, 1);  /* remove previous result */\n      /* try global variable (and create one if it does not exist) */\n      if (luaL_findtable(L, LUA_GLOBALSINDEX, libname, size) != NULL)\n        luaL_error(L, \"name conflict for module \" LUA_QS, libname);\n      lua_pushvalue(L, -1);\n      lua_setfield(L, -3, libname);  /* _LOADED[libname] = new table */\n    }\n    lua_remove(L, -2);  /* remove _LOADED table */\n    lua_insert(L, -(nup+1));  /* move library table to below upvalues */\n  }\n  for (; l->name; l++) {\n    int i;\n    for (i=0; i<nup; i++)  /* copy upvalues to the top */\n      lua_pushvalue(L, -nup);\n    lua_pushcclosure(L, l->func, nup);\n    lua_setfield(L, -(nup+2), l->name);\n  }\n  lua_pop(L, nup);  /* remove upvalues */\n}\n\n\n\n/*\n** {======================================================\n** getn-setn: size for arrays\n** =======================================================\n*/\n\n#if defined(LUA_COMPAT_GETN)\n\nstatic int checkint (lua_State *L, int topop) {\n  int n = (lua_type(L, -1) == LUA_TNUMBER) ? 
lua_tointeger(L, -1) : -1;\n  lua_pop(L, topop);\n  return n;\n}\n\n\nstatic void getsizes (lua_State *L) {\n  lua_getfield(L, LUA_REGISTRYINDEX, \"LUA_SIZES\");\n  if (lua_isnil(L, -1)) {  /* no `size' table? */\n    lua_pop(L, 1);  /* remove nil */\n    lua_newtable(L);  /* create it */\n    lua_pushvalue(L, -1);  /* `size' will be its own metatable */\n    lua_setmetatable(L, -2);\n    lua_pushliteral(L, \"kv\");\n    lua_setfield(L, -2, \"__mode\");  /* metatable(N).__mode = \"kv\" */\n    lua_pushvalue(L, -1);\n    lua_setfield(L, LUA_REGISTRYINDEX, \"LUA_SIZES\");  /* store in register */\n  }\n}\n\n\nLUALIB_API void luaL_setn (lua_State *L, int t, int n) {\n  t = abs_index(L, t);\n  lua_pushliteral(L, \"n\");\n  lua_rawget(L, t);\n  if (checkint(L, 1) >= 0) {  /* is there a numeric field `n'? */\n    lua_pushliteral(L, \"n\");  /* use it */\n    lua_pushinteger(L, n);\n    lua_rawset(L, t);\n  }\n  else {  /* use `sizes' */\n    getsizes(L);\n    lua_pushvalue(L, t);\n    lua_pushinteger(L, n);\n    lua_rawset(L, -3);  /* sizes[t] = n */\n    lua_pop(L, 1);  /* remove `sizes' */\n  }\n}\n\n\nLUALIB_API int luaL_getn (lua_State *L, int t) {\n  int n;\n  t = abs_index(L, t);\n  lua_pushliteral(L, \"n\");  /* try t.n */\n  lua_rawget(L, t);\n  if ((n = checkint(L, 1)) >= 0) return n;\n  getsizes(L);  /* else try sizes[t] */\n  lua_pushvalue(L, t);\n  lua_rawget(L, -2);\n  if ((n = checkint(L, 2)) >= 0) return n;\n  return (int)lua_objlen(L, t);\n}\n\n#endif\n\n/* }====================================================== */\n\n\n\nLUALIB_API const char *luaL_gsub (lua_State *L, const char *s, const char *p,\n                                                               const char *r) {\n  const char *wild;\n  size_t l = strlen(p);\n  luaL_Buffer b;\n  luaL_buffinit(L, &b);\n  while ((wild = strstr(s, p)) != NULL) {\n    luaL_addlstring(&b, s, wild - s);  /* push prefix */\n    luaL_addstring(&b, r);  /* push replacement in place of pattern */\n    s = wild + l;  
/* continue after `p' */\n  }\n  luaL_addstring(&b, s);  /* push last suffix */\n  luaL_pushresult(&b);\n  return lua_tostring(L, -1);\n}\n\n\nLUALIB_API const char *luaL_findtable (lua_State *L, int idx,\n                                       const char *fname, int szhint) {\n  const char *e;\n  lua_pushvalue(L, idx);\n  do {\n    e = strchr(fname, '.');\n    if (e == NULL) e = fname + strlen(fname);\n    lua_pushlstring(L, fname, e - fname);\n    lua_rawget(L, -2);\n    if (lua_isnil(L, -1)) {  /* no such field? */\n      lua_pop(L, 1);  /* remove this nil */\n      lua_createtable(L, 0, (*e == '.' ? 1 : szhint)); /* new table for field */\n      lua_pushlstring(L, fname, e - fname);\n      lua_pushvalue(L, -2);\n      lua_settable(L, -4);  /* set new table into field */\n    }\n    else if (!lua_istable(L, -1)) {  /* field has a non-table value? */\n      lua_pop(L, 2);  /* remove table and value */\n      return fname;  /* return problematic part of the name */\n    }\n    lua_remove(L, -2);  /* remove previous table */\n    fname = e + 1;\n  } while (*e == '.');\n  return NULL;\n}\n\n\n\n/*\n** {======================================================\n** Generic Buffer manipulation\n** =======================================================\n*/\n\n\n#define bufflen(B)\t((B)->p - (B)->buffer)\n#define bufffree(B)\t((size_t)(LUAL_BUFFERSIZE - bufflen(B)))\n\n#define LIMIT\t(LUA_MINSTACK/2)\n\n\nstatic int emptybuffer (luaL_Buffer *B) {\n  size_t l = bufflen(B);\n  if (l == 0) return 0;  /* put nothing on stack */\n  else {\n    lua_pushlstring(B->L, B->buffer, l);\n    B->p = B->buffer;\n    B->lvl++;\n    return 1;\n  }\n}\n\n\nstatic void adjuststack (luaL_Buffer *B) {\n  if (B->lvl > 1) {\n    lua_State *L = B->L;\n    int toget = 1;  /* number of levels to concat */\n    size_t toplen = lua_strlen(L, -1);\n    do {\n      size_t l = lua_strlen(L, -(toget+1));\n      if (B->lvl - toget + 1 >= LIMIT || toplen > l) {\n        toplen += l;\n        toget++;\n    
  }\n      else break;\n    } while (toget < B->lvl);\n    lua_concat(L, toget);\n    B->lvl = B->lvl - toget + 1;\n  }\n}\n\n\nLUALIB_API char *luaL_prepbuffer (luaL_Buffer *B) {\n  if (emptybuffer(B))\n    adjuststack(B);\n  return B->buffer;\n}\n\n\nLUALIB_API void luaL_addlstring (luaL_Buffer *B, const char *s, size_t l) {\n  while (l--)\n    luaL_addchar(B, *s++);\n}\n\n\nLUALIB_API void luaL_addstring (luaL_Buffer *B, const char *s) {\n  luaL_addlstring(B, s, strlen(s));\n}\n\n\nLUALIB_API void luaL_pushresult (luaL_Buffer *B) {\n  emptybuffer(B);\n  lua_concat(B->L, B->lvl);\n  B->lvl = 1;\n}\n\n\nLUALIB_API void luaL_addvalue (luaL_Buffer *B) {\n  lua_State *L = B->L;\n  size_t vl;\n  const char *s = lua_tolstring(L, -1, &vl);\n  if (vl <= bufffree(B)) {  /* fit into buffer? */\n    memcpy(B->p, s, vl);  /* put it there */\n    B->p += vl;\n    lua_pop(L, 1);  /* remove from stack */\n  }\n  else {\n    if (emptybuffer(B))\n      lua_insert(L, -2);  /* put buffer before new value */\n    B->lvl++;  /* add new value into B stack */\n    adjuststack(B);\n  }\n}\n\n\nLUALIB_API void luaL_buffinit (lua_State *L, luaL_Buffer *B) {\n  B->L = L;\n  B->p = B->buffer;\n  B->lvl = 0;\n}\n\n/* }====================================================== */\n\n\nLUALIB_API int luaL_ref (lua_State *L, int t) {\n  int ref;\n  t = abs_index(L, t);\n  if (lua_isnil(L, -1)) {\n    lua_pop(L, 1);  /* remove from stack */\n    return LUA_REFNIL;  /* `nil' has a unique fixed reference */\n  }\n  lua_rawgeti(L, t, FREELIST_REF);  /* get first free element */\n  ref = (int)lua_tointeger(L, -1);  /* ref = t[FREELIST_REF] */\n  lua_pop(L, 1);  /* remove it from stack */\n  if (ref != 0) {  /* any free element? 
*/\n    lua_rawgeti(L, t, ref);  /* remove it from list */\n    lua_rawseti(L, t, FREELIST_REF);  /* (t[FREELIST_REF] = t[ref]) */\n  }\n  else {  /* no free elements */\n    ref = (int)lua_objlen(L, t);\n    ref++;  /* create new reference */\n  }\n  lua_rawseti(L, t, ref);\n  return ref;\n}\n\n\nLUALIB_API void luaL_unref (lua_State *L, int t, int ref) {\n  if (ref >= 0) {\n    t = abs_index(L, t);\n    lua_rawgeti(L, t, FREELIST_REF);\n    lua_rawseti(L, t, ref);  /* t[ref] = t[FREELIST_REF] */\n    lua_pushinteger(L, ref);\n    lua_rawseti(L, t, FREELIST_REF);  /* t[FREELIST_REF] = ref */\n  }\n}\n\n\n\n/*\n** {======================================================\n** Load functions\n** =======================================================\n*/\n\ntypedef struct LoadF {\n  int extraline;\n  FILE *f;\n  char buff[LUAL_BUFFERSIZE];\n} LoadF;\n\n\nstatic const char *getF (lua_State *L, void *ud, size_t *size) {\n  LoadF *lf = (LoadF *)ud;\n  (void)L;\n  if (lf->extraline) {\n    lf->extraline = 0;\n    *size = 1;\n    return \"\\n\";\n  }\n  if (feof(lf->f)) return NULL;\n  *size = fread(lf->buff, 1, sizeof(lf->buff), lf->f);\n  return (*size > 0) ? 
lf->buff : NULL;\n}\n\n\nstatic int errfile (lua_State *L, const char *what, int fnameindex) {\n  const char *serr = strerror(errno);\n  const char *filename = lua_tostring(L, fnameindex) + 1;\n  lua_pushfstring(L, \"cannot %s %s: %s\", what, filename, serr);\n  lua_remove(L, fnameindex);\n  return LUA_ERRFILE;\n}\n\n\nLUALIB_API int luaL_loadfile (lua_State *L, const char *filename) {\n  LoadF lf;\n  int status, readstatus;\n  int c;\n  int fnameindex = lua_gettop(L) + 1;  /* index of filename on the stack */\n  lf.extraline = 0;\n  if (filename == NULL) {\n    lua_pushliteral(L, \"=stdin\");\n    lf.f = stdin;\n  }\n  else {\n    lua_pushfstring(L, \"@%s\", filename);\n    lf.f = fopen(filename, \"r\");\n    if (lf.f == NULL) return errfile(L, \"open\", fnameindex);\n  }\n  c = getc(lf.f);\n  if (c == '#') {  /* Unix exec. file? */\n    lf.extraline = 1;\n    while ((c = getc(lf.f)) != EOF && c != '\\n') ;  /* skip first line */\n    if (c == '\\n') c = getc(lf.f);\n  }\n  if (c == LUA_SIGNATURE[0] && filename) {  /* binary file? */\n    lf.f = freopen(filename, \"rb\", lf.f);  /* reopen in binary mode */\n    if (lf.f == NULL) return errfile(L, \"reopen\", fnameindex);\n    /* skip eventual `#!...' 
*/\n   while ((c = getc(lf.f)) != EOF && c != LUA_SIGNATURE[0]) ;\n   lf.extraline = 0;\n  }\n  ungetc(c, lf.f);\n  status = lua_load(L, getF, &lf, lua_tostring(L, -1));\n  readstatus = ferror(lf.f);\n  if (filename) fclose(lf.f);  /* close file (even in case of errors) */\n  if (readstatus) {\n    lua_settop(L, fnameindex);  /* ignore results from `lua_load' */\n    return errfile(L, \"read\", fnameindex);\n  }\n  lua_remove(L, fnameindex);\n  return status;\n}\n\n\ntypedef struct LoadS {\n  const char *s;\n  size_t size;\n} LoadS;\n\n\nstatic const char *getS (lua_State *L, void *ud, size_t *size) {\n  LoadS *ls = (LoadS *)ud;\n  (void)L;\n  if (ls->size == 0) return NULL;\n  *size = ls->size;\n  ls->size = 0;\n  return ls->s;\n}\n\n\nLUALIB_API int luaL_loadbuffer (lua_State *L, const char *buff, size_t size,\n                                const char *name) {\n  LoadS ls;\n  ls.s = buff;\n  ls.size = size;\n  return lua_load(L, getS, &ls, name);\n}\n\n\nLUALIB_API int (luaL_loadstring) (lua_State *L, const char *s) {\n  return luaL_loadbuffer(L, s, strlen(s), s);\n}\n\n\n\n/* }====================================================== */\n\n\nstatic void *l_alloc (void *ud, void *ptr, size_t osize, size_t nsize) {\n  (void)ud;\n  (void)osize;\n  if (nsize == 0) {\n    free(ptr);\n    return NULL;\n  }\n  else\n    return realloc(ptr, nsize);\n}\n\n\nstatic int panic (lua_State *L) {\n  (void)L;  /* to avoid warnings */\n  fprintf(stderr, \"PANIC: unprotected error in call to Lua API (%s)\\n\",\n                   lua_tostring(L, -1));\n  return 0;\n}\n\n\nLUALIB_API lua_State *luaL_newstate (void) {\n  lua_State *L = lua_newstate(l_alloc, NULL);\n  if (L) lua_atpanic(L, &panic);\n  return L;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lauxlib.h",
    "content": "/*\n** $Id: lauxlib.h,v 1.88.1.1 2007/12/27 13:02:25 roberto Exp $\n** Auxiliary functions for building Lua libraries\n** See Copyright Notice in lua.h\n*/\n\n\n#ifndef lauxlib_h\n#define lauxlib_h\n\n\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"lua.h\"\n\n\n#if defined(LUA_COMPAT_GETN)\nLUALIB_API int (luaL_getn) (lua_State *L, int t);\nLUALIB_API void (luaL_setn) (lua_State *L, int t, int n);\n#else\n#define luaL_getn(L,i)          ((int)lua_objlen(L, i))\n#define luaL_setn(L,i,j)        ((void)0)  /* no op! */\n#endif\n\n#if defined(LUA_COMPAT_OPENLIB)\n#define luaI_openlib\tluaL_openlib\n#endif\n\n\n/* extra error code for `luaL_load' */\n#define LUA_ERRFILE     (LUA_ERRERR+1)\n\n\ntypedef struct luaL_Reg {\n  const char *name;\n  lua_CFunction func;\n} luaL_Reg;\n\n\n\nLUALIB_API void (luaI_openlib) (lua_State *L, const char *libname,\n                                const luaL_Reg *l, int nup);\nLUALIB_API void (luaL_register) (lua_State *L, const char *libname,\n                                const luaL_Reg *l);\nLUALIB_API int (luaL_getmetafield) (lua_State *L, int obj, const char *e);\nLUALIB_API int (luaL_callmeta) (lua_State *L, int obj, const char *e);\nLUALIB_API int (luaL_typerror) (lua_State *L, int narg, const char *tname);\nLUALIB_API int (luaL_argerror) (lua_State *L, int numarg, const char *extramsg);\nLUALIB_API const char *(luaL_checklstring) (lua_State *L, int numArg,\n                                                          size_t *l);\nLUALIB_API const char *(luaL_optlstring) (lua_State *L, int numArg,\n                                          const char *def, size_t *l);\nLUALIB_API lua_Number (luaL_checknumber) (lua_State *L, int numArg);\nLUALIB_API lua_Number (luaL_optnumber) (lua_State *L, int nArg, lua_Number def);\n\nLUALIB_API lua_Integer (luaL_checkinteger) (lua_State *L, int numArg);\nLUALIB_API lua_Integer (luaL_optinteger) (lua_State *L, int nArg,\n                                          
lua_Integer def);\n\nLUALIB_API void (luaL_checkstack) (lua_State *L, int sz, const char *msg);\nLUALIB_API void (luaL_checktype) (lua_State *L, int narg, int t);\nLUALIB_API void (luaL_checkany) (lua_State *L, int narg);\n\nLUALIB_API int   (luaL_newmetatable) (lua_State *L, const char *tname);\nLUALIB_API void *(luaL_checkudata) (lua_State *L, int ud, const char *tname);\n\nLUALIB_API void (luaL_where) (lua_State *L, int lvl);\nLUALIB_API int (luaL_error) (lua_State *L, const char *fmt, ...);\n\nLUALIB_API int (luaL_checkoption) (lua_State *L, int narg, const char *def,\n                                   const char *const lst[]);\n\nLUALIB_API int (luaL_ref) (lua_State *L, int t);\nLUALIB_API void (luaL_unref) (lua_State *L, int t, int ref);\n\nLUALIB_API int (luaL_loadfile) (lua_State *L, const char *filename);\nLUALIB_API int (luaL_loadbuffer) (lua_State *L, const char *buff, size_t sz,\n                                  const char *name);\nLUALIB_API int (luaL_loadstring) (lua_State *L, const char *s);\n\nLUALIB_API lua_State *(luaL_newstate) (void);\n\n\nLUALIB_API const char *(luaL_gsub) (lua_State *L, const char *s, const char *p,\n                                                  const char *r);\n\nLUALIB_API const char *(luaL_findtable) (lua_State *L, int idx,\n                                         const char *fname, int szhint);\n\n\n\n\n/*\n** ===============================================================\n** some useful macros\n** ===============================================================\n*/\n\n#define luaL_argcheck(L, cond,numarg,extramsg)\t\\\n\t\t((void)((cond) || luaL_argerror(L, (numarg), (extramsg))))\n#define luaL_checkstring(L,n)\t(luaL_checklstring(L, (n), NULL))\n#define luaL_optstring(L,n,d)\t(luaL_optlstring(L, (n), (d), NULL))\n#define luaL_checkint(L,n)\t((int)luaL_checkinteger(L, (n)))\n#define luaL_optint(L,n,d)\t((int)luaL_optinteger(L, (n), (d)))\n#define luaL_checklong(L,n)\t((long)luaL_checkinteger(L, (n)))\n#define 
luaL_optlong(L,n,d)\t((long)luaL_optinteger(L, (n), (d)))\n\n#define luaL_typename(L,i)\tlua_typename(L, lua_type(L,(i)))\n\n#define luaL_dofile(L, fn) \\\n\t(luaL_loadfile(L, fn) || lua_pcall(L, 0, LUA_MULTRET, 0))\n\n#define luaL_dostring(L, s) \\\n\t(luaL_loadstring(L, s) || lua_pcall(L, 0, LUA_MULTRET, 0))\n\n#define luaL_getmetatable(L,n)\t(lua_getfield(L, LUA_REGISTRYINDEX, (n)))\n\n#define luaL_opt(L,f,n,d)\t(lua_isnoneornil(L,(n)) ? (d) : f(L,(n)))\n\n/*\n** {======================================================\n** Generic Buffer manipulation\n** =======================================================\n*/\n\n\n\ntypedef struct luaL_Buffer {\n  char *p;\t\t\t/* current position in buffer */\n  int lvl;  /* number of strings in the stack (level) */\n  lua_State *L;\n  char buffer[LUAL_BUFFERSIZE];\n} luaL_Buffer;\n\n#define luaL_addchar(B,c) \\\n  ((void)((B)->p < ((B)->buffer+LUAL_BUFFERSIZE) || luaL_prepbuffer(B)), \\\n   (*(B)->p++ = (char)(c)))\n\n/* compatibility only */\n#define luaL_putchar(B,c)\tluaL_addchar(B,c)\n\n#define luaL_addsize(B,n)\t((B)->p += (n))\n\nLUALIB_API void (luaL_buffinit) (lua_State *L, luaL_Buffer *B);\nLUALIB_API char *(luaL_prepbuffer) (luaL_Buffer *B);\nLUALIB_API void (luaL_addlstring) (luaL_Buffer *B, const char *s, size_t l);\nLUALIB_API void (luaL_addstring) (luaL_Buffer *B, const char *s);\nLUALIB_API void (luaL_addvalue) (luaL_Buffer *B);\nLUALIB_API void (luaL_pushresult) (luaL_Buffer *B);\n\n\n/* }====================================================== */\n\n\n/* compatibility with ref system */\n\n/* pre-defined references */\n#define LUA_NOREF       (-2)\n#define LUA_REFNIL      (-1)\n\n#define lua_ref(L,lock) ((lock) ? 
luaL_ref(L, LUA_REGISTRYINDEX) : \\\n      (lua_pushstring(L, \"unlocked references are obsolete\"), lua_error(L), 0))\n\n#define lua_unref(L,ref)        luaL_unref(L, LUA_REGISTRYINDEX, (ref))\n\n#define lua_getref(L,ref)       lua_rawgeti(L, LUA_REGISTRYINDEX, (ref))\n\n\n#define luaL_reg\tluaL_Reg\n\n#endif\n\n\n"
  },
  {
    "path": "deps/lua/src/lbaselib.c",
    "content": "/*\n** $Id: lbaselib.c,v 1.191.1.6 2008/02/14 16:46:22 roberto Exp $\n** Basic library\n** See Copyright Notice in lua.h\n*/\n\n\n\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define lbaselib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n\n\n/*\n** If your system does not support `stdout', you can just remove this function.\n** If you need, you can define your own `print' function, following this\n** model but changing `fputs' to put the strings at a proper place\n** (a console window or a log file, for instance).\n*/\nstatic int luaB_print (lua_State *L) {\n  int n = lua_gettop(L);  /* number of arguments */\n  int i;\n  lua_getglobal(L, \"tostring\");\n  for (i=1; i<=n; i++) {\n    const char *s;\n    lua_pushvalue(L, -1);  /* function to be called */\n    lua_pushvalue(L, i);   /* value to print */\n    lua_call(L, 1, 1);\n    s = lua_tostring(L, -1);  /* get result */\n    if (s == NULL)\n      return luaL_error(L, LUA_QL(\"tostring\") \" must return a string to \"\n                           LUA_QL(\"print\"));\n    if (i>1) fputs(\"\\t\", stdout);\n    fputs(s, stdout);\n    lua_pop(L, 1);  /* pop result */\n  }\n  fputs(\"\\n\", stdout);\n  return 0;\n}\n\n\nstatic int luaB_tonumber (lua_State *L) {\n  int base = luaL_optint(L, 2, 10);\n  if (base == 10) {  /* standard conversion */\n    luaL_checkany(L, 1);\n    if (lua_isnumber(L, 1)) {\n      lua_pushnumber(L, lua_tonumber(L, 1));\n      return 1;\n    }\n  }\n  else {\n    const char *s1 = luaL_checkstring(L, 1);\n    char *s2;\n    unsigned long n;\n    luaL_argcheck(L, 2 <= base && base <= 36, 2, \"base out of range\");\n    n = strtoul(s1, &s2, base);\n    if (s1 != s2) {  /* at least one valid digit? */\n      while (isspace((unsigned char)(*s2))) s2++;  /* skip trailing spaces */\n      if (*s2 == '\\0') {  /* no invalid trailing characters? 
*/\n        lua_pushnumber(L, (lua_Number)n);\n        return 1;\n      }\n    }\n  }\n  lua_pushnil(L);  /* else not a number */\n  return 1;\n}\n\n\nstatic int luaB_error (lua_State *L) {\n  int level = luaL_optint(L, 2, 1);\n  lua_settop(L, 1);\n  if (lua_isstring(L, 1) && level > 0) {  /* add extra information? */\n    luaL_where(L, level);\n    lua_pushvalue(L, 1);\n    lua_concat(L, 2);\n  }\n  return lua_error(L);\n}\n\n\nstatic int luaB_getmetatable (lua_State *L) {\n  luaL_checkany(L, 1);\n  if (!lua_getmetatable(L, 1)) {\n    lua_pushnil(L);\n    return 1;  /* no metatable */\n  }\n  luaL_getmetafield(L, 1, \"__metatable\");\n  return 1;  /* returns either __metatable field (if present) or metatable */\n}\n\n\nstatic int luaB_setmetatable (lua_State *L) {\n  int t = lua_type(L, 2);\n  luaL_checktype(L, 1, LUA_TTABLE);\n  luaL_argcheck(L, t == LUA_TNIL || t == LUA_TTABLE, 2,\n                    \"nil or table expected\");\n  if (luaL_getmetafield(L, 1, \"__metatable\"))\n    luaL_error(L, \"cannot change a protected metatable\");\n  lua_settop(L, 2);\n  lua_setmetatable(L, 1);\n  return 1;\n}\n\n\nstatic void getfunc (lua_State *L, int opt) {\n  if (lua_isfunction(L, 1)) lua_pushvalue(L, 1);\n  else {\n    lua_Debug ar;\n    int level = opt ? luaL_optint(L, 1, 1) : luaL_checkint(L, 1);\n    luaL_argcheck(L, level >= 0, 1, \"level must be non-negative\");\n    if (lua_getstack(L, level, &ar) == 0)\n      luaL_argerror(L, 1, \"invalid level\");\n    lua_getinfo(L, \"f\", &ar);\n    if (lua_isnil(L, -1))\n      luaL_error(L, \"no function environment for tail call at level %d\",\n                    level);\n  }\n}\n\n\nstatic int luaB_getfenv (lua_State *L) {\n  getfunc(L, 1);\n  if (lua_iscfunction(L, -1))  /* is a C function? */\n    lua_pushvalue(L, LUA_GLOBALSINDEX);  /* return the thread's global env. 
*/\n  else\n    lua_getfenv(L, -1);\n  return 1;\n}\n\n\nstatic int luaB_setfenv (lua_State *L) {\n  luaL_checktype(L, 2, LUA_TTABLE);\n  getfunc(L, 0);\n  lua_pushvalue(L, 2);\n  if (lua_isnumber(L, 1) && lua_tonumber(L, 1) == 0) {\n    /* change environment of current thread */\n    lua_pushthread(L);\n    lua_insert(L, -2);\n    lua_setfenv(L, -2);\n    return 0;\n  }\n  else if (lua_iscfunction(L, -2) || lua_setfenv(L, -2) == 0)\n    luaL_error(L,\n          LUA_QL(\"setfenv\") \" cannot change environment of given object\");\n  return 1;\n}\n\n\nstatic int luaB_rawequal (lua_State *L) {\n  luaL_checkany(L, 1);\n  luaL_checkany(L, 2);\n  lua_pushboolean(L, lua_rawequal(L, 1, 2));\n  return 1;\n}\n\n\nstatic int luaB_rawget (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  luaL_checkany(L, 2);\n  lua_settop(L, 2);\n  lua_rawget(L, 1);\n  return 1;\n}\n\nstatic int luaB_rawset (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  luaL_checkany(L, 2);\n  luaL_checkany(L, 3);\n  lua_settop(L, 3);\n  lua_rawset(L, 1);\n  return 1;\n}\n\n\nstatic int luaB_gcinfo (lua_State *L) {\n  lua_pushinteger(L, lua_getgccount(L));\n  return 1;\n}\n\n\nstatic int luaB_collectgarbage (lua_State *L) {\n  static const char *const opts[] = {\"stop\", \"restart\", \"collect\",\n    \"count\", \"step\", \"setpause\", \"setstepmul\", NULL};\n  static const int optsnum[] = {LUA_GCSTOP, LUA_GCRESTART, LUA_GCCOLLECT,\n    LUA_GCCOUNT, LUA_GCSTEP, LUA_GCSETPAUSE, LUA_GCSETSTEPMUL};\n  int o = luaL_checkoption(L, 1, \"collect\", opts);\n  int ex = luaL_optint(L, 2, 0);\n  int res = lua_gc(L, optsnum[o], ex);\n  switch (optsnum[o]) {\n    case LUA_GCCOUNT: {\n      int b = lua_gc(L, LUA_GCCOUNTB, 0);\n      lua_pushnumber(L, res + ((lua_Number)b/1024));\n      return 1;\n    }\n    case LUA_GCSTEP: {\n      lua_pushboolean(L, res);\n      return 1;\n    }\n    default: {\n      lua_pushnumber(L, res);\n      return 1;\n    }\n  }\n}\n\n\nstatic int luaB_type (lua_State *L) {\n  
luaL_checkany(L, 1);\n  lua_pushstring(L, luaL_typename(L, 1));\n  return 1;\n}\n\n\nstatic int luaB_next (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  lua_settop(L, 2);  /* create a 2nd argument if there isn't one */\n  if (lua_next(L, 1))\n    return 2;\n  else {\n    lua_pushnil(L);\n    return 1;\n  }\n}\n\n\nstatic int luaB_pairs (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  lua_pushvalue(L, lua_upvalueindex(1));  /* return generator, */\n  lua_pushvalue(L, 1);  /* state, */\n  lua_pushnil(L);  /* and initial value */\n  return 3;\n}\n\n\nstatic int ipairsaux (lua_State *L) {\n  int i = luaL_checkint(L, 2);\n  luaL_checktype(L, 1, LUA_TTABLE);\n  i++;  /* next value */\n  lua_pushinteger(L, i);\n  lua_rawgeti(L, 1, i);\n  return (lua_isnil(L, -1)) ? 0 : 2;\n}\n\n\nstatic int luaB_ipairs (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  lua_pushvalue(L, lua_upvalueindex(1));  /* return generator, */\n  lua_pushvalue(L, 1);  /* state, */\n  lua_pushinteger(L, 0);  /* and initial value */\n  return 3;\n}\n\n\nstatic int load_aux (lua_State *L, int status) {\n  if (status == 0)  /* OK? */\n    return 1;\n  else {\n    lua_pushnil(L);\n    lua_insert(L, -2);  /* put before error message */\n    return 2;  /* return nil plus error message */\n  }\n}\n\n\nstatic int luaB_loadstring (lua_State *L) {\n  size_t l;\n  const char *s = luaL_checklstring(L, 1, &l);\n  const char *chunkname = luaL_optstring(L, 2, s);\n  return load_aux(L, luaL_loadbuffer(L, s, l, chunkname));\n}\n\n\nstatic int luaB_loadfile (lua_State *L) {\n  const char *fname = luaL_optstring(L, 1, NULL);\n  return load_aux(L, luaL_loadfile(L, fname));\n}\n\n\n/*\n** Reader for generic `load' function: `lua_load' uses the\n** stack for internal stuff, so the reader cannot change the\n** stack top. 
Instead, it keeps its resulting string in a\n** reserved slot inside the stack.\n*/\nstatic const char *generic_reader (lua_State *L, void *ud, size_t *size) {\n  (void)ud;  /* to avoid warnings */\n  luaL_checkstack(L, 2, \"too many nested functions\");\n  lua_pushvalue(L, 1);  /* get function */\n  lua_call(L, 0, 1);  /* call it */\n  if (lua_isnil(L, -1)) {\n    *size = 0;\n    return NULL;\n  }\n  else if (lua_isstring(L, -1)) {\n    lua_replace(L, 3);  /* save string in a reserved stack slot */\n    return lua_tolstring(L, 3, size);\n  }\n  else luaL_error(L, \"reader function must return a string\");\n  return NULL;  /* to avoid warnings */\n}\n\n\nstatic int luaB_load (lua_State *L) {\n  int status;\n  const char *cname = luaL_optstring(L, 2, \"=(load)\");\n  luaL_checktype(L, 1, LUA_TFUNCTION);\n  lua_settop(L, 3);  /* function, eventual name, plus one reserved slot */\n  status = lua_load(L, generic_reader, NULL, cname);\n  return load_aux(L, status);\n}\n\n\nstatic int luaB_dofile (lua_State *L) {\n  const char *fname = luaL_optstring(L, 1, NULL);\n  int n = lua_gettop(L);\n  if (luaL_loadfile(L, fname) != 0) lua_error(L);\n  lua_call(L, 0, LUA_MULTRET);\n  return lua_gettop(L) - n;\n}\n\n\nstatic int luaB_assert (lua_State *L) {\n  luaL_checkany(L, 1);\n  if (!lua_toboolean(L, 1))\n    return luaL_error(L, \"%s\", luaL_optstring(L, 2, \"assertion failed!\"));\n  return lua_gettop(L);\n}\n\n\nstatic int luaB_unpack (lua_State *L) {\n  int i, e;\n  unsigned int n;\n  luaL_checktype(L, 1, LUA_TTABLE);\n  i = luaL_optint(L, 2, 1);\n  e = luaL_opt(L, luaL_checkint, 3, luaL_getn(L, 1));\n  if (i > e) return 0;  /* empty range */\n  n = (unsigned int)e - (unsigned int)i;  /* number of elements minus 1 */\n  if (n >= INT_MAX || !lua_checkstack(L, ++n))\n    return luaL_error(L, \"too many results to unpack\");\n  lua_rawgeti(L, 1, i);  /* push arg[i] (avoiding overflow problems) */\n  while (i++ < e)  /* push arg[i + 1...e] */\n    lua_rawgeti(L, 1, i);\n  
return n;\n}\n\n\nstatic int luaB_select (lua_State *L) {\n  int n = lua_gettop(L);\n  if (lua_type(L, 1) == LUA_TSTRING && *lua_tostring(L, 1) == '#') {\n    lua_pushinteger(L, n-1);\n    return 1;\n  }\n  else {\n    int i = luaL_checkint(L, 1);\n    if (i < 0) i = n + i;\n    else if (i > n) i = n;\n    luaL_argcheck(L, 1 <= i, 1, \"index out of range\");\n    return n - i;\n  }\n}\n\n\nstatic int luaB_pcall (lua_State *L) {\n  int status;\n  luaL_checkany(L, 1);\n  status = lua_pcall(L, lua_gettop(L) - 1, LUA_MULTRET, 0);\n  lua_pushboolean(L, (status == 0));\n  lua_insert(L, 1);\n  return lua_gettop(L);  /* return status + all results */\n}\n\n\nstatic int luaB_xpcall (lua_State *L) {\n  int status;\n  luaL_checkany(L, 2);\n  lua_settop(L, 2);\n  lua_insert(L, 1);  /* put error function under function to be called */\n  status = lua_pcall(L, 0, LUA_MULTRET, 1);\n  lua_pushboolean(L, (status == 0));\n  lua_replace(L, 1);\n  return lua_gettop(L);  /* return status + all results */\n}\n\n\nstatic int luaB_tostring (lua_State *L) {\n  luaL_checkany(L, 1);\n  if (luaL_callmeta(L, 1, \"__tostring\"))  /* is there a metafield? */\n    return 1;  /* use its value */\n  switch (lua_type(L, 1)) {\n    case LUA_TNUMBER:\n      lua_pushstring(L, lua_tostring(L, 1));\n      break;\n    case LUA_TSTRING:\n      lua_pushvalue(L, 1);\n      break;\n    case LUA_TBOOLEAN:\n      lua_pushstring(L, (lua_toboolean(L, 1) ? \"true\" : \"false\"));\n      break;\n    case LUA_TNIL:\n      lua_pushliteral(L, \"nil\");\n      break;\n    default:\n      lua_pushfstring(L, \"%s: %p\", luaL_typename(L, 1), lua_topointer(L, 1));\n      break;\n  }\n  return 1;\n}\n\n\nstatic int luaB_newproxy (lua_State *L) {\n  lua_settop(L, 1);\n  lua_newuserdata(L, 0);  /* create proxy */\n  if (lua_toboolean(L, 1) == 0)\n    return 1;  /* no metatable */\n  else if (lua_isboolean(L, 1)) {\n    lua_newtable(L);  /* create a new metatable `m' ... */\n    lua_pushvalue(L, -1);  /* ... 
and mark `m' as a valid metatable */\n    lua_pushboolean(L, 1);\n    lua_rawset(L, lua_upvalueindex(1));  /* weaktable[m] = true */\n  }\n  else {\n    int validproxy = 0;  /* to check if weaktable[metatable(u)] == true */\n    if (lua_getmetatable(L, 1)) {\n      lua_rawget(L, lua_upvalueindex(1));\n      validproxy = lua_toboolean(L, -1);\n      lua_pop(L, 1);  /* remove value */\n    }\n    luaL_argcheck(L, validproxy, 1, \"boolean or proxy expected\");\n    lua_getmetatable(L, 1);  /* metatable is valid; get it */\n  }\n  lua_setmetatable(L, 2);\n  return 1;\n}\n\n\nstatic const luaL_Reg base_funcs[] = {\n  {\"assert\", luaB_assert},\n  {\"collectgarbage\", luaB_collectgarbage},\n  {\"dofile\", luaB_dofile},\n  {\"error\", luaB_error},\n  {\"gcinfo\", luaB_gcinfo},\n  {\"getfenv\", luaB_getfenv},\n  {\"getmetatable\", luaB_getmetatable},\n  {\"loadfile\", luaB_loadfile},\n  {\"load\", luaB_load},\n  {\"loadstring\", luaB_loadstring},\n  {\"next\", luaB_next},\n  {\"pcall\", luaB_pcall},\n  {\"print\", luaB_print},\n  {\"rawequal\", luaB_rawequal},\n  {\"rawget\", luaB_rawget},\n  {\"rawset\", luaB_rawset},\n  {\"select\", luaB_select},\n  {\"setfenv\", luaB_setfenv},\n  {\"setmetatable\", luaB_setmetatable},\n  {\"tonumber\", luaB_tonumber},\n  {\"tostring\", luaB_tostring},\n  {\"type\", luaB_type},\n  {\"unpack\", luaB_unpack},\n  {\"xpcall\", luaB_xpcall},\n  {NULL, NULL}\n};\n\n\n/*\n** {======================================================\n** Coroutine library\n** =======================================================\n*/\n\n#define CO_RUN\t0\t/* running */\n#define CO_SUS\t1\t/* suspended */\n#define CO_NOR\t2\t/* 'normal' (it resumed another coroutine) */\n#define CO_DEAD\t3\n\nstatic const char *const statnames[] =\n    {\"running\", \"suspended\", \"normal\", \"dead\"};\n\nstatic int costatus (lua_State *L, lua_State *co) {\n  if (L == co) return CO_RUN;\n  switch (lua_status(co)) {\n    case LUA_YIELD:\n      return CO_SUS;\n    case 0: {\n      
lua_Debug ar;\n      if (lua_getstack(co, 0, &ar) > 0)  /* does it have frames? */\n        return CO_NOR;  /* it is running */\n      else if (lua_gettop(co) == 0)\n          return CO_DEAD;\n      else\n        return CO_SUS;  /* initial state */\n    }\n    default:  /* some error occurred */\n      return CO_DEAD;\n  }\n}\n\n\nstatic int luaB_costatus (lua_State *L) {\n  lua_State *co = lua_tothread(L, 1);\n  luaL_argcheck(L, co, 1, \"coroutine expected\");\n  lua_pushstring(L, statnames[costatus(L, co)]);\n  return 1;\n}\n\n\nstatic int auxresume (lua_State *L, lua_State *co, int narg) {\n  int status = costatus(L, co);\n  if (!lua_checkstack(co, narg))\n    luaL_error(L, \"too many arguments to resume\");\n  if (status != CO_SUS) {\n    lua_pushfstring(L, \"cannot resume %s coroutine\", statnames[status]);\n    return -1;  /* error flag */\n  }\n  lua_xmove(L, co, narg);\n  lua_setlevel(L, co);\n  status = lua_resume(co, narg);\n  if (status == 0 || status == LUA_YIELD) {\n    int nres = lua_gettop(co);\n    if (!lua_checkstack(L, nres + 1))\n      luaL_error(L, \"too many results to resume\");\n    lua_xmove(co, L, nres);  /* move yielded values */\n    return nres;\n  }\n  else {\n    lua_xmove(co, L, 1);  /* move error message */\n    return -1;  /* error flag */\n  }\n}\n\n\nstatic int luaB_coresume (lua_State *L) {\n  lua_State *co = lua_tothread(L, 1);\n  int r;\n  luaL_argcheck(L, co, 1, \"coroutine expected\");\n  r = auxresume(L, co, lua_gettop(L) - 1);\n  if (r < 0) {\n    lua_pushboolean(L, 0);\n    lua_insert(L, -2);\n    return 2;  /* return false + error message */\n  }\n  else {\n    lua_pushboolean(L, 1);\n    lua_insert(L, -(r + 1));\n    return r + 1;  /* return true + `resume' returns */\n  }\n}\n\n\nstatic int luaB_auxwrap (lua_State *L) {\n  lua_State *co = lua_tothread(L, lua_upvalueindex(1));\n  int r = auxresume(L, co, lua_gettop(L));\n  if (r < 0) {\n    if (lua_isstring(L, -1)) {  /* error object is a string? 
*/\n      luaL_where(L, 1);  /* add extra info */\n      lua_insert(L, -2);\n      lua_concat(L, 2);\n    }\n    lua_error(L);  /* propagate error */\n  }\n  return r;\n}\n\n\nstatic int luaB_cocreate (lua_State *L) {\n  lua_State *NL = lua_newthread(L);\n  luaL_argcheck(L, lua_isfunction(L, 1) && !lua_iscfunction(L, 1), 1,\n    \"Lua function expected\");\n  lua_pushvalue(L, 1);  /* move function to top */\n  lua_xmove(L, NL, 1);  /* move function from L to NL */\n  return 1;\n}\n\n\nstatic int luaB_cowrap (lua_State *L) {\n  luaB_cocreate(L);\n  lua_pushcclosure(L, luaB_auxwrap, 1);\n  return 1;\n}\n\n\nstatic int luaB_yield (lua_State *L) {\n  return lua_yield(L, lua_gettop(L));\n}\n\n\nstatic int luaB_corunning (lua_State *L) {\n  if (lua_pushthread(L))\n    lua_pushnil(L);  /* main thread is not a coroutine */\n  return 1;\n}\n\n\nstatic const luaL_Reg co_funcs[] = {\n  {\"create\", luaB_cocreate},\n  {\"resume\", luaB_coresume},\n  {\"running\", luaB_corunning},\n  {\"status\", luaB_costatus},\n  {\"wrap\", luaB_cowrap},\n  {\"yield\", luaB_yield},\n  {NULL, NULL}\n};\n\n/* }====================================================== */\n\n\nstatic void auxopen (lua_State *L, const char *name,\n                     lua_CFunction f, lua_CFunction u) {\n  lua_pushcfunction(L, u);\n  lua_pushcclosure(L, f, 1);\n  lua_setfield(L, -2, name);\n}\n\n\nstatic void base_open (lua_State *L) {\n  /* set global _G */\n  lua_pushvalue(L, LUA_GLOBALSINDEX);\n  lua_setglobal(L, \"_G\");\n  /* open lib into global table */\n  luaL_register(L, \"_G\", base_funcs);\n  lua_pushliteral(L, LUA_VERSION);\n  lua_setglobal(L, \"_VERSION\");  /* set global _VERSION */\n  /* `ipairs' and `pairs' need auxiliary functions as upvalues */\n  auxopen(L, \"ipairs\", luaB_ipairs, ipairsaux);\n  auxopen(L, \"pairs\", luaB_pairs, luaB_next);\n  /* `newproxy' needs a weaktable as upvalue */\n  lua_createtable(L, 0, 1);  /* new table `w' */\n  lua_pushvalue(L, -1);  /* `w' will be its own metatable 
*/\n  lua_setmetatable(L, -2);\n  lua_pushliteral(L, \"kv\");\n  lua_setfield(L, -2, \"__mode\");  /* metatable(w).__mode = \"kv\" */\n  lua_pushcclosure(L, luaB_newproxy, 1);\n  lua_setglobal(L, \"newproxy\");  /* set global `newproxy' */\n}\n\n\nLUALIB_API int luaopen_base (lua_State *L) {\n  base_open(L);\n  luaL_register(L, LUA_COLIBNAME, co_funcs);\n  return 2;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lcode.c",
    "content": "/*\n** $Id: lcode.c,v 2.25.1.5 2011/01/31 14:53:16 roberto Exp $\n** Code generator for Lua\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stdlib.h>\n\n#define lcode_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lcode.h\"\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lgc.h\"\n#include \"llex.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lparser.h\"\n#include \"ltable.h\"\n\n\n#define hasjumps(e)\t((e)->t != (e)->f)\n\n\nstatic int isnumeral(expdesc *e) {\n  return (e->k == VKNUM && e->t == NO_JUMP && e->f == NO_JUMP);\n}\n\n\nvoid luaK_nil (FuncState *fs, int from, int n) {\n  Instruction *previous;\n  if (fs->pc > fs->lasttarget) {  /* no jumps to current position? */\n    if (fs->pc == 0) {  /* function start? */\n      if (from >= fs->nactvar)\n        return;  /* positions are already clean */\n    }\n    else {\n      previous = &fs->f->code[fs->pc-1];\n      if (GET_OPCODE(*previous) == OP_LOADNIL) {\n        int pfrom = GETARG_A(*previous);\n        int pto = GETARG_B(*previous);\n        if (pfrom <= from && from <= pto+1) {  /* can connect both? 
*/\n          if (from+n-1 > pto)\n            SETARG_B(*previous, from+n-1);\n          return;\n        }\n      }\n    }\n  }\n  luaK_codeABC(fs, OP_LOADNIL, from, from+n-1, 0);  /* else no optimization */\n}\n\n\nint luaK_jump (FuncState *fs) {\n  int jpc = fs->jpc;  /* save list of jumps to here */\n  int j;\n  fs->jpc = NO_JUMP;\n  j = luaK_codeAsBx(fs, OP_JMP, 0, NO_JUMP);\n  luaK_concat(fs, &j, jpc);  /* keep them on hold */\n  return j;\n}\n\n\nvoid luaK_ret (FuncState *fs, int first, int nret) {\n  luaK_codeABC(fs, OP_RETURN, first, nret+1, 0);\n}\n\n\nstatic int condjump (FuncState *fs, OpCode op, int A, int B, int C) {\n  luaK_codeABC(fs, op, A, B, C);\n  return luaK_jump(fs);\n}\n\n\nstatic void fixjump (FuncState *fs, int pc, int dest) {\n  Instruction *jmp = &fs->f->code[pc];\n  int offset = dest-(pc+1);\n  lua_assert(dest != NO_JUMP);\n  if (abs(offset) > MAXARG_sBx)\n    luaX_syntaxerror(fs->ls, \"control structure too long\");\n  SETARG_sBx(*jmp, offset);\n}\n\n\n/*\n** returns current `pc' and marks it as a jump target (to avoid wrong\n** optimizations with consecutive instructions not in the same basic block).\n*/\nint luaK_getlabel (FuncState *fs) {\n  fs->lasttarget = fs->pc;\n  return fs->pc;\n}\n\n\nstatic int getjump (FuncState *fs, int pc) {\n  int offset = GETARG_sBx(fs->f->code[pc]);\n  if (offset == NO_JUMP)  /* point to itself represents end of list */\n    return NO_JUMP;  /* end of list */\n  else\n    return (pc+1)+offset;  /* turn offset into absolute position */\n}\n\n\nstatic Instruction *getjumpcontrol (FuncState *fs, int pc) {\n  Instruction *pi = &fs->f->code[pc];\n  if (pc >= 1 && testTMode(GET_OPCODE(*(pi-1))))\n    return pi-1;\n  else\n    return pi;\n}\n\n\n/*\n** check whether list has any jump that do not produce a value\n** (or produce an inverted value)\n*/\nstatic int need_value (FuncState *fs, int list) {\n  for (; list != NO_JUMP; list = getjump(fs, list)) {\n    Instruction i = *getjumpcontrol(fs, list);\n    if 
(GET_OPCODE(i) != OP_TESTSET) return 1;\n  }\n  return 0;  /* not found */\n}\n\n\nstatic int patchtestreg (FuncState *fs, int node, int reg) {\n  Instruction *i = getjumpcontrol(fs, node);\n  if (GET_OPCODE(*i) != OP_TESTSET)\n    return 0;  /* cannot patch other instructions */\n  if (reg != NO_REG && reg != GETARG_B(*i))\n    SETARG_A(*i, reg);\n  else  /* no register to put value or register already has the value */\n    *i = CREATE_ABC(OP_TEST, GETARG_B(*i), 0, GETARG_C(*i));\n\n  return 1;\n}\n\n\nstatic void removevalues (FuncState *fs, int list) {\n  for (; list != NO_JUMP; list = getjump(fs, list))\n      patchtestreg(fs, list, NO_REG);\n}\n\n\nstatic void patchlistaux (FuncState *fs, int list, int vtarget, int reg,\n                          int dtarget) {\n  while (list != NO_JUMP) {\n    int next = getjump(fs, list);\n    if (patchtestreg(fs, list, reg))\n      fixjump(fs, list, vtarget);\n    else\n      fixjump(fs, list, dtarget);  /* jump to default target */\n    list = next;\n  }\n}\n\n\nstatic void dischargejpc (FuncState *fs) {\n  patchlistaux(fs, fs->jpc, fs->pc, NO_REG, fs->pc);\n  fs->jpc = NO_JUMP;\n}\n\n\nvoid luaK_patchlist (FuncState *fs, int list, int target) {\n  if (target == fs->pc)\n    luaK_patchtohere(fs, list);\n  else {\n    lua_assert(target < fs->pc);\n    patchlistaux(fs, list, target, NO_REG, target);\n  }\n}\n\n\nvoid luaK_patchtohere (FuncState *fs, int list) {\n  luaK_getlabel(fs);\n  luaK_concat(fs, &fs->jpc, list);\n}\n\n\nvoid luaK_concat (FuncState *fs, int *l1, int l2) {\n  if (l2 == NO_JUMP) return;\n  else if (*l1 == NO_JUMP)\n    *l1 = l2;\n  else {\n    int list = *l1;\n    int next;\n    while ((next = getjump(fs, list)) != NO_JUMP)  /* find last element */\n      list = next;\n    fixjump(fs, list, l2);\n  }\n}\n\n\nvoid luaK_checkstack (FuncState *fs, int n) {\n  int newstack = fs->freereg + n;\n  if (newstack > fs->f->maxstacksize) {\n    if (newstack >= MAXSTACK)\n      luaX_syntaxerror(fs->ls, \"function or 
expression too complex\");\n    fs->f->maxstacksize = cast_byte(newstack);\n  }\n}\n\n\nvoid luaK_reserveregs (FuncState *fs, int n) {\n  luaK_checkstack(fs, n);\n  fs->freereg += n;\n}\n\n\nstatic void freereg (FuncState *fs, int reg) {\n  if (!ISK(reg) && reg >= fs->nactvar) {\n    fs->freereg--;\n    lua_assert(reg == fs->freereg);\n  }\n}\n\n\nstatic void freeexp (FuncState *fs, expdesc *e) {\n  if (e->k == VNONRELOC)\n    freereg(fs, e->u.s.info);\n}\n\n\nstatic int addk (FuncState *fs, TValue *k, TValue *v) {\n  lua_State *L = fs->L;\n  TValue *idx = luaH_set(L, fs->h, k);\n  Proto *f = fs->f;\n  int oldsize = f->sizek;\n  if (ttisnumber(idx)) {\n    lua_assert(luaO_rawequalObj(&fs->f->k[cast_int(nvalue(idx))], v));\n    return cast_int(nvalue(idx));\n  }\n  else {  /* constant not found; create a new entry */\n    setnvalue(idx, cast_num(fs->nk));\n    luaM_growvector(L, f->k, fs->nk, f->sizek, TValue,\n                    MAXARG_Bx, \"constant table overflow\");\n    while (oldsize < f->sizek) setnilvalue(&f->k[oldsize++]);\n    setobj(L, &f->k[fs->nk], v);\n    luaC_barrier(L, f, v);\n    return fs->nk++;\n  }\n}\n\n\nint luaK_stringK (FuncState *fs, TString *s) {\n  TValue o;\n  setsvalue(fs->L, &o, s);\n  return addk(fs, &o, &o);\n}\n\n\nint luaK_numberK (FuncState *fs, lua_Number r) {\n  TValue o;\n  setnvalue(&o, r);\n  return addk(fs, &o, &o);\n}\n\n\nstatic int boolK (FuncState *fs, int b) {\n  TValue o;\n  setbvalue(&o, b);\n  return addk(fs, &o, &o);\n}\n\n\nstatic int nilK (FuncState *fs) {\n  TValue k, v;\n  setnilvalue(&v);\n  /* cannot use nil as key; instead use table itself to represent nil */\n  sethvalue(fs->L, &k, fs->h);\n  return addk(fs, &k, &v);\n}\n\n\nvoid luaK_setreturns (FuncState *fs, expdesc *e, int nresults) {\n  if (e->k == VCALL) {  /* expression is an open function call? 
*/\n    SETARG_C(getcode(fs, e), nresults+1);\n  }\n  else if (e->k == VVARARG) {\n    SETARG_B(getcode(fs, e), nresults+1);\n    SETARG_A(getcode(fs, e), fs->freereg);\n    luaK_reserveregs(fs, 1);\n  }\n}\n\n\nvoid luaK_setoneret (FuncState *fs, expdesc *e) {\n  if (e->k == VCALL) {  /* expression is an open function call? */\n    e->k = VNONRELOC;\n    e->u.s.info = GETARG_A(getcode(fs, e));\n  }\n  else if (e->k == VVARARG) {\n    SETARG_B(getcode(fs, e), 2);\n    e->k = VRELOCABLE;  /* can relocate its simple result */\n  }\n}\n\n\nvoid luaK_dischargevars (FuncState *fs, expdesc *e) {\n  switch (e->k) {\n    case VLOCAL: {\n      e->k = VNONRELOC;\n      break;\n    }\n    case VUPVAL: {\n      e->u.s.info = luaK_codeABC(fs, OP_GETUPVAL, 0, e->u.s.info, 0);\n      e->k = VRELOCABLE;\n      break;\n    }\n    case VGLOBAL: {\n      e->u.s.info = luaK_codeABx(fs, OP_GETGLOBAL, 0, e->u.s.info);\n      e->k = VRELOCABLE;\n      break;\n    }\n    case VINDEXED: {\n      freereg(fs, e->u.s.aux);\n      freereg(fs, e->u.s.info);\n      e->u.s.info = luaK_codeABC(fs, OP_GETTABLE, 0, e->u.s.info, e->u.s.aux);\n      e->k = VRELOCABLE;\n      break;\n    }\n    case VVARARG:\n    case VCALL: {\n      luaK_setoneret(fs, e);\n      break;\n    }\n    default: break;  /* there is one value available (somewhere) */\n  }\n}\n\n\nstatic int code_label (FuncState *fs, int A, int b, int jump) {\n  luaK_getlabel(fs);  /* those instructions may be jump targets */\n  return luaK_codeABC(fs, OP_LOADBOOL, A, b, jump);\n}\n\n\nstatic void discharge2reg (FuncState *fs, expdesc *e, int reg) {\n  luaK_dischargevars(fs, e);\n  switch (e->k) {\n    case VNIL: {\n      luaK_nil(fs, reg, 1);\n      break;\n    }\n    case VFALSE:  case VTRUE: {\n      luaK_codeABC(fs, OP_LOADBOOL, reg, e->k == VTRUE, 0);\n      break;\n    }\n    case VK: {\n      luaK_codeABx(fs, OP_LOADK, reg, e->u.s.info);\n      break;\n    }\n    case VKNUM: {\n      luaK_codeABx(fs, OP_LOADK, reg, luaK_numberK(fs, 
e->u.nval));\n      break;\n    }\n    case VRELOCABLE: {\n      Instruction *pc = &getcode(fs, e);\n      SETARG_A(*pc, reg);\n      break;\n    }\n    case VNONRELOC: {\n      if (reg != e->u.s.info)\n        luaK_codeABC(fs, OP_MOVE, reg, e->u.s.info, 0);\n      break;\n    }\n    default: {\n      lua_assert(e->k == VVOID || e->k == VJMP);\n      return;  /* nothing to do... */\n    }\n  }\n  e->u.s.info = reg;\n  e->k = VNONRELOC;\n}\n\n\nstatic void discharge2anyreg (FuncState *fs, expdesc *e) {\n  if (e->k != VNONRELOC) {\n    luaK_reserveregs(fs, 1);\n    discharge2reg(fs, e, fs->freereg-1);\n  }\n}\n\n\nstatic void exp2reg (FuncState *fs, expdesc *e, int reg) {\n  discharge2reg(fs, e, reg);\n  if (e->k == VJMP)\n    luaK_concat(fs, &e->t, e->u.s.info);  /* put this jump in `t' list */\n  if (hasjumps(e)) {\n    int final;  /* position after whole expression */\n    int p_f = NO_JUMP;  /* position of an eventual LOAD false */\n    int p_t = NO_JUMP;  /* position of an eventual LOAD true */\n    if (need_value(fs, e->t) || need_value(fs, e->f)) {\n      int fj = (e->k == VJMP) ? NO_JUMP : luaK_jump(fs);\n      p_f = code_label(fs, reg, 0, 1);\n      p_t = code_label(fs, reg, 1, 0);\n      luaK_patchtohere(fs, fj);\n    }\n    final = luaK_getlabel(fs);\n    patchlistaux(fs, e->f, final, reg, p_f);\n    patchlistaux(fs, e->t, final, reg, p_t);\n  }\n  e->f = e->t = NO_JUMP;\n  e->u.s.info = reg;\n  e->k = VNONRELOC;\n}\n\n\nvoid luaK_exp2nextreg (FuncState *fs, expdesc *e) {\n  luaK_dischargevars(fs, e);\n  freeexp(fs, e);\n  luaK_reserveregs(fs, 1);\n  exp2reg(fs, e, fs->freereg - 1);\n}\n\n\nint luaK_exp2anyreg (FuncState *fs, expdesc *e) {\n  luaK_dischargevars(fs, e);\n  if (e->k == VNONRELOC) {\n    if (!hasjumps(e)) return e->u.s.info;  /* exp is already in a register */\n    if (e->u.s.info >= fs->nactvar) {  /* reg. is not a local? 
*/\n      exp2reg(fs, e, e->u.s.info);  /* put value on it */\n      return e->u.s.info;\n    }\n  }\n  luaK_exp2nextreg(fs, e);  /* default */\n  return e->u.s.info;\n}\n\n\nvoid luaK_exp2val (FuncState *fs, expdesc *e) {\n  if (hasjumps(e))\n    luaK_exp2anyreg(fs, e);\n  else\n    luaK_dischargevars(fs, e);\n}\n\n\nint luaK_exp2RK (FuncState *fs, expdesc *e) {\n  luaK_exp2val(fs, e);\n  switch (e->k) {\n    case VKNUM:\n    case VTRUE:\n    case VFALSE:\n    case VNIL: {\n      if (fs->nk <= MAXINDEXRK) {  /* constant fit in RK operand? */\n        e->u.s.info = (e->k == VNIL)  ? nilK(fs) :\n                      (e->k == VKNUM) ? luaK_numberK(fs, e->u.nval) :\n                                        boolK(fs, (e->k == VTRUE));\n        e->k = VK;\n        return RKASK(e->u.s.info);\n      }\n      else break;\n    }\n    case VK: {\n      if (e->u.s.info <= MAXINDEXRK)  /* constant fit in argC? */\n        return RKASK(e->u.s.info);\n      else break;\n    }\n    default: break;\n  }\n  /* not a constant in the right range: put it in a register */\n  return luaK_exp2anyreg(fs, e);\n}\n\n\nvoid luaK_storevar (FuncState *fs, expdesc *var, expdesc *ex) {\n  switch (var->k) {\n    case VLOCAL: {\n      freeexp(fs, ex);\n      exp2reg(fs, ex, var->u.s.info);\n      return;\n    }\n    case VUPVAL: {\n      int e = luaK_exp2anyreg(fs, ex);\n      luaK_codeABC(fs, OP_SETUPVAL, e, var->u.s.info, 0);\n      break;\n    }\n    case VGLOBAL: {\n      int e = luaK_exp2anyreg(fs, ex);\n      luaK_codeABx(fs, OP_SETGLOBAL, e, var->u.s.info);\n      break;\n    }\n    case VINDEXED: {\n      int e = luaK_exp2RK(fs, ex);\n      luaK_codeABC(fs, OP_SETTABLE, var->u.s.info, var->u.s.aux, e);\n      break;\n    }\n    default: {\n      lua_assert(0);  /* invalid var kind to store */\n      break;\n    }\n  }\n  freeexp(fs, ex);\n}\n\n\nvoid luaK_self (FuncState *fs, expdesc *e, expdesc *key) {\n  int func;\n  luaK_exp2anyreg(fs, e);\n  freeexp(fs, e);\n  func = fs->freereg;\n  
luaK_reserveregs(fs, 2);\n  luaK_codeABC(fs, OP_SELF, func, e->u.s.info, luaK_exp2RK(fs, key));\n  freeexp(fs, key);\n  e->u.s.info = func;\n  e->k = VNONRELOC;\n}\n\n\nstatic void invertjump (FuncState *fs, expdesc *e) {\n  Instruction *pc = getjumpcontrol(fs, e->u.s.info);\n  lua_assert(testTMode(GET_OPCODE(*pc)) && GET_OPCODE(*pc) != OP_TESTSET &&\n                                           GET_OPCODE(*pc) != OP_TEST);\n  SETARG_A(*pc, !(GETARG_A(*pc)));\n}\n\n\nstatic int jumponcond (FuncState *fs, expdesc *e, int cond) {\n  if (e->k == VRELOCABLE) {\n    Instruction ie = getcode(fs, e);\n    if (GET_OPCODE(ie) == OP_NOT) {\n      fs->pc--;  /* remove previous OP_NOT */\n      return condjump(fs, OP_TEST, GETARG_B(ie), 0, !cond);\n    }\n    /* else go through */\n  }\n  discharge2anyreg(fs, e);\n  freeexp(fs, e);\n  return condjump(fs, OP_TESTSET, NO_REG, e->u.s.info, cond);\n}\n\n\nvoid luaK_goiftrue (FuncState *fs, expdesc *e) {\n  int pc;  /* pc of last jump */\n  luaK_dischargevars(fs, e);\n  switch (e->k) {\n    case VK: case VKNUM: case VTRUE: {\n      pc = NO_JUMP;  /* always true; do nothing */\n      break;\n    }\n    case VJMP: {\n      invertjump(fs, e);\n      pc = e->u.s.info;\n      break;\n    }\n    default: {\n      pc = jumponcond(fs, e, 0);\n      break;\n    }\n  }\n  luaK_concat(fs, &e->f, pc);  /* insert last jump in `f' list */\n  luaK_patchtohere(fs, e->t);\n  e->t = NO_JUMP;\n}\n\n\nstatic void luaK_goiffalse (FuncState *fs, expdesc *e) {\n  int pc;  /* pc of last jump */\n  luaK_dischargevars(fs, e);\n  switch (e->k) {\n    case VNIL: case VFALSE: {\n      pc = NO_JUMP;  /* always false; do nothing */\n      break;\n    }\n    case VJMP: {\n      pc = e->u.s.info;\n      break;\n    }\n    default: {\n      pc = jumponcond(fs, e, 1);\n      break;\n    }\n  }\n  luaK_concat(fs, &e->t, pc);  /* insert last jump in `t' list */\n  luaK_patchtohere(fs, e->f);\n  e->f = NO_JUMP;\n}\n\n\nstatic void codenot (FuncState *fs, expdesc *e) {\n  
luaK_dischargevars(fs, e);\n  switch (e->k) {\n    case VNIL: case VFALSE: {\n      e->k = VTRUE;\n      break;\n    }\n    case VK: case VKNUM: case VTRUE: {\n      e->k = VFALSE;\n      break;\n    }\n    case VJMP: {\n      invertjump(fs, e);\n      break;\n    }\n    case VRELOCABLE:\n    case VNONRELOC: {\n      discharge2anyreg(fs, e);\n      freeexp(fs, e);\n      e->u.s.info = luaK_codeABC(fs, OP_NOT, 0, e->u.s.info, 0);\n      e->k = VRELOCABLE;\n      break;\n    }\n    default: {\n      lua_assert(0);  /* cannot happen */\n      break;\n    }\n  }\n  /* interchange true and false lists */\n  { int temp = e->f; e->f = e->t; e->t = temp; }\n  removevalues(fs, e->f);\n  removevalues(fs, e->t);\n}\n\n\nvoid luaK_indexed (FuncState *fs, expdesc *t, expdesc *k) {\n  t->u.s.aux = luaK_exp2RK(fs, k);\n  t->k = VINDEXED;\n}\n\n\nstatic int constfolding (OpCode op, expdesc *e1, expdesc *e2) {\n  lua_Number v1, v2, r;\n  if (!isnumeral(e1) || !isnumeral(e2)) return 0;\n  v1 = e1->u.nval;\n  v2 = e2->u.nval;\n  switch (op) {\n    case OP_ADD: r = luai_numadd(v1, v2); break;\n    case OP_SUB: r = luai_numsub(v1, v2); break;\n    case OP_MUL: r = luai_nummul(v1, v2); break;\n    case OP_DIV:\n      if (v2 == 0) return 0;  /* do not attempt to divide by 0 */\n      r = luai_numdiv(v1, v2); break;\n    case OP_MOD:\n      if (v2 == 0) return 0;  /* do not attempt to divide by 0 */\n      r = luai_nummod(v1, v2); break;\n    case OP_POW: r = luai_numpow(v1, v2); break;\n    case OP_UNM: r = luai_numunm(v1); break;\n    case OP_LEN: return 0;  /* no constant folding for 'len' */\n    default: lua_assert(0); r = 0; break;\n  }\n  if (luai_numisnan(r)) return 0;  /* do not attempt to produce NaN */\n  e1->u.nval = r;\n  return 1;\n}\n\n\nstatic void codearith (FuncState *fs, OpCode op, expdesc *e1, expdesc *e2) {\n  if (constfolding(op, e1, e2))\n    return;\n  else {\n    int o2 = (op != OP_UNM && op != OP_LEN) ? 
luaK_exp2RK(fs, e2) : 0;\n    int o1 = luaK_exp2RK(fs, e1);\n    if (o1 > o2) {\n      freeexp(fs, e1);\n      freeexp(fs, e2);\n    }\n    else {\n      freeexp(fs, e2);\n      freeexp(fs, e1);\n    }\n    e1->u.s.info = luaK_codeABC(fs, op, 0, o1, o2);\n    e1->k = VRELOCABLE;\n  }\n}\n\n\nstatic void codecomp (FuncState *fs, OpCode op, int cond, expdesc *e1,\n                                                          expdesc *e2) {\n  int o1 = luaK_exp2RK(fs, e1);\n  int o2 = luaK_exp2RK(fs, e2);\n  freeexp(fs, e2);\n  freeexp(fs, e1);\n  if (cond == 0 && op != OP_EQ) {\n    int temp;  /* exchange args to replace by `<' or `<=' */\n    temp = o1; o1 = o2; o2 = temp;  /* o1 <==> o2 */\n    cond = 1;\n  }\n  e1->u.s.info = condjump(fs, op, cond, o1, o2);\n  e1->k = VJMP;\n}\n\n\nvoid luaK_prefix (FuncState *fs, UnOpr op, expdesc *e) {\n  expdesc e2;\n  e2.t = e2.f = NO_JUMP; e2.k = VKNUM; e2.u.nval = 0;\n  switch (op) {\n    case OPR_MINUS: {\n      if (!isnumeral(e))\n        luaK_exp2anyreg(fs, e);  /* cannot operate on non-numeric constants */\n      codearith(fs, OP_UNM, e, &e2);\n      break;\n    }\n    case OPR_NOT: codenot(fs, e); break;\n    case OPR_LEN: {\n      luaK_exp2anyreg(fs, e);  /* cannot operate on constants */\n      codearith(fs, OP_LEN, e, &e2);\n      break;\n    }\n    default: lua_assert(0);\n  }\n}\n\n\nvoid luaK_infix (FuncState *fs, BinOpr op, expdesc *v) {\n  switch (op) {\n    case OPR_AND: {\n      luaK_goiftrue(fs, v);\n      break;\n    }\n    case OPR_OR: {\n      luaK_goiffalse(fs, v);\n      break;\n    }\n    case OPR_CONCAT: {\n      luaK_exp2nextreg(fs, v);  /* operand must be on the `stack' */\n      break;\n    }\n    case OPR_ADD: case OPR_SUB: case OPR_MUL: case OPR_DIV:\n    case OPR_MOD: case OPR_POW: {\n      if (!isnumeral(v)) luaK_exp2RK(fs, v);\n      break;\n    }\n    default: {\n      luaK_exp2RK(fs, v);\n      break;\n    }\n  }\n}\n\n\nvoid luaK_posfix (FuncState *fs, BinOpr op, expdesc *e1, expdesc *e2) {\n  
switch (op) {\n    case OPR_AND: {\n      lua_assert(e1->t == NO_JUMP);  /* list must be closed */\n      luaK_dischargevars(fs, e2);\n      luaK_concat(fs, &e2->f, e1->f);\n      *e1 = *e2;\n      break;\n    }\n    case OPR_OR: {\n      lua_assert(e1->f == NO_JUMP);  /* list must be closed */\n      luaK_dischargevars(fs, e2);\n      luaK_concat(fs, &e2->t, e1->t);\n      *e1 = *e2;\n      break;\n    }\n    case OPR_CONCAT: {\n      luaK_exp2val(fs, e2);\n      if (e2->k == VRELOCABLE && GET_OPCODE(getcode(fs, e2)) == OP_CONCAT) {\n        lua_assert(e1->u.s.info == GETARG_B(getcode(fs, e2))-1);\n        freeexp(fs, e1);\n        SETARG_B(getcode(fs, e2), e1->u.s.info);\n        e1->k = VRELOCABLE; e1->u.s.info = e2->u.s.info;\n      }\n      else {\n        luaK_exp2nextreg(fs, e2);  /* operand must be on the 'stack' */\n        codearith(fs, OP_CONCAT, e1, e2);\n      }\n      break;\n    }\n    case OPR_ADD: codearith(fs, OP_ADD, e1, e2); break;\n    case OPR_SUB: codearith(fs, OP_SUB, e1, e2); break;\n    case OPR_MUL: codearith(fs, OP_MUL, e1, e2); break;\n    case OPR_DIV: codearith(fs, OP_DIV, e1, e2); break;\n    case OPR_MOD: codearith(fs, OP_MOD, e1, e2); break;\n    case OPR_POW: codearith(fs, OP_POW, e1, e2); break;\n    case OPR_EQ: codecomp(fs, OP_EQ, 1, e1, e2); break;\n    case OPR_NE: codecomp(fs, OP_EQ, 0, e1, e2); break;\n    case OPR_LT: codecomp(fs, OP_LT, 1, e1, e2); break;\n    case OPR_LE: codecomp(fs, OP_LE, 1, e1, e2); break;\n    case OPR_GT: codecomp(fs, OP_LT, 0, e1, e2); break;\n    case OPR_GE: codecomp(fs, OP_LE, 0, e1, e2); break;\n    default: lua_assert(0);\n  }\n}\n\n\nvoid luaK_fixline (FuncState *fs, int line) {\n  fs->f->lineinfo[fs->pc - 1] = line;\n}\n\n\nstatic int luaK_code (FuncState *fs, Instruction i, int line) {\n  Proto *f = fs->f;\n  dischargejpc(fs);  /* `pc' will change */\n  /* put new instruction in code array */\n  luaM_growvector(fs->L, f->code, fs->pc, f->sizecode, Instruction,\n                  MAX_INT, 
\"code size overflow\");\n  f->code[fs->pc] = i;\n  /* save corresponding line information */\n  luaM_growvector(fs->L, f->lineinfo, fs->pc, f->sizelineinfo, int,\n                  MAX_INT, \"code size overflow\");\n  f->lineinfo[fs->pc] = line;\n  return fs->pc++;\n}\n\n\nint luaK_codeABC (FuncState *fs, OpCode o, int a, int b, int c) {\n  lua_assert(getOpMode(o) == iABC);\n  lua_assert(getBMode(o) != OpArgN || b == 0);\n  lua_assert(getCMode(o) != OpArgN || c == 0);\n  return luaK_code(fs, CREATE_ABC(o, a, b, c), fs->ls->lastline);\n}\n\n\nint luaK_codeABx (FuncState *fs, OpCode o, int a, unsigned int bc) {\n  lua_assert(getOpMode(o) == iABx || getOpMode(o) == iAsBx);\n  lua_assert(getCMode(o) == OpArgN);\n  return luaK_code(fs, CREATE_ABx(o, a, bc), fs->ls->lastline);\n}\n\n\nvoid luaK_setlist (FuncState *fs, int base, int nelems, int tostore) {\n  int c =  (nelems - 1)/LFIELDS_PER_FLUSH + 1;\n  int b = (tostore == LUA_MULTRET) ? 0 : tostore;\n  lua_assert(tostore != 0);\n  if (c <= MAXARG_C)\n    luaK_codeABC(fs, OP_SETLIST, base, b, c);\n  else {\n    luaK_codeABC(fs, OP_SETLIST, base, b, 0);\n    luaK_code(fs, cast(Instruction, c), fs->ls->lastline);\n  }\n  fs->freereg = base + 1;  /* free registers with list values */\n}\n\n"
  },
  {
    "path": "deps/lua/src/lcode.h",
    "content": "/*\n** $Id: lcode.h,v 1.48.1.1 2007/12/27 13:02:25 roberto Exp $\n** Code generator for Lua\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lcode_h\n#define lcode_h\n\n#include \"llex.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lparser.h\"\n\n\n/*\n** Marks the end of a patch list. It is an invalid value both as an absolute\n** address, and as a list link (would link an element to itself).\n*/\n#define NO_JUMP (-1)\n\n\n/*\n** grep \"ORDER OPR\" if you change these enums\n*/\ntypedef enum BinOpr {\n  OPR_ADD, OPR_SUB, OPR_MUL, OPR_DIV, OPR_MOD, OPR_POW,\n  OPR_CONCAT,\n  OPR_NE, OPR_EQ,\n  OPR_LT, OPR_LE, OPR_GT, OPR_GE,\n  OPR_AND, OPR_OR,\n  OPR_NOBINOPR\n} BinOpr;\n\n\ntypedef enum UnOpr { OPR_MINUS, OPR_NOT, OPR_LEN, OPR_NOUNOPR } UnOpr;\n\n\n#define getcode(fs,e)\t((fs)->f->code[(e)->u.s.info])\n\n#define luaK_codeAsBx(fs,o,A,sBx)\tluaK_codeABx(fs,o,A,(sBx)+MAXARG_sBx)\n\n#define luaK_setmultret(fs,e)\tluaK_setreturns(fs, e, LUA_MULTRET)\n\nLUAI_FUNC int luaK_codeABx (FuncState *fs, OpCode o, int A, unsigned int Bx);\nLUAI_FUNC int luaK_codeABC (FuncState *fs, OpCode o, int A, int B, int C);\nLUAI_FUNC void luaK_fixline (FuncState *fs, int line);\nLUAI_FUNC void luaK_nil (FuncState *fs, int from, int n);\nLUAI_FUNC void luaK_reserveregs (FuncState *fs, int n);\nLUAI_FUNC void luaK_checkstack (FuncState *fs, int n);\nLUAI_FUNC int luaK_stringK (FuncState *fs, TString *s);\nLUAI_FUNC int luaK_numberK (FuncState *fs, lua_Number r);\nLUAI_FUNC void luaK_dischargevars (FuncState *fs, expdesc *e);\nLUAI_FUNC int luaK_exp2anyreg (FuncState *fs, expdesc *e);\nLUAI_FUNC void luaK_exp2nextreg (FuncState *fs, expdesc *e);\nLUAI_FUNC void luaK_exp2val (FuncState *fs, expdesc *e);\nLUAI_FUNC int luaK_exp2RK (FuncState *fs, expdesc *e);\nLUAI_FUNC void luaK_self (FuncState *fs, expdesc *e, expdesc *key);\nLUAI_FUNC void luaK_indexed (FuncState *fs, expdesc *t, expdesc *k);\nLUAI_FUNC void luaK_goiftrue (FuncState *fs, expdesc 
*e);\nLUAI_FUNC void luaK_storevar (FuncState *fs, expdesc *var, expdesc *e);\nLUAI_FUNC void luaK_setreturns (FuncState *fs, expdesc *e, int nresults);\nLUAI_FUNC void luaK_setoneret (FuncState *fs, expdesc *e);\nLUAI_FUNC int luaK_jump (FuncState *fs);\nLUAI_FUNC void luaK_ret (FuncState *fs, int first, int nret);\nLUAI_FUNC void luaK_patchlist (FuncState *fs, int list, int target);\nLUAI_FUNC void luaK_patchtohere (FuncState *fs, int list);\nLUAI_FUNC void luaK_concat (FuncState *fs, int *l1, int l2);\nLUAI_FUNC int luaK_getlabel (FuncState *fs);\nLUAI_FUNC void luaK_prefix (FuncState *fs, UnOpr op, expdesc *v);\nLUAI_FUNC void luaK_infix (FuncState *fs, BinOpr op, expdesc *v);\nLUAI_FUNC void luaK_posfix (FuncState *fs, BinOpr op, expdesc *v1, expdesc *v2);\nLUAI_FUNC void luaK_setlist (FuncState *fs, int base, int nelems, int tostore);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/ldblib.c",
    "content": "/*\n** $Id: ldblib.c,v 1.104.1.4 2009/08/04 18:50:18 roberto Exp $\n** Interface from Lua to its debug API\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define ldblib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n\nstatic int db_getregistry (lua_State *L) {\n  lua_pushvalue(L, LUA_REGISTRYINDEX);\n  return 1;\n}\n\n\nstatic int db_getmetatable (lua_State *L) {\n  luaL_checkany(L, 1);\n  if (!lua_getmetatable(L, 1)) {\n    lua_pushnil(L);  /* no metatable */\n  }\n  return 1;\n}\n\n\nstatic int db_setmetatable (lua_State *L) {\n  int t = lua_type(L, 2);\n  luaL_argcheck(L, t == LUA_TNIL || t == LUA_TTABLE, 2,\n                    \"nil or table expected\");\n  lua_settop(L, 2);\n  lua_pushboolean(L, lua_setmetatable(L, 1));\n  return 1;\n}\n\n\nstatic int db_getfenv (lua_State *L) {\n  luaL_checkany(L, 1);\n  lua_getfenv(L, 1);\n  return 1;\n}\n\n\nstatic int db_setfenv (lua_State *L) {\n  luaL_checktype(L, 2, LUA_TTABLE);\n  lua_settop(L, 2);\n  if (lua_setfenv(L, 1) == 0)\n    luaL_error(L, LUA_QL(\"setfenv\")\n                  \" cannot change environment of given object\");\n  return 1;\n}\n\n\nstatic void settabss (lua_State *L, const char *i, const char *v) {\n  lua_pushstring(L, v);\n  lua_setfield(L, -2, i);\n}\n\n\nstatic void settabsi (lua_State *L, const char *i, int v) {\n  lua_pushinteger(L, v);\n  lua_setfield(L, -2, i);\n}\n\n\nstatic lua_State *getthread (lua_State *L, int *arg) {\n  if (lua_isthread(L, 1)) {\n    *arg = 1;\n    return lua_tothread(L, 1);\n  }\n  else {\n    *arg = 0;\n    return L;\n  }\n}\n\n\nstatic void treatstackoption (lua_State *L, lua_State *L1, const char *fname) {\n  if (L == L1) {\n    lua_pushvalue(L, -2);\n    lua_remove(L, -3);\n  }\n  else\n    lua_xmove(L1, L, 1);\n  lua_setfield(L, -2, fname);\n}\n\n\nstatic int db_getinfo (lua_State *L) {\n  lua_Debug ar;\n  int arg;\n  lua_State *L1 = 
getthread(L, &arg);\n  const char *options = luaL_optstring(L, arg+2, \"flnSu\");\n  if (lua_isnumber(L, arg+1)) {\n    if (!lua_getstack(L1, (int)lua_tointeger(L, arg+1), &ar)) {\n      lua_pushnil(L);  /* level out of range */\n      return 1;\n    }\n  }\n  else if (lua_isfunction(L, arg+1)) {\n    lua_pushfstring(L, \">%s\", options);\n    options = lua_tostring(L, -1);\n    lua_pushvalue(L, arg+1);\n    lua_xmove(L, L1, 1);\n  }\n  else\n    return luaL_argerror(L, arg+1, \"function or level expected\");\n  if (!lua_getinfo(L1, options, &ar))\n    return luaL_argerror(L, arg+2, \"invalid option\");\n  lua_createtable(L, 0, 2);\n  if (strchr(options, 'S')) {\n    settabss(L, \"source\", ar.source);\n    settabss(L, \"short_src\", ar.short_src);\n    settabsi(L, \"linedefined\", ar.linedefined);\n    settabsi(L, \"lastlinedefined\", ar.lastlinedefined);\n    settabss(L, \"what\", ar.what);\n  }\n  if (strchr(options, 'l'))\n    settabsi(L, \"currentline\", ar.currentline);\n  if (strchr(options, 'u'))\n    settabsi(L, \"nups\", ar.nups);\n  if (strchr(options, 'n')) {\n    settabss(L, \"name\", ar.name);\n    settabss(L, \"namewhat\", ar.namewhat);\n  }\n  if (strchr(options, 'L'))\n    treatstackoption(L, L1, \"activelines\");\n  if (strchr(options, 'f'))\n    treatstackoption(L, L1, \"func\");\n  return 1;  /* return table */\n}\n    \n\nstatic int db_getlocal (lua_State *L) {\n  int arg;\n  lua_State *L1 = getthread(L, &arg);\n  lua_Debug ar;\n  const char *name;\n  if (!lua_getstack(L1, luaL_checkint(L, arg+1), &ar))  /* out of range? 
*/\n    return luaL_argerror(L, arg+1, \"level out of range\");\n  name = lua_getlocal(L1, &ar, luaL_checkint(L, arg+2));\n  if (name) {\n    lua_xmove(L1, L, 1);\n    lua_pushstring(L, name);\n    lua_pushvalue(L, -2);\n    return 2;\n  }\n  else {\n    lua_pushnil(L);\n    return 1;\n  }\n}\n\n\nstatic int db_setlocal (lua_State *L) {\n  int arg;\n  lua_State *L1 = getthread(L, &arg);\n  lua_Debug ar;\n  if (!lua_getstack(L1, luaL_checkint(L, arg+1), &ar))  /* out of range? */\n    return luaL_argerror(L, arg+1, \"level out of range\");\n  luaL_checkany(L, arg+3);\n  lua_settop(L, arg+3);\n  lua_xmove(L, L1, 1);\n  lua_pushstring(L, lua_setlocal(L1, &ar, luaL_checkint(L, arg+2)));\n  return 1;\n}\n\n\nstatic int auxupvalue (lua_State *L, int get) {\n  const char *name;\n  int n = luaL_checkint(L, 2);\n  luaL_checktype(L, 1, LUA_TFUNCTION);\n  if (lua_iscfunction(L, 1)) return 0;  /* cannot touch C upvalues from Lua */\n  name = get ? lua_getupvalue(L, 1, n) : lua_setupvalue(L, 1, n);\n  if (name == NULL) return 0;\n  lua_pushstring(L, name);\n  lua_insert(L, -(get+1));\n  return get + 1;\n}\n\n\nstatic int db_getupvalue (lua_State *L) {\n  return auxupvalue(L, 1);\n}\n\n\nstatic int db_setupvalue (lua_State *L) {\n  luaL_checkany(L, 3);\n  return auxupvalue(L, 0);\n}\n\n\n\nstatic const char KEY_HOOK = 'h';\n\n\nstatic void hookf (lua_State *L, lua_Debug *ar) {\n  static const char *const hooknames[] =\n    {\"call\", \"return\", \"line\", \"count\", \"tail return\"};\n  lua_pushlightuserdata(L, (void *)&KEY_HOOK);\n  lua_rawget(L, LUA_REGISTRYINDEX);\n  lua_pushlightuserdata(L, L);\n  lua_rawget(L, -2);\n  if (lua_isfunction(L, -1)) {\n    lua_pushstring(L, hooknames[(int)ar->event]);\n    if (ar->currentline >= 0)\n      lua_pushinteger(L, ar->currentline);\n    else lua_pushnil(L);\n    lua_assert(lua_getinfo(L, \"lS\", ar));\n    lua_call(L, 2, 0);\n  }\n}\n\n\nstatic int makemask (const char *smask, int count) {\n  int mask = 0;\n  if (strchr(smask, 'c')) 
mask |= LUA_MASKCALL;\n  if (strchr(smask, 'r')) mask |= LUA_MASKRET;\n  if (strchr(smask, 'l')) mask |= LUA_MASKLINE;\n  if (count > 0) mask |= LUA_MASKCOUNT;\n  return mask;\n}\n\n\nstatic char *unmakemask (int mask, char *smask) {\n  int i = 0;\n  if (mask & LUA_MASKCALL) smask[i++] = 'c';\n  if (mask & LUA_MASKRET) smask[i++] = 'r';\n  if (mask & LUA_MASKLINE) smask[i++] = 'l';\n  smask[i] = '\\0';\n  return smask;\n}\n\n\nstatic void gethooktable (lua_State *L) {\n  lua_pushlightuserdata(L, (void *)&KEY_HOOK);\n  lua_rawget(L, LUA_REGISTRYINDEX);\n  if (!lua_istable(L, -1)) {\n    lua_pop(L, 1);\n    lua_createtable(L, 0, 1);\n    lua_pushlightuserdata(L, (void *)&KEY_HOOK);\n    lua_pushvalue(L, -2);\n    lua_rawset(L, LUA_REGISTRYINDEX);\n  }\n}\n\n\nstatic int db_sethook (lua_State *L) {\n  int arg, mask, count;\n  lua_Hook func;\n  lua_State *L1 = getthread(L, &arg);\n  if (lua_isnoneornil(L, arg+1)) {\n    lua_settop(L, arg+1);\n    func = NULL; mask = 0; count = 0;  /* turn off hooks */\n  }\n  else {\n    const char *smask = luaL_checkstring(L, arg+2);\n    luaL_checktype(L, arg+1, LUA_TFUNCTION);\n    count = luaL_optint(L, arg+3, 0);\n    func = hookf; mask = makemask(smask, count);\n  }\n  gethooktable(L);\n  lua_pushlightuserdata(L, L1);\n  lua_pushvalue(L, arg+1);\n  lua_rawset(L, -3);  /* set new hook */\n  lua_pop(L, 1);  /* remove hook table */\n  lua_sethook(L1, func, mask, count);  /* set hooks */\n  return 0;\n}\n\n\nstatic int db_gethook (lua_State *L) {\n  int arg;\n  lua_State *L1 = getthread(L, &arg);\n  char buff[5];\n  int mask = lua_gethookmask(L1);\n  lua_Hook hook = lua_gethook(L1);\n  if (hook != NULL && hook != hookf)  /* external hook? 
*/\n    lua_pushliteral(L, \"external hook\");\n  else {\n    gethooktable(L);\n    lua_pushlightuserdata(L, L1);\n    lua_rawget(L, -2);   /* get hook */\n    lua_remove(L, -2);  /* remove hook table */\n  }\n  lua_pushstring(L, unmakemask(mask, buff));\n  lua_pushinteger(L, lua_gethookcount(L1));\n  return 3;\n}\n\n\nstatic int db_debug (lua_State *L) {\n  for (;;) {\n    char buffer[250];\n    fputs(\"lua_debug> \", stderr);\n    if (fgets(buffer, sizeof(buffer), stdin) == 0 ||\n        strcmp(buffer, \"cont\\n\") == 0)\n      return 0;\n    if (luaL_loadbuffer(L, buffer, strlen(buffer), \"=(debug command)\") ||\n        lua_pcall(L, 0, 0, 0)) {\n      fputs(lua_tostring(L, -1), stderr);\n      fputs(\"\\n\", stderr);\n    }\n    lua_settop(L, 0);  /* remove eventual returns */\n  }\n}\n\n\n#define LEVELS1\t12\t/* size of the first part of the stack */\n#define LEVELS2\t10\t/* size of the second part of the stack */\n\nstatic int db_errorfb (lua_State *L) {\n  int level;\n  int firstpart = 1;  /* still before eventual `...' */\n  int arg;\n  lua_State *L1 = getthread(L, &arg);\n  lua_Debug ar;\n  if (lua_isnumber(L, arg+2)) {\n    level = (int)lua_tointeger(L, arg+2);\n    lua_pop(L, 1);\n  }\n  else\n    level = (L == L1) ? 1 : 0;  /* level 0 may be this own function */\n  if (lua_gettop(L) == arg)\n    lua_pushliteral(L, \"\");\n  else if (!lua_isstring(L, arg+1)) return 1;  /* message is not a string */\n  else lua_pushliteral(L, \"\\n\");\n  lua_pushliteral(L, \"stack traceback:\");\n  while (lua_getstack(L1, level++, &ar)) {\n    if (level > LEVELS1 && firstpart) {\n      /* no more than `LEVELS2' more levels? 
*/\n      if (!lua_getstack(L1, level+LEVELS2, &ar))\n        level--;  /* keep going */\n      else {\n        lua_pushliteral(L, \"\\n\\t...\");  /* too many levels */\n        while (lua_getstack(L1, level+LEVELS2, &ar))  /* find last levels */\n          level++;\n      }\n      firstpart = 0;\n      continue;\n    }\n    lua_pushliteral(L, \"\\n\\t\");\n    lua_getinfo(L1, \"Snl\", &ar);\n    lua_pushfstring(L, \"%s:\", ar.short_src);\n    if (ar.currentline > 0)\n      lua_pushfstring(L, \"%d:\", ar.currentline);\n    if (*ar.namewhat != '\\0')  /* is there a name? */\n        lua_pushfstring(L, \" in function \" LUA_QS, ar.name);\n    else {\n      if (*ar.what == 'm')  /* main? */\n        lua_pushfstring(L, \" in main chunk\");\n      else if (*ar.what == 'C' || *ar.what == 't')\n        lua_pushliteral(L, \" ?\");  /* C function or tail call */\n      else\n        lua_pushfstring(L, \" in function <%s:%d>\",\n                           ar.short_src, ar.linedefined);\n    }\n    lua_concat(L, lua_gettop(L) - arg);\n  }\n  lua_concat(L, lua_gettop(L) - arg);\n  return 1;\n}\n\n\nstatic const luaL_Reg dblib[] = {\n  {\"debug\", db_debug},\n  {\"getfenv\", db_getfenv},\n  {\"gethook\", db_gethook},\n  {\"getinfo\", db_getinfo},\n  {\"getlocal\", db_getlocal},\n  {\"getregistry\", db_getregistry},\n  {\"getmetatable\", db_getmetatable},\n  {\"getupvalue\", db_getupvalue},\n  {\"setfenv\", db_setfenv},\n  {\"sethook\", db_sethook},\n  {\"setlocal\", db_setlocal},\n  {\"setmetatable\", db_setmetatable},\n  {\"setupvalue\", db_setupvalue},\n  {\"traceback\", db_errorfb},\n  {NULL, NULL}\n};\n\n\nLUALIB_API int luaopen_debug (lua_State *L) {\n  luaL_register(L, LUA_DBLIBNAME, dblib);\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/ldebug.c",
    "content": "/*\n** $Id: ldebug.c,v 2.29.1.6 2008/05/08 16:56:26 roberto Exp $\n** Debug Interface\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stdarg.h>\n#include <stddef.h>\n#include <string.h>\n\n\n#define ldebug_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lapi.h\"\n#include \"lcode.h\"\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n#include \"lvm.h\"\n\n\n\nstatic const char *getfuncname (lua_State *L, CallInfo *ci, const char **name);\n\n\nstatic int currentpc (lua_State *L, CallInfo *ci) {\n  if (!isLua(ci)) return -1;  /* function is not a Lua function? */\n  if (ci == L->ci)\n    ci->savedpc = L->savedpc;\n  return pcRel(ci->savedpc, ci_func(ci)->l.p);\n}\n\n\nstatic int currentline (lua_State *L, CallInfo *ci) {\n  int pc = currentpc(L, ci);\n  if (pc < 0)\n    return -1;  /* only active lua functions have current-line information */\n  else\n    return getline(ci_func(ci)->l.p, pc);\n}\n\n\n/*\n** this function can be called asynchronous (e.g. during a signal)\n*/\nLUA_API int lua_sethook (lua_State *L, lua_Hook func, int mask, int count) {\n  if (func == NULL || mask == 0) {  /* turn off hooks? */\n    mask = 0;\n    func = NULL;\n  }\n  L->hook = func;\n  L->basehookcount = count;\n  resethookcount(L);\n  L->hookmask = cast_byte(mask);\n  return 1;\n}\n\n\nLUA_API lua_Hook lua_gethook (lua_State *L) {\n  return L->hook;\n}\n\n\nLUA_API int lua_gethookmask (lua_State *L) {\n  return L->hookmask;\n}\n\n\nLUA_API int lua_gethookcount (lua_State *L) {\n  return L->basehookcount;\n}\n\nLUA_API int lua_getstack (lua_State *L, int level, lua_Debug *ar) {\n  int status;\n  CallInfo *ci;\n  lua_lock(L);\n  for (ci = L->ci; level > 0 && ci > L->base_ci; ci--) {\n    level--;\n    if (f_isLua(ci))  /* Lua function? 
*/\n      level -= ci->tailcalls;  /* skip lost tail calls */\n  }\n  if (level == 0 && ci > L->base_ci) {  /* level found? */\n    status = 1;\n    ar->i_ci = cast_int(ci - L->base_ci);\n  }\n  else if (level < 0) {  /* level is of a lost tail call? */\n    status = 1;\n    ar->i_ci = 0;\n  }\n  else status = 0;  /* no such level */\n  lua_unlock(L);\n  return status;\n}\n\n\nstatic Proto *getluaproto (CallInfo *ci) {\n  return (isLua(ci) ? ci_func(ci)->l.p : NULL);\n}\n\n\nstatic const char *findlocal (lua_State *L, CallInfo *ci, int n) {\n  const char *name;\n  Proto *fp = getluaproto(ci);\n  if (fp && (name = luaF_getlocalname(fp, n, currentpc(L, ci))) != NULL)\n    return name;  /* is a local variable in a Lua function */\n  else {\n    StkId limit = (ci == L->ci) ? L->top : (ci+1)->func;\n    if (limit - ci->base >= n && n > 0)  /* is 'n' inside 'ci' stack? */\n      return \"(*temporary)\";\n    else\n      return NULL;\n  }\n}\n\n\nLUA_API const char *lua_getlocal (lua_State *L, const lua_Debug *ar, int n) {\n  CallInfo *ci = L->base_ci + ar->i_ci;\n  const char *name = findlocal(L, ci, n);\n  lua_lock(L);\n  if (name)\n      luaA_pushobject(L, ci->base + (n - 1));\n  lua_unlock(L);\n  return name;\n}\n\n\nLUA_API const char *lua_setlocal (lua_State *L, const lua_Debug *ar, int n) {\n  CallInfo *ci = L->base_ci + ar->i_ci;\n  const char *name = findlocal(L, ci, n);\n  lua_lock(L);\n  if (name)\n      setobjs2s(L, ci->base + (n - 1), L->top - 1);\n  L->top--;  /* pop value */\n  lua_unlock(L);\n  return name;\n}\n\n\nstatic void funcinfo (lua_Debug *ar, Closure *cl) {\n  if (cl->c.isC) {\n    ar->source = \"=[C]\";\n    ar->linedefined = -1;\n    ar->lastlinedefined = -1;\n    ar->what = \"C\";\n  }\n  else {\n    ar->source = getstr(cl->l.p->source);\n    ar->linedefined = cl->l.p->linedefined;\n    ar->lastlinedefined = cl->l.p->lastlinedefined;\n    ar->what = (ar->linedefined == 0) ? 
\"main\" : \"Lua\";\n  }\n  luaO_chunkid(ar->short_src, ar->source, LUA_IDSIZE);\n}\n\n\nstatic void info_tailcall (lua_Debug *ar) {\n  ar->name = ar->namewhat = \"\";\n  ar->what = \"tail\";\n  ar->lastlinedefined = ar->linedefined = ar->currentline = -1;\n  ar->source = \"=(tail call)\";\n  luaO_chunkid(ar->short_src, ar->source, LUA_IDSIZE);\n  ar->nups = 0;\n}\n\n\nstatic void collectvalidlines (lua_State *L, Closure *f) {\n  if (f == NULL || f->c.isC) {\n    setnilvalue(L->top);\n  }\n  else {\n    Table *t = luaH_new(L, 0, 0);\n    int *lineinfo = f->l.p->lineinfo;\n    int i;\n    for (i=0; i<f->l.p->sizelineinfo; i++)\n      setbvalue(luaH_setnum(L, t, lineinfo[i]), 1);\n    sethvalue(L, L->top, t); \n  }\n  incr_top(L);\n}\n\n\nstatic int auxgetinfo (lua_State *L, const char *what, lua_Debug *ar,\n                    Closure *f, CallInfo *ci) {\n  int status = 1;\n  if (f == NULL) {\n    info_tailcall(ar);\n    return status;\n  }\n  for (; *what; what++) {\n    switch (*what) {\n      case 'S': {\n        funcinfo(ar, f);\n        break;\n      }\n      case 'l': {\n        ar->currentline = (ci) ? currentline(L, ci) : -1;\n        break;\n      }\n      case 'u': {\n        ar->nups = f->c.nupvalues;\n        break;\n      }\n      case 'n': {\n        ar->namewhat = (ci) ? 
getfuncname(L, ci, &ar->name) : NULL;\n        if (ar->namewhat == NULL) {\n          ar->namewhat = \"\";  /* not found */\n          ar->name = NULL;\n        }\n        break;\n      }\n      case 'L':\n      case 'f':  /* handled by lua_getinfo */\n        break;\n      default: status = 0;  /* invalid option */\n    }\n  }\n  return status;\n}\n\n\nLUA_API int lua_getinfo (lua_State *L, const char *what, lua_Debug *ar) {\n  int status;\n  Closure *f = NULL;\n  CallInfo *ci = NULL;\n  lua_lock(L);\n  if (*what == '>') {\n    StkId func = L->top - 1;\n    luai_apicheck(L, ttisfunction(func));\n    what++;  /* skip the '>' */\n    f = clvalue(func);\n    L->top--;  /* pop function */\n  }\n  else if (ar->i_ci != 0) {  /* no tail call? */\n    ci = L->base_ci + ar->i_ci;\n    lua_assert(ttisfunction(ci->func));\n    f = clvalue(ci->func);\n  }\n  status = auxgetinfo(L, what, ar, f, ci);\n  if (strchr(what, 'f')) {\n    if (f == NULL) setnilvalue(L->top);\n    else setclvalue(L, L->top, f);\n    incr_top(L);\n  }\n  if (strchr(what, 'L'))\n    collectvalidlines(L, f);\n  lua_unlock(L);\n  return status;\n}\n\n\n/*\n** {======================================================\n** Symbolic Execution and code checker\n** =======================================================\n*/\n\n#define check(x)\t\tif (!(x)) return 0;\n\n#define checkjump(pt,pc)\tcheck(0 <= pc && pc < pt->sizecode)\n\n#define checkreg(pt,reg)\tcheck((reg) < (pt)->maxstacksize)\n\n\n\nstatic int precheck (const Proto *pt) {\n  check(pt->maxstacksize <= MAXSTACK);\n  check(pt->numparams+(pt->is_vararg & VARARG_HASARG) <= pt->maxstacksize);\n  check(!(pt->is_vararg & VARARG_NEEDSARG) ||\n              (pt->is_vararg & VARARG_HASARG));\n  check(pt->sizeupvalues <= pt->nups);\n  check(pt->sizelineinfo == pt->sizecode || pt->sizelineinfo == 0);\n  check(pt->sizecode > 0 && GET_OPCODE(pt->code[pt->sizecode-1]) == OP_RETURN);\n  return 1;\n}\n\n\n#define 
checkopenop(pt,pc)\tluaG_checkopenop((pt)->code[(pc)+1])\n\nint luaG_checkopenop (Instruction i) {\n  switch (GET_OPCODE(i)) {\n    case OP_CALL:\n    case OP_TAILCALL:\n    case OP_RETURN:\n    case OP_SETLIST: {\n      check(GETARG_B(i) == 0);\n      return 1;\n    }\n    default: return 0;  /* invalid instruction after an open call */\n  }\n}\n\n\nstatic int checkArgMode (const Proto *pt, int r, enum OpArgMask mode) {\n  switch (mode) {\n    case OpArgN: check(r == 0); break;\n    case OpArgU: break;\n    case OpArgR: checkreg(pt, r); break;\n    case OpArgK:\n      check(ISK(r) ? INDEXK(r) < pt->sizek : r < pt->maxstacksize);\n      break;\n  }\n  return 1;\n}\n\n\nstatic Instruction symbexec (const Proto *pt, int lastpc, int reg) {\n  int pc;\n  int last;  /* stores position of last instruction that changed `reg' */\n  last = pt->sizecode-1;  /* points to final return (a `neutral' instruction) */\n  check(precheck(pt));\n  for (pc = 0; pc < lastpc; pc++) {\n    Instruction i = pt->code[pc];\n    OpCode op = GET_OPCODE(i);\n    int a = GETARG_A(i);\n    int b = 0;\n    int c = 0;\n    check(op < NUM_OPCODES);\n    checkreg(pt, a);\n    switch (getOpMode(op)) {\n      case iABC: {\n        b = GETARG_B(i);\n        c = GETARG_C(i);\n        check(checkArgMode(pt, b, getBMode(op)));\n        check(checkArgMode(pt, c, getCMode(op)));\n        break;\n      }\n      case iABx: {\n        b = GETARG_Bx(i);\n        if (getBMode(op) == OpArgK) check(b < pt->sizek);\n        break;\n      }\n      case iAsBx: {\n        b = GETARG_sBx(i);\n        if (getBMode(op) == OpArgR) {\n          int dest = pc+1+b;\n          check(0 <= dest && dest < pt->sizecode);\n          if (dest > 0) {\n            int j;\n            /* check that it does not jump to a setlist count; this\n               is tricky, because the count from a previous setlist may\n               have the same value of an invalid setlist; so, we must\n               go all the way back to the first of them 
(if any) */\n            for (j = 0; j < dest; j++) {\n              Instruction d = pt->code[dest-1-j];\n              if (!(GET_OPCODE(d) == OP_SETLIST && GETARG_C(d) == 0)) break;\n            }\n            /* if 'j' is even, previous value is not a setlist (even if\n               it looks like one) */\n            check((j&1) == 0);\n          }\n        }\n        break;\n      }\n    }\n    if (testAMode(op)) {\n      if (a == reg) last = pc;  /* change register `a' */\n    }\n    if (testTMode(op)) {\n      check(pc+2 < pt->sizecode);  /* check skip */\n      check(GET_OPCODE(pt->code[pc+1]) == OP_JMP);\n    }\n    switch (op) {\n      case OP_LOADBOOL: {\n        if (c == 1) {  /* does it jump? */\n          check(pc+2 < pt->sizecode);  /* check its jump */\n          check(GET_OPCODE(pt->code[pc+1]) != OP_SETLIST ||\n                GETARG_C(pt->code[pc+1]) != 0);\n        }\n        break;\n      }\n      case OP_LOADNIL: {\n        if (a <= reg && reg <= b)\n          last = pc;  /* set registers from `a' to `b' */\n        break;\n      }\n      case OP_GETUPVAL:\n      case OP_SETUPVAL: {\n        check(b < pt->nups);\n        break;\n      }\n      case OP_GETGLOBAL:\n      case OP_SETGLOBAL: {\n        check(ttisstring(&pt->k[b]));\n        break;\n      }\n      case OP_SELF: {\n        checkreg(pt, a+1);\n        if (reg == a+1) last = pc;\n        break;\n      }\n      case OP_CONCAT: {\n        check(b < c);  /* at least two operands */\n        break;\n      }\n      case OP_TFORLOOP: {\n        check(c >= 1);  /* at least one result (control variable) */\n        checkreg(pt, a+2+c);  /* space for results */\n        if (reg >= a+2) last = pc;  /* affect all regs above its base */\n        break;\n      }\n      case OP_FORLOOP:\n      case OP_FORPREP:\n        checkreg(pt, a+3);\n        /* go through */\n      case OP_JMP: {\n        int dest = pc+1+b;\n        /* not full check and jump is forward and do not skip `lastpc'? 
*/\n        if (reg != NO_REG && pc < dest && dest <= lastpc)\n          pc += b;  /* do the jump */\n        break;\n      }\n      case OP_CALL:\n      case OP_TAILCALL: {\n        if (b != 0) {\n          checkreg(pt, a+b-1);\n        }\n        c--;  /* c = num. returns */\n        if (c == LUA_MULTRET) {\n          check(checkopenop(pt, pc));\n        }\n        else if (c != 0)\n          checkreg(pt, a+c-1);\n        if (reg >= a) last = pc;  /* affect all registers above base */\n        break;\n      }\n      case OP_RETURN: {\n        b--;  /* b = num. returns */\n        if (b > 0) checkreg(pt, a+b-1);\n        break;\n      }\n      case OP_SETLIST: {\n        if (b > 0) checkreg(pt, a + b);\n        if (c == 0) {\n          pc++;\n          check(pc < pt->sizecode - 1);\n        }\n        break;\n      }\n      case OP_CLOSURE: {\n        int nup, j;\n        check(b < pt->sizep);\n        nup = pt->p[b]->nups;\n        check(pc + nup < pt->sizecode);\n        for (j = 1; j <= nup; j++) {\n          OpCode op1 = GET_OPCODE(pt->code[pc + j]);\n          check(op1 == OP_GETUPVAL || op1 == OP_MOVE);\n        }\n        if (reg != NO_REG)  /* tracing? 
*/\n          pc += nup;  /* do not 'execute' these pseudo-instructions */\n        break;\n      }\n      case OP_VARARG: {\n        check((pt->is_vararg & VARARG_ISVARARG) &&\n             !(pt->is_vararg & VARARG_NEEDSARG));\n        b--;\n        if (b == LUA_MULTRET) check(checkopenop(pt, pc));\n        checkreg(pt, a+b-1);\n        break;\n      }\n      default: break;\n    }\n  }\n  return pt->code[last];\n}\n\n#undef check\n#undef checkjump\n#undef checkreg\n\n/* }====================================================== */\n\n\nint luaG_checkcode (const Proto *pt) {\n  return (symbexec(pt, pt->sizecode, NO_REG) != 0);\n}\n\n\nstatic const char *kname (Proto *p, int c) {\n  if (ISK(c) && ttisstring(&p->k[INDEXK(c)]))\n    return svalue(&p->k[INDEXK(c)]);\n  else\n    return \"?\";\n}\n\n\nstatic const char *getobjname (lua_State *L, CallInfo *ci, int stackpos,\n                               const char **name) {\n  if (isLua(ci)) {  /* a Lua function? */\n    Proto *p = ci_func(ci)->l.p;\n    int pc = currentpc(L, ci);\n    Instruction i;\n    *name = luaF_getlocalname(p, stackpos+1, pc);\n    if (*name)  /* is a local? */\n      return \"local\";\n    i = symbexec(p, pc, stackpos);  /* try symbolic execution */\n    lua_assert(pc != -1);\n    switch (GET_OPCODE(i)) {\n      case OP_GETGLOBAL: {\n        int g = GETARG_Bx(i);  /* global index */\n        lua_assert(ttisstring(&p->k[g]));\n        *name = svalue(&p->k[g]);\n        return \"global\";\n      }\n      case OP_MOVE: {\n        int a = GETARG_A(i);\n        int b = GETARG_B(i);  /* move from `b' to `a' */\n        if (b < a)\n          return getobjname(L, ci, b, name);  /* get name for `b' */\n        break;\n      }\n      case OP_GETTABLE: {\n        int k = GETARG_C(i);  /* key index */\n        *name = kname(p, k);\n        return \"field\";\n      }\n      case OP_GETUPVAL: {\n        int u = GETARG_B(i);  /* upvalue index */\n        *name = p->upvalues ? 
getstr(p->upvalues[u]) : \"?\";\n        return \"upvalue\";\n      }\n      case OP_SELF: {\n        int k = GETARG_C(i);  /* key index */\n        *name = kname(p, k);\n        return \"method\";\n      }\n      default: break;\n    }\n  }\n  return NULL;  /* no useful name found */\n}\n\n\nstatic const char *getfuncname (lua_State *L, CallInfo *ci, const char **name) {\n  Instruction i;\n  if ((isLua(ci) && ci->tailcalls > 0) || !isLua(ci - 1))\n    return NULL;  /* calling function is not Lua (or is unknown) */\n  ci--;  /* calling function */\n  i = ci_func(ci)->l.p->code[currentpc(L, ci)];\n  if (GET_OPCODE(i) == OP_CALL || GET_OPCODE(i) == OP_TAILCALL ||\n      GET_OPCODE(i) == OP_TFORLOOP)\n    return getobjname(L, ci, GETARG_A(i), name);\n  else\n    return NULL;  /* no useful name can be found */\n}\n\n\n/* only ANSI way to check whether a pointer points to an array */\nstatic int isinstack (CallInfo *ci, const TValue *o) {\n  StkId p;\n  for (p = ci->base; p < ci->top; p++)\n    if (o == p) return 1;\n  return 0;\n}\n\n\nvoid luaG_typeerror (lua_State *L, const TValue *o, const char *op) {\n  const char *name = NULL;\n  const char *t = luaT_typenames[ttype(o)];\n  const char *kind = (isinstack(L->ci, o)) ?\n                         getobjname(L, L->ci, cast_int(o - L->base), &name) :\n                         NULL;\n  if (kind)\n    luaG_runerror(L, \"attempt to %s %s \" LUA_QS \" (a %s value)\",\n                op, kind, name, t);\n  else\n    luaG_runerror(L, \"attempt to %s a %s value\", op, t);\n}\n\n\nvoid luaG_concaterror (lua_State *L, StkId p1, StkId p2) {\n  if (ttisstring(p1) || ttisnumber(p1)) p1 = p2;\n  lua_assert(!ttisstring(p1) && !ttisnumber(p1));\n  luaG_typeerror(L, p1, \"concatenate\");\n}\n\n\nvoid luaG_aritherror (lua_State *L, const TValue *p1, const TValue *p2) {\n  TValue temp;\n  if (luaV_tonumber(p1, &temp) == NULL)\n    p2 = p1;  /* first operand is wrong */\n  luaG_typeerror(L, p2, \"perform arithmetic on\");\n}\n\n\nint 
luaG_ordererror (lua_State *L, const TValue *p1, const TValue *p2) {\n  const char *t1 = luaT_typenames[ttype(p1)];\n  const char *t2 = luaT_typenames[ttype(p2)];\n  if (t1[2] == t2[2])\n    luaG_runerror(L, \"attempt to compare two %s values\", t1);\n  else\n    luaG_runerror(L, \"attempt to compare %s with %s\", t1, t2);\n  return 0;\n}\n\n\nstatic void addinfo (lua_State *L, const char *msg) {\n  CallInfo *ci = L->ci;\n  if (isLua(ci)) {  /* is Lua code? */\n    char buff[LUA_IDSIZE];  /* add file:line information */\n    int line = currentline(L, ci);\n    luaO_chunkid(buff, getstr(getluaproto(ci)->source), LUA_IDSIZE);\n    luaO_pushfstring(L, \"%s:%d: %s\", buff, line, msg);\n  }\n}\n\n\nvoid luaG_errormsg (lua_State *L) {\n  if (L->errfunc != 0) {  /* is there an error handling function? */\n    StkId errfunc = restorestack(L, L->errfunc);\n    if (!ttisfunction(errfunc)) luaD_throw(L, LUA_ERRERR);\n    setobjs2s(L, L->top, L->top - 1);  /* move argument */\n    setobjs2s(L, L->top - 1, errfunc);  /* push function */\n    incr_top(L);\n    luaD_call(L, L->top - 2, 1);  /* call it */\n  }\n  luaD_throw(L, LUA_ERRRUN);\n}\n\n\nvoid luaG_runerror (lua_State *L, const char *fmt, ...) {\n  va_list argp;\n  va_start(argp, fmt);\n  addinfo(L, luaO_pushvfstring(L, fmt, argp));\n  va_end(argp);\n  luaG_errormsg(L);\n}\n\n"
  },
  {
    "path": "deps/lua/src/ldebug.h",
    "content": "/*\n** $Id: ldebug.h,v 2.3.1.1 2007/12/27 13:02:25 roberto Exp $\n** Auxiliary functions from Debug Interface module\n** See Copyright Notice in lua.h\n*/\n\n#ifndef ldebug_h\n#define ldebug_h\n\n\n#include \"lstate.h\"\n\n\n#define pcRel(pc, p)\t(cast(int, (pc) - (p)->code) - 1)\n\n#define getline(f,pc)\t(((f)->lineinfo) ? (f)->lineinfo[pc] : 0)\n\n#define resethookcount(L)\t(L->hookcount = L->basehookcount)\n\n\nLUAI_FUNC void luaG_typeerror (lua_State *L, const TValue *o,\n                                             const char *opname);\nLUAI_FUNC void luaG_concaterror (lua_State *L, StkId p1, StkId p2);\nLUAI_FUNC void luaG_aritherror (lua_State *L, const TValue *p1,\n                                              const TValue *p2);\nLUAI_FUNC int luaG_ordererror (lua_State *L, const TValue *p1,\n                                             const TValue *p2);\nLUAI_FUNC void luaG_runerror (lua_State *L, const char *fmt, ...);\nLUAI_FUNC void luaG_errormsg (lua_State *L);\nLUAI_FUNC int luaG_checkcode (const Proto *pt);\nLUAI_FUNC int luaG_checkopenop (Instruction i);\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/ldo.c",
    "content": "/*\n** $Id: ldo.c,v 2.38.1.4 2012/01/18 02:27:10 roberto Exp $\n** Stack and Call structure of Lua\n** See Copyright Notice in lua.h\n*/\n\n\n#include <setjmp.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define ldo_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lparser.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n#include \"lundump.h\"\n#include \"lvm.h\"\n#include \"lzio.h\"\n\n\n\n\n/*\n** {======================================================\n** Error-recovery functions\n** =======================================================\n*/\n\n\n/* chain list of long jump buffers */\nstruct lua_longjmp {\n  struct lua_longjmp *previous;\n  luai_jmpbuf b;\n  volatile int status;  /* error code */\n};\n\n\nvoid luaD_seterrorobj (lua_State *L, int errcode, StkId oldtop) {\n  switch (errcode) {\n    case LUA_ERRMEM: {\n      setsvalue2s(L, oldtop, luaS_newliteral(L, MEMERRMSG));\n      break;\n    }\n    case LUA_ERRERR: {\n      setsvalue2s(L, oldtop, luaS_newliteral(L, \"error in error handling\"));\n      break;\n    }\n    case LUA_ERRSYNTAX:\n    case LUA_ERRRUN: {\n      setobjs2s(L, oldtop, L->top - 1);  /* error message on current top */\n      break;\n    }\n  }\n  L->top = oldtop + 1;\n}\n\n\nstatic void restore_stack_limit (lua_State *L) {\n  lua_assert(L->stack_last - L->stack == L->stacksize - EXTRA_STACK - 1);\n  if (L->size_ci > LUAI_MAXCALLS) {  /* there was an overflow? */\n    int inuse = cast_int(L->ci - L->base_ci);\n    if (inuse + 1 < LUAI_MAXCALLS)  /* can `undo' overflow? 
*/\n      luaD_reallocCI(L, LUAI_MAXCALLS);\n  }\n}\n\n\nstatic void resetstack (lua_State *L, int status) {\n  L->ci = L->base_ci;\n  L->base = L->ci->base;\n  luaF_close(L, L->base);  /* close eventual pending closures */\n  luaD_seterrorobj(L, status, L->base);\n  L->nCcalls = L->baseCcalls;\n  L->allowhook = 1;\n  restore_stack_limit(L);\n  L->errfunc = 0;\n  L->errorJmp = NULL;\n}\n\n\nvoid luaD_throw (lua_State *L, int errcode) {\n  if (L->errorJmp) {\n    L->errorJmp->status = errcode;\n    LUAI_THROW(L, L->errorJmp);\n  }\n  else {\n    L->status = cast_byte(errcode);\n    if (G(L)->panic) {\n      resetstack(L, errcode);\n      lua_unlock(L);\n      G(L)->panic(L);\n    }\n    exit(EXIT_FAILURE);\n  }\n}\n\n\nint luaD_rawrunprotected (lua_State *L, Pfunc f, void *ud) {\n  struct lua_longjmp lj;\n  lj.status = 0;\n  lj.previous = L->errorJmp;  /* chain new error handler */\n  L->errorJmp = &lj;\n  LUAI_TRY(L, &lj,\n    (*f)(L, ud);\n  );\n  L->errorJmp = lj.previous;  /* restore old error handler */\n  return lj.status;\n}\n\n/* }====================================================== */\n\n\nstatic void correctstack (lua_State *L, TValue *oldstack) {\n  CallInfo *ci;\n  GCObject *up;\n  L->top = (L->top - oldstack) + L->stack;\n  for (up = L->openupval; up != NULL; up = up->gch.next)\n    gco2uv(up)->v = (gco2uv(up)->v - oldstack) + L->stack;\n  for (ci = L->base_ci; ci <= L->ci; ci++) {\n    ci->top = (ci->top - oldstack) + L->stack;\n    ci->base = (ci->base - oldstack) + L->stack;\n    ci->func = (ci->func - oldstack) + L->stack;\n  }\n  L->base = (L->base - oldstack) + L->stack;\n}\n\n\nvoid luaD_reallocstack (lua_State *L, int newsize) {\n  TValue *oldstack = L->stack;\n  int realsize = newsize + 1 + EXTRA_STACK;\n  lua_assert(L->stack_last - L->stack == L->stacksize - EXTRA_STACK - 1);\n  luaM_reallocvector(L, L->stack, L->stacksize, realsize, TValue);\n  L->stacksize = realsize;\n  L->stack_last = L->stack+newsize;\n  correctstack(L, 
oldstack);\n}\n\n\nvoid luaD_reallocCI (lua_State *L, int newsize) {\n  CallInfo *oldci = L->base_ci;\n  luaM_reallocvector(L, L->base_ci, L->size_ci, newsize, CallInfo);\n  L->size_ci = newsize;\n  L->ci = (L->ci - oldci) + L->base_ci;\n  L->end_ci = L->base_ci + L->size_ci - 1;\n}\n\n\nvoid luaD_growstack (lua_State *L, int n) {\n  if (n <= L->stacksize)  /* double size is enough? */\n    luaD_reallocstack(L, 2*L->stacksize);\n  else\n    luaD_reallocstack(L, L->stacksize + n);\n}\n\n\nstatic CallInfo *growCI (lua_State *L) {\n  if (L->size_ci > LUAI_MAXCALLS)  /* overflow while handling overflow? */\n    luaD_throw(L, LUA_ERRERR);\n  else {\n    luaD_reallocCI(L, 2*L->size_ci);\n    if (L->size_ci > LUAI_MAXCALLS)\n      luaG_runerror(L, \"stack overflow\");\n  }\n  return ++L->ci;\n}\n\n\nvoid luaD_callhook (lua_State *L, int event, int line) {\n  lua_Hook hook = L->hook;\n  if (hook && L->allowhook) {\n    ptrdiff_t top = savestack(L, L->top);\n    ptrdiff_t ci_top = savestack(L, L->ci->top);\n    lua_Debug ar;\n    ar.event = event;\n    ar.currentline = line;\n    if (event == LUA_HOOKTAILRET)\n      ar.i_ci = 0;  /* tail call; no debug information about it */\n    else\n      ar.i_ci = cast_int(L->ci - L->base_ci);\n    luaD_checkstack(L, LUA_MINSTACK);  /* ensure minimum stack size */\n    L->ci->top = L->top + LUA_MINSTACK;\n    lua_assert(L->ci->top <= L->stack_last);\n    L->allowhook = 0;  /* cannot call hooks inside a hook */\n    lua_unlock(L);\n    (*hook)(L, &ar);\n    lua_lock(L);\n    lua_assert(!L->allowhook);\n    L->allowhook = 1;\n    L->ci->top = restorestack(L, ci_top);\n    L->top = restorestack(L, top);\n  }\n}\n\n\nstatic StkId adjust_varargs (lua_State *L, Proto *p, int actual) {\n  int i;\n  int nfixargs = p->numparams;\n  Table *htab = NULL;\n  StkId base, fixed;\n  for (; actual < nfixargs; ++actual)\n    setnilvalue(L->top++);\n#if defined(LUA_COMPAT_VARARG)\n  if (p->is_vararg & VARARG_NEEDSARG) { /* compat. with old-style vararg? 
*/\n    int nvar = actual - nfixargs;  /* number of extra arguments */\n    lua_assert(p->is_vararg & VARARG_HASARG);\n    luaC_checkGC(L);\n    luaD_checkstack(L, p->maxstacksize);\n    htab = luaH_new(L, nvar, 1);  /* create `arg' table */\n    for (i=0; i<nvar; i++)  /* put extra arguments into `arg' table */\n      setobj2n(L, luaH_setnum(L, htab, i+1), L->top - nvar + i);\n    /* store counter in field `n' */\n    setnvalue(luaH_setstr(L, htab, luaS_newliteral(L, \"n\")), cast_num(nvar));\n  }\n#endif\n  /* move fixed parameters to final position */\n  fixed = L->top - actual;  /* first fixed argument */\n  base = L->top;  /* final position of first argument */\n  for (i=0; i<nfixargs; i++) {\n    setobjs2s(L, L->top++, fixed+i);\n    setnilvalue(fixed+i);\n  }\n  /* add `arg' parameter */\n  if (htab) {\n    sethvalue(L, L->top++, htab);\n    lua_assert(iswhite(obj2gco(htab)));\n  }\n  return base;\n}\n\n\nstatic StkId tryfuncTM (lua_State *L, StkId func) {\n  const TValue *tm = luaT_gettmbyobj(L, func, TM_CALL);\n  StkId p;\n  ptrdiff_t funcr = savestack(L, func);\n  if (!ttisfunction(tm))\n    luaG_typeerror(L, func, \"call\");\n  /* Open a hole inside the stack at `func' */\n  for (p = L->top; p > func; p--) setobjs2s(L, p, p-1);\n  incr_top(L);\n  func = restorestack(L, funcr);  /* previous call may change stack */\n  setobj2s(L, func, tm);  /* tag method is the new function to be called */\n  return func;\n}\n\n\n\n#define inc_ci(L) \\\n  ((L->ci == L->end_ci) ? growCI(L) : \\\n   (condhardstacktests(luaD_reallocCI(L, L->size_ci)), ++L->ci))\n\n\nint luaD_precall (lua_State *L, StkId func, int nresults) {\n  LClosure *cl;\n  ptrdiff_t funcr;\n  if (!ttisfunction(func)) /* `func' is not a function? */\n    func = tryfuncTM(L, func);  /* check the `function' tag method */\n  funcr = savestack(L, func);\n  cl = &clvalue(func)->l;\n  L->ci->savedpc = L->savedpc;\n  if (!cl->isC) {  /* Lua function? 
prepare its call */\n    CallInfo *ci;\n    StkId st, base;\n    Proto *p = cl->p;\n    luaD_checkstack(L, p->maxstacksize + p->numparams);\n    func = restorestack(L, funcr);\n    if (!p->is_vararg) {  /* no varargs? */\n      base = func + 1;\n      if (L->top > base + p->numparams)\n        L->top = base + p->numparams;\n    }\n    else {  /* vararg function */\n      int nargs = cast_int(L->top - func) - 1;\n      base = adjust_varargs(L, p, nargs);\n      func = restorestack(L, funcr);  /* previous call may change the stack */\n    }\n    ci = inc_ci(L);  /* now `enter' new function */\n    ci->func = func;\n    L->base = ci->base = base;\n    ci->top = L->base + p->maxstacksize;\n    lua_assert(ci->top <= L->stack_last);\n    L->savedpc = p->code;  /* starting point */\n    ci->tailcalls = 0;\n    ci->nresults = nresults;\n    for (st = L->top; st < ci->top; st++)\n      setnilvalue(st);\n    L->top = ci->top;\n    if (L->hookmask & LUA_MASKCALL) {\n      L->savedpc++;  /* hooks assume 'pc' is already incremented */\n      luaD_callhook(L, LUA_HOOKCALL, -1);\n      L->savedpc--;  /* correct 'pc' */\n    }\n    return PCRLUA;\n  }\n  else {  /* if is a C function, call it */\n    CallInfo *ci;\n    int n;\n    luaD_checkstack(L, LUA_MINSTACK);  /* ensure minimum stack size */\n    ci = inc_ci(L);  /* now `enter' new function */\n    ci->func = restorestack(L, funcr);\n    L->base = ci->base = ci->func + 1;\n    ci->top = L->top + LUA_MINSTACK;\n    lua_assert(ci->top <= L->stack_last);\n    ci->nresults = nresults;\n    if (L->hookmask & LUA_MASKCALL)\n      luaD_callhook(L, LUA_HOOKCALL, -1);\n    lua_unlock(L);\n    n = (*curr_func(L)->c.f)(L);  /* do the actual call */\n    lua_lock(L);\n    if (n < 0)  /* yielding? 
*/\n      return PCRYIELD;\n    else {\n      luaD_poscall(L, L->top - n);\n      return PCRC;\n    }\n  }\n}\n\n\nstatic StkId callrethooks (lua_State *L, StkId firstResult) {\n  ptrdiff_t fr = savestack(L, firstResult);  /* next call may change stack */\n  luaD_callhook(L, LUA_HOOKRET, -1);\n  if (f_isLua(L->ci)) {  /* Lua function? */\n    while ((L->hookmask & LUA_MASKRET) && L->ci->tailcalls--) /* tail calls */\n      luaD_callhook(L, LUA_HOOKTAILRET, -1);\n  }\n  return restorestack(L, fr);\n}\n\n\nint luaD_poscall (lua_State *L, StkId firstResult) {\n  StkId res;\n  int wanted, i;\n  CallInfo *ci;\n  if (L->hookmask & LUA_MASKRET)\n    firstResult = callrethooks(L, firstResult);\n  ci = L->ci--;\n  res = ci->func;  /* res == final position of 1st result */\n  wanted = ci->nresults;\n  L->base = (ci - 1)->base;  /* restore base */\n  L->savedpc = (ci - 1)->savedpc;  /* restore savedpc */\n  /* move results to correct place */\n  for (i = wanted; i != 0 && firstResult < L->top; i--)\n    setobjs2s(L, res++, firstResult++);\n  while (i-- > 0)\n    setnilvalue(res++);\n  L->top = res;\n  return (wanted - LUA_MULTRET);  /* 0 iff wanted == LUA_MULTRET */\n}\n\n\n/*\n** Call a function (C or Lua). The function to be called is at *func.\n** The arguments are on the stack, right after the function.\n** When it returns, all the results are on the stack, starting at the original\n** function position.\n*/\nvoid luaD_call (lua_State *L, StkId func, int nResults) {\n  if (++L->nCcalls >= LUAI_MAXCCALLS) {\n    if (L->nCcalls == LUAI_MAXCCALLS)\n      luaG_runerror(L, \"C stack overflow\");\n    else if (L->nCcalls >= (LUAI_MAXCCALLS + (LUAI_MAXCCALLS>>3)))\n      luaD_throw(L, LUA_ERRERR);  /* error while handling stack error */\n  }\n  if (luaD_precall(L, func, nResults) == PCRLUA)  /* is a Lua function? 
*/\n    luaV_execute(L, 1);  /* call it */\n  L->nCcalls--;\n  luaC_checkGC(L);\n}\n\n\nstatic void resume (lua_State *L, void *ud) {\n  StkId firstArg = cast(StkId, ud);\n  CallInfo *ci = L->ci;\n  if (L->status == 0) {  /* start coroutine? */\n    lua_assert(ci == L->base_ci && firstArg > L->base);\n    if (luaD_precall(L, firstArg - 1, LUA_MULTRET) != PCRLUA)\n      return;\n  }\n  else {  /* resuming from previous yield */\n    lua_assert(L->status == LUA_YIELD);\n    L->status = 0;\n    if (!f_isLua(ci)) {  /* `common' yield? */\n      /* finish interrupted execution of `OP_CALL' */\n      lua_assert(GET_OPCODE(*((ci-1)->savedpc - 1)) == OP_CALL ||\n                 GET_OPCODE(*((ci-1)->savedpc - 1)) == OP_TAILCALL);\n      if (luaD_poscall(L, firstArg))  /* complete it... */\n        L->top = L->ci->top;  /* and correct top if not multiple results */\n    }\n    else  /* yielded inside a hook: just continue its execution */\n      L->base = L->ci->base;\n  }\n  luaV_execute(L, cast_int(L->ci - L->base_ci));\n}\n\n\nstatic int resume_error (lua_State *L, const char *msg) {\n  L->top = L->ci->base;\n  setsvalue2s(L, L->top, luaS_new(L, msg));\n  incr_top(L);\n  lua_unlock(L);\n  return LUA_ERRRUN;\n}\n\n\nLUA_API int lua_resume (lua_State *L, int nargs) {\n  int status;\n  lua_lock(L);\n  if (L->status != LUA_YIELD && (L->status != 0 || L->ci != L->base_ci))\n      return resume_error(L, \"cannot resume non-suspended coroutine\");\n  if (L->nCcalls >= LUAI_MAXCCALLS)\n    return resume_error(L, \"C stack overflow\");\n  luai_userstateresume(L, nargs);\n  lua_assert(L->errfunc == 0);\n  L->baseCcalls = ++L->nCcalls;\n  status = luaD_rawrunprotected(L, resume, L->top - nargs);\n  if (status != 0) {  /* error? 
*/\n    L->status = cast_byte(status);  /* mark thread as `dead' */\n    luaD_seterrorobj(L, status, L->top);\n    L->ci->top = L->top;\n  }\n  else {\n    lua_assert(L->nCcalls == L->baseCcalls);\n    status = L->status;\n  }\n  --L->nCcalls;\n  lua_unlock(L);\n  return status;\n}\n\n\nLUA_API int lua_yield (lua_State *L, int nresults) {\n  luai_userstateyield(L, nresults);\n  lua_lock(L);\n  if (L->nCcalls > L->baseCcalls)\n    luaG_runerror(L, \"attempt to yield across metamethod/C-call boundary\");\n  L->base = L->top - nresults;  /* protect stack slots below */\n  L->status = LUA_YIELD;\n  lua_unlock(L);\n  return -1;\n}\n\n\nint luaD_pcall (lua_State *L, Pfunc func, void *u,\n                ptrdiff_t old_top, ptrdiff_t ef) {\n  int status;\n  unsigned short oldnCcalls = L->nCcalls;\n  ptrdiff_t old_ci = saveci(L, L->ci);\n  lu_byte old_allowhooks = L->allowhook;\n  ptrdiff_t old_errfunc = L->errfunc;\n  L->errfunc = ef;\n  status = luaD_rawrunprotected(L, func, u);\n  if (status != 0) {  /* an error occurred? 
*/\n    StkId oldtop = restorestack(L, old_top);\n    luaF_close(L, oldtop);  /* close eventual pending closures */\n    luaD_seterrorobj(L, status, oldtop);\n    L->nCcalls = oldnCcalls;\n    L->ci = restoreci(L, old_ci);\n    L->base = L->ci->base;\n    L->savedpc = L->ci->savedpc;\n    L->allowhook = old_allowhooks;\n    restore_stack_limit(L);\n  }\n  L->errfunc = old_errfunc;\n  return status;\n}\n\n\n\n/*\n** Execute a protected parser.\n*/\nstruct SParser {  /* data to `f_parser' */\n  ZIO *z;\n  Mbuffer buff;  /* buffer to be used by the scanner */\n  const char *name;\n};\n\nstatic void f_parser (lua_State *L, void *ud) {\n  int i;\n  Proto *tf;\n  Closure *cl;\n  struct SParser *p = cast(struct SParser *, ud);\n  luaZ_lookahead(p->z);\n  luaC_checkGC(L);\n  tf = (luaY_parser)(L, p->z, &p->buff, p->name);\n  cl = luaF_newLclosure(L, tf->nups, hvalue(gt(L)));\n  cl->l.p = tf;\n  for (i = 0; i < tf->nups; i++)  /* initialize eventual upvalues */\n    cl->l.upvals[i] = luaF_newupval(L);\n  setclvalue(L, L->top, cl);\n  incr_top(L);\n}\n\n\nint luaD_protectedparser (lua_State *L, ZIO *z, const char *name) {\n  struct SParser p;\n  int status;\n  p.z = z; p.name = name;\n  luaZ_initbuffer(L, &p.buff);\n  status = luaD_pcall(L, f_parser, &p, savestack(L, L->top), L->errfunc);\n  luaZ_freebuffer(L, &p.buff);\n  return status;\n}\n\n\n"
  },
  {
    "path": "deps/lua/src/ldo.h",
    "content": "/*\n** $Id: ldo.h,v 2.7.1.1 2007/12/27 13:02:25 roberto Exp $\n** Stack and Call structure of Lua\n** See Copyright Notice in lua.h\n*/\n\n#ifndef ldo_h\n#define ldo_h\n\n\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lzio.h\"\n\n\n#define luaD_checkstack(L,n)\t\\\n  if ((char *)L->stack_last - (char *)L->top <= (n)*(int)sizeof(TValue)) \\\n    luaD_growstack(L, n); \\\n  else condhardstacktests(luaD_reallocstack(L, L->stacksize - EXTRA_STACK - 1));\n\n\n#define incr_top(L) {luaD_checkstack(L,1); L->top++;}\n\n#define savestack(L,p)\t\t((char *)(p) - (char *)L->stack)\n#define restorestack(L,n)\t((TValue *)((char *)L->stack + (n)))\n\n#define saveci(L,p)\t\t((char *)(p) - (char *)L->base_ci)\n#define restoreci(L,n)\t\t((CallInfo *)((char *)L->base_ci + (n)))\n\n\n/* results from luaD_precall */\n#define PCRLUA\t\t0\t/* initiated a call to a Lua function */\n#define PCRC\t\t1\t/* did a call to a C function */\n#define PCRYIELD\t2\t/* C function yielded */\n\n\n/* type of protected functions, to be run by `runprotected' */\ntypedef void (*Pfunc) (lua_State *L, void *ud);\n\nLUAI_FUNC int luaD_protectedparser (lua_State *L, ZIO *z, const char *name);\nLUAI_FUNC void luaD_callhook (lua_State *L, int event, int line);\nLUAI_FUNC int luaD_precall (lua_State *L, StkId func, int nresults);\nLUAI_FUNC void luaD_call (lua_State *L, StkId func, int nResults);\nLUAI_FUNC int luaD_pcall (lua_State *L, Pfunc func, void *u,\n                                        ptrdiff_t oldtop, ptrdiff_t ef);\nLUAI_FUNC int luaD_poscall (lua_State *L, StkId firstResult);\nLUAI_FUNC void luaD_reallocCI (lua_State *L, int newsize);\nLUAI_FUNC void luaD_reallocstack (lua_State *L, int newsize);\nLUAI_FUNC void luaD_growstack (lua_State *L, int n);\n\nLUAI_FUNC void luaD_throw (lua_State *L, int errcode);\nLUAI_FUNC int luaD_rawrunprotected (lua_State *L, Pfunc f, void *ud);\n\nLUAI_FUNC void luaD_seterrorobj (lua_State *L, int errcode, StkId oldtop);\n\n#endif\n\n"
  },
  {
    "path": "deps/lua/src/ldump.c",
    "content": "/*\n** $Id: ldump.c,v 2.8.1.1 2007/12/27 13:02:25 roberto Exp $\n** save precompiled Lua chunks\n** See Copyright Notice in lua.h\n*/\n\n#include <stddef.h>\n\n#define ldump_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lundump.h\"\n\ntypedef struct {\n lua_State* L;\n lua_Writer writer;\n void* data;\n int strip;\n int status;\n} DumpState;\n\n#define DumpMem(b,n,size,D)\tDumpBlock(b,(n)*(size),D)\n#define DumpVar(x,D)\t \tDumpMem(&x,1,sizeof(x),D)\n\nstatic void DumpBlock(const void* b, size_t size, DumpState* D)\n{\n if (D->status==0)\n {\n  lua_unlock(D->L);\n  D->status=(*D->writer)(D->L,b,size,D->data);\n  lua_lock(D->L);\n }\n}\n\nstatic void DumpChar(int y, DumpState* D)\n{\n char x=(char)y;\n DumpVar(x,D);\n}\n\nstatic void DumpInt(int x, DumpState* D)\n{\n DumpVar(x,D);\n}\n\nstatic void DumpNumber(lua_Number x, DumpState* D)\n{\n DumpVar(x,D);\n}\n\nstatic void DumpVector(const void* b, int n, size_t size, DumpState* D)\n{\n DumpInt(n,D);\n DumpMem(b,n,size,D);\n}\n\nstatic void DumpString(const TString* s, DumpState* D)\n{\n if (s==NULL)\n {\n  size_t size=0;\n  DumpVar(size,D);\n }\n else\n {\n  size_t size=s->tsv.len+1;\t\t/* include trailing '\\0' */\n  DumpVar(size,D);\n  DumpBlock(getstr(s),size,D);\n }\n}\n\n#define DumpCode(f,D)\t DumpVector(f->code,f->sizecode,sizeof(Instruction),D)\n\nstatic void DumpFunction(const Proto* f, const TString* p, DumpState* D);\n\nstatic void DumpConstants(const Proto* f, DumpState* D)\n{\n int i,n=f->sizek;\n DumpInt(n,D);\n for (i=0; i<n; i++)\n {\n  const TValue* o=&f->k[i];\n  DumpChar(ttype(o),D);\n  switch (ttype(o))\n  {\n   case LUA_TNIL:\n\tbreak;\n   case LUA_TBOOLEAN:\n\tDumpChar(bvalue(o),D);\n\tbreak;\n   case LUA_TNUMBER:\n\tDumpNumber(nvalue(o),D);\n\tbreak;\n   case LUA_TSTRING:\n\tDumpString(rawtsvalue(o),D);\n\tbreak;\n   default:\n\tlua_assert(0);\t\t\t/* cannot happen */\n\tbreak;\n  }\n }\n n=f->sizep;\n DumpInt(n,D);\n 
for (i=0; i<n; i++) DumpFunction(f->p[i],f->source,D);\n}\n\nstatic void DumpDebug(const Proto* f, DumpState* D)\n{\n int i,n;\n n= (D->strip) ? 0 : f->sizelineinfo;\n DumpVector(f->lineinfo,n,sizeof(int),D);\n n= (D->strip) ? 0 : f->sizelocvars;\n DumpInt(n,D);\n for (i=0; i<n; i++)\n {\n  DumpString(f->locvars[i].varname,D);\n  DumpInt(f->locvars[i].startpc,D);\n  DumpInt(f->locvars[i].endpc,D);\n }\n n= (D->strip) ? 0 : f->sizeupvalues;\n DumpInt(n,D);\n for (i=0; i<n; i++) DumpString(f->upvalues[i],D);\n}\n\nstatic void DumpFunction(const Proto* f, const TString* p, DumpState* D)\n{\n DumpString((f->source==p || D->strip) ? NULL : f->source,D);\n DumpInt(f->linedefined,D);\n DumpInt(f->lastlinedefined,D);\n DumpChar(f->nups,D);\n DumpChar(f->numparams,D);\n DumpChar(f->is_vararg,D);\n DumpChar(f->maxstacksize,D);\n DumpCode(f,D);\n DumpConstants(f,D);\n DumpDebug(f,D);\n}\n\nstatic void DumpHeader(DumpState* D)\n{\n char h[LUAC_HEADERSIZE];\n luaU_header(h);\n DumpBlock(h,LUAC_HEADERSIZE,D);\n}\n\n/*\n** dump Lua function as precompiled chunk\n*/\nint luaU_dump (lua_State* L, const Proto* f, lua_Writer w, void* data, int strip)\n{\n DumpState D;\n D.L=L;\n D.writer=w;\n D.data=data;\n D.strip=strip;\n D.status=0;\n DumpHeader(&D);\n DumpFunction(f,NULL,&D);\n return D.status;\n}\n"
  },
  {
    "path": "deps/lua/src/lfunc.c",
    "content": "/*\n** $Id: lfunc.c,v 2.12.1.2 2007/12/28 14:58:43 roberto Exp $\n** Auxiliary functions to manipulate prototypes and closures\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stddef.h>\n\n#define lfunc_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n\n\n\nClosure *luaF_newCclosure (lua_State *L, int nelems, Table *e) {\n  Closure *c = cast(Closure *, luaM_malloc(L, sizeCclosure(nelems)));\n  luaC_link(L, obj2gco(c), LUA_TFUNCTION);\n  c->c.isC = 1;\n  c->c.env = e;\n  c->c.nupvalues = cast_byte(nelems);\n  return c;\n}\n\n\nClosure *luaF_newLclosure (lua_State *L, int nelems, Table *e) {\n  Closure *c = cast(Closure *, luaM_malloc(L, sizeLclosure(nelems)));\n  luaC_link(L, obj2gco(c), LUA_TFUNCTION);\n  c->l.isC = 0;\n  c->l.env = e;\n  c->l.nupvalues = cast_byte(nelems);\n  while (nelems--) c->l.upvals[nelems] = NULL;\n  return c;\n}\n\n\nUpVal *luaF_newupval (lua_State *L) {\n  UpVal *uv = luaM_new(L, UpVal);\n  luaC_link(L, obj2gco(uv), LUA_TUPVAL);\n  uv->v = &uv->u.value;\n  setnilvalue(uv->v);\n  return uv;\n}\n\n\nUpVal *luaF_findupval (lua_State *L, StkId level) {\n  global_State *g = G(L);\n  GCObject **pp = &L->openupval;\n  UpVal *p;\n  UpVal *uv;\n  while (*pp != NULL && (p = ngcotouv(*pp))->v >= level) {\n    lua_assert(p->v != &p->u.value);\n    if (p->v == level) {  /* found a corresponding upvalue? */\n      if (isdead(g, obj2gco(p)))  /* is it dead? 
*/\n        changewhite(obj2gco(p));  /* resurrect it */\n      return p;\n    }\n    pp = &p->next;\n  }\n  uv = luaM_new(L, UpVal);  /* not found: create a new one */\n  uv->tt = LUA_TUPVAL;\n  uv->marked = luaC_white(g);\n  uv->v = level;  /* current value lives in the stack */\n  uv->next = *pp;  /* chain it in the proper position */\n  *pp = obj2gco(uv);\n  uv->u.l.prev = &g->uvhead;  /* double link it in `uvhead' list */\n  uv->u.l.next = g->uvhead.u.l.next;\n  uv->u.l.next->u.l.prev = uv;\n  g->uvhead.u.l.next = uv;\n  lua_assert(uv->u.l.next->u.l.prev == uv && uv->u.l.prev->u.l.next == uv);\n  return uv;\n}\n\n\nstatic void unlinkupval (UpVal *uv) {\n  lua_assert(uv->u.l.next->u.l.prev == uv && uv->u.l.prev->u.l.next == uv);\n  uv->u.l.next->u.l.prev = uv->u.l.prev;  /* remove from `uvhead' list */\n  uv->u.l.prev->u.l.next = uv->u.l.next;\n}\n\n\nvoid luaF_freeupval (lua_State *L, UpVal *uv) {\n  if (uv->v != &uv->u.value)  /* is it open? */\n    unlinkupval(uv);  /* remove from open list */\n  luaM_free(L, uv);  /* free upvalue */\n}\n\n\nvoid luaF_close (lua_State *L, StkId level) {\n  UpVal *uv;\n  global_State *g = G(L);\n  while (L->openupval != NULL && (uv = ngcotouv(L->openupval))->v >= level) {\n    GCObject *o = obj2gco(uv);\n    lua_assert(!isblack(o) && uv->v != &uv->u.value);\n    L->openupval = uv->next;  /* remove from `open' list */\n    if (isdead(g, o))\n      luaF_freeupval(L, uv);  /* free upvalue */\n    else {\n      unlinkupval(uv);\n      setobj(L, &uv->u.value, uv->v);\n      uv->v = &uv->u.value;  /* now current value lives here */\n      luaC_linkupval(L, uv);  /* link upvalue into `gcroot' list */\n    }\n  }\n}\n\n\nProto *luaF_newproto (lua_State *L) {\n  Proto *f = luaM_new(L, Proto);\n  luaC_link(L, obj2gco(f), LUA_TPROTO);\n  f->k = NULL;\n  f->sizek = 0;\n  f->p = NULL;\n  f->sizep = 0;\n  f->code = NULL;\n  f->sizecode = 0;\n  f->sizelineinfo = 0;\n  f->sizeupvalues = 0;\n  f->nups = 0;\n  f->upvalues = NULL;\n  
f->numparams = 0;\n  f->is_vararg = 0;\n  f->maxstacksize = 0;\n  f->lineinfo = NULL;\n  f->sizelocvars = 0;\n  f->locvars = NULL;\n  f->linedefined = 0;\n  f->lastlinedefined = 0;\n  f->source = NULL;\n  return f;\n}\n\n\nvoid luaF_freeproto (lua_State *L, Proto *f) {\n  luaM_freearray(L, f->code, f->sizecode, Instruction);\n  luaM_freearray(L, f->p, f->sizep, Proto *);\n  luaM_freearray(L, f->k, f->sizek, TValue);\n  luaM_freearray(L, f->lineinfo, f->sizelineinfo, int);\n  luaM_freearray(L, f->locvars, f->sizelocvars, struct LocVar);\n  luaM_freearray(L, f->upvalues, f->sizeupvalues, TString *);\n  luaM_free(L, f);\n}\n\n\nvoid luaF_freeclosure (lua_State *L, Closure *c) {\n  int size = (c->c.isC) ? sizeCclosure(c->c.nupvalues) :\n                          sizeLclosure(c->l.nupvalues);\n  luaM_freemem(L, c, size);\n}\n\n\n/*\n** Look for n-th local variable at line `line' in function `func'.\n** Returns NULL if not found.\n*/\nconst char *luaF_getlocalname (const Proto *f, int local_number, int pc) {\n  int i;\n  for (i = 0; i<f->sizelocvars && f->locvars[i].startpc <= pc; i++) {\n    if (pc < f->locvars[i].endpc) {  /* is variable active? */\n      local_number--;\n      if (local_number == 0)\n        return getstr(f->locvars[i].varname);\n    }\n  }\n  return NULL;  /* not found */\n}\n\n"
  },
  {
    "path": "deps/lua/src/lfunc.h",
    "content": "/*\n** $Id: lfunc.h,v 2.4.1.1 2007/12/27 13:02:25 roberto Exp $\n** Auxiliary functions to manipulate prototypes and closures\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lfunc_h\n#define lfunc_h\n\n\n#include \"lobject.h\"\n\n\n#define sizeCclosure(n)\t(cast(int, sizeof(CClosure)) + \\\n                         cast(int, sizeof(TValue)*((n)-1)))\n\n#define sizeLclosure(n)\t(cast(int, sizeof(LClosure)) + \\\n                         cast(int, sizeof(TValue *)*((n)-1)))\n\n\nLUAI_FUNC Proto *luaF_newproto (lua_State *L);\nLUAI_FUNC Closure *luaF_newCclosure (lua_State *L, int nelems, Table *e);\nLUAI_FUNC Closure *luaF_newLclosure (lua_State *L, int nelems, Table *e);\nLUAI_FUNC UpVal *luaF_newupval (lua_State *L);\nLUAI_FUNC UpVal *luaF_findupval (lua_State *L, StkId level);\nLUAI_FUNC void luaF_close (lua_State *L, StkId level);\nLUAI_FUNC void luaF_freeproto (lua_State *L, Proto *f);\nLUAI_FUNC void luaF_freeclosure (lua_State *L, Closure *c);\nLUAI_FUNC void luaF_freeupval (lua_State *L, UpVal *uv);\nLUAI_FUNC const char *luaF_getlocalname (const Proto *func, int local_number,\n                                         int pc);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lgc.c",
    "content": "/*\n** $Id: lgc.c,v 2.38.1.2 2011/03/18 18:05:38 roberto Exp $\n** Garbage Collector\n** See Copyright Notice in lua.h\n*/\n\n#include <string.h>\n\n#define lgc_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n\n\n#define GCSTEPSIZE\t1024u\n#define GCSWEEPMAX\t40\n#define GCSWEEPCOST\t10\n#define GCFINALIZECOST\t100\n\n\n#define maskmarks\tcast_byte(~(bitmask(BLACKBIT)|WHITEBITS))\n\n#define makewhite(g,x)\t\\\n   ((x)->gch.marked = cast_byte(((x)->gch.marked & maskmarks) | luaC_white(g)))\n\n#define white2gray(x)\treset2bits((x)->gch.marked, WHITE0BIT, WHITE1BIT)\n#define black2gray(x)\tresetbit((x)->gch.marked, BLACKBIT)\n\n#define stringmark(s)\treset2bits((s)->tsv.marked, WHITE0BIT, WHITE1BIT)\n\n\n#define isfinalized(u)\t\ttestbit((u)->marked, FINALIZEDBIT)\n#define markfinalized(u)\tl_setbit((u)->marked, FINALIZEDBIT)\n\n\n#define KEYWEAK         bitmask(KEYWEAKBIT)\n#define VALUEWEAK       bitmask(VALUEWEAKBIT)\n\n\n\n#define markvalue(g,o) { checkconsistency(o); \\\n  if (iscollectable(o) && iswhite(gcvalue(o))) reallymarkobject(g,gcvalue(o)); }\n\n#define markobject(g,t) { if (iswhite(obj2gco(t))) \\\n\t\treallymarkobject(g, obj2gco(t)); }\n\n\n#define setthreshold(g)  (g->GCthreshold = (g->estimate/100) * g->gcpause)\n\n\nstatic void removeentry (Node *n) {\n  lua_assert(ttisnil(gval(n)));\n  if (iscollectable(gkey(n)))\n    setttype(gkey(n), LUA_TDEADKEY);  /* dead key; remove it */\n}\n\n\nstatic void reallymarkobject (global_State *g, GCObject *o) {\n  lua_assert(iswhite(o) && !isdead(g, o));\n  white2gray(o);\n  switch (o->gch.tt) {\n    case LUA_TSTRING: {\n      return;\n    }\n    case LUA_TUSERDATA: {\n      Table *mt = gco2u(o)->metatable;\n      gray2black(o);  /* udata are never gray */\n      if (mt) markobject(g, 
mt);\n      markobject(g, gco2u(o)->env);\n      return;\n    }\n    case LUA_TUPVAL: {\n      UpVal *uv = gco2uv(o);\n      markvalue(g, uv->v);\n      if (uv->v == &uv->u.value)  /* closed? */\n        gray2black(o);  /* open upvalues are never black */\n      return;\n    }\n    case LUA_TFUNCTION: {\n      gco2cl(o)->c.gclist = g->gray;\n      g->gray = o;\n      break;\n    }\n    case LUA_TTABLE: {\n      gco2h(o)->gclist = g->gray;\n      g->gray = o;\n      break;\n    }\n    case LUA_TTHREAD: {\n      gco2th(o)->gclist = g->gray;\n      g->gray = o;\n      break;\n    }\n    case LUA_TPROTO: {\n      gco2p(o)->gclist = g->gray;\n      g->gray = o;\n      break;\n    }\n    default: lua_assert(0);\n  }\n}\n\n\nstatic void marktmu (global_State *g) {\n  GCObject *u = g->tmudata;\n  if (u) {\n    do {\n      u = u->gch.next;\n      makewhite(g, u);  /* may be marked, if left from previous GC */\n      reallymarkobject(g, u);\n    } while (u != g->tmudata);\n  }\n}\n\n\n/* move `dead' udata that need finalization to list `tmudata' */\nsize_t luaC_separateudata (lua_State *L, int all) {\n  global_State *g = G(L);\n  size_t deadmem = 0;\n  GCObject **p = &g->mainthread->next;\n  GCObject *curr;\n  while ((curr = *p) != NULL) {\n    if (!(iswhite(curr) || all) || isfinalized(gco2u(curr)))\n      p = &curr->gch.next;  /* don't bother with them */\n    else if (fasttm(L, gco2u(curr)->metatable, TM_GC) == NULL) {\n      markfinalized(gco2u(curr));  /* don't need finalization */\n      p = &curr->gch.next;\n    }\n    else {  /* must call its gc method */\n      deadmem += sizeudata(gco2u(curr));\n      markfinalized(gco2u(curr));\n      *p = curr->gch.next;\n      /* link `curr' at the end of `tmudata' list */\n      if (g->tmudata == NULL)  /* list is empty? 
*/\n        g->tmudata = curr->gch.next = curr;  /* creates a circular list */\n      else {\n        curr->gch.next = g->tmudata->gch.next;\n        g->tmudata->gch.next = curr;\n        g->tmudata = curr;\n      }\n    }\n  }\n  return deadmem;\n}\n\n\nstatic int traversetable (global_State *g, Table *h) {\n  int i;\n  int weakkey = 0;\n  int weakvalue = 0;\n  const TValue *mode;\n  if (h->metatable)\n    markobject(g, h->metatable);\n  mode = gfasttm(g, h->metatable, TM_MODE);\n  if (mode && ttisstring(mode)) {  /* is there a weak mode? */\n    weakkey = (strchr(svalue(mode), 'k') != NULL);\n    weakvalue = (strchr(svalue(mode), 'v') != NULL);\n    if (weakkey || weakvalue) {  /* is really weak? */\n      h->marked &= ~(KEYWEAK | VALUEWEAK);  /* clear bits */\n      h->marked |= cast_byte((weakkey << KEYWEAKBIT) |\n                             (weakvalue << VALUEWEAKBIT));\n      h->gclist = g->weak;  /* must be cleared after GC, ... */\n      g->weak = obj2gco(h);  /* ... so put in the appropriate list */\n    }\n  }\n  if (weakkey && weakvalue) return 1;\n  if (!weakvalue) {\n    i = h->sizearray;\n    while (i--)\n      markvalue(g, &h->array[i]);\n  }\n  i = sizenode(h);\n  while (i--) {\n    Node *n = gnode(h, i);\n    lua_assert(ttype(gkey(n)) != LUA_TDEADKEY || ttisnil(gval(n)));\n    if (ttisnil(gval(n)))\n      removeentry(n);  /* remove empty entries */\n    else {\n      lua_assert(!ttisnil(gkey(n)));\n      if (!weakkey) markvalue(g, gkey(n));\n      if (!weakvalue) markvalue(g, gval(n));\n    }\n  }\n  return weakkey || weakvalue;\n}\n\n\n/*\n** All marks are conditional because a GC may happen while the\n** prototype is still being created\n*/\nstatic void traverseproto (global_State *g, Proto *f) {\n  int i;\n  if (f->source) stringmark(f->source);\n  for (i=0; i<f->sizek; i++)  /* mark literals */\n    markvalue(g, &f->k[i]);\n  for (i=0; i<f->sizeupvalues; i++) {  /* mark upvalue names */\n    if (f->upvalues[i])\n      
stringmark(f->upvalues[i]);\n  }\n  for (i=0; i<f->sizep; i++) {  /* mark nested protos */\n    if (f->p[i])\n      markobject(g, f->p[i]);\n  }\n  for (i=0; i<f->sizelocvars; i++) {  /* mark local-variable names */\n    if (f->locvars[i].varname)\n      stringmark(f->locvars[i].varname);\n  }\n}\n\n\n\nstatic void traverseclosure (global_State *g, Closure *cl) {\n  markobject(g, cl->c.env);\n  if (cl->c.isC) {\n    int i;\n    for (i=0; i<cl->c.nupvalues; i++)  /* mark its upvalues */\n      markvalue(g, &cl->c.upvalue[i]);\n  }\n  else {\n    int i;\n    lua_assert(cl->l.nupvalues == cl->l.p->nups);\n    markobject(g, cl->l.p);\n    for (i=0; i<cl->l.nupvalues; i++)  /* mark its upvalues */\n      markobject(g, cl->l.upvals[i]);\n  }\n}\n\n\nstatic void checkstacksizes (lua_State *L, StkId max) {\n  int ci_used = cast_int(L->ci - L->base_ci);  /* number of `ci' in use */\n  int s_used = cast_int(max - L->stack);  /* part of stack in use */\n  if (L->size_ci > LUAI_MAXCALLS)  /* handling overflow? */\n    return;  /* do not touch the stacks */\n  if (4*ci_used < L->size_ci && 2*BASIC_CI_SIZE < L->size_ci)\n    luaD_reallocCI(L, L->size_ci/2);  /* still big enough... */\n  condhardstacktests(luaD_reallocCI(L, ci_used + 1));\n  if (4*s_used < L->stacksize &&\n      2*(BASIC_STACK_SIZE+EXTRA_STACK) < L->stacksize)\n    luaD_reallocstack(L, L->stacksize/2);  /* still big enough... 
*/\n  condhardstacktests(luaD_reallocstack(L, s_used));\n}\n\n\nstatic void traversestack (global_State *g, lua_State *l) {\n  StkId o, lim;\n  CallInfo *ci;\n  markvalue(g, gt(l));\n  lim = l->top;\n  for (ci = l->base_ci; ci <= l->ci; ci++) {\n    lua_assert(ci->top <= l->stack_last);\n    if (lim < ci->top) lim = ci->top;\n  }\n  for (o = l->stack; o < l->top; o++)\n    markvalue(g, o);\n  for (; o <= lim; o++)\n    setnilvalue(o);\n  checkstacksizes(l, lim);\n}\n\n\n/*\n** traverse one gray object, turning it to black.\n** Returns `quantity' traversed.\n*/\nstatic l_mem propagatemark (global_State *g) {\n  GCObject *o = g->gray;\n  lua_assert(isgray(o));\n  gray2black(o);\n  switch (o->gch.tt) {\n    case LUA_TTABLE: {\n      Table *h = gco2h(o);\n      g->gray = h->gclist;\n      if (traversetable(g, h))  /* table is weak? */\n        black2gray(o);  /* keep it gray */\n      return sizeof(Table) + sizeof(TValue) * h->sizearray +\n                             sizeof(Node) * sizenode(h);\n    }\n    case LUA_TFUNCTION: {\n      Closure *cl = gco2cl(o);\n      g->gray = cl->c.gclist;\n      traverseclosure(g, cl);\n      return (cl->c.isC) ? 
sizeCclosure(cl->c.nupvalues) :\n                           sizeLclosure(cl->l.nupvalues);\n    }\n    case LUA_TTHREAD: {\n      lua_State *th = gco2th(o);\n      g->gray = th->gclist;\n      th->gclist = g->grayagain;\n      g->grayagain = o;\n      black2gray(o);\n      traversestack(g, th);\n      return sizeof(lua_State) + sizeof(TValue) * th->stacksize +\n                                 sizeof(CallInfo) * th->size_ci;\n    }\n    case LUA_TPROTO: {\n      Proto *p = gco2p(o);\n      g->gray = p->gclist;\n      traverseproto(g, p);\n      return sizeof(Proto) + sizeof(Instruction) * p->sizecode +\n                             sizeof(Proto *) * p->sizep +\n                             sizeof(TValue) * p->sizek + \n                             sizeof(int) * p->sizelineinfo +\n                             sizeof(LocVar) * p->sizelocvars +\n                             sizeof(TString *) * p->sizeupvalues;\n    }\n    default: lua_assert(0); return 0;\n  }\n}\n\n\nstatic size_t propagateall (global_State *g) {\n  size_t m = 0;\n  while (g->gray) m += propagatemark(g);\n  return m;\n}\n\n\n/*\n** The next function tells whether a key or value can be cleared from\n** a weak table. Non-collectable objects are never removed from weak\n** tables. Strings behave as `values', so are never removed too. 
for\n** other objects: if really collected, cannot keep them; for userdata\n** being finalized, keep them in keys, but not in values\n*/\nstatic int iscleared (const TValue *o, int iskey) {\n  if (!iscollectable(o)) return 0;\n  if (ttisstring(o)) {\n    stringmark(rawtsvalue(o));  /* strings are `values', so are never weak */\n    return 0;\n  }\n  return iswhite(gcvalue(o)) ||\n    (ttisuserdata(o) && (!iskey && isfinalized(uvalue(o))));\n}\n\n\n/*\n** clear collected entries from weaktables\n*/\nstatic void cleartable (GCObject *l) {\n  while (l) {\n    Table *h = gco2h(l);\n    int i = h->sizearray;\n    lua_assert(testbit(h->marked, VALUEWEAKBIT) ||\n               testbit(h->marked, KEYWEAKBIT));\n    if (testbit(h->marked, VALUEWEAKBIT)) {\n      while (i--) {\n        TValue *o = &h->array[i];\n        if (iscleared(o, 0))  /* value was collected? */\n          setnilvalue(o);  /* remove value */\n      }\n    }\n    i = sizenode(h);\n    while (i--) {\n      Node *n = gnode(h, i);\n      if (!ttisnil(gval(n)) &&  /* non-empty entry? */\n          (iscleared(key2tval(n), 1) || iscleared(gval(n), 0))) {\n        setnilvalue(gval(n));  /* remove value ... 
*/\n        removeentry(n);  /* remove entry from table */\n      }\n    }\n    l = h->gclist;\n  }\n}\n\n\nstatic void freeobj (lua_State *L, GCObject *o) {\n  switch (o->gch.tt) {\n    case LUA_TPROTO: luaF_freeproto(L, gco2p(o)); break;\n    case LUA_TFUNCTION: luaF_freeclosure(L, gco2cl(o)); break;\n    case LUA_TUPVAL: luaF_freeupval(L, gco2uv(o)); break;\n    case LUA_TTABLE: luaH_free(L, gco2h(o)); break;\n    case LUA_TTHREAD: {\n      lua_assert(gco2th(o) != L && gco2th(o) != G(L)->mainthread);\n      luaE_freethread(L, gco2th(o));\n      break;\n    }\n    case LUA_TSTRING: {\n      G(L)->strt.nuse--;\n      luaM_freemem(L, o, sizestring(gco2ts(o)));\n      break;\n    }\n    case LUA_TUSERDATA: {\n      luaM_freemem(L, o, sizeudata(gco2u(o)));\n      break;\n    }\n    default: lua_assert(0);\n  }\n}\n\n\n\n#define sweepwholelist(L,p)\tsweeplist(L,p,MAX_LUMEM)\n\n\nstatic GCObject **sweeplist (lua_State *L, GCObject **p, lu_mem count) {\n  GCObject *curr;\n  global_State *g = G(L);\n  int deadmask = otherwhite(g);\n  while ((curr = *p) != NULL && count-- > 0) {\n    if (curr->gch.tt == LUA_TTHREAD)  /* sweep open upvalues of each thread */\n      sweepwholelist(L, &gco2th(curr)->openupval);\n    if ((curr->gch.marked ^ WHITEBITS) & deadmask) {  /* not dead? */\n      lua_assert(!isdead(g, curr) || testbit(curr->gch.marked, FIXEDBIT));\n      makewhite(g, curr);  /* make it white (for next cycle) */\n      p = &curr->gch.next;\n    }\n    else {  /* must erase `curr' */\n      lua_assert(isdead(g, curr) || deadmask == bitmask(SFIXEDBIT));\n      *p = curr->gch.next;\n      if (curr == g->rootgc)  /* is the first element of the list? 
*/\n        g->rootgc = curr->gch.next;  /* adjust first */\n      freeobj(L, curr);\n    }\n  }\n  return p;\n}\n\n\nstatic void checkSizes (lua_State *L) {\n  global_State *g = G(L);\n  /* check size of string hash */\n  if (g->strt.nuse < cast(lu_int32, g->strt.size/4) &&\n      g->strt.size > MINSTRTABSIZE*2)\n    luaS_resize(L, g->strt.size/2);  /* table is too big */\n  /* check size of buffer */\n  if (luaZ_sizebuffer(&g->buff) > LUA_MINBUFFER*2) {  /* buffer too big? */\n    size_t newsize = luaZ_sizebuffer(&g->buff) / 2;\n    luaZ_resizebuffer(L, &g->buff, newsize);\n  }\n}\n\n\nstatic void GCTM (lua_State *L) {\n  global_State *g = G(L);\n  GCObject *o = g->tmudata->gch.next;  /* get first element */\n  Udata *udata = rawgco2u(o);\n  const TValue *tm;\n  /* remove udata from `tmudata' */\n  if (o == g->tmudata)  /* last element? */\n    g->tmudata = NULL;\n  else\n    g->tmudata->gch.next = udata->uv.next;\n  udata->uv.next = g->mainthread->next;  /* return it to `root' list */\n  g->mainthread->next = o;\n  makewhite(g, o);\n  tm = fasttm(L, udata->uv.metatable, TM_GC);\n  if (tm != NULL) {\n    lu_byte oldah = L->allowhook;\n    lu_mem oldt = g->GCthreshold;\n    L->allowhook = 0;  /* stop debug hooks during GC tag method */\n    g->GCthreshold = 2*g->totalbytes;  /* avoid GC steps */\n    setobj2s(L, L->top, tm);\n    setuvalue(L, L->top+1, udata);\n    L->top += 2;\n    luaD_call(L, L->top - 2, 0);\n    L->allowhook = oldah;  /* restore hooks */\n    g->GCthreshold = oldt;  /* restore threshold */\n  }\n}\n\n\n/*\n** Call all GC tag methods\n*/\nvoid luaC_callGCTM (lua_State *L) {\n  while (G(L)->tmudata)\n    GCTM(L);\n}\n\n\nvoid luaC_freeall (lua_State *L) {\n  global_State *g = G(L);\n  int i;\n  g->currentwhite = WHITEBITS | bitmask(SFIXEDBIT);  /* mask to collect all elements */\n  sweepwholelist(L, &g->rootgc);\n  for (i = 0; i < g->strt.size; i++)  /* free all string lists */\n    sweepwholelist(L, &g->strt.hash[i]);\n}\n\n\nstatic void markmt 
(global_State *g) {\n  int i;\n  for (i=0; i<NUM_TAGS; i++)\n    if (g->mt[i]) markobject(g, g->mt[i]);\n}\n\n\n/* mark root set */\nstatic void markroot (lua_State *L) {\n  global_State *g = G(L);\n  g->gray = NULL;\n  g->grayagain = NULL;\n  g->weak = NULL;\n  markobject(g, g->mainthread);\n  /* make global table be traversed before main stack */\n  markvalue(g, gt(g->mainthread));\n  markvalue(g, registry(L));\n  markmt(g);\n  g->gcstate = GCSpropagate;\n}\n\n\nstatic void remarkupvals (global_State *g) {\n  UpVal *uv;\n  for (uv = g->uvhead.u.l.next; uv != &g->uvhead; uv = uv->u.l.next) {\n    lua_assert(uv->u.l.next->u.l.prev == uv && uv->u.l.prev->u.l.next == uv);\n    if (isgray(obj2gco(uv)))\n      markvalue(g, uv->v);\n  }\n}\n\n\nstatic void atomic (lua_State *L) {\n  global_State *g = G(L);\n  size_t udsize;  /* total size of userdata to be finalized */\n  /* remark occasional upvalues of (maybe) dead threads */\n  remarkupvals(g);\n  /* traverse objects caught by write barrier and by 'remarkupvals' */\n  propagateall(g);\n  /* remark weak tables */\n  g->gray = g->weak;\n  g->weak = NULL;\n  lua_assert(!iswhite(obj2gco(g->mainthread)));\n  markobject(g, L);  /* mark running thread */\n  markmt(g);  /* mark basic metatables (again) */\n  propagateall(g);\n  /* remark gray again */\n  g->gray = g->grayagain;\n  g->grayagain = NULL;\n  propagateall(g);\n  udsize = luaC_separateudata(L, 0);  /* separate userdata to be finalized */\n  marktmu(g);  /* mark `preserved' userdata */\n  udsize += propagateall(g);  /* remark, to propagate `preserveness' */\n  cleartable(g->weak);  /* remove collected objects from weak tables */\n  /* flip current white */\n  g->currentwhite = cast_byte(otherwhite(g));\n  g->sweepstrgc = 0;\n  g->sweepgc = &g->rootgc;\n  g->gcstate = GCSsweepstring;\n  g->estimate = g->totalbytes - udsize;  /* first estimate */\n}\n\n\nstatic l_mem singlestep (lua_State *L) {\n  global_State *g = G(L);\n  /*lua_checkmemory(L);*/\n  switch 
(g->gcstate) {\n    case GCSpause: {\n      markroot(L);  /* start a new collection */\n      return 0;\n    }\n    case GCSpropagate: {\n      if (g->gray)\n        return propagatemark(g);\n      else {  /* no more `gray' objects */\n        atomic(L);  /* finish mark phase */\n        return 0;\n      }\n    }\n    case GCSsweepstring: {\n      lu_mem old = g->totalbytes;\n      sweepwholelist(L, &g->strt.hash[g->sweepstrgc++]);\n      if (g->sweepstrgc >= g->strt.size)  /* nothing more to sweep? */\n        g->gcstate = GCSsweep;  /* end sweep-string phase */\n      lua_assert(old >= g->totalbytes);\n      g->estimate -= old - g->totalbytes;\n      return GCSWEEPCOST;\n    }\n    case GCSsweep: {\n      lu_mem old = g->totalbytes;\n      g->sweepgc = sweeplist(L, g->sweepgc, GCSWEEPMAX);\n      if (*g->sweepgc == NULL) {  /* nothing more to sweep? */\n        checkSizes(L);\n        g->gcstate = GCSfinalize;  /* end sweep phase */\n      }\n      lua_assert(old >= g->totalbytes);\n      g->estimate -= old - g->totalbytes;\n      return GCSWEEPMAX*GCSWEEPCOST;\n    }\n    case GCSfinalize: {\n      if (g->tmudata) {\n        GCTM(L);\n        if (g->estimate > GCFINALIZECOST)\n          g->estimate -= GCFINALIZECOST;\n        return GCFINALIZECOST;\n      }\n      else {\n        g->gcstate = GCSpause;  /* end collection */\n        g->gcdept = 0;\n        return 0;\n      }\n    }\n    default: lua_assert(0); return 0;\n  }\n}\n\n\nvoid luaC_step (lua_State *L) {\n  global_State *g = G(L);\n  l_mem lim = (GCSTEPSIZE/100) * g->gcstepmul;\n  if (lim == 0)\n    lim = (MAX_LUMEM-1)/2;  /* no limit */\n  g->gcdept += g->totalbytes - g->GCthreshold;\n  do {\n    lim -= singlestep(L);\n    if (g->gcstate == GCSpause)\n      break;\n  } while (lim > 0);\n  if (g->gcstate != GCSpause) {\n    if (g->gcdept < GCSTEPSIZE)\n      g->GCthreshold = g->totalbytes + GCSTEPSIZE;  /* - lim/g->gcstepmul;*/\n    else {\n      g->gcdept -= GCSTEPSIZE;\n      g->GCthreshold = 
g->totalbytes;\n    }\n  }\n  else {\n    setthreshold(g);\n  }\n}\n\n\nvoid luaC_fullgc (lua_State *L) {\n  global_State *g = G(L);\n  if (g->gcstate <= GCSpropagate) {\n    /* reset sweep marks to sweep all elements (returning them to white) */\n    g->sweepstrgc = 0;\n    g->sweepgc = &g->rootgc;\n    /* reset other collector lists */\n    g->gray = NULL;\n    g->grayagain = NULL;\n    g->weak = NULL;\n    g->gcstate = GCSsweepstring;\n  }\n  lua_assert(g->gcstate != GCSpause && g->gcstate != GCSpropagate);\n  /* finish any pending sweep phase */\n  while (g->gcstate != GCSfinalize) {\n    lua_assert(g->gcstate == GCSsweepstring || g->gcstate == GCSsweep);\n    singlestep(L);\n  }\n  markroot(L);\n  while (g->gcstate != GCSpause) {\n    singlestep(L);\n  }\n  setthreshold(g);\n}\n\n\nvoid luaC_barrierf (lua_State *L, GCObject *o, GCObject *v) {\n  global_State *g = G(L);\n  lua_assert(isblack(o) && iswhite(v) && !isdead(g, v) && !isdead(g, o));\n  lua_assert(g->gcstate != GCSfinalize && g->gcstate != GCSpause);\n  lua_assert(ttype(&o->gch) != LUA_TTABLE);\n  /* must keep invariant? 
*/\n  if (g->gcstate == GCSpropagate)\n    reallymarkobject(g, v);  /* restore invariant */\n  else  /* don't mind */\n    makewhite(g, o);  /* mark as white just to avoid other barriers */\n}\n\n\nvoid luaC_barrierback (lua_State *L, Table *t) {\n  global_State *g = G(L);\n  GCObject *o = obj2gco(t);\n  lua_assert(isblack(o) && !isdead(g, o));\n  lua_assert(g->gcstate != GCSfinalize && g->gcstate != GCSpause);\n  black2gray(o);  /* make table gray (again) */\n  t->gclist = g->grayagain;\n  g->grayagain = o;\n}\n\n\nvoid luaC_link (lua_State *L, GCObject *o, lu_byte tt) {\n  global_State *g = G(L);\n  o->gch.next = g->rootgc;\n  g->rootgc = o;\n  o->gch.marked = luaC_white(g);\n  o->gch.tt = tt;\n}\n\n\nvoid luaC_linkupval (lua_State *L, UpVal *uv) {\n  global_State *g = G(L);\n  GCObject *o = obj2gco(uv);\n  o->gch.next = g->rootgc;  /* link upvalue into `rootgc' list */\n  g->rootgc = o;\n  if (isgray(o)) { \n    if (g->gcstate == GCSpropagate) {\n      gray2black(o);  /* closed upvalues need barrier */\n      luaC_barrier(L, uv, uv->v);\n    }\n    else {  /* sweep phase: sweep it (turning it into white) */\n      makewhite(g, o);\n      lua_assert(g->gcstate != GCSfinalize && g->gcstate != GCSpause);\n    }\n  }\n}\n\n"
  },
  {
    "path": "deps/lua/src/lgc.h",
    "content": "/*\n** $Id: lgc.h,v 2.15.1.1 2007/12/27 13:02:25 roberto Exp $\n** Garbage Collector\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lgc_h\n#define lgc_h\n\n\n#include \"lobject.h\"\n\n\n/*\n** Possible states of the Garbage Collector\n*/\n#define GCSpause\t0\n#define GCSpropagate\t1\n#define GCSsweepstring\t2\n#define GCSsweep\t3\n#define GCSfinalize\t4\n\n\n/*\n** some userful bit tricks\n*/\n#define resetbits(x,m)\t((x) &= cast(lu_byte, ~(m)))\n#define setbits(x,m)\t((x) |= (m))\n#define testbits(x,m)\t((x) & (m))\n#define bitmask(b)\t(1<<(b))\n#define bit2mask(b1,b2)\t(bitmask(b1) | bitmask(b2))\n#define l_setbit(x,b)\tsetbits(x, bitmask(b))\n#define resetbit(x,b)\tresetbits(x, bitmask(b))\n#define testbit(x,b)\ttestbits(x, bitmask(b))\n#define set2bits(x,b1,b2)\tsetbits(x, (bit2mask(b1, b2)))\n#define reset2bits(x,b1,b2)\tresetbits(x, (bit2mask(b1, b2)))\n#define test2bits(x,b1,b2)\ttestbits(x, (bit2mask(b1, b2)))\n\n\n\n/*\n** Layout for bit use in `marked' field:\n** bit 0 - object is white (type 0)\n** bit 1 - object is white (type 1)\n** bit 2 - object is black\n** bit 3 - for userdata: has been finalized\n** bit 3 - for tables: has weak keys\n** bit 4 - for tables: has weak values\n** bit 5 - object is fixed (should not be collected)\n** bit 6 - object is \"super\" fixed (only the main thread)\n*/\n\n\n#define WHITE0BIT\t0\n#define WHITE1BIT\t1\n#define BLACKBIT\t2\n#define FINALIZEDBIT\t3\n#define KEYWEAKBIT\t3\n#define VALUEWEAKBIT\t4\n#define FIXEDBIT\t5\n#define SFIXEDBIT\t6\n#define WHITEBITS\tbit2mask(WHITE0BIT, WHITE1BIT)\n\n\n#define iswhite(x)      test2bits((x)->gch.marked, WHITE0BIT, WHITE1BIT)\n#define isblack(x)      testbit((x)->gch.marked, BLACKBIT)\n#define isgray(x)\t(!isblack(x) && !iswhite(x))\n\n#define otherwhite(g)\t(g->currentwhite ^ WHITEBITS)\n#define isdead(g,v)\t((v)->gch.marked & otherwhite(g) & WHITEBITS)\n\n#define changewhite(x)\t((x)->gch.marked ^= WHITEBITS)\n#define 
gray2black(x)\tl_setbit((x)->gch.marked, BLACKBIT)\n\n#define valiswhite(x)\t(iscollectable(x) && iswhite(gcvalue(x)))\n\n#define luaC_white(g)\tcast(lu_byte, (g)->currentwhite & WHITEBITS)\n\n\n#define luaC_checkGC(L) { \\\n  condhardstacktests(luaD_reallocstack(L, L->stacksize - EXTRA_STACK - 1)); \\\n  if (G(L)->totalbytes >= G(L)->GCthreshold) \\\n\tluaC_step(L); }\n\n\n#define luaC_barrier(L,p,v) { if (valiswhite(v) && isblack(obj2gco(p)))  \\\n\tluaC_barrierf(L,obj2gco(p),gcvalue(v)); }\n\n#define luaC_barriert(L,t,v) { if (valiswhite(v) && isblack(obj2gco(t)))  \\\n\tluaC_barrierback(L,t); }\n\n#define luaC_objbarrier(L,p,o)  \\\n\t{ if (iswhite(obj2gco(o)) && isblack(obj2gco(p))) \\\n\t\tluaC_barrierf(L,obj2gco(p),obj2gco(o)); }\n\n#define luaC_objbarriert(L,t,o)  \\\n   { if (iswhite(obj2gco(o)) && isblack(obj2gco(t))) luaC_barrierback(L,t); }\n\nLUAI_FUNC size_t luaC_separateudata (lua_State *L, int all);\nLUAI_FUNC void luaC_callGCTM (lua_State *L);\nLUAI_FUNC void luaC_freeall (lua_State *L);\nLUAI_FUNC void luaC_step (lua_State *L);\nLUAI_FUNC void luaC_fullgc (lua_State *L);\nLUAI_FUNC void luaC_link (lua_State *L, GCObject *o, lu_byte tt);\nLUAI_FUNC void luaC_linkupval (lua_State *L, UpVal *uv);\nLUAI_FUNC void luaC_barrierf (lua_State *L, GCObject *o, GCObject *v);\nLUAI_FUNC void luaC_barrierback (lua_State *L, Table *t);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/linit.c",
    "content": "/*\n** $Id: linit.c,v 1.14.1.1 2007/12/27 13:02:25 roberto Exp $\n** Initialization of libraries for lua.c\n** See Copyright Notice in lua.h\n*/\n\n\n#define linit_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lualib.h\"\n#include \"lauxlib.h\"\n\n\nstatic const luaL_Reg lualibs[] = {\n  {\"\", luaopen_base},\n  {LUA_LOADLIBNAME, luaopen_package},\n  {LUA_TABLIBNAME, luaopen_table},\n  {LUA_IOLIBNAME, luaopen_io},\n  {LUA_OSLIBNAME, luaopen_os},\n  {LUA_STRLIBNAME, luaopen_string},\n  {LUA_MATHLIBNAME, luaopen_math},\n  {LUA_DBLIBNAME, luaopen_debug},\n  {NULL, NULL}\n};\n\n\nLUALIB_API void luaL_openlibs (lua_State *L) {\n  const luaL_Reg *lib = lualibs;\n  for (; lib->func; lib++) {\n    lua_pushcfunction(L, lib->func);\n    lua_pushstring(L, lib->name);\n    lua_call(L, 1, 0);\n  }\n}\n\n"
  },
  {
    "path": "deps/lua/src/liolib.c",
    "content": "/*\n** $Id: liolib.c,v 2.73.1.4 2010/05/14 15:33:51 roberto Exp $\n** Standard I/O (and system) library\n** See Copyright Notice in lua.h\n*/\n\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define liolib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n\n#define IO_INPUT\t1\n#define IO_OUTPUT\t2\n\n\nstatic const char *const fnames[] = {\"input\", \"output\"};\n\n\nstatic int pushresult (lua_State *L, int i, const char *filename) {\n  int en = errno;  /* calls to Lua API may change this value */\n  if (i) {\n    lua_pushboolean(L, 1);\n    return 1;\n  }\n  else {\n    lua_pushnil(L);\n    if (filename)\n      lua_pushfstring(L, \"%s: %s\", filename, strerror(en));\n    else\n      lua_pushfstring(L, \"%s\", strerror(en));\n    lua_pushinteger(L, en);\n    return 3;\n  }\n}\n\n\nstatic void fileerror (lua_State *L, int arg, const char *filename) {\n  lua_pushfstring(L, \"%s: %s\", filename, strerror(errno));\n  luaL_argerror(L, arg, lua_tostring(L, -1));\n}\n\n\n#define tofilep(L)\t((FILE **)luaL_checkudata(L, 1, LUA_FILEHANDLE))\n\n\nstatic int io_type (lua_State *L) {\n  void *ud;\n  luaL_checkany(L, 1);\n  ud = lua_touserdata(L, 1);\n  lua_getfield(L, LUA_REGISTRYINDEX, LUA_FILEHANDLE);\n  if (ud == NULL || !lua_getmetatable(L, 1) || !lua_rawequal(L, -2, -1))\n    lua_pushnil(L);  /* not a file */\n  else if (*((FILE **)ud) == NULL)\n    lua_pushliteral(L, \"closed file\");\n  else\n    lua_pushliteral(L, \"file\");\n  return 1;\n}\n\n\nstatic FILE *tofile (lua_State *L) {\n  FILE **f = tofilep(L);\n  if (*f == NULL)\n    luaL_error(L, \"attempt to use a closed file\");\n  return *f;\n}\n\n\n\n/*\n** When creating file handles, always creates a `closed' file handle\n** before opening the actual file; so, if there is a memory error, the\n** file is not left opened.\n*/\nstatic FILE **newfile (lua_State *L) {\n  FILE **pf = (FILE **)lua_newuserdata(L, 
sizeof(FILE *));\n  *pf = NULL;  /* file handle is currently `closed' */\n  luaL_getmetatable(L, LUA_FILEHANDLE);\n  lua_setmetatable(L, -2);\n  return pf;\n}\n\n\n/*\n** function to (not) close the standard files stdin, stdout, and stderr\n*/\nstatic int io_noclose (lua_State *L) {\n  lua_pushnil(L);\n  lua_pushliteral(L, \"cannot close standard file\");\n  return 2;\n}\n\n\n/*\n** function to close 'popen' files\n*/\nstatic int io_pclose (lua_State *L) {\n  FILE **p = tofilep(L);\n  int ok = lua_pclose(L, *p);\n  *p = NULL;\n  return pushresult(L, ok, NULL);\n}\n\n\n/*\n** function to close regular files\n*/\nstatic int io_fclose (lua_State *L) {\n  FILE **p = tofilep(L);\n  int ok = (fclose(*p) == 0);\n  *p = NULL;\n  return pushresult(L, ok, NULL);\n}\n\n\nstatic int aux_close (lua_State *L) {\n  lua_getfenv(L, 1);\n  lua_getfield(L, -1, \"__close\");\n  return (lua_tocfunction(L, -1))(L);\n}\n\n\nstatic int io_close (lua_State *L) {\n  if (lua_isnone(L, 1))\n    lua_rawgeti(L, LUA_ENVIRONINDEX, IO_OUTPUT);\n  tofile(L);  /* make sure argument is a file */\n  return aux_close(L);\n}\n\n\nstatic int io_gc (lua_State *L) {\n  FILE *f = *tofilep(L);\n  /* ignore closed files */\n  if (f != NULL)\n    aux_close(L);\n  return 0;\n}\n\n\nstatic int io_tostring (lua_State *L) {\n  FILE *f = *tofilep(L);\n  if (f == NULL)\n    lua_pushliteral(L, \"file (closed)\");\n  else\n    lua_pushfstring(L, \"file (%p)\", f);\n  return 1;\n}\n\n\nstatic int io_open (lua_State *L) {\n  const char *filename = luaL_checkstring(L, 1);\n  const char *mode = luaL_optstring(L, 2, \"r\");\n  FILE **pf = newfile(L);\n  *pf = fopen(filename, mode);\n  return (*pf == NULL) ? 
pushresult(L, 0, filename) : 1;\n}\n\n\n/*\n** this function has a separated environment, which defines the\n** correct __close for 'popen' files\n*/\nstatic int io_popen (lua_State *L) {\n  const char *filename = luaL_checkstring(L, 1);\n  const char *mode = luaL_optstring(L, 2, \"r\");\n  FILE **pf = newfile(L);\n  *pf = lua_popen(L, filename, mode);\n  return (*pf == NULL) ? pushresult(L, 0, filename) : 1;\n}\n\n\nstatic int io_tmpfile (lua_State *L) {\n  FILE **pf = newfile(L);\n  *pf = tmpfile();\n  return (*pf == NULL) ? pushresult(L, 0, NULL) : 1;\n}\n\n\nstatic FILE *getiofile (lua_State *L, int findex) {\n  FILE *f;\n  lua_rawgeti(L, LUA_ENVIRONINDEX, findex);\n  f = *(FILE **)lua_touserdata(L, -1);\n  if (f == NULL)\n    luaL_error(L, \"standard %s file is closed\", fnames[findex - 1]);\n  return f;\n}\n\n\nstatic int g_iofile (lua_State *L, int f, const char *mode) {\n  if (!lua_isnoneornil(L, 1)) {\n    const char *filename = lua_tostring(L, 1);\n    if (filename) {\n      FILE **pf = newfile(L);\n      *pf = fopen(filename, mode);\n      if (*pf == NULL)\n        fileerror(L, 1, filename);\n    }\n    else {\n      tofile(L);  /* check that it's a valid file handle */\n      lua_pushvalue(L, 1);\n    }\n    lua_rawseti(L, LUA_ENVIRONINDEX, f);\n  }\n  /* return current value */\n  lua_rawgeti(L, LUA_ENVIRONINDEX, f);\n  return 1;\n}\n\n\nstatic int io_input (lua_State *L) {\n  return g_iofile(L, IO_INPUT, \"r\");\n}\n\n\nstatic int io_output (lua_State *L) {\n  return g_iofile(L, IO_OUTPUT, \"w\");\n}\n\n\nstatic int io_readline (lua_State *L);\n\n\nstatic void aux_lines (lua_State *L, int idx, int toclose) {\n  lua_pushvalue(L, idx);\n  lua_pushboolean(L, toclose);  /* close/not close file when finished */\n  lua_pushcclosure(L, io_readline, 2);\n}\n\n\nstatic int f_lines (lua_State *L) {\n  tofile(L);  /* check that it's a valid file handle */\n  aux_lines(L, 1, 0);\n  return 1;\n}\n\n\nstatic int io_lines (lua_State *L) {\n  if (lua_isnoneornil(L, 
1)) {  /* no arguments? */\n    /* will iterate over default input */\n    lua_rawgeti(L, LUA_ENVIRONINDEX, IO_INPUT);\n    return f_lines(L);\n  }\n  else {\n    const char *filename = luaL_checkstring(L, 1);\n    FILE **pf = newfile(L);\n    *pf = fopen(filename, \"r\");\n    if (*pf == NULL)\n      fileerror(L, 1, filename);\n    aux_lines(L, lua_gettop(L), 1);\n    return 1;\n  }\n}\n\n\n/*\n** {======================================================\n** READ\n** =======================================================\n*/\n\n\nstatic int read_number (lua_State *L, FILE *f) {\n  lua_Number d;\n  if (fscanf(f, LUA_NUMBER_SCAN, &d) == 1) {\n    lua_pushnumber(L, d);\n    return 1;\n  }\n  else {\n    lua_pushnil(L);  /* \"result\" to be removed */\n    return 0;  /* read fails */\n  }\n}\n\n\nstatic int test_eof (lua_State *L, FILE *f) {\n  int c = getc(f);\n  ungetc(c, f);\n  lua_pushlstring(L, NULL, 0);\n  return (c != EOF);\n}\n\n\nstatic int read_line (lua_State *L, FILE *f) {\n  luaL_Buffer b;\n  luaL_buffinit(L, &b);\n  for (;;) {\n    size_t l;\n    char *p = luaL_prepbuffer(&b);\n    if (fgets(p, LUAL_BUFFERSIZE, f) == NULL) {  /* eof? 
*/\n      luaL_pushresult(&b);  /* close buffer */\n      return (lua_objlen(L, -1) > 0);  /* check whether read something */\n    }\n    l = strlen(p);\n    if (l == 0 || p[l-1] != '\\n')\n      luaL_addsize(&b, l);\n    else {\n      luaL_addsize(&b, l - 1);  /* do not include `eol' */\n      luaL_pushresult(&b);  /* close buffer */\n      return 1;  /* read at least an `eol' */\n    }\n  }\n}\n\n\nstatic int read_chars (lua_State *L, FILE *f, size_t n) {\n  size_t rlen;  /* how much to read */\n  size_t nr;  /* number of chars actually read */\n  luaL_Buffer b;\n  luaL_buffinit(L, &b);\n  rlen = LUAL_BUFFERSIZE;  /* try to read that much each time */\n  do {\n    char *p = luaL_prepbuffer(&b);\n    if (rlen > n) rlen = n;  /* cannot read more than asked */\n    nr = fread(p, sizeof(char), rlen, f);\n    luaL_addsize(&b, nr);\n    n -= nr;  /* still have to read `n' chars */\n  } while (n > 0 && nr == rlen);  /* until end of count or eof */\n  luaL_pushresult(&b);  /* close buffer */\n  return (n == 0 || lua_objlen(L, -1) > 0);\n}\n\n\nstatic int g_read (lua_State *L, FILE *f, int first) {\n  int nargs = lua_gettop(L) - 1;\n  int success;\n  int n;\n  clearerr(f);\n  if (nargs == 0) {  /* no arguments? */\n    success = read_line(L, f);\n    n = first+1;  /* to return 1 result */\n  }\n  else {  /* ensure stack space for all results and for auxlib's buffer */\n    luaL_checkstack(L, nargs+LUA_MINSTACK, \"too many arguments\");\n    success = 1;\n    for (n = first; nargs-- && success; n++) {\n      if (lua_type(L, n) == LUA_TNUMBER) {\n        size_t l = (size_t)lua_tointeger(L, n);\n        success = (l == 0) ? 
test_eof(L, f) : read_chars(L, f, l);\n      }\n      else {\n        const char *p = lua_tostring(L, n);\n        luaL_argcheck(L, p && p[0] == '*', n, \"invalid option\");\n        switch (p[1]) {\n          case 'n':  /* number */\n            success = read_number(L, f);\n            break;\n          case 'l':  /* line */\n            success = read_line(L, f);\n            break;\n          case 'a':  /* file */\n            read_chars(L, f, ~((size_t)0));  /* read MAX_SIZE_T chars */\n            success = 1; /* always success */\n            break;\n          default:\n            return luaL_argerror(L, n, \"invalid format\");\n        }\n      }\n    }\n  }\n  if (ferror(f))\n    return pushresult(L, 0, NULL);\n  if (!success) {\n    lua_pop(L, 1);  /* remove last result */\n    lua_pushnil(L);  /* push nil instead */\n  }\n  return n - first;\n}\n\n\nstatic int io_read (lua_State *L) {\n  return g_read(L, getiofile(L, IO_INPUT), 1);\n}\n\n\nstatic int f_read (lua_State *L) {\n  return g_read(L, tofile(L), 2);\n}\n\n\nstatic int io_readline (lua_State *L) {\n  FILE *f = *(FILE **)lua_touserdata(L, lua_upvalueindex(1));\n  int success;\n  if (f == NULL)  /* file is already closed? */\n    luaL_error(L, \"file is already closed\");\n  success = read_line(L, f);\n  if (ferror(f))\n    return luaL_error(L, \"%s\", strerror(errno));\n  if (success) return 1;\n  else {  /* EOF */\n    if (lua_toboolean(L, lua_upvalueindex(2))) {  /* generator created file? 
*/\n      lua_settop(L, 0);\n      lua_pushvalue(L, lua_upvalueindex(1));\n      aux_close(L);  /* close it */\n    }\n    return 0;\n  }\n}\n\n/* }====================================================== */\n\n\nstatic int g_write (lua_State *L, FILE *f, int arg) {\n  int nargs = lua_gettop(L) - 1;\n  int status = 1;\n  for (; nargs--; arg++) {\n    if (lua_type(L, arg) == LUA_TNUMBER) {\n      /* optimization: could be done exactly as for strings */\n      status = status &&\n          fprintf(f, LUA_NUMBER_FMT, lua_tonumber(L, arg)) > 0;\n    }\n    else {\n      size_t l;\n      const char *s = luaL_checklstring(L, arg, &l);\n      status = status && (fwrite(s, sizeof(char), l, f) == l);\n    }\n  }\n  return pushresult(L, status, NULL);\n}\n\n\nstatic int io_write (lua_State *L) {\n  return g_write(L, getiofile(L, IO_OUTPUT), 1);\n}\n\n\nstatic int f_write (lua_State *L) {\n  return g_write(L, tofile(L), 2);\n}\n\n\nstatic int f_seek (lua_State *L) {\n  static const int mode[] = {SEEK_SET, SEEK_CUR, SEEK_END};\n  static const char *const modenames[] = {\"set\", \"cur\", \"end\", NULL};\n  FILE *f = tofile(L);\n  int op = luaL_checkoption(L, 2, \"cur\", modenames);\n  long offset = luaL_optlong(L, 3, 0);\n  op = fseek(f, offset, mode[op]);\n  if (op)\n    return pushresult(L, 0, NULL);  /* error */\n  else {\n    lua_pushinteger(L, ftell(f));\n    return 1;\n  }\n}\n\n\nstatic int f_setvbuf (lua_State *L) {\n  static const int mode[] = {_IONBF, _IOFBF, _IOLBF};\n  static const char *const modenames[] = {\"no\", \"full\", \"line\", NULL};\n  FILE *f = tofile(L);\n  int op = luaL_checkoption(L, 2, NULL, modenames);\n  lua_Integer sz = luaL_optinteger(L, 3, LUAL_BUFFERSIZE);\n  int res = setvbuf(f, NULL, mode[op], sz);\n  return pushresult(L, res == 0, NULL);\n}\n\n\n\nstatic int io_flush (lua_State *L) {\n  return pushresult(L, fflush(getiofile(L, IO_OUTPUT)) == 0, NULL);\n}\n\n\nstatic int f_flush (lua_State *L) {\n  return pushresult(L, fflush(tofile(L)) == 0, 
NULL);\n}\n\n\nstatic const luaL_Reg iolib[] = {\n  {\"close\", io_close},\n  {\"flush\", io_flush},\n  {\"input\", io_input},\n  {\"lines\", io_lines},\n  {\"open\", io_open},\n  {\"output\", io_output},\n  {\"popen\", io_popen},\n  {\"read\", io_read},\n  {\"tmpfile\", io_tmpfile},\n  {\"type\", io_type},\n  {\"write\", io_write},\n  {NULL, NULL}\n};\n\n\nstatic const luaL_Reg flib[] = {\n  {\"close\", io_close},\n  {\"flush\", f_flush},\n  {\"lines\", f_lines},\n  {\"read\", f_read},\n  {\"seek\", f_seek},\n  {\"setvbuf\", f_setvbuf},\n  {\"write\", f_write},\n  {\"__gc\", io_gc},\n  {\"__tostring\", io_tostring},\n  {NULL, NULL}\n};\n\n\nstatic void createmeta (lua_State *L) {\n  luaL_newmetatable(L, LUA_FILEHANDLE);  /* create metatable for file handles */\n  lua_pushvalue(L, -1);  /* push metatable */\n  lua_setfield(L, -2, \"__index\");  /* metatable.__index = metatable */\n  luaL_register(L, NULL, flib);  /* file methods */\n}\n\n\nstatic void createstdfile (lua_State *L, FILE *f, int k, const char *fname) {\n  *newfile(L) = f;\n  if (k > 0) {\n    lua_pushvalue(L, -1);\n    lua_rawseti(L, LUA_ENVIRONINDEX, k);\n  }\n  lua_pushvalue(L, -2);  /* copy environment */\n  lua_setfenv(L, -2);  /* set it */\n  lua_setfield(L, -3, fname);\n}\n\n\nstatic void newfenv (lua_State *L, lua_CFunction cls) {\n  lua_createtable(L, 0, 1);\n  lua_pushcfunction(L, cls);\n  lua_setfield(L, -2, \"__close\");\n}\n\n\nLUALIB_API int luaopen_io (lua_State *L) {\n  createmeta(L);\n  /* create (private) environment (with fields IO_INPUT, IO_OUTPUT, __close) */\n  newfenv(L, io_fclose);\n  lua_replace(L, LUA_ENVIRONINDEX);\n  /* open library */\n  luaL_register(L, LUA_IOLIBNAME, iolib);\n  /* create (and set) default files */\n  newfenv(L, io_noclose);  /* close function for default files */\n  createstdfile(L, stdin, IO_INPUT, \"stdin\");\n  createstdfile(L, stdout, IO_OUTPUT, \"stdout\");\n  createstdfile(L, stderr, 0, \"stderr\");\n  lua_pop(L, 1);  /* pop environment for default 
files */\n  lua_getfield(L, -1, \"popen\");\n  newfenv(L, io_pclose);  /* create environment for 'popen' */\n  lua_setfenv(L, -2);  /* set fenv for 'popen' */\n  lua_pop(L, 1);  /* pop 'popen' */\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/llex.c",
    "content": "/*\n** $Id: llex.c,v 2.20.1.2 2009/11/23 14:58:22 roberto Exp $\n** Lexical Analyzer\n** See Copyright Notice in lua.h\n*/\n\n\n#include <ctype.h>\n#include <locale.h>\n#include <string.h>\n\n#define llex_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldo.h\"\n#include \"llex.h\"\n#include \"lobject.h\"\n#include \"lparser.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"lzio.h\"\n\n\n\n#define next(ls) (ls->current = zgetc(ls->z))\n\n\n\n\n#define currIsNewline(ls)\t(ls->current == '\\n' || ls->current == '\\r')\n\n\n/* ORDER RESERVED */\nconst char *const luaX_tokens [] = {\n    \"and\", \"break\", \"do\", \"else\", \"elseif\",\n    \"end\", \"false\", \"for\", \"function\", \"if\",\n    \"in\", \"local\", \"nil\", \"not\", \"or\", \"repeat\",\n    \"return\", \"then\", \"true\", \"until\", \"while\",\n    \"..\", \"...\", \"==\", \">=\", \"<=\", \"~=\",\n    \"<number>\", \"<name>\", \"<string>\", \"<eof>\",\n    NULL\n};\n\n\n#define save_and_next(ls) (save(ls, ls->current), next(ls))\n\n\nstatic void save (LexState *ls, int c) {\n  Mbuffer *b = ls->buff;\n  if (b->n + 1 > b->buffsize) {\n    size_t newsize;\n    if (b->buffsize >= MAX_SIZET/2)\n      luaX_lexerror(ls, \"lexical element too long\", 0);\n    newsize = b->buffsize * 2;\n    luaZ_resizebuffer(ls->L, b, newsize);\n  }\n  b->buffer[b->n++] = cast(char, c);\n}\n\n\nvoid luaX_init (lua_State *L) {\n  int i;\n  for (i=0; i<NUM_RESERVED; i++) {\n    TString *ts = luaS_new(L, luaX_tokens[i]);\n    luaS_fix(ts);  /* reserved words are never collected */\n    lua_assert(strlen(luaX_tokens[i])+1 <= TOKEN_LEN);\n    ts->tsv.reserved = cast_byte(i+1);  /* reserved word */\n  }\n}\n\n\n#define MAXSRC          80\n\n\nconst char *luaX_token2str (LexState *ls, int token) {\n  if (token < FIRST_RESERVED) {\n    lua_assert(token == cast(unsigned char, token));\n    return (iscntrl(token)) ? 
luaO_pushfstring(ls->L, \"char(%d)\", token) :\n                              luaO_pushfstring(ls->L, \"%c\", token);\n  }\n  else\n    return luaX_tokens[token-FIRST_RESERVED];\n}\n\n\nstatic const char *txtToken (LexState *ls, int token) {\n  switch (token) {\n    case TK_NAME:\n    case TK_STRING:\n    case TK_NUMBER:\n      save(ls, '\\0');\n      return luaZ_buffer(ls->buff);\n    default:\n      return luaX_token2str(ls, token);\n  }\n}\n\n\nvoid luaX_lexerror (LexState *ls, const char *msg, int token) {\n  char buff[MAXSRC];\n  luaO_chunkid(buff, getstr(ls->source), MAXSRC);\n  msg = luaO_pushfstring(ls->L, \"%s:%d: %s\", buff, ls->linenumber, msg);\n  if (token)\n    luaO_pushfstring(ls->L, \"%s near \" LUA_QS, msg, txtToken(ls, token));\n  luaD_throw(ls->L, LUA_ERRSYNTAX);\n}\n\n\nvoid luaX_syntaxerror (LexState *ls, const char *msg) {\n  luaX_lexerror(ls, msg, ls->t.token);\n}\n\n\nTString *luaX_newstring (LexState *ls, const char *str, size_t l) {\n  lua_State *L = ls->L;\n  TString *ts = luaS_newlstr(L, str, l);\n  TValue *o = luaH_setstr(L, ls->fs->h, ts);  /* entry for `str' */\n  if (ttisnil(o)) {\n    setbvalue(o, 1);  /* make sure `str' will not be collected */\n    luaC_checkGC(L);\n  }\n  return ts;\n}\n\n\nstatic void inclinenumber (LexState *ls) {\n  int old = ls->current;\n  lua_assert(currIsNewline(ls));\n  next(ls);  /* skip `\\n' or `\\r' */\n  if (currIsNewline(ls) && ls->current != old)\n    next(ls);  /* skip `\\n\\r' or `\\r\\n' */\n  if (++ls->linenumber >= MAX_INT)\n    luaX_syntaxerror(ls, \"chunk has too many lines\");\n}\n\n\nvoid luaX_setinput (lua_State *L, LexState *ls, ZIO *z, TString *source) {\n  ls->t.token = 0;\n  ls->decpoint = '.';\n  ls->L = L;\n  ls->lookahead.token = TK_EOS;  /* no look-ahead token */\n  ls->z = z;\n  ls->fs = NULL;\n  ls->linenumber = 1;\n  ls->lastline = 1;\n  ls->source = source;\n  luaZ_resizebuffer(ls->L, ls->buff, LUA_MINBUFFER);  /* initialize buffer */\n  next(ls);  /* read first char 
*/\n}\n\n\n\n/*\n** =======================================================\n** LEXICAL ANALYZER\n** =======================================================\n*/\n\n\n\nstatic int check_next (LexState *ls, const char *set) {\n  if (!strchr(set, ls->current))\n    return 0;\n  save_and_next(ls);\n  return 1;\n}\n\n\nstatic void buffreplace (LexState *ls, char from, char to) {\n  size_t n = luaZ_bufflen(ls->buff);\n  char *p = luaZ_buffer(ls->buff);\n  while (n--)\n    if (p[n] == from) p[n] = to;\n}\n\n\nstatic void trydecpoint (LexState *ls, SemInfo *seminfo) {\n  /* format error: try to update decimal point separator */\n  struct lconv *cv = localeconv();\n  char old = ls->decpoint;\n  ls->decpoint = (cv ? cv->decimal_point[0] : '.');\n  buffreplace(ls, old, ls->decpoint);  /* try updated decimal separator */\n  if (!luaO_str2d(luaZ_buffer(ls->buff), &seminfo->r)) {\n    /* format error with correct decimal point: no more options */\n    buffreplace(ls, ls->decpoint, '.');  /* undo change (for error message) */\n    luaX_lexerror(ls, \"malformed number\", TK_NUMBER);\n  }\n}\n\n\n/* LUA_NUMBER */\nstatic void read_numeral (LexState *ls, SemInfo *seminfo) {\n  lua_assert(isdigit(ls->current));\n  do {\n    save_and_next(ls);\n  } while (isdigit(ls->current) || ls->current == '.');\n  if (check_next(ls, \"Ee\"))  /* `E'? */\n    check_next(ls, \"+-\");  /* optional exponent sign */\n  while (isalnum(ls->current) || ls->current == '_')\n    save_and_next(ls);\n  save(ls, '\\0');\n  buffreplace(ls, '.', ls->decpoint);  /* follow locale for decimal point */\n  if (!luaO_str2d(luaZ_buffer(ls->buff), &seminfo->r))  /* format error? 
*/\n    trydecpoint(ls, seminfo); /* try to update decimal point separator */\n}\n\n/*\n** reads a sequence '[=*[' or ']=*]', leaving the last bracket.\n** If a sequence is well-formed, return its number of '='s + 2; otherwise,\n** return 1 if there is no '='s or 0 otherwise (an unfinished '[==...').\n*/\nstatic size_t skip_sep (LexState *ls) {\n  size_t count = 0;\n  int s = ls->current;\n  lua_assert(s == '[' || s == ']');\n  save_and_next(ls);\n  while (ls->current == '=') {\n    save_and_next(ls);\n    count++;\n  }\n   return (ls->current == s) ? count + 2\n          : (count == 0) ? 1\n          : 0;\n}\n\n\nstatic void read_long_string (LexState *ls, SemInfo *seminfo, size_t sep) {\n  int cont = 0;\n  (void)(cont);  /* avoid warnings when `cont' is not used */\n  save_and_next(ls);  /* skip 2nd `[' */\n  if (currIsNewline(ls))  /* string starts with a newline? */\n    inclinenumber(ls);  /* skip it */\n  for (;;) {\n    switch (ls->current) {\n      case EOZ:\n        luaX_lexerror(ls, (seminfo) ? 
\"unfinished long string\" :\n                                   \"unfinished long comment\", TK_EOS);\n        break;  /* to avoid warnings */\n#if defined(LUA_COMPAT_LSTR)\n      case '[': {\n        if (skip_sep(ls) == sep) {\n          save_and_next(ls);  /* skip 2nd `[' */\n          cont++;\n#if LUA_COMPAT_LSTR == 1\n          if (sep == 0)\n            luaX_lexerror(ls, \"nesting of [[...]] is deprecated\", '[');\n#endif\n        }\n        break;\n      }\n#endif\n      case ']': {\n        if (skip_sep(ls) == sep) {\n          save_and_next(ls);  /* skip 2nd `]' */\n#if defined(LUA_COMPAT_LSTR) && LUA_COMPAT_LSTR == 2\n          cont--;\n          if (sep == 0 && cont >= 0) break;\n#endif\n          goto endloop;\n        }\n        break;\n      }\n      case '\\n':\n      case '\\r': {\n        save(ls, '\\n');\n        inclinenumber(ls);\n        if (!seminfo) luaZ_resetbuffer(ls->buff);  /* avoid wasting space */\n        break;\n      }\n      default: {\n        if (seminfo) save_and_next(ls);\n        else next(ls);\n      }\n    }\n  } endloop:\n  if (seminfo)\n    seminfo->ts = luaX_newstring(ls, luaZ_buffer(ls->buff) + sep,\n                                     luaZ_bufflen(ls->buff) - 2 * sep);\n}\n\n\nstatic void read_string (LexState *ls, int del, SemInfo *seminfo) {\n  save_and_next(ls);\n  while (ls->current != del) {\n    switch (ls->current) {\n      case EOZ:\n        luaX_lexerror(ls, \"unfinished string\", TK_EOS);\n        continue;  /* to avoid warnings */\n      case '\\n':\n      case '\\r':\n        luaX_lexerror(ls, \"unfinished string\", TK_STRING);\n        continue;  /* to avoid warnings */\n      case '\\\\': {\n        int c;\n        next(ls);  /* do not save the `\\' */\n        switch (ls->current) {\n          case 'a': c = '\\a'; break;\n          case 'b': c = '\\b'; break;\n          case 'f': c = '\\f'; break;\n          case 'n': c = '\\n'; break;\n          case 'r': c = '\\r'; break;\n          case 't': c = '\\t'; 
break;\n          case 'v': c = '\\v'; break;\n          case '\\n':  /* go through */\n          case '\\r': save(ls, '\\n'); inclinenumber(ls); continue;\n          case EOZ: continue;  /* will raise an error next loop */\n          default: {\n            if (!isdigit(ls->current))\n              save_and_next(ls);  /* handles \\\\, \\\", \\', and \\? */\n            else {  /* \\xxx */\n              int i = 0;\n              c = 0;\n              do {\n                c = 10*c + (ls->current-'0');\n                next(ls);\n              } while (++i<3 && isdigit(ls->current));\n              if (c > UCHAR_MAX)\n                luaX_lexerror(ls, \"escape sequence too large\", TK_STRING);\n              save(ls, c);\n            }\n            continue;\n          }\n        }\n        save(ls, c);\n        next(ls);\n        continue;\n      }\n      default:\n        save_and_next(ls);\n    }\n  }\n  save_and_next(ls);  /* skip delimiter */\n  seminfo->ts = luaX_newstring(ls, luaZ_buffer(ls->buff) + 1,\n                                   luaZ_bufflen(ls->buff) - 2);\n}\n\n\nstatic int llex (LexState *ls, SemInfo *seminfo) {\n  luaZ_resetbuffer(ls->buff);\n  for (;;) {\n    switch (ls->current) {\n      case '\\n':\n      case '\\r': {\n        inclinenumber(ls);\n        continue;\n      }\n      case '-': {\n        next(ls);\n        if (ls->current != '-') return '-';\n        /* else is a comment */\n        next(ls);\n        if (ls->current == '[') {\n          size_t sep = skip_sep(ls);\n          luaZ_resetbuffer(ls->buff);  /* `skip_sep' may dirty the buffer */\n          if (sep >= 2) {\n            read_long_string(ls, NULL, sep);  /* long comment */\n            luaZ_resetbuffer(ls->buff);\n            continue;\n          }\n        }\n        /* else short comment */\n        while (!currIsNewline(ls) && ls->current != EOZ)\n          next(ls);\n        continue;\n      }\n      case '[': {\n        size_t sep = skip_sep(ls);\n        if (sep 
>= 2) {\n          read_long_string(ls, seminfo, sep);\n          return TK_STRING;\n        }\n        else if (sep == 0)  /* '[=...' missing second bracket */\n          luaX_lexerror(ls, \"invalid long string delimiter\", TK_STRING);\n        return '[';\n      }\n      case '=': {\n        next(ls);\n        if (ls->current != '=') return '=';\n        else { next(ls); return TK_EQ; }\n      }\n      case '<': {\n        next(ls);\n        if (ls->current != '=') return '<';\n        else { next(ls); return TK_LE; }\n      }\n      case '>': {\n        next(ls);\n        if (ls->current != '=') return '>';\n        else { next(ls); return TK_GE; }\n      }\n      case '~': {\n        next(ls);\n        if (ls->current != '=') return '~';\n        else { next(ls); return TK_NE; }\n      }\n      case '\"':\n      case '\\'': {\n        read_string(ls, ls->current, seminfo);\n        return TK_STRING;\n      }\n      case '.': {\n        save_and_next(ls);\n        if (check_next(ls, \".\")) {\n          if (check_next(ls, \".\"))\n            return TK_DOTS;   /* ... */\n          else return TK_CONCAT;   /* .. 
*/\n        }\n        else if (!isdigit(ls->current)) return '.';\n        else {\n          read_numeral(ls, seminfo);\n          return TK_NUMBER;\n        }\n      }\n      case EOZ: {\n        return TK_EOS;\n      }\n      default: {\n        if (isspace(ls->current)) {\n          lua_assert(!currIsNewline(ls));\n          next(ls);\n          continue;\n        }\n        else if (isdigit(ls->current)) {\n          read_numeral(ls, seminfo);\n          return TK_NUMBER;\n        }\n        else if (isalpha(ls->current) || ls->current == '_') {\n          /* identifier or reserved word */\n          TString *ts;\n          do {\n            save_and_next(ls);\n          } while (isalnum(ls->current) || ls->current == '_');\n          ts = luaX_newstring(ls, luaZ_buffer(ls->buff),\n                                  luaZ_bufflen(ls->buff));\n          if (ts->tsv.reserved > 0)  /* reserved word? */\n            return ts->tsv.reserved - 1 + FIRST_RESERVED;\n          else {\n            seminfo->ts = ts;\n            return TK_NAME;\n          }\n        }\n        else {\n          int c = ls->current;\n          next(ls);\n          return c;  /* single-char tokens (+ - / ...) */\n        }\n      }\n    }\n  }\n}\n\n\nvoid luaX_next (LexState *ls) {\n  ls->lastline = ls->linenumber;\n  if (ls->lookahead.token != TK_EOS) {  /* is there a look-ahead token? */\n    ls->t = ls->lookahead;  /* use this one */\n    ls->lookahead.token = TK_EOS;  /* and discharge it */\n  }\n  else\n    ls->t.token = llex(ls, &ls->t.seminfo);  /* read next token */\n}\n\n\nvoid luaX_lookahead (LexState *ls) {\n  lua_assert(ls->lookahead.token == TK_EOS);\n  ls->lookahead.token = llex(ls, &ls->lookahead.seminfo);\n}\n\n"
  },
  {
    "path": "deps/lua/src/llex.h",
    "content": "/*\n** $Id: llex.h,v 1.58.1.1 2007/12/27 13:02:25 roberto Exp $\n** Lexical Analyzer\n** See Copyright Notice in lua.h\n*/\n\n#ifndef llex_h\n#define llex_h\n\n#include \"lobject.h\"\n#include \"lzio.h\"\n\n\n#define FIRST_RESERVED\t257\n\n/* maximum length of a reserved word */\n#define TOKEN_LEN\t(sizeof(\"function\")/sizeof(char))\n\n\n/*\n* WARNING: if you change the order of this enumeration,\n* grep \"ORDER RESERVED\"\n*/\nenum RESERVED {\n  /* terminal symbols denoted by reserved words */\n  TK_AND = FIRST_RESERVED, TK_BREAK,\n  TK_DO, TK_ELSE, TK_ELSEIF, TK_END, TK_FALSE, TK_FOR, TK_FUNCTION,\n  TK_IF, TK_IN, TK_LOCAL, TK_NIL, TK_NOT, TK_OR, TK_REPEAT,\n  TK_RETURN, TK_THEN, TK_TRUE, TK_UNTIL, TK_WHILE,\n  /* other terminal symbols */\n  TK_CONCAT, TK_DOTS, TK_EQ, TK_GE, TK_LE, TK_NE, TK_NUMBER,\n  TK_NAME, TK_STRING, TK_EOS\n};\n\n/* number of reserved words */\n#define NUM_RESERVED\t(cast(int, TK_WHILE-FIRST_RESERVED+1))\n\n\n/* array with token `names' */\nLUAI_DATA const char *const luaX_tokens [];\n\n\ntypedef union {\n  lua_Number r;\n  TString *ts;\n} SemInfo;  /* semantics information */\n\n\ntypedef struct Token {\n  int token;\n  SemInfo seminfo;\n} Token;\n\n\ntypedef struct LexState {\n  int current;  /* current character (charint) */\n  int linenumber;  /* input line counter */\n  int lastline;  /* line of last token `consumed' */\n  Token t;  /* current token */\n  Token lookahead;  /* look ahead token */\n  struct FuncState *fs;  /* `FuncState' is private to the parser */\n  struct lua_State *L;\n  ZIO *z;  /* input stream */\n  Mbuffer *buff;  /* buffer for tokens */\n  TString *source;  /* current source name */\n  char decpoint;  /* locale decimal point */\n} LexState;\n\n\nLUAI_FUNC void luaX_init (lua_State *L);\nLUAI_FUNC void luaX_setinput (lua_State *L, LexState *ls, ZIO *z,\n                              TString *source);\nLUAI_FUNC TString *luaX_newstring (LexState *ls, const char *str, size_t l);\nLUAI_FUNC void 
luaX_next (LexState *ls);\nLUAI_FUNC void luaX_lookahead (LexState *ls);\nLUAI_FUNC void luaX_lexerror (LexState *ls, const char *msg, int token);\nLUAI_FUNC void luaX_syntaxerror (LexState *ls, const char *s);\nLUAI_FUNC const char *luaX_token2str (LexState *ls, int token);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/llimits.h",
    "content": "/*\n** $Id: llimits.h,v 1.69.1.1 2007/12/27 13:02:25 roberto Exp $\n** Limits, basic types, and some other `installation-dependent' definitions\n** See Copyright Notice in lua.h\n*/\n\n#ifndef llimits_h\n#define llimits_h\n\n\n#include <limits.h>\n#include <stddef.h>\n\n\n#include \"lua.h\"\n\n\ntypedef LUAI_UINT32 lu_int32;\n\ntypedef LUAI_UMEM lu_mem;\n\ntypedef LUAI_MEM l_mem;\n\n\n\n/* chars used as small naturals (so that `char' is reserved for characters) */\ntypedef unsigned char lu_byte;\n\n\n#define MAX_SIZET\t((size_t)(~(size_t)0)-2)\n\n#define MAX_LUMEM\t((lu_mem)(~(lu_mem)0)-2)\n\n\n#define MAX_INT (INT_MAX-2)  /* maximum value of an int (-2 for safety) */\n\n/*\n** conversion of pointer to integer\n** this is for hashing only; there is no problem if the integer\n** cannot hold the whole pointer value\n*/\n#define IntPoint(p)  ((unsigned int)(lu_mem)(p))\n\n\n\n/* type to ensure maximum alignment */\ntypedef LUAI_USER_ALIGNMENT_T L_Umaxalign;\n\n\n/* result of a `usual argument conversion' over lua_Number */\ntypedef LUAI_UACNUMBER l_uacNumber;\n\n\n/* internal assertions for in-house debugging */\n#ifdef lua_assert\n\n#define check_exp(c,e)\t\t(lua_assert(c), (e))\n#define api_check(l,e)\t\tlua_assert(e)\n\n#else\n\n#define lua_assert(c)\t\t((void)0)\n#define check_exp(c,e)\t\t(e)\n#define api_check\t\tluai_apicheck\n\n#endif\n\n\n#ifndef UNUSED\n#define UNUSED(x)\t((void)(x))\t/* to avoid warnings */\n#endif\n\n\n#ifndef cast\n#define cast(t, exp)\t((t)(exp))\n#endif\n\n#define cast_byte(i)\tcast(lu_byte, (i))\n#define cast_num(i)\tcast(lua_Number, (i))\n#define cast_int(i)\tcast(int, (i))\n\n\n\n/*\n** type for virtual-machine instructions\n** must be an unsigned with (at least) 4 bytes (see details in lopcodes.h)\n*/\ntypedef lu_int32 Instruction;\n\n\n\n/* maximum stack for a Lua function */\n#define MAXSTACK\t250\n\n\n\n/* minimum size for the string table (must be power of 2) */\n#ifndef MINSTRTABSIZE\n#define 
MINSTRTABSIZE\t32\n#endif\n\n\n/* minimum size for string buffer */\n#ifndef LUA_MINBUFFER\n#define LUA_MINBUFFER\t32\n#endif\n\n\n#ifndef lua_lock\n#define lua_lock(L)     ((void) 0) \n#define lua_unlock(L)   ((void) 0)\n#endif\n\n#ifndef luai_threadyield\n#define luai_threadyield(L)     {lua_unlock(L); lua_lock(L);}\n#endif\n\n\n/*\n** macro to control inclusion of some hard tests on stack reallocation\n*/ \n#ifndef HARDSTACKTESTS\n#define condhardstacktests(x)\t((void)0)\n#else\n#define condhardstacktests(x)\tx\n#endif\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lmathlib.c",
    "content": "/*\n** $Id: lmathlib.c,v 1.67.1.1 2007/12/27 13:02:25 roberto Exp $\n** Standard mathematical library\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stdlib.h>\n#include <math.h>\n\n#define lmathlib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n#undef PI\n#define PI (3.14159265358979323846)\n#define RADIANS_PER_DEGREE (PI/180.0)\n\n\n\nstatic int math_abs (lua_State *L) {\n  lua_pushnumber(L, fabs(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_sin (lua_State *L) {\n  lua_pushnumber(L, sin(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_sinh (lua_State *L) {\n  lua_pushnumber(L, sinh(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_cos (lua_State *L) {\n  lua_pushnumber(L, cos(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_cosh (lua_State *L) {\n  lua_pushnumber(L, cosh(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_tan (lua_State *L) {\n  lua_pushnumber(L, tan(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_tanh (lua_State *L) {\n  lua_pushnumber(L, tanh(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_asin (lua_State *L) {\n  lua_pushnumber(L, asin(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_acos (lua_State *L) {\n  lua_pushnumber(L, acos(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_atan (lua_State *L) {\n  lua_pushnumber(L, atan(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_atan2 (lua_State *L) {\n  lua_pushnumber(L, atan2(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));\n  return 1;\n}\n\nstatic int math_ceil (lua_State *L) {\n  lua_pushnumber(L, ceil(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_floor (lua_State *L) {\n  lua_pushnumber(L, floor(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_fmod (lua_State *L) {\n  lua_pushnumber(L, fmod(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));\n  return 1;\n}\n\nstatic int math_modf 
(lua_State *L) {\n  double ip;\n  double fp = modf(luaL_checknumber(L, 1), &ip);\n  lua_pushnumber(L, ip);\n  lua_pushnumber(L, fp);\n  return 2;\n}\n\nstatic int math_sqrt (lua_State *L) {\n  lua_pushnumber(L, sqrt(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_pow (lua_State *L) {\n  lua_pushnumber(L, pow(luaL_checknumber(L, 1), luaL_checknumber(L, 2)));\n  return 1;\n}\n\nstatic int math_log (lua_State *L) {\n  lua_pushnumber(L, log(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_log10 (lua_State *L) {\n  lua_pushnumber(L, log10(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_exp (lua_State *L) {\n  lua_pushnumber(L, exp(luaL_checknumber(L, 1)));\n  return 1;\n}\n\nstatic int math_deg (lua_State *L) {\n  lua_pushnumber(L, luaL_checknumber(L, 1)/RADIANS_PER_DEGREE);\n  return 1;\n}\n\nstatic int math_rad (lua_State *L) {\n  lua_pushnumber(L, luaL_checknumber(L, 1)*RADIANS_PER_DEGREE);\n  return 1;\n}\n\nstatic int math_frexp (lua_State *L) {\n  int e;\n  lua_pushnumber(L, frexp(luaL_checknumber(L, 1), &e));\n  lua_pushinteger(L, e);\n  return 2;\n}\n\nstatic int math_ldexp (lua_State *L) {\n  lua_pushnumber(L, ldexp(luaL_checknumber(L, 1), luaL_checkint(L, 2)));\n  return 1;\n}\n\n\n\nstatic int math_min (lua_State *L) {\n  int n = lua_gettop(L);  /* number of arguments */\n  lua_Number dmin = luaL_checknumber(L, 1);\n  int i;\n  for (i=2; i<=n; i++) {\n    lua_Number d = luaL_checknumber(L, i);\n    if (d < dmin)\n      dmin = d;\n  }\n  lua_pushnumber(L, dmin);\n  return 1;\n}\n\n\nstatic int math_max (lua_State *L) {\n  int n = lua_gettop(L);  /* number of arguments */\n  lua_Number dmax = luaL_checknumber(L, 1);\n  int i;\n  for (i=2; i<=n; i++) {\n    lua_Number d = luaL_checknumber(L, i);\n    if (d > dmax)\n      dmax = d;\n  }\n  lua_pushnumber(L, dmax);\n  return 1;\n}\n\n\nstatic int math_random (lua_State *L) {\n  /* the `%' avoids the (rare) case of r==1, and is needed also because on\n     some systems (SunOS!) 
`rand()' may return a value larger than RAND_MAX */\n  lua_Number r = (lua_Number)(rand()%RAND_MAX) / (lua_Number)RAND_MAX;\n  switch (lua_gettop(L)) {  /* check number of arguments */\n    case 0: {  /* no arguments */\n      lua_pushnumber(L, r);  /* Number between 0 and 1 */\n      break;\n    }\n    case 1: {  /* only upper limit */\n      int u = luaL_checkint(L, 1);\n      luaL_argcheck(L, 1<=u, 1, \"interval is empty\");\n      lua_pushnumber(L, floor(r*u)+1);  /* int between 1 and `u' */\n      break;\n    }\n    case 2: {  /* lower and upper limits */\n      int l = luaL_checkint(L, 1);\n      int u = luaL_checkint(L, 2);\n      luaL_argcheck(L, l<=u, 2, \"interval is empty\");\n      lua_pushnumber(L, floor(r*(u-l+1))+l);  /* int between `l' and `u' */\n      break;\n    }\n    default: return luaL_error(L, \"wrong number of arguments\");\n  }\n  return 1;\n}\n\n\nstatic int math_randomseed (lua_State *L) {\n  srand(luaL_checkint(L, 1));\n  return 0;\n}\n\n\nstatic const luaL_Reg mathlib[] = {\n  {\"abs\",   math_abs},\n  {\"acos\",  math_acos},\n  {\"asin\",  math_asin},\n  {\"atan2\", math_atan2},\n  {\"atan\",  math_atan},\n  {\"ceil\",  math_ceil},\n  {\"cosh\",   math_cosh},\n  {\"cos\",   math_cos},\n  {\"deg\",   math_deg},\n  {\"exp\",   math_exp},\n  {\"floor\", math_floor},\n  {\"fmod\",   math_fmod},\n  {\"frexp\", math_frexp},\n  {\"ldexp\", math_ldexp},\n  {\"log10\", math_log10},\n  {\"log\",   math_log},\n  {\"max\",   math_max},\n  {\"min\",   math_min},\n  {\"modf\",   math_modf},\n  {\"pow\",   math_pow},\n  {\"rad\",   math_rad},\n  {\"random\",     math_random},\n  {\"randomseed\", math_randomseed},\n  {\"sinh\",   math_sinh},\n  {\"sin\",   math_sin},\n  {\"sqrt\",  math_sqrt},\n  {\"tanh\",   math_tanh},\n  {\"tan\",   math_tan},\n  {NULL, NULL}\n};\n\n\n/*\n** Open math library\n*/\nLUALIB_API int luaopen_math (lua_State *L) {\n  luaL_register(L, LUA_MATHLIBNAME, mathlib);\n  lua_pushnumber(L, PI);\n  lua_setfield(L, -2, \"pi\");\n  
lua_pushnumber(L, HUGE_VAL);\n  lua_setfield(L, -2, \"huge\");\n#if defined(LUA_COMPAT_MOD)\n  lua_getfield(L, -1, \"fmod\");\n  lua_setfield(L, -2, \"mod\");\n#endif\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lmem.c",
    "content": "/*\n** $Id: lmem.c,v 1.70.1.1 2007/12/27 13:02:25 roberto Exp $\n** Interface to Memory Manager\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stddef.h>\n\n#define lmem_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n\n\n\n/*\n** About the realloc function:\n** void * frealloc (void *ud, void *ptr, size_t osize, size_t nsize);\n** (`osize' is the old size, `nsize' is the new size)\n**\n** Lua ensures that (ptr == NULL) iff (osize == 0).\n**\n** * frealloc(ud, NULL, 0, x) creates a new block of size `x'\n**\n** * frealloc(ud, p, x, 0) frees the block `p'\n** (in this specific case, frealloc must return NULL).\n** particularly, frealloc(ud, NULL, 0, 0) does nothing\n** (which is equivalent to free(NULL) in ANSI C)\n**\n** frealloc returns NULL if it cannot create or reallocate the area\n** (any reallocation to an equal or smaller size cannot fail!)\n*/\n\n\n\n#define MINSIZEARRAY\t4\n\n\nvoid *luaM_growaux_ (lua_State *L, void *block, int *size, size_t size_elems,\n                     int limit, const char *errormsg) {\n  void *newblock;\n  int newsize;\n  if (*size >= limit/2) {  /* cannot double it? */\n    if (*size >= limit)  /* cannot grow even a little? 
*/\n      luaG_runerror(L, errormsg);\n    newsize = limit;  /* still have at least one free place */\n  }\n  else {\n    newsize = (*size)*2;\n    if (newsize < MINSIZEARRAY)\n      newsize = MINSIZEARRAY;  /* minimum size */\n  }\n  newblock = luaM_reallocv(L, block, *size, newsize, size_elems);\n  *size = newsize;  /* update only when everything else is OK */\n  return newblock;\n}\n\n\nvoid *luaM_toobig (lua_State *L) {\n  luaG_runerror(L, \"memory allocation error: block too big\");\n  return NULL;  /* to avoid warnings */\n}\n\n\n\n/*\n** generic allocation routine.\n*/\nvoid *luaM_realloc_ (lua_State *L, void *block, size_t osize, size_t nsize) {\n  global_State *g = G(L);\n  lua_assert((osize == 0) == (block == NULL));\n  block = (*g->frealloc)(g->ud, block, osize, nsize);\n  if (block == NULL && nsize > 0)\n    luaD_throw(L, LUA_ERRMEM);\n  lua_assert((nsize == 0) == (block == NULL));\n  g->totalbytes = (g->totalbytes - osize) + nsize;\n  return block;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lmem.h",
    "content": "/*\n** $Id: lmem.h,v 1.31.1.1 2007/12/27 13:02:25 roberto Exp $\n** Interface to Memory Manager\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lmem_h\n#define lmem_h\n\n\n#include <stddef.h>\n\n#include \"llimits.h\"\n#include \"lua.h\"\n\n#define MEMERRMSG\t\"not enough memory\"\n\n\n#define luaM_reallocv(L,b,on,n,e) \\\n\t((cast(size_t, (n)+1) <= MAX_SIZET/(e)) ?  /* +1 to avoid warnings */ \\\n\t\tluaM_realloc_(L, (b), (on)*(e), (n)*(e)) : \\\n\t\tluaM_toobig(L))\n\n#define luaM_freemem(L, b, s)\tluaM_realloc_(L, (b), (s), 0)\n#define luaM_free(L, b)\t\tluaM_realloc_(L, (b), sizeof(*(b)), 0)\n#define luaM_freearray(L, b, n, t)   luaM_reallocv(L, (b), n, 0, sizeof(t))\n\n#define luaM_malloc(L,t)\tluaM_realloc_(L, NULL, 0, (t))\n#define luaM_new(L,t)\t\tcast(t *, luaM_malloc(L, sizeof(t)))\n#define luaM_newvector(L,n,t) \\\n\t\tcast(t *, luaM_reallocv(L, NULL, 0, n, sizeof(t)))\n\n#define luaM_growvector(L,v,nelems,size,t,limit,e) \\\n          if ((nelems)+1 > (size)) \\\n            ((v)=cast(t *, luaM_growaux_(L,v,&(size),sizeof(t),limit,e)))\n\n#define luaM_reallocvector(L, v,oldn,n,t) \\\n   ((v)=cast(t *, luaM_reallocv(L, v, oldn, n, sizeof(t))))\n\n\nLUAI_FUNC void *luaM_realloc_ (lua_State *L, void *block, size_t oldsize,\n                                                          size_t size);\nLUAI_FUNC void *luaM_toobig (lua_State *L);\nLUAI_FUNC void *luaM_growaux_ (lua_State *L, void *block, int *size,\n                               size_t size_elem, int limit,\n                               const char *errormsg);\n\n#endif\n\n"
  },
  {
    "path": "deps/lua/src/loadlib.c",
    "content": "/*\n** $Id: loadlib.c,v 1.52.1.4 2009/09/09 13:17:16 roberto Exp $\n** Dynamic library loader for Lua\n** See Copyright Notice in lua.h\n**\n** This module contains an implementation of loadlib for Unix systems\n** that have dlfcn, an implementation for Darwin (Mac OS X), an\n** implementation for Windows, and a stub for other systems.\n*/\n\n\n#include <stdlib.h>\n#include <string.h>\n\n\n#define loadlib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n/* prefix for open functions in C libraries */\n#define LUA_POF\t\t\"luaopen_\"\n\n/* separator for open functions in C libraries */\n#define LUA_OFSEP\t\"_\"\n\n\n#define LIBPREFIX\t\"LOADLIB: \"\n\n#define POF\t\tLUA_POF\n#define LIB_FAIL\t\"open\"\n\n\n/* error codes for ll_loadfunc */\n#define ERRLIB\t\t1\n#define ERRFUNC\t\t2\n\n#define setprogdir(L)\t\t((void)0)\n\n\nstatic void ll_unloadlib (void *lib);\nstatic void *ll_load (lua_State *L, const char *path);\nstatic lua_CFunction ll_sym (lua_State *L, void *lib, const char *sym);\n\n\n\n#if defined(LUA_DL_DLOPEN)\n/*\n** {========================================================================\n** This is an implementation of loadlib based on the dlfcn interface.\n** The dlfcn interface is available in Linux, SunOS, Solaris, IRIX, FreeBSD,\n** NetBSD, AIX 4.2, HPUX 11, and  probably most other Unix flavors, at least\n** as an emulation layer on top of native functions.\n** =========================================================================\n*/\n\n#include <dlfcn.h>\n\nstatic void ll_unloadlib (void *lib) {\n  dlclose(lib);\n}\n\n\nstatic void *ll_load (lua_State *L, const char *path) {\n  void *lib = dlopen(path, RTLD_NOW);\n  if (lib == NULL) lua_pushstring(L, dlerror());\n  return lib;\n}\n\n\nstatic lua_CFunction ll_sym (lua_State *L, void *lib, const char *sym) {\n  lua_CFunction f = (lua_CFunction)dlsym(lib, sym);\n  if (f == NULL) lua_pushstring(L, dlerror());\n  return f;\n}\n\n/* 
}====================================================== */\n\n\n\n#elif defined(LUA_DL_DLL)\n/*\n** {======================================================================\n** This is an implementation of loadlib for Windows using native functions.\n** =======================================================================\n*/\n\n#include <windows.h>\n\n\n#undef setprogdir\n\nstatic void setprogdir (lua_State *L) {\n  char buff[MAX_PATH + 1];\n  char *lb;\n  DWORD nsize = sizeof(buff)/sizeof(char);\n  DWORD n = GetModuleFileNameA(NULL, buff, nsize);\n  if (n == 0 || n == nsize || (lb = strrchr(buff, '\\\\')) == NULL)\n    luaL_error(L, \"unable to get ModuleFileName\");\n  else {\n    *lb = '\\0';\n    luaL_gsub(L, lua_tostring(L, -1), LUA_EXECDIR, buff);\n    lua_remove(L, -2);  /* remove original string */\n  }\n}\n\n\nstatic void pusherror (lua_State *L) {\n  int error = GetLastError();\n  char buffer[128];\n  if (FormatMessageA(FORMAT_MESSAGE_IGNORE_INSERTS | FORMAT_MESSAGE_FROM_SYSTEM,\n      NULL, error, 0, buffer, sizeof(buffer), NULL))\n    lua_pushstring(L, buffer);\n  else\n    lua_pushfstring(L, \"system error %d\\n\", error);\n}\n\nstatic void ll_unloadlib (void *lib) {\n  FreeLibrary((HINSTANCE)lib);\n}\n\n\nstatic void *ll_load (lua_State *L, const char *path) {\n  HINSTANCE lib = LoadLibraryA(path);\n  if (lib == NULL) pusherror(L);\n  return lib;\n}\n\n\nstatic lua_CFunction ll_sym (lua_State *L, void *lib, const char *sym) {\n  lua_CFunction f = (lua_CFunction)GetProcAddress((HINSTANCE)lib, sym);\n  if (f == NULL) pusherror(L);\n  return f;\n}\n\n/* }====================================================== */\n\n\n\n#elif defined(LUA_DL_DYLD)\n/*\n** {======================================================================\n** Native Mac OS X / Darwin Implementation\n** =======================================================================\n*/\n\n#include <mach-o/dyld.h>\n\n\n/* Mac appends a `_' before C function names */\n#undef POF\n#define 
POF\t\"_\" LUA_POF\n\n\nstatic void pusherror (lua_State *L) {\n  const char *err_str;\n  const char *err_file;\n  NSLinkEditErrors err;\n  int err_num;\n  NSLinkEditError(&err, &err_num, &err_file, &err_str);\n  lua_pushstring(L, err_str);\n}\n\n\nstatic const char *errorfromcode (NSObjectFileImageReturnCode ret) {\n  switch (ret) {\n    case NSObjectFileImageInappropriateFile:\n      return \"file is not a bundle\";\n    case NSObjectFileImageArch:\n      return \"library is for wrong CPU type\";\n    case NSObjectFileImageFormat:\n      return \"bad format\";\n    case NSObjectFileImageAccess:\n      return \"cannot access file\";\n    case NSObjectFileImageFailure:\n    default:\n      return \"unable to load library\";\n  }\n}\n\n\nstatic void ll_unloadlib (void *lib) {\n  NSUnLinkModule((NSModule)lib, NSUNLINKMODULE_OPTION_RESET_LAZY_REFERENCES);\n}\n\n\nstatic void *ll_load (lua_State *L, const char *path) {\n  NSObjectFileImage img;\n  NSObjectFileImageReturnCode ret;\n  /* this would be a rare case, but prevents crashing if it happens */\n  if(!_dyld_present()) {\n    lua_pushliteral(L, \"dyld not present\");\n    return NULL;\n  }\n  ret = NSCreateObjectFileImageFromFile(path, &img);\n  if (ret == NSObjectFileImageSuccess) {\n    NSModule mod = NSLinkModule(img, path, NSLINKMODULE_OPTION_PRIVATE |\n                       NSLINKMODULE_OPTION_RETURN_ON_ERROR);\n    NSDestroyObjectFileImage(img);\n    if (mod == NULL) pusherror(L);\n    return mod;\n  }\n  lua_pushstring(L, errorfromcode(ret));\n  return NULL;\n}\n\n\nstatic lua_CFunction ll_sym (lua_State *L, void *lib, const char *sym) {\n  NSSymbol nss = NSLookupSymbolInModule((NSModule)lib, sym);\n  if (nss == NULL) {\n    lua_pushfstring(L, \"symbol \" LUA_QS \" not found\", sym);\n    return NULL;\n  }\n  return (lua_CFunction)NSAddressOfSymbol(nss);\n}\n\n/* }====================================================== */\n\n\n\n#else\n/*\n** {======================================================\n** 
Fallback for other systems\n** =======================================================\n*/\n\n#undef LIB_FAIL\n#define LIB_FAIL\t\"absent\"\n\n\n#define DLMSG\t\"dynamic libraries not enabled; check your Lua installation\"\n\n\nstatic void ll_unloadlib (void *lib) {\n  (void)lib;  /* to avoid warnings */\n}\n\n\nstatic void *ll_load (lua_State *L, const char *path) {\n  (void)path;  /* to avoid warnings */\n  lua_pushliteral(L, DLMSG);\n  return NULL;\n}\n\n\nstatic lua_CFunction ll_sym (lua_State *L, void *lib, const char *sym) {\n  (void)lib; (void)sym;  /* to avoid warnings */\n  lua_pushliteral(L, DLMSG);\n  return NULL;\n}\n\n/* }====================================================== */\n#endif\n\n\n\nstatic void **ll_register (lua_State *L, const char *path) {\n  void **plib;\n  lua_pushfstring(L, \"%s%s\", LIBPREFIX, path);\n  lua_gettable(L, LUA_REGISTRYINDEX);  /* check library in registry? */\n  if (!lua_isnil(L, -1))  /* is there an entry? */\n    plib = (void **)lua_touserdata(L, -1);\n  else {  /* no entry yet; create one */\n    lua_pop(L, 1);\n    plib = (void **)lua_newuserdata(L, sizeof(const void *));\n    *plib = NULL;\n    luaL_getmetatable(L, \"_LOADLIB\");\n    lua_setmetatable(L, -2);\n    lua_pushfstring(L, \"%s%s\", LIBPREFIX, path);\n    lua_pushvalue(L, -2);\n    lua_settable(L, LUA_REGISTRYINDEX);\n  }\n  return plib;\n}\n\n\n/*\n** __gc tag method: calls library's `ll_unloadlib' function with the lib\n** handle\n*/\nstatic int gctm (lua_State *L) {\n  void **lib = (void **)luaL_checkudata(L, 1, \"_LOADLIB\");\n  if (*lib) ll_unloadlib(*lib);\n  *lib = NULL;  /* mark library as closed */\n  return 0;\n}\n\n\nstatic int ll_loadfunc (lua_State *L, const char *path, const char *sym) {\n  void **reg = ll_register(L, path);\n  if (*reg == NULL) *reg = ll_load(L, path);\n  if (*reg == NULL)\n    return ERRLIB;  /* unable to load library */\n  else {\n    lua_CFunction f = ll_sym(L, *reg, sym);\n    if (f == NULL)\n      return ERRFUNC;  /* 
unable to find function */\n    lua_pushcfunction(L, f);\n    return 0;  /* return function */\n  }\n}\n\n\nstatic int ll_loadlib (lua_State *L) {\n  const char *path = luaL_checkstring(L, 1);\n  const char *init = luaL_checkstring(L, 2);\n  int stat = ll_loadfunc(L, path, init);\n  if (stat == 0)  /* no errors? */\n    return 1;  /* return the loaded function */\n  else {  /* error; error message is on stack top */\n    lua_pushnil(L);\n    lua_insert(L, -2);\n    lua_pushstring(L, (stat == ERRLIB) ?  LIB_FAIL : \"init\");\n    return 3;  /* return nil, error message, and where */\n  }\n}\n\n\n\n/*\n** {======================================================\n** 'require' function\n** =======================================================\n*/\n\n\nstatic int readable (const char *filename) {\n  FILE *f = fopen(filename, \"r\");  /* try to open file */\n  if (f == NULL) return 0;  /* open failed */\n  fclose(f);\n  return 1;\n}\n\n\nstatic const char *pushnexttemplate (lua_State *L, const char *path) {\n  const char *l;\n  while (*path == *LUA_PATHSEP) path++;  /* skip separators */\n  if (*path == '\\0') return NULL;  /* no more templates */\n  l = strchr(path, *LUA_PATHSEP);  /* find next separator */\n  if (l == NULL) l = path + strlen(path);\n  lua_pushlstring(L, path, l - path);  /* template */\n  return l;\n}\n\n\nstatic const char *findfile (lua_State *L, const char *name,\n                                           const char *pname) {\n  const char *path;\n  name = luaL_gsub(L, name, \".\", LUA_DIRSEP);\n  lua_getfield(L, LUA_ENVIRONINDEX, pname);\n  path = lua_tostring(L, -1);\n  if (path == NULL)\n    luaL_error(L, LUA_QL(\"package.%s\") \" must be a string\", pname);\n  lua_pushliteral(L, \"\");  /* error accumulator */\n  while ((path = pushnexttemplate(L, path)) != NULL) {\n    const char *filename;\n    filename = luaL_gsub(L, lua_tostring(L, -1), LUA_PATH_MARK, name);\n    lua_remove(L, -2);  /* remove path template */\n    if (readable(filename))  
/* does file exist and is readable? */\n      return filename;  /* return that file name */\n    lua_pushfstring(L, \"\\n\\tno file \" LUA_QS, filename);\n    lua_remove(L, -2);  /* remove file name */\n    lua_concat(L, 2);  /* add entry to possible error message */\n  }\n  return NULL;  /* not found */\n}\n\n\nstatic void loaderror (lua_State *L, const char *filename) {\n  luaL_error(L, \"error loading module \" LUA_QS \" from file \" LUA_QS \":\\n\\t%s\",\n                lua_tostring(L, 1), filename, lua_tostring(L, -1));\n}\n\n\nstatic int loader_Lua (lua_State *L) {\n  const char *filename;\n  const char *name = luaL_checkstring(L, 1);\n  filename = findfile(L, name, \"path\");\n  if (filename == NULL) return 1;  /* library not found in this path */\n  if (luaL_loadfile(L, filename) != 0)\n    loaderror(L, filename);\n  return 1;  /* library loaded successfully */\n}\n\n\nstatic const char *mkfuncname (lua_State *L, const char *modname) {\n  const char *funcname;\n  const char *mark = strchr(modname, *LUA_IGMARK);\n  if (mark) modname = mark + 1;\n  funcname = luaL_gsub(L, modname, \".\", LUA_OFSEP);\n  funcname = lua_pushfstring(L, POF\"%s\", funcname);\n  lua_remove(L, -2);  /* remove 'gsub' result */\n  return funcname;\n}\n\n\nstatic int loader_C (lua_State *L) {\n  const char *funcname;\n  const char *name = luaL_checkstring(L, 1);\n  const char *filename = findfile(L, name, \"cpath\");\n  if (filename == NULL) return 1;  /* library not found in this path */\n  funcname = mkfuncname(L, name);\n  if (ll_loadfunc(L, filename, funcname) != 0)\n    loaderror(L, filename);\n  return 1;  /* library loaded successfully */\n}\n\n\nstatic int loader_Croot (lua_State *L) {\n  const char *funcname;\n  const char *filename;\n  const char *name = luaL_checkstring(L, 1);\n  const char *p = strchr(name, '.');\n  int stat;\n  if (p == NULL) return 0;  /* is root */\n  lua_pushlstring(L, name, p - name);\n  filename = findfile(L, lua_tostring(L, -1), \"cpath\");\n  if 
(filename == NULL) return 1;  /* root not found */\n  funcname = mkfuncname(L, name);\n  if ((stat = ll_loadfunc(L, filename, funcname)) != 0) {\n    if (stat != ERRFUNC) loaderror(L, filename);  /* real error */\n    lua_pushfstring(L, \"\\n\\tno module \" LUA_QS \" in file \" LUA_QS,\n                       name, filename);\n    return 1;  /* function not found */\n  }\n  return 1;\n}\n\n\nstatic int loader_preload (lua_State *L) {\n  const char *name = luaL_checkstring(L, 1);\n  lua_getfield(L, LUA_ENVIRONINDEX, \"preload\");\n  if (!lua_istable(L, -1))\n    luaL_error(L, LUA_QL(\"package.preload\") \" must be a table\");\n  lua_getfield(L, -1, name);\n  if (lua_isnil(L, -1))  /* not found? */\n    lua_pushfstring(L, \"\\n\\tno field package.preload['%s']\", name);\n  return 1;\n}\n\n\nstatic const int sentinel_ = 0;\n#define sentinel\t((void *)&sentinel_)\n\n\nstatic int ll_require (lua_State *L) {\n  const char *name = luaL_checkstring(L, 1);\n  int i;\n  lua_settop(L, 1);  /* _LOADED table will be at index 2 */\n  lua_getfield(L, LUA_REGISTRYINDEX, \"_LOADED\");\n  lua_getfield(L, 2, name);\n  if (lua_toboolean(L, -1)) {  /* is it there? */\n    if (lua_touserdata(L, -1) == sentinel)  /* check loops */\n      luaL_error(L, \"loop or previous error loading module \" LUA_QS, name);\n    return 1;  /* package is already loaded */\n  }\n  /* else must load it; iterate over available loaders */\n  lua_getfield(L, LUA_ENVIRONINDEX, \"loaders\");\n  if (!lua_istable(L, -1))\n    luaL_error(L, LUA_QL(\"package.loaders\") \" must be a table\");\n  lua_pushliteral(L, \"\");  /* error message accumulator */\n  for (i=1; ; i++) {\n    lua_rawgeti(L, -2, i);  /* get a loader */\n    if (lua_isnil(L, -1))\n      luaL_error(L, \"module \" LUA_QS \" not found:%s\",\n                    name, lua_tostring(L, -2));\n    lua_pushstring(L, name);\n    lua_call(L, 1, 1);  /* call it */\n    if (lua_isfunction(L, -1))  /* did it find module? 
*/\n      break;  /* module loaded successfully */\n    else if (lua_isstring(L, -1))  /* loader returned error message? */\n      lua_concat(L, 2);  /* accumulate it */\n    else\n      lua_pop(L, 1);\n  }\n  lua_pushlightuserdata(L, sentinel);\n  lua_setfield(L, 2, name);  /* _LOADED[name] = sentinel */\n  lua_pushstring(L, name);  /* pass name as argument to module */\n  lua_call(L, 1, 1);  /* run loaded module */\n  if (!lua_isnil(L, -1))  /* non-nil return? */\n    lua_setfield(L, 2, name);  /* _LOADED[name] = returned value */\n  lua_getfield(L, 2, name);\n  if (lua_touserdata(L, -1) == sentinel) {   /* module did not set a value? */\n    lua_pushboolean(L, 1);  /* use true as result */\n    lua_pushvalue(L, -1);  /* extra copy to be returned */\n    lua_setfield(L, 2, name);  /* _LOADED[name] = true */\n  }\n  return 1;\n}\n\n/* }====================================================== */\n\n\n\n/*\n** {======================================================\n** 'module' function\n** =======================================================\n*/\n  \n\nstatic void setfenv (lua_State *L) {\n  lua_Debug ar;\n  if (lua_getstack(L, 1, &ar) == 0 ||\n      lua_getinfo(L, \"f\", &ar) == 0 ||  /* get calling function */\n      lua_iscfunction(L, -1))\n    luaL_error(L, LUA_QL(\"module\") \" not called from a Lua function\");\n  lua_pushvalue(L, -2);\n  lua_setfenv(L, -2);\n  lua_pop(L, 1);\n}\n\n\nstatic void dooptions (lua_State *L, int n) {\n  int i;\n  for (i = 2; i <= n; i++) {\n    lua_pushvalue(L, i);  /* get option (a function) */\n    lua_pushvalue(L, -2);  /* module */\n    lua_call(L, 1, 0);\n  }\n}\n\n\nstatic void modinit (lua_State *L, const char *modname) {\n  const char *dot;\n  lua_pushvalue(L, -1);\n  lua_setfield(L, -2, \"_M\");  /* module._M = module */\n  lua_pushstring(L, modname);\n  lua_setfield(L, -2, \"_NAME\");\n  dot = strrchr(modname, '.');  /* look for last dot in module name */\n  if (dot == NULL) dot = modname;\n  else dot++;\n  /* set 
_PACKAGE as package name (full module name minus last part) */\n  lua_pushlstring(L, modname, dot - modname);\n  lua_setfield(L, -2, \"_PACKAGE\");\n}\n\n\nstatic int ll_module (lua_State *L) {\n  const char *modname = luaL_checkstring(L, 1);\n  int loaded = lua_gettop(L) + 1;  /* index of _LOADED table */\n  lua_getfield(L, LUA_REGISTRYINDEX, \"_LOADED\");\n  lua_getfield(L, loaded, modname);  /* get _LOADED[modname] */\n  if (!lua_istable(L, -1)) {  /* not found? */\n    lua_pop(L, 1);  /* remove previous result */\n    /* try global variable (and create one if it does not exist) */\n    if (luaL_findtable(L, LUA_GLOBALSINDEX, modname, 1) != NULL)\n      return luaL_error(L, \"name conflict for module \" LUA_QS, modname);\n    lua_pushvalue(L, -1);\n    lua_setfield(L, loaded, modname);  /* _LOADED[modname] = new table */\n  }\n  /* check whether table already has a _NAME field */\n  lua_getfield(L, -1, \"_NAME\");\n  if (!lua_isnil(L, -1))  /* is table an initialized module? */\n    lua_pop(L, 1);\n  else {  /* no; initialize it */\n    lua_pop(L, 1);\n    modinit(L, modname);\n  }\n  lua_pushvalue(L, -1);\n  setfenv(L);\n  dooptions(L, loaded - 1);\n  return 0;\n}\n\n\nstatic int ll_seeall (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  if (!lua_getmetatable(L, 1)) {\n    lua_createtable(L, 0, 1); /* create new metatable */\n    lua_pushvalue(L, -1);\n    lua_setmetatable(L, 1);\n  }\n  lua_pushvalue(L, LUA_GLOBALSINDEX);\n  lua_setfield(L, -2, \"__index\");  /* mt.__index = _G */\n  return 0;\n}\n\n\n/* }====================================================== */\n\n\n\n/* auxiliary mark (for internal use) */\n#define AUXMARK\t\t\"\\1\"\n\nstatic void setpath (lua_State *L, const char *fieldname, const char *envname,\n                                   const char *def) {\n  const char *path = getenv(envname);\n  if (path == NULL)  /* no environment variable? 
*/\n    lua_pushstring(L, def);  /* use default */\n  else {\n    /* replace \";;\" by \";AUXMARK;\" and then AUXMARK by default path */\n    path = luaL_gsub(L, path, LUA_PATHSEP LUA_PATHSEP,\n                              LUA_PATHSEP AUXMARK LUA_PATHSEP);\n    luaL_gsub(L, path, AUXMARK, def);\n    lua_remove(L, -2);\n  }\n  setprogdir(L);\n  lua_setfield(L, -2, fieldname);\n}\n\n\nstatic const luaL_Reg pk_funcs[] = {\n  {\"loadlib\", ll_loadlib},\n  {\"seeall\", ll_seeall},\n  {NULL, NULL}\n};\n\n\nstatic const luaL_Reg ll_funcs[] = {\n  {\"module\", ll_module},\n  {\"require\", ll_require},\n  {NULL, NULL}\n};\n\n\nstatic const lua_CFunction loaders[] =\n  {loader_preload, loader_Lua, loader_C, loader_Croot, NULL};\n\n\nLUALIB_API int luaopen_package (lua_State *L) {\n  int i;\n  /* create new type _LOADLIB */\n  luaL_newmetatable(L, \"_LOADLIB\");\n  lua_pushcfunction(L, gctm);\n  lua_setfield(L, -2, \"__gc\");\n  /* create `package' table */\n  luaL_register(L, LUA_LOADLIBNAME, pk_funcs);\n#if defined(LUA_COMPAT_LOADLIB) \n  lua_getfield(L, -1, \"loadlib\");\n  lua_setfield(L, LUA_GLOBALSINDEX, \"loadlib\");\n#endif\n  lua_pushvalue(L, -1);\n  lua_replace(L, LUA_ENVIRONINDEX);\n  /* create `loaders' table */\n  lua_createtable(L, sizeof(loaders)/sizeof(loaders[0]) - 1, 0);\n  /* fill it with pre-defined loaders */\n  for (i=0; loaders[i] != NULL; i++) {\n    lua_pushcfunction(L, loaders[i]);\n    lua_rawseti(L, -2, i+1);\n  }\n  lua_setfield(L, -2, \"loaders\");  /* put it in field `loaders' */\n  setpath(L, \"path\", LUA_PATH, LUA_PATH_DEFAULT);  /* set field `path' */\n  setpath(L, \"cpath\", LUA_CPATH, LUA_CPATH_DEFAULT); /* set field `cpath' */\n  /* store config information */\n  lua_pushliteral(L, LUA_DIRSEP \"\\n\" LUA_PATHSEP \"\\n\" LUA_PATH_MARK \"\\n\"\n                     LUA_EXECDIR \"\\n\" LUA_IGMARK);\n  lua_setfield(L, -2, \"config\");\n  /* set field `loaded' */\n  luaL_findtable(L, LUA_REGISTRYINDEX, \"_LOADED\", 2);\n  lua_setfield(L, -2, 
\"loaded\");\n  /* set field `preload' */\n  lua_newtable(L);\n  lua_setfield(L, -2, \"preload\");\n  lua_pushvalue(L, LUA_GLOBALSINDEX);\n  luaL_register(L, NULL, ll_funcs);  /* open lib into global table */\n  lua_pop(L, 1);\n  return 1;  /* return 'package' table */\n}\n\n"
  },
  {
    "path": "deps/lua/src/lobject.c",
    "content": "/*\n** $Id: lobject.c,v 2.22.1.1 2007/12/27 13:02:25 roberto Exp $\n** Some generic functions over Lua objects\n** See Copyright Notice in lua.h\n*/\n\n#include <ctype.h>\n#include <stdarg.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define lobject_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldo.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"lvm.h\"\n\n\n\nconst TValue luaO_nilobject_ = {{NULL}, LUA_TNIL};\n\n\n/*\n** converts an integer to a \"floating point byte\", represented as\n** (eeeeexxx), where the real value is (1xxx) * 2^(eeeee - 1) if\n** eeeee != 0 and (xxx) otherwise.\n*/\nint luaO_int2fb (unsigned int x) {\n  int e = 0;  /* expoent */\n  while (x >= 16) {\n    x = (x+1) >> 1;\n    e++;\n  }\n  if (x < 8) return x;\n  else return ((e+1) << 3) | (cast_int(x) - 8);\n}\n\n\n/* converts back */\nint luaO_fb2int (int x) {\n  int e = (x >> 3) & 31;\n  if (e == 0) return x;\n  else return ((x & 7)+8) << (e - 1);\n}\n\n\nint luaO_log2 (unsigned int x) {\n  static const lu_byte log_2[256] = {\n    0,1,2,2,3,3,3,3,4,4,4,4,4,4,4,4,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5,\n    6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,6,\n    7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,\n    7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,\n    8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n    8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n    8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,\n    8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8,8\n  };\n  int l = -1;\n  while (x >= 256) { l += 8; x >>= 8; }\n  return l + log_2[x];\n\n}\n\n\nint luaO_rawequalObj (const TValue *t1, const TValue *t2) {\n  if (ttype(t1) != ttype(t2)) return 0;\n  else switch (ttype(t1)) {\n    case LUA_TNIL:\n      return 1;\n    case LUA_TNUMBER:\n      return 
luai_numeq(nvalue(t1), nvalue(t2));\n    case LUA_TBOOLEAN:\n      return bvalue(t1) == bvalue(t2);  /* boolean true must be 1 !! */\n    case LUA_TLIGHTUSERDATA:\n      return pvalue(t1) == pvalue(t2);\n    default:\n      lua_assert(iscollectable(t1));\n      return gcvalue(t1) == gcvalue(t2);\n  }\n}\n\n\nint luaO_str2d (const char *s, lua_Number *result) {\n  char *endptr;\n  *result = lua_str2number(s, &endptr);\n  if (endptr == s) return 0;  /* conversion failed */\n  if (*endptr == 'x' || *endptr == 'X')  /* maybe an hexadecimal constant? */\n    *result = cast_num(strtoul(s, &endptr, 16));\n  if (*endptr == '\\0') return 1;  /* most common case */\n  while (isspace(cast(unsigned char, *endptr))) endptr++;\n  if (*endptr != '\\0') return 0;  /* invalid trailing characters? */\n  return 1;\n}\n\n\n\nstatic void pushstr (lua_State *L, const char *str) {\n  setsvalue2s(L, L->top, luaS_new(L, str));\n  incr_top(L);\n}\n\n\n/* this function handles only `%d', `%c', %f, %p, and `%s' formats */\nconst char *luaO_pushvfstring (lua_State *L, const char *fmt, va_list argp) {\n  int n = 1;\n  pushstr(L, \"\");\n  for (;;) {\n    const char *e = strchr(fmt, '%');\n    if (e == NULL) break;\n    setsvalue2s(L, L->top, luaS_newlstr(L, fmt, e-fmt));\n    incr_top(L);\n    switch (*(e+1)) {\n      case 's': {\n        const char *s = va_arg(argp, char *);\n        if (s == NULL) s = \"(null)\";\n        pushstr(L, s);\n        break;\n      }\n      case 'c': {\n        char buff[2];\n        buff[0] = cast(char, va_arg(argp, int));\n        buff[1] = '\\0';\n        pushstr(L, buff);\n        break;\n      }\n      case 'd': {\n        setnvalue(L->top, cast_num(va_arg(argp, int)));\n        incr_top(L);\n        break;\n      }\n      case 'f': {\n        setnvalue(L->top, cast_num(va_arg(argp, l_uacNumber)));\n        incr_top(L);\n        break;\n      }\n      case 'p': {\n        char buff[4*sizeof(void *) + 8]; /* should be enough space for a `%p' */\n        
sprintf(buff, \"%p\", va_arg(argp, void *));\n        pushstr(L, buff);\n        break;\n      }\n      case '%': {\n        pushstr(L, \"%\");\n        break;\n      }\n      default: {\n        char buff[3];\n        buff[0] = '%';\n        buff[1] = *(e+1);\n        buff[2] = '\\0';\n        pushstr(L, buff);\n        break;\n      }\n    }\n    n += 2;\n    fmt = e+2;\n  }\n  pushstr(L, fmt);\n  luaV_concat(L, n+1, cast_int(L->top - L->base) - 1);\n  L->top -= n;\n  return svalue(L->top - 1);\n}\n\n\nconst char *luaO_pushfstring (lua_State *L, const char *fmt, ...) {\n  const char *msg;\n  va_list argp;\n  va_start(argp, fmt);\n  msg = luaO_pushvfstring(L, fmt, argp);\n  va_end(argp);\n  return msg;\n}\n\n\nvoid luaO_chunkid (char *out, const char *source, size_t bufflen) {\n  if (*source == '=') {\n    strncpy(out, source+1, bufflen);  /* remove first char */\n    out[bufflen-1] = '\\0';  /* ensures null termination */\n  }\n  else {  /* out = \"source\", or \"...source\" */\n    if (*source == '@') {\n      size_t l;\n      source++;  /* skip the `@' */\n      bufflen -= sizeof(\" '...' \");\n      l = strlen(source);\n      strcpy(out, \"\");\n      if (l > bufflen) {\n        source += (l-bufflen);  /* get last part of file name */\n        strcat(out, \"...\");\n      }\n      strcat(out, source);\n    }\n    else {  /* out = [string \"string\"] */\n      size_t len = strcspn(source, \"\\n\\r\");  /* stop at first newline */\n      bufflen -= sizeof(\" [string \\\"...\\\"] \");\n      if (len > bufflen) len = bufflen;\n      strcpy(out, \"[string \\\"\");\n      if (source[len] != '\\0') {  /* must truncate? */\n        strncat(out, source, len);\n        strcat(out, \"...\");\n      }\n      else\n        strcat(out, source);\n      strcat(out, \"\\\"]\");\n    }\n  }\n}\n"
  },
  {
    "path": "deps/lua/src/lobject.h",
    "content": "/*\n** $Id: lobject.h,v 2.20.1.2 2008/08/06 13:29:48 roberto Exp $\n** Type definitions for Lua objects\n** See Copyright Notice in lua.h\n*/\n\n\n#ifndef lobject_h\n#define lobject_h\n\n\n#include <stdarg.h>\n\n\n#include \"llimits.h\"\n#include \"lua.h\"\n\n\n/* tags for values visible from Lua */\n#define LAST_TAG\tLUA_TTHREAD\n\n#define NUM_TAGS\t(LAST_TAG+1)\n\n\n/*\n** Extra tags for non-values\n*/\n#define LUA_TPROTO\t(LAST_TAG+1)\n#define LUA_TUPVAL\t(LAST_TAG+2)\n#define LUA_TDEADKEY\t(LAST_TAG+3)\n\n\n/*\n** Union of all collectable objects\n*/\ntypedef union GCObject GCObject;\n\n\n/*\n** Common Header for all collectable objects (in macro form, to be\n** included in other objects)\n*/\n#define CommonHeader\tGCObject *next; lu_byte tt; lu_byte marked\n\n\n/*\n** Common header in struct form\n*/\ntypedef struct GCheader {\n  CommonHeader;\n} GCheader;\n\n\n\n\n/*\n** Union of all Lua values\n*/\ntypedef union {\n  GCObject *gc;\n  void *p;\n  lua_Number n;\n  int b;\n} Value;\n\n\n/*\n** Tagged Values\n*/\n\n#define TValuefields\tValue value; int tt\n\ntypedef struct lua_TValue {\n  TValuefields;\n} TValue;\n\n\n/* Macros to test type */\n#define ttisnil(o)\t(ttype(o) == LUA_TNIL)\n#define ttisnumber(o)\t(ttype(o) == LUA_TNUMBER)\n#define ttisstring(o)\t(ttype(o) == LUA_TSTRING)\n#define ttistable(o)\t(ttype(o) == LUA_TTABLE)\n#define ttisfunction(o)\t(ttype(o) == LUA_TFUNCTION)\n#define ttisboolean(o)\t(ttype(o) == LUA_TBOOLEAN)\n#define ttisuserdata(o)\t(ttype(o) == LUA_TUSERDATA)\n#define ttisthread(o)\t(ttype(o) == LUA_TTHREAD)\n#define ttislightuserdata(o)\t(ttype(o) == LUA_TLIGHTUSERDATA)\n\n/* Macros to access values */\n#define ttype(o)\t((o)->tt)\n#define gcvalue(o)\tcheck_exp(iscollectable(o), (o)->value.gc)\n#define pvalue(o)\tcheck_exp(ttislightuserdata(o), (o)->value.p)\n#define nvalue(o)\tcheck_exp(ttisnumber(o), (o)->value.n)\n#define rawtsvalue(o)\tcheck_exp(ttisstring(o), &(o)->value.gc->ts)\n#define 
tsvalue(o)\t(&rawtsvalue(o)->tsv)\n#define rawuvalue(o)\tcheck_exp(ttisuserdata(o), &(o)->value.gc->u)\n#define uvalue(o)\t(&rawuvalue(o)->uv)\n#define clvalue(o)\tcheck_exp(ttisfunction(o), &(o)->value.gc->cl)\n#define hvalue(o)\tcheck_exp(ttistable(o), &(o)->value.gc->h)\n#define bvalue(o)\tcheck_exp(ttisboolean(o), (o)->value.b)\n#define thvalue(o)\tcheck_exp(ttisthread(o), &(o)->value.gc->th)\n\n#define l_isfalse(o)\t(ttisnil(o) || (ttisboolean(o) && bvalue(o) == 0))\n\n/*\n** for internal debug only\n*/\n#define checkconsistency(obj) \\\n  lua_assert(!iscollectable(obj) || (ttype(obj) == (obj)->value.gc->gch.tt))\n\n#define checkliveness(g,obj) \\\n  lua_assert(!iscollectable(obj) || \\\n  ((ttype(obj) == (obj)->value.gc->gch.tt) && !isdead(g, (obj)->value.gc)))\n\n\n/* Macros to set values */\n#define setnilvalue(obj) ((obj)->tt=LUA_TNIL)\n\n#define setnvalue(obj,x) \\\n  { TValue *i_o=(obj); i_o->value.n=(x); i_o->tt=LUA_TNUMBER; }\n\n#define setpvalue(obj,x) \\\n  { TValue *i_o=(obj); i_o->value.p=(x); i_o->tt=LUA_TLIGHTUSERDATA; }\n\n#define setbvalue(obj,x) \\\n  { TValue *i_o=(obj); i_o->value.b=(x); i_o->tt=LUA_TBOOLEAN; }\n\n#define setsvalue(L,obj,x) \\\n  { TValue *i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TSTRING; \\\n    checkliveness(G(L),i_o); }\n\n#define setuvalue(L,obj,x) \\\n  { TValue *i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TUSERDATA; \\\n    checkliveness(G(L),i_o); }\n\n#define setthvalue(L,obj,x) \\\n  { TValue *i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TTHREAD; \\\n    checkliveness(G(L),i_o); }\n\n#define setclvalue(L,obj,x) \\\n  { TValue *i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TFUNCTION; \\\n    checkliveness(G(L),i_o); }\n\n#define sethvalue(L,obj,x) \\\n  { TValue *i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TTABLE; \\\n    checkliveness(G(L),i_o); }\n\n#define setptvalue(L,obj,x) \\\n  { TValue 
*i_o=(obj); \\\n    i_o->value.gc=cast(GCObject *, (x)); i_o->tt=LUA_TPROTO; \\\n    checkliveness(G(L),i_o); }\n\n\n\n\n#define setobj(L,obj1,obj2) \\\n  { const TValue *o2=(obj2); TValue *o1=(obj1); \\\n    o1->value = o2->value; o1->tt=o2->tt; \\\n    checkliveness(G(L),o1); }\n\n\n/*\n** different types of sets, according to destination\n*/\n\n/* from stack to (same) stack */\n#define setobjs2s\tsetobj\n/* to stack (not from same stack) */\n#define setobj2s\tsetobj\n#define setsvalue2s\tsetsvalue\n#define sethvalue2s\tsethvalue\n#define setptvalue2s\tsetptvalue\n/* from table to same table */\n#define setobjt2t\tsetobj\n/* to table */\n#define setobj2t\tsetobj\n/* to new object */\n#define setobj2n\tsetobj\n#define setsvalue2n\tsetsvalue\n\n#define setttype(obj, tt) (ttype(obj) = (tt))\n\n\n#define iscollectable(o)\t(ttype(o) >= LUA_TSTRING)\n\n\n\ntypedef TValue *StkId;  /* index to stack elements */\n\n\n/*\n** String headers for string table\n*/\ntypedef union TString {\n  L_Umaxalign dummy;  /* ensures maximum alignment for strings */\n  struct {\n    CommonHeader;\n    lu_byte reserved;\n    unsigned int hash;\n    size_t len;\n  } tsv;\n} TString;\n\n\n#define getstr(ts)\tcast(const char *, (ts) + 1)\n#define svalue(o)       getstr(rawtsvalue(o))\n\n\n\ntypedef union Udata {\n  L_Umaxalign dummy;  /* ensures maximum alignment for `local' udata */\n  struct {\n    CommonHeader;\n    struct Table *metatable;\n    struct Table *env;\n    size_t len;\n  } uv;\n} Udata;\n\n\n\n\n/*\n** Function Prototypes\n*/\ntypedef struct Proto {\n  CommonHeader;\n  TValue *k;  /* constants used by the function */\n  Instruction *code;\n  struct Proto **p;  /* functions defined inside the function */\n  int *lineinfo;  /* map from opcodes to source lines */\n  struct LocVar *locvars;  /* information about local variables */\n  TString **upvalues;  /* upvalue names */\n  TString  *source;\n  int sizeupvalues;\n  int sizek;  /* size of `k' */\n  int sizecode;\n  int 
sizelineinfo;\n  int sizep;  /* size of `p' */\n  int sizelocvars;\n  int linedefined;\n  int lastlinedefined;\n  GCObject *gclist;\n  lu_byte nups;  /* number of upvalues */\n  lu_byte numparams;\n  lu_byte is_vararg;\n  lu_byte maxstacksize;\n} Proto;\n\n\n/* masks for new-style vararg */\n#define VARARG_HASARG\t\t1\n#define VARARG_ISVARARG\t\t2\n#define VARARG_NEEDSARG\t\t4\n\n\ntypedef struct LocVar {\n  TString *varname;\n  int startpc;  /* first point where variable is active */\n  int endpc;    /* first point where variable is dead */\n} LocVar;\n\n\n\n/*\n** Upvalues\n*/\n\ntypedef struct UpVal {\n  CommonHeader;\n  TValue *v;  /* points to stack or to its own value */\n  union {\n    TValue value;  /* the value (when closed) */\n    struct {  /* double linked list (when open) */\n      struct UpVal *prev;\n      struct UpVal *next;\n    } l;\n  } u;\n} UpVal;\n\n\n/*\n** Closures\n*/\n\n#define ClosureHeader \\\n\tCommonHeader; lu_byte isC; lu_byte nupvalues; GCObject *gclist; \\\n\tstruct Table *env\n\ntypedef struct CClosure {\n  ClosureHeader;\n  lua_CFunction f;\n  TValue upvalue[1];\n} CClosure;\n\n\ntypedef struct LClosure {\n  ClosureHeader;\n  struct Proto *p;\n  UpVal *upvals[1];\n} LClosure;\n\n\ntypedef union Closure {\n  CClosure c;\n  LClosure l;\n} Closure;\n\n\n#define iscfunction(o)\t(ttype(o) == LUA_TFUNCTION && clvalue(o)->c.isC)\n#define isLfunction(o)\t(ttype(o) == LUA_TFUNCTION && !clvalue(o)->c.isC)\n\n\n/*\n** Tables\n*/\n\ntypedef union TKey {\n  struct {\n    TValuefields;\n    struct Node *next;  /* for chaining */\n  } nk;\n  TValue tvk;\n} TKey;\n\n\ntypedef struct Node {\n  TValue i_val;\n  TKey i_key;\n} Node;\n\n\ntypedef struct Table {\n  CommonHeader;\n  lu_byte flags;  /* 1<<p means tagmethod(p) is not present */\n  int readonly;\n  lu_byte lsizenode;  /* log2 of size of `node' array */\n  struct Table *metatable;\n  TValue *array;  /* array part */\n  Node *node;\n  Node *lastfree;  /* any free position is before this 
position */\n  GCObject *gclist;\n  int sizearray;  /* size of `array' array */\n} Table;\n\n\n\n/*\n** `module' operation for hashing (size is always a power of 2)\n*/\n#define lmod(s,size) \\\n\t(check_exp((size&(size-1))==0, (cast(int, (s) & ((size)-1)))))\n\n\n#define twoto(x)\t(1<<(x))\n#define sizenode(t)\t(twoto((t)->lsizenode))\n\n\n#define luaO_nilobject\t\t(&luaO_nilobject_)\n\nLUAI_DATA const TValue luaO_nilobject_;\n\n#define ceillog2(x)\t(luaO_log2((x)-1) + 1)\n\nLUAI_FUNC int luaO_log2 (unsigned int x);\nLUAI_FUNC int luaO_int2fb (unsigned int x);\nLUAI_FUNC int luaO_fb2int (int x);\nLUAI_FUNC int luaO_rawequalObj (const TValue *t1, const TValue *t2);\nLUAI_FUNC int luaO_str2d (const char *s, lua_Number *result);\nLUAI_FUNC const char *luaO_pushvfstring (lua_State *L, const char *fmt,\n                                                       va_list argp);\nLUAI_FUNC const char *luaO_pushfstring (lua_State *L, const char *fmt, ...);\nLUAI_FUNC void luaO_chunkid (char *out, const char *source, size_t len);\n\n\n#endif\n\n"
  },
  {
    "path": "deps/lua/src/lopcodes.c",
    "content": "/*\n** $Id: lopcodes.c,v 1.37.1.1 2007/12/27 13:02:25 roberto Exp $\n** See Copyright Notice in lua.h\n*/\n\n\n#define lopcodes_c\n#define LUA_CORE\n\n\n#include \"lopcodes.h\"\n\n\n/* ORDER OP */\n\nconst char *const luaP_opnames[NUM_OPCODES+1] = {\n  \"MOVE\",\n  \"LOADK\",\n  \"LOADBOOL\",\n  \"LOADNIL\",\n  \"GETUPVAL\",\n  \"GETGLOBAL\",\n  \"GETTABLE\",\n  \"SETGLOBAL\",\n  \"SETUPVAL\",\n  \"SETTABLE\",\n  \"NEWTABLE\",\n  \"SELF\",\n  \"ADD\",\n  \"SUB\",\n  \"MUL\",\n  \"DIV\",\n  \"MOD\",\n  \"POW\",\n  \"UNM\",\n  \"NOT\",\n  \"LEN\",\n  \"CONCAT\",\n  \"JMP\",\n  \"EQ\",\n  \"LT\",\n  \"LE\",\n  \"TEST\",\n  \"TESTSET\",\n  \"CALL\",\n  \"TAILCALL\",\n  \"RETURN\",\n  \"FORLOOP\",\n  \"FORPREP\",\n  \"TFORLOOP\",\n  \"SETLIST\",\n  \"CLOSE\",\n  \"CLOSURE\",\n  \"VARARG\",\n  NULL\n};\n\n\n#define opmode(t,a,b,c,m) (((t)<<7) | ((a)<<6) | ((b)<<4) | ((c)<<2) | (m))\n\nconst lu_byte luaP_opmodes[NUM_OPCODES] = {\n/*       T  A    B       C     mode\t\t   opcode\t*/\n  opmode(0, 1, OpArgR, OpArgN, iABC) \t\t/* OP_MOVE */\n ,opmode(0, 1, OpArgK, OpArgN, iABx)\t\t/* OP_LOADK */\n ,opmode(0, 1, OpArgU, OpArgU, iABC)\t\t/* OP_LOADBOOL */\n ,opmode(0, 1, OpArgR, OpArgN, iABC)\t\t/* OP_LOADNIL */\n ,opmode(0, 1, OpArgU, OpArgN, iABC)\t\t/* OP_GETUPVAL */\n ,opmode(0, 1, OpArgK, OpArgN, iABx)\t\t/* OP_GETGLOBAL */\n ,opmode(0, 1, OpArgR, OpArgK, iABC)\t\t/* OP_GETTABLE */\n ,opmode(0, 0, OpArgK, OpArgN, iABx)\t\t/* OP_SETGLOBAL */\n ,opmode(0, 0, OpArgU, OpArgN, iABC)\t\t/* OP_SETUPVAL */\n ,opmode(0, 0, OpArgK, OpArgK, iABC)\t\t/* OP_SETTABLE */\n ,opmode(0, 1, OpArgU, OpArgU, iABC)\t\t/* OP_NEWTABLE */\n ,opmode(0, 1, OpArgR, OpArgK, iABC)\t\t/* OP_SELF */\n ,opmode(0, 1, OpArgK, OpArgK, iABC)\t\t/* OP_ADD */\n ,opmode(0, 1, OpArgK, OpArgK, iABC)\t\t/* OP_SUB */\n ,opmode(0, 1, OpArgK, OpArgK, iABC)\t\t/* OP_MUL */\n ,opmode(0, 1, OpArgK, OpArgK, iABC)\t\t/* OP_DIV */\n ,opmode(0, 1, OpArgK, OpArgK, iABC)\t\t/* OP_MOD */\n ,opmode(0, 1, OpArgK, 
OpArgK, iABC)\t\t/* OP_POW */\n ,opmode(0, 1, OpArgR, OpArgN, iABC)\t\t/* OP_UNM */\n ,opmode(0, 1, OpArgR, OpArgN, iABC)\t\t/* OP_NOT */\n ,opmode(0, 1, OpArgR, OpArgN, iABC)\t\t/* OP_LEN */\n ,opmode(0, 1, OpArgR, OpArgR, iABC)\t\t/* OP_CONCAT */\n ,opmode(0, 0, OpArgR, OpArgN, iAsBx)\t\t/* OP_JMP */\n ,opmode(1, 0, OpArgK, OpArgK, iABC)\t\t/* OP_EQ */\n ,opmode(1, 0, OpArgK, OpArgK, iABC)\t\t/* OP_LT */\n ,opmode(1, 0, OpArgK, OpArgK, iABC)\t\t/* OP_LE */\n ,opmode(1, 1, OpArgR, OpArgU, iABC)\t\t/* OP_TEST */\n ,opmode(1, 1, OpArgR, OpArgU, iABC)\t\t/* OP_TESTSET */\n ,opmode(0, 1, OpArgU, OpArgU, iABC)\t\t/* OP_CALL */\n ,opmode(0, 1, OpArgU, OpArgU, iABC)\t\t/* OP_TAILCALL */\n ,opmode(0, 0, OpArgU, OpArgN, iABC)\t\t/* OP_RETURN */\n ,opmode(0, 1, OpArgR, OpArgN, iAsBx)\t\t/* OP_FORLOOP */\n ,opmode(0, 1, OpArgR, OpArgN, iAsBx)\t\t/* OP_FORPREP */\n ,opmode(1, 0, OpArgN, OpArgU, iABC)\t\t/* OP_TFORLOOP */\n ,opmode(0, 0, OpArgU, OpArgU, iABC)\t\t/* OP_SETLIST */\n ,opmode(0, 0, OpArgN, OpArgN, iABC)\t\t/* OP_CLOSE */\n ,opmode(0, 1, OpArgU, OpArgN, iABx)\t\t/* OP_CLOSURE */\n ,opmode(0, 1, OpArgU, OpArgN, iABC)\t\t/* OP_VARARG */\n};\n\n"
  },
  {
    "path": "deps/lua/src/lopcodes.h",
    "content": "/*\n** $Id: lopcodes.h,v 1.125.1.1 2007/12/27 13:02:25 roberto Exp $\n** Opcodes for Lua virtual machine\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lopcodes_h\n#define lopcodes_h\n\n#include \"llimits.h\"\n\n\n/*===========================================================================\n  We assume that instructions are unsigned numbers.\n  All instructions have an opcode in the first 6 bits.\n  Instructions can have the following fields:\n\t`A' : 8 bits\n\t`B' : 9 bits\n\t`C' : 9 bits\n\t`Bx' : 18 bits (`B' and `C' together)\n\t`sBx' : signed Bx\n\n  A signed argument is represented in excess K; that is, the number\n  value is the unsigned value minus K. K is exactly the maximum value\n  for that argument (so that -max is represented by 0, and +max is\n  represented by 2*max), which is half the maximum for the corresponding\n  unsigned argument.\n===========================================================================*/\n\n\nenum OpMode {iABC, iABx, iAsBx};  /* basic instruction format */\n\n\n/*\n** size and position of opcode arguments.\n*/\n#define SIZE_C\t\t9\n#define SIZE_B\t\t9\n#define SIZE_Bx\t\t(SIZE_C + SIZE_B)\n#define SIZE_A\t\t8\n\n#define SIZE_OP\t\t6\n\n#define POS_OP\t\t0\n#define POS_A\t\t(POS_OP + SIZE_OP)\n#define POS_C\t\t(POS_A + SIZE_A)\n#define POS_B\t\t(POS_C + SIZE_C)\n#define POS_Bx\t\tPOS_C\n\n\n/*\n** limits for opcode arguments.\n** we use (signed) int to manipulate most arguments,\n** so they must fit in LUAI_BITSINT-1 bits (-1 for sign)\n*/\n#if SIZE_Bx < LUAI_BITSINT-1\n#define MAXARG_Bx        ((1<<SIZE_Bx)-1)\n#define MAXARG_sBx        (MAXARG_Bx>>1)         /* `sBx' is signed */\n#else\n#define MAXARG_Bx        MAX_INT\n#define MAXARG_sBx        MAX_INT\n#endif\n\n\n#define MAXARG_A        ((1<<SIZE_A)-1)\n#define MAXARG_B        ((1<<SIZE_B)-1)\n#define MAXARG_C        ((1<<SIZE_C)-1)\n\n\n/* creates a mask with `n' 1 bits at position `p' */\n#define MASK1(n,p)\t((~((~(Instruction)0)<<n))<<p)\n\n/* 
creates a mask with `n' 0 bits at position `p' */\n#define MASK0(n,p)\t(~MASK1(n,p))\n\n/*\n** the following macros help to manipulate instructions\n*/\n\n#define GET_OPCODE(i)\t(cast(OpCode, ((i)>>POS_OP) & MASK1(SIZE_OP,0)))\n#define SET_OPCODE(i,o)\t((i) = (((i)&MASK0(SIZE_OP,POS_OP)) | \\\n\t\t((cast(Instruction, o)<<POS_OP)&MASK1(SIZE_OP,POS_OP))))\n\n#define GETARG_A(i)\t(cast(int, ((i)>>POS_A) & MASK1(SIZE_A,0)))\n#define SETARG_A(i,u)\t((i) = (((i)&MASK0(SIZE_A,POS_A)) | \\\n\t\t((cast(Instruction, u)<<POS_A)&MASK1(SIZE_A,POS_A))))\n\n#define GETARG_B(i)\t(cast(int, ((i)>>POS_B) & MASK1(SIZE_B,0)))\n#define SETARG_B(i,b)\t((i) = (((i)&MASK0(SIZE_B,POS_B)) | \\\n\t\t((cast(Instruction, b)<<POS_B)&MASK1(SIZE_B,POS_B))))\n\n#define GETARG_C(i)\t(cast(int, ((i)>>POS_C) & MASK1(SIZE_C,0)))\n#define SETARG_C(i,b)\t((i) = (((i)&MASK0(SIZE_C,POS_C)) | \\\n\t\t((cast(Instruction, b)<<POS_C)&MASK1(SIZE_C,POS_C))))\n\n#define GETARG_Bx(i)\t(cast(int, ((i)>>POS_Bx) & MASK1(SIZE_Bx,0)))\n#define SETARG_Bx(i,b)\t((i) = (((i)&MASK0(SIZE_Bx,POS_Bx)) | \\\n\t\t((cast(Instruction, b)<<POS_Bx)&MASK1(SIZE_Bx,POS_Bx))))\n\n#define GETARG_sBx(i)\t(GETARG_Bx(i)-MAXARG_sBx)\n#define SETARG_sBx(i,b)\tSETARG_Bx((i),cast(unsigned int, (b)+MAXARG_sBx))\n\n\n#define CREATE_ABC(o,a,b,c)\t((cast(Instruction, o)<<POS_OP) \\\n\t\t\t| (cast(Instruction, a)<<POS_A) \\\n\t\t\t| (cast(Instruction, b)<<POS_B) \\\n\t\t\t| (cast(Instruction, c)<<POS_C))\n\n#define CREATE_ABx(o,a,bc)\t((cast(Instruction, o)<<POS_OP) \\\n\t\t\t| (cast(Instruction, a)<<POS_A) \\\n\t\t\t| (cast(Instruction, bc)<<POS_Bx))\n\n\n/*\n** Macros to operate RK indices\n*/\n\n/* this bit 1 means constant (0 means register) */\n#define BITRK\t\t(1 << (SIZE_B - 1))\n\n/* test whether value is a constant */\n#define ISK(x)\t\t((x) & BITRK)\n\n/* gets the index of the constant */\n#define INDEXK(r)\t((int)(r) & ~BITRK)\n\n#define MAXINDEXRK\t(BITRK - 1)\n\n/* code a constant index as a RK value */\n#define RKASK(x)\t((x) | 
BITRK)\n\n\n/*\n** invalid register that fits in 8 bits\n*/\n#define NO_REG\t\tMAXARG_A\n\n\n/*\n** R(x) - register\n** Kst(x) - constant (in constant table)\n** RK(x) == if ISK(x) then Kst(INDEXK(x)) else R(x)\n*/\n\n\n/*\n** grep \"ORDER OP\" if you change these enums\n*/\n\ntypedef enum {\n/*----------------------------------------------------------------------\nname\t\targs\tdescription\n------------------------------------------------------------------------*/\nOP_MOVE,/*\tA B\tR(A) := R(B)\t\t\t\t\t*/\nOP_LOADK,/*\tA Bx\tR(A) := Kst(Bx)\t\t\t\t\t*/\nOP_LOADBOOL,/*\tA B C\tR(A) := (Bool)B; if (C) pc++\t\t\t*/\nOP_LOADNIL,/*\tA B\tR(A) := ... := R(B) := nil\t\t\t*/\nOP_GETUPVAL,/*\tA B\tR(A) := UpValue[B]\t\t\t\t*/\n\nOP_GETGLOBAL,/*\tA Bx\tR(A) := Gbl[Kst(Bx)]\t\t\t\t*/\nOP_GETTABLE,/*\tA B C\tR(A) := R(B)[RK(C)]\t\t\t\t*/\n\nOP_SETGLOBAL,/*\tA Bx\tGbl[Kst(Bx)] := R(A)\t\t\t\t*/\nOP_SETUPVAL,/*\tA B\tUpValue[B] := R(A)\t\t\t\t*/\nOP_SETTABLE,/*\tA B C\tR(A)[RK(B)] := RK(C)\t\t\t\t*/\n\nOP_NEWTABLE,/*\tA B C\tR(A) := {} (size = B,C)\t\t\t\t*/\n\nOP_SELF,/*\tA B C\tR(A+1) := R(B); R(A) := R(B)[RK(C)]\t\t*/\n\nOP_ADD,/*\tA B C\tR(A) := RK(B) + RK(C)\t\t\t\t*/\nOP_SUB,/*\tA B C\tR(A) := RK(B) - RK(C)\t\t\t\t*/\nOP_MUL,/*\tA B C\tR(A) := RK(B) * RK(C)\t\t\t\t*/\nOP_DIV,/*\tA B C\tR(A) := RK(B) / RK(C)\t\t\t\t*/\nOP_MOD,/*\tA B C\tR(A) := RK(B) % RK(C)\t\t\t\t*/\nOP_POW,/*\tA B C\tR(A) := RK(B) ^ RK(C)\t\t\t\t*/\nOP_UNM,/*\tA B\tR(A) := -R(B)\t\t\t\t\t*/\nOP_NOT,/*\tA B\tR(A) := not R(B)\t\t\t\t*/\nOP_LEN,/*\tA B\tR(A) := length of R(B)\t\t\t\t*/\n\nOP_CONCAT,/*\tA B C\tR(A) := R(B).. ... 
..R(C)\t\t\t*/\n\nOP_JMP,/*\tsBx\tpc+=sBx\t\t\t\t\t*/\n\nOP_EQ,/*\tA B C\tif ((RK(B) == RK(C)) ~= A) then pc++\t\t*/\nOP_LT,/*\tA B C\tif ((RK(B) <  RK(C)) ~= A) then pc++  \t\t*/\nOP_LE,/*\tA B C\tif ((RK(B) <= RK(C)) ~= A) then pc++  \t\t*/\n\nOP_TEST,/*\tA C\tif not (R(A) <=> C) then pc++\t\t\t*/ \nOP_TESTSET,/*\tA B C\tif (R(B) <=> C) then R(A) := R(B) else pc++\t*/ \n\nOP_CALL,/*\tA B C\tR(A), ... ,R(A+C-2) := R(A)(R(A+1), ... ,R(A+B-1)) */\nOP_TAILCALL,/*\tA B C\treturn R(A)(R(A+1), ... ,R(A+B-1))\t\t*/\nOP_RETURN,/*\tA B\treturn R(A), ... ,R(A+B-2)\t(see note)\t*/\n\nOP_FORLOOP,/*\tA sBx\tR(A)+=R(A+2);\n\t\t\tif R(A) <?= R(A+1) then { pc+=sBx; R(A+3)=R(A) }*/\nOP_FORPREP,/*\tA sBx\tR(A)-=R(A+2); pc+=sBx\t\t\t\t*/\n\nOP_TFORLOOP,/*\tA C\tR(A+3), ... ,R(A+2+C) := R(A)(R(A+1), R(A+2)); \n                        if R(A+3) ~= nil then R(A+2)=R(A+3) else pc++\t*/ \nOP_SETLIST,/*\tA B C\tR(A)[(C-1)*FPF+i] := R(A+i), 1 <= i <= B\t*/\n\nOP_CLOSE,/*\tA \tclose all variables in the stack up to (>=) R(A)*/\nOP_CLOSURE,/*\tA Bx\tR(A) := closure(KPROTO[Bx], R(A), ... ,R(A+n))\t*/\n\nOP_VARARG/*\tA B\tR(A), R(A+1), ..., R(A+B-1) = vararg\t\t*/\n} OpCode;\n\n\n#define NUM_OPCODES\t(cast(int, OP_VARARG) + 1)\n\n\n\n/*===========================================================================\n  Notes:\n  (*) In OP_CALL, if (B == 0) then B = top. 
C is the number of returns - 1,\n      and can be 0: OP_CALL then sets `top' to last_result+1, so\n      next open instruction (OP_CALL, OP_RETURN, OP_SETLIST) may use `top'.\n\n  (*) In OP_VARARG, if (B == 0) then use actual number of varargs and\n      set top (like in OP_CALL with C == 0).\n\n  (*) In OP_RETURN, if (B == 0) then return up to `top'\n\n  (*) In OP_SETLIST, if (B == 0) then B = `top';\n      if (C == 0) then next `instruction' is real C\n\n  (*) For comparisons, A specifies what condition the test should accept\n      (true or false).\n\n  (*) All `skips' (pc++) assume that next instruction is a jump\n===========================================================================*/\n\n\n/*\n** masks for instruction properties. The format is:\n** bits 0-1: op mode\n** bits 2-3: C arg mode\n** bits 4-5: B arg mode\n** bit 6: instruction set register A\n** bit 7: operator is a test\n*/  \n\nenum OpArgMask {\n  OpArgN,  /* argument is not used */\n  OpArgU,  /* argument is used */\n  OpArgR,  /* argument is a register or a jump offset */\n  OpArgK   /* argument is a constant or register/constant */\n};\n\nLUAI_DATA const lu_byte luaP_opmodes[NUM_OPCODES];\n\n#define getOpMode(m)\t(cast(enum OpMode, luaP_opmodes[m] & 3))\n#define getBMode(m)\t(cast(enum OpArgMask, (luaP_opmodes[m] >> 4) & 3))\n#define getCMode(m)\t(cast(enum OpArgMask, (luaP_opmodes[m] >> 2) & 3))\n#define testAMode(m)\t(luaP_opmodes[m] & (1 << 6))\n#define testTMode(m)\t(luaP_opmodes[m] & (1 << 7))\n\n\nLUAI_DATA const char *const luaP_opnames[NUM_OPCODES+1];  /* opcode names */\n\n\n/* number of list items to accumulate before a SETLIST instruction */\n#define LFIELDS_PER_FLUSH\t50\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/loslib.c",
    "content": "/*\n** $Id: loslib.c,v 1.19.1.3 2008/01/18 16:38:18 roberto Exp $\n** Standard Operating System library\n** See Copyright Notice in lua.h\n*/\n\n\n#include <errno.h>\n#include <locale.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\n#define loslib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\nstatic int os_pushresult (lua_State *L, int i, const char *filename) {\n  int en = errno;  /* calls to Lua API may change this value */\n  if (i) {\n    lua_pushboolean(L, 1);\n    return 1;\n  }\n  else {\n    lua_pushnil(L);\n    lua_pushfstring(L, \"%s: %s\", filename, strerror(en));\n    lua_pushinteger(L, en);\n    return 3;\n  }\n}\n\n\nstatic int os_execute (lua_State *L) {\n  lua_pushinteger(L, system(luaL_optstring(L, 1, NULL)));\n  return 1;\n}\n\n\nstatic int os_remove (lua_State *L) {\n  const char *filename = luaL_checkstring(L, 1);\n  return os_pushresult(L, remove(filename) == 0, filename);\n}\n\n\nstatic int os_rename (lua_State *L) {\n  const char *fromname = luaL_checkstring(L, 1);\n  const char *toname = luaL_checkstring(L, 2);\n  return os_pushresult(L, rename(fromname, toname) == 0, fromname);\n}\n\n\nstatic int os_tmpname (lua_State *L) {\n  char buff[LUA_TMPNAMBUFSIZE];\n  int err;\n  lua_tmpnam(buff, err);\n  if (err)\n    return luaL_error(L, \"unable to generate a unique filename\");\n  lua_pushstring(L, buff);\n  return 1;\n}\n\n\nstatic int os_getenv (lua_State *L) {\n  lua_pushstring(L, getenv(luaL_checkstring(L, 1)));  /* if NULL push nil */\n  return 1;\n}\n\n\nstatic int os_clock (lua_State *L) {\n  lua_pushnumber(L, ((lua_Number)clock())/(lua_Number)CLOCKS_PER_SEC);\n  return 1;\n}\n\n\n/*\n** {======================================================\n** Time/Date operations\n** { year=%Y, month=%m, day=%d, hour=%H, min=%M, sec=%S,\n**   wday=%w+1, yday=%j, isdst=? 
}\n** =======================================================\n*/\n\nstatic void setfield (lua_State *L, const char *key, int value) {\n  lua_pushinteger(L, value);\n  lua_setfield(L, -2, key);\n}\n\nstatic void setboolfield (lua_State *L, const char *key, int value) {\n  if (value < 0)  /* undefined? */\n    return;  /* does not set field */\n  lua_pushboolean(L, value);\n  lua_setfield(L, -2, key);\n}\n\nstatic int getboolfield (lua_State *L, const char *key) {\n  int res;\n  lua_getfield(L, -1, key);\n  res = lua_isnil(L, -1) ? -1 : lua_toboolean(L, -1);\n  lua_pop(L, 1);\n  return res;\n}\n\n\nstatic int getfield (lua_State *L, const char *key, int d) {\n  int res;\n  lua_getfield(L, -1, key);\n  if (lua_isnumber(L, -1))\n    res = (int)lua_tointeger(L, -1);\n  else {\n    if (d < 0)\n      return luaL_error(L, \"field \" LUA_QS \" missing in date table\", key);\n    res = d;\n  }\n  lua_pop(L, 1);\n  return res;\n}\n\n\nstatic int os_date (lua_State *L) {\n  const char *s = luaL_optstring(L, 1, \"%c\");\n  time_t t = luaL_opt(L, (time_t)luaL_checknumber, 2, time(NULL));\n  struct tm *stm;\n  if (*s == '!') {  /* UTC? */\n    stm = gmtime(&t);\n    s++;  /* skip `!' */\n  }\n  else\n    stm = localtime(&t);\n  if (stm == NULL)  /* invalid date? */\n    lua_pushnil(L);\n  else if (strcmp(s, \"*t\") == 0) {\n    lua_createtable(L, 0, 9);  /* 9 = number of fields */\n    setfield(L, \"sec\", stm->tm_sec);\n    setfield(L, \"min\", stm->tm_min);\n    setfield(L, \"hour\", stm->tm_hour);\n    setfield(L, \"day\", stm->tm_mday);\n    setfield(L, \"month\", stm->tm_mon+1);\n    setfield(L, \"year\", stm->tm_year+1900);\n    setfield(L, \"wday\", stm->tm_wday+1);\n    setfield(L, \"yday\", stm->tm_yday+1);\n    setboolfield(L, \"isdst\", stm->tm_isdst);\n  }\n  else {\n    char cc[3];\n    luaL_Buffer b;\n    cc[0] = '%'; cc[2] = '\\0';\n    luaL_buffinit(L, &b);\n    for (; *s; s++) {\n      if (*s != '%' || *(s + 1) == '\\0')  /* no conversion specifier? 
*/\n        luaL_addchar(&b, *s);\n      else {\n        size_t reslen;\n        char buff[200];  /* should be big enough for any conversion result */\n        cc[1] = *(++s);\n        reslen = strftime(buff, sizeof(buff), cc, stm);\n        luaL_addlstring(&b, buff, reslen);\n      }\n    }\n    luaL_pushresult(&b);\n  }\n  return 1;\n}\n\n\nstatic int os_time (lua_State *L) {\n  time_t t;\n  if (lua_isnoneornil(L, 1))  /* called without args? */\n    t = time(NULL);  /* get current time */\n  else {\n    struct tm ts;\n    luaL_checktype(L, 1, LUA_TTABLE);\n    lua_settop(L, 1);  /* make sure table is at the top */\n    ts.tm_sec = getfield(L, \"sec\", 0);\n    ts.tm_min = getfield(L, \"min\", 0);\n    ts.tm_hour = getfield(L, \"hour\", 12);\n    ts.tm_mday = getfield(L, \"day\", -1);\n    ts.tm_mon = getfield(L, \"month\", -1) - 1;\n    ts.tm_year = getfield(L, \"year\", -1) - 1900;\n    ts.tm_isdst = getboolfield(L, \"isdst\");\n    t = mktime(&ts);\n  }\n  if (t == (time_t)(-1))\n    lua_pushnil(L);\n  else\n    lua_pushnumber(L, (lua_Number)t);\n  return 1;\n}\n\n\nstatic int os_difftime (lua_State *L) {\n  lua_pushnumber(L, difftime((time_t)(luaL_checknumber(L, 1)),\n                             (time_t)(luaL_optnumber(L, 2, 0))));\n  return 1;\n}\n\n/* }====================================================== */\n\n\nstatic int os_setlocale (lua_State *L) {\n  static const int cat[] = {LC_ALL, LC_COLLATE, LC_CTYPE, LC_MONETARY,\n                      LC_NUMERIC, LC_TIME};\n  static const char *const catnames[] = {\"all\", \"collate\", \"ctype\", \"monetary\",\n     \"numeric\", \"time\", NULL};\n  const char *l = luaL_optstring(L, 1, NULL);\n  int op = luaL_checkoption(L, 2, \"all\", catnames);\n  lua_pushstring(L, setlocale(cat[op], l));\n  return 1;\n}\n\n\nstatic int os_exit (lua_State *L) {\n  exit(luaL_optint(L, 1, EXIT_SUCCESS));\n}\n\nstatic const luaL_Reg syslib[] = {\n  {\"clock\",     os_clock},\n  {\"date\",      os_date},\n  {\"difftime\",  
os_difftime},\n  {\"execute\",   os_execute},\n  {\"exit\",      os_exit},\n  {\"getenv\",    os_getenv},\n  {\"remove\",    os_remove},\n  {\"rename\",    os_rename},\n  {\"setlocale\", os_setlocale},\n  {\"time\",      os_time},\n  {\"tmpname\",   os_tmpname},\n  {NULL, NULL}\n};\n\n/* }====================================================== */\n\n#define UNUSED(V) ((void) V)\n\n/* Only a subset is loaded currently, for sandboxing concerns. */\nstatic const luaL_Reg sandbox_syslib[] = {\n  {\"clock\",     os_clock},\n  {NULL, NULL}\n};\n\nLUALIB_API int luaopen_os (lua_State *L) {\n  UNUSED(syslib);\n  luaL_register(L, LUA_OSLIBNAME, sandbox_syslib);\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lparser.c",
    "content": "/*\n** $Id: lparser.c,v 2.42.1.4 2011/10/21 19:31:42 roberto Exp $\n** Lua Parser\n** See Copyright Notice in lua.h\n*/\n\n\n#include <string.h>\n\n#define lparser_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lcode.h\"\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"llex.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lparser.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n\n\n\n#define hasmultret(k)\t\t((k) == VCALL || (k) == VVARARG)\n\n#define getlocvar(fs, i)\t((fs)->f->locvars[(fs)->actvar[i]])\n\n#define luaY_checklimit(fs,v,l,m)\tif ((v)>(l)) errorlimit(fs,l,m)\n\n\n/*\n** nodes for block list (list of active blocks)\n*/\ntypedef struct BlockCnt {\n  struct BlockCnt *previous;  /* chain */\n  int breaklist;  /* list of jumps out of this loop */\n  lu_byte nactvar;  /* # active locals outside the breakable structure */\n  lu_byte upval;  /* true if some variable in the block is an upvalue */\n  lu_byte isbreakable;  /* true if `block' is a loop */\n} BlockCnt;\n\n\n\n/*\n** prototypes for recursive non-terminal functions\n*/\nstatic void chunk (LexState *ls);\nstatic void expr (LexState *ls, expdesc *v);\n\n\nstatic void anchor_token (LexState *ls) {\n  if (ls->t.token == TK_NAME || ls->t.token == TK_STRING) {\n    TString *ts = ls->t.seminfo.ts;\n    luaX_newstring(ls, getstr(ts), ts->tsv.len);\n  }\n}\n\n\nstatic void error_expected (LexState *ls, int token) {\n  luaX_syntaxerror(ls,\n      luaO_pushfstring(ls->L, LUA_QS \" expected\", luaX_token2str(ls, token)));\n}\n\n\nstatic void errorlimit (FuncState *fs, int limit, const char *what) {\n  const char *msg = (fs->f->linedefined == 0) ?\n    luaO_pushfstring(fs->L, \"main function has more than %d %s\", limit, what) :\n    luaO_pushfstring(fs->L, \"function at line %d has more than %d %s\",\n                            fs->f->linedefined, limit, what);\n  luaX_lexerror(fs->ls, msg, 
0);\n}\n\n\nstatic int testnext (LexState *ls, int c) {\n  if (ls->t.token == c) {\n    luaX_next(ls);\n    return 1;\n  }\n  else return 0;\n}\n\n\nstatic void check (LexState *ls, int c) {\n  if (ls->t.token != c)\n    error_expected(ls, c);\n}\n\nstatic void checknext (LexState *ls, int c) {\n  check(ls, c);\n  luaX_next(ls);\n}\n\n\n#define check_condition(ls,c,msg)\t{ if (!(c)) luaX_syntaxerror(ls, msg); }\n\n\n\nstatic void check_match (LexState *ls, int what, int who, int where) {\n  if (!testnext(ls, what)) {\n    if (where == ls->linenumber)\n      error_expected(ls, what);\n    else {\n      luaX_syntaxerror(ls, luaO_pushfstring(ls->L,\n             LUA_QS \" expected (to close \" LUA_QS \" at line %d)\",\n              luaX_token2str(ls, what), luaX_token2str(ls, who), where));\n    }\n  }\n}\n\n\nstatic TString *str_checkname (LexState *ls) {\n  TString *ts;\n  check(ls, TK_NAME);\n  ts = ls->t.seminfo.ts;\n  luaX_next(ls);\n  return ts;\n}\n\n\nstatic void init_exp (expdesc *e, expkind k, int i) {\n  e->f = e->t = NO_JUMP;\n  e->k = k;\n  e->u.s.info = i;\n}\n\n\nstatic void codestring (LexState *ls, expdesc *e, TString *s) {\n  init_exp(e, VK, luaK_stringK(ls->fs, s));\n}\n\n\nstatic void checkname(LexState *ls, expdesc *e) {\n  codestring(ls, e, str_checkname(ls));\n}\n\n\nstatic int registerlocalvar (LexState *ls, TString *varname) {\n  FuncState *fs = ls->fs;\n  Proto *f = fs->f;\n  int oldsize = f->sizelocvars;\n  luaM_growvector(ls->L, f->locvars, fs->nlocvars, f->sizelocvars,\n                  LocVar, SHRT_MAX, \"too many local variables\");\n  while (oldsize < f->sizelocvars) f->locvars[oldsize++].varname = NULL;\n  f->locvars[fs->nlocvars].varname = varname;\n  luaC_objbarrier(ls->L, f, varname);\n  return fs->nlocvars++;\n}\n\n\n#define new_localvarliteral(ls,v,n) \\\n  new_localvar(ls, luaX_newstring(ls, \"\" v, (sizeof(v)/sizeof(char))-1), n)\n\n\nstatic void new_localvar (LexState *ls, TString *name, int n) {\n  FuncState *fs = ls->fs;\n  
luaY_checklimit(fs, fs->nactvar+n+1, LUAI_MAXVARS, \"local variables\");\n  fs->actvar[fs->nactvar+n] = cast(unsigned short, registerlocalvar(ls, name));\n}\n\n\nstatic void adjustlocalvars (LexState *ls, int nvars) {\n  FuncState *fs = ls->fs;\n  fs->nactvar = cast_byte(fs->nactvar + nvars);\n  for (; nvars; nvars--) {\n    getlocvar(fs, fs->nactvar - nvars).startpc = fs->pc;\n  }\n}\n\n\nstatic void removevars (LexState *ls, int tolevel) {\n  FuncState *fs = ls->fs;\n  while (fs->nactvar > tolevel)\n    getlocvar(fs, --fs->nactvar).endpc = fs->pc;\n}\n\n\nstatic int indexupvalue (FuncState *fs, TString *name, expdesc *v) {\n  int i;\n  Proto *f = fs->f;\n  int oldsize = f->sizeupvalues;\n  for (i=0; i<f->nups; i++) {\n    if (fs->upvalues[i].k == v->k && fs->upvalues[i].info == v->u.s.info) {\n      lua_assert(f->upvalues[i] == name);\n      return i;\n    }\n  }\n  /* new one */\n  luaY_checklimit(fs, f->nups + 1, LUAI_MAXUPVALUES, \"upvalues\");\n  luaM_growvector(fs->L, f->upvalues, f->nups, f->sizeupvalues,\n                  TString *, MAX_INT, \"\");\n  while (oldsize < f->sizeupvalues) f->upvalues[oldsize++] = NULL;\n  f->upvalues[f->nups] = name;\n  luaC_objbarrier(fs->L, f, name);\n  lua_assert(v->k == VLOCAL || v->k == VUPVAL);\n  fs->upvalues[f->nups].k = cast_byte(v->k);\n  fs->upvalues[f->nups].info = cast_byte(v->u.s.info);\n  return f->nups++;\n}\n\n\nstatic int searchvar (FuncState *fs, TString *n) {\n  int i;\n  for (i=fs->nactvar-1; i >= 0; i--) {\n    if (n == getlocvar(fs, i).varname)\n      return i;\n  }\n  return -1;  /* not found */\n}\n\n\nstatic void markupval (FuncState *fs, int level) {\n  BlockCnt *bl = fs->bl;\n  while (bl && bl->nactvar > level) bl = bl->previous;\n  if (bl) bl->upval = 1;\n}\n\n\nstatic int singlevaraux (FuncState *fs, TString *n, expdesc *var, int base) {\n  if (fs == NULL) {  /* no more levels? 
*/\n    init_exp(var, VGLOBAL, NO_REG);  /* default is global variable */\n    return VGLOBAL;\n  }\n  else {\n    int v = searchvar(fs, n);  /* look up at current level */\n    if (v >= 0) {\n      init_exp(var, VLOCAL, v);\n      if (!base)\n        markupval(fs, v);  /* local will be used as an upval */\n      return VLOCAL;\n    }\n    else {  /* not found at current level; try upper one */\n      if (singlevaraux(fs->prev, n, var, 0) == VGLOBAL)\n        return VGLOBAL;\n      var->u.s.info = indexupvalue(fs, n, var);  /* else was LOCAL or UPVAL */\n      var->k = VUPVAL;  /* upvalue in this level */\n      return VUPVAL;\n    }\n  }\n}\n\n\nstatic void singlevar (LexState *ls, expdesc *var) {\n  TString *varname = str_checkname(ls);\n  FuncState *fs = ls->fs;\n  if (singlevaraux(fs, varname, var, 1) == VGLOBAL)\n    var->u.s.info = luaK_stringK(fs, varname);  /* info points to global name */\n}\n\n\nstatic void adjust_assign (LexState *ls, int nvars, int nexps, expdesc *e) {\n  FuncState *fs = ls->fs;\n  int extra = nvars - nexps;\n  if (hasmultret(e->k)) {\n    extra++;  /* includes call itself */\n    if (extra < 0) extra = 0;\n    luaK_setreturns(fs, e, extra);  /* last exp. 
provides the difference */\n    if (extra > 1) luaK_reserveregs(fs, extra-1);\n  }\n  else {\n    if (e->k != VVOID) luaK_exp2nextreg(fs, e);  /* close last expression */\n    if (extra > 0) {\n      int reg = fs->freereg;\n      luaK_reserveregs(fs, extra);\n      luaK_nil(fs, reg, extra);\n    }\n  }\n}\n\n\nstatic void enterlevel (LexState *ls) {\n  if (++ls->L->nCcalls > LUAI_MAXCCALLS)\n\tluaX_lexerror(ls, \"chunk has too many syntax levels\", 0);\n}\n\n\n#define leavelevel(ls)\t((ls)->L->nCcalls--)\n\n\nstatic void enterblock (FuncState *fs, BlockCnt *bl, lu_byte isbreakable) {\n  bl->breaklist = NO_JUMP;\n  bl->isbreakable = isbreakable;\n  bl->nactvar = fs->nactvar;\n  bl->upval = 0;\n  bl->previous = fs->bl;\n  fs->bl = bl;\n  lua_assert(fs->freereg == fs->nactvar);\n}\n\n\nstatic void leaveblock (FuncState *fs) {\n  BlockCnt *bl = fs->bl;\n  fs->bl = bl->previous;\n  removevars(fs->ls, bl->nactvar);\n  if (bl->upval)\n    luaK_codeABC(fs, OP_CLOSE, bl->nactvar, 0, 0);\n  /* a block either controls scope or breaks (never both) */\n  lua_assert(!bl->isbreakable || !bl->upval);\n  lua_assert(bl->nactvar == fs->nactvar);\n  fs->freereg = fs->nactvar;  /* free registers */\n  luaK_patchtohere(fs, bl->breaklist);\n}\n\n\nstatic void pushclosure (LexState *ls, FuncState *func, expdesc *v) {\n  FuncState *fs = ls->fs;\n  Proto *f = fs->f;\n  int oldsize = f->sizep;\n  int i;\n  luaM_growvector(ls->L, f->p, fs->np, f->sizep, Proto *,\n                  MAXARG_Bx, \"constant table overflow\");\n  while (oldsize < f->sizep) f->p[oldsize++] = NULL;\n  f->p[fs->np++] = func->f;\n  luaC_objbarrier(ls->L, f, func->f);\n  init_exp(v, VRELOCABLE, luaK_codeABx(fs, OP_CLOSURE, 0, fs->np-1));\n  for (i=0; i<func->f->nups; i++) {\n    OpCode o = (func->upvalues[i].k == VLOCAL) ? 
OP_MOVE : OP_GETUPVAL;\n    luaK_codeABC(fs, o, 0, func->upvalues[i].info, 0);\n  }\n}\n\n\nstatic void open_func (LexState *ls, FuncState *fs) {\n  lua_State *L = ls->L;\n  Proto *f = luaF_newproto(L);\n  fs->f = f;\n  fs->prev = ls->fs;  /* linked list of funcstates */\n  fs->ls = ls;\n  fs->L = L;\n  ls->fs = fs;\n  fs->pc = 0;\n  fs->lasttarget = -1;\n  fs->jpc = NO_JUMP;\n  fs->freereg = 0;\n  fs->nk = 0;\n  fs->np = 0;\n  fs->nlocvars = 0;\n  fs->nactvar = 0;\n  fs->bl = NULL;\n  f->source = ls->source;\n  f->maxstacksize = 2;  /* registers 0/1 are always valid */\n  fs->h = luaH_new(L, 0, 0);\n  /* anchor table of constants and prototype (to avoid being collected) */\n  sethvalue2s(L, L->top, fs->h);\n  incr_top(L);\n  setptvalue2s(L, L->top, f);\n  incr_top(L);\n}\n\n\nstatic void close_func (LexState *ls) {\n  lua_State *L = ls->L;\n  FuncState *fs = ls->fs;\n  Proto *f = fs->f;\n  removevars(ls, 0);\n  luaK_ret(fs, 0, 0);  /* final return */\n  luaM_reallocvector(L, f->code, f->sizecode, fs->pc, Instruction);\n  f->sizecode = fs->pc;\n  luaM_reallocvector(L, f->lineinfo, f->sizelineinfo, fs->pc, int);\n  f->sizelineinfo = fs->pc;\n  luaM_reallocvector(L, f->k, f->sizek, fs->nk, TValue);\n  f->sizek = fs->nk;\n  luaM_reallocvector(L, f->p, f->sizep, fs->np, Proto *);\n  f->sizep = fs->np;\n  luaM_reallocvector(L, f->locvars, f->sizelocvars, fs->nlocvars, LocVar);\n  f->sizelocvars = fs->nlocvars;\n  luaM_reallocvector(L, f->upvalues, f->sizeupvalues, f->nups, TString *);\n  f->sizeupvalues = f->nups;\n  lua_assert(luaG_checkcode(f));\n  lua_assert(fs->bl == NULL);\n  ls->fs = fs->prev;\n  /* last token read was anchored in defunct function; must reanchor it */\n  if (fs) anchor_token(ls);\n  L->top -= 2;  /* remove table and prototype from the stack */\n}\n\n\nProto *luaY_parser (lua_State *L, ZIO *z, Mbuffer *buff, const char *name) {\n  struct LexState lexstate;\n  struct FuncState funcstate;\n  lexstate.buff = buff;\n  TString *tname = luaS_new(L, 
name);\n  setsvalue2s(L, L->top, tname);\n  incr_top(L);\n  luaX_setinput(L, &lexstate, z, tname);\n  open_func(&lexstate, &funcstate);\n  funcstate.f->is_vararg = VARARG_ISVARARG;  /* main func. is always vararg */\n  luaX_next(&lexstate);  /* read first token */\n  chunk(&lexstate);\n  check(&lexstate, TK_EOS);\n  close_func(&lexstate);\n  --L->top;\n  lua_assert(funcstate.prev == NULL);\n  lua_assert(funcstate.f->nups == 0);\n  lua_assert(lexstate.fs == NULL);\n  return funcstate.f;\n}\n\n\n\n/*============================================================*/\n/* GRAMMAR RULES */\n/*============================================================*/\n\n\nstatic void field (LexState *ls, expdesc *v) {\n  /* field -> ['.' | ':'] NAME */\n  FuncState *fs = ls->fs;\n  expdesc key;\n  luaK_exp2anyreg(fs, v);\n  luaX_next(ls);  /* skip the dot or colon */\n  checkname(ls, &key);\n  luaK_indexed(fs, v, &key);\n}\n\n\nstatic void yindex (LexState *ls, expdesc *v) {\n  /* index -> '[' expr ']' */\n  luaX_next(ls);  /* skip the '[' */\n  expr(ls, v);\n  luaK_exp2val(ls->fs, v);\n  checknext(ls, ']');\n}\n\n\n/*\n** {======================================================================\n** Rules for Constructors\n** =======================================================================\n*/\n\n\nstruct ConsControl {\n  expdesc v;  /* last list item read */\n  expdesc *t;  /* table descriptor */\n  int nh;  /* total number of `record' elements */\n  int na;  /* total number of array elements */\n  int tostore;  /* number of array elements pending to be stored */\n};\n\n\nstatic void recfield (LexState *ls, struct ConsControl *cc) {\n  /* recfield -> (NAME | `['exp1`]') = exp1 */\n  FuncState *fs = ls->fs;\n  int reg = ls->fs->freereg;\n  expdesc key, val;\n  int rkkey;\n  if (ls->t.token == TK_NAME) {\n    luaY_checklimit(fs, cc->nh, MAX_INT, \"items in a constructor\");\n    checkname(ls, &key);\n  }\n  else  /* ls->t.token == '[' */\n    yindex(ls, &key);\n  cc->nh++;\n  
checknext(ls, '=');\n  rkkey = luaK_exp2RK(fs, &key);\n  expr(ls, &val);\n  luaK_codeABC(fs, OP_SETTABLE, cc->t->u.s.info, rkkey, luaK_exp2RK(fs, &val));\n  fs->freereg = reg;  /* free registers */\n}\n\n\nstatic void closelistfield (FuncState *fs, struct ConsControl *cc) {\n  if (cc->v.k == VVOID) return;  /* there is no list item */\n  luaK_exp2nextreg(fs, &cc->v);\n  cc->v.k = VVOID;\n  if (cc->tostore == LFIELDS_PER_FLUSH) {\n    luaK_setlist(fs, cc->t->u.s.info, cc->na, cc->tostore);  /* flush */\n    cc->tostore = 0;  /* no more items pending */\n  }\n}\n\n\nstatic void lastlistfield (FuncState *fs, struct ConsControl *cc) {\n  if (cc->tostore == 0) return;\n  if (hasmultret(cc->v.k)) {\n    luaK_setmultret(fs, &cc->v);\n    luaK_setlist(fs, cc->t->u.s.info, cc->na, LUA_MULTRET);\n    cc->na--;  /* do not count last expression (unknown number of elements) */\n  }\n  else {\n    if (cc->v.k != VVOID)\n      luaK_exp2nextreg(fs, &cc->v);\n    luaK_setlist(fs, cc->t->u.s.info, cc->na, cc->tostore);\n  }\n}\n\n\nstatic void listfield (LexState *ls, struct ConsControl *cc) {\n  expr(ls, &cc->v);\n  luaY_checklimit(ls->fs, cc->na, MAX_INT, \"items in a constructor\");\n  cc->na++;\n  cc->tostore++;\n}\n\n\nstatic void constructor (LexState *ls, expdesc *t) {\n  /* constructor -> ?? */\n  FuncState *fs = ls->fs;\n  int line = ls->linenumber;\n  int pc = luaK_codeABC(fs, OP_NEWTABLE, 0, 0, 0);\n  struct ConsControl cc;\n  cc.na = cc.nh = cc.tostore = 0;\n  cc.t = t;\n  init_exp(t, VRELOCABLE, pc);\n  init_exp(&cc.v, VVOID, 0);  /* no value (yet) */\n  luaK_exp2nextreg(ls->fs, t);  /* fix it at stack top (for gc) */\n  checknext(ls, '{');\n  do {\n    lua_assert(cc.v.k == VVOID || cc.tostore > 0);\n    if (ls->t.token == '}') break;\n    closelistfield(fs, &cc);\n    switch(ls->t.token) {\n      case TK_NAME: {  /* may be listfields or recfields */\n        luaX_lookahead(ls);\n        if (ls->lookahead.token != '=')  /* expression? 
*/\n          listfield(ls, &cc);\n        else\n          recfield(ls, &cc);\n        break;\n      }\n      case '[': {  /* constructor_item -> recfield */\n        recfield(ls, &cc);\n        break;\n      }\n      default: {  /* constructor_part -> listfield */\n        listfield(ls, &cc);\n        break;\n      }\n    }\n  } while (testnext(ls, ',') || testnext(ls, ';'));\n  check_match(ls, '}', '{', line);\n  lastlistfield(fs, &cc);\n  SETARG_B(fs->f->code[pc], luaO_int2fb(cc.na)); /* set initial array size */\n  SETARG_C(fs->f->code[pc], luaO_int2fb(cc.nh));  /* set initial table size */\n}\n\n/* }====================================================================== */\n\n\n\nstatic void parlist (LexState *ls) {\n  /* parlist -> [ param { `,' param } ] */\n  FuncState *fs = ls->fs;\n  Proto *f = fs->f;\n  int nparams = 0;\n  f->is_vararg = 0;\n  if (ls->t.token != ')') {  /* is `parlist' not empty? */\n    do {\n      switch (ls->t.token) {\n        case TK_NAME: {  /* param -> NAME */\n          new_localvar(ls, str_checkname(ls), nparams++);\n          break;\n        }\n        case TK_DOTS: {  /* param -> `...' 
*/\n          luaX_next(ls);\n#if defined(LUA_COMPAT_VARARG)\n          /* use `arg' as default name */\n          new_localvarliteral(ls, \"arg\", nparams++);\n          f->is_vararg = VARARG_HASARG | VARARG_NEEDSARG;\n#endif\n          f->is_vararg |= VARARG_ISVARARG;\n          break;\n        }\n        default: luaX_syntaxerror(ls, \"<name> or \" LUA_QL(\"...\") \" expected\");\n      }\n    } while (!f->is_vararg && testnext(ls, ','));\n  }\n  adjustlocalvars(ls, nparams);\n  f->numparams = cast_byte(fs->nactvar - (f->is_vararg & VARARG_HASARG));\n  luaK_reserveregs(fs, fs->nactvar);  /* reserve register for parameters */\n}\n\n\nstatic void body (LexState *ls, expdesc *e, int needself, int line) {\n  /* body ->  `(' parlist `)' chunk END */\n  FuncState new_fs;\n  open_func(ls, &new_fs);\n  new_fs.f->linedefined = line;\n  checknext(ls, '(');\n  if (needself) {\n    new_localvarliteral(ls, \"self\", 0);\n    adjustlocalvars(ls, 1);\n  }\n  parlist(ls);\n  checknext(ls, ')');\n  chunk(ls);\n  new_fs.f->lastlinedefined = ls->linenumber;\n  check_match(ls, TK_END, TK_FUNCTION, line);\n  close_func(ls);\n  pushclosure(ls, &new_fs, e);\n}\n\n\nstatic int explist1 (LexState *ls, expdesc *v) {\n  /* explist1 -> expr { `,' expr } */\n  int n = 1;  /* at least one expression */\n  expr(ls, v);\n  while (testnext(ls, ',')) {\n    luaK_exp2nextreg(ls->fs, v);\n    expr(ls, v);\n    n++;\n  }\n  return n;\n}\n\n\nstatic void funcargs (LexState *ls, expdesc *f) {\n  FuncState *fs = ls->fs;\n  expdesc args;\n  int base, nparams;\n  int line = ls->linenumber;\n  switch (ls->t.token) {\n    case '(': {  /* funcargs -> `(' [ explist1 ] `)' */\n      if (line != ls->lastline)\n        luaX_syntaxerror(ls,\"ambiguous syntax (function call x new statement)\");\n      luaX_next(ls);\n      if (ls->t.token == ')')  /* arg list is empty? 
*/\n        args.k = VVOID;\n      else {\n        explist1(ls, &args);\n        luaK_setmultret(fs, &args);\n      }\n      check_match(ls, ')', '(', line);\n      break;\n    }\n    case '{': {  /* funcargs -> constructor */\n      constructor(ls, &args);\n      break;\n    }\n    case TK_STRING: {  /* funcargs -> STRING */\n      codestring(ls, &args, ls->t.seminfo.ts);\n      luaX_next(ls);  /* must use `seminfo' before `next' */\n      break;\n    }\n    default: {\n      luaX_syntaxerror(ls, \"function arguments expected\");\n      return;\n    }\n  }\n  lua_assert(f->k == VNONRELOC);\n  base = f->u.s.info;  /* base register for call */\n  if (hasmultret(args.k))\n    nparams = LUA_MULTRET;  /* open call */\n  else {\n    if (args.k != VVOID)\n      luaK_exp2nextreg(fs, &args);  /* close last argument */\n    nparams = fs->freereg - (base+1);\n  }\n  init_exp(f, VCALL, luaK_codeABC(fs, OP_CALL, base, nparams+1, 2));\n  luaK_fixline(fs, line);\n  fs->freereg = base+1;  /* call remove function and arguments and leaves\n                            (unless changed) one result */\n}\n\n\n\n\n/*\n** {======================================================================\n** Expression parsing\n** =======================================================================\n*/\n\n\nstatic void prefixexp (LexState *ls, expdesc *v) {\n  /* prefixexp -> NAME | '(' expr ')' */\n  switch (ls->t.token) {\n    case '(': {\n      int line = ls->linenumber;\n      luaX_next(ls);\n      expr(ls, v);\n      check_match(ls, ')', '(', line);\n      luaK_dischargevars(ls->fs, v);\n      return;\n    }\n    case TK_NAME: {\n      singlevar(ls, v);\n      return;\n    }\n    default: {\n      luaX_syntaxerror(ls, \"unexpected symbol\");\n      return;\n    }\n  }\n}\n\n\nstatic void primaryexp (LexState *ls, expdesc *v) {\n  /* primaryexp ->\n        prefixexp { `.' 
NAME | `[' exp `]' | `:' NAME funcargs | funcargs } */\n  FuncState *fs = ls->fs;\n  prefixexp(ls, v);\n  for (;;) {\n    switch (ls->t.token) {\n      case '.': {  /* field */\n        field(ls, v);\n        break;\n      }\n      case '[': {  /* `[' exp1 `]' */\n        expdesc key;\n        luaK_exp2anyreg(fs, v);\n        yindex(ls, &key);\n        luaK_indexed(fs, v, &key);\n        break;\n      }\n      case ':': {  /* `:' NAME funcargs */\n        expdesc key;\n        luaX_next(ls);\n        checkname(ls, &key);\n        luaK_self(fs, v, &key);\n        funcargs(ls, v);\n        break;\n      }\n      case '(': case TK_STRING: case '{': {  /* funcargs */\n        luaK_exp2nextreg(fs, v);\n        funcargs(ls, v);\n        break;\n      }\n      default: return;\n    }\n  }\n}\n\n\nstatic void simpleexp (LexState *ls, expdesc *v) {\n  /* simpleexp -> NUMBER | STRING | NIL | true | false | ... |\n                  constructor | FUNCTION body | primaryexp */\n  switch (ls->t.token) {\n    case TK_NUMBER: {\n      init_exp(v, VKNUM, 0);\n      v->u.nval = ls->t.seminfo.r;\n      break;\n    }\n    case TK_STRING: {\n      codestring(ls, v, ls->t.seminfo.ts);\n      break;\n    }\n    case TK_NIL: {\n      init_exp(v, VNIL, 0);\n      break;\n    }\n    case TK_TRUE: {\n      init_exp(v, VTRUE, 0);\n      break;\n    }\n    case TK_FALSE: {\n      init_exp(v, VFALSE, 0);\n      break;\n    }\n    case TK_DOTS: {  /* vararg */\n      FuncState *fs = ls->fs;\n      check_condition(ls, fs->f->is_vararg,\n                      \"cannot use \" LUA_QL(\"...\") \" outside a vararg function\");\n      fs->f->is_vararg &= ~VARARG_NEEDSARG;  /* don't need 'arg' */\n      init_exp(v, VVARARG, luaK_codeABC(fs, OP_VARARG, 0, 1, 0));\n      break;\n    }\n    case '{': {  /* constructor */\n      constructor(ls, v);\n      return;\n    }\n    case TK_FUNCTION: {\n      luaX_next(ls);\n      body(ls, v, 0, ls->linenumber);\n      return;\n    }\n    default: {\n      
primaryexp(ls, v);\n      return;\n    }\n  }\n  luaX_next(ls);\n}\n\n\nstatic UnOpr getunopr (int op) {\n  switch (op) {\n    case TK_NOT: return OPR_NOT;\n    case '-': return OPR_MINUS;\n    case '#': return OPR_LEN;\n    default: return OPR_NOUNOPR;\n  }\n}\n\n\nstatic BinOpr getbinopr (int op) {\n  switch (op) {\n    case '+': return OPR_ADD;\n    case '-': return OPR_SUB;\n    case '*': return OPR_MUL;\n    case '/': return OPR_DIV;\n    case '%': return OPR_MOD;\n    case '^': return OPR_POW;\n    case TK_CONCAT: return OPR_CONCAT;\n    case TK_NE: return OPR_NE;\n    case TK_EQ: return OPR_EQ;\n    case '<': return OPR_LT;\n    case TK_LE: return OPR_LE;\n    case '>': return OPR_GT;\n    case TK_GE: return OPR_GE;\n    case TK_AND: return OPR_AND;\n    case TK_OR: return OPR_OR;\n    default: return OPR_NOBINOPR;\n  }\n}\n\n\nstatic const struct {\n  lu_byte left;  /* left priority for each binary operator */\n  lu_byte right; /* right priority */\n} priority[] = {  /* ORDER OPR */\n   {6, 6}, {6, 6}, {7, 7}, {7, 7}, {7, 7},  /* `+' `-' `/' `%' */\n   {10, 9}, {5, 4},                 /* power and concat (right associative) */\n   {3, 3}, {3, 3},                  /* equality and inequality */\n   {3, 3}, {3, 3}, {3, 3}, {3, 3},  /* order */\n   {2, 2}, {1, 1}                   /* logical (and/or) */\n};\n\n#define UNARY_PRIORITY\t8  /* priority for unary operators */\n\n\n/*\n** subexpr -> (simpleexp | unop subexpr) { binop subexpr }\n** where `binop' is any binary operator with a priority higher than `limit'\n*/\nstatic BinOpr subexpr (LexState *ls, expdesc *v, unsigned int limit) {\n  BinOpr op;\n  UnOpr uop;\n  enterlevel(ls);\n  uop = getunopr(ls->t.token);\n  if (uop != OPR_NOUNOPR) {\n    luaX_next(ls);\n    subexpr(ls, v, UNARY_PRIORITY);\n    luaK_prefix(ls->fs, uop, v);\n  }\n  else simpleexp(ls, v);\n  /* expand while operators have priorities higher than `limit' */\n  op = getbinopr(ls->t.token);\n  while (op != OPR_NOBINOPR && priority[op].left 
> limit) {\n    expdesc v2;\n    BinOpr nextop;\n    luaX_next(ls);\n    luaK_infix(ls->fs, op, v);\n    /* read sub-expression with higher priority */\n    nextop = subexpr(ls, &v2, priority[op].right);\n    luaK_posfix(ls->fs, op, v, &v2);\n    op = nextop;\n  }\n  leavelevel(ls);\n  return op;  /* return first untreated operator */\n}\n\n\nstatic void expr (LexState *ls, expdesc *v) {\n  subexpr(ls, v, 0);\n}\n\n/* }==================================================================== */\n\n\n\n/*\n** {======================================================================\n** Rules for Statements\n** =======================================================================\n*/\n\n\nstatic int block_follow (int token) {\n  switch (token) {\n    case TK_ELSE: case TK_ELSEIF: case TK_END:\n    case TK_UNTIL: case TK_EOS:\n      return 1;\n    default: return 0;\n  }\n}\n\n\nstatic void block (LexState *ls) {\n  /* block -> chunk */\n  FuncState *fs = ls->fs;\n  BlockCnt bl;\n  enterblock(fs, &bl, 0);\n  chunk(ls);\n  lua_assert(bl.breaklist == NO_JUMP);\n  leaveblock(fs);\n}\n\n\n/*\n** structure to chain all variables in the left-hand side of an\n** assignment\n*/\nstruct LHS_assign {\n  struct LHS_assign *prev;\n  expdesc v;  /* variable (global, local, upvalue, or indexed) */\n};\n\n\n/*\n** check whether, in an assignment to a local variable, the local variable\n** is needed in a previous assignment (to a table). If so, save original\n** local value in a safe place and use this safe copy in the previous\n** assignment.\n*/\nstatic void check_conflict (LexState *ls, struct LHS_assign *lh, expdesc *v) {\n  FuncState *fs = ls->fs;\n  int extra = fs->freereg;  /* eventual position to save local variable */\n  int conflict = 0;\n  for (; lh; lh = lh->prev) {\n    if (lh->v.k == VINDEXED) {\n      if (lh->v.u.s.info == v->u.s.info) {  /* conflict? 
*/\n        conflict = 1;\n        lh->v.u.s.info = extra;  /* previous assignment will use safe copy */\n      }\n      if (lh->v.u.s.aux == v->u.s.info) {  /* conflict? */\n        conflict = 1;\n        lh->v.u.s.aux = extra;  /* previous assignment will use safe copy */\n      }\n    }\n  }\n  if (conflict) {\n    luaK_codeABC(fs, OP_MOVE, fs->freereg, v->u.s.info, 0);  /* make copy */\n    luaK_reserveregs(fs, 1);\n  }\n}\n\n\nstatic void assignment (LexState *ls, struct LHS_assign *lh, int nvars) {\n  expdesc e;\n  check_condition(ls, VLOCAL <= lh->v.k && lh->v.k <= VINDEXED,\n                      \"syntax error\");\n  if (testnext(ls, ',')) {  /* assignment -> `,' primaryexp assignment */\n    struct LHS_assign nv;\n    nv.prev = lh;\n    primaryexp(ls, &nv.v);\n    if (nv.v.k == VLOCAL)\n      check_conflict(ls, lh, &nv.v);\n    luaY_checklimit(ls->fs, nvars, LUAI_MAXCCALLS - ls->L->nCcalls,\n                    \"variables in assignment\");\n    assignment(ls, &nv, nvars+1);\n  }\n  else {  /* assignment -> `=' explist1 */\n    int nexps;\n    checknext(ls, '=');\n    nexps = explist1(ls, &e);\n    if (nexps != nvars) {\n      adjust_assign(ls, nvars, nexps, &e);\n      if (nexps > nvars)\n        ls->fs->freereg -= nexps - nvars;  /* remove extra values */\n    }\n    else {\n      luaK_setoneret(ls->fs, &e);  /* close last expression */\n      luaK_storevar(ls->fs, &lh->v, &e);\n      return;  /* avoid default */\n    }\n  }\n  init_exp(&e, VNONRELOC, ls->fs->freereg-1);  /* default assignment */\n  luaK_storevar(ls->fs, &lh->v, &e);\n}\n\n\nstatic int cond (LexState *ls) {\n  /* cond -> exp */\n  expdesc v;\n  expr(ls, &v);  /* read condition */\n  if (v.k == VNIL) v.k = VFALSE;  /* `falses' are all equal here */\n  luaK_goiftrue(ls->fs, &v);\n  return v.f;\n}\n\n\nstatic void breakstat (LexState *ls) {\n  FuncState *fs = ls->fs;\n  BlockCnt *bl = fs->bl;\n  int upval = 0;\n  while (bl && !bl->isbreakable) {\n    upval |= bl->upval;\n    bl = 
bl->previous;\n  }\n  if (!bl)\n    luaX_syntaxerror(ls, \"no loop to break\");\n  if (upval)\n    luaK_codeABC(fs, OP_CLOSE, bl->nactvar, 0, 0);\n  luaK_concat(fs, &bl->breaklist, luaK_jump(fs));\n}\n\n\nstatic void whilestat (LexState *ls, int line) {\n  /* whilestat -> WHILE cond DO block END */\n  FuncState *fs = ls->fs;\n  int whileinit;\n  int condexit;\n  BlockCnt bl;\n  luaX_next(ls);  /* skip WHILE */\n  whileinit = luaK_getlabel(fs);\n  condexit = cond(ls);\n  enterblock(fs, &bl, 1);\n  checknext(ls, TK_DO);\n  block(ls);\n  luaK_patchlist(fs, luaK_jump(fs), whileinit);\n  check_match(ls, TK_END, TK_WHILE, line);\n  leaveblock(fs);\n  luaK_patchtohere(fs, condexit);  /* false conditions finish the loop */\n}\n\n\nstatic void repeatstat (LexState *ls, int line) {\n  /* repeatstat -> REPEAT block UNTIL cond */\n  int condexit;\n  FuncState *fs = ls->fs;\n  int repeat_init = luaK_getlabel(fs);\n  BlockCnt bl1, bl2;\n  enterblock(fs, &bl1, 1);  /* loop block */\n  enterblock(fs, &bl2, 0);  /* scope block */\n  luaX_next(ls);  /* skip REPEAT */\n  chunk(ls);\n  check_match(ls, TK_UNTIL, TK_REPEAT, line);\n  condexit = cond(ls);  /* read condition (inside scope block) */\n  if (!bl2.upval) {  /* no upvalues? */\n    leaveblock(fs);  /* finish scope */\n    luaK_patchlist(ls->fs, condexit, repeat_init);  /* close the loop */\n  }\n  else {  /* complete semantics when there are upvalues */\n    breakstat(ls);  /* if condition then break */\n    luaK_patchtohere(ls->fs, condexit);  /* else... */\n    leaveblock(fs);  /* finish scope... 
*/\n    luaK_patchlist(ls->fs, luaK_jump(fs), repeat_init);  /* and repeat */\n  }\n  leaveblock(fs);  /* finish loop */\n}\n\n\nstatic int exp1 (LexState *ls) {\n  expdesc e;\n  int k;\n  expr(ls, &e);\n  k = e.k;\n  luaK_exp2nextreg(ls->fs, &e);\n  return k;\n}\n\n\nstatic void forbody (LexState *ls, int base, int line, int nvars, int isnum) {\n  /* forbody -> DO block */\n  BlockCnt bl;\n  FuncState *fs = ls->fs;\n  int prep, endfor;\n  adjustlocalvars(ls, 3);  /* control variables */\n  checknext(ls, TK_DO);\n  prep = isnum ? luaK_codeAsBx(fs, OP_FORPREP, base, NO_JUMP) : luaK_jump(fs);\n  enterblock(fs, &bl, 0);  /* scope for declared variables */\n  adjustlocalvars(ls, nvars);\n  luaK_reserveregs(fs, nvars);\n  block(ls);\n  leaveblock(fs);  /* end of scope for declared variables */\n  luaK_patchtohere(fs, prep);\n  endfor = (isnum) ? luaK_codeAsBx(fs, OP_FORLOOP, base, NO_JUMP) :\n                     luaK_codeABC(fs, OP_TFORLOOP, base, 0, nvars);\n  luaK_fixline(fs, line);  /* pretend that `OP_FOR' starts the loop */\n  luaK_patchlist(fs, (isnum ? 
endfor : luaK_jump(fs)), prep + 1);\n}\n\n\nstatic void fornum (LexState *ls, TString *varname, int line) {\n  /* fornum -> NAME = exp1,exp1[,exp1] forbody */\n  FuncState *fs = ls->fs;\n  int base = fs->freereg;\n  new_localvarliteral(ls, \"(for index)\", 0);\n  new_localvarliteral(ls, \"(for limit)\", 1);\n  new_localvarliteral(ls, \"(for step)\", 2);\n  new_localvar(ls, varname, 3);\n  checknext(ls, '=');\n  exp1(ls);  /* initial value */\n  checknext(ls, ',');\n  exp1(ls);  /* limit */\n  if (testnext(ls, ','))\n    exp1(ls);  /* optional step */\n  else {  /* default step = 1 */\n    luaK_codeABx(fs, OP_LOADK, fs->freereg, luaK_numberK(fs, 1));\n    luaK_reserveregs(fs, 1);\n  }\n  forbody(ls, base, line, 1, 1);\n}\n\n\nstatic void forlist (LexState *ls, TString *indexname) {\n  /* forlist -> NAME {,NAME} IN explist1 forbody */\n  FuncState *fs = ls->fs;\n  expdesc e;\n  int nvars = 0;\n  int line;\n  int base = fs->freereg;\n  /* create control variables */\n  new_localvarliteral(ls, \"(for generator)\", nvars++);\n  new_localvarliteral(ls, \"(for state)\", nvars++);\n  new_localvarliteral(ls, \"(for control)\", nvars++);\n  /* create declared variables */\n  new_localvar(ls, indexname, nvars++);\n  while (testnext(ls, ','))\n    new_localvar(ls, str_checkname(ls), nvars++);\n  checknext(ls, TK_IN);\n  line = ls->linenumber;\n  adjust_assign(ls, 3, explist1(ls, &e), &e);\n  luaK_checkstack(fs, 3);  /* extra space to call generator */\n  forbody(ls, base, line, nvars - 3, 0);\n}\n\n\nstatic void forstat (LexState *ls, int line) {\n  /* forstat -> FOR (fornum | forlist) END */\n  FuncState *fs = ls->fs;\n  TString *varname;\n  BlockCnt bl;\n  enterblock(fs, &bl, 1);  /* scope for loop and control variables */\n  luaX_next(ls);  /* skip `for' */\n  varname = str_checkname(ls);  /* first variable name */\n  switch (ls->t.token) {\n    case '=': fornum(ls, varname, line); break;\n    case ',': case TK_IN: forlist(ls, varname); break;\n    default: 
luaX_syntaxerror(ls, LUA_QL(\"=\") \" or \" LUA_QL(\"in\") \" expected\");\n  }\n  check_match(ls, TK_END, TK_FOR, line);\n  leaveblock(fs);  /* loop scope (`break' jumps to this point) */\n}\n\n\nstatic int test_then_block (LexState *ls) {\n  /* test_then_block -> [IF | ELSEIF] cond THEN block */\n  int condexit;\n  luaX_next(ls);  /* skip IF or ELSEIF */\n  condexit = cond(ls);\n  checknext(ls, TK_THEN);\n  block(ls);  /* `then' part */\n  return condexit;\n}\n\n\nstatic void ifstat (LexState *ls, int line) {\n  /* ifstat -> IF cond THEN block {ELSEIF cond THEN block} [ELSE block] END */\n  FuncState *fs = ls->fs;\n  int flist;\n  int escapelist = NO_JUMP;\n  flist = test_then_block(ls);  /* IF cond THEN block */\n  while (ls->t.token == TK_ELSEIF) {\n    luaK_concat(fs, &escapelist, luaK_jump(fs));\n    luaK_patchtohere(fs, flist);\n    flist = test_then_block(ls);  /* ELSEIF cond THEN block */\n  }\n  if (ls->t.token == TK_ELSE) {\n    luaK_concat(fs, &escapelist, luaK_jump(fs));\n    luaK_patchtohere(fs, flist);\n    luaX_next(ls);  /* skip ELSE (after patch, for correct line info) */\n    block(ls);  /* `else' part */\n  }\n  else\n    luaK_concat(fs, &escapelist, flist);\n  luaK_patchtohere(fs, escapelist);\n  check_match(ls, TK_END, TK_IF, line);\n}\n\n\nstatic void localfunc (LexState *ls) {\n  expdesc v, b;\n  FuncState *fs = ls->fs;\n  new_localvar(ls, str_checkname(ls), 0);\n  init_exp(&v, VLOCAL, fs->freereg);\n  luaK_reserveregs(fs, 1);\n  adjustlocalvars(ls, 1);\n  body(ls, &b, 0, ls->linenumber);\n  luaK_storevar(fs, &v, &b);\n  /* debug information will only see the variable after this point! 
*/\n  getlocvar(fs, fs->nactvar - 1).startpc = fs->pc;\n}\n\n\nstatic void localstat (LexState *ls) {\n  /* stat -> LOCAL NAME {`,' NAME} [`=' explist1] */\n  int nvars = 0;\n  int nexps;\n  expdesc e;\n  do {\n    new_localvar(ls, str_checkname(ls), nvars++);\n  } while (testnext(ls, ','));\n  if (testnext(ls, '='))\n    nexps = explist1(ls, &e);\n  else {\n    e.k = VVOID;\n    nexps = 0;\n  }\n  adjust_assign(ls, nvars, nexps, &e);\n  adjustlocalvars(ls, nvars);\n}\n\n\nstatic int funcname (LexState *ls, expdesc *v) {\n  /* funcname -> NAME {field} [`:' NAME] */\n  int needself = 0;\n  singlevar(ls, v);\n  while (ls->t.token == '.')\n    field(ls, v);\n  if (ls->t.token == ':') {\n    needself = 1;\n    field(ls, v);\n  }\n  return needself;\n}\n\n\nstatic void funcstat (LexState *ls, int line) {\n  /* funcstat -> FUNCTION funcname body */\n  int needself;\n  expdesc v, b;\n  luaX_next(ls);  /* skip FUNCTION */\n  needself = funcname(ls, &v);\n  body(ls, &b, needself, line);\n  luaK_storevar(ls->fs, &v, &b);\n  luaK_fixline(ls->fs, line);  /* definition `happens' in the first line */\n}\n\n\nstatic void exprstat (LexState *ls) {\n  /* stat -> func | assignment */\n  FuncState *fs = ls->fs;\n  struct LHS_assign v;\n  primaryexp(ls, &v.v);\n  if (v.v.k == VCALL)  /* stat -> func */\n    SETARG_C(getcode(fs, &v.v), 1);  /* call statement uses no results */\n  else {  /* stat -> assignment */\n    v.prev = NULL;\n    assignment(ls, &v, 1);\n  }\n}\n\n\nstatic void retstat (LexState *ls) {\n  /* stat -> RETURN explist */\n  FuncState *fs = ls->fs;\n  expdesc e;\n  int first, nret;  /* registers with returned values */\n  luaX_next(ls);  /* skip RETURN */\n  if (block_follow(ls->t.token) || ls->t.token == ';')\n    first = nret = 0;  /* return no values */\n  else {\n    nret = explist1(ls, &e);  /* optional return values */\n    if (hasmultret(e.k)) {\n      luaK_setmultret(fs, &e);\n      if (e.k == VCALL && nret == 1) {  /* tail call? 
*/\n        SET_OPCODE(getcode(fs,&e), OP_TAILCALL);\n        lua_assert(GETARG_A(getcode(fs,&e)) == fs->nactvar);\n      }\n      first = fs->nactvar;\n      nret = LUA_MULTRET;  /* return all values */\n    }\n    else {\n      if (nret == 1)  /* only one single value? */\n        first = luaK_exp2anyreg(fs, &e);\n      else {\n        luaK_exp2nextreg(fs, &e);  /* values must go to the `stack' */\n        first = fs->nactvar;  /* return all `active' values */\n        lua_assert(nret == fs->freereg - first);\n      }\n    }\n  }\n  luaK_ret(fs, first, nret);\n}\n\n\nstatic int statement (LexState *ls) {\n  int line = ls->linenumber;  /* may be needed for error messages */\n  switch (ls->t.token) {\n    case TK_IF: {  /* stat -> ifstat */\n      ifstat(ls, line);\n      return 0;\n    }\n    case TK_WHILE: {  /* stat -> whilestat */\n      whilestat(ls, line);\n      return 0;\n    }\n    case TK_DO: {  /* stat -> DO block END */\n      luaX_next(ls);  /* skip DO */\n      block(ls);\n      check_match(ls, TK_END, TK_DO, line);\n      return 0;\n    }\n    case TK_FOR: {  /* stat -> forstat */\n      forstat(ls, line);\n      return 0;\n    }\n    case TK_REPEAT: {  /* stat -> repeatstat */\n      repeatstat(ls, line);\n      return 0;\n    }\n    case TK_FUNCTION: {\n      funcstat(ls, line);  /* stat -> funcstat */\n      return 0;\n    }\n    case TK_LOCAL: {  /* stat -> localstat */\n      luaX_next(ls);  /* skip LOCAL */\n      if (testnext(ls, TK_FUNCTION))  /* local function? 
*/\n        localfunc(ls);\n      else\n        localstat(ls);\n      return 0;\n    }\n    case TK_RETURN: {  /* stat -> retstat */\n      retstat(ls);\n      return 1;  /* must be last statement */\n    }\n    case TK_BREAK: {  /* stat -> breakstat */\n      luaX_next(ls);  /* skip BREAK */\n      breakstat(ls);\n      return 1;  /* must be last statement */\n    }\n    default: {\n      exprstat(ls);\n      return 0;  /* to avoid warnings */\n    }\n  }\n}\n\n\nstatic void chunk (LexState *ls) {\n  /* chunk -> { stat [`;'] } */\n  int islast = 0;\n  enterlevel(ls);\n  while (!islast && !block_follow(ls->t.token)) {\n    islast = statement(ls);\n    testnext(ls, ';');\n    lua_assert(ls->fs->f->maxstacksize >= ls->fs->freereg &&\n               ls->fs->freereg >= ls->fs->nactvar);\n    ls->fs->freereg = ls->fs->nactvar;  /* free registers */\n  }\n  leavelevel(ls);\n}\n\n/* }====================================================================== */\n"
  },
  {
    "path": "deps/lua/src/lparser.h",
    "content": "/*\n** $Id: lparser.h,v 1.57.1.1 2007/12/27 13:02:25 roberto Exp $\n** Lua Parser\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lparser_h\n#define lparser_h\n\n#include \"llimits.h\"\n#include \"lobject.h\"\n#include \"lzio.h\"\n\n\n/*\n** Expression descriptor\n*/\n\ntypedef enum {\n  VVOID,\t/* no value */\n  VNIL,\n  VTRUE,\n  VFALSE,\n  VK,\t\t/* info = index of constant in `k' */\n  VKNUM,\t/* nval = numerical value */\n  VLOCAL,\t/* info = local register */\n  VUPVAL,       /* info = index of upvalue in `upvalues' */\n  VGLOBAL,\t/* info = index of table; aux = index of global name in `k' */\n  VINDEXED,\t/* info = table register; aux = index register (or `k') */\n  VJMP,\t\t/* info = instruction pc */\n  VRELOCABLE,\t/* info = instruction pc */\n  VNONRELOC,\t/* info = result register */\n  VCALL,\t/* info = instruction pc */\n  VVARARG\t/* info = instruction pc */\n} expkind;\n\ntypedef struct expdesc {\n  expkind k;\n  union {\n    struct { int info, aux; } s;\n    lua_Number nval;\n  } u;\n  int t;  /* patch list of `exit when true' */\n  int f;  /* patch list of `exit when false' */\n} expdesc;\n\n\ntypedef struct upvaldesc {\n  lu_byte k;\n  lu_byte info;\n} upvaldesc;\n\n\nstruct BlockCnt;  /* defined in lparser.c */\n\n\n/* state needed to generate code for a given function */\ntypedef struct FuncState {\n  Proto *f;  /* current function header */\n  Table *h;  /* table to find (and reuse) elements in `k' */\n  struct FuncState *prev;  /* enclosing function */\n  struct LexState *ls;  /* lexical state */\n  struct lua_State *L;  /* copy of the Lua state */\n  struct BlockCnt *bl;  /* chain of current blocks */\n  int pc;  /* next position to code (equivalent to `ncode') */\n  int lasttarget;   /* `pc' of last `jump target' */\n  int jpc;  /* list of pending jumps to `pc' */\n  int freereg;  /* first free register */\n  int nk;  /* number of elements in `k' */\n  int np;  /* number of elements in `p' */\n  short nlocvars;  /* number 
of elements in `locvars' */\n  lu_byte nactvar;  /* number of active local variables */\n  upvaldesc upvalues[LUAI_MAXUPVALUES];  /* upvalues */\n  unsigned short actvar[LUAI_MAXVARS];  /* declared-variable stack */\n} FuncState;\n\n\nLUAI_FUNC Proto *luaY_parser (lua_State *L, ZIO *z, Mbuffer *buff,\n                                            const char *name);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lstate.c",
    "content": "/*\n** $Id: lstate.c,v 2.36.1.2 2008/01/03 15:20:39 roberto Exp $\n** Global State\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stddef.h>\n\n#define lstate_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"llex.h\"\n#include \"lmem.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n\n\n#define state_size(x)\t(sizeof(x) + LUAI_EXTRASPACE)\n#define fromstate(l)\t(cast(lu_byte *, (l)) - LUAI_EXTRASPACE)\n#define tostate(l)   (cast(lua_State *, cast(lu_byte *, l) + LUAI_EXTRASPACE))\n\n\n/*\n** Main thread combines a thread state and the global state\n*/\ntypedef struct LG {\n  lua_State l;\n  global_State g;\n} LG;\n  \n\n\nstatic void stack_init (lua_State *L1, lua_State *L) {\n  /* initialize CallInfo array */\n  L1->base_ci = luaM_newvector(L, BASIC_CI_SIZE, CallInfo);\n  L1->ci = L1->base_ci;\n  L1->size_ci = BASIC_CI_SIZE;\n  L1->end_ci = L1->base_ci + L1->size_ci - 1;\n  /* initialize stack array */\n  L1->stack = luaM_newvector(L, BASIC_STACK_SIZE + EXTRA_STACK, TValue);\n  L1->stacksize = BASIC_STACK_SIZE + EXTRA_STACK;\n  L1->top = L1->stack;\n  L1->stack_last = L1->stack+(L1->stacksize - EXTRA_STACK)-1;\n  /* initialize first ci */\n  L1->ci->func = L1->top;\n  setnilvalue(L1->top++);  /* `function' entry for this `ci' */\n  L1->base = L1->ci->base = L1->top;\n  L1->ci->top = L1->top + LUA_MINSTACK;\n}\n\n\nstatic void freestack (lua_State *L, lua_State *L1) {\n  luaM_freearray(L, L1->base_ci, L1->size_ci, CallInfo);\n  luaM_freearray(L, L1->stack, L1->stacksize, TValue);\n}\n\n\n/*\n** open parts that may cause memory-allocation errors\n*/\nstatic void f_luaopen (lua_State *L, void *ud) {\n  global_State *g = G(L);\n  UNUSED(ud);\n  stack_init(L, L);  /* init stack */\n  sethvalue(L, gt(L), luaH_new(L, 0, 2));  /* table of globals */\n  sethvalue(L, registry(L), luaH_new(L, 0, 2));  /* registry */\n  
luaS_resize(L, MINSTRTABSIZE);  /* initial size of string table */\n  luaT_init(L);\n  luaX_init(L);\n  luaS_fix(luaS_newliteral(L, MEMERRMSG));\n  g->GCthreshold = 4*g->totalbytes;\n}\n\n\nstatic void preinit_state (lua_State *L, global_State *g) {\n  G(L) = g;\n  L->stack = NULL;\n  L->stacksize = 0;\n  L->errorJmp = NULL;\n  L->hook = NULL;\n  L->hookmask = 0;\n  L->basehookcount = 0;\n  L->allowhook = 1;\n  resethookcount(L);\n  L->openupval = NULL;\n  L->size_ci = 0;\n  L->nCcalls = L->baseCcalls = 0;\n  L->status = 0;\n  L->base_ci = L->ci = NULL;\n  L->savedpc = NULL;\n  L->errfunc = 0;\n  setnilvalue(gt(L));\n}\n\n\nstatic void close_state (lua_State *L) {\n  global_State *g = G(L);\n  luaF_close(L, L->stack);  /* close all upvalues for this thread */\n  luaC_freeall(L);  /* collect all objects */\n  lua_assert(g->rootgc == obj2gco(L));\n  lua_assert(g->strt.nuse == 0);\n  luaM_freearray(L, G(L)->strt.hash, G(L)->strt.size, TString *);\n  luaZ_freebuffer(L, &g->buff);\n  freestack(L, L);\n  lua_assert(g->totalbytes == sizeof(LG));\n  (*g->frealloc)(g->ud, fromstate(L), state_size(LG), 0);\n}\n\n\nlua_State *luaE_newthread (lua_State *L) {\n  lua_State *L1 = tostate(luaM_malloc(L, state_size(lua_State)));\n  luaC_link(L, obj2gco(L1), LUA_TTHREAD);\n  preinit_state(L1, G(L));\n  stack_init(L1, L);  /* init stack */\n  setobj2n(L, gt(L1), gt(L));  /* share table of globals */\n  L1->hookmask = L->hookmask;\n  L1->basehookcount = L->basehookcount;\n  L1->hook = L->hook;\n  resethookcount(L1);\n  lua_assert(iswhite(obj2gco(L1)));\n  return L1;\n}\n\n\nvoid luaE_freethread (lua_State *L, lua_State *L1) {\n  luaF_close(L1, L1->stack);  /* close all upvalues for this thread */\n  lua_assert(L1->openupval == NULL);\n  luai_userstatefree(L1);\n  freestack(L, L1);\n  luaM_freemem(L, fromstate(L1), state_size(lua_State));\n}\n\n\nLUA_API lua_State *lua_newstate (lua_Alloc f, void *ud) {\n  int i;\n  lua_State *L;\n  global_State *g;\n  void *l = (*f)(ud, NULL, 0, 
state_size(LG));\n  if (l == NULL) return NULL;\n  L = tostate(l);\n  g = &((LG *)L)->g;\n  L->next = NULL;\n  L->tt = LUA_TTHREAD;\n  g->currentwhite = bit2mask(WHITE0BIT, FIXEDBIT);\n  L->marked = luaC_white(g);\n  set2bits(L->marked, FIXEDBIT, SFIXEDBIT);\n  preinit_state(L, g);\n  g->frealloc = f;\n  g->ud = ud;\n  g->mainthread = L;\n  g->uvhead.u.l.prev = &g->uvhead;\n  g->uvhead.u.l.next = &g->uvhead;\n  g->GCthreshold = 0;  /* mark it as unfinished state */\n  g->strt.size = 0;\n  g->strt.nuse = 0;\n  g->strt.hash = NULL;\n  setnilvalue(registry(L));\n  luaZ_initbuffer(L, &g->buff);\n  g->panic = NULL;\n  g->gcstate = GCSpause;\n  g->rootgc = obj2gco(L);\n  g->sweepstrgc = 0;\n  g->sweepgc = &g->rootgc;\n  g->gray = NULL;\n  g->grayagain = NULL;\n  g->weak = NULL;\n  g->tmudata = NULL;\n  g->totalbytes = sizeof(LG);\n  g->gcpause = LUAI_GCPAUSE;\n  g->gcstepmul = LUAI_GCMUL;\n  g->gcdept = 0;\n  for (i=0; i<NUM_TAGS; i++) g->mt[i] = NULL;\n  if (luaD_rawrunprotected(L, f_luaopen, NULL) != 0) {\n    /* memory allocation error: free partial state */\n    close_state(L);\n    L = NULL;\n  }\n  else\n    luai_userstateopen(L);\n  return L;\n}\n\n\nstatic void callallgcTM (lua_State *L, void *ud) {\n  UNUSED(ud);\n  luaC_callGCTM(L);  /* call GC metamethods for all udata */\n}\n\n\nLUA_API void lua_close (lua_State *L) {\n  L = G(L)->mainthread;  /* only the main thread can be closed */\n  lua_lock(L);\n  luaF_close(L, L->stack);  /* close all upvalues for this thread */\n  luaC_separateudata(L, 1);  /* separate udata that have GC metamethods */\n  L->errfunc = 0;  /* no error function during GC metamethods */\n  do {  /* repeat until no more errors */\n    L->ci = L->base_ci;\n    L->base = L->top = L->ci->base;\n    L->nCcalls = L->baseCcalls = 0;\n  } while (luaD_rawrunprotected(L, callallgcTM, NULL) != 0);\n  lua_assert(G(L)->tmudata == NULL);\n  luai_userstateclose(L);\n  close_state(L);\n}\n\n"
  },
  {
    "path": "deps/lua/src/lstate.h",
    "content": "/*\n** $Id: lstate.h,v 2.24.1.2 2008/01/03 15:20:39 roberto Exp $\n** Global State\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lstate_h\n#define lstate_h\n\n#include \"lua.h\"\n\n#include \"lobject.h\"\n#include \"ltm.h\"\n#include \"lzio.h\"\n\n\n\nstruct lua_longjmp;  /* defined in ldo.c */\n\n\n/* table of globals */\n#define gt(L)\t(&L->l_gt)\n\n/* registry */\n#define registry(L)\t(&G(L)->l_registry)\n\n\n/* extra stack space to handle TM calls and some other extras */\n#define EXTRA_STACK   5\n\n\n#define BASIC_CI_SIZE           8\n\n#define BASIC_STACK_SIZE        (2*LUA_MINSTACK)\n\n\n\ntypedef struct stringtable {\n  GCObject **hash;\n  lu_int32 nuse;  /* number of elements */\n  int size;\n} stringtable;\n\n\n/*\n** informations about a call\n*/\ntypedef struct CallInfo {\n  StkId base;  /* base for this function */\n  StkId func;  /* function index in the stack */\n  StkId\ttop;  /* top for this function */\n  const Instruction *savedpc;\n  int nresults;  /* expected number of results from this function */\n  int tailcalls;  /* number of tail calls lost under this entry */\n} CallInfo;\n\n\n\n#define curr_func(L)\t(clvalue(L->ci->func))\n#define ci_func(ci)\t(clvalue((ci)->func))\n#define f_isLua(ci)\t(!ci_func(ci)->c.isC)\n#define isLua(ci)\t(ttisfunction((ci)->func) && f_isLua(ci))\n\n\n/*\n** `global state', shared by all threads of this state\n*/\ntypedef struct global_State {\n  stringtable strt;  /* hash table for strings */\n  lua_Alloc frealloc;  /* function to reallocate memory */\n  void *ud;         /* auxiliary data to `frealloc' */\n  lu_byte currentwhite;\n  lu_byte gcstate;  /* state of garbage collector */\n  int sweepstrgc;  /* position of sweep in `strt' */\n  GCObject *rootgc;  /* list of all collectable objects */\n  GCObject **sweepgc;  /* position of sweep in `rootgc' */\n  GCObject *gray;  /* list of gray objects */\n  GCObject *grayagain;  /* list of objects to be traversed atomically */\n  GCObject *weak;  /* 
list of weak tables (to be cleared) */\n  GCObject *tmudata;  /* last element of list of userdata to be GC */\n  Mbuffer buff;  /* temporary buffer for string concatentation */\n  lu_mem GCthreshold;\n  lu_mem totalbytes;  /* number of bytes currently allocated */\n  lu_mem estimate;  /* an estimate of number of bytes actually in use */\n  lu_mem gcdept;  /* how much GC is `behind schedule' */\n  int gcpause;  /* size of pause between successive GCs */\n  int gcstepmul;  /* GC `granularity' */\n  lua_CFunction panic;  /* to be called in unprotected errors */\n  TValue l_registry;\n  struct lua_State *mainthread;\n  UpVal uvhead;  /* head of double-linked list of all open upvalues */\n  struct Table *mt[NUM_TAGS];  /* metatables for basic types */\n  TString *tmname[TM_N];  /* array with tag-method names */\n} global_State;\n\n\n/*\n** `per thread' state\n*/\nstruct lua_State {\n  CommonHeader;\n  lu_byte status;\n  StkId top;  /* first free slot in the stack */\n  StkId base;  /* base of current function */\n  global_State *l_G;\n  CallInfo *ci;  /* call info for current function */\n  const Instruction *savedpc;  /* `savedpc' of current function */\n  StkId stack_last;  /* last free slot in the stack */\n  StkId stack;  /* stack base */\n  CallInfo *end_ci;  /* points after end of ci array*/\n  CallInfo *base_ci;  /* array of CallInfo's */\n  int stacksize;\n  int size_ci;  /* size of array `base_ci' */\n  unsigned short nCcalls;  /* number of nested C calls */\n  unsigned short baseCcalls;  /* nested C calls when resuming coroutine */\n  lu_byte hookmask;\n  lu_byte allowhook;\n  int basehookcount;\n  int hookcount;\n  lua_Hook hook;\n  TValue l_gt;  /* table of globals */\n  TValue env;  /* temporary place for environments */\n  GCObject *openupval;  /* list of open upvalues in this stack */\n  GCObject *gclist;\n  struct lua_longjmp *errorJmp;  /* current error recover point */\n  ptrdiff_t errfunc;  /* current error handling function (stack index) 
*/\n};\n\n\n#define G(L)\t(L->l_G)\n\n\n/*\n** Union of all collectable objects\n*/\nunion GCObject {\n  GCheader gch;\n  union TString ts;\n  union Udata u;\n  union Closure cl;\n  struct Table h;\n  struct Proto p;\n  struct UpVal uv;\n  struct lua_State th;  /* thread */\n};\n\n\n/* macros to convert a GCObject into a specific value */\n#define rawgco2ts(o)\tcheck_exp((o)->gch.tt == LUA_TSTRING, &((o)->ts))\n#define gco2ts(o)\t(&rawgco2ts(o)->tsv)\n#define rawgco2u(o)\tcheck_exp((o)->gch.tt == LUA_TUSERDATA, &((o)->u))\n#define gco2u(o)\t(&rawgco2u(o)->uv)\n#define gco2cl(o)\tcheck_exp((o)->gch.tt == LUA_TFUNCTION, &((o)->cl))\n#define gco2h(o)\tcheck_exp((o)->gch.tt == LUA_TTABLE, &((o)->h))\n#define gco2p(o)\tcheck_exp((o)->gch.tt == LUA_TPROTO, &((o)->p))\n#define gco2uv(o)\tcheck_exp((o)->gch.tt == LUA_TUPVAL, &((o)->uv))\n#define ngcotouv(o) \\\n\tcheck_exp((o) == NULL || (o)->gch.tt == LUA_TUPVAL, &((o)->uv))\n#define gco2th(o)\tcheck_exp((o)->gch.tt == LUA_TTHREAD, &((o)->th))\n\n/* macro to convert any Lua object into a GCObject */\n#define obj2gco(v)\t(cast(GCObject *, (v)))\n\n\nLUAI_FUNC lua_State *luaE_newthread (lua_State *L);\nLUAI_FUNC void luaE_freethread (lua_State *L, lua_State *L1);\n\n#endif\n\n"
  },
  {
    "path": "deps/lua/src/lstring.c",
    "content": "/*\n** $Id: lstring.c,v 2.8.1.1 2007/12/27 13:02:25 roberto Exp $\n** String table (keeps all strings handled by Lua)\n** See Copyright Notice in lua.h\n*/\n\n\n#include <string.h>\n\n#define lstring_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n\n\n\nvoid luaS_resize (lua_State *L, int newsize) {\n  GCObject **newhash;\n  stringtable *tb;\n  int i;\n  if (G(L)->gcstate == GCSsweepstring)\n    return;  /* cannot resize during GC traverse */\n  newhash = luaM_newvector(L, newsize, GCObject *);\n  tb = &G(L)->strt;\n  for (i=0; i<newsize; i++) newhash[i] = NULL;\n  /* rehash */\n  for (i=0; i<tb->size; i++) {\n    GCObject *p = tb->hash[i];\n    while (p) {  /* for each node in the list */\n      GCObject *next = p->gch.next;  /* save next */\n      unsigned int h = gco2ts(p)->hash;\n      int h1 = lmod(h, newsize);  /* new position */\n      lua_assert(cast_int(h%newsize) == lmod(h, newsize));\n      p->gch.next = newhash[h1];  /* chain it */\n      newhash[h1] = p;\n      p = next;\n    }\n  }\n  luaM_freearray(L, tb->hash, tb->size, TString *);\n  tb->size = newsize;\n  tb->hash = newhash;\n}\n\n\nstatic TString *newlstr (lua_State *L, const char *str, size_t l,\n                                       unsigned int h) {\n  TString *ts;\n  stringtable *tb;\n  if (l+1 > (MAX_SIZET - sizeof(TString))/sizeof(char))\n    luaM_toobig(L);\n  ts = cast(TString *, luaM_malloc(L, (l+1)*sizeof(char)+sizeof(TString)));\n  ts->tsv.len = l;\n  ts->tsv.hash = h;\n  ts->tsv.marked = luaC_white(G(L));\n  ts->tsv.tt = LUA_TSTRING;\n  ts->tsv.reserved = 0;\n  memcpy(ts+1, str, l*sizeof(char));\n  ((char *)(ts+1))[l] = '\\0';  /* ending 0 */\n  tb = &G(L)->strt;\n  h = lmod(h, tb->size);\n  ts->tsv.next = tb->hash[h];  /* chain new entry */\n  tb->hash[h] = obj2gco(ts);\n  tb->nuse++;\n  if (tb->nuse > cast(lu_int32, tb->size) && tb->size <= MAX_INT/2)\n    luaS_resize(L, 
tb->size*2);  /* too crowded */\n  return ts;\n}\n\n\nTString *luaS_newlstr (lua_State *L, const char *str, size_t l) {\n  GCObject *o;\n  unsigned int h = cast(unsigned int, l);  /* seed */\n  size_t step = 1;\n  size_t l1;\n  for (l1=l; l1>=step; l1-=step)  /* compute hash */\n    h = h ^ ((h<<5)+(h>>2)+cast(unsigned char, str[l1-1]));\n  for (o = G(L)->strt.hash[lmod(h, G(L)->strt.size)];\n       o != NULL;\n       o = o->gch.next) {\n    TString *ts = rawgco2ts(o);\n    if (ts->tsv.len == l && (memcmp(str, getstr(ts), l) == 0)) {\n      /* string may be dead */\n      if (isdead(G(L), o)) changewhite(o);\n      return ts;\n    }\n  }\n  return newlstr(L, str, l, h);  /* not found */\n}\n\n\nUdata *luaS_newudata (lua_State *L, size_t s, Table *e) {\n  Udata *u;\n  if (s > MAX_SIZET - sizeof(Udata))\n    luaM_toobig(L);\n  u = cast(Udata *, luaM_malloc(L, s + sizeof(Udata)));\n  u->uv.marked = luaC_white(G(L));  /* is not finalized */\n  u->uv.tt = LUA_TUSERDATA;\n  u->uv.len = s;\n  u->uv.metatable = NULL;\n  u->uv.env = e;\n  /* chain it on udata list (after main thread) */\n  u->uv.next = G(L)->mainthread->next;\n  G(L)->mainthread->next = obj2gco(u);\n  return u;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lstring.h",
    "content": "/*\n** $Id: lstring.h,v 1.43.1.1 2007/12/27 13:02:25 roberto Exp $\n** String table (keep all strings handled by Lua)\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lstring_h\n#define lstring_h\n\n\n#include \"lgc.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n\n\n#define sizestring(s)\t(sizeof(union TString)+((s)->len+1)*sizeof(char))\n\n#define sizeudata(u)\t(sizeof(union Udata)+(u)->len)\n\n#define luaS_new(L, s)\t(luaS_newlstr(L, s, strlen(s)))\n#define luaS_newliteral(L, s)\t(luaS_newlstr(L, \"\" s, \\\n                                 (sizeof(s)/sizeof(char))-1))\n\n#define luaS_fix(s)\tl_setbit((s)->tsv.marked, FIXEDBIT)\n\nLUAI_FUNC void luaS_resize (lua_State *L, int newsize);\nLUAI_FUNC Udata *luaS_newudata (lua_State *L, size_t s, Table *e);\nLUAI_FUNC TString *luaS_newlstr (lua_State *L, const char *str, size_t l);\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lstrlib.c",
    "content": "/*\n** $Id: lstrlib.c,v 1.132.1.5 2010/05/14 15:34:19 roberto Exp $\n** Standard library for string operations and pattern-matching\n** See Copyright Notice in lua.h\n*/\n\n\n#include <ctype.h>\n#include <stddef.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define lstrlib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n/* macro to `unsign' a character */\n#define uchar(c)        ((unsigned char)(c))\n\n\n\nstatic int str_len (lua_State *L) {\n  size_t l;\n  luaL_checklstring(L, 1, &l);\n  lua_pushinteger(L, l);\n  return 1;\n}\n\n\nstatic ptrdiff_t posrelat (ptrdiff_t pos, size_t len) {\n  /* relative string position: negative means back from end */\n  if (pos < 0) pos += (ptrdiff_t)len + 1;\n  return (pos >= 0) ? pos : 0;\n}\n\n\nstatic int str_sub (lua_State *L) {\n  size_t l;\n  const char *s = luaL_checklstring(L, 1, &l);\n  ptrdiff_t start = posrelat(luaL_checkinteger(L, 2), l);\n  ptrdiff_t end = posrelat(luaL_optinteger(L, 3, -1), l);\n  if (start < 1) start = 1;\n  if (end > (ptrdiff_t)l) end = (ptrdiff_t)l;\n  if (start <= end)\n    lua_pushlstring(L, s+start-1, end-start+1);\n  else lua_pushliteral(L, \"\");\n  return 1;\n}\n\n\nstatic int str_reverse (lua_State *L) {\n  size_t l;\n  luaL_Buffer b;\n  const char *s = luaL_checklstring(L, 1, &l);\n  luaL_buffinit(L, &b);\n  while (l--) luaL_addchar(&b, s[l]);\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic int str_lower (lua_State *L) {\n  size_t l;\n  size_t i;\n  luaL_Buffer b;\n  const char *s = luaL_checklstring(L, 1, &l);\n  luaL_buffinit(L, &b);\n  for (i=0; i<l; i++)\n    luaL_addchar(&b, tolower(uchar(s[i])));\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic int str_upper (lua_State *L) {\n  size_t l;\n  size_t i;\n  luaL_Buffer b;\n  const char *s = luaL_checklstring(L, 1, &l);\n  luaL_buffinit(L, &b);\n  for (i=0; i<l; i++)\n    luaL_addchar(&b, toupper(uchar(s[i])));\n  luaL_pushresult(&b);\n  return 
1;\n}\n\nstatic int str_rep (lua_State *L) {\n  size_t l;\n  luaL_Buffer b;\n  const char *s = luaL_checklstring(L, 1, &l);\n  int n = luaL_checkint(L, 2);\n  luaL_buffinit(L, &b);\n  while (n-- > 0)\n    luaL_addlstring(&b, s, l);\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic int str_byte (lua_State *L) {\n  size_t l;\n  const char *s = luaL_checklstring(L, 1, &l);\n  ptrdiff_t posi = posrelat(luaL_optinteger(L, 2, 1), l);\n  ptrdiff_t pose = posrelat(luaL_optinteger(L, 3, posi), l);\n  int n, i;\n  if (posi <= 0) posi = 1;\n  if ((size_t)pose > l) pose = l;\n  if (posi > pose) return 0;  /* empty interval; return no values */\n  n = (int)(pose -  posi + 1);\n  if (posi + n <= pose)  /* overflow? */\n    luaL_error(L, \"string slice too long\");\n  luaL_checkstack(L, n, \"string slice too long\");\n  for (i=0; i<n; i++)\n    lua_pushinteger(L, uchar(s[posi+i-1]));\n  return n;\n}\n\n\nstatic int str_char (lua_State *L) {\n  int n = lua_gettop(L);  /* number of arguments */\n  int i;\n  luaL_Buffer b;\n  luaL_buffinit(L, &b);\n  for (i=1; i<=n; i++) {\n    int c = luaL_checkint(L, i);\n    luaL_argcheck(L, uchar(c) == c, i, \"invalid value\");\n    luaL_addchar(&b, uchar(c));\n  }\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic int writer (lua_State *L, const void* b, size_t size, void* B) {\n  (void)L;\n  luaL_addlstring((luaL_Buffer*) B, (const char *)b, size);\n  return 0;\n}\n\n\nstatic int str_dump (lua_State *L) {\n  luaL_Buffer b;\n  luaL_checktype(L, 1, LUA_TFUNCTION);\n  lua_settop(L, 1);\n  luaL_buffinit(L,&b);\n  if (lua_dump(L, writer, &b) != 0)\n    luaL_error(L, \"unable to dump given function\");\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\n\n/*\n** {======================================================\n** PATTERN MATCHING\n** =======================================================\n*/\n\n\n#define CAP_UNFINISHED\t(-1)\n#define CAP_POSITION\t(-2)\n\ntypedef struct MatchState {\n  const char *src_init;  /* init of source string */\n  const 
char *src_end;  /* end (`\\0') of source string */\n  lua_State *L;\n  int level;  /* total number of captures (finished or unfinished) */\n  struct {\n    const char *init;\n    ptrdiff_t len;\n  } capture[LUA_MAXCAPTURES];\n} MatchState;\n\n\n#define L_ESC\t\t'%'\n#define SPECIALS\t\"^$*+?.([%-\"\n\n\nstatic int check_capture (MatchState *ms, int l) {\n  l -= '1';\n  if (l < 0 || l >= ms->level || ms->capture[l].len == CAP_UNFINISHED)\n    return luaL_error(ms->L, \"invalid capture index\");\n  return l;\n}\n\n\nstatic int capture_to_close (MatchState *ms) {\n  int level = ms->level;\n  for (level--; level>=0; level--)\n    if (ms->capture[level].len == CAP_UNFINISHED) return level;\n  return luaL_error(ms->L, \"invalid pattern capture\");\n}\n\n\nstatic const char *classend (MatchState *ms, const char *p) {\n  switch (*p++) {\n    case L_ESC: {\n      if (*p == '\\0')\n        luaL_error(ms->L, \"malformed pattern (ends with \" LUA_QL(\"%%\") \")\");\n      return p+1;\n    }\n    case '[': {\n      if (*p == '^') p++;\n      do {  /* look for a `]' */\n        if (*p == '\\0')\n          luaL_error(ms->L, \"malformed pattern (missing \" LUA_QL(\"]\") \")\");\n        if (*(p++) == L_ESC && *p != '\\0')\n          p++;  /* skip escapes (e.g. `%]') */\n      } while (*p != ']');\n      return p+1;\n    }\n    default: {\n      return p;\n    }\n  }\n}\n\n\nstatic int match_class (int c, int cl) {\n  int res;\n  switch (tolower(cl)) {\n    case 'a' : res = isalpha(c); break;\n    case 'c' : res = iscntrl(c); break;\n    case 'd' : res = isdigit(c); break;\n    case 'l' : res = islower(c); break;\n    case 'p' : res = ispunct(c); break;\n    case 's' : res = isspace(c); break;\n    case 'u' : res = isupper(c); break;\n    case 'w' : res = isalnum(c); break;\n    case 'x' : res = isxdigit(c); break;\n    case 'z' : res = (c == 0); break;\n    default: return (cl == c);\n  }\n  return (islower(cl) ? 
res : !res);\n}\n\n\nstatic int matchbracketclass (int c, const char *p, const char *ec) {\n  int sig = 1;\n  if (*(p+1) == '^') {\n    sig = 0;\n    p++;  /* skip the `^' */\n  }\n  while (++p < ec) {\n    if (*p == L_ESC) {\n      p++;\n      if (match_class(c, uchar(*p)))\n        return sig;\n    }\n    else if ((*(p+1) == '-') && (p+2 < ec)) {\n      p+=2;\n      if (uchar(*(p-2)) <= c && c <= uchar(*p))\n        return sig;\n    }\n    else if (uchar(*p) == c) return sig;\n  }\n  return !sig;\n}\n\n\nstatic int singlematch (int c, const char *p, const char *ep) {\n  switch (*p) {\n    case '.': return 1;  /* matches any char */\n    case L_ESC: return match_class(c, uchar(*(p+1)));\n    case '[': return matchbracketclass(c, p, ep-1);\n    default:  return (uchar(*p) == c);\n  }\n}\n\n\nstatic const char *match (MatchState *ms, const char *s, const char *p);\n\n\nstatic const char *matchbalance (MatchState *ms, const char *s,\n                                   const char *p) {\n  if (*p == 0 || *(p+1) == 0)\n    luaL_error(ms->L, \"unbalanced pattern\");\n  if (*s != *p) return NULL;\n  else {\n    int b = *p;\n    int e = *(p+1);\n    int cont = 1;\n    while (++s < ms->src_end) {\n      if (*s == e) {\n        if (--cont == 0) return s+1;\n      }\n      else if (*s == b) cont++;\n    }\n  }\n  return NULL;  /* string ends out of balance */\n}\n\n\nstatic const char *max_expand (MatchState *ms, const char *s,\n                                 const char *p, const char *ep) {\n  ptrdiff_t i = 0;  /* counts maximum expand for item */\n  while ((s+i)<ms->src_end && singlematch(uchar(*(s+i)), p, ep))\n    i++;\n  /* keeps trying to match with the maximum repetitions */\n  while (i>=0) {\n    const char *res = match(ms, (s+i), ep+1);\n    if (res) return res;\n    i--;  /* else didn't match; reduce 1 repetition to try again */\n  }\n  return NULL;\n}\n\n\nstatic const char *min_expand (MatchState *ms, const char *s,\n                                 const char 
*p, const char *ep) {\n  for (;;) {\n    const char *res = match(ms, s, ep+1);\n    if (res != NULL)\n      return res;\n    else if (s<ms->src_end && singlematch(uchar(*s), p, ep))\n      s++;  /* try with one more repetition */\n    else return NULL;\n  }\n}\n\n\nstatic const char *start_capture (MatchState *ms, const char *s,\n                                    const char *p, int what) {\n  const char *res;\n  int level = ms->level;\n  if (level >= LUA_MAXCAPTURES) luaL_error(ms->L, \"too many captures\");\n  ms->capture[level].init = s;\n  ms->capture[level].len = what;\n  ms->level = level+1;\n  if ((res=match(ms, s, p)) == NULL)  /* match failed? */\n    ms->level--;  /* undo capture */\n  return res;\n}\n\n\nstatic const char *end_capture (MatchState *ms, const char *s,\n                                  const char *p) {\n  int l = capture_to_close(ms);\n  const char *res;\n  ms->capture[l].len = s - ms->capture[l].init;  /* close capture */\n  if ((res = match(ms, s, p)) == NULL)  /* match failed? */\n    ms->capture[l].len = CAP_UNFINISHED;  /* undo capture */\n  return res;\n}\n\n\nstatic const char *match_capture (MatchState *ms, const char *s, int l) {\n  size_t len;\n  l = check_capture(ms, l);\n  len = ms->capture[l].len;\n  if ((size_t)(ms->src_end-s) >= len &&\n      memcmp(ms->capture[l].init, s, len) == 0)\n    return s+len;\n  else return NULL;\n}\n\n\nstatic const char *match (MatchState *ms, const char *s, const char *p) {\n  init: /* using goto's to optimize tail recursion */\n  switch (*p) {\n    case '(': {  /* start capture */\n      if (*(p+1) == ')')  /* position capture? */\n        return start_capture(ms, s, p+2, CAP_POSITION);\n      else\n        return start_capture(ms, s, p+1, CAP_UNFINISHED);\n    }\n    case ')': {  /* end capture */\n      return end_capture(ms, s, p+1);\n    }\n    case L_ESC: {\n      switch (*(p+1)) {\n        case 'b': {  /* balanced string? 
*/\n          s = matchbalance(ms, s, p+2);\n          if (s == NULL) return NULL;\n          p+=4; goto init;  /* else return match(ms, s, p+4); */\n        }\n        case 'f': {  /* frontier? */\n          const char *ep; char previous;\n          p += 2;\n          if (*p != '[')\n            luaL_error(ms->L, \"missing \" LUA_QL(\"[\") \" after \"\n                               LUA_QL(\"%%f\") \" in pattern\");\n          ep = classend(ms, p);  /* points to what is next */\n          previous = (s == ms->src_init) ? '\\0' : *(s-1);\n          if (matchbracketclass(uchar(previous), p, ep-1) ||\n             !matchbracketclass(uchar(*s), p, ep-1)) return NULL;\n          p=ep; goto init;  /* else return match(ms, s, ep); */\n        }\n        default: {\n          if (isdigit(uchar(*(p+1)))) {  /* capture results (%0-%9)? */\n            s = match_capture(ms, s, uchar(*(p+1)));\n            if (s == NULL) return NULL;\n            p+=2; goto init;  /* else return match(ms, s, p+2) */\n          }\n          goto dflt;  /* case default */\n        }\n      }\n    }\n    case '\\0': {  /* end of pattern */\n      return s;  /* match succeeded */\n    }\n    case '$': {\n      if (*(p+1) == '\\0')  /* is the `$' the last char in pattern? */\n        return (s == ms->src_end) ? s : NULL;  /* check end of string */\n      else goto dflt;\n    }\n    default: dflt: {  /* it is a pattern item */\n      const char *ep = classend(ms, p);  /* points to what is next */\n      int m = s<ms->src_end && singlematch(uchar(*s), p, ep);\n      switch (*ep) {\n        case '?': {  /* optional */\n          const char *res;\n          if (m && ((res=match(ms, s+1, ep+1)) != NULL))\n            return res;\n          p=ep+1; goto init;  /* else return match(ms, s, ep+1); */\n        }\n        case '*': {  /* 0 or more repetitions */\n          return max_expand(ms, s, p, ep);\n        }\n        case '+': {  /* 1 or more repetitions */\n          return (m ? 
max_expand(ms, s+1, p, ep) : NULL);\n        }\n        case '-': {  /* 0 or more repetitions (minimum) */\n          return min_expand(ms, s, p, ep);\n        }\n        default: {\n          if (!m) return NULL;\n          s++; p=ep; goto init;  /* else return match(ms, s+1, ep); */\n        }\n      }\n    }\n  }\n}\n\n\n\nstatic const char *lmemfind (const char *s1, size_t l1,\n                               const char *s2, size_t l2) {\n  if (l2 == 0) return s1;  /* empty strings are everywhere */\n  else if (l2 > l1) return NULL;  /* avoids a negative `l1' */\n  else {\n    const char *init;  /* to search for a `*s2' inside `s1' */\n    l2--;  /* 1st char will be checked by `memchr' */\n    l1 = l1-l2;  /* `s2' cannot be found after that */\n    while (l1 > 0 && (init = (const char *)memchr(s1, *s2, l1)) != NULL) {\n      init++;   /* 1st char is already checked */\n      if (memcmp(init, s2+1, l2) == 0)\n        return init-1;\n      else {  /* correct `l1' and `s1' to try again */\n        l1 -= init-s1;\n        s1 = init;\n      }\n    }\n    return NULL;  /* not found */\n  }\n}\n\n\nstatic void push_onecapture (MatchState *ms, int i, const char *s,\n                                                    const char *e) {\n  if (i >= ms->level) {\n    if (i == 0)  /* ms->level == 0, too */\n      lua_pushlstring(ms->L, s, e - s);  /* add whole match */\n    else\n      luaL_error(ms->L, \"invalid capture index\");\n  }\n  else {\n    ptrdiff_t l = ms->capture[i].len;\n    if (l == CAP_UNFINISHED) luaL_error(ms->L, \"unfinished capture\");\n    if (l == CAP_POSITION)\n      lua_pushinteger(ms->L, ms->capture[i].init - ms->src_init + 1);\n    else\n      lua_pushlstring(ms->L, ms->capture[i].init, l);\n  }\n}\n\n\nstatic int push_captures (MatchState *ms, const char *s, const char *e) {\n  int i;\n  int nlevels = (ms->level == 0 && s) ? 
1 : ms->level;\n  luaL_checkstack(ms->L, nlevels, \"too many captures\");\n  for (i = 0; i < nlevels; i++)\n    push_onecapture(ms, i, s, e);\n  return nlevels;  /* number of strings pushed */\n}\n\n\nstatic int str_find_aux (lua_State *L, int find) {\n  size_t l1, l2;\n  const char *s = luaL_checklstring(L, 1, &l1);\n  const char *p = luaL_checklstring(L, 2, &l2);\n  ptrdiff_t init = posrelat(luaL_optinteger(L, 3, 1), l1) - 1;\n  if (init < 0) init = 0;\n  else if ((size_t)(init) > l1) init = (ptrdiff_t)l1;\n  if (find && (lua_toboolean(L, 4) ||  /* explicit request? */\n      strpbrk(p, SPECIALS) == NULL)) {  /* or no special characters? */\n    /* do a plain search */\n    const char *s2 = lmemfind(s+init, l1-init, p, l2);\n    if (s2) {\n      lua_pushinteger(L, s2-s+1);\n      lua_pushinteger(L, s2-s+l2);\n      return 2;\n    }\n  }\n  else {\n    MatchState ms;\n    int anchor = (*p == '^') ? (p++, 1) : 0;\n    const char *s1=s+init;\n    ms.L = L;\n    ms.src_init = s;\n    ms.src_end = s+l1;\n    do {\n      const char *res;\n      ms.level = 0;\n      if ((res=match(&ms, s1, p)) != NULL) {\n        if (find) {\n          lua_pushinteger(L, s1-s+1);  /* start */\n          lua_pushinteger(L, res-s);   /* end */\n          return push_captures(&ms, NULL, 0) + 2;\n        }\n        else\n          return push_captures(&ms, s1, res);\n      }\n    } while (s1++ < ms.src_end && !anchor);\n  }\n  lua_pushnil(L);  /* not found */\n  return 1;\n}\n\n\nstatic int str_find (lua_State *L) {\n  return str_find_aux(L, 1);\n}\n\n\nstatic int str_match (lua_State *L) {\n  return str_find_aux(L, 0);\n}\n\n\nstatic int gmatch_aux (lua_State *L) {\n  MatchState ms;\n  size_t ls;\n  const char *s = lua_tolstring(L, lua_upvalueindex(1), &ls);\n  const char *p = lua_tostring(L, lua_upvalueindex(2));\n  const char *src;\n  ms.L = L;\n  ms.src_init = s;\n  ms.src_end = s+ls;\n  for (src = s + (size_t)lua_tointeger(L, lua_upvalueindex(3));\n       src <= ms.src_end;\n       
src++) {\n    const char *e;\n    ms.level = 0;\n    if ((e = match(&ms, src, p)) != NULL) {\n      lua_Integer newstart = e-s;\n      if (e == src) newstart++;  /* empty match? go at least one position */\n      lua_pushinteger(L, newstart);\n      lua_replace(L, lua_upvalueindex(3));\n      return push_captures(&ms, src, e);\n    }\n  }\n  return 0;  /* not found */\n}\n\n\nstatic int gmatch (lua_State *L) {\n  luaL_checkstring(L, 1);\n  luaL_checkstring(L, 2);\n  lua_settop(L, 2);\n  lua_pushinteger(L, 0);\n  lua_pushcclosure(L, gmatch_aux, 3);\n  return 1;\n}\n\n\nstatic int gfind_nodef (lua_State *L) {\n  return luaL_error(L, LUA_QL(\"string.gfind\") \" was renamed to \"\n                       LUA_QL(\"string.gmatch\"));\n}\n\n\nstatic void add_s (MatchState *ms, luaL_Buffer *b, const char *s,\n                                                   const char *e) {\n  size_t l, i;\n  const char *news = lua_tolstring(ms->L, 3, &l);\n  for (i = 0; i < l; i++) {\n    if (news[i] != L_ESC)\n      luaL_addchar(b, news[i]);\n    else {\n      i++;  /* skip ESC */\n      if (!isdigit(uchar(news[i])))\n        luaL_addchar(b, news[i]);\n      else if (news[i] == '0')\n          luaL_addlstring(b, s, e - s);\n      else {\n        push_onecapture(ms, news[i] - '1', s, e);\n        luaL_addvalue(b);  /* add capture to accumulated result */\n      }\n    }\n  }\n}\n\n\nstatic void add_value (MatchState *ms, luaL_Buffer *b, const char *s,\n                                                       const char *e) {\n  lua_State *L = ms->L;\n  switch (lua_type(L, 3)) {\n    case LUA_TNUMBER:\n    case LUA_TSTRING: {\n      add_s(ms, b, s, e);\n      return;\n    }\n    case LUA_TFUNCTION: {\n      int n;\n      lua_pushvalue(L, 3);\n      n = push_captures(ms, s, e);\n      lua_call(L, n, 1);\n      break;\n    }\n    case LUA_TTABLE: {\n      push_onecapture(ms, 0, s, e);\n      lua_gettable(L, 3);\n      break;\n    }\n  }\n  if (!lua_toboolean(L, -1)) {  /* nil or false? 
*/\n    lua_pop(L, 1);\n    lua_pushlstring(L, s, e - s);  /* keep original text */\n  }\n  else if (!lua_isstring(L, -1))\n    luaL_error(L, \"invalid replacement value (a %s)\", luaL_typename(L, -1)); \n  luaL_addvalue(b);  /* add result to accumulator */\n}\n\n\nstatic int str_gsub (lua_State *L) {\n  size_t srcl;\n  const char *src = luaL_checklstring(L, 1, &srcl);\n  const char *p = luaL_checkstring(L, 2);\n  int  tr = lua_type(L, 3);\n  int max_s = luaL_optint(L, 4, srcl+1);\n  int anchor = (*p == '^') ? (p++, 1) : 0;\n  int n = 0;\n  MatchState ms;\n  luaL_Buffer b;\n  luaL_argcheck(L, tr == LUA_TNUMBER || tr == LUA_TSTRING ||\n                   tr == LUA_TFUNCTION || tr == LUA_TTABLE, 3,\n                      \"string/function/table expected\");\n  luaL_buffinit(L, &b);\n  ms.L = L;\n  ms.src_init = src;\n  ms.src_end = src+srcl;\n  while (n < max_s) {\n    const char *e;\n    ms.level = 0;\n    e = match(&ms, src, p);\n    if (e) {\n      n++;\n      add_value(&ms, &b, src, e);\n    }\n    if (e && e>src) /* non empty match? 
*/\n      src = e;  /* skip it */\n    else if (src < ms.src_end)\n      luaL_addchar(&b, *src++);\n    else break;\n    if (anchor) break;\n  }\n  luaL_addlstring(&b, src, ms.src_end-src);\n  luaL_pushresult(&b);\n  lua_pushinteger(L, n);  /* number of substitutions */\n  return 2;\n}\n\n/* }====================================================== */\n\n\n/* maximum size of each formatted item (> len(format('%99.99f', -1e308))) */\n#define MAX_ITEM\t512\n/* valid flags in a format specification */\n#define FLAGS\t\"-+ #0\"\n/*\n** maximum size of each format specification (such as '%-099.99d')\n** (+10 accounts for %99.99x plus margin of error)\n*/\n#define MAX_FORMAT\t(sizeof(FLAGS) + sizeof(LUA_INTFRMLEN) + 10)\n\n\nstatic void addquoted (lua_State *L, luaL_Buffer *b, int arg) {\n  size_t l;\n  const char *s = luaL_checklstring(L, arg, &l);\n  luaL_addchar(b, '\"');\n  while (l--) {\n    switch (*s) {\n      case '\"': case '\\\\': case '\\n': {\n        luaL_addchar(b, '\\\\');\n        luaL_addchar(b, *s);\n        break;\n      }\n      case '\\r': {\n        luaL_addlstring(b, \"\\\\r\", 2);\n        break;\n      }\n      case '\\0': {\n        luaL_addlstring(b, \"\\\\000\", 4);\n        break;\n      }\n      default: {\n        luaL_addchar(b, *s);\n        break;\n      }\n    }\n    s++;\n  }\n  luaL_addchar(b, '\"');\n}\n\nstatic const char *scanformat (lua_State *L, const char *strfrmt, char *form) {\n  const char *p = strfrmt;\n  while (*p != '\\0' && strchr(FLAGS, *p) != NULL) p++;  /* skip flags */\n  if ((size_t)(p - strfrmt) >= sizeof(FLAGS))\n    luaL_error(L, \"invalid format (repeated flags)\");\n  if (isdigit(uchar(*p))) p++;  /* skip width */\n  if (isdigit(uchar(*p))) p++;  /* (2 digits at most) */\n  if (*p == '.') {\n    p++;\n    if (isdigit(uchar(*p))) p++;  /* skip precision */\n    if (isdigit(uchar(*p))) p++;  /* (2 digits at most) */\n  }\n  if (isdigit(uchar(*p)))\n    luaL_error(L, \"invalid format (width or precision too 
long)\");\n  *(form++) = '%';\n  strncpy(form, strfrmt, p - strfrmt + 1);\n  form += p - strfrmt + 1;\n  *form = '\\0';\n  return p;\n}\n\n\nstatic void addintlen (char *form) {\n  size_t l = strlen(form);\n  char spec = form[l - 1];\n  strcpy(form + l - 1, LUA_INTFRMLEN);\n  form[l + sizeof(LUA_INTFRMLEN) - 2] = spec;\n  form[l + sizeof(LUA_INTFRMLEN) - 1] = '\\0';\n}\n\n\nstatic int str_format (lua_State *L) {\n  int top = lua_gettop(L);\n  int arg = 1;\n  size_t sfl;\n  const char *strfrmt = luaL_checklstring(L, arg, &sfl);\n  const char *strfrmt_end = strfrmt+sfl;\n  luaL_Buffer b;\n  luaL_buffinit(L, &b);\n  while (strfrmt < strfrmt_end) {\n    if (*strfrmt != L_ESC)\n      luaL_addchar(&b, *strfrmt++);\n    else if (*++strfrmt == L_ESC)\n      luaL_addchar(&b, *strfrmt++);  /* %% */\n    else { /* format item */\n      char form[MAX_FORMAT];  /* to store the format (`%...') */\n      char buff[MAX_ITEM];  /* to store the formatted item */\n      if (++arg > top)\n        luaL_argerror(L, arg, \"no value\");\n      strfrmt = scanformat(L, strfrmt, form);\n      switch (*strfrmt++) {\n        case 'c': {\n          sprintf(buff, form, (int)luaL_checknumber(L, arg));\n          break;\n        }\n        case 'd':  case 'i': {\n          addintlen(form);\n          sprintf(buff, form, (LUA_INTFRM_T)luaL_checknumber(L, arg));\n          break;\n        }\n        case 'o':  case 'u':  case 'x':  case 'X': {\n          addintlen(form);\n          sprintf(buff, form, (unsigned LUA_INTFRM_T)luaL_checknumber(L, arg));\n          break;\n        }\n        case 'e':  case 'E': case 'f':\n        case 'g': case 'G': {\n          sprintf(buff, form, (double)luaL_checknumber(L, arg));\n          break;\n        }\n        case 'q': {\n          addquoted(L, &b, arg);\n          continue;  /* skip the 'addsize' at the end */\n        }\n        case 's': {\n          size_t l;\n          const char *s = luaL_checklstring(L, arg, &l);\n          if (!strchr(form, '.') && l 
>= 100) {\n            /* no precision and string is too long to be formatted;\n               keep original string */\n            lua_pushvalue(L, arg);\n            luaL_addvalue(&b);\n            continue;  /* skip the `addsize' at the end */\n          }\n          else {\n            sprintf(buff, form, s);\n            break;\n          }\n        }\n        default: {  /* also treat cases `pnLlh' */\n          return luaL_error(L, \"invalid option \" LUA_QL(\"%%%c\") \" to \"\n                               LUA_QL(\"format\"), *(strfrmt - 1));\n        }\n      }\n      luaL_addlstring(&b, buff, strlen(buff));\n    }\n  }\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic const luaL_Reg strlib[] = {\n  {\"byte\", str_byte},\n  {\"char\", str_char},\n  {\"dump\", str_dump},\n  {\"find\", str_find},\n  {\"format\", str_format},\n  {\"gfind\", gfind_nodef},\n  {\"gmatch\", gmatch},\n  {\"gsub\", str_gsub},\n  {\"len\", str_len},\n  {\"lower\", str_lower},\n  {\"match\", str_match},\n  {\"rep\", str_rep},\n  {\"reverse\", str_reverse},\n  {\"sub\", str_sub},\n  {\"upper\", str_upper},\n  {NULL, NULL}\n};\n\n\nstatic void createmetatable (lua_State *L) {\n  lua_createtable(L, 0, 1);  /* create metatable for strings */\n  lua_pushliteral(L, \"\");  /* dummy string */\n  lua_pushvalue(L, -2);\n  lua_setmetatable(L, -2);  /* set string metatable */\n  lua_pop(L, 1);  /* pop dummy string */\n  lua_pushvalue(L, -2);  /* string library... */\n  lua_setfield(L, -2, \"__index\");  /* ...is the __index metamethod */\n  lua_pop(L, 1);  /* pop metatable */\n}\n\n\n/*\n** Open string library\n*/\nLUALIB_API int luaopen_string (lua_State *L) {\n  luaL_register(L, LUA_STRLIBNAME, strlib);\n#if defined(LUA_COMPAT_GFIND)\n  lua_getfield(L, -1, \"gmatch\");\n  lua_setfield(L, -2, \"gfind\");\n#endif\n  createmetatable(L);\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/ltable.c",
    "content": "/*\n** $Id: ltable.c,v 2.32.1.2 2007/12/28 15:32:23 roberto Exp $\n** Lua tables (hash)\n** See Copyright Notice in lua.h\n*/\n\n\n/*\n** Implementation of tables (aka arrays, objects, or hash tables).\n** Tables keep its elements in two parts: an array part and a hash part.\n** Non-negative integer keys are all candidates to be kept in the array\n** part. The actual size of the array is the largest `n' such that at\n** least half the slots between 0 and n are in use.\n** Hash uses a mix of chained scatter table with Brent's variation.\n** A main invariant of these tables is that, if an element is not\n** in its main position (i.e. the `original' position that its hash gives\n** to it), then the colliding element is in its own main position.\n** Hence even when the load factor reaches 100%, performance remains good.\n*/\n\n#include <math.h>\n#include <string.h>\n\n#define ltable_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lgc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"ltable.h\"\n\n\n/*\n** max size of array part is 2^MAXBITS\n*/\n#if LUAI_BITSINT > 26\n#define MAXBITS\t\t26\n#else\n#define MAXBITS\t\t(LUAI_BITSINT-2)\n#endif\n\n#define MAXASIZE\t(1 << MAXBITS)\n\n\n#define hashpow2(t,n)      (gnode(t, lmod((n), sizenode(t))))\n  \n#define hashstr(t,str)  hashpow2(t, (str)->tsv.hash)\n#define hashboolean(t,p)        hashpow2(t, p)\n\n\n/*\n** for some types, it is better to avoid modulus by power of 2, as\n** they tend to have many 2 factors.\n*/\n#define hashmod(t,n)\t(gnode(t, ((n) % ((sizenode(t)-1)|1))))\n\n\n#define hashpointer(t,p)\thashmod(t, IntPoint(p))\n\n\n/*\n** number of ints inside a lua_Number\n*/\n#define numints\t\tcast_int(sizeof(lua_Number)/sizeof(int))\n\n\n\n#define dummynode\t\t(&dummynode_)\n\nstatic const Node dummynode_ = {\n  {{NULL}, LUA_TNIL},  /* value */\n  {{{NULL}, LUA_TNIL, NULL}}  /* key */\n};\n\n\n/*\n** hash for 
lua_Numbers\n*/\nstatic Node *hashnum (const Table *t, lua_Number n) {\n  unsigned int a[numints];\n  int i;\n  if (luai_numeq(n, 0))  /* avoid problems with -0 */\n    return gnode(t, 0);\n  memcpy(a, &n, sizeof(a));\n  for (i = 1; i < numints; i++) a[0] += a[i];\n  return hashmod(t, a[0]);\n}\n\n\n\n/*\n** returns the `main' position of an element in a table (that is, the index\n** of its hash value)\n*/\nstatic Node *mainposition (const Table *t, const TValue *key) {\n  switch (ttype(key)) {\n    case LUA_TNUMBER:\n      return hashnum(t, nvalue(key));\n    case LUA_TSTRING:\n      return hashstr(t, rawtsvalue(key));\n    case LUA_TBOOLEAN:\n      return hashboolean(t, bvalue(key));\n    case LUA_TLIGHTUSERDATA:\n      return hashpointer(t, pvalue(key));\n    default:\n      return hashpointer(t, gcvalue(key));\n  }\n}\n\n\n/*\n** returns the index for `key' if `key' is an appropriate key to live in\n** the array part of the table, -1 otherwise.\n*/\nstatic int arrayindex (const TValue *key) {\n  if (ttisnumber(key)) {\n    lua_Number n = nvalue(key);\n    int k;\n    lua_number2int(k, n);\n    if (luai_numeq(cast_num(k), n))\n      return k;\n  }\n  return -1;  /* `key' did not match some condition */\n}\n\n\n/*\n** returns the index of a `key' for table traversals. First goes all\n** elements in the array part, then elements in the hash part. The\n** beginning of a traversal is signalled by -1.\n*/\nstatic int findindex (lua_State *L, Table *t, StkId key) {\n  int i;\n  if (ttisnil(key)) return -1;  /* first iteration */\n  i = arrayindex(key);\n  if (0 < i && i <= t->sizearray)  /* is `key' inside array part? 
*/\n    return i-1;  /* yes; that's the index (corrected to C) */\n  else {\n    Node *n = mainposition(t, key);\n    do {  /* check whether `key' is somewhere in the chain */\n      /* key may be dead already, but it is ok to use it in `next' */\n      if (luaO_rawequalObj(key2tval(n), key) ||\n            (ttype(gkey(n)) == LUA_TDEADKEY && iscollectable(key) &&\n             gcvalue(gkey(n)) == gcvalue(key))) {\n        i = cast_int(n - gnode(t, 0));  /* key index in hash table */\n        /* hash elements are numbered after array ones */\n        return i + t->sizearray;\n      }\n      else n = gnext(n);\n    } while (n);\n    luaG_runerror(L, \"invalid key to \" LUA_QL(\"next\"));  /* key not found */\n    return 0;  /* to avoid warnings */\n  }\n}\n\n\nint luaH_next (lua_State *L, Table *t, StkId key) {\n  int i = findindex(L, t, key);  /* find original element */\n  for (i++; i < t->sizearray; i++) {  /* try first array part */\n    if (!ttisnil(&t->array[i])) {  /* a non-nil value? */\n      setnvalue(key, cast_num(i+1));\n      setobj2s(L, key+1, &t->array[i]);\n      return 1;\n    }\n  }\n  for (i -= t->sizearray; i < sizenode(t); i++) {  /* then hash part */\n    if (!ttisnil(gval(gnode(t, i)))) {  /* a non-nil value? 
*/\n      setobj2s(L, key, key2tval(gnode(t, i)));\n      setobj2s(L, key+1, gval(gnode(t, i)));\n      return 1;\n    }\n  }\n  return 0;  /* no more elements */\n}\n\n\n/*\n** {=============================================================\n** Rehash\n** ==============================================================\n*/\n\n\nstatic int computesizes (int nums[], int *narray) {\n  int i;\n  int twotoi;  /* 2^i */\n  int a = 0;  /* number of elements smaller than 2^i */\n  int na = 0;  /* number of elements to go to array part */\n  int n = 0;  /* optimal size for array part */\n  for (i = 0, twotoi = 1; twotoi/2 < *narray; i++, twotoi *= 2) {\n    if (nums[i] > 0) {\n      a += nums[i];\n      if (a > twotoi/2) {  /* more than half elements present? */\n        n = twotoi;  /* optimal size (till now) */\n        na = a;  /* all elements smaller than n will go to array part */\n      }\n    }\n    if (a == *narray) break;  /* all elements already counted */\n  }\n  *narray = n;\n  lua_assert(*narray/2 <= na && na <= *narray);\n  return na;\n}\n\n\nstatic int countint (const TValue *key, int *nums) {\n  int k = arrayindex(key);\n  if (0 < k && k <= MAXASIZE) {  /* is `key' an appropriate array index? 
*/\n    nums[ceillog2(k)]++;  /* count as such */\n    return 1;\n  }\n  else\n    return 0;\n}\n\n\nstatic int numusearray (const Table *t, int *nums) {\n  int lg;\n  int ttlg;  /* 2^lg */\n  int ause = 0;  /* summation of `nums' */\n  int i = 1;  /* count to traverse all array keys */\n  for (lg=0, ttlg=1; lg<=MAXBITS; lg++, ttlg*=2) {  /* for each slice */\n    int lc = 0;  /* counter */\n    int lim = ttlg;\n    if (lim > t->sizearray) {\n      lim = t->sizearray;  /* adjust upper limit */\n      if (i > lim)\n        break;  /* no more elements to count */\n    }\n    /* count elements in range (2^(lg-1), 2^lg] */\n    for (; i <= lim; i++) {\n      if (!ttisnil(&t->array[i-1]))\n        lc++;\n    }\n    nums[lg] += lc;\n    ause += lc;\n  }\n  return ause;\n}\n\n\nstatic int numusehash (const Table *t, int *nums, int *pnasize) {\n  int totaluse = 0;  /* total number of elements */\n  int ause = 0;  /* summation of `nums' */\n  int i = sizenode(t);\n  while (i--) {\n    Node *n = &t->node[i];\n    if (!ttisnil(gval(n))) {\n      ause += countint(key2tval(n), nums);\n      totaluse++;\n    }\n  }\n  *pnasize += ause;\n  return totaluse;\n}\n\n\nstatic void setarrayvector (lua_State *L, Table *t, int size) {\n  int i;\n  luaM_reallocvector(L, t->array, t->sizearray, size, TValue);\n  for (i=t->sizearray; i<size; i++)\n     setnilvalue(&t->array[i]);\n  t->sizearray = size;\n}\n\n\nstatic void setnodevector (lua_State *L, Table *t, int size) {\n  int lsize;\n  if (size == 0) {  /* no elements to hash part? 
*/\n    t->node = cast(Node *, dummynode);  /* use common `dummynode' */\n    lsize = 0;\n  }\n  else {\n    int i;\n    lsize = ceillog2(size);\n    if (lsize > MAXBITS)\n      luaG_runerror(L, \"table overflow\");\n    size = twoto(lsize);\n    t->node = luaM_newvector(L, size, Node);\n    for (i=0; i<size; i++) {\n      Node *n = gnode(t, i);\n      gnext(n) = NULL;\n      setnilvalue(gkey(n));\n      setnilvalue(gval(n));\n    }\n  }\n  t->lsizenode = cast_byte(lsize);\n  t->lastfree = gnode(t, size);  /* all positions are free */\n}\n\n\nstatic void resize (lua_State *L, Table *t, int nasize, int nhsize) {\n  int i;\n  int oldasize = t->sizearray;\n  int oldhsize = t->lsizenode;\n  Node *nold = t->node;  /* save old hash ... */\n  if (nasize > oldasize)  /* array part must grow? */\n    setarrayvector(L, t, nasize);\n  /* create new hash part with appropriate size */\n  setnodevector(L, t, nhsize);  \n  if (nasize < oldasize) {  /* array part must shrink? */\n    t->sizearray = nasize;\n    /* re-insert elements from vanishing slice */\n    for (i=nasize; i<oldasize; i++) {\n      if (!ttisnil(&t->array[i]))\n        setobjt2t(L, luaH_setnum(L, t, i+1), &t->array[i]);\n    }\n    /* shrink array */\n    luaM_reallocvector(L, t->array, oldasize, nasize, TValue);\n  }\n  /* re-insert elements from hash part */\n  for (i = twoto(oldhsize) - 1; i >= 0; i--) {\n    Node *old = nold+i;\n    if (!ttisnil(gval(old)))\n      setobjt2t(L, luaH_set(L, t, key2tval(old)), gval(old));\n  }\n  if (nold != dummynode)\n    luaM_freearray(L, nold, twoto(oldhsize), Node);  /* free old array */\n}\n\n\nvoid luaH_resizearray (lua_State *L, Table *t, int nasize) {\n  int nsize = (t->node == dummynode) ? 
0 : sizenode(t);\n  resize(L, t, nasize, nsize);\n}\n\n\nstatic void rehash (lua_State *L, Table *t, const TValue *ek) {\n  int nasize, na;\n  int nums[MAXBITS+1];  /* nums[i] = number of keys between 2^(i-1) and 2^i */\n  int i;\n  int totaluse;\n  for (i=0; i<=MAXBITS; i++) nums[i] = 0;  /* reset counts */\n  nasize = numusearray(t, nums);  /* count keys in array part */\n  totaluse = nasize;  /* all those keys are integer keys */\n  totaluse += numusehash(t, nums, &nasize);  /* count keys in hash part */\n  /* count extra key */\n  nasize += countint(ek, nums);\n  totaluse++;\n  /* compute new size for array part */\n  na = computesizes(nums, &nasize);\n  /* resize the table to new computed sizes */\n  resize(L, t, nasize, totaluse - na);\n}\n\n\n\n/*\n** }=============================================================\n*/\n\n\nTable *luaH_new (lua_State *L, int narray, int nhash) {\n  Table *t = luaM_new(L, Table);\n  luaC_link(L, obj2gco(t), LUA_TTABLE);\n  t->metatable = NULL;\n  t->flags = cast_byte(~0);\n  /* temporary values (kept only if some malloc fails) */\n  t->array = NULL;\n  t->sizearray = 0;\n  t->lsizenode = 0;\n  t->readonly = 0;\n  t->node = cast(Node *, dummynode);\n  setarrayvector(L, t, narray);\n  setnodevector(L, t, nhash);\n  return t;\n}\n\n\nvoid luaH_free (lua_State *L, Table *t) {\n  if (t->node != dummynode)\n    luaM_freearray(L, t->node, sizenode(t), Node);\n  luaM_freearray(L, t->array, t->sizearray, TValue);\n  luaM_free(L, t);\n}\n\n\nstatic Node *getfreepos (Table *t) {\n  while (t->lastfree-- > t->node) {\n    if (ttisnil(gkey(t->lastfree)))\n      return t->lastfree;\n  }\n  return NULL;  /* could not find a free place */\n}\n\n\n\n/*\n** inserts a new key into a hash table; first, check whether key's main \n** position is free. 
If not, check whether colliding node is in its main \n** position or not: if it is not, move colliding node to an empty place and \n** put new key in its main position; otherwise (colliding node is in its main \n** position), new key goes to an empty position. \n*/\nstatic TValue *newkey (lua_State *L, Table *t, const TValue *key) {\n  Node *mp = mainposition(t, key);\n  if (!ttisnil(gval(mp)) || mp == dummynode) {\n    Node *othern;\n    Node *n = getfreepos(t);  /* get a free place */\n    if (n == NULL) {  /* cannot find a free place? */\n      rehash(L, t, key);  /* grow table */\n      return luaH_set(L, t, key);  /* re-insert key into grown table */\n    }\n    lua_assert(n != dummynode);\n    othern = mainposition(t, key2tval(mp));\n    if (othern != mp) {  /* is colliding node out of its main position? */\n      /* yes; move colliding node into free position */\n      while (gnext(othern) != mp) othern = gnext(othern);  /* find previous */\n      gnext(othern) = n;  /* redo the chain with `n' in place of `mp' */\n      *n = *mp;  /* copy colliding node into free pos. 
(mp->next also goes) */\n      gnext(mp) = NULL;  /* now `mp' is free */\n      setnilvalue(gval(mp));\n    }\n    else {  /* colliding node is in its own main position */\n      /* new node will go into free position */\n      gnext(n) = gnext(mp);  /* chain new position */\n      gnext(mp) = n;\n      mp = n;\n    }\n  }\n  gkey(mp)->value = key->value; gkey(mp)->tt = key->tt;\n  luaC_barriert(L, t, key);\n  lua_assert(ttisnil(gval(mp)));\n  return gval(mp);\n}\n\n\n/*\n** search function for integers\n*/\nconst TValue *luaH_getnum (Table *t, int key) {\n  if (1 <= key && key <= t->sizearray)\n    return &t->array[key-1];\n  else {\n    lua_Number nk = cast_num(key);\n    Node *n = hashnum(t, nk);\n    do {  /* check whether `key' is somewhere in the chain */\n      if (ttisnumber(gkey(n)) && luai_numeq(nvalue(gkey(n)), nk))\n        return gval(n);  /* that's it */\n      else n = gnext(n);\n    } while (n);\n    return luaO_nilobject;\n  }\n}\n\n\n/*\n** search function for strings\n*/\nconst TValue *luaH_getstr (Table *t, TString *key) {\n  Node *n = hashstr(t, key);\n  do {  /* check whether `key' is somewhere in the chain */\n    if (ttisstring(gkey(n)) && rawtsvalue(gkey(n)) == key)\n      return gval(n);  /* that's it */\n    else n = gnext(n);\n  } while (n);\n  return luaO_nilobject;\n}\n\n\n/*\n** main search function\n*/\nconst TValue *luaH_get (Table *t, const TValue *key) {\n  switch (ttype(key)) {\n    case LUA_TNIL: return luaO_nilobject;\n    case LUA_TSTRING: return luaH_getstr(t, rawtsvalue(key));\n    case LUA_TNUMBER: {\n      int k;\n      lua_Number n = nvalue(key);\n      lua_number2int(k, n);\n      if (luai_numeq(cast_num(k), nvalue(key))) /* index is int? 
*/\n        return luaH_getnum(t, k);  /* use specialized version */\n      /* else go through */\n    }\n    default: {\n      Node *n = mainposition(t, key);\n      do {  /* check whether `key' is somewhere in the chain */\n        if (luaO_rawequalObj(key2tval(n), key))\n          return gval(n);  /* that's it */\n        else n = gnext(n);\n      } while (n);\n      return luaO_nilobject;\n    }\n  }\n}\n\n\nTValue *luaH_set (lua_State *L, Table *t, const TValue *key) {\n  const TValue *p = luaH_get(t, key);\n  t->flags = 0;\n  if (p != luaO_nilobject)\n    return cast(TValue *, p);\n  else {\n    if (ttisnil(key)) luaG_runerror(L, \"table index is nil\");\n    else if (ttisnumber(key) && luai_numisnan(nvalue(key)))\n      luaG_runerror(L, \"table index is NaN\");\n    return newkey(L, t, key);\n  }\n}\n\n\nTValue *luaH_setnum (lua_State *L, Table *t, int key) {\n  const TValue *p = luaH_getnum(t, key);\n  if (p != luaO_nilobject)\n    return cast(TValue *, p);\n  else {\n    TValue k;\n    setnvalue(&k, cast_num(key));\n    return newkey(L, t, &k);\n  }\n}\n\n\nTValue *luaH_setstr (lua_State *L, Table *t, TString *key) {\n  const TValue *p = luaH_getstr(t, key);\n  if (p != luaO_nilobject)\n    return cast(TValue *, p);\n  else {\n    TValue k;\n    setsvalue(L, &k, key);\n    return newkey(L, t, &k);\n  }\n}\n\n\nstatic int unbound_search (Table *t, unsigned int j) {\n  unsigned int i = j;  /* i is zero or a present index */\n  j++;\n  /* find `i' and `j' such that i is present and j is not */\n  while (!ttisnil(luaH_getnum(t, j))) {\n    i = j;\n    j *= 2;\n    if (j > cast(unsigned int, MAX_INT)) {  /* overflow? 
*/\n      /* table was built with bad purposes: resort to linear search */\n      i = 1;\n      while (!ttisnil(luaH_getnum(t, i))) i++;\n      return i - 1;\n    }\n  }\n  /* now do a binary search between them */\n  while (j - i > 1) {\n    unsigned int m = (i+j)/2;\n    if (ttisnil(luaH_getnum(t, m))) j = m;\n    else i = m;\n  }\n  return i;\n}\n\n\n/*\n** Try to find a boundary in table `t'. A `boundary' is an integer index\n** such that t[i] is non-nil and t[i+1] is nil (and 0 if t[1] is nil).\n*/\nint luaH_getn (Table *t) {\n  unsigned int j = t->sizearray;\n  if (j > 0 && ttisnil(&t->array[j - 1])) {\n    /* there is a boundary in the array part: (binary) search for it */\n    unsigned int i = 0;\n    while (j - i > 1) {\n      unsigned int m = (i+j)/2;\n      if (ttisnil(&t->array[m - 1])) j = m;\n      else i = m;\n    }\n    return i;\n  }\n  /* else must find a boundary in hash part */\n  else if (t->node == dummynode)  /* hash part is empty? */\n    return j;  /* that is easy... */\n  else return unbound_search(t, j);\n}\n\n\n\n#if defined(LUA_DEBUG)\n\nNode *luaH_mainposition (const Table *t, const TValue *key) {\n  return mainposition(t, key);\n}\n\nint luaH_isdummy (Node *n) { return n == dummynode; }\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/ltable.h",
    "content": "/*\n** $Id: ltable.h,v 2.10.1.1 2007/12/27 13:02:25 roberto Exp $\n** Lua tables (hash)\n** See Copyright Notice in lua.h\n*/\n\n#ifndef ltable_h\n#define ltable_h\n\n#include \"lobject.h\"\n\n\n#define gnode(t,i)\t(&(t)->node[i])\n#define gkey(n)\t\t(&(n)->i_key.nk)\n#define gval(n)\t\t(&(n)->i_val)\n#define gnext(n)\t((n)->i_key.nk.next)\n\n#define key2tval(n)\t(&(n)->i_key.tvk)\n\n\nLUAI_FUNC const TValue *luaH_getnum (Table *t, int key);\nLUAI_FUNC TValue *luaH_setnum (lua_State *L, Table *t, int key);\nLUAI_FUNC const TValue *luaH_getstr (Table *t, TString *key);\nLUAI_FUNC TValue *luaH_setstr (lua_State *L, Table *t, TString *key);\nLUAI_FUNC const TValue *luaH_get (Table *t, const TValue *key);\nLUAI_FUNC TValue *luaH_set (lua_State *L, Table *t, const TValue *key);\nLUAI_FUNC Table *luaH_new (lua_State *L, int narray, int lnhash);\nLUAI_FUNC void luaH_resizearray (lua_State *L, Table *t, int nasize);\nLUAI_FUNC void luaH_free (lua_State *L, Table *t);\nLUAI_FUNC int luaH_next (lua_State *L, Table *t, StkId key);\nLUAI_FUNC int luaH_getn (Table *t);\n\n\n#if defined(LUA_DEBUG)\nLUAI_FUNC Node *luaH_mainposition (const Table *t, const TValue *key);\nLUAI_FUNC int luaH_isdummy (Node *n);\n#endif\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/ltablib.c",
    "content": "/*\n** $Id: ltablib.c,v 1.38.1.3 2008/02/14 16:46:58 roberto Exp $\n** Library for Table Manipulation\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stddef.h>\n\n#define ltablib_c\n#define LUA_LIB\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n#define aux_getn(L,n)\t(luaL_checktype(L, n, LUA_TTABLE), luaL_getn(L, n))\n\n\nstatic int foreachi (lua_State *L) {\n  int i;\n  int n = aux_getn(L, 1);\n  luaL_checktype(L, 2, LUA_TFUNCTION);\n  for (i=1; i <= n; i++) {\n    lua_pushvalue(L, 2);  /* function */\n    lua_pushinteger(L, i);  /* 1st argument */\n    lua_rawgeti(L, 1, i);  /* 2nd argument */\n    lua_call(L, 2, 1);\n    if (!lua_isnil(L, -1))\n      return 1;\n    lua_pop(L, 1);  /* remove nil result */\n  }\n  return 0;\n}\n\n\nstatic int foreach (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n  luaL_checktype(L, 2, LUA_TFUNCTION);\n  lua_pushnil(L);  /* first key */\n  while (lua_next(L, 1)) {\n    lua_pushvalue(L, 2);  /* function */\n    lua_pushvalue(L, -3);  /* key */\n    lua_pushvalue(L, -3);  /* value */\n    lua_call(L, 2, 1);\n    if (!lua_isnil(L, -1))\n      return 1;\n    lua_pop(L, 2);  /* remove value and result */\n  }\n  return 0;\n}\n\n\nstatic int maxn (lua_State *L) {\n  lua_Number max = 0;\n  luaL_checktype(L, 1, LUA_TTABLE);\n  lua_pushnil(L);  /* first key */\n  while (lua_next(L, 1)) {\n    lua_pop(L, 1);  /* remove value */\n    if (lua_type(L, -1) == LUA_TNUMBER) {\n      lua_Number v = lua_tonumber(L, -1);\n      if (v > max) max = v;\n    }\n  }\n  lua_pushnumber(L, max);\n  return 1;\n}\n\n\nstatic int getn (lua_State *L) {\n  lua_pushinteger(L, aux_getn(L, 1));\n  return 1;\n}\n\n\nstatic int setn (lua_State *L) {\n  luaL_checktype(L, 1, LUA_TTABLE);\n#ifndef luaL_setn\n  luaL_setn(L, 1, luaL_checkint(L, 2));\n#else\n  luaL_error(L, LUA_QL(\"setn\") \" is obsolete\");\n#endif\n  lua_pushvalue(L, 1);\n  return 1;\n}\n\n\nstatic int tinsert (lua_State *L) {\n  int e = 
aux_getn(L, 1) + 1;  /* first empty element */\n  int pos;  /* where to insert new element */\n  switch (lua_gettop(L)) {\n    case 2: {  /* called with only 2 arguments */\n      pos = e;  /* insert new element at the end */\n      break;\n    }\n    case 3: {\n      int i;\n      pos = luaL_checkint(L, 2);  /* 2nd argument is the position */\n      if (pos > e) e = pos;  /* `grow' array if necessary */\n      for (i = e; i > pos; i--) {  /* move up elements */\n        lua_rawgeti(L, 1, i-1);\n        lua_rawseti(L, 1, i);  /* t[i] = t[i-1] */\n      }\n      break;\n    }\n    default: {\n      return luaL_error(L, \"wrong number of arguments to \" LUA_QL(\"insert\"));\n    }\n  }\n  luaL_setn(L, 1, e);  /* new size */\n  lua_rawseti(L, 1, pos);  /* t[pos] = v */\n  return 0;\n}\n\n\nstatic int tremove (lua_State *L) {\n  int e = aux_getn(L, 1);\n  int pos = luaL_optint(L, 2, e);\n  if (!(1 <= pos && pos <= e))  /* position is outside bounds? */\n   return 0;  /* nothing to remove */\n  luaL_setn(L, 1, e - 1);  /* t.n = n-1 */\n  lua_rawgeti(L, 1, pos);  /* result = t[pos] */\n  for ( ;pos<e; pos++) {\n    lua_rawgeti(L, 1, pos+1);\n    lua_rawseti(L, 1, pos);  /* t[pos] = t[pos+1] */\n  }\n  lua_pushnil(L);\n  lua_rawseti(L, 1, e);  /* t[e] = nil */\n  return 1;\n}\n\n\nstatic void addfield (lua_State *L, luaL_Buffer *b, int i) {\n  lua_rawgeti(L, 1, i);\n  if (!lua_isstring(L, -1))\n    luaL_error(L, \"invalid value (%s) at index %d in table for \"\n                  LUA_QL(\"concat\"), luaL_typename(L, -1), i);\n  luaL_addvalue(b);\n}\n\n\nstatic int tconcat (lua_State *L) {\n  luaL_Buffer b;\n  size_t lsep;\n  int i, last;\n  const char *sep = luaL_optlstring(L, 2, \"\", &lsep);\n  luaL_checktype(L, 1, LUA_TTABLE);\n  i = luaL_optint(L, 3, 1);\n  last = luaL_opt(L, luaL_checkint, 4, luaL_getn(L, 1));\n  luaL_buffinit(L, &b);\n  for (; i < last; i++) {\n    addfield(L, &b, i);\n    luaL_addlstring(&b, sep, lsep);\n  }\n  if (i == last)  /* add last value (if 
interval was not empty) */\n    addfield(L, &b, i);\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\n\n/*\n** {======================================================\n** Quicksort\n** (based on `Algorithms in MODULA-3', Robert Sedgewick;\n**  Addison-Wesley, 1993.)\n*/\n\n\nstatic void set2 (lua_State *L, int i, int j) {\n  lua_rawseti(L, 1, i);\n  lua_rawseti(L, 1, j);\n}\n\nstatic int sort_comp (lua_State *L, int a, int b) {\n  if (!lua_isnil(L, 2)) {  /* function? */\n    int res;\n    lua_pushvalue(L, 2);\n    lua_pushvalue(L, a-1);  /* -1 to compensate function */\n    lua_pushvalue(L, b-2);  /* -2 to compensate function and `a' */\n    lua_call(L, 2, 1);\n    res = lua_toboolean(L, -1);\n    lua_pop(L, 1);\n    return res;\n  }\n  else  /* a < b? */\n    return lua_lessthan(L, a, b);\n}\n\nstatic void auxsort (lua_State *L, int l, int u) {\n  while (l < u) {  /* for tail recursion */\n    int i, j;\n    /* sort elements a[l], a[(l+u)/2] and a[u] */\n    lua_rawgeti(L, 1, l);\n    lua_rawgeti(L, 1, u);\n    if (sort_comp(L, -1, -2))  /* a[u] < a[l]? */\n      set2(L, l, u);  /* swap a[l] - a[u] */\n    else\n      lua_pop(L, 2);\n    if (u-l == 1) break;  /* only 2 elements */\n    i = (l+u)/2;\n    lua_rawgeti(L, 1, i);\n    lua_rawgeti(L, 1, l);\n    if (sort_comp(L, -2, -1))  /* a[i]<a[l]? */\n      set2(L, i, l);\n    else {\n      lua_pop(L, 1);  /* remove a[l] */\n      lua_rawgeti(L, 1, u);\n      if (sort_comp(L, -1, -2))  /* a[u]<a[i]? 
*/\n        set2(L, i, u);\n      else\n        lua_pop(L, 2);\n    }\n    if (u-l == 2) break;  /* only 3 elements */\n    lua_rawgeti(L, 1, i);  /* Pivot */\n    lua_pushvalue(L, -1);\n    lua_rawgeti(L, 1, u-1);\n    set2(L, i, u-1);\n    /* a[l] <= P == a[u-1] <= a[u], only need to sort from l+1 to u-2 */\n    i = l; j = u-1;\n    for (;;) {  /* invariant: a[l..i] <= P <= a[j..u] */\n      /* repeat ++i until a[i] >= P */\n      while (lua_rawgeti(L, 1, ++i), sort_comp(L, -1, -2)) {\n        if (i>u) luaL_error(L, \"invalid order function for sorting\");\n        lua_pop(L, 1);  /* remove a[i] */\n      }\n      /* repeat --j until a[j] <= P */\n      while (lua_rawgeti(L, 1, --j), sort_comp(L, -3, -1)) {\n        if (j<l) luaL_error(L, \"invalid order function for sorting\");\n        lua_pop(L, 1);  /* remove a[j] */\n      }\n      if (j<i) {\n        lua_pop(L, 3);  /* pop pivot, a[i], a[j] */\n        break;\n      }\n      set2(L, i, j);\n    }\n    lua_rawgeti(L, 1, u-1);\n    lua_rawgeti(L, 1, i);\n    set2(L, u-1, i);  /* swap pivot (a[u-1]) with a[i] */\n    /* a[l..i-1] <= a[i] == P <= a[i+1..u] */\n    /* adjust so that smaller half is in [j..i] and larger one in [l..u] */\n    if (i-l < u-i) {\n      j=l; i=i-1; l=i+2;\n    }\n    else {\n      j=i+1; i=u; u=j-2;\n    }\n    auxsort(L, j, i);  /* call recursively the smaller one */\n  }  /* repeat the routine for the larger one */\n}\n\nstatic int sort (lua_State *L) {\n  int n = aux_getn(L, 1);\n  luaL_checkstack(L, 40, \"\");  /* assume array is smaller than 2^40 */\n  if (!lua_isnoneornil(L, 2))  /* is there a 2nd argument? 
*/\n    luaL_checktype(L, 2, LUA_TFUNCTION);\n  lua_settop(L, 2);  /* make sure there is two arguments */\n  auxsort(L, 1, n);\n  return 0;\n}\n\n/* }====================================================== */\n\n\nstatic const luaL_Reg tab_funcs[] = {\n  {\"concat\", tconcat},\n  {\"foreach\", foreach},\n  {\"foreachi\", foreachi},\n  {\"getn\", getn},\n  {\"maxn\", maxn},\n  {\"insert\", tinsert},\n  {\"remove\", tremove},\n  {\"setn\", setn},\n  {\"sort\", sort},\n  {NULL, NULL}\n};\n\n\nLUALIB_API int luaopen_table (lua_State *L) {\n  luaL_register(L, LUA_TABLIBNAME, tab_funcs);\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/ltm.c",
    "content": "/*\n** $Id: ltm.c,v 2.8.1.1 2007/12/27 13:02:25 roberto Exp $\n** Tag methods\n** See Copyright Notice in lua.h\n*/\n\n\n#include <string.h>\n\n#define ltm_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"lobject.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n\n\n\nconst char *const luaT_typenames[] = {\n  \"nil\", \"boolean\", \"userdata\", \"number\",\n  \"string\", \"table\", \"function\", \"userdata\", \"thread\",\n  \"proto\", \"upval\"\n};\n\n\nvoid luaT_init (lua_State *L) {\n  static const char *const luaT_eventname[] = {  /* ORDER TM */\n    \"__index\", \"__newindex\",\n    \"__gc\", \"__mode\", \"__eq\",\n    \"__add\", \"__sub\", \"__mul\", \"__div\", \"__mod\",\n    \"__pow\", \"__unm\", \"__len\", \"__lt\", \"__le\",\n    \"__concat\", \"__call\"\n  };\n  int i;\n  for (i=0; i<TM_N; i++) {\n    G(L)->tmname[i] = luaS_new(L, luaT_eventname[i]);\n    luaS_fix(G(L)->tmname[i]);  /* never collect these names */\n  }\n}\n\n\n/*\n** function to be used with macro \"fasttm\": optimized for absence of\n** tag methods\n*/\nconst TValue *luaT_gettm (Table *events, TMS event, TString *ename) {\n  const TValue *tm = luaH_getstr(events, ename);\n  lua_assert(event <= TM_EQ);\n  if (ttisnil(tm)) {  /* no tag method? */\n    events->flags |= cast_byte(1u<<event);  /* cache this fact */\n    return NULL;\n  }\n  else return tm;\n}\n\n\nconst TValue *luaT_gettmbyobj (lua_State *L, const TValue *o, TMS event) {\n  Table *mt;\n  switch (ttype(o)) {\n    case LUA_TTABLE:\n      mt = hvalue(o)->metatable;\n      break;\n    case LUA_TUSERDATA:\n      mt = uvalue(o)->metatable;\n      break;\n    default:\n      mt = G(L)->mt[ttype(o)];\n  }\n  return (mt ? luaH_getstr(mt, G(L)->tmname[event]) : luaO_nilobject);\n}\n\n"
  },
  {
    "path": "deps/lua/src/ltm.h",
    "content": "/*\n** $Id: ltm.h,v 2.6.1.1 2007/12/27 13:02:25 roberto Exp $\n** Tag methods\n** See Copyright Notice in lua.h\n*/\n\n#ifndef ltm_h\n#define ltm_h\n\n\n#include \"lobject.h\"\n\n\n/*\n* WARNING: if you change the order of this enumeration,\n* grep \"ORDER TM\"\n*/\ntypedef enum {\n  TM_INDEX,\n  TM_NEWINDEX,\n  TM_GC,\n  TM_MODE,\n  TM_EQ,  /* last tag method with `fast' access */\n  TM_ADD,\n  TM_SUB,\n  TM_MUL,\n  TM_DIV,\n  TM_MOD,\n  TM_POW,\n  TM_UNM,\n  TM_LEN,\n  TM_LT,\n  TM_LE,\n  TM_CONCAT,\n  TM_CALL,\n  TM_N\t\t/* number of elements in the enum */\n} TMS;\n\n\n\n#define gfasttm(g,et,e) ((et) == NULL ? NULL : \\\n  ((et)->flags & (1u<<(e))) ? NULL : luaT_gettm(et, e, (g)->tmname[e]))\n\n#define fasttm(l,et,e)\tgfasttm(G(l), et, e)\n\nLUAI_DATA const char *const luaT_typenames[];\n\n\nLUAI_FUNC const TValue *luaT_gettm (Table *events, TMS event, TString *ename);\nLUAI_FUNC const TValue *luaT_gettmbyobj (lua_State *L, const TValue *o,\n                                                       TMS event);\nLUAI_FUNC void luaT_init (lua_State *L);\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lua.c",
    "content": "/*\n** $Id: lua.c,v 1.160.1.2 2007/12/28 15:32:23 roberto Exp $\n** Lua stand-alone interpreter\n** See Copyright Notice in lua.h\n*/\n\n\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define lua_c\n\n#include \"lua.h\"\n\n#include \"lauxlib.h\"\n#include \"lualib.h\"\n\n\n\nstatic lua_State *globalL = NULL;\n\nstatic const char *progname = LUA_PROGNAME;\n\n\n\nstatic void lstop (lua_State *L, lua_Debug *ar) {\n  (void)ar;  /* unused arg. */\n  lua_sethook(L, NULL, 0, 0);\n  luaL_error(L, \"interrupted!\");\n}\n\n\nstatic void laction (int i) {\n  signal(i, SIG_DFL); /* if another SIGINT happens before lstop,\n                              terminate process (default action) */\n  lua_sethook(globalL, lstop, LUA_MASKCALL | LUA_MASKRET | LUA_MASKCOUNT, 1);\n}\n\n\nstatic void print_usage (void) {\n  fprintf(stderr,\n  \"usage: %s [options] [script [args]].\\n\"\n  \"Available options are:\\n\"\n  \"  -e stat  execute string \" LUA_QL(\"stat\") \"\\n\"\n  \"  -l name  require library \" LUA_QL(\"name\") \"\\n\"\n  \"  -i       enter interactive mode after executing \" LUA_QL(\"script\") \"\\n\"\n  \"  -v       show version information\\n\"\n  \"  --       stop handling options\\n\"\n  \"  -        execute stdin and stop handling options\\n\"\n  ,\n  progname);\n  fflush(stderr);\n}\n\n\nstatic void l_message (const char *pname, const char *msg) {\n  if (pname) fprintf(stderr, \"%s: \", pname);\n  fprintf(stderr, \"%s\\n\", msg);\n  fflush(stderr);\n}\n\n\nstatic int report (lua_State *L, int status) {\n  if (status && !lua_isnil(L, -1)) {\n    const char *msg = lua_tostring(L, -1);\n    if (msg == NULL) msg = \"(error object is not a string)\";\n    l_message(progname, msg);\n    lua_pop(L, 1);\n  }\n  return status;\n}\n\n\nstatic int traceback (lua_State *L) {\n  if (!lua_isstring(L, 1))  /* 'message' not a string? 
*/\n    return 1;  /* keep it intact */\n  lua_getfield(L, LUA_GLOBALSINDEX, \"debug\");\n  if (!lua_istable(L, -1)) {\n    lua_pop(L, 1);\n    return 1;\n  }\n  lua_getfield(L, -1, \"traceback\");\n  if (!lua_isfunction(L, -1)) {\n    lua_pop(L, 2);\n    return 1;\n  }\n  lua_pushvalue(L, 1);  /* pass error message */\n  lua_pushinteger(L, 2);  /* skip this function and traceback */\n  lua_call(L, 2, 1);  /* call debug.traceback */\n  return 1;\n}\n\n\nstatic int docall (lua_State *L, int narg, int clear) {\n  int status;\n  int base = lua_gettop(L) - narg;  /* function index */\n  lua_pushcfunction(L, traceback);  /* push traceback function */\n  lua_insert(L, base);  /* put it under chunk and args */\n  signal(SIGINT, laction);\n  status = lua_pcall(L, narg, (clear ? 0 : LUA_MULTRET), base);\n  signal(SIGINT, SIG_DFL);\n  lua_remove(L, base);  /* remove traceback function */\n  /* force a complete garbage collection in case of errors */\n  if (status != 0) lua_gc(L, LUA_GCCOLLECT, 0);\n  return status;\n}\n\n\nstatic void print_version (void) {\n  l_message(NULL, LUA_RELEASE \"  \" LUA_COPYRIGHT);\n}\n\n\nstatic int getargs (lua_State *L, char **argv, int n) {\n  int narg;\n  int i;\n  int argc = 0;\n  while (argv[argc]) argc++;  /* count total number of arguments */\n  narg = argc - (n + 1);  /* number of arguments to the script */\n  luaL_checkstack(L, narg + 3, \"too many arguments to script\");\n  for (i=n+1; i < argc; i++)\n    lua_pushstring(L, argv[i]);\n  lua_createtable(L, narg, n + 1);\n  for (i=0; i < argc; i++) {\n    lua_pushstring(L, argv[i]);\n    lua_rawseti(L, -2, i - n);\n  }\n  return narg;\n}\n\n\nstatic int dofile (lua_State *L, const char *name) {\n  int status = luaL_loadfile(L, name) || docall(L, 0, 1);\n  return report(L, status);\n}\n\n\nstatic int dostring (lua_State *L, const char *s, const char *name) {\n  int status = luaL_loadbuffer(L, s, strlen(s), name) || docall(L, 0, 1);\n  return report(L, status);\n}\n\n\nstatic int dolibrary 
(lua_State *L, const char *name) {\n  lua_getglobal(L, \"require\");\n  lua_pushstring(L, name);\n  return report(L, docall(L, 1, 1));\n}\n\n\nstatic const char *get_prompt (lua_State *L, int firstline) {\n  const char *p;\n  lua_getfield(L, LUA_GLOBALSINDEX, firstline ? \"_PROMPT\" : \"_PROMPT2\");\n  p = lua_tostring(L, -1);\n  if (p == NULL) p = (firstline ? LUA_PROMPT : LUA_PROMPT2);\n  lua_pop(L, 1);  /* remove global */\n  return p;\n}\n\n\nstatic int incomplete (lua_State *L, int status) {\n  if (status == LUA_ERRSYNTAX) {\n    size_t lmsg;\n    const char *msg = lua_tolstring(L, -1, &lmsg);\n    const char *tp = msg + lmsg - (sizeof(LUA_QL(\"<eof>\")) - 1);\n    if (strstr(msg, LUA_QL(\"<eof>\")) == tp) {\n      lua_pop(L, 1);\n      return 1;\n    }\n  }\n  return 0;  /* else... */\n}\n\n\nstatic int pushline (lua_State *L, int firstline) {\n  char buffer[LUA_MAXINPUT];\n  char *b = buffer;\n  size_t l;\n  const char *prmt = get_prompt(L, firstline);\n  if (lua_readline(L, b, prmt) == 0)\n    return 0;  /* no input */\n  l = strlen(b);\n  if (l > 0 && b[l-1] == '\\n')  /* line ends with newline? */\n    b[l-1] = '\\0';  /* remove it */\n  if (firstline && b[0] == '=')  /* first line starts with `=' ? */\n    lua_pushfstring(L, \"return %s\", b+1);  /* change it to `return' */\n  else\n    lua_pushstring(L, b);\n  lua_freeline(L, b);\n  return 1;\n}\n\n\nstatic int loadline (lua_State *L) {\n  int status;\n  lua_settop(L, 0);\n  if (!pushline(L, 1))\n    return -1;  /* no input */\n  for (;;) {  /* repeat until gets a complete line */\n    status = luaL_loadbuffer(L, lua_tostring(L, 1), lua_strlen(L, 1), \"=stdin\");\n    if (!incomplete(L, status)) break;  /* cannot try to add lines? */\n    if (!pushline(L, 0))  /* no more input? */\n      return -1;\n    lua_pushliteral(L, \"\\n\");  /* add a new line... 
*/\n    lua_insert(L, -2);  /* ...between the two lines */\n    lua_concat(L, 3);  /* join them */\n  }\n  lua_saveline(L, 1);\n  lua_remove(L, 1);  /* remove line */\n  return status;\n}\n\n\nstatic void dotty (lua_State *L) {\n  int status;\n  const char *oldprogname = progname;\n  progname = NULL;\n  while ((status = loadline(L)) != -1) {\n    if (status == 0) status = docall(L, 0, 0);\n    report(L, status);\n    if (status == 0 && lua_gettop(L) > 0) {  /* any result to print? */\n      lua_getglobal(L, \"print\");\n      lua_insert(L, 1);\n      if (lua_pcall(L, lua_gettop(L)-1, 0, 0) != 0)\n        l_message(progname, lua_pushfstring(L,\n                               \"error calling \" LUA_QL(\"print\") \" (%s)\",\n                               lua_tostring(L, -1)));\n    }\n  }\n  lua_settop(L, 0);  /* clear stack */\n  fputs(\"\\n\", stdout);\n  fflush(stdout);\n  progname = oldprogname;\n}\n\n\nstatic int handle_script (lua_State *L, char **argv, int n) {\n  int status;\n  const char *fname;\n  int narg = getargs(L, argv, n);  /* collect arguments */\n  lua_setglobal(L, \"arg\");\n  fname = argv[n];\n  if (strcmp(fname, \"-\") == 0 && strcmp(argv[n-1], \"--\") != 0) \n    fname = NULL;  /* stdin */\n  status = luaL_loadfile(L, fname);\n  lua_insert(L, -(narg+1));\n  if (status == 0)\n    status = docall(L, narg, 0);\n  else\n    lua_pop(L, narg);      \n  return report(L, status);\n}\n\n\n/* check that argument has no extra characters at the end */\n#define notail(x)\t{if ((x)[2] != '\\0') return -1;}\n\n\nstatic int collectargs (char **argv, int *pi, int *pv, int *pe) {\n  int i;\n  for (i = 1; argv[i] != NULL; i++) {\n    if (argv[i][0] != '-')  /* not an option? */\n        return i;\n    switch (argv[i][1]) {  /* option */\n      case '-':\n        notail(argv[i]);\n        return (argv[i+1] != NULL ? 
i+1 : 0);\n      case '\\0':\n        return i;\n      case 'i':\n        notail(argv[i]);\n        *pi = 1;  /* go through */\n      case 'v':\n        notail(argv[i]);\n        *pv = 1;\n        break;\n      case 'e':\n        *pe = 1;  /* go through */\n      case 'l':\n        if (argv[i][2] == '\\0') {\n          i++;\n          if (argv[i] == NULL) return -1;\n        }\n        break;\n      default: return -1;  /* invalid option */\n    }\n  }\n  return 0;\n}\n\n\nstatic int runargs (lua_State *L, char **argv, int n) {\n  int i;\n  for (i = 1; i < n; i++) {\n    if (argv[i] == NULL) continue;\n    lua_assert(argv[i][0] == '-');\n    switch (argv[i][1]) {  /* option */\n      case 'e': {\n        const char *chunk = argv[i] + 2;\n        if (*chunk == '\\0') chunk = argv[++i];\n        lua_assert(chunk != NULL);\n        if (dostring(L, chunk, \"=(command line)\") != 0)\n          return 1;\n        break;\n      }\n      case 'l': {\n        const char *filename = argv[i] + 2;\n        if (*filename == '\\0') filename = argv[++i];\n        lua_assert(filename != NULL);\n        if (dolibrary(L, filename))\n          return 1;  /* stop if file fails */\n        break;\n      }\n      default: break;\n    }\n  }\n  return 0;\n}\n\n\nstatic int handle_luainit (lua_State *L) {\n  const char *init = getenv(LUA_INIT);\n  if (init == NULL) return 0;  /* status OK */\n  else if (init[0] == '@')\n    return dofile(L, init+1);\n  else\n    return dostring(L, init, \"=\" LUA_INIT);\n}\n\n\nstruct Smain {\n  int argc;\n  char **argv;\n  int status;\n};\n\n\nstatic int pmain (lua_State *L) {\n  struct Smain *s = (struct Smain *)lua_touserdata(L, 1);\n  char **argv = s->argv;\n  int script;\n  int has_i = 0, has_v = 0, has_e = 0;\n  globalL = L;\n  if (argv[0] && argv[0][0]) progname = argv[0];\n  lua_gc(L, LUA_GCSTOP, 0);  /* stop collector during initialization */\n  luaL_openlibs(L);  /* open libraries */\n  lua_gc(L, LUA_GCRESTART, 0);\n  s->status = 
handle_luainit(L);\n  if (s->status != 0) return 0;\n  script = collectargs(argv, &has_i, &has_v, &has_e);\n  if (script < 0) {  /* invalid args? */\n    print_usage();\n    s->status = 1;\n    return 0;\n  }\n  if (has_v) print_version();\n  s->status = runargs(L, argv, (script > 0) ? script : s->argc);\n  if (s->status != 0) return 0;\n  if (script)\n    s->status = handle_script(L, argv, script);\n  if (s->status != 0) return 0;\n  if (has_i)\n    dotty(L);\n  else if (script == 0 && !has_e && !has_v) {\n    if (lua_stdin_is_tty()) {\n      print_version();\n      dotty(L);\n    }\n    else dofile(L, NULL);  /* executes stdin as a file */\n  }\n  return 0;\n}\n\n\nint main (int argc, char **argv) {\n  int status;\n  struct Smain s;\n  lua_State *L = lua_open();  /* create state */\n  if (L == NULL) {\n    l_message(argv[0], \"cannot create state: not enough memory\");\n    return EXIT_FAILURE;\n  }\n  s.argc = argc;\n  s.argv = argv;\n  status = lua_cpcall(L, &pmain, &s);\n  report(L, status);\n  lua_close(L);\n  return (status || s.status) ? EXIT_FAILURE : EXIT_SUCCESS;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lua.h",
    "content": "/*\n** $Id: lua.h,v 1.218.1.7 2012/01/13 20:36:20 roberto Exp $\n** Lua - An Extensible Extension Language\n** Lua.org, PUC-Rio, Brazil (http://www.lua.org)\n** See Copyright Notice at the end of this file\n*/\n\n\n#ifndef lua_h\n#define lua_h\n\n#include <stdarg.h>\n#include <stddef.h>\n\n\n#include \"luaconf.h\"\n\n\n#define LUA_VERSION\t\"Lua 5.1\"\n#define LUA_RELEASE\t\"Lua 5.1.5\"\n#define LUA_VERSION_NUM\t501\n#define LUA_COPYRIGHT\t\"Copyright (C) 1994-2012 Lua.org, PUC-Rio\"\n#define LUA_AUTHORS \t\"R. Ierusalimschy, L. H. de Figueiredo & W. Celes\"\n\n\n/* mark for precompiled code (`<esc>Lua') */\n#define\tLUA_SIGNATURE\t\"\\033Lua\"\n\n/* option for multiple returns in `lua_pcall' and `lua_call' */\n#define LUA_MULTRET\t(-1)\n\n\n/*\n** pseudo-indices\n*/\n#define LUA_REGISTRYINDEX\t(-10000)\n#define LUA_ENVIRONINDEX\t(-10001)\n#define LUA_GLOBALSINDEX\t(-10002)\n#define lua_upvalueindex(i)\t(LUA_GLOBALSINDEX-(i))\n\n\n/* thread status; 0 is OK */\n#define LUA_YIELD\t1\n#define LUA_ERRRUN\t2\n#define LUA_ERRSYNTAX\t3\n#define LUA_ERRMEM\t4\n#define LUA_ERRERR\t5\n\n\ntypedef struct lua_State lua_State;\n\ntypedef int (*lua_CFunction) (lua_State *L);\n\n\n/*\n** functions that read/write blocks when loading/dumping Lua chunks\n*/\ntypedef const char * (*lua_Reader) (lua_State *L, void *ud, size_t *sz);\n\ntypedef int (*lua_Writer) (lua_State *L, const void* p, size_t sz, void* ud);\n\n\n/*\n** prototype for memory-allocation functions\n*/\ntypedef void * (*lua_Alloc) (void *ud, void *ptr, size_t osize, size_t nsize);\n\n\n/*\n** basic types\n*/\n#define LUA_TNONE\t\t(-1)\n\n#define LUA_TNIL\t\t0\n#define LUA_TBOOLEAN\t\t1\n#define LUA_TLIGHTUSERDATA\t2\n#define LUA_TNUMBER\t\t3\n#define LUA_TSTRING\t\t4\n#define LUA_TTABLE\t\t5\n#define LUA_TFUNCTION\t\t6\n#define LUA_TUSERDATA\t\t7\n#define LUA_TTHREAD\t\t8\n\n\n\n/* minimum Lua stack available to a C function */\n#define LUA_MINSTACK\t20\n\n\n/*\n** generic extra include file\n*/\n#if 
defined(LUA_USER_H)\n#include LUA_USER_H\n#endif\n\n\n/* type of numbers in Lua */\ntypedef LUA_NUMBER lua_Number;\n\n\n/* type for integer functions */\ntypedef LUA_INTEGER lua_Integer;\n\n\n\n/*\n** state manipulation\n*/\nLUA_API lua_State *(lua_newstate) (lua_Alloc f, void *ud);\nLUA_API void       (lua_close) (lua_State *L);\nLUA_API lua_State *(lua_newthread) (lua_State *L);\n\nLUA_API lua_CFunction (lua_atpanic) (lua_State *L, lua_CFunction panicf);\n\n\n/*\n** basic stack manipulation\n*/\nLUA_API int   (lua_gettop) (lua_State *L);\nLUA_API void  (lua_settop) (lua_State *L, int idx);\nLUA_API void  (lua_pushvalue) (lua_State *L, int idx);\nLUA_API void  (lua_remove) (lua_State *L, int idx);\nLUA_API void  (lua_insert) (lua_State *L, int idx);\nLUA_API void  (lua_replace) (lua_State *L, int idx);\nLUA_API int   (lua_checkstack) (lua_State *L, int sz);\n\nLUA_API void  (lua_xmove) (lua_State *from, lua_State *to, int n);\n\n\n/*\n** access functions (stack -> C)\n*/\n\nLUA_API int             (lua_isnumber) (lua_State *L, int idx);\nLUA_API int             (lua_isstring) (lua_State *L, int idx);\nLUA_API int             (lua_iscfunction) (lua_State *L, int idx);\nLUA_API int             (lua_isuserdata) (lua_State *L, int idx);\nLUA_API int             (lua_type) (lua_State *L, int idx);\nLUA_API const char     *(lua_typename) (lua_State *L, int tp);\n\nLUA_API int            (lua_equal) (lua_State *L, int idx1, int idx2);\nLUA_API int            (lua_rawequal) (lua_State *L, int idx1, int idx2);\nLUA_API int            (lua_lessthan) (lua_State *L, int idx1, int idx2);\n\nLUA_API lua_Number      (lua_tonumber) (lua_State *L, int idx);\nLUA_API lua_Integer     (lua_tointeger) (lua_State *L, int idx);\nLUA_API int             (lua_toboolean) (lua_State *L, int idx);\nLUA_API const char     *(lua_tolstring) (lua_State *L, int idx, size_t *len);\nLUA_API size_t          (lua_objlen) (lua_State *L, int idx);\nLUA_API lua_CFunction   (lua_tocfunction) (lua_State 
*L, int idx);\nLUA_API void\t       *(lua_touserdata) (lua_State *L, int idx);\nLUA_API lua_State      *(lua_tothread) (lua_State *L, int idx);\nLUA_API const void     *(lua_topointer) (lua_State *L, int idx);\n\n\n/*\n** push functions (C -> stack)\n*/\nLUA_API void  (lua_pushnil) (lua_State *L);\nLUA_API void  (lua_pushnumber) (lua_State *L, lua_Number n);\nLUA_API void  (lua_pushinteger) (lua_State *L, lua_Integer n);\nLUA_API void  (lua_pushlstring) (lua_State *L, const char *s, size_t l);\nLUA_API void  (lua_pushstring) (lua_State *L, const char *s);\nLUA_API const char *(lua_pushvfstring) (lua_State *L, const char *fmt,\n                                                      va_list argp);\nLUA_API const char *(lua_pushfstring) (lua_State *L, const char *fmt, ...);\nLUA_API void  (lua_pushcclosure) (lua_State *L, lua_CFunction fn, int n);\nLUA_API void  (lua_pushboolean) (lua_State *L, int b);\nLUA_API void  (lua_pushlightuserdata) (lua_State *L, void *p);\nLUA_API int   (lua_pushthread) (lua_State *L);\n\n\n/*\n** get functions (Lua -> stack)\n*/\nLUA_API void  (lua_gettable) (lua_State *L, int idx);\nLUA_API void  (lua_getfield) (lua_State *L, int idx, const char *k);\nLUA_API void  (lua_rawget) (lua_State *L, int idx);\nLUA_API void  (lua_rawgeti) (lua_State *L, int idx, int n);\nLUA_API void  (lua_createtable) (lua_State *L, int narr, int nrec);\nLUA_API void *(lua_newuserdata) (lua_State *L, size_t sz);\nLUA_API int   (lua_getmetatable) (lua_State *L, int objindex);\nLUA_API void  (lua_getfenv) (lua_State *L, int idx);\n\n\n/*\n** set functions (stack -> Lua)\n*/\nLUA_API void  (lua_settable) (lua_State *L, int idx);\nLUA_API void  (lua_setfield) (lua_State *L, int idx, const char *k);\nLUA_API void  (lua_rawset) (lua_State *L, int idx);\nLUA_API void  (lua_rawseti) (lua_State *L, int idx, int n);\nLUA_API int   (lua_setmetatable) (lua_State *L, int objindex);\nLUA_API int   (lua_setfenv) (lua_State *L, int idx);\n\n\n/*\n** `load' and `call' functions 
(load and run Lua code)\n*/\nLUA_API void  (lua_call) (lua_State *L, int nargs, int nresults);\nLUA_API int   (lua_pcall) (lua_State *L, int nargs, int nresults, int errfunc);\nLUA_API int   (lua_cpcall) (lua_State *L, lua_CFunction func, void *ud);\nLUA_API int   (lua_load) (lua_State *L, lua_Reader reader, void *dt,\n                                        const char *chunkname);\n\nLUA_API int (lua_dump) (lua_State *L, lua_Writer writer, void *data);\n\n\n/*\n** coroutine functions\n*/\nLUA_API int  (lua_yield) (lua_State *L, int nresults);\nLUA_API int  (lua_resume) (lua_State *L, int narg);\nLUA_API int  (lua_status) (lua_State *L);\n\n/*\n** garbage-collection function and options\n*/\n\n#define LUA_GCSTOP\t\t0\n#define LUA_GCRESTART\t\t1\n#define LUA_GCCOLLECT\t\t2\n#define LUA_GCCOUNT\t\t3\n#define LUA_GCCOUNTB\t\t4\n#define LUA_GCSTEP\t\t5\n#define LUA_GCSETPAUSE\t\t6\n#define LUA_GCSETSTEPMUL\t7\n\nLUA_API int (lua_gc) (lua_State *L, int what, int data);\n\n\n/*\n** miscellaneous functions\n*/\n\nLUA_API int   (lua_error) (lua_State *L);\n\nLUA_API int   (lua_next) (lua_State *L, int idx);\n\nLUA_API void  (lua_concat) (lua_State *L, int n);\n\nLUA_API lua_Alloc (lua_getallocf) (lua_State *L, void **ud);\nLUA_API void lua_setallocf (lua_State *L, lua_Alloc f, void *ud);\n\n\n\n/* \n** ===============================================================\n** some useful macros\n** ===============================================================\n*/\n\n#define lua_pop(L,n)\t\tlua_settop(L, -(n)-1)\n\n#define lua_newtable(L)\t\tlua_createtable(L, 0, 0)\n\n#define lua_register(L,n,f) (lua_pushcfunction(L, (f)), lua_setglobal(L, (n)))\n\n#define lua_pushcfunction(L,f)\tlua_pushcclosure(L, (f), 0)\n\n#define lua_strlen(L,i)\t\tlua_objlen(L, (i))\n\n#define lua_isfunction(L,n)\t(lua_type(L, (n)) == LUA_TFUNCTION)\n#define lua_istable(L,n)\t(lua_type(L, (n)) == LUA_TTABLE)\n#define lua_islightuserdata(L,n)\t(lua_type(L, (n)) == LUA_TLIGHTUSERDATA)\n#define 
lua_isnil(L,n)\t\t(lua_type(L, (n)) == LUA_TNIL)\n#define lua_isboolean(L,n)\t(lua_type(L, (n)) == LUA_TBOOLEAN)\n#define lua_isthread(L,n)\t(lua_type(L, (n)) == LUA_TTHREAD)\n#define lua_isnone(L,n)\t\t(lua_type(L, (n)) == LUA_TNONE)\n#define lua_isnoneornil(L, n)\t(lua_type(L, (n)) <= 0)\n\n#define lua_pushliteral(L, s)\t\\\n\tlua_pushlstring(L, \"\" s, (sizeof(s)/sizeof(char))-1)\n\n#define lua_setglobal(L,s)\tlua_setfield(L, LUA_GLOBALSINDEX, (s))\n#define lua_getglobal(L,s)\tlua_getfield(L, LUA_GLOBALSINDEX, (s))\n\n#define lua_tostring(L,i)\tlua_tolstring(L, (i), NULL)\n\n\n\n/*\n** compatibility macros and functions\n*/\n\n#define lua_open()\tluaL_newstate()\n\n#define lua_getregistry(L)\tlua_pushvalue(L, LUA_REGISTRYINDEX)\n\n#define lua_getgccount(L)\tlua_gc(L, LUA_GCCOUNT, 0)\n\n#define lua_Chunkreader\t\tlua_Reader\n#define lua_Chunkwriter\t\tlua_Writer\n\n\n/* hack */\nLUA_API void lua_setlevel\t(lua_State *from, lua_State *to);\n\n\n/*\n** {======================================================================\n** Debug API\n** =======================================================================\n*/\n\n\n/*\n** Event codes\n*/\n#define LUA_HOOKCALL\t0\n#define LUA_HOOKRET\t1\n#define LUA_HOOKLINE\t2\n#define LUA_HOOKCOUNT\t3\n#define LUA_HOOKTAILRET 4\n\n\n/*\n** Event masks\n*/\n#define LUA_MASKCALL\t(1 << LUA_HOOKCALL)\n#define LUA_MASKRET\t(1 << LUA_HOOKRET)\n#define LUA_MASKLINE\t(1 << LUA_HOOKLINE)\n#define LUA_MASKCOUNT\t(1 << LUA_HOOKCOUNT)\n\ntypedef struct lua_Debug lua_Debug;  /* activation record */\n\n\n/* Functions to be called by the debuger in specific events */\ntypedef void (*lua_Hook) (lua_State *L, lua_Debug *ar);\n\n\nLUA_API int lua_getstack (lua_State *L, int level, lua_Debug *ar);\nLUA_API int lua_getinfo (lua_State *L, const char *what, lua_Debug *ar);\nLUA_API const char *lua_getlocal (lua_State *L, const lua_Debug *ar, int n);\nLUA_API const char *lua_setlocal (lua_State *L, const lua_Debug *ar, int n);\nLUA_API const char 
*lua_getupvalue (lua_State *L, int funcindex, int n);\nLUA_API const char *lua_setupvalue (lua_State *L, int funcindex, int n);\n\nLUA_API int lua_sethook (lua_State *L, lua_Hook func, int mask, int count);\nLUA_API lua_Hook lua_gethook (lua_State *L);\nLUA_API int lua_gethookmask (lua_State *L);\nLUA_API int lua_gethookcount (lua_State *L);\n\n\nstruct lua_Debug {\n  int event;\n  const char *name;\t/* (n) */\n  const char *namewhat;\t/* (n) `global', `local', `field', `method' */\n  const char *what;\t/* (S) `Lua', `C', `main', `tail' */\n  const char *source;\t/* (S) */\n  int currentline;\t/* (l) */\n  int nups;\t\t/* (u) number of upvalues */\n  int linedefined;\t/* (S) */\n  int lastlinedefined;\t/* (S) */\n  char short_src[LUA_IDSIZE]; /* (S) */\n  /* private part */\n  int i_ci;  /* active function */\n};\n\nLUA_API void lua_enablereadonlytable (lua_State *L, int index, int enabled);\nLUA_API int lua_isreadonlytable (lua_State *L, int index);\n\n/* }====================================================================== */\n\n\n/******************************************************************************\n* Copyright (C) 1994-2012 Lua.org, PUC-Rio.  
All rights reserved.\n*\n* Permission is hereby granted, free of charge, to any person obtaining\n* a copy of this software and associated documentation files (the\n* \"Software\"), to deal in the Software without restriction, including\n* without limitation the rights to use, copy, modify, merge, publish,\n* distribute, sublicense, and/or sell copies of the Software, and to\n* permit persons to whom the Software is furnished to do so, subject to\n* the following conditions:\n*\n* The above copyright notice and this permission notice shall be\n* included in all copies or substantial portions of the Software.\n*\n* THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n******************************************************************************/\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lua_bit.c",
    "content": "/*\n** Lua BitOp -- a bit operations library for Lua 5.1/5.2.\n** http://bitop.luajit.org/\n**\n** Copyright (C) 2008-2012 Mike Pall. All rights reserved.\n**\n** Permission is hereby granted, free of charge, to any person obtaining\n** a copy of this software and associated documentation files (the\n** \"Software\"), to deal in the Software without restriction, including\n** without limitation the rights to use, copy, modify, merge, publish,\n** distribute, sublicense, and/or sell copies of the Software, and to\n** permit persons to whom the Software is furnished to do so, subject to\n** the following conditions:\n**\n** The above copyright notice and this permission notice shall be\n** included in all copies or substantial portions of the Software.\n**\n** THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n** SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n**\n** [ MIT license: http://www.opensource.org/licenses/mit-license.php ]\n*/\n\n#define LUA_BITOP_VERSION\t\"1.0.2\"\n\n#define LUA_LIB\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\n#ifdef _MSC_VER\n/* MSVC is stuck in the last century and doesn't have C99's stdint.h. */\ntypedef __int32 int32_t;\ntypedef unsigned __int32 uint32_t;\ntypedef unsigned __int64 uint64_t;\n#else\n#include <stdint.h>\n#endif\n\ntypedef int32_t SBits;\ntypedef uint32_t UBits;\n\ntypedef union {\n  lua_Number n;\n#ifdef LUA_NUMBER_DOUBLE\n  uint64_t b;\n#else\n  UBits b;\n#endif\n} BitNum;\n\n/* Convert argument to bit type. 
*/\nstatic UBits barg(lua_State *L, int idx)\n{\n  BitNum bn;\n  UBits b;\n#if LUA_VERSION_NUM < 502\n  bn.n = lua_tonumber(L, idx);\n#else\n  bn.n = luaL_checknumber(L, idx);\n#endif\n#if defined(LUA_NUMBER_DOUBLE)\n  bn.n += 6755399441055744.0;  /* 2^52+2^51 */\n#ifdef SWAPPED_DOUBLE\n  b = (UBits)(bn.b >> 32);\n#else\n  b = (UBits)bn.b;\n#endif\n#elif defined(LUA_NUMBER_INT) || defined(LUA_NUMBER_LONG) || \\\n      defined(LUA_NUMBER_LONGLONG) || defined(LUA_NUMBER_LONG_LONG) || \\\n      defined(LUA_NUMBER_LLONG)\n  if (sizeof(UBits) == sizeof(lua_Number))\n    b = bn.b;\n  else\n    b = (UBits)(SBits)bn.n;\n#elif defined(LUA_NUMBER_FLOAT)\n#error \"A 'float' lua_Number type is incompatible with this library\"\n#else\n#error \"Unknown number type, check LUA_NUMBER_* in luaconf.h\"\n#endif\n#if LUA_VERSION_NUM < 502\n  if (b == 0 && !lua_isnumber(L, idx)) {\n    luaL_typerror(L, idx, \"number\");\n  }\n#endif\n  return b;\n}\n\n/* Return bit type. */\n#define BRET(b)  lua_pushnumber(L, (lua_Number)(SBits)(b)); return 1;\n\nstatic int bit_tobit(lua_State *L) { BRET(barg(L, 1)) }\nstatic int bit_bnot(lua_State *L) { BRET(~barg(L, 1)) }\n\n#define BIT_OP(func, opr) \\\n  static int func(lua_State *L) { int i; UBits b = barg(L, 1); \\\n    for (i = lua_gettop(L); i > 1; i--) b opr barg(L, i);      \\\n    BRET(b) }\nBIT_OP(bit_band, &=)\nBIT_OP(bit_bor, |=)\nBIT_OP(bit_bxor, ^=)\n\n#define bshl(b, n)  (b << n)\n#define bshr(b, n)  (b >> n)\n#define bsar(b, n)  ((SBits)b >> n)\n#define brol(b, n)  ((b << n) | (b >> (32-n)))\n#define bror(b, n)  ((b << (32-n)) | (b >> n))\n#define BIT_SH(func, fn) \\\n  static int func(lua_State *L) { \\\n    UBits b = barg(L, 1); UBits n = barg(L, 2) & 31; BRET(fn(b, n)) }\nBIT_SH(bit_lshift, bshl)\nBIT_SH(bit_rshift, bshr)\nBIT_SH(bit_arshift, bsar)\nBIT_SH(bit_rol, brol)\nBIT_SH(bit_ror, bror)\n\nstatic int bit_bswap(lua_State *L)\n{\n  UBits b = barg(L, 1);\n  b = (b >> 24) | ((b >> 8) & 0xff00) | ((b & 0xff00) << 8) | (b << 
24);\n  BRET(b)\n}\n\nstatic int bit_tohex(lua_State *L)\n{\n  UBits b = barg(L, 1);\n  SBits n = lua_isnone(L, 2) ? 8 : (SBits)barg(L, 2);\n  const char *hexdigits = \"0123456789abcdef\";\n  char buf[8];\n  int i;\n  if (n == INT32_MIN) n = INT32_MIN+1;\n  if (n < 0) { n = -n; hexdigits = \"0123456789ABCDEF\"; }\n  if (n > 8) n = 8;\n  for (i = (int)n; --i >= 0; ) { buf[i] = hexdigits[b & 15]; b >>= 4; }\n  lua_pushlstring(L, buf, (size_t)n);\n  return 1;\n}\n\nstatic const struct luaL_Reg bit_funcs[] = {\n  { \"tobit\",\tbit_tobit },\n  { \"bnot\",\tbit_bnot },\n  { \"band\",\tbit_band },\n  { \"bor\",\tbit_bor },\n  { \"bxor\",\tbit_bxor },\n  { \"lshift\",\tbit_lshift },\n  { \"rshift\",\tbit_rshift },\n  { \"arshift\",\tbit_arshift },\n  { \"rol\",\tbit_rol },\n  { \"ror\",\tbit_ror },\n  { \"bswap\",\tbit_bswap },\n  { \"tohex\",\tbit_tohex },\n  { NULL, NULL }\n};\n\n/* Signed right-shifts are implementation-defined per C89/C99.\n** But the de facto standard is arithmetic right-shifts on two's\n** complement CPUs. This behaviour is required here, so test for it.\n*/\n#define BAD_SAR\t\t(bsar(-8, 2) != (SBits)-2)\n\nLUALIB_API int luaopen_bit(lua_State *L)\n{\n  UBits b;\n  lua_pushnumber(L, (lua_Number)1437217655L);\n  b = barg(L, -1);\n  if (b != (UBits)1437217655L || BAD_SAR) {  /* Perform a simple self-test. */\n    const char *msg = \"compiled with incompatible luaconf.h\";\n#ifdef LUA_NUMBER_DOUBLE\n#ifdef _WIN32\n    if (b == (UBits)1610612736L)\n      msg = \"use D3DCREATE_FPU_PRESERVE with DirectX\";\n#endif\n    if (b == (UBits)1127743488L)\n      msg = \"not compiled with SWAPPED_DOUBLE\";\n#endif\n    if (BAD_SAR)\n      msg = \"arithmetic right-shift broken\";\n    luaL_error(L, \"bit library self-test failed (%s)\", msg);\n  }\n#if LUA_VERSION_NUM < 502\n  luaL_register(L, \"bit\", bit_funcs);\n#else\n  luaL_newlib(L, bit_funcs);\n#endif\n  return 1;\n}\n\n"
  },
  {
    "path": "deps/lua/src/lua_cjson.c",
    "content": "/* Lua CJSON - JSON support for Lua\n *\n * Copyright (c) 2010-2012  Mark Pulford <mark@kyne.com.au>\n *\n * Permission is hereby granted, free of charge, to any person obtaining\n * a copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to\n * permit persons to whom the Software is furnished to do so, subject to\n * the following conditions:\n *\n * The above copyright notice and this permission notice shall be\n * included in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n */\n\n/* Caveats:\n * - JSON \"null\" values are represented as lightuserdata since Lua\n *   tables cannot contain \"nil\". Compare with cjson.null.\n * - Invalid UTF-8 characters are not detected and will be passed\n *   untouched. If required, UTF-8 error checking should be done\n *   outside this library.\n * - Javascript comments are not part of the JSON spec, and are not\n *   currently supported.\n *\n * Note: Decoding is slower than encoding. 
Lua spends significant\n *       time (30%) managing tables when parsing JSON since it is\n *       difficult to know object/array sizes ahead of time.\n */\n\n#include <assert.h>\n#include <string.h>\n#include <math.h>\n#include <stdint.h>\n#include <limits.h>\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\n#include \"strbuf.h\"\n#include \"fpconv.h\"\n\n#include \"../../../src/solarisfixes.h\"\n\n#ifndef CJSON_MODNAME\n#define CJSON_MODNAME   \"cjson\"\n#endif\n\n#ifndef CJSON_VERSION\n#define CJSON_VERSION   \"2.1.0\"\n#endif\n\n/* Workaround for Solaris platforms missing isinf() */\n#if !defined(isinf) && (defined(USE_INTERNAL_ISINF) || defined(MISSING_ISINF))\n#define isinf(x) (!isnan(x) && isnan((x) - (x)))\n#endif\n\n#define DEFAULT_SPARSE_CONVERT 0\n#define DEFAULT_SPARSE_RATIO 2\n#define DEFAULT_SPARSE_SAFE 10\n#define DEFAULT_ENCODE_MAX_DEPTH 1000\n#define DEFAULT_DECODE_MAX_DEPTH 1000\n#define DEFAULT_ENCODE_INVALID_NUMBERS 0\n#define DEFAULT_DECODE_INVALID_NUMBERS 1\n#define DEFAULT_ENCODE_KEEP_BUFFER 1\n#define DEFAULT_ENCODE_NUMBER_PRECISION 14\n#define DEFAULT_DECODE_ARRAY_WITH_ARRAY_MT 0\n\n#ifdef DISABLE_INVALID_NUMBERS\n#undef DEFAULT_DECODE_INVALID_NUMBERS\n#define DEFAULT_DECODE_INVALID_NUMBERS 0\n#endif\n\ntypedef enum {\n    T_OBJ_BEGIN,\n    T_OBJ_END,\n    T_ARR_BEGIN,\n    T_ARR_END,\n    T_STRING,\n    T_NUMBER,\n    T_BOOLEAN,\n    T_NULL,\n    T_COLON,\n    T_COMMA,\n    T_END,\n    T_WHITESPACE,\n    T_ERROR,\n    T_UNKNOWN\n} json_token_type_t;\n\nstatic const char *json_token_type_name[] = {\n    \"T_OBJ_BEGIN\",\n    \"T_OBJ_END\",\n    \"T_ARR_BEGIN\",\n    \"T_ARR_END\",\n    \"T_STRING\",\n    \"T_NUMBER\",\n    \"T_BOOLEAN\",\n    \"T_NULL\",\n    \"T_COLON\",\n    \"T_COMMA\",\n    \"T_END\",\n    \"T_WHITESPACE\",\n    \"T_ERROR\",\n    \"T_UNKNOWN\",\n    NULL\n};\n\ntypedef struct {\n    json_token_type_t ch2token[256];\n    char escape2char[256];  /* Decoding */\n\n    /* encode_buf is only allocated and used when\n     * 
encode_keep_buffer is set */\n    strbuf_t encode_buf;\n\n    int encode_sparse_convert;\n    int encode_sparse_ratio;\n    int encode_sparse_safe;\n    int encode_max_depth;\n    int encode_invalid_numbers;     /* 2 => Encode as \"null\" */\n    int encode_number_precision;\n    int encode_keep_buffer;\n\n    int decode_invalid_numbers;\n    int decode_max_depth;\n    int decode_array_with_array_mt;\n} json_config_t;\n\ntypedef struct {\n    const char *data;\n    const char *ptr;\n    strbuf_t *tmp;    /* Temporary storage for strings */\n    json_config_t *cfg;\n    int current_depth;\n} json_parse_t;\n\ntypedef struct {\n    json_token_type_t type;\n    size_t index;\n    union {\n        const char *string;\n        double number;\n        int boolean;\n    } value;\n    size_t string_len;\n} json_token_t;\n\nstatic const char *char2escape[256] = {\n    \"\\\\u0000\", \"\\\\u0001\", \"\\\\u0002\", \"\\\\u0003\",\n    \"\\\\u0004\", \"\\\\u0005\", \"\\\\u0006\", \"\\\\u0007\",\n    \"\\\\b\", \"\\\\t\", \"\\\\n\", \"\\\\u000b\",\n    \"\\\\f\", \"\\\\r\", \"\\\\u000e\", \"\\\\u000f\",\n    \"\\\\u0010\", \"\\\\u0011\", \"\\\\u0012\", \"\\\\u0013\",\n    \"\\\\u0014\", \"\\\\u0015\", \"\\\\u0016\", \"\\\\u0017\",\n    \"\\\\u0018\", \"\\\\u0019\", \"\\\\u001a\", \"\\\\u001b\",\n    \"\\\\u001c\", \"\\\\u001d\", \"\\\\u001e\", \"\\\\u001f\",\n    NULL, NULL, \"\\\\\\\"\", NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, \"\\\\/\",\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, \"\\\\\\\\\", NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, 
NULL, NULL, NULL, \"\\\\u007f\",\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n    NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,\n};\n\n/* ===== CONFIGURATION ===== */\n\nstatic json_config_t *json_fetch_config(lua_State *l)\n{\n    json_config_t *cfg;\n\n    cfg = lua_touserdata(l, lua_upvalueindex(1));\n    if (!cfg)\n        luaL_error(l, \"BUG: Unable to fetch CJSON configuration\");\n\n    return cfg;\n}\n\n/* Ensure the correct number of arguments have been provided.\n * Pad with nil to allow other functions to simply check arg[i]\n * to find whether an argument was provided */\nstatic json_config_t *json_arg_init(lua_State *l, int args)\n{\n    luaL_argcheck(l, lua_gettop(l) <= args, args + 1,\n                  \"found too many arguments\");\n\n    while (lua_gettop(l) < args)\n        lua_pushnil(l);\n\n    return json_fetch_config(l);\n}\n\n/* Process integer options for configuration functions */\nstatic int json_integer_option(lua_State *l, int optindex, int *setting,\n                               int min, int max)\n{\n    char errmsg[64];\n    int value;\n\n    if (!lua_isnil(l, optindex)) {\n        value = luaL_checkinteger(l, optindex);\n        snprintf(errmsg, sizeof(errmsg), \"expected integer between %d 
and %d\", min, max);\n        luaL_argcheck(l, min <= value && value <= max, 1, errmsg);\n        *setting = value;\n    }\n\n    lua_pushinteger(l, *setting);\n\n    return 1;\n}\n\n/* Process enumerated arguments for a configuration function */\nstatic int json_enum_option(lua_State *l, int optindex, int *setting,\n                            const char **options, int bool_true)\n{\n    static const char *bool_options[] = { \"off\", \"on\", NULL };\n\n    if (!options) {\n        options = bool_options;\n        bool_true = 1;\n    }\n\n    if (!lua_isnil(l, optindex)) {\n        if (bool_true && lua_isboolean(l, optindex))\n            *setting = lua_toboolean(l, optindex) * bool_true;\n        else\n            *setting = luaL_checkoption(l, optindex, NULL, options);\n    }\n\n    if (bool_true && (*setting == 0 || *setting == bool_true))\n        lua_pushboolean(l, *setting);\n    else\n        lua_pushstring(l, options[*setting]);\n\n    return 1;\n}\n\n/* Configures handling of extremely sparse arrays:\n * convert: Convert extremely sparse arrays into objects? 
Otherwise error.\n * ratio: 0: always allow sparse; 1: never allow sparse; >1: use ratio\n * safe: Always use an array when the max index <= safe */\nstatic int json_cfg_encode_sparse_array(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 3);\n\n    json_enum_option(l, 1, &cfg->encode_sparse_convert, NULL, 1);\n    json_integer_option(l, 2, &cfg->encode_sparse_ratio, 0, INT_MAX);\n    json_integer_option(l, 3, &cfg->encode_sparse_safe, 0, INT_MAX);\n\n    return 3;\n}\n\n/* Configures the maximum number of nested arrays/objects allowed when\n * encoding */\nstatic int json_cfg_encode_max_depth(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    return json_integer_option(l, 1, &cfg->encode_max_depth, 1, INT_MAX);\n}\n\n/* Configures the maximum number of nested arrays/objects allowed when\n * decoding */\nstatic int json_cfg_decode_max_depth(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    return json_integer_option(l, 1, &cfg->decode_max_depth, 1, INT_MAX);\n}\n\n/* Configures number precision when converting doubles to text */\nstatic int json_cfg_encode_number_precision(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    return json_integer_option(l, 1, &cfg->encode_number_precision, 1, 14);\n}\n\n/* Configures how to decode arrays */\nstatic int json_cfg_decode_array_with_array_mt(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    return json_enum_option(l, 1, &cfg->decode_array_with_array_mt, NULL, 1);\n}\n\n/* Configures JSON encoding buffer persistence */\nstatic int json_cfg_encode_keep_buffer(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n    int old_value;\n\n    old_value = cfg->encode_keep_buffer;\n\n    json_enum_option(l, 1, &cfg->encode_keep_buffer, NULL, 1);\n\n    /* Init / free the buffer if the setting has changed */\n    if (old_value ^ cfg->encode_keep_buffer) {\n        if (cfg->encode_keep_buffer)\n            
strbuf_init(&cfg->encode_buf, 0);\n        else\n            strbuf_free(&cfg->encode_buf);\n    }\n\n    return 1;\n}\n\n#if defined(DISABLE_INVALID_NUMBERS) && !defined(USE_INTERNAL_FPCONV)\nvoid json_verify_invalid_number_setting(lua_State *l, int *setting)\n{\n    if (*setting == 1) {\n        *setting = 0;\n        luaL_error(l, \"Infinity, NaN, and/or hexadecimal numbers are not supported.\");\n    }\n}\n#else\n#define json_verify_invalid_number_setting(l, s)    do { } while(0)\n#endif\n\nstatic int json_cfg_encode_invalid_numbers(lua_State *l)\n{\n    static const char *options[] = { \"off\", \"on\", \"null\", NULL };\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    json_enum_option(l, 1, &cfg->encode_invalid_numbers, options, 1);\n\n    json_verify_invalid_number_setting(l, &cfg->encode_invalid_numbers);\n\n    return 1;\n}\n\nstatic int json_cfg_decode_invalid_numbers(lua_State *l)\n{\n    json_config_t *cfg = json_arg_init(l, 1);\n\n    json_enum_option(l, 1, &cfg->decode_invalid_numbers, NULL, 1);\n\n    json_verify_invalid_number_setting(l, &cfg->decode_invalid_numbers);\n\n    return 1;\n}\n\nstatic int json_destroy_config(lua_State *l)\n{\n    json_config_t *cfg;\n\n    cfg = lua_touserdata(l, 1);\n    if (cfg)\n        strbuf_free(&cfg->encode_buf);\n    cfg = NULL;\n\n    return 0;\n}\n\nstatic void json_create_config(lua_State *l)\n{\n    json_config_t *cfg;\n    int i;\n\n    cfg = lua_newuserdata(l, sizeof(*cfg));\n\n    /* Create GC method to clean up strbuf */\n    lua_newtable(l);\n    lua_pushcfunction(l, json_destroy_config);\n    lua_setfield(l, -2, \"__gc\");\n    lua_setmetatable(l, -2);\n\n    cfg->encode_sparse_convert = DEFAULT_SPARSE_CONVERT;\n    cfg->encode_sparse_ratio = DEFAULT_SPARSE_RATIO;\n    cfg->encode_sparse_safe = DEFAULT_SPARSE_SAFE;\n    cfg->encode_max_depth = DEFAULT_ENCODE_MAX_DEPTH;\n    cfg->decode_max_depth = DEFAULT_DECODE_MAX_DEPTH;\n    cfg->encode_invalid_numbers = DEFAULT_ENCODE_INVALID_NUMBERS;\n    
cfg->decode_invalid_numbers = DEFAULT_DECODE_INVALID_NUMBERS;\n    cfg->encode_keep_buffer = DEFAULT_ENCODE_KEEP_BUFFER;\n    cfg->encode_number_precision = DEFAULT_ENCODE_NUMBER_PRECISION;\n    cfg->decode_array_with_array_mt = DEFAULT_DECODE_ARRAY_WITH_ARRAY_MT;\n\n#if DEFAULT_ENCODE_KEEP_BUFFER > 0\n    strbuf_init(&cfg->encode_buf, 0);\n#endif\n\n    /* Decoding init */\n\n    /* Tag all characters as an error */\n    for (i = 0; i < 256; i++)\n        cfg->ch2token[i] = T_ERROR;\n\n    /* Set tokens that require no further processing */\n    cfg->ch2token['{'] = T_OBJ_BEGIN;\n    cfg->ch2token['}'] = T_OBJ_END;\n    cfg->ch2token['['] = T_ARR_BEGIN;\n    cfg->ch2token[']'] = T_ARR_END;\n    cfg->ch2token[','] = T_COMMA;\n    cfg->ch2token[':'] = T_COLON;\n    cfg->ch2token['\\0'] = T_END;\n    cfg->ch2token[' '] = T_WHITESPACE;\n    cfg->ch2token['\\t'] = T_WHITESPACE;\n    cfg->ch2token['\\n'] = T_WHITESPACE;\n    cfg->ch2token['\\r'] = T_WHITESPACE;\n\n    /* Update characters that require further processing */\n    cfg->ch2token['f'] = T_UNKNOWN;     /* false? */\n    cfg->ch2token['i'] = T_UNKNOWN;     /* inf, infinity? */\n    cfg->ch2token['I'] = T_UNKNOWN;\n    cfg->ch2token['n'] = T_UNKNOWN;     /* null, nan? */\n    cfg->ch2token['N'] = T_UNKNOWN;\n    cfg->ch2token['t'] = T_UNKNOWN;     /* true? */\n    cfg->ch2token['\"'] = T_UNKNOWN;     /* string? */\n    cfg->ch2token['+'] = T_UNKNOWN;     /* number? 
*/\n    cfg->ch2token['-'] = T_UNKNOWN;\n    for (i = 0; i < 10; i++)\n        cfg->ch2token['0' + i] = T_UNKNOWN;\n\n    /* Lookup table for parsing escape characters */\n    for (i = 0; i < 256; i++)\n        cfg->escape2char[i] = 0;          /* String error */\n    cfg->escape2char['\"'] = '\"';\n    cfg->escape2char['\\\\'] = '\\\\';\n    cfg->escape2char['/'] = '/';\n    cfg->escape2char['b'] = '\\b';\n    cfg->escape2char['t'] = '\\t';\n    cfg->escape2char['n'] = '\\n';\n    cfg->escape2char['f'] = '\\f';\n    cfg->escape2char['r'] = '\\r';\n    cfg->escape2char['u'] = 'u';          /* Unicode parsing required */\n}\n\n/* ===== ENCODING ===== */\n\nstatic void json_encode_exception(lua_State *l, json_config_t *cfg, strbuf_t *json, int lindex,\n                                  const char *reason)\n{\n    if (!cfg->encode_keep_buffer)\n        strbuf_free(json);\n    luaL_error(l, \"Cannot serialise %s: %s\",\n                  lua_typename(l, lua_type(l, lindex)), reason);\n}\n\n/* json_append_string args:\n * - lua_State\n * - JSON strbuf\n * - String (Lua stack index)\n *\n * Returns nothing. Doesn't remove string from Lua stack */\nstatic void json_append_string(lua_State *l, strbuf_t *json, int lindex)\n{\n    const char *escstr;\n    const char *str;\n    size_t i, len;\n\n    str = lua_tolstring(l, lindex, &len);\n\n    /* Worst case is len * 6 (all unicode escapes).\n     * This buffer is reused constantly for small strings\n     * If there are any excess pages, they won't be hit anyway.\n     * This gains ~5% speedup. 
*/\n    if (len > SIZE_MAX / 6 - 3)\n        abort(); /* Overflow check */\n    strbuf_ensure_empty_length(json, len * 6 + 2);\n\n    strbuf_append_char_unsafe(json, '\\\"');\n    for (i = 0; i < len; i++) {\n        escstr = char2escape[(unsigned char)str[i]];\n        if (escstr)\n            strbuf_append_string(json, escstr);\n        else\n            strbuf_append_char_unsafe(json, str[i]);\n    }\n    strbuf_append_char_unsafe(json, '\\\"');\n}\n\n/* Find the size of the array on the top of the Lua stack\n * -1   object (not a pure array)\n * >=0  elements in array\n */\nstatic int lua_array_length(lua_State *l, json_config_t *cfg, strbuf_t *json)\n{\n    double k;\n    int max;\n    int items;\n\n    max = 0;\n    items = 0;\n\n    lua_pushnil(l);\n    /* table, startkey */\n    while (lua_next(l, -2) != 0) {\n        /* table, key, value */\n        if (lua_type(l, -2) == LUA_TNUMBER &&\n            (k = lua_tonumber(l, -2))) {\n            /* Integer >= 1 ? */\n            if (floor(k) == k && k >= 1) {\n                if (k > max)\n                    max = k;\n                items++;\n                lua_pop(l, 1);\n                continue;\n            }\n        }\n\n        /* Must not be an array (non integer key) */\n        lua_pop(l, 2);\n        return -1;\n    }\n\n    /* Encode excessively sparse arrays as objects (if enabled) */\n    if (cfg->encode_sparse_ratio > 0 &&\n        max > items * cfg->encode_sparse_ratio &&\n        max > cfg->encode_sparse_safe) {\n        if (!cfg->encode_sparse_convert)\n            json_encode_exception(l, cfg, json, -1, \"excessively sparse array\");\n\n        return -1;\n    }\n\n    return max;\n}\n\nstatic void json_check_encode_depth(lua_State *l, json_config_t *cfg,\n                                    int current_depth, strbuf_t *json)\n{\n    /* Ensure there are enough slots free to traverse a table (key,\n     * value) and push a string for a potential error message.\n     *\n     * Unlike 
\"decode\", the key and value are still on the stack when\n     * lua_checkstack() is called.  Hence an extra slot for luaL_error()\n     * below is required just in case the next check to lua_checkstack()\n     * fails.\n     *\n     * While this won't cause a crash due to the EXTRA_STACK reserve\n     * slots, it would still be an improper use of the API. */\n    if (current_depth <= cfg->encode_max_depth && lua_checkstack(l, 3))\n        return;\n\n    if (!cfg->encode_keep_buffer)\n        strbuf_free(json);\n\n    luaL_error(l, \"Cannot serialise, excessive nesting (%d)\",\n               current_depth);\n}\n\nstatic void json_append_data(lua_State *l, json_config_t *cfg,\n                             int current_depth, strbuf_t *json);\n\n/* json_append_array args:\n * - lua_State\n * - JSON strbuf\n * - Size of passwd Lua array (top of stack) */\nstatic void json_append_array(lua_State *l, json_config_t *cfg, int current_depth,\n                              strbuf_t *json, int array_length)\n{\n    int comma, i;\n\n    strbuf_append_char(json, '[');\n\n    comma = 0;\n    for (i = 1; i <= array_length; i++) {\n        if (comma)\n            strbuf_append_char(json, ',');\n        else\n            comma = 1;\n\n        lua_rawgeti(l, -1, i);\n        json_append_data(l, cfg, current_depth, json);\n        lua_pop(l, 1);\n    }\n\n    strbuf_append_char(json, ']');\n}\n\nstatic void json_append_number(lua_State *l, json_config_t *cfg,\n                               strbuf_t *json, int lindex)\n{\n    double num = lua_tonumber(l, lindex);\n    int len;\n\n    if (cfg->encode_invalid_numbers == 0) {\n        /* Prevent encoding invalid numbers */\n        if (isinf(num) || isnan(num))\n            json_encode_exception(l, cfg, json, lindex, \"must not be NaN or Inf\");\n    } else if (cfg->encode_invalid_numbers == 1) {\n        /* Encode invalid numbers, but handle \"nan\" separately\n         * since some platforms may encode as \"-nan\". 
*/\n        if (isnan(num)) {\n            strbuf_append_mem(json, \"nan\", 3);\n            return;\n        }\n    } else {\n        /* Encode invalid numbers as \"null\" */\n        if (isinf(num) || isnan(num)) {\n            strbuf_append_mem(json, \"null\", 4);\n            return;\n        }\n    }\n\n    strbuf_ensure_empty_length(json, FPCONV_G_FMT_BUFSIZE);\n    len = fpconv_g_fmt(strbuf_empty_ptr(json), num, cfg->encode_number_precision);\n    strbuf_extend_length(json, len);\n}\n\nstatic void json_append_object(lua_State *l, json_config_t *cfg,\n                               int current_depth, strbuf_t *json)\n{\n    int comma, keytype;\n\n    /* Object */\n    strbuf_append_char(json, '{');\n\n    lua_pushnil(l);\n    /* table, startkey */\n    comma = 0;\n    while (lua_next(l, -2) != 0) {\n        if (comma)\n            strbuf_append_char(json, ',');\n        else\n            comma = 1;\n\n        /* table, key, value */\n        keytype = lua_type(l, -2);\n        if (keytype == LUA_TNUMBER) {\n            strbuf_append_char(json, '\"');\n            json_append_number(l, cfg, json, -2);\n            strbuf_append_mem(json, \"\\\":\", 2);\n        } else if (keytype == LUA_TSTRING) {\n            json_append_string(l, json, -2);\n            strbuf_append_char(json, ':');\n        } else {\n            json_encode_exception(l, cfg, json, -2,\n                                  \"table key must be a number or string\");\n            /* never returns */\n        }\n\n        /* table, key, value */\n        json_append_data(l, cfg, current_depth, json);\n        lua_pop(l, 1);\n        /* table, key */\n    }\n\n    strbuf_append_char(json, '}');\n}\n\n/* Serialise Lua data into JSON string. 
*/\nstatic void json_append_data(lua_State *l, json_config_t *cfg,\n                             int current_depth, strbuf_t *json)\n{\n    int len;\n\n    switch (lua_type(l, -1)) {\n    case LUA_TSTRING:\n        json_append_string(l, json, -1);\n        break;\n    case LUA_TNUMBER:\n        json_append_number(l, cfg, json, -1);\n        break;\n    case LUA_TBOOLEAN:\n        if (lua_toboolean(l, -1))\n            strbuf_append_mem(json, \"true\", 4);\n        else\n            strbuf_append_mem(json, \"false\", 5);\n        break;\n    case LUA_TTABLE:\n        current_depth++;\n        json_check_encode_depth(l, cfg, current_depth, json);\n\n        /* Check if this is an array */\n        int as_array = 0;\n        if (!lua_checkstack(l, 2))\n            luaL_error(l, \"max lua stack reached\");\n        if (lua_getmetatable(l, -1)) {\n            lua_getfield(l, -1, \"__is_cjson_array\");\n            as_array = lua_toboolean(l, -1);\n            lua_pop(l, 2); /* pop value and metatable */\n        }\n\n        if (as_array) {\n            len = lua_objlen(l, -1);\n            json_append_array(l, cfg, current_depth, json, len);\n            break;\n        }\n        \n        len = lua_array_length(l, cfg, json);\n        if (len > 0)\n            json_append_array(l, cfg, current_depth, json, len);\n        else\n            json_append_object(l, cfg, current_depth, json);\n        break;\n    case LUA_TNIL:\n        strbuf_append_mem(json, \"null\", 4);\n        break;\n    case LUA_TLIGHTUSERDATA:\n        if (lua_touserdata(l, -1) == NULL) {\n            strbuf_append_mem(json, \"null\", 4);\n            break;\n        }\n    default:\n        /* Remaining types (LUA_TFUNCTION, LUA_TUSERDATA, LUA_TTHREAD,\n         * and LUA_TLIGHTUSERDATA) cannot be serialised */\n        json_encode_exception(l, cfg, json, -1, \"type not supported\");\n        /* never returns */\n    }\n}\n\nstatic int json_encode(lua_State *l)\n{\n    json_config_t *cfg = 
json_fetch_config(l);\n    strbuf_t local_encode_buf;\n    strbuf_t *encode_buf;\n    char *json;\n    size_t len;\n\n    luaL_argcheck(l, lua_gettop(l) == 1, 1, \"expected 1 argument\");\n\n    if (!cfg->encode_keep_buffer) {\n        /* Use private buffer */\n        encode_buf = &local_encode_buf;\n        strbuf_init(encode_buf, 0);\n    } else {\n        /* Reuse existing buffer */\n        encode_buf = &cfg->encode_buf;\n        strbuf_reset(encode_buf);\n    }\n\n    json_append_data(l, cfg, 0, encode_buf);\n    json = strbuf_string(encode_buf, &len);\n\n    lua_pushlstring(l, json, len);\n\n    if (!cfg->encode_keep_buffer)\n        strbuf_free(encode_buf);\n\n    return 1;\n}\n\n/* ===== DECODING ===== */\n\nstatic void json_process_value(lua_State *l, json_parse_t *json,\n                               json_token_t *token);\n\nstatic int hexdigit2int(char hex)\n{\n    if ('0' <= hex  && hex <= '9')\n        return hex - '0';\n\n    /* Force lowercase */\n    hex |= 0x20;\n    if ('a' <= hex && hex <= 'f')\n        return 10 + hex - 'a';\n\n    return -1;\n}\n\nstatic int decode_hex4(const char *hex)\n{\n    int digit[4];\n    int i;\n\n    /* Convert ASCII hex digit to numeric digit\n     * Note: this returns an error for invalid hex digits, including\n     *       NULL */\n    for (i = 0; i < 4; i++) {\n        digit[i] = hexdigit2int(hex[i]);\n        if (digit[i] < 0) {\n            return -1;\n        }\n    }\n\n    return (digit[0] << 12) +\n           (digit[1] << 8) +\n           (digit[2] << 4) +\n            digit[3];\n}\n\n/* Converts a Unicode codepoint to UTF-8.\n * Returns UTF-8 string length, and up to 4 bytes in *utf8 */\nstatic int codepoint_to_utf8(char *utf8, int codepoint)\n{\n    /* 0xxxxxxx */\n    if (codepoint <= 0x7F) {\n        utf8[0] = codepoint;\n        return 1;\n    }\n\n    /* 110xxxxx 10xxxxxx */\n    if (codepoint <= 0x7FF) {\n        utf8[0] = (codepoint >> 6) | 0xC0;\n        utf8[1] = (codepoint & 0x3F) | 0x80;\n      
  return 2;\n    }\n\n    /* 1110xxxx 10xxxxxx 10xxxxxx */\n    if (codepoint <= 0xFFFF) {\n        utf8[0] = (codepoint >> 12) | 0xE0;\n        utf8[1] = ((codepoint >> 6) & 0x3F) | 0x80;\n        utf8[2] = (codepoint & 0x3F) | 0x80;\n        return 3;\n    }\n\n    /* 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx */\n    if (codepoint <= 0x1FFFFF) {\n        utf8[0] = (codepoint >> 18) | 0xF0;\n        utf8[1] = ((codepoint >> 12) & 0x3F) | 0x80;\n        utf8[2] = ((codepoint >> 6) & 0x3F) | 0x80;\n        utf8[3] = (codepoint & 0x3F) | 0x80;\n        return 4;\n    }\n\n    return 0;\n}\n\n\n/* Called when index pointing to beginning of UTF-16 code escape: \\uXXXX\n * \\u is guaranteed to exist, but the remaining hex characters may be\n * missing.\n * Translate to UTF-8 and append to temporary token string.\n * Must advance index to the next character to be processed.\n * Returns: 0   success\n *          -1  error\n */\nstatic int json_append_unicode_escape(json_parse_t *json)\n{\n    char utf8[4];       /* Surrogate pairs require 4 UTF-8 bytes */\n    int codepoint;\n    int surrogate_low;\n    int len;\n    int escape_len = 6;\n\n    /* Fetch UTF-16 code unit */\n    codepoint = decode_hex4(json->ptr + 2);\n    if (codepoint < 0)\n        return -1;\n\n    /* UTF-16 surrogate pairs take the following 2 byte form:\n     *      11011 x yyyyyyyyyy\n     * When x = 0: y is the high 10 bits of the codepoint\n     *      x = 1: y is the low 10 bits of the codepoint\n     *\n     * Check for a surrogate pair (high or low) */\n    if ((codepoint & 0xF800) == 0xD800) {\n        /* Error if the 1st surrogate is not high */\n        if (codepoint & 0x400)\n            return -1;\n\n        /* Ensure the next code is a unicode escape */\n        if (*(json->ptr + escape_len) != '\\\\' ||\n            *(json->ptr + escape_len + 1) != 'u') {\n            return -1;\n        }\n\n        /* Fetch the next codepoint */\n        surrogate_low = decode_hex4(json->ptr + 2 + 
escape_len);\n        if (surrogate_low < 0)\n            return -1;\n\n        /* Error if the 2nd code is not a low surrogate */\n        if ((surrogate_low & 0xFC00) != 0xDC00)\n            return -1;\n\n        /* Calculate Unicode codepoint */\n        codepoint = (codepoint & 0x3FF) << 10;\n        surrogate_low &= 0x3FF;\n        codepoint = (codepoint | surrogate_low) + 0x10000;\n        escape_len = 12;\n    }\n\n    /* Convert codepoint to UTF-8 */\n    len = codepoint_to_utf8(utf8, codepoint);\n    if (!len)\n        return -1;\n\n    /* Append bytes and advance parse index */\n    strbuf_append_mem_unsafe(json->tmp, utf8, len);\n    json->ptr += escape_len;\n\n    return 0;\n}\n\nstatic void json_set_token_error(json_token_t *token, json_parse_t *json,\n                                 const char *errtype)\n{\n    token->type = T_ERROR;\n    token->index = json->ptr - json->data;\n    token->value.string = errtype;\n}\n\nstatic void json_next_string_token(json_parse_t *json, json_token_t *token)\n{\n    char *escape2char = json->cfg->escape2char;\n    char ch;\n\n    /* Caller must ensure a string is next */\n    assert(*json->ptr == '\"');\n\n    /* Skip \" */\n    json->ptr++;\n\n    /* json->tmp is the temporary strbuf used to accumulate the\n     * decoded string value.\n     * json->tmp is sized to handle JSON containing only a string value.\n     */\n    strbuf_reset(json->tmp);\n\n    while ((ch = *json->ptr) != '\"') {\n        if (!ch) {\n            /* Premature end of the string */\n            json_set_token_error(token, json, \"unexpected end of string\");\n            return;\n        }\n\n        /* Handle escapes */\n        if (ch == '\\\\') {\n            /* Fetch escape character */\n            ch = *(json->ptr + 1);\n\n            /* Translate escape code and append to tmp string */\n            ch = escape2char[(unsigned char)ch];\n            if (ch == 'u') {\n                if (json_append_unicode_escape(json) == 0)\n            
        continue;\n\n                json_set_token_error(token, json,\n                                     \"invalid unicode escape code\");\n                return;\n            }\n            if (!ch) {\n                json_set_token_error(token, json, \"invalid escape code\");\n                return;\n            }\n\n            /* Skip '\\' */\n            json->ptr++;\n        }\n        /* Append normal character or translated single character\n         * Unicode escapes are handled above */\n        strbuf_append_char_unsafe(json->tmp, ch);\n        json->ptr++;\n    }\n    json->ptr++;    /* Eat final quote (\") */\n\n    strbuf_ensure_null(json->tmp);\n\n    token->type = T_STRING;\n    token->value.string = strbuf_string(json->tmp, &token->string_len);\n}\n\n/* JSON numbers should take the following form:\n *      -?(0|[1-9]|[1-9][0-9]+)(.[0-9]+)?([eE][-+]?[0-9]+)?\n *\n * json_next_number_token() uses strtod() which allows other forms:\n * - numbers starting with '+'\n * - NaN, -NaN, infinity, -infinity\n * - hexadecimal numbers\n * - numbers with leading zeros\n *\n * json_is_invalid_number() detects \"numbers\" which may pass strtod()'s\n * error checking, but should not be allowed with strict JSON.\n *\n * json_is_invalid_number() may pass numbers which cause strtod()\n * to generate an error.\n */\nstatic int json_is_invalid_number(json_parse_t *json)\n{\n    const char *p = json->ptr;\n\n    /* Reject numbers starting with + */\n    if (*p == '+')\n        return 1;\n\n    /* Skip minus sign if it exists */\n    if (*p == '-')\n        p++;\n\n    /* Reject numbers starting with 0x, or leading zeros */\n    if (*p == '0') {\n        int ch2 = *(p + 1);\n\n        if ((ch2 | 0x20) == 'x' ||          /* Hex */\n            ('0' <= ch2 && ch2 <= '9'))     /* Leading zero */\n            return 1;\n\n        return 0;\n    } else if (*p <= '9') {\n        return 0;                           /* Ordinary number */\n    }\n\n    /* Reject inf/nan */\n 
   if (!strncasecmp(p, \"inf\", 3))\n        return 1;\n    if (!strncasecmp(p, \"nan\", 3))\n        return 1;\n\n    /* Pass all other numbers which may still be invalid, but\n     * strtod() will catch them. */\n    return 0;\n}\n\nstatic void json_next_number_token(json_parse_t *json, json_token_t *token)\n{\n    char *endptr;\n\n    token->type = T_NUMBER;\n    token->value.number = fpconv_strtod(json->ptr, &endptr);\n    if (json->ptr == endptr)\n        json_set_token_error(token, json, \"invalid number\");\n    else\n        json->ptr = endptr;     /* Skip the processed number */\n\n    return;\n}\n\n/* Fills in the token struct.\n * T_STRING will return a pointer to the json_parse_t temporary string\n * T_ERROR will leave the json->ptr pointer at the error.\n */\nstatic void json_next_token(json_parse_t *json, json_token_t *token)\n{\n    const json_token_type_t *ch2token = json->cfg->ch2token;\n    int ch;\n\n    /* Eat whitespace. */\n    while (1) {\n        ch = (unsigned char)*(json->ptr);\n        token->type = ch2token[ch];\n        if (token->type != T_WHITESPACE)\n            break;\n        json->ptr++;\n    }\n\n    /* Store location of new token. Required when throwing errors\n     * for unexpected tokens (syntax errors). 
*/\n    token->index = json->ptr - json->data;\n\n    /* Don't advance the pointer for an error or the end */\n    if (token->type == T_ERROR) {\n        json_set_token_error(token, json, \"invalid token\");\n        return;\n    }\n\n    if (token->type == T_END) {\n        return;\n    }\n\n    /* Found a known single character token, advance index and return */\n    if (token->type != T_UNKNOWN) {\n        json->ptr++;\n        return;\n    }\n\n    /* Process characters which triggered T_UNKNOWN\n     *\n     * Must use strncmp() to match the front of the JSON string.\n     * JSON identifiers must be lowercase.\n     * When strict_numbers is disabled, either case is allowed for\n     * Infinity/NaN (since we are no longer following the spec.) */\n    if (ch == '\"') {\n        json_next_string_token(json, token);\n        return;\n    } else if (ch == '-' || ('0' <= ch && ch <= '9')) {\n        if (!json->cfg->decode_invalid_numbers && json_is_invalid_number(json)) {\n            json_set_token_error(token, json, \"invalid number\");\n            return;\n        }\n        json_next_number_token(json, token);\n        return;\n    } else if (!strncmp(json->ptr, \"true\", 4)) {\n        token->type = T_BOOLEAN;\n        token->value.boolean = 1;\n        json->ptr += 4;\n        return;\n    } else if (!strncmp(json->ptr, \"false\", 5)) {\n        token->type = T_BOOLEAN;\n        token->value.boolean = 0;\n        json->ptr += 5;\n        return;\n    } else if (!strncmp(json->ptr, \"null\", 4)) {\n        token->type = T_NULL;\n        json->ptr += 4;\n        return;\n    } else if (json->cfg->decode_invalid_numbers &&\n               json_is_invalid_number(json)) {\n        /* When decode_invalid_numbers is enabled, only attempt to process\n         * numbers we know are invalid JSON (Inf, NaN, hex).\n         * This is required to generate an appropriate token error;\n         * otherwise all bad tokens will register as \"invalid number\".\n         */\n     
   json_next_number_token(json, token);\n        return;\n    }\n\n    /* Token starts with t/f/n but isn't recognised above. */\n    json_set_token_error(token, json, \"invalid token\");\n}\n\n/* This function does not return.\n * DO NOT CALL WITH DYNAMIC MEMORY ALLOCATED.\n * The only supported exception is the temporary parser string\n * json->tmp struct.\n * json and token should exist on the stack somewhere.\n * luaL_error() will longjmp() and release the stack */\nstatic void json_throw_parse_error(lua_State *l, json_parse_t *json,\n                                   const char *exp, json_token_t *token)\n{\n    const char *found;\n\n    strbuf_free(json->tmp);\n\n    if (token->type == T_ERROR)\n        found = token->value.string;\n    else\n        found = json_token_type_name[token->type];\n\n    /* Note: token->index is 0-based, display starting from 1 */\n    luaL_error(l, \"Expected %s but found %s at character %d\",\n               exp, found, token->index + 1);\n}\n\nstatic inline void json_decode_ascend(json_parse_t *json)\n{\n    json->current_depth--;\n}\n\nstatic void json_decode_descend(lua_State *l, json_parse_t *json, int slots)\n{\n    json->current_depth++;\n\n    if (json->current_depth <= json->cfg->decode_max_depth &&\n        lua_checkstack(l, slots)) {\n        return;\n    }\n\n    strbuf_free(json->tmp);\n    luaL_error(l, \"Found too many nested data structures (%d) at character %d\",\n        json->current_depth, json->ptr - json->data);\n}\n\nstatic void json_parse_object_context(lua_State *l, json_parse_t *json)\n{\n    json_token_t token;\n\n    /* 3 slots required:\n     * .., table, key, value */\n    json_decode_descend(l, json, 3);\n\n    lua_newtable(l);\n\n    json_next_token(json, &token);\n\n    /* Handle empty objects */\n    if (token.type == T_OBJ_END) {\n        json_decode_ascend(json);\n        return;\n    }\n\n    while (1) {\n        if (token.type != T_STRING)\n            json_throw_parse_error(l, json, \"object 
key string\", &token);\n\n        /* Push key */\n        lua_pushlstring(l, token.value.string, token.string_len);\n\n        json_next_token(json, &token);\n        if (token.type != T_COLON)\n            json_throw_parse_error(l, json, \"colon\", &token);\n\n        /* Fetch value */\n        json_next_token(json, &token);\n        json_process_value(l, json, &token);\n\n        /* Set key = value */\n        lua_rawset(l, -3);\n\n        json_next_token(json, &token);\n\n        if (token.type == T_OBJ_END) {\n            json_decode_ascend(json);\n            return;\n        }\n\n        if (token.type != T_COMMA)\n            json_throw_parse_error(l, json, \"comma or object end\", &token);\n\n        json_next_token(json, &token);\n    }\n}\n\n/* Handle the array context */\nstatic void json_parse_array_context(lua_State *l, json_parse_t *json)\n{\n    json_token_t token;\n    int i;\n\n    /* 2 slots required:\n     * .., table, value */\n    json_decode_descend(l, json, 2);\n\n    lua_newtable(l);\n\n    /* set array_mt on the table at the top of the stack */\n    if (json->cfg->decode_array_with_array_mt) {\n        /* Ensure sufficient stack space for metatable creation (metatable + boolean) */\n        if (!lua_checkstack(l, 2))\n            luaL_error(l, \"max lua stack reached\");\n        /* Mark this table so encoder can emit [] for empty arrays */\n        lua_newtable(l);\n        lua_pushboolean(l, 1);\n        lua_setfield(l, -2, \"__is_cjson_array\");\n        lua_enablereadonlytable(l, -1, 1); /* protect the metatable. 
*/\n        lua_setmetatable(l, -2); /* set metatable for the array table */\n    }\n\n    json_next_token(json, &token);\n\n    /* Handle empty arrays */\n    if (token.type == T_ARR_END) {\n        json_decode_ascend(json);\n        return;\n    }\n\n    for (i = 1; ; i++) {\n        json_process_value(l, json, &token);\n        lua_rawseti(l, -2, i);            /* arr[i] = value */\n\n        json_next_token(json, &token);\n\n        if (token.type == T_ARR_END) {\n            json_decode_ascend(json);\n            return;\n        }\n\n        if (token.type != T_COMMA)\n            json_throw_parse_error(l, json, \"comma or array end\", &token);\n\n        json_next_token(json, &token);\n    }\n}\n\n/* Handle the \"value\" context */\nstatic void json_process_value(lua_State *l, json_parse_t *json,\n                               json_token_t *token)\n{\n    switch (token->type) {\n    case T_STRING:\n        lua_pushlstring(l, token->value.string, token->string_len);\n        break;\n    case T_NUMBER:\n        lua_pushnumber(l, token->value.number);\n        break;\n    case T_BOOLEAN:\n        lua_pushboolean(l, token->value.boolean);\n        break;\n    case T_OBJ_BEGIN:\n        json_parse_object_context(l, json);\n        break;\n    case T_ARR_BEGIN:\n        json_parse_array_context(l, json);\n        break;\n    case T_NULL:\n        /* In Lua, setting \"t[k] = nil\" will delete k from the table.\n         * Hence a NULL pointer lightuserdata object is used instead */\n        lua_pushlightuserdata(l, NULL);\n        break;\n    default:\n        json_throw_parse_error(l, json, \"value\", token);\n    }\n}\n\nstatic int json_decode(lua_State *l)\n{\n    json_parse_t json;\n    json_token_t token;\n    size_t json_len;\n\n    luaL_argcheck(l, lua_gettop(l) == 1, 1, \"expected 1 argument\");\n\n    json.cfg = json_fetch_config(l);\n    json.data = luaL_checklstring(l, 1, &json_len);\n    json.current_depth = 0;\n    json.ptr = json.data;\n\n    
/* Detect Unicode other than UTF-8 (see RFC 4627, Sec 3)\n     *\n     * CJSON can support any simple data type, hence only the first\n     * character is guaranteed to be ASCII (at worst: '\"'). This is\n     * still enough to detect whether the wrong encoding is in use. */\n    if (json_len >= 2 && (!json.data[0] || !json.data[1]))\n        luaL_error(l, \"JSON parser does not support UTF-16 or UTF-32\");\n\n    /* Ensure the temporary buffer can hold the entire string.\n     * This means we no longer need to do length checks since the decoded\n     * string must be smaller than the entire json string */\n    json.tmp = strbuf_new(json_len);\n\n    json_next_token(&json, &token);\n    json_process_value(l, &json, &token);\n\n    /* Ensure there is no more input left */\n    json_next_token(&json, &token);\n\n    if (token.type != T_END)\n        json_throw_parse_error(l, &json, \"the end\", &token);\n\n    strbuf_free(json.tmp);\n\n    return 1;\n}\n\n/* ===== INITIALISATION ===== */\n\n#if !defined(LUA_VERSION_NUM) || LUA_VERSION_NUM < 502\n/* Compatibility for Lua 5.1.\n *\n * luaL_setfuncs() is used to create a module table where the functions have\n * json_config_t as their first upvalue. Code borrowed from Lua 5.2 source. 
*/\nstatic void luaL_setfuncs (lua_State *l, const luaL_Reg *reg, int nup)\n{\n    int i;\n\n    luaL_checkstack(l, nup, \"too many upvalues\");\n    for (; reg->name != NULL; reg++) {  /* fill the table with given functions */\n        for (i = 0; i < nup; i++)  /* copy upvalues to the top */\n            lua_pushvalue(l, -nup);\n        lua_pushcclosure(l, reg->func, nup);  /* closure with those upvalues */\n        lua_setfield(l, -(nup + 2), reg->name);\n    }\n    lua_pop(l, nup);  /* remove upvalues */\n}\n#endif\n\n/* Call target function in protected mode with all supplied args.\n * Assumes target function only returns a single non-nil value.\n * Convert and return thrown errors as: nil, \"error message\" */\nstatic int json_protect_conversion(lua_State *l)\n{\n    int err;\n\n    /* Deliberately throw an error for invalid arguments */\n    luaL_argcheck(l, lua_gettop(l) == 1, 1, \"expected 1 argument\");\n\n    /* pcall() the function stored as upvalue(1) */\n    lua_pushvalue(l, lua_upvalueindex(1));\n    lua_insert(l, 1);\n    err = lua_pcall(l, 1, 1, 0);\n    if (!err)\n        return 1;\n\n    if (err == LUA_ERRRUN) {\n        lua_pushnil(l);\n        lua_insert(l, -2);\n        return 2;\n    }\n\n    /* Since we are not using a custom error handler, the only remaining\n     * errors are memory related */\n    return luaL_error(l, \"Memory allocation error in CJSON protected call\");\n}\n\n/* Return cjson module table */\nstatic int lua_cjson_new(lua_State *l)\n{\n    luaL_Reg reg[] = {\n        { \"encode\", json_encode },\n        { \"decode\", json_decode },\n        { \"decode_array_with_array_mt\", json_cfg_decode_array_with_array_mt },\n        { \"encode_sparse_array\", json_cfg_encode_sparse_array },\n        { \"encode_max_depth\", json_cfg_encode_max_depth },\n        { \"decode_max_depth\", json_cfg_decode_max_depth },\n        { \"encode_number_precision\", json_cfg_encode_number_precision },\n        { \"encode_keep_buffer\", 
json_cfg_encode_keep_buffer },\n        { \"encode_invalid_numbers\", json_cfg_encode_invalid_numbers },\n        { \"decode_invalid_numbers\", json_cfg_decode_invalid_numbers },\n        { \"new\", lua_cjson_new },\n        { NULL, NULL }\n    };\n\n    /* Initialise number conversions */\n    fpconv_init();\n\n    /* cjson module table */\n    lua_newtable(l);\n\n    /* Register functions with config data as upvalue */\n    json_create_config(l);\n    luaL_setfuncs(l, reg, 1);\n\n    /* Set cjson.null */\n    lua_pushlightuserdata(l, NULL);\n    lua_setfield(l, -2, \"null\");\n\n    /* Set module name / version fields */\n    lua_pushliteral(l, CJSON_MODNAME);\n    lua_setfield(l, -2, \"_NAME\");\n    lua_pushliteral(l, CJSON_VERSION);\n    lua_setfield(l, -2, \"_VERSION\");\n\n    return 1;\n}\n\n/* Return cjson.safe module table */\nstatic int lua_cjson_safe_new(lua_State *l)\n{\n    const char *func[] = { \"decode\", \"encode\", NULL };\n    int i;\n\n    lua_cjson_new(l);\n\n    /* Fix new() method */\n    lua_pushcfunction(l, lua_cjson_safe_new);\n    lua_setfield(l, -2, \"new\");\n\n    for (i = 0; func[i]; i++) {\n        lua_getfield(l, -1, func[i]);\n        lua_pushcclosure(l, json_protect_conversion, 1);\n        lua_setfield(l, -2, func[i]);\n    }\n\n    return 1;\n}\n\nint luaopen_cjson(lua_State *l)\n{\n    lua_cjson_new(l);\n\n#ifdef ENABLE_CJSON_GLOBAL\n    /* Register a global \"cjson\" table. */\n    lua_pushvalue(l, -1);\n    lua_setglobal(l, CJSON_MODNAME);\n#endif\n\n    /* Return cjson table */\n    return 1;\n}\n\nint luaopen_cjson_safe(lua_State *l)\n{\n    lua_cjson_safe_new(l);\n\n    /* Return cjson.safe table */\n    return 1;\n}\n\n/* vi:ai et sw=4 ts=4:\n */\n"
  },
  {
    "path": "deps/lua/src/lua_cmsgpack.c",
    "content": "#include <math.h>\n#include <stdlib.h>\n#include <stdint.h>\n#include <string.h>\n#include <assert.h>\n\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\n#define LUACMSGPACK_NAME        \"cmsgpack\"\n#define LUACMSGPACK_SAFE_NAME   \"cmsgpack_safe\"\n#define LUACMSGPACK_VERSION     \"lua-cmsgpack 0.4.0\"\n#define LUACMSGPACK_COPYRIGHT   \"Copyright (C) 2012, Salvatore Sanfilippo\"\n#define LUACMSGPACK_DESCRIPTION \"MessagePack C implementation for Lua\"\n\n/* Allows a preprocessor directive to override MAX_NESTING */\n#ifndef LUACMSGPACK_MAX_NESTING\n    #define LUACMSGPACK_MAX_NESTING  16 /* Max tables nesting. */\n#endif\n\n/* Check if float or double can be an integer without loss of precision */\n#define IS_INT_TYPE_EQUIVALENT(x, T) (!isinf(x) && (T)(x) == (x))\n\n#define IS_INT64_EQUIVALENT(x) IS_INT_TYPE_EQUIVALENT(x, int64_t)\n#define IS_INT_EQUIVALENT(x) IS_INT_TYPE_EQUIVALENT(x, int)\n\n/* If size of pointer is equal to a 4 byte integer, we're on 32 bits. */\n#if UINTPTR_MAX == UINT_MAX\n    #define BITS_32 1\n#else\n    #define BITS_32 0\n#endif\n\n#if BITS_32\n    #define lua_pushunsigned(L, n) lua_pushnumber(L, n)\n#else\n    #define lua_pushunsigned(L, n) lua_pushinteger(L, n)\n#endif\n\n/* =============================================================================\n * MessagePack implementation and bindings for Lua 5.1/5.2.\n * Copyright(C) 2012 Salvatore Sanfilippo <antirez@gmail.com>\n *\n * http://github.com/antirez/lua-cmsgpack\n *\n * For MessagePack specification check the following web site:\n * http://wiki.msgpack.org/display/MSGPACK/Format+specification\n *\n * See Copyright Notice at the end of this file.\n *\n * CHANGELOG:\n * 19-Feb-2012 (ver 0.1.0): Initial release.\n * 20-Feb-2012 (ver 0.2.0): Tables encoding improved.\n * 20-Feb-2012 (ver 0.2.1): Minor bug fixing.\n * 20-Feb-2012 (ver 0.3.0): Module renamed lua-cmsgpack (was lua-msgpack).\n * 04-Apr-2014 (ver 0.3.1): Lua 5.2 support and minor bug fix.\n * 07-Apr-2014 (ver 
0.4.0): Multiple pack/unpack, lua allocator, efficiency.\n * ========================================================================== */\n\n/* -------------------------- Endian conversion --------------------------------\n * We use it only for floats and doubles; all the other conversions are\n * performed in an endian-independent fashion. So the only thing we need is a\n * function that swaps a binary string if the arch is little endian (and\n * leaves it untouched otherwise). */\n\n/* Reverse memory bytes if arch is little endian. Given the conceptual\n * simplicity of the Lua build system we prefer to check for endianness at\n * runtime. The performance difference should be acceptable. */\nvoid memrevifle(void *ptr, size_t len) {\n    unsigned char   *p = (unsigned char *)ptr,\n                    *e = (unsigned char *)p+len-1,\n                    aux;\n    int test = 1;\n    unsigned char *testp = (unsigned char*) &test;\n\n    if (testp[0] == 0) return; /* Big endian, nothing to do. */\n    len /= 2;\n    while(len--) {\n        aux = *p;\n        *p = *e;\n        *e = aux;\n        p++;\n        e--;\n    }\n}\n\n/* ---------------------------- String buffer ----------------------------------\n * This is a simple implementation of string buffers. The only operations\n * supported are creating empty buffers and appending bytes to them.\n * The string buffer uses 2x preallocation on every realloc for O(N) append\n * behavior.  
*/\n\ntypedef struct mp_buf {\n    unsigned char *b;\n    size_t len, free;\n} mp_buf;\n\nvoid *mp_realloc(lua_State *L, void *target, size_t osize,size_t nsize) {\n    void *(*local_realloc) (void *, void *, size_t osize, size_t nsize) = NULL;\n    void *ud;\n\n    local_realloc = lua_getallocf(L, &ud);\n\n    return local_realloc(ud, target, osize, nsize);\n}\n\nmp_buf *mp_buf_new(lua_State *L) {\n    mp_buf *buf = NULL;\n\n    /* Old size = 0; new size = sizeof(*buf) */\n    buf = (mp_buf*)mp_realloc(L, NULL, 0, sizeof(*buf));\n\n    buf->b = NULL;\n    buf->len = buf->free = 0;\n    return buf;\n}\n\nvoid mp_buf_append(lua_State *L, mp_buf *buf, const unsigned char *s, size_t len) {\n    if (buf->free < len) {\n        size_t newsize = buf->len+len;\n        if (newsize < buf->len || newsize >= SIZE_MAX/2) abort();\n        newsize *= 2;\n\n        buf->b = (unsigned char*)mp_realloc(L, buf->b, buf->len + buf->free, newsize);\n        buf->free = newsize - buf->len;\n    }\n    memcpy(buf->b+buf->len,s,len);\n    buf->len += len;\n    buf->free -= len;\n}\n\nvoid mp_buf_free(lua_State *L, mp_buf *buf) {\n    mp_realloc(L, buf->b, buf->len + buf->free, 0); /* realloc to 0 = free */\n    mp_realloc(L, buf, sizeof(*buf), 0);\n}\n\n/* ---------------------------- String cursor ----------------------------------\n * This simple data structure is used for parsing. Basically you create a cursor\n * using a string pointer and a length, then it is possible to access the\n * current string position with cursor->p, check the remaining length\n * in cursor->left, and finally consume more string using\n * mp_cur_consume(cursor,len), to advance 'p' and subtract 'left'.\n * An additional field cursor->error is set to zero on initialization and can\n * be used to report errors. */\n\n#define MP_CUR_ERROR_NONE   0\n#define MP_CUR_ERROR_EOF    1   /* Not enough data to complete operation. 
*/\n#define MP_CUR_ERROR_BADFMT 2   /* Bad data format */\n\ntypedef struct mp_cur {\n    const unsigned char *p;\n    size_t left;\n    int err;\n} mp_cur;\n\nvoid mp_cur_init(mp_cur *cursor, const unsigned char *s, size_t len) {\n    cursor->p = s;\n    cursor->left = len;\n    cursor->err = MP_CUR_ERROR_NONE;\n}\n\n#define mp_cur_consume(_c,_len) do { _c->p += _len; _c->left -= _len; } while(0)\n\n/* When there is not enough room we set an error in the cursor and return. This\n * is very common across the code so we have a macro to make the code look\n * a bit simpler. */\n#define mp_cur_need(_c,_len) do { \\\n    if (_c->left < _len) { \\\n        _c->err = MP_CUR_ERROR_EOF; \\\n        return; \\\n    } \\\n} while(0)\n\n/* ------------------------- Low level MP encoding -------------------------- */\n\nvoid mp_encode_bytes(lua_State *L, mp_buf *buf, const unsigned char *s, size_t len) {\n    unsigned char hdr[5];\n    size_t hdrlen;\n\n    if (len < 32) {\n        hdr[0] = 0xa0 | (len&0xff); /* fix raw */\n        hdrlen = 1;\n    } else if (len <= 0xff) {\n        hdr[0] = 0xd9;\n        hdr[1] = len;\n        hdrlen = 2;\n    } else if (len <= 0xffff) {\n        hdr[0] = 0xda;\n        hdr[1] = (len&0xff00)>>8;\n        hdr[2] = len&0xff;\n        hdrlen = 3;\n    } else {\n        hdr[0] = 0xdb;\n        hdr[1] = (len&0xff000000)>>24;\n        hdr[2] = (len&0xff0000)>>16;\n        hdr[3] = (len&0xff00)>>8;\n        hdr[4] = len&0xff;\n        hdrlen = 5;\n    }\n    mp_buf_append(L,buf,hdr,hdrlen);\n    mp_buf_append(L,buf,s,len);\n}\n\n/* we assume IEEE 754 internal format for single and double precision floats. 
*/\nvoid mp_encode_double(lua_State *L, mp_buf *buf, double d) {\n    unsigned char b[9];\n    float f = d;\n\n    assert(sizeof(f) == 4 && sizeof(d) == 8);\n    if (d == (double)f) {\n        b[0] = 0xca;    /* float IEEE 754 */\n        memcpy(b+1,&f,4);\n        memrevifle(b+1,4);\n        mp_buf_append(L,buf,b,5);\n    } else if (sizeof(d) == 8) {\n        b[0] = 0xcb;    /* double IEEE 754 */\n        memcpy(b+1,&d,8);\n        memrevifle(b+1,8);\n        mp_buf_append(L,buf,b,9);\n    }\n}\n\nvoid mp_encode_int(lua_State *L, mp_buf *buf, int64_t n) {\n    unsigned char b[9];\n    size_t enclen;\n\n    if (n >= 0) {\n        if (n <= 127) {\n            b[0] = n & 0x7f;    /* positive fixnum */\n            enclen = 1;\n        } else if (n <= 0xff) {\n            b[0] = 0xcc;        /* uint 8 */\n            b[1] = n & 0xff;\n            enclen = 2;\n        } else if (n <= 0xffff) {\n            b[0] = 0xcd;        /* uint 16 */\n            b[1] = (n & 0xff00) >> 8;\n            b[2] = n & 0xff;\n            enclen = 3;\n        } else if (n <= 0xffffffffLL) {\n            b[0] = 0xce;        /* uint 32 */\n            b[1] = (n & 0xff000000) >> 24;\n            b[2] = (n & 0xff0000) >> 16;\n            b[3] = (n & 0xff00) >> 8;\n            b[4] = n & 0xff;\n            enclen = 5;\n        } else {\n            b[0] = 0xcf;        /* uint 64 */\n            b[1] = (n & 0xff00000000000000LL) >> 56;\n            b[2] = (n & 0xff000000000000LL) >> 48;\n            b[3] = (n & 0xff0000000000LL) >> 40;\n            b[4] = (n & 0xff00000000LL) >> 32;\n            b[5] = (n & 0xff000000) >> 24;\n            b[6] = (n & 0xff0000) >> 16;\n            b[7] = (n & 0xff00) >> 8;\n            b[8] = n & 0xff;\n            enclen = 9;\n        }\n    } else {\n        if (n >= -32) {\n            b[0] = ((signed char)n);   /* negative fixnum */\n            enclen = 1;\n        } else if (n >= -128) {\n            b[0] = 0xd0;        /* int 8 */\n            b[1] = n & 
0xff;\n            enclen = 2;\n        } else if (n >= -32768) {\n            b[0] = 0xd1;        /* int 16 */\n            b[1] = (n & 0xff00) >> 8;\n            b[2] = n & 0xff;\n            enclen = 3;\n        } else if (n >= -2147483648LL) {\n            b[0] = 0xd2;        /* int 32 */\n            b[1] = (n & 0xff000000) >> 24;\n            b[2] = (n & 0xff0000) >> 16;\n            b[3] = (n & 0xff00) >> 8;\n            b[4] = n & 0xff;\n            enclen = 5;\n        } else {\n            b[0] = 0xd3;        /* int 64 */\n            b[1] = (n & 0xff00000000000000LL) >> 56;\n            b[2] = (n & 0xff000000000000LL) >> 48;\n            b[3] = (n & 0xff0000000000LL) >> 40;\n            b[4] = (n & 0xff00000000LL) >> 32;\n            b[5] = (n & 0xff000000) >> 24;\n            b[6] = (n & 0xff0000) >> 16;\n            b[7] = (n & 0xff00) >> 8;\n            b[8] = n & 0xff;\n            enclen = 9;\n        }\n    }\n    mp_buf_append(L,buf,b,enclen);\n}\n\nvoid mp_encode_array(lua_State *L, mp_buf *buf, uint64_t n) {\n    unsigned char b[5];\n    size_t enclen;\n\n    if (n <= 15) {\n        b[0] = 0x90 | (n & 0xf);    /* fix array */\n        enclen = 1;\n    } else if (n <= 65535) {\n        b[0] = 0xdc;                /* array 16 */\n        b[1] = (n & 0xff00) >> 8;\n        b[2] = n & 0xff;\n        enclen = 3;\n    } else {\n        b[0] = 0xdd;                /* array 32 */\n        b[1] = (n & 0xff000000) >> 24;\n        b[2] = (n & 0xff0000) >> 16;\n        b[3] = (n & 0xff00) >> 8;\n        b[4] = n & 0xff;\n        enclen = 5;\n    }\n    mp_buf_append(L,buf,b,enclen);\n}\n\nvoid mp_encode_map(lua_State *L, mp_buf *buf, uint64_t n) {\n    unsigned char b[5];\n    int enclen;\n\n    if (n <= 15) {\n        b[0] = 0x80 | (n & 0xf);    /* fix map */\n        enclen = 1;\n    } else if (n <= 65535) {\n        b[0] = 0xde;                /* map 16 */\n        b[1] = (n & 0xff00) >> 8;\n        b[2] = n & 0xff;\n        enclen = 3;\n    } else {\n   
     b[0] = 0xdf;                /* map 32 */\n        b[1] = (n & 0xff000000) >> 24;\n        b[2] = (n & 0xff0000) >> 16;\n        b[3] = (n & 0xff00) >> 8;\n        b[4] = n & 0xff;\n        enclen = 5;\n    }\n    mp_buf_append(L,buf,b,enclen);\n}\n\n/* --------------------------- Lua types encoding --------------------------- */\n\nvoid mp_encode_lua_string(lua_State *L, mp_buf *buf) {\n    size_t len;\n    const char *s;\n\n    s = lua_tolstring(L,-1,&len);\n    mp_encode_bytes(L,buf,(const unsigned char*)s,len);\n}\n\nvoid mp_encode_lua_bool(lua_State *L, mp_buf *buf) {\n    unsigned char b = lua_toboolean(L,-1) ? 0xc3 : 0xc2;\n    mp_buf_append(L,buf,&b,1);\n}\n\n/* Lua 5.3 has a built in 64-bit integer type */\nvoid mp_encode_lua_integer(lua_State *L, mp_buf *buf) {\n#if (LUA_VERSION_NUM < 503) && BITS_32\n    lua_Number i = lua_tonumber(L,-1);\n#else\n    lua_Integer i = lua_tointeger(L,-1);\n#endif\n    mp_encode_int(L, buf, (int64_t)i);\n}\n\n/* Lua 5.2 and lower only has 64-bit doubles, so we need to\n * detect if the double may be representable as an int\n * for Lua < 5.3 */\nvoid mp_encode_lua_number(lua_State *L, mp_buf *buf) {\n    lua_Number n = lua_tonumber(L,-1);\n\n    if (IS_INT64_EQUIVALENT(n)) {\n        mp_encode_lua_integer(L, buf);\n    } else {\n        mp_encode_double(L,buf,(double)n);\n    }\n}\n\nvoid mp_encode_lua_type(lua_State *L, mp_buf *buf, int level);\n\n/* Convert a lua table into a message pack list. */\nvoid mp_encode_lua_table_as_array(lua_State *L, mp_buf *buf, int level) {\n#if LUA_VERSION_NUM < 502\n    size_t len = lua_objlen(L,-1), j;\n#else\n    size_t len = lua_rawlen(L,-1), j;\n#endif\n\n    mp_encode_array(L,buf,len);\n    luaL_checkstack(L, 1, \"in function mp_encode_lua_table_as_array\");\n    for (j = 1; j <= len; j++) {\n        lua_pushnumber(L,j);\n        lua_gettable(L,-2);\n        mp_encode_lua_type(L,buf,level+1);\n    }\n}\n\n/* Convert a lua table into a message pack key-value map. 
*/\nvoid mp_encode_lua_table_as_map(lua_State *L, mp_buf *buf, int level) {\n    size_t len = 0;\n\n    /* First step: count keys in the table. There is no other way to do it\n     * with the Lua API; we need to iterate a first time. Note that an\n     * alternative would be to do a single run, and then hack the buffer to\n     * insert the map opcodes for message pack. Too hackish for this lib. */\n    luaL_checkstack(L, 3, \"in function mp_encode_lua_table_as_map\");\n    lua_pushnil(L);\n    while(lua_next(L,-2)) {\n        lua_pop(L,1); /* remove value, keep key for next iteration. */\n        len++;\n    }\n\n    /* Step two: actually encode the map. */\n    mp_encode_map(L,buf,len);\n    lua_pushnil(L);\n    while(lua_next(L,-2)) {\n        /* Stack: ... key value */\n        lua_pushvalue(L,-2); /* Stack: ... key value key */\n        mp_encode_lua_type(L,buf,level+1); /* encode key */\n        mp_encode_lua_type(L,buf,level+1); /* encode val */\n    }\n}\n\n/* Returns true if the Lua table on top of the stack is exclusively composed\n * of numerical keys from 1 up to N, with N being the total number of\n * elements, without any hole in the middle. */\nint table_is_an_array(lua_State *L) {\n    int count = 0, max = 0;\n#if LUA_VERSION_NUM < 503\n    lua_Number n;\n#else\n    lua_Integer n;\n#endif\n\n    /* Stack top on function entry */\n    int stacktop;\n\n    stacktop = lua_gettop(L);\n\n    luaL_checkstack(L, 2, \"in function table_is_an_array\");\n    lua_pushnil(L);\n    while(lua_next(L,-2)) {\n        /* Stack: ... key value */\n        lua_pop(L,1); /* Stack: ... key */\n        /* The <= 0 check is valid here because we're comparing indexes. 
*/\n#if LUA_VERSION_NUM < 503\n        if ((LUA_TNUMBER != lua_type(L,-1)) || (n = lua_tonumber(L, -1)) <= 0 ||\n            !IS_INT_EQUIVALENT(n))\n#else\n        if (!lua_isinteger(L,-1) || (n = lua_tointeger(L, -1)) <= 0)\n#endif\n        {\n            lua_settop(L, stacktop);\n            return 0;\n        }\n        max = (n > max ? n : max);\n        count++;\n    }\n    /* We have the total number of elements in \"count\" and the max index\n     * encountered in \"max\". We can't reach this code if there are indexes\n     * <= 0. If you also note that keys cannot be repeated in a table, then\n     * max == count guarantees that all the keys from 1 to count (both\n     * included) are present. */\n    lua_settop(L, stacktop);\n    return max == count;\n}\n\n/* If the length operator returns non-zero, that is, there is at least\n * an object at key '1', we serialize to a message pack list. Otherwise\n * we use a map. */\nvoid mp_encode_lua_table(lua_State *L, mp_buf *buf, int level) {\n    if (table_is_an_array(L))\n        mp_encode_lua_table_as_array(L,buf,level);\n    else\n        mp_encode_lua_table_as_map(L,buf,level);\n}\n\nvoid mp_encode_lua_null(lua_State *L, mp_buf *buf) {\n    unsigned char b[1];\n\n    b[0] = 0xc0;\n    mp_buf_append(L,buf,b,1);\n}\n\nvoid mp_encode_lua_type(lua_State *L, mp_buf *buf, int level) {\n    int t = lua_type(L,-1);\n\n    /* Limit the encoding of nested tables to a specified maximum depth, so that\n     * we survive when called against circular references in tables. 
*/\n    if (t == LUA_TTABLE && level == LUACMSGPACK_MAX_NESTING) t = LUA_TNIL;\n    switch(t) {\n    case LUA_TSTRING: mp_encode_lua_string(L,buf); break;\n    case LUA_TBOOLEAN: mp_encode_lua_bool(L,buf); break;\n    case LUA_TNUMBER:\n    #if LUA_VERSION_NUM < 503\n        mp_encode_lua_number(L,buf); break;\n    #else\n        if (lua_isinteger(L, -1)) {\n            mp_encode_lua_integer(L, buf);\n        } else {\n            mp_encode_lua_number(L, buf);\n        }\n        break;\n    #endif\n    case LUA_TTABLE: mp_encode_lua_table(L,buf,level); break;\n    default: mp_encode_lua_null(L,buf); break;\n    }\n    lua_pop(L,1);\n}\n\n/*\n * Packs all arguments as a stream for multiple unpacking later.\n * Returns an error if no arguments are provided.\n */\nint mp_pack(lua_State *L) {\n    int nargs = lua_gettop(L);\n    int i;\n    mp_buf *buf;\n\n    if (nargs == 0)\n        return luaL_argerror(L, 0, \"MessagePack pack needs input.\");\n\n    if (!lua_checkstack(L, nargs))\n        return luaL_argerror(L, 0, \"Too many arguments for MessagePack pack.\");\n\n    buf = mp_buf_new(L);\n    for(i = 1; i <= nargs; i++) {\n        /* Copy argument i to top of stack for _encode processing;\n         * the encode function pops it from the stack when complete. */\n        luaL_checkstack(L, 1, \"in function mp_check\");\n        lua_pushvalue(L, i);\n\n        mp_encode_lua_type(L,buf,0);\n\n        lua_pushlstring(L,(char*)buf->b,buf->len);\n\n        /* Reuse the buffer for the next operation by\n         * setting its free count to the total buffer size\n         * and the current position to zero. 
*/\n        buf->free += buf->len;\n        buf->len = 0;\n    }\n    mp_buf_free(L, buf);\n\n    /* Concatenate all nargs buffers together */\n    lua_concat(L, nargs);\n    return 1;\n}\n\n/* ------------------------------- Decoding --------------------------------- */\n\nvoid mp_decode_to_lua_type(lua_State *L, mp_cur *c);\n\nvoid mp_decode_to_lua_array(lua_State *L, mp_cur *c, size_t len) {\n    assert(len <= UINT_MAX);\n    int index = 1;\n\n    lua_newtable(L);\n    luaL_checkstack(L, 1, \"in function mp_decode_to_lua_array\");\n    while(len--) {\n        lua_pushnumber(L,index++);\n        mp_decode_to_lua_type(L,c);\n        if (c->err) return;\n        lua_settable(L,-3);\n    }\n}\n\nvoid mp_decode_to_lua_hash(lua_State *L, mp_cur *c, size_t len) {\n    assert(len <= UINT_MAX);\n    lua_newtable(L);\n    while(len--) {\n        mp_decode_to_lua_type(L,c); /* key */\n        if (c->err) return;\n        mp_decode_to_lua_type(L,c); /* value */\n        if (c->err) return;\n        lua_settable(L,-3);\n    }\n}\n\n/* Decode a Message Pack raw object pointed by the string cursor 'c' to\n * a Lua type, that is left as the only result on the stack. */\nvoid mp_decode_to_lua_type(lua_State *L, mp_cur *c) {\n    mp_cur_need(c,1);\n\n    /* If we return more than 18 elements, we must resize the stack to\n     * fit all our return values.  
But, there is no way to\n     * determine how many objects a msgpack will unpack to up front, so\n     * we request a +1 larger stack on each iteration (noop if stack is\n     * big enough, and when stack does require resize it doubles in size) */\n    luaL_checkstack(L, 1,\n        \"too many return values at once; \"\n        \"use unpack_one or unpack_limit instead.\");\n\n    switch(c->p[0]) {\n    case 0xcc:  /* uint 8 */\n        mp_cur_need(c,2);\n        lua_pushunsigned(L,c->p[1]);\n        mp_cur_consume(c,2);\n        break;\n    case 0xd0:  /* int 8 */\n        mp_cur_need(c,2);\n        lua_pushinteger(L,(signed char)c->p[1]);\n        mp_cur_consume(c,2);\n        break;\n    case 0xcd:  /* uint 16 */\n        mp_cur_need(c,3);\n        lua_pushunsigned(L,\n            (c->p[1] << 8) |\n             c->p[2]);\n        mp_cur_consume(c,3);\n        break;\n    case 0xd1:  /* int 16 */\n        mp_cur_need(c,3);\n        lua_pushinteger(L,(int16_t)\n            (c->p[1] << 8) |\n             c->p[2]);\n        mp_cur_consume(c,3);\n        break;\n    case 0xce:  /* uint 32 */\n        mp_cur_need(c,5);\n        lua_pushunsigned(L,\n            ((uint32_t)c->p[1] << 24) |\n            ((uint32_t)c->p[2] << 16) |\n            ((uint32_t)c->p[3] << 8) |\n             (uint32_t)c->p[4]);\n        mp_cur_consume(c,5);\n        break;\n    case 0xd2:  /* int 32 */\n        mp_cur_need(c,5);\n        lua_pushinteger(L,\n            ((int32_t)c->p[1] << 24) |\n            ((int32_t)c->p[2] << 16) |\n            ((int32_t)c->p[3] << 8) |\n             (int32_t)c->p[4]);\n        mp_cur_consume(c,5);\n        break;\n    case 0xcf:  /* uint 64 */\n        mp_cur_need(c,9);\n        lua_pushunsigned(L,\n            ((uint64_t)c->p[1] << 56) |\n            ((uint64_t)c->p[2] << 48) |\n            ((uint64_t)c->p[3] << 40) |\n            ((uint64_t)c->p[4] << 32) |\n            ((uint64_t)c->p[5] << 24) |\n            ((uint64_t)c->p[6] << 16) |\n            
((uint64_t)c->p[7] << 8) |\n             (uint64_t)c->p[8]);\n        mp_cur_consume(c,9);\n        break;\n    case 0xd3:  /* int 64 */\n        mp_cur_need(c,9);\n#if LUA_VERSION_NUM < 503\n        lua_pushnumber(L,\n#else\n        lua_pushinteger(L,\n#endif\n            ((int64_t)c->p[1] << 56) |\n            ((int64_t)c->p[2] << 48) |\n            ((int64_t)c->p[3] << 40) |\n            ((int64_t)c->p[4] << 32) |\n            ((int64_t)c->p[5] << 24) |\n            ((int64_t)c->p[6] << 16) |\n            ((int64_t)c->p[7] << 8) |\n             (int64_t)c->p[8]);\n        mp_cur_consume(c,9);\n        break;\n    case 0xc0:  /* nil */\n        lua_pushnil(L);\n        mp_cur_consume(c,1);\n        break;\n    case 0xc3:  /* true */\n        lua_pushboolean(L,1);\n        mp_cur_consume(c,1);\n        break;\n    case 0xc2:  /* false */\n        lua_pushboolean(L,0);\n        mp_cur_consume(c,1);\n        break;\n    case 0xca:  /* float */\n        mp_cur_need(c,5);\n        assert(sizeof(float) == 4);\n        {\n            float f;\n            memcpy(&f,c->p+1,4);\n            memrevifle(&f,4);\n            lua_pushnumber(L,f);\n            mp_cur_consume(c,5);\n        }\n        break;\n    case 0xcb:  /* double */\n        mp_cur_need(c,9);\n        assert(sizeof(double) == 8);\n        {\n            double d;\n            memcpy(&d,c->p+1,8);\n            memrevifle(&d,8);\n            lua_pushnumber(L,d);\n            mp_cur_consume(c,9);\n        }\n        break;\n    case 0xd9:  /* raw 8 */\n        mp_cur_need(c,2);\n        {\n            size_t l = c->p[1];\n            mp_cur_need(c,2+l);\n            lua_pushlstring(L,(char*)c->p+2,l);\n            mp_cur_consume(c,2+l);\n        }\n        break;\n    case 0xda:  /* raw 16 */\n        mp_cur_need(c,3);\n        {\n            size_t l = (c->p[1] << 8) | c->p[2];\n            mp_cur_need(c,3+l);\n            lua_pushlstring(L,(char*)c->p+3,l);\n            mp_cur_consume(c,3+l);\n        }\n    
    break;\n    case 0xdb:  /* raw 32 */\n        mp_cur_need(c,5);\n        {\n            size_t l = ((size_t)c->p[1] << 24) |\n                       ((size_t)c->p[2] << 16) |\n                       ((size_t)c->p[3] << 8) |\n                       (size_t)c->p[4];\n            mp_cur_consume(c,5);\n            mp_cur_need(c,l);\n            lua_pushlstring(L,(char*)c->p,l);\n            mp_cur_consume(c,l);\n        }\n        break;\n    case 0xdc:  /* array 16 */\n        mp_cur_need(c,3);\n        {\n            size_t l = (c->p[1] << 8) | c->p[2];\n            mp_cur_consume(c,3);\n            mp_decode_to_lua_array(L,c,l);\n        }\n        break;\n    case 0xdd:  /* array 32 */\n        mp_cur_need(c,5);\n        {\n            size_t l = ((size_t)c->p[1] << 24) |\n                       ((size_t)c->p[2] << 16) |\n                       ((size_t)c->p[3] << 8) |\n                       (size_t)c->p[4];\n            mp_cur_consume(c,5);\n            mp_decode_to_lua_array(L,c,l);\n        }\n        break;\n    case 0xde:  /* map 16 */\n        mp_cur_need(c,3);\n        {\n            size_t l = (c->p[1] << 8) | c->p[2];\n            mp_cur_consume(c,3);\n            mp_decode_to_lua_hash(L,c,l);\n        }\n        break;\n    case 0xdf:  /* map 32 */\n        mp_cur_need(c,5);\n        {\n            size_t l = ((size_t)c->p[1] << 24) |\n                       ((size_t)c->p[2] << 16) |\n                       ((size_t)c->p[3] << 8) |\n                       (size_t)c->p[4];\n            mp_cur_consume(c,5);\n            mp_decode_to_lua_hash(L,c,l);\n        }\n        break;\n    default:    /* types that can't be identified by first byte value. 
*/\n        if ((c->p[0] & 0x80) == 0) {   /* positive fixnum */\n            lua_pushunsigned(L,c->p[0]);\n            mp_cur_consume(c,1);\n        } else if ((c->p[0] & 0xe0) == 0xe0) {  /* negative fixnum */\n            lua_pushinteger(L,(signed char)c->p[0]);\n            mp_cur_consume(c,1);\n        } else if ((c->p[0] & 0xe0) == 0xa0) {  /* fix raw */\n            size_t l = c->p[0] & 0x1f;\n            mp_cur_need(c,1+l);\n            lua_pushlstring(L,(char*)c->p+1,l);\n            mp_cur_consume(c,1+l);\n        } else if ((c->p[0] & 0xf0) == 0x90) {  /* fix array */\n            size_t l = c->p[0] & 0xf;\n            mp_cur_consume(c,1);\n            mp_decode_to_lua_array(L,c,l);\n        } else if ((c->p[0] & 0xf0) == 0x80) {  /* fix map */\n            size_t l = c->p[0] & 0xf;\n            mp_cur_consume(c,1);\n            mp_decode_to_lua_hash(L,c,l);\n        } else {\n            c->err = MP_CUR_ERROR_BADFMT;\n        }\n    }\n}\n\nint mp_unpack_full(lua_State *L, lua_Integer limit, lua_Integer offset) {\n    size_t len;\n    const char *s;\n    mp_cur c;\n    int cnt; /* Number of objects unpacked */\n    int decode_all = (!limit && !offset);\n\n    s = luaL_checklstring(L,1,&len); /* if no match, exits */\n\n    if (offset < 0 || limit < 0) /* requesting negative off or lim is invalid */\n        return luaL_error(L,\n            \"Invalid request to unpack with offset of %d and limit of %d.\",\n            (int) offset, (int) limit);\n    else if (offset > len)\n        return luaL_error(L,\n            \"Start offset %d greater than input length %d.\", (int) offset, (int) len);\n\n    if (decode_all) limit = INT_MAX;\n\n    mp_cur_init(&c,(const unsigned char *)s+offset,len-offset);\n\n    /* We loop over the decode because this could be a stream\n     * of multiple top-level values serialized together */\n    for(cnt = 0; c.left > 0 && cnt < limit; cnt++) {\n        mp_decode_to_lua_type(L,&c);\n\n        if (c.err == MP_CUR_ERROR_EOF) {\n     
       return luaL_error(L,\"Missing bytes in input.\");\n        } else if (c.err == MP_CUR_ERROR_BADFMT) {\n            return luaL_error(L,\"Bad data format in input.\");\n        }\n    }\n\n    if (!decode_all) {\n        /* c->left is the remaining size of the input buffer.\n         * subtract the unprocessed size from the entire buffer size\n         * to get our next start offset */\n        size_t new_offset = len - c.left;\n        if (new_offset > LONG_MAX) abort();\n\n        luaL_checkstack(L, 1, \"in function mp_unpack_full\");\n\n        /* Return offset -1 when we have processed the entire buffer. */\n        lua_pushinteger(L, c.left == 0 ? -1 : (lua_Integer) new_offset);\n        /* Results are returned with the arg elements still\n         * in place. Lua takes care of only returning\n         * elements above the args for us.\n         * In this case, we have one arg on the stack\n         * for this function, so we insert our first return\n         * value at position 2. 
*/\n        lua_insert(L, 2);\n        cnt += 1; /* increase return count by one to make room for offset */\n    }\n\n    return cnt;\n}\n\nint mp_unpack(lua_State *L) {\n    return mp_unpack_full(L, 0, 0);\n}\n\nint mp_unpack_one(lua_State *L) {\n    lua_Integer offset = luaL_optinteger(L, 2, 0);\n    /* Variable pop because offset may not exist */\n    lua_pop(L, lua_gettop(L)-1);\n    return mp_unpack_full(L, 1, offset);\n}\n\nint mp_unpack_limit(lua_State *L) {\n    lua_Integer limit = luaL_checkinteger(L, 2);\n    lua_Integer offset = luaL_optinteger(L, 3, 0);\n    /* Variable pop because offset may not exist */\n    lua_pop(L, lua_gettop(L)-1);\n\n    return mp_unpack_full(L, limit, offset);\n}\n\nint mp_safe(lua_State *L) {\n    int argc, err, total_results;\n\n    argc = lua_gettop(L);\n\n    /* This adds our function to the bottom of the stack\n     * (the \"call this function\" position) */\n    lua_pushvalue(L, lua_upvalueindex(1));\n    lua_insert(L, 1);\n\n    err = lua_pcall(L, argc, LUA_MULTRET, 0);\n    total_results = lua_gettop(L);\n\n    if (!err) {\n        return total_results;\n    } else {\n        lua_pushnil(L);\n        lua_insert(L,-2);\n        return 2;\n    }\n}\n\n/* -------------------------------------------------------------------------- */\nconst struct luaL_Reg cmds[] = {\n    {\"pack\", mp_pack},\n    {\"unpack\", mp_unpack},\n    {\"unpack_one\", mp_unpack_one},\n    {\"unpack_limit\", mp_unpack_limit},\n    {0}\n};\n\nint luaopen_create(lua_State *L) {\n    int i;\n    /* Manually construct our module table instead of\n     * relying on _register or _newlib */\n    lua_newtable(L);\n\n    for (i = 0; i < (sizeof(cmds)/sizeof(*cmds) - 1); i++) {\n        lua_pushcfunction(L, cmds[i].func);\n        lua_setfield(L, -2, cmds[i].name);\n    }\n\n    /* Add metadata */\n    lua_pushliteral(L, LUACMSGPACK_NAME);\n    lua_setfield(L, -2, \"_NAME\");\n    lua_pushliteral(L, LUACMSGPACK_VERSION);\n    lua_setfield(L, -2, 
\"_VERSION\");\n    lua_pushliteral(L, LUACMSGPACK_COPYRIGHT);\n    lua_setfield(L, -2, \"_COPYRIGHT\");\n    lua_pushliteral(L, LUACMSGPACK_DESCRIPTION);\n    lua_setfield(L, -2, \"_DESCRIPTION\");\n    return 1;\n}\n\nLUALIB_API int luaopen_cmsgpack(lua_State *L) {\n    luaopen_create(L);\n\n#if LUA_VERSION_NUM < 502\n    /* Register name globally for 5.1 */\n    lua_pushvalue(L, -1);\n    lua_setglobal(L, LUACMSGPACK_NAME);\n#endif\n\n    return 1;\n}\n\nLUALIB_API int luaopen_cmsgpack_safe(lua_State *L) {\n    int i;\n\n    luaopen_cmsgpack(L);\n\n    /* Wrap all functions in the safe handler */\n    for (i = 0; i < (sizeof(cmds)/sizeof(*cmds) - 1); i++) {\n        lua_getfield(L, -1, cmds[i].name);\n        lua_pushcclosure(L, mp_safe, 1);\n        lua_setfield(L, -2, cmds[i].name);\n    }\n\n#if LUA_VERSION_NUM < 502\n    /* Register name globally for 5.1 */\n    lua_pushvalue(L, -1);\n    lua_setglobal(L, LUACMSGPACK_SAFE_NAME);\n#endif\n\n    return 1;\n}\n\n/******************************************************************************\n* Copyright (C) 2012 Salvatore Sanfilippo.  
All rights reserved.\n*\n* Permission is hereby granted, free of charge, to any person obtaining\n* a copy of this software and associated documentation files (the\n* \"Software\"), to deal in the Software without restriction, including\n* without limitation the rights to use, copy, modify, merge, publish,\n* distribute, sublicense, and/or sell copies of the Software, and to\n* permit persons to whom the Software is furnished to do so, subject to\n* the following conditions:\n*\n* The above copyright notice and this permission notice shall be\n* included in all copies or substantial portions of the Software.\n*\n* THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n******************************************************************************/\n"
  },
  {
    "path": "deps/lua/src/lua_struct.c",
"content": "/*\n** {======================================================\n** Library for packing/unpacking structures.\n** $Id: struct.c,v 1.7 2018/05/11 22:04:31 roberto Exp $\n** See Copyright Notice at the end of this file\n** =======================================================\n*/\n/*\n** Valid formats:\n** > - big endian\n** < - little endian\n** ![num] - alignment\n** x - padding\n** b/B - signed/unsigned byte\n** h/H - signed/unsigned short\n** l/L - signed/unsigned long\n** T   - size_t\n** i/In - signed/unsigned integer with size 'n' (default is size of int)\n** cn - sequence of 'n' chars (from/to a string); when packing, n==0 means\n        the whole string; when unpacking, n==0 means use the previous\n        read number as the string length\n** s - zero-terminated string\n** f - float\n** d - double\n** ' ' - ignored\n*/\n\n\n#include <assert.h>\n#include <ctype.h>\n#include <limits.h>\n#include <stddef.h>\n#include <string.h>\n\n\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\n\n#if (LUA_VERSION_NUM >= 502)\n\n#define luaL_register(L,n,f)\tluaL_newlib(L,f)\n\n#endif\n\n\n/* basic integer type */\n#if !defined(STRUCT_INT)\n#define STRUCT_INT\tlong\n#endif\n\ntypedef STRUCT_INT Inttype;\n\n/* corresponding unsigned version */\ntypedef unsigned STRUCT_INT Uinttype;\n\n\n/* maximum size (in bytes) for integral types */\n#define MAXINTSIZE\t32\n\n/* is 'x' a power of 2? */\n#define isp2(x)\t\t((x) > 0 && ((x) & ((x) - 1)) == 0)\n\n/* dummy structure to get alignment requirements */\nstruct cD {\n  char c;\n  double d;\n};\n\n\n#define PADDING\t\t(sizeof(struct cD) - sizeof(double))\n#define MAXALIGN  \t(PADDING > sizeof(int) ? 
PADDING : sizeof(int))\n\n\n/* endian options */\n#define BIG\t0\n#define LITTLE\t1\n\n\nstatic union {\n  int dummy;\n  char endian;\n} const native = {1};\n\n\ntypedef struct Header {\n  int endian;\n  int align;\n} Header;\n\n\nstatic int getnum (lua_State *L, const char **fmt, int df) {\n  if (!isdigit(**fmt))  /* no number? */\n    return df;  /* return default value */\n  else {\n    int a = 0;\n    do {\n      if (a > (INT_MAX / 10) || a * 10 > (INT_MAX - (**fmt - '0')))\n        luaL_error(L, \"integral size overflow\");\n      a = a*10 + *((*fmt)++) - '0';\n    } while (isdigit(**fmt));\n    return a;\n  }\n}\n\n\n#define defaultoptions(h)\t((h)->endian = native.endian, (h)->align = 1)\n\n\n\nstatic size_t optsize (lua_State *L, char opt, const char **fmt) {\n  switch (opt) {\n    case 'B': case 'b': return sizeof(char);\n    case 'H': case 'h': return sizeof(short);\n    case 'L': case 'l': return sizeof(long);\n    case 'T': return sizeof(size_t);\n    case 'f':  return sizeof(float);\n    case 'd':  return sizeof(double);\n    case 'x': return 1;\n    case 'c': return getnum(L, fmt, 1);\n    case 'i': case 'I': {\n      int sz = getnum(L, fmt, sizeof(int));\n      if (sz > MAXINTSIZE)\n        luaL_error(L, \"integral size %d is larger than limit of %d\",\n                       sz, MAXINTSIZE);\n      return sz;\n    }\n    default: return 0;  /* other cases do not need alignment */\n  }\n}\n\n\n/*\n** return number of bytes needed to align an element of size 'size'\n** at current position 'len'\n*/\nstatic int gettoalign (size_t len, Header *h, int opt, size_t size) {\n  if (size == 0 || opt == 'c') return 0;\n  if (size > (size_t)h->align)\n    size = h->align;  /* respect max. 
 alignment */\n  return (size - (len & (size - 1))) & (size - 1);\n}\n\n\n/*\n** options to control endianness and alignment\n*/\nstatic void controloptions (lua_State *L, int opt, const char **fmt,\n                            Header *h) {\n  switch (opt) {\n    case  ' ': return;  /* ignore white spaces */\n    case '>': h->endian = BIG; return;\n    case '<': h->endian = LITTLE; return;\n    case '!': {\n      int a = getnum(L, fmt, MAXALIGN);\n      if (!isp2(a))\n        luaL_error(L, \"alignment %d is not a power of 2\", a);\n      h->align = a;\n      return;\n    }\n    default: {\n      const char *msg = lua_pushfstring(L, \"invalid format option '%c'\", opt);\n      luaL_argerror(L, 1, msg);\n    }\n  }\n}\n\n\nstatic void putinteger (lua_State *L, luaL_Buffer *b, int arg, int endian,\n                        int size) {\n  lua_Number n = luaL_checknumber(L, arg);\n  Uinttype value;\n  char buff[MAXINTSIZE];\n  if (n < 0)\n    value = (Uinttype)(Inttype)n;\n  else\n    value = (Uinttype)n;\n  if (endian == LITTLE) {\n    int i;\n    for (i = 0; i < size; i++) {\n      buff[i] = (value & 0xff);\n      value >>= 8;\n    }\n  }\n  else {\n    int i;\n    for (i = size - 1; i >= 0; i--) {\n      buff[i] = (value & 0xff);\n      value >>= 8;\n    }\n  }\n  luaL_addlstring(b, buff, size);\n}\n\n\nstatic void correctbytes (char *b, int size, int endian) {\n  if (endian != native.endian) {\n    int i = 0;\n    while (i < --size) {\n      char temp = b[i];\n      b[i++] = b[size];\n      b[size] = temp;\n    }\n  }\n}\n\n\nstatic int b_pack (lua_State *L) {\n  luaL_Buffer b;\n  const char *fmt = luaL_checkstring(L, 1);\n  Header h;\n  int arg = 2;\n  size_t totalsize = 0;\n  defaultoptions(&h);\n  lua_pushnil(L);  /* mark to separate arguments from string buffer */\n  luaL_buffinit(L, &b);\n  while (*fmt != '\\0') {\n    int opt = *fmt++;\n    size_t size = optsize(L, opt, &fmt);\n    int toalign = gettoalign(totalsize, &h, opt, size);\n    totalsize += toalign;\n   
 while (toalign-- > 0) luaL_addchar(&b, '\\0');\n    switch (opt) {\n      case 'b': case 'B': case 'h': case 'H':\n      case 'l': case 'L': case 'T': case 'i': case 'I': {  /* integer types */\n        putinteger(L, &b, arg++, h.endian, size);\n        break;\n      }\n      case 'x': {\n        luaL_addchar(&b, '\\0');\n        break;\n      }\n      case 'f': {\n        float f = (float)luaL_checknumber(L, arg++);\n        correctbytes((char *)&f, size, h.endian);\n        luaL_addlstring(&b, (char *)&f, size);\n        break;\n      }\n      case 'd': {\n        double d = luaL_checknumber(L, arg++);\n        correctbytes((char *)&d, size, h.endian);\n        luaL_addlstring(&b, (char *)&d, size);\n        break;\n      }\n      case 'c': case 's': {\n        size_t l;\n        const char *s = luaL_checklstring(L, arg++, &l);\n        if (size == 0) size = l;\n        luaL_argcheck(L, l >= (size_t)size, arg, \"string too short\");\n        luaL_addlstring(&b, s, size);\n        if (opt == 's') {\n          luaL_addchar(&b, '\\0');  /* add zero at the end */\n          size++;\n        }\n        break;\n      }\n      default: controloptions(L, opt, &fmt, &h);\n    }\n    totalsize += size;\n  }\n  luaL_pushresult(&b);\n  return 1;\n}\n\n\nstatic lua_Number getinteger (const char *buff, int endian,\n                        int issigned, int size) {\n  Uinttype l = 0;\n  int i;\n  if (endian == BIG) {\n    for (i = 0; i < size; i++) {\n      l <<= 8;\n      l |= (Uinttype)(unsigned char)buff[i];\n    }\n  }\n  else {\n    for (i = size - 1; i >= 0; i--) {\n      l <<= 8;\n      l |= (Uinttype)(unsigned char)buff[i];\n    }\n  }\n  if (!issigned)\n    return (lua_Number)l;\n  else {  /* signed format */\n    Uinttype mask = (Uinttype)(~((Uinttype)0)) << (size*8 - 1);\n    if (l & mask)  /* negative value? 
*/\n      l |= mask;  /* sign extension */\n    return (lua_Number)(Inttype)l;\n  }\n}\n\n\nstatic int b_unpack (lua_State *L) {\n  Header h;\n  const char *fmt = luaL_checkstring(L, 1);\n  size_t ld;\n  const char *data = luaL_checklstring(L, 2, &ld);\n  size_t pos = luaL_optinteger(L, 3, 1);\n  luaL_argcheck(L, pos > 0, 3, \"offset must be 1 or greater\");\n  pos--; /* Lua indexes are 1-based, but here we want 0-based for C\n          * pointer math. */\n  int n = 0;  /* number of results */\n  defaultoptions(&h);\n  while (*fmt) {\n    int opt = *fmt++;\n    size_t size = optsize(L, opt, &fmt);\n    pos += gettoalign(pos, &h, opt, size);\n    luaL_argcheck(L, size <= ld && pos <= ld - size,\n                   2, \"data string too short\");\n    /* stack space for item + next position */\n    luaL_checkstack(L, 2, \"too many results\");\n    switch (opt) {\n      case 'b': case 'B': case 'h': case 'H':\n      case 'l': case 'L': case 'T': case 'i':  case 'I': {  /* integer types */\n        int issigned = islower(opt);\n        lua_Number res = getinteger(data+pos, h.endian, issigned, size);\n        lua_pushnumber(L, res); n++;\n        break;\n      }\n      case 'x': {\n        break;\n      }\n      case 'f': {\n        float f;\n        memcpy(&f, data+pos, size);\n        correctbytes((char *)&f, sizeof(f), h.endian);\n        lua_pushnumber(L, f); n++;\n        break;\n      }\n      case 'd': {\n        double d;\n        memcpy(&d, data+pos, size);\n        correctbytes((char *)&d, sizeof(d), h.endian);\n        lua_pushnumber(L, d); n++;\n        break;\n      }\n      case 'c': {\n        if (size == 0) {\n          if (n == 0 || !lua_isnumber(L, -1))\n            luaL_error(L, \"format 'c0' needs a previous size\");\n          size = lua_tonumber(L, -1);\n          lua_pop(L, 1); n--;\n          luaL_argcheck(L, size <= ld && pos <= ld - size,\n                           2, \"data string too short\");\n        }\n        lua_pushlstring(L, 
data+pos, size); n++;\n        break;\n      }\n      case 's': {\n        const char *e = (const char *)memchr(data+pos, '\\0', ld - pos);\n        if (e == NULL)\n          luaL_error(L, \"unfinished string in data\");\n        size = (e - (data+pos)) + 1;\n        lua_pushlstring(L, data+pos, size - 1); n++;\n        break;\n      }\n      default: controloptions(L, opt, &fmt, &h);\n    }\n    pos += size;\n  }\n  lua_pushinteger(L, pos + 1);  /* next position */\n  return n + 1;\n}\n\n\nstatic int b_size (lua_State *L) {\n  Header h;\n  const char *fmt = luaL_checkstring(L, 1);\n  size_t pos = 0;\n  defaultoptions(&h);\n  while (*fmt) {\n    int opt = *fmt++;\n    size_t size = optsize(L, opt, &fmt);\n    pos += gettoalign(pos, &h, opt, size);\n    if (opt == 's')\n      luaL_argerror(L, 1, \"option 's' has no fixed size\");\n    else if (opt == 'c' && size == 0)\n      luaL_argerror(L, 1, \"option 'c0' has no fixed size\");\n    if (!isalnum(opt))\n      controloptions(L, opt, &fmt, &h);\n    pos += size;\n  }\n  lua_pushinteger(L, pos);\n  return 1;\n}\n\n/* }====================================================== */\n\n\n\nstatic const struct luaL_Reg thislib[] = {\n  {\"pack\", b_pack},\n  {\"unpack\", b_unpack},\n  {\"size\", b_size},\n  {NULL, NULL}\n};\n\n\nLUALIB_API int luaopen_struct (lua_State *L);\n\nLUALIB_API int luaopen_struct (lua_State *L) {\n  luaL_register(L, \"struct\", thislib);\n  return 1;\n}\n\n\n/******************************************************************************\n* Copyright (C) 2010-2018 Lua.org, PUC-Rio.  
All rights reserved.\n*\n* Permission is hereby granted, free of charge, to any person obtaining\n* a copy of this software and associated documentation files (the\n* \"Software\"), to deal in the Software without restriction, including\n* without limitation the rights to use, copy, modify, merge, publish,\n* distribute, sublicense, and/or sell copies of the Software, and to\n* permit persons to whom the Software is furnished to do so, subject to\n* the following conditions:\n*\n* The above copyright notice and this permission notice shall be\n* included in all copies or substantial portions of the Software.\n*\n* THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n******************************************************************************/\n\n"
  },
  {
    "path": "deps/lua/src/luac.c",
    "content": "/*\n** $Id: luac.c,v 1.54 2006/06/02 17:37:11 lhf Exp $\n** Lua compiler (saves bytecodes to files; also list bytecodes)\n** See Copyright Notice in lua.h\n*/\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define luac_c\n#define LUA_CORE\n\n#include \"lua.h\"\n#include \"lauxlib.h\"\n\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lstring.h\"\n#include \"lundump.h\"\n\n#define PROGNAME\t\"luac\"\t\t/* default program name */\n#define\tOUTPUT\t\tPROGNAME \".out\"\t/* default output file */\n\nstatic int listing=0;\t\t\t/* list bytecodes? */\nstatic int dumping=1;\t\t\t/* dump bytecodes? */\nstatic int stripping=0;\t\t\t/* strip debug information? */\nstatic char Output[]={ OUTPUT };\t/* default output file name */\nstatic const char* output=Output;\t/* actual output file name */\nstatic const char* progname=PROGNAME;\t/* actual program name */\n\nstatic void fatal(const char* message)\n{\n fprintf(stderr,\"%s: %s\\n\",progname,message);\n exit(EXIT_FAILURE);\n}\n\nstatic void cannot(const char* what)\n{\n fprintf(stderr,\"%s: cannot %s %s: %s\\n\",progname,what,output,strerror(errno));\n exit(EXIT_FAILURE);\n}\n\nstatic void usage(const char* message)\n{\n if (*message=='-')\n  fprintf(stderr,\"%s: unrecognized option \" LUA_QS \"\\n\",progname,message);\n else\n  fprintf(stderr,\"%s: %s\\n\",progname,message);\n fprintf(stderr,\n \"usage: %s [options] [filenames].\\n\"\n \"Available options are:\\n\"\n \"  -        process stdin\\n\"\n \"  -l       list\\n\"\n \"  -o name  output to file \" LUA_QL(\"name\") \" (default is \\\"%s\\\")\\n\"\n \"  -p       parse only\\n\"\n \"  -s       strip debug information\\n\"\n \"  -v       show version information\\n\"\n \"  --       stop handling options\\n\",\n progname,Output);\n exit(EXIT_FAILURE);\n}\n\n#define\tIS(s)\t(strcmp(argv[i],s)==0)\n\nstatic int doargs(int argc, char* argv[])\n{\n 
int i;\n int version=0;\n if (argv[0]!=NULL && *argv[0]!=0) progname=argv[0];\n for (i=1; i<argc; i++)\n {\n  if (*argv[i]!='-')\t\t\t/* end of options; keep it */\n   break;\n  else if (IS(\"--\"))\t\t\t/* end of options; skip it */\n  {\n   ++i;\n   if (version) ++version;\n   break;\n  }\n  else if (IS(\"-\"))\t\t\t/* end of options; use stdin */\n   break;\n  else if (IS(\"-l\"))\t\t\t/* list */\n   ++listing;\n  else if (IS(\"-o\"))\t\t\t/* output file */\n  {\n   output=argv[++i];\n   if (output==NULL || *output==0) usage(LUA_QL(\"-o\") \" needs argument\");\n   if (IS(\"-\")) output=NULL;\n  }\n  else if (IS(\"-p\"))\t\t\t/* parse only */\n   dumping=0;\n  else if (IS(\"-s\"))\t\t\t/* strip debug information */\n   stripping=1;\n  else if (IS(\"-v\"))\t\t\t/* show version */\n   ++version;\n  else\t\t\t\t\t/* unknown option */\n   usage(argv[i]);\n }\n if (i==argc && (listing || !dumping))\n {\n  dumping=0;\n  argv[--i]=Output;\n }\n if (version)\n {\n  printf(\"%s  %s\\n\",LUA_RELEASE,LUA_COPYRIGHT);\n  if (version==argc-1) exit(EXIT_SUCCESS);\n }\n return i;\n}\n\n#define toproto(L,i) (clvalue(L->top+(i))->l.p)\n\nstatic const Proto* combine(lua_State* L, int n)\n{\n if (n==1)\n  return toproto(L,-1);\n else\n {\n  int i,pc;\n  Proto* f=luaF_newproto(L);\n  setptvalue2s(L,L->top,f); incr_top(L);\n  f->source=luaS_newliteral(L,\"=(\" PROGNAME \")\");\n  f->maxstacksize=1;\n  pc=2*n+1;\n  f->code=luaM_newvector(L,pc,Instruction);\n  f->sizecode=pc;\n  f->p=luaM_newvector(L,n,Proto*);\n  f->sizep=n;\n  pc=0;\n  for (i=0; i<n; i++)\n  {\n   f->p[i]=toproto(L,i-n-1);\n   f->code[pc++]=CREATE_ABx(OP_CLOSURE,0,i);\n   f->code[pc++]=CREATE_ABC(OP_CALL,0,1,1);\n  }\n  f->code[pc++]=CREATE_ABC(OP_RETURN,0,1,0);\n  return f;\n }\n}\n\nstatic int writer(lua_State* L, const void* p, size_t size, void* u)\n{\n UNUSED(L);\n return (fwrite(p,size,1,(FILE*)u)!=1) && (size!=0);\n}\n\nstruct Smain {\n int argc;\n char** argv;\n};\n\nstatic int pmain(lua_State* L)\n{\n struct 
Smain* s = (struct Smain*)lua_touserdata(L, 1);\n int argc=s->argc;\n char** argv=s->argv;\n const Proto* f;\n int i;\n if (!lua_checkstack(L,argc)) fatal(\"too many input files\");\n for (i=0; i<argc; i++)\n {\n  const char* filename=IS(\"-\") ? NULL : argv[i];\n  if (luaL_loadfile(L,filename)!=0) fatal(lua_tostring(L,-1));\n }\n f=combine(L,argc);\n if (listing) luaU_print(f,listing>1);\n if (dumping)\n {\n  FILE* D= (output==NULL) ? stdout : fopen(output,\"wb\");\n  if (D==NULL) cannot(\"open\");\n  lua_lock(L);\n  luaU_dump(L,f,writer,D,stripping);\n  lua_unlock(L);\n  if (ferror(D)) cannot(\"write\");\n  if (fclose(D)) cannot(\"close\");\n }\n return 0;\n}\n\nint main(int argc, char* argv[])\n{\n lua_State* L;\n struct Smain s;\n int i=doargs(argc,argv);\n argc-=i; argv+=i;\n if (argc<=0) usage(\"no input files given\");\n L=lua_open();\n if (L==NULL) fatal(\"not enough memory for state\");\n s.argc=argc;\n s.argv=argv;\n if (lua_cpcall(L,pmain,&s)!=0) fatal(lua_tostring(L,-1));\n lua_close(L);\n return EXIT_SUCCESS;\n}\n"
  },
  {
    "path": "deps/lua/src/luaconf.h",
"content": "/*\n** $Id: luaconf.h,v 1.82.1.7 2008/02/11 16:25:08 roberto Exp $\n** Configuration file for Lua\n** See Copyright Notice in lua.h\n*/\n\n\n#ifndef lconfig_h\n#define lconfig_h\n\n#include <limits.h>\n#include <stddef.h>\n\n\n/*\n** ==================================================================\n** Search for \"@@\" to find all configurable definitions.\n** ===================================================================\n*/\n\n\n/*\n@@ LUA_ANSI controls the use of non-ansi features.\n** CHANGE it (define it) if you want Lua to avoid the use of any\n** non-ansi feature or library.\n*/\n#if defined(__STRICT_ANSI__)\n#define LUA_ANSI\n#endif\n\n\n#if !defined(LUA_ANSI) && defined(_WIN32)\n#define LUA_WIN\n#endif\n\n#if defined(LUA_USE_LINUX)\n#define LUA_USE_POSIX\n#define LUA_USE_DLOPEN\t\t/* needs an extra library: -ldl */\n#define LUA_USE_READLINE\t/* needs some extra libraries */\n#endif\n\n#if defined(LUA_USE_MACOSX)\n#define LUA_USE_POSIX\n#define LUA_DL_DYLD\t\t/* does not need extra library */\n#endif\n\n\n\n/*\n@@ LUA_USE_POSIX includes all functionality listed as X/Open System\n@* Interfaces Extension (XSI).\n** CHANGE it (define it) if your system is XSI compatible.\n*/\n#if defined(LUA_USE_POSIX)\n#define LUA_USE_MKSTEMP\n#define LUA_USE_ISATTY\n#define LUA_USE_POPEN\n#define LUA_USE_ULONGJMP\n#endif\n\n\n/*\n@@ LUA_PATH and LUA_CPATH are the names of the environment variables that\n@* Lua checks to set its paths.\n@@ LUA_INIT is the name of the environment variable that Lua\n@* checks for initialization code.\n** CHANGE them if you want different names.\n*/\n#define LUA_PATH        \"LUA_PATH\"\n#define LUA_CPATH       \"LUA_CPATH\"\n#define LUA_INIT\t\"LUA_INIT\"\n\n\n/*\n@@ LUA_PATH_DEFAULT is the default path that Lua uses to look for\n@* Lua libraries.\n@@ LUA_CPATH_DEFAULT is the default path that Lua uses to look for\n@* C libraries.\n** CHANGE them if your machine has a non-conventional directory\n** hierarchy or if you 
want to install your libraries in\n** non-conventional directories.\n*/\n#if defined(_WIN32)\n/*\n** In Windows, any exclamation mark ('!') in the path is replaced by the\n** path of the directory of the executable file of the current process.\n*/\n#define LUA_LDIR\t\"!\\\\lua\\\\\"\n#define LUA_CDIR\t\"!\\\\\"\n#define LUA_PATH_DEFAULT  \\\n\t\t\".\\\\?.lua;\"  LUA_LDIR\"?.lua;\"  LUA_LDIR\"?\\\\init.lua;\" \\\n\t\t             LUA_CDIR\"?.lua;\"  LUA_CDIR\"?\\\\init.lua\"\n#define LUA_CPATH_DEFAULT \\\n\t\".\\\\?.dll;\"  LUA_CDIR\"?.dll;\" LUA_CDIR\"loadall.dll\"\n\n#else\n#define LUA_ROOT\t\"/usr/local/\"\n#define LUA_LDIR\tLUA_ROOT \"share/lua/5.1/\"\n#define LUA_CDIR\tLUA_ROOT \"lib/lua/5.1/\"\n#define LUA_PATH_DEFAULT  \\\n\t\t\"./?.lua;\"  LUA_LDIR\"?.lua;\"  LUA_LDIR\"?/init.lua;\" \\\n\t\t            LUA_CDIR\"?.lua;\"  LUA_CDIR\"?/init.lua\"\n#define LUA_CPATH_DEFAULT \\\n\t\"./?.so;\"  LUA_CDIR\"?.so;\" LUA_CDIR\"loadall.so\"\n#endif\n\n\n/*\n@@ LUA_DIRSEP is the directory separator (for submodules).\n** CHANGE it if your machine does not use \"/\" as the directory separator\n** and is not Windows. (On Windows Lua automatically uses \"\\\".)\n*/\n#if defined(_WIN32)\n#define LUA_DIRSEP\t\"\\\\\"\n#else\n#define LUA_DIRSEP\t\"/\"\n#endif\n\n\n/*\n@@ LUA_PATHSEP is the character that separates templates in a path.\n@@ LUA_PATH_MARK is the string that marks the substitution points in a\n@* template.\n@@ LUA_EXECDIR in a Windows path is replaced by the executable's\n@* directory.\n@@ LUA_IGMARK is a mark to ignore all before it when bulding the\n@* luaopen_ function name.\n** CHANGE them if for some reason your system cannot use those\n** characters. (E.g., if one of those characters is a common character\n** in file/directory names.) 
Probably you do not need to change them.\n*/\n#define LUA_PATHSEP\t\";\"\n#define LUA_PATH_MARK\t\"?\"\n#define LUA_EXECDIR\t\"!\"\n#define LUA_IGMARK\t\"-\"\n\n\n/*\n@@ LUA_INTEGER is the integral type used by lua_pushinteger/lua_tointeger.\n** CHANGE that if ptrdiff_t is not adequate on your machine. (On most\n** machines, ptrdiff_t gives a good choice between int or long.)\n*/\n#define LUA_INTEGER\tptrdiff_t\n\n\n/*\n@@ LUA_API is a mark for all core API functions.\n@@ LUALIB_API is a mark for all standard library functions.\n** CHANGE them if you need to define those functions in some special way.\n** For instance, if you want to create one Windows DLL with the core and\n** the libraries, you may want to use the following definition (define\n** LUA_BUILD_AS_DLL to get it).\n*/\n#if defined(LUA_BUILD_AS_DLL)\n\n#if defined(LUA_CORE) || defined(LUA_LIB)\n#define LUA_API __declspec(dllexport)\n#else\n#define LUA_API __declspec(dllimport)\n#endif\n\n#else\n\n#define LUA_API\t\textern\n\n#endif\n\n/* more often than not the libs go together with the core */\n#define LUALIB_API\tLUA_API\n\n\n/*\n@@ LUAI_FUNC is a mark for all extern functions that are not to be\n@* exported to outside modules.\n@@ LUAI_DATA is a mark for all extern (const) variables that are not to\n@* be exported to outside modules.\n** CHANGE them if you need to mark them in some special way. 
Elf/gcc\n** (versions 3.2 and later) mark them as \"hidden\" to optimize access\n** when Lua is compiled as a shared library.\n*/\n#if defined(luaall_c)\n#define LUAI_FUNC\tstatic\n#define LUAI_DATA\t/* empty */\n\n#elif defined(__GNUC__) && ((__GNUC__*100 + __GNUC_MINOR__) >= 302) && \\\n      defined(__ELF__)\n#define LUAI_FUNC\t__attribute__((visibility(\"hidden\"))) extern\n#define LUAI_DATA\tLUAI_FUNC\n\n#else\n#define LUAI_FUNC\textern\n#define LUAI_DATA\textern\n#endif\n\n\n\n/*\n@@ LUA_QL describes how error messages quote program elements.\n** CHANGE it if you want a different appearance.\n*/\n#define LUA_QL(x)\t\"'\" x \"'\"\n#define LUA_QS\t\tLUA_QL(\"%s\")\n\n\n/*\n@@ LUA_IDSIZE gives the maximum size for the description of the source\n@* of a function in debug information.\n** CHANGE it if you want a different size.\n*/\n#define LUA_IDSIZE\t60\n\n\n/*\n** {==================================================================\n** Stand-alone configuration\n** ===================================================================\n*/\n\n#if defined(lua_c) || defined(luaall_c)\n\n/*\n@@ lua_stdin_is_tty detects whether the standard input is a 'tty' (that\n@* is, whether we're running lua interactively).\n** CHANGE it if you have a better definition for non-POSIX/non-Windows\n** systems.\n*/\n#if defined(LUA_USE_ISATTY)\n#include <unistd.h>\n#define lua_stdin_is_tty()\tisatty(0)\n#elif defined(LUA_WIN)\n#include <io.h>\n#include <stdio.h>\n#define lua_stdin_is_tty()\t_isatty(_fileno(stdin))\n#else\n#define lua_stdin_is_tty()\t1  /* assume stdin is a tty */\n#endif\n\n\n/*\n@@ LUA_PROMPT is the default prompt used by stand-alone Lua.\n@@ LUA_PROMPT2 is the default continuation prompt used by stand-alone Lua.\n** CHANGE them if you want different prompts. 
(You can also change the\n** prompts dynamically, assigning to globals _PROMPT/_PROMPT2.)\n*/\n#define LUA_PROMPT\t\t\"> \"\n#define LUA_PROMPT2\t\t\">> \"\n\n\n/*\n@@ LUA_PROGNAME is the default name for the stand-alone Lua program.\n** CHANGE it if your stand-alone interpreter has a different name and\n** your system is not able to detect that name automatically.\n*/\n#define LUA_PROGNAME\t\t\"lua\"\n\n\n/*\n@@ LUA_MAXINPUT is the maximum length for an input line in the\n@* stand-alone interpreter.\n** CHANGE it if you need longer lines.\n*/\n#define LUA_MAXINPUT\t512\n\n\n/*\n@@ lua_readline defines how to show a prompt and then read a line from\n@* the standard input.\n@@ lua_saveline defines how to \"save\" a read line in a \"history\".\n@@ lua_freeline defines how to free a line read by lua_readline.\n** CHANGE them if you want to improve this functionality (e.g., by using\n** GNU readline and history facilities).\n*/\n#if defined(LUA_USE_READLINE)\n#include <stdio.h>\n#include <readline/readline.h>\n#include <readline/history.h>\n#define lua_readline(L,b,p)\t((void)L, ((b)=readline(p)) != NULL)\n#define lua_saveline(L,idx) \\\n\tif (lua_strlen(L,idx) > 0)  /* non-empty line? */ \\\n\t  add_history(lua_tostring(L, idx));  /* add it to history */\n#define lua_freeline(L,b)\t((void)L, free(b))\n#else\n#define lua_readline(L,b,p)\t\\\n\t((void)L, fputs(p, stdout), fflush(stdout),  /* show prompt */ \\\n\tfgets(b, LUA_MAXINPUT, stdin) != NULL)  /* get line */\n#define lua_saveline(L,idx)\t{ (void)L; (void)idx; }\n#define lua_freeline(L,b)\t{ (void)L; (void)b; }\n#endif\n\n#endif\n\n/* }================================================================== */\n\n\n/*\n@@ LUAI_GCPAUSE defines the default pause between garbage-collector cycles\n@* as a percentage.\n** CHANGE it if you want the GC to run faster or slower (higher values\n** mean larger pauses which mean slower collection.) 
You can also change\n** this value dynamically.\n*/\n#define LUAI_GCPAUSE\t200  /* 200% (wait memory to double before next GC) */\n\n\n/*\n@@ LUAI_GCMUL defines the default speed of garbage collection relative to\n@* memory allocation as a percentage.\n** CHANGE it if you want to change the granularity of the garbage\n** collection. (Higher values mean coarser collections. 0 represents\n** infinity, where each step performs a full collection.) You can also\n** change this value dynamically.\n*/\n#define LUAI_GCMUL\t200 /* GC runs 'twice the speed' of memory allocation */\n\n\n\n/*\n@@ LUA_COMPAT_GETN controls compatibility with old getn behavior.\n** CHANGE it (define it) if you want exact compatibility with the\n** behavior of setn/getn in Lua 5.0.\n*/\n#undef LUA_COMPAT_GETN\n\n/*\n@@ LUA_COMPAT_LOADLIB controls compatibility about global loadlib.\n** CHANGE it to undefined as soon as you do not need a global 'loadlib'\n** function (the function is still available as 'package.loadlib').\n*/\n#undef LUA_COMPAT_LOADLIB\n\n/*\n@@ LUA_COMPAT_VARARG controls compatibility with old vararg feature.\n** CHANGE it to undefined as soon as your programs use only '...' 
to\n** access vararg parameters (instead of the old 'arg' table).\n*/\n#define LUA_COMPAT_VARARG\n\n/*\n@@ LUA_COMPAT_MOD controls compatibility with old math.mod function.\n** CHANGE it to undefined as soon as your programs use 'math.fmod' or\n** the new '%' operator instead of 'math.mod'.\n*/\n#define LUA_COMPAT_MOD\n\n/*\n@@ LUA_COMPAT_LSTR controls compatibility with old long string nesting\n@* facility.\n** CHANGE it to 2 if you want the old behaviour, or undefine it to turn\n** off the advisory error when nesting [[...]].\n*/\n#define LUA_COMPAT_LSTR\t\t1\n\n/*\n@@ LUA_COMPAT_GFIND controls compatibility with old 'string.gfind' name.\n** CHANGE it to undefined as soon as you rename 'string.gfind' to\n** 'string.gmatch'.\n*/\n#define LUA_COMPAT_GFIND\n\n/*\n@@ LUA_COMPAT_OPENLIB controls compatibility with old 'luaL_openlib'\n@* behavior.\n** CHANGE it to undefined as soon as you replace to 'luaL_register'\n** your uses of 'luaL_openlib'\n*/\n#define LUA_COMPAT_OPENLIB\n\n\n\n/*\n@@ luai_apicheck is the assert macro used by the Lua-C API.\n** CHANGE luai_apicheck if you want Lua to perform some checks in the\n** parameters it gets from API calls. This may slow down the interpreter\n** a bit, but may be quite useful when debugging C code that interfaces\n** with Lua. A useful redefinition is to use assert.h.\n*/\n#if defined(LUA_USE_APICHECK)\n#include <assert.h>\n#define luai_apicheck(L,o)\t{ (void)L; assert(o); }\n#else\n#define luai_apicheck(L,o)\t{ (void)L; }\n#endif\n\n\n/*\n@@ LUAI_BITSINT defines the number of bits in an int.\n** CHANGE here if Lua cannot automatically detect the number of bits of\n** your machine. 
Probably you do not need to change this.\n*/\n/* avoid overflows in comparison */\n#if INT_MAX-20 < 32760\n#define LUAI_BITSINT\t16\n#elif INT_MAX > 2147483640L\n/* int has at least 32 bits */\n#define LUAI_BITSINT\t32\n#else\n#error \"you must define LUA_BITSINT with number of bits in an integer\"\n#endif\n\n\n/*\n@@ LUAI_UINT32 is an unsigned integer with at least 32 bits.\n@@ LUAI_INT32 is an signed integer with at least 32 bits.\n@@ LUAI_UMEM is an unsigned integer big enough to count the total\n@* memory used by Lua.\n@@ LUAI_MEM is a signed integer big enough to count the total memory\n@* used by Lua.\n** CHANGE here if for some weird reason the default definitions are not\n** good enough for your machine. (The definitions in the 'else'\n** part always works, but may waste space on machines with 64-bit\n** longs.) Probably you do not need to change this.\n*/\n#if LUAI_BITSINT >= 32\n#define LUAI_UINT32\tunsigned int\n#define LUAI_INT32\tint\n#define LUAI_MAXINT32\tINT_MAX\n#define LUAI_UMEM\tsize_t\n#define LUAI_MEM\tptrdiff_t\n#else\n/* 16-bit ints */\n#define LUAI_UINT32\tunsigned long\n#define LUAI_INT32\tlong\n#define LUAI_MAXINT32\tLONG_MAX\n#define LUAI_UMEM\tunsigned long\n#define LUAI_MEM\tlong\n#endif\n\n\n/*\n@@ LUAI_MAXCALLS limits the number of nested calls.\n** CHANGE it if you need really deep recursive calls. This limit is\n** arbitrary; its only purpose is to stop infinite recursion before\n** exhausting memory.\n*/\n#define LUAI_MAXCALLS\t20000\n\n\n/*\n@@ LUAI_MAXCSTACK limits the number of Lua stack slots that a C function\n@* can use.\n** CHANGE it if you need lots of (Lua) stack space for your C\n** functions. This limit is arbitrary; its only purpose is to stop C\n** functions to consume unlimited stack space. 
(must be smaller than\n** -LUA_REGISTRYINDEX)\n*/\n#define LUAI_MAXCSTACK\t8000\n\n\n\n/*\n** {==================================================================\n** CHANGE (to smaller values) the following definitions if your system\n** has a small C stack. (Or you may want to change them to larger\n** values if your system has a large C stack and these limits are\n** too rigid for you.) Some of these constants control the size of\n** stack-allocated arrays used by the compiler or the interpreter, while\n** others limit the maximum number of recursive calls that the compiler\n** or the interpreter can perform. Values too large may cause a C stack\n** overflow for some forms of deep constructs.\n** ===================================================================\n*/\n\n\n/*\n@@ LUAI_MAXCCALLS is the maximum depth for nested C calls (short) and\n@* syntactical nested non-terminals in a program.\n*/\n#define LUAI_MAXCCALLS\t\t200\n\n\n/*\n@@ LUAI_MAXVARS is the maximum number of local variables per function\n@* (must be smaller than 250).\n*/\n#define LUAI_MAXVARS\t\t200\n\n\n/*\n@@ LUAI_MAXUPVALUES is the maximum number of upvalues per function\n@* (must be smaller than 250).\n*/\n#define LUAI_MAXUPVALUES\t60\n\n\n/*\n@@ LUAL_BUFFERSIZE is the buffer size used by the lauxlib buffer system.\n*/\n#define LUAL_BUFFERSIZE\t\tBUFSIZ\n\n/* }================================================================== */\n\n\n\n\n/*\n** {==================================================================\n@@ LUA_NUMBER is the type of numbers in Lua.\n** CHANGE the following definitions only if you want to build Lua\n** with a number type different from double. 
You may also need to\n** change lua_number2int & lua_number2integer.\n** ===================================================================\n*/\n\n#define LUA_NUMBER_DOUBLE\n#define LUA_NUMBER\tdouble\n\n/*\n@@ LUAI_UACNUMBER is the result of an 'usual argument conversion'\n@* over a number.\n*/\n#define LUAI_UACNUMBER\tdouble\n\n\n/*\n@@ LUA_NUMBER_SCAN is the format for reading numbers.\n@@ LUA_NUMBER_FMT is the format for writing numbers.\n@@ lua_number2str converts a number to a string.\n@@ LUAI_MAXNUMBER2STR is maximum size of previous conversion.\n@@ lua_str2number converts a string to a number.\n*/\n#define LUA_NUMBER_SCAN\t\t\"%lf\"\n#define LUA_NUMBER_FMT\t\t\"%.14g\"\n#define lua_number2str(s,n)\tsprintf((s), LUA_NUMBER_FMT, (n))\n#define LUAI_MAXNUMBER2STR\t32 /* 16 digits, sign, point, and \\0 */\n#define lua_str2number(s,p)\tstrtod((s), (p))\n\n\n/*\n@@ The luai_num* macros define the primitive operations over numbers.\n*/\n#if defined(LUA_CORE)\n#include <math.h>\n#define luai_numadd(a,b)\t((a)+(b))\n#define luai_numsub(a,b)\t((a)-(b))\n#define luai_nummul(a,b)\t((a)*(b))\n#define luai_numdiv(a,b)\t((a)/(b))\n#define luai_nummod(a,b)\t((a) - floor((a)/(b))*(b))\n#define luai_numpow(a,b)\t(pow(a,b))\n#define luai_numunm(a)\t\t(-(a))\n#define luai_numeq(a,b)\t\t((a)==(b))\n#define luai_numlt(a,b)\t\t((a)<(b))\n#define luai_numle(a,b)\t\t((a)<=(b))\n#define luai_numisnan(a)\t(!luai_numeq((a), (a)))\n#endif\n\n\n/*\n@@ lua_number2int is a macro to convert lua_Number to int.\n@@ lua_number2integer is a macro to convert lua_Number to lua_Integer.\n** CHANGE them if you know a faster way to convert a lua_Number to\n** int (with any rounding method and without throwing errors) in your\n** system. 
In Pentium machines, a naive typecast from double to int\n** in C is extremely slow, so any alternative is worth trying.\n*/\n\n/* On a Pentium, resort to a trick */\n#if defined(LUA_NUMBER_DOUBLE) && !defined(LUA_ANSI) && !defined(__SSE2__) && \\\n    (defined(__i386) || defined (_M_IX86) || defined(__i386__))\n\n/* On a Microsoft compiler, use assembler */\n#if defined(_MSC_VER)\n\n#define lua_number2int(i,d)   __asm fld d   __asm fistp i\n#define lua_number2integer(i,n)\t\tlua_number2int(i, n)\n\n/* the next trick should work on any Pentium, but sometimes clashes\n   with a DirectX idiosyncrasy */\n#else\n\nunion luai_Cast { double l_d; long l_l; };\n#define lua_number2int(i,d) \\\n  { volatile union luai_Cast u; u.l_d = (d) + 6755399441055744.0; (i) = u.l_l; }\n#define lua_number2integer(i,n)\t\tlua_number2int(i, n)\n\n#endif\n\n\n/* this option always works, but may be slow */\n#else\n#define lua_number2int(i,d)\t((i)=(int)(d))\n#define lua_number2integer(i,d)\t((i)=(lua_Integer)(d))\n\n#endif\n\n/* }================================================================== */\n\n\n/*\n@@ LUAI_USER_ALIGNMENT_T is a type that requires maximum alignment.\n** CHANGE it if your system requires alignments larger than double. (For\n** instance, if your system supports long doubles and they must be\n** aligned in 16-byte boundaries, then you should add long double in the\n** union.) Probably you do not need to change this.\n*/\n#define LUAI_USER_ALIGNMENT_T\tunion { double u; void *s; long l; }\n\n\n/*\n@@ LUAI_THROW/LUAI_TRY define how Lua does exception handling.\n** CHANGE them if you prefer to use longjmp/setjmp even with C++\n** or if want/don't to use _longjmp/_setjmp instead of regular\n** longjmp/setjmp. 
By default, Lua handles errors with exceptions when\n** compiling as C++ code, with _longjmp/_setjmp when asked to use them,\n** and with longjmp/setjmp otherwise.\n*/\n#if defined(__cplusplus)\n/* C++ exceptions */\n#define LUAI_THROW(L,c)\tthrow(c)\n#define LUAI_TRY(L,c,a)\ttry { a } catch(...) \\\n\t{ if ((c)->status == 0) (c)->status = -1; }\n#define luai_jmpbuf\tint  /* dummy variable */\n\n#elif defined(LUA_USE_ULONGJMP)\n/* in Unix, try _longjmp/_setjmp (more efficient) */\n#define LUAI_THROW(L,c)\t_longjmp((c)->b, 1)\n#define LUAI_TRY(L,c,a)\tif (_setjmp((c)->b) == 0) { a }\n#define luai_jmpbuf\tjmp_buf\n\n#else\n/* default handling with long jumps */\n#define LUAI_THROW(L,c)\tlongjmp((c)->b, 1)\n#define LUAI_TRY(L,c,a)\tif (setjmp((c)->b) == 0) { a }\n#define luai_jmpbuf\tjmp_buf\n\n#endif\n\n\n/*\n@@ LUA_MAXCAPTURES is the maximum number of captures that a pattern\n@* can do during pattern-matching.\n** CHANGE it if you need more captures. This limit is arbitrary.\n*/\n#define LUA_MAXCAPTURES\t\t32\n\n\n/*\n@@ lua_tmpnam is the function that the OS library uses to create a\n@* temporary name.\n@@ LUA_TMPNAMBUFSIZE is the maximum size of a name created by lua_tmpnam.\n** CHANGE them if you have an alternative to tmpnam (which is considered\n** insecure) or if you want the original tmpnam anyway.  
By default, Lua\n** uses tmpnam except when POSIX is available, where it uses mkstemp.\n*/\n#if defined(loslib_c) || defined(luaall_c)\n\n#if defined(LUA_USE_MKSTEMP)\n#include <unistd.h>\n#define LUA_TMPNAMBUFSIZE\t32\n#define lua_tmpnam(b,e)\t{ \\\n\tstrcpy(b, \"/tmp/lua_XXXXXX\"); \\\n\te = mkstemp(b); \\\n\tif (e != -1) close(e); \\\n\te = (e == -1); }\n\n#else\n#define LUA_TMPNAMBUFSIZE\tL_tmpnam\n#define lua_tmpnam(b,e)\t\t{ e = (tmpnam(b) == NULL); }\n#endif\n\n#endif\n\n\n/*\n@@ lua_popen spawns a new process connected to the current one through\n@* the file streams.\n** CHANGE it if you have a way to implement it in your system.\n*/\n#if defined(LUA_USE_POPEN)\n\n#define lua_popen(L,c,m)\t((void)L, fflush(NULL), popen(c,m))\n#define lua_pclose(L,file)\t((void)L, (pclose(file) != -1))\n\n#elif defined(LUA_WIN)\n\n#define lua_popen(L,c,m)\t((void)L, _popen(c,m))\n#define lua_pclose(L,file)\t((void)L, (_pclose(file) != -1))\n\n#else\n\n#define lua_popen(L,c,m)\t((void)((void)c, m),  \\\n\t\tluaL_error(L, LUA_QL(\"popen\") \" not supported\"), (FILE*)0)\n#define lua_pclose(L,file)\t\t((void)((void)L, file), 0)\n\n#endif\n\n/*\n@@ LUA_DL_* define which dynamic-library system Lua should use.\n** CHANGE here if Lua has problems choosing the appropriate\n** dynamic-library system for your platform (either Windows' DLL, Mac's\n** dyld, or Unix's dlopen). If your system is some kind of Unix, there\n** is a good chance that it has dlopen, so LUA_DL_DLOPEN will work for\n** it.  To use dlopen you also need to adapt the src/Makefile (probably\n** adding -ldl to the linker options), so Lua does not select it\n** automatically.  
(When you change the makefile to add -ldl, you must\n** also add -DLUA_USE_DLOPEN.)\n** If you do not want any kind of dynamic library, undefine all these\n** options.\n** By default, _WIN32 gets LUA_DL_DLL and MAC OS X gets LUA_DL_DYLD.\n*/\n#if defined(LUA_USE_DLOPEN)\n#define LUA_DL_DLOPEN\n#endif\n\n#if defined(LUA_WIN)\n#define LUA_DL_DLL\n#endif\n\n\n/*\n@@ LUAI_EXTRASPACE allows you to add user-specific data in a lua_State\n@* (the data goes just *before* the lua_State pointer).\n** CHANGE (define) this if you really need that. This value must be\n** a multiple of the maximum alignment required for your machine.\n*/\n#define LUAI_EXTRASPACE\t\t0\n\n\n/*\n@@ luai_userstate* allow user-specific actions on threads.\n** CHANGE them if you defined LUAI_EXTRASPACE and need to do something\n** extra when a thread is created/deleted/resumed/yielded.\n*/\n#define luai_userstateopen(L)\t\t((void)L)\n#define luai_userstateclose(L)\t\t((void)L)\n#define luai_userstatethread(L,L1)\t((void)L)\n#define luai_userstatefree(L)\t\t((void)L)\n#define luai_userstateresume(L,n)\t((void)L)\n#define luai_userstateyield(L,n)\t((void)L)\n\n\n/*\n@@ LUA_INTFRMLEN is the length modifier for integer conversions\n@* in 'string.format'.\n@@ LUA_INTFRM_T is the integer type correspoding to the previous length\n@* modifier.\n** CHANGE them if your system supports long long or does not support long.\n*/\n\n#if defined(LUA_USELONGLONG)\n\n#define LUA_INTFRMLEN\t\t\"ll\"\n#define LUA_INTFRM_T\t\tlong long\n\n#else\n\n#define LUA_INTFRMLEN\t\t\"l\"\n#define LUA_INTFRM_T\t\tlong\n\n#endif\n\n\n\n/* =================================================================== */\n\n/*\n** Local configuration. You can use this space to add your redefinitions\n** without modifying the main part of the file.\n*/\n\n\n\n#endif\n\n"
  },
  {
    "path": "deps/lua/src/lualib.h",
    "content": "/*\n** $Id: lualib.h,v 1.36.1.1 2007/12/27 13:02:25 roberto Exp $\n** Lua standard libraries\n** See Copyright Notice in lua.h\n*/\n\n\n#ifndef lualib_h\n#define lualib_h\n\n#include \"lua.h\"\n\n\n/* Key to file-handle type */\n#define LUA_FILEHANDLE\t\t\"FILE*\"\n\n\n#define LUA_COLIBNAME\t\"coroutine\"\nLUALIB_API int (luaopen_base) (lua_State *L);\n\n#define LUA_TABLIBNAME\t\"table\"\nLUALIB_API int (luaopen_table) (lua_State *L);\n\n#define LUA_IOLIBNAME\t\"io\"\nLUALIB_API int (luaopen_io) (lua_State *L);\n\n#define LUA_OSLIBNAME\t\"os\"\nLUALIB_API int (luaopen_os) (lua_State *L);\n\n#define LUA_STRLIBNAME\t\"string\"\nLUALIB_API int (luaopen_string) (lua_State *L);\n\n#define LUA_MATHLIBNAME\t\"math\"\nLUALIB_API int (luaopen_math) (lua_State *L);\n\n#define LUA_DBLIBNAME\t\"debug\"\nLUALIB_API int (luaopen_debug) (lua_State *L);\n\n#define LUA_LOADLIBNAME\t\"package\"\nLUALIB_API int (luaopen_package) (lua_State *L);\n\n\n/* open all previous libraries */\nLUALIB_API void (luaL_openlibs) (lua_State *L); \n\n\n\n#ifndef lua_assert\n#define lua_assert(x)\t((void)0)\n#endif\n\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lundump.c",
    "content": "/*\n** $Id: lundump.c,v 2.7.1.4 2008/04/04 19:51:41 roberto Exp $\n** load precompiled Lua chunks\n** See Copyright Notice in lua.h\n*/\n\n#include <string.h>\n\n#define lundump_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lmem.h\"\n#include \"lobject.h\"\n#include \"lstring.h\"\n#include \"lundump.h\"\n#include \"lzio.h\"\n\ntypedef struct {\n lua_State* L;\n ZIO* Z;\n Mbuffer* b;\n const char* name;\n} LoadState;\n\n#ifdef LUAC_TRUST_BINARIES\n#define IF(c,s)\n#define error(S,s)\n#else\n#define IF(c,s)\t\tif (c) error(S,s)\n\nstatic void error(LoadState* S, const char* why)\n{\n luaO_pushfstring(S->L,\"%s: %s in precompiled chunk\",S->name,why);\n luaD_throw(S->L,LUA_ERRSYNTAX);\n}\n#endif\n\n#define LoadMem(S,b,n,size)\tLoadBlock(S,b,(n)*(size))\n#define\tLoadByte(S)\t\t(lu_byte)LoadChar(S)\n#define LoadVar(S,x)\t\tLoadMem(S,&x,1,sizeof(x))\n#define LoadVector(S,b,n,size)\tLoadMem(S,b,n,size)\n\nstatic void LoadBlock(LoadState* S, void* b, size_t size)\n{\n size_t r=luaZ_read(S->Z,b,size);\n IF (r!=0, \"unexpected end\");\n}\n\nstatic int LoadChar(LoadState* S)\n{\n char x;\n LoadVar(S,x);\n return x;\n}\n\nstatic int LoadInt(LoadState* S)\n{\n int x;\n LoadVar(S,x);\n IF (x<0, \"bad integer\");\n return x;\n}\n\nstatic lua_Number LoadNumber(LoadState* S)\n{\n lua_Number x;\n LoadVar(S,x);\n return x;\n}\n\nstatic TString* LoadString(LoadState* S)\n{\n size_t size;\n LoadVar(S,size);\n if (size==0)\n  return NULL;\n else\n {\n  char* s=luaZ_openspace(S->L,S->b,size);\n  LoadBlock(S,s,size);\n  return luaS_newlstr(S->L,s,size-1);\t\t/* remove trailing '\\0' */\n }\n}\n\nstatic void LoadCode(LoadState* S, Proto* f)\n{\n int n=LoadInt(S);\n f->code=luaM_newvector(S->L,n,Instruction);\n f->sizecode=n;\n LoadVector(S,f->code,n,sizeof(Instruction));\n}\n\nstatic Proto* LoadFunction(LoadState* S, TString* p);\n\nstatic void LoadConstants(LoadState* S, Proto* f)\n{\n int i,n;\n 
n=LoadInt(S);\n f->k=luaM_newvector(S->L,n,TValue);\n f->sizek=n;\n for (i=0; i<n; i++) setnilvalue(&f->k[i]);\n for (i=0; i<n; i++)\n {\n  TValue* o=&f->k[i];\n  int t=LoadChar(S);\n  switch (t)\n  {\n   case LUA_TNIL:\n   \tsetnilvalue(o);\n\tbreak;\n   case LUA_TBOOLEAN:\n   \tsetbvalue(o,LoadChar(S)!=0);\n\tbreak;\n   case LUA_TNUMBER:\n\tsetnvalue(o,LoadNumber(S));\n\tbreak;\n   case LUA_TSTRING:\n\tsetsvalue2n(S->L,o,LoadString(S));\n\tbreak;\n   default:\n\terror(S,\"bad constant\");\n\tbreak;\n  }\n }\n n=LoadInt(S);\n f->p=luaM_newvector(S->L,n,Proto*);\n f->sizep=n;\n for (i=0; i<n; i++) f->p[i]=NULL;\n for (i=0; i<n; i++) f->p[i]=LoadFunction(S,f->source);\n}\n\nstatic void LoadDebug(LoadState* S, Proto* f)\n{\n int i,n;\n n=LoadInt(S);\n f->lineinfo=luaM_newvector(S->L,n,int);\n f->sizelineinfo=n;\n LoadVector(S,f->lineinfo,n,sizeof(int));\n n=LoadInt(S);\n f->locvars=luaM_newvector(S->L,n,LocVar);\n f->sizelocvars=n;\n for (i=0; i<n; i++) f->locvars[i].varname=NULL;\n for (i=0; i<n; i++)\n {\n  f->locvars[i].varname=LoadString(S);\n  f->locvars[i].startpc=LoadInt(S);\n  f->locvars[i].endpc=LoadInt(S);\n }\n n=LoadInt(S);\n f->upvalues=luaM_newvector(S->L,n,TString*);\n f->sizeupvalues=n;\n for (i=0; i<n; i++) f->upvalues[i]=NULL;\n for (i=0; i<n; i++) f->upvalues[i]=LoadString(S);\n}\n\nstatic Proto* LoadFunction(LoadState* S, TString* p)\n{\n Proto* f;\n if (++S->L->nCcalls > LUAI_MAXCCALLS) error(S,\"code too deep\");\n f=luaF_newproto(S->L);\n setptvalue2s(S->L,S->L->top,f); incr_top(S->L);\n f->source=LoadString(S); if (f->source==NULL) f->source=p;\n f->linedefined=LoadInt(S);\n f->lastlinedefined=LoadInt(S);\n f->nups=LoadByte(S);\n f->numparams=LoadByte(S);\n f->is_vararg=LoadByte(S);\n f->maxstacksize=LoadByte(S);\n LoadCode(S,f);\n LoadConstants(S,f);\n LoadDebug(S,f);\n IF (!luaG_checkcode(f), \"bad code\");\n S->L->top--;\n S->L->nCcalls--;\n return f;\n}\n\nstatic void LoadHeader(LoadState* S)\n{\n char h[LUAC_HEADERSIZE];\n char 
s[LUAC_HEADERSIZE];\n luaU_header(h);\n LoadBlock(S,s,LUAC_HEADERSIZE);\n IF (memcmp(h,s,LUAC_HEADERSIZE)!=0, \"bad header\");\n}\n\n/*\n** load precompiled chunk\n*/\nProto* luaU_undump (lua_State* L, ZIO* Z, Mbuffer* buff, const char* name)\n{\n LoadState S;\n if (*name=='@' || *name=='=')\n  S.name=name+1;\n else if (*name==LUA_SIGNATURE[0])\n  S.name=\"binary string\";\n else\n  S.name=name;\n S.L=L;\n S.Z=Z;\n S.b=buff;\n LoadHeader(&S);\n return LoadFunction(&S,luaS_newliteral(L,\"=?\"));\n}\n\n/*\n* make header\n*/\nvoid luaU_header (char* h)\n{\n int x=1;\n memcpy(h,LUA_SIGNATURE,sizeof(LUA_SIGNATURE)-1);\n h+=sizeof(LUA_SIGNATURE)-1;\n *h++=(char)LUAC_VERSION;\n *h++=(char)LUAC_FORMAT;\n *h++=(char)*(char*)&x;\t\t\t\t/* endianness */\n *h++=(char)sizeof(int);\n *h++=(char)sizeof(size_t);\n *h++=(char)sizeof(Instruction);\n *h++=(char)sizeof(lua_Number);\n *h++=(char)(((lua_Number)0.5)==0);\t\t/* is lua_Number integral? */\n}\n"
  },
  {
    "path": "deps/lua/src/lundump.h",
    "content": "/*\n** $Id: lundump.h,v 1.37.1.1 2007/12/27 13:02:25 roberto Exp $\n** load precompiled Lua chunks\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lundump_h\n#define lundump_h\n\n#include \"lobject.h\"\n#include \"lzio.h\"\n\n/* load one chunk; from lundump.c */\nLUAI_FUNC Proto* luaU_undump (lua_State* L, ZIO* Z, Mbuffer* buff, const char* name);\n\n/* make header; from lundump.c */\nLUAI_FUNC void luaU_header (char* h);\n\n/* dump one chunk; from ldump.c */\nLUAI_FUNC int luaU_dump (lua_State* L, const Proto* f, lua_Writer w, void* data, int strip);\n\n#ifdef luac_c\n/* print one chunk; from print.c */\nLUAI_FUNC void luaU_print (const Proto* f, int full);\n#endif\n\n/* for header of binary files -- this is Lua 5.1 */\n#define LUAC_VERSION\t\t0x51\n\n/* for header of binary files -- this is the official format */\n#define LUAC_FORMAT\t\t0\n\n/* size of header of binary files */\n#define LUAC_HEADERSIZE\t\t12\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lvm.c",
    "content": "/*\n** $Id: lvm.c,v 2.63.1.5 2011/08/17 20:43:11 roberto Exp $\n** Lua virtual machine\n** See Copyright Notice in lua.h\n*/\n\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define lvm_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"ldebug.h\"\n#include \"ldo.h\"\n#include \"lfunc.h\"\n#include \"lgc.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lstate.h\"\n#include \"lstring.h\"\n#include \"ltable.h\"\n#include \"ltm.h\"\n#include \"lvm.h\"\n\n\n\n/* limit for table tag-method chains (to avoid loops) */\n#define MAXTAGLOOP\t100\n\n\nconst TValue *luaV_tonumber (const TValue *obj, TValue *n) {\n  lua_Number num;\n  if (ttisnumber(obj)) return obj;\n  if (ttisstring(obj) && luaO_str2d(svalue(obj), &num)) {\n    setnvalue(n, num);\n    return n;\n  }\n  else\n    return NULL;\n}\n\n\nint luaV_tostring (lua_State *L, StkId obj) {\n  if (!ttisnumber(obj))\n    return 0;\n  else {\n    char s[LUAI_MAXNUMBER2STR];\n    lua_Number n = nvalue(obj);\n    lua_number2str(s, n);\n    setsvalue2s(L, obj, luaS_new(L, s));\n    return 1;\n  }\n}\n\n\nstatic void traceexec (lua_State *L, const Instruction *pc) {\n  lu_byte mask = L->hookmask;\n  const Instruction *oldpc = L->savedpc;\n  L->savedpc = pc;\n  if ((mask & LUA_MASKCOUNT) && L->hookcount == 0) {\n    resethookcount(L);\n    luaD_callhook(L, LUA_HOOKCOUNT, -1);\n  }\n  if (mask & LUA_MASKLINE) {\n    Proto *p = ci_func(L->ci)->l.p;\n    int npc = pcRel(pc, p);\n    int newline = getline(p, npc);\n    /* call linehook when enter a new function, when jump back (loop),\n       or when enter a new line */\n    if (npc == 0 || pc <= oldpc || newline != getline(p, pcRel(oldpc, p)))\n      luaD_callhook(L, LUA_HOOKLINE, newline);\n  }\n}\n\n\nstatic void callTMres (lua_State *L, StkId res, const TValue *f,\n                        const TValue *p1, const TValue *p2) {\n  ptrdiff_t result = savestack(L, res);\n  setobj2s(L, L->top, f);  /* push function */\n  
setobj2s(L, L->top+1, p1);  /* 1st argument */\n  setobj2s(L, L->top+2, p2);  /* 2nd argument */\n  luaD_checkstack(L, 3);\n  L->top += 3;\n  luaD_call(L, L->top - 3, 1);\n  res = restorestack(L, result);\n  L->top--;\n  setobjs2s(L, res, L->top);\n}\n\n\n\nstatic void callTM (lua_State *L, const TValue *f, const TValue *p1,\n                    const TValue *p2, const TValue *p3) {\n  setobj2s(L, L->top, f);  /* push function */\n  setobj2s(L, L->top+1, p1);  /* 1st argument */\n  setobj2s(L, L->top+2, p2);  /* 2nd argument */\n  setobj2s(L, L->top+3, p3);  /* 3th argument */\n  luaD_checkstack(L, 4);\n  L->top += 4;\n  luaD_call(L, L->top - 4, 0);\n}\n\n\nvoid luaV_gettable (lua_State *L, const TValue *t, TValue *key, StkId val) {\n  int loop;\n  for (loop = 0; loop < MAXTAGLOOP; loop++) {\n    const TValue *tm;\n    if (ttistable(t)) {  /* `t' is a table? */\n      Table *h = hvalue(t);\n      const TValue *res = luaH_get(h, key); /* do a primitive get */\n      if (!ttisnil(res) ||  /* result is no nil? */\n          (tm = fasttm(L, h->metatable, TM_INDEX)) == NULL) { /* or no TM? */\n        setobj2s(L, val, res);\n        return;\n      }\n      /* else will try the tag method */\n    }\n    else if (ttisnil(tm = luaT_gettmbyobj(L, t, TM_INDEX)))\n      luaG_typeerror(L, t, \"index\");\n    if (ttisfunction(tm)) {\n      callTMres(L, val, tm, t, key);\n      return;\n    }\n    t = tm;  /* else repeat with `tm' */ \n  }\n  luaG_runerror(L, \"loop in gettable\");\n}\n\n\nvoid luaV_settable (lua_State *L, const TValue *t, TValue *key, StkId val) {\n  int loop;\n  TValue temp;\n  for (loop = 0; loop < MAXTAGLOOP; loop++) {\n    const TValue *tm;\n    if (ttistable(t)) {  /* `t' is a table? */\n      Table *h = hvalue(t);\n      if (h->readonly)\n        luaG_runerror(L, \"Attempt to modify a readonly table\");\n      TValue *oldval = luaH_set(L, h, key); /* do a primitive set */\n      if (!ttisnil(oldval) ||  /* result is no nil? 
*/\n          (tm = fasttm(L, h->metatable, TM_NEWINDEX)) == NULL) { /* or no TM? */\n        setobj2t(L, oldval, val);\n        h->flags = 0;\n        luaC_barriert(L, h, val);\n        return;\n      }\n      /* else will try the tag method */\n    }\n    else if (ttisnil(tm = luaT_gettmbyobj(L, t, TM_NEWINDEX)))\n      luaG_typeerror(L, t, \"index\");\n    if (ttisfunction(tm)) {\n      callTM(L, tm, t, key, val);\n      return;\n    }\n    /* else repeat with `tm' */\n    setobj(L, &temp, tm);  /* avoid pointing inside table (may rehash) */\n    t = &temp;\n  }\n  luaG_runerror(L, \"loop in settable\");\n}\n\n\nstatic int call_binTM (lua_State *L, const TValue *p1, const TValue *p2,\n                       StkId res, TMS event) {\n  const TValue *tm = luaT_gettmbyobj(L, p1, event);  /* try first operand */\n  if (ttisnil(tm))\n    tm = luaT_gettmbyobj(L, p2, event);  /* try second operand */\n  if (ttisnil(tm)) return 0;\n  callTMres(L, res, tm, p1, p2);\n  return 1;\n}\n\n\nstatic const TValue *get_compTM (lua_State *L, Table *mt1, Table *mt2,\n                                  TMS event) {\n  const TValue *tm1 = fasttm(L, mt1, event);\n  const TValue *tm2;\n  if (tm1 == NULL) return NULL;  /* no metamethod */\n  if (mt1 == mt2) return tm1;  /* same metatables => same metamethods */\n  tm2 = fasttm(L, mt2, event);\n  if (tm2 == NULL) return NULL;  /* no metamethod */\n  if (luaO_rawequalObj(tm1, tm2))  /* same metamethods? */\n    return tm1;\n  return NULL;\n}\n\n\nstatic int call_orderTM (lua_State *L, const TValue *p1, const TValue *p2,\n                         TMS event) {\n  const TValue *tm1 = luaT_gettmbyobj(L, p1, event);\n  const TValue *tm2;\n  if (ttisnil(tm1)) return -1;  /* no metamethod? */\n  tm2 = luaT_gettmbyobj(L, p2, event);\n  if (!luaO_rawequalObj(tm1, tm2))  /* different metamethods? 
*/\n    return -1;\n  callTMres(L, L->top, tm1, p1, p2);\n  return !l_isfalse(L->top);\n}\n\n\nstatic int l_strcmp (const TString *ls, const TString *rs) {\n  const char *l = getstr(ls);\n  size_t ll = ls->tsv.len;\n  const char *r = getstr(rs);\n  size_t lr = rs->tsv.len;\n  for (;;) {\n    int temp = strcoll(l, r);\n    if (temp != 0) return temp;\n    else {  /* strings are equal up to a `\\0' */\n      size_t len = strlen(l);  /* index of first `\\0' in both strings */\n      if (len == lr)  /* r is finished? */\n        return (len == ll) ? 0 : 1;\n      else if (len == ll)  /* l is finished? */\n        return -1;  /* l is smaller than r (because r is not finished) */\n      /* both strings longer than `len'; go on comparing (after the `\\0') */\n      len++;\n      l += len; ll -= len; r += len; lr -= len;\n    }\n  }\n}\n\n\nint luaV_lessthan (lua_State *L, const TValue *l, const TValue *r) {\n  int res;\n  if (ttype(l) != ttype(r))\n    return luaG_ordererror(L, l, r);\n  else if (ttisnumber(l))\n    return luai_numlt(nvalue(l), nvalue(r));\n  else if (ttisstring(l))\n    return l_strcmp(rawtsvalue(l), rawtsvalue(r)) < 0;\n  else if ((res = call_orderTM(L, l, r, TM_LT)) != -1)\n    return res;\n  return luaG_ordererror(L, l, r);\n}\n\n\nstatic int lessequal (lua_State *L, const TValue *l, const TValue *r) {\n  int res;\n  if (ttype(l) != ttype(r))\n    return luaG_ordererror(L, l, r);\n  else if (ttisnumber(l))\n    return luai_numle(nvalue(l), nvalue(r));\n  else if (ttisstring(l))\n    return l_strcmp(rawtsvalue(l), rawtsvalue(r)) <= 0;\n  else if ((res = call_orderTM(L, l, r, TM_LE)) != -1)  /* first try `le' */\n    return res;\n  else if ((res = call_orderTM(L, r, l, TM_LT)) != -1)  /* else try `lt' */\n    return !res;\n  return luaG_ordererror(L, l, r);\n}\n\n\nint luaV_equalval (lua_State *L, const TValue *t1, const TValue *t2) {\n  const TValue *tm;\n  lua_assert(ttype(t1) == ttype(t2));\n  switch (ttype(t1)) {\n    case LUA_TNIL: return 1;\n    
case LUA_TNUMBER: return luai_numeq(nvalue(t1), nvalue(t2));\n    case LUA_TBOOLEAN: return bvalue(t1) == bvalue(t2);  /* true must be 1 !! */\n    case LUA_TLIGHTUSERDATA: return pvalue(t1) == pvalue(t2);\n    case LUA_TUSERDATA: {\n      if (uvalue(t1) == uvalue(t2)) return 1;\n      tm = get_compTM(L, uvalue(t1)->metatable, uvalue(t2)->metatable,\n                         TM_EQ);\n      break;  /* will try TM */\n    }\n    case LUA_TTABLE: {\n      if (hvalue(t1) == hvalue(t2)) return 1;\n      tm = get_compTM(L, hvalue(t1)->metatable, hvalue(t2)->metatable, TM_EQ);\n      break;  /* will try TM */\n    }\n    default: return gcvalue(t1) == gcvalue(t2);\n  }\n  if (tm == NULL) return 0;  /* no TM? */\n  callTMres(L, L->top, tm, t1, t2);  /* call TM */\n  return !l_isfalse(L->top);\n}\n\n\nvoid luaV_concat (lua_State *L, int total, int last) {\n  do {\n    StkId top = L->base + last + 1;\n    int n = 2;  /* number of elements handled in this pass (at least 2) */\n    if (!(ttisstring(top-2) || ttisnumber(top-2)) || !tostring(L, top-1)) {\n      if (!call_binTM(L, top-2, top-1, top-2, TM_CONCAT))\n        luaG_concaterror(L, top-2, top-1);\n    } else if (tsvalue(top-1)->len == 0)  /* second op is empty? 
*/\n      (void)tostring(L, top - 2);  /* result is first op (as string) */\n    else {\n      /* at least two string values; get as many as possible */\n      size_t tl = tsvalue(top-1)->len;\n      char *buffer;\n      int i;\n      /* collect total length */\n      for (n = 1; n < total && tostring(L, top-n-1); n++) {\n        size_t l = tsvalue(top-n-1)->len;\n        if (l >= MAX_SIZET - tl) luaG_runerror(L, \"string length overflow\");\n        tl += l;\n      }\n      buffer = luaZ_openspace(L, &G(L)->buff, tl);\n      tl = 0;\n      for (i=n; i>0; i--) {  /* concat all strings */\n        size_t l = tsvalue(top-i)->len;\n        memcpy(buffer+tl, svalue(top-i), l);\n        tl += l;\n      }\n      setsvalue2s(L, top-n, luaS_newlstr(L, buffer, tl));\n    }\n    total -= n-1;  /* got `n' strings to create 1 new */\n    last -= n-1;\n  } while (total > 1);  /* repeat until only 1 result left */\n}\n\n\nstatic void Arith (lua_State *L, StkId ra, const TValue *rb,\n                   const TValue *rc, TMS op) {\n  TValue tempb, tempc;\n  const TValue *b, *c;\n  if ((b = luaV_tonumber(rb, &tempb)) != NULL &&\n      (c = luaV_tonumber(rc, &tempc)) != NULL) {\n    lua_Number nb = nvalue(b), nc = nvalue(c);\n    switch (op) {\n      case TM_ADD: setnvalue(ra, luai_numadd(nb, nc)); break;\n      case TM_SUB: setnvalue(ra, luai_numsub(nb, nc)); break;\n      case TM_MUL: setnvalue(ra, luai_nummul(nb, nc)); break;\n      case TM_DIV: setnvalue(ra, luai_numdiv(nb, nc)); break;\n      case TM_MOD: setnvalue(ra, luai_nummod(nb, nc)); break;\n      case TM_POW: setnvalue(ra, luai_numpow(nb, nc)); break;\n      case TM_UNM: setnvalue(ra, luai_numunm(nb)); break;\n      default: lua_assert(0); break;\n    }\n  }\n  else if (!call_binTM(L, rb, rc, ra, op))\n    luaG_aritherror(L, rb, rc);\n}\n\n\n\n/*\n** some macros for common tasks in `luaV_execute'\n*/\n\n#define runtime_check(L, c)\t{ if (!(c)) break; }\n\n#define RA(i)\t(base+GETARG_A(i))\n/* to be used after possible 
stack reallocation */\n#define RB(i)\tcheck_exp(getBMode(GET_OPCODE(i)) == OpArgR, base+GETARG_B(i))\n#define RC(i)\tcheck_exp(getCMode(GET_OPCODE(i)) == OpArgR, base+GETARG_C(i))\n#define RKB(i)\tcheck_exp(getBMode(GET_OPCODE(i)) == OpArgK, \\\n\tISK(GETARG_B(i)) ? k+INDEXK(GETARG_B(i)) : base+GETARG_B(i))\n#define RKC(i)\tcheck_exp(getCMode(GET_OPCODE(i)) == OpArgK, \\\n\tISK(GETARG_C(i)) ? k+INDEXK(GETARG_C(i)) : base+GETARG_C(i))\n#define KBx(i)\tcheck_exp(getBMode(GET_OPCODE(i)) == OpArgK, k+GETARG_Bx(i))\n\n\n#define dojump(L,pc,i)\t{(pc) += (i); luai_threadyield(L);}\n\n\n#define Protect(x)\t{ L->savedpc = pc; {x;}; base = L->base; }\n\n\n#define arith_op(op,tm) { \\\n        TValue *rb = RKB(i); \\\n        TValue *rc = RKC(i); \\\n        if (ttisnumber(rb) && ttisnumber(rc)) { \\\n          lua_Number nb = nvalue(rb), nc = nvalue(rc); \\\n          setnvalue(ra, op(nb, nc)); \\\n        } \\\n        else \\\n          Protect(Arith(L, ra, rb, rc, tm)); \\\n      }\n\n\n\nvoid luaV_execute (lua_State *L, int nexeccalls) {\n  LClosure *cl;\n  StkId base;\n  TValue *k;\n  const Instruction *pc;\n reentry:  /* entry point */\n  lua_assert(isLua(L->ci));\n  pc = L->savedpc;\n  cl = &clvalue(L->ci->func)->l;\n  base = L->base;\n  k = cl->p->k;\n  /* main loop of interpreter */\n  for (;;) {\n    const Instruction i = *pc++;\n    StkId ra;\n    if ((L->hookmask & (LUA_MASKLINE | LUA_MASKCOUNT)) &&\n        (--L->hookcount == 0 || L->hookmask & LUA_MASKLINE)) {\n      traceexec(L, pc);\n      if (L->status == LUA_YIELD) {  /* did hook yield? */\n        L->savedpc = pc - 1;\n        return;\n      }\n      base = L->base;\n    }\n    /* warning!! 
several calls may realloc the stack and invalidate `ra' */\n    ra = RA(i);\n    lua_assert(base == L->base && L->base == L->ci->base);\n    lua_assert(base <= L->top && L->top <= L->stack + L->stacksize);\n    lua_assert(L->top == L->ci->top || luaG_checkopenop(i));\n    switch (GET_OPCODE(i)) {\n      case OP_MOVE: {\n        setobjs2s(L, ra, RB(i));\n        continue;\n      }\n      case OP_LOADK: {\n        setobj2s(L, ra, KBx(i));\n        continue;\n      }\n      case OP_LOADBOOL: {\n        setbvalue(ra, GETARG_B(i));\n        if (GETARG_C(i)) pc++;  /* skip next instruction (if C) */\n        continue;\n      }\n      case OP_LOADNIL: {\n        TValue *rb = RB(i);\n        do {\n          setnilvalue(rb--);\n        } while (rb >= ra);\n        continue;\n      }\n      case OP_GETUPVAL: {\n        int b = GETARG_B(i);\n        setobj2s(L, ra, cl->upvals[b]->v);\n        continue;\n      }\n      case OP_GETGLOBAL: {\n        TValue g;\n        TValue *rb = KBx(i);\n        sethvalue(L, &g, cl->env);\n        lua_assert(ttisstring(rb));\n        Protect(luaV_gettable(L, &g, rb, ra));\n        continue;\n      }\n      case OP_GETTABLE: {\n        Protect(luaV_gettable(L, RB(i), RKC(i), ra));\n        continue;\n      }\n      case OP_SETGLOBAL: {\n        TValue g;\n        sethvalue(L, &g, cl->env);\n        lua_assert(ttisstring(KBx(i)));\n        Protect(luaV_settable(L, &g, KBx(i), ra));\n        continue;\n      }\n      case OP_SETUPVAL: {\n        UpVal *uv = cl->upvals[GETARG_B(i)];\n        setobj(L, uv->v, ra);\n        luaC_barrier(L, uv, ra);\n        continue;\n      }\n      case OP_SETTABLE: {\n        Protect(luaV_settable(L, ra, RKB(i), RKC(i)));\n        continue;\n      }\n      case OP_NEWTABLE: {\n        int b = GETARG_B(i);\n        int c = GETARG_C(i);\n        sethvalue(L, ra, luaH_new(L, luaO_fb2int(b), luaO_fb2int(c)));\n        Protect(luaC_checkGC(L));\n        continue;\n      }\n      case OP_SELF: {\n        StkId rb = 
RB(i);\n        setobjs2s(L, ra+1, rb);\n        Protect(luaV_gettable(L, rb, RKC(i), ra));\n        continue;\n      }\n      case OP_ADD: {\n        arith_op(luai_numadd, TM_ADD);\n        continue;\n      }\n      case OP_SUB: {\n        arith_op(luai_numsub, TM_SUB);\n        continue;\n      }\n      case OP_MUL: {\n        arith_op(luai_nummul, TM_MUL);\n        continue;\n      }\n      case OP_DIV: {\n        arith_op(luai_numdiv, TM_DIV);\n        continue;\n      }\n      case OP_MOD: {\n        arith_op(luai_nummod, TM_MOD);\n        continue;\n      }\n      case OP_POW: {\n        arith_op(luai_numpow, TM_POW);\n        continue;\n      }\n      case OP_UNM: {\n        TValue *rb = RB(i);\n        if (ttisnumber(rb)) {\n          lua_Number nb = nvalue(rb);\n          setnvalue(ra, luai_numunm(nb));\n        }\n        else {\n          Protect(Arith(L, ra, rb, rb, TM_UNM));\n        }\n        continue;\n      }\n      case OP_NOT: {\n        int res = l_isfalse(RB(i));  /* next assignment may change this value */\n        setbvalue(ra, res);\n        continue;\n      }\n      case OP_LEN: {\n        const TValue *rb = RB(i);\n        switch (ttype(rb)) {\n          case LUA_TTABLE: {\n            setnvalue(ra, cast_num(luaH_getn(hvalue(rb))));\n            break;\n          }\n          case LUA_TSTRING: {\n            setnvalue(ra, cast_num(tsvalue(rb)->len));\n            break;\n          }\n          default: {  /* try metamethod */\n            Protect(\n              if (!call_binTM(L, rb, luaO_nilobject, ra, TM_LEN))\n                luaG_typeerror(L, rb, \"get length of\");\n            )\n          }\n        }\n        continue;\n      }\n      case OP_CONCAT: {\n        int b = GETARG_B(i);\n        int c = GETARG_C(i);\n        Protect(luaV_concat(L, c-b+1, c); luaC_checkGC(L));\n        setobjs2s(L, RA(i), base+b);\n        continue;\n      }\n      case OP_JMP: {\n        dojump(L, pc, GETARG_sBx(i));\n        continue;\n      }\n      
case OP_EQ: {\n        TValue *rb = RKB(i);\n        TValue *rc = RKC(i);\n        Protect(\n          if (equalobj(L, rb, rc) == GETARG_A(i))\n            dojump(L, pc, GETARG_sBx(*pc));\n        )\n        pc++;\n        continue;\n      }\n      case OP_LT: {\n        Protect(\n          if (luaV_lessthan(L, RKB(i), RKC(i)) == GETARG_A(i))\n            dojump(L, pc, GETARG_sBx(*pc));\n        )\n        pc++;\n        continue;\n      }\n      case OP_LE: {\n        Protect(\n          if (lessequal(L, RKB(i), RKC(i)) == GETARG_A(i))\n            dojump(L, pc, GETARG_sBx(*pc));\n        )\n        pc++;\n        continue;\n      }\n      case OP_TEST: {\n        if (l_isfalse(ra) != GETARG_C(i))\n          dojump(L, pc, GETARG_sBx(*pc));\n        pc++;\n        continue;\n      }\n      case OP_TESTSET: {\n        TValue *rb = RB(i);\n        if (l_isfalse(rb) != GETARG_C(i)) {\n          setobjs2s(L, ra, rb);\n          dojump(L, pc, GETARG_sBx(*pc));\n        }\n        pc++;\n        continue;\n      }\n      case OP_CALL: {\n        int b = GETARG_B(i);\n        int nresults = GETARG_C(i) - 1;\n        if (b != 0) L->top = ra+b;  /* else previous instruction set top */\n        L->savedpc = pc;\n        switch (luaD_precall(L, ra, nresults)) {\n          case PCRLUA: {\n            nexeccalls++;\n            goto reentry;  /* restart luaV_execute over new Lua function */\n          }\n          case PCRC: {\n            /* it was a C function (`precall' called it); adjust results */\n            if (nresults >= 0) L->top = L->ci->top;\n            base = L->base;\n            continue;\n          }\n          default: {\n            return;  /* yield */\n          }\n        }\n      }\n      case OP_TAILCALL: {\n        int b = GETARG_B(i);\n        if (b != 0) L->top = ra+b;  /* else previous instruction set top */\n        L->savedpc = pc;\n        lua_assert(GETARG_C(i) - 1 == LUA_MULTRET);\n        switch (luaD_precall(L, ra, LUA_MULTRET)) {\n          
case PCRLUA: {\n            /* tail call: put new frame in place of previous one */\n            CallInfo *ci = L->ci - 1;  /* previous frame */\n            int aux;\n            StkId func = ci->func;\n            StkId pfunc = (ci+1)->func;  /* previous function index */\n            if (L->openupval) luaF_close(L, ci->base);\n            L->base = ci->base = ci->func + ((ci+1)->base - pfunc);\n            for (aux = 0; pfunc+aux < L->top; aux++)  /* move frame down */\n              setobjs2s(L, func+aux, pfunc+aux);\n            ci->top = L->top = func+aux;  /* correct top */\n            lua_assert(L->top == L->base + clvalue(func)->l.p->maxstacksize);\n            ci->savedpc = L->savedpc;\n            ci->tailcalls++;  /* one more call lost */\n            L->ci--;  /* remove new frame */\n            goto reentry;\n          }\n          case PCRC: {  /* it was a C function (`precall' called it) */\n            base = L->base;\n            continue;\n          }\n          default: {\n            return;  /* yield */\n          }\n        }\n      }\n      case OP_RETURN: {\n        int b = GETARG_B(i);\n        if (b != 0) L->top = ra+b-1;\n        if (L->openupval) luaF_close(L, base);\n        L->savedpc = pc;\n        b = luaD_poscall(L, ra);\n        if (--nexeccalls == 0)  /* was previous function running `here'? */\n          return;  /* no: return */\n        else {  /* yes: continue its execution */\n          if (b) L->top = L->ci->top;\n          lua_assert(isLua(L->ci));\n          lua_assert(GET_OPCODE(*((L->ci)->savedpc - 1)) == OP_CALL);\n          goto reentry;\n        }\n      }\n      case OP_FORLOOP: {\n        lua_Number step = nvalue(ra+2);\n        lua_Number idx = luai_numadd(nvalue(ra), step); /* increment index */\n        lua_Number limit = nvalue(ra+1);\n        if (luai_numlt(0, step) ? 
luai_numle(idx, limit)\n                                : luai_numle(limit, idx)) {\n          dojump(L, pc, GETARG_sBx(i));  /* jump back */\n          setnvalue(ra, idx);  /* update internal index... */\n          setnvalue(ra+3, idx);  /* ...and external index */\n        }\n        continue;\n      }\n      case OP_FORPREP: {\n        const TValue *init = ra;\n        const TValue *plimit = ra+1;\n        const TValue *pstep = ra+2;\n        L->savedpc = pc;  /* next steps may throw errors */\n        if (!tonumber(init, ra))\n          luaG_runerror(L, LUA_QL(\"for\") \" initial value must be a number\");\n        else if (!tonumber(plimit, ra+1))\n          luaG_runerror(L, LUA_QL(\"for\") \" limit must be a number\");\n        else if (!tonumber(pstep, ra+2))\n          luaG_runerror(L, LUA_QL(\"for\") \" step must be a number\");\n        setnvalue(ra, luai_numsub(nvalue(ra), nvalue(pstep)));\n        dojump(L, pc, GETARG_sBx(i));\n        continue;\n      }\n      case OP_TFORLOOP: {\n        StkId cb = ra + 3;  /* call base */\n        setobjs2s(L, cb+2, ra+2);\n        setobjs2s(L, cb+1, ra+1);\n        setobjs2s(L, cb, ra);\n        L->top = cb+3;  /* func. + 2 args (state and index) */\n        Protect(luaD_call(L, cb, GETARG_C(i)));\n        L->top = L->ci->top;\n        cb = RA(i) + 3;  /* previous call may change the stack */\n        if (!ttisnil(cb)) {  /* continue loop? 
*/\n          setobjs2s(L, cb-1, cb);  /* save control variable */\n          dojump(L, pc, GETARG_sBx(*pc));  /* jump back */\n        }\n        pc++;\n        continue;\n      }\n      case OP_SETLIST: {\n        int n = GETARG_B(i);\n        int c = GETARG_C(i);\n        int last;\n        Table *h;\n        if (n == 0) {\n          n = cast_int(L->top - ra) - 1;\n          L->top = L->ci->top;\n        }\n        if (c == 0) c = cast_int(*pc++);\n        runtime_check(L, ttistable(ra));\n        h = hvalue(ra);\n        last = ((c-1)*LFIELDS_PER_FLUSH) + n;\n        if (last > h->sizearray)  /* needs more space? */\n          luaH_resizearray(L, h, last);  /* pre-alloc it at once */\n        for (; n > 0; n--) {\n          TValue *val = ra+n;\n          setobj2t(L, luaH_setnum(L, h, last--), val);\n          luaC_barriert(L, h, val);\n        }\n        continue;\n      }\n      case OP_CLOSE: {\n        luaF_close(L, ra);\n        continue;\n      }\n      case OP_CLOSURE: {\n        Proto *p;\n        Closure *ncl;\n        int nup, j;\n        p = cl->p->p[GETARG_Bx(i)];\n        nup = p->nups;\n        ncl = luaF_newLclosure(L, nup, cl->env);\n        ncl->l.p = p;\n        for (j=0; j<nup; j++, pc++) {\n          if (GET_OPCODE(*pc) == OP_GETUPVAL)\n            ncl->l.upvals[j] = cl->upvals[GETARG_B(*pc)];\n          else {\n            lua_assert(GET_OPCODE(*pc) == OP_MOVE);\n            ncl->l.upvals[j] = luaF_findupval(L, base + GETARG_B(*pc));\n          }\n        }\n        setclvalue(L, ra, ncl);\n        Protect(luaC_checkGC(L));\n        continue;\n      }\n      case OP_VARARG: {\n        int b = GETARG_B(i) - 1;\n        int j;\n        CallInfo *ci = L->ci;\n        int n = cast_int(ci->base - ci->func) - cl->p->numparams - 1;\n        if (b == LUA_MULTRET) {\n          Protect(luaD_checkstack(L, n));\n          ra = RA(i);  /* previous call may change the stack */\n          b = n;\n          L->top = ra + n;\n        }\n        for (j = 0; j 
< b; j++) {\n          if (j < n) {\n            setobjs2s(L, ra + j, ci->base - n + j);\n          }\n          else {\n            setnilvalue(ra + j);\n          }\n        }\n        continue;\n      }\n    }\n  }\n}\n\n"
  },
  {
    "path": "deps/lua/src/lvm.h",
    "content": "/*\n** $Id: lvm.h,v 2.5.1.1 2007/12/27 13:02:25 roberto Exp $\n** Lua virtual machine\n** See Copyright Notice in lua.h\n*/\n\n#ifndef lvm_h\n#define lvm_h\n\n\n#include \"ldo.h\"\n#include \"lobject.h\"\n#include \"ltm.h\"\n\n\n#define tostring(L,o) ((ttype(o) == LUA_TSTRING) || (luaV_tostring(L, o)))\n\n#define tonumber(o,n)\t(ttype(o) == LUA_TNUMBER || \\\n                         (((o) = luaV_tonumber(o,n)) != NULL))\n\n#define equalobj(L,o1,o2) \\\n\t(ttype(o1) == ttype(o2) && luaV_equalval(L, o1, o2))\n\n\nLUAI_FUNC int luaV_lessthan (lua_State *L, const TValue *l, const TValue *r);\nLUAI_FUNC int luaV_equalval (lua_State *L, const TValue *t1, const TValue *t2);\nLUAI_FUNC const TValue *luaV_tonumber (const TValue *obj, TValue *n);\nLUAI_FUNC int luaV_tostring (lua_State *L, StkId obj);\nLUAI_FUNC void luaV_gettable (lua_State *L, const TValue *t, TValue *key,\n                                            StkId val);\nLUAI_FUNC void luaV_settable (lua_State *L, const TValue *t, TValue *key,\n                                            StkId val);\nLUAI_FUNC void luaV_execute (lua_State *L, int nexeccalls);\nLUAI_FUNC void luaV_concat (lua_State *L, int total, int last);\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/lzio.c",
    "content": "/*\n** $Id: lzio.c,v 1.31.1.1 2007/12/27 13:02:25 roberto Exp $\n** a generic input stream interface\n** See Copyright Notice in lua.h\n*/\n\n\n#include <string.h>\n\n#define lzio_c\n#define LUA_CORE\n\n#include \"lua.h\"\n\n#include \"llimits.h\"\n#include \"lmem.h\"\n#include \"lstate.h\"\n#include \"lzio.h\"\n\n\nint luaZ_fill (ZIO *z) {\n  size_t size;\n  lua_State *L = z->L;\n  const char *buff;\n  lua_unlock(L);\n  buff = z->reader(L, z->data, &size);\n  lua_lock(L);\n  if (buff == NULL || size == 0) return EOZ;\n  z->n = size - 1;\n  z->p = buff;\n  return char2int(*(z->p++));\n}\n\n\nint luaZ_lookahead (ZIO *z) {\n  if (z->n == 0) {\n    if (luaZ_fill(z) == EOZ)\n      return EOZ;\n    else {\n      z->n++;  /* luaZ_fill removed first byte; put back it */\n      z->p--;\n    }\n  }\n  return char2int(*z->p);\n}\n\n\nvoid luaZ_init (lua_State *L, ZIO *z, lua_Reader reader, void *data) {\n  z->L = L;\n  z->reader = reader;\n  z->data = data;\n  z->n = 0;\n  z->p = NULL;\n}\n\n\n/* --------------------------------------------------------------- read --- */\nsize_t luaZ_read (ZIO *z, void *b, size_t n) {\n  while (n) {\n    size_t m;\n    if (luaZ_lookahead(z) == EOZ)\n      return n;  /* return number of missing bytes */\n    m = (n <= z->n) ? n : z->n;  /* min. between n and z->n */\n    memcpy(b, z->p, m);\n    z->n -= m;\n    z->p += m;\n    b = (char *)b + m;\n    n -= m;\n  }\n  return 0;\n}\n\n/* ------------------------------------------------------------------------ */\nchar *luaZ_openspace (lua_State *L, Mbuffer *buff, size_t n) {\n  if (n > buff->buffsize) {\n    if (n < LUA_MINBUFFER) n = LUA_MINBUFFER;\n    luaZ_resizebuffer(L, buff, n);\n  }\n  return buff->buffer;\n}\n\n\n"
  },
  {
    "path": "deps/lua/src/lzio.h",
    "content": "/*\n** $Id: lzio.h,v 1.21.1.1 2007/12/27 13:02:25 roberto Exp $\n** Buffered streams\n** See Copyright Notice in lua.h\n*/\n\n\n#ifndef lzio_h\n#define lzio_h\n\n#include \"lua.h\"\n\n#include \"lmem.h\"\n\n\n#define EOZ\t(-1)\t\t\t/* end of stream */\n\ntypedef struct Zio ZIO;\n\n#define char2int(c)\tcast(int, cast(unsigned char, (c)))\n\n#define zgetc(z)  (((z)->n--)>0 ?  char2int(*(z)->p++) : luaZ_fill(z))\n\ntypedef struct Mbuffer {\n  char *buffer;\n  size_t n;\n  size_t buffsize;\n} Mbuffer;\n\n#define luaZ_initbuffer(L, buff) ((buff)->buffer = NULL, (buff)->buffsize = 0)\n\n#define luaZ_buffer(buff)\t((buff)->buffer)\n#define luaZ_sizebuffer(buff)\t((buff)->buffsize)\n#define luaZ_bufflen(buff)\t((buff)->n)\n\n#define luaZ_resetbuffer(buff) ((buff)->n = 0)\n\n\n#define luaZ_resizebuffer(L, buff, size) \\\n\t(luaM_reallocvector(L, (buff)->buffer, (buff)->buffsize, size, char), \\\n\t(buff)->buffsize = size)\n\n#define luaZ_freebuffer(L, buff)\tluaZ_resizebuffer(L, buff, 0)\n\n\nLUAI_FUNC char *luaZ_openspace (lua_State *L, Mbuffer *buff, size_t n);\nLUAI_FUNC void luaZ_init (lua_State *L, ZIO *z, lua_Reader reader,\n                                        void *data);\nLUAI_FUNC size_t luaZ_read (ZIO* z, void* b, size_t n);\t/* read next n bytes */\nLUAI_FUNC int luaZ_lookahead (ZIO *z);\n\n\n\n/* --------- Private Part ------------------ */\n\nstruct Zio {\n  size_t n;\t\t\t/* bytes still unread */\n  const char *p;\t\t/* current position in buffer */\n  lua_Reader reader;\n  void* data;\t\t\t/* additional data */\n  lua_State *L;\t\t\t/* Lua state (for reader) */\n};\n\n\nLUAI_FUNC int luaZ_fill (ZIO *z);\n\n#endif\n"
  },
  {
    "path": "deps/lua/src/print.c",
    "content": "/*\n** $Id: print.c,v 1.55a 2006/05/31 13:30:05 lhf Exp $\n** print bytecodes\n** See Copyright Notice in lua.h\n*/\n\n#include <ctype.h>\n#include <stdio.h>\n\n#define luac_c\n#define LUA_CORE\n\n#include \"ldebug.h\"\n#include \"lobject.h\"\n#include \"lopcodes.h\"\n#include \"lundump.h\"\n\n#define PrintFunction\tluaU_print\n\n#define Sizeof(x)\t((int)sizeof(x))\n#define VOID(p)\t\t((const void*)(p))\n\nstatic void PrintString(const TString* ts)\n{\n const char* s=getstr(ts);\n size_t i,n=ts->tsv.len;\n putchar('\"');\n for (i=0; i<n; i++)\n {\n  int c=s[i];\n  switch (c)\n  {\n   case '\"': printf(\"\\\\\\\"\"); break;\n   case '\\\\': printf(\"\\\\\\\\\"); break;\n   case '\\a': printf(\"\\\\a\"); break;\n   case '\\b': printf(\"\\\\b\"); break;\n   case '\\f': printf(\"\\\\f\"); break;\n   case '\\n': printf(\"\\\\n\"); break;\n   case '\\r': printf(\"\\\\r\"); break;\n   case '\\t': printf(\"\\\\t\"); break;\n   case '\\v': printf(\"\\\\v\"); break;\n   default:\tif (isprint((unsigned char)c))\n   \t\t\tputchar(c);\n\t\telse\n\t\t\tprintf(\"\\\\%03u\",(unsigned char)c);\n  }\n }\n putchar('\"');\n}\n\nstatic void PrintConstant(const Proto* f, int i)\n{\n const TValue* o=&f->k[i];\n switch (ttype(o))\n {\n  case LUA_TNIL:\n\tprintf(\"nil\");\n\tbreak;\n  case LUA_TBOOLEAN:\n\tprintf(bvalue(o) ? \"true\" : \"false\");\n\tbreak;\n  case LUA_TNUMBER:\n\tprintf(LUA_NUMBER_FMT,nvalue(o));\n\tbreak;\n  case LUA_TSTRING:\n\tPrintString(rawtsvalue(o));\n\tbreak;\n  default:\t\t\t\t/* cannot happen */\n\tprintf(\"? 
type=%d\",ttype(o));\n\tbreak;\n }\n}\n\nstatic void PrintCode(const Proto* f)\n{\n const Instruction* code=f->code;\n int pc,n=f->sizecode;\n for (pc=0; pc<n; pc++)\n {\n  Instruction i=code[pc];\n  OpCode o=GET_OPCODE(i);\n  int a=GETARG_A(i);\n  int b=GETARG_B(i);\n  int c=GETARG_C(i);\n  int bx=GETARG_Bx(i);\n  int sbx=GETARG_sBx(i);\n  int line=getline(f,pc);\n  printf(\"\\t%d\\t\",pc+1);\n  if (line>0) printf(\"[%d]\\t\",line); else printf(\"[-]\\t\");\n  printf(\"%-9s\\t\",luaP_opnames[o]);\n  switch (getOpMode(o))\n  {\n   case iABC:\n    printf(\"%d\",a);\n    if (getBMode(o)!=OpArgN) printf(\" %d\",ISK(b) ? (-1-INDEXK(b)) : b);\n    if (getCMode(o)!=OpArgN) printf(\" %d\",ISK(c) ? (-1-INDEXK(c)) : c);\n    break;\n   case iABx:\n    if (getBMode(o)==OpArgK) printf(\"%d %d\",a,-1-bx); else printf(\"%d %d\",a,bx);\n    break;\n   case iAsBx:\n    if (o==OP_JMP) printf(\"%d\",sbx); else printf(\"%d %d\",a,sbx);\n    break;\n  }\n  switch (o)\n  {\n   case OP_LOADK:\n    printf(\"\\t; \"); PrintConstant(f,bx);\n    break;\n   case OP_GETUPVAL:\n   case OP_SETUPVAL:\n    printf(\"\\t; %s\", (f->sizeupvalues>0) ? 
getstr(f->upvalues[b]) : \"-\");\n    break;\n   case OP_GETGLOBAL:\n   case OP_SETGLOBAL:\n    printf(\"\\t; %s\",svalue(&f->k[bx]));\n    break;\n   case OP_GETTABLE:\n   case OP_SELF:\n    if (ISK(c)) { printf(\"\\t; \"); PrintConstant(f,INDEXK(c)); }\n    break;\n   case OP_SETTABLE:\n   case OP_ADD:\n   case OP_SUB:\n   case OP_MUL:\n   case OP_DIV:\n   case OP_POW:\n   case OP_EQ:\n   case OP_LT:\n   case OP_LE:\n    if (ISK(b) || ISK(c))\n    {\n     printf(\"\\t; \");\n     if (ISK(b)) PrintConstant(f,INDEXK(b)); else printf(\"-\");\n     printf(\" \");\n     if (ISK(c)) PrintConstant(f,INDEXK(c)); else printf(\"-\");\n    }\n    break;\n   case OP_JMP:\n   case OP_FORLOOP:\n   case OP_FORPREP:\n    printf(\"\\t; to %d\",sbx+pc+2);\n    break;\n   case OP_CLOSURE:\n    printf(\"\\t; %p\",VOID(f->p[bx]));\n    break;\n   case OP_SETLIST:\n    if (c==0) printf(\"\\t; %d\",(int)code[++pc]);\n    else printf(\"\\t; %d\",c);\n    break;\n   default:\n    break;\n  }\n  printf(\"\\n\");\n }\n}\n\n#define SS(x)\t(x==1)?\"\":\"s\"\n#define S(x)\tx,SS(x)\n\nstatic void PrintHeader(const Proto* f)\n{\n const char* s=getstr(f->source);\n if (*s=='@' || *s=='=')\n  s++;\n else if (*s==LUA_SIGNATURE[0])\n  s=\"(bstring)\";\n else\n  s=\"(string)\";\n printf(\"\\n%s <%s:%d,%d> (%d instruction%s, %d bytes at %p)\\n\",\n \t(f->linedefined==0)?\"main\":\"function\",s,\n\tf->linedefined,f->lastlinedefined,\n\tS(f->sizecode),f->sizecode*Sizeof(Instruction),VOID(f));\n printf(\"%d%s param%s, %d slot%s, %d upvalue%s, \",\n\tf->numparams,f->is_vararg?\"+\":\"\",SS(f->numparams),\n\tS(f->maxstacksize),S(f->nups));\n printf(\"%d local%s, %d constant%s, %d function%s\\n\",\n\tS(f->sizelocvars),S(f->sizek),S(f->sizep));\n}\n\nstatic void PrintConstants(const Proto* f)\n{\n int i,n=f->sizek;\n printf(\"constants (%d) for %p:\\n\",n,VOID(f));\n for (i=0; i<n; i++)\n {\n  printf(\"\\t%d\\t\",i+1);\n  PrintConstant(f,i);\n  printf(\"\\n\");\n }\n}\n\nstatic void PrintLocals(const Proto* 
f)\n{\n int i,n=f->sizelocvars;\n printf(\"locals (%d) for %p:\\n\",n,VOID(f));\n for (i=0; i<n; i++)\n {\n  printf(\"\\t%d\\t%s\\t%d\\t%d\\n\",\n  i,getstr(f->locvars[i].varname),f->locvars[i].startpc+1,f->locvars[i].endpc+1);\n }\n}\n\nstatic void PrintUpvalues(const Proto* f)\n{\n int i,n=f->sizeupvalues;\n printf(\"upvalues (%d) for %p:\\n\",n,VOID(f));\n if (f->upvalues==NULL) return;\n for (i=0; i<n; i++)\n {\n  printf(\"\\t%d\\t%s\\n\",i,getstr(f->upvalues[i]));\n }\n}\n\nvoid PrintFunction(const Proto* f, int full)\n{\n int i,n=f->sizep;\n PrintHeader(f);\n PrintCode(f);\n if (full)\n {\n  PrintConstants(f);\n  PrintLocals(f);\n  PrintUpvalues(f);\n }\n for (i=0; i<n; i++) PrintFunction(f->p[i],full);\n}\n"
  },
  {
    "path": "deps/lua/src/strbuf.c",
    "content": "/* strbuf - String buffer routines\n *\n * Copyright (c) 2010-2012  Mark Pulford <mark@kyne.com.au>\n *\n * Permission is hereby granted, free of charge, to any person obtaining\n * a copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to\n * permit persons to whom the Software is furnished to do so, subject to\n * the following conditions:\n *\n * The above copyright notice and this permission notice shall be\n * included in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdarg.h>\n#include <string.h>\n#include <stdint.h>\n\n#include \"strbuf.h\"\n\nstatic void die(const char *fmt, ...)\n{\n    va_list arg;\n\n    va_start(arg, fmt);\n    vfprintf(stderr, fmt, arg);\n    va_end(arg);\n    fprintf(stderr, \"\\n\");\n\n    abort();\n}\n\nvoid strbuf_init(strbuf_t *s, size_t len)\n{\n    size_t size;\n\n    if (!len)\n        size = STRBUF_DEFAULT_SIZE;\n    else\n        size = len + 1;\n    if (size < len)\n        die(\"Overflow, len: %zu\", len);\n    s->buf = NULL;\n    s->size = size;\n    s->length = 0;\n    s->dynamic = 0;\n    s->reallocs = 0;\n    s->debug = 0;\n\n    s->buf = malloc(size);\n    if (!s->buf)\n        die(\"Out of memory\");\n\n    
strbuf_ensure_null(s);\n}\n\nstrbuf_t *strbuf_new(size_t len)\n{\n    strbuf_t *s;\n\n    s = malloc(sizeof(strbuf_t));\n    if (!s)\n        die(\"Out of memory\");\n\n    strbuf_init(s, len);\n\n    /* Dynamic strbuf allocation / deallocation */\n    s->dynamic = 1;\n\n    return s;\n}\n\nstatic inline void debug_stats(strbuf_t *s)\n{\n    if (s->debug) {\n        fprintf(stderr, \"strbuf(%lx) reallocs: %d, length: %zd, size: %zd\\n\",\n                (long)s, s->reallocs, s->length, s->size);\n    }\n}\n\n/* If strbuf_t has not been dynamically allocated, strbuf_free() can\n * be called any number of times after strbuf_init() */\nvoid strbuf_free(strbuf_t *s)\n{\n    debug_stats(s);\n\n    if (s->buf) {\n        free(s->buf);\n        s->buf = NULL;\n    }\n    if (s->dynamic)\n        free(s);\n}\n\nchar *strbuf_free_to_string(strbuf_t *s, size_t *len)\n{\n    char *buf;\n\n    debug_stats(s);\n\n    strbuf_ensure_null(s);\n\n    buf = s->buf;\n    if (len)\n        *len = s->length;\n\n    if (s->dynamic)\n        free(s);\n\n    return buf;\n}\n\nstatic size_t calculate_new_size(strbuf_t *s, size_t len)\n{\n    size_t reqsize, newsize;\n\n    if (len <= 0)\n        die(\"BUG: Invalid strbuf length requested\");\n\n    /* Ensure there is room for optional NULL termination */\n    reqsize = len + 1;\n    if (reqsize < len)\n        die(\"Overflow, len: %zu\", len);\n\n    /* If the user has requested to shrink the buffer, do it exactly */\n    if (s->size > reqsize)\n        return reqsize;\n\n    newsize = s->size;\n    if (reqsize >= SIZE_MAX / 2) {\n        newsize = reqsize;\n    } else {\n        /* Exponential sizing */\n        while (newsize < reqsize)\n            newsize *= 2;\n    }\n\n    if (newsize < reqsize)\n        die(\"BUG: strbuf length would overflow, len: %zu\", len);\n\n    return newsize;\n}\n\n\n/* Ensure strbuf can handle a string length bytes long (ignoring optional\n * NULL termination). 
*/\nvoid strbuf_resize(strbuf_t *s, size_t len)\n{\n    size_t newsize;\n\n    newsize = calculate_new_size(s, len);\n\n    if (s->debug > 1) {\n        fprintf(stderr, \"strbuf(%lx) resize: %zd => %zd\\n\",\n                (long)s, s->size, newsize);\n    }\n\n    s->size = newsize;\n    s->buf = realloc(s->buf, s->size);\n    if (!s->buf)\n        die(\"Out of memory, len: %zu\", len);\n    s->reallocs++;\n}\n\nvoid strbuf_append_string(strbuf_t *s, const char *str)\n{\n    size_t i, space;\n\n    space = strbuf_empty_length(s);\n\n    for (i = 0; str[i]; i++) {\n        if (space < 1) {\n            strbuf_resize(s, s->length + 1);\n            space = strbuf_empty_length(s);\n        }\n\n        s->buf[s->length] = str[i];\n        s->length++;\n        space--;\n    }\n}\n\n\n/* vi:ai et sw=4 ts=4:\n */\n"
  },
  {
    "path": "deps/lua/src/strbuf.h",
    "content": "/* strbuf - String buffer routines\n *\n * Copyright (c) 2010-2012  Mark Pulford <mark@kyne.com.au>\n *\n * Permission is hereby granted, free of charge, to any person obtaining\n * a copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to\n * permit persons to whom the Software is furnished to do so, subject to\n * the following conditions:\n *\n * The above copyright notice and this permission notice shall be\n * included in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.\n * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY\n * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,\n * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n */\n\n#include <stdlib.h>\n#include <stdarg.h>\n\n/* Size: Total bytes allocated to *buf\n * Length: String length, excluding optional NULL terminator.\n * Dynamic: True if created via strbuf_new()\n */\n\ntypedef struct {\n    char *buf;\n    size_t size;\n    size_t length;\n    int dynamic;\n    int reallocs;\n    int debug;\n} strbuf_t;\n\n#ifndef STRBUF_DEFAULT_SIZE\n#define STRBUF_DEFAULT_SIZE 1023\n#endif\n\n/* Initialise */\nextern strbuf_t *strbuf_new(size_t len);\nextern void strbuf_init(strbuf_t *s, size_t len);\n\n/* Release */\nextern void strbuf_free(strbuf_t *s);\nextern char *strbuf_free_to_string(strbuf_t *s, size_t *len);\n\n/* Management */\nextern void strbuf_resize(strbuf_t *s, size_t len);\nstatic size_t strbuf_empty_length(strbuf_t *s);\nstatic 
size_t strbuf_length(strbuf_t *s);\nstatic char *strbuf_string(strbuf_t *s, size_t *len);\nstatic void strbuf_ensure_empty_length(strbuf_t *s, size_t len);\nstatic char *strbuf_empty_ptr(strbuf_t *s);\nstatic void strbuf_extend_length(strbuf_t *s, size_t len);\n\n/* Update */\nstatic void strbuf_append_mem(strbuf_t *s, const char *c, size_t len);\nextern void strbuf_append_string(strbuf_t *s, const char *str);\nstatic void strbuf_append_char(strbuf_t *s, const char c);\nstatic void strbuf_ensure_null(strbuf_t *s);\n\n/* Reset string for before use */\nstatic inline void strbuf_reset(strbuf_t *s)\n{\n    s->length = 0;\n}\n\nstatic inline int strbuf_allocated(strbuf_t *s)\n{\n    return s->buf != NULL;\n}\n\n/* Return bytes remaining in the string buffer\n * Ensure there is space for a NULL terminator. */\nstatic inline size_t strbuf_empty_length(strbuf_t *s)\n{\n    return s->size - s->length - 1;\n}\n\nstatic inline void strbuf_ensure_empty_length(strbuf_t *s, size_t len)\n{\n    if (len > strbuf_empty_length(s))\n        strbuf_resize(s, s->length + len);\n}\n\nstatic inline char *strbuf_empty_ptr(strbuf_t *s)\n{\n    return s->buf + s->length;\n}\n\nstatic inline void strbuf_extend_length(strbuf_t *s, size_t len)\n{\n    s->length += len;\n}\n\nstatic inline size_t strbuf_length(strbuf_t *s)\n{\n    return s->length;\n}\n\nstatic inline void strbuf_append_char(strbuf_t *s, const char c)\n{\n    strbuf_ensure_empty_length(s, 1);\n    s->buf[s->length++] = c;\n}\n\nstatic inline void strbuf_append_char_unsafe(strbuf_t *s, const char c)\n{\n    s->buf[s->length++] = c;\n}\n\nstatic inline void strbuf_append_mem(strbuf_t *s, const char *c, size_t len)\n{\n    strbuf_ensure_empty_length(s, len);\n    memcpy(s->buf + s->length, c, len);\n    s->length += len;\n}\n\nstatic inline void strbuf_append_mem_unsafe(strbuf_t *s, const char *c, size_t len)\n{\n    memcpy(s->buf + s->length, c, len);\n    s->length += len;\n}\n\nstatic inline void strbuf_ensure_null(strbuf_t 
*s)\n{\n    s->buf[s->length] = 0;\n}\n\nstatic inline char *strbuf_string(strbuf_t *s, size_t *len)\n{\n    if (len)\n        *len = s->length;\n\n    return s->buf;\n}\n\n/* vi:ai et sw=4 ts=4:\n */\n"
  },
  {
    "path": "deps/lua/test/README",
    "content": "These are simple tests for Lua.  Some of them contain useful code.\nThey are meant to be run to make sure Lua is built correctly and also\nto be read, to see how Lua programs look.\n\nHere is a one-line summary of each program:\n\n   bisect.lua\t\tbisection method for solving non-linear equations\n   cf.lua\t\ttemperature conversion table (celsius to fahrenheit)\n   echo.lua             echo command line arguments\n   env.lua              environment variables as automatic global variables\n   factorial.lua\tfactorial without recursion\n   fib.lua\t\tfibonacci function with cache\n   fibfor.lua\t\tfibonacci numbers with coroutines and generators\n   globals.lua\t\treport global variable usage\n   hello.lua\t\tthe first program in every language\n   life.lua\t\tConway's Game of Life\n   luac.lua\t \tbare-bones luac\n   printf.lua\t\tan implementation of printf\n   readonly.lua\t\tmake global variables readonly\n   sieve.lua\t\tthe sieve of Eratosthenes programmed with coroutines\n   sort.lua\t\ttwo implementations of a sort function\n   table.lua\t\tmake table, grouping all data for the same item\n   trace-calls.lua\ttrace calls\n   trace-globals.lua\ttrace assignments to global variables\n   xd.lua\t\thex dump\n\n"
  },
  {
    "path": "deps/lua/test/bisect.lua",
    "content": "-- bisection method for solving non-linear equations\n\ndelta=1e-6\t-- tolerance\n\nfunction bisect(f,a,b,fa,fb)\n local c=(a+b)/2\n io.write(n,\" c=\",c,\" a=\",a,\" b=\",b,\"\\n\")\n if c==a or c==b or math.abs(a-b)<delta then return c,b-a end\n n=n+1\n local fc=f(c)\n if fa*fc<0 then return bisect(f,a,c,fa,fc) else return bisect(f,c,b,fc,fb) end\nend\n\n-- find root of f in the interval [a,b]. needs f(a)*f(b)<0\nfunction solve(f,a,b)\n n=0\n local z,e=bisect(f,a,b,f(a),f(b))\n io.write(string.format(\"after %d steps, root is %.17g with error %.1e, f=%.1e\\n\",n,z,e,f(z)))\nend\n\n-- our function\nfunction f(x)\n return x*x*x-x-1\nend\n\n-- find zero in [1,2]\nsolve(f,1,2)\n"
  },
  {
    "path": "deps/lua/test/cf.lua",
    "content": "-- temperature conversion table (celsius to fahrenheit)\n\nfor c0=-20,50-1,10 do\n\tio.write(\"C \")\n\tfor c=c0,c0+10-1 do\n\t\tio.write(string.format(\"%3.0f \",c))\n\tend\n\tio.write(\"\\n\")\n\t\n\tio.write(\"F \")\n\tfor c=c0,c0+10-1 do\n\t\tf=(9/5)*c+32\n\t\tio.write(string.format(\"%3.0f \",f))\n\tend\n\tio.write(\"\\n\\n\")\nend\n"
  },
  {
    "path": "deps/lua/test/echo.lua",
    "content": "-- echo command line arguments\n\nfor i=0,table.getn(arg) do\n print(i,arg[i])\nend\n"
  },
  {
    "path": "deps/lua/test/env.lua",
    "content": "-- read environment variables as if they were global variables\n\nlocal f=function (t,i) return os.getenv(i) end\nsetmetatable(getfenv(),{__index=f})\n\n-- an example\nprint(a,USER,PATH)\n"
  },
  {
    "path": "deps/lua/test/factorial.lua",
    "content": "-- function closures are powerful\n\n-- traditional fixed-point operator from functional programming\nY = function (g)\n      local a = function (f) return f(f) end\n      return a(function (f)\n                 return g(function (x)\n                             local c=f(f)\n                             return c(x)\n                           end)\n               end)\nend\n\n\n-- factorial without recursion\nF = function (f)\n      return function (n)\n               if n == 0 then return 1\n               else return n*f(n-1) end\n             end\n    end\n\nfactorial = Y(F)   -- factorial is the fixed point of F\n\n-- now test it\nfunction test(x)\n\tio.write(x,\"! = \",factorial(x),\"\\n\")\nend\n\nfor n=0,16 do\n\ttest(n)\nend\n"
  },
  {
    "path": "deps/lua/test/fib.lua",
    "content": "-- fibonacci function with cache\n\n-- very inefficient fibonacci function\nfunction fib(n)\n\tN=N+1\n\tif n<2 then\n\t\treturn n\n\telse\n\t\treturn fib(n-1)+fib(n-2)\n\tend\nend\n\n-- a general-purpose value cache\nfunction cache(f)\n\tlocal c={}\n\treturn function (x)\n\t\tlocal y=c[x]\n\t\tif not y then\n\t\t\ty=f(x)\n\t\t\tc[x]=y\n\t\tend\n\t\treturn y\n\tend\nend\n\n-- run and time it\nfunction test(s,f)\n\tN=0\n\tlocal c=os.clock()\n\tlocal v=f(n)\n\tlocal t=os.clock()-c\n\tprint(s,n,v,t,N)\nend\n\nn=arg[1] or 24\t\t-- for other values, do lua fib.lua XX\nn=tonumber(n)\nprint(\"\",\"n\",\"value\",\"time\",\"evals\")\ntest(\"plain\",fib)\nfib=cache(fib)\ntest(\"cached\",fib)\n"
  },
  {
    "path": "deps/lua/test/fibfor.lua",
    "content": "-- example of for with generator functions\n\nfunction generatefib (n)\n  return coroutine.wrap(function ()\n    local a,b = 1, 1\n    while a <= n do\n      coroutine.yield(a)\n      a, b = b, a+b\n    end\n  end)\nend\n\nfor i in generatefib(1000) do print(i) end\n"
  },
  {
    "path": "deps/lua/test/globals.lua",
    "content": "-- reads luac listings and reports global variable usage\n-- lines where a global is written to are marked with \"*\"\n-- typical usage: luac -p -l file.lua | lua globals.lua | sort | lua table.lua\n\nwhile 1 do\n local s=io.read()\n if s==nil then break end\n local ok,_,l,op,g=string.find(s,\"%[%-?(%d*)%]%s*([GS])ETGLOBAL.-;%s+(.*)$\")\n if ok then\n  if op==\"S\" then op=\"*\" else op=\"\" end\n  io.write(g,\"\\t\",l,op,\"\\n\")\n end\nend\n"
  },
  {
    "path": "deps/lua/test/hello.lua",
    "content": "-- the first program in every language\n\nio.write(\"Hello world, from \",_VERSION,\"!\\n\")\n"
  },
  {
    "path": "deps/lua/test/life.lua",
    "content": "-- life.lua\n-- original by Dave Bollinger <DBollinger@compuserve.com> posted to lua-l\n-- modified to use ANSI terminal escape sequences\n-- modified to use for instead of while\n\nlocal write=io.write\n\nALIVE=\"\"\tDEAD=\"\"\nALIVE=\"O\"\tDEAD=\"-\"\n\nfunction delay() -- NOTE: SYSTEM-DEPENDENT, adjust as necessary\n  for i=1,10000 do end\n  -- local i=os.clock()+1 while(os.clock()<i) do end\nend\n\nfunction ARRAY2D(w,h)\n  local t = {w=w,h=h}\n  for y=1,h do\n    t[y] = {}\n    for x=1,w do\n      t[y][x]=0\n    end\n  end\n  return t\nend\n\n_CELLS = {}\n\n-- give birth to a \"shape\" within the cell array\nfunction _CELLS:spawn(shape,left,top)\n  for y=0,shape.h-1 do\n    for x=0,shape.w-1 do\n      self[top+y][left+x] = shape[y*shape.w+x+1]\n    end\n  end\nend\n\n-- run the CA and produce the next generation\nfunction _CELLS:evolve(next)\n  local ym1,y,yp1,yi=self.h-1,self.h,1,self.h\n  while yi > 0 do\n    local xm1,x,xp1,xi=self.w-1,self.w,1,self.w\n    while xi > 0 do\n      local sum = self[ym1][xm1] + self[ym1][x] + self[ym1][xp1] +\n                  self[y][xm1] + self[y][xp1] +\n                  self[yp1][xm1] + self[yp1][x] + self[yp1][xp1]\n      next[y][x] = ((sum==2) and self[y][x]) or ((sum==3) and 1) or 0\n      xm1,x,xp1,xi = x,xp1,xp1+1,xi-1\n    end\n    ym1,y,yp1,yi = y,yp1,yp1+1,yi-1\n  end\nend\n\n-- output the array to screen\nfunction _CELLS:draw()\n  local out=\"\" -- accumulate to reduce flicker\n  for y=1,self.h do\n   for x=1,self.w do\n      out=out..(((self[y][x]>0) and ALIVE) or DEAD)\n    end\n    out=out..\"\\n\"\n  end\n  write(out)\nend\n\n-- constructor\nfunction CELLS(w,h)\n  local c = ARRAY2D(w,h)\n  c.spawn = _CELLS.spawn\n  c.evolve = _CELLS.evolve\n  c.draw = _CELLS.draw\n  return c\nend\n\n--\n-- shapes suitable for use with spawn() above\n--\nHEART = { 1,0,1,1,0,1,1,1,1; w=3,h=3 }\nGLIDER = { 0,0,1,1,0,1,0,1,1; w=3,h=3 }\nEXPLODE = { 0,1,0,1,1,1,1,0,1,0,1,0; w=3,h=4 }\nFISH = { 
0,1,1,1,1,1,0,0,0,1,0,0,0,0,1,1,0,0,1,0; w=5,h=4 }\nBUTTERFLY = { 1,0,0,0,1,0,1,1,1,0,1,0,0,0,1,1,0,1,0,1,1,0,0,0,1; w=5,h=5 }\n\n-- the main routine\nfunction LIFE(w,h)\n  -- create two arrays\n  local thisgen = CELLS(w,h)\n  local nextgen = CELLS(w,h)\n\n  -- create some life\n  -- about 1000 generations of fun, then a glider steady-state\n  thisgen:spawn(GLIDER,5,4)\n  thisgen:spawn(EXPLODE,25,10)\n  thisgen:spawn(FISH,4,12)\n\n  -- run until break\n  local gen=1\n  write(\"\\027[2J\")\t-- ANSI clear screen\n  while 1 do\n    thisgen:evolve(nextgen)\n    thisgen,nextgen = nextgen,thisgen\n    write(\"\\027[H\")\t-- ANSI home cursor\n    thisgen:draw()\n    write(\"Life - generation \",gen,\"\\n\")\n    gen=gen+1\n    if gen>2000 then break end\n    --delay()\t\t-- no delay\n  end\nend\n\nLIFE(40,20)\n"
  },
  {
    "path": "deps/lua/test/luac.lua",
    "content": "-- bare-bones luac in Lua\n-- usage: lua luac.lua file.lua\n\nassert(arg[1]~=nil and arg[2]==nil,\"usage: lua luac.lua file.lua\")\nf=assert(io.open(\"luac.out\",\"wb\"))\nassert(f:write(string.dump(assert(loadfile(arg[1])))))\nassert(f:close())\n"
  },
  {
    "path": "deps/lua/test/printf.lua",
    "content": "-- an implementation of printf\n\nfunction printf(...)\n io.write(string.format(...))\nend\n\nprintf(\"Hello %s from %s on %s\\n\",os.getenv\"USER\" or \"there\",_VERSION,os.date())\n"
  },
  {
    "path": "deps/lua/test/readonly.lua",
    "content": "-- make global variables readonly\n\nlocal f=function (t,i) error(\"cannot redefine global variable `\"..i..\"'\",2) end\nlocal g={}\nlocal G=getfenv()\nsetmetatable(g,{__index=G,__newindex=f})\nsetfenv(1,g)\n\n-- an example\nrawset(g,\"x\",3)\nx=2\ny=1\t-- cannot redefine `y'\n"
  },
  {
    "path": "deps/lua/test/sieve.lua",
    "content": "-- the sieve of Eratosthenes programmed with coroutines\n-- typical usage: lua -e N=1000 sieve.lua | column\n\n-- generate all the numbers from 2 to n\nfunction gen (n)\n  return coroutine.wrap(function ()\n    for i=2,n do coroutine.yield(i) end\n  end)\nend\n\n-- filter the numbers generated by `g', removing multiples of `p'\nfunction filter (p, g)\n  return coroutine.wrap(function ()\n    while 1 do\n      local n = g()\n      if n == nil then return end\n      if math.mod(n, p) ~= 0 then coroutine.yield(n) end\n    end\n  end)\nend\n\nN=N or 1000\t\t-- from command line\nx = gen(N)\t\t-- generate primes up to N\nwhile 1 do\n  local n = x()\t\t-- pick a number until done\n  if n == nil then break end\n  print(n)\t\t-- must be a prime number\n  x = filter(n, x)\t-- now remove its multiples\nend\n"
  },
  {
    "path": "deps/lua/test/sort.lua",
    "content": "-- two implementations of a sort function\n-- this is an example only. Lua has now a built-in function \"sort\"\n\n-- extracted from Programming Pearls, page 110\nfunction qsort(x,l,u,f)\n if l<u then\n  local m=math.random(u-(l-1))+l-1\t-- choose a random pivot in range l..u\n  x[l],x[m]=x[m],x[l]\t\t\t-- swap pivot to first position\n  local t=x[l]\t\t\t\t-- pivot value\n  m=l\n  local i=l+1\n  while i<=u do\n    -- invariant: x[l+1..m] < t <= x[m+1..i-1]\n    if f(x[i],t) then\n      m=m+1\n      x[m],x[i]=x[i],x[m]\t\t-- swap x[i] and x[m]\n    end\n    i=i+1\n  end\n  x[l],x[m]=x[m],x[l]\t\t\t-- swap pivot to a valid place\n  -- x[l+1..m-1] < x[m] <= x[m+1..u]\n  qsort(x,l,m-1,f)\n  qsort(x,m+1,u,f)\n end\nend\n\nfunction selectionsort(x,n,f)\n local i=1\n while i<=n do\n  local m,j=i,i+1\n  while j<=n do\n   if f(x[j],x[m]) then m=j end\n   j=j+1\n  end\n x[i],x[m]=x[m],x[i]\t\t\t-- swap x[i] and x[m]\n i=i+1\n end\nend\n\nfunction show(m,x)\n io.write(m,\"\\n\\t\")\n local i=1\n while x[i] do\n  io.write(x[i])\n  i=i+1\n  if x[i] then io.write(\",\") end\n end\n io.write(\"\\n\")\nend\n\nfunction testsorts(x)\n local n=1\n while x[n] do n=n+1 end; n=n-1\t\t-- count elements\n show(\"original\",x)\n qsort(x,1,n,function (x,y) return x<y end)\n show(\"after quicksort\",x)\n selectionsort(x,n,function (x,y) return x>y end)\n show(\"after reverse selection sort\",x)\n qsort(x,1,n,function (x,y) return x<y end)\n show(\"after quicksort again\",x)\nend\n\n-- array to be sorted\nx={\"Jan\",\"Feb\",\"Mar\",\"Apr\",\"May\",\"Jun\",\"Jul\",\"Aug\",\"Sep\",\"Oct\",\"Nov\",\"Dec\"}\n\ntestsorts(x)\n"
  },
  {
    "path": "deps/lua/test/table.lua",
    "content": "-- make table, grouping all data for the same item\n-- input is 2 columns (item, data)\n\nlocal A\nwhile 1 do\n local l=io.read()\n if l==nil then break end\n local _,_,a,b=string.find(l,'\"?([_%w]+)\"?%s*(.*)$')\n if a~=A then A=a io.write(\"\\n\",a,\":\") end\n io.write(\" \",b)\nend\nio.write(\"\\n\")\n"
  },
  {
    "path": "deps/lua/test/trace-calls.lua",
    "content": "-- trace calls\n-- example: lua -ltrace-calls bisect.lua\n\nlocal level=0\n\nlocal function hook(event)\n local t=debug.getinfo(3)\n io.write(level,\" >>> \",string.rep(\" \",level))\n if t~=nil and t.currentline>=0 then io.write(t.short_src,\":\",t.currentline,\" \") end\n t=debug.getinfo(2)\n if event==\"call\" then\n  level=level+1\n else\n  level=level-1 if level<0 then level=0 end\n end\n if t.what==\"main\" then\n  if event==\"call\" then\n   io.write(\"begin \",t.short_src)\n  else\n   io.write(\"end \",t.short_src)\n  end\n elseif t.what==\"Lua\" then\n-- table.foreach(t,print)\n  io.write(event,\" \",t.name or \"(Lua)\",\" <\",t.linedefined,\":\",t.short_src,\">\")\n else\n io.write(event,\" \",t.name or \"(C)\",\" [\",t.what,\"] \")\n end\n io.write(\"\\n\")\nend\n\ndebug.sethook(hook,\"cr\")\nlevel=0\n"
  },
  {
    "path": "deps/lua/test/trace-globals.lua",
    "content": "-- trace assignments to global variables\n\ndo\n -- a tostring that quotes strings. note the use of the original tostring.\n local _tostring=tostring\n local tostring=function(a)\n  if type(a)==\"string\" then\n   return string.format(\"%q\",a)\n  else\n   return _tostring(a)\n  end\n end\n\n local log=function (name,old,new)\n  local t=debug.getinfo(3,\"Sl\")\n  local line=t.currentline\n  io.write(t.short_src)\n  if line>=0 then io.write(\":\",line) end\n  io.write(\": \",name,\" is now \",tostring(new),\" (was \",tostring(old),\")\",\"\\n\")\n end\n\n local g={}\n local set=function (t,name,value)\n  log(name,g[name],value)\n  g[name]=value\n end\n setmetatable(getfenv(),{__index=g,__newindex=set})\nend\n\n-- an example\n\na=1\nb=2\na=10\nb=20\nb=nil\nb=200\nprint(a,b,c)\n"
  },
  {
    "path": "deps/lua/test/xd.lua",
    "content": "-- hex dump\n-- usage: lua xd.lua < file\n\nlocal offset=0\nwhile true do\n local s=io.read(16)\n if s==nil then return end\n io.write(string.format(\"%08X  \",offset))\n string.gsub(s,\"(.)\",\n\tfunction (c) io.write(string.format(\"%02X \",string.byte(c))) end)\n io.write(string.rep(\" \",3*(16-string.len(s))))\n io.write(\" \",string.gsub(s,\"%c\",\".\"),\"\\n\") \n offset=offset+16\nend\n"
  },
  {
    "path": "deps/xxhash/.gitignore",
    "content": "# Ignore:\nlibxxhash.*\nxxh*sum\n\n# But include:\n!libxxhash.pc.in\n"
  },
  {
    "path": "deps/xxhash/LICENSE",
    "content": "xxHash Library\nCopyright (c) 2012-2021 Yann Collet\nAll rights reserved.\n\nBSD 2-Clause License (https://www.opensource.org/licenses/bsd-license.php)\n\nRedistribution and use in source and binary forms, with or without modification,\nare permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this\n  list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice, this\n  list of conditions and the following disclaimer in the documentation and/or\n  other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR\nANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON\nANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "deps/xxhash/Makefile",
    "content": "# ################################################################\n# xxHash Makefile\n# Copyright (C) 2012-2024 Yann Collet\n#\n# GPL v2 License\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# You can contact the author at:\n#   - xxHash homepage: https://www.xxhash.com\n#   - xxHash source repository: https://github.com/Cyan4973/xxHash\n# ################################################################\n# xxhsum: provides 32/64 bits hash of one or multiple files, or stdin\n# ################################################################\n\n# Version numbers\nSED ?= sed\nSED_ERE_OPT ?= -E\nLIBVER_MAJOR_SCRIPT:=`$(SED) -n '/define XXH_VERSION_MAJOR/s/.*[[:blank:]]\\([0-9][0-9]*\\).*/\\1/p' < xxhash.h`\nLIBVER_MINOR_SCRIPT:=`$(SED) -n '/define XXH_VERSION_MINOR/s/.*[[:blank:]]\\([0-9][0-9]*\\).*/\\1/p' < xxhash.h`\nLIBVER_PATCH_SCRIPT:=`$(SED) -n '/define XXH_VERSION_RELEASE/s/.*[[:blank:]]\\([0-9][0-9]*\\).*/\\1/p' < xxhash.h`\nLIBVER_MAJOR := $(shell echo $(LIBVER_MAJOR_SCRIPT))\nLIBVER_MINOR := $(shell echo $(LIBVER_MINOR_SCRIPT))\nLIBVER_PATCH := $(shell echo $(LIBVER_PATCH_SCRIPT))\nLIBVER := $(LIBVER_MAJOR).$(LIBVER_MINOR).$(LIBVER_PATCH)\n\nMAKEFLAGS += --no-print-directory\nCFLAGS ?= -O3\nDEBUGFLAGS+=-Wall -Wextra -Wconversion -Wcast-qual -Wcast-align -Wshadow \\\n            
-Wstrict-aliasing=1 -Wswitch-enum -Wdeclaration-after-statement \\\n            -Wstrict-prototypes -Wundef -Wpointer-arith -Wformat-security \\\n            -Wvla -Wformat=2 -Winit-self -Wfloat-equal -Wwrite-strings \\\n            -Wredundant-decls -Wstrict-overflow=2\nCFLAGS += $(DEBUGFLAGS) $(MOREFLAGS)\nFLAGS   = $(CFLAGS) $(CPPFLAGS)\nXXHSUM_VERSION = $(LIBVER)\n\n# Define *.exe as extension for Windows systems\nifneq (,$(filter Windows%,$(OS)))\nEXT =.exe\nelse\nEXT =\nendif\n\n# automatically enable runtime vector dispatch on x86/64 targets\ndetect_x86_arch = $(shell $(CC) -dumpmachine | grep -E 'i[3-6]86|x86_64')\nifneq ($(strip $(call detect_x86_arch)),)\n    #note: can be overridden at compile time, by setting DISPATCH=0\n    DISPATCH ?= 1\nelse\n    ifeq ($(DISPATCH),1)\n        $(info \"Note: DISPATCH=1 is only supported on x86/x64 targets\")\n    endif\n    override DISPATCH := 0\nendif\n\nifeq ($(NODE_JS),1)\n    # Link in unrestricted filesystem support\n    LDFLAGS += -sNODERAWFS\n    # Set flag to fix isatty() support\n    CPPFLAGS += -DXSUM_NODE_JS=1\nendif\n\n# OS X linker doesn't support -soname, and use different extension\n# see: https://developer.apple.com/library/mac/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/DynamicLibraryDesignGuidelines.html\nUNAME ?= $(shell uname)\nifeq ($(UNAME), Darwin)\n\tSHARED_EXT = dylib\n\tSHARED_EXT_MAJOR = $(LIBVER_MAJOR).$(SHARED_EXT)\n\tSHARED_EXT_VER = $(LIBVER).$(SHARED_EXT)\n\tSONAME_FLAGS = -install_name $(LIBDIR)/libxxhash.$(SHARED_EXT_MAJOR) -compatibility_version $(LIBVER_MAJOR) -current_version $(LIBVER)\nelse\n\tSONAME_FLAGS = -Wl,-soname=libxxhash.$(SHARED_EXT).$(LIBVER_MAJOR)\n\tSHARED_EXT = so\n\tSHARED_EXT_MAJOR = $(SHARED_EXT).$(LIBVER_MAJOR)\n\tSHARED_EXT_VER = $(SHARED_EXT).$(LIBVER)\nendif\n\nLIBXXH = libxxhash.$(SHARED_EXT_VER)\n\nCLI_DIR = cli\nCLI_SRCS = $(wildcard $(CLI_DIR)/*.c)\nCLI_OBJS = $(CLI_SRCS:.c=.o)\n\n## define default before including 
multiconf.make\n## generate CLI and libraries in release mode (default for `make`)\n.PHONY: default\ndefault: DEBUGFLAGS=\ndefault: lib xxhsum_and_links\n\nC_SRCDIRS = . $(CLI_DIR) fuzz\ninclude build/make/multiconf.make\n\n.PHONY: all\nall: lib xxhsum xxhsum_inlinedXXH\n\n## xxhsum is the command line interface (CLI)\nifeq ($(DISPATCH),1)\nxxhsum: CPPFLAGS += -DXXHSUM_DISPATCH=1\nXXHSUM_ADD_O = xxh_x86dispatch.o\nendif\n$(eval $(call c_program,xxhsum,xxhash.o $(CLI_OBJS) $(XXHSUM_ADD_O)))\n\n.PHONY: xxhsum_and_links\nxxhsum_and_links: xxhsum xxh32sum xxh64sum xxh128sum xxh3sum\n\nLN ?= ln\n\nxxh32sum xxh64sum xxh128sum xxh3sum: xxhsum\n\t$(LN) -sf $<$(EXT) $@$(EXT)\n\n## generate CLI in 32-bits mode\nxxhsum32: CFLAGS += -m32\nifeq ($(DISPATCH),1)\nxxhsum32: CPPFLAGS += -DXXHSUM_DISPATCH=1\nendif\n$(eval $(call c_program,xxhsum32,xxhash.o $(CLI_OBJS) $(XXHSUM_ADD_O)))\n\n## Warning: dispatch only works for x86/x64 systems\ndispatch: CPPFLAGS += -DXXHSUM_DISPATCH=1\n$(eval $(call c_program,dispatch,xxhash.o xxh_x86dispatch.o $(CLI_OBJS)))\n\nxxhsum_inlinedXXH: CPPFLAGS += -DXXH_INLINE_ALL\n$(eval $(call c_program,xxhsum_inlinedXXH,$(CLI_OBJS)))\n\n\n# =================================================\n# library\n\nlibxxhash.a:\n$(eval $(call static_library,libxxhash.a,xxhash.o))\n\n$(LIBXXH): LDFLAGS += $(SONAME_FLAGS)\nifeq (,$(filter Windows%,$(OS)))\n$(LIBXXH): CFLAGS += -fPIC\nendif\nLIBXXHASH_OBJS := xxhash.o $(if $(filter 1,$(LIBXXH_DISPATCH)),xxh_x86dispatch.o)\n$(eval $(call c_dynamic_library,$(LIBXXH),$(LIBXXHASH_OBJS)))\n\nlibxxhash.$(SHARED_EXT_MAJOR): $(LIBXXH)\n\t$(LN) -sf $< $@\n\nlibxxhash.$(SHARED_EXT): libxxhash.$(SHARED_EXT_MAJOR)\n\t$(LN) -sf $< $@\n\n.PHONY: libxxhash  ## generate dynamic xxhash library\nlibxxhash: $(LIBXXH) libxxhash.$(SHARED_EXT_MAJOR) libxxhash.$(SHARED_EXT)\n\n.PHONY: lib  ## generate static and dynamic xxhash libraries\nlib: libxxhash.a libxxhash\n\n\n# helper targets\n\nAWK  ?= awk\nGREP ?= grep\nSORT ?= sort\nNM   ?= 
nm\n\n.PHONY: list\nlist:  ## list all Makefile targets\n\t$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | $(AWK) -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ \"^[#.]\") {print $$1}}' | $(SORT) | egrep -v -e '^[^[:alnum:]]' -e '^$@$$' | xargs\n\n.PHONY: help\nhelp:  ## list documented targets\n\t$(GREP) -E '^[0-9a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | \\\n\t$(SORT) | \\\n\t$(AWK) 'BEGIN {FS = \":.*?## \"}; {printf \"\\033[36m%-30s\\033[0m %s\\n\", $$1, $$2}'\n\n.PHONY: clean\nclean:\n\t$(RM) -r *.dSYM   # Mac OS-X specific\n\t$(RM) core *.o *.obj *.$(SHARED_EXT) *.$(SHARED_EXT).* *.a libxxhash.pc\n\t$(RM) xxhsum.wasm xxhsum.js xxhsum.html\n\t$(RM) xxh32sum$(EXT) xxh64sum$(EXT) xxh128sum$(EXT) xxh3sum$(EXT)\n\t$(RM) fuzzer\n\t$(MAKE) -C tests clean\n\t$(MAKE) -C tests/bench clean\n\t$(MAKE) -C tests/collisions clean\n\t@echo cleaning completed\n\n\n# =================================================\n# tests\n# =================================================\n\n# make check can be run with cross-compiled binaries on emulated environments (qemu user mode)\n# by setting $(RUN_ENV) to the target emulation environment\n.PHONY: check\ncheck: xxhsum test_sanity   ## basic tests for xxhsum CLI, set RUN_ENV for emulated environments\n\t# stdin\n\t# If you get \"Wrong parameters\" on Emscripten+Node.js, recompile with `NODE_JS=1`\n\t$(RUN_ENV) ./xxhsum$(EXT) < xxhash.c\n\t# multiple files\n\t$(RUN_ENV) ./xxhsum$(EXT) xxhash.*\n\t# internal bench\n\t$(RUN_ENV) ./xxhsum$(EXT) -bi0\n\t# long bench command\n\t$(RUN_ENV) ./xxhsum$(EXT) --benchmark-all -i0\n\t# bench multiple variants\n\t$(RUN_ENV) ./xxhsum$(EXT) -b1,2,3 -i0\n\t# file bench\n\t$(RUN_ENV) ./xxhsum$(EXT) -bi0 xxhash.c\n\t# 32-bit\n\t$(RUN_ENV) ./xxhsum$(EXT) -H0 xxhash.c\n\t# 128-bit\n\t$(RUN_ENV) ./xxhsum$(EXT) -H2 xxhash.c\n\t# XXH3 (enforce BSD style)\n\t$(RUN_ENV) ./xxhsum$(EXT) -H3 xxhash.c | grep \"XXH3\"\n\t# request incorrect variant\n\t$(RUN_ENV) ./xxhsum$(EXT) -H9 
xxhash.c ; test $$? -eq 1\n\t@printf \"\\n .......   checks completed successfully   ....... \\n\"\n\n.PHONY: test-unicode\ntest-unicode:\n\t$(MAKE) -C tests test_unicode\n\n.PHONY: test_sanity\ntest_sanity:\n\t$(MAKE) -C tests test_sanity\n\n.PHONY: test-mem\nVALGRIND = valgrind --leak-check=yes --error-exitcode=1\ntest-mem: RUN_ENV = $(VALGRIND)\ntest-mem: xxhsum check\n\n.PHONY: test32\ntest32: xxhsum32\n\t@echo ---- test 32-bit ----\n\t./xxhsum32 -bi0 xxhash.c\n\nTEST_FILES = xxhsum$(EXT) xxhash.c xxhash.h\n.PHONY: test-xxhsum-c\ntest-xxhsum-c: xxhsum\n\t# xxhsum to/from pipe\n\t./xxhsum $(TEST_FILES) | ./xxhsum -c -\n\t./xxhsum -H0 $(TEST_FILES) | ./xxhsum -c -\n\t# xxhsum -c is unable to verify checksum of file from STDIN (#470)\n\t./xxhsum < README.md > .test.README.md.xxh\n\t./xxhsum -c .test.README.md.xxh < README.md\n\t# xxhsum -q does not display \"Loading\" message into stderr (#251)\n\t! ./xxhsum -q $(TEST_FILES) 2>&1 | grep Loading\n\t# xxhsum does not display \"Loading\" message into stderr either\n\t! 
./xxhsum $(TEST_FILES) 2>&1 | grep Loading\n\t# Check that xxhsum does display the filename that it failed to open.\n\tLC_ALL=C ./xxhsum nonexistent 2>&1 | grep \"Error: Could not open 'nonexistent'\"\n\t# xxhsum to/from file, shell redirection\n\t./xxhsum $(TEST_FILES) > .test.xxh64\n\t./xxhsum --tag $(TEST_FILES) > .test.xxh64_tag\n\t./xxhsum --little-endian $(TEST_FILES) > .test.le_xxh64\n\t./xxhsum --tag --little-endian $(TEST_FILES) > .test.le_xxh64_tag\n\t./xxhsum -H0 $(TEST_FILES) > .test.xxh32\n\t./xxhsum -H0 --tag $(TEST_FILES) > .test.xxh32_tag\n\t./xxhsum -H0 --little-endian $(TEST_FILES) > .test.le_xxh32\n\t./xxhsum -H0 --tag --little-endian $(TEST_FILES) > .test.le_xxh32_tag\n\t./xxhsum -H2 $(TEST_FILES) > .test.xxh128\n\t./xxhsum -H2 --tag $(TEST_FILES) > .test.xxh128_tag\n\t./xxhsum -H2 --little-endian $(TEST_FILES) > .test.le_xxh128\n\t./xxhsum -H2 --tag --little-endian $(TEST_FILES) > .test.le_xxh128_tag\n\t./xxhsum -H3 $(TEST_FILES) > .test.xxh3\n\t./xxhsum -H3 --tag $(TEST_FILES) > .test.xxh3_tag\n\t./xxhsum -H3 --little-endian $(TEST_FILES) > .test.le_xxh3\n\t./xxhsum -H3 --tag --little-endian $(TEST_FILES) > .test.le_xxh3_tag\n\t./xxhsum -c .test.xxh*\n\t./xxhsum -c --little-endian .test.le_xxh*\n\t./xxhsum -c .test.*_tag\n\t# read list of files from stdin\n\t./xxhsum -c < .test.xxh32\n\t./xxhsum -c < .test.xxh64\n\t./xxhsum -c < .test.xxh128\n\t./xxhsum -c < .test.xxh3\n\tcat .test.xxh* | ./xxhsum -c -\n\t# check variant with '*' marker as second separator\n\t$(SED) 's/  / \*/' .test.xxh32 | ./xxhsum -c\n\t# bsd-style output\n\t./xxhsum --tag xxhsum* | $(GREP) XXH64\n\t./xxhsum --tag -H0 xxhsum* | $(GREP) XXH32\n\t./xxhsum --tag -H1 xxhsum* | $(GREP) XXH64\n\t./xxhsum --tag -H2 xxhsum* | $(GREP) XXH128\n\t./xxhsum --tag -H3 xxhsum* | $(GREP) XXH3\n\t./xxhsum       -H3 xxhsum* | $(GREP) XXH3_ # prefix for GNU format\n\t./xxhsum --tag -H32 xxhsum* | $(GREP) XXH32\n\t./xxhsum --tag -H64 xxhsum* | $(GREP) XXH64\n\t./xxhsum --tag -H128 xxhsum* | $(GREP) 
XXH128\n\t./xxhsum --tag -H0 --little-endian xxhsum* | $(GREP) XXH32_LE\n\t./xxhsum --tag -H1 --little-endian xxhsum* | $(GREP) XXH64_LE\n\t./xxhsum --tag -H2 --little-endian xxhsum* | $(GREP) XXH128_LE\n\t./xxhsum --tag -H3 --little-endian xxhsum* | $(GREP) XXH3_LE\n\t./xxhsum --tag -H32 --little-endian xxhsum* | $(GREP) XXH32_LE\n\t./xxhsum --tag -H64 --little-endian xxhsum* | $(GREP) XXH64_LE\n\t./xxhsum --tag -H128 --little-endian xxhsum* | $(GREP) XXH128_LE\n\t# check bsd-style\n\t./xxhsum --tag xxhsum* | ./xxhsum -c\n\t./xxhsum --tag -H32 --little-endian xxhsum* | ./xxhsum -c\n\t# xxhsum -c warns about improperly formatted lines.\n\techo '12345678 ' >>.test.xxh32\n\t./xxhsum -c .test.xxh32 | $(GREP) improperly\n\techo '123456789  file' >>.test.xxh64\n\t./xxhsum -c .test.xxh64 | $(GREP) improperly\n\t# Expects \"FAILED\"\n\techo \"0000000000000000  LICENSE\" | ./xxhsum -c -; test $$? -eq 1\n\techo \"00000000  LICENSE\" | ./xxhsum -c -; test $$? -eq 1\n\t# Expects \"FAILED open or read\"\n\techo \"0000000000000000  test-expects-file-not-found\" | ./xxhsum -c -; test $$? -eq 1\n\techo \"00000000  test-expects-file-not-found\" | ./xxhsum -c -; test $$? 
-eq 1\n\t# --filelist\n\techo xxhash.c > .test.filenames\n\t$(RUN_ENV) ./xxhsum$(EXT) --filelist .test.filenames\n\t# --filelist from stdin\n\tcat .test.filenames | $(RUN_ENV) ./xxhsum$(EXT) --filelist\n\t@$(RM) .test.*\n\nCC_VERSION := $(shell $(CC) --version 2>/dev/null)\nifneq (,$(findstring clang,$(CC_VERSION)))\nfuzzer: CFLAGS += -fsanitize=fuzzer\n$(eval $(call c_program,fuzzer, fuzz/fuzzer.o xxhash.o))\nelse\nfuzzer: this_target_requires_clang # intentional fail\nendif\n\n.PHONY: test-filename-escape\ntest-filename-escape:\n\t$(MAKE) -C tests test_filename_escape\n\n.PHONY: test-cli-comment-line\ntest-cli-comment-line:\n\t$(MAKE) -C tests test_cli_comment_line\n\n.PHONY: test-cli-ignore-missing\ntest-cli-ignore-missing:\n\t$(MAKE) -C tests test_cli_ignore_missing\n\n.PHONY: armtest\narmtest:\n\t@echo ---- test ARM compilation ----\n\tCC=arm-linux-gnueabi-gcc MOREFLAGS=\"-Werror -static\" $(MAKE) xxhsum\n\n.PHONY: arm64test\narm64test:\n\t@echo ---- test ARM64 compilation ----\n\tCC=aarch64-linux-gnu-gcc MOREFLAGS=\"-Werror -static\" $(MAKE) xxhsum\n\n.PHONY: clangtest\nclangtest:\n\t@echo ---- test clang compilation ----\n\tCC=clang MOREFLAGS=\"-Werror -Wconversion -Wno-sign-conversion\" $(MAKE) all\n\n.PHONY: gcc-og-test\ngcc-og-test:\n\t@echo ---- test gcc -Og compilation ----\n\tCFLAGS=\"-Og -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror -fPIC\" CPPFLAGS=\"-DXXH_NO_INLINE_HINTS\" MOREFLAGS=\"-Werror\" $(MAKE) all\n\n.PHONY: cxxtest\ncxxtest:\n\t@echo ---- test C++ compilation ----\n\tCC=\"$(CXX) -Wno-deprecated\" $(MAKE) all CFLAGS=\"-O3 -Wall -Wextra -Wundef -Wshadow -Wcast-align -Werror -fPIC\"\n\n# In strict C90 mode, there is no `long long` type support,\n# consequently, only XXH32 can be compiled.\n.PHONY: c90test\nifeq ($(NO_C90_TEST),true)\nc90test:\n\t@echo no c90 compatibility test\nelse\nc90test: CPPFLAGS += -DXXH_NO_LONG_LONG\nc90test: CFLAGS += -std=c90 -Werror -pedantic\nc90test: xxhash.c\n\t@echo ---- test strict C90 compilation [xxh32 
only] ----\n\t$(RM) xxhash.o\n\t$(CC) $(FLAGS) $^ -c\n\t$(NM) xxhash.o | $(GREP) XXH64 ; test $$? -eq 1\n\t$(RM) xxhash.o\nendif\n\n.PHONY: noxxh3test\nnoxxh3test: CPPFLAGS += -DXXH_NO_XXH3\nnoxxh3test: CFLAGS += -Werror -pedantic -Wno-long-long  # XXH64 requires long long support\nnoxxh3test: OFILE = xxh_noxxh3.o\nnoxxh3test: xxhash.c\n\t@echo ---- test compilation without XXH3 ----\n\t$(CC) $(FLAGS) -c $^ -o $(OFILE)\n\t$(NM) $(OFILE) | $(GREP) XXH3_ ; test $$? -eq 1\n\t$(RM) $(OFILE)\n\n.PHONY: nostreamtest\nnostreamtest: CPPFLAGS += -DXXH_NO_STREAM\nnostreamtest: CFLAGS += -Werror -pedantic -Wno-long-long  # XXH64 requires long long support\nnostreamtest: OFILE = xxh_nostream.o\nnostreamtest: xxhash.c\n\t@echo ---- test compilation without streaming ----\n\t$(CC) $(FLAGS) -c $^ -o $(OFILE)\n\t$(NM) $(OFILE) | $(GREP) update ; test $$? -eq 1\n\t$(RM) $(OFILE)\n\n.PHONY: nostdlibtest\nnostdlibtest: CPPFLAGS += -DXXH_NO_STDLIB\nnostdlibtest: CFLAGS += -Werror -pedantic -Wno-long-long  # XXH64 requires long long support\nnostdlibtest: OFILE = xxh_nostdlib.o\nnostdlibtest: xxhash.c\n\t@echo ---- test compilation without \\<stdlib.h\\> ----\n\t$(CC) $(FLAGS) -c $^ -o $(OFILE)\n\t$(NM) $(OFILE) | $(GREP) \"U _free\\|U free\" ; test $$? -eq 1\n\t$(RM) $(OFILE)\n\n.PHONY: usan\nusan: CC=clang\nusan: CXX=clang++\nusan:  ## check CLI runtime for undefined behavior, using clang's sanitizer\n\t@echo ---- check undefined behavior - sanitize ----\n\t$(MAKE) test CC=$(CC) CXX=$(CXX) MOREFLAGS=\"-g -fsanitize=undefined -fno-sanitize-recover=all\"\n\n.PHONY: staticAnalyze\nSCANBUILD ?= scan-build\nstaticAnalyze: clean  ## check C source files using $(SCANBUILD) static analyzer\n\t@echo ---- static analyzer - $(SCANBUILD) ----\n\tCFLAGS=\"-g -Werror\" $(SCANBUILD) --status-bugs -v $(MAKE) all\n\nCPPCHECK ?= cppcheck\n.PHONY: cppcheck\ncppcheck:  ## check C source files using $(CPPCHECK) static analyzer\n\t@echo ---- static analyzer - $(CPPCHECK) ----\n\t$(CPPCHECK) . 
--force --enable=warning,portability,performance,style --error-exitcode=1 > /dev/null\n\n.PHONY: namespaceTest\nnamespaceTest:  ## ensure XXH_NAMESPACE redefines all public symbols\n\t$(CC) -c xxhash.c\n\t$(CC) -DXXH_NAMESPACE=TEST_ -c xxhash.c -o xxhash2.o\n\t$(CC) xxhash.o xxhash2.o $(CLI_SRCS)  -o xxhsum2  # will fail if one namespace missing (symbol collision)\n\t$(RM) *.o xxhsum2  # clean\n\nMAN = $(CLI_DIR)/xxhsum.1\nMD2ROFF ?= ronn\nMD2ROFF_FLAGS ?= --roff --warnings --manual=\"User Commands\" --organization=\"xxhsum $(XXHSUM_VERSION)\"\n$(MAN): $(CLI_DIR)/xxhsum.1.md xxhash.h\n\tcat $< | $(MD2ROFF) $(MD2ROFF_FLAGS) | $(SED) -n '/^\\.\\\\\\\".*/!p' > $@\n\n.PHONY: man\nman: $(MAN)  ## generate man page from markdown source\n\n.PHONY: clean-man\nclean-man:\n\t$(RM) xxhsum.1\n\n.PHONY: preview-man\npreview-man: man\n\tman ./xxhsum.1\n\n.PHONY: test\ntest: DEBUGFLAGS += -DXXH_DEBUGLEVEL=1\ntest: all namespaceTest check test-xxhsum-c c90test test-tools noxxh3test nostdlibtest\n\n# this test checks that including \"xxhash.h\" multiple times and with different directives still compiles properly\n.PHONY: test-multiInclude\ntest-multiInclude:\n\t$(MAKE) -C tests test_multiInclude\n\n.PHONY: test-inline-notexposed\ntest-inline-notexposed: xxhsum_inlinedXXH\n\t$(NM) xxhsum_inlinedXXH | $(GREP) \"t _XXH32_\" ; test $$? -eq 1  # no XXH32 symbol should be left\n\t$(NM) xxhsum_inlinedXXH | $(GREP) \"t _XXH64_\" ; test $$? 
-eq 1  # no XXH64 symbol should be left\n\n.PHONY: test-inline\ntest-inline: test-inline-notexposed test-multiInclude\n\n\n.PHONY: test-all\ntest-all: CFLAGS += -Werror\ntest-all: test test32 test-unicode clangtest gcc-og-test cxxtest usan test-inline listL120 trailingWhitespace test-xxh-nnn-sums\n\n.PHONY: test-tools\ntest-tools:\n\tCFLAGS=-Werror $(MAKE) -C tests/bench\n\tCFLAGS=-Werror $(MAKE) -C tests/collisions check\n\n.PHONY: test-xxh-nnn-sums\ntest-xxh-nnn-sums: xxhsum_and_links\n\t./xxhsum    README.md > tmp.xxhsum.out    # xxhsum outputs xxh64\n\t./xxh32sum  README.md > tmp.xxh32sum.out\n\t./xxh64sum  README.md > tmp.xxh64sum.out\n\t./xxh128sum README.md > tmp.xxh128sum.out\n\t./xxh3sum   README.md > tmp.xxh3sum.out\n\tcat tmp.xxhsum.out\n\tcat tmp.xxh32sum.out\n\tcat tmp.xxh64sum.out\n\tcat tmp.xxh128sum.out\n\tcat tmp.xxh3sum.out\n\t./xxhsum -c tmp.xxhsum.out\n\t./xxhsum -c tmp.xxh32sum.out\n\t./xxhsum -c tmp.xxh64sum.out\n\t./xxhsum -c tmp.xxh128sum.out\n\t./xxhsum -c tmp.xxh3sum.out\n\t./xxh32sum -c tmp.xxhsum.out            ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh32sum -c tmp.xxh32sum.out\n\t./xxh32sum -c tmp.xxh64sum.out          ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh32sum -c tmp.xxh128sum.out         ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh32sum -c tmp.xxh3sum.out           ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh64sum -c tmp.xxhsum.out\n\t./xxh64sum -c tmp.xxh32sum.out          ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh64sum -c tmp.xxh64sum.out\n\t./xxh64sum -c tmp.xxh128sum.out         ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh64sum -c tmp.xxh3sum.out           ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh128sum -c tmp.xxhsum.out           ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh128sum -c tmp.xxh32sum.out         ; test $$? 
-eq 1  # expects \"no properly formatted\"\n\t./xxh128sum -c tmp.xxh64sum.out         ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh128sum -c tmp.xxh128sum.out\n\t./xxh128sum -c tmp.xxh3sum.out          ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh3sum -c tmp.xxhsum.out             ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh3sum -c tmp.xxh32sum.out           ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh3sum -c tmp.xxh64sum.out           ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh3sum -c tmp.xxh128sum.out          ; test $$? -eq 1  # expects \"no properly formatted\"\n\t./xxh3sum -c tmp.xxh3sum.out\n\n.PHONY: listL120\nlistL120:  # extract lines >= 120 characters in *.{c,h}, by Takayuki Matsuoka (note: $$, for Makefile compatibility)\n\tfind . -type f -name '*.c' -o -name '*.h' | while read -r filename; do awk 'length > 120 {print FILENAME \"(\" FNR \"): \" $$0}' $$filename; done\n\n.PHONY: trailingWhitespace\ntrailingWhitespace:\n\t! 
$(GREP) -E \"`printf '[ \\\\t]$$'`\" cli/*.c cli/*.h cli/*.1 *.c *.h LICENSE Makefile build/cmake/CMakeLists.txt\n\n.PHONY: lint-unicode\nlint-unicode:\n\t./tests/unicode_lint.sh\n\n# =========================================================\n# make install is validated only for the following targets\n# =========================================================\nifneq (,$(filter Linux Darwin GNU/kFreeBSD GNU Haiku OpenBSD FreeBSD NetBSD DragonFly SunOS CYGWIN% , $(UNAME)))\n\nDESTDIR     ?=\n# directory variables: GNU conventions prefer lowercase\n# see https://www.gnu.org/prep/standards/html_node/Makefile-Conventions.html\n# support both lower and uppercase (BSD), use uppercase in script\nprefix      ?= /usr/local\nPREFIX      ?= $(prefix)\nexec_prefix ?= $(PREFIX)\nEXEC_PREFIX ?= $(exec_prefix)\nlibdir      ?= $(EXEC_PREFIX)/lib\nLIBDIR      ?= $(libdir)\nincludedir  ?= $(PREFIX)/include\nINCLUDEDIR  ?= $(includedir)\nbindir      ?= $(EXEC_PREFIX)/bin\nBINDIR      ?= $(bindir)\ndatarootdir ?= $(PREFIX)/share\nmandir      ?= $(datarootdir)/man\nman1dir     ?= $(mandir)/man1\n\nifneq (,$(filter $(UNAME),FreeBSD NetBSD DragonFly))\nPKGCONFIGDIR ?= $(PREFIX)/libdata/pkgconfig\nelse\nPKGCONFIGDIR ?= $(LIBDIR)/pkgconfig\nendif\n\nifneq (,$(filter $(UNAME),OpenBSD NetBSD DragonFly SunOS))\nMANDIR  ?= $(PREFIX)/man/man1\nelse\nMANDIR  ?= $(man1dir)\nendif\n\nifneq (,$(filter $(UNAME),SunOS))\nINSTALL ?= ginstall\nelse\nINSTALL ?= install\nendif\n\nINSTALL_PROGRAM ?= $(INSTALL)\nINSTALL_DATA    ?= $(INSTALL) -m 644\nMAKE_DIR        ?= $(INSTALL) -d -m 755\n\n\n# Escape special symbols by putting each character into its separate class\nEXEC_PREFIX_REGEX ?= $(shell echo \"$(EXEC_PREFIX)\" | $(SED) $(SED_ERE_OPT) -e \"s/([^^])/[\\1]/g\" -e \"s/\\\\^/\\\\\\\\^/g\")\nPREFIX_REGEX ?= $(shell echo \"$(PREFIX)\" | $(SED) $(SED_ERE_OPT) -e \"s/([^^])/[\\1]/g\" -e \"s/\\\\^/\\\\\\\\^/g\")\n\nPCLIBDIR ?= $(shell echo \"$(LIBDIR)\"     | $(SED) -n $(SED_ERE_OPT) -e 
\"s@^$(EXEC_PREFIX_REGEX)(/|$$)@@p\")\nPCINCDIR ?= $(shell echo \"$(INCLUDEDIR)\" | $(SED) -n $(SED_ERE_OPT) -e \"s@^$(PREFIX_REGEX)(/|$$)@@p\")\nPCEXECDIR?= $(if $(filter $(PREFIX),$(EXEC_PREFIX)),$$\\{prefix\\},$(EXEC_PREFIX))\n\nifeq (,$(PCLIBDIR))\n# Additional prefix check is required, since the empty string is technically a\n# valid PCLIBDIR\nifeq (,$(shell echo \"$(LIBDIR)\" | $(SED) -n $(SED_ERE_OPT) -e \"\\\\@^$(EXEC_PREFIX_REGEX)(/|$$)@ p\"))\n$(error configured libdir ($(LIBDIR)) is outside of exec_prefix ($(EXEC_PREFIX)), can't generate pkg-config file)\nendif\nendif\n\nifeq (,$(PCINCDIR))\n# Additional prefix check is required, since the empty string is technically a\n# valid PCINCDIR\nifeq (,$(shell echo \"$(INCLUDEDIR)\" | $(SED) -n $(SED_ERE_OPT) -e \"\\\\@^$(PREFIX_REGEX)(/|$$)@ p\"))\n$(error configured includedir ($(INCLUDEDIR)) is outside of prefix ($(PREFIX)), can't generate pkg-config file)\nendif\nendif\n\nlibxxhash.pc: libxxhash.pc.in\n\t@echo creating pkgconfig\n\t$(SED) $(SED_ERE_OPT) -e 's|@PREFIX@|$(PREFIX)|' \\\n          -e 's|@EXECPREFIX@|$(PCEXECDIR)|' \\\n          -e 's|@LIBDIR@|$$\\{exec_prefix\\}/$(PCLIBDIR)|' \\\n          -e 's|@INCLUDEDIR@|$$\\{prefix\\}/$(PCINCDIR)|' \\\n          -e 's|@VERSION@|$(LIBVER)|' \\\n          $< > $@\n\n\ninstall_libxxhash.a: libxxhash.a\n\t@echo Installing libxxhash.a\n\t$(MAKE_DIR) $(DESTDIR)$(LIBDIR)\n\t$(INSTALL_DATA) libxxhash.a $(DESTDIR)$(LIBDIR)\n\ninstall_libxxhash: libxxhash\n\t@echo Installing libxxhash\n\t$(MAKE_DIR) $(DESTDIR)$(LIBDIR)\n\t$(INSTALL_PROGRAM) $(LIBXXH) $(DESTDIR)$(LIBDIR)\n\tln -sf $(LIBXXH) $(DESTDIR)$(LIBDIR)/libxxhash.$(SHARED_EXT_MAJOR)\n\tln -sf libxxhash.$(SHARED_EXT_MAJOR) $(DESTDIR)$(LIBDIR)/libxxhash.$(SHARED_EXT)\n\ninstall_libxxhash.includes:\n\t$(INSTALL) -d -m 755 $(DESTDIR)$(INCLUDEDIR)   # includes\n\t$(INSTALL_DATA) xxhash.h $(DESTDIR)$(INCLUDEDIR)\n\t$(INSTALL_DATA) xxh3.h $(DESTDIR)$(INCLUDEDIR) # for compatibility, will be removed in v0.9.0\nifeq 
($(LIBXXH_DISPATCH),1)\n\t$(INSTALL_DATA) xxh_x86dispatch.h $(DESTDIR)$(INCLUDEDIR)\nendif\n\ninstall_libxxhash.pc: libxxhash.pc\n\t@echo Installing pkgconfig\n\t$(MAKE_DIR) $(DESTDIR)$(PKGCONFIGDIR)/\n\t$(INSTALL_DATA) libxxhash.pc $(DESTDIR)$(PKGCONFIGDIR)/\n\ninstall_xxhsum: xxhsum\n\t@echo Installing xxhsum\n\t$(MAKE_DIR) $(DESTDIR)$(BINDIR)/\n\t$(INSTALL_PROGRAM) xxhsum$(EXT) $(DESTDIR)$(BINDIR)/xxhsum$(EXT)\n\tln -sf xxhsum$(EXT) $(DESTDIR)$(BINDIR)/xxh32sum$(EXT)\n\tln -sf xxhsum$(EXT) $(DESTDIR)$(BINDIR)/xxh64sum$(EXT)\n\tln -sf xxhsum$(EXT) $(DESTDIR)$(BINDIR)/xxh128sum$(EXT)\n\tln -sf xxhsum$(EXT) $(DESTDIR)$(BINDIR)/xxh3sum$(EXT)\n\ninstall_man:\n\t@echo Installing man pages\n\t$(MAKE_DIR) $(DESTDIR)$(MANDIR)/\n\t$(INSTALL_DATA) $(MAN) $(DESTDIR)$(MANDIR)/xxhsum.1\n\tln -sf xxhsum.1 $(DESTDIR)$(MANDIR)/xxh32sum.1\n\tln -sf xxhsum.1 $(DESTDIR)$(MANDIR)/xxh64sum.1\n\tln -sf xxhsum.1 $(DESTDIR)$(MANDIR)/xxh128sum.1\n\tln -sf xxhsum.1 $(DESTDIR)$(MANDIR)/xxh3sum.1\n\n.PHONY: install\n## install libraries, CLI, links and man pages\ninstall: install_libxxhash.a install_libxxhash install_libxxhash.includes install_libxxhash.pc install_xxhsum install_man\n\t@echo xxhash installation completed\n\n.PHONY: uninstall\nuninstall:  ## uninstall libraries, CLI, links and man page\n\t$(RM) $(DESTDIR)$(LIBDIR)/libxxhash.a\n\t$(RM) $(DESTDIR)$(LIBDIR)/libxxhash.$(SHARED_EXT)\n\t$(RM) $(DESTDIR)$(LIBDIR)/libxxhash.$(SHARED_EXT_MAJOR)\n\t$(RM) $(DESTDIR)$(LIBDIR)/$(LIBXXH)\n\t$(RM) $(DESTDIR)$(INCLUDEDIR)/xxhash.h\n\t$(RM) $(DESTDIR)$(INCLUDEDIR)/xxh3.h\n\t$(RM) $(DESTDIR)$(INCLUDEDIR)/xxh_x86dispatch.h\n\t$(RM) $(DESTDIR)$(PKGCONFIGDIR)/libxxhash.pc\n\t$(RM) $(DESTDIR)$(BINDIR)/xxh32sum\n\t$(RM) $(DESTDIR)$(BINDIR)/xxh64sum\n\t$(RM) $(DESTDIR)$(BINDIR)/xxh128sum\n\t$(RM) $(DESTDIR)$(BINDIR)/xxh3sum\n\t$(RM) $(DESTDIR)$(BINDIR)/xxhsum\n\t$(RM) $(DESTDIR)$(MANDIR)/xxh32sum.1\n\t$(RM) $(DESTDIR)$(MANDIR)/xxh64sum.1\n\t$(RM) $(DESTDIR)$(MANDIR)/xxh128sum.1\n\t$(RM) 
$(DESTDIR)$(MANDIR)/xxh3sum.1\n\t$(RM) $(DESTDIR)$(MANDIR)/xxhsum.1\n\t@echo xxhsum successfully uninstalled\n\nendif\n"
  },
  {
    "path": "deps/xxhash/build/make/README.md",
"content": "# multiconf.make\n\n**multiconf.make** is a self-contained Makefile include that lets you build the **same targets under many different flag sets**, for example debug vs release, ASan vs UBSan, or GCC vs Clang.\nEach distinct set of flags generates object files into its own **dedicated cache directory**, so objects compiled with one configuration are never reused by another.\nObject files from previous configurations are preserved, so switching back to a previous configuration only requires recompiling objects that have actually changed.\n\n---\n\n## Benefits at a glance\n\n| Benefit | What `multiconf.make` does |\n| --- | --- |\n| **Isolated configs** | Stores objects into `cachedObjs/<hash>/`, one directory per unique flag set. |\n| **Fast switching** | Reusing an old config is instant: link only, no recompilation. |\n| **Header deps** | Edits to headers trigger only the needed rebuilds. |\n| **One-liner targets** | Macros (`c_program`, `cxx_program`, …) hide all rule boilerplate. |\n| **Parallel-ready** | Safe with `make -j`, no duplicate compiles of shared sources. |\n| **Controlled verbosity** | By default only object names are listed; `V=1` displays full commands. |\n| **`clean` included** | `make clean` deletes all objects, binaries and links. 
|\n\n---\n\n## Quick Start\n\n### 1 · List your sources\n\n```make\nC_SRCDIRS   := src src/cdeps    # all .c are in these directories\nCXX_SRCDIRS := src src/cxxdeps  # all .cpp are in these directories\n```\n\n### 2 · Add and include\n\n```make\n# root/Makefile\ninclude multiconf.make\n```\n\n### 3 · Declare targets\n\n```make\napp:\n$(eval $(call c_program,app,app.o cdeps/obj.o))\n\ntest:\n$(eval $(call cxx_program,test, test.o cxxdeps/objcxx.o))\n\nlib.a:\n$(eval $(call static_library,lib.a, lib.o cdeps/obj.o))\n\nlib.so:\n$(eval $(call c_dynamic_library,lib.so, lib.o cdeps/obj.o))\n```\n\n### 4 · Build any config you like\n\n```sh\n# Release with GCC\nmake CFLAGS=\"-O3\"\n\n# Debug with Clang + AddressSanitizer (new cache dir)\nmake CC=clang CFLAGS=\"-g -O0 -fsanitize=address\"\n\n# Switch back to GCC release (objects still valid, relink only)\nmake CFLAGS=\"-O3\"\n```\n\nObjects for each command live in different sub-folders; nothing overlaps.\n\n---\n\n"
  },
  {
    "path": "deps/xxhash/build/make/multiconf.make",
"content": "# ##########################################################################\n# multiconf.make\n# Copyright (C) Yann Collet\n#\n# GPL v2 License\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# ##########################################################################\n\n\n# Provides c_program(_shared_o) and cxx_program(_shared_o) target generation macros\n# Provides static_library and c_dynamic_library target generation macros\n# Supports recompilation of only the impacted units when an associated *.h is updated.\n# Provides V=1 / VERBOSE=1 support. 
V=2 is used for debugging purposes.\n# Complements target clean: deletes objects and binaries created by this script\n\n# Requires:\n# - C_SRCDIRS, CXX_SRCDIRS, ASM_SRCDIRS defined\n#   OR\n#   C_SRCS, CXX_SRCS and ASM_SRCS variables defined\n#   *and* vpath set to find all source files\n#   OR\n#   C_OBJS, CXX_OBJS and ASM_OBJS variables defined\n#   *and* vpath set to find all source files\n# - directory `cachedObjs/` available to cache object files.\n#   alternatively: set CACHE_ROOT to some different value.\n# Optional:\n# - HASH can be set to a different custom hash program.\n\n# *_program*: generates a recipe for a target that will be built in a cache directory.\n# The cache directory is automatically derived from CACHE_ROOT and the list of flags and compilers.\n# *_shared_o* variants are optional optimization variants that share the same objects across multiple targets.\n# However, as a consequence, all these objects must have exactly the same list of flags,\n# which in practice means that there must be no target-level modification (like: target: CFLAGS += someFlag).\n# If unsure, only use the standard variants, c_program and cxx_program.\n\n# All *_program* macro functions take up to 4 arguments:\n# - The name of the target\n# - The list of object files to build in the cache directory\n# - An optional list of dependencies for linking, that will not be built\n# - An optional complementary recipe, that will run after compilation and link\n\n\n# Silent mode is default; use V = 1 or VERBOSE = 1 to see compilation lines\nVERBOSE ?= $(V)\n$(VERBOSE).SILENT:\n\n# Directory where object files will be built\nCACHE_ROOT ?= cachedObjs\n\n# --------------------------------------------------------------------------------------------\n\n# Dependency management\nDEPFLAGS = -MT $@ -MMD -MP -MF\n\n# Include dependency files\ninclude $(wildcard $(CACHE_ROOT)/**/*.d)\ninclude $(wildcard $(CACHE_ROOT)/generic/*/*.d)\n\n# 
--------------------------------------------------------------------------------------------\n\n# Automatic determination of build artifacts cache directory, keyed on build\n# flags, so that we can do incremental, parallel builds of different binaries\n# with different build flags without collisions.\n\nUNAME ?= $(shell uname)\nifeq ($(UNAME), Darwin)\n  HASH ?= md5\nelse ifeq ($(UNAME), FreeBSD)\n  HASH ?= gmd5sum\nelse ifeq ($(UNAME), OpenBSD)\n  HASH ?= md5\nendif\nHASH ?= md5sum\n\nHAVE_HASH := $(shell echo 1 | $(HASH) > /dev/null && echo 1 || echo 0)\nifeq ($(HAVE_HASH),0)\n  $(info warning : could not find HASH ($(HASH)), required to differentiate builds using different flags)\n  HASH_FUNC = generic/$(1)\nelse\n  HASH_FUNC = $(firstword $(shell echo $(2) | $(HASH) ))\nendif\n\n\nMKDIR ?= mkdir\nLN ?= ln\n\n# --------------------------------------------------------------------------------------------\n# The following macros are used to create object files in the cache directory.\n# The object files are named after the source file, but with a different path.\n\n# Create build directories on-demand.\n#\n# For some reason, make treats the directory as an intermediate file and tries\n# to delete it. So we work around that by marking it \"precious\". Solution found\n# here:\n# http://ismail.badawi.io/blog/2017/03/28/automatic-directory-creation-in-make/\n.PRECIOUS: $(CACHE_ROOT)/%/.\n$(CACHE_ROOT)/%/. 
:\n\t$(MKDIR) -p $@\n\n\ndefine addTargetAsmObject  # targetName, addlDeps\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2))))\n\n.PRECIOUS: $$(CACHE_ROOT)/%/$(1)\n$$(CACHE_ROOT)/%/$(1) : $(1:.o=.S) $(2) | $$(CACHE_ROOT)/%/.\n\t@echo AS $$@\n\t$$(CC) $$(CPPFLAGS) $$(CXXFLAGS) $$(DEPFLAGS) $$(CACHE_ROOT)/$$*/$(1:.o=.d) -c $$< -o $$@\n\nendef # addTargetAsmObject\n\ndefine addTargetCObject  # targetName, addlDeps\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2)))) #debug print\n\n.PRECIOUS: $$(CACHE_ROOT)/%/$(1)\n$$(CACHE_ROOT)/%/$(1) : $(1:.o=.c) $(2) | $$(CACHE_ROOT)/%/.\n\t@echo CC $$@\n\t$$(CC) $$(CPPFLAGS) $$(CFLAGS) $$(DEPFLAGS) $$(CACHE_ROOT)/$$*/$(1:.o=.d) -c $$< -o $$@\n\nendef # addTargetCObject\n\ndefine addTargetCxxObject  # targetName, suffix, addlDeps\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2),$(3))))\n\n.PRECIOUS: $$(CACHE_ROOT)/%/$(1)\n$$(CACHE_ROOT)/%/$(1) : $(1:.o=.$(2)) $(3) | $$(CACHE_ROOT)/%/.\n\t@echo CXX $$@\n\t$$(CXX) $$(CPPFLAGS) $$(CXXFLAGS) $$(DEPFLAGS) $$(CACHE_ROOT)/$$*/$(1:.o=.d) -c $$< -o $$@\n\nendef # addTargetCxxObject\n\n# Create targets for individual object files\nC_SRCDIRS += .\nvpath %.c $(C_SRCDIRS)\nCXX_SRCDIRS += .\nvpath %.cpp $(CXX_SRCDIRS)\nvpath %.cc $(CXX_SRCDIRS)\nASM_SRCDIRS += .\nvpath %.S $(ASM_SRCDIRS)\n\n# If C_SRCDIRS, CXX_SRCDIRS and ASM_SRCDIRS are not defined, use C_SRCS, CXX_SRCS and ASM_SRCS\nC_SRCS   ?= $(notdir $(foreach dir,$(C_SRCDIRS),$(wildcard $(dir)/*.c)))\nCPP_SRCS ?= $(notdir $(foreach dir,$(CXX_SRCDIRS),$(wildcard $(dir)/*.cpp)))\nCC_SRCS  ?= $(notdir $(foreach dir,$(CXX_SRCDIRS),$(wildcard $(dir)/*.cc)))\nCXX_SRCS ?= $(CPP_SRCS) $(CC_SRCS)\nASM_SRCS ?= $(notdir $(foreach dir,$(ASM_SRCDIRS),$(wildcard $(dir)/*.S)))\n\n# If C_SRCS, CXX_SRCS and ASM_SRCS are not defined, use C_OBJS, CXX_OBJS and ASM_OBJS\nC_OBJS   ?= $(patsubst %.c,%.o,$(C_SRCS))\nCPP_OBJS ?= $(patsubst %.cpp,%.o,$(CPP_SRCS))\nCC_OBJS  ?= $(patsubst %.cc,%.o,$(CC_SRCS))\nCXX_OBJS ?= $(CPP_OBJS) 
$(CC_OBJS) # Note: not used\nASM_OBJS ?= $(patsubst %.S,%.o,$(ASM_SRCS))\n\n$(foreach OBJ,$(C_OBJS),$(eval $(call addTargetCObject,$(OBJ))))\n$(foreach OBJ,$(CPP_OBJS),$(eval $(call addTargetCxxObject,$(OBJ),cpp)))\n$(foreach OBJ,$(CC_OBJS),$(eval $(call addTargetCxxObject,$(OBJ),cc)))\n$(foreach OBJ,$(ASM_OBJS),$(eval $(call addTargetAsmObject,$(OBJ))))\n\n# --------------------------------------------------------------------------------------------\n# The following macros are used to create targets in the user Makefile.\n# Binaries are built in the cache directory, and then symlinked to the current directory.\n# The cache directory is automatically derived from CACHE_ROOT and list of flags and compilers.\n\n\n# static_library - Create build rules for a static library with caching\n# Parameters:\n#   1. libName       - Library name (becomes output file and phony target)\n#   2. objectDeps    - Object file dependencies (will be built in cache path)\n# The following parameters are all optional:\n#   3. extraDeps     - Additional dependencies (no cache path prefix)\n#   4. postBuildCmds - Extra commands to run after AR\n#   5. extraHash     - Additional key to compute the unique cache path\n# Example:\n#   $(call static_library,libmath.a,vector.o matrix.o,$(CONFIG_H),strip $@,$(VERSION))\ndefine static_library  # libName, objectDeps, extraDeps, postBuildCmds, extraHash\n\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2),$(3),$(4),$(5))))\nMCM_ALL_BINS += $(1)\n\n$$(CACHE_ROOT)/%/$(1) : $$(addprefix $$(CACHE_ROOT)/%/,$(2)) $(3)\n\t@echo AR $$@\n\t$$(AR) $$(ARFLAGS) $$@ $$^\n\t$(4)\n\n.PHONY: $(1)\n$(1) : ARFLAGS = rcs\n$(1) : $$(CACHE_ROOT)/$$(call HASH_FUNC,$(1),$(2) $$(CPPFLAGS) $$(CC) $$(CFLAGS) $$(CXX) $$(CXXFLAGS) $$(AR) $$(ARFLAGS) $(5))/$(1)\n\t$$(LN) -sf $$< $$@\n\nendef # static_library\n\n\n# c_dynamic_library - Create build rules for a C dynamic/shared library with caching\n# Parameters:\n#   1. 
libName      - Library name (becomes output file and phony target)\n#   2. objectDeps   - Object file dependencies (will be built in cache path)\n# The following parameters are all optional:\n#   3. extraDeps    - Additional dependencies (no cache path prefix)\n#   4. postLinkCmds - Extra commands to run after linking\n#   5. extraHash    - Additional key to compute the unique cache path\n# Example:\n#   $(call c_dynamic_library,libmath.so,vector.o matrix.o,$(CONFIG_H),strip $@,$(VERSION))\ndefine c_dynamic_library  # libName, objectDeps, extraDeps, postLinkCmds, extraHash\n\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2),$(3),$(4),$(5))))\nMCM_ALL_BINS += $(1)\n\n$$(CACHE_ROOT)/%/$(1) : $$(addprefix $$(CACHE_ROOT)/%/,$(2)) $(3)\n\t@echo LD $$@\n\t$$(CC) $$(CPPFLAGS) $$(CFLAGS) $$(LDFLAGS) -shared -o $$@ $$^ $$(LDLIBS)\n\t$(4)\n\n.PHONY: $(1)\n$(1) : CFLAGS += -fPIC\n$(1) : $$(CACHE_ROOT)/$$(call HASH_FUNC,$(1),$(2) $$(CPPFLAGS) $$(CC) $$(CFLAGS) $$(LDFLAGS) $$(LDLIBS) $(5))/$(1)\n\t$$(LN) -sf $$< $$@\n\nendef # c_dynamic_library\n\n\n# program_base - Create build rules for an executable program with caching\n# Parameters:\n#   1. progName      - Executable name (becomes output file and phony target)\n#   2. objectDeps    - Object file dependencies (will be prefixed with cache path)\n# Parameters 3 to 5 are optional:\n#   3. extraDeps     - Additional dependencies (without cache path prefix)\n#   4. postLinkCmds  - Extra commands to run after linking\n#   5. extraHash     - Additional data to include in cache path hash\n# Parameters 6 & 7 are compulsory:\n#   6. compiler      - Variable name of compiler to use (CC or CXX)\n#   7. 
compilerFlags - Variable name of compiler flags to use (CFLAGS or CXXFLAGS)\n# Example:\n#   $(call program_base,myapp,main.o utils.o,$(CONFIG_H),strip $@,$(VERSION),CC,CFLAGS)\n#   $(call program_base,mycppapp,main.o utils.o,$(CONFIG_H),strip $@,$(VERSION),CXX,CXXFLAGS)\ndefine program_base  # progName, objectDeps, extraDeps, postLinkCmds, extraHash, compiler, compilerFlags\n\n$$(if $$(filter 2,$$(V)),$$(info $$(call $(0),$(1),$(2),$(3),$(4),$(5),$(6),$(7))))\nMCM_ALL_BINS += $(1)\n\n$$(CACHE_ROOT)/%/$(1) : $$(addprefix $$(CACHE_ROOT)/%/,$(2)) $(3)\n\t@echo LD $$@\n\t$$($(6)) $$(CPPFLAGS) $$($(7)) $$^ -o $$@ $$(LDFLAGS) $$(LDLIBS)\n\t$(4)\n\n.PHONY: $(1)\n$(1) : $$(CACHE_ROOT)/$$(call HASH_FUNC,$(1),$$($(6)) $$(CPPFLAGS) $$($(7)) $$(LDFLAGS) $$(LDLIBS) $(5))/$(1)\n\t$$(LN) -sf $$< $$@$(EXT)\n\nendef # program_base\n# Note: $(EXT) must be set to .exe for Windows\n\ndefine c_program  # progName, objectDeps, extraDeps, postLinkCmds\n$$(eval $$(call program_base,$(1),$(2),$(3),$(4),$(1)$(2),CC,CFLAGS))\nendef # c_program\n\ndefine c_program_shared_o  # progName, objectDeps, extraDeps, postLinkCmds\n$$(eval $$(call program_base,$(1),$(2),$(3),$(4),,CC,CFLAGS))\nendef # c_program_shared_o\n\ndefine cxx_program  # progName, objectDeps, extraDeps, postLinkCmds\n$$(eval $$(call program_base,$(1),$(2),$(3),$(4),$(1)$(2),CXX,CXXFLAGS))\nendef # cxx_program\n\ndefine cxx_program_shared_o  # progName, objectDeps, extraDeps, postLinkCmds\n$$(eval $$(call program_base,$(1),$(2),$(3),$(4),,CXX,CXXFLAGS))\nendef # cxx_program_shared_o\n\n# --------------------------------------------------------------------------------------------\n\n# Cleaning: delete all objects and binaries created by this script\n.PHONY: clean_cache\nclean_cache:\n\t$(RM) -rf $(CACHE_ROOT)\n\t$(RM) $(MCM_ALL_BINS)\n\n# automatically attach to standard clean target\n.PHONY: clean\nclean: clean_cache\n"
  },
  {
    "path": "deps/xxhash/xxhash.c",
    "content": "/*\n * xxHash - Extremely Fast Hash algorithm\n * Copyright (C) 2012-2023 Yann Collet\n *\n * BSD 2-Clause License (https://www.opensource.org/licenses/bsd-license.php)\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are\n * met:\n *\n *    * Redistributions of source code must retain the above copyright\n *      notice, this list of conditions and the following disclaimer.\n *    * Redistributions in binary form must reproduce the above\n *      copyright notice, this list of conditions and the following disclaimer\n *      in the documentation and/or other materials provided with the\n *      distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * You can contact the author at:\n *   - xxHash homepage: https://www.xxhash.com\n *   - xxHash source repository: https://github.com/Cyan4973/xxHash\n */\n\n/*\n * xxhash.c instantiates functions defined in xxhash.h\n */\n\n#define XXH_STATIC_LINKING_ONLY /* access advanced declarations */\n#define XXH_IMPLEMENTATION      /* access definitions */\n\n#include \"xxhash.h\"\n"
  },
  {
    "path": "deps/xxhash/xxhash.h",
    "content": "/*\n * xxHash - Extremely Fast Hash algorithm\n * Header File\n * Copyright (C) 2012-2023 Yann Collet\n *\n * BSD 2-Clause License (https://www.opensource.org/licenses/bsd-license.php)\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are\n * met:\n *\n *    * Redistributions of source code must retain the above copyright\n *      notice, this list of conditions and the following disclaimer.\n *    * Redistributions in binary form must reproduce the above\n *      copyright notice, this list of conditions and the following disclaimer\n *      in the documentation and/or other materials provided with the\n *      distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n * \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * You can contact the author at:\n *   - xxHash homepage: https://www.xxhash.com\n *   - xxHash source repository: https://github.com/Cyan4973/xxHash\n */\n\n/*!\n * @mainpage xxHash\n *\n * xxHash is an extremely fast non-cryptographic hash algorithm, working at RAM speed\n * limits.\n *\n * It is proposed in four flavors, in three families:\n * 1. @ref XXH32_family\n *   - Classic 32-bit hash function. 
Simple, compact, and runs on almost all\n *     32-bit and 64-bit systems.\n * 2. @ref XXH64_family\n *   - Classic 64-bit adaptation of XXH32. Just as simple, and runs well on most\n *     64-bit systems (but _not_ 32-bit systems).\n * 3. @ref XXH3_family\n *   - Modern 64-bit and 128-bit hash function family which features improved\n *     strength and performance across the board, especially on smaller data.\n *     It benefits greatly from SIMD and 64-bit without requiring it.\n *\n * Benchmarks\n * ---\n * The reference system uses an Intel i7-9700K CPU, and runs Ubuntu x64 20.04.\n * The open source benchmark program is compiled with clang v10.0 using -O3 flag.\n *\n * | Hash Name            | ISA ext | Width | Large Data Speed | Small Data Velocity |\n * | -------------------- | ------- | ----: | ---------------: | ------------------: |\n * | XXH3_64bits()        | @b AVX2 |    64 |        59.4 GB/s |               133.1 |\n * | MeowHash             | AES-NI  |   128 |        58.2 GB/s |                52.5 |\n * | XXH3_128bits()       | @b AVX2 |   128 |        57.9 GB/s |               118.1 |\n * | CLHash               | PCLMUL  |    64 |        37.1 GB/s |                58.1 |\n * | XXH3_64bits()        | @b SSE2 |    64 |        31.5 GB/s |               133.1 |\n * | XXH3_128bits()       | @b SSE2 |   128 |        29.6 GB/s |               118.1 |\n * | RAM sequential read  |         |   N/A |        28.0 GB/s |                 N/A |\n * | ahash                | AES-NI  |    64 |        22.5 GB/s |               107.2 |\n * | City64               |         |    64 |        22.0 GB/s |                76.6 |\n * | T1ha2                |         |    64 |        22.0 GB/s |                99.0 |\n * | City128              |         |   128 |        21.7 GB/s |                57.7 |\n * | FarmHash             | AES-NI  |    64 |        21.3 GB/s |                71.9 |\n * | XXH64()              |         |    64 |        19.4 GB/s |                71.0 
|\n * | SpookyHash           |         |    64 |        19.3 GB/s |                53.2 |\n * | Mum                  |         |    64 |        18.0 GB/s |                67.0 |\n * | CRC32C               | SSE4.2  |    32 |        13.0 GB/s |                57.9 |\n * | XXH32()              |         |    32 |         9.7 GB/s |                71.9 |\n * | City32               |         |    32 |         9.1 GB/s |                66.0 |\n * | Blake3*              | @b AVX2 |   256 |         4.4 GB/s |                 8.1 |\n * | Murmur3              |         |    32 |         3.9 GB/s |                56.1 |\n * | SipHash*             |         |    64 |         3.0 GB/s |                43.2 |\n * | Blake3*              | @b SSE2 |   256 |         2.4 GB/s |                 8.1 |\n * | HighwayHash          |         |    64 |         1.4 GB/s |                 6.0 |\n * | FNV64                |         |    64 |         1.2 GB/s |                62.7 |\n * | Blake2*              |         |   256 |         1.1 GB/s |                 5.1 |\n * | SHA1*                |         |   160 |         0.8 GB/s |                 5.6 |\n * | MD5*                 |         |   128 |         0.6 GB/s |                 7.8 |\n * @note\n *   - Hashes which require a specific ISA extension are noted. SSE2 is also noted,\n *     even though it is mandatory on x64.\n *   - Hashes with an asterisk are cryptographic. Note that MD5 is non-cryptographic\n *     by modern standards.\n *   - Small data velocity is a rough average of algorithm's efficiency for small\n *     data. For more accurate information, see the wiki.\n *   - More benchmarks and strength tests are found on the wiki:\n *         https://github.com/Cyan4973/xxHash/wiki\n *\n * Usage\n * ------\n * All xxHash variants use a similar API. 
Changing the algorithm is a trivial\n * substitution.\n *\n * @pre\n *    For functions which take an input and length parameter, the following\n *    requirements are assumed:\n *    - The range from [`input`, `input + length`) is valid, readable memory.\n *      - The only exception is if the `length` is `0`, `input` may be `NULL`.\n *    - For C++, the objects must have the *TriviallyCopyable* property, as the\n *      functions access bytes directly as if it was an array of `unsigned char`.\n *\n * @anchor single_shot_example\n * **Single Shot**\n *\n * These functions are stateless functions which hash a contiguous block of memory,\n * immediately returning the result. They are the easiest and usually the fastest\n * option.\n *\n * XXH32(), XXH64(), XXH3_64bits(), XXH3_128bits()\n *\n * @code{.c}\n *   #include <string.h>\n *   #include \"xxhash.h\"\n *\n *   // Example for a function which hashes a null terminated string with XXH32().\n *   XXH32_hash_t hash_string(const char* string, XXH32_hash_t seed)\n *   {\n *       // NULL pointers are only valid if the length is zero\n *       size_t length = (string == NULL) ? 0 : strlen(string);\n *       return XXH32(string, length, seed);\n *   }\n * @endcode\n *\n *\n * @anchor streaming_example\n * **Streaming**\n *\n * These groups of functions allow incremental hashing of unknown size, even\n * more than what would fit in a size_t.\n *\n * XXH32_reset(), XXH64_reset(), XXH3_64bits_reset(), XXH3_128bits_reset()\n *\n * @code{.c}\n *   #include <stdio.h>\n *   #include <assert.h>\n *   #include \"xxhash.h\"\n *   // Example for a function which hashes a FILE incrementally with XXH3_64bits().\n *   XXH64_hash_t hashFile(FILE* f)\n *   {\n *       // Allocate a state struct. 
Do not just use malloc() or new.\n *       XXH3_state_t* state = XXH3_createState();\n *       assert(state != NULL && \"Out of memory!\");\n *       // Reset the state to start a new hashing session.\n *       XXH3_64bits_reset(state);\n *       char buffer[4096];\n *       size_t count;\n *       // Read the file in chunks\n *       while ((count = fread(buffer, 1, sizeof(buffer), f)) != 0) {\n *           // Run update() as many times as necessary to process the data\n *           XXH3_64bits_update(state, buffer, count);\n *       }\n *       // Retrieve the finalized hash. This will not change the state.\n *       XXH64_hash_t result = XXH3_64bits_digest(state);\n *       // Free the state. Do not use free().\n *       XXH3_freeState(state);\n *       return result;\n *   }\n * @endcode\n *\n * Streaming functions generate the xxHash value from an incremental input.\n * This method is slower than single-call functions, due to state management.\n * For small inputs, prefer `XXH32()` and `XXH64()`, which are better optimized.\n *\n * An XXH state must first be allocated using `XXH*_createState()`.\n *\n * Start a new hash by initializing the state with a seed using `XXH*_reset()`.\n *\n * Then, feed the hash state by calling `XXH*_update()` as many times as necessary.\n *\n * The function returns an error code, with 0 meaning OK, and any other value\n * meaning there is an error.\n *\n * Finally, a hash value can be produced anytime, by using `XXH*_digest()`.\n * This function returns the nn-bits hash as an int or long long.\n *\n * It's still possible to continue inserting input into the hash state after a\n * digest, and generate new hash values later on by invoking `XXH*_digest()`.\n *\n * When done, release the state using `XXH*_freeState()`.\n *\n *\n * @anchor canonical_representation_example\n * **Canonical Representation**\n *\n * The default return values from XXH functions are unsigned 32, 64 and 128 bit\n * integers.\n * This the simplest and fastest 
format for further post-processing.\n *\n * However, this leaves open the question of what is the order on the byte level,\n * since little and big endian conventions will store the same number differently.\n *\n * The canonical representation settles this issue by mandating big-endian\n * convention, the same convention as human-readable numbers (large digits first).\n *\n * When writing hash values to storage, sending them over a network, or printing\n * them, it's highly recommended to use the canonical representation to ensure\n * portability across a wider range of systems, present and future.\n *\n * The following functions allow transformation of hash values to and from\n * canonical format.\n *\n * XXH32_canonicalFromHash(), XXH32_hashFromCanonical(),\n * XXH64_canonicalFromHash(), XXH64_hashFromCanonical(),\n * XXH128_canonicalFromHash(), XXH128_hashFromCanonical(),\n *\n * @code{.c}\n *   #include <stdio.h>\n *   #include \"xxhash.h\"\n *\n *   // Example for a function which prints XXH32_hash_t in human readable format\n *   void printXxh32(XXH32_hash_t hash)\n *   {\n *       XXH32_canonical_t cano;\n *       XXH32_canonicalFromHash(&cano, hash);\n *       size_t i;\n *       for(i = 0; i < sizeof(cano.digest); ++i) {\n *           printf(\"%02x\", cano.digest[i]);\n *       }\n *       printf(\"\\n\");\n *   }\n *\n *   // Example for a function which converts XXH32_canonical_t to XXH32_hash_t\n *   XXH32_hash_t convertCanonicalToXxh32(XXH32_canonical_t cano)\n *   {\n *       XXH32_hash_t hash = XXH32_hashFromCanonical(&cano);\n *       return hash;\n *   }\n * @endcode\n *\n *\n * @file xxhash.h\n * xxHash prototypes and implementation\n */\n\n#if defined(__cplusplus) && !defined(XXH_NO_EXTERNC_GUARD)\nextern \"C\" {\n#endif\n\n/* ****************************\n *  INLINE mode\n ******************************/\n/*!\n * @defgroup public Public API\n * Contains details on the public xxHash functions.\n * @{\n */\n#ifdef XXH_DOXYGEN\n/*!\n * @brief Gives 
access to internal state declaration, required for static allocation.\n *\n * Incompatible with dynamic linking, due to risks of ABI changes.\n *\n * Usage:\n * @code{.c}\n *     #define XXH_STATIC_LINKING_ONLY\n *     #include \"xxhash.h\"\n * @endcode\n */\n#  define XXH_STATIC_LINKING_ONLY\n/* Do not undef XXH_STATIC_LINKING_ONLY for Doxygen */\n\n/*!\n * @brief Gives access to internal definitions.\n *\n * Usage:\n * @code{.c}\n *     #define XXH_STATIC_LINKING_ONLY\n *     #define XXH_IMPLEMENTATION\n *     #include \"xxhash.h\"\n * @endcode\n */\n#  define XXH_IMPLEMENTATION\n/* Do not undef XXH_IMPLEMENTATION for Doxygen */\n\n/*!\n * @brief Exposes the implementation and marks all functions as `inline`.\n *\n * Use these build macros to inline xxhash into the target unit.\n * Inlining improves performance on small inputs, especially when the length is\n * expressed as a compile-time constant:\n *\n *  https://fastcompression.blogspot.com/2018/03/xxhash-for-small-keys-impressive-power.html\n *\n * It also keeps xxHash symbols private to the unit, so they are not exported.\n *\n * Usage:\n * @code{.c}\n *     #define XXH_INLINE_ALL\n *     #include \"xxhash.h\"\n * @endcode\n * Do not compile and link xxhash.o as a separate object, as it is not useful.\n */\n#  define XXH_INLINE_ALL\n#  undef XXH_INLINE_ALL\n/*!\n * @brief Exposes the implementation without marking functions as inline.\n */\n#  define XXH_PRIVATE_API\n#  undef XXH_PRIVATE_API\n/*!\n * @brief Emulate a namespace by transparently prefixing all symbols.\n *\n * If you want to include _and expose_ xxHash functions from within your own\n * library, but also want to avoid symbol collisions with other libraries which\n * may also include xxHash, you can use @ref XXH_NAMESPACE to automatically prefix\n * any public symbol from xxhash library with the value of @ref XXH_NAMESPACE\n * (therefore, avoid empty or numeric values).\n *\n * Note that no change is required within the calling program as long 
as it\n * includes `xxhash.h`: Regular symbol names will be automatically translated\n * by this header.\n */\n#  define XXH_NAMESPACE /* YOUR NAME HERE */\n#  undef XXH_NAMESPACE\n#endif\n\n#if (defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API)) \\\n    && !defined(XXH_INLINE_ALL_31684351384)\n   /* this section should be traversed only once */\n#  define XXH_INLINE_ALL_31684351384\n   /* give access to the advanced API, required to compile implementations */\n#  undef XXH_STATIC_LINKING_ONLY   /* avoid macro redef */\n#  define XXH_STATIC_LINKING_ONLY\n   /* make all functions private */\n#  undef XXH_PUBLIC_API\n#  if defined(__GNUC__)\n#    define XXH_PUBLIC_API static __inline __attribute__((__unused__))\n#  elif defined (__cplusplus) || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */)\n#    define XXH_PUBLIC_API static inline\n#  elif defined(_MSC_VER)\n#    define XXH_PUBLIC_API static __inline\n#  else\n     /* note: this version may generate warnings for unused static functions */\n#    define XXH_PUBLIC_API static\n#  endif\n\n   /*\n    * This part deals with the special case where a unit wants to inline xxHash,\n    * but \"xxhash.h\" has previously been included without XXH_INLINE_ALL,\n    * such as part of some previously included *.h header file.\n    * Without further action, the new include would just be ignored,\n    * and functions would effectively _not_ be inlined (silent failure).\n    * The following macros solve this situation by prefixing all inlined names,\n    * avoiding naming collision with previous inclusions.\n    */\n   /* Before that, we unconditionally #undef all symbols,\n    * in case they were already defined with XXH_NAMESPACE.\n    * They will then be redefined for XXH_INLINE_ALL\n    */\n#  undef XXH_versionNumber\n    /* XXH32 */\n#  undef XXH32\n#  undef XXH32_createState\n#  undef XXH32_freeState\n#  undef XXH32_reset\n#  undef XXH32_update\n#  undef XXH32_digest\n#  undef XXH32_copyState\n#  
undef XXH32_canonicalFromHash\n#  undef XXH32_hashFromCanonical\n    /* XXH64 */\n#  undef XXH64\n#  undef XXH64_createState\n#  undef XXH64_freeState\n#  undef XXH64_reset\n#  undef XXH64_update\n#  undef XXH64_digest\n#  undef XXH64_copyState\n#  undef XXH64_canonicalFromHash\n#  undef XXH64_hashFromCanonical\n    /* XXH3_64bits */\n#  undef XXH3_64bits\n#  undef XXH3_64bits_withSecret\n#  undef XXH3_64bits_withSeed\n#  undef XXH3_64bits_withSecretandSeed\n#  undef XXH3_createState\n#  undef XXH3_freeState\n#  undef XXH3_copyState\n#  undef XXH3_64bits_reset\n#  undef XXH3_64bits_reset_withSeed\n#  undef XXH3_64bits_reset_withSecret\n#  undef XXH3_64bits_update\n#  undef XXH3_64bits_digest\n#  undef XXH3_generateSecret\n    /* XXH3_128bits */\n#  undef XXH128\n#  undef XXH3_128bits\n#  undef XXH3_128bits_withSeed\n#  undef XXH3_128bits_withSecret\n#  undef XXH3_128bits_reset\n#  undef XXH3_128bits_reset_withSeed\n#  undef XXH3_128bits_reset_withSecret\n#  undef XXH3_128bits_reset_withSecretandSeed\n#  undef XXH3_128bits_update\n#  undef XXH3_128bits_digest\n#  undef XXH128_isEqual\n#  undef XXH128_cmp\n#  undef XXH128_canonicalFromHash\n#  undef XXH128_hashFromCanonical\n    /* Finally, free the namespace itself */\n#  undef XXH_NAMESPACE\n\n    /* employ the namespace for XXH_INLINE_ALL */\n#  define XXH_NAMESPACE XXH_INLINE_\n   /*\n    * Some identifiers (enums, type names) are not symbols,\n    * but they must nonetheless be renamed to avoid redeclaration.\n    * Alternative solution: do not redeclare them.\n    * However, this requires some #ifdefs, and has a more dispersed impact.\n    * Meanwhile, renaming can be achieved in a single place.\n    */\n#  define XXH_IPREF(Id)   XXH_NAMESPACE ## Id\n#  define XXH_OK XXH_IPREF(XXH_OK)\n#  define XXH_ERROR XXH_IPREF(XXH_ERROR)\n#  define XXH_errorcode XXH_IPREF(XXH_errorcode)\n#  define XXH32_canonical_t  XXH_IPREF(XXH32_canonical_t)\n#  define XXH64_canonical_t  XXH_IPREF(XXH64_canonical_t)\n#  define 
XXH128_canonical_t XXH_IPREF(XXH128_canonical_t)\n#  define XXH32_state_s XXH_IPREF(XXH32_state_s)\n#  define XXH32_state_t XXH_IPREF(XXH32_state_t)\n#  define XXH64_state_s XXH_IPREF(XXH64_state_s)\n#  define XXH64_state_t XXH_IPREF(XXH64_state_t)\n#  define XXH3_state_s  XXH_IPREF(XXH3_state_s)\n#  define XXH3_state_t  XXH_IPREF(XXH3_state_t)\n#  define XXH128_hash_t XXH_IPREF(XXH128_hash_t)\n   /* Ensure the header is parsed again, even if it was previously included */\n#  undef XXHASH_H_5627135585666179\n#  undef XXHASH_H_STATIC_13879238742\n#endif /* XXH_INLINE_ALL || XXH_PRIVATE_API */\n\n/* ****************************************************************\n *  Stable API\n *****************************************************************/\n#ifndef XXHASH_H_5627135585666179\n#define XXHASH_H_5627135585666179 1\n\n/*! @brief Marks a global symbol. */\n#if !defined(XXH_INLINE_ALL) && !defined(XXH_PRIVATE_API)\n#  if defined(_WIN32) && defined(_MSC_VER) && (defined(XXH_IMPORT) || defined(XXH_EXPORT))\n#    ifdef XXH_EXPORT\n#      define XXH_PUBLIC_API __declspec(dllexport)\n#    elif XXH_IMPORT\n#      define XXH_PUBLIC_API __declspec(dllimport)\n#    endif\n#  else\n#    define XXH_PUBLIC_API   /* do nothing */\n#  endif\n#endif\n\n#ifdef XXH_NAMESPACE\n#  define XXH_CAT(A,B) A##B\n#  define XXH_NAME2(A,B) XXH_CAT(A,B)\n#  define XXH_versionNumber XXH_NAME2(XXH_NAMESPACE, XXH_versionNumber)\n/* XXH32 */\n#  define XXH32 XXH_NAME2(XXH_NAMESPACE, XXH32)\n#  define XXH32_createState XXH_NAME2(XXH_NAMESPACE, XXH32_createState)\n#  define XXH32_freeState XXH_NAME2(XXH_NAMESPACE, XXH32_freeState)\n#  define XXH32_reset XXH_NAME2(XXH_NAMESPACE, XXH32_reset)\n#  define XXH32_update XXH_NAME2(XXH_NAMESPACE, XXH32_update)\n#  define XXH32_digest XXH_NAME2(XXH_NAMESPACE, XXH32_digest)\n#  define XXH32_copyState XXH_NAME2(XXH_NAMESPACE, XXH32_copyState)\n#  define XXH32_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH32_canonicalFromHash)\n#  define XXH32_hashFromCanonical 
XXH_NAME2(XXH_NAMESPACE, XXH32_hashFromCanonical)\n/* XXH64 */\n#  define XXH64 XXH_NAME2(XXH_NAMESPACE, XXH64)\n#  define XXH64_createState XXH_NAME2(XXH_NAMESPACE, XXH64_createState)\n#  define XXH64_freeState XXH_NAME2(XXH_NAMESPACE, XXH64_freeState)\n#  define XXH64_reset XXH_NAME2(XXH_NAMESPACE, XXH64_reset)\n#  define XXH64_update XXH_NAME2(XXH_NAMESPACE, XXH64_update)\n#  define XXH64_digest XXH_NAME2(XXH_NAMESPACE, XXH64_digest)\n#  define XXH64_copyState XXH_NAME2(XXH_NAMESPACE, XXH64_copyState)\n#  define XXH64_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH64_canonicalFromHash)\n#  define XXH64_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH64_hashFromCanonical)\n/* XXH3_64bits */\n#  define XXH3_64bits XXH_NAME2(XXH_NAMESPACE, XXH3_64bits)\n#  define XXH3_64bits_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSecret)\n#  define XXH3_64bits_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSeed)\n#  define XXH3_64bits_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_withSecretandSeed)\n#  define XXH3_createState XXH_NAME2(XXH_NAMESPACE, XXH3_createState)\n#  define XXH3_freeState XXH_NAME2(XXH_NAMESPACE, XXH3_freeState)\n#  define XXH3_copyState XXH_NAME2(XXH_NAMESPACE, XXH3_copyState)\n#  define XXH3_64bits_reset XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset)\n#  define XXH3_64bits_reset_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSeed)\n#  define XXH3_64bits_reset_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSecret)\n#  define XXH3_64bits_reset_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_reset_withSecretandSeed)\n#  define XXH3_64bits_update XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_update)\n#  define XXH3_64bits_digest XXH_NAME2(XXH_NAMESPACE, XXH3_64bits_digest)\n#  define XXH3_generateSecret XXH_NAME2(XXH_NAMESPACE, XXH3_generateSecret)\n#  define XXH3_generateSecret_fromSeed XXH_NAME2(XXH_NAMESPACE, XXH3_generateSecret_fromSeed)\n/* XXH3_128bits */\n#  define XXH128 XXH_NAME2(XXH_NAMESPACE, XXH128)\n#  define 
XXH3_128bits XXH_NAME2(XXH_NAMESPACE, XXH3_128bits)\n#  define XXH3_128bits_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSeed)\n#  define XXH3_128bits_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSecret)\n#  define XXH3_128bits_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_withSecretandSeed)\n#  define XXH3_128bits_reset XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset)\n#  define XXH3_128bits_reset_withSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSeed)\n#  define XXH3_128bits_reset_withSecret XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSecret)\n#  define XXH3_128bits_reset_withSecretandSeed XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_reset_withSecretandSeed)\n#  define XXH3_128bits_update XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_update)\n#  define XXH3_128bits_digest XXH_NAME2(XXH_NAMESPACE, XXH3_128bits_digest)\n#  define XXH128_isEqual XXH_NAME2(XXH_NAMESPACE, XXH128_isEqual)\n#  define XXH128_cmp     XXH_NAME2(XXH_NAMESPACE, XXH128_cmp)\n#  define XXH128_canonicalFromHash XXH_NAME2(XXH_NAMESPACE, XXH128_canonicalFromHash)\n#  define XXH128_hashFromCanonical XXH_NAME2(XXH_NAMESPACE, XXH128_hashFromCanonical)\n#endif\n\n\n/* *************************************\n*  Compiler specifics\n***************************************/\n\n/* specific declaration modes for Windows */\n#if !defined(XXH_INLINE_ALL) && !defined(XXH_PRIVATE_API)\n#  if defined(_WIN32) && defined(_MSC_VER) && (defined(XXH_IMPORT) || defined(XXH_EXPORT))\n#    ifdef XXH_EXPORT\n#      define XXH_PUBLIC_API __declspec(dllexport)\n#    elif XXH_IMPORT\n#      define XXH_PUBLIC_API __declspec(dllimport)\n#    endif\n#  else\n#    define XXH_PUBLIC_API   /* do nothing */\n#  endif\n#endif\n\n#if defined (__GNUC__)\n# define XXH_CONSTF  __attribute__((__const__))\n# define XXH_PUREF   __attribute__((__pure__))\n# define XXH_MALLOCF __attribute__((__malloc__))\n#else\n# define XXH_CONSTF  /* disable */\n# define XXH_PUREF\n# define XXH_MALLOCF\n#endif\n\n/* 
*************************************\n*  Version\n***************************************/\n#define XXH_VERSION_MAJOR    0\n#define XXH_VERSION_MINOR    8\n#define XXH_VERSION_RELEASE  3\n/*! @brief Version number, encoded as two digits each */\n#define XXH_VERSION_NUMBER  (XXH_VERSION_MAJOR *100*100 + XXH_VERSION_MINOR *100 + XXH_VERSION_RELEASE)\n\n/*!\n * @brief Obtains the xxHash version.\n *\n * This is mostly useful when xxHash is compiled as a shared library,\n * since the returned value comes from the library, as opposed to header file.\n *\n * @return @ref XXH_VERSION_NUMBER of the invoked library.\n */\nXXH_PUBLIC_API XXH_CONSTF unsigned XXH_versionNumber (void);\n\n\n/* ****************************\n*  Common basic types\n******************************/\n#include <stddef.h>   /* size_t */\n/*!\n * @brief Exit code for the streaming API.\n */\ntypedef enum {\n    XXH_OK = 0, /*!< OK */\n    XXH_ERROR   /*!< Error */\n} XXH_errorcode;\n\n\n/*-**********************************************************************\n*  32-bit hash\n************************************************************************/\n#if defined(XXH_DOXYGEN) /* Don't show <stdint.h> include */\n/*!\n * @brief An unsigned 32-bit integer.\n *\n * Not necessarily defined to `uint32_t` but functionally equivalent.\n */\ntypedef uint32_t XXH32_hash_t;\n\n#elif !defined (__VMS) \\\n  && (defined (__cplusplus) \\\n  || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )\n#   ifdef _AIX\n#     include <inttypes.h>\n#   else\n#     include <stdint.h>\n#   endif\n    typedef uint32_t XXH32_hash_t;\n\n#else\n#   include <limits.h>\n#   if UINT_MAX == 0xFFFFFFFFUL\n      typedef unsigned int XXH32_hash_t;\n#   elif ULONG_MAX == 0xFFFFFFFFUL\n      typedef unsigned long XXH32_hash_t;\n#   else\n#     error \"unsupported platform: need a 32-bit type\"\n#   endif\n#endif\n\n/*!\n * @}\n *\n * @defgroup XXH32_family XXH32 family\n * @ingroup public\n * Contains functions used in 
the classic 32-bit xxHash algorithm.\n *\n * @note\n *   XXH32 is useful for older platforms, with no or poor 64-bit performance.\n *   Note that the @ref XXH3_family provides competitive speed for both 32-bit\n *   and 64-bit systems, and offers true 64/128 bit hash results.\n *\n * @see @ref XXH64_family, @ref XXH3_family : Other xxHash families\n * @see @ref XXH32_impl for implementation details\n * @{\n */\n\n/*!\n * @brief Calculates the 32-bit hash of @p input using xxHash32.\n *\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n * @param seed The 32-bit seed to alter the hash's output predictably.\n *\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. In C++, this also must be *TriviallyCopyable*.\n *\n * @return The calculated 32-bit xxHash32 value.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32 (const void* input, size_t length, XXH32_hash_t seed);\n\n#ifndef XXH_NO_STREAM\n/*!\n * @typedef struct XXH32_state_s XXH32_state_t\n * @brief The opaque state struct for the XXH32 streaming API.\n *\n * @see XXH32_state_s for details.\n * @see @ref streaming_example \"Streaming Example\"\n */\ntypedef struct XXH32_state_s XXH32_state_t;\n\n/*!\n * @brief Allocates an @ref XXH32_state_t.\n *\n * @return An allocated pointer of @ref XXH32_state_t on success.\n * @return `NULL` on failure.\n *\n * @note Must be freed with XXH32_freeState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_MALLOCF XXH32_state_t* XXH32_createState(void);\n/*!\n * @brief Frees an @ref XXH32_state_t.\n *\n * @param statePtr A pointer to an @ref XXH32_state_t allocated with @ref XXH32_createState().\n *\n * @return @ref XXH_OK.\n *\n * @note @p statePtr must be 
allocated with XXH32_createState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n *\n */\nXXH_PUBLIC_API XXH_errorcode  XXH32_freeState(XXH32_state_t* statePtr);\n/*!\n * @brief Copies one @ref XXH32_state_t to another.\n *\n * @param dst_state The state to copy to.\n * @param src_state The state to copy from.\n * @pre\n *   @p dst_state and @p src_state must not be `NULL` and must not overlap.\n */\nXXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dst_state, const XXH32_state_t* src_state);\n\n/*!\n * @brief Resets an @ref XXH32_state_t to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n * @param seed The 32-bit seed to alter the hash result predictably.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note This function resets and seeds a state. Call it before @ref XXH32_update().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH32_reset  (XXH32_state_t* statePtr, XXH32_hash_t seed);\n\n/*!\n * @brief Consumes a block of @p input to an @ref XXH32_state_t.\n *\n * @param statePtr The state struct to update.\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note Call this to incrementally consume blocks of data.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH32_update (XXH32_state_t* statePtr, const void* input, size_t length);\n\n/*!\n * @brief Returns the calculated hash value from an @ref XXH32_state_t.\n *\n * @param statePtr The state struct to calculate the hash from.\n *\n * @pre\n *  @p statePtr must not be `NULL`.\n *\n * @return The calculated 32-bit xxHash32 value from that state.\n *\n * @note\n *   Calling XXH32_digest() will not affect @p statePtr, so you can update,\n *   digest, and update again.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32_digest (const XXH32_state_t* statePtr);\n#endif /* !XXH_NO_STREAM */\n\n/*******   Canonical representation   *******/\n\n/*!\n * @brief Canonical (big endian) representation of @ref XXH32_hash_t.\n */\ntypedef struct {\n    unsigned char digest[4]; /*!< Hash bytes, big endian */\n} XXH32_canonical_t;\n\n/*!\n * @brief Converts an @ref XXH32_hash_t to a big endian @ref XXH32_canonical_t.\n *\n * @param dst  The @ref XXH32_canonical_t pointer to be stored to.\n * @param hash The @ref XXH32_hash_t to be converted.\n *\n * @pre\n *   @p dst must not be `NULL`.\n *\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash);\n\n/*!\n * @brief Converts an @ref XXH32_canonical_t to a native @ref XXH32_hash_t.\n *\n * @param src The @ref XXH32_canonical_t to convert.\n *\n * @pre\n *   @p src must not be `NULL`.\n *\n * @return The converted hash.\n *\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH32_hash_t XXH32_hashFromCanonical(const 
XXH32_canonical_t* src);\n\n\n/*! @cond Doxygen ignores this part */\n#ifdef __has_attribute\n# define XXH_HAS_ATTRIBUTE(x) __has_attribute(x)\n#else\n# define XXH_HAS_ATTRIBUTE(x) 0\n#endif\n/*! @endcond */\n\n/*! @cond Doxygen ignores this part */\n/* C-language Attributes are added in C23. */\n#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 202311L) && defined(__has_c_attribute)\n# define XXH_HAS_C_ATTRIBUTE(x) __has_c_attribute(x)\n#else\n# define XXH_HAS_C_ATTRIBUTE(x) 0\n#endif\n/*! @endcond */\n\n/*! @cond Doxygen ignores this part */\n#if defined(__cplusplus) && defined(__has_cpp_attribute)\n# define XXH_HAS_CPP_ATTRIBUTE(x) __has_cpp_attribute(x)\n#else\n# define XXH_HAS_CPP_ATTRIBUTE(x) 0\n#endif\n/*! @endcond */\n\n/*! @cond Doxygen ignores this part */\n/*\n * Define XXH_FALLTHROUGH macro for annotating switch case with the 'fallthrough' attribute\n * introduced in CPP17 and C23.\n * CPP17 : https://en.cppreference.com/w/cpp/language/attributes/fallthrough\n * C23   : https://en.cppreference.com/w/c/language/attributes/fallthrough\n */\n#if XXH_HAS_C_ATTRIBUTE(fallthrough) || XXH_HAS_CPP_ATTRIBUTE(fallthrough)\n# define XXH_FALLTHROUGH [[fallthrough]]\n#elif XXH_HAS_ATTRIBUTE(__fallthrough__)\n# define XXH_FALLTHROUGH __attribute__ ((__fallthrough__))\n#else\n# define XXH_FALLTHROUGH /* fallthrough */\n#endif\n/*! @endcond */\n\n/*! @cond Doxygen ignores this part */\n/*\n * Define XXH_NOESCAPE for annotated pointers in public API.\n * https://clang.llvm.org/docs/AttributeReference.html#noescape\n * As of writing this, only supported by clang.\n */\n#if XXH_HAS_ATTRIBUTE(noescape)\n# define XXH_NOESCAPE __attribute__((__noescape__))\n#else\n# define XXH_NOESCAPE\n#endif\n/*! 
@endcond */\n\n\n/*!\n * @}\n * @ingroup public\n * @{\n */\n\n#ifndef XXH_NO_LONG_LONG\n/*-**********************************************************************\n*  64-bit hash\n************************************************************************/\n#if defined(XXH_DOXYGEN) /* don't include <stdint.h> */\n/*!\n * @brief An unsigned 64-bit integer.\n *\n * Not necessarily defined to `uint64_t` but functionally equivalent.\n */\ntypedef uint64_t XXH64_hash_t;\n#elif !defined (__VMS) \\\n  && (defined (__cplusplus) \\\n  || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )\n#   ifdef _AIX\n#     include <inttypes.h>\n#   else\n#     include <stdint.h>\n#   endif\n   typedef uint64_t XXH64_hash_t;\n#else\n#  include <limits.h>\n#  if defined(__LP64__) && ULONG_MAX == 0xFFFFFFFFFFFFFFFFULL\n     /* LP64 ABI says uint64_t is unsigned long */\n     typedef unsigned long XXH64_hash_t;\n#  else\n     /* the following type must have a width of 64-bit */\n     typedef unsigned long long XXH64_hash_t;\n#  endif\n#endif\n\n/*!\n * @}\n *\n * @defgroup XXH64_family XXH64 family\n * @ingroup public\n * @{\n * Contains functions used in the classic 64-bit xxHash algorithm.\n *\n * @note\n *   XXH3 provides competitive speed for both 32-bit and 64-bit systems,\n *   and offers true 64/128 bit hash results.\n *   It provides better speed for systems with vector processing capabilities.\n */\n\n/*!\n * @brief Calculates the 64-bit hash of @p input using xxHash64.\n *\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n * @param seed The 64-bit seed to alter the hash's output predictably.\n *\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * @return The calculated 64-bit xxHash64 value.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed);\n\n/*******   Streaming   *******/\n#ifndef XXH_NO_STREAM\n/*!\n * @brief The opaque state struct for the XXH64 streaming API.\n *\n * @see XXH64_state_s for details.\n * @see @ref streaming_example \"Streaming Example\"\n */\ntypedef struct XXH64_state_s XXH64_state_t;   /* incomplete type */\n\n/*!\n * @brief Allocates an @ref XXH64_state_t.\n *\n * @return An allocated pointer of @ref XXH64_state_t on success.\n * @return `NULL` on failure.\n *\n * @note Must be freed with XXH64_freeState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_MALLOCF XXH64_state_t* XXH64_createState(void);\n\n/*!\n * @brief Frees an @ref XXH64_state_t.\n *\n * @param statePtr A pointer to an @ref XXH64_state_t allocated with @ref XXH64_createState().\n *\n * @return @ref XXH_OK.\n *\n * @note @p statePtr must be allocated with XXH64_createState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode  XXH64_freeState(XXH64_state_t* statePtr);\n\n/*!\n * @brief Copies one @ref XXH64_state_t to another.\n *\n * @param dst_state The state to copy to.\n * @param src_state The state to copy from.\n * @pre\n *   @p dst_state and @p src_state must not be `NULL` and must not overlap.\n */\nXXH_PUBLIC_API void XXH64_copyState(XXH_NOESCAPE XXH64_state_t* dst_state, const XXH64_state_t* src_state);\n\n/*!\n * @brief Resets an @ref XXH64_state_t to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n * @param seed The 64-bit seed to alter the hash result predictably.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note This 
function resets and seeds a state. Call it before @ref XXH64_update().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH64_reset  (XXH_NOESCAPE XXH64_state_t* statePtr, XXH64_hash_t seed);\n\n/*!\n * @brief Consumes a block of @p input to an @ref XXH64_state_t.\n *\n * @param statePtr The state struct to update.\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. In C++, this also must be *TriviallyCopyable*.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note Call this to incrementally consume blocks of data.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH64_update (XXH_NOESCAPE XXH64_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);\n\n/*!\n * @brief Returns the calculated hash value from an @ref XXH64_state_t.\n *\n * @param statePtr The state struct to calculate the hash from.\n *\n * @pre\n *  @p statePtr must not be `NULL`.\n *\n * @return The calculated 64-bit xxHash64 value from that state.\n *\n * @note\n *   Calling XXH64_digest() will not affect @p statePtr, so you can update,\n *   digest, and update again.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64_digest (XXH_NOESCAPE const XXH64_state_t* statePtr);\n#endif /* !XXH_NO_STREAM */\n/*******   Canonical representation   *******/\n\n/*!\n * @brief Canonical (big endian) representation of @ref XXH64_hash_t.\n */\ntypedef struct { unsigned char digest[sizeof(XXH64_hash_t)]; } XXH64_canonical_t;\n\n/*!\n * @brief Converts an @ref XXH64_hash_t to a big endian @ref XXH64_canonical_t.\n *\n 
* @param dst The @ref XXH64_canonical_t pointer to be stored to.\n * @param hash The @ref XXH64_hash_t to be converted.\n *\n * @pre\n *   @p dst must not be `NULL`.\n *\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API void XXH64_canonicalFromHash(XXH_NOESCAPE XXH64_canonical_t* dst, XXH64_hash_t hash);\n\n/*!\n * @brief Converts an @ref XXH64_canonical_t to a native @ref XXH64_hash_t.\n *\n * @param src The @ref XXH64_canonical_t to convert.\n *\n * @pre\n *   @p src must not be `NULL`.\n *\n * @return The converted hash.\n *\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH64_hashFromCanonical(XXH_NOESCAPE const XXH64_canonical_t* src);\n\n#ifndef XXH_NO_XXH3\n\n/*!\n * @}\n * ************************************************************************\n * @defgroup XXH3_family XXH3 family\n * @ingroup public\n * @{\n *\n * XXH3 is a more recent hash algorithm featuring:\n *  - Improved speed for both small and large inputs\n *  - True 64-bit and 128-bit outputs\n *  - SIMD acceleration\n *  - Improved 32-bit viability\n *\n * Speed analysis methodology is explained here:\n *\n *    https://fastcompression.blogspot.com/2019/03/presenting-xxh3.html\n *\n * Compared to XXH64, expect XXH3 to run approximately\n * ~2x faster on large inputs and >3x faster on small ones,\n * exact differences vary depending on platform.\n *\n * XXH3's speed benefits greatly from SIMD and 64-bit arithmetic,\n * but does not require it.\n * Most 32-bit and 64-bit targets that can run XXH32 smoothly can run XXH3\n * at competitive speeds, even without vector support. 
Further details are\n * explained in the implementation.\n *\n * XXH3 has a fast scalar implementation, but it also includes accelerated SIMD\n * implementations for many common platforms:\n *   - AVX512\n *   - AVX2\n *   - SSE2\n *   - ARM NEON\n *   - WebAssembly SIMD128\n *   - POWER8 VSX\n *   - s390x ZVector\n * This can be controlled via the @ref XXH_VECTOR macro, but it automatically\n * selects the best version according to predefined macros. For the x86 family, an\n * automatic runtime dispatcher is included separately in @ref xxh_x86dispatch.c.\n *\n * XXH3 implementation is portable:\n * it has a generic C90 formulation that can be compiled on any platform,\n * all implementations generate exactly the same hash value on all platforms.\n * Starting from v0.8.0, it's also labelled \"stable\", meaning that\n * any future version will also generate the same hash value.\n *\n * XXH3 offers 2 variants, _64bits and _128bits.\n *\n * When only 64 bits are needed, prefer invoking the _64bits variant, as it\n * reduces the amount of mixing, resulting in faster speed on small inputs.\n * It's also generally simpler to manipulate a scalar return type than a struct.\n *\n * The API supports one-shot hashing, streaming mode, and custom secrets.\n */\n\n/*!\n * @ingroup tuning\n * @brief Possible values for @ref XXH_VECTOR.\n *\n * Unless set explicitly, determined automatically.\n */\n#  define XXH_SCALAR 0 /*!< Portable scalar version */\n#  define XXH_SSE2   1 /*!< SSE2 for Pentium 4, Opteron, all x86_64. 
*/\n#  define XXH_AVX2   2 /*!< AVX2 for Haswell and Bulldozer */\n#  define XXH_AVX512 3 /*!< AVX512 for Skylake and Icelake */\n#  define XXH_NEON   4 /*!< NEON for most ARMv7-A, all AArch64, and WASM SIMD128 */\n#  define XXH_VSX    5 /*!< VSX and ZVector for POWER8/z13 (64-bit) */\n#  define XXH_SVE    6 /*!< SVE for some ARMv8-A and ARMv9-A */\n#  define XXH_LSX    7 /*!< LSX (128-bit SIMD) for LoongArch64 */\n#  define XXH_LASX   8 /*!< LASX (256-bit SIMD) for LoongArch64 */\n#  define XXH_RVV    9 /*!< RVV (RISC-V Vector) for RISC-V */\n\n/*-**********************************************************************\n*  XXH3 64-bit variant\n************************************************************************/\n\n/*!\n * @brief Calculates 64-bit unseeded variant of XXH3 hash of @p input.\n *\n * @param input  The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n *\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * @return The calculated 64-bit XXH3 hash value.\n *\n * @note\n *   This is equivalent to @ref XXH3_64bits_withSeed() with a seed of `0`, however\n *   it may have slightly better performance due to constant propagation of the\n *   defaults.\n *\n * @see\n *    XXH3_64bits_withSeed(), XXH3_64bits_withSecret(): other seeding variants\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits(XXH_NOESCAPE const void* input, size_t length);\n\n/*!\n * @brief Calculates 64-bit seeded variant of XXH3 hash of @p input.\n *\n * @param input  The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n * @param seed   The 64-bit seed to alter the hash result predictably.\n *\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * @return The calculated 64-bit XXH3 hash value.\n *\n * @note\n *    seed == 0 produces the same results as @ref XXH3_64bits().\n *\n * This variant generates a custom secret on the fly based on the default secret\n * altered using the @p seed value.\n *\n * While this operation is decently fast, note that it's not completely free.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_withSeed(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed);\n\n/*!\n * The bare minimum size for a custom secret.\n *\n * @see\n *  XXH3_64bits_withSecret(), XXH3_64bits_reset_withSecret(),\n *  XXH3_128bits_withSecret(), XXH3_128bits_reset_withSecret().\n */\n#define XXH3_SECRET_SIZE_MIN 136\n\n/*!\n * @brief Calculates 64-bit variant of XXH3 with a custom \"secret\".\n *\n * @param data       The block of data to be hashed, at least @p len bytes in size.\n * @param len        The length of @p data, in bytes.\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n *\n * @return The calculated 64-bit XXH3 hash value.\n *\n * @pre\n *   The memory between @p data and @p data + @p len must be valid,\n *   readable, contiguous memory. However, if @p len is `0`, @p data may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * It's possible to provide any blob of bytes as a \"secret\" to generate the hash.\n * This makes it more difficult for an external actor to prepare an intentional collision.\n * The main condition is that @p secretSize *must* be large enough (>= @ref XXH3_SECRET_SIZE_MIN).\n * However, the quality of the secret impacts the dispersion of the hash algorithm.\n * Therefore, the secret _must_ look like a bunch of random bytes.\n * Avoid \"trivial\" or structured data such as repeated sequences or a text document.\n * Whenever in doubt about the \"randomness\" of the blob of bytes,\n * consider employing @ref XXH3_generateSecret() instead (see below).\n * It will generate a proper high entropy secret derived from the blob of bytes.\n * Another advantage of using XXH3_generateSecret() is that\n * it guarantees that all bits within the initial blob of bytes\n * will impact every bit of the output.\n * This is not necessarily the case when using the blob of bytes directly\n * because, when hashing _small_ inputs, only a portion of the secret is employed.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_withSecret(XXH_NOESCAPE const void* data, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize);\n\n\n/*******   Streaming   *******/\n#ifndef XXH_NO_STREAM\n/*\n * Streaming requires state maintenance.\n * This operation costs memory and CPU.\n * As a consequence, streaming is slower than one-shot hashing.\n * For better performance, prefer one-shot functions whenever applicable.\n */\n\n/*!\n * @brief The opaque state struct for the XXH3 streaming API.\n *\n * @see XXH3_state_s for details.\n * @see @ref streaming_example \"Streaming Example\"\n */\ntypedef struct XXH3_state_s XXH3_state_t;\nXXH_PUBLIC_API XXH_MALLOCF XXH3_state_t* XXH3_createState(void);\nXXH_PUBLIC_API XXH_errorcode XXH3_freeState(XXH3_state_t* 
statePtr);\n\n/*!\n * @brief Copies one @ref XXH3_state_t to another.\n *\n * @param dst_state The state to copy to.\n * @param src_state The state to copy from.\n * @pre\n *   @p dst_state and @p src_state must not be `NULL` and must not overlap.\n */\nXXH_PUBLIC_API void XXH3_copyState(XXH_NOESCAPE XXH3_state_t* dst_state, XXH_NOESCAPE const XXH3_state_t* src_state);\n\n/*!\n * @brief Resets an @ref XXH3_state_t to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   - This function resets `statePtr` and generates a secret with default parameters.\n *   - Call this function before @ref XXH3_64bits_update().\n *   - Digest will be equivalent to `XXH3_64bits()`.\n *\n * @see @ref streaming_example \"Streaming Example\"\n *\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr);\n\n/*!\n * @brief Resets an @ref XXH3_state_t with 64-bit seed to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n * @param seed     The 64-bit seed to alter the hash result predictably.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   - This function resets `statePtr` and generates a secret from `seed`.\n *   - Call this function before @ref XXH3_64bits_update().\n *   - Digest will be equivalent to `XXH3_64bits_withSeed()`.\n *\n * @see @ref streaming_example \"Streaming Example\"\n *\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed);\n\n/*!\n * @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n *\n * @pre\n *   @p statePtr must not be 
`NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   `secret` is referenced, it _must outlive_ the hash streaming session.\n *\n * Similar to one-shot API, `secretSize` must be >= @ref XXH3_SECRET_SIZE_MIN,\n * and the quality of produced hash values depends on secret's entropy\n * (secret's content should look like a bunch of random bytes).\n * When in doubt about the randomness of a candidate `secret`,\n * consider employing `XXH3_generateSecret()` instead (see below).\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_64bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize);\n\n/*!\n * @brief Consumes a block of @p input to an @ref XXH3_state_t.\n *\n * @param statePtr The state struct to update.\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n * @pre\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. 
In C++, this also must be *TriviallyCopyable*.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note Call this to incrementally consume blocks of data.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_64bits_update (XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);\n\n/*!\n * @brief Returns the calculated XXH3 64-bit hash value from an @ref XXH3_state_t.\n *\n * @param statePtr The state struct to calculate the hash from.\n *\n * @pre\n *  @p statePtr must not be `NULL`.\n *\n * @return The calculated XXH3 64-bit hash value from that state.\n *\n * @note\n *   Calling XXH3_64bits_digest() will not affect @p statePtr, so you can update,\n *   digest, and update again.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t XXH3_64bits_digest (XXH_NOESCAPE const XXH3_state_t* statePtr);\n#endif /* !XXH_NO_STREAM */\n\n/* note : canonical representation of XXH3 is the same as XXH64\n * since they both produce XXH64_hash_t values */\n\n\n/*-**********************************************************************\n*  XXH3 128-bit variant\n************************************************************************/\n\n/*!\n * @brief The return value from 128-bit hashes.\n *\n * Stored in little endian order, although the fields themselves are in native\n * endianness.\n */\ntypedef struct {\n    XXH64_hash_t low64;   /*!< `value & 0xFFFFFFFFFFFFFFFF` */\n    XXH64_hash_t high64;  /*!< `value >> 64` */\n} XXH128_hash_t;\n\n/*!\n * @brief Calculates 128-bit unseeded variant of XXH3 of @p data.\n *\n * @param data The block of data to be hashed, at least @p len bytes in size.\n * @param len  The length of @p data, in bytes.\n *\n * @return The calculated 128-bit variant of XXH3 value.\n *\n * The 128-bit variant of XXH3 has more strength, but it has a bit of overhead\n * for shorter inputs.\n *\n * This is 
equivalent to @ref XXH3_128bits_withSeed() with a seed of `0`, however\n * it may have slightly better performance due to constant propagation of the\n * defaults.\n *\n * @see XXH3_128bits_withSeed(), XXH3_128bits_withSecret(): other seeding variants\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits(XXH_NOESCAPE const void* data, size_t len);\n/*! @brief Calculates 128-bit seeded variant of XXH3 hash of @p data.\n *\n * @param data The block of data to be hashed, at least @p len bytes in size.\n * @param len  The length of @p data, in bytes.\n * @param seed The 64-bit seed to alter the hash result predictably.\n *\n * @return The calculated 128-bit variant of XXH3 value.\n *\n * @note\n *    seed == 0 produces the same results as @ref XXH3_128bits().\n *\n * This variant generates a custom secret on the fly based on the default secret\n * altered using the @p seed value.\n *\n * While this operation is decently fast, note that it's not completely free.\n *\n * @see XXH3_128bits(), XXH3_128bits_withSecret(): other seeding variants\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_withSeed(XXH_NOESCAPE const void* data, size_t len, XXH64_hash_t seed);\n/*!\n * @brief Calculates 128-bit variant of XXH3 with a custom \"secret\".\n *\n * @param data       The block of data to be hashed, at least @p len bytes in size.\n * @param len        The length of @p data, in bytes.\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n *\n * @return The calculated 128-bit variant of XXH3 value.\n *\n * It's possible to provide any blob of bytes as a \"secret\" to generate the hash.\n * This makes it more difficult for an external actor to prepare an intentional collision.\n * The main condition is that @p secretSize *must* be large enough (>= @ref XXH3_SECRET_SIZE_MIN).\n * 
However, the quality of the secret impacts the dispersion of the hash algorithm.\n * Therefore, the secret _must_ look like a bunch of random bytes.\n * Avoid \"trivial\" or structured data such as repeated sequences or a text document.\n * Whenever in doubt about the \"randomness\" of the blob of bytes,\n * consider employing @ref XXH3_generateSecret() instead (see below).\n * It will generate a proper high entropy secret derived from the blob of bytes.\n * Another advantage of using XXH3_generateSecret() is that\n * it guarantees that all bits within the initial blob of bytes\n * will impact every bit of the output.\n * This is not necessarily the case when using the blob of bytes directly\n * because, when hashing _small_ inputs, only a portion of the secret is employed.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_withSecret(XXH_NOESCAPE const void* data, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize);\n\n/*******   Streaming   *******/\n#ifndef XXH_NO_STREAM\n/*\n * Streaming requires state maintenance.\n * This operation costs memory and CPU.\n * As a consequence, streaming is slower than one-shot hashing.\n * For better performance, prefer one-shot functions whenever applicable.\n *\n * XXH3_128bits uses the same XXH3_state_t as XXH3_64bits().\n * Use the already declared XXH3_createState() and XXH3_freeState().\n *\n * All reset and streaming functions have the same meaning as their 64-bit counterparts.\n */\n\n/*!\n * @brief Resets an @ref XXH3_state_t to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   - This function resets `statePtr` and generates a secret with default parameters.\n *   - Call it before @ref XXH3_128bits_update().\n *   - Digest will be equivalent to `XXH3_128bits()`.\n *\n 
* @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr);\n\n/*!\n * @brief Resets an @ref XXH3_state_t with 64-bit seed to begin a new hash.\n *\n * @param statePtr The state struct to reset.\n * @param seed     The 64-bit seed to alter the hash result predictably.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   - This function resets `statePtr` and generates a secret from `seed`.\n *   - Call it before @ref XXH3_128bits_update().\n *   - Digest will be equivalent to `XXH3_128bits_withSeed()`.\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed);\n/*!\n * @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.\n *\n * @param statePtr   The state struct to reset.\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * `secret` is referenced, it _must outlive_ the hash streaming session.\n * Similar to one-shot API, `secretSize` must be >= @ref XXH3_SECRET_SIZE_MIN,\n * and the quality of produced hash values depends on secret's entropy\n * (secret's content should look like a bunch of random bytes).\n * When in doubt about the randomness of a candidate `secret`,\n * consider employing `XXH3_generateSecret()` instead (see below).\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_128bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize);\n\n/*!\n * @brief Consumes a block of @p input to an @ref XXH3_state_t.\n *\n * Call this to incrementally consume blocks of data.\n 
*\n * @param statePtr The state struct to update.\n * @param input The block of data to be hashed, at least @p length bytes in size.\n * @param length The length of @p input, in bytes.\n *\n * @pre\n *   @p statePtr must not be `NULL`.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @note\n *   The memory between @p input and @p input + @p length must be valid,\n *   readable, contiguous memory. However, if @p length is `0`, @p input may be\n *   `NULL`. In C++, this also must be *TriviallyCopyable*.\n *\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_128bits_update (XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* input, size_t length);\n\n/*!\n * @brief Returns the calculated XXH3 128-bit hash value from an @ref XXH3_state_t.\n *\n * @param statePtr The state struct to calculate the hash from.\n *\n * @pre\n *  @p statePtr must not be `NULL`.\n *\n * @return The calculated XXH3 128-bit hash value from that state.\n *\n * @note\n *   Calling XXH3_128bits_digest() will not affect @p statePtr, so you can update,\n *   digest, and update again.\n *\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH3_128bits_digest (XXH_NOESCAPE const XXH3_state_t* statePtr);\n#endif /* !XXH_NO_STREAM */\n\n/* The following helper functions make it possible to compare XXH128_hash_t values.\n * Since XXH128_hash_t is a structure, this capability is not offered by the language.\n * Note: For better performance, these functions can be inlined using XXH_INLINE_ALL */\n\n/*!\n * @brief Check equality of two XXH128_hash_t values\n *\n * @param h1 The 128-bit hash value.\n * @param h2 Another 128-bit hash value.\n *\n * @return `1` if `h1` and `h2` are equal.\n * @return `0` if they are not.\n */\nXXH_PUBLIC_API XXH_PUREF int XXH128_isEqual(XXH128_hash_t h1, XXH128_hash_t h2);\n\n/*!\n * @brief Compares two @ref XXH128_hash_t\n *\n * This comparator is compatible with stdlib's `qsort()`/`bsearch()`.\n *\n * @param h128_1 Left-hand side value\n * @param 
h128_2 Right-hand side value\n *\n * @return >0 if @p h128_1  > @p h128_2\n * @return =0 if @p h128_1 == @p h128_2\n * @return <0 if @p h128_1  < @p h128_2\n */\nXXH_PUBLIC_API XXH_PUREF int XXH128_cmp(XXH_NOESCAPE const void* h128_1, XXH_NOESCAPE const void* h128_2);\n\n\n/*******   Canonical representation   *******/\ntypedef struct { unsigned char digest[sizeof(XXH128_hash_t)]; } XXH128_canonical_t;\n\n\n/*!\n * @brief Converts an @ref XXH128_hash_t to a big endian @ref XXH128_canonical_t.\n *\n * @param dst  The @ref XXH128_canonical_t pointer to be stored to.\n * @param hash The @ref XXH128_hash_t to be converted.\n *\n * @pre\n *   @p dst must not be `NULL`.\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API void XXH128_canonicalFromHash(XXH_NOESCAPE XXH128_canonical_t* dst, XXH128_hash_t hash);\n\n/*!\n * @brief Converts an @ref XXH128_canonical_t to a native @ref XXH128_hash_t.\n *\n * @param src The @ref XXH128_canonical_t to convert.\n *\n * @pre\n *   @p src must not be `NULL`.\n *\n * @return The converted hash.\n * @see @ref canonical_representation_example \"Canonical Representation Example\"\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH128_hashFromCanonical(XXH_NOESCAPE const XXH128_canonical_t* src);\n\n\n#endif  /* !XXH_NO_XXH3 */\n#endif  /* XXH_NO_LONG_LONG */\n\n/*!\n * @}\n */\n#endif /* XXHASH_H_5627135585666179 */\n\n\n\n#if defined(XXH_STATIC_LINKING_ONLY) && !defined(XXHASH_H_STATIC_13879238742)\n#define XXHASH_H_STATIC_13879238742\n/* ****************************************************************************\n * This section contains declarations which are not guaranteed to remain stable.\n * They may change in future versions, becoming incompatible with a different\n * version of the library.\n * These declarations should only be used with static linking.\n * Never use them in association with dynamic linking!\n 
***************************************************************************** */\n\n/*\n * These definitions are only present to allow static allocation\n * of XXH states, on stack or in a struct, for example.\n * Never **ever** access their members directly.\n */\n\n/*!\n * @internal\n * @brief Structure for XXH32 streaming API.\n *\n * @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,\n * @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is\n * an opaque type. This allows fields to safely be changed.\n *\n * Typedef'd to @ref XXH32_state_t.\n * Do not access the members of this struct directly.\n * @see XXH64_state_s, XXH3_state_s\n */\nstruct XXH32_state_s {\n   XXH32_hash_t total_len_32; /*!< Total length hashed, modulo 2^32 */\n   XXH32_hash_t large_len;    /*!< Whether the hash is >= 16 (handles @ref total_len_32 overflow) */\n   XXH32_hash_t acc[4];       /*!< Accumulator lanes */\n   unsigned char buffer[16];  /*!< Internal buffer for partial reads. */\n   XXH32_hash_t bufferedSize; /*!< Amount of data in @ref buffer */\n   XXH32_hash_t reserved;     /*!< Reserved field. Do not read or write to it. */\n};   /* typedef'd to XXH32_state_t */\n\n\n#ifndef XXH_NO_LONG_LONG  /* defined when there is no 64-bit support */\n\n/*!\n * @internal\n * @brief Structure for XXH64 streaming API.\n *\n * @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,\n * @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined. Otherwise it is\n * an opaque type. This allows fields to safely be changed.\n *\n * Typedef'd to @ref XXH64_state_t.\n * Do not access the members of this struct directly.\n * @see XXH32_state_s, XXH3_state_s\n */\nstruct XXH64_state_s {\n   XXH64_hash_t total_len;    /*!< Total length hashed. This is always 64-bit. */\n   XXH64_hash_t acc[4];       /*!< Accumulator lanes */\n   unsigned char buffer[32];  /*!< Internal buffer for partial reads. 
*/\n   XXH32_hash_t bufferedSize; /*!< Amount of data in @ref buffer */\n   XXH32_hash_t reserved32;   /*!< Reserved field, needed for padding anyway */\n   XXH64_hash_t reserved64;   /*!< Reserved field. Do not read or write to it. */\n};   /* typedef'd to XXH64_state_t */\n\n#ifndef XXH_NO_XXH3\n\n#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L) /* >= C11 */\n#  define XXH_ALIGN(n)      _Alignas(n)\n#elif defined(__cplusplus) && (__cplusplus >= 201103L) /* >= C++11 */\n/* In C++ alignas() is a keyword */\n#  define XXH_ALIGN(n)      alignas(n)\n#elif defined(__GNUC__)\n#  define XXH_ALIGN(n)      __attribute__ ((aligned(n)))\n#elif defined(_MSC_VER)\n#  define XXH_ALIGN(n)      __declspec(align(n))\n#else\n#  define XXH_ALIGN(n)   /* disabled */\n#endif\n\n/* Old GCC versions only accept the attribute after the type in structures. */\n#if !(defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L))   /* C11+ */ \\\n    && ! (defined(__cplusplus) && (__cplusplus >= 201103L)) /* >= C++11 */ \\\n    && defined(__GNUC__)\n#   define XXH_ALIGN_MEMBER(align, type) type XXH_ALIGN(align)\n#else\n#   define XXH_ALIGN_MEMBER(align, type) XXH_ALIGN(align) type\n#endif\n\n/*!\n * @internal\n * @brief The size of the internal XXH3 buffer.\n *\n * This is the optimal update size for incremental hashing.\n *\n * @see XXH3_64b_update(), XXH3_128b_update().\n */\n#define XXH3_INTERNALBUFFER_SIZE 256\n\n/*!\n * @def XXH3_SECRET_DEFAULT_SIZE\n * @brief Default secret size\n *\n * This is the size of the internal XXH3_kSecret\n * and is needed by XXH3_generateSecret_fromSeed().\n *\n * Not to be confused with @ref XXH3_SECRET_SIZE_MIN.\n */\n#define XXH3_SECRET_DEFAULT_SIZE 192\n\n/*!\n * @internal\n * @brief Structure for XXH3 streaming API.\n *\n * @note This is only defined when @ref XXH_STATIC_LINKING_ONLY,\n * @ref XXH_INLINE_ALL, or @ref XXH_IMPLEMENTATION is defined.\n * Otherwise it is an opaque type.\n * Never use this definition in combination with dynamic 
linking.\n * This allows fields to safely be changed in the future.\n *\n * @note ** This structure has a strict alignment requirement of 64 bytes!! **\n * Do not allocate this with `malloc()` or `new`,\n * it will not be sufficiently aligned.\n * Use @ref XXH3_createState() and @ref XXH3_freeState(), or stack allocation.\n *\n * Typedef'd to @ref XXH3_state_t.\n * Never access the members of this struct directly.\n *\n * @see XXH3_INITSTATE() for stack initialization.\n * @see XXH3_createState(), XXH3_freeState().\n * @see XXH32_state_s, XXH64_state_s\n */\nstruct XXH3_state_s {\n   XXH_ALIGN_MEMBER(64, XXH64_hash_t acc[8]);\n       /*!< The 8 accumulators. See @ref XXH32_state_s::acc and @ref XXH64_state_s::acc */\n   XXH_ALIGN_MEMBER(64, unsigned char customSecret[XXH3_SECRET_DEFAULT_SIZE]);\n       /*!< Used to store a custom secret generated from a seed. */\n   XXH_ALIGN_MEMBER(64, unsigned char buffer[XXH3_INTERNALBUFFER_SIZE]);\n       /*!< The internal buffer. @see XXH32_state_s::mem32 */\n   XXH32_hash_t bufferedSize;\n       /*!< The amount of memory in @ref buffer, @see XXH32_state_s::memsize */\n   XXH32_hash_t useSeed;\n       /*!< Reserved field. Needed for padding on 64-bit. */\n   size_t nbStripesSoFar;\n       /*!< Number of stripes processed. */\n   XXH64_hash_t totalLen;\n       /*!< Total length hashed. 64-bit even on 32-bit targets. */\n   size_t nbStripesPerBlock;\n       /*!< Number of stripes per block. */\n   size_t secretLimit;\n       /*!< Size of @ref customSecret or @ref extSecret */\n   XXH64_hash_t seed;\n       /*!< Seed for _withSeed variants. Must be zero otherwise, @see XXH3_INITSTATE() */\n   XXH64_hash_t reserved64;\n       /*!< Reserved field. */\n   const unsigned char* extSecret;\n       /*!< Reference to an external secret for the _withSecret variants, NULL\n        *   for other variants. 
*/\n   /* note: there may be some padding at the end due to alignment on 64 bytes */\n}; /* typedef'd to XXH3_state_t */\n\n#undef XXH_ALIGN_MEMBER\n\n/*!\n * @brief Initializes a stack-allocated `XXH3_state_s`.\n *\n * When the @ref XXH3_state_t structure is merely emplaced on stack,\n * it should be initialized with XXH3_INITSTATE() or a memset()\n * in case its first reset uses XXH3_NNbits_reset_withSeed().\n * This init can be omitted if the first reset uses default or _withSecret mode.\n * This operation isn't necessary when the state is created with XXH3_createState().\n * Note that this doesn't prepare the state for a streaming operation,\n * it's still necessary to use XXH3_NNbits_reset*() afterwards.\n */\n#define XXH3_INITSTATE(XXH3_state_ptr)                       \\\n    do {                                                     \\\n        XXH3_state_t* tmp_xxh3_state_ptr = (XXH3_state_ptr); \\\n        tmp_xxh3_state_ptr->seed = 0;                        \\\n        tmp_xxh3_state_ptr->extSecret = NULL;                \\\n    } while(0)\n\n\n/*!\n * @brief Calculates the 128-bit hash of @p data using XXH3.\n *\n * @param data The block of data to be hashed, at least @p len bytes in size.\n * @param len  The length of @p data, in bytes.\n * @param seed The 64-bit seed to alter the hash's output predictably.\n *\n * @pre\n *   The memory between @p data and @p data + @p len must be valid,\n *   readable, contiguous memory. However, if @p len is `0`, @p data may be\n *   `NULL`. In C++, this also must be *TriviallyCopyable*.\n *\n * @return The calculated 128-bit XXH3 value.\n *\n * @see @ref single_shot_example \"Single Shot Example\" for an example.\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t XXH128(XXH_NOESCAPE const void* data, size_t len, XXH64_hash_t seed);\n\n\n/* ===   Experimental API   === */\n/* Symbols defined below must be considered tied to a specific library version. 
*/\n\n/*!\n * @brief Derive a high-entropy secret from any user-defined content, named customSeed.\n *\n * @param secretBuffer    A writable buffer for derived high-entropy secret data.\n * @param secretSize      Size of secretBuffer, in bytes.  Must be >= XXH3_SECRET_SIZE_MIN.\n * @param customSeed      A user-defined content.\n * @param customSeedSize  Size of customSeed, in bytes.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * The generated secret can be used in combination with `*_withSecret()` functions.\n * The `_withSecret()` variants are useful to provide a higher level of protection\n * than 64-bit seed, as it becomes much more difficult for an external actor to\n * guess how to impact the calculation logic.\n *\n * The function accepts as input a custom seed of any length and any content,\n * and derives from it a high-entropy secret of length @p secretSize into an\n * already allocated buffer @p secretBuffer.\n *\n * The generated secret can then be used with any `*_withSecret()` variant.\n * The functions @ref XXH3_128bits_withSecret(), @ref XXH3_64bits_withSecret(),\n * @ref XXH3_128bits_reset_withSecret() and @ref XXH3_64bits_reset_withSecret()\n * are part of this list. They all accept a `secret` parameter\n * which must be large enough for implementation reasons (>= @ref XXH3_SECRET_SIZE_MIN)\n * _and_ feature very high entropy (consist of random-looking bytes).\n * These conditions can be a high bar to meet, so @ref XXH3_generateSecret() can\n * be employed to ensure proper quality.\n *\n * @p customSeed can be anything. It can have any size, even small ones,\n * and its content can be anything, even \"poor entropy\" sources such as a bunch\n * of zeroes. 
The resulting `secret` will nonetheless provide all required qualities.\n *\n * @pre\n *   - @p secretSize must be >= @ref XXH3_SECRET_SIZE_MIN\n *   - When @p customSeedSize > 0, supplying NULL as customSeed is undefined behavior.\n *\n * Example code:\n * @code{.c}\n *    #include <stdio.h>\n *    #include <stdlib.h>\n *    #include <string.h>\n *    #define XXH_STATIC_LINKING_ONLY // expose unstable API\n *    #include \"xxhash.h\"\n *    // Hashes argv[2] using the entropy from argv[1].\n *    int main(int argc, char* argv[])\n *    {\n *        char secret[XXH3_SECRET_SIZE_MIN];\n *        if (argc != 3) { return 1; }\n *        XXH3_generateSecret(secret, sizeof(secret), argv[1], strlen(argv[1]));\n *        XXH64_hash_t h = XXH3_64bits_withSecret(\n *             argv[2], strlen(argv[2]),\n *             secret, sizeof(secret)\n *        );\n *        printf(\"%016llx\\n\", (unsigned long long) h);\n *    }\n * @endcode\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_generateSecret(XXH_NOESCAPE void* secretBuffer, size_t secretSize, XXH_NOESCAPE const void* customSeed, size_t customSeedSize);\n\n/*!\n * @brief Generate the same secret as the _withSeed() variants.\n *\n * @param secretBuffer A writable buffer of @ref XXH3_SECRET_DEFAULT_SIZE bytes\n * @param seed         The 64-bit seed to alter the hash result predictably.\n *\n * The generated secret can be used in combination with\n * `*_withSecret()` and `_withSecretandSeed()` variants.\n *\n * Example C++ `std::string` hash class:\n * @code{.cpp}\n *    #include <string>\n *    #define XXH_STATIC_LINKING_ONLY // expose unstable API\n *    #include \"xxhash.h\"\n *    // Slow, seeds each time\n *    class HashSlow {\n *        XXH64_hash_t seed;\n *    public:\n *        HashSlow(XXH64_hash_t s) : seed{s} {}\n *        size_t operator()(const std::string& x) const {\n *            return size_t{XXH3_64bits_withSeed(x.c_str(), x.length(), seed)};\n *        }\n *    };\n *    // Fast, caches the seeded secret for 
future uses.\n *    class HashFast {\n *        unsigned char secret[XXH3_SECRET_DEFAULT_SIZE];\n *    public:\n *        HashFast(XXH64_hash_t s) {\n *            XXH3_generateSecret_fromSeed(secret, s);\n *        }\n *        size_t operator()(const std::string& x) const {\n *            return size_t{\n *                XXH3_64bits_withSecret(x.c_str(), x.length(), secret, sizeof(secret))\n *            };\n *        }\n *    };\n * @endcode\n */\nXXH_PUBLIC_API void XXH3_generateSecret_fromSeed(XXH_NOESCAPE void* secretBuffer, XXH64_hash_t seed);\n\n/*!\n * @brief Maximum size of a \"short\" key, in bytes.\n */\n#define XXH3_MIDSIZE_MAX 240\n\n/*!\n * @brief Calculates 64/128-bit seeded variant of XXH3 hash of @p data.\n *\n * @param data       The block of data to be hashed, at least @p len bytes in size.\n * @param len        The length of @p data, in bytes.\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n * @param seed       The 64-bit seed to alter the hash result predictably.\n *\n * These variants generate hash values using either:\n * - @p seed for \"short\" keys (< @ref XXH3_MIDSIZE_MAX = 240 bytes)\n * - @p secret for \"large\" keys (>= @ref XXH3_MIDSIZE_MAX).\n *\n * This generally benefits speed, compared to `_withSeed()` or `_withSecret()`.\n * `_withSeed()` has to generate the secret on the fly for \"large\" keys.\n * It's fast, but can be perceptible for \"not so large\" keys (< 1 KB).\n * `_withSecret()` has to generate the masks on the fly for \"small\" keys,\n * which requires more instructions than the _withSeed() variants.\n * Therefore, the _withSecretandSeed() variant combines the best of both worlds.\n *\n * When @p secret has been generated by XXH3_generateSecret_fromSeed(),\n * this variant produces *exactly* the same results as the `_withSeed()` variant,\n * hence offering only a pure speed benefit on \"large\" input,\n * by skipping the need to regenerate the secret for every large input.\n *\n * 
Another usage scenario is to hash the secret to a 64-bit hash value,\n * for example with XXH3_64bits(), which then becomes the seed,\n * and then employ both the seed and the secret in _withSecretandSeed().\n * On top of speed, an added benefit is that each bit in the secret\n * has a 50% chance to flip each bit in the output, via its impact on the seed.\n *\n * This is not guaranteed when using the secret directly in \"small data\" scenarios,\n * because only portions of the secret are employed for small data.\n */\nXXH_PUBLIC_API XXH_PUREF XXH64_hash_t\nXXH3_64bits_withSecretandSeed(XXH_NOESCAPE const void* data, size_t len,\n                              XXH_NOESCAPE const void* secret, size_t secretSize,\n                              XXH64_hash_t seed);\n\n/*!\n * @brief Calculates 128-bit seeded variant of XXH3 hash of @p input.\n *\n * @param input      The memory segment to be hashed, at least @p length bytes in size.\n * @param length     The length of @p input, in bytes.\n * @param secret     The secret used to alter the hash result predictably.\n * @param secretSize The length of @p secret, in bytes (must be >= XXH3_SECRET_SIZE_MIN)\n * @param seed64     The 64-bit seed to alter the hash result predictably.\n *\n * @return The calculated 128-bit XXH3 value.\n *\n * @see XXH3_64bits_withSecretandSeed(): contract is the same.\n */\nXXH_PUBLIC_API XXH_PUREF XXH128_hash_t\nXXH3_128bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t length,\n                               XXH_NOESCAPE const void* secret, size_t secretSize,\n                               XXH64_hash_t seed64);\n\n#ifndef XXH_NO_STREAM\n/*!\n * @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.\n *\n * @param statePtr   A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n * @param seed64     The 64-bit seed to alter the hash 
result predictably.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @see XXH3_64bits_withSecretandSeed(). Contract is identical.\n */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr,\n                                    XXH_NOESCAPE const void* secret, size_t secretSize,\n                                    XXH64_hash_t seed64);\n\n/*!\n * @brief Resets an @ref XXH3_state_t with secret data to begin a new hash.\n *\n * @param statePtr   A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().\n * @param secret     The secret data.\n * @param secretSize The length of @p secret, in bytes.\n * @param seed64     The 64-bit seed to alter the hash result predictably.\n *\n * @return @ref XXH_OK on success.\n * @return @ref XXH_ERROR on failure.\n *\n * @see XXH3_64bits_withSecretandSeed(). Contract is identical.\n *\n * Note: there was a bug in an earlier version of this function (<= v0.8.2)\n * that would make it generate an incorrect hash value\n * when @p seed == 0 and @p length < XXH3_MIDSIZE_MAX\n * and @p secret is different from XXH3_generateSecret_fromSeed().\n * As stated in the contract, the correct hash result must be\n * the same as XXH3_128bits_withSeed() when @p length <= XXH3_MIDSIZE_MAX.\n * Results generated by this older version are wrong, hence not comparable.\n */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr,\n                                     XXH_NOESCAPE const void* secret, size_t secretSize,\n                                     XXH64_hash_t seed64);\n\n#endif /* !XXH_NO_STREAM */\n\n#endif  /* !XXH_NO_XXH3 */\n#endif  /* XXH_NO_LONG_LONG */\n#if defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API)\n#  define XXH_IMPLEMENTATION\n#endif\n\n#endif  /* defined(XXH_STATIC_LINKING_ONLY) && !defined(XXHASH_H_STATIC_13879238742) */\n\n\n/* 
======================================================================== */\n/* ======================================================================== */\n/* ======================================================================== */\n\n\n/*-**********************************************************************\n * xxHash implementation\n *-**********************************************************************\n * xxHash's implementation used to be hosted inside xxhash.c,\n * which was then #included when inlining was activated.\n * However, inlining requires the implementation to be visible to the compiler,\n * hence included alongside the header.\n * This construction created issues with a few build and install systems,\n * as it required xxhash.c to be stored in the /include directory.\n *\n * The xxHash implementation is now directly integrated within xxhash.h.\n * As a consequence, xxhash.c is no longer needed in /include.\n *\n * xxhash.c is still available and is still useful.\n * In a \"normal\" setup, when xxhash is not inlined,\n * xxhash.h only exposes the prototypes and public symbols,\n * while xxhash.c can be built into an object file xxhash.o\n * which can then be linked into the final binary.\n ************************************************************************/\n\n#if ( defined(XXH_INLINE_ALL) || defined(XXH_PRIVATE_API) \\\n   || defined(XXH_IMPLEMENTATION) ) && !defined(XXH_IMPLEM_13a8737387)\n#  define XXH_IMPLEM_13a8737387\n\n/* *************************************\n*  Tuning parameters\n***************************************/\n\n/*!\n * @defgroup tuning Tuning parameters\n * @{\n *\n * Various macros to control xxHash's behavior.\n */\n#ifdef XXH_DOXYGEN\n/*!\n * @brief Define this to disable 64-bit code.\n *\n * Useful if only using the @ref XXH32_family and you have a strict C90 compiler.\n */\n#  define XXH_NO_LONG_LONG\n#  undef XXH_NO_LONG_LONG /* don't actually */\n/*!\n * 
@brief Controls how unaligned memory is accessed.\n *\n * By default, access to unaligned memory is controlled by `memcpy()`, which is\n * safe and portable.\n *\n * Unfortunately, on some target/compiler combinations, the generated assembly\n * is sub-optimal.\n *\n * The switch below allows selecting a different access method\n * in search of improved performance.\n *\n * @par Possible options:\n *\n *  - `XXH_FORCE_MEMORY_ACCESS=0` (default): `memcpy`\n *   @par\n *     Use `memcpy()`. Safe and portable. Note that most modern compilers will\n *     eliminate the function call and treat it as an unaligned access.\n *\n *  - `XXH_FORCE_MEMORY_ACCESS=1`: `__attribute__((aligned(1)))`\n *   @par\n *     Depends on compiler extensions and is therefore not portable.\n *     This method is safe _if_ your compiler supports it,\n *     and *generally* as fast or faster than `memcpy`.\n *\n *  - `XXH_FORCE_MEMORY_ACCESS=2`: Direct cast\n *  @par\n *     Casts directly and dereferences. This method doesn't depend on the\n *     compiler, but it violates the C standard as it directly dereferences an\n *     unaligned pointer. It can generate buggy code on targets which do not\n *     support unaligned memory accesses, but in some circumstances, it's the\n *     only known way to get the most performance.\n *\n *  - `XXH_FORCE_MEMORY_ACCESS=3`: Byteshift\n *  @par\n *     Also portable. This can generate the best code on old compilers which don't\n *     inline small `memcpy()` calls, and it might also be faster on big-endian\n *     systems which lack a native byteswap instruction. However, some compilers\n *     will emit literal byteshifts even if the target supports unaligned access.\n *\n *\n * @warning\n *   Methods 1 and 2 rely on implementation-defined behavior. 
Use these with\n *   care, as what works on one compiler/platform/optimization level may cause\n *   another to read garbage data or even crash.\n *\n * See https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html for details.\n *\n * Prefer these methods in priority order (0 > 3 > 1 > 2)\n */\n#  define XXH_FORCE_MEMORY_ACCESS 0\n\n/*!\n * @def XXH_SIZE_OPT\n * @brief Controls how much xxHash optimizes for size.\n *\n * xxHash, when compiled, tends to result in a rather large binary size. This\n * is mostly due to heavy usage of forced inlining and constant folding of the\n * @ref XXH3_family to increase performance.\n *\n * However, some developers prefer size over speed. This option can\n * significantly reduce the size of the generated code. When using the `-Os`\n * or `-Oz` options on GCC or Clang, this is defined to 1 by default,\n * otherwise it is defined to 0.\n *\n * Most of these size optimizations can be controlled manually.\n *\n * This is a number from 0-2.\n *  - `XXH_SIZE_OPT` == 0: Default. xxHash makes no size optimizations. Speed\n *    comes first.\n *  - `XXH_SIZE_OPT` == 1: Default for `-Os` and `-Oz`. xxHash is more\n *    conservative and disables hacks that increase code size. It implies the\n *    options @ref XXH_NO_INLINE_HINTS == 1, @ref XXH_FORCE_ALIGN_CHECK == 0,\n *    and @ref XXH3_NEON_LANES == 8 if they are not already defined.\n *  - `XXH_SIZE_OPT` == 2: xxHash tries to make itself as small as possible.\n *    Performance may cry. 
For example, the single shot functions just use the\n *    streaming API.\n */\n#  define XXH_SIZE_OPT 0\n\n/*!\n * @def XXH_FORCE_ALIGN_CHECK\n * @brief If defined to non-zero, adds a special path for aligned inputs (XXH32()\n * and XXH64() only).\n *\n * This is an important performance trick for architectures without decent\n * unaligned memory access performance.\n *\n * It checks for input alignment, and when conditions are met, uses a \"fast\n * path\" employing direct 32-bit/64-bit reads, resulting in _dramatically\n * faster_ read speed.\n *\n * The check costs one initial branch per hash, which is generally negligible,\n * but not zero.\n *\n * Moreover, it's not useful to generate an additional code path if memory\n * access uses the same instruction for both aligned and unaligned\n * addresses (e.g. x86 and aarch64).\n *\n * In these cases, the alignment check can be removed by setting this macro to 0.\n * Then the code will always use unaligned memory access.\n * The alignment check is automatically disabled on x86, x64, ARM64, and some ARM chips,\n * which are platforms known to offer good unaligned memory access performance.\n *\n * It is also disabled by default when @ref XXH_SIZE_OPT >= 1.\n *\n * This option does not affect XXH3 (only XXH32 and XXH64).\n */\n#  define XXH_FORCE_ALIGN_CHECK 0\n\n/*!\n * @def XXH_NO_INLINE_HINTS\n * @brief When non-zero, sets all functions to `static`.\n *\n * By default, xxHash tries to force the compiler to inline almost all internal\n * functions.\n *\n * This can usually improve performance due to reduced jumping and improved\n * constant folding, but significantly increases the size of the binary which\n * might not be favorable.\n *\n * Additionally, sometimes the forced inlining can be detrimental to performance,\n * depending on the architecture.\n *\n * XXH_NO_INLINE_HINTS marks all internal functions as static, giving the\n * compiler full control over whether to inline or not.\n *\n * When not optimizing (-O0), 
using `-fno-inline` with GCC or Clang, or if\n * @ref XXH_SIZE_OPT >= 1, this will automatically be defined.\n */\n#  define XXH_NO_INLINE_HINTS 0\n\n/*!\n * @def XXH3_INLINE_SECRET\n * @brief Determines whether to inline the XXH3 withSecret code.\n *\n * When the secret size is known, the compiler can improve the performance\n * of XXH3_64bits_withSecret() and XXH3_128bits_withSecret().\n *\n * However, if the secret size is not known, it doesn't have any benefit. This\n * happens when xxHash is compiled into a global symbol. Therefore, if\n * @ref XXH_INLINE_ALL is *not* defined, this will be defined to 0.\n *\n * Additionally, this defaults to 0 on GCC 12+, which has an issue with function pointers\n * that are *sometimes* force-inlined on -Og, and it is impossible to automatically\n * detect this optimization level.\n */\n#  define XXH3_INLINE_SECRET 0\n\n/*!\n * @def XXH32_ENDJMP\n * @brief Whether to use a jump for `XXH32_finalize`.\n *\n * For performance, `XXH32_finalize` uses multiple branches in the finalizer.\n * This is generally preferable for performance,\n * but depending on the exact architecture, a jmp may be preferable.\n *\n * This setting only potentially makes a difference for very small inputs.\n */\n#  define XXH32_ENDJMP 0\n\n/*!\n * @internal\n * @brief Redefines old internal names.\n *\n * For compatibility with code that uses xxHash's internals before the names\n * were changed to improve namespacing. There is no other reason to use this.\n */\n#  define XXH_OLD_NAMES\n#  undef XXH_OLD_NAMES /* don't actually use, it is ugly. 
*/\n\n/*!\n * @def XXH_NO_STREAM\n * @brief Disables the streaming API.\n *\n * When xxHash is not inlined and the streaming functions are not used, disabling\n * the streaming functions can improve code size significantly, especially with\n * the @ref XXH3_family which tends to make constant folded copies of itself.\n */\n#  define XXH_NO_STREAM\n#  undef XXH_NO_STREAM /* don't actually */\n#endif /* XXH_DOXYGEN */\n/*!\n * @}\n */\n\n#ifndef XXH_FORCE_MEMORY_ACCESS   /* can be defined externally, on command line for example */\n   /* prefer __packed__ structures (method 1) for GCC\n    * < ARMv7 with unaligned access (e.g. Raspbian armhf) still uses byte shifting, so we use memcpy\n    * which for some reason does unaligned loads. */\n#  if defined(__GNUC__) && !(defined(__ARM_ARCH) && __ARM_ARCH < 7 && defined(__ARM_FEATURE_UNALIGNED))\n#    define XXH_FORCE_MEMORY_ACCESS 1\n#  endif\n#endif\n\n#ifndef XXH_SIZE_OPT\n   /* default to 1 for -Os or -Oz */\n#  if (defined(__GNUC__) || defined(__clang__)) && defined(__OPTIMIZE_SIZE__)\n#    define XXH_SIZE_OPT 1\n#  else\n#    define XXH_SIZE_OPT 0\n#  endif\n#endif\n\n#ifndef XXH_FORCE_ALIGN_CHECK  /* can be defined externally */\n   /* don't check on sizeopt, x86, aarch64, or arm when unaligned access is available */\n#  if XXH_SIZE_OPT >= 1 || \\\n      defined(__i386)  || defined(__x86_64__) || defined(__aarch64__) || defined(__ARM_FEATURE_UNALIGNED) \\\n   || defined(_M_IX86) || defined(_M_X64)     || defined(_M_ARM64)    || defined(_M_ARM) /* visual */\n#    define XXH_FORCE_ALIGN_CHECK 0\n#  else\n#    define XXH_FORCE_ALIGN_CHECK 1\n#  endif\n#endif\n\n#ifndef XXH_NO_INLINE_HINTS\n#  if XXH_SIZE_OPT >= 1 || defined(__NO_INLINE__)  /* -O0, -fno-inline */\n#    define XXH_NO_INLINE_HINTS 1\n#  else\n#    define XXH_NO_INLINE_HINTS 0\n#  endif\n#endif\n\n#ifndef XXH3_INLINE_SECRET\n#  if (defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 12) \\\n     || !defined(XXH_INLINE_ALL)\n#    define 
XXH3_INLINE_SECRET 0\n#  else\n#    define XXH3_INLINE_SECRET 1\n#  endif\n#endif\n\n#ifndef XXH32_ENDJMP\n/* generally preferable for performance */\n#  define XXH32_ENDJMP 0\n#endif\n\n/*!\n * @defgroup impl Implementation\n * @{\n */\n\n\n/* *************************************\n*  Includes & Memory related functions\n***************************************/\n#if defined(XXH_NO_STREAM)\n/* nothing */\n#elif defined(XXH_NO_STDLIB)\n\n/* When requesting to disable any mention of stdlib,\n * the library loses the ability to invoke malloc() / free().\n * In practice, it means that functions like `XXH*_createState()`\n * will always fail, and return NULL.\n * This flag is useful in situations where\n * xxhash.h is integrated into some kernel, embedded or limited environment\n * without access to dynamic allocation.\n */\n\nstatic XXH_CONSTF void* XXH_malloc(size_t s) { (void)s; return NULL; }\nstatic void XXH_free(void* p) { (void)p; }\n\n#else\n\n/*\n * Modify the local functions below should you wish to use\n * different memory routines for malloc() and free()\n */\n#include <stdlib.h>\n\n/*!\n * @internal\n * @brief Modify this function to use a different routine than malloc().\n */\nstatic XXH_MALLOCF void* XXH_malloc(size_t s) { return malloc(s); }\n\n/*!\n * @internal\n * @brief Modify this function to use a different routine than free().\n */\nstatic void XXH_free(void* p) { free(p); }\n\n#endif  /* XXH_NO_STDLIB */\n\n#ifndef XXH_memcpy\n/*!\n * @internal\n * @brief XXH_memcpy() macro can be redirected at compile time\n */\n#  include <string.h>\n#  define XXH_memcpy memcpy\n#endif\n\n#ifndef XXH_memset\n/*!\n * @internal\n * @brief XXH_memset() macro can be redirected at compile time\n */\n#  include <string.h>\n#  define XXH_memset memset\n#endif\n\n#ifndef XXH_memcmp\n/*!\n * @internal\n * @brief XXH_memcmp() macro can be redirected at compile time\n * Note: only needed by XXH128.\n */\n#  include <string.h>\n#  define XXH_memcmp 
memcmp\n#endif\n\n\n\n#include <limits.h>   /* ULLONG_MAX */\n\n\n/* *************************************\n*  Compiler Specific Options\n***************************************/\n#ifdef _MSC_VER /* Visual Studio warning fix */\n#  pragma warning(disable : 4127) /* disable: C4127: conditional expression is constant */\n#endif\n\n#if XXH_NO_INLINE_HINTS  /* disable inlining hints */\n#  if defined(__GNUC__) || defined(__clang__)\n#    define XXH_FORCE_INLINE static __attribute__((__unused__))\n#  else\n#    define XXH_FORCE_INLINE static\n#  endif\n#  define XXH_NO_INLINE static\n/* enable inlining hints */\n#elif defined(__GNUC__) || defined(__clang__)\n#  define XXH_FORCE_INLINE static __inline__ __attribute__((__always_inline__, __unused__))\n#  define XXH_NO_INLINE static __attribute__((__noinline__))\n#elif defined(_MSC_VER)  /* Visual Studio */\n#  define XXH_FORCE_INLINE static __forceinline\n#  define XXH_NO_INLINE static __declspec(noinline)\n#elif defined (__cplusplus) \\\n  || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L))   /* C99 */\n#  define XXH_FORCE_INLINE static inline\n#  define XXH_NO_INLINE static\n#else\n#  define XXH_FORCE_INLINE static\n#  define XXH_NO_INLINE static\n#endif\n\n#if defined(XXH_INLINE_ALL)\n#  define XXH_STATIC XXH_FORCE_INLINE\n#else\n#  define XXH_STATIC static\n#endif\n\n#if XXH3_INLINE_SECRET\n#  define XXH3_WITH_SECRET_INLINE XXH_FORCE_INLINE\n#else\n#  define XXH3_WITH_SECRET_INLINE XXH_NO_INLINE\n#endif\n\n#if ((defined(sun) || defined(__sun)) && __cplusplus) /* Solaris includes __STDC_VERSION__ with C++. 
Tested with GCC 5.5 */\n#  define XXH_RESTRICT   /* disable */\n#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L   /* >= C99 */\n#  define XXH_RESTRICT   restrict\n#elif (defined (__GNUC__) && ((__GNUC__ > 3) || (__GNUC__ == 3 && __GNUC_MINOR__ >= 1))) \\\n   || (defined (__clang__)) \\\n   || (defined (_MSC_VER) && (_MSC_VER >= 1400)) \\\n   || (defined (__INTEL_COMPILER) && (__INTEL_COMPILER >= 1300))\n/*\n * There are a LOT more compilers that recognize __restrict but this\n * covers the major ones.\n */\n#  define XXH_RESTRICT   __restrict\n#else\n#  define XXH_RESTRICT   /* disable */\n#endif\n\n/* *************************************\n*  Debug\n***************************************/\n/*!\n * @ingroup tuning\n * @def XXH_DEBUGLEVEL\n * @brief Sets the debugging level.\n *\n * XXH_DEBUGLEVEL is expected to be defined externally, typically via the\n * compiler's command line options. The value must be a number.\n */\n#ifndef XXH_DEBUGLEVEL\n#  ifdef DEBUGLEVEL /* backwards compat */\n#    define XXH_DEBUGLEVEL DEBUGLEVEL\n#  else\n#    define XXH_DEBUGLEVEL 0\n#  endif\n#endif\n\n#if (XXH_DEBUGLEVEL>=1)\n#  include <assert.h>   /* note: can still be disabled with NDEBUG */\n#  define XXH_ASSERT(c)   assert(c)\n#else\n#  if defined(__INTEL_COMPILER)\n#    define XXH_ASSERT(c)   XXH_ASSUME((unsigned char) (c))\n#  else\n#    define XXH_ASSERT(c)   XXH_ASSUME(c)\n#  endif\n#endif\n\n/* note: use after variable declarations */\n#ifndef XXH_STATIC_ASSERT\n#  if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)    /* C11 */\n#    define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { _Static_assert((c),m); } while(0)\n#  elif defined(__cplusplus) && (__cplusplus >= 201103L)            /* C++11 */\n#    define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { static_assert((c),m); } while(0)\n#  else\n#    define XXH_STATIC_ASSERT_WITH_MESSAGE(c,m) do { struct xxh_sa { char x[(c) ? 
1 : -1]; }; } while(0)\n#  endif\n#  define XXH_STATIC_ASSERT(c) XXH_STATIC_ASSERT_WITH_MESSAGE((c),#c)\n#endif\n\n/*!\n * @internal\n * @def XXH_COMPILER_GUARD(var)\n * @brief Used to prevent unwanted optimizations for @p var.\n *\n * It uses an empty GCC inline assembly statement with a register constraint\n * which forces @p var into a general purpose register (eg eax, ebx, ecx\n * on x86) and marks it as modified.\n *\n * This is used in a few places to avoid unwanted autovectorization (e.g.\n * XXH32_round()). All vectorization we want is explicit via intrinsics,\n * and _usually_ isn't wanted elsewhere.\n *\n * We also use it to prevent unwanted constant folding for AArch64 in\n * XXH3_initCustomSecret_scalar().\n */\n#if defined(__GNUC__) || defined(__clang__)\n#  define XXH_COMPILER_GUARD(var) __asm__(\"\" : \"+r\" (var))\n#else\n#  define XXH_COMPILER_GUARD(var) ((void)0)\n#endif\n\n/* Specifically for NEON vectors which use the \"w\" constraint, on\n * Clang. */\n#if defined(__clang__) && defined(__ARM_ARCH) && !defined(__wasm__)\n#  define XXH_COMPILER_GUARD_CLANG_NEON(var) __asm__(\"\" : \"+w\" (var))\n#else\n#  define XXH_COMPILER_GUARD_CLANG_NEON(var) ((void)0)\n#endif\n\n/* *************************************\n*  Basic Types\n***************************************/\n#if !defined (__VMS) \\\n && (defined (__cplusplus) \\\n || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )\n#   ifdef _AIX\n#     include <inttypes.h>\n#   else\n#     include <stdint.h>\n#   endif\n    typedef uint8_t xxh_u8;\n#else\n    typedef unsigned char xxh_u8;\n#endif\ntypedef XXH32_hash_t xxh_u32;\n\n#ifdef XXH_OLD_NAMES\n#  warning \"XXH_OLD_NAMES is planned to be removed starting v0.9. 
If the program depends on it, consider moving away from it by employing newer type names directly\"\n#  define BYTE xxh_u8\n#  define U8   xxh_u8\n#  define U32  xxh_u32\n#endif\n\n/* ***   Memory access   *** */\n\n/*!\n * @internal\n * @fn xxh_u32 XXH_read32(const void* ptr)\n * @brief Reads an unaligned 32-bit integer from @p ptr in native endianness.\n *\n * Affected by @ref XXH_FORCE_MEMORY_ACCESS.\n *\n * @param ptr The pointer to read from.\n * @return The 32-bit native endian integer from the bytes at @p ptr.\n */\n\n/*!\n * @internal\n * @fn xxh_u32 XXH_readLE32(const void* ptr)\n * @brief Reads an unaligned 32-bit little endian integer from @p ptr.\n *\n * Affected by @ref XXH_FORCE_MEMORY_ACCESS.\n *\n * @param ptr The pointer to read from.\n * @return The 32-bit little endian integer from the bytes at @p ptr.\n */\n\n/*!\n * @internal\n * @fn xxh_u32 XXH_readBE32(const void* ptr)\n * @brief Reads an unaligned 32-bit big endian integer from @p ptr.\n *\n * Affected by @ref XXH_FORCE_MEMORY_ACCESS.\n *\n * @param ptr The pointer to read from.\n * @return The 32-bit big endian integer from the bytes at @p ptr.\n */\n\n/*!\n * @internal\n * @fn xxh_u32 XXH_readLE32_align(const void* ptr, XXH_alignment align)\n * @brief Like @ref XXH_readLE32(), but has an option for aligned reads.\n *\n * Affected by @ref XXH_FORCE_MEMORY_ACCESS.\n * Note that when @ref XXH_FORCE_ALIGN_CHECK == 0, the @p align parameter is\n * always @ref XXH_alignment::XXH_unaligned.\n *\n * @param ptr The pointer to read from.\n * @param align Whether @p ptr is aligned.\n * @pre\n *   If @p align == @ref XXH_alignment::XXH_aligned, @p ptr must be 4 byte\n *   aligned.\n * @return The 32-bit little endian integer from the bytes at @p ptr.\n */\n\n#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))\n/*\n * Manual byteshift. 
Best for old compilers which don't inline memcpy.\n * We actually directly use XXH_readLE32 and XXH_readBE32.\n */\n#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))\n\n/*\n * Force direct memory access. Only works on CPUs which support unaligned memory\n * access in hardware.\n */\nstatic xxh_u32 XXH_read32(const void* memPtr) { return *(const xxh_u32*) memPtr; }\n\n#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))\n\n/*\n * __attribute__((aligned(1))) is supported by gcc and clang. Originally the\n * documentation claimed that it only increased the alignment, but actually it\n * can decrease it on gcc, clang, and icc:\n * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69502,\n * https://gcc.godbolt.org/z/xYez1j67Y.\n */\n#ifdef XXH_OLD_NAMES\ntypedef union { xxh_u32 u32; } __attribute__((__packed__)) unalign;\n#endif\nstatic xxh_u32 XXH_read32(const void* ptr)\n{\n    typedef __attribute__((__aligned__(1))) __attribute__((__may_alias__)) xxh_u32 xxh_unalign32;\n    return *((const xxh_unalign32*)ptr);\n}\n\n#else\n\n/*\n * Portable and safe solution. 
Generally efficient.\n * see: https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html\n */\nstatic xxh_u32 XXH_read32(const void* memPtr)\n{\n    xxh_u32 val;\n    XXH_memcpy(&val, memPtr, sizeof(val));\n    return val;\n}\n\n#endif   /* XXH_FORCE_DIRECT_MEMORY_ACCESS */\n\n\n/* ***   Endianness   *** */\n\n/*!\n * @ingroup tuning\n * @def XXH_CPU_LITTLE_ENDIAN\n * @brief Whether the target is little endian.\n *\n * Defined to 1 if the target is little endian, or 0 if it is big endian.\n * It can be defined externally, for example on the compiler command line.\n *\n * If it is not defined,\n * a runtime check (which is usually constant folded) is used instead.\n *\n * @note\n *   This is not necessarily defined to an integer constant.\n *\n * @see XXH_isLittleEndian() for the runtime check.\n */\n#ifndef XXH_CPU_LITTLE_ENDIAN\n/*\n * Try to detect endianness automatically, to avoid the nonstandard behavior\n * in `XXH_isLittleEndian()`\n */\n#  if defined(_WIN32) /* Windows is always little endian */ \\\n     || defined(__LITTLE_ENDIAN__) \\\n     || (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)\n#    define XXH_CPU_LITTLE_ENDIAN 1\n#  elif defined(__BIG_ENDIAN__) \\\n     || (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)\n#    define XXH_CPU_LITTLE_ENDIAN 0\n#  else\n/*!\n * @internal\n * @brief Runtime check for @ref XXH_CPU_LITTLE_ENDIAN.\n *\n * Most compilers will constant fold this.\n */\nstatic int XXH_isLittleEndian(void)\n{\n    /*\n     * Portable and well-defined behavior.\n     * Don't use static: it is detrimental to performance.\n     */\n    const union { xxh_u32 u; xxh_u8 c[4]; } one = { 1 };\n    return one.c[0];\n}\n#   define XXH_CPU_LITTLE_ENDIAN   XXH_isLittleEndian()\n#  endif\n#endif\n\n\n\n\n/* ****************************************\n*  Compiler-specific Functions and Macros\n******************************************/\n#define XXH_GCC_VERSION (__GNUC__ * 100 + 
__GNUC_MINOR__)\n\n#ifdef __has_builtin\n#  define XXH_HAS_BUILTIN(x) __has_builtin(x)\n#else\n#  define XXH_HAS_BUILTIN(x) 0\n#endif\n\n\n\n/*\n * C23 and future versions have standard \"unreachable()\".\n * Once it has been implemented reliably we can add it as an\n * additional case:\n *\n * ```\n * #if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 202311L)\n * #  include <stddef.h>\n * #  ifdef unreachable\n * #    define XXH_UNREACHABLE() unreachable()\n * #  endif\n * #endif\n * ```\n *\n * Note C++23 also has std::unreachable() which can be detected\n * as follows:\n * ```\n * #if defined(__cpp_lib_unreachable) && (__cpp_lib_unreachable >= 202202L)\n * #  include <utility>\n * #  define XXH_UNREACHABLE() std::unreachable()\n * #endif\n * ```\n * NB: `__cpp_lib_unreachable` is defined in the `<version>` header.\n * We don't use that as including `<utility>` in `extern \"C\"` blocks\n * doesn't work on GCC12\n */\n\n#if XXH_HAS_BUILTIN(__builtin_unreachable)\n#  define XXH_UNREACHABLE() __builtin_unreachable()\n\n#elif defined(_MSC_VER)\n#  define XXH_UNREACHABLE() __assume(0)\n\n#else\n#  define XXH_UNREACHABLE()\n#endif\n\n#if XXH_HAS_BUILTIN(__builtin_assume)\n#  define XXH_ASSUME(c) __builtin_assume(c)\n#else\n#  define XXH_ASSUME(c) if (!(c)) { XXH_UNREACHABLE(); }\n#endif\n\n/*!\n * @internal\n * @def XXH_rotl32(x,r)\n * @brief 32-bit rotate left.\n *\n * @param x The 32-bit integer to be rotated.\n * @param r The number of bits to rotate.\n * @pre\n *   @p r > 0 && @p r < 32\n * @note\n *   @p x and @p r may be evaluated multiple times.\n * @return The rotated result.\n */\n#if !defined(NO_CLANG_BUILTIN) && XXH_HAS_BUILTIN(__builtin_rotateleft32) \\\n                               && XXH_HAS_BUILTIN(__builtin_rotateleft64)\n#  define XXH_rotl32 __builtin_rotateleft32\n#  define XXH_rotl64 __builtin_rotateleft64\n#elif XXH_HAS_BUILTIN(__builtin_stdc_rotate_left)\n#  define XXH_rotl32 __builtin_stdc_rotate_left\n#  define XXH_rotl64 
__builtin_stdc_rotate_left\n/* Note: although _rotl exists for minGW (GCC under windows), performance seems poor */\n#elif defined(_MSC_VER)\n#  define XXH_rotl32(x,r) _rotl(x,r)\n#  define XXH_rotl64(x,r) _rotl64(x,r)\n#else\n#  define XXH_rotl32(x,r) (((x) << (r)) | ((x) >> (32 - (r))))\n#  define XXH_rotl64(x,r) (((x) << (r)) | ((x) >> (64 - (r))))\n#endif\n\n/*!\n * @internal\n * @fn xxh_u32 XXH_swap32(xxh_u32 x)\n * @brief A 32-bit byteswap.\n *\n * @param x The 32-bit integer to byteswap.\n * @return @p x, byteswapped.\n */\n#if defined(_MSC_VER)     /* Visual Studio */\n#  define XXH_swap32 _byteswap_ulong\n#elif XXH_GCC_VERSION >= 403\n#  define XXH_swap32 __builtin_bswap32\n#else\nstatic xxh_u32 XXH_swap32 (xxh_u32 x)\n{\n    return  ((x << 24) & 0xff000000 ) |\n            ((x <<  8) & 0x00ff0000 ) |\n            ((x >>  8) & 0x0000ff00 ) |\n            ((x >> 24) & 0x000000ff );\n}\n#endif\n\n\n/* ***************************\n*  Memory reads\n*****************************/\n\n/*!\n * @internal\n * @brief Enum to indicate whether a pointer is aligned.\n */\ntypedef enum {\n    XXH_aligned,  /*!< Aligned */\n    XXH_unaligned /*!< Possibly unaligned */\n} XXH_alignment;\n\n/*\n * XXH_FORCE_MEMORY_ACCESS==3 is an endian-independent byteshift load.\n *\n * This is ideal for older compilers which don't inline memcpy.\n */\n#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))\n\nXXH_FORCE_INLINE xxh_u32 XXH_readLE32(const void* memPtr)\n{\n    const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;\n    return bytePtr[0]\n         | ((xxh_u32)bytePtr[1] << 8)\n         | ((xxh_u32)bytePtr[2] << 16)\n         | ((xxh_u32)bytePtr[3] << 24);\n}\n\nXXH_FORCE_INLINE xxh_u32 XXH_readBE32(const void* memPtr)\n{\n    const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;\n    return bytePtr[3]\n         | ((xxh_u32)bytePtr[2] << 8)\n         | ((xxh_u32)bytePtr[1] << 16)\n         | ((xxh_u32)bytePtr[0] << 24);\n}\n\n#else\nXXH_FORCE_INLINE xxh_u32 
XXH_readLE32(const void* ptr)\n{\n    return XXH_CPU_LITTLE_ENDIAN ? XXH_read32(ptr) : XXH_swap32(XXH_read32(ptr));\n}\n\nstatic xxh_u32 XXH_readBE32(const void* ptr)\n{\n    return XXH_CPU_LITTLE_ENDIAN ? XXH_swap32(XXH_read32(ptr)) : XXH_read32(ptr);\n}\n#endif\n\nXXH_FORCE_INLINE xxh_u32\nXXH_readLE32_align(const void* ptr, XXH_alignment align)\n{\n    if (align==XXH_unaligned) {\n        return XXH_readLE32(ptr);\n    } else {\n        return XXH_CPU_LITTLE_ENDIAN ? *(const xxh_u32*)ptr : XXH_swap32(*(const xxh_u32*)ptr);\n    }\n}\n\n\n/* *************************************\n*  Misc\n***************************************/\n/*! @ingroup public */\nXXH_PUBLIC_API unsigned XXH_versionNumber (void) { return XXH_VERSION_NUMBER; }\n\n\n/* *******************************************************************\n*  32-bit hash functions\n*********************************************************************/\n/*!\n * @}\n * @defgroup XXH32_impl XXH32 implementation\n * @ingroup impl\n *\n * Details on the XXH32 implementation.\n * @{\n */\n /* #define instead of static const, to be used as initializers */\n#define XXH_PRIME32_1  0x9E3779B1U  /*!< 0b10011110001101110111100110110001 */\n#define XXH_PRIME32_2  0x85EBCA77U  /*!< 0b10000101111010111100101001110111 */\n#define XXH_PRIME32_3  0xC2B2AE3DU  /*!< 0b11000010101100101010111000111101 */\n#define XXH_PRIME32_4  0x27D4EB2FU  /*!< 0b00100111110101001110101100101111 */\n#define XXH_PRIME32_5  0x165667B1U  /*!< 0b00010110010101100110011110110001 */\n\n#ifdef XXH_OLD_NAMES\n#  define PRIME32_1 XXH_PRIME32_1\n#  define PRIME32_2 XXH_PRIME32_2\n#  define PRIME32_3 XXH_PRIME32_3\n#  define PRIME32_4 XXH_PRIME32_4\n#  define PRIME32_5 XXH_PRIME32_5\n#endif\n\n/*!\n * @internal\n * @brief Normal stripe processing routine.\n *\n * This shuffles the bits so that any bit from @p input impacts several bits in\n * @p acc.\n *\n * @param acc The accumulator lane.\n * @param input The stripe of input to mix.\n * @return The mixed 
accumulator lane.\n */\nstatic xxh_u32 XXH32_round(xxh_u32 acc, xxh_u32 input)\n{\n    acc += input * XXH_PRIME32_2;\n    acc  = XXH_rotl32(acc, 13);\n    acc *= XXH_PRIME32_1;\n#if (defined(__SSE4_1__) || defined(__aarch64__) || defined(__wasm_simd128__)) && !defined(XXH_ENABLE_AUTOVECTORIZE)\n    /*\n     * UGLY HACK:\n     * A compiler fence is used to prevent GCC and Clang from\n     * autovectorizing the XXH32 loop (pragmas and attributes don't work for some\n     * reason) without globally disabling SSE4.1.\n     *\n     * The reason we want to avoid vectorization is because despite working on\n     * 4 integers at a time, there are multiple factors slowing XXH32 down on\n     * SSE4:\n     * - There's a ridiculous amount of lag from pmulld (10 cycles of latency on\n     *   newer chips!) making it slightly slower to multiply four integers at\n     *   once compared to four integers independently. Even when pmulld was\n     *   fastest, on Sandy/Ivy Bridge, it is still not worth it to go into SSE\n     *   just to multiply unless doing a long operation.\n     *\n     * - Four instructions are required to rotate,\n     *      movdqa tmp,  v // not required with VEX encoding\n     *      pslld  tmp, 13 // tmp <<= 13\n     *      psrld  v,   19 // x >>= 19\n     *      por    v,  tmp // x |= tmp\n     *   compared to one for scalar:\n     *      roll   v, 13    // reliably fast across the board\n     *      shldl  v, v, 13 // Sandy Bridge and later prefer this for some reason\n     *\n     * - Instruction level parallelism is actually more beneficial here because\n     *   the SIMD actually serializes this operation: While v1 is rotating, v2\n     *   can load data, while v3 can multiply. SSE forces them to operate\n     *   together.\n     *\n     * This is also enabled on AArch64, as Clang is *very aggressive* in vectorizing\n     * the loop. 
NEON is only faster on the A53, and with the newer cores, it is less\n     * than half the speed.\n     *\n     * Additionally, this is used on WASM SIMD128 because it JITs to the same\n     * SIMD instructions and has the same issue.\n     */\n    XXH_COMPILER_GUARD(acc);\n#endif\n    return acc;\n}\n\n/*!\n * @internal\n * @brief Mixes all bits to finalize the hash.\n *\n * The final mix ensures that all input bits have a chance to impact any bit in\n * the output digest, resulting in an unbiased distribution.\n *\n * @param hash The hash to avalanche.\n * @return The avalanched hash.\n */\nstatic xxh_u32 XXH32_avalanche(xxh_u32 hash)\n{\n    hash ^= hash >> 15;\n    hash *= XXH_PRIME32_2;\n    hash ^= hash >> 13;\n    hash *= XXH_PRIME32_3;\n    hash ^= hash >> 16;\n    return hash;\n}\n\n#define XXH_get32bits(p) XXH_readLE32_align(p, align)\n\n/*!\n * @internal\n * @brief Sets up the initial accumulator state for XXH32().\n */\nXXH_FORCE_INLINE void\nXXH32_initAccs(xxh_u32 *acc, xxh_u32 seed)\n{\n    XXH_ASSERT(acc != NULL);\n    acc[0] = seed + XXH_PRIME32_1 + XXH_PRIME32_2;\n    acc[1] = seed + XXH_PRIME32_2;\n    acc[2] = seed + 0;\n    acc[3] = seed - XXH_PRIME32_1;\n}\n\n/*!\n * @internal\n * @brief Consumes a block of data for XXH32().\n *\n * @return the end input pointer.\n */\nXXH_FORCE_INLINE const xxh_u8 *\nXXH32_consumeLong(\n    xxh_u32 *XXH_RESTRICT acc,\n    xxh_u8 const *XXH_RESTRICT input,\n    size_t len,\n    XXH_alignment align\n)\n{\n    const xxh_u8* const bEnd = input + len;\n    const xxh_u8* const limit = bEnd - 15;\n    XXH_ASSERT(acc != NULL);\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(len >= 16);\n    do {\n        acc[0] = XXH32_round(acc[0], XXH_get32bits(input)); input += 4;\n        acc[1] = XXH32_round(acc[1], XXH_get32bits(input)); input += 4;\n        acc[2] = XXH32_round(acc[2], XXH_get32bits(input)); input += 4;\n        acc[3] = XXH32_round(acc[3], XXH_get32bits(input)); input += 4;\n    } while (input < limit);\n\n    
return input;\n}\n\n/*!\n * @internal\n * @brief Merges the accumulator lanes together for XXH32()\n */\nXXH_FORCE_INLINE XXH_PUREF xxh_u32\nXXH32_mergeAccs(const xxh_u32 *acc)\n{\n    XXH_ASSERT(acc != NULL);\n    return XXH_rotl32(acc[0], 1)  + XXH_rotl32(acc[1], 7)\n         + XXH_rotl32(acc[2], 12) + XXH_rotl32(acc[3], 18);\n}\n\n/*!\n * @internal\n * @brief Processes the last 0-15 bytes of @p ptr.\n *\n * There may be up to 15 bytes remaining to consume from the input.\n * This final stage will digest them to ensure that all input bytes are present\n * in the final mix.\n *\n * @param hash The hash to finalize.\n * @param ptr The pointer to the remaining input.\n * @param len The remaining length, modulo 16.\n * @param align Whether @p ptr is aligned.\n * @return The finalized hash.\n * @see XXH64_finalize().\n */\nstatic XXH_PUREF xxh_u32\nXXH32_finalize(xxh_u32 hash, const xxh_u8* ptr, size_t len, XXH_alignment align)\n{\n#define XXH_PROCESS1 do {                             \\\n    hash += (*ptr++) * XXH_PRIME32_5;                 \\\n    hash = XXH_rotl32(hash, 11) * XXH_PRIME32_1;      \\\n} while (0)\n\n#define XXH_PROCESS4 do {                             \\\n    hash += XXH_get32bits(ptr) * XXH_PRIME32_3;       \\\n    ptr += 4;                                         \\\n    hash  = XXH_rotl32(hash, 17) * XXH_PRIME32_4;     \\\n} while (0)\n\n    if (ptr==NULL) XXH_ASSERT(len == 0);\n\n    /* Compact rerolled version; generally faster */\n    if (!XXH32_ENDJMP) {\n        len &= 15;\n        while (len >= 4) {\n            XXH_PROCESS4;\n            len -= 4;\n        }\n        while (len > 0) {\n            XXH_PROCESS1;\n            --len;\n        }\n        return XXH32_avalanche(hash);\n    } else {\n         switch(len&15) /* or switch(bEnd - p) */ {\n           case 12:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 8:       XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* 
fallthrough */\n           case 4:       XXH_PROCESS4;\n                         return XXH32_avalanche(hash);\n\n           case 13:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 9:       XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 5:       XXH_PROCESS4;\n                         XXH_PROCESS1;\n                         return XXH32_avalanche(hash);\n\n           case 14:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 10:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 6:       XXH_PROCESS4;\n                         XXH_PROCESS1;\n                         XXH_PROCESS1;\n                         return XXH32_avalanche(hash);\n\n           case 15:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 11:      XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 7:       XXH_PROCESS4;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 3:       XXH_PROCESS1;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 2:       XXH_PROCESS1;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 1:       XXH_PROCESS1;\n                         XXH_FALLTHROUGH;  /* fallthrough */\n           case 0:       return XXH32_avalanche(hash);\n        }\n        XXH_ASSERT(0);\n        return hash;   /* reaching this point is deemed impossible */\n    }\n}\n\n#ifdef XXH_OLD_NAMES\n#  define PROCESS1 XXH_PROCESS1\n#  define PROCESS4 XXH_PROCESS4\n#else\n#  undef XXH_PROCESS1\n#  undef XXH_PROCESS4\n#endif\n\n/*!\n * @internal\n * @brief The implementation for @ref XXH32().\n *\n * @param input , len , seed Directly passed from @ref XXH32().\n * @param align Whether @p input is aligned.\n * @return The calculated hash.\n 
*/\nXXH_FORCE_INLINE XXH_PUREF xxh_u32\nXXH32_endian_align(const xxh_u8* input, size_t len, xxh_u32 seed, XXH_alignment align)\n{\n    xxh_u32 h32;\n\n    if (input==NULL) XXH_ASSERT(len == 0);\n\n    if (len>=16) {\n        xxh_u32 acc[4];\n        XXH32_initAccs(acc, seed);\n\n        input = XXH32_consumeLong(acc, input, len, align);\n\n        h32 = XXH32_mergeAccs(acc);\n    } else {\n        h32  = seed + XXH_PRIME32_5;\n    }\n\n    h32 += (xxh_u32)len;\n\n    return XXH32_finalize(h32, input, len&15, align);\n}\n\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH32_hash_t XXH32 (const void* input, size_t len, XXH32_hash_t seed)\n{\n#if !defined(XXH_NO_STREAM) && XXH_SIZE_OPT >= 2\n    /* Simple version, good for code maintenance, but unfortunately slow for small inputs */\n    XXH32_state_t state;\n    XXH32_reset(&state, seed);\n    XXH32_update(&state, (const xxh_u8*)input, len);\n    return XXH32_digest(&state);\n#else\n    if (XXH_FORCE_ALIGN_CHECK) {\n        if ((((size_t)input) & 3) == 0) {   /* Input is 4-bytes aligned, leverage the speed benefit */\n            return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_aligned);\n    }   }\n\n    return XXH32_endian_align((const xxh_u8*)input, len, seed, XXH_unaligned);\n#endif\n}\n\n\n\n/*******   Hash streaming   *******/\n#ifndef XXH_NO_STREAM\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH32_state_t* XXH32_createState(void)\n{\n    return (XXH32_state_t*)XXH_malloc(sizeof(XXH32_state_t));\n}\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH_errorcode XXH32_freeState(XXH32_state_t* statePtr)\n{\n    XXH_free(statePtr);\n    return XXH_OK;\n}\n\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API void XXH32_copyState(XXH32_state_t* dstState, const XXH32_state_t* srcState)\n{\n    XXH_memcpy(dstState, srcState, sizeof(*dstState));\n}\n\n/*! 
@ingroup XXH32_family */\nXXH_PUBLIC_API XXH_errorcode XXH32_reset(XXH32_state_t* statePtr, XXH32_hash_t seed)\n{\n    XXH_ASSERT(statePtr != NULL);\n    XXH_memset(statePtr, 0, sizeof(*statePtr));\n    XXH32_initAccs(statePtr->acc, seed);\n    return XXH_OK;\n}\n\n\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH32_update(XXH32_state_t* state, const void* input, size_t len)\n{\n    if (input==NULL) {\n        XXH_ASSERT(len == 0);\n        return XXH_OK;\n    }\n\n    state->total_len_32 += (XXH32_hash_t)len;\n    state->large_len |= (XXH32_hash_t)((len>=16) | (state->total_len_32>=16));\n\n    XXH_ASSERT(state->bufferedSize < sizeof(state->buffer));\n    if (len < sizeof(state->buffer) - state->bufferedSize)  {   /* fill in tmp buffer */\n        XXH_memcpy(state->buffer + state->bufferedSize, input, len);\n        state->bufferedSize += (XXH32_hash_t)len;\n        return XXH_OK;\n    }\n\n    {   const xxh_u8* xinput = (const xxh_u8*)input;\n        const xxh_u8* const bEnd = xinput + len;\n\n        if (state->bufferedSize) {   /* non-empty buffer: complete first */\n            XXH_memcpy(state->buffer + state->bufferedSize, xinput, sizeof(state->buffer) - state->bufferedSize);\n            xinput += sizeof(state->buffer) - state->bufferedSize;\n            /* then process one round */\n            (void)XXH32_consumeLong(state->acc, state->buffer, sizeof(state->buffer), XXH_aligned);\n            state->bufferedSize = 0;\n        }\n\n        XXH_ASSERT(xinput <= bEnd);\n        if ((size_t)(bEnd - xinput) >= sizeof(state->buffer)) {\n            /* Process the remaining data */\n            xinput = XXH32_consumeLong(state->acc, xinput, (size_t)(bEnd - xinput), XXH_unaligned);\n        }\n\n        if (xinput < bEnd) {\n            /* Copy the leftover to the tmp buffer */\n            XXH_memcpy(state->buffer, xinput, (size_t)(bEnd-xinput));\n            state->bufferedSize = (unsigned)(bEnd-xinput);\n        }\n    }\n\n    return 
XXH_OK;\n}\n\n\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH32_hash_t XXH32_digest(const XXH32_state_t* state)\n{\n    xxh_u32 h32;\n\n    if (state->large_len) {\n        h32 = XXH32_mergeAccs(state->acc);\n    } else {\n        h32 = state->acc[2] /* == seed */ + XXH_PRIME32_5;\n    }\n\n    h32 += state->total_len_32;\n\n    return XXH32_finalize(h32, state->buffer, state->bufferedSize, XXH_aligned);\n}\n#endif /* !XXH_NO_STREAM */\n\n/*******   Canonical representation   *******/\n\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API void XXH32_canonicalFromHash(XXH32_canonical_t* dst, XXH32_hash_t hash)\n{\n    XXH_STATIC_ASSERT(sizeof(XXH32_canonical_t) == sizeof(XXH32_hash_t));\n    if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap32(hash);\n    XXH_memcpy(dst, &hash, sizeof(*dst));\n}\n/*! @ingroup XXH32_family */\nXXH_PUBLIC_API XXH32_hash_t XXH32_hashFromCanonical(const XXH32_canonical_t* src)\n{\n    return XXH_readBE32(src);\n}\n\n\n#ifndef XXH_NO_LONG_LONG\n\n/* *******************************************************************\n*  64-bit hash functions\n*********************************************************************/\n/*!\n * @}\n * @ingroup impl\n * @{\n */\n/*******   Memory access   *******/\n\ntypedef XXH64_hash_t xxh_u64;\n\n#ifdef XXH_OLD_NAMES\n#  define U64 xxh_u64\n#endif\n\n#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))\n/*\n * Manual byteshift. Best for old compilers which don't inline memcpy.\n * We actually directly use XXH_readLE64 and XXH_readBE64.\n */\n#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==2))\n\n/* Force direct memory access. Only works on CPUs which support unaligned memory access in hardware */\nstatic xxh_u64 XXH_read64(const void* memPtr)\n{\n    return *(const xxh_u64*) memPtr;\n}\n\n#elif (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==1))\n\n/*\n * __attribute__((aligned(1))) is supported by gcc and clang. 
Originally the\n * documentation claimed that it only increased the alignment, but actually it\n * can decrease it on gcc, clang, and icc:\n * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69502,\n * https://gcc.godbolt.org/z/xYez1j67Y.\n */\n#ifdef XXH_OLD_NAMES\ntypedef union { xxh_u32 u32; xxh_u64 u64; } __attribute__((__packed__)) unalign64;\n#endif\nstatic xxh_u64 XXH_read64(const void* ptr)\n{\n    typedef __attribute__((__aligned__(1))) __attribute__((__may_alias__)) xxh_u64 xxh_unalign64;\n    return *((const xxh_unalign64*)ptr);\n}\n\n#else\n\n/*\n * Portable and safe solution. Generally efficient.\n * see: https://fastcompression.blogspot.com/2015/08/accessing-unaligned-memory.html\n */\nstatic xxh_u64 XXH_read64(const void* memPtr)\n{\n    xxh_u64 val;\n    XXH_memcpy(&val, memPtr, sizeof(val));\n    return val;\n}\n\n#endif   /* XXH_FORCE_DIRECT_MEMORY_ACCESS */\n\n#if defined(_MSC_VER)     /* Visual Studio */\n#  define XXH_swap64 _byteswap_uint64\n#elif XXH_GCC_VERSION >= 403\n#  define XXH_swap64 __builtin_bswap64\n#else\nstatic xxh_u64 XXH_swap64(xxh_u64 x)\n{\n    return  ((x << 56) & 0xff00000000000000ULL) |\n            ((x << 40) & 0x00ff000000000000ULL) |\n            ((x << 24) & 0x0000ff0000000000ULL) |\n            ((x << 8)  & 0x000000ff00000000ULL) |\n            ((x >> 8)  & 0x00000000ff000000ULL) |\n            ((x >> 24) & 0x0000000000ff0000ULL) |\n            ((x >> 40) & 0x000000000000ff00ULL) |\n            ((x >> 56) & 0x00000000000000ffULL);\n}\n#endif\n\n\n/* XXH_FORCE_MEMORY_ACCESS==3 is an endian-independent byteshift load. 
*/\n#if (defined(XXH_FORCE_MEMORY_ACCESS) && (XXH_FORCE_MEMORY_ACCESS==3))\n\nXXH_FORCE_INLINE xxh_u64 XXH_readLE64(const void* memPtr)\n{\n    const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;\n    return bytePtr[0]\n         | ((xxh_u64)bytePtr[1] << 8)\n         | ((xxh_u64)bytePtr[2] << 16)\n         | ((xxh_u64)bytePtr[3] << 24)\n         | ((xxh_u64)bytePtr[4] << 32)\n         | ((xxh_u64)bytePtr[5] << 40)\n         | ((xxh_u64)bytePtr[6] << 48)\n         | ((xxh_u64)bytePtr[7] << 56);\n}\n\nXXH_FORCE_INLINE xxh_u64 XXH_readBE64(const void* memPtr)\n{\n    const xxh_u8* bytePtr = (const xxh_u8 *)memPtr;\n    return bytePtr[7]\n         | ((xxh_u64)bytePtr[6] << 8)\n         | ((xxh_u64)bytePtr[5] << 16)\n         | ((xxh_u64)bytePtr[4] << 24)\n         | ((xxh_u64)bytePtr[3] << 32)\n         | ((xxh_u64)bytePtr[2] << 40)\n         | ((xxh_u64)bytePtr[1] << 48)\n         | ((xxh_u64)bytePtr[0] << 56);\n}\n\n#else\nXXH_FORCE_INLINE xxh_u64 XXH_readLE64(const void* ptr)\n{\n    return XXH_CPU_LITTLE_ENDIAN ? XXH_read64(ptr) : XXH_swap64(XXH_read64(ptr));\n}\n\nstatic xxh_u64 XXH_readBE64(const void* ptr)\n{\n    return XXH_CPU_LITTLE_ENDIAN ? XXH_swap64(XXH_read64(ptr)) : XXH_read64(ptr);\n}\n#endif\n\nXXH_FORCE_INLINE xxh_u64\nXXH_readLE64_align(const void* ptr, XXH_alignment align)\n{\n    if (align==XXH_unaligned)\n        return XXH_readLE64(ptr);\n    else\n        return XXH_CPU_LITTLE_ENDIAN ? 
*(const xxh_u64*)ptr : XXH_swap64(*(const xxh_u64*)ptr);\n}\n\n\n/*******   xxh64   *******/\n/*!\n * @}\n * @defgroup XXH64_impl XXH64 implementation\n * @ingroup impl\n *\n * Details on the XXH64 implementation.\n * @{\n */\n/* #define rather than static const, to be used as initializers */\n#define XXH_PRIME64_1  0x9E3779B185EBCA87ULL  /*!< 0b1001111000110111011110011011000110000101111010111100101010000111 */\n#define XXH_PRIME64_2  0xC2B2AE3D27D4EB4FULL  /*!< 0b1100001010110010101011100011110100100111110101001110101101001111 */\n#define XXH_PRIME64_3  0x165667B19E3779F9ULL  /*!< 0b0001011001010110011001111011000110011110001101110111100111111001 */\n#define XXH_PRIME64_4  0x85EBCA77C2B2AE63ULL  /*!< 0b1000010111101011110010100111011111000010101100101010111001100011 */\n#define XXH_PRIME64_5  0x27D4EB2F165667C5ULL  /*!< 0b0010011111010100111010110010111100010110010101100110011111000101 */\n\n#ifdef XXH_OLD_NAMES\n#  define PRIME64_1 XXH_PRIME64_1\n#  define PRIME64_2 XXH_PRIME64_2\n#  define PRIME64_3 XXH_PRIME64_3\n#  define PRIME64_4 XXH_PRIME64_4\n#  define PRIME64_5 XXH_PRIME64_5\n#endif\n\n/*! 
@copydoc XXH32_round */\nstatic xxh_u64 XXH64_round(xxh_u64 acc, xxh_u64 input)\n{\n    acc += input * XXH_PRIME64_2;\n    acc  = XXH_rotl64(acc, 31);\n    acc *= XXH_PRIME64_1;\n#if (defined(__AVX512F__)) && !defined(XXH_ENABLE_AUTOVECTORIZE)\n    /*\n     * DISABLE AUTOVECTORIZATION:\n     * A compiler fence is used to prevent GCC and Clang from\n     * autovectorizing the XXH64 loop (pragmas and attributes don't work for some\n     * reason) without globally disabling AVX512.\n     *\n     * Autovectorization of XXH64 tends to be detrimental,\n     * though the exact outcome may vary with the exact CPU and compiler version.\n     * For information, it has been reported as detrimental for Skylake-X,\n     * but possibly beneficial for Zen4.\n     *\n     * The default is to disable auto-vectorization,\n     * but it can be enabled by defining the `XXH_ENABLE_AUTOVECTORIZE` build macro.\n     */\n    XXH_COMPILER_GUARD(acc);\n#endif\n    return acc;\n}\n\nstatic xxh_u64 XXH64_mergeRound(xxh_u64 acc, xxh_u64 val)\n{\n    val  = XXH64_round(0, val);\n    acc ^= val;\n    acc  = acc * XXH_PRIME64_1 + XXH_PRIME64_4;\n    return acc;\n}\n\n/*! 
@copydoc XXH32_avalanche */\nstatic xxh_u64 XXH64_avalanche(xxh_u64 hash)\n{\n    hash ^= hash >> 33;\n    hash *= XXH_PRIME64_2;\n    hash ^= hash >> 29;\n    hash *= XXH_PRIME64_3;\n    hash ^= hash >> 32;\n    return hash;\n}\n\n\n#define XXH_get64bits(p) XXH_readLE64_align(p, align)\n\n/*!\n * @internal\n * @brief Sets up the initial accumulator state for XXH64().\n */\nXXH_FORCE_INLINE void\nXXH64_initAccs(xxh_u64 *acc, xxh_u64 seed)\n{\n    XXH_ASSERT(acc != NULL);\n    acc[0] = seed + XXH_PRIME64_1 + XXH_PRIME64_2;\n    acc[1] = seed + XXH_PRIME64_2;\n    acc[2] = seed + 0;\n    acc[3] = seed - XXH_PRIME64_1;\n}\n\n/*!\n * @internal\n * @brief Consumes a block of data for XXH64().\n *\n * @return the end input pointer.\n */\nXXH_FORCE_INLINE const xxh_u8 *\nXXH64_consumeLong(\n    xxh_u64 *XXH_RESTRICT acc,\n    xxh_u8 const *XXH_RESTRICT input,\n    size_t len,\n    XXH_alignment align\n)\n{\n    const xxh_u8* const bEnd = input + len;\n    const xxh_u8* const limit = bEnd - 31;\n    XXH_ASSERT(acc != NULL);\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(len >= 32);\n    do {\n        /* reroll on 32-bit */\n        if (sizeof(void *) < sizeof(xxh_u64)) {\n            size_t i;\n            for (i = 0; i < 4; i++) {\n                acc[i] = XXH64_round(acc[i], XXH_get64bits(input));\n                input += 8;\n            }\n        } else {\n            acc[0] = XXH64_round(acc[0], XXH_get64bits(input)); input += 8;\n            acc[1] = XXH64_round(acc[1], XXH_get64bits(input)); input += 8;\n            acc[2] = XXH64_round(acc[2], XXH_get64bits(input)); input += 8;\n            acc[3] = XXH64_round(acc[3], XXH_get64bits(input)); input += 8;\n        }\n    } while (input < limit);\n\n    return input;\n}\n\n/*!\n * @internal\n * @brief Merges the accumulator lanes together for XXH64()\n */\nXXH_FORCE_INLINE XXH_PUREF xxh_u64\nXXH64_mergeAccs(const xxh_u64 *acc)\n{\n    XXH_ASSERT(acc != NULL);\n    {\n        xxh_u64 h64 = XXH_rotl64(acc[0], 1) + 
XXH_rotl64(acc[1], 7)\n                    + XXH_rotl64(acc[2], 12) + XXH_rotl64(acc[3], 18);\n        /* reroll on 32-bit */\n        if (sizeof(void *) < sizeof(xxh_u64)) {\n            size_t i;\n            for (i = 0; i < 4; i++) {\n                h64 = XXH64_mergeRound(h64, acc[i]);\n            }\n        } else {\n            h64 = XXH64_mergeRound(h64, acc[0]);\n            h64 = XXH64_mergeRound(h64, acc[1]);\n            h64 = XXH64_mergeRound(h64, acc[2]);\n            h64 = XXH64_mergeRound(h64, acc[3]);\n        }\n        return h64;\n    }\n}\n\n/*!\n * @internal\n * @brief Processes the last 0-31 bytes of @p ptr.\n *\n * There may be up to 31 bytes remaining to consume from the input.\n * This final stage will digest them to ensure that all input bytes are present\n * in the final mix.\n *\n * @param hash The hash to finalize.\n * @param ptr The pointer to the remaining input.\n * @param len The remaining length, modulo 32.\n * @param align Whether @p ptr is aligned.\n * @return The finalized hash\n * @see XXH32_finalize().\n */\nXXH_STATIC XXH_PUREF xxh_u64\nXXH64_finalize(xxh_u64 hash, const xxh_u8* ptr, size_t len, XXH_alignment align)\n{\n    if (ptr==NULL) XXH_ASSERT(len == 0);\n    len &= 31;\n    while (len >= 8) {\n        xxh_u64 const k1 = XXH64_round(0, XXH_get64bits(ptr));\n        ptr += 8;\n        hash ^= k1;\n        hash  = XXH_rotl64(hash,27) * XXH_PRIME64_1 + XXH_PRIME64_4;\n        len -= 8;\n    }\n    if (len >= 4) {\n        hash ^= (xxh_u64)(XXH_get32bits(ptr)) * XXH_PRIME64_1;\n        ptr += 4;\n        hash = XXH_rotl64(hash, 23) * XXH_PRIME64_2 + XXH_PRIME64_3;\n        len -= 4;\n    }\n    while (len > 0) {\n        hash ^= (*ptr++) * XXH_PRIME64_5;\n        hash = XXH_rotl64(hash, 11) * XXH_PRIME64_1;\n        --len;\n    }\n    return  XXH64_avalanche(hash);\n}\n\n#ifdef XXH_OLD_NAMES\n#  define PROCESS1_64 XXH_PROCESS1_64\n#  define PROCESS4_64 XXH_PROCESS4_64\n#  define PROCESS8_64 XXH_PROCESS8_64\n#else\n#  undef 
XXH_PROCESS1_64\n#  undef XXH_PROCESS4_64\n#  undef XXH_PROCESS8_64\n#endif\n\n/*!\n * @internal\n * @brief The implementation for @ref XXH64().\n *\n * @param input , len , seed Directly passed from @ref XXH64().\n * @param align Whether @p input is aligned.\n * @return The calculated hash.\n */\nXXH_FORCE_INLINE XXH_PUREF xxh_u64\nXXH64_endian_align(const xxh_u8* input, size_t len, xxh_u64 seed, XXH_alignment align)\n{\n    xxh_u64 h64;\n    if (input==NULL) XXH_ASSERT(len == 0);\n\n    if (len>=32) {  /* Process a large block of data */\n        xxh_u64 acc[4];\n        XXH64_initAccs(acc, seed);\n\n        input = XXH64_consumeLong(acc, input, len, align);\n\n        h64 = XXH64_mergeAccs(acc);\n    } else {\n        h64  = seed + XXH_PRIME64_5;\n    }\n\n    h64 += (xxh_u64) len;\n\n    return XXH64_finalize(h64, input, len, align);\n}\n\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH64_hash_t XXH64 (XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)\n{\n#if !defined(XXH_NO_STREAM) && XXH_SIZE_OPT >= 2\n    /* Simple version, good for code maintenance, but unfortunately slow for small inputs */\n    XXH64_state_t state;\n    XXH64_reset(&state, seed);\n    XXH64_update(&state, (const xxh_u8*)input, len);\n    return XXH64_digest(&state);\n#else\n    if (XXH_FORCE_ALIGN_CHECK) {\n        if ((((size_t)input) & 7)==0) {  /* Input is aligned, let's leverage the speed advantage */\n            return XXH64_endian_align((const xxh_u8*)input, len, seed, XXH_aligned);\n    }   }\n\n    return XXH64_endian_align((const xxh_u8*)input, len, seed, XXH_unaligned);\n\n#endif\n}\n\n/*******   Hash Streaming   *******/\n#ifndef XXH_NO_STREAM\n/*! @ingroup XXH64_family*/\nXXH_PUBLIC_API XXH64_state_t* XXH64_createState(void)\n{\n    return (XXH64_state_t*)XXH_malloc(sizeof(XXH64_state_t));\n}\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH_errorcode XXH64_freeState(XXH64_state_t* statePtr)\n{\n    XXH_free(statePtr);\n    return XXH_OK;\n}\n\n/*! 
@ingroup XXH64_family */\nXXH_PUBLIC_API void XXH64_copyState(XXH_NOESCAPE XXH64_state_t* dstState, const XXH64_state_t* srcState)\n{\n    XXH_memcpy(dstState, srcState, sizeof(*dstState));\n}\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH_errorcode XXH64_reset(XXH_NOESCAPE XXH64_state_t* statePtr, XXH64_hash_t seed)\n{\n    XXH_ASSERT(statePtr != NULL);\n    XXH_memset(statePtr, 0, sizeof(*statePtr));\n    XXH64_initAccs(statePtr->acc, seed);\n    return XXH_OK;\n}\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH64_update (XXH_NOESCAPE XXH64_state_t* state, XXH_NOESCAPE const void* input, size_t len)\n{\n    if (input==NULL) {\n        XXH_ASSERT(len == 0);\n        return XXH_OK;\n    }\n\n    state->total_len += len;\n\n    XXH_ASSERT(state->bufferedSize <= sizeof(state->buffer));\n    if (len < sizeof(state->buffer) - state->bufferedSize)  {   /* fill in tmp buffer */\n        XXH_memcpy(state->buffer + state->bufferedSize, input, len);\n        state->bufferedSize += (XXH32_hash_t)len;\n        return XXH_OK;\n    }\n\n    {   const xxh_u8* xinput = (const xxh_u8*)input;\n        const xxh_u8* const bEnd = xinput + len;\n\n        if (state->bufferedSize) {   /* non-empty buffer => complete first */\n            XXH_memcpy(state->buffer + state->bufferedSize, xinput, sizeof(state->buffer) - state->bufferedSize);\n            xinput += sizeof(state->buffer) - state->bufferedSize;\n            /* and process one round */\n            (void)XXH64_consumeLong(state->acc, state->buffer, sizeof(state->buffer), XXH_aligned);\n            state->bufferedSize = 0;\n        }\n\n        XXH_ASSERT(xinput <= bEnd);\n        if ((size_t)(bEnd - xinput) >= sizeof(state->buffer)) {\n            /* Process the remaining data */\n            xinput = XXH64_consumeLong(state->acc, xinput, (size_t)(bEnd - xinput), XXH_unaligned);\n        }\n\n        if (xinput < bEnd) {\n            /* Copy the leftover to the tmp buffer */\n            
XXH_memcpy(state->buffer, xinput, (size_t)(bEnd-xinput));\n            state->bufferedSize = (unsigned)(bEnd-xinput);\n        }\n    }\n\n    return XXH_OK;\n}\n\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH64_hash_t XXH64_digest(XXH_NOESCAPE const XXH64_state_t* state)\n{\n    xxh_u64 h64;\n\n    if (state->total_len >= 32) {\n        h64 = XXH64_mergeAccs(state->acc);\n    } else {\n        h64  = state->acc[2] /*seed*/ + XXH_PRIME64_5;\n    }\n\n    h64 += (xxh_u64) state->total_len;\n\n    return XXH64_finalize(h64, state->buffer, (size_t)state->total_len, XXH_aligned);\n}\n#endif /* !XXH_NO_STREAM */\n\n/******* Canonical representation   *******/\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API void XXH64_canonicalFromHash(XXH_NOESCAPE XXH64_canonical_t* dst, XXH64_hash_t hash)\n{\n    XXH_STATIC_ASSERT(sizeof(XXH64_canonical_t) == sizeof(XXH64_hash_t));\n    if (XXH_CPU_LITTLE_ENDIAN) hash = XXH_swap64(hash);\n    XXH_memcpy(dst, &hash, sizeof(*dst));\n}\n\n/*! @ingroup XXH64_family */\nXXH_PUBLIC_API XXH64_hash_t XXH64_hashFromCanonical(XXH_NOESCAPE const XXH64_canonical_t* src)\n{\n    return XXH_readBE64(src);\n}\n\n#ifndef XXH_NO_XXH3\n\n/* *********************************************************************\n*  XXH3\n*  New generation hash designed for speed on small keys and vectorization\n************************************************************************ */\n/*!\n * @}\n * @defgroup XXH3_impl XXH3 implementation\n * @ingroup impl\n * @{\n */\n\n/* ===   Compiler specifics   === */\n\n\n#if (defined(__GNUC__) && (__GNUC__ >= 3))  \\\n  || (defined(__INTEL_COMPILER) && (__INTEL_COMPILER >= 800)) \\\n  || defined(__clang__)\n#    define XXH_likely(x) __builtin_expect(x, 1)\n#    define XXH_unlikely(x) __builtin_expect(x, 0)\n#else\n#    define XXH_likely(x) (x)\n#    define XXH_unlikely(x) (x)\n#endif\n\n#ifndef XXH_HAS_INCLUDE\n#  ifdef __has_include\n/*\n * Not defined as XXH_HAS_INCLUDE(x) (function-like) because\n * this causes segfaults 
in Apple Clang 4.2 (on Mac OS X 10.7 Lion)\n */\n#    define XXH_HAS_INCLUDE __has_include\n#  else\n#    define XXH_HAS_INCLUDE(x) 0\n#  endif\n#endif\n\n#if defined(__GNUC__) || defined(__clang__)\n#  if defined(__ARM_FEATURE_SVE)\n#    include <arm_sve.h>\n#  endif\n#  if defined(__ARM_NEON__) || defined(__ARM_NEON) \\\n   || (defined(_M_ARM) && _M_ARM >= 7) \\\n   || defined(_M_ARM64) || defined(_M_ARM64EC) \\\n   || (defined(__wasm_simd128__) && XXH_HAS_INCLUDE(<arm_neon.h>)) /* WASM SIMD128 via SIMDe */\n#    define inline __inline__  /* circumvent a clang bug */\n#    include <arm_neon.h>\n#    undef inline\n#  elif defined(__AVX2__)\n#    include <immintrin.h>\n#  elif defined(__SSE2__)\n#    include <emmintrin.h>\n#  elif defined(__loongarch_asx)\n#    include <lasxintrin.h>\n#    include <lsxintrin.h>\n#  elif defined(__loongarch_sx)\n#    include <lsxintrin.h>\n#  elif defined(__riscv_vector)\n#    include <riscv_vector.h>\n#  endif\n#endif\n\n#if defined(_MSC_VER)\n#  include <intrin.h>\n#endif\n\n/*\n * One goal of XXH3 is to make it fast on both 32-bit and 64-bit, while\n * remaining a true 64-bit/128-bit hash function.\n *\n * This is done by prioritizing a subset of 64-bit operations that can be\n * emulated without too many steps on the average 32-bit machine.\n *\n * For example, these two lines seem similar, and run equally fast on 64-bit:\n *\n *   xxh_u64 x;\n *   x ^= (x >> 47); // good\n *   x ^= (x >> 13); // bad\n *\n * However, to a 32-bit machine, there is a major difference.\n *\n * x ^= (x >> 47) looks like this:\n *\n *   x.lo ^= (x.hi >> (47 - 32));\n *\n * while x ^= (x >> 13) looks like this:\n *\n *   // note: funnel shifts are not usually cheap.\n *   x.lo ^= (x.lo >> 13) | (x.hi << (32 - 13));\n *   x.hi ^= (x.hi >> 13);\n *\n * The first one is significantly faster than the second, simply because the\n * shift is larger than 32. 
This means:\n *  - All the bits we need are in the upper 32 bits, so we can ignore the lower\n *    32 bits in the shift.\n *  - The shift result will always fit in the lower 32 bits, and therefore,\n *    we can ignore the upper 32 bits in the xor.\n *\n * Thanks to this optimization, XXH3 only requires these features to be efficient:\n *\n *  - Usable unaligned access\n *  - A 32-bit or 64-bit ALU\n *      - If 32-bit, a decent ADC instruction\n *  - A 32 or 64-bit multiply with a 64-bit result\n *  - For the 128-bit variant, a decent byteswap helps short inputs.\n *\n * The first two are already required by XXH32, and almost all 32-bit and 64-bit\n * platforms which can run XXH32 can run XXH3 efficiently.\n *\n * Thumb-1, the classic 16-bit only subset of ARM's instruction set, is one\n * notable exception.\n *\n * First of all, Thumb-1 lacks support for the UMULL instruction which\n * performs the important long multiply. This means numerous __aeabi_lmul\n * calls.\n *\n * Second of all, the 8 functional registers are just not enough.\n * Setup for __aeabi_lmul, byteshift loads, pointers, and all arithmetic need\n * Lo registers, and this shuffling results in thousands more MOVs than A32.\n *\n * A32 and T32 don't have this limitation. 
They can access all 14 registers,\n * do a 32->64 multiply with UMULL, and the flexible operand allowing free\n * shifts is helpful, too.\n *\n * Therefore, we do a quick sanity check.\n *\n * If compiling Thumb-1 for a target which supports ARM instructions, we will\n * emit a warning, as it is not a \"sane\" platform to compile for.\n *\n * Usually, if this happens, it is because of an accident and you probably need\n * to specify -march, as you likely meant to compile for a newer architecture.\n *\n * Credit: large sections of the vectorial and asm source code paths\n *         have been contributed by @easyaspi314\n */\n#if defined(__thumb__) && !defined(__thumb2__) && defined(__ARM_ARCH_ISA_ARM)\n#   warning \"XXH3 is highly inefficient without ARM or Thumb-2.\"\n#endif\n\n/* ==========================================\n * Vectorization detection\n * ========================================== */\n\n#ifdef XXH_DOXYGEN\n/*!\n * @ingroup tuning\n * @brief Overrides the vectorization implementation chosen for XXH3.\n *\n * Can be defined to 0 to disable SIMD,\n * or any other authorized value of @ref XXH_VECTOR.\n *\n * If this is not defined, it uses predefined macros to determine the best\n * implementation.\n */\n#  define XXH_VECTOR XXH_SCALAR\n/*!\n * @ingroup tuning\n * @brief Selects the minimum alignment for XXH3's accumulators.\n *\n * When using SIMD, this should match the alignment required for said vector\n * type, so, for example, 32 for AVX2.\n *\n * Default: Auto detected.\n */\n#  define XXH_ACC_ALIGN 8\n#endif\n\n/* Actual definition */\n#ifndef XXH_DOXYGEN\n#endif\n\n#ifndef XXH_VECTOR    /* can be defined on command line */\n#  if ( \\\n        defined(__ARM_NEON__) || defined(__ARM_NEON) /* gcc */ \\\n     || defined(_M_ARM) || defined(_M_ARM64) || defined(_M_ARM64EC) /* msvc */ \\\n     || (defined(__wasm_simd128__) && XXH_HAS_INCLUDE(<arm_neon.h>)) /* wasm simd128 via SIMDe */ \\\n   ) && ( \\\n        defined(_WIN32) || 
defined(__LITTLE_ENDIAN__) /* little endian only */ \\\n    || (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) \\\n   )\n#    define XXH_VECTOR XXH_NEON\n#  elif defined(__ARM_FEATURE_SVE)\n#    define XXH_VECTOR XXH_SVE\n#  elif defined(__AVX512F__)\n#    define XXH_VECTOR XXH_AVX512\n#  elif defined(__AVX2__)\n#    define XXH_VECTOR XXH_AVX2\n#  elif defined(__SSE2__) || defined(_M_X64) || (defined(_M_IX86_FP) && (_M_IX86_FP == 2))\n#    define XXH_VECTOR XXH_SSE2\n#  elif (defined(__PPC64__) && defined(__POWER8_VECTOR__)) \\\n     || (defined(__s390x__) && defined(__VEC__)) \\\n     && defined(__GNUC__) /* TODO: IBM XL */\n#    define XXH_VECTOR XXH_VSX\n#  elif defined(__loongarch_asx)\n#    define XXH_VECTOR XXH_LASX\n#  elif defined(__loongarch_sx)\n#    define XXH_VECTOR XXH_LSX\n#  elif defined(__riscv_vector)\n#    define XXH_VECTOR XXH_RVV\n#  else\n#    define XXH_VECTOR XXH_SCALAR\n#  endif\n#endif\n\n/* __ARM_FEATURE_SVE is only supported by GCC & Clang. */\n#if (XXH_VECTOR == XXH_SVE) && !defined(__ARM_FEATURE_SVE)\n#  ifdef _MSC_VER\n#    pragma warning(once : 4606)\n#  else\n#    warning \"__ARM_FEATURE_SVE isn't supported. 
Use SCALAR instead.\"\n#  endif\n#  undef XXH_VECTOR\n#  define XXH_VECTOR XXH_SCALAR\n#endif\n\n/*\n * Controls the alignment of the accumulator,\n * for compatibility with aligned vector loads, which are usually faster.\n */\n#ifndef XXH_ACC_ALIGN\n#  if defined(XXH_X86DISPATCH)\n#     define XXH_ACC_ALIGN 64  /* for compatibility with avx512 */\n#  elif XXH_VECTOR == XXH_SCALAR  /* scalar */\n#     define XXH_ACC_ALIGN 8\n#  elif XXH_VECTOR == XXH_SSE2  /* sse2 */\n#     define XXH_ACC_ALIGN 16\n#  elif XXH_VECTOR == XXH_AVX2  /* avx2 */\n#     define XXH_ACC_ALIGN 32\n#  elif XXH_VECTOR == XXH_NEON  /* neon */\n#     define XXH_ACC_ALIGN 16\n#  elif XXH_VECTOR == XXH_VSX   /* vsx */\n#     define XXH_ACC_ALIGN 16\n#  elif XXH_VECTOR == XXH_AVX512  /* avx512 */\n#     define XXH_ACC_ALIGN 64\n#  elif XXH_VECTOR == XXH_SVE   /* sve */\n#     define XXH_ACC_ALIGN 64\n#  elif XXH_VECTOR == XXH_LASX   /* lasx */\n#     define XXH_ACC_ALIGN 64\n#  elif XXH_VECTOR == XXH_LSX   /* lsx */\n#     define XXH_ACC_ALIGN 64\n#  elif XXH_VECTOR == XXH_RVV   /* rvv */\n#     define XXH_ACC_ALIGN 64   /* could be 8, but 64 may be faster */\n#  endif\n#endif\n\n#if defined(XXH_X86DISPATCH) || XXH_VECTOR == XXH_SSE2 \\\n    || XXH_VECTOR == XXH_AVX2 || XXH_VECTOR == XXH_AVX512\n#  define XXH_SEC_ALIGN XXH_ACC_ALIGN\n#elif XXH_VECTOR == XXH_SVE\n#  define XXH_SEC_ALIGN XXH_ACC_ALIGN\n#elif XXH_VECTOR == XXH_RVV\n#  define XXH_SEC_ALIGN XXH_ACC_ALIGN\n#else\n#  define XXH_SEC_ALIGN 8\n#endif\n\n#if defined(__GNUC__) || defined(__clang__)\n#  define XXH_ALIASING __attribute__((__may_alias__))\n#else\n#  define XXH_ALIASING /* nothing */\n#endif\n\n/*\n * UGLY HACK:\n * GCC usually generates the best code with -O3 for xxHash.\n *\n * However, when targeting AVX2, it is overzealous in its unrolling resulting\n * in code roughly 3/4 the speed of Clang.\n *\n * There are other issues, such as GCC splitting _mm256_loadu_si256 into\n * _mm_loadu_si128 + _mm256_inserti128_si256. 
This is an optimization which\n * only applies to Sandy and Ivy Bridge... which don't even support AVX2.\n *\n * That is why when compiling the AVX2 version, it is recommended to use either\n *   -O2 -mavx2 -march=haswell\n * or\n *   -O2 -mavx2 -mno-avx256-split-unaligned-load\n * for decent performance, or to use Clang instead.\n *\n * Fortunately, we can control the first one with a pragma that forces GCC into\n * -O2, but the other one we can't control without \"failed to inline always\n * inline function due to target mismatch\" warnings.\n */\n#if XXH_VECTOR == XXH_AVX2 /* AVX2 */ \\\n  && defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \\\n  && defined(__OPTIMIZE__) && XXH_SIZE_OPT <= 0 /* respect -O0 and -Os */\n#  pragma GCC push_options\n#  pragma GCC optimize(\"-O2\")\n#endif\n\n#if XXH_VECTOR == XXH_NEON\n\n/*\n * UGLY HACK: While AArch64 GCC on Linux does not seem to care, on macOS, GCC -O3\n * optimizes out the entire hashLong loop because of the aliasing violation.\n *\n * However, GCC is also inefficient at load-store optimization with vld1q/vst1q,\n * so the only option is to mark it as aliasing.\n */\ntypedef uint64x2_t xxh_aliasing_uint64x2_t XXH_ALIASING;\n\n/*!\n * @internal\n * @brief `vld1q_u64` but faster and alignment-safe.\n *\n * On AArch64, unaligned access is always safe, but on ARMv7-a, it is only\n * *conditionally* safe (`vld1` has an alignment bit like `movdq[ua]` in x86).\n *\n * GCC for AArch64 sees `vld1q_u8` as an intrinsic instead of a load, so it\n * prohibits load-store optimizations. 
Therefore, a direct dereference is used.\n *\n * Otherwise, `vld1q_u8` is used with `vreinterpretq_u8_u64` to do a safe\n * unaligned load.\n */\n#if defined(__aarch64__) && defined(__GNUC__) && !defined(__clang__)\nXXH_FORCE_INLINE uint64x2_t XXH_vld1q_u64(void const* ptr) /* silence -Wcast-align */\n{\n    return *(xxh_aliasing_uint64x2_t const *)ptr;\n}\n#else\nXXH_FORCE_INLINE uint64x2_t XXH_vld1q_u64(void const* ptr)\n{\n    return vreinterpretq_u64_u8(vld1q_u8((uint8_t const*)ptr));\n}\n#endif\n\n/*!\n * @internal\n * @brief `vmlal_u32` on low and high halves of a vector.\n *\n * This is a workaround for AArch64 GCC < 11 which implemented arm_neon.h with\n * inline assembly and were therefore incapable of merging the `vget_{low, high}_u32`\n * with `vmlal_u32`.\n */\n#if defined(__aarch64__) && defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 11\nXXH_FORCE_INLINE uint64x2_t\nXXH_vmlal_low_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)\n{\n    /* Inline assembly is the only way */\n    __asm__(\"umlal   %0.2d, %1.2s, %2.2s\" : \"+w\" (acc) : \"w\" (lhs), \"w\" (rhs));\n    return acc;\n}\nXXH_FORCE_INLINE uint64x2_t\nXXH_vmlal_high_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)\n{\n    /* This intrinsic works as expected */\n    return vmlal_high_u32(acc, lhs, rhs);\n}\n#else\n/* Portable intrinsic versions */\nXXH_FORCE_INLINE uint64x2_t\nXXH_vmlal_low_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)\n{\n    return vmlal_u32(acc, vget_low_u32(lhs), vget_low_u32(rhs));\n}\n/*! 
@copydoc XXH_vmlal_low_u32\n * Assume the compiler converts this to vmlal_high_u32 on aarch64 */\nXXH_FORCE_INLINE uint64x2_t\nXXH_vmlal_high_u32(uint64x2_t acc, uint32x4_t lhs, uint32x4_t rhs)\n{\n    return vmlal_u32(acc, vget_high_u32(lhs), vget_high_u32(rhs));\n}\n#endif\n\n/*!\n * @ingroup tuning\n * @brief Controls the NEON to scalar ratio for XXH3\n *\n * This can be set to 2, 4, 6, or 8.\n *\n * ARM Cortex CPUs are _very_ sensitive to how their pipelines are used.\n *\n * For example, the Cortex-A73 can dispatch 3 micro-ops per cycle, but only 2 of those\n * can be NEON. If you are only using NEON instructions, you are only using 2/3 of the CPU\n * bandwidth.\n *\n * This is even more noticeable on the more advanced cores like the Cortex-A76 which\n * can dispatch 8 micro-ops per cycle, but still only 2 NEON micro-ops at once.\n *\n * Therefore, to make the most out of the pipeline, it is beneficial to run 6 NEON lanes\n * and 2 scalar lanes, which is chosen by default.\n *\n * This does not apply to Apple processors or 32-bit processors, which run better with\n * full NEON. These will default to 8. Additionally, size-optimized builds run 8 lanes.\n *\n * This change benefits CPUs with large micro-op buffers without negatively affecting\n * most other CPUs:\n *\n *  | Chipset               | Dispatch type       | NEON only | 6:2 hybrid | Diff. 
|\n *  |:----------------------|:--------------------|----------:|-----------:|------:|\n *  | Snapdragon 730 (A76)  | 2 NEON/8 micro-ops  |  8.8 GB/s |  10.1 GB/s |  ~16% |\n *  | Snapdragon 835 (A73)  | 2 NEON/3 micro-ops  |  5.1 GB/s |   5.3 GB/s |   ~5% |\n *  | Marvell PXA1928 (A53) | In-order dual-issue |  1.9 GB/s |   1.9 GB/s |    0% |\n *  | Apple M1              | 4 NEON/8 micro-ops  | 37.3 GB/s |  36.1 GB/s |  ~-3% |\n *\n * It also seems to fix some bad codegen on GCC, making it almost as fast as Clang.\n *\n * When using WASM SIMD128, if this is 2 or 6, SIMDe will scalarize 2 of the lanes, meaning\n * it effectively becomes a worse 4.\n *\n * @see XXH3_accumulate_512_neon()\n */\n# ifndef XXH3_NEON_LANES\n#  if (defined(__aarch64__) || defined(__arm64__) || defined(_M_ARM64) || defined(_M_ARM64EC)) \\\n   && !defined(__APPLE__) && XXH_SIZE_OPT <= 0\n#   define XXH3_NEON_LANES 6\n#  else\n#   define XXH3_NEON_LANES XXH_ACC_NB\n#  endif\n# endif\n#endif  /* XXH_VECTOR == XXH_NEON */\n\n/*\n * VSX and Z Vector helpers.\n *\n * This is very messy, and any pull requests to clean this up are welcome.\n *\n * There are a lot of problems with supporting VSX and s390x, due to\n * inconsistent intrinsics, spotty coverage, and multiple endiannesses.\n */\n#if XXH_VECTOR == XXH_VSX\n/* Annoyingly, these headers _may_ define three macros: `bool`, `vector`,\n * and `pixel`. This is a problem for obvious reasons.\n *\n * These keywords are unnecessary; the spec literally says they are\n * equivalent to `__bool`, `__vector`, and `__pixel` and may be undef'd\n * after including the header.\n *\n * We use pragma push_macro/pop_macro to keep the namespace clean. 
*/\n#  pragma push_macro(\"bool\")\n#  pragma push_macro(\"vector\")\n#  pragma push_macro(\"pixel\")\n/* silence potential macro redefined warnings */\n#  undef bool\n#  undef vector\n#  undef pixel\n\n#  if defined(__s390x__)\n#    include <s390intrin.h>\n#  else\n#    include <altivec.h>\n#  endif\n\n/* Restore the original macro values, if applicable. */\n#  pragma pop_macro(\"pixel\")\n#  pragma pop_macro(\"vector\")\n#  pragma pop_macro(\"bool\")\n\ntypedef __vector unsigned long long xxh_u64x2;\ntypedef __vector unsigned char xxh_u8x16;\ntypedef __vector unsigned xxh_u32x4;\n\n/*\n * UGLY HACK: Similar to aarch64 macOS GCC, s390x GCC has the same aliasing issue.\n */\ntypedef xxh_u64x2 xxh_aliasing_u64x2 XXH_ALIASING;\n\n# ifndef XXH_VSX_BE\n#  if defined(__BIG_ENDIAN__) \\\n  || (defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)\n#    define XXH_VSX_BE 1\n#  elif defined(__VEC_ELEMENT_REG_ORDER__) && __VEC_ELEMENT_REG_ORDER__ == __ORDER_BIG_ENDIAN__\n#    warning \"-maltivec=be is not recommended. 
Please use native endianness.\"\n#    define XXH_VSX_BE 1\n#  else\n#    define XXH_VSX_BE 0\n#  endif\n# endif /* !defined(XXH_VSX_BE) */\n\n# if XXH_VSX_BE\n#  if defined(__POWER9_VECTOR__) || (defined(__clang__) && defined(__s390x__))\n#    define XXH_vec_revb vec_revb\n#  else\n/*!\n * A polyfill for POWER9's vec_revb().\n */\nXXH_FORCE_INLINE xxh_u64x2 XXH_vec_revb(xxh_u64x2 val)\n{\n    xxh_u8x16 const vByteSwap = { 0x07, 0x06, 0x05, 0x04, 0x03, 0x02, 0x01, 0x00,\n                                  0x0F, 0x0E, 0x0D, 0x0C, 0x0B, 0x0A, 0x09, 0x08 };\n    return vec_perm(val, val, vByteSwap);\n}\n#  endif\n# endif /* XXH_VSX_BE */\n\n/*!\n * Performs an unaligned vector load and byte swaps it on big endian.\n */\nXXH_FORCE_INLINE xxh_u64x2 XXH_vec_loadu(const void *ptr)\n{\n    xxh_u64x2 ret;\n    XXH_memcpy(&ret, ptr, sizeof(xxh_u64x2));\n# if XXH_VSX_BE\n    ret = XXH_vec_revb(ret);\n# endif\n    return ret;\n}\n\n/*\n * vec_mulo and vec_mule are very problematic intrinsics on PowerPC.\n *\n * These intrinsics weren't added until GCC 8, despite existing for a while,\n * and they are endian dependent. Also, their meanings swap depending on the version.\n * */\n# if defined(__s390x__)\n /* s390x is always big endian, no issue on this platform */\n#  define XXH_vec_mulo vec_mulo\n#  define XXH_vec_mule vec_mule\n# elif defined(__clang__) && XXH_HAS_BUILTIN(__builtin_altivec_vmuleuw) && !defined(__ibmxl__)\n/* Clang has a better way to control this, we can just use the builtin which doesn't swap. */\n /* The IBM XL Compiler (which defines __clang__) only implements the vec_* operations */\n#  define XXH_vec_mulo __builtin_altivec_vmulouw\n#  define XXH_vec_mule __builtin_altivec_vmuleuw\n# else\n/* gcc needs inline assembly */\n/* Adapted from https://github.com/google/highwayhash/blob/master/highwayhash/hh_vsx.h. 
*/\nXXH_FORCE_INLINE xxh_u64x2 XXH_vec_mulo(xxh_u32x4 a, xxh_u32x4 b)\n{\n    xxh_u64x2 result;\n    __asm__(\"vmulouw %0, %1, %2\" : \"=v\" (result) : \"v\" (a), \"v\" (b));\n    return result;\n}\nXXH_FORCE_INLINE xxh_u64x2 XXH_vec_mule(xxh_u32x4 a, xxh_u32x4 b)\n{\n    xxh_u64x2 result;\n    __asm__(\"vmuleuw %0, %1, %2\" : \"=v\" (result) : \"v\" (a), \"v\" (b));\n    return result;\n}\n# endif /* XXH_vec_mulo, XXH_vec_mule */\n#endif /* XXH_VECTOR == XXH_VSX */\n\n#if XXH_VECTOR == XXH_SVE\n#define ACCRND(acc, offset) \\\ndo { \\\n    svuint64_t input_vec = svld1_u64(mask, xinput + offset);         \\\n    svuint64_t secret_vec = svld1_u64(mask, xsecret + offset);       \\\n    svuint64_t mixed = sveor_u64_x(mask, secret_vec, input_vec);     \\\n    svuint64_t swapped = svtbl_u64(input_vec, kSwap);                \\\n    svuint64_t mixed_lo = svextw_u64_x(mask, mixed);                 \\\n    svuint64_t mixed_hi = svlsr_n_u64_x(mask, mixed, 32);            \\\n    svuint64_t mul = svmad_u64_x(mask, mixed_lo, mixed_hi, swapped); \\\n    acc = svadd_u64_x(mask, acc, mul);                               \\\n} while (0)\n#endif /* XXH_VECTOR == XXH_SVE */\n\n/* prefetch\n * can be disabled, by declaring XXH_NO_PREFETCH build macro */\n#if defined(XXH_NO_PREFETCH)\n#  define XXH_PREFETCH(ptr)  (void)(ptr)  /* disabled */\n#else\n#  if XXH_SIZE_OPT >= 1\n#    define XXH_PREFETCH(ptr) (void)(ptr)\n#  elif defined(_MSC_VER) && (defined(_M_X64) || defined(_M_IX86))  /* _mm_prefetch() not defined outside of x86/x64 */\n#    include <mmintrin.h>   /* https://msdn.microsoft.com/fr-fr/library/84szxsww(v=vs.90).aspx */\n#    define XXH_PREFETCH(ptr)  _mm_prefetch((const char*)(ptr), _MM_HINT_T0)\n#  elif defined(__GNUC__) && ( (__GNUC__ >= 4) || ( (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1) ) )\n#    define XXH_PREFETCH(ptr)  __builtin_prefetch((ptr), 0 /* rw==read */, 3 /* locality */)\n#  else\n#    define XXH_PREFETCH(ptr) (void)(ptr)  /* disabled */\n#  endif\n#endif  /* 
XXH_NO_PREFETCH */\n\n\n/* ==========================================\n * XXH3 default settings\n * ========================================== */\n\n#define XXH_SECRET_DEFAULT_SIZE 192   /* minimum XXH3_SECRET_SIZE_MIN */\n\n#if (XXH_SECRET_DEFAULT_SIZE < XXH3_SECRET_SIZE_MIN)\n#  error \"default keyset is not large enough\"\n#endif\n\n/*!\n * @internal\n * @def XXH3_kSecret\n * @brief Pseudorandom secret taken directly from FARSH. */\nXXH_ALIGN(64) static const xxh_u8 XXH3_kSecret[XXH_SECRET_DEFAULT_SIZE] = {\n    0xb8, 0xfe, 0x6c, 0x39, 0x23, 0xa4, 0x4b, 0xbe, 0x7c, 0x01, 0x81, 0x2c, 0xf7, 0x21, 0xad, 0x1c,\n    0xde, 0xd4, 0x6d, 0xe9, 0x83, 0x90, 0x97, 0xdb, 0x72, 0x40, 0xa4, 0xa4, 0xb7, 0xb3, 0x67, 0x1f,\n    0xcb, 0x79, 0xe6, 0x4e, 0xcc, 0xc0, 0xe5, 0x78, 0x82, 0x5a, 0xd0, 0x7d, 0xcc, 0xff, 0x72, 0x21,\n    0xb8, 0x08, 0x46, 0x74, 0xf7, 0x43, 0x24, 0x8e, 0xe0, 0x35, 0x90, 0xe6, 0x81, 0x3a, 0x26, 0x4c,\n    0x3c, 0x28, 0x52, 0xbb, 0x91, 0xc3, 0x00, 0xcb, 0x88, 0xd0, 0x65, 0x8b, 0x1b, 0x53, 0x2e, 0xa3,\n    0x71, 0x64, 0x48, 0x97, 0xa2, 0x0d, 0xf9, 0x4e, 0x38, 0x19, 0xef, 0x46, 0xa9, 0xde, 0xac, 0xd8,\n    0xa8, 0xfa, 0x76, 0x3f, 0xe3, 0x9c, 0x34, 0x3f, 0xf9, 0xdc, 0xbb, 0xc7, 0xc7, 0x0b, 0x4f, 0x1d,\n    0x8a, 0x51, 0xe0, 0x4b, 0xcd, 0xb4, 0x59, 0x31, 0xc8, 0x9f, 0x7e, 0xc9, 0xd9, 0x78, 0x73, 0x64,\n    0xea, 0xc5, 0xac, 0x83, 0x34, 0xd3, 0xeb, 0xc3, 0xc5, 0x81, 0xa0, 0xff, 0xfa, 0x13, 0x63, 0xeb,\n    0x17, 0x0d, 0xdd, 0x51, 0xb7, 0xf0, 0xda, 0x49, 0xd3, 0x16, 0x55, 0x26, 0x29, 0xd4, 0x68, 0x9e,\n    0x2b, 0x16, 0xbe, 0x58, 0x7d, 0x47, 0xa1, 0xfc, 0x8f, 0xf8, 0xb8, 0xd1, 0x7a, 0xd0, 0x31, 0xce,\n    0x45, 0xcb, 0x3a, 0x8f, 0x95, 0x16, 0x04, 0x28, 0xaf, 0xd7, 0xfb, 0xca, 0xbb, 0x4b, 0x40, 0x7e,\n};\n\nstatic const xxh_u64 PRIME_MX1 = 0x165667919E3779F9ULL;  /*!< 0b0001011001010110011001111001000110011110001101110111100111111001 */\nstatic const xxh_u64 PRIME_MX2 = 0x9FB21C651E98DF25ULL;  /*!< 0b1001111110110010000111000110010100011110100110001101111100100101 
*/\n\n#ifdef XXH_OLD_NAMES\n#  define kSecret XXH3_kSecret\n#endif\n\n#ifdef XXH_DOXYGEN\n/*!\n * @brief Calculates a 32-bit to 64-bit long multiply.\n *\n * Implemented as a macro.\n *\n * Wraps `__emulu` on MSVC x86 because it tends to call `__allmul` when it doesn't\n * need to (but it shouldn't need to anyways, it is about 7 instructions to do\n * a 64x64 multiply...). Since we know that this will _always_ emit `MULL`, we\n * use that instead of the normal method.\n *\n * If you are compiling for platforms like Thumb-1 and don't have a better option,\n * you may also want to write your own long multiply routine here.\n *\n * @param x, y Numbers to be multiplied\n * @return 64-bit product of the low 32 bits of @p x and @p y.\n */\nXXH_FORCE_INLINE xxh_u64\nXXH_mult32to64(xxh_u64 x, xxh_u64 y)\n{\n   return (x & 0xFFFFFFFF) * (y & 0xFFFFFFFF);\n}\n#elif defined(_MSC_VER) && defined(_M_IX86)\n#    define XXH_mult32to64(x, y) __emulu((unsigned)(x), (unsigned)(y))\n#else\n/*\n * Downcast + upcast is usually better than masking on older compilers like\n * GCC 4.2 (especially 32-bit ones), all without affecting newer compilers.\n *\n * The other method, (x & 0xFFFFFFFF) * (y & 0xFFFFFFFF), will AND both operands\n * and perform a full 64x64 multiply -- entirely redundant on 32-bit.\n */\n#    define XXH_mult32to64(x, y) ((xxh_u64)(xxh_u32)(x) * (xxh_u64)(xxh_u32)(y))\n#endif\n\n/*!\n * @brief Calculates a 64->128-bit long multiply.\n *\n * Uses `__uint128_t` and `_umul128` if available, otherwise uses a scalar\n * version.\n *\n * @param lhs , rhs The 64-bit integers to be multiplied\n * @return The 128-bit result represented in an @ref XXH128_hash_t.\n */\nstatic XXH128_hash_t\nXXH_mult64to128(xxh_u64 lhs, xxh_u64 rhs)\n{\n    /*\n     * GCC/Clang __uint128_t method.\n     *\n     * On most 64-bit targets, GCC and Clang define a __uint128_t type.\n     * This is usually the best way as it usually uses a native long 64-bit\n     * multiply, such as MULQ on x86_64 or 
MUL + UMULH on aarch64.\n     *\n     * Usually.\n     *\n     * Although wasm is a 32-bit platform, Clang (and Emscripten) define this type\n     * despite the platform lacking the arithmetic for it. This results in a laggy\n     * compiler builtin call which calculates a full 128-bit multiply.\n     * In that case it is best to use the portable one.\n     * https://github.com/Cyan4973/xxHash/issues/211#issuecomment-515575677\n     */\n#if (defined(__GNUC__) || defined(__clang__)) && !defined(__wasm__) \\\n    && defined(__SIZEOF_INT128__) \\\n    || (defined(_INTEGRAL_MAX_BITS) && _INTEGRAL_MAX_BITS >= 128)\n\n    __uint128_t const product = (__uint128_t)lhs * (__uint128_t)rhs;\n    XXH128_hash_t r128;\n    r128.low64  = (xxh_u64)(product);\n    r128.high64 = (xxh_u64)(product >> 64);\n    return r128;\n\n    /*\n     * MSVC for x64's _umul128 method.\n     *\n     * xxh_u64 _umul128(xxh_u64 Multiplier, xxh_u64 Multiplicand, xxh_u64 *HighProduct);\n     *\n     * This compiles to single operand MUL on x64.\n     */\n#elif (defined(_M_X64) || defined(_M_IA64)) && !defined(_M_ARM64EC)\n\n#ifndef _MSC_VER\n#   pragma intrinsic(_umul128)\n#endif\n    xxh_u64 product_high;\n    xxh_u64 const product_low = _umul128(lhs, rhs, &product_high);\n    XXH128_hash_t r128;\n    r128.low64  = product_low;\n    r128.high64 = product_high;\n    return r128;\n\n    /*\n     * MSVC for ARM64's __umulh method.\n     *\n     * This compiles to the same MUL + UMULH as GCC/Clang's __uint128_t method.\n     */\n#elif defined(_M_ARM64) || defined(_M_ARM64EC)\n\n#ifndef _MSC_VER\n#   pragma intrinsic(__umulh)\n#endif\n    XXH128_hash_t r128;\n    r128.low64  = lhs * rhs;\n    r128.high64 = __umulh(lhs, rhs);\n    return r128;\n\n#else\n    /*\n     * Portable scalar method. 
Optimized for 32-bit and 64-bit ALUs.\n     *\n     * This is a fast and simple grade school multiply, which is shown below\n     * with base 10 arithmetic instead of base 0x100000000.\n     *\n     *           9 3 // D2 lhs = 93\n     *         x 7 5 // D2 rhs = 75\n     *     ----------\n     *           1 5 // D2 lo_lo = (93 % 10) * (75 % 10) = 15\n     *         4 5 | // D2 hi_lo = (93 / 10) * (75 % 10) = 45\n     *         2 1 | // D2 lo_hi = (93 % 10) * (75 / 10) = 21\n     *     + 6 3 | | // D2 hi_hi = (93 / 10) * (75 / 10) = 63\n     *     ---------\n     *         2 7 | // D2 cross = (15 / 10) + (45 % 10) + 21 = 27\n     *     + 6 7 | | // D2 upper = (45 / 10) + 63 = 67\n     *     ---------\n     *       6 9 7 5 // D4 res = (27 * 10) + (15 % 10) + (67 * 100) = 6975\n     *\n     * (The code below folds cross's carry, cross / 10, into upper and keeps\n     * only cross's low digit in lower; the total is identical.)\n     *\n     * The reasons for adding the products like this are:\n     *  1. It avoids manual carry tracking. Just like how\n     *     (9 * 9) + 9 + 9 = 99, the same applies with this for UINT64_MAX.\n     *     This avoids a lot of complexity.\n     *\n     *  2. It hints for, and on Clang, compiles to, the powerful UMAAL\n     *     instruction available in ARM's Digital Signal Processing extension\n     *     in 32-bit ARMv6 and later, which is shown below:\n     *\n     *         void UMAAL(xxh_u32 *RdLo, xxh_u32 *RdHi, xxh_u32 Rn, xxh_u32 Rm)\n     *         {\n     *             xxh_u64 product = (xxh_u64)*RdLo * (xxh_u64)*RdHi + Rn + Rm;\n     *             *RdLo = (xxh_u32)(product & 0xFFFFFFFF);\n     *             *RdHi = (xxh_u32)(product >> 32);\n     *         }\n     *\n     *     This instruction was designed for efficient long multiplication, and\n     *     allows this to be calculated in only 4 instructions at speeds\n     *     comparable to some 64-bit ALUs.\n     *\n     *  3. It isn't terrible on other platforms. Usually this will be a couple\n     *     of 32-bit ADD/ADCs.\n     */\n\n    /* First calculate all of the cross products. 
*/\n    xxh_u64 const lo_lo = XXH_mult32to64(lhs & 0xFFFFFFFF, rhs & 0xFFFFFFFF);\n    xxh_u64 const hi_lo = XXH_mult32to64(lhs >> 32,        rhs & 0xFFFFFFFF);\n    xxh_u64 const lo_hi = XXH_mult32to64(lhs & 0xFFFFFFFF, rhs >> 32);\n    xxh_u64 const hi_hi = XXH_mult32to64(lhs >> 32,        rhs >> 32);\n\n    /* Now add the products together. These will never overflow. */\n    xxh_u64 const cross = (lo_lo >> 32) + (hi_lo & 0xFFFFFFFF) + lo_hi;\n    xxh_u64 const upper = (hi_lo >> 32) + (cross >> 32)        + hi_hi;\n    xxh_u64 const lower = (cross << 32) | (lo_lo & 0xFFFFFFFF);\n\n    XXH128_hash_t r128;\n    r128.low64  = lower;\n    r128.high64 = upper;\n    return r128;\n#endif\n}\n\n/*!\n * @brief Calculates a 64-bit to 128-bit multiply, then XOR folds it.\n *\n * The reason for the separate function is to prevent passing too many structs\n * around by value. This will hopefully inline the multiply, but we don't force it.\n *\n * @param lhs , rhs The 64-bit integers to multiply\n * @return The low 64 bits of the product XOR'd by the high 64 bits.\n * @see XXH_mult64to128()\n */\nstatic xxh_u64\nXXH3_mul128_fold64(xxh_u64 lhs, xxh_u64 rhs)\n{\n    XXH128_hash_t product = XXH_mult64to128(lhs, rhs);\n    return product.low64 ^ product.high64;\n}\n\n/*! Seems to produce slightly better code on GCC for some reason. 
*/\nXXH_FORCE_INLINE XXH_CONSTF xxh_u64 XXH_xorshift64(xxh_u64 v64, int shift)\n{\n    XXH_ASSERT(0 <= shift && shift < 64);\n    return v64 ^ (v64 >> shift);\n}\n\n/*\n * This is a fast avalanche stage,\n * suitable when input bits are already partially mixed\n */\nstatic XXH64_hash_t XXH3_avalanche(xxh_u64 h64)\n{\n    h64 = XXH_xorshift64(h64, 37);\n    h64 *= PRIME_MX1;\n    h64 = XXH_xorshift64(h64, 32);\n    return h64;\n}\n\n/*\n * This is a stronger avalanche,\n * inspired by Pelle Evensen's rrmxmx\n * preferable when input has not been previously mixed\n */\nstatic XXH64_hash_t XXH3_rrmxmx(xxh_u64 h64, xxh_u64 len)\n{\n    /* this mix is inspired by Pelle Evensen's rrmxmx */\n    h64 ^= XXH_rotl64(h64, 49) ^ XXH_rotl64(h64, 24);\n    h64 *= PRIME_MX2;\n    h64 ^= (h64 >> 35) + len ;\n    h64 *= PRIME_MX2;\n    return XXH_xorshift64(h64, 28);\n}\n\n\n/* ==========================================\n * Short keys\n * ==========================================\n * One of the shortcomings of XXH32 and XXH64 was that their performance was\n * sub-optimal on short lengths. It used an iterative algorithm which strongly\n * favored lengths that were a multiple of 4 or 8.\n *\n * Instead of iterating over individual inputs, we use a set of single shot\n * functions which piece together a range of lengths and operate in constant time.\n *\n * Additionally, the number of multiplies has been significantly reduced. This\n * reduces latency, especially when emulating 64-bit multiplies on 32-bit.\n *\n * Depending on the platform, this may or may not be faster than XXH32, but it\n * is almost guaranteed to be faster than XXH64.\n */\n\n/*\n * At very short lengths, there isn't enough input to fully hide secrets, or use\n * the entire secret.\n *\n * There is also only a limited amount of mixing we can do before significantly\n * impacting performance.\n *\n * Therefore, we use different sections of the secret and always mix two secret\n * samples with an XOR. 
This should have no effect on performance on the\n * seedless or withSeed variants because everything _should_ be constant folded\n * by modern compilers.\n *\n * The XOR mixing hides individual parts of the secret and increases entropy.\n *\n * This adds an extra layer of strength for custom secrets.\n */\nXXH_FORCE_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_1to3_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(1 <= len && len <= 3);\n    XXH_ASSERT(secret != NULL);\n    /*\n     * len = 1: combined = { input[0], 0x01, input[0], input[0] }\n     * len = 2: combined = { input[1], 0x02, input[0], input[1] }\n     * len = 3: combined = { input[2], 0x03, input[0], input[1] }\n     */\n    {   xxh_u8  const c1 = input[0];\n        xxh_u8  const c2 = input[len >> 1];\n        xxh_u8  const c3 = input[len - 1];\n        xxh_u32 const combined = ((xxh_u32)c1 << 16) | ((xxh_u32)c2  << 24)\n                               | ((xxh_u32)c3 <<  0) | ((xxh_u32)len << 8);\n        xxh_u64 const bitflip = (XXH_readLE32(secret) ^ XXH_readLE32(secret+4)) + seed;\n        xxh_u64 const keyed = (xxh_u64)combined ^ bitflip;\n        return XXH64_avalanche(keyed);\n    }\n}\n\nXXH_FORCE_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_4to8_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(secret != NULL);\n    XXH_ASSERT(4 <= len && len <= 8);\n    seed ^= (xxh_u64)XXH_swap32((xxh_u32)seed) << 32;\n    {   xxh_u32 const input1 = XXH_readLE32(input);\n        xxh_u32 const input2 = XXH_readLE32(input + len - 4);\n        xxh_u64 const bitflip = (XXH_readLE64(secret+8) ^ XXH_readLE64(secret+16)) - seed;\n        xxh_u64 const input64 = input2 + (((xxh_u64)input1) << 32);\n        xxh_u64 const keyed = input64 ^ bitflip;\n        return XXH3_rrmxmx(keyed, len);\n    }\n}\n\nXXH_FORCE_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_9to16_64b(const xxh_u8* 
input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(secret != NULL);\n    XXH_ASSERT(9 <= len && len <= 16);\n    {   xxh_u64 const bitflip1 = (XXH_readLE64(secret+24) ^ XXH_readLE64(secret+32)) + seed;\n        xxh_u64 const bitflip2 = (XXH_readLE64(secret+40) ^ XXH_readLE64(secret+48)) - seed;\n        xxh_u64 const input_lo = XXH_readLE64(input)           ^ bitflip1;\n        xxh_u64 const input_hi = XXH_readLE64(input + len - 8) ^ bitflip2;\n        xxh_u64 const acc = len\n                          + XXH_swap64(input_lo) + input_hi\n                          + XXH3_mul128_fold64(input_lo, input_hi);\n        return XXH3_avalanche(acc);\n    }\n}\n\nXXH_FORCE_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_0to16_64b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(len <= 16);\n    {   if (XXH_likely(len >  8)) return XXH3_len_9to16_64b(input, len, secret, seed);\n        if (XXH_likely(len >= 4)) return XXH3_len_4to8_64b(input, len, secret, seed);\n        if (len) return XXH3_len_1to3_64b(input, len, secret, seed);\n        return XXH64_avalanche(seed ^ (XXH_readLE64(secret+56) ^ XXH_readLE64(secret+64)));\n    }\n}\n\n/*\n * DISCLAIMER: There are known *seed-dependent* multicollisions here due to\n * multiplication by zero, affecting hashes of lengths 17 to 240.\n *\n * However, they are very unlikely.\n *\n * Keep this in mind when using the unseeded XXH3_64bits() variant: As with all\n * unseeded non-cryptographic hashes, it does not attempt to defend itself\n * against specially crafted inputs, only random inputs.\n *\n * Compared to classic UMAC where a 1 in 2^31 chance of 4 consecutive bytes\n * cancelling out the secret is taken an arbitrary number of times (addressed\n * in XXH3_accumulate_512), this collision is very unlikely with random inputs\n * and/or proper seeding:\n *\n * This only has a 1 in 2^63 chance of 8 consecutive bytes cancelling out, in a\n * 
function that is only called up to 16 times per hash with up to 240 bytes of\n * input.\n *\n * This is not too bad for a non-cryptographic hash function, especially with\n * only 64-bit outputs.\n *\n * The 128-bit variant (which trades some speed for strength) is NOT affected\n * by this, although it is always a good idea to use a proper seed if you care\n * about strength.\n */\nXXH_FORCE_INLINE xxh_u64 XXH3_mix16B(const xxh_u8* XXH_RESTRICT input,\n                                     const xxh_u8* XXH_RESTRICT secret, xxh_u64 seed64)\n{\n#if defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \\\n  && defined(__i386__) && defined(__SSE2__)  /* x86 + SSE2 */ \\\n  && !defined(XXH_ENABLE_AUTOVECTORIZE)      /* Define to disable like XXH32 hack */\n    /*\n     * UGLY HACK:\n     * GCC for x86 tends to autovectorize the 128-bit multiply, resulting in\n     * slower code.\n     *\n     * By forcing seed64 into a register, we disrupt the cost model and\n     * cause it to scalarize. See `XXH32_round()`\n     *\n     * FIXME: Clang's output is still _much_ faster -- On an AMD Ryzen 3600,\n     * XXH3_64bits @ len=240 runs at 4.6 GB/s with Clang 9, but 3.3 GB/s on\n     * GCC 9.2, despite both emitting scalar code.\n     *\n     * GCC generates much better scalar code than Clang for the rest of XXH3,\n     * which is why finding a more optimal codepath is of interest.\n     */\n    XXH_COMPILER_GUARD(seed64);\n#endif\n    {   xxh_u64 const input_lo = XXH_readLE64(input);\n        xxh_u64 const input_hi = XXH_readLE64(input+8);\n        return XXH3_mul128_fold64(\n            input_lo ^ (XXH_readLE64(secret)   + seed64),\n            input_hi ^ (XXH_readLE64(secret+8) - seed64)\n        );\n    }\n}\n\n/* For mid-range keys, XXH3 uses a Mum-hash variant. 
*/\nXXH_FORCE_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_17to128_64b(const xxh_u8* XXH_RESTRICT input, size_t len,\n                     const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                     XXH64_hash_t seed)\n{\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;\n    XXH_ASSERT(16 < len && len <= 128);\n\n    {   xxh_u64 acc = len * XXH_PRIME64_1;\n#if XXH_SIZE_OPT >= 1\n        /* Smaller and cleaner, but slightly slower. */\n        unsigned int i = (unsigned int)(len - 1) / 32;\n        do {\n            acc += XXH3_mix16B(input+16 * i, secret+32*i, seed);\n            acc += XXH3_mix16B(input+len-16*(i+1), secret+32*i+16, seed);\n        } while (i-- != 0);\n#else\n        if (len > 32) {\n            if (len > 64) {\n                if (len > 96) {\n                    acc += XXH3_mix16B(input+48, secret+96, seed);\n                    acc += XXH3_mix16B(input+len-64, secret+112, seed);\n                }\n                acc += XXH3_mix16B(input+32, secret+64, seed);\n                acc += XXH3_mix16B(input+len-48, secret+80, seed);\n            }\n            acc += XXH3_mix16B(input+16, secret+32, seed);\n            acc += XXH3_mix16B(input+len-32, secret+48, seed);\n        }\n        acc += XXH3_mix16B(input+0, secret+0, seed);\n        acc += XXH3_mix16B(input+len-16, secret+16, seed);\n#endif\n        return XXH3_avalanche(acc);\n    }\n}\n\nXXH_NO_INLINE XXH_PUREF XXH64_hash_t\nXXH3_len_129to240_64b(const xxh_u8* XXH_RESTRICT input, size_t len,\n                      const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                      XXH64_hash_t seed)\n{\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;\n    XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);\n\n    #define XXH3_MIDSIZE_STARTOFFSET 3\n    #define XXH3_MIDSIZE_LASTOFFSET  17\n\n    {   xxh_u64 acc = len * XXH_PRIME64_1;\n        xxh_u64 acc_end;\n        unsigned int const nbRounds = (unsigned int)len / 16;\n        
unsigned int i;\n        XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);\n        for (i=0; i<8; i++) {\n            acc += XXH3_mix16B(input+(16*i), secret+(16*i), seed);\n        }\n        /* last bytes */\n        acc_end = XXH3_mix16B(input + len - 16, secret + XXH3_SECRET_SIZE_MIN - XXH3_MIDSIZE_LASTOFFSET, seed);\n        XXH_ASSERT(nbRounds >= 8);\n        acc = XXH3_avalanche(acc);\n#if defined(__clang__)                                /* Clang */ \\\n    && (defined(__ARM_NEON) || defined(__ARM_NEON__)) /* NEON */ \\\n    && !defined(XXH_ENABLE_AUTOVECTORIZE)             /* Define to disable */\n        /*\n         * UGLY HACK:\n         * Clang for ARMv7-A tries to vectorize this loop, similar to GCC x86.\n         * Everywhere else, it uses scalar code.\n         *\n         * For 64->128-bit multiplies, even if the NEON code was 100% optimal, it\n         * would still be slower than UMAAL (see XXH_mult64to128).\n         *\n         * Unfortunately, Clang doesn't handle the long multiplies properly and\n         * converts them to the nonexistent \"vmulq_u64\" intrinsic, which is then\n         * scalarized into an ugly mess of VMOV.32 instructions.\n         *\n         * These messes are difficult to avoid without turning autovectorization\n         * off completely, but they are usually relatively minor and/or not\n         * worth it to fix.\n         *\n         * This loop is the easiest to fix, as unlike XXH32, this pragma\n         * _actually works_ because it is a loop vectorization instead of an\n         * SLP vectorization.\n         */\n        #pragma clang loop vectorize(disable)\n#endif\n        for (i=8 ; i < nbRounds; i++) {\n            /*\n             * Prevents clang from unrolling the acc loop and interleaving with this one.\n             */\n            XXH_COMPILER_GUARD(acc);\n            acc_end += XXH3_mix16B(input+(16*i), secret+(16*(i-8)) + XXH3_MIDSIZE_STARTOFFSET, seed);\n        }\n        return XXH3_avalanche(acc + 
acc_end);\n    }\n}\n\n\n/* =======     Long Keys     ======= */\n\n#define XXH_STRIPE_LEN 64\n#define XXH_SECRET_CONSUME_RATE 8   /* nb of secret bytes consumed at each accumulation */\n#define XXH_ACC_NB (XXH_STRIPE_LEN / sizeof(xxh_u64))\n\n#ifdef XXH_OLD_NAMES\n#  define STRIPE_LEN XXH_STRIPE_LEN\n#  define ACC_NB XXH_ACC_NB\n#endif\n\n#ifndef XXH_PREFETCH_DIST\n#  ifdef __clang__\n#    define XXH_PREFETCH_DIST 320\n#  else\n#    if (XXH_VECTOR == XXH_AVX512)\n#      define XXH_PREFETCH_DIST 512\n#    else\n#      define XXH_PREFETCH_DIST 384\n#    endif\n#  endif  /* __clang__ */\n#endif  /* XXH_PREFETCH_DIST */\n\n/*\n * These macros are to generate an XXH3_accumulate() function.\n * The two arguments select the name suffix and target attribute.\n *\n * The name of this symbol is XXH3_accumulate_<name>() and it calls\n * XXH3_accumulate_512_<name>().\n *\n * It may be useful to hand implement this function if the compiler fails to\n * optimize the inline function.\n */\n#define XXH3_ACCUMULATE_TEMPLATE(name)                      \\\nvoid                                                        \\\nXXH3_accumulate_##name(xxh_u64* XXH_RESTRICT acc,           \\\n                       const xxh_u8* XXH_RESTRICT input,    \\\n                       const xxh_u8* XXH_RESTRICT secret,   \\\n                       size_t nbStripes)                    \\\n{                                                           \\\n    size_t n;                                               \\\n    for (n = 0; n < nbStripes; n++ ) {                      \\\n        const xxh_u8* const in = input + n*XXH_STRIPE_LEN;  \\\n        XXH_PREFETCH(in + XXH_PREFETCH_DIST);               \\\n        XXH3_accumulate_512_##name(                         \\\n                 acc,                                       \\\n                 in,                                        \\\n                 secret + n*XXH_SECRET_CONSUME_RATE);       \\\n    }                                            
           \\\n}\n\n\nXXH_FORCE_INLINE void XXH_writeLE64(void* dst, xxh_u64 v64)\n{\n    if (!XXH_CPU_LITTLE_ENDIAN) v64 = XXH_swap64(v64);\n    XXH_memcpy(dst, &v64, sizeof(v64));\n}\n\n/* Several intrinsic functions below are supposed to accept __int64 as argument,\n * as documented in https://software.intel.com/sites/landingpage/IntrinsicsGuide/ .\n * However, several environments do not define __int64 type,\n * requiring a workaround.\n */\n#if !defined (__VMS) \\\n  && (defined (__cplusplus) \\\n  || (defined (__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L) /* C99 */) )\n    typedef int64_t xxh_i64;\n#else\n    /* the following type must have a width of 64-bit */\n    typedef long long xxh_i64;\n#endif\n\n\n/*\n * XXH3_accumulate_512 is the tightest loop for long inputs, and it is the most optimized.\n *\n * It is a hardened version of UMAC, based off of FARSH's implementation.\n *\n * This was chosen because it adapts quite well to 32-bit, 64-bit, and SIMD\n * implementations, and it is ridiculously fast.\n *\n * We harden it by mixing the original input to the accumulators as well as the product.\n *\n * This means that in the (relatively likely) case of a multiply by zero, the\n * original input is preserved.\n *\n * On 128-bit inputs, we swap 64-bit pairs when we add the input to improve\n * cross-pollination, as otherwise the upper and lower halves would be\n * essentially independent.\n *\n * This doesn't matter on 64-bit hashes since they all get merged together in\n * the end, so we skip the extra step.\n *\n * Both XXH3_64bits and XXH3_128bits use this subroutine.\n */\n\n#if (XXH_VECTOR == XXH_AVX512) \\\n     || (defined(XXH_DISPATCH_AVX512) && XXH_DISPATCH_AVX512 != 0)\n\n#ifndef XXH_TARGET_AVX512\n# define XXH_TARGET_AVX512  /* disable attribute target */\n#endif\n\nXXH_FORCE_INLINE XXH_TARGET_AVX512 void\nXXH3_accumulate_512_avx512(void* XXH_RESTRICT acc,\n                     const void* XXH_RESTRICT input,\n                     const void* 
XXH_RESTRICT secret)\n{\n    __m512i* const xacc = (__m512i *) acc;\n    XXH_ASSERT((((size_t)acc) & 63) == 0);\n    XXH_STATIC_ASSERT(XXH_STRIPE_LEN == sizeof(__m512i));\n\n    {\n        /* data_vec    = input[0]; */\n        __m512i const data_vec    = _mm512_loadu_si512   (input);\n        /* key_vec     = secret[0]; */\n        __m512i const key_vec     = _mm512_loadu_si512   (secret);\n        /* data_key    = data_vec ^ key_vec; */\n        __m512i const data_key    = _mm512_xor_si512     (data_vec, key_vec);\n        /* data_key_lo = data_key >> 32; */\n        __m512i const data_key_lo = _mm512_srli_epi64 (data_key, 32);\n        /* product     = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */\n        __m512i const product     = _mm512_mul_epu32     (data_key, data_key_lo);\n        /* xacc[0] += swap(data_vec); */\n        __m512i const data_swap = _mm512_shuffle_epi32(data_vec, (_MM_PERM_ENUM)_MM_SHUFFLE(1, 0, 3, 2));\n        __m512i const sum       = _mm512_add_epi64(*xacc, data_swap);\n        /* xacc[0] += product; */\n        *xacc = _mm512_add_epi64(product, sum);\n    }\n}\nXXH_FORCE_INLINE XXH_TARGET_AVX512 XXH3_ACCUMULATE_TEMPLATE(avx512)\n\n/*\n * XXH3_scrambleAcc: Scrambles the accumulators to improve mixing.\n *\n * Multiplication isn't perfect, as explained by Google in HighwayHash:\n *\n *  // Multiplication mixes/scrambles bytes 0-7 of the 64-bit result to\n *  // varying degrees. 
In descending order of goodness, bytes\n *  // 3 4 2 5 1 6 0 7 have quality 228 224 164 160 100 96 36 32.\n *  // As expected, the upper and lower bytes are much worse.\n *\n * Source: https://github.com/google/highwayhash/blob/0aaf66b/highwayhash/hh_avx2.h#L291\n *\n * Since our algorithm uses a pseudorandom secret to add some variance into the\n * mix, we don't need to (or want to) mix as often or as much as HighwayHash does.\n *\n * This isn't as tight as XXH3_accumulate, but still written in SIMD to avoid\n * extraction.\n *\n * Both XXH3_64bits and XXH3_128bits use this subroutine.\n */\n\nXXH_FORCE_INLINE XXH_TARGET_AVX512 void\nXXH3_scrambleAcc_avx512(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 63) == 0);\n    XXH_STATIC_ASSERT(XXH_STRIPE_LEN == sizeof(__m512i));\n    {   __m512i* const xacc = (__m512i*) acc;\n        const __m512i prime32 = _mm512_set1_epi32((int)XXH_PRIME32_1);\n\n        /* xacc[0] ^= (xacc[0] >> 47) */\n        __m512i const acc_vec     = *xacc;\n        __m512i const shifted     = _mm512_srli_epi64    (acc_vec, 47);\n        /* xacc[0] ^= secret; */\n        __m512i const key_vec     = _mm512_loadu_si512   (secret);\n        __m512i const data_key    = _mm512_ternarylogic_epi32(key_vec, acc_vec, shifted, 0x96 /* key_vec ^ acc_vec ^ shifted */);\n\n        /* xacc[0] *= XXH_PRIME32_1; */\n        __m512i const data_key_hi = _mm512_srli_epi64 (data_key, 32);\n        __m512i const prod_lo     = _mm512_mul_epu32     (data_key, prime32);\n        __m512i const prod_hi     = _mm512_mul_epu32     (data_key_hi, prime32);\n        *xacc = _mm512_add_epi64(prod_lo, _mm512_slli_epi64(prod_hi, 32));\n    }\n}\n\nXXH_FORCE_INLINE XXH_TARGET_AVX512 void\nXXH3_initCustomSecret_avx512(void* XXH_RESTRICT customSecret, xxh_u64 seed64)\n{\n    XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 63) == 0);\n    XXH_STATIC_ASSERT(XXH_SEC_ALIGN == 64);\n    XXH_ASSERT(((size_t)customSecret & 63) == 0);\n    
(void)(&XXH_writeLE64);\n    {   int const nbRounds = XXH_SECRET_DEFAULT_SIZE / sizeof(__m512i);\n        __m512i const seed_pos = _mm512_set1_epi64((xxh_i64)seed64);\n        __m512i const seed     = _mm512_mask_sub_epi64(seed_pos, 0xAA, _mm512_set1_epi8(0), seed_pos);\n\n        const __m512i* const src  = (const __m512i*) ((const void*) XXH3_kSecret);\n              __m512i* const dest = (      __m512i*) customSecret;\n        int i;\n        XXH_ASSERT(((size_t)src & 63) == 0); /* control alignment */\n        XXH_ASSERT(((size_t)dest & 63) == 0);\n        for (i=0; i < nbRounds; ++i) {\n            dest[i] = _mm512_add_epi64(_mm512_load_si512(src + i), seed);\n    }   }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_AVX2) \\\n    || (defined(XXH_DISPATCH_AVX2) && XXH_DISPATCH_AVX2 != 0)\n\n#ifndef XXH_TARGET_AVX2\n# define XXH_TARGET_AVX2  /* disable attribute target */\n#endif\n\nXXH_FORCE_INLINE XXH_TARGET_AVX2 void\nXXH3_accumulate_512_avx2( void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 31) == 0);\n    {   __m256i* const xacc    =       (__m256i *) acc;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm256_loadu_si256 requires  a const __m256i * pointer for some reason. */\n        const         __m256i* const xinput  = (const __m256i *) input;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm256_loadu_si256 requires a const __m256i * pointer for some reason. 
*/\n        const         __m256i* const xsecret = (const __m256i *) secret;\n\n        size_t i;\n        for (i=0; i < XXH_STRIPE_LEN/sizeof(__m256i); i++) {\n            /* data_vec    = xinput[i]; */\n            __m256i const data_vec    = _mm256_loadu_si256    (xinput+i);\n            /* key_vec     = xsecret[i]; */\n            __m256i const key_vec     = _mm256_loadu_si256   (xsecret+i);\n            /* data_key    = data_vec ^ key_vec; */\n            __m256i const data_key    = _mm256_xor_si256     (data_vec, key_vec);\n            /* data_key_lo = data_key >> 32; */\n            __m256i const data_key_lo = _mm256_srli_epi64 (data_key, 32);\n            /* product     = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */\n            __m256i const product     = _mm256_mul_epu32     (data_key, data_key_lo);\n            /* xacc[i] += swap(data_vec); */\n            __m256i const data_swap = _mm256_shuffle_epi32(data_vec, _MM_SHUFFLE(1, 0, 3, 2));\n            __m256i const sum       = _mm256_add_epi64(xacc[i], data_swap);\n            /* xacc[i] += product; */\n            xacc[i] = _mm256_add_epi64(product, sum);\n    }   }\n}\nXXH_FORCE_INLINE XXH_TARGET_AVX2 XXH3_ACCUMULATE_TEMPLATE(avx2)\n\nXXH_FORCE_INLINE XXH_TARGET_AVX2 void\nXXH3_scrambleAcc_avx2(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 31) == 0);\n    {   __m256i* const xacc = (__m256i*) acc;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm256_loadu_si256 requires a const __m256i * pointer for some reason. 
*/\n        const         __m256i* const xsecret = (const __m256i *) secret;\n        const __m256i prime32 = _mm256_set1_epi32((int)XXH_PRIME32_1);\n\n        size_t i;\n        for (i=0; i < XXH_STRIPE_LEN/sizeof(__m256i); i++) {\n            /* xacc[i] ^= (xacc[i] >> 47) */\n            __m256i const acc_vec     = xacc[i];\n            __m256i const shifted     = _mm256_srli_epi64    (acc_vec, 47);\n            __m256i const data_vec    = _mm256_xor_si256     (acc_vec, shifted);\n            /* xacc[i] ^= xsecret[i]; */\n            __m256i const key_vec     = _mm256_loadu_si256   (xsecret+i);\n            __m256i const data_key    = _mm256_xor_si256     (data_vec, key_vec);\n\n            /* xacc[i] *= XXH_PRIME32_1; */\n            __m256i const data_key_hi = _mm256_srli_epi64 (data_key, 32);\n            __m256i const prod_lo     = _mm256_mul_epu32     (data_key, prime32);\n            __m256i const prod_hi     = _mm256_mul_epu32     (data_key_hi, prime32);\n            xacc[i] = _mm256_add_epi64(prod_lo, _mm256_slli_epi64(prod_hi, 32));\n        }\n    }\n}\n\nXXH_FORCE_INLINE XXH_TARGET_AVX2 void XXH3_initCustomSecret_avx2(void* XXH_RESTRICT customSecret, xxh_u64 seed64)\n{\n    XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 31) == 0);\n    XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE / sizeof(__m256i)) == 6);\n    XXH_STATIC_ASSERT(XXH_SEC_ALIGN <= 64);\n    (void)(&XXH_writeLE64);\n    XXH_PREFETCH(customSecret);\n    {   __m256i const seed = _mm256_set_epi64x((xxh_i64)(0U - seed64), (xxh_i64)seed64, (xxh_i64)(0U - seed64), (xxh_i64)seed64);\n\n        const __m256i* const src  = (const __m256i*) ((const void*) XXH3_kSecret);\n              __m256i*       dest = (      __m256i*) customSecret;\n\n#       if defined(__GNUC__) || defined(__clang__)\n        /*\n         * On GCC & Clang, marking 'dest' as modified will cause the compiler to:\n         *   - not extract the secret from sse registers in the internal loop\n         *   - use less common registers, and 
avoid pushing these registers onto the stack\n         */\n        XXH_COMPILER_GUARD(dest);\n#       endif\n        XXH_ASSERT(((size_t)src & 31) == 0); /* control alignment */\n        XXH_ASSERT(((size_t)dest & 31) == 0);\n\n        /* GCC -O2 needs the loop unrolled manually */\n        dest[0] = _mm256_add_epi64(_mm256_load_si256(src+0), seed);\n        dest[1] = _mm256_add_epi64(_mm256_load_si256(src+1), seed);\n        dest[2] = _mm256_add_epi64(_mm256_load_si256(src+2), seed);\n        dest[3] = _mm256_add_epi64(_mm256_load_si256(src+3), seed);\n        dest[4] = _mm256_add_epi64(_mm256_load_si256(src+4), seed);\n        dest[5] = _mm256_add_epi64(_mm256_load_si256(src+5), seed);\n    }\n}\n\n#endif\n\n/* x86dispatch always generates SSE2 */\n#if (XXH_VECTOR == XXH_SSE2) || defined(XXH_X86DISPATCH)\n\n#ifndef XXH_TARGET_SSE2\n# define XXH_TARGET_SSE2  /* disable attribute target */\n#endif\n\nXXH_FORCE_INLINE XXH_TARGET_SSE2 void\nXXH3_accumulate_512_sse2( void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    /* SSE2 is just a half-scale version of the AVX2 version. */\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    {   __m128i* const xacc    =       (__m128i *) acc;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm_loadu_si128 requires a const __m128i * pointer for some reason. */\n        const         __m128i* const xinput  = (const __m128i *) input;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm_loadu_si128 requires a const __m128i * pointer for some reason. 
*/\n        const         __m128i* const xsecret = (const __m128i *) secret;\n\n        size_t i;\n        for (i=0; i < XXH_STRIPE_LEN/sizeof(__m128i); i++) {\n            /* data_vec    = xinput[i]; */\n            __m128i const data_vec    = _mm_loadu_si128   (xinput+i);\n            /* key_vec     = xsecret[i]; */\n            __m128i const key_vec     = _mm_loadu_si128   (xsecret+i);\n            /* data_key    = data_vec ^ key_vec; */\n            __m128i const data_key    = _mm_xor_si128     (data_vec, key_vec);\n            /* data_key_lo = data_key >> 32; */\n            __m128i const data_key_lo = _mm_shuffle_epi32 (data_key, _MM_SHUFFLE(0, 3, 0, 1));\n            /* product     = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */\n            __m128i const product     = _mm_mul_epu32     (data_key, data_key_lo);\n            /* xacc[i] += swap(data_vec); */\n            __m128i const data_swap = _mm_shuffle_epi32(data_vec, _MM_SHUFFLE(1,0,3,2));\n            __m128i const sum       = _mm_add_epi64(xacc[i], data_swap);\n            /* xacc[i] += product; */\n            xacc[i] = _mm_add_epi64(product, sum);\n    }   }\n}\nXXH_FORCE_INLINE XXH_TARGET_SSE2 XXH3_ACCUMULATE_TEMPLATE(sse2)\n\nXXH_FORCE_INLINE XXH_TARGET_SSE2 void\nXXH3_scrambleAcc_sse2(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    {   __m128i* const xacc = (__m128i*) acc;\n        /* Unaligned. This is mainly for pointer arithmetic, and because\n         * _mm_loadu_si128 requires a const __m128i * pointer for some reason. 
*/\n        const         __m128i* const xsecret = (const __m128i *) secret;\n        const __m128i prime32 = _mm_set1_epi32((int)XXH_PRIME32_1);\n\n        size_t i;\n        for (i=0; i < XXH_STRIPE_LEN/sizeof(__m128i); i++) {\n            /* xacc[i] ^= (xacc[i] >> 47) */\n            __m128i const acc_vec     = xacc[i];\n            __m128i const shifted     = _mm_srli_epi64    (acc_vec, 47);\n            __m128i const data_vec    = _mm_xor_si128     (acc_vec, shifted);\n            /* xacc[i] ^= xsecret[i]; */\n            __m128i const key_vec     = _mm_loadu_si128   (xsecret+i);\n            __m128i const data_key    = _mm_xor_si128     (data_vec, key_vec);\n\n            /* xacc[i] *= XXH_PRIME32_1; */\n            __m128i const data_key_hi = _mm_shuffle_epi32 (data_key, _MM_SHUFFLE(0, 3, 0, 1));\n            __m128i const prod_lo     = _mm_mul_epu32     (data_key, prime32);\n            __m128i const prod_hi     = _mm_mul_epu32     (data_key_hi, prime32);\n            xacc[i] = _mm_add_epi64(prod_lo, _mm_slli_epi64(prod_hi, 32));\n        }\n    }\n}\n\nXXH_FORCE_INLINE XXH_TARGET_SSE2 void XXH3_initCustomSecret_sse2(void* XXH_RESTRICT customSecret, xxh_u64 seed64)\n{\n    XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 15) == 0);\n    (void)(&XXH_writeLE64);\n    {   int const nbRounds = XXH_SECRET_DEFAULT_SIZE / sizeof(__m128i);\n\n#       if defined(_MSC_VER) && defined(_M_IX86) && _MSC_VER <= 1900\n        /* MSVC 32bit mode does not support _mm_set_epi64x before 2015\n         * and some specific variants of 2015 may also lack it */\n        /* Cast to unsigned 64-bit first to avoid signed arithmetic issues */\n        xxh_u64 const seed64_unsigned = (xxh_u64)seed64;\n        xxh_u64 const neg_seed64 = (xxh_u64)(0ULL - seed64_unsigned);\n        __m128i const seed = _mm_set_epi32(\n            (int)(neg_seed64 >> 32),      /* high 32 bits of negated seed */\n            (int)(neg_seed64),            /* low 32 bits of negated seed */\n            
(int)(seed64_unsigned >> 32), /* high 32 bits of original seed */\n            (int)(seed64_unsigned)        /* low 32 bits of original seed */\n        );\n#       else\n        __m128i const seed = _mm_set_epi64x((xxh_i64)(0U - seed64), (xxh_i64)seed64);\n#       endif\n        int i;\n\n        const void* const src16 = XXH3_kSecret;\n        __m128i* dst16 = (__m128i*) customSecret;\n#       if defined(__GNUC__) || defined(__clang__)\n        /*\n         * On GCC & Clang, marking 'dest' as modified will cause the compiler to:\n         *   - not extract the secret from sse registers in the internal loop\n         *   - use less common registers, and avoid pushing these registers onto the stack\n         */\n        XXH_COMPILER_GUARD(dst16);\n#       endif\n        XXH_ASSERT(((size_t)src16 & 15) == 0); /* control alignment */\n        XXH_ASSERT(((size_t)dst16 & 15) == 0);\n\n        for (i=0; i < nbRounds; ++i) {\n            dst16[i] = _mm_add_epi64(_mm_load_si128((const __m128i *)src16+i), seed);\n    }   }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_NEON)\n\n/* forward declarations for the scalar routines */\nXXH_FORCE_INLINE void\nXXH3_scalarRound(void* XXH_RESTRICT acc, void const* XXH_RESTRICT input,\n                 void const* XXH_RESTRICT secret, size_t lane);\n\nXXH_FORCE_INLINE void\nXXH3_scalarScrambleRound(void* XXH_RESTRICT acc,\n                         void const* XXH_RESTRICT secret, size_t lane);\n\n/*!\n * @internal\n * @brief The bulk processing loop for NEON and WASM SIMD128.\n *\n * The NEON code path is actually partially scalar when running on AArch64. 
This\n * is to optimize the pipelining and can yield up to a 15% speedup depending on\n * the CPU, and it also mitigates some GCC codegen issues.\n *\n * @see XXH3_NEON_LANES for configuring this and details about this optimization.\n *\n * NEON's 32-bit to 64-bit long multiply takes a half vector of 32-bit\n * integers, unlike the other platforms, which mask full 64-bit vectors,\n * so the setup is more complicated than just shifting right.\n *\n * Additionally, there is an optimization for 4 lanes at once noted below.\n *\n * Since, as stated, the optimal number of lanes for Cortexes is 6,\n * there needs to be *three* versions of the accumulate operation used\n * for the remaining 2 lanes.\n *\n * WASM's SIMD128 uses SIMDe's arm_neon.h polyfill because the intrinsics overlap\n * nearly perfectly.\n */\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_neon( void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    XXH_STATIC_ASSERT(XXH3_NEON_LANES > 0 && XXH3_NEON_LANES <= XXH_ACC_NB && XXH3_NEON_LANES % 2 == 0);\n    {   /* GCC for darwin arm64 does not like aliasing here */\n        xxh_aliasing_uint64x2_t* const xacc = (xxh_aliasing_uint64x2_t*) acc;\n        /* We don't use a uint32x4_t pointer because it causes bus errors on ARMv7. 
*/\n        uint8_t const* xinput = (const uint8_t *) input;\n        uint8_t const* xsecret  = (const uint8_t *) secret;\n\n        size_t i;\n#ifdef __wasm_simd128__\n        /*\n         * On WASM SIMD128, Clang emits direct address loads when XXH3_kSecret\n         * is constant propagated, which results in it converting it to this\n         * inside the loop:\n         *\n         *    a = v128.load(XXH3_kSecret +  0 + $secret_offset, offset = 0)\n         *    b = v128.load(XXH3_kSecret + 16 + $secret_offset, offset = 0)\n         *    ...\n         *\n         * This requires a full 32-bit address immediate (and therefore a 6 byte\n         * instruction) as well as an add for each offset.\n         *\n         * Putting an asm guard prevents it from folding (at the cost of losing\n         * the alignment hint), and uses the free offset in `v128.load` instead\n         * of adding secret_offset each time which overall reduces code size by\n         * about a kilobyte and improves performance.\n         */\n        XXH_COMPILER_GUARD(xsecret);\n#endif\n        /* Scalar lanes use the normal scalarRound routine */\n        for (i = XXH3_NEON_LANES; i < XXH_ACC_NB; i++) {\n            XXH3_scalarRound(acc, input, secret, i);\n        }\n        i = 0;\n        /* 4 NEON lanes at a time. 
*/\n        for (; i+1 < XXH3_NEON_LANES / 2; i+=2) {\n            /* data_vec = xinput[i]; */\n            uint64x2_t data_vec_1 = XXH_vld1q_u64(xinput  + (i * 16));\n            uint64x2_t data_vec_2 = XXH_vld1q_u64(xinput  + ((i+1) * 16));\n            /* key_vec  = xsecret[i];  */\n            uint64x2_t key_vec_1  = XXH_vld1q_u64(xsecret + (i * 16));\n            uint64x2_t key_vec_2  = XXH_vld1q_u64(xsecret + ((i+1) * 16));\n            /* data_swap = swap(data_vec) */\n            uint64x2_t data_swap_1 = vextq_u64(data_vec_1, data_vec_1, 1);\n            uint64x2_t data_swap_2 = vextq_u64(data_vec_2, data_vec_2, 1);\n            /* data_key = data_vec ^ key_vec; */\n            uint64x2_t data_key_1 = veorq_u64(data_vec_1, key_vec_1);\n            uint64x2_t data_key_2 = veorq_u64(data_vec_2, key_vec_2);\n\n            /*\n             * If we reinterpret the 64x2 vectors as 32x4 vectors, we can use a\n             * de-interleave operation for 4 lanes in 1 step with `vuzpq_u32` to\n             * get one vector with the low 32 bits of each lane, and one vector\n             * with the high 32 bits of each lane.\n             *\n             * The intrinsic returns a double vector because the original ARMv7-a\n             * instruction modified both arguments in place. 
AArch64 and SIMD128 emit\n             * two instructions from this intrinsic.\n             *\n             *  [ dk11L | dk11H | dk12L | dk12H ] -> [ dk11L | dk12L | dk21L | dk22L ]\n             *  [ dk21L | dk21H | dk22L | dk22H ] -> [ dk11H | dk12H | dk21H | dk22H ]\n             */\n            uint32x4x2_t unzipped = vuzpq_u32(\n                vreinterpretq_u32_u64(data_key_1),\n                vreinterpretq_u32_u64(data_key_2)\n            );\n            /* data_key_lo = data_key & 0xFFFFFFFF */\n            uint32x4_t data_key_lo = unzipped.val[0];\n            /* data_key_hi = data_key >> 32 */\n            uint32x4_t data_key_hi = unzipped.val[1];\n            /*\n             * Then, we can split the vectors horizontally and multiply; like most\n             * widening intrinsics, the multiply has a variant that works on the\n             * high half vectors for free on AArch64. A similar instruction is\n             * available on SIMD128.\n             *\n             * sum = data_swap + (u64x2) data_key_lo * (u64x2) data_key_hi\n             */\n            uint64x2_t sum_1 = XXH_vmlal_low_u32(data_swap_1, data_key_lo, data_key_hi);\n            uint64x2_t sum_2 = XXH_vmlal_high_u32(data_swap_2, data_key_lo, data_key_hi);\n            /*\n             * Clang reorders\n             *    a += b * c;     // umlal   swap.2d, dkl.2s, dkh.2s\n             *    c += a;         // add     acc.2d, acc.2d, swap.2d\n             * to\n             *    c += a;         // add     acc.2d, acc.2d, swap.2d\n             *    c += b * c;     // umlal   acc.2d, dkl.2s, dkh.2s\n             *\n             * While it would make sense in theory since the addition is faster,\n             * for reasons likely related to umlal being limited to certain NEON\n             * pipelines, this is worse. 
A compiler guard fixes this.\n             */\n            XXH_COMPILER_GUARD_CLANG_NEON(sum_1);\n            XXH_COMPILER_GUARD_CLANG_NEON(sum_2);\n            /* xacc[i] = acc_vec + sum; */\n            xacc[i]   = vaddq_u64(xacc[i], sum_1);\n            xacc[i+1] = vaddq_u64(xacc[i+1], sum_2);\n        }\n        /* Operate on the remaining NEON lanes 2 at a time. */\n        for (; i < XXH3_NEON_LANES / 2; i++) {\n            /* data_vec = xinput[i]; */\n            uint64x2_t data_vec = XXH_vld1q_u64(xinput  + (i * 16));\n            /* key_vec  = xsecret[i];  */\n            uint64x2_t key_vec  = XXH_vld1q_u64(xsecret + (i * 16));\n            /* data_swap = swap(data_vec) */\n            uint64x2_t data_swap = vextq_u64(data_vec, data_vec, 1);\n            /* data_key = data_vec ^ key_vec; */\n            uint64x2_t data_key = veorq_u64(data_vec, key_vec);\n            /* For two lanes, just use VMOVN and VSHRN. */\n            /* data_key_lo = data_key & 0xFFFFFFFF; */\n            uint32x2_t data_key_lo = vmovn_u64(data_key);\n            /* data_key_hi = data_key >> 32; */\n            uint32x2_t data_key_hi = vshrn_n_u64(data_key, 32);\n            /* sum = data_swap + (u64x2) data_key_lo * (u64x2) data_key_hi; */\n            uint64x2_t sum = vmlal_u32(data_swap, data_key_lo, data_key_hi);\n            /* Same Clang workaround as before */\n            XXH_COMPILER_GUARD_CLANG_NEON(sum);\n            /* xacc[i] = acc_vec + sum; */\n            xacc[i] = vaddq_u64 (xacc[i], sum);\n        }\n    }\n}\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(neon)\n\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_neon(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n\n    {   xxh_aliasing_uint64x2_t* xacc       = (xxh_aliasing_uint64x2_t*) acc;\n        uint8_t const* xsecret = (uint8_t const*) secret;\n\n        size_t i;\n        /* WASM uses operator overloads and doesn't need these. 
*/\n#ifndef __wasm_simd128__\n        /* { prime32_1, prime32_1 } */\n        uint32x2_t const kPrimeLo = vdup_n_u32(XXH_PRIME32_1);\n        /* { 0, prime32_1, 0, prime32_1 } */\n        uint32x4_t const kPrimeHi = vreinterpretq_u32_u64(vdupq_n_u64((xxh_u64)XXH_PRIME32_1 << 32));\n#endif\n\n        /* AArch64 uses both scalar and neon at the same time */\n        for (i = XXH3_NEON_LANES; i < XXH_ACC_NB; i++) {\n            XXH3_scalarScrambleRound(acc, secret, i);\n        }\n        for (i=0; i < XXH3_NEON_LANES / 2; i++) {\n            /* xacc[i] ^= (xacc[i] >> 47); */\n            uint64x2_t acc_vec  = xacc[i];\n            uint64x2_t shifted  = vshrq_n_u64(acc_vec, 47);\n            uint64x2_t data_vec = veorq_u64(acc_vec, shifted);\n\n            /* xacc[i] ^= xsecret[i]; */\n            uint64x2_t key_vec  = XXH_vld1q_u64(xsecret + (i * 16));\n            uint64x2_t data_key = veorq_u64(data_vec, key_vec);\n            /* xacc[i] *= XXH_PRIME32_1 */\n#ifdef __wasm_simd128__\n            /* SIMD128 has multiply by u64x2, use it instead of expanding and scalarizing */\n            xacc[i] = data_key * XXH_PRIME32_1;\n#else\n            /*\n             * Expanded version with portable NEON intrinsics\n             *\n             *    lo(x) * lo(y) + (hi(x) * lo(y) << 32)\n             *\n             * prod_hi = hi(data_key) * lo(prime) << 32\n             *\n             * Since we only need 32 bits of this multiply a trick can be used, reinterpreting the vector\n             * as a uint32x4_t and multiplying by { 0, prime, 0, prime } to cancel out the unwanted bits\n             * and avoid the shift.\n             */\n            uint32x4_t prod_hi = vmulq_u32 (vreinterpretq_u32_u64(data_key), kPrimeHi);\n            /* Extract low bits for vmlal_u32  */\n            uint32x2_t data_key_lo = vmovn_u64(data_key);\n            /* xacc[i] = prod_hi + lo(data_key) * XXH_PRIME32_1; */\n            xacc[i] = vmlal_u32(vreinterpretq_u64_u32(prod_hi), 
data_key_lo, kPrimeLo);\n#endif\n        }\n    }\n}\n#endif\n\n#if (XXH_VECTOR == XXH_VSX)\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_vsx(  void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    /* presumed aligned */\n    xxh_aliasing_u64x2* const xacc = (xxh_aliasing_u64x2*) acc;\n    xxh_u8 const* const xinput   = (xxh_u8 const*) input;   /* no alignment restriction */\n    xxh_u8 const* const xsecret  = (xxh_u8 const*) secret;    /* no alignment restriction */\n    xxh_u64x2 const v32 = { 32, 32 };\n    size_t i;\n    for (i = 0; i < XXH_STRIPE_LEN / sizeof(xxh_u64x2); i++) {\n        /* data_vec = xinput[i]; */\n        xxh_u64x2 const data_vec = XXH_vec_loadu(xinput + 16*i);\n        /* key_vec = xsecret[i]; */\n        xxh_u64x2 const key_vec  = XXH_vec_loadu(xsecret + 16*i);\n        xxh_u64x2 const data_key = data_vec ^ key_vec;\n        /* shuffled = (data_key << 32) | (data_key >> 32); */\n        xxh_u32x4 const shuffled = (xxh_u32x4)vec_rl(data_key, v32);\n        /* product = ((xxh_u64x2)data_key & 0xFFFFFFFF) * ((xxh_u64x2)shuffled & 0xFFFFFFFF); */\n        xxh_u64x2 const product  = XXH_vec_mulo((xxh_u32x4)data_key, shuffled);\n        /* acc_vec = xacc[i]; */\n        xxh_u64x2 acc_vec        = xacc[i];\n        acc_vec += product;\n\n        /* swap high and low halves */\n#ifdef __s390x__\n        acc_vec += vec_permi(data_vec, data_vec, 2);\n#else\n        acc_vec += vec_xxpermdi(data_vec, data_vec, 2);\n#endif\n        xacc[i] = acc_vec;\n    }\n}\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(vsx)\n\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_vsx(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n\n    {   xxh_aliasing_u64x2* const xacc = (xxh_aliasing_u64x2*) acc;\n        const xxh_u8* const xsecret = (const xxh_u8*) secret;\n        /* constants */\n        xxh_u64x2 const v32  = { 32, 32 };\n        
xxh_u64x2 const v47 = { 47, 47 };\n        xxh_u32x4 const prime = { XXH_PRIME32_1, XXH_PRIME32_1, XXH_PRIME32_1, XXH_PRIME32_1 };\n        size_t i;\n        for (i = 0; i < XXH_STRIPE_LEN / sizeof(xxh_u64x2); i++) {\n            /* xacc[i] ^= (xacc[i] >> 47); */\n            xxh_u64x2 const acc_vec  = xacc[i];\n            xxh_u64x2 const data_vec = acc_vec ^ (acc_vec >> v47);\n\n            /* xacc[i] ^= xsecret[i]; */\n            xxh_u64x2 const key_vec  = XXH_vec_loadu(xsecret + 16*i);\n            xxh_u64x2 const data_key = data_vec ^ key_vec;\n\n            /* xacc[i] *= XXH_PRIME32_1 */\n            /* prod_lo = ((xxh_u64x2)data_key & 0xFFFFFFFF) * ((xxh_u64x2)prime & 0xFFFFFFFF);  */\n            xxh_u64x2 const prod_even  = XXH_vec_mule((xxh_u32x4)data_key, prime);\n            /* prod_hi = ((xxh_u64x2)data_key >> 32) * ((xxh_u64x2)prime >> 32);  */\n            xxh_u64x2 const prod_odd  = XXH_vec_mulo((xxh_u32x4)data_key, prime);\n            xacc[i] = prod_odd + (prod_even << v32);\n    }   }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_SVE)\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_sve( void* XXH_RESTRICT acc,\n                   const void* XXH_RESTRICT input,\n                   const void* XXH_RESTRICT secret)\n{\n    uint64_t *xacc = (uint64_t *)acc;\n    const uint64_t *xinput = (const uint64_t *)(const void *)input;\n    const uint64_t *xsecret = (const uint64_t *)(const void *)secret;\n    svuint64_t kSwap = sveor_n_u64_z(svptrue_b64(), svindex_u64(0, 1), 1);\n    uint64_t element_count = svcntd();\n    if (element_count >= 8) {\n        svbool_t mask = svptrue_pat_b64(SV_VL8);\n        svuint64_t vacc = svld1_u64(mask, xacc);\n        ACCRND(vacc, 0);\n        svst1_u64(mask, xacc, vacc);\n    } else if (element_count == 2) {   /* sve128 */\n        svbool_t mask = svptrue_pat_b64(SV_VL2);\n        svuint64_t acc0 = svld1_u64(mask, xacc + 0);\n        svuint64_t acc1 = svld1_u64(mask, xacc + 2);\n        svuint64_t acc2 = svld1_u64(mask, xacc + 
4);\n        svuint64_t acc3 = svld1_u64(mask, xacc + 6);\n        ACCRND(acc0, 0);\n        ACCRND(acc1, 2);\n        ACCRND(acc2, 4);\n        ACCRND(acc3, 6);\n        svst1_u64(mask, xacc + 0, acc0);\n        svst1_u64(mask, xacc + 2, acc1);\n        svst1_u64(mask, xacc + 4, acc2);\n        svst1_u64(mask, xacc + 6, acc3);\n    } else {\n        svbool_t mask = svptrue_pat_b64(SV_VL4);\n        svuint64_t acc0 = svld1_u64(mask, xacc + 0);\n        svuint64_t acc1 = svld1_u64(mask, xacc + 4);\n        ACCRND(acc0, 0);\n        ACCRND(acc1, 4);\n        svst1_u64(mask, xacc + 0, acc0);\n        svst1_u64(mask, xacc + 4, acc1);\n    }\n}\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_sve(xxh_u64* XXH_RESTRICT acc,\n               const xxh_u8* XXH_RESTRICT input,\n               const xxh_u8* XXH_RESTRICT secret,\n               size_t nbStripes)\n{\n    if (nbStripes != 0) {\n        uint64_t *xacc = (uint64_t *)acc;\n        const uint64_t *xinput = (const uint64_t *)(const void *)input;\n        const uint64_t *xsecret = (const uint64_t *)(const void *)secret;\n        svuint64_t kSwap = sveor_n_u64_z(svptrue_b64(), svindex_u64(0, 1), 1);\n        uint64_t element_count = svcntd();\n        if (element_count >= 8) {\n            svbool_t mask = svptrue_pat_b64(SV_VL8);\n            svuint64_t vacc = svld1_u64(mask, xacc + 0);\n            do {\n                /* svprfd(svbool_t, void *, enum svprfop); */\n                svprfd(mask, xinput + 128, SV_PLDL1STRM);\n                ACCRND(vacc, 0);\n                xinput += 8;\n                xsecret += 1;\n                nbStripes--;\n           } while (nbStripes != 0);\n\n           svst1_u64(mask, xacc + 0, vacc);\n        } else if (element_count == 2) { /* sve128 */\n            svbool_t mask = svptrue_pat_b64(SV_VL2);\n            svuint64_t acc0 = svld1_u64(mask, xacc + 0);\n            svuint64_t acc1 = svld1_u64(mask, xacc + 2);\n            svuint64_t acc2 = svld1_u64(mask, xacc + 4);\n            
svuint64_t acc3 = svld1_u64(mask, xacc + 6);\n            do {\n                svprfd(mask, xinput + 128, SV_PLDL1STRM);\n                ACCRND(acc0, 0);\n                ACCRND(acc1, 2);\n                ACCRND(acc2, 4);\n                ACCRND(acc3, 6);\n                xinput += 8;\n                xsecret += 1;\n                nbStripes--;\n           } while (nbStripes != 0);\n\n           svst1_u64(mask, xacc + 0, acc0);\n           svst1_u64(mask, xacc + 2, acc1);\n           svst1_u64(mask, xacc + 4, acc2);\n           svst1_u64(mask, xacc + 6, acc3);\n        } else {\n            svbool_t mask = svptrue_pat_b64(SV_VL4);\n            svuint64_t acc0 = svld1_u64(mask, xacc + 0);\n            svuint64_t acc1 = svld1_u64(mask, xacc + 4);\n            do {\n                svprfd(mask, xinput + 128, SV_PLDL1STRM);\n                ACCRND(acc0, 0);\n                ACCRND(acc1, 4);\n                xinput += 8;\n                xsecret += 1;\n                nbStripes--;\n           } while (nbStripes != 0);\n\n           svst1_u64(mask, xacc + 0, acc0);\n           svst1_u64(mask, xacc + 4, acc1);\n       }\n    }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_LSX)\n#define _LSX_SHUFFLE(z, y, x, w) (((z) << 6) | ((y) << 4) | ((x) << 2) | (w))\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_lsx( void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    {\n        __m128i* const xacc    =       (__m128i *) acc;\n        const __m128i* const xinput  = (const __m128i *) input;\n        const __m128i* const xsecret = (const __m128i *) secret;\n        size_t i;\n\n        for (i = 0; i < XXH_STRIPE_LEN / sizeof(__m128i); i++) {\n            /* data_vec = xinput[i]; */\n            __m128i const data_vec = __lsx_vld(xinput + i, 0);\n            /* key_vec = xsecret[i]; */\n            __m128i const key_vec = __lsx_vld(xsecret + i, 0);\n            /* 
data_key = data_vec ^ key_vec; */\n            __m128i const data_key = __lsx_vxor_v(data_vec, key_vec);\n            /* data_key_lo = data_key >> 32; */\n            __m128i const data_key_lo = __lsx_vsrli_d(data_key, 32);\n            /* product = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */\n            __m128i const product = __lsx_vmulwev_d_wu(data_key, data_key_lo);\n            /* xacc[i] += swap(data_vec); */\n            __m128i const data_swap = __lsx_vshuf4i_w(data_vec, _LSX_SHUFFLE(1, 0, 3, 2));\n            __m128i const sum = __lsx_vadd_d(xacc[i], data_swap);\n            /* xacc[i] += product; */\n            xacc[i] = __lsx_vadd_d(product, sum);\n        }\n    }\n}\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(lsx)\n\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_lsx(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    {\n        __m128i* const xacc = (__m128i*) acc;\n        const __m128i* const xsecret = (const __m128i *) secret;\n        const __m128i prime32 = __lsx_vreplgr2vr_d(XXH_PRIME32_1);\n        size_t i;\n\n        for (i = 0; i < XXH_STRIPE_LEN / sizeof(__m128i); i++) {\n            /* xacc[i] ^= (xacc[i] >> 47) */\n            __m128i const acc_vec = xacc[i];\n            __m128i const shifted = __lsx_vsrli_d(acc_vec, 47);\n            __m128i const data_vec = __lsx_vxor_v(acc_vec, shifted);\n            /* xacc[i] ^= xsecret[i]; */\n            __m128i const key_vec = __lsx_vld(xsecret + i, 0);\n            __m128i const data_key = __lsx_vxor_v(data_vec, key_vec);\n\n            /* xacc[i] *= XXH_PRIME32_1; */\n            xacc[i] = __lsx_vmul_d(data_key, prime32);\n        }\n    }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_LASX)\n#define _LASX_SHUFFLE(z, y, x, w) (((z) << 6) | ((y) << 4) | ((x) << 2) | (w))\n\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_lasx( void* XXH_RESTRICT acc,\n                    
const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 31) == 0);\n    {\n        size_t i;\n        __m256i* const xacc    =       (__m256i *) acc;\n        const __m256i* const xinput  = (const __m256i *) input;\n        const __m256i* const xsecret = (const __m256i *) secret;\n\n        for (i = 0; i < XXH_STRIPE_LEN / sizeof(__m256i); i++) {\n            /* data_vec = xinput[i]; */\n            __m256i const data_vec = __lasx_xvld(xinput + i, 0);\n            /* key_vec = xsecret[i]; */\n            __m256i const key_vec = __lasx_xvld(xsecret + i, 0);\n            /* data_key = data_vec ^ key_vec; */\n            __m256i const data_key = __lasx_xvxor_v(data_vec, key_vec);\n            /* data_key_lo = data_key >> 32; */\n            __m256i const data_key_lo = __lasx_xvsrli_d(data_key, 32);\n            /* product = (data_key & 0xffffffff) * (data_key_lo & 0xffffffff); */\n            __m256i const product = __lasx_xvmulwev_d_wu(data_key, data_key_lo);\n            /* xacc[i] += swap(data_vec); */\n            __m256i const data_swap = __lasx_xvshuf4i_w(data_vec, _LASX_SHUFFLE(1, 0, 3, 2));\n            __m256i const sum = __lasx_xvadd_d(xacc[i], data_swap);\n            /* xacc[i] += product; */\n            xacc[i] = __lasx_xvadd_d(product, sum);\n        }\n    }\n}\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(lasx)\n\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_lasx(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 31) == 0);\n    {\n        __m256i* const xacc = (__m256i*) acc;\n        const __m256i* const xsecret = (const __m256i *) secret;\n        const __m256i prime32 = __lasx_xvreplgr2vr_d(XXH_PRIME32_1);\n        size_t i;\n\n        for (i = 0; i < XXH_STRIPE_LEN / sizeof(__m256i); i++) {\n            /* xacc[i] ^= (xacc[i] >> 47) */\n            __m256i const acc_vec = 
xacc[i];\n            __m256i const shifted = __lasx_xvsrli_d(acc_vec, 47);\n            __m256i const data_vec = __lasx_xvxor_v(acc_vec, shifted);\n            /* xacc[i] ^= xsecret[i]; */\n            __m256i const key_vec = __lasx_xvld(xsecret + i, 0);\n            __m256i const data_key = __lasx_xvxor_v(data_vec, key_vec);\n\n            /* xacc[i] *= XXH_PRIME32_1; */\n            xacc[i] = __lasx_xvmul_d(data_key, prime32);\n        }\n    }\n}\n\n#endif\n\n#if (XXH_VECTOR == XXH_RVV)\n    #define XXH_CONCAT2(X, Y) X ## Y\n    #define XXH_CONCAT(X, Y) XXH_CONCAT2(X, Y)\n#if ((defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 13) || \\\n        (defined(__clang__) && __clang_major__ < 16))\n    #define XXH_RVOP(op) op\n    #define XXH_RVCAST(op) XXH_CONCAT(vreinterpret_v_, op)\n#else\n    #define XXH_RVOP(op) XXH_CONCAT(__riscv_, op)\n    #define XXH_RVCAST(op) XXH_CONCAT(__riscv_vreinterpret_v_, op)\n#endif\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_rvv(  void* XXH_RESTRICT acc,\n                    const void* XXH_RESTRICT input,\n                    const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 63) == 0);\n    {\n        // Try to set the vector length to 512 bits.\n        // If this length is unavailable, the maximum available length is used.\n        size_t vl = XXH_RVOP(vsetvl_e64m2)(8);\n\n        uint64_t*       xacc    = (uint64_t*) acc;\n        const uint64_t* xinput  = (const uint64_t*) input;\n        const uint64_t* xsecret = (const uint64_t*) secret;\n        static const uint64_t swap_mask[16] = {1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14};\n        vuint64m2_t xswap_mask = XXH_RVOP(vle64_v_u64m2)(swap_mask, vl);\n\n        size_t i;\n        for (i = 0; i < XXH_STRIPE_LEN/8; i += vl) {\n            /* data_vec = xinput[i]; */\n            vuint64m2_t data_vec = XXH_RVCAST(u8m2_u64m2)(XXH_RVOP(vle8_v_u8m2)((const uint8_t*)(xinput + i), vl * 8));\n            /* key_vec = xsecret[i]; */\n            vuint64m2_t 
key_vec = XXH_RVCAST(u8m2_u64m2)(XXH_RVOP(vle8_v_u8m2)((const uint8_t*)(xsecret + i), vl * 8));\n            /* acc_vec = xacc[i]; */\n            vuint64m2_t acc_vec = XXH_RVOP(vle64_v_u64m2)(xacc + i, vl);\n            /* data_key = data_vec ^ key_vec; */\n            vuint64m2_t data_key = XXH_RVOP(vxor_vv_u64m2)(data_vec, key_vec, vl);\n            /* data_key_hi = data_key >> 32; */\n            vuint64m2_t data_key_hi = XXH_RVOP(vsrl_vx_u64m2)(data_key, 32, vl);\n            /* data_key_lo = data_key & 0xffffffff; */\n            vuint64m2_t data_key_lo = XXH_RVOP(vand_vx_u64m2)(data_key, 0xffffffff, vl);\n            /* swap high and low halves */\n            vuint64m2_t data_swap = XXH_RVOP(vrgather_vv_u64m2)(data_vec, xswap_mask, vl);\n            /* acc_vec += data_key_lo * data_key_hi; */\n            acc_vec = XXH_RVOP(vmacc_vv_u64m2)(acc_vec, data_key_lo, data_key_hi, vl);\n            /* acc_vec += data_swap; */\n            acc_vec = XXH_RVOP(vadd_vv_u64m2)(acc_vec, data_swap, vl);\n            /* xacc[i] = acc_vec; */\n            XXH_RVOP(vse64_v_u64m2)(xacc + i, acc_vec, vl);\n        }\n    }\n}\n\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(rvv)\n\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_rvv(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    XXH_ASSERT((((size_t)acc) & 15) == 0);\n    {\n        size_t count = XXH_STRIPE_LEN/8;\n        uint64_t* xacc = (uint64_t*)acc;\n        const uint8_t* xsecret = (const uint8_t *)secret;\n        size_t vl;\n        for (; count > 0; count -= vl, xacc += vl, xsecret += vl*8) {\n            vl = XXH_RVOP(vsetvl_e64m2)(count);\n            {\n                /* key_vec = xsecret[i]; */\n                vuint64m2_t key_vec = XXH_RVCAST(u8m2_u64m2)(XXH_RVOP(vle8_v_u8m2)(xsecret, vl*8));\n                /* acc_vec = xacc[i]; */\n                vuint64m2_t acc_vec = XXH_RVOP(vle64_v_u64m2)(xacc, vl);\n                /* acc_vec ^= acc_vec >> 47; */\n                vuint64m2_t vsrl = 
XXH_RVOP(vsrl_vx_u64m2)(acc_vec, 47, vl);\n                acc_vec = XXH_RVOP(vxor_vv_u64m2)(acc_vec, vsrl, vl);\n                /* acc_vec ^= key_vec; */\n                acc_vec = XXH_RVOP(vxor_vv_u64m2)(acc_vec, key_vec, vl);\n                /* acc_vec *= XXH_PRIME32_1; */\n                acc_vec = XXH_RVOP(vmul_vx_u64m2)(acc_vec, XXH_PRIME32_1, vl);\n                /* xacc[i] = acc_vec; */\n                XXH_RVOP(vse64_v_u64m2)(xacc, acc_vec, vl);\n            }\n        }\n    }\n}\n\nXXH_FORCE_INLINE void\nXXH3_initCustomSecret_rvv(void* XXH_RESTRICT customSecret, xxh_u64 seed64)\n{\n    XXH_STATIC_ASSERT(XXH_SEC_ALIGN >= 8);\n    XXH_ASSERT(((size_t)customSecret & 7) == 0);\n    (void)(&XXH_writeLE64);\n    {\n        size_t count = XXH_SECRET_DEFAULT_SIZE/8;\n        size_t vl;\n        size_t VLMAX = XXH_RVOP(vsetvlmax_e64m2)();\n        int64_t* cSecret = (int64_t*)customSecret;\n        const int64_t* kSecret = (const int64_t*)(const void*)XXH3_kSecret;\n\n#if __riscv_v_intrinsic >= 1000000\n        // ratified v1.0 intrinsics version\n        vbool32_t mneg = XXH_RVCAST(u8m1_b32)(\n                         XXH_RVOP(vmv_v_x_u8m1)(0xaa, XXH_RVOP(vsetvlmax_e8m1)()));\n#else\n        // support pre-ratification intrinsics, which lack mask-to-vector casts\n        size_t vlmax = XXH_RVOP(vsetvlmax_e8m1)();\n        vbool32_t mneg = XXH_RVOP(vmseq_vx_u8mf4_b32)(\n                         XXH_RVOP(vand_vx_u8mf4)(\n                         XXH_RVOP(vid_v_u8mf4)(vlmax), 1, vlmax), 1, vlmax);\n#endif\n        vint64m2_t seed = XXH_RVOP(vmv_v_x_i64m2)((int64_t)seed64, VLMAX);\n        seed = XXH_RVOP(vneg_v_i64m2_mu)(mneg, seed, seed, VLMAX);\n\n        for (; count > 0; count -= vl, cSecret += vl, kSecret += vl) {\n            /* make sure vl=VLMAX until last iteration */\n            vl = XXH_RVOP(vsetvl_e64m2)(count < VLMAX ? 
count : VLMAX);\n            {\n                vint64m2_t src = XXH_RVOP(vle64_v_i64m2)(kSecret, vl);\n                vint64m2_t res = XXH_RVOP(vadd_vv_i64m2)(src, seed, vl);\n                XXH_RVOP(vse64_v_i64m2)(cSecret, res, vl);\n            }\n        }\n    }\n}\n#endif\n\n\n/* scalar variants - universal */\n\n#if defined(__aarch64__) && (defined(__GNUC__) || defined(__clang__))\n/*\n * In XXH3_scalarRound(), GCC and Clang have a similar codegen issue, where they\n * emit an excess mask and a full 64-bit multiply-add (MADD X-form).\n *\n * While this might not seem like much, as AArch64 is a 64-bit architecture, only\n * big Cortex designs have a full 64-bit multiplier.\n *\n * On the little cores, the smaller 32-bit multiplier is used, and full 64-bit\n * multiplies expand to 2-3 multiplies in microcode. This has a major penalty\n * of up to 4 latency cycles and 2 stall cycles in the multiply pipeline.\n *\n * Thankfully, AArch64 still provides the 32-bit long multiply-add (UMADDL) which does\n * not have this penalty and does the mask automatically.\n */\nXXH_FORCE_INLINE xxh_u64\nXXH_mult32to64_add64(xxh_u64 lhs, xxh_u64 rhs, xxh_u64 acc)\n{\n    xxh_u64 ret;\n    /* note: %x = 64-bit register, %w = 32-bit register */\n    __asm__(\"umaddl %x0, %w1, %w2, %x3\" : \"=r\" (ret) : \"r\" (lhs), \"r\" (rhs), \"r\" (acc));\n    return ret;\n}\n#else\nXXH_FORCE_INLINE xxh_u64\nXXH_mult32to64_add64(xxh_u64 lhs, xxh_u64 rhs, xxh_u64 acc)\n{\n    return XXH_mult32to64((xxh_u32)lhs, (xxh_u32)rhs) + acc;\n}\n#endif\n\n/*!\n * @internal\n * @brief Scalar round for @ref XXH3_accumulate_512_scalar().\n *\n * This is extracted to its own function because the NEON path uses a combination\n * of NEON and scalar.\n */\nXXH_FORCE_INLINE void\nXXH3_scalarRound(void* XXH_RESTRICT acc,\n                 void const* XXH_RESTRICT input,\n                 void const* XXH_RESTRICT secret,\n                 size_t lane)\n{\n    xxh_u64* xacc = (xxh_u64*) acc;\n    xxh_u8 const* 
xinput  = (xxh_u8 const*) input;\n    xxh_u8 const* xsecret = (xxh_u8 const*) secret;\n    XXH_ASSERT(lane < XXH_ACC_NB);\n    XXH_ASSERT(((size_t)acc & (XXH_ACC_ALIGN-1)) == 0);\n    {\n        xxh_u64 const data_val = XXH_readLE64(xinput + lane * 8);\n        xxh_u64 const data_key = data_val ^ XXH_readLE64(xsecret + lane * 8);\n        xacc[lane ^ 1] += data_val; /* swap adjacent lanes */\n        xacc[lane] = XXH_mult32to64_add64(data_key /* & 0xFFFFFFFF */, data_key >> 32, xacc[lane]);\n    }\n}\n\n/*!\n * @internal\n * @brief Processes a 64 byte block of data using the scalar path.\n */\nXXH_FORCE_INLINE void\nXXH3_accumulate_512_scalar(void* XXH_RESTRICT acc,\n                     const void* XXH_RESTRICT input,\n                     const void* XXH_RESTRICT secret)\n{\n    size_t i;\n    /* ARM GCC refuses to unroll this loop, resulting in a 24% slowdown on ARMv6. */\n#if defined(__GNUC__) && !defined(__clang__) \\\n  && (defined(__arm__) || defined(__thumb2__)) \\\n  && defined(__ARM_FEATURE_UNALIGNED) /* no unaligned access just wastes bytes */ \\\n  && XXH_SIZE_OPT <= 0\n#  pragma GCC unroll 8\n#endif\n    for (i=0; i < XXH_ACC_NB; i++) {\n        XXH3_scalarRound(acc, input, secret, i);\n    }\n}\nXXH_FORCE_INLINE XXH3_ACCUMULATE_TEMPLATE(scalar)\n\n/*!\n * @internal\n * @brief Scalar scramble step for @ref XXH3_scrambleAcc_scalar().\n *\n * This is extracted to its own function because the NEON path uses a combination\n * of NEON and scalar.\n */\nXXH_FORCE_INLINE void\nXXH3_scalarScrambleRound(void* XXH_RESTRICT acc,\n                         void const* XXH_RESTRICT secret,\n                         size_t lane)\n{\n    xxh_u64* const xacc = (xxh_u64*) acc;   /* presumed aligned */\n    const xxh_u8* const xsecret = (const xxh_u8*) secret;   /* no alignment restriction */\n    XXH_ASSERT((((size_t)acc) & (XXH_ACC_ALIGN-1)) == 0);\n    XXH_ASSERT(lane < XXH_ACC_NB);\n    {\n        xxh_u64 const key64 = XXH_readLE64(xsecret + lane * 8);\n        
xxh_u64 acc64 = xacc[lane];\n        acc64 = XXH_xorshift64(acc64, 47);\n        acc64 ^= key64;\n        acc64 *= XXH_PRIME32_1;\n        xacc[lane] = acc64;\n    }\n}\n\n/*!\n * @internal\n * @brief Scrambles the accumulators after a large chunk has been read.\n */\nXXH_FORCE_INLINE void\nXXH3_scrambleAcc_scalar(void* XXH_RESTRICT acc, const void* XXH_RESTRICT secret)\n{\n    size_t i;\n    for (i=0; i < XXH_ACC_NB; i++) {\n        XXH3_scalarScrambleRound(acc, secret, i);\n    }\n}\n\nXXH_FORCE_INLINE void\nXXH3_initCustomSecret_scalar(void* XXH_RESTRICT customSecret, xxh_u64 seed64)\n{\n    /*\n     * We need a separate pointer for the hack below,\n     * which requires a non-const pointer.\n     * Any decent compiler will optimize this out otherwise.\n     */\n    const xxh_u8* kSecretPtr = XXH3_kSecret;\n    XXH_STATIC_ASSERT((XXH_SECRET_DEFAULT_SIZE & 15) == 0);\n\n#if defined(__GNUC__) && defined(__aarch64__)\n    /*\n     * UGLY HACK:\n     * GCC and Clang generate a bunch of MOV/MOVK pairs for aarch64, and they are\n     * placed sequentially, in order, at the top of the unrolled loop.\n     *\n     * While MOVK is great for generating constants (2 cycles for a 64-bit\n     * constant compared to 4 cycles for LDR), it fights for bandwidth with\n     * the arithmetic instructions.\n     *\n     *   I   L   S\n     * MOVK\n     * MOVK\n     * MOVK\n     * MOVK\n     * ADD\n     * SUB      STR\n     *          STR\n     *\n     * By forcing loads from memory (as the asm line causes the compiler to assume\n     * that kSecretPtr has been changed), the pipelines are used more\n     * efficiently:\n     *   I   L   S\n     *      LDR\n     *  ADD LDR\n     *  SUB     STR\n     *          STR\n     *\n     * See XXH3_NEON_LANES for details on the pipeline.\n     *\n     * XXH3_64bits_withSeed, len == 256, Snapdragon 835\n     *   without hack: 2654.4 MB/s\n     *   with hack:    3202.9 MB/s\n     */\n    XXH_COMPILER_GUARD(kSecretPtr);\n#endif\n    {   int const 
nbRounds = XXH_SECRET_DEFAULT_SIZE / 16;\n        int i;\n        for (i=0; i < nbRounds; i++) {\n            /*\n             * The asm hack causes the compiler to assume that kSecretPtr aliases with\n             * customSecret, and on aarch64, this prevented LDP from merging two\n             * loads together for free. Putting the loads together before the stores\n             * properly generates LDP.\n             */\n            xxh_u64 lo = XXH_readLE64(kSecretPtr + 16*i)     + seed64;\n            xxh_u64 hi = XXH_readLE64(kSecretPtr + 16*i + 8) - seed64;\n            XXH_writeLE64((xxh_u8*)customSecret + 16*i,     lo);\n            XXH_writeLE64((xxh_u8*)customSecret + 16*i + 8, hi);\n    }   }\n}\n\n\ntypedef void (*XXH3_f_accumulate)(xxh_u64* XXH_RESTRICT, const xxh_u8* XXH_RESTRICT, const xxh_u8* XXH_RESTRICT, size_t);\ntypedef void (*XXH3_f_scrambleAcc)(void* XXH_RESTRICT, const void*);\ntypedef void (*XXH3_f_initCustomSecret)(void* XXH_RESTRICT, xxh_u64);\n\n\n#if (XXH_VECTOR == XXH_AVX512)\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_avx512\n#define XXH3_accumulate     XXH3_accumulate_avx512\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_avx512\n#define XXH3_initCustomSecret XXH3_initCustomSecret_avx512\n\n#elif (XXH_VECTOR == XXH_AVX2)\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_avx2\n#define XXH3_accumulate     XXH3_accumulate_avx2\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_avx2\n#define XXH3_initCustomSecret XXH3_initCustomSecret_avx2\n\n#elif (XXH_VECTOR == XXH_SSE2)\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_sse2\n#define XXH3_accumulate     XXH3_accumulate_sse2\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_sse2\n#define XXH3_initCustomSecret XXH3_initCustomSecret_sse2\n\n#elif (XXH_VECTOR == XXH_NEON)\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_neon\n#define XXH3_accumulate     XXH3_accumulate_neon\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_neon\n#define XXH3_initCustomSecret 
XXH3_initCustomSecret_scalar\n\n#elif (XXH_VECTOR == XXH_VSX)\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_vsx\n#define XXH3_accumulate     XXH3_accumulate_vsx\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_vsx\n#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n\n#elif (XXH_VECTOR == XXH_SVE)\n#define XXH3_accumulate_512 XXH3_accumulate_512_sve\n#define XXH3_accumulate     XXH3_accumulate_sve\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_scalar\n#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n\n#elif (XXH_VECTOR == XXH_LASX)\n#define XXH3_accumulate_512 XXH3_accumulate_512_lasx\n#define XXH3_accumulate     XXH3_accumulate_lasx\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_lasx\n#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n\n#elif (XXH_VECTOR == XXH_LSX)\n#define XXH3_accumulate_512 XXH3_accumulate_512_lsx\n#define XXH3_accumulate     XXH3_accumulate_lsx\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_lsx\n#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n\n#elif (XXH_VECTOR == XXH_RVV)\n#define XXH3_accumulate_512 XXH3_accumulate_512_rvv\n#define XXH3_accumulate     XXH3_accumulate_rvv\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_rvv\n#define XXH3_initCustomSecret XXH3_initCustomSecret_rvv\n\n#else /* scalar */\n\n#define XXH3_accumulate_512 XXH3_accumulate_512_scalar\n#define XXH3_accumulate     XXH3_accumulate_scalar\n#define XXH3_scrambleAcc    XXH3_scrambleAcc_scalar\n#define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n\n#endif\n\n#if XXH_SIZE_OPT >= 1 /* don't do SIMD for initialization */\n#  undef XXH3_initCustomSecret\n#  define XXH3_initCustomSecret XXH3_initCustomSecret_scalar\n#endif\n\nXXH_FORCE_INLINE void\nXXH3_hashLong_internal_loop(xxh_u64* XXH_RESTRICT acc,\n                      const xxh_u8* XXH_RESTRICT input, size_t len,\n                      const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                            XXH3_f_accumulate f_acc,\n                            
XXH3_f_scrambleAcc f_scramble)\n{\n    size_t const nbStripesPerBlock = (secretSize - XXH_STRIPE_LEN) / XXH_SECRET_CONSUME_RATE;\n    size_t const block_len = XXH_STRIPE_LEN * nbStripesPerBlock;\n    size_t const nb_blocks = (len - 1) / block_len;\n\n    size_t n;\n\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);\n\n    for (n = 0; n < nb_blocks; n++) {\n        f_acc(acc, input + n*block_len, secret, nbStripesPerBlock);\n        f_scramble(acc, secret + secretSize - XXH_STRIPE_LEN);\n    }\n\n    /* last partial block */\n    XXH_ASSERT(len > XXH_STRIPE_LEN);\n    {   size_t const nbStripes = ((len - 1) - (block_len * nb_blocks)) / XXH_STRIPE_LEN;\n        XXH_ASSERT(nbStripes <= (secretSize / XXH_SECRET_CONSUME_RATE));\n        f_acc(acc, input + nb_blocks*block_len, secret, nbStripes);\n\n        /* last stripe */\n        {   const xxh_u8* const p = input + len - XXH_STRIPE_LEN;\n#define XXH_SECRET_LASTACC_START 7  /* not aligned on 8, last secret is different from acc & scrambler */\n            XXH3_accumulate_512(acc, p, secret + secretSize - XXH_STRIPE_LEN - XXH_SECRET_LASTACC_START);\n    }   }\n}\n\nXXH_FORCE_INLINE xxh_u64\nXXH3_mix2Accs(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret)\n{\n    return XXH3_mul128_fold64(\n               acc[0] ^ XXH_readLE64(secret),\n               acc[1] ^ XXH_readLE64(secret+8) );\n}\n\nstatic XXH_PUREF XXH64_hash_t\nXXH3_mergeAccs(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret, xxh_u64 start)\n{\n    xxh_u64 result64 = start;\n    size_t i = 0;\n\n    for (i = 0; i < 4; i++) {\n        result64 += XXH3_mix2Accs(acc+2*i, secret + 16*i);\n#if defined(__clang__)                                /* Clang */ \\\n    && (defined(__arm__) || defined(__thumb__))       /* ARMv7 */ \\\n    && (defined(__ARM_NEON) || defined(__ARM_NEON__)) /* NEON */  \\\n    && !defined(XXH_ENABLE_AUTOVECTORIZE)             /* Define to disable */\n        /*\n         * UGLY HACK:\n         * Prevent 
autovectorization on Clang ARMv7-a. Exact same problem as\n         * the one in XXH3_len_129to240_64b. Speeds up shorter keys > 240b.\n         * XXH3_64bits, len == 256, Snapdragon 835:\n         *   without hack: 2063.7 MB/s\n         *   with hack:    2560.7 MB/s\n         */\n        XXH_COMPILER_GUARD(result64);\n#endif\n    }\n\n    return XXH3_avalanche(result64);\n}\n\n/* do not align on 8, so that the secret is different from the accumulator */\n#define XXH_SECRET_MERGEACCS_START 11\n\nstatic XXH_PUREF XXH64_hash_t\nXXH3_finalizeLong_64b(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret, xxh_u64 len)\n{\n    return XXH3_mergeAccs(acc, secret + XXH_SECRET_MERGEACCS_START, len * XXH_PRIME64_1);\n}\n\n#define XXH3_INIT_ACC { XXH_PRIME32_3, XXH_PRIME64_1, XXH_PRIME64_2, XXH_PRIME64_3, \\\n                        XXH_PRIME64_4, XXH_PRIME32_2, XXH_PRIME64_5, XXH_PRIME32_1 }\n\nXXH_FORCE_INLINE XXH64_hash_t\nXXH3_hashLong_64b_internal(const void* XXH_RESTRICT input, size_t len,\n                           const void* XXH_RESTRICT secret, size_t secretSize,\n                           XXH3_f_accumulate f_acc,\n                           XXH3_f_scrambleAcc f_scramble)\n{\n    XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[XXH_ACC_NB] = XXH3_INIT_ACC;\n\n    XXH3_hashLong_internal_loop(acc, (const xxh_u8*)input, len, (const xxh_u8*)secret, secretSize, f_acc, f_scramble);\n\n    /* converge into final hash */\n    XXH_STATIC_ASSERT(sizeof(acc) == 64);\n    XXH_ASSERT(secretSize >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);\n    return XXH3_finalizeLong_64b(acc, (const xxh_u8*)secret, (xxh_u64)len);\n}\n\n/*\n * It's important for performance to transmit secret's size (when it's static)\n * so that the compiler can properly optimize the vectorized loop.\n * This makes a big performance difference for \"medium\" keys (<1 KB) when using AVX instruction set.\n * When the secret size is unknown, or on GCC 12 where the mix of NO_INLINE and FORCE_INLINE\n * breaks 
-Og, this is XXH_NO_INLINE.\n */\nXXH3_WITH_SECRET_INLINE XXH64_hash_t\nXXH3_hashLong_64b_withSecret(const void* XXH_RESTRICT input, size_t len,\n                             XXH64_hash_t seed64, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)seed64;\n    return XXH3_hashLong_64b_internal(input, len, secret, secretLen, XXH3_accumulate, XXH3_scrambleAcc);\n}\n\n/*\n * It's preferable for performance that XXH3_hashLong is not inlined,\n * as it results in a smaller function for small data, which is easier on the instruction cache.\n * Note that inside this no_inline function, we do inline the internal loop,\n * and provide a statically defined secret size to allow optimization of the vector loop.\n */\nXXH_NO_INLINE XXH_PUREF XXH64_hash_t\nXXH3_hashLong_64b_default(const void* XXH_RESTRICT input, size_t len,\n                          XXH64_hash_t seed64, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)seed64; (void)secret; (void)secretLen;\n    return XXH3_hashLong_64b_internal(input, len, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_accumulate, XXH3_scrambleAcc);\n}\n\n/*\n * XXH3_hashLong_64b_withSeed():\n * Generate a custom key based on an alteration of the default XXH3_kSecret with the seed,\n * and then use this key for long-mode hashing.\n *\n * This operation is decently fast but nonetheless costs a little bit of time.\n * Try to avoid it whenever possible (typically when seed==0).\n *\n * It's important for performance that XXH3_hashLong is not inlined. 
Not sure\n * why (uop cache maybe?), but the difference is large and easily measurable.\n */\nXXH_FORCE_INLINE XXH64_hash_t\nXXH3_hashLong_64b_withSeed_internal(const void* input, size_t len,\n                                    XXH64_hash_t seed,\n                                    XXH3_f_accumulate f_acc,\n                                    XXH3_f_scrambleAcc f_scramble,\n                                    XXH3_f_initCustomSecret f_initSec)\n{\n#if XXH_SIZE_OPT <= 0\n    if (seed == 0)\n        return XXH3_hashLong_64b_internal(input, len,\n                                          XXH3_kSecret, sizeof(XXH3_kSecret),\n                                          f_acc, f_scramble);\n#endif\n    {   XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];\n        f_initSec(secret, seed);\n        return XXH3_hashLong_64b_internal(input, len, secret, sizeof(secret),\n                                          f_acc, f_scramble);\n    }\n}\n\n/*\n * It's important for performance that XXH3_hashLong is not inlined.\n */\nXXH_NO_INLINE XXH64_hash_t\nXXH3_hashLong_64b_withSeed(const void* XXH_RESTRICT input, size_t len,\n                           XXH64_hash_t seed, const xxh_u8* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)secret; (void)secretLen;\n    return XXH3_hashLong_64b_withSeed_internal(input, len, seed,\n                XXH3_accumulate, XXH3_scrambleAcc, XXH3_initCustomSecret);\n}\n\n\ntypedef XXH64_hash_t (*XXH3_hashLong64_f)(const void* XXH_RESTRICT, size_t,\n                                          XXH64_hash_t, const xxh_u8* XXH_RESTRICT, size_t);\n\nXXH_FORCE_INLINE XXH64_hash_t\nXXH3_64bits_internal(const void* XXH_RESTRICT input, size_t len,\n                     XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen,\n                     XXH3_hashLong64_f f_hashLong)\n{\n    XXH_ASSERT(secretLen >= XXH3_SECRET_SIZE_MIN);\n    /*\n     * If an action is to be taken if `secretLen` condition is not respected,\n     * it 
should be done here.\n     * For now, it's a contract pre-condition.\n     * Adding a check and a branch here would cost performance at every hash.\n     * Also, note that function signature doesn't offer room to return an error.\n     */\n    if (len <= 16)\n        return XXH3_len_0to16_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, seed64);\n    if (len <= 128)\n        return XXH3_len_17to128_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);\n    if (len <= XXH3_MIDSIZE_MAX)\n        return XXH3_len_129to240_64b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);\n    return f_hashLong(input, len, seed64, (const xxh_u8*)secret, secretLen);\n}\n\n\n/* ===   Public entry point   === */\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH64_hash_t XXH3_64bits(XXH_NOESCAPE const void* input, size_t length)\n{\n    return XXH3_64bits_internal(input, length, 0, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_hashLong_64b_default);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH64_hash_t\nXXH3_64bits_withSecret(XXH_NOESCAPE const void* input, size_t length, XXH_NOESCAPE const void* secret, size_t secretSize)\n{\n    return XXH3_64bits_internal(input, length, 0, secret, secretSize, XXH3_hashLong_64b_withSecret);\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH64_hash_t\nXXH3_64bits_withSeed(XXH_NOESCAPE const void* input, size_t length, XXH64_hash_t seed)\n{\n    return XXH3_64bits_internal(input, length, seed, XXH3_kSecret, sizeof(XXH3_kSecret), XXH3_hashLong_64b_withSeed);\n}\n\nXXH_PUBLIC_API XXH64_hash_t\nXXH3_64bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t length, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)\n{\n    if (length <= XXH3_MIDSIZE_MAX)\n        return XXH3_64bits_internal(input, length, seed, XXH3_kSecret, sizeof(XXH3_kSecret), NULL);\n    return XXH3_hashLong_64b_withSecret(input, length, seed, (const xxh_u8*)secret, secretSize);\n}\n\n\n/* ===   XXH3 streaming   === */\n#ifndef XXH_NO_STREAM\n/*\n * Allocates a pointer that is always aligned to @align.\n *\n * This must be freed with `XXH_alignedFree()`.\n *\n * malloc typically guarantees 16-byte alignment on 64-bit systems and 8-byte\n * alignment on 32-bit. This isn't enough for the 32-byte aligned loads in AVX2\n * or, on 32-bit, the 16-byte aligned loads in SSE2 and NEON.\n *\n * This underalignment previously caused a rather obvious crash which went\n * completely unnoticed due to XXH3_createState() not actually being tested.\n * Credit to RedSpah for noticing this bug.\n *\n * The alignment is done manually: Functions like posix_memalign or _mm_malloc\n * are avoided: To maintain portability, we would have to write a fallback\n * like this anyway, and besides, testing for the existence of library\n * functions without relying on external build tools is impossible.\n *\n * The method is simple: Overallocate, manually align, and store the offset\n * to the original behind the returned pointer.\n *\n * Align must be a power of 2 and 8 <= align <= 128.\n */\nstatic XXH_MALLOCF void* XXH_alignedMalloc(size_t s, size_t align)\n{\n    XXH_ASSERT(align <= 128 && align >= 8); /* range check */\n    XXH_ASSERT((align & (align-1)) == 0);   /* power of 2 */\n    
XXH_ASSERT(s != 0 && s < (s + align));  /* empty/overflow */\n    {   /* Overallocate to make room for manual realignment and an offset byte */\n        xxh_u8* base = (xxh_u8*)XXH_malloc(s + align);\n        if (base != NULL) {\n            /*\n             * Get the offset needed to align this pointer.\n             *\n             * Even if the returned pointer is aligned, there will always be\n             * at least one byte to store the offset to the original pointer.\n             */\n            size_t offset = align - ((size_t)base & (align - 1)); /* base % align */\n            /* Add the offset for the now-aligned pointer */\n            xxh_u8* ptr = base + offset;\n\n            XXH_ASSERT((size_t)ptr % align == 0);\n\n            /* Store the offset immediately before the returned pointer. */\n            ptr[-1] = (xxh_u8)offset;\n            return ptr;\n        }\n        return NULL;\n    }\n}\n/*\n * Frees an aligned pointer allocated by XXH_alignedMalloc(). Don't pass\n * normal malloc'd pointers; XXH_alignedMalloc has a specific data layout.\n */\nstatic void XXH_alignedFree(void* p)\n{\n    if (p != NULL) {\n        xxh_u8* ptr = (xxh_u8*)p;\n        /* Get the offset byte we added in XXH_alignedMalloc. */\n        xxh_u8 offset = ptr[-1];\n        /* Free the original malloc'd pointer */\n        xxh_u8* base = ptr - offset;\n        XXH_free(base);\n    }\n}\n/*! @ingroup XXH3_family */\n/*!\n * @brief Allocate an @ref XXH3_state_t.\n *\n * @return An allocated pointer of @ref XXH3_state_t on success.\n * @return `NULL` on failure.\n *\n * @note Must be freed with XXH3_freeState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH3_state_t* XXH3_createState(void)\n{\n    XXH3_state_t* const state = (XXH3_state_t*)XXH_alignedMalloc(sizeof(XXH3_state_t), 64);\n    if (state==NULL) return NULL;\n    XXH3_INITSTATE(state);\n    return state;\n}\n\n/*! 
@ingroup XXH3_family */\n/*!\n * @brief Frees an @ref XXH3_state_t.\n *\n * @param statePtr A pointer to an @ref XXH3_state_t allocated with @ref XXH3_createState().\n *\n * @return @ref XXH_OK.\n *\n * @note Must be allocated with XXH3_createState().\n *\n * @see @ref streaming_example \"Streaming Example\"\n */\nXXH_PUBLIC_API XXH_errorcode XXH3_freeState(XXH3_state_t* statePtr)\n{\n    XXH_alignedFree(statePtr);\n    return XXH_OK;\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API void\nXXH3_copyState(XXH_NOESCAPE XXH3_state_t* dst_state, XXH_NOESCAPE const XXH3_state_t* src_state)\n{\n    XXH_memcpy(dst_state, src_state, sizeof(*dst_state));\n}\n\nstatic void\nXXH3_reset_internal(XXH3_state_t* statePtr,\n                    XXH64_hash_t seed,\n                    const void* secret, size_t secretSize)\n{\n    size_t const initStart = offsetof(XXH3_state_t, bufferedSize);\n    size_t const initLength = offsetof(XXH3_state_t, nbStripesPerBlock) - initStart;\n    XXH_ASSERT(offsetof(XXH3_state_t, nbStripesPerBlock) > initStart);\n    XXH_ASSERT(statePtr != NULL);\n    /* set members from bufferedSize to nbStripesPerBlock (excluded) to 0 */\n    XXH_memset((char*)statePtr + initStart, 0, initLength);\n    statePtr->acc[0] = XXH_PRIME32_3;\n    statePtr->acc[1] = XXH_PRIME64_1;\n    statePtr->acc[2] = XXH_PRIME64_2;\n    statePtr->acc[3] = XXH_PRIME64_3;\n    statePtr->acc[4] = XXH_PRIME64_4;\n    statePtr->acc[5] = XXH_PRIME32_2;\n    statePtr->acc[6] = XXH_PRIME64_5;\n    statePtr->acc[7] = XXH_PRIME32_1;\n    statePtr->seed = seed;\n    statePtr->useSeed = (seed != 0);\n    statePtr->extSecret = (const unsigned char*)secret;\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);\n    statePtr->secretLimit = secretSize - XXH_STRIPE_LEN;\n    statePtr->nbStripesPerBlock = statePtr->secretLimit / XXH_SECRET_CONSUME_RATE;\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr)\n{\n    if (statePtr == NULL) return XXH_ERROR;\n    XXH3_reset_internal(statePtr, 0, XXH3_kSecret, XXH_SECRET_DEFAULT_SIZE);\n    return XXH_OK;\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize)\n{\n    if (statePtr == NULL) return XXH_ERROR;\n    XXH3_reset_internal(statePtr, 0, secret, secretSize);\n    if (secret == NULL) return XXH_ERROR;\n    if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;\n    return XXH_OK;\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed)\n{\n    if (statePtr == NULL) return XXH_ERROR;\n    if (seed==0) return XXH3_64bits_reset(statePtr);\n    if ((seed != statePtr->seed) || (statePtr->extSecret != NULL))\n        XXH3_initCustomSecret(statePtr->customSecret, seed);\n    XXH3_reset_internal(statePtr, seed, NULL, XXH_SECRET_DEFAULT_SIZE);\n    return XXH_OK;\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed64)\n{\n    if (statePtr == NULL) return XXH_ERROR;\n    if (secret == NULL) return XXH_ERROR;\n    if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;\n    XXH3_reset_internal(statePtr, seed64, secret, secretSize);\n    statePtr->useSeed = 1; /* always, even if seed64==0 */\n    return XXH_OK;\n}\n\n/*!\n * @internal\n * @brief Processes a large input for XXH3_update() and XXH3_digest_long().\n *\n * Unlike XXH3_hashLong_internal_loop(), this can process data that overlaps a block.\n *\n * @param acc                Pointer to the 8 accumulator lanes\n * @param nbStripesSoFarPtr  In/out pointer to the number of leftover stripes in the block\n * @param nbStripesPerBlock  Number of stripes in a block\n * @param input              Input pointer\n * @param nbStripes          Number of stripes to process\n * @param secret             Secret pointer\n * @param secretLimit        Offset of the last block in @p secret\n * @param f_acc              Pointer to an XXH3_accumulate implementation\n * @param f_scramble         Pointer to an XXH3_scrambleAcc implementation\n * @return                   Pointer past the end of @p input after processing\n */\nXXH_FORCE_INLINE const xxh_u8 *\nXXH3_consumeStripes(xxh_u64* XXH_RESTRICT acc,\n                    size_t* XXH_RESTRICT nbStripesSoFarPtr, size_t nbStripesPerBlock,\n                    const xxh_u8* XXH_RESTRICT input, size_t nbStripes,\n                    const xxh_u8* XXH_RESTRICT secret, size_t secretLimit,\n                    XXH3_f_accumulate f_acc,\n                    XXH3_f_scrambleAcc f_scramble)\n{\n    const xxh_u8* initialSecret = secret + *nbStripesSoFarPtr * XXH_SECRET_CONSUME_RATE;\n    /* Process full blocks */\n    if (nbStripes >= (nbStripesPerBlock - *nbStripesSoFarPtr)) {\n        /* Process the initial 
partial block... */\n        size_t nbStripesThisIter = nbStripesPerBlock - *nbStripesSoFarPtr;\n\n        do {\n            /* Accumulate and scramble */\n            f_acc(acc, input, initialSecret, nbStripesThisIter);\n            f_scramble(acc, secret + secretLimit);\n            input += nbStripesThisIter * XXH_STRIPE_LEN;\n            nbStripes -= nbStripesThisIter;\n            /* Then continue the loop with the full block size */\n            nbStripesThisIter = nbStripesPerBlock;\n            initialSecret = secret;\n        } while (nbStripes >= nbStripesPerBlock);\n        *nbStripesSoFarPtr = 0;\n    }\n    /* Process a partial block */\n    if (nbStripes > 0) {\n        f_acc(acc, input, initialSecret, nbStripes);\n        input += nbStripes * XXH_STRIPE_LEN;\n        *nbStripesSoFarPtr += nbStripes;\n    }\n    /* Return end pointer */\n    return input;\n}\n\n#ifndef XXH3_STREAM_USE_STACK\n# if XXH_SIZE_OPT <= 0 && !defined(__clang__) /* clang doesn't need additional stack space */\n#   define XXH3_STREAM_USE_STACK 1\n# endif\n#endif\n/* This function accepts f_acc and f_scramble as function pointers,\n * making it possible to implement multiple variants with different acc & scramble stages.\n * This is notably useful to implement multiple vector variants with different intrinsics.\n */\nXXH_FORCE_INLINE XXH_errorcode\nXXH3_update(XXH3_state_t* XXH_RESTRICT const state,\n            const xxh_u8* XXH_RESTRICT input, size_t len,\n            XXH3_f_accumulate f_acc,\n            XXH3_f_scrambleAcc f_scramble)\n{\n    if (input==NULL) {\n        XXH_ASSERT(len == 0);\n        return XXH_OK;\n    }\n\n    XXH_ASSERT(state != NULL);\n    state->totalLen += len;\n\n    /* small input : just fill in tmp buffer */\n    XXH_ASSERT(state->bufferedSize <= XXH3_INTERNALBUFFER_SIZE);\n    if (len <= XXH3_INTERNALBUFFER_SIZE - state->bufferedSize) {\n        XXH_memcpy(state->buffer + state->bufferedSize, input, len);\n        state->bufferedSize += 
(XXH32_hash_t)len;\n        return XXH_OK;\n    }\n\n    {   const xxh_u8* const bEnd = input + len;\n        const unsigned char* const secret = (state->extSecret == NULL) ? state->customSecret : state->extSecret;\n#if defined(XXH3_STREAM_USE_STACK) && XXH3_STREAM_USE_STACK >= 1\n        /* For some reason, gcc and MSVC seem to suffer greatly\n         * when operating on accumulators directly in state.\n         * Operating in stack space seems to enable proper optimization.\n         * clang, on the other hand, doesn't seem to need this trick */\n        XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[8];\n        XXH_memcpy(acc, state->acc, sizeof(acc));\n#else\n        xxh_u64* XXH_RESTRICT const acc = state->acc;\n#endif\n\n        /* total input is now > XXH3_INTERNALBUFFER_SIZE */\n        #define XXH3_INTERNALBUFFER_STRIPES (XXH3_INTERNALBUFFER_SIZE / XXH_STRIPE_LEN)\n        XXH_STATIC_ASSERT(XXH3_INTERNALBUFFER_SIZE % XXH_STRIPE_LEN == 0);   /* clean multiple */\n\n        /*\n         * Internal buffer is partially filled (always, except at the beginning).\n         * Complete it, then consume it.\n         */\n        if (state->bufferedSize) {\n            size_t const loadSize = XXH3_INTERNALBUFFER_SIZE - state->bufferedSize;\n            XXH_memcpy(state->buffer + state->bufferedSize, input, loadSize);\n            input += loadSize;\n            XXH3_consumeStripes(acc,\n                               &state->nbStripesSoFar, state->nbStripesPerBlock,\n                                state->buffer, XXH3_INTERNALBUFFER_STRIPES,\n                                secret, state->secretLimit,\n                                f_acc, f_scramble);\n            state->bufferedSize = 0;\n        }\n        XXH_ASSERT(input < bEnd);\n        if (bEnd - input > XXH3_INTERNALBUFFER_SIZE) {\n            size_t nbStripes = (size_t)(bEnd - 1 - input) / XXH_STRIPE_LEN;\n            input = XXH3_consumeStripes(acc,\n                                       &state->nbStripesSoFar, 
state->nbStripesPerBlock,\n                                       input, nbStripes,\n                                       secret, state->secretLimit,\n                                       f_acc, f_scramble);\n            XXH_memcpy(state->buffer + sizeof(state->buffer) - XXH_STRIPE_LEN, input - XXH_STRIPE_LEN, XXH_STRIPE_LEN);\n\n        }\n        /* Some remaining input (always) : buffer it */\n        XXH_ASSERT(input < bEnd);\n        XXH_ASSERT(bEnd - input <= XXH3_INTERNALBUFFER_SIZE);\n        XXH_ASSERT(state->bufferedSize == 0);\n        XXH_memcpy(state->buffer, input, (size_t)(bEnd-input));\n        state->bufferedSize = (XXH32_hash_t)(bEnd-input);\n#if defined(XXH3_STREAM_USE_STACK) && XXH3_STREAM_USE_STACK >= 1\n        /* save stack accumulators into state */\n        XXH_memcpy(state->acc, acc, sizeof(acc));\n#endif\n    }\n\n    return XXH_OK;\n}\n\n/*\n * Both XXH3_64bits_update and XXH3_128bits_update use this routine.\n */\nXXH_NO_INLINE XXH_errorcode\nXXH3_update_regular(XXH_NOESCAPE XXH3_state_t* state, XXH_NOESCAPE const void* input, size_t len)\n{\n    return XXH3_update(state, (const xxh_u8*)input, len,\n                       XXH3_accumulate, XXH3_scrambleAcc);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_64bits_update(XXH_NOESCAPE XXH3_state_t* state, XXH_NOESCAPE const void* input, size_t len)\n{\n    return XXH3_update_regular(state, input, len);\n}\n\n\nXXH_FORCE_INLINE void\nXXH3_digest_long (XXH64_hash_t* acc,\n                  const XXH3_state_t* state,\n                  const unsigned char* secret)\n{\n    xxh_u8 lastStripe[XXH_STRIPE_LEN];\n    const xxh_u8* lastStripePtr;\n\n    /*\n     * Digest on a local copy. 
This way, the state remains unaltered, and it can\n     * continue ingesting more input afterwards.\n     */\n    XXH_memcpy(acc, state->acc, sizeof(state->acc));\n    if (state->bufferedSize >= XXH_STRIPE_LEN) {\n        /* Consume remaining stripes then point to remaining data in buffer */\n        size_t const nbStripes = (state->bufferedSize - 1) / XXH_STRIPE_LEN;\n        size_t nbStripesSoFar = state->nbStripesSoFar;\n        XXH3_consumeStripes(acc,\n                           &nbStripesSoFar, state->nbStripesPerBlock,\n                            state->buffer, nbStripes,\n                            secret, state->secretLimit,\n                            XXH3_accumulate, XXH3_scrambleAcc);\n        lastStripePtr = state->buffer + state->bufferedSize - XXH_STRIPE_LEN;\n    } else {  /* bufferedSize < XXH_STRIPE_LEN */\n        /* Copy to temp buffer */\n        size_t const catchupSize = XXH_STRIPE_LEN - state->bufferedSize;\n        XXH_ASSERT(state->bufferedSize > 0);  /* there is always some input buffered */\n        XXH_memcpy(lastStripe, state->buffer + sizeof(state->buffer) - catchupSize, catchupSize);\n        XXH_memcpy(lastStripe + catchupSize, state->buffer, state->bufferedSize);\n        lastStripePtr = lastStripe;\n    }\n    /* Last stripe */\n    XXH3_accumulate_512(acc,\n                        lastStripePtr,\n                        secret + state->secretLimit - XXH_SECRET_LASTACC_START);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH64_hash_t XXH3_64bits_digest (XXH_NOESCAPE const XXH3_state_t* state)\n{\n    const unsigned char* const secret = (state->extSecret == NULL) ? 
state->customSecret : state->extSecret;\n    if (state->totalLen > XXH3_MIDSIZE_MAX) {\n        XXH_ALIGN(XXH_ACC_ALIGN) XXH64_hash_t acc[XXH_ACC_NB];\n        XXH3_digest_long(acc, state, secret);\n        return XXH3_finalizeLong_64b(acc, secret, (xxh_u64)state->totalLen);\n    }\n    /* totalLen <= XXH3_MIDSIZE_MAX: digesting a short input */\n    if (state->useSeed)\n        return XXH3_64bits_withSeed(state->buffer, (size_t)state->totalLen, state->seed);\n    return XXH3_64bits_withSecret(state->buffer, (size_t)(state->totalLen),\n                                  secret, state->secretLimit + XXH_STRIPE_LEN);\n}\n#endif /* !XXH_NO_STREAM */\n\n\n/* ==========================================\n * XXH3 128 bits (a.k.a XXH128)\n * ==========================================\n * XXH3's 128-bit variant has better mixing and strength than the 64-bit variant,\n * even without counting the significantly larger output size.\n *\n * For example, extra steps are taken to avoid the seed-dependent collisions\n * in 17-240 byte inputs (See XXH3_mix16B and XXH128_mix32B).\n *\n * This strength naturally comes at the cost of some speed, especially on short\n * lengths. Note that longer hashes are about as fast as the 64-bit version\n * due to it using only a slight modification of the 64-bit loop.\n *\n * XXH128 is also more oriented towards 64-bit machines. It is still extremely\n * fast for a _128-bit_ hash on 32-bit (it usually clears XXH64).\n */\n\nXXH_FORCE_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_1to3_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    /* A doubled version of 1to3_64b with different constants. 
*/\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(1 <= len && len <= 3);\n    XXH_ASSERT(secret != NULL);\n    /*\n     * len = 1: combinedl = { input[0], 0x01, input[0], input[0] }\n     * len = 2: combinedl = { input[1], 0x02, input[0], input[1] }\n     * len = 3: combinedl = { input[2], 0x03, input[0], input[1] }\n     */\n    {   xxh_u8 const c1 = input[0];\n        xxh_u8 const c2 = input[len >> 1];\n        xxh_u8 const c3 = input[len - 1];\n        xxh_u32 const combinedl = ((xxh_u32)c1 <<16) | ((xxh_u32)c2 << 24)\n                                | ((xxh_u32)c3 << 0) | ((xxh_u32)len << 8);\n        xxh_u32 const combinedh = XXH_rotl32(XXH_swap32(combinedl), 13);\n        xxh_u64 const bitflipl = (XXH_readLE32(secret) ^ XXH_readLE32(secret+4)) + seed;\n        xxh_u64 const bitfliph = (XXH_readLE32(secret+8) ^ XXH_readLE32(secret+12)) - seed;\n        xxh_u64 const keyed_lo = (xxh_u64)combinedl ^ bitflipl;\n        xxh_u64 const keyed_hi = (xxh_u64)combinedh ^ bitfliph;\n        XXH128_hash_t h128;\n        h128.low64  = XXH64_avalanche(keyed_lo);\n        h128.high64 = XXH64_avalanche(keyed_hi);\n        return h128;\n    }\n}\n\nXXH_FORCE_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_4to8_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(secret != NULL);\n    XXH_ASSERT(4 <= len && len <= 8);\n    seed ^= (xxh_u64)XXH_swap32((xxh_u32)seed) << 32;\n    {   xxh_u32 const input_lo = XXH_readLE32(input);\n        xxh_u32 const input_hi = XXH_readLE32(input + len - 4);\n        xxh_u64 const input_64 = input_lo + ((xxh_u64)input_hi << 32);\n        xxh_u64 const bitflip = (XXH_readLE64(secret+16) ^ XXH_readLE64(secret+24)) + seed;\n        xxh_u64 const keyed = input_64 ^ bitflip;\n\n        /* Shift len to the left to ensure it is even, this avoids even multiplies. 
*/\n        XXH128_hash_t m128 = XXH_mult64to128(keyed, XXH_PRIME64_1 + (len << 2));\n\n        m128.high64 += (m128.low64 << 1);\n        m128.low64  ^= (m128.high64 >> 3);\n\n        m128.low64   = XXH_xorshift64(m128.low64, 35);\n        m128.low64  *= PRIME_MX2;\n        m128.low64   = XXH_xorshift64(m128.low64, 28);\n        m128.high64  = XXH3_avalanche(m128.high64);\n        return m128;\n    }\n}\n\nXXH_FORCE_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_9to16_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(input != NULL);\n    XXH_ASSERT(secret != NULL);\n    XXH_ASSERT(9 <= len && len <= 16);\n    {   xxh_u64 const bitflipl = (XXH_readLE64(secret+32) ^ XXH_readLE64(secret+40)) - seed;\n        xxh_u64 const bitfliph = (XXH_readLE64(secret+48) ^ XXH_readLE64(secret+56)) + seed;\n        xxh_u64 const input_lo = XXH_readLE64(input);\n        xxh_u64       input_hi = XXH_readLE64(input + len - 8);\n        XXH128_hash_t m128 = XXH_mult64to128(input_lo ^ input_hi ^ bitflipl, XXH_PRIME64_1);\n        /*\n         * Put len in the middle of m128 to ensure that the length gets mixed to\n         * both the low and high bits in the 128x64 multiply below.\n         */\n        m128.low64 += (xxh_u64)(len - 1) << 54;\n        input_hi   ^= bitfliph;\n        /*\n         * Add the high 32 bits of input_hi to the high 32 bits of m128, then\n         * add the long product of the low 32 bits of input_hi and XXH_PRIME32_2 to\n         * the high 64 bits of m128.\n         *\n         * The best approach to this operation is different on 32-bit and 64-bit.\n         */\n        if (sizeof(void *) < sizeof(xxh_u64)) { /* 32-bit */\n            /*\n             * 32-bit optimized version, which is more readable.\n             *\n             * On 32-bit, it removes an ADC and delays a dependency between the two\n             * halves of m128.high64, but it generates an extra mask on 64-bit.\n             */\n            
m128.high64 += (input_hi & 0xFFFFFFFF00000000ULL) + XXH_mult32to64((xxh_u32)input_hi, XXH_PRIME32_2);\n        } else {\n            /*\n             * 64-bit optimized (albeit more confusing) version.\n             *\n             * Uses some properties of addition and multiplication to remove the mask:\n             *\n             * Let:\n             *    a = input_hi.lo = (input_hi & 0x00000000FFFFFFFF)\n             *    b = input_hi.hi = (input_hi & 0xFFFFFFFF00000000)\n             *    c = XXH_PRIME32_2\n             *\n             *    a + (b * c)\n             * Inverse Property: x + y - x == y\n             *    a + (b * (1 + c - 1))\n             * Distributive Property: x * (y + z) == (x * y) + (x * z)\n             *    a + (b * 1) + (b * (c - 1))\n             * Identity Property: x * 1 == x\n             *    a + b + (b * (c - 1))\n             *\n             * Substitute a, b, and c:\n             *    input_hi.hi + input_hi.lo + ((xxh_u64)input_hi.lo * (XXH_PRIME32_2 - 1))\n             *\n             * Since input_hi.hi + input_hi.lo == input_hi, we get this:\n             *    input_hi + ((xxh_u64)input_hi.lo * (XXH_PRIME32_2 - 1))\n             */\n            m128.high64 += input_hi + XXH_mult32to64((xxh_u32)input_hi, XXH_PRIME32_2 - 1);\n        }\n        /* m128 ^= XXH_swap64(m128 >> 64); */\n        m128.low64  ^= XXH_swap64(m128.high64);\n\n        {   /* 128x64 multiply: h128 = m128 * XXH_PRIME64_2; */\n            XXH128_hash_t h128 = XXH_mult64to128(m128.low64, XXH_PRIME64_2);\n            h128.high64 += m128.high64 * XXH_PRIME64_2;\n\n            h128.low64   = XXH3_avalanche(h128.low64);\n            h128.high64  = XXH3_avalanche(h128.high64);\n            return h128;\n    }   }\n}\n\n/*\n * Assumption: `secret` size is >= XXH3_SECRET_SIZE_MIN\n */\nXXH_FORCE_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_0to16_128b(const xxh_u8* input, size_t len, const xxh_u8* secret, XXH64_hash_t seed)\n{\n    XXH_ASSERT(len <= 16);\n    {   if 
(len > 8) return XXH3_len_9to16_128b(input, len, secret, seed);\n        if (len >= 4) return XXH3_len_4to8_128b(input, len, secret, seed);\n        if (len) return XXH3_len_1to3_128b(input, len, secret, seed);\n        {   XXH128_hash_t h128;\n            xxh_u64 const bitflipl = XXH_readLE64(secret+64) ^ XXH_readLE64(secret+72);\n            xxh_u64 const bitfliph = XXH_readLE64(secret+80) ^ XXH_readLE64(secret+88);\n            h128.low64 = XXH64_avalanche(seed ^ bitflipl);\n            h128.high64 = XXH64_avalanche( seed ^ bitfliph);\n            return h128;\n    }   }\n}\n\n/*\n * A bit slower than XXH3_mix16B, but handles multiply by zero better.\n */\nXXH_FORCE_INLINE XXH128_hash_t\nXXH128_mix32B(XXH128_hash_t acc, const xxh_u8* input_1, const xxh_u8* input_2,\n              const xxh_u8* secret, XXH64_hash_t seed)\n{\n    acc.low64  += XXH3_mix16B (input_1, secret+0, seed);\n    acc.low64  ^= XXH_readLE64(input_2) + XXH_readLE64(input_2 + 8);\n    acc.high64 += XXH3_mix16B (input_2, secret+16, seed);\n    acc.high64 ^= XXH_readLE64(input_1) + XXH_readLE64(input_1 + 8);\n    return acc;\n}\n\n\nXXH_FORCE_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_17to128_128b(const xxh_u8* XXH_RESTRICT input, size_t len,\n                      const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                      XXH64_hash_t seed)\n{\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;\n    XXH_ASSERT(16 < len && len <= 128);\n\n    {   XXH128_hash_t acc;\n        acc.low64 = len * XXH_PRIME64_1;\n        acc.high64 = 0;\n\n#if XXH_SIZE_OPT >= 1\n        {\n            /* Smaller, but slightly slower. 
*/\n            unsigned int i = (unsigned int)(len - 1) / 32;\n            do {\n                acc = XXH128_mix32B(acc, input+16*i, input+len-16*(i+1), secret+32*i, seed);\n            } while (i-- != 0);\n        }\n#else\n        if (len > 32) {\n            if (len > 64) {\n                if (len > 96) {\n                    acc = XXH128_mix32B(acc, input+48, input+len-64, secret+96, seed);\n                }\n                acc = XXH128_mix32B(acc, input+32, input+len-48, secret+64, seed);\n            }\n            acc = XXH128_mix32B(acc, input+16, input+len-32, secret+32, seed);\n        }\n        acc = XXH128_mix32B(acc, input, input+len-16, secret, seed);\n#endif\n        {   XXH128_hash_t h128;\n            h128.low64  = acc.low64 + acc.high64;\n            h128.high64 = (acc.low64    * XXH_PRIME64_1)\n                        + (acc.high64   * XXH_PRIME64_4)\n                        + ((len - seed) * XXH_PRIME64_2);\n            h128.low64  = XXH3_avalanche(h128.low64);\n            h128.high64 = (XXH64_hash_t)0 - XXH3_avalanche(h128.high64);\n            return h128;\n        }\n    }\n}\n\nXXH_NO_INLINE XXH_PUREF XXH128_hash_t\nXXH3_len_129to240_128b(const xxh_u8* XXH_RESTRICT input, size_t len,\n                       const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                       XXH64_hash_t seed)\n{\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN); (void)secretSize;\n    XXH_ASSERT(128 < len && len <= XXH3_MIDSIZE_MAX);\n\n    {   XXH128_hash_t acc;\n        unsigned i;\n        acc.low64 = len * XXH_PRIME64_1;\n        acc.high64 = 0;\n        /*\n         *  We set as `i` as offset + 32. We do this so that unchanged\n         * `len` can be used as upper bound. 
This reaches a sweet spot\n         * where both x86 and aarch64 get simple agen and good codegen\n         * for the loop.\n         */\n        for (i = 32; i < 160; i += 32) {\n            acc = XXH128_mix32B(acc,\n                                input  + i - 32,\n                                input  + i - 16,\n                                secret + i - 32,\n                                seed);\n        }\n        acc.low64 = XXH3_avalanche(acc.low64);\n        acc.high64 = XXH3_avalanche(acc.high64);\n        /*\n         * NB: `i <= len` will duplicate the last 32-bytes if\n         * len % 32 was zero. This is an unfortunate necessity to keep\n         * the hash result stable.\n         */\n        for (i=160; i <= len; i += 32) {\n            acc = XXH128_mix32B(acc,\n                                input + i - 32,\n                                input + i - 16,\n                                secret + XXH3_MIDSIZE_STARTOFFSET + i - 160,\n                                seed);\n        }\n        /* last bytes */\n        acc = XXH128_mix32B(acc,\n                            input + len - 16,\n                            input + len - 32,\n                            secret + XXH3_SECRET_SIZE_MIN - XXH3_MIDSIZE_LASTOFFSET - 16,\n                            (XXH64_hash_t)0 - seed);\n\n        {   XXH128_hash_t h128;\n            h128.low64  = acc.low64 + acc.high64;\n            h128.high64 = (acc.low64    * XXH_PRIME64_1)\n                        + (acc.high64   * XXH_PRIME64_4)\n                        + ((len - seed) * XXH_PRIME64_2);\n            h128.low64  = XXH3_avalanche(h128.low64);\n            h128.high64 = (XXH64_hash_t)0 - XXH3_avalanche(h128.high64);\n            return h128;\n        }\n    }\n}\n\nstatic XXH_PUREF XXH128_hash_t\nXXH3_finalizeLong_128b(const xxh_u64* XXH_RESTRICT acc, const xxh_u8* XXH_RESTRICT secret, size_t secretSize, xxh_u64 len)\n{\n    XXH128_hash_t h128;\n    h128.low64 = XXH3_finalizeLong_64b(acc, secret, 
len);\n    h128.high64 = XXH3_mergeAccs(acc, secret + secretSize\n                                             - XXH_STRIPE_LEN - XXH_SECRET_MERGEACCS_START,\n                                             ~(len * XXH_PRIME64_2));\n    return h128;\n}\n\nXXH_FORCE_INLINE XXH128_hash_t\nXXH3_hashLong_128b_internal(const void* XXH_RESTRICT input, size_t len,\n                            const xxh_u8* XXH_RESTRICT secret, size_t secretSize,\n                            XXH3_f_accumulate f_acc,\n                            XXH3_f_scrambleAcc f_scramble)\n{\n    XXH_ALIGN(XXH_ACC_ALIGN) xxh_u64 acc[XXH_ACC_NB] = XXH3_INIT_ACC;\n\n    XXH3_hashLong_internal_loop(acc, (const xxh_u8*)input, len, secret, secretSize, f_acc, f_scramble);\n\n    /* converge into final hash */\n    XXH_STATIC_ASSERT(sizeof(acc) == 64);\n    XXH_ASSERT(secretSize >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);\n    return XXH3_finalizeLong_128b(acc, secret, secretSize, (xxh_u64)len);\n}\n\n/*\n * It's important for performance that XXH3_hashLong() is not inlined.\n */\nXXH_NO_INLINE XXH_PUREF XXH128_hash_t\nXXH3_hashLong_128b_default(const void* XXH_RESTRICT input, size_t len,\n                           XXH64_hash_t seed64,\n                           const void* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)seed64; (void)secret; (void)secretLen;\n    return XXH3_hashLong_128b_internal(input, len, XXH3_kSecret, sizeof(XXH3_kSecret),\n                                       XXH3_accumulate, XXH3_scrambleAcc);\n}\n\n/*\n * It's important for performance to pass @p secretLen (when it's static)\n * to the compiler, so that it can properly optimize the vectorized loop.\n *\n * When the secret size is unknown, or on GCC 12 where the mix of NO_INLINE and FORCE_INLINE\n * breaks -Og, this is XXH_NO_INLINE.\n */\nXXH3_WITH_SECRET_INLINE XXH128_hash_t\nXXH3_hashLong_128b_withSecret(const void* XXH_RESTRICT input, size_t len,\n                              XXH64_hash_t seed64,\n                          
    const void* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)seed64;\n    return XXH3_hashLong_128b_internal(input, len, (const xxh_u8*)secret, secretLen,\n                                       XXH3_accumulate, XXH3_scrambleAcc);\n}\n\nXXH_FORCE_INLINE XXH128_hash_t\nXXH3_hashLong_128b_withSeed_internal(const void* XXH_RESTRICT input, size_t len,\n                                XXH64_hash_t seed64,\n                                XXH3_f_accumulate f_acc,\n                                XXH3_f_scrambleAcc f_scramble,\n                                XXH3_f_initCustomSecret f_initSec)\n{\n    if (seed64 == 0)\n        return XXH3_hashLong_128b_internal(input, len,\n                                           XXH3_kSecret, sizeof(XXH3_kSecret),\n                                           f_acc, f_scramble);\n    {   XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];\n        f_initSec(secret, seed64);\n        return XXH3_hashLong_128b_internal(input, len, (const xxh_u8*)secret, sizeof(secret),\n                                           f_acc, f_scramble);\n    }\n}\n\n/*\n * It's important for performance that XXH3_hashLong is not inlined.\n */\nXXH_NO_INLINE XXH128_hash_t\nXXH3_hashLong_128b_withSeed(const void* input, size_t len,\n                            XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen)\n{\n    (void)secret; (void)secretLen;\n    return XXH3_hashLong_128b_withSeed_internal(input, len, seed64,\n                XXH3_accumulate, XXH3_scrambleAcc, XXH3_initCustomSecret);\n}\n\ntypedef XXH128_hash_t (*XXH3_hashLong128_f)(const void* XXH_RESTRICT, size_t,\n                                            XXH64_hash_t, const void* XXH_RESTRICT, size_t);\n\nXXH_FORCE_INLINE XXH128_hash_t\nXXH3_128bits_internal(const void* input, size_t len,\n                      XXH64_hash_t seed64, const void* XXH_RESTRICT secret, size_t secretLen,\n                      XXH3_hashLong128_f f_hl128)\n{\n    
XXH_ASSERT(secretLen >= XXH3_SECRET_SIZE_MIN);\n    /*\n     * If an action is to be taken if `secret` conditions are not respected,\n     * it should be done here.\n     * For now, it's a contract pre-condition.\n     * Adding a check and a branch here would cost performance at every hash.\n     */\n    if (len <= 16)\n        return XXH3_len_0to16_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, seed64);\n    if (len <= 128)\n        return XXH3_len_17to128_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);\n    if (len <= XXH3_MIDSIZE_MAX)\n        return XXH3_len_129to240_128b((const xxh_u8*)input, len, (const xxh_u8*)secret, secretLen, seed64);\n    return f_hl128(input, len, seed64, secret, secretLen);\n}\n\n\n/* ===   Public XXH128 API   === */\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t XXH3_128bits(XXH_NOESCAPE const void* input, size_t len)\n{\n    return XXH3_128bits_internal(input, len, 0,\n                                 XXH3_kSecret, sizeof(XXH3_kSecret),\n                                 XXH3_hashLong_128b_default);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t\nXXH3_128bits_withSecret(XXH_NOESCAPE const void* input, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize)\n{\n    return XXH3_128bits_internal(input, len, 0,\n                                 (const xxh_u8*)secret, secretSize,\n                                 XXH3_hashLong_128b_withSecret);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t\nXXH3_128bits_withSeed(XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)\n{\n    return XXH3_128bits_internal(input, len, seed,\n                                 XXH3_kSecret, sizeof(XXH3_kSecret),\n                                 XXH3_hashLong_128b_withSeed);\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t\nXXH3_128bits_withSecretandSeed(XXH_NOESCAPE const void* input, size_t len, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)\n{\n    if (len <= XXH3_MIDSIZE_MAX)\n        return XXH3_128bits_internal(input, len, seed, XXH3_kSecret, sizeof(XXH3_kSecret), NULL);\n    return XXH3_hashLong_128b_withSecret(input, len, seed, secret, secretSize);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t\nXXH128(XXH_NOESCAPE const void* input, size_t len, XXH64_hash_t seed)\n{\n    return XXH3_128bits_withSeed(input, len, seed);\n}\n\n\n/* ===   XXH3 128-bit streaming   === */\n#ifndef XXH_NO_STREAM\n/*\n * All initialization and update functions are identical to 64-bit streaming variant.\n * The only difference is the finalization routine.\n */\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_reset(XXH_NOESCAPE XXH3_state_t* statePtr)\n{\n    return XXH3_64bits_reset(statePtr);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_reset_withSecret(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize)\n{\n    return XXH3_64bits_reset_withSecret(statePtr, secret, secretSize);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_reset_withSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH64_hash_t seed)\n{\n    return XXH3_64bits_reset_withSeed(statePtr, seed);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_reset_withSecretandSeed(XXH_NOESCAPE XXH3_state_t* statePtr, XXH_NOESCAPE const void* secret, size_t secretSize, XXH64_hash_t seed)\n{\n    return XXH3_64bits_reset_withSecretandSeed(statePtr, secret, secretSize, seed);\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_128bits_update(XXH_NOESCAPE XXH3_state_t* state, XXH_NOESCAPE const void* input, size_t len)\n{\n    return XXH3_update_regular(state, input, len);\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t XXH3_128bits_digest (XXH_NOESCAPE const XXH3_state_t* state)\n{\n    const unsigned char* const secret = (state->extSecret == NULL) ? state->customSecret : state->extSecret;\n    if (state->totalLen > XXH3_MIDSIZE_MAX) {\n        XXH_ALIGN(XXH_ACC_ALIGN) XXH64_hash_t acc[XXH_ACC_NB];\n        XXH3_digest_long(acc, state, secret);\n        XXH_ASSERT(state->secretLimit + XXH_STRIPE_LEN >= sizeof(acc) + XXH_SECRET_MERGEACCS_START);\n        return XXH3_finalizeLong_128b(acc, secret, state->secretLimit + XXH_STRIPE_LEN,  (xxh_u64)state->totalLen);\n    }\n    /* len <= XXH3_MIDSIZE_MAX : short code */\n    if (state->useSeed)\n        return XXH3_128bits_withSeed(state->buffer, (size_t)state->totalLen, state->seed);\n    return XXH3_128bits_withSecret(state->buffer, (size_t)(state->totalLen),\n                                   secret, state->secretLimit + XXH_STRIPE_LEN);\n}\n#endif /* !XXH_NO_STREAM */\n/* 128-bit utility functions */\n\n/* return : 1 is equal, 0 if different */\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API int XXH128_isEqual(XXH128_hash_t h1, XXH128_hash_t h2)\n{\n    /* note : XXH128_hash_t is compact, it has no padding byte */\n    return !(XXH_memcmp(&h1, &h2, sizeof(h1)));\n}\n\n/* This prototype is compatible with stdlib's qsort().\n * @return : >0 if *h128_1  > *h128_2\n *           <0 if *h128_1  < *h128_2\n *           =0 if *h128_1 == *h128_2  */\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API int XXH128_cmp(XXH_NOESCAPE const void* h128_1, XXH_NOESCAPE const void* h128_2)\n{\n    XXH128_hash_t const h1 = *(const XXH128_hash_t*)h128_1;\n    XXH128_hash_t const h2 = *(const XXH128_hash_t*)h128_2;\n    int const hcmp = (h1.high64 > h2.high64) - (h2.high64 > h1.high64);\n    /* note : bets that, in most cases, hash values are different */\n    if (hcmp) return hcmp;\n    return (h1.low64 > h2.low64) - (h2.low64 > h1.low64);\n}\n\n\n/*======   Canonical representation   ======*/\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API void\nXXH128_canonicalFromHash(XXH_NOESCAPE XXH128_canonical_t* dst, XXH128_hash_t hash)\n{\n    XXH_STATIC_ASSERT(sizeof(XXH128_canonical_t) == sizeof(XXH128_hash_t));\n    if (XXH_CPU_LITTLE_ENDIAN) {\n        hash.high64 = XXH_swap64(hash.high64);\n        hash.low64  = XXH_swap64(hash.low64);\n    }\n    XXH_memcpy(dst, &hash.high64, sizeof(hash.high64));\n    XXH_memcpy((char*)dst + sizeof(hash.high64), &hash.low64, sizeof(hash.low64));\n}\n\n/*! @ingroup XXH3_family */\nXXH_PUBLIC_API XXH128_hash_t\nXXH128_hashFromCanonical(XXH_NOESCAPE const XXH128_canonical_t* src)\n{\n    XXH128_hash_t h;\n    h.high64 = XXH_readBE64(src);\n    h.low64  = XXH_readBE64(src->digest + 8);\n    return h;\n}\n\n\n\n/* ==========================================\n * Secret generators\n * ==========================================\n */\n#define XXH_MIN(x, y) (((x) > (y)) ? (y) : (x))\n\nXXH_FORCE_INLINE void XXH3_combine16(void* dst, XXH128_hash_t h128)\n{\n    XXH_writeLE64( dst, XXH_readLE64(dst) ^ h128.low64 );\n    XXH_writeLE64( (char*)dst+8, XXH_readLE64((char*)dst+8) ^ h128.high64 );\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API XXH_errorcode\nXXH3_generateSecret(XXH_NOESCAPE void* secretBuffer, size_t secretSize, XXH_NOESCAPE const void* customSeed, size_t customSeedSize)\n{\n#if (XXH_DEBUGLEVEL >= 1)\n    XXH_ASSERT(secretBuffer != NULL);\n    XXH_ASSERT(secretSize >= XXH3_SECRET_SIZE_MIN);\n#else\n    /* production mode, assert() are disabled */\n    if (secretBuffer == NULL) return XXH_ERROR;\n    if (secretSize < XXH3_SECRET_SIZE_MIN) return XXH_ERROR;\n#endif\n\n    if (customSeedSize == 0) {\n        customSeed = XXH3_kSecret;\n        customSeedSize = XXH_SECRET_DEFAULT_SIZE;\n    }\n#if (XXH_DEBUGLEVEL >= 1)\n    XXH_ASSERT(customSeed != NULL);\n#else\n    if (customSeed == NULL) return XXH_ERROR;\n#endif\n\n    /* Fill secretBuffer with a copy of customSeed - repeat as needed */\n    {   size_t pos = 0;\n        while (pos < secretSize) {\n            size_t const toCopy = XXH_MIN((secretSize - pos), customSeedSize);\n            XXH_memcpy((char*)secretBuffer + pos, customSeed, toCopy);\n            pos += toCopy;\n    }   }\n\n    {   size_t const nbSeg16 = secretSize / 16;\n        size_t n;\n        XXH128_canonical_t scrambler;\n        XXH128_canonicalFromHash(&scrambler, XXH128(customSeed, customSeedSize, 0));\n        for (n=0; n<nbSeg16; n++) {\n            XXH128_hash_t const h128 = XXH128(&scrambler, sizeof(scrambler), n);\n            XXH3_combine16((char*)secretBuffer + n*16, h128);\n        }\n        /* last segment */\n        XXH3_combine16((char*)secretBuffer + secretSize - 16, XXH128_hashFromCanonical(&scrambler));\n    }\n    return XXH_OK;\n}\n\n/*! 
@ingroup XXH3_family */\nXXH_PUBLIC_API void\nXXH3_generateSecret_fromSeed(XXH_NOESCAPE void* secretBuffer, XXH64_hash_t seed)\n{\n    XXH_ALIGN(XXH_SEC_ALIGN) xxh_u8 secret[XXH_SECRET_DEFAULT_SIZE];\n    XXH3_initCustomSecret(secret, seed);\n    XXH_ASSERT(secretBuffer != NULL);\n    XXH_memcpy(secretBuffer, secret, XXH_SECRET_DEFAULT_SIZE);\n}\n\n\n\n/* Pop our optimization override from above */\n#if XXH_VECTOR == XXH_AVX2 /* AVX2 */ \\\n  && defined(__GNUC__) && !defined(__clang__) /* GCC, not Clang */ \\\n  && defined(__OPTIMIZE__) && XXH_SIZE_OPT <= 0 /* respect -O0 and -Os */\n#  pragma GCC pop_options\n#endif\n\n#endif  /* XXH_NO_LONG_LONG */\n\n#endif  /* XXH_NO_XXH3 */\n\n/*!\n * @}\n */\n#endif  /* XXH_IMPLEMENTATION */\n\n\n#if defined (__cplusplus) && !defined(XXH_NO_EXTERNC_GUARD)\n} /* extern \"C\" */\n#endif\n"
  },
  {
    "path": "modules/Makefile",
    "content": "\nSUBDIRS = redisjson redistimeseries redisbloom redisearch\n\ndefine submake\n\tfor dir in $(SUBDIRS); do $(MAKE) -C $$dir $(1); done\nendef\n\nall: prepare_source\n\t$(call submake,$@)\n\nget_source:\n\t$(call submake,$@)\n\nprepare_source: get_source handle-werrors setup_environment\n\nclean:\n\t$(call submake,$@)\n\ndistclean: clean_environment\n\t$(call submake,$@)\n\npristine:\n\t$(call submake,$@)\n\ninstall:\n\t$(call submake,$@)\n\nsetup_environment: install-rust handle-werrors\n\nclean_environment: uninstall-rust\n\n# Keep all of the Rust stuff in one place\ninstall-rust:\nifeq ($(INSTALL_RUST_TOOLCHAIN),yes)\n\t@RUST_VERSION=1.94.0; \\\n\tARCH=\"$$(uname -m)\"; \\\n\tif ldd --version 2>&1 | grep -q musl; then LIBC_TYPE=\"musl\"; else LIBC_TYPE=\"gnu\"; fi; \\\n\techo \"Detected architecture: $${ARCH} and libc: $${LIBC_TYPE}\"; \\\n\tcase \"$${ARCH}\" in \\\n\t\t'x86_64') \\\n\t\t\tif [ \"$${LIBC_TYPE}\" = \"musl\" ]; then \\\n\t\t\t\tRUST_INSTALLER=\"rust-$${RUST_VERSION}-x86_64-unknown-linux-musl\"; \\\n\t\t\t\tRUST_SHA256=\"9a358120ce1491a4d5b7f71a41e4e97b380b5db5d4ec31f7110f5b3090bd3d55\"; \\\n\t\t\telse \\\n\t\t\t\tRUST_INSTALLER=\"rust-$${RUST_VERSION}-x86_64-unknown-linux-gnu\"; \\\n\t\t\t\tRUST_SHA256=\"e8fa4185f3ef6ae32725ff638b1ecdbff28f5d651dc0b3111e2539350d03b15a\"; \\\n\t\t\tfi ;; \\\n\t\t'aarch64') \\\n\t\t\tif [ \"$${LIBC_TYPE}\" = \"musl\" ]; then \\\n\t\t\t\tRUST_INSTALLER=\"rust-$${RUST_VERSION}-aarch64-unknown-linux-musl\"; \\\n\t\t\t\tRUST_SHA256=\"008b3f0fc4175c956ecbfa4e0c48865ec3f953741b2926e75e8ded7e3adfdb19\"; \\\n\t\t\telse \\\n\t\t\t\tRUST_INSTALLER=\"rust-$${RUST_VERSION}-aarch64-unknown-linux-gnu\"; \\\n\t\t\t\tRUST_SHA256=\"c6fd6d1c925ed986df3b2c0b89bbc90ce15afb62e4d522a054e7d50c856b3c1a\"; \\\n\t\t\tfi ;; \\\n\t\t*) echo >&2 \"Unsupported architecture: '$${ARCH}'\"; exit 1 ;; \\\n\tesac; \\\n\techo \"Downloading and installing Rust standalone installer: $${RUST_INSTALLER}\"; \\\n\twget --quiet -O 
$${RUST_INSTALLER}.tar.xz https://static.rust-lang.org/dist/$${RUST_INSTALLER}.tar.xz; \\\n\techo \"$${RUST_SHA256} $${RUST_INSTALLER}.tar.xz\" | sha256sum -c --status || { echo \"Rust standalone installer checksum failed!\"; exit 1; }; \\\n\ttar -xf $${RUST_INSTALLER}.tar.xz; \\\n\t(cd $${RUST_INSTALLER} && ./install.sh); \\\n\trm -rf $${RUST_INSTALLER}\nendif\n\nuninstall-rust:\nifeq ($(INSTALL_RUST_TOOLCHAIN),yes)\n\t@if [ -x \"/usr/local/lib/rustlib/uninstall.sh\" ]; then \\\n\t\techo \"Uninstalling Rust using uninstall.sh script\"; \\\n\t\trm -rf ~/.cargo; \\\n\t\t/usr/local/lib/rustlib/uninstall.sh; \\\n\telse \\\n\t\techo \"WARNING: Rust toolchain not found or uninstall script is missing.\"; \\\n\tfi\nendif\n\nhandle-werrors: get_source\nifeq ($(DISABLE_WERRORS),yes)\n\t@echo \"Disabling -Werror for all modules\"\n\t@for dir in $(SUBDIRS); do \\\n\t\techo \"Processing $$dir\"; \\\n\t\tfind $$dir/src -type f \\\n\t\t\t\\( -name \"Makefile\" \\\n\t\t\t-o -name \"*.mk\" \\\n\t\t\t-o -name \"CMakeLists.txt\" \\) \\\n\t\t\t-exec sed -i 's/-Werror//g' {} +; \\\n\tdone\nendif\n\n.PHONY: all clean distclean install $(SUBDIRS) setup_environment clean_environment install-rust uninstall-rust handle-werrors\n"
  },
  {
    "path": "modules/common.mk",
    "content": "PREFIX ?= /usr/local\nINSTALL_DIR ?= $(DESTDIR)$(PREFIX)/lib/redis/modules\nINSTALL ?= install\n\n# This logic *partially* follows the current module build system. It is a bit awkward and\n# should be changed if/when the modules' build process is refactored.\n\nARCH_MAP_x86_64 := x64\nARCH_MAP_i386 := x86\nARCH_MAP_i686 := x86\nARCH_MAP_aarch64 := arm64v8\nARCH_MAP_arm64 := arm64v8\n\nOS := $(shell uname -s | tr '[:upper:]' '[:lower:]')\nARCH := $(ARCH_MAP_$(shell uname -m))\nifeq ($(ARCH),)\n\t$(error Unrecognized CPU architecture $(shell uname -m))\nendif\n\nFULL_VARIANT := $(OS)-$(ARCH)-release\n\n# Common rules for all modules, based on per-module configuration\n\nall: $(TARGET_MODULE)\n\n$(TARGET_MODULE): get_source\n\t$(MAKE) -C $(SRC_DIR)\n\tcp ${TARGET_MODULE} ./\n\nget_source: $(SRC_DIR)/.prepared\n\n$(SRC_DIR)/.prepared:\n\tmkdir -p $(SRC_DIR)\n\tgit clone --recursive --depth 1 --branch $(MODULE_VERSION) $(MODULE_REPO) $(SRC_DIR)\n\ttouch $@\n\nclean:\n\t-$(MAKE) -C $(SRC_DIR) clean\n\t-rm -f ./*.so\n\ndistclean: clean\n\t-$(MAKE) -C $(SRC_DIR) distclean\n\npristine:\n\t-rm -rf $(SRC_DIR)\n\ninstall: $(TARGET_MODULE)\n\tmkdir -p $(INSTALL_DIR)\n\t$(INSTALL) -m 0755 -D $(TARGET_MODULE) $(INSTALL_DIR)\n\n.PHONY: all clean distclean pristine install\n"
  },
  {
    "path": "modules/redisbloom/Makefile",
    "content": "SRC_DIR = src\nMODULE_VERSION = v8.7.90\nMODULE_REPO = https://github.com/redisbloom/redisbloom\nTARGET_MODULE = $(SRC_DIR)/bin/$(FULL_VARIANT)/redisbloom.so\n\ninclude ../common.mk\n"
  },
  {
    "path": "modules/redisearch/Makefile",
    "content": "SRC_DIR = src\nMODULE_VERSION = v8.7.90\nMODULE_REPO = https://github.com/redisearch/redisearch\nTARGET_MODULE = $(SRC_DIR)/bin/$(FULL_VARIANT)/search-community/redisearch.so\n\ninclude ../common.mk\n\n"
  },
  {
    "path": "modules/redisjson/Makefile",
    "content": "SRC_DIR = src\nMODULE_VERSION = v8.7.90\nMODULE_REPO = https://github.com/redisjson/redisjson\nTARGET_MODULE = $(SRC_DIR)/bin/$(FULL_VARIANT)/rejson.so\n\ninclude ../common.mk\n\n$(SRC_DIR)/.cargo_fetched:\n\tcd $(SRC_DIR) && cargo fetch\n\nget_source: $(SRC_DIR)/.cargo_fetched\n"
  },
  {
    "path": "modules/redistimeseries/Makefile",
    "content": "SRC_DIR = src\nMODULE_VERSION = v8.7.90\nMODULE_REPO = https://github.com/redistimeseries/redistimeseries\nTARGET_MODULE = $(SRC_DIR)/bin/$(FULL_VARIANT)/redistimeseries.so\n\ninclude ../common.mk\n"
  },
  {
    "path": "modules/vector-sets/.gitignore",
    "content": "__pycache__\nmisc\n*.so\n*.xo\n*.o\n.DS_Store\nw2v\nword2vec.bin\nTODO\n*.txt\n*.rdb\n"
  },
  {
    "path": "modules/vector-sets/Makefile",
    "content": "# Compiler settings\nCC = cc\n\nifdef SANITIZER\nifeq ($(SANITIZER),address)\n\tSAN=-fsanitize=address\nelse\nifeq ($(SANITIZER),undefined)\n\tSAN=-fsanitize=undefined\nelse\nifeq ($(SANITIZER),thread)\n\tSAN=-fsanitize=thread\nelse\n    $(error \"unknown sanitizer=${SANITIZER}\")\nendif\nendif\nendif\nendif\n\nCFLAGS = -O2 -Wall -Wextra -g $(SAN) -std=c11\nLDFLAGS = -lm $(SAN)\n\n# Detect OS\nuname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')\nuname_M := $(shell sh -c 'uname -m 2>/dev/null || echo not')\n\n# Shared library compile flags for linux / osx\nifeq ($(uname_S),Linux)\n\tSHOBJ_CFLAGS ?= -W -Wall -fno-common -g -ggdb -std=c11 -O2\n\tSHOBJ_LDFLAGS ?= -shared\nifneq (,$(findstring armv,$(uname_M)))\n\tSHOBJ_LDFLAGS += -latomic\nendif\nifneq (,$(findstring aarch64,$(uname_M)))\n\tSHOBJ_LDFLAGS += -latomic\nendif\nelse\n\tSHOBJ_CFLAGS ?= -W -Wall -dynamic -fno-common -g -ggdb -std=c11 -O3\n\tSHOBJ_LDFLAGS ?= -bundle -undefined dynamic_lookup\nendif\n\n# OS X 11.x doesn't have /usr/lib/libSystem.dylib and needs an explicit setting.\nifeq ($(uname_S),Darwin)\nifeq (\"$(wildcard /usr/lib/libSystem.dylib)\",\"\")\nLIBS = -L /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -lsystem\nendif\nendif\n\n.SUFFIXES: .c .so .xo .o\n\nall: vset.so\n\n.c.xo:\n\t$(CC) -I. 
$(CFLAGS) $(SHOBJ_CFLAGS) -fPIC -c $< -o $@\n\nvset.xo: ../../src/redismodule.h expr.c\n\nvset.so: vset.xo hnsw.xo vset_config.xo\n\t$(CC) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) $(SAN) -lc\n\n# Example sources / objects\nSRCS = hnsw.c w2v.c vset_config.c\nOBJS = $(SRCS:.c=.o)\n\nTARGET = w2v\nMODULE = vset.so\n\n# Default target\nall: $(TARGET) $(MODULE)\n\n# Example linking rule\n$(TARGET): $(OBJS)\n\t$(CC) $(OBJS) $(LDFLAGS) -o $(TARGET)\n\n# Compilation rule for object files\n%.o: %.c\n\t$(CC) $(CFLAGS) -c $< -o $@\n\nexpr-test: expr.c fastjson.c fastjson_test.c\n\t$(CC) $(CFLAGS) expr.c -o expr-test -DTEST_MAIN -lm\n\n# Clean rule\nclean:\n\trm -f $(TARGET) $(OBJS) *.xo *.so\n\n# Declare phony targets\n.PHONY: all clean\n"
  },
  {
    "path": "modules/vector-sets/README.md",
    "content": "**IMPORTANT:** *Please note that this is a merged module: it is now part of the Redis binary, and you don't need to build it and load it into Redis. Compiling Redis version 8 or greater will result in the Vector Sets commands being available. However, you can still compile this module as a shared library in order to load it into older versions of Redis.*\n\nThis module implements Vector Sets for Redis, a new Redis data type similar\nto Sorted Sets, but with string elements associated with a vector instead of\na score. The fundamental goal of Vector Sets is to make it possible to add items,\nand later retrieve a subset of the added items that are the most similar to a\nspecified vector (often a learned embedding), or the most similar to the vector\nof an element that is already part of the Vector Set.\n\nMoreover, Vector Sets implement optional filtered search capabilities: it is possible to associate attributes with all or a subset of elements in the set, and then, using the `FILTER` option of the `VSIM` command, ask for items similar to a given vector that also pass a filter specified as a simple mathematical expression (like `\".year > 1950\"`). This means that **you can have vector similarity and scalar filters at the same time**.\n\n## Installation\n\n**WARNING:** If you are running **Redis 8.0 RC1 or greater** you don't need to install anything: just compile Redis, and the Vector Sets commands will be part of the default install. 
Otherwise, to test Vector Sets with older Redis versions, follow these instructions.\n\nBuild with:\n\n    make\n\nThen load the module with the following command line, or by inserting the needed directives in the `redis.conf` file.\n\n    ./redis-server --loadmodule vset.so\n\nTo run tests, I suggest using this:\n\n    ./redis-server --save \"\" --enable-debug-command yes\n\nThen execute the tests with:\n\n    ./test.py\n\n## Reference of available commands\n\n**VADD: add items into a vector set**\n\n    VADD key [REDUCE dim] FP32|VALUES vector element [CAS] [NOQUANT | Q8 | BIN]\n             [EF build-exploration-factor] [SETATTR <attributes>] [M <numlinks>]\n\nAdd a new element into the vector set specified by the key.\nThe vector can be provided as an FP32 blob of values, or as floating point\nnumbers as strings, prefixed by the number of elements (3 in the example):\n\n    VADD mykey VALUES 3 0.1 1.2 0.5 my-element\n\nMeaning of the options:\n\n`REDUCE` implements random projection, in order to reduce the\ndimensionality of the vector. The projection matrix is saved and reloaded\nalong with the vector set. **Please note that** the `REDUCE` option must be passed immediately before the vector, like in `REDUCE 50 VALUES ...`.\n\n`CAS` performs the operation partially using threads, in a\ncheck-and-set style. The neighbor candidates collection, which is slow, is\nperformed in the background, while the command is executed in the main thread.\n\n`NOQUANT` forces the vector to be created (in the first VADD call to a given key) without integer 8 quantization, which is otherwise the default.\n\n`BIN` forces the vector to use binary quantization instead of int8. This is much faster and uses less memory, but has an impact on recall quality. 
The distance is computed as normalized Hamming distance (`hamming_bits * 2 / dim`), yielding values in [0, 2] consistent with cosine distance semantics, not raw Hamming bit counts.\n\n`Q8` forces the vector to use signed 8 bit quantization. This is the default, and the option only exists to check, at insertion time, that the vector set uses the same format.\n\n`EF` plays a role in the effort made to find good candidates when connecting the new node to the existing HNSW graph. The default is 200. Using a larger value may help to achieve a better recall. To improve the recall it is also possible to increase `EF` during `VSIM` searches.\n\n`SETATTR` associates attributes to the newly created entry, or updates the attributes of the entry if it already exists. It is the same as calling the `VSETATTR` command separately, so please check the documentation of that command in the filtered search section of this documentation.\n\n`M` defaults to 16 and is the famous HNSW `M` parameter. It is the maximum number of connections that each node of the graph has with other nodes: more connections mean more memory, but a better ability to explore the graph. Nodes at layer zero (every node exists at least at layer zero) have `M*2` connections, while the other layers only have `M` connections. This means that, for instance, an `M` of 64 will use at least 1024 bytes of memory for each node! That is, `64 links * 2 times * 8 bytes pointers`, and even more, since on average each node has something like 1.33 layers (but the other layers have just `M` connections, instead of `M*2`). 
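The memory math above can be sketched with a rough estimator (a minimal sketch under the stated assumptions: 8-byte pointers and about 1.33 layers per node on average; not the module's actual memory accounting):

```python
def node_link_memory(m: int, ptr_size: int = 8, avg_layers: float = 1.33) -> float:
    """Rough per-node memory used by HNSW links: layer zero has M*2 links,
    and the ~0.33 average extra layers have M links each."""
    layer0 = m * 2 * ptr_size                  # e.g. 64 * 2 * 8 = 1024 bytes
    upper = (avg_layers - 1.0) * m * ptr_size  # extra layers, M links each
    return layer0 + upper

# With M=64, layer zero alone already accounts for 1024 bytes per node.
print(node_link_memory(64))
```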
If you don't have a recall quality problem, the default is fine, and uses a limited amount of memory.\n\n**VSIM: return elements by vector similarity**\n\n    VSIM key [ELE|FP32|VALUES] <vector or element> [WITHSCORES] [WITHATTRIBS] [COUNT num] [EPSILON delta] [EF search-exploration-factor] [FILTER expression] [FILTER-EF max-filtering-effort] [TRUTH] [NOTHREAD]\n\nThe command returns similar vectors. For simplicity (and verbosity), in the following example, instead of providing a vector using FP32 or VALUES (like in `VADD`), we will ask for elements having a vector similar to a given element already in the vector set:\n\n    > VSIM word_embeddings ELE apple\n     1) \"apple\"\n     2) \"apples\"\n     3) \"pear\"\n     4) \"fruit\"\n     5) \"berry\"\n     6) \"pears\"\n     7) \"strawberry\"\n     8) \"peach\"\n     9) \"potato\"\n    10) \"grape\"\n\nIt is possible to specify a `COUNT` and also to get the similarity score (from 1 to 0, where 1 is identical, 0 is opposite vector) between the query and the returned items.\n\n    > VSIM word_embeddings ELE apple WITHSCORES COUNT 3\n    1) \"apple\"\n    2) \"0.9998867657923256\"\n    3) \"apples\"\n    4) \"0.8598527610301971\"\n    5) \"pear\"\n    6) \"0.8226882219314575\"\n\nIt is also possible to specify an `EPSILON`, that is, a floating point number between 0 and 1, in order to only return elements whose distance is no greater than the specified one. In vector sets, the returned elements have a similarity score (when compared to the query vector) that is between 1 and 0, where 1 means identical, 0 opposite vectors. If, for instance, the `EPSILON` option is specified with an argument of 0.2, it means that we will get only elements that have a similarity of 0.8 or better (a distance < 0.2). 
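One way to reason about the score/distance relation above is with a small sketch. Mapping cosine similarity (in [-1, 1]) into the [0, 1] score range as `(1 + cos) / 2` is assumed here purely for illustration, and the `passes_epsilon` helper is hypothetical, not part of the module:

```python
import math

def similarity_score(u, v):
    """Map cosine similarity to a [0, 1] score: 1 = same direction,
    0 = opposite vectors (illustrative assumption, not the module's code)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return (1.0 + dot / norm) / 2.0

def passes_epsilon(score, delta):
    """EPSILON delta keeps elements with distance < delta,
    i.e. similarity > 1 - delta."""
    return score > 1.0 - delta

assert similarity_score([1.0, 0.0], [1.0, 0.0]) == 1.0   # identical
assert similarity_score([1.0, 0.0], [-1.0, 0.0]) == 0.0  # opposite
assert passes_epsilon(0.85, 0.2) and not passes_epsilon(0.75, 0.2)
```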
This is useful when a large `COUNT` is specified, yet we don't want elements that are too far away from our query vector.\n\nThe `EF` argument is the exploration factor: the higher it is, the slower the command becomes, but the better the index is explored to find nodes that are near to our query. Sensible values are from 50 to 1000.\n\nThe `TRUTH` option forces the command to perform a linear scan of all the entries inside the set, without using the graph search inside the HNSW, so it returns the best matching elements (the perfect result set) that can be used in order to easily calculate the recall. Of course the linear scan is `O(N)`, so it is much slower than the `O(log(N))` (considering a small `COUNT`) provided by the HNSW index.\n\nThe `NOTHREAD` option forces the command to execute the search on the data structure in the main thread. Normally `VSIM` spawns a thread instead. This may be useful for benchmarking purposes, or when we work with extremely small vector sets and don't want to pay the cost of spawning a thread. It is possible that in the future this option will be automatically used by Redis when we detect small vector sets. 
Note that this option blocks the server for all the time needed to complete the command, so it is a source of potential latency issues: if you are in doubt, never use it.\n\nThe `WITHSCORES` option returns, for each returned element, a floating point number representing how near the element is to the query, as a similarity between 0 and 1, where 0 means the vectors are opposite, and 1 means they are pointing exactly in the same direction (maximum similarity).\n\nThe `WITHATTRIBS` option returns, for each element, the JSON attribute associated with the element, or NULL for the elements missing an attribute.\n\nFor the `FILTER` and `FILTER-EF` options, please check the filtered search section of this documentation.\n\nNote that when `WITHSCORES` and `WITHATTRIBS` are provided at the same time, the RESP2 reply guarantees that the returned elements are always in the sequence *ele*,*score*,*attribs*, while RESP3 replies will be in the form *ele -> score|attrib* when just one is provided, or *ele -> [score,attrib]* when both are provided. That is, when both options are used and RESP3 is used, the score and attribute will be a two-item array associated with the element key.\n\n**VDIM: return the dimension of the vectors inside the vector set**\n\n    VDIM keyname\n\nExample:\n\n    > VDIM word_embeddings\n    (integer) 300\n\nNote that in the case of vectors that were populated using the `REDUCE`\noption, for random projection, the vector set will report the size of\nthe projected (reduced) dimension. 
Yet the user should perform all the\nqueries using full-size vectors.\n\n**VCARD: return the number of elements in a vector set**\n\n    VCARD key\n\nExample:\n\n    > VCARD word_embeddings\n    (integer) 3000000\n\n\n**VREM: remove elements from vector set**\n\n    VREM key element\n\nExample:\n\n    > VADD vset VALUES 3 1 0 1 bar\n    (integer) 1\n    > VREM vset bar\n    (integer) 1\n    > VREM vset bar\n    (integer) 0\n\nVREM does not perform tombstone / logical deletion, but will actually reclaim\nthe memory from the vector set, so it is safe to add and remove elements\nin a vector set in the context of long running applications that continuously\nupdate the same index.\n\n**VEMB: return the approximated vector of an element**\n\n    VEMB key element\n\nExample:\n\n    > VEMB word_embeddings SQL\n      1) \"0.18208661675453186\"\n      2) \"0.08535309880971909\"\n      3) \"0.1365649551153183\"\n      4) \"-0.16501599550247192\"\n      5) \"0.14225517213344574\"\n      ... 295 more elements ...\n\nBecause vector sets perform insertion time normalization and optional\nquantization, the returned vector could be approximated. `VEMB` will take\ncare to de-quantize and de-normalize the vector before returning it.\n\nIt is possible to ask VEMB to return raw data, that is, the internal representation used by the vector: fp32, int8, or a bitmap for binary quantization. This behavior is triggered by the `RAW` option of VEMB:\n\n    VEMB word_embedding apple RAW\n\nIn this case the return value of the command is an array of three or more elements:\n1. The name of the quantization used, that is one of: \"fp32\", \"bin\", \"q8\".\n2. A string blob containing the raw data: 4-byte fp32 floats for fp32, a bitmap for binary quantization, or an int8 bytes array for q8 quantization.\n3. A float representing the L2 norm of the vector before normalization. 
You need to multiply the components by this value if you want to de-normalize the vector for any reason.\n\nFor q8 quantization, an additional element is also returned: the quantization\nrange, so the integers from -127 to 127 represent (normalized) components\nin the range `-range`, `+range`.\n\n**VISMEMBER: test if a given element already exists**\n\nThis command will return 1 (or true) if the specified element is already in the vector set, otherwise 0 (or false) is returned.\n\n    VISMEMBER key element\n\nAs with other existence check Redis commands, if the key does not exist it is considered as if it was empty, thus the element is reported as non-existing.\n\n**VRANGE: return elements in a lexicographical range**\n\n    VRANGE key start end count\n\nThe `VRANGE` command has many different use cases, but its main goal is to\nprovide a stateless iterator for the elements inside a vector set: that is,\nit allows retrieving all the elements inside a vector set in small amounts\nfor each call, without an explicit cursor, and with guarantees about what\nthe user will miss in case the vector set is changing (elements added and/or\nremoved) during the iteration.\n\nThe command usage is straightforward:\n\n```\n> VRANGE word_embeddings_int8 [Redis + 10\n 1) \"Redis\"\n 2) \"Rediscover\"\n 3) \"Rediscover_Ashland\"\n 4) \"Rediscover_Northern_Ireland\"\n 5) \"Rediscovered\"\n 6) \"Rediscovered_Bookshop\"\n 7) \"Rediscovering\"\n 8) \"Rediscovering_God\"\n 9) \"Rediscovering_Lost\"\n10) \"Rediscovers\"\n```\n\nThe above command returns 10 (or fewer, if fewer are available in the specified range) elements from \"Redis\" (inclusive) to the maximum possible element. The comparison is performed byte by byte, as `memcmp()` would do; in this way the elements have a total order. 
The start and end range can be either a string, prefixed by `[` or `(` (the prefix is mandatory) to tell the command if the range is inclusive or exclusive, or can be the special symbols `-` and `+` that mean, respectively, the minimum and the maximum element.\n\nSo for instance if I want to iterate all the elements, ten elements for each call, I'll proceed as such:\n\n```\n> VRANGE mykey - + 10\n 1) \"a\"\n 2) \"a-league\"\n 3) \"a.\"\n 4) \"a.d.\"\n 5) \"a.k.a.\"\n 6) \"a.m.\"\n 7) \"a1\"\n 8) \"a2\"\n 9) \"a3\"\n10) \"a7\"\n```\n\nThis will give me the first 10 elements. Then I want the next ten elements\nstarting from the last element in the previous result, but *excluding* it,\nso the next range will use the `(` prefix with the last element of the\nprevious call, that was `\"a7\"`:\n\n```\n> VRANGE mykey (a7 + 10\n 1) \"a930913\"\n 2) \"aa\"\n 3) \"aaa\"\n 4) \"aaron\"\n 5) \"ab\"\n 6) \"aba\"\n 7) \"abandon\"\n 8) \"abandoned\"\n 9) \"abandoning\"\n10) \"abandonment\"\n```\n\nAnd so forth.\n\nThe count argument is mandatory; however, a negative count means to return all the elements in the set. This means that `VRANGE mykey - + -1` will return every element. Of course, iterating like that means that it is possible to block the server for a long time.\n\nThe command time complexity is O(1) to seek to the element (considering the element would be of reasonable size), since we use a Radix Tree in the underlying implementation, plus the time to yield \"M\" elements. So if M is small, each call is just executed in constant time. However the iteration of a total set (via multiple calls) of N elements is O(N). Basically: this command, with a small count, will never produce latency issues in the Redis server.\n\nIn case the elements are changing continuously as the set is iterated, the guarantees are very simple: each range will produce exactly the elements that were present in the range at the moment the `VRANGE` command was executed. 
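The pagination loop described above can be simulated client-side; here is a minimal pure-Python sketch of the range semantics (`[`/`(` prefixes, `-`/`+` specials) over a sorted list of strings, useful for reasoning about the iteration pattern (a hypothetical helper, not the server's Radix Tree implementation):

```python
import bisect

def vrange_sim(sorted_elems, start, end, count):
    """Simulate VRANGE over an already sorted list of strings."""
    if start == "-":
        lo = 0
    elif start.startswith("("):                      # exclusive start
        lo = bisect.bisect_right(sorted_elems, start[1:])
    else:                                            # "[" inclusive start
        lo = bisect.bisect_left(sorted_elems, start[1:])
    if end == "+":
        hi = len(sorted_elems)
    elif end.startswith("("):                        # exclusive end
        hi = bisect.bisect_left(sorted_elems, end[1:])
    else:                                            # "[" inclusive end
        hi = bisect.bisect_right(sorted_elems, end[1:])
    items = sorted_elems[lo:hi]
    return items if count < 0 else items[:count]

elems = sorted(["a", "a7", "aa", "ab", "abandon"])
page1 = vrange_sim(elems, "-", "+", 2)               # first page
page2 = vrange_sim(elems, "(" + page1[-1], "+", 2)   # next page, excluding "a7"
```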
In other words, an iteration performed in this way is *guaranteed* to return all the elements that stayed within the vector set from the start to the end of the iteration. Elements removed or added in the meantime may be returned or not depending on the moment they were added or removed.\n\n**VLINKS: introspection command that shows neighbors for a node**\n\n    VLINKS key element [WITHSCORES]\n\nThe command reports the neighbors for each level.\n\n**VINFO: introspection command that shows info about a vector set**\n\n    VINFO key\n\nExample:\n\n    > VINFO word_embeddings\n     1) quant-type\n     2) int8\n     3) vector-dim\n     4) (integer) 300\n     5) size\n     6) (integer) 3000000\n     7) max-level\n     8) (integer) 12\n     9) vset-uid\n    10) (integer) 1\n    11) hnsw-max-node-uid\n    12) (integer) 3000000\n\n**VSETATTR: associate or remove the JSON attributes of elements**\n\n    VSETATTR key element \"{... json ...}\"\n\nEach element of a vector set can be optionally associated with a JSON string\nin order to use the `FILTER` option of `VSIM` to filter elements by scalars\n(see the filtered search section for more information). 
This command can set,\nupdate (if already set) or delete (if you set to an empty string) the\nassociated JSON attributes of an element.\n\nThe command returns 0 if the element or the key doesn't exist, without\nraising an error, otherwise 1 is returned, and the element attributes\nare set or updated.\n\n**VGETATTR: retrieve the JSON attributes of elements**\n\n    VGETATTR key element\n\nThe command returns the JSON attribute associated with an element, or\nnull if the element has no associated attribute, if the element does not\nexist, or if the key does not exist.\n\n**VRANDMEMBER: return random members from a vector set**\n\n    VRANDMEMBER key [count]\n\nReturn one or more random elements from a vector set.\n\nThe semantics of this command are similar to Redis's native SRANDMEMBER command:\n\n- When called without count, returns a single random element from the set, as a single string (no array reply).\n- When called with a positive count, returns up to count distinct random elements (no duplicates).\n- When called with a negative count, returns count random elements, potentially with duplicates.\n- If the count value is positive and larger than the set size, the entire set is returned.\n\nIf the key doesn't exist, returns a Null reply if count is not given, or an empty array if a count is provided.\n\nExamples:\n\n    > VADD vset VALUES 3 1 0 0 elem1\n    (integer) 1\n    > VADD vset VALUES 3 0 1 0 elem2\n    (integer) 1\n    > VADD vset VALUES 3 0 0 1 elem3\n    (integer) 1\n\n    # Return a single random element\n    > VRANDMEMBER vset\n    \"elem2\"\n\n    # Return 2 distinct random elements\n    > VRANDMEMBER vset 2\n    1) \"elem1\"\n    2) \"elem3\"\n\n    # Return 3 random elements with possible duplicates\n    > VRANDMEMBER vset -3\n    1) \"elem2\"\n    2) \"elem2\"\n    3) \"elem1\"\n\n    # Return more elements than in the set (returns all elements)\n    > VRANDMEMBER vset 10\n    1) \"elem1\"\n    2) \"elem2\"\n    3) \"elem3\"\n\n    # When key doesn't exist\n    > VRANDMEMBER nonexistent\n    (nil)\n    > VRANDMEMBER nonexistent 3\n    (empty array)\n\nThis command is particularly useful for:\n\n1. Selecting random samples from a vector set for testing or training.\n2. Performance testing by retrieving random elements for subsequent similarity searches.\n\nWhen the user asks for unique elements (positive count) the implementation optimizes for two scenarios:\n- For small sample sizes (less than 20% of the set size), it uses a dictionary to avoid duplicates, and performs a real random walk inside the graph.\n- For large sample sizes (more than 20% of the set size), it starts from a random node and sequentially traverses the internal list, providing faster performance but not truly \"random\" elements.\n\nThe command has `O(N)` worst-case time complexity when requesting many unique elements (it uses linear scanning), or `O(M*log(N))` complexity when the user asks for `M` random elements in a vector set of `N` elements, with `M` much smaller than `N`.\n\n# Filtered search\n\nEach element of the vector set can be associated with a set of attributes specified as a JSON blob:\n\n    > VADD vset VALUES 3 1 1 1 a SETATTR '{\"year\": 1950}'\n    (integer) 1\n    > VADD vset VALUES 3 -1 -1 -1 b SETATTR '{\"year\": 1951}'\n    (integer) 1\n\nSpecifying an attribute with the `SETATTR` option of `VADD` is exactly equivalent to adding an element and then setting (or updating, if already set) the attributes JSON string. 
Also the symmetrical `VGETATTR` command returns the attribute associated with a given element.\n\n    > VADD vset VALUES 3 0 1 0 c\n    (integer) 1\n    > VSETATTR vset c '{\"year\": 1952}'\n    (integer) 1\n    > VGETATTR vset c\n    \"{\\\"year\\\": 1952}\"\n\nAt this point, I may use the FILTER option of VSIM to only ask for the subset of elements that satisfy my expression:\n\n    > VSIM vset VALUES 3 0 0 0 FILTER '.year > 1950'\n    1) \"c\"\n    2) \"b\"\n\nThe items will be returned again in order of similarity (most similar first), but only the items with the year field matching the expression are returned.\n\nThe expressions are similar to what you would write inside the `if` statement of JavaScript or other familiar programming languages: you can use `and`, `or`, the obvious math operators like `+`, `-`, `/`, `>=`, `<`, ... and so forth (see the expressions section for more info). The selectors of the JSON object attributes start with a dot followed by the name of the key inside the JSON objects.\n\nElements with invalid JSON, or lacking the specified field, **are considered as not matching** the expression, but will not generate any error at runtime.\n\n## FILTER expressions capabilities\n\nFILTER expressions allow you to perform complex filtering on vector similarity results using a JavaScript-like syntax. The expression is evaluated against each element's JSON attributes, with only elements that satisfy the expression being included in the results.\n\n### Expression Syntax\n\nExpressions support the following operators and capabilities:\n\n1. **Arithmetic operators**: `+`, `-`, `*`, `/`, `%` (modulo), `**` (exponentiation)\n2. **Comparison operators**: `>`, `>=`, `<`, `<=`, `==`, `!=`\n3. **Logical operators**: `and`/`&&`, `or`/`||`, `!`/`not`\n4. **Containment operator**: `in`\n5. 
**Parentheses** for grouping: `(...)`\n\n### Selector Notation\n\nAttributes are accessed using dot notation:\n\n- `.year` references the \"year\" attribute\n- `.movie.year` would **NOT** reference the \"year\" field inside a \"movie\" object; only keys that are at the first level of the JSON object are accessible.\n\n### JSON and expression data types\n\nExpressions can work with:\n\n- Numbers (double precision floats)\n- Strings (enclosed in single or double quotes)\n- Booleans (no native type: they are represented as 1 for true, 0 for false)\n- Arrays (for use with the `in` operator: `value in [1, 2, 3]`)\n\nJSON attributes are converted in this way:\n\n- Numbers will be converted to numbers.\n- Strings to strings.\n- Booleans to the numbers 0 or 1.\n- Arrays to tuples (for the \"in\" operator), but only if composed of just numbers and strings.\n\nAny other type is ignored, and accessing it will make the expression evaluate to false.\n\n### The IN operator\n\nThe `IN` operator works in two ways: it can test for membership in an array, like in:\n\n    5 in [1, 2, 3]\n    \"foo\" in [1, \"foo\", \"bar\"]\n\nBut it can also check for substrings, when both operands are strings:\n\n    \"foo\" in \"barfoobar\" # Will evaluate to true\n    \"zap\" in \"foobar\" # Will evaluate to false\n\n### Examples\n\n```\n# Find items from the 1980s\nVSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.year >= 1980 and .year < 1990'\n\n# Find action movies with high ratings\nVSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.genre == \"action\" and .rating > 8.0'\n\n# Find movies directed by either Spielberg or Nolan\nVSIM movies VALUES 3 0.5 0.8 0.2 FILTER '.director in [\"Spielberg\", \"Nolan\"]'\n\n# Complex condition with numerical operations\nVSIM movies VALUES 3 0.5 0.8 0.2 FILTER '(.year - 2000) ** 2 < 100 and .rating / 2 > 4'\n```\n\n### Error Handling\n\nElements with any of the following conditions are considered not matching:\n- Missing the queried JSON attribute\n- Having invalid JSON in their attributes\n- Having a JSON value that cannot be converted to the expected type\n\nThis behavior allows you to safely filter on optional attributes without generating errors.\n\n### FILTER effort\n\nThe `FILTER-EF` option controls the maximum effort spent when filtering vector search results.\n\nWhen performing vector similarity search with filtering, Vector Sets perform the standard similarity search while applying the filter expression to each node. Since many results might be filtered out, Vector Sets may need to examine a lot more candidates than the requested `COUNT` to ensure sufficient matching results are returned. Actually, if the elements matching the filter are very rare, or if there are fewer matching elements than the specified count, this would trigger a full scan of the HNSW graph.\n\nFor this reason, by default, the maximum effort is limited to a reasonable number of explored nodes.\n\n### Modifying the FILTER effort\n\n1. By default, Vector Sets will explore up to `COUNT * 100` candidates to find matching results.\n2. You can control this exploration with the `FILTER-EF` parameter.\n3. A higher `FILTER-EF` value increases the chances of finding all relevant matches at the cost of increased processing time.\n4. A `FILTER-EF` of zero will explore as many nodes as needed in order to actually return the number of elements specified by `COUNT`.\n5. Even when a high `FILTER-EF` value is specified **the implementation will do a lot less work** if the elements passing the filter are very common, because of the early stop conditions of the HNSW implementation (once the specified amount of elements is reached and the quality check of the other candidates triggers an early stop).\n\n```\nVSIM key [ELE|FP32|VALUES] <vector or element> COUNT 10 FILTER '.year > 2000' FILTER-EF 500\n```\n\nIn this example, Vector Sets will examine up to 500 potential nodes. 
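The matching semantics described in the Error Handling section can be illustrated with a minimal client-side sketch for one fixed expression, `.year > <threshold>` (a hypothetical helper, not the module's expression engine):

```python
import json

def matches_year_gt(attr_json, threshold):
    """Sketch of FILTER semantics for '.year > <threshold>': invalid JSON,
    a missing field, or an ill-typed value never match and never raise."""
    try:
        attrs = json.loads(attr_json) if attr_json else None
    except (json.JSONDecodeError, TypeError):
        return False
    if not isinstance(attrs, dict):
        return False
    year = attrs.get("year")
    if isinstance(year, bool) or not isinstance(year, (int, float)):
        return False
    return year > threshold

assert matches_year_gt('{"year": 1951}', 1950)       # matches
assert not matches_year_gt('{"year": 1950}', 1950)   # not strictly greater
assert not matches_year_gt('{"title": "x"}', 1950)   # missing field: no match
assert not matches_year_gt('not json', 1950)         # invalid JSON: no match
```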
Of course if count is reached before exploring 500 nodes, and the quality checks show that it is not possible to make progress on similarity, the search ends sooner.\n\n### Performance Considerations\n\n- If you have highly selective filters (few items match), use a higher `FILTER-EF`, or just design your application in order to handle a result set that is smaller than the requested count. Note that, in any case, the additional elements may be too distant from the query vector.\n- For less selective filters, the default should be sufficient.\n- Very selective filters with low `FILTER-EF` values may return fewer items than requested.\n- Extremely high values may impact performance without significantly improving results.\n\nThe optimal `FILTER-EF` value depends on:\n1. The selectivity of your filter.\n2. The distribution of your data.\n3. The required recall quality.\n\nA good practice is to start with the default and increase it if needed when you observe fewer results than expected.\n\n### Testing a largish data set\n\nTo really see how things work at scale, you can [download](https://antirez.com/word2vec_with_attribs.rdb) the following dataset:\n\n    wget https://antirez.com/word2vec_with_attribs.rdb\n\nIt contains the 3 million words in Word2Vec, each having as attribute a JSON with just the length of the word. Because of the length distribution of words in large amounts of text, where longer words become less and less common, this is ideal for checking how filtering behaves as the filter matches fewer and fewer elements in the vector set.\n\nFor instance:\n\n    > VSIM word_embeddings_bin ele \"pasta\" FILTER \".len == 6\"\n     1) \"pastas\"\n     2) \"rotini\"\n     3) \"gnocci\"\n     4) \"panino\"\n     5) \"salads\"\n     6) \"breads\"\n     7) \"salame\"\n     8) \"sauces\"\n     9) \"cheese\"\n    10) \"fritti\"\n\nThis will easily retrieve the desired amount of items (`COUNT` is 10 by default) since there are many items of length 6. 
However:\n\n    > VSIM word_embeddings_bin ele \"pasta\" FILTER \".len == 33\"\n    1) \"skinless_boneless_chicken_breasts\"\n    2) \"boneless_skinless_chicken_breasts\"\n    3) \"Boneless_skinless_chicken_breasts\"\n\nThis time even if we asked for 10 items, we only get 3, since the default filter effort will be `10*100 = 1000`. We can tune this by specifying the effort explicitly, at the risk of a slower query, of course:\n\n    > VSIM word_embeddings_bin ele \"pasta\" FILTER \".len == 33\" FILTER-EF 10000\n     1) \"skinless_boneless_chicken_breasts\"\n     2) \"boneless_skinless_chicken_breasts\"\n     3) \"Boneless_skinless_chicken_breasts\"\n     4) \"mozzarella_feta_provolone_cheddar\"\n     5) \"Greatfood.com_R_www.greatfood.com\"\n     6) \"Pepperidge_Farm_Goldfish_crackers\"\n     7) \"Prosecuted_Mobsters_Rebuilt_Dying\"\n     8) \"Crispy_Snacker_Sandwiches_Popcorn\"\n     9) \"risultati_delle_partite_disputate\"\n    10) \"Peppermint_Mocha_Twist_Gingersnap\"\n\nThis time we get all ten items, even if the last one will be quite far from our query vector. We encourage you to experiment with this test dataset in order to better understand the dynamics of the implementation and the natural tradeoffs of filtered search.\n\n**Keep in mind** that by default, Redis Vector Sets will try to avoid a likely useless huge scan of the HNSW graph, and will rather return few or no elements at all, since this is almost always what the user actually wants in the context of retrieving *similar* items to the query.\n\n# Single Instance Scalability and Latency\n\nVector Sets implement a threading model that allows Redis to handle many concurrent requests: by default `VSIM` is always threaded, and `VADD` is not (but can be partially threaded using the `CAS` option). 
This section explains how the threading and locking mechanisms work, and what to expect in terms of performance.\n\n## Threading Model\n\n- The `VSIM` command runs in a separate thread by default, allowing Redis to continue serving other commands.\n- A maximum of 32 threads can run concurrently (defined by `HNSW_MAX_THREADS`).\n- When this limit is reached, additional `VSIM` requests are queued - Redis remains responsive, and no latency event is generated.\n- The `VADD` command with the `CAS` option also leverages threading for the computation-heavy candidate search phase, but the insertion itself is performed in the main thread. `VADD` always runs in sub-millisecond time, so this is not a source of latency, but many hundreds of writes per second can be challenging to handle with a single instance. Please look at the next section about multiple instances scalability.\n- Commands run within Lua scripts, MULTI/EXEC blocks, or from replication are executed in the main thread to ensure consistency.\n\n```\n> VSIM vset VALUES 3 1 1 1 FILTER '.year > 2000'  # This runs in a thread.\n> VADD vset VALUES 3 1 1 1 element CAS            # Candidate search runs in a thread.\n```\n\n## Locking Mechanism\n\nVector Sets use a read/write locking mechanism to coordinate access:\n\n- Reads (`VSIM`, `VEMB`, etc.) acquire a read lock, allowing multiple concurrent reads.\n- Writes (`VADD`, `VREM`, etc.) acquire a write lock, temporarily blocking all reads.\n- When a write lock is requested while reads are in progress, the write operation waits for all reads to complete.\n- Once a write lock is granted, all reads are blocked until the write completes.\n- Each thread has a dedicated slot for tracking visited nodes during graph traversal, avoiding contention. 
This improves performance but limits the maximum number of concurrent threads, since each node has a memory cost proportional to the number of slots.\n\n## DEL latency\n\nDeleting a very large vector set (millions of elements) can cause latency spikes, as deletion rebuilds connections between nodes. This may change in the future.\nThe deletion latency is most noticeable when using `DEL` on a key containing a large vector set or when the key expires.\n\n## Performance Characteristics\n\n- Search operations (`VSIM`) scale almost linearly with the number of CPU cores available, up to the thread limit. You can expect a Vector Set composed of millions of items, with vectors of dimension 300 and the default int8 quantization, to deliver around 50k VSIM operations per second on a single host.\n- Insertion operations (`VADD`) are more computationally expensive than searches, and can't be threaded: expect much lower throughput, in the range of a few thousand inserts per second.\n- Binary quantization offers significantly faster search performance at the cost of some recall quality, while int8 quantization, the default, seems to have a very small impact on recall quality, while it significantly improves performance and space efficiency.\n- The `EF` parameter has a major impact on both search quality and performance - higher values mean better recall but slower searches.\n- Graph traversal time scales logarithmically with the number of elements, making Vector Sets efficient even with millions of vectors.\n\n## Loading / Saving performance\n\nVector Sets are able to serialize to disk the graph structure as it is in memory, so loading back the data does not need to rebuild the HNSW graph. This means that Redis can load millions of items per minute. 
For instance, 3 million items with 300-component vectors can be loaded back into memory in around 15 seconds.\n\n# Scaling vector sets to multiple instances\n\nThe fundamental way vector sets can be scaled to very large data sets\nand to many Redis instances is that a given very large set of vectors\ncan be partitioned into N different Redis keys, which can also live in\ndifferent Redis instances.\n\nFor instance, I could add my elements into `key0`, `key1`, `key2`, by hashing\nthe item in some way, like doing `crc32(item)%3`, effectively splitting\nthe dataset into three different parts. However once I want all the vectors\nof my dataset near a given query vector, I could simply perform the\n`VSIM` command against all three keys, merging the results by\nscore (so the commands must be called using the `WITHSCORES` option) on\nthe client side: once the union of the results is ordered by the\nsimilarity score, the query is equivalent to having a single key `key0+1+2`\ncontaining all the items.\n\nThere are a few interesting facts to note about this pattern:\n\n1. It is possible to have a logical vector set that is as big as the sum of all the Redis instances we are using.\n2. Deletion operations remain simple: we can hash the item and select the key where it belongs.\n3. However, even if I use 10 different Redis instances, I'm not going to reach 10x the **read** operations per second, compared to using a single server: for each logical query, I need to query all the instances. Yet, smaller graphs are faster to navigate, so there is some win even from the point of view of CPU usage.\n4. Insertions, that is **write** queries, will scale linearly: I can add N items against N instances at the same time, splitting the insertion load evenly. This is very important since vector sets, being based on HNSW data structures, are slower to add items than to query similar items, by a very big factor.\n5. 
While it cannot always guarantee the best results, with proper timeout management this system may be considered *highly available*, since if a subset of the N instances is reachable, I'll still be able to return similar items to my query vector.\n\nNotably, this pattern can be implemented in a way that avoids paying the sum of the round trip times with all the servers: it is possible to send the queries to all the instances at the same time, so that the latency will be equal to the slowest reply out of the N server queries.\n\n# Optimizing memory usage\n\nVector Sets, or rather HNSWs, the underlying data structure used by Vector Sets, combined with the features provided by Vector Sets themselves (quantization, random projection, filtering, ...) form an implementation with a non-trivial space of parameters that can be tuned. Despite the complexity of the implementation and of vector similarity problems, here is a list of simple ideas that can guide the user to pick the best settings:\n\n* 8 bit quantization (the default) is almost always a win. It reduces the memory usage of vectors by a factor of 4, yet the performance penalty in terms of recall is minimal. It also reduces insertion and search time by around 2 times or more.\n* Binary quantization is much more extreme: it makes vector sets a lot faster, but increases the recall error considerably, for instance from 95% to 80% if all the other parameters remain the same. Yet, the speedup is really big, and the memory usage of vectors, compared to full precision vectors, is 32 times smaller.\n* Vector storage is not the only contributor to a Vector Set's high memory usage per entry: nodes contain, on average, `M*2 + M*0.33` pointers, where M is by default 16 (but can be tuned in `VADD`, see the `M` option). 
Each node also stores the string item and the optional JSON attributes: these should be as small as possible in order to avoid contributing further to the memory usage.\n* The `M` parameter should be increased to 32 or more only when near perfect recall is really needed.\n* It is possible to gain space (less memory usage) by sacrificing time (more CPU time), using a low `M` (the default of 16, for instance) and a high `EF` (the effort parameter of `VSIM`) in order to scan the graph more deeply.\n* When memory usage is a serious concern, and we suspect that the vectors we are storing don't contain enough information - at least for our use case - to justify the number of components they feature, random projection (the `REDUCE` option of `VADD`) can be tested to see if dimensionality reduction is possible with acceptable precision loss.\n\n## Random projection tradeoffs\n\nSometimes learned vectors are not as information dense as we might guess, that\nis, there are components having similar meanings in the space, and components\nhaving values that don't really represent features that matter in our use case.\n\nAt the same time, certain vectors are very big, 1024 components or more. In these cases, it is possible to use the random projection feature of Redis Vector Sets in order to reduce both space (less RAM used) and time (more operations per second). The feature is accessible via the `REDUCE` option of the `VADD` command. However, keep in mind that you need to test how much the reduction impacts the performance of your vectors in terms of recall and quality of the results you get back.\n\n## What is a random projection?\n\nThe concept of Random Projection is relatively simple to grasp. For instance, a projection that turns a 100-component vector into a 10-component vector will apply a different linear transformation to map the 100 components onto each of the 10 target components. 
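In other words (a sketch of the idea, not necessarily the exact implementation), if `x` is the original vector and `P` is a fixed random 10x100 matrix, the projection is simply `y = P x`, that is:\n\n```\ny[i] = sum_j( P[i][j] * x[j] )    for i = 0..9, j = 0..99\n```\n\n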
Please note that *each of the target components* will get some random amount of all the 100 original components. It is mathematically proven that this process results in a vector space where elements still have similar distances among them, but some information will get lost.\n\n## Examples of projections and loss of precision\n\nTo show a somewhat extreme case, let's take the 3 million Word2Vec items and compress them from 300 components to 100, 50 and 25 component vectors. Then, we check the recall against the ground truth for each of the vector sets produced in this way (using different `REDUCE` parameters of `VADD`). This is the result, obtained by asking for the top 10 elements.\n\n```\n----------------------------------------------------------------------\nKey                            Average Recall % Std Dev\n----------------------------------------------------------------------\nword_embeddings_int8           95.98           12.14\n  ^ This is the same key used for ground truth, but without TRUTH option\nword_embeddings_reduced_100    40.20           20.13\nword_embeddings_reduced_50     24.42           16.89\nword_embeddings_reduced_25     14.31           9.99\n```\n\nHere the dimensionality reduction we are using is quite extreme: going from 300 to 100 means that 66.6% of the original information is lost. 
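As a reminder, reduced keys like these are created by passing the `REDUCE` option to `VADD` at insertion time (a sketch, with a placeholder vector and element name):\n\n```\n> VADD word_embeddings_reduced_100 REDUCE 100 VALUES 300 <300 floats> some-element\n```\n\n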
The recall drops from 96% to 40%, down to 24% and 14% for even more extreme dimensionality reductions.\n\nReducing the dimension of vectors that are already relatively small, like the 300-component vectors of the example above, will provide only relatively small memory savings, especially because by default Vector Sets use `int8` quantization, which uses only one byte per component:\n\n```\n> MEMORY USAGE word_embeddings_int8\n(integer) 3107002888\n> MEMORY USAGE word_embeddings_reduced_100\n(integer) 2507122888\n```\n\nOf course going, for example, from 2048-component vectors to 1024 would provide a much more substantial memory saving, even with the `int8` quantization used by Vector Sets, assuming the recall loss is acceptable. Besides the memory saving, there is also a reduction in CPU time, translating to more operations per second.\n\nAnother thing to note is that, with certain embedding models, binary quantization (which offers an 8x reduction in memory usage compared to 8 bit quantization, and a very big speedup in computation) performs much better than reducing the vector dimension by the same amount via random projection:\n\n```\nword_embeddings_bin            35.48           19.78\n```\n\nThis is the same test as above: we get a 35% recall, which is not too far from the 40% obtained with a random projection from 300 to 100 components. 
However, while the first technique reduces the size by 3 times, binary quantization reduces it by 8 times.\n\n```\n> MEMORY USAGE word_embeddings_bin\n(integer) 2327002888\n```\n\nIn this specific case the key uses JSON attributes and has a graph connection overhead that is much bigger than the 300 bits each vector takes, but, as already said, for big vectors (1024 components, for instance) or for lower values of `M` (see `VADD`: the `M` parameter controls the level of connectivity, so it changes the number of pointers used per node) the memory saving is much stronger.\n\n# Vector Sets troubleshooting and understandability\n\n## Debugging poor recall or unexpected results\n\nVector graphs and similarity queries pose many challenges, mainly due to the following three problems:\n\n1. The error due to the approximated nature of Vector Sets is hard to evaluate.\n2. The error added by the quantization often depends on the exact vector space (the embedding we are using **and** how far apart the elements we represent in such embeddings are).\n3. We live in the illusion that learned embeddings capture the best similarity possible among elements, which is obviously not always true, and highly application dependent.\n\nThe only way to debug such problems is to inspect, step by step, what is happening inside our application, and the structure of the HNSW graph itself. To do so, we suggest considering the following tools:\n\n1. The `TRUTH` option of the `VSIM` command returns the ground truth of the most similar elements, without using the HNSW graph, by performing a linear scan instead.\n2. The `VLINKS` command allows you to explore the graph, to see if the connections among nodes make sense, and to investigate why a given node may be more isolated than expected. This command can also be used in a different way, when we want very fast \"similar items\" without paying the HNSW traversal time. 
It exploits the fact that we have a direct reference from each element in our vector set to its node in our HNSW graph.\n3. The `WITHSCORES` option, in the supported commands, returns a value that is directly related to the *cosine similarity* between the query vector and the item vectors: the similarity is simply rescaled from the original -1, 1 range to 0, 1, but otherwise the metric is identical.\n\n## Clients, latency and bandwidth usage\n\nDuring Vector Sets testing, we discovered that clients often introduce considerable latency and CPU usage (on the client side, not in Redis) for two main reasons:\n\n1. The serialization to `VALUES ... list of floats ...` can often be very slow.\n2. The vector payload of floats represented as strings is very large, resulting in high bandwidth usage and latency, compared to other Redis commands.\n\nSwitching from `VALUES` to `FP32` as a method for transmitting vectors may easily provide 10-20x speedups.\n\n# Implementation details\n\nVector sets are based on the `hnsw.c` implementation of the HNSW data structure, with extensions for speed and functionality.\n\nThe main features are:\n\n* Proper node deletion with relinking.\n* 8 bit and binary quantization.\n* Threaded queries.\n* Filtered search with predicate callback.\n"
  },
  {
    "path": "modules/vector-sets/commands.json",
    "content": "{\n  \"VADD\": {\n    \"summary\": \"Add one or more elements to a vector set, or update its vector if it already exists\",\n    \"complexity\": \"O(log(N)) for each element added, where N is the number of elements in the vector set.\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": -5,\n    \"function\": \"vaddCommand\",\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"token\": \"REDUCE\",\n        \"name\": \"reduce\",\n        \"type\": \"block\",\n        \"optional\": true,\n        \"arguments\": [\n          {\n            \"name\": \"dim\",\n            \"type\": \"integer\"\n          }\n        ]\n      },\n      {\n        \"name\": \"format\",\n        \"type\": \"oneof\",\n        \"arguments\": [\n          {\n            \"name\": \"fp32\",\n            \"type\": \"pure-token\",\n            \"token\": \"FP32\"\n          },\n          {\n            \"name\": \"values\",\n            \"type\": \"pure-token\",\n            \"token\": \"VALUES\"\n          }\n        ]\n      },\n      {\n        \"name\": \"vector\",\n        \"type\": \"string\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      },\n      {\n        \"token\": \"CAS\",\n        \"name\": \"cas\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      },\n      {\n        \"name\": \"quant_type\",\n        \"type\": \"oneof\",\n        \"optional\": true,\n        \"arguments\": [\n          {\n            \"name\": \"noquant\",\n            \"type\": \"pure-token\",\n            \"token\": \"NOQUANT\"\n          },\n          {\n            \"name\": \"bin\",\n            \"type\": \"pure-token\",\n            \"token\": \"BIN\"\n          },\n          {\n            \"name\": \"q8\",\n            \"type\": \"pure-token\",\n            \"token\": \"Q8\"\n          }\n        ]\n      },\n      {\n        \"token\": 
\"EF\",\n        \"name\": \"build-exploration-factor\",\n        \"type\": \"integer\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"SETATTR\",\n        \"name\": \"attributes\",\n        \"type\": \"string\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"M\",\n        \"name\": \"numlinks\",\n        \"type\": \"integer\",\n        \"optional\": true\n      }\n    ],\n    \"command_flags\": [\n      \"WRITE\",\n      \"DENYOOM\"\n    ]\n  },\n  \"VREM\": {\n    \"summary\": \"Remove an element from a vector set\",\n    \"complexity\": \"O(log(N)) for each element removed, where N is the number of elements in the vector set.\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": 3,\n    \"function\": \"vremCommand\",\n    \"command_flags\": [\n      \"WRITE\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      }\n    ]\n  },\n  \"VSIM\": {\n    \"summary\": \"Return elements by vector similarity\",\n    \"complexity\": \"O(log(N)) where N is the number of elements in the vector set.\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": -4,\n    \"function\": \"vsimCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"format\",\n        \"type\": \"oneof\",\n        \"arguments\": [\n          {\n            \"name\": \"ele\",\n            \"type\": \"pure-token\",\n            \"token\": \"ELE\"\n          },\n          {\n            \"name\": \"fp32\",\n            \"type\": \"pure-token\",\n            \"token\": \"FP32\"\n          },\n          {\n            \"name\": \"values\",\n            \"type\": \"pure-token\",\n            \"token\": \"VALUES\"\n          }\n        ]\n      },\n      {\n        
\"name\": \"vector_or_element\",\n        \"type\": \"string\"\n      },\n      {\n        \"token\": \"WITHSCORES\",\n        \"name\": \"withscores\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"WITHATTRIBS\",\n        \"name\": \"withattribs\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"COUNT\",\n        \"name\": \"count\",\n        \"type\": \"integer\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"EPSILON\",\n        \"name\": \"max_distance\",\n        \"type\": \"double\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"EF\",\n        \"name\": \"search-exploration-factor\",\n        \"type\": \"integer\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"FILTER\",\n        \"name\": \"expression\",\n        \"type\": \"string\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"FILTER-EF\",\n        \"name\": \"max-filtering-effort\",\n        \"type\": \"integer\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"TRUTH\",\n        \"name\": \"truth\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      },\n      {\n        \"token\": \"NOTHREAD\",\n        \"name\": \"nothread\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      }\n    ]\n  },\n  \"VDIM\": {\n    \"summary\": \"Return the dimension of vectors in the vector set\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": 2,\n    \"function\": \"vdimCommand\",\n    \"command_flags\": [\n      \"READONLY\",\n      \"FAST\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      }\n    ]\n  },\n  \"VCARD\": {\n    \"summary\": \"Return the number of elements in a vector set\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    
\"since\": \"8.0.0\",\n    \"arity\": 2,\n    \"function\": \"vcardCommand\",\n    \"command_flags\": [\n      \"READONLY\",\n      \"FAST\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      }\n    ]\n  },\n  \"VEMB\": {\n    \"summary\": \"Return the vector associated with an element\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": -3,\n    \"function\": \"vembCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      },\n      {\n        \"token\": \"RAW\",\n        \"name\": \"raw\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      }\n    ]\n  },\n  \"VLINKS\": {\n    \"summary\": \"Return the neighbors of an element at each layer in the HNSW graph\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": -3,\n    \"function\": \"vlinksCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      },\n      {\n        \"token\": \"WITHSCORES\",\n        \"name\": \"withscores\",\n        \"type\": \"pure-token\",\n        \"optional\": true\n      }\n    ]\n  },\n  \"VINFO\": {\n    \"summary\": \"Return information about a vector set\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": 2,\n    \"function\": \"vinfoCommand\",\n    \"command_flags\": [\n      \"READONLY\",\n      \"FAST\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      }\n    ]\n  },\n  \"VSETATTR\": {\n    \"summary\": \"Associate or remove the JSON attributes 
of elements\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": 4,\n    \"function\": \"vsetattrCommand\",\n    \"command_flags\": [\n      \"WRITE\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      },\n      {\n        \"name\": \"json\",\n        \"type\": \"string\"\n      }\n    ]\n  },\n  \"VGETATTR\": {\n    \"summary\": \"Retrieve the JSON attributes of elements\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": 3,\n    \"function\": \"vgetattrCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      }\n    ]\n  },\n  \"VRANDMEMBER\": {\n    \"summary\": \"Return one or multiple random members from a vector set\",\n    \"complexity\": \"O(N) where N is the absolute value of the count argument.\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.0.0\",\n    \"arity\": -2,\n    \"function\": \"vrandmemberCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"count\",\n        \"type\": \"integer\",\n        \"optional\": true\n      }\n    ]\n  },\n  \"VISMEMBER\": {\n    \"summary\": \"Check if an element exists in a vector set\",\n    \"complexity\": \"O(1)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.2.0\",\n    \"arity\": 3,\n    \"function\": \"vismemberCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"element\",\n        \"type\": \"string\"\n      
}\n    ]\n  },\n  \"VRANGE\": {\n    \"summary\": \"Return elements in a lexicographical range\",\n    \"complexity\": \"O(log(K)+M) where K is the number of elements in the start prefix, and M is the number of elements returned. In practical terms, the command is just O(M)\",\n    \"group\": \"vector_set\",\n    \"since\": \"8.4.0\",\n    \"arity\": -4,\n    \"function\": \"vrangeCommand\",\n    \"command_flags\": [\n      \"READONLY\"\n    ],\n    \"arguments\": [\n      {\n        \"name\": \"key\",\n        \"type\": \"key\"\n      },\n      {\n        \"name\": \"start\",\n        \"type\": \"string\"\n      },\n      {\n        \"name\": \"end\",\n        \"type\": \"string\"\n      },\n      {\n        \"name\": \"count\",\n        \"type\": \"integer\",\n        \"optional\": true\n      }\n    ]\n  }\n}\n"
  },
  {
    "path": "modules/vector-sets/examples/cli-tool/.gitignore",
    "content": "venv\n"
  },
  {
    "path": "modules/vector-sets/examples/cli-tool/README.md",
    "content": "This tool is similar to redis-cli (but very basic), and allows\nyou to specify arguments that are expanded into vectors by calling\nollama to get the embedding.\n\nWhatever is passed as !\"foo bar\" gets expanded into\n    VALUES ... embedding ...\n\nYou must have ollama running with the mxbai-embed-large model\nalready installed for this to work.\n\nExample:\n\n    redis> KEYS *\n    1) food_items\n    2) glove_embeddings_bin\n    3) many_movies_mxbai-embed-large_BIN\n    4) many_movies_mxbai-embed-large_NOQUANT\n    5) word_embeddings\n    6) word_embeddings_bin\n    7) glove_embeddings_fp32\n\n    redis> VSIM food_items !\"drinks with fruit\"\n    1) (Fruit)Juices,Lemonade,100ml,50 cal,210 kJ\n    2) (Fruit)Juices,Limeade,100ml,128 cal,538 kJ\n    3) CannedFruit,Canned Fruit Cocktail,100g,81 cal,340 kJ\n    4) (Fruit)Juices,Energy-Drink,100ml,87 cal,365 kJ\n    5) Fruits,Lime,100g,30 cal,126 kJ\n    6) (Fruit)Juices,Coconut Water,100ml,19 cal,80 kJ\n    7) Fruits,Lemon,100g,29 cal,122 kJ\n    8) (Fruit)Juices,Clamato,100ml,60 cal,252 kJ\n    9) Fruits,Fruit salad,100g,50 cal,210 kJ\n    10) (Fruit)Juices,Capri-Sun,100ml,41 cal,172 kJ\n\n    redis> vsim food_items !\"barilla\"\n    1) Pasta&Noodles,Spirelli,100g,367 cal,1541 kJ\n    2) Pasta&Noodles,Farfalle,100g,358 cal,1504 kJ\n    3) Pasta&Noodles,Capellini,100g,353 cal,1483 kJ\n    4) Pasta&Noodles,Spaetzle,100g,368 cal,1546 kJ\n    5) Pasta&Noodles,Cappelletti,100g,164 cal,689 kJ\n    6) Pasta&Noodles,Penne,100g,351 cal,1474 kJ\n    7) Pasta&Noodles,Shells,100g,353 cal,1483 kJ\n    8) Pasta&Noodles,Linguine,100g,357 cal,1499 kJ\n    9) Pasta&Noodles,Rotini,100g,353 cal,1483 kJ\n    10) Pasta&Noodles,Rigatoni,100g,353 cal,1483 kJ\n"
  },
  {
    "path": "modules/vector-sets/examples/cli-tool/cli.py",
    "content": "#!/usr/bin/env python3\n#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\nimport argparse\nimport redis\nimport requests\nimport re\nimport shlex\nfrom prompt_toolkit import PromptSession\nfrom prompt_toolkit.history import InMemoryHistory\n\n# Default Ollama embeddings URL (can be overridden with --ollama-url)\nOLLAMA_URL = \"http://localhost:11434/api/embeddings\"\n\ndef get_embedding(text):\n    \"\"\"Get embedding from local Ollama API\"\"\"\n    url = OLLAMA_URL\n    payload = {\n        \"model\": \"mxbai-embed-large\",\n        \"prompt\": text\n    }\n    try:\n        response = requests.post(url, json=payload)\n        response.raise_for_status()\n        return response.json()['embedding']\n    except requests.exceptions.RequestException as e:\n        raise Exception(f\"Failed to get embedding: {str(e)}\")\n\ndef process_embedding_patterns(text):\n    \"\"\"Process !\"text\" and !!\"text\" patterns in the command\"\"\"\n\n    def replace_with_embedding(match):\n        text = match.group(1)\n        embedding = get_embedding(text)\n        return f\"VALUES {len(embedding)} {' '.join(map(str, embedding))}\"\n\n    def replace_with_embedding_and_text(match):\n        text = match.group(1)\n        embedding = get_embedding(text)\n        # Return both the embedding values and the original text as next argument\n        return f'VALUES {len(embedding)} {\" \".join(map(str, embedding))} \"{text}\"'\n\n    # First handle !!\"text\" pattern (must be done before !\"text\")\n    text = re.sub(r'!!\"([^\"]*)\"', replace_with_embedding_and_text, text)\n    # Then handle !\"text\" pattern\n    text = re.sub(r'!\"([^\"]*)\"', replace_with_embedding, text)\n    return text\n\ndef parse_command(command):\n    \"\"\"Parse 
command respecting quoted strings\"\"\"\n    try:\n        # Use shlex to properly handle quoted strings\n        return shlex.split(command)\n    except ValueError as e:\n        raise Exception(f\"Invalid command syntax: {str(e)}\")\n\ndef format_response(response):\n    \"\"\"Format the response to match Redis protocol style\"\"\"\n    if response is None:\n        return \"(nil)\"\n    elif isinstance(response, bool):\n        return \"+OK\" if response else \"(error) Operation failed\"\n    elif isinstance(response, (list, set)):\n        if not response:\n            return \"(empty list or set)\"\n        return \"\\n\".join(f\"{i+1}) {item}\" for i, item in enumerate(response))\n    elif isinstance(response, int):\n        return f\"(integer) {response}\"\n    else:\n        return str(response)\n\ndef main():\n    global OLLAMA_URL\n\n    parser = argparse.ArgumentParser(prog=\"cli.py\", add_help=False)\n    parser.add_argument(\"--ollama-url\", dest=\"ollama_url\",\n                        help=f\"Ollama embeddings API URL (default: {OLLAMA_URL})\",\n                        default=OLLAMA_URL)\n    args, _ = parser.parse_known_args()\n    OLLAMA_URL = args.ollama_url\n\n    # Default connection to localhost:6379\n    r = redis.Redis(host='localhost', port=6379, decode_responses=True)\n\n    try:\n        # Test connection\n        r.ping()\n        print(\"Connected to Redis. 
Type your commands (CTRL+D to exit):\")\n        print(\"Special syntax:\")\n        print(\"  !\\\"text\\\"  - Replace with embedding\")\n        print(\"  !!\\\"text\\\" - Replace with embedding and append text as value\")\n        print(\"  \\\"text\\\"   - Quote strings containing spaces\")\n    except redis.ConnectionError:\n        print(\"Error: Could not connect to Redis server\")\n        return\n\n    # Setup prompt session with history\n    session = PromptSession(history=InMemoryHistory())\n\n    # Main loop\n    while True:\n        try:\n            # Read input with line editing support\n            command = session.prompt(\"redis> \")\n\n            # Skip empty commands\n            if not command.strip():\n                continue\n\n            # Process any embedding patterns before parsing\n            try:\n                processed_command = process_embedding_patterns(command)\n            except Exception as e:\n                print(f\"(error) Embedding processing failed: {str(e)}\")\n                continue\n\n            # Parse the command respecting quoted strings\n            try:\n                parts = parse_command(processed_command)\n            except Exception as e:\n                print(f\"(error) {str(e)}\")\n                continue\n\n            if not parts:\n                continue\n\n            cmd = parts[0].lower()\n            args = parts[1:]\n\n            # Execute command\n            try:\n                method = getattr(r, cmd, None)\n                if method is not None:\n                    result = method(*args)\n                else:\n                    # Use execute_command for unknown commands\n                    result = r.execute_command(cmd, *args)\n                print(format_response(result))\n            except AttributeError:\n                print(f\"(error) Unknown command '{cmd}'\")\n\n        except EOFError:\n            print(\"\\nGoodbye!\")\n            break\n        except 
KeyboardInterrupt:\n            continue  # Allow Ctrl+C to clear current line\n        except redis.RedisError as e:\n            print(f\"(error) {str(e)}\")\n        except Exception as e:\n            print(f\"(error) {str(e)}\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "modules/vector-sets/examples/glove-100/README",
    "content": "wget http://ann-benchmarks.com/glove-100-angular.hdf5\npython insert.py\npython recall.py (use --k <count> optionally, default top-10)\n"
  },
  {
    "path": "modules/vector-sets/examples/glove-100/insert.py",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\nimport h5py\nimport redis\nfrom tqdm import tqdm\n\n# Initialize Redis connection\nredis_client = redis.Redis(host='localhost', port=6379, decode_responses=True, encoding='utf-8')\n\ndef add_to_redis(index, embedding):\n    \"\"\"Add embedding to Redis using VADD command\"\"\"\n    args = [\"VADD\", \"glove_embeddings\", \"VALUES\", \"100\"]  # 100 is vector dimension\n    args.extend(map(str, embedding))\n    args.append(f\"{index}\")  # Using index as identifier since we don't have words\n    args.append(\"EF\")\n    args.append(\"200\")\n    # args.append(\"NOQUANT\")\n    # args.append(\"BIN\")\n    redis_client.execute_command(*args)\n\ndef main():\n    with h5py.File('glove-100-angular.hdf5', 'r') as f:\n        # Get the train dataset\n        train_vectors = f['train']\n        total_vectors = train_vectors.shape[0]\n\n        print(f\"Starting to process {total_vectors} vectors...\")\n\n        # Process in batches to avoid memory issues\n        batch_size = 1000\n\n        for i in tqdm(range(0, total_vectors, batch_size)):\n            batch_end = min(i + batch_size, total_vectors)\n            batch = train_vectors[i:batch_end]\n\n            for j, vector in enumerate(batch):\n                try:\n                    current_index = i + j\n                    add_to_redis(current_index, vector)\n\n                except Exception as e:\n                    print(f\"Error processing vector {current_index}: {str(e)}\")\n                    continue\n\n            if (i + batch_size) % 10000 == 0:\n                print(f\"Processed {i + batch_size} vectors\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "modules/vector-sets/examples/glove-100/recall.py",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\nimport h5py\nimport redis\nimport numpy as np\nfrom tqdm import tqdm\nimport argparse\n\n# Initialize Redis connection\nredis_client = redis.Redis(host='localhost', port=6379, decode_responses=True, encoding='utf-8')\n\ndef get_redis_neighbors(query_vector, k):\n    \"\"\"Get nearest neighbors using Redis VSIM command\"\"\"\n    args = [\"VSIM\", \"glove_embeddings_bin\", \"VALUES\", \"100\"]\n    args.extend(map(str, query_vector))\n    args.extend([\"COUNT\", str(k)])\n    args.extend([\"EF\", \"100\"])\n    results = redis_client.execute_command(*args)\n    return [int(res) for res in results]\n\ndef calculate_recall(ground_truth, predicted, k):\n    \"\"\"Calculate recall@k\"\"\"\n    relevant = set(ground_truth[:k])\n    retrieved = set(predicted[:k])\n    return len(relevant.intersection(retrieved)) / len(relevant)\n\ndef main():\n    parser = argparse.ArgumentParser(description='Evaluate Redis VSIM recall')\n    parser.add_argument('--k', type=int, default=10, help='Number of neighbors to evaluate (default: 10)')\n    parser.add_argument('--batch', type=int, default=100, help='Progress update frequency (default: 100)')\n    args = parser.parse_args()\n\n    k = args.k\n    batch_size = args.batch\n\n    with h5py.File('glove-100-angular.hdf5', 'r') as f:\n        test_vectors = f['test'][:]\n        ground_truth_neighbors = f['neighbors'][:]\n        \n        num_queries = len(test_vectors)\n        recalls = []\n        \n        print(f\"Evaluating recall@{k} for {num_queries} test queries...\")\n        \n        for i in tqdm(range(num_queries)):\n            try:\n                # Get Redis results\n                
redis_neighbors = get_redis_neighbors(test_vectors[i], k)\n                \n                # Get ground truth for this query\n                true_neighbors = ground_truth_neighbors[i]\n                \n                # Calculate recall\n                recall = calculate_recall(true_neighbors, redis_neighbors, k)\n                recalls.append(recall)\n                \n                if (i + 1) % batch_size == 0:\n                    current_avg_recall = np.mean(recalls)\n                    print(f\"Current average recall@{k} after {i+1} queries: {current_avg_recall:.4f}\")\n                \n            except Exception as e:\n                print(f\"Error processing query {i}: {str(e)}\")\n                continue\n        \n        final_recall = np.mean(recalls)\n        print(\"\\nFinal Results:\")\n        print(f\"Average recall@{k}: {final_recall:.4f}\")\n        print(f\"Total queries evaluated: {len(recalls)}\")\n        \n        # Save detailed results\n        with open(f'recall_evaluation_results_k{k}.txt', 'w') as f:\n            f.write(f\"Average recall@{k}: {final_recall:.4f}\\n\")\n            f.write(f\"Total queries evaluated: {len(recalls)}\\n\")\n            f.write(f\"Individual query recalls: {recalls}\\n\")\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "modules/vector-sets/examples/movies/.gitignore",
    "content": "mpst_full_data.csv\npartition.json\n"
  },
  {
    "path": "modules/vector-sets/examples/movies/README",
    "content": "This example maps long form movies plots to movies titles.\nIt will create fp32 and binary vectors (the two extremes).\n\n1. Install ollama, and install the embedding model \"mxbai-embed-large\"\n2. Download mpst_full_data.csv from https://www.kaggle.com/datasets/cryptexcode/mpst-movie-plot-synopses-with-tags\n3. python insert.py\n\n127.0.0.1:6379> VSIM many_movies_mxbai-embed-large_NOQUANT ELE \"The Matrix\"\n 1) \"The Matrix\"\n 2) \"The Matrix Reloaded\"\n 3) \"The Matrix Revolutions\"\n 4) \"Commando\"\n 5) \"Avatar\"\n 6) \"Forbidden Planet\"\n 7) \"Terminator Salvation\"\n 8) \"Mandroid\"\n 9) \"The Omega Code\"\n10) \"Coherence\"\n\n127.0.0.1:6379> VSIM many_movies_mxbai-embed-large_BIN ELE \"The Matrix\"\n 1) \"The Matrix\"\n 2) \"The Matrix Reloaded\"\n 3) \"The Matrix Revolutions\"\n 4) \"The Omega Code\"\n 5) \"Forbidden Planet\"\n 6) \"Avatar\"\n 7) \"John Carter\"\n 8) \"System Shock 2\"\n 9) \"Coherence\"\n10) \"Tomorrowland\"\n"
  },
  {
    "path": "modules/vector-sets/examples/movies/insert.py",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\nimport csv\nimport requests\nimport redis\n\nModelName=\"mxbai-embed-large\"\n\n# Initialize Redis connection, setting encoding to utf-8\nredis_client = redis.Redis(host='localhost', port=6379, decode_responses=True, encoding='utf-8')\n\ndef get_embedding(text):\n    \"\"\"Get embedding from local API\"\"\"\n    url = \"http://localhost:11434/api/embeddings\"\n    payload = {\n        \"model\": ModelName,\n        \"prompt\": \"Represent this movie plot and genre: \"+text\n    }\n    response = requests.post(url, json=payload)\n    return response.json()['embedding']\n\ndef add_to_redis(title, embedding, quant_type):\n    \"\"\"Add embedding to Redis using VADD command\"\"\"\n    args = [\"VADD\", \"many_movies_\"+ModelName+\"_\"+quant_type, \"VALUES\", str(len(embedding))]\n    args.extend(map(str, embedding))\n    args.append(title)\n    args.append(quant_type)\n    redis_client.execute_command(*args)\n\ndef main():\n    with open('mpst_full_data.csv', 'r', encoding='utf-8') as file:\n        reader = csv.DictReader(file)\n\n        for movie in reader:\n            try:\n                text_to_embed = f\"{movie['title']} {movie['plot_synopsis']} {movie['tags']}\"\n\n                print(f\"Getting embedding for: {movie['title']}\")\n                embedding = get_embedding(text_to_embed)\n\n                add_to_redis(movie['title'], embedding, \"BIN\")\n                add_to_redis(movie['title'], embedding, \"NOQUANT\")\n                print(f\"Successfully processed: {movie['title']}\")\n\n            except Exception as e:\n                print(f\"Error processing {movie['title']}: {str(e)}\")\n                continue\n\nif __name__ == \"__main__\":\n    
main()\n"
  },
  {
    "path": "modules/vector-sets/expr.c",
    "content": "/* Filtering of objects based on simple expressions.\n * This powers the FILTER option of Vector Sets, but it is otherwise\n * general code to be used when we want to tell if a given object (with fields)\n * passes or fails a given test for scalars, strings, ...\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo.\n */\n\n#ifdef TEST_MAIN\n#define RedisModule_Alloc malloc\n#define RedisModule_Realloc realloc\n#define RedisModule_Free free\n#define RedisModule_Strdup strdup\n#define RedisModule_Assert assert\n#define _DEFAULT_SOURCE\n#define _USE_MATH_DEFINES\n#include <assert.h>\n#include <math.h>\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <math.h>\n#include <string.h>\n\n#define EXPR_TOKEN_EOF 0\n#define EXPR_TOKEN_NUM 1\n#define EXPR_TOKEN_STR 2\n#define EXPR_TOKEN_TUPLE 3\n#define EXPR_TOKEN_SELECTOR 4\n#define EXPR_TOKEN_OP 5\n#define EXPR_TOKEN_NULL 6\n\n#define EXPR_OP_OPAREN 0  /* ( */\n#define EXPR_OP_CPAREN 1  /* ) */\n#define EXPR_OP_NOT    2  /* ! */\n#define EXPR_OP_POW    3  /* ** */\n#define EXPR_OP_MULT   4  /* * */\n#define EXPR_OP_DIV    5  /* / */\n#define EXPR_OP_MOD    6  /* % */\n#define EXPR_OP_SUM    7  /* + */\n#define EXPR_OP_DIFF   8  /* - */\n#define EXPR_OP_GT     9  /* > */\n#define EXPR_OP_GTE    10 /* >= */\n#define EXPR_OP_LT     11 /* < */\n#define EXPR_OP_LTE    12 /* <= */\n#define EXPR_OP_EQ     13 /* == */\n#define EXPR_OP_NEQ    14 /* != */\n#define EXPR_OP_IN     15 /* in */\n#define EXPR_OP_AND    16 /* and */\n#define EXPR_OP_OR     17 /* or */\n\n/* This structure represents a token in our expression. 
It's either\n * a literal like 4 or \"foo\", an operator like \"+\", \"-\", \"and\", or a\n * JSON selector, which starts with a dot: \".age\", \".properties.somearray[1]\" */\ntypedef struct exprtoken {\n    int refcount;           // Reference counting for memory reclaiming.\n    int token_type;         // Token type of the just parsed token.\n    int offset;             // Chars offset in expression.\n    union {\n        double num;         // Value for EXPR_TOKEN_NUM.\n        struct {\n            char *start;    // String pointer for EXPR_TOKEN_STR / SELECTOR.\n            size_t len;     // String len for EXPR_TOKEN_STR / SELECTOR.\n            char *heapstr;  // Non-NULL if we have a private allocation for\n                            // this string. When possible, 'start' just points\n                            // into the compiled expression, exprstate->expr.\n        } str;\n        int opcode;         // Opcode ID for EXPR_TOKEN_OP.\n        struct {\n            struct exprtoken **ele;\n            size_t len;\n        } tuple;            // Tuples are like [1, 2, 3] for the \"in\" operator.\n    };\n} exprtoken;\n\n/* Simple stack of expr tokens. This is used both to represent the stack\n * of values and the stack of operators during VM execution. */\ntypedef struct exprstack {\n    exprtoken **items;\n    int numitems;\n    int allocsize;\n} exprstack;\n\ntypedef struct exprstate {\n    char *expr;             /* Expression string to compile. Note that\n                             * expression token strings point directly to this\n                             * string. 
*/\n    char *p;                // Current position inside 'expr', while parsing.\n\n    // Virtual machine state.\n    exprstack values_stack;\n    exprstack ops_stack;    // Operator stack used during compilation.\n    exprstack tokens;       // Expression processed into a sequence of tokens.\n    exprstack program;      // Expression compiled into opcodes and values.\n} exprstate;\n\n/* Valid operators. */\nstruct {\n    char *opname;\n    int oplen;\n    int opcode;\n    int precedence;\n    int arity;\n} ExprOptable[] = {\n    {\"(\",   1,  EXPR_OP_OPAREN,  7, 0},\n    {\")\",   1,  EXPR_OP_CPAREN,  7, 0},\n    {\"!\",   1,  EXPR_OP_NOT,     6, 1},\n    {\"not\", 3,  EXPR_OP_NOT,     6, 1},\n    {\"**\",  2,  EXPR_OP_POW,     5, 2},\n    {\"*\",   1,  EXPR_OP_MULT,    4, 2},\n    {\"/\",   1,  EXPR_OP_DIV,     4, 2},\n    {\"%\",   1,  EXPR_OP_MOD,     4, 2},\n    {\"+\",   1,  EXPR_OP_SUM,     3, 2},\n    {\"-\",   1,  EXPR_OP_DIFF,    3, 2},\n    {\">\",   1,  EXPR_OP_GT,      2, 2},\n    {\">=\",  2,  EXPR_OP_GTE,     2, 2},\n    {\"<\",   1,  EXPR_OP_LT,      2, 2},\n    {\"<=\",  2,  EXPR_OP_LTE,     2, 2},\n    {\"==\",  2,  EXPR_OP_EQ,      2, 2},\n    {\"!=\",  2,  EXPR_OP_NEQ,     2, 2},\n    {\"in\",  2,  EXPR_OP_IN,      2, 2},\n    {\"and\", 3,  EXPR_OP_AND,     1, 2},\n    {\"&&\",  2,  EXPR_OP_AND,     1, 2},\n    {\"or\",  2,  EXPR_OP_OR,      0, 2},\n    {\"||\",  2,  EXPR_OP_OR,      0, 2},\n    {NULL,  0,  0,               0, 0}   // Terminator.\n};\n\n#define EXPR_OP_SPECIALCHARS \"+-*%/!()<>=|&\"\n#define EXPR_SELECTOR_SPECIALCHARS \"_-\"\n\n/* ================================ Expr token ============================== */\n\n/* Return a heap-allocated token of the specified type, setting the\n * reference count to 1. 
*/\nexprtoken *exprNewToken(int type) {\n    exprtoken *t = RedisModule_Alloc(sizeof(exprtoken));\n    memset(t,0,sizeof(*t));\n    t->token_type = type;\n    t->refcount = 1;\n    return t;\n}\n\n/* Release a reference to a token, freeing it (together with any data\n * it owns) once the reference count drops to zero. */\nvoid exprTokenRelease(exprtoken *t) {\n    if (t == NULL) return;\n\n    RedisModule_Assert(t->refcount > 0); // Catch double free & more.\n    t->refcount--;\n    if (t->refcount > 0) return;\n\n    // We reached refcount 0: free the object.\n    if (t->token_type == EXPR_TOKEN_STR) {\n        if (t->str.heapstr != NULL) RedisModule_Free(t->str.heapstr);\n    } else if (t->token_type == EXPR_TOKEN_TUPLE) {\n        for (size_t j = 0; j < t->tuple.len; j++)\n            exprTokenRelease(t->tuple.ele[j]);\n        if (t->tuple.ele) RedisModule_Free(t->tuple.ele);\n    }\n    RedisModule_Free(t);\n}\n\nvoid exprTokenRetain(exprtoken *t) {\n    t->refcount++;\n}\n\n/* ============================== Stack handling ============================ */\n\n#define EXPR_STACK_INITIAL_SIZE 16\n\n/* Initialize a new expression stack. */\nvoid exprStackInit(exprstack *stack) {\n    stack->items = RedisModule_Alloc(sizeof(exprtoken*) * EXPR_STACK_INITIAL_SIZE);\n    stack->numitems = 0;\n    stack->allocsize = EXPR_STACK_INITIAL_SIZE;\n}\n\n/* Push a token pointer onto the stack. Does not increment the refcount\n * of the token: it is up to the caller to do so. */\nvoid exprStackPush(exprstack *stack, exprtoken *token) {\n    /* Check if we need to grow the stack. 
*/\n    if (stack->numitems == stack->allocsize) {\n        size_t newsize = stack->allocsize * 2;\n        exprtoken **newitems =\n            RedisModule_Realloc(stack->items, sizeof(exprtoken*) * newsize);\n        stack->items = newitems;\n        stack->allocsize = newsize;\n    }\n    stack->items[stack->numitems] = token;\n    stack->numitems++;\n}\n\n/* Pop a token pointer from the stack. Return NULL if the stack is\n * empty. Does NOT decrement the refcount of the token: it's up to the\n * caller to do so, as the new owner of the reference. */\nexprtoken *exprStackPop(exprstack *stack) {\n    if (stack->numitems == 0) return NULL;\n    stack->numitems--;\n    return stack->items[stack->numitems];\n}\n\n/* Just return the last element pushed, without consuming it nor altering\n * the reference count. */\nexprtoken *exprStackPeek(exprstack *stack) {\n    if (stack->numitems == 0) return NULL;\n    return stack->items[stack->numitems-1];\n}\n\n/* Free the stack structure state, including the items it contains, which are\n * assumed to be heap allocated. The passed pointer itself is not freed. */\nvoid exprStackFree(exprstack *stack) {\n    for (int j = 0; j < stack->numitems; j++)\n        exprTokenRelease(stack->items[j]);\n    RedisModule_Free(stack->items);\n}\n\n/* Just reset the stack removing all the items, but leaving it in a state\n * that makes it still usable for new elements. */\nvoid exprStackReset(exprstack *stack) {\n    for (int j = 0; j < stack->numitems; j++)\n        exprTokenRelease(stack->items[j]);\n    stack->numitems = 0;\n}\n\n/* =========================== Expression compilation ======================= */\n\nvoid exprConsumeSpaces(exprstate *es) {\n    while(es->p[0] && isspace(es->p[0])) es->p++;\n}\n\n/* Parse an operator or a literal (just \"null\" currently).\n * When parsing operators, the function will try to find the longest match\n * in the operators table. 
*/\nexprtoken *exprParseOperatorOrLiteral(exprstate *es) {\n    exprtoken *t = exprNewToken(EXPR_TOKEN_OP);\n    char *start = es->p;\n\n    while(es->p[0] &&\n          (isalpha(es->p[0]) ||\n           strchr(EXPR_OP_SPECIALCHARS,es->p[0]) != NULL))\n    {\n        es->p++;\n    }\n\n    int matchlen = es->p - start;\n    int bestlen = 0;\n    int j;\n\n    // Check if it's a literal.\n    if (matchlen == 4 && !memcmp(\"null\",start,4)) {\n        t->token_type = EXPR_TOKEN_NULL;\n        return t;\n    }\n\n    // Find the longest matching operator.\n    for (j = 0; ExprOptable[j].opname != NULL; j++) {\n        if (ExprOptable[j].oplen > matchlen) continue;\n        if (memcmp(ExprOptable[j].opname, start, ExprOptable[j].oplen) != 0)\n        {\n            continue;\n        }\n        if (ExprOptable[j].oplen > bestlen) {\n            t->opcode = ExprOptable[j].opcode;\n            bestlen = ExprOptable[j].oplen;\n        }\n    }\n    if (bestlen == 0) {\n        exprTokenRelease(t);\n        return NULL;\n    } else {\n        es->p = start + bestlen;\n    }\n    return t;\n}\n\n// Valid selector charset.\nstatic int is_selector_char(int c) {\n    return (isalpha(c) ||\n            isdigit(c) ||\n            strchr(EXPR_SELECTOR_SPECIALCHARS,c) != NULL);\n}\n\n/* Parse selectors: they start with a dot and can contain alphanumerical\n * chars plus a few special chars. */\nexprtoken *exprParseSelector(exprstate *es) {\n    exprtoken *t = exprNewToken(EXPR_TOKEN_SELECTOR);\n    es->p++; // Skip dot.\n    char *start = es->p;\n\n    while(es->p[0] && is_selector_char(es->p[0])) es->p++;\n    int matchlen = es->p - start;\n    t->str.start = start;\n    t->str.len = matchlen;\n    return t;\n}\n\nexprtoken *exprParseNumber(exprstate *es) {\n    exprtoken *t = exprNewToken(EXPR_TOKEN_NUM);\n    char num[256];\n    int idx = 0;\n    while(isdigit(es->p[0]) || es->p[0] == '.' 
|| es->p[0] == 'e' ||\n          es->p[0] == 'E' || (idx == 0 && es->p[0] == '-'))\n    {\n        if (idx >= (int)sizeof(num)-1) {\n            exprTokenRelease(t);\n            return NULL;\n        }\n        num[idx++] = es->p[0];\n        es->p++;\n    }\n    num[idx] = 0;\n\n    char *endptr;\n    t->num = strtod(num, &endptr);\n    if (*endptr != '\\0') {\n        exprTokenRelease(t);\n        return NULL;\n    }\n    return t;\n}\n\nexprtoken *exprParseString(exprstate *es) {\n    char quote = es->p[0];  /* Store the quote type (' or \"). */\n    es->p++;                /* Skip opening quote. */\n\n    exprtoken *t = exprNewToken(EXPR_TOKEN_STR);\n    t->str.start = es->p;\n\n    while(es->p[0] != '\\0') {\n        if (es->p[0] == '\\\\' && es->p[1] != '\\0') {\n            es->p += 2; // Skip escaped char.\n            continue;\n        }\n        if (es->p[0] == quote) {\n            t->str.len = es->p - t->str.start;\n            es->p++; // Skip closing quote.\n            return t;\n        }\n        es->p++;\n    }\n    /* If we reach here, string was not terminated. */\n    exprTokenRelease(t);\n    return NULL;\n}\n\n/* Parse a tuple of the form [1, \"foo\", 42]. No nested tuples are\n * supported. This type is useful mostly to be used with the \"IN\"\n * operator. */\nexprtoken *exprParseTuple(exprstate *es) {\n    exprtoken *t = exprNewToken(EXPR_TOKEN_TUPLE);\n    t->tuple.ele = NULL;\n    t->tuple.len = 0;\n    es->p++; /* Skip opening '['. */\n\n    size_t allocated = 0;\n    while(1) {\n        exprConsumeSpaces(es);\n\n        /* Check for empty tuple or end. */\n        if (es->p[0] == ']') {\n            es->p++;\n            break;\n        }\n\n        /* Grow tuple array if needed. */\n        if (t->tuple.len == allocated) {\n            size_t newsize = allocated == 0 ? 
4 : allocated * 2;\n            exprtoken **newele = RedisModule_Realloc(t->tuple.ele,\n                sizeof(exprtoken*) * newsize);\n            t->tuple.ele = newele;\n            allocated = newsize;\n        }\n\n        /* Parse tuple element. */\n        exprtoken *ele = NULL;\n        if (isdigit(es->p[0]) || es->p[0] == '-') {\n            ele = exprParseNumber(es);\n        } else if (es->p[0] == '\"' || es->p[0] == '\\'') {\n            ele = exprParseString(es);\n        } else {\n            exprTokenRelease(t);\n            return NULL;\n        }\n\n        /* Error parsing number/string? */\n        if (ele == NULL) {\n            exprTokenRelease(t);\n            return NULL;\n        }\n\n        /* Store element if no error was detected. */\n        t->tuple.ele[t->tuple.len] = ele;\n        t->tuple.len++;\n\n        /* Check for next element. */\n        exprConsumeSpaces(es);\n        if (es->p[0] == ']') {\n            es->p++;\n            break;\n        }\n        if (es->p[0] != ',') {\n            exprTokenRelease(t);\n            return NULL;\n        }\n        es->p++; /* Skip comma. */\n    }\n    return t;\n}\n\n/* Deallocate the object returned by exprCompile(). */\nvoid exprFree(exprstate *es) {\n    if (es == NULL) return;\n\n    /* Free the original expression string. */\n    if (es->expr) RedisModule_Free(es->expr);\n\n    /* Free all stacks. */\n    exprStackFree(&es->values_stack);\n    exprStackFree(&es->ops_stack);\n    exprStackFree(&es->tokens);\n    exprStackFree(&es->program);\n\n    /* Free the state object itself. */\n    RedisModule_Free(es);\n}\n\n/* Split the provided expression into a stack of tokens. Returns\n * 0 on success, 1 on error. */\nint exprTokenize(exprstate *es, int *errpos) {\n    /* Main parsing loop. */\n    while(1) {\n        exprConsumeSpaces(es);\n\n        /* Set a flag to see if we can consider the - part of the\n         * number, or an operator. 
*/\n        int minus_is_number = 0; // By default it's an operator.\n\n        exprtoken *last = exprStackPeek(&es->tokens);\n        if (last == NULL) {\n            /* If we are at the start of an expression, the minus is\n             * considered a number. */\n            minus_is_number = 1;\n        } else if (last->token_type == EXPR_TOKEN_OP &&\n                   last->opcode != EXPR_OP_CPAREN)\n        {\n            /* Also, if the previous token was an operator, the minus\n             * is considered a number, unless the previous operator is\n             * a closing paren. In that case it's something like (...) -5,\n             * and we want to emit an operator. */\n            minus_is_number = 1;\n        }\n\n        /* Parse based on the current character. */\n        exprtoken *current = NULL;\n        if (*es->p == '\\0') {\n            current = exprNewToken(EXPR_TOKEN_EOF);\n        } else if (isdigit(*es->p) ||\n                  (minus_is_number && *es->p == '-' && isdigit(es->p[1])))\n        {\n            current = exprParseNumber(es);\n        } else if (*es->p == '\"' || *es->p == '\\'') {\n            current = exprParseString(es);\n        } else if (*es->p == '.' && is_selector_char(es->p[1])) {\n            current = exprParseSelector(es);\n        } else if (*es->p == '[') {\n            current = exprParseTuple(es);\n        } else if (isalpha(*es->p) || strchr(EXPR_OP_SPECIALCHARS, *es->p)) {\n            current = exprParseOperatorOrLiteral(es);\n        }\n\n        if (current == NULL) {\n            if (errpos) *errpos = es->p - es->expr;\n            return 1; // Syntax Error.\n        }\n\n        /* Push the current token to the tokens stack. */\n        exprStackPush(&es->tokens, current);\n        if (current->token_type == EXPR_TOKEN_EOF) break;\n    }\n    return 0;\n}\n\n/* Helper function to get operator precedence from the operator table. 
*/\nint exprGetOpPrecedence(int opcode) {\n    for (int i = 0; ExprOptable[i].opname != NULL; i++) {\n        if (ExprOptable[i].opcode == opcode)\n            return ExprOptable[i].precedence;\n    }\n    return -1;\n}\n\n/* Helper function to get operator arity from the operator table. */\nint exprGetOpArity(int opcode) {\n    for (int i = 0; ExprOptable[i].opname != NULL; i++) {\n        if (ExprOptable[i].opcode == opcode)\n            return ExprOptable[i].arity;\n    }\n    return -1;\n}\n\n/* Process an operator during compilation. Returns 0 on success, 1 on error.\n * This function will retain a reference to the operator 'op' in case it\n * is pushed on the operators stack. */\nint exprProcessOperator(exprstate *es, exprtoken *op, int *stack_items, int *errpos) {\n    if (op->opcode == EXPR_OP_OPAREN) {\n        // This is just a marker for us. Do nothing.\n        exprStackPush(&es->ops_stack, op);\n        exprTokenRetain(op);\n        return 0;\n    }\n\n    if (op->opcode == EXPR_OP_CPAREN) {\n        /* Process operators until we find the matching opening parenthesis. */\n        while (1) {\n            exprtoken *top_op = exprStackPop(&es->ops_stack);\n            if (top_op == NULL) {\n                if (errpos) *errpos = op->offset;\n                return 1;\n            }\n\n            if (top_op->opcode == EXPR_OP_OPAREN) {\n                /* Open parenthesis found. Our work here is finished. */\n                exprTokenRelease(top_op);\n                return 0;\n            }\n\n            int arity = exprGetOpArity(top_op->opcode);\n            if (*stack_items < arity) {\n                exprTokenRelease(top_op);\n                if (errpos) *errpos = top_op->offset;\n                return 1;\n            }\n\n            /* Move the operator onto the program stack. 
*/\n            exprStackPush(&es->program, top_op);\n            *stack_items = *stack_items - arity + 1;\n        }\n    }\n\n    int curr_prec = exprGetOpPrecedence(op->opcode);\n\n    /* Process operators with higher or equal precedence. */\n    while (1) {\n        exprtoken *top_op = exprStackPeek(&es->ops_stack);\n        if (top_op == NULL || top_op->opcode == EXPR_OP_OPAREN) break;\n\n        int top_prec = exprGetOpPrecedence(top_op->opcode);\n        if (top_prec < curr_prec) break;\n        /* Special case for **: only pop if precedence is strictly higher\n         * so that the operator is right associative, that is:\n         * 2 ** 3 ** 2 is evaluated as 2 ** (3 ** 2) == 512 instead\n         * of (2 ** 3) ** 2 == 64. */\n        if (op->opcode == EXPR_OP_POW && top_prec <= curr_prec) break;\n\n        /* Pop and add to program. */\n        top_op = exprStackPop(&es->ops_stack);\n        int arity = exprGetOpArity(top_op->opcode);\n        if (*stack_items < arity) {\n            exprTokenRelease(top_op);\n            if (errpos) *errpos = top_op->offset;\n            return 1;\n        }\n\n        /* Move to the program stack. */\n        exprStackPush(&es->program, top_op);\n        *stack_items = *stack_items - arity + 1;\n    }\n\n    /* Push current operator. */\n    exprStackPush(&es->ops_stack, op);\n    exprTokenRetain(op);\n    return 0;\n}\n\n/* Compile the expression into a sequence of push-value and exec-operator\n * instructions that exprRun() can execute. The function returns an exprstate\n * object that can be used for execution of the program. On error, NULL\n * is returned, and optionally the position of the error within the\n * expression is returned by reference. */\nexprstate *exprCompile(char *expr, int *errpos) {\n    /* Initialize expression state. */\n    exprstate *es = RedisModule_Alloc(sizeof(exprstate));\n    es->expr = RedisModule_Strdup(expr);\n    es->p = es->expr;\n\n    /* Initialize all stacks. 
*/\n    exprStackInit(&es->values_stack);\n    exprStackInit(&es->ops_stack);\n    exprStackInit(&es->tokens);\n    exprStackInit(&es->program);\n\n    /* Tokenization. */\n    if (exprTokenize(es, errpos)) {\n        exprFree(es);\n        return NULL;\n    }\n\n    /* Compile the expression into a sequence of operations. */\n    int stack_items = 0;  // Track # of items that would be on the stack\n                         // during execution. This way we can detect arity\n                         // issues at compile time.\n\n    /* Process each token. */\n    for (int i = 0; i < es->tokens.numitems; i++) {\n        exprtoken *token = es->tokens.items[i];\n\n        if (token->token_type == EXPR_TOKEN_EOF) break;\n\n        /* Handle values (numbers, strings, selectors, null). */\n        if (token->token_type == EXPR_TOKEN_NUM ||\n            token->token_type == EXPR_TOKEN_STR ||\n            token->token_type == EXPR_TOKEN_TUPLE ||\n            token->token_type == EXPR_TOKEN_SELECTOR ||\n            token->token_type == EXPR_TOKEN_NULL)\n        {\n            exprStackPush(&es->program, token);\n            exprTokenRetain(token);\n            stack_items++;\n            continue;\n        }\n\n        /* Handle operators. */\n        if (token->token_type == EXPR_TOKEN_OP) {\n            if (exprProcessOperator(es, token, &stack_items, errpos)) {\n                exprFree(es);\n                return NULL;\n            }\n            continue;\n        }\n    }\n\n    /* Process remaining operators on the stack. 
*/\n    while (es->ops_stack.numitems > 0) {\n        exprtoken *op = exprStackPop(&es->ops_stack);\n        if (op->opcode == EXPR_OP_OPAREN) {\n            if (errpos) *errpos = op->offset;\n            exprTokenRelease(op);\n            exprFree(es);\n            return NULL;\n        }\n\n        int arity = exprGetOpArity(op->opcode);\n        if (stack_items < arity) {\n            if (errpos) *errpos = op->offset;\n            exprTokenRelease(op);\n            exprFree(es);\n            return NULL;\n        }\n\n        exprStackPush(&es->program, op);\n        stack_items = stack_items - arity + 1;\n    }\n\n    /* Verify that exactly one value would remain on the stack after\n     * execution. We could also check that such value is a number, but this\n     * would make the code more complex without much gain. */\n    if (stack_items != 1) {\n        if (errpos) {\n            /* Point to the last token's offset for error reporting. */\n            exprtoken *last = es->tokens.items[es->tokens.numitems - 1];\n            *errpos = last->offset;\n        }\n        exprFree(es);\n        return NULL;\n    }\n    return es;\n}\n\n/* ============================ Expression execution ======================== */\n\n/* Convert a token to its numeric value. For strings we attempt to parse them\n * as numbers, returning 0 if conversion fails. */\ndouble exprTokenToNum(exprtoken *t) {\n    char buf[256];\n    if (t->token_type == EXPR_TOKEN_NUM) {\n        return t->num;\n    } else if (t->token_type == EXPR_TOKEN_STR && t->str.len < sizeof(buf)) {\n        memcpy(buf, t->str.start, t->str.len);\n        buf[t->str.len] = '\\0';\n        char *endptr;\n        double val = strtod(buf, &endptr);\n        return *endptr == '\\0' ? 
val : 0;\n    } else {\n        return 0;\n    }\n}\n\n/* Convert a token to true/false (0 or 1). */\ndouble exprTokenToBool(exprtoken *t) {\n    if (t->token_type == EXPR_TOKEN_NUM) {\n        return t->num != 0;\n    } else if (t->token_type == EXPR_TOKEN_STR && t->str.len == 0) {\n        return 0; // Empty strings are false, like in JavaScript.\n    } else if (t->token_type == EXPR_TOKEN_NULL) {\n        return 0; // Null is surely more false than true...\n    } else {\n        return 1; // Every other type is considered true.\n    }\n}\n\n/* Compare two tokens. Returns true if they are equal. */\nint exprTokensEqual(exprtoken *a, exprtoken *b) {\n    // If both are strings, do string comparison.\n    if (a->token_type == EXPR_TOKEN_STR && b->token_type == EXPR_TOKEN_STR) {\n        return a->str.len == b->str.len &&\n               memcmp(a->str.start, b->str.start, a->str.len) == 0;\n    }\n\n    // If both are numbers, do numeric comparison.\n    if (a->token_type == EXPR_TOKEN_NUM && b->token_type == EXPR_TOKEN_NUM) {\n        return a->num == b->num;\n    }\n\n    /* If one of the two is null, the expression is true only if\n     * both are null. */\n    if (a->token_type == EXPR_TOKEN_NULL || b->token_type == EXPR_TOKEN_NULL) {\n        return a->token_type == b->token_type;\n    }\n\n    // Mixed types - convert to numbers and compare.\n    return exprTokenToNum(a) == exprTokenToNum(b);\n}\n\n/* Return true if the string a is a substring of b. 
*/\nint exprTokensStringIn(exprtoken *a, exprtoken *b) {\n    RedisModule_Assert(a->token_type == EXPR_TOKEN_STR &&\n                       b->token_type == EXPR_TOKEN_STR);\n    if (a->str.len > b->str.len) return 0; // A is bigger, can't be a substring.\n    for (size_t i = 0; i <= b->str.len - a->str.len; i++) {\n        if (memcmp(b->str.start+i,a->str.start,a->str.len) == 0) return 1;\n    }\n    return 0;\n}\n\n#include \"fastjson.c\" // JSON parser implementation used by exprRun().\n\n/* Execute the compiled expression program. Returns 1 if the final stack value\n * evaluates to true, 0 otherwise. Also returns 0 if any selector callback\n * fails. */\nint exprRun(exprstate *es, char *json, size_t json_len) {\n    exprStackReset(&es->values_stack);\n\n    // Execute each instruction in the program.\n    for (int i = 0; i < es->program.numitems; i++) {\n        exprtoken *t = es->program.items[i];\n\n        // Handle selectors by calling the callback.\n        if (t->token_type == EXPR_TOKEN_SELECTOR) {\n            exprtoken *obj = NULL;\n            if (t->str.len > 0)\n                obj = jsonExtractField(json,json_len,t->str.start,t->str.len);\n\n            // Selector not found or JSON object not convertible to\n            // expression tokens. 
Evaluate the expression to false.\n            if (obj == NULL) return 0;\n            exprStackPush(&es->values_stack, obj);\n            continue;\n        }\n\n        // Push non-operator values directly onto the stack.\n        if (t->token_type != EXPR_TOKEN_OP) {\n            exprStackPush(&es->values_stack, t);\n            exprTokenRetain(t);\n            continue;\n        }\n\n        // Handle operators.\n        exprtoken *result = exprNewToken(EXPR_TOKEN_NUM);\n\n        // Pop operands - we know we have enough from compile-time checks.\n        exprtoken *b = exprStackPop(&es->values_stack);\n        exprtoken *a = NULL;\n        if (exprGetOpArity(t->opcode) == 2) {\n            a = exprStackPop(&es->values_stack);\n        }\n\n        switch(t->opcode) {\n        case EXPR_OP_NOT:\n            result->num = exprTokenToBool(b) == 0 ? 1 : 0;\n            break;\n        case EXPR_OP_POW: {\n            double base = exprTokenToNum(a);\n            double exp = exprTokenToNum(b);\n            result->num = pow(base, exp);\n            break;\n        }\n        case EXPR_OP_MULT:\n            result->num = exprTokenToNum(a) * exprTokenToNum(b);\n            break;\n        case EXPR_OP_DIV:\n            result->num = exprTokenToNum(a) / exprTokenToNum(b);\n            break;\n        case EXPR_OP_MOD: {\n            double va = exprTokenToNum(a);\n            double vb = exprTokenToNum(b);\n            result->num = fmod(va, vb);\n            break;\n        }\n        case EXPR_OP_SUM:\n            result->num = exprTokenToNum(a) + exprTokenToNum(b);\n            break;\n        case EXPR_OP_DIFF:\n            result->num = exprTokenToNum(a) - exprTokenToNum(b);\n            break;\n        case EXPR_OP_GT:\n            result->num = exprTokenToNum(a) > exprTokenToNum(b) ? 1 : 0;\n            break;\n        case EXPR_OP_GTE:\n            result->num = exprTokenToNum(a) >= exprTokenToNum(b) ? 
1 : 0;\n            break;\n        case EXPR_OP_LT:\n            result->num = exprTokenToNum(a) < exprTokenToNum(b) ? 1 : 0;\n            break;\n        case EXPR_OP_LTE:\n            result->num = exprTokenToNum(a) <= exprTokenToNum(b) ? 1 : 0;\n            break;\n        case EXPR_OP_EQ:\n            result->num = exprTokensEqual(a, b) ? 1 : 0;\n            break;\n        case EXPR_OP_NEQ:\n            result->num = !exprTokensEqual(a, b) ? 1 : 0;\n            break;\n        case EXPR_OP_IN: {\n            /* For 'in' operator, b must be a tuple, and we check for\n             * membership. Otherwise both a and b must be strings, and\n             * in this case we check if a is a substring of b. */\n            result->num = 0;  // Default to false.\n            if (b->token_type == EXPR_TOKEN_TUPLE) {\n                for (size_t j = 0; j < b->tuple.len; j++) {\n                    if (exprTokensEqual(a, b->tuple.ele[j])) {\n                        result->num = 1;  // Found a match.\n                        break;\n                    }\n                }\n            } else if (a->token_type == EXPR_TOKEN_STR &&\n                       b->token_type == EXPR_TOKEN_STR)\n            {\n                result->num = exprTokensStringIn(a,b);\n            }\n            break;\n        }\n        case EXPR_OP_AND:\n            result->num =\n                exprTokenToBool(a) != 0 && exprTokenToBool(b) != 0 ? 1 : 0;\n            break;\n        case EXPR_OP_OR:\n            result->num =\n                exprTokenToBool(a) != 0 || exprTokenToBool(b) != 0 ? 
1 : 0;\n            break;\n        default:\n            // Do nothing: we don't want runtime errors.\n            break;\n        }\n\n        // Free operands and push result.\n        if (a) exprTokenRelease(a);\n        exprTokenRelease(b);\n        exprStackPush(&es->values_stack, result);\n    }\n\n    // Get final result from stack.\n    exprtoken *final = exprStackPop(&es->values_stack);\n    if (final == NULL) return 0;\n\n    // Convert result to boolean.\n    int retval = exprTokenToBool(final);\n    exprTokenRelease(final);\n    return retval;\n}\n\n/* ============================ Simple test main ============================ */\n\n#ifdef TEST_MAIN\n#include \"fastjson_test.c\"\n\nvoid exprPrintToken(exprtoken *t) {\n    switch(t->token_type) {\n        case EXPR_TOKEN_EOF:\n            printf(\"EOF\");\n            break;\n        case EXPR_TOKEN_NUM:\n            printf(\"NUM:%g\", t->num);\n            break;\n        case EXPR_TOKEN_STR:\n            printf(\"STR:\\\"%.*s\\\"\", (int)t->str.len, t->str.start);\n            break;\n        case EXPR_TOKEN_SELECTOR:\n            printf(\"SEL:%.*s\", (int)t->str.len, t->str.start);\n            break;\n        case EXPR_TOKEN_OP:\n            printf(\"OP:\");\n            for (int i = 0; ExprOptable[i].opname != NULL; i++) {\n                if (ExprOptable[i].opcode == t->opcode) {\n                    printf(\"%s\", ExprOptable[i].opname);\n                    break;\n                }\n            }\n            break;\n        default:\n            printf(\"UNKNOWN\");\n            break;\n    }\n}\n\nvoid exprPrintStack(exprstack *stack, const char *name) {\n    printf(\"%s (%d items):\", name, stack->numitems);\n    for (int j = 0; j < stack->numitems; j++) {\n        printf(\" \");\n        exprPrintToken(stack->items[j]);\n    }\n    printf(\"\\n\");\n}\n\nint main(int argc, char **argv) {\n    /* Check for JSON parser test mode. 
*/\n    if (argc >= 2 && strcmp(argv[1], \"--test-json-parser\") == 0) {\n        run_fastjson_test();\n        return 0;\n    }\n\n    char *testexpr = \"(5+2)*3 and .year > 1980 and 'foo' == 'foo'\";\n    char *testjson = \"{\\\"year\\\": 1984, \\\"name\\\": \\\"The Matrix\\\"}\";\n    if (argc >= 2) testexpr = argv[1];\n    if (argc >= 3) testjson = argv[2];\n\n    printf(\"Compiling expression: %s\\n\", testexpr);\n\n    int errpos = 0;\n    exprstate *es = exprCompile(testexpr,&errpos);\n    if (es == NULL) {\n        printf(\"Compilation failed near \\\"...%s\\\"\\n\", testexpr+errpos);\n        return 1;\n    }\n\n    exprPrintStack(&es->tokens, \"Tokens\");\n    exprPrintStack(&es->program, \"Program\");\n    printf(\"Running against object: %s\\n\", testjson);\n    int result = exprRun(es,testjson,strlen(testjson));\n    printf(\"Result1: %s\\n\", result ? \"True\" : \"False\");\n    result = exprRun(es,testjson,strlen(testjson));\n    printf(\"Result2: %s\\n\", result ? \"True\" : \"False\");\n\n    exprFree(es);\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "modules/vector-sets/fastjson.c",
    "content": "/* Ultra‑lightweight top‑level JSON field extractor.\n * Return the element directly as an expr.c token.\n * This code is directly included inside expr.c.\n *\n * Copyright (c) 2025-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of the Redis Source Available License 2.0\n * (RSALv2) or the Server Side Public License v1 (SSPLv1).\n *\n * Originally authored by: Salvatore Sanfilippo.\n *\n * ------------------------------------------------------------------\n *\n * DESIGN GOALS:\n *\n * 1. Zero heap allocations while seeking the requested key.\n * 2. A single parse (and therefore a single allocation, if needed)\n *    when the key finally matches.\n * 3. Same subset‑of‑JSON coverage needed by expr.c:\n * - Strings (escapes: \\\" \\\\ \\n \\r \\t).\n * - Numbers (double).\n * - Booleans.\n * - Null.\n * - Flat arrays of the above primitives.\n *\n * Any other value (nested object, unicode escape, etc.) returns NULL.\n * Should be very easy to extend it in case in the future we want\n * more for the FILTER option of VSIM.\n * 4. No global state, so this file can be #included directly in expr.c.\n *\n * The only API expr.c uses directly is:\n *\n * exprtoken *jsonExtractField(const char *json, size_t json_len,\n * const char *field, size_t field_len);\n * ------------------------------------------------------------------ */\n\n#include <ctype.h>\n#include <string.h>\n\n// Forward declarations.\nstatic int jsonSkipValue(const char **p, const char *end);\nstatic exprtoken *jsonParseValueToken(const char **p, const char *end);\n\n/* Similar to ctype.h isdigit() but covers the whole JSON number charset,\n * including exp form. */\nstatic int jsonIsNumberChar(int c) {\n    return isdigit(c) || c=='-' || c=='+' || c=='.' || c=='e' || c=='E';\n}\n\n/* ========================== Fast skipping of JSON =========================\n * The helpers here are designed to skip values without performing any\n * allocation. 
This way, for the use case of this JSON parser, we are able\n * to easily (and with good speed) skip fields and values we are not\n * interested in. Then, later in the code, when we find the field we want\n * to obtain, we finally call the functions that turn a given JSON value\n * associated with a field into one of our expression tokens.\n * ========================================================================== */\n\n/* Advance *p consuming all the spaces. */\nstatic inline void jsonSkipWhiteSpaces(const char **p, const char *end) {\n    while (*p < end && isspace((unsigned char)**p)) (*p)++;\n}\n\n/* Advance *p past a JSON string. Returns 1 on success, 0 on error. */\nstatic int jsonSkipString(const char **p, const char *end) {\n    if (*p >= end || **p != '\"') return 0;\n    (*p)++; /* Skip opening quote. */\n    while (*p < end) {\n        if (**p == '\\\\') {\n            (*p) += 2;\n            continue;\n        }\n        if (**p == '\"') {\n            (*p)++; /* Skip closing quote. */\n            return 1;\n        }\n        (*p)++;\n    }\n    return 0; /* Unterminated string. */\n}\n\n/* Skip an array or object generically using a depth counter.\n * Opener and closer tell the function how the aggregate\n * data type starts/stops, basically [] or {}. */\nstatic int jsonSkipBracketed(const char **p, const char *end,\n                             char opener, char closer) {\n    int depth = 1;\n    (*p)++; /* Skip opener. */\n\n    /* Loop until we reach the end of the input or find the matching\n     * closer (depth becomes 0). 
*/\n    while (*p < end && depth > 0) {\n        char c = **p;\n\n        if (c == '\"') {\n            // Found a string, delegate skipping to jsonSkipString().\n            if (!jsonSkipString(p, end)) {\n                return 0; // String skipping failed (e.g., unterminated)\n            }\n            /* jsonSkipString() advances *p past the closing quote.\n             * Continue the loop to process the character *after* the string. */\n            continue;\n        }\n\n        /* If it's not a string, check if it affects the depth for the\n         * specific brackets we are currently tracking. */\n        if (c == opener) {\n            depth++;\n        } else if (c == closer) {\n            depth--;\n        }\n\n        /* Always advance the pointer for any non-string character.\n         * This handles commas, colons, whitespace, numbers, literals,\n         * and even nested brackets of a *different* type than the\n         * one we are currently skipping (e.g. skipping a { inside []). */\n        (*p)++;\n    }\n\n    /* Return 1 (true) if we successfully found the matching closer,\n     * otherwise there is a parse error and we return 0. */\n    return depth == 0;\n}\n\n/* Skip a single JSON literal (true, null, ...) starting at *p.\n * Returns 1 on success, 0 on failure. */\nstatic int jsonSkipLiteral(const char **p, const char *end, const char *lit) {\n    size_t l = strlen(lit);\n    if (*p + l > end) return 0;\n    if (strncmp(*p, lit, l) == 0) { *p += l; return 1; }\n    return 0;\n}\n\n/* Skip number, don't check that number format is correct, just consume\n * number-alike characters.\n *\n * Note: More robust number skipping might check validity,\n * but for skipping, just consuming plausible characters is enough. */\nstatic int jsonSkipNumber(const char **p, const char *end) {\n    const char *num_start = *p;\n    while (*p < end && jsonIsNumberChar(**p)) (*p)++;\n    return *p > num_start; // Any progress made? 
Otherwise no number found.\n}\n\n/* Skip any JSON value. 1 = success, 0 = error. */\nstatic int jsonSkipValue(const char **p, const char *end) {\n    jsonSkipWhiteSpaces(p, end);\n    if (*p >= end) return 0;\n    switch (**p) {\n    case '\"': return jsonSkipString(p, end);\n    case '{':  return jsonSkipBracketed(p, end, '{', '}');\n    case '[':  return jsonSkipBracketed(p, end, '[', ']');\n    case 't':  return jsonSkipLiteral(p, end, \"true\");\n    case 'f':  return jsonSkipLiteral(p, end, \"false\");\n    case 'n':  return jsonSkipLiteral(p, end, \"null\");\n    default: return jsonSkipNumber(p, end);\n    }\n}\n\n/* =========================== JSON to exprtoken ============================\n * The functions below convert a given json value to the equivalent\n * expression token structure.\n * ========================================================================== */\n\nstatic exprtoken *jsonParseStringToken(const char **p, const char *end) {\n    if (*p >= end || **p != '\"') return NULL;\n    const char *start = ++(*p);\n    int esc = 0; size_t len = 0; int has_esc = 0;\n    const char *q = *p;\n    while (q < end) {\n        if (esc) { esc = 0; q++; len++; has_esc = 1; continue; }\n        if (*q == '\\\\') { esc = 1; q++; continue; }\n        if (*q == '\"') break;\n        q++; len++;\n    }\n    if (q >= end || *q != '\"') return NULL; // Unterminated string\n    exprtoken *t = exprNewToken(EXPR_TOKEN_STR);\n\n    if (!has_esc) {\n        // No escapes, we can point directly into the original JSON string.\n        t->str.start = (char*)start; t->str.len = len; t->str.heapstr = NULL;\n    } else {\n        // Escapes present, need to allocate and copy/process escapes.\n        char *dst = RedisModule_Alloc(len + 1);\n\n        t->str.start = t->str.heapstr = dst; t->str.len = len;\n        const char *r = start; esc = 0;\n        while (r < q) {\n            if (esc) {\n                switch (*r) {\n                // Supported escapes from Goal 
3.\n                case 'n': *dst='\\n'; break;\n                case 'r': *dst='\\r'; break;\n                case 't': *dst='\\t'; break;\n                case '\\\\': *dst='\\\\'; break;\n                case '\"': *dst='\\\"'; break;\n                // Escapes (like \\uXXXX, \\b, \\f) are not supported for now,\n                // we just copy them verbatim.\n                default: *dst=*r; break;\n                }\n                dst++; esc = 0; r++; continue;\n            }\n            if (*r == '\\\\') { esc = 1; r++; continue; }\n            *dst++ = *r++;\n        }\n        *dst = '\\0'; // Null-terminate the allocated string.\n    }\n    *p = q + 1; // Advance the main pointer past the closing quote.\n    return t;\n}\n\nstatic exprtoken *jsonParseNumberToken(const char **p, const char *end) {\n    // Use a buffer to extract the number literal for parsing with strtod().\n    char buf[256]; int idx = 0;\n    const char *start = *p; // For strtod partial failures check.\n\n    // Copy potential number characters to buffer.\n    while (*p < end && idx < (int)sizeof(buf)-1 && jsonIsNumberChar(**p)) {\n        buf[idx++] = **p;\n        (*p)++;\n    }\n    buf[idx]='\\0'; // Null-terminate buffer.\n\n    if (idx==0) return NULL; // No number characters found.\n\n    char *ep; // End pointer for strtod validation.\n    double v = strtod(buf, &ep);\n\n    /* Check if strtod() consumed the entire buffer content.\n     * If not, the number format was invalid. 
*/\n    if (*ep!='\\0') {\n        // strtod() failed; rewind p to the start and return NULL.\n        *p = start;\n        return NULL;\n    }\n\n    // If strtod() succeeded, create and return the token.\n    exprtoken *t = exprNewToken(EXPR_TOKEN_NUM);\n    t->num = v;\n    return t;\n}\n\nstatic exprtoken *jsonParseLiteralToken(const char **p, const char *end, const char *lit, int type, double num) {\n    size_t l = strlen(lit);\n\n    // Ensure we don't read past 'end'.\n    if ((*p + l) > end) return NULL;\n\n    if (strncmp(*p, lit, l) != 0) return NULL; // Literal doesn't match.\n\n    // Check that the character *after* the literal is a valid JSON delimiter\n    // (whitespace, comma, closing bracket/brace, or end of input).\n    // This prevents matching \"trueblabla\" as \"true\".\n    if ((*p + l) < end) {\n        char next_char = *(*p + l);\n        if (!isspace((unsigned char)next_char) && next_char!=',' &&\n            next_char!=']' && next_char!='}') {\n            return NULL; // Invalid character following literal.\n        }\n    }\n\n    // Literal matched and is correctly terminated.\n    *p += l;\n    exprtoken *t = exprNewToken(type);\n    t->num = num;\n    return t;\n}\n\nstatic exprtoken *jsonParseArrayToken(const char **p, const char *end) {\n    if (*p >= end || **p != '[') return NULL;\n    (*p)++; // Skip '['.\n    jsonSkipWhiteSpaces(p,end);\n\n    exprtoken *t = exprNewToken(EXPR_TOKEN_TUPLE);\n    t->tuple.len = 0; t->tuple.ele = NULL; size_t alloc = 0;\n\n    // Handle empty array [].\n    if (*p < end && **p == ']') {\n        (*p)++; // Skip ']'.\n        return t;\n    }\n\n    // Parse array elements.\n    while (1) {\n        exprtoken *ele = jsonParseValueToken(p,end);\n        if (!ele) {\n            exprTokenRelease(t); // Clean up partially built array token.\n            return NULL;\n        }\n\n        // Grow allocated space for elements if needed.\n        if (t->tuple.len == alloc) {\n            size_t newsize = 
alloc ? alloc * 2 : 4;\n            // Check for potential overflow if newsize becomes huge.\n            if (newsize < alloc) {\n                exprTokenRelease(ele);\n                exprTokenRelease(t);\n                return NULL;\n            }\n            exprtoken **newele = RedisModule_Realloc(t->tuple.ele,\n                                           sizeof(exprtoken*)*newsize);\n            t->tuple.ele = newele;\n            alloc = newsize;\n        }\n        t->tuple.ele[t->tuple.len++] = ele; // Add element.\n\n        jsonSkipWhiteSpaces(p,end);\n        if (*p>=end) {\n            // Unterminated array. Note that this check is crucial because\n            // previous value parsed may seek 'p' to 'end'.\n            exprTokenRelease(t);\n            return NULL;\n        }\n\n        // Check for comma (more elements) or closing bracket.\n        if (**p == ',') {\n            (*p)++; // Skip ','\n            jsonSkipWhiteSpaces(p,end); // Skip whitespace before next element\n            continue; // Parse next element\n        } else if (**p == ']') {\n            (*p)++; // Skip ']'\n            return t; // End of array\n        } else {\n            // Unexpected character (not ',' or ']')\n            exprTokenRelease(t);\n            return NULL;\n        }\n    }\n}\n\n/* Turn a JSON value into an expr token. 
*/\nstatic exprtoken *jsonParseValueToken(const char **p, const char *end) {\n    jsonSkipWhiteSpaces(p,end);\n    if (*p >= end) return NULL;\n\n    switch (**p) {\n    case '\"': return jsonParseStringToken(p,end);\n    case '[':  return jsonParseArrayToken(p,end);\n    case '{':  return NULL; // No nested object support for now.\n    case 't':  return jsonParseLiteralToken(p,end,\"true\",EXPR_TOKEN_NUM,1);\n    case 'f':  return jsonParseLiteralToken(p,end,\"false\",EXPR_TOKEN_NUM,0);\n    case 'n':  return jsonParseLiteralToken(p,end,\"null\",EXPR_TOKEN_NULL,0);\n    default:\n        // Check if it starts like a number.\n        if (isdigit((unsigned char)**p) || **p=='-' || **p=='+') {\n            return jsonParseNumberToken(p,end);\n        }\n        // Anything else is an unsupported type or malformed JSON.\n        return NULL;\n    }\n}\n\n/* ============================== Fast key seeking ========================== */\n\n/* Finds the start of the value for a given field key within a JSON object.\n * Returns a pointer to the first char of the value, or NULL if not found/error.\n * This function does not perform any allocation and is optimized to seek\n * the specified *toplevel* field as fast as possible. 
*/\nstatic const char *jsonSeekField(const char *json, const char *end,\n                                 const char *field, size_t flen) {\n    const char *p = json;\n    jsonSkipWhiteSpaces(&p,end);\n    if (p >= end || *p != '{') return NULL; // Must start with '{'.\n    p++; // skip '{'.\n\n    while (1) {\n        jsonSkipWhiteSpaces(&p,end);\n        if (p >= end) return NULL; // Reached end within object.\n\n        if (*p == '}') return NULL; // End of object, field not found.\n\n        // Expecting a key (string).\n        if (*p != '\"') return NULL; // Key must be a string.\n\n        // --- Key Matching using jsonSkipString ---\n        const char *key_start = p + 1; // Start of key content.\n        const char *key_end_p = p;     // Will later contain the end.\n\n        // Use jsonSkipString() to find the end.\n        if (!jsonSkipString(&key_end_p, end)) {\n            // Unterminated / invalid key string.\n            return NULL;\n        }\n\n        // Calculate the length of the key's content.\n        size_t klen = (key_end_p - 1) - key_start;\n\n        /* Perform the comparison using the raw key content.\n         * WARNING: This uses memcmp(), so we don't handle escaped chars\n         * within the key matching against unescaped chars in 'field'. 
*/\n        int match = klen == flen && !memcmp(key_start, field, flen);\n\n        // Update the main pointer 'p' to be after the key string.\n        p = key_end_p;\n\n        // Now we expect to find a \":\" followed by a value.\n        jsonSkipWhiteSpaces(&p,end);\n        if (p>=end || *p!=':') return NULL; // Expect ':' after key\n        p++; // Skip ':'.\n\n\t// Seek value.\n        jsonSkipWhiteSpaces(&p,end);\n        if (p>=end) return NULL; // Expect value after ':'\n\n        if (match) {\n            // Found the matching key, p now points to the start of the value.\n            return p;\n        } else {\n            // Key didn't match, skip the corresponding value.\n            if (!jsonSkipValue(&p,end)) return NULL; // Syntax error.\n        }\n\n\n        // Look for comma or a closing brace.\n        jsonSkipWhiteSpaces(&p,end);\n        if (p>=end) return NULL; // Reached end after value.\n\n        if (*p == ',') {\n            p++; // Skip comma, continue loop to find next key.\n            continue;\n        } else if (*p == '}') {\n            return NULL; // Reached end of object, field not found.\n        }\n        return NULL; // Malformed JSON (unexpected char after value).\n    }\n}\n\n/* This is the only real API that this file conceptually exports (it is\n * inlined, actually). */\nexprtoken *jsonExtractField(const char *json, size_t json_len,\n                            const char *field, size_t field_len)\n{\n    const char *end = json + json_len;\n    const char *valptr = jsonSeekField(json,end,field,field_len);\n    if (!valptr) return NULL;\n\n    /* Key found, valptr points to the start of the value.\n     * Convert it into an expression token object. */\n    return jsonParseValueToken(&valptr,end);\n}\n"
  },
  {
    "path": "modules/vector-sets/fastjson_test.c",
    "content": "/* fastjson_test.c - Stress test for fastjson.c\n *\n * This performs boundary and corruption tests to ensure\n * the JSON parser handles edge cases without accessing\n * memory outside the bounds of the input.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <signal.h>\n#include <time.h>\n#include <sys/mman.h>\n#include <sys/types.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <setjmp.h>\n\n/* Page size constant - typically 4096 or 16k bytes (Apple Silicon).\n * We use 16k so that it will work on both, but not with Linux huge pages. */\n#define PAGE_SIZE 4096*4\n#define MAX_JSON_SIZE (PAGE_SIZE - 128)  /* Keep some margin */\n#define MAX_FIELD_SIZE 64\n#define NUM_TEST_ITERATIONS 100000\n#define NUM_CORRUPTION_TESTS 10000\n#define NUM_BOUNDARY_TESTS 10000\n\n/* Test state tracking */\nstatic char *safe_page = NULL;       /* Start of readable/writable page */\nstatic char *unsafe_page = NULL;     /* Start of inaccessible guard page */\nstatic int boundary_violation = 0;   /* Flag for boundary violations */\nstatic jmp_buf jmpbuf;               /* For signal handling */\nstatic int tests_passed = 0;\nstatic int tests_failed = 0;\nstatic int corruptions_passed = 0;\nstatic int boundary_tests_passed = 0;\n\n/* Test metadata for tracking */\ntypedef struct {\n    char *json;\n    size_t json_len;\n    char field[MAX_FIELD_SIZE];\n    size_t field_len;\n    int expected_result;\n} test_case_t;\n\n/* Forward declarations for test JSON generation */\nchar *generate_random_json(size_t *len, char *field, size_t *field_len, int *has_field);\nvoid corrupt_json(char *json, size_t len);\nvoid setup_test_memory(void);\nvoid cleanup_test_memory(void);\nvoid run_normal_tests(void);\nvoid run_corruption_tests(void);\nvoid run_boundary_tests(void);\nvoid print_test_summary(void);\n\n/* Signal handler for segmentation violations */\nstatic void sigsegv_handler(int sig) {\n    boundary_violation = 1;\n    
printf(\"Boundary violation detected! Caught signal %d\\n\", sig);\n    longjmp(jmpbuf, 1);\n}\n\n/* Wrapper for jsonExtractField to check for boundary violations */\nexprtoken *safe_extract_field(const char *json, size_t json_len,\n                             const char *field, size_t field_len) {\n    boundary_violation = 0;\n\n    if (setjmp(jmpbuf) == 0) {\n        return jsonExtractField(json, json_len, field, field_len);\n    } else {\n        return NULL; /* Return NULL if boundary violation occurred */\n    }\n}\n\n/* Setup two adjacent memory pages - one readable/writable, one inaccessible */\nvoid setup_test_memory(void) {\n    /* Request a page of memory, with specific alignment. We rely on the\n     * fact that hopefully the page after that will cause a segfault if\n     * accessed. */\n    void *region = mmap(NULL, PAGE_SIZE,\n                       PROT_READ | PROT_WRITE,\n                       MAP_PRIVATE | MAP_ANONYMOUS,\n                       -1, 0);\n\n    if (region == MAP_FAILED) {\n        perror(\"mmap failed\");\n        exit(EXIT_FAILURE);\n    }\n\n    safe_page = (char*)region;\n    unsafe_page = safe_page + PAGE_SIZE;\n    // Uncomment to make sure it crashes :D\n    // printf(\"%d\\n\", unsafe_page[5]);\n\n    /* Set up signal handlers for memory access violations */\n    struct sigaction sa;\n    sa.sa_handler = sigsegv_handler;\n    sigemptyset(&sa.sa_mask);\n    sa.sa_flags = 0;\n\n    sigaction(SIGSEGV, &sa, NULL);\n    sigaction(SIGBUS, &sa, NULL);\n}\n\nvoid cleanup_test_memory(void) {\n    if (safe_page != NULL) {\n        munmap(safe_page, PAGE_SIZE);\n        safe_page = NULL;\n        unsafe_page = NULL;\n    }\n}\n\n/* Generate random strings with proper escaping for JSON */\nvoid generate_random_string(char *buffer, size_t max_len) {\n    static const char charset[] = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\";\n    size_t len = 1 + rand() % (max_len - 2); /* Ensure at least 1 char */\n\n    for 
(size_t i = 0; i < len; i++) {\n        buffer[i] = charset[rand() % (sizeof(charset) - 1)];\n    }\n    buffer[len] = '\\0';\n}\n\n/* Generate random numbers as strings */\nvoid generate_random_number(char *buffer, size_t max_len) {\n    double num = (double)rand() / RAND_MAX * 1000.0;\n\n    /* Occasionally make it negative or add decimal places */\n    if (rand() % 5 == 0) num = -num;\n    if (rand() % 3 != 0) num += (double)(rand() % 100) / 100.0;\n\n    snprintf(buffer, max_len, \"%.6g\", num);\n}\n\n/* Generate a random field name */\nvoid generate_random_field(char *field, size_t *field_len) {\n    generate_random_string(field, MAX_FIELD_SIZE / 2);\n    *field_len = strlen(field);\n}\n\n/* Generate a random JSON object with fields */\nchar *generate_random_json(size_t *len, char *field, size_t *field_len, int *has_field) {\n    char *json = malloc(MAX_JSON_SIZE);\n    if (json == NULL) {\n        perror(\"malloc\");\n        exit(EXIT_FAILURE);\n    }\n\n    char buffer[MAX_JSON_SIZE / 4]; /* Buffer for generating values */\n    int pos = 0;\n    int num_fields = 1 + rand() % 10; /* Random number of fields */\n    int target_field_index = rand() % num_fields; /* Which field to return */\n\n    /* Start the JSON object */\n    pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"{\");\n\n    /* Generate random field/value pairs */\n    for (int i = 0; i < num_fields; i++) {\n        /* Add a comma if not the first field */\n        if (i > 0) {\n            pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \", \");\n        }\n\n        /* Generate a field name */\n        if (i == target_field_index) {\n            /* This is our target field - save it for the caller */\n            generate_random_field(field, field_len);\n            pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"\\\"%s\\\": \", field);\n            *has_field = 1;\n            /* Sometimes change the last char so that it will not match. 
*/\n            if (rand() % 2) {\n                *has_field = 0;\n                field[*field_len-1] = '!';\n            }\n        } else {\n            generate_random_string(buffer, MAX_FIELD_SIZE / 4);\n            pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"\\\"%s\\\": \", buffer);\n        }\n\n        /* Generate a random value type (one of the six cases below). */\n        int value_type = rand() % 6;\n        switch (value_type) {\n            case 0: /* String */\n                generate_random_string(buffer, MAX_JSON_SIZE / 8);\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"\\\"%s\\\"\", buffer);\n                break;\n\n            case 1: /* Number */\n                generate_random_number(buffer, MAX_JSON_SIZE / 8);\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"%s\", buffer);\n                break;\n\n            case 2: /* Boolean: true */\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"true\");\n                break;\n\n            case 3: /* Boolean: false */\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"false\");\n                break;\n\n            case 4: /* Null */\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"null\");\n                break;\n\n            case 5: /* Array (simple) */\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"[\");\n                int array_items = 1 + rand() % 5;\n                for (int j = 0; j < array_items; j++) {\n                    if (j > 0) pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \", \");\n\n                    /* Array items - either number or string */\n                    if (rand() % 2) {\n                        generate_random_number(buffer, MAX_JSON_SIZE / 16);\n                        pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"%s\", buffer);\n                    } else {\n                        generate_random_string(buffer, MAX_JSON_SIZE / 16);\n                        
pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"\\\"%s\\\"\", buffer);\n                    }\n                }\n                pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"]\");\n                break;\n        }\n    }\n\n    /* Close the JSON object */\n    pos += snprintf(json + pos, MAX_JSON_SIZE - pos, \"}\");\n    *len = pos;\n\n    return json;\n}\n\n/* Corrupt JSON by replacing random characters */\nvoid corrupt_json(char *json, size_t len) {\n    if (len < 2) return;  /* Too short to corrupt safely */\n\n    /* Corrupt 1-3 characters */\n    int num_corruptions = 1 + rand() % 3;\n    for (int i = 0; i < num_corruptions; i++) {\n        size_t pos = rand() % len;\n        char corruption = \" \\t\\n{}[]\\\":,0123456789abcdefXYZ\"[rand() % 30];\n        json[pos] = corruption;\n    }\n}\n\n/* Run standard parser tests with generated valid JSON */\nvoid run_normal_tests(void) {\n    printf(\"Running normal JSON extraction tests...\\n\");\n\n    for (int i = 0; i < NUM_TEST_ITERATIONS; i++) {\n        char field[MAX_FIELD_SIZE] = {0};\n        size_t field_len = 0;\n        size_t json_len = 0;\n        int has_field = 0;\n\n        /* Generate random JSON */\n        char *json = generate_random_json(&json_len, field, &field_len, &has_field);\n\n        /* Use valid field to test parser */\n        exprtoken *token = safe_extract_field(json, json_len, field, field_len);\n\n        /* Check if we got a token as expected */\n        if (has_field && token != NULL) {\n            exprTokenRelease(token);\n            tests_passed++;\n        } else if (!has_field && token == NULL) {\n            tests_passed++;\n        } else {\n            tests_failed++;\n        }\n\n        /* Test with a non-existent field */\n        char nonexistent_field[MAX_FIELD_SIZE] = \"nonexistent_field\";\n        token = safe_extract_field(json, json_len, nonexistent_field, strlen(nonexistent_field));\n\n        if (token == NULL) {\n            tests_passed++;\n      
  } else {\n            exprTokenRelease(token);\n            tests_failed++;\n        }\n\n        free(json);\n    }\n}\n\n/* Run tests with corrupted JSON */\nvoid run_corruption_tests(void) {\n    printf(\"Running JSON corruption tests...\\n\");\n\n    for (int i = 0; i < NUM_CORRUPTION_TESTS; i++) {\n        char field[MAX_FIELD_SIZE] = {0};\n        size_t field_len = 0;\n        size_t json_len = 0;\n        int has_field = 0;\n\n        /* Generate random JSON */\n        char *json = generate_random_json(&json_len, field, &field_len, &has_field);\n\n        /* Make a copy and corrupt it */\n        char *corrupted = malloc(json_len + 1);\n        if (!corrupted) {\n            perror(\"malloc\");\n            free(json);\n            exit(EXIT_FAILURE);\n        }\n\n        memcpy(corrupted, json, json_len + 1);\n        corrupt_json(corrupted, json_len);\n\n        /* Test with corrupted JSON */\n        exprtoken *token = safe_extract_field(corrupted, json_len, field, field_len);\n\n        /* We're just testing that it doesn't crash or access invalid memory */\n        if (boundary_violation) {\n            printf(\"Boundary violation with corrupted JSON!\\n\");\n            tests_failed++;\n        } else {\n            if (token != NULL) {\n                exprTokenRelease(token);\n            }\n            corruptions_passed++;\n        }\n\n        free(corrupted);\n        free(json);\n    }\n}\n\n/* Run tests at memory boundaries */\nvoid run_boundary_tests(void) {\n    printf(\"Running memory boundary tests...\\n\");\n\n    for (int i = 0; i < NUM_BOUNDARY_TESTS; i++) {\n        char field[MAX_FIELD_SIZE] = {0};\n        size_t field_len = 0;\n        size_t json_len = 0;\n        int has_field = 0;\n\n        /* Generate random JSON */\n        char *temp_json = generate_random_json(&json_len, field, &field_len, &has_field);\n\n        /* Truncate the JSON to a random length */\n        size_t truncated_len = 1 + rand() % json_len;\n\n        
/* Place at the edge of the safe page */\n        size_t offset = PAGE_SIZE - truncated_len;\n        memcpy(safe_page + offset, temp_json, truncated_len);\n\n        /* Test parsing with non-existent field (forcing it to scan to end) */\n        char nonexistent_field[MAX_FIELD_SIZE] = \"nonexistent_field\";\n        exprtoken *token = safe_extract_field(safe_page + offset, truncated_len,\n                                             nonexistent_field, strlen(nonexistent_field));\n\n        /* We're just testing that it doesn't access memory beyond the boundary */\n        if (boundary_violation) {\n            printf(\"Boundary violation at edge of memory page!\\n\");\n            tests_failed++;\n        } else {\n            if (token != NULL) {\n                exprTokenRelease(token);\n            }\n            boundary_tests_passed++;\n        }\n\n        free(temp_json);\n    }\n}\n\n/* Print summary of test results */\nvoid print_test_summary(void) {\n    printf(\"\\n===== FASTJSON PARSER TEST SUMMARY =====\\n\");\n    printf(\"Normal tests passed: %d/%d\\n\", tests_passed, NUM_TEST_ITERATIONS * 2);\n    printf(\"Corruption tests passed: %d/%d\\n\", corruptions_passed, NUM_CORRUPTION_TESTS);\n    printf(\"Boundary tests passed: %d/%d\\n\", boundary_tests_passed, NUM_BOUNDARY_TESTS);\n    printf(\"Failed tests: %d\\n\", tests_failed);\n\n    if (tests_failed == 0) {\n        printf(\"\\nALL TESTS PASSED! The JSON parser appears to be robust.\\n\");\n    } else {\n        printf(\"\\nSome tests FAILED. 
The JSON parser may be vulnerable.\\n\");\n    }\n}\n\n/* Entry point for fastjson parser test */\nvoid run_fastjson_test(void) {\n    printf(\"Starting fastjson parser stress test...\\n\");\n\n    /* Seed the random number generator */\n    srand(time(NULL));\n\n    /* Setup test memory environment */\n    setup_test_memory();\n\n    /* Run the various test phases */\n    run_normal_tests();\n    run_corruption_tests();\n    run_boundary_tests();\n\n    /* Print summary */\n    print_test_summary();\n\n    /* Cleanup */\n    cleanup_test_memory();\n}\n"
  },
  {
    "path": "modules/vector-sets/hnsw.c",
"content": "/* HNSW (Hierarchical Navigable Small World) Implementation.\n *\n * Based on the paper by Yu. A. Malkov, D. A. Yashunin.\n *\n * Many details of this implementation, not covered in the paper, were\n * obtained by simulating different workloads and checking the connection\n * quality of the graph.\n *\n * Notably, this implementation:\n *\n * 1. Only uses bi-directional links, implementing strategies in order to\n *    link new nodes even when candidates are full, and our new node would\n *    not be close enough to replace old links in the candidate.\n *\n * 2. We normalize on-insert, making cosine similarity and dot product the\n *    same. This means we can't use Euclidean distance or the like here.\n *    Together with quantization, this provides an important speedup that\n *    makes HNSW more practical.\n *\n * 3. The quantization used is int8, and it is performed per-vector, so the\n *    \"range\" (max abs value) is also stored alongside the quantized data.\n *\n * 4. 
This library implements true element deletion, not just marking the\n *    element as deleted, but removing it (we can do it since our links are\n *    bidirectional), and relinking the nodes orphaned of one link among\n *    them.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo.\n */\n\n#define _DEFAULT_SOURCE\n#define _POSIX_C_SOURCE 200809L\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n#include <stdint.h>\n#include <float.h>  /* for INFINITY if not in math.h */\n#include <assert.h>\n#include \"hnsw.h\"\n#include \"mixer.h\"\n\n/* Check if we can compile SIMD code with function attributes.\n * This defines HAVE_AVX2, HAVE_AVX512, and HAVE_POPCNT when the compiler\n * supports the target() attribute for runtime CPU feature dispatch. 
*/\n#if defined(__x86_64__) && ((defined(__GNUC__) && __GNUC__ >= 5) || (defined(__clang__) && __clang_major__ >= 4))\n    #if defined(__has_attribute) && __has_attribute(target)\n        #define HAVE_AVX2\n        #define HAVE_AVX512\n        #define HAVE_POPCNT\n    #endif\n#endif\n\n#if defined(HAVE_POPCNT)\n    #define ATTRIBUTE_TARGET_POPCNT __attribute__((target(\"popcnt\")))\n    #define VSET_USE_POPCNT __builtin_cpu_supports(\"popcnt\")\n#else\n    #define ATTRIBUTE_TARGET_POPCNT\n    #define VSET_USE_POPCNT 0\n#endif\n\n#if defined(HAVE_AVX2)\n#define ATTRIBUTE_TARGET_AVX2 __attribute__((target(\"avx2,fma\")))\n#define ATTRIBUTE_TARGET_AVX2_POPCNT __attribute__((target(\"avx2,fma,popcnt\")))\n#define VSET_USE_AVX2 (__builtin_cpu_supports(\"avx2\") && __builtin_cpu_supports(\"fma\"))\n#else\n#define ATTRIBUTE_TARGET_AVX2\n#define ATTRIBUTE_TARGET_AVX2_POPCNT\n#define VSET_USE_AVX2 0\n#endif\n\n#if defined (HAVE_AVX512)\n#define ATTRIBUTE_TARGET_AVX512 __attribute__((target(\"avx512f,avx512bw,fma\")))\n#define ATTRIBUTE_TARGET_AVX512_VPOPCNT __attribute__((target(\"avx512f,fma,avx512vpopcntdq,popcnt\")))\n#define VSET_USE_AVX512 (__builtin_cpu_supports(\"avx512f\") && __builtin_cpu_supports(\"avx512bw\"))\n#define VSET_USE_AVX512_VPOPCNT (__builtin_cpu_supports(\"avx512f\") && __builtin_cpu_supports(\"avx512vpopcntdq\"))\n#else\n#define ATTRIBUTE_TARGET_AVX512\n#define ATTRIBUTE_TARGET_AVX512_VPOPCNT\n#define VSET_USE_AVX512 0\n#define VSET_USE_AVX512_VPOPCNT 0\n#endif\n\n/* Include SIMD headers when supported */\n#if defined(HAVE_AVX2) || defined(HAVE_AVX512)\n#include <immintrin.h>\n#endif\n\n#if 0\n#define debugmsg printf\n#else\n#define debugmsg if(0) printf\n#endif\n\n#ifndef INFINITY\n#define INFINITY (1.0/0.0)\n#endif\n\n#define MIN(a,b) ((a) < (b) ? 
(a) : (b))\n\n/* Define likely macro if not already defined */\n#ifndef likely\n#if __GNUC__ >= 3\n#define likely(x) __builtin_expect(!!(x), 1)\n#else\n#define likely(x) (x)\n#endif\n#endif\n\n/* Algorithm parameters. */\n\n#define HNSW_P 0.25         /* Probability of level increase. */\n#define HNSW_MAX_LEVEL 16   /* Max level nodes can reach. */\n#define HNSW_EF_C 200       /* Default size of dynamic candidate list while\n                             * inserting a new node, in case 0 is passed to\n                             * the 'ef' argument while inserting. This is also\n                             * used when deleting nodes for the search step\n                             * needed sometimes to reconnect nodes that remain\n                             * orphaned of one link. */\n\nstatic void (*hfree)(void *p) = free;\nstatic void *(*hmalloc)(size_t s) = malloc;\nstatic void *(*hrealloc)(void *old, size_t s) = realloc;\n\nvoid hnsw_set_allocator(void (*free_ptr)(void*), void *(*malloc_ptr)(size_t),\n                        void *(*realloc_ptr)(void*, size_t))\n{\n    hfree = free_ptr;\n    hmalloc = malloc_ptr;\n    hrealloc = realloc_ptr;\n}\n\n// Get a warning if you use the libc allocator functions by mistake.\n#define malloc use_hmalloc_instead\n#define realloc use_hrealloc_instead\n#define free use_hfree_instead\n\n/* ============================== Prototypes ================================ */\nvoid hnsw_cursor_element_deleted(HNSW *index, hnswNode *deleted);\n\n/* ============================ Priority queue ================================\n * We need a priority queue to take an ordered list of candidates. Right now\n * it is implemented as a linear array, since it is relatively small.\n *\n * You may find it odd that we take the best element (smaller distance)\n * at the end of the array, but this way popping from the pqueue is O(1), as\n * we just need to decrement the count, and this is a heavily used operation\n * in a critical code path. 
This makes the priority queue implementation a\n * bit more complex in the insertion, but for good reasons. */\n\n/* Maximum number of candidates we'll ever need (cit. Bill Gates). */\n#define HNSW_MAX_CANDIDATES 256\n\ntypedef struct {\n    hnswNode *node;\n    float distance;\n} pqitem;\n\ntypedef struct {\n    pqitem *items;         /* Array of items. */\n    uint32_t count;        /* Current number of items. */\n    uint32_t cap;          /* Maximum capacity. */\n} pqueue;\n\n/* The HNSW algorithms access the pqueue conceptually from nearest (index 0)\n * to farthest (larger indexes) node, so the following macros are used to\n * access the pqueue in this fashion, even if the internal order is\n * actually reversed. */\n#define pq_get_node(q,i) ((q)->items[(q)->count-(i+1)].node)\n#define pq_get_distance(q,i) ((q)->items[(q)->count-(i+1)].distance)\n\n/* Create a new priority queue with given capacity. Adding to the\n * pqueue only retains 'capacity' elements with the shortest distance. */\npqueue *pq_new(uint32_t capacity) {\n    pqueue *pq = hmalloc(sizeof(*pq));\n    if (!pq) return NULL;\n\n    pq->items = hmalloc(sizeof(pqitem) * capacity);\n    if (!pq->items) {\n        hfree(pq);\n        return NULL;\n    }\n\n    pq->count = 0;\n    pq->cap = capacity;\n    return pq;\n}\n\n/* Free a priority queue. */\nvoid pq_free(pqueue *pq) {\n    if (!pq) return;\n    hfree(pq->items);\n    hfree(pq);\n}\n\n/* Insert maintaining distance order (higher distances first). */\nvoid pq_push(pqueue *pq, hnswNode *node, float distance) {\n    if (pq->count < pq->cap) {\n        /* Queue not full: shift right from high distances to make room. 
*/\n        uint32_t i = pq->count;\n        while (i > 0 && pq->items[i-1].distance < distance) {\n            pq->items[i] = pq->items[i-1];\n            i--;\n        }\n        pq->items[i].node = node;\n        pq->items[i].distance = distance;\n        pq->count++;\n    } else {\n        /* Queue full: if new item is worse than worst, ignore it. */\n        if (distance >= pq->items[0].distance) return;\n\n        /* Otherwise shift left from low distances to drop worst. */\n        uint32_t i = 0;\n        while (i < pq->cap-1 && pq->items[i+1].distance > distance) {\n            pq->items[i] = pq->items[i+1];\n            i++;\n        }\n        pq->items[i].node = node;\n        pq->items[i].distance = distance;\n    }\n}\n\n/* Remove and return the top (closest) element, which is at count-1\n * since we store elements with higher distances first.\n * Runs in constant time. */\nhnswNode *pq_pop(pqueue *pq, float *distance) {\n    if (pq->count == 0) return NULL;\n    pq->count--;\n    *distance = pq->items[pq->count].distance;\n    return pq->items[pq->count].node;\n}\n\n/* Get distance of the furthest element.\n * An empty priority queue has infinite distance as its furthest element,\n * note that this behavior is needed by the algorithms below. */\nfloat pq_max_distance(pqueue *pq) {\n    if (pq->count == 0) return INFINITY;\n    return pq->items[0].distance;\n}\n\n/* ============================ HNSW algorithm ============================== */\n\n/* Check if CPU supports POPCNT instruction - cached per thread */\nstatic inline int hnsw_cpu_supports_popcnt(void) {\n#if defined(HAVE_POPCNT)\n    static __thread int popcnt_supported = -1;\n    if (popcnt_supported == -1) {\n        popcnt_supported = __builtin_cpu_supports(\"popcnt\");\n    }\n    return popcnt_supported;\n#else\n    return 0; /* Assume CPU does not support POPCNT if __builtin_cpu_supports() is not available. 
*/\n#endif\n}\n\n/* Manual popcount implementation for platforms without POPCNT support */\nstatic inline int hnsw_popcount64(uint64_t x) {\n    x = (x & 0x5555555555555555) + ((x >> 1) & 0x5555555555555555);\n    x = (x & 0x3333333333333333) + ((x >> 2) & 0x3333333333333333);\n    x = (x & 0x0F0F0F0F0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F0F0F0F0F);\n    x = (x & 0x00FF00FF00FF00FF) + ((x >> 8) & 0x00FF00FF00FF00FF);\n    x = (x & 0x0000FFFF0000FFFF) + ((x >> 16) & 0x0000FFFF0000FFFF);\n    x = (x & 0x00000000FFFFFFFF) + ((x >> 32) & 0x00000000FFFFFFFF);\n    return x;\n}\n\n/* Optimized popcount function that uses hardware POPCNT instruction when available,\n * falling back to a software implementation when necessary. The CPU feature detection\n * result is cached per thread for better performance. */\nATTRIBUTE_TARGET_POPCNT\nstatic inline int hnsw_popcount(uint64_t x) {\n    if (likely(hnsw_cpu_supports_popcnt())) {\n        return __builtin_popcountll(x);\n    } else {\n        return hnsw_popcount64(x);\n    }\n}\n\n/* Binary vectors distance function that uses POPCNT when available */\nATTRIBUTE_TARGET_POPCNT\nstatic inline float hnsw_vectors_distance_bin(const uint64_t *x, const uint64_t *y, uint32_t dim) {\n    uint32_t len = (dim+63)/64;\n    uint32_t opposite = 0;\n\n    for (uint32_t j = 0; j < len; j++) {\n        uint64_t xor = x[j]^y[j];\n        opposite += hnsw_popcount(xor);\n    }\n    return (float)opposite*2/dim;\n}\n\n#if defined(HAVE_AVX512)\n/* AVX512 optimized dot product for float vectors */\nATTRIBUTE_TARGET_AVX512\nfloat vectors_distance_float_avx512(const float *x, const float *y, uint32_t dim) {\n    __m512 sum = _mm512_setzero_ps();\n    uint32_t i;\n    \n    /* Process 16 floats at a time with AVX512 */\n    for (i = 0; i + 15 < dim; i += 16) {\n        __m512 vx = _mm512_loadu_ps(&x[i]);\n        __m512 vy = _mm512_loadu_ps(&y[i]);\n        sum = _mm512_fmadd_ps(vx, vy, sum);\n    }\n    \n    /* Horizontal sum of the 16 elements in sum 
*/\n    float dot = _mm512_reduce_add_ps(sum);\n    \n    /* Handle remaining elements */\n    for (; i < dim; i++) {\n        dot += x[i] * y[i];\n    }\n    \n    return 1.0f - dot;\n}\n#endif /* HAVE_AVX512 */\n\n#if defined(HAVE_AVX2)\n/* AVX2 optimized dot product for float vectors */\nATTRIBUTE_TARGET_AVX2\nfloat vectors_distance_float_avx2(const float *x, const float *y, uint32_t dim) {\n    __m256 sum1 = _mm256_setzero_ps();\n    __m256 sum2 = _mm256_setzero_ps();\n    uint32_t i;\n    \n    /* Process 16 floats at a time with two AVX2 registers */\n    for (i = 0; i + 15 < dim; i += 16) {\n        __m256 vx1 = _mm256_loadu_ps(&x[i]);\n        __m256 vy1 = _mm256_loadu_ps(&y[i]);\n        __m256 vx2 = _mm256_loadu_ps(&x[i + 8]);\n        __m256 vy2 = _mm256_loadu_ps(&y[i + 8]);\n        \n        sum1 = _mm256_fmadd_ps(vx1, vy1, sum1);\n        sum2 = _mm256_fmadd_ps(vx2, vy2, sum2);\n    }\n    \n    /* Combine the two sums */\n    __m256 combined = _mm256_add_ps(sum1, sum2);\n    \n    /* Horizontal sum of the 8 elements */\n    __m128 sum_high = _mm256_extractf128_ps(combined, 1);\n    __m128 sum_low = _mm256_castps256_ps128(combined);\n    __m128 sum_128 = _mm_add_ps(sum_high, sum_low);\n    \n    sum_128 = _mm_hadd_ps(sum_128, sum_128);\n    sum_128 = _mm_hadd_ps(sum_128, sum_128);\n    \n    float dot = _mm_cvtss_f32(sum_128);\n    \n    /* Handle remaining elements */\n    for (; i < dim; i++) {\n        dot += x[i] * y[i];\n    }\n    \n    return 1.0f - dot;\n}\n#endif /* HAVE_AVX2 */\n\n/* Optimized dot product: automatically selects the best available\n * implementation. Dot product: our vectors are already normalized.\n * Version for non-quantized vectors of floats. 
*/\nfloat vectors_distance_float(const float *x, const float *y, uint32_t dim) {\n#if defined(HAVE_AVX512)\n    if (dim >= 16 && VSET_USE_AVX512) {\n        return vectors_distance_float_avx512(x, y, dim);\n    }\n#endif\n\n#if defined(HAVE_AVX2)\n    if (VSET_USE_AVX2 && dim >= 16) {\n        return vectors_distance_float_avx2(x, y, dim);\n    }\n#endif\n\n    /* Fallback to original scalar implementation */\n    float dot0 = 0.0f, dot1 = 0.0f;\n    uint32_t i;\n\n    /* Use two accumulators to reduce dependencies among multiplications.\n     * This provides a clear speed boost on Apple silicon, but should\n     * help in general. */\n    for (i = 0; i + 7 < dim; i += 8) {\n        dot0 += x[i] * y[i] +\n                x[i+1] * y[i+1] +\n                x[i+2] * y[i+2] +\n                x[i+3] * y[i+3];\n\n        dot1 += x[i+4] * y[i+4] +\n                x[i+5] * y[i+5] +\n                x[i+6] * y[i+6] +\n                x[i+7] * y[i+7];\n    }\n\n    /* Handle the remaining elements. These are a minority in the case\n     * of a small vector, don't optimize this part. */\n    for (; i < dim; i++) dot0 += x[i] * y[i];\n\n    /* The following line may be counterintuitive. The dot product of\n     * normalized vectors is equivalent to their cosine similarity. 
The\n     * cosine will be from -1 (vectors facing opposite directions in the\n     * N-dim space) to 1 (vectors facing in the same direction).\n     *\n     * We kinda want a \"score\" of distance from 0 to 2 (this is a distance\n     * function and we want to minimize the distance for K-NN searches), so we\n     * can't just add 1: that would return a number in the 0-2 range, with\n     * 0 meaning opposite vectors and 2 identical vectors, so this is\n     * similarity, not distance.\n     *\n     * Returning instead (1 - dotprod) inverts the meaning: 0 is identical\n     * and 2 is opposite, hence it is their distance.\n     *\n     * Why not normalize the similarity right now, and return from 0 to\n     * 1? Because division is costly. */\n    return 1.0f - (dot0 + dot1);\n}\n\n/* Q8 quants dot product. We do integer math and later fix it by range. */\n#if defined(HAVE_AVX512)\n/* AVX512 optimized dot product for Q8 vectors */\nATTRIBUTE_TARGET_AVX512\nfloat vectors_distance_q8_avx512(const int8_t *x, const int8_t *y, uint32_t dim,\n                                  float range_a, float range_b) {\n    // Handle zero vectors special case.\n    if (range_a == 0 || range_b == 0) {\n        return 1.0f;\n    }\n\n    const float scale_product = (range_a/127) * (range_b/127);\n    __m512i sum = _mm512_setzero_si512();\n    uint32_t i;\n\n    /* Process 64 int8 elements at a time with AVX512 */\n    for (i = 0; i + 63 < dim; i += 64) {\n        /* Load 64 int8 values */\n        __m512i vx = _mm512_loadu_si512((__m512i*)&x[i]);\n        __m512i vy = _mm512_loadu_si512((__m512i*)&y[i]);\n        \n        /* Unpack and multiply-add in 32-bit precision.\n         * This is done in two steps: lower 32 bytes and upper 32 bytes */\n        \n        /* Process lower 32 bytes (256 bits) */\n        __m256i vx_lo = _mm512_extracti64x4_epi64(vx, 0);\n        __m256i vy_lo = _mm512_extracti64x4_epi64(vy, 0);\n        \n        /* Extend int8 to int16 */\n        __m512i 
vx_lo_16 = _mm512_cvtepi8_epi16(vx_lo);\n        __m512i vy_lo_16 = _mm512_cvtepi8_epi16(vy_lo);\n        \n        /* Multiply and accumulate to int32 */\n        __m512i prod_lo = _mm512_madd_epi16(vx_lo_16, vy_lo_16);\n        sum = _mm512_add_epi32(sum, prod_lo);\n        \n        /* Process upper 32 bytes (256 bits) */\n        __m256i vx_hi = _mm512_extracti64x4_epi64(vx, 1);\n        __m256i vy_hi = _mm512_extracti64x4_epi64(vy, 1);\n        \n        __m512i vx_hi_16 = _mm512_cvtepi8_epi16(vx_hi);\n        __m512i vy_hi_16 = _mm512_cvtepi8_epi16(vy_hi);\n        \n        __m512i prod_hi = _mm512_madd_epi16(vx_hi_16, vy_hi_16);\n        sum = _mm512_add_epi32(sum, prod_hi);\n    }\n\n    /* Horizontal sum of the 16 int32 elements in sum */\n    int32_t dot = _mm512_reduce_add_epi32(sum);\n\n    /* Handle remaining elements */\n    for (; i < dim; i++) {\n        dot += ((int32_t)x[i]) * ((int32_t)y[i]);\n    }\n\n    /* Convert to original range */\n    float dotf = dot * scale_product;\n    float distance = 1.0f - dotf;\n\n    /* Clamp distance to [0, 2] */\n    if (distance < 0) distance = 0;\n    else if (distance > 2) distance = 2;\n    return distance;\n}\n#endif /* HAVE_AVX512 */\n\n#if defined(HAVE_AVX2)\n/* AVX2 optimized dot product for Q8 vectors */\nATTRIBUTE_TARGET_AVX2\nfloat vectors_distance_q8_avx2(const int8_t *x, const int8_t *y, uint32_t dim,\n                                float range_a, float range_b) {\n    // Handle zero vectors special case.\n    if (range_a == 0 || range_b == 0) {\n        return 1.0f;\n    }\n\n    const float scale_product = (range_a/127) * (range_b/127);\n    __m256i sum = _mm256_setzero_si256();\n    uint32_t i;\n\n    /* Process 32 int8 elements at a time with AVX2 */\n    for (i = 0; i + 31 < dim; i += 32) {\n        /* Load 32 int8 values */\n        __m256i vx = _mm256_loadu_si256((__m256i*)&x[i]);\n        __m256i vy = _mm256_loadu_si256((__m256i*)&y[i]);\n        \n        /* Split into lower and upper 16 
bytes */\n        __m128i vx_lo = _mm256_extracti128_si256(vx, 0);\n        __m128i vy_lo = _mm256_extracti128_si256(vy, 0);\n        __m128i vx_hi = _mm256_extracti128_si256(vx, 1);\n        __m128i vy_hi = _mm256_extracti128_si256(vy, 1);\n        \n        /* Extend int8 to int16 for lower half */\n        __m256i vx_lo_16 = _mm256_cvtepi8_epi16(vx_lo);\n        __m256i vy_lo_16 = _mm256_cvtepi8_epi16(vy_lo);\n        \n        /* Multiply and accumulate (madd does multiply adjacent pairs and add) */\n        __m256i prod_lo = _mm256_madd_epi16(vx_lo_16, vy_lo_16);\n        sum = _mm256_add_epi32(sum, prod_lo);\n        \n        /* Extend int8 to int16 for upper half */\n        __m256i vx_hi_16 = _mm256_cvtepi8_epi16(vx_hi);\n        __m256i vy_hi_16 = _mm256_cvtepi8_epi16(vy_hi);\n        \n        __m256i prod_hi = _mm256_madd_epi16(vx_hi_16, vy_hi_16);\n        sum = _mm256_add_epi32(sum, prod_hi);\n    }\n\n    /* Horizontal sum of the 8 int32 elements in sum */\n    __m128i sum_hi = _mm256_extracti128_si256(sum, 1);\n    __m128i sum_lo = _mm256_castsi256_si128(sum);\n    __m128i sum_128 = _mm_add_epi32(sum_hi, sum_lo);\n    \n    sum_128 = _mm_hadd_epi32(sum_128, sum_128);\n    sum_128 = _mm_hadd_epi32(sum_128, sum_128);\n    \n    int32_t dot = _mm_cvtsi128_si32(sum_128);\n\n    /* Handle remaining elements */\n    for (; i < dim; i++) {\n        dot += ((int32_t)x[i]) * ((int32_t)y[i]);\n    }\n\n    /* Convert to original range */\n    float dotf = dot * scale_product;\n    float distance = 1.0f - dotf;\n\n    /* Clamp distance to [0, 2] */\n    if (distance < 0) distance = 0;\n    else if (distance > 2) distance = 2;\n    return distance;\n}\n#endif /* HAVE_AVX2 */\n\n/* Q8 dot product: automatically selects best available implementation */\nfloat vectors_distance_q8(const int8_t *x, const int8_t *y, uint32_t dim,\n                        float range_a, float range_b) {\n#if defined(HAVE_AVX512)\n    if (dim >= 64 && VSET_USE_AVX512) {\n        return 
vectors_distance_q8_avx512(x, y, dim, range_a, range_b);\n    }\n#endif\n\n#if defined(HAVE_AVX2)\n    if (dim >= 32 && VSET_USE_AVX2) {\n        return vectors_distance_q8_avx2(x, y, dim, range_a, range_b);\n    }\n#endif\n\n    /* Fallback to scalar implementation */\n    // Handle zero vectors special case.\n    if (range_a == 0 || range_b == 0) {\n        /* Zero vector distance from anything is 1.0\n         * (since 1.0 - dot_product where dot_product = 0). */\n        return 1.0f;\n    }\n\n    /* Each vector is quantized from [-max_abs, +max_abs] to [-127, 127]\n     * where range = 2*max_abs. */\n    const float scale_product = (range_a/127) * (range_b/127);\n\n    int32_t dot0 = 0, dot1 = 0;\n    uint32_t i;\n\n    // Process 8 elements at a time for better pipeline utilization.\n    for (i = 0; i + 7 < dim; i += 8) {\n        dot0 += ((int32_t)x[i]) * ((int32_t)y[i]) +\n                ((int32_t)x[i+1]) * ((int32_t)y[i+1]) +\n                ((int32_t)x[i+2]) * ((int32_t)y[i+2]) +\n                ((int32_t)x[i+3]) * ((int32_t)y[i+3]);\n\n        dot1 += ((int32_t)x[i+4]) * ((int32_t)y[i+4]) +\n                ((int32_t)x[i+5]) * ((int32_t)y[i+5]) +\n                ((int32_t)x[i+6]) * ((int32_t)y[i+6]) +\n                ((int32_t)x[i+7]) * ((int32_t)y[i+7]);\n    }\n\n    // Handle remaining elements.\n    for (; i < dim; i++) dot0 += ((int32_t)x[i]) * ((int32_t)y[i]);\n\n    // Convert to original range.\n    float dotf = (dot0 + dot1) * scale_product;\n    float distance = 1.0f - dotf;\n\n    // Clamp distance to [0, 2].\n    if (distance < 0) distance = 0;\n    else if (distance > 2) distance = 2;\n    return distance;\n}\n\n#if defined(HAVE_AVX512) && defined(HAVE_POPCNT)\n/* AVX-512 vectorized binary distance calculation using VPOPCNTDQ.\n * Processes 8 uint64_t (512 bits) per iteration.\n * \n * Uses _mm512_popcnt_epi64 hardware popcount instruction which requires\n * AVX512VPOPCNTDQ extension\n */\nATTRIBUTE_TARGET_AVX512_VPOPCNT\nstatic float 
vectors_distance_bin_avx512_vpopcnt(const uint64_t *x, const uint64_t *y, uint32_t dim) {\n    uint32_t len = (dim+63)/64;\n    uint32_t opposite = 0;\n    uint32_t j = 0;\n\n    /* Process 8 uint64_t (512 bits) at a time with hardware popcount */\n    if (len >= 8) {\n        __m512i sum = _mm512_setzero_si512();\n        \n        for (; j + 7 < len; j += 8) {\n            __m512i vx = _mm512_loadu_si512((__m512i*)&x[j]);\n            __m512i vy = _mm512_loadu_si512((__m512i*)&y[j]);\n            __m512i vxor = _mm512_xor_si512(vx, vy);\n            \n            /* Hardware popcount for 64-bit integers (AVX512VPOPCNTDQ) */\n            __m512i popcnt = _mm512_popcnt_epi64(vxor);\n            sum = _mm512_add_epi64(sum, popcnt);\n        }\n        \n        /* Horizontal sum: reduce 8x 64-bit integers to scalar */\n        opposite = _mm512_reduce_add_epi64(sum);\n    }\n\n    /* Handle remaining elements */\n    for (; j < len; j++) {\n        uint64_t xor = x[j] ^ y[j];\n        opposite += __builtin_popcountll(xor);\n    }\n\n    return (float)opposite * 2.0f / dim;\n}\n#endif\n\n#if defined(HAVE_AVX2) && defined(HAVE_POPCNT)\n/* AVX2 vectorized binary distance calculation.\n * Processes 4 uint64_t (256 bits) per iteration. 
*/\nATTRIBUTE_TARGET_AVX2_POPCNT\nstatic float vectors_distance_bin_avx2(const uint64_t *x, const uint64_t *y, uint32_t dim) {\n    uint32_t len = (dim+63)/64;\n    uint32_t opposite = 0;\n    uint32_t j = 0;\n\n    /* Process 4 uint64_t (256 bits) at a time */\n    if (len >= 4) {\n        for (; j + 3 < len; j += 4) {\n            __m256i vx = _mm256_loadu_si256((__m256i*)&x[j]);\n            __m256i vy = _mm256_loadu_si256((__m256i*)&y[j]);\n            __m256i vxor = _mm256_xor_si256(vx, vy);\n            \n            /* Extract and use hardware POPCNT instruction */\n            uint64_t xor_vals[4];\n            _mm256_storeu_si256((__m256i*)xor_vals, vxor);\n            \n            opposite += __builtin_popcountll(xor_vals[0]);\n            opposite += __builtin_popcountll(xor_vals[1]);\n            opposite += __builtin_popcountll(xor_vals[2]);\n            opposite += __builtin_popcountll(xor_vals[3]);\n        }\n    }\n\n    /* Handle remaining elements */\n    for (; j < len; j++) {\n        uint64_t xor = x[j] ^ y[j];\n        opposite += __builtin_popcountll(xor);\n    }\n\n    return (float)opposite * 2.0f / dim;\n}\n#endif\n\n/* Binary vectors distance with SIMD dispatch. */\nATTRIBUTE_TARGET_POPCNT\nfloat vectors_distance_bin(const uint64_t *x, const uint64_t *y, uint32_t dim) {\n#if defined(HAVE_AVX512) && defined(HAVE_POPCNT)\n    /* AVX-512 with VPOPCNTDQ */\n    if (dim >= 512 && VSET_USE_AVX512_VPOPCNT) {\n        return vectors_distance_bin_avx512_vpopcnt(x, y, dim);\n    }\n#endif\n\n#if defined(HAVE_AVX2) && defined(HAVE_POPCNT)\n    /* AVX2 path: processes 4 uint64_t (256 bits) per iteration */\n    if (dim >= 256 && VSET_USE_AVX2 && VSET_USE_POPCNT) {\n        return vectors_distance_bin_avx2(x, y, dim);\n    }\n#endif\n\n    /* Fallback to scalar implementation with runtime POPCNT detection */\n    return hnsw_vectors_distance_bin(x, y, dim);\n}\n\n/* Dot product between nodes. 
Will call the right version depending on the\n * quantization used. */\nfloat hnsw_distance(HNSW *index, hnswNode *a, hnswNode *b) {\n    switch(index->quant_type) {\n    case HNSW_QUANT_NONE:\n        return vectors_distance_float(a->vector,b->vector,index->vector_dim);\n    case HNSW_QUANT_Q8:\n        return vectors_distance_q8(a->vector,b->vector,index->vector_dim,a->quants_range,b->quants_range);\n    case HNSW_QUANT_BIN:\n        return vectors_distance_bin(a->vector,b->vector,index->vector_dim);\n    default:\n        assert(1 != 1);\n        return 0;\n    }\n}\n\n/* This does Q8 'range' quantization.\n * For people looking at this code thinking: Oh, I could use min/max\n * quants instead! Well: I tried with min/max normalization but the dot\n * product needs to accumulate the sum for later correction, and it's slower. */\nvoid quantize_to_q8(float *src, int8_t *dst, uint32_t dim, float *rangeptr) {\n    float max_abs = 0;\n    for (uint32_t j = 0; j < dim; j++) {\n        if (src[j] > max_abs) max_abs = src[j];\n        if (-src[j] > max_abs) max_abs = -src[j];\n    }\n\n    if (max_abs == 0) {\n        if (rangeptr) *rangeptr = 0;\n        memset(dst, 0, dim);\n        return;\n    }\n\n    const float scale = 127.0f / max_abs;  // Scale to map to [-127, 127].\n\n    for (uint32_t j = 0; j < dim; j++) {\n        dst[j] = (int8_t)roundf(src[j] * scale);\n    }\n    if (rangeptr) *rangeptr = max_abs;  // Return max_abs instead of 2*max_abs.\n}\n\n/* Binary quantization of vector 'src' to 'dst'. We use full 64-bit words\n * as the smallest unit, and just set all the unused bits to 0\n * so that they'll be the same in all the vectors, and when xor+popcount\n * is used to compute the distance, such bits are not considered. This\n * allows us to go faster. 
*/\nvoid quantize_to_bin(float *src, uint64_t *dst, uint32_t dim) {\n    memset(dst,0,(dim+63)/64*sizeof(uint64_t));\n    for (uint32_t j = 0; j < dim; j++) {\n        uint32_t word = j/64;\n        uint32_t bit = j&63;\n        /* Since cosine similarity checks the vector direction and\n         * not magnitude, we do likewise in the binary quantization and\n         * just remember if the component is positive or negative. */\n        if (src[j] > 0) dst[word] |= 1ULL<<bit;\n    }\n}\n\n/* L2 normalization of the float vector.\n *\n * Store the L2 value on 'l2ptr' if not NULL. This way the process\n * can be reversed even if some precision is lost. */\nvoid hnsw_normalize_vector(float *x, float *l2ptr, uint32_t dim) {\n    float l2 = 0;\n    uint32_t i;\n    for (i = 0; i + 3 < dim; i += 4) {\n        l2 += x[i]*x[i] +\n              x[i+1]*x[i+1] +\n              x[i+2]*x[i+2] +\n              x[i+3]*x[i+3];\n    }\n    for (; i < dim; i++) l2 += x[i]*x[i];\n    if (l2 == 0) return; // All zero vector, can't normalize.\n\n    l2 = sqrtf(l2);\n    if (l2ptr) *l2ptr = l2;\n    for (i = 0; i < dim; i++) x[i] /= l2;\n}\n\n/* Helper function to generate a random level. */\nuint32_t random_level(void) {\n    static const int threshold = HNSW_P * RAND_MAX;\n    uint32_t level = 0;\n\n    while (rand() < threshold && level < HNSW_MAX_LEVEL)\n        level += 1;\n    return level;\n}\n\n/* Create new HNSW index, quantized or not. */\nHNSW *hnsw_new(uint32_t vector_dim, uint32_t quant_type, uint32_t m) {\n    HNSW *index = hmalloc(sizeof(HNSW));\n    if (!index) return NULL;\n\n    /* M parameter sanity check. 
*/\n    if (m == 0) m = HNSW_DEFAULT_M;\n    else if (m > HNSW_MAX_M) m = HNSW_MAX_M;\n\n    index->M = m;\n    index->quant_type = quant_type;\n    index->enter_point = NULL;\n    index->max_level = 0;\n    index->vector_dim = vector_dim;\n    index->node_count = 0;\n    index->last_id = 0;\n    index->head = NULL;\n    index->cursors = NULL;\n\n    /* Initialize epochs array. */\n    for (int i = 0; i < HNSW_MAX_THREADS; i++)\n        index->current_epoch[i] = 0;\n\n    /* Initialize locks. */\n    if (pthread_rwlock_init(&index->global_lock, NULL) != 0) {\n        hfree(index);\n        return NULL;\n    }\n\n    for (int i = 0; i < HNSW_MAX_THREADS; i++) {\n        if (pthread_mutex_init(&index->slot_locks[i], NULL) != 0) {\n            /* Clean up previously initialized mutexes. */\n            for (int j = 0; j < i; j++)\n                pthread_mutex_destroy(&index->slot_locks[j]);\n            pthread_rwlock_destroy(&index->global_lock);\n            hfree(index);\n            return NULL;\n        }\n    }\n\n    /* Initialize atomic variables. */\n    index->next_slot = 0;\n    index->version = 0;\n    return index;\n}\n\n/* Fill 'vec' with the node vector, de-normalizing and de-quantizing it\n * as needed. Note that this function will return an approximated version\n * of the original vector. */\nvoid hnsw_get_node_vector(HNSW *index, hnswNode *node, float *vec) {\n    if (index->quant_type == HNSW_QUANT_NONE) {\n        memcpy(vec,node->vector,index->vector_dim*sizeof(float));\n    } else if (index->quant_type == HNSW_QUANT_Q8) {\n        int8_t *quants = node->vector;\n        for (uint32_t j = 0; j < index->vector_dim; j++)\n            vec[j] = (quants[j]*node->quants_range)/127;\n    } else if (index->quant_type == HNSW_QUANT_BIN) {\n        uint64_t *bits = node->vector;\n        for (uint32_t j = 0; j < index->vector_dim; j++) {\n            uint32_t word = j/64;\n            uint32_t bit = j&63;\n            vec[j] = (bits[word] & (1ULL<<bit)) ? 
1.0f : -1.0f;\n        }\n    }\n\n    // De-normalize.\n    if (index->quant_type != HNSW_QUANT_BIN) {\n        for (uint32_t j = 0; j < index->vector_dim; j++)\n            vec[j] *= node->l2;\n    }\n}\n\n/* Return the number of bytes needed to represent a vector in the index,\n * that is a function of the dimension of the vectors and the quantization\n * type used. */\nuint32_t hnsw_quants_bytes(HNSW *index) {\n    switch(index->quant_type) {\n    case HNSW_QUANT_NONE: return index->vector_dim * sizeof(float);\n    case HNSW_QUANT_Q8: return index->vector_dim;\n    case HNSW_QUANT_BIN: return (index->vector_dim+63)/64*8;\n    default: assert(0 && \"Quantization type not supported.\");\n    }\n}\n\n/* Create a new node. Returns NULL on out of memory.\n * It is possible to pass the vector as floats or, in case this index\n * was already stored on disk and is being loaded, or serialized and\n * transmitted in any form, the already quantized version in\n * 'qvector'.\n *\n * Only vector or qvector should be non-NULL. The reason why passing\n * a quantized vector is useful is that re-normalizing and\n * re-quantizing the same vector several times may accumulate rounding\n * errors. So if you work with quantized indexes, you should save\n * the quantized vectors.\n *\n * Note that, together with qvector, the quantization range is needed,\n * since this library uses per-vector quantization. In case of quantized\n * vectors the l2 is considered to be '1', so if you want to restore\n * the right l2 (to use the API that returns an approximation of the\n * original vector) make sure to save the l2 on disk and set it back\n * after the node creation (see later for the serialization API that\n * handles this and more). 
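\n *\n * A minimal usage sketch (hypothetical caller code; names like 'saved_id'\n * are illustrative and error handling is omitted). Creating a node from a\n * raw float vector, letting the library assign the id (passing id = 0) and\n * normalize/quantize as configured:\n *\n *   hnswNode *node = hnsw_node_new(index, 0, my_floats, NULL, 0, level, 1);\n *\n * Or, when reloading a vector that was saved already quantized:\n *\n *   hnswNode *n = hnsw_node_new(index, saved_id, NULL, saved_qvec,\n *                               saved_qrange, level, 0);\n *   if (n) n->l2 = saved_l2; // Restore the norm saved on disk.\n 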
*/\nhnswNode *hnsw_node_new(HNSW *index, uint64_t id, const float *vector, const int8_t *qvector, float qrange, uint32_t level, int normalize) {\n    hnswNode *node = hmalloc(sizeof(hnswNode)+(sizeof(hnswNodeLayer)*(level+1)));\n    if (!node) return NULL;\n\n    if (id == 0) id = ++index->last_id;\n    node->level = level;\n    node->id = id;\n    node->next = NULL;\n    node->vector = NULL;\n    node->l2 = 1;   // Default in case of already quantized vectors. It is\n                    // up to the caller to fill this later, if needed.\n\n    /* Initialize visited epoch array. */\n    for (int i = 0; i < HNSW_MAX_THREADS; i++)\n        node->visited_epoch[i] = 0;\n\n    if (qvector == NULL) {\n        /* Copy input vector. */\n        node->vector = hmalloc(sizeof(float) * index->vector_dim);\n        if (!node->vector) {\n            hfree(node);\n            return NULL;\n        }\n        memcpy(node->vector, vector, sizeof(float) * index->vector_dim);\n        if (normalize)\n            hnsw_normalize_vector(node->vector,&node->l2,index->vector_dim);\n\n        /* Handle quantization. 
*/\n        if (index->quant_type != HNSW_QUANT_NONE) {\n            void *quants = hmalloc(hnsw_quants_bytes(index));\n            if (quants == NULL) {\n                hfree(node->vector);\n                hfree(node);\n                return NULL;\n            }\n\n            // Quantize.\n            switch(index->quant_type) {\n            case HNSW_QUANT_Q8:\n                quantize_to_q8(node->vector,quants,index->vector_dim,&node->quants_range);\n                break;\n            case HNSW_QUANT_BIN:\n                quantize_to_bin(node->vector,quants,index->vector_dim);\n                break;\n            default:\n                assert(0 && \"Quantization type not handled.\");\n                break;\n            }\n\n            // Discard the full precision vector.\n            hfree(node->vector);\n            node->vector = quants;\n        }\n    } else {\n        // We got the already quantized vector. Just copy it.\n        assert(index->quant_type != HNSW_QUANT_NONE);\n        uint32_t vector_bytes = hnsw_quants_bytes(index);\n        node->vector = hmalloc(vector_bytes);\n        node->quants_range = qrange;\n        if (node->vector == NULL) {\n            hfree(node);\n            return NULL;\n        }\n        memcpy(node->vector,qvector,vector_bytes);\n    }\n\n    /* Initialize each layer. */\n    for (uint32_t i = 0; i <= level; i++) {\n        uint32_t max_links = (i == 0) ? index->M*2 : index->M;\n        node->layers[i].max_links = max_links;\n        node->layers[i].num_links = 0;\n        node->layers[i].worst_distance = 0;\n        node->layers[i].worst_idx = 0;\n        node->layers[i].links = hmalloc(sizeof(hnswNode*) * max_links);\n        if (!node->layers[i].links) {\n            for (uint32_t j = 0; j < i; j++) hfree(node->layers[j].links);\n            hfree(node->vector);\n            hfree(node);\n            return NULL;\n        }\n    }\n\n    return node;\n}\n\n/* Free a node. 
*/\nvoid hnsw_node_free(hnswNode *node) {\n    if (!node) return;\n\n    for (uint32_t i = 0; i <= node->level; i++)\n        hfree(node->layers[i].links);\n\n    hfree(node->vector);\n    hfree(node);\n}\n\n/* Free the entire index. */\nvoid hnsw_free(HNSW *index,void(*free_value)(void*value)) {\n    if (!index) return;\n\n    hnswNode *current = index->head;\n    while (current) {\n        hnswNode *next = current->next;\n        if (free_value) free_value(current->value);\n        hnsw_node_free(current);\n        current = next;\n    }\n\n    /* Destroy locks. */\n    pthread_rwlock_destroy(&index->global_lock);\n    for (int i = 0; i < HNSW_MAX_THREADS; i++) {\n        pthread_mutex_destroy(&index->slot_locks[i]);\n    }\n\n    hfree(index);\n}\n\n/* Add node to the linked list of nodes. We may need to scan the whole\n * HNSW graph for several reasons. The list is doubly linked since we\n * also need the ability to remove a node without scanning the whole thing. */\nvoid hnsw_add_node(HNSW *index, hnswNode *node) {\n    node->next = index->head;\n    node->prev = NULL;\n    if (index->head)\n        index->head->prev = node;\n    index->head = node;\n    index->node_count++;\n}\n\n/* Search the specified layer starting from the specified entry point\n * to collect 'ef' nodes that are near to 'query'.\n *\n * This function implements optional hybrid search, so that each node\n * can be accepted or not based on its associated value. In this case\n * a callback 'filter_callback' should be passed, together with a maximum\n * effort for the search (number of candidates to evaluate), since even\n * with a low \"EF\" value we risk that too few nodes satisfy\n * the provided filter, and we could trigger a full scan. 
*/\npqueue *search_layer_with_filter(\n                    HNSW *index, hnswNode *query, hnswNode *entry_point,\n                    uint32_t ef, uint32_t layer, uint32_t slot,\n                    int (*filter_callback)(void *value, void *privdata),\n                    void *filter_privdata, uint32_t max_candidates)\n{\n    // Mark visited nodes with a never-seen epoch.\n    index->current_epoch[slot]++;\n\n    pqueue *candidates = pq_new(HNSW_MAX_CANDIDATES);\n    pqueue *results = pq_new(ef);\n    if (!candidates || !results) {\n        if (candidates) pq_free(candidates);\n        if (results) pq_free(results);\n        return NULL;\n    }\n\n    // Keep track of the total effort: only used when filtering via\n    // a callback, to bound the search effort.\n    uint32_t evaluated_candidates = 1;\n\n    // Add entry point.\n    float dist = hnsw_distance(index, query, entry_point);\n    pq_push(candidates, entry_point, dist);\n    if (filter_callback == NULL ||\n        filter_callback(entry_point->value, filter_privdata))\n    {\n        pq_push(results, entry_point, dist);\n    }\n    entry_point->visited_epoch[slot] = index->current_epoch[slot];\n\n    // Process candidates.\n    while (candidates->count > 0) {\n        // Max effort check: if max_candidates is zero, we keep scanning.\n        if (filter_callback &&\n            max_candidates &&\n            evaluated_candidates >= max_candidates) break;\n\n        float cur_dist;\n        hnswNode *current = pq_pop(candidates, &cur_dist);\n        evaluated_candidates++;\n\n        float furthest = pq_max_distance(results);\n        if (results->count >= ef && cur_dist > furthest) break;\n\n        /* Check neighbors. 
*/\n        for (uint32_t i = 0; i < current->layers[layer].num_links; i++) {\n            hnswNode *neighbor = current->layers[layer].links[i];\n\n            if (neighbor->visited_epoch[slot] == index->current_epoch[slot])\n                continue; // Already visited during this scan.\n\n            neighbor->visited_epoch[slot] = index->current_epoch[slot];\n            float neighbor_dist = hnsw_distance(index, query, neighbor);\n\n            furthest = pq_max_distance(results);\n            if (filter_callback == NULL) {\n                /* Original HNSW logic when no filtering:\n                 * Add to results if better than current max or\n                 * results not full. */\n                if (neighbor_dist < furthest || results->count < ef) {\n                    pq_push(candidates, neighbor, neighbor_dist);\n                    pq_push(results, neighbor, neighbor_dist);\n                }\n            } else {\n                /* With filtering: we add candidates even if it doesn't match\n                 * the filter, in order to keep exploring the graph. */\n                if (neighbor_dist < furthest || candidates->count < ef) {\n                    pq_push(candidates, neighbor, neighbor_dist);\n                }\n\n                /* Add to results only if it passes the filter. */\n                if (filter_callback(neighbor->value, filter_privdata)) {\n                    if (neighbor_dist < furthest || results->count < ef) {\n                        pq_push(results, neighbor, neighbor_dist);\n                    }\n                }\n            }\n        }\n    }\n\n    pq_free(candidates);\n    return results;\n}\n\n/* Just a wrapper without the hybrid search callback. 
*/\npqueue *search_layer(HNSW *index, hnswNode *query, hnswNode *entry_point,\n                     uint32_t ef, uint32_t layer, uint32_t slot)\n{\n    return search_layer_with_filter(index, query, entry_point, ef, layer, slot,\n                                    NULL, NULL, 0);\n}\n\n/* This function is used in order to initialize a node allocated on the\n * stack with the specified vector. The idea is that we can\n * easily use hnsw_distance() between a vector and the HNSW nodes this way:\n *\n * hnswNode myQuery;\n * hnsw_init_tmp_node(myIndex,&myQuery,0,some_vector);\n * hnsw_distance(myIndex,&myQuery,some_hnsw_node);\n *\n * Make sure to later free the node with:\n *\n * hnsw_free_tmp_node(&myQuery,some_vector);\n *\n * You have to pass the vector to the free function, because sometimes\n * hnsw_init_tmp_node() may avoid allocating a vector at all,\n * just reusing the 'some_vector' pointer.\n *\n * Return 0 on out of memory, 1 on success.\n */\nint hnsw_init_tmp_node(HNSW *index, hnswNode *node, int is_normalized, const float *vector) {\n    node->vector = NULL;\n\n    /* Work on a normalized query vector if the input vector is\n     * not normalized. */\n    if (!is_normalized) {\n        node->vector = hmalloc(sizeof(float)*index->vector_dim);\n        if (node->vector == NULL) return 0;\n        memcpy(node->vector,vector,sizeof(float)*index->vector_dim);\n        hnsw_normalize_vector(node->vector,NULL,index->vector_dim);\n    } else {\n        node->vector = (float*)vector;\n    }\n\n    /* If quantization is enabled, our fake query node should be\n     * quantized as well. 
*/\n    if (index->quant_type != HNSW_QUANT_NONE) {\n        void *quants = hmalloc(hnsw_quants_bytes(index));\n        if (quants == NULL) {\n            if (node->vector != vector) hfree(node->vector);\n            return 0;\n        }\n        switch(index->quant_type) {\n        case HNSW_QUANT_Q8:\n            quantize_to_q8(node->vector, quants, index->vector_dim, &node->quants_range);\n            break;\n        case HNSW_QUANT_BIN:\n            quantize_to_bin(node->vector, quants, index->vector_dim);\n            break;\n        }\n        if (node->vector != vector) hfree(node->vector);\n        node->vector = quants;\n    }\n    return 1;\n}\n\n/* Free the stack-allocated node initialized by hnsw_init_tmp_node(). */\nvoid hnsw_free_tmp_node(hnswNode *node, const float *vector) {\n    if (node->vector != vector) hfree(node->vector);\n}\n\n/* Return approximated K-NN items. Note that the neighbors and distances\n * arrays must have space for at least 'k' items.\n * query_vector_is_normalized should be set to 1 if the query vector is\n * already normalized, otherwise, if 0, the function will copy the vector,\n * L2-normalize the copy and search using the normalized version.\n *\n * If the filter_callback argument is passed, only elements passing the\n * specified filter (invoked with the value associated to the node and\n * privdata as arguments) are returned. In such a case, if max_candidates\n * is not zero, it represents the maximum number of nodes to explore, since\n * the search may be otherwise unbounded if few or no elements pass the\n * filter. 
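\n *\n * A hypothetical usage sketch (the callback and buffer sizes here are\n * illustrative, not part of the library): return up to 10 filtered\n * neighbors, exploring at most 500 candidates:\n *\n *   int my_filter(void *value, void *privdata) {\n *       (void)privdata;\n *       return value != NULL; // Accept only nodes with an associated value.\n *   }\n *\n *   hnswNode *neighbors[10];\n *   float distances[10];\n *   int found = hnsw_search_with_filter(index, query_vec, 10, neighbors,\n *                                       distances, slot, 0, my_filter,\n *                                       NULL, 500);\n 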
*/\nint hnsw_search_with_filter\n               (HNSW *index, const float *query_vector, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized,\n                int (*filter_callback)(void *value, void *privdata),\n                void *filter_privdata, uint32_t max_candidates)\n{\n    if (!index || !query_vector || !neighbors || k == 0) return -1;\n    if (!index->enter_point) return 0; // Empty index.\n\n    /* Use a fake node that holds the query vector, this way we can\n     * use our normal node-to-node distance functions when checking\n     * the distance between the query and graph nodes. */\n    hnswNode query;\n    if (hnsw_init_tmp_node(index,&query,query_vector_is_normalized,query_vector) == 0) return -1;\n\n    // Start searching from the entry point.\n    hnswNode *curr_ep = index->enter_point;\n\n    /* Descend from the highest layer to layer 1 (layer 0 is handled\n     * in the next section), moving each time to the most similar node\n     * found so far. */\n    for (int lc = index->max_level; lc > 0; lc--) {\n        pqueue *results = search_layer(index, &query, curr_ep, 1, lc, slot);\n        if (!results) continue;\n\n        if (results->count > 0) {\n            curr_ep = pq_get_node(results,0);\n        }\n        pq_free(results);\n    }\n\n    /* Search the bottom layer (the most densely populated) with ef = k. */\n    pqueue *results = search_layer_with_filter(\n                        index, &query, curr_ep, k, 0, slot, filter_callback,\n                        filter_privdata, max_candidates);\n    if (!results) {\n        hnsw_free_tmp_node(&query, query_vector);\n        return -1;\n    }\n\n    /* Copy results. 
*/\n    uint32_t found = MIN(k, results->count);\n    for (uint32_t i = 0; i < found; i++) {\n        neighbors[i] = pq_get_node(results,i);\n        if (distances) {\n            distances[i] = pq_get_distance(results,i);\n        }\n    }\n\n    pq_free(results);\n    hnsw_free_tmp_node(&query, query_vector);\n    return found;\n}\n\n/* Wrapper to hnsw_search_with_filter() when no filter is needed. */\nint hnsw_search(HNSW *index, const float *query_vector, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized)\n{\n    return hnsw_search_with_filter(index,query_vector,k,neighbors,\n                                   distances,slot,query_vector_is_normalized,\n                                   NULL,NULL,0);\n}\n\n/* Rescan a node and update the worst neighbor index.\n * The following two functions are variants of this function to be used\n * when links are added or removed: they may do less work than a full scan. */\nvoid hnsw_update_worst_neighbor(HNSW *index, hnswNode *node, uint32_t layer) {\n    float worst_dist = 0;\n    uint32_t worst_idx = 0;\n    for (uint32_t i = 0; i < node->layers[layer].num_links; i++) {\n        float dist = hnsw_distance(index, node, node->layers[layer].links[i]);\n        if (dist > worst_dist) {\n            worst_dist = dist;\n            worst_idx = i;\n        }\n    }\n    node->layers[layer].worst_distance = worst_dist;\n    node->layers[layer].worst_idx = worst_idx;\n}\n\n/* Update node worst neighbor distance information when a new neighbor\n * is added. 
*/\nvoid hnsw_update_worst_neighbor_on_add(HNSW *index, hnswNode *node, uint32_t layer, uint32_t added_index, float distance) {\n    (void) index; // Unused but here for API symmetry.\n    if (node->layers[layer].num_links == 1 ||           // First neighbor?\n        distance > node->layers[layer].worst_distance)  // New worst?\n    {\n        node->layers[layer].worst_distance = distance;\n        node->layers[layer].worst_idx = added_index;\n    }\n}\n\n/* Update node worst neighbor distance information when a linked neighbor\n * is removed. */\nvoid hnsw_update_worst_neighbor_on_remove(HNSW *index, hnswNode *node, uint32_t layer, uint32_t removed_idx)\n{\n    if (node->layers[layer].num_links == 0) {\n        node->layers[layer].worst_distance = 0;\n        node->layers[layer].worst_idx = 0;\n    } else if (removed_idx == node->layers[layer].worst_idx) {\n        hnsw_update_worst_neighbor(index,node,layer);\n    } else if (removed_idx < node->layers[layer].worst_idx) {\n        // Just update the index if we removed an element before the worst.\n        node->layers[layer].worst_idx--;\n    }\n}\n\n/* We have a list of candidate nodes to link to the new node, when inserting\n * one. This function selects which nodes to link and performs the linking.\n *\n * Parameters:\n *\n * - 'candidates' is the priority queue of potential good nodes to link to the\n *   new node 'new_node'.\n * - 'required_links' is how many links we would like our new_node to get\n *   at the specified layer.\n * - 'aggressive' changes the strategy used to find good neighbors as follows:\n *\n * This function is called with aggressive=0 for all the layers, including\n * layer 0. 
When called like that, it will use the diversity-of-links and\n * quality-of-links checks before linking our new node with some candidate.\n *\n * However if the insert function finds that at layer 0, with aggressive=0,\n * too few connections were made, it calls this function again with\n * increasing aggressiveness levels, up to 2.\n *\n * At aggressive=1, the diversity checks are disabled, and the candidate\n * node for linking is accepted even if it is nearer to an already accepted\n * neighbor than it is to the new node.\n *\n * When we link our new node by replacing the link of a candidate neighbor\n * that already has the max number of links, inevitably some other node loses\n * a connection (to make space for our new node link). In this case:\n *\n * 1. If such a \"dropped\" node would remain with too few links, we try with\n *    some different neighbor instead; however, as the 'aggressive' parameter\n *    has incremental values (0, 1, 2), we are more and more willing to leave\n *    the dropped node with fewer connections.\n * 2. If aggressive=2, we will scan the candidate neighbor node links to\n *    find a different linked node to replace, one that is better connected\n *    even if its distance is not the worst.\n *\n * Note: this function is also called during deletion of nodes in order to\n * provide certain nodes with additional links.\n */\nvoid select_neighbors(HNSW *index, pqueue *candidates, hnswNode *new_node,\n                      uint32_t layer, uint32_t required_links, int aggressive)\n{\n    for (uint32_t i = 0; i < candidates->count; i++) {\n        hnswNode *neighbor = pq_get_node(candidates,i);\n        if (neighbor == new_node) continue; // Don't link node with itself.\n\n        /* Use our cached distance between the new node and the candidate. 
*/\n        float dist = pq_get_distance(candidates,i);\n\n        /* First of all, since our links are all bidirectional, if the\n         * new node for any reason no longer has room, or if it accumulated\n         * the required number of links, return ASAP. */\n        if (new_node->layers[layer].num_links >= new_node->layers[layer].max_links ||\n            new_node->layers[layer].num_links >= required_links) return;\n\n        /* If aggressive is true, it is possible that the new node\n         * already got some links among the candidates (see the top comment,\n         * this function gets re-called in case of too few links).\n         * So we need to check if this candidate is already linked to\n         * the new node. */\n        if (aggressive) {\n            int duplicated = 0;\n            for (uint32_t j = 0; j < new_node->layers[layer].num_links; j++) {\n                if (new_node->layers[layer].links[j] == neighbor) {\n                    duplicated = 1;\n                    break;\n                }\n            }\n            if (duplicated) continue;\n        }\n\n        /* Diversity check. We accept new candidates\n         * only if there is no element already accepted that is nearer\n         * to the candidate than the new element itself.\n         * However this check is disabled if we have pressure to find\n         * new links (aggressive != 0). */\n        if (!aggressive) {\n            int diversity_failed = 0;\n            for (uint32_t j = 0; j < new_node->layers[layer].num_links; j++) {\n                float link_dist = hnsw_distance(index, neighbor,\n                    new_node->layers[layer].links[j]);\n                if (link_dist < dist) {\n                    diversity_failed = 1;\n                    break;\n                }\n            }\n            if (diversity_failed) continue;\n        }\n\n        /* If the potential neighbor node has space, simply add the new link.\n         * We will have space as well. 
*/\n        uint32_t n = neighbor->layers[layer].num_links;\n        if (n < neighbor->layers[layer].max_links) {\n            /* Link candidate to new node. */\n            neighbor->layers[layer].links[n] = new_node;\n            neighbor->layers[layer].num_links++;\n\n            /* Update candidate worst link info. */\n            hnsw_update_worst_neighbor_on_add(index,neighbor,layer,n,dist);\n\n            /* Link new node to candidate. */\n            uint32_t new_links = new_node->layers[layer].num_links;\n            new_node->layers[layer].links[new_links] = neighbor;\n            new_node->layers[layer].num_links++;\n\n            /* Update new node worst link info. */\n            hnsw_update_worst_neighbor_on_add(index,new_node,layer,new_links,dist);\n            continue;\n        }\n\n        /* ====================================================================\n         * Replacing existing candidate neighbor link step.\n         * ================================================================== */\n\n        /* If we are here, our accepted candidate for linking is full.\n         *\n         * If the new node is more distant from the candidate than its\n         * current worst link, then we skip it: we would not be able to\n         * establish a bidirectional connection without compromising the\n         * link quality of the candidate.\n         *\n         * At aggressiveness > 0 we don't care about this check. */\n        if (!aggressive && dist >= neighbor->layers[layer].worst_distance)\n            continue;\n\n        /* We can add it: we are ready to replace the candidate neighbor worst\n         * link with the new node, assuming certain conditions are met. */\n        hnswNode *worst_node = neighbor->layers[layer].links[neighbor->layers[layer].worst_idx];\n\n        /* The worst node linked to our candidate may remain too disconnected\n         * if we remove the candidate node as its link. 
Let's check if\n         * this is the case: */\n        if (aggressive == 0 &&\n            worst_node->layers[layer].num_links <= index->M/2)\n            continue;\n\n        /* Aggressive level = 1. It's ok if the node remains with just\n         * index->M/4 links. */\n        else if (aggressive == 1 &&\n                 worst_node->layers[layer].num_links <= index->M/4)\n            continue;\n\n        /* If aggressive is set to 2, then the new node we are adding failed\n         * to find enough neighbors. We can't insert an almost orphaned new\n         * node, so let's see if the target node has some other link\n         * that is well connected in the graph: we could drop it instead\n         * of the worst link. */\n        if (aggressive == 2 && worst_node->layers[layer].num_links <=\n            index->M/4)\n        {\n            /* Let's see if we can find at least one linked node that\n             * would still remain with enough connections. Track the one\n             * that is the farthest away (worst distance) from our candidate\n             * neighbor (in order to remove the least interesting link). */\n            worst_node = NULL;\n            uint32_t worst_idx = 0;\n            float max_dist = 0;\n            for (uint32_t j = 0; j < neighbor->layers[layer].num_links; j++) {\n                hnswNode *to_drop = neighbor->layers[layer].links[j];\n\n                /* Skip this if it would remain too disconnected as well.\n                 *\n                 * NOTE about the index->M/4 min connections requirement:\n                 *\n                 * It is not too strict, since leaving a node with just a\n                 * single link does not just leave it too weakly connected, but\n                 * sometimes also creates cycles of a few disconnected\n                 * nodes linked among themselves. 
*/\n                if (to_drop->layers[layer].num_links <= index->M/4) continue;\n\n                float link_dist = hnsw_distance(index, neighbor, to_drop);\n                if (worst_node == NULL || link_dist > max_dist) {\n                    worst_node = to_drop;\n                    max_dist = link_dist;\n                    worst_idx = j;\n                }\n            }\n\n            if (worst_node != NULL) {\n                /* We found a node that we can drop. Let's pretend this is\n                 * the worst node of the candidate to unify the following\n                 * code path. Later we will fix the worst node info anyway. */\n                neighbor->layers[layer].worst_distance = max_dist;\n                neighbor->layers[layer].worst_idx = worst_idx;\n            } else {\n                /* Otherwise we have no other option than reallocating\n                 * the max number of links for this target node, and\n                 * ensure at least a few connections for our new node. */\n                uint32_t reallocation_limit = layer == 0 ?\n                    index->M * 3 : index->M *2;\n                if (neighbor->layers[layer].max_links >= reallocation_limit)\n                    continue;\n\n                uint32_t new_max_links = neighbor->layers[layer].max_links+1;\n                hnswNode **new_links = hrealloc(neighbor->layers[layer].links,\n                                        sizeof(hnswNode*) * new_max_links);\n                if (new_links == NULL) continue; // Non critical.\n\n                /* Update neighbor's link capacity. */\n                neighbor->layers[layer].links = new_links;\n                neighbor->layers[layer].max_links = new_max_links;\n\n                /* Establish bidirectional link. 
*/\n                uint32_t n = neighbor->layers[layer].num_links;\n                neighbor->layers[layer].links[n] = new_node;\n                neighbor->layers[layer].num_links++;\n                hnsw_update_worst_neighbor_on_add(index, neighbor, layer,\n                                                  n, dist);\n\n                n = new_node->layers[layer].num_links;\n                new_node->layers[layer].links[n] = neighbor;\n                new_node->layers[layer].num_links++;\n                hnsw_update_worst_neighbor_on_add(index, new_node, layer,\n                                                  n, dist);\n                continue;\n            }\n        }\n\n        // Remove backlink from the worst node of our candidate.\n        for (uint64_t j = 0; j < worst_node->layers[layer].num_links; j++) {\n            if (worst_node->layers[layer].links[j] == neighbor) {\n                memmove(&worst_node->layers[layer].links[j],\n                        &worst_node->layers[layer].links[j+1],\n                        (worst_node->layers[layer].num_links - j - 1) * sizeof(hnswNode*));\n                worst_node->layers[layer].num_links--;\n                hnsw_update_worst_neighbor_on_remove(index,worst_node,layer,j);\n                break;\n            }\n        }\n\n        /* Replace worst link with the new node. */\n        neighbor->layers[layer].links[neighbor->layers[layer].worst_idx] = new_node;\n\n        /* Update the worst link in the target node, at this point\n         * the link that we replaced may no longer be the worst. 
*/\n        hnsw_update_worst_neighbor(index,neighbor,layer);\n\n        // Add new node -> candidate link.\n        uint32_t new_links = new_node->layers[layer].num_links;\n        new_node->layers[layer].links[new_links] = neighbor;\n        new_node->layers[layer].num_links++;\n\n        // Update new node worst link.\n        hnsw_update_worst_neighbor_on_add(index,new_node,layer,new_links,dist);\n    }\n}\n\n/* This function implements node reconnection after a node deletion in HNSW.\n * When a node is deleted, other nodes at the specified layer lose one\n * connection (all the neighbors of the deleted node). This function attempts\n * to pair such nodes together in a way that maximizes connection quality\n * among the M nodes that were former neighbors of our deleted node.\n *\n * The algorithm works by first building a distance matrix among the nodes:\n *\n *     N0   N1   N2   N3\n * N0  0    1.2  0.4  0.9\n * N1  1.2  0    0.8  0.5\n * N2  0.4  0.8  0    1.1\n * N3  0.9  0.5  1.1  0\n *\n * For each potential pairing (i,j) we compute a score that combines:\n * 1. The direct cosine distance between the two nodes\n * 2. The average distance to other nodes that would no longer be\n *    available for pairing if we select this pair\n *\n * We want to balance local node-to-node requirements and global requirements.\n * For instance sometimes connecting A with B, while optimal, would leave\n * C and D to be connected without other choices, and this could be a very\n * bad connection. 
Maybe instead A-C and B-D are both relatively high\n * quality connections.\n *\n * The formula used to calculate the score of each connection is:\n *\n * score[i,j] = W1*(2-distance[i,j]) + W2*((new_avg_i + new_avg_j)/2)\n * where new_avg_x is the average of distances in row x excluding distance[i,j]\n *\n * So the score is directly proportional to the SIMILARITY of the two nodes\n * and also directly proportional to the DISTANCE of the potential other\n * connections that we lost by pairing i,j. So we have a cost for missed\n * opportunities, or rather, in this case, a reward if the missed\n * opportunities are not so good (big average distance).\n *\n * W1 and W2 are weights (defaults: 0.7 and 0.3) that determine the relative\n * importance of immediate connection quality vs future pairing potential.\n *\n * After the initial pairing phase, any nodes that couldn't be paired\n * (due to odd count or existing connections) are handled by searching\n * the broader graph using the standard HNSW neighbor selection logic.\n */\nvoid hnsw_reconnect_nodes(HNSW *index, hnswNode **nodes, int count, uint32_t layer) {\n    if (count <= 0) return;\n    debugmsg(\"Reconnecting %d nodes\\n\", count);\n\n    /* Step 1: Build the distance matrix between all nodes.\n     * Since distance(i,j) = distance(j,i), we only compute the upper triangle\n     * and mirror it to the lower triangle. 
*/\n    float *distances = hmalloc((unsigned long) count * count * sizeof(float));\n    if (!distances) return;\n\n    for (int i = 0; i < count; i++) {\n        distances[i*count + i] = 0;  // Distance to self is 0.\n        for (int j = i+1; j < count; j++) {\n            float dist = hnsw_distance(index, nodes[i], nodes[j]);\n            distances[i*count + j] = dist;     // Upper triangle.\n            distances[j*count + i] = dist;     // Lower triangle.\n        }\n    }\n\n    /* Step 2: Calculate row averages (will be used in scoring):\n     * please note that we just calculate row averages and not\n     * column averages since the matrix is symmetrical, so those\n     * are the same: check the matrix in the top comment if you have any\n     * doubt about this. */\n    float *row_avgs = hmalloc(count * sizeof(float));\n    if (!row_avgs) {\n        hfree(distances);\n        return;\n    }\n\n    for (int i = 0; i < count; i++) {\n        float sum = 0;\n        int valid_count = 0;\n        for (int j = 0; j < count; j++) {\n            if (i != j) {\n                sum += distances[i*count + j];\n                valid_count++;\n            }\n        }\n        row_avgs[i] = valid_count ? sum / valid_count : 0;\n    }\n\n    /* Step 3: Build the scoring matrix. What we do here is to combine how\n     * good a given i,j connection is with how badly connecting\n     * i,j will affect the remaining quality of connections left to\n     * pair the other nodes. */\n    float *scores = hmalloc((unsigned long) count * count * sizeof(float));\n    if (!scores) {\n        hfree(distances);\n        hfree(row_avgs);\n        return;\n    }\n\n    /* These weights were obtained manually... No guarantee that they\n     * are optimal. However with these values the algorithm is certainly\n     * better than its greedy version that just attempts to pick the\n     * best pair each time (verified experimentally). 
 */\n    const float W1 = 0.7;  // Weight for immediate distance.\n    const float W2 = 0.3;  // Weight for future potential.\n\n    for (int i = 0; i < count; i++) {\n        for (int j = 0; j < count; j++) {\n            if (i == j) {\n                scores[i*count + j] = -1;  // Invalid pairing.\n                continue;\n            }\n\n            // Check for existing connection between i and j.\n            int already_linked = 0;\n            for (uint32_t k = 0; k < nodes[i]->layers[layer].num_links; k++)\n            {\n                if (nodes[i]->layers[layer].links[k] == nodes[j]) {\n                    scores[i*count + j] = -1;  // Already linked.\n                    already_linked = 1;\n                    break;\n                }\n            }\n            if (already_linked) continue;\n\n            float dist = distances[i*count + j];\n\n            /* Calculate new averages excluding this pair.\n             * Handle the edge case where we might have too few elements.\n             * Note that it would not be very smart to recompute the\n             * average by rescanning the row each time: we can remove the\n             * element and adjust the average without it. */\n            float new_avg_i = 0, new_avg_j = 0;\n            if (count > 2) {\n                new_avg_i = (row_avgs[i] * (count-1) - dist) / (count-2);\n                new_avg_j = (row_avgs[j] * (count-1) - dist) / (count-2);\n            }\n\n            /* Final weighted score: the more similar i,j, the better\n             * the score. The more distant the pairs we lose by\n             * connecting i,j, the better the score. 
 */\n            scores[i*count + j] = W1*(2-dist) + W2*((new_avg_i + new_avg_j)/2);\n        }\n    }\n\n    // Step 5: Pair nodes greedily based on scores.\n    int *used = hmalloc(count*sizeof(int));\n    if (!used) {\n        hfree(distances);\n        hfree(row_avgs);\n        hfree(scores);\n        return;\n    }\n    memset(used,0,count*sizeof(int));\n\n    /* Scan the matrix looking each time for the potential\n     * link with the best score. */\n    while(1) {\n        float max_score = -1;\n        int best_j = -1, best_i = -1;\n\n        // Seek best score i,j values.\n        for (int i = 0; i < count; i++) {\n            if (used[i]) continue;  // Already connected.\n\n            /* No space left? Not possible after a node deletion but makes\n             * this function more future-proof. */\n            if (nodes[i]->layers[layer].num_links >=\n                nodes[i]->layers[layer].max_links) continue;\n\n            for (int j = 0; j < count; j++) {\n                if (i == j) continue; // Same node, skip.\n                if (used[j]) continue; // Already connected.\n                float score = scores[i*count + j];\n                if (score < 0) continue; // Invalid link.\n\n                /* If the target node has space, and its score is better\n                 * than any other seen so far... remember it is the best. */\n                if (score > max_score &&\n                    nodes[j]->layers[layer].num_links <\n                    nodes[j]->layers[layer].max_links)\n                {\n                    // Track the best connection found so far.\n                    max_score = score;\n                    best_j = j;\n                    best_i = i;\n                }\n            }\n        }\n\n        // Possible link found? 
Connect i and j.\n        if (best_j != -1) {\n            debugmsg(\"[%d] linking %d with %d: %f\\n\", layer, (int)best_i, (int)best_j, max_score);\n            // Link i -> j.\n            int link_idx = nodes[best_i]->layers[layer].num_links;\n            nodes[best_i]->layers[layer].links[link_idx] = nodes[best_j];\n            nodes[best_i]->layers[layer].num_links++;\n\n            // Update worst distance if needed.\n            float dist = distances[best_i*count + best_j];\n            hnsw_update_worst_neighbor_on_add(index,nodes[best_i],layer,link_idx,dist);\n\n            // Link j -> i.\n            link_idx = nodes[best_j]->layers[layer].num_links;\n            nodes[best_j]->layers[layer].links[link_idx] = nodes[best_i];\n            nodes[best_j]->layers[layer].num_links++;\n\n            // Update worst distance if needed.\n            hnsw_update_worst_neighbor_on_add(index,nodes[best_j],layer,link_idx,dist);\n\n            // Mark connection as used.\n            used[best_i] = used[best_j] = 1;\n        } else {\n            break; // No more valid connections available.\n        }\n    }\n\n    /* Step 6: Handle remaining unpaired nodes using the standard HNSW\n     * neighbor selection. */\n    for (int i = 0; i < count; i++) {\n        if (used[i]) continue;\n\n        // Skip if node is already at max connections.\n        if (nodes[i]->layers[layer].num_links >=\n            nodes[i]->layers[layer].max_links)\n            continue;\n\n        debugmsg(\"[%d] Force linking %d\\n\", layer, i);\n\n        /* First, try with local nodes as candidates.\n         * Some candidate may have space. */\n        pqueue *candidates = pq_new(count);\n        if (!candidates) continue;\n\n        /* Add all the local nodes having some space as candidates\n         * to be linked with this node. 
*/\n        for (int j = 0; j < count; j++) {\n            if (i != j &&       // Must not be itself.\n            nodes[j]->layers[layer].num_links <     // Must not be full.\n            nodes[j]->layers[layer].max_links)\n            {\n                float dist = distances[i*count + j];\n                pq_push(candidates, nodes[j], dist);\n            }\n        }\n\n        /* Try local candidates first with aggressive = 1.\n         * So we will link only if there is space.\n         * We want one link more than the links we already have. */\n        uint32_t wanted_links = nodes[i]->layers[layer].num_links+1;\n        if (candidates->count > 0) {\n            select_neighbors(index, candidates, nodes[i], layer,\n                wanted_links, 1);\n            debugmsg(\"Final links after attempt with local nodes: %d (wanted: %d)\\n\", (int)nodes[i]->layers[layer].num_links, wanted_links);\n        }\n\n        // If still no connection, search the broader graph.\n        if (nodes[i]->layers[layer].num_links != wanted_links) {\n            debugmsg(\"No force linking possible with local candidates\\n\");\n            pq_free(candidates);\n\n            // Find entry point for target layer by descending through levels.\n            hnswNode *curr_ep = index->enter_point;\n            for (uint32_t lc = index->max_level; lc > layer; lc--) {\n                pqueue *results = search_layer(index, nodes[i], curr_ep, 1, lc, 0);\n                if (results) {\n                    if (results->count > 0) {\n                        curr_ep = pq_get_node(results,0);\n                    }\n                    pq_free(results);\n                }\n            }\n\n            if (curr_ep) {\n                /* Search this layer for candidates.\n                 * Use the default EF_C in this case, since it's not an\n                 * \"insert\" operation, and we don't know the user\n                 * specified \"EF\". 
 */\n                candidates = search_layer(index, nodes[i], curr_ep, HNSW_EF_C, layer, 0);\n                if (candidates) {\n                    /* Try to connect with aggressiveness proportional to the\n                     * node linking condition. */\n                    int aggressiveness =\n                        (nodes[i]->layers[layer].num_links > index->M / 2)\n                            ? 1 : 2;\n                    select_neighbors(index, candidates, nodes[i], layer,\n                                     wanted_links, aggressiveness);\n                    debugmsg(\"Final links with broader search: %d (wanted: %d)\\n\", (int)nodes[i]->layers[layer].num_links, wanted_links);\n                    pq_free(candidates);\n                }\n            }\n        } else {\n            pq_free(candidates);\n        }\n    }\n\n    // Cleanup.\n    hfree(distances);\n    hfree(row_avgs);\n    hfree(scores);\n    hfree(used);\n}\n\n/* This is a helper function to support node deletion.\n * Its goal is just to:\n *\n * 1. Remove the node from the bidirectional links of neighbors in the graph.\n * 2. Remove the node from the linked list of nodes.\n * 3. Fix the entry point in the graph. We just select one of the neighbors\n *    of the deleted node at a lower level. If none is found, we do\n *    a full scan.\n * 4. The node itself and its aux value field are NOT freed. It's up to the\n *    caller to do it, by using hnsw_node_free().\n * 5. The node's associated value (node->value) is NOT freed.\n *\n * Why doesn't this function free the node? Because during node updates it\n * could be a good idea to reuse the node allocation for different reasons\n * (currently not implemented).\n * In general it is more future-proof to be able to reuse the node if\n * needed. Right now this library reuses the node only when links are\n * not touched (see hnsw_update() for more information). 
 */\nvoid hnsw_unlink_node(HNSW *index, hnswNode *node) {\n    if (!index || !node) return;\n\n    index->version++; // This node may be missing in an already compiled list\n                      // of neighbors. Make optimistic concurrent inserts fail.\n\n    /* Remove all bidirectional links at each level.\n     * Note that in this implementation all the\n     * links are guaranteed to be bidirectional. */\n\n    /* For each level of the deleted node... */\n    for (uint32_t level = 0; level <= node->level; level++) {\n        /* For each linked node of the deleted node... */\n        for (uint32_t i = 0; i < node->layers[level].num_links; i++) {\n            hnswNode *linked = node->layers[level].links[i];\n            /* Find and remove the backlink in the linked node. */\n            for (uint32_t j = 0; j < linked->layers[level].num_links; j++) {\n                if (linked->layers[level].links[j] == node) {\n                    /* Remove by shifting remaining links left. */\n                    memmove(&linked->layers[level].links[j],\n                           &linked->layers[level].links[j + 1],\n                           (linked->layers[level].num_links - j - 1) * sizeof(hnswNode*));\n                    linked->layers[level].num_links--;\n                    hnsw_update_worst_neighbor_on_remove(index,linked,level,j);\n                    break;\n                }\n            }\n        }\n    }\n\n    /* Update cursors pointing at this element. */\n    if (index->cursors) hnsw_cursor_element_deleted(index,node);\n\n    /* Update the previous node's next pointer. */\n    if (node->prev) {\n        node->prev->next = node->next;\n    } else {\n        /* If there's no previous node, this is the head. */\n        index->head = node->next;\n    }\n\n    /* Update the next node's prev pointer. */\n    if (node->next) node->next->prev = node->prev;\n\n    /* Update node count. 
 */\n    index->node_count--;\n\n    /* If this node was the enter_point, we need to update it. */\n    if (node == index->enter_point) {\n        /* Reset entry point - we'll find a new one (unless the HNSW is\n         * now empty). */\n        index->enter_point = NULL;\n        index->max_level = 0;\n\n        /* Step 1: Try to find a replacement by scanning levels\n         * from top to bottom. Under normal conditions, if there is\n         * any other node at the same level, we have a link. Anyway\n         * we descend levels to find a neighbor at the highest level\n         * possible. */\n        for (int level = node->level; level >= 0; level--) {\n            if (node->layers[level].num_links > 0) {\n                index->enter_point = node->layers[level].links[0];\n                break;\n            }\n        }\n\n        /* Step 2: If no links were found at any level, do a full scan.\n         * This should never happen in practice if the HNSW is not\n         * empty. */\n        if (!index->enter_point) {\n            uint32_t new_max_level = 0;\n            hnswNode *current = index->head;\n\n            while (current) {\n                if (current != node && current->level >= new_max_level) {\n                    new_max_level = current->level;\n                    index->enter_point = current;\n                }\n                current = current->next;\n            }\n        }\n\n        /* Update max_level. 
 */\n        if (index->enter_point)\n            index->max_level = index->enter_point->level;\n    }\n\n    /* Clear the node's links but don't free the node itself. */\n    node->prev = node->next = NULL;\n}\n\n/* Higher level API doing the actual work of hnsw_unlink_node() +\n * hnsw_reconnect_nodes(). This will take the write lock, delete the node,\n * free it, reconnect the node's neighbors among themselves, and unlock\n * again. If the free_value function pointer is not NULL, then the function\n * provided is used to free node->value.\n *\n * The function returns 0 on error (inability to acquire the lock), otherwise\n * 1 is returned. */\nint hnsw_delete_node(HNSW *index, hnswNode *node, void(*free_value)(void*value)) {\n    if (pthread_rwlock_wrlock(&index->global_lock) != 0) return 0;\n    hnsw_unlink_node(index,node);\n    if (free_value && node->value) free_value(node->value);\n\n    /* Relink all the nodes orphaned by this node's removal.\n     * Do it for all the levels. */\n    for (unsigned int j = 0; j <= node->level; j++) {\n        hnsw_reconnect_nodes(index, node->layers[j].links,\n            node->layers[j].num_links, j);\n    }\n    hnsw_node_free(node);\n    pthread_rwlock_unlock(&index->global_lock);\n    return 1;\n}\n\n/* ============================ Threaded API ================================\n * Concurrent readers should use the following API to get a slot assigned\n * (and a lock, too), do their read-only call, and unlock the slot.\n *\n * There is a reason why read operations don't implement transparent\n * locking directly on behalf of the user: when we return a result set\n * with hnsw_search(), we report a set of nodes. The caller will do something\n * with the nodes and the associated values, so the unlocking of the\n * slot should happen AFTER the result was already used, otherwise we may\n * have changes to the HNSW nodes as the result is being accessed. */\n\n/* Try to acquire a read slot. 
Returns the slot number (0 to HNSW_MAX_THREADS-1)\n * on success, -1 on error (pthread mutex errors). */\nint hnsw_acquire_read_slot(HNSW *index) {\n    /* First try a non-blocking approach on all slots. */\n    for (uint32_t i = 0; i < HNSW_MAX_THREADS; i++) {\n        if (pthread_mutex_trylock(&index->slot_locks[i]) == 0) {\n            if (pthread_rwlock_rdlock(&index->global_lock) != 0) {\n                pthread_mutex_unlock(&index->slot_locks[i]);\n                return -1;\n            }\n            return i;\n        }\n    }\n\n    /* All trylock attempts failed, use atomic increment to select slot. */\n    uint32_t slot = index->next_slot++ % HNSW_MAX_THREADS;\n\n    /* Try to lock the selected slot. */\n    if (pthread_mutex_lock(&index->slot_locks[slot]) != 0) return -1;\n\n    /* Get read lock. */\n    if (pthread_rwlock_rdlock(&index->global_lock) != 0) {\n        pthread_mutex_unlock(&index->slot_locks[slot]);\n        return -1;\n    }\n\n    return slot;\n}\n\n/* Release a previously acquired read slot: note that it is important that\n * nodes returned by hnsw_search() are accessed while the read lock is\n * still active, to be sure that nodes are not freed. */\nvoid hnsw_release_read_slot(HNSW *index, int slot) {\n    if (slot < 0 || slot >= HNSW_MAX_THREADS) return;\n    pthread_rwlock_unlock(&index->global_lock);\n    pthread_mutex_unlock(&index->slot_locks[slot]);\n}\n\n/* ============================ Nodes insertion =============================\n * We have an optimistic API separating the read-only candidates search\n * and the write side (actual node insertion). We internally also use\n * this API to provide the plain hnsw_insert() function for code unification. */\n\nstruct InsertContext {\n    pqueue *level_queues[HNSW_MAX_LEVEL]; /* Candidates for each level. */\n    hnswNode *node;         /* Pre-allocated node ready for insertion */\n    uint64_t version;       /* Index version at preparation time. 
This is used\n                             * for CAS-like locking during change commit. */\n};\n\n/* Optimistic insertion API.\n *\n * WARNING: Note that this is an internal function: users should call\n * hnsw_prepare_insert() instead.\n *\n * This is how it works: you use hnsw_prepare_insert() and it will return\n * a context where good candidate neighbors are already pre-selected.\n * This step only uses read locks.\n *\n * Then finally you try to actually commit the new node with\n * hnsw_try_commit_insert(): this time we will require a write lock, but\n * for less time than would otherwise be needed when using\n * hnsw_insert() directly. When you try to commit the write, if no node was\n * deleted in the meantime, your operation will succeed, otherwise it will\n * fail, and you should just use the hnsw_insert() API, since there is\n * contention.\n *\n * See hnsw_node_new() for information about 'vector' and 'qvector'\n * arguments, and which one to pass. */\nInsertContext *hnsw_prepare_insert_nolock(HNSW *index, const float *vector,\n                const int8_t *qvector, float qrange, uint64_t id,\n                int slot, int ef)\n{\n    InsertContext *ctx = hmalloc(sizeof(*ctx));\n    if (!ctx) return NULL;\n\n    memset(ctx, 0, sizeof(*ctx));\n    ctx->version = index->version;\n\n    /* Create a new node that we may be able to insert into the\n     * graph later, when calling the commit function. */\n    uint32_t level = random_level();\n    ctx->node = hnsw_node_new(index, id, vector, qvector, qrange, level, 1);\n    if (!ctx->node) {\n        hfree(ctx);\n        return NULL;\n    }\n\n    hnswNode *curr_ep = index->enter_point;\n\n    /* Empty graph, no need to collect candidates. */\n    if (curr_ep == NULL) return ctx;\n\n    /* Phase 1: Find a good entry point on the highest level of the new\n     * node we are going to insert. 
*/\n    for (unsigned int lc = index->max_level; lc > level; lc--) {\n        pqueue *results = search_layer(index, ctx->node, curr_ep, 1, lc, slot);\n\n        if (results) {\n            if (results->count > 0) curr_ep = pq_get_node(results,0);\n            pq_free(results);\n        }\n    }\n\n    /* Phase 2: Collect a set of potential connections for each layer of\n     * the new node. */\n    for (int lc = MIN(level, index->max_level); lc >= 0; lc--) {\n        pqueue *candidates =\n            search_layer(index, ctx->node, curr_ep, ef, lc, slot);\n\n        if (!candidates) continue;\n        curr_ep = (candidates->count > 0) ? pq_get_node(candidates,0) : curr_ep;\n        ctx->level_queues[lc] = candidates;\n    }\n\n    return ctx;\n}\n\n/* External API for hnsw_prepare_insert_nolock(), handling locking. */\nInsertContext *hnsw_prepare_insert(HNSW *index, const float *vector,\n                const int8_t *qvector, float qrange, uint64_t id,\n                int ef)\n{\n    InsertContext *ctx;\n    int slot = hnsw_acquire_read_slot(index);\n    ctx = hnsw_prepare_insert_nolock(index,vector,qvector,qrange,id,slot,ef);\n    hnsw_release_read_slot(index,slot);\n    return ctx;\n}\n\n/* Free an insert context and all its resources. */\nvoid hnsw_free_insert_context(InsertContext *ctx) {\n    if (!ctx) return;\n    for (uint32_t i = 0; i < HNSW_MAX_LEVEL; i++) {\n        if (ctx->level_queues[i]) pq_free(ctx->level_queues[i]);\n    }\n    if (ctx->node) hnsw_node_free(ctx->node);\n    hfree(ctx);\n}\n\n/* Commit a prepared insert operation. This function is a low level API that\n * should not be called by the user. See instead hnsw_try_commit_insert(), that\n * will perform the CAS check and acquire the write lock.\n *\n * See the top comment in hnsw_prepare_insert() for more information\n * on the optimistic insertion API.\n *\n * This function can't fail and always returns the pointer to the\n * just inserted node. 
Out of memory is not possible since no critical\n * allocation is ever performed in this code path: we populate links\n * on already allocated nodes. */\nhnswNode *hnsw_commit_insert_nolock(HNSW *index, InsertContext *ctx, void *value) {\n    hnswNode *node = ctx->node;\n    node->value = value;\n\n    /* Handle first node case. */\n    if (index->enter_point == NULL) {\n        index->version++; // First node, make concurrent inserts fail.\n        index->enter_point = node;\n        index->max_level = node->level;\n        hnsw_add_node(index, node);\n        ctx->node = NULL; // So hnsw_free_insert_context() will not free it.\n        hnsw_free_insert_context(ctx);\n        return node;\n    }\n\n    /* Connect the node with near neighbors at each level. */\n    for (int lc = MIN(node->level,index->max_level); lc >= 0; lc--) {\n        if (ctx->level_queues[lc] == NULL) continue;\n\n        /* Try to provide index->M connections to our node. The call\n         * is not guaranteed to be able to provide all the links we would\n         * like to have for the new node: they must be bi-directional, obey\n         * certain quality checks, and so forth, so later there are further\n         * calls to force the hand a bit if needed.\n         *\n         * Let's start with aggressiveness = 0. */\n        select_neighbors(index, ctx->level_queues[lc], node, lc, index->M, 0);\n\n        /* Layer 0 and too few connections? Let's be more aggressive. */\n        if (lc == 0 && node->layers[0].num_links < index->M/2) {\n            select_neighbors(index, ctx->level_queues[lc], node, lc,\n                             index->M, 1);\n\n            /* Still too few connections? Let's go to\n             * aggressiveness level '2' in linking strategy. 
*/\n            if (node->layers[0].num_links < index->M/4) {\n                select_neighbors(index, ctx->level_queues[lc], node, lc,\n                                 index->M/4, 2);\n            }\n        }\n    }\n\n    /* If new node level is higher than current max, update entry point. */\n    if (node->level > index->max_level) {\n        index->version++; // Entry point changed, make concurrent inserts fail.\n        index->enter_point = node;\n        index->max_level = node->level;\n    }\n\n    /* Add node to the linked list. */\n    hnsw_add_node(index, node);\n    ctx->node = NULL; // So hnsw_free_insert_context() will not free the node.\n    hnsw_free_insert_context(ctx);\n    return node;\n}\n\n/* If the context obtained with hnsw_prepare_insert() is still valid\n * (nodes not deleted in the meantime) then add the new node to the HNSW\n * index and return its pointer. Otherwise NULL is returned and the operation\n * should be either performed with the blocking API hnsw_insert() or attempted\n * again. */\nhnswNode *hnsw_try_commit_insert(HNSW *index, InsertContext *ctx, void *value) {\n    /* Check if the version changed since preparation. Note that we\n     * should access index->version under the write lock in order to\n     * be sure we can safely commit the write: this is just a fast-path\n     * in order to return ASAP without acquiring the write lock in case\n     * the version changed. */\n    if (ctx->version != index->version) {\n        hnsw_free_insert_context(ctx);\n        return NULL;\n    }\n\n    /* Try to acquire write lock. */\n    if (pthread_rwlock_wrlock(&index->global_lock) != 0) {\n        hnsw_free_insert_context(ctx);\n        return NULL;\n    }\n\n    /* Check version again under write lock. 
 */\n    if (ctx->version != index->version) {\n        pthread_rwlock_unlock(&index->global_lock);\n        hnsw_free_insert_context(ctx);\n        return NULL;\n    }\n\n    /* Commit the change: note that it's up to hnsw_commit_insert_nolock()\n     * to free the insertion context. */\n    hnswNode *node = hnsw_commit_insert_nolock(index, ctx, value);\n\n    /* Release the write lock. */\n    pthread_rwlock_unlock(&index->global_lock);\n    return node;\n}\n\n/* Insert a new element into the graph.\n * See hnsw_node_new() for information about 'vector' and 'qvector'\n * arguments, and which one to pass.\n *\n * Return NULL on out of memory during insert. Otherwise the newly\n * inserted node pointer is returned. */\nhnswNode *hnsw_insert(HNSW *index, const float *vector, const int8_t *qvector, float qrange, uint64_t id, void *value, int ef) {\n    /* Write lock. We acquire the write lock even for the prepare()\n     * operation (that is a read-only operation) since we want this function\n     * not to fail in the check-and-set stage of commit().\n     *\n     * Basically here we are using the optimistic API in a non-optimistic\n     * way in order to have a single insertion code path in the\n     * implementation. */\n    if (pthread_rwlock_wrlock(&index->global_lock) != 0) return NULL;\n\n    // Prepare the insertion - note we pass slot 0 since we're single threaded.\n    InsertContext *ctx = hnsw_prepare_insert_nolock(index, vector, qvector,\n                                                   qrange, id, 0, ef);\n    if (!ctx) {\n        pthread_rwlock_unlock(&index->global_lock);\n        return NULL;\n    }\n\n    // Commit the prepared insertion without version checking.\n    hnswNode *node = hnsw_commit_insert_nolock(index, ctx, value);\n\n    // Release write lock and return our node pointer.\n    pthread_rwlock_unlock(&index->global_lock);\n    return node;\n}\n\n/* Helper function for the qsort() call in hnsw_should_reuse_node():\n * note it sorts in descending order, so the largest (worst) distances\n * come first. 
 */\nstatic int compare_floats(const float *a, const float *b) {\n    if (*a < *b) return 1;\n    if (*a > *b) return -1;\n    return 0;\n}\n\n/* This function determines if a node can be reused with a new vector by:\n *\n * 1. Computing the average of the worst 25% of current distances.\n * 2. Checking if at least 50% of the new distances stay below this threshold.\n * 3. Requiring a minimum number of links for the check to be meaningful.\n *\n * This check is useful when we want to just update a node that already\n * exists in the graph. Often the new vector is a learned embedding generated\n * by some model, and the embedding represents some document that perhaps\n * changed just slightly compared to the past, so the new embedding will\n * be very nearby. We need a way to determine if the current node's\n * neighbors (practically speaking, its location in the graph) are good\n * enough even with the new vector.\n *\n * XXX: this function needs improvements: successive updates to the same\n * node with more and more distant vectors will make the node drift away\n * from its neighbors. One of the additional metrics used could be\n * neighbor-to-neighbor distance, that represents a more absolute check\n * of fit for the new vector. */\nint hnsw_should_reuse_node(HNSW *index, hnswNode *node, int is_normalized, const float *new_vector) {\n    /* Step 1: Not enough links? Advise against reuse. */\n    const uint32_t min_links_for_reuse = 4;\n    uint32_t layer0_connections = node->layers[0].num_links;\n    if (layer0_connections < min_links_for_reuse) return 0;\n\n    /* Step 2: get all current distances and run our heuristic. 
 */\n    float *old_distances = hmalloc(sizeof(float) * layer0_connections);\n    if (!old_distances) return 0;\n\n    // Temporary node with the new vector, to simplify the next logic.\n    hnswNode tmp_node;\n    if (hnsw_init_tmp_node(index,&tmp_node,is_normalized,new_vector) == 0) {\n        hfree(old_distances);\n        return 0;\n    }\n\n    /* Get the old distances and sort them to access the 25% worst\n     * (biggest) ones. */\n    for (uint32_t i = 0; i < layer0_connections; i++) {\n        old_distances[i] = hnsw_distance(index, node, node->layers[0].links[i]);\n    }\n    qsort(old_distances, layer0_connections, sizeof(float),\n          (int (*)(const void*, const void*))(&compare_floats));\n\n    uint32_t count = (layer0_connections+3)/4; // ~25%, rounded to larger int.\n    if (count > layer0_connections) count = layer0_connections; // Futureproof.\n    float worst_avg = 0;\n\n    // Compute the average of the 25% worst distances.\n    for (uint32_t i = 0; i < count; i++) worst_avg += old_distances[i];\n    worst_avg /= count;\n    hfree(old_distances);\n\n    // Count how many new distances stay below the threshold.\n    uint32_t good_distances = 0;\n    for (uint32_t i = 0; i < layer0_connections; i++) {\n        float new_dist = hnsw_distance(index, &tmp_node, node->layers[0].links[i]);\n        if (new_dist <= worst_avg) good_distances++;\n    }\n    hnsw_free_tmp_node(&tmp_node,new_vector);\n\n    /* At least 50% of the nodes should pass our quality test, for the\n     * node to be reused. */\n    return good_distances >= layer0_connections/2;\n}\n\n/**\n * Return a random node from the HNSW graph.\n *\n * This function performs a random descent from the entry point, then\n * a random walk along level 0 connections. 
It takes a number of level 0\n * random steps proportional to log2(N) to get adequate mixing.\n */\n\nhnswNode *hnsw_random_node(HNSW *index, int slot) {\n    if (index->node_count == 0 || index->enter_point == NULL)\n        return NULL;\n\n    (void)slot; // Unused, but we need the caller to acquire the lock.\n\n    /* First phase: descend from max level to level 0 taking random paths.\n     * Note that we don't need a more conservative log^2(N) steps for\n     * proper mixing, since we already descend to a random cluster here. */\n    hnswNode *current = index->enter_point;\n    for (uint32_t level = index->max_level; level > 0; level--) {\n        /* If the current node doesn't have this level or has no links,\n         * continue to the lower level. */\n        if (current->level < level || current->layers[level].num_links == 0)\n            continue;\n\n        /* Choose a random neighbor at this level. */\n        uint32_t rand_neighbor = rand() % current->layers[level].num_links;\n        current = current->layers[level].links[rand_neighbor];\n    }\n\n    /* Second phase: at level 0, take log(N) * c random steps. */\n    const int c = 3; // Multiplier for more thorough exploration.\n    double logN = log2(index->node_count + 1);\n    uint32_t num_walks = (uint32_t)(logN * c);\n\n    /* Avoid the ping-pong effect: imagine there are just two nodes and\n     * the number of walks selected is even. We will always select the\n     * first element of the graph; conversely, if it is odd, we will always\n     * select the other element. One way to add more selection randomness is\n     * to randomly add '1' or '0' to the number of walks to perform. 
*/\n    num_walks += rand() & 1;\n\n    // Perform random walk at level 0.\n    for (uint32_t i = 0; i < num_walks; i++) {\n        if (current->layers[0].num_links == 0) return current;\n\n        // Choose random neighbor.\n        uint32_t rand_neighbor = rand() % current->layers[0].num_links;\n        current = current->layers[0].links[rand_neighbor];\n    }\n    return current;\n}\n\n/* ============================= Serialization ==============================\n *\n * TO SERIALIZE\n * ============\n *\n * To serialize on disk, you need to persist the vector dimension, number\n * of elements, and the quantization type index->quant_type. These are\n * global values for the whole index.\n *\n * Then, to serialize each node:\n *\n * call hnsw_serialize_node() with each node you find in the linked list\n * of nodes, starting at index->head (each node has a next pointer).\n * The function will return an hnswSerNode structure, you will need\n * to store the following on disk (for each node):\n *\n * - The sernode->vector data, that is sernode->vector_size bytes.\n * - The sernode->params array, that points to an array of uint64_t\n *   integers. There are sernode->params_count total items. 
These\n *   parameters contain everything there is to know about your node: how\n *   many levels it has, its ID, the list of neighbors for each level (as node\n *   IDs), and so forth.\n *\n * You need to save your own node->value in some way as well, but it already\n * belongs to the user of the API, since, for this library, it's just a pointer,\n * so the user should know how to serialize its private data.\n *\n * RELOADING FROM DISK / NET\n * =========================\n *\n * When reloading nodes, you first load the index vector dimension and\n * quantization type, and create the index with:\n *\n * HNSW *hnsw_new(uint32_t vector_dim, uint32_t quant_type);\n *\n * Then you load back, for each node (you stored how many nodes you had),\n * the vector and the params array / count.\n * You also load the value associated with your node.\n *\n * At this point you add back the loaded elements into the index with:\n *\n * hnsw_insert_serialized(HNSW *index, void *vector, uint64_t *params,\n *                        uint32_t params_len, void *value);\n *\n * Once you have added all the nodes back, you need to resolve the pointers\n * (since so far the links just reference node IDs), so you call:\n *\n * hnsw_deserialize_index(index, salt0, salt1);\n *\n * The index is now ready to be used as if it had always been in memory.\n *\n * DESIGN NOTES\n * ============\n *\n * Why doesn't this API just give you a binary blob to save? Because in\n * many systems (and in Redis itself) integers / floats can be saved with\n * more interesting encodings than just storing a 64 bit value. Many vector\n * indexes will be small, and their IDs will be small numbers, so the storage\n * system can exploit that and use less disk space, less network bandwidth\n * and so forth.\n *\n * How is the data stored in these arrays of numbers? Oh well, we have\n * things that are obviously numbers like node ID, number of levels for the\n * node and so forth. 
Also, each of our nodes has a unique incremental ID,\n * so we can store a node's set of links in terms of linked node IDs. This\n * data is put directly in the loaded node pointer space! We just cast the\n * integer to the pointer (so THIS IS NOT SAFE for 32 bit systems). Then\n * we want to translate such IDs into pointers. To do that, we build a\n * hash table, then scan all the nodes again and fix all the links converting\n * the ID to the pointer. */\n\n/* History of serialization versions:\n * version 0: the first implementation, lacking worst node id/info.\n * version 1: includes worst link id/info. */\n#define HNSW_SERIALIZATION_VERSION 1\n\n/* This is a special worst link index that is set when loading a serialized\n * node with version 0 (this version of the serialization lacked explicit\n * information about the worst link index/distance). This way, later, the\n * function that fixes a deserialized index will know to compute the worst\n * index info at runtime. */\n#define HNSW_SER_WORSTLINK_MISSING UINT32_MAX\n\n/* Return the serialized node information as specified in the top comment\n * above. Note that the returned information remains valid only as long as\n * the node provided is not deleted or modified, so this function should be\n * called when there are no concurrent writes.\n *\n * The function hnsw_free_serialized_node() should be called in order to\n * free the result of this function. */\nhnswSerNode *hnsw_serialize_node(HNSW *index, hnswNode *node) {\n    /* The first step is calculating the number of uint64_t parameters\n     * that we need in order to serialize the node. 
*/\n    uint32_t num_params = 0;\n    num_params += 2;    // node ID, number of layers.\n    for (uint32_t i = 0; i <= node->level; i++) {\n        num_params += 2; // max_links and num_links info for this layer.\n        num_params += node->layers[i].num_links; // The IDs of linked nodes.\n        num_params += 1; // worst link id/distance parameter.\n    }\n\n    /* We use another 64bit value to store two floats that are about\n     * the vector: l2 and quantization range (that is only used if the\n     * vector is quantized). */\n    num_params++;\n\n    /* Allocate the return object and the parameters array. */\n    hnswSerNode *sn = hmalloc(sizeof(hnswSerNode));\n    if (sn == NULL) return NULL;\n    sn->params = hmalloc(sizeof(uint64_t)*num_params);\n    if (sn->params == NULL) {\n        hfree(sn);\n        return NULL;\n    }\n\n    /* Fill data. */\n    sn->params_count = num_params;\n    sn->vector = node->vector;\n    sn->vector_size = hnsw_quants_bytes(index);\n\n    uint32_t param_idx = 0;\n    sn->params[param_idx++] = node->id;\n    /* The second parameter contains information about the serialization\n     * version of this node, the node level and some unused field:\n     *\n     * +--------+--------+--------+--------+\n     * |VVVVVVVV|........|........|LLLLLLLL|\n     * +--------+--------+--------+--------+\n     *\n     * V is the version, 8 bits.\n     * L is the node level, 8 bits (but actually 16 is the max so far).\n     * The middle two bytes are reserved for future uses. 
*/\n    sn->params[param_idx] = node->level & 0xff;\n    sn->params[param_idx] |= HNSW_SERIALIZATION_VERSION << 24;\n    param_idx++;\n    for (uint32_t i = 0; i <= node->level; i++) {\n        sn->params[param_idx++] = node->layers[i].num_links;\n        sn->params[param_idx++] = node->layers[i].max_links;\n        for (uint32_t j = 0; j < node->layers[i].num_links; j++) {\n            sn->params[param_idx++] = node->layers[i].links[j]->id;\n        }\n        /* Since version 1: pack and store worst_idx and worst_distance. */\n        uint32_t worst_distance_bits;\n        memcpy(&worst_distance_bits, &node->layers[i].worst_distance,\n               sizeof(float));\n        uint64_t wi =\n            (((uint64_t)worst_distance_bits) << 32) | node->layers[i].worst_idx;\n        sn->params[param_idx++] = wi;\n    }\n\n    /* Store l2 and range as uint32_t, in a way that is endian-safe.\n     * Note that on big endian archs both are reversed: the integers and\n     * also the bytes of the floats, so they will match. */\n    uint64_t l2_and_range;\n    uint32_t l2_bits, range_bits;\n    memcpy(&l2_bits,&node->l2,sizeof(float));\n    memcpy(&range_bits,&node->quants_range,sizeof(float));\n    l2_and_range = ((uint64_t)range_bits<<32) | l2_bits;\n\n    sn->params[param_idx++] = l2_and_range;\n\n    /* Better safe than sorry: */\n    assert(param_idx == num_params);\n    return sn;\n}\n\n/* This is needed in order to free the structure returned by\n * hnsw_serialize_node(). */\nvoid hnsw_free_serialized_node(hnswSerNode *sn) {\n    hfree(sn->params);\n    hfree(sn);\n}\n\n/* Load a serialized node. See the top comment in this section of code\n * for the documentation about how to use this.\n *\n * The function returns NULL both on out of memory and when the remaining\n * parameters length does not match the number of links or other items\n * to load. 
*/\nhnswNode *hnsw_insert_serialized(HNSW *index, void *vector, uint64_t *params, uint32_t params_len, void *value)\n{\n    if (params_len < 2) return NULL;\n\n    uint64_t id = params[0];\n    /* Check the node serialization function for the specific layout\n     * of the params[1] fields. */\n    uint32_t level = params[1] & 0xff;                  // Node level.\n    uint32_t version = (params[1] & 0xff000000) >> 24;  // Format version.\n\n    if (version > HNSW_SERIALIZATION_VERSION) return NULL;\n    int has_worst_link_info = version > 0;\n\n    /* Keep track of the maximum ID seen while loading. */\n    if (id >= index->last_id) index->last_id = id;\n\n    /* Create node, passing vector data directly based on quantization type. */\n    hnswNode *node;\n    if (index->quant_type != HNSW_QUANT_NONE) {\n        node = hnsw_node_new(index, id, NULL, vector, 0, level, 0);\n    } else {\n        node = hnsw_node_new(index, id, vector, NULL, 0, level, 0);\n    }\n    if (!node) return NULL;\n\n    /* Load params array into the node. */\n    uint32_t param_idx = 2;\n    for (uint32_t i = 0; i <= level; i++) {\n        /* Sanity check. */\n        if (param_idx + 2 + has_worst_link_info > params_len) {\n            hnsw_node_free(node);\n            return NULL;\n        }\n\n        uint32_t num_links = params[param_idx++];\n        uint32_t max_links = params[param_idx++];\n\n        /* Sanity check: num_links should not exceed max_links, and\n         * in general should be a reasonable amount. */\n        if (num_links > max_links || max_links > HNSW_MAX_M*4) {\n            hnsw_node_free(node);\n            return NULL;\n        }\n\n        /* If max_links is larger than the current allocation, reallocate.\n         * It could happen in select_neighbors() that we over-allocate the\n         * node under conditions that are very unlikely to happen. 
*/\n        if (max_links > node->layers[i].max_links) {\n            hnswNode **new_links = hrealloc(node->layers[i].links,\n                                         sizeof(hnswNode*) * max_links);\n            if (!new_links) {\n                hnsw_node_free(node);\n                return NULL;\n            }\n            node->layers[i].links = new_links;\n            node->layers[i].max_links = max_links;\n        }\n        node->layers[i].num_links = num_links;\n\n        /* Sanity check. */\n        if (param_idx + num_links + has_worst_link_info > params_len) {\n            hnsw_node_free(node);\n            return NULL;\n        }\n\n        /* Fill links for this layer with the IDs. Note that this\n         * is not going to work on 32 bit systems. Deleting / adding-back\n         * nodes can produce IDs larger than 2^32-1 even if we can never\n         * fit more than 2^32 nodes in a 32 bit system. */\n        for (uint32_t j = 0; j < num_links; j++)\n            node->layers[i].links[j] = (hnswNode*)params[param_idx++];\n\n        if (has_worst_link_info) {\n            uint64_t wi = params[param_idx++];\n            uint32_t worst_idx = wi & 0xffffffff;\n            uint32_t worst_distance_bits = wi >> 32;\n            float worst_distance;\n            memcpy(&worst_distance,&worst_distance_bits,sizeof(float));\n            node->layers[i].worst_idx = worst_idx;\n            node->layers[i].worst_distance = worst_distance;\n\n            // Sanity check the worst link index range.\n            if (node->layers[i].num_links > 0 &&\n                node->layers[i].worst_idx >= node->layers[i].num_links)\n            {\n                hnsw_node_free(node);\n                return NULL;\n            }\n        } else {\n            node->layers[i].worst_idx = HNSW_SER_WORSTLINK_MISSING;\n            node->layers[i].worst_distance = 0;\n        }\n    }\n\n    /* Get l2 and quantization range. 
*/\n    if (param_idx >= params_len) {\n        hnsw_node_free(node);\n        return NULL;\n    }\n\n    /* Load l2 and range packed into a uint64_t in an endian-safe way. */\n    uint64_t l2_and_range = params[param_idx];\n    uint32_t l2_bits, range_bits;\n    l2_bits = l2_and_range & 0xffffffff;\n    range_bits = l2_and_range >> 32;\n    memcpy(&node->l2, &l2_bits, sizeof(float));\n    memcpy(&node->quants_range, &range_bits, sizeof(float));\n\n    node->value = value;\n    hnsw_add_node(index, node);\n\n    /* Keep track of the highest node level and set the entry point to the\n     * greatest level node seen so far: thanks to this check we don't\n     * need to remember what our entry point was during serialization. */\n    if (index->enter_point == NULL || level > index->max_level) {\n        index->max_level = level;\n        index->enter_point = node;\n    }\n    return node;\n}\n\n/* Integer hashing, used by hnsw_deserialize_index().\n * MurmurHash3's 64-bit finalizer function. */\nuint64_t hnsw_hash_node_id(uint64_t id) {\n    id ^= id >> 33;\n    id *= 0xff51afd7ed558ccd;\n    id ^= id >> 33;\n    id *= 0xc4ceb9fe1a85ec53;\n    id ^= id >> 33;\n    return id;\n}\n\n/* Helper for duplicated link detection in hnsw_deserialize_index(). */\nstatic int qsort_compare_pointers(const void *aptr, const void *bptr) {\n    uintptr_t a = *((uintptr_t*)aptr);\n    uintptr_t b = *((uintptr_t*)bptr);\n    if (a > b) return 1;\n    if (a < b) return -1;\n    return 0;\n}\n\n/* Fix pointers of neighbor nodes: after loading the serialized nodes, the\n * neighbor links are just IDs (cast to pointers), instead of the actual\n * pointers. We need to resolve the IDs into pointers.\n *\n * The two integers salt0 and salt1 are used to make the internal state\n * of the function unguessable to an external attacker, in order to protect\n * against corruptions. 
They should be two random numbers taken from /dev/urandom if possible;\n * otherwise they can be just 0,0 if the application is not security critical\n * and never processes untrusted inputs.\n *\n * Return 0 on error (out of memory or some ID that can't be resolved), 1 on\n * success. */\nint hnsw_deserialize_index(HNSW *index, uint64_t salt0, uint64_t salt1) {\n    /* We will use simple linear probing, so over-allocating is a good\n     * idea: anyway this flat array of pointers will consume a fraction\n     * of the memory of the loaded index. */\n    uint64_t min_size = index->node_count*2;\n    uint64_t table_size = 1;\n    while(table_size < min_size) table_size <<= 1;\n\n    hnswNode **table = hmalloc(sizeof(hnswNode*) * table_size);\n    if (table == NULL) return 0;\n    memset(table,0,sizeof(hnswNode*) * table_size);\n\n    /* First pass: populate the ID -> pointer hash table. */\n    hnswNode *node = index->head;\n    while(node) {\n        uint64_t bucket = hnsw_hash_node_id(node->id) & (table_size-1);\n        for (uint64_t j = 0; j < table_size; j++) {\n            if (table[bucket] == NULL) {\n                table[bucket] = node;\n                break;\n            }\n            bucket = (bucket+1) & (table_size-1);\n        }\n        node = node->next;\n    }\n\n    /* Second pass: fix the pointers of all the neighbor links.\n     * As we scan and fix the links, we also compute an accumulator\n     * register that is used in order to guarantee that all\n     * the links are reciprocal.\n     *\n     * This is how it works: we hash (using a strong hash function) the\n     * following key for each link that we see from A to B (or vice versa):\n     *\n     *      hash(salt || A || B || link-level)\n     *\n     * We always sort A and B, so the same link from A to B and from B to A\n     * will hash the same. 
Then we xor the result into the 128 bit accumulator.\n     * If each link has its own backlink, the accumulator is guaranteed to\n     * be zero at the end.\n     *\n     * Collisions are extremely unlikely to happen, and an external attacker\n     * can't easily control the hash function output, since the salt is\n     * unknown, and they would also need to control the pointers.\n     *\n     * This algorithm is O(1) for each link so it is basically free for\n     * us, as we scan the list of nodes, and runs in constant and very\n     * small memory. */\n    uint64_t accumulator[2] = {0,0};\n\n    node = index->head; // Rewind.\n    while(node) {\n        uint64_t this_node_id = node->id;\n        for (uint32_t i = 0; i <= node->level; i++) {\n            // Check if there are duplicated links: those are\n            // also corruptions of the on-disk serialization format.\n            if (node->layers[i].num_links > 0) {\n                qsort(node->layers[i].links, node->layers[i].num_links,\n                        sizeof(void*), qsort_compare_pointers);\n                for (uint32_t j = 0; j < node->layers[i].num_links-1; j++) {\n                    if (node->layers[i].links[j] == node->layers[i].links[j+1])\n                        goto corrupted;\n                }\n            }\n\n            // Resolve pointers.\n            for (uint32_t j = 0; j < node->layers[i].num_links; j++) {\n                uint64_t linked_id = (uint64_t) node->layers[i].links[j];\n\n                // We can't link to our own node.\n                if (linked_id == this_node_id) goto corrupted;\n\n                // Compute accumulator for the reciprocal links check.\n                uint64_t mixed_h1, mixed_h2;\n                secure_pair_mixer_128(salt0, salt1, this_node_id, linked_id, (uint64_t)i, &mixed_h1, &mixed_h2);\n\n                accumulator[0] ^= mixed_h1;\n                accumulator[1] ^= mixed_h2;\n\n                // Fix links.\n                uint64_t bucket = 
hnsw_hash_node_id(linked_id) & (table_size-1);\n                hnswNode *neighbor = NULL;\n                for (uint64_t k = 0; k < table_size; k++) {\n                    if (table[bucket] && table[bucket]->id == linked_id) {\n                        neighbor = table[bucket];\n                        break;\n                    }\n                    bucket = (bucket+1) & (table_size-1);\n                }\n\n                /* The neighbor must exist and also exist at the right\n                 * level. */\n                if (neighbor == NULL || neighbor->level < i) {\n                    /* Unresolved link! Either a bug in this code\n                     * or broken serialization data. */\n                    goto corrupted;\n                }\n                node->layers[i].links[j] = neighbor;\n            }\n\n            /* The worst link information was missing from older\n             * serialization formats. Compute it on the fly if needed. */\n            if (node->layers[i].worst_idx == HNSW_SER_WORSTLINK_MISSING) {\n                hnsw_update_worst_neighbor(index,node,i);\n            }\n        }\n        node = node->next;\n    }\n\n    /* Check that links are reciprocal, otherwise fail. */\n    if (accumulator[0] || accumulator[1]) goto corrupted;\n\n    /* Everything fine. Return success. */\n    hfree(table);\n    return 1;\n\ncorrupted:\n    /* Some corruption error detected. */\n    hfree(table);\n    return 0;\n}\n\n/* ================================ Iterator ================================ */\n\n/* Get a cursor that can be used as argument of hnsw_cursor_next() to iterate\n * all the elements that remain there from the start to the end of the\n * iteration, excluding newly added elements.\n *\n * The function returns NULL on out of memory. 
*/\nhnswCursor *hnsw_cursor_init(HNSW *index) {\n    if (pthread_rwlock_wrlock(&index->global_lock) != 0) return NULL;\n    hnswCursor *cursor = hmalloc(sizeof(*cursor));\n    if (cursor == NULL) {\n        pthread_rwlock_unlock(&index->global_lock);\n        return NULL;\n    }\n    cursor->index = index;\n    cursor->next = index->cursors;\n    cursor->current = index->head;\n    index->cursors = cursor;\n    pthread_rwlock_unlock(&index->global_lock);\n    return cursor;\n}\n\n/* Free the cursor. Can be called either at the end of the iteration, when\n * hnsw_cursor_next() has returned NULL, or before. */\nvoid hnsw_cursor_free(hnswCursor *cursor) {\n    HNSW *index = cursor->index;\n    if (pthread_rwlock_wrlock(&index->global_lock) != 0) {\n        // No easy way to recover from that. We will leak memory.\n        return;\n    }\n\n    hnswCursor *x = index->cursors;\n    hnswCursor *prev = NULL;\n    while(x) {\n        if (x == cursor) {\n            if (prev)\n                prev->next = cursor->next;\n            else\n                index->cursors = cursor->next;\n            hfree(cursor);\n            break;\n        }\n        prev = x;\n        x = x->next;\n    }\n    pthread_rwlock_unlock(&index->global_lock);\n}\n\n/* Acquire a lock to use the cursor. Returns 1 if the lock was acquired\n * successfully, otherwise zero is returned. The element returned by\n * hnsw_cursor_next() is protected for all the time required to access it;\n * then hnsw_cursor_release_lock() should be called in order to unlock the\n * HNSW index. */\nint hnsw_cursor_acquire_lock(hnswCursor *cursor) {\n    return pthread_rwlock_rdlock(&cursor->index->global_lock) == 0;\n}\n\n/* Release the cursor lock, see the hnsw_cursor_acquire_lock() top comment\n * for more information. */\nvoid hnsw_cursor_release_lock(hnswCursor *cursor) {\n    pthread_rwlock_unlock(&cursor->index->global_lock);\n}\n\n/* Return the next element of the HNSW. 
See hnsw_cursor_init() for\n * the guarantees of the function. */\nhnswNode *hnsw_cursor_next(hnswCursor *cursor) {\n    hnswNode *ret = cursor->current;\n    if (ret) cursor->current = ret->next;\n    return ret;\n}\n\n/* Called by hnsw_unlink_node() if there is at least an active cursor.\n * Will scan the cursors to see if any cursor is going to yield this\n * one, and in this case, updates the current element to the next. */\nvoid hnsw_cursor_element_deleted(HNSW *index, hnswNode *deleted) {\n    hnswCursor *x = index->cursors;\n    while(x) {\n        if (x->current == deleted) x->current = deleted->next;\n        x = x->next;\n    }\n}\n\n/* ============================ Debugging stuff ============================= */\n\n/* Show stats about nodes connections. */\nvoid hnsw_print_stats(HNSW *index) {\n    if (!index || !index->head) {\n        printf(\"Empty index or NULL pointer passed\\n\");\n        return;\n    }\n\n    long long total_links = 0;\n    int min_links = -1;         // We'll set this to first node's count.\n    int isolated_nodes = 0;\n    uint32_t node_count = 0;\n\n    // Iterate through all nodes using the linked list.\n    hnswNode *current = index->head;\n    while (current) {\n        // Count total links for this node across all layers.\n        int node_total_links = 0;\n        for (uint32_t layer = 0; layer <= current->level; layer++)\n            node_total_links += current->layers[layer].num_links;\n\n        // Update statistics.\n        total_links += node_total_links;\n\n        // Initialize or update minimum links.\n        if (min_links == -1 || node_total_links < min_links) {\n            min_links = node_total_links;\n        }\n\n        // Check if node is isolated (no links at all).\n        if (node_total_links == 0) isolated_nodes++;\n\n        node_count++;\n        current = current->next;\n    }\n\n    // Print statistics\n    printf(\"HNSW Graph Statistics:\\n\");\n    printf(\"----------------------\\n\");\n    
printf(\"Total nodes: %u\\n\", node_count);\n    if (node_count > 0) {\n        printf(\"Average links per node: %.2f\\n\",\n\t\t(float)total_links / node_count);\n        printf(\"Minimum links in a single node: %d\\n\", min_links);\n        printf(\"Number of isolated nodes: %d (%.1f%%)\\n\",\n               isolated_nodes,\n               (float)isolated_nodes * 100 / node_count);\n    }\n}\n\n/* Validate graph connectivity and link reciprocity. Takes pointers to store results:\n * - connected_nodes: will contain number of reachable nodes from entry point.\n * - reciprocal_links: will contain 1 if all links are reciprocal, 0 otherwise.\n * Returns 0 on success, -1 on error (NULL parameters and such).\n */\nint hnsw_validate_graph(HNSW *index, uint64_t *connected_nodes, int *reciprocal_links) {\n    if (!index || !connected_nodes || !reciprocal_links) return -1;\n    if (!index->enter_point) {\n        *connected_nodes = 0;\n        *reciprocal_links = 1;  // Empty graph is valid.\n        return 0;\n    }\n\n    // Initialize connectivity check.\n    index->current_epoch[0]++;\n    *connected_nodes = 0;\n    *reciprocal_links = 1;\n\n    // Initialize node stack.\n    uint64_t stack_size = index->node_count;\n    hnswNode **stack = hmalloc(sizeof(hnswNode*) * stack_size);\n    if (!stack) return -1;\n    uint64_t stack_top = 0;\n\n    // Start from entry point.\n    index->enter_point->visited_epoch[0] = index->current_epoch[0];\n    (*connected_nodes)++;\n    stack[stack_top++] = index->enter_point;\n\n    // Process all reachable nodes.\n    while (stack_top > 0) {\n        hnswNode *current = stack[--stack_top];\n\n        // Explore all neighbors at each level.\n        for (uint32_t level = 0; level <= current->level; level++) {\n            for (uint64_t i = 0; i < current->layers[level].num_links; i++) {\n                hnswNode *neighbor = current->layers[level].links[i];\n\n                // Check reciprocity.\n                int found_backlink = 
0;\n                for (uint64_t j = 0; j < neighbor->layers[level].num_links; j++) {\n                    if (neighbor->layers[level].links[j] == current) {\n                        found_backlink = 1;\n                        break;\n                    }\n                }\n                if (!found_backlink) {\n                    *reciprocal_links = 0;\n                }\n\n                // If we haven't visited this neighbor yet.\n                if (neighbor->visited_epoch[0] != index->current_epoch[0]) {\n                    neighbor->visited_epoch[0] = index->current_epoch[0];\n                    (*connected_nodes)++;\n                    if (stack_top < stack_size) {\n                        stack[stack_top++] = neighbor;\n                    } else {\n                        // This should never happen in a valid graph.\n                        hfree(stack);\n                        return -1;\n                    }\n                }\n            }\n        }\n    }\n\n    hfree(stack);\n\n    // Now scan for unreachable nodes and print debug info.\n    printf(\"\\nUnreachable nodes debug information:\\n\");\n    printf(\"=====================================\\n\");\n\n    hnswNode *current = index->head;\n    while (current) {\n        if (current->visited_epoch[0] != index->current_epoch[0]) {\n            printf(\"\\nUnreachable node found:\\n\");\n            printf(\"- Node pointer: %p\\n\", (void*)current);\n            printf(\"- Node ID: %llu\\n\", (unsigned long long)current->id);\n            printf(\"- Node level: %u\\n\", current->level);\n\n            // Print info about all its links at each level.\n            for (uint32_t level = 0; level <= current->level; level++) {\n                printf(\"  Level %u links (%u):\\n\", level,\n                       current->layers[level].num_links);\n                for (uint64_t i = 0; i < current->layers[level].num_links; i++) {\n                    hnswNode *neighbor = 
current->layers[level].links[i];\n                    // Check reciprocity for this specific link.\n                    int found_backlink = 0;\n                    for (uint64_t j = 0; j < neighbor->layers[level].num_links; j++) {\n                        if (neighbor->layers[level].links[j] == current) {\n                            found_backlink = 1;\n                            break;\n                        }\n                    }\n                    printf(\"    - Link %llu: pointer=%p, id=%llu, visited=%s, recpr=%s\\n\",\n                           (unsigned long long)i, (void*)neighbor,\n                           (unsigned long long)neighbor->id,\n                           neighbor->visited_epoch[0] == index->current_epoch[0] ?\n                           \"yes\" : \"no\",\n                           found_backlink ? \"yes\" : \"no\");\n                }\n            }\n        }\n        current = current->next;\n    }\n\n    printf(\"Total connected nodes: %llu\\n\", (unsigned long long)*connected_nodes);\n    printf(\"All links are bi-directional? %s\\n\", (*reciprocal_links)?\"yes\":\"no\");\n    return 0;\n}\n\n/* Test graph recall ability by verifying each node can be found searching\n * for its own vector. This helps validate that the majority of nodes are\n * properly connected and easily reachable in the graph structure. Every\n * unreachable node is reported.\n *\n * Normally only a small percentage of nodes will not be reachable. This is\n * expected and part of the statistical properties\n * of HNSW. This happens especially with entries that have an ambiguous\n * meaning in the represented space, and sit across two or more clusters\n * of items.\n *\n * The function works by:\n * 1. Iterating through all nodes in the linked list\n * 2. Using each node's vector to perform a search with the specified EF\n * 3. Verifying the node can find itself as nearest neighbor\n * 4. 
Collecting and reporting statistics about reachability\n *\n * This is just a debugging function that reports stuff on the standard\n * output; it is part of the implementation because this kind of function\n * provides some visibility on what happens inside the HNSW.\n */\nvoid hnsw_test_graph_recall(HNSW *index, int test_ef, int verbose) {\n    // Stats.\n    uint32_t total_nodes = 0;\n    uint32_t unreachable_nodes = 0;\n    uint32_t perfectly_reachable = 0;  // Node finds itself as first result.\n\n    // For storing search results.\n    hnswNode **neighbors = hmalloc(sizeof(hnswNode*) * test_ef);\n    float *distances = hmalloc(sizeof(float) * test_ef);\n    float *test_vector = hmalloc(sizeof(float) * index->vector_dim);\n    if (!neighbors || !distances || !test_vector) {\n        hfree(neighbors);\n        hfree(distances);\n        hfree(test_vector);\n        return;\n    }\n\n    // Get a read slot for searching (even if it's highly unlikely that\n    // this test will be run threaded...).\n    int slot = hnsw_acquire_read_slot(index);\n    if (slot < 0) {\n        hfree(neighbors);\n        hfree(distances);\n        hfree(test_vector);\n        return;\n    }\n\n    printf(\"\\nTesting graph recall\\n\");\n    printf(\"====================\\n\");\n\n    // Process one node at a time using the linked list.\n    hnswNode *current = index->head;\n    while (current) {\n        total_nodes++;\n\n        // If using quantization, we need to reconstruct the normalized vector.\n        if (index->quant_type == HNSW_QUANT_Q8) {\n            int8_t *quants = current->vector;\n            // Reconstruct the normalized vector from the quantized data.\n            for (uint32_t j = 0; j < index->vector_dim; j++) {\n                test_vector[j] = (quants[j] * current->quants_range) / 127;\n            }\n        } else if (index->quant_type == HNSW_QUANT_NONE) {\n            memcpy(test_vector,current->vector,sizeof(float)*index->vector_dim);\n        } else {\n            assert(0 && \"Quantization type not 
supported.\");\n        }\n\n        // Search using the node's own vector with high ef\n        int found = hnsw_search(index, test_vector, test_ef, neighbors,\n                              distances, slot, 1);\n\n        if (found == 0) continue; // Empty HNSW?\n\n        // Look for the node itself in the results\n        int found_self = 0;\n        int self_position = -1;\n        for (int i = 0; i < found; i++) {\n            if (neighbors[i] == current) {\n                found_self = 1;\n                self_position = i;\n                break;\n            }\n        }\n\n        if (!found_self || self_position != 0) {\n            unreachable_nodes++;\n            if (verbose) {\n                if (!found_self)\n                    printf(\"\\nNode %s cannot find itself:\\n\", (char*)current->value);\n                else\n                    printf(\"\\nNode %s is not top result:\\n\", (char*)current->value);\n                printf(\"- Node ID: %llu\\n\", (unsigned long long)current->id);\n                printf(\"- Node level: %u\\n\", current->level);\n                printf(\"- Found %d neighbors but self not among them\\n\", found);\n                printf(\"- Closest neighbor distance: %f\\n\", distances[0]);\n                printf(\"- Neighbors: \");\n                for (uint32_t i = 0; i < current->layers[0].num_links; i++) {\n                    printf(\"%s \", (char*)current->layers[0].links[i]->value);\n                }\n                printf(\"\\n\");\n                printf(\"\\nFound instead: \");\n                for (int j = 0; j < found && j < 10; j++) {\n                    printf(\"%s \", (char*)neighbors[j]->value);\n                }\n                printf(\"\\n\");\n            }\n        } else {\n            perfectly_reachable++;\n        }\n        current = current->next;\n    }\n\n    // Release read slot\n    hnsw_release_read_slot(index, slot);\n\n    // Free resources\n    hfree(neighbors);\n    hfree(distances);\n 
   hfree(test_vector);\n\n    // Print final statistics\n    printf(\"Total nodes tested: %u\\n\", total_nodes);\n    printf(\"Perfectly reachable nodes: %u (%.1f%%)\\n\",\n           perfectly_reachable,\n           total_nodes ? (float)perfectly_reachable * 100 / total_nodes : 0);\n    printf(\"Unreachable/suboptimal nodes: %u (%.1f%%)\\n\",\n           unreachable_nodes,\n           total_nodes ? (float)unreachable_nodes * 100 / total_nodes : 0);\n}\n\n/* Return exact K-NN items by performing a linear scan of all nodes.\n * This function has the same signature as hnsw_search_with_filter(),\n * minus the final max_candidates argument, but instead of using the\n * graph structure, it scans all nodes to find the true nearest neighbors.\n *\n * Note that neighbors and distances arrays must have space for at least 'k' items.\n * query_vector_is_normalized should be set to 1 if the query vector is already\n * normalized.\n *\n * If the filter_callback is passed, only elements passing the specified filter\n * are returned. The slot parameter is ignored but kept for API consistency. */\nint hnsw_ground_truth_with_filter\n               (HNSW *index, const float *query_vector, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized,\n                int (*filter_callback)(void *value, void *privdata),\n                void *filter_privdata)\n{\n    /* Note that we don't really use the slot here: it's a linear scan.\n     * Yet we want the user to acquire the slot as this will hold the\n     * global lock in read only mode. */\n    (void) slot;\n\n    /* Wrap our query vector into a temporary node. */\n    hnswNode query;\n    if (hnsw_init_tmp_node(index, &query, query_vector_is_normalized, query_vector) == 0) return -1;\n\n    /* Accumulate best results into a priority queue. */\n    pqueue *results = pq_new(k);\n    if (!results) {\n        hnsw_free_tmp_node(&query, query_vector);\n        return -1;\n    }\n\n    /* Scan all nodes linearly. 
*/\n    hnswNode *current = index->head;\n    while (current) {\n        /* Apply filter if needed. */\n        if (filter_callback &&\n            !filter_callback(current->value, filter_privdata))\n        {\n            current = current->next;\n            continue;\n        }\n\n        /* Calculate distance to query. */\n        float dist = hnsw_distance(index, &query, current);\n\n        /* Add to the results pqueue. It will be accepted only if it is\n         * better than the current worst, or if the pqueue is not full. */\n        pq_push(results, current, dist);\n        current = current->next;\n    }\n\n    /* Copy results to output arrays. */\n    uint32_t found = MIN(k, results->count);\n    for (uint32_t i = 0; i < found; i++) {\n        neighbors[i] = pq_get_node(results, i);\n        if (distances) distances[i] = pq_get_distance(results, i);\n    }\n\n    /* Clean up. */\n    pq_free(results);\n    hnsw_free_tmp_node(&query, query_vector);\n    return found;\n}\n"
  },
  {
    "path": "modules/vector-sets/hnsw.h",
    "content": "/*\n * HNSW (Hierarchical Navigable Small World) Implementation\n * Based on the paper by Yu. A. Malkov, D. A. Yashunin\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo.\n */\n\n#ifndef HNSW_H\n#define HNSW_H\n\n#include <pthread.h>\n#include <stdatomic.h>\n\n#define HNSW_DEFAULT_M  16     /* Used when 0 is given at creation time. */\n#define HNSW_MIN_M      4      /* Probably even too low already. */\n#define HNSW_MAX_M      4096   /* Safeguard sanity limit. */\n#define HNSW_MAX_THREADS 32    /* Maximum number of concurrent threads */\n\n/* Quantization types you can enable at creation time in hnsw_new() */\n#define HNSW_QUANT_NONE  0   // No quantization.\n#define HNSW_QUANT_Q8    1   // Q8 quantization.\n#define HNSW_QUANT_BIN   2   // Binary quantization.\n\n/* Layer structure for HNSW nodes. Each node will have from one to a few\n * of these, depending on its level. */\ntypedef struct {\n    struct hnswNode **links;  /* Array of neighbors for this layer */\n    uint32_t num_links;       /* Number of used links */\n    uint32_t max_links;       /* Maximum links for this layer. We may\n                               * reallocate the node in very particular\n                               * conditions in order to allow linking of\n                               * newly inserted nodes, so this may change\n                               * dynamically and be > M*2 for a small set of\n                               * nodes. 
*/\n    float worst_distance;     /* Distance to the worst neighbor */\n    uint32_t worst_idx;       /* Index of the worst neighbor */\n} hnswNodeLayer;\n\n/* Node structure for HNSW graph */\ntypedef struct hnswNode {\n    uint32_t level;         /* Node's maximum level */\n    uint64_t id;            /* Unique identifier, may be useful in order to\n                             * have a bitmap of visited nodes to use as\n                             * alternative to epoch / visited_epoch.\n                             * Also used in serialization in order to retain\n                             * links specifying IDs. */\n    void *vector;           /* The vector, quantized or not. */\n    float quants_range;     /* Quantization range for this vector:\n                             * min/max values will be in the range\n                             * -quants_range, +quants_range */\n    float l2;               /* L2 before normalization. */\n\n    /* Last time (epoch) this node was visited. We need one per thread.\n     * This avoids having a different data structure where we track\n     * visited nodes, but costs memory per node. */\n    uint64_t visited_epoch[HNSW_MAX_THREADS];\n\n    void *value;                    /* Associated value */\n    struct hnswNode *prev, *next;   /* Prev/Next node in the list starting at\n                                     * HNSW->head. */\n\n    /* Links (and links info) for each layer. Note that this is part\n     * of the node allocation to be more cache friendly: reliable 3% speedup\n     * on Apple silicon, and does not make anything more complex. */\n    hnswNodeLayer layers[];\n} hnswNode;\n\nstruct HNSW;\n\n/* It is possible to navigate an HNSW with a cursor that guarantees\n * visiting all the elements that remain in the HNSW from the start to the\n * end of the process (but not the new ones, so that the process will\n * eventually finish). Check hnsw_cursor_init(), hnsw_cursor_next() and\n * hnsw_cursor_free(). 
*/\ntypedef struct hnswCursor {\n    struct HNSW *index; // Reference to the index of this cursor.\n    hnswNode *current;  // Element to report when hnsw_cursor_next() is called.\n    struct hnswCursor *next; // Next cursor active.\n} hnswCursor;\n\n/* Main HNSW index structure */\ntypedef struct HNSW {\n    hnswNode *enter_point;   /* Entry point for the graph */\n    uint32_t M;               /* M as in the paper: layer 0 has M*2 max\n                                 neighbors (M populated at insertion time)\n                                 while all the other layers have M neighbors. */\n    uint32_t max_level;      /* Current maximum level in the graph */\n    uint32_t vector_dim;     /* Dimensionality of stored vectors */\n    uint64_t node_count;     /* Total number of nodes */\n    _Atomic uint64_t last_id; /* Last node ID used */\n    uint64_t current_epoch[HNSW_MAX_THREADS];  /* Current epoch for visit tracking */\n    hnswNode *head;             /* Linked list of nodes. Last first */\n\n    /* We have two locks here:\n     * 1. A global_lock that is used to perform write operations blocking all\n     * the readers.\n     * 2. One mutex per epoch slot, in order for read operations to acquire\n     * a lock on a specific slot to use epochs tracking of visited nodes. */\n    pthread_rwlock_t global_lock;  /* Global read-write lock */\n    pthread_mutex_t slot_locks[HNSW_MAX_THREADS];  /* Per-slot locks */\n\n    _Atomic uint32_t next_slot; /* Next thread slot to try */\n    _Atomic uint64_t version;   /* Version for optimistic concurrency, this is\n                                 * incremented on deletions and entry point\n                                 * updates. */\n    uint32_t quant_type;        /* Quantization used. HNSW_QUANT_... */\n    hnswCursor *cursors;\n} HNSW;\n\n/* Serialized node. This structure is used as return value of\n * hnsw_serialize_node(). 
*/\ntypedef struct hnswSerNode {\n    void *vector;\n    uint32_t vector_size;\n    uint64_t *params;\n    uint32_t params_count;\n} hnswSerNode;\n\n/* Insert preparation context */\ntypedef struct InsertContext InsertContext;\n\n/* Core HNSW functions */\nHNSW *hnsw_new(uint32_t vector_dim, uint32_t quant_type, uint32_t m);\nvoid hnsw_free(HNSW *index,void(*free_value)(void*value));\nvoid hnsw_node_free(hnswNode *node);\nvoid hnsw_print_stats(HNSW *index);\nhnswNode *hnsw_insert(HNSW *index, const float *vector, const int8_t *qvector,\n                float qrange, uint64_t id, void *value, int ef);\nint hnsw_search(HNSW *index, const float *query, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized);\nint hnsw_search_with_filter\n               (HNSW *index, const float *query_vector, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized,\n                int (*filter_callback)(void *value, void *privdata),\n                void *filter_privdata, uint32_t max_candidates);\nvoid hnsw_get_node_vector(HNSW *index, hnswNode *node, float *vec);\nint hnsw_delete_node(HNSW *index, hnswNode *node, void(*free_value)(void*value));\nhnswNode *hnsw_random_node(HNSW *index, int slot);\n\n/* Thread safety functions. */\nint hnsw_acquire_read_slot(HNSW *index);\nvoid hnsw_release_read_slot(HNSW *index, int slot);\n\n/* Optimistic insertion API. */\nInsertContext *hnsw_prepare_insert(HNSW *index, const float *vector, const int8_t *qvector, float qrange, uint64_t id, int ef);\nhnswNode *hnsw_try_commit_insert(HNSW *index, InsertContext *ctx, void *value);\nvoid hnsw_free_insert_context(InsertContext *ctx);\n\n/* Serialization. 
*/\nhnswSerNode *hnsw_serialize_node(HNSW *index, hnswNode *node);\nvoid hnsw_free_serialized_node(hnswSerNode *sn);\nhnswNode *hnsw_insert_serialized(HNSW *index, void *vector, uint64_t *params, uint32_t params_len, void *value);\nint hnsw_deserialize_index(HNSW *index, uint64_t salt0, uint64_t salt1);\n\n// Helper function in case the user wants to directly copy\n// the vector bytes.\nuint32_t hnsw_quants_bytes(HNSW *index);\n\n/* Cursors. */\nhnswCursor *hnsw_cursor_init(HNSW *index);\nvoid hnsw_cursor_free(hnswCursor *cursor);\nhnswNode *hnsw_cursor_next(hnswCursor *cursor);\nint hnsw_cursor_acquire_lock(hnswCursor *cursor);\nvoid hnsw_cursor_release_lock(hnswCursor *cursor);\n\n/* Allocator selection. */\nvoid hnsw_set_allocator(void (*free_ptr)(void*), void *(*malloc_ptr)(size_t),\n                        void *(*realloc_ptr)(void*, size_t));\n\n/* Testing. */\nint hnsw_validate_graph(HNSW *index, uint64_t *connected_nodes, int *reciprocal_links);\nvoid hnsw_test_graph_recall(HNSW *index, int test_ef, int verbose);\nfloat hnsw_distance(HNSW *index, hnswNode *a, hnswNode *b);\nint hnsw_ground_truth_with_filter\n               (HNSW *index, const float *query_vector, uint32_t k,\n                hnswNode **neighbors, float *distances, uint32_t slot,\n                int query_vector_is_normalized,\n                int (*filter_callback)(void *value, void *privdata),\n                void *filter_privdata);\n\n#endif /* HNSW_H */\n"
  },
  {
    "path": "modules/vector-sets/mixer.h",
    "content": "/* Redis implementation for vector sets. The data structure itself\n * is implemented in hnsw.c.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo.\n *\n * =============================================================================\n *\n * Mixing function for HNSW link integrity verification\n * Designed to resist collision attacks when salts are unknown.\n */\n\n#include <stdint.h>\n#include <string.h>\n\nstatic inline uint64_t ROTL64(uint64_t x, int r) {\n    return (x << r) | (x >> (64 - r));\n}\n\n// Use more rounds and stronger constants\n#define MIX_PRIME_1 0xFF51AFD7ED558CCDULL\n#define MIX_PRIME_2 0xC4CEB9FE1A85EC53ULL\n#define MIX_PRIME_3 0x9E3779B97F4A7C15ULL\n#define MIX_PRIME_4 0xBF58476D1CE4E5B9ULL\n#define MIX_PRIME_5 0x94D049BB133111EBULL\n#define MIX_PRIME_6 0x2B7E151628AED2A7ULL\n\n/* Mixer design goals:\n * 1. Thorough mixing of the level parameter.\n * 2. Enough rounds of mixing.\n * 3. Cross-influence between h1 and h2.\n * 4. Domain separation to prevent related-key attacks.\n */\nvoid secure_pair_mixer_128(uint64_t salt0, uint64_t salt1,\n                          uint64_t id1_in, uint64_t id2_in, uint64_t level,\n                          uint64_t* out_h1, uint64_t* out_h2) {\n    // Order independence (A -> B links should hash as B -> A links).\n    uint64_t id_a = (id1_in < id2_in) ? id1_in : id2_in;\n    uint64_t id_b = (id1_in < id2_in) ? 
id2_in : id1_in;\n\n    // Domain separation: mix salts with a constant to prevent\n    // related-key attacks.\n    uint64_t h1 = salt0 ^ 0xDEADBEEFDEADBEEFULL;\n    uint64_t h2 = salt1 ^ 0xCAFEBABECAFEBABEULL;\n\n    // First, thoroughly mix the level into both accumulators\n    // This prevents predictable level values from being a weakness\n    uint64_t level_mix = level;\n    level_mix *= MIX_PRIME_5;\n    level_mix ^= level_mix >> 32;\n    level_mix *= MIX_PRIME_6;\n\n    h1 ^= level_mix;\n    h2 ^= ROTL64(level_mix, 31);\n\n    // Mix in id_a with strong diffusion.\n    h1 ^= id_a;\n    h1 *= MIX_PRIME_1;\n    h1 = ROTL64(h1, 23);\n    h1 *= MIX_PRIME_2;\n\n    // Mix in id_b.\n    h2 ^= id_b;\n    h2 *= MIX_PRIME_3;\n    h2 = ROTL64(h2, 29);\n    h2 *= MIX_PRIME_4;\n\n    // Three rounds of cross-mixing for better security.\n    for (int i = 0; i < 3; i++) {\n        // Cross-influence.\n        uint64_t tmp = h1;\n        h1 += h2;\n        h2 += tmp;\n\n        // Mix h1.\n        h1 ^= ROTL64(h1, 31);\n        h1 *= MIX_PRIME_1;\n        h1 ^= salt0;\n\n        // Mix h2.\n        h2 ^= ROTL64(h2, 37);\n        h2 *= MIX_PRIME_2;\n        h2 ^= salt1;\n    }\n\n    // Finalization with avalanche rounds.\n    h1 ^= h1 >> 33;\n    h1 *= MIX_PRIME_3;\n    h1 ^= h1 >> 29;\n    h1 *= MIX_PRIME_4;\n    h1 ^= h1 >> 32;\n\n    h2 ^= h2 >> 33;\n    h2 *= MIX_PRIME_5;\n    h2 ^= h2 >> 29;\n    h2 *= MIX_PRIME_6;\n    h2 ^= h2 >> 32;\n\n    *out_h1 = h1;\n    *out_h2 = h2;\n}\n"
  },
  {
    "path": "modules/vector-sets/test.py",
    "content": "#!/usr/bin/env python3\n#\n# Vector set tests.\n# A Redis instance should be running in the default port.\n#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\nimport redis\nimport random\nimport struct\nimport math\nimport time\nimport sys\nimport os\nimport importlib\nimport inspect\nimport argparse\nfrom typing import List, Tuple, Optional\nfrom dataclasses import dataclass\n\ndef colored(text: str, color: str) -> str:\n    colors = {\n        'red': '\\033[91m',\n        'green': '\\033[92m',\n        'yellow': '\\033[93m',\n        'blue': '\\033[94m',\n        'magenta': '\\033[95m',\n        'cyan': '\\033[96m',\n    }\n    reset = '\\033[0m'\n    return f\"{colors.get(color, '')}{text}{reset}\"\n\n@dataclass\nclass VectorData:\n    vectors: List[List[float]]\n    names: List[str]\n\n    def find_k_nearest(self, query_vector: List[float], k: int) -> List[Tuple[str, float]]:\n        \"\"\"Find k-nearest neighbors using the same scoring as Redis VSIM WITHSCORES.\"\"\"\n        similarities = []\n        query_norm = math.sqrt(sum(x*x for x in query_vector))\n        if query_norm == 0:\n            return []\n\n        for i, vec in enumerate(self.vectors):\n            vec_norm = math.sqrt(sum(x*x for x in vec))\n            if vec_norm == 0:\n                continue\n\n            dot_product = sum(a*b for a,b in zip(query_vector, vec))\n            cosine_sim = dot_product / (query_norm * vec_norm)\n            distance = 1.0 - cosine_sim\n            redis_similarity = 1.0 - (distance/2.0)\n            similarities.append((self.names[i], redis_similarity))\n\n        similarities.sort(key=lambda x: x[1], reverse=True)\n        return similarities[:k]\n\ndef generate_random_vector(dim: int) -> List[float]:\n    
\"\"\"Generate a random normalized vector.\"\"\"\n    vec = [random.gauss(0, 1) for _ in range(dim)]\n    norm = math.sqrt(sum(x*x for x in vec))\n    return [x/norm for x in vec]\n\ndef fill_redis_with_vectors(r: redis.Redis, key: str, count: int, dim: int,\n                          with_reduce: Optional[int] = None) -> VectorData:\n    \"\"\"Fill Redis with random vectors and return a VectorData object for verification.\"\"\"\n    vectors = []\n    names = []\n\n    r.delete(key)\n    for i in range(count):\n        vec = generate_random_vector(dim)\n        name = f\"{key}:item:{i}\"\n        vectors.append(vec)\n        names.append(name)\n\n        vec_bytes = struct.pack(f'{dim}f', *vec)\n        args = [key]\n        if with_reduce:\n            args.extend(['REDUCE', with_reduce])\n        args.extend(['FP32', vec_bytes, name])\n        r.execute_command('VADD', *args)\n\n    return VectorData(vectors=vectors, names=names)\n\nclass TestCase:\n    def __init__(self, primary_port=6379, replica_port=6380):\n        self.error_msg = None\n        self.error_details = None\n        self.test_key = f\"test:{self.__class__.__name__.lower()}\"\n        # Primary Redis instance\n        self.redis = redis.Redis(port=primary_port,db=9)\n        self.redis3 = redis.Redis(port=primary_port,protocol=3,db=9)\n        # Replica Redis instance\n        self.replica = redis.Redis(port=replica_port,db=9)\n        # Replication status\n        self.replication_setup = False\n        # Ports\n        self.primary_port = primary_port\n        self.replica_port = replica_port\n\n    def setup(self):\n        self.redis.delete(self.test_key)\n\n    def teardown(self):\n        self.redis.delete(self.test_key)\n\n    def setup_replication(self) -> bool:\n        \"\"\"\n        Setup replication between primary and replica Redis instances.\n        Returns True if replication is successfully established, False otherwise.\n        \"\"\"\n        # Configure replica to replicate 
from primary\n        self.replica.execute_command('REPLICAOF', '127.0.0.1', self.primary_port)\n\n        # Wait for replication to be established\n        max_attempts = 50\n        for attempt in range(max_attempts):\n            # Check replication info\n            repl_info = self.replica.info('replication')\n\n            # Check if replication is established\n            if (repl_info.get('role') == 'slave' and\n                repl_info.get('master_host') == '127.0.0.1' and\n                repl_info.get('master_port') == self.primary_port and\n                repl_info.get('master_link_status') == 'up'):\n\n                self.replication_setup = True\n                return True\n\n            # Wait before next attempt\n            print(colored(\".\",'cyan'),end=\"\",flush=True)\n            time.sleep(0.5)\n\n        # If we get here, replication wasn't established\n        self.error_msg = \"Failed to establish replication between primary and replica\"\n        return False\n\n    def test(self):\n        raise NotImplementedError(\"Subclasses must implement test method\")\n\n    def run(self):\n        try:\n            self.setup()\n            self.test()\n            return True\n        except AssertionError as e:\n            self.error_msg = str(e)\n            import traceback\n            self.error_details = traceback.format_exc()\n            return False\n        except Exception as e:\n            self.error_msg = f\"Unexpected error: {str(e)}\"\n            import traceback\n            self.error_details = traceback.format_exc()\n            return False\n        finally:\n            self.teardown()\n\n    def getname(self):\n        \"\"\"Each test class should override this to provide its name\"\"\"\n        return self.__class__.__name__\n\n    def estimated_runtime(self):\n        \"\"\"Each test class should override this if it takes a significant amount of time to run. 
Default is 100ms\"\"\"\n        return 0.1\n\ndef find_test_classes(primary_port, replica_port):\n    test_classes = []\n    script_dir = os.path.dirname(os.path.abspath(__file__))\n    tests_dir = os.path.join(script_dir, 'tests')\n\n    if not os.path.exists(tests_dir):\n        return []\n\n    for file in os.listdir(tests_dir):\n        if file.endswith('.py'):\n            module_name = f\"tests.{file[:-3]}\"\n            try:\n                module = importlib.import_module(module_name)\n                for name, obj in inspect.getmembers(module):\n                    if inspect.isclass(obj) and obj.__name__ != 'TestCase' and hasattr(obj, 'test'):\n                        # Create test instance with specified ports\n                        test_instance = obj(primary_port,replica_port)\n                        test_classes.append(test_instance)\n            except Exception as e:\n                print(f\"Error loading {file}: {e}\")\n\n    return test_classes\n\ndef check_redis_empty(r, instance_name):\n    \"\"\"Check if Redis instance is empty\"\"\"\n    try:\n        dbsize = r.dbsize()\n        if dbsize > 0:\n            print(colored(f\"ERROR: {instance_name} Redis instance DB 9 is not empty (dbsize: {dbsize}).\", \"red\"))\n            print(colored(\"Make sure you're not using a production instance and that all data is safe to delete.\", \"red\"))\n            sys.exit(1)\n    except redis.exceptions.ConnectionError:\n        print(colored(f\"ERROR: Cannot connect to {instance_name} Redis instance.\", \"red\"))\n        sys.exit(1)\n\ndef check_replica_running(replica_port):\n    \"\"\"Check if replica Redis instance is running\"\"\"\n    r = redis.Redis(port=replica_port)\n    try:\n        r.ping()\n        return True\n    except redis.exceptions.ConnectionError:\n        print(colored(f\"WARNING: Replica Redis instance (port {replica_port}) is not running.\", \"yellow\"))\n        print(colored(\"Replication tests will be skipped. 
Make sure to start the replica instance.\", \"yellow\"))\n        return False\n\ndef run_tests():\n    # Parse command line arguments\n    parser = argparse.ArgumentParser(description='Run Redis vector tests.')\n    parser.add_argument('--primary-port', type=int, default=6379, help='Primary Redis instance port (default: 6379)')\n    parser.add_argument('--replica-port', type=int, default=6380, help='Replica Redis instance port (default: 6380)')\n    args = parser.parse_args()\n\n    print(\"================================================\")\n    print(f\"Make sure to have Redis running on localhost\")\n    print(f\"Primary port: {args.primary_port}\")\n    print(f\"Replica port: {args.replica_port}\")\n    print(\"with --enable-debug-command yes\")\n    print(\"================================================\\n\")\n\n    # Check if Redis instances are empty\n    primary = redis.Redis(port=args.primary_port,db=9)\n    replica = redis.Redis(port=args.replica_port,db=9)\n\n    check_redis_empty(primary, \"Primary\")\n\n    # Check if replica is running\n    replica_running = check_replica_running(args.replica_port)\n    if replica_running:\n        check_redis_empty(replica, \"Replica\")\n\n    tests = find_test_classes(args.primary_port, args.replica_port)\n    if not tests:\n        print(\"No tests found!\")\n        return\n\n    # Sort tests by estimated runtime\n    tests.sort(key=lambda t: t.estimated_runtime())\n\n    passed = 0\n    skipped = 0\n    total = len(tests)\n\n    for test in tests:\n        print(f\"{test.getname()}: \", end=\"\")\n        sys.stdout.flush()\n\n        if not replica_running and test.getname().lower().find(\"replication\") != -1:\n            print(colored(\"SKIPPING\",\"yellow\"))\n            skipped += 1\n            continue\n\n        start_time = time.time()\n        success = test.run()\n        duration = time.time() - start_time\n\n        if success:\n            print(colored(\"OK\", \"green\"), 
f\"({duration:.2f}s)\")\n            passed += 1\n        else:\n            print(colored(\"ERR\", \"red\"), f\"({duration:.2f}s)\")\n            print(f\"Error: {test.error_msg}\")\n            if test.error_details:\n                print(\"\\nTraceback:\")\n                print(test.error_details)\n\n    print(\"\\n\" + \"=\"*50)\n    print(f\"\\nTest Summary: {passed}/{total} tests passed\")\n\n    if passed == total:\n        print(colored(\"ALL TESTS PASSED!\", \"green\"))\n    else:\n        if total-skipped-passed > 0:\n            print(colored(f\"{total-skipped-passed} TESTS FAILED!\", \"red\"))\n            sys.exit(1)\n        if skipped > 0:\n            print(colored(f\"{skipped} TESTS SKIPPED!\", \"yellow\"))\n\nif __name__ == \"__main__\":\n    run_tests()\n"
  },
  {
    "path": "modules/vector-sets/tests/basic_commands.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass BasicCommands(TestCase):\n    def getname(self):\n        return \"VADD, VDIM, VCARD basic usage\"\n\n    def test(self):\n        # Test VADD\n        vec = generate_random_vector(4)\n        vec_bytes = struct.pack('4f', *vec)\n        result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:1')\n        assert result == 1, \"VADD should return 1 for first item\"\n\n        # Test VDIM\n        dim = self.redis.execute_command('VDIM', self.test_key)\n        assert dim == 4, f\"VDIM should return 4, got {dim}\"\n\n        # Test VCARD\n        card = self.redis.execute_command('VCARD', self.test_key)\n        assert card == 1, f\"VCARD should return 1, got {card}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/basic_similarity.py",
    "content": "from test import TestCase\n\nclass BasicSimilarity(TestCase):\n    def getname(self):\n        return \"VSIM reported distance makes sense with 4D vectors\"\n\n    def test(self):\n        # Add two very similar vectors, one different\n        vec1 = [1, 0, 0, 0]\n        vec2 = [0.99, 0.01, 0, 0]\n        vec3 = [0.1, 1, -1, 0.5]\n\n        # Add vectors using VALUES format\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec1], f'{self.test_key}:item:1')\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec2], f'{self.test_key}:item:2')\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec3], f'{self.test_key}:item:3')\n\n        # Query similarity with vec1\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4, \n                                          *[str(x) for x in vec1], 'WITHSCORES')\n\n        # Convert results to dictionary\n        results_dict = {}\n        for i in range(0, len(result), 2):\n            key = result[i].decode()\n            score = float(result[i+1])\n            results_dict[key] = score\n\n        # Verify results\n        assert results_dict[f'{self.test_key}:item:1'] > 0.99, \"Self-similarity should be very high\"\n        assert results_dict[f'{self.test_key}:item:2'] > 0.99, \"Similar vector should have high similarity\"\n        assert results_dict[f'{self.test_key}:item:3'] < 0.8, \"Not very similar vector should have low similarity\"\n"
  },
  {
    "path": "modules/vector-sets/tests/bin_vectorization.py",
    "content": "from test import TestCase\n\nclass BinVectorization(TestCase):\n    def getname(self):\n        return \"Binary quantization: verify vectorized vs scalar paths produce consistent results\"\n\n    def test(self):\n        # Test with different dimensions to exercise different code paths:\n        # - dim=1: Edge case for minimal valid dimension (scalar path)\n        # - dim=64: Exact alignment boundary, one uint64_t word (scalar path)\n        # - dim=128: Scalar path (< 256)\n        # - dim=384: AVX2 path if available (>= 256, < 512)\n        # - dim=768: AVX512 path if available (>= 512)\n        # Note: dim=0 is not tested as it's invalid input (division by zero)\n        \n        test_dims = [1, 64, 128, 384, 768]\n        \n        for dim in test_dims:\n            # Add two very similar vectors, one different\n            vec1 = [1.0] * dim\n            vec2 = [0.99] * dim  # Very similar to vec1\n            vec3 = [-1.0] * dim  # Opposite direction - should have low similarity\n            \n            # Add vectors with binary quantization\n            self.redis.execute_command('VADD', f'{self.test_key}:dim{dim}', 'VALUES', dim, \n                                     *[str(x) for x in vec1], f'{self.test_key}:dim{dim}:item:1', 'BIN')\n            self.redis.execute_command('VADD', f'{self.test_key}:dim{dim}', 'VALUES', dim, \n                                     *[str(x) for x in vec2], f'{self.test_key}:dim{dim}:item:2', 'BIN')\n            self.redis.execute_command('VADD', f'{self.test_key}:dim{dim}', 'VALUES', dim, \n                                     *[str(x) for x in vec3], f'{self.test_key}:dim{dim}:item:3', 'BIN')\n            \n            # Query similarity\n            result = self.redis.execute_command('VSIM', f'{self.test_key}:dim{dim}', 'VALUES', dim, \n                                              *[str(x) for x in vec1], 'WITHSCORES')\n            \n            # Convert results to dictionary\n            
results_dict = {}\n            for i in range(0, len(result), 2):\n                key = result[i].decode()\n                score = float(result[i+1])\n                results_dict[key] = score\n            \n            # Verify results are consistent across dimensions\n            # Self-similarity should be very high (binary quantization is less precise)\n            assert results_dict[f'{self.test_key}:dim{dim}:item:1'] > 0.99, \\\n                f\"Dim {dim}: Self-similarity too low: {results_dict[f'{self.test_key}:dim{dim}:item:1']}\"\n            \n            # Similar vector should have high similarity (binary quant loses some precision)\n            assert results_dict[f'{self.test_key}:dim{dim}:item:2'] > 0.95, \\\n                f\"Dim {dim}: Similar vector similarity too low: {results_dict[f'{self.test_key}:dim{dim}:item:2']}\"\n            \n            # Opposite vector should have very low similarity\n            assert results_dict[f'{self.test_key}:dim{dim}:item:3'] < 0.1, \\\n                f\"Dim {dim}: Opposite vector similarity too high: {results_dict[f'{self.test_key}:dim{dim}:item:3']}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/concurrent_vadd_cas_del_vsim.py",
    "content": "from test import TestCase, generate_random_vector\nimport threading\nimport time\nimport struct\n\nclass ThreadingStressTest(TestCase):\n    def getname(self):\n        return \"Concurrent VADD/DEL/VSIM operations stress test\"\n\n    def estimated_runtime(self):\n        return 10  # Test runs for 10 seconds\n\n    def test(self):\n        # Constants - easy to modify if needed\n        NUM_VADD_THREADS = 10\n        NUM_VSIM_THREADS = 1\n        NUM_DEL_THREADS = 1\n        TEST_DURATION = 10  # seconds\n        VECTOR_DIM = 100\n        DEL_INTERVAL = 1  # seconds\n\n        # Shared flags and state\n        stop_event = threading.Event()\n        error_list = []\n        error_lock = threading.Lock()\n\n        def log_error(thread_name, error):\n            with error_lock:\n                error_list.append(f\"{thread_name}: {error}\")\n\n        def vadd_worker(thread_id):\n            \"\"\"Thread function to perform VADD operations\"\"\"\n            thread_name = f\"VADD-{thread_id}\"\n            try:\n                vector_count = 0\n                while not stop_event.is_set():\n                    try:\n                        # Generate random vector\n                        vec = generate_random_vector(VECTOR_DIM)\n                        vec_bytes = struct.pack(f'{VECTOR_DIM}f', *vec)\n\n                        # Add vector with CAS option\n                        self.redis.execute_command(\n                            'VADD',\n                            self.test_key,\n                            'FP32',\n                            vec_bytes,\n                            f'{self.test_key}:item:{thread_id}:{vector_count}',\n                            'CAS'\n                        )\n\n                        vector_count += 1\n\n                        # Small sleep to reduce CPU pressure\n                        if vector_count % 10 == 0:\n                            time.sleep(0.001)\n                    except Exception as 
e:\n                        log_error(thread_name, f\"Error: {str(e)}\")\n                        time.sleep(0.1)  # Slight backoff on error\n            except Exception as e:\n                log_error(thread_name, f\"Thread error: {str(e)}\")\n\n        def del_worker():\n            \"\"\"Thread function that deletes the key periodically\"\"\"\n            thread_name = \"DEL\"\n            try:\n                del_count = 0\n                while not stop_event.is_set():\n                    try:\n                        # Sleep first, then delete\n                        time.sleep(DEL_INTERVAL)\n                        if stop_event.is_set():\n                            break\n\n                        self.redis.delete(self.test_key)\n                        del_count += 1\n                    except Exception as e:\n                        log_error(thread_name, f\"Error: {str(e)}\")\n            except Exception as e:\n                log_error(thread_name, f\"Thread error: {str(e)}\")\n\n        def vsim_worker(thread_id):\n            \"\"\"Thread function to perform VSIM operations\"\"\"\n            thread_name = f\"VSIM-{thread_id}\"\n            try:\n                search_count = 0\n                while not stop_event.is_set():\n                    try:\n                        # Generate query vector\n                        query_vec = generate_random_vector(VECTOR_DIM)\n                        query_str = [str(x) for x in query_vec]\n\n                        # Perform similarity search\n                        args = ['VSIM', self.test_key, 'VALUES', VECTOR_DIM]\n                        args.extend(query_str)\n                        args.extend(['COUNT', 10])\n                        self.redis.execute_command(*args)\n\n                        search_count += 1\n\n                        # Small sleep to reduce CPU pressure\n                        if search_count % 10 == 0:\n                            time.sleep(0.005)\n                  
  except Exception as e:\n                        # Don't log empty array errors, as they're expected when key doesn't exist\n                        if \"empty array\" not in str(e).lower():\n                            log_error(thread_name, f\"Error: {str(e)}\")\n                        time.sleep(0.1)  # Slight backoff on error\n            except Exception as e:\n                log_error(thread_name, f\"Thread error: {str(e)}\")\n\n        # Start all threads\n        threads = []\n\n        # VADD threads\n        for i in range(NUM_VADD_THREADS):\n            thread = threading.Thread(target=vadd_worker, args=(i,))\n            thread.start()\n            threads.append(thread)\n\n        # DEL threads\n        for _ in range(NUM_DEL_THREADS):\n            thread = threading.Thread(target=del_worker)\n            thread.start()\n            threads.append(thread)\n\n        # VSIM threads\n        for i in range(NUM_VSIM_THREADS):\n            thread = threading.Thread(target=vsim_worker, args=(i,))\n            thread.start()\n            threads.append(thread)\n\n        # Let the test run for the specified duration\n        time.sleep(TEST_DURATION)\n\n        # Signal all threads to stop\n        stop_event.set()\n\n        # Wait for threads to finish\n        for thread in threads:\n            thread.join(timeout=2.0)\n\n        # Check if Redis is still responsive\n        try:\n            ping_result = self.redis.ping()\n            assert ping_result, \"Redis did not respond to PING after stress test\"\n        except Exception as e:\n            assert False, f\"Redis connection failed after stress test: {str(e)}\"\n\n        # Report any errors for diagnosis, but don't fail the test unless PING fails\n        if error_list:\n            error_count = len(error_list)\n            print(f\"\\nEncountered {error_count} errors during stress test.\")\n            print(\"First 5 errors:\")\n            for error in error_list[:5]:\n                
print(f\"- {error}\")\n"
  },
  {
    "path": "modules/vector-sets/tests/concurrent_vsim_and_del.py",
    "content": "from test import TestCase, fill_redis_with_vectors, generate_random_vector\nimport threading, time\n\nclass ConcurrentVSIMAndDEL(TestCase):\n    def getname(self):\n        return \"Concurrent VSIM and DEL operations\"\n\n    def estimated_runtime(self):\n        return 2\n\n    def test(self):\n        # Fill the key with 5000 random vectors\n        dim = 128\n        count = 5000\n        fill_redis_with_vectors(self.redis, self.test_key, count, dim)\n\n        # List to store results from threads\n        thread_results = []\n\n        def vsim_thread():\n            \"\"\"Thread function to perform VSIM operations until the key is deleted\"\"\"\n            while True:\n                query_vec = generate_random_vector(dim)\n                result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                                   *[str(x) for x in query_vec], 'COUNT', 10)\n                if not result:\n                    # Empty array detected, key is deleted\n                    thread_results.append(True)\n                    break\n\n        # Start multiple threads to perform VSIM operations\n        threads = []\n        for _ in range(4):  # Start 4 threads\n            t = threading.Thread(target=vsim_thread)\n            t.start()\n            threads.append(t)\n\n        # Delete the key while threads are still running\n        time.sleep(1)\n        self.redis.delete(self.test_key)\n\n        # Wait for all threads to finish (they will exit once they detect the key is deleted)\n        for t in threads:\n            t.join()\n\n        # Verify that all threads detected an empty array or error\n        assert len(thread_results) == len(threads), \"Not all threads detected the key deletion\"\n        assert all(thread_results), \"Some threads did not detect an empty array or error after DEL\"\n"
  },
  {
    "path": "modules/vector-sets/tests/debug_digest.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass DebugDigestTest(TestCase):\n    def getname(self):\n        return \"[regression] DEBUG DIGEST-VALUE with attributes\"\n\n    def test(self):\n        # Generate random vectors\n        vec1 = generate_random_vector(4)\n        vec2 = generate_random_vector(4)\n        vec_bytes1 = struct.pack('4f', *vec1)\n        vec_bytes2 = struct.pack('4f', *vec2)\n\n        # Add vectors to the key, one with attribute, one without\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes1, f'{self.test_key}:item:1')\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes2, f'{self.test_key}:item:2', 'SETATTR', '{\"color\":\"red\"}')\n\n        # Call DEBUG DIGEST-VALUE on the key\n        try:\n            digest1 = self.redis.execute_command('DEBUG', 'DIGEST-VALUE', self.test_key)\n            assert digest1 is not None, \"DEBUG DIGEST-VALUE should return a value\"\n\n            # Change attribute and verify digest changes\n            self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:2', '{\"color\":\"blue\"}')\n\n            digest2 = self.redis.execute_command('DEBUG', 'DIGEST-VALUE', self.test_key)\n            assert digest2 is not None, \"DEBUG DIGEST-VALUE should return a value after attribute change\"\n            assert digest1 != digest2, \"Digest should change when an attribute is modified\"\n\n            # Remove attribute and verify digest changes again\n            self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:2', '')\n\n            digest3 = self.redis.execute_command('DEBUG', 'DIGEST-VALUE', self.test_key)\n            assert digest3 is not None, \"DEBUG DIGEST-VALUE should return a value after attribute removal\"\n            assert digest2 != digest3, \"Digest should change when an attribute is removed\"\n\n        except Exception as e:\n            raise 
AssertionError(f\"DEBUG DIGEST-VALUE command failed: {str(e)}\") from e\n"
  },
  {
    "path": "modules/vector-sets/tests/deletion.py",
    "content": "from test import TestCase, fill_redis_with_vectors, generate_random_vector\nimport random\n\n\"\"\"\nA note about this test:\nWe experimented with modifying hnsw.c so that it no longer calls\nhnsw_reconnect_nodes(). Without that call, the test fails very\noften with EF set to 250, while with the same parameters it\nhardly ever fails when hnsw_reconnect_nodes() is called.\n\nNote that, given the nature of the test (it is very strict), it\ncan still fail from time to time without signaling any actual\nbug.\n\"\"\"\n\nclass VREM(TestCase):\n    def getname(self):\n        return \"Deletion and graph state after deletion\"\n\n    def estimated_runtime(self):\n        return 2.0\n\n    def format_neighbors_with_scores(self, links_result, old_links=None, items_to_remove=None):\n        \"\"\"Format neighbors with their similarity scores and status indicators\"\"\"\n        if not links_result:\n            return \"No neighbors\"\n\n        output = []\n        for level, neighbors in enumerate(links_result):\n            level_num = len(links_result) - level - 1\n            output.append(f\"Level {level_num}:\")\n\n            # Get neighbors and scores\n            neighbors_with_scores = []\n            for i in range(0, len(neighbors), 2):\n                neighbor = neighbors[i].decode() if isinstance(neighbors[i], bytes) else neighbors[i]\n                score = float(neighbors[i+1]) if i+1 < len(neighbors) else None\n                status = \"\"\n\n                # For old links, mark deleted ones\n                if items_to_remove and neighbor in items_to_remove:\n                    status = \" [lost]\"\n                # For new links, mark newly added ones\n                elif old_links is not None:\n                    # Check if this neighbor was in the old links at this level\n                    was_present = False\n                    if old_links and level < len(old_links):\n                        # WITHSCORES output alternates name/score: keep names only\n                        old_neighbors = [n.decode() if isinstance(n, bytes) else n\n                                      for n in old_links[level][::2]]\n                        was_present = neighbor in old_neighbors\n                    if not was_present:\n                        status = \" [gained]\"\n\n                if score is not None:\n                    neighbors_with_scores.append(f\"{len(neighbors_with_scores)+1}. {neighbor} ({score:.6f}){status}\")\n                else:\n                    neighbors_with_scores.append(f\"{len(neighbors_with_scores)+1}. {neighbor}{status}\")\n\n            output.extend([\"    \" + n for n in neighbors_with_scores])\n        return \"\\n\".join(output)\n\n    def test(self):\n        # 1. Fill server with random elements\n        dim = 128\n        count = 5000\n        data = fill_redis_with_vectors(self.redis, self.test_key, count, dim)\n\n        # 2. Do VSIM to get 200 items\n        query_vec = generate_random_vector(dim)\n        results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                    *[str(x) for x in query_vec],\n                                    'COUNT', 200, 'WITHSCORES')\n\n        # Convert results to list of (item, score) pairs, sorted by score\n        items = []\n        for i in range(0, len(results), 2):\n            item = results[i].decode()\n            score = float(results[i+1])\n            items.append((item, score))\n        items.sort(key=lambda x: x[1], reverse=True)  # Sort by similarity\n\n        # Store the graph structure for all items before deletion\n        neighbors_before = {}\n        for item, _ in items:\n            links = self.redis.execute_command('VLINKS', self.test_key, item, 'WITHSCORES')\n            if links:  # Some items might not have links\n                neighbors_before[item] = links\n\n        # 3. 
Remove 100 random items\n        items_to_remove = set(item for item, _ in random.sample(items, 100))\n        # Keep track of top 10 non-removed items\n        top_remaining = []\n        for item, score in items:\n            if item not in items_to_remove:\n                top_remaining.append((item, score))\n                if len(top_remaining) == 10:\n                    break\n\n        # Remove the items\n        for item in items_to_remove:\n            result = self.redis.execute_command('VREM', self.test_key, item)\n            assert result == 1, f\"VREM failed to remove {item}\"\n\n        # 4. Do VSIM again with same vector\n        new_results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                        *[str(x) for x in query_vec],\n                                        'COUNT', 200, 'WITHSCORES',\n                                        'EF', 500)\n\n        # Convert new results to dict of item -> score\n        new_scores = {}\n        for i in range(0, len(new_results), 2):\n            item = new_results[i].decode()\n            score = float(new_results[i+1])\n            new_scores[item] = score\n\n        failure = False\n        failed_item = None\n        failed_reason = None\n        # 5. 
Verify all top 10 non-removed items are still found with similar scores\n        for item, old_score in top_remaining:\n            if item not in new_scores:\n                failure = True\n                failed_item = item\n                failed_reason = \"missing\"\n                break\n            new_score = new_scores[item]\n            if abs(new_score - old_score) >= 0.01:\n                failure = True\n                failed_item = item\n                failed_reason = f\"score changed: {old_score:.6f} -> {new_score:.6f}\"\n                break\n\n        if failure:\n            print(\"\\nTest failed!\")\n            print(f\"Problem with item: {failed_item} ({failed_reason})\")\n\n            print(\"\\nOriginal neighbors (with similarity scores):\")\n            if failed_item in neighbors_before:\n                print(self.format_neighbors_with_scores(\n                    neighbors_before[failed_item], \n                    items_to_remove=items_to_remove))\n            else:\n                print(\"No neighbors found in original graph\")\n\n            print(\"\\nCurrent neighbors (with similarity scores):\")\n            current_links = self.redis.execute_command('VLINKS', self.test_key, \n                                                     failed_item, 'WITHSCORES')\n            if current_links:\n                print(self.format_neighbors_with_scores(\n                    current_links,\n                    old_links=neighbors_before.get(failed_item)))\n            else:\n                print(\"No neighbors in current graph\")\n\n            print(\"\\nOriginal results (top 20):\")\n            for item, score in items[:20]:\n                deleted = \"[deleted]\" if item in items_to_remove else \"\"\n                print(f\"{item}: {score:.6f} {deleted}\")\n\n            print(\"\\nNew results after removal (top 20):\")\n            new_items = []\n            for i in range(0, len(new_results), 2):\n                item = 
new_results[i].decode()\n                score = float(new_results[i+1])\n                new_items.append((item, score))\n            new_items.sort(key=lambda x: x[1], reverse=True)\n            for item, score in new_items[:20]:\n                print(f\"{item}: {score:.6f}\")\n\n            raise AssertionError(f\"Test failed: Problem with item {failed_item} ({failed_reason}). *** IMPORTANT *** This test may fail from time to time without indicating a bug, although normally it should pass: it is quite an extreme test in which we destroy 50% of the nodes among the top results and still expect perfect recall, with vectors that are very hostile because of the distribution used.\")\n\n"
  },
  {
    "path": "modules/vector-sets/tests/dimension_validation.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\nimport redis.exceptions\n\nclass DimensionValidation(TestCase):\n    def getname(self):\n        return \"[regression] Dimension Validation with Projection\"\n\n    def estimated_runtime(self):\n        return 0.5\n\n    def test(self):\n        # Test scenario 1: Create a set with projection\n        original_dim = 100\n        reduced_dim = 50\n\n        # Create the initial vector and set with projection\n        vec1 = generate_random_vector(original_dim)\n        vec1_bytes = struct.pack(f'{original_dim}f', *vec1)\n\n        # Add first vector with projection\n        result = self.redis.execute_command('VADD', self.test_key,\n                                          'REDUCE', reduced_dim,\n                                          'FP32', vec1_bytes, f'{self.test_key}:item:1')\n        assert result == 1, \"First VADD with REDUCE should return 1\"\n\n        # Check VINFO returns the correct projection information\n        info = self.redis.execute_command('VINFO', self.test_key)\n        info_map = {k.decode('utf-8'): v for k, v in zip(info[::2], info[1::2])}\n        assert 'vector-dim' in info_map, \"VINFO should contain vector-dim\"\n        assert info_map['vector-dim'] == reduced_dim, f\"Expected reduced dimension {reduced_dim}, got {info_map['vector-dim']}\"\n        assert 'projection-input-dim' in info_map, \"VINFO should contain projection-input-dim\"\n        assert info_map['projection-input-dim'] == original_dim, f\"Expected original dimension {original_dim}, got {info_map['projection-input-dim']}\"\n\n        # Test scenario 2: Try adding a mismatched vector - should fail\n        wrong_dim = 80\n        wrong_vec = generate_random_vector(wrong_dim)\n        wrong_vec_bytes = struct.pack(f'{wrong_dim}f', *wrong_vec)\n\n        # This should fail with dimension mismatch error\n        try:\n            self.redis.execute_command('VADD', self.test_key,\n                                     'REDUCE', reduced_dim,\n                                     'FP32', wrong_vec_bytes, f'{self.test_key}:item:2')\n            assert False, \"VADD with wrong dimension should fail\"\n        except redis.exceptions.ResponseError as e:\n            assert \"Input dimension mismatch for projection\" in str(e), f\"Expected dimension mismatch error, got: {e}\"\n\n        # Test scenario 3: Add a correctly-sized vector\n        vec2 = generate_random_vector(original_dim)\n        vec2_bytes = struct.pack(f'{original_dim}f', *vec2)\n\n        # This should succeed\n        result = self.redis.execute_command('VADD', self.test_key,\n                                          'REDUCE', reduced_dim,\n                                          'FP32', vec2_bytes, f'{self.test_key}:item:3')\n        assert result == 1, \"VADD with correct dimensions should succeed\"\n\n        # Check VSIM also validates input dimensions\n        wrong_query = generate_random_vector(wrong_dim)\n        try:\n            self.redis.execute_command('VSIM', self.test_key,\n                                     'VALUES', wrong_dim, *[str(x) for x in wrong_query],\n                                     'COUNT', 10)\n            assert False, \"VSIM with wrong dimension should fail\"\n        except redis.exceptions.ResponseError as e:\n            assert \"Input dimension mismatch for projection\" in str(e), f\"Expected dimension mismatch error in VSIM, got: {e}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/epsilon.py",
    "content": "from test import TestCase\n\nclass EpsilonOption(TestCase):\n    def getname(self):\n        return \"VSIM EPSILON option filtering\"\n\n    def estimated_runtime(self):\n        return 0.1\n\n    def test(self):\n        # Add vectors as shown in the example\n        # Vector 'a' at (1, 1) - normalized to (0.707, 0.707)\n        result = self.redis.execute_command('VADD', self.test_key, 'VALUES', '2', '1', '1', 'a')\n        assert result == 1, \"VADD should return 1 for item 'a'\"\n\n        # Vector 'b' at (0, 1) - normalized to (0, 1)\n        result = self.redis.execute_command('VADD', self.test_key, 'VALUES', '2', '0', '1', 'b')\n        assert result == 1, \"VADD should return 1 for item 'b'\"\n\n        # Vector 'c' at (0, 0) - this will be a zero vector, might be handled specially\n        result = self.redis.execute_command('VADD', self.test_key, 'VALUES', '2', '0', '0', 'c')\n        assert result == 1, \"VADD should return 1 for item 'c'\"\n\n        # Vector 'd' at (0, -1) - normalized to (0, -1)\n        result = self.redis.execute_command('VADD', self.test_key, 'VALUES', '2', '0', '-1', 'd')\n        assert result == 1, \"VADD should return 1 for item 'd'\"\n\n        # Vector 'e' at (-1, -1) - normalized to (-0.707, -0.707)\n        result = self.redis.execute_command('VADD', self.test_key, 'VALUES', '2', '-1', '-1', 'e')\n        assert result == 1, \"VADD should return 1 for item 'e'\"\n\n        # Test without EPSILON - should return all items\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', '2', '1', '1', 'WITHSCORES')\n        # Result is a flat list: [elem1, score1, elem2, score2, ...]\n        elements_all = [result[i].decode() for i in range(0, len(result), 2)]\n        scores_all = [float(result[i]) for i in range(1, len(result), 2)]\n\n        assert len(elements_all) == 5, f\"Should return 5 elements without EPSILON, got {len(elements_all)}\"\n        assert elements_all[0] == 'a', \"First 
element should be 'a' (most similar)\"\n        assert scores_all[0] == 1.0, \"Score for 'a' should be 1.0 (identical)\"\n\n        # Test with EPSILON 0.5 - should return only elements with similarity >= 0.5 (distance < 0.5)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', '2', '1', '1', 'WITHSCORES', 'EPSILON', '0.5')\n        elements_epsilon_0_5 = [result[i].decode() for i in range(0, len(result), 2)]\n        scores_epsilon_0_5 = [float(result[i]) for i in range(1, len(result), 2)]\n\n        assert len(elements_epsilon_0_5) == 3, f\"With EPSILON 0.5, should return 3 elements, got {len(elements_epsilon_0_5)}\"\n        assert set(elements_epsilon_0_5) == {'a', 'b', 'c'}, f\"With EPSILON 0.5, should get a, b, c, got {elements_epsilon_0_5}\"\n\n        # Verify all returned scores are >= 0.5\n        for i, score in enumerate(scores_epsilon_0_5):\n            assert score >= 0.5, f\"Element {elements_epsilon_0_5[i]} has score {score} which is < 0.5\"\n\n        # Test with EPSILON 0.2 - should return only elements with similarity >= 0.8 (distance < 0.2)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', '2', '1', '1', 'WITHSCORES', 'EPSILON', '0.2')\n        elements_epsilon_0_2 = [result[i].decode() for i in range(0, len(result), 2)]\n        scores_epsilon_0_2 = [float(result[i]) for i in range(1, len(result), 2)]\n\n        assert len(elements_epsilon_0_2) == 2, f\"With EPSILON 0.2, should return 2 elements, got {len(elements_epsilon_0_2)}\"\n        assert set(elements_epsilon_0_2) == {'a', 'b'}, f\"With EPSILON 0.2, should get a, b, got {elements_epsilon_0_2}\"\n\n        # Verify all returned scores are >= 0.8 (since distance < 0.2 means similarity > 0.8)\n        for i, score in enumerate(scores_epsilon_0_2):\n            assert score >= 0.8, f\"Element {elements_epsilon_0_2[i]} has score {score} which is < 0.8\"\n\n        # Test with very small EPSILON - should return only the exact match\n        
result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', '2', '1', '1', 'WITHSCORES', 'EPSILON', '0.001')\n        elements_epsilon_small = [result[i].decode() for i in range(0, len(result), 2)]\n\n        assert len(elements_epsilon_small) == 1, f\"With EPSILON 0.001, should return only 1 element, got {len(elements_epsilon_small)}\"\n        assert elements_epsilon_small[0] == 'a', \"With very small EPSILON, should only get 'a'\"\n\n        # Test with EPSILON 1.0 - should return all elements (since all similarities are between 0 and 1)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', '2', '1', '1', 'WITHSCORES', 'EPSILON', '1.0')\n        elements_epsilon_1 = [result[i].decode() for i in range(0, len(result), 2)]\n\n        assert len(elements_epsilon_1) == 5, f\"With EPSILON 1.0, should return all 5 elements, got {len(elements_epsilon_1)}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/evict_empty.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass VREM_LastItemDeletesKey(TestCase):\n    def getname(self):\n        return \"VREM last item deletes key\"\n\n    def test(self):\n        # Generate a random vector\n        vec = generate_random_vector(4)\n        vec_bytes = struct.pack('4f', *vec)\n\n        # Add the vector to the key\n        result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:1')\n        assert result == 1, \"VADD should return 1 for first item\"\n\n        # Verify the key exists\n        exists = self.redis.exists(self.test_key)\n        assert exists == 1, \"Key should exist after VADD\"\n\n        # Remove the item\n        result = self.redis.execute_command('VREM', self.test_key, f'{self.test_key}:item:1')\n        assert result == 1, \"VREM should return 1 for successful removal\"\n\n        # Verify the key no longer exists\n        exists = self.redis.exists(self.test_key)\n        assert exists == 0, \"Key should no longer exist after VREM of last item\"\n"
  },
  {
    "path": "modules/vector-sets/tests/filter_expr.py",
    "content": "from test import TestCase\n\nclass VSIMFilterExpressions(TestCase):\n    def getname(self):\n        return \"VSIM FILTER expressions basic functionality\"\n\n    def test(self):\n        # Create a small set of vectors with different attributes\n\n        # Basic vectors for testing - all orthogonal for clear results\n        vec1 = [1, 0, 0, 0]\n        vec2 = [0, 1, 0, 0]\n        vec3 = [0, 0, 1, 0]\n        vec4 = [0, 0, 0, 1]\n        vec5 = [0.5, 0.5, 0, 0]\n\n        # Add vectors with various attributes\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4,\n                                 *[str(x) for x in vec1], f'{self.test_key}:item:1')\n        self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:1',\n                                  '{\"age\": 25, \"name\": \"Alice\", \"active\": true, \"scores\": [85, 90, 95], \"city\": \"New York\"}')\n\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4,\n                                 *[str(x) for x in vec2], f'{self.test_key}:item:2')\n        self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:2',\n                                  '{\"age\": 30, \"name\": \"Bob\", \"active\": false, \"scores\": [70, 75, 80], \"city\": \"Boston\"}')\n\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4,\n                                 *[str(x) for x in vec3], f'{self.test_key}:item:3')\n        self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:3',\n                                  '{\"age\": 35, \"name\": \"Charlie\", \"scores\": [60, 65, 70], \"city\": \"Seattle\"}')\n\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4,\n                                 *[str(x) for x in vec4], f'{self.test_key}:item:4')\n        # Item 4 has no attribute at all\n\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4,\n                                 
*[str(x) for x in vec5], f'{self.test_key}:item:5')\n        self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:5',\n                                  'invalid json')  # Intentionally malformed JSON\n\n        # Basic equality with numbers\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age == 25')\n        assert len(result) == 1, \"Expected 1 result for age == 25\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1 for age == 25\"\n\n        # Greater than\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age > 25')\n        assert len(result) == 2, \"Expected 2 results for age > 25\"\n\n        # Less than or equal\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age <= 30')\n        assert len(result) == 2, \"Expected 2 results for age <= 30\"\n\n        # String equality\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.name == \"Alice\"')\n        assert len(result) == 1, \"Expected 1 result for name == Alice\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1 for name == Alice\"\n\n        # String inequality\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.name != \"Alice\"')\n        assert len(result) == 2, 
\"Expected 2 results for name != Alice\"\n\n        # Boolean value\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.active')\n        assert len(result) == 1, \"Expected 1 result for .active being true\"\n\n        # Logical AND\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age > 20 and .age < 30')\n        assert len(result) == 1, \"Expected 1 result for 20 < age < 30\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1 for 20 < age < 30\"\n\n        # Logical OR\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age < 30 or .age > 35')\n        assert len(result) == 1, \"Expected 1 result for age < 30 or age > 35\"\n\n        # Logical NOT\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '!(.age == 25)')\n        assert len(result) == 2, \"Expected 2 results for NOT(age == 25)\"\n\n        # The \"in\" operator with array\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age in [25, 35]')\n        assert len(result) == 2, \"Expected 2 results for age in [25, 35]\"\n\n        # The \"in\" operator with strings in array\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n        
                                  'FILTER', '.name in [\"Alice\", \"David\"]')\n        assert len(result) == 1, \"Expected 1 result for name in [Alice, David]\"\n\n        # The \"in\" operator for substring matching\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"lic\" in .name')\n        assert len(result) == 1, \"Expected 1 result for 'lic' in name\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1 (Alice)\"\n\n        # The \"in\" operator with city substring\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"ork\" in .city')\n        assert len(result) == 1, \"Expected 1 result for 'ork' in city\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1 (New York)\"\n\n        # The \"in\" operator with no matches\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"xyz\" in .name')\n        assert len(result) == 0, \"Expected 0 results for 'xyz' in name\"\n\n        # Off-by-one tests - substring at the beginning\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"Ali\" in .name')\n        assert len(result) == 1, \"Expected 1 result for 'Ali' at beginning of 'Alice'\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1\"\n\n        # Off-by-one tests - substring at the end\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n           
                               *[str(x) for x in vec1],\n                                          'FILTER', '\"ice\" in .name')\n        assert len(result) == 1, \"Expected 1 result for 'ice' at end of 'Alice'\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1\"\n\n        # Off-by-one tests - exact match (entire string)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"Alice\" in .name')\n        assert len(result) == 1, \"Expected 1 result for exact match 'Alice' in 'Alice'\"\n        assert result[0].decode() == f'{self.test_key}:item:1', \"Expected item:1\"\n\n        # Off-by-one tests - single character\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"A\" in .name')\n        assert len(result) == 1, \"Expected 1 result for single char 'A' in 'Alice'\"\n\n        # Off-by-one tests - empty string (should match all strings)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"\" in .name')\n        assert len(result) == 3, \"Expected 3 results for empty string (matches all strings)\"\n\n        # Off-by-one tests - non-empty strings are never substrings of \"\"\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.name in \"\"')\n        assert len(result) == 0, \"Expected 0 results for empty string on the right of IN operator\"\n\n        # Off-by-one tests - empty string match empty string.\n        result = 
self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '\"\" in .name && \"\" in \"\"')\n        assert len(result) == 3, \"Expected empty string matching empty string\"\n\n        # Arithmetic operations - addition\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age + 10 > 40')\n        assert len(result) == 1, \"Expected 1 result for age + 10 > 40\"\n\n        # Arithmetic operations - multiplication\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age * 2 > 60')\n        assert len(result) == 1, \"Expected 1 result for age * 2 > 60\"\n\n        # Arithmetic operations - division\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age / 5 == 5')\n        assert len(result) == 1, \"Expected 1 result for age / 5 == 5\"\n\n        # Arithmetic operations - modulo\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age % 2 == 0')\n        assert len(result) == 1, \"Expected 1 result for age % 2 == 0\"\n\n        # Power operator\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.age ** 2 > 900')\n        assert len(result) == 1, \"Expected 1 result for age^2 > 900\"\n\n        # Missing 
attribute (should exclude items missing that attribute)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.missing_field == \"value\"')\n        assert len(result) == 0, \"Expected 0 results for missing_field == value\"\n\n        # No attribute set at all\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.any_field')\n        assert f'{self.test_key}:item:4' not in [item.decode() for item in result], \"Item with no attribute should be excluded\"\n\n        # Malformed JSON\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.any_field')\n        assert f'{self.test_key}:item:5' not in [item.decode() for item in result], \"Item with malformed JSON should be excluded\"\n\n        # Complex expression combining multiple operators\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '(.age > 20 and .age < 40) and (.city == \"Boston\" or .city == \"New York\")')\n        assert len(result) == 2, \"Expected 2 results for the complex expression\"\n        expected_items = [f'{self.test_key}:item:1', f'{self.test_key}:item:2']\n        assert set([item.decode() for item in result]) == set(expected_items), \"Expected item:1 and item:2 for the complex expression\"\n\n        # Parentheses to control operator precedence\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n              
                            'FILTER', '.age > (20 + 10)')\n        assert len(result) == 1, \"Expected 1 result for age > (20 + 10)\"\n\n        # Array access (arrays evaluate to true)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4,\n                                          *[str(x) for x in vec1],\n                                          'FILTER', '.scores')\n        assert len(result) == 3, \"Expected 3 results for .scores (arrays evaluate to true)\"\n"
  },
  {
    "path": "modules/vector-sets/tests/filter_int.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\nimport random\nimport math\nimport json\nimport time\n\nclass VSIMFilterAdvanced(TestCase):\n    def getname(self):\n        return \"VSIM FILTER comprehensive functionality testing\"\n\n    def estimated_runtime(self):\n        return 15  # This test might take up to 15 seconds for the large dataset\n\n    def setup(self):\n        super().setup()\n        self.dim = 32        # Vector dimension\n        self.count = 5000    # Number of vectors for large tests\n        self.small_count = 50 # Number of vectors for small/quick tests\n\n        # Categories for attributes\n        self.categories = [\"electronics\", \"furniture\", \"clothing\", \"books\", \"food\"]\n        self.cities = [\"New York\", \"London\", \"Tokyo\", \"Paris\", \"Berlin\", \"Sydney\", \"Toronto\", \"Singapore\"]\n        self.price_ranges = [(10, 50), (50, 200), (200, 1000), (1000, 5000)]\n        self.years = list(range(2000, 2025))\n\n    def create_attributes(self, index):\n        \"\"\"Create realistic attributes for a vector\"\"\"\n        category = random.choice(self.categories)\n        city = random.choice(self.cities)\n        min_price, max_price = random.choice(self.price_ranges)\n        price = round(random.uniform(min_price, max_price), 2)\n        year = random.choice(self.years)\n        in_stock = random.random() > 0.3  # 70% chance of being in stock\n        rating = round(random.uniform(1, 5), 1)\n        views = int(random.expovariate(1/1000))  # Exponential distribution for page views\n        tags = random.sample([\"popular\", \"sale\", \"new\", \"limited\", \"exclusive\", \"clearance\"],\n                           k=random.randint(0, 3))\n\n        # Add some specific patterns for testing\n        # Every 10th item has a specific property combination for testing\n        is_premium = (index % 10 == 0)\n\n        # Create attributes dictionary\n        attrs = {\n            \"id\": 
index,\n            \"category\": category,\n            \"location\": city,\n            \"price\": price,\n            \"year\": year,\n            \"in_stock\": in_stock,\n            \"rating\": rating,\n            \"views\": views,\n            \"tags\": tags\n        }\n\n        if is_premium:\n            attrs[\"is_premium\"] = True\n            attrs[\"special_features\"] = [\"premium\", \"warranty\", \"support\"]\n\n        # Add sub-categories for more complex filters\n        if category == \"electronics\":\n            attrs[\"subcategory\"] = random.choice([\"phones\", \"computers\", \"cameras\", \"audio\"])\n        elif category == \"furniture\":\n            attrs[\"subcategory\"] = random.choice([\"chairs\", \"tables\", \"sofas\", \"beds\"])\n        elif category == \"clothing\":\n            attrs[\"subcategory\"] = random.choice([\"shirts\", \"pants\", \"dresses\", \"shoes\"])\n\n        # Add some intentionally missing fields for testing\n        if random.random() > 0.9:  # 10% chance of missing price\n            del attrs[\"price\"]\n\n        # Some items have promotion field\n        if random.random() > 0.7:  # 30% chance of having a promotion\n            attrs[\"promotion\"] = random.choice([\"discount\", \"bundle\", \"gift\"])\n\n        # Create invalid JSON for a small percentage of vectors\n        if random.random() > 0.98:  # 2% chance of having invalid JSON\n            return \"{{invalid json}}\"\n\n        return json.dumps(attrs)\n\n    def create_vectors_with_attributes(self, key, count):\n        \"\"\"Create vectors and add attributes to them\"\"\"\n        vectors = []\n        names = []\n        attribute_map = {}  # To store attributes for verification\n\n        # Create vectors\n        for i in range(count):\n            vec = generate_random_vector(self.dim)\n            vectors.append(vec)\n            name = f\"{key}:item:{i}\"\n            names.append(name)\n\n            # Add to Redis\n            vec_bytes 
= struct.pack(f'{self.dim}f', *vec)\n            self.redis.execute_command('VADD', key, 'FP32', vec_bytes, name)\n\n            # Create and add attributes\n            attrs = self.create_attributes(i)\n            self.redis.execute_command('VSETATTR', key, name, attrs)\n\n            # Store attributes for later verification\n            try:\n                attribute_map[name] = json.loads(attrs) if '{' in attrs else None\n            except json.JSONDecodeError:\n                attribute_map[name] = None\n\n        return vectors, names, attribute_map\n\n    def filter_linear_search(self, vectors, names, query_vector, filter_expr, attribute_map, k=10):\n        \"\"\"Perform a linear search with filtering for verification\"\"\"\n        similarities = []\n        query_norm = math.sqrt(sum(x*x for x in query_vector))\n\n        if query_norm == 0:\n            return []\n\n        for i, vec in enumerate(vectors):\n            name = names[i]\n            attributes = attribute_map.get(name)\n\n            # Skip if doesn't match filter\n            if not self.matches_filter(attributes, filter_expr):\n                continue\n\n            vec_norm = math.sqrt(sum(x*x for x in vec))\n            if vec_norm == 0:\n                continue\n\n            dot_product = sum(a*b for a,b in zip(query_vector, vec))\n            cosine_sim = dot_product / (query_norm * vec_norm)\n            distance = 1.0 - cosine_sim\n            redis_similarity = 1.0 - (distance/2.0)\n            similarities.append((name, redis_similarity))\n\n        similarities.sort(key=lambda x: x[1], reverse=True)\n        return similarities[:k]\n\n    def matches_filter(self, attributes, filter_expr):\n        \"\"\"Filter matching for verification - uses Python eval to handle complex expressions\"\"\"\n        if attributes is None:\n            return False  # No attributes or invalid JSON\n\n        # Replace JSON path selectors with Python dictionary access\n        py_expr = 
filter_expr\n\n        # Handle `.field` notation (replace with attributes['field'])\n        i = 0\n        while i < len(py_expr):\n            if py_expr[i] == '.' and (i == 0 or not py_expr[i-1].isalnum()):\n                # Find the end of the selector (stops at operators or whitespace)\n                j = i + 1\n                while j < len(py_expr) and (py_expr[j].isalnum() or py_expr[j] == '_'):\n                    j += 1\n\n                if j > i + 1:  # Found a valid selector\n                    field = py_expr[i+1:j]\n                    # Use a safe access pattern that returns a default value based on context\n                    py_expr = py_expr[:i] + f\"attributes.get('{field}')\" + py_expr[j:]\n                    i = i + len(f\"attributes.get('{field}')\")\n                else:\n                    i += 1\n            else:\n                i += 1\n\n        # Convert the '!' not operator, shielding '!=' so it isn't mangled\n        py_expr = py_expr.replace('!=', '__NOTEQ__').replace('!', ' not ').replace('__NOTEQ__', '!=')\n\n        try:\n            # Custom evaluation that handles exceptions for missing fields\n            # by returning False for the entire expression\n            try:\n                result = eval(py_expr, {\"attributes\": attributes})\n                return bool(result)\n            except (TypeError, AttributeError):\n                # This typically happens when trying to compare None with\n                # numbers or other types, or when an attribute doesn't exist\n                return False\n            except Exception as e:\n                print(f\"Error 
evaluating filter expression '{filter_expr}' as '{py_expr}': {e}\")\n                return False\n\n        except Exception as e:\n            print(f\"Error evaluating filter expression '{filter_expr}' as '{py_expr}': {e}\")\n            return False\n\n    def safe_decode(self,item):\n        return item.decode() if isinstance(item, bytes) else item\n\n    def calculate_recall(self, redis_results, linear_results, k=10):\n        \"\"\"Calculate recall (percentage of correct results retrieved)\"\"\"\n        redis_set = set(self.safe_decode(item) for item in redis_results)\n        linear_set = set(item[0] for item in linear_results[:k])\n\n        if not linear_set:\n            return 1.0  # If no linear results, consider it perfect recall\n\n        intersection = redis_set.intersection(linear_set)\n        return len(intersection) / len(linear_set)\n\n    def test_recall_with_filter(self, filter_expr, ef=500, filter_ef=None):\n        \"\"\"Test recall for a given filter expression\"\"\"\n        # Create query vector\n        query_vec = generate_random_vector(self.dim)\n\n        # First, get ground truth using linear scan\n        linear_results = self.filter_linear_search(\n            self.vectors, self.names, query_vec, filter_expr, self.attribute_map, k=50)\n\n        # Calculate true selectivity from ground truth\n        true_selectivity = len(linear_results) / len(self.names) if self.names else 0\n\n        # Perform Redis search with filter\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) for x in query_vec])\n        cmd_args.extend(['COUNT', 50, 'WITHSCORES', 'EF', ef, 'FILTER', filter_expr])\n        if filter_ef:\n            cmd_args.extend(['FILTER-EF', filter_ef])\n\n        start_time = time.time()\n        redis_results = self.redis.execute_command(*cmd_args)\n        query_time = time.time() - start_time\n\n        # Convert Redis results to dict\n        redis_items = {}\n        for i in 
range(0, len(redis_results), 2):\n            key = redis_results[i].decode() if isinstance(redis_results[i], bytes) else redis_results[i]\n            score = float(redis_results[i+1])\n            redis_items[key] = score\n\n        # Calculate metrics\n        recall = self.calculate_recall(redis_items.keys(), linear_results)\n        selectivity = len(redis_items) / len(self.names) if redis_items else 0\n\n        # Compare against the true selectivity from linear scan\n        assert abs(selectivity - true_selectivity) < 0.1, \\\n            f\"Redis selectivity {selectivity:.3f} differs significantly from ground truth {true_selectivity:.3f}\"\n\n        # We expect high recall for standard parameters\n        if ef >= 500 and (filter_ef is None or filter_ef >= 1000):\n            try:\n                assert recall >= 0.7, \\\n                    f\"Low recall {recall:.2f} for filter '{filter_expr}'\"\n            except AssertionError as e:\n                # Get items found in each set\n                redis_items_set = set(redis_items.keys())\n                linear_items_set = set(item[0] for item in linear_results)\n\n                # Find items in each set\n                only_in_redis = redis_items_set - linear_items_set\n                only_in_linear = linear_items_set - redis_items_set\n                in_both = redis_items_set & linear_items_set\n\n                # Build comprehensive debug message\n                debug = f\"\\nGround Truth: {len(linear_results)} matching items (total vectors: {len(self.vectors)})\"\n                debug += f\"\\nRedis Found: {len(redis_items)} items with FILTER-EF: {filter_ef or 'default'}\"\n                debug += f\"\\nItems in both sets: {len(in_both)} (recall: {recall:.4f})\"\n                debug += f\"\\nItems only in Redis: {len(only_in_redis)}\"\n                debug += f\"\\nItems only in Ground Truth: {len(only_in_linear)}\"\n\n                # Show some example items from each set with their 
scores\n                if only_in_redis:\n                    debug += \"\\n\\nTOP 5 ITEMS ONLY IN REDIS:\"\n                    sorted_redis = sorted([(k, v) for k, v in redis_items.items()], key=lambda x: x[1], reverse=True)\n                    for i, (item, score) in enumerate(sorted_redis[:5]):\n                        if item in only_in_redis:\n                            debug += f\"\\n  {i+1}. {item} (Score: {score:.4f})\"\n\n                            # Show attribute that should match filter\n                            attr = self.attribute_map.get(item)\n                            if attr:\n                                debug += f\" - Attrs: {attr.get('category', 'N/A')}, Price: {attr.get('price', 'N/A')}\"\n\n                if only_in_linear:\n                    debug += \"\\n\\nTOP 5 ITEMS ONLY IN GROUND TRUTH:\"\n                    for i, (item, score) in enumerate(linear_results[:5]):\n                        if item in only_in_linear:\n                            debug += f\"\\n  {i+1}. {item} (Score: {score:.4f})\"\n\n                            # Show attribute that should match filter\n                            attr = self.attribute_map.get(item)\n                            if attr:\n                                debug += f\" - Attrs: {attr.get('category', 'N/A')}, Price: {attr.get('price', 'N/A')}\"\n\n                # Help identify parsing issues\n                debug += \"\\n\\nPARSING CHECK:\"\n                debug += f\"\\nRedis command: VSIM {self.test_key} VALUES {self.dim} [...] 
FILTER '{filter_expr}'\"\n\n                # Check for WITHSCORES handling issues\n                if len(redis_results) > 0 and len(redis_results) % 2 == 0:\n                    debug += f\"\\nRedis returned {len(redis_results)} items (looks like item,score pairs)\"\n                    debug += f\"\\nFirst few results: {redis_results[:4]}\"\n\n                # Check the filter implementation\n                debug += \"\\n\\nFILTER IMPLEMENTATION CHECK:\"\n                debug += f\"\\nFilter expression: '{filter_expr}'\"\n                debug += \"\\nSample attribute matches from attribute_map:\"\n                count_matching = 0\n                for i, (name, attrs) in enumerate(self.attribute_map.items()):\n                    if attrs and self.matches_filter(attrs, filter_expr):\n                        count_matching += 1\n                        if i < 3:  # Show first 3 matches\n                            debug += f\"\\n  - {name}: {attrs}\"\n                debug += f\"\\nTotal items matching filter in attribute_map: {count_matching}\"\n\n                # Check if results array handling could be wrong\n                debug += \"\\n\\nRESULT ARRAYS CHECK:\"\n                if len(linear_results) >= 1:\n                    debug += f\"\\nlinear_results[0]: {linear_results[0]}\"\n                    if isinstance(linear_results[0], tuple) and len(linear_results[0]) == 2:\n                        debug += \" (correct tuple format: (name, score))\"\n                    else:\n                        debug += \" (UNEXPECTED FORMAT!)\"\n\n                # Debug sort order\n                debug += \"\\n\\nSORTING CHECK:\"\n                if len(linear_results) >= 2:\n                    debug += f\"\\nGround truth first item score: {linear_results[0][1]}\"\n                    debug += f\"\\nGround truth second item score: {linear_results[1][1]}\"\n                    debug += f\"\\nCorrectly sorted by similarity? 
{linear_results[0][1] >= linear_results[1][1]}\"\n\n                # Re-raise with detailed information\n                raise AssertionError(str(e) + debug)\n\n        return recall, selectivity, query_time, len(redis_items)\n\n    def test(self):\n        print(f\"\\nRunning comprehensive VSIM FILTER tests...\")\n\n        # Create a larger dataset for testing\n        print(f\"Creating dataset with {self.count} vectors and attributes...\")\n        self.vectors, self.names, self.attribute_map = self.create_vectors_with_attributes(\n            self.test_key, self.count)\n\n        # ==== 1. Recall and Precision Testing ====\n        print(\"Testing recall for various filters...\")\n\n        # Test basic filters with different selectivity\n        results = {}\n        results[\"category\"] = self.test_recall_with_filter('.category == \"electronics\"')\n        results[\"price_high\"] = self.test_recall_with_filter('.price > 1000')\n        results[\"in_stock\"] = self.test_recall_with_filter('.in_stock')\n        results[\"rating\"] = self.test_recall_with_filter('.rating >= 4')\n        results[\"complex1\"] = self.test_recall_with_filter('.category == \"electronics\" and .price < 500')\n\n        print(\"Filter | Recall | Selectivity | Time (ms) | Results\")\n        print(\"----------------------------------------------------\")\n        for name, (recall, selectivity, time_ms, count) in results.items():\n            print(f\"{name:7} | {recall:.3f} | {selectivity:.3f} | {time_ms*1000:.1f} | {count}\")\n\n        # ==== 2. 
Filter Selectivity Performance ====\n        print(\"\\nTesting filter selectivity performance...\")\n\n        # High selectivity (very few matches)\n        high_sel_recall, _, high_sel_time, _ = self.test_recall_with_filter('.is_premium')\n\n        # Medium selectivity\n        med_sel_recall, _, med_sel_time, _ = self.test_recall_with_filter('.price > 100 and .price < 1000')\n\n        # Low selectivity (many matches)\n        low_sel_recall, _, low_sel_time, _ = self.test_recall_with_filter('.year > 2000')\n\n        print(f\"High selectivity recall: {high_sel_recall:.3f}, time: {high_sel_time*1000:.1f}ms\")\n        print(f\"Med selectivity recall: {med_sel_recall:.3f}, time: {med_sel_time*1000:.1f}ms\")\n        print(f\"Low selectivity recall: {low_sel_recall:.3f}, time: {low_sel_time*1000:.1f}ms\")\n\n        # ==== 3. FILTER-EF Parameter Testing ====\n        print(\"\\nTesting FILTER-EF parameter...\")\n\n        # Test with different FILTER-EF values\n        filter_expr = '.category == \"electronics\" and .price > 200'\n        ef_values = [100, 500, 2000, 5000]\n\n        print(\"FILTER-EF | Recall | Time (ms)\")\n        print(\"-----------------------------\")\n        for filter_ef in ef_values:\n            recall, _, query_time, _ = self.test_recall_with_filter(\n                filter_expr, ef=500, filter_ef=filter_ef)\n            print(f\"{filter_ef:9} | {recall:.3f} | {query_time*1000:.1f}\")\n\n        # Assert that higher FILTER-EF generally gives better recall\n        low_ef_recall, _, _, _ = self.test_recall_with_filter(filter_expr, filter_ef=100)\n        high_ef_recall, _, _, _ = self.test_recall_with_filter(filter_expr, filter_ef=5000)\n\n        # This might not always be true due to randomness, but generally holds\n        # We use a softer assertion to avoid flaky tests\n        assert high_ef_recall >= low_ef_recall * 0.8, \\\n            f\"Higher FILTER-EF should generally give better recall: {high_ef_recall:.3f} vs 
{low_ef_recall:.3f}\"\n\n        # ==== 4. Complex Filter Expressions ====\n        print(\"\\nTesting complex filter expressions...\")\n\n        # Test a variety of complex expressions\n        complex_filters = [\n            '.price > 100 and (.category == \"electronics\" or .category == \"furniture\")',\n            '(.rating > 4 and .in_stock) or (.price < 50 and .views > 1000)',\n            '.category in [\"electronics\", \"clothing\"] and .price > 200 and .rating >= 3',\n            '(.category == \"electronics\" and .subcategory == \"phones\") or (.category == \"furniture\" and .price > 1000)',\n            '.year > 2010 and !(.price < 100) and .in_stock'\n        ]\n\n        print(\"Expression | Results | Time (ms)\")\n        print(\"-----------------------------\")\n        for i, expr in enumerate(complex_filters):\n            try:\n                _, _, query_time, result_count = self.test_recall_with_filter(expr)\n                print(f\"Complex {i+1} | {result_count:7} | {query_time*1000:.1f}\")\n            except Exception as e:\n                print(f\"Complex {i+1} | Error: {str(e)}\")\n\n        # ==== 5. Attribute Type Testing ====\n        print(\"\\nTesting different attribute types...\")\n\n        type_filters = [\n            ('.price > 500', \"Numeric\"),\n            ('.category == \"books\"', \"String equality\"),\n            ('.in_stock', \"Boolean\"),\n            ('.tags in [\"sale\", \"new\"]', \"Array membership\"),\n            ('.rating * 2 > 8', \"Arithmetic\")\n        ]\n\n        for expr, type_name in type_filters:\n            try:\n                _, _, query_time, result_count = self.test_recall_with_filter(expr)\n                print(f\"{type_name:16} | {expr:30} | {result_count:5} results | {query_time*1000:.1f}ms\")\n            except Exception as e:\n                print(f\"{type_name:16} | {expr:30} | Error: {str(e)}\")\n\n        # ==== 6. 
Filter + Count Interaction ====\n        print(\"\\nTesting COUNT parameter with filters...\")\n\n        filter_expr = '.category == \"electronics\"'\n        counts = [5, 20, 100]\n\n        for count in counts:\n            query_vec = generate_random_vector(self.dim)\n            cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n            cmd_args.extend([str(x) for x in query_vec])\n            cmd_args.extend(['COUNT', count, 'WITHSCORES', 'FILTER', filter_expr])\n\n            results = self.redis.execute_command(*cmd_args)\n            result_count = len(results) // 2  # Divide by 2 because WITHSCORES returns pairs\n\n            # We expect result count to be at most the requested count\n            assert result_count <= count, f\"Got {result_count} results with COUNT {count}\"\n            print(f\"COUNT {count:3} | Got {result_count:3} results\")\n\n        # ==== 7. Edge Cases ====\n        print(\"\\nTesting edge cases...\")\n\n        # Test with no matching items\n        no_match_expr = '.category == \"nonexistent_category\"'\n        results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', self.dim,\n                                           *[str(x) for x in generate_random_vector(self.dim)],\n                                           'FILTER', no_match_expr)\n        assert len(results) == 0, f\"Expected 0 results for non-matching filter, got {len(results)}\"\n        print(f\"No matching items: {len(results)} results (expected 0)\")\n\n        # Test with invalid filter syntax\n        try:\n            self.redis.execute_command('VSIM', self.test_key, 'VALUES', self.dim,\n                                     *[str(x) for x in generate_random_vector(self.dim)],\n                                     'FILTER', '.category === \"books\"')  # Triple equals is invalid\n            assert False, \"Expected error for invalid filter syntax\"\n        except AssertionError:\n            raise  # The command unexpectedly succeeded; don't swallow the test failure\n        except Exception:\n            print(\"Invalid filter syntax correctly raised an 
error\")\n\n        # Test with extremely long complex expression\n        long_expr = ' and '.join([f'.rating > {i/10}' for i in range(10)])\n        try:\n            results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', self.dim,\n                                               *[str(x) for x in generate_random_vector(self.dim)],\n                                               'FILTER', long_expr)\n            print(f\"Long expression: {len(results)} results\")\n        except Exception as e:\n            print(f\"Long expression error: {str(e)}\")\n\n        print(\"\\nComprehensive VSIM FILTER tests completed successfully\")\n\n\nclass VSIMFilterSelectivityTest(TestCase):\n    def getname(self):\n        return \"VSIM FILTER selectivity performance benchmark\"\n\n    def estimated_runtime(self):\n        return 8  # This test might take up to 8 seconds\n\n    def setup(self):\n        super().setup()\n        self.dim = 32\n        self.count = 10000\n        self.test_key = f\"{self.test_key}:selectivity\"  # Use a different key\n\n    def create_vector_with_age_attribute(self, name, age):\n        \"\"\"Create a vector with a specific age attribute\"\"\"\n        vec = generate_random_vector(self.dim)\n        vec_bytes = struct.pack(f'{self.dim}f', *vec)\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, name)\n        self.redis.execute_command('VSETATTR', self.test_key, name, json.dumps({\"age\": age}))\n\n    def test(self):\n        print(\"\\nRunning VSIM FILTER selectivity benchmark...\")\n\n        # Create a dataset where we control the exact selectivity\n        print(f\"Creating controlled dataset with {self.count} vectors...\")\n\n        # Create vectors with age attributes from 1 to 100\n        for i in range(self.count):\n            age = (i % 100) + 1  # Ages from 1 to 100\n            name = f\"{self.test_key}:item:{i}\"\n            self.create_vector_with_age_attribute(name, age)\n\n        # 
Create a query vector\n        query_vec = generate_random_vector(self.dim)\n\n        # Test filters with different selectivities\n        selectivities = [0.01, 0.05, 0.10, 0.25, 0.50, 0.75, 0.99]\n\n        print(\"\\nSelectivity | Filter          | Results | Time (ms)\")\n        print(\"--------------------------------------------------\")\n\n        for target_selectivity in selectivities:\n            # Calculate age threshold for desired selectivity\n            # For example, age <= 10 gives 10% selectivity\n            age_threshold = int(target_selectivity * 100)\n            filter_expr = f'.age <= {age_threshold}'\n\n            # Run query and measure time\n            start_time = time.time()\n            cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n            cmd_args.extend([str(x) for x in query_vec])\n            cmd_args.extend(['COUNT', 100, 'FILTER', filter_expr])\n\n            results = self.redis.execute_command(*cmd_args)\n            query_time = time.time() - start_time\n\n            print(f\"{target_selectivity:.2f}      | {filter_expr:15} | {len(results):7} | {query_time*1000:.1f}\")\n\n            # Add assertion to ensure reasonable behavior for different selectivities\n            # For very selective queries (1%), we might need more exploration\n            if target_selectivity <= 0.05:\n                # For very selective queries, ensure we can find some results\n                assert len(results) > 0, f\"No results found for {filter_expr}\"\n            else:\n                # For less selective queries, performance should be reasonable\n                assert query_time < 1.0, f\"Query too slow: {query_time:.3f}s for {filter_expr}\"\n\n        print(\"\\nSelectivity benchmark completed successfully\")\n\n\n
class VSIMFilterComparisonTest(TestCase):\n    def getname(self):\n        return \"VSIM FILTER EF parameter comparison\"\n\n    def estimated_runtime(self):\n        return 8  # This test might take up to 8 seconds\n\n    def setup(self):\n        super().setup()\n        self.dim = 32\n        self.count = 5000\n        self.test_key = f\"{self.test_key}:efparams\"  # Use a different key\n\n    def create_dataset(self):\n        \"\"\"Create a dataset with specific attribute patterns for testing FILTER-EF\"\"\"\n        vectors = []\n        names = []\n\n        # Create vectors with category and quality score attributes\n        for i in range(self.count):\n            vec = generate_random_vector(self.dim)\n            name = f\"{self.test_key}:item:{i}\"\n\n            # Add vector to Redis\n            vec_bytes = struct.pack(f'{self.dim}f', *vec)\n            self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, name)\n\n            # Create attributes - we want a very selective filter.\n            # Only ~1% of items have category=premium AND quality>90:\n            # 10% are premium, and 10% of those have quality > 90.\n            category = \"premium\" if random.random() < 0.1 else random.choice([\"standard\", \"economy\", \"basic\"])\n            quality = random.randint(1, 100)\n\n            attrs = {\n                \"id\": i,\n                \"category\": category,\n                \"quality\": quality\n            }\n\n            self.redis.execute_command('VSETATTR', self.test_key, name, json.dumps(attrs))\n            vectors.append(vec)\n            names.append(name)\n\n        return vectors, names\n\n    def test(self):\n        print(\"\\nRunning VSIM FILTER-EF parameter comparison...\")\n\n        # Create dataset\n        vectors, names = self.create_dataset()\n\n        # Create a selective filter that matches ~1% of items\n        filter_expr = '.category == \"premium\" and .quality > 90'\n\n        # Create query vector\n        query_vec = generate_random_vector(self.dim)\n\n        # Test different FILTER-EF values, highest first, so the first run\n        # (FILTER-EF 5000) can serve as the recall baseline\n        ef_values = [5000, 1000, 500, 100, 50]\n
        results = []\n\n        print(\"\\nFILTER-EF | Results | Time (ms) | Notes\")\n        print(\"---------------------------------------\")\n\n        baseline_count = None\n\n        for ef in ef_values:\n            # Run query and measure time\n            start_time = time.time()\n            cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n            cmd_args.extend([str(x) for x in query_vec])\n            cmd_args.extend(['COUNT', 100, 'FILTER', filter_expr, 'FILTER-EF', ef])\n\n            query_results = self.redis.execute_command(*cmd_args)\n            query_time = time.time() - start_time\n\n            # The first iteration (highest FILTER-EF) sets the recall baseline\n            if baseline_count is None:\n                baseline_count = len(query_results)\n\n            recall_rate = len(query_results) / max(1, baseline_count)\n\n            notes = \"\"\n            if ef == 5000:\n                notes = \"Baseline\"\n            elif recall_rate < 0.5:\n                notes = \"Low recall!\"\n\n            print(f\"{ef:9} | {len(query_results):7} | {query_time*1000:.1f} | {notes}\")\n            results.append((ef, len(query_results), query_time))\n\n        # If we have enough results at the highest EF, check that higher EF\n        # values find at least as many results as lower ones\n        if results[0][1] >= 5:  # At least 5 results for highest EF\n            # Extract result counts\n            result_counts = [r[1] for r in results]\n\n            # The first result (highest EF) should typically find at least as many\n            # results as the last (lowest EF), but we use a soft assertion to\n            # avoid flaky tests\n            assert result_counts[0] >= result_counts[-1], \\\n                f\"Higher FILTER-EF should find at least as many results: {result_counts[0]} vs {result_counts[-1]}\"\n\n        print(\"\\nFILTER-EF parameter comparison completed successfully\")\n"
  },
  {
    "path": "modules/vector-sets/tests/large_scale.py",
    "content": "from test import TestCase, fill_redis_with_vectors, generate_random_vector\nimport random\n\nclass LargeScale(TestCase):\n    def getname(self):\n        return \"Large Scale Comparison\"\n\n    def estimated_runtime(self):\n        return 10\n\n    def test(self):\n        dim = 300\n        count = 20000\n        k = 50\n\n        # Fill Redis and get reference data for comparison\n        random.seed(42)  # Make test deterministic\n        data = fill_redis_with_vectors(self.redis, self.test_key, count, dim)\n\n        # Generate query vector\n        query_vec = generate_random_vector(dim)\n\n        # Get results from Redis with good exploration factor\n        redis_raw = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim, \n                                             *[str(x) for x in query_vec],\n                                             'COUNT', k, 'WITHSCORES', 'EF', 500)\n\n        # Convert Redis results to dict\n        redis_results = {}\n        for i in range(0, len(redis_raw), 2):\n            key = redis_raw[i].decode()\n            score = float(redis_raw[i+1])\n            redis_results[key] = score\n\n        # Get results from linear scan\n        linear_results = data.find_k_nearest(query_vec, k)\n        linear_items = {name: score for name, score in linear_results}\n\n        # Compare overlap\n        redis_set = set(redis_results.keys())\n        linear_set = set(linear_items.keys())\n        overlap = len(redis_set & linear_set)\n\n        # If test fails, print comparison for debugging\n        if overlap < k * 0.7:\n            data.print_comparison({'items': redis_results, 'query_vector': query_vec}, k)\n\n        assert overlap >= k * 0.7, \\\n            f\"Expected at least 70% overlap in top {k} results, got {overlap/k*100:.1f}%\"\n\n        # Verify scores for common items\n        for item in redis_set & linear_set:\n            redis_score = redis_results[item]\n            linear_score = 
linear_items[item]\n            assert abs(redis_score - linear_score) < 0.01, \\\n                f\"Score mismatch for {item}: Redis={redis_score:.3f} Linear={linear_score:.3f}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/memory_usage.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass MemoryUsageTest(TestCase):\n    def getname(self):\n        return \"[regression] MEMORY USAGE with attributes\"\n\n    def test(self):\n        # Generate random vectors\n        vec1 = generate_random_vector(4)\n        vec2 = generate_random_vector(4)\n        vec_bytes1 = struct.pack('4f', *vec1)\n        vec_bytes2 = struct.pack('4f', *vec2)\n\n        # Add vectors to the key, one with attribute, one without\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes1, f'{self.test_key}:item:1')\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes2, f'{self.test_key}:item:2', 'SETATTR', '{\"color\":\"red\"}')\n\n        # Get memory usage for the key\n        try:\n            memory_usage = self.redis.execute_command('MEMORY', 'USAGE', self.test_key)\n            # If we got here without exception, the command worked\n            assert memory_usage > 0, \"MEMORY USAGE should return a positive value\"\n\n            # Add more attributes to increase complexity\n            self.redis.execute_command('VSETATTR', self.test_key, f'{self.test_key}:item:1', '{\"color\":\"blue\",\"size\":10}')\n\n            # Check memory usage again\n            new_memory_usage = self.redis.execute_command('MEMORY', 'USAGE', self.test_key)\n            assert new_memory_usage > 0, \"MEMORY USAGE should still return a positive value after setting attributes\"\n\n            # Memory usage should be higher after adding attributes\n            assert new_memory_usage > memory_usage, \"Memory usage should increase after adding attributes\"\n\n        except AssertionError:\n            # Let assertion failures propagate with their own messages\n            raise\n        except Exception as e:\n            raise AssertionError(f\"MEMORY USAGE command failed: {str(e)}\") from e\n"
  },
  {
    "path": "modules/vector-sets/tests/node_update.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\nimport math\nimport random\n\nclass VectorUpdateAndClusters(TestCase):\n   def getname(self):\n       return \"VADD vector update with cluster relocation\"\n\n   def estimated_runtime(self):\n       return 2.0  # Should take around 2 seconds\n\n   def generate_cluster_vector(self, base_vec, noise=0.1):\n       \"\"\"Generate a vector that's similar to base_vec with some noise.\"\"\"\n       vec = [x + random.gauss(0, noise) for x in base_vec]\n       # Normalize\n       norm = math.sqrt(sum(x*x for x in vec))\n       return [x/norm for x in vec]\n\n   def test(self):\n       dim = 128\n       vectors_per_cluster = 5000\n\n       # Create two very different base vectors for our clusters\n       cluster1_base = generate_random_vector(dim)\n       cluster2_base = [-x for x in cluster1_base]  # Opposite direction\n\n       # Add vectors from first cluster\n       for i in range(vectors_per_cluster):\n           vec = self.generate_cluster_vector(cluster1_base)\n           vec_bytes = struct.pack(f'{dim}f', *vec)\n           self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes,\n                                    f'{self.test_key}:cluster1:{i}')\n\n       # Add vectors from second cluster\n       for i in range(vectors_per_cluster):\n           vec = self.generate_cluster_vector(cluster2_base)\n           vec_bytes = struct.pack(f'{dim}f', *vec)\n           self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes,\n                                    f'{self.test_key}:cluster2:{i}')\n\n       # Pick a test vector from cluster1\n       test_key = f'{self.test_key}:cluster1:0'\n\n       # Verify it's in cluster1 using VSIM\n       initial_vec = self.generate_cluster_vector(cluster1_base)\n       results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                          *[str(x) for x in initial_vec],\n                      
                    'COUNT', 100, 'WITHSCORES')\n\n       # Count how many cluster1 items are in top results\n       cluster1_count = sum(1 for i in range(0, len(results), 2)\n                          if b'cluster1' in results[i])\n       assert cluster1_count > 80, \"Initial clustering check failed\"\n\n       # Now update the test vector to be in cluster2\n       new_vec = self.generate_cluster_vector(cluster2_base, noise=0.05)\n       vec_bytes = struct.pack(f'{dim}f', *new_vec)\n       self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, test_key)\n\n       # Verify the embedding was actually updated using VEMB\n       emb_result = self.redis.execute_command('VEMB', self.test_key, test_key)\n       updated_vec = [float(x) for x in emb_result]\n\n       # Verify updated vector matches what we inserted\n       dot_product = sum(a*b for a,b in zip(updated_vec, new_vec))\n       similarity = dot_product / (math.sqrt(sum(x*x for x in updated_vec)) *\n                                 math.sqrt(sum(x*x for x in new_vec)))\n       assert similarity > 0.9, \"Vector was not properly updated\"\n\n       # Verify it's now in cluster2 using VSIM\n       results = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                          *[str(x) for x in cluster2_base],\n                                          'COUNT', 100, 'WITHSCORES')\n\n       # Verify our updated vector is among top results\n       found = False\n       for i in range(0, len(results), 2):\n           if results[i].decode() == test_key:\n               found = True\n               similarity = float(results[i+1])\n               assert similarity > 0.80, f\"Updated vector has low similarity: {similarity}\"\n               break\n\n       assert found, \"Updated vector not found in cluster2 proximity\"\n"
  },
  {
    "path": "modules/vector-sets/tests/persistence.py",
    "content": "from test import TestCase, fill_redis_with_vectors, generate_random_vector\nimport random\n\nclass HNSWPersistence(TestCase):\n    def getname(self):\n        return \"HNSW Persistence\"\n\n    def estimated_runtime(self):\n        return 30\n\n    def _verify_results(self, key, dim, query_vec, reduced_dim=None):\n        \"\"\"Run a query and return a dict of item -> score.\n\n        Both normal and projected sets are queried with the original\n        dimension: with REDUCE, the query vector is projected internally,\n        so reduced_dim only documents the call site.\n        \"\"\"\n        k = 10\n        args = ['VSIM', key, 'VALUES', dim]\n        args.extend([str(x) for x in query_vec])\n        args.extend(['COUNT', k, 'WITHSCORES'])\n        results = self.redis.execute_command(*args)\n\n        results_dict = {}\n        for i in range(0, len(results), 2):\n            item = results[i].decode()\n            score = float(results[i+1])\n            results_dict[item] = score\n        return results_dict\n\n    def test(self):\n        # Setup dimensions\n        dim = 128\n        reduced_dim = 32\n        count = 5000\n        random.seed(42)\n\n        # Create two datasets - one normal and one with dimension reduction\n        normal_data = fill_redis_with_vectors(self.redis, f\"{self.test_key}:normal\", count, dim)\n        projected_data = fill_redis_with_vectors(self.redis, f\"{self.test_key}:projected\",\n                                               count, dim, reduced_dim)\n\n        # Generate query vectors we'll use before and after reload\n        query_vec_normal = generate_random_vector(dim)\n        query_vec_projected = generate_random_vector(dim)\n\n
        # Get initial results for both sets\n        initial_normal = self._verify_results(f\"{self.test_key}:normal\", \n                                            dim, query_vec_normal)\n        initial_projected = self._verify_results(f\"{self.test_key}:projected\", \n                                               dim, query_vec_projected, reduced_dim)\n\n        # Force Redis to save and reload the dataset\n        self.redis.execute_command('DEBUG', 'RELOAD')\n\n        # Verify results after reload\n        reloaded_normal = self._verify_results(f\"{self.test_key}:normal\", \n                                             dim, query_vec_normal)\n        reloaded_projected = self._verify_results(f\"{self.test_key}:projected\", \n                                                dim, query_vec_projected, reduced_dim)\n\n        # Verify normal vectors results\n        assert len(initial_normal) == len(reloaded_normal), \\\n            \"Normal vectors: Result count mismatch before/after reload\"\n\n        for key in initial_normal:\n            assert key in reloaded_normal, f\"Normal vectors: Missing item after reload: {key}\"\n            assert abs(initial_normal[key] - reloaded_normal[key]) < 0.0001, \\\n                f\"Normal vectors: Score mismatch for {key}: \" + \\\n                f\"before={initial_normal[key]:.6f}, after={reloaded_normal[key]:.6f}\"\n\n        # Verify projected vectors results\n        assert len(initial_projected) == len(reloaded_projected), \\\n            \"Projected vectors: Result count mismatch before/after reload\"\n\n        for key in initial_projected:\n            assert key in reloaded_projected, \\\n                f\"Projected vectors: Missing item after reload: {key}\"\n            assert abs(initial_projected[key] - reloaded_projected[key]) < 0.0001, \\\n                f\"Projected vectors: Score mismatch for {key}: \" + \\\n                f\"before={initial_projected[key]:.6f}, after={reloaded_projected[key]:.6f}\"\n\n        self.redis.delete(f\"{self.test_key}:normal\")\n        self.redis.delete(f\"{self.test_key}:projected\")\n"
  },
  {
    "path": "modules/vector-sets/tests/q8_similarity.py",
    "content": "from test import TestCase\n\nclass Q8Similarity(TestCase):\n    def getname(self):\n        return \"Q8 quantization: VSIM reported distance makes sense with 4D vectors\"\n\n    def test(self):\n        # Add two very similar vectors, one different\n        # Using same test vectors as basic_similarity.py for comparison\n        vec1 = [1, 0, 0, 0]\n        vec2 = [0.99, 0.01, 0, 0]\n        vec3 = [0.1, 1, -1, 0.5]\n\n        # Add vectors using VALUES format with Q8 quantization\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec1], f'{self.test_key}:item:1', 'Q8')\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec2], f'{self.test_key}:item:2', 'Q8')\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 4, \n                                 *[str(x) for x in vec3], f'{self.test_key}:item:3', 'Q8')\n\n        # Query similarity with vec1\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', 4, \n                                          *[str(x) for x in vec1], 'WITHSCORES')\n\n        # Convert results to dictionary\n        results_dict = {}\n        for i in range(0, len(result), 2):\n            key = result[i].decode()\n            score = float(result[i+1])\n            results_dict[key] = score\n\n        # Verify results (same expectations as float32, allowing for quantization error)\n        assert results_dict[f'{self.test_key}:item:1'] > 0.99, \"Self-similarity should be very high (Q8)\"\n        assert results_dict[f'{self.test_key}:item:2'] > 0.99, \"Similar vector should have high similarity (Q8)\"\n        assert results_dict[f'{self.test_key}:item:3'] < 0.80, \"Not very similar vector should have low similarity (Q8)\"\n\n        # Test extreme values with 512 dimensions to stress-test overflow safety\n        vec4 = [1.0] * 512  # All +127 
after quantization\n        vec5 = [-1.0] * 512  # All -127 after quantization\n        vec6 = [1.0, -1.0] * 256  # Alternating +127, -127\n\n        # Add vectors using VALUES format with Q8 quantization\n        self.redis.execute_command('VADD', f'{self.test_key}:extreme', 'VALUES', 512,\n                                 *[str(x) for x in vec4], f'{self.test_key}:extreme:vec4', 'Q8')\n        self.redis.execute_command('VADD', f'{self.test_key}:extreme', 'VALUES', 512,\n                                 *[str(x) for x in vec5], f'{self.test_key}:extreme:vec5', 'Q8')\n        self.redis.execute_command('VADD', f'{self.test_key}:extreme', 'VALUES', 512,\n                                 *[str(x) for x in vec6], f'{self.test_key}:extreme:vec6', 'Q8')\n\n        # Query vec4 against itself - worst-case positive accumulation (512 * 127 * 127 = 8,258,048)\n        result_vec4 = self.redis.execute_command('VSIM', f'{self.test_key}:extreme', 'VALUES', 512,\n                                               *[str(x) for x in vec4], 'WITHSCORES')\n        results_vec4 = {}\n        for i in range(0, len(result_vec4), 2):\n            key = result_vec4[i].decode()\n            score = float(result_vec4[i+1])\n            results_vec4[key] = score\n\n        # Verify extreme value handling\n        # VSIM returns similarity = 1.0 - distance/2.0, so:\n        # - Distance 0 (identical) → similarity 1.0\n        # - Distance 2 (opposite) → similarity 0.0\n        assert results_vec4[f'{self.test_key}:extreme:vec4'] > 0.999, \\\n            f\"vec4 self-similarity should be very high, got {results_vec4[f'{self.test_key}:extreme:vec4']}\"\n        assert results_vec4[f'{self.test_key}:extreme:vec5'] < 0.01, \\\n            f\"vec4 vs vec5 (opposite extremes) should be near 0, got {results_vec4[f'{self.test_key}:extreme:vec5']}\"\n        \n        # Alternating pattern should result in mid-range similarity (perpendicular)\n        assert 0.4 < 
results_vec4[f'{self.test_key}:extreme:vec6'] < 0.6, \\\n            f\"vec4 vs vec6 (alternating) should be near 0.5, got {results_vec4[f'{self.test_key}:extreme:vec6']}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/q8_vectorization.py",
    "content": "from test import TestCase\n\nclass Q8Vectorization(TestCase):\n    def getname(self):\n        return \"Q8 quantization: verify vectorized vs scalar paths produce consistent results\"\n\n    def test(self):\n        # Test with different dimensions to exercise different code paths and boundaries:\n        # - dim=16: Scalar path (< 32)\n        # - dim=31: Largest scalar-only dimension (boundary)\n        # - dim=32: Smallest AVX2 dimension, no remainder (boundary)\n        # - dim=33: AVX2 with 1-element remainder\n        # - dim=63: AVX2 with 31-element remainder (largest AVX2-only)\n        # - dim=64: Smallest AVX512 dimension, no remainder (boundary)\n        # - dim=65: AVX512 with 1-element remainder\n        # - dim=128: AVX512 path with no remainder\n        # - dim=256, dim=512: Large dimensions to test overflow prevention\n        \n        test_dims = [16, 31, 32, 33, 63, 64, 65, 128, 256, 512]\n        \n        for dim in test_dims:\n            key = f'{self.test_key}:dim{dim}'\n            \n            # Test vectors with extreme values to verify overflow prevention:\n            # vec1: all +1.0 -> quantizes to +127 (max positive int8)\n            # vec2: all +0.99 -> quantizes to ~+126 (similar to vec1)\n            # vec3: all -1.0 -> quantizes to -127/-128 (max negative int8)\n            # vec4: alternating +1.0/-1.0 -> alternating +127/-127 (tests mixed signs)\n            vec1 = [1.0] * dim       # All max positive\n            vec2 = [0.99] * dim      # Similar to vec1\n            vec3 = [-1.0] * dim      # All max negative (opposite direction)\n            vec4 = [1.0 if i % 2 == 0 else -1.0 for i in range(dim)]  # Alternating extreme values\n            \n            # Add vectors with Q8 quantization\n            self.redis.execute_command('VADD', key, 'VALUES', dim, \n                                     *[str(x) for x in vec1], f'{key}:item:1', 'Q8')\n            self.redis.execute_command('VADD', key, 'VALUES', dim, 
\n                                     *[str(x) for x in vec2], f'{key}:item:2', 'Q8')\n            self.redis.execute_command('VADD', key, 'VALUES', dim, \n                                     *[str(x) for x in vec3], f'{key}:item:3', 'Q8')\n            self.redis.execute_command('VADD', key, 'VALUES', dim, \n                                     *[str(x) for x in vec4], f'{key}:item:4', 'Q8')\n            \n            # Query similarity using vec1 (all max positive values)\n            # This exercises worst-case positive accumulation: dim * 127 * 127\n            result = self.redis.execute_command('VSIM', key, 'VALUES', dim, \n                                              *[str(x) for x in vec1], 'WITHSCORES')\n            \n            # Convert results to dictionary\n            results_dict = {}\n            for i in range(0, len(result), 2):\n                k = result[i].decode()\n                score = float(result[i+1])\n                results_dict[k] = score\n            \n            # Verify results - these would be wrong if overflow occurred\n            # Self-similarity should be ~1.0 (identical vectors)\n            assert results_dict[f'{key}:item:1'] > 0.99, \\\n                f\"Dim {dim}: Self-similarity too low: {results_dict[f'{key}:item:1']}\"\n            \n            # Similar vector should have high similarity\n            assert results_dict[f'{key}:item:2'] > 0.99, \\\n                f\"Dim {dim}: Similar vector similarity too low: {results_dict[f'{key}:item:2']}\"\n            \n            # Opposite vector should have very low similarity (~0.0)\n            # With overflow bug, this could give incorrect positive values\n            assert results_dict[f'{key}:item:3'] < 0.1, \\\n                f\"Dim {dim}: Opposite vector similarity too high: {results_dict[f'{key}:item:3']}\"\n            \n            # Alternating vector: dot product sums to ~0, so similarity ~0.5\n            # (127*127) + (127*-127) + ... 
= 0, normalized gives ~0.5\n            assert 0.4 < results_dict[f'{key}:item:4'] < 0.6, \\\n                f\"Dim {dim}: Alternating vector similarity unexpected: {results_dict[f'{key}:item:4']}\"\n            \n            # Also query with the alternating pattern to verify its self-similarity\n            result_alt = self.redis.execute_command('VSIM', key, 'VALUES', dim,\n                                                  *[str(x) for x in vec4], 'WITHSCORES')\n            results_alt = {}\n            for i in range(0, len(result_alt), 2):\n                k = result_alt[i].decode()\n                score = float(result_alt[i+1])\n                results_alt[k] = score\n            \n            assert results_alt[f'{key}:item:4'] > 0.99, \\\n                f\"Dim {dim}: Alternating self-similarity too low: {results_alt[f'{key}:item:4']}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/reduce.py",
    "content": "from test import TestCase, fill_redis_with_vectors, generate_random_vector\n\nclass Reduce(TestCase):\n    def getname(self):\n        return \"Dimension Reduction\"\n\n    def estimated_runtime(self):\n        return 0.2\n\n    def test(self):\n        original_dim = 100\n        reduced_dim = 80\n        count = 1000\n        k = 50  # Number of nearest neighbors to check\n\n        # Fill Redis with vectors using REDUCE and get reference data\n        data = fill_redis_with_vectors(self.redis, self.test_key, count, original_dim, reduced_dim)\n\n        # Verify dimension is reduced\n        dim = self.redis.execute_command('VDIM', self.test_key)\n        assert dim == reduced_dim, f\"Expected dimension {reduced_dim}, got {dim}\"\n\n        # Generate query vector and get nearest neighbors using Redis\n        query_vec = generate_random_vector(original_dim)\n        redis_raw = self.redis.execute_command('VSIM', self.test_key, 'VALUES', \n                                             original_dim, *[str(x) for x in query_vec],\n                                             'COUNT', k, 'WITHSCORES')\n\n        # Convert Redis results to dict\n        redis_results = {}\n        for i in range(0, len(redis_raw), 2):\n            key = redis_raw[i].decode()\n            score = float(redis_raw[i+1])\n            redis_results[key] = score\n\n        # Get results from linear scan with original vectors\n        linear_results = data.find_k_nearest(query_vec, k)\n        linear_items = {name: score for name, score in linear_results}\n\n        # Compare overlap between reduced and non-reduced results\n        redis_set = set(redis_results.keys())\n        linear_set = set(linear_items.keys())\n        overlap = len(redis_set & linear_set)\n        overlap_ratio = overlap / k\n\n        # With random projection, we expect some loss of accuracy but should\n        # maintain at least some similarity structure.\n        # Note that gaussian distribution is 
the worst case for this test, so\n        # in real-world practice, things will be better.\n        min_expected_overlap = 0.1  # At least 10% overlap in top-k\n\n        # Print comparison details before asserting, so the debug output\n        # is actually visible when the overlap check fails\n        if overlap_ratio < min_expected_overlap:\n            print(\"\\nLow overlap in results. Details:\")\n            print(\"\\nTop results from linear scan (original vectors):\")\n            for name, score in linear_results:\n                print(f\"{name}: {score:.3f}\")\n            print(\"\\nTop results from Redis (reduced vectors):\")\n            for item, score in sorted(redis_results.items(), key=lambda x: x[1], reverse=True):\n                print(f\"{item}: {score:.3f}\")\n\n        assert overlap_ratio >= min_expected_overlap, \\\n            f\"Dimension reduction lost too much structure. Only {overlap_ratio*100:.1f}% overlap in top {k}\"\n\n        # For items that appear in both results, scores should be reasonably correlated\n        common_items = redis_set & linear_set\n        for item in common_items:\n            redis_score = redis_results[item]\n            linear_score = linear_items[item]\n            # Allow for some deviation due to dimensionality reduction\n            assert abs(redis_score - linear_score) < 0.2, \\\n                f\"Score mismatch too high for {item}: Redis={redis_score:.3f} Linear={linear_score:.3f}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/replication.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\nimport random\nimport time\n\nclass ComprehensiveReplicationTest(TestCase):\n    def getname(self):\n        return \"Comprehensive Replication Test with mixed operations\"\n\n    def estimated_runtime(self):\n        # This test will take longer than the default 100ms\n        return 20.0  # 20 seconds estimate\n\n    def test(self):\n        # Setup replication between primary and replica\n        assert self.setup_replication(), \"Failed to setup replication\"\n\n        # Test parameters\n        num_vectors = 5000\n        vector_dim = 8\n        delete_probability = 0.1\n        cas_probability = 0.3\n\n        # Keep track of added items for potential deletion\n        added_items = []\n\n        # Add vectors and occasionally delete\n        for i in range(num_vectors):\n            # Generate a random vector\n            vec = generate_random_vector(vector_dim)\n            vec_bytes = struct.pack(f'{vector_dim}f', *vec)\n            item_name = f\"{self.test_key}:item:{i}\"\n\n            # Decide whether to use CAS or not\n            use_cas = random.random() < cas_probability\n\n            # CAS is a bare flag in VADD (it references no other item);\n            # we just exercise the CAS code path once some items exist\n            if use_cas and added_items:\n                try:\n                    # Add with CAS\n                    result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes,\n                                                   item_name, 'CAS')\n                    # Only add to our list if actually added (CAS might fail)\n                    if result == 1:\n                        added_items.append(item_name)\n                except Exception as e:\n                    print(f\"  CAS VADD failed: {e}\")\n            else:\n                try:\n                    # Add without CAS\n                    result = self.redis.execute_command('VADD',
self.test_key, 'FP32', vec_bytes, item_name)\n                    # Only add to our list if actually added\n                    if result == 1:\n                        added_items.append(item_name)\n                except Exception as e:\n                    print(f\"  VADD failed: {e}\")\n\n            # Randomly delete items (with 10% probability)\n            if random.random() < delete_probability and added_items:\n                try:\n                    # Select a random item to delete\n                    item_to_delete = random.choice(added_items)\n                    # Delete the item using VREM (not VDEL)\n                    self.redis.execute_command('VREM', self.test_key, item_to_delete)\n                    # Remove from our list\n                    added_items.remove(item_to_delete)\n                except Exception as e:\n                    print(f\"  VREM failed: {e}\")\n\n        # Allow time for replication to complete\n        time.sleep(2.0)\n\n        # Verify final VCARD matches\n        primary_card = self.redis.execute_command('VCARD', self.test_key)\n        replica_card = self.replica.execute_command('VCARD', self.test_key)\n        assert primary_card == replica_card, f\"Final VCARD mismatch: primary={primary_card}, replica={replica_card}\"\n\n        # Verify VDIM matches\n        primary_dim = self.redis.execute_command('VDIM', self.test_key)\n        replica_dim = self.replica.execute_command('VDIM', self.test_key)\n        assert primary_dim == replica_dim, f\"VDIM mismatch: primary={primary_dim}, replica={replica_dim}\"\n\n        # Verify digests match using DEBUG DIGEST\n        primary_digest = self.redis.execute_command('DEBUG', 'DIGEST-VALUE', self.test_key)\n        replica_digest = self.replica.execute_command('DEBUG', 'DIGEST-VALUE', self.test_key)\n        assert primary_digest == replica_digest, f\"Digest mismatch: primary={primary_digest}, replica={replica_digest}\"\n\n        # Print summary\n        print(f\"\\n  
Added and maintained {len(added_items)} vectors with dimension {vector_dim}\")\n        print(f\"  Final vector count: {primary_card}\")\n        print(f\"  Final digest: {primary_digest[0].decode()}\")\n"
  },
  {
    "path": "modules/vector-sets/tests/threading_config.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\n\nclass ThreadingConfigTest(TestCase):\n    \"\"\"\n    Test suite for vset-force-single-threaded-execution configuration.\n\n    This test validates the behavior of VADD and VSIM commands under different\n    threading configurations. The new configuration is MUTABLE and BINARY:\n    - false (0): Multi-threaded execution enabled (default)\n    - true (1): Force single-threaded execution\n\n    Key behaviors tested:\n    - VADD with and without CAS option under both threading modes\n    - VSIM with and without NOTHREAD option under both threading modes\n    - Configuration reading, validation, and runtime modification\n    - Thread behavior switching (multi-threaded vs forced single-threaded)\n    \"\"\"\n\n    def getname(self):\n        return \"vset-force-single-threaded-execution configuration testing\"\n\n    def estimated_runtime(self):\n        return 0.5  # Updated for mutable config testing with mode switching\n\n    def get_config_value(self):\n        \"\"\"Get current vset-force-single-threaded-execution config value\"\"\"\n        try:\n            result = self.redis.execute_command('CONFIG', 'GET', 'vset-force-single-threaded-execution')\n            if len(result) >= 2:\n                # Redis returns 'yes'/'no' for boolean configs\n                return result[1].decode() if isinstance(result[1], bytes) else result[1]\n            return None\n        except Exception:\n            return None\n\n    def set_config_value(self, value):\n        \"\"\"Set vset-force-single-threaded-execution config value\"\"\"\n        try:\n            # Convert boolean to yes/no string\n            str_value = 'yes' if value else 'no'\n            result = self.redis.execute_command('CONFIG', 'SET', 'vset-force-single-threaded-execution', str_value)\n            return result == b'OK' or result == 'OK'\n        except Exception as e:\n            print(f\"Failed to set config: 
{e}\")\n            return False\n\n    def test_config_access_and_mutability(self):\n        \"\"\"Test 1: Configuration access and mutability\"\"\"\n        # Get initial value\n        initial_value = self.get_config_value()\n        assert initial_value is not None, \"Should be able to read vset-force-single-threaded-execution config\"\n        assert initial_value in ['yes', 'no'], f\"Config value should be yes/no, got {initial_value}\"\n\n        # Test mutability by toggling the value\n        new_value = 'no' if initial_value == 'yes' else 'yes'\n        assert self.set_config_value(new_value == 'yes'), \"Should be able to change config value\"\n\n        # Verify the change\n        current_value = self.get_config_value()\n        assert current_value == new_value, f\"Config should be {new_value}, got {current_value}\"\n\n        # Restore original value\n        assert self.set_config_value(initial_value == 'yes'), \"Should be able to restore original value\"\n\n        return initial_value == 'yes'\n\n    def test_vadd_without_cas(self, force_single_threaded=False):\n        \"\"\"Test 2: VADD command without CAS option\"\"\"\n        # Set threading mode\n        self.set_config_value(force_single_threaded)\n\n        # Clear test data to avoid dimension conflicts\n        self.redis.delete(self.test_key)\n\n        dim = 64\n        vec = generate_random_vector(dim)\n        vec_bytes = struct.pack(f'{dim}f', *vec)\n\n        result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:1')\n        assert result == 1, f\"VADD should return 1 for new item, got {result}\"\n\n        # Verify the vector was added\n        card = self.redis.execute_command('VCARD', self.test_key)\n        assert card == 1, f\"VCARD should return 1, got {card}\"\n\n    def test_vadd_with_cas(self, force_single_threaded=False):\n        \"\"\"Test 3: VADD command with CAS option\"\"\"\n        # Set threading mode\n        
self.set_config_value(force_single_threaded)\n\n        # Clear test data to avoid dimension conflicts\n        self.redis.delete(self.test_key)\n\n        dim = 64\n        vec = generate_random_vector(dim)\n        vec_bytes = struct.pack(f'{dim}f', *vec)\n\n        # First insertion with CAS should succeed\n        result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:cas', 'CAS')\n        assert result == 1, f\"First VADD with CAS should return 1, got {result}\"\n\n        # Second insertion of same item with CAS should return 0\n        result = self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:cas', 'CAS')\n        assert result == 0, f\"Duplicate VADD with CAS should return 0, got {result}\"\n\n    def test_vsim_without_nothread(self, force_single_threaded=False):\n        \"\"\"Test 4: VSIM command without NOTHREAD\"\"\"\n        # Set threading mode\n        self.set_config_value(force_single_threaded)\n\n        # Clear test data to avoid dimension conflicts\n        self.redis.delete(self.test_key)\n\n        dim = 64\n\n        # Add test vectors\n        for i in range(5):\n            vec = generate_random_vector(dim)\n            vec_bytes = struct.pack(f'{dim}f', *vec)\n            self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:{i}')\n\n        # Test VSIM without NOTHREAD\n        query_vec = generate_random_vector(dim)\n        args = ['VSIM', self.test_key, 'VALUES', dim] + [str(x) for x in query_vec] + ['COUNT', 3]\n        result = self.redis.execute_command(*args)\n\n        assert isinstance(result, list), f\"VSIM should return a list, got {type(result)}\"\n        assert len(result) <= 3, f\"VSIM should return at most 3 results, got {len(result)}\"\n\n    def test_vsim_with_nothread(self, force_single_threaded=False):\n        \"\"\"Test 5: VSIM command with NOTHREAD\"\"\"\n        # Set threading mode\n       
 self.set_config_value(force_single_threaded)\n\n        dim = 64\n\n        # Ensure we have vectors to search (use existing vectors from previous test)\n        card = self.redis.execute_command('VCARD', self.test_key)\n        if card == 0:\n            # Add test vectors if none exist\n            for i in range(5):\n                vec = generate_random_vector(dim)\n                vec_bytes = struct.pack(f'{dim}f', *vec)\n                self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:{i}')\n\n        # Test VSIM with NOTHREAD\n        query_vec = generate_random_vector(dim)\n        args = ['VSIM', self.test_key, 'VALUES', dim] + [str(x) for x in query_vec] + ['COUNT', 3, 'NOTHREAD']\n        result = self.redis.execute_command(*args)\n\n        assert isinstance(result, list), f\"VSIM with NOTHREAD should return a list, got {type(result)}\"\n        assert len(result) <= 3, f\"VSIM with NOTHREAD should return at most 3 results, got {len(result)}\"\n\n    def test_threading_mode_comparison(self):\n        \"\"\"Test 6: Compare behavior between threading modes\"\"\"\n        dim = 64\n\n        # Clear test data\n        self.redis.delete(self.test_key)\n\n        # Test multi-threaded mode (default)\n        self.set_config_value(False)  # Multi-threaded\n        self.test_vadd_without_cas(False)\n        self.test_vadd_with_cas(False)\n        multi_threaded_card = self.redis.execute_command('VCARD', self.test_key)\n\n        # Clear and test single-threaded mode\n        self.redis.delete(self.test_key)\n        self.set_config_value(True)  # Single-threaded\n        self.test_vadd_without_cas(True)\n        self.test_vadd_with_cas(True)\n        single_threaded_card = self.redis.execute_command('VCARD', self.test_key)\n\n        # Both modes should produce same results\n        assert multi_threaded_card == single_threaded_card, \\\n            f\"Both modes should produce same results: 
multi={multi_threaded_card}, single={single_threaded_card}\"\n\n    def test_nothread_override_behavior(self):\n        \"\"\"Test 7: NOTHREAD option should work regardless of config\"\"\"\n        dim = 64\n\n        # Test with both config modes\n        for force_single in [False, True]:\n            self.set_config_value(force_single)\n            self.redis.delete(self.test_key)\n\n            # Add test vectors\n            for i in range(3):\n                vec = generate_random_vector(dim)\n                vec_bytes = struct.pack(f'{dim}f', *vec)\n                self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:{i}')\n\n            # NOTHREAD should work regardless of config\n            query_vec = generate_random_vector(dim)\n            args = ['VSIM', self.test_key, 'VALUES', dim] + [str(x) for x in query_vec] + ['COUNT', 2, 'NOTHREAD']\n            result = self.redis.execute_command(*args)\n\n            assert isinstance(result, list), f\"NOTHREAD should work with force_single={force_single}\"\n            assert len(result) <= 2, f\"NOTHREAD should return ≤2 results with force_single={force_single}\"\n\n    def test(self):\n        \"\"\"Main test method - runs all threading configuration tests\"\"\"\n        # Get initial configuration\n        initial_force_single = self.test_config_access_and_mutability()\n        print(f\"Initial vset-force-single-threaded-execution: {'yes' if initial_force_single else 'no'}\")\n\n        # Clear test data\n        self.redis.delete(self.test_key)\n\n        # Test both threading modes\n        print(\"Testing multi-threaded mode...\")\n        self.set_config_value(False)\n        self.test_vadd_without_cas(False)\n        self.test_vadd_with_cas(False)\n        self.test_vsim_without_nothread(False)\n        self.test_vsim_with_nothread(False)\n\n        print(\"Testing single-threaded mode...\")\n        self.set_config_value(True)\n        
self.test_vadd_without_cas(True)\n        self.test_vadd_with_cas(True)\n        self.test_vsim_without_nothread(True)\n        self.test_vsim_with_nothread(True)\n\n        # Test mode comparison and NOTHREAD override\n        self.test_threading_mode_comparison()\n        self.test_nothread_override_behavior()\n\n        # Restore initial configuration\n        self.set_config_value(initial_force_single)\n\n        # Print summary\n        self._print_test_summary(initial_force_single)\n\n    def _print_test_summary(self, initial_force_single):\n        \"\"\"Print a summary of what was tested\"\"\"\n        print(f\"\\nThreading Configuration Test Summary:\")\n        print(f\"  Configuration: vset-force-single-threaded-execution\")\n        print(f\"  Type: Boolean, Mutable\")\n        print(f\"  Initial value: {'yes' if initial_force_single else 'no'}\")\n        print(f\"  Tested modes: Both multi-threaded (no) and single-threaded (yes)\")\n        print(f\"  VADD: Works correctly in both modes\")\n        print(f\"  VADD with CAS: Works correctly in both modes\")\n        print(f\"  VSIM: Works correctly in both modes\")\n        print(f\"  NOTHREAD option: Overrides config in both modes\")\n        print(f\"  Configuration mutability: ✅ Successfully changed at runtime\")\n        print(f\"  All tests passed successfully!\")\n"
  },
  {
    "path": "modules/vector-sets/tests/vadd_cas.py",
    "content": "from test import TestCase, generate_random_vector\nimport threading\nimport struct\nimport math\nimport random\nfrom typing import List, Dict\n\nclass ConcurrentCASTest(TestCase):\n    def getname(self):\n        return \"Concurrent VADD with CAS\"\n\n    def estimated_runtime(self):\n        return 1.5\n\n    def worker(self, vectors: List[List[float]], start_idx: int, end_idx: int,\n              dim: int, results: Dict[str, bool]):\n        \"\"\"Worker thread that adds a subset of vectors using VADD CAS\"\"\"\n        for i in range(start_idx, end_idx):\n            vec = vectors[i]\n            name = f\"{self.test_key}:item:{i}\"\n            vec_bytes = struct.pack(f'{dim}f', *vec)\n\n            # Try to add the vector with CAS\n            try:\n                result = self.redis.execute_command('VADD', self.test_key, 'FP32',\n                                                  vec_bytes, name, 'CAS')\n                results[name] = (result == 1)  # Store if it was actually added\n            except Exception as e:\n                results[name] = False\n                print(f\"Error adding {name}: {e}\")\n\n    def verify_vector_similarity(self, vec1: List[float], vec2: List[float]) -> float:\n        \"\"\"Calculate cosine similarity between two vectors\"\"\"\n        dot_product = sum(a*b for a,b in zip(vec1, vec2))\n        norm1 = math.sqrt(sum(x*x for x in vec1))\n        norm2 = math.sqrt(sum(x*x for x in vec2))\n        return dot_product / (norm1 * norm2) if norm1 > 0 and norm2 > 0 else 0\n\n    def test(self):\n        # Test parameters\n        dim = 128\n        total_vectors = 5000\n        num_threads = 8\n        vectors_per_thread = total_vectors // num_threads\n\n        # Generate all vectors upfront\n        random.seed(42)  # For reproducibility\n        vectors = [generate_random_vector(dim) for _ in range(total_vectors)]\n\n        # Prepare threads and results dictionary\n        threads = []\n
        results = {}  # Will store success/failure for each vector\n\n        # Launch threads\n        for i in range(num_threads):\n            start_idx = i * vectors_per_thread\n            end_idx = start_idx + vectors_per_thread if i < num_threads-1 else total_vectors\n            thread = threading.Thread(target=self.worker,\n                                   args=(vectors, start_idx, end_idx, dim, results))\n            threads.append(thread)\n            thread.start()\n\n        # Wait for all threads to complete\n        for thread in threads:\n            thread.join()\n\n        # Verify cardinality\n        card = self.redis.execute_command('VCARD', self.test_key)\n        assert card == total_vectors, \\\n            f\"Expected {total_vectors} elements, but found {card}\"\n\n        # Verify each vector\n        num_verified = 0\n        for i in range(total_vectors):\n            name = f\"{self.test_key}:item:{i}\"\n\n            # Verify the item was successfully added\n            assert results[name], f\"Vector {name} was not successfully added\"\n\n            # Get the stored vector\n            stored_vec_raw = self.redis.execute_command('VEMB', self.test_key, name)\n            stored_vec = [float(x) for x in stored_vec_raw]\n\n            # Verify vector dimensions\n            assert len(stored_vec) == dim, \\\n                f\"Stored vector dimension mismatch for {name}: {len(stored_vec)} != {dim}\"\n\n            # Calculate similarity with original vector\n            similarity = self.verify_vector_similarity(vectors[i], stored_vec)\n            assert similarity > 0.99, \\\n                f\"Low similarity ({similarity}) for {name}\"\n\n            num_verified += 1\n\n        # Final verification\n        assert num_verified == total_vectors, \\\n            f\"Only verified {num_verified} out of {total_vectors} vectors\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vemb.py",
    "content": "from test import TestCase\nimport struct\nimport math\n\nclass VEMB(TestCase):\n    def getname(self):\n        return \"VEMB Command\"\n\n    def test(self):\n        dim = 4\n\n        # Add same vector in both formats\n        vec = [1, 0, 0, 0]\n        norm = math.sqrt(sum(x*x for x in vec))\n        vec = [x/norm for x in vec]  # Normalize the vector\n\n        # Add using FP32\n        vec_bytes = struct.pack(f'{dim}f', *vec)\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:1')\n\n        # Add using VALUES\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', dim, \n                         *[str(x) for x in vec], f'{self.test_key}:item:2')\n\n        # Get both back with VEMB\n        result1 = self.redis.execute_command('VEMB', self.test_key, f'{self.test_key}:item:1')\n        result2 = self.redis.execute_command('VEMB', self.test_key, f'{self.test_key}:item:2')\n\n        retrieved_vec1 = [float(x) for x in result1]\n        retrieved_vec2 = [float(x) for x in result2]\n\n        # Compare both vectors with original (allow for small quantization errors)\n        for i in range(dim):\n            assert abs(vec[i] - retrieved_vec1[i]) < 0.01, \\\n                f\"FP32 vector component {i} mismatch: expected {vec[i]}, got {retrieved_vec1[i]}\"\n            assert abs(vec[i] - retrieved_vec2[i]) < 0.01, \\\n                f\"VALUES vector component {i} mismatch: expected {vec[i]}, got {retrieved_vec2[i]}\"\n\n        # Test non-existent item\n        result = self.redis.execute_command('VEMB', self.test_key, 'nonexistent')\n        assert result is None, \"Non-existent item should return nil\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vismember.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass BasicVISMEMBER(TestCase):\n    def getname(self):\n        return \"VISMEMBER basic functionality\"\n\n    def test(self):\n        # Add multiple vectors to the vector set\n        vec1 = generate_random_vector(4)\n        vec2 = generate_random_vector(4)\n        vec_bytes1 = struct.pack('4f', *vec1)\n        vec_bytes2 = struct.pack('4f', *vec2)\n\n        # Create item keys\n        item1 = f'{self.test_key}:item:1'\n        item2 = f'{self.test_key}:item:2'\n        nonexistent_item = f'{self.test_key}:item:nonexistent'\n\n        # Add the vectors\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes1, item1)\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes2, item2)\n\n        # Test VISMEMBER with existing elements\n        result1 = self.redis.execute_command('VISMEMBER', self.test_key, item1)\n        assert result1 == 1, f\"VISMEMBER should return 1 for existing item, got {result1}\"\n\n        result2 = self.redis.execute_command('VISMEMBER', self.test_key, item2)\n        assert result2 == 1, f\"VISMEMBER should return 1 for existing item, got {result2}\"\n\n        # Test VISMEMBER with non-existent element\n        result3 = self.redis.execute_command('VISMEMBER', self.test_key, nonexistent_item)\n        assert result3 == 0, f\"VISMEMBER should return 0 for non-existent item, got {result3}\"\n\n        # Test VISMEMBER with non-existent key\n        nonexistent_key = f'{self.test_key}_nonexistent'\n        result4 = self.redis.execute_command('VISMEMBER', nonexistent_key, item1)\n        assert result4 == 0, f\"VISMEMBER should return 0 for non-existent key, got {result4}\"\n\n        # Test VISMEMBER after removing an element\n        self.redis.execute_command('VREM', self.test_key, item1)\n        result5 = self.redis.execute_command('VISMEMBER', self.test_key, item1)\n        assert result5 == 0, 
f\"VISMEMBER should return 0 after element removal, got {result5}\"\n\n        # Verify item2 still exists\n        result6 = self.redis.execute_command('VISMEMBER', self.test_key, item2)\n        assert result6 == 1, f\"VISMEMBER should still return 1 for remaining item, got {result6}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vrand-ping-pong.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass VRANDMEMBERPingPongRegressionTest(TestCase):\n    def getname(self):\n        return \"[regression] VRANDMEMBER ping-pong\"\n\n    def test(self):\n        \"\"\"\n        This test ensures that when only two vectors exist, VRANDMEMBER\n        does not get stuck returning only one of them due to the \"ping-pong\" issue.\n        \"\"\"\n        self.redis.delete(self.test_key) # Clean up before test\n        dim = 4\n\n        # Add exactly two vectors\n        vec1_name = \"vec1\"\n        vec1_data = generate_random_vector(dim)\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', dim, *vec1_data, vec1_name)\n\n        vec2_name = \"vec2\"\n        vec2_data = generate_random_vector(dim)\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', dim, *vec2_data, vec2_name)\n\n        # Call VRANDMEMBER many times and check for distribution\n        iterations = 100\n        results = []\n        for _ in range(iterations):\n            member = self.redis.execute_command('VRANDMEMBER', self.test_key)\n            results.append(member.decode())\n\n        # Verify that both members were returned, proving it's not stuck\n        unique_results = set(results)\n\n        assert len(unique_results) == 2, f\"Ping-pong test failed: should have returned 2 unique members, but got {len(unique_results)}.\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vrandmember.py",
    "content": "from test import TestCase, generate_random_vector, fill_redis_with_vectors\nimport struct\n\nclass VRANDMEMBERTest(TestCase):\n    def getname(self):\n        return \"VRANDMEMBER basic functionality\"\n\n    def test(self):\n        # Test with empty key\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key)\n        assert result is None, \"VRANDMEMBER on non-existent key should return NULL\"\n\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key, 5)\n        assert isinstance(result, list) and len(result) == 0, \"VRANDMEMBER with count on non-existent key should return empty array\"\n\n        # Fill with vectors\n        dim = 4\n        count = 100\n        data = fill_redis_with_vectors(self.redis, self.test_key, count, dim)\n\n        # Test single random member\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key)\n        assert result is not None, \"VRANDMEMBER should return a random member\"\n        assert result.decode() in data.names, \"Random member should be in the set\"\n\n        # Test multiple unique members (positive count)\n        positive_count = 10\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key, positive_count)\n        assert isinstance(result, list), \"VRANDMEMBER with positive count should return an array\"\n        assert len(result) == positive_count, f\"Should return {positive_count} members\"\n\n        # Check for uniqueness\n        decoded_results = [r.decode() for r in result]\n        assert len(decoded_results) == len(set(decoded_results)), \"Results should be unique with positive count\"\n        for item in decoded_results:\n            assert item in data.names, \"All returned items should be in the set\"\n\n        # Test more members than in the set\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key, count + 10)\n        assert len(result) == count, \"Should return only the available members when 
asking for more than exist\"\n\n        # Test with duplicates (negative count)\n        negative_count = -20\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key, negative_count)\n        assert isinstance(result, list), \"VRANDMEMBER with negative count should return an array\"\n        assert len(result) == abs(negative_count), f\"Should return {abs(negative_count)} members\"\n\n        # Check that all returned elements are valid\n        decoded_results = [r.decode() for r in result]\n        for item in decoded_results:\n            assert item in data.names, \"All returned items should be in the set\"\n\n        # Test with count = 0 (edge case)\n        result = self.redis.execute_command('VRANDMEMBER', self.test_key, 0)\n        assert isinstance(result, list) and len(result) == 0, \"VRANDMEMBER with count=0 should return empty array\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vrange.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass BasicVRANGE(TestCase):\n    def getname(self):\n        return \"VRANGE basic functionality and iteration\"\n\n    def test(self):\n        # Add multiple elements with different names for lexicographical ordering\n        elements = [\n            \"apple\", \"apricot\", \"banana\", \"cherry\", \"date\",\n            \"elderberry\", \"fig\", \"grape\", \"honeydew\", \"kiwi\",\n            \"lemon\", \"mango\", \"nectarine\", \"orange\", \"papaya\",\n            \"quince\", \"raspberry\", \"strawberry\", \"tangerine\", \"watermelon\"\n        ]\n\n        # Add all elements to the vector set\n        for elem in elements:\n            vec = generate_random_vector(4)\n            vec_bytes = struct.pack('4f', *vec)\n            self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, elem)\n\n        # Test 1: Basic range with inclusive boundaries\n        result = self.redis.execute_command('VRANGE', self.test_key, '[apple', '[grape', '5')\n        result = [r.decode() for r in result]\n        assert result == ['apple', 'apricot', 'banana', 'cherry', 'date'], f\"Expected first 5 elements from apple, got {result}\"\n\n        # Test 2: Exclusive start boundary\n        result = self.redis.execute_command('VRANGE', self.test_key, '(apple', '[cherry', '10')\n        result = [r.decode() for r in result]\n        assert result == ['apricot', 'banana', 'cherry'], f\"Expected elements after apple up to cherry inclusive, got {result}\"\n\n        # Test 3: Exclusive end boundary\n        result = self.redis.execute_command('VRANGE', self.test_key, '[banana', '(cherry', '10')\n        result = [r.decode() for r in result]\n        assert result == ['banana'], f\"Expected only banana (cherry excluded), got {result}\"\n\n        # Test 4: Using '-' for minimum element\n        result = self.redis.execute_command('VRANGE', self.test_key, '-', '[banana', '10')\n        result 
= [r.decode() for r in result]\n        assert result[0] == 'apple', \"Should start from the first element\"\n        assert result[-1] == 'banana', \"Should end at banana\"\n\n        # Test 5: Using '+' for maximum element\n        result = self.redis.execute_command('VRANGE', self.test_key, '[raspberry', '+', '10')\n        result = [r.decode() for r in result]\n        assert 'raspberry' in result and 'strawberry' in result and 'tangerine' in result and 'watermelon' in result, \"Should include all elements from raspberry onwards\"\n\n        # Test 6: Full range with '-' and '+'\n        result = self.redis.execute_command('VRANGE', self.test_key, '-', '+', '100')\n        result = [r.decode() for r in result]\n        assert len(result) == len(elements), f\"Should return all {len(elements)} elements\"\n        assert result == sorted(elements), \"Elements should be in lexicographical order\"\n\n        # Test 7: Iterator pattern - verify each element appears exactly once\n        seen = set()\n        batch_size = 3\n        current = '-'\n\n        while True:\n            if current == '-':\n                # First iteration\n                result = self.redis.execute_command('VRANGE', self.test_key, '-', '+', str(batch_size))\n            else:\n                # Subsequent iterations - exclusive start from last element\n                result = self.redis.execute_command('VRANGE', self.test_key, f'({current}', '+', str(batch_size))\n\n            result = [r.decode() for r in result]\n\n            if not result:\n                break\n\n            # Check no duplicates in this batch\n            for elem in result:\n                assert elem not in seen, f\"Element {elem} appeared more than once\"\n                seen.add(elem)\n\n            # Update current to last element\n            current = result[-1]\n\n            # Break if we got less than requested (end of set)\n            if len(result) < batch_size:\n                break\n\n        # 
Verify we saw all elements exactly once\n        assert seen == set(elements), f\"Iterator should visit all elements exactly once. Missing: {set(elements) - seen}, Extra: {seen - set(elements)}\"\n\n        # Test 8: Count of 0 returns empty array\n        result = self.redis.execute_command('VRANGE', self.test_key, '-', '+', '0')\n        assert result == [], f\"Count of 0 should return empty array, got {result}\"\n\n        # Test 9: Range with no matching elements\n        result = self.redis.execute_command('VRANGE', self.test_key, '[zebra', '+', '10')\n        assert result == [], f\"Range beyond all elements should return empty array, got {result}\"\n\n        # Test 10: Non-existent key\n        result = self.redis.execute_command('VRANGE', 'nonexistent_key', '-', '+', '10')\n        assert result == [], f\"Non-existent key should return empty array, got {result}\"\n\n        # Test 11: Partial word boundaries\n        result = self.redis.execute_command('VRANGE', self.test_key, '[app', '[apr', '10')\n        result = [r.decode() for r in result]\n        assert 'apple' in result, \"Should include 'apple' which starts with 'app'\"\n        assert 'apricot' not in result, \"Should not include 'apricot' as it sorts after 'apr'\"\n\n        # Test 12: Single element range\n        result = self.redis.execute_command('VRANGE', self.test_key, '[cherry', '[cherry', '10')\n        result = [r.decode() for r in result]\n        assert result == ['cherry'], f\"Inclusive single element range should return that element, got {result}\"\n\n        # Test 13: Empty range (start > end)\n        result = self.redis.execute_command('VRANGE', self.test_key, '[grape', '[apple', '10')\n        assert result == [], f\"Range where start > end should return empty array, got {result}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/vsim_duplicate_filter.py",
    "content": "from test import TestCase\n\nclass VSIMDuplicateFilterLeak(TestCase):\n    def getname(self):\n        return \"[regression] VSIM duplicate FILTER should not leak memory\"\n\n    def test(self):\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 3, 0.5774, 0.5774, 0.5774, 'elem1')\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 3, 0.7071, 0.7071, 0.0, 'elem2')\n        self.redis.execute_command('VSETATTR', self.test_key, 'elem1', '{\"a\": 1, \"b\": 2}')\n        self.redis.execute_command('VSETATTR', self.test_key, 'elem2', '{\"a\": 2, \"b\": 3}')\n\n        # Duplicate FILTER: before the fix the first exprstate was\n        # overwritten without exprFree(), leaking ~760 bytes per call.\n        # Under ASAN/valgrind this shows up as a leak at server exit.\n        for _ in range(100):\n            self.redis.execute_command(\n                'VSIM', self.test_key, 'VALUES', 3, 0.5774, 0.5774, 0.5774,\n                'FILTER', '.a == 1', 'FILTER', '.b >= 1')\n"
  },
  {
    "path": "modules/vector-sets/tests/vsim_filter_error_leak.py",
    "content": "from test import TestCase\nimport redis as redis_module\n\nclass VSIMFilterLeakOnOptionError(TestCase):\n    def getname(self):\n        return \"[regression] VSIM FILTER expr freed on option parse error\"\n\n    def test(self):\n        self.redis.execute_command('VADD', self.test_key, 'VALUES', 3, 1, 0, 0, 'elem1')\n\n        # Valid FILTER followed by invalid option values. Before the fix,\n        # error paths freed vec but not filter_expr, leaking the compiled\n        # exprstate. Under ASAN/valgrind this shows up at server exit.\n        error_cmds = [\n            # invalid COUNT (0)\n            ['VSIM', self.test_key, 'VALUES', 3, 0, 0, 0, 'FILTER', '.a > 0', 'COUNT', 0],\n            # invalid EF (0)\n            ['VSIM', self.test_key, 'VALUES', 3, 0, 0, 0, 'FILTER', '.a > 0', 'EF', 0],\n            # invalid EPSILON (0)\n            ['VSIM', self.test_key, 'VALUES', 3, 0, 0, 0, 'FILTER', '.a > 0', 'EPSILON', 0],\n            # invalid FILTER-EF (0)\n            ['VSIM', self.test_key, 'VALUES', 3, 0, 0, 0, 'FILTER', '.a > 0', 'FILTER-EF', 0],\n            # unknown option\n            ['VSIM', self.test_key, 'VALUES', 3, 0, 0, 0, 'FILTER', '.a > 0', 'BADOPT', 1],\n        ]\n\n        for cmd in error_cmds:\n            for _ in range(20):\n                try:\n                    self.redis.execute_command(*cmd)\n                except redis_module.exceptions.ResponseError:\n                    pass\n"
  },
  {
    "path": "modules/vector-sets/tests/vsim_limit_efsearch.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\n\nclass VSIMLimitEFSearch(TestCase):\n    def getname(self):\n        return \"VSIM Limit EF Search\"\n\n    def estimated_runtime(self):\n        return 0.2\n\n    def test(self):\n        dim = 32\n        vec = generate_random_vector(dim)\n        vec_bytes = struct.pack(f'{dim}f', *vec)\n\n        # Add test vector\n        self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, f'{self.test_key}:item:1')\n\n        query_vec = generate_random_vector(dim)\n\n        # Test EF upper bound (should accept 1000000)\n        result = self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                          *[str(x) for x in query_vec], 'EF', 1000000)\n        assert isinstance(result, list), \"EF=1000000 should be accepted\"\n\n        # Test EF over limit (should reject > 1000000)\n        try:\n            self.redis.execute_command('VSIM', self.test_key, 'VALUES', dim,\n                                     *[str(x) for x in query_vec], 'EF', 1000001)\n            assert False, \"EF=1000001 should be rejected\"\n        except Exception as e:\n            assert \"invalid EF\" in str(e), f\"Expected EF validation error, got: {e}\"\n"
  },
  {
    "path": "modules/vector-sets/tests/with.py",
    "content": "from test import TestCase, generate_random_vector\nimport struct\nimport json\nimport random\n\nclass VSIMWithAttribs(TestCase):\n    def getname(self):\n        return \"VSIM WITHATTRIBS/WITHSCORES functionality testing\"\n\n    def setup(self):\n        super().setup()\n        self.dim = 8\n        self.count = 20\n\n        # Create vectors with attributes\n        for i in range(self.count):\n            vec = generate_random_vector(self.dim)\n            vec_bytes = struct.pack(f'{self.dim}f', *vec)\n\n            # Item name\n            name = f\"{self.test_key}:item:{i}\"\n\n            # Add to Redis\n            self.redis.execute_command('VADD', self.test_key, 'FP32', vec_bytes, name)\n\n            # Create and add attribute\n            if i % 5 == 0:\n                # Every 5th item has no attribute (for testing NULL responses)\n                continue\n\n            category = random.choice([\"electronics\", \"furniture\", \"clothing\"])\n            price = random.randint(50, 1000)\n            attrs = {\"category\": category, \"price\": price, \"id\": i}\n\n            self.redis.execute_command('VSETATTR', self.test_key, name, json.dumps(attrs))\n\n    def is_numeric(self, value):\n        \"\"\"Check if a value can be converted to float\"\"\"\n        try:\n            if isinstance(value, (int, float)):\n                return True\n            if isinstance(value, bytes):\n                float(value.decode('utf-8'))\n                return True\n            if isinstance(value, str):\n                float(value)\n                return True\n            return False\n        except (ValueError, TypeError):\n            return False\n\n    def test(self):\n        # Create query vector\n        query_vec = generate_random_vector(self.dim)\n\n        # Test 1: VSIM with no additional options (should be same for RESP2 and RESP3)\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) 
for x in query_vec])\n        cmd_args.extend(['COUNT', 5])\n\n        results_resp2 = self.redis.execute_command(*cmd_args)\n        results_resp3 = self.redis3.execute_command(*cmd_args)\n\n        # Both should return simple arrays of item names\n        assert len(results_resp2) == 5, f\"RESP2: Expected 5 results, got {len(results_resp2)}\"\n        assert len(results_resp3) == 5, f\"RESP3: Expected 5 results, got {len(results_resp3)}\"\n        assert all(isinstance(item, bytes) for item in results_resp2), \"RESP2: Results should be byte strings\"\n        assert all(isinstance(item, bytes) for item in results_resp3), \"RESP3: Results should be byte strings\"\n\n        # Test 2: VSIM with WITHSCORES only\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) for x in query_vec])\n        cmd_args.extend(['COUNT', 5, 'WITHSCORES'])\n\n        results_resp2 = self.redis.execute_command(*cmd_args)\n        results_resp3 = self.redis3.execute_command(*cmd_args)\n\n        # RESP2: Should be a flat array alternating item, score\n        assert len(results_resp2) == 10, f\"RESP2: Expected 10 elements (5 items × 2), got {len(results_resp2)}\"\n        for i in range(0, len(results_resp2), 2):\n            assert isinstance(results_resp2[i], bytes), f\"RESP2: Item at {i} should be bytes\"\n            assert self.is_numeric(results_resp2[i+1]), f\"RESP2: Score at {i+1} should be numeric\"\n            score = float(results_resp2[i+1]) if isinstance(results_resp2[i+1], bytes) else results_resp2[i+1]\n            assert 0 <= score <= 1, f\"RESP2: Score {score} should be between 0 and 1\"\n\n        # RESP3: Should be a dict/map with items as keys and scores as DIRECT values (not arrays)\n        assert isinstance(results_resp3, dict), f\"RESP3: Expected dict, got {type(results_resp3)}\"\n        assert len(results_resp3) == 5, f\"RESP3: Expected 5 entries, got {len(results_resp3)}\"\n        for item, score in 
results_resp3.items():\n            assert isinstance(item, bytes), f\"RESP3: Key should be bytes\"\n            # Score should be a direct value, NOT an array\n            assert not isinstance(score, list), f\"RESP3: With single WITH option, value should not be array\"\n            assert self.is_numeric(score), f\"RESP3: Score should be numeric, got {type(score)}\"\n            score_val = float(score) if isinstance(score, bytes) else score\n            assert 0 <= score_val <= 1, f\"RESP3: Score {score_val} should be between 0 and 1\"\n\n        # Test 3: VSIM with WITHATTRIBS only\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) for x in query_vec])\n        cmd_args.extend(['COUNT', 5, 'WITHATTRIBS'])\n\n        results_resp2 = self.redis.execute_command(*cmd_args)\n        results_resp3 = self.redis3.execute_command(*cmd_args)\n\n        # RESP2: Should be a flat array alternating item, attribute\n        assert len(results_resp2) == 10, f\"RESP2: Expected 10 elements (5 items × 2), got {len(results_resp2)}\"\n        for i in range(0, len(results_resp2), 2):\n            assert isinstance(results_resp2[i], bytes), f\"RESP2: Item at {i} should be bytes\"\n            attr = results_resp2[i+1]\n            assert attr is None or isinstance(attr, bytes), f\"RESP2: Attribute at {i+1} should be None or bytes\"\n            if attr is not None:\n                # Verify it's valid JSON\n                json.loads(attr)\n\n        # RESP3: Should be a dict/map with items as keys and attributes as DIRECT values (not arrays)\n        assert isinstance(results_resp3, dict), f\"RESP3: Expected dict, got {type(results_resp3)}\"\n        assert len(results_resp3) == 5, f\"RESP3: Expected 5 entries, got {len(results_resp3)}\"\n        for item, attr in results_resp3.items():\n            assert isinstance(item, bytes), f\"RESP3: Key should be bytes\"\n            # Attribute should be a direct value, NOT an array\n          
  assert not isinstance(attr, list), f\"RESP3: With single WITH option, value should not be array\"\n            assert attr is None or isinstance(attr, bytes), f\"RESP3: Attribute should be None or bytes\"\n            if attr is not None:\n                # Verify it's valid JSON\n                json.loads(attr)\n\n        # Test 4: VSIM with both WITHSCORES and WITHATTRIBS\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) for x in query_vec])\n        cmd_args.extend(['COUNT', 5, 'WITHSCORES', 'WITHATTRIBS'])\n\n        results_resp2 = self.redis.execute_command(*cmd_args)\n        results_resp3 = self.redis3.execute_command(*cmd_args)\n\n        # RESP2: Should be a flat array with pattern: item, score, attribute\n        assert len(results_resp2) == 15, f\"RESP2: Expected 15 elements (5 items × 3), got {len(results_resp2)}\"\n        for i in range(0, len(results_resp2), 3):\n            assert isinstance(results_resp2[i], bytes), f\"RESP2: Item at {i} should be bytes\"\n            assert self.is_numeric(results_resp2[i+1]), f\"RESP2: Score at {i+1} should be numeric\"\n            score = float(results_resp2[i+1]) if isinstance(results_resp2[i+1], bytes) else results_resp2[i+1]\n            assert 0 <= score <= 1, f\"RESP2: Score {score} should be between 0 and 1\"\n            attr = results_resp2[i+2]\n            assert attr is None or isinstance(attr, bytes), f\"RESP2: Attribute at {i+2} should be None or bytes\"\n\n        # RESP3: Should be a dict where each value is a 2-element array [score, attribute]\n        assert isinstance(results_resp3, dict), f\"RESP3: Expected dict, got {type(results_resp3)}\"\n        assert len(results_resp3) == 5, f\"RESP3: Expected 5 entries, got {len(results_resp3)}\"\n        for item, value in results_resp3.items():\n            assert isinstance(item, bytes), f\"RESP3: Key should be bytes\"\n            # With BOTH options, value MUST be an array\n            assert 
isinstance(value, list), f\"RESP3: With both WITH options, value should be a list, got {type(value)}\"\n            assert len(value) == 2, f\"RESP3: Value should have 2 elements [score, attr], got {len(value)}\"\n\n            score, attr = value\n            assert self.is_numeric(score), f\"RESP3: Score should be numeric\"\n            score_val = float(score) if isinstance(score, bytes) else score\n            assert 0 <= score_val <= 1, f\"RESP3: Score {score_val} should be between 0 and 1\"\n            assert attr is None or isinstance(attr, bytes), f\"RESP3: Attribute should be None or bytes\"\n\n        # Test 5: Verify consistency - same items returned in same order\n        cmd_args = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args.extend([str(x) for x in query_vec])\n        cmd_args.extend(['COUNT', 5, 'WITHSCORES', 'WITHATTRIBS'])\n\n        results_resp2 = self.redis.execute_command(*cmd_args)\n        results_resp3 = self.redis3.execute_command(*cmd_args)\n\n        # Extract items from RESP2 (every 3rd element starting from 0)\n        items_resp2 = [results_resp2[i] for i in range(0, len(results_resp2), 3)]\n\n        # Extract items from RESP3 (keys of the dict)\n        items_resp3 = list(results_resp3.keys())\n\n        # Verify same items returned\n        assert set(items_resp2) == set(items_resp3), \"RESP2 and RESP3 should return the same items\"\n\n        # Build a mapping from items to scores and attributes for comparison\n        data_resp2 = {}\n        for i in range(0, len(results_resp2), 3):\n            item = results_resp2[i]\n            score = float(results_resp2[i+1]) if isinstance(results_resp2[i+1], bytes) else results_resp2[i+1]\n            attr = results_resp2[i+2]\n            data_resp2[item] = (score, attr)\n\n        data_resp3 = {}\n        for item, value in results_resp3.items():\n            score = float(value[0]) if isinstance(value[0], bytes) else value[0]\n            attr = value[1]\n            
data_resp3[item] = (score, attr)\n\n        # Verify scores and attributes match for each item\n        for item in data_resp2:\n            score_resp2, attr_resp2 = data_resp2[item]\n            score_resp3, attr_resp3 = data_resp3[item]\n\n            assert abs(score_resp2 - score_resp3) < 0.0001, \\\n                f\"Scores for {item} don't match: RESP2={score_resp2}, RESP3={score_resp3}\"\n            assert attr_resp2 == attr_resp3, \\\n                f\"Attributes for {item} don't match: RESP2={attr_resp2}, RESP3={attr_resp3}\"\n\n        # Test 6: Test ordering of WITHSCORES and WITHATTRIBS doesn't matter\n        cmd_args1 = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args1.extend([str(x) for x in query_vec])\n        cmd_args1.extend(['COUNT', 3, 'WITHSCORES', 'WITHATTRIBS'])\n\n        cmd_args2 = ['VSIM', self.test_key, 'VALUES', self.dim]\n        cmd_args2.extend([str(x) for x in query_vec])\n        cmd_args2.extend(['COUNT', 3, 'WITHATTRIBS', 'WITHSCORES'])  # Reversed order\n\n        results1_resp3 = self.redis3.execute_command(*cmd_args1)\n        results2_resp3 = self.redis3.execute_command(*cmd_args2)\n\n        # Both should return the same structure\n        assert results1_resp3 == results2_resp3, \"Order of WITH options shouldn't matter\"\n"
  },
  {
    "path": "modules/vector-sets/vset.c",
    "content": "/* Redis implementation for vector sets. The data structure itself\n * is implemented in hnsw.c.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo.\n *\n * ===================== Understanding the threading model =====================\n * This code implements threaded operations for two of the commands:\n *\n * 1. VSIM, by default.\n * 2. VADD, if the CAS option is specified.\n *\n * Note that even though the second operation, VADD, is a write operation,\n * only the collection of neighbors for the new node is performed in a\n * thread: the actual insert is performed in the reply callback\n * VADD_CASReply(), which is executed in the main thread.\n *\n * Threaded operations require us to protect various operations with mutexes,\n * even if a certain degree of protection is already provided by the HNSW\n * library. Here are a few very important things about this implementation\n * and the way locking is performed.\n *\n * 1. All the write operations are performed in the main Redis thread:\n *    this also includes the VADD_CASReply() callback, which is called by\n *    Redis internals only in the context of the main thread. However, the\n *    HNSW library allows background threads in hnsw_search() (VSIM) to\n *    modify node metadata to speed up search (to track whether a node was\n *    already visited), but this only happens after acquiring a specific\n *    lock for a given \"read slot\".\n *\n * 2. We use a global lock for each Vector Set object, called \"in_use\". This\n *    lock is a read-write lock, and is acquired in read mode by all the\n *    threads that perform reads in the background. 
It is only acquired in\n *    write mode by vectorSetWaitAllBackgroundClients(): the function acquires\n *    the lock and immediately releases it, with the effect of waiting for all\n *    the background threads still running to finish their execution.\n *\n *    Note that no new thread can be spawned in the meantime, since we only\n *    call vectorSetWaitAllBackgroundClients() from the main Redis thread,\n *    which is also the only thread spawning other threads.\n *\n *    vectorSetWaitAllBackgroundClients() is used in two ways:\n *    A) When we need to delete a vector set because of DEL or other\n *       operations destroying the object, we need to wait until all the\n *       background threads working with this object have finished their work.\n *    B) When we modify the HNSW nodes bypassing the normal locking\n *       provided by the HNSW library. So far this only happens when we\n *       update an existing node attribute, in VSETATTR and when we call\n *       VADD with the SETATTR option to update a node.\n *\n *  3. Often during read operations performed by Redis commands in the\n *     main thread (VCARD, VEMB, VRANDMEMBER, ...) we don't acquire any\n *     lock at all. The commands run in the main Redis thread, so the only\n *     concurrent activity can be background reads against the same data\n *     structure. Note that VSIM_thread() and VADD_thread() still modify the\n *     read slot metadata, that is, node->visited_epoch[slot], but as long as\n *     our read commands running in the main thread don't need to use\n *     hnsw_search() or other HNSW functions using the visited epoch slots\n *     we are safe.\n *\n * 4. There is a race from the moment we create a thread, passing the\n *    vector set object, to the moment the thread can actually read-lock the\n *    in_use_lock mutex: while the thread is starting, a DEL/expire could\n *    trigger and remove the object. 
For this reason\n *    we use an atomic counter that protects our object for this small\n *    window of time in vectorSetWaitAllBackgroundClients(). This prevents\n *    removal of objects that are about to be taken by threads.\n *\n *    Note that other competing solutions could be used to fix the problem,\n *    but they have their own set of issues; however, they are worth\n *    documenting here and evaluating in the future:\n *\n *      A. Using a condition variable we could \"wait\" for the thread to\n *         acquire the lock. However, this means waiting before returning\n *         to the event loop, and would make the command execution slower.\n *      B. We could again use an atomic variable, like we did, but this time\n *         as a refcount for the object, with a vsetAcquire() / vsetRelease()\n *         pair. In this case, the command could retain the object in the main\n *         thread before starting the thread, and the thread, after the work\n *         is done, could release it. This way the object would sometimes be\n *         freed by the thread, and while for now it may be safe to do the\n *         kind of resource deallocation that vectorSetReleaseObject() does,\n *         the Redis Modules API is not always thread safe, so this solution\n *         may not be future-proof. It remains to be evaluated better in the\n *         future.\n *      C. 
We could use the \"B\" solution but instead of freeing the object\n *         in the thread, in this specific case we could just put it into a\n *         list and defer it for later freeing (for instance in the reply\n *         callback), so that the object is always freed in the main thread.\n *         This would require a list of objects to free.\n *\n *    However, the current solution's only disadvantage is the potential busy\n *    loop, but in practical terms this busy loop will almost never do\n *    much: to trigger it, a number of circumstances must coincide: deleting\n *    Vector Set keys while using them, and hitting the small window needed to\n *    start the thread and read-lock the mutex.\n */\n\n#define _DEFAULT_SOURCE\n#define _USE_MATH_DEFINES\n#define _POSIX_C_SOURCE 200809L\n\n#include \"../../src/redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n#include <strings.h>\n#include <stdint.h>\n#include <math.h>\n#include <pthread.h>\n#include <stdatomic.h>\n#include \"hnsw.h\"\n#include \"vset_config.h\"\n\n// We directly inline the expression implementation here so that building\n// the module is trivial.\n#include \"expr.c\"\n\nstatic RedisModuleType *VectorSetType;\nstatic uint64_t VectorSetTypeNextId = 0;\n\n// Default EF value if not specified during creation.\n#define VSET_DEFAULT_C_EF 200\n\n// Default EF value if not specified during search.\n#define VSET_DEFAULT_SEARCH_EF 100\n\n// Default number of elements returned by VSIM.\n#define VSET_DEFAULT_COUNT 10\n\n/* ========================== Internal data structure ======================= */\n\n/* Our abstract data type needs a dual representation similar to the Redis\n * sorted set: the proximity graph, and also an element -> graph-node map\n * that will allow us to perform deletions and other operations that have\n * as input the element itself. 
*/\nstruct vsetObject {\n    HNSW *hnsw;                 // Proximity graph.\n    RedisModuleDict *dict;      // Element -> node mapping.\n    float *proj_matrix;         // Random projection matrix, NULL if no projection.\n    uint32_t proj_input_size;     // Input dimension of the projection.\n                                  // Output dimension is implicit in\n                                  // hnsw->vector_dim.\n    pthread_rwlock_t in_use_lock; // Lock needed to destroy the object safely.\n    uint64_t id;                // Unique ID used by threaded VADD to know the\n                                // object is still the same.\n    uint64_t numattribs;        // Number of nodes associated with an attribute.\n    atomic_int thread_creation_pending; // Number of threads that are currently\n                                        // pending to lock the object.\n};\n\n/* Each node has two associated values: the associated string (the item\n * in the set) and potentially a JSON string, that is, the attributes, used\n * for hybrid search with the VSIM FILTER option. */\nstruct vsetNodeVal {\n    RedisModuleString *item;\n    RedisModuleString *attrib;\n};\n\n/* Count the number of set bits in an integer (population count/Hamming weight).\n * This is a portable implementation that doesn't rely on compiler\n * extensions. */\nstatic inline uint32_t bit_count(uint32_t n) {\n    uint32_t count = 0;\n    while (n) {\n        count += n & 1;\n        n >>= 1;\n    }\n    return count;\n}\n\n/* Create a Hadamard-based projection matrix for dimensionality reduction.\n * Uses {-1, +1} entries with a pattern based on bit operations.\n * The pattern is matrix[i][j] = bit_count(i & j) % 2 == 0 ? 1 : -1.\n * The matrix is scaled by 1/sqrt(input_dim) for normalization.\n *\n * Note that compared to other approaches (random Gaussian weights), what\n * we have here is deterministic, which means that our replicas will have\n * the same set of weights. 
Moreover, this approach seems to work much better\n * in practice, and the distances between elements are better preserved.\n *\n * Note that we still save the projection matrix in the RDB file, because\n * in the future we may change the weight generation, and we want everything\n * to be backward compatible. */\nfloat *createProjectionMatrix(uint32_t input_dim, uint32_t output_dim) {\n    float *matrix = RedisModule_Alloc(sizeof(float) * input_dim * output_dim);\n\n    /* Scale factor to normalize the projection. */\n    const float scale = 1.0f / sqrt(input_dim);\n\n    /* Fill the matrix using the Hadamard pattern. */\n    for (uint32_t i = 0; i < output_dim; i++) {\n        for (uint32_t j = 0; j < input_dim; j++) {\n            /* Calculate position in the flattened matrix. */\n            uint32_t pos = i * input_dim + j;\n\n            /* Hadamard pattern: use bit operations to determine the sign.\n             * If the count of 1-bits in the bitwise AND of i and j is even,\n             * the value is 1, otherwise -1. */\n            int value = (bit_count(i & j) % 2 == 0) ? 1 : -1;\n\n            /* Store the scaled value. */\n            matrix[pos] = value * scale;\n        }\n    }\n    return matrix;\n}\n\n/* Apply the random projection to the input vector.\n * Returns a newly allocated vector. */\nfloat *applyProjection(const float *input, const float *proj_matrix,\n                      uint32_t input_dim, uint32_t output_dim)\n{\n    float *output = RedisModule_Alloc(sizeof(float) * output_dim);\n\n    for (uint32_t i = 0; i < output_dim; i++) {\n        const float *row = &proj_matrix[i * input_dim];\n        float sum = 0.0f;\n        for (uint32_t j = 0; j < input_dim; j++) {\n            sum += row[j] * input[j];\n        }\n        output[i] = sum;\n    }\n    return output;\n}\n\n/* Create the vector set as a combined HNSW + dictionary data structure. 
*/\nstruct vsetObject *createVectorSetObject(unsigned int dim, uint32_t quant_type, uint32_t hnsw_M) {\n    struct vsetObject *o;\n    o = RedisModule_Alloc(sizeof(*o));\n\n    o->id = VectorSetTypeNextId++;\n    o->hnsw = hnsw_new(dim,quant_type,hnsw_M);\n    if (!o->hnsw) { // May fail because of mutex creation.\n        RedisModule_Free(o);\n        return NULL;\n    }\n\n    o->dict = RedisModule_CreateDict(NULL);\n    o->proj_matrix = NULL;\n    o->proj_input_size = 0;\n    o->numattribs = 0;\n    o->thread_creation_pending = 0;\n    RedisModule_Assert(pthread_rwlock_init(&o->in_use_lock,NULL) == 0);\n    return o;\n}\n\nvoid vectorSetReleaseNodeValue(void *v) {\n    struct vsetNodeVal *nv = v;\n    RedisModule_FreeString(NULL,nv->item);\n    if (nv->attrib) RedisModule_FreeString(NULL,nv->attrib);\n    RedisModule_Free(nv);\n}\n\n/* Free the vector set object. */\nvoid vectorSetReleaseObject(struct vsetObject *o) {\n    if (!o) return;\n    if (o->hnsw) hnsw_free(o->hnsw,vectorSetReleaseNodeValue);\n    if (o->dict) RedisModule_FreeDict(NULL,o->dict);\n    if (o->proj_matrix) RedisModule_Free(o->proj_matrix);\n    pthread_rwlock_destroy(&o->in_use_lock);\n    RedisModule_Free(o);\n}\n\n/* Wait for all the threads performing operations on this\n * index to terminate their work (locking for write will\n * wait for all the other threads).\n *\n * If 'for_del' is set to 1, we also wait for all the pending threads\n * that still haven't acquired the lock to finish their work. This\n * is useful only if we are going to call this function to delete\n * the object, and not if we just want to modify it. 
*/\nvoid vectorSetWaitAllBackgroundClients(struct vsetObject *vset, int for_del) {\n    if (for_del) {\n        // If we are going to destroy the object, after this call, let's\n        // wait for threads that are being created and still haven't had\n        // a chance to acquire the lock.\n        while (vset->thread_creation_pending > 0);\n    }\n    RedisModule_Assert(pthread_rwlock_wrlock(&vset->in_use_lock) == 0);\n    pthread_rwlock_unlock(&vset->in_use_lock);\n}\n\n/* Return a string representing the quantization type name of a vector set. */\nconst char *vectorSetGetQuantName(struct vsetObject *o) {\n    switch(o->hnsw->quant_type) {\n    case HNSW_QUANT_NONE: return \"f32\";\n    case HNSW_QUANT_Q8: return \"int8\";\n    case HNSW_QUANT_BIN: return \"bin\";\n    default: return \"unknown\";\n    }\n}\n\n/* Insert the specified element into the Vector Set.\n * If update is '1', the existing node will be updated.\n *\n * Returns 1 if the element was added, or 0 if the element was already there\n * and was just updated. */\nint vectorSetInsert(struct vsetObject *o, float *vec, int8_t *qvec, float qrange, RedisModuleString *val, RedisModuleString *attrib, int update, int ef)\n{\n    hnswNode *node = RedisModule_DictGet(o->dict,val,NULL);\n    if (node != NULL) {\n        if (update) {\n            /* Wait for clients in the background: background VSIM\n             * operations touch the node attributes we are going\n             * to touch. */\n            vectorSetWaitAllBackgroundClients(o,0);\n\n            struct vsetNodeVal *nv = node->value;\n            /* Pass NULL as value-free function. We want to reuse\n             * the old value. 
*/\n            hnsw_delete_node(o->hnsw, node, NULL);\n            node = hnsw_insert(o->hnsw,vec,qvec,qrange,0,nv,ef);\n            RedisModule_Assert(node != NULL);\n            RedisModule_DictReplace(o->dict,val,node);\n\n            /* If attrib != NULL, the user wants us to update the attribute\n             * as well in case of an update (otherwise it remains as it was).\n             * Note that the order of operations is conceived so that it\n             * works even when the old attrib and the new attrib pointers are\n             * the same. */\n            if (attrib) {\n                // Empty attribute string means: unset the attribute during\n                // the update.\n                size_t attrlen;\n                RedisModule_StringPtrLen(attrib,&attrlen);\n                if (attrlen != 0) {\n                    RedisModule_RetainString(NULL,attrib);\n                    o->numattribs++;\n                } else {\n                    attrib = NULL;\n                }\n\n                if (nv->attrib) {\n                    o->numattribs--;\n                    RedisModule_FreeString(NULL,nv->attrib);\n                }\n                nv->attrib = attrib;\n            }\n        }\n        return 0;\n    }\n\n    struct vsetNodeVal *nv = RedisModule_Alloc(sizeof(*nv));\n    nv->item = val;\n    nv->attrib = attrib;\n    node = hnsw_insert(o->hnsw,vec,qvec,qrange,0,nv,ef);\n    if (node == NULL) {\n        // XXX Technically in Redis-land we don't have out of memory, as we\n        // crash on OOM. However the HNSW library may fail due to an error in\n        // the locking libc call. 
Probably impossible in practical terms.\n        RedisModule_Free(nv);\n        return 0;\n    }\n    if (attrib != NULL) o->numattribs++;\n    RedisModule_DictSet(o->dict,val,node);\n    RedisModule_RetainString(NULL,val);\n    if (attrib) RedisModule_RetainString(NULL,attrib);\n    return 1;\n}\n\n/* Parse vector from FP32 blob or VALUES format, with optional REDUCE.\n * Format: [REDUCE dim] FP32|VALUES ...\n * Returns allocated vector and sets dimension in *dim.\n * If reduce_dim is not NULL, sets it to the requested reduction dimension.\n * Returns NULL on parsing error.\n *\n * The function sets *consumed_args by reference, so that the caller\n * knows how many arguments we consumed in order to parse the input\n * vector. Remaining arguments are often command options. */\nfloat *parseVector(RedisModuleString **argv, int argc, int start_idx,\n                  size_t *dim, uint32_t *reduce_dim, int *consumed_args)\n{\n    int consumed = 0; // Arguments consumed\n\n    /* Check for REDUCE option first. */\n    if (reduce_dim) *reduce_dim = 0;\n    if (reduce_dim && argc > start_idx + 2 &&\n        !strcasecmp(RedisModule_StringPtrLen(argv[start_idx],NULL),\"REDUCE\"))\n    {\n        long long rdim;\n        if (RedisModule_StringToLongLong(argv[start_idx+1],&rdim)\n            != REDISMODULE_OK || rdim <= 0)\n        {\n            return NULL;\n        }\n        if (reduce_dim) *reduce_dim = rdim;\n        start_idx += 2;  // Skip REDUCE and its argument.\n        consumed += 2;\n    }\n\n    /* Now parse the vector format as before. 
*/\n    float *vec = NULL;\n    const char *vec_format = RedisModule_StringPtrLen(argv[start_idx],NULL);\n\n    if (!strcasecmp(vec_format,\"FP32\")) {\n        if (argc < start_idx + 2) return NULL;  // Need FP32 + vector + value.\n        size_t vec_raw_len;\n        const char *blob =\n            RedisModule_StringPtrLen(argv[start_idx+1],&vec_raw_len);\n\n        // Must be 4 bytes per component.\n        if (vec_raw_len % 4 || vec_raw_len < 4) return NULL;\n        *dim = vec_raw_len/4;\n\n        vec = RedisModule_Alloc(vec_raw_len);\n        if (!vec) return NULL;\n        memcpy(vec,blob,vec_raw_len);\n        consumed += 2;\n    } else if (!strcasecmp(vec_format,\"VALUES\")) {\n        if (argc < start_idx + 2) return NULL;  // Need at least the dimension.\n        long long vdim; // Vector dimension passed by the user.\n        if (RedisModule_StringToLongLong(argv[start_idx+1],&vdim)\n            != REDISMODULE_OK || vdim < 1) return NULL;\n\n        // Check that all the arguments are available.\n        if (argc < start_idx + 2 + vdim) return NULL;\n\n        *dim = vdim;\n        vec = RedisModule_Alloc(sizeof(float) * vdim);\n        if (!vec) return NULL;\n\n        for (int j = 0; j < vdim; j++) {\n            double val;\n            if (RedisModule_StringToDouble(argv[start_idx+2+j],&val)\n                != REDISMODULE_OK)\n            {\n                RedisModule_Free(vec);\n                return NULL;\n            }\n            vec[j] = val;\n        }\n        consumed += vdim + 2;\n    } else {\n        return NULL;  // Unknown format.\n    }\n\n    if (consumed_args) *consumed_args = consumed;\n    return vec;\n}\n\n/* ========================== Commands implementation ======================= */\n\n/* VADD thread handling the \"CAS\" version of the command, that is\n * performed blocking the client, accumulating here, in the thread, the\n * set of potential candidates, and later inserting the element in the\n * key (if it still exists, 
and if it is still the *same* vector set)\n * in the Reply callback. */\nvoid *VADD_thread(void *arg) {\n    pthread_detach(pthread_self());\n\n    void **targ = (void**)arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    struct vsetObject *vset = targ[1];\n    float *vec = targ[3];\n    int ef = (uint64_t)targ[6];\n\n    /* Lock the object and signal that we are no longer pending\n     * the lock acquisition. */\n    RedisModule_Assert(pthread_rwlock_rdlock(&vset->in_use_lock) == 0);\n    vset->thread_creation_pending--;\n\n    /* Look for candidates... */\n    InsertContext *ic = hnsw_prepare_insert(vset->hnsw, vec, NULL, 0, 0, ef);\n    targ[5] = ic; // Pass the context to the reply callback.\n\n    /* Unblock the client so that our reply callback will be invoked. */\n    pthread_rwlock_unlock(&vset->in_use_lock);\n    RedisModule_BlockedClientMeasureTimeEnd(bc);\n    RedisModule_UnblockClient(bc,targ); // Use targ as privdata.\n    return NULL;\n}\n\n/* Reply callback for the CAS variant of VADD.\n * Note: this is called in the main thread; in the background thread\n * we just did the read operation of gathering the neighbors. */\nint VADD_CASReply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    (void)argc;\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    int retval = REDISMODULE_OK;\n    void **targ = (void**)RedisModule_GetBlockedClientPrivateData(ctx);\n    uint64_t vset_id = (unsigned long) targ[2];\n    float *vec = targ[3];\n    RedisModuleString *val = targ[4];\n    InsertContext *ic = targ[5];\n    int ef = (uint64_t)targ[6];\n    RedisModuleString *attrib = targ[7];\n    RedisModule_Free(targ);\n\n    /* Open the key: there are no guarantees it still exists, or contains\n     * a vector set, or even the SAME vector set. 
*/\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    struct vsetObject *vset = NULL;\n\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) == VectorSetType)\n    {\n        vset = RedisModule_ModuleTypeGetValue(key);\n        // Same vector set?\n        if (vset->id != vset_id) vset = NULL;\n\n        /* Also, if the element was already inserted, we just pretend\n         * the other insert won. We don't even start a threaded VADD\n         * if this was an update, since the deletion of the element itself\n         * in order to perform the update would invalidate the CAS state. */\n        if (vset && RedisModule_DictGet(vset->dict,val,NULL) != NULL)\n            vset = NULL;\n    }\n\n    if (vset == NULL) {\n        /* If the object no longer matches the one we had at the start of\n         * the operation, we just pretend the VADD was performed BEFORE the\n         * key was deleted or replaced. We return success but don't do\n         * anything. */\n        hnsw_free_insert_context(ic);\n    } else {\n        /* Otherwise try to insert the new element with the neighbors\n         * collected in the background. If we fail, do it synchronously\n         * again from scratch. */\n\n        // First: allocate the dual-ported value for the node.\n        struct vsetNodeVal *nv = RedisModule_Alloc(sizeof(*nv));\n        nv->item = val;\n        nv->attrib = attrib;\n\n        /* Then: insert the node in the HNSW data structure. Note that\n         * 'ic' could be NULL in case hnsw_prepare_insert() failed because of\n         * locking failure (likely impossible in practical terms). */\n        hnswNode *newnode;\n        if (ic == NULL ||\n            (newnode = hnsw_try_commit_insert(vset->hnsw, ic, nv)) == NULL)\n        {\n            /* If we are here, the CAS insert failed. 
We need to insert\n             * again with full locking for neighbor selection and\n             * actual insertion. This time we can't fail: */\n            newnode = hnsw_insert(vset->hnsw, vec, NULL, 0, 0, nv, ef);\n            RedisModule_Assert(newnode != NULL);\n        }\n        RedisModule_DictSet(vset->dict,val,newnode);\n        val = NULL; // Don't free it later.\n        attrib = NULL; // Don't free it later.\n\n        RedisModule_ReplicateVerbatim(ctx);\n    }\n\n    // Whatever happens is a success... :D\n    RedisModule_ReplyWithBool(ctx,1);\n    if (val) RedisModule_FreeString(ctx,val); // Not added? Free it.\n    if (attrib) RedisModule_FreeString(ctx,attrib); // Not added? Free it.\n    RedisModule_Free(vec);\n    return retval;\n}\n\n/* VADD key [REDUCE dim] FP32|VALUES vector value [CAS] [NOQUANT] [BIN] [Q8]\n *      [EF exploration-factor] [SETATTR json] [M count] */\nint VADD_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc < 5) return RedisModule_WrongArity(ctx);\n\n    /* Parse vector with optional REDUCE */\n    size_t dim = 0;\n    uint32_t reduce_dim = 0;\n    int consumed_args;\n    int cas = 0; // Threaded check-and-set style insert.\n    long long ef = VSET_DEFAULT_C_EF; // HNSW creation time EF for new nodes.\n    long long hnsw_create_M = HNSW_DEFAULT_M; // HNSW creation default M value.\n    float *vec = parseVector(argv, argc, 2, &dim, &reduce_dim, &consumed_args);\n    RedisModuleString *attrib = NULL; // Attributes if passed via SETATTR.\n    if (!vec)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid vector specification\");\n\n    /* Missing element string at the end? */\n    if (argc-2-consumed_args < 1) {\n        RedisModule_Free(vec);\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Parse options after the element string. 
*/\n    uint32_t quant_type = HNSW_QUANT_Q8; // Default quantization type.\n\n    for (int j = 2 + consumed_args + 1; j < argc; j++) {\n        const char *opt = RedisModule_StringPtrLen(argv[j], NULL);\n        if (!strcasecmp(opt, \"CAS\")) {\n            cas = 1;\n        } else if (!strcasecmp(opt, \"EF\") && j+1 < argc) {\n            if (RedisModule_StringToLongLong(argv[j+1], &ef)\n                != REDISMODULE_OK || ef <= 0 || ef > 1000000)\n            {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid EF\");\n            }\n            j++; // skip argument.\n        } else if (!strcasecmp(opt, \"M\") && j+1 < argc) {\n            if (RedisModule_StringToLongLong(argv[j+1], &hnsw_create_M)\n                != REDISMODULE_OK || hnsw_create_M < HNSW_MIN_M ||\n                hnsw_create_M > HNSW_MAX_M)\n            {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid M\");\n            }\n            j++; // skip argument.\n        } else if (!strcasecmp(opt, \"SETATTR\") && j+1 < argc) {\n            attrib = argv[j+1];\n            j++; // skip argument.\n        } else if (!strcasecmp(opt, \"NOQUANT\")) {\n            quant_type = HNSW_QUANT_NONE;\n        } else if (!strcasecmp(opt, \"BIN\")) {\n            quant_type = HNSW_QUANT_BIN;\n        } else if (!strcasecmp(opt, \"Q8\")) {\n            quant_type = HNSW_QUANT_Q8;\n        } else {\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithError(ctx,\"ERR invalid option after element\");\n        }\n    }\n\n    /* Drop CAS if this is a replica and we are getting the command from the\n     * replication link: we want to add/delete items in the same order as\n     * the master, while with CAS the timing would be different.\n     *\n     * Also for Lua scripts and MULTI/EXEC, we want to run the command\n     * on the main thread. 
*/\n    if (RedisModule_GetContextFlags(ctx) &\n            (REDISMODULE_CTX_FLAGS_REPLICATED|\n             REDISMODULE_CTX_FLAGS_LUA|\n             REDISMODULE_CTX_FLAGS_MULTI))\n    {\n        cas = 0;\n    }\n\n    if (VSGlobalConfig.forceSingleThreadExec) {\n        cas = 0;\n    }\n\n    /* Open/create key */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != VectorSetType)\n    {\n        RedisModule_Free(vec);\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Get the correct value argument based on format and REDUCE */\n    RedisModuleString *val = argv[2 + consumed_args];\n\n    /* Create or get existing vector set */\n    struct vsetObject *vset;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        cas = 0; /* Do a synchronous insert at creation, otherwise the\n                  * key would be left empty until the threaded part\n                  * returns. It's also pointless to try\n                  * doing threaded first element insertion. */\n        vset = createVectorSetObject(reduce_dim ? reduce_dim : dim, quant_type, hnsw_create_M);\n        if (vset == NULL) {\n            // We can't fail for OOM in Redis, but the mutex initialization\n            // at least theoretically COULD fail. 
Likely this code path\n            // is not reachable in practical terms.\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR unable to create a Vector Set: system resources issue?\");\n        }\n\n        /* Initialize projection if requested */\n        if (reduce_dim) {\n            vset->proj_matrix = createProjectionMatrix(dim, reduce_dim);\n            vset->proj_input_size = dim;\n\n            /* Project the vector */\n            float *projected = applyProjection(vec, vset->proj_matrix,\n                                            dim, reduce_dim);\n            RedisModule_Free(vec);\n            vec = projected;\n        }\n        RedisModule_ModuleTypeSetValue(key,VectorSetType,vset);\n    } else {\n        vset = RedisModule_ModuleTypeGetValue(key);\n\n        if (vset->hnsw->quant_type != quant_type) {\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR asked quantization mismatch with existing vector set\");\n        }\n\n        if (vset->hnsw->M != hnsw_create_M) {\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR asked M value mismatch with existing vector set\");\n        }\n\n        if ((vset->proj_matrix == NULL && vset->hnsw->vector_dim != dim) ||\n            (vset->proj_matrix && vset->hnsw->vector_dim != reduce_dim))\n        {\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithErrorFormat(ctx,\n                \"ERR Vector dimension mismatch - got %d but set has %d\",\n                (int)dim, (int)vset->hnsw->vector_dim);\n        }\n\n        /* Check REDUCE compatibility */\n        if (reduce_dim) {\n            if (!vset->proj_matrix) {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithError(ctx,\n                    \"ERR cannot add projection to existing set without projection\");\n           
 }\n            if (reduce_dim != vset->hnsw->vector_dim) {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithError(ctx,\n                    \"ERR projection dimension mismatch with existing set\");\n            }\n        }\n\n        /* Apply projection if needed */\n        if (vset->proj_matrix) {\n            /* Ensure input dimension matches the projection matrix's expected input dimension */\n            if (dim != vset->proj_input_size) {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithErrorFormat(ctx,\n                    \"ERR Input dimension mismatch for projection - got %d but projection expects %d\",\n                    (int)dim, (int)vset->proj_input_size);\n            }\n\n            float *projected = applyProjection(vec, vset->proj_matrix,\n                                             vset->proj_input_size,\n                                             vset->hnsw->vector_dim);\n            RedisModule_Free(vec);\n            vec = projected;\n            dim = vset->hnsw->vector_dim;\n        }\n    }\n\n    /* For existing elements don't do CAS updates. As things work now, the\n     * CAS state would be invalidated by the deletion performed before\n     * adding the element back. */\n    if (cas && RedisModule_DictGet(vset->dict,val,NULL) != NULL)\n        cas = 0;\n\n    /* Here, depending on the CAS option, we either insert directly in a\n     * blocking way, or use a thread to do candidate neighbor selection and\n     * only later, in the reply callback, actually add the element. 
*/\n    if (cas) {\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,VADD_CASReply,NULL,NULL,0);\n        pthread_t tid;\n        void **targ = RedisModule_Alloc(sizeof(void*)*8);\n        targ[0] = bc;\n        targ[1] = vset;\n        targ[2] = (void*)(unsigned long)vset->id;\n        targ[3] = vec;\n        targ[4] = val;\n        targ[5] = NULL; // Used later for insertion context.\n        targ[6] = (void*)(unsigned long)ef;\n        targ[7] = attrib;\n        RedisModule_RetainString(ctx,val);\n        if (attrib) RedisModule_RetainString(ctx,attrib);\n        RedisModule_BlockedClientMeasureTimeStart(bc);\n        vset->thread_creation_pending++;\n        if (pthread_create(&tid,NULL,VADD_thread,targ) != 0) {\n            vset->thread_creation_pending--;\n            RedisModule_AbortBlock(bc);\n            RedisModule_Free(targ);\n            RedisModule_FreeString(ctx,val);\n            if (attrib) RedisModule_FreeString(ctx,attrib);\n\n            // Fall back to synchronous insert, see later in the code.\n        } else {\n            return REDISMODULE_OK;\n        }\n    }\n\n    /* Insert vector synchronously: we reach this place even\n     * if cas was true but thread creation failed. */\n    int added = vectorSetInsert(vset,vec,NULL,0,val,attrib,1,ef);\n    RedisModule_Free(vec);\n\n    RedisModule_ReplyWithBool(ctx,added);\n    if (added) RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* HNSW callback to filter items according to a predicate function\n * (our FILTER expression in this case). */\nint vectorSetFilterCallback(void *value, void *privdata) {\n    exprstate *expr = privdata;\n    struct vsetNodeVal *nv = value;\n    if (nv->attrib == NULL) return 0; // No attributes? 
No match.\n    size_t json_len;\n    char *json = (char*)RedisModule_StringPtrLen(nv->attrib,&json_len);\n    return exprRun(expr,json,json_len);\n}\n\n/* Common path for the execution of the VSIM command, both threaded and\n * not threaded. Note that 'ctx' may be a normal context or a thread safe\n * context obtained from a blocked client. The locking that is specific\n * to the vset object is handled by the caller; however the function\n * handles the HNSW locking explicitly. */\nvoid VSIM_execute(RedisModuleCtx *ctx, struct vsetObject *vset,\n    float *vec, unsigned long count, float epsilon, unsigned long withscores,\n    unsigned long withattribs, unsigned long ef, exprstate *filter_expr,\n    unsigned long filter_ef, int ground_truth)\n{\n    /* In our scan, we can't just collect 'count' elements, as\n     * if count is small we would explore the graph in an insufficient\n     * way to provide enough recall.\n     *\n     * If the user didn't ask for a specific exploration factor, we use\n     * VSET_DEFAULT_SEARCH_EF as the minimum, or we match count if count\n     * is greater than that. Otherwise the minimum will be the specified\n     * EF argument. */\n    if (ef == 0) ef = VSET_DEFAULT_SEARCH_EF;\n    if (count > ef) ef = count;\n\n    int slot = hnsw_acquire_read_slot(vset->hnsw);\n    if (ef > vset->hnsw->node_count) ef = vset->hnsw->node_count;\n\n    /* Perform search */\n    hnswNode **neighbors = RedisModule_Alloc(sizeof(hnswNode*)*ef);\n    float *distances = RedisModule_Alloc(sizeof(float)*ef);\n    unsigned int found;\n    if (ground_truth) {\n        found = hnsw_ground_truth_with_filter(vset->hnsw, vec, ef, neighbors,\n                    distances, slot, 0,\n                    filter_expr ? 
vectorSetFilterCallback : NULL,\n                    filter_expr);\n    } else {\n        if (filter_expr == NULL) {\n            found = hnsw_search(vset->hnsw, vec, ef, neighbors,\n                                distances, slot, 0);\n        } else {\n            found = hnsw_search_with_filter(vset->hnsw, vec, ef, neighbors,\n                        distances, slot, 0, vectorSetFilterCallback,\n                        filter_expr, filter_ef);\n        }\n    }\n\n    /* Return results */\n    int resp3 = RedisModule_GetContextFlags(ctx) & REDISMODULE_CTX_FLAGS_RESP3;\n    int reply_with_map = resp3 && (withscores || withattribs);\n\n    if (reply_with_map)\n        RedisModule_ReplyWithMap(ctx, REDISMODULE_POSTPONED_LEN);\n    else\n        RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);\n\n    long long arraylen = 0;\n    for (unsigned int i = 0; i < found && i < count; i++) {\n        if (distances[i]/2 > epsilon) break;\n        struct vsetNodeVal *nv = neighbors[i]->value;\n        RedisModule_ReplyWithString(ctx, nv->item);\n        arraylen++;\n\n        /* If the user asked for multiple properties at the same time using\n         * the RESP3 protocol, we wrap the value of the map into an N-items\n         * array. Two for now, since we have just two properties that can be\n         * requested.\n         *\n         * So in the case of RESP2 we will just have the flat reply:\n         * item, score, attribute. For RESP3 instead item -> [score, attribute]\n         */\n        if (resp3 && withscores && withattribs)\n            RedisModule_ReplyWithArray(ctx,2);\n\n        if (withscores) {\n            /* The similarity score is provided in a 0-1 range. */\n            RedisModule_ReplyWithDouble(ctx, 1.0 - distances[i]/2.0);\n        }\n        if (withattribs) {\n            /* Return the attributes as well, if any. 
*/\n            if (nv->attrib)\n                RedisModule_ReplyWithString(ctx, nv->attrib);\n            else\n                RedisModule_ReplyWithNull(ctx);\n        }\n    }\n    hnsw_release_read_slot(vset->hnsw,slot);\n\n    if (reply_with_map) {\n        RedisModule_ReplySetMapLength(ctx, arraylen);\n    } else {\n        int items_per_ele = 1+withattribs+withscores;\n        RedisModule_ReplySetArrayLength(ctx, arraylen * items_per_ele);\n    }\n\n    RedisModule_Free(vec);\n    RedisModule_Free(neighbors);\n    RedisModule_Free(distances);\n    if (filter_expr) exprFree(filter_expr);\n}\n\n/* VSIM thread handling the blocked client request. */\nvoid *VSIM_thread(void *arg) {\n    pthread_detach(pthread_self());\n\n    // Extract arguments.\n    void **targ = (void**)arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    struct vsetObject *vset = targ[1];\n    float *vec = targ[2];\n    unsigned long count = (unsigned long)targ[3];\n    float epsilon = *((float*)targ[4]);\n    unsigned long withscores = (unsigned long)targ[5];\n    unsigned long withattribs = (unsigned long)targ[6];\n    unsigned long ef = (unsigned long)targ[7];\n    exprstate *filter_expr = targ[8];\n    unsigned long filter_ef = (unsigned long)targ[9];\n    unsigned long ground_truth = (unsigned long)targ[10];\n    RedisModule_Free(targ[4]);\n    RedisModule_Free(targ);\n\n    /* Lock the object and signal that we are no longer pending\n     * the lock acquisition. 
*/\n    RedisModule_Assert(pthread_rwlock_rdlock(&vset->in_use_lock) == 0);\n    vset->thread_creation_pending--;\n\n    // Accumulate reply in a thread safe context: no contention.\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);\n\n    // Run the query.\n    VSIM_execute(ctx, vset, vec, count, epsilon, withscores, withattribs, ef, filter_expr, filter_ef, ground_truth);\n    pthread_rwlock_unlock(&vset->in_use_lock);\n\n    // Cleanup.\n    RedisModule_FreeThreadSafeContext(ctx);\n    RedisModule_BlockedClientMeasureTimeEnd(bc);\n    RedisModule_UnblockClient(bc,NULL);\n    return NULL;\n}\n\n/* VSIM key [ELE|FP32|VALUES] <vector or ele> [WITHSCORES] [WITHATTRIBS] [COUNT num] [EPSILON eps] [EF exploration-factor] [FILTER expression] [FILTER-EF exploration-factor] */\nint VSIM_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    /* Basic argument check: need at least key and vector specification\n     * method. */\n    if (argc < 4) return RedisModule_WrongArity(ctx);\n\n    /* Defaults */\n    int withscores = 0;\n    int withattribs = 0;\n    long long count = VSET_DEFAULT_COUNT;   /* New default value */\n    long long ef = 0;       /* Exploration factor (see HNSW paper) */\n    double epsilon = 2.0;   /* Max cosine distance */\n    long long ground_truth = 0; /* Linear scan instead of HNSW search? */\n    int no_thread = 0;       /* NOTHREAD option: exec on main thread. */\n\n    /* Things computed later. 
*/\n    long long filter_ef = 0;\n    exprstate *filter_expr = NULL;\n\n    /* Get key and vector type */\n    RedisModuleString *key = argv[1];\n    const char *vectorType = RedisModule_StringPtrLen(argv[2], NULL);\n\n    /* Get vector set */\n    RedisModuleKey *keyptr = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n    int type = RedisModule_KeyType(keyptr);\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithEmptyArray(ctx);\n\n    if (RedisModule_ModuleTypeGetType(keyptr) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(keyptr);\n\n    /* Vector parsing stage */\n    float *vec = NULL;\n    size_t dim = 0;\n    int vector_args = 0;  /* Number of args consumed by vector specification */\n\n    if (!strcasecmp(vectorType, \"ELE\")) {\n        /* Get vector from existing element */\n        RedisModuleString *ele = argv[3];\n        hnswNode *node = RedisModule_DictGet(vset->dict, ele, NULL);\n        if (!node) {\n            return RedisModule_ReplyWithError(ctx, \"ERR element not found in set\");\n        }\n        vec = RedisModule_Alloc(sizeof(float) * vset->hnsw->vector_dim);\n        hnsw_get_node_vector(vset->hnsw,node,vec);\n        dim = vset->hnsw->vector_dim;\n        vector_args = 2;  /* ELE + element name */\n    } else {\n        /* Parse vector. */\n        int consumed_args;\n\n        vec = parseVector(argv, argc, 2, &dim, NULL, &consumed_args);\n        if (!vec) {\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR invalid vector specification\");\n        }\n        vector_args = consumed_args;\n\n        /* Apply projection if the set uses it, with the exception\n         * of the ELE type, which will already have the right dimension. 
*/\n        if (vset->proj_matrix && dim != vset->hnsw->vector_dim) {\n            /* Ensure input dimension matches the projection matrix's expected input dimension */\n            if (dim != vset->proj_input_size) {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithErrorFormat(ctx,\n                    \"ERR Input dimension mismatch for projection - got %d but projection expects %d\",\n                    (int)dim, (int)vset->proj_input_size);\n            }\n\n            float *projected = applyProjection(vec, vset->proj_matrix,\n                                             vset->proj_input_size,\n                                             vset->hnsw->vector_dim);\n            RedisModule_Free(vec);\n            vec = projected;\n            dim = vset->hnsw->vector_dim;\n        }\n\n        /* Count consumed arguments */\n        if (!strcasecmp(vectorType, \"FP32\")) {\n            vector_args = 2;  /* FP32 + vector blob */\n        } else if (!strcasecmp(vectorType, \"VALUES\")) {\n            long long vdim;\n            if (RedisModule_StringToLongLong(argv[3], &vdim) != REDISMODULE_OK) {\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid vector dimension\");\n            }\n            vector_args = 2 + vdim;  /* VALUES + dim + values */\n        } else {\n            RedisModule_Free(vec);\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR vector type must be ELE, FP32 or VALUES\");\n        }\n    }\n\n    /* Check vector dimension matches set */\n    if (dim != vset->hnsw->vector_dim) {\n        RedisModule_Free(vec);\n        return RedisModule_ReplyWithErrorFormat(ctx,\n            \"ERR Vector dimension mismatch - got %d but set has %d\",\n            (int)dim, (int)vset->hnsw->vector_dim);\n    }\n\n    /* Parse optional arguments - start after vector specification */\n    int j = 2 + vector_args;\n    while (j < argc) {\n        
const char *opt = RedisModule_StringPtrLen(argv[j], NULL);\n        if (!strcasecmp(opt, \"WITHSCORES\")) {\n            withscores = 1;\n            j++;\n        } else if (!strcasecmp(opt, \"WITHATTRIBS\")) {\n            withattribs = 1;\n            j++;\n        } else if (!strcasecmp(opt, \"TRUTH\")) {\n            ground_truth = 1;\n            j++;\n        } else if (!strcasecmp(opt, \"NOTHREAD\")) {\n            no_thread = 1;\n            j++;\n        } else if (!strcasecmp(opt, \"COUNT\") && j+1 < argc) {\n            if (RedisModule_StringToLongLong(argv[j+1], &count)\n                != REDISMODULE_OK || count <= 0)\n            {\n                RedisModule_Free(vec);\n                if (filter_expr) exprFree(filter_expr);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid COUNT\");\n            }\n            j += 2;\n        } else if (!strcasecmp(opt, \"EPSILON\") && j+1 < argc) {\n            if (RedisModule_StringToDouble(argv[j+1], &epsilon) !=\n                REDISMODULE_OK || epsilon <= 0)\n            {\n                RedisModule_Free(vec);\n                if (filter_expr) exprFree(filter_expr);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid EPSILON\");\n            }\n            j += 2;\n        } else if (!strcasecmp(opt, \"EF\") && j+1 < argc) {\n            if (RedisModule_StringToLongLong(argv[j+1], &ef) !=\n                REDISMODULE_OK || ef <= 0 || ef > 1000000)\n            {\n                RedisModule_Free(vec);\n                if (filter_expr) exprFree(filter_expr);\n                return RedisModule_ReplyWithError(ctx, \"ERR invalid EF\");\n            }\n            j += 2;\n        } else if (!strcasecmp(opt, \"FILTER-EF\") && j+1 < argc) {\n            if (RedisModule_StringToLongLong(argv[j+1], &filter_ef) !=\n                REDISMODULE_OK || filter_ef <= 0)\n            {\n                RedisModule_Free(vec);\n                if (filter_expr) exprFree(filter_expr);\n 
               return RedisModule_ReplyWithError(ctx, \"ERR invalid FILTER-EF\");\n            }\n            j += 2;\n        } else if (!strcasecmp(opt, \"FILTER\") && j+1 < argc) {\n            RedisModuleString *exprarg = argv[j+1];\n            size_t exprlen;\n            char *exprstr = (char*)RedisModule_StringPtrLen(exprarg,&exprlen);\n            int errpos;\n            if (filter_expr) exprFree(filter_expr);\n            filter_expr = exprCompile(exprstr,&errpos);\n            if (filter_expr == NULL) {\n                if ((size_t)errpos >= exprlen) errpos = 0;\n                RedisModule_Free(vec);\n                return RedisModule_ReplyWithErrorFormat(ctx,\n                    \"ERR syntax error in FILTER expression near: %s\",\n                        exprstr+errpos);\n            }\n            j += 2;\n        } else {\n            RedisModule_Free(vec);\n            if (filter_expr) exprFree(filter_expr);\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR syntax error in VSIM command\");\n        }\n    }\n\n    int threaded_request = 1; // Run on a thread, by default.\n    if (filter_ef == 0) filter_ef = count * 100; // Max filter visited nodes.\n\n    /* Disable threaded for MULTI/EXEC and Lua, or if explicitly\n     * requested by the user via the NOTHREAD option. */\n    if (no_thread || VSGlobalConfig.forceSingleThreadExec ||\n        (RedisModule_GetContextFlags(ctx) &\n        (REDISMODULE_CTX_FLAGS_LUA | REDISMODULE_CTX_FLAGS_MULTI)))\n    {\n        threaded_request = 0;\n    }\n\n    if (threaded_request) {\n        /* Note: even if we create one thread per request, the underlying\n         * HNSW library has a fixed number of slots for the threads, as it's\n         * defined in HNSW_MAX_THREADS (beware that if you increase it,\n         * every node will use more memory). 
This means that while this request\n         * is threaded, and will NOT block Redis, it may end up waiting for a\n         * free slot if all the HNSW_MAX_THREADS slots are used. */\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,NULL,NULL,NULL,0);\n        pthread_t tid;\n        void **targ = RedisModule_Alloc(sizeof(void*)*11);\n        targ[0] = bc;\n        targ[1] = vset;\n        targ[2] = vec;\n        targ[3] = (void*)count;\n        targ[4] = RedisModule_Alloc(sizeof(float));\n        *((float*)targ[4]) = epsilon;\n        targ[5] = (void*)(unsigned long)withscores;\n        targ[6] = (void*)(unsigned long)withattribs;\n        targ[7] = (void*)(unsigned long)ef;\n        targ[8] = (void*)filter_expr;\n        targ[9] = (void*)(unsigned long)filter_ef;\n        targ[10] = (void*)(unsigned long)ground_truth;\n        RedisModule_BlockedClientMeasureTimeStart(bc);\n        vset->thread_creation_pending++;\n        if (pthread_create(&tid,NULL,VSIM_thread,targ) != 0) {\n            vset->thread_creation_pending--;\n            RedisModule_AbortBlock(bc);\n            RedisModule_Free(targ[4]);\n            RedisModule_Free(targ);\n            VSIM_execute(ctx, vset, vec, count, epsilon, withscores, withattribs, ef, filter_expr, filter_ef, ground_truth);\n        }\n    } else {\n        VSIM_execute(ctx, vset, vec, count, epsilon, withscores, withattribs, ef, filter_expr, filter_ef, ground_truth);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* VDIM <key>: return the dimension of vectors in the vector set. 
*/\nint VDIM_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithError(ctx, \"ERR key does not exist\");\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n    return RedisModule_ReplyWithLongLong(ctx, vset->hnsw->vector_dim);\n}\n\n/* VCARD <key>: return cardinality (num of elements) of the vector set. */\nint VCARD_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithLongLong(ctx, 0);\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n    return RedisModule_ReplyWithLongLong(ctx, vset->hnsw->node_count);\n}\n\n/* VREM key element\n * Remove an element from a vector set.\n * Returns 1 if the element was found and removed, 0 if not found. */\nint VREM_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    /* Get key and value */\n    RedisModuleString *key = argv[1];\n    RedisModuleString *element = argv[2];\n\n    /* Open key */\n    RedisModuleKey *keyptr = RedisModule_OpenKey(ctx, key,\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(keyptr);\n\n    /* Handle non-existing key or wrong type */\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        return RedisModule_ReplyWithBool(ctx, 0);\n    }\n    if (RedisModule_ModuleTypeGetType(keyptr) != VectorSetType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Get vector set from key */\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(keyptr);\n\n    /* Find the node for this element */\n    hnswNode *node = RedisModule_DictGet(vset->dict, element, NULL);\n    if (!node) {\n        return RedisModule_ReplyWithBool(ctx, 0);\n    }\n\n    /* Remove from dictionary */\n    RedisModule_DictDel(vset->dict, element, NULL);\n\n    /* Remove from HNSW graph using the high-level API that handles\n     * locking and cleanup. We pass vectorSetReleaseNodeValue as the value\n     * free function, since the item and attribute strings were retained\n     * at insertion time. */\n    struct vsetNodeVal *nv = node->value;\n    if (nv->attrib != NULL) vset->numattribs--;\n    RedisModule_Assert(hnsw_delete_node(vset->hnsw, node, vectorSetReleaseNodeValue) == 1);\n\n    /* Destroy empty vector set. */\n    if (RedisModule_DictSize(vset->dict) == 0) {\n        RedisModule_DeleteKey(keyptr);\n    }\n\n    /* Reply and propagate the command */\n    RedisModule_ReplyWithBool(ctx, 1);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* VEMB key element\n * Returns the embedding vector associated with an element, or NIL if not\n * found. 
The vector is returned in the same format it was added, but the\n * return value may lose some precision due to quantization and\n * normalization of vectors. Also, if items were added using REDUCE, the\n * reduced vector is returned instead. */\nint VEMB_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    int raw_output = 0; // RAW option.\n\n    if (argc < 3) return RedisModule_WrongArity(ctx);\n\n    /* Parse arguments. */\n    for (int j = 3; j < argc; j++) {\n        const char *opt = RedisModule_StringPtrLen(argv[j], NULL);\n        if (!strcasecmp(opt,\"raw\")) {\n            raw_output = 1;\n        } else {\n            return RedisModule_ReplyWithError(ctx,\"ERR invalid option\");\n        }\n    }\n\n    /* Get key and element. */\n    RedisModuleString *key = argv[1];\n    RedisModuleString *element = argv[2];\n\n    /* Open key. */\n    RedisModuleKey *keyptr = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n    int type = RedisModule_KeyType(keyptr);\n\n    /* Handle non-existing key and key of wrong type. */\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        return RedisModule_ReplyWithNull(ctx);\n    } else if (RedisModule_ModuleTypeGetType(keyptr) != VectorSetType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Look up the node for the specified element. 
*/\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(keyptr);\n    hnswNode *node = RedisModule_DictGet(vset->dict, element, NULL);\n    if (!node) {\n        return RedisModule_ReplyWithNull(ctx);\n    }\n\n    if (raw_output) {\n        int output_qrange = vset->hnsw->quant_type == HNSW_QUANT_Q8;\n        RedisModule_ReplyWithArray(ctx, 3+output_qrange);\n        RedisModule_ReplyWithSimpleString(ctx, vectorSetGetQuantName(vset));\n        RedisModule_ReplyWithStringBuffer(ctx, node->vector, hnsw_quants_bytes(vset->hnsw));\n        RedisModule_ReplyWithDouble(ctx, node->l2);\n        if (output_qrange) RedisModule_ReplyWithDouble(ctx, node->quants_range);\n    } else {\n        /* Get the vector associated with the node. */\n        float *vec = RedisModule_Alloc(sizeof(float) * vset->hnsw->vector_dim);\n        hnsw_get_node_vector(vset->hnsw, node, vec); // May dequantize/denorm.\n\n        /* Return as array of doubles. */\n        RedisModule_ReplyWithArray(ctx, vset->hnsw->vector_dim);\n        for (uint32_t i = 0; i < vset->hnsw->vector_dim; i++)\n            RedisModule_ReplyWithDouble(ctx, vec[i]);\n        RedisModule_Free(vec);\n    }\n    return REDISMODULE_OK;\n}\n\n/* VSETATTR key element json\n * Set or remove the JSON attribute associated with an element.\n * Setting an empty string removes the attribute.\n * The command returns one if the attribute was actually updated or\n * zero if there is no key or element. 
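 *
 * Illustrative example (hypothetical key and element names):
 *
 *   > VSETATTR points pt:1 '{"color":"red"}'
 *   (integer) 1
 *   > VSETATTR points pt:1 ""
 *   (integer) 1      (the attribute is removed)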
*/\nint VSETATTR_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithBool(ctx, 0);\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n    hnswNode *node = RedisModule_DictGet(vset->dict, argv[2], NULL);\n    if (!node)\n        return RedisModule_ReplyWithBool(ctx, 0);\n\n    struct vsetNodeVal *nv = node->value;\n    RedisModuleString *new_attr = argv[3];\n\n    /* Background VSIM operations use the node attributes, so\n     * wait for background operations before messing with them. */\n    vectorSetWaitAllBackgroundClients(vset,0);\n\n    /* Set or delete the attribute depending on whether it is an empty\n * string or not. 
*/\n    size_t attrlen;\n    RedisModule_StringPtrLen(new_attr, &attrlen);\n    if (attrlen == 0) {\n        // If we had an attribute before, decrease the count and free it.\n        if (nv->attrib) {\n            vset->numattribs--;\n            RedisModule_FreeString(NULL, nv->attrib);\n            nv->attrib = NULL;\n        }\n    } else {\n        // If we didn't have an attribute before, increase the count.\n        // Otherwise free the old one.\n        if (nv->attrib) {\n            RedisModule_FreeString(NULL, nv->attrib);\n        } else {\n            vset->numattribs++;\n        }\n        // Set new attribute.\n        RedisModule_RetainString(NULL, new_attr);\n        nv->attrib = new_attr;\n    }\n\n    RedisModule_ReplyWithBool(ctx, 1);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* VGETATTR key element\n * Get the JSON attribute associated with an element.\n * Returns NIL if the element has no attribute or doesn't exist. */\nint VGETATTR_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithNull(ctx);\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n    hnswNode *node = RedisModule_DictGet(vset->dict, argv[2], NULL);\n    if (!node)\n        return RedisModule_ReplyWithNull(ctx);\n\n    struct vsetNodeVal *nv = node->value;\n    if (!nv->attrib)\n        return RedisModule_ReplyWithNull(ctx);\n\n    return RedisModule_ReplyWithString(ctx, nv->attrib);\n}\n\n/* ============================== Reflection ================================ 
*/\n\n/* VLINKS key element [WITHSCORES]\n * Returns the neighbors of an element at each layer in the HNSW graph.\n * Reply is an array of arrays, where each nested array represents one level\n * of neighbors, from highest level to level 0. If WITHSCORES is specified,\n * each neighbor is paired with its similarity score with the element\n * (the distance converted to a score, to match VSIM behavior). */\nint VLINKS_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc < 3 || argc > 4) return RedisModule_WrongArity(ctx);\n\n    RedisModuleString *key = argv[1];\n    RedisModuleString *element = argv[2];\n\n    /* Parse WITHSCORES option. */\n    int withscores = 0;\n    if (argc == 4) {\n        const char *opt = RedisModule_StringPtrLen(argv[3], NULL);\n        if (strcasecmp(opt, \"WITHSCORES\") != 0) {\n            return RedisModule_WrongArity(ctx);\n        }\n        withscores = 1;\n    }\n\n    RedisModuleKey *keyptr = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n    int type = RedisModule_KeyType(keyptr);\n\n    /* Handle non-existing key or wrong type. */\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithNull(ctx);\n\n    if (RedisModule_ModuleTypeGetType(keyptr) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    /* Find the node for this element. */\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(keyptr);\n    hnswNode *node = RedisModule_DictGet(vset->dict, element, NULL);\n    if (!node)\n        return RedisModule_ReplyWithNull(ctx);\n\n    /* Reply with array of arrays, one per level. */\n    RedisModule_ReplyWithArray(ctx, node->level + 1);\n\n    /* For each level, from highest to lowest: */\n    for (int i = node->level; i >= 0; i--) {\n        /* Reply with array of neighbors at this level. 
*/\n        if (withscores)\n            RedisModule_ReplyWithMap(ctx,node->layers[i].num_links);\n        else\n            RedisModule_ReplyWithArray(ctx,node->layers[i].num_links);\n\n        /* Add each neighbor's element value to the array. */\n        for (uint32_t j = 0; j < node->layers[i].num_links; j++) {\n            struct vsetNodeVal *nv = node->layers[i].links[j]->value;\n            RedisModule_ReplyWithString(ctx, nv->item);\n            if (withscores) {\n                float distance = hnsw_distance(vset->hnsw, node, node->layers[i].links[j]);\n                /* Convert distance to similarity score to match\n                 * VSIM behavior.*/\n                float similarity = 1.0 - distance/2.0;\n                RedisModule_ReplyWithDouble(ctx, similarity);\n            }\n        }\n    }\n    return REDISMODULE_OK;\n}\n\n/* VINFO key\n * Returns information about a vector set, both visible and hidden\n * features of the HNSW data structure. */\nint VINFO_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY)\n        return RedisModule_ReplyWithNullArray(ctx);\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType)\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n\n    /* Reply with hash */\n    RedisModule_ReplyWithMap(ctx, 9);\n\n    /* Quantization type */\n    RedisModule_ReplyWithSimpleString(ctx, \"quant-type\");\n    RedisModule_ReplyWithSimpleString(ctx, vectorSetGetQuantName(vset));\n\n    /* HNSW M value */\n    RedisModule_ReplyWithSimpleString(ctx, \"hnsw-m\");\n    RedisModule_ReplyWithLongLong(ctx, vset->hnsw->M);\n\n    /* Vector 
dimensionality. */\n    RedisModule_ReplyWithSimpleString(ctx, \"vector-dim\");\n    RedisModule_ReplyWithLongLong(ctx, vset->hnsw->vector_dim);\n\n    /* Original input dimension before projection.\n     * This is zero for vector sets without a random projection matrix. */\n    RedisModule_ReplyWithSimpleString(ctx, \"projection-input-dim\");\n    RedisModule_ReplyWithLongLong(ctx, vset->proj_input_size);\n\n    /* Number of elements. */\n    RedisModule_ReplyWithSimpleString(ctx, \"size\");\n    RedisModule_ReplyWithLongLong(ctx, vset->hnsw->node_count);\n\n    /* Max level of HNSW. */\n    RedisModule_ReplyWithSimpleString(ctx, \"max-level\");\n    RedisModule_ReplyWithLongLong(ctx, vset->hnsw->max_level);\n\n    /* Number of nodes with attributes. */\n    RedisModule_ReplyWithSimpleString(ctx, \"attributes-count\");\n    RedisModule_ReplyWithLongLong(ctx, vset->numattribs);\n\n    /* Vector set ID. */\n    RedisModule_ReplyWithSimpleString(ctx, \"vset-uid\");\n    RedisModule_ReplyWithLongLong(ctx, vset->id);\n\n    /* HNSW max node ID. */\n    RedisModule_ReplyWithSimpleString(ctx, \"hnsw-max-node-uid\");\n    RedisModule_ReplyWithLongLong(ctx, vset->hnsw->last_id);\n\n    return REDISMODULE_OK;\n}\n\n/* VRANDMEMBER key [count]\n * Return random members from a vector set.\n *\n * Without count: returns a single random member.\n * With positive count: N unique random members (no duplicates).\n * With negative count: N random members (with possible duplicates).\n *\n * If the key doesn't exist, returns NULL if count is not given, or\n * an empty array if a count was given. */\nint VRANDMEMBER_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    /* Check arguments. */\n    if (argc != 2 && argc != 3) return RedisModule_WrongArity(ctx);\n\n    /* Parse optional count argument. */\n    long long count = 1;  /* Default is to return a single element. 
*/\n    int with_count = (argc == 3);\n\n    if (with_count) {\n        if (RedisModule_StringToLongLong(argv[2], &count) != REDISMODULE_OK) {\n            return RedisModule_ReplyWithError(ctx,\n                \"ERR COUNT value is not an integer\");\n        }\n        /* Count = 0 is a special case, return empty array */\n        if (count == 0) {\n            return RedisModule_ReplyWithEmptyArray(ctx);\n        }\n    }\n\n    /* Open key. */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    /* Handle non-existing key. */\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        if (!with_count) {\n            return RedisModule_ReplyWithNull(ctx);\n        } else {\n            return RedisModule_ReplyWithEmptyArray(ctx);\n        }\n    }\n\n    /* Check key type. */\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Get vector set from key. */\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n    uint64_t set_size = vset->hnsw->node_count;\n\n    /* No elements in the set? */\n    if (set_size == 0) {\n        if (!with_count) {\n            return RedisModule_ReplyWithNull(ctx);\n        } else {\n            return RedisModule_ReplyWithEmptyArray(ctx);\n        }\n    }\n\n    /* Case 1: No count specified: return a single element. */\n    if (!with_count) {\n        hnswNode *random_node = hnsw_random_node(vset->hnsw, 0);\n        if (random_node) {\n            struct vsetNodeVal *nv = random_node->value;\n            return RedisModule_ReplyWithString(ctx, nv->item);\n        } else {\n            return RedisModule_ReplyWithNull(ctx);\n        }\n    }\n\n    /* Case 2: COUNT option given, return an array of elements. */\n    int allow_duplicates = (count < 0);\n    long long abs_count = (count < 0) ? 
-count : count;\n\n    /* Cap the count to the set size if we are not allowing duplicates. */\n    if (!allow_duplicates && abs_count > (long long)set_size)\n        abs_count = set_size;\n\n    /* Prepare reply. */\n    RedisModule_ReplyWithArray(ctx, abs_count);\n\n    if (allow_duplicates) {\n        /* Simple case: With duplicates, just pick random nodes\n         * abs_count times. */\n        for (long long i = 0; i < abs_count; i++) {\n            hnswNode *random_node = hnsw_random_node(vset->hnsw,0);\n            struct vsetNodeVal *nv = random_node->value;\n            RedisModule_ReplyWithString(ctx, nv->item);\n        }\n    } else {\n        /* Case where count is positive: we need unique elements.\n         * But, if the user asked for many elements, selecting so\n         * many (> 20%) random nodes may be too expensive: we just start\n         * from a random element and follow the next link.\n         *\n         * Otherwise, for the <= 20% case, a dictionary is used to\n         * reject duplicates. */\n        int use_dict = (abs_count <= set_size * 0.2);\n\n        if (use_dict) {\n            RedisModuleDict *returned = RedisModule_CreateDict(ctx);\n\n            long long returned_count = 0;\n            while (returned_count < abs_count) {\n                hnswNode *random_node = hnsw_random_node(vset->hnsw, 0);\n                struct vsetNodeVal *nv = random_node->value;\n\n                /* Check if we've already returned this element. */\n                if (RedisModule_DictGet(returned, nv->item, NULL) == NULL) {\n                    /* Mark as returned and add to results. 
*/\n                    RedisModule_DictSet(returned, nv->item, (void*)1);\n                    RedisModule_ReplyWithString(ctx, nv->item);\n                    returned_count++;\n                }\n            }\n            RedisModule_FreeDict(ctx, returned);\n        } else {\n            /* For large samples, get a random starting node and walk\n             * the list.\n             *\n             * IMPORTANT: doing so does not really generate random\n             * elements: it's just a linear scan, but we have no choice:\n             * if we kept picking random elements, more and more of them\n             * would fail the novelty check (i.e. they would already be in\n             * the set to return), so when the percentage of elements to\n             * emit is large we would spend too much CPU. */\n            hnswNode *start_node = hnsw_random_node(vset->hnsw, 0);\n            hnswNode *current = start_node;\n\n            long long returned_count = 0;\n            while (returned_count < abs_count) {\n                if (current == NULL) {\n                    /* Restart from head if we hit the end. */\n                    current = vset->hnsw->head;\n                }\n                struct vsetNodeVal *nv = current->value;\n                RedisModule_ReplyWithString(ctx, nv->item);\n                returned_count++;\n                current = current->next;\n            }\n        }\n    }\n    return REDISMODULE_OK;\n}\n\n/* VISMEMBER key element\n * Check if an element exists in a vector set.\n * Returns 1 if the element exists, 0 if not. */\nint VISMEMBER_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleString *key = argv[1];\n    RedisModuleString *element = argv[2];\n\n    /* Open key. 
*/\n    RedisModuleKey *keyptr = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n    int type = RedisModule_KeyType(keyptr);\n\n    /* Handle non-existing key or wrong type. */\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        /* An element of a non existing key does not exist, like\n         * SISMEMBER & similar. */\n        return RedisModule_ReplyWithBool(ctx, 0);\n    }\n    if (RedisModule_ModuleTypeGetType(keyptr) != VectorSetType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Get the object and test membership via the dictionary in constant\n     * time (assuming a member of average size). */\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(keyptr);\n    hnswNode *node = RedisModule_DictGet(vset->dict, element, NULL);\n    return RedisModule_ReplyWithBool(ctx, node != NULL);\n}\n\n/* Structure to represent a range boundary. */\nstruct vsetRangeOp {\n    int incl;   /* 1 if inclusive ([), 0 if exclusive ((). */\n    int min;    /* 1 if this is \"-\" (minimum). */\n    int max;    /* 1 if this is \"+\" (maximum). */\n    unsigned char *ele;  /* The actual element, NULL if min/max. */\n    size_t ele_len;      /* Length of the element. */\n};\n\n/* Parse a range specification like \"[foo\" or \"(bar\" or \"-\" or \"+\".\n * Returns 1 on success, 0 on error. */\nint vsetParseRangeOp(RedisModuleString *arg, struct vsetRangeOp *op) {\n    size_t len;\n    const char *str = RedisModule_StringPtrLen(arg, &len);\n\n    if (len == 0) return 0;\n\n    /* Initialize the structure. */\n    op->incl = 0;\n    op->min = 0;\n    op->max = 0;\n    op->ele = NULL;\n    op->ele_len = 0;\n\n    /* Check for special cases \"-\" and \"+\". */\n    if (len == 1 && str[0] == '-') {\n        op->min = 1;\n        return 1;\n    }\n    if (len == 1 && str[0] == '+') {\n        op->max = 1;\n        return 1;\n    }\n\n    /* Otherwise, must start with ( or [. 
*/\n    if (str[0] == '[') {\n        op->incl = 1;\n    } else if (str[0] == '(') {\n        op->incl = 0;\n    } else {\n        return 0;  /* Invalid format. */\n    }\n\n    /* Extract the string part after the bracket. */\n    if (len > 1) {\n        op->ele = (unsigned char *)(str + 1);\n        op->ele_len = len - 1;\n    } else {\n        return 0;  /* Just a bracket with no string. */\n    }\n\n    return 1;\n}\n\n/* Check if the current element is within the range defined by the end operator.\n * Returns 1 if the element is within range, 0 if it has passed the end. */\nint vsetIsElementInRange(const void *ele, size_t ele_len, struct vsetRangeOp *end_op) {\n    /* If end is \"+\", element is always in range. */\n    if (end_op->max) return 1;\n\n    /* Compare current element with end boundary. */\n    size_t minlen = ele_len < end_op->ele_len ? ele_len : end_op->ele_len;\n    int cmp = memcmp(ele, end_op->ele, minlen);\n\n    if (cmp == 0) {\n        /* If equal up to minlen, shorter string is smaller. */\n        if (ele_len < end_op->ele_len) {\n            cmp = -1;\n        } else if (ele_len > end_op->ele_len) {\n            cmp = 1;\n        }\n    }\n\n    /* Check based on inclusive/exclusive. */\n    if (end_op->incl) {\n        return cmp <= 0;  /* Inclusive: element <= end. */\n    } else {\n        return cmp < 0;   /* Exclusive: element < end. */\n    }\n}\n\n/* VRANGE key start end [count]\n * Returns elements in the lexicographical range [start, end]\n *\n * Elements must be specified in one of the following forms:\n *\n *  [myelement\n *  (myelement\n *  +\n *  -\n *\n * Elements starting with [ are inclusive, so \"myelement\" would be\n * returned if present in the set. Elements starting with ( are exclusive\n * ranges instead. The special - and + elements mean the minimum and maximum\n * possible element (inclusive), so \"VRANGE key - +\" will return everything\n * (depending on COUNT of course). 
The special - element can be used only\n * as starting element, the special + element only as ending element. */\nint VRANGE_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    /* Check arguments. */\n    if (argc < 4 || argc > 5) return RedisModule_WrongArity(ctx);\n\n    /* Parse COUNT if provided. */\n    long long count = -1;  /* Default: return all elements. */\n    if (argc == 5) {\n        if (RedisModule_StringToLongLong(argv[4], &count) != REDISMODULE_OK) {\n            return RedisModule_ReplyWithError(ctx, \"ERR invalid COUNT value\");\n        }\n    }\n\n    /* Parse range operators. */\n    struct vsetRangeOp start_op, end_op;\n    if (!vsetParseRangeOp(argv[2], &start_op)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid start range format\");\n    }\n    if (!vsetParseRangeOp(argv[3], &end_op)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid end range format\");\n    }\n\n    /* Validate: \"-\" can only be first arg, \"+\" can only be second. */\n    if (start_op.max || end_op.min) {\n        return RedisModule_ReplyWithError(ctx,\n            \"ERR '-' can only be used as first argument, '+' only as second\");\n    }\n\n    /* Open the key. */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        return RedisModule_ReplyWithEmptyArray(ctx);\n    }\n\n    if (RedisModule_ModuleTypeGetType(key) != VectorSetType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    struct vsetObject *vset = RedisModule_ModuleTypeGetValue(key);\n\n    /* Start the iterator. */\n    RedisModuleDictIter *iter;\n    if (start_op.min) {\n        /* Start from the beginning. */\n        iter = RedisModule_DictIteratorStartC(vset->dict, \"^\", NULL, 0);\n    } else {\n        /* Start from the specified element. 
*/\n        const char *op = start_op.incl ? \">=\" : \">\";\n        iter = RedisModule_DictIteratorStartC(vset->dict, op, start_op.ele, start_op.ele_len);\n    }\n\n    /* Collect results. */\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);\n    long long returned = 0;\n\n    void *key_data;\n    size_t key_len;\n    while ((key_data = RedisModule_DictNextC(iter, &key_len, NULL)) != NULL) {\n        /* Check if we've collected enough elements. */\n        if (count >= 0 && returned >= count) break;\n\n        /* Check if we've passed the end range. */\n        if (!vsetIsElementInRange(key_data, key_len, &end_op)) break;\n\n        /* Add this element to the result. */\n        RedisModule_ReplyWithStringBuffer(ctx, key_data, key_len);\n        returned++;\n    }\n\n    RedisModule_ReplySetArrayLength(ctx, returned);\n\n    /* Cleanup. */\n    RedisModule_DictIteratorStop(iter);\n\n    return REDISMODULE_OK;\n}\n\n/* ============================== vset type methods ========================= */\n\n#define SAVE_FLAG_HAS_PROJMATRIX    (1<<0)\n#define SAVE_FLAG_HAS_ATTRIBS       (1<<1)\n\n/* Save object to RDB */\nvoid VectorSetRdbSave(RedisModuleIO *rdb, void *value) {\n    struct vsetObject *vset = value;\n    RedisModule_SaveUnsigned(rdb, vset->hnsw->vector_dim);\n    RedisModule_SaveUnsigned(rdb, vset->hnsw->node_count);\n\n    uint32_t hnsw_config = (vset->hnsw->quant_type & 0xff) |\n                           ((vset->hnsw->M & 0xffff) << 8);\n    RedisModule_SaveUnsigned(rdb, hnsw_config);\n\n    uint32_t save_flags = 0;\n    if (vset->proj_matrix) save_flags |= SAVE_FLAG_HAS_PROJMATRIX;\n    if (vset->numattribs != 0) save_flags |= SAVE_FLAG_HAS_ATTRIBS;\n    RedisModule_SaveUnsigned(rdb, save_flags);\n\n    /* Save projection matrix if present */\n    if (vset->proj_matrix) {\n        uint32_t input_dim = vset->proj_input_size;\n        uint32_t output_dim = vset->hnsw->vector_dim;\n        RedisModule_SaveUnsigned(rdb, input_dim);\n        // 
Output dim is the same as the first value saved\n        // above, so we don't save it.\n\n        // Save projection matrix as binary blob\n        size_t matrix_size = sizeof(float) * input_dim * output_dim;\n        RedisModule_SaveStringBuffer(rdb, (const char *)vset->proj_matrix, matrix_size);\n    }\n\n    hnswNode *node = vset->hnsw->head;\n    while(node) {\n        struct vsetNodeVal *nv = node->value;\n        RedisModule_SaveString(rdb, nv->item);\n        if (vset->numattribs) {\n            if (nv->attrib)\n                RedisModule_SaveString(rdb, nv->attrib);\n            else\n                RedisModule_SaveStringBuffer(rdb, \"\", 0);\n        }\n        hnswSerNode *sn = hnsw_serialize_node(vset->hnsw,node);\n        RedisModule_SaveStringBuffer(rdb, (const char *)sn->vector, sn->vector_size);\n        RedisModule_SaveUnsigned(rdb, sn->params_count);\n        for (uint32_t j = 0; j < sn->params_count; j++)\n            RedisModule_SaveUnsigned(rdb, sn->params[j]);\n        hnsw_free_serialized_node(sn);\n        node = node->next;\n    }\n}\n\n/* Load object from RDB. Recover from recoverable errors (read errors)\n * by performing cleanup. */\nvoid *VectorSetRdbLoad(RedisModuleIO *rdb, int encver) {\n    if (encver != 0) return NULL;  // Invalid version\n\n    uint32_t dim = RedisModule_LoadUnsigned(rdb);\n    uint64_t elements = RedisModule_LoadUnsigned(rdb);\n    uint32_t hnsw_config = RedisModule_LoadUnsigned(rdb);\n    if (RedisModule_IsIOError(rdb)) return NULL;\n    uint32_t quant_type = hnsw_config & 0xff;\n    uint32_t hnsw_m = (hnsw_config >> 8) & 0xffff;\n\n    /* Check that the quantization type is correct. Otherwise\n     * return ASAP signaling the error. 
*/\n    if (quant_type != HNSW_QUANT_NONE &&\n        quant_type != HNSW_QUANT_Q8 &&\n        quant_type != HNSW_QUANT_BIN) return NULL;\n\n    if (hnsw_m == 0) hnsw_m = 16; // Default, useful for RDB files predating\n                                  // this configuration parameter: it was fixed\n                                  // to 16.\n    struct vsetObject *vset = createVectorSetObject(dim,quant_type,hnsw_m);\n    RedisModule_Assert(vset != NULL);\n\n    /* Load projection matrix if present */\n    uint32_t save_flags = RedisModule_LoadUnsigned(rdb);\n    if (RedisModule_IsIOError(rdb)) goto ioerr;\n    int has_projection = save_flags & SAVE_FLAG_HAS_PROJMATRIX;\n    int has_attribs = save_flags & SAVE_FLAG_HAS_ATTRIBS;\n    if (has_projection) {\n        uint32_t input_dim = RedisModule_LoadUnsigned(rdb);\n        if (RedisModule_IsIOError(rdb)) goto ioerr;\n        uint32_t output_dim = dim;\n        size_t matrix_size = sizeof(float) * input_dim * output_dim;\n\n        vset->proj_matrix = RedisModule_Alloc(matrix_size);\n        vset->proj_input_size = input_dim;\n\n        // Load projection matrix as a binary blob\n        char *matrix_blob = RedisModule_LoadStringBuffer(rdb, NULL);\n        if (matrix_blob == NULL) goto ioerr;\n        memcpy(vset->proj_matrix, matrix_blob, matrix_size);\n        RedisModule_Free(matrix_blob);\n    }\n\n    while(elements--) {\n        // Load associated string element.\n        RedisModuleString *ele = RedisModule_LoadString(rdb);\n        if (RedisModule_IsIOError(rdb)) goto ioerr;\n        RedisModuleString *attrib = NULL;\n        if (has_attribs) {\n            attrib = RedisModule_LoadString(rdb);\n            if (RedisModule_IsIOError(rdb)) {\n                RedisModule_FreeString(NULL,ele);\n                goto ioerr;\n            }\n            size_t attrlen;\n            RedisModule_StringPtrLen(attrib,&attrlen);\n            if (attrlen == 0) {\n                RedisModule_FreeString(NULL,attrib);\n      
          attrib = NULL;\n            }\n        }\n        size_t vector_len;\n        void *vector = RedisModule_LoadStringBuffer(rdb, &vector_len);\n        if (RedisModule_IsIOError(rdb)) {\n            RedisModule_FreeString(NULL,ele);\n            if (attrib) RedisModule_FreeString(NULL,attrib);\n            goto ioerr;\n        }\n        uint32_t vector_bytes = hnsw_quants_bytes(vset->hnsw);\n        if (vector_len != vector_bytes) {\n            RedisModule_LogIOError(rdb,\"warning\",\n                                       \"Mismatching vector dimension\");\n            RedisModule_FreeString(NULL,ele);\n            if (attrib) RedisModule_FreeString(NULL,attrib);\n            RedisModule_Free(vector);\n            goto ioerr;\n        }\n\n        // Load node parameters back.\n        uint32_t params_count = RedisModule_LoadUnsigned(rdb);\n        if (RedisModule_IsIOError(rdb)) {\n            RedisModule_FreeString(NULL,ele);\n            if (attrib) RedisModule_FreeString(NULL,attrib);\n            RedisModule_Free(vector);\n            goto ioerr;\n        }\n\n        uint64_t *params = RedisModule_Alloc(params_count*sizeof(uint64_t));\n        for (uint32_t j = 0; j < params_count; j++) {\n            // Ignore loading errors here: handled at the end of the loop.\n            params[j] = RedisModule_LoadUnsigned(rdb);\n        }\n        if (RedisModule_IsIOError(rdb)) {\n            RedisModule_FreeString(NULL,ele);\n            if (attrib) RedisModule_FreeString(NULL,attrib);\n            RedisModule_Free(vector);\n            RedisModule_Free(params);\n            goto ioerr;\n        }\n\n        struct vsetNodeVal *nv = RedisModule_Alloc(sizeof(*nv));\n        nv->item = ele;\n        nv->attrib = attrib;\n        hnswNode *node = hnsw_insert_serialized(vset->hnsw, vector, params, params_count, nv);\n        if (node == NULL) {\n            RedisModule_LogIOError(rdb,\"warning\",\n                                       \"Vector set node index 
loading error\");\n            vectorSetReleaseNodeValue(nv);\n            RedisModule_Free(vector);\n            RedisModule_Free(params);\n            goto ioerr;\n        }\n        if (nv->attrib) vset->numattribs++;\n        RedisModule_DictSet(vset->dict,ele,node);\n        RedisModule_Free(vector);\n        RedisModule_Free(params);\n    }\n\n    uint64_t salt[2];\n    RedisModule_GetRandomBytes((unsigned char*)salt,sizeof(salt));\n    if (!hnsw_deserialize_index(vset->hnsw, salt[0], salt[1])) goto ioerr;\n\n    return vset;\n\nioerr:\n    /* We want to recover from I/O errors and free the partially allocated\n     * data structure to support diskless replication. */\n    vectorSetReleaseObject(vset);\n    return NULL;\n}\n\n/* Calculate memory usage */\nsize_t VectorSetMemUsage(const void *value) {\n    const struct vsetObject *vset = value;\n    size_t size = sizeof(*vset);\n\n    /* Account for HNSW index base structure */\n    size += sizeof(HNSW);\n\n    /* Account for projection matrix if present */\n    if (vset->proj_matrix) {\n        /* For the matrix size, we need the input dimension. We can get it\n         * from the first node if the set is not empty. */\n        uint32_t input_dim = vset->proj_input_size;\n        uint32_t output_dim = vset->hnsw->vector_dim;\n        size += sizeof(float) * input_dim * output_dim;\n    }\n\n    /* Account for each node's memory usage. */\n    hnswNode *node = vset->hnsw->head;\n    if (node == NULL) return size;\n\n    /* Base node structure. */\n    size += sizeof(*node) * vset->hnsw->node_count;\n\n    /* Vector storage. */\n    uint64_t vec_storage = hnsw_quants_bytes(vset->hnsw);\n    size += vec_storage * vset->hnsw->node_count;\n\n    /* Layers array. We use 1.33 as the average number of layers per node. 
*/\n    uint64_t layers_storage = sizeof(hnswNodeLayer) * vset->hnsw->node_count;\n    layers_storage = layers_storage * 4 / 3; // 1.33 times.\n    size += layers_storage;\n\n    /* All the nodes have layer 0 links. */\n    uint64_t level0_links = node->layers[0].max_links;\n    uint64_t other_levels_links = level0_links/2;\n    size += sizeof(hnswNode*) * level0_links * vset->hnsw->node_count;\n\n    /* Add the 0.33 remaining part, but upper layers have fewer links. */\n    size += (sizeof(hnswNode*) * other_levels_links * vset->hnsw->node_count)/3;\n\n    /* Associated string value and attributes.\n     * Use Redis Module API to get string size, and guess that all the\n     * elements have similar size as the first few. */\n    size_t items_scanned = 0, items_size = 0;\n    size_t attribs_scanned = 0, attribs_size = 0;\n    int scan_effort = 20;\n    while(scan_effort > 0 && node) {\n        struct vsetNodeVal *nv = node->value;\n        items_size += RedisModule_MallocSizeString(nv->item);\n        items_scanned++;\n        if (nv->attrib) {\n            attribs_size += RedisModule_MallocSizeString(nv->attrib);\n            attribs_scanned++;\n        }\n        scan_effort--;\n        node = node->next;\n    }\n\n    /* Add the memory usage due to items. */\n    if (items_scanned)\n        size += items_size / items_scanned * vset->hnsw->node_count;\n\n    /* Add memory usage due to attributes. */\n    if (attribs_scanned == 0) {\n        /* We were not lucky enough to find a single attribute in the\n         * first few items? Let's use a fixed arbitrary value. */\n        attribs_scanned = 1;\n        attribs_size = 64;\n    }\n    size += attribs_size / attribs_scanned * vset->numattribs;\n\n    /* Account for dictionary overhead - this is an approximation. 
*/\n    size += RedisModule_DictSize(vset->dict) * (sizeof(void*) * 2);\n\n    return size;\n}\n\n/* Free the entire data structure */\nvoid VectorSetFree(void *value) {\n    struct vsetObject *vset = value;\n\n    vectorSetWaitAllBackgroundClients(vset,1);\n    vectorSetReleaseObject(value);\n}\n\n/* Add object digest to the digest context */\nvoid VectorSetDigest(RedisModuleDigest *md, void *value) {\n    struct vsetObject *vset = value;\n\n    /* Add consistent order-independent hash of all vectors */\n    hnswNode *node = vset->hnsw->head;\n\n    /* Hash the vector dimension and number of nodes. */\n    RedisModule_DigestAddLongLong(md, vset->hnsw->node_count);\n    RedisModule_DigestAddLongLong(md, vset->hnsw->vector_dim);\n    RedisModule_DigestEndSequence(md);\n\n    while(node) {\n        struct vsetNodeVal *nv = node->value;\n        /* Hash each vector component */\n        RedisModule_DigestAddStringBuffer(md, node->vector, hnsw_quants_bytes(vset->hnsw));\n        /* Hash the associated value */\n        size_t len;\n        const char *str = RedisModule_StringPtrLen(nv->item, &len);\n        RedisModule_DigestAddStringBuffer(md, (char*)str, len);\n        if (nv->attrib) {\n            str = RedisModule_StringPtrLen(nv->attrib, &len);\n            RedisModule_DigestAddStringBuffer(md, (char*)str, len);\n        }\n        node = node->next;\n        RedisModule_DigestEndSequence(md);\n    }\n}\n\n// int VectorSets_InitModuleConfig(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\nint VectorSets_InitModuleConfig(RedisModuleCtx *ctx) {\n    if (RegisterModuleConfig(ctx) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Error registering module configuration\");\n        return REDISMODULE_ERR;\n    }\n    // Load default values\n    if (RedisModule_LoadDefaultConfigs(ctx) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Error loading default module configuration\");\n        return REDISMODULE_ERR;\n    } else 
{\n        RedisModule_Log(ctx, \"verbose\", \"Successfully loaded default module configuration\");\n    }\n    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Error loading user module configuration\");\n        return REDISMODULE_ERR;\n    } else {\n        RedisModule_Log(ctx, \"verbose\", \"Successfully loaded user module configuration\");\n    }\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"vectorset\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (VectorSets_InitModuleConfig(ctx) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS|REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);\n\n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = VectorSetRdbLoad,\n        .rdb_save = VectorSetRdbSave,\n        .aof_rewrite = NULL,\n        .mem_usage = VectorSetMemUsage,\n        .free = VectorSetFree,\n        .digest = VectorSetDigest\n    };\n\n    VectorSetType = RedisModule_CreateDataType(ctx,\"vectorset\",0,&tm);\n    if (VectorSetType == NULL) return REDISMODULE_ERR;\n\n    // Register command VADD\n    if (RedisModule_CreateCommand(ctx,\"VADD\",\n        VADD_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vadd_cmd = RedisModule_GetCommand(ctx, \"VADD\");\n    if (vadd_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vadd_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"reduce\", .type = 
REDISMODULE_ARG_TYPE_BLOCK, .token = \"REDUCE\", .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n            .subargs = (RedisModuleCommandArg[]) {\n                { .name = \"dim\", .type = REDISMODULE_ARG_TYPE_INTEGER },\n                { .name = NULL }\n            }\n        },\n        { .name = \"format\", .type = REDISMODULE_ARG_TYPE_ONEOF, .subargs = (RedisModuleCommandArg[]) {\n                { .name = \"fp32\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"FP32\" },\n                { .name = \"values\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"VALUES\" },\n                { .name = NULL }\n            }\n        },\n        { .name = \"vector\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"cas\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"CAS\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"quant_type\", .type = REDISMODULE_ARG_TYPE_ONEOF, .flags = REDISMODULE_CMD_ARG_OPTIONAL, .subargs = (RedisModuleCommandArg[]) {\n                { .name = \"noquant\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"NOQUANT\" },\n                { .name = \"bin\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"BIN\" },\n                { .name = \"q8\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"Q8\" },\n                { .name = NULL }\n            }\n        },\n        { .name = \"build-exploration-factor\", .type = REDISMODULE_ARG_TYPE_INTEGER, .token = \"EF\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"attributes\", .type = REDISMODULE_ARG_TYPE_STRING, .token = \"SETATTR\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"numlinks\", .type = REDISMODULE_ARG_TYPE_INTEGER, .token = \"M\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vadd_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Add one or more 
elements to a vector set, or update its vector if it already exists\",\n        .since = \"8.0.0\",\n        .arity = -5,\n        .args = vadd_args,\n    };\n    if (RedisModule_SetCommandInfo(vadd_cmd, &vadd_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VREM\n    if (RedisModule_CreateCommand(ctx,\"VREM\",\n        VREM_RedisCommand,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vrem_cmd = RedisModule_GetCommand(ctx, \"VREM\");\n    if (vrem_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vrem_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vrem_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Remove an element from a vector set\",\n        .since = \"8.0.0\",\n        .arity = 3,\n        .args = vrem_args,\n    };\n    if (RedisModule_SetCommandInfo(vrem_cmd, &vrem_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VSIM\n    if (RedisModule_CreateCommand(ctx,\"VSIM\",\n        VSIM_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vsim_cmd = RedisModule_GetCommand(ctx, \"VSIM\");\n    if (vsim_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vsim_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"format\", .type = REDISMODULE_ARG_TYPE_ONEOF, .subargs = (RedisModuleCommandArg[]) {\n                { .name = \"ele\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"ELE\" },\n                { .name = \"fp32\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"FP32\" },\n                { .name = \"values\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"VALUES\" },\n         
       { .name = NULL }\n            }\n        },\n        { .name = \"vector_or_element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"withscores\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"WITHSCORES\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"withattribs\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"WITHATTRIBS\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"count\", .type = REDISMODULE_ARG_TYPE_INTEGER, .token = \"COUNT\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"max_distance\", .type = REDISMODULE_ARG_TYPE_DOUBLE, .token = \"EPSILON\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"search-exploration-factor\", .type = REDISMODULE_ARG_TYPE_INTEGER, .token = \"EF\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"expression\", .type = REDISMODULE_ARG_TYPE_STRING, .token = \"FILTER\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"max-filtering-effort\", .type = REDISMODULE_ARG_TYPE_INTEGER, .token = \"FILTER-EF\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"truth\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"TRUTH\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = \"nothread\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"NOTHREAD\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vsim_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return elements by vector similarity\",\n        .since = \"8.0.0\",\n        .arity = -4,\n        .args = vsim_args,\n    };\n    if (RedisModule_SetCommandInfo(vsim_cmd, &vsim_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VDIM\n    if (RedisModule_CreateCommand(ctx, \"VDIM\",\n        VDIM_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vdim_cmd = 
RedisModule_GetCommand(ctx, \"VDIM\");\n    if (vdim_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vdim_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vdim_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return the dimension of vectors in the vector set\",\n        .since = \"8.0.0\",\n        .arity = 2,\n        .args = vdim_args,\n    };\n    if (RedisModule_SetCommandInfo(vdim_cmd, &vdim_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VCARD\n    if (RedisModule_CreateCommand(ctx, \"VCARD\",\n        VCARD_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vcard_cmd = RedisModule_GetCommand(ctx, \"VCARD\");\n    if (vcard_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vcard_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vcard_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return the number of elements in a vector set\",\n        .since = \"8.0.0\",\n        .arity = 2,\n        .args = vcard_args,\n    };\n    if (RedisModule_SetCommandInfo(vcard_cmd, &vcard_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VEMB\n    if (RedisModule_CreateCommand(ctx, \"VEMB\",\n        VEMB_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vemb_cmd = RedisModule_GetCommand(ctx, \"VEMB\");\n    if (vemb_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vemb_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name 
= \"raw\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"RAW\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vemb_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return the vector associated with an element\",\n        .since = \"8.0.0\",\n        .arity = -3,\n        .args = vemb_args,\n    };\n    if (RedisModule_SetCommandInfo(vemb_cmd, &vemb_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VLINKS\n    if (RedisModule_CreateCommand(ctx, \"VLINKS\",\n        VLINKS_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vlinks_cmd = RedisModule_GetCommand(ctx, \"VLINKS\");\n    if (vlinks_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vlinks_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"withscores\", .type = REDISMODULE_ARG_TYPE_PURE_TOKEN, .token = \"WITHSCORES\", .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vlinks_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return the neighbors of an element at each layer in the HNSW graph\",\n        .since = \"8.0.0\",\n        .arity = -3,\n        .args = vlinks_args,\n    };\n    if (RedisModule_SetCommandInfo(vlinks_cmd, &vlinks_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VINFO\n    if (RedisModule_CreateCommand(ctx, \"VINFO\",\n        VINFO_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vinfo_cmd = RedisModule_GetCommand(ctx, \"VINFO\");\n    if (vinfo_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vinfo_args[] = {\n        { .name = \"key\", .type 
= REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vinfo_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return information about a vector set\",\n        .since = \"8.0.0\",\n        .arity = 2,\n        .args = vinfo_args,\n    };\n    if (RedisModule_SetCommandInfo(vinfo_cmd, &vinfo_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VSETATTR\n    if (RedisModule_CreateCommand(ctx, \"VSETATTR\",\n        VSETATTR_RedisCommand, \"write fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vsetattr_cmd = RedisModule_GetCommand(ctx, \"VSETATTR\");\n    if (vsetattr_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vsetattr_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"json\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vsetattr_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Associate or remove the JSON attributes of elements\",\n        .since = \"8.0.0\",\n        .arity = 4,\n        .args = vsetattr_args,\n    };\n    if (RedisModule_SetCommandInfo(vsetattr_cmd, &vsetattr_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VGETATTR\n    if (RedisModule_CreateCommand(ctx, \"VGETATTR\",\n        VGETATTR_RedisCommand, \"readonly fast\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vgetattr_cmd = RedisModule_GetCommand(ctx, \"VGETATTR\");\n    if (vgetattr_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vgetattr_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = 
REDISMODULE_ARG_TYPE_STRING },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vgetattr_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Retrieve the JSON attributes of elements\",\n        .since = \"8.0.0\",\n        .arity = 3,\n        .args = vgetattr_args,\n    };\n    if (RedisModule_SetCommandInfo(vgetattr_cmd, &vgetattr_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VRANDMEMBER\n    if (RedisModule_CreateCommand(ctx, \"VRANDMEMBER\",\n        VRANDMEMBER_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vrandmember_cmd = RedisModule_GetCommand(ctx, \"VRANDMEMBER\");\n    if (vrandmember_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vrandmember_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"count\", .type = REDISMODULE_ARG_TYPE_INTEGER, .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vrandmember_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return one or multiple random members from a vector set\",\n        .since = \"8.0.0\",\n        .arity = -2,\n        .args = vrandmember_args,\n    };\n    if (RedisModule_SetCommandInfo(vrandmember_cmd, &vrandmember_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VISMEMBER\n    if (RedisModule_CreateCommand(ctx, \"VISMEMBER\",\n        VISMEMBER_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vismember_cmd = RedisModule_GetCommand(ctx, \"VISMEMBER\");\n    if (vismember_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vismember_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"element\", .type = 
REDISMODULE_ARG_TYPE_STRING },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vismember_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Check if an element exists in a vector set\",\n        .since = \"8.2.0\",\n        .arity = 3,\n        .args = vismember_args,\n    };\n    if (RedisModule_SetCommandInfo(vismember_cmd, &vismember_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Register command VRANGE\n    if (RedisModule_CreateCommand(ctx, \"VRANGE\",\n        VRANGE_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *vrange_cmd = RedisModule_GetCommand(ctx, \"VRANGE\");\n    if (vrange_cmd == NULL) return REDISMODULE_ERR;\n\n    RedisModuleCommandArg vrange_args[] = {\n        { .name = \"key\", .type = REDISMODULE_ARG_TYPE_KEY, .key_spec_index = 0 },\n        { .name = \"start\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"end\", .type = REDISMODULE_ARG_TYPE_STRING },\n        { .name = \"count\", .type = REDISMODULE_ARG_TYPE_INTEGER, .flags = REDISMODULE_CMD_ARG_OPTIONAL },\n        { .name = NULL }\n    };\n    RedisModuleCommandInfo vrange_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .summary = \"Return vector set elements in a lex range\",\n        .since = \"8.4.0\",\n        .arity = -4,\n        .args = vrange_args,\n    };\n    if (RedisModule_SetCommandInfo(vrange_cmd, &vrange_info) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    // Set the allocator for the HNSW library, so that memory tracking\n    // is correct in Redis.\n    hnsw_set_allocator(RedisModule_Free, RedisModule_Alloc,\n                       RedisModule_Realloc);\n\n    return REDISMODULE_OK;\n}\n\nint VectorSets_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return RedisModule_OnLoad(ctx, argv, argc);\n}\n
  },
  {
    "path": "modules/vector-sets/vset_config.c",
    "content": "/* vector set module configuration.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n*/\n\n#include \"vset_config.h\"\n\n/* Define __STRING macro for portability (not available in all environments) */\n#ifndef __STRING\n#define __STRING(x) #x\n#endif\n\n#define RM_TRY(expr)                                                  \\\n  if (expr == REDISMODULE_ERR) {                                      \\\n    RedisModule_Log(ctx, \"warning\", \"Could not run \" __STRING(expr)); \\\n    return REDISMODULE_ERR;                                           \\\n  }\n\nVSConfig VSGlobalConfig;\n\nint set_bool_config(const char *name, int val, void *privdata,\n                    RedisModuleString **err) {\n  REDISMODULE_NOT_USED(name);\n  REDISMODULE_NOT_USED(err);\n  *(int *)privdata = val;\n  return REDISMODULE_OK;\n}\n\nint get_bool_config(const char *name, void *privdata) {\n  REDISMODULE_NOT_USED(name);\n  return *(int *)privdata;\n}\n\nint RegisterModuleConfig(RedisModuleCtx *ctx) {\n  // Numeric parameters\n  RM_TRY(\n    RedisModule_RegisterBoolConfig(\n      ctx, \"vset-force-single-threaded-execution\", 0,\n      REDISMODULE_CONFIG_UNPREFIXED,\n      get_bool_config, set_bool_config, NULL,\n      (void *)&(VSGlobalConfig.forceSingleThreadExec)\n    )\n  )\n\n  return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "modules/vector-sets/vset_config.h",
    "content": "/* vector set module configuration.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n*/\n\n#ifndef VSET_CONFIG_H\n#define VSET_CONFIG_H\n\n#include \"../../src/redismodule.h\"\n\ntypedef struct {\n  int forceSingleThreadExec;\n} VSConfig;\n\nextern VSConfig VSGlobalConfig;\n\nint RegisterModuleConfig(RedisModuleCtx *ctx);\n\n#endif\n"
  },
  {
    "path": "modules/vector-sets/w2v.c",
    "content": "/*\n * HNSW (Hierarchical Navigable Small World) Implementation\n * Based on the paper by Yu. A. Malkov, D. A. Yashunin\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * Originally authored by: Salvatore Sanfilippo\n */\n\n#define _DEFAULT_SOURCE\n#define _USE_MATH_DEFINES\n#define _POSIX_C_SOURCE 200809L\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <strings.h>\n#include <sys/time.h>\n#include <time.h>\n#include <stdint.h>\n#include <pthread.h>\n#include <stdatomic.h>\n#include <math.h>\n\n#include \"hnsw.h\"\n\n/* Get current time in milliseconds */\nuint64_t ms_time(void) {\n    struct timeval tv;\n    gettimeofday(&tv, NULL);\n    return (uint64_t)tv.tv_sec * 1000 + (tv.tv_usec / 1000);\n}\n\n/* Implementation of the recall test with random vectors. 
*/\nvoid test_recall(HNSW *index, int ef) {\n    const int num_test_vectors = 10000;\n    const int k = 100; // Number of nearest neighbors to find.\n    if (ef < k) ef = k;\n\n    // Add recall distribution counters (2% bins from 0-100%).\n    int recall_bins[50] = {0};\n\n    // Create array to store vectors for mixing.\n    int num_source_vectors = 1000; // Enough, since we mix them.\n    float **source_vectors = malloc(sizeof(float*) * num_source_vectors);\n    if (!source_vectors) {\n        printf(\"Failed to allocate memory for source vectors\\n\");\n        return;\n    }\n\n    // Allocate memory for each source vector.\n    for (int i = 0; i < num_source_vectors; i++) {\n        source_vectors[i] = malloc(sizeof(float) * 300);\n        if (!source_vectors[i]) {\n            printf(\"Failed to allocate memory for source vector %d\\n\", i);\n            // Clean up already allocated vectors.\n            for (int j = 0; j < i; j++) free(source_vectors[j]);\n            free(source_vectors);\n            return;\n        }\n    }\n\n    /* Populate source vectors from the index, we just scan the\n     * first N items. 
*/\n    int source_count = 0;\n    hnswNode *current = index->head;\n    while (current && source_count < num_source_vectors) {\n        hnsw_get_node_vector(index, current, source_vectors[source_count]);\n        source_count++;\n        current = current->next;\n    }\n\n    if (source_count < num_source_vectors) {\n        printf(\"Warning: Only found %d nodes for source vectors\\n\",\n            source_count);\n        num_source_vectors = source_count;\n    }\n\n    // Allocate memory for test vector.\n    float *test_vector = malloc(sizeof(float) * 300);\n    if (!test_vector) {\n        printf(\"Failed to allocate memory for test vector\\n\");\n        for (int i = 0; i < num_source_vectors; i++) {\n            free(source_vectors[i]);\n        }\n        free(source_vectors);\n        return;\n    }\n\n    // Allocate memory for results.\n    hnswNode **hnsw_results = malloc(sizeof(hnswNode*) * ef);\n    hnswNode **linear_results = malloc(sizeof(hnswNode*) * ef);\n    float *hnsw_distances = malloc(sizeof(float) * ef);\n    float *linear_distances = malloc(sizeof(float) * ef);\n\n    if (!hnsw_results || !linear_results || !hnsw_distances || !linear_distances) {\n        printf(\"Failed to allocate memory for results\\n\");\n        if (hnsw_results) free(hnsw_results);\n        if (linear_results) free(linear_results);\n        if (hnsw_distances) free(hnsw_distances);\n        if (linear_distances) free(linear_distances);\n        for (int i = 0; i < num_source_vectors; i++) free(source_vectors[i]);\n        free(source_vectors);\n        free(test_vector);\n        return;\n    }\n\n    // Initialize random seed.\n    srand(time(NULL));\n\n    // Perform recall test.\n    printf(\"\\nPerforming recall test with EF=%d on %d random vectors...\\n\",\n           ef, num_test_vectors);\n    double total_recall = 0.0;\n\n    for (int t = 0; t < num_test_vectors; t++) {\n        // Create a random vector by mixing 3 existing vectors.\n        float weights[3] 
= {0.0};\n        int src_indices[3] = {0};\n\n        // Generate random weights.\n        float weight_sum = 0.0;\n        for (int i = 0; i < 3; i++) {\n            weights[i] = (float)rand() / RAND_MAX;\n            weight_sum += weights[i];\n            src_indices[i] = rand() % num_source_vectors;\n        }\n\n        // Normalize weights.\n        for (int i = 0; i < 3; i++) weights[i] /= weight_sum;\n\n        // Mix vectors.\n        memset(test_vector, 0, sizeof(float) * 300);\n        for (int i = 0; i < 3; i++) {\n            for (int j = 0; j < 300; j++) {\n                test_vector[j] +=\n                    weights[i] * source_vectors[src_indices[i]][j];\n            }\n        }\n\n        // Perform HNSW search with the specified EF parameter.\n        int slot = hnsw_acquire_read_slot(index);\n        int hnsw_found = hnsw_search(index, test_vector, ef, hnsw_results, hnsw_distances, slot, 0);\n\n        // Perform linear search (ground truth).\n        int linear_found = hnsw_ground_truth_with_filter(index, test_vector, ef, linear_results, linear_distances, slot, 0, NULL, NULL);\n        hnsw_release_read_slot(index, slot);\n\n        // Calculate recall for this query (intersection size / k).\n        if (hnsw_found > k) hnsw_found = k;\n        if (linear_found > k) linear_found = k;\n        int intersection_count = 0;\n        for (int i = 0; i < linear_found; i++) {\n            for (int j = 0; j < hnsw_found; j++) {\n                if (linear_results[i] == hnsw_results[j]) {\n                    intersection_count++;\n                    break;\n                }\n            }\n        }\n\n        double recall = (double)intersection_count / linear_found;\n        total_recall += recall;\n\n        // Add to distribution bins (2% steps)\n        int bin_index = (int)(recall * 50);\n        if (bin_index >= 50) bin_index = 49; // Handle 100% recall case\n        recall_bins[bin_index]++;\n\n        // Show progress.\n        if ((t+1) % 
1000 == 0 || t == num_test_vectors-1) {\n            printf(\"Processed %d/%d queries, current avg recall: %.2f%%\\n\",\n                t+1, num_test_vectors, (total_recall / (t+1)) * 100);\n        }\n    }\n\n    // Calculate and print final average recall.\n    double avg_recall = (total_recall / num_test_vectors) * 100;\n    printf(\"\\nRecall Test Results:\\n\");\n    printf(\"Average recall@%d (EF=%d): %.2f%%\\n\", k, ef, avg_recall);\n\n    // Print recall distribution histogram.\n    printf(\"\\nRecall Distribution (2%% bins):\\n\");\n    printf(\"================================\\n\");\n\n    // Find the maximum bin count for scaling.\n    int max_count = 0;\n    for (int i = 0; i < 50; i++) {\n        if (recall_bins[i] > max_count) max_count = recall_bins[i];\n    }\n\n    // Scale factor for histogram (max 50 chars wide)\n    const int max_bars = 50;\n    double scale = (max_count > max_bars) ? (double)max_bars / max_count : 1.0;\n\n    // Print the histogram.\n    for (int i = 0; i < 50; i++) {\n        int bar_len = (int)(recall_bins[i] * scale);\n        printf(\"%3d%%-%-3d%% | %-6d |\", i*2, (i+1)*2, recall_bins[i]);\n        for (int j = 0; j < bar_len; j++) printf(\"#\");\n        printf(\"\\n\");\n    }\n\n    // Cleanup.\n    free(hnsw_results);\n    free(linear_results);\n    free(hnsw_distances);\n    free(linear_distances);\n    free(test_vector);\n    for (int i = 0; i < num_source_vectors; i++) free(source_vectors[i]);\n    free(source_vectors);\n}\n\n/* Example usage in main() */\nint w2v_single_thread(int m_param, int quantization, uint64_t numele, int massdel, int self_recall, int recall_ef) {\n    /* Create index */\n    HNSW *index = hnsw_new(300, quantization, m_param);\n    float v[300];\n    uint16_t wlen;\n\n    FILE *fp = fopen(\"word2vec.bin\",\"rb\");\n    if (fp == NULL) {\n        perror(\"word2vec.bin file missing\");\n        exit(1);\n    }\n    unsigned char header[8];\n    if (fread(header,8,1,fp) <= 0) { // Skip 
header\n        perror(\"Unexpected EOF\");\n        exit(1);\n    }\n\n    uint64_t id = 0;\n    uint64_t start_time = ms_time();\n    char *word = NULL;\n    hnswNode *search_node = NULL;\n\n    while(id < numele) {\n        if (fread(&wlen,2,1,fp) == 0) break;\n        word = malloc(wlen+1);\n        if (fread(word,wlen,1,fp) <= 0) {\n            perror(\"unexpected EOF\");\n            exit(1);\n        }\n        word[wlen] = 0;\n        if (fread(v,300*sizeof(float),1,fp) <= 0) {\n            perror(\"unexpected EOF\");\n            exit(1);\n        }\n\n        // Plain API that acquires a write lock for the whole time.\n        hnswNode *added = hnsw_insert(index, v, NULL, 0, id++, word, 200);\n\n        if (!strcmp(word,\"banana\")) search_node = added;\n        if (!(id % 10000)) printf(\"%llu added\\n\", (unsigned long long)id);\n    }\n    uint64_t elapsed = ms_time() - start_time;\n    fclose(fp);\n\n    printf(\"%llu words added (%llu words/sec), last word: %s\\n\",\n        (unsigned long long)index->node_count,\n        (unsigned long long)id*1000/elapsed, word);\n\n    /* Search query */\n    if (search_node == NULL) search_node = index->head;\n    hnsw_get_node_vector(index,search_node,v);\n    hnswNode *neighbors[10];\n    float distances[10];\n\n    int found, j;\n    start_time = ms_time();\n    for (j = 0; j < 20000; j++)\n        found = hnsw_search(index, v, 10, neighbors, distances, 0, 0);\n    elapsed = ms_time() - start_time;\n    printf(\"%d searches performed (%llu searches/sec), nodes found: %d\\n\",\n        j, (unsigned long long)j*1000/elapsed, found);\n\n    if (found > 0) {\n        printf(\"Found %d neighbors:\\n\", found);\n        for (int i = 0; i < found; i++) {\n            printf(\"Node ID: %llu, distance: %f, word: %s\\n\",\n                   (unsigned long long)neighbors[i]->id,\n                   distances[i], (char*)neighbors[i]->value);\n        }\n    }\n\n    // Self-recall test (ability to find the node by its 
own vector).\n    if (self_recall) {\n        hnsw_print_stats(index);\n        hnsw_test_graph_recall(index,200,0);\n    }\n\n    // Recall test with random vectors.\n    if (recall_ef > 0) {\n        test_recall(index, recall_ef);\n    }\n\n    uint64_t connected_nodes;\n    int reciprocal_links;\n    hnsw_validate_graph(index, &connected_nodes, &reciprocal_links);\n\n    if (massdel) {\n        int remove_perc = 95;\n        printf(\"\\nRemoving %d%% of nodes...\\n\", remove_perc);\n        uint64_t initial_nodes = index->node_count;\n\n        hnswNode *current = index->head;\n        while (current && index->node_count > initial_nodes*(100-remove_perc)/100) {\n            hnswNode *next = current->next;\n            hnsw_delete_node(index,current,free);\n            current = next;\n            // To avoid deleting only contiguous nodes, skip a node\n            // from time to time.\n            if (current && !(random() % remove_perc)) current = current->next;\n        }\n        printf(\"%llu nodes left\\n\", (unsigned long long)index->node_count);\n\n        // Test again.\n        hnsw_validate_graph(index, &connected_nodes, &reciprocal_links);\n        hnsw_test_graph_recall(index,200,0);\n    }\n\n    hnsw_free(index,free);\n    return 0;\n}\n\nstruct threadContext {\n    pthread_mutex_t FileAccessMutex;\n    uint64_t numele;\n    _Atomic uint64_t SearchesDone;\n    _Atomic uint64_t id;\n    FILE *fp;\n    HNSW *index;\n    float *search_vector;\n};\n\n// Note that in practical terms inserting with many concurrent threads\n// may be *slower* and not faster, because there is a lot of\n// contention. 
So this is more a robustness test than anything else.\n//\n// The goal of the optimistic commit API is actually to exploit the\n// ability to add faster when there are many concurrent reads.\nvoid *threaded_insert(void *ctxptr) {\n    struct threadContext *ctx = ctxptr;\n    char *word;\n    float v[300];\n    uint16_t wlen;\n\n    while(1) {\n        // Hold the file mutex across all three reads: the length, the\n        // word and the embedding must be consumed as a single unit,\n        // otherwise concurrent threads would interleave their reads.\n        pthread_mutex_lock(&ctx->FileAccessMutex);\n        if (fread(&wlen,2,1,ctx->fp) == 0) {\n            pthread_mutex_unlock(&ctx->FileAccessMutex);\n            break;\n        }\n        word = malloc(wlen+1);\n        if (fread(word,wlen,1,ctx->fp) <= 0) {\n            perror(\"Unexpected EOF\");\n            exit(1);\n        }\n        word[wlen] = 0;\n        if (fread(v,300*sizeof(float),1,ctx->fp) <= 0) {\n            perror(\"Unexpected EOF\");\n            exit(1);\n        }\n        pthread_mutex_unlock(&ctx->FileAccessMutex);\n\n        // Check-and-set API that performs the costly scan for similar\n        // nodes concurrently with other read threads, and finally\n        // commits the insertion only if the graph wasn't modified.\n        InsertContext *ic;\n        uint64_t next_id = ctx->id++;\n        ic = hnsw_prepare_insert(ctx->index, v, NULL, 0, next_id, 200);\n        if (hnsw_try_commit_insert(ctx->index, ic, word) == NULL) {\n            // This time try locking since the start.\n            hnsw_insert(ctx->index, v, NULL, 0, next_id, word, 200);\n        }\n\n        if (next_id >= ctx->numele) break;\n        if (!((next_id+1) % 10000))\n            printf(\"%llu added\\n\", (unsigned long long)next_id+1);\n    }\n    return NULL;\n}\n\nvoid *threaded_search(void *ctxptr) {\n    struct threadContext *ctx = ctxptr;\n\n    /* Search query */\n    hnswNode *neighbors[10];\n    float distances[10];\n    int found = 0;\n    uint64_t last_id = 0;\n\n    while(ctx->id < 1000000) {\n        int slot = hnsw_acquire_read_slot(ctx->index);\n        found = hnsw_search(ctx->index, ctx->search_vector, 10, neighbors, distances, slot, 0);\n        hnsw_release_read_slot(ctx->index,slot);\n        
last_id = ++ctx->id;\n    }\n\n    if (found > 0 && last_id == 1000000) {\n        printf(\"Found %d neighbors:\\n\", found);\n        for (int i = 0; i < found; i++) {\n            printf(\"Node ID: %llu, distance: %f, word: %s\\n\",\n                   (unsigned long long)neighbors[i]->id,\n                   distances[i], (char*)neighbors[i]->value);\n        }\n    }\n    return NULL;\n}\n\nint w2v_multi_thread(int m_param, int numthreads, int quantization, uint64_t numele) {\n    /* Create index */\n    struct threadContext ctx;\n\n    ctx.index = hnsw_new(300, quantization, m_param);\n\n    ctx.fp = fopen(\"word2vec.bin\",\"rb\");\n    if (ctx.fp == NULL) {\n        perror(\"word2vec.bin file missing\");\n        exit(1);\n    }\n\n    unsigned char header[8];\n    if (fread(header,8,1,ctx.fp) <= 0) { // Skip header\n        perror(\"Unexpected EOF\");\n        exit(1);\n    }\n    pthread_mutex_init(&ctx.FileAccessMutex,NULL);\n\n    uint64_t start_time = ms_time();\n    ctx.id = 0;\n    ctx.numele = numele;\n    pthread_t threads[numthreads];\n    for (int j = 0; j < numthreads; j++)\n        pthread_create(&threads[j], NULL, threaded_insert, &ctx);\n\n    // Wait for all the threads to terminate adding items.\n    for (int j = 0; j < numthreads; j++)\n        pthread_join(threads[j],NULL);\n\n    uint64_t elapsed = ms_time() - start_time;\n    fclose(ctx.fp);\n\n    // Obtain the last word.\n    hnswNode *node = ctx.index->head;\n    char *word = node->value;\n\n    // We will search this last inserted word in the next test.\n    // Let's save its embedding.\n    ctx.search_vector = malloc(sizeof(float)*300);\n    hnsw_get_node_vector(ctx.index,node,ctx.search_vector);\n\n    printf(\"%llu words added (%llu words/sec), last word: %s\\n\",\n        (unsigned long long)ctx.index->node_count,\n        (unsigned long long)ctx.id*1000/elapsed, word);\n\n    /* Search query */\n    start_time = ms_time();\n    ctx.id = 0; // We will use this atomic field to stop 
at N queries done.\n\n    for (int j = 0; j < numthreads; j++)\n        pthread_create(&threads[j], NULL, threaded_search, &ctx);\n\n    // Wait for all the threads to terminate searching.\n    for (int j = 0; j < numthreads; j++)\n        pthread_join(threads[j],NULL);\n\n    elapsed = ms_time() - start_time;\n    printf(\"%llu searches performed (%llu searches/sec)\\n\",\n        (unsigned long long)ctx.id,\n        (unsigned long long)ctx.id*1000/elapsed);\n\n    hnsw_print_stats(ctx.index);\n    uint64_t connected_nodes;\n    int reciprocal_links;\n    hnsw_validate_graph(ctx.index, &connected_nodes, &reciprocal_links);\n    printf(\"%llu connected nodes. Links all reciprocal: %d\\n\",\n        (unsigned long long)connected_nodes, reciprocal_links);\n    hnsw_free(ctx.index,free);\n    return 0;\n}\n\nint main(int argc, char **argv) {\n    int quantization = HNSW_QUANT_NONE;\n    int numthreads = 0;\n    uint64_t numele = 20000;\n    int m_param = 0;  // Default value (0 means use HNSW_DEFAULT_M)\n\n    /* This you can enable in single thread mode for testing: */\n    int massdel = 0;       // If true, does the mass deletion test.\n    int self_recall = 0;   // If true, does the self-recall test.\n    int recall_ef = 0;     // If not 0, does the recall test with this EF value.\n\n    for (int j = 1; j < argc; j++) {\n        int moreargs = argc-j-1;\n\n        if (!strcasecmp(argv[j],\"--quant\")) {\n            quantization = HNSW_QUANT_Q8;\n        } else if (!strcasecmp(argv[j],\"--bin\")) {\n            quantization = HNSW_QUANT_BIN;\n        } else if (!strcasecmp(argv[j],\"--mass-del\")) {\n            massdel = 1;\n        } else if (!strcasecmp(argv[j],\"--self-recall\")) {\n            self_recall = 1;\n        } else if (moreargs >= 1 && !strcasecmp(argv[j],\"--recall\")) {\n            recall_ef = atoi(argv[j+1]);\n            j++;\n        } else if (moreargs >= 1 && !strcasecmp(argv[j],\"--threads\")) {\n            numthreads = atoi(argv[j+1]);\n  
          j++;\n        } else if (moreargs >= 1 && !strcasecmp(argv[j],\"--numele\")) {\n            numele = strtoll(argv[j+1],NULL,0);\n            j++;\n            if (numele < 1) numele = 1;\n        } else if (moreargs >= 1 && !strcasecmp(argv[j],\"--m\")) {\n            m_param = atoi(argv[j+1]);\n            j++;\n        } else if (!strcasecmp(argv[j],\"--help\")) {\n            printf(\"%s [--quant] [--bin] [--threads <count>] [--numele <count>] [--m <count>] [--mass-del] [--self-recall] [--recall <ef>]\\n\", argv[0]);\n            exit(0);\n        } else {\n            printf(\"Unrecognized option or wrong number of arguments: %s\\n\", argv[j]);\n            exit(1);\n        }\n    }\n\n    if (quantization == HNSW_QUANT_NONE) {\n        printf(\"You can enable quantization with --quant\\n\");\n    }\n\n    if (numthreads > 0) {\n        w2v_multi_thread(m_param, numthreads, quantization, numele);\n    } else {\n        printf(\"Single thread execution. Use --threads 4 for concurrent API\\n\");\n        w2v_single_thread(m_param, quantization, numele, massdel, self_recall, recall_ef);\n    }\n}\n"
  },
  {
    "path": "redis-full.conf",
    "content": "include redis.conf\n\nloadmodule ./modules/redisbloom/redisbloom.so\nloadmodule ./modules/redisearch/redisearch.so\nloadmodule ./modules/redisjson/rejson.so\nloadmodule ./modules/redistimeseries/redistimeseries.so\n\n############################## QUERY ENGINE CONFIG ############################\n\n# Keep numeric ranges in numeric tree parent nodes of leaves for `x` generations.\n# numeric, valid range: [0, 2], default: 0\n#\n# search-_numeric-ranges-parents 0\n\n# The number of iterations to run while performing background indexing\n# before we call usleep(1) (sleep for 1 micro-second) and make sure that we\n# allow redis to process other commands.\n# numeric, valid range: [1, UINT32_MAX], default: 100\n#\n# search-bg-index-sleep-gap 100\n\n# The default dialect used in search queries.\n# numeric, valid range: [1, 4], default: 1\n#\n# search-default-dialect 1\n\n# The fork GC will only start to clean when the number of uncleaned\n# documents exceeds this threshold.\n# numeric, valid range: [1, LLONG_MAX], default: 100\n#\n# search-fork-gc-clean-threshold 100\n\n# Interval (in seconds) in which to retry running the fork GC after failure.\n# numeric, valid range: [1, LLONG_MAX], default: 5\n#\n# search-fork-gc-retry-interval 5\n\n# Interval (in seconds) in which to run the fork GC (relevant only when fork\n# GC is used).\n# numeric, valid range: [1, LLONG_MAX], default: 30\n#\n# search-fork-gc-run-interval 30\n\n# The amount of seconds for the fork GC to sleep before exiting.\n# numeric, valid range: [0, LLONG_MAX], default: 0\n#\n# search-fork-gc-sleep-before-exit 0\n\n# Scan this many documents at a time during every GC iteration.\n# numeric, valid range: [1, LLONG_MAX], default: 100\n#\n# search-gc-scan-size 100\n\n# Max number of cursors for a given index that can be opened inside of a shard.\n# numeric, valid range: [0, LLONG_MAX], default: 128\n#\n# search-index-cursor-limit 128\n\n# Maximum number of results from ft.aggregate command.\n# 
numeric, valid range: [0, (1ULL << 31)], default: 1ULL << 31\n#\n# search-max-aggregate-results 2147483648\n\n# Maximum prefix expansions to be used in a query.\n# numeric, valid range: [1, LLONG_MAX], default: 200\n#\n# search-max-prefix-expansions 200\n\n# Maximum runtime document table size (for this process).\n# numeric, valid range: [1, 100000000], default: 1000000\n#\n# search-max-doctablesize 1000000\n\n# Max idle time allowed to be set for a cursor; setting it high might cause\n# high memory consumption.\n# numeric, valid range: [1, LLONG_MAX], default: 300000\n#\n# search-cursor-max-idle 300000\n\n# Maximum number of results from ft.search command.\n# numeric, valid range: [0, 1ULL << 31], default: 1000000\n#\n# search-max-search-results 1000000\n\n# Number of worker threads to use for background tasks when the server is\n# in an operation event.\n# numeric, valid range: [1, 16], default: 4\n#\n# search-min-operation-workers 4\n\n# Minimum length of term to be considered for phonetic matching.\n# numeric, valid range: [1, LLONG_MAX], default: 3\n#\n# search-min-phonetic-term-len 3\n\n# The minimum prefix for expansions (`*`).\n# numeric, valid range: [1, LLONG_MAX], default: 2\n#\n# search-min-prefix 2\n\n# The minimum word length to stem.\n# numeric, valid range: [2, UINT32_MAX], default: 4\n#\n# search-min-stem-len 4\n\n# Delta used to increase positional offsets between array\n# slots for multi text values.\n# Can control the level of separation between phrases in different\n# array slots (related to the SLOP parameter of ft.search command).\n# numeric, valid range: [1, UINT32_MAX], default: 100\n#\n# search-multi-text-slop 100\n\n# Used for setting the buffer limit threshold for vector similarity tiered\n# HNSW index, so that if we are using WORKERS for indexing, and the\n# number of vectors waiting in the buffer to be indexed exceeds this limit,\n# we insert new vectors directly into HNSW.\n# numeric, valid range: [0, LLONG_MAX], default: 1024\n#\n# 
search-tiered-hnsw-buffer-limit 1024\n\n# Query timeout.\n# numeric, valid range: [1, LLONG_MAX], default: 500\n#\n# search-timeout 500\n\n# Minimum number of iterators in a union from which the iterator will\n# switch to a heap-based implementation.\n# numeric, valid range: [1, LLONG_MAX], default: 20\n#\n# search-union-iterator-heap 20\n\n# The maximum memory resize for vector similarity indexes (in bytes).\n# numeric, valid range: [0, UINT32_MAX], default: 0\n#\n# search-vss-max-resize 0\n\n# Number of worker threads to use for query processing and background tasks.\n# numeric, valid range: [0, 16], default: 0\n# This configuration also affects the number of connections per shard.\n#\n# search-workers 0\n\n# The number of high priority tasks to be executed at any given time by the\n# worker thread pool, before executing low priority tasks. After this number\n# of high priority tasks have been executed, the worker thread pool will\n# execute high and low priority tasks alternately.\n# numeric, valid range: [0, LLONG_MAX], default: 1\n#\n# search-workers-priority-bias-threshold 1\n\n# Load extension scoring/expansion module. Immutable.\n# string, default: \"\"\n#\n# search-ext-load \"\"\n\n# Path to Chinese dictionary configuration file (for Chinese tokenization). Immutable.\n# string, default: \"\"\n#\n# search-friso-ini \"\"\n\n# Action to perform when search timeout is exceeded (choose RETURN or FAIL).\n# enum, valid values: [\"return\", \"fail\"], default: \"fail\"\n#\n# search-on-timeout fail\n\n# Determine whether some index resources are freed on a second thread.\n# bool, default: yes\n#\n# search-_free-resource-on-thread yes\n\n# Enable legacy compression of double to float.\n# bool, default: no\n#\n# search-_numeric-compress no\n\n# Disable print of time for ft.profile. 
For testing only.\n# bool, default: yes\n#\n# search-_print-profile-clock yes\n\n# The intersection iterator orders its child iterators by their relative\n# estimated number of results in ascending order, so that if we see the\n# iterators with a lower count of results first we will skip a larger number\n# of results, which translates into faster iteration. If this flag is set, we\n# use this optimization in a way where union iterators are factorized by the\n# number of their own children, so that we sort by the number of children\n# times the overall estimated number of results instead.\n# bool, default: no\n#\n# search-_prioritize-intersect-union-children no\n\n# Set to run without memory pools.\n# bool, default: no\n#\n# search-no-mem-pools no\n\n# Disable garbage collection (for this process).\n# bool, default: no\n#\n# search-no-gc no\n\n# Enable the commands filter which optimizes indexing on partial hash updates.\n# bool, default: no\n#\n# search-partial-indexed-docs no\n\n# Disable compression for DocID inverted index. Boosts CPU performance.\n# bool, default: no\n#\n# search-raw-docid-encoding no\n\n# Number of search threads in the coordinator thread pool.\n# numeric, valid range: [1, LLONG_MAX], default: 20\n#\n# search-threads 20\n\n# Timeout for topology validation (in milliseconds). 
After this timeout,\n# any pending requests will be processed, even if the topology is not fully connected.\n# numeric, valid range: [0, LLONG_MAX], default: 30000\n#\n# search-topology-validation-timeout 30000\n\n\n############################## TIME SERIES CONFIG #############################\n\n# The maximal number of per-shard threads for cross-key queries when using cluster mode\n# (TS.MRANGE, TS.MREVRANGE, TS.MGET, and TS.QUERYINDEX).\n# Note: increasing this value may either increase or decrease the performance.\n# integer, valid range: [1..16], default: 3\n# This is a load-time configuration parameter.\n#\n# ts-num-threads 3\n\n\n# Default compaction rules for newly created key with TS.ADD, TS.INCRBY, and TS.DECRBY.\n# Has no effect on keys created with TS.CREATE.\n# This default value is applied to each new time series upon its creation.\n# string, see documentation for rules format, default: no compaction rules\n#\n# ts-compaction-policy \"\"\n\n# Default chunk encoding for automatically-created compacted time series.\n# This default value is applied to each new compacted time series automatically\n# created when ts-compaction-policy is specified.\n# valid values: COMPRESSED, UNCOMPRESSED, default: COMPRESSED\n#\n# ts-encoding COMPRESSED\n\n\n# Default retention period, in milliseconds. 0 means no expiration.\n# This default value is applied to each new time series upon its creation.\n# If ts-compaction-policy is specified - it is overridden for created\n# compactions as specified in ts-compaction-policy.\n# integer, valid range: [0 .. 
LLONG_MAX], default: 0\n#\n# ts-retention-policy 0\n\n# Default policy for handling insertion (TS.ADD and TS.MADD) of multiple\n# samples with identical timestamps.\n# This default value is applied to each new time series upon its creation.\n# string, valid values: BLOCK, FIRST, LAST, MIN, MAX, SUM, default: BLOCK\n#\n# ts-duplicate-policy BLOCK\n\n# Default initial allocation size, in bytes, for the data part of each new chunk.\n# This default value is applied to each new time series upon its creation.\n# integer, valid range: [48 .. 1048576]; must be a multiple of 8, default: 4096\n#\n# ts-chunk-size-bytes 4096\n\n# Default values for newly created time series.\n# Many sensors report data periodically. Often, the difference between the measured\n# value and the previous measured value is negligible and related to random noise\n# or to measurement accuracy limitations. In such situations it may be preferable\n# not to add the new measurement to the time series.\n# A new sample is considered a duplicate and is ignored if the following conditions are met:\n# - The time series is not a compaction;\n# - The time series' DUPLICATE_POLICY is LAST;\n# - The sample is added in-order (timestamp >= max_timestamp);\n# - The difference of the current timestamp from the previous timestamp\n#   (timestamp - max_timestamp) is less than or equal to ts-ignore-max-time-diff;\n# - The absolute value difference of the current value from the value at the previous maximum timestamp\n#   (abs(value - value_at_max_timestamp)) is less than or equal to ts-ignore-max-val-diff.\n# where max_timestamp is the timestamp of the sample with the largest timestamp in the time series,\n# and value_at_max_timestamp is the value at max_timestamp.\n# ts-ignore-max-time-diff: integer, valid range: [0 .. LLONG_MAX], default: 0\n# ts-ignore-max-val-diff: double, valid range: [0 .. 
DBL_MAX], default: 0\n#\n# ts-ignore-max-time-diff 0\n# ts-ignore-max-val-diff 0\n\n\n########################### BLOOM FILTERS CONFIG ##############################\n\n# Default values for new Bloom filters created with BF.ADD, BF.MADD, BF.INSERT, and BF.RESERVE.\n# These defaults are applied to each new Bloom filter upon its creation.\n\n# Error ratio\n# The desired probability for false positives.\n# For a false positive rate of 0.1% (1 in 1000) - the value should be 0.001.\n# double, valid range: (0 .. 1), value greater than 0.25 is treated as 0.25, default: 0.01\n#\n# bf-error-rate 0.01\n\n# Initial capacity\n# The number of entries intended to be added to the filter.\n# integer, valid range: [1 .. 1GB], default: 100\n#\n# bf-initial-size 100\n\n# Expansion factor\n# When capacity is reached, an additional sub-filter is created.\n# The size of the new sub-filter is the size of the last sub-filter multiplied\n# by expansion.\n# integer, [0 .. 32768]. 0 is equivalent to NONSCALING. default: 2\n#\n# bf-expansion-factor 2\n\n\n########################### CUCKOO FILTERS CONFIG #############################\n\n# Default values for new Cuckoo filters created with\n# CF.ADD, CF.ADDNX, CF.INSERT, CF.INSERTNX, and CF.RESERVE.\n# These defaults are applied to each new Cuckoo filter upon its creation.\n\n# Initial capacity\n# A filter will likely not fill up to 100% of its capacity.\n# Make sure to reserve extra capacity if you want to avoid expansions.\n# The value is rounded up to the next 2^n integer.\n# integer, valid range: [2*cf-bucket-size .. 1GB], default: 1024\n#\n# cf-initial-size 1024\n\n# Number of items in each bucket\n# The minimal false positive rate is 2/255 ~ 0.78% when a bucket size of 1 is used.\n# Larger buckets increase the error rate linearly, but improve the fill rate.\n# integer, valid range: [1 .. 
255], default: 2\n#\n# cf-bucket-size 2\n\n# Maximum iterations\n# Number of attempts to swap items between buckets before declaring the filter\n# as full and creating an additional filter.\n# A lower value improves performance. A higher value improves fill rate.\n# integer, valid range: [1 .. 65535], default: 20\n#\n# cf-max-iterations 20\n\n# Expansion factor\n# When a new filter is created, its size is the size of the current filter\n# multiplied by this factor.\n# integer, valid range: [0 .. 32768], 0 is equivalent to NONSCALING, default: 1\n#\n# cf-expansion-factor 1\n\n# Maximum expansions\n# integer, valid range: [1 .. 65536], default: 32\n#\n# cf-max-expansions 32\n\n\n################################## SECURITY ###################################\n#\n# The following is a list of command categories and their meanings:\n#\n# * search - Query engine related.\n# * json - Data type: JSON related.\n# * timeseries - Data type: time series related.\n# * bloom - Data type: Bloom filter related.\n# * cuckoo - Data type: Cuckoo filter related.\n# * topk - Data type: top-k related.\n# * cms - Data type: count-min sketch related.\n# * tdigest - Data type: t-digest related.\n\n"
  },
  {
    "path": "redis.conf",
    "content": "# Redis configuration file example.\n#\n# Note that in order to read the configuration file, Redis must be\n# started with the file path as first argument:\n#\n# ./redis-server /path/to/redis.conf\n\n# Note on units: when memory size is needed, it is possible to specify\n# it in the usual form of 1k 5GB 4M and so forth:\n#\n# 1k => 1000 bytes\n# 1kb => 1024 bytes\n# 1m => 1000000 bytes\n# 1mb => 1024*1024 bytes\n# 1g => 1000000000 bytes\n# 1gb => 1024*1024*1024 bytes\n#\n# units are case insensitive so 1GB 1Gb 1gB are all the same.\n\n################################## INCLUDES ###################################\n\n# Include one or more other config files here.  This is useful if you\n# have a standard template that goes to all Redis servers but also need\n# to customize a few per-server settings.  Include files can include\n# other files, so use this wisely.\n#\n# Note that option \"include\" won't be rewritten by command \"CONFIG REWRITE\"\n# from admin or Redis Sentinel. Since Redis always uses the last processed\n# line as value of a configuration directive, you'd better put includes\n# at the beginning of this file to avoid overwriting config change at runtime.\n#\n# If instead you are interested in using includes to override configuration\n# options, it is better to use include as the last line.\n#\n# Included paths may contain wildcards. All files matching the wildcards will\n# be included in alphabetical order.\n# Note that if an include path contains a wildcards but no files match it when\n# the server is started, the include statement will be ignored and no error will\n# be emitted.  It is safe, therefore, to include wildcard files from empty\n# directories.\n#\n# include /path/to/local.conf\n# include /path/to/other.conf\n# include /path/to/fragments/*.conf\n#\n\n################################## MODULES #####################################\n\n# Load modules at startup. If the server is not able to load modules\n# it will abort. 
It is possible to use multiple loadmodule directives.\n#\n# loadmodule /path/to/my_module.so\n# loadmodule /path/to/other_module.so\n# loadmodule /path/to/args_module.so [arg [arg ...]]\n\n################################## NETWORK #####################################\n\n# By default, if no \"bind\" configuration directive is specified, Redis listens\n# for connections from all available network interfaces on the host machine.\n# It is possible to listen to just one or multiple selected interfaces using\n# the \"bind\" configuration directive, followed by one or more IP addresses.\n# Each address can be prefixed by \"-\", which means that redis will not fail to\n# start if the address is not available. Being not available only refers to\n# addresses that do not correspond to any network interface. Addresses that\n# are already in use will always fail, and unsupported protocols will always be\n# silently skipped.\n#\n# Examples:\n#\n# bind 192.168.1.100 10.0.0.1     # listens on two specific IPv4 addresses\n# bind 127.0.0.1 ::1              # listens on loopback IPv4 and IPv6\n# bind * -::*                     # like the default, all available interfaces\n#\n# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the\n# internet, binding to all the interfaces is dangerous and will expose the\n# instance to everybody on the internet. 
So by default we uncomment the\n# following bind directive, that will force Redis to listen only on the\n# IPv4 and IPv6 (if available) loopback interface addresses (this means Redis\n# will only be able to accept client connections from the same host that it is\n# running on).\n#\n# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES\n# COMMENT OUT THE FOLLOWING LINE.\n#\n# You will also need to set a password unless you explicitly disable protected\n# mode.\n# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nbind 127.0.0.1 -::1\n\n# By default, outgoing connections (from replica to master, from Sentinel to\n# instances, cluster bus, etc.) are not bound to a specific local address. In\n# most cases, this means the operating system will handle that based on routing\n# and the interface through which the connection goes out.\n#\n# Using bind-source-addr it is possible to configure a specific address to bind\n# to, which may also affect how the connection gets routed.\n#\n# Example:\n#\n# bind-source-addr 10.0.0.1\n\n# Protected mode is a layer of security protection, in order to avoid that\n# Redis instances left open on the internet are accessed and exploited.\n#\n# When protected mode is on and the default user has no password, the server\n# only accepts local connections from the IPv4 address (127.0.0.1), IPv6 address\n# (::1) or Unix domain sockets.\n#\n# By default protected mode is enabled. You should disable it only if\n# you are sure you want clients from other hosts to connect to Redis\n# even if no authentication is configured.\nprotected-mode yes\n\n# Redis uses default hardened security configuration directives to reduce the\n# attack surface on innocent users. 
Therefore, several sensitive configuration\n# directives are immutable, and some potentially-dangerous commands are blocked.\n#\n# Configuration directives that control files that Redis writes to (e.g., 'dir'\n# and 'dbfilename') and that aren't usually modified during runtime\n# are protected by making them immutable.\n#\n# Commands that can increase the attack surface of Redis and that aren't usually\n# called by users are blocked by default.\n#\n# These can be exposed to either all connections or just local ones by setting\n# each of the configs listed below to either of these values:\n#\n# no    - Block for any connection (remain immutable)\n# yes   - Allow for any connection (no protection)\n# local - Allow only for local connections. Ones originating from the\n#         IPv4 address (127.0.0.1), IPv6 address (::1) or Unix domain sockets.\n#\n# enable-protected-configs no\n# enable-debug-command no\n# enable-module-command no\n\n# Accept connections on the specified port, default is 6379 (IANA #815344).\n# If port 0 is specified Redis will not listen on a TCP socket.\nport 6379\n\n# TCP listen() backlog.\n#\n# In high requests-per-second environments you need a high backlog in order\n# to avoid slow clients connection issues. Note that the Linux kernel\n# will silently truncate it to the value of /proc/sys/net/core/somaxconn so\n# make sure to raise both the value of somaxconn and tcp_max_syn_backlog\n# in order to get the desired effect.\ntcp-backlog 511\n\n# Unix socket.\n#\n# Specify the path for the Unix socket that will be used to listen for\n# incoming connections. There is no default, so Redis will not listen\n# on a unix socket when not specified.\n#\n# unixsocket /run/redis.sock\n# unixsocketperm 700\n\n# Close the connection after a client is idle for N seconds (0 to disable)\ntimeout 0\n\n# TCP keepalive.\n#\n# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence\n# of communication. 
This is useful for two reasons:\n#\n# 1) Detect dead peers.\n# 2) Force network equipment in the middle to consider the connection to be\n#    alive.\n#\n# On Linux, the specified value (in seconds) is the period used to send ACKs.\n# Note that to close the connection the double of the time is needed.\n# On other kernels the period depends on the kernel configuration.\n#\n# A reasonable value for this option is 300 seconds, which is the new\n# Redis default starting with Redis 3.2.1.\ntcp-keepalive 300\n\n# Apply OS-specific mechanism to mark the listening socket with the specified\n# ID, to support advanced routing and filtering capabilities.\n#\n# On Linux, the ID represents a connection mark.\n# On FreeBSD, the ID represents a socket cookie ID.\n# On OpenBSD, the ID represents a route table ID.\n#\n# The default value is 0, which implies no marking is required.\n# socket-mark-id 0\n\n################################# TLS/SSL #####################################\n\n# By default, TLS/SSL is disabled. To enable it, the \"tls-port\" configuration\n# directive can be used to define TLS-listening ports. To enable TLS on the\n# default port, use:\n#\n# port 0\n# tls-port 6379\n\n# Configure a X.509 certificate and private key to use for authenticating the\n# server to connected clients, masters or cluster peers.  These files should be\n# PEM formatted.\n#\n# tls-cert-file redis.crt\n# tls-key-file redis.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-key-file-pass secret\n\n# Normally Redis uses the same certificate for both server functions (accepting\n# connections) and client functions (replicating from a master, establishing\n# cluster bus connections, etc.).\n#\n# Sometimes certificates are issued with attributes that designate them as\n# client-only or server-only certificates. In that case it may be desired to use\n# different certificates for incoming (server) and outgoing (client)\n# connections. 
To do that, use the following directives:\n#\n# tls-client-cert-file client.crt\n# tls-client-key-file client.key\n#\n# If the key file is encrypted using a passphrase, it can be included here\n# as well.\n#\n# tls-client-key-file-pass secret\n\n# Configure a DH parameters file to enable Diffie-Hellman (DH) key exchange,\n# required by older versions of OpenSSL (<3.0). Newer versions do not require\n# this configuration and recommend against it.\n#\n# tls-dh-params-file redis.dh\n\n# Configure a CA certificate(s) bundle or directory to authenticate TLS/SSL\n# clients and peers.  Redis requires an explicit configuration of at least one\n# of these, and will not implicitly use the system wide configuration.\n#\n# tls-ca-cert-file ca.crt\n# tls-ca-cert-dir /etc/ssl/certs\n\n# By default, clients (including replica servers) on a TLS port are required\n# to authenticate using valid client side certificates.\n#\n# If \"no\" is specified, client certificates are not required and not accepted.\n# If \"optional\" is specified, client certificates are accepted and must be\n# valid if provided, but are not required.\n#\n# tls-auth-clients no\n# tls-auth-clients optional\n\n# Automatically authenticate TLS clients as Redis users based on their\n# certificates.\n#\n# If set to a field like \"CN\", the server will extract the corresponding field\n# from the client's TLS certificate and attempt to find a Redis user with the\n# same name. If a matching user is found, the client is automatically\n# authenticated as that user during the TLS handshake. If no matching user is\n# found, the client is connected as the unauthenticated default user. Set to\n# \"off\" to disable automatic user authentication via certificate fields.\n#\n# Supported values: CN, off. 
Default: off.\n#\n# Matches certificate CN to Redis username (exact match only).\n# Example: Cert CN=myapp -> authenticates as user \"myapp\"\n#\n# tls-auth-clients-user CN\n\n# By default, a Redis replica does not attempt to establish a TLS connection\n# with its master.\n#\n# Use the following directive to enable TLS on replication links.\n#\n# tls-replication yes\n\n# By default, the Redis Cluster bus uses a plain TCP connection. To enable\n# TLS for the bus protocol, use the following directive:\n#\n# tls-cluster yes\n\n# By default, only TLSv1.2 and TLSv1.3 are enabled and it is highly recommended\n# that older formally deprecated versions are kept disabled to reduce the attack surface.\n# You can explicitly specify TLS versions to support.\n# Allowed values are case insensitive and include \"TLSv1\", \"TLSv1.1\", \"TLSv1.2\",\n# \"TLSv1.3\" (OpenSSL >= 1.1.1) or any combination.\n# To enable only TLSv1.2 and TLSv1.3, use:\n#\n# tls-protocols \"TLSv1.2 TLSv1.3\"\n\n# Configure allowed ciphers.  See the ciphers(1ssl) manpage for more information\n# about the syntax of this string.\n#\n# Note: this configuration applies only to <= TLSv1.2.\n#\n# tls-ciphers DEFAULT:!MEDIUM\n\n# Configure allowed TLSv1.3 ciphersuites.  See the ciphers(1ssl) manpage for more\n# information about the syntax of this string, and specifically for TLSv1.3\n# ciphersuites.\n#\n# tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256\n\n# When choosing a cipher, use the server's preference instead of the client\n# preference. By default, the server follows the client's preference.\n#\n# tls-prefer-server-ciphers yes\n\n# By default, TLS session caching is enabled to allow faster and less expensive\n# reconnections by clients that support it. Use the following directive to disable\n# caching.\n#\n# tls-session-caching no\n\n# Change the default number of TLS sessions cached. A zero value sets the cache\n# to unlimited size. 
The default size is 20480.\n#\n# tls-session-cache-size 5000\n\n# Change the default timeout of cached TLS sessions. The default timeout is 300\n# seconds.\n#\n# tls-session-cache-timeout 60\n\n################################# GENERAL #####################################\n\n# By default Redis does not run as a daemon. Use 'yes' if you need it.\n# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.\n# When Redis is supervised by upstart or systemd, this parameter has no impact.\ndaemonize no\n\n# If you run Redis from upstart or systemd, Redis can interact with your\n# supervision tree. Options:\n#   supervised no      - no supervision interaction\n#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode\n#                        requires \"expect stop\" in your upstart job config\n#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET\n#                        on startup, and updating Redis status on a regular\n#                        basis.\n#   supervised auto    - detect upstart or systemd method based on\n#                        UPSTART_JOB or NOTIFY_SOCKET environment variables\n# Note: these supervision methods only signal \"process is ready.\"\n#       They do not enable continuous pings back to your supervisor.\n#\n# The default is \"no\". To run under upstart/systemd, you can simply uncomment\n# the line below:\n#\n# supervised auto\n\n# If a pid file is specified, Redis writes it where specified at startup\n# and removes it at exit.\n#\n# When the server runs non daemonized, no pid file is created if none is\n# specified in the configuration. 
When the server is daemonized, the pid file\n# is used even if not specified, defaulting to \"/var/run/redis.pid\".\n#\n# Creating a pid file is best effort: if Redis is not able to create it\n# nothing bad happens, the server will start and run normally.\n#\n# Note that on modern Linux systems \"/run/redis.pid\" is more conforming\n# and should be used instead.\npidfile /var/run/redis_6379.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (lots of rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you probably want in production)\n# warning (only very important / critical messages are logged)\n# nothing (nothing is logged)\nloglevel notice\n\n# Specify the log file name. The empty string can also be used to force\n# Redis to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile \"\"\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident redis\n\n# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# To disable the built-in crash log, which will possibly produce cleaner core\n# dumps when they are needed, uncomment the following:\n#\n# crash-log-enabled no\n\n# To disable the fast memory check that's run as part of the crash log, which\n# will possibly let redis terminate sooner, uncomment the following:\n#\n# crash-memcheck-enabled no\n\n# Set the number of databases. 
The default database is DB 0, you can select\n# a different one on a per-connection basis using SELECT <dbid> where\n# dbid is a number between 0 and 'databases'-1\ndatabases 16\n\n# By default Redis shows an ASCII art logo only when started to log to the\n# standard output and if the standard output is a TTY and syslog logging is\n# disabled. Basically this means that normally a logo is displayed only in\n# interactive sessions.\n#\n# However it is possible to force the pre-4.0 behavior and always show an\n# ASCII art logo in startup logs by setting the following option to yes.\nalways-show-logo no\n\n# To avoid logging personally identifiable information (PII) into the server\n# log file, uncomment the following:\n#\n# hide-user-data-from-log yes\n\n# By default, Redis modifies the process title (as seen in 'top' and 'ps') to\n# provide some runtime information. It is possible to disable this and leave\n# the process name as executed by setting the following to no.\nset-proc-title yes\n\n# When changing the process title, Redis uses the following template to construct\n# the modified title.\n#\n# Template variables are specified in curly brackets. The following variables are\n# supported:\n#\n# {title}           Name of process as executed if parent, or type of child process.\n# {listen-addr}     Bind address or '*' followed by TCP or TLS port listening on, or\n#                   Unix socket if only that's available.\n# {server-mode}     Special mode, i.e. \"[sentinel]\" or \"[cluster]\".\n# {port}            TCP port listening on, or 0.\n# {tls-port}        TLS port listening on, or 0.\n# {unixsocket}      Unix domain socket listening on, or \"\".\n# {config-file}     Name of configuration file used.\n#\nproc-title-template \"{title} {listen-addr} {server-mode}\"\n\n# Set the locale environment, which is used for string comparison operations and\n# also affects the performance of Lua scripts. 
An empty string indicates the locale\n# is derived from the environment variables.\nlocale-collate \"\"\n\n################################ SNAPSHOTTING  ################################\n\n# Save the DB to disk.\n#\n# save <seconds> <changes> [<seconds> <changes> ...]\n#\n# Redis will save the DB if both the given number of seconds elapsed and the\n# given number of write operations were performed against the DB.\n#\n# Snapshotting can be completely disabled with a single empty string argument\n# as in the following example:\n#\n# save \"\"\n#\n# Unless specified otherwise, by default Redis will save the DB:\n#   * After 3600 seconds (an hour) if at least 1 change was performed\n#   * After 300 seconds (5 minutes) if at least 100 changes were performed\n#   * After 60 seconds if at least 10000 changes were performed\n#\n# You can set these explicitly by uncommenting the following line.\n#\n# save 3600 1 300 100 60 10000\n\n# By default Redis will stop accepting writes if RDB snapshots are enabled\n# (at least one save point) and the latest background save failed.\n# This will make the user aware (in a hard way) that data is not persisting\n# on disk properly, otherwise chances are that no one will notice and some\n# disaster will happen.\n#\n# If the background saving process starts working again, Redis will\n# automatically allow writes again.\n#\n# However if you have set up proper monitoring of the Redis server\n# and persistence, you may want to disable this feature so that Redis will\n# continue to work as usual even if there are problems with disk,\n# permissions, and so forth.\nstop-writes-on-bgsave-error yes\n\n# Compress string objects using LZF when dumping .rdb databases?\n# By default compression is enabled as it's almost always a win.\n# If you want to save some CPU in the saving child set it to 'no' but\n# the dataset will likely be bigger if you have compressible values or keys.\nrdbcompression yes\n\n# Since version 5 of RDB a CRC64 checksum is placed at 
the end of the file.\n# This makes the format more resistant to corruption but there is a performance\n# hit to pay (around 10%) when saving and loading RDB files, so you can disable it\n# for maximum performance.\n#\n# RDB files created with checksum disabled have a checksum of zero that will\n# tell the loading code to skip the check.\nrdbchecksum yes\n\n# Enables or disables full sanitization checks for ziplist and listpack etc. when\n# loading an RDB or RESTORE payload. This reduces the chances of an assertion or\n# crash later on while processing commands.\n# Options:\n#   no         - Never perform full sanitization\n#   yes        - Always perform full sanitization\n#   clients    - Perform full sanitization only for user connections.\n#                Excludes: RDB files, RESTORE commands received from the master\n#                connection, and client connections which have the\n#                skip-sanitize-payload ACL flag.\n# The default should be 'clients' but since it currently affects cluster\n# resharding via MIGRATE, it is temporarily set to 'no' by default.\n#\n# sanitize-dump-payload no\n\n# The filename to use when dumping the DB\ndbfilename dump.rdb\n\n# Remove RDB files used by replication in instances without persistence\n# enabled. By default this option is disabled, however there are environments\n# where for regulations or other security concerns, RDB files persisted on\n# disk by masters in order to feed replicas, or stored on disk by replicas\n# in order to load them for the initial synchronization, should be deleted\n# ASAP. Note that this option ONLY WORKS in instances that have both AOF\n# and RDB persistence disabled, otherwise it is completely ignored.\n#\n# An alternative (and sometimes better) way to obtain the same effect is\n# to use diskless replication on both master and replica instances. 
However\n# in the case of replicas, diskless is not always an option.\nrdb-del-sync-files no\n\n# The working directory.\n#\n# The DB will be written inside this directory, with the filename specified\n# above using the 'dbfilename' configuration directive.\n#\n# The Append Only File will also be created inside this directory.\n#\n# Note that you must specify a directory here, not a file name.\ndir ./\n\n################################# REPLICATION #################################\n\n# Master-Replica replication. Use replicaof to make a Redis instance a copy of\n# another Redis server. A few things to understand ASAP about Redis replication.\n#\n#   +------------------+      +---------------+\n#   |      Master      | ---> |    Replica    |\n#   | (receive writes) |      |  (exact copy) |\n#   +------------------+      +---------------+\n#\n# 1) Redis replication is asynchronous, but you can configure a master to\n#    stop accepting writes if it appears not to be connected with at least\n#    a given number of replicas.\n# 2) Redis replicas are able to perform a partial resynchronization with the\n#    master if the replication link is lost for a relatively small amount of\n#    time. You may want to configure the replication backlog size (see the next\n#    sections of this file) with a sensible value depending on your needs.\n# 3) Replication is automatic and does not need user intervention. 
After a\n#    network partition replicas automatically try to reconnect to masters\n#    and resynchronize with them.\n#\n# replicaof <masterip> <masterport>\n\n# If the master is password protected (using the \"requirepass\" configuration\n# directive below) it is possible to tell the replica to authenticate before\n# starting the replication synchronization process, otherwise the master will\n# refuse the replica request.\n#\n# masterauth <master-password>\n#\n# However this is not enough if you are using Redis ACLs (for Redis version\n# 6 or greater), and the default user is not capable of running the PSYNC\n# command and/or other commands needed for replication. In this case it's\n# better to configure a special user to use with replication, and specify the\n# masteruser configuration as such:\n#\n# masteruser <username>\n#\n# When masteruser is specified, the replica will authenticate against its\n# master using the new AUTH form: AUTH <username> <password>.\n\n# When a replica loses its connection with the master, or when the replication\n# is still in progress, the replica can act in two different ways:\n#\n# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will\n#    still reply to client requests, possibly with out of date data, or the\n#    data set may just be empty if this is the first synchronization.\n#\n# 2) If replica-serve-stale-data is set to 'no' the replica will reply with error\n#    \"MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'\"\n#    to all data access commands, excluding commands such as:\n#    INFO, REPLICAOF, AUTH, SHUTDOWN, REPLCONF, ROLE, CONFIG, SUBSCRIBE,\n#    UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB, COMMAND, POST,\n#    HOST and LATENCY.\n#\nreplica-serve-stale-data yes\n\n# You can configure a replica instance to accept writes or not. 
Writing against\n# a replica instance may be useful to store some ephemeral data (because data\n# written on a replica will be easily deleted after resync with the master) but\n# may also cause problems if clients are writing to it because of a\n# misconfiguration.\n#\n# Since Redis 2.6 by default replicas are read-only.\n#\n# Note: read only replicas are not designed to be exposed to untrusted clients\n# on the internet. It's just a protection layer against misuse of the instance.\n# Still a read only replica exports by default all the administrative commands\n# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve\n# security of read only replicas using 'rename-command' to shadow all the\n# administrative / dangerous commands.\nreplica-read-only yes\n\n# Replication SYNC strategy: disk or socket.\n#\n# New replicas and reconnecting replicas that are not able to continue the\n# replication process just receiving differences, need to do what is called a\n# \"full synchronization\". An RDB file is transmitted from the master to the\n# replicas.\n#\n# The transmission can happen in two different ways:\n#\n# 1) Disk-backed: The Redis master creates a new process that writes the RDB\n#                 file on disk. Later the file is transferred by the parent\n#                 process to the replicas incrementally.\n# 2) Diskless: The Redis master creates a new process that directly writes the\n#              RDB file to replica sockets, without touching the disk at all.\n#\n# With disk-backed replication, while the RDB file is generated, more replicas\n# can be queued and served with the RDB file as soon as the current child\n# producing the RDB file finishes its work. 
With diskless replication instead,\n# once the transfer starts, newly arriving replicas will be queued and a new\n# transfer will start when the current one terminates.\n#\n# When diskless replication is used, the master waits a configurable amount of\n# time (in seconds) before starting the transfer in the hope that multiple\n# replicas will arrive and the transfer can be parallelized.\n#\n# With slow disks and fast (large bandwidth) networks, diskless replication\n# works better.\nrepl-diskless-sync yes\n\n# When diskless replication is enabled, it is possible to configure the delay\n# the server waits in order to spawn the child that transfers the RDB via socket\n# to the replicas.\n#\n# This is important since once the transfer starts, it is not possible to serve\n# newly arriving replicas, which will be queued for the next RDB transfer, so the\n# server waits a delay in order to let more replicas arrive.\n#\n# The delay is specified in seconds, and by default is 5 seconds. To disable\n# it entirely just set it to 0 seconds and the transfer will start ASAP.\nrepl-diskless-sync-delay 5\n\n# When diskless replication is enabled with a delay, it is possible to let\n# the replication start before the maximum delay is reached if the maximum\n# number of replicas expected have connected. Default of 0 means that the\n# maximum is not defined and Redis will wait the full delay.\nrepl-diskless-sync-max-replicas 0\n\n# -----------------------------------------------------------------------------\n# WARNING: Since in this setup the replica does not immediately store an RDB on\n# disk, it may cause data loss during failovers. 
RDB diskless load + Redis\n# modules not handling I/O reads may cause Redis to abort in case of I/O errors\n# during the initial synchronization stage with the master.\n# -----------------------------------------------------------------------------\n#\n# A replica can load the RDB it reads from the replication link directly from the\n# socket, or store the RDB to a file and read that file after it was completely\n# received from the master.\n#\n# In many cases the disk is slower than the network, and storing and loading\n# the RDB file may increase replication time (and even increase the master's\n# Copy on Write memory and replica buffers).\n# However, when parsing the RDB file directly from the socket, in order to avoid\n# data loss it's only safe to flush the current dataset when the new dataset is\n# fully loaded in memory, resulting in higher memory usage.\n# For this reason we have the following options:\n#\n# \"disabled\"    - Don't use diskless load (store the rdb file to the disk first)\n# \"swapdb\"      - Keep current db contents in RAM while parsing the data directly\n#                 from the socket. Replicas in this mode can keep serving the\n#                 current dataset while replication is in progress, except for\n#                 cases where they can't recognize the master as having a data\n#                 set from the same replication history.\n#                 Note that this requires sufficient memory; if you don't have\n#                 it, you risk an OOM kill.\n# \"flushdb\"     - Always flush the entire dataset before diskless load.\n#                 Note that if the diskless load fails, the replica will lose all\n#                 existing data.\n# \"on-empty-db\" - Use diskless load only when the current dataset is empty. This\n#                 is safer and avoids having the old and new datasets loaded side\n#                 by side during replication.\nrepl-diskless-load disabled\n\n# The master sends PINGs to its replicas at a predefined interval. 
It's possible to\n# change this interval with the repl-ping-replica-period option. The default\n# value is 10 seconds.\n#\n# repl-ping-replica-period 10\n\n# The following option sets the replication timeout for:\n#\n# 1) Bulk transfer I/O during SYNC, from the point of view of replica.\n# 2) Master timeout from the point of view of replicas (data, pings).\n# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).\n#\n# It is important to make sure that this value is greater than the value\n# specified for repl-ping-replica-period otherwise a timeout will be detected\n# every time there is low traffic between the master and the replica. The default\n# value is 60 seconds.\n#\n# repl-timeout 60\n\n# Disable TCP_NODELAY on the replica socket after SYNC?\n#\n# If you select \"yes\" Redis will use a smaller number of TCP packets and\n# less bandwidth to send data to replicas. But this can add a delay for\n# the data to appear on the replica side, up to 40 milliseconds with\n# Linux kernels using a default configuration.\n#\n# If you select \"no\" the delay for data to appear on the replica side will\n# be reduced but more bandwidth will be used for replication.\n#\n# By default we optimize for low latency, but in very high traffic conditions\n# or when the master and replicas are many hops away, turning this to \"yes\" may\n# be a good idea.\nrepl-disable-tcp-nodelay no\n\n# Set the replication backlog size. 
The backlog is a buffer that accumulates\n# replica data when replicas are disconnected for some time, so that when a\n# replica wants to reconnect again, often a full resync is not needed, but a\n# partial resync is enough, just passing the portion of data the replica\n# missed while disconnected.\n#\n# The bigger the replication backlog, the longer the replica can endure the\n# disconnect and later be able to perform a partial resynchronization.\n#\n# The backlog is only allocated if there is at least one replica connected.\n#\n# repl-backlog-size 1mb\n\n# After a master has no connected replicas for some time, the backlog will be\n# freed. The following option configures the number of seconds that need to\n# elapse, starting from the time the last replica disconnected, for the backlog\n# buffer to be freed.\n#\n# Note that replicas never free the backlog due to a timeout, since they may be\n# promoted to masters later, and should be able to correctly \"partially\n# resynchronize\" with other replicas: hence they should always accumulate backlog.\n#\n# A value of 0 means to never release the backlog.\n#\n# repl-backlog-ttl 3600\n\n# During a fullsync, the master may decide to send both the RDB file and the\n# replication stream to the replica in parallel. This approach shifts the\n# responsibility of buffering the replication stream to the replica during the\n# fullsync process. The replica accumulates the replication stream data until\n# the RDB file is fully loaded. Once the RDB delivery is completed and\n# successfully loaded, the replica begins processing and applying the\n# accumulated replication data to the db. 
The configuration below controls how\n# much replication data the replica can accumulate during a fullsync.\n#\n# When the replica reaches this limit, it will stop accumulating further data.\n# At this point, additional data accumulation may occur on the master side\n# depending on the 'client-output-buffer-limit <replica>' config of the master.\n#\n# A value of 0 means the replica inherits the hard limit of the\n# 'client-output-buffer-limit <replica>' config to limit the accumulation size.\n#\n# replica-full-sync-buffer-limit 0\n\n# The replica priority is an integer number published by Redis in the INFO\n# output. It is used by Redis Sentinel in order to select a replica to promote\n# into a master if the master is no longer working correctly.\n#\n# A replica with a low priority number is considered better for promotion, so\n# for instance if there are three replicas with priority 10, 100, 25 Sentinel\n# will pick the one with priority 10, that is the lowest.\n#\n# However a special priority of 0 marks the replica as not able to perform the\n# role of master, so a replica with priority of 0 will never be selected by\n# Redis Sentinel for promotion.\n#\n# By default the priority is 100.\nreplica-priority 100\n\n# The propagation error behavior controls how Redis will behave when it is\n# unable to handle a command being processed in the replication stream from a master\n# or processed while reading from an AOF file. Errors that occur during propagation\n# are unexpected, and can cause data inconsistency. However, there are edge cases\n# in earlier versions of Redis where it was possible for the server to replicate or persist\n# commands that would fail on future versions. For this reason the default behavior\n# is to ignore such errors and continue processing commands.\n#\n# If an application wants to ensure there is no data divergence, this configuration\n# should be set to 'panic' instead. 
The value can also be set to 'panic-on-replicas'\n# to only panic when a replica encounters an error on the replication stream. One of\n# these two panic values will become the default value in the future once there are\n# sufficient safety mechanisms in place to prevent false positive crashes.\n#\n# propagation-error-behavior ignore\n\n# 'replica-ignore-disk-write-errors' controls the behavior of a replica when it is\n# unable to persist a write command received from its master to disk. By default,\n# this configuration is set to 'no' and will crash the replica in this condition.\n# It is not recommended to change this default, however in order to be compatible\n# with older versions of Redis this config can be toggled to 'yes' which will just\n# log a warning and execute the write command it got from the master.\n#\n# replica-ignore-disk-write-errors no\n\n# -----------------------------------------------------------------------------\n# By default, Redis Sentinel includes all replicas in its reports. A replica\n# can be excluded from Redis Sentinel's announcements. An unannounced replica\n# will be ignored by the 'sentinel replicas <master>' command and won't be\n# exposed to Redis Sentinel's clients.\n#\n# This option does not change the behavior of replica-priority. Even with\n# replica-announced set to 'no', the replica can be promoted to master. 
To\n# prevent this behavior, set replica-priority to 0.\n#\n# replica-announced yes\n\n# It is possible for a master to stop accepting writes if there are fewer than\n# N replicas connected, having a lag less than or equal to M seconds.\n#\n# The N replicas need to be in \"online\" state.\n#\n# The lag in seconds, which must be <= the specified value, is calculated from\n# the last ping received from the replica, which is usually sent every second.\n#\n# This option does not GUARANTEE that N replicas will accept the write, but\n# will limit the window of exposure for lost writes in case not enough replicas\n# are available, to the specified number of seconds.\n#\n# For example to require at least 3 replicas with a lag <= 10 seconds use:\n#\n# min-replicas-to-write 3\n# min-replicas-max-lag 10\n#\n# Setting one or the other to 0 disables the feature.\n#\n# By default min-replicas-to-write is set to 0 (feature disabled) and\n# min-replicas-max-lag is set to 10.\n\n# A Redis master is able to list the address and port of the attached\n# replicas in different ways. For example the \"INFO replication\" section\n# offers this information, which is used, among other tools, by\n# Redis Sentinel in order to discover replica instances.\n# Another place where this info is available is in the output of the\n# \"ROLE\" command of a master.\n#\n# The listed IP address and port normally reported by a replica is\n# obtained in the following way:\n#\n#   IP: The address is auto detected by checking the peer address\n#   of the socket used by the replica to connect with the master.\n#\n#   Port: The port is communicated by the replica during the replication\n#   handshake, and is normally the port that the replica is using to\n#   listen for connections.\n#\n# However when port forwarding or Network Address Translation (NAT) is\n# used, the replica may actually be reachable via different IP and port\n# pairs. 
The following two options can be used by a replica in order to\n# report to its master a specific set of IP and port, so that both INFO\n# and ROLE will report those values.\n#\n# There is no need to use both options if you need to override just\n# the port or the IP address.\n#\n# replica-announce-ip 5.5.5.5\n# replica-announce-port 1234\n\n############################### KEYS TRACKING #################################\n\n# Redis implements server assisted support for client side caching of values.\n# This is implemented using an invalidation table that remembers, using\n# a radix tree indexed by key name, what clients have which keys. In turn\n# this is used in order to send invalidation messages to clients. Please\n# check this page to understand more about the feature:\n#\n#   https://redis.io/docs/latest/develop/use/client-side-caching/\n#\n# When tracking is enabled for a client, all the read only queries are assumed\n# to be cached: this will force Redis to store information in the invalidation\n# table. When keys are modified, such information is flushed away, and\n# invalidation messages are sent to the clients. However if the workload is\n# heavily dominated by reads, Redis could use more and more memory in order\n# to track the keys fetched by many clients.\n#\n# For this reason it is possible to configure a maximum fill value for the\n# invalidation table. By default it is set to 1M keys, and once this limit\n# is reached, Redis will start to evict keys in the invalidation table\n# even if they were not modified, just to reclaim memory: this will in turn\n# force the clients to invalidate the cached values. 
Basically the table\n# maximum size is a trade off between the memory you want to spend server\n# side to track information about who cached what, and the ability of clients\n# to retain cached objects in memory.\n#\n# If you set the value to 0, it means there are no limits, and Redis will\n# retain as many keys as needed in the invalidation table.\n# In the \"stats\" INFO section, you can find information about the number of\n# keys in the invalidation table at every given moment.\n#\n# Note: when key tracking is used in broadcasting mode, no memory is used\n# on the server side so this setting is useless.\n#\n# tracking-table-max-keys 1000000\n\n################################## SECURITY ###################################\n\n# Warning: since Redis is pretty fast, an outside user can try up to\n# 1 million passwords per second against a modern box. This means that you\n# should use very strong passwords, otherwise they will be very easy to break.\n# Note that because the password is really a shared secret between the client\n# and the server, and should not be memorized by any human, the password\n# can easily be a long string from /dev/urandom or whatever, so by using a\n# long and unguessable password no brute force attack will be possible.\n\n# Redis ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@list +@connection ~jobs:* on >ffa9203c493aa99\n#\n# The special username \"default\" is used for new connections. If this user\n# has the \"nopass\" rule, then new connections will be immediately authenticated\n# as the \"default\" user without the need of any password provided via the\n# AUTH command. 
Otherwise if the \"default\" user is not flagged with \"nopass\"\n# the connections will start in a non-authenticated state, and will require\n# AUTH (or the HELLO command AUTH option) in order to be authenticated and\n# start to work.\n#\n# The ACL rules that describe what a user can do are the following:\n#\n#  on           Enable the user: it is possible to authenticate as this user.\n#  off          Disable the user: it's no longer possible to authenticate\n#               with this user, however the already authenticated connections\n#               will still work.\n#  skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n#  sanitize-payload         RESTORE dump-payload is sanitized (default).\n#  +<command>   Allow the execution of that command.\n#               May be used with `|` for allowing subcommands (e.g. \"+config|get\")\n#  -<command>   Disallow the execution of that command.\n#               May be used with `|` for blocking subcommands (e.g. \"-config|set\")\n#  +@<category> Allow the execution of all the commands in such category.\n#               Valid categories are @admin, @set, @sortedset, ...\n#               and so forth; see the full list in the server.c file where\n#               the Redis command table is described and defined.\n#               The special category @all means all the commands, both the ones\n#               currently present in the server, and the ones that will be\n#               loaded in the future via modules.\n#  +<command>|first-arg  Allow a specific first argument of an otherwise\n#                        disabled command. It is only supported on commands with\n#                        no sub-commands, and is not allowed as negative form\n#                        like -SELECT|1, only additive starting with \"+\". This\n#                        feature is deprecated and may be removed in the future.\n#  allcommands  Alias for +@all. 
Note that it implies the ability to execute\n#               all the future commands loaded via the modules system.\n#  nocommands   Alias for -@all.\n#  ~<pattern>   Add a pattern of keys that can be mentioned as part of\n#               commands. For instance ~* allows all the keys. The pattern\n#               is a glob-style pattern like the one of KEYS.\n#               It is possible to specify multiple patterns.\n# %R~<pattern>  Add key read pattern that specifies which keys can be read\n#               from.\n# %W~<pattern>  Add key write pattern that specifies which keys can be\n#               written to.\n#  allkeys      Alias for ~*\n#  resetkeys    Flush the list of allowed key patterns.\n#  &<pattern>   Add a glob-style pattern of Pub/Sub channels that can be\n#               accessed by the user. It is possible to specify multiple channel\n#               patterns.\n#  allchannels  Alias for &*\n#  resetchannels            Flush the list of allowed channel patterns.\n#  ><password>  Add this password to the list of valid passwords for the user.\n#               For example >mypass will add \"mypass\" to the list.\n#               This directive clears the \"nopass\" flag (see later).\n#  <<password>  Remove this password from the list of valid passwords.\n#  nopass       All the set passwords of the user are removed, and the user\n#               is flagged as requiring no password: it means that every\n#               password will work against this user. If this directive is\n#               used for the default user, every new connection will be\n#               immediately authenticated with the default user without\n#               any explicit AUTH command required. Note that the \"resetpass\"\n#               directive will clear this condition.\n#  resetpass    Flush the list of allowed passwords. It also removes the\n#               \"nopass\" status. 
After \"resetpass\" the user has no associated\n#               passwords and there is no way to authenticate without adding\n#               some password (or setting it as \"nopass\" later).\n#  reset        Performs the following actions: resetpass, resetkeys, resetchannels,\n#               allchannels (if acl-pubsub-default is set), off, clearselectors, -@all.\n#               The user returns to the same state it has immediately after its creation.\n# (<options>)   Create a new selector with the options specified within the\n#               parentheses and attach it to the user. Each option should be \n#               space separated. The first character must be ( and the last \n#               character must be ).\n# clearselectors            Remove all of the currently attached selectors. \n#                           Note this does not change the \"root\" user permissions,\n#                           which are the permissions directly applied onto the\n#                           user (outside the parentheses).\n#\n# ACL rules can be specified in any order: for instance you can start with\n# passwords, then flags, or key patterns. However note that the additive\n# and subtractive rules will CHANGE MEANING depending on the ordering.\n# For instance see the following example:\n#\n#   user alice on +@all -DEBUG ~* >somepassword\n#\n# This will allow \"alice\" to use all the commands with the exception of the\n# DEBUG command, since +@all added all the commands to the set of the commands\n# alice can use, and later DEBUG was removed. 
However if we invert the order\n# of the two ACL rules the result will be different:\n#\n#   user alice on -DEBUG +@all ~* >somepassword\n#\n# Now DEBUG was removed when alice did not yet have any commands in the set of\n# allowed commands; later all the commands were added, so the user will be able\n# to execute everything.\n#\n# Basically ACL rules are processed left-to-right.\n#\n# The following is a list of command categories and their meanings:\n# * keyspace - Writing or reading from keys, databases, or their metadata\n#     in a type-agnostic way. Includes DEL, RESTORE, DUMP, RENAME, EXISTS, DBSIZE,\n#     KEYS, EXPIRE, TTL, FLUSHALL, etc. Commands that may modify the keyspace,\n#     key or metadata will also have the `write` category. Commands that only read\n#     the keyspace, key or metadata will have the `read` category.\n# * read - Reading from keys (values or metadata). Note that commands that don't\n#     interact with keys will not have either `read` or `write`.\n# * write - Writing to keys (values or metadata)\n# * admin - Administrative commands. Normal applications will never need to use\n#     these. Includes REPLICAOF, CONFIG, DEBUG, SAVE, MONITOR, ACL, SHUTDOWN, etc.\n# * dangerous - Potentially dangerous (each should be considered with care for\n#     various reasons). This includes FLUSHALL, MIGRATE, RESTORE, SORT, KEYS,\n#     CLIENT, DEBUG, INFO, CONFIG, SAVE, REPLICAOF, etc.\n# * connection - Commands affecting the connection or other connections.\n#     This includes AUTH, SELECT, COMMAND, CLIENT, ECHO, PING, etc.\n# * blocking - Potentially blocking the connection until released by another\n#     command.\n# * fast - Fast O(1) commands. 
May loop on the number of arguments, but not the\n#     number of elements in the key.\n# * slow - All commands that are not fast.\n# * pubsub - PUBLISH / SUBSCRIBE related\n# * transaction - WATCH / MULTI / EXEC related commands.\n# * scripting - Scripting related.\n# * set - Data type: sets related.\n# * sortedset - Data type: zsets related.\n# * list - Data type: lists related.\n# * hash - Data type: hashes related.\n# * string - Data type: strings related.\n# * bitmap - Data type: bitmaps related.\n# * hyperloglog - Data type: hyperloglog related.\n# * geo - Data type: geo related.\n# * stream - Data type: streams related.\n#\n# For more information about ACL configuration please refer to\n# the Redis web site at https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked\n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with\n# ACL LOG RESET. Define the maximum number of entries in the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside redis.conf to describe users.\n#\n# aclfile /etc/redis/users.acl\n\n# IMPORTANT NOTE: starting with Redis 6 \"requirepass\" is just a compatibility\n# layer on top of the new ACL system. The only effect of the option is setting\n# the password for the default user. 
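In other words (an illustrative sketch, using the example password \"foobared\" shown below), setting requirepass behaves roughly like defining the default user as:\n#\n#   user default on >foobared ~* &* +@all\n#\n# 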
Clients will still authenticate using\n# AUTH <password> as usual, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# The requirepass option is not compatible with the aclfile option and the ACL LOAD\n# command; these will cause requirepass to be ignored.\n#\n# requirepass foobared\n\n# New users are initialized with restrictive permissions by default, via the\n# equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it\n# is possible to manage access to Pub/Sub channels with ACL rules as well. The\n# default Pub/Sub channels permission for new users is controlled by the\n# acl-pubsub-default configuration directive, which accepts one of these values:\n#\n# allchannels: grants access to all Pub/Sub channels\n# resetchannels: revokes access to all Pub/Sub channels\n#\n# From Redis 7.0, acl-pubsub-default defaults to 'resetchannels' permission.\n#\n# acl-pubsub-default resetchannels\n\n# Command renaming (DEPRECATED).\n#\n# ------------------------------------------------------------------------\n# WARNING: avoid using this option if possible. Instead use ACLs to remove\n# commands from the default user, and put them only in some admin user you\n# create for administrative purposes.\n# ------------------------------------------------------------------------\n#\n# It is possible to change the name of dangerous commands in a shared\n# environment. 
For instance the CONFIG command may be renamed into something\n# hard to guess so that it will still be available for internal-use tools\n# but not available for general clients.\n#\n# Example:\n#\n# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52\n#\n# It is also possible to completely kill a command by renaming it into\n# an empty string:\n#\n# rename-command CONFIG \"\"\n#\n# Please note that changing the name of commands that are logged into the\n# AOF file or transmitted to replicas may cause problems.\n\n################################### CLIENTS ####################################\n\n# Set the max number of connected clients at the same time. By default\n# this limit is set to 10000 clients, however if the Redis server is not\n# able to configure the process file limit to allow for the specified limit\n# the max number of allowed clients is set to the current file limit\n# minus 32 (as Redis reserves a few file descriptors for internal uses).\n#\n# Once the limit is reached Redis will close all the new connections sending\n# an error 'max number of clients reached'.\n#\n# IMPORTANT: When Redis Cluster is used, the max number of connections is also\n# shared with the cluster bus: every node in the cluster will use two\n# connections, one incoming and another outgoing. 
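As a rough worked example (the numbers are illustrative only): in a 10-node cluster, each node keeps one incoming and one outgoing link to each of the other 9 nodes, so about 18 file descriptors per node go to the cluster bus before any client connects. 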
It is important to size the\n# limit accordingly in case of very large clusters.\n#\n# maxclients 10000\n\n############################## MEMORY MANAGEMENT ################################\n\n# Set a memory usage limit to the specified number of bytes.\n# When the memory limit is reached Redis will try to remove keys\n# according to the eviction policy selected (see maxmemory-policy).\n#\n# If Redis can't remove keys according to the policy, or if the policy is\n# set to 'noeviction', Redis will start to reply with errors to commands\n# that would use more memory, like SET, LPUSH, and so on, and will continue\n# to reply to read-only commands like GET.\n#\n# This option is usually useful when using Redis as an LRU or LFU cache, or to\n# set a hard memory limit for an instance (using the 'noeviction' policy).\n#\n# WARNING: If you have replicas attached to an instance with maxmemory on,\n# the size of the output buffers needed to feed the replicas is subtracted\n# from the used memory count, so that network problems / resyncs will\n# not trigger a loop where keys are evicted, and in turn the output\n# buffer of replicas is filled with DELs of evicted keys, triggering the deletion\n# of more keys, and so forth until the database is completely emptied.\n#\n# In short... if you have replicas attached it is suggested that you set a lower\n# limit for maxmemory so that there is some free RAM on the system for replica\n# output buffers (but this is not needed if the policy is 'noeviction').\n#\n# maxmemory <bytes>\n\n# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory\n# is reached. 
You can select one from the following behaviors:\n#\n# volatile-lru -> Evict using approximated LRU, only keys with an expire set.\n# allkeys-lru -> Evict any key using approximated LRU.\n# volatile-lfu -> Evict using approximated LFU, only keys with an expire set.\n# allkeys-lfu -> Evict any key using approximated LFU.\n# volatile-random -> Remove a random key having an expire set.\n# allkeys-random -> Remove a random key, any key.\n# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)\n# noeviction -> Don't evict anything, just return an error on write operations.\n#\n# LRU means Least Recently Used\n# LFU means Least Frequently Used\n#\n# Both LRU, LFU and volatile-ttl are implemented using approximated\n# randomized algorithms.\n#\n# Note: with any of the above policies, when there are no suitable keys for\n# eviction, Redis will return an error on write operations that require\n# more memory. These are usually commands that create new keys, add data or\n# modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE,\n# SORT (due to the STORE argument), and EXEC (if the transaction includes any\n# command that requires memory).\n#\n# The default is:\n#\n# maxmemory-policy noeviction\n\n# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated\n# algorithms (in order to save memory), so you can tune it for speed or\n# accuracy. 
By default Redis will check five keys and pick the one that was\n# used least recently; you can change the sample size using the following\n# configuration directive.\n#\n# The default of 5 produces good enough results. 10 approximates true LRU\n# very closely but costs more CPU. 3 is faster but not very accurate. The maximum\n# value that can be set is 64.\n#\n# maxmemory-samples 5\n\n# Eviction processing is designed to function well with the default setting.\n# If there is an unusually large amount of write traffic, this value may need to\n# be increased.  Decreasing this value may reduce latency at the risk of reduced\n# eviction processing effectiveness.\n#   0 = minimum latency, 10 = default, 100 = process without regard to latency\n#\n# maxmemory-eviction-tenacity 10\n\n# Starting from Redis 5, by default a replica will ignore its maxmemory setting\n# (unless it is promoted to master after a failover or manually). It means\n# that the eviction of keys will be handled only by the master, which sends\n# DEL commands to the replica as keys are evicted on the master side.\n#\n# This behavior ensures that masters and replicas stay consistent, and is usually\n# what you want, however if your replica is writable, or you want the replica\n# to have a different memory setting, and you are sure all the writes performed\n# to the replica are idempotent, then you may change this default (but be sure\n# to understand what you are doing).\n#\n# Note that since the replica by default does not evict, it may end up using more\n# memory than the one set via maxmemory (there are certain buffers that may\n# be larger on the replica, or data structures may sometimes take more memory\n# and so forth). 
So make sure you monitor your replicas and make sure they\n# have enough memory to never hit a real out-of-memory condition before the\n# master hits the configured maxmemory setting.\n#\n# replica-ignore-maxmemory yes\n\n# Redis reclaims expired keys in two ways: upon access when those keys are\n# found to be expired, and also in background, in what is called the\n# \"active expire cycle\". The key space is slowly and incrementally scanned\n# looking for expired keys to reclaim, so that it is possible to free memory\n# of keys that are expired and will never be accessed again in a short time.\n#\n# The default effort of the expire cycle will try to avoid having more than\n# ten percent of expired keys still in memory, will try to avoid consuming\n# more than 25% of total memory, and will try to avoid adding latency to the\n# system. However it is possible to increase the expire \"effort\" that is\n# normally set to \"1\", to a greater value, up to the value \"10\". At its maximum\n# value the system will use more CPU, longer cycles (and technically may\n# introduce more latency), and will tolerate fewer already expired keys still\n# present in the system. It's a tradeoff between memory, CPU and latency.\n#\n# active-expire-effort 1\n\n############################# LAZY FREEING ####################################\n\n# Redis has two primitives to delete keys. One is called DEL and is a blocking\n# deletion of the object. It means that the server stops processing new commands\n# in order to reclaim all the memory associated with an object in a synchronous\n# way. If the key deleted is associated with a small object, the time needed\n# in order to execute the DEL command is very small and comparable to most other\n# O(1) or O(log_N) commands in Redis. 
However if the key is associated with an\n# aggregated value containing millions of elements, the server can block for\n# a long time (even seconds) in order to complete the operation.\n#\n# For the above reasons Redis also offers non-blocking deletion primitives\n# such as UNLINK (non-blocking DEL) and the ASYNC option of the FLUSHALL and\n# FLUSHDB commands, in order to reclaim memory in background. Those commands\n# are executed in constant time. Another thread will incrementally free the\n# object in the background as fast as possible.\n#\n# DEL, UNLINK and the ASYNC option of FLUSHALL and FLUSHDB are user-controlled.\n# It's up to the design of the application to understand when it is a good\n# idea to use one or the other. However the Redis server sometimes has to\n# delete keys or flush the whole database as a side effect of other operations.\n# Specifically Redis deletes objects independently of a user call in the\n# following scenarios:\n#\n# 1) On eviction, because of the maxmemory and maxmemory policy configurations,\n#    in order to make room for new data, without going over the specified\n#    memory limit.\n# 2) Because of expire: when a key with an associated time to live (see the\n#    EXPIRE command) must be deleted from memory.\n# 3) Because of a side effect of a command that stores data on a key that may\n#    already exist. For example the RENAME command may delete the old key\n#    content when it is replaced with another one. Similarly SUNIONSTORE\n#    or SORT with STORE option may delete existing keys. The SET command\n#    itself removes any old content of the specified key in order to replace\n#    it with the specified string.\n# 4) During replication, when a replica performs a full resynchronization with\n#    its master, the content of the whole database is removed in order to\n#    load the RDB file just transferred.\n#\n# In all the above cases the default is to delete objects in a blocking way,\n# as if DEL was called. 
However you can configure each case specifically\n# in order to instead release memory in a non-blocking way, as if UNLINK\n# was called, using the following configuration directives.\n\nlazyfree-lazy-eviction no\nlazyfree-lazy-expire no\nlazyfree-lazy-server-del no\nreplica-lazy-flush no\n\n# It is also possible, for cases where replacing the application's DEL calls\n# with UNLINK calls is not easy, to modify the default behavior of the DEL\n# command to act exactly like UNLINK, using the following configuration\n# directive:\n\nlazyfree-lazy-user-del no\n\n# FLUSHDB, FLUSHALL, SCRIPT FLUSH and FUNCTION FLUSH support both asynchronous and synchronous\n# deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the\n# commands. When neither flag is passed, this directive will be used to determine\n# if the data should be deleted asynchronously.\n\nlazyfree-lazy-user-flush no\n\n################################ THREADED I/O #################################\n\n# Redis is mostly single threaded, however there are certain threaded\n# operations such as UNLINK, slow I/O accesses and other things that are\n# performed on side threads.\n#\n# Now it is also possible to handle Redis clients socket reads and writes\n# in different I/O threads. Since writing in particular is slow, Redis users\n# normally use pipelining in order to speed up performance per\n# core, and spawn multiple instances in order to scale more. 
Using I/O\n# threads it is possible to easily speed up Redis several times without resorting\n# to pipelining or sharding the instance.\n#\n# By default threading is disabled; we suggest enabling it only on machines\n# that have at least 4 cores, leaving at least one spare core.\n# We also recommend using threaded I/O only if you actually have performance\n# problems, with Redis instances being able to use a quite big percentage of\n# CPU time, otherwise there is no point in using this feature.\n#\n# So for instance if you have a four core box, try to use 3 I/O\n# threads; if you have 8 cores, try to use 7 threads. In order to\n# enable I/O threads use the following configuration directive:\n#\n# io-threads 4\n#\n# Setting io-threads to 1 will just use the main thread as usual.\n# When I/O threads are enabled, we not only use threads for writes, that\n# is to thread the write(2) syscall and transfer the client buffers to the\n# socket, but also use threads for reads and protocol parsing.\n#\n# NOTE: If you want to test the Redis speedup using redis-benchmark, make\n# sure you also run the benchmark itself in threaded mode, using the\n# --threads option to match the number of Redis threads, otherwise you'll not\n# be able to notice the improvements.\n\n############################ KERNEL OOM CONTROL ##############################\n\n# On Linux, it is possible to hint the kernel OOM killer on what processes\n# should be killed first when out of memory.\n#\n# Enabling this feature makes Redis actively control the oom_score_adj value\n# for all its processes, depending on their role. 
The default scores will\n# attempt to have background child processes killed before all others, and\n# replicas killed before masters.\n#\n# Redis supports these options:\n#\n# no:       Don't make changes to oom-score-adj (default).\n# yes:      Alias for \"relative\", see below.\n# absolute: Values in oom-score-adj-values are written as is to the kernel.\n# relative: Values are used relative to the initial value of oom_score_adj when\n#           the server starts and are then clamped to a range of -1000 to 1000.\n#           Because typically the initial value is 0, they will often match the\n#           absolute values.\noom-score-adj no\n\n# When oom-score-adj is used, this directive controls the specific values used\n# for master, replica and background child processes. Values range -2000 to\n# 2000 (higher means more likely to be killed).\n#\n# Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities)\n# can freely increase their value, but not decrease it below its initial\n# setting. This means that setting oom-score-adj to \"relative\" and setting the\n# oom-score-adj-values to positive values will always succeed.\noom-score-adj-values 0 200 800\n\n\n#################### KERNEL transparent hugepage CONTROL ######################\n\n# Usually the kernel Transparent Huge Pages control is set to \"madvise\" or\n# \"never\" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which\n# case this config has no effect. On systems in which it is set to \"always\",\n# Redis will attempt to disable it specifically for the Redis process in order\n# to avoid latency problems specifically with fork(2) and CoW.\n# If for some reason you prefer to keep it enabled, you can set this config to\n# \"no\" and the kernel global to \"always\".\n\ndisable-thp yes\n\n############################## APPEND ONLY MODE ###############################\n\n# By default Redis asynchronously dumps the dataset on disk. 
This mode is\n# good enough in many applications, but an issue with the Redis process or\n# a power outage may result in a few minutes of writes being lost (depending on\n# the configured save points).\n#\n# The Append Only File is an alternative persistence mode that provides\n# much better durability. For instance using the default data fsync policy\n# (see later in the config file) Redis can lose just one second of writes in a\n# dramatic event like a server power outage, or a single write if something\n# goes wrong with the Redis process itself, but the operating system is\n# still running correctly.\n#\n# AOF and RDB persistence can be enabled at the same time without problems.\n# If the AOF is enabled on startup Redis will load the AOF, that is the file\n# with the better durability guarantees.\n#\n# Note that changing this value in a config file of an existing database and\n# restarting the server can lead to data loss. A conversion needs to be done\n# by setting it via the CONFIG command on a live server first.\n#\n# Please check https://redis.io/docs/latest/operate/oss_and_stack/management/persistence/ for more information.\n\nappendonly no\n\n# The base name of the append only file.\n#\n# Redis 7 and newer use a set of append-only files to persist the dataset\n# and changes applied to it. There are two basic types of files in use:\n#\n# - Base files, which are a snapshot representing the complete state of the\n#   dataset at the time the file was created. 
Base files can be either in\n#   the form of RDB (binary serialized) or AOF (textual commands).\n# - Incremental files, which contain additional commands that were applied\n#   to the dataset following the previous file.\n#\n# In addition, manifest files are used to track the files and the order in\n# which they were created and should be applied.\n#\n# Append-only file names are created by Redis following a specific pattern.\n# The file name's prefix is based on the 'appendfilename' configuration\n# parameter, followed by additional information about the sequence and type.\n#\n# For example, if appendfilename is set to appendonly.aof, the following file\n# names could be derived:\n#\n# - appendonly.aof.1.base.rdb as a base file.\n# - appendonly.aof.1.incr.aof, appendonly.aof.2.incr.aof as incremental files.\n# - appendonly.aof.manifest as a manifest file.\n\nappendfilename \"appendonly.aof\"\n\n# For convenience, Redis stores all persistent append-only files in a dedicated\n# directory. The name of the directory is determined by the appenddirname\n# configuration parameter.\n\nappenddirname \"appendonlydir\"\n\n# The fsync() call tells the Operating System to actually write data on disk\n# instead of waiting for more data in the output buffer. Some OS will really flush\n# data on disk, some other OS will just try to do it ASAP.\n#\n# Redis supports three different modes:\n#\n# no: don't fsync, just let the OS flush the data when it wants. Faster.\n# always: fsync after every write to the append only log. Slow, Safest.\n# everysec: fsync only one time every second. Compromise.\n#\n# The default is \"everysec\", as that's usually the right compromise between\n# speed and data safety. 
It's up to you to understand if you can relax this to\n# \"no\" that will let the operating system flush the output buffer when\n# it wants, for better performances (but if you can live with the idea of\n# some data loss consider the default persistence mode that's snapshotting),\n# or on the contrary, use \"always\" that's very slow but a bit safer than\n# everysec.\n#\n# For more details please check the following article:\n# http://antirez.com/post/redis-persistence-demystified.html\n#\n# If unsure, use \"everysec\".\n\n# appendfsync always\nappendfsync everysec\n# appendfsync no\n\n# When the AOF fsync policy is set to always or everysec, and a background\n# saving process (a background save or AOF log background rewriting) is\n# performing a lot of I/O against the disk, in some Linux configurations\n# Redis may block too long on the fsync() call. Note that there is no fix for\n# this currently, as even performing fsync in a different thread will block\n# our synchronous write(2) call.\n#\n# In order to mitigate this problem it's possible to use the following option\n# that will prevent fsync() from being called in the main process while a\n# BGSAVE or BGREWRITEAOF is in progress.\n#\n# This means that while another child is saving, the durability of Redis is\n# the same as \"appendfsync no\". In practical terms, this means that it is\n# possible to lose up to 30 seconds of log in the worst scenario (with the\n# default Linux settings).\n#\n# If you have latency problems turn this to \"yes\". 
Otherwise leave it as\n# \"no\" that is the safest pick from the point of view of durability.\n\nno-appendfsync-on-rewrite no\n\n# Automatic rewrite of the append only file.\n# Redis is able to automatically rewrite the log file implicitly calling\n# BGREWRITEAOF when the AOF log size grows by the specified percentage.\n#\n# This is how it works: Redis remembers the size of the AOF file after the\n# latest rewrite (if no rewrite has happened since the restart, the size of\n# the AOF at startup is used).\n#\n# This base size is compared to the current size. If the current size is\n# bigger than the base size by the specified percentage, the rewrite is\n# triggered. Also you need to specify a minimal size for the AOF file to be\n# rewritten; this is useful to avoid rewriting the AOF file even if the\n# percentage increase is reached but it is still pretty small.\n#\n# Specify a percentage of zero in order to disable the automatic AOF\n# rewrite feature.\n\nauto-aof-rewrite-percentage 100\nauto-aof-rewrite-min-size 64mb\n\n# An AOF file may be found to be truncated at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running\n# crashes, especially when an ext4 filesystem is mounted without the\n# data=ordered option (however this can't happen when Redis itself\n# crashes or aborts but the operating system still works correctly).\n#\n# Redis can either exit with an error when this happens, or load as much\n# data as possible (the default now) and start if the AOF file is found\n# to be truncated at the end. The following option controls this behavior.\n#\n# If aof-load-truncated is set to yes, a truncated AOF file is loaded and\n# the Redis server starts emitting a log to inform the user of the event.\n# Otherwise if the option is set to no, the server aborts with an error\n# and refuses to start. 
When the option is set to no, the user is required\n# to fix the AOF file using the "redis-check-aof" utility before restarting\n# the server.\n#\n# Note that if the AOF file is found to be corrupted in the middle,\n# the server will still exit with an error. This option only applies when\n# Redis tries to read more data from the AOF file but not enough bytes\n# are found.\naof-load-truncated yes\n\n# An AOF file may be found to be corrupted at the end during the Redis\n# startup process, when the AOF data gets loaded back into memory.\n# This may happen when the system where Redis is running crashes.\n#\n# The aof-load-corrupt-tail-max-size option allows Redis to automatically recover\n# from small amounts of corruption at the end of the AOF file by truncating the\n# corrupted portion and continuing with startup.\n#\n# Set this to the maximum number of bytes of corruption you're willing to\n# lose from the end of the AOF file. If the corrupted portion is larger than\n# this value, Redis will refuse to start and require manual intervention.\n#\n# Setting this to 0 (default) disables automatic recovery from corruption.\n# Redis will exit with an error and suggest using redis-check-aof --fix.\n#\n# Note: This is different from aof-load-truncated, which handles missing data\n# (unexpected EOF) rather than corrupted data (format errors).\n# aof-load-corrupt-tail-max-size 4096\n\n# Redis can create append-only base files in either RDB or AOF formats. Using\n# the RDB format is always faster and more efficient, and disabling it is only\n# supported for backward compatibility purposes.\naof-use-rdb-preamble yes\n\n# Redis supports recording timestamp annotations in the AOF to support restoring\n# the data from a specific point-in-time. 
However, using this capability changes\n# the AOF format in a way that may not be compatible with existing AOF parsers.\naof-timestamp-enabled no\n\n################################ SHUTDOWN #####################################\n\n# Maximum time to wait for replicas when shutting down, in seconds.\n#\n# During shutdown, a grace period allows any lagging replicas to catch up with\n# the latest replication offset before the master exits. This period can\n# prevent data loss, especially for deployments without configured disk backups.\n#\n# The 'shutdown-timeout' value is the grace period's duration in seconds. It is\n# only applicable when the instance has replicas. To disable the feature, set\n# the value to 0.\n#\n# shutdown-timeout 10\n\n# When Redis receives a SIGINT or SIGTERM, shutdown is initiated and by default\n# an RDB snapshot is written to disk in a blocking operation if save points are configured.\n# The options used on signaled shutdown can include the following values:\n# default:  Saves RDB snapshot only if save points are configured.\n#           Waits for lagging replicas to catch up.\n# save:     Forces a DB saving operation even if no save points are configured.\n# nosave:   Prevents DB saving operation even if one or more save points are configured.\n# now:      Skips waiting for lagging replicas.\n# force:    Ignores any errors that would normally prevent the server from exiting.\n#\n# Any combination of values is allowed as long as "save" and "nosave" are not set simultaneously.\n# Example: "nosave force now"\n#\n# shutdown-on-sigint default\n# shutdown-on-sigterm default\n\n################ NON-DETERMINISTIC LONG BLOCKING COMMANDS #####################\n\n# Maximum time in milliseconds for EVAL scripts, functions and in some cases\n# modules' commands before Redis can start processing or rejecting other clients.\n#\n# If the maximum execution time is reached Redis will start to reply to most\n# commands with a BUSY error.\n#\n# In 
this state Redis will only allow a handful of commands to be executed.\n# For instance, SCRIPT KILL, FUNCTION KILL, SHUTDOWN NOSAVE and possibly some\n# module specific 'allow-busy' commands.\n#\n# SCRIPT KILL and FUNCTION KILL will only be able to stop a script that did not\n# yet call any write commands, so SHUTDOWN NOSAVE may be the only way to stop\n# the server in case a write command was already issued by the script and\n# the user doesn't want to wait for the natural termination of the script.\n#\n# The default is 5 seconds. It is possible to set it to 0 or a negative value\n# to disable this mechanism (uninterrupted execution). Note that in the past\n# this config had a different name, which is now an alias, so both of these do\n# the same:\n# lua-time-limit 5000\n# busy-reply-threshold 5000\n\n################################ REDIS CLUSTER  ###############################\n\n# Normal Redis instances can't be part of a Redis Cluster; only nodes that are\n# started as cluster nodes can. In order to start a Redis instance as a\n# cluster node, enable cluster support by uncommenting the following:\n#\n# cluster-enabled yes\n\n# Every cluster node has a cluster configuration file. This file is not\n# intended to be edited by hand. It is created and updated by Redis nodes.\n# Every Redis Cluster node requires a different cluster configuration file.\n# Make sure that instances running in the same system do not have\n# overlapping cluster configuration file names.\n#\n# cluster-config-file nodes-6379.conf\n\n# Cluster node timeout is the number of milliseconds a node must be unreachable\n# for it to be considered in failure state.\n# Most other internal time limits are a multiple of the node timeout.\n#\n# cluster-node-timeout 15000\n\n# The cluster port is the port that the cluster bus will listen for inbound connections on. When set\n# to the default value, 0, it will be bound to the command port + 10000. 
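For example (illustrative values): with the default "port 6379" and\n# "cluster-port 0", the cluster bus listens on port 16379 (6379 + 10000). 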
Setting this value requires\n# you to specify the cluster bus port when executing cluster meet.\n# cluster-port 0\n\n# A replica of a failing master will avoid starting a failover if its data\n# looks too old.\n#\n# There is no simple way for a replica to actually have an exact measure of\n# its "data age", so the following two checks are performed:\n#\n# 1) If there are multiple replicas able to failover, they exchange messages\n#    in order to try to give an advantage to the replica with the best\n#    replication offset (more data from the master processed).\n#    Replicas will try to get their rank by offset, and apply to the start\n#    of the failover a delay proportional to their rank.\n#\n# 2) Every single replica computes the time of the last interaction with\n#    its master. This can be the last ping or command received (if the master\n#    is still in the "connected" state), or the time that elapsed since the\n#    disconnection with the master (if the replication link is currently down).\n#    If the last interaction is too old, the replica will not try to failover\n#    at all.\n#\n# The point "2" can be tuned by the user. 
Specifically, a replica will not perform\n# the failover if, since the last interaction with the master, the time\n# elapsed is greater than:\n#\n#   (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period\n#\n# So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor\n# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the\n# replica will not try to failover if it was not able to talk with the master\n# for longer than 310 seconds.\n#\n# A large cluster-replica-validity-factor may allow replicas with data that is\n# too old to failover a master, while too small a value may prevent the cluster\n# from being able to elect a replica at all.\n#\n# For maximum availability, it is possible to set the cluster-replica-validity-factor\n# to a value of 0, which means that replicas will always try to failover the\n# master regardless of the last time they interacted with the master.\n# (However they'll always try to apply a delay proportional to their\n# offset rank).\n#\n# Zero is the only value able to guarantee that when all the partitions heal\n# the cluster will always be able to continue.\n#\n# cluster-replica-validity-factor 10\n\n# Cluster replicas are able to migrate to orphaned masters, that is, masters\n# that are left without working replicas. This improves the cluster's ability\n# to resist failures, as otherwise an orphaned master can't be failed over\n# in case of failure if it has no working replicas.\n#\n# Replicas migrate to orphaned masters only if there are still at least a\n# given number of other working replicas for their old master. This number\n# is the "migration barrier". A migration barrier of 1 means that a replica\n# will migrate only if there is at least 1 other working replica for its\n# master, and so forth. 
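For example (an illustrative setting): with "cluster-migration-barrier 2", a\n# replica migrates to an orphaned master only if its own master is left with\n# at least 2 other working replicas. 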
It usually reflects the number of replicas you want for every\n# master in your cluster.\n#\n# Default is 1 (replicas migrate only if their masters remain with at least\n# one replica). To disable migration just set it to a very large value or\n# set cluster-allow-replica-migration to 'no'.\n# A value of 0 can be set but is useful only for debugging and dangerous\n# in production.\n#\n# cluster-migration-barrier 1\n\n# Turning off this option allows the use of less automatic cluster\n# configuration. It both disables migration to orphaned masters and migration\n# from masters that became empty.\n#\n# Default is 'yes' (allow automatic migrations).\n#\n# cluster-allow-replica-migration yes\n\n# By default Redis Cluster nodes stop accepting queries if they detect there\n# is at least one hash slot uncovered (no available node is serving it).\n# This way if the cluster is partially down (for example a range of hash slots\n# is no longer covered), the whole cluster eventually becomes unavailable.\n# It automatically becomes available again as soon as all the slots are\n# covered.\n#\n# However, sometimes you want the subset of the cluster which is working\n# to continue to accept queries for the part of the key space that is still\n# covered. In order to do so, just set the cluster-require-full-coverage\n# option to no.\n#\n# cluster-require-full-coverage yes\n\n# This option, when set to yes, prevents replicas from trying to failover their\n# master during master failures. However, the replica can still perform a\n# manual failover, if forced to do so.\n#\n# This is useful in different scenarios, especially in the case of multiple\n# data center operations, where we want one side to never be promoted except\n# in the case of a total DC failure.\n#\n# cluster-replica-no-failover no\n\n# This option, when set to yes, allows nodes to serve read traffic while the\n# cluster is in a down state, as long as it believes it owns the slots.\n#\n# This is useful for two cases. 
The first case is for when an application\n# doesn't require consistency of data during node failures or network partitions.\n# One example of this is a cache, where as long as the node has the data it\n# should be able to serve it.\n#\n# The second use case is for configurations that don't meet the recommended\n# three shards but want to enable cluster mode and scale later. Without this\n# option set, a master outage in a 1 or 2 shard configuration causes a\n# read/write outage for the entire cluster; with it set, there is only a write\n# outage. Without a quorum of masters, slot ownership will not change\n# automatically.\n#\n# cluster-allow-reads-when-down no\n\n# This option, when set to yes, allows nodes to serve pubsub shard traffic while\n# the cluster is in a down state, as long as it believes it owns the slots.\n#\n# This is useful if the application would like to use the pubsub feature even when\n# the cluster global stable state is not OK. If the application wants to make sure only\n# one shard is serving a given channel, this feature should be kept as yes.\n#\n# cluster-allow-pubsubshard-when-down yes\n\n# Cluster link send buffer limit is the limit on the memory usage of an individual\n# cluster bus link's send buffer in bytes. Cluster links will be freed if they exceed\n# this limit. This is to primarily prevent send buffers from growing unbounded on links\n# toward slow peers (e.g. PubSub messages being piled up).\n# This limit is disabled by default. Enable this limit when the 'mem_cluster_links' INFO field\n# and/or 'send-buffer-allocated' entries in the 'CLUSTER LINKS' command output continuously increase.\n# A minimum limit of 1gb is recommended so that the cluster link buffer can fit\n# at least a single PubSub message by default. (client-query-buffer-limit default value is 1gb)\n#\n# cluster-link-sendbuf-limit 0\n\n# Clusters can configure their announced hostname using this config. 
This is a common use case for\n# applications that need to use TLS Server Name Indication (SNI) or deal with\n# DNS-based routing. By default this value is only shown as additional metadata\n# in the CLUSTER SLOTS command, but can be changed using the\n# 'cluster-preferred-endpoint-type' config. This value is communicated along\n# the cluster bus to all nodes; setting it to an empty string will remove\n# the hostname and also propagate the removal.\n#\n# cluster-announce-hostname ""\n\n# Clusters can configure an optional nodename to be used in addition to the node ID for\n# debugging and admin information. This name is broadcast between nodes, so will be used\n# in addition to the node ID when reporting cross node events such as node failures.\n# cluster-announce-human-nodename ""\n\n# Clusters can advertise how clients should connect to them using either their IP address,\n# a user defined hostname, or by declaring they have no endpoint. Which endpoint is\n# shown as the preferred endpoint is set by using the cluster-preferred-endpoint-type\n# config with values 'ip', 'hostname', or 'unknown-endpoint'. This value controls the\n# endpoint returned for MOVED/ASKING requests as well as the first field of CLUSTER SLOTS.\n# If the preferred endpoint type is set to hostname, but no announced hostname is set, a '?'\n# will be returned instead.\n#\n# When a cluster advertises itself as having an unknown endpoint, it's indicating that\n# the server doesn't know how clients can reach the cluster. This can happen in certain\n# networking situations where there are multiple possible routes to the node, and the\n# server doesn't know which one the client took. 
In this case, the server is expecting\n# the client to reach out on the same endpoint it used for making the last request, but use\n# the port provided in the response.\n#\n# cluster-preferred-endpoint-type ip\n\n# This configuration defines the sampling ratio (0-100) for checking command\n# compatibility in cluster mode. When a command is executed, it is sampled at\n# the specified ratio to determine if it complies with Redis cluster constraints,\n# such as cross-slot restrictions.\n#\n# - A value of 0 means no commands are sampled for compatibility checks.\n# - A value of 100 means all commands are checked.\n# - Intermediate values (e.g., 10) mean that approximately 10% of the commands\n#   are randomly selected for compatibility verification.\n#\n# Higher sampling ratios may introduce additional performance overhead, especially\n# under high QPS. The default value is 0 (no sampling).\n#\n# cluster-compatibility-sample-ratio 0\n\n# Clusters can be configured to track per-slot resource statistics,\n# which are accessible by the CLUSTER SLOT-STATS command.\n#\n# By default, the 'cluster-slot-stats-enabled' is disabled, and only 'key-count' is captured.\n# By enabling the 'cluster-slot-stats-enabled' config, the cluster will begin to capture advanced statistics.\n# These statistics can be leveraged to assess general slot usage trends, identify hot / cold slots,\n# migrate slots for a balanced cluster workload, and / or re-write application logic to better utilize slots.\n#\n# The config accepts multiple values as a space-separated list:\n#   - cpu: Track CPU usage per slot (cpu-usec metric)\n#   - net: Track network bytes per slot (network-bytes-in, network-bytes-out metrics)\n#   - mem: Track memory usage per slot (memory-bytes metric)\n#   - yes: Enable all tracking (equivalent to \"cpu net mem\")\n#   - no: Disable all tracking (default)\n#\n# Example: cluster-slot-stats-enabled \"cpu net\"\n#\n# Note: Memory tracking (mem) can ONLY be enabled at startup. 
If you try to enable\n# memory tracking via CONFIG SET when it wasn't enabled at startup, the command will\n# fail. However, you can disable memory tracking at runtime by removing the 'mem' flag.\n# Once disabled, memory tracking cannot be re-enabled without restarting the server.\n#\n# cluster-slot-stats-enabled no\n\n# Slot migration write pause timeout controls how long the source node will\n# pause write operations during the slot migration handoff phase. This usually\n# finishes in a few milliseconds, depending on traffic and load. When the source\n# node pauses writes to allow the destination to catch up and take the ownership\n# of the slots, this timeout prevents writes from being blocked indefinitely.\n#\n# If the destination node fails to complete the slot ownership takeover within\n# this timeout, the source node will resume accepting writes and assume the\n# migration task has failed. This prevents the source node from being permanently\n# blocked if the destination node becomes unresponsive or fails during migration.\n#\n# If this timeout is set too low, the source may resume writes and assume that\n# the slot migration has failed while the destination is still in the process of\n# draining the replication stream and publishing the configuration update.\n# During this window, writes accepted by the source will not be replicated to\n# the destination; if the destination later publishes the updated config and\n# takes ownership, those writes could be lost. Therefore, avoid setting this\n# timeout too low.\n#\n# This timeout is specified in milliseconds.\n#\n# cluster-slot-migration-write-pause-timeout 10000\n\n# This config controls the maximum acceptable lag in bytes between source and\n# destination nodes during slot migration before triggering the slot handoff\n# phase. 
If the remaining replication stream size falls below this threshold,\n# the source node pauses writes and then signals the destination that it can\n# take over the slot ownership after draining the remaining replication stream.\n#\n# A smaller value means a potentially shorter write pause duration, but it may\n# take longer for the destination to catch up. A larger value means handoff can\n# be triggered earlier, but the write pause may potentially be longer.\n#\n# cluster-slot-migration-handoff-max-lag-bytes 1mb\n\n# In order to set up your cluster, make sure to read the documentation\n# available at the https://redis.io web site.\n\n########################## CLUSTER DOCKER/NAT support  ########################\n\n# In certain deployments, Redis Cluster node address discovery fails, because\n# addresses are NAT-ted or because ports are forwarded (the typical case is\n# Docker and other containers).\n#\n# In order to make Redis Cluster work in such environments, a static\n# configuration where each node knows its public address is needed. The\n# following four options are used for this purpose:\n#\n# * cluster-announce-ip\n# * cluster-announce-port\n# * cluster-announce-tls-port\n# * cluster-announce-bus-port\n#\n# Each instructs the node about its address, client ports (for connections\n# without and with TLS) and cluster message bus port. The information is then\n# published in the header of the bus packets so that other nodes will be able to\n# correctly map the address of the node publishing the information.\n#\n# If tls-cluster is set to yes and cluster-announce-tls-port is omitted or set\n# to zero, then cluster-announce-port refers to the TLS port. 
Note also that\n# cluster-announce-tls-port has no effect if tls-cluster is set to no.\n#\n# If the above options are not used, the normal Redis Cluster auto-detection\n# will be used instead.\n#\n# Note that when remapped, the bus port may not be at the fixed offset of\n# clients port + 10000, so you can specify any port and bus-port depending\n# on how they get remapped. If the bus-port is not set, a fixed offset of\n# 10000 will be used as usual.\n#\n# Example:\n#\n# cluster-announce-ip 10.1.1.5\n# cluster-announce-tls-port 6379\n# cluster-announce-port 0\n# cluster-announce-bus-port 6380\n\n################################## SLOW LOG ###################################\n\n# The Redis Slow Log is a system to log queries that exceeded a specified\n# execution time. The execution time does not include the I/O operations\n# like talking with the client, sending the reply and so forth,\n# but just the time needed to actually execute the command (this is the only\n# stage of command execution where the thread is blocked and can not serve\n# other requests in the meantime).\n#\n# You can configure the slow log with two parameters: one tells Redis\n# what is the execution time, in microseconds, to exceed in order for the\n# command to get logged, and the other parameter is the length of the\n# slow log. When a new command is logged the oldest one is removed from the\n# queue of logged commands.\n\n# The following time is expressed in microseconds, so 1000000 is equivalent\n# to one second. Note that a negative number disables the slow log, while\n# a value of zero forces the logging of every command.\nslowlog-log-slower-than 10000\n\n# There is no limit to this length. 
Just be aware that it will consume memory.\n# You can reclaim memory used by the slow log with SLOWLOG RESET.\nslowlog-max-len 128\n\n################################ LATENCY MONITOR ##############################\n\n# The Redis latency monitoring subsystem samples different operations\n# at runtime in order to collect data related to possible sources of\n# latency of a Redis instance.\n#\n# Via the LATENCY command this information is available to the user, who can\n# print graphs and obtain reports.\n#\n# The system only logs operations that were performed in a time equal to or\n# greater than the amount of milliseconds specified via the\n# latency-monitor-threshold configuration directive. When its value is set\n# to zero, the latency monitor is turned off.\n#\n# By default latency monitoring is disabled since it is mostly not needed\n# if you don't have latency issues, and collecting data has a performance\n# impact that, while very small, can be measured under big load. Latency\n# monitoring can easily be enabled at runtime using the command\n# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.\nlatency-monitor-threshold 0\n\n################################ LATENCY TRACKING ##############################\n\n# The Redis extended latency monitoring tracks the per command latencies and enables\n# exporting the percentile distribution via the INFO latencystats command,\n# and cumulative latency distributions (histograms) via the LATENCY command.\n#\n# By default, the extended latency monitoring is enabled since the overhead\n# of keeping track of the command latency is very small.\n# latency-tracking yes\n\n# By default the exported latency percentiles via the INFO latencystats command\n# are the p50, p99, and p999.\n# latency-tracking-info-percentiles 50 99 99.9\n\n############################# EVENT NOTIFICATION ##############################\n\n# Redis can notify Pub/Sub clients about events happening in the key space.\n# This feature is documented 
at https://redis.io/docs/latest/develop/use/keyspace-notifications/\n#\n# For instance if keyspace events notification is enabled, and a client\n# performs a DEL operation on key \"foo\" stored in the Database 0, two\n# messages will be published via Pub/Sub:\n#\n# PUBLISH __keyspace@0__:foo del\n# PUBLISH __keyevent@0__:del foo\n#\n# It is possible to select the events that Redis will notify among a set\n# of classes. Every class is identified by a single character:\n#\n#  K     Keyspace events, published with __keyspace@<db>__ prefix.\n#  E     Keyevent events, published with __keyevent@<db>__ prefix.\n#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...\n#  $     String commands\n#  l     List commands\n#  s     Set commands\n#  h     Hash commands\n#  z     Sorted set commands\n#  x     Expired events (events generated every time a key expires)\n#  e     Evicted events (events generated when a key is evicted for maxmemory)\n#  n     New key events (Note: not included in the 'A' class)\n#  t     Stream commands\n#  d     Module key type events\n#  m     Key-miss events (Note: It is not included in the 'A' class)\n#  o     Overwritten events generated every time a key is overwritten.\n#        (Note: not included in the 'A' class)\n#  c     Type-changed events generated every time a key's type changes\n#        (Note: not included in the 'A' class)\n#  r     rate limit event\n#  S     Subkeyspace events, published with __subkeyspace@<db>__:<key> prefix.\n#  T     Subkeyevent events, published with __subkeyevent@<db>__:<event> prefix.\n#  I     Subkeyspaceitem events, published per subkey with\n#        __subkeyspaceitem@<db>__:<key>\\n<subkey> prefix.\n#  V     Subkeyspaceevent events, published with\n#        __subkeyspaceevent@<db>__:<event>|<key> prefix.\n#  A     Alias for g$lshzxetd, so that the \"AKE\" string means all the events\n#        except key-miss, new key, overwritten, type-changed and rate-limit.\n#\n#  The 
\"notify-keyspace-events\" takes as argument a string that is composed\n#  of zero or multiple characters. The empty string means that notifications\n#  are disabled.\n#\n#  Example: to enable list and generic events, from the point of view of the\n#           event name, use:\n#\n#  notify-keyspace-events Elg\n#\n#  Example 2: to get the stream of the expired keys subscribing to channel\n#             name __keyevent@0__:expired use:\n#\n#  notify-keyspace-events Ex\n#\n#  By default all notifications are disabled because most users don't need\n#  this feature and the feature has some overhead. Note that if you don't\n#  specify at least one of K or E, no events will be delivered.\nnotify-keyspace-events \"\"\n\n############################### ADVANCED CONFIG ###############################\n\n# Hashes are encoded using a memory efficient data structure when they have a\n# small number of entries, and the biggest entry does not exceed a given\n# threshold. These thresholds can be configured using the following directives.\nhash-max-listpack-entries 512\nhash-max-listpack-value 64\n\n# Lists are also encoded in a special way to save a lot of space.\n# The number of entries allowed per internal list node can be specified\n# as a fixed maximum size or a maximum number of elements.\n# For a fixed maximum size, use -5 through -1, meaning:\n# -5: max size: 64 Kb  <-- not recommended for normal workloads\n# -4: max size: 32 Kb  <-- not recommended\n# -3: max size: 16 Kb  <-- probably not recommended\n# -2: max size: 8 Kb   <-- good\n# -1: max size: 4 Kb   <-- good\n# Positive numbers mean store up to _exactly_ that number of elements\n# per list node.\n# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),\n# but if your use case is unique, adjust the settings as necessary.\nlist-max-listpack-size -2\n\n# Lists may also be compressed.\n# Compress depth is the number of quicklist ziplist nodes from *each* side of\n# the list to *exclude* from 
compression.  The head and tail of the list\n# are always uncompressed for fast push/pop operations.  Settings are:\n# 0: disable all list compression\n# 1: depth 1 means \"don't start compressing until after 1 node into the list,\n#    going from either the head or tail\"\n#    So: [head]->node->node->...->node->[tail]\n#    [head], [tail] will always be uncompressed; inner nodes will compress.\n# 2: [head]->[next]->node->node->...->node->[prev]->[tail]\n#    2 here means: don't compress head or head->next or tail->prev or tail,\n#    but compress all nodes between them.\n# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]\n# etc.\nlist-compress-depth 0\n\n# Sets have a special encoding when a set is composed\n# of just strings that happen to be integers in radix 10 in the range\n# of 64 bit signed integers.\n# The following configuration setting sets the limit in the size of the\n# set in order to use this special memory saving encoding.\nset-max-intset-entries 512\n\n# Sets containing non-integer values are also encoded using a memory efficient\n# data structure when they have a small number of entries, and the biggest entry\n# does not exceed a given threshold. These thresholds can be configured using\n# the following directives.\nset-max-listpack-entries 128\nset-max-listpack-value 64\n\n# Similarly to hashes and lists, sorted sets are also specially encoded in\n# order to save a lot of space. This encoding is only used when the length and\n# elements of a sorted set are below the following limits:\nzset-max-listpack-entries 128\nzset-max-listpack-value 64\n\n# HyperLogLog sparse representation bytes limit. The limit includes the\n# 16 bytes header. 
When a HyperLogLog using the sparse representation crosses\n# this limit, it is converted into the dense representation.\n#\n# A value greater than 16000 is totally useless, since at that point the\n# dense representation is more memory efficient.\n#\n# The suggested value is ~ 3000 in order to have the benefits of\n# the space efficient encoding without slowing down too much PFADD,\n# which is O(N) with the sparse encoding. The value can be raised to\n# ~ 10000 when CPU is not a concern, but space is, and the data set is\n# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.\nhll-sparse-max-bytes 3000\n\n# Streams macro node max size / items. The stream data structure is a radix\n# tree of big nodes that encode multiple items inside. Using this configuration\n# it is possible to configure how big a single node can be in bytes, and the\n# maximum number of items it may contain before switching to a new node when\n# appending new stream entries. If any of the following settings are set to\n# zero, the limit is ignored, so for instance it is possible to set just a\n# max entries limit by setting max-bytes to 0 and max-entries to the desired\n# value.\nstream-node-max-bytes 4096\nstream-node-max-entries 100\n\n# Redis Streams support Idempotent Message Producer (IDMP) tracking to prevent\n# duplicate message delivery. When producers send messages with IDMP identifiers\n# (using XADD with IDMP or IDMPAUTO parameters), Redis tracks these identifiers to\n# detect and reject duplicates within a configurable time window.\n#\n# stream-idmp-duration: Specifies how long (in seconds) Redis should remember\n# IDMP identifiers for duplicate detection. After this duration expires, old\n# identifiers are automatically removed. 
This prevents unbounded memory growth\n# while allowing reasonable duplicate detection windows.\n# Valid range: 1 to 86400 seconds (1 second to 24 hours)\n# Default: 100 seconds\n#\n# stream-idmp-maxsize: Maximum number of IDMP identifiers to track per producer\n# per stream. Once this limit is reached, the oldest identifiers are evicted to\n# make room for new ones. This caps memory usage for IDMP tracking.\n# Valid range: 1 to 10000 entries\n# Default: 100 entries\n#\n# Note: These are default values for new streams. Individual streams can override\n# these settings using the XCFGSET command.\n#\n# stream-idmp-duration 100\n# stream-idmp-maxsize 100\n\n# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in\n# order to help rehashing the main Redis hash table (the one mapping top-level\n# keys to values). The hash table implementation Redis uses (see dict.c)\n# performs a lazy rehashing: the more operations you run on a hash table\n# that is rehashing, the more rehashing "steps" are performed, so if the\n# server is idle the rehashing is never complete and some more memory is used\n# by the hash table.\n#\n# The default is to use this millisecond 10 times every second in order to\n# actively rehash the main dictionaries, freeing memory when possible.\n#\n# If unsure:\n# use "activerehashing no" if you have hard latency requirements and it is\n# not a good thing in your environment that Redis can reply from time to time\n# to queries with a 2 millisecond delay.\n#\n# use "activerehashing yes" if you don't have such hard requirements but\n# want to free memory asap when possible.\nactiverehashing yes\n\n# The client output buffer limits can be used to force disconnection of clients\n# that are not reading data from the server fast enough for some reason (a\n# common reason is that a Pub/Sub client can't consume messages as fast as the\n# publisher can produce them).\n#\n# The limit can be set differently for the three different classes of 
clients:\n#\n# normal -> normal clients including MONITOR clients\n# replica -> replica clients\n# pubsub -> clients subscribed to at least one pubsub channel or pattern\n#\n# The syntax of every client-output-buffer-limit directive is the following:\n#\n# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>\n#\n# A client is immediately disconnected once the hard limit is reached, or if\n# the soft limit is reached and remains reached for the specified number of\n# seconds (continuously).\n# So for instance if the hard limit is 32 megabytes and the soft limit is\n# 16 megabytes / 10 seconds, the client will get disconnected immediately\n# if the size of the output buffers reaches 32 megabytes, but will also get\n# disconnected if the client reaches 16 megabytes and continuously exceeds\n# the limit for 10 seconds.\n#\n# By default normal clients are not limited because they don't receive data\n# without asking (in a push way), but just after a request, so only\n# asynchronous clients may create a scenario where data is requested faster\n# than it can be read.\n#\n# Instead there is a default limit for pubsub and replica clients, since\n# subscribers and replicas receive data in a push fashion.\n#\n# Note that it doesn't make sense to set the replica clients output buffer\n# limit lower than the repl-backlog-size config (partial sync will succeed\n# and then the replica will get disconnected).\n# Such a configuration is ignored (the size of repl-backlog-size will be used).\n# This doesn't have memory consumption implications since the replica client\n# will share the backlog buffers memory.\n#\n# Both the hard and the soft limits can be disabled by setting them to zero.\nclient-output-buffer-limit normal 0 0 0\nclient-output-buffer-limit replica 256mb 64mb 60\nclient-output-buffer-limit pubsub 32mb 8mb 60\n\n# Client query buffers accumulate new commands. 
They are limited to a fixed\n# amount by default in order to prevent a protocol desynchronization (for\n# instance due to a bug in the client) from leading to unbounded memory usage in\n# the query buffer. However you can configure it here if you have very special\n# needs, such as a command with a huge argument, or huge multi/exec requests, or the like.\n#\n# client-query-buffer-limit 1gb\n\n# Defines how many commands in each client pipeline to decode and prefetch\n# lookahead 16\n\n# In some scenarios client connections can hog up memory leading to OOM\n# errors or data eviction. To avoid this we can cap the accumulated memory\n# used by all client connections (all pubsub and normal clients). Once we\n# reach that limit connections will be dropped by the server freeing up\n# memory. The server will attempt to drop the connections using the most \n# memory first. We call this mechanism \"client eviction\".\n#\n# Client eviction is configured using the maxmemory-clients setting as follows:\n# 0 - client eviction is disabled (default)\n#\n# A memory value can be used for the client eviction threshold,\n# for example:\n# maxmemory-clients 1g\n#\n# A percentage value (between 1% and 100%) means the client eviction threshold\n# is based on a percentage of the maxmemory setting. For example to set client\n# eviction at 5% of maxmemory:\n# maxmemory-clients 5%\n\n# In the Redis protocol, bulk requests, that is, elements representing single\n# strings, are normally limited to 512 mb. However you can change this limit\n# here, but it must be 1mb or greater.\n#\n# proto-max-bulk-len 512mb\n\n# Redis calls an internal function to perform many background tasks, like\n# closing connections of clients in timeout, purging expired keys that are\n# never requested, and so forth.\n#\n# Not all tasks are performed with the same frequency, but Redis checks for\n# tasks to perform according to the specified \"hz\" value.\n#\n# By default \"hz\" is set to 10. 
Raising the value will use more CPU when\n# Redis is idle, but at the same time will make Redis more responsive when\n# there are many keys expiring at the same time, and timeouts may be\n# handled with more precision.\n#\n# The range is between 1 and 500, however a value over 100 is usually not\n# a good idea. Most users should use the default of 10 and raise this up to\n# 100 only in environments where very low latency is required.\nhz 10\n\n# Normally it is useful to have an HZ value which is proportional to the\n# number of clients connected. This is useful in order, for instance, to\n# avoid processing too many clients for each background task invocation\n# and thus avoid latency spikes.\n#\n# Since the default HZ value is conservatively set to 10, Redis\n# offers, and enables by default, the ability to use an adaptive HZ value\n# which will temporarily rise when there are many connected clients.\n#\n# When dynamic HZ is enabled, the actual configured HZ will be used\n# as a baseline, but multiples of the configured HZ value will be actually\n# used as needed once more clients are connected. In this way an idle\n# instance will use very little CPU time while a busy instance will be\n# more responsive.\ndynamic-hz yes\n\n# When a child rewrites the AOF file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\naof-rewrite-incremental-fsync yes\n\n# When redis saves an RDB file, if the following option is enabled\n# the file will be fsync-ed every 4 MB of data generated. This is useful\n# in order to commit the file to the disk more incrementally and avoid\n# big latency spikes.\nrdb-save-incremental-fsync yes\n\n# Redis LFU eviction (see maxmemory setting) can be tuned. 
However it is a good\n# idea to start with the default settings and only change them after investigating\n# how to improve performance and how the LFU of the keys changes over time, which\n# is possible to inspect via the OBJECT FREQ command.\n#\n# There are two tunable parameters in the Redis LFU implementation: the\n# counter logarithm factor and the counter decay time. It is important to\n# understand what the two parameters mean before changing them.\n#\n# The LFU counter is just 8 bits per key; its maximum value is 255, so Redis\n# uses a probabilistic increment with logarithmic behavior. Given the value\n# of the old counter, when a key is accessed, the counter is incremented in\n# this way:\n#\n# 1. A random number R between 0 and 1 is extracted.\n# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).\n# 3. The counter is incremented only if R < P.\n#\n# The default lfu-log-factor is 10. This is a table of how the frequency\n# counter changes with a different number of accesses with different\n# logarithmic factors:\n#\n# +--------+------------+------------+------------+------------+------------+\n# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |\n# +--------+------------+------------+------------+------------+------------+\n# | 0      | 104        | 255        | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 1      | 18         | 49         | 255        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 10     | 10         | 18         | 142        | 255        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n# | 100    | 8          | 11         | 49         | 143        | 255        |\n# +--------+------------+------------+------------+------------+------------+\n#\n# NOTE: The above table was obtained by running the following 
commands:\n#\n#   redis-benchmark -n 1000000 incr foo\n#   redis-cli object freq foo\n#\n# NOTE 2: The counter initial value is 5 in order to give new objects a chance\n# to accumulate hits.\n#\n# The counter decay time is the time, in minutes, that must elapse in order\n# for the key counter to be decremented.\n#\n# The default value for the lfu-decay-time is 1. A special value of 0 means we\n# will never decay the counter.\n#\n# lfu-log-factor 10\n# lfu-decay-time 1\n\n\n# The maximum number of new client connections accepted per event-loop cycle. This configuration\n# is set independently for TLS connections.\n#\n# By default, up to 10 new connections will be accepted per event-loop cycle for normal connections\n# and up to 1 new connection per event-loop cycle for TLS connections.\n#\n# Adjusting this to a larger number can slightly improve efficiency for new connections\n# at the risk of causing timeouts for regular commands on established connections.  It is\n# not advised to change this without ensuring that all clients have limited connection\n# pools and exponential backoff in the case of command/connection timeouts. \n#\n# If your application is establishing a large number of new connections per second you should\n# also consider tuning the value of tcp-backlog, which allows the kernel to buffer more\n# pending connections before dropping or rejecting connections. \n#\n# max-new-connections-per-cycle 10\n# max-new-tls-connections-per-cycle 1\n\n\n########################### ACTIVE DEFRAGMENTATION #######################\n#\n# What is active defragmentation?\n# -------------------------------\n#\n# Active (online) defragmentation allows a Redis server to compact the\n# spaces left between small allocations and deallocations of data in memory,\n# thus allowing it to reclaim memory.\n#\n# Fragmentation is a natural process that happens with every allocator (but\n# less so with Jemalloc, fortunately) and certain workloads. 
Normally a server\n# restart is needed in order to lower the fragmentation, or at least to flush\n# away all the data and create it again. However thanks to this feature\n# implemented by Oran Agra for Redis 4.0 this process can happen at runtime\n# in a \"hot\" way, while the server is running.\n#\n# Basically when the fragmentation is over a certain level (see the\n# configuration options below) Redis will start to create new copies of the\n# values in contiguous memory regions by exploiting certain specific Jemalloc\n# features (in order to understand if an allocation is causing fragmentation\n# and to allocate it in a better place), and at the same time, will release the\n# old copies of the data. This process, repeated incrementally for all the keys\n# will cause the fragmentation to drop back to normal values.\n#\n# Important things to understand:\n#\n# 1. This feature is disabled by default, and only works if you compiled Redis\n#    to use the copy of Jemalloc we ship with the source code of Redis.\n#    This is the default with Linux builds.\n#\n# 2. You never need to enable this feature if you don't have fragmentation\n#    issues.\n#\n# 3. Once you experience fragmentation, you can enable this feature when\n#    needed with the command \"CONFIG SET activedefrag yes\".\n#\n# The configuration parameters are able to fine tune the behavior of the\n# defragmentation process. 
If you are not sure about what they mean it is\n# a good idea to leave the defaults untouched.\n\n# Active defragmentation is disabled by default\n# activedefrag no\n\n# Minimum amount of fragmentation waste to start active defrag\n# active-defrag-ignore-bytes 100mb\n\n# Minimum percentage of fragmentation to start active defrag\n# active-defrag-threshold-lower 10\n\n# Maximum percentage of fragmentation at which we use maximum effort\n# active-defrag-threshold-upper 100\n\n# Minimal effort for defrag in CPU percentage, to be used when the lower\n# threshold is reached\n# active-defrag-cycle-min 1\n\n# Maximal effort for defrag in CPU percentage, to be used when the upper\n# threshold is reached\n# active-defrag-cycle-max 25\n\n# Maximum number of set/hash/zset/list fields that will be processed from\n# the main dictionary scan\n# active-defrag-max-scan-fields 1000\n\n# Jemalloc background thread for purging will be enabled by default\njemalloc-bg-thread yes\n\n# It is possible to pin different threads and processes of Redis to specific\n# CPUs in your system, in order to maximize the performance of the server.\n# This is useful both in order to pin different Redis threads to different\n# CPUs, and also in order to make sure that multiple Redis instances running\n# on the same host will be pinned to different CPUs.\n#\n# Normally you can do this using the \"taskset\" command, however it is also\n# possible to do this via Redis configuration directly, both in Linux and FreeBSD.\n#\n# You can pin the server/IO threads, bio threads, aof rewrite child process, and\n# the bgsave child process. 
The syntax to specify the cpu list is the same as\n# the taskset command:\n#\n# Set redis server/io threads to cpu affinity 0,2,4,6:\n# server-cpulist 0-7:2\n#\n# Set bio threads to cpu affinity 1,3:\n# bio-cpulist 1,3\n#\n# Set aof rewrite child process to cpu affinity 8,9,10,11:\n# aof-rewrite-cpulist 8-11\n#\n# Set bgsave child process to cpu affinity 1,10,11\n# bgsave-cpulist 1,10-11\n\n# In some cases redis will emit warnings and even refuse to start if it detects\n# that the system is in bad state, it is possible to suppress these warnings\n# by setting the following config which takes a space delimited list of warnings\n# to suppress\n#\n# ignore-warnings ARM64-COW-BUG\n\n# When enabled, 'key-memory-histograms' adds per-type memory allocation histograms\n# (distrib_*_sizes fields in bytes) to INFO keysizes output, complementing the\n# existing size/item histograms with actual memory usage data.\n#\n# This setting must be enabled at startup; later config changes have no effect.\n# It is also implicitly enabled when 'cluster-slot-stats-enabled' is set.\n#\n# key-memory-histograms no\n"
  },
  {
    "path": "runtest",
    "content": "#!/bin/sh\nTCL_VERSIONS=\"8.5 8.6 8.7 9.0\"\nTCLSH=\"\"\n\nfor VERSION in $TCL_VERSIONS; do\n\tTCL=`which tclsh$VERSION 2>/dev/null` && TCLSH=$TCL\ndone\n\nif [ -z $TCLSH ]\nthen\n    echo \"You need tcl 8.5 or newer in order to run the Redis test\"\n    exit 1\nfi\n$TCLSH tests/test_helper.tcl \"${@}\"\n"
  },
  {
    "path": "runtest-cluster",
    "content": "#!/bin/sh\nTCL_VERSIONS=\"8.5 8.6 8.7 9.0\"\nTCLSH=\"\"\n\nfor VERSION in $TCL_VERSIONS; do\n\tTCL=`which tclsh$VERSION 2>/dev/null` && TCLSH=$TCL\ndone\n\nif [ -z $TCLSH ]\nthen\n    echo \"You need tcl 8.5 or newer in order to run the Redis Cluster test\"\n    exit 1\nfi\n$TCLSH tests/cluster/run.tcl $*\n"
  },
  {
    "path": "runtest-moduleapi",
    "content": "#!/bin/sh\nTCL_VERSIONS=\"8.5 8.6 8.7 9.0\"\nTCLSH=\"\"\n[ -z \"$MAKE\" ] && MAKE=make\n\nfor VERSION in $TCL_VERSIONS; do\n\tTCL=`which tclsh$VERSION 2>/dev/null` && TCLSH=$TCL\ndone\n\nif [ -z $TCLSH ]\nthen\n    echo \"You need tcl 8.5 or newer in order to run the Redis ModuleApi test\"\n    exit 1\nfi\n\n$MAKE -C tests/modules && \\\n$TCLSH tests/test_helper.tcl \\\n--single unit/moduleapi/commandfilter \\\n--single unit/moduleapi/basics \\\n--single unit/moduleapi/fork \\\n--single unit/moduleapi/testrdb \\\n--single unit/moduleapi/infotest \\\n--single unit/moduleapi/moduleconfigs \\\n--single unit/moduleapi/infra \\\n--single unit/moduleapi/propagate \\\n--single unit/moduleapi/hooks \\\n--single unit/moduleapi/misc \\\n--single unit/moduleapi/blockonkeys \\\n--single unit/moduleapi/blockonbackground \\\n--single unit/moduleapi/scan \\\n--single unit/moduleapi/datatype \\\n--single unit/moduleapi/auth \\\n--single unit/moduleapi/keyspace_events \\\n--single unit/moduleapi/blockedclient \\\n--single unit/moduleapi/getchannels \\\n--single unit/moduleapi/getkeys \\\n--single unit/moduleapi/test_lazyfree \\\n--single unit/moduleapi/defrag \\\n--single unit/moduleapi/keyspecs \\\n--single unit/moduleapi/hash \\\n--single unit/moduleapi/zset \\\n--single unit/moduleapi/list \\\n--single unit/moduleapi/stream \\\n--single unit/moduleapi/mallocsize \\\n--single unit/moduleapi/datatype2 \\\n--single unit/moduleapi/cluster \\\n--single unit/moduleapi/aclcheck \\\n--single unit/moduleapi/subcommands \\\n--single unit/moduleapi/reply \\\n--single unit/moduleapi/cmdintrospection \\\n--single unit/moduleapi/eventloop \\\n--single unit/moduleapi/timer \\\n--single unit/moduleapi/publish \\\n--single unit/moduleapi/usercall \\\n--single unit/moduleapi/postnotifications \\\n--single unit/moduleapi/async_rm_call \\\n--single unit/moduleapi/moduleauth \\\n--single unit/moduleapi/rdbloadsave \\\n--single unit/moduleapi/crash \\\n--single 
unit/moduleapi/internalsecret \\\n--single unit/moduleapi/configaccess \\\n--single unit/moduleapi/keymeta \\\n--single unit/moduleapi/ksn_notify_side_effect \\\n\"${@}\"\n"
  },
  {
    "path": "runtest-sentinel",
    "content": "#!/bin/sh\nTCL_VERSIONS=\"8.5 8.6 8.7 9.0\"\nTCLSH=\"\"\n\nfor VERSION in $TCL_VERSIONS; do\n\tTCL=`which tclsh$VERSION 2>/dev/null` && TCLSH=$TCL\ndone\n\nif [ -z $TCLSH ]\nthen\n    echo \"You need tcl 8.5 or newer in order to run the Redis Sentinel test\"\n    exit 1\nfi\n$TCLSH tests/sentinel/run.tcl $*\n"
  },
  {
    "path": "sentinel.conf",
    "content": "# Example sentinel.conf\n\n# By default protected mode is disabled in sentinel mode. Sentinel is reachable\n# from interfaces different than localhost. Make sure the sentinel instance is\n# protected from the outside world via firewalling or other means.\nprotected-mode no\n\n# port <sentinel-port>\n# The port that this sentinel instance will run on\nport 26379\n\n# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.\n# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when\n# daemonized.\ndaemonize no\n\n# When running daemonized, Redis Sentinel writes a pid file in\n# /var/run/redis-sentinel.pid by default. You can specify a custom pid file\n# location here.\npidfile /var/run/redis-sentinel.pid\n\n# Specify the server verbosity level.\n# This can be one of:\n# debug (a lot of information, useful for development/testing)\n# verbose (many rarely useful info, but not a mess like the debug level)\n# notice (moderately verbose, what you want in production probably)\n# warning (only very important / critical messages are logged)\n# nothing (nothing is logged)\nloglevel notice\n\n# Specify the log file name. Also the empty string can be used to force\n# Sentinel to log on the standard output. Note that if you use standard\n# output for logging but daemonize, logs will be sent to /dev/null\nlogfile \"\"\n\n# To enable logging to the system logger, just set 'syslog-enabled' to yes,\n# and optionally update the other syslog parameters to suit your needs.\n# syslog-enabled no\n\n# Specify the syslog identity.\n# syslog-ident sentinel\n\n# Specify the syslog facility. 
Must be USER or between LOCAL0-LOCAL7.\n# syslog-facility local0\n\n# sentinel announce-ip <ip>\n# sentinel announce-port <port>\n#\n# The above two configuration directives are useful in environments where,\n# because of NAT, Sentinel is reachable from outside via a non-local address.\n#\n# When announce-ip is provided, the Sentinel will claim the specified IP address\n# in HELLO messages used to gossip its presence, instead of auto-detecting the\n# local address as it usually does.\n#\n# Similarly when announce-port is provided and is valid and non-zero, Sentinel\n# will announce the specified TCP port.\n#\n# The two options don't need to be used together; if only announce-ip is\n# provided, the Sentinel will announce the specified IP and the server port\n# as specified by the \"port\" option. If only announce-port is provided, the\n# Sentinel will announce the auto-detected local IP and the specified port.\n#\n# Example:\n#\n# sentinel announce-ip 1.2.3.4\n\n# dir <working-directory>\n# Every long running process should have a well-defined working directory.\n# Having Redis Sentinel chdir to /tmp at startup is the simplest way\n# for the process not to interfere with administrative tasks such as\n# unmounting filesystems.\ndir /tmp\n\n# sentinel monitor <master-name> <ip> <redis-port> <quorum>\n#\n# Tells Sentinel to monitor this master, and to consider it in O_DOWN\n# (Objectively Down) state only if at least <quorum> sentinels agree.\n#\n# Note that whatever the ODOWN quorum is, a Sentinel will need to\n# be elected by the majority of the known Sentinels in order to\n# start a failover, so no failover can be performed by a minority.\n#\n# Replicas are auto-discovered, so you don't need to specify replicas in\n# any way. 
Sentinel itself will rewrite this configuration file adding\n# the replicas using additional configuration options.\n# Also note that the configuration file is rewritten when a\n# replica is promoted to master.\n#\n# Note: master name should not include special characters or spaces.\n# The valid charset is A-z 0-9 and the three characters \".-_\".\nsentinel monitor mymaster 127.0.0.1 6379 2\n\n# sentinel auth-pass <master-name> <password>\n#\n# Set the password to use to authenticate with the master and replicas.\n# Useful if there is a password set in the Redis instances to monitor.\n#\n# Note that the master password is also used for replicas, so it is not\n# possible to set a different password in masters and replicas instances\n# if you want to be able to monitor these instances with Sentinel.\n#\n# However you can have Redis instances without the authentication enabled\n# mixed with Redis instances requiring the authentication (as long as the\n# password set is the same for all the instances requiring the password) as\n# the AUTH command will have no effect in Redis instances with authentication\n# switched off.\n#\n# Example:\n#\n# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd\n\n# sentinel auth-user <master-name> <username>\n#\n# This is useful in order to authenticate to instances having ACL capabilities,\n# that is, running Redis 6.0 or greater. When just auth-pass is provided the\n# Sentinel instance will authenticate to Redis using the old \"AUTH <pass>\"\n# method. 
When a username is also provided, it will use \"AUTH <user> <pass>\".\n# On the Redis server side, the ACL providing just minimal access to\n# Sentinel instances should be configured along the following lines:\n#\n#     user sentinel-user >somepassword +client +subscribe +publish \\\n#                        +ping +info +multi +slaveof +config +client +exec on\n\n# sentinel down-after-milliseconds <master-name> <milliseconds>\n#\n# Number of milliseconds the master (or any attached replica or sentinel) should\n# be unreachable (as in, no acceptable reply to PING, continuously, for the\n# specified period) in order to consider it in S_DOWN state (Subjectively\n# Down).\n#\n# Default is 30 seconds.\nsentinel down-after-milliseconds mymaster 30000\n\n# IMPORTANT NOTE: starting with Redis 6.2 ACL capability is supported for\n# Sentinel mode, please refer to the Redis website https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/\n# for more details.\n\n# Sentinel's ACL users are defined in the following format:\n#\n#   user <username> ... acl rules ...\n#\n# For example:\n#\n#   user worker +@admin +@connection ~* on >ffa9203c493aa99\n#\n# For more information about ACL configuration please refer to the Redis\n# website at https://redis.io/docs/latest/operate/oss_and_stack/management/security/acl/ and redis server configuration \n# template redis.conf.\n\n# ACL LOG\n#\n# The ACL Log tracks failed commands and authentication events associated\n# with ACLs. The ACL Log is useful to troubleshoot failed commands blocked \n# by ACLs. The ACL Log is stored in memory. You can reclaim memory with \n# ACL LOG RESET. Define the maximum entry length of the ACL Log below.\nacllog-max-len 128\n\n# Using an external ACL file\n#\n# Instead of configuring users here in this file, it is possible to use\n# a stand-alone file just listing users. 
The two methods cannot be mixed:\n# if you configure users here and at the same time you activate the external\n# ACL file, the server will refuse to start.\n#\n# The format of the external ACL user file is exactly the same as the\n# format that is used inside redis.conf to describe users.\n#\n# aclfile /etc/redis/sentinel-users.acl\n\n# requirepass <password>\n#\n# You can configure Sentinel itself to require a password, however when doing\n# so Sentinel will try to authenticate with the same password to all the\n# other Sentinels. So you need to configure all your Sentinels in a given\n# group with the same \"requirepass\" password. Check the following documentation\n# for more info: https://redis.io/docs/latest/operate/oss_and_stack/management/sentinel/\n#\n# IMPORTANT NOTE: starting with Redis 6.2 \"requirepass\" is a compatibility\n# layer on top of the ACL system. The only effect of the option will be setting\n# the password for the default user. Clients will still authenticate using\n# AUTH <password> as usual, or more explicitly with AUTH default <password>\n# if they follow the new protocol: both will work.\n#\n# New config files are advised to use separate authentication control for\n# incoming connections (via ACL), and for outgoing connections (via\n# sentinel-user and sentinel-pass).\n#\n# The requirepass is not compatible with the aclfile option and the ACL LOAD\n# command; these will cause requirepass to be ignored.\n\n# sentinel sentinel-user <username>\n#\n# You can configure Sentinel to authenticate with other Sentinels with a specific\n# username.\n\n# sentinel sentinel-pass <password>\n#\n# The password for Sentinel to authenticate with other Sentinels. If sentinel-user\n# is not configured, Sentinel will use the 'default' user with sentinel-pass to authenticate.\n\n# sentinel parallel-syncs <master-name> <numreplicas>\n#\n# How many replicas we can reconfigure to point to the new replica simultaneously\n# during the failover. 
Use a low number if you use the replicas to serve queries,\n# to avoid all the replicas being unreachable at about the same\n# time while performing the synchronization with the master.\nsentinel parallel-syncs mymaster 1\n\n# sentinel failover-timeout <master-name> <milliseconds>\n#\n# Specifies the failover timeout in milliseconds. It is used in many ways:\n#\n# - The time needed to re-start a failover after a previous failover was\n#   already tried against the same master by a given Sentinel, is two\n#   times the failover timeout.\n#\n# - The time needed for a replica replicating to a wrong master according\n#   to a Sentinel's current configuration, to be forced to replicate\n#   with the right master, is exactly the failover timeout (counting since\n#   the moment a Sentinel detected the misconfiguration).\n#\n# - The time needed to cancel a failover that is already in progress but\n#   did not produce any configuration change (SLAVEOF NO ONE not yet\n#   acknowledged by the promoted replica).\n#\n# - The maximum time a failover in progress waits for all the replicas to be\n#   reconfigured as replicas of the new master. However even after this time\n#   the replicas will be reconfigured by the Sentinels anyway, but not with\n#   the exact parallel-syncs progression as specified.\n#\n# Default is 3 minutes.\nsentinel failover-timeout mymaster 180000\n\n# SCRIPTS EXECUTION\n#\n# sentinel notification-script and sentinel reconfig-script are used in order\n# to configure scripts that are called to notify the system administrator\n# or to reconfigure clients after a failover. 
The scripts are executed\n# with the following rules for error handling:\n#\n# If the script exits with \"1\" the execution is retried later (up to a maximum\n# number of times currently set to 10).\n#\n# If the script exits with \"2\" (or a higher value) the script execution is\n# not retried.\n#\n# If the script terminates because it receives a signal the behavior is the same\n# as exit code 1.\n#\n# A script has a maximum running time of 60 seconds. After this limit is\n# reached the script is terminated with a SIGKILL and the execution retried.\n\n# NOTIFICATION SCRIPT\n#\n# sentinel notification-script <master-name> <script-path>\n# \n# Call the specified notification script for any sentinel event that is\n# generated in the WARNING level (for instance -sdown, -odown, and so forth).\n# This script should notify the system administrator via email, SMS, or any\n# other messaging system, that there is something wrong with the monitored\n# Redis systems.\n#\n# The script is called with just two arguments: the first is the event type\n# and the second the event description.\n#\n# The script must exist and be executable in order for sentinel to start if\n# this option is provided.\n#\n# Example:\n#\n# sentinel notification-script mymaster /var/redis/notify.sh\n\n# CLIENTS RECONFIGURATION SCRIPT\n#\n# sentinel client-reconfig-script <master-name> <script-path>\n#\n# When the master changes because of a failover a script can be called in\n# order to perform application-specific tasks to notify the clients that the\n# configuration has changed and the master is at a different address.\n# \n# The following arguments are passed to the script:\n#\n# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>\n#\n# <state> is currently always \"start\"\n# <role> is either \"leader\" or \"observer\"\n# \n# The arguments from-ip, from-port, to-ip, to-port are used to communicate\n# the old address of the master and the new address of the elected replica\n# (now a 
master).\n#\n# This script should be resistant to multiple invocations.\n#\n# Example:\n#\n# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh\n\n# SECURITY\n#\n# By default SENTINEL SET will not be able to change the notification-script\n# and client-reconfig-script at runtime. This avoids a trivial security issue\n# where clients can set the script to anything and trigger a failover in order\n# to get the program executed.\n\nsentinel deny-scripts-reconfig yes\n\n# REDIS COMMANDS RENAMING (DEPRECATED)\n#\n# WARNING: avoid using this option if possible, instead use ACLs.\n#\n# Sometimes the Redis server has certain commands, that are needed for Sentinel\n# to work correctly, renamed to unguessable strings. This is often the case\n# of CONFIG and SLAVEOF in the context of providers that provide Redis as\n# a service, and don't want the customers to reconfigure the instances outside\n# of the administration console.\n#\n# In such case it is possible to tell Sentinel to use different command names\n# instead of the normal ones. For example if the master \"mymaster\", and the\n# associated replicas, have \"CONFIG\" all renamed to \"GUESSME\", I could use:\n#\n# SENTINEL rename-command mymaster CONFIG GUESSME\n#\n# After such configuration is set, every time Sentinel would use CONFIG it will\n# use GUESSME instead. Note that there is no actual need to respect the command\n# case, so writing \"config guessme\" is the same in the example above.\n#\n# SENTINEL SET can also be used in order to perform this configuration at runtime.\n#\n# In order to set a command back to its original name (undo the renaming), it\n# is possible to just rename a command to itself:\n#\n# SENTINEL rename-command mymaster CONFIG CONFIG\n\n# HOSTNAMES SUPPORT\n#\n# Normally Sentinel uses only IP addresses and requires SENTINEL MONITOR\n# to specify an IP address. 
Also, it requires the Redis replica-announce-ip\n# keyword to specify only IP addresses.\n#\n# You may enable hostnames support by enabling resolve-hostnames. Note\n# that you must make sure your DNS is configured properly and that DNS\n# resolution does not introduce very long delays.\n#\nSENTINEL resolve-hostnames no\n\n# When resolve-hostnames is enabled, Sentinel still uses IP addresses\n# when exposing instances to users, configuration files, etc. If you want\n# to retain the hostnames when announced, enable announce-hostnames below.\n#\nSENTINEL announce-hostnames no\n\n# When master_reboot_down_after_period is set to 0, Sentinel does not fail over\n# when receiving a -LOADING response from a master. This was the only supported\n# behavior before version 7.0.\n#\n# Otherwise, Sentinel will use this value as the time (in ms) it is willing to\n# accept a -LOADING response after a master has been rebooted, before failing\n# over.\n\nSENTINEL master-reboot-down-after-period mymaster 0\n"
  },
  {
    "path": "src/.gitignore",
    "content": "*.gcda\n*.gcno\n*.gcov\nredis.info\nlcov-html\n"
  },
  {
    "path": "src/Makefile",
"content": "# Redis Makefile\n# Copyright (c) 2011-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# The Makefile composes the final FINAL_CFLAGS and FINAL_LDFLAGS using\n# what is needed for Redis plus the standard CFLAGS and LDFLAGS passed.\n# However, when building the dependencies (Jemalloc, Lua, Hiredis, ...)\n# CFLAGS and LDFLAGS are propagated to the dependencies, so to pass\n# flags only to be used when compiling / linking Redis itself, REDIS_CFLAGS\n# and REDIS_LDFLAGS are used instead (this is the case of 'make gcov').\n#\n# Dependencies are stored in the Makefile.dep file. To rebuild this file\n# just use 'make dep', but this is only needed by developers.\n\nrelease_hdr := $(shell sh -c './mkreleasehdr.sh')\nuname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')\nuname_M := $(shell sh -c 'uname -m 2>/dev/null || echo not')\nCLANG := $(findstring clang,$(shell sh -c '$(CC) --version | head -1'))\n\n# Optimization flags. To override, the OPTIMIZATION variable can be passed, but\n# some automatic defaults are added to it. 
To specify optimization flags\n# explicitly without any defaults added, pass the OPT variable instead.\nOPTIMIZATION?=-O3\nifeq ($(OPTIMIZATION),-O3)\n\tifeq (clang,$(CLANG))\n\t\tOPTIMIZATION+=-flto\n\telse\n\t\tOPTIMIZATION+=-flto=auto\n\tendif\nendif\nifneq ($(OPTIMIZATION),-O0)\n\tOPTIMIZATION+=-fno-omit-frame-pointer\nendif\nDEPENDENCY_TARGETS=hiredis linenoise lua hdr_histogram fpconv xxhash\nNODEPS:=clean distclean\n\n# Default settings\nSTD=-pedantic -DREDIS_STATIC=''\n\n# Use -Wno-c11-extensions on clang, either where explicitly used or on\n# platforms we can assume it's being used.\nifeq (clang,$(CLANG))\n  STD+=-Wno-c11-extensions\nelse\nifneq (,$(findstring FreeBSD,$(uname_S)))\n  STD+=-Wno-c11-extensions\nendif\nendif\nWARN=-Wall -W -Wno-missing-field-initializers -Werror=deprecated-declarations -Wstrict-prototypes\nOPT=$(OPTIMIZATION)\n\nSKIP_VEC_SETS?=no\n# Detect if the compiler supports C11 _Atomic.\n# NUMBER_SIGN_CHAR is a workaround to support both GNU Make 4.3 and older versions.\nNUMBER_SIGN_CHAR := \\#\nC11_ATOMIC := $(shell sh -c 'echo \"$(NUMBER_SIGN_CHAR)include <stdatomic.h>\" > foo.c; \\\n\t$(CC) -std=gnu11 -c foo.c -o foo.o > /dev/null 2>&1; \\\n\tif [ -f foo.o ]; then echo \"yes\"; rm foo.o; fi; rm foo.c')\nifeq ($(C11_ATOMIC),yes)\n\tSTD+=-std=gnu11\nelse\n\tSKIP_VEC_SETS=yes\n\tSTD+=-std=c99\nendif\n\nPREFIX?=/usr/local\nINSTALL_BIN=$(PREFIX)/bin\nINSTALL=install\nPKG_CONFIG?=pkg-config\n\nifndef PYTHON\nPYTHON := $(shell which python3 || which python)\nendif\n\n# Default allocator defaults to Jemalloc on Linux and libc otherwise\nMALLOC=libc\nifeq ($(uname_S),Linux)\n\tMALLOC=jemalloc\nendif\n\n# To get ARM stack traces if Redis crashes we need a special C flag.\nifneq (,$(filter aarch64 armv%,$(uname_M)))\n        CFLAGS+=-funwind-tables\nendif\n\n# Backwards compatibility for selecting an allocator\nifeq ($(USE_TCMALLOC),yes)\n\tMALLOC=tcmalloc\nendif\n\nifeq ($(USE_TCMALLOC_MINIMAL),yes)\n\tMALLOC=tcmalloc_minimal\nendif\n\nifeq 
($(USE_JEMALLOC),yes)\n\tMALLOC=jemalloc\nendif\n\nifeq ($(USE_JEMALLOC),no)\n\tMALLOC=libc\nendif\n\nifdef SANITIZER\nifeq ($(SANITIZER),address)\n\tMALLOC=libc\n\tCFLAGS+=-fsanitize=address -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tLDFLAGS+=-fsanitize=address\nelse\nifeq ($(SANITIZER),undefined)\n\tMALLOC=libc\n\tCFLAGS+=-fsanitize=undefined -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tLDFLAGS+=-fsanitize=undefined\nelse\nifeq ($(SANITIZER),thread)\n\tCFLAGS+=-fsanitize=thread -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tLDFLAGS+=-fsanitize=thread\nelse\nifeq ($(SANITIZER),memory)\nifeq (clang, $(CLANG))\n\texport CXX:=clang\n\texport LD:=clang\n\tMALLOC=libc # MSan provides its own allocator so make sure not to use jemalloc as they clash\n\tCFLAGS+=-fsanitize=memory -fsanitize-memory-track-origins=2 -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tLDFLAGS+=-fsanitize=memory\nelse\n    $(error \"MemorySanitizer needs to be compiled and linked with clang. 
Please use CC=clang\")\nendif\nelse\n    $(error \"unknown sanitizer=${SANITIZER}\")\nendif\nendif\nendif\nendif\nendif\n\n# Special case of forcing defrag to run even though we have no Jemalloc support\nifeq ($(DEBUG_DEFRAG), force)\n\tCFLAGS +=-DDEBUG_DEFRAG_FORCE\nelse ifeq ($(DEBUG_DEFRAG), fully)\n\tCFLAGS +=-DDEBUG_DEFRAG_FORCE -DDEBUG_DEFRAG_FULLY\nendif\n\n# Override default settings if possible\n-include .make-settings\n\nFINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(REDIS_CFLAGS)\nFINAL_LDFLAGS=$(LDFLAGS) $(OPT) $(REDIS_LDFLAGS) $(DEBUG)\nFINAL_LIBS=-lm\nDEBUG=-g -ggdb\n\n# Linux ARM32 needs -latomic at linking time\nifneq (,$(findstring armv,$(uname_M)))\n        FINAL_LIBS+=-latomic\nendif\n\nifeq ($(uname_S),SunOS)\n\t# SunOS\n\tifeq ($(findstring -m32,$(FINAL_CFLAGS)),)\n\t\tCFLAGS+=-m64\n\tendif\n\tifeq ($(findstring -m32,$(FINAL_LDFLAGS)),)\n\t\tLDFLAGS+=-m64\n\tendif\n\tDEBUG=-g\n\tDEBUG_FLAGS=-g\n\texport CFLAGS LDFLAGS DEBUG DEBUG_FLAGS\n\tINSTALL=cp -pf\n\tFINAL_CFLAGS+= -D__EXTENSIONS__ -D_XPG6\n\tFINAL_LIBS+= -ldl -lnsl -lsocket -lresolv -lpthread -lrt\n\tifeq ($(USE_BACKTRACE),yes)\n\t    FINAL_CFLAGS+= -DUSE_BACKTRACE\n\tendif\nelse\nifeq ($(uname_S),Darwin)\n\t# Darwin\n\tFINAL_LIBS+= -ldl\n\t# Homebrew's OpenSSL is not linked to /usr/local to avoid\n\t# conflicts with the system's LibreSSL installation so it\n\t# must be referenced explicitly during build.\nifeq ($(uname_M),arm64)\n\t# Homebrew arm64 uses /opt/homebrew as HOMEBREW_PREFIX\n\tOPENSSL_PREFIX?=/opt/homebrew/opt/openssl\nelse\n\t# Homebrew x86/ppc uses /usr/local as HOMEBREW_PREFIX\n\tOPENSSL_PREFIX?=/usr/local/opt/openssl\nendif\nelse\nifeq ($(uname_S),AIX)\n        # AIX\n        FINAL_LDFLAGS+= -Wl,-bexpall\n        FINAL_LIBS+=-ldl -pthread -lcrypt -lbsd\nelse\nifeq ($(uname_S),OpenBSD)\n\t# OpenBSD\n\tFINAL_LIBS+= -lpthread\n\tifeq ($(USE_BACKTRACE),yes)\n\t    FINAL_CFLAGS+= -DUSE_BACKTRACE -I/usr/local/include\n\t    FINAL_LDFLAGS+= -L/usr/local/lib\n\t    
FINAL_LIBS+= -lexecinfo\n    \tendif\n\nelse\nifeq ($(uname_S),NetBSD)\n\t# NetBSD\n\tFINAL_LIBS+= -lpthread\n\tifeq ($(USE_BACKTRACE),yes)\n\t    FINAL_CFLAGS+= -DUSE_BACKTRACE -I/usr/pkg/include\n\t    FINAL_LDFLAGS+= -L/usr/pkg/lib\n\t    FINAL_LIBS+= -lexecinfo\n    \tendif\nelse\nifeq ($(uname_S),FreeBSD)\n\t# FreeBSD\n\tFINAL_LIBS+= -lpthread -lexecinfo\nelse\nifeq ($(uname_S),DragonFly)\n\t# DragonFly\n\tFINAL_LIBS+= -lpthread -lexecinfo\nelse\nifeq ($(uname_S),OpenBSD)\n\t# OpenBSD\n\tFINAL_LIBS+= -lpthread -lexecinfo\nelse\nifeq ($(uname_S),NetBSD)\n\t# NetBSD\n\tFINAL_LIBS+= -lpthread -lexecinfo\nelse\nifeq ($(uname_S),Haiku)\n\t# Haiku\n\tFINAL_CFLAGS+= -DBSD_SOURCE\n\tFINAL_LDFLAGS+= -lbsd -lnetwork\n\tFINAL_LIBS+= -lpthread\nelse\n\t# All the other OSes (notably Linux)\n\tFINAL_LDFLAGS+= -rdynamic\n\tFINAL_LIBS+=-ldl -pthread -lrt\nendif\nendif\nendif\nendif\nendif\nendif\nendif\nendif\nendif\nendif\n\nifdef OPENSSL_PREFIX\n\tOPENSSL_CFLAGS=-I$(OPENSSL_PREFIX)/include\n\tOPENSSL_LDFLAGS=-L$(OPENSSL_PREFIX)/lib\n\t# Also export OPENSSL_PREFIX so it ends up in deps sub-Makefiles\n\texport OPENSSL_PREFIX\nendif\n\n# Include paths to dependencies\nFINAL_CFLAGS+= -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src -I../deps/hdr_histogram -I../deps/fpconv -I../deps/xxhash\n\n# Determine systemd support and/or build preference (defaulting to auto-detection)\nBUILD_WITH_SYSTEMD=no\nLIBSYSTEMD_LIBS=-lsystemd\n\n# If 'USE_SYSTEMD' in the environment is neither \"no\" nor \"yes\", try to\n# auto-detect libsystemd's presence and link accordingly.\nifneq ($(USE_SYSTEMD),no)\n\tLIBSYSTEMD_PKGCONFIG := $(shell $(PKG_CONFIG) --exists libsystemd && echo $$?)\n# If libsystemd cannot be detected, continue building without support for it\n# (unless a later check tells us otherwise)\nifeq ($(LIBSYSTEMD_PKGCONFIG),0)\n\tBUILD_WITH_SYSTEMD=yes\n\tLIBSYSTEMD_LIBS=$(shell $(PKG_CONFIG) --libs libsystemd)\nendif\nendif\n\n# If 'USE_SYSTEMD' is set to \"yes\" use pkg-config 
if available or fall back to\n# default -lsystemd.\nifeq ($(USE_SYSTEMD),yes)\n\tBUILD_WITH_SYSTEMD=yes\nendif\n\nifeq ($(BUILD_WITH_SYSTEMD),yes)\n\tFINAL_LIBS+=$(LIBSYSTEMD_LIBS)\n\tFINAL_CFLAGS+= -DHAVE_LIBSYSTEMD\nendif\n\nifeq ($(MALLOC),tcmalloc)\n\tFINAL_CFLAGS+= -DUSE_TCMALLOC\n\tFINAL_LIBS+= -ltcmalloc\nendif\n\nifeq ($(MALLOC),tcmalloc_minimal)\n\tFINAL_CFLAGS+= -DUSE_TCMALLOC\n\tFINAL_LIBS+= -ltcmalloc_minimal\nendif\n\nifeq ($(MALLOC),jemalloc)\n\tDEPENDENCY_TARGETS+= jemalloc\n\tFINAL_CFLAGS+= -DUSE_JEMALLOC -I../deps/jemalloc/include\n\tFINAL_LIBS := ../deps/jemalloc/lib/libjemalloc.a $(FINAL_LIBS)\nendif\n\n# LIBSSL & LIBCRYPTO\nLIBSSL_LIBS=\nLIBSSL_PKGCONFIG := $(shell $(PKG_CONFIG) --exists libssl && echo $$?)\nifeq ($(LIBSSL_PKGCONFIG),0)\n\tLIBSSL_LIBS=$(shell $(PKG_CONFIG) --libs libssl)\nelse\n\tLIBSSL_LIBS=-lssl\nendif\nLIBCRYPTO_LIBS=\nLIBCRYPTO_PKGCONFIG := $(shell $(PKG_CONFIG) --exists libcrypto && echo $$?)\nifeq ($(LIBCRYPTO_PKGCONFIG),0)\n\tLIBCRYPTO_LIBS=$(shell $(PKG_CONFIG) --libs libcrypto)\nelse\n\tLIBCRYPTO_LIBS=-lcrypto\nendif\n\nBUILD_NO:=0\nBUILD_YES:=1\nBUILD_MODULE:=2\nifeq ($(BUILD_TLS),yes)\n\tFINAL_CFLAGS+=-DUSE_OPENSSL=$(BUILD_YES) $(OPENSSL_CFLAGS) -DBUILD_TLS_MODULE=$(BUILD_NO)\n\tFINAL_LDFLAGS+=$(OPENSSL_LDFLAGS)\n\tFINAL_LIBS += ../deps/hiredis/libhiredis_ssl.a $(LIBSSL_LIBS) $(LIBCRYPTO_LIBS)\nendif\n\nTLS_MODULE=\nTLS_MODULE_NAME:=redis-tls$(PROG_SUFFIX).so\nTLS_MODULE_CFLAGS:=$(FINAL_CFLAGS)\nifeq ($(BUILD_TLS),module)\n\tFINAL_CFLAGS+=-DUSE_OPENSSL=$(BUILD_MODULE) $(OPENSSL_CFLAGS)\n\tTLS_CLIENT_LIBS = ../deps/hiredis/libhiredis_ssl.a $(LIBSSL_LIBS) $(LIBCRYPTO_LIBS)\n\tTLS_MODULE=$(TLS_MODULE_NAME)\n\tTLS_MODULE_CFLAGS+=-DUSE_OPENSSL=$(BUILD_MODULE) $(OPENSSL_CFLAGS) -DBUILD_TLS_MODULE=$(BUILD_MODULE)\nendif\n\nifneq ($(SKIP_VEC_SETS),yes)\n\tvpath %.c ../modules/vector-sets\n\tREDIS_VEC_SETS_OBJ=hnsw.o vset.o vset_config.o\n\tFINAL_CFLAGS+=-DINCLUDE_VEC_SETS=1\nendif\n\nifndef V\n    define MAKE_INSTALL\n        
@printf '    %b %b\\n' $(LINKCOLOR)INSTALL$(ENDCOLOR) $(BINCOLOR)$(1)$(ENDCOLOR) 1>&2\n        @$(INSTALL) $(1) $(2)\n    endef\nelse\n    define MAKE_INSTALL\n        $(INSTALL) $(1) $(2)\n    endef\nendif\n\nREDIS_CC=$(QUIET_CC)$(CC) $(FINAL_CFLAGS)\nREDIS_LD=$(QUIET_LINK)$(CC) $(FINAL_LDFLAGS)\nREDIS_INSTALL=$(QUIET_INSTALL)$(INSTALL)\n\nCCCOLOR=\"\\033[34m\"\nLINKCOLOR=\"\\033[34;1m\"\nSRCCOLOR=\"\\033[33m\"\nBINCOLOR=\"\\033[37;1m\"\nMAKECOLOR=\"\\033[32;1m\"\nENDCOLOR=\"\\033[0m\"\n\nifndef V\nQUIET_CC = @printf '    %b %b\\n' $(CCCOLOR)CC$(ENDCOLOR) $(SRCCOLOR)$@$(ENDCOLOR) 1>&2;\nQUIET_GEN = @printf '    %b %b\\n' $(CCCOLOR)GEN$(ENDCOLOR) $(SRCCOLOR)$@$(ENDCOLOR) 1>&2;\nQUIET_LINK = @printf '    %b %b\\n' $(LINKCOLOR)LINK$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR) 1>&2;\nQUIET_INSTALL = @printf '    %b %b\\n' $(LINKCOLOR)INSTALL$(ENDCOLOR) $(BINCOLOR)$@$(ENDCOLOR) 1>&2;\nendif\n\nifneq (, $(findstring LOG_REQ_RES, $(REDIS_CFLAGS)))\n\tCOMMANDS_DEF_FILENAME=commands_with_reply_schema\n\tGEN_COMMANDS_FLAGS=--with-reply-schema\nelse\n\tCOMMANDS_DEF_FILENAME=commands\n\tGEN_COMMANDS_FLAGS=\nendif\n\nREDIS_SERVER_NAME=redis-server$(PROG_SUFFIX)\nREDIS_SENTINEL_NAME=redis-sentinel$(PROG_SUFFIX)\nREDIS_SERVER_OBJ=threads_mngr.o memory_prefetch.o adlist.o quicklist.o ae.o anet.o dict.o ebuckets.o eventnotifier.o iothread.o mstr.o entry.o kvstore.o fwtree.o estore.o server.o sds.o zmalloc.o lzf_c.o lzf_d.o pqsort.o zipmap.o sha1.o ziplist.o release.o networking.o util.o object.o db.o replication.o rdb.o t_string.o t_list.o t_set.o t_zset.o t_hash.o config.o aof.o pubsub.o multi.o debug.o sort.o intset.o syncio.o cluster.o cluster_asm.o cluster_legacy.o cluster_slot_stats.o crc16.o endianconv.o slowlog.o eval.o bio.o rio.o rand.o memtest.o syscheck.o crcspeed.o crccombine.o crc64.o bitops.o sentinel.o notify.o setproctitle.o blocked.o hyperloglog.o latency.o sparkline.o redis-check-rdb.o redis-check-aof.o geo.o lazyfree.o module.o evict.o expire.o geohash.o geohash_helper.o 
childinfo.o defrag.o siphash.o rax.o t_stream.o listpack.o localtime.o lolwut.o lolwut5.o lolwut6.o lolwut8.o acl.o tracking.o socket.o tls.o sha256.o timeout.o setcpuaffinity.o monotonic.o mt19937-64.o resp_parser.o call_reply.o script_lua.o script.o functions.o function_lua.o commands.o strl.o connection.o unix.o logreqres.o keymeta.o chk.o hotkeys.o gcra.o vector.o fast_float_strtod.o\nREDIS_CLI_NAME=redis-cli$(PROG_SUFFIX)\nREDIS_CLI_OBJ=anet.o adlist.o dict.o redis-cli.o zmalloc.o release.o ae.o redisassert.o crcspeed.o crccombine.o crc64.o siphash.o crc16.o monotonic.o cli_common.o mt19937-64.o strl.o cli_commands.o\nREDIS_BENCHMARK_NAME=redis-benchmark$(PROG_SUFFIX)\nREDIS_BENCHMARK_OBJ=ae.o anet.o redis-benchmark.o adlist.o dict.o zmalloc.o redisassert.o release.o crcspeed.o crccombine.o crc64.o siphash.o crc16.o monotonic.o cli_common.o mt19937-64.o strl.o\nREDIS_CHECK_RDB_NAME=redis-check-rdb$(PROG_SUFFIX)\nREDIS_CHECK_AOF_NAME=redis-check-aof$(PROG_SUFFIX)\nALL_SOURCES=$(sort $(patsubst %.o,%.c,$(REDIS_SERVER_OBJ) $(REDIS_VEC_SETS_OBJ) $(REDIS_CLI_OBJ) $(REDIS_BENCHMARK_OBJ)))\n\nall: $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME) $(TLS_MODULE) module_tests\n\t@echo \"\"\n\t@echo \"Hint: It's a good idea to run 'make test' ;)\"\n\t@echo \"\"\n\nMakefile.dep:\n\t-$(REDIS_CC) -MM $(ALL_SOURCES) > Makefile.dep 2> /dev/null || true\n\nifeq (0, $(words $(findstring $(MAKECMDGOALS), $(NODEPS))))\n-include Makefile.dep\nendif\n\n.PHONY: all\n\nmodule_tests: $(REDIS_SERVER_NAME)\n\t$(MAKE) -C ../tests/modules\n\n.PHONY: module_tests\n\npersist-settings: distclean\n\techo STD=$(STD) >> .make-settings\n\techo WARN=$(WARN) >> .make-settings\n\techo OPT=$(OPT) >> .make-settings\n\techo MALLOC=$(MALLOC) >> .make-settings\n\techo BUILD_TLS=$(BUILD_TLS) >> .make-settings\n\techo USE_SYSTEMD=$(USE_SYSTEMD) >> .make-settings\n\techo CFLAGS=$(CFLAGS) >> .make-settings\n\techo 
LDFLAGS=$(LDFLAGS) >> .make-settings\n\techo REDIS_CFLAGS=$(REDIS_CFLAGS) >> .make-settings\n\techo REDIS_LDFLAGS=$(REDIS_LDFLAGS) >> .make-settings\n\techo PREV_FINAL_CFLAGS=$(FINAL_CFLAGS) >> .make-settings\n\techo PREV_FINAL_LDFLAGS=$(FINAL_LDFLAGS) >> .make-settings\n\t-(cd ../deps && $(MAKE) $(DEPENDENCY_TARGETS))\n\n.PHONY: persist-settings\n\n# Prerequisites target\n.make-prerequisites:\n\t@touch $@\n\n# Clean everything, persist settings and build dependencies if anything changed\nifneq ($(strip $(PREV_FINAL_CFLAGS)), $(strip $(FINAL_CFLAGS)))\n.make-prerequisites: persist-settings\nendif\n\nifneq ($(strip $(PREV_FINAL_LDFLAGS)), $(strip $(FINAL_LDFLAGS)))\n.make-prerequisites: persist-settings\nendif\n\n# redis-server\n$(REDIS_SERVER_NAME): $(REDIS_SERVER_OBJ) $(REDIS_VEC_SETS_OBJ)\n\t$(REDIS_LD) -o $@ $^ ../deps/hiredis/libhiredis.a ../deps/lua/src/liblua.a ../deps/hdr_histogram/libhdrhistogram.a ../deps/fpconv/libfpconv.a ../deps/xxhash/libxxhash.a $(FINAL_LIBS)\n\n# redis-sentinel\n$(REDIS_SENTINEL_NAME): $(REDIS_SERVER_NAME)\n\t$(REDIS_INSTALL) $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME)\n\n# redis-check-rdb\n$(REDIS_CHECK_RDB_NAME): $(REDIS_SERVER_NAME)\n\t$(REDIS_INSTALL) $(REDIS_SERVER_NAME) $(REDIS_CHECK_RDB_NAME)\n\n# redis-check-aof\n$(REDIS_CHECK_AOF_NAME): $(REDIS_SERVER_NAME)\n\t$(REDIS_INSTALL) $(REDIS_SERVER_NAME) $(REDIS_CHECK_AOF_NAME)\n\n# redis-tls.so\n$(TLS_MODULE_NAME): $(REDIS_SERVER_NAME)\n\t$(QUIET_CC)$(CC) -o $@ tls.c -shared -fPIC $(TLS_MODULE_CFLAGS) $(TLS_CLIENT_LIBS)\n\n# redis-cli\n$(REDIS_CLI_NAME): $(REDIS_CLI_OBJ)\n\t$(REDIS_LD) -o $@ $^ ../deps/hiredis/libhiredis.a ../deps/linenoise/linenoise.o ../deps/hdr_histogram/libhdrhistogram.a $(FINAL_LIBS) $(TLS_CLIENT_LIBS)\n\n# redis-benchmark\n$(REDIS_BENCHMARK_NAME): $(REDIS_BENCHMARK_OBJ)\n\t$(REDIS_LD) -o $@ $^ ../deps/hiredis/libhiredis.a ../deps/hdr_histogram/libhdrhistogram.a $(FINAL_LIBS) $(TLS_CLIENT_LIBS)\n\nDEP = $(REDIS_SERVER_OBJ:%.o=%.d) 
$(REDIS_VEC_SETS_OBJ:%.o=%.d) $(REDIS_CLI_OBJ:%.o=%.d) $(REDIS_BENCHMARK_OBJ:%.o=%.d)\n-include $(DEP)\n\n# Because the jemalloc.h header is generated as a part of the jemalloc build,\n# building it should complete before building any other object. Instead of\n# depending on a single artifact, build all dependencies first.\n%.o: %.c .make-prerequisites\n\t$(REDIS_CC) -MMD -o $@ -c $<\n\n# The following files are checked in and don't normally need to be rebuilt. They\n# are built only if python is available and their prereqs are modified.\nifneq (,$(PYTHON))\n$(COMMANDS_DEF_FILENAME).def: commands/*.json ../utils/generate-command-code.py\n\t$(QUIET_GEN)$(PYTHON) ../utils/generate-command-code.py $(GEN_COMMANDS_FLAGS)\n\nfmtargs.h: ../utils/generate-fmtargs.py\n\t$(QUIET_GEN)sed '/Everything below this line/,$$d' $@ > $@.tmp\n\t$(QUIET_GEN)$(PYTHON) ../utils/generate-fmtargs.py >> $@.tmp\n\t$(QUIET_GEN)mv $@.tmp $@\nendif\n\ncommands.c: $(COMMANDS_DEF_FILENAME).def\n\nclean:\n\trm -rf $(REDIS_SERVER_NAME) $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) $(REDIS_CHECK_RDB_NAME) $(REDIS_CHECK_AOF_NAME) *.o *.gcda *.gcno *.gcov redis.info lcov-html Makefile.dep *.so\n\trm -f $(DEP)\n\t-(cd ../tests/modules && $(MAKE) clean)\n\n.PHONY: clean\n\ndistclean: clean\n\t-(cd ../deps && $(MAKE) distclean)\n\t-(cd modules && $(MAKE) clean)\n\t-(cd ../tests/modules && $(MAKE) clean)\n\t-(rm -f .make-*)\n\n.PHONY: distclean\n\ntest: $(REDIS_SERVER_NAME) $(REDIS_CHECK_AOF_NAME) $(REDIS_CLI_NAME) $(REDIS_BENCHMARK_NAME) module_tests\n\t@(cd ..; ./runtest)\n\ntest-modules: $(REDIS_SERVER_NAME)\n\t@(cd ..; ./runtest-moduleapi)\n\ntest-sentinel: $(REDIS_SENTINEL_NAME) $(REDIS_CLI_NAME)\n\t@(cd ..; ./runtest-sentinel)\n\ntest-cluster: $(REDIS_SERVER_NAME) $(REDIS_CLI_NAME)\n\t@(cd ..; ./runtest-cluster)\n\ncheck: test\n\nlcov:\n\t@lcov --version\n\t$(MAKE) gcov\n\t@(set -e; cd ..; ./runtest)\n\t@geninfo -o redis.info .\n\t@genhtml --legend -o lcov-html 
redis.info\n\n.PHONY: lcov\n\nbench: $(REDIS_BENCHMARK_NAME)\n\t./$(REDIS_BENCHMARK_NAME)\n\n32bit:\n\t@echo \"\"\n\t@echo \"WARNING: if it fails under Linux you probably need to install libc6-dev-i386\"\n\t@echo \"\"\n\t$(MAKE) CFLAGS=\"-m32\" LDFLAGS=\"-m32\" SKIP_VEC_SETS=\"yes\"\n\ngcov:\n\t$(MAKE) REDIS_CFLAGS=\"-fprofile-arcs -ftest-coverage -DCOVERAGE_TEST\" REDIS_LDFLAGS=\"-fprofile-arcs -ftest-coverage\"\n\nnoopt:\n\t$(MAKE) OPTIMIZATION=\"-O0\"\n\nvalgrind:\n\t$(MAKE) OPTIMIZATION=\"-O0\" MALLOC=\"libc\"\n\nhelgrind:\n\t$(MAKE) OPTIMIZATION=\"-O0\" MALLOC=\"libc\" CFLAGS=\"-D__ATOMIC_VAR_FORCE_SYNC_MACROS\" REDIS_CFLAGS=\"-I/usr/local/include\" REDIS_LDFLAGS=\"-L/usr/local/lib\"\n\ninstall: all\n\t@mkdir -p $(INSTALL_BIN)\n\t$(call MAKE_INSTALL,$(REDIS_SERVER_NAME),$(INSTALL_BIN))\n\t$(call MAKE_INSTALL,$(REDIS_BENCHMARK_NAME),$(INSTALL_BIN))\n\t$(call MAKE_INSTALL,$(REDIS_CLI_NAME),$(INSTALL_BIN))\n\t@ln -sf $(REDIS_SERVER_NAME) $(INSTALL_BIN)/$(REDIS_CHECK_RDB_NAME)\n\t@ln -sf $(REDIS_SERVER_NAME) $(INSTALL_BIN)/$(REDIS_CHECK_AOF_NAME)\n\t@ln -sf $(REDIS_SERVER_NAME) $(INSTALL_BIN)/$(REDIS_SENTINEL_NAME)\n\nuninstall:\n\trm -f $(INSTALL_BIN)/{$(REDIS_SERVER_NAME),$(REDIS_BENCHMARK_NAME),$(REDIS_CLI_NAME),$(REDIS_CHECK_RDB_NAME),$(REDIS_CHECK_AOF_NAME),$(REDIS_SENTINEL_NAME)}\n"
  },
  {
    "path": "src/acl.c",
    "content": "/*\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"sha256.h\"\n#include <fcntl.h>\n#include <ctype.h>\n\n/* =============================================================================\n * Global state for ACLs\n * ==========================================================================*/\n\nrax *Users; /* Table mapping usernames to user structures. */\n\nuser *DefaultUser;  /* Global reference to the default user.\n                       Every new connection is associated to it, if no\n                       AUTH or HELLO is used to authenticate with a\n                       different user. */\n\nlist *UsersToLoad;  /* This is a list of users found in the configuration file\n                       that we'll need to load in the final stage of Redis\n                       initialization, after all the modules are already\n                       loaded. Every list element is a NULL terminated\n                       array of SDS pointers: the first is the user name,\n                       all the remaining pointers are ACL rules in the same\n                       format as ACLSetUser(). 
*/\nlist *ACLLog;       /* Our security log, the user is able to inspect that\n                       using the ACL LOG command .*/\n\nlong long ACLLogEntryCount = 0; /* Number of ACL log entries created */\n\nstatic rax *commandId = NULL; /* Command name to id mapping */\n\nstatic unsigned long nextid = 0; /* Next command id that has not been assigned */\n\n#define ACL_MAX_CATEGORIES 64 /* Maximum number of command categories  */\n\nstruct ACLCategoryItem {\n    char *name;\n    uint64_t flag;\n} ACLDefaultCommandCategories[] = { /* See redis.conf for details on each category. */\n    {\"keyspace\", ACL_CATEGORY_KEYSPACE},\n    {\"read\", ACL_CATEGORY_READ},\n    {\"write\", ACL_CATEGORY_WRITE},\n    {\"set\", ACL_CATEGORY_SET},\n    {\"sortedset\", ACL_CATEGORY_SORTEDSET},\n    {\"list\", ACL_CATEGORY_LIST},\n    {\"hash\", ACL_CATEGORY_HASH},\n    {\"string\", ACL_CATEGORY_STRING},\n    {\"bitmap\", ACL_CATEGORY_BITMAP},\n    {\"hyperloglog\", ACL_CATEGORY_HYPERLOGLOG},\n    {\"geo\", ACL_CATEGORY_GEO},\n    {\"stream\", ACL_CATEGORY_STREAM},\n    {\"pubsub\", ACL_CATEGORY_PUBSUB},\n    {\"admin\", ACL_CATEGORY_ADMIN},\n    {\"fast\", ACL_CATEGORY_FAST},\n    {\"slow\", ACL_CATEGORY_SLOW},\n    {\"blocking\", ACL_CATEGORY_BLOCKING},\n    {\"dangerous\", ACL_CATEGORY_DANGEROUS},\n    {\"connection\", ACL_CATEGORY_CONNECTION},\n    {\"transaction\", ACL_CATEGORY_TRANSACTION},\n    {\"scripting\", ACL_CATEGORY_SCRIPTING},\n    {\"ratelimit\", ACL_CATEGORY_RATE_LIMIT},\n    {NULL,0} /* Terminator. */\n};\n\nstatic struct ACLCategoryItem *ACLCommandCategories = NULL;\nstatic size_t nextCommandCategory = 0; /* Index of the next command category to be added */\n\n/* Implements the ability to add to the list of ACL categories at runtime. 
Since each ACL category\n * also requires a bit in the acl_categories flag, there is a limit to the number that can be added.\n * The new ACL categories occupy the remaining bits of the acl_categories flag, other than the bits\n * occupied by the default ACL command categories.\n *\n * The optional `flag` argument allows the assignment of the `acl_categories` flag bit to the ACL category.\n * When adding a new category, except for the default ACL command categories, this argument should be `0`\n * to allow the function to assign the next available `acl_categories` flag bit to the new ACL category.\n *\n * Returns 1 -> Added, 0 -> Failed (out of space).\n *\n * This function is present here to gain access to the ACLCommandCategories array and add a new ACL category.\n */\nint ACLAddCommandCategory(const char *name, uint64_t flag) {\n    if (nextCommandCategory >= ACL_MAX_CATEGORIES) return 0;\n    ACLCommandCategories[nextCommandCategory].name = zstrdup(name);\n    ACLCommandCategories[nextCommandCategory].flag = flag != 0 ? 
flag : (1ULL<<nextCommandCategory);\n    nextCommandCategory++;\n    return 1;\n}\n\n/* Initializes ACLCommandCategories with default ACL categories and allocates space for \n * new ACL categories.\n */\nvoid ACLInitCommandCategories(void) {\n    ACLCommandCategories = zcalloc(sizeof(struct ACLCategoryItem) * (ACL_MAX_CATEGORIES + 1));\n    for (int j = 0; ACLDefaultCommandCategories[j].flag; j++) {\n        serverAssert(ACLAddCommandCategory(ACLDefaultCommandCategories[j].name, ACLDefaultCommandCategories[j].flag));\n    }\n}\n\n/* This function removes the specified number of categories from the trailing end of\n * the `ACLCommandCategories` array.\n * The purpose of this is to remove the categories added by modules that fail\n * during the onload function.\n */\nvoid ACLCleanupCategoriesOnFailure(size_t num_acl_categories_added) {\n    for (size_t j = nextCommandCategory - num_acl_categories_added; j < nextCommandCategory; j++) {\n        zfree(ACLCommandCategories[j].name);\n        ACLCommandCategories[j].name = NULL;\n        ACLCommandCategories[j].flag = 0;\n    }\n    nextCommandCategory -= num_acl_categories_added;\n}\n\nstruct ACLUserFlag {\n    const char *name;\n    uint64_t flag;\n} ACLUserFlags[] = {\n    /* Note: the order here dictates the emitted order at ACLDescribeUser */\n    {\"on\", USER_FLAG_ENABLED},\n    {\"off\", USER_FLAG_DISABLED},\n    {\"nopass\", USER_FLAG_NOPASS},\n    {\"skip-sanitize-payload\", USER_FLAG_SANITIZE_PAYLOAD_SKIP},\n    {\"sanitize-payload\", USER_FLAG_SANITIZE_PAYLOAD},\n    {NULL,0} /* Terminator. */\n};\n\nstruct ACLSelectorFlags {\n    const char *name;\n    uint64_t flag;\n} ACLSelectorFlags[] = {\n    /* Note: the order here dictates the emitted order at ACLDescribeUser */\n    {\"allkeys\", SELECTOR_FLAG_ALLKEYS},\n    {\"allchannels\", SELECTOR_FLAG_ALLCHANNELS},\n    {\"allcommands\", SELECTOR_FLAG_ALLCOMMANDS},\n    {NULL,0} /* Terminator. 
*/\n};\n\n/* ACL selectors are private and not exposed outside of acl.c. */\ntypedef struct {\n    uint32_t flags; /* See SELECTOR_FLAG_* */\n    /* The bit in allowed_commands is set if this user has the right to\n     * execute this command.\n     *\n     * If the bit for a given command is NOT set and the command has\n     * allowed first-args, Redis will also check allowed_firstargs in order to\n     * understand if the command can be executed. */\n    uint64_t allowed_commands[USER_COMMAND_BITS_COUNT/64];\n    /* allowed_firstargs is used by ACL rules to block access to a command unless a\n     * specific argv[1] is given.\n     *\n     * For each command ID (corresponding to the command bit set in allowed_commands),\n     * This array points to an array of SDS strings, terminated by a NULL pointer,\n     * with all the first-args that are allowed for this command. When no first-arg\n     * matching is used, the field is just set to NULL to avoid allocating\n     * USER_COMMAND_BITS_COUNT pointers. */\n    sds **allowed_firstargs;\n    list *patterns;  /* A list of allowed key patterns. If this field is NULL\n                        the user cannot mention any key in a command, unless\n                        the flag ALLKEYS is set in the user. */\n    list *channels;  /* A list of allowed Pub/Sub channel patterns. If this\n                        field is NULL the user cannot mention any channel in a\n                        `PUBLISH` or [P][UNSUBSCRIBE] command, unless the flag\n                        ALLCHANNELS is set in the user. */\n    sds command_rules; /* A string representation of the ordered categories and commands, this\n                        * is used to regenerate the original ACL string for display. 
*/\n} aclSelector;\n\nvoid ACLResetFirstArgsForCommand(aclSelector *selector, unsigned long id);\nvoid ACLResetFirstArgs(aclSelector *selector);\nvoid ACLAddAllowedFirstArg(aclSelector *selector, unsigned long id, const char *sub);\nvoid ACLFreeLogEntry(void *le);\nint ACLSetSelector(aclSelector *selector, const char *op, size_t oplen);\n\n/* The length of the string representation of a hashed password. */\n#define HASH_PASSWORD_LEN (SHA256_BLOCK_SIZE*2)\n\n/* =============================================================================\n * Helper functions for the rest of the ACL implementation\n * ==========================================================================*/\n\n/* Return zero if strings are the same, non-zero if they are not.\n * The comparison is performed in a way that prevents an attacker from\n * obtaining information about the nature of the strings just by monitoring\n * the execution time of the function. Note: The two strings must be the\n * same length.\n */\nint time_independent_strcmp(char *a, char *b, int len) {\n    int diff = 0;\n    for (int j = 0; j < len; j++) {\n        diff |= (a[j] ^ b[j]);\n    }\n    return diff; /* If zero, the strings are the same. */\n}\n\n/* Given an SDS string, returns the SHA256 hex representation as a\n * new SDS string. */\nsds ACLHashPassword(unsigned char *cleartext, size_t len) {\n    SHA256_CTX ctx;\n    unsigned char hash[SHA256_BLOCK_SIZE];\n    char hex[HASH_PASSWORD_LEN];\n    char *cset = \"0123456789abcdef\";\n\n    sha256_init(&ctx);\n    sha256_update(&ctx,(unsigned char*)cleartext,len);\n    sha256_final(&ctx,hash);\n\n    for (int j = 0; j < SHA256_BLOCK_SIZE; j++) {\n        hex[j*2] = cset[((hash[j]&0xF0)>>4)];\n        hex[j*2+1] = cset[(hash[j]&0xF)];\n    }\n    return sdsnewlen(hex,HASH_PASSWORD_LEN);\n}\n\n/* Given a hash and the hash length, returns C_OK if it is a valid password\n * hash, or C_ERR otherwise. 
*/\nint ACLCheckPasswordHash(unsigned char *hash, int hashlen) {\n    if (hashlen != HASH_PASSWORD_LEN) {\n        return C_ERR;\n    }\n\n    /* Password hashes can only be characters that represent\n     * hexadecimal values, which are numbers and lowercase\n     * characters 'a' through 'f'. */\n    for(int i = 0; i < HASH_PASSWORD_LEN; i++) {\n        char c = hash[i];\n        if ((c < 'a' || c > 'f') && (c < '0' || c > '9')) {\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\n/* =============================================================================\n * Low level ACL API\n * ==========================================================================*/\n\n/* Return 1 if the specified string contains spaces or null characters.\n * We do this for usernames and key patterns for simpler rewriting of\n * ACL rules, presentation on ACL list, and to avoid subtle security bugs\n * that may arise from parsing the rules in presence of escapes.\n * The function returns 0 if the string has no spaces. */\nint ACLStringHasSpaces(const char *s, size_t len) {\n    for (size_t i = 0; i < len; i++) {\n        if (isspace(s[i]) || s[i] == 0) return 1;\n    }\n    return 0;\n}\n\n/* Given the category name the command returns the corresponding flag, or\n * zero if there is no match. */\nuint64_t ACLGetCommandCategoryFlagByName(const char *name) {\n    for (int j = 0; ACLCommandCategories[j].flag != 0; j++) {\n        if (!strcasecmp(name,ACLCommandCategories[j].name)) {\n            return ACLCommandCategories[j].flag;\n        }\n    }\n    return 0; /* No match. */\n}\n\n/* Method for searching for a user within a list of user definitions. The\n * list contains an array of user arguments, and we are only\n * searching the first argument, the username, for a match. 
*/\nint ACLListMatchLoadedUser(void *definition, void *user) {\n    sds *user_definition = definition;\n    return sdscmp(user_definition[0], user) == 0;\n}\n\n/* Method for passwords/pattern comparison used for the user->passwords list\n * so that we can search for items with listSearchKey(). */\nint ACLListMatchSds(void *a, void *b) {\n    return sdscmp(a,b) == 0;\n}\n\n/* Method to free list elements from ACL users password/patterns lists. */\nvoid ACLListFreeSds(void *item) {\n    sdsfreegeneric(item);\n}\n\n/* Method to duplicate list elements from ACL users password/patterns lists. */\nvoid *ACLListDupSds(void *item) {\n    return sdsdup(item);\n}\n\n/* Structure used for handling key patterns with different key\n * based permissions. */\ntypedef struct {\n    int flags; /* The ACL key permission types for this key pattern */\n    sds pattern; /* The pattern to match keys against */\n} keyPattern;\n\n/* Create a new key pattern. */\nkeyPattern *ACLKeyPatternCreate(sds pattern, int flags) {\n    keyPattern *new = (keyPattern *) zmalloc(sizeof(keyPattern));\n    new->pattern = pattern;\n    new->flags = flags;\n    return new;\n}\n\n/* Free a key pattern and internal structures. */\nvoid ACLKeyPatternFree(keyPattern *pattern) {\n    sdsfree(pattern->pattern);\n    zfree(pattern);\n}\n\n/* Method for passwords/pattern comparison used for the user->passwords list\n * so that we can search for items with listSearchKey(). */\nint ACLListMatchKeyPattern(void *a, void *b) {\n    return sdscmp(((keyPattern *) a)->pattern,((keyPattern *) b)->pattern) == 0;\n}\n\n/* Method to free list elements from ACL users password/patterns lists. */\nvoid ACLListFreeKeyPattern(void *item) {\n    ACLKeyPatternFree(item);\n}\n\n/* Method to duplicate list elements from ACL users password/patterns lists. 
*/\nvoid *ACLListDupKeyPattern(void *item) {\n    keyPattern *old = (keyPattern *) item;\n    return ACLKeyPatternCreate(sdsdup(old->pattern), old->flags);\n}\n\n/* Append the string representation of a key pattern onto the\n * provided base string. */\nsds sdsCatPatternString(sds base, keyPattern *pat) {\n    if (pat->flags == ACL_ALL_PERMISSION) {\n        base = sdscatlen(base,\"~\",1);\n    } else if (pat->flags == ACL_READ_PERMISSION) {\n        base = sdscatlen(base,\"%R~\",3);\n    } else if (pat->flags == ACL_WRITE_PERMISSION) {\n        base = sdscatlen(base,\"%W~\",3);\n    } else {\n        serverPanic(\"Invalid key pattern flag detected\");\n    }\n    return sdscatsds(base, pat->pattern);\n}\n\n/* Create an empty selector with the provided set of initial\n * flags. By default the selector will have no permissions. */\naclSelector *ACLCreateSelector(int flags) {\n    aclSelector *selector = zmalloc(sizeof(aclSelector));\n    selector->flags = flags | server.acl_pubsub_default;\n    selector->patterns = listCreate();\n    selector->channels = listCreate();\n    selector->allowed_firstargs = NULL;\n    selector->command_rules = sdsempty();\n\n    listSetMatchMethod(selector->patterns,ACLListMatchKeyPattern);\n    listSetFreeMethod(selector->patterns,ACLListFreeKeyPattern);\n    listSetDupMethod(selector->patterns,ACLListDupKeyPattern);\n    listSetMatchMethod(selector->channels,ACLListMatchSds);\n    listSetFreeMethod(selector->channels,ACLListFreeSds);\n    listSetDupMethod(selector->channels,ACLListDupSds);\n    memset(selector->allowed_commands,0,sizeof(selector->allowed_commands));\n\n    return selector;\n}\n\n/* Cleanup the provided selector, including all interior structures. */\nvoid ACLFreeSelector(aclSelector *selector) {\n    listRelease(selector->patterns);\n    listRelease(selector->channels);\n    sdsfree(selector->command_rules);\n    ACLResetFirstArgs(selector);\n    zfree(selector);\n}\n\n/* Create an exact copy of the provided selector. 
*/\naclSelector *ACLCopySelector(aclSelector *src) {\n    aclSelector *dst = zmalloc(sizeof(aclSelector));\n    dst->flags = src->flags;\n    dst->patterns = listDup(src->patterns);\n    dst->channels = listDup(src->channels);\n    dst->command_rules = sdsdup(src->command_rules);\n    memcpy(dst->allowed_commands,src->allowed_commands,\n           sizeof(dst->allowed_commands));\n    dst->allowed_firstargs = NULL;\n    /* Copy the allowed first-args array of arrays of SDS strings. */\n    if (src->allowed_firstargs) {\n        for (int j = 0; j < USER_COMMAND_BITS_COUNT; j++) {\n            if (!(src->allowed_firstargs[j])) continue;\n            for (int i = 0; src->allowed_firstargs[j][i]; i++) {\n                ACLAddAllowedFirstArg(dst, j, src->allowed_firstargs[j][i]);\n            }\n        }\n    }\n    return dst;\n}\n\n/* List method for freeing a selector */\nvoid ACLListFreeSelector(void *a) {\n    ACLFreeSelector((aclSelector *) a);\n}\n\n/* List method for duplicating a selector */\nvoid *ACLListDuplicateSelector(void *src) {\n    return ACLCopySelector((aclSelector *)src);\n}\n\n/* All users have an implicit root selector which\n * provides backwards compatibility with the old ACL\n * permissions. */\naclSelector *ACLUserGetRootSelector(user *u) {\n    serverAssert(listLength(u->selectors));\n    aclSelector *s = (aclSelector *) listNodeValue(listFirst(u->selectors));\n    serverAssert(s->flags & SELECTOR_FLAG_ROOT);\n    return s;\n}\n\n/* Create a new user with the specified name, store it in the list\n * of users (the Users global radix tree), and return a reference to\n * the structure representing the user.\n *\n * If a user with that name already exists, NULL is returned. 
*/\nuser *ACLCreateUser(const char *name, size_t namelen) {\n    if (raxFind(Users,(unsigned char*)name,namelen,NULL)) return NULL;\n    user *u = zmalloc(sizeof(*u));\n    u->name = sdsnewlen(name,namelen);\n    atomicSet(u->flags, USER_FLAG_DISABLED | USER_FLAG_SANITIZE_PAYLOAD);\n    u->passwords = listCreate();\n    u->acl_string = NULL;\n    listSetMatchMethod(u->passwords,ACLListMatchSds);\n    listSetFreeMethod(u->passwords,ACLListFreeSds);\n    listSetDupMethod(u->passwords,ACLListDupSds);\n\n    u->selectors = listCreate();\n    listSetFreeMethod(u->selectors,ACLListFreeSelector);\n    listSetDupMethod(u->selectors,ACLListDuplicateSelector);\n\n    /* Add the initial root selector */\n    aclSelector *s = ACLCreateSelector(SELECTOR_FLAG_ROOT);\n    listAddNodeHead(u->selectors, s);\n\n    raxInsert(Users,(unsigned char*)name,namelen,u,NULL);\n    return u;\n}\n\n/* This function should be called when we need an unlinked \"fake\" user\n * we can use in order to validate ACL rules or for other similar reasons.\n * The user will not get linked to the Users radix tree. The returned\n * user should be released with ACLFreeUser() as usual. */\nuser *ACLCreateUnlinkedUser(void) {\n    char username[64];\n    for (int j = 0; ; j++) {\n        snprintf(username,sizeof(username),\"__fakeuser:%d__\",j);\n        user *fakeuser = ACLCreateUser(username,strlen(username));\n        if (fakeuser == NULL) continue;\n        int retval = raxRemove(Users,(unsigned char*) username,\n                               strlen(username),NULL);\n        serverAssert(retval != 0);\n        return fakeuser;\n    }\n}\n\n/* Release the memory used by the user structure. Note that this function\n * will not remove the user from the Users global radix tree. 
*/\nvoid ACLFreeUser(user *u) {\n    sdsfree(u->name);\n    if (u->acl_string) {\n        decrRefCount(u->acl_string);\n        u->acl_string = NULL;\n    }\n    listRelease(u->passwords);\n    listRelease(u->selectors);\n    zfree(u);\n}\n\n/* Generic version of ACLFreeUser. */\nvoid ACLFreeUserGeneric(void *u) {\n    ACLFreeUser((user *)u);\n}\n\n/* When a user is deleted we need to cycle the active\n * connections in order to kill all the pending ones that\n * are authenticated as that user. */\nvoid ACLFreeUserAndKillClients(user *u) {\n    listIter li;\n    listNode *ln;\n    listRewind(server.clients,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n        if (c->user == u) {\n            /* We'll free the connection asynchronously, so\n             * in theory setting a different user is not needed.\n             * However, if there are bugs in Redis, sooner or later\n             * this may result in some security hole: it's much\n             * more defensive to set the default user and put\n             * it in non-authenticated mode. */\n            deauthenticateAndCloseClient(c);\n        }\n    }\n    ACLFreeUser(u);\n}\n\n/* Copy the user ACL rules from the source user 'src' to the destination\n * user 'dst' so that at the end of the process they'll have exactly the\n * same rules (but the names will continue to be the original ones). 
*/\nvoid ACLCopyUser(user *dst, user *src) {\n    listRelease(dst->passwords);\n    listRelease(dst->selectors);\n    dst->passwords = listDup(src->passwords);\n    dst->selectors = listDup(src->selectors);\n    dst->flags = src->flags;\n    if (dst->acl_string) {\n        decrRefCount(dst->acl_string);\n    }\n    dst->acl_string = src->acl_string;\n    if (dst->acl_string) {\n        /* If src->acl_string was NULL, dst->acl_string is now NULL as well;\n         * otherwise we need to increment its reference count. */\n        incrRefCount(dst->acl_string);\n    }\n}\n\n/* Given a command ID, this function sets by reference 'word' and 'bit'\n * so that user->allowed_commands[word] will address the right word\n * where the corresponding bit for the provided ID is stored, and\n * so that user->allowed_commands[word]&bit will identify that specific\n * bit. The function returns C_ERR in case the specified ID overflows\n * the bitmap in the user representation. */\nint ACLGetCommandBitCoordinates(uint64_t id, uint64_t *word, uint64_t *bit) {\n    if (id >= USER_COMMAND_BITS_COUNT) return C_ERR;\n    *word = id / sizeof(uint64_t) / 8;\n    *bit = 1ULL << (id % (sizeof(uint64_t) * 8));\n    return C_OK;\n}\n\n/* Check if the specified command bit is set for the specified user.\n * The function returns 1 if the bit is set or 0 if it is not.\n * Note that this function does not check the ALLCOMMANDS flag of the user\n * but just the low-level bitmask.\n *\n * If the bit overflows the user internal representation, zero is returned\n * in order to disallow the execution of the command in such edge case. 
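For illustration, the word/bit arithmetic used by ACLGetCommandBitCoordinates() can be sketched standalone (hypothetical helper, without the overflow check against USER_COMMAND_BITS_COUNT):

```c
#include <stdint.h>

// A command id selects one bit inside an array of 64-bit words:
// the word index is id / 64 and the mask is 1 shifted left by id mod 64.
static void bit_coordinates(uint64_t id, uint64_t *word, uint64_t *bit) {
    *word = id / 64;           // same as id / sizeof(uint64_t) / 8
    *bit = 1ULL << (id % 64);  // mask for the bit inside that word
}
```

So command id 70 lives in word 1 (the second word) at bit position 6.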
*/\nint ACLGetSelectorCommandBit(const aclSelector *selector, unsigned long id) {\n    uint64_t word, bit;\n    if (ACLGetCommandBitCoordinates(id,&word,&bit) == C_ERR) return 0;\n    return (selector->allowed_commands[word] & bit) != 0;\n}\n\n/* When +@all or allcommands is given, we set a reserved bit as well that we\n * can later test, to see if the user has the right to execute \"future commands\",\n * that is, commands loaded later via modules. */\nint ACLSelectorCanExecuteFutureCommands(aclSelector *selector) {\n    return ACLGetSelectorCommandBit(selector,USER_COMMAND_BITS_COUNT-1);\n}\n\n/* Set the specified command bit for the specified user to 'value' (0 or 1).\n * If the bit overflows the user internal representation, no operation\n * is performed. As a side effect of calling this function with a value of\n * zero, the user flag ALLCOMMANDS is cleared since it is no longer possible\n * to skip the command bit explicit test. */\nvoid ACLSetSelectorCommandBit(aclSelector *selector, unsigned long id, int value) {\n    uint64_t word, bit;\n    if (ACLGetCommandBitCoordinates(id,&word,&bit) == C_ERR) return;\n    if (value) {\n        selector->allowed_commands[word] |= bit;\n    } else {\n        selector->allowed_commands[word] &= ~bit;\n        selector->flags &= ~SELECTOR_FLAG_ALLCOMMANDS;\n    }\n}\n\n/* Remove a rule from the retained command rules. Always match rules\n * verbatim, but also remove subcommand rules if we are adding or removing the \n * entire command. */\nvoid ACLSelectorRemoveCommandRule(aclSelector *selector, sds new_rule) {\n    size_t new_len = sdslen(new_rule);\n    char *existing_rule = selector->command_rules;\n\n    /* Loop over the existing rules, trying to find a rule that \"matches\"\n     * the new rule. If we find a match, then remove the command from the string by\n     * copying the later rules over it. */\n    while(existing_rule[0]) {\n        /* The first character of the rule is +/-, which we don't need to compare. 
*/\n        char *copy_position = existing_rule;\n        existing_rule += 1;\n\n        /* Assume a trailing space after a command is part of the command, like '+get ', so trim it\n         * as well if the command is removed. */\n        char *rule_end = strchr(existing_rule, ' ');\n        if (!rule_end) {\n            /* This is the last rule, so move it to the end of the string. */\n            rule_end = existing_rule + strlen(existing_rule);\n\n            /* This approach can leave a trailing space if the last rule is removed,\n             * but only if it's not the first rule, so handle that case. */\n            if (copy_position != selector->command_rules) copy_position -= 1;\n        }\n        char *copy_end = rule_end;\n        if (*copy_end == ' ') copy_end++;\n\n        /* Exact match or the rule we are comparing is a subcommand denoted by '|' */\n        size_t existing_len = rule_end - existing_rule;\n        if (!memcmp(existing_rule, new_rule, min(existing_len, new_len))) {\n            if ((existing_len == new_len) || (existing_len > new_len && (existing_rule[new_len]) == '|')) {\n                /* Copy the remaining rules starting at the next rule to replace the rule to be\n                 * deleted, including the terminating NULL character. */\n                memmove(copy_position, copy_end, strlen(copy_end) + 1);\n                existing_rule = copy_position;\n                continue;\n            }\n        }\n        existing_rule = copy_end;\n    }\n\n    /* There is now extra padding at the end of the rules, so clean that up. */\n    sdsupdatelen(selector->command_rules);\n}\n\n/* This function is responsible for updating the command_rules struct so that relative ordering of\n * commands and categories is maintained and can be reproduced without loss. 
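The remove-then-append maintenance above keeps the rule string as a space-separated list where the last writer wins. A simplified standalone sketch of the removal step (hypothetical helper using plain C strings, matching only exact "+cmd"/"-cmd" tokens rather than the full subcommand logic):

```c
#include <string.h>

// Remove the first token whose name (after the '+' or '-' prefix)
// equals cmd, copying the tail of the string over it with memmove,
// and eating the adjacent separator space.
static void remove_rule_token(char *rules, const char *cmd) {
    size_t cmdlen = strlen(cmd);
    char *p = rules;
    while (*p) {
        char *start = p;                       // points at '+' or '-'
        char *end = strchr(p + 1, ' ');
        if (!end) end = p + strlen(p);
        size_t len = (size_t)(end - (p + 1));  // token length without prefix
        if (len == cmdlen && memcmp(p + 1, cmd, cmdlen) == 0) {
            if (*end == ' ') end++;            // eat the trailing separator
            else if (start != rules) start--;  // or the space before the last token
            memmove(start, end, strlen(end) + 1);
            return;
        }
        p = (*end == ' ') ? end + 1 : end;
    }
}
```

The real function additionally removes "+cmd|sub" rules when "cmd" itself is added or removed, and relies on sdsupdatelen() to fix the SDS length afterwards.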
*/\nvoid ACLUpdateCommandRules(aclSelector *selector, const char *rule, int allow) {\n    sds new_rule = sdsnew(rule);\n    sdstolower(new_rule);\n\n    ACLSelectorRemoveCommandRule(selector, new_rule);\n    if (sdslen(selector->command_rules)) selector->command_rules = sdscat(selector->command_rules, \" \");\n    selector->command_rules = sdscatfmt(selector->command_rules, allow ? \"+%S\" : \"-%S\", new_rule);\n    sdsfree(new_rule);\n}\n\n/* This function is used to allow/block a specific command.\n * Allowing/blocking a container command also applies to its subcommands */\nvoid ACLChangeSelectorPerm(aclSelector *selector, struct redisCommand *cmd, int allow) {\n    unsigned long id = cmd->id;\n    ACLSetSelectorCommandBit(selector,id,allow);\n    ACLResetFirstArgsForCommand(selector,id);\n    if (cmd->subcommands_dict) {\n        dictEntry *de;\n        dictIterator di;\n        dictInitSafeIterator(&di, cmd->subcommands_dict);\n        while((de = dictNext(&di)) != NULL) {\n            struct redisCommand *sub = (struct redisCommand *)dictGetVal(de);\n            ACLSetSelectorCommandBit(selector,sub->id,allow);\n        }\n        dictResetIterator(&di);\n    }\n}\n\n/* This is like ACLSetSelectorCommandBit(), but instead of setting the specified\n * ID, it will check all the commands in the category specified as argument,\n * and will set all the bits corresponding to such commands to the specified\n * value. The category specified by the user may be non-existent: that lookup\n * is handled by the caller, ACLSetSelectorCategory(), which returns C_ERR\n * if the category was not found, or C_OK if it was found and the operation\n * was performed. 
*/\nvoid ACLSetSelectorCommandBitsForCategory(dict *commands, aclSelector *selector, uint64_t cflag, int value) {\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (cmd->acl_categories & cflag) {\n            ACLChangeSelectorPerm(selector,cmd,value);\n        }\n        if (cmd->subcommands_dict) {\n            ACLSetSelectorCommandBitsForCategory(cmd->subcommands_dict, selector, cflag, value);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* This function is responsible for recomputing the command bits for all selectors of the existing users.\n * It uses the 'command_rules', a string representation of the ordered categories and commands, \n * to recompute the command bits. */\nvoid ACLRecomputeCommandBitsFromCommandRulesAllUsers(void) {\n    raxIterator ri;\n    raxStart(&ri,Users);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        user *u = ri.data;\n        listIter li;\n        listNode *ln;\n        listRewind(u->selectors,&li);\n        while((ln = listNext(&li))) {\n            aclSelector *selector = (aclSelector *) listNodeValue(ln);\n            int argc = 0;\n            sds *argv = sdssplitargs(selector->command_rules, &argc);\n            serverAssert(argv != NULL);\n            /* Checking selector's permissions for all commands to start with a clean state. */\n            if (ACLSelectorCanExecuteFutureCommands(selector)) {\n                int res = ACLSetSelector(selector,\"+@all\",-1);\n                serverAssert(res == C_OK);\n            } else {\n                int res = ACLSetSelector(selector,\"-@all\",-1);\n                serverAssert(res == C_OK);\n            }\n\n            /* Apply all of the commands and categories to this selector. 
*/\n            for(int i = 0; i < argc; i++) {\n                int res = ACLSetSelector(selector, argv[i], sdslen(argv[i]));\n                serverAssert(res == C_OK);\n            }\n            sdsfreesplitres(argv, argc);\n        }\n    }\n    raxStop(&ri);\n}\n\nint ACLSetSelectorCategory(aclSelector *selector, const char *category, int allow) {\n    uint64_t cflag = ACLGetCommandCategoryFlagByName(category + 1);\n    if (!cflag) return C_ERR;\n\n    ACLUpdateCommandRules(selector, category, allow);\n\n    /* Set the actual command bits on the selector. */\n    ACLSetSelectorCommandBitsForCategory(server.orig_commands, selector, cflag, allow);\n    return C_OK;\n}\n\nvoid ACLCountCategoryBitsForCommands(dict *commands, aclSelector *selector, unsigned long *on, unsigned long *off, uint64_t cflag) {\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (cmd->acl_categories & cflag) {\n            if (ACLGetSelectorCommandBit(selector,cmd->id))\n                (*on)++;\n            else\n                (*off)++;\n        }\n        if (cmd->subcommands_dict) {\n            ACLCountCategoryBitsForCommands(cmd->subcommands_dict, selector, on, off, cflag);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* Return the number of commands allowed (on) and denied (off) for the given\n * selector in the subset of commands flagged with the specified category name.\n * If the category name is not valid, C_ERR is returned, otherwise C_OK is\n * returned and on and off are populated by reference. 
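The on/off counting walked through above reduces to testing, for each command id in a category, whether its bit is set in the selector's bitmap. A standalone sketch (hypothetical helper operating on a plain word array instead of the Redis dict traversal):

```c
#include <stdint.h>
#include <stddef.h>

// Given a command bitmap and the ids belonging to some category,
// count how many of those commands are allowed (bit set) vs denied.
static void count_category_bits(const uint64_t *bitmap,
                                const unsigned long *ids, size_t nids,
                                unsigned long *on, unsigned long *off) {
    *on = *off = 0;
    for (size_t i = 0; i < nids; i++) {
        uint64_t word = ids[i] / 64;
        uint64_t bit = 1ULL << (ids[i] % 64);
        if (bitmap[word] & bit)
            (*on)++;
        else
            (*off)++;
    }
}
```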
*/\nint ACLCountCategoryBitsForSelector(aclSelector *selector, unsigned long *on, unsigned long *off,\n                                const char *category)\n{\n    uint64_t cflag = ACLGetCommandCategoryFlagByName(category);\n    if (!cflag) return C_ERR;\n\n    *on = *off = 0;\n    ACLCountCategoryBitsForCommands(server.orig_commands, selector, on, off, cflag);\n    return C_OK;\n}\n\n/* This function returns an SDS string representing the specified selector ACL\n * rules related to command execution, in the same format you could set them\n * back using ACL SETUSER. The function will return just the set of rules needed\n * to recreate the user commands bitmap, without including other user flags such\n * as on/off, passwords and so forth. The returned string always starts with\n * the +@all or -@all rule, depending on the user bitmap, and is followed, if\n * needed, by the other rules needed to narrow or extend what the user can do. */\nsds ACLDescribeSelectorCommandRules(aclSelector *selector) {\n    sds rules = sdsempty();\n\n    /* We use this fake selector as a \"sanity\" check to make sure the rules\n     * we generate have the same bitmap as those on the current selector. */\n    aclSelector *fake_selector = ACLCreateSelector(0);\n\n    /* Here we want to understand if we should start with +@all or -@all.\n     * Note that when starting with +@all and subtracting, the user\n     * will be able to execute future commands, while -@all and adding will just\n     * allow the user to run the selected commands and/or categories.\n     * How do we test for that? We use the trick of a reserved command ID bit\n     * that is set only by +@all (and its alias \"allcommands\"). 
*/\n    if (ACLSelectorCanExecuteFutureCommands(selector)) {\n        rules = sdscat(rules,\"+@all \");\n        ACLSetSelector(fake_selector,\"+@all\",-1);\n    } else {\n        rules = sdscat(rules,\"-@all \");\n        ACLSetSelector(fake_selector,\"-@all\",-1);\n    }\n\n    /* Apply all of the commands and categories to the fake selector. */\n    int argc = 0;\n    sds *argv = sdssplitargs(selector->command_rules, &argc);\n    serverAssert(argv != NULL);\n\n    for(int i = 0; i < argc; i++) {\n        int res = ACLSetSelector(fake_selector, argv[i], -1);\n        serverAssert(res == C_OK);\n    }\n    if (sdslen(selector->command_rules)) {\n        rules = sdscatfmt(rules, \"%S \", selector->command_rules);\n    }\n    sdsfreesplitres(argv, argc);\n\n    /* Trim the final useless space. */\n    sdsrange(rules,0,-2);\n\n    /* This is technically not needed, but we want to verify that now the\n     * predicted bitmap is exactly the same as the user bitmap, and abort\n     * otherwise, because aborting is better than a security risk in this\n     * code path. */\n    if (memcmp(fake_selector->allowed_commands,\n                        selector->allowed_commands,\n                        sizeof(selector->allowed_commands)) != 0)\n    {\n        serverLog(LL_WARNING,\n            \"CRITICAL ERROR: User ACLs don't match final bitmap: '%s'\",\n            redactLogCstr(rules));\n        serverPanic(\"No bitmap match in ACLDescribeSelectorCommandRules()\");\n    }\n    ACLFreeSelector(fake_selector);\n    return rules;\n}\n\nsds ACLDescribeSelector(aclSelector *selector) {\n    listIter li;\n    listNode *ln;\n    sds res = sdsempty();\n    /* Key patterns. 
*/\n    if (selector->flags & SELECTOR_FLAG_ALLKEYS) {\n        res = sdscatlen(res,\"~* \",3);\n    } else {\n        listRewind(selector->patterns,&li);\n        while((ln = listNext(&li))) {\n            keyPattern *thispat = (keyPattern *)listNodeValue(ln);\n            res = sdsCatPatternString(res, thispat);\n            res = sdscatlen(res,\" \",1);\n        }\n    }\n\n    /* Pub/sub channel patterns. */\n    if (selector->flags & SELECTOR_FLAG_ALLCHANNELS) {\n        res = sdscatlen(res,\"&* \",3);\n    } else {\n        res = sdscatlen(res,\"resetchannels \",14);\n        listRewind(selector->channels,&li);\n        while((ln = listNext(&li))) {\n            sds thispat = listNodeValue(ln);\n            res = sdscatlen(res,\"&\",1);\n            res = sdscatsds(res,thispat);\n            res = sdscatlen(res,\" \",1);\n        }\n    }\n\n    /* Command rules. */\n    sds rules = ACLDescribeSelectorCommandRules(selector);\n    res = sdscatsds(res,rules);\n    sdsfree(rules);\n    return res;\n}\n\n/* This is similar to ACLDescribeSelectorCommandRules(), however instead of\n * describing just the user command rules, everything is described: user\n * flags, keys, passwords and finally the command rules obtained via\n * the ACLDescribeSelectorCommandRules() function. This is the function we call\n * when we want to rewrite the configuration files describing ACLs and\n * in order to show users with ACL LIST. */\nrobj *ACLDescribeUser(user *u) {\n    if (u->acl_string) {\n        incrRefCount(u->acl_string);\n        return u->acl_string;\n    }\n\n    sds res = sdsempty();\n\n    /* Flags. */\n    for (int j = 0; ACLUserFlags[j].flag; j++) {\n        if (u->flags & ACLUserFlags[j].flag) {\n            res = sdscat(res,ACLUserFlags[j].name);\n            res = sdscatlen(res,\" \",1);\n        }\n    }\n\n    /* Passwords. 
*/\n    listIter li;\n    listNode *ln;\n    listRewind(u->passwords,&li);\n    while((ln = listNext(&li))) {\n        sds thispass = listNodeValue(ln);\n        res = sdscatlen(res,\"#\",1);\n        res = sdscatsds(res,thispass);\n        res = sdscatlen(res,\" \",1);\n    }\n\n    /* Selectors (Commands and keys) */\n    listRewind(u->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *selector = (aclSelector *) listNodeValue(ln);\n        sds default_perm = ACLDescribeSelector(selector);\n        if (selector->flags & SELECTOR_FLAG_ROOT) {\n            res = sdscatfmt(res, \"%s\", default_perm);\n        } else {\n            res = sdscatfmt(res, \" (%s)\", default_perm);\n        }\n        sdsfree(default_perm);\n    }\n\n    u->acl_string = createObject(OBJ_STRING, res);\n    /* because we are returning it, have to increase count */\n    incrRefCount(u->acl_string);\n\n    return u->acl_string;\n}\n\n/* Get a command from the original command table, that is not affected\n * by the command renaming operations: we base all the ACL work from that\n * table, so that ACLs are valid regardless of command renaming. */\nstruct redisCommand *ACLLookupCommand(const char *name) {\n    struct redisCommand *cmd;\n    sds sdsname = sdsnew(name);\n    cmd = lookupCommandBySdsLogic(server.orig_commands,sdsname);\n    sdsfree(sdsname);\n    return cmd;\n}\n\n/* Flush the array of allowed first-args for the specified user\n * and command ID. */\nvoid ACLResetFirstArgsForCommand(aclSelector *selector, unsigned long id) {\n    if (selector->allowed_firstargs && selector->allowed_firstargs[id]) {\n        for (int i = 0; selector->allowed_firstargs[id][i]; i++)\n            sdsfree(selector->allowed_firstargs[id][i]);\n        zfree(selector->allowed_firstargs[id]);\n        selector->allowed_firstargs[id] = NULL;\n    }\n}\n\n/* Flush the entire table of first-args. 
This is useful on +@all, -@all\n * or similar to return back to the minimal memory usage (and checks to do)\n * for the user. */\nvoid ACLResetFirstArgs(aclSelector *selector) {\n    if (selector->allowed_firstargs == NULL) return;\n    for (int j = 0; j < USER_COMMAND_BITS_COUNT; j++) {\n        if (selector->allowed_firstargs[j]) {\n            for (int i = 0; selector->allowed_firstargs[j][i]; i++)\n                sdsfree(selector->allowed_firstargs[j][i]);\n            zfree(selector->allowed_firstargs[j]);\n        }\n    }\n    zfree(selector->allowed_firstargs);\n    selector->allowed_firstargs = NULL;\n}\n\n/* Add a first-arg to the list of subcommands for the user 'u' and\n * the command id specified. */\nvoid ACLAddAllowedFirstArg(aclSelector *selector, unsigned long id, const char *sub) {\n    /* If this is the first first-arg to be configured for\n     * this user, we have to allocate the first-args array. */\n    if (selector->allowed_firstargs == NULL) {\n        selector->allowed_firstargs = zcalloc(USER_COMMAND_BITS_COUNT * sizeof(sds*));\n    }\n\n    /* We also need to enlarge the allocation pointing to the\n     * null terminated SDS array, to make space for this one.\n     * To start check the current size, and while we are here\n     * make sure the first-arg is not already specified inside. */\n    long items = 0;\n    if (selector->allowed_firstargs[id]) {\n        while(selector->allowed_firstargs[id][items]) {\n            /* If it's already here do not add it again. */\n            if (!strcasecmp(selector->allowed_firstargs[id][items],sub))\n                return;\n            items++;\n        }\n    }\n\n    /* Now we can make space for the new item (and the null term). 
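The allowed_firstargs table above stores, per command id, a NULL-terminated array of strings grown with zrealloc. The growth pattern can be sketched standalone (hypothetical helper using plain malloc/realloc instead of the Redis allocator):

```c
#include <stdlib.h>
#include <string.h>

// Portable string duplication (avoids relying on strdup).
static char *dup_cstring(const char *s) {
    size_t n = strlen(s) + 1;
    char *d = malloc(n);
    memcpy(d, s, n);
    return d;
}

// Append a string to a NULL-terminated array of C strings, growing the
// allocation by one slot plus the NULL terminator, as the first-args
// table does.
static char **append_string(char **arr, const char *s) {
    size_t items = 0;
    if (arr) while (arr[items]) items++;
    arr = realloc(arr, sizeof(char *) * (items + 2));
    arr[items] = dup_cstring(s);
    arr[items + 1] = NULL;   // keep the array NULL-terminated
    return arr;
}
```

Keeping the array NULL-terminated means no separate length field is needed; both the duplicate check and the free path simply iterate until NULL.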
*/\n    items += 2;\n    selector->allowed_firstargs[id] = zrealloc(selector->allowed_firstargs[id], sizeof(sds)*items);\n    selector->allowed_firstargs[id][items-2] = sdsnew(sub);\n    selector->allowed_firstargs[id][items-1] = NULL;\n}\n\n/* Create an ACL selector from the given ACL operations, which should be\n * a list of space-separated ACL operations that starts and ends\n * with parentheses.\n *\n * If any of the operations are invalid, NULL will be returned instead\n * and errno will be set to indicate the error. */\naclSelector *aclCreateSelectorFromOpSet(const char *opset, size_t opsetlen) {\n    serverAssert(opset[0] == '(' && opset[opsetlen - 1] == ')');\n    aclSelector *s = ACLCreateSelector(0);\n\n    int argc = 0;\n    sds trimmed = sdsnewlen(opset + 1, opsetlen - 2);\n    sds *argv = sdssplitargs(trimmed, &argc);\n    for (int i = 0; i < argc; i++) {\n        if (ACLSetSelector(s, argv[i], sdslen(argv[i])) == C_ERR) {\n            ACLFreeSelector(s);\n            s = NULL;\n            goto cleanup;\n        }\n    }\n\ncleanup:\n    sdsfreesplitres(argv, argc);\n    sdsfree(trimmed);\n    return s;\n}\n\n/* Set a selector's properties with the provided 'op'.\n *\n * +<command>   Allow the execution of that command.\n *              May be used with `|` for allowing subcommands (e.g. \"+config|get\")\n * -<command>   Disallow the execution of that command.\n *              May be used with `|` for blocking subcommands (e.g. \"-config|set\")\n * +@<category> Allow the execution of all the commands in such category,\n *              where valid categories are @admin, @set, @sortedset, ...\n *              and so forth; see the full list in the server.c file where\n *              the Redis command table is described and defined.\n *              The special category @all means all the commands, both those\n *              currently present in the server and those that will be loaded\n *              in the future via modules.\n * 
+<command>|first-arg    Allow a specific first argument of an otherwise\n *                         disabled command. Note that this form is not\n *                         allowed as negative like -SELECT|1, but\n *                         only additive starting with \"+\".\n * allcommands  Alias for +@all. Note that it implies the ability to execute\n *              all the future commands loaded via the modules system.\n * nocommands   Alias for -@all.\n * ~<pattern>   Add a pattern of keys that can be mentioned as part of\n *              commands. For instance ~* allows all the keys. The pattern\n *              is a glob-style pattern like the one of KEYS.\n *              It is possible to specify multiple patterns.\n * %R~<pattern> Add key read pattern that specifies which keys can be read\n *              from.\n * %W~<pattern> Add key write pattern that specifies which keys can be\n *              written to.\n * allkeys      Alias for ~*\n * resetkeys    Flush the list of allowed keys patterns.\n * &<pattern>   Add a pattern of channels that can be mentioned as part of\n *              Pub/Sub commands. For instance &* allows all the channels. 
The\n *              pattern is a glob-style pattern like the one of PSUBSCRIBE.\n *              It is possible to specify multiple patterns.\n * allchannels              Alias for &*\n * resetchannels            Flush the list of allowed channel patterns.\n */\nint ACLSetSelector(aclSelector *selector, const char* op, size_t oplen) {\n    if (!strcasecmp(op,\"allkeys\") ||\n               !strcasecmp(op,\"~*\"))\n    {\n        selector->flags |= SELECTOR_FLAG_ALLKEYS;\n        listEmpty(selector->patterns);\n    } else if (!strcasecmp(op,\"resetkeys\")) {\n        selector->flags &= ~SELECTOR_FLAG_ALLKEYS;\n        listEmpty(selector->patterns);\n    } else if (!strcasecmp(op,\"allchannels\") ||\n               !strcasecmp(op,\"&*\"))\n    {\n        selector->flags |= SELECTOR_FLAG_ALLCHANNELS;\n        listEmpty(selector->channels);\n    } else if (!strcasecmp(op,\"resetchannels\")) {\n        selector->flags &= ~SELECTOR_FLAG_ALLCHANNELS;\n        listEmpty(selector->channels);\n    } else if (!strcasecmp(op,\"allcommands\") ||\n               !strcasecmp(op,\"+@all\"))\n    {\n        memset(selector->allowed_commands,255,sizeof(selector->allowed_commands));\n        selector->flags |= SELECTOR_FLAG_ALLCOMMANDS;\n        sdsclear(selector->command_rules);\n        ACLResetFirstArgs(selector);\n    } else if (!strcasecmp(op,\"nocommands\") ||\n               !strcasecmp(op,\"-@all\"))\n    {\n        memset(selector->allowed_commands,0,sizeof(selector->allowed_commands));\n        selector->flags &= ~SELECTOR_FLAG_ALLCOMMANDS;\n        sdsclear(selector->command_rules);\n        ACLResetFirstArgs(selector);\n    } else if (op[0] == '~' || op[0] == '%') {\n        if (selector->flags & SELECTOR_FLAG_ALLKEYS) {\n            errno = EEXIST;\n            return C_ERR;\n        }\n        int flags = 0;\n        size_t offset = 1;\n        if (op[0] == '%') {\n            int perm_ok = 1;\n            for (; offset < oplen; offset++) {\n                if 
(toupper(op[offset]) == 'R' && !(flags & ACL_READ_PERMISSION)) {\n                    flags |= ACL_READ_PERMISSION;\n                } else if (toupper(op[offset]) == 'W' && !(flags & ACL_WRITE_PERMISSION)) {\n                    flags |= ACL_WRITE_PERMISSION;\n                } else if (op[offset] == '~') {\n                    offset++;\n                    break;\n                } else {\n                    perm_ok = 0;\n                    break;\n                }\n            }\n            if (!flags || !perm_ok) {\n                errno = EINVAL;\n                return C_ERR;\n            }\n        } else {\n            flags = ACL_ALL_PERMISSION;\n        }\n\n        if (ACLStringHasSpaces(op+offset,oplen-offset)) {\n            errno = EINVAL;\n            return C_ERR;\n        }\n        keyPattern *newpat = ACLKeyPatternCreate(sdsnewlen(op+offset,oplen-offset), flags);\n        listNode *ln = listSearchKey(selector->patterns,newpat);\n        /* Avoid re-adding the same key pattern multiple times. */\n        if (ln == NULL) {\n            listAddNodeTail(selector->patterns,newpat);\n        } else {\n            ((keyPattern *)listNodeValue(ln))->flags |= flags;\n            ACLKeyPatternFree(newpat);\n        }\n        selector->flags &= ~SELECTOR_FLAG_ALLKEYS;\n    } else if (op[0] == '&') {\n        if (selector->flags & SELECTOR_FLAG_ALLCHANNELS) {\n            errno = EISDIR;\n            return C_ERR;\n        }\n        if (ACLStringHasSpaces(op+1,oplen-1)) {\n            errno = EINVAL;\n            return C_ERR;\n        }\n        sds newpat = sdsnewlen(op+1,oplen-1);\n        listNode *ln = listSearchKey(selector->channels,newpat);\n        /* Avoid re-adding the same channel pattern multiple times. 
*/\n        if (ln == NULL)\n            listAddNodeTail(selector->channels,newpat);\n        else\n            sdsfree(newpat);\n        selector->flags &= ~SELECTOR_FLAG_ALLCHANNELS;\n    } else if (op[0] == '+' && op[1] != '@') {\n        if (strrchr(op,'|') == NULL) {\n            struct redisCommand *cmd = ACLLookupCommand(op+1);\n            if (cmd == NULL) {\n                errno = ENOENT;\n                return C_ERR;\n            }\n            ACLChangeSelectorPerm(selector,cmd,1);\n            ACLUpdateCommandRules(selector,cmd->fullname,1);\n        } else {\n            /* Split the command and subcommand parts. */\n            char *copy = zstrdup(op+1);\n            char *sub = strrchr(copy,'|');\n            sub[0] = '\\0';\n            sub++;\n\n            struct redisCommand *cmd = ACLLookupCommand(copy);\n\n            /* Check if the command exists. We can't check the\n             * first-arg to see if it is valid. */\n            if (cmd == NULL) {\n                zfree(copy);\n                errno = ENOENT;\n                return C_ERR;\n            }\n\n            /* We do not support allowing first-arg of a subcommand */\n            if (cmd->parent) {\n                zfree(copy);\n                errno = ECHILD;\n                return C_ERR;\n            }\n\n            /* The subcommand cannot be empty, so things like DEBUG|\n             * are syntax errors of course. 
*/\n            if (strlen(sub) == 0) {\n                zfree(copy);\n                errno = EINVAL;\n                return C_ERR;\n            }\n\n            if (cmd->subcommands_dict) {\n                /* If the user is trying to allow a valid subcommand we can just add its unique ID */\n                cmd = ACLLookupCommand(op+1);\n                if (cmd == NULL) {\n                    zfree(copy);\n                    errno = ENOENT;\n                    return C_ERR;\n                }\n                ACLChangeSelectorPerm(selector,cmd,1);\n            } else {\n                /* If the user is trying to use the ACL mechanism to block SELECT except SELECT 0 or\n                 * block DEBUG except DEBUG OBJECT (DEBUG subcommands are not considered\n                 * subcommands for now) we use the allowed_firstargs mechanism. */\n\n                /* Add the first-arg to the list of valid ones. */\n                serverLog(LL_WARNING, \"Deprecation warning: Allowing a first arg of an otherwise \"\n                                      \"blocked command is a misuse of ACL and may get disabled \"\n                                      \"in the future (offender: +%s)\", redactLogCstr(op+1));\n                ACLAddAllowedFirstArg(selector,cmd->id,sub);\n            }\n            ACLUpdateCommandRules(selector,op+1,1);\n            zfree(copy);\n        }\n    } else if (op[0] == '-' && op[1] != '@') {\n        struct redisCommand *cmd = ACLLookupCommand(op+1);\n        if (cmd == NULL) {\n            errno = ENOENT;\n            return C_ERR;\n        }\n        ACLChangeSelectorPerm(selector,cmd,0);\n        ACLUpdateCommandRules(selector,cmd->fullname,0);\n    } else if ((op[0] == '+' || op[0] == '-') && op[1] == '@') {\n        int bitval = op[0] == '+' ? 
1 : 0;\n        if (ACLSetSelectorCategory(selector,op+1,bitval) == C_ERR) {\n            errno = ENOENT;\n            return C_ERR;\n        }\n    } else {\n        errno = EINVAL;\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Set user properties according to the string \"op\". The following\n * is a description of what different strings will do:\n *\n * on           Enable the user: it is possible to authenticate as this user.\n * off          Disable the user: it's no longer possible to authenticate\n *              with this user, however the already authenticated connections\n *              will still work.\n * skip-sanitize-payload    RESTORE dump-payload sanitization is skipped.\n * sanitize-payload         RESTORE dump-payload is sanitized (default).\n * ><password>  Add this password to the list of valid passwords for the user.\n *              For example >mypass will add \"mypass\" to the list.\n *              This directive clears the \"nopass\" flag (see later).\n * #<hash>      Add this password hash to the list of valid hashes for\n *              the user. This is useful if you have previously computed\n *              the hash, and don't want to store it in plaintext.\n *              This directive clears the \"nopass\" flag (see later).\n * <<password>  Remove this password from the list of valid passwords.\n * !<hash>      Remove this hashed password from the list of valid passwords.\n *              This is useful when you want to remove a password just by\n *              hash without knowing its plaintext version at all.\n * nopass       All the set passwords of the user are removed, and the user\n *              is flagged as requiring no password: it means that every\n *              password will work against this user. 
If this directive is\n *              used for the default user, every new connection will be\n *              immediately authenticated with the default user without\n *              any explicit AUTH command required. Note that the \"resetpass\"\n *              directive will clear this condition.\n * resetpass    Flush the list of allowed passwords. It also removes the\n *              \"nopass\" status. After \"resetpass\" the user has no associated\n *              passwords and there is no way to authenticate without adding\n *              some password (or setting it as \"nopass\" later).\n * reset        Performs the following actions: resetpass, resetkeys, resetchannels,\n *              allchannels (if acl-pubsub-default is set), off, clearselectors, -@all.\n *              The user returns to the same state it had immediately after its creation.\n * (<options>)  Create a new selector with the options specified within the\n *              parentheses and attach it to the user. Each option should be\n *              space separated. The first character must be ( and the last\n *              character must be ).\n * clearselectors          Remove all of the currently attached selectors. \n *                         Note this does not change the \"root\" user permissions,\n *                         which are the permissions directly applied onto the\n *                         user (outside the parentheses).\n * \n * Selector options can also be specified by this function, in which case\n * they update the root selector for the user.\n *\n * The 'op' string must be null terminated. 
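As a rough illustration of how these op strings are dispatched, the branch structure of this function can be sketched stand-alone (the enum, the helper name, and the keyword subset below are hypothetical simplifications, not the real implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <strings.h>

// Hypothetical op-kind tags, mirroring the branches of ACLSetUser().
typedef enum {
    OP_FLAG,      // on, off, nopass, resetpass, ...
    OP_ADD_PASS,  // >password
    OP_ADD_HASH,  // #hash
    OP_DEL_PASS,  // <password
    OP_DEL_HASH,  // !hash
    OP_SELECTOR,  // (...)
    OP_RULE       // everything else is applied to the root selector
} opKind;

static opKind classifyOp(const char *op, size_t oplen) {
    if (!strcasecmp(op,"on") || !strcasecmp(op,"off") ||
        !strcasecmp(op,"nopass") || !strcasecmp(op,"resetpass"))
        return OP_FLAG;
    if (op[0] == '>') return OP_ADD_PASS;
    if (op[0] == '#') return OP_ADD_HASH;
    if (op[0] == '<') return OP_DEL_PASS;
    if (op[0] == '!') return OP_DEL_HASH;
    if (op[0] == '(' && op[oplen-1] == ')') return OP_SELECTOR;
    return OP_RULE;
}
```

Key patterns (~), channel patterns (&) and command rules (+ and -) all fall through to the root selector, as described above.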
The 'oplen' argument should\n * specify the length of the 'op' string in case the caller needs to pass\n * binary data (for instance the >password form may use a binary password).\n * Otherwise the field can be set to -1 and the function will use strlen()\n * to determine the length.\n *\n * The function returns C_OK if the action to perform was understood because\n * the 'op' string made sense. Otherwise C_ERR is returned if the operation\n * is unknown or has some syntax error.\n *\n * When an error is returned, errno is set to the following values:\n *\n * EINVAL: The specified opcode is not understood or the key/channel pattern is\n *         invalid (contains disallowed characters).\n * ENOENT: The command name or command category provided with + or - is not\n *         known.\n * EEXIST: You are adding a key pattern after \"*\" was already added. This is\n *         almost surely an error on the user side.\n * EISDIR: You are adding a channel pattern after \"*\" was already added. This is\n *         almost surely an error on the user side.\n * ENODEV: The password you are trying to remove from the user does not exist.\n * EBADMSG: The hash you are trying to add is not a valid hash.\n * ECHILD: Attempt to allow a specific first argument of a subcommand\n */\nint ACLSetUser(user *u, const char *op, ssize_t oplen) {\n    /* as we are changing the ACL, the old generated string is now invalid */\n    if (u->acl_string) {\n        decrRefCount(u->acl_string);\n        u->acl_string = NULL;\n    }\n\n    if (oplen == -1) oplen = strlen(op);\n    if (oplen == 0) return C_OK; /* Empty string is a no-operation. 
*/\n    if (!strcasecmp(op,\"on\")) {\n        atomicSet(u->flags, (u->flags | USER_FLAG_ENABLED) & ~USER_FLAG_DISABLED);\n    } else if (!strcasecmp(op,\"off\")) {\n        atomicSet(u->flags, (u->flags | USER_FLAG_DISABLED) & ~USER_FLAG_ENABLED);\n    } else if (!strcasecmp(op,\"skip-sanitize-payload\")) {\n        atomicSet(u->flags, (u->flags | USER_FLAG_SANITIZE_PAYLOAD_SKIP) & ~USER_FLAG_SANITIZE_PAYLOAD);\n    } else if (!strcasecmp(op,\"sanitize-payload\")) {\n        atomicSet(u->flags, (u->flags | USER_FLAG_SANITIZE_PAYLOAD) & ~USER_FLAG_SANITIZE_PAYLOAD_SKIP);\n    } else if (!strcasecmp(op,\"nopass\")) {\n        atomicSet(u->flags, u->flags | USER_FLAG_NOPASS);\n        listEmpty(u->passwords);\n    } else if (!strcasecmp(op,\"resetpass\")) {\n        atomicSet(u->flags, u->flags & ~USER_FLAG_NOPASS);\n        listEmpty(u->passwords);\n    } else if (op[0] == '>' || op[0] == '#') {\n        sds newpass;\n        if (op[0] == '>') {\n            newpass = ACLHashPassword((unsigned char*)op+1,oplen-1);\n        } else {\n            if (ACLCheckPasswordHash((unsigned char*)op+1,oplen-1) == C_ERR) {\n                errno = EBADMSG;\n                return C_ERR;\n            }\n            newpass = sdsnewlen(op+1,oplen-1);\n        }\n\n        listNode *ln = listSearchKey(u->passwords,newpass);\n        /* Avoid re-adding the same password multiple times. 
*/\n        if (ln == NULL)\n            listAddNodeTail(u->passwords,newpass);\n        else\n            sdsfree(newpass);\n        atomicSet(u->flags, u->flags & ~USER_FLAG_NOPASS);\n    } else if (op[0] == '<' || op[0] == '!') {\n        sds delpass;\n        if (op[0] == '<') {\n            delpass = ACLHashPassword((unsigned char*)op+1,oplen-1);\n        } else {\n            if (ACLCheckPasswordHash((unsigned char*)op+1,oplen-1) == C_ERR) {\n                errno = EBADMSG;\n                return C_ERR;\n            }\n            delpass = sdsnewlen(op+1,oplen-1);\n        }\n        listNode *ln = listSearchKey(u->passwords,delpass);\n        sdsfree(delpass);\n        if (ln) {\n            listDelNode(u->passwords,ln);\n        } else {\n            errno = ENODEV;\n            return C_ERR;\n        }\n    } else if (op[0] == '(' && op[oplen - 1] == ')') {\n        aclSelector *selector = aclCreateSelectorFromOpSet(op, oplen);\n        if (!selector) {\n            /* No errno set, propagate it from the interior error. 
*/\n            return C_ERR;\n        }\n        listAddNodeTail(u->selectors, selector);\n        return C_OK;\n    } else if (!strcasecmp(op,\"clearselectors\")) {\n        listIter li;\n        listNode *ln;\n        listRewind(u->selectors,&li);\n        /* There has to be a root selector */\n        serverAssert(listNext(&li));\n        while((ln = listNext(&li))) {\n            listDelNode(u->selectors, ln);\n        }\n        return C_OK;\n    } else if (!strcasecmp(op,\"reset\")) {\n        serverAssert(ACLSetUser(u,\"resetpass\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"resetkeys\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"resetchannels\",-1) == C_OK);\n        if (server.acl_pubsub_default & SELECTOR_FLAG_ALLCHANNELS)\n            serverAssert(ACLSetUser(u,\"allchannels\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"off\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"sanitize-payload\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"clearselectors\",-1) == C_OK);\n        serverAssert(ACLSetUser(u,\"-@all\",-1) == C_OK);\n    } else {\n        aclSelector *selector = ACLUserGetRootSelector(u);\n        if (ACLSetSelector(selector, op, oplen) == C_ERR) {\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\n/* Return a description of the error that occurred in ACLSetUser() according to\n * the errno value set by the function on error. */\nconst char *ACLSetUserStringError(void) {\n    const char *errmsg = \"Wrong format\";\n    if (errno == ENOENT)\n        errmsg = \"Unknown command or category name in ACL\";\n    else if (errno == EINVAL)\n        errmsg = \"Syntax error\";\n    else if (errno == EEXIST)\n        errmsg = \"Adding a pattern after the * pattern (or the \"\n                 \"'allkeys' flag) is not valid and does not have any \"\n                 \"effect. 
Try 'resetkeys' to start with an empty \"\n                 \"list of patterns\";\n    else if (errno == EISDIR)\n        errmsg = \"Adding a pattern after the * pattern (or the \"\n                 \"'allchannels' flag) is not valid and does not have any \"\n                 \"effect. Try 'resetchannels' to start with an empty \"\n                 \"list of channels\";\n    else if (errno == ENODEV)\n        errmsg = \"The password you are trying to remove from the user does \"\n                 \"not exist\";\n    else if (errno == EBADMSG)\n        errmsg = \"The password hash must be exactly 64 characters and contain \"\n                 \"only lowercase hexadecimal characters\";\n    else if (errno == EALREADY)\n        errmsg = \"Duplicate user found. A user can only be defined once in \"\n                 \"config files\";\n    else if (errno == ECHILD)\n        errmsg = \"Allowing first-arg of a subcommand is not supported\";\n    return errmsg;\n}\n\n/* Create the default user, this has special permissions. */\nuser *ACLCreateDefaultUser(void) {\n    user *new = ACLCreateUser(\"default\",7);\n    ACLSetUser(new,\"+@all\",-1);\n    ACLSetUser(new,\"~*\",-1);\n    ACLSetUser(new,\"&*\",-1);\n    ACLSetUser(new,\"on\",-1);\n    ACLSetUser(new,\"nopass\",-1);\n    return new;\n}\n\n/* Initialization of the ACL subsystem. 
*/\nvoid ACLInit(void) {\n    Users = raxNew();\n    UsersToLoad = listCreate();\n    ACLInitCommandCategories();\n    listSetMatchMethod(UsersToLoad, ACLListMatchLoadedUser);\n    ACLLog = listCreate();\n    DefaultUser = ACLCreateDefaultUser();\n}\n\n/* Check the username and password pair and return C_OK if they are valid,\n * otherwise C_ERR is returned and errno is set to:\n *\n *  EINVAL: if the username-password pair does not match.\n *  ENOENT: if the specified user does not exist at all.\n */\nint ACLCheckUserCredentials(robj *username, robj *password) {\n    user *u = ACLGetUserByName(username->ptr,sdslen(username->ptr));\n    if (u == NULL) {\n        errno = ENOENT;\n        return C_ERR;\n    }\n\n    /* Disabled users can't log in. */\n    if (u->flags & USER_FLAG_DISABLED) {\n        errno = EINVAL;\n        return C_ERR;\n    }\n\n    /* If the user is configured to not require any password, we\n     * are already fine here. */\n    if (u->flags & USER_FLAG_NOPASS) return C_OK;\n\n    /* Check all the user passwords for at least one to match. */\n    listIter li;\n    listNode *ln;\n    listRewind(u->passwords,&li);\n    sds hashed = ACLHashPassword(password->ptr,sdslen(password->ptr));\n    while((ln = listNext(&li))) {\n        sds thispass = listNodeValue(ln);\n        if (!time_independent_strcmp(hashed, thispass, HASH_PASSWORD_LEN)) {\n            sdsfree(hashed);\n            return C_OK;\n        }\n    }\n    sdsfree(hashed);\n\n    /* If we reached this point, no password matched. */\n    errno = EINVAL;\n    return C_ERR;\n}\n\n/* If `err` is provided, this is added as an error reply to the client.\n * Otherwise, the standard Auth error is added as a reply. 
*/\nvoid addAuthErrReply(client *c, robj *err) {\n    if (clientHasPendingReplies(c)) return;\n    if (!err) {\n        addReplyError(c, \"-WRONGPASS invalid username-password pair or user is disabled.\");\n        return;\n    }\n    addReplyError(c, err->ptr);\n}\n\n/* This is like ACLCheckUserCredentials(), however if the user/pass\n * are correct, the connection is put in authenticated state and the\n * connection user reference is populated.\n *\n * The return value is AUTH_OK on success (valid username / password pair) & AUTH_ERR otherwise. */\nint checkPasswordBasedAuth(client *c, robj *username, robj *password) {\n    if (ACLCheckUserCredentials(username,password) == C_OK) {\n        c->authenticated = 1;\n        c->user = ACLGetUserByName(username->ptr,sdslen(username->ptr));\n        moduleNotifyUserChanged(c);\n        return AUTH_OK;\n    } else {\n        addACLLogEntry(c,ACL_DENIED_AUTH,(c->flags & CLIENT_MULTI) ? ACL_LOG_CTX_MULTI : ACL_LOG_CTX_TOPLEVEL,0,username->ptr,NULL);\n        return AUTH_ERR;\n    }\n}\n\n/* Attempt authenticating the user - first through module based authentication,\n * and then, if needed, with normal password based authentication.\n * Returns one of the following codes:\n * AUTH_OK - Indicates that authentication succeeded.\n * AUTH_ERR - Indicates that authentication failed.\n * AUTH_BLOCKED - Indicates module authentication is in progress through a blocking implementation.\n */\nint ACLAuthenticateUser(client *c, robj *username, robj *password, robj **err) {\n    int result = checkModuleAuthentication(c, username, password, err);\n    /* If authentication was not handled by any Module, attempt normal password based auth. */\n    if (result == AUTH_NOT_HANDLED) {\n        result = checkPasswordBasedAuth(c, username, password);\n    }\n    return result;\n}\n\n/* For ACL purposes, every user has a bitmap with the commands that such\n * user is allowed to execute. 
In order to populate the bitmap, every command\n * should have an assigned ID (that is used to index the bitmap). This function\n * creates such an ID: it uses sequential IDs, reusing the same ID for the same\n * command name, so that a command retains the same ID in case of modules that\n * are unloaded and later reloaded.\n *\n * The function does not take ownership of the 'cmdname' SDS string.\n * */\nunsigned long ACLGetCommandID(sds cmdname) {\n    sds lowername = sdsdup(cmdname);\n    sdstolower(lowername);\n    if (commandId == NULL) commandId = raxNew();\n    void *id;\n    if (raxFind(commandId,(unsigned char*)lowername,sdslen(lowername),&id)) {\n        sdsfree(lowername);\n        return (unsigned long)id;\n    }\n    raxInsert(commandId,(unsigned char*)lowername,strlen(lowername),\n              (void*)nextid,NULL);\n    sdsfree(lowername);\n    unsigned long thisid = nextid;\n    nextid++;\n\n    /* We never assign the last bit in the user commands bitmap structure,\n     * this way we can later check if this bit is set, understanding if the\n     * current ACL for the user was created starting with a +@all to add all\n     * the possible commands and just subtracting other single commands or\n     * categories, or if, instead, the ACL was created just adding commands\n     * and command categories from scratch, not allowing future commands by\n     * default (loaded via modules). This is useful when rewriting the ACLs\n     * with ACL SAVE. */\n    if (nextid == USER_COMMAND_BITS_COUNT-1) nextid++;\n    return thisid;\n}\n\n/* Clear command id table and reset nextid to 0. */\nvoid ACLClearCommandID(void) {\n    if (commandId) raxFree(commandId);\n    commandId = NULL;\n    nextid = 0;\n}\n\n/* Return a user by its name, or NULL if the user does not exist. 
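The sequential-ID policy of ACLGetCommandID() above, including the reserved last bitmap bit, can be sketched stand-alone (the constant value and the names below are illustrative assumptions, not the real implementation):

```c
#include <assert.h>

// Illustrative stand-in for the real bitmap size constant.
#define DEMO_COMMAND_BITS_COUNT 1024

static unsigned long demo_nextid = 0;

// Hand out sequential IDs, never assigning the last bit of the bitmap:
// that bit stays reserved so ACL rewriting can tell +@all-derived ACLs
// from additive ones.
static unsigned long demoAllocCommandId(void) {
    unsigned long thisid = demo_nextid;
    demo_nextid++;
    if (demo_nextid == DEMO_COMMAND_BITS_COUNT-1) demo_nextid++;
    return thisid;
}
```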
*/\nuser *ACLGetUserByName(const char *name, size_t namelen) {\n    void *myuser = NULL;\n    raxFind(Users,(unsigned char*)name,namelen,&myuser);\n    return myuser;\n}\n\n/* =============================================================================\n * ACL permission checks\n * ==========================================================================*/\n\n/* Check if the key can be accessed by the selector.\n *\n * If the selector can access the key, ACL_OK is returned, otherwise\n * ACL_DENIED_KEY is returned. */\nstatic int ACLSelectorCheckKey(aclSelector *selector, const char *key, int keylen, int keyspec_flags) {\n    /* The selector can access any key */\n    if (selector->flags & SELECTOR_FLAG_ALLKEYS) return ACL_OK;\n\n    listIter li;\n    listNode *ln;\n    listRewind(selector->patterns,&li);\n\n    int key_flags = 0;\n    if (keyspec_flags & CMD_KEY_ACCESS) key_flags |= ACL_READ_PERMISSION;\n    if (keyspec_flags & CMD_KEY_INSERT) key_flags |= ACL_WRITE_PERMISSION;\n    if (keyspec_flags & CMD_KEY_DELETE) key_flags |= ACL_WRITE_PERMISSION;\n    if (keyspec_flags & CMD_KEY_UPDATE) key_flags |= ACL_WRITE_PERMISSION;\n\n    /* Does the given key represent a prefix of a set of keys? */\n    int prefix = keyspec_flags & CMD_KEY_PREFIX;\n\n    /* Test this key against every pattern. */\n    while((ln = listNext(&li))) {\n        keyPattern *pattern = listNodeValue(ln);\n        if ((pattern->flags & key_flags) != key_flags)\n            continue;\n        size_t plen = sdslen(pattern->pattern);\n        if (prefix) {\n            if (prefixmatch(pattern->pattern,plen,key,keylen,0))\n                return ACL_OK;\n        } else {\n            if (stringmatchlen(pattern->pattern, plen, key, keylen, 0))\n                return ACL_OK;\n        }\n    }\n    return ACL_DENIED_KEY;\n}\n\n/* Checks if the provided selector has the access specified in flags\n * to all keys in the keyspace. 
For example, CMD_KEY_READ access requires either\n * '%R~*', '~*', or allkeys to be granted to the selector. Returns 1 if all \n * the access flags are satisfied with this selector or 0 otherwise.\n */\nstatic int ACLSelectorHasUnrestrictedKeyAccess(aclSelector *selector, int flags) {\n    /* The selector can access any key */\n    if (selector->flags & SELECTOR_FLAG_ALLKEYS) return 1;\n\n    listIter li;\n    listNode *ln;\n    listRewind(selector->patterns,&li);\n\n    int access_flags = 0;\n    if (flags & CMD_KEY_ACCESS) access_flags |= ACL_READ_PERMISSION;\n    if (flags & CMD_KEY_INSERT) access_flags |= ACL_WRITE_PERMISSION;\n    if (flags & CMD_KEY_DELETE) access_flags |= ACL_WRITE_PERMISSION;\n    if (flags & CMD_KEY_UPDATE) access_flags |= ACL_WRITE_PERMISSION;\n\n    /* Test this key against every pattern. */\n    while((ln = listNext(&li))) {\n        keyPattern *pattern = listNodeValue(ln);\n        if ((pattern->flags & access_flags) != access_flags)\n            continue;\n        if (!strcmp(pattern->pattern,\"*\")) {\n           return 1;\n       }\n    }\n    return 0;\n}\n\n/* Checks a channel against a provided list of channels. The is_pattern \n * argument should only be used when subscribing (not when publishing)\n * and controls whether the input channel is evaluated as a channel pattern\n * (like in PSUBSCRIBE) or a plain channel name (like in SUBSCRIBE). \n * \n * Note that a plain channel name like in PUBLISH or SUBSCRIBE can be\n * matched against ACL channel patterns, but the pattern provided in PSUBSCRIBE\n * can only be matched as a literal against an ACL pattern (using plain string compare). 
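A minimal sketch of these two matching modes, using POSIX fnmatch() as a rough stand-in for the internal stringmatchlen() (the helper name is hypothetical; glob details differ slightly between the two matchers):

```c
#include <assert.h>
#include <fnmatch.h>  // POSIX glob matching: a rough stand-in for stringmatchlen()
#include <string.h>

static int channelAllowed(const char *acl_pattern, const char *input,
                          int is_pattern) {
    if (is_pattern) {
        // PSUBSCRIBE input: compared literally against the ACL entry.
        return strcmp(acl_pattern, input) == 0;
    }
    // SUBSCRIBE or PUBLISH input: the ACL entry acts as a glob pattern.
    return fnmatch(acl_pattern, input, 0) == 0;
}
```

So a client allowed '&news.tech' channels via an ACL entry 'news.tech' can SUBSCRIBE to it, but a PSUBSCRIBE pattern only passes if it equals an ACL entry character for character.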
*/\nstatic int ACLCheckChannelAgainstList(list *reference, const char *channel, int channellen, int is_pattern) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(reference, &li);\n    while((ln = listNext(&li))) {\n        sds pattern = listNodeValue(ln);\n        size_t plen = sdslen(pattern);\n        /* Channel patterns are matched literally against the channels in\n         * the list. Regular channels perform pattern matching. */\n        if ((is_pattern && !strcmp(pattern,channel)) || \n            (!is_pattern && stringmatchlen(pattern,plen,channel,channellen,0)))\n        {\n            return ACL_OK;\n        }\n    }\n    return ACL_DENIED_CHANNEL;\n}\n\n/* To prevent duplicate calls to getKeysResult, a cache is maintained\n * in between calls to the various selectors. */\ntypedef struct {\n    int keys_init;\n    getKeysResult keys;\n} aclKeyResultCache;\n\nvoid initACLKeyResultCache(aclKeyResultCache *cache) {\n    cache->keys_init = 0;\n}\n\nvoid cleanupACLKeyResultCache(aclKeyResultCache *cache) {\n    if (cache->keys_init) getKeysFreeResult(&(cache->keys));\n}\n\n/* Check if the command is ready to be executed according to the\n * ACLs associated with the specified selector.\n *\n * If the selector can execute the command ACL_OK is returned, otherwise\n * ACL_DENIED_CMD, ACL_DENIED_KEY, or ACL_DENIED_CHANNEL is returned: the first in case the\n * command cannot be executed because the selector is not allowed to run such\n * command, the second and third if the command is denied because the selector is trying\n * to access a key or channel that are not among the specified patterns. 
*/\nstatic int ACLSelectorCheckCmd(aclSelector *selector, struct redisCommand *cmd, robj **argv, int argc, int *keyidxptr, aclKeyResultCache *cache) {\n    uint64_t id = cmd->id;\n    int ret;\n    if (!(selector->flags & SELECTOR_FLAG_ALLCOMMANDS) && !(cmd->flags & CMD_NO_AUTH)) {\n        /* If the bit is not set we have to check further, in case the\n         * command is allowed just with that specific first argument. */\n        if (ACLGetSelectorCommandBit(selector,id) == 0) {\n            /* Check if the first argument matches. */\n            if (argc < 2 ||\n                selector->allowed_firstargs == NULL ||\n                selector->allowed_firstargs[id] == NULL)\n            {\n                return ACL_DENIED_CMD;\n            }\n\n            long subid = 0;\n            while (1) {\n                if (selector->allowed_firstargs[id][subid] == NULL)\n                    return ACL_DENIED_CMD;\n                int idx = cmd->parent ? 2 : 1;\n                if (!strcasecmp(argv[idx]->ptr,selector->allowed_firstargs[id][subid]))\n                    break; /* First argument match found. Stop here. */\n                subid++;\n            }\n        }\n    }\n\n    /* Check if the user can execute commands explicitly touching the keys\n     * mentioned in the command arguments. 
*/\n    if (!(selector->flags & SELECTOR_FLAG_ALLKEYS) && doesCommandHaveKeys(cmd)) {\n        if (!(cache->keys_init)) {\n            cache->keys = (getKeysResult) GETKEYS_RESULT_INIT;\n            getKeysFromCommandWithSpecs(cmd, argv, argc, GET_KEYSPEC_DEFAULT, &(cache->keys));\n            cache->keys_init = 1;\n        }\n        getKeysResult *result = &(cache->keys);\n        keyReference *resultidx = result->keys;\n        for (int j = 0; j < result->numkeys; j++) {\n            int idx = resultidx[j].pos;\n            ret = ACLSelectorCheckKey(selector, argv[idx]->ptr, sdslen(argv[idx]->ptr), resultidx[j].flags);\n            if (ret != ACL_OK) {\n                if (keyidxptr) *keyidxptr = resultidx[j].pos;\n                return ret;\n            }\n        }\n    }\n\n    /* Check if the user can execute commands explicitly touching the channels\n     * mentioned in the command arguments */\n    const int channel_flags = CMD_CHANNEL_PUBLISH | CMD_CHANNEL_SUBSCRIBE;\n    if (!(selector->flags & SELECTOR_FLAG_ALLCHANNELS) && doesCommandHaveChannelsWithFlags(cmd, channel_flags)) {\n        getKeysResult channels = (getKeysResult) GETKEYS_RESULT_INIT;\n        getChannelsFromCommand(cmd, argv, argc, &channels);\n        keyReference *channelref = channels.keys;\n        for (int j = 0; j < channels.numkeys; j++) {\n            int idx = channelref[j].pos;\n            if (!(channelref[j].flags & channel_flags)) continue;\n            int is_pattern = channelref[j].flags & CMD_CHANNEL_PATTERN;\n            int ret = ACLCheckChannelAgainstList(selector->channels, argv[idx]->ptr, sdslen(argv[idx]->ptr), is_pattern);\n            if (ret != ACL_OK) {\n                if (keyidxptr) *keyidxptr = channelref[j].pos;\n                getKeysFreeResult(&channels);\n                return ret;\n            }\n        }\n        getKeysFreeResult(&channels);\n    }\n    return ACL_OK;\n}\n\n/* Check if the key can be accessed by the client according to\n * the ACLs 
associated with the specified user, according to the\n * keyspec access flags.\n *\n * If the user can access the key, ACL_OK is returned, otherwise\n * ACL_DENIED_KEY is returned. */\nint ACLUserCheckKeyPerm(user *u, const char *key, int keylen, int flags) {\n    listIter li;\n    listNode *ln;\n\n    /* If there is no associated user, the connection can run anything. */\n    if (u == NULL) return ACL_OK;\n\n    /* Check all of the selectors */\n    listRewind(u->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        if (ACLSelectorCheckKey(s, key, keylen, flags) == ACL_OK) {\n            return ACL_OK;\n        }\n    }\n    return ACL_DENIED_KEY;\n}\n\n/* Checks if the user can execute the given command with the added restriction\n * that it must also have the access specified in flags to any key in the key space. \n * For example, CMD_KEY_READ access requires either '%R~*', '~*', or allkeys to be \n * granted in addition to the access required by the command. Returns 1 \n * if the user has access or 0 otherwise.\n */\nint ACLUserCheckCmdWithUnrestrictedKeyAccess(user *u, struct redisCommand *cmd, robj **argv, int argc, int flags) {\n    listIter li;\n    listNode *ln;\n    int local_idxptr;\n\n    /* If there is no associated user, the connection can run anything. */\n    if (u == NULL) return 1;\n\n    /* For multiple selectors, we cache the key result in between selector\n     * calls to prevent duplicate lookups. 
*/\n    aclKeyResultCache cache;\n    initACLKeyResultCache(&cache);\n\n    /* Check each selector sequentially */\n    listRewind(u->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        int acl_retval = ACLSelectorCheckCmd(s, cmd, argv, argc, &local_idxptr, &cache);\n        if (acl_retval == ACL_OK && ACLSelectorHasUnrestrictedKeyAccess(s, flags)) {\n            cleanupACLKeyResultCache(&cache);\n            return 1;\n        }\n    }\n    cleanupACLKeyResultCache(&cache);\n    return 0;\n}\n\n/* Check if the channel can be accessed by the client according to\n * the ACLs associated with the specified user.\n *\n * If the user can access the channel, ACL_OK is returned, otherwise\n * ACL_DENIED_CHANNEL is returned. */\nint ACLUserCheckChannelPerm(user *u, sds channel, int is_pattern) {\n    listIter li;\n    listNode *ln;\n\n    /* If there is no associated user, the connection can run anything. */\n    if (u == NULL) return ACL_OK;\n\n    /* Check all of the selectors */\n    listRewind(u->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        /* The selector can access any channel */\n        if (s->flags & SELECTOR_FLAG_ALLCHANNELS) return ACL_OK;\n\n        /* Otherwise, check the channel against this selector's channel list */\n        if (ACLCheckChannelAgainstList(s->channels, channel, sdslen(channel), is_pattern) == ACL_OK) {\n            return ACL_OK;\n        }\n    }\n    return ACL_DENIED_CHANNEL;\n}\n\n/* Lower level API that checks if a specified user is able to execute a given command.\n *\n * If the command fails an ACL check, idxptr will be set to the first argv entry that\n * causes the failure, either 0 if the command itself fails or the idx of the key/channel\n * that causes the failure */\nint ACLCheckAllUserCommandPerm(user *u, struct redisCommand *cmd, robj **argv, int argc, getKeysResult *key_result, int *idxptr) 
{\n    listIter li;\n    listNode *ln;\n\n    /* If there is no associated user, the connection can run anything. */\n    if (u == NULL) return ACL_OK;\n\n    /* Quick check if the user has all permissions, return early if so. */\n    if (likely(listFirst(u->selectors) != NULL)) {\n        aclSelector *s = listNodeValue(listFirst(u->selectors));\n        const uint32_t all_perms = SELECTOR_FLAG_ALLCOMMANDS |\n                                   SELECTOR_FLAG_ALLKEYS |\n                                   SELECTOR_FLAG_ALLCHANNELS;\n        if ((s->flags & all_perms) == all_perms) return ACL_OK;\n    }\n\n    /* We have to pick a single error to log, the logic for picking is as follows:\n     * 1) If no selector can execute the command, return the command.\n     * 2) Return the last key or channel that no selector could match. */\n    int relevant_error = ACL_DENIED_CMD;\n    int local_idxptr = 0, last_idx = 0;\n\n    /* For multiple selectors, we cache the key result in between selector\n     * calls to prevent duplicate lookups. 
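That picking rule can be sketched in isolation (the numeric codes in the test are hypothetical stand-ins for the ACL_DENIED_* constants; this assumes higher-valued codes are the more specific errors, matching the comparison used in the loop):

```c
#include <assert.h>

// Keep the highest-valued error seen so far; among equal errors, keep
// the one with the largest failing argument index. Codes are illustrative.
static void pickRelevantError(int *relevant_error, int *last_idx,
                              int acl_retval, int local_idx) {
    if (acl_retval > *relevant_error ||
        (acl_retval == *relevant_error && local_idx > *last_idx)) {
        *relevant_error = acl_retval;
        *last_idx = local_idx;
    }
}
```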
*/\n    aclKeyResultCache cache;\n    initACLKeyResultCache(&cache);\n    if (key_result) {\n        cache.keys = *key_result;\n        cache.keys_init = 1;\n    }\n\n    /* Check each selector sequentially */\n    listRewind(u->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        int acl_retval = ACLSelectorCheckCmd(s, cmd, argv, argc, &local_idxptr, &cache);\n        if (acl_retval == ACL_OK) {\n            if (!key_result) cleanupACLKeyResultCache(&cache);\n            return ACL_OK;\n        }\n        if (acl_retval > relevant_error ||\n            (acl_retval == relevant_error && local_idxptr > last_idx))\n        {\n            relevant_error = acl_retval;\n            last_idx = local_idxptr;\n        }\n    }\n\n    *idxptr = last_idx;\n    if (!key_result) cleanupACLKeyResultCache(&cache);\n    return relevant_error;\n}\n\n/* High level API for checking if a client can execute the queued up command */\nint ACLCheckAllPerm(client *c, int *idxptr) {\n    return ACLCheckAllUserCommandPerm(c->user, c->cmd, c->argv, c->argc, getClientCachedKeyResult(c), idxptr);\n}\n\n/* If 'new' can access all channels 'original' could, then return NULL;\n   otherwise return a list of channels that the new user can access */\nlist *getUpcomingChannelList(user *new, user *original) {\n    listIter li, lpi;\n    listNode *ln, *lpn;\n\n    /* Optimization: we check if any selector has all channel permissions. */\n    listRewind(new->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        if (s->flags & SELECTOR_FLAG_ALLCHANNELS) return NULL;\n    }\n\n    /* Next, check if the new list of channels\n     * is a strict superset of the original. This is done by\n     * creating an \"upcoming\" list of all channels that are in\n     * the new user and checking each of the existing channels\n     * against it.  
*/\n    list *upcoming = listCreate();\n    listRewind(new->selectors,&li);\n    while((ln = listNext(&li))) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        listRewind(s->channels, &lpi);\n        while((lpn = listNext(&lpi))) {\n            listAddNodeTail(upcoming, listNodeValue(lpn));\n        }\n    }\n\n    int match = 1;\n    listRewind(original->selectors,&li);\n    while((ln = listNext(&li)) && match) {\n        aclSelector *s = (aclSelector *) listNodeValue(ln);\n        /* If any of the original selectors has the all-channels permission, but\n         * the new ones don't (this is checked earlier in this function), then the\n         * new list is not a strict superset of the original. */\n        if (s->flags & SELECTOR_FLAG_ALLCHANNELS) {\n            match = 0;\n            break;\n        }\n        listRewind(s->channels, &lpi);\n        while((lpn = listNext(&lpi)) && match) {\n            if (!listSearchKey(upcoming, listNodeValue(lpn))) {\n                match = 0;\n                break;\n            }\n        }\n    }\n\n    if (match) {\n        /* All channels were matched, no need to kill clients. */\n        listRelease(upcoming);\n        return NULL;\n    }\n\n    return upcoming;\n}\n\n/* Check if the client should be killed because it is subscribed to channels that were\n * permitted in the past but are not in the `upcoming` channel list. */\nint ACLShouldKillPubsubClient(client *c, list *upcoming) {\n    robj *o;\n    int kill = 0;\n\n    if (getClientType(c) == CLIENT_TYPE_PUBSUB) {\n        /* Check for pattern violations. 
*/\n        dictIterator di;\n        dictEntry *de;\n        dictInitIterator(&di, c->pubsub_patterns);\n        while (!kill && ((de = dictNext(&di)) != NULL)) {\n            o = dictGetKey(de);\n            int res = ACLCheckChannelAgainstList(upcoming, o->ptr, sdslen(o->ptr), 1);\n            kill = (res == ACL_DENIED_CHANNEL);\n        }\n        dictResetIterator(&di);\n\n        /* Check for channel violations. */\n        if (!kill) {\n            /* Check for global channels violation. */\n            dictInitIterator(&di, c->pubsub_channels);\n\n            while (!kill && ((de = dictNext(&di)) != NULL)) {\n                o = dictGetKey(de);\n                int res = ACLCheckChannelAgainstList(upcoming, o->ptr, sdslen(o->ptr), 0);\n                kill = (res == ACL_DENIED_CHANNEL);\n            }\n            dictResetIterator(&di);\n        }\n        if (!kill) {\n            /* Check for shard channels violation. */\n            dictInitIterator(&di, c->pubsubshard_channels);\n            while (!kill && ((de = dictNext(&di)) != NULL)) {\n                o = dictGetKey(de);\n                int res = ACLCheckChannelAgainstList(upcoming, o->ptr, sdslen(o->ptr), 0);\n                kill = (res == ACL_DENIED_CHANNEL);\n            }\n            dictResetIterator(&di);\n        }\n\n        if (kill) {\n            return 1;\n        }\n    }\n    return 0;\n}\n\n/* Check if the user's existing pub/sub clients violate the ACL pub/sub\n * permissions specified via the upcoming argument, and kill them if so. */\nvoid ACLKillPubsubClientsIfNeeded(user *new, user *original) {\n    /* Do nothing if there are no subscribers. */\n    if (pubsubTotalSubscriptions() == 0)\n        return;\n\n    list *channels = getUpcomingChannelList(new, original);\n    /* If the new user's pubsub permissions are a strict superset of the original, return early. 
*/\n    if (!channels)\n        return;\n\n    listIter li;\n    listNode *ln;\n\n    /* Permissions have changed, so we need to iterate through all\n     * the clients and disconnect those that are no longer valid.\n     * Scan all connected clients to find the user's pub/subs. */\n    listRewind(server.clients,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n        if (c->user != original)\n            continue;\n        if (ACLShouldKillPubsubClient(c, channels))\n            deauthenticateAndCloseClient(c);\n    }\n\n    listRelease(channels);\n}\n\n/* =============================================================================\n * ACL loading / saving functions\n * ==========================================================================*/\n\n\n/* Selector definitions should be sent as a single argument; however,\n * we will be lenient and try to find selector definitions spread\n * across multiple arguments, since it makes for a simpler user experience\n * for ACL SETUSER as well as when loading from conf files.\n *\n * This function takes in an array of ACL operators, excluding the username,\n * and merges selector operations that are spread across multiple arguments. The return\n * value is a new SDS array, with its length stored in the passed-in merged_argc. Arguments\n * that are untouched are still duplicated. If there is an unmatched parenthesis, NULL\n * is returned and invalid_idx is set to the index of the argument containing the opening\n * parenthesis. 
*/\nsds *ACLMergeSelectorArguments(sds *argv, int argc, int *merged_argc, int *invalid_idx) {\n    *merged_argc = 0;\n    int open_bracket_start = -1;\n\n    sds *acl_args = (sds *) zmalloc(sizeof(sds) * argc);\n\n    sds selector = NULL;\n    for (int j = 0; j < argc; j++) {\n        char *op = argv[j];\n\n        if (open_bracket_start == -1 &&\n            (op[0] == '(' && op[sdslen(op) - 1] != ')')) {\n            selector = sdsdup(argv[j]);\n            open_bracket_start = j;\n            continue;\n        }\n\n        if (open_bracket_start != -1) {\n            selector = sdscatfmt(selector, \" %s\", op);\n            if (op[sdslen(op) - 1] == ')') {\n                open_bracket_start = -1;\n                acl_args[*merged_argc] = selector;\n                (*merged_argc)++;\n            }\n            continue;\n        }\n\n        acl_args[*merged_argc] = sdsdup(argv[j]);\n        (*merged_argc)++;\n    }\n\n    if (open_bracket_start != -1) {\n        for (int i = 0; i < *merged_argc; i++) sdsfree(acl_args[i]);\n        zfree(acl_args);\n        sdsfree(selector);\n        if (invalid_idx) *invalid_idx = open_bracket_start;\n        return NULL;\n    }\n\n    return acl_args;\n}\n\n/* Takes an ACL string already split on spaces and applies it to the given\n * user. If the user object is NULL, a user with the given username will be\n * created.\n *\n * Returns an error as an SDS string if the ACL string is not parsable.\n */\nsds ACLStringSetUser(user *u, sds username, sds *argv, int argc) {\n    serverAssert(u != NULL || username != NULL);\n\n    sds error = NULL;\n\n    int merged_argc = 0, invalid_idx = 0;\n    sds *acl_args = ACLMergeSelectorArguments(argv, argc, &merged_argc, &invalid_idx);\n\n    if (!acl_args) {\n        return sdscatfmt(sdsempty(),\n                         \"Unmatched parenthesis in acl selector starting \"\n                         \"at '%s'.\", (char *) argv[invalid_idx]);\n    }\n\n    /* Create a temporary user 
to validate and stage all changes against\n     * before applying to an existing user or creating a new user. If all\n     * arguments are valid the user parameters will all be applied together.\n     * If there are any errors then none of the changes will be applied. */\n    user *tempu = ACLCreateUnlinkedUser();\n    if (u) {\n        ACLCopyUser(tempu, u);\n    }\n\n    for (int j = 0; j < merged_argc; j++) {\n        if (ACLSetUser(tempu,acl_args[j],(ssize_t) sdslen(acl_args[j])) != C_OK) {\n            const char *errmsg = ACLSetUserStringError();\n            error = sdscatfmt(sdsempty(),\n                              \"Error in ACL SETUSER modifier '%s': %s\",\n                              (char*)acl_args[j], errmsg);\n            goto cleanup;\n        }\n    }\n\n    /* Existing pub/sub clients authenticated with the user may need to be\n     * disconnected if (some of) their channel permissions were revoked. */\n    if (u) {\n        ACLKillPubsubClientsIfNeeded(tempu, u);\n    }\n\n    /* Overwrite the user with the temporary user we modified above. */\n    if (!u) {\n        u = ACLCreateUser(username,sdslen(username));\n    }\n    serverAssert(u != NULL);\n\n    ACLCopyUser(u, tempu);\n\ncleanup:\n    ACLFreeUser(tempu);\n    for (int i = 0; i < merged_argc; i++) {\n        sdsfree(acl_args[i]);\n    }\n    zfree(acl_args);\n\n    return error;\n}\n\n/* Given an argument vector describing a user in the form:\n *\n *      user <username> ... 
ACL rules and flags ...\n *\n * this function validates, and if the syntax is valid, appends\n * the user definition to a list for later loading.\n *\n * The rules are tested for validity and if there are obvious syntax errors\n * the function returns C_ERR and does nothing, otherwise C_OK is returned\n * and the user is appended to the list.\n *\n * Note that this function cannot stop in case of commands that are not found\n * and, in that case, the error will be emitted later, because certain\n * commands may be defined later once modules are loaded.\n *\n * When an error is detected and C_ERR is returned, the function populates\n * by reference (if not set to NULL) the argc_err argument with the index\n * of the argv vector that caused the error. */\nint ACLAppendUserForLoading(sds *argv, int argc, int *argc_err) {\n    if (argc < 2 || strcasecmp(argv[0],\"user\")) {\n        if (argc_err) *argc_err = 0;\n        return C_ERR;\n    }\n\n    if (listSearchKey(UsersToLoad, argv[1])) {\n        if (argc_err) *argc_err = 1;\n        errno = EALREADY;\n        return C_ERR;\n    }\n\n    /* Merge selectors before trying to process them */\n    int merged_argc;\n    sds *acl_args = ACLMergeSelectorArguments(argv + 2, argc - 2, &merged_argc, argc_err);\n\n    if (!acl_args) {\n        return C_ERR;\n    }\n\n    /* Try to apply the user rules in a fake user to see if they\n     * are actually valid. */\n    user *fakeuser = ACLCreateUnlinkedUser();\n\n    for (int j = 0; j < merged_argc; j++) {\n        if (ACLSetUser(fakeuser,acl_args[j],sdslen(acl_args[j])) == C_ERR) {\n            if (errno != ENOENT) {\n                ACLFreeUser(fakeuser);\n                if (argc_err) *argc_err = j;\n                for (int i = 0; i < merged_argc; i++) sdsfree(acl_args[i]);\n                zfree(acl_args);\n                return C_ERR;\n            }\n        }\n    }\n\n    /* Rules look valid, let's append the user to the list. 
*/\n    sds *copy = zmalloc(sizeof(sds)*(merged_argc + 2));\n    copy[0] = sdsdup(argv[1]);\n    for (int j = 0; j < merged_argc; j++) copy[j+1] = sdsdup(acl_args[j]);\n    copy[merged_argc + 1] = NULL;\n    listAddNodeTail(UsersToLoad,copy);\n    ACLFreeUser(fakeuser);\n    for (int i = 0; i < merged_argc; i++) sdsfree(acl_args[i]);\n    zfree(acl_args);\n    return C_OK;\n}\n\n/* This function will load the configured users appended to the server\n * configuration via ACLAppendUserForLoading(). On loading errors it will\n * log an error and return C_ERR, otherwise C_OK will be returned. */\nint ACLLoadConfiguredUsers(void) {\n    listIter li;\n    listNode *ln;\n    listRewind(UsersToLoad,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        sds *aclrules = listNodeValue(ln);\n        sds username = aclrules[0];\n\n        if (ACLStringHasSpaces(aclrules[0],sdslen(aclrules[0]))) {\n            serverLog(LL_WARNING,\"Spaces not allowed in ACL usernames\");\n            return C_ERR;\n        }\n\n        user *u = ACLCreateUser(username,sdslen(username));\n        if (!u) {\n            /* Only valid duplicate user is the default one. */\n            serverAssert(!strcmp(username, \"default\"));\n            u = ACLGetUserByName(\"default\",7);\n            ACLSetUser(u,\"reset\",-1);\n        }\n\n        /* Load every rule defined for this user. 
*/\n        for (int j = 1; aclrules[j]; j++) {\n            if (ACLSetUser(u,aclrules[j],sdslen(aclrules[j])) != C_OK) {\n                const char *errmsg = ACLSetUserStringError();\n                serverLog(LL_WARNING,\"Error loading ACL rule '%s' for \"\n                                     \"the user named '%s': %s\",\n                                     redactLogCstr(aclrules[j]),redactLogCstr(aclrules[0]),errmsg);\n                return C_ERR;\n            }\n        }\n\n        /* Having a disabled user in the configuration may be an error, so\n         * warn about it without returning any error to the caller. */\n        if (u->flags & USER_FLAG_DISABLED) {\n            serverLog(LL_NOTICE, \"The user '%s' is disabled (there is no \"\n                                 \"'on' modifier in the user description). Make \"\n                                 \"sure this is not a configuration error.\",\n                                 redactLogCstr(aclrules[0]));\n        }\n    }\n    return C_OK;\n}\n\n/* This function loads the ACL from the specified filename: every line\n * is validated and should be either empty, a comment, or in the format\n * used to specify users in the redis.conf configuration or in the ACL file,\n * that is:\n *\n *  user <username> ... rules ...\n *\n * Lines starting with '#' are treated as comments and ignored. Note that\n * comments will be lost after ACL SAVE rewrites the file. Empty lines are\n * also allowed.\n *\n * One important part of implementing ACL LOAD, which uses this function, is\n * to avoid ending with broken rules if the ACL file is invalid for some\n * reason, so the function will attempt to validate the rules before loading\n * each user. 
For every broken line found, the function will\n * collect an error message.\n *\n * IMPORTANT: If there is at least a single error, nothing will be loaded\n * and the rules will remain exactly as they were.\n *\n * At the end of the process, if no errors were found in the whole file then\n * NULL is returned. Otherwise an SDS string containing a single-line\n * description of all the issues found is returned. */\nsds ACLLoadFromFile(const char *filename) {\n    FILE *fp;\n    char buf[1024];\n\n    /* Open the ACL file. */\n    if ((fp = fopen(filename,\"r\")) == NULL) {\n        sds errors = sdscatprintf(sdsempty(),\n            \"Error loading ACLs, opening file '%s': %s\",\n            filename, strerror(errno));\n        return errors;\n    }\n\n    /* Load the whole file as a single string in memory. */\n    sds acls = sdsempty();\n    while(fgets(buf,sizeof(buf),fp) != NULL)\n        acls = sdscat(acls,buf);\n    fclose(fp);\n\n    /* Split the file into lines and attempt to load each line. */\n    int totlines;\n    sds *lines, errors = sdsempty();\n    lines = sdssplitlen(acls,strlen(acls),\"\\n\",1,&totlines);\n    sdsfree(acls);\n\n    /* We do all the loading in a fresh instance of the Users radix tree,\n     * so if there are errors loading the ACL file we can rollback to the\n     * old version. */\n    rax *old_users = Users;\n    Users = raxNew();\n\n    /* Load each line of the file. */\n    for (int i = 0; i < totlines; i++) {\n        sds *argv;\n        int argc;\n        int linenum = i+1;\n\n        lines[i] = sdstrim(lines[i],\" \\t\\r\\n\");\n\n        /* Skip blank lines and comments */\n        if (lines[i][0] == '\\0' || lines[i][0] == '#') continue;\n\n        /* Split into arguments */\n        argv = sdssplitlen(lines[i],sdslen(lines[i]),\" \",1,&argc);\n        if (argv == NULL) {\n            errors = sdscatprintf(errors,\n                     \"%s:%d: unbalanced quotes in acl line. 
\",\n                     server.acl_filename, linenum);\n            continue;\n        }\n\n        /* Skip this line if the resulting command vector is empty. */\n        if (argc == 0) {\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        /* The line should start with the \"user\" keyword. */\n        if (strcmp(argv[0],\"user\") || argc < 2) {\n            errors = sdscatprintf(errors,\n                     \"%s:%d should start with user keyword followed \"\n                     \"by the username. \", server.acl_filename,\n                     linenum);\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        /* Spaces are not allowed in usernames. */\n        if (ACLStringHasSpaces(argv[1],sdslen(argv[1]))) {\n            errors = sdscatprintf(errors,\n                     \"'%s:%d: username '%s' contains invalid characters. \",\n                     server.acl_filename, linenum, argv[1]);\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        user *u = ACLCreateUser(argv[1],sdslen(argv[1]));\n\n        /* If the user already exists we assume it's an error and abort. */\n        if (!u) {\n            errors = sdscatprintf(errors,\"WARNING: Duplicate user '%s' found on line %d. \", argv[1], linenum);\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        /* Finally process the options and validate they can\n         * be cleanly applied to the user. If any option fails\n         * to apply, the other values won't be applied since\n         * all the pending changes will get dropped. 
*/\n        int merged_argc;\n        sds *acl_args = ACLMergeSelectorArguments(argv + 2, argc - 2, &merged_argc, NULL);\n        if (!acl_args) {\n            errors = sdscatprintf(errors,\n                    \"%s:%d: Unmatched parenthesis in selector definition.\",\n                    server.acl_filename, linenum);\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        int syntax_error = 0;\n        for (int j = 0; j < merged_argc; j++) {\n            acl_args[j] = sdstrim(acl_args[j],\"\\t\\r\\n\");\n            if (ACLSetUser(u,acl_args[j],sdslen(acl_args[j])) != C_OK) {\n                const char *errmsg = ACLSetUserStringError();\n                if (errno == ENOENT) {\n                    /* For missing commands, we print out more information since\n                     * it shouldn't contain any sensitive information. */\n                    errors = sdscatprintf(errors,\n                            \"%s:%d: Error in applying operation '%s': %s. \",\n                            server.acl_filename, linenum, acl_args[j], errmsg);\n                } else if (syntax_error == 0) {\n                    /* For all other errors, only print out the first error encountered\n                     * since it might affect future operations. */\n                    errors = sdscatprintf(errors,\n                            \"%s:%d: %s. \",\n                            server.acl_filename, linenum, errmsg);\n                    syntax_error = 1;\n                }\n            }\n        }\n\n        for (int i = 0; i < merged_argc; i++) sdsfree(acl_args[i]);\n        zfree(acl_args);\n\n        /* Apply the rule to the new users set only if so far there\n         * are no errors, otherwise it's useless since we are going\n         * to discard the new users set anyway. 
*/\n        if (sdslen(errors) != 0) {\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        sdsfreesplitres(argv,argc);\n    }\n\n    sdsfreesplitres(lines,totlines);\n\n    /* Check if we found errors and react accordingly. */\n    if (sdslen(errors) == 0) {\n        /* The default user pointer is referenced in different places: instead\n         * of replacing such occurrences it is much simpler to copy the new\n         * default user configuration in the old one. */\n        user *new_default = ACLGetUserByName(\"default\",7);\n        if (!new_default) {\n            new_default = ACLCreateDefaultUser();\n        }\n\n        ACLCopyUser(DefaultUser,new_default);\n        ACLFreeUser(new_default);\n        raxInsert(Users,(unsigned char*)\"default\",7,DefaultUser,NULL);\n        raxRemove(old_users,(unsigned char*)\"default\",7,NULL);\n\n        /* If there are some subscribers, we need to check if we need to drop some clients. */\n        rax *user_channels = NULL;\n        if (pubsubTotalSubscriptions() > 0) {\n            user_channels = raxNew();\n        }\n\n        listIter li;\n        listNode *ln;\n\n        listRewind(server.clients,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            client *c = listNodeValue(ln);\n            /* a MASTER client can do everything (and user = NULL) so we can skip it */\n            if (c->flags & CLIENT_MASTER)\n                continue;\n            user *original = c->user;\n            list *channels = NULL;\n            user *new = ACLGetUserByName(c->user->name, sdslen(c->user->name));\n            if (new && user_channels) {\n                if (!raxFind(user_channels, (unsigned char*)(new->name), sdslen(new->name), (void**)&channels)) {\n                    channels = getUpcomingChannelList(new, original);\n                    raxInsert(user_channels, (unsigned char*)(new->name), sdslen(new->name), channels, NULL);\n                }\n            }\n            
/* When the new channel list is NULL, it means the new user's channel list is a superset of the old user's list. */\n            if (!new || (channels && ACLShouldKillPubsubClient(c, channels))) {\n                deauthenticateAndCloseClient(c);\n                continue;\n            }\n            c->user = new;\n        }\n\n        if (user_channels)\n            raxFreeWithCallback(user_channels, listReleaseGeneric);\n        raxFreeWithCallback(old_users, ACLFreeUserGeneric);\n        sdsfree(errors);\n        return NULL;\n    } else {\n        raxFreeWithCallback(Users, ACLFreeUserGeneric);\n        Users = old_users;\n        errors = sdscat(errors,\"WARNING: ACL errors detected, no change to the previously active ACL rules was performed\");\n        return errors;\n    }\n}\n\n/* Generate a copy of the ACLs currently in memory in the specified filename.\n * Returns C_OK on success or C_ERR if there was an error during the I/O.\n * When C_ERR is returned a log is produced with hints about the issue. */\nint ACLSaveToFile(const char *filename) {\n    sds acl = sdsempty();\n    int fd = -1;\n    sds tmpfilename = NULL;\n    int retval = C_ERR;\n\n    /* Let's generate an SDS string containing the new version of the\n     * ACL file. */\n    raxIterator ri;\n    raxStart(&ri,Users);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        user *u = ri.data;\n        /* Return information in the configuration file format. */\n        sds user = sdsnew(\"user \");\n        user = sdscatsds(user,u->name);\n        user = sdscatlen(user,\" \",1);\n        robj *descr = ACLDescribeUser(u);\n        user = sdscatsds(user,descr->ptr);\n        decrRefCount(descr);\n        acl = sdscatsds(acl,user);\n        acl = sdscatlen(acl,\"\\n\",1);\n        sdsfree(user);\n    }\n    raxStop(&ri);\n\n    /* Create a temp file with the new content. 
*/\n    tmpfilename = sdsnew(filename);\n    tmpfilename = sdscatfmt(tmpfilename,\".tmp-%i-%I\",\n        (int) getpid(),commandTimeSnapshot());\n    if ((fd = open(tmpfilename,O_WRONLY|O_CREAT,0644)) == -1) {\n        serverLog(LL_WARNING,\"Opening temp ACL file for ACL SAVE: %s\",\n            strerror(errno));\n        goto cleanup;\n    }\n\n    /* Write it. */\n    size_t offset = 0;\n    while (offset < sdslen(acl)) {\n        ssize_t written_bytes = write(fd,acl + offset,sdslen(acl) - offset);\n        if (written_bytes <= 0) {\n            if (errno == EINTR) continue;\n            serverLog(LL_WARNING,\"Writing ACL file for ACL SAVE: %s\",\n                strerror(errno));\n            goto cleanup;\n        }\n        offset += written_bytes;\n    }\n    if (redis_fsync(fd) == -1) {\n        serverLog(LL_WARNING,\"Syncing ACL file for ACL SAVE: %s\",\n            strerror(errno));\n        goto cleanup;\n    }\n    close(fd); fd = -1;\n\n    /* Let's replace the old file with the new one. */\n    if (rename(tmpfilename,filename) == -1) {\n        serverLog(LL_WARNING,\"Renaming ACL file for ACL SAVE: %s\",\n            strerror(errno));\n        goto cleanup;\n    }\n    if (fsyncFileDir(filename) == -1) {\n        serverLog(LL_WARNING,\"Syncing ACL directory for ACL SAVE: %s\",\n            strerror(errno));\n        goto cleanup;\n    }\n    sdsfree(tmpfilename); tmpfilename = NULL;\n    retval = C_OK; /* If we reached this point, everything is fine. */\n\ncleanup:\n    if (fd != -1) close(fd);\n    if (tmpfilename) unlink(tmpfilename);\n    sdsfree(tmpfilename);\n    sdsfree(acl);\n    return retval;\n}\n\n/* This function is called once the server is already running, modules are\n * loaded, and we are ready to start, in order to load the ACLs either from\n * the pending list of users defined in redis.conf, or from the ACL file.\n * The function will just exit with an error if the user is trying to mix\n * both the loading methods. 
*/\nvoid ACLLoadUsersAtStartup(void) {\n    if (server.acl_filename[0] != '\\0' && listLength(UsersToLoad) != 0) {\n        serverLog(LL_WARNING,\n            \"Configuring Redis with users defined in redis.conf and at \"\n            \"the same time setting an ACL file path is invalid. This setup \"\n            \"is very likely to lead to configuration errors and security \"\n            \"holes, please define either an ACL file or declare users \"\n            \"directly in your redis.conf, but not both.\");\n        exit(1);\n    }\n\n    if (ACLLoadConfiguredUsers() == C_ERR) {\n        serverLog(LL_WARNING,\n            \"Critical error while loading ACLs. Exiting.\");\n        exit(1);\n    }\n\n    if (server.acl_filename[0] != '\\0') {\n        sds errors = ACLLoadFromFile(server.acl_filename);\n        if (errors) {\n            serverLog(LL_WARNING,\n                \"Aborting Redis startup because of ACL errors: %s\", errors);\n            sdsfree(errors);\n            exit(1);\n        }\n    }\n}\n\n/* =============================================================================\n * ACL log\n * ==========================================================================*/\n\n#define ACL_LOG_GROUPING_MAX_TIME_DELTA 60000\n\n/* This structure defines an entry inside the ACL log. */\ntypedef struct ACLLogEntry {\n    uint64_t count;     /* Number of times this happened recently. */\n    int reason;         /* Reason for denying the command. ACL_DENIED_*. */\n    int context;        /* Toplevel, Lua or MULTI/EXEC? ACL_LOG_CTX_*. */\n    sds object;         /* The key name or command name. */\n    sds username;       /* User the client is authenticated with. */\n    mstime_t ctime;     /* Milliseconds time of last update to this entry. */\n    sds cinfo;          /* Client info (last client if updated). 
*/\n    long long entry_id;         /* The pair (entry_id, timestamp_created) is a unique identifier of this entry;\n                                  * in case the node dies and is restarted, consumers can detect that this is a new series. */\n    mstime_t timestamp_created; /* UNIX time in milliseconds at the time of this entry's creation. */\n} ACLLogEntry;\n\n/* This function will check if ACL entries 'a' and 'b' are similar enough\n * that we should actually update the existing entry in our ACL log instead\n * of creating a new one. */\nint ACLLogMatchEntry(ACLLogEntry *a, ACLLogEntry *b) {\n    if (a->reason != b->reason) return 0;\n    if (a->context != b->context) return 0;\n    mstime_t delta = a->ctime - b->ctime;\n    if (delta < 0) delta = -delta;\n    if (delta > ACL_LOG_GROUPING_MAX_TIME_DELTA) return 0;\n    if (sdscmp(a->object,b->object) != 0) return 0;\n    if (sdscmp(a->username,b->username) != 0) return 0;\n    return 1;\n}\n\n/* Release an ACL log entry. */\nvoid ACLFreeLogEntry(void *leptr) {\n    ACLLogEntry *le = leptr;\n    sdsfree(le->object);\n    sdsfree(le->username);\n    sdsfree(le->cinfo);\n    zfree(le);\n}\n\n/* Update the relevant counter according to the reason */\nvoid ACLUpdateInfoMetrics(int reason){\n    if (reason == ACL_DENIED_AUTH) {\n        server.acl_info.user_auth_failures++;\n    } else if (reason == ACL_DENIED_CMD) {\n        server.acl_info.invalid_cmd_accesses++;\n    } else if (reason == ACL_DENIED_KEY) {\n        server.acl_info.invalid_key_accesses++;\n    } else if (reason == ACL_DENIED_CHANNEL) {\n        server.acl_info.invalid_channel_accesses++;\n    } else if (reason == ACL_INVALID_TLS_CERT_AUTH) {\n        server.acl_info.acl_access_denied_tls_cert++;\n    } else {\n        serverPanic(\"Unknown ACL_DENIED encoding\");\n    }\n}\n\nstatic void trimACLLogEntriesToMaxLen(void) {\n    while(listLength(ACLLog) > server.acllog_max_len) {\n        listNode *ln = listLast(ACLLog);\n        ACLLogEntry *le = listNodeValue(ln);\n  
      ACLFreeLogEntry(le);\n        listDelNode(ACLLog,ln);\n    }\n}\n\n/* Adds a new entry in the ACL log, making sure to delete the old entry\n * if we reach the maximum length allowed for the log. This function attempts\n * to find similar entries in the current log in order to bump the counter of\n * the log entry instead of creating many entries for very similar ACL\n * rules issues.\n *\n * The argpos argument is used when the reason is ACL_DENIED_KEY or\n * ACL_DENIED_CHANNEL, since it allows the function to log the key or channel\n * name that caused the problem.\n *\n * The last two arguments are manual overrides to be used instead of the automatic\n * ones, which depend on the client and reason arguments (use NULL for the default).\n *\n * If `object` is not NULL, this function takes ownership of it.\n */\nvoid addACLLogEntry(client *c, int reason, int context, int argpos, sds username, sds object) {\n    /* Update ACL info metrics */\n    ACLUpdateInfoMetrics(reason);\n\n    if (server.acllog_max_len == 0) {\n        trimACLLogEntriesToMaxLen();\n        return;\n    }\n\n    /* Create a new entry. */\n    struct ACLLogEntry *le = zmalloc(sizeof(*le));\n    le->count = 1;\n    le->reason = reason;\n    le->username = sdsdup(username ? 
username : c->user->name);\n    le->ctime = commandTimeSnapshot();\n    le->entry_id = ACLLogEntryCount;\n    le->timestamp_created = le->ctime;\n\n    if (object) {\n        le->object = object;\n    } else {\n        switch(reason) {\n            case ACL_DENIED_CMD: le->object = sdsdup(c->cmd->fullname); break;\n            case ACL_DENIED_KEY: le->object = sdsdup(c->argv[argpos]->ptr); break;\n            case ACL_DENIED_CHANNEL: le->object = sdsdup(c->argv[argpos]->ptr); break;\n            case ACL_DENIED_AUTH: le->object = sdsdup(c->argv[0]->ptr); break;\n            default: le->object = sdsempty();\n        }\n    }\n\n    /* If we have a real client from the network, use it (it could be missing on module timers) */\n    client *realclient = server.current_client? server.current_client : c;\n\n    le->cinfo = catClientInfoString(sdsempty(),realclient);\n    le->context = context;\n\n    /* Try to match this entry with past ones, to see if we can just\n     * update an existing entry instead of creating a new one. */\n    long toscan = 10; /* Do a limited amount of work trying to find duplicates. */\n    listIter li;\n    listNode *ln;\n    listRewind(ACLLog,&li);\n    ACLLogEntry *match = NULL;\n    while (toscan-- && (ln = listNext(&li)) != NULL) {\n        ACLLogEntry *current = listNodeValue(ln);\n        if (ACLLogMatchEntry(current,le)) {\n            match = current;\n            listDelNode(ACLLog,ln);\n            listAddNodeHead(ACLLog,current);\n            break;\n        }\n    }\n\n    /* If there is a match, update the entry, otherwise add it as a\n     * new one. */\n    if (match) {\n        /* We update a few fields of the existing entry and bump the\n         * counter of events for this entry. */\n        sdsfree(match->cinfo);\n        match->cinfo = le->cinfo;\n        match->ctime = le->ctime;\n        match->count++;\n\n        /* Release the temporary new entry. 
*/\n        le->cinfo = NULL;\n        ACLFreeLogEntry(le);\n    } else {\n        /* Add it to our list of entries. We'll have to trim the list\n         * to its maximum size. */\n        ACLLogEntryCount++; /* Incrementing the entry_id count to make each record in the log unique. */\n        listAddNodeHead(ACLLog, le);\n        trimACLLogEntriesToMaxLen();\n    }\n}\n\nsds getAclErrorMessage(int acl_res, user *user, struct redisCommand *cmd, sds errored_val, int verbose) {\n    switch (acl_res) {\n    case ACL_DENIED_CMD:\n        return sdscatfmt(sdsempty(), \"User %S has no permissions to run \"\n                                     \"the '%S' command\", user->name, cmd->fullname);\n    case ACL_DENIED_KEY:\n        if (verbose) {\n            return sdscatfmt(sdsempty(), \"User %S has no permissions to access \"\n                                         \"the '%S' key\", user->name, errored_val);\n        } else {\n            return sdsnew(\"No permissions to access a key\");\n        }\n    case ACL_DENIED_CHANNEL:\n        if (verbose) {\n            return sdscatfmt(sdsempty(), \"User %S has no permissions to access \"\n                                         \"the '%S' channel\", user->name, errored_val);\n        } else {\n            return sdsnew(\"No permissions to access a channel\");\n        }\n    }\n    serverPanic(\"Reached deadcode on getAclErrorMessage\");\n}\n\n/* =============================================================================\n * ACL related commands\n * ==========================================================================*/\n\n/* ACL CAT category */\nvoid aclCatWithFlags(client *c, dict *commands, uint64_t cflag, int *arraylen) {\n    dictEntry *de;\n    dictIterator di;\n    dictInitIterator(&di, commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (cmd->acl_categories & cflag) {\n            addReplyBulkCBuffer(c, cmd->fullname, 
sdslen(cmd->fullname));\n            (*arraylen)++;\n        }\n\n        if (cmd->subcommands_dict) {\n            aclCatWithFlags(c, cmd->subcommands_dict, cflag, arraylen);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* Add the formatted response from a single selector to the ACL GETUSER\n * response. This function returns the number of fields added. */\nint aclAddReplySelectorDescription(client *c, aclSelector *s) {\n    listIter li;\n    listNode *ln;\n\n    /* Commands */\n    addReplyBulkCString(c,\"commands\");\n    sds cmddescr = ACLDescribeSelectorCommandRules(s);\n    addReplyBulkSds(c,cmddescr);\n\n    /* Key patterns */\n    addReplyBulkCString(c,\"keys\");\n    if (s->flags & SELECTOR_FLAG_ALLKEYS) {\n        addReplyBulkCBuffer(c,\"~*\",2);\n    } else {\n        sds dsl = sdsempty();\n        listRewind(s->patterns,&li);\n        while((ln = listNext(&li))) {\n            keyPattern *thispat = (keyPattern *) listNodeValue(ln);\n            if (ln != listFirst(s->patterns)) dsl = sdscat(dsl, \" \");\n            dsl = sdsCatPatternString(dsl, thispat);\n        }\n        addReplyBulkSds(c, dsl);\n    }\n\n    /* Pub/sub patterns */\n    addReplyBulkCString(c,\"channels\");\n    if (s->flags & SELECTOR_FLAG_ALLCHANNELS) {\n        addReplyBulkCBuffer(c,\"&*\",2);\n    } else {\n        sds dsl = sdsempty();\n        listRewind(s->channels,&li);\n        while((ln = listNext(&li))) {\n            sds thispat = listNodeValue(ln);\n            if (ln != listFirst(s->channels)) dsl = sdscat(dsl, \" \");\n            dsl = sdscatfmt(dsl, \"&%S\", thispat);\n        }\n        addReplyBulkSds(c, dsl);\n    }\n    return 3;\n}\n\n/* ACL -- show and modify the configuration of ACL users.\n * ACL HELP\n * ACL LOAD\n * ACL SAVE\n * ACL LIST\n * ACL USERS\n * ACL CAT [<category>]\n * ACL SETUSER <username> ... 
acl rules ...\n * ACL DELUSER <username> [...]\n * ACL GETUSER <username>\n * ACL GENPASS [<bits>]\n * ACL WHOAMI\n * ACL LOG [<count> | RESET]\n */\nvoid aclCommand(client *c) {\n    char *sub = c->argv[1]->ptr;\n    if (!strcasecmp(sub,\"setuser\") && c->argc >= 3) {\n        /* Initially redact all of the arguments to not leak any information\n         * about the user. */\n        for (int j = 2; j < c->argc; j++) {\n            redactClientCommandArgument(c, j);\n        }\n\n        sds username = c->argv[2]->ptr;\n        /* Check username validity. */\n        if (ACLStringHasSpaces(username,sdslen(username))) {\n            addReplyError(c, \"Usernames can't contain spaces or null characters\");\n            return;\n        }\n\n        user *u = ACLGetUserByName(username,sdslen(username));\n\n        sds *temp_argv = zmalloc(c->argc * sizeof(sds));\n        for (int i = 3; i < c->argc; i++) temp_argv[i-3] = c->argv[i]->ptr;\n\n        sds error = ACLStringSetUser(u, username, temp_argv, c->argc - 3);\n        zfree(temp_argv);\n        if (error == NULL) {\n            addReply(c,shared.ok);\n        } else {\n            addReplyErrorSdsSafe(c, error);\n        }\n        return;\n    } else if (!strcasecmp(sub,\"deluser\") && c->argc >= 3) {\n        /* Initially redact all the arguments to not leak any information\n         * about the users. 
*/\n        for (int j = 2; j < c->argc; j++) redactClientCommandArgument(c, j);\n\n        int deleted = 0;\n        for (int j = 2; j < c->argc; j++) {\n            sds username = c->argv[j]->ptr;\n            if (!strcmp(username,\"default\")) {\n                addReplyError(c,\"The 'default' user cannot be removed\");\n                return;\n            }\n        }\n\n        for (int j = 2; j < c->argc; j++) {\n            sds username = c->argv[j]->ptr;\n            user *u;\n            if (raxRemove(Users,(unsigned char*)username,\n                          sdslen(username),\n                          (void**)&u))\n            {\n                ACLFreeUserAndKillClients(u);\n                deleted++;\n            }\n        }\n        addReplyLongLong(c,deleted);\n    } else if (!strcasecmp(sub,\"getuser\") && c->argc == 3) {\n        /* Redact the username to not leak any information about the user. */\n        redactClientCommandArgument(c, 2);\n\n        user *u = ACLGetUserByName(c->argv[2]->ptr,sdslen(c->argv[2]->ptr));\n        if (u == NULL) {\n            addReplyNull(c);\n            return;\n        }\n\n        void *ufields = addReplyDeferredLen(c);\n        int fields = 3;\n\n        /* Flags */\n        addReplyBulkCString(c,\"flags\");\n        void *deflen = addReplyDeferredLen(c);\n        int numflags = 0;\n        for (int j = 0; ACLUserFlags[j].flag; j++) {\n            if (u->flags & ACLUserFlags[j].flag) {\n                addReplyBulkCString(c,ACLUserFlags[j].name);\n                numflags++;\n            }\n        }\n        setDeferredSetLen(c,deflen,numflags);\n\n        /* Passwords */\n        addReplyBulkCString(c,\"passwords\");\n        addReplyArrayLen(c,listLength(u->passwords));\n        listIter li;\n        listNode *ln;\n        listRewind(u->passwords,&li);\n        while((ln = listNext(&li))) {\n            sds thispass = listNodeValue(ln);\n            addReplyBulkCBuffer(c,thispass,sdslen(thispass));\n       
 }\n        /* Include the root selector at the top level for backwards compatibility */\n        fields += aclAddReplySelectorDescription(c, ACLUserGetRootSelector(u));\n\n        /* Describe all of the selectors on this user, including duplicating the root selector */\n        addReplyBulkCString(c,\"selectors\");\n        addReplyArrayLen(c, listLength(u->selectors) - 1);\n        listRewind(u->selectors,&li);\n        serverAssert(listNext(&li));\n        while((ln = listNext(&li))) {\n            void *slen = addReplyDeferredLen(c);\n            int sfields = aclAddReplySelectorDescription(c, (aclSelector *)listNodeValue(ln));\n            setDeferredMapLen(c, slen, sfields);\n        } \n        setDeferredMapLen(c, ufields, fields);\n    } else if ((!strcasecmp(sub,\"list\") || !strcasecmp(sub,\"users\")) &&\n               c->argc == 2)\n    {\n        int justnames = !strcasecmp(sub,\"users\");\n        addReplyArrayLen(c,raxSize(Users));\n        raxIterator ri;\n        raxStart(&ri,Users);\n        raxSeek(&ri,\"^\",NULL,0);\n        while(raxNext(&ri)) {\n            user *u = ri.data;\n            if (justnames) {\n                addReplyBulkCBuffer(c,u->name,sdslen(u->name));\n            } else {\n                /* Return information in the configuration file format. 
*/\n                sds config = sdsnew(\"user \");\n                config = sdscatsds(config,u->name);\n                config = sdscatlen(config,\" \",1);\n                robj *descr = ACLDescribeUser(u);\n                config = sdscatsds(config,descr->ptr);\n                decrRefCount(descr);\n                addReplyBulkSds(c,config);\n            }\n        }\n        raxStop(&ri);\n    } else if (!strcasecmp(sub,\"whoami\") && c->argc == 2) {\n        if (c->user != NULL) {\n            addReplyBulkCBuffer(c,c->user->name,sdslen(c->user->name));\n        } else {\n            addReplyNull(c);\n        }\n    } else if (server.acl_filename[0] == '\\0' &&\n               (!strcasecmp(sub,\"load\") || !strcasecmp(sub,\"save\")))\n    {\n        addReplyError(c,\"This Redis instance is not configured to use an ACL file. You may want to specify users via the ACL SETUSER command and then issue a CONFIG REWRITE (assuming you have a Redis configuration file set) in order to store users in the Redis configuration.\");\n        return;\n    } else if (!strcasecmp(sub,\"load\") && c->argc == 2) {\n        sds errors = ACLLoadFromFile(server.acl_filename);\n        if (errors == NULL) {\n            addReply(c,shared.ok);\n        } else {\n            addReplyError(c,errors);\n            sdsfree(errors);\n        }\n    } else if (!strcasecmp(sub,\"save\") && c->argc == 2) {\n        if (ACLSaveToFile(server.acl_filename) == C_OK) {\n            addReply(c,shared.ok);\n        } else {\n            addReplyError(c,\"There was an error trying to save the ACLs. 
\"\n                            \"Please check the server logs for more \"\n                            \"information\");\n        }\n    } else if (!strcasecmp(sub,\"cat\") && c->argc == 2) {\n        void *dl = addReplyDeferredLen(c);\n        int j;\n        for (j = 0; ACLCommandCategories[j].flag != 0; j++)\n            addReplyBulkCString(c,ACLCommandCategories[j].name);\n        setDeferredArrayLen(c,dl,j);\n    } else if (!strcasecmp(sub,\"cat\") && c->argc == 3) {\n        uint64_t cflag = ACLGetCommandCategoryFlagByName(c->argv[2]->ptr);\n        if (cflag == 0) {\n            addReplyErrorFormat(c, \"Unknown category '%.128s'\", (char*)c->argv[2]->ptr);\n            return;\n        }\n        int arraylen = 0;\n        void *dl = addReplyDeferredLen(c);\n        aclCatWithFlags(c, server.orig_commands, cflag, &arraylen);\n        setDeferredArrayLen(c,dl,arraylen);\n    } else if (!strcasecmp(sub,\"genpass\") && (c->argc == 2 || c->argc == 3)) {\n        #define GENPASS_MAX_BITS 4096\n        char pass[GENPASS_MAX_BITS/8*2]; /* Hex representation. */\n        long bits = 256; /* By default generate 256 bits passwords. */\n\n        if (c->argc == 3 && getLongFromObjectOrReply(c,c->argv[2],&bits,NULL)\n            != C_OK) return;\n\n        if (bits <= 0 || bits > GENPASS_MAX_BITS) {\n            addReplyErrorFormat(c,\n                \"ACL GENPASS argument must be the number of \"\n                \"bits for the output password, a positive number \"\n                \"up to %d\",GENPASS_MAX_BITS);\n            return;\n        }\n\n        long chars = (bits+3)/4; /* Round to number of characters to emit. */\n        getRandomHexChars(pass,chars);\n        addReplyBulkCBuffer(c,pass,chars);\n    } else if (!strcasecmp(sub,\"log\") && (c->argc == 2 || c->argc ==3)) {\n        long count = 10; /* Number of entries to emit by default. 
*/\n\n        /* Parse the only argument that LOG may have: it could be either\n         * the number of entries the user wants to display, or alternatively\n         * the \"RESET\" command in order to flush the old entries. */\n        if (c->argc == 3) {\n            if (!strcasecmp(c->argv[2]->ptr,\"reset\")) {\n                listSetFreeMethod(ACLLog,ACLFreeLogEntry);\n                listEmpty(ACLLog);\n                listSetFreeMethod(ACLLog,NULL);\n                addReply(c,shared.ok);\n                return;\n            } else if (getLongFromObjectOrReply(c,c->argv[2],&count,NULL)\n                       != C_OK)\n            {\n                return;\n            }\n            if (count < 0) count = 0;\n        }\n\n        /* Fix the count according to the number of entries we got. */\n        if ((size_t)count > listLength(ACLLog))\n            count = listLength(ACLLog);\n\n        addReplyArrayLen(c,count);\n        listIter li;\n        listNode *ln;\n        listRewind(ACLLog,&li);\n        mstime_t now = commandTimeSnapshot();\n        while (count-- && (ln = listNext(&li)) != NULL) {\n            ACLLogEntry *le = listNodeValue(ln);\n            addReplyMapLen(c,10);\n            addReplyBulkCString(c,\"count\");\n            addReplyLongLong(c,le->count);\n\n            addReplyBulkCString(c,\"reason\");\n            char *reasonstr;\n            switch(le->reason) {\n            case ACL_DENIED_CMD: reasonstr=\"command\"; break;\n            case ACL_DENIED_KEY: reasonstr=\"key\"; break;\n            case ACL_DENIED_CHANNEL: reasonstr=\"channel\"; break;\n            case ACL_DENIED_AUTH: reasonstr=\"auth\"; break;\n            case ACL_INVALID_TLS_CERT_AUTH: reasonstr = \"tls-cert\"; break;\n            default: reasonstr=\"unknown\";\n            }\n            addReplyBulkCString(c,reasonstr);\n\n            addReplyBulkCString(c,\"context\");\n            char *ctxstr;\n            switch(le->context) {\n            case 
ACL_LOG_CTX_TOPLEVEL: ctxstr=\"toplevel\"; break;\n            case ACL_LOG_CTX_MULTI: ctxstr=\"multi\"; break;\n            case ACL_LOG_CTX_LUA: ctxstr=\"lua\"; break;\n            case ACL_LOG_CTX_MODULE: ctxstr=\"module\"; break;\n            default: ctxstr=\"unknown\";\n            }\n            addReplyBulkCString(c,ctxstr);\n\n            addReplyBulkCString(c,\"object\");\n            addReplyBulkCBuffer(c,le->object,sdslen(le->object));\n            addReplyBulkCString(c,\"username\");\n            addReplyBulkCBuffer(c,le->username,sdslen(le->username));\n            addReplyBulkCString(c,\"age-seconds\");\n            double age = (double)(now - le->ctime)/1000;\n            addReplyDouble(c,age);\n            addReplyBulkCString(c,\"client-info\");\n            addReplyBulkCBuffer(c,le->cinfo,sdslen(le->cinfo));\n            addReplyBulkCString(c, \"entry-id\");\n            addReplyLongLong(c, le->entry_id);\n            addReplyBulkCString(c, \"timestamp-created\");\n            addReplyLongLong(c, le->timestamp_created);\n            addReplyBulkCString(c, \"timestamp-last-updated\");\n            addReplyLongLong(c, le->ctime);\n        }\n    } else if (!strcasecmp(sub,\"dryrun\") && c->argc >= 4) {\n        struct redisCommand *cmd;\n        user *u = ACLGetUserByName(c->argv[2]->ptr,sdslen(c->argv[2]->ptr));\n        if (u == NULL) {\n            addReplyErrorFormat(c, \"User '%s' not found\", (char *)c->argv[2]->ptr);\n            return;\n        }\n\n        if ((cmd = lookupCommand(c->argv + 3, c->argc - 3)) == NULL) {\n            addReplyErrorFormat(c, \"Command '%s' not found\", (char *)c->argv[3]->ptr);\n            return;\n        }\n\n        if ((cmd->arity > 0 && cmd->arity != c->argc-3) ||\n            (c->argc-3 < -cmd->arity))\n        {\n            addReplyErrorFormat(c,\"wrong number of arguments for '%s' command\", cmd->fullname);\n            return;\n        }\n\n        int idx;\n        int result = 
ACLCheckAllUserCommandPerm(u, cmd, c->argv + 3, c->argc - 3, NULL, &idx);\n        if (result != ACL_OK) {\n            sds err = getAclErrorMessage(result, u, cmd,  c->argv[idx+3]->ptr, 1);\n            addReplyBulkSds(c, err);\n            return;\n        }\n\n        addReply(c,shared.ok);\n    } else if (c->argc == 2 && !strcasecmp(sub,\"help\")) {\n        const char *help[] = {\n\"CAT [<category>]\",\n\"    List all commands that belong to <category>, or all command categories\",\n\"    when no category is specified.\",\n\"DELUSER <username> [<username> ...]\",\n\"    Delete a list of users.\",\n\"DRYRUN <username> <command> [<arg> ...]\",\n\"    Returns whether the user can execute the given command without executing the command.\",\n\"GETUSER <username>\",\n\"    Get the user's details.\",\n\"GENPASS [<bits>]\",\n\"    Generate a secure 256-bit user password. The optional `bits` argument can\",\n\"    be used to specify a different size.\",\n\"LIST\",\n\"    Show users details in config file format.\",\n\"LOAD\",\n\"    Reload users from the ACL file.\",\n\"LOG [<count> | RESET]\",\n\"    Show the ACL log entries.\",\n\"SAVE\",\n\"    Save the current config to the ACL file.\",\n\"SETUSER <username> <attribute> [<attribute> ...]\",\n\"    Create or modify a user with the specified attributes.\",\n\"USERS\",\n\"    List all the registered usernames.\",\n\"WHOAMI\",\n\"    Return the current connection username.\",\nNULL\n        };\n        addReplyHelp(c,help);\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\nvoid addReplyCommandCategories(client *c, struct redisCommand *cmd) {\n    int flagcount = 0;\n    void *flaglen = addReplyDeferredLen(c);\n    for (int j = 0; ACLCommandCategories[j].flag != 0; j++) {\n        if (cmd->acl_categories & ACLCommandCategories[j].flag) {\n            addReplyStatusFormat(c, \"@%s\", ACLCommandCategories[j].name);\n            flagcount++;\n        }\n    }\n    setDeferredSetLen(c, flaglen, 
flagcount);\n}\n\n/* When successful, initiates an internal connection, that is able to execute\n * internal commands (see CMD_INTERNAL). */\nstatic void internalAuth(client *c) {\n    if (!server.cluster_enabled) {\n        addReplyError(c, \"Cannot authenticate as an internal connection on non-cluster instances\");\n        return;\n    }\n\n    sds password = c->argv[2]->ptr;\n\n    /* Get internal secret. */\n    size_t len = -1;\n    const char *internal_secret = clusterGetSecret(&len);\n    if (sdslen(password) != len) {\n        addReplyError(c, \"-WRONGPASS invalid internal password\");\n        return;\n    }\n    if (!time_independent_strcmp((char *)internal_secret, (char *)password, len)) {\n        c->flags |= CLIENT_INTERNAL;\n        /* No further authentication is needed. */\n        c->authenticated = 1;\n        /* Set the user to the unrestricted user, if it is not already set (default). */\n        if (c->user != NULL) {\n            c->user = NULL;\n            moduleNotifyUserChanged(c);\n        }\n        addReply(c, shared.ok);\n    } else {\n        addReplyError(c, \"-WRONGPASS invalid internal password\");\n    }\n}\n\n/* AUTH <password>\n * AUTH <username> <password> (Redis >= 6.0 form)\n *\n * When the user is omitted it means that we are trying to authenticate\n * against the default user. */\nvoid authCommand(client *c) {\n    /* Only two or three argument forms are allowed. */\n    if (c->argc > 3) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n    /* Always redact the second argument */\n    redactClientCommandArgument(c, 1);\n\n    /* Handle the two different forms here. The form with two arguments\n     * will just use \"default\" as username. */\n    robj *username, *password;\n    if (c->argc == 2) {\n        /* Mimic the old behavior of giving an error for the two argument\n         * form if no password is configured. 
*/\n        if (DefaultUser->flags & USER_FLAG_NOPASS) {\n            addReplyError(c,\"AUTH <password> called without any password \"\n                            \"configured for the default user. Are you sure \"\n                            \"your configuration is correct?\");\n            return;\n        }\n\n        username = shared.default_username;\n        password = c->argv[1];\n    } else {\n        username = c->argv[1];\n        password = c->argv[2];\n        redactClientCommandArgument(c, 2);\n\n        /* Handle internal authentication commands.\n         * Note: No user-defined ACL user can have this username (no spaces\n         * allowed), thus no conflicts with ACL possible. */\n        if (!strcmp(username->ptr, \"internal connection\")) {\n            internalAuth(c);\n            return;\n        }\n    }\n\n    robj *err = NULL;\n    int result = ACLAuthenticateUser(c, username, password, &err);\n    if (result == AUTH_OK) {\n        addReply(c, shared.ok);\n    } else if (result == AUTH_ERR) {\n        addAuthErrReply(c, err);\n    }\n    if (err) decrRefCount(err);\n}\n\n/* Set the password for the \"default\" ACL user. This implements support for the\n * requirepass config, so passing in NULL will set the user to be nopass. */\nvoid ACLUpdateDefaultUserPassword(sds password) {\n    ACLSetUser(DefaultUser,\"resetpass\",-1);\n    if (password) {\n        sds aclop = sdscatlen(sdsnew(\">\"), password, sdslen(password));\n        ACLSetUser(DefaultUser,aclop,sdslen(aclop));\n        sdsfree(aclop);\n    } else {\n        ACLSetUser(DefaultUser,\"nopass\",-1);\n    }\n}\n"
  },
  {
    "path": "src/adlist.c",
    "content": "/* adlist.c - A generic doubly linked list implementation\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n\n#include <stdlib.h>\n#include \"adlist.h\"\n#include \"zmalloc.h\"\n\n/* Create a new list. The created list can be freed with\n * listRelease(), but private value of every node need to be freed\n * by the user before to call listRelease(), or by setting a free method using\n * listSetFreeMethod.\n *\n * On error, NULL is returned. Otherwise the pointer to the new list. */\nlist *listCreate(void)\n{\n    struct list *list;\n\n    if ((list = zmalloc(sizeof(*list))) == NULL)\n        return NULL;\n    list->head = list->tail = NULL;\n    list->len = 0;\n    list->dup = NULL;\n    list->free = NULL;\n    list->match = NULL;\n    return list;\n}\n\n/* Remove all the elements from the list without destroying the list itself. */\nvoid listEmpty(list *list)\n{\n    unsigned long len;\n    listNode *current, *next;\n\n    current = list->head;\n    len = list->len;\n    while(len--) {\n        next = current->next;\n        if (list->free) list->free(current->value);\n        zfree(current);\n        current = next;\n    }\n    list->head = list->tail = NULL;\n    list->len = 0;\n}\n\n/* Free the whole list.\n *\n * This function can't fail. */\nvoid listRelease(list *list)\n{\n    if (!list)\n        return;\n    listEmpty(list);\n    zfree(list);\n}\n\n/* Generic version of listRelease. */\nvoid listReleaseGeneric(void *list) {\n    listRelease((struct list*)list);\n}\n\n/* Add a new node to the list, to head, containing the specified 'value'\n * pointer as value.\n *\n * On error, NULL is returned and no operation is performed (i.e. 
the\n * list remains unaltered).\n * On success the 'list' pointer you pass to the function is returned. */\nlist *listAddNodeHead(list *list, void *value)\n{\n    listNode *node;\n\n    if ((node = zmalloc(sizeof(*node))) == NULL)\n        return NULL;\n    node->value = value;\n    listLinkNodeHead(list, node);\n    return list;\n}\n\n/*\n * Add a node that has already been allocated to the head of list\n */\nvoid listLinkNodeHead(list* list, listNode *node) {\n    if (list->len == 0) {\n        list->head = list->tail = node;\n        node->prev = node->next = NULL;\n    } else {\n        node->prev = NULL;\n        node->next = list->head;\n        list->head->prev = node;\n        list->head = node;\n    }\n    list->len++;\n}\n\n/* Add a new node to the list, to tail, containing the specified 'value'\n * pointer as value.\n *\n * On error, NULL is returned and no operation is performed (i.e. the\n * list remains unaltered).\n * On success the 'list' pointer you pass to the function is returned. 
*/\nlist *listAddNodeTail(list *list, void *value)\n{\n    listNode *node;\n\n    if ((node = zmalloc(sizeof(*node))) == NULL)\n        return NULL;\n    node->value = value;\n    listLinkNodeTail(list, node);\n    return list;\n}\n\n/*\n * Add a node that has already been allocated to the tail of list\n */\nvoid listLinkNodeTail(list *list, listNode *node) {\n    if (list->len == 0) {\n        list->head = list->tail = node;\n        node->prev = node->next = NULL;\n    } else {\n        node->prev = list->tail;\n        node->next = NULL;\n        list->tail->next = node;\n        list->tail = node;\n    }\n    list->len++;\n}\n\nlist *listInsertNode(list *list, listNode *old_node, void *value, int after) {\n    listNode *node;\n\n    if ((node = zmalloc(sizeof(*node))) == NULL)\n        return NULL;\n    node->value = value;\n    if (after) {\n        node->prev = old_node;\n        node->next = old_node->next;\n        if (list->tail == old_node) {\n            list->tail = node;\n        }\n    } else {\n        node->next = old_node;\n        node->prev = old_node->prev;\n        if (list->head == old_node) {\n            list->head = node;\n        }\n    }\n    if (node->prev != NULL) {\n        node->prev->next = node;\n    }\n    if (node->next != NULL) {\n        node->next->prev = node;\n    }\n    list->len++;\n    return list;\n}\n\n/* Remove the specified node from the specified list.\n * The node is freed. If free callback is provided the value is freed as well.\n *\n * This function can't fail. 
*/\nvoid listDelNode(list *list, listNode *node)\n{\n    listUnlinkNode(list, node);\n    if (list->free) list->free(node->value);\n    zfree(node);\n}\n\n/*\n * Remove the specified node from the list without freeing it.\n */\nvoid listUnlinkNode(list *list, listNode *node) {\n    if (node->prev)\n        node->prev->next = node->next;\n    else\n        list->head = node->next;\n    if (node->next)\n        node->next->prev = node->prev;\n    else\n        list->tail = node->prev;\n\n    node->next = NULL;\n    node->prev = NULL;\n\n    list->len--;\n}\n\n/* Returns a list iterator 'iter'. After the initialization every\n * call to listNext() will return the next element of the list.\n *\n * This function can't fail. */\nvoid listInitIterator(listIter *iter, list *list, int direction)\n{\n    if (direction == AL_START_HEAD)\n        iter->next = list->head;\n    else\n        iter->next = list->tail;\n    iter->direction = direction;\n}\n\n/* Create an iterator in the list private iterator structure */\nvoid listRewind(list *list, listIter *li) {\n    li->next = list->head;\n    li->direction = AL_START_HEAD;\n}\n\nvoid listRewindTail(list *list, listIter *li) {\n    li->next = list->tail;\n    li->direction = AL_START_TAIL;\n}\n\n/* Return the next element of an iterator.\n * It's valid to remove the currently returned element using\n * listDelNode(), but not to remove other elements.\n *\n * The function returns a pointer to the next element of the list,\n * or NULL if there are no more elements, so the classical usage\n * pattern is:\n *\n * iter = listGetIterator(list,<direction>);\n * while ((node = listNext(iter)) != NULL) {\n *     doSomethingWith(listNodeValue(node));\n * }\n *\n * */\nlistNode *listNext(listIter *iter)\n{\n    listNode *current = iter->next;\n\n    if (current != NULL) {\n        if (iter->direction == AL_START_HEAD)\n            iter->next = current->next;\n        else\n            iter->next = current->prev;\n    }\n    return 
current;\n}\n\n/* Duplicate the whole list. On out of memory NULL is returned.\n * On success a copy of the original list is returned.\n *\n * The 'Dup' method set with the listSetDupMethod() function is used\n * to copy the node value. Otherwise the same pointer value of\n * the original node is used as value of the copied node.\n *\n * The original list is never modified, both on success and on error. */\nlist *listDup(list *orig)\n{\n    list *copy;\n    listIter iter;\n    listNode *node;\n\n    if ((copy = listCreate()) == NULL)\n        return NULL;\n    copy->dup = orig->dup;\n    copy->free = orig->free;\n    copy->match = orig->match;\n    listRewind(orig, &iter);\n    while((node = listNext(&iter)) != NULL) {\n        void *value;\n\n        if (copy->dup) {\n            value = copy->dup(node->value);\n            if (value == NULL) {\n                listRelease(copy);\n                return NULL;\n            }\n        } else {\n            value = node->value;\n        }\n\n        if (listAddNodeTail(copy, value) == NULL) {\n            /* Free the value if dup succeeded but listAddNodeTail failed. */\n            if (copy->free) copy->free(value);\n\n            listRelease(copy);\n            return NULL;\n        }\n    }\n    return copy;\n}\n\n/* Search the list for a node matching a given key.\n * The match is performed using the 'match' method\n * set with listSetMatchMethod(). If no 'match' method\n * is set, the 'value' pointer of every node is directly\n * compared with the 'key' pointer.\n *\n * On success the first matching node pointer is returned\n * (search starts from head). If no matching node exists\n * NULL is returned. 
*/\nlistNode *listSearchKey(list *list, void *key)\n{\n    listIter iter;\n    listNode *node;\n\n    listRewind(list, &iter);\n    while((node = listNext(&iter)) != NULL) {\n        if (list->match) {\n            if (list->match(node->value, key)) {\n                return node;\n            }\n        } else {\n            if (key == node->value) {\n                return node;\n            }\n        }\n    }\n    return NULL;\n}\n\n/* Return the element at the specified zero-based index\n * where 0 is the head, 1 is the element next to head\n * and so on. Negative integers are used in order to count\n * from the tail, -1 is the last element, -2 the penultimate\n * and so on. If the index is out of range NULL is returned. */\nlistNode *listIndex(list *list, long index) {\n    listNode *n;\n\n    if (index < 0) {\n        index = (-index)-1;\n        n = list->tail;\n        while(index-- && n) n = n->prev;\n    } else {\n        n = list->head;\n        while(index-- && n) n = n->next;\n    }\n    return n;\n}\n\n/* Rotate the list removing the tail node and inserting it to the head. */\nvoid listRotateTailToHead(list *list) {\n    if (listLength(list) <= 1) return;\n\n    /* Detach current tail */\n    listNode *tail = list->tail;\n    list->tail = tail->prev;\n    list->tail->next = NULL;\n    /* Move it as head */\n    list->head->prev = tail;\n    tail->prev = NULL;\n    tail->next = list->head;\n    list->head = tail;\n}\n\n/* Rotate the list removing the head node and inserting it to the tail. */\nvoid listRotateHeadToTail(list *list) {\n    if (listLength(list) <= 1) return;\n\n    listNode *head = list->head;\n    /* Detach current head */\n    list->head = head->next;\n    list->head->prev = NULL;\n    /* Move it as tail */\n    list->tail->next = head;\n    head->next = NULL;\n    head->prev = list->tail;\n    list->tail = head;\n}\n\n/* Add all the elements of the list 'o' at the end of the\n * list 'l'. 
The list 'other' remains empty but otherwise valid. */\nvoid listJoin(list *l, list *o) {\n    if (o->len == 0) return;\n\n    o->head->prev = l->tail;\n\n    if (l->tail)\n        l->tail->next = o->head;\n    else\n        l->head = o->head;\n\n    l->tail = o->tail;\n    l->len += o->len;\n\n    /* Setup other as an empty list. */\n    o->head = o->tail = NULL;\n    o->len = 0;\n}\n\n/* Initializes the node's value and sets its pointers\n * so that it is initially not a member of any list.\n */\nvoid listInitNode(listNode *node, void *value) {\n    node->prev = NULL;\n    node->next = NULL;\n    node->value = value;\n}\n"
  },
  {
    "path": "src/adlist.h",
    "content": "/* adlist.h - A generic doubly linked list implementation\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __ADLIST_H__\n#define __ADLIST_H__\n\n/* Node, List, and Iterator are the only data structures used currently. */\n\ntypedef struct listNode {\n    struct listNode *prev;\n    struct listNode *next;\n    void *value;\n} listNode;\n\ntypedef struct listIter {\n    listNode *next;\n    int direction;\n} listIter;\n\ntypedef struct list {\n    listNode *head;\n    listNode *tail;\n    void *(*dup)(void *ptr);\n    void (*free)(void *ptr);\n    int (*match)(void *ptr, void *key);\n    unsigned long len;\n} list;\n\n/* Functions implemented as macros */\n#define listLength(l) ((l)->len)\n#define listFirst(l) ((l)->head)\n#define listLast(l) ((l)->tail)\n#define listPrevNode(n) ((n)->prev)\n#define listNextNode(n) ((n)->next)\n#define listNodeValue(n) ((n)->value)\n\n#define listSetDupMethod(l,m) ((l)->dup = (m))\n#define listSetFreeMethod(l,m) ((l)->free = (m))\n#define listSetMatchMethod(l,m) ((l)->match = (m))\n\n#define listGetDupMethod(l) ((l)->dup)\n#define listGetFreeMethod(l) ((l)->free)\n#define listGetMatchMethod(l) ((l)->match)\n\n/* Prototypes */\nlist *listCreate(void);\nvoid listRelease(list *list);\nvoid listReleaseGeneric(void *list);\nvoid listEmpty(list *list);\nlist *listAddNodeHead(list *list, void *value);\nlist *listAddNodeTail(list *list, void *value);\nlist *listInsertNode(list *list, listNode *old_node, void *value, int after);\nvoid listDelNode(list *list, listNode *node);\nvoid listInitIterator(listIter *iter, list *list, int direction);\nlistNode *listNext(listIter *iter);\nlist *listDup(list *orig);\nlistNode *listSearchKey(list *list, void *key);\nlistNode *listIndex(list *list, 
long index);\nvoid listRewind(list *list, listIter *li);\nvoid listRewindTail(list *list, listIter *li);\nvoid listRotateTailToHead(list *list);\nvoid listRotateHeadToTail(list *list);\nvoid listJoin(list *l, list *o);\nvoid listInitNode(listNode *node, void *value);\nvoid listLinkNodeHead(list *list, listNode *node);\nvoid listLinkNodeTail(list *list, listNode *node);\nvoid listUnlinkNode(list *list, listNode *node);\n\n/* Directions for iterators */\n#define AL_START_HEAD 0\n#define AL_START_TAIL 1\n\n#endif /* __ADLIST_H__ */\n"
  },
  {
    "path": "src/ae.c",
    "content": "/* A simple event-driven programming library. Originally I wrote this code\n * for the Jim's event-loop (Jim is a Tcl interpreter) but later translated\n * it in form of a library for easy reuse.\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"ae.h\"\n#include \"anet.h\"\n#include \"redisassert.h\"\n\n#include <stdio.h>\n#include <sys/time.h>\n#include <sys/types.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <poll.h>\n#include <string.h>\n#include <time.h>\n#include <errno.h>\n\n#include \"zmalloc.h\"\n#include \"config.h\"\n\n/* Include the best multiplexing layer supported by this system.\n * The following should be ordered by performances, descending. */\n#ifdef HAVE_EVPORT\n#include \"ae_evport.c\"\n#else\n    #ifdef HAVE_EPOLL\n    #include \"ae_epoll.c\"\n    #else\n        #ifdef HAVE_KQUEUE\n        #include \"ae_kqueue.c\"\n        #else\n        #include \"ae_select.c\"\n        #endif\n    #endif\n#endif\n\n#define INITIAL_EVENT 1024\naeEventLoop *aeCreateEventLoop(int setsize) {\n    aeEventLoop *eventLoop;\n    int i;\n\n    monotonicInit();    /* just in case the calling app didn't initialize */\n\n    if ((eventLoop = zmalloc(sizeof(*eventLoop))) == NULL) goto err;\n    eventLoop->nevents = setsize < INITIAL_EVENT ? 
setsize : INITIAL_EVENT;\n    eventLoop->events = zmalloc(sizeof(aeFileEvent)*eventLoop->nevents);\n    eventLoop->fired = zmalloc(sizeof(aeFiredEvent)*eventLoop->nevents);\n    if (eventLoop->events == NULL || eventLoop->fired == NULL) goto err;\n    eventLoop->setsize = setsize;\n    eventLoop->timeEventHead = NULL;\n    eventLoop->timeEventNextId = 0;\n    eventLoop->stop = 0;\n    eventLoop->maxfd = -1;\n    eventLoop->beforesleep = NULL;\n    eventLoop->aftersleep = NULL;\n    eventLoop->flags = 0;\n    memset(eventLoop->privdata, 0, sizeof(eventLoop->privdata));\n    if (aeApiCreate(eventLoop) == -1) goto err;\n    /* Events with mask == AE_NONE are not set. So let's initialize the\n     * vector with it. */\n    for (i = 0; i < eventLoop->nevents; i++)\n        eventLoop->events[i].mask = AE_NONE;\n    return eventLoop;\n\nerr:\n    if (eventLoop) {\n        zfree(eventLoop->events);\n        zfree(eventLoop->fired);\n        zfree(eventLoop);\n    }\n    return NULL;\n}\n\n/* Return the current set size. */\nint aeGetSetSize(aeEventLoop *eventLoop) {\n    return eventLoop->setsize;\n}\n\n/*\n * Tell the event processing to change the wait timeout as soon as possible.\n *\n * Note: it just means you turn on/off the global AE_DONT_WAIT.\n */\nvoid aeSetDontWait(aeEventLoop *eventLoop, int noWait) {\n    if (noWait)\n        eventLoop->flags |= AE_DONT_WAIT;\n    else\n        eventLoop->flags &= ~AE_DONT_WAIT;\n}\n\n/* Resize the maximum set size of the event loop.\n * If the requested set size is smaller than the current set size, but\n * there is already a file descriptor in use that is >= the requested\n * set size minus one, AE_ERR is returned and the operation is not\n * performed at all.\n *\n * Otherwise AE_OK is returned and the operation is successful. 
*/\nint aeResizeSetSize(aeEventLoop *eventLoop, int setsize) {\n    if (setsize == eventLoop->setsize) return AE_OK;\n    if (eventLoop->maxfd >= setsize) return AE_ERR;\n    if (aeApiResize(eventLoop,setsize) == -1) return AE_ERR;\n\n    eventLoop->setsize = setsize;\n\n    /* If the current allocated space is larger than the requested size,\n     * we need to shrink it to the requested size. */\n    if (setsize < eventLoop->nevents) {\n        eventLoop->events = zrealloc(eventLoop->events,sizeof(aeFileEvent)*setsize);\n        eventLoop->fired = zrealloc(eventLoop->fired,sizeof(aeFiredEvent)*setsize);\n        eventLoop->nevents = setsize;\n    }\n    return AE_OK;\n}\n\nvoid aeDeleteEventLoop(aeEventLoop *eventLoop) {\n    aeApiFree(eventLoop);\n    zfree(eventLoop->events);\n    zfree(eventLoop->fired);\n\n    /* Free the time events list. */\n    aeTimeEvent *next_te, *te = eventLoop->timeEventHead;\n    while (te) {\n        next_te = te->next;\n        if (te->finalizerProc)\n            te->finalizerProc(eventLoop, te->clientData);\n        zfree(te);\n        te = next_te;\n    }\n    zfree(eventLoop);\n}\n\nvoid aeStop(aeEventLoop *eventLoop) {\n    eventLoop->stop = 1;\n}\n\nint aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask,\n        aeFileProc *proc, void *clientData)\n{\n    if (fd >= eventLoop->setsize) {\n        errno = ERANGE;\n        return AE_ERR;\n    }\n\n    /* Resize the events and fired arrays if the file\n     * descriptor exceeds the current number of events. */\n    if (unlikely(fd >= eventLoop->nevents)) {\n        int newnevents = eventLoop->nevents;\n        newnevents = (newnevents * 2 > fd + 1) ? newnevents * 2 : fd + 1;\n        newnevents = (newnevents > eventLoop->setsize) ? 
eventLoop->setsize : newnevents;\n        eventLoop->events = zrealloc(eventLoop->events, sizeof(aeFileEvent) * newnevents);\n        eventLoop->fired = zrealloc(eventLoop->fired, sizeof(aeFiredEvent) * newnevents);\n\n        /* Initialize new slots with an AE_NONE mask */\n        for (int i = eventLoop->nevents; i < newnevents; i++)\n            eventLoop->events[i].mask = AE_NONE;\n        eventLoop->nevents = newnevents;\n    }\n\n    aeFileEvent *fe = &eventLoop->events[fd];\n\n    if (aeApiAddEvent(eventLoop, fd, mask) == -1)\n        return AE_ERR;\n    fe->mask |= mask;\n    if (mask & AE_READABLE) fe->rfileProc = proc;\n    if (mask & AE_WRITABLE) fe->wfileProc = proc;\n    fe->clientData = clientData;\n    if (fd > eventLoop->maxfd)\n        eventLoop->maxfd = fd;\n    return AE_OK;\n}\n\nvoid aeDeleteFileEvent(aeEventLoop *eventLoop, int fd, int mask)\n{\n    if (fd >= eventLoop->setsize) return;\n    aeFileEvent *fe = &eventLoop->events[fd];\n    if (fe->mask == AE_NONE) return;\n\n    /* We want to always remove AE_BARRIER if set when AE_WRITABLE\n     * is removed. 
*/\n    if (mask & AE_WRITABLE) mask |= AE_BARRIER;\n\n    aeApiDelEvent(eventLoop, fd, mask);\n    fe->mask = fe->mask & (~mask);\n    if (fd == eventLoop->maxfd && fe->mask == AE_NONE) {\n        /* Update the max fd */\n        int j;\n\n        for (j = eventLoop->maxfd-1; j >= 0; j--)\n            if (eventLoop->events[j].mask != AE_NONE) break;\n        eventLoop->maxfd = j;\n    }\n}\n\nvoid *aeGetFileClientData(aeEventLoop *eventLoop, int fd) {\n    if (fd >= eventLoop->setsize) return NULL;\n    aeFileEvent *fe = &eventLoop->events[fd];\n    if (fe->mask == AE_NONE) return NULL;\n\n    return fe->clientData;\n}\n\nint aeGetFileEvents(aeEventLoop *eventLoop, int fd) {\n    if (fd >= eventLoop->setsize) return 0;\n    aeFileEvent *fe = &eventLoop->events[fd];\n\n    return fe->mask;\n}\n\nlong long aeCreateTimeEvent(aeEventLoop *eventLoop, long long milliseconds,\n        aeTimeProc *proc, void *clientData,\n        aeEventFinalizerProc *finalizerProc)\n{\n    long long id = eventLoop->timeEventNextId++;\n    aeTimeEvent *te;\n\n    te = zmalloc(sizeof(*te));\n    if (te == NULL) return AE_ERR;\n    te->id = id;\n    te->when = getMonotonicUs() + milliseconds * 1000;\n    te->timeProc = proc;\n    te->finalizerProc = finalizerProc;\n    te->clientData = clientData;\n    te->prev = NULL;\n    te->next = eventLoop->timeEventHead;\n    te->refcount = 0;\n    if (te->next)\n        te->next->prev = te;\n    eventLoop->timeEventHead = te;\n    return id;\n}\n\nint aeDeleteTimeEvent(aeEventLoop *eventLoop, long long id)\n{\n    aeTimeEvent *te = eventLoop->timeEventHead;\n    while(te) {\n        if (te->id == id) {\n            te->id = AE_DELETED_EVENT_ID;\n            return AE_OK;\n        }\n        te = te->next;\n    }\n    return AE_ERR; /* NO event with the specified ID found */\n}\n\n/* How many microseconds until the first timer should fire.\n * If there are no timers, -1 is returned.\n *\n * Note that's O(N) since time events are unsorted.\n * Possible 
optimizations (not needed by Redis so far, but...):\n * 1) Insert the event in order, so that the nearest is just the head.\n *    Much better but still insertion or deletion of timers is O(N).\n * 2) Use a skiplist to have this operation as O(1) and insertion as O(log(N)).\n */\nstatic int64_t usUntilEarliestTimer(aeEventLoop *eventLoop) {\n    aeTimeEvent *te = eventLoop->timeEventHead;\n    if (te == NULL) return -1;\n\n    aeTimeEvent *earliest = NULL;\n    while (te) {\n        if ((!earliest || te->when < earliest->when) && te->id != AE_DELETED_EVENT_ID)\n            earliest = te;\n        te = te->next;\n    }\n\n    monotime now = getMonotonicUs();\n    return (now >= earliest->when) ? 0 : earliest->when - now;\n}\n\n/* Process time events */\nstatic int processTimeEvents(aeEventLoop *eventLoop) {\n    int processed = 0;\n    aeTimeEvent *te;\n    long long maxId;\n\n    te = eventLoop->timeEventHead;\n    maxId = eventLoop->timeEventNextId-1;\n    monotime now = getMonotonicUs();\n    while(te) {\n        long long id;\n\n        /* Remove events scheduled for deletion. */\n        if (te->id == AE_DELETED_EVENT_ID) {\n            aeTimeEvent *next = te->next;\n            /* If a reference exists for this timer event,\n             * don't free it. 
This is currently incremented\n             * for recursive timerProc calls */\n            if (te->refcount) {\n                te = next;\n                continue;\n            }\n            if (te->prev)\n                te->prev->next = te->next;\n            else\n                eventLoop->timeEventHead = te->next;\n            if (te->next)\n                te->next->prev = te->prev;\n            if (te->finalizerProc) {\n                te->finalizerProc(eventLoop, te->clientData);\n                now = getMonotonicUs();\n            }\n            zfree(te);\n            te = next;\n            continue;\n        }\n\n        /* Make sure we don't process time events created by time events in\n         * this iteration. Note that this check is currently useless: we always\n         * add new timers on the head, however if we change the implementation\n         * detail, this check may be useful again: we keep it here for future\n         * defense. */\n        if (te->id > maxId) {\n            te = te->next;\n            continue;\n        }\n\n        if (te->when <= now) {\n            int retval;\n\n            id = te->id;\n            te->refcount++;\n            retval = te->timeProc(eventLoop, id, te->clientData);\n            te->refcount--;\n            processed++;\n            now = getMonotonicUs();\n            if (retval != AE_NOMORE) {\n                te->when = now + (monotime)retval * 1000;\n            } else {\n                te->id = AE_DELETED_EVENT_ID;\n            }\n        }\n        te = te->next;\n    }\n    return processed;\n}\n\n/* Process every pending file event, then every pending time event\n * (that may be registered by file event callbacks just processed).\n * Without special flags the function sleeps until some file event\n * fires, or when the next time event occurs (if any).\n *\n * If flags is 0, the function does nothing and returns.\n * if flags has AE_ALL_EVENTS set, all kinds of events are processed.\n * 
if flags has AE_FILE_EVENTS set, file events are processed.\n * if flags has AE_TIME_EVENTS set, time events are processed.\n * if flags has AE_DONT_WAIT set, the function returns ASAP once all\n * the events that can be handled without a wait are processed.\n * if flags has AE_CALL_AFTER_SLEEP set, the aftersleep callback is called.\n * if flags has AE_CALL_BEFORE_SLEEP set, the beforesleep callback is called.\n *\n * The function returns the number of events processed. */\nint aeProcessEvents(aeEventLoop *eventLoop, int flags)\n{\n    int processed = 0, numevents;\n\n    /* Nothing to do? return ASAP */\n    if (!(flags & AE_TIME_EVENTS) && !(flags & AE_FILE_EVENTS)) return 0;\n\n    /* Note that we want to call aeApiPoll() even if there are no\n     * file events to process as long as we want to process time\n     * events, in order to sleep until the next time event is ready\n     * to fire. */\n    if (eventLoop->maxfd != -1 ||\n        ((flags & AE_TIME_EVENTS) && !(flags & AE_DONT_WAIT))) {\n        int j;\n        struct timeval tv, *tvp = NULL; /* NULL means infinite wait. */\n        int64_t usUntilTimer;\n\n        if (eventLoop->beforesleep != NULL && (flags & AE_CALL_BEFORE_SLEEP))\n            eventLoop->beforesleep(eventLoop);\n\n        /* The eventLoop->flags may be changed inside beforesleep.\n         * So we should check it after beforesleep be called. At the same time,\n         * the parameter flags always should have the highest priority.\n         * That is to say, once the parameter flag is set to AE_DONT_WAIT,\n         * no matter what value eventLoop->flags is set to, we should ignore it. 
*/\n        if ((flags & AE_DONT_WAIT) || (eventLoop->flags & AE_DONT_WAIT)) {\n            tv.tv_sec = tv.tv_usec = 0;\n            tvp = &tv;\n        } else if (flags & AE_TIME_EVENTS) {\n            usUntilTimer = usUntilEarliestTimer(eventLoop);\n            if (usUntilTimer >= 0) {\n                tv.tv_sec = usUntilTimer / 1000000;\n                tv.tv_usec = usUntilTimer % 1000000;\n                tvp = &tv;\n            }\n        }\n        /* Call the multiplexing API, will return only on timeout or when\n         * some event fires. */\n        numevents = aeApiPoll(eventLoop, tvp);\n\n        /* Don't process file events if not requested. */\n        if (!(flags & AE_FILE_EVENTS)) {\n            numevents = 0;\n        }\n\n        /* After sleep callback. */\n        if (eventLoop->aftersleep != NULL && flags & AE_CALL_AFTER_SLEEP)\n            eventLoop->aftersleep(eventLoop);\n\n        for (j = 0; j < numevents; j++) {\n            int fd = eventLoop->fired[j].fd;\n            aeFileEvent *fe = &eventLoop->events[fd];\n            int mask = eventLoop->fired[j].mask;\n            int fired = 0; /* Number of events fired for current fd. */\n\n            /* Normally we execute the readable event first, and the writable\n             * event later. This is useful as sometimes we may be able\n             * to serve the reply of a query immediately after processing the\n             * query.\n             *\n             * However if AE_BARRIER is set in the mask, our application is\n             * asking us to do the reverse: never fire the writable event\n             * after the readable. In such a case, we invert the calls.\n             * This is useful when, for instance, we want to do things\n             * in the beforeSleep() hook, like fsyncing a file to disk,\n             * before replying to a client. 
*/\n            int invert = fe->mask & AE_BARRIER;\n\n            /* Note the \"fe->mask & mask & ...\" code: maybe an already\n             * processed event removed an element that fired and we still\n             * haven't processed, so we check if the event is still valid.\n             *\n             * Fire the readable event if the call sequence is not\n             * inverted. */\n            if (!invert && fe->mask & mask & AE_READABLE) {\n                fe->rfileProc(eventLoop,fd,fe->clientData,mask);\n                fired++;\n                fe = &eventLoop->events[fd]; /* Refresh in case of resize. */\n            }\n\n            /* Fire the writable event. */\n            if (fe->mask & mask & AE_WRITABLE) {\n                if (!fired || fe->wfileProc != fe->rfileProc) {\n                    fe->wfileProc(eventLoop,fd,fe->clientData,mask);\n                    fired++;\n                }\n            }\n\n            /* If we have to invert the call, fire the readable event now\n             * after the writable one. */\n            if (invert) {\n                fe = &eventLoop->events[fd]; /* Refresh in case of resize. 
*/\n                if ((fe->mask & mask & AE_READABLE) &&\n                    (!fired || fe->wfileProc != fe->rfileProc))\n                {\n                    fe->rfileProc(eventLoop,fd,fe->clientData,mask);\n                    fired++;\n                }\n            }\n\n            processed++;\n        }\n    }\n    /* Check time events */\n    if (flags & AE_TIME_EVENTS)\n        processed += processTimeEvents(eventLoop);\n\n    return processed; /* return the number of processed file/time events */\n}\n\n/* Wait for milliseconds until the given file descriptor becomes\n * writable/readable/exception */\nint aeWait(int fd, int mask, long long milliseconds) {\n    struct pollfd pfd;\n    int retmask = 0, retval;\n\n    memset(&pfd, 0, sizeof(pfd));\n    pfd.fd = fd;\n    if (mask & AE_READABLE) pfd.events |= POLLIN;\n    if (mask & AE_WRITABLE) pfd.events |= POLLOUT;\n\n    if ((retval = poll(&pfd, 1, milliseconds))== 1) {\n        if (pfd.revents & POLLIN) retmask |= AE_READABLE;\n        if (pfd.revents & POLLOUT) retmask |= AE_WRITABLE;\n        if (pfd.revents & POLLERR) retmask |= AE_WRITABLE;\n        if (pfd.revents & POLLHUP) retmask |= AE_WRITABLE;\n        return retmask;\n    } else {\n        return retval;\n    }\n}\n\nvoid aeMain(aeEventLoop *eventLoop) {\n    eventLoop->stop = 0;\n    while (!eventLoop->stop) {\n        aeProcessEvents(eventLoop, AE_ALL_EVENTS|\n                                   AE_CALL_BEFORE_SLEEP|\n                                   AE_CALL_AFTER_SLEEP);\n    }\n}\n\nchar *aeGetApiName(void) {\n    return aeApiName();\n}\n\nvoid aeSetBeforeSleepProc(aeEventLoop *eventLoop, aeBeforeSleepProc *beforesleep) {\n    eventLoop->beforesleep = beforesleep;\n}\n\nvoid aeSetAfterSleepProc(aeEventLoop *eventLoop, aeBeforeSleepProc *aftersleep) {\n    eventLoop->aftersleep = aftersleep;\n}\n"
  },
  {
    "path": "src/ae.h",
    "content": "/* A simple event-driven programming library. Originally I wrote this code\n * for the Jim's event-loop (Jim is a Tcl interpreter) but later translated\n * it in form of a library for easy reuse.\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __AE_H__\n#define __AE_H__\n\n#include \"monotonic.h\"\n\n#define AE_OK 0\n#define AE_ERR -1\n\n#define AE_NONE 0       /* No events registered. */\n#define AE_READABLE 1   /* Fire when descriptor is readable. */\n#define AE_WRITABLE 2   /* Fire when descriptor is writable. */\n#define AE_BARRIER 4    /* With WRITABLE, never fire the event if the\n                           READABLE event already fired in the same event\n                           loop iteration. Useful when you want to persist\n                           things to disk before sending replies, and want\n                           to do that in a group fashion. 
*/\n\n#define AE_FILE_EVENTS (1<<0)\n#define AE_TIME_EVENTS (1<<1)\n#define AE_ALL_EVENTS (AE_FILE_EVENTS|AE_TIME_EVENTS)\n#define AE_DONT_WAIT (1<<2)\n#define AE_CALL_BEFORE_SLEEP (1<<3)\n#define AE_CALL_AFTER_SLEEP (1<<4)\n\n#define AE_NOMORE -1\n#define AE_DELETED_EVENT_ID -1\n\n/* Macros */\n#define AE_NOTUSED(V) ((void) V)\n\nstruct aeEventLoop;\n\n/* Types and data structures */\ntypedef void aeFileProc(struct aeEventLoop *eventLoop, int fd, void *clientData, int mask);\ntypedef int aeTimeProc(struct aeEventLoop *eventLoop, long long id, void *clientData);\ntypedef void aeEventFinalizerProc(struct aeEventLoop *eventLoop, void *clientData);\ntypedef void aeBeforeSleepProc(struct aeEventLoop *eventLoop);\n\n/* File event structure */\ntypedef struct aeFileEvent {\n    int mask; /* one of AE_(READABLE|WRITABLE|BARRIER) */\n    aeFileProc *rfileProc;\n    aeFileProc *wfileProc;\n    void *clientData;\n} aeFileEvent;\n\n/* Time event structure */\ntypedef struct aeTimeEvent {\n    long long id; /* time event identifier. */\n    monotime when;\n    aeTimeProc *timeProc;\n    aeEventFinalizerProc *finalizerProc;\n    void *clientData;\n    struct aeTimeEvent *prev;\n    struct aeTimeEvent *next;\n    int refcount; /* refcount to prevent timer events from being\n  \t\t   * freed in recursive time event calls. 
*/\n} aeTimeEvent;\n\n/* A fired event */\ntypedef struct aeFiredEvent {\n    int fd;\n    int mask;\n} aeFiredEvent;\n\n/* State of an event based program */\ntypedef struct aeEventLoop {\n    int maxfd;   /* highest file descriptor currently registered */\n    int setsize; /* max number of file descriptors tracked */\n    long long timeEventNextId;\n    int nevents; /* Size of Registered events */\n    aeFileEvent *events; /* Registered events */\n    aeFiredEvent *fired; /* Fired events */\n    aeTimeEvent *timeEventHead;\n    int stop;\n    void *apidata; /* This is used for polling API specific data */\n    aeBeforeSleepProc *beforesleep;\n    aeBeforeSleepProc *aftersleep;\n    int flags;\n    void *privdata[2];\n} aeEventLoop;\n\n/* Prototypes */\naeEventLoop *aeCreateEventLoop(int setsize);\nvoid aeDeleteEventLoop(aeEventLoop *eventLoop);\nvoid aeStop(aeEventLoop *eventLoop);\nint aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask,\n        aeFileProc *proc, void *clientData);\nvoid aeDeleteFileEvent(aeEventLoop *eventLoop, int fd, int mask);\nint aeGetFileEvents(aeEventLoop *eventLoop, int fd);\nvoid *aeGetFileClientData(aeEventLoop *eventLoop, int fd);\nlong long aeCreateTimeEvent(aeEventLoop *eventLoop, long long milliseconds,\n        aeTimeProc *proc, void *clientData,\n        aeEventFinalizerProc *finalizerProc);\nint aeDeleteTimeEvent(aeEventLoop *eventLoop, long long id);\nint aeProcessEvents(aeEventLoop *eventLoop, int flags);\nint aeWait(int fd, int mask, long long milliseconds);\nvoid aeMain(aeEventLoop *eventLoop);\nchar *aeGetApiName(void);\nvoid aeSetBeforeSleepProc(aeEventLoop *eventLoop, aeBeforeSleepProc *beforesleep);\nvoid aeSetAfterSleepProc(aeEventLoop *eventLoop, aeBeforeSleepProc *aftersleep);\nint aeGetSetSize(aeEventLoop *eventLoop);\nint aeResizeSetSize(aeEventLoop *eventLoop, int setsize);\nvoid aeSetDontWait(aeEventLoop *eventLoop, int noWait);\n\n#endif\n"
  },
  {
    "path": "src/ae_epoll.c",
    "content": "/* Linux epoll(2) based ae.c module\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n\n#include <sys/epoll.h>\n\ntypedef struct aeApiState {\n    int epfd;\n    struct epoll_event *events;\n} aeApiState;\n\nstatic int aeApiCreate(aeEventLoop *eventLoop) {\n    aeApiState *state = zmalloc(sizeof(aeApiState));\n\n    if (!state) return -1;\n    state->events = zmalloc(sizeof(struct epoll_event)*eventLoop->setsize);\n    if (!state->events) {\n        zfree(state);\n        return -1;\n    }\n    state->epfd = epoll_create(1024); /* 1024 is just a hint for the kernel */\n    if (state->epfd == -1) {\n        zfree(state->events);\n        zfree(state);\n        return -1;\n    }\n    anetCloexec(state->epfd);\n    eventLoop->apidata = state;\n    return 0;\n}\n\nstatic int aeApiResize(aeEventLoop *eventLoop, int setsize) {\n    aeApiState *state = eventLoop->apidata;\n\n    state->events = zrealloc(state->events, sizeof(struct epoll_event)*setsize);\n    return 0;\n}\n\nstatic void aeApiFree(aeEventLoop *eventLoop) {\n    aeApiState *state = eventLoop->apidata;\n\n    close(state->epfd);\n    zfree(state->events);\n    zfree(state);\n}\n\nstatic int aeApiAddEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n    struct epoll_event ee = {0}; /* avoid valgrind warning */\n    /* If the fd was already monitored for some event, we need a MOD\n     * operation. Otherwise we need an ADD operation. 
*/\n    int op = eventLoop->events[fd].mask == AE_NONE ?\n            EPOLL_CTL_ADD : EPOLL_CTL_MOD;\n\n    ee.events = 0;\n    mask |= eventLoop->events[fd].mask; /* Merge old events */\n    if (mask & AE_READABLE) ee.events |= EPOLLIN;\n    if (mask & AE_WRITABLE) ee.events |= EPOLLOUT;\n    ee.data.fd = fd;\n    if (epoll_ctl(state->epfd,op,fd,&ee) == -1) return -1;\n    return 0;\n}\n\nstatic void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int delmask) {\n    aeApiState *state = eventLoop->apidata;\n    struct epoll_event ee = {0}; /* avoid valgrind warning */\n    int mask = eventLoop->events[fd].mask & (~delmask);\n\n    ee.events = 0;\n    if (mask & AE_READABLE) ee.events |= EPOLLIN;\n    if (mask & AE_WRITABLE) ee.events |= EPOLLOUT;\n    ee.data.fd = fd;\n    if (mask != AE_NONE) {\n        epoll_ctl(state->epfd,EPOLL_CTL_MOD,fd,&ee);\n    } else {\n        /* Note, Kernel < 2.6.9 requires a non null event pointer even for\n         * EPOLL_CTL_DEL. */\n        epoll_ctl(state->epfd,EPOLL_CTL_DEL,fd,&ee);\n    }\n}\n\nstatic int aeApiPoll(aeEventLoop *eventLoop, struct timeval *tvp) {\n    aeApiState *state = eventLoop->apidata;\n    int retval, numevents = 0;\n\n    retval = epoll_wait(state->epfd,state->events,eventLoop->setsize,\n            tvp ? 
(tvp->tv_sec*1000 + (tvp->tv_usec + 999)/1000) : -1);\n    if (retval > 0) {\n        int j;\n\n        numevents = retval;\n        for (j = 0; j < numevents; j++) {\n            int mask = 0;\n            struct epoll_event *e = state->events+j;\n\n            if (e->events & EPOLLIN) mask |= AE_READABLE;\n            if (e->events & EPOLLOUT) mask |= AE_WRITABLE;\n            if (e->events & EPOLLERR) mask |= AE_WRITABLE|AE_READABLE;\n            if (e->events & EPOLLHUP) mask |= AE_WRITABLE|AE_READABLE;\n            eventLoop->fired[j].fd = e->data.fd;\n            eventLoop->fired[j].mask = mask;\n        }\n    } else if (retval == -1 && errno != EINTR) {\n        panic(\"aeApiPoll: epoll_wait, %s\", strerror(errno));\n    }\n\n    return numevents;\n}\n\nstatic char *aeApiName(void) {\n    return \"epoll\";\n}\n"
  },
  {
    "path": "src/ae_evport.c",
    "content": "/* ae.c module for illumos event ports.\n *\n * Copyright (c) 2012, Joyent, Inc. All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n\n#include <errno.h>\n#include <port.h>\n#include <poll.h>\n\n#include <sys/types.h>\n#include <sys/time.h>\n\n#include <stdio.h>\n\nstatic int evport_debug = 0;\n\n/*\n * This file implements the ae API using event ports, present on Solaris-based\n * systems since Solaris 10.  
Using the event port interface, we associate file\n * descriptors with the port.  Each association also includes the set of poll(2)\n * events that the consumer is interested in (e.g., POLLIN and POLLOUT).\n *\n * There's one tricky piece to this implementation: when we return events via\n * aeApiPoll, the corresponding file descriptors become dissociated from the\n * port.  This is necessary because poll events are level-triggered, so if the\n * fd didn't become dissociated, it would immediately fire another event since\n * the underlying state hasn't changed yet.  We must re-associate the file\n * descriptor, but only after we know that our caller has actually read from it.\n * The ae API does not tell us exactly when that happens, but we do know that\n * it must happen by the time aeApiPoll is called again.  Our solution is to\n * keep track of the last fds returned by aeApiPoll and re-associate them next\n * time aeApiPoll is invoked.\n *\n * To summarize, in this module, each fd association is EITHER (a) represented\n * only via the in-kernel association OR (b) represented by pending_fds and\n * pending_masks.  
(b) is only true for the last fds we returned from aeApiPoll,\n * and only until we enter aeApiPoll again (at which point we restore the\n * in-kernel association).\n */\n#define MAX_EVENT_BATCHSZ 512\n\ntypedef struct aeApiState {\n    int     portfd;                             /* event port */\n    uint_t  npending;                           /* # of pending fds */\n    int     pending_fds[MAX_EVENT_BATCHSZ];     /* pending fds */\n    int     pending_masks[MAX_EVENT_BATCHSZ];   /* pending fds' masks */\n} aeApiState;\n\nstatic int aeApiCreate(aeEventLoop *eventLoop) {\n    int i;\n    aeApiState *state = zmalloc(sizeof(aeApiState));\n    if (!state) return -1;\n\n    state->portfd = port_create();\n    if (state->portfd == -1) {\n        zfree(state);\n        return -1;\n    }\n    anetCloexec(state->portfd);\n\n    state->npending = 0;\n\n    for (i = 0; i < MAX_EVENT_BATCHSZ; i++) {\n        state->pending_fds[i] = -1;\n        state->pending_masks[i] = AE_NONE;\n    }\n\n    eventLoop->apidata = state;\n    return 0;\n}\n\nstatic int aeApiResize(aeEventLoop *eventLoop, int setsize) {\n    (void) eventLoop;\n    (void) setsize;\n    /* Nothing to resize here. 
*/\n    return 0;\n}\n\nstatic void aeApiFree(aeEventLoop *eventLoop) {\n    aeApiState *state = eventLoop->apidata;\n\n    close(state->portfd);\n    zfree(state);\n}\n\nstatic int aeApiLookupPending(aeApiState *state, int fd) {\n    uint_t i;\n\n    for (i = 0; i < state->npending; i++) {\n        if (state->pending_fds[i] == fd)\n            return (i);\n    }\n\n    return (-1);\n}\n\n/*\n * Helper function to invoke port_associate for the given fd and mask.\n */\nstatic int aeApiAssociate(const char *where, int portfd, int fd, int mask) {\n    int events = 0;\n    int rv, err;\n\n    if (mask & AE_READABLE)\n        events |= POLLIN;\n    if (mask & AE_WRITABLE)\n        events |= POLLOUT;\n\n    if (evport_debug)\n        fprintf(stderr, \"%s: port_associate(%d, 0x%x) = \", where, fd, events);\n\n    rv = port_associate(portfd, PORT_SOURCE_FD, fd, events,\n        (void *)(uintptr_t)mask);\n    err = errno;\n\n    if (evport_debug)\n        fprintf(stderr, \"%d (%s)\\n\", rv, rv == 0 ? \"no error\" : strerror(err));\n\n    if (rv == -1) {\n        fprintf(stderr, \"%s: port_associate: %s\\n\", where, strerror(err));\n\n        if (err == EAGAIN)\n            fprintf(stderr, \"aeApiAssociate: event port limit exceeded.\");\n    }\n\n    return rv;\n}\n\nstatic int aeApiAddEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n    int fullmask, pfd;\n\n    if (evport_debug)\n        fprintf(stderr, \"aeApiAddEvent: fd %d mask 0x%x\\n\", fd, mask);\n\n    /*\n     * Since port_associate's \"events\" argument replaces any existing events, we\n     * must be sure to include whatever events are already associated when\n     * we call port_associate() again.\n     */\n    fullmask = mask | eventLoop->events[fd].mask;\n    pfd = aeApiLookupPending(state, fd);\n\n    if (pfd != -1) {\n        /*\n         * This fd was recently returned from aeApiPoll.  
It should be safe to\n         * assume that the consumer has processed that poll event, but we play\n         * it safe by simply updating pending_mask.  The fd will be\n         * re-associated as usual when aeApiPoll is called again.\n         */\n        if (evport_debug)\n            fprintf(stderr, \"aeApiAddEvent: adding to pending fd %d\\n\", fd);\n        state->pending_masks[pfd] |= fullmask;\n        return 0;\n    }\n\n    return (aeApiAssociate(\"aeApiAddEvent\", state->portfd, fd, fullmask));\n}\n\nstatic void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n    int fullmask, pfd;\n\n    if (evport_debug)\n        fprintf(stderr, \"del fd %d mask 0x%x\\n\", fd, mask);\n\n    pfd = aeApiLookupPending(state, fd);\n\n    if (pfd != -1) {\n        if (evport_debug)\n            fprintf(stderr, \"deleting event from pending fd %d\\n\", fd);\n\n        /*\n         * This fd was just returned from aeApiPoll, so it's not currently\n         * associated with the port.  All we need to do is update\n         * pending_mask appropriately.\n         */\n        state->pending_masks[pfd] &= ~mask;\n\n        if (state->pending_masks[pfd] == AE_NONE)\n            state->pending_fds[pfd] = -1;\n\n        return;\n    }\n\n    /*\n     * The fd is currently associated with the port.  As with the add case\n     * above, we must look at the full mask for the file descriptor before\n     * updating that association.  We don't have a good way of knowing what the\n     * events are without looking into the eventLoop state directly.  We rely on\n     * the fact that our caller has already updated the mask in the eventLoop.\n     */\n\n    /* We always remove the specified events from the current mask,\n     * regardless of whether eventLoop->events[fd].mask has been updated yet. 
*/\n    fullmask = eventLoop->events[fd].mask & ~mask;\n    if (fullmask == AE_NONE) {\n        /*\n         * We're removing *all* events, so use port_dissociate to remove the\n         * association completely.  Failure here indicates a bug.\n         */\n        if (evport_debug)\n            fprintf(stderr, \"aeApiDelEvent: port_dissociate(%d)\\n\", fd);\n\n        if (port_dissociate(state->portfd, PORT_SOURCE_FD, fd) != 0) {\n            perror(\"aeApiDelEvent: port_dissociate\");\n            abort(); /* will not return */\n        }\n    } else if (aeApiAssociate(\"aeApiDelEvent\", state->portfd, fd,\n        fullmask) != 0) {\n        /*\n         * ENOMEM is a potentially transient condition, but the kernel won't\n         * generally return it unless things are really bad.  EAGAIN indicates\n         * we've reached a resource limit, for which it doesn't make sense to\n         * retry (counter-intuitively).  All other errors indicate a bug.  In any\n         * of these cases, the best we can do is to abort.\n         */\n        abort(); /* will not return */\n    }\n}\n\nstatic int aeApiPoll(aeEventLoop *eventLoop, struct timeval *tvp) {\n    aeApiState *state = eventLoop->apidata;\n    struct timespec timeout, *tsp;\n    uint_t mask, i;\n    uint_t nevents;\n    port_event_t event[MAX_EVENT_BATCHSZ];\n\n    /*\n     * If we've returned fd events before, we must re-associate them with the\n     * port now, before calling port_get().  See the block comment at the top of\n     * this file for an explanation of why.\n     */\n    for (i = 0; i < state->npending; i++) {\n        if (state->pending_fds[i] == -1)\n            /* This fd has since been deleted. */\n            continue;\n\n        if (aeApiAssociate(\"aeApiPoll\", state->portfd,\n            state->pending_fds[i], state->pending_masks[i]) != 0) {\n            /* See aeApiDelEvent for why this case is fatal. 
*/\n            abort();\n        }\n\n        state->pending_masks[i] = AE_NONE;\n        state->pending_fds[i] = -1;\n    }\n\n    state->npending = 0;\n\n    if (tvp != NULL) {\n        timeout.tv_sec = tvp->tv_sec;\n        timeout.tv_nsec = tvp->tv_usec * 1000;\n        tsp = &timeout;\n    } else {\n        tsp = NULL;\n    }\n\n    /*\n     * port_getn can return with errno == ETIME having returned some events (!).\n     * So if we get ETIME, we check nevents, too.\n     */\n    nevents = 1;\n    if (port_getn(state->portfd, event, MAX_EVENT_BATCHSZ, &nevents,\n        tsp) == -1 && (errno != ETIME || nevents == 0)) {\n        if (errno == ETIME || errno == EINTR)\n            return 0;\n\n        /* Any other error indicates a bug. */\n        panic(\"aeApiPoll: port_getn, %s\", strerror(errno));\n    }\n\n    state->npending = nevents;\n\n    for (i = 0; i < nevents; i++) {\n            mask = 0;\n            if (event[i].portev_events & POLLIN)\n                mask |= AE_READABLE;\n            if (event[i].portev_events & POLLOUT)\n                mask |= AE_WRITABLE;\n\n            eventLoop->fired[i].fd = event[i].portev_object;\n            eventLoop->fired[i].mask = mask;\n\n            if (evport_debug)\n                fprintf(stderr, \"aeApiPoll: fd %d mask 0x%x\\n\",\n                    (int)event[i].portev_object, mask);\n\n            state->pending_fds[i] = event[i].portev_object;\n            state->pending_masks[i] = (uintptr_t)event[i].portev_user;\n    }\n\n    return nevents;\n}\n\nstatic char *aeApiName(void) {\n    return \"evport\";\n}\n"
  },
  {
    "path": "src/ae_kqueue.c",
    "content": "/* Kqueue(2)-based ae.c module\n *\n * Copyright (C) 2009 Harish Mallipeddi - harish.mallipeddi@gmail.com\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n\n#include <sys/types.h>\n#include <sys/event.h>\n#include <sys/time.h>\n\ntypedef struct aeApiState {\n    int kqfd;\n    struct kevent *events;\n\n    /* Event mask used to merge read and write events for the same fd.\n     * To reduce memory consumption, we use 2 bits to store the mask\n     * of an event, so that 1 byte will store the mask of 4 events. */\n    char *eventsMask;\n} aeApiState;\n\n#define EVENT_MASK_MALLOC_SIZE(sz) (((sz) + 3) / 4)\n#define EVENT_MASK_OFFSET(fd) ((fd) % 4 * 2)\n#define EVENT_MASK_ENCODE(fd, mask) (((mask) & 0x3) << EVENT_MASK_OFFSET(fd))\n\nstatic inline int getEventMask(const char *eventsMask, int fd) {\n    return (eventsMask[fd/4] >> EVENT_MASK_OFFSET(fd)) & 0x3;\n}\n\nstatic inline void addEventMask(char *eventsMask, int fd, int mask) {\n    eventsMask[fd/4] |= EVENT_MASK_ENCODE(fd, mask);\n}\n\nstatic inline void resetEventMask(char *eventsMask, int fd) {\n    eventsMask[fd/4] &= ~EVENT_MASK_ENCODE(fd, 0x3);\n}\n\nstatic int aeApiCreate(aeEventLoop *eventLoop) {\n    aeApiState *state = zmalloc(sizeof(aeApiState));\n\n    if (!state) return -1;\n    state->events = zmalloc(sizeof(struct kevent)*eventLoop->setsize);\n    if (!state->events) {\n        zfree(state);\n        return -1;\n    }\n    state->kqfd = kqueue();\n    if (state->kqfd == -1) {\n        zfree(state->events);\n        zfree(state);\n        return -1;\n    }\n    anetCloexec(state->kqfd);\n    state->eventsMask = 
zmalloc(EVENT_MASK_MALLOC_SIZE(eventLoop->setsize));\n    memset(state->eventsMask, 0, EVENT_MASK_MALLOC_SIZE(eventLoop->setsize));\n    eventLoop->apidata = state;\n    return 0;\n}\n\nstatic int aeApiResize(aeEventLoop *eventLoop, int setsize) {\n    aeApiState *state = eventLoop->apidata;\n\n    state->events = zrealloc(state->events, sizeof(struct kevent)*setsize);\n    state->eventsMask = zrealloc(state->eventsMask, EVENT_MASK_MALLOC_SIZE(setsize));\n    memset(state->eventsMask, 0, EVENT_MASK_MALLOC_SIZE(setsize));\n    return 0;\n}\n\nstatic void aeApiFree(aeEventLoop *eventLoop) {\n    aeApiState *state = eventLoop->apidata;\n\n    close(state->kqfd);\n    zfree(state->events);\n    zfree(state->eventsMask);\n    zfree(state);\n}\n\nstatic int aeApiAddEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n    struct kevent evs[2];\n    int nch = 0;\n\n    if (mask & AE_READABLE) EV_SET(evs + nch++, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);\n    if (mask & AE_WRITABLE) EV_SET(evs + nch++, fd, EVFILT_WRITE, EV_ADD, 0, 0, NULL);\n\n    return kevent(state->kqfd, evs, nch, NULL, 0, NULL);\n}\n\nstatic void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n    struct kevent evs[2];\n    int nch = 0;\n\n    if (mask & AE_READABLE) EV_SET(evs + nch++, fd, EVFILT_READ, EV_DELETE, 0, 0, NULL);\n    if (mask & AE_WRITABLE) EV_SET(evs + nch++, fd, EVFILT_WRITE, EV_DELETE, 0, 0, NULL);\n\n    kevent(state->kqfd, evs, nch, NULL, 0, NULL);\n}\n\nstatic int aeApiPoll(aeEventLoop *eventLoop, struct timeval *tvp) {\n    aeApiState *state = eventLoop->apidata;\n    int retval, numevents = 0;\n\n    if (tvp != NULL) {\n        struct timespec timeout;\n        timeout.tv_sec = tvp->tv_sec;\n        timeout.tv_nsec = tvp->tv_usec * 1000;\n        retval = kevent(state->kqfd, NULL, 0, state->events, eventLoop->setsize,\n                        &timeout);\n    } else {\n        retval = 
kevent(state->kqfd, NULL, 0, state->events, eventLoop->setsize,\n                        NULL);\n    }\n\n    if (retval > 0) {\n        int j;\n\n        /* Normally we execute the read event first and then the write event.\n         * When the barrier is set, we do it in reverse.\n         *\n         * However, under kqueue, read and write events would be separate\n         * events, which would make it impossible to control the order of\n         * reads and writes. So we store the mask of each event we receive and\n         * merge events for the same fd later. */\n        for (j = 0; j < retval; j++) {\n            struct kevent *e = state->events+j;\n            int fd = e->ident;\n            int mask = 0;\n\n            if (e->filter == EVFILT_READ) mask = AE_READABLE;\n            else if (e->filter == EVFILT_WRITE) mask = AE_WRITABLE;\n            addEventMask(state->eventsMask, fd, mask);\n        }\n\n        /* Traverse again to merge read and write events, and reset the fd's mask to\n         * 0 so that events are not added again when the fd is encountered again. */\n        numevents = 0;\n        for (j = 0; j < retval; j++) {\n            struct kevent *e = state->events+j;\n            int fd = e->ident;\n            int mask = getEventMask(state->eventsMask, fd);\n\n            if (mask) {\n                eventLoop->fired[numevents].fd = fd;\n                eventLoop->fired[numevents].mask = mask;\n                resetEventMask(state->eventsMask, fd);\n                numevents++;\n            }\n        }\n    } else if (retval == -1 && errno != EINTR) {\n        panic(\"aeApiPoll: kevent, %s\", strerror(errno));\n    }\n\n    return numevents;\n}\n\nstatic char *aeApiName(void) {\n    return \"kqueue\";\n}\n"
  },
  {
    "path": "src/ae_select.c",
    "content": "/* Select()-based ae.c module.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n\n#include <sys/select.h>\n#include <string.h>\n\ntypedef struct aeApiState {\n    fd_set rfds, wfds;\n    /* We need to have a copy of the fd sets as it's not safe to reuse\n     * FD sets after select(). */\n    fd_set _rfds, _wfds;\n} aeApiState;\n\nstatic int aeApiCreate(aeEventLoop *eventLoop) {\n    aeApiState *state = zmalloc(sizeof(aeApiState));\n\n    if (!state) return -1;\n    FD_ZERO(&state->rfds);\n    FD_ZERO(&state->wfds);\n    eventLoop->apidata = state;\n    return 0;\n}\n\nstatic int aeApiResize(aeEventLoop *eventLoop, int setsize) {\n    AE_NOTUSED(eventLoop);\n    /* Just ensure we have enough room in the fd_set type. */\n    if (setsize >= FD_SETSIZE) return -1;\n    return 0;\n}\n\nstatic void aeApiFree(aeEventLoop *eventLoop) {\n    zfree(eventLoop->apidata);\n}\n\nstatic int aeApiAddEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n\n    if (mask & AE_READABLE) FD_SET(fd,&state->rfds);\n    if (mask & AE_WRITABLE) FD_SET(fd,&state->wfds);\n    return 0;\n}\n\nstatic void aeApiDelEvent(aeEventLoop *eventLoop, int fd, int mask) {\n    aeApiState *state = eventLoop->apidata;\n\n    if (mask & AE_READABLE) FD_CLR(fd,&state->rfds);\n    if (mask & AE_WRITABLE) FD_CLR(fd,&state->wfds);\n}\n\nstatic int aeApiPoll(aeEventLoop *eventLoop, struct timeval *tvp) {\n    aeApiState *state = eventLoop->apidata;\n    int retval, j, numevents = 0;\n\n    memcpy(&state->_rfds,&state->rfds,sizeof(fd_set));\n    memcpy(&state->_wfds,&state->wfds,sizeof(fd_set));\n\n    retval = select(eventLoop->maxfd+1,\n                &state->_rfds,&state->_wfds,NULL,tvp);\n    if (retval > 0) {\n     
   for (j = 0; j <= eventLoop->maxfd; j++) {\n            int mask = 0;\n            aeFileEvent *fe = &eventLoop->events[j];\n\n            if (fe->mask == AE_NONE) continue;\n            if (fe->mask & AE_READABLE && FD_ISSET(j,&state->_rfds))\n                mask |= AE_READABLE;\n            if (fe->mask & AE_WRITABLE && FD_ISSET(j,&state->_wfds))\n                mask |= AE_WRITABLE;\n            eventLoop->fired[numevents].fd = j;\n            eventLoop->fired[numevents].mask = mask;\n            numevents++;\n        }\n    } else if (retval == -1 && errno != EINTR) {\n        panic(\"aeApiPoll: select, %s\", strerror(errno));\n    }\n\n    return numevents;\n}\n\nstatic char *aeApiName(void) {\n    return \"select\";\n}\n"
  },
  {
    "path": "src/anet.c",
    "content": "/* anet.c -- Basic TCP socket stuff made a bit less boring\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n#include <sys/un.h>\n#include <sys/time.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <arpa/inet.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <string.h>\n#include <netdb.h>\n#include <errno.h>\n#include <stdarg.h>\n#include <stdio.h>\n\n#include \"anet.h\"\n#include \"config.h\"\n#include \"util.h\"\n\n#define UNUSED(x) (void)(x)\n\nstatic void anetSetError(char *err, const char *fmt, ...)\n{\n    va_list ap;\n\n    if (!err) return;\n    va_start(ap, fmt);\n    vsnprintf(err, ANET_ERR_LEN, fmt, ap);\n    va_end(ap);\n}\n\nint anetGetError(int fd) {\n    int sockerr = 0;\n    socklen_t errlen = sizeof(sockerr);\n\n    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &sockerr, &errlen) == -1)\n        sockerr = errno;\n    return sockerr;\n}\n\nint anetSetBlock(char *err, int fd, int non_block) {\n    int flags;\n\n    /* Set the socket blocking (if non_block is zero) or non-blocking.\n     * Note that fcntl(2) for F_GETFL and F_SETFL can't be\n     * interrupted by a signal. */\n    if ((flags = fcntl(fd, F_GETFL)) == -1) {\n        anetSetError(err, \"fcntl(F_GETFL): %s\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    /* Check if this flag has been set or unset, if so,\n     * then there is no need to call fcntl to set/unset it again. 
*/\n    if (!!(flags & O_NONBLOCK) == !!non_block)\n        return ANET_OK;\n\n    if (non_block)\n        flags |= O_NONBLOCK;\n    else\n        flags &= ~O_NONBLOCK;\n\n    if (fcntl(fd, F_SETFL, flags) == -1) {\n        anetSetError(err, \"fcntl(F_SETFL,O_NONBLOCK): %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\nint anetNonBlock(char *err, int fd) {\n    return anetSetBlock(err,fd,1);\n}\n\nint anetBlock(char *err, int fd) {\n    return anetSetBlock(err,fd,0);\n}\n\n/* Enable FD_CLOEXEC on the given fd to avoid fd leaks.\n * This function should be invoked for fds in specific places\n * where fork + execve system calls are used. */\nint anetCloexec(int fd) {\n    int r;\n    int flags;\n\n    do {\n        r = fcntl(fd, F_GETFD);\n    } while (r == -1 && errno == EINTR);\n\n    if (r == -1 || (r & FD_CLOEXEC))\n        return r;\n\n    flags = r | FD_CLOEXEC;\n\n    do {\n        r = fcntl(fd, F_SETFD, flags);\n    } while (r == -1 && errno == EINTR);\n\n    return r;\n}\n\n/* Enable the TCP keep-alive mechanism to detect dead peers;\n * TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT will be set accordingly. 
*/\nint anetKeepAlive(char *err, int fd, int interval)\n{\n    int enabled = 1;\n    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &enabled, sizeof(enabled)))\n    {\n        anetSetError(err, \"setsockopt SO_KEEPALIVE: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    int idle;\n    int intvl;\n    int cnt;\n\n    /* On platforms that are expected to support the full TCP keep-alive mechanism,\n     * we want the compiler to emit unused-variable warnings if the preprocessor directives\n     * somehow fail; on all other platforms, we simply suppress those warnings.\n     */\n#if !(defined(_AIX) || defined(__APPLE__) || defined(__DragonFly__) || \\\n    defined(__FreeBSD__) || defined(__illumos__) || defined(__linux__) || \\\n    defined(__NetBSD__) || defined(__sun))\n    UNUSED(interval);\n    UNUSED(idle);\n    UNUSED(intvl);\n    UNUSED(cnt);\n#endif\n\n#ifdef __sun\n    /* The implementation of TCP keep-alive on Solaris/SmartOS is a bit unusual\n     * compared to other Unix-like systems.\n     * Thus, we need to specialize it on Solaris.\n     *\n     * There are two keep-alive mechanisms on Solaris:\n     * - By default, the first keep-alive probe is sent out after a TCP connection is idle for two hours.\n     * If the peer does not respond to the probe within eight minutes, the TCP connection is aborted.\n     * You can alter the interval for sending out the first probe using the socket option TCP_KEEPALIVE_THRESHOLD\n     * in milliseconds or TCP_KEEPIDLE in seconds.\n     * The system default is controlled by the TCP ndd parameter tcp_keepalive_interval. The minimum value is ten seconds.\n     * The maximum is ten days, while the default is two hours. If you receive no response to the probe,\n     * you can use the TCP_KEEPALIVE_ABORT_THRESHOLD socket option to change the time threshold for aborting a TCP connection.\n     * The option value is an unsigned integer in milliseconds. 
The value zero indicates that TCP should never time out and\n     * abort the connection when probing. The system default is controlled by the TCP ndd parameter tcp_keepalive_abort_interval.\n     * The default is eight minutes.\n     *\n     * - The second implementation is activated if socket option TCP_KEEPINTVL and/or TCP_KEEPCNT are set.\n     * The time between consecutive probes is set by TCP_KEEPINTVL in seconds.\n     * The minimum value is ten seconds. The maximum is ten days, while the default is two hours.\n     * The TCP connection will be aborted after a certain number of probes (set by TCP_KEEPCNT) are sent without receiving a response.\n     */\n\n    idle = interval;\n    if (idle < 10) idle = 10; // kernel expects at least 10 seconds\n    if (idle > 10*24*60*60) idle = 10*24*60*60; // kernel expects at most 10 days\n\n    /* `TCP_KEEPIDLE`, `TCP_KEEPINTVL`, and `TCP_KEEPCNT` were not available on Solaris\n     * until version 11.4, but let's take a chance here. */\n#if defined(TCP_KEEPIDLE) && defined(TCP_KEEPINTVL) && defined(TCP_KEEPCNT)\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle))) {\n        anetSetError(err, \"setsockopt TCP_KEEPIDLE: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    intvl = idle/3;\n    if (intvl < 10) intvl = 10; /* kernel expects at least 10 seconds */\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl))) {\n        anetSetError(err, \"setsockopt TCP_KEEPINTVL: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    cnt = 3;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt))) {\n        anetSetError(err, \"setsockopt TCP_KEEPCNT: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#else\n    /* Fall back to the first keep-alive mechanism on older Solaris versions,\n     * emulating the keep-alive behavior via `TCP_KEEPALIVE_THRESHOLD` + `TCP_KEEPALIVE_ABORT_THRESHOLD`.\n     */\n    idle *= 
1000; // kernel expects milliseconds\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE_THRESHOLD, &idle, sizeof(idle))) {\n        anetSetError(err, \"setsockopt TCP_KEEPINTVL: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    /* Note that the consequent probes will not be sent at equal intervals on Solaris,\n     * but will be sent using the exponential backoff algorithm. */\n    int time_to_abort = idle;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE_ABORT_THRESHOLD, &time_to_abort, sizeof(time_to_abort))) {\n        anetSetError(err, \"setsockopt TCP_KEEPCNT: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#endif\n\n    return ANET_OK;\n\n#endif\n\n#ifdef TCP_KEEPIDLE\n    /* Default settings are more or less garbage, with the keepalive time\n     * set to 7200 by default on Linux and other Unix-like systems.\n     * Modify settings to make the feature actually useful. */\n\n    /* Send first probe after interval. */\n    idle = interval;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle))) {\n        anetSetError(err, \"setsockopt TCP_KEEPIDLE: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#elif defined(TCP_KEEPALIVE)\n    /* Darwin/macOS uses TCP_KEEPALIVE in place of TCP_KEEPIDLE. */\n    idle = interval;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPALIVE, &idle, sizeof(idle))) {\n        anetSetError(err, \"setsockopt TCP_KEEPALIVE: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#endif\n\n#ifdef TCP_KEEPINTVL\n    /* Send next probes after the specified interval. Note that we set the\n     * delay as interval / 3, as we send three probes before detecting\n     * an error (see the next setsockopt call). 
*/\n    intvl = interval/3;\n    if (intvl == 0) intvl = 1;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl))) {\n        anetSetError(err, \"setsockopt TCP_KEEPINTVL: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#endif\n\n#ifdef TCP_KEEPCNT\n    /* Consider the socket in error state after we send three ACK\n     * probes without getting a reply. */\n    cnt = 3;\n    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt))) {\n        anetSetError(err, \"setsockopt TCP_KEEPCNT: %s\\n\", strerror(errno));\n        return ANET_ERR;\n    }\n#endif\n\n    return ANET_OK;\n}\n\nstatic int anetSetTcpNoDelay(char *err, int fd, int val)\n{\n    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &val, sizeof(val)) == -1)\n    {\n        anetSetError(err, \"setsockopt TCP_NODELAY: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\nint anetEnableTcpNoDelay(char *err, int fd)\n{\n    return anetSetTcpNoDelay(err, fd, 1);\n}\n\nint anetDisableTcpNoDelay(char *err, int fd)\n{\n    return anetSetTcpNoDelay(err, fd, 0);\n}\n\n/* Set the socket send timeout (SO_SNDTIMEO socket option) to the specified\n * number of milliseconds, or disable it if the 'ms' argument is zero. */\nint anetSendTimeout(char *err, int fd, long long ms) {\n    struct timeval tv;\n\n    tv.tv_sec = ms/1000;\n    tv.tv_usec = (ms%1000)*1000;\n    if (setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) == -1) {\n        anetSetError(err, \"setsockopt SO_SNDTIMEO: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\n/* Set the socket receive timeout (SO_RCVTIMEO socket option) to the specified\n * number of milliseconds, or disable it if the 'ms' argument is zero. 
*/\nint anetRecvTimeout(char *err, int fd, long long ms) {\n    struct timeval tv;\n\n    tv.tv_sec = ms/1000;\n    tv.tv_usec = (ms%1000)*1000;\n    if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) == -1) {\n        anetSetError(err, \"setsockopt SO_RCVTIMEO: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\n/* Resolve the hostname \"host\" and set the string representation of the\n * IP address into the buffer pointed by \"ipbuf\".\n *\n * If flags is set to ANET_IP_ONLY the function only resolves hostnames\n * that are actually already IPv4 or IPv6 addresses. This turns the function\n * into a validating / normalizing function.\n *\n * If the flag ANET_PREFER_IPV4 is set, IPv4 is preferred over IPv6.\n * If the flag ANET_PREFER_IPV6 is set, IPv6 is preferred over IPv4.\n * */\nint anetResolve(char *err, char *host, char *ipbuf, size_t ipbuf_len,\n                       int flags)\n{\n    struct addrinfo hints, *info;\n    int rv;\n\n    memset(&hints,0,sizeof(hints));\n    if (flags & ANET_IP_ONLY) hints.ai_flags = AI_NUMERICHOST;\n    hints.ai_family = AF_UNSPEC;\n    if (flags & ANET_PREFER_IPV4 && !(flags & ANET_PREFER_IPV6)) {\n        hints.ai_family = AF_INET;\n    } else if (flags & ANET_PREFER_IPV6 && !(flags & ANET_PREFER_IPV4)) {\n        hints.ai_family = AF_INET6;\n    }\n    hints.ai_socktype = SOCK_STREAM;  /* specify socktype to avoid dups */\n\n    rv = getaddrinfo(host, NULL, &hints, &info);\n    if (rv != 0 && hints.ai_family != AF_UNSPEC) {\n        /* Try the other IP version. */\n        hints.ai_family = (hints.ai_family == AF_INET) ? 
AF_INET6 : AF_INET;\n        rv = getaddrinfo(host, NULL, &hints, &info);\n    }\n    if (rv != 0) {\n        anetSetError(err, \"%s\", gai_strerror(rv));\n        return ANET_ERR;\n    }\n    if (info->ai_family == AF_INET) {\n        struct sockaddr_in *sa = (struct sockaddr_in *)info->ai_addr;\n        inet_ntop(AF_INET, &(sa->sin_addr), ipbuf, ipbuf_len);\n    } else {\n        struct sockaddr_in6 *sa = (struct sockaddr_in6 *)info->ai_addr;\n        inet_ntop(AF_INET6, &(sa->sin6_addr), ipbuf, ipbuf_len);\n    }\n\n    freeaddrinfo(info);\n    return ANET_OK;\n}\n\nstatic int anetSetReuseAddr(char *err, int fd) {\n    int yes = 1;\n    /* Make sure connection-intensive things like the redis benchmark\n     * will be able to close/open sockets a zillion times */\n    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) == -1) {\n        anetSetError(err, \"setsockopt SO_REUSEADDR: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\nstatic int anetCreateSocket(char *err, int domain) {\n    int s;\n    if ((s = socket(domain, SOCK_STREAM, 0)) == -1) {\n        anetSetError(err, \"creating socket: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n\n    /* Make sure connection-intensive things like the redis benchmark\n     * will be able to close/open sockets a zillion times */\n    if (anetSetReuseAddr(err,s) == ANET_ERR) {\n        close(s);\n        return ANET_ERR;\n    }\n    return s;\n}\n\n#define ANET_CONNECT_NONE 0\n#define ANET_CONNECT_NONBLOCK 1\n#define ANET_CONNECT_BE_BINDING 2 /* Best effort binding. 
*/\nstatic int anetTcpGenericConnect(char *err, const char *addr, int port,\n                                 const char *source_addr, int flags)\n{\n    int s = ANET_ERR, rv;\n    char portstr[6];  /* strlen(\"65535\") + 1; */\n    struct addrinfo hints, *servinfo, *bservinfo, *p, *b;\n\n    snprintf(portstr,sizeof(portstr),\"%d\",port);\n    memset(&hints,0,sizeof(hints));\n    hints.ai_family = AF_UNSPEC;\n    hints.ai_socktype = SOCK_STREAM;\n\n    if ((rv = getaddrinfo(addr,portstr,&hints,&servinfo)) != 0) {\n        anetSetError(err, \"%s\", gai_strerror(rv));\n        return ANET_ERR;\n    }\n    for (p = servinfo; p != NULL; p = p->ai_next) {\n        /* Try to create the socket and to connect it.\n         * If we fail in the socket() call, or on connect(), we retry with\n         * the next entry in servinfo. */\n        if ((s = socket(p->ai_family,p->ai_socktype,p->ai_protocol)) == -1)\n            continue;\n        if (anetSetReuseAddr(err,s) == ANET_ERR) goto error;\n        if (flags & ANET_CONNECT_NONBLOCK && anetNonBlock(err,s) != ANET_OK)\n            goto error;\n        if (source_addr) {\n            int bound = 0;\n            /* Using getaddrinfo saves us from self-determining IPv4 vs IPv6 */\n            if ((rv = getaddrinfo(source_addr, NULL, &hints, &bservinfo)) != 0)\n            {\n                anetSetError(err, \"%s\", gai_strerror(rv));\n                goto error;\n            }\n            for (b = bservinfo; b != NULL; b = b->ai_next) {\n                if (bind(s,b->ai_addr,b->ai_addrlen) != -1) {\n                    bound = 1;\n                    break;\n                }\n            }\n            freeaddrinfo(bservinfo);\n            if (!bound) {\n                anetSetError(err, \"bind: %s\", strerror(errno));\n                goto error;\n            }\n        }\n        if (connect(s,p->ai_addr,p->ai_addrlen) == -1) {\n            /* If the socket is non-blocking, it is ok for connect() to\n             * return 
an EINPROGRESS error here. */\n            if (errno == EINPROGRESS && flags & ANET_CONNECT_NONBLOCK)\n                goto end;\n            close(s);\n            s = ANET_ERR;\n            continue;\n        }\n\n        /* If we ended an iteration of the for loop without errors, we\n         * have a connected socket. Let's return to the caller. */\n        goto end;\n    }\n    if (p == NULL)\n        anetSetError(err, \"creating socket: %s\", strerror(errno));\n\nerror:\n    if (s != ANET_ERR) {\n        close(s);\n        s = ANET_ERR;\n    }\n\nend:\n    freeaddrinfo(servinfo);\n\n    /* Handle best effort binding: if a binding address was used, but it is\n     * not possible to create a socket, try again without a binding address. */\n    if (s == ANET_ERR && source_addr && (flags & ANET_CONNECT_BE_BINDING)) {\n        return anetTcpGenericConnect(err,addr,port,NULL,flags);\n    } else {\n        return s;\n    }\n}\n\nint anetTcpNonBlockConnect(char *err, const char *addr, int port)\n{\n    return anetTcpGenericConnect(err,addr,port,NULL,ANET_CONNECT_NONBLOCK);\n}\n\nint anetTcpNonBlockBestEffortBindConnect(char *err, const char *addr, int port,\n                                         const char *source_addr)\n{\n    return anetTcpGenericConnect(err,addr,port,source_addr,\n            ANET_CONNECT_NONBLOCK|ANET_CONNECT_BE_BINDING);\n}\n\nint anetUnixGenericConnect(char *err, const char *path, int flags)\n{\n    int s;\n    struct sockaddr_un sa;\n\n    if ((s = anetCreateSocket(err,AF_LOCAL)) == ANET_ERR)\n        return ANET_ERR;\n\n    sa.sun_family = AF_LOCAL;\n    redis_strlcpy(sa.sun_path,path,sizeof(sa.sun_path));\n    if (flags & ANET_CONNECT_NONBLOCK) {\n        if (anetNonBlock(err,s) != ANET_OK) {\n            close(s);\n            return ANET_ERR;\n        }\n    }\n    if (connect(s,(struct sockaddr*)&sa,sizeof(sa)) == -1) {\n        if (errno == EINPROGRESS &&\n            flags & ANET_CONNECT_NONBLOCK)\n            return s;\n\n        
anetSetError(err, \"connect: %s\", strerror(errno));\n        close(s);\n        return ANET_ERR;\n    }\n    return s;\n}\n\nstatic int anetListen(char *err, int s, struct sockaddr *sa, socklen_t len, int backlog, mode_t perm) {\n    if (bind(s,sa,len) == -1) {\n        anetSetError(err, \"bind: %s\", strerror(errno));\n        close(s);\n        return ANET_ERR;\n    }\n\n    if (sa->sa_family == AF_LOCAL && perm)\n        chmod(((struct sockaddr_un *) sa)->sun_path, perm);\n\n    if (listen(s, backlog) == -1) {\n        anetSetError(err, \"listen: %s\", strerror(errno));\n        close(s);\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\nstatic int anetV6Only(char *err, int s) {\n    int yes = 1;\n    if (setsockopt(s,IPPROTO_IPV6,IPV6_V6ONLY,&yes,sizeof(yes)) == -1) {\n        anetSetError(err, \"setsockopt: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n}\n\nstatic int _anetTcpServer(char *err, int port, char *bindaddr, int af, int backlog)\n{\n    int s = -1, rv;\n    char _port[6];  /* strlen(\"65535\") */\n    struct addrinfo hints, *servinfo, *p;\n\n    snprintf(_port,6,\"%d\",port);\n    memset(&hints,0,sizeof(hints));\n    hints.ai_family = af;\n    hints.ai_socktype = SOCK_STREAM;\n    hints.ai_flags = AI_PASSIVE;    /* No effect if bindaddr != NULL */\n    if (bindaddr && !strcmp(\"*\", bindaddr))\n        bindaddr = NULL;\n    if (af == AF_INET6 && bindaddr && !strcmp(\"::*\", bindaddr))\n        bindaddr = NULL;\n\n    if ((rv = getaddrinfo(bindaddr,_port,&hints,&servinfo)) != 0) {\n        anetSetError(err, \"%s\", gai_strerror(rv));\n        return ANET_ERR;\n    }\n    for (p = servinfo; p != NULL; p = p->ai_next) {\n        if ((s = socket(p->ai_family,p->ai_socktype,p->ai_protocol)) == -1)\n            continue;\n\n        if (af == AF_INET6 && anetV6Only(err,s) == ANET_ERR) goto error;\n        if (anetSetReuseAddr(err,s) == ANET_ERR) goto error;\n        if 
(anetListen(err,s,p->ai_addr,p->ai_addrlen,backlog,0) == ANET_ERR) s = ANET_ERR;\n        goto end;\n    }\n    if (p == NULL) {\n        anetSetError(err, \"unable to bind socket, errno: %d\", errno);\n        goto error;\n    }\n\nerror:\n    if (s != -1) close(s);\n    s = ANET_ERR;\nend:\n    freeaddrinfo(servinfo);\n    return s;\n}\n\nint anetTcpServer(char *err, int port, char *bindaddr, int backlog)\n{\n    return _anetTcpServer(err, port, bindaddr, AF_INET, backlog);\n}\n\nint anetTcp6Server(char *err, int port, char *bindaddr, int backlog)\n{\n    return _anetTcpServer(err, port, bindaddr, AF_INET6, backlog);\n}\n\nint anetUnixServer(char *err, char *path, mode_t perm, int backlog)\n{\n    int s;\n    struct sockaddr_un sa;\n\n    if (strlen(path) > sizeof(sa.sun_path)-1) {\n        anetSetError(err,\"unix socket path too long (%zu), must be under %zu\", strlen(path), sizeof(sa.sun_path));\n        return ANET_ERR;\n    }\n    if ((s = anetCreateSocket(err,AF_LOCAL)) == ANET_ERR)\n        return ANET_ERR;\n\n    memset(&sa,0,sizeof(sa));\n    sa.sun_family = AF_LOCAL;\n    redis_strlcpy(sa.sun_path,path,sizeof(sa.sun_path));\n    if (anetListen(err,s,(struct sockaddr*)&sa,sizeof(sa),backlog,perm) == ANET_ERR)\n        return ANET_ERR;\n    return s;\n}\n\n/* Accept a connection and also make sure the socket is non-blocking, and CLOEXEC.\n * returns the new socket FD, or -1 on error. */\nstatic int anetGenericAccept(char *err, int s, struct sockaddr *sa, socklen_t *len) {\n    int fd;\n    do {\n        /* Use the accept4() call on linux to simultaneously accept and\n         * set a socket as non-blocking. 
*/\n#ifdef HAVE_ACCEPT4\n        fd = accept4(s, sa, len,  SOCK_NONBLOCK | SOCK_CLOEXEC);\n#else\n        fd = accept(s,sa,len);\n#endif\n    } while(fd == -1 && errno == EINTR);\n    if (fd == -1) {\n        anetSetError(err, \"accept: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n#ifndef HAVE_ACCEPT4\n    if (anetCloexec(fd) == -1) {\n        anetSetError(err, \"anetCloexec: %s\", strerror(errno));\n        close(fd);\n        return ANET_ERR;\n    }\n    if (anetNonBlock(err, fd) != ANET_OK) {\n        close(fd);\n        return ANET_ERR;\n    }\n#endif\n    return fd;\n}\n\n/* Accept a connection and also make sure the socket is non-blocking, and CLOEXEC.\n * returns the new socket FD, or -1 on error. */\nint anetTcpAccept(char *err, int serversock, char *ip, size_t ip_len, int *port) {\n    int fd;\n    struct sockaddr_storage sa;\n    socklen_t salen = sizeof(sa);\n    if ((fd = anetGenericAccept(err,serversock,(struct sockaddr*)&sa,&salen)) == ANET_ERR)\n        return ANET_ERR;\n\n    if (sa.ss_family == AF_INET) {\n        struct sockaddr_in *s = (struct sockaddr_in *)&sa;\n        if (ip) inet_ntop(AF_INET,(void*)&(s->sin_addr),ip,ip_len);\n        if (port) *port = ntohs(s->sin_port);\n    } else {\n        struct sockaddr_in6 *s = (struct sockaddr_in6 *)&sa;\n        if (ip) inet_ntop(AF_INET6,(void*)&(s->sin6_addr),ip,ip_len);\n        if (port) *port = ntohs(s->sin6_port);\n    }\n    return fd;\n}\n\n/* Accept a connection and also make sure the socket is non-blocking, and CLOEXEC.\n * returns the new socket FD, or -1 on error. 
*/\nint anetUnixAccept(char *err, int s) {\n    int fd;\n    struct sockaddr_un sa;\n    socklen_t salen = sizeof(sa);\n    if ((fd = anetGenericAccept(err,s,(struct sockaddr*)&sa,&salen)) == ANET_ERR)\n        return ANET_ERR;\n\n    return fd;\n}\n\nint anetFdToString(int fd, char *ip, size_t ip_len, int *port, int remote) {\n    struct sockaddr_storage sa;\n    socklen_t salen = sizeof(sa);\n\n    if (remote) {\n        if (getpeername(fd, (struct sockaddr *)&sa, &salen) == -1) goto error;\n    } else {\n        if (getsockname(fd, (struct sockaddr *)&sa, &salen) == -1) goto error;\n    }\n\n    if (sa.ss_family == AF_INET) {\n        struct sockaddr_in *s = (struct sockaddr_in *)&sa;\n        if (ip) {\n            if (inet_ntop(AF_INET,(void*)&(s->sin_addr),ip,ip_len) == NULL)\n                goto error;\n        }\n        if (port) *port = ntohs(s->sin_port);\n    } else if (sa.ss_family == AF_INET6) {\n        struct sockaddr_in6 *s = (struct sockaddr_in6 *)&sa;\n        if (ip) {\n            if (inet_ntop(AF_INET6,(void*)&(s->sin6_addr),ip,ip_len) == NULL)\n                goto error;\n        }\n        if (port) *port = ntohs(s->sin6_port);\n    } else if (sa.ss_family == AF_UNIX) {\n        if (ip) {\n            int res = snprintf(ip, ip_len, \"/unixsocket\");\n            if (res < 0 || (unsigned int) res >= ip_len) goto error;\n        }\n        if (port) *port = 0;\n    } else {\n        goto error;\n    }\n    return 0;\n\nerror:\n    if (ip) {\n        if (ip_len >= 2) {\n            ip[0] = '?';\n            ip[1] = '\\0';\n        } else if (ip_len == 1) {\n            ip[0] = '\\0';\n        }\n    }\n    if (port) *port = 0;\n    return -1;\n}\n\n/* Create a pipe buffer with given flags for read end and write end.\n * Note that it supports the file flags defined by pipe2() and fcntl(F_SETFL),\n * and one of the use cases is O_CLOEXEC|O_NONBLOCK. 
*/\nint anetPipe(int fds[2], int read_flags, int write_flags) {\n    int pipe_flags = 0;\n#ifdef HAVE_PIPE2\n    /* When possible, try to leverage pipe2() to apply flags that are common to both ends.\n     * There is no harm to set O_CLOEXEC to prevent fd leaks. */\n    pipe_flags = O_CLOEXEC | (read_flags & write_flags);\n    if (pipe2(fds, pipe_flags)) {\n        /* Fail on real failures, and fallback to simple pipe if pipe2 is unsupported. */\n        if (errno != ENOSYS && errno != EINVAL)\n            return -1;\n        pipe_flags = 0;\n    } else {\n        /* If the flags on both ends are identical, no need to do anything else. */\n        if ((O_CLOEXEC | read_flags) == (O_CLOEXEC | write_flags))\n            return 0;\n        /* Clear the flags which have already been set using pipe2. */\n        read_flags &= ~pipe_flags;\n        write_flags &= ~pipe_flags;\n    }\n#endif\n\n    /* When we reach here with pipe_flags of 0, it means pipe2 failed (or was not attempted),\n     * so we try to use pipe. Otherwise, we skip and proceed to set specific flags below. */\n    if (pipe_flags == 0 && pipe(fds))\n        return -1;\n\n    /* File descriptor flags.\n     * Currently, only one such flag is defined: FD_CLOEXEC, the close-on-exec flag. */\n    if (read_flags & O_CLOEXEC)\n        if (fcntl(fds[0], F_SETFD, FD_CLOEXEC))\n            goto error;\n    if (write_flags & O_CLOEXEC)\n        if (fcntl(fds[1], F_SETFD, FD_CLOEXEC))\n            goto error;\n\n    /* File status flags after clearing the file descriptor flag O_CLOEXEC. 
*/\n    read_flags &= ~O_CLOEXEC;\n    if (read_flags)\n        if (fcntl(fds[0], F_SETFL, read_flags))\n            goto error;\n    write_flags &= ~O_CLOEXEC;\n    if (write_flags)\n        if (fcntl(fds[1], F_SETFL, write_flags))\n            goto error;\n\n    return 0;\n\nerror:\n    close(fds[0]);\n    close(fds[1]);\n    return -1;\n}\n\nint anetSetSockMarkId(char *err, int fd, uint32_t id) {\n#ifdef HAVE_SOCKOPTMARKID\n    if (setsockopt(fd, SOL_SOCKET, SOCKOPTMARKID, (void *)&id, sizeof(id)) == -1) {\n        anetSetError(err, \"setsockopt: %s\", strerror(errno));\n        return ANET_ERR;\n    }\n    return ANET_OK;\n#else\n    UNUSED(fd);\n    UNUSED(id);\n    anetSetError(err,\"anetSetSockMarkid unsupported on this platform\");\n    return ANET_ERR;\n#endif\n}\n\nint anetIsFifo(char *filepath) {\n    struct stat sb;\n    if (stat(filepath, &sb) == -1) return 0;\n    return S_ISFIFO(sb.st_mode);\n}\n\n/* This function must be called after accept4() fails. It returns 1 if 'err'\n * indicates the accepted connection faced an error, and it's okay to continue\n * accepting the next connection by calling accept4() again. Other errors either\n * indicate programming errors, e.g. calling accept() on a closed fd, or indicate\n * a resource limit has been reached, e.g. -EMFILE, the open fd limit has been\n * reached. In the latter case, the caller might wait until resources are available.\n * See accept4() documentation for details. */\nint anetAcceptFailureNeedsRetry(int err) {\n    if (err == ECONNABORTED)\n        return 1;\n\n#if defined(__linux__)\n    /* For details, see 'Error Handling' section on\n     * https://man7.org/linux/man-pages/man2/accept.2.html */\n    if (err == ENETDOWN || err == EPROTO || err == ENOPROTOOPT ||\n        err == EHOSTDOWN || err == ENONET || err == EHOSTUNREACH ||\n        err == EOPNOTSUPP || err == ENETUNREACH)\n    {\n        return 1;\n    }\n#endif\n    return 0;\n}\n"
  },
  {
    "path": "src/anet.h",
    "content": "/* anet.c -- Basic TCP socket stuff made a bit less boring\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef ANET_H\n#define ANET_H\n\n#include <sys/types.h>\n\n#define ANET_OK 0\n#define ANET_ERR -1\n#define ANET_ERR_LEN 256\n\n/* Flags used with certain functions. */\n#define ANET_NONE 0\n#define ANET_IP_ONLY (1<<0)\n#define ANET_PREFER_IPV4 (1<<1)\n#define ANET_PREFER_IPV6 (1<<2)\n\n#if defined(__sun) || defined(_AIX)\n#define AF_LOCAL AF_UNIX\n#endif\n\n#ifdef _AIX\n#undef ip_len\n#endif\n\nint anetTcpNonBlockConnect(char *err, const char *addr, int port);\nint anetTcpNonBlockBestEffortBindConnect(char *err, const char *addr, int port, const char *source_addr);\nint anetResolve(char *err, char *host, char *ipbuf, size_t ipbuf_len, int flags);\nint anetTcpServer(char *err, int port, char *bindaddr, int backlog);\nint anetTcp6Server(char *err, int port, char *bindaddr, int backlog);\nint anetUnixServer(char *err, char *path, mode_t perm, int backlog);\nint anetTcpAccept(char *err, int serversock, char *ip, size_t ip_len, int *port);\nint anetUnixAccept(char *err, int serversock);\nint anetNonBlock(char *err, int fd);\nint anetBlock(char *err, int fd);\nint anetCloexec(int fd);\nint anetEnableTcpNoDelay(char *err, int fd);\nint anetDisableTcpNoDelay(char *err, int fd);\nint anetSendTimeout(char *err, int fd, long long ms);\nint anetRecvTimeout(char *err, int fd, long long ms);\nint anetFdToString(int fd, char *ip, size_t ip_len, int *port, int remote);\nint anetKeepAlive(char *err, int fd, int interval);\nint anetFormatAddr(char *fmt, size_t fmt_len, char *ip, int port);\nint anetPipe(int fds[2], int read_flags, int write_flags);\nint anetSetSockMarkId(char *err, int fd, uint32_t id);\nint anetGetError(int 
fd);\nint anetIsFifo(char *filepath);\nint anetAcceptFailureNeedsRetry(int err);\n\n#endif\n"
  },
  {
    "path": "src/aof.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"bio.h\"\n#include \"rio.h\"\n#include \"functions.h\"\n\n#include <signal.h>\n#include <fcntl.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <sys/wait.h>\n#include <sys/param.h>\n\nvoid freeClientArgv(client *c);\noff_t getAppendOnlyFileSize(sds filename, int *status);\noff_t getBaseAndIncrAppendOnlyFilesSize(aofManifest *am, int *status);\nint getBaseAndIncrAppendOnlyFilesNum(aofManifest *am);\nint aofFileExist(char *filename);\nint rewriteAppendOnlyFile(char *filename);\naofManifest *aofLoadManifestFromFile(sds am_filepath);\nvoid aofManifestFreeAndUpdate(aofManifest *am);\nvoid aof_background_fsync_and_close(int fd);\n\n/* When we call 'startAppendOnly', we will create a temp INCR AOF, and rename\n * it to the real INCR AOF name when the AOFRW is done, so if we want to know the\n * accurate start offset of the INCR AOF, we need to record it when we create\n * the temp INCR AOF. This variable is used to record the start offset, and\n * set the start offset of the real INCR AOF when the AOFRW is done. */\nstatic long long tempIncAofStartReplOffset = 0;\n\n/* ----------------------------------------------------------------------------\n * AOF Manifest file implementation.\n *\n * The following code implements the read/write logic of the AOF manifest file,\n * which is used to track and manage all AOF files.\n *\n * Append-only files consist of three types:\n *\n * BASE: Represents a Redis snapshot from the time of last AOF rewrite. 
The manifest\n * file contains at most a single BASE file, which will always be the first file in the\n * list.\n *\n * INCR: Represents all write commands executed by Redis following the last successful\n * AOF rewrite. In some cases it is possible to have several ordered INCR files. For\n * example:\n *   - During an on-going AOF rewrite\n *   - After an AOF rewrite was aborted/failed, and before the next one succeeded.\n *\n * HISTORY: After a successful rewrite, the previous BASE and INCR become HISTORY files.\n * They will be automatically removed unless garbage collection is disabled.\n *\n * The following is a possible AOF manifest file content:\n *\n * file appendonly.aof.2.base.rdb seq 2 type b\n * file appendonly.aof.1.incr.aof seq 1 type h\n * file appendonly.aof.2.incr.aof seq 2 type h\n * file appendonly.aof.3.incr.aof seq 3 type h\n * file appendonly.aof.4.incr.aof seq 4 type i\n * file appendonly.aof.5.incr.aof seq 5 type i\n * ------------------------------------------------------------------------- */\n\n/* Naming rules. */\n#define BASE_FILE_SUFFIX           \".base\"\n#define INCR_FILE_SUFFIX           \".incr\"\n#define RDB_FORMAT_SUFFIX          \".rdb\"\n#define AOF_FORMAT_SUFFIX          \".aof\"\n#define MANIFEST_NAME_SUFFIX       \".manifest\"\n#define TEMP_FILE_NAME_PREFIX      \"temp-\"\n\n/* AOF manifest key. */\n#define AOF_MANIFEST_KEY_FILE_NAME   \"file\"\n#define AOF_MANIFEST_KEY_FILE_SEQ    \"seq\"\n#define AOF_MANIFEST_KEY_FILE_TYPE   \"type\"\n#define AOF_MANIFEST_KEY_FILE_STARTOFFSET \"startoffset\"\n#define AOF_MANIFEST_KEY_FILE_ENDOFFSET   \"endoffset\"\n\n/* Create an empty aofInfo. */\naofInfo *aofInfoCreate(void) {\n    aofInfo *ai = zcalloc(sizeof(aofInfo));\n    ai->start_offset = -1;\n    ai->end_offset = -1;\n    return ai;\n}\n\n/* Free the aofInfo structure (pointed to by ai) and its embedded file_name. 
*/\nvoid aofInfoFree(aofInfo *ai) {\n    serverAssert(ai != NULL);\n    if (ai->file_name) sdsfree(ai->file_name);\n    zfree(ai);\n}\n\n/* Deep copy an aofInfo. */\naofInfo *aofInfoDup(aofInfo *orig) {\n    serverAssert(orig != NULL);\n    aofInfo *ai = aofInfoCreate();\n    ai->file_name = sdsdup(orig->file_name);\n    ai->file_seq = orig->file_seq;\n    ai->file_type = orig->file_type;\n    ai->start_offset = orig->start_offset;\n    ai->end_offset = orig->end_offset;\n    return ai;\n}\n\n/* Format an aofInfo as a string; it will become one line in the manifest.\n *\n * When updating this format, make sure to update redis-check-aof as well. */\nsds aofInfoFormat(sds buf, aofInfo *ai) {\n    sds filename_repr = NULL;\n\n    if (sdsneedsrepr(ai->file_name))\n        filename_repr = sdscatrepr(sdsempty(), ai->file_name, sdslen(ai->file_name));\n\n    sds ret = sdscatprintf(buf, \"%s %s %s %lld %s %c\",\n        AOF_MANIFEST_KEY_FILE_NAME, filename_repr ? filename_repr : ai->file_name,\n        AOF_MANIFEST_KEY_FILE_SEQ, ai->file_seq,\n        AOF_MANIFEST_KEY_FILE_TYPE, ai->file_type);\n\n    if (ai->start_offset != -1) {\n        ret = sdscatprintf(ret, \" %s %lld\", AOF_MANIFEST_KEY_FILE_STARTOFFSET, ai->start_offset);\n        if (ai->end_offset != -1) {\n            ret = sdscatprintf(ret, \" %s %lld\", AOF_MANIFEST_KEY_FILE_ENDOFFSET, ai->end_offset);\n        }\n    }\n\n    ret = sdscatlen(ret, \"\\n\", 1);\n    sdsfree(filename_repr);\n\n    return ret;\n}\n\n/* Method to free AOF list elements. */\nvoid aofListFree(void *item) {\n    aofInfo *ai = (aofInfo *)item;\n    aofInfoFree(ai);\n}\n\n/* Method to duplicate AOF list elements. */\nvoid *aofListDup(void *item) {\n    return aofInfoDup(item);\n}\n\n/* Create an empty aofManifest, which will be called in `aofLoadManifestFromDisk`. 
*/\naofManifest *aofManifestCreate(void) {\n    aofManifest *am = zcalloc(sizeof(aofManifest));\n    am->incr_aof_list = listCreate();\n    am->history_aof_list = listCreate();\n    listSetFreeMethod(am->incr_aof_list, aofListFree);\n    listSetDupMethod(am->incr_aof_list, aofListDup);\n    listSetFreeMethod(am->history_aof_list, aofListFree);\n    listSetDupMethod(am->history_aof_list, aofListDup);\n    return am;\n}\n\n/* Free the aofManifest structure (pointed to by am) and its embedded members. */\nvoid aofManifestFree(aofManifest *am) {\n    if (am->base_aof_info) aofInfoFree(am->base_aof_info);\n    if (am->incr_aof_list) listRelease(am->incr_aof_list);\n    if (am->history_aof_list) listRelease(am->history_aof_list);\n    zfree(am);\n}\n\nsds getAofManifestFileName(void) {\n    return sdscatprintf(sdsempty(), \"%s%s\", server.aof_filename,\n                MANIFEST_NAME_SUFFIX);\n}\n\nsds getTempAofManifestFileName(void) {\n    return sdscatprintf(sdsempty(), \"%s%s%s\", TEMP_FILE_NAME_PREFIX,\n                server.aof_filename, MANIFEST_NAME_SUFFIX);\n}\n\nsds appendAofInfoFromList(sds buf, list *aofList) {\n    listNode *ln;\n    listIter li;\n\n    listRewind(aofList, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        aofInfo *ai = (aofInfo*)ln->value;\n        buf = aofInfoFormat(buf, ai);\n    }\n\n    return buf;\n}\n\n/* Returns the string representation of aofManifest pointed to by am.\n *\n * The string is multiple lines separated by '\\n', and each line represents\n * an AOF file.\n *\n * Each line is space delimited and contains 6 fields, as follows:\n * \"file\" [filename] \"seq\" [sequence] \"type\" [type]\n *\n * Where \"file\", \"seq\" and \"type\" are keywords that describe the next value,\n * [filename] and [sequence] describe file name and order, and [type] is one\n * of 'b' (base), 'h' (history) or 'i' (incr).\n *\n * The base file, if exists, will always be first, followed by history files,\n * and incremental files.\n */\nsds 
getAofManifestAsString(aofManifest *am) {\n    serverAssert(am != NULL);\n\n    sds buf = sdsempty();\n\n    /* 1. Add BASE file information; it is always at the beginning\n     * of the manifest file. */\n    if (am->base_aof_info) {\n        buf = aofInfoFormat(buf, am->base_aof_info);\n    }\n\n    /* 2. Add HISTORY type AOF information. */\n    buf = appendAofInfoFromList(buf, am->history_aof_list);\n\n    /* 3. Add INCR type AOF information. */\n    buf = appendAofInfoFromList(buf, am->incr_aof_list);\n\n    return buf;\n}\n\n/* Load the manifest information from the disk to `server.aof_manifest`\n * when the Redis server starts.\n *\n * During loading, this function does strict error checking and will abort\n * the entire Redis server process on error (I/O error, invalid format, etc.)\n *\n * If the AOF directory or manifest file do not exist, this will be ignored\n * in order to support seamless upgrades from previous versions which did not\n * use them.\n */\nvoid aofLoadManifestFromDisk(void) {\n    server.aof_manifest = aofManifestCreate();\n    if (!dirExists(server.aof_dirname)) {\n        serverLog(LL_DEBUG, \"The AOF directory %s doesn't exist\", server.aof_dirname);\n        return;\n    }\n\n    sds am_name = getAofManifestFileName();\n    sds am_filepath = makePath(server.aof_dirname, am_name);\n    if (!fileExist(am_filepath)) {\n        serverLog(LL_DEBUG, \"The AOF manifest file %s doesn't exist\", am_name);\n        sdsfree(am_name);\n        sdsfree(am_filepath);\n        return;\n    }\n\n    aofManifest *am = aofLoadManifestFromFile(am_filepath);\n    if (am) aofManifestFreeAndUpdate(am);\n    sdsfree(am_name);\n    sdsfree(am_filepath);\n}\n\n/* Generic manifest loading function, used in `aofLoadManifestFromDisk` and the redis-check-aof tool. 
*/\n#define MANIFEST_MAX_LINE 1024\naofManifest *aofLoadManifestFromFile(sds am_filepath) {\n    const char *err = NULL;\n    long long maxseq = 0;\n\n    aofManifest *am = aofManifestCreate();\n    FILE *fp = fopen(am_filepath, \"r\");\n    if (fp == NULL) {\n        serverLog(LL_WARNING, \"Fatal error: can't open the AOF manifest \"\n            \"file %s for reading: %s\", am_filepath, strerror(errno));\n        exit(1);\n    }\n\n    char buf[MANIFEST_MAX_LINE+1];\n    sds *argv = NULL;\n    int argc;\n    aofInfo *ai = NULL;\n\n    sds line = NULL;\n    int linenum = 0;\n\n    while (1) {\n        if (fgets(buf, MANIFEST_MAX_LINE+1, fp) == NULL) {\n            if (feof(fp)) {\n                if (linenum == 0) {\n                    err = \"Found an empty AOF manifest\";\n                    goto loaderr;\n                } else {\n                    break;\n                }\n            } else {\n                err = \"Read AOF manifest failed\";\n                goto loaderr;\n            }\n        }\n\n        linenum++;\n\n        /* Skip comments lines */\n        if (buf[0] == '#') continue;\n\n        if (strchr(buf, '\\n') == NULL) {\n            err = \"The AOF manifest file contains too long line\";\n            goto loaderr;\n        }\n\n        line = sdstrim(sdsnew(buf), \" \\t\\r\\n\");\n        if (!sdslen(line)) {\n            err = \"Invalid AOF manifest file format\";\n            goto loaderr;\n        }\n\n        argv = sdssplitargs(line, &argc);\n        /* 'argc < 6' was done for forward compatibility. 
*/\n        if (argv == NULL || argc < 6 || (argc % 2)) {\n            err = \"Invalid AOF manifest file format\";\n            goto loaderr;\n        }\n\n        ai = aofInfoCreate();\n        for (int i = 0; i < argc; i += 2) {\n            if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_FILE_NAME)) {\n                ai->file_name = sdsnew(argv[i+1]);\n                if (!pathIsBaseName(ai->file_name)) {\n                    err = \"File can't be a path, just a filename\";\n                    goto loaderr;\n                }\n            } else if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_FILE_SEQ)) {\n                ai->file_seq = atoll(argv[i+1]);\n            } else if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_FILE_TYPE)) {\n                ai->file_type = (argv[i+1])[0];\n            } else if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_FILE_STARTOFFSET)) {\n                ai->start_offset = atoll(argv[i+1]);\n            } else if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_FILE_ENDOFFSET)) {\n                ai->end_offset = atoll(argv[i+1]);\n            }\n            /* else if (!strcasecmp(argv[i], AOF_MANIFEST_KEY_OTHER)) {} */\n        }\n\n        /* We have to make sure we load all the information. 
*/\n        if (!ai->file_name || !ai->file_seq || !ai->file_type) {\n            err = \"Invalid AOF manifest file format\";\n            goto loaderr;\n        }\n\n        sdsfreesplitres(argv, argc);\n        argv = NULL;\n\n        if (ai->file_type == AOF_FILE_TYPE_BASE) {\n            if (am->base_aof_info) {\n                err = \"Found duplicate base file information\";\n                goto loaderr;\n            }\n            am->base_aof_info = ai;\n            am->curr_base_file_seq = ai->file_seq;\n        } else if (ai->file_type == AOF_FILE_TYPE_HIST) {\n            listAddNodeTail(am->history_aof_list, ai);\n        } else if (ai->file_type == AOF_FILE_TYPE_INCR) {\n            if (ai->file_seq <= maxseq) {\n                err = \"Found a non-monotonic sequence number\";\n                goto loaderr;\n            }\n            listAddNodeTail(am->incr_aof_list, ai);\n            am->curr_incr_file_seq = ai->file_seq;\n            maxseq = ai->file_seq;\n        } else {\n            err = \"Unknown AOF file type\";\n            goto loaderr;\n        }\n\n        sdsfree(line);\n        line = NULL;\n        ai = NULL;\n    }\n\n    fclose(fp);\n    return am;\n\nloaderr:\n    /* Sanitizer suppression: may report a false positive if we goto loaderr\n     * and exit(1) without freeing these allocations. */\n    if (argv) sdsfreesplitres(argv, argc);\n    if (ai) aofInfoFree(ai);\n\n    serverLog(LL_WARNING, \"\\n*** FATAL AOF MANIFEST FILE ERROR ***\\n\");\n    if (line) {\n        serverLog(LL_WARNING, \"Reading the manifest file, at line %d\\n\", linenum);\n        serverLog(LL_WARNING, \">>> '%s'\\n\", line);\n    }\n    serverLog(LL_WARNING, \"%s\\n\", err);\n    exit(1);\n}\n\n/* Deep copy an aofManifest from orig.\n *\n * In `backgroundRewriteDoneHandler` and `openNewIncrAofForAppend`, we will\n * first deep copy a temporary AOF manifest from the `server.aof_manifest` and\n * try to modify it. 
Once everything is modified, we will atomically make the\n * `server.aof_manifest` point to this temporary aof_manifest.\n */\naofManifest *aofManifestDup(aofManifest *orig) {\n    serverAssert(orig != NULL);\n    aofManifest *am = zcalloc(sizeof(aofManifest));\n\n    am->curr_base_file_seq = orig->curr_base_file_seq;\n    am->curr_incr_file_seq = orig->curr_incr_file_seq;\n    am->dirty = orig->dirty;\n\n    if (orig->base_aof_info) {\n        am->base_aof_info = aofInfoDup(orig->base_aof_info);\n    }\n\n    am->incr_aof_list = listDup(orig->incr_aof_list);\n    am->history_aof_list = listDup(orig->history_aof_list);\n    serverAssert(am->incr_aof_list != NULL);\n    serverAssert(am->history_aof_list != NULL);\n    return am;\n}\n\n/* Change the `server.aof_manifest` pointer to 'am' and free the previous\n * one if we have. */\nvoid aofManifestFreeAndUpdate(aofManifest *am) {\n    serverAssert(am != NULL);\n    if (server.aof_manifest) aofManifestFree(server.aof_manifest);\n    server.aof_manifest = am;\n}\n\n/* Called in `backgroundRewriteDoneHandler` to get a new BASE file\n * name, and mark the previous (if we have) BASE file as HISTORY type.\n *\n * BASE file naming rules: `server.aof_filename`.seq.base.format\n *\n * for example:\n *  appendonly.aof.1.base.aof  (server.aof_use_rdb_preamble is no)\n *  appendonly.aof.1.base.rdb  (server.aof_use_rdb_preamble is yes)\n */\nsds getNewBaseFileNameAndMarkPreAsHistory(aofManifest *am) {\n    serverAssert(am != NULL);\n    if (am->base_aof_info) {\n        serverAssert(am->base_aof_info->file_type == AOF_FILE_TYPE_BASE);\n        am->base_aof_info->file_type = AOF_FILE_TYPE_HIST;\n        listAddNodeHead(am->history_aof_list, am->base_aof_info);\n    }\n\n    char *format_suffix = server.aof_use_rdb_preamble ?\n        RDB_FORMAT_SUFFIX:AOF_FORMAT_SUFFIX;\n\n    aofInfo *ai = aofInfoCreate();\n    ai->file_name = sdscatprintf(sdsempty(), \"%s.%lld%s%s\", server.aof_filename,\n                        
++am->curr_base_file_seq, BASE_FILE_SUFFIX, format_suffix);\n    ai->file_seq = am->curr_base_file_seq;\n    ai->file_type = AOF_FILE_TYPE_BASE;\n    am->base_aof_info = ai;\n    am->dirty = 1;\n    return am->base_aof_info->file_name;\n}\n\n/* Get a new INCR type AOF name.\n *\n * INCR AOF naming rules: `server.aof_filename`.seq.incr.aof\n *\n * for example:\n *  appendonly.aof.1.incr.aof\n */\nsds getNewIncrAofName(aofManifest *am, long long start_reploff) {\n    aofInfo *ai = aofInfoCreate();\n    ai->file_type = AOF_FILE_TYPE_INCR;\n    ai->file_name = sdscatprintf(sdsempty(), \"%s.%lld%s%s\", server.aof_filename,\n                        ++am->curr_incr_file_seq, INCR_FILE_SUFFIX, AOF_FORMAT_SUFFIX);\n    ai->file_seq = am->curr_incr_file_seq;\n    ai->start_offset = start_reploff;\n    listAddNodeTail(am->incr_aof_list, ai);\n    am->dirty = 1;\n    return ai->file_name;\n}\n\n/* Get temp INCR type AOF name. */\nsds getTempIncrAofName(void) {\n    return sdscatprintf(sdsempty(), \"%s%s%s\", TEMP_FILE_NAME_PREFIX, server.aof_filename,\n        INCR_FILE_SUFFIX);\n}\n\n/* Get the last INCR AOF name or create a new one. */\nsds getLastIncrAofName(aofManifest *am) {\n    serverAssert(am != NULL);\n\n    /* If 'incr_aof_list' is empty, just create a new one. */\n    if (!listLength(am->incr_aof_list)) {\n        return getNewIncrAofName(am, server.master_repl_offset);\n    }\n\n    /* Or return the last one. */\n    listNode *lastnode = listIndex(am->incr_aof_list, -1);\n    aofInfo *ai = listNodeValue(lastnode);\n    return ai->file_name;\n}\n\n/* Called in `backgroundRewriteDoneHandler`. 
when AOFRW succeeds, this\n * function will change the AOF file type in 'incr_aof_list' from\n * AOF_FILE_TYPE_INCR to AOF_FILE_TYPE_HIST, and move them to the\n * 'history_aof_list'.\n */\nvoid markRewrittenIncrAofAsHistory(aofManifest *am) {\n    serverAssert(am != NULL);\n    if (!listLength(am->incr_aof_list)) {\n        return;\n    }\n\n    listNode *ln;\n    listIter li;\n\n    listRewindTail(am->incr_aof_list, &li);\n\n    /* \"server.aof_fd != -1\" means AOF is enabled, so we must skip the\n     * last AOF, because it is the file we are currently writing to. */\n    if (server.aof_fd != -1) {\n        ln = listNext(&li);\n        serverAssert(ln != NULL);\n    }\n\n    /* Move aofInfo from 'incr_aof_list' to 'history_aof_list'. */\n    while ((ln = listNext(&li)) != NULL) {\n        aofInfo *ai = (aofInfo*)ln->value;\n        serverAssert(ai->file_type == AOF_FILE_TYPE_INCR);\n\n        aofInfo *hai = aofInfoDup(ai);\n        hai->file_type = AOF_FILE_TYPE_HIST;\n        listAddNodeHead(am->history_aof_list, hai);\n        listDelNode(am->incr_aof_list, ln);\n    }\n\n    am->dirty = 1;\n}\n\n/* Write the formatted manifest string to disk. 
*/\nint writeAofManifestFile(sds buf) {\n    int ret = C_OK;\n    ssize_t nwritten;\n    int len;\n\n    sds am_name = getAofManifestFileName();\n    sds am_filepath = makePath(server.aof_dirname, am_name);\n    sds tmp_am_name = getTempAofManifestFileName();\n    sds tmp_am_filepath = makePath(server.aof_dirname, tmp_am_name);\n\n    int fd = open(tmp_am_filepath, O_WRONLY|O_TRUNC|O_CREAT, 0644);\n    if (fd == -1) {\n        serverLog(LL_WARNING, \"Can't open the AOF manifest file %s: %s\",\n            tmp_am_name, strerror(errno));\n\n        ret = C_ERR;\n        goto cleanup;\n    }\n\n    len = sdslen(buf);\n    while(len) {\n        nwritten = write(fd, buf, len);\n\n        if (nwritten < 0) {\n            if (errno == EINTR) continue;\n\n            serverLog(LL_WARNING, \"Error trying to write the temporary AOF manifest file %s: %s\",\n                tmp_am_name, strerror(errno));\n\n            ret = C_ERR;\n            goto cleanup;\n        }\n\n        len -= nwritten;\n        buf += nwritten;\n    }\n\n    if (redis_fsync(fd) == -1) {\n        serverLog(LL_WARNING, \"Fail to fsync the temp AOF file %s: %s.\",\n            tmp_am_name, strerror(errno));\n\n        ret = C_ERR;\n        goto cleanup;\n    }\n\n    if (rename(tmp_am_filepath, am_filepath) != 0) {\n        serverLog(LL_WARNING,\n            \"Error trying to rename the temporary AOF manifest file %s into %s: %s\",\n            tmp_am_name, am_name, strerror(errno));\n\n        ret = C_ERR;\n        goto cleanup;\n    }\n\n    /* Also sync the AOF directory as new AOF files may be added in the directory */\n    if (fsyncFileDir(am_filepath) == -1) {\n        serverLog(LL_WARNING, \"Fail to fsync AOF directory %s: %s.\",\n            am_filepath, strerror(errno));\n\n        ret = C_ERR;\n        goto cleanup;\n    }\n\ncleanup:\n    if (fd != -1) close(fd);\n    sdsfree(am_name);\n    sdsfree(am_filepath);\n    sdsfree(tmp_am_name);\n    sdsfree(tmp_am_filepath);\n    return 
ret;\n}\n\n/* Persist the aofManifest information pointed to by am to disk. */\nint persistAofManifest(aofManifest *am) {\n    if (am->dirty == 0) {\n        return C_OK;\n    }\n\n    sds amstr = getAofManifestAsString(am);\n    int ret = writeAofManifestFile(amstr);\n    sdsfree(amstr);\n    if (ret == C_OK) am->dirty = 0;\n    return ret;\n}\n\n/* Called in `loadAppendOnlyFiles` when we upgrade from an old version of Redis.\n *\n * 1) Create the AOF directory, using 'server.aof_dirname' as the name.\n * 2) Use 'server.aof_filename' to construct a BASE type aofInfo and add it to\n *    aofManifest, then persist the manifest file to the AOF directory.\n * 3) Move the old AOF file (server.aof_filename) to the AOF directory.\n *\n * If any of the above steps fails or a crash occurs, this will not cause any\n * problems, and Redis will retry the upgrade process when it restarts.\n */\nvoid aofUpgradePrepare(aofManifest *am) {\n    serverAssert(!aofFileExist(server.aof_filename));\n\n    /* Create the AOF directory, using 'server.aof_dirname' as the name. */\n    if (dirCreateIfMissing(server.aof_dirname) == -1) {\n        serverLog(LL_WARNING, \"Can't open or create append-only dir %s: %s\",\n            server.aof_dirname, strerror(errno));\n        exit(1);\n    }\n\n    /* Manually construct a BASE type aofInfo and add it to aofManifest. */\n    if (am->base_aof_info) aofInfoFree(am->base_aof_info);\n    aofInfo *ai = aofInfoCreate();\n    ai->file_name = sdsnew(server.aof_filename);\n    ai->file_seq = 1;\n    ai->file_type = AOF_FILE_TYPE_BASE;\n    am->base_aof_info = ai;\n    am->curr_base_file_seq = 1;\n    am->dirty = 1;\n\n    /* Persist the manifest file to the AOF directory. */\n    if (persistAofManifest(am) != C_OK) {\n        exit(1);\n    }\n\n    /* Move the old AOF file to the AOF directory. 
*/\n    sds aof_filepath = makePath(server.aof_dirname, server.aof_filename);\n    if (rename(server.aof_filename, aof_filepath) == -1) {\n        serverLog(LL_WARNING,\n            \"Error trying to move the old AOF file %s into dir %s: %s\",\n            server.aof_filename,\n            server.aof_dirname,\n            strerror(errno));\n        sdsfree(aof_filepath);\n        exit(1);\n    }\n    sdsfree(aof_filepath);\n\n    serverLog(LL_NOTICE, \"Successfully migrated an old-style AOF file (%s) into the AOF directory (%s).\",\n        server.aof_filename, server.aof_dirname);\n}\n\n/* When AOFRW succeeds, the previous BASE and INCR AOFs will\n * become HISTORY type and be moved into 'history_aof_list'.\n *\n * The function will traverse the 'history_aof_list' and submit\n * the delete tasks to the bio thread.\n */\nint aofDelHistoryFiles(void) {\n    if (server.aof_manifest == NULL ||\n        server.aof_disable_auto_gc == 1 ||\n        !listLength(server.aof_manifest->history_aof_list))\n    {\n        return C_OK;\n    }\n\n    listNode *ln;\n    listIter li;\n\n    listRewind(server.aof_manifest->history_aof_list, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        aofInfo *ai = (aofInfo*)ln->value;\n        serverAssert(ai->file_type == AOF_FILE_TYPE_HIST);\n        serverLog(LL_NOTICE, \"Removing the history file %s in the background\", ai->file_name);\n        sds aof_filepath = makePath(server.aof_dirname, ai->file_name);\n        bg_unlink(aof_filepath);\n        sdsfree(aof_filepath);\n        listDelNode(server.aof_manifest->history_aof_list, ln);\n    }\n\n    server.aof_manifest->dirty = 1;\n    return persistAofManifest(server.aof_manifest);\n}\n\n/* Used to clean up the temp INCR AOF when AOFRW fails. 
*/\nvoid aofDelTempIncrAofFile(void) {\n    sds aof_filename = getTempIncrAofName();\n    sds aof_filepath = makePath(server.aof_dirname, aof_filename);\n    serverLog(LL_NOTICE, \"Removing the temp incr aof file %s in the background\", aof_filename);\n    bg_unlink(aof_filepath);\n    sdsfree(aof_filepath);\n    sdsfree(aof_filename);\n    return;\n}\n\n/* Called after `loadDataFromDisk` when Redis starts. If `server.aof_state` is\n * 'AOF_ON', it will do three things:\n * 1. Force-create a BASE file when Redis starts with an empty dataset\n * 2. Open the last opened INCR type AOF for writing; if there is none, create a new one\n * 3. Synchronously update the manifest file to the disk\n *\n * If any of the above steps fails, the Redis process will exit.\n */\nvoid aofOpenIfNeededOnServerStart(void) {\n    if (server.aof_state != AOF_ON) {\n        return;\n    }\n\n    serverAssert(server.aof_manifest != NULL);\n    serverAssert(server.aof_fd == -1);\n\n    if (dirCreateIfMissing(server.aof_dirname) == -1) {\n        serverLog(LL_WARNING, \"Can't open or create append-only dir %s: %s\",\n            server.aof_dirname, strerror(errno));\n        exit(1);\n    }\n\n    /* If we start with an empty dataset, we will force-create a BASE file. */\n    size_t incr_aof_len = listLength(server.aof_manifest->incr_aof_list);\n    if (!server.aof_manifest->base_aof_info && !incr_aof_len) {\n        sds base_name = getNewBaseFileNameAndMarkPreAsHistory(server.aof_manifest);\n        sds base_filepath = makePath(server.aof_dirname, base_name);\n        if (rewriteAppendOnlyFile(base_filepath) != C_OK) {\n            exit(1);\n        }\n        sdsfree(base_filepath);\n        serverLog(LL_NOTICE, \"Creating AOF base file %s on server start\",\n            base_name);\n    }\n\n    /* Because we will 'exit(1)' if opening the AOF or persisting the manifest\n     * fails, we don't need atomic modification here. 
*/\n    sds aof_name = getLastIncrAofName(server.aof_manifest);\n\n    /* Here we should use the 'O_APPEND' flag. */\n    sds aof_filepath = makePath(server.aof_dirname, aof_name);\n    server.aof_fd = open(aof_filepath, O_WRONLY|O_APPEND|O_CREAT, 0644);\n    sdsfree(aof_filepath);\n    if (server.aof_fd == -1) {\n        serverLog(LL_WARNING, \"Can't open the append-only file %s: %s\",\n            aof_name, strerror(errno));\n        exit(1);\n    }\n\n    /* Persist our changes. */\n    int ret = persistAofManifest(server.aof_manifest);\n    if (ret != C_OK) {\n        exit(1);\n    }\n\n    server.aof_last_incr_size = getAppendOnlyFileSize(aof_name, NULL);\n    server.aof_last_incr_fsync_offset = server.aof_last_incr_size;\n\n    if (incr_aof_len) {\n        serverLog(LL_NOTICE, \"Opening AOF incr file %s on server start\", aof_name);\n    } else {\n        serverLog(LL_NOTICE, \"Creating AOF incr file %s on server start\", aof_name);\n    }\n}\n\nint aofFileExist(char *filename) {\n    sds file_path = makePath(server.aof_dirname, filename);\n    int ret = fileExist(file_path);\n    sdsfree(file_path);\n    return ret;\n}\n\n/* Called in `rewriteAppendOnlyFileBackground`. If `server.aof_state`\n * is 'AOF_ON', it will do two things:\n * 1. Open a new INCR type AOF for writing\n * 2. 
Synchronously update the manifest file to the disk\n *\n * The above two steps of modification are atomic; that is, if\n * any step fails, the entire operation will roll back and return\n * C_ERR, and if all steps succeed, it returns C_OK.\n *\n * If `server.aof_state` is 'AOF_WAIT_REWRITE', it will open a temporary INCR AOF\n * file to accumulate data during AOF_WAIT_REWRITE, and it will eventually be\n * renamed in the `backgroundRewriteDoneHandler` and written to the manifest file.\n * */\nint openNewIncrAofForAppend(void) {\n    serverAssert(server.aof_manifest != NULL);\n    int newfd = -1;\n    aofManifest *temp_am = NULL;\n    sds new_aof_name = NULL;\n\n    /* Only open a new INCR AOF when AOF is enabled. */\n    if (server.aof_state == AOF_OFF) return C_OK;\n\n    /* Open new AOF. */\n    if (server.aof_state == AOF_WAIT_REWRITE) {\n        /* Use a temporary INCR AOF file to accumulate data during AOF_WAIT_REWRITE. */\n        new_aof_name = getTempIncrAofName();\n        tempIncAofStartReplOffset = server.master_repl_offset;\n    } else {\n        /* Dup a temp aof_manifest to modify. */\n        temp_am = aofManifestDup(server.aof_manifest);\n        new_aof_name = sdsdup(getNewIncrAofName(temp_am, server.master_repl_offset));\n    }\n    sds new_aof_filepath = makePath(server.aof_dirname, new_aof_name);\n    newfd = open(new_aof_filepath, O_WRONLY|O_TRUNC|O_CREAT, 0644);\n    sdsfree(new_aof_filepath);\n    if (newfd == -1) {\n        serverLog(LL_WARNING, \"Can't open the append-only file %s: %s\",\n            new_aof_name, strerror(errno));\n        goto cleanup;\n    }\n\n    if (temp_am) {\n        /* Persist AOF Manifest. 
*/\n        if (persistAofManifest(temp_am) == C_ERR) {\n            goto cleanup;\n        }\n    }\n\n    serverLog(LL_NOTICE, \"Creating AOF incr file %s on background rewrite\",\n            new_aof_name);\n    sdsfree(new_aof_name);\n\n    /* If we reach here, we can safely modify the `server.aof_manifest`\n     * and `server.aof_fd`. */\n\n    /* fsync and close the old aof_fd if needed. In fsync everysec it's ok to delay\n     * the fsync as long as we guarantee it happens, and in fsync always the file\n     * is already synced at this point so fsync doesn't matter. */\n    if (server.aof_fd != -1) {\n        aof_background_fsync_and_close(server.aof_fd);\n        server.aof_last_fsync = server.mstime;\n    }\n    server.aof_fd = newfd;\n\n    /* Reset the aof_last_incr_size. */\n    server.aof_last_incr_size = 0;\n    /* Reset the aof_last_incr_fsync_offset. */\n    server.aof_last_incr_fsync_offset = 0;\n    /* Update `server.aof_manifest`. */\n    if (temp_am) aofManifestFreeAndUpdate(temp_am);\n    return C_OK;\n\ncleanup:\n    if (new_aof_name) sdsfree(new_aof_name);\n    if (newfd != -1) close(newfd);\n    if (temp_am) aofManifestFree(temp_am);\n    return C_ERR;\n}\n\n/* When we gracefully close the AOF file, we have the chance to persist the\n * end replication offset of the current INCR AOF. */\nvoid updateCurIncrAofEndOffset(void) {\n    if (server.aof_state != AOF_ON) return;\n    serverAssert(server.aof_manifest != NULL);\n\n    if (listLength(server.aof_manifest->incr_aof_list) == 0) return;\n    aofInfo *ai = listNodeValue(listLast(server.aof_manifest->incr_aof_list));\n    ai->end_offset = server.master_repl_offset;\n    server.aof_manifest->dirty = 1;\n    /* It doesn't matter if the persistence fails since this information is not\n     * critical; we can get an approximate value from the start offset plus the file size. 
*/\n    persistAofManifest(server.aof_manifest);\n}\n\n/* After loading AOF data, we need to update the `server.master_repl_offset`\n * based on the information of the last INCR AOF, to avoid the rollback of\n * the start offset of the new INCR AOF. */\nvoid updateReplOffsetAndResetEndOffset(void) {\n    if (server.aof_state != AOF_ON) return;\n    serverAssert(server.aof_manifest != NULL);\n\n    /* If the INCR file has an end offset, we use it directly, and clear it\n     * so that the next time we load the manifest file we don't reuse the\n     * same offset when the real offset may have advanced. */\n    if (listLength(server.aof_manifest->incr_aof_list) == 0) return;\n    aofInfo *ai = listNodeValue(listLast(server.aof_manifest->incr_aof_list));\n    if (ai->end_offset != -1) {\n        server.master_repl_offset = ai->end_offset;\n        ai->end_offset = -1;\n        server.aof_manifest->dirty = 1;\n        /* We must update the end offset of the INCR file correctly, otherwise we\n         * may keep wrong information in the manifest file, since we continue\n         * to append data to the same INCR file. */\n        if (persistAofManifest(server.aof_manifest) != AOF_OK)\n            exit(1);\n    } else {\n        /* If the INCR file doesn't have an end offset, we need to calculate\n         * the replication offset as the start offset plus the file size. */\n        server.master_repl_offset = (ai->start_offset == -1 ? 0 : ai->start_offset) +\n                                    getAppendOnlyFileSize(ai->file_name, NULL);\n    }\n}\n\n/* Whether to limit the execution of Background AOF rewrite.\n *\n * At present, if AOFRW fails, Redis will automatically retry. If it continues\n * to fail, we may get a lot of very small INCR files. 
So we need an AOFRW\n * limiting measure.\n *\n * We can't directly use `server.aof_current_size` and `server.aof_last_incr_size`,\n * because there may be no new writes after AOFRW fails.\n *\n * So, we use a time delay to achieve our goal. When AOFRW fails, we delay the execution\n * of the next AOFRW by 1 minute. If the next AOFRW also fails, it will be delayed by 2\n * minutes. The following delays are 4, 8, and 16 minutes, and the maximum delay is 60\n * minutes (1 hour).\n *\n * During the limit period, we can still use the 'bgrewriteaof' command to execute AOFRW\n * immediately.\n *\n * Returning 1 means that AOFRW is limited and cannot be executed; 0 means that we can\n * execute AOFRW, either because we have reached the 'next_rewrite_time' or because the\n * number of consecutive failures has not reached the limit threshold.\n * */\n#define AOF_REWRITE_LIMITE_THRESHOLD    3\n#define AOF_REWRITE_LIMITE_MAX_MINUTES  60 /* 1 hour */\nint aofRewriteLimited(void) {\n    static int next_delay_minutes = 0;\n    static time_t next_rewrite_time = 0;\n\n    if (server.stat_aofrw_consecutive_failures < AOF_REWRITE_LIMITE_THRESHOLD) {\n        /* We may be recovering from a limited state, so reset all states. */\n        next_delay_minutes = 0;\n        next_rewrite_time = 0;\n        return 0;\n    }\n\n    /* If we are in the limiting state, check whether next_rewrite_time has been reached. */\n    if (next_rewrite_time != 0) {\n        if (server.unixtime < next_rewrite_time) {\n            return 1;\n        } else {\n            next_rewrite_time = 0;\n            return 0;\n        }\n    }\n\n    next_delay_minutes = (next_delay_minutes == 0) ? 
1 : (next_delay_minutes * 2);\n    if (next_delay_minutes > AOF_REWRITE_LIMITE_MAX_MINUTES) {\n        next_delay_minutes = AOF_REWRITE_LIMITE_MAX_MINUTES;\n    }\n\n    next_rewrite_time = server.unixtime + next_delay_minutes * 60;\n    serverLog(LL_WARNING,\n        \"Background AOF rewrite has repeatedly failed and triggered the limit, will retry in %d minutes\", next_delay_minutes);\n    return 1;\n}\n\n/* ----------------------------------------------------------------------------\n * AOF file implementation\n * ------------------------------------------------------------------------- */\n\n/* Return true if an AOF fsync is currently already in progress in a\n * BIO thread. */\nint aofFsyncInProgress(void) {\n    /* Note that we don't care about aof_background_fsync_and_close because\n     * server.aof_fd has been replaced by the new INCR AOF file fd,\n     * see openNewIncrAofForAppend. */\n    return bioPendingJobsOfType(BIO_AOF_FSYNC) != 0;\n}\n\n/* Starts a background task that performs fsync() against the specified\n * file descriptor (the one of the AOF file) in another thread. */\nvoid aof_background_fsync(int fd) {\n    bioCreateFsyncJob(fd, server.master_repl_offset, 1);\n}\n\n/* Close the fd on the basis of aof_background_fsync. */\nvoid aof_background_fsync_and_close(int fd) {\n    bioCreateCloseAofJob(fd, server.master_repl_offset, 1);\n}\n\n/* Kills the AOFRW child process, if one exists. */\nvoid killAppendOnlyChild(void) {\n    int statloc;\n    /* No AOFRW child? return. */\n    if (server.child_type != CHILD_TYPE_AOF) return;\n    /* Kill AOFRW child, wait for child exit. 
*/\n    serverLog(LL_NOTICE,\"Killing running AOF rewrite child: %ld\",\n        (long) server.child_pid);\n    if (kill(server.child_pid,SIGUSR1) != -1) {\n        while(waitpid(-1, &statloc, 0) != server.child_pid);\n    }\n    aofRemoveTempFile(server.child_pid);\n    resetChildState();\n    server.aof_rewrite_time_start = -1;\n}\n\n/* Called when the user switches from \"appendonly yes\" to \"appendonly no\"\n * at runtime using the CONFIG command. */\nvoid stopAppendOnly(void) {\n    serverAssert(server.aof_state != AOF_OFF);\n    flushAppendOnlyFile(1);\n    if (redis_fsync(server.aof_fd) == -1) {\n        serverLog(LL_WARNING,\"Fail to fsync the AOF file: %s\",strerror(errno));\n    } else {\n        server.aof_last_fsync = server.mstime;\n    }\n    close(server.aof_fd);\n    updateCurIncrAofEndOffset();\n\n    server.aof_fd = -1;\n    server.aof_selected_db = -1;\n    server.aof_state = AOF_OFF;\n    server.aof_rewrite_scheduled = 0;\n    server.aof_last_incr_size = 0;\n    server.aof_last_incr_fsync_offset = 0;\n    server.fsynced_reploff = -1;\n    atomicSet(server.fsynced_reploff_pending, 0);\n    killAppendOnlyChild();\n    sdsfree(server.aof_buf);\n    server.aof_buf = sdsempty();\n}\n\n/* Called when the user switches from \"appendonly no\" to \"appendonly yes\"\n * at runtime using the CONFIG command. */\nint startAppendOnly(void) {\n    serverAssert(server.aof_state == AOF_OFF);\n\n    server.aof_state = AOF_WAIT_REWRITE;\n    if (hasActiveChildProcess() && server.child_type != CHILD_TYPE_AOF) {\n        server.aof_rewrite_scheduled = 1;\n        serverLog(LL_NOTICE,\"AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.\");\n    } else if (server.in_exec){\n        server.aof_rewrite_scheduled = 1;\n        serverLog(LL_NOTICE,\"AOF was enabled during a transaction. 
An AOF background was scheduled to start when possible.\");\n    } else {\n        /* If there is a pending AOF rewrite, we need to switch it off and\n         * start a new one: the old one cannot be reused because it is not\n         * accumulating the AOF buffer. */\n        if (server.child_type == CHILD_TYPE_AOF) {\n            serverLog(LL_NOTICE,\"AOF was enabled but there is already an AOF rewriting in background. Stopping background AOF and starting a rewrite now.\");\n            killAppendOnlyChild();\n        }\n\n        if (rewriteAppendOnlyFileBackground() == C_ERR) {\n            server.aof_state = AOF_OFF;\n            serverLog(LL_WARNING,\"Redis needs to enable the AOF but can't trigger a background AOF rewrite operation. Check the above logs for more info about the error.\");\n            return C_ERR;\n        }\n    }\n    server.aof_last_fsync = server.mstime;\n    /* If AOF fsync error in bio job, we just ignore it and log the event. */\n    int aof_bio_fsync_status;\n    atomicGet(server.aof_bio_fsync_status, aof_bio_fsync_status);\n    if (aof_bio_fsync_status == C_ERR) {\n        serverLog(LL_WARNING,\n            \"AOF reopen, just ignore the AOF fsync error in bio job\");\n        atomicSet(server.aof_bio_fsync_status,C_OK);\n    }\n\n    /* If AOF was in error state, we just ignore it and log the event. */\n    if (server.aof_last_write_status == C_ERR) {\n        serverLog(LL_WARNING,\"AOF reopen, just ignore the last error.\");\n        server.aof_last_write_status = C_OK;\n    }\n    return C_OK;\n}\n\nvoid startAppendOnlyWithRetry(void) {\n    unsigned int tries, max_tries = 10;\n    for (tries = 0; tries < max_tries; ++tries) {\n        if (startAppendOnly() == C_OK)\n            break;\n        serverLog(LL_WARNING, \"Failed to enable AOF! Trying it again in one second.\");\n        sleep(1);\n    }\n    if (tries == max_tries) {\n        serverLog(LL_WARNING, \"FATAL: AOF can't be turned on. 
Exiting now.\");\n        exit(1);\n    }\n}\n\n/* Called after \"appendonly\" config is changed. */\nvoid applyAppendOnlyConfig(void) {\n    if (!server.aof_enabled && server.aof_state != AOF_OFF) {\n        stopAppendOnly();\n    } else if (server.aof_enabled && server.aof_state == AOF_OFF) {\n        startAppendOnlyWithRetry();\n    }\n}\n\n/* This is a wrapper to the write syscall in order to retry on short writes\n * or if the syscall gets interrupted. It could look strange that we retry\n * on short writes given that we are writing to a block device: normally if\n * the first call is short, there is an end-of-space condition, so the next\n * is likely to fail. However, apparently in modern systems this is no longer\n * true, and in general it looks just more resilient to retry the write. If\n * there is an actual error condition we'll get it at the next try. */\nssize_t aofWrite(int fd, const char *buf, size_t len) {\n    ssize_t nwritten = 0, totwritten = 0;\n\n    while(len) {\n        nwritten = write(fd, buf, len);\n\n        if (nwritten < 0) {\n            if (errno == EINTR) continue;\n            return totwritten ? 
totwritten : -1;\n        }\n\n        len -= nwritten;\n        buf += nwritten;\n        totwritten += nwritten;\n    }\n\n    return totwritten;\n}\n\n/* Write the append only file buffer on disk.\n *\n * Since we are required to write the AOF before replying to the client,\n * and the only way the client socket can get a write is when entering\n * the event loop, we accumulate all the AOF writes in a memory\n * buffer and write it on disk using this function just before entering\n * the event loop again.\n *\n * About the 'force' argument:\n *\n * When the fsync policy is set to 'everysec' we may delay the flush if there\n * is still an fsync() going on in the background thread, since for instance\n * on Linux write(2) will be blocked by the background fsync anyway.\n * When this happens we remember that there is some aof buffer to be\n * flushed ASAP, and will try to do that in the serverCron() function.\n *\n * However if force is set to 1 we'll write regardless of the background\n * fsync. */\n#define AOF_WRITE_LOG_ERROR_RATE 30 /* Seconds between error logging. 
*/\nvoid flushAppendOnlyFile(int force) {\n    ssize_t nwritten;\n    int sync_in_progress = 0;\n    mstime_t latency;\n\n    if (sdslen(server.aof_buf) == 0) {\n        if (server.aof_last_incr_fsync_offset == server.aof_last_incr_size) {\n            /* All data is fsync'd already: Update fsynced_reploff_pending just in case.\n             * This is needed to avoid a WAITAOF hang in case a module used RM_Call\n             * with the NO_AOF flag, in which case master_repl_offset will increase but\n             * fsynced_reploff_pending won't be updated (because there's no reason, from\n             * the AOF POV, to call fsync) and then WAITAOF may wait on the higher offset\n             * (which contains data that was only propagated to replicas, and not to AOF) */\n            if (!aofFsyncInProgress())\n                atomicSet(server.fsynced_reploff_pending, server.master_repl_offset);\n        } else {\n            /* Check if we need to do fsync even if the aof buffer is empty,\n             * because previously in AOF_FSYNC_EVERYSEC mode, fsync is\n             * called only when the aof buffer is not empty, so if users\n             * stop issuing write commands before fsync is called within one second,\n             * the data in the page cache cannot be flushed in time. */\n            if (server.aof_fsync == AOF_FSYNC_EVERYSEC &&\n                server.mstime - server.aof_last_fsync >= 1000 &&\n                !(sync_in_progress = aofFsyncInProgress()))\n                goto try_fsync;\n\n            /* Check if we need to do fsync even if the aof buffer is empty;\n             * the reason is described in the previous AOF_FSYNC_EVERYSEC block,\n             * and AOF_FSYNC_ALWAYS is also checked here to handle a case where\n             * aof_fsync is changed from everysec to always. 
*/\n            if (server.aof_fsync == AOF_FSYNC_ALWAYS)\n                goto try_fsync;\n        }\n        return;\n    }\n\n    if (server.aof_fsync == AOF_FSYNC_EVERYSEC)\n        sync_in_progress = aofFsyncInProgress();\n\n    if (server.aof_fsync == AOF_FSYNC_EVERYSEC && !force) {\n        /* With this append fsync policy we do background fsyncing.\n         * If the fsync is still in progress we can try to delay\n         * the write for a couple of seconds. */\n        if (sync_in_progress) {\n            if (server.aof_flush_postponed_start == 0) {\n                /* No previous write postponing, remember that we are\n                 * postponing the flush and return. */\n                server.aof_flush_postponed_start = server.mstime;\n                return;\n            } else if (server.mstime - server.aof_flush_postponed_start < 2000) {\n                /* We were already waiting for fsync to finish, but for less\n                 * than two seconds this is still ok. Postpone again. */\n                return;\n            }\n            /* Otherwise fall through, and go write since we can't wait\n             * over two seconds. */\n            server.aof_delayed_fsync++;\n            serverLog(LL_NOTICE,\"Asynchronous AOF fsync is taking too long (disk is busy?). Writing the AOF buffer without waiting for fsync to complete, this may slow down Redis.\");\n        }\n    }\n    /* We want to perform a single write. 
This should be guaranteed atomic\n     * at least if the filesystem we are writing to is a real physical one.\n     * While this will save us against the server being killed, I don't think\n     * there is much to do about the whole server stopping for power problems\n     * or the like. */\n\n    if (server.aof_flush_sleep && sdslen(server.aof_buf)) {\n        usleep(server.aof_flush_sleep);\n    }\n\n    latencyStartMonitor(latency);\n    nwritten = aofWrite(server.aof_fd,server.aof_buf,sdslen(server.aof_buf));\n    latencyEndMonitor(latency);\n    /* We want to capture different events for delayed writes:\n     * when the delay happens with a pending fsync, or with a saving child\n     * active, and when the above two conditions are missing.\n     * We also use an additional event name to save all samples which is\n     * useful for graphing / monitoring purposes. */\n    if (sync_in_progress) {\n        latencyAddSampleIfNeeded(\"aof-write-pending-fsync\",latency);\n    } else if (hasActiveChildProcess()) {\n        latencyAddSampleIfNeeded(\"aof-write-active-child\",latency);\n    } else {\n        latencyAddSampleIfNeeded(\"aof-write-alone\",latency);\n    }\n    latencyAddSampleIfNeeded(\"aof-write\",latency);\n\n    /* We performed the write so reset the postponed flush sentinel to zero. */\n    server.aof_flush_postponed_start = 0;\n\n    if (nwritten != (ssize_t)sdslen(server.aof_buf)) {\n        static time_t last_write_error_log = 0;\n        int can_log = 0;\n\n        /* Limit logging rate to 1 line per AOF_WRITE_LOG_ERROR_RATE seconds. */\n        if ((server.unixtime - last_write_error_log) > AOF_WRITE_LOG_ERROR_RATE) {\n            can_log = 1;\n            last_write_error_log = server.unixtime;\n        }\n\n        /* Log the AOF write error and record the error code. 
*/\n        if (nwritten == -1) {\n            if (can_log) {\n                serverLog(LL_WARNING,\"Error writing to the AOF file: %s\",\n                    strerror(errno));\n            }\n            server.aof_last_write_errno = errno;\n        } else {\n            if (can_log) {\n                serverLog(LL_WARNING,\"Short write while writing to \"\n                                       \"the AOF file: (nwritten=%lld, \"\n                                       \"expected=%lld)\",\n                                       (long long)nwritten,\n                                       (long long)sdslen(server.aof_buf));\n            }\n\n            if (ftruncate(server.aof_fd, server.aof_last_incr_size) == -1) {\n                if (can_log) {\n                    serverLog(LL_WARNING, \"Could not remove short write \"\n                             \"from the append-only file.  Redis may refuse \"\n                             \"to load the AOF the next time it starts.  \"\n                             \"ftruncate: %s\", strerror(errno));\n                }\n            } else {\n                /* If the ftruncate() succeeded we can set nwritten to\n                 * -1 since there is no longer partial data in the AOF. */\n                nwritten = -1;\n            }\n            server.aof_last_write_errno = ENOSPC;\n        }\n\n        /* Handle the AOF write error. */\n        if (server.aof_fsync == AOF_FSYNC_ALWAYS) {\n            /* We can't recover when the fsync policy is ALWAYS since the reply\n             * for the client is already in the output buffers (both writes and\n             * reads), and the changes to the db can't be rolled back. Since we\n             * have a contract with the user that acknowledged or observed\n             * writes are synced on disk, we must exit. */\n            serverLog(LL_WARNING,\"Can't recover from AOF write error when the AOF fsync policy is 'always'. 
Exiting...\");\n            exit(1);\n        } else {\n            /* Recover from the failed write, leaving the data in the buffer. However\n             * set an error to stop accepting writes as long as the error\n             * condition is not cleared. */\n            server.aof_last_write_status = C_ERR;\n\n            /* Trim the sds buffer if there was a partial write, and there\n             * was no way to undo it with ftruncate(2). */\n            if (nwritten > 0) {\n                server.aof_current_size += nwritten;\n                server.aof_last_incr_size += nwritten;\n                sdsrange(server.aof_buf,nwritten,-1);\n            }\n            return; /* We'll try again on the next call... */\n        }\n    } else {\n        /* Successful write(2). If AOF was in error state, restore the\n         * OK state and log the event. */\n        if (server.aof_last_write_status == C_ERR) {\n            serverLog(LL_NOTICE,\n                \"AOF write error looks solved, Redis can write again.\");\n            server.aof_last_write_status = C_OK;\n        }\n    }\n    server.aof_current_size += nwritten;\n    server.aof_last_incr_size += nwritten;\n\n    /* Re-use the AOF buffer when it is small enough. The maximum comes from the\n     * arena size of 4k minus some overhead (but is otherwise arbitrary). */\n    if ((sdslen(server.aof_buf)+sdsavail(server.aof_buf)) < 4000) {\n        sdsclear(server.aof_buf);\n    } else {\n        sdsfree(server.aof_buf);\n        server.aof_buf = sdsempty();\n    }\n\ntry_fsync:\n    /* Don't fsync if no-appendfsync-on-rewrite is set to yes and there are\n     * children doing I/O in the background. */\n    if (server.aof_no_fsync_on_rewrite && hasActiveChildProcess())\n        return;\n\n    /* Perform the fsync if needed. */\n    if (server.aof_fsync == AOF_FSYNC_ALWAYS) {\n        /* redis_fsync is defined as fdatasync() for Linux in order to avoid\n         * flushing metadata. 
*/\n        latencyStartMonitor(latency);\n        /* Let's try to get this data on the disk. To guarantee data safety when\n         * the AOF fsync policy is 'always', we must exit if we fail to fsync the\n         * AOF (see the comment next to the exit(1) after the write error above). */\n        if (redis_fsync(server.aof_fd) == -1) {\n            serverLog(LL_WARNING,\"Can't persist AOF for fsync error when the \"\n              \"AOF fsync policy is 'always': %s. Exiting...\", strerror(errno));\n            exit(1);\n        }\n        latencyEndMonitor(latency);\n        latencyAddSampleIfNeeded(\"aof-fsync-always\",latency);\n        server.aof_last_incr_fsync_offset = server.aof_last_incr_size;\n        server.aof_last_fsync = server.mstime;\n        atomicSet(server.fsynced_reploff_pending, server.master_repl_offset);\n    } else if (server.aof_fsync == AOF_FSYNC_EVERYSEC &&\n               server.mstime - server.aof_last_fsync >= 1000) {\n        if (!sync_in_progress) {\n            aof_background_fsync(server.aof_fd);\n            server.aof_last_incr_fsync_offset = server.aof_last_incr_size;\n        }\n        server.aof_last_fsync = server.mstime;\n    }\n}\n\nsds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {\n    char buf[32];\n    int len, j;\n    robj *o;\n\n    buf[0] = '*';\n    len = 1+ll2string(buf+1,sizeof(buf)-1,argc);\n    buf[len++] = '\\r';\n    buf[len++] = '\\n';\n    dst = sdscatlen(dst,buf,len);\n\n    for (j = 0; j < argc; j++) {\n        o = getDecodedObject(argv[j]);\n        buf[0] = '$';\n        len = 1+ll2string(buf+1,sizeof(buf)-1,sdslen(o->ptr));\n        buf[len++] = '\\r';\n        buf[len++] = '\\n';\n        dst = sdscatlen(dst,buf,len);\n        dst = sdscatlen(dst,o->ptr,sdslen(o->ptr));\n        dst = sdscatlen(dst,\"\\r\\n\",2);\n        decrRefCount(o);\n    }\n    return dst;\n}\n\n/* Generate a timestamp annotation for the AOF if the current record timestamp\n * in the AOF is not equal to the server unix time. 
If the 'force' argument is 1,\n * we generate one without checking; currently this is useful in the AOF rewriting\n * child process, which always needs to record one timestamp at the beginning of\n * the rewritten AOF.\n *\n * Timestamp annotation format is \"#TS:${timestamp}\\r\\n\". \"TS\" is short for\n * timestamp; the abbreviation saves extra bytes in the AOF. */\nsds genAofTimestampAnnotationIfNeeded(int force) {\n    sds ts = NULL;\n\n    if (force || server.aof_cur_timestamp < server.unixtime) {\n        server.aof_cur_timestamp = force ? time(NULL) : server.unixtime;\n        ts = sdscatfmt(sdsempty(), \"#TS:%I\\r\\n\", server.aof_cur_timestamp);\n        serverAssert(sdslen(ts) <= AOF_ANNOTATION_LINE_MAX_LEN);\n    }\n    return ts;\n}\n\n/* Write the given command to the aof file.\n * dictid - dictionary id the command should be applied to,\n *          this is used to decide if a `select` command\n *          should also be written to the aof. A value of -1 means\n *          to avoid writing a `select` command in any case.\n * argv   - The command to write to the aof.\n * argc   - Number of values in argv\n */\nvoid feedAppendOnlyFile(int dictid, robj **argv, int argc) {\n    sds buf = sdsempty();\n\n    serverAssert(dictid == -1 || (dictid >= 0 && dictid < server.dbnum));\n\n    /* Feed timestamp if needed */\n    if (server.aof_timestamp_enabled) {\n        sds ts = genAofTimestampAnnotationIfNeeded(0);\n        if (ts != NULL) {\n            buf = sdscatsds(buf, ts);\n            sdsfree(ts);\n        }\n    }\n\n    /* The DB this command was targeting is not the same as the last command\n     * we appended, so a SELECT command needs to be issued. 
*/\n    if (dictid != -1 && dictid != server.aof_selected_db) {\n        char seldb[64];\n\n        snprintf(seldb,sizeof(seldb),\"%d\",dictid);\n        buf = sdscatprintf(buf,\"*2\\r\\n$6\\r\\nSELECT\\r\\n$%lu\\r\\n%s\\r\\n\",\n            (unsigned long)strlen(seldb),seldb);\n        server.aof_selected_db = dictid;\n    }\n\n    /* All commands should be propagated the same way in AOF as in replication.\n     * No need for AOF-specific translation. */\n    buf = catAppendOnlyGenericCommand(buf,argc,argv);\n\n    /* Append to the AOF buffer. This will be flushed to disk just before\n     * re-entering the event loop, so before the client gets a\n     * positive reply about the operation performed. */\n    if (server.aof_state == AOF_ON ||\n        (server.aof_state == AOF_WAIT_REWRITE && server.child_type == CHILD_TYPE_AOF))\n    {\n        server.aof_buf = sdscatlen(server.aof_buf, buf, sdslen(buf));\n    }\n\n    sdsfree(buf);\n}\n\n/* ----------------------------------------------------------------------------\n * AOF loading\n * ------------------------------------------------------------------------- */\n\n/* In Redis commands are always executed in the context of a client, so in\n * order to load the append only file we need to create a fake client. */\nstruct client *createAOFClient(void) {\n    struct client *c = createClient(NULL);\n\n    c->id = CLIENT_ID_AOF; /* So modules can identify it's the AOF client. 
*/\n\n    /*\n     * The AOF client should never be blocked (unlike the master\n     * replication connection).\n     * This is because blocking the AOF client might cause\n     * deadlock (because potentially no one will unblock it).\n     * Also, if the AOF client were blocked just for\n     * background processing, there is a chance that the\n     * command execution order would be violated.\n     */\n    c->flags = CLIENT_DENY_BLOCKING;\n\n    /* We set the fake client as a slave waiting for the synchronization\n     * so that Redis will not try to send replies to this client. */\n    c->replstate = SLAVE_STATE_WAIT_BGSAVE_START;\n    return c;\n}\n\nstatic int truncateAppendOnlyFile(char *filename, off_t valid_up_to) {\n    if (valid_up_to == -1) {\n        serverLog(LL_WARNING,\"Last valid command offset is invalid\");\n        return 0;\n    }\n\n    if (truncate(filename, valid_up_to) == -1) {\n        serverLog(LL_WARNING,\"Error truncating the AOF file %s: %s\",\n            filename, strerror(errno));\n        return 0;\n    }\n\n    /* Make sure the AOF file descriptor points to the end of the\n     * file after the truncate call. */\n    if (server.aof_fd != -1 && lseek(server.aof_fd, 0, SEEK_END) == -1) {\n        serverLog(LL_WARNING,\"Can't seek the end of the AOF file %s: %s\",\n            filename, strerror(errno));\n        return 0;\n    }\n\n    return 1; /* Success */\n}\n\n/* Replay an append log file. On success AOF_OK or AOF_TRUNCATED is returned,\n * otherwise, one of the following is returned:\n * AOF_OPEN_ERR: Failed to open the AOF file.\n * AOF_NOT_EXIST: AOF file doesn't exist.\n * AOF_EMPTY: The AOF file is empty (nothing to load).\n * AOF_FAILED: Failed to load the AOF file. */\nint loadSingleAppendOnlyFile(char *filename) {\n    struct client *fakeClient;\n    struct redis_stat sb;\n    int old_aof_state = server.aof_state;\n    long loops = 0;\n    off_t valid_up_to = 0; /* Offset of latest well-formed command loaded. 
*/\n    off_t valid_before_multi = 0; /* Offset before MULTI command loaded. */\n    off_t last_progress_report_size = 0;\n    int ret = AOF_OK;\n\n    sds aof_filepath = makePath(server.aof_dirname, filename);\n    FILE *fp = fopen(aof_filepath, \"r\");\n    if (fp == NULL) {\n        int en = errno;\n        if (redis_stat(aof_filepath, &sb) == 0 || errno != ENOENT) {\n            serverLog(LL_WARNING,\"Fatal error: can't open the append log file %s for reading: %s\", filename, strerror(en));\n            sdsfree(aof_filepath);\n            return AOF_OPEN_ERR;\n        } else {\n            serverLog(LL_WARNING,\"The append log file %s doesn't exist: %s\", filename, strerror(errno));\n            sdsfree(aof_filepath);\n            return AOF_NOT_EXIST;\n        }\n    }\n\n    if (fp && redis_fstat(fileno(fp),&sb) != -1 && sb.st_size == 0) {\n        fclose(fp);\n        sdsfree(aof_filepath);\n        return AOF_EMPTY;\n    }\n\n    /* Temporarily disable AOF, to prevent EXEC from feeding a MULTI\n     * to the same file we're about to read. */\n    server.aof_state = AOF_OFF;\n\n    client *old_cur_client = server.current_client;\n    client *old_exec_client = server.executing_client;\n    fakeClient = createAOFClient();\n    server.current_client = server.executing_client = fakeClient;\n\n    /* Check if the AOF file is in RDB format (it may be an RDB-encoded base AOF\n     * or an old style RDB-preamble AOF). In that case we need to load the RDB file\n     * and later continue loading the AOF tail if it is an old style RDB-preamble AOF. */\n    char sig[5]; /* \"REDIS\" */\n    if (fread(sig,1,5,fp) != 5 || memcmp(sig,\"REDIS\",5) != 0) {\n        /* Not in RDB format, seek back to offset 0. */\n        if (fseek(fp,0,SEEK_SET) == -1) goto readerr;\n    } else {\n        /* RDB format. Pass the loading to the RDB functions. 
*/\n        rio rdb;\n        int old_style = !strcmp(filename, server.aof_filename);\n        if (old_style)\n            serverLog(LL_NOTICE, \"Reading RDB preamble from AOF file...\");\n        else \n            serverLog(LL_NOTICE, \"Reading RDB base file on AOF loading...\"); \n\n        if (fseek(fp,0,SEEK_SET) == -1) goto readerr;\n        rioInitWithFile(&rdb,fp);\n        if (rdbLoadRio(&rdb,RDBFLAGS_AOF_PREAMBLE,NULL) != C_OK) {\n            if (old_style)\n                serverLog(LL_WARNING, \"Error reading the RDB preamble of the AOF file %s, AOF loading aborted\", filename);\n            else\n                serverLog(LL_WARNING, \"Error reading the RDB base file %s, AOF loading aborted\", filename);\n\n            ret = AOF_FAILED;\n            goto cleanup;\n        } else {\n            loadingAbsProgress(ftello(fp));\n            last_progress_report_size = ftello(fp);\n            if (old_style) serverLog(LL_NOTICE, \"Reading the remaining AOF tail...\");\n        }\n    }\n\n    /* Read the actual AOF file, in REPL format, command by command. 
*/\n    while(1) {\n        int argc, j;\n        unsigned long len;\n        robj **argv;\n        char buf[AOF_ANNOTATION_LINE_MAX_LEN];\n        sds argsds;\n        struct redisCommand *cmd;\n\n        /* Serve the clients from time to time */\n        if (!(loops++ % 1024)) {\n            off_t progress_delta = ftello(fp) - last_progress_report_size;\n            loadingIncrProgress(progress_delta);\n            last_progress_report_size += progress_delta;\n            processEventsWhileBlocked();\n            processModuleLoadingProgressEvent(1);\n        }\n        if (fgets(buf,sizeof(buf),fp) == NULL) {\n            if (feof(fp)) {\n                break;\n            } else {\n                goto readerr;\n            }\n        }\n        if (buf[0] == '#') continue; /* Skip annotations */\n        if (buf[0] != '*') goto fmterr;\n        if (buf[1] == '\\0') goto readerr;\n        argc = atoi(buf+1);\n        if (argc < 1) goto fmterr;\n        if ((size_t)argc > SIZE_MAX / sizeof(robj*)) goto fmterr;\n\n        /* Load the next command in the AOF as our fake client\n         * argv. */\n        argv = zmalloc(sizeof(robj*)*argc);\n        fakeClient->argc = argc;\n        fakeClient->argv = argv;\n        fakeClient->argv_len = argc;\n\n        for (j = 0; j < argc; j++) {\n            /* Parse the argument len. */\n            char *readres = fgets(buf,sizeof(buf),fp);\n            if (readres == NULL || buf[0] != '$') {\n                fakeClient->argc = j; /* Free up to j-1. */\n                freeClientArgv(fakeClient);\n                if (readres == NULL)\n                    goto readerr;\n                else\n                    goto fmterr;\n            }\n            len = strtol(buf+1,NULL,10);\n\n            /* Read it into a string object. */\n            argsds = sdsnewlen(SDS_NOINIT,len);\n            if (len && fread(argsds,len,1,fp) == 0) {\n                sdsfree(argsds);\n                fakeClient->argc = j; /* Free up to j-1. 
*/\n                freeClientArgv(fakeClient);\n                goto readerr;\n            }\n            argv[j] = createObject(OBJ_STRING,argsds);\n\n            /* Discard CRLF. */\n            if (fread(buf,2,1,fp) == 0) {\n                fakeClient->argc = j+1; /* Free up to j. */\n                freeClientArgv(fakeClient);\n                goto readerr;\n            }\n        }\n\n        /* Command lookup */\n        cmd = lookupCommand(argv,argc);\n        if (!cmd) {\n            serverLog(LL_WARNING,\n                \"Unknown command '%s' reading the append only file %s\",\n                (char*)argv[0]->ptr, filename);\n            freeClientArgv(fakeClient);\n            ret = AOF_FAILED;\n            goto cleanup;\n        }\n\n        if (cmd->proc == multiCommand) valid_before_multi = valid_up_to;\n\n        /* Run the command in the context of a fake client */\n        fakeClient->cmd = fakeClient->lastcmd = cmd;\n        if (fakeClient->flags & CLIENT_MULTI &&\n            fakeClient->cmd->proc != execCommand)\n        {\n            /* queueMultiCommand requires a pendingCommand, so we create a \"fake\" one here\n             * for it to consume */\n            pendingCommand *pcmd = zmalloc(sizeof(pendingCommand));\n            initPendingCommand(pcmd);\n            addPendingCommand(&fakeClient->pending_cmds, pcmd);\n\n            pcmd->argc = argc;\n            pcmd->argv_len = argc;\n            pcmd->argv = argv;\n            pcmd->cmd = cmd;\n\n            /* Note: we don't have to attempt calling evalGetCommandFlags,\n             * since this is AOF, the checks in processCommand are not made\n             * anyway.*/\n            queueMultiCommand(fakeClient, cmd->flags);\n        } else {\n            cmd->proc(fakeClient);\n            fakeClient->all_argv_len_sum = 0; /* Otherwise no one cleans this up and we reach cleanup with it non-zero */\n        }\n\n        /* The fake client should not have a reply */\n        
serverAssert(fakeClient->bufpos == 0 &&\n                     listLength(fakeClient->reply) == 0);\n\n        /* The fake client should never get blocked */\n        serverAssert((fakeClient->flags & CLIENT_BLOCKED) == 0);\n\n        /* Clean up. Command code may have changed argv/argc so we use the\n         * argv/argc of the client instead of the local variables. */\n        freeClientArgv(fakeClient);\n        if (server.aof_load_truncated || server.aof_load_corrupt_tail_max_size) valid_up_to = ftello(fp);\n        if (server.key_load_delay)\n            debugDelay(server.key_load_delay);\n    }\n\n    /* This point can only be reached when EOF is reached without errors.\n     * If the client is in the middle of a MULTI/EXEC, handle it as if it were\n     * a short read, even if technically the protocol is correct: we want\n     * to remove the unprocessed tail and continue. */\n    if (fakeClient->flags & CLIENT_MULTI) {\n        serverLog(LL_WARNING,\n            \"Revert incomplete MULTI/EXEC transaction in AOF file %s\", filename);\n        valid_up_to = valid_before_multi;\n        goto uxeof;\n    }\n\nloaded_ok: /* DB loaded, cleanup and return success (AOF_OK or AOF_TRUNCATED). */\n    loadingIncrProgress(ftello(fp) - last_progress_report_size);\n    server.aof_state = old_aof_state;\n    goto cleanup;\n\nreaderr: /* Read error. If feof(fp) is true, fall through to unexpected EOF. */\n    if (!feof(fp)) {\n        serverLog(LL_WARNING,\"Unrecoverable error reading the append only file %s: %s\", filename, strerror(errno));\n        ret = AOF_FAILED;\n        goto cleanup;\n    }\n\nuxeof: /* Unexpected AOF end of file. */\n    if (server.aof_load_truncated) {\n        serverLog(LL_WARNING,\"!!! Warning: short read while loading the AOF file %s!!!\", filename);\n        serverLog(LL_WARNING,\"!!! 
Truncating the AOF %s at offset %llu !!!\",\n            filename, (unsigned long long) valid_up_to);\n        if (truncateAppendOnlyFile(aof_filepath, valid_up_to)) {\n            serverLog(LL_WARNING, \"AOF %s loaded anyway because aof-load-truncated is enabled\", aof_filepath);\n            ret = AOF_TRUNCATED;\n            goto loaded_ok;\n        }\n    }\n    serverLog(LL_WARNING, \"Unexpected end of file reading the append only file %s. You can: \"\n        \"1) Make a backup of your AOF file, then use ./redis-check-aof --fix <filename.manifest>. \"\n        \"2) Alternatively you can set the 'aof-load-truncated' configuration option to yes and restart the server.\", filename);\n    ret = AOF_FAILED;\n    goto cleanup;\n\nfmterr: /* Format error. */\n    /* fmterr may be caused by accidentally machine shutdown, so if the broken tail\n     * is less than a specified size, try to recover it automatically */\n    if (server.aof_load_corrupt_tail_max_size && sb.st_size - valid_up_to < server.aof_load_corrupt_tail_max_size) {\n        serverLog(LL_WARNING,\"!!! Warning: corrupt AOF file tail!!!\");\n        serverLog(LL_WARNING,\"!!! Truncating the AOF %s at offset %llu (remaining %llu) !!!\",\n            aof_filepath, (unsigned long long) valid_up_to, (unsigned long long) sb.st_size - valid_up_to);\n        if (truncateAppendOnlyFile(aof_filepath, valid_up_to)) {\n            serverLog(LL_WARNING, \"AOF %s loaded anyway because aof-load-corrupt-tail-max-size is enabled\", aof_filepath);\n            ret = AOF_BROKEN_RECOVERED;\n            goto loaded_ok;\n        }\n    }\n    serverLog(LL_WARNING, \"Bad file format reading the append only file %s at offset %llu. \\\n         make a backup of your AOF file, then use ./redis-check-aof --fix <filename.manifest>. 
\\\n         Alternatively you can set the 'aof-load-corrupt-tail-max-size' configuration option to %llu and restart the server.\",\n         aof_filepath, (unsigned long long)valid_up_to, (unsigned long long) sb.st_size - valid_up_to);\n    ret = AOF_FAILED;\n    /* fall through to cleanup. */\n\ncleanup:\n    if (fakeClient) freeClient(fakeClient);\n    server.current_client = old_cur_client;\n    server.executing_client = old_exec_client;\n    int fd = dup(fileno(fp));\n    fclose(fp);\n    /* Reclaim page cache memory used by the AOF file in the background. */\n    if (fd >= 0) bioCreateCloseJob(fd, 0, 1);\n    sdsfree(aof_filepath);\n    return ret;\n}\n\n/* Load the AOF files according to the aofManifest pointed to by 'am'. */\nint loadAppendOnlyFiles(aofManifest *am) {\n    serverAssert(am != NULL);\n    int status, ret = AOF_OK;\n    long long start;\n    off_t total_size = 0, base_size = 0;\n    sds aof_name;\n    int total_num, aof_num = 0, last_file;\n\n    /* If the 'server.aof_filename' file exists in dir, we may be starting\n     * from an old redis version. We will enter upgrade mode in three situations.\n     *\n     * 1. If the 'server.aof_dirname' directory does not exist\n     * 2. If the 'server.aof_dirname' directory exists but the manifest file is missing\n     * 3. 
If the 'server.aof_dirname' directory exists and the manifest file it contains\n     *    has only one base AOF record, and the file name of this base AOF is 'server.aof_filename',\n     *    and the 'server.aof_filename' file does not exist in the 'server.aof_dirname' directory\n     * */\n    if (fileExist(server.aof_filename)) {\n        if (!dirExists(server.aof_dirname) ||\n            (am->base_aof_info == NULL && listLength(am->incr_aof_list) == 0) ||\n            (am->base_aof_info != NULL && listLength(am->incr_aof_list) == 0 &&\n             !strcmp(am->base_aof_info->file_name, server.aof_filename) && !aofFileExist(server.aof_filename)))\n        {\n            aofUpgradePrepare(am);\n        }\n    }\n\n    if (am->base_aof_info == NULL && listLength(am->incr_aof_list) == 0) {\n        return AOF_NOT_EXIST;\n    }\n\n    total_num = getBaseAndIncrAppendOnlyFilesNum(am);\n    serverAssert(total_num > 0);\n\n    /* Here we calculate the total size of all BASE and INCR files in\n     * advance; it will be set to `server.loading_total_bytes`. */\n    total_size = getBaseAndIncrAppendOnlyFilesSize(am, &status);\n    if (status != AOF_OK) {\n        /* If an AOF exists in the manifest but not on the disk, we consider this to be a fatal error. */\n        if (status == AOF_NOT_EXIST) status = AOF_FAILED;\n\n        return status;\n    } else if (total_size == 0) {\n        return AOF_EMPTY;\n    }\n\n    startLoading(total_size, RDBFLAGS_AOF_PREAMBLE, 0);\n\n    /* Load BASE AOF if needed. 
*/\n    if (am->base_aof_info) {\n        serverAssert(am->base_aof_info->file_type == AOF_FILE_TYPE_BASE);\n        aof_name = (char*)am->base_aof_info->file_name;\n        updateLoadingFileName(aof_name);\n        base_size = getAppendOnlyFileSize(aof_name, NULL);\n        last_file = ++aof_num == total_num;\n        start = ustime();\n        ret = loadSingleAppendOnlyFile(aof_name);\n        if (ret == AOF_OK || ((ret == AOF_TRUNCATED || ret == AOF_BROKEN_RECOVERED) && last_file)) {\n            serverLog(LL_NOTICE, \"DB loaded from base file %s: %.3f seconds\",\n                aof_name, (float)(ustime()-start)/1000000);\n        }\n\n        /* If the truncated file is not the last file, we consider this to be a fatal error. */\n        if ((ret == AOF_TRUNCATED || ret == AOF_BROKEN_RECOVERED) && !last_file) {\n            ret = AOF_FAILED;\n            serverLog(LL_WARNING, \"Fatal error: the truncated file is not the last file\");\n        }\n\n        if (ret == AOF_OPEN_ERR || ret == AOF_FAILED) {\n            goto cleanup;\n        }\n    }\n\n    /* Load INCR AOFs if needed. 
*/\n    if (listLength(am->incr_aof_list)) {\n        listNode *ln;\n        listIter li;\n\n        listRewind(am->incr_aof_list, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            aofInfo *ai = (aofInfo*)ln->value;\n            serverAssert(ai->file_type == AOF_FILE_TYPE_INCR);\n            aof_name = (char*)ai->file_name;\n            updateLoadingFileName(aof_name);\n            last_file = ++aof_num == total_num;\n            start = ustime();\n            ret = loadSingleAppendOnlyFile(aof_name);\n            if (ret == AOF_OK || ((ret == AOF_TRUNCATED || ret == AOF_BROKEN_RECOVERED) && last_file)) {\n                serverLog(LL_NOTICE, \"DB loaded from incr file %s: %.3f seconds\",\n                    aof_name, (float)(ustime()-start)/1000000);\n            }\n\n            /* We know that (at least) one of the AOF files has data (total_size > 0),\n             * so an empty incr AOF file doesn't count as an AOF_EMPTY result */\n            if (ret == AOF_EMPTY) ret = AOF_OK;\n\n            /* If the truncated file is not the last file, we consider this to be a fatal error. */\n            if ((ret == AOF_TRUNCATED || ret == AOF_BROKEN_RECOVERED) && !last_file) {\n                ret = AOF_FAILED;\n                serverLog(LL_WARNING, \"Fatal error: the truncated file is not the last file\");\n            }\n\n            if (ret == AOF_OPEN_ERR || ret == AOF_FAILED) {\n                goto cleanup;\n            }\n        }\n    }\n\n    server.aof_current_size = total_size;\n    /* Ideally, the aof_rewrite_base_size variable should hold the size of the\n     * AOF when the last rewrite ended; this should include the size of the\n     * incremental file that was created during the rewrite since otherwise we\n     * risk having the next automatic rewrite happen too soon (or immediately if\n     * auto-aof-rewrite-percentage is low). 
However, since we do not persist\n     * aof_rewrite_base_size information anywhere, we initialize it on restart\n     * to the size of BASE AOF file. This might cause the first AOFRW to be\n     * executed early, but that shouldn't be a problem since everything will be\n     * fine after the first AOFRW. */\n    server.aof_rewrite_base_size = base_size;\n\ncleanup:\n    stopLoading(ret == AOF_OK || ret == AOF_TRUNCATED);\n    return ret;\n}\n\n/* ----------------------------------------------------------------------------\n * AOF rewrite\n * ------------------------------------------------------------------------- */\n\n/* Delegate writing an object to writing a bulk string or bulk long long.\n * This is not placed in rio.c since that adds the server.h dependency. */\nint rioWriteBulkObject(rio *r, robj *obj) {\n    /* Avoid using getDecodedObject to help copy-on-write (we are often\n     * in a child process when this function is called). */\n    if (obj->encoding == OBJ_ENCODING_INT) {\n        return rioWriteBulkLongLong(r,(long)obj->ptr);\n    } else if (sdsEncodedObject(obj)) {\n        return rioWriteBulkString(r,obj->ptr,sdslen(obj->ptr));\n    } else {\n        serverPanic(\"Unknown string encoding\");\n    }\n}\n\n/* Emit the commands needed to rebuild a list object.\n * The function returns 0 on error, 1 on success. 
*/\nint rewriteListObject(rio *r, robj *key, robj *o) {\n    long long count = 0, items = listTypeLength(o);\n\n    listTypeIterator li;\n    listTypeEntry entry;\n    listTypeInitIterator(&li, o, 0, LIST_TAIL);\n    while (listTypeNext(&li, &entry)) {\n        if (count == 0) {\n            int cmd_items = (items > AOF_REWRITE_ITEMS_PER_CMD) ?\n                AOF_REWRITE_ITEMS_PER_CMD : items;\n            if (!rioWriteBulkCount(r,'*',2+cmd_items) ||\n                !rioWriteBulkString(r,\"RPUSH\",5) ||\n                !rioWriteBulkObject(r,key)) \n            {\n                listTypeResetIterator(&li);\n                return 0;\n            }\n        }\n\n        unsigned char *vstr;\n        size_t vlen;\n        long long lval;\n        vstr = listTypeGetValue(&entry,&vlen,&lval);\n        if (vstr) {\n            if (!rioWriteBulkString(r,(char*)vstr,vlen)) {\n                listTypeResetIterator(&li);\n                return 0;\n            }\n        } else {\n            if (!rioWriteBulkLongLong(r,lval)) {\n                listTypeResetIterator(&li);\n                return 0;\n            }\n        }\n        if (++count == AOF_REWRITE_ITEMS_PER_CMD) count = 0;\n        items--;\n    }\n    listTypeResetIterator(&li);\n    return 1;\n}\n\n/* Emit the commands needed to rebuild a set object.\n * The function returns 0 on error, 1 on success. 
*/\nint rewriteSetObject(rio *r, robj *key, robj *o) {\n    long long count = 0, items = setTypeSize(o);\n    setTypeIterator si;\n    char *str;\n    size_t len;\n    int64_t llval;\n    setTypeInitIterator(&si, o);\n    while (setTypeNext(&si, &str, &len, &llval) != -1) {\n        if (count == 0) {\n            int cmd_items = (items > AOF_REWRITE_ITEMS_PER_CMD) ?\n                AOF_REWRITE_ITEMS_PER_CMD : items;\n            if (!rioWriteBulkCount(r,'*',2+cmd_items) ||\n                !rioWriteBulkString(r,\"SADD\",4) ||\n                !rioWriteBulkObject(r,key))\n            {\n                setTypeResetIterator(&si);\n                return 0;\n            }\n        }\n        size_t written = str ?\n            rioWriteBulkString(r, str, len) : rioWriteBulkLongLong(r, llval);\n        if (!written) {\n            setTypeResetIterator(&si);\n            return 0;\n        }\n        if (++count == AOF_REWRITE_ITEMS_PER_CMD) count = 0;\n        items--;\n    }\n    setTypeResetIterator(&si);\n    return 1;\n}\n\n/* Emit the commands needed to rebuild a sorted set object.\n * The function returns 0 on error, 1 on success. 
*/\nint rewriteSortedSetObject(rio *r, robj *key, robj *o) {\n    long long count = 0, items = zsetLength(o);\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = o->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr;\n        unsigned int vlen;\n        long long vll;\n        double score;\n\n        eptr = lpSeek(zl,0);\n        serverAssert(eptr != NULL);\n        sptr = lpNext(zl,eptr);\n        serverAssert(sptr != NULL);\n\n        while (eptr != NULL) {\n            vstr = lpGetValue(eptr,&vlen,&vll);\n            score = zzlGetScore(sptr);\n\n            if (count == 0) {\n                int cmd_items = (items > AOF_REWRITE_ITEMS_PER_CMD) ?\n                    AOF_REWRITE_ITEMS_PER_CMD : items;\n\n                if (!rioWriteBulkCount(r,'*',2+cmd_items*2) ||\n                    !rioWriteBulkString(r,\"ZADD\",4) ||\n                    !rioWriteBulkObject(r,key)) \n                {\n                    return 0;\n                }\n            }\n            if (!rioWriteBulkDouble(r,score)) return 0;\n            if (vstr != NULL) {\n                if (!rioWriteBulkString(r,(char*)vstr,vlen)) return 0;\n            } else {\n                if (!rioWriteBulkLongLong(r,vll)) return 0;\n            }\n            zzlNext(zl,&eptr,&sptr);\n            if (++count == AOF_REWRITE_ITEMS_PER_CMD) count = 0;\n            items--;\n        }\n    } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = o->ptr;\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitIterator(&di, zs->dict);\n        while((de = dictNext(&di)) != NULL) {\n            zskiplistNode *znode = dictGetKey(de);\n            sds ele = zslGetNodeElement(znode);\n            double score = znode->score;\n\n            if (count == 0) {\n                int cmd_items = (items > AOF_REWRITE_ITEMS_PER_CMD) ?\n                    AOF_REWRITE_ITEMS_PER_CMD : items;\n\n                if 
(!rioWriteBulkCount(r,'*',2+cmd_items*2) ||\n                    !rioWriteBulkString(r,\"ZADD\",4) ||\n                    !rioWriteBulkObject(r,key)) \n                {\n                    dictResetIterator(&di);\n                    return 0;\n                }\n            }\n            if (!rioWriteBulkDouble(r,score) ||\n                !rioWriteBulkString(r,ele,sdslen(ele)))\n            {\n                dictResetIterator(&di);\n                return 0;\n            }\n            if (++count == AOF_REWRITE_ITEMS_PER_CMD) count = 0;\n            items--;\n        }\n        dictResetIterator(&di);\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return 1;\n}\n\n/* Write either the key or the value of the currently selected item of a hash.\n * The 'hi' argument passes a valid Redis hash iterator.\n * The 'what' field specifies whether to write a key or a value and can be\n * either OBJ_HASH_KEY or OBJ_HASH_VALUE.\n *\n * The function returns 0 on error, non-zero on success. */\nstatic int rioWriteHashIteratorCursor(rio *r, hashTypeIterator *hi, int what) {\n    if ((hi->encoding == OBJ_ENCODING_LISTPACK) || (hi->encoding == OBJ_ENCODING_LISTPACK_EX)) {\n        unsigned char *vstr = NULL;\n        unsigned int vlen = UINT_MAX;\n        long long vll = LLONG_MAX;\n\n        hashTypeCurrentFromListpack(hi, what, &vstr, &vlen, &vll, NULL);\n        if (vstr)\n            return rioWriteBulkString(r, (char*)vstr, vlen);\n        else\n            return rioWriteBulkLongLong(r, vll);\n    } else if (hi->encoding == OBJ_ENCODING_HT) {\n        char *str;\n        size_t len;\n        hashTypeCurrentFromHashTable(hi, what, &str, &len, NULL);\n        return rioWriteBulkString(r, str, len);\n    }\n\n    serverPanic(\"Unknown hash encoding\");\n    return 0;\n}\n\n/* Emit the commands needed to rebuild a hash object.\n * The function returns 0 on error, 1 on success. 
*/\nint rewriteHashObject(rio *r, robj *key, robj *o) {\n    int res = 0; /*fail*/\n\n    hashTypeIterator hi;\n    long long count = 0, items = hashTypeLength(o, 0);\n\n    int isHFE = hashTypeGetMinExpire(o, 0) != EB_EXPIRE_TIME_INVALID;\n    hashTypeInitIterator(&hi, o);\n\n    if (!isHFE) {\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            if (count == 0) {\n                int cmd_items = (items > AOF_REWRITE_ITEMS_PER_CMD) ?\n                                AOF_REWRITE_ITEMS_PER_CMD : items;\n                if (!rioWriteBulkCount(r, '*', 2 + cmd_items * 2) ||\n                    !rioWriteBulkString(r, \"HMSET\", 5) ||\n                    !rioWriteBulkObject(r, key))\n                    goto reHashEnd;\n            }\n\n            if (!rioWriteHashIteratorCursor(r, &hi, OBJ_HASH_KEY) ||\n                !rioWriteHashIteratorCursor(r, &hi, OBJ_HASH_VALUE))\n                goto reHashEnd;\n\n            if (++count == AOF_REWRITE_ITEMS_PER_CMD) count = 0;\n            items--;\n        }\n    } else {\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n\n            char hmsetCmd[] = \"*4\\r\\n$5\\r\\nHMSET\\r\\n\";\n            if ( (!rioWrite(r, hmsetCmd, sizeof(hmsetCmd) - 1)) ||\n                 (!rioWriteBulkObject(r, key)) ||\n                 (!rioWriteHashIteratorCursor(r, &hi, OBJ_HASH_KEY)) ||\n                 (!rioWriteHashIteratorCursor(r, &hi, OBJ_HASH_VALUE)) )\n                goto reHashEnd;\n\n            if (hi.expire_time != EB_EXPIRE_TIME_INVALID) {\n                char cmd[] = \"*6\\r\\n$10\\r\\nHPEXPIREAT\\r\\n\";\n                if ( (!rioWrite(r, cmd, sizeof(cmd) - 1)) ||\n                     (!rioWriteBulkObject(r, key)) ||\n                     (!rioWriteBulkLongLong(r, hi.expire_time)) ||\n                     (!rioWriteBulkString(r, \"FIELDS\", 6)) ||\n                     (!rioWriteBulkString(r, \"1\", 1)) ||\n                     (!rioWriteHashIteratorCursor(r, &hi, OBJ_HASH_KEY)) )\n                    goto 
reHashEnd;\n            }\n        }\n    }\n\n    res = 1; /* success */\n\nreHashEnd:\n    hashTypeResetIterator(&hi);\n    return res;\n}\n\n/* Helper for rewriteStreamObject() that generates a bulk string into the\n * AOF representing the ID 'id'. */\nint rioWriteBulkStreamID(rio *r,streamID *id) {\n    int retval;\n\n    sds replyid = sdscatfmt(sdsempty(),\"%U-%U\",id->ms,id->seq);\n    retval = rioWriteBulkString(r,replyid,sdslen(replyid));\n    sdsfree(replyid);\n    return retval;\n}\n\n/* Helper for rewriteStreamObject(): emit the XCLAIM needed in order to\n * add the message described by 'nack' having the id 'rawid', into the pending\n * list of the specified consumer. All this in the context of the specified\n * key and group. */\nint rioWriteStreamPendingEntry(rio *r, robj *key, const char *groupname, size_t groupname_len, streamConsumer *consumer, unsigned char *rawid, streamNACK *nack) {\n     /* XCLAIM <key> <group> <consumer> 0 <id> TIME <milliseconds-unix-time>\n               RETRYCOUNT <count> JUSTID FORCE. 
*/\n    streamID id;\n    streamDecodeID(rawid,&id);\n    if (rioWriteBulkCount(r,'*',12) == 0) return 0;\n    if (rioWriteBulkString(r,\"XCLAIM\",6) == 0) return 0;\n    if (rioWriteBulkObject(r,key) == 0) return 0;\n    if (rioWriteBulkString(r,groupname,groupname_len) == 0) return 0;\n    if (rioWriteBulkString(r,consumer->name,sdslen(consumer->name)) == 0) return 0;\n    if (rioWriteBulkString(r,\"0\",1) == 0) return 0;\n    if (rioWriteBulkStreamID(r,&id) == 0) return 0;\n    if (rioWriteBulkString(r,\"TIME\",4) == 0) return 0;\n    if (rioWriteBulkLongLong(r,nack->delivery_time) == 0) return 0;\n    if (rioWriteBulkString(r,\"RETRYCOUNT\",10) == 0) return 0;\n    if (rioWriteBulkLongLong(r,nack->delivery_count) == 0) return 0;\n    if (rioWriteBulkString(r,\"JUSTID\",6) == 0) return 0;\n    if (rioWriteBulkString(r,\"FORCE\",5) == 0) return 0;\n    return 1;\n}\n\n/* Helper for rewriteStreamObject(): emit a single XNACK FORCE command that\n * reconstructs one or more NACKed (unowned) PEL entries sharing the same\n * delivery_count. `ids` points to an array of `count` streamIDs (at most\n * AOF_REWRITE_ITEMS_PER_CMD). Returns 0 on error, 1 on success. */\nint rioWriteStreamNackedEntries(rio *r, robj *key, const char *groupname,\n                                size_t groupname_len, streamID *ids,\n                                int count, uint64_t delivery_count) {\n    serverAssert(count > 0 && count <= AOF_REWRITE_ITEMS_PER_CMD);\n\n    /* XNACK <key> <group> FAIL IDS <n> <id..> RETRYCOUNT <cnt> FORCE\n     * 6 fixed tokens before IDs + count IDs + 3 fixed tokens after. 
*/\n    if (rioWriteBulkCount(r,'*',6+count+3) == 0) return 0;\n    if (rioWriteBulkString(r,\"XNACK\",5) == 0) return 0;\n    if (rioWriteBulkObject(r,key) == 0) return 0;\n    if (rioWriteBulkString(r,groupname,groupname_len) == 0) return 0;\n    if (rioWriteBulkString(r,\"FAIL\",4) == 0) return 0;\n    if (rioWriteBulkString(r,\"IDS\",3) == 0) return 0;\n    if (rioWriteBulkLongLong(r,count) == 0) return 0;\n\n    for (int i = 0; i < count; i++) {\n        if (rioWriteBulkStreamID(r,&ids[i]) == 0) return 0;\n    }\n\n    if (rioWriteBulkString(r,\"RETRYCOUNT\",10) == 0) return 0;\n    if (rioWriteBulkLongLong(r,delivery_count) == 0) return 0;\n    if (rioWriteBulkString(r,\"FORCE\",5) == 0) return 0;\n    return 1;\n}\n\n/* Helper for rewriteStreamObject(): emit the XGROUP CREATECONSUMER needed\n * in order to create consumers that do not have any pending entries.\n * All this in the context of the specified key and group. */\nint rioWriteStreamEmptyConsumer(rio *r, robj *key, const char *groupname, size_t groupname_len, streamConsumer *consumer) {\n    /* XGROUP CREATECONSUMER <key> <group> <consumer> */\n    if (rioWriteBulkCount(r,'*',5) == 0) return 0;\n    if (rioWriteBulkString(r,\"XGROUP\",6) == 0) return 0;\n    if (rioWriteBulkString(r,\"CREATECONSUMER\",14) == 0) return 0;\n    if (rioWriteBulkObject(r,key) == 0) return 0;\n    if (rioWriteBulkString(r,groupname,groupname_len) == 0) return 0;\n    if (rioWriteBulkString(r,consumer->name,sdslen(consumer->name)) == 0) return 0;\n    return 1;\n}\n\n/* Helper for rewriteStreamObject(): emit the XIDMPRECORD needed to\n * restore an IDMP entry for the given producer in the context of the\n * specified key. 
*/\nint rioWriteStreamIdmpEntry(rio *r, robj *key, const char *pid, size_t pid_len, idmpEntry *entry) {\n    /* XIDMPRECORD <key> <pid> <iid> <streamID> */\n    if (rioWriteBulkCount(r,'*',5) == 0) return 0;\n    if (rioWriteBulkString(r,\"XIDMPRECORD\",11) == 0) return 0;\n    if (rioWriteBulkObject(r,key) == 0) return 0;\n    if (rioWriteBulkString(r,pid,pid_len) == 0) return 0;\n    if (rioWriteBulkString(r,entry->iid,entry->iid_len) == 0) return 0;\n    if (rioWriteBulkStreamID(r,&entry->id) == 0) return 0;\n    return 1;\n}\n\n/* Emit the commands needed to rebuild a stream object.\n * The function returns 0 on error, 1 on success. */\nint rewriteStreamObject(rio *r, robj *key, robj *o) {\n    stream *s = o->ptr;\n    streamID id;\n\n    if (s->length) {\n        /* Reconstruct the stream data using XADD commands. */\n        streamIterator si;\n        int64_t numfields;\n        streamIteratorStart(&si,s,NULL,NULL,0);\n        while(streamIteratorGetID(&si,&id,&numfields)) {\n            /* Emit a two-element array for each item. The first is\n             * the ID, the second is an array of field-value pairs. */\n\n            /* Emit the XADD <key> <id> ...fields... command. 
*/\n            if (!rioWriteBulkCount(r,'*',3+numfields*2) || \n                !rioWriteBulkString(r,\"XADD\",4) ||\n                !rioWriteBulkObject(r,key) ||\n                !rioWriteBulkStreamID(r,&id)) \n            {\n                streamIteratorStop(&si);\n                return 0;\n            }\n            while(numfields--) {\n                unsigned char *field, *value;\n                int64_t field_len, value_len;\n                streamIteratorGetField(&si,&field,&value,&field_len,&value_len);\n                if (!rioWriteBulkString(r,(char*)field,field_len) ||\n                    !rioWriteBulkString(r,(char*)value,value_len)) \n                {\n                    streamIteratorStop(&si);\n                    return 0;\n                }\n            }\n        }\n        streamIteratorStop(&si);\n    } else {\n        /* Use the XADD MAXLEN 0 trick to generate an empty stream if\n         * the key we are serializing is an empty stream, which is possible\n         * for the Stream type. */\n        id.ms = 0; id.seq = 1;\n        if (!rioWriteBulkCount(r,'*',7) ||\n            !rioWriteBulkString(r,\"XADD\",4) ||\n            !rioWriteBulkObject(r,key) ||\n            !rioWriteBulkString(r,\"MAXLEN\",6) ||\n            !rioWriteBulkString(r,\"0\",1) ||\n            !rioWriteBulkStreamID(r,&id) ||\n            !rioWriteBulkString(r,\"x\",1) ||\n            !rioWriteBulkString(r,\"y\",1))\n        {\n            return 0;\n        }\n    }\n\n    /* Append XSETID after XADD, make sure lastid is correct,\n     * in case of XDEL lastid. 
*/\n    if (!rioWriteBulkCount(r,'*',7) ||\n        !rioWriteBulkString(r,\"XSETID\",6) ||\n        !rioWriteBulkObject(r,key) ||\n        !rioWriteBulkStreamID(r,&s->last_id) ||\n        !rioWriteBulkString(r,\"ENTRIESADDED\",12) ||\n        !rioWriteBulkLongLong(r,s->entries_added) ||\n        !rioWriteBulkString(r,\"MAXDELETEDID\",12) ||\n        !rioWriteBulkStreamID(r,&s->max_deleted_entry_id)) \n    {\n        return 0; \n    }\n\n    /* Create all the stream consumer groups. */\n    if (s->cgroups) {\n        raxIterator ri;\n        raxStart(&ri,s->cgroups);\n        raxSeek(&ri,\"^\",NULL,0);\n        while(raxNext(&ri)) {\n            streamCG *group = ri.data;\n            /* Emit the XGROUP CREATE in order to create the group. */\n            if (!rioWriteBulkCount(r,'*',7) ||\n                !rioWriteBulkString(r,\"XGROUP\",6) ||\n                !rioWriteBulkString(r,\"CREATE\",6) ||\n                !rioWriteBulkObject(r,key) ||\n                !rioWriteBulkString(r,(char*)ri.key,ri.key_len) ||\n                !rioWriteBulkStreamID(r,&group->last_id) ||\n                !rioWriteBulkString(r,\"ENTRIESREAD\",11) ||\n                !rioWriteBulkLongLong(r,group->entries_read))\n            {\n                raxStop(&ri);\n                return 0;\n            }\n\n            /* Generate XCLAIMs for each consumer that happens to\n             * have pending entries. Empty consumers would be generated with\n             * XGROUP CREATECONSUMER. 
*/\n            raxIterator ri_cons;\n            raxStart(&ri_cons,group->consumers);\n            raxSeek(&ri_cons,\"^\",NULL,0);\n            while(raxNext(&ri_cons)) {\n                streamConsumer *consumer = ri_cons.data;\n                /* If there are no pending entries, just emit XGROUP CREATECONSUMER */\n                if (raxSize(consumer->pel) == 0) {\n                    if (rioWriteStreamEmptyConsumer(r,key,(char*)ri.key,\n                                                    ri.key_len,consumer) == 0)\n                    {\n                        raxStop(&ri_cons);\n                        raxStop(&ri);\n                        return 0;\n                    }\n                    continue;\n                }\n                /* For the current consumer, iterate all the PEL entries\n                 * to emit the XCLAIM protocol. */\n                raxIterator ri_pel;\n                raxStart(&ri_pel,consumer->pel);\n                raxSeek(&ri_pel,\"^\",NULL,0);\n                while(raxNext(&ri_pel)) {\n                    streamNACK *nack = ri_pel.data;\n                    if (rioWriteStreamPendingEntry(r,key,(char*)ri.key,\n                                                   ri.key_len,consumer,\n                                                   ri_pel.key,nack) == 0)\n                    {\n                        raxStop(&ri_pel);\n                        raxStop(&ri_cons);\n                        raxStop(&ri);\n                        return 0;\n                    }\n                }\n                raxStop(&ri_pel);\n            }\n            raxStop(&ri_cons);\n\n            /* Emit XNACK FORCE for NACKed (unowned) entries from the\n             * NACK zone of the PEL time-ordered list\n             * (pel_time_head..pel_nack_tail). 
Consecutive entries with\n             * the same delivery_count are batched into a single command.\n             *\n             * nack_stop is the first node outside the NACK zone (or NULL\n             * when the zone extends to the end of the PEL). When\n             * pel_nack_tail is NULL (no NACKed entries) the guard below\n             * skips the whole block. */\n            streamNACK *nack_end = group->pel_nack_tail;\n            if (nack_end != NULL) {\n                streamID batch_ids[AOF_REWRITE_ITEMS_PER_CMD];\n                streamNACK *nack_stop = nack_end->pel_next;\n                streamNACK *nack = group->pel_time_head;\n                int batch_count = 0;\n                uint64_t batch_dc = 0;\n                while (nack && nack != nack_stop) {\n                    if (batch_count == 0) batch_dc = nack->delivery_count;\n                    batch_ids[batch_count++] = nack->id;\n                    streamNACK *next = nack->pel_next;\n                    if (batch_count >= AOF_REWRITE_ITEMS_PER_CMD ||\n                        !next || next == nack_stop ||\n                        next->delivery_count != batch_dc)\n                    {\n                        if (rioWriteStreamNackedEntries(r,key,(char*)ri.key,\n                                                        ri.key_len,batch_ids,\n                                                        batch_count,batch_dc) == 0)\n                        {\n                            raxStop(&ri);\n                            return 0;\n                        }\n                        batch_count = 0;\n                    }\n                    nack = next;\n                }\n            }\n        }\n        raxStop(&ri);\n    }\n\n    /* Emit XCFGSET to restore per-stream IDMP configuration if it differs\n     * from the server defaults, so that AOF rewrite preserves custom settings. 
*/\n    if (s->idmp_duration != (uint64_t)server.stream_idmp_duration ||\n        s->idmp_max_entries != (uint64_t)server.stream_idmp_maxsize)\n    {\n        if (!rioWriteBulkCount(r,'*',6) ||\n            !rioWriteBulkString(r,\"XCFGSET\",7) ||\n            !rioWriteBulkObject(r,key) ||\n            !rioWriteBulkString(r,\"IDMP-DURATION\",13) ||\n            !rioWriteBulkLongLong(r,s->idmp_duration) ||\n            !rioWriteBulkString(r,\"IDMP-MAXSIZE\",12) ||\n            !rioWriteBulkLongLong(r,s->idmp_max_entries))\n        {\n            return 0;\n        }\n    }\n\n    /* Emit XIDMPRECORD for each IDMP entry. Entries whose stream ID no\n     * longer exists (removed by XDEL/trim) are skipped, since\n     * xidmprecordCommand() rejects references to missing IDs and would\n     * cause AOF replay errors. */\n    if (s->idmp_producers) {\n        raxIterator ri_idmp;\n        raxStart(&ri_idmp,s->idmp_producers);\n        raxSeek(&ri_idmp,\"^\",NULL,0);\n        while(raxNext(&ri_idmp)) {\n            idmpProducer *producer = ri_idmp.data;\n            for (idmpEntry *entry = producer->idmp_head; entry != NULL; entry = entry->next) {\n                if (!streamEntryExists(s, &entry->id)) continue;\n                if (rioWriteStreamIdmpEntry(r,key,(char*)ri_idmp.key,\n                                            ri_idmp.key_len,entry) == 0)\n                {\n                    raxStop(&ri_idmp);\n                    return 0;\n                }\n            }\n        }\n        raxStop(&ri_idmp);\n    }\n\n    return 1;\n}\n\nint rewriteGCRAObject(rio *r, robj *key, robj *o) {\n    long long val;\n    getLongLongFromGCRAObject(o, &val);\n\n    /* GCRASETVALUE <key> <tat> */\n    if (rioWriteBulkCount(r,'*',3) == 0) return 0;\n    if (rioWriteBulkString(r,\"GCRASETVALUE\",12) == 0) return 0;\n    if (rioWriteBulkObject(r,key) == 0) return 0;\n    if (rioWriteBulkLongLong(r,val) == 0) return 0;\n    return 1;\n}\n\n/* Call the module type callback in order 
to rewrite a data type\n * that is exported by a module and is not handled by Redis itself.\n * The function returns 0 on error, 1 on success. */\nint rewriteModuleObject(rio *r, robj *key, robj *o, int dbid) {\n    RedisModuleIO io;\n    moduleValue *mv = o->ptr;\n    moduleType *mt = mv->type;\n    moduleInitIOContext(&io, &mt->entity, r, key, dbid);\n    mt->aof_rewrite(&io,key,mv->value);\n    if (io.ctx) {\n        moduleFreeContext(io.ctx);\n        zfree(io.ctx);\n    }\n    return io.error ? 0 : 1;\n}\n\nstatic int rewriteFunctions(rio *aof) {\n    dict *functions = functionsLibGet();\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, functions);\n    while ((entry = dictNext(&iter))) {\n        functionLibInfo *li = dictGetVal(entry);\n        if (rioWrite(aof, \"*3\\r\\n\", 4) == 0) goto werr;\n        char function_load[] = \"$8\\r\\nFUNCTION\\r\\n$4\\r\\nLOAD\\r\\n\";\n        if (rioWrite(aof, function_load, sizeof(function_load) - 1) == 0) goto werr;\n        if (rioWriteBulkString(aof, li->code, sdslen(li->code)) == 0) goto werr;\n    }\n    dictResetIterator(&iter);\n    return 1;\n\nwerr:\n    dictResetIterator(&iter);\n    return 0;\n}\n\nint rewriteObject(rio *r, robj *key, robj *o, int dbid, long long expiretime) {\n    /* Save the key and associated value */\n    if (o->type == OBJ_STRING) {\n        /* Emit a SET command */\n        static const char cmd[]=\"*3\\r\\n$3\\r\\nSET\\r\\n\";\n        if (rioWrite(r,cmd,sizeof(cmd)-1) == 0) return C_ERR;\n        /* Key and value */\n        if (rioWriteBulkObject(r,key) == 0) return C_ERR;\n        if (rioWriteBulkObject(r,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_LIST) {\n        if (rewriteListObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_SET) {\n        if (rewriteSetObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_ZSET) {\n        if (rewriteSortedSetObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == 
OBJ_HASH) {\n        if (rewriteHashObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_STREAM) {\n        if (rewriteStreamObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_GCRA) {\n        if (rewriteGCRAObject(r,key,o) == 0) return C_ERR;\n    } else if (o->type == OBJ_MODULE) {\n        if (rewriteModuleObject(r,key,o,dbid) == 0) return C_ERR;\n    } else {\n        serverPanic(\"Unknown object type\");\n    }\n\n    /* Save the expire time */\n    if (expiretime != -1) {\n        static const char cmd[]=\"*3\\r\\n$9\\r\\nPEXPIREAT\\r\\n\";\n        if (rioWrite(r,cmd,sizeof(cmd)-1) == 0) return C_ERR;\n        if (rioWriteBulkObject(r,key) == 0) return C_ERR;\n        if (rioWriteBulkLongLong(r,expiretime) == 0) return C_ERR;\n    }\n\n    /* If modules metadata is available */\n    if ((getModuleMetaBits(o->metabits)) && (keyMetaOnAof(r, key, o, dbid) == 0))\n        return C_ERR;\n\n    return C_OK;\n}\n\nint rewriteAppendOnlyFileRio(rio *aof) {\n    dictEntry *de;\n    int j;\n    long key_count = 0;\n    long long updated_time = 0;\n    unsigned long long skipped = 0;\n    kvstoreIterator kvs_it;\n\n    /* Record timestamp at the beginning of rewriting AOF. 
*/\n    if (server.aof_timestamp_enabled) {\n        sds ts = genAofTimestampAnnotationIfNeeded(1);\n        if (rioWrite(aof,ts,sdslen(ts)) == 0) { sdsfree(ts); goto werr; }\n        sdsfree(ts);\n    }\n\n    if (rewriteFunctions(aof) == 0) goto werr;\n\n    for (j = 0; j < server.dbnum; j++) {\n        char selectcmd[] = \"*2\\r\\n$6\\r\\nSELECT\\r\\n\";\n        redisDb *db = server.db + j;\n        if (kvstoreSize(db->keys) == 0) continue;\n\n        /* SELECT the new DB */\n        if (rioWrite(aof,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;\n        if (rioWriteBulkLongLong(aof,j) == 0) goto werr;\n\n        kvstoreIteratorInit(&kvs_it, db->keys);\n        int last_slot = -1;\n        /* Iterate this DB writing every entry */\n        while((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n            long long expiretime;\n            size_t aof_bytes_before_key = aof->processed_bytes;\n            int curr_slot = kvstoreIteratorGetCurrentDictIndex(&kvs_it);\n\n            /* In cluster mode, dismiss bucket arrays of the previous slot\n             * which won't be accessed again, to avoid CoW. 
*/\n            if (server.cluster_enabled && curr_slot != last_slot) {\n                if (server.in_fork_child && last_slot != -1)\n                    dismissDictBucketsMemory(kvstoreGetDict(db->keys, last_slot));\n                last_slot = curr_slot;\n            }\n\n            /* Get the value object (of type kvobj) */\n            kvobj *o = dictGetKV(de);\n            \n            /* Get the expire time */\n            expiretime = kvobjGetExpire(o);\n\n            /* Skip keys that are being trimmed */\n            if (server.cluster_enabled && isSlotInTrimJob(curr_slot)) {\n                skipped++;\n                continue;\n            }\n            \n            /* Set on stack string object for key */\n            robj key;\n            initStaticStringObject(key, kvobjGetKey(o));\n\n            if (rewriteObject(aof, &key, o, j, expiretime) == C_ERR) goto werr2;\n\n            /* In fork child process, we can try to release memory back to the\n             * OS and possibly avoid or decrease COW. We give the dismiss\n             * mechanism a hint about an estimated size of the object we stored. 
*/\n            size_t dump_size = aof->processed_bytes - aof_bytes_before_key;\n            if (server.in_fork_child && dump_size > server.page_size/2)\n                dismissObject(o, dump_size);\n\n            /* Update info every 1 second (approximately).\n             * In order to avoid calling mstime() on each iteration, we\n             * check the diff every 1024 keys. */\n            if ((key_count++ & 1023) == 0) {\n                long long now = mstime();\n                if (now - updated_time >= 1000) {\n                    sendChildInfo(CHILD_INFO_TYPE_CURRENT_INFO, key_count, \"AOF rewrite\");\n                    updated_time = now;\n                }\n            }\n\n            /* Delay before next key if required (for testing) */\n            if (server.rdb_key_save_delay)\n                debugDelay(server.rdb_key_save_delay);\n        }\n        kvstoreIteratorReset(&kvs_it);\n\n        /* Dismiss bucket arrays of kvstore in standalone mode. */\n        if (server.in_fork_child && !server.cluster_enabled)\n            dismissKvstoreBucketsMemory(db->keys);\n    }\n    serverLog(LL_NOTICE, \"AOF rewrite done, %ld keys saved, %llu keys skipped.\", key_count, skipped);\n    return C_OK;\n\nwerr2:\n    kvstoreIteratorReset(&kvs_it);\nwerr:\n    return C_ERR;\n}\n\n/* Write a sequence of commands able to fully rebuild the dataset into\n * \"filename\". Used both by REWRITEAOF and BGREWRITEAOF.\n *\n * In order to minimize the number of commands needed in the rewritten\n * log Redis uses variadic commands when possible, such as RPUSH, SADD\n * and ZADD. However, at most AOF_REWRITE_ITEMS_PER_CMD items at a time\n * are inserted using a single command. */\nint rewriteAppendOnlyFile(char *filename) {\n    rio aof;\n    FILE *fp = NULL;\n    char tmpfile[256];\n\n    /* Note that we have to use a different temp name here compared to the\n     * one used by rewriteAppendOnlyFileBackground() function. 
*/\n    snprintf(tmpfile,256,\"temp-rewriteaof-%d.aof\", (int) getpid());\n    fp = fopen(tmpfile,\"w\");\n    if (!fp) {\n        serverLog(LL_WARNING, \"Opening the temp file for AOF rewrite in rewriteAppendOnlyFile(): %s\", strerror(errno));\n        return C_ERR;\n    }\n\n    rioInitWithFile(&aof,fp);\n\n    if (server.aof_rewrite_incremental_fsync) {\n        rioSetAutoSync(&aof,REDIS_AUTOSYNC_BYTES);\n        rioSetReclaimCache(&aof,1);\n    }\n\n    startSaving(RDBFLAGS_AOF_PREAMBLE);\n\n    if (server.aof_use_rdb_preamble) {\n        int error;\n        if (rdbSaveRio(SLAVE_REQ_NONE,&aof,&error,RDBFLAGS_AOF_PREAMBLE,NULL) == C_ERR) {\n            errno = error;\n            goto werr;\n        }\n    } else {\n        if (rewriteAppendOnlyFileRio(&aof) == C_ERR) goto werr;\n    }\n\n    /* Make sure data will not remain on the OS's output buffers */\n    if (fflush(fp)) goto werr;\n    if (fsync(fileno(fp))) goto werr;\n    if (reclaimFilePageCache(fileno(fp), 0, 0) == -1) {\n        /* A minor error. Just log to know what happens */\n        serverLog(LL_NOTICE,\"Unable to reclaim page cache: %s\", strerror(errno));\n    }\n    if (fclose(fp)) { fp = NULL; goto werr; }\n    fp = NULL;\n\n    /* Use RENAME to make sure the DB file is changed atomically only\n     * if the generated DB file is ok. 
*/\n    if (rename(tmpfile,filename) == -1) {\n        serverLog(LL_WARNING,\"Error moving temp append only file on the final destination: %s\", strerror(errno));\n        unlink(tmpfile);\n        stopSaving(0);\n        return C_ERR;\n    }\n    stopSaving(1);\n\n    return C_OK;\n\nwerr:\n    serverLog(LL_WARNING,\"Write error writing append only file on disk: %s\", strerror(errno));\n    if (fp) fclose(fp);\n    unlink(tmpfile);\n    stopSaving(0);\n    return C_ERR;\n}\n\n/* ----------------------------------------------------------------------------\n * AOF background rewrite\n * ------------------------------------------------------------------------- */\n\n/* This is how rewriting of the append only file in background works:\n *\n * 1) The user calls BGREWRITEAOF\n * 2) Redis calls this function, which forks():\n *    2a) the child rewrites the append only file in a temp file.\n *    2b) the parent opens a new INCR AOF file to continue writing.\n * 3) When the child has finished '2a', it exits.\n * 4) The parent will trap the exit code; if it's OK, it will:\n *    4a) get a new BASE file name and mark the previous one (if any) as the HISTORY type\n *    4b) rename(2) the temp file to the new BASE file name\n *    4c) mark the rewritten INCR AOFs as history type\n *    4d) persist the AOF manifest file\n *    4e) delete the history files using bio\n */\nint rewriteAppendOnlyFileBackground(void) {\n    pid_t childpid;\n\n    if (hasActiveChildProcess()) return C_ERR;\n\n    if (dirCreateIfMissing(server.aof_dirname) == -1) {\n        serverLog(LL_WARNING, \"Can't open or create append-only dir %s: %s\",\n            server.aof_dirname, strerror(errno));\n        server.aof_lastbgrewrite_status = C_ERR;\n        return C_ERR;\n    }\n\n    /* We set aof_selected_db to -1 in order to force the next call to\n     * feedAppendOnlyFile() to issue a SELECT command. 
*/\n    server.aof_selected_db = -1;\n    flushAppendOnlyFile(1);\n    if (openNewIncrAofForAppend() != C_OK) {\n        server.aof_lastbgrewrite_status = C_ERR;\n        return C_ERR;\n    }\n\n    if (server.aof_state == AOF_WAIT_REWRITE) {\n        /* Wait for all bio jobs related to AOF to drain. This prevents a race\n         * between updates to `fsynced_reploff_pending` of the worker thread, belonging\n         * to the previous AOF, and the new one. This concern is specific to a full\n         * sync scenario, where we don't want to risk the ACKed replication offset\n         * jumping backwards or forwards when switching to a different master. */\n        bioDrainWorker(BIO_AOF_FSYNC);\n\n        /* Set the initial repl_offset, which will be applied to fsynced_reploff\n         * when AOFRW finishes (after possibly being updated by a bio thread) */\n        atomicSet(server.fsynced_reploff_pending, server.master_repl_offset);\n        server.fsynced_reploff = 0;\n    }\n\n    server.stat_aof_rewrites++;\n\n    if ((childpid = redisFork(CHILD_TYPE_AOF)) == 0) {\n        char tmpfile[256];\n\n        /* Child */\n        redisSetProcTitle(\"redis-aof-rewrite\");\n        redisSetCpuAffinity(server.aof_rewrite_cpulist);\n        snprintf(tmpfile,256,\"temp-rewriteaof-bg-%d.aof\", (int) getpid());\n        if (rewriteAppendOnlyFile(tmpfile) == C_OK) {\n            serverLog(LL_NOTICE,\n                \"Successfully created the temporary AOF base file %s\", tmpfile);\n            sendChildCowInfo(CHILD_INFO_TYPE_AOF_COW_SIZE, \"AOF rewrite\");\n            exitFromChild(0, 0);\n        } else {\n            exitFromChild(1, 0);\n        }\n    } else {\n        /* Parent */\n        if (childpid == -1) {\n            server.aof_lastbgrewrite_status = C_ERR;\n            serverLog(LL_WARNING,\n                \"Can't rewrite append only file in background: fork: %s\",\n                strerror(errno));\n            return C_ERR;\n        }\n        
serverLog(LL_NOTICE,\n            \"Background append only file rewriting started by pid %ld\",(long) childpid);\n        server.aof_rewrite_scheduled = 0;\n        server.aof_rewrite_time_start = time(NULL);\n        return C_OK;\n    }\n    return C_OK; /* unreached */\n}\n\nvoid bgrewriteaofCommand(client *c) {\n    if (server.child_type == CHILD_TYPE_AOF) {\n        addReplyError(c,\"Background append only file rewriting already in progress\");\n    } else if (hasActiveChildProcess() || server.in_exec) {\n        server.aof_rewrite_scheduled = 1;\n        /* When manually triggering AOFRW we reset the count \n         * so that it can be executed immediately. */\n        server.stat_aofrw_consecutive_failures = 0;\n        addReplyStatus(c,\"Background append only file rewriting scheduled\");\n    } else if (rewriteAppendOnlyFileBackground() == C_OK) {\n        addReplyStatus(c,\"Background append only file rewriting started\");\n    } else {\n        addReplyError(c,\"Can't execute an AOF background rewriting. \"\n                        \"Please check the server logs for more information.\");\n    }\n}\n\nvoid aofRemoveTempFile(pid_t childpid) {\n    char tmpfile[256];\n\n    snprintf(tmpfile,256,\"temp-rewriteaof-bg-%d.aof\", (int) childpid);\n    bg_unlink(tmpfile);\n\n    snprintf(tmpfile,256,\"temp-rewriteaof-%d.aof\", (int) childpid);\n    bg_unlink(tmpfile);\n}\n\n/* Get size of an AOF file.\n * The status argument is an optional output argument to be filled with\n * one of the AOF_ status values. */\noff_t getAppendOnlyFileSize(sds filename, int *status) {\n    struct redis_stat sb;\n    off_t size;\n    mstime_t latency;\n\n    sds aof_filepath = makePath(server.aof_dirname, filename);\n    latencyStartMonitor(latency);\n    if (redis_stat(aof_filepath, &sb) == -1) {\n        if (status) *status = errno == ENOENT ? AOF_NOT_EXIST : AOF_OPEN_ERR;\n        serverLog(LL_WARNING, \"Unable to obtain the AOF file %s length. 
stat: %s\",\n            filename, strerror(errno));\n        size = 0;\n    } else {\n        if (status) *status = AOF_OK;\n        size = sb.st_size;\n    }\n    latencyEndMonitor(latency);\n    latencyAddSampleIfNeeded(\"aof-fstat\", latency);\n    sdsfree(aof_filepath);\n    return size;\n}\n\n/* Get size of all AOF files referred by the manifest (excluding history).\n * The status argument is an output argument to be filled with\n * one of the AOF_ status values. */\noff_t getBaseAndIncrAppendOnlyFilesSize(aofManifest *am, int *status) {\n    off_t size = 0;\n    listNode *ln;\n    listIter li;\n\n    if (am->base_aof_info) {\n        serverAssert(am->base_aof_info->file_type == AOF_FILE_TYPE_BASE);\n\n        size += getAppendOnlyFileSize(am->base_aof_info->file_name, status);\n        if (*status != AOF_OK) return 0;\n    }\n\n    listRewind(am->incr_aof_list, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        aofInfo *ai = (aofInfo*)ln->value;\n        serverAssert(ai->file_type == AOF_FILE_TYPE_INCR);\n        size += getAppendOnlyFileSize(ai->file_name, status);\n        if (*status != AOF_OK) return 0;\n    }\n\n    return size;\n}\n\nint getBaseAndIncrAppendOnlyFilesNum(aofManifest *am) {\n    int num = 0;\n    if (am->base_aof_info) num++;\n    if (am->incr_aof_list) num += listLength(am->incr_aof_list);\n    return num;\n}\n\n/* A background append only file rewriting (BGREWRITEAOF) terminated its work.\n * Handle this. 
*/\nvoid backgroundRewriteDoneHandler(int exitcode, int bysignal) {\n    if (!bysignal && exitcode == 0) {\n        char tmpfile[256];\n        long long now = ustime();\n        sds new_base_filepath = NULL;\n        sds new_incr_filepath = NULL;\n        aofManifest *temp_am;\n        mstime_t latency;\n\n        serverLog(LL_NOTICE,\n            \"Background AOF rewrite terminated with success\");\n\n        snprintf(tmpfile, 256, \"temp-rewriteaof-bg-%d.aof\",\n            (int)server.child_pid);\n\n        serverAssert(server.aof_manifest != NULL);\n\n        /* Dup a temporary aof_manifest for subsequent modifications. */\n        temp_am = aofManifestDup(server.aof_manifest);\n\n        /* Get a new BASE file name and mark the previous (if we have)\n         * as the HISTORY type. */\n        sds new_base_filename = getNewBaseFileNameAndMarkPreAsHistory(temp_am);\n        serverAssert(new_base_filename != NULL);\n        new_base_filepath = makePath(server.aof_dirname, new_base_filename);\n\n        /* Rename the temporary aof file to 'new_base_filename'. */\n        latencyStartMonitor(latency);\n        if (rename(tmpfile, new_base_filepath) == -1) {\n            serverLog(LL_WARNING,\n                \"Error trying to rename the temporary AOF base file %s into %s: %s\",\n                tmpfile,\n                new_base_filepath,\n                strerror(errno));\n            aofManifestFree(temp_am);\n            sdsfree(new_base_filepath);\n            server.aof_lastbgrewrite_status = C_ERR;\n            server.stat_aofrw_consecutive_failures++;\n            goto cleanup;\n        }\n        latencyEndMonitor(latency);\n        latencyAddSampleIfNeeded(\"aof-rename\", latency);\n        serverLog(LL_NOTICE,\n            \"Successfully renamed the temporary AOF base file %s into %s\", tmpfile, new_base_filename);\n\n        /* Rename the temporary incr aof file to 'new_incr_filename'. 
*/\n        if (server.aof_state == AOF_WAIT_REWRITE) {\n            /* Get temporary incr aof name. */\n            sds temp_incr_aof_name = getTempIncrAofName();\n            sds temp_incr_filepath = makePath(server.aof_dirname, temp_incr_aof_name);\n            /* Get next new incr aof name. */\n            sds new_incr_filename = getNewIncrAofName(temp_am, tempIncAofStartReplOffset);\n            new_incr_filepath = makePath(server.aof_dirname, new_incr_filename);\n            latencyStartMonitor(latency);\n            if (rename(temp_incr_filepath, new_incr_filepath) == -1) {\n                serverLog(LL_WARNING,\n                    \"Error trying to rename the temporary AOF incr file %s into %s: %s\",\n                    temp_incr_filepath,\n                    new_incr_filepath,\n                    strerror(errno));\n                bg_unlink(new_base_filepath);\n                sdsfree(new_base_filepath);\n                aofManifestFree(temp_am);\n                sdsfree(temp_incr_filepath);\n                sdsfree(new_incr_filepath);\n                sdsfree(temp_incr_aof_name);\n                server.aof_lastbgrewrite_status = C_ERR;\n                server.stat_aofrw_consecutive_failures++;\n                goto cleanup;\n            }\n            latencyEndMonitor(latency);\n            latencyAddSampleIfNeeded(\"aof-rename\", latency);\n            serverLog(LL_NOTICE,\n                \"Successfully renamed the temporary AOF incr file %s into %s\", temp_incr_aof_name, new_incr_filename);\n            sdsfree(temp_incr_filepath);\n            sdsfree(temp_incr_aof_name);\n        }\n\n        /* Change the AOF file type in 'incr_aof_list' from AOF_FILE_TYPE_INCR\n         * to AOF_FILE_TYPE_HIST, and move them to the 'history_aof_list'. */\n        markRewrittenIncrAofAsHistory(temp_am);\n\n        /* Persist our modifications. 
*/\n        if (persistAofManifest(temp_am) == C_ERR) {\n            bg_unlink(new_base_filepath);\n            aofManifestFree(temp_am);\n            sdsfree(new_base_filepath);\n            if (new_incr_filepath) {\n                bg_unlink(new_incr_filepath);\n                sdsfree(new_incr_filepath);\n            }\n            server.aof_lastbgrewrite_status = C_ERR;\n            server.stat_aofrw_consecutive_failures++;\n            goto cleanup;\n        }\n        sdsfree(new_base_filepath);\n        if (new_incr_filepath) sdsfree(new_incr_filepath);\n\n        /* We can safely let `server.aof_manifest` point to 'temp_am' and free the previous one. */\n        aofManifestFreeAndUpdate(temp_am);\n\n        if (server.aof_state != AOF_OFF) {\n            /* AOF enabled. */\n            server.aof_current_size = getAppendOnlyFileSize(new_base_filename, NULL) + server.aof_last_incr_size;\n            server.aof_rewrite_base_size = server.aof_current_size;\n        }\n\n        /* We don't care about the return value of `aofDelHistoryFiles`, because the history\n         * deletion failure will not cause any problems. */\n        aofDelHistoryFiles();\n\n        server.aof_lastbgrewrite_status = C_OK;\n        server.stat_aofrw_consecutive_failures = 0;\n\n        serverLog(LL_NOTICE, \"Background AOF rewrite finished successfully\");\n        /* Change state from WAIT_REWRITE to ON if needed */\n        if (server.aof_state == AOF_WAIT_REWRITE) {\n            server.aof_state = AOF_ON;\n\n            /* Update the fsynced replication offset that just now become valid.\n             * This could either be the one we took in startAppendOnly, or a\n             * newer one set by the bio thread. 
*/\n            long long fsynced_reploff_pending;\n            atomicGet(server.fsynced_reploff_pending, fsynced_reploff_pending);\n            server.fsynced_reploff = fsynced_reploff_pending;\n        }\n\n        serverLog(LL_VERBOSE,\n            \"Background AOF rewrite signal handler took %lldus\", ustime()-now);\n    } else if (!bysignal && exitcode != 0) {\n        server.aof_lastbgrewrite_status = C_ERR;\n        server.stat_aofrw_consecutive_failures++;\n\n        serverLog(LL_WARNING,\n            \"Background AOF rewrite terminated with error\");\n    } else {\n        /* SIGUSR1 is whitelisted, so we have a way to kill a child without\n         * triggering an error condition. */\n        if (bysignal != SIGUSR1) {\n            server.aof_lastbgrewrite_status = C_ERR;\n            server.stat_aofrw_consecutive_failures++;\n        }\n\n        serverLog(LL_WARNING,\n            \"Background AOF rewrite terminated by signal %d\", bysignal);\n    }\n\ncleanup:\n    aofRemoveTempFile(server.child_pid);\n    /* Clear AOF buffer and delete temp incr aof for next rewrite. */\n    if (server.aof_state == AOF_WAIT_REWRITE) {\n        sdsfree(server.aof_buf);\n        server.aof_buf = sdsempty();\n        aofDelTempIncrAofFile();\n    }\n    server.aof_rewrite_time_last = time(NULL)-server.aof_rewrite_time_start;\n    server.aof_rewrite_time_start = -1;\n    /* Schedule a new rewrite if we are waiting for it to switch the AOF ON. */\n    if (server.aof_state == AOF_WAIT_REWRITE)\n        server.aof_rewrite_scheduled = 1;\n}\n"
  },
  {
    "path": "src/asciilogo.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\nconst char *ascii_logo =\n\"                _._                                                  \\n\"\n\"           _.-``__ ''-._                                             \\n\"\n\"      _.-``    `.  `_.  ''-._           Redis Open Source            \\n\"\n\"  .-`` .-```.  ```\\\\/    _.,_ ''-._      %s (%s/%d) %s bit\\n\"\n\" (    '      ,       .-`  | `,    )     Running in %s mode\\n\"\n\" |`-._`-...-` __...-.``-._|'` _.-'|     Port: %d\\n\"\n\" |    `-._   `._    /     _.-'    |     PID: %ld\\n\"\n\"  `-._    `-._  `-./  _.-'    _.-'                                   \\n\"\n\" |`-._`-._    `-.__.-'    _.-'_.-'|                                  \\n\"\n\" |    `-._`-._        _.-'_.-'    |           https://redis.io       \\n\"\n\"  `-._    `-._`-.__.-'_.-'    _.-'                                   \\n\"\n\" |`-._`-._    `-.__.-'    _.-'_.-'|                                  \\n\"\n\" |    `-._`-._        _.-'_.-'    |                                  \\n\"\n\"  `-._    `-._`-.__.-'_.-'    _.-'                                   \\n\"\n\"      `-._    `-.__.-'    _.-'                                       \\n\"\n\"          `-._        _.-'                                           \\n\"\n\"              `-.__.-'                                               \\n\\n\";\n"
  },
  {
    "path": "src/atomicvar.h",
"content": "/* This file implements atomic counters using c11 _Atomic, __atomic or __sync\n * macros if available; otherwise an error is thrown at compile time.\n *\n * The exported interface is composed of the following macros:\n *\n * atomicIncr(var,count) -- Increment the atomic counter\n * atomicGetIncr(var,oldvalue_var,count) -- Get and increment the atomic counter\n * atomicIncrGet(var,newvalue_var,count) -- Increment and get the atomic counter new value\n * atomicDecr(var,count) -- Decrement the atomic counter\n * atomicGet(var,dstvar) -- Fetch the atomic counter value\n * atomicSet(var,value)  -- Set the atomic counter value\n * atomicGetWithSync(var,value)  -- 'atomicGet' with inter-thread synchronization\n * atomicSetWithSync(var,value)  -- 'atomicSet' with inter-thread synchronization\n * atomicCompareExchange(type,var,expected_var,desired)  --  Compare and exchange (CAS) operation\n * \n * Atomic operations on flags. \n * Flag type can be int, long, long long or their unsigned counterparts.\n * The value of the flag can be 1 or 0.\n * \n * atomicFlagGetSet(var,oldvalue_var) -- Get and set the atomic flag value\n * \n * NOTE1: __atomic* and _Atomic implementations could actually be extended to support any value by changing the \n * hardcoded new value passed to __atomic_exchange* from 1 to @param count,\n * i.e. oldvalue_var = atomic_exchange_explicit(&var, count).\n * However, in order to be compatible with the __sync functions family, we can use only 0 and 1.\n * The only exchange alternative offered by __sync is __sync_lock_test_and_set,\n * but as described by the GNU manual for __sync_lock_test_and_set():\n * https://gcc.gnu.org/onlinedocs/gcc/_005f_005fsync-Builtins.html\n * \"A target may support reduced functionality here by which the only valid value to store is the immediate constant 1. 
The exact value\n * actually stored in *ptr is implementation defined.\"\n * Hence, we can't rely on it for any value other than 1.\n * We eventually chose to implement this method with __sync_val_compare_and_swap since it satisfies the functionality needed for atomicFlagGetSet\n * (if the flag was 0 -> set to 1, if it's already 1 -> do nothing, but the final result is that the flag is set), \n * and also it has a full barrier (__sync_lock_test_and_set has an acquire barrier).\n * \n * NOTE2: Unlike other atomic types, which aren't guaranteed to be lock free, c11 atomic_flag is.\n * To check whether a type is lock free, atomic_is_lock_free() can be used. \n * One may consider limiting the flag type to atomic_flag to improve performance.\n * \n * Never use the return value from the macros; instead use atomicGetIncr()\n * if you need to get the current value and increment it atomically, like\n * in the following example:\n *\n *  long oldvalue;\n *  atomicGetIncr(myvar,oldvalue,1);\n *  doSomethingWith(oldvalue);\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2015-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <pthread.h>\n#include \"config.h\"\n\n#ifndef __ATOMIC_VAR_H\n#define __ATOMIC_VAR_H\n\n/* Define redisAtomic for atomic variable. */\n#define redisAtomic\n\n/* To test Redis with Helgrind (a Valgrind tool) it is useful to define\n * the following macro, so that __sync macros are used: those can be detected\n * by Helgrind (even if they are less efficient) so that no false positive\n * is reported. 
*/\n// #define __ATOMIC_VAR_FORCE_SYNC_MACROS\n\n/* There will be many false positives if we test Redis with Helgrind, since\n * Helgrind can't understand we have imposed ordering on the program, so\n * we use macros in helgrind.h to tell Helgrind inter-thread happens-before\n * relationship explicitly for avoiding false positives.\n *\n * For more details, please see: valgrind/helgrind.h and\n * https://www.valgrind.org/docs/manual/hg-manual.html#hg-manual.effective-use\n *\n * These macros take effect only when 'make helgrind', and you must first\n * install Valgrind in the default path configuration. */\n#ifdef __ATOMIC_VAR_FORCE_SYNC_MACROS\n#include <valgrind/helgrind.h>\n#else\n#define ANNOTATE_HAPPENS_BEFORE(v) ((void) v)\n#define ANNOTATE_HAPPENS_AFTER(v)  ((void) v)\n#endif\n\n#if !defined(__ATOMIC_VAR_FORCE_SYNC_MACROS) && defined(__STDC_VERSION__) && \\\n    (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)\n/* Use '_Atomic' keyword if the compiler supports. */\n#undef  redisAtomic\n#define redisAtomic _Atomic\n/* Implementation using _Atomic in C11. 
*/\n\n#include <stdatomic.h>\n#define atomicIncr(var,count) atomic_fetch_add_explicit(&var,(count),memory_order_relaxed)\n#define atomicGetIncr(var,oldvalue_var,count) do { \\\n    oldvalue_var = atomic_fetch_add_explicit(&var,(count),memory_order_relaxed); \\\n} while(0)\n#define atomicIncrGet(var, newvalue_var, count) \\\n    newvalue_var = atomicIncr(var,count) + count\n#define atomicDecr(var,count) atomic_fetch_sub_explicit(&var,(count),memory_order_relaxed)\n#define atomicGet(var,dstvar) do { \\\n    dstvar = atomic_load_explicit(&var,memory_order_relaxed); \\\n} while(0)\n#define atomicSet(var,value) atomic_store_explicit(&var,value,memory_order_relaxed)\n#define atomicGetWithSync(var,dstvar) do { \\\n    dstvar = atomic_load_explicit(&var,memory_order_seq_cst); \\\n} while(0)\n#define atomicSetWithSync(var,value) \\\n    atomic_store_explicit(&var,value,memory_order_seq_cst)\n#define atomicCompareExchange(type,var,expected_var,desired) \\\n    atomic_compare_exchange_weak_explicit(&var,&expected_var,desired,memory_order_relaxed,memory_order_relaxed)\n#define atomicFlagGetSet(var,oldvalue_var) \\\n    oldvalue_var = atomic_exchange_explicit(&var,1,memory_order_relaxed)\n#define REDIS_ATOMIC_API \"c11-builtin\"\n\n#elif !defined(__ATOMIC_VAR_FORCE_SYNC_MACROS) && \\\n    (!defined(__clang__) || !defined(__APPLE__) || __apple_build_version__ > 4210057) && \\\n    defined(__ATOMIC_RELAXED) && defined(__ATOMIC_SEQ_CST)\n/* Implementation using __atomic macros. 
*/\n\n#define atomicIncr(var,count) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)\n#define atomicIncrGet(var, newvalue_var, count) \\\n    newvalue_var = __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)\n#define atomicGetIncr(var,oldvalue_var,count) do { \\\n    oldvalue_var = __atomic_fetch_add(&var,(count),__ATOMIC_RELAXED); \\\n} while(0)\n#define atomicDecr(var,count) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)\n#define atomicGet(var,dstvar) do { \\\n    dstvar = __atomic_load_n(&var,__ATOMIC_RELAXED); \\\n} while(0)\n#define atomicSet(var,value) __atomic_store_n(&var,value,__ATOMIC_RELAXED)\n#define atomicGetWithSync(var,dstvar) do { \\\n    dstvar = __atomic_load_n(&var,__ATOMIC_SEQ_CST); \\\n} while(0)\n#define atomicSetWithSync(var,value) \\\n    __atomic_store_n(&var,value,__ATOMIC_SEQ_CST)\n#define atomicCompareExchange(type,var,expected_var,desired) \\\n    __atomic_compare_exchange_n(&var,&expected_var,desired,1,__ATOMIC_RELAXED,__ATOMIC_RELAXED)\n#define atomicFlagGetSet(var,oldvalue_var) \\\n    oldvalue_var = __atomic_exchange_n(&var,1,__ATOMIC_RELAXED)\n#define REDIS_ATOMIC_API \"atomic-builtin\"\n\n#elif defined(HAVE_ATOMIC)\n/* Implementation using __sync macros. */\n\n#define atomicIncr(var,count) __sync_add_and_fetch(&var,(count))\n#define atomicIncrGet(var, newvalue_var, count) \\\n    newvalue_var = __sync_add_and_fetch(&var,(count))\n#define atomicGetIncr(var,oldvalue_var,count) do { \\\n    oldvalue_var = __sync_fetch_and_add(&var,(count)); \\\n} while(0)\n#define atomicDecr(var,count) __sync_sub_and_fetch(&var,(count))\n#define atomicGet(var,dstvar) do { \\\n    dstvar = __sync_sub_and_fetch(&var,0); \\\n} while(0)\n#define atomicSet(var,value) do { \\\n    while(!__sync_bool_compare_and_swap(&var,var,value)); \\\n} while(0)\n/* Actually the builtin issues a full memory barrier by default. 
*/\n#define atomicGetWithSync(var,dstvar) do { \\\n    dstvar = __sync_sub_and_fetch(&var,0,__sync_synchronize); \\\n    ANNOTATE_HAPPENS_AFTER(&var); \\\n} while(0)\n#define atomicSetWithSync(var,value) do { \\\n    ANNOTATE_HAPPENS_BEFORE(&var);  \\\n    while(!__sync_bool_compare_and_swap(&var,var,value,__sync_synchronize)); \\\n} while(0)\n#define atomicCompareExchange(type,var,expected_var,desired) ({ \\\n    type _old = __sync_val_compare_and_swap(&var,expected_var,desired); \\\n    int _success = (_old == expected_var); \\\n    if (!_success) expected_var = _old; \\\n    _success; \\\n})\n#define atomicFlagGetSet(var,oldvalue_var) \\\n    oldvalue_var = __sync_val_compare_and_swap(&var,0,1)\n#define REDIS_ATOMIC_API \"sync-builtin\"\n\n#else\n#error \"Unable to determine atomic operations for your platform\"\n\n#endif\n#endif /* __ATOMIC_VAR_H */\n"
  },
  {
    "path": "src/bio.c",
"content": "/* Background I/O service for Redis.\n *\n * This file implements operations that we need to perform in the background.\n * Currently there are 3 operations:\n * 1) a background close(2) system call. This is needed because when the process\n *    is the last owner of a reference to a file, closing it means unlinking it,\n *    and the deletion of the file is slow, blocking the server.\n * 2) AOF fsync\n * 3) lazyfree of memory\n *\n * In the future we'll either continue implementing new things we need or\n * we'll switch to libeio. However, there are probably long-term uses for this\n * file as we may want to put Redis-specific background tasks here.\n *\n * DESIGN\n * ------\n *\n * The design is simple: We have a structure representing a job to perform,\n * and several worker threads and job queues. Every job type is assigned to\n * a specific worker thread, and a single worker may handle several different\n * job types.\n * Every thread waits for new jobs in its queue, and processes every job\n * sequentially.\n *\n * Jobs handled by the same worker are guaranteed to be processed from the\n * least-recently-inserted to the most-recently-inserted (older jobs processed\n * first).\n *\n * To let the creator of a job be notified about the completion of the\n * operation, it needs to submit an additional dummy job, called a\n * completion job request, that will eventually be written back by the\n * background thread into a completion job response queue. This notification\n * layout can simplify flows that might submit more than one job, such as\n * FLUSHALL, which submits multiple jobs for a single command. 
It\n * is also correct because jobs are processed in FIFO fashion.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"bio.h\"\n#include <fcntl.h>\n\nstatic char* bio_worker_title[] = {\n    \"bio_close_file\",\n    \"bio_aof\",\n    \"bio_lazy_free\",\n};\n\n#define BIO_WORKER_NUM (sizeof(bio_worker_title) / sizeof(*bio_worker_title))\n\nstatic unsigned int bio_job_to_worker[] = {\n    [BIO_CLOSE_FILE] = 0,\n    [BIO_AOF_FSYNC] = 1,\n    [BIO_CLOSE_AOF] = 1,\n    [BIO_LAZY_FREE] = 2,\n    [BIO_COMP_RQ_CLOSE_FILE] = 0,\n    [BIO_COMP_RQ_AOF_FSYNC]  = 1,\n    [BIO_COMP_RQ_LAZY_FREE]  = 2\n};\n\nstatic pthread_t bio_threads[BIO_WORKER_NUM];\nstatic pthread_mutex_t bio_mutex[BIO_WORKER_NUM];\nstatic pthread_cond_t bio_newjob_cond[BIO_WORKER_NUM];\nstatic list *bio_jobs[BIO_WORKER_NUM];\nstatic unsigned long bio_jobs_counter[BIO_NUM_OPS] = {0};\n\n/* The bio_comp_list is used to hold completion job responses and to handover\n * to main thread to callback as notification for job completion. Main\n * thread will be triggered to read the list by signaling via writing to a pipe */\nstatic list *bio_comp_list;\nstatic pthread_mutex_t bio_mutex_comp;\nstatic int job_comp_pipe[2];   /* Pipe used to awake the event loop */\n\ntypedef struct bio_comp_item {\n    comp_fn *func;    /* callback after completion job will be processed  */\n    uint64_t arg;     /* user data to be passed to the function */\n    void *ptr;        /* user pointer to be passed to the function */\n} bio_comp_item;\n\n/* This structure represents a background Job. It is only used locally to this\n * file as the API does not expose the internals at all. 
*/\ntypedef union bio_job {\n    struct {\n        int type; /* Job-type tag. This needs to appear as the first element in all union members. */\n    } header;\n\n    /* Job specific arguments.*/\n    struct {\n        int type;\n        int fd; /* Fd for file based background jobs */\n        long long offset; /* A job-specific offset, if applicable */\n        unsigned need_fsync:1; /* A flag to indicate that a fsync is required before\n                                * the file is closed. */\n        unsigned need_reclaim_cache:1; /* A flag to indicate that reclaim cache is required before\n                                * the file is closed. */\n    } fd_args;\n\n    struct {\n        int type;\n        lazy_free_fn *free_fn; /* Function that will free the provided arguments */\n        void *free_args[]; /* List of arguments to be passed to the free function */\n    } free_args;\n    struct {\n        int type; /* header */\n        comp_fn *fn; /* callback. Handover to main thread to cb as notify for job completion */\n        uint64_t arg; /* callback arguments */\n        void *ptr; /* callback pointer */\n    } comp_rq;\n} bio_job;\n\nvoid *bioProcessBackgroundJobs(void *arg);\nvoid bioPipeReadJobCompList(aeEventLoop *el, int fd, void *privdata, int mask);\n\n/* Make sure we have enough stack to perform all the things we do in the\n * main thread. */\n#define REDIS_THREAD_STACK_SIZE (1024*1024*4)\n\n/* Initialize the background system, spawning the thread. 
*/\nvoid bioInit(void) {\n    pthread_attr_t attr;\n    pthread_t thread;\n    size_t stacksize;\n    unsigned long j;\n\n    /* Initialization of state vars and objects */\n    for (j = 0; j < BIO_WORKER_NUM; j++) {\n        pthread_mutex_init(&bio_mutex[j],NULL);\n        pthread_cond_init(&bio_newjob_cond[j],NULL);\n        bio_jobs[j] = listCreate();\n    }\n\n    /* init jobs comp responses */\n    bio_comp_list = listCreate();\n    pthread_mutex_init(&bio_mutex_comp, NULL);\n\n    /* Create a pipe for the background threads to be able to wake up the redis main thread.\n     * Make the pipe non-blocking. This is just a best-effort mechanism\n     * and we do not want to block on either the read or the write half.\n     * Enable the close-on-exec flag on the pipes in case of fork-exec system calls in\n     * sentinels or redis servers. */\n    if (anetPipe(job_comp_pipe, O_CLOEXEC|O_NONBLOCK, O_CLOEXEC|O_NONBLOCK) == -1) {\n        serverLog(LL_WARNING,\n                  \"Can't create the pipe for bio thread: %s\", strerror(errno));\n        exit(1);\n    }\n\n    /* Register a readable event for the pipe used to wake the event loop on job completion */\n    if (aeCreateFileEvent(server.el, job_comp_pipe[0], AE_READABLE,\n                          bioPipeReadJobCompList, NULL) == AE_ERR) {\n        serverPanic(\"Error registering the readable event for the bio pipe.\");\n    }\n\n    /* Set the stack size, as by default it may be small on some systems */\n    pthread_attr_init(&attr);\n    pthread_attr_getstacksize(&attr,&stacksize);\n    if (!stacksize) stacksize = 1; /* The world is full of Solaris Fixes */\n    while (stacksize < REDIS_THREAD_STACK_SIZE) stacksize *= 2;\n    pthread_attr_setstacksize(&attr, stacksize);\n\n    /* Ready to spawn our threads. We use the single argument the thread\n     * function accepts in order to pass the job ID the thread is\n     * responsible for. 
*/\n    for (j = 0; j < BIO_WORKER_NUM; j++) {\n        int err = pthread_create(&thread,&attr,bioProcessBackgroundJobs, (void*) j);\n        if (err) {\n            serverLog(LL_WARNING, \"Fatal: Can't initialize Background Jobs. Error message: %s\", strerror(err));\n            exit(1);\n        }\n        bio_threads[j] = thread;\n    }\n}\n\nvoid bioSubmitJob(int type, bio_job *job) {\n    job->header.type = type;\n    unsigned long worker = bio_job_to_worker[type];\n    pthread_mutex_lock(&bio_mutex[worker]);\n    listAddNodeTail(bio_jobs[worker],job);\n    bio_jobs_counter[type]++;\n    pthread_cond_signal(&bio_newjob_cond[worker]);\n    pthread_mutex_unlock(&bio_mutex[worker]);\n}\n\nvoid bioCreateLazyFreeJob(lazy_free_fn free_fn, int arg_count, ...) {\n    va_list valist;\n    /* Allocate memory for the job structure and all required\n     * arguments */\n    bio_job *job = zmalloc(sizeof(*job) + sizeof(void *) * (arg_count));\n    job->free_args.free_fn = free_fn;\n\n    va_start(valist, arg_count);\n    for (int i = 0; i < arg_count; i++) {\n        job->free_args.free_args[i] = va_arg(valist, void *);\n    }\n    va_end(valist);\n    bioSubmitJob(BIO_LAZY_FREE, job);\n}\n\nvoid bioCreateCompRq(bio_worker_t assigned_worker, comp_fn *func, uint64_t user_data, void *user_ptr) {\n    int type;\n    switch (assigned_worker) {\n        case BIO_WORKER_CLOSE_FILE:\n            type = BIO_COMP_RQ_CLOSE_FILE;\n            break;\n        case BIO_WORKER_AOF_FSYNC:\n            type = BIO_COMP_RQ_AOF_FSYNC;\n            break;\n        case BIO_WORKER_LAZY_FREE:\n            type = BIO_COMP_RQ_LAZY_FREE;\n            break;\n        default:\n            serverPanic(\"Invalid worker type in bioCreateCompRq().\");\n    }\n\n    bio_job *job = zmalloc(sizeof(*job));\n    job->comp_rq.fn = func;\n    job->comp_rq.arg = user_data;\n    job->comp_rq.ptr = user_ptr;\n    bioSubmitJob(type, job);\n}\n\nvoid bioCreateCloseJob(int fd, int need_fsync, int 
need_reclaim_cache) {\n    bio_job *job = zmalloc(sizeof(*job));\n    job->fd_args.fd = fd;\n    job->fd_args.need_fsync = need_fsync;\n    job->fd_args.need_reclaim_cache = need_reclaim_cache;\n\n    bioSubmitJob(BIO_CLOSE_FILE, job);\n}\n\nvoid bioCreateCloseAofJob(int fd, long long offset, int need_reclaim_cache) {\n    bio_job *job = zmalloc(sizeof(*job));\n    job->fd_args.fd = fd;\n    job->fd_args.offset = offset;\n    job->fd_args.need_fsync = 1;\n    job->fd_args.need_reclaim_cache = need_reclaim_cache;\n\n    bioSubmitJob(BIO_CLOSE_AOF, job);\n}\n\nvoid bioCreateFsyncJob(int fd, long long offset, int need_reclaim_cache) {\n    bio_job *job = zmalloc(sizeof(*job));\n    job->fd_args.fd = fd;\n    job->fd_args.offset = offset;\n    job->fd_args.need_reclaim_cache = need_reclaim_cache;\n\n    bioSubmitJob(BIO_AOF_FSYNC, job);\n}\n\nvoid *bioProcessBackgroundJobs(void *arg) {\n    bio_job *job;\n    unsigned long worker = (unsigned long) arg;\n    sigset_t sigset;\n\n    /* Check that the worker is within the right interval. */\n    serverAssert(worker < BIO_WORKER_NUM);\n\n    redis_set_thread_title(bio_worker_title[worker]);\n\n    redisSetCpuAffinity(server.bio_cpulist);\n\n    makeThreadKillable();\n\n    pthread_mutex_lock(&bio_mutex[worker]);\n    /* Block SIGALRM so we are sure that only the main thread will\n     * receive the watchdog signal. */\n    sigemptyset(&sigset);\n    sigaddset(&sigset, SIGALRM);\n    int err = pthread_sigmask(SIG_BLOCK, &sigset, NULL);\n    if (err)\n        serverLog(LL_WARNING,\n            \"Warning: can't mask SIGALRM in bio.c thread: %s\", strerror(err));\n\n    while(1) {\n        listNode *ln;\n\n        /* The loop always starts with the lock held. */\n        if (listLength(bio_jobs[worker]) == 0) {\n            pthread_cond_wait(&bio_newjob_cond[worker], &bio_mutex[worker]);\n            continue;\n        }\n        /* Get the job from the queue. 
*/\n        ln = listFirst(bio_jobs[worker]);\n        job = ln->value;\n        /* It is now possible to unlock the background system as we now have\n         * a stand-alone job structure to process. */\n        pthread_mutex_unlock(&bio_mutex[worker]);\n\n        /* Process the job according to its type. */\n        int job_type = job->header.type;\n\n        if (job_type == BIO_CLOSE_FILE) {\n            if (job->fd_args.need_fsync &&\n                redis_fsync(job->fd_args.fd) == -1 &&\n                errno != EBADF && errno != EINVAL)\n            {\n                serverLog(LL_WARNING, \"Fail to fsync the AOF file: %s\",strerror(errno));\n            }\n            if (job->fd_args.need_reclaim_cache) {\n                if (reclaimFilePageCache(job->fd_args.fd, 0, 0) == -1) {\n                    serverLog(LL_NOTICE,\"Unable to reclaim page cache: %s\", strerror(errno));\n                }\n            }\n            close(job->fd_args.fd);\n        } else if (job_type == BIO_AOF_FSYNC || job_type == BIO_CLOSE_AOF) {\n            /* The fd may be closed by the main thread and reused for another\n             * socket, pipe, or file. We just ignore these errno values because\n             * the aof fsync did not really fail. 
*/\n            if (redis_fsync(job->fd_args.fd) == -1 &&\n                errno != EBADF && errno != EINVAL)\n            {\n                int last_status;\n                atomicGet(server.aof_bio_fsync_status,last_status);\n                atomicSet(server.aof_bio_fsync_status,C_ERR);\n                atomicSet(server.aof_bio_fsync_errno,errno);\n                if (last_status == C_OK) {\n                    serverLog(LL_WARNING,\n                        \"Fail to fsync the AOF file: %s\",strerror(errno));\n                }\n            } else {\n                atomicSet(server.aof_bio_fsync_status,C_OK);\n                atomicSet(server.fsynced_reploff_pending, job->fd_args.offset);\n            }\n\n            if (job->fd_args.need_reclaim_cache) {\n                if (reclaimFilePageCache(job->fd_args.fd, 0, 0) == -1) {\n                    serverLog(LL_NOTICE,\"Unable to reclaim page cache: %s\", strerror(errno));\n                }\n            }\n            if (job_type == BIO_CLOSE_AOF)\n                close(job->fd_args.fd);\n        } else if (job_type == BIO_LAZY_FREE) {\n            job->free_args.free_fn(job->free_args.free_args);\n        } else if ((job_type == BIO_COMP_RQ_CLOSE_FILE) ||\n                   (job_type == BIO_COMP_RQ_AOF_FSYNC) ||\n                   (job_type == BIO_COMP_RQ_LAZY_FREE)) {\n            bio_comp_item *comp_rsp = zmalloc(sizeof(bio_comp_item));\n            comp_rsp->func = job->comp_rq.fn;\n            comp_rsp->arg = job->comp_rq.arg;\n            comp_rsp->ptr = job->comp_rq.ptr;\n\n            /* just write it to completion job responses */\n            pthread_mutex_lock(&bio_mutex_comp);\n            listAddNodeTail(bio_comp_list, comp_rsp);\n            pthread_mutex_unlock(&bio_mutex_comp);\n\n            if (write(job_comp_pipe[1],\"A\",1) != 1) {\n                /* Pipe is non-blocking, write() may fail if it's full. 
*/\n            }\n        } else {\n            serverPanic(\"Wrong job type in bioProcessBackgroundJobs().\");\n        }\n        zfree(job);\n\n        /* Lock again before reiterating the loop, if there are no longer\n         * jobs to process we'll block again in pthread_cond_wait(). */\n        pthread_mutex_lock(&bio_mutex[worker]);\n        listDelNode(bio_jobs[worker], ln);\n        bio_jobs_counter[job_type]--;\n        pthread_cond_signal(&bio_newjob_cond[worker]);\n    }\n}\n\n/* Return the number of pending jobs of the specified type. */\nunsigned long bioPendingJobsOfType(int type) {\n    unsigned int worker = bio_job_to_worker[type];\n\n    pthread_mutex_lock(&bio_mutex[worker]);\n    unsigned long val = bio_jobs_counter[type];\n    pthread_mutex_unlock(&bio_mutex[worker]);\n\n    return val;\n}\n\n/* Wait for the job queue of the worker for jobs of specified type to become empty. */\nvoid bioDrainWorker(int job_type) {\n    unsigned long worker = bio_job_to_worker[job_type];\n\n    pthread_mutex_lock(&bio_mutex[worker]);\n    while (listLength(bio_jobs[worker]) > 0) {\n        pthread_cond_wait(&bio_newjob_cond[worker], &bio_mutex[worker]);\n    }\n    pthread_mutex_unlock(&bio_mutex[worker]);\n}\n\n/* Kill the running bio threads in an unclean way. This function should be\n * used only when it's critical to stop the threads for some reason.\n * Currently Redis does this only on crash (for instance on SIGSEGV) in order\n * to perform a fast memory check without other threads messing with memory. 
*/\nvoid bioKillThreads(void) {\n    int err;\n    unsigned long j;\n\n    for (j = 0; j < BIO_WORKER_NUM; j++) {\n        if (bio_threads[j] == pthread_self()) continue;\n        if (bio_threads[j] && pthread_cancel(bio_threads[j]) == 0) {\n            if ((err = pthread_join(bio_threads[j],NULL)) != 0) {\n                serverLog(LL_WARNING,\n                    \"Bio worker thread #%lu can not be joined: %s\",\n                        j, strerror(err));\n            } else {\n                serverLog(LL_WARNING,\n                    \"Bio worker thread #%lu terminated\",j);\n            }\n        }\n    }\n}\n\nvoid bioPipeReadJobCompList(aeEventLoop *el, int fd, void *privdata, int mask) {\n    UNUSED(el);\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    char buf[128];\n    list *tmp_list = NULL;\n\n    while (read(fd, buf, sizeof(buf)) == sizeof(buf));\n\n    /* Handle event loop events if pipe was written from event loop API */\n    pthread_mutex_lock(&bio_mutex_comp);\n    if (listLength(bio_comp_list)) {\n        tmp_list = bio_comp_list;\n        bio_comp_list = listCreate();\n    }\n    pthread_mutex_unlock(&bio_mutex_comp);\n\n    if (!tmp_list) return;\n\n    /* callback to all job completions  */\n    while (listLength(tmp_list)) {\n        listNode *ln = listFirst(tmp_list);\n        bio_comp_item *rsp = ln->value;\n        listDelNode(tmp_list, ln);\n        rsp->func(rsp->arg, rsp->ptr);\n        zfree(rsp);\n    }\n    listRelease(tmp_list);\n}\n"
  },
  {
    "path": "src/bio.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __BIO_H\n#define __BIO_H\n\ntypedef void lazy_free_fn(void *args[]);\ntypedef void comp_fn(uint64_t user_data, void *user_ptr);\n\ntypedef enum bio_worker_t {\n    BIO_WORKER_CLOSE_FILE = 0,\n    BIO_WORKER_AOF_FSYNC,\n    BIO_WORKER_LAZY_FREE,\n    BIO_WORKER_NUM\n} bio_worker_t;\n\n/* Background job opcodes */\ntypedef enum bio_job_type_t {\n    BIO_CLOSE_FILE = 0,     /* Deferred close(2) syscall. */\n    BIO_AOF_FSYNC,          /* Deferred AOF fsync. */\n    BIO_LAZY_FREE,          /* Deferred objects freeing. */\n    BIO_CLOSE_AOF,\n    BIO_COMP_RQ_CLOSE_FILE,  /* Job completion request, registered on close-file worker's queue */\n    BIO_COMP_RQ_AOF_FSYNC,  /* Job completion request, registered on aof-fsync worker's queue */\n    BIO_COMP_RQ_LAZY_FREE,  /* Job completion request, registered on lazy-free worker's queue */\n    BIO_NUM_OPS\n} bio_job_type_t;\n\n/* Exported API */\nvoid bioInit(void);\nunsigned long bioPendingJobsOfType(int type);\nvoid bioDrainWorker(int job_type);\nvoid bioKillThreads(void);\nvoid bioCreateCloseJob(int fd, int need_fsync, int need_reclaim_cache);\nvoid bioCreateCloseAofJob(int fd, long long offset, int need_reclaim_cache);\nvoid bioCreateFsyncJob(int fd, long long offset, int need_reclaim_cache);\nvoid bioCreateLazyFreeJob(lazy_free_fn free_fn, int arg_count, ...);\nvoid bioCreateCompRq(bio_worker_t assigned_worker, comp_fn *func, uint64_t user_data, void *user_ptr);\n\n\n#endif\n"
  },
  {
    "path": "src/bitops.c",
    "content": "/* Bit operations.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"ctype.h\"\n\n#ifdef HAVE_AVX2\n/* Define __MM_MALLOC_H to prevent importing the memory aligned\n * allocation functions, which we don't use. */\n#define __MM_MALLOC_H\n#include <immintrin.h>\n#endif\n\n#ifdef HAVE_AVX512\n/* Define __MM_MALLOC_H to prevent importing the memory aligned\n * allocation functions, which we don't use. */\n#define __MM_MALLOC_H\n#include <immintrin.h>\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n#include <arm_neon.h>\n#endif\n\n#ifdef HAVE_AVX2\n#define BITOP_USE_AVX2 (__builtin_cpu_supports(\"avx2\"))\n#else\n#define BITOP_USE_AVX2 0\n#endif\n\n/* AArch64 NEON support is determined at compile time via HAVE_AARCH64_NEON */\n#ifdef HAVE_AVX512\n#define BITOP_USE_AVX512 (__builtin_cpu_supports(\"avx512f\"))\n#define BITOPS_USE_AVX512_POPCOUNT  (__builtin_cpu_supports(\"avx512f\") && __builtin_cpu_supports(\"avx512vpopcntdq\"))\n#else\n#define BITOP_USE_AVX512 0\n#define BITOPS_USE_AVX512_POPCOUNT  0\n#endif\n\n\n/* -----------------------------------------------------------------------------\n * Helpers and low level bit functions.\n * -------------------------------------------------------------------------- */\n\n /* Shared lookup table for bit counting - maps each byte value to its popcount */\nstatic const uint8_t bitsinbyte[256] = {\n    #define B2(n) n, n+1, n+1, n+2\n    #define B4(n) B2(n), B2(n+1), B2(n+1), B2(n+2)\n    #define B6(n) B4(n), B4(n+1), B4(n+1), B4(n+2)\n    B6(0), B6(1), B6(1), B6(2)\n    #undef B6\n    #undef B4\n    #undef B2\n};\n\n/* Count number of bits set in the binary array pointed by 's' and long\n * 'count' bytes. 
The implementation of this function is required to\n * work with an input string length up to 512 MB or more (server.proto_max_bulk_len) */\nATTRIBUTE_TARGET_POPCNT\nlong long redisPopcount(void *s, long count) {\n    long long bits = 0;\n    unsigned char *p = s;\n    uint32_t *p4;\n#if defined(HAVE_POPCNT)\n    int use_popcnt = __builtin_cpu_supports(\"popcnt\"); /* Check if CPU supports POPCNT instruction. */\n#else\n    int use_popcnt = 0; /* Assume CPU does not support POPCNT if\n                         * __builtin_cpu_supports() is not available. */\n#endif\n    /* Count initial bytes not aligned to 64-bit when using the POPCNT instruction,\n     * otherwise align to 32-bit. */\n    int align = use_popcnt ? 7 : 3;\n    while ((unsigned long)p & align && count) {\n        bits += bitsinbyte[*p++];\n        count--;\n    }\n\n    if (likely(use_popcnt)) {\n        /* Use separate counters to make the CPU think there are no\n         * dependencies between these popcnt operations. */\n        uint64_t cnt[4];\n        memset(cnt, 0, sizeof(cnt));\n\n        /* Count bits 32 bytes at a time by using popcnt.\n         * Unroll the loop to avoid the overhead of a single popcnt per iteration,\n         * allowing the CPU to extract more instruction-level parallelism.\n         * Reference: https://danluu.com/assembly-intrinsics/ */\n        while (count >= 32) {\n            cnt[0] += __builtin_popcountll(*(uint64_t*)(p));\n            cnt[1] += __builtin_popcountll(*(uint64_t*)(p + 8));\n            cnt[2] += __builtin_popcountll(*(uint64_t*)(p + 16));\n            cnt[3] += __builtin_popcountll(*(uint64_t*)(p + 24));\n            count -= 32;\n            p += 32;\n            /* Prefetch with 2K stride is just enough to overlap L3 miss latency effectively\n             * without causing pressure on lower memory hierarchy or polluting L1/L2 */\n            redis_prefetch_read(p + 2048);\n        }\n        bits += cnt[0] + cnt[1] + cnt[2] + cnt[3];\n        goto 
remain;\n    }\n\n    /* Count bits 28 bytes at a time */\n    p4 = (uint32_t*)p;\n    while(count>=28) {\n        uint32_t aux1, aux2, aux3, aux4, aux5, aux6, aux7;\n\n        aux1 = *p4++;\n        aux2 = *p4++;\n        aux3 = *p4++;\n        aux4 = *p4++;\n        aux5 = *p4++;\n        aux6 = *p4++;\n        aux7 = *p4++;\n        count -= 28;\n\n        aux1 = aux1 - ((aux1 >> 1) & 0x55555555);\n        aux1 = (aux1 & 0x33333333) + ((aux1 >> 2) & 0x33333333);\n        aux2 = aux2 - ((aux2 >> 1) & 0x55555555);\n        aux2 = (aux2 & 0x33333333) + ((aux2 >> 2) & 0x33333333);\n        aux3 = aux3 - ((aux3 >> 1) & 0x55555555);\n        aux3 = (aux3 & 0x33333333) + ((aux3 >> 2) & 0x33333333);\n        aux4 = aux4 - ((aux4 >> 1) & 0x55555555);\n        aux4 = (aux4 & 0x33333333) + ((aux4 >> 2) & 0x33333333);\n        aux5 = aux5 - ((aux5 >> 1) & 0x55555555);\n        aux5 = (aux5 & 0x33333333) + ((aux5 >> 2) & 0x33333333);\n        aux6 = aux6 - ((aux6 >> 1) & 0x55555555);\n        aux6 = (aux6 & 0x33333333) + ((aux6 >> 2) & 0x33333333);\n        aux7 = aux7 - ((aux7 >> 1) & 0x55555555);\n        aux7 = (aux7 & 0x33333333) + ((aux7 >> 2) & 0x33333333);\n        bits += ((((aux1 + (aux1 >> 4)) & 0x0F0F0F0F) +\n                    ((aux2 + (aux2 >> 4)) & 0x0F0F0F0F) +\n                    ((aux3 + (aux3 >> 4)) & 0x0F0F0F0F) +\n                    ((aux4 + (aux4 >> 4)) & 0x0F0F0F0F) +\n                    ((aux5 + (aux5 >> 4)) & 0x0F0F0F0F) +\n                    ((aux6 + (aux6 >> 4)) & 0x0F0F0F0F) +\n                    ((aux7 + (aux7 >> 4)) & 0x0F0F0F0F))* 0x01010101) >> 24;\n    }\n    p = (unsigned char*)p4;\n\nremain:\n    /* Count the remaining bytes. 
*/\n    while(count--) bits += bitsinbyte[*p++];\n    return bits;\n}\n\n#ifdef HAVE_AARCH64_NEON\n/* AArch64 optimized popcount implementation.\n * Processes the input bitmap using four NEON vector accumulators in parallel\n * to improve instruction-level parallelism and reduce the frequency of\n * scalar reductions. Each accumulator holds 16-bit partial sums that are\n * combined only once per large block (128 bytes), minimizing data movement.\n *\n * Benchmark results show this approach outperforms 2-lane implementations\n * and matches or exceeds 8-lane versions in throughput, while avoiding\n * register pressure and keeping the backend pipeline fully utilized.\n *\n * This function is now memory bound on large bitmaps, as confirmed by perf\n * profiling, with backend stalls dominated by L1/L2 data cache refills.\n */\nlong long redisPopCountAarch64(void *s, long count) {\n    long long bits = 0;\n    const uint8_t *p = (const uint8_t*)s;\n\n    /* Align */\n    while (((uintptr_t)p & 15) && count) {\n        bits += bitsinbyte[*p++];\n        count--;\n    }\n\n    /* Four vector accumulators of u16 (pairwise-accumulated byte counts). */\n    uint16x8_t acc0 = vdupq_n_u16(0);\n    uint16x8_t acc1 = vdupq_n_u16(0);\n    uint16x8_t acc2 = vdupq_n_u16(0);\n    uint16x8_t acc3 = vdupq_n_u16(0);\n\n    /* Process 128B per loop to amortize reductions. 
*/\n    while (count >= 128) {\n        uint8x16_t d0 = vld1q_u8(p +  0);\n        uint8x16_t d1 = vld1q_u8(p + 16);\n        uint8x16_t d2 = vld1q_u8(p + 32);\n        uint8x16_t d3 = vld1q_u8(p + 48);\n        uint8x16_t d4 = vld1q_u8(p + 64);\n        uint8x16_t d5 = vld1q_u8(p + 80);\n        uint8x16_t d6 = vld1q_u8(p + 96);\n        uint8x16_t d7 = vld1q_u8(p +112);\n\n        /* Per-byte popcount */\n        uint8x16_t c0 = vcntq_u8(d0);\n        uint8x16_t c1 = vcntq_u8(d1);\n        uint8x16_t c2 = vcntq_u8(d2);\n        uint8x16_t c3 = vcntq_u8(d3);\n        uint8x16_t c4 = vcntq_u8(d4);\n        uint8x16_t c5 = vcntq_u8(d5);\n        uint8x16_t c6 = vcntq_u8(d6);\n        uint8x16_t c7 = vcntq_u8(d7);\n\n        /* Pairwise widen-add with accumulation: u8 -> u16, stay in vectors */\n        acc0 = vpadalq_u8(acc0, c0);\n        acc1 = vpadalq_u8(acc1, c1);\n        acc2 = vpadalq_u8(acc2, c2);\n        acc3 = vpadalq_u8(acc3, c3);\n\n        acc0 = vpadalq_u8(acc0, c4);\n        acc1 = vpadalq_u8(acc1, c5);\n        acc2 = vpadalq_u8(acc2, c6);\n        acc3 = vpadalq_u8(acc3, c7);\n\n        p += 128;\n        count -= 128;\n    }\n\n    /* Reduce vector accumulators to scalar once. 
*/\n    uint32x4_t s0 = vpaddlq_u16(acc0);\n    uint32x4_t s1 = vpaddlq_u16(acc1);\n    uint32x4_t s2 = vpaddlq_u16(acc2);\n    uint32x4_t s3 = vpaddlq_u16(acc3);\n    uint32x4_t s01 = vaddq_u32(s0, s1);\n    uint32x4_t s23 = vaddq_u32(s2, s3);\n    uint32x4_t st = vaddq_u32(s01, s23);\n    uint64x2_t s64 = vpaddlq_u32(st);\n    bits += (long long)(vgetq_lane_u64(s64, 0) + vgetq_lane_u64(s64, 1));\n\n    /* Remaining 64B blocks (keep vector domain) */\n    while (count >= 64) {\n        uint8x16_t d0 = vld1q_u8(p +  0);\n        uint8x16_t d1 = vld1q_u8(p + 16);\n        uint8x16_t d2 = vld1q_u8(p + 32);\n        uint8x16_t d3 = vld1q_u8(p + 48);\n\n        uint8x16_t c0 = vcntq_u8(d0);\n        uint8x16_t c1 = vcntq_u8(d1);\n        uint8x16_t c2 = vcntq_u8(d2);\n        uint8x16_t c3 = vcntq_u8(d3);\n\n        uint64x2_t t0 = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(c0)));\n        uint64x2_t t1 = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(c1)));\n        uint64x2_t t2 = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(c2)));\n        uint64x2_t t3 = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(c3)));\n\n        uint64x2_t s = vaddq_u64(vaddq_u64(t0, t1), vaddq_u64(t2, t3));\n        bits += (long long)(vgetq_lane_u64(s, 0) + vgetq_lane_u64(s, 1));\n\n        p += 64;\n        count -= 64;\n    }\n\n    /* 16B chunks */\n    while (count >= 16) {\n        uint8x16_t d = vld1q_u8(p);\n        uint64x2_t s = vpaddlq_u32(vpaddlq_u16(vpaddlq_u8(vcntq_u8(d))));\n        bits += (long long)(vgetq_lane_u64(s, 0) + vgetq_lane_u64(s, 1));\n        p += 16;\n        count -= 16;\n    }\n\n    /* Tail */\n    while (count--) bits += bitsinbyte[*p++];\n\n    return bits;\n}\n#endif\n\n#ifdef HAVE_AVX512\n/* AVX512 optimized version of redisPopcount using VPOPCNTDQ instruction.\n * This function requires AVX512F and AVX512VPOPCNTDQ support. 
*/\nATTRIBUTE_TARGET_AVX512_POPCOUNT\nlong long redisPopCountAvx512(void *s, long count) {\n    long long bits = 0;\n    unsigned char *p = s;\n\n    /* Align to 64-byte boundary for optimal AVX512 performance */\n    while ((unsigned long)p & 63 && count) {\n        bits += bitsinbyte[*p++];\n        count--;\n    }\n\n    /* Process 64 bytes at a time using AVX512 */\n    while (count >= 64) {\n        __m512i data = _mm512_loadu_si512((__m512i*)p);\n        __m512i popcnt = _mm512_popcnt_epi64(data);\n\n        /* Sum all 8 64-bit popcount results */\n        bits += _mm512_reduce_add_epi64(popcnt);\n\n        p += 64;\n        count -= 64;\n\n        /* Prefetch next cache line */\n        redis_prefetch_read(p + 2048);\n    }\n\n    /* Handle remaining bytes with scalar popcount */\n    while (count >= 8) {\n        bits += __builtin_popcountll(*(uint64_t*)p);\n        p += 8;\n        count -= 8;\n    }\n\n    /* Handle final bytes */\n    while (count--) {\n        bits += bitsinbyte[*p++];\n    }\n\n    return bits;\n}\n#endif\n\n#ifdef HAVE_AVX2\n/* AVX2 optimized version of redisPopcount.\n * This function requires AVX2 and POPCNT support. 
*/\nATTRIBUTE_TARGET_AVX2_POPCOUNT\nlong long redisPopCountAvx2(void *s, long count) {\n    long long bits = 0;\n    unsigned char *p = s;\n\n    /* Align to 8-byte boundary for 64-bit operations */\n    while ((unsigned long)p & 7 && count) {\n        bits += bitsinbyte[*p++];\n        count--;\n    }\n\n    /* Use separate counters to avoid dependencies, similar to regular redisPopcount */\n    uint64_t cnt[4];\n    memset(cnt, 0, sizeof(cnt));\n\n    /* Process 32 bytes at a time using POPCNT on 64-bit chunks */\n    while (count >= 32) {\n        cnt[0] += __builtin_popcountll(*(uint64_t*)(p));\n        cnt[1] += __builtin_popcountll(*(uint64_t*)(p + 8));\n        cnt[2] += __builtin_popcountll(*(uint64_t*)(p + 16));\n        cnt[3] += __builtin_popcountll(*(uint64_t*)(p + 24));\n\n        p += 32;\n        count -= 32;\n\n        /* Prefetch next cache line */\n        redis_prefetch_read(p + 2048);\n    }\n\n    bits += cnt[0] + cnt[1] + cnt[2] + cnt[3];\n\n    /* Handle remaining bytes with scalar popcount */\n    while (count >= 8) {\n        bits += __builtin_popcountll(*(uint64_t*)p);\n        p += 8;\n        count -= 8;\n    }\n\n    /* Handle final bytes */\n    while (count--) {\n        bits += bitsinbyte[*p++];\n    }\n\n    return bits;\n}\n#endif\n\n/* Automatically select the best available popcount implementation */\nstatic inline long long redisPopcountAuto(const unsigned char *p, long count) {\n#ifdef HAVE_AVX512\n    if (BITOPS_USE_AVX512_POPCOUNT) {\n        return redisPopCountAvx512((void*)p, count);\n    }\n#endif\n#ifdef HAVE_AVX2\n    if (BITOP_USE_AVX2) {\n        return redisPopCountAvx2((void*)p, count);\n    }\n#endif\n#ifdef HAVE_AARCH64_NEON\n    return redisPopCountAarch64((void*)p, count);\n#else\n    return redisPopcount((void*)p, count);\n#endif\n}\n\n/* Return the position of the first bit set to one (if 'bit' is 1) or\n * zero (if 'bit' is 0) in the bitmap starting at 's' and long 'count' bytes.\n *\n * The function is 
guaranteed to return a value >= 0 if 'bit' is 0 since if\n * no zero bit is found, it returns count*8 assuming the string is zero\n * padded on the right. However if 'bit' is 1 it is possible that there is\n * not a single set bit in the bitmap. In this special case -1 is returned. */\nlong long redisBitpos(void *s, unsigned long count, int bit) {\n    unsigned long *l;\n    unsigned char *c;\n    unsigned long skipval, word = 0, one;\n    long long pos = 0; /* Position of bit, to return to the caller. */\n    unsigned long j;\n    int found;\n\n    /* Process whole words first, seeking for first word that is not\n     * all ones or all zeros respectively if we are looking for zeros\n     * or ones. This is much faster with large strings having contiguous\n     * blocks of 1 or 0 bits compared to the vanilla bit per bit processing.\n     *\n     * Note that if we start from an address that is not aligned\n     * to sizeof(unsigned long) we consume it byte by byte until it is\n     * aligned. */\n\n    /* Skip initial bits not aligned to sizeof(unsigned long) byte by byte. */\n    skipval = bit ? 0 : UCHAR_MAX;\n    c = (unsigned char*) s;\n    found = 0;\n    while((unsigned long)c & (sizeof(*l)-1) && count) {\n        if (*c != skipval) {\n            found = 1;\n            break;\n        }\n        c++;\n        count--;\n        pos += 8;\n    }\n\n    /* Skip bits with full word step. */\n    l = (unsigned long*) c;\n    if (!found) {\n        skipval = bit ? 
0 : ULONG_MAX;\n        while (count >= sizeof(*l)) {\n            if (*l != skipval) break;\n            l++;\n            count -= sizeof(*l);\n            pos += sizeof(*l)*8;\n        }\n    }\n\n    /* Load bytes into \"word\" considering the first byte as the most significant\n     * (we basically consider it as written in big endian, since we consider the\n     * string as a set of bits from left to right, with the first bit at position\n     * zero).\n     *\n     * Note that the loading is designed to work even when the bytes left\n     * (count) are less than a full word. We pad it with zero on the right. */\n    c = (unsigned char*)l;\n    for (j = 0; j < sizeof(*l); j++) {\n        word <<= 8;\n        if (count) {\n            word |= *c;\n            c++;\n            count--;\n        }\n    }\n\n    /* Special case:\n     * If bits in the string are all zero and we are looking for one,\n     * return -1 to signal that there is not a single \"1\" in the whole\n     * string. This can't happen when we are looking for \"0\" as we assume\n     * that the right of the string is zero padded. */\n    if (bit == 1 && word == 0) return -1;\n\n    /* Last word left, scan bit by bit. The first thing we need is to\n     * have a single \"1\" set in the most significant position in an\n     * unsigned long. We don't know the size of the long so we use a\n     * simple trick. */\n    one = ULONG_MAX; /* All bits set to 1. */\n    one >>= 1;       /* All bits set to 1 but the MSB. */\n    one = ~one;      /* All bits set to 0 but the MSB. */\n\n    while(one) {\n        if (((one & word) != 0) == bit) return pos;\n        pos++;\n        one >>= 1;\n    }\n\n    /* If we reached this point, there is a bug in the algorithm, since\n     * the case of no match is handled as a special case before. */\n    serverPanic(\"End of redisBitpos() reached.\");\n    return 0; /* Just to avoid warnings. 
*/\n}\n\n/* The following set.*Bitfield and get.*Bitfield functions implement setting\n * and getting arbitrary size (up to 64 bits) signed and unsigned integers\n * at arbitrary positions into a bitmap.\n *\n * The representation considers the bitmap as having the bit number 0 to be\n * the most significant bit of the first byte, and so forth, so for example\n * setting a 5-bit unsigned integer to value 23 at offset 7 into a bitmap\n * previously set to all zeroes will produce the following representation:\n *\n * +--------+--------+\n * |00000001|01110000|\n * +--------+--------+\n *\n * When offsets and integer sizes are aligned to byte boundaries, this is the\n * same as big endian; however, when such alignment does not exist, it's important\n * to also understand how the bits inside a byte are ordered.\n *\n * Note that this format follows the same convention as SETBIT and related\n * commands.\n */\n\nvoid setUnsignedBitfield(unsigned char *p, uint64_t offset, uint64_t bits, uint64_t value) {\n    uint64_t byte, bit, byteval, bitval, j;\n\n    for (j = 0; j < bits; j++) {\n        bitval = (value & ((uint64_t)1<<(bits-1-j))) != 0;\n        byte = offset >> 3;\n        bit = 7 - (offset & 0x7);\n        byteval = p[byte];\n        byteval &= ~(1 << bit);\n        byteval |= bitval << bit;\n        p[byte] = byteval & 0xff;\n        offset++;\n    }\n}\n\nvoid setSignedBitfield(unsigned char *p, uint64_t offset, uint64_t bits, int64_t value) {\n    uint64_t uv = value; /* Casting will add UINT64_MAX + 1 if v is negative. 
*/\n    setUnsignedBitfield(p,offset,bits,uv);\n}\n\nuint64_t getUnsignedBitfield(unsigned char *p, uint64_t offset, uint64_t bits) {\n    uint64_t byte, bit, byteval, bitval, j, value = 0;\n\n    for (j = 0; j < bits; j++) {\n        byte = offset >> 3;\n        bit = 7 - (offset & 0x7);\n        byteval = p[byte];\n        bitval = (byteval >> bit) & 1;\n        value = (value<<1) | bitval;\n        offset++;\n    }\n    return value;\n}\n\nint64_t getSignedBitfield(unsigned char *p, uint64_t offset, uint64_t bits) {\n    int64_t value;\n    union {uint64_t u; int64_t i;} conv;\n\n    /* Converting from unsigned to signed is undefined when the value does\n     * not fit, however here we assume two's complement and the original value\n     * was obtained from signed -> unsigned conversion, so we'll find the\n     * most significant bit set if the original value was negative.\n     *\n     * Note that two's complement is mandatory for exact-width types\n     * according to the C99 standard. */\n    conv.u = getUnsignedBitfield(p,offset,bits);\n    value = conv.i;\n\n    /* If the top significant bit is 1, propagate it to all the\n     * higher bits for two's complement representation of signed\n     * integers. */\n    if (bits < 64 && (value & ((uint64_t)1 << (bits-1))))\n        value |= ((uint64_t)-1) << bits;\n    return value;\n}\n\n/* The following two functions detect overflow of a value in the context\n * of storing it as an unsigned or signed integer with the specified\n * number of bits. 
The functions both take the value and a possible increment.\n * If no overflow can happen and the value+increment fit inside the limits,\n * then zero is returned; in case of overflow, 1 is returned, and in case\n * of underflow, -1 is returned.\n *\n * When non-zero is returned (overflow or underflow), if not NULL, *limit is\n * set to the value the operation should result in when an overflow happens,\n * depending on the specified overflow semantics:\n *\n * For BFOVERFLOW_SAT, if 1 is returned, *limit is set to the maximum value\n * that can be stored in that integer. When -1 is returned, *limit is set to\n * the minimum value that an integer of that size can represent.\n *\n * For BFOVERFLOW_WRAP, *limit is set by performing the operation in order to\n * \"wrap\" around towards zero for unsigned integers, or towards the most\n * negative number that is possible to represent for signed integers. */\n\n#define BFOVERFLOW_WRAP 0\n#define BFOVERFLOW_SAT 1\n#define BFOVERFLOW_FAIL 2 /* Used by the BITFIELD command implementation. */\n\nint checkUnsignedBitfieldOverflow(uint64_t value, int64_t incr, uint64_t bits, int owtype, uint64_t *limit) {\n    uint64_t max = (bits == 64) ? 
UINT64_MAX : (((uint64_t)1<<bits)-1);\n    int64_t maxincr = max-value;\n    int64_t minincr = -value;\n\n    if (value > max || (incr > 0 && incr > maxincr)) {\n        if (limit) {\n            if (owtype == BFOVERFLOW_WRAP) {\n                goto handle_wrap;\n            } else if (owtype == BFOVERFLOW_SAT) {\n                *limit = max;\n            }\n        }\n        return 1;\n    } else if (incr < 0 && incr < minincr) {\n        if (limit) {\n            if (owtype == BFOVERFLOW_WRAP) {\n                goto handle_wrap;\n            } else if (owtype == BFOVERFLOW_SAT) {\n                *limit = 0;\n            }\n        }\n        return -1;\n    }\n    return 0;\n\nhandle_wrap:\n    {\n        uint64_t mask = ((uint64_t)-1) << bits;\n        uint64_t res = value+incr;\n\n        res &= ~mask;\n        *limit = res;\n    }\n    return 1;\n}\n\nint checkSignedBitfieldOverflow(int64_t value, int64_t incr, uint64_t bits, int owtype, int64_t *limit) {\n    int64_t max = (bits == 64) ? INT64_MAX : (((int64_t)1<<(bits-1))-1);\n    int64_t min = (-max)-1;\n\n    /* Note that maxincr and minincr could overflow, but we use the values\n     * only after checking 'value' range, so when we use it no overflow\n     * happens. 
'uint64_t' cast is there just to prevent undefined behavior on\n     * overflow */\n    int64_t maxincr = (uint64_t)max-value;\n    int64_t minincr = min-value;\n\n    if (value > max || (bits != 64 && incr > maxincr) || (value >= 0 && incr > 0 && incr > maxincr))\n    {\n        if (limit) {\n            if (owtype == BFOVERFLOW_WRAP) {\n                goto handle_wrap;\n            } else if (owtype == BFOVERFLOW_SAT) {\n                *limit = max;\n            }\n        }\n        return 1;\n    } else if (value < min || (bits != 64 && incr < minincr) || (value < 0 && incr < 0 && incr < minincr)) {\n        if (limit) {\n            if (owtype == BFOVERFLOW_WRAP) {\n                goto handle_wrap;\n            } else if (owtype == BFOVERFLOW_SAT) {\n                *limit = min;\n            }\n        }\n        return -1;\n    }\n    return 0;\n\nhandle_wrap:\n    {\n        uint64_t msb = (uint64_t)1 << (bits-1);\n        uint64_t a = value, b = incr, c;\n        c = a+b; /* Perform addition as unsigned so that's defined. */\n\n        /* If the sign bit is set, propagate to all the higher order\n         * bits, to cap the negative value. If it's clear, mask to\n         * the positive integer limit. */\n        if (bits < 64) {\n            uint64_t mask = ((uint64_t)-1) << bits;\n            if (c & msb) {\n                c |= mask;\n            } else {\n                c &= ~mask;\n            }\n        }\n        *limit = c;\n    }\n    return 1;\n}\n\n/* Debugging function. Just show bits in the specified bitmap. Not used\n * but here for not having to rewrite it when debugging is needed. */\nvoid printBits(unsigned char *p, unsigned long count) {\n    unsigned long j, i, byte;\n\n    for (j = 0; j < count; j++) {\n        byte = p[j];\n        for (i = 0x80; i > 0; i /= 2)\n            printf(\"%c\", (byte & i) ? 
'1' : '0');\n        printf(\"|\");\n    }\n    printf(\"\\n\");\n}\n\n/* -----------------------------------------------------------------------------\n * Bit-related string commands: GETBIT, SETBIT, BITCOUNT, BITOP.\n * -------------------------------------------------------------------------- */\n\n#define BITOP_AND   0\n#define BITOP_OR    1\n#define BITOP_XOR   2\n#define BITOP_NOT   3\n#define BITOP_DIFF  4 /* DIFF(X, A1, A2, ..., An) = X & !(A1 | A2 | ... | An) */\n#define BITOP_DIFF1 5 /* DIFF1(X, A1, A2, ..., An) = !X & (A1 | A2 | ... | An) */\n#define BITOP_ANDOR 6 /* ANDOR(X, A1, A2, ..., An) = X & (A1 | A2 | ... | An) */\n\n/* ONE(A1, A2, ..., An) = X.\n * If X[i] is the i-th bit of X then:\n * X[i] == 1 if and only if there is m such that:\n * Am[i] == 1 and Al[i] == 0 for all l != m. */\n#define BITOP_ONE   7\n\n#define BITFIELDOP_GET 0\n#define BITFIELDOP_SET 1\n#define BITFIELDOP_INCRBY 2\n\n/* This helper function, used by GETBIT / SETBIT, parses the bit offset\n * argument, making sure an error is returned if it is negative or if it\n * overflows the Redis string value limit of 512 MB or more\n * (server.proto_max_bulk_len).\n *\n * If the 'hash' argument is true, and 'bits' is positive, then the command\n * will also parse bit offsets prefixed by \"#\". In such a case the offset\n * is multiplied by 'bits'. This is useful for the BITFIELD command. */\nint getBitOffsetFromArgument(client *c, robj *o, uint64_t *offset, int hash, int bits) {\n    long long loffset;\n    char *err = \"bit offset is not an integer or out of range\";\n    char *p = o->ptr;\n    size_t plen = sdslen(p);\n    int usehash = 0;\n\n    /* Handle #<offset> form. */\n    if (p[0] == '#' && hash && bits > 0) usehash = 1;\n\n    if (string2ll(p+usehash,plen-usehash,&loffset) == 0) {\n        addReplyError(c,err);\n        return C_ERR;\n    }\n\n    /* Adjust the offset by 'bits' for #<offset> form. 
*/\n    if (usehash) loffset *= bits;\n\n    /* Limit offset to server.proto_max_bulk_len (512MB in bytes by default) */\n    if (loffset < 0 || (!mustObeyClient(c) && (loffset >> 3) >= server.proto_max_bulk_len))\n    {\n        addReplyError(c,err);\n        return C_ERR;\n    }\n\n    *offset = loffset;\n    return C_OK;\n}\n\n/* This helper function for BITFIELD parses a bitfield type in the form\n * <sign><bits> where sign is 'u' or 'i' for unsigned and signed, and\n * bits is a value between 1 and 64. However, 64-bit unsigned integers\n * are reported as an error because the Redis protocol currently cannot\n * return unsigned integer values greater than INT64_MAX.\n *\n * On error C_ERR is returned and an error is sent to the client. */\nint getBitfieldTypeFromArgument(client *c, robj *o, int *sign, int *bits) {\n    char *p = o->ptr;\n    char *err = \"Invalid bitfield type. Use something like i16 u8. Note that u64 is not supported but i64 is.\";\n    long long llbits;\n\n    if (p[0] == 'i') {\n        *sign = 1;\n    } else if (p[0] == 'u') {\n        *sign = 0;\n    } else {\n        addReplyError(c,err);\n        return C_ERR;\n    }\n\n    if ((string2ll(p+1,strlen(p+1),&llbits)) == 0 ||\n        llbits < 1 ||\n        (*sign == 1 && llbits > 64) ||\n        (*sign == 0 && llbits > 63))\n    {\n        addReplyError(c,err);\n        return C_ERR;\n    }\n    *bits = llbits;\n    return C_OK;\n}\n\n/* This is a helper function for command implementations that need to write\n * bits to a string object. The command creates the string or pads it with\n * zeroes so that the 'maxbit' bit can be addressed. The object is finally\n * returned. 
Otherwise if the key holds a wrong type NULL is returned and\n * an error is sent to the client.\n * \n * (Must provide all the arguments to the function)\n */\nstatic kvobj *lookupStringForBitCommand(client *c, uint64_t maxbit, \n                                       size_t *strOldSize, size_t *strGrowSize) \n{\n    dictEntryLink link;\n    size_t byte = maxbit >> 3;\n    size_t oldAllocSize = 0;\n    kvobj *o = lookupKeyWriteWithLink(c->db,c->argv[1],&link);\n    if (checkType(c,o,OBJ_STRING)) return NULL;\n\n    if (o == NULL) {\n        o = createObject(OBJ_STRING,sdsnewlen(NULL, byte+1));\n        dbAddByLink(c->db,c->argv[1],&o,&link);\n        *strGrowSize = byte + 1;\n        *strOldSize = 0;\n    } else {\n        o = dbUnshareStringValue(c->db,c->argv[1],o);\n        *strOldSize  = sdslen(o->ptr);\n        if (server.memory_tracking_enabled)\n            oldAllocSize = kvobjAllocSize(o);\n        o->ptr = sdsgrowzero(o->ptr,byte+1);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldAllocSize, kvobjAllocSize(o));\n        *strGrowSize = sdslen(o->ptr) - *strOldSize;\n    }\n    return o;\n}\n\n/* Return a pointer to the string object content, and stores its length\n * in 'len'. The user is required to pass (likely stack allocated) buffer\n * 'llbuf' of at least LONG_STR_SIZE bytes. Such a buffer is used in the case\n * the object is integer encoded in order to provide the representation\n * without using heap allocation.\n *\n * The function returns the pointer to the object array of bytes representing\n * the string it contains, that may be a pointer to 'llbuf' or to the\n * internal object representation. As a side effect 'len' is filled with\n * the length of such buffer.\n *\n * If the source object is NULL the function is guaranteed to return NULL\n * and set 'len' to 0. 
*/\nunsigned char *getObjectReadOnlyString(robj *o, long *len, char *llbuf) {\n    serverAssert(!o || o->type == OBJ_STRING);\n    unsigned char *p = NULL;\n\n    /* Set the 'p' pointer to the string, that can be just a stack allocated\n     * array if our string was integer encoded. */\n    if (o && o->encoding == OBJ_ENCODING_INT) {\n        p = (unsigned char*) llbuf;\n        if (len) *len = ll2string(llbuf,LONG_STR_SIZE,(long)o->ptr);\n    } else if (o) {\n        p = (unsigned char*) o->ptr;\n        if (len) *len = sdslen(o->ptr);\n    } else {\n        if (len) *len = 0;\n    }\n    return p;\n}\n\n/* SETBIT key offset bitvalue */\nvoid setbitCommand(client *c) {\n    char *err = \"bit is not an integer or out of range\";\n    uint64_t bitoffset;\n    ssize_t byte, bit;\n    int byteval, bitval;\n    long on;\n\n    if (getBitOffsetFromArgument(c,c->argv[2],&bitoffset,0,0) != C_OK)\n        return;\n\n    if (getLongFromObjectOrReply(c,c->argv[3],&on,err) != C_OK)\n        return;\n\n    /* Bits can only be set or cleared... */\n    if (on & ~1) {\n        addReplyError(c,err);\n        return;\n    }\n\n    size_t strOldSize, strGrowSize;\n    kvobj *o = lookupStringForBitCommand(c, bitoffset, &strOldSize, &strGrowSize);\n    if (o == NULL) return;\n\n    /* Get current values */\n    byte = bitoffset >> 3;\n    byteval = ((uint8_t*)o->ptr)[byte];\n    bit = 7 - (bitoffset & 0x7);\n    bitval = byteval & (1 << bit);\n\n    /* Either it is newly created, changed length, or the bit changes before and after.\n     * Note that the bitval here is actually a decimal number.\n     * So we need to use `!!` to convert it to 0 or 1 for comparison. */\n    if (strGrowSize || (!!bitval != on)) {\n        /* Update byte with new bit value. 
*/\n        byteval &= ~(1 << bit);\n        byteval |= ((on & 0x1) << bit);\n        ((uint8_t*)o->ptr)[byte] = byteval;\n        keyModified(c,c->db,c->argv[1],o,1);\n        notifyKeyspaceEvent(NOTIFY_STRING,\"setbit\",c->argv[1],c->db->id);\n        server.dirty++;\n\n        /* If this is not a new key (old size not 0) and size changed, then \n         * update the keysizes histogram. Otherwise, the histogram was already \n         * updated in lookupStringForBitCommand() by calling dbAdd(). */\n        if ((strOldSize > 0) && (strGrowSize != 0))\n            updateKeysizesHist(c->db, OBJ_STRING, strOldSize, strOldSize + strGrowSize);\n    }\n\n    /* Return original value. */\n    addReply(c, bitval ? shared.cone : shared.czero);\n}\n\n/* GETBIT key offset */\nvoid getbitCommand(client *c) {\n    char llbuf[32];\n    uint64_t bitoffset;\n    size_t byte, bit;\n    size_t bitval = 0;\n\n    if (getBitOffsetFromArgument(c,c->argv[2],&bitoffset,0,0) != C_OK)\n        return;\n\n    kvobj *kv = lookupKeyReadOrReply(c, c->argv[1], shared.czero);\n    if (kv == NULL || checkType(c,kv,OBJ_STRING)) return;\n\n    byte = bitoffset >> 3;\n    bit = 7 - (bitoffset & 0x7);\n    if (sdsEncodedObject(kv)) {\n        if (byte < sdslen(kv->ptr))\n            bitval = ((uint8_t*)kv->ptr)[byte] & (1 << bit);\n    } else {\n        if (byte < (size_t)ll2string(llbuf,sizeof(llbuf),(long)kv->ptr))\n            bitval = llbuf[byte] & (1 << bit);\n    }\n\n    addReply(c, bitval ? shared.cone : shared.czero);\n}\n\n#ifdef HAVE_AVX2\n/* Compute the given bitop operation using AVX2 intrinsics.\n * Return how many bytes were successfully processed, as AVX2 operates on\n * 256-bit registers so if `minlen` is not a multiple of 32 some of the bytes\n * will be skipped. They will be taken care of in the unoptimized loop in the\n * main bitopCommand function. 
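*/\n\n/* The DIFF, DIFF1 and ANDOR cases below all start from the disjunction of\n * every source key except the first, then combine it with the first key:\n *   DIFF  = key0 & ~(key1|...|keyN)  bits set in key0 and in no other key\n *   DIFF1 = ~key0 & (key1|...|keyN)  bits set in another key but not in key0\n *   ANDOR = key0 & (key1|...|keyN)   bits set in key0 and in at least one other\n * A minimal scalar, byte-at-a-time sketch of that reduction (illustrative\n * only; `bitop_diff_model` is a hypothetical helper): */\nstatic unsigned char bitop_diff_model(const unsigned char *keys, size_t numkeys,\n                                      int op /* 0=DIFF, 1=DIFF1, 2=ANDOR */) {\n    unsigned char disjunction = 0;\n    for (size_t i = 1; i < numkeys; i++) disjunction |= keys[i];\n    if (op == 0) return keys[0] & ~disjunction;   /* DIFF */\n    if (op == 1) return ~keys[0] & disjunction;   /* DIFF1 */\n    return keys[0] & disjunction;                 /* ANDOR */\n}\n\n/* AVX2 implementation, described above.\n 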
*/\nATTRIBUTE_TARGET_AVX2\nunsigned long bitopCommandAVX(unsigned char **keys, unsigned char *res, \n                              unsigned long op, unsigned long numkeys,\n                              unsigned long minlen)\n{\n    const unsigned long step = sizeof(__m256i);\n\n    unsigned long i;\n    unsigned long processed = 0;\n    unsigned char *res_start = res;\n    unsigned char *fst_key = keys[0];\n\n    if (minlen < step) {\n        return 0;\n    }\n\n    const __m256i max256 = _mm256_set1_epi64x(-1);\n    const __m256i zero256 = _mm256_set1_epi64x(0);\n\n    switch (op) {\n    case BITOP_AND:\n        while (minlen >= step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)(keys[0]+processed));\n\n            for (i = 1; i < numkeys; i++) {\n                __m256i lkey = _mm256_lddqu_si256((__m256i*)(keys[i]+processed));\n                lres = _mm256_and_si256(lres, lkey);\n            }\n            _mm256_storeu_si256((__m256i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    /* Unlike other operations that do the same with all source keys\n     * DIFF, DIFF1 and ANDOR all compute the disjunction of all the source keys\n     * but the first one. We first store that disjunction in `lres` and later\n     * compute the final operation using the first source key. 
*/\n    case BITOP_DIFF:\n    case BITOP_DIFF1:\n    case BITOP_ANDOR:\n    case BITOP_OR:\n        while (minlen >= step) {\n            __m256i lres = (op == BITOP_OR) ?\n                _mm256_lddqu_si256((__m256i*)(keys[0]+processed)) :\n                zero256;\n\n            for (i = 1; i < numkeys; i++) {\n                __m256i lkey = _mm256_lddqu_si256((__m256i*)(keys[i]+processed));\n                lres = _mm256_or_si256(lres, lkey);\n            }\n            _mm256_storeu_si256((__m256i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_XOR:\n        while (minlen >= step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)(keys[0]+processed));\n\n            for (i = 1; i < numkeys; i++) {\n                __m256i lkey = _mm256_lddqu_si256((__m256i*)(keys[i]+processed));\n                lres = _mm256_xor_si256(lres, lkey);\n            }\n            _mm256_storeu_si256((__m256i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_NOT:\n        while (minlen >= step) {\n             __m256i lres = _mm256_lddqu_si256((__m256i*)(keys[0]+processed));\n            lres = _mm256_xor_si256(lres, max256);\n            _mm256_storeu_si256((__m256i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_ONE:\n        while (minlen >= step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)(keys[0]+processed));\n            __m256i common_bits = zero256;\n\n            for (i = 1; i < numkeys; i++) {\n                __m256i lkey = _mm256_lddqu_si256((__m256i*)(keys[i]+processed));\n                __m256i common = _mm256_and_si256(lres, lkey);\n                common_bits = _mm256_or_si256(common_bits, common);\n\n                lres = _mm256_xor_si256(lres, lkey);\n          
  }\n            lres = _mm256_andnot_si256(common_bits, lres);\n            _mm256_storeu_si256((__m256i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    default:\n        break;\n    }\n\n    res = res_start;\n    switch (op) {\n    case BITOP_DIFF:\n        for (i = 0; i < processed; i += step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)res);\n            __m256i fkey = _mm256_lddqu_si256((__m256i*)fst_key);\n\n            lres = _mm256_andnot_si256(lres, fkey);\n            _mm256_storeu_si256((__m256i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    case BITOP_DIFF1:\n        for (i = 0; i < processed; i += step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)res);\n            __m256i fkey = _mm256_lddqu_si256((__m256i*)fst_key);\n\n            lres = _mm256_andnot_si256(fkey, lres);\n            _mm256_storeu_si256((__m256i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    case BITOP_ANDOR:\n        for (i = 0; i < processed; i += step) {\n            __m256i lres = _mm256_lddqu_si256((__m256i*)res);\n            __m256i fkey = _mm256_lddqu_si256((__m256i*)fst_key);\n\n            lres = _mm256_and_si256(fkey, lres);\n            _mm256_storeu_si256((__m256i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    default:\n        break;\n    }\n\n    return processed;\n}\n#endif /* HAVE_AVX2 */\n\n#ifdef HAVE_AVX512\n/* Compute the given bitop operation using AVX512 intrinsics.\n * Return how many bytes were successfully processed, as AVX512 operates on\n * 512-bit registers so if `minlen` is not a multiple of 64 some of the bytes\n * will be skipped. They will be taken care of in the unoptimized loop in the\n * main bitopCommand function. 
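*/\n\n/* The ONE case below folds `common_bits |= (lres & lkey)` into a single\n * _mm512_ternarylogic_epi32 call. Its imm8 is the truth table of\n * f(a,b,c) = c | (a & b), indexed by (a<<2)|(b<<1)|c, which works out to\n * 0xEA. A scalar sketch deriving that constant (illustrative only;\n * `ternlog_imm8` is a hypothetical helper): */\nstatic unsigned ternlog_imm8(void) {\n    unsigned imm = 0;\n    for (unsigned idx = 0; idx < 8; idx++) {\n        unsigned a = (idx >> 2) & 1, b = (idx >> 1) & 1, c = idx & 1;\n        if (c | (a & b)) imm |= 1u << idx; /* set the table bit where f is 1 */\n    }\n    return imm; /* == 0xEA */\n}\n\n/* AVX512 implementation, described above.\n 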
*/\nATTRIBUTE_TARGET_AVX512\nunsigned long bitopCommandAVX512(unsigned char **keys, unsigned char *res, \n                                 unsigned long op, unsigned long numkeys,\n                                 unsigned long minlen)\n{\n    const unsigned long step = sizeof(__m512i);  /* 64 bytes */\n\n    unsigned long i;\n    unsigned long processed = 0;\n    unsigned char *res_start = res;\n    unsigned char *fst_key = keys[0];\n\n    if (minlen < step) {\n        return 0;\n    }\n\n    const __m512i max512 = _mm512_set1_epi64(-1);\n    const __m512i zero512 = _mm512_set1_epi64(0);\n    switch (op) {\n    case BITOP_AND:\n        while (minlen >= step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)(keys[0]+processed));\n\n            for (i = 1; i < numkeys; i++) {\n                __m512i lkey = _mm512_loadu_si512((__m512i*)(keys[i]+processed));\n                lres = _mm512_and_si512(lres, lkey);\n            }\n            _mm512_storeu_si512((__m512i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    /* Unlike other operations that do the same with all source keys\n     * DIFF, DIFF1 and ANDOR all compute the disjunction of all the source keys\n     * but the first one. We first store that disjunction in `lres` and later\n     * compute the final operation using the first source key. 
*/\n    case BITOP_DIFF:\n    case BITOP_DIFF1:\n    case BITOP_ANDOR:\n    case BITOP_OR:\n        while (minlen >= step) {\n            __m512i lres = (op == BITOP_OR) ?\n                _mm512_loadu_si512((__m512i*)(keys[0]+processed)) :\n                zero512;\n\n            for (i = 1; i < numkeys; i++) {\n                __m512i lkey = _mm512_loadu_si512((__m512i*)(keys[i]+processed));\n                lres = _mm512_or_si512(lres, lkey);\n            }\n            _mm512_storeu_si512((__m512i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_XOR:\n        while (minlen >= step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)(keys[0]+processed));\n\n            for (i = 1; i < numkeys; i++) {\n                __m512i lkey = _mm512_loadu_si512((__m512i*)(keys[i]+processed));\n                lres = _mm512_xor_si512(lres, lkey);\n            }\n            _mm512_storeu_si512((__m512i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_NOT:\n        while (minlen >= step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)(keys[0]+processed));\n            lres = _mm512_xor_si512(lres, max512);\n            _mm512_storeu_si512((__m512i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    case BITOP_ONE:\n        while (minlen >= step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)(keys[0]+processed));\n            __m512i common_bits = zero512;\n\n            for (i = 1; i < numkeys; i++) {\n                __m512i lkey = _mm512_loadu_si512((__m512i*)(keys[i]+processed));\n                /* common_bits |= (lres & lkey): ternary-logic with imm8 0xEA == c|(a&b)\n                 * (a=lres, b=lkey, c=common_bits), replacing a separate AND+OR. 
*/\n                common_bits = _mm512_ternarylogic_epi32(lres, lkey, common_bits, 0xEA);\n\n                lres = _mm512_xor_si512(lres, lkey);\n            }\n            lres = _mm512_andnot_si512(common_bits, lres);\n            _mm512_storeu_si512((__m512i*)res, lres);\n            res += step;\n            processed += step;\n            minlen -= step;\n        }\n        break;\n    default:\n        break;\n    }\n\n    res = res_start;\n    switch (op) {\n    case BITOP_DIFF:\n        for (i = 0; i < processed; i += step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)res);\n            __m512i fkey = _mm512_loadu_si512((__m512i*)fst_key);\n\n            lres = _mm512_andnot_si512(lres, fkey);\n            _mm512_storeu_si512((__m512i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    case BITOP_DIFF1:\n        for (i = 0; i < processed; i += step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)res);\n            __m512i fkey = _mm512_loadu_si512((__m512i*)fst_key);\n\n            lres = _mm512_andnot_si512(fkey, lres);\n            _mm512_storeu_si512((__m512i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    case BITOP_ANDOR:\n        for (i = 0; i < processed; i += step) {\n            __m512i lres = _mm512_loadu_si512((__m512i*)res);\n            __m512i fkey = _mm512_loadu_si512((__m512i*)fst_key);\n\n            lres = _mm512_and_si512(fkey, lres);\n            _mm512_storeu_si512((__m512i*)res, lres);\n\n            res += step;\n            fst_key += step;\n        }\n        break;\n    default:\n        break;\n    }\n\n    return processed;\n}\n#endif /* HAVE_AVX512 */\n\n/* BITOP op_name target_key src_key1 src_key2 src_key3 ... 
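src_keyN */\n\n/* ONE is the only non-classical operator here: a result bit is set iff\n * exactly one source key has that bit set. The implementations above track\n * bits seen in more than one key and clear them from the running XOR. A\n * scalar, byte-at-a-time model of the same idea (illustrative only;\n * `bitop_one_byte` is a hypothetical helper): */\nstatic unsigned char bitop_one_byte(const unsigned char *keys, size_t numkeys) {\n    unsigned char acc = keys[0], common = 0;\n    for (size_t i = 1; i < numkeys; i++) {\n        common |= acc & keys[i]; /* bits seen in more than one key */\n        acc ^= keys[i];          /* running parity of the bit across keys */\n    }\n    return acc & ~common;        /* drop bits that appeared more than once */\n}\n\n/* BITOP op_name target_key src_key1 src_key2 src_key3 ... 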
src_keyN */\nREDIS_NO_SANITIZE(\"alignment\")\nvoid bitopCommand(client *c) {\n    char *opname = c->argv[1]->ptr;\n    robj *targetkey = c->argv[2];\n    unsigned long op, j, numkeys;\n    robj **objects;      /* Array of source objects. */\n    unsigned char **src; /* Array of source strings pointers. */\n    unsigned long *len, maxlen = 0; /* Array of length of src strings,\n                                       and max len. */\n    unsigned long minlen = 0;    /* Min len among the input keys. */\n    unsigned char *res = NULL; /* Resulting string. */\n\n    /* Parse the operation name. */\n    if ((opname[0] == 'a' || opname[0] == 'A') && !strcasecmp(opname,\"and\"))\n        op = BITOP_AND;\n    else if((opname[0] == 'o' || opname[0] == 'O') && !strcasecmp(opname,\"or\"))\n        op = BITOP_OR;\n    else if((opname[0] == 'x' || opname[0] == 'X') && !strcasecmp(opname,\"xor\"))\n        op = BITOP_XOR;\n    else if((opname[0] == 'n' || opname[0] == 'N') && !strcasecmp(opname,\"not\"))\n        op = BITOP_NOT;\n    else if ((opname[0] == 'd' || opname[0] == 'D') && !strcasecmp(opname,\"diff\"))\n        op = BITOP_DIFF;\n    else if ((opname[0] == 'd' || opname[0] == 'D') && !strcasecmp(opname,\"diff1\"))\n        op = BITOP_DIFF1;\n    else if ((opname[0] == 'a' || opname[0] == 'A') && !strcasecmp(opname,\"andor\"))\n        op = BITOP_ANDOR;\n    else if ((opname[0] == 'o' || opname[0] == 'O') && !strcasecmp(opname,\"one\"))\n        op = BITOP_ONE;\n    else {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Sanity check: NOT accepts only a single key argument. 
*/\n    if (op == BITOP_NOT && c->argc != 4) {\n        addReplyError(c,\"BITOP NOT must be called with a single source key.\");\n        return;\n    }\n\n    if ((op == BITOP_DIFF || op == BITOP_DIFF1 || op == BITOP_ANDOR) && c->argc < 5) {\n        sds opname_upper = sdsnew(opname);\n        sdstoupper(opname_upper);\n        addReplyErrorFormat(c,\"BITOP %s must be called with at least two source keys.\", opname_upper);\n        sdsfree(opname_upper);\n        return;\n    }\n\n    /* Lookup keys, and store pointers to the string objects into an array. */\n    numkeys = c->argc - 3;\n    src = zmalloc(sizeof(unsigned char*) * numkeys);\n    len = zmalloc(sizeof(long) * numkeys);\n    objects = zmalloc(sizeof(robj*) * numkeys);\n    for (j = 0; j < numkeys; j++) {\n        kvobj *kv = lookupKeyRead(c->db, c->argv[j + 3]);\n        /* Handle non-existing keys as empty strings. */\n        if (kv == NULL) {\n            objects[j] = NULL;\n            src[j] = NULL;\n            len[j] = 0;\n            minlen = 0;\n            continue;\n        }\n        /* Return an error if one of the keys is not a string. */\n        if (checkType(c, kv, OBJ_STRING)) {\n            unsigned long i;\n            for (i = 0; i < j; i++) {\n                if (objects[i])\n                    decrRefCount(objects[i]);\n            }\n            zfree(src);\n            zfree(len);\n            zfree(objects);\n            return;\n        }\n        objects[j] = getDecodedObject(kv);\n        src[j] = objects[j]->ptr;\n        len[j] = sdslen(objects[j]->ptr);\n        if (len[j] > maxlen) maxlen = len[j];\n        if (j == 0 || len[j] < minlen) minlen = len[j];\n    }\n\n    /* Compute the bit operation, if at least one string is not empty. 
*/\n    if (maxlen) {\n        res = (unsigned char*) sdsnewlen(NULL,maxlen);\n        unsigned char output, byte, disjunction, common_bits;\n        unsigned long i;\n        int useAVX = 0;\n\n        /* Number of bytes processed from each source key */\n        j = 0;\n\n#if defined(HAVE_AVX512)\n        if (BITOP_USE_AVX512 && (minlen >= 10000) && (numkeys >= 8)) {\n            j = bitopCommandAVX512(src, res, op, numkeys, minlen);\n\n            serverAssert(minlen >= j);\n            minlen -= j;\n\n            useAVX = 1;\n        }\n#endif\n\n#if defined(HAVE_AVX2)\n        if (!useAVX && BITOP_USE_AVX2) {\n            j = bitopCommandAVX(src, res, op, numkeys, minlen);\n\n            serverAssert(minlen >= j);\n            minlen -= j;\n\n            useAVX = 1;\n        }\n#endif\n\n#if !defined(USE_ALIGNED_ACCESS)\n        /* If no SIMD path was used (no AVX2/AVX512), fall back \n         * to a word-at-a-time fast path that is still much better \n         * than the byte-by-byte loop below. On ARM we skip this since \n         * it would cause GCC to emit multiple-word load/store ops\n         * not supported even on ARM >= v6. */\n        if (!useAVX && minlen >= sizeof(unsigned long)*4) {\n\n            unsigned long **lp = (unsigned long**)src;\n            unsigned long *lres = (unsigned long*) res;\n\n            /* Index over the unsigned long version of the source keys */\n            size_t k = 0;\n\n            /* Unlike other operations that do the same with all source keys\n             * DIFF, DIFF1 and ANDOR all compute the disjunction of all the\n             * source keys but the first one. We first store that disjunction\n             * in `lres` and later compute the final operation using the first\n             * source key. */\n            if (op != BITOP_DIFF && op != BITOP_DIFF1 && op != BITOP_ANDOR)\n                memcpy(lres,src[0],minlen);\n\n            /* Different branches per different operations for speed (sorry). 
*/\n            if (op == BITOP_AND) {\n                while(minlen >= sizeof(unsigned long)*4) {\n                    for (i = 1; i < numkeys; i++) {\n                        lres[0] &= lp[i][k+0];\n                        lres[1] &= lp[i][k+1];\n                        lres[2] &= lp[i][k+2];\n                        lres[3] &= lp[i][k+3];\n                    }\n                    k+=4;\n                    lres+=4;\n                    j += sizeof(unsigned long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                }\n            } else if (op == BITOP_OR) {\n                while(minlen >= sizeof(unsigned long)*4) {\n                    for (i = 1; i < numkeys; i++) {\n                        lres[0] |= lp[i][k+0];\n                        lres[1] |= lp[i][k+1];\n                        lres[2] |= lp[i][k+2];\n                        lres[3] |= lp[i][k+3];\n                    }\n                    k+=4;\n                    lres+=4;\n                    j += sizeof(unsigned long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                }\n            } else if (op == BITOP_XOR) {\n                while(minlen >= sizeof(unsigned long)*4) {\n                    for (i = 1; i < numkeys; i++) {\n                        lres[0] ^= lp[i][k+0];\n                        lres[1] ^= lp[i][k+1];\n                        lres[2] ^= lp[i][k+2];\n                        lres[3] ^= lp[i][k+3];\n                    }\n                    k+=4;\n                    lres+=4;\n                    j += sizeof(unsigned long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                }\n            } else if (op == BITOP_NOT) {\n                while(minlen >= sizeof(unsigned long)*4) {\n                    lres[0] = ~lres[0];\n                    lres[1] = ~lres[1];\n                    lres[2] = ~lres[2];\n                    lres[3] = ~lres[3];\n                    lres+=4;\n                    j += sizeof(unsigned 
long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                }\n            } else if (op == BITOP_DIFF || op == BITOP_DIFF1 || op == BITOP_ANDOR) {\n                size_t processed = 0;\n                while(minlen >= sizeof(unsigned long)*4) {\n                    for (i = 1; i < numkeys; i++) {\n                        lres[0] |= lp[i][k+0];\n                        lres[1] |= lp[i][k+1];\n                        lres[2] |= lp[i][k+2];\n                        lres[3] |= lp[i][k+3];\n                    }\n                    k+=4;\n                    lres+=4;\n                    j += sizeof(unsigned long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                    processed += sizeof(unsigned long)*4;\n                }\n\n                lres = (unsigned long*) res;\n                unsigned long *first_key = (unsigned long*)src[0];\n                switch (op) {\n                case BITOP_DIFF:\n                    for (i = 0; i < processed; i += sizeof(unsigned long)*4) {\n                        lres[0] = (first_key[0] & ~lres[0]);\n                        lres[1] = (first_key[1] & ~lres[1]);\n                        lres[2] = (first_key[2] & ~lres[2]);\n                        lres[3] = (first_key[3] & ~lres[3]);\n                        lres+=4;\n                        first_key += 4;\n                    }\n                    break;\n                case BITOP_DIFF1:\n                    for (i = 0; i < processed; i += sizeof(unsigned long)*4) {\n                        lres[0] = (~first_key[0] & lres[0]);\n                        lres[1] = (~first_key[1] & lres[1]);\n                        lres[2] = (~first_key[2] & lres[2]);\n                        lres[3] = (~first_key[3] & lres[3]);\n                        lres+=4;\n                        first_key += 4;\n                    }\n                    break;\n                case BITOP_ANDOR:\n                    for (i = 0; i < processed; i += 
sizeof(unsigned long)*4) {\n                        lres[0] = (first_key[0] & lres[0]);\n                        lres[1] = (first_key[1] & lres[1]);\n                        lres[2] = (first_key[2] & lres[2]);\n                        lres[3] = (first_key[3] & lres[3]);\n                        lres+=4;\n                        first_key += 4;\n                    }\n                    break;\n                }\n            } else if (op == BITOP_ONE) {\n                unsigned long lcommon_bits[4];\n\n                while(minlen >= sizeof(unsigned long)*4) {\n                    memset(lcommon_bits, 0, sizeof(lcommon_bits));\n\n                    for (i = 1; i < numkeys; i++) {\n                        lcommon_bits[0] |= (lres[0] & lp[i][k+0]);\n                        lcommon_bits[1] |= (lres[1] & lp[i][k+1]);\n                        lcommon_bits[2] |= (lres[2] & lp[i][k+2]);\n                        lcommon_bits[3] |= (lres[3] & lp[i][k+3]);\n\n                        lres[0] ^= lp[i][k+0];\n                        lres[1] ^= lp[i][k+1];\n                        lres[2] ^= lp[i][k+2];\n                        lres[3] ^= lp[i][k+3];\n                    }\n\n                    lres[0] &= ~lcommon_bits[0];\n                    lres[1] &= ~lcommon_bits[1];\n                    lres[2] &= ~lcommon_bits[2];\n                    lres[3] &= ~lcommon_bits[3];\n\n                    k+=4;\n                    lres+=4;\n                    j += sizeof(unsigned long)*4;\n                    minlen -= sizeof(unsigned long)*4;\n                }\n            }\n        }\n#endif /* !defined(USE_ALIGNED_ACCESS) */\n\n        /* j is set to the next byte to process by the previous loop. */\n        for (; j < maxlen; j++) {\n            output = (len[0] <= j) ? 
0 : src[0][j];\n            if (op == BITOP_NOT) output = ~output;\n            disjunction = 0;\n            common_bits = 0;\n\n            for (i = 1; i < numkeys; i++) {\n                int skip = 0;\n                byte = (len[i] <= j) ? 0 : src[i][j];\n                switch(op) {\n                case BITOP_AND:\n                    output &= byte;\n                    skip = (output == 0);\n                    break;\n                case BITOP_OR:\n                    output |= byte;\n                    skip = (output == 0xff);\n                    break;\n                case BITOP_XOR: output ^= byte; break;\n\n                /* For DIFF, DIFF1 and ANDOR we compute the disjunction of all\n                 * key arguments except the first one. After that we do their\n                 * respective bit op on said first arg and that disjunction.\n                 * */\n                case BITOP_DIFF:\n                case BITOP_DIFF1:\n                case BITOP_ANDOR:\n                    disjunction |= byte;\n                    skip = (disjunction == 0xff);\n                    break;\n\n                /* BITOP ONE dest key_1 [key_2...]\n                 * If dest[i] is the i-th bit of dest then:\n                 * dest[i] == 1 if and only if there is j such that key_j[i] == 1\n                 * and key_n[i] == 0 for all n != j.\n                 *\n                 * In order to compute that on each step we track which bits\n                 * were seen in more than one key and store that in a helper\n                 * variable. 
Then the operation is just XOR but on each step we\n                 * nullify the bits that are set in the helper.\n                 * Logically, this operation is the same as nullifying the\n                 * helper bits only once at the end, but performance-wise it had\n                 * no significant benefit and makes the code only more unclear.\n                 *\n                 * e.g:\n                 * 0001 0111 # key1\n                 * 0010 0110 # key2\n                 *\n                 * 0011 0001 # intermediate1\n                 * 0000 0110 # helper\n                 * 0011 0001 # intermediate1 & ~helper\n                 *\n                 * 0100 1101 # key3\n                 *\n                 * 0111 1100 # intermediate2\n                 * 0000 0111 # helper\n                 * 0111 1000 # intermediate2 & ~helper\n                 * ---------\n                 * 0111 1000 # result\n                 * */\n                case BITOP_ONE:\n                    common_bits |= (output & byte);\n                    output ^= byte;\n                    output &= ~common_bits;\n                    skip = (common_bits == 0xff);\n                    break;\n                default:\n                    break;\n                }\n\n                if (skip) {\n                    break;\n                }\n            }\n\n            switch(op) {\n            case BITOP_DIFF:\n                res[j] = (output & ~disjunction);\n                break;\n            case BITOP_DIFF1:\n                res[j] = (~output & disjunction);\n                break;\n            case BITOP_ANDOR:\n                res[j] = (output & disjunction);\n                break;\n            default:\n                res[j] = output;\n                break;\n            }\n        }\n    }\n    for (j = 0; j < numkeys; j++) {\n        if (objects[j])\n            decrRefCount(objects[j]);\n    }\n    zfree(src);\n    zfree(len);\n    zfree(objects);\n\n    /* Store the 
computed value into the target key */\n    if (maxlen) {\n        robj *o = createObject(OBJ_STRING, res);\n        setKey(c, c->db, targetkey, &o, 0);\n        notifyKeyspaceEvent(NOTIFY_STRING,\"set\",targetkey,c->db->id);\n        server.dirty++;\n    } else if (dbDelete(c->db,targetkey)) {\n        keyModified(c,c->db,targetkey,NULL,1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",targetkey,c->db->id);\n        server.dirty++;\n    }\n    addReplyLongLong(c,maxlen); /* Return the output string length in bytes. */\n}\n\n/* BITCOUNT key [start end [BIT|BYTE]] */\nvoid bitcountCommand(client *c) {\n    kvobj *o;\n    long long start, end;\n    long strlen;\n    unsigned char *p;\n    char llbuf[LONG_STR_SIZE];\n    int isbit = 0;\n    unsigned char first_byte_neg_mask = 0, last_byte_neg_mask = 0;\n\n    /* Parse start/end range if any. */\n    if (c->argc == 4 || c->argc == 5) {\n        if (getLongLongFromObjectOrReply(c,c->argv[2],&start,NULL) != C_OK)\n            return;\n        if (getLongLongFromObjectOrReply(c,c->argv[3],&end,NULL) != C_OK)\n            return;\n        if (c->argc == 5) {\n            if (!strcasecmp(c->argv[4]->ptr,\"bit\")) isbit = 1;\n            else if (!strcasecmp(c->argv[4]->ptr,\"byte\")) isbit = 0;\n            else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n        /* Lookup, check for type. 
*/\n        o = lookupKeyRead(c->db, c->argv[1]);\n        if (checkType(c, o, OBJ_STRING)) return;\n        p = getObjectReadOnlyString(o,&strlen,llbuf);\n        long long totlen = strlen;\n\n        /* Make sure we will not overflow */\n        serverAssert(totlen <= LLONG_MAX >> 3);\n\n        /* Convert negative indexes */\n        if (start < 0 && end < 0 && start > end) {\n            addReply(c,shared.czero);\n            return;\n        }\n        if (isbit) totlen <<= 3;\n        if (start < 0) start = totlen+start;\n        if (end < 0) end = totlen+end;\n        if (start < 0) start = 0;\n        if (end < 0) end = 0;\n        if (end >= totlen) end = totlen-1;\n        if (isbit && start <= end) {\n            /* Before converting bit offset to byte offset, create negative masks\n             * for the edges. */\n            first_byte_neg_mask = ~((1<<(8-(start&7)))-1) & 0xFF;\n            last_byte_neg_mask = (1<<(7-(end&7)))-1;\n            start >>= 3;\n            end >>= 3;\n        }\n    } else if (c->argc == 2) {\n        /* Lookup, check for type. */\n        o = lookupKeyRead(c->db, c->argv[1]);\n        if (checkType(c, o, OBJ_STRING)) return;\n        p = getObjectReadOnlyString(o,&strlen,llbuf);\n        /* The whole string. */\n        start = 0;\n        end = strlen-1;\n    } else {\n        /* Syntax error. */\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Return 0 for non existing keys. */\n    if (o == NULL) {\n        addReply(c, shared.czero);\n        return;\n    }\n\n    /* Precondition: end >= 0 && end < strlen, so the only condition where\n     * zero can be returned is: start > end. 
*/\n    if (start > end) {\n        addReply(c,shared.czero);\n    } else {\n        long bytes = (long)(end-start+1);\n        long long count;\n\n        /* Use the best available popcount implementation */\n        count = redisPopcountAuto(p+start, bytes);\n\n        if (first_byte_neg_mask != 0 || last_byte_neg_mask != 0) {\n            unsigned char firstlast[2] = {0, 0};\n            /* We may count bits of first byte and last byte which are out of\n             * range. So we need to subtract them. Here we use a trick. We set\n             * bits in the range to zero. So these bits will not be excluded. */\n            if (first_byte_neg_mask != 0) firstlast[0] = p[start] & first_byte_neg_mask;\n            if (last_byte_neg_mask != 0) firstlast[1] = p[end] & last_byte_neg_mask;\n\n            /* Use the same popcount implementation for consistency */\n            count -= redisPopcountAuto(firstlast, 2);\n        }\n        addReplyLongLong(c,count);\n    }\n}\n\n/* BITPOS key bit [start [end [BIT|BYTE]]] */\nvoid bitposCommand(client *c) {\n    kvobj *o;\n    long long start, end;\n    long bit, strlen;\n    unsigned char *p;\n    char llbuf[LONG_STR_SIZE];\n    int isbit = 0, end_given = 0;\n    unsigned char first_byte_neg_mask = 0, last_byte_neg_mask = 0;\n\n    /* Parse the bit argument to understand what we are looking for, set\n     * or clear bits. */\n    if (getLongFromObjectOrReply(c,c->argv[2],&bit,NULL) != C_OK)\n        return;\n    if (bit != 0 && bit != 1) {\n        addReplyError(c, \"The bit argument must be 1 or 0.\");\n        return;\n    }\n\n    /* Parse start/end range if any. 
*/\n    if (c->argc == 4 || c->argc == 5 || c->argc == 6) {\n        if (getLongLongFromObjectOrReply(c,c->argv[3],&start,NULL) != C_OK)\n            return;\n        if (c->argc == 6) {\n            if (!strcasecmp(c->argv[5]->ptr,\"bit\")) isbit = 1;\n            else if (!strcasecmp(c->argv[5]->ptr,\"byte\")) isbit = 0;\n            else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n        if (c->argc >= 5) {\n            if (getLongLongFromObjectOrReply(c,c->argv[4],&end,NULL) != C_OK)\n                return;\n            end_given = 1;\n        }\n\n        /* Lookup, check for type. */\n        o = lookupKeyRead(c->db, c->argv[1]);\n        if (checkType(c, o, OBJ_STRING)) return;\n        p = getObjectReadOnlyString(o, &strlen, llbuf);\n\n        /* Make sure we will not overflow */\n        long long totlen = strlen;\n        serverAssert(totlen <= LLONG_MAX >> 3);\n\n        if (c->argc < 5) {\n            if (isbit) end = (totlen<<3) + 7;\n            else end = totlen-1;\n        }\n\n        if (isbit) totlen <<= 3;\n        /* Convert negative indexes */\n        if (start < 0) start = totlen+start;\n        if (end < 0) end = totlen+end;\n        if (start < 0) start = 0;\n        if (end < 0) end = 0;\n        if (end >= totlen) end = totlen-1;\n        if (isbit && start <= end) {\n            /* Before converting bit offset to byte offset, create negative masks\n             * for the edges. */\n            first_byte_neg_mask = ~((1<<(8-(start&7)))-1) & 0xFF;\n            last_byte_neg_mask = (1<<(7-(end&7)))-1;\n            start >>= 3;\n            end >>= 3;\n        }\n    } else if (c->argc == 3) {\n        /* Lookup, check for type. */\n        o = lookupKeyRead(c->db, c->argv[1]);\n        if (checkType(c,o,OBJ_STRING)) return;\n        p = getObjectReadOnlyString(o,&strlen,llbuf);\n\n        /* The whole string. 
*/\n        start = 0;\n        end = strlen-1;\n    } else {\n        /* Syntax error. */\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* If the key does not exist, from our point of view it is an infinite\n     * array of 0 bits. If the user is looking for the first clear bit return 0.\n     * If the user is looking for the first set bit, return -1. */\n    if (o == NULL) {\n        addReplyLongLong(c, bit ? -1 : 0);\n        return;\n    }\n\n    /* For empty ranges (start > end) we return -1 as an empty range does\n     * not contain a 0 nor a 1. */\n    if (start > end) {\n        addReplyLongLong(c, -1);\n    } else {\n        long bytes = end-start+1;\n        long long pos;\n        unsigned char tmpchar;\n        if (first_byte_neg_mask) {\n            if (bit) tmpchar = p[start] & ~first_byte_neg_mask;\n            else tmpchar = p[start] | first_byte_neg_mask;\n            /* Special case, there is only one byte */\n            if (last_byte_neg_mask && bytes == 1) {\n                if (bit) tmpchar = tmpchar & ~last_byte_neg_mask;\n                else tmpchar = tmpchar | last_byte_neg_mask;\n            }\n            pos = redisBitpos(&tmpchar,1,bit);\n            /* If there are no more bytes or we get a valid pos, we can exit early */\n            if (bytes == 1 || (pos != -1 && pos != 8)) goto result;\n            start++;\n            bytes--;\n        }\n        /* If the last byte has no bits in the range, we should exclude it */\n        long curbytes = bytes - (last_byte_neg_mask ? 
1 : 0);\n        if (curbytes > 0) {\n            pos = redisBitpos(p+start,curbytes,bit);\n            /* If there are no more bytes or we get a valid pos, we can exit early */\n            if (bytes == curbytes || (pos != -1 && pos != (long long)curbytes<<3)) goto result;\n            start += curbytes;\n            bytes -= curbytes;\n        }\n        if (bit) tmpchar = p[end] & ~last_byte_neg_mask;\n        else tmpchar = p[end] | last_byte_neg_mask;\n        pos = redisBitpos(&tmpchar,1,bit);\n\n    result:\n        /* If we are looking for clear bits, and the user specified an exact\n         * range with start-end, we can't consider the right of the range as\n         * zero padded (as we do when no explicit end is given).\n         *\n         * So if redisBitpos() returns the first bit outside the range,\n         * we return -1 to the caller, to mean, in the specified range there\n         * is not a single \"0\" bit. */\n        if (end_given && bit == 0 && pos == (long long)bytes<<3) {\n            addReplyLongLong(c,-1);\n            return;\n        }\n        if (pos != -1) pos += (long long)start<<3; /* Adjust for the bytes we skipped. */\n        addReplyLongLong(c,pos);\n    }\n}\n\n/* BITFIELD key subcommand-1 arg ... subcommand-2 arg ... subcommand-N ...\n *\n * Supported subcommands:\n *\n * GET <type> <offset>\n * SET <type> <offset> <value>\n * INCRBY <type> <offset> <increment>\n * OVERFLOW [WRAP|SAT|FAIL]\n */\n\n#define BITFIELD_FLAG_NONE      0\n#define BITFIELD_FLAG_READONLY  (1<<0)\n\nstruct bitfieldOp {\n    uint64_t offset;    /* Bitfield offset. */\n    int64_t i64;        /* Increment amount (INCRBY) or SET value */\n    int opcode;         /* Operation id. */\n    int owtype;         /* Overflow type to use. */\n    int bits;           /* Integer bitfield bits width. */\n    int sign;           /* True if signed, otherwise unsigned op. 
*/\n};\n\n/* This implements both the BITFIELD command and the BITFIELD_RO command\n * when flags is set to BITFIELD_FLAG_READONLY: in this case only the\n * GET subcommand is allowed, other subcommands will return an error. */\nvoid bitfieldGeneric(client *c, int flags) {\n    kvobj *o;\n    uint64_t bitoffset;\n    int j, numops = 0, changes = 0;\n    size_t strOldSize = 0, strGrowSize = 0;\n    struct bitfieldOp *ops = NULL; /* Array of ops to execute at end. */\n    int owtype = BFOVERFLOW_WRAP; /* Overflow type. */\n    int readonly = 1;\n    uint64_t highest_write_offset = 0;\n\n    for (j = 2; j < c->argc; j++) {\n        int remargs = c->argc-j-1; /* Remaining args other than current. */\n        char *subcmd = c->argv[j]->ptr; /* Current command name. */\n        int opcode; /* Current operation code. */\n        long long i64 = 0;  /* Signed SET value. */\n        int sign = 0; /* Signed or unsigned type? */\n        int bits = 0; /* Bitfield width in bits. */\n\n        if (!strcasecmp(subcmd,\"get\") && remargs >= 2)\n            opcode = BITFIELDOP_GET;\n        else if (!strcasecmp(subcmd,\"set\") && remargs >= 3)\n            opcode = BITFIELDOP_SET;\n        else if (!strcasecmp(subcmd,\"incrby\") && remargs >= 3)\n            opcode = BITFIELDOP_INCRBY;\n        else if (!strcasecmp(subcmd,\"overflow\") && remargs >= 1) {\n            char *owtypename = c->argv[j+1]->ptr;\n            j++;\n            if (!strcasecmp(owtypename,\"wrap\"))\n                owtype = BFOVERFLOW_WRAP;\n            else if (!strcasecmp(owtypename,\"sat\"))\n                owtype = BFOVERFLOW_SAT;\n            else if (!strcasecmp(owtypename,\"fail\"))\n                owtype = BFOVERFLOW_FAIL;\n            else {\n                addReplyError(c,\"Invalid OVERFLOW type specified\");\n                zfree(ops);\n                return;\n            }\n            continue;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            
zfree(ops);\n            return;\n        }\n\n        /* Get the type and offset arguments, common to all the ops. */\n        if (getBitfieldTypeFromArgument(c,c->argv[j+1],&sign,&bits) != C_OK) {\n            zfree(ops);\n            return;\n        }\n\n        if (getBitOffsetFromArgument(c,c->argv[j+2],&bitoffset,1,bits) != C_OK){\n            zfree(ops);\n            return;\n        }\n\n        if (opcode != BITFIELDOP_GET) {\n            readonly = 0;\n            if (highest_write_offset < bitoffset + bits - 1)\n                highest_write_offset = bitoffset + bits - 1;\n            /* INCRBY and SET require another argument. */\n            if (getLongLongFromObjectOrReply(c,c->argv[j+3],&i64,NULL) != C_OK){\n                zfree(ops);\n                return;\n            }\n        }\n\n        /* Populate the array of operations we'll process. */\n        ops = zrealloc(ops,sizeof(*ops)*(numops+1));\n        ops[numops].offset = bitoffset;\n        ops[numops].i64 = i64;\n        ops[numops].opcode = opcode;\n        ops[numops].owtype = owtype;\n        ops[numops].bits = bits;\n        ops[numops].sign = sign;\n        numops++;\n\n        j += 3 - (opcode == BITFIELDOP_GET);\n    }\n\n    if (readonly) {\n        /* Lookup for read is ok if key doesn't exist, but errors\n         * if it's not a string. */\n        o = lookupKeyRead(c->db,c->argv[1]);\n        if (o != NULL && checkType(c,o,OBJ_STRING)) {\n            zfree(ops);\n            return;\n        }\n    } else {\n        if (flags & BITFIELD_FLAG_READONLY) {\n            zfree(ops);\n            addReplyError(c, \"BITFIELD_RO only supports the GET subcommand\");\n            return;\n        }\n\n        /* Lookup by making room up to the farthest bit reached by\n         * this operation. 
*/\n        if ((o = lookupStringForBitCommand(c,\n            highest_write_offset,&strOldSize,&strGrowSize)) == NULL) {\n            zfree(ops);\n            return;\n        }\n    }\n\n    addReplyArrayLen(c,numops);\n\n    /* Actually process the operations. */\n    for (j = 0; j < numops; j++) {\n        struct bitfieldOp *thisop = ops+j;\n\n        /* Execute the operation. */\n        if (thisop->opcode == BITFIELDOP_SET ||\n            thisop->opcode == BITFIELDOP_INCRBY)\n        {\n            /* SET and INCRBY: We handle both with the same code path\n             * for simplicity. SET return value is the previous value so\n             * we need to fetch & store as well. */\n\n            /* We need two different but very similar code paths for signed\n             * and unsigned operations, since the set of functions to get/set\n             * the integers and the variable types used are different. */\n            if (thisop->sign) {\n                int64_t oldval, newval, wrapped, retval;\n                int overflow;\n\n                oldval = getSignedBitfield(o->ptr,thisop->offset,\n                        thisop->bits);\n\n                if (thisop->opcode == BITFIELDOP_INCRBY) {\n                    overflow = checkSignedBitfieldOverflow(oldval,\n                            thisop->i64,thisop->bits,thisop->owtype,&wrapped);\n                    newval = overflow ? wrapped : oldval + thisop->i64;\n                    retval = newval;\n                } else {\n                    newval = thisop->i64;\n                    overflow = checkSignedBitfieldOverflow(newval,\n                            0,thisop->bits,thisop->owtype,&wrapped);\n                    if (overflow) newval = wrapped;\n                    retval = oldval;\n                }\n\n                /* If the overflow type is \"FAIL\", don't write and return\n                 * NULL to signal the condition. 
*/\n                if (!(overflow && thisop->owtype == BFOVERFLOW_FAIL)) {\n                    addReplyLongLong(c,retval);\n                    setSignedBitfield(o->ptr,thisop->offset,\n                                      thisop->bits,newval);\n\n                    if (strGrowSize || (oldval != newval))\n                        changes++;\n                } else {\n                    addReplyNull(c);\n                }\n            } else {\n                /* Initialization of 'wrapped' is required to avoid\n                * false-positive warning \"-Wmaybe-uninitialized\" */\n                uint64_t oldval, newval, retval, wrapped = 0;\n                int overflow;\n\n                oldval = getUnsignedBitfield(o->ptr,thisop->offset,\n                        thisop->bits);\n\n                if (thisop->opcode == BITFIELDOP_INCRBY) {\n                    newval = oldval + thisop->i64;\n                    overflow = checkUnsignedBitfieldOverflow(oldval,\n                            thisop->i64,thisop->bits,thisop->owtype,&wrapped);\n                    if (overflow) newval = wrapped;\n                    retval = newval;\n                } else {\n                    newval = thisop->i64;\n                    overflow = checkUnsignedBitfieldOverflow(newval,\n                            0,thisop->bits,thisop->owtype,&wrapped);\n                    if (overflow) newval = wrapped;\n                    retval = oldval;\n                }\n                /* If the overflow type is \"FAIL\", don't write and return\n                 * NULL to signal the condition. 
*/\n                if (!(overflow && thisop->owtype == BFOVERFLOW_FAIL)) {\n                    addReplyLongLong(c,retval);\n                    setUnsignedBitfield(o->ptr,thisop->offset,\n                                        thisop->bits,newval);\n\n                    if (strGrowSize || (oldval != newval))\n                        changes++;\n                } else {\n                    addReplyNull(c);\n                }\n            }\n        } else {\n            /* GET */\n            unsigned char buf[9];\n            long strlen = 0;\n            unsigned char *src = NULL;\n            char llbuf[LONG_STR_SIZE];\n\n            if (o != NULL)\n                src = getObjectReadOnlyString(o,&strlen,llbuf);\n\n            /* For GET we use a trick: before executing the operation\n             * copy up to 9 bytes to a local buffer, so that we can easily\n             * execute up to 64 bit operations that are at actual string\n             * object boundaries. */\n            memset(buf,0,9);\n            int i;\n            uint64_t byte = thisop->offset >> 3;\n            for (i = 0; i < 9; i++) {\n                if (src == NULL || i+byte >= (uint64_t)strlen) break;\n                buf[i] = src[i+byte];\n            }\n\n            /* Now operate on the copied buffer which is guaranteed\n             * to be zero-padded. */\n            if (thisop->sign) {\n                int64_t val = getSignedBitfield(buf,thisop->offset-(byte*8),\n                                            thisop->bits);\n                addReplyLongLong(c,val);\n            } else {\n                uint64_t val = getUnsignedBitfield(buf,thisop->offset-(byte*8),\n                                            thisop->bits);\n                addReplyLongLong(c,val);\n            }\n        }\n    }\n\n    if (changes) {\n\n        /* If this is not a new key (old size not 0) and size changed, then \n         * update the keysizes histogram. 
Otherwise, the histogram already \n         * updated in lookupStringForBitCommand() by calling dbAdd(). */\n        if ((strOldSize > 0) && (strGrowSize != 0))\n            updateKeysizesHist(c->db, OBJ_STRING, strOldSize, strOldSize + strGrowSize);\n        \n        keyModified(c,c->db,c->argv[1],o,1);\n        notifyKeyspaceEvent(NOTIFY_STRING,\"setbit\",c->argv[1],c->db->id);\n        server.dirty += changes;\n    }\n    zfree(ops);\n}\n\nvoid bitfieldCommand(client *c) {\n    bitfieldGeneric(c, BITFIELD_FLAG_NONE);\n}\n\nvoid bitfieldroCommand(client *c) {\n    bitfieldGeneric(c, BITFIELD_FLAG_READONLY);\n}\n\n#ifdef REDIS_TEST\n/* Test function to verify popcount implementations */\nint bitopsTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    /* Test data with known popcount values */\n    unsigned char test_data[] = {0xFF, 0x00, 0xAA, 0x55, 0xF0, 0x0F, 0x33, 0xCC};\n    int expected_bits = 8 + 0 + 4 + 4 + 4 + 4 + 4 + 4; /* = 32 bits */\n\n    long long result_regular = redisPopcount(test_data, sizeof(test_data));\n\n    printf(\"Regular popcount: %lld (expected: %d)\\n\", result_regular, expected_bits);\n\n    if (result_regular != expected_bits) {\n        printf(\"FAIL: Regular popcount mismatch\\n\");\n        return 1;\n    }\n\n#ifdef HAVE_AVX2\n    if (BITOP_USE_AVX2) {\n        long long result_avx2 = redisPopCountAvx2(test_data, sizeof(test_data));\n        printf(\"AVX2 popcount: %lld (expected: %d)\\n\", result_avx2, expected_bits);\n\n        if (result_avx2 != expected_bits) {\n            printf(\"FAIL: AVX2 popcount mismatch\\n\");\n            return 1;\n        }\n    } else {\n        printf(\"AVX2 not supported on this CPU\\n\");\n    }\n#else\n    printf(\"AVX2 not compiled in\\n\");\n#endif\n\n#ifdef HAVE_AVX512\n    if (BITOP_USE_AVX512) {\n        long long result_avx512 = redisPopCountAvx512(test_data, sizeof(test_data));\n        printf(\"AVX512 popcount: %lld (expected: 
%d)\\n\", result_avx512, expected_bits);\n\n        if (result_avx512 != expected_bits) {\n            printf(\"FAIL: AVX512 popcount mismatch\\n\");\n            return 1;\n        }\n    } else {\n        printf(\"AVX512 not supported on this CPU\\n\");\n    }\n#else\n    printf(\"AVX512 not compiled in\\n\");\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n    {\n        long long result_aarch64 = redisPopCountAarch64(test_data, sizeof(test_data));\n        printf(\"AArch64 NEON popcount: %lld (expected: %d)\\n\", result_aarch64, expected_bits);\n\n        if (result_aarch64 != expected_bits) {\n            printf(\"FAIL: AArch64 NEON popcount mismatch\\n\");\n            return 1;\n        }\n    }\n#else\n    printf(\"AArch64 NEON not available\\n\");\n#endif\n    printf(\"All popcount tests passed!\\n\");\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/blocked.c",
    "content": "/* blocked.c - generic support for blocking operations like BLPOP & WAIT.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n *\n * ---------------------------------------------------------------------------\n *\n * API:\n *\n * blockClient() sets the CLIENT_BLOCKED flag in the client, and sets the\n * specified block type 'btype' field to one of BLOCKED_* macros.\n *\n * unblockClient() unblocks the client doing the following:\n * 1) It calls the btype-specific function to clean up the state.\n * 2) It unblocks the client by unsetting the CLIENT_BLOCKED flag.\n * 3) It puts the client into a list of just unblocked clients that are\n *    processed ASAP in the beforeSleep() event loop callback, so that\n *    if there is some query buffer to process, we do it. This is also\n *    required because otherwise there is no 'readable' event fired, as we\n *    already read the pending commands. 
We also set the CLIENT_UNBLOCKED\n *    flag to remember the client is in the unblocked_clients list.\n *\n * processUnblockedClients() is called inside the beforeSleep() function\n * to process the query buffer from unblocked clients and remove the clients\n * from the blocked_clients queue.\n *\n * replyToBlockedClientTimedOut() is called by the cron function when\n * a blocked client reaches the specified timeout (if the timeout is set\n * to 0, no timeout is processed).\n * It usually just needs to send a reply to the client.\n *\n * When implementing a new type of blocking operation, the implementation\n * should modify unblockClient() and replyToBlockedClientTimedOut() in order\n * to handle the btype-specific behavior of these two functions.\n * If the blocking operation waits for certain keys to change state, the\n * clusterRedirectBlockedClientIfNeeded() function should also be updated.\n */\n\n#include \"server.h\"\n#include \"slowlog.h\"\n#include \"latency.h\"\n#include \"monotonic.h\"\n#include \"cluster_slot_stats.h\"\n\n/* forward declarations */\nstatic void unblockClientWaitingData(client *c);\nstatic void handleClientsBlockedOnKey(readyList *rl);\nstatic void unblockClientOnKey(client *c, robj *key);\nstatic void moduleUnblockClientOnKey(client *c, robj *key);\nstatic void releaseBlockedEntry(client *c, dictEntry *de, int remove_key);\n\nvoid initClientBlockingState(client *c) {\n    c->bstate.btype = BLOCKED_NONE;\n    c->bstate.timeout = 0;\n    c->bstate.keys = dictCreate(&objectKeyHeapPointerValueDictType);\n    c->bstate.numreplicas = 0;\n    c->bstate.reploffset = 0;\n    c->bstate.unblock_on_nokey = 0;\n    c->bstate.async_rm_call_handle = NULL;\n}\n\n/* Block a client for the specific operation type. Once the CLIENT_BLOCKED\n * flag is set, the client query buffer is no longer processed, but accumulated,\n * and will be processed when the client is unblocked. 
*/\nvoid blockClient(client *c, int btype) {\n    /* Master client should never be blocked unless pause or module */\n    serverAssert(!(c->flags & CLIENT_MASTER &&\n                   btype != BLOCKED_MODULE &&\n                   btype != BLOCKED_LAZYFREE &&\n                   btype != BLOCKED_POSTPONE &&\n                   btype != BLOCKED_POSTPONE_TRIM));\n\n    c->flags |= CLIENT_BLOCKED;\n    c->bstate.btype = btype;\n    if (!(c->flags & CLIENT_MODULE)) server.blocked_clients++; /* We count blocked client stats on regular clients and not on module clients */\n    server.blocked_clients_by_type[btype]++;\n    addClientToTimeoutTable(c);\n}\n\n/* Usually when a client is unblocked due to being blocked while processing some command\n * it will attempt to reprocess the command, which will update the statistics.\n * However, in case the client timed out, or in case a module blocked client is being unblocked,\n * the command will not be reprocessed and we need to update the stats.\n * This function will make updates to the commandstats, slowlog and monitors.*/\nvoid updateStatsOnUnblock(client *c, long blocked_us, long reply_us, int had_errors){\n    const ustime_t total_cmd_duration = c->duration + blocked_us + reply_us;\n    clusterSlotStatsAddCpuDuration(c, total_cmd_duration);\n    c->lastcmd->microseconds += total_cmd_duration;\n    c->lastcmd->calls++;\n    c->commands_processed++;\n    server.stat_numcommands++;\n    if (had_errors)\n        c->lastcmd->failed_calls++;\n    if (server.latency_tracking_enabled)\n        updateCommandLatencyHistogram(&(c->lastcmd->latency_histogram), total_cmd_duration*1000);\n    /* Log the command into the Slow log if needed. */\n    slowlogPushCurrentCommand(c, c->lastcmd, total_cmd_duration);\n    c->duration = 0;\n    /* Log the reply duration event. 
*/\n    latencyAddSampleIfNeeded(\"command-unblocking\",reply_us/1000);\n}\n\n/* This function is called in the beforeSleep() function of the event loop\n * in order to process the pending input buffer of clients that were\n * unblocked after a blocking operation. */\nvoid processUnblockedClients(void) {\n    listNode *ln;\n    client *c;\n\n    while (listLength(server.unblocked_clients)) {\n        ln = listFirst(server.unblocked_clients);\n        serverAssert(ln != NULL);\n        c = ln->value;\n        listDelNode(server.unblocked_clients,ln);\n        c->flags &= ~CLIENT_UNBLOCKED;\n\n        /* Reset the client for a new query, unless the client has a pending command to process. */\n        if (!(c->flags & CLIENT_PENDING_COMMAND)) {\n            freeClientOriginalArgv(c);\n            /* Clients that are not blocked on keys are not reprocessed so we must\n             * call reqresAppendResponse here (for clients blocked on key,\n             * unblockClientOnKey is called, which eventually calls processCommand,\n             * which calls reqresAppendResponse) */\n            prepareForNextCommand(c, 0);\n        }\n\n        if (c->flags & CLIENT_MODULE) {\n            if (!(c->flags & CLIENT_BLOCKED)) {\n                moduleCallCommandUnblockedHandler(c);\n            }\n            continue;\n        }\n\n        /* Process remaining data in the input buffer, unless the client\n         * is blocked again. Actually processInputBuffer() checks that the\n         * client is not blocked before proceeding, but things may change and\n         * the code is conceptually more correct this way. */\n        if (!(c->flags & CLIENT_BLOCKED)) {\n            /* If we have a queued command, execute it now. 
*/\n            if (processPendingCommandAndInputBuffer(c) == C_ERR) {\n                c = NULL;\n            }\n        }\n        beforeNextClient(c);\n    }\n}\n\n/* This function will schedule the client for reprocessing at a safe time.\n *\n * This is useful when a client was blocked for some reason (blocking operation,\n * CLIENT PAUSE, or whatever), because it may end with some accumulated query\n * buffer that needs to be processed ASAP:\n *\n * 1. When a client is blocked, its readable handler is still active.\n * 2. However in this case it only gets data into the query buffer, but the\n *    query is not parsed or executed once there is enough to proceed as\n *    usual (because the client is blocked... so we can't execute commands).\n * 3. When the client is unblocked, without this function, the client would\n *    have to write some query in order for the readable handler to finally\n *    call processQueryBuffer*() on it.\n * 4. With this function instead we can put the client in a queue that will\n *    process it for queries ready to be executed at a safe time.\n */\nvoid queueClientForReprocessing(client *c) {\n    /* The client may already be in the unblocked list because of a previous\n     * blocking operation, don't add it back into the list multiple times. */\n    if (!(c->flags & CLIENT_UNBLOCKED)) {\n        c->flags |= CLIENT_UNBLOCKED;\n        listAddNodeTail(server.unblocked_clients,c);\n    }\n}\n\n/* Unblock a client calling the right function depending on the kind\n * of operation the client is blocking for. 
*/\nvoid unblockClient(client *c, int queue_for_reprocessing) {\n    if (c->bstate.btype == BLOCKED_LIST ||\n        c->bstate.btype == BLOCKED_ZSET ||\n        c->bstate.btype == BLOCKED_STREAM) {\n        unblockClientWaitingData(c);\n    } else if (c->bstate.btype == BLOCKED_WAIT || c->bstate.btype == BLOCKED_WAITAOF) {\n        unblockClientWaitingReplicas(c);\n    } else if (c->bstate.btype == BLOCKED_MODULE) {\n        if (moduleClientIsBlockedOnKeys(c)) unblockClientWaitingData(c);\n        unblockClientFromModule(c);\n    } else if (c->bstate.btype == BLOCKED_POSTPONE || c->bstate.btype == BLOCKED_POSTPONE_TRIM) {\n        listDelNode(server.postponed_clients,c->postponed_list_node);\n        c->postponed_list_node = NULL;\n    } else if (c->bstate.btype == BLOCKED_SHUTDOWN) {\n        /* No special cleanup. */\n    } else if (c->bstate.btype == BLOCKED_LAZYFREE) {\n        /* No special cleanup. */\n    } else {\n        serverPanic(\"Unknown btype in unblockClient().\");\n    }\n\n\n    /* Clear the flags, and put the client in the unblocked list so that\n     * we'll process new commands in its query buffer ASAP. */\n    if (!(c->flags & CLIENT_MODULE)) server.blocked_clients--; /* We count blocked client stats on regular clients and not on module clients */\n    server.blocked_clients_by_type[c->bstate.btype]--;\n    c->flags &= ~CLIENT_BLOCKED;\n    c->bstate.btype = BLOCKED_NONE;\n    c->bstate.unblock_on_nokey = 0;\n    removeClientFromTimeoutTable(c);\n    if (queue_for_reprocessing) queueClientForReprocessing(c);\n}\n\n/* Check if the specified client can be safely timed out using\n * unblockClientOnTimeout(). 
*/\nint blockedClientMayTimeout(client *c) {\n    if (c->bstate.btype == BLOCKED_MODULE) {\n        return moduleBlockedClientMayTimeout(c);\n    }\n\n    if (c->bstate.btype == BLOCKED_LIST ||\n        c->bstate.btype == BLOCKED_ZSET ||\n        c->bstate.btype == BLOCKED_STREAM ||\n        c->bstate.btype == BLOCKED_WAIT ||\n        c->bstate.btype == BLOCKED_WAITAOF)\n    {\n        return 1;\n    }\n    return 0;\n}\n\n/* This function gets called when a blocked client timed out in order to\n * send it a reply of some kind. After this function is called,\n * unblockClient() will be called with the same client as an argument. */\nvoid replyToBlockedClientTimedOut(client *c) {\n    if (c->bstate.btype == BLOCKED_LAZYFREE) {\n        /* SFLUSH: reply with empty array, FLUSH*: reply with OK */\n        if (c->cmd && c->cmd->proc == sflushCommand)\n            addReplyArrayLen(c, 0);\n        else\n            addReply(c, shared.ok); /* No reason for lazy-free to fail */\n    } else if (c->bstate.btype == BLOCKED_LIST ||\n        c->bstate.btype == BLOCKED_ZSET ||\n        c->bstate.btype == BLOCKED_STREAM) {\n        addReplyNullArray(c);\n        updateStatsOnUnblock(c, 0, 0, 0);\n    } else if (c->bstate.btype == BLOCKED_WAIT) {\n        addReplyLongLong(c,replicationCountAcksByOffset(c->bstate.reploffset));\n    } else if (c->bstate.btype == BLOCKED_WAITAOF) {\n        addReplyArrayLen(c,2);\n        addReplyLongLong(c,server.fsynced_reploff >= c->bstate.reploffset);\n        addReplyLongLong(c,replicationCountAOFAcksByOffset(c->bstate.reploffset));\n    } else if (c->bstate.btype == BLOCKED_MODULE) {\n        moduleBlockedClientTimedOut(c);\n    } else {\n        serverPanic(\"Unknown btype in replyToBlockedClientTimedOut().\");\n    }\n}\n\n/* If one or more clients are blocked on the SHUTDOWN command, this function\n * sends them an error reply and unblocks them. 
*/\nvoid replyToClientsBlockedOnShutdown(void) {\n    if (server.blocked_clients_by_type[BLOCKED_SHUTDOWN] == 0) return;\n    listNode *ln;\n    listIter li;\n    listRewind(server.clients, &li);\n    while((ln = listNext(&li))) {\n        client *c = listNodeValue(ln);\n        if (c->flags & CLIENT_BLOCKED && c->bstate.btype == BLOCKED_SHUTDOWN) {\n            c->duration = 0;\n            addReplyError(c, \"Errors trying to SHUTDOWN. Check logs.\");\n            unblockClient(c, 1);\n        }\n    }\n}\n\n/* Mass-unblock clients because something changed in the instance that makes\n * blocking no longer safe. For example, clients blocked in list operations\n * in an instance which turns from master to slave are unsafe, so this function\n * is called when a master turns into a slave.\n *\n * The semantics are to send an -UNBLOCKED error to the client, disconnecting\n * it at the same time. */\nvoid disconnectAllBlockedClients(void) {\n    listNode *ln;\n    listIter li;\n\n    listRewind(server.clients,&li);\n    while((ln = listNext(&li))) {\n        client *c = listNodeValue(ln);\n\n        if (c->flags & CLIENT_BLOCKED) {\n            /* POSTPONEd clients are an exception, when they'll be unblocked, the\n             * command processing will start from scratch, and the command will\n             * be either executed or rejected (unlike LIST blocked clients for\n             * which the command is already in progress in a way). 
*/\n            if (c->bstate.btype == BLOCKED_POSTPONE || c->bstate.btype == BLOCKED_POSTPONE_TRIM)\n                continue;\n\n            if (c->bstate.btype == BLOCKED_LAZYFREE) {\n                /* SFLUSH: reply with empty array, FLUSH*: reply with OK */\n                if (c->cmd && c->cmd->proc == sflushCommand)\n                    addReplyArrayLen(c, 0);\n                else\n                    addReply(c, shared.ok);\n                updateStatsOnUnblock(c, 0, 0, 0);\n                c->flags &= ~CLIENT_PENDING_COMMAND;\n                unblockClient(c, 1);\n            } else {\n\n                unblockClientOnError(c,\n                                     \"-UNBLOCKED force unblock from blocking operation, \"\n                                     \"instance state changed (master -> replica?)\");\n            }\n            c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n        }\n    }\n}\n\n/* This function should be called by Redis every time a single command,\n * a MULTI/EXEC block, or a Lua script, terminates its execution after\n * being called by a client. It handles serving clients blocked in all scenarios\n * where a specific key access requires blocking until that key is available.\n *\n * All the keys with at least one client blocked that are signaled as ready\n * are accumulated into the server.ready_keys list. This function will run\n * the list and will serve clients accordingly.\n * Note that the function will iterate again and again (for example as a result of serving BLMOVE\n * we can have new blocking clients to serve because of the PUSH side of BLMOVE.)\n *\n * This function is normally \"fair\", that is, it will serve clients\n * using a FIFO behavior. However this fairness is violated in certain\n * edge cases, that is, when we have clients blocked at the same time\n * in a sorted set and in a list, for the same key (a very odd thing to\n * do client side, indeed!). 
Because mismatching clients (blocking for\n * a different type compared to the current key type) are moved in the\n * other side of the linked list. However as long as the key starts to\n * be used only for a single type, like virtually any Redis application will\n * do, the function is already fair. */\nvoid handleClientsBlockedOnKeys(void) {\n\n    /* In case we are already in the process of unblocking clients we should\n     * not make a recursive call, in order to prevent breaking fairness. */\n    static int in_handling_blocked_clients = 0;\n    if (in_handling_blocked_clients)\n        return;\n    in_handling_blocked_clients = 1;\n\n    /* This function is called only when also_propagate is in its basic state\n     * (i.e. not from call(), module context, etc.) */\n    serverAssert(server.also_propagate.numops == 0);\n\n    /* If a command being unblocked causes another command to get unblocked,\n     * like a BLMOVE would do, then the new unblocked command will get processed\n     * right away rather than wait for later. */\n    while(listLength(server.ready_keys) != 0) {\n        list *l;\n\n        /* Point server.ready_keys to a fresh list and save the current one\n         * locally. This way as we run the old list we are free to call\n         * signalKeyAsReady() that may push new elements in server.ready_keys\n         * when handling clients blocked into BLMOVE. */\n        l = server.ready_keys;\n        server.ready_keys = listCreate();\n\n        while(listLength(l) != 0) {\n            listNode *ln = listFirst(l);\n            readyList *rl = ln->value;\n\n            /* First of all remove this key from db->ready_keys so that\n             * we can safely call signalKeyAsReady() against this key. */\n            dictDelete(rl->db->ready_keys,rl->key);\n\n            handleClientsBlockedOnKey(rl);\n\n            /* Free this item. 
*/\n            decrRefCount(rl->key);\n            zfree(rl);\n            listDelNode(l,ln);\n        }\n        listRelease(l); /* We have the new list in place at this point. */\n    }\n    in_handling_blocked_clients = 0;\n}\n\n/* Set a client in blocking mode for the specified key, with the specified timeout.\n * The 'type' argument is BLOCKED_LIST, BLOCKED_ZSET or BLOCKED_STREAM, depending on the kind of\n * operation we are waiting for on an otherwise empty key in order to awake the client.\n * The client is blocked for all the 'numkeys' keys as in the 'keys' argument.\n * The client will be unblocked as soon as one of the keys in the 'keys' argument is updated.\n * The parameter unblock_on_nokey can be used to force the client to be unblocked even in case the key\n * is updated to become unavailable, either by type change (override), deletion or swapdb. */\nvoid blockForKeys(client *c, int btype, robj **keys, int numkeys, mstime_t timeout, int unblock_on_nokey) {\n    dictEntry *db_blocked_entry, *db_blocked_existing_entry, *client_blocked_entry;\n    list *l;\n    int j;\n\n    if (!(c->flags & CLIENT_REEXECUTING_COMMAND)) {\n        /* If the client is re-processing the command, we do not set the timeout\n         * because we need to retain the client's original timeout. */\n        c->bstate.timeout = timeout;\n    }\n\n    for (j = 0; j < numkeys; j++) {\n        /* If the key already exists in the dictionary, ignore it. 
*/\n        if (!(client_blocked_entry = dictAddRaw(c->bstate.keys,keys[j],NULL))) {\n            continue;\n        }\n        incrRefCount(keys[j]);\n\n        /* And in the other \"side\", to map keys -> clients */\n        db_blocked_entry = dictAddRaw(c->db->blocking_keys,keys[j], &db_blocked_existing_entry);\n\n        /* In case key[j] did not have blocking clients yet, we need to create a new list */\n        if (db_blocked_entry != NULL) {\n            l = listCreate();\n            dictSetVal(c->db->blocking_keys, db_blocked_entry, l);\n            incrRefCount(keys[j]);\n        } else {\n            l = dictGetVal(db_blocked_existing_entry);\n        }\n        listAddNodeTail(l,c);\n        dictSetVal(c->bstate.keys,client_blocked_entry,listLast(l));\n\n        /* We need to add the key to blocking_keys_unblock_on_nokey, if the client\n         * wants to be awakened if key is deleted (like XREADGROUP) */\n        if (unblock_on_nokey) {\n            db_blocked_entry = dictAddRaw(c->db->blocking_keys_unblock_on_nokey, keys[j], &db_blocked_existing_entry);\n            if (db_blocked_entry) {\n                incrRefCount(keys[j]);\n                dictSetUnsignedIntegerVal(db_blocked_entry, 1);\n            } else {\n                dictIncrUnsignedIntegerVal(db_blocked_existing_entry, 1);\n            }\n        }\n    }\n    c->bstate.unblock_on_nokey = unblock_on_nokey;\n    /* Currently we assume key blocking will require reprocessing the command.\n     * However in case of modules, they have a different way to handle the reprocessing\n     * which does not require setting the pending command flag */\n    if (btype != BLOCKED_MODULE)\n        c->flags |= CLIENT_PENDING_COMMAND;\n    blockClient(c,btype);\n}\n\n/* Helper function to unblock a client that's waiting in a blocking operation such as BLPOP.\n * Internal function for unblockClient() */\nstatic void unblockClientWaitingData(client *c) {\n    dictEntry *de;\n    dictIterator di;\n\n    if 
(dictSize(c->bstate.keys) == 0)\n        return;\n\n    dictInitIterator(&di, c->bstate.keys);\n    /* The client may wait for multiple keys, so unblock it for every key. */\n    while((de = dictNext(&di)) != NULL) {\n        releaseBlockedEntry(c, de, 0);\n    }\n    dictResetIterator(&di);\n    dictEmpty(c->bstate.keys, NULL);\n}\n\nstatic blocking_type getBlockedTypeByType(int type) {\n    switch (type) {\n        case OBJ_LIST: return BLOCKED_LIST;\n        case OBJ_ZSET: return BLOCKED_ZSET;\n        case OBJ_MODULE: return BLOCKED_MODULE;\n        case OBJ_STREAM: return BLOCKED_STREAM;\n        default: return BLOCKED_NONE;\n    }\n}\n\n/* If the specified key has clients blocked waiting for list pushes, this\n * function will put the key reference into the server.ready_keys list.\n * Note that db->ready_keys is a hash table that allows us to avoid putting\n * the same key again and again in the list in case of multiple pushes\n * made by a script or in the context of MULTI/EXEC.\n *\n * The list will be finally processed by handleClientsBlockedOnKeys() */\nstatic void signalKeyAsReadyLogic(redisDb *db, robj *key, int type, int deleted) {\n    readyList *rl;\n\n    /* Quick returns. */\n    int btype = getBlockedTypeByType(type);\n    if (btype == BLOCKED_NONE) {\n        /* The type can never block. */\n        return;\n    }\n    if (!server.blocked_clients_by_type[btype] &&\n        !server.blocked_clients_by_type[BLOCKED_MODULE]) {\n        /* No clients block on this type. Note: Blocked modules are represented\n         * by BLOCKED_MODULE, even if the intention is to wake up by normal\n         * types (list, zset, stream), so we need to check that there are no\n         * blocked modules before we do a quick return here. */\n        return;\n    }\n\n    if (deleted) {\n        /* Key deleted and no clients blocking for this key? No need to queue it. 
*/\n        if (dictFind(db->blocking_keys_unblock_on_nokey,key) == NULL)\n            return;\n        /* Note: if we made it here it means the key is also present in db->blocking_keys */\n    } else {\n        /* No clients blocking for this key? No need to queue it. */\n        if (dictFind(db->blocking_keys,key) == NULL)\n            return;\n    }\n\n    dictEntry *de, *existing;\n    de = dictAddRaw(db->ready_keys, key, &existing);\n    if (de) {\n        /* We add the key in the db->ready_keys dictionary in order\n         * to avoid adding it multiple times into a list with a simple O(1)\n         * check. */\n        incrRefCount(key);\n    } else {\n        /* Key was already signaled? No need to queue it again. */\n        return;\n    }\n\n    /* Ok, we need to queue this key into server.ready_keys. */\n    rl = zmalloc(sizeof(*rl));\n    rl->key = key;\n    rl->db = db;\n    incrRefCount(key);\n    listAddNodeTail(server.ready_keys,rl);\n}\n\n/* Helper function to wrap the logic of removing a client blocked key entry\n * In this case we would like to do the following:\n * 1. unlink the client from the global DB locked client list\n * 2. remove the entry from the global db blocking list in case the list is empty\n * 3. in case the global list is empty, also remove the key from the global dict of keys\n *    which should trigger unblock on key deletion\n * 4. remove key from the client blocking keys list - NOTE, since client can be blocked on lots of keys,\n *    but unblocked when only one of them is triggered, we would like to avoid deleting each key separately\n *    and instead clear the dictionary in one-shot. 
this is why the remove_key argument is provided\n *    to support this logic in unblockClientWaitingData\n */\nstatic void releaseBlockedEntry(client *c, dictEntry *de, int remove_key) {\n    list *l;\n    listNode *pos;\n    void *key;\n    dictEntry *unblock_on_nokey_entry;\n\n    key = dictGetKey(de);\n    pos = dictGetVal(de);\n    /* Remove this client from the list of clients waiting for this key. */\n    l = dictFetchValue(c->db->blocking_keys, key);\n    serverAssertWithInfo(c,key,l != NULL);\n    listUnlinkNode(l,pos);\n    /* If the list is empty we need to remove it to avoid wasting memory.\n     * We will also remove the key (if it exists) from the blocking_keys_unblock_on_nokey dict.\n     * However, in case the list is not empty, we will still have to perform reference accounting\n     * on the blocking_keys_unblock_on_nokey and delete the entry in case of zero reference.\n     * Why? Because it is possible that some more clients are blocked on the same key but without\n     * requiring to be triggered on key deletion; we do not want these to be later triggered by\n     * signalDeletedKeyAsReady. 
*/\n    if (listLength(l) == 0) {\n        dictDelete(c->db->blocking_keys, key);\n        dictDelete(c->db->blocking_keys_unblock_on_nokey,key);\n    } else if (c->bstate.unblock_on_nokey) {\n        unblock_on_nokey_entry = dictFind(c->db->blocking_keys_unblock_on_nokey,key);\n        /* it is not possible to have a client blocked on nokey with no matching entry */\n        serverAssertWithInfo(c,key,unblock_on_nokey_entry != NULL);\n        if (!dictIncrUnsignedIntegerVal(unblock_on_nokey_entry, -1)) {\n            /* in case the count is zero, we can delete the entry */\n             dictDelete(c->db->blocking_keys_unblock_on_nokey,key);\n        }\n    }\n    if (remove_key)\n        dictDelete(c->bstate.keys, key);\n}\n\nvoid signalKeyAsReady(redisDb *db, robj *key, int type) {\n    signalKeyAsReadyLogic(db, key, type, 0);\n}\n\nvoid signalDeletedKeyAsReady(redisDb *db, robj *key, int type) {\n    signalKeyAsReadyLogic(db, key, type, 1);\n}\n\n/* Helper function for handleClientsBlockedOnKeys(). This function is called\n * whenever a key is ready. we iterate over all the clients blocked on this key\n * and try to re-execute the command (in case the key is still available). */\nstatic void handleClientsBlockedOnKey(readyList *rl) {\n\n    /* We serve clients in the same order they blocked for\n     * this key, from the first blocked to the last. */\n    dictEntry *de = dictFind(rl->db->blocking_keys,rl->key);\n\n    if (de) {\n        list *clients = dictGetVal(de);\n        listNode *ln;\n        listIter li;\n        listRewind(clients,&li);\n\n        /* Avoid processing more than the initial count so that we're not stuck\n         * in an endless loop in case the reprocessing of the command blocks again. */\n        long count = listLength(clients);\n        while ((ln = listNext(&li)) && count--) {\n            client *receiver = listNodeValue(ln);\n            kvobj *o = lookupKeyReadWithFlags(rl->db, rl->key, LOOKUP_NOEFFECTS);\n            /* 1. 
In case a new key was added/touched we need to verify it satisfies the\n             *    blocked type, since we might process the wrong key type.\n             * 2. We want to serve clients blocked on module keys\n             *    regardless of the object type: we don't know what the\n             *    module is trying to accomplish right now.\n             * 3. In case of an XREADGROUP call we will want to unblock on any change in object type\n             *    or in case the key was deleted, since the group is no longer valid. */\n            if ((o != NULL && (receiver->bstate.btype == getBlockedTypeByType(o->type))) ||\n                (o != NULL && (receiver->bstate.btype == BLOCKED_MODULE)) ||\n                (receiver->bstate.unblock_on_nokey))\n            {\n                if (receiver->bstate.btype != BLOCKED_MODULE)\n                    unblockClientOnKey(receiver, rl->key);\n                else\n                    moduleUnblockClientOnKey(receiver, rl->key);\n            }\n        }\n    }\n}\n\n/* Block a client due to the WAIT command */\nvoid blockForReplication(client *c, mstime_t timeout, long long offset, long numreplicas) {\n    c->bstate.timeout = timeout;\n    c->bstate.reploffset = offset;\n    c->bstate.numreplicas = numreplicas;\n    listAddNodeHead(server.clients_waiting_acks,c);\n    blockClient(c,BLOCKED_WAIT);\n}\n\n/* Block a client due to the WAITAOF command */\nvoid blockForAofFsync(client *c, mstime_t timeout, long long offset, int numlocal, long numreplicas) {\n    c->bstate.timeout = timeout;\n    c->bstate.reploffset = offset;\n    c->bstate.numreplicas = numreplicas;\n    c->bstate.numlocal = numlocal;\n    listAddNodeHead(server.clients_waiting_acks,c);\n    blockClient(c,BLOCKED_WAITAOF);\n}\n\n/* Postpone a client from executing a command. For example, the server might be busy\n * and request to avoid processing client commands; they will be processed later,\n * when it is ready to accept them. 
*/\nvoid blockPostponeClientWithType(client *c, int btype) {\n    serverAssert(btype == BLOCKED_POSTPONE || btype == BLOCKED_POSTPONE_TRIM);\n    c->bstate.timeout = 0;\n    blockClient(c, btype);\n    listAddNodeTail(server.postponed_clients, c);\n    c->postponed_list_node = listLast(server.postponed_clients);\n    /* Mark this client to execute its command */\n    c->flags |= CLIENT_PENDING_COMMAND;\n}\n\n/* Postpone a client from executing a command. */\nvoid blockPostponeClient(client *c) {\n    blockPostponeClientWithType(c, BLOCKED_POSTPONE);\n}\n\n/* Block client due to the SHUTDOWN command */\nvoid blockClientShutdown(client *c) {\n    blockClient(c, BLOCKED_SHUTDOWN);\n}\n\n/* Unblock a client once a specific key became available for it.\n * This function will remove the client from the list of clients blocked on this key\n * and also remove the key from the dictionary of keys this client is blocked on.\n * In case the client has a command pending it will process it immediately. */\nstatic void unblockClientOnKey(client *c, robj *key) {\n    dictEntry *de;\n\n    de = dictFind(c->bstate.keys, key);\n    releaseBlockedEntry(c, de, 1);\n\n    /* Only in case of blocking API calls we might be blocked on several keys;\n       however we should force unblock from all the blocking keys */\n    serverAssert(c->bstate.btype == BLOCKED_STREAM ||\n                c->bstate.btype == BLOCKED_LIST   ||\n                c->bstate.btype == BLOCKED_ZSET);\n\n    /* We need to unblock the client before calling processCommandAndResetClient\n     * because it checks the CLIENT_BLOCKED flag */\n    unblockClient(c, 0);\n    /* In case this client was blocked on keys during a command\n     * we need to reprocess the command */\n    if (c->flags & CLIENT_PENDING_COMMAND) {\n        c->flags &= ~CLIENT_PENDING_COMMAND;\n        c->flags |= CLIENT_REEXECUTING_COMMAND;\n        /* We want the command processing and the unblock handler (see RM_Call 'K' option)\n         * to run 
atomically, this is why we must enter the execution unit here before\n         * running the command, and exit the execution unit after calling the unblock handler (if exists).\n         * Notice that we also must set the current client so it will be available\n         * when we will try to send the client side caching notification (done on 'afterCommand'). */\n        client *old_client = server.current_client;\n        server.current_client = c;\n        enterExecutionUnit(1, 0);\n        processCommandAndResetClient(c);\n        if (!(c->flags & CLIENT_BLOCKED)) {\n            if (c->flags & CLIENT_MODULE) {\n                moduleCallCommandUnblockedHandler(c);\n            } else {\n                queueClientForReprocessing(c);\n            }\n        }\n        exitExecutionUnit();\n        afterCommand(c);\n        /* Clear the CLIENT_REEXECUTING_COMMAND flag after the proc is executed. */\n        c->flags &= ~CLIENT_REEXECUTING_COMMAND;\n        server.current_client = old_client;\n    }\n}\n\n/* Unblock a client blocked on the specific key from module context.\n * This function will try to serve the module call, and in case it succeeds,\n * it will add the client to the list of module unblocked clients which will\n * be processed in moduleHandleBlockedClients. 
*/\nstatic void moduleUnblockClientOnKey(client *c, robj *key) {\n    long long prev_error_replies = server.stat_total_error_replies;\n    client *old_client = server.current_client;\n    server.current_client = c;\n    monotime replyTimer;\n    elapsedStart(&replyTimer);\n\n    if (moduleTryServeClientBlockedOnKey(c, key)) {\n        updateStatsOnUnblock(c, 0, elapsedUs(replyTimer), server.stat_total_error_replies != prev_error_replies);\n        moduleUnblockClient(c);\n    }\n    /* We need to call afterCommand even if the client was not unblocked\n     * in order to propagate any changes that could have been done inside\n     * moduleTryServeClientBlockedOnKey */\n    afterCommand(c);\n    server.current_client = old_client;\n}\n\n/* Unblock a client which is currently blocked and has reached its timeout.\n * The implementation will first reply to the blocked client with a null response\n * or, in the case of a module blocked client, the timeout callback will be used.\n * In this case, since we might have a command pending,\n * we want to remove the pending flag to indicate we already responded to the\n * command with a timeout reply. */\nvoid unblockClientOnTimeout(client *c) {\n    /* The client has been unblocked (in the moduleUnblocked list), return ASAP. 
*/\n    if (c->bstate.btype == BLOCKED_MODULE && isModuleClientUnblocked(c)) return;\n\n    replyToBlockedClientTimedOut(c);\n    if (c->flags & CLIENT_PENDING_COMMAND)\n        c->flags &= ~CLIENT_PENDING_COMMAND;\n    unblockClient(c, 1);\n}\n\n/* Unblock a client which is currently blocked with an error.\n * If err_str is provided it will be used to reply to the blocked client */\nvoid unblockClientOnError(client *c, const char *err_str) {\n    if (err_str)\n        addReplyError(c, err_str);\n    updateStatsOnUnblock(c, 0, 0, 1);\n    if (c->flags & CLIENT_PENDING_COMMAND)\n        c->flags &= ~CLIENT_PENDING_COMMAND;\n    unblockClient(c, 1);\n}\n\nvoid blockedBeforeSleep(void) {\n    /* Handle precise timeouts of blocked clients. */\n    handleBlockedClientsTimeout();\n\n    /* Handle expired pending stream entries. */\n    handleClaimableStreamEntries();\n\n    /* Unblock all the clients blocked for synchronous replication\n     * in WAIT or WAITAOF. */\n    if (listLength(server.clients_waiting_acks))\n        processClientsWaitingReplicas();\n\n    /* Try to process blocked clients every once in a while.\n     *\n     * Example: A module calls RM_SignalKeyAsReady from within a timer callback\n     * (so we don't visit processCommand() at all).\n     *\n     * This may unblock clients, so it must be done before processUnblockedClients */\n    handleClientsBlockedOnKeys();\n\n    /* Check if there are clients unblocked by modules that implement\n     * blocking commands. */\n    if (moduleCount())\n        moduleHandleBlockedClients();\n\n    /* Try to process pending commands for clients that were just unblocked. */\n    if (listLength(server.unblocked_clients))\n        processUnblockedClients();\n}\n"
  },
  {
    "path": "src/call_reply.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"call_reply.h\"\n\n#define REPLY_FLAG_ROOT (1<<0)\n#define REPLY_FLAG_PARSED (1<<1)\n#define REPLY_FLAG_RESP3 (1<<2)\n\n/* --------------------------------------------------------\n * An opaque struct used to parse a RESP protocol reply and\n * represent it. Used when parsing replies such as in RM_Call\n * or Lua scripts.\n * -------------------------------------------------------- */\nstruct CallReply {\n    void *private_data;\n    sds original_proto; /* Available only for root reply. */\n    const char *proto;\n    size_t proto_len;\n    int type;       /* REPLY_... */\n    int flags;      /* REPLY_FLAG... */\n    size_t len;     /* Length of a string, or the number elements in an array. */\n    union {\n        const char *str; /* String pointer for string and error replies. This\n                          * does not need to be freed, always points inside\n                          * a reply->proto buffer of the reply object or, in\n                          * case of array elements, of parent reply objects. */\n        struct {\n            const char *str;\n            const char *format;\n        } verbatim_str;  /* Reply value for verbatim string */\n        long long ll;    /* Reply value for integer reply. */\n        double d;        /* Reply value for double reply. */\n        struct CallReply *array; /* Array of sub-reply elements. 
used for set, array, map, and attribute */\n    } val;\n    list *deferred_error_list;   /* list of errors in sds form or NULL */\n    struct CallReply *attribute; /* attribute reply, NULL if not exists */\n};\n\nstatic void callReplySetSharedData(CallReply *rep, int type, const char *proto, size_t proto_len, int extra_flags) {\n    rep->type = type;\n    rep->proto = proto;\n    rep->proto_len = proto_len;\n    rep->flags |= extra_flags;\n}\n\nstatic void callReplyNull(void *ctx, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_NULL, proto, proto_len, REPLY_FLAG_RESP3);\n}\n\nstatic void callReplyNullBulkString(void *ctx, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_NULL, proto, proto_len, 0);\n}\n\nstatic void callReplyNullArray(void *ctx, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_NULL, proto, proto_len, 0);\n}\n\nstatic void callReplyBulkString(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_STRING, proto, proto_len, 0);\n    rep->len = len;\n    rep->val.str = str;\n}\n\nstatic void callReplyError(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_ERROR, proto, proto_len, 0);\n    rep->len = len;\n    rep->val.str = str;\n}\n\nstatic void callReplySimpleStr(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_STRING, proto, proto_len, 0);\n    rep->len = len;\n    rep->val.str = str;\n}\n\nstatic void callReplyLong(void *ctx, long long val, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, 
REDISMODULE_REPLY_INTEGER, proto, proto_len, 0);\n    rep->val.ll = val;\n}\n\nstatic void callReplyDouble(void *ctx, double val, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_DOUBLE, proto, proto_len, REPLY_FLAG_RESP3);\n    rep->val.d = val;\n}\n\nstatic void callReplyVerbatimString(void *ctx, const char *format, const char *str, size_t len, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_VERBATIM_STRING, proto, proto_len, REPLY_FLAG_RESP3);\n    rep->len = len;\n    rep->val.verbatim_str.str = str;\n    rep->val.verbatim_str.format = format;\n}\n\nstatic void callReplyBigNumber(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_BIG_NUMBER, proto, proto_len, REPLY_FLAG_RESP3);\n    rep->len = len;\n    rep->val.str = str;\n}\n\nstatic void callReplyBool(void *ctx, int val, const char *proto, size_t proto_len) {\n    CallReply *rep = ctx;\n    callReplySetSharedData(rep, REDISMODULE_REPLY_BOOL, proto, proto_len, REPLY_FLAG_RESP3);\n    rep->val.ll = val;\n}\n\nstatic void callReplyParseCollection(ReplyParser *parser, CallReply *rep, size_t len, const char *proto, size_t elements_per_entry) {\n    rep->len = len;\n    rep->val.array = zcalloc(elements_per_entry * len * sizeof(CallReply));\n    for (size_t i = 0; i < len * elements_per_entry; i += elements_per_entry) {\n        for (size_t j = 0 ; j < elements_per_entry ; ++j) {\n            rep->val.array[i + j].private_data = rep->private_data;\n            parseReply(parser, rep->val.array + i + j);\n            rep->val.array[i + j].flags |= REPLY_FLAG_PARSED;\n            if (rep->val.array[i + j].flags & REPLY_FLAG_RESP3) {\n                /* If one of the sub-replies is RESP3, then the current reply is also RESP3. 
*/\n                rep->flags |= REPLY_FLAG_RESP3;\n            }\n        }\n    }\n    rep->proto = proto;\n    rep->proto_len = parser->curr_location - proto;\n}\n\nstatic void callReplyAttribute(ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    CallReply *rep = ctx;\n    rep->attribute = zcalloc(sizeof(CallReply));\n\n    /* Continue parsing the attribute reply */\n    rep->attribute->len = len;\n    rep->attribute->type = REDISMODULE_REPLY_ATTRIBUTE;\n    callReplyParseCollection(parser, rep->attribute, len, proto, 2);\n    rep->attribute->flags |= REPLY_FLAG_PARSED | REPLY_FLAG_RESP3;\n    rep->attribute->private_data = rep->private_data;\n\n    /* Continue parsing the reply */\n    parseReply(parser, rep);\n\n    /* In this case we need to fix the proto address and len, it should start from the attribute */\n    rep->proto = proto;\n    rep->proto_len = parser->curr_location - proto;\n    rep->flags |= REPLY_FLAG_RESP3;\n}\n\nstatic void callReplyArray(ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    CallReply *rep = ctx;\n    rep->type = REDISMODULE_REPLY_ARRAY;\n    callReplyParseCollection(parser, rep, len, proto, 1);\n}\n\nstatic void callReplySet(ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    CallReply *rep = ctx;\n    rep->type = REDISMODULE_REPLY_SET;\n    callReplyParseCollection(parser, rep, len, proto, 1);\n    rep->flags |= REPLY_FLAG_RESP3;\n}\n\nstatic void callReplyMap(ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    CallReply *rep = ctx;\n    rep->type = REDISMODULE_REPLY_MAP;\n    callReplyParseCollection(parser, rep, len, proto, 2);\n    rep->flags |= REPLY_FLAG_RESP3;\n}\n\nstatic void callReplyParseError(void *ctx) {\n    CallReply *rep = ctx;\n    rep->type = REDISMODULE_REPLY_UNKNOWN;\n}\n\n/* Recursively free the current call reply and its sub-replies. 
*/\nstatic void freeCallReplyInternal(CallReply *rep) {\n    if (rep->type == REDISMODULE_REPLY_ARRAY || rep->type == REDISMODULE_REPLY_SET) {\n        for (size_t i = 0 ; i < rep->len ; ++i) {\n            freeCallReplyInternal(rep->val.array + i);\n        }\n        zfree(rep->val.array);\n    }\n\n    if (rep->type == REDISMODULE_REPLY_MAP || rep->type == REDISMODULE_REPLY_ATTRIBUTE) {\n        for (size_t i = 0 ; i < rep->len ; ++i) {\n            freeCallReplyInternal(rep->val.array + i * 2);\n            freeCallReplyInternal(rep->val.array + i * 2 + 1);\n        }\n        zfree(rep->val.array);\n    }\n\n    if (rep->attribute) {\n        freeCallReplyInternal(rep->attribute);\n        zfree(rep->attribute);\n    }\n}\n\n/* Free the given call reply and its children (in case of nested reply) recursively.\n * If private data was set when the CallReply was created it will not be freed, as it's\n * the caller's responsibility to free it before calling freeCallReply(). */\nvoid freeCallReply(CallReply *rep) {\n    if (!(rep->flags & REPLY_FLAG_ROOT)) {\n        return;\n    }\n    if (rep->flags & REPLY_FLAG_PARSED) {\n        if (rep->type == REDISMODULE_REPLY_PROMISE) {\n            zfree(rep);\n            return;\n        }\n        freeCallReplyInternal(rep);\n    }\n    sdsfree(rep->original_proto);\n    if (rep->deferred_error_list)\n        listRelease(rep->deferred_error_list);\n    zfree(rep);\n}\n\nCallReply *callReplyCreatePromise(void *private_data) {\n    CallReply *res = zmalloc(sizeof(*res));\n    res->type = REDISMODULE_REPLY_PROMISE;\n    /* Mark the reply as parsed so there will be no attempt to parse\n     * it when calling reply APIs such as freeCallReply.\n     * Also mark the reply as root so freeCallReply will not ignore it. 
*/\n    res->flags |= REPLY_FLAG_PARSED | REPLY_FLAG_ROOT;\n    res->private_data = private_data;\n    return res;\n}\n\nstatic const ReplyParserCallbacks DefaultParserCallbacks = {\n    .null_callback = callReplyNull,\n    .bulk_string_callback = callReplyBulkString,\n    .null_bulk_string_callback = callReplyNullBulkString,\n    .null_array_callback = callReplyNullArray,\n    .error_callback = callReplyError,\n    .simple_str_callback = callReplySimpleStr,\n    .long_callback = callReplyLong,\n    .array_callback = callReplyArray,\n    .set_callback = callReplySet,\n    .map_callback = callReplyMap,\n    .double_callback = callReplyDouble,\n    .bool_callback = callReplyBool,\n    .big_number_callback = callReplyBigNumber,\n    .verbatim_string_callback = callReplyVerbatimString,\n    .attribute_callback = callReplyAttribute,\n    .error = callReplyParseError,\n};\n\n/* Parse the buffer located in rep->original_proto and update the CallReply\n * structure to represent its contents. */\nstatic void callReplyParse(CallReply *rep) {\n    if (rep->flags & REPLY_FLAG_PARSED) {\n        return;\n    }\n\n    ReplyParser parser = {.curr_location = rep->proto, .callbacks = DefaultParserCallbacks};\n\n    parseReply(&parser, rep);\n    rep->flags |= REPLY_FLAG_PARSED;\n}\n\n/* Return the call reply type (REDISMODULE_REPLY_...). */\nint callReplyType(CallReply *rep) {\n    if (!rep) return REDISMODULE_REPLY_UNKNOWN;\n    callReplyParse(rep);\n    return rep->type;\n}\n\n/* Return reply string as buffer and len. 
Applicable to:\n * - REDISMODULE_REPLY_STRING\n * - REDISMODULE_REPLY_ERROR\n *\n * The return value is borrowed from CallReply, so it must not be freed\n * explicitly or used after CallReply itself is freed.\n *\n * The returned value is not NULL terminated and its length is returned by\n * reference through len, which must not be NULL.\n */\nconst char *callReplyGetString(CallReply *rep, size_t *len) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_STRING &&\n        rep->type != REDISMODULE_REPLY_ERROR) return NULL;\n    if (len) *len = rep->len;\n    return rep->val.str;\n}\n\n/* Return a long long reply value. Applicable to:\n * - REDISMODULE_REPLY_INTEGER\n */\nlong long callReplyGetLongLong(CallReply *rep) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_INTEGER) return LLONG_MIN;\n    return rep->val.ll;\n}\n\n/* Return a double reply value. Applicable to:\n * - REDISMODULE_REPLY_DOUBLE\n */\ndouble callReplyGetDouble(CallReply *rep) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_DOUBLE) return LLONG_MIN;\n    return rep->val.d;\n}\n\n/* Return a reply Boolean value. Applicable to:\n * - REDISMODULE_REPLY_BOOL\n */\nint callReplyGetBool(CallReply *rep) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_BOOL) return INT_MIN;\n    return rep->val.ll;\n}\n\n/* Return reply length. 
Applicable to:\n * - REDISMODULE_REPLY_STRING\n * - REDISMODULE_REPLY_ERROR\n * - REDISMODULE_REPLY_ARRAY\n * - REDISMODULE_REPLY_SET\n * - REDISMODULE_REPLY_MAP\n * - REDISMODULE_REPLY_ATTRIBUTE\n */\nsize_t callReplyGetLen(CallReply *rep) {\n    callReplyParse(rep);\n    switch(rep->type) {\n        case REDISMODULE_REPLY_STRING:\n        case REDISMODULE_REPLY_ERROR:\n        case REDISMODULE_REPLY_ARRAY:\n        case REDISMODULE_REPLY_SET:\n        case REDISMODULE_REPLY_MAP:\n        case REDISMODULE_REPLY_ATTRIBUTE:\n            return rep->len;\n        default:\n            return 0;\n    }\n}\n\nstatic CallReply *callReplyGetCollectionElement(CallReply *rep, size_t idx, int elements_per_entry) {\n    if (idx >= rep->len * elements_per_entry) return NULL; // real len is rep->len * elements_per_entry\n    return rep->val.array+idx;\n}\n\n/* Return a reply array element at a given index. Applicable to:\n * - REDISMODULE_REPLY_ARRAY\n *\n * The return value is borrowed from CallReply, so it must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nCallReply *callReplyGetArrayElement(CallReply *rep, size_t idx) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_ARRAY) return NULL;\n    return callReplyGetCollectionElement(rep, idx, 1);\n}\n\n/* Return a reply set element at a given index. 
Applicable to:\n * - REDISMODULE_REPLY_SET\n *\n * The return value is borrowed from CallReply, so it must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nCallReply *callReplyGetSetElement(CallReply *rep, size_t idx) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_SET) return NULL;\n    return callReplyGetCollectionElement(rep, idx, 1);\n}\n\nstatic int callReplyGetMapElementInternal(CallReply *rep, size_t idx, CallReply **key, CallReply **val, int type) {\n    callReplyParse(rep);\n    if (rep->type != type) return C_ERR;\n    if (idx >= rep->len) return C_ERR;\n    if (key) *key = callReplyGetCollectionElement(rep, idx * 2, 2);\n    if (val) *val = callReplyGetCollectionElement(rep, idx * 2 + 1, 2);\n    return C_OK;\n}\n\n/* Retrieve a map reply key and value at a given index. Applicable to:\n * - REDISMODULE_REPLY_MAP\n *\n * The key and value are returned by reference through key and val,\n * which may also be NULL if not needed.\n *\n * Returns C_OK on success or C_ERR if reply type mismatches, or if idx is out\n * of range.\n *\n * The returned values are borrowed from CallReply, so they must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nint callReplyGetMapElement(CallReply *rep, size_t idx, CallReply **key, CallReply **val) {\n    return callReplyGetMapElementInternal(rep, idx, key, val, REDISMODULE_REPLY_MAP);\n}\n\n/* Return reply attribute, or NULL if it does not exist. Applicable to all replies.\n *\n * The returned values are borrowed from CallReply, so they must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nCallReply *callReplyGetAttribute(CallReply *rep) {\n    return rep->attribute;\n}\n\n/* Retrieve attribute reply key and value at a given index. 
Applicable to:\n * - REDISMODULE_REPLY_ATTRIBUTE\n *\n * The key and value are returned by reference through key and val,\n * which may also be NULL if not needed.\n *\n * Returns C_OK on success or C_ERR if reply type mismatches, or if idx is out\n * of range.\n *\n * The returned values are borrowed from CallReply, so they must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nint callReplyGetAttributeElement(CallReply *rep, size_t idx, CallReply **key, CallReply **val) {\n    return callReplyGetMapElementInternal(rep, idx, key, val, REDISMODULE_REPLY_ATTRIBUTE);\n}\n\n/* Return a big number reply value. Applicable to:\n * - REDISMODULE_REPLY_BIG_NUMBER\n *\n * The returned values are borrowed from CallReply, so they must not be freed\n * explicitly or used after CallReply itself is freed.\n *\n * The return value is guaranteed to be a big number, as described in the RESP3\n * protocol specifications.\n *\n * The returned value is not NULL terminated and its length is returned by\n * reference through len, which must not be NULL.\n */\nconst char *callReplyGetBigNumber(CallReply *rep, size_t *len) {\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_BIG_NUMBER) return NULL;\n    *len = rep->len;\n    return rep->val.str;\n}\n\n/* Return a verbatim string reply value. 
Applicable to:\n * - REDISMODULE_REPLY_VERBATIM_STRING\n *\n * If format is non-NULL, the verbatim reply format is also returned by\n * reference through it; it can be set to NULL if not needed.\n *\n * The return value is borrowed from CallReply, so it must not be freed\n * explicitly or used after CallReply itself is freed.\n *\n * The returned value is not NULL terminated and its length is returned by\n * reference through len, which must not be NULL.\n */\nconst char *callReplyGetVerbatim(CallReply *rep, size_t *len, const char **format){\n    callReplyParse(rep);\n    if (rep->type != REDISMODULE_REPLY_VERBATIM_STRING) return NULL;\n    *len = rep->len;\n    if (format) *format = rep->val.verbatim_str.format;\n    return rep->val.verbatim_str.str;\n}\n\n/* Return the current reply blob.\n *\n * The return value is borrowed from CallReply, so it must not be freed\n * explicitly or used after CallReply itself is freed.\n */\nconst char *callReplyGetProto(CallReply *rep, size_t *proto_len) {\n    *proto_len = rep->proto_len;\n    return rep->proto;\n}\n\n/* Return CallReply private data, as set by the caller on callReplyCreate().\n */\nvoid *callReplyGetPrivateData(CallReply *rep) {\n    return rep->private_data;\n}\n\n/* Return true if the reply or one of its sub-replies is RESP3 formatted. */\nint callReplyIsResp3(CallReply *rep) {\n    return rep->flags & REPLY_FLAG_RESP3;\n}\n\n/* Returns a list of errors in sds form, or NULL. 
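callReplyGetVerbatim above hands back a 3-character format tag alongside the payload. A minimal standalone sketch of that RESP3 framing rule; `parse_verbatim` is a hypothetical helper for illustration only, not part of this file:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Split a RESP3 verbatim payload such as "txt:Some string" into its
 * 3-char format prefix and the actual string. Fills format (3 chars +
 * NUL) and len, and returns a pointer into payload, or NULL when the
 * input does not follow the "fmt:" framing. Illustrative only. */
static const char *parse_verbatim(const char *payload, size_t payload_len,
                                  char format[4], size_t *len) {
    if (payload_len < 4 || payload[3] != ':') return NULL;
    memcpy(format, payload, 3);
    format[3] = '\0';
    *len = payload_len - 4;
    return payload + 4;
}
```

On the wire the whole payload is additionally carried inside a `=<len>\r\n...\r\n` frame; the helper only covers the inner `fmt:payload` split.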
*/\nlist *callReplyDeferredErrorList(CallReply *rep) {\n    return rep->deferred_error_list;\n}\n\n/* Create a new CallReply struct from the reply blob.\n *\n * The function will own the reply blob, so it must not be used or freed by\n * the caller after passing it to this function.\n *\n * The reply blob will be freed when the returned CallReply struct is later\n * freed using freeCallReply().\n *\n * The deferred_error_list is an optional list of errors that are present\n * in the reply blob. If given, this function takes ownership of it.\n *\n * The private_data is optional and can later be accessed using\n * callReplyGetPrivateData().\n *\n * NOTE: The parser used for parsing the reply and producing CallReply is\n * designed to handle valid replies created by Redis itself. IT IS NOT\n * DESIGNED TO HANDLE USER INPUT and using it to parse invalid replies is\n * unsafe.\n */\nCallReply *callReplyCreate(sds reply, list *deferred_error_list, void *private_data) {\n    CallReply *res = zmalloc(sizeof(*res));\n    res->flags = REPLY_FLAG_ROOT;\n    res->original_proto = reply;\n    res->proto = reply;\n    res->proto_len = sdslen(reply);\n    res->private_data = private_data;\n    res->attribute = NULL;\n    res->deferred_error_list = deferred_error_list;\n    return res;\n}\n\n/* Create a new CallReply struct from the reply blob representing an error message.\n * Automatically creates a deferred_error_list and sets a copy of the reply in it.\n * Refer to callReplyCreate for detailed explanation.\n * Reply string can come in one of two forms:\n * 1. A protocol reply starting with \"-CODE\" and ending with \"\\r\\n\"\n * 2. A plain string, in which case this function adds the protocol header and footer. 
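The two accepted input forms described above can be sketched without sds as follows; `frame_error` is an illustrative stand-in (the real function below also records the error in a deferred error list):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Wrap a plain message in RESP error framing. Messages already starting
 * with '-' are assumed to be complete protocol replies and are copied
 * verbatim. Caller frees the result. Illustrative only. */
static char *frame_error(const char *msg) {
    if (msg[0] == '-') {
        char *copy = malloc(strlen(msg) + 1);
        strcpy(copy, msg);
        return copy;
    }
    size_t n = strlen(msg) + sizeof("-ERR \r\n"); /* prefix + CRLF + NUL */
    char *out = malloc(n);
    snprintf(out, n, "-ERR %s\r\n", msg);
    return out;
}
```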
*/\nCallReply *callReplyCreateError(sds reply, void *private_data) {\n    sds err_buff = reply;\n    if (err_buff[0] != '-') {\n        err_buff = sdscatfmt(sdsempty(), \"-ERR %S\\r\\n\", reply);\n        sdsfree(reply);\n    }\n    list *deferred_error_list = listCreate();\n    listSetFreeMethod(deferred_error_list, sdsfreegeneric);\n    listAddNodeTail(deferred_error_list, sdsnew(err_buff));\n    return callReplyCreate(err_buff, deferred_error_list, private_data);\n}\n"
  },
  {
    "path": "src/call_reply.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef SRC_CALL_REPLY_H_\n#define SRC_CALL_REPLY_H_\n\n#include \"resp_parser.h\"\n\ntypedef struct CallReply CallReply;\ntypedef void (*RedisModuleOnUnblocked)(void *ctx, CallReply *reply, void *private_data);\n\nCallReply *callReplyCreate(sds reply, list *deferred_error_list, void *private_data);\nCallReply *callReplyCreateError(sds reply, void *private_data);\nint callReplyType(CallReply *rep);\nconst char *callReplyGetString(CallReply *rep, size_t *len);\nlong long callReplyGetLongLong(CallReply *rep);\ndouble callReplyGetDouble(CallReply *rep);\nint callReplyGetBool(CallReply *rep);\nsize_t callReplyGetLen(CallReply *rep);\nCallReply *callReplyGetArrayElement(CallReply *rep, size_t idx);\nCallReply *callReplyGetSetElement(CallReply *rep, size_t idx);\nint callReplyGetMapElement(CallReply *rep, size_t idx, CallReply **key, CallReply **val);\nCallReply *callReplyGetAttribute(CallReply *rep);\nint callReplyGetAttributeElement(CallReply *rep, size_t idx, CallReply **key, CallReply **val);\nconst char *callReplyGetBigNumber(CallReply *rep, size_t *len);\nconst char *callReplyGetVerbatim(CallReply *rep, size_t *len, const char **format);\nconst char *callReplyGetProto(CallReply *rep, size_t *len);\nvoid *callReplyGetPrivateData(CallReply *rep);\nint callReplyIsResp3(CallReply *rep);\nlist *callReplyDeferredErrorList(CallReply *rep);\nvoid freeCallReply(CallReply *rep);\nCallReply *callReplyCreatePromise(void *private_data);\n\n#endif /* SRC_CALL_REPLY_H_ */\n"
  },
  {
    "path": "src/childinfo.c",
    "content": "/*\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include <unistd.h>\n#include <fcntl.h>\n\ntypedef struct {\n    size_t keys;\n    size_t cow;\n    monotime cow_updated;\n    double progress;\n    childInfoType information_type; /* Type of information */\n} child_info_data;\n\n/* Open a child-parent channel used in order to move information about the\n * RDB / AOF saving process from the child to the parent (for instance\n * the amount of copy on write memory used) */\nvoid openChildInfoPipe(void) {\n    if (anetPipe(server.child_info_pipe, O_NONBLOCK, 0) == -1) {\n        /* On error our two file descriptors should still be set to -1,\n         * but we call closeChildInfoPipe() anyway since it can't hurt. */\n        closeChildInfoPipe();\n    } else {\n        server.child_info_nread = 0;\n    }\n}\n\n/* Close the pipes opened with openChildInfoPipe(). */\nvoid closeChildInfoPipe(void) {\n    if (server.child_info_pipe[0] != -1 ||\n        server.child_info_pipe[1] != -1)\n    {\n        close(server.child_info_pipe[0]);\n        close(server.child_info_pipe[1]);\n        server.child_info_pipe[0] = -1;\n        server.child_info_pipe[1] = -1;\n        server.child_info_nread = 0;\n    }\n}\n\n/* Send save data to parent. 
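sendChildInfoGeneric below throttles zmalloc_get_private_dirty() readings by their own measured cost. The scheduling decision reduces to a predicate like this; a sketch, with `COW_DUTY_CYCLE` standing in for the real CHILD_COW_DUTY_CYCLE constant:

```c
#include <assert.h>
#include <stdint.h>

#define COW_DUTY_CYCLE 100 /* stand-in for CHILD_COW_DUTY_CYCLE */

/* Decide whether a new (expensive) CoW reading is due: either we never
 * took one, or at least cost * COW_DUTY_CYCLE microseconds elapsed
 * since the last reading. */
static int cow_refresh_due(uint64_t now, uint64_t last_update, uint64_t cost) {
    return !last_update || now - last_update > cost * COW_DUTY_CYCLE;
}
```

The more expensive a reading was, the longer the next one is postponed, which keeps the sampling overhead at a bounded fraction of the child's time.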
*/\nvoid sendChildInfoGeneric(childInfoType info_type, size_t keys, double progress, char *pname) {\n    if (server.child_info_pipe[1] == -1) return;\n\n    static monotime cow_updated = 0;\n    static uint64_t cow_update_cost = 0;\n    static size_t cow = 0;\n    static size_t peak_cow = 0;\n    static size_t update_count = 0;\n    static unsigned long long sum_cow = 0;\n\n    child_info_data data = {0}; /* zero everything, including padding to satisfy valgrind */\n\n    /* When called to report current info, we need to throttle down CoW updates as they\n     * can be very expensive. To do that, we measure the time it takes to get a reading\n     * and schedule the next reading to happen not before time*CHILD_COW_COST_FACTOR\n     * passes. */\n\n    monotime now = getMonotonicUs();\n    if (info_type != CHILD_INFO_TYPE_CURRENT_INFO ||\n        !cow_updated ||\n        now - cow_updated > cow_update_cost * CHILD_COW_DUTY_CYCLE)\n    {\n        cow = zmalloc_get_private_dirty(-1);\n        cow_updated = getMonotonicUs();\n        cow_update_cost = cow_updated - now;\n        if (cow > peak_cow) peak_cow = cow;\n        sum_cow += cow;\n        update_count++;\n\n        int cow_info = (info_type != CHILD_INFO_TYPE_CURRENT_INFO);\n        if (cow || cow_info) {\n            serverLog(cow_info ? LL_NOTICE : LL_VERBOSE,\n                      \"Fork CoW for %s: current %zu MB, peak %zu MB, average %llu MB\",\n                      pname, cow>>20, peak_cow>>20, (sum_cow/update_count)>>20);\n        }\n    }\n\n    data.information_type = info_type;\n    data.keys = keys;\n    data.cow = cow;\n    data.cow_updated = cow_updated;\n    data.progress = progress;\n\n    ssize_t wlen = sizeof(data);\n\n    if (write(server.child_info_pipe[1], &data, wlen) != wlen) {\n        /* Failed writing to parent, it could have been killed, exit. */\n        serverLog(LL_WARNING,\"Child failed reporting info to parent, exiting. 
%s\", strerror(errno));\n        exitFromChild(1, 0);\n    }\n}\n\n/* Update child info. */\nvoid updateChildInfo(childInfoType information_type, size_t cow, monotime cow_updated, size_t keys, double progress) {\n    if (cow > server.stat_current_cow_peak) server.stat_current_cow_peak = cow;\n\n    if (information_type == CHILD_INFO_TYPE_CURRENT_INFO) {\n        server.stat_current_cow_bytes = cow;\n        server.stat_current_cow_updated = cow_updated;\n        server.stat_current_save_keys_processed = keys;\n        if (progress != -1) server.stat_module_progress = progress;\n    } else if (information_type == CHILD_INFO_TYPE_AOF_COW_SIZE) {\n        server.stat_aof_cow_bytes = server.stat_current_cow_peak;\n    } else if (information_type == CHILD_INFO_TYPE_RDB_COW_SIZE) {\n        server.stat_rdb_cow_bytes = server.stat_current_cow_peak;\n    } else if (information_type == CHILD_INFO_TYPE_MODULE_COW_SIZE) {\n        server.stat_module_cow_bytes = server.stat_current_cow_peak;\n    }\n}\n\n/* Read child info data from the pipe.\n * If a complete record was read into the buffer, its fields are stored into\n * the output arguments and 1 is returned.\n * Otherwise, the partial data is left in the buffer, waiting for the next read, and 0 is returned. 
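The short-read handling in readChildInfo below can be isolated into a tiny accumulator: keep appending bytes into a fixed-size record until it is complete, then reset. A self-contained sketch feeding from memory instead of a pipe (all names are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct { size_t keys; size_t cow; double progress; } record;

/* Append up to avail bytes into the partially-filled record buffer.
 * Returns 1 once sizeof(record) bytes have been accumulated, else 0.
 * *nread tracks progress across calls, like server.child_info_nread. */
static int feed_record(record *dst, size_t *nread,
                       const char *src, size_t avail) {
    size_t want = sizeof(record) - *nread;
    size_t take = avail < want ? avail : want;
    memcpy((char *)dst + *nread, src, take);
    *nread += take;
    if (*nread == sizeof(record)) { *nread = 0; return 1; }
    return 0;
}
```

Because a pipe read may return fewer bytes than a whole record, the caller simply keeps invoking the feeder until it reports completion.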
*/\nint readChildInfo(childInfoType *information_type, size_t *cow, monotime *cow_updated, size_t *keys, double* progress) {\n    /* We are using here a static buffer in combination with the server.child_info_nread to handle short reads */\n    static child_info_data buffer;\n    ssize_t wlen = sizeof(buffer);\n\n    /* Do not overlap */\n    if (server.child_info_nread == wlen) server.child_info_nread = 0;\n\n    int nread = read(server.child_info_pipe[0], (char *)&buffer + server.child_info_nread, wlen - server.child_info_nread);\n    if (nread > 0) {\n        server.child_info_nread += nread;\n    }\n\n    /* We have complete child info */\n    if (server.child_info_nread == wlen) {\n        *information_type = buffer.information_type;\n        *cow = buffer.cow;\n        *cow_updated = buffer.cow_updated;\n        *keys = buffer.keys;\n        *progress = buffer.progress;\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Receive info data from child. */\nvoid receiveChildInfo(void) {\n    if (server.child_info_pipe[0] == -1) return;\n\n    size_t cow;\n    monotime cow_updated;\n    size_t keys;\n    double progress;\n    childInfoType information_type;\n\n    /* Drain the pipe and update child info so that we get the final message. */\n    while (readChildInfo(&information_type, &cow, &cow_updated, &keys, &progress)) {\n        updateChildInfo(information_type, cow, cow_updated, keys, progress);\n    }\n}\n"
  },
  {
    "path": "src/chk.c",
    "content": "/* Implementation of a topK structure using CuckooHeavyKeeper algorithm\n *\n * Implementation is based on the paper \"Cuckoo Heavy Keeper and the balancing\n * act of maintaining heavy hitters in stream processing\" by Vinh Quang Ngo and\n * Marina Papatriantafilou. Also, the accompanying C++ implementation was used\n * as a reference point: https://github.com/vinhqngo5/Cuckoo_Heavy_Keeper\n * The main change is the addition of a min-heap so we can keep names of the top K\n * elements - the idea comes from RedisBloom's TopK structure.\n *\n * Copyright (c) 2026-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"chk.h\"\n#include \"redisassert.h\"\n#include \"zmalloc.h\"\n#include \"xxhash.h\"\n\n#include <math.h>\n#include <stdlib.h>\n#include <string.h>\n\n/* Lobby to heavy item promotion threshold */\n#define LOBBY_PROMOTION_THRESHOLD 16\n\n#ifndef static_assert\n#define static_assert(expr, lit) extern char __static_assert_failure[(expr) ? 1:-1]\n#endif\n\nstatic_assert(LOBBY_PROMOTION_THRESHOLD < CHK_LUT_SIZE,\n              \"Lobby promotion threshold should be less than the LUT size to \"\n              \"ensure constant operations during decayCounter!\");\n\n/* After a heavy item is demoted it starts recursively kicking out other heavy\n * items in the case it should stay heavy (defined by isHeavyHitter). In\n * principle this process could go over all the items in the chkTopK's tables\n * so it's artificially limited by this constant. */\n#define MAX_KICKS 16\n\n/* An item is defined as a heavy hitter if its count is greater than or equal to x * N\n * where x is a threshold constant (HEAVY_RATIO) and N is the total count the\n * chkTopK structure has accumulated. See the paper for more info. 
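With the ratio defined below at 0.008, an item counted 80 times qualifies as a heavy hitter once the structure has seen no more than 10,000 events in total. The predicate itself is one line; a sketch, with `RATIO` standing in for HEAVY_RATIO:

```c
#include <assert.h>
#include <stdint.h>

#define RATIO 0.008 /* stand-in for HEAVY_RATIO */

/* An item is a heavy hitter when its counter reaches RATIO * total,
 * where total is everything the structure has counted so far. */
static int is_heavy_hitter(uint64_t cnt, uint64_t total) {
    return cnt >= total * RATIO;
}
```

Because the threshold grows with the total, an item that stops being updated eventually drops out of heavy-hitter status as the stream moves on.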
*/\n#define HEAVY_RATIO 0.008\n\n/* A unique seed for the items when storing them in the heap so it's not related\n * to the cuckoo's hashes. Also, there is no need to truncate the hash here, as the\n * heap does not take much memory, so we avoid needless collisions. */\n#define HEAP_SEED 1919\n\ntypedef struct {\n    size_t idx[CHK_NUM_TABLES];\n    fingerprint_t fp;\n} fpAndIdx;\n\n#define min(a, b) ((a) < (b) ? (a) : (b))\n\n/* Heap operations */\nstatic chkHeapBucket *chkCheckExistInHeap(chkTopK *topk, const char *item, int itemlen, uint64_t fp) {\n    for (int32_t i = topk->k - 1; i >= 0; --i) {\n        chkHeapBucket *bucket = topk->heap + i;\n        if (bucket->fp == fp && bucket->item &&\n            sdslen(bucket->item) == (size_t)itemlen &&\n            memcmp(bucket->item, item, itemlen) == 0)\n        {\n            return bucket;\n        }\n    }\n    return NULL;\n}\n\nvoid chkHeapifyDown(chkHeapBucket *array, size_t len, size_t start) {\n    size_t child = start;\n\n    if (len < 2 || (len - 2) / 2 < child) {\n        return;\n    }\n    child = 2 * child + 1;\n    if ((child + 1) < len && (array[child].count > array[child + 1].count)) {\n        ++child;\n    }\n    if (array[child].count > array[start].count) {\n        return;\n    }\n\n    chkHeapBucket top = array[start];\n    do {\n        memcpy(&array[start], &array[child], sizeof(chkHeapBucket));\n        start = child;\n\n        if ((len - 2) / 2 < child) {\n            break;\n        }\n        child = 2 * child + 1;\n\n        if ((child + 1) < len && (array[child].count > array[child + 1].count)) {\n            ++child;\n        }\n    } while (array[child].count < top.count);\n    memcpy(&array[start], &top, sizeof(chkHeapBucket));\n}\n\n/*-----------------------------------------------------------------------------\n * chkTopK operations\n *----------------------------------------------------------------------------*/\n\n/* Create the chkTopK structure. 
Note, the CHK paper recommends decay=1.08.\n * numbuckets must be a power of 2. Recommended size for numbuckets is at least\n * 7 or 8 times k. */\nchkTopK *chkTopKCreate(int k, int numbuckets, double decay) {\n    /* The number of buckets needs to be a power of 2 for better performance - we\n     * have better cache locality of the tables and faster table indices\n     * calculations. */\n    assert(k > 0 && (numbuckets & (numbuckets - 1)) == 0);\n\n    size_t usable = 0;\n    chkTopK *topk = zcalloc_usable(sizeof(chkTopK), &usable);\n    topk->alloc_size += usable;\n\n    for (int i = 0; i < CHK_NUM_TABLES; ++i) {\n        topk->tables[i] = zcalloc_usable(sizeof(chkBucket) * numbuckets, &usable);\n        topk->alloc_size += usable;\n    }\n\n    topk->heap = zcalloc_usable(sizeof(chkHeapBucket) * k, &usable);\n    topk->alloc_size += usable;\n\n    topk->decay = decay;\n    topk->inv_decay = 1. / decay;\n    topk->k = k;\n    topk->numbuckets = numbuckets;\n\n    topk->lut_decay_exp[0] = 0;\n    topk->lut_min_decay[0] = 0;\n    topk->lut_decay_prob[0] = 0;\n    for (int i = 1; i < CHK_LUT_SIZE + 1; ++i) {\n        topk->lut_decay_exp[i] = topk->lut_decay_exp[i - 1] + pow(topk->decay, i - 1);\n        topk->lut_min_decay[i] = topk->lut_decay_exp[i] - topk->lut_decay_exp[i - 1];\n        topk->lut_decay_prob[i] = pow(topk->inv_decay, i);\n    }\n\n    return topk;\n}\n\n/* Release chkTopK resources */\nvoid chkTopKRelease(chkTopK *topk) {\n    size_t usable;\n    for (int i = 0; i < CHK_NUM_TABLES; ++i) {\n        zfree_usable(topk->tables[i], &usable);\n        topk->alloc_size -= usable;\n    }\n    for (int i = 0; i < topk->k; ++i) {\n        if (topk->heap[i].item) {\n            topk->alloc_size -= sdsAllocSize(topk->heap[i].item);\n            sdsfree(topk->heap[i].item);\n        }\n    }\n    zfree_usable(topk->heap, &usable);\n    topk->alloc_size -= usable;\n    debugAssert(topk->alloc_size == zmalloc_usable_size(topk));\n\n    zfree(topk);\n}\n\nstatic inline 
int generateAltIdx(fingerprint_t fp, int idx, int numbuckets) {\n    return (idx ^ (0x5bd1e995 * (size_t)fp)) & (numbuckets - 1);\n}\n\nfpAndIdx generateItemFpAndIdxs(chkTopK *topk, char *item, int itemlen) {\n    uint64_t hash = XXH3_64bits_withSeed(item, itemlen, 0);\n\n    fpAndIdx res;\n    res.fp = (hash & 0xFFFF); /* Only use 16 bits for fingerprint */\n\n    /* Note numbuckets is a power of 2 so we don't use modulo for the index calc */\n    res.idx[0] = (hash >> 32) & (topk->numbuckets - 1);\n    for (int i = 1; i < CHK_NUM_TABLES; ++i) {\n        res.idx[i] = generateAltIdx(res.fp, res.idx[i-1], topk->numbuckets);\n    }\n\n    return res;\n}\n\ntypedef struct {\n    int table_idx;\n    int pos;\n} checkEntryRes;\n\n/* Check if `item` is a heavy entry. If so we bump its count. If not - we make\n * it a heavy entry immediately if there is an empty spot, thus skipping the\n * lobby as an optimization. */\ncheckEntryRes checkHeavyEntries(chkTopK *topk, fpAndIdx item, counter_t weight) {\n    int empty_table_idx = -1;\n    int empty_pos = -1;\n\n    for (int i = 0; i < CHK_NUM_TABLES; ++i) {\n        int idx = item.idx[i];\n\n        chkBucket *bucket = &topk->tables[i][idx];\n        for (int j = 0; j < CHK_HEAVY_ENTRIES_PER_BUCKET; ++j) {\n            chkHeavyEntry *e = &bucket->heavy_entries[j];\n            if (e->count > 0) {\n                if (e->fp == item.fp) {\n                    e->count += weight;\n\n                    checkEntryRes res = { i, j };\n                    return res;\n                }\n            } else if (empty_table_idx == -1) {\n                empty_table_idx = i;\n                empty_pos = j;\n            }\n        }\n    }\n\n    if (empty_table_idx == -1) {\n        checkEntryRes res = { -1, -1 };\n        return res;\n    }\n\n    /* If there is an empty slot in the heavy entries just put the item there\n     * instead of going through the lobby first (optimization as per the paper) */\n    int idx = 
item.idx[empty_table_idx];\n    chkHeavyEntry *e = &topk->tables[empty_table_idx][idx].heavy_entries[empty_pos];\n    e->fp = item.fp;\n    e->count = weight;\n\n    checkEntryRes res = {empty_table_idx, empty_pos};\n    return res;\n}\n\n/* A heavy hitter is defined by the paper as an item with a counter greater than\n * or equal to phi * N, where phi is a constant and N is the total count the\n * structure has recorded up to that point */\nint isHeavyHitter(chkTopK *topk, counter_t cnt) {\n    return cnt >= (topk->total * HEAVY_RATIO);\n}\n\n/* After a lobby item is promoted it may be placed on a heavy item's spot. The\n * latter is kicked out, but it may recursively kick out another heavy item.\n * The process is limited by MAX_KICKS and also by the fact that during updates\n * one of the kicked out items may have its counter decayed so much that it no\n * longer passes the heavy item threshold (see isHeavyHitter). */\nvoid kickout(chkTopK *topk, chkHeavyEntry entry, int idx, int table_idx) {\n    for (int i = 0; i < MAX_KICKS; ++i) {\n        /* Do not try to swap with any entries if we don't reach the heavy\n         * hitter threshold */\n        if (!isHeavyHitter(topk, entry.count)) return;\n\n        /* Find the heavy entry in the alt bucket in the other table with\n         * minimum count. If there is an empty entry there just occupy it, else\n         * recursively kick the minimal one out.\n         * To find the alt bucket we need to compute the alt index from the\n         * fingerprint of the kicked-out entry. 
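Computing the alt index from the fingerprint works because the XOR mapping used by generateAltIdx is an involution: applying it twice with the same fingerprint returns the original bucket, so a kicked-out entry can always locate its partner bucket without rehashing the full item. A standalone sketch using the same multiplier:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Cuckoo partner-bucket mapping: idx' = (idx ^ (C * fp)) & mask.
 * Because x ^ y ^ y == x, alt(alt(idx)) == idx for any fingerprint. */
static int alt_idx(uint16_t fp, int idx, int numbuckets) {
    return (idx ^ (0x5bd1e995 * (size_t)fp)) & (numbuckets - 1);
}
```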
*/\n        table_idx = 1 - table_idx;\n        idx = generateAltIdx(entry.fp, idx, topk->numbuckets);\n\n        chkBucket *bucket = &topk->tables[table_idx][idx];\n        counter_t min = (counter_t)-1;\n        int min_pos = -1;\n        for (int j = 0; j < CHK_HEAVY_ENTRIES_PER_BUCKET; ++j) {\n            chkHeavyEntry *e = &bucket->heavy_entries[j];\n            if (e->count == 0) {\n                *e = entry;\n                return;\n            }\n            if (e->count < min) {\n                min = e->count;\n                min_pos = j;\n            }\n        }\n\n        chkHeavyEntry old_entry = bucket->heavy_entries[min_pos];\n        bucket->heavy_entries[min_pos] = entry;\n        entry = old_entry;\n    }\n}\n\n/* When a lobby entry's counter passes the promotion threshold we try to promote\n * it with some probability. See the paper for more details. If promotion is\n * successful the lobby entry may kick out a heavy one - see kickout() */\nint tryPromoteAndKickout(chkTopK *topk, fpAndIdx item, counter_t new_count,\n                         int table_idx)\n{\n    int idx = item.idx[table_idx];\n    chkBucket *bucket = &topk->tables[table_idx][idx];\n    counter_t min = (counter_t)-1; /* counter_t is unsigned */\n    int min_idx = -1;\n\n    /* We search for the heavy item bucket of the promoted lobby entry. We may have\n     * an empty space which we immediately occupy. 
Otherwise we choose the\n     * bucket with the lowest counter */\n    for (int i = 0; i < CHK_HEAVY_ENTRIES_PER_BUCKET; ++i) {\n        if (bucket->heavy_entries[i].count == 0) {\n            bucket->heavy_entries[i].fp = item.fp;\n            bucket->heavy_entries[i].count = new_count;\n            return i;\n        }\n        if (bucket->heavy_entries[i].count < min) {\n            min = bucket->heavy_entries[i].count;\n            min_idx = i;\n        }\n    }\n\n    /* If the heavy entry that is going to be kicked out has a counter lower\n     * than the lobby's one we always kick it out */\n    if (min > new_count) {\n        double prob = (new_count - LOBBY_PROMOTION_THRESHOLD) /\n                      (double)(min - LOBBY_PROMOTION_THRESHOLD);\n\n        if ((rand() / (double)RAND_MAX) >= prob) return -1;\n    }\n\n    chkHeavyEntry to_kickout = bucket->heavy_entries[min_idx];\n    /* Note that here the promoted item keeps the old count as per the paper */\n    bucket->heavy_entries[min_idx].fp = bucket->lobby_entry.fp;\n\n    bucket->lobby_entry.count = 0;\n    bucket->lobby_entry.fp = 0;\n\n    kickout(topk, to_kickout, idx, table_idx);\n\n    return min_idx;\n}\n\n/* Check if an item is a lobby entry */\ncheckEntryRes checkLobbyEntries(chkTopK *topk, fpAndIdx item, counter_t weight) {\n    for (int i = 0; i < CHK_NUM_TABLES; ++i) {\n        int idx = item.idx[i];\n\n        chkBucket *bucket = &topk->tables[i][idx];\n        chkLobbyEntry *e = &bucket->lobby_entry;\n\n        /* No match or empty lobby entry */\n        if (e->fp != item.fp || e->count == 0) continue;\n\n        /* If we don't cross the threshold just update the counter */\n        uint64_t new_count = (uint64_t)e->count + weight;\n        if (new_count < LOBBY_PROMOTION_THRESHOLD) {\n            e->count = (uint16_t)new_count;\n\n            checkEntryRes res = { i, -1 };\n            return res;\n        }\n\n        /* Try to promote the entry to heavy entry if we crossed the 
threshold.\n         * Else just set the counter to the value of the threshold */\n        int kickout_pos = tryPromoteAndKickout(topk, item, new_count, i);\n        if (kickout_pos != -1) {\n            checkEntryRes res = {i, kickout_pos};\n            return res;\n        }\n\n        e->count = LOBBY_PROMOTION_THRESHOLD;\n        checkEntryRes res = { i, -1 };\n        return res;\n    }\n\n    checkEntryRes res = { -1, -1 };\n    return res;\n}\n\n/* Probability to decay cnt with 1.\n * Equal to pow(decay, -cnt) */\nstatic inline double getDecayProb(chkTopK *topk, counter_t cnt) {\n    if (cnt < CHK_LUT_SIZE) {\n        return topk->lut_decay_prob[cnt];\n    }\n\n    return pow(topk->lut_decay_prob[CHK_LUT_SIZE],\n               ((double)cnt / (CHK_LUT_SIZE))) *\n           topk->lut_decay_prob[cnt % (CHK_LUT_SIZE)];\n}\n\n/* Expected decay steps to decay cnt to 0.\n * Equal to sum(pow(decay, i)) for i in [0; cnt) */\nstatic inline double getExpDecayCount(chkTopK *topk, lobby_counter_t cnt) {\n    return topk->lut_decay_exp[cnt];\n}\n\n/* Expected minimum decay steps to decay cnt with 1. Since probability is\n * pow(decay, -cnt) it's equal to pow(decay, cnt) */\nstatic inline double getMinDecayCount(chkTopK *topk, counter_t cnt) {\n    if (cnt < CHK_LUT_SIZE) {\n        return topk->lut_min_decay[cnt];\n    }\n\n    return pow(topk->lut_min_decay[CHK_LUT_SIZE],\n               ((double)cnt / (CHK_LUT_SIZE))) *\n           topk->lut_min_decay[cnt % (CHK_LUT_SIZE)];\n}\n\n/* When there is a hash-collision between lobby entries we decay the existing\n * lobby entry with the weight of the new one. Return the counter after decaying. 
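chkDecayCounter below spends a weight's worth of expected decay attempts against a prefix-sum table lut[i] = sum of decay^j for j < i. The table build and the bounded binary search can be sketched standalone (the decay of 1.08 and the table size are illustrative choices, not taken from this file):

```c
#include <assert.h>

#define LUT_SIZE 17           /* stand-in: lobby counters stay below this */
static double lut[LUT_SIZE];  /* lut[i] = expected decay attempts to drop i to 0 */

static void build_lut(double decay) {
    double step = 1.0; /* decay^(i-1) */
    lut[0] = 0;
    for (int i = 1; i < LUT_SIZE; i++) {
        lut[i] = lut[i - 1] + step;
        step *= decay;
    }
}

/* Smallest c such that lut[c] + weight >= lut[cnt], i.e. the counter
 * left after spending `weight` expected decay attempts on `cnt`. */
static int decayed_counter(int cnt, double weight) {
    int left = 0, right = cnt;
    while (left < right) {
        int mid = left + (right - left) / 2;
        if (lut[mid] + weight >= lut[cnt]) right = mid;
        else left = mid + 1;
    }
    return left;
}
```

Since lobby counters are bounded by the promotion threshold, the search range is constant and the lookup is O(1) in practice.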
*/\nlobby_counter_t chkDecayCounter(chkTopK *topk, lobby_counter_t cnt, counter_t weight) {\n    if (weight == 0) return cnt;\n\n    /* Unweighted update - just decay with probability pow(decay, -cnt) */\n    if (weight == 1) {\n        double prob = getDecayProb(topk, (counter_t)cnt);\n        if ((rand() / (double)RAND_MAX) < prob) {\n            return cnt - 1;\n        }\n        return cnt;\n    }\n\n    /* For weighted updates we simulate multiple unweighted ones */\n\n    /* Weight is smaller than the minimum amount of decay steps required to\n     * decay the counter with probability of 100% so again we roll the dice */\n    double min_decay = getMinDecayCount(topk, cnt);\n    if (weight < (counter_t)min_decay) {\n        double prob = weight / min_decay;\n        if ((rand() / (double)RAND_MAX) < prob) {\n            return cnt - 1;\n        }\n        return cnt;\n    }\n\n    /* Weight is more than the expected amount of decay steps to decay the\n     * counter to 0. */\n    double exp_decays = getExpDecayCount(topk, cnt);\n    if (weight >= (counter_t)exp_decays)\n        return 0;\n\n    /* Weight is large enough to decay the counter to cnt - X where 0 < X < cnt.\n     * We binary search for the smallest value `C` such that:\n     *\n     * (expected decay ops for `C`) >= (expected decay ops for `cnt`) - `weight`\n     * i.e. lut_decay_exp[C] + weight >= lut_decay_exp[cnt]\n     *\n     * Note that since cnt is a lobby counter it will necessarily be less than\n     * or equal to LOBBY_PROMOTION_THRESHOLD, so although we binary search this\n     * is an O(1) operation */\n    int left = 0;\n    int right = cnt;\n    while (left < right) {\n        int mid = left + (right - left) / 2;\n\n        if (topk->lut_decay_exp[mid] + weight >= topk->lut_decay_exp[cnt]) {\n            right = mid;\n        } else {\n            left = mid + 1;\n        }\n    }\n\n    return left;\n}\n\n/* Update weighted item. 
If another one was expelled from the topK list -\n * return it. Caller is responsible for releasing it */\nsds chkTopKUpdate(chkTopK *topk, char *item, int itemlen, counter_t weight)\n{\n    if (weight == 0) return NULL;\n\n    topk->total += weight;\n\n    /* Generate a fingerprint and indices for both cuckoo tables. */\n    fpAndIdx itemFpIdx = generateItemFpAndIdxs(topk, item, itemlen);\n\n    /* Check if the item is amongst the heavy entries. If so we just update its\n     * counter. */\n    checkEntryRes res = checkHeavyEntries(topk, itemFpIdx, weight);\n    if (res.table_idx != -1) {\n        goto update_heap;\n    }\n\n    /* If the item is not already heavy it may be in the lobby. If so we'll\n     * increase its counter and promote it to a heavy entry if it passes the\n     * threshold */\n    res = checkLobbyEntries(topk, itemFpIdx, weight);\n    if (res.table_idx != -1) {\n        goto update_heap;\n    }\n\n    /* Item is not tracked at all. Check for empty lobby entries - if there is\n     * any - place the item there. The weight may be higher than the promotional\n     * threshold in which case we'll try to promote it. 
*/\n    for (int i = 0; i < CHK_NUM_TABLES; ++i) {\n        int idx = itemFpIdx.idx[i];\n        chkBucket *bucket = &topk->tables[i][idx];\n        if (bucket->lobby_entry.count == 0) {\n            bucket->lobby_entry.fp = itemFpIdx.fp;\n\n            res.table_idx = i;\n            res.pos = -1;\n\n            if (weight < LOBBY_PROMOTION_THRESHOLD) {\n                bucket->lobby_entry.count = weight;\n            } else {\n                int kickout_pos = tryPromoteAndKickout(topk, itemFpIdx, weight, i);\n                if (kickout_pos != -1) {\n                    res.pos = kickout_pos;\n                } else {\n                    bucket->lobby_entry.count = LOBBY_PROMOTION_THRESHOLD;\n                }\n            }\n\n            goto update_heap;\n        }\n    }\n\n    /* If there are no empty lobby entries choose a table deterministically,\n     * decay its lobby counter and update it */\n    int table_idx = itemFpIdx.fp & 1;\n    int idx = itemFpIdx.idx[table_idx];\n\n    chkLobbyEntry *e = &topk->tables[table_idx][idx].lobby_entry;\n\n    /* new_count is the count of `e` after decaying it with weight */\n    lobby_counter_t new_count = chkDecayCounter(topk, e->count, weight);\n\n    /* If the chosen lobby entry has decayed its counter to 0, it's replaced by\n     * the new entry. Note that in that case the new entry has its weight\n     * decreased by the approximate number of decay operations needed to decay\n     * the old entry. 
*/\n    if (new_count == 0) {\n        e->fp = itemFpIdx.fp;\n        counter_t exp_decay_cnt = getExpDecayCount(topk, e->count);\n        e->count = exp_decay_cnt >= weight ?\n            1 : (lobby_counter_t)min(255, weight - exp_decay_cnt);\n    } else {\n        e->count = new_count;\n    }\n\n    if (e->count >= LOBBY_PROMOTION_THRESHOLD) {\n        int kickout_pos = tryPromoteAndKickout(topk, itemFpIdx, e->count, table_idx);\n        if (kickout_pos != -1) {\n            res.table_idx = table_idx;\n            res.pos = kickout_pos;\n        }\n    }\n\n    /* After a change in the structure has occurred we check if we also need to\n     * update the heap - i.e. bump a new item into it, or reorder an existing\n     * item if its counter went up. */\nupdate_heap:\n    if (res.table_idx == -1 || res.pos == -1)\n        return NULL;\n\n    table_idx = res.table_idx;\n    idx = itemFpIdx.idx[table_idx];\n\n    counter_t heap_min = topk->heap[0].count;\n    chkHeavyEntry *entry = &topk->tables[table_idx][idx].heavy_entries[res.pos];\n\n    if (entry->count < heap_min)\n        return NULL;\n\n    /* The heap uses a different hash than the cuckoo tables */\n    uint64_t fp = XXH3_64bits_withSeed(item, itemlen, HEAP_SEED);\n    chkHeapBucket *itemHeapPtr = chkCheckExistInHeap(topk, item, itemlen, fp);\n    if (itemHeapPtr != NULL) {\n        itemHeapPtr->count = entry->count;\n        chkHeapifyDown(topk->heap, topk->k, itemHeapPtr - topk->heap);\n    } else {\n        /* We know the new entry's count is at least as large as the min\n         * element's, so it's safe to expel it. 
*/\n        sds expelled = topk->heap[0].item;\n        if (expelled) topk->alloc_size -= sdsAllocSize(expelled);\n\n        topk->heap[0].count = entry->count;\n        topk->heap[0].fp = fp;\n        topk->heap[0].item = sdsnewlen(item, itemlen);\n        topk->alloc_size += sdsAllocSize(topk->heap[0].item);\n\n        chkHeapifyDown(topk->heap, topk->k, 0);\n        return expelled;\n    }\n\n    return NULL;\n}\n\nint cmpchkHeapBucket(const void *tmp1, const void *tmp2) {\n    const chkHeapBucket *res1 = tmp1;\n    const chkHeapBucket *res2 = tmp2;\n    return res1->count < res2->count ? 1 : res1->count > res2->count ? -1 : 0;\n}\n\n/* Get a list of the topk->k elements inside the topk object, ordered by\n * count in descending order.\n *\n * NOTE: the returned array is a copy of the internal heap stored by `topk`. The\n * caller is responsible for releasing it after use. The elements of the array\n * share their `item` pointers with the internal topk->heap buckets so one must\n * not use it after `topk` is released. 
*/\nchkHeapBucket *chkTopKList(chkTopK *topk) {\n    chkHeapBucket *list = zmalloc(sizeof(chkHeapBucket) * topk->k);\n    memcpy(list, topk->heap, sizeof(chkHeapBucket) * topk->k);\n    qsort(list, topk->k, sizeof(*list), cmpchkHeapBucket);\n    return list;\n}\n\nsize_t chkTopKGetMemoryUsage(chkTopK *topk) {\n    if (!topk) return 0;\n\n    return topk->alloc_size;\n}\n\n#ifdef REDIS_TEST\n\n#include <stdio.h>\n#include \"testhelp.h\"\n\n#define UNUSED(x) (void)(x)\n\nstatic int findItemInList(chkHeapBucket *list, int k, const char *item, int itemlen) {\n    for (int i = 0; i < k; i++) {\n        if (list[i].item != NULL &&\n            sdslen(list[i].item) == (size_t)itemlen &&\n            memcmp(list[i].item, item, itemlen) == 0) {\n            return i;\n        }\n    }\n    return -1;\n}\n\nstatic int verifyListSorted(chkHeapBucket *list, int k) {\n    for (int i = 0; i < k - 1; i++) {\n        if (list[i].item == NULL) continue;\n        if (list[i + 1].item == NULL) continue;\n        if (list[i].count < list[i + 1].count) {\n            return 0;\n        }\n    }\n    return 1;\n}\n\nstatic void chkTopKUpdateAndFreeExpelled(chkTopK *topk, const char *item, int itemlen, counter_t weight) {\n    sds expelled = chkTopKUpdate(topk, (char *)item, itemlen, weight);\n    if (expelled) sdsfree(expelled);\n}\n\nstatic void testBasicTopK(void) {\n    int k = 5;\n    int numbuckets = 64;\n    double decay = 0.9;\n\n    chkTopK *topk = chkTopKCreate(k, numbuckets, decay);\n    test_cond(\"Create topk structure\", topk != NULL);\n\n    if (topk == NULL) return;\n\n    chkTopKUpdateAndFreeExpelled(topk, \"item1\", 5, 100);\n    chkTopKUpdateAndFreeExpelled(topk, \"item2\", 5, 200);\n    chkTopKUpdateAndFreeExpelled(topk, \"item3\", 5, 150);\n    chkTopKUpdateAndFreeExpelled(topk, \"item4\", 5, 50);\n    chkTopKUpdateAndFreeExpelled(topk, \"item5\", 5, 300);\n    chkTopKUpdateAndFreeExpelled(topk, \"item6\", 5, 75);\n\n    chkHeapBucket *list = chkTopKList(topk);\n    
test_cond(\"chkTopKList returns non-NULL\", list != NULL);\n\n    if (list == NULL) {\n        chkTopKRelease(topk);\n        return;\n    }\n\n    test_cond(\"TopK list is sorted in descending order\", verifyListSorted(list, k));\n\n    int idx1 = findItemInList(list, k, \"item5\", 5);\n    int idx2 = findItemInList(list, k, \"item2\", 5);\n    int idx3 = findItemInList(list, k, \"item3\", 5);\n\n    test_cond(\"Heaviest items are in the list\", idx1 != -1 && idx2 != -1 && idx3 != -1);\n\n    test_cond(\"item5 has the highest count\", idx1 == 0);\n\n    zfree(list);\n    chkTopKRelease(topk);\n}\n\nstatic void testHeavierElementsReplaceLighter(void) {\n    int k = 5;\n    int numbuckets = 64;\n    double decay = 0.9;\n\n    chkTopK *topk = chkTopKCreate(k, numbuckets, decay);\n    test_cond(\"Create topk structure for replacement test\", topk != NULL);\n\n    if (topk == NULL) return;\n\n    chkTopKUpdateAndFreeExpelled(topk, \"light1\", 6, 50);\n    chkTopKUpdateAndFreeExpelled(topk, \"light2\", 6, 60);\n    chkTopKUpdateAndFreeExpelled(topk, \"light3\", 6, 70);\n    chkTopKUpdateAndFreeExpelled(topk, \"light4\", 6, 80);\n    chkTopKUpdateAndFreeExpelled(topk, \"light5\", 6, 90);\n\n    chkHeapBucket *list1 = chkTopKList(topk);\n    test_cond(\"Initial topk list is not NULL\", list1 != NULL);\n\n    if (list1 == NULL) {\n        chkTopKRelease(topk);\n        return;\n    }\n\n    int light1_idx = findItemInList(list1, k, \"light1\", 6);\n    int light2_idx = findItemInList(list1, k, \"light2\", 6);\n    int light3_idx = findItemInList(list1, k, \"light3\", 6);\n    int light4_idx = findItemInList(list1, k, \"light4\", 6);\n    int light5_idx = findItemInList(list1, k, \"light5\", 6);\n\n    test_cond(\"light1 is in initial topk list\", light1_idx != -1);\n    test_cond(\"light2 is in initial topk list\", light2_idx != -1);\n    test_cond(\"light3 is in initial topk list\", light3_idx != -1);\n    test_cond(\"light4 is in initial topk list\", light4_idx != -1);\n 
   test_cond(\"light5 is in initial topk list\", light5_idx != -1);\n\n    zfree(list1);\n\n    chkTopKUpdateAndFreeExpelled(topk, \"heavy1\", 6, 500);\n    chkTopKUpdateAndFreeExpelled(topk, \"heavy2\", 6, 600);\n\n    chkHeapBucket *list2 = chkTopKList(topk);\n    test_cond(\"Updated topk list is not NULL\", list2 != NULL);\n\n    if (list2 == NULL) {\n        chkTopKRelease(topk);\n        return;\n    }\n\n    int heavy1_idx = findItemInList(list2, k, \"heavy1\", 6);\n    int heavy2_idx = findItemInList(list2, k, \"heavy2\", 6);\n\n    test_cond(\"heavy1 is in updated topk list\", heavy1_idx != -1);\n    test_cond(\"heavy2 is in updated topk list\", heavy2_idx != -1);\n\n    light1_idx = findItemInList(list2, k, \"light1\", 6);\n    light2_idx = findItemInList(list2, k, \"light2\", 6);\n    light3_idx = findItemInList(list2, k, \"light3\", 6);\n    light4_idx = findItemInList(list2, k, \"light4\", 6);\n    light5_idx = findItemInList(list2, k, \"light5\", 6);\n\n    int light_items_remaining = (light1_idx != -1 ? 1 : 0) +\n                                (light2_idx != -1 ? 1 : 0) +\n                                (light3_idx != -1 ? 1 : 0) +\n                                (light4_idx != -1 ? 1 : 0) +\n                                (light5_idx != -1 ? 
1 : 0);\n\n    test_cond(\"Some lighter items remain in the list after adding heavier ones\",\n              light_items_remaining > 0);\n\n    zfree(list2);\n    chkTopKRelease(topk);\n}\n\nstatic void testManySmallWeightUpdates(void) {\n    int k = 2;\n    int numbuckets = 64;\n    double decay = 0.9;\n\n    chkTopK *topk = chkTopKCreate(k, numbuckets, decay);\n    test_cond(\"Create topk structure for small weight updates test\", topk != NULL);\n\n    if (topk == NULL) return;\n\n    chkTopKUpdateAndFreeExpelled(topk, \"item0\", 5, 50);\n    chkTopKUpdateAndFreeExpelled(topk, \"item1\", 5, 100);\n\n    chkHeapBucket *list1 = chkTopKList(topk);\n    test_cond(\"Topk list after adding item0 and item1 is not NULL\", list1 != NULL);\n\n    if (list1 == NULL) {\n        chkTopKRelease(topk);\n        return;\n    }\n\n    int item0_idx1 = findItemInList(list1, k, \"item0\", 5);\n    int item1_idx1 = findItemInList(list1, k, \"item1\", 5);\n\n    test_cond(\"item0 and item1 are in topk after initial updates\",\n              item0_idx1 != -1 && item1_idx1 != -1);\n\n    zfree(list1);\n\n    for (int i = 0; i < 100; i++) {\n        chkTopKUpdateAndFreeExpelled(topk, \"item2\", 5, 1);\n    }\n\n    chkHeapBucket *list2 = chkTopKList(topk);\n    test_cond(\"Topk list after many small updates is not NULL\", list2 != NULL);\n\n    if (list2 == NULL) {\n        chkTopKRelease(topk);\n        return;\n    }\n\n    int item0_idx2 = findItemInList(list2, k, \"item0\", 5);\n    int item1_idx2 = findItemInList(list2, k, \"item1\", 5);\n    int item2_idx2 = findItemInList(list2, k, \"item2\", 5);\n\n    test_cond(\"item1 and item2 are in topk, item0 is not\",\n              item1_idx2 != -1 && item2_idx2 != -1 && item0_idx2 == -1);\n\n    counter_t item1_count = 0;\n    counter_t item2_count = 0;\n    if (item1_idx2 != -1) item1_count = list2[item1_idx2].count;\n    if (item2_idx2 != -1) item2_count = list2[item2_idx2].count;\n\n    test_cond(\"item1 and item2 have similar 
weights\", item1_count > 0 && item2_count > 0 && \n              (item1_count > item2_count ? item1_count - item2_count : item2_count - item1_count) < 5);\n\n    zfree(list2);\n    chkTopKRelease(topk);\n}\n\nint chkTopKTest(int argc, char *argv[], int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    testBasicTopK();\n    testHeavierElementsReplaceLighter();\n    testManySmallWeightUpdates();\n\n    return 0;\n}\n\n#endif /* REDIS_TEST */\n"
  },
  {
    "path": "src/chk.h",
    "content": "/* Implementation of a topK structure using CuckooHeavyKeeper algorithm\n *\n * Implementation is based on the paper \"Cuckoo Heavy Keeper and the balancing\n * act of maintaining heavy hitters in stream processing\" by Vinh Quang Ngo and\n * Marina Papatriantafilou. Also, the accompanying C++ implementation was used\n * as a reference point: https://github.com/vinhqngo5/Cuckoo_Heavy_Keeper\n * Main changes are addition of a min-heap so we can keep names of the top K\n * elements - idea comes from RedisBloom's TopK structure.\n *\n * Copyright (c) 2026-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#pragma once\n\n#include \"sds.h\"\n\n#include <stddef.h>\n#include <stdint.h>\n\n#define CHK_LUT_SIZE 256\n#define CHK_HEAVY_ENTRIES_PER_BUCKET 2\n#define CHK_NUM_TABLES 2\n\ntypedef uint64_t counter_t;\ntypedef uint16_t fingerprint_t;\ntypedef uint8_t lobby_counter_t;\n\ntypedef struct {\n    counter_t count;\n    fingerprint_t fp;\n} chkHeavyEntry;\n\ntypedef struct {\n    fingerprint_t fp;\n    lobby_counter_t count;\n} chkLobbyEntry;\n\ntypedef struct {\n    chkHeavyEntry heavy_entries[CHK_HEAVY_ENTRIES_PER_BUCKET];\n    chkLobbyEntry lobby_entry;\n} chkBucket;\n\ntypedef struct {\n    counter_t count;\n    sds item;\n    uint64_t fp; /* Fingerprint used to identify the item. 
Internal use only */\n} chkHeapBucket;\n\ntypedef struct chkTopK {\n    chkBucket *tables[CHK_NUM_TABLES]; /* Cuckoo tables */\n    chkHeapBucket *heap; /* Min-heap for storing the top-K items' names */\n\n    size_t alloc_size; /* Used for memory tracking only */\n\n    /* Expected number of operations to decay count i to 0 */\n    double lut_decay_exp[CHK_LUT_SIZE + 1];\n\n    /* Minimum number of decay operations to decay count i by 1 */\n    double lut_min_decay[CHK_LUT_SIZE + 1];\n\n    /* Probability of decaying i by 1. As per the paper the probability is\n     * decay^-i but we actually store (1/decay)^i for faster computation. */\n    double lut_decay_prob[CHK_LUT_SIZE + 1];\n\n    double decay; /* Decay constant */\n    double inv_decay; /* Cache 1/decay for faster computations */\n\n    counter_t total; /* Total recorded count for all updates */\n\n    int k;\n    int numbuckets;\n} chkTopK;\n\nchkTopK *chkTopKCreate(int k, int numbuckets, double decay);\nvoid chkTopKRelease(chkTopK *topk);\nsds chkTopKUpdate(chkTopK *topk, char *item, int itemlen, counter_t weight);\nchkHeapBucket *chkTopKList(chkTopK *topk);\nsize_t chkTopKGetMemoryUsage(chkTopK *topk);\n\n#ifdef REDIS_TEST\n\nint chkTopKTest(int argc, char *argv[], int flags);\n\n#endif /* REDIS_TEST */\n"
  },
  {
    "path": "src/cli_commands.c",
    "content": "#include <stddef.h>\n#include \"cli_commands.h\"\n\n/* Definitions to configure commands.c to generate the above structs. */\n#define MAKE_CMD(name,summary,complexity,since,doc_flags,replaced,deprecated,group,group_enum,history,num_history,tips,num_tips,function,arity,flags,acl,key_specs,key_specs_num,get_keys,numargs) name,summary,group,since,numargs\n#define MAKE_ARG(name,type,key_spec_index,token,summary,since,flags,numsubargs,deprecated_since) name,type,token,since,flags,numsubargs\n#define COMMAND_ARG cliCommandArg\n#define COMMAND_STRUCT commandDocs\n#define SKIP_CMD_HISTORY_TABLE\n#define SKIP_CMD_TIPS_TABLE\n#define SKIP_CMD_KEY_SPECS_TABLE\n\n#include \"commands.def\"\n"
  },
  {
    "path": "src/cli_commands.h",
    "content": "/* This file is used by redis-cli in place of server.h when including commands.c\n * It contains alternative structs which omit the parts of the commands table\n * that are not suitable for redis-cli, e.g. the command proc. */\n\n#ifndef __REDIS_CLI_COMMANDS_H\n#define __REDIS_CLI_COMMANDS_H\n\n#include <stddef.h>\n#include \"commands.h\"\n\n/* Syntax specifications for a command argument. */\ntypedef struct cliCommandArg {\n    char *name;\n    redisCommandArgType type;\n    char *token;\n    char *since;\n    int flags;\n    int numsubargs;\n    struct cliCommandArg *subargs;\n    const char *display_text;\n\n    /*\n     * For use at runtime.\n     * Fields used to keep track of input word matches for command-line hinting.\n     */\n    int matched;  /* How many input words have been matched by this argument? */\n    int matched_token;  /* Has the token been matched? */\n    int matched_name;  /* Has the name been matched? */\n    int matched_all;  /* Has the whole argument been consumed (no hint needed)? */\n} cliCommandArg;\n\n/* Command documentation info used for help output */\nstruct commandDocs {\n    char *name;\n    char *summary;\n    char *group;\n    char *since;\n    int numargs;\n    cliCommandArg *args; /* An array of the command arguments. */\n    struct commandDocs *subcommands;\n    char *params; /* A string describing the syntax of the command arguments. */\n};\n\nextern struct commandDocs redisCommandTable[];\n\n#endif\n"
  },
  {
    "path": "src/cli_common.c",
    "content": "/* CLI (command line interface) common methods\n * \n * Copyright (c) 2020-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n#include \"cli_common.h\"\n#include \"version.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <hiredis.h>\n#include <sdscompat.h> /* Use hiredis' sds compat header that maps sds calls to their hi_ variants */\n#include <sds.h> /* use sds.h from hiredis, so that only one set of sds functions will be present in the binary */\n#include <unistd.h>\n#include <string.h>\n#include <ctype.h>\n#ifdef USE_OPENSSL\n#include <openssl/ssl.h>\n#include <openssl/err.h>\n#include <hiredis_ssl.h>\n#endif\n\n#define UNUSED(V) ((void) V)\n\nchar *redisGitSHA1(void);\nchar *redisGitDirty(void);\n\n/* Wrapper around redisSecureConnection to avoid hiredis_ssl dependencies if\n * not building with TLS support.\n */\nint cliSecureConnection(redisContext *c, cliSSLconfig config, const char **err) {\n#ifdef USE_OPENSSL\n    static SSL_CTX *ssl_ctx = NULL;\n\n    if (!ssl_ctx) {\n        ssl_ctx = SSL_CTX_new(SSLv23_client_method());\n        if (!ssl_ctx) {\n            *err = \"Failed to create SSL_CTX\";\n            goto error;\n        }\n        SSL_CTX_set_options(ssl_ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);\n        SSL_CTX_set_verify(ssl_ctx, config.skip_cert_verify ? 
SSL_VERIFY_NONE : SSL_VERIFY_PEER, NULL);\n\n        if (config.cacert || config.cacertdir) {\n            if (!SSL_CTX_load_verify_locations(ssl_ctx, config.cacert, config.cacertdir)) {\n                *err = \"Invalid CA Certificate File/Directory\";\n                goto error;\n            }\n        } else {\n            if (!SSL_CTX_set_default_verify_paths(ssl_ctx)) {\n                *err = \"Failed to use default CA paths\";\n                goto error;\n            }\n        }\n\n        if (config.cert && !SSL_CTX_use_certificate_chain_file(ssl_ctx, config.cert)) {\n            *err = \"Invalid client certificate\";\n            goto error;\n        }\n\n        if (config.key && !SSL_CTX_use_PrivateKey_file(ssl_ctx, config.key, SSL_FILETYPE_PEM)) {\n            *err = \"Invalid private key\";\n            goto error;\n        }\n        if (config.ciphers && !SSL_CTX_set_cipher_list(ssl_ctx, config.ciphers)) {\n            *err = \"Error while configuring ciphers\";\n            goto error;\n        }\n#ifdef TLS1_3_VERSION\n        if (config.ciphersuites && !SSL_CTX_set_ciphersuites(ssl_ctx, config.ciphersuites)) {\n            *err = \"Error while setting cipher suites\";\n            goto error;\n        }\n#endif\n    }\n\n    SSL *ssl = SSL_new(ssl_ctx);\n    if (!ssl) {\n        *err = \"Failed to create SSL object\";\n        return REDIS_ERR;\n    }\n\n    if (config.sni && !SSL_set_tlsext_host_name(ssl, config.sni)) {\n        *err = \"Failed to configure SNI\";\n        SSL_free(ssl);\n        return REDIS_ERR;\n    }\n\n    return redisInitiateSSL(c, ssl);\n\nerror:\n    SSL_CTX_free(ssl_ctx);\n    ssl_ctx = NULL;\n    return REDIS_ERR;\n#else\n    (void) config;\n    (void) c;\n    (void) err;\n    return REDIS_OK;\n#endif\n}\n\n/* Wrapper around hiredis to allow arbitrary reads and writes.\n *\n * We piggyback on top of hiredis to achieve transparent TLS support,\n * and use its internal buffers so it can co-exist with commands\n * 
previously/later issued on the connection.\n *\n * The interface is close enough to read()/write() so things should mostly\n * work transparently.\n */\n\n/* Write a raw buffer through a redisContext. If we already have something\n * in the buffer (leftovers from hiredis operations) it will be written\n * as well.\n */\nssize_t cliWriteConn(redisContext *c, const char *buf, size_t buf_len)\n{\n    int done = 0;\n\n    /* Append data to the buffer, which is *usually* expected to be empty\n     * but we don't assume that, and write.\n     */\n    c->obuf = sdscatlen(c->obuf, buf, buf_len);\n    if (redisBufferWrite(c, &done) == REDIS_ERR) {\n        if (!(c->flags & REDIS_BLOCK))\n            errno = EAGAIN;\n\n        /* On error, we assume nothing was written and we roll back the\n         * buffer to its original state.\n         */\n        if (sdslen(c->obuf) > buf_len)\n            sdsrange(c->obuf, 0, -(buf_len+1));\n        else\n            sdsclear(c->obuf);\n\n        return -1;\n    }\n\n    /* If we're done, free up everything. We may have written more than\n     * buf_len (if c->obuf was not initially empty) but we don't have to\n     * tell.\n     */\n    if (done) {\n        sdsclear(c->obuf);\n        return buf_len;\n    }\n\n    /* Write was successful but we have some leftovers which we should\n     * remove from the buffer.\n     *\n     * Do we still have data that was there prior to our buf? If so,\n     * restore the buffer to its original state and report no new data was\n     * written.\n     */\n    if (sdslen(c->obuf) > buf_len) {\n        sdsrange(c->obuf, 0, -(buf_len+1));\n        return 0;\n    }\n\n    /* At this point we're sure no prior data is left. 
We flush the buffer\n     * and report how much we've written.\n     */\n    size_t left = sdslen(c->obuf);\n    sdsclear(c->obuf);\n    return buf_len - left;\n}\n\n/* Wrapper around OpenSSL (libssl and libcrypto) initialisation\n */\nint cliSecureInit(void)\n{\n#ifdef USE_OPENSSL\n    ERR_load_crypto_strings();\n    SSL_load_error_strings();\n    SSL_library_init();\n#endif\n    return REDIS_OK;\n}\n\n/* Create an sds from stdin */\nsds readArgFromStdin(void) {\n    char buf[1024];\n    sds arg = sdsempty();\n\n    while(1) {\n        int nread = read(fileno(stdin),buf,1024);\n\n        if (nread == 0) break;\n        else if (nread == -1) {\n            perror(\"Reading from standard input\");\n            exit(1);\n        }\n        arg = sdscatlen(arg,buf,nread);\n    }\n    return arg;\n}\n\n/* Create an sds array from argv, either as-is or by dequoting every\n * element. When quoted is non-zero, may return a NULL to indicate an\n * invalid quoted string.\n *\n * The caller should free the resulting array of sds strings with\n * sdsfreesplitres().\n */\nsds *getSdsArrayFromArgv(int argc,char **argv, int quoted) {\n    sds *res = sds_malloc(sizeof(sds) * argc);\n\n    for (int j = 0; j < argc; j++) {\n        if (quoted) {\n            sds unquoted = unquoteCString(argv[j]);\n            if (!unquoted) {\n                while (--j >= 0) sdsfree(res[j]);\n                sds_free(res);\n                return NULL;\n            }\n            res[j] = unquoted;\n        } else {\n            res[j] = sdsnew(argv[j]);\n        }\n    }\n\n    return res;\n}\n\n/* Unquote a null-terminated string and return it as a binary-safe sds. 
*/\nsds unquoteCString(char *str) {\n    int count;\n    sds *unquoted = sdssplitargs(str, &count);\n    sds res = NULL;\n\n    if (unquoted && count == 1) {\n        res = unquoted[0];\n        unquoted[0] = NULL;\n    }\n\n    if (unquoted)\n        sdsfreesplitres(unquoted, count);\n\n    return res;\n}\n\n\n/* URL-style percent decoding. */\n#define isHexChar(c) (isdigit(c) || ((c) >= 'a' && (c) <= 'f'))\n#define decodeHexChar(c) (isdigit(c) ? (c) - '0' : (c) - 'a' + 10)\n#define decodeHex(h, l) ((decodeHexChar(h) << 4) + decodeHexChar(l))\n\nstatic sds percentDecode(const char *pe, size_t len) {\n    const char *end = pe + len;\n    sds ret = sdsempty();\n    const char *curr = pe;\n\n    while (curr < end) {\n        if (*curr == '%') {\n            if ((end - curr) < 2) {\n                fprintf(stderr, \"Incomplete URI encoding\\n\");\n                exit(1);\n            }\n\n            char h = tolower(*(++curr));\n            char l = tolower(*(++curr));\n            if (!isHexChar(h) || !isHexChar(l)) {\n                fprintf(stderr, \"Illegal character in URI encoding\\n\");\n                exit(1);\n            }\n            char c = decodeHex(h, l);\n            ret = sdscatlen(ret, &c, 1);\n            curr++;\n        } else {\n            ret = sdscatlen(ret, curr++, 1);\n        }\n    }\n\n    return ret;\n}\n\n/* Parse a URI and extract the server connection information.\n * URI scheme is based on the provisional specification[1] excluding support\n * for query parameters. 
Valid URIs are:\n *   scheme:    \"redis://\"\n *   authority: [[<username> \":\"] <password> \"@\"] [<hostname> [\":\" <port>]]\n *   path:      [\"/\" [<db>]]\n *\n *  [1]: https://www.iana.org/assignments/uri-schemes/prov/redis */\nvoid parseRedisUri(const char *uri, const char* tool_name, cliConnInfo *connInfo, int *tls_flag) {\n#ifdef USE_OPENSSL\n    UNUSED(tool_name);\n#else\n    UNUSED(tls_flag);\n#endif\n\n    const char *scheme = \"redis://\";\n    const char *tlsscheme = \"rediss://\";\n    const char *curr = uri;\n    const char *end = uri + strlen(uri);\n    const char *userinfo, *username, *port, *host, *path;\n\n    /* URI must start with a valid scheme. */\n    if (!strncasecmp(tlsscheme, curr, strlen(tlsscheme))) {\n#ifdef USE_OPENSSL\n        *tls_flag = 1;\n        curr += strlen(tlsscheme);\n#else\n        fprintf(stderr,\"rediss:// is only supported when %s is compiled with OpenSSL\\n\", tool_name);\n        exit(1);\n#endif\n    } else if (!strncasecmp(scheme, curr, strlen(scheme))) {\n        curr += strlen(scheme);\n    } else {\n        fprintf(stderr,\"Invalid URI scheme\\n\");\n        exit(1);\n    }\n    if (curr == end) return;\n\n    /* Extract user info. */\n    if ((userinfo = strchr(curr,'@'))) {\n        if ((username = strchr(curr, ':')) && username < userinfo) {\n            connInfo->user = percentDecode(curr, username - curr);\n            curr = username + 1;\n        }\n\n        connInfo->auth = percentDecode(curr, userinfo - curr);\n        curr = userinfo + 1;\n    }\n    if (curr == end) return;\n\n    /* Extract host and port. */\n    path = strchr(curr, '/');\n    if (*curr != '/') {\n        host = path ? 
path - 1 : end;\n        if (*curr == '[') {\n            curr += 1;\n            if ((port = strchr(curr, ']'))) {\n                if (*(port+1) == ':') {\n                    connInfo->hostport = atoi(port + 2);\n                }\n                host = port - 1;\n            }\n        } else {\n            if ((port = strchr(curr, ':'))) {\n                connInfo->hostport = atoi(port + 1);\n                host = port - 1;\n            }\n        }\n        sdsfree(connInfo->hostip);\n        connInfo->hostip = sdsnewlen(curr, host - curr + 1);\n    }\n    curr = path ? path + 1 : end;\n    if (curr == end) return;\n\n    /* Extract database number. */\n    connInfo->input_dbnum = atoi(curr);\n}\n\nvoid freeCliConnInfo(cliConnInfo connInfo){\n    if (connInfo.hostip) sdsfree(connInfo.hostip);\n    if (connInfo.auth) sdsfree(connInfo.auth);\n    if (connInfo.user) sdsfree(connInfo.user);\n}\n\n/*\n * Escape a Unicode string for JSON output (--json), following RFC 7159:\n * https://datatracker.ietf.org/doc/html/rfc7159#section-7\n*/\nsds escapeJsonString(sds s, const char *p, size_t len) {\n    s = sdscatlen(s,\"\\\"\",1);\n    while(len--) {\n        switch(*p) {\n        case '\\\\':\n        case '\"':\n            s = sdscatprintf(s,\"\\\\%c\",*p);\n            break;\n        case '\\n': s = sdscatlen(s,\"\\\\n\",2); break;\n        case '\\f': s = sdscatlen(s,\"\\\\f\",2); break;\n        case '\\r': s = sdscatlen(s,\"\\\\r\",2); break;\n        case '\\t': s = sdscatlen(s,\"\\\\t\",2); break;\n        case '\\b': s = sdscatlen(s,\"\\\\b\",2); break;\n        default:\n            s = sdscatprintf(s,*(unsigned char *)p <= 0x1f ? \"\\\\u%04x\" : \"%c\",*p);\n        }\n        p++;\n    }\n    return sdscatlen(s,\"\\\"\",1);\n}\n\nsds cliVersion(void) {\n    sds version = sdscatprintf(sdsempty(), \"%s\", REDIS_VERSION);\n\n    /* Add git commit and working tree status when available. 
*/\n    if (strtoll(redisGitSHA1(),NULL,16)) {\n        version = sdscatprintf(version, \" (git:%s\", redisGitSHA1());\n        if (strtoll(redisGitDirty(),NULL,10))\n            version = sdscatprintf(version, \"-dirty\");\n        version = sdscat(version, \")\");\n    }\n    return version;\n}\n\n/* This is a wrapper to call redisConnect or redisConnectWithTimeout. */\nredisContext *redisConnectWrapper(const char *ip, int port, const struct timeval tv) {\n    if (tv.tv_sec == 0 && tv.tv_usec == 0) {\n        return redisConnect(ip, port);\n    } else {\n        return redisConnectWithTimeout(ip, port, tv);\n    }\n}\n\n/* This is a wrapper to call redisConnectUnix or redisConnectUnixWithTimeout. */\nredisContext *redisConnectUnixWrapper(const char *path, const struct timeval tv) {\n    if (tv.tv_sec == 0 && tv.tv_usec == 0) {\n        return redisConnectUnix(path);\n    } else {\n        return redisConnectUnixWithTimeout(path, tv);\n    }\n}\n"
  },
  {
    "path": "src/cli_common.h",
    "content": "#ifndef __CLICOMMON_H\n#define __CLICOMMON_H\n\n#include <hiredis.h>\n#include <sdscompat.h> /* Use hiredis' sds compat header that maps sds calls to their hi_ variants */\n\ntypedef struct cliSSLconfig {\n    /* Requested SNI, or NULL */\n    char *sni;\n    /* CA Certificate file, or NULL */\n    char *cacert;\n    /* Directory where trusted CA certificates are stored, or NULL */\n    char *cacertdir;\n    /* Skip server certificate verification. */\n    int skip_cert_verify;\n    /* Client certificate to authenticate with, or NULL */\n    char *cert;\n    /* Private key file to authenticate with, or NULL */\n    char *key;\n    /* Preferred cipher list, or NULL (applies only to <= TLSv1.2) */\n    char* ciphers;\n    /* Preferred ciphersuites list, or NULL (applies only to TLSv1.3) */\n    char* ciphersuites;\n} cliSSLconfig;\n\n\n/* server connection information object, used to describe an ip:port pair, db num user input, and user:pass. */\ntypedef struct cliConnInfo {\n    char *hostip;\n    int hostport;\n    int input_dbnum;\n    char *auth;\n    char *user;\n} cliConnInfo;\n\nint cliSecureConnection(redisContext *c, cliSSLconfig config, const char **err);\n\nssize_t cliWriteConn(redisContext *c, const char *buf, size_t buf_len);\n\nint cliSecureInit(void);\n\nsds readArgFromStdin(void);\n\nsds *getSdsArrayFromArgv(int argc,char **argv, int quoted);\n\nsds unquoteCString(char *str);\n\nvoid parseRedisUri(const char *uri, const char* tool_name, cliConnInfo *connInfo, int *tls_flag);\n\nvoid freeCliConnInfo(cliConnInfo connInfo);\n\nsds escapeJsonString(sds s, const char *p, size_t len);\n\nsds cliVersion(void);\n\nredisContext *redisConnectWrapper(const char *ip, int port, const struct timeval tv);\nredisContext *redisConnectUnixWrapper(const char *path, const struct timeval tv);\n\n#endif /* __CLICOMMON_H */\n"
  },
  {
    "path": "src/cluster.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/*\n * cluster.c contains the common parts of a clustering\n * implementation, the parts that are shared between\n * any implementation of Redis clustering.\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"cluster_asm.h\"\n#include \"cluster_slot_stats.h\"\n\n#include <ctype.h>\n#include \"bio.h\"\n\n/* -----------------------------------------------------------------------------\n * Key space handling\n * -------------------------------------------------------------------------- */\n\n/* If it can be inferred that the given glob-style pattern, as implemented in\n * stringmatchlen() in util.c, only can match keys belonging to a single slot,\n * that slot is returned. Otherwise -1 is returned. */\nint patternHashSlot(char *pattern, int length) {\n    int s = -1; /* index of the first '{' */\n\n    for (int i = 0; i < length; i++) {\n        if (pattern[i] == '*' || pattern[i] == '?' || pattern[i] == '[') {\n            /* Wildcard or character class found. Keys can be in any slot. */\n            return -1;\n        } else if (pattern[i] == '\\\\') {\n            /* Escaped character. Computing slot in this case is not\n             * implemented. We would need a temp buffer. */\n            return -1;\n        } else if (s == -1 && pattern[i] == '{') {\n            /* Opening brace '{' found. */\n            s = i;\n        } else if (s >= 0 && pattern[i] == '}' && i == s + 1) {\n            /* Empty tag '{}' found. The whole key is hashed. Ignore braces. 
*/\n            s = -2;\n        } else if (s >= 0 && pattern[i] == '}') {\n            /* Non-empty tag '{...}' found. Hash what's between braces. */\n            return crc16(pattern + s + 1, i - s - 1) & 0x3FFF;\n        }\n    }\n\n    /* The pattern matches a single key. Hash the whole pattern. */\n    return crc16(pattern, length) & 0x3FFF;\n}\n\nint getSlotOrReply(client *c, robj *o) {\n    long long slot;\n\n    if (getLongLongFromObject(o,&slot) != C_OK ||\n        slot < 0 || slot >= CLUSTER_SLOTS)\n    {\n        addReplyError(c,\"Invalid or out of range slot\");\n        return -1;\n    }\n    return (int) slot;\n}\n\nConnectionType *connTypeOfCluster(void) {\n    if (server.tls_cluster) {\n        return connectionTypeTls();\n    }\n\n    return connectionTypeTcp();\n}\n\n/* -----------------------------------------------------------------------------\n * DUMP, RESTORE and MIGRATE commands\n * -------------------------------------------------------------------------- */\n\n/* Generates a DUMP-format representation of the object 'o', adding it to the\n * io stream pointed to by 'rio'. This function can't fail. */\nvoid createDumpPayload(rio *payload, robj *o, robj *key, int dbid, int skip_checksum) {\n    unsigned char buf[2];\n    uint64_t crc = 0;\n\n    /* Serialize the object in an RDB-like format. It consists of an object type\n     * byte followed by the serialized object. This is understood by RESTORE. */\n    rioInitWithBuffer(payload,sdsempty());\n\n    /* Save key metadata if present (the TTL is handled separately via command args) */\n    if (getModuleMetaBits(o->metabits))\n        serverAssert(rdbSaveKeyMetadata(payload, key, o, dbid) != -1);\n    serverAssert(rdbSaveObjectType(payload,o));\n    serverAssert(rdbSaveObject(payload,o,key,dbid));\n\n    /* Write the footer; this is how it looks:\n     * ----------------+---------------------+---------------+\n     * ... 
RDB payload | 2 bytes RDB version | 8 bytes CRC64 |\n     * ----------------+---------------------+---------------+\n     * RDB version and CRC are both in little endian.\n     */\n\n    /* RDB version */\n    buf[0] = RDB_VERSION & 0xff;\n    buf[1] = (RDB_VERSION >> 8) & 0xff;\n    payload->io.buffer.ptr = sdscatlen(payload->io.buffer.ptr,buf,2);\n\n    /* If crc checksum is disabled, crc is set to 0 and no checksum validation\n     * will be performed on RESTORE. */\n    if (!skip_checksum) {\n        /* CRC64 */\n        crc = crc64(0,(unsigned char*)payload->io.buffer.ptr,\n                    sdslen(payload->io.buffer.ptr));\n        memrev64ifbe(&crc);\n    }\n    payload->io.buffer.ptr = sdscatlen(payload->io.buffer.ptr,&crc,8);\n}\n\n/* Verify that the RDB version of the dump payload matches that of this Redis\n * instance and that the checksum is ok.\n * If the DUMP payload looks valid, C_OK is returned, otherwise C_ERR\n * is returned. If rdbver_ptr is not NULL, it is populated with the value read\n * from the input buffer. */\nint verifyDumpPayload(unsigned char *p, size_t len, uint16_t *rdbver_ptr) {\n    unsigned char *footer;\n    uint16_t rdbver;\n    uint64_t crc;\n\n    /* At least 2 bytes of RDB version and 8 of CRC64 should be present. */\n    if (len < 10) return C_ERR;\n    footer = p+(len-10);\n\n    /* Set and verify RDB version. */\n    rdbver = (footer[1] << 8) | footer[0];\n    if (rdbver_ptr) {\n        *rdbver_ptr = rdbver;\n    }\n    if (rdbver > RDB_VERSION) return C_ERR;\n\n    if (server.skip_checksum_validation)\n        return C_OK;\n\n    uint64_t crc_payload;\n    memcpy(&crc_payload, footer+2, 8);\n    if (crc_payload == 0) /* No checksum. */\n        return C_OK;\n\n    /* Verify CRC64 */\n    crc = crc64(0,p,len-8);\n    memrev64ifbe(&crc);\n    return crc == crc_payload ? 
C_OK : C_ERR;\n}\n\n/* DUMP keyname\n * DUMP is actually not used by Redis Cluster but it is the obvious\n * complement of RESTORE and can be useful for different applications. */\nvoid dumpCommand(client *c) {\n    kvobj *o;\n    rio payload;\n\n    /* Check if the key is here. */\n    if ((o = lookupKeyRead(c->db,c->argv[1])) == NULL) {\n        addReplyNull(c);\n        return;\n    }\n\n    /* Create the DUMP encoded representation. */\n    createDumpPayload(&payload,o,c->argv[1],c->db->id,0);\n\n    /* Transfer to the client */\n    addReplyBulkSds(c,payload.io.buffer.ptr);\n    return;\n}\n\n/* RESTORE key ttl serialized-value [REPLACE] [ABSTTL] [IDLETIME seconds] [FREQ frequency] */\nvoid restoreCommand(client *c) {\n    long long ttl, lfu_freq = -1, lru_idle = -1, lru_clock = -1;\n    rio payload;\n    int j, type, replace = 0, absttl = 0;\n    robj *obj;\n\n    /* Parse additional options */\n    for (j = 4; j < c->argc; j++) {\n        int additional = c->argc-j-1;\n        if (!strcasecmp(c->argv[j]->ptr,\"replace\")) {\n            replace = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"absttl\")) {\n            absttl = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"idletime\") && additional >= 1 &&\n                   lfu_freq == -1)\n        {\n            if (getLongLongFromObjectOrReply(c,c->argv[j+1],&lru_idle,NULL)\n                != C_OK) return;\n            if (lru_idle < 0) {\n                addReplyError(c,\"Invalid IDLETIME value, must be >= 0\");\n                return;\n            }\n            lru_clock = LRU_CLOCK();\n            j++; /* Consume additional arg. 
*/\n        } else if (!strcasecmp(c->argv[j]->ptr,\"freq\") && additional >= 1 &&\n                   lru_idle == -1)\n        {\n            if (getLongLongFromObjectOrReply(c,c->argv[j+1],&lfu_freq,NULL)\n                != C_OK) return;\n            if (lfu_freq < 0 || lfu_freq > 255) {\n                addReplyError(c,\"Invalid FREQ value, must be >= 0 and <= 255\");\n                return;\n            }\n            j++; /* Consume additional arg. */\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* Make sure this key does not already exist here... */\n    robj *key = c->argv[1];\n    kvobj *oldval = lookupKeyWrite(c->db,key);\n    int oldtype = oldval ? oldval->type : -1;\n    if (!replace && oldval) {\n        addReplyErrorObject(c,shared.busykeyerr);\n        return;\n    }\n\n    /* Check if the TTL value makes sense */\n    if (getLongLongFromObjectOrReply(c,c->argv[2],&ttl,NULL) != C_OK) {\n        return;\n    } else if (ttl < 0) {\n        addReplyError(c,\"Invalid TTL value, must be >= 0\");\n        return;\n    }\n\n    /* Verify RDB version and data checksum. */\n    if (verifyDumpPayload(c->argv[3]->ptr,sdslen(c->argv[3]->ptr),NULL) == C_ERR)\n    {\n        addReplyError(c,\"DUMP payload version or checksum are wrong\");\n        return;\n    }\n\n    rioInitWithBuffer(&payload,c->argv[3]->ptr);\n\n    /* Initialize metadata spec to collect metadata+expiry from payload. */\n    KeyMetaSpec keymeta;\n    keyMetaSpecInit(&keymeta);\n\n    /* Compute TTL early so we can add it to metadata spec in correct order */\n    if (ttl) {\n        if (!absttl) ttl+=commandTimeSnapshot();\n        keyMetaSpecAdd(&keymeta, KEY_META_ID_EXPIRE, ttl);\n    }\n\n    /* With metadata, type = RDB_OPCODE_KEY_META. 
Layout: [<META>,]<TYPE>,<KEY>,<VALUE> */\n    type = rdbLoadType(&payload);\n    if (rdbResolveKeyType(&payload, &type, c->db->id, &keymeta) == -1) {\n        addReplyError(c,\"Bad data format\");\n        return;\n    }\n\n    /* Load the object */\n    if ((obj = rdbLoadObject(type,&payload,key->ptr,c->db->id,NULL)) == NULL)\n    {\n        keyMetaSpecCleanup(&keymeta);\n        addReplyError(c,\"Bad data format\");\n        return;\n    }\n\n    /* Remove the old key if needed. */\n    int deleted = 0;\n    if (replace)\n        deleted = dbDelete(c->db,key);\n\n    if (ttl && checkAlreadyExpired(ttl)) {\n        if (deleted) {\n            robj *aux = server.lazyfree_lazy_server_del ? shared.unlink : shared.del;\n            rewriteClientCommandVector(c, 2, aux, key);\n            keyModified(c,c->db,key,NULL,1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",key,c->db->id);\n            server.dirty++;\n        }\n        /* Update the stats, see setGenericCommand for details. */\n        server.stat_expiredkeys++;\n        keyMetaSpecCleanup(&keymeta);\n        decrRefCount(obj);\n        addReply(c, shared.ok);\n        return;\n    }\n\n    /* Create the key and set the TTL if any */\n    kvobj *kv = dbAddInternal(c->db, key, &obj, NULL, &keymeta);\n\n    /* Save type: kv may be reallocated by module callbacks during notifyKeyspaceEvent below. 
*/\n    int kvtype = kv->type;\n\n    /* If minExpiredField was set, then the object is a hash with expiration\n     * on fields, and we need to register it in the global HFE DS */\n    if (kvtype == OBJ_HASH) {\n        uint64_t minExpiredField = hashTypeGetMinExpire(kv, 1);\n        if (minExpiredField != EB_EXPIRE_TIME_INVALID)\n            estoreAdd(c->db->subexpires, getKeySlot(key->ptr), kv, minExpiredField);\n    }\n\n    if (kvtype == OBJ_STREAM)\n        streamKeyLoaded(c->db, key, kv);\n\n    if (ttl) {\n        if (!absttl) {\n            /* Propagate TTL as absolute timestamp */\n            robj *ttl_obj = createStringObjectFromLongLong(ttl);\n            rewriteClientCommandArgument(c,2,ttl_obj);\n            decrRefCount(ttl_obj);\n            rewriteClientCommandArgument(c,c->argc,shared.absttl);\n        }\n    }\n    objectSetLRUOrLFU(kv, lfu_freq, lru_idle, lru_clock, 1000);\n    keyModified(c,c->db,key,NULL,1);\n    notifyKeyspaceEvent(NOTIFY_GENERIC,\"restore\",key,c->db->id);\n    KSN_INVALIDATE_KVOBJ(kv);\n\n    /* If we deleted a key, it means the REPLACE parameter was passed and the\n     * destination key existed. */\n    if (deleted) {\n        notifyKeyspaceEvent(NOTIFY_OVERWRITTEN, \"overwritten\", key, c->db->id);\n        if (oldtype != kvtype) {\n            notifyKeyspaceEvent(NOTIFY_TYPE_CHANGED, \"type_changed\", key, c->db->id);\n        }\n    }\n    addReply(c,shared.ok);\n    server.dirty++;\n}\n\n/* MIGRATE socket cache implementation.\n *\n * We keep a map between host:port and a TCP socket that we used to connect\n * to this instance recently.\n * These sockets are closed when the max number we cache is reached, and also\n * in serverCron() when they have been around for more than a few seconds. */\n#define MIGRATE_SOCKET_CACHE_ITEMS 64 /* max num of items in the cache. */\n#define MIGRATE_SOCKET_CACHE_TTL 10 /* close cached sockets after 10 sec. 
*/\n\ntypedef struct migrateCachedSocket {\n    connection *conn;\n    long last_dbid;\n    time_t last_use_time;\n} migrateCachedSocket;\n\n/* Return a migrateCachedSocket containing a TCP socket connected with the\n * target instance, possibly returning a cached one.\n *\n * This function is responsible for sending errors to the client if a\n * connection can't be established. In this case NULL is returned.\n * Otherwise on success the socket is returned, and the caller should not\n * attempt to free it after usage.\n *\n * If the caller detects an error while using the socket, migrateCloseSocket()\n * should be called so that the connection will be created from scratch\n * the next time. */\nmigrateCachedSocket* migrateGetSocket(client *c, robj *host, robj *port, long timeout) {\n    connection *conn;\n    sds name = sdsempty();\n    migrateCachedSocket *cs;\n\n    /* Check if we have an already cached socket for this ip:port pair. */\n    name = sdscatlen(name,host->ptr,sdslen(host->ptr));\n    name = sdscatlen(name,\":\",1);\n    name = sdscatlen(name,port->ptr,sdslen(port->ptr));\n    cs = dictFetchValue(server.migrate_cached_sockets,name);\n    if (cs) {\n        sdsfree(name);\n        cs->last_use_time = server.unixtime;\n        return cs;\n    }\n\n    /* No cached socket, create one. */\n    if (dictSize(server.migrate_cached_sockets) == MIGRATE_SOCKET_CACHE_ITEMS) {\n        /* Too many items, drop one at random. 
*/\n        dictEntry *de = dictGetRandomKey(server.migrate_cached_sockets);\n        cs = dictGetVal(de);\n        connClose(cs->conn);\n        zfree(cs);\n        dictDelete(server.migrate_cached_sockets,dictGetKey(de));\n    }\n\n    /* Create the connection */\n    conn = connCreate(server.el, connTypeOfCluster());\n    if (connBlockingConnect(conn, host->ptr, atoi(port->ptr), timeout)\n        != C_OK) {\n        addReplyError(c,\"-IOERR error or timeout connecting to the client\");\n        connClose(conn);\n        sdsfree(name);\n        return NULL;\n    }\n    connEnableTcpNoDelay(conn);\n\n    /* Add to the cache and return it to the caller. */\n    cs = zmalloc(sizeof(*cs));\n    cs->conn = conn;\n\n    cs->last_dbid = -1;\n    cs->last_use_time = server.unixtime;\n    dictAdd(server.migrate_cached_sockets,name,cs);\n    return cs;\n}\n\n/* Free a migrate cached connection. */\nvoid migrateCloseSocket(robj *host, robj *port) {\n    sds name = sdsempty();\n    migrateCachedSocket *cs;\n\n    name = sdscatlen(name,host->ptr,sdslen(host->ptr));\n    name = sdscatlen(name,\":\",1);\n    name = sdscatlen(name,port->ptr,sdslen(port->ptr));\n    cs = dictFetchValue(server.migrate_cached_sockets,name);\n    if (!cs) {\n        sdsfree(name);\n        return;\n    }\n\n    connClose(cs->conn);\n    zfree(cs);\n    dictDelete(server.migrate_cached_sockets,name);\n    sdsfree(name);\n}\n\nvoid migrateCloseTimedoutSockets(void) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.migrate_cached_sockets);\n    while((de = dictNext(&di)) != NULL) {\n        migrateCachedSocket *cs = dictGetVal(de);\n\n        if ((server.unixtime - cs->last_use_time) > MIGRATE_SOCKET_CACHE_TTL) {\n            connClose(cs->conn);\n            zfree(cs);\n            dictDelete(server.migrate_cached_sockets,dictGetKey(de));\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* MIGRATE host port key dbid timeout [COPY | REPLACE | AUTH password |\n *    
     AUTH2 username password]\n *\n * Or, in the multiple keys form:\n *\n * MIGRATE host port \"\" dbid timeout [COPY | REPLACE | AUTH password |\n *         AUTH2 username password] KEYS key1 key2 ... keyN */\nvoid migrateCommand(client *c) {\n    migrateCachedSocket *cs;\n    int copy = 0, replace = 0, j;\n    char *username = NULL;\n    char *password = NULL;\n    long timeout;\n    long dbid;\n    robj **kvArray = NULL; /* Objects to migrate. */\n    robj **keyArray = NULL; /* Key names. */\n    robj **newargv = NULL; /* Used to rewrite the command as DEL ... keys ... */\n    rio cmd, payload;\n    int may_retry = 1;\n    int write_error = 0;\n    int argv_rewritten = 0;\n\n    /* To support the KEYS option we need the following additional state. */\n    int first_key = 3; /* Argument index of the first key. */\n    int num_keys = 1;  /* By default only migrate the 'key' argument. */\n\n    /* Parse additional options */\n    for (j = 6; j < c->argc; j++) {\n        int moreargs = (c->argc-1) - j;\n        if (!strcasecmp(c->argv[j]->ptr,\"copy\")) {\n            copy = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"replace\")) {\n            replace = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"auth\")) {\n            if (!moreargs) {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n            j++;\n            password = c->argv[j]->ptr;\n            redactClientCommandArgument(c,j);\n        } else if (!strcasecmp(c->argv[j]->ptr,\"auth2\")) {\n            if (moreargs < 2) {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n            username = c->argv[++j]->ptr;\n            redactClientCommandArgument(c,j);\n            password = c->argv[++j]->ptr;\n            redactClientCommandArgument(c,j);\n        } else if (!strcasecmp(c->argv[j]->ptr,\"keys\")) {\n            if (sdslen(c->argv[3]->ptr) != 0) {\n                addReplyError(c,\n   
                           \"When using MIGRATE KEYS option, the key argument\"\n                              \" must be set to the empty string\");\n                return;\n            }\n            first_key = j+1;\n            num_keys = c->argc - j - 1;\n            break; /* All the remaining args are keys. */\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* Sanity check */\n    if (getLongFromObjectOrReply(c,c->argv[5],&timeout,NULL) != C_OK ||\n        getLongFromObjectOrReply(c,c->argv[4],&dbid,NULL) != C_OK)\n    {\n        return;\n    }\n    if (timeout <= 0) timeout = 1000;\n\n    /* Check if the keys are here. If at least one key is present, migrate it;\n     * otherwise, if all the keys are missing, reply with \"NOKEY\" to signal\n     * the caller that there was nothing to migrate. We don't return an error in\n     * this case, since often this is due to a normal condition like the key\n     * expiring in the meantime. */\n    kvArray = zrealloc(kvArray,sizeof(kvobj*)*num_keys);\n    keyArray = zrealloc(keyArray,sizeof(robj*)*num_keys);\n    int num_exists = 0;\n\n    for (j = 0; j < num_keys; j++) {\n        if ((kvArray[num_exists] = lookupKeyRead(c->db,c->argv[first_key+j])) != NULL) {\n            keyArray[num_exists] = c->argv[first_key+j];\n            num_exists++;\n        }\n    }\n    num_keys = num_exists;\n    if (num_keys == 0) {\n        zfree(kvArray); zfree(keyArray);\n        addReplySds(c,sdsnew(\"+NOKEY\\r\\n\"));\n        return;\n    }\n\n    try_again:\n    write_error = 0;\n\n    /* Connect */\n    cs = migrateGetSocket(c,c->argv[1],c->argv[2],timeout);\n    if (cs == NULL) {\n        zfree(kvArray); zfree(keyArray);\n        return; /* error sent to the client by migrateGetSocket() */\n    }\n\n    rioInitWithBuffer(&cmd,sdsempty());\n\n    /* Authentication */\n    if (password) {\n        int arity = username ? 
3 : 2;\n        serverAssertWithInfo(c,NULL,rioWriteBulkCount(&cmd,'*',arity));\n        serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,\"AUTH\",4));\n        if (username) {\n            serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,username,\n                                                           sdslen(username)));\n        }\n        serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,password,\n                                                       sdslen(password)));\n    }\n\n    /* Send the SELECT command if the current DB is not already selected. */\n    int select = cs->last_dbid != dbid; /* Should we emit SELECT? */\n    if (select) {\n        serverAssertWithInfo(c,NULL,rioWriteBulkCount(&cmd,'*',2));\n        serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,\"SELECT\",6));\n        serverAssertWithInfo(c,NULL,rioWriteBulkLongLong(&cmd,dbid));\n    }\n\n    int non_expired = 0; /* Number of keys that we'll find non expired.\n                            Note that serializing large keys may take some time\n                            so certain keys that were found non expired by the\n                            lookupKey() function, may be expired later. */\n\n    /* Create RESTORE payload and generate the protocol to call the command. */\n    for (j = 0; j < num_keys; j++) {\n        long long ttl = 0;\n        long long expireat = kvobjGetExpire(kvArray[j]);\n\n        if (expireat != -1) {\n            ttl = expireat-commandTimeSnapshot();\n            if (ttl < 0) {\n                continue;\n            }\n            if (ttl < 1) ttl = 1;\n        }\n\n        /* Relocate valid (non expired) keys and values into the array in successive\n         * positions to remove holes created by the keys that were present\n         * in the first lookup but are now expired after the second lookup. 
*/\n        kvArray[non_expired] = kvArray[j];\n        keyArray[non_expired++] = keyArray[j];\n\n        serverAssertWithInfo(c,NULL,\n                             rioWriteBulkCount(&cmd,'*',replace ? 5 : 4));\n\n        if (server.cluster_enabled)\n            serverAssertWithInfo(c,NULL,\n                                 rioWriteBulkString(&cmd,\"RESTORE-ASKING\",14));\n        else\n            serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,\"RESTORE\",7));\n        serverAssertWithInfo(c,NULL,sdsEncodedObject(keyArray[j]));\n        serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,keyArray[j]->ptr,\n                                                       sdslen(keyArray[j]->ptr)));\n        serverAssertWithInfo(c,NULL,rioWriteBulkLongLong(&cmd,ttl));\n\n        /* Emit the payload argument, that is the serialized object using\n         * the DUMP format. */\n        createDumpPayload(&payload,kvArray[j],keyArray[j],dbid,0);\n        serverAssertWithInfo(c,NULL,\n                             rioWriteBulkString(&cmd,payload.io.buffer.ptr,\n                                                sdslen(payload.io.buffer.ptr)));\n        sdsfree(payload.io.buffer.ptr);\n\n        /* Add the REPLACE option to the RESTORE command if it was specified\n         * as a MIGRATE option. */\n        if (replace)\n            serverAssertWithInfo(c,NULL,rioWriteBulkString(&cmd,\"REPLACE\",7));\n    }\n\n    /* Fix the actual number of keys we are migrating. */\n    num_keys = non_expired;\n\n    /* Transfer the query to the other node in 64K chunks. */\n    errno = 0;\n    {\n        sds buf = cmd.io.buffer.ptr;\n        size_t pos = 0, towrite;\n        int nwritten = 0;\n\n        while ((towrite = sdslen(buf)-pos) > 0) {\n            towrite = (towrite > (64*1024) ? 
(64*1024) : towrite);\n            nwritten = connSyncWrite(cs->conn,buf+pos,towrite,timeout);\n            if (nwritten != (signed)towrite) {\n                write_error = 1;\n                goto socket_err;\n            }\n            pos += nwritten;\n        }\n    }\n\n    char buf0[1024]; /* Auth reply. */\n    char buf1[1024]; /* Select reply. */\n    char buf2[1024]; /* Restore reply. */\n\n    /* Read the AUTH reply if needed. */\n    if (password && connSyncReadLine(cs->conn, buf0, sizeof(buf0), timeout) <= 0)\n        goto socket_err;\n\n    /* Read the SELECT reply if needed. */\n    if (select && connSyncReadLine(cs->conn, buf1, sizeof(buf1), timeout) <= 0)\n        goto socket_err;\n\n    /* Read the RESTORE replies. */\n    int error_from_target = 0;\n    int socket_error = 0;\n    int del_idx = 1; /* Index of the key argument for the replicated DEL op. */\n\n    /* Allocate the new argument vector that will replace the current command,\n     * to propagate the MIGRATE as a DEL command (if no COPY option was given).\n     * We allocate num_keys+1 because the additional argument is for \"DEL\"\n     * command name itself. */\n    if (!copy) newargv = zmalloc(sizeof(robj*)*(num_keys+1));\n\n    for (j = 0; j < num_keys; j++) {\n        if (connSyncReadLine(cs->conn, buf2, sizeof(buf2), timeout) <= 0) {\n            socket_error = 1;\n            break;\n        }\n        if ((password && buf0[0] == '-') ||\n            (select && buf1[0] == '-') ||\n            buf2[0] == '-')\n        {\n            /* On error assume that last_dbid is no longer valid. 
*/\n            if (!error_from_target) {\n                cs->last_dbid = -1;\n                char *errbuf;\n                if (password && buf0[0] == '-') errbuf = buf0;\n                else if (select && buf1[0] == '-') errbuf = buf1;\n                else errbuf = buf2;\n\n                error_from_target = 1;\n                addReplyErrorFormat(c,\"Target instance replied with error: %s\",\n                                    errbuf+1);\n            }\n        } else {\n            if (!copy) {\n                /* No COPY option: remove the local key, signal the change. */\n                dbDelete(c->db,keyArray[j]);\n                keyModified(c,c->db,keyArray[j],NULL,1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",keyArray[j],c->db->id);\n                server.dirty++;\n\n                /* Populate the argument vector to replace the old one. */\n                newargv[del_idx++] = keyArray[j];\n                incrRefCount(keyArray[j]);\n            }\n        }\n    }\n\n    /* On socket error, if we want to retry, do it now before rewriting the\n     * command vector. We only retry if we are sure nothing was processed\n     * and we failed to read the first reply (j == 0 test). */\n    if (!error_from_target && socket_error && j == 0 && may_retry &&\n        errno != ETIMEDOUT)\n    {\n        goto socket_err; /* A retry is guaranteed because of tested conditions.*/\n    }\n\n    /* On socket errors, close the migration socket now, while we still have\n     * the original host/port in the ARGV. Later the original command may be\n     * rewritten to DEL and it will be too late. */\n    if (socket_error) migrateCloseSocket(c->argv[1],c->argv[2]);\n\n    if (!copy) {\n        /* Translate MIGRATE as DEL for replication/AOF. Note that we do\n         * this only for the keys for which we received an acknowledgement\n         * from the receiving Redis server, by using the del_idx index. 
*/\n        if (del_idx > 1) {\n            newargv[0] = createStringObject(\"DEL\",3);\n            /* Note that the following call takes ownership of newargv. */\n            replaceClientCommandVector(c,del_idx,newargv);\n            argv_rewritten = 1;\n        } else {\n            /* No key transfer acknowledged, no need to rewrite as DEL. */\n            zfree(newargv);\n        }\n        newargv = NULL; /* Make it safe to call zfree() on it in the future. */\n    }\n\n    /* If we are here and a socket error happened, we don't want to retry.\n     * Just signal the problem to the client, but only do it if we did not\n     * already queue a different error reported by the destination server. */\n    if (!error_from_target && socket_error) {\n        may_retry = 0;\n        goto socket_err;\n    }\n\n    if (!error_from_target) {\n        /* Success! Update the last_dbid in migrateCachedSocket, so that we can\n         * avoid SELECT the next time if the target DB is the same. Reply +OK.\n         *\n         * Note: If we reached this point, even if socket_error is true\n         * still the SELECT command succeeded (otherwise the code jumps to the\n         * socket_err label). */\n        cs->last_dbid = dbid;\n        addReply(c,shared.ok);\n    } else {\n        /* On error we already sent it in the for loop above, and set\n         * the cached socket's last_dbid to -1 to force SELECT the next time. */\n    }\n\n    sdsfree(cmd.io.buffer.ptr);\n    zfree(kvArray); zfree(keyArray); zfree(newargv);\n    return;\n\n/* On socket errors we try to close the cached socket and try again.\n * It is very common for the cached socket to get closed; if just reopening\n * it works, it's a shame to report the error to the caller. */\n    socket_err:\n    /* Cleanup we want to perform in both the retry and no retry case.\n     * Note: Closing the migrate socket will also force SELECT next time. 
*/\n    sdsfree(cmd.io.buffer.ptr);\n\n    /* If the command was rewritten as DEL and there was a socket error,\n     * we already closed the socket earlier. While migrateCloseSocket()\n     * is idempotent, the host/port arguments are now gone, so don't do it\n     * again. */\n    if (!argv_rewritten) migrateCloseSocket(c->argv[1],c->argv[2]);\n    zfree(newargv);\n    newargv = NULL; /* This will get reallocated on retry. */\n\n    /* Retry only if it's not a timeout and we never attempted a retry\n     * (or the code jumping here did not set may_retry to zero). */\n    if (errno != ETIMEDOUT && may_retry) {\n        may_retry = 0;\n        goto try_again;\n    }\n\n    /* Cleanup we want to do if no retry is attempted. */\n    zfree(kvArray); zfree(keyArray);\n    addReplyErrorSds(c, sdscatprintf(sdsempty(),\n                                     \"-IOERR error or timeout %s to target instance\",\n                                     write_error ? \"writing\" : \"reading\"));\n    return;\n}\n\n/* Cluster node sanity check. Returns C_OK if the node id\n * is valid and C_ERR otherwise. 
*/\nint verifyClusterNodeId(const char *name, int length) {\n    if (length != CLUSTER_NAMELEN) return C_ERR;\n    for (int i = 0; i < length; i++) {\n        if (name[i] >= 'a' && name[i] <= 'z') continue;\n        if (name[i] >= '0' && name[i] <= '9') continue;\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nint isValidAuxChar(int c) {\n    return isalnum(c) || (strchr(\"!#$%&()*+:;<>?@[]^{|}~\", c) == NULL);\n}\n\nint isValidAuxString(char *s, unsigned int length) {\n    for (unsigned i = 0; i < length; i++) {\n        if (!isValidAuxChar(s[i])) return 0;\n    }\n    return 1;\n}\n\nvoid clusterCommandMyId(client *c) {\n    char *name = clusterNodeGetName(getMyClusterNode());\n    if (name) {\n        addReplyBulkCBuffer(c,name, CLUSTER_NAMELEN);\n    } else {\n        addReplyError(c, \"No ID yet\");\n    }\n}\n\nchar* getMyClusterId(void) {\n    return clusterNodeGetName(getMyClusterNode());\n}\n\nvoid clusterCommandMyShardId(client *c) {\n    char *sid = clusterNodeGetShardId(getMyClusterNode());\n    if (sid) {\n        addReplyBulkCBuffer(c,sid, CLUSTER_NAMELEN);\n    } else {\n        addReplyError(c, \"No shard ID yet\");\n    }\n}\n\n/* When a cluster command is called, we need to decide whether to return TLS info or\n * non-TLS info by the client's connection type. However if the command is called by\n * a Lua script or RM_call, there is no connection in the fake client, so we use\n * server.current_client here to get the real client if available. And if it is not\n * available (modules may call commands without a real client), we return the default\n * info, which is determined by server.tls_cluster. 
*/\nstatic int shouldReturnTlsInfo(void) {\n    if (server.current_client && server.current_client->conn) {\n        return connIsTLS(server.current_client->conn);\n    } else {\n        return server.tls_cluster;\n    }\n}\n\nunsigned int countKeysInSlot(unsigned int slot) {\n    return kvstoreDictSize(server.db->keys, slot);\n}\n\n/* Add detailed information of a node to the output buffer of the given client. */\nvoid addNodeDetailsToShardReply(client *c, clusterNode *node) {\n\n    int reply_count = 0;\n    char *hostname;\n    void *node_replylen = addReplyDeferredLen(c);\n\n    addReplyBulkCString(c, \"id\");\n    addReplyBulkCBuffer(c, clusterNodeGetName(node), CLUSTER_NAMELEN);\n    reply_count++;\n\n    if (clusterNodeTcpPort(node)) {\n        addReplyBulkCString(c, \"port\");\n        addReplyLongLong(c, clusterNodeTcpPort(node));\n        reply_count++;\n    }\n\n    if (clusterNodeTlsPort(node)) {\n        addReplyBulkCString(c, \"tls-port\");\n        addReplyLongLong(c, clusterNodeTlsPort(node));\n        reply_count++;\n    }\n\n    addReplyBulkCString(c, \"ip\");\n    addReplyBulkCString(c, clusterNodeIp(node));\n    reply_count++;\n\n    addReplyBulkCString(c, \"endpoint\");\n    addReplyBulkCString(c, clusterNodePreferredEndpoint(node));\n    reply_count++;\n\n    hostname = clusterNodeHostname(node);\n    if (hostname != NULL && *hostname != '\\0') {\n        addReplyBulkCString(c, \"hostname\");\n        addReplyBulkCString(c, hostname);\n        reply_count++;\n    }\n\n    long long node_offset;\n    if (clusterNodeIsMyself(node)) {\n        node_offset = clusterNodeIsSlave(node) ? replicationGetSlaveOffset() : server.master_repl_offset;\n    } else {\n        node_offset = clusterNodeReplOffset(node);\n    }\n\n    addReplyBulkCString(c, \"role\");\n    addReplyBulkCString(c, clusterNodeIsSlave(node) ? 
\"replica\" : \"master\");\n    reply_count++;\n\n    addReplyBulkCString(c, \"replication-offset\");\n    addReplyLongLong(c, node_offset);\n    reply_count++;\n\n    addReplyBulkCString(c, \"health\");\n    const char *health_msg = NULL;\n    if (clusterNodeIsFailing(node)) {\n        health_msg = \"fail\";\n    } else if (clusterNodeIsSlave(node) && node_offset == 0) {\n        health_msg = \"loading\";\n    } else {\n        health_msg = \"online\";\n    }\n    addReplyBulkCString(c, health_msg);\n    reply_count++;\n\n    setDeferredMapLen(c, node_replylen, reply_count);\n}\n\nstatic clusterNode *clusterGetMasterFromShard(void *shard_handle) {\n    clusterNode *n = NULL;\n    void *node_it = clusterShardHandleGetNodeIterator(shard_handle);\n    while((n = clusterShardNodeIteratorNext(node_it)) != NULL) {\n        if (!clusterNodeIsFailing(n)) {\n            break;\n        }\n    }\n    clusterShardNodeIteratorFree(node_it);\n    if (!n) return NULL;\n    return clusterNodeGetMaster(n);\n}\n\n/* Add the shard reply of a single shard based off the given primary node. 
*/\nvoid addShardReplyForClusterShards(client *c, void *shard_handle) {\n    serverAssert(clusterGetShardNodeCount(shard_handle) > 0);\n    addReplyMapLen(c, 2);\n    addReplyBulkCString(c, \"slots\");\n\n    /* Use slot_info_pairs from the primary only */\n    clusterNode *master_node = clusterGetMasterFromShard(shard_handle);\n\n    if (master_node && clusterNodeHasSlotInfo(master_node)) {\n        serverAssert((clusterNodeSlotInfoCount(master_node) % 2) == 0);\n        addReplyArrayLen(c, clusterNodeSlotInfoCount(master_node));\n        for (int i = 0; i < clusterNodeSlotInfoCount(master_node); i++)\n            addReplyLongLong(c, (unsigned long)clusterNodeSlotInfoEntry(master_node, i));\n    } else {\n        /* If no slot info pair is provided, the node owns no slots */\n        addReplyArrayLen(c, 0);\n    }\n\n    addReplyBulkCString(c, \"nodes\");\n    addReplyArrayLen(c, clusterGetShardNodeCount(shard_handle));\n    void *node_it = clusterShardHandleGetNodeIterator(shard_handle);\n    for (clusterNode *n = clusterShardNodeIteratorNext(node_it); n != NULL; n = clusterShardNodeIteratorNext(node_it)) {\n        addNodeDetailsToShardReply(c, n);\n        clusterFreeNodesSlotsInfo(n);\n    }\n    clusterShardNodeIteratorFree(node_it);\n}\n\n/* Add to the output buffer of the given client an array of the (start, end)\n * slot pairs owned by the shard, plus the primary and the set of replicas,\n * along with information about each node. 
*/\nvoid clusterCommandShards(client *c) {\n    addReplyArrayLen(c, clusterGetShardCount());\n    /* This call will add slot_info_pairs to all nodes */\n    clusterGenNodesSlotsInfo(0);\n    dictIterator *shard_it = clusterGetShardIterator();\n    for(void *shard_handle = clusterNextShardHandle(shard_it); shard_handle != NULL; shard_handle = clusterNextShardHandle(shard_it)) {\n        addShardReplyForClusterShards(c, shard_handle);\n    }\n    clusterFreeShardIterator(shard_it);\n}\n\nvoid clusterCommandHelp(client *c) {\n    const char *help[] = {\n            \"COUNTKEYSINSLOT <slot>\",\n            \"    Return the number of keys in <slot>.\",\n            \"GETKEYSINSLOT <slot> <count>\",\n            \"    Return key names stored by current node in a slot.\",\n            \"INFO\",\n            \"    Return information about the cluster.\",\n            \"KEYSLOT <key>\",\n            \"    Return the hash slot for <key>.\",\n            \"MYID\",\n            \"    Return the node id.\",\n            \"MYSHARDID\",\n            \"    Return the node's shard id.\",\n            \"NODES\",\n            \"    Return cluster configuration seen by node. Output format:\",\n            \"    <id> <ip:port@bus-port[,hostname]> <flags> <master> <pings> <pongs> <epoch> <link> <slot> ...\",\n            \"REPLICAS <node-id>\",\n            \"    Return <node-id> replicas.\",\n            \"SLOTS\",\n            \"    Return information about slots range mappings. 
Each range is made of:\",\n            \"    start, end, master and replicas IP addresses, ports and ids\",\n            \"SLOT-STATS\",\n            \"    Return an array of slot usage statistics for slots assigned to the current node.\",\n            \"SHARDS\",\n            \"    Return information about slot range mappings and the nodes associated with them.\",\n            NULL\n    };\n\n    addExtendedReplyHelp(c, help, clusterCommandExtendedHelp());\n}\n\nvoid clusterCommand(client *c) {\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        clusterCommandHelp(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"nodes\") && c->argc == 2) {\n        /* CLUSTER NODES */\n        /* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */\n        sds nodes = clusterGenNodesDescription(c, 0, shouldReturnTlsInfo());\n        addReplyVerbatim(c,nodes,sdslen(nodes),\"txt\");\n        sdsfree(nodes);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"myid\") && c->argc == 2) {\n        /* CLUSTER MYID */\n        clusterCommandMyId(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"myshardid\") && c->argc == 2) {\n        /* CLUSTER MYSHARDID */\n        clusterCommandMyShardId(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"slots\") && c->argc == 2) {\n        /* CLUSTER SLOTS */\n        clusterCommandSlots(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"shards\") && c->argc == 2) {\n        /* CLUSTER SHARDS */\n        clusterCommandShards(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"info\") && c->argc == 2) {\n        /* CLUSTER INFO */\n\n        sds info = genClusterInfoString();\n\n        /* Produce the reply protocol. 
*/\n        addReplyVerbatim(c,info,sdslen(info),\"txt\");\n        sdsfree(info);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"keyslot\") && c->argc == 3) {\n        /* CLUSTER KEYSLOT <key> */\n        sds key = c->argv[2]->ptr;\n\n        addReplyLongLong(c,keyHashSlot(key,sdslen(key)));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"countkeysinslot\") && c->argc == 3) {\n        /* CLUSTER COUNTKEYSINSLOT <slot> */\n        long long slot;\n\n        if (getLongLongFromObjectOrReply(c,c->argv[2],&slot,NULL) != C_OK)\n            return;\n        if (slot < 0 || slot >= CLUSTER_SLOTS) {\n            addReplyError(c,\"Invalid slot\");\n            return;\n        }\n\n        if (!clusterCanAccessKeysInSlot(slot)) {\n            addReplyLongLong(c, 0);\n            return;\n        }\n        addReplyLongLong(c,countKeysInSlot(slot));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"getkeysinslot\") && c->argc == 4) {\n        /* CLUSTER GETKEYSINSLOT <slot> <count> */\n        long long maxkeys, slot;\n\n        if (getLongLongFromObjectOrReply(c,c->argv[2],&slot,NULL) != C_OK)\n            return;\n        if (getLongLongFromObjectOrReply(c,c->argv[3],&maxkeys,NULL)\n            != C_OK)\n            return;\n        if (slot < 0 || slot >= CLUSTER_SLOTS || maxkeys < 0) {\n            addReplyError(c,\"Invalid slot or number of keys\");\n            return;\n        }\n\n        if (!clusterCanAccessKeysInSlot(slot)) {\n            addReplyArrayLen(c, 0);\n            return;\n        }\n\n        unsigned int keys_in_slot = countKeysInSlot(slot);\n        unsigned int numkeys = maxkeys > keys_in_slot ? 
keys_in_slot : maxkeys;\n        addReplyArrayLen(c,numkeys);\n        kvstoreDictIterator kvs_di;\n        dictEntry *de = NULL;\n        kvstoreInitDictIterator(&kvs_di, server.db->keys, slot);\n        for (unsigned int i = 0; i < numkeys; i++) {\n            de = kvstoreDictIteratorNext(&kvs_di);\n            serverAssert(de != NULL);\n            sds sdskey = kvobjGetKey(dictGetKV(de));\n            addReplyBulkCBuffer(c, sdskey, sdslen(sdskey));\n        }\n        kvstoreResetDictIterator(&kvs_di);\n    } else if ((!strcasecmp(c->argv[1]->ptr,\"slaves\") ||\n                !strcasecmp(c->argv[1]->ptr,\"replicas\")) && c->argc == 3) {\n        /* CLUSTER SLAVES <NODE ID> */\n        /* CLUSTER REPLICAS <NODE ID> */\n        clusterNode *n = clusterLookupNode(c->argv[2]->ptr, sdslen(c->argv[2]->ptr));\n        int j;\n\n        /* Lookup the specified node in our table. */\n        if (!n) {\n            addReplyErrorFormat(c,\"Unknown node %s\", (char*)c->argv[2]->ptr);\n            return;\n        }\n\n        if (clusterNodeIsSlave(n)) {\n            addReplyError(c,\"The specified node is not a master\");\n            return;\n        }\n\n        /* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. 
*/\n        addReplyArrayLen(c, clusterNodeNumSlaves(n));\n        for (j = 0; j < clusterNodeNumSlaves(n); j++) {\n            sds ni = clusterGenNodeDescription(c, clusterNodeGetSlave(n, j), shouldReturnTlsInfo());\n            addReplyBulkCString(c,ni);\n            sdsfree(ni);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr, \"migration\")) {\n        clusterMigrationCommand(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"syncslots\") && c->argc >= 3) {\n        clusterSyncSlotsCommand(c);\n    } else if(!clusterCommandSpecial(c)) {\n        addReplySubcommandSyntaxError(c);\n        return;\n    }\n}\n\n/* Extract slot number from keys in a keys_result structure and return to caller.\n * Returns:\n *   - The slot number if all keys belong to the same slot\n *   - INVALID_CLUSTER_SLOT if there are no keys or cluster is disabled\n *   - CLUSTER_CROSSSLOT if keys belong to different slots (cross-slot error) */\nint extractSlotFromKeysResult(robj **argv, getKeysResult *keys_result) {\n    if (keys_result->numkeys == 0 || !server.cluster_enabled)\n        return INVALID_CLUSTER_SLOT;\n\n    int first_slot = INVALID_CLUSTER_SLOT;\n    for (int j = 0; j < keys_result->numkeys; j++) {\n        robj *this_key = argv[keys_result->keys[j].pos];\n        int this_slot = (int)keyHashSlot((char*)this_key->ptr, sdslen(this_key->ptr));\n\n        if (first_slot == INVALID_CLUSTER_SLOT)\n            first_slot = this_slot;\n        else if (first_slot != this_slot) {\n            return CLUSTER_CROSSSLOT;\n        }\n    }\n    return first_slot;\n}\n\n/* Return the pointer to the cluster node that is able to serve the command.\n * For the function to succeed the command should only target either:\n *\n * 1) A single key (even multiple times like RPOPLPUSH mylist mylist).\n * 2) Multiple keys in the same hash slot, while the slot is stable (no\n *    resharding in progress).\n *\n * On success the function returns the node that is able to serve the request.\n * If 
the node is not 'myself' a redirection must be performed. The kind of\n * redirection is specified setting the integer passed by reference\n * 'error_code', which will be set to CLUSTER_REDIR_ASK or\n * CLUSTER_REDIR_MOVED.\n *\n * When the node is 'myself' 'error_code' is set to CLUSTER_REDIR_NONE.\n *\n * If the command fails NULL is returned, and the reason of the failure is\n * provided via 'error_code', which will be set to:\n *\n * CLUSTER_REDIR_CROSS_SLOT if the request contains multiple keys that\n * don't belong to the same hash slot.\n *\n * CLUSTER_REDIR_UNSTABLE if the request contains multiple keys\n * belonging to the same slot, but the slot is not stable (in migration or\n * importing state, likely because a resharding is in progress).\n *\n * CLUSTER_REDIR_DOWN_UNBOUND if the request addresses a slot which is\n * not bound to any node. In this case the cluster global state should be\n * already \"down\" but it is fragile to rely on the update of the global state,\n * so we also handle it here.\n *\n * CLUSTER_REDIR_TRIMMING if the request addresses a slot that is being trimmed.\n *\n * CLUSTER_REDIR_DOWN_STATE and CLUSTER_REDIR_DOWN_RO_STATE if the cluster is\n * down but the user attempts to execute a command that addresses one or more keys. */\nclusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, int argc, int *hashslot,\n    getKeysResult *keys_result, uint8_t read_error, uint64_t cmd_flags, int *error_code)\n{\n    clusterNode *myself = getMyClusterNode();\n    clusterNode *n = NULL;\n    robj *firstkey = NULL;\n    int multiple_keys = 0;\n    multiState *ms, _ms;\n    pendingCommand mc;\n    pendingCommand *mcp = &mc;\n    int i, slot = 0, migrating_slot = 0, importing_slot = 0, missing_keys = 0,\n            existing_keys = 0;\n    int pubsubshard_included = 0; /* Flag to indicate if a pubsub shard cmd is included. */\n\n    /* Allow any key to be set if a module disabled cluster redirections. 
*/\n    if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)\n        return myself;\n\n    /* Set error code optimistically for the base case. */\n    if (error_code) *error_code = CLUSTER_REDIR_NONE;\n\n    /* Modules can turn off Redis Cluster redirection: this is useful\n     * when writing a module that implements a completely different\n     * distributed system. */\n\n    /* We handle all the cases as if they were EXEC commands, so we have\n     * a common code path for everything */\n    if (cmd->proc == execCommand) {\n        /* If CLIENT_MULTI flag is not set EXEC is just going to return an\n         * error. */\n        if (!(c->flags & CLIENT_MULTI)) return myself;\n        ms = &c->mstate;\n    } else {\n        /* In order to have a single codepath create a fake Multi State\n         * structure if the client is not in MULTI/EXEC state, this way\n         * we have a single codepath below. */\n        ms = &_ms;\n        _ms.commands = &mcp;\n        _ms.count = 1;\n\n        /* Properly initialize the fake pendingCommand */\n        initPendingCommand(&mc);\n        mc.argv = argv;\n        mc.argc = argc;\n        mc.cmd = cmd;\n        mc.slot = hashslot ? *hashslot : INVALID_CLUSTER_SLOT;\n        mc.read_error = read_error;\n        if (keys_result) {\n            mc.keys_result = *keys_result;\n            mc.flags |= PENDING_CMD_KEYS_RESULT_VALID;\n        }\n    }\n\n    /* Check that all the keys are in the same hash slot, and obtain this\n     * slot and the node associated. */\n    for (i = 0; i < ms->count; i++) {\n        struct redisCommand *mcmd;\n        robj **margv;\n        int margc, j;\n        keyReference *keyindex;\n\n        pendingCommand *pcmd = ms->commands[i];\n\n        mcmd = pcmd->cmd;\n        margc = pcmd->argc;\n        margv = pcmd->argv;\n\n        /* Only valid for sharded pubsub as regular pubsub can operate on any node and bypasses this layer. 
*/\n        if (!pubsubshard_included &&\n            doesCommandHaveChannelsWithFlags(mcmd, CMD_CHANNEL_PUBLISH | CMD_CHANNEL_SUBSCRIBE))\n        {\n            pubsubshard_included = 1;\n        }\n\n        /* If we have a cached keys result from preprocessCommand(), use it.\n         * Otherwise, extract keys result. */\n        int use_cache_keys_result = pcmd->flags & PENDING_CMD_KEYS_RESULT_VALID;\n        getKeysResult result = GETKEYS_RESULT_INIT;\n        if (use_cache_keys_result)\n            result = pcmd->keys_result;\n        else\n            getKeysFromCommand(mcmd,margv,margc,&result);\n        keyindex = result.keys;\n\n        for (j = 0; j < result.numkeys; j++) {\n            /* The command has keys and was checked for cross-slot between its keys in preprocessCommand() */\n            if (pcmd->read_error == CLIENT_READ_CROSS_SLOT) {\n                /* Error: multiple keys from different slots. */\n                if (!use_cache_keys_result) getKeysFreeResult(&result);\n                if (error_code)\n                    *error_code = CLUSTER_REDIR_CROSS_SLOT;\n                return NULL;\n            }\n\n            robj *thiskey = margv[keyindex[j].pos];\n            int thisslot = pcmd->slot;\n            if (thisslot == INVALID_CLUSTER_SLOT)\n                thisslot = keyHashSlot((char*)thiskey->ptr, sdslen(thiskey->ptr));\n\n            if (firstkey == NULL) {\n                /* This is the first key we see. Check what is the slot\n                 * and node. */\n                firstkey = thiskey;\n                slot = thisslot;\n                n = getNodeBySlot(slot);\n\n                /* Error: If a slot is not served, we are in \"cluster down\"\n                 * state. However the state is yet to be updated, so this was\n                 * not trapped earlier in processCommand(). Report the same\n                 * error to the client. 
*/\n                if (n == NULL) {\n                    if (!use_cache_keys_result) getKeysFreeResult(&result);\n                    if (error_code)\n                        *error_code = CLUSTER_REDIR_DOWN_UNBOUND;\n                    return NULL;\n                }\n\n                /* If we are migrating or importing this slot, we need to check\n                 * if we have all the keys in the request (the only way we\n                 * can safely serve the request, otherwise we return a TRYAGAIN\n                 * error). To do so we set the importing/migrating state and\n                 * increment a counter for every missing key. */\n                if (n == myself &&\n                    getMigratingSlotDest(slot) != NULL)\n                {\n                    migrating_slot = 1;\n                } else if (getImportingSlotSource(slot) != NULL) {\n                    importing_slot = 1;\n                }\n            } else {\n                /* If it is not the first key/channel, make sure it is exactly\n                 * the same key/channel as the first we saw. */\n                if (slot != thisslot) {\n                    /* Error: multiple keys from different slots. */\n                    if (!use_cache_keys_result) getKeysFreeResult(&result);\n                    if (error_code)\n                        *error_code = CLUSTER_REDIR_CROSS_SLOT;\n                    return NULL;\n                }\n                if (importing_slot && !multiple_keys && !equalStringObjects(firstkey,thiskey)) {\n                    /* Flag this request as one with multiple different\n                     * keys/channels when the slot is in importing state. */\n                    multiple_keys = 1;\n                }\n            }\n\n            /* Migrating / Importing slot? 
Count keys we don't have.\n             * If it is pubsubshard command, it isn't required to check\n             * the channel being present or not in the node during the\n             * slot migration, the channel will be served from the source\n             * node until the migration completes with CLUSTER SETSLOT <slot>\n             * NODE <node-id>. */\n            int flags = LOOKUP_NOTOUCH | LOOKUP_NOSTATS | LOOKUP_NONOTIFY | LOOKUP_NOEXPIRE;\n            if ((migrating_slot || importing_slot) && !pubsubshard_included)\n            {\n                if (lookupKeyReadWithFlags(&server.db[0], thiskey, flags) == NULL) missing_keys++;\n                else existing_keys++;\n            }\n        }\n        if (!use_cache_keys_result) getKeysFreeResult(&result);\n    }\n\n    /* No key at all in command? then we can serve the request\n     * without redirections or errors in all the cases. */\n    if (n == NULL) return myself;\n\n    /* Cluster is globally down but we got keys? We only serve the request\n     * if it is a read command and when allow_reads_when_down is enabled. */\n    if (!isClusterHealthy()) {\n        if (pubsubshard_included) {\n            if (!server.cluster_allow_pubsubshard_when_down) {\n                if (error_code) *error_code = CLUSTER_REDIR_DOWN_STATE;\n                return NULL;\n            }\n        } else if (!server.cluster_allow_reads_when_down) {\n            /* The cluster is configured to block commands when the\n             * cluster is down. 
*/\n            if (error_code) *error_code = CLUSTER_REDIR_DOWN_STATE;\n            return NULL;\n        } else if (cmd_flags & CMD_WRITE) {\n            /* The cluster is configured to allow read only commands */\n            if (error_code) *error_code = CLUSTER_REDIR_DOWN_RO_STATE;\n            return NULL;\n        } else {\n            /* Fall through and allow the command to be executed:\n             * this happens when server.cluster_allow_reads_when_down is\n             * true and the command is not a write command */\n        }\n    }\n\n    /* Return the hashslot by reference. */\n    if (hashslot) *hashslot = slot;\n\n    /* MIGRATE always works in the context of the local node if the slot\n     * is open (migrating or importing state). We need to be able to freely\n     * move keys among instances in this case. */\n    if ((migrating_slot || importing_slot) && cmd->proc == migrateCommand)\n        return myself;\n\n    /* If we don't have all the keys and we are migrating the slot, send\n     * an ASK redirection or TRYAGAIN. */\n    if (migrating_slot && missing_keys) {\n        /* If we have keys but we don't have all keys, we return TRYAGAIN */\n        if (existing_keys) {\n            if (error_code) *error_code = CLUSTER_REDIR_UNSTABLE;\n            return NULL;\n        } else {\n            if (error_code) *error_code = CLUSTER_REDIR_ASK;\n            return getMigratingSlotDest(slot);\n        }\n    }\n\n    /* If we are receiving the slot, and the client correctly flagged the\n     * request as \"ASKING\", we can serve the request. However if the request\n     * involves multiple keys and we don't have them all, the only option is\n     * to send a TRYAGAIN error. 
*/\n    if (importing_slot &&\n        (c->flags & CLIENT_ASKING || cmd_flags & CMD_ASKING))\n    {\n        if (multiple_keys && missing_keys) {\n            if (error_code) *error_code = CLUSTER_REDIR_UNSTABLE;\n            return NULL;\n        } else {\n            return myself;\n        }\n    }\n\n    /* Handle the read-only client case reading from a slave: if this\n     * node is a slave and the request is about a hash slot our master\n     * is serving, we can reply without redirection. */\n    int is_write_command = (cmd_flags & CMD_WRITE) ||\n                           (c->cmd->proc == execCommand && (c->mstate.cmd_flags & CMD_WRITE));\n    if (((c->flags & CLIENT_READONLY) || pubsubshard_included) &&\n        !is_write_command &&\n        clusterNodeIsSlave(myself) &&\n        clusterNodeGetSlaveof(myself) == n)\n    {\n        return myself;\n    }\n\n    /* If this node is responsible for the slot and is currently trimming it,\n     * SFLUSH may have triggered active trimming and it could still be in progress.\n     * Here we reject any write commands as no writes should be accepted for\n     * trimming slots while active trimming is in progress. */\n    if (n == myself && is_write_command && isSlotInTrimJob(slot)) {\n        if (error_code) *error_code = CLUSTER_REDIR_TRIMMING;\n        return NULL;\n    }\n\n    /* Base case: just return the right node. However, if this node is not\n     * myself, set error_code to MOVED since we need to issue a redirection. */\n    if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED;\n    return n;\n}\n\n/* Send the client the right redirection code, according to error_code\n * that should be set to one of CLUSTER_REDIR_* macros.\n *\n * If CLUSTER_REDIR_ASK or CLUSTER_REDIR_MOVED error codes\n * are used, then the node 'n' should not be NULL, but should be the\n * node we want to mention in the redirection. Moreover hashslot should\n * be set to the hash slot that caused the redirection. 
*/\nvoid clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_code) {\n    if (error_code == CLUSTER_REDIR_CROSS_SLOT) {\n        addReplyError(c,\"-CROSSSLOT Keys in request don't hash to the same slot\");\n    } else if (error_code == CLUSTER_REDIR_UNSTABLE) {\n        /* The request spans multiple keys in the same slot,\n         * but the slot is currently not \"stable\" as there is\n         * a migration or import in progress. */\n        addReplyError(c,\"-TRYAGAIN Multiple keys request during rehashing of slot\");\n    } else if (error_code == CLUSTER_REDIR_DOWN_STATE) {\n        addReplyError(c,\"-CLUSTERDOWN The cluster is down\");\n    } else if (error_code == CLUSTER_REDIR_DOWN_RO_STATE) {\n        addReplyError(c,\"-CLUSTERDOWN The cluster is down and only accepts read commands\");\n    } else if (error_code == CLUSTER_REDIR_DOWN_UNBOUND) {\n        addReplyError(c,\"-CLUSTERDOWN Hash slot not served\");\n    } else if (error_code == CLUSTER_REDIR_MOVED ||\n               error_code == CLUSTER_REDIR_ASK)\n    {\n        /* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */\n        int port = clusterNodeClientPort(n, shouldReturnTlsInfo());\n        addReplyErrorSds(c,sdscatprintf(sdsempty(),\n                                        \"-%s %d %s:%d\",\n                                        (error_code == CLUSTER_REDIR_ASK) ? 
\"ASK\" : \"MOVED\",\n                                        hashslot, clusterNodePreferredEndpoint(n), port));\n    } else if (error_code == CLUSTER_REDIR_TRIMMING) {\n        addReplyError(c,\"-TRYAGAIN Slot is being trimmed\");\n    } else {\n        serverPanic(\"getNodeByQuery() unknown error.\");\n    }\n}\n\n/* This function is called by the function processing clients incrementally\n * to detect timeouts, in order to handle the following case:\n *\n * 1) A client blocks with BLPOP or a similar blocking operation.\n * 2) The master migrates the hash slot elsewhere or turns into a slave.\n * 3) The client may remain blocked forever (or up to its maximum timeout)\n *    waiting for a key change that will never happen.\n *\n * If the client is found to be blocked on a hash slot this node no\n * longer handles, the client is sent a redirection error, and the function\n * returns 1. Otherwise 0 is returned and no operation is performed. */\nint clusterRedirectBlockedClientIfNeeded(client *c) {\n    clusterNode *myself = getMyClusterNode();\n    if (c->flags & CLIENT_BLOCKED &&\n        (c->bstate.btype == BLOCKED_LIST ||\n         c->bstate.btype == BLOCKED_ZSET ||\n         c->bstate.btype == BLOCKED_STREAM ||\n         c->bstate.btype == BLOCKED_MODULE))\n    {\n        dictEntry *de;\n        dictIterator di;\n\n        /* If the cluster is down, unblock the client with the right error.\n         * If the cluster is configured to allow reads on cluster down, we\n         * still want to emit this error since the write required to unblock\n         * them may never arrive. */\n        if (!isClusterHealthy()) {\n            clusterRedirectClient(c,NULL,0,CLUSTER_REDIR_DOWN_STATE);\n            return 1;\n        }\n\n        /* If the client is blocked on a module, but not on a specific key,\n         * don't unblock it (except for the CLUSTER_FAIL case above). 
*/\n        if (c->bstate.btype == BLOCKED_MODULE && !moduleClientIsBlockedOnKeys(c))\n            return 0;\n\n        /* All keys must belong to the same slot, so check first key only. */\n        dictInitIterator(&di, c->bstate.keys);\n        if ((de = dictNext(&di)) != NULL) {\n            robj *key = dictGetKey(de);\n            int slot = keyHashSlot((char*)key->ptr, sdslen(key->ptr));\n            clusterNode *node = getNodeBySlot(slot);\n\n            /* If the client is read-only and is attempting to access a key\n             * that this replica can serve, allow it. */\n            if ((c->flags & CLIENT_READONLY) &&\n                !(c->lastcmd->flags & CMD_WRITE) &&\n                clusterNodeIsSlave(myself) && clusterNodeGetSlaveof(myself) == node)\n            {\n                node = myself;\n            }\n\n            /* We send an error and unblock the client if:\n             * 1) The slot is unassigned, emitting a cluster down error.\n             * 2) The slot is not handled by this node, nor being imported. */\n            if (node != myself && getImportingSlotSource(slot) == NULL)\n            {\n                if (node == NULL) {\n                    clusterRedirectClient(c,NULL,0,\n                                          CLUSTER_REDIR_DOWN_UNBOUND);\n                } else {\n                    clusterRedirectClient(c,node,slot,\n                                          CLUSTER_REDIR_MOVED);\n                }\n                dictResetIterator(&di);\n                return 1;\n            }\n        }\n        dictResetIterator(&di);\n    }\n    return 0;\n}\n\n/* Returns an indication of whether the replica node is fully available\n * and should be listed in the CLUSTER SLOTS response.\n * Returns 1 for available nodes, and 0 for nodes that have not finished\n * their initial sync, are in a failed state, or are otherwise considered\n * not available to serve read commands. 
*/\nstatic int isReplicaAvailable(clusterNode *node) {\n    if (clusterNodeIsFailing(node)) {\n        return 0;\n    }\n    long long repl_offset = clusterNodeReplOffset(node);\n    if (clusterNodeIsMyself(node)) {\n        /* Nodes do not update their own information\n         * in the cluster node list. */\n        repl_offset = replicationGetSlaveOffset();\n    }\n    return (repl_offset != 0);\n}\n\nvoid addNodeToNodeReply(client *c, clusterNode *node) {\n    char *hostname = clusterNodeHostname(node);\n    addReplyArrayLen(c, 4);\n    if (server.cluster_preferred_endpoint_type == CLUSTER_ENDPOINT_TYPE_IP) {\n        addReplyBulkCString(c, clusterNodeIp(node));\n    } else if (server.cluster_preferred_endpoint_type == CLUSTER_ENDPOINT_TYPE_HOSTNAME) {\n        if (hostname != NULL && hostname[0] != '\\0') {\n            addReplyBulkCString(c, hostname);\n        } else {\n            addReplyBulkCString(c, \"?\");\n        }\n    } else if (server.cluster_preferred_endpoint_type == CLUSTER_ENDPOINT_TYPE_UNKNOWN_ENDPOINT) {\n        addReplyNull(c);\n    } else {\n        serverPanic(\"Unrecognized preferred endpoint type\");\n    }\n\n    /* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */\n    addReplyLongLong(c, clusterNodeClientPort(node, shouldReturnTlsInfo()));\n    addReplyBulkCBuffer(c, clusterNodeGetName(node), CLUSTER_NAMELEN);\n\n    /* Add the additional endpoint information: this is all the known networking\n     * information that is not the preferred endpoint. Note the logic is evaluated\n     * twice so we can correctly report the number of additional network arguments\n     * without using a deferred map; an assertion at the end checks that we set\n     * the right length. 
*/\n    int length = 0;\n    if (server.cluster_preferred_endpoint_type != CLUSTER_ENDPOINT_TYPE_IP) {\n        length++;\n    }\n    if (server.cluster_preferred_endpoint_type != CLUSTER_ENDPOINT_TYPE_HOSTNAME\n        && hostname != NULL && hostname[0] != '\\0')\n    {\n        length++;\n    }\n    addReplyMapLen(c, length);\n\n    if (server.cluster_preferred_endpoint_type != CLUSTER_ENDPOINT_TYPE_IP) {\n        addReplyBulkCString(c, \"ip\");\n        addReplyBulkCString(c, clusterNodeIp(node));\n        length--;\n    }\n    if (server.cluster_preferred_endpoint_type != CLUSTER_ENDPOINT_TYPE_HOSTNAME\n        && hostname != NULL && hostname[0] != '\\0')\n    {\n        addReplyBulkCString(c, \"hostname\");\n        addReplyBulkCString(c, hostname);\n        length--;\n    }\n    serverAssert(length == 0);\n}\n\nvoid addNodeReplyForClusterSlot(client *c, clusterNode *node, int start_slot, int end_slot) {\n    int i, nested_elements = 3; /* slots (2) + master addr (1) */\n    for (i = 0; i < clusterNodeNumSlaves(node); i++) {\n        if (!isReplicaAvailable(clusterNodeGetSlave(node, i))) continue;\n        nested_elements++;\n    }\n    addReplyArrayLen(c, nested_elements);\n    addReplyLongLong(c, start_slot);\n    addReplyLongLong(c, end_slot);\n    addNodeToNodeReply(c, node);\n\n    /* Remaining nodes in reply are replicas for slot range */\n    for (i = 0; i < clusterNodeNumSlaves(node); i++) {\n        /* This loop is copy/pasted from clusterGenNodeDescription()\n         * with modifications for per-slot node aggregation. 
*/\n        if (!isReplicaAvailable(clusterNodeGetSlave(node, i))) continue;\n        addNodeToNodeReply(c, clusterNodeGetSlave(node, i));\n        nested_elements--;\n    }\n    serverAssert(nested_elements == 3); /* Original 3 elements */\n}\n\nvoid clusterCommandSlots(client *c) {\n    /* Format: 1) 1) start slot\n     *            2) end slot\n     *            3) 1) master IP\n     *               2) master port\n     *               3) node ID\n     *            4) 1) replica IP\n     *               2) replica port\n     *               3) node ID\n     *           ... continued until done\n     */\n    clusterNode *n = NULL;\n    int num_masters = 0, start = -1;\n    void *slot_replylen = addReplyDeferredLen(c);\n\n    for (int i = 0; i <= CLUSTER_SLOTS; i++) {\n        /* Find start node and slot id. */\n        if (n == NULL) {\n            if (i == CLUSTER_SLOTS) break;\n            n = getNodeBySlot(i);\n            start = i;\n            continue;\n        }\n\n        /* Emit the accumulated slot range when the owning node changes or\n         * we reach the end of the slot space. */\n        if (i == CLUSTER_SLOTS || n != getNodeBySlot(i)) {\n            addNodeReplyForClusterSlot(c, n, start, i-1);\n            num_masters++;\n            if (i == CLUSTER_SLOTS) break;\n            n = getNodeBySlot(i);\n            start = i;\n        }\n    }\n    setDeferredArrayLen(c, slot_replylen, num_masters);\n}\n\n/* -----------------------------------------------------------------------------\n * Cluster functions related to serving / redirecting clients\n * -------------------------------------------------------------------------- */\n\n/* The ASKING command is required after a -ASK redirection.\n * The client should issue ASKING before actually sending the command to\n * the target instance. See the Redis Cluster specification for more\n * information. 
*/\nvoid askingCommand(client *c) {\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n    c->flags |= CLIENT_ASKING;\n    addReply(c,shared.ok);\n}\n\n/* The READONLY command is used by clients to enter the read-only mode.\n * In this mode slaves will not redirect clients, as long as clients only\n * access keys served by the slave's master using read-only commands. */\nvoid readonlyCommand(client *c) {\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n    c->flags |= CLIENT_READONLY;\n    addReply(c,shared.ok);\n}\n\n/* Remove all the keys in the specified hash slot.\n * The number of removed items is returned. */\nunsigned int clusterDelKeysInSlot(unsigned int hashslot, int by_command) {\n    unsigned int j = 0;\n\n    if (!kvstoreDictSize(server.db->keys, (int) hashslot))\n        return 0;\n\n    kvstoreDictIterator kvs_di;\n    dictEntry *de = NULL;\n    kvstoreInitDictSafeIterator(&kvs_di, server.db->keys, (int) hashslot);\n    while((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n        enterExecutionUnit(1, 0);\n        sds sdskey = kvobjGetKey(dictGetKV(de));\n        robj *key = createStringObject(sdskey, sdslen(sdskey));\n        dbDelete(&server.db[0], key);\n\n        keyModified(NULL, &server.db[0], key, NULL, 1);\n        if (by_command) {\n            /* The keys are deleted by a command (trimslots), so we need to fire\n             * the keyspace event. However, we don't need to propagate the DEL\n             * command, as the command itself (trimslots) will be propagated. 
*/\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, server.db[0].id);\n        } else {\n            /* Propagate the DEL command */\n            propagateDeletion(&server.db[0], key, server.lazyfree_lazy_server_del);\n            /* The keys are not actually logically deleted from the database,\n             * just moved to another node. Modules need to know that these\n             * keys are no longer available locally, so send the keyspace\n             * notification to the modules only, not to clients. */\n            moduleNotifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, server.db[0].id, NULL, 0);\n        }\n        exitExecutionUnit();\n        postExecutionUnitOperations();\n        decrRefCount(key);\n        j++;\n        server.dirty++;\n    }\n    kvstoreResetDictIterator(&kvs_di);\n    return j;\n}\n\n/* Delete the keys in the slot ranges. Returns the number of deleted items. */\nunsigned int clusterDelKeysInSlotRangeArray(slotRangeArray *slots, int by_command) {\n    unsigned int j = 0;\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int slot = slots->ranges[i].start; slot <= slots->ranges[i].end; slot++) {\n            j += clusterDelKeysInSlot(slot, by_command);\n        }\n    }\n    return j;\n}\n\nint clusterIsMySlot(int slot) {\n    return getMyClusterNode() == getNodeBySlot(slot);\n}\n\nvoid replySlotsFlush(client *c, slotRangeArray *slots) {\n    addReplyArrayLen(c, slots->num_ranges);\n    for (int i = 0 ; i < slots->num_ranges ; i++) {\n        addReplyArrayLen(c, 2);\n        addReplyLongLong(c, slots->ranges[i].start);\n        addReplyLongLong(c, slots->ranges[i].end);\n    }\n}\n\n/* Normalizes (sorts and merges adjacent ranges) and checks that slot ranges\n * are well-formed and non-overlapping. 
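\n *\n * E.g. (sketch) input ranges:  2001-3000 0-100 1000-2000\n *      after normalization:   0-100 1000-3000\n *      (sorted; 1000-2000 and 2001-3000 are adjacent, so they are merged). 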
*/\nint slotRangeArrayNormalizeAndValidate(slotRangeArray *slots, sds *err) {\n    unsigned char used_slots[CLUSTER_SLOTS] = {0};\n\n    if (slots->num_ranges <= 0 || slots->num_ranges >= CLUSTER_SLOTS) {\n        *err = sdscatprintf(sdsempty(), \"invalid number of slot ranges: %d\", slots->num_ranges);\n        return C_ERR;\n    }\n\n    /* Sort and merge adjacent slot ranges. */\n    slotRangeArraySortAndMerge(slots);\n\n    for (int i = 0; i < slots->num_ranges; i++) {\n        if (slots->ranges[i].start >= CLUSTER_SLOTS ||\n            slots->ranges[i].end >= CLUSTER_SLOTS)\n        {\n            *err = sdscatprintf(sdsempty(), \"slot range is out of range: %d-%d\",\n                                slots->ranges[i].start, slots->ranges[i].end);\n            return C_ERR;\n        }\n\n        if (slots->ranges[i].start > slots->ranges[i].end) {\n            *err = sdscatprintf(sdsempty(), \"start slot number %d is greater than end slot number %d\",\n                                slots->ranges[i].start, slots->ranges[i].end);\n            return C_ERR;\n        }\n\n        for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n            if (used_slots[j]) {\n                *err = sdscatprintf(sdsempty(), \"Slot %d specified multiple times\", j);\n                return C_ERR;\n            }\n            used_slots[j]++;\n        }\n    }\n    return C_OK;\n}\n\n/* Create a slot range array with the specified number of ranges. */\nslotRangeArray *slotRangeArrayCreate(int num_ranges) {\n    slotRangeArray *slots = zcalloc(sizeof(slotRangeArray) + num_ranges * sizeof(slotRange));\n    slots->num_ranges = num_ranges;\n    return slots;\n}\n\n/* Duplicate the slot range array. 
*/\nslotRangeArray *slotRangeArrayDup(slotRangeArray *slots) {\n    slotRangeArray *dup = slotRangeArrayCreate(slots->num_ranges);\n    memcpy(dup->ranges, slots->ranges, sizeof(slotRange) * slots->num_ranges);\n    return dup;\n}\n\n/* Set the slot range at the specified index. */\nvoid slotRangeArraySet(slotRangeArray *slots, int idx, int start, int end) {\n    slots->ranges[idx].start = start;\n    slots->ranges[idx].end = end;\n}\n\n/* Create a slot range string in the format of: \"1000-2000 3000-4000 ...\" */\nsds slotRangeArrayToString(slotRangeArray *slots) {\n    sds s = sdsempty();\n    if (slots == NULL || slots->num_ranges == 0) return s;\n\n    for (int i = 0; i < slots->num_ranges; i++) {\n        slotRange *sr = &slots->ranges[i];\n        s = sdscatprintf(s, \"%d-%d \", sr->start, sr->end);\n    }\n    sdssetlen(s, sdslen(s) - 1);\n    s[sdslen(s)] = '\\0';\n\n    return s;\n}\n\n/* Parse a slot range string in the format \"1000-2000 3000-4000 ...\" into a slotRangeArray.\n * Returns a new slotRangeArray on success, NULL on failure. 
*/\nslotRangeArray *slotRangeArrayFromString(sds data) {\n    int num_ranges;\n    long long start, end;\n    slotRangeArray *slots = NULL;\n    if (!data || sdslen(data) == 0) return NULL;\n\n    sds *parts = sdssplitlen(data, sdslen(data), \" \", 1, &num_ranges);\n    if (num_ranges <= 0) goto err;\n\n    slots = slotRangeArrayCreate(num_ranges);\n\n    /* Parse each slot range */\n    for (int i = 0; i < num_ranges; i++) {\n        char *dash = strchr(parts[i], '-');\n        if (!dash) goto err;\n\n        if (string2ll(parts[i], dash - parts[i], &start) == 0 ||\n            string2ll(dash + 1, sdslen(parts[i]) - (dash - parts[i]) - 1, &end) == 0)\n            goto err;\n\n        /* Reject values outside the valid slot range before they are narrowed\n         * into the (unsigned short) fields of slotRange. */\n        if (start < 0 || start >= CLUSTER_SLOTS || end < 0 || end >= CLUSTER_SLOTS)\n            goto err;\n        slotRangeArraySet(slots, i, start, end);\n    }\n\n    /* Validate all ranges */\n    sds err_msg = NULL;\n    if (slotRangeArrayNormalizeAndValidate(slots, &err_msg) != C_OK) {\n        if (err_msg) sdsfree(err_msg);\n        goto err;\n    }\n    sdsfreesplitres(parts, num_ranges);\n    return slots;\n\nerr:\n    if (slots) slotRangeArrayFree(slots);\n    sdsfreesplitres(parts, num_ranges);\n    return NULL;\n}\n\nstatic int compareSlotRange(const void *a, const void *b) {\n    const slotRange *sa = a;\n    const slotRange *sb = b;\n    if (sa->start < sb->start) return -1;\n    if (sa->start > sb->start) return 1;\n    return 0;\n}\n\n/* Sort slot ranges by start slot and merge adjacent ranges.\n * Adjacent means: prev.end + 1 == next.start.\n * e.g. 
1000-2000 2001-3000 0-100  =>  0-100 1000-3000\n *\n * Note: Overlapping ranges are not merged. */\nvoid slotRangeArraySortAndMerge(slotRangeArray *slots) {\n    if (!slots || slots->num_ranges <= 1) return;\n\n    qsort(slots->ranges, slots->num_ranges, sizeof(slotRange), compareSlotRange);\n\n    int idx = 0;\n    for (int i = 1; i < slots->num_ranges; i++) {\n        if (slots->ranges[idx].end + 1 == slots->ranges[i].start)\n            slots->ranges[idx].end = slots->ranges[i].end;\n        else\n            slots->ranges[++idx] = slots->ranges[i];\n    }\n    slots->num_ranges = idx + 1;\n}\n\n/* Compare two slot range arrays; return 1 if equal, 0 otherwise.\n * Note: both arrays are normalized (sorted and merged) in place. */\nint slotRangeArrayIsEqual(slotRangeArray *slots1, slotRangeArray *slots2) {\n    slotRangeArraySortAndMerge(slots1);\n    slotRangeArraySortAndMerge(slots2);\n\n    if (slots1->num_ranges != slots2->num_ranges) return 0;\n\n    for (int i = 0; i < slots1->num_ranges; i++) {\n        if (slots1->ranges[i].start != slots2->ranges[i].start ||\n            slots1->ranges[i].end != slots2->ranges[i].end) {\n            return 0;\n        }\n    }\n    return 1;\n}\n\n/* Add a slot to the slot range array.\n * Usage:\n *     slotRangeArray *slots = NULL;\n *     slots = slotRangeArrayAppend(slots, 1000);\n *     slots = slotRangeArrayAppend(slots, 1001);\n *     slots = slotRangeArrayAppend(slots, 1003);\n *     slots = slotRangeArrayAppend(slots, 1004);\n *     slots = slotRangeArrayAppend(slots, 1005);\n *\n *     Result: 1000-1001, 1003-1005\n *     Note: `slot` must be greater than the previous slot.\n * */\nslotRangeArray *slotRangeArrayAppend(slotRangeArray *slots, int slot) {\n    if (slots == NULL) {\n        slots = slotRangeArrayCreate(4);\n        slots->ranges[0].start = slot;\n        slots->ranges[0].end = slot;\n        slots->num_ranges = 1;\n        return slots;\n    }\n\n    serverAssert(slots->num_ranges >= 0 && slots->num_ranges <= CLUSTER_SLOTS);\n    serverAssert(slot > 
slots->ranges[slots->num_ranges - 1].end);\n\n    /* Check if we can extend the last range */\n    slotRange *last = &slots->ranges[slots->num_ranges - 1];\n    if (slot == last->end + 1) {\n        last->end = slot;\n        return slots;\n    }\n\n    /* Calculate current capacity and reallocate if needed */\n    int cap = (int) ((zmalloc_size(slots) - sizeof(slotRangeArray)) / sizeof(slotRange));\n    if (slots->num_ranges >= cap)\n        slots = zrealloc(slots, sizeof(slotRangeArray) + sizeof(slotRange) * cap * 2);\n\n    /* Add new single-slot range */\n    slots->ranges[slots->num_ranges].start = slot;\n    slots->ranges[slots->num_ranges].end = slot;\n    slots->num_ranges++;\n\n    return slots;\n}\n\n/* Returns 1 if the slot range array contains the given slot, 0 otherwise. */\nint slotRangeArrayContains(slotRangeArray *slots, unsigned int slot) {\n    for (int i = 0; i < slots->num_ranges; i++)\n        if (slots->ranges[i].start <= slot && slots->ranges[i].end >= slot)\n            return 1;\n    return 0;\n}\n\n/* Free the slot range array. */\nvoid slotRangeArrayFree(slotRangeArray *slots) {\n    zfree(slots);\n}\n\n/* Generic version of slotRangeArrayFree(). */\nvoid slotRangeArrayFreeGeneric(void *slots) {\n    slotRangeArrayFree(slots);\n}\n\n/* Returns the number of keys in the given slot ranges. */\nunsigned long long getKeyCountInSlotRangeArray(slotRangeArray *slots) {\n    if (!slots) return 0;\n\n    unsigned long long key_count = 0;\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n            key_count += countKeysInSlot(j);\n        }\n    }\n    return key_count;\n}\n\n/* Slot range array iterator */\nslotRangeArrayIter *slotRangeArrayGetIterator(slotRangeArray *slots) {\n    slotRangeArrayIter *it = zmalloc(sizeof(*it));\n    it->slots = slots;\n    it->range_index = 0;\n    it->cur_slot = slots->num_ranges > 0 ? 
slots->ranges[0].start : -1;\n    return it;\n}\n\n/* Returns the next slot in the array, or -1 if there are no more slots. */\nint slotRangeArrayNext(slotRangeArrayIter *it) {\n    if (it->range_index >= it->slots->num_ranges) return -1;\n\n    if (it->cur_slot < it->slots->ranges[it->range_index].end) {\n        it->cur_slot++;\n    } else {\n        it->range_index++;\n        if (it->range_index < it->slots->num_ranges)\n            it->cur_slot = it->slots->ranges[it->range_index].start;\n        else\n            it->cur_slot = -1; /* finished */\n    }\n    return it->cur_slot;\n}\n\nint slotRangeArrayGetCurrentSlot(slotRangeArrayIter *it) {\n    return it->cur_slot;\n}\n\nvoid slotRangeArrayIteratorFree(slotRangeArrayIter *it) {\n    zfree(it);\n}\n\n/* Parse slot range pairs from argv starting at `pos`.\n * `argc` is the argument count, `pos` is the first slot argument index.\n * Returns a slotRangeArray or NULL on error. */\nslotRangeArray *parseSlotRangesOrReply(client *c, int argc, int pos) {\n    int start, end, count;\n    slotRangeArray *slots;\n\n    /* Ensure there is at least one (start,end) slot range pair. 
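\n     * E.g. (sketch) with pos == 1, the arguments \"SFLUSH 0 100 200 300\"\n     * parse into two ranges: 0-100 and 200-300. 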
*/\n    if (argc < 0 || pos < 0 || pos >= argc || (argc - pos) < 2 || ((argc - pos) % 2) != 0) {\n        addReplyErrorArity(c);\n        return NULL;\n    }\n\n    count = (argc - pos) / 2;\n    slots = slotRangeArrayCreate(count);\n    slots->num_ranges = 0;\n\n    for (int j = pos; j < argc; j += 2) {\n        if ((start = getSlotOrReply(c, c->argv[j])) == -1 ||\n            (end = getSlotOrReply(c, c->argv[j + 1])) == -1)\n        {\n            slotRangeArrayFree(slots);\n            return NULL;\n        }\n        slotRangeArraySet(slots, slots->num_ranges, start, end);\n        slots->num_ranges++;\n    }\n\n    sds err = NULL;\n    if (slotRangeArrayNormalizeAndValidate(slots, &err) != C_OK) {\n        addReplyErrorSds(c, err);\n        slotRangeArrayFree(slots);\n        return NULL;\n    }\n    return slots;\n}\n\n/* Return 1 if the keys in the slot can be accessed, 0 otherwise. */\nint clusterCanAccessKeysInSlot(int slot) {\n    /* If not in cluster mode, all keys are accessible */\n    if (server.cluster_enabled == 0) return 1;\n\n    /* If the slot is being imported under old slot migration approach, we should\n     * allow to list keys from the slot as previously. */\n    if (getImportingSlotSource(slot)) return 1;\n\n    /* If using atomic slot migration, check if the slot belongs to the current\n     * node or its master, return 1 if so. */\n    clusterNode *myself = getMyClusterNode();\n    if (clusterNodeIsSlave(myself)) {\n        clusterNode *master = clusterNodeGetMaster(myself);\n        if (master && clusterNodeCoversSlot(master, slot))\n            return 1;\n    } else {\n        if (clusterNodeCoversSlot(myself, slot))\n            return 1;\n    }\n    return 0;\n}\n\n/* Return the slot ranges that belong to the current node or its master. 
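In non-cluster mode\n * this is simply the single range 0-16383. 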
*/\nslotRangeArray *clusterGetLocalSlotRanges(void) {\n    slotRangeArray *slots = NULL;\n\n    if (!server.cluster_enabled) {\n        slots = slotRangeArrayCreate(1);\n        slotRangeArraySet(slots, 0, 0, CLUSTER_SLOTS - 1);\n        return slots;\n    }\n\n    clusterNode *master = clusterNodeGetMaster(getMyClusterNode());\n    if (master) {\n        for (int i = 0; i < CLUSTER_SLOTS; i++) {\n            if (clusterNodeCoversSlot(master, i))\n                slots = slotRangeArrayAppend(slots, i);\n        }\n    }\n    return slots ? slots : slotRangeArrayCreate(0);\n}\n\n/* Partially flush destination DB in a cluster node, based on the slot range.\n *\n * Usage: SFLUSH <start-slot> <end slot> [<start-slot> <end slot>]* [SYNC|ASYNC]\n *\n * Redis will flush the slots that belong to this node and reply with the flushed \n * slot ranges. If no slot is flushed, an empty array will be returned.\n * \n * e.g. Node owns slot 100-200, user issues SFLUSH 50 150\n * Redis will flush slot 100-150 and reply with [100,150]\n * \n * If possible, SFLUSH SYNC will be run as blocking ASYNC as an \n * optimization.\n */\nvoid sflushCommand(client *c) {\n    int flags = EMPTYDB_NO_FLAGS, argc = c->argc;\n    int trim_method = ASM_TRIM_METHOD_NONE;\n\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n\n    /* check if last argument is SYNC or ASYNC */\n    if (!strcasecmp(c->argv[c->argc-1]->ptr,\"sync\")) {\n        flags = EMPTYDB_NO_FLAGS;\n        argc--;\n    } else if (!strcasecmp(c->argv[c->argc-1]->ptr,\"async\")) {\n        flags = EMPTYDB_ASYNC;\n        argc--;\n    } else if (server.lazyfree_lazy_user_flush) {\n        flags = EMPTYDB_ASYNC;\n    }\n\n    /* parse the slot range */\n    if (argc % 2 == 0) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    /* Parse slot ranges from the command arguments. 
*/\n    slotRangeArray *slots = parseSlotRangesOrReply(c, argc, 1);\n    if (!slots) return;\n\n    /* If client is AOF or master, we must obey the slot ranges. */\n    int must_obey = mustObeyClient(c);\n\n    /* Iterate and find the slot ranges that belong to this node. Save them in\n     * a new slotRangeArray. It is allocated on heap since there is a chance\n     * that FLUSH SYNC will be running as blocking ASYNC and only later reply\n     * with slot ranges */\n    slotRangeArray *myslots = NULL;\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n            if (must_obey || clusterIsMySlot(j)) {\n                myslots = slotRangeArrayAppend(myslots, j);\n            }\n        }\n    }\n\n    /* If no slots belong to this node, return empty array. */\n    if (myslots == NULL) {\n        addReplyArrayLen(c, 0);\n        slotRangeArrayFree(slots);\n        return;\n    }\n    slotRangeArrayFree(slots);\n    \n    /* takes ownership of myslots */\n    asmTrimCtx *trim_ctx = asmTrimCtxCreate(myslots, server.db[0].keys);\n\n    /* If the selected slots are exactly the same as the local slots, we can\n     * simply flush the entire DB by flushCommandCommon. */\n    slotRangeArray *local_slots = clusterGetLocalSlotRanges();\n    int all_slots_covered = slotRangeArrayIsEqual(myslots, local_slots);\n    slotRangeArrayFree(local_slots);\n    if (all_slots_covered) {\n        /* If not flush as blocking async, then reply immediately */\n        if (flushCommandCommon(c, FLUSH_TYPE_SLOTS, flags, trim_ctx) == 0) {\n            replySlotsFlush(c, trim_ctx->slots);\n        }\n        asmTrimCtxRelease(trim_ctx);\n        return;\n    }\n\n    /* Cancel all ASM tasks that overlap with the given slot ranges. 
*/\n    clusterAsmCancelBySlotRangeArray(myslots, c->argv[0]->ptr);\n\n    /* In case of SYNC, check if we can optimize and run it in bg as blocking ASYNC */\n    int blocking_async = 0;\n    if ((!(flags & EMPTYDB_ASYNC)) && (!(c->flags & CLIENT_AVOID_BLOCKING_ASYNC_FLUSH))) {\n        flags |= EMPTYDB_ASYNC; /* Run as ASYNC */\n        blocking_async = 1;\n    }\n\n    /* Trim the slots if running in async mode and not loading from AOF,\n     * otherwise delete the keys synchronously. */\n    if (flags & EMPTYDB_ASYNC && server.loading == 0) {\n        /* Update dirty stats before trimming. */\n        server.dirty += getKeyCountInSlotRangeArray(myslots);\n        /* Pass client id for active trim to unblock client when trim completes. */\n        trim_method = asmTrimSlots(trim_ctx, blocking_async ? c->id : CLIENT_ID_NONE, 0);\n    } else {\n        clusterDelKeysInSlotRangeArray(myslots, 1);\n    }\n\n    /* Without the forceCommandPropagation, when DB was already empty,\n     * SFLUSH will not be replicated nor put into the AOF. */\n    forceCommandPropagation(c, PROPAGATE_REPL | PROPAGATE_AOF);\n\n    /* Handle waiting for trim job to complete in case of blocking async flush.\n     * Block the client and schedule completion callback based on trim method:\n     * - BG trim uses BIO lazyfree worker to trim the slots, so schedule a new\n     *   BIO lazyfree worker to wait for completion, then unblock client and reply.\n     * - Active trim works in cron job of the main thread, it will automatically\n     *   unblock client and reply in active trim completion. */\n    if (blocking_async && trim_method != ASM_TRIM_METHOD_NONE) {\n        blockClientForAsyncFlush(c);\n    } else {\n        /* Reply with slot ranges that were flushed. SYNC and ASYNC mode will be\n         * replied here immediately. 
*/\n        replySlotsFlush(c, trim_ctx->slots);\n    }\n\n    asmTrimCtxRelease(trim_ctx); /* if bg trim, released later by kvsAsyncFreeDoneCB() */\n}\n\n/* The READWRITE command just clears the READONLY command state. */\nvoid readwriteCommand(client *c) {\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n    c->flags &= ~CLIENT_READONLY;\n    addReply(c,shared.ok);\n}\n\n/* Resets transient cluster stats that we expose via INFO or other means that we want\n * to reset via CONFIG RESETSTAT. The function is also used in order to\n * initialize these fields in clusterInit() at server startup. */\nvoid resetClusterStats(void) {\n    if (!server.cluster_enabled) return;\n\n    clusterSlotStatResetAll();\n}\n\n/* This function is called at server startup in order to initialize cluster data\n * structures that are shared between the different cluster implementations. */\nvoid clusterCommonInit(void) {\n    resetClusterStats();\n    asmInit();\n}\n\n/* This function is called after the node startup in order to check if there\n * are any slots that we have keys for, but are not assigned to us. If so,\n * we delete the keys. */\nvoid clusterDeleteKeysInUnownedSlots(void) {\n    if (clusterNodeIsSlave(getMyClusterNode())) return;\n\n    /* Check that all the slots we have keys for are assigned to us. Otherwise,\n     * delete the keys. */\n    for (int i = 0; i < CLUSTER_SLOTS; i++) {\n        /* Skip if: no keys in the slot, it's our slot, or we are importing it. */\n        if (!countKeysInSlot(i) ||\n            clusterIsMySlot(i) ||\n            getImportingSlotSource(i))\n        {\n            continue;\n        }\n\n        serverLog(LL_NOTICE, \"I have keys for slot %d, but the slot is \"\n                             \"assigned to another node. 
\"\n                             \"Deleting keys in the slot.\", i);\n        /* With atomic slot migration, it is safe to drop keys from slots\n         * that are not owned. This will not result in data loss under the\n         * legacy slot migration approach either, since the importing state\n         * has already been persisted in node.conf. */\n        clusterDelKeysInSlot(i, 0);\n    }\n}\n\n\n/* This function is called after the node startup in order to verify that data\n * loaded from disk is in agreement with the cluster configuration:\n *\n * 1) If we find keys about hash slots we have no responsibility for, the\n *    following happens:\n *    A) If no other node is in charge according to the current cluster\n *       configuration, we add these slots to our node.\n *    B) If according to our config other nodes are already in charge for\n *       this slots, we set the slots as IMPORTING from our point of view\n *       in order to justify we have those slots, and in order to make\n *       redis-cli aware of the issue, so that it can try to fix it.\n * 2) If we find data in a DB different than DB0 we return C_ERR to\n *    signal the caller it should quit the server with an error message\n *    or take other actions.\n *\n * The function always returns C_OK even if it will try to correct\n * the error described in \"1\". However if data is found in DB different\n * from DB0, C_ERR is returned.\n *\n * The function also uses the logging facility in order to warn the user\n * about desynchronizations between the data we have in memory and the\n * cluster configuration. */\nint verifyClusterConfigWithData(void) {\n    /* Return ASAP if a module disabled cluster redirections. In that case\n     * every master can store keys about every possible hash slot. 
*/\n    if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)\n        return C_OK;\n\n    /* If this node is a slave, don't perform the check at all as we\n     * completely depend on the replication stream. */\n    if (clusterNodeIsSlave(getMyClusterNode())) return C_OK;\n\n    /* Make sure we only have keys in DB0. */\n    for (int i = 1; i < server.dbnum; i++) {\n        if (kvstoreSize(server.db[i].keys)) return C_ERR;\n    }\n\n    /* Take over slots that we have keys for, but are assigned to no one. */\n    clusterClaimUnassignedSlots();\n    /* Delete keys in unowned slots */\n    clusterDeleteKeysInUnownedSlots();\n    return C_OK;\n}\n"
  },
  {
    "path": "src/cluster.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#ifndef __CLUSTER_H\n#define __CLUSTER_H\n\n/*-----------------------------------------------------------------------------\n * Redis cluster exported API.\n *----------------------------------------------------------------------------*/\n\n#define CLUSTER_SLOT_MASK_BITS 14 /* Number of bits used for slot id. */\n#define CLUSTER_SLOTS (1<<CLUSTER_SLOT_MASK_BITS) /* Total number of slots in cluster mode, which is 16384. */\n#define CLUSTER_SLOT_MASK ((unsigned long long)(CLUSTER_SLOTS - 1)) /* Bit mask for slot id stored in LSB. */\n#define INVALID_CLUSTER_SLOT (-1) /* Invalid slot number. */\n#define CLUSTER_CROSSSLOT  (-2)\n#define CLUSTER_OK 0            /* Everything looks ok */\n#define CLUSTER_FAIL 1          /* The cluster can't work */\n#define CLUSTER_NAMELEN 40      /* sha1 hex length */\n\n/* Redirection errors returned by getNodeByQuery(). */\n#define CLUSTER_REDIR_NONE 0          /* Node can serve the request. */\n#define CLUSTER_REDIR_CROSS_SLOT 1    /* -CROSSSLOT request. */\n#define CLUSTER_REDIR_UNSTABLE 2      /* -TRYAGAIN redirection required */\n#define CLUSTER_REDIR_ASK 3           /* -ASK redirection required. */\n#define CLUSTER_REDIR_MOVED 4         /* -MOVED redirection required. */\n#define CLUSTER_REDIR_DOWN_STATE 5    /* -CLUSTERDOWN, global state. */\n#define CLUSTER_REDIR_DOWN_UNBOUND 6  /* -CLUSTERDOWN, unbound slot. */\n#define CLUSTER_REDIR_DOWN_RO_STATE 7 /* -CLUSTERDOWN, allow reads. 
*/\n#define CLUSTER_REDIR_TRIMMING 8      /* -TRYAGAIN, slot is being trimmed. */\n\ntypedef struct _clusterNode clusterNode;\nstruct clusterState;\n\n/* Flags that a module can set in order to prevent certain Redis Cluster\n * features from being enabled. Useful when implementing a different\n * distributed system on top of the Redis Cluster message bus, using modules. */\n#define CLUSTER_MODULE_FLAG_NONE 0\n#define CLUSTER_MODULE_FLAG_NO_FAILOVER (1<<1)\n#define CLUSTER_MODULE_FLAG_NO_REDIRECTION (1<<2)\n\n/* ---------------------- API exported outside cluster.c -------------------- */\n\n/* We have 16384 hash slots. The hash slot of a given key is obtained\n * as the least significant 14 bits of the crc16 of the key.\n *\n * However, if the key contains the {...} pattern, only the part between\n * { and } is hashed. This may be useful in the future to force certain\n * keys to be in the same node (assuming no resharding is in progress). */\nstatic inline unsigned int keyHashSlot(const char *key, int keylen) {\n    int s, e; /* start-end indexes of { and } */\n\n    for (s = 0; s < keylen; s++)\n        if (key[s] == '{') break;\n\n    /* No '{' ? Hash the whole key. This is the base case. */\n    if (likely(s == keylen)) return crc16(key,keylen) & 0x3FFF;\n\n    /* '{' found? Check if we have the corresponding '}'. */\n    for (e = s+1; e < keylen; e++)\n        if (key[e] == '}') break;\n\n    /* No '}' or nothing between {} ? Hash the whole key. */\n    if (e == keylen || e == s+1) return crc16(key,keylen) & 0x3FFF;\n\n    /* If we are here there is both a { and a } on its right. Hash\n     * what is in the middle between { and }. 
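E.g. the keys\n     * \"{user1000}.following\" and \"{user1000}.followers\" both hash only\n     * \"user1000\" and therefore map to the same slot. 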
*/\n    return crc16(key+s+1,e-s-1) & 0x3FFF;\n}\n\n/* functions requiring mechanism specific implementations */\nvoid clusterInit(void);\nvoid clusterInitLast(void);\nvoid clusterCommonInit(void);\nvoid clusterCron(void);\nvoid clusterBeforeSleep(void);\nvoid clusterClaimUnassignedSlots(void);\nint verifyClusterConfigWithData(void);\n\nint clusterSendModuleMessageToTarget(const char *target, uint64_t module_id, uint8_t type, const char *payload, uint32_t len);\n\nvoid clusterUpdateMyselfFlags(void);\nvoid clusterUpdateMyselfIp(void);\nvoid clusterUpdateMyselfHostname(void);\nvoid clusterUpdateMyselfAnnouncedPorts(void);\nvoid clusterUpdateMyselfHumanNodename(void);\n\nvoid clusterPropagatePublish(robj *channel, robj *message, int sharded);\n\nunsigned long getClusterConnectionsCount(void);\nint isClusterHealthy(void);\n\nsds clusterGenNodesDescription(client *c, int filter, int tls_primary);\nsds genClusterInfoString(void);\n/* handle implementation specific debug cluster commands. Return 1 if handled, 0 otherwise. */\nint handleDebugClusterCommand(client *c);\nconst char **clusterDebugCommandExtendedHelp(void);\n/* handle implementation specific cluster commands. Return 1 if handled, 0 otherwise. 
*/\nint clusterCommandSpecial(client *c);\nconst char** clusterCommandExtendedHelp(void);\n\nint clusterAllowFailoverCmd(client *c);\nvoid clusterPromoteSelfToMaster(void);\nint clusterManualFailoverTimeLimit(void);\n\nvoid clusterCommandSlots(client * c);\nvoid clusterCommandMyId(client *c);\nvoid clusterCommandMyShardId(client *c);\n\nsds clusterGenNodeDescription(client *c, clusterNode *node, int tls_primary);\n\nint clusterNodeCoversSlot(clusterNode *n, int slot);\nint getNodeDefaultClientPort(clusterNode *n);\nint clusterNodeIsMyself(clusterNode *n);\nclusterNode *getMyClusterNode(void);\nchar *getMyClusterId(void);\nint getClusterSize(void);\nint getMyShardSlotCount(void);\nint clusterNodePending(clusterNode  *node);\nchar **getClusterNodesList(size_t *numnodes);\nint clusterNodeIsMaster(clusterNode *n);\nchar *clusterNodeIp(clusterNode *node);\nint clusterNodeIsSlave(clusterNode *node);\nclusterNode *clusterNodeGetSlaveof(clusterNode *node);\nclusterNode *clusterNodeGetMaster(clusterNode *node);\nchar *clusterNodeGetName(clusterNode *node);\nint clusterNodeTimedOut(clusterNode *node);\nint clusterNodeIsFailing(clusterNode *node);\nint clusterNodeIsNoFailover(clusterNode *node);\nchar *clusterNodeGetShardId(clusterNode *node);\nint clusterNodeNumSlaves(clusterNode *node);\nclusterNode *clusterNodeGetSlave(clusterNode *node, int slave_idx);\nclusterNode *getMigratingSlotDest(int slot);\nclusterNode *getImportingSlotSource(int slot);\nclusterNode *getNodeBySlot(int slot);\nint clusterNodeClientPort(clusterNode *n, int use_tls);\nchar *clusterNodeHostname(clusterNode *node);\nconst char *clusterNodePreferredEndpoint(clusterNode *n);\nlong long clusterNodeReplOffset(clusterNode *node);\nclusterNode *clusterLookupNode(const char *name, int length);\nconst char *clusterGetSecret(size_t *len);\nunsigned int countKeysInSlot(unsigned int slot);\nint getSlotOrReply(client *c, robj *o);\nint clusterIsMySlot(int slot);\nint clusterCanAccessKeysInSlot(int slot);\nstruct 
slotRangeArray *clusterGetLocalSlotRanges(void);\n\n/* functions with shared implementations */\nclusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, int argc, int *hashslot,\n                            getKeysResult *result, uint8_t read_error, uint64_t cmd_flags, int *error_code);\nint extractSlotFromKeysResult(robj **argv, getKeysResult *keys_result);\nint clusterRedirectBlockedClientIfNeeded(client *c);\nvoid clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_code);\nvoid migrateCloseTimedoutSockets(void);\nint patternHashSlot(char *pattern, int length);\nint isValidAuxString(char *s, unsigned int length);\nvoid migrateCommand(client *c);\nvoid clusterCommand(client *c);\nConnectionType *connTypeOfCluster(void);\n\ntypedef struct slotRange {\n    unsigned short start, end;\n} slotRange;\ntypedef struct slotRangeArray {\n    int num_ranges;\n    slotRange ranges[];\n} slotRangeArray;\ntypedef struct slotRangeArrayIter {\n    slotRangeArray *slots; /* the array we're iterating */\n    int range_index;       /* current range index */\n    int cur_slot;          /* current slot within the range */\n} slotRangeArrayIter;\nslotRangeArray *slotRangeArrayCreate(int num_ranges);\nslotRangeArray *slotRangeArrayDup(slotRangeArray *slots);\nvoid slotRangeArraySet(slotRangeArray *slots, int idx, int start, int end);\nsds slotRangeArrayToString(slotRangeArray *slots);\nslotRangeArray *slotRangeArrayFromString(sds data);\nvoid slotRangeArraySortAndMerge(slotRangeArray *slots);\nint slotRangeArrayIsEqual(slotRangeArray *slots1, slotRangeArray *slots2);\nslotRangeArray *slotRangeArrayAppend(slotRangeArray *slots, int slot);\nint slotRangeArrayContains(slotRangeArray *slots, unsigned int slot);\nvoid slotRangeArrayFree(slotRangeArray *slots);\nvoid slotRangeArrayFreeGeneric(void *slots);\nslotRangeArrayIter *slotRangeArrayGetIterator(slotRangeArray *slots);\nint slotRangeArrayNext(slotRangeArrayIter *it);\nint 
slotRangeArrayGetCurrentSlot(slotRangeArrayIter *it);\nvoid slotRangeArrayIteratorFree(slotRangeArrayIter *it);\nint slotRangeArrayNormalizeAndValidate(slotRangeArray *slots, sds *err);\nslotRangeArray *parseSlotRangesOrReply(client *c, int argc, int pos);\nunsigned long long getKeyCountInSlotRangeArray(slotRangeArray *slots);\n\nunsigned int clusterDelKeysInSlot(unsigned int hashslot, int by_command);\nunsigned int clusterDelKeysInSlotRangeArray(slotRangeArray *slots, int by_command);\n\nvoid clusterGenNodesSlotsInfo(int filter);\nvoid clusterFreeNodesSlotsInfo(clusterNode *n);\nint clusterNodeSlotInfoCount(clusterNode *n);\nuint16_t clusterNodeSlotInfoEntry(clusterNode *n, int idx);\nint clusterNodeHasSlotInfo(clusterNode *n);\nvoid resetClusterStats(void);\n\nint clusterGetShardCount(void);\nvoid *clusterGetShardIterator(void);\nvoid *clusterNextShardHandle(void *shard_iterator);\nvoid clusterFreeShardIterator(void *shard_iterator);\nint clusterGetShardNodeCount(void *shard);\nvoid *clusterShardHandleGetNodeIterator(void *shard);\nclusterNode *clusterShardNodeIteratorNext(void *node_iterator);\nvoid clusterShardNodeIteratorFree(void *node_iterator);\nclusterNode *clusterShardNodeFirst(void *shard);\n\nint clusterNodeTcpPort(clusterNode *node);\nint clusterNodeTlsPort(clusterNode *node);\n\n/* API for alternative cluster implementations to start and coordinate\n * Atomic Slot Migration (ASM).\n *\n * These two functions drive ASM for alternative cluster implementations.\n * - clusterAsmProcess(...) impl -> redis: initiates/advances/cancels ASM operations\n * - clusterAsmOnEvent(...) 
redis -> impl: notifies state changes\n *\n * Generic steps for an alternative implementation:\n * - On the destination side, the implementation calls clusterAsmProcess(ASM_EVENT_IMPORT_START)\n *   to start an import operation.\n * - Redis calls clusterAsmOnEvent() when an ASM event occurs.\n * - On the source side, Redis will call clusterAsmOnEvent(ASM_EVENT_HANDOFF_PREP)\n *   when slots are ready to be handed off and the write pause is needed.\n * - The implementation stops traffic to the slots and calls clusterAsmProcess(ASM_EVENT_HANDOFF).\n * - On the destination side, Redis calls clusterAsmOnEvent(ASM_EVENT_TAKEOVER)\n *   when the destination node is ready to take over the slots, waiting for the ownership change.\n * - The cluster implementation updates the config and calls clusterAsmProcess(ASM_EVENT_DONE)\n *   to notify Redis that the slot ownership has changed.\n *\n * Sequence diagram for import:\n *   - Note: shows only the events that the cluster implementation needs to react to.\n *\n * ┌───────────────┐              ┌───────────────┐         ┌───────────────┐             ┌───────────────┐\n * │ Destination   │              │ Destination   │         │    Source     │             │ Source        │\n * │ Cluster impl  │              │ Master        │         │    Master     │             │ Cluster impl  │\n * └───────┬───────┘              └───────┬───────┘         └───────┬───────┘             └───────┬───────┘\n *         │                              │                         │                             │\n *         │     ASM_EVENT_IMPORT_START   │                         │                             │\n *         ├─────────────────────────────►│                         │                             │\n *         │                              │ CLUSTER SYNCSLOTS <arg> │                             │\n *         │                              ├────────────────────────►│                             │\n *         │                              │                         │  
                           │\n *         │                              │  SNAPSHOT(restore cmds) │                             │\n *         │                              │◄────────────────────────┤                             │\n *         │                              │  Repl stream            │                             │\n *         │                              │◄────────────────────────┤                             │\n *         │                              │                         │   ASM_EVENT_HANDOFF_PREP    │\n *         │                              │                         ├────────────────────────────►│\n *         │                              │                         │     ASM_EVENT_HANDOFF       │\n *         │                              │                         │◄────────────────────────────┤\n *         │                              │ Drain repl stream       │                             │\n *         │                              │◄────────────────────────┤                             │\n *         │     ASM_EVENT_TAKEOVER       │                         │                             │\n *         │◄─────────────────────────────┤                         │                             │\n *         │                              │                         │                             │\n *         │       ASM_EVENT_DONE         │                         │                             │\n *         ├─────────────────────────────►│                         │       ASM_EVENT_DONE        │\n *         │                              │                         │◄────────────────────────────┤\n *         │                              │                         │                             │\n */\n\n#define ASM_EVENT_IMPORT_START      1  /* Start a new import operation (destination side) */\n#define ASM_EVENT_CANCEL            2  /* Cancel an ongoing import/migrate operation (source and destination side) */\n#define ASM_EVENT_HANDOFF_PREP   
   3  /* Slot is ready to be handed off to the destination shard (source side) */\n#define ASM_EVENT_HANDOFF           4  /* Notify that the slot can be handed off (source side) */\n#define ASM_EVENT_TAKEOVER          5  /* Ready to take over the slot, waiting for config change (destination side) */\n#define ASM_EVENT_DONE              6  /* Notify that import/migrate is completed, config is updated (source and destination side) */\n\n#define ASM_EVENT_IMPORT_PREP       7  /* Import is about to start, the implementation may reject by returning C_ERR */\n#define ASM_EVENT_IMPORT_STARTED    8  /* Import started */\n#define ASM_EVENT_IMPORT_FAILED     9  /* Import failed */\n#define ASM_EVENT_IMPORT_COMPLETED  10 /* Import completed (config updated) */\n#define ASM_EVENT_MIGRATE_PREP      11 /* Migrate is about to start, the implementation may reject by returning C_ERR */\n#define ASM_EVENT_MIGRATE_STARTED   12 /* Migrate started */\n#define ASM_EVENT_MIGRATE_FAILED    13 /* Migrate failed */\n#define ASM_EVENT_MIGRATE_COMPLETED 14 /* Migrate completed (config updated) */\n\n\n/* Called by cluster implementation to request an ASM operation. 
(cluster impl --> redis)\n * Valid values for 'event':\n *  ASM_EVENT_IMPORT_START\n *  ASM_EVENT_CANCEL\n *  ASM_EVENT_HANDOFF\n *  ASM_EVENT_DONE\n *\n * For ASM_EVENT_IMPORT_START, 'task_id' should be a unique string.\n * For other events (ASM_EVENT_CANCEL, ASM_EVENT_HANDOFF, ASM_EVENT_DONE),\n * 'task_id' should match the ID from the corresponding import operation.\n *    Usage:\n *      char *task_id = zmalloc(CLUSTER_NAMELEN + 1);\n *      getRandomHexChars(task_id, CLUSTER_NAMELEN);\n *      task_id[CLUSTER_NAMELEN] = '\\0';\n *\n *      slotRangeArray *slots = slotRangeArrayCreate(1);\n *      slotRangeArraySet(slots, 0, 0, 1000);\n *\n *      char *err = NULL;\n *      int ret = clusterAsmProcess(task_id, ASM_EVENT_IMPORT_START, slots, &err);\n *      zfree(task_id);\n *      slotRangeArrayFree(slots);\n *\n *      if (ret != C_OK) {\n *          serverLog(LL_WARNING, \"ASM import failed: %s\", err);\n *          return;\n *      }\n *\n * For ASM_EVENT_CANCEL, if `task_id` is NULL, all tasks will be cancelled.\n * If the `arg` parameter is provided, it should be a pointer to an int. It\n * will be set to the number of tasks cancelled.\n *\n * Return value:\n *  - Returns C_OK on success, or C_ERR on failure, in which case 'err' will\n *    be set to the error message.\n *\n * Memory management:\n *  - There is no ownership transfer of 'task_id', 'err' or `slotRangeArray`.\n *  - `task_id` and `slotRangeArray` should be allocated and freed by the\n *     caller. Redis internally makes a copy of these.\n *  - `err` is allocated by Redis and should NOT be freed by the caller.\n **/\nint clusterAsmProcess(const char *task_id, int event, void *arg, char **err);\n\n/* Called when an ASM event occurs to notify the cluster implementation. 
(redis --> cluster impl)\n *\n * `arg` will point to a `slotRangeArray` for the following events:\n *  ASM_EVENT_IMPORT_PREP\n *  ASM_EVENT_IMPORT_STARTED\n *  ASM_EVENT_MIGRATE_PREP\n *  ASM_EVENT_MIGRATE_STARTED\n *  ASM_EVENT_HANDOFF_PREP\n *\n *  Memory management:\n *  - Redis owns `task_id` and `slotRangeArray`; the implementation must NOT\n *    free them.\n *\n *  Returns C_OK on success.\n *\n *  If the cluster implementation returns C_ERR for ASM_EVENT_IMPORT_PREP or\n *  ASM_EVENT_MIGRATE_PREP, the operation will not start.\n **/\nint clusterAsmOnEvent(const char *task_id, int event, void *arg);\n\n#endif /* __CLUSTER_H */\n"
  },
  {
    "path": "src/cluster_asm.c",
    "content": "/* \n * Copyright (c) 2025-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * \n * cluster_asm.c -- Atomic slot migration implementation for cluster\n * \n * TERMINOLOGY:\n * - SOURCE: The node that currently owns the slots (sending data away)\n * - DESTINATION: The node that will own the slots (receiving data)\n *\n * Example: Moving slots 0-100 from Node A to Node B\n *   - Node A = SOURCE (has the data, will lose ownership)\n *   - Node B = DESTINATION (will receive data, will gain ownership)\n *\n * Migration Flow:\n * 1. DESTINATION initiates: CLUSTER MIGRATION IMPORT <slots>\n *    (Operator runs command on Node B, the receiving node)\n *\n * 2. SOURCE forks and sends slot snapshot (RESTORE commands) via RDB channel\n *    (Node A creates snapshot of slots 0-100)\n *\n * 3. SOURCE streams incremental changes via main channel\n *    (Node A forwards new writes to Node B while snapshot is being sent)\n *\n * 4. DESTINATION applies snapshot and buffers incremental changes\n *    (Node B receives snapshot, buffers ongoing writes)\n *\n * 5. SOURCE pauses writes when destination catches up\n *    (Node A stops accepting writes for slots 0-100 when Node B is nearly caught up)\n *\n * 6. DESTINATION drains buffer and takes ownership\n *    (Node B applies final buffered commands, updates config to own slots 0-100)\n *\n * 7. Config updated atomically via cluster bus\n *    (All nodes learn: slots 0-100 now belong to Node B)\n *\n * 8. 
SOURCE trims migrated keys (background or active)\n *    (Node A deletes keys from slots 0-100 since it no longer owns them)\n *\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"functions.h\"\n#include \"cluster_asm.h\"\n#include \"cluster_slot_stats.h\"\n#include \"bio.h\"\n\n/* Operation types: import (destination side) or migrate (source side) */\n#define ASM_IMPORT  (1 << 1)\n#define ASM_MIGRATE (1 << 2)\n\n/* Trimming methods for cleaning up migrated keys */\n#define ASM_DEBUG_TRIM_DEFAULT 0  /* Auto-select based on module subscriptions and client tracking */\n#define ASM_DEBUG_TRIM_NONE 1     /* No trimming (for testing) */\n#define ASM_DEBUG_TRIM_BG 2       /* Background trim: hand off to BIO thread (fast, non-blocking) */\n#define ASM_DEBUG_TRIM_ACTIVE 3   /* Active trim: delete in main thread cron (slow, fires notifications) */\n\n#define ASM_AOF_MIN_ITEMS_PER_KEY 512 /* Minimum number of items per key to use AOF format encoding */\n\n/* ASM Task: Represents a single slot migration operation.\n * Each task tracks the complete lifecycle of migrating one or more slot ranges\n * from a source node to a destination node. 
The task exists on both sides but\n * with different states (import states on destination, migrate states on source).\n */\ntypedef struct asmTask {\n    sds id;                                 /* Task ID */\n    int operation;                          /* Either ASM_IMPORT or ASM_MIGRATE */\n    slotRangeArray *slots;                  /* List of slot ranges for this migration task */\n    int state;                              /* Current state of the task */\n    int dest_state;                         /* Destination node's main state (approximate) */\n    char source[CLUSTER_NAMELEN];           /* Source node name */\n    char dest[CLUSTER_NAMELEN];             /* Destination node name */\n    connection *main_channel_conn;          /* Main channel connection */\n    connection *rdb_channel_conn;           /* RDB channel connection */\n    int rdb_channel_state;                  /* State of the RDB channel */\n    unsigned long long dest_offset;         /* Destination offset */\n    unsigned long long source_offset;       /* Source offset */\n    int cross_slot_during_propagating;      /* If cross-slot commands are encountered during propagating */\n    int stream_eof_during_streaming;        /* If STREAM-EOF is received during streaming buffer */\n    replDataBuf sync_buffer;                /* Buffer for the stream */\n    client *main_channel_client;            /* Client for the main channel on the source side */\n    client *rdb_channel_client;             /* Client for the RDB channel on the source side */\n    long long retry_count;                  /* Number of retries for this task */\n    mstime_t create_time;                   /* Task creation time */\n    mstime_t start_time;                    /* Task start time */\n    mstime_t end_time;                      /* Task end time */\n    mstime_t paused_time;                   /* The time when the slot writes were paused */\n    mstime_t dest_slots_snapshot_time;      /* The time when the destination 
starts applying the slot snapshot */\n    mstime_t dest_accum_applied_time;       /* The time when the destination finishes applying the accumulated buffer */\n    sds error;                              /* Error message for this task */\n    redisOpArray *pre_snapshot_module_cmds; /* Module commands to be propagated at the beginning of slot migration */\n} asmTask;\n\ntypedef struct activeTrimJob {\n    slotRangeArray *slots;      /* Slots being trimmed */\n    uint64_t client_id;         /* Client ID waiting for active trim completion (0 if none) */\n    int migration_cleanup;      /* Whether this is a migration cleanup of slots no longer owned */\n} activeTrimJob;\n\n/* ASM Manager: Global singleton that manages all ASM operations.\n * Coordinates migration tasks, trim jobs, and maintains statistics. */\nstruct asmManager {\n    list *tasks;                        /* List of asmTask to be processed */\n    list *archived_tasks;               /* List of archived asmTask */\n    list *pending_trim_jobs;            /* List of pending trim jobs (due to write pause) */\n    list *active_trim_jobs;             /* List of active trim jobs */\n    slotRangeArrayIter *active_trim_it; /* Iterator of the current active trim job */\n    size_t sync_buffer_peak;            /* Peak size of sync buffer */\n    asmTask *master_task;               /* The task that is currently active on the master */\n\n    /* Fail point injection for debugging */\n    int debug_fail_channel;       /* Channel where the task will fail */\n    int debug_fail_state;         /* State where the task will fail */\n    int debug_trim_method;        /* Method to trim the buffer */\n    int debug_active_trim_delay;  /* Sleep before trimming each key */\n\n    /* Background trim tracking */\n    size_t bg_trim_running;                             /* Number of bg trim jobs in progress */\n\n    /* Active trim stats */\n    unsigned long long active_trim_started;             /* Number of times active trim 
was started */\n    unsigned long long active_trim_completed;           /* Number of times active trim was completed */\n    unsigned long long active_trim_cancelled;           /* Number of times active trim was cancelled */\n    unsigned long long active_trim_current_job_keys;    /* Total number of keys to trim in the current job */\n    unsigned long long active_trim_current_job_trimmed; /* Number of keys trimmed in the current job */\n};\n\nenum asmState {\n    /* Common state */\n    ASM_NONE = 0,\n    ASM_CONNECTING,\n    ASM_AUTH_REPLY,\n    ASM_CANCELED,\n    ASM_FAILED,\n    ASM_COMPLETED,\n\n    /* Import state */\n    ASM_SEND_HANDSHAKE,\n    ASM_HANDSHAKE_REPLY,\n    ASM_SEND_SYNCSLOTS,\n    ASM_SYNCSLOTS_REPLY,\n    ASM_INIT_RDBCHANNEL,\n    ASM_ACCUMULATE_BUF,\n    ASM_READY_TO_STREAM,\n    ASM_STREAMING_BUF,\n    ASM_WAIT_STREAM_EOF,\n    ASM_TAKEOVER,\n\n    /* Migrate state */\n    ASM_WAIT_RDBCHANNEL,\n    ASM_WAIT_BGSAVE_START,\n    ASM_SEND_BULK_AND_STREAM,\n    ASM_SEND_STREAM,\n    ASM_HANDOFF_PREP,\n    ASM_HANDOFF,\n    ASM_STREAM_EOF,\n\n    /* RDB channel state */\n    ASM_RDBCHANNEL_REQUEST,\n    ASM_RDBCHANNEL_REPLY,\n    ASM_RDBCHANNEL_TRANSFER,\n};\n\nenum asmChannel {\n    ASM_IMPORT_MAIN_CHANNEL = 1,   /* Main channel for the import task */\n    ASM_IMPORT_RDB_CHANNEL,        /* RDB channel for the import task */\n    ASM_MIGRATE_MAIN_CHANNEL,      /* Main channel for the migrate task */\n    ASM_MIGRATE_RDB_CHANNEL        /* RDB channel for the migrate task */\n};\n\n/* Global ASM manager */\nstruct asmManager *asmManager = NULL;\n\n/* replication.c */\nchar *sendCommand(connection *conn, ...);\nchar *sendCommandArgv(connection *conn, int argc, char **argv, size_t *argv_lens);\nchar *receiveSynchronousResponse(connection *conn);\nConnectionType *connTypeOfReplication(void);\nint startBgsaveForReplication(int mincapa, int req);\nvoid createReplicationBacklogIfNeeded(void);\n/* cluster.c */\nvoid createDumpPayload(rio *payload, robj 
*o, robj *key, int dbid, int skip_checksum);\n/* cluster_asm.c */\nstatic void asmStartImportTask(asmTask *task);\nstatic void asmTaskCancel(asmTask *task, const char *reason);\nstatic void asmSyncBufferReadFromConn(connection *conn);\nstatic void propagateTrimSlots(slotRangeArray *slots);\nvoid asmTrimJobSchedule(slotRangeArray *slots);\nvoid asmTrimJobProcessPending(void);\nvoid asmCancelPendingTrimJobs(void);\nvoid asmTriggerActiveTrim(slotRangeArray *slots, uint64_t client_id, int migration_cleanup);\nvoid asmActiveTrimEnd(void);\nint asmIsAnyTrimJobOverlaps(slotRangeArray *slots);\nvoid asmTrimSlotsIfNotOwned(slotRangeArray *slots);\nvoid asmNotifyStateChange(asmTask *task, int event);\nvoid activeTrimJobFreeMethod(void *ptr);\n\nvoid asmInit(void) {\n    asmManager = zcalloc(sizeof(*asmManager));\n    asmManager->tasks = listCreate();\n    asmManager->archived_tasks = listCreate();\n    asmManager->pending_trim_jobs = listCreate();\n    asmManager->sync_buffer_peak = 0;\n    asmManager->master_task = NULL;\n    asmManager->debug_fail_channel = -1;\n    asmManager->debug_fail_state = -1;\n    asmManager->debug_trim_method = ASM_DEBUG_TRIM_DEFAULT;\n    asmManager->debug_active_trim_delay = 0;\n    asmManager->active_trim_jobs = listCreate();\n    asmManager->active_trim_started = 0;\n    asmManager->active_trim_completed = 0;\n    asmManager->active_trim_cancelled = 0;\n    listSetFreeMethod(asmManager->active_trim_jobs, activeTrimJobFreeMethod);\n}\n\nchar *asmTaskStateToString(int state) {\n    switch (state) {\n        case ASM_NONE: return \"none\";\n        case ASM_CONNECTING: return \"connecting\";\n        case ASM_AUTH_REPLY: return \"auth-reply\";\n        case ASM_CANCELED: return \"canceled\";\n        case ASM_FAILED: return \"failed\";\n        case ASM_COMPLETED: return \"completed\";\n\n        /* Import state */\n        case ASM_SEND_HANDSHAKE: return \"send-handshake\";\n        case ASM_HANDSHAKE_REPLY: return \"handshake-reply\";\n        
case ASM_SEND_SYNCSLOTS: return \"send-syncslots\";\n        case ASM_SYNCSLOTS_REPLY: return \"syncslots-reply\";\n        case ASM_INIT_RDBCHANNEL: return \"init-rdbchannel\";\n        case ASM_ACCUMULATE_BUF: return \"accumulate-buffer\";\n        case ASM_READY_TO_STREAM: return \"ready-to-stream\";\n        case ASM_STREAMING_BUF: return \"streaming-buffer\";\n        case ASM_WAIT_STREAM_EOF: return \"wait-stream-eof\";\n        case ASM_TAKEOVER: return \"takeover\";\n\n        /* Migrate state */\n        case ASM_WAIT_RDBCHANNEL: return \"wait-rdbchannel\";\n        case ASM_WAIT_BGSAVE_START: return \"wait-bgsave-start\";\n        case ASM_SEND_BULK_AND_STREAM: return \"send-bulk-and-stream\";\n        case ASM_SEND_STREAM: return \"send-stream\";\n        case ASM_HANDOFF_PREP: return \"handoff-prep\";\n        case ASM_HANDOFF: return \"handoff\";\n        case ASM_STREAM_EOF: return \"stream-eof\";\n\n        /* RDB channel state */\n        case ASM_RDBCHANNEL_REQUEST: return \"rdbchannel-request\";\n        case ASM_RDBCHANNEL_REPLY: return \"rdbchannel-reply\";\n        case ASM_RDBCHANNEL_TRANSFER: return \"rdbchannel-transfer\";\n\n        default: return \"unknown\";\n    }\n}\n\nconst char *asmChannelToString(int channel) {\n    switch (channel) {\n        case ASM_IMPORT_MAIN_CHANNEL: return \"import-main-channel\";\n        case ASM_IMPORT_RDB_CHANNEL: return \"import-rdb-channel\";\n        case ASM_MIGRATE_MAIN_CHANNEL: return \"migrate-main-channel\";\n        case ASM_MIGRATE_RDB_CHANNEL: return \"migrate-rdb-channel\";\n        default: return \"unknown\";\n    }\n}\n\nint asmDebugSetFailPoint(char *channel, char *state) {\n    if (!asmManager) {\n        serverLog(LL_WARNING, \"ASM manager is not initialized\");\n        return C_ERR;\n    }\n    asmManager->debug_fail_channel = -1;\n    asmManager->debug_fail_state = -1;\n    if (!channel || !state) return C_ERR;\n    if (sdslen(channel) == 0 
&& sdslen(state) == 0) {\n        serverLog(LL_WARNING, \"ASM fail point is cleared\");\n        return C_OK;\n    }\n\n    for (int i = ASM_IMPORT_MAIN_CHANNEL; i <= ASM_MIGRATE_RDB_CHANNEL; i++) {\n        if (!strcasecmp(channel, asmChannelToString(i))) {\n            asmManager->debug_fail_channel = i;\n            break;\n        }\n    }\n    if (asmManager->debug_fail_channel == -1) return C_ERR;\n\n    for (int i = ASM_NONE; i <= ASM_RDBCHANNEL_TRANSFER; i++) {\n        if (!strcasecmp(state, asmTaskStateToString(i))) {\n            asmManager->debug_fail_state = i;\n            break;\n        }\n    }\n    if (asmManager->debug_fail_state == -1) return C_ERR;\n\n    serverLog(LL_NOTICE, \"ASM fail point set: channel=%s, state=%s\", channel, state);\n    return C_OK;\n}\n\nint asmDebugSetTrimMethod(const char *method, int active_trim_delay) {\n    if (!asmManager) {\n        serverLog(LL_WARNING, \"ASM manager is not initialized\");\n        return C_ERR;\n    }\n    int prev = asmManager->debug_trim_method;\n    if (!strcasecmp(method, \"default\")) asmManager->debug_trim_method = ASM_DEBUG_TRIM_DEFAULT;\n    else if (!strcasecmp(method, \"none\")) asmManager->debug_trim_method = ASM_DEBUG_TRIM_NONE;\n    else if (!strcasecmp(method, \"bg\")) asmManager->debug_trim_method = ASM_DEBUG_TRIM_BG;\n    else if (!strcasecmp(method, \"active\")) asmManager->debug_trim_method = ASM_DEBUG_TRIM_ACTIVE;\n    else return C_ERR;\n\n    /* If we are switching away from 'none' to any other method, delete all\n     * the keys in the slots we don't own. */\n    if (prev == ASM_DEBUG_TRIM_NONE && asmManager->debug_trim_method != ASM_DEBUG_TRIM_NONE) {\n        for (int i = 0; i < CLUSTER_SLOTS; i++)\n            if (!clusterIsMySlot(i))\n                clusterDelKeysInSlot(i, 0);\n    }\n    asmManager->debug_active_trim_delay = active_trim_delay;\n    serverLog(LL_NOTICE, \"ASM trim method set to '%s', active_trim_delay=%d\", method, active_trim_delay);\n    return C_OK;\n}\n\nint 
asmDebugIsFailPointActive(int channel, int state) {\n    if (!asmManager) return 0; /* ASM manager not initialized */\n    if (asmManager->debug_fail_channel == channel && asmManager->debug_fail_state == state) {\n        serverLog(LL_NOTICE, \"ASM fail point active: channel=%s, state=%s\",\n                  asmChannelToString(channel), asmTaskStateToString(state));\n        return 1;\n    }\n    return 0;\n}\n\nsds asmCatInfoString(sds info) {\n    int active_tasks = 0;\n\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (task->operation == ASM_IMPORT ||\n            (task->operation == ASM_MIGRATE && task->state != ASM_FAILED))\n        {\n            active_tasks++;\n        }\n    }\n\n    return sdscatprintf(info ? info : sdsempty(),\n                        \"cluster_slot_migration_active_tasks:%d\\r\\n\"\n                        \"cluster_slot_migration_active_trim_running:%lu\\r\\n\"\n                        \"cluster_slot_migration_active_trim_current_job_keys:%llu\\r\\n\"\n                        \"cluster_slot_migration_active_trim_current_job_trimmed:%llu\\r\\n\"\n                        \"cluster_slot_migration_stats_active_trim_started:%llu\\r\\n\"\n                        \"cluster_slot_migration_stats_active_trim_completed:%llu\\r\\n\"\n                        \"cluster_slot_migration_stats_active_trim_cancelled:%llu\\r\\n\",\n                        active_tasks,\n                        listLength(asmManager->active_trim_jobs),\n                        asmManager->active_trim_current_job_keys,\n                        asmManager->active_trim_current_job_trimmed,\n                        asmManager->active_trim_started,\n                        asmManager->active_trim_completed,\n                        asmManager->active_trim_cancelled);\n}\n\nvoid asmTaskReset(asmTask *task) {\n    task->state = ASM_NONE;\n    
task->dest_state = ASM_NONE;\n    task->rdb_channel_state = ASM_NONE;\n    task->main_channel_conn = NULL;\n    task->rdb_channel_conn = NULL;\n    task->dest_offset = 0;\n    task->source_offset = 0;\n    task->stream_eof_during_streaming = 0;\n    task->cross_slot_during_propagating = 0;\n    replDataBufInit(&task->sync_buffer);\n    task->main_channel_client = NULL;\n    task->rdb_channel_client = NULL;\n    task->paused_time = 0;\n    task->dest_slots_snapshot_time = 0;\n    task->dest_accum_applied_time = 0;\n    task->pre_snapshot_module_cmds = NULL;\n}\n\nasmTask *asmTaskCreate(const char *task_id) {\n    asmTask *task = zcalloc(sizeof(*task));\n    task->error = sdsempty();\n    asmTaskReset(task);\n    task->slots = NULL;\n    task->retry_count = 0;\n    task->create_time = server.mstime;\n    task->start_time = -1;\n    task->end_time = -1;\n    if (task_id) {\n        task->id = sdsnew(task_id);\n    } else {\n        task->id = sdsnewlen(NULL, CLUSTER_NAMELEN);\n        getRandomHexChars(task->id, CLUSTER_NAMELEN);\n    }\n\n    return task;\n}\n\nvoid asmTaskFree(asmTask *task) {\n    replDataBufClear(&task->sync_buffer);\n    sdsfree(task->id);\n    slotRangeArrayFree(task->slots);\n    sdsfree(task->error);\n    zfree(task);\n}\n\n/* Convert the task state to the corresponding event. 
*/\nint asmTaskStateToEvent(asmTask *task) {\n    if (task->operation == ASM_IMPORT) {\n        if (task->state == ASM_COMPLETED) return ASM_EVENT_IMPORT_COMPLETED;\n        else if (task->state == ASM_FAILED) return ASM_EVENT_IMPORT_FAILED;\n        else return ASM_EVENT_IMPORT_STARTED;\n    } else {\n        if (task->state == ASM_COMPLETED) return ASM_EVENT_MIGRATE_COMPLETED;\n        else if (task->state == ASM_FAILED) return ASM_EVENT_MIGRATE_FAILED;\n        else return ASM_EVENT_MIGRATE_STARTED;\n    }\n}\n\n/* Serialize ASM task information into a string for transmission to replicas.\n * Format: \"task_id:source_node:dest_node:operation:state:slot_ranges\"\n * Where slot_ranges is in the format \"1000-2000 3000-4000 ...\" */\nsds asmTaskSerialize(asmTask *task) {\n    sds serialized = sdsempty();\n\n    /* Add task ID */\n    serialized = sdscatprintf(serialized, \"%s:\", task->id);\n\n    /* Add source node ID (40 chars) */\n    serialized = sdscatlen(serialized, task->source, CLUSTER_NAMELEN);\n    serialized = sdscat(serialized, \":\");\n\n    /* Add destination node ID (40 chars) */\n    serialized = sdscatlen(serialized, task->dest, CLUSTER_NAMELEN);\n    serialized = sdscat(serialized, \":\");\n\n    /* Add operation type */\n    serialized = sdscatprintf(serialized, \"%s:\", task->operation == ASM_IMPORT ?\n                                                 \"import\" : \"migrate\");\n\n    /* Add current state */\n    serialized = sdscatprintf(serialized, \"%s:\", asmTaskStateToString(task->state));\n\n    /* Add slot ranges sds */\n    sds slots_str = slotRangeArrayToString(task->slots);\n    serialized = sdscatprintf(serialized, \"%s\", slots_str);\n    sdsfree(slots_str);\n\n    return serialized;\n}\n\n/* Deserialize ASM task information from a string and create a complete asmTask.\n * Format: \"task_id:source_node:dest_node:operation:state:slot_ranges\"\n * Returns a new asmTask on success, NULL on failure. 
*/\nasmTask *asmTaskDeserialize(sds data) {\n    int count, idx = 0;\n    asmTask *task = NULL;\n    if (!data || sdslen(data) == 0) return NULL;\n\n    sds *parts = sdssplitlen(data, sdslen(data), \":\", 1, &count);\n    if (count < 6) goto err;\n\n    /* Parse task ID */\n    if (sdslen(parts[idx]) == 0) goto err;\n    task = asmTaskCreate(parts[idx]);\n    if (!task) goto err;\n    idx++;\n\n    /* Parse source node ID */\n    if (sdslen(parts[idx]) != CLUSTER_NAMELEN) goto err;\n    memcpy(task->source, parts[idx], CLUSTER_NAMELEN);\n    idx++;\n\n    /* Parse destination node ID */\n    if (sdslen(parts[idx]) != CLUSTER_NAMELEN) goto err;\n    memcpy(task->dest, parts[idx], CLUSTER_NAMELEN);\n    idx++;\n\n    /* Parse operation type */\n    if (!strcasecmp(parts[idx], \"import\")) {\n        task->operation = ASM_IMPORT;\n    } else if (!strcasecmp(parts[idx], \"migrate\")) {\n        task->operation = ASM_MIGRATE;\n    } else {\n        goto err;\n    }\n    idx++;\n\n    /* Parse state */\n    task->state = ASM_NONE; /* Default state */\n    for (int state = ASM_NONE; state <= ASM_RDBCHANNEL_TRANSFER; state++) {\n        if (!strcasecmp(parts[idx], asmTaskStateToString(state))) {\n            task->state = state;\n            break;\n        }\n    }\n    idx++;\n\n    /* Parse slot ranges */\n    task->slots = slotRangeArrayFromString(parts[idx]);\n    if (!task->slots) goto err;\n    idx++;\n\n    /* Ignore any extra fields for future compatibility */\n\n    sdsfreesplitres(parts, count);\n    return task;\n\nerr:\n    if (task) asmTaskFree(task);\n    sdsfreesplitres(parts, count);\n    return NULL;\n}\n\n/* Notify replicas about ASM task information to maintain consistency during\n * slot migration. This function sends a CLUSTER SYNCSLOTS CONF ASM-TASK command\n * to all connected replicas with the serialized task information. 
*/\nvoid asmNotifyReplicasStateChange(struct asmTask *task) {\n    if (!server.cluster_enabled || !clusterNodeIsMaster(getMyClusterNode())) return;\n\n    /* Create command arguments for CLUSTER SYNCSLOTS CONF ASM-TASK */\n    robj *argv[5];\n    argv[0] = createStringObject(\"CLUSTER\", 7);\n    argv[1] = createStringObject(\"SYNCSLOTS\", 9);\n    argv[2] = createStringObject(\"CONF\", 4);\n    argv[3] = createStringObject(\"ASM-TASK\", 8);\n    argv[4] = createObject(OBJ_STRING, asmTaskSerialize(task));\n\n    /* Send the command to all replicas */\n    replicationFeedSlaves(server.slaves, -1, argv, 5);\n\n    /* Clean up command objects */\n    for (int i = 0; i < 5; i++) {\n        decrRefCount(argv[i]);\n    }\n}\n\n/* Dump the active import ASM task information. */\nsds asmDumpActiveImportTask(void) {\n    if (!server.cluster_enabled || !asmManager) return NULL;\n\n    /* For a replica, dump the master's active task. */\n    if (clusterNodeIsSlave(getMyClusterNode()) &&\n        asmManager->master_task &&\n        asmManager->master_task->state != ASM_FAILED &&\n        asmManager->master_task->state != ASM_COMPLETED)\n    {\n        return asmTaskSerialize(asmManager->master_task);\n    }\n\n    /* For a master, dump the first active task. */\n    if (listLength(asmManager->tasks) == 0) return NULL;\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->state == ASM_NONE || task->state == ASM_FAILED ||\n        task->state == ASM_COMPLETED) return NULL;\n\n    return asmTaskSerialize(task);\n}\n\nsize_t asmGetPeakSyncBufferSize(void) {\n    if (!asmManager) return 0;\n    /* Compute peak sync buffer usage. The current task's peak may not be\n     * reflected in asmManager->sync_buffer_peak immediately. 
*/\n    size_t peak = asmManager->sync_buffer_peak;\n    asmTask *task = listFirst(asmManager->tasks) ?\n                    listNodeValue(listFirst(asmManager->tasks)) : NULL;\n    if (task && task->operation == ASM_IMPORT)\n        peak = max(task->sync_buffer.peak, asmManager->sync_buffer_peak);\n    \n    return peak;\n}\n\nsize_t asmGetImportInputBufferSize(void) {\n    if (!asmManager || listLength(asmManager->tasks) == 0) return 0;\n\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->operation == ASM_IMPORT)\n        return task->sync_buffer.mem_used;\n\n    return 0;\n}\n\nsize_t asmGetMigrateOutputBufferSize(void) {\n    if (!asmManager || listLength(asmManager->tasks) == 0) return 0;\n\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->operation == ASM_MIGRATE && task->main_channel_client)\n        return getClientOutputBufferMemoryUsage(task->main_channel_client);\n\n    return 0;\n}\n\n/* Returns the ASM task with the given ID, or NULL if no such task exists. */\nstatic asmTask *asmLookupTaskAt(list *tasks, const char *id) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (!strcmp(task->id, id)) return task;\n    }\n    return NULL;\n}\n\n/* Returns the ASM task with the given ID, or NULL if no such task exists. */\nasmTask *asmLookupTaskById(const char *id) {\n    return asmLookupTaskAt(asmManager->tasks, id);\n}\n\n/* Returns the ASM task that is identical to the given slot range array, or NULL\n * if no such task exists. 
*/\nasmTask *asmLookupTaskBySlotRangeArray(slotRangeArray *slots) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (slotRangeArrayIsEqual(task->slots, slots))\n            return task;\n    }\n    return NULL;\n}\n\n/* Returns the slot range array for the given task ID */\nslotRangeArray *asmTaskGetSlotRanges(const char *task_id) {\n    asmTask *task = NULL;\n    if (!task_id || (task = asmLookupTaskById(task_id)) == NULL) return NULL;\n\n    return task->slots;\n}\n\n/* Returns 1 if the slot range array overlaps with the given slot range. */\nstatic int slotRangeArrayOverlaps(slotRangeArray *slots, slotRange *req) {\n    for (int i = 0; i < slots->num_ranges; i++) {\n        slotRange *sr = &slots->ranges[i];\n        if (sr->start <= req->end && sr->end >= req->start)\n            return 1;\n    }\n    return 0;\n}\n\n/* Returns 1 if the two slot range arrays overlap, 0 otherwise. */\nstatic int slotRangeArraysOverlap(slotRangeArray *slots1, slotRangeArray *slots2) {\n    for (int i = 0; i < slots1->num_ranges; i++) {\n        slotRange *sr1 = &slots1->ranges[i];\n        if (slotRangeArrayOverlaps(slots2, sr1)) return 1;\n    }\n    return 0;\n}\n\n/* Returns the ASM task that overlaps with the given slot range, or NULL if\n * no such task exists. 
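 *\n * The overlap rule used by slotRangeArrayOverlaps() above is:\n * [s1,e1] and [s2,e2] overlap iff s1 <= e2 && e1 >= s2. For example,\n * 0-100 and 100-200 overlap (they share slot 100), while 0-99 and\n * 100-200 do not.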
*/\nstatic asmTask *lookupAsmTaskBySlotRange(slotRange *req) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (slotRangeArrayOverlaps(task->slots, req))\n            return task;\n    }\n    return NULL;\n}\n\n/* Validates the given slot ranges for a migration task:\n * - Ensures the current node is a master.\n * - Verifies all slots are in a STABLE state.\n * - Confirms all slots belong to a single source node.\n * - Confirms that no ongoing import task overlaps with the slot ranges.\n *\n * Returns the source node if validation succeeds.\n * Otherwise, returns NULL and sets the 'err' variable. */\nstatic clusterNode *validateImportSlotRanges(slotRangeArray *slots, sds *err, asmTask *current) {\n    clusterNode *source = NULL;\n\n    *err = NULL;\n\n    /* Ensure this is a master node */\n    if (!clusterNodeIsMaster(getMyClusterNode())) {\n        *err = sdsnew(\"slot migration not allowed on replica.\");\n        goto out;\n    }\n\n    /* Ensure no manual migration is in progress. */\n    for (int i = 0; i < CLUSTER_SLOTS; i++) {\n        if (getImportingSlotSource(i) != NULL ||\n            getMigratingSlotDest(i) != NULL)\n        {\n            *err = sdsnew(\"all slot states must be STABLE to start a slot migration task.\");\n            goto out;\n        }\n    }\n\n    for (int i = 0; i < slots->num_ranges; i++) {\n        slotRange *sr = &slots->ranges[i];\n\n        /* Ensure no import task overlaps with this slot range.\n         * Skip the check for the current task, which is already running on\n         * this slot range. 
*/\n        asmTask *task = lookupAsmTaskBySlotRange(sr);\n        if (task && task != current && task->operation == ASM_IMPORT) {\n            *err = sdscatprintf(sdsempty(),\n                                \"overlapping import exists for slot range: %d-%d\",\n                                sr->start, sr->end);\n            goto out;\n        }\n\n        /* Validate that a migration task can be started for this slot range. */\n        for (int j = sr->start; j <= sr->end; j++) {\n            clusterNode *node = getNodeBySlot(j);\n            if (node == NULL) {\n                *err = sdscatprintf(sdsempty(), \"slot has no owner: %d\", j);\n                goto out;\n            }\n\n            if (!source) {\n                source = node;\n            } else if (source != node) {\n                *err = sdsnew(\"slots belong to different source nodes\");\n                goto out;\n            }\n        }\n    }\n\nout:\n    return *err ? NULL : source;\n}\n\n/* Returns 1 if a task with the specified operation is in progress, 0 otherwise. */\nstatic int asmTaskInProgress(int operation) {\n    listIter li;\n    listNode *ln;\n\n    if (!asmManager || listLength(asmManager->tasks) == 0) return 0;\n\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (task->operation == operation) return 1;\n    }\n    return 0;\n}\n\n/* Returns 1 if a migrate task is in progress, 0 otherwise. */\nint asmMigrateInProgress(void) {\n    return asmTaskInProgress(ASM_MIGRATE);\n}\n\n/* Returns 1 if an import task is in progress, 0 otherwise. */\nint asmImportInProgress(void) {\n    return asmTaskInProgress(ASM_IMPORT);\n}\n\n/* Returns 1 if the task is in a state where it can receive the replication\n * stream for the slot range, 0 otherwise. 
*/\ninline static int asmCanFeedMigrationClient(asmTask *task) {\n    return task->operation == ASM_MIGRATE &&\n           !task->cross_slot_during_propagating &&\n           (task->state == ASM_SEND_BULK_AND_STREAM ||\n            task->state == ASM_SEND_STREAM ||\n            task->state == ASM_HANDOFF_PREP);\n}\n\n/* Feed the migration client with the replication stream for the slot range. */\nvoid asmFeedMigrationClient(robj **argv, int argc) {\n    asmTask *task = NULL;\n\n    if (!server.cluster_enabled || !asmManager || listLength(asmManager->tasks) == 0)\n        return;\n\n    /* Check if there is a migrate task that can receive the replication stream. */\n    task = listNodeValue(listFirst(asmManager->tasks));\n    if (!asmCanFeedMigrationClient(task)) return;\n\n    /* Ensure all arguments are converted to string encoding if necessary,\n     * since getSlotFromCommand expects them to be string-encoded.\n     * Generally the arguments are string-encoded, but we may rewrite\n     * the command arguments to integer encoding. */\n    for (int i = 0; i < argc; i++) {\n        if (!sdsEncodedObject(argv[i])) {\n            serverAssert(argv[i]->encoding == OBJ_ENCODING_INT);\n            robj *old = argv[i];\n            argv[i] = createStringObjectFromLongLongWithSds((long)old->ptr);\n            decrRefCount(old);\n        }\n    }\n\n    /* Check if the command belongs to the slot range. 
*/\n    struct redisCommand *cmd = lookupCommand(argv, argc);\n    serverAssert(cmd);\n\n    int slot = getSlotFromCommand(cmd, argv, argc);\n\n    /* If the command does not have keys, skip it now.\n     * SELECT is not propagated, since we only support a single db in cluster mode.\n     * MULTI/EXEC is not needed, since transaction semantics are unnecessary\n     * before the slot handoff.\n     * FUNCTION subcommands are executed on all nodes, so we skip them here;\n     * propagating them could even cause errors on execution.\n     *\n     * NOTICE: if some keyless commands should be propagated to the destination,\n     * we should identify them here and send them. */\n    if (slot == INVALID_CLUSTER_SLOT) return;\n\n    /* Generally we reject cross-slot commands before executing, but a module may\n     * replicate such commands, so we check again. To guarantee data\n     * consistency, we cancel the task if we encounter a cross-slot command. */\n    if (slot == CLUSTER_CROSSSLOT) {\n        /* We cannot cancel the task directly here, since it may lead to a recursive\n         * call: asmTaskCancel() --> moduleFireServerEvent() --> moduleFreeContext()\n         * --> postExecutionUnitOperations() --> propagateNow(). Even worse, this\n         * could result in propagating pending commands to the replication stream twice.\n         * To avoid this, we simply set a flag here and cancel the task in beforeSleep(). */\n        task->cross_slot_during_propagating = 1;\n        return;\n    }\n\n    /* Check if the slot belongs to the task's slot range. */\n    slotRange sr = {slot, slot};\n    if (!slotRangeArrayOverlaps(task->slots, &sr)) return;\n\n    if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_MAIN_CHANNEL, task->state)))\n        freeClientAsync(task->main_channel_client);\n\n    /* Feed the main channel with the command. 
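 * The command is appended to the main channel client's output buffer in RESP\n * multibulk framing; e.g. a hypothetical SET k v is framed as:\n *\n *   *3<CRLF>$3<CRLF>SET<CRLF>$1<CRLF>k<CRLF>$1<CRLF>v<CRLF>\n *\n * and source_offset below advances by exactly the number of bytes queued.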
*/\n    client *c = task->main_channel_client;\n    size_t prev_bytes = getNormalClientPendingReplyBytes(c);\n\n    addReplyArrayLen(c, argc);\n    for (int i = 0; i < argc; i++)\n        addReplyBulk(c, argv[i]);\n\n    /* Update the task's source offset to reflect the bytes sent. */\n    task->source_offset += (getNormalClientPendingReplyBytes(c) - prev_bytes);\n}\n\nasmTask *asmCreateImportTask(const char *task_id, slotRangeArray *slots, sds *err) {\n    clusterNode *source;\n\n    *err = NULL;\n    /* Validate that the slot ranges are valid and that migration can be\n     * initiated for them. */\n    source = validateImportSlotRanges(slots, err, NULL);\n    if (!source)\n        goto err;\n\n    if (source == getMyClusterNode()) {\n        *err = sdsnew(\"this node is already the owner of the slot range\");\n        goto err;\n    }\n\n    /* Only support a single task at a time now. */\n    if (listLength(asmManager->tasks) != 0) {\n        asmTask *current = listNodeValue(listFirst(asmManager->tasks));\n        if (current->state == ASM_FAILED) {\n            /* We can create a new import task only if the current one is failed,\n             * cancel the failed task to create a new one. */\n            asmTaskCancel(current, \"new import requested\");\n        } else {\n            *err = sdsnew(\"another ASM task is already in progress\");\n            goto err;\n        }\n    }\n    /* There should be no task in progress. 
*/\n    serverAssert(listLength(asmManager->tasks) == 0);\n\n    /* Create a slot migration task */\n    asmTask *task = asmTaskCreate(task_id);\n    task->slots = slots;\n    task->state = ASM_NONE;\n    task->operation = ASM_IMPORT;\n    memcpy(task->source, clusterNodeGetName(source), CLUSTER_NAMELEN);\n    memcpy(task->dest, getMyClusterId(), CLUSTER_NAMELEN);\n\n    listAddNodeTail(asmManager->tasks, task);\n    sds slots_str = slotRangeArrayToString(slots);\n    serverLog(LL_NOTICE, \"Import task %s created: src=%.40s, dest=%.40s, slots=%s\",\n                         task->id, task->source, task->dest, slots_str);\n    sdsfree(slots_str);\n\n    return task;\n\nerr:\n    slotRangeArrayFree(slots);\n    return NULL;\n}\n\n/* CLUSTER MIGRATION IMPORT <start-slot end-slot [start-slot end-slot ...]>\n *\n * Sent by the operator to the destination node to start the migration. */\nstatic void clusterMigrationCommandImport(client *c) {\n    /* Validate slot range arg count */\n    int remaining = c->argc - 3;\n    if (remaining == 0 || remaining % 2 != 0) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    slotRangeArray *slots = parseSlotRangesOrReply(c, c->argc, 3);\n    if (!slots) return;\n\n    sds err = NULL;\n    asmTask *task = asmCreateImportTask(NULL, slots, &err);\n    if (!task) {\n        addReplyErrorSds(c, err);\n        return;\n    }\n\n    addReplyBulkCString(c, task->id);\n}\n\n/* CLUSTER MIGRATION CANCEL [ID <task-id> | ALL]\n *   - Reply: Number of cancelled tasks\n *\n * Cancels the task with the given ID, or all tasks when ALL is specified. 
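 *\n * Usage examples (hypothetical task ID):\n *\n *   CLUSTER MIGRATION CANCEL ID 1a2b3c4d\n *   CLUSTER MIGRATION CANCEL ALL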
*/\nstatic void clusterMigrationCommandCancel(client *c) {\n    sds task_id = NULL;\n    int num_cancelled = 0;\n\n    /* Validate slot range arg count */\n    if (c->argc != 4 && c->argc != 5) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if (!strcasecmp(c->argv[3]->ptr, \"id\")) {\n        if (c->argc != 5) {\n            addReplyErrorArity(c);\n            return;\n        }\n        task_id = c->argv[4]->ptr;\n    } else if (!strcasecmp(c->argv[3]->ptr, \"all\")) {\n        if (c->argc != 4) {\n            addReplyErrorArity(c);\n            return;\n        }\n    } else {\n        addReplyError(c, \"unknown argument\");\n        return;\n    }\n\n    num_cancelled = clusterAsmCancel(task_id, \"user request\");\n    addReplyLongLong(c, num_cancelled);\n}\n\n/* Reply with the status of the task. */\nstatic void replyTaskStatus(client *c, asmTask *task) {\n    mstime_t p = 0;\n\n    addReplyMapLen(c, 12);\n    addReplyBulkCString(c, \"id\");\n    addReplyBulkCString(c, task->id);\n    addReplyBulkCString(c, \"slots\");\n    addReplyBulkSds(c, slotRangeArrayToString(task->slots));\n    addReplyBulkCString(c, \"source\");\n    addReplyBulkCBuffer(c, task->source, CLUSTER_NAMELEN);\n    addReplyBulkCString(c, \"dest\");\n    addReplyBulkCBuffer(c, task->dest, CLUSTER_NAMELEN);\n    addReplyBulkCString(c, \"operation\");\n    addReplyBulkCString(c, task->operation == ASM_IMPORT ? 
\"import\" : \"migrate\");\n    addReplyBulkCString(c, \"state\");\n    addReplyBulkCString(c, asmTaskStateToString(task->state));\n    addReplyBulkCString(c, \"last_error\");\n    addReplyBulkCBuffer(c, task->error, sdslen(task->error));\n    addReplyBulkCString(c, \"retries\");\n    addReplyLongLong(c, task->retry_count);\n    addReplyBulkCString(c, \"create_time\");\n    addReplyLongLong(c, task->create_time);\n    addReplyBulkCString(c, \"start_time\");\n    addReplyLongLong(c, task->start_time);\n    addReplyBulkCString(c, \"end_time\");\n    addReplyLongLong(c, task->end_time);\n\n    if (task->operation == ASM_MIGRATE && task->state == ASM_COMPLETED)\n        p = task->end_time - task->paused_time;\n    addReplyBulkCString(c, \"write_pause_ms\");\n    addReplyLongLong(c, p);\n}\n\n/* CLUSTER MIGRATION STATUS [ID <task-id> | ALL]\n *  - Reply: Array of atomic slot migration tasks */\nstatic void clusterMigrationCommandStatus(client *c) {\n    listIter li;\n    listNode *ln;\n\n    if (c->argc != 4 && c->argc != 5) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if (!strcasecmp(c->argv[3]->ptr, \"id\")) {\n        if (c->argc != 5) {\n            addReplyErrorArity(c);\n            return;\n        }\n        sds id = c->argv[4]->ptr;\n        asmTask *task = asmLookupTaskAt(asmManager->tasks, id);\n        if (!task) task = asmLookupTaskAt(asmManager->archived_tasks, id);\n        if (!task) {\n            addReplyArrayLen(c, 0);\n            return;\n        }\n\n        addReplyArrayLen(c, 1);\n        replyTaskStatus(c, task);\n    } else if (!strcasecmp(c->argv[3]->ptr, \"all\")) {\n        if (c->argc != 4) {\n            addReplyErrorArity(c);\n            return;\n        }\n        addReplyArrayLen(c, listLength(asmManager->tasks) +\n                            listLength(asmManager->archived_tasks));\n        listRewind(asmManager->tasks, &li);\n        while ((ln = listNext(&li)) != NULL)\n            replyTaskStatus(c, 
listNodeValue(ln));\n\n        listRewind(asmManager->archived_tasks, &li);\n        while ((ln = listNext(&li)) != NULL)\n            replyTaskStatus(c, listNodeValue(ln));\n    } else {\n        addReplyError(c, \"unknown argument\");\n        return;\n    }\n}\n\n/* CLUSTER MIGRATION\n *      <IMPORT <start-slot end-slot [start-slot end-slot ...]> |\n *       STATUS [ID <task-id> | ALL] |\n *       CANCEL [ID <task-id> | ALL]>\n*/\nvoid clusterMigrationCommand(client *c) {\n    if (c->argc < 4) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if (strcasecmp(c->argv[2]->ptr, \"import\") == 0) {\n        clusterMigrationCommandImport(c);\n    } else if (strcasecmp(c->argv[2]->ptr, \"status\") == 0) {\n        clusterMigrationCommandStatus(c);\n    } else if (strcasecmp(c->argv[2]->ptr, \"cancel\") == 0) {\n        clusterMigrationCommandCancel(c);\n    } else {\n        addReplyError(c, \"unknown argument\");\n    }\n}\n\n/* Returns the address of the node in the format \"ip:port\". */\nstatic const char *getNodeAddressStr(const char *node_id, int len) {\n    serverAssert(node_id != NULL);\n    static char buf[NET_HOST_PORT_STR_LEN];\n\n    clusterNode *n = clusterLookupNode(node_id, len);\n    char *ip = n ? clusterNodeIp(n) : \"?\";\n    int port = n ? (server.tls_replication ? clusterNodeTlsPort(n) :\n                                             clusterNodeTcpPort(n)) : 0;\n    formatAddr(buf, sizeof(buf), ip, port);\n    return buf;\n}\n\n/* Log a human-readable message for ASM task lifecycle events. 
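 * For example, an ASM_EVENT_IMPORT_STARTED event produces a line like\n * (hypothetical values):\n *\n *   Import task 1a2b3c4d started for slots: 0-100, source address: 10.0.0.1:6379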
*/\nvoid asmLogTaskEvent(asmTask *task, int event) {\n    sds str = slotRangeArrayToString(task->slots);\n\n    switch (event) {\n        case ASM_EVENT_IMPORT_STARTED:\n            serverLog(LL_NOTICE, \"Import task %s started for slots: %s, source address: %s\",\n                      task->id, str, getNodeAddressStr(task->source, CLUSTER_NAMELEN));\n            break;\n        case ASM_EVENT_IMPORT_FAILED:\n            serverLog(LL_NOTICE, \"Import task %s failed for slots: %s\", task->id, str);\n            break;\n        case ASM_EVENT_TAKEOVER:\n            serverLog(LL_NOTICE, \"Import task %s is ready to take over slots: %s\", task->id, str);\n            break;\n        case ASM_EVENT_IMPORT_COMPLETED:\n            serverLog(LL_NOTICE, \"Import task %s completed for slots: %s (imported %llu keys)\",\n                      task->id, str, getKeyCountInSlotRangeArray(task->slots));\n            break;\n        case ASM_EVENT_MIGRATE_STARTED:\n            serverLog(LL_NOTICE, \"Migrate task %s started for slots: %s, destination address: %s (number of keys at start: %llu)\",\n                      task->id, str, getNodeAddressStr(task->dest, CLUSTER_NAMELEN), getKeyCountInSlotRangeArray(task->slots));\n            break;\n        case ASM_EVENT_MIGRATE_FAILED:\n            serverLog(LL_NOTICE, \"Migrate task %s failed for slots: %s\", task->id, str);\n            break;\n        case ASM_EVENT_HANDOFF_PREP:\n            serverLog(LL_NOTICE, \"Migrate task %s preparing to hand off slots: %s\", task->id, str);\n            break;\n        case ASM_EVENT_MIGRATE_COMPLETED:\n            serverLog(LL_NOTICE, \"Migrate task %s completed for slots: %s (migrated %llu keys)\",\n                      task->id, str, getKeyCountInSlotRangeArray(task->slots));\n            break;\n        default:\n            break;\n    }\n\n    sdsfree(str);\n}\n\n/* Notify the module and the cluster implementation about the state change. 
*/\nvoid asmNotifyStateChange(asmTask *task, int event) {\n    RedisModuleClusterSlotMigrationInfo info = {\n            .version = REDISMODULE_CLUSTER_SLOT_MIGRATION_INFO_VERSION,\n            .task_id = task->id,\n            .slots = (RedisModuleSlotRangeArray *) task->slots\n    };\n    memcpy(info.source_node_id, task->source, CLUSTER_NAMELEN);\n    memcpy(info.destination_node_id, task->dest, CLUSTER_NAMELEN);\n\n    int module_event = -1;\n    if (event == ASM_EVENT_IMPORT_STARTED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED;\n    else if (event == ASM_EVENT_IMPORT_COMPLETED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_COMPLETED;\n    else if (event == ASM_EVENT_IMPORT_FAILED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_FAILED;\n    else if (event == ASM_EVENT_MIGRATE_STARTED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_STARTED;\n    else if (event == ASM_EVENT_MIGRATE_COMPLETED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED;\n    else if (event == ASM_EVENT_MIGRATE_FAILED) module_event = REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_FAILED;\n    serverAssert(module_event != -1);\n\n    moduleFireServerEvent(REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION, module_event, &info);\n    serverLog(LL_DEBUG, \"Fire cluster asm module event, task %s: state=%s\",\n                        task->id, asmTaskStateToString(task->state));\n\n    if (clusterNodeIsMaster(getMyClusterNode())) {\n        /* Notify the cluster impl only if it is a real active import task. 
*/\n        if (task != asmManager->master_task) {\n            asmLogTaskEvent(task, event);\n            clusterAsmOnEvent(task->id, event, task->slots);\n        }\n        asmNotifyReplicasStateChange(task); /* Propagate state change to replicas */\n    }\n}\n\nvoid asmImportSetFailed(asmTask *task) {\n    serverAssert(task->operation == ASM_IMPORT);\n    if (task->state == ASM_FAILED) return;\n\n    /* If we are in the RDB channel transfer state, we need to\n     * close the client that was created for the RDB channel. */\n    if (task->rdb_channel_conn && task->rdb_channel_state == ASM_RDBCHANNEL_TRANSFER) {\n        client *c = connGetPrivateData(task->rdb_channel_conn);\n        serverAssert(c->task == task);\n        task->rdb_channel_conn = NULL;\n        c->task = NULL;\n        c->flags &= ~CLIENT_MASTER;\n        freeClientAsync(c);\n    }\n\n    /* If in the wait stream EOF or streaming buffer state, we need to close the\n     * client that was created for the main channel. 
*/\n    if (task->main_channel_conn &&\n        (task->state == ASM_STREAMING_BUF || task->state == ASM_WAIT_STREAM_EOF))\n    {\n        client *c = connGetPrivateData(task->main_channel_conn);\n        serverAssert(c->task == task);\n        task->main_channel_conn = NULL;\n        c->task = NULL;\n        c->flags &= ~CLIENT_MASTER;\n        freeClientAsync(c);\n    }\n\n    /* Close the connections */\n    if (task->rdb_channel_conn) connClose(task->rdb_channel_conn);\n    if (task->main_channel_conn) connClose(task->main_channel_conn);\n    task->rdb_channel_conn = NULL;\n    task->main_channel_conn = NULL;\n\n    /* Clear the replication data buffer */\n    asmManager->sync_buffer_peak = max(asmManager->sync_buffer_peak, task->sync_buffer.peak);\n    replDataBufClear(&task->sync_buffer);\n\n    /* Mark the task as failed and notify the cluster */\n    task->state = ASM_FAILED;\n    asmNotifyStateChange(task, ASM_EVENT_IMPORT_FAILED);\n    /* This node may have become a replica; only a master can set up new slot\n     * trimming jobs. 
*/\n    if (clusterNodeIsMaster(getMyClusterNode()))\n        asmTrimJobSchedule(task->slots);\n}\n\nvoid asmMigrateSetFailed(asmTask *task) {\n    serverAssert(task->operation == ASM_MIGRATE);\n    if (task->state == ASM_FAILED) return;\n\n    /* Close the RDB and main channel clients. */\n    if (task->rdb_channel_client) {\n        task->rdb_channel_client->task = NULL;\n        freeClientAsync(task->rdb_channel_client);\n        task->rdb_channel_client = NULL;\n    }\n    if (task->main_channel_client) {\n        task->main_channel_client->task = NULL;\n        freeClientAsync(task->main_channel_client);\n        task->main_channel_client = NULL;\n    }\n\n    /* Clearing the sync buffer here is not strictly necessary, but it lets\n     * asmTaskReset work properly after a migrate task has failed. */\n    replDataBufClear(&task->sync_buffer);\n\n    /* Mark the task as failed and notify the cluster */\n    task->state = ASM_FAILED;\n    asmNotifyStateChange(task, ASM_EVENT_MIGRATE_FAILED);\n}\n\nvoid asmTaskSetFailed(asmTask *task, const char *fmt, ...) {\n    va_list ap;\n    sds error = sdsempty();\n\n    /* Set the error message */\n    va_start(ap, fmt);\n    error = sdscatvprintf(error, fmt, ap);\n    va_end(ap);\n    error = sdscatprintf(error, \" (state: %s, rdb_channel_state: %s)\",\n                         asmTaskStateToString(task->state),\n                         asmTaskStateToString(task->rdb_channel_state));\n    sdsfree(task->error);\n    task->error = error;\n\n    /* Log the error */\n    sds slots_str = slotRangeArrayToString(task->slots);\n    serverLog(LL_WARNING, \"%s task %s failed: slots=%s, err=%s\",\n              task->operation == ASM_IMPORT ? \"Import\" : \"Migrate\",\n              task->id, slots_str, task->error);\n    sdsfree(slots_str);\n\n    if (task->operation == ASM_IMPORT)\n        asmImportSetFailed(task);\n    else\n        asmMigrateSetFailed(task);\n}\n\n/* The task is completed or canceled. 
Update stats and move it to\n * the archived list. */\nvoid asmTaskFinalize(asmTask *task) {\n    listNode *ln = listFirst(asmManager->tasks);\n    serverAssert(ln->value == task);\n\n    task->end_time = server.mstime;\n\n    if (task->operation == ASM_IMPORT) {\n        asmManager->sync_buffer_peak = max(asmManager->sync_buffer_peak,\n                                           task->sync_buffer.peak);\n        replDataBufClear(&task->sync_buffer); /* Not used, so save memory */\n    }\n\n    /* Move the task to the archived list */\n    listUnlinkNode(asmManager->tasks, ln);\n    listLinkNodeHead(asmManager->archived_tasks, ln);\n}\n\nstatic void asmTaskCancel(asmTask *task, const char *reason) {\n    if (task->state == ASM_CANCELED) return;\n\n    asmTaskSetFailed(task, \"Cancelled due to %s\", reason);\n    task->state = ASM_CANCELED;\n    asmTaskFinalize(task);\n}\n\nvoid asmImportTakeover(asmTask *task) {\n    serverAssert(task->state == ASM_WAIT_STREAM_EOF ||\n                 task->state == ASM_STREAMING_BUF);\n\n    if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_MAIN_CHANNEL, ASM_TAKEOVER))) {\n        /* Do not take over slots to test timeout scenario. */\n        return;\n    }\n\n    /* Free the main channel connection since it is no longer needed. */\n    serverAssert(task->main_channel_conn != NULL);\n    client *c = connGetPrivateData(task->main_channel_conn);\n    c->task = NULL;\n    c->flags &= ~CLIENT_MASTER;\n    freeClientAsync(c);\n    task->main_channel_conn = NULL;\n\n    task->state = ASM_TAKEOVER;\n    asmLogTaskEvent(task, ASM_EVENT_TAKEOVER);\n    clusterAsmOnEvent(task->id, ASM_EVENT_TAKEOVER, task->slots);\n}\n\nvoid asmCallbackOnFreeClient(client *c) {\n    asmTask *task = c->task;\n    if (!task) return;\n\n    /* If the RDB channel connection is closed, mark the task as failed. 
*/\n    if (c->conn && task->rdb_channel_conn == c->conn) {\n        /* We create the client only when transferring data on the RDB channel */\n        serverAssert(task->rdb_channel_state == ASM_RDBCHANNEL_TRANSFER);\n        task->rdb_channel_conn = NULL; /* Will be freed by freeClient */\n        c->flags &= ~CLIENT_MASTER;\n        asmTaskSetFailed(task, \"RDB channel - Connection is closed\");\n        return;\n    }\n\n    if (c->conn && task->main_channel_conn == c->conn) {\n        /* After or in the process of streaming buffer to DB, a client will be\n         * created based on the main channel connection. */\n        serverAssert(task->state == ASM_STREAMING_BUF ||\n                     task->state == ASM_WAIT_STREAM_EOF);\n        task->main_channel_conn = NULL; /* Will be freed by freeClient */\n        c->flags &= ~CLIENT_MASTER;\n        asmTaskSetFailed(task, \"Main channel - Connection is closed\");\n        return;\n    }\n\n    if (c == task->rdb_channel_client) {\n        /* TODO: Detect whether the bgsave is completed successfully and\n         * update the state properly. */\n        task->rdb_channel_state = ASM_COMPLETED;\n        /* We may not have detected whether the child process has exited yet,\n         * so we can't determine whether the client has completed the slots\n         * snapshot transfer. If the RDB channel is interrupted unexpectedly,\n         * the destination side will also close the main channel.\n         * So here we just reset the RDB channel client of task. */\n        task->rdb_channel_client = NULL;\n        return;\n    }\n\n    /* If the main channel client is closed, we need to mark the task as failed\n     * and clean up the RDB channel client if it exists. 
*/\n    if (c == task->main_channel_client) {\n        task->main_channel_client = NULL;\n        /* The rdb channel client will be cleaned up */\n        asmTaskSetFailed(task, \"Main and RDB channel clients are disconnected.\");\n        return;\n    }\n}\n\n/* Sends an AUTH command to the source node using the internal secret.\n * Returns an error string if the command fails, or NULL on success. */\nchar *asmSendInternalAuth(connection *conn) {\n    size_t len = 0;\n    const char *internal_secret = clusterGetSecret(&len);\n    serverAssert(internal_secret != NULL);\n\n    sds secret = sdsnewlen(internal_secret, len);\n    char *err = sendCommand(conn, \"AUTH\", \"internal connection\", secret, NULL);\n    sdsfree(secret);\n    return err;\n}\n\n/* Handles the RDB channel sync with the source node.\n * This function is called when the RDB channel is established\n * and ready to sync with the source node. */\nvoid asmRdbChannelSyncWithSource(connection *conn) {\n    asmTask *task = connGetPrivateData(conn);\n    char *err = NULL;\n    sds task_error_msg = NULL;\n\n    /* Check for errors in the socket: after a non blocking connect() we\n     * may find that the socket is in error state. */\n    if (connGetState(conn) != CONN_STATE_CONNECTED)\n        goto error;\n\n    /* Check if the task is in a fail point state */\n    if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_RDB_CHANNEL, task->rdb_channel_state))) {\n        char buf[1];\n        /* Simulate a failure by shutting down the connection. On some operating\n         * systems (e.g. Linux), the socket's receive buffer is not flushed\n         * immediately, so we issue a dummy read to drain any pending data and\n         * surface the error condition.\n         * using shutdown() instead of connShutdown() because connTLSShutdown()\n         * will free the connection directly, which is not what we want. 
*/\n        shutdown(conn->fd, SHUT_RDWR);\n        connRead(conn, buf, 1);\n    }\n\n    if (task->rdb_channel_state == ASM_CONNECTING) {\n        connSetReadHandler(conn, asmRdbChannelSyncWithSource);\n        connSetWriteHandler(conn, NULL);\n\n        /* Send an AUTH command to the source node using the internal secret */\n        err = asmSendInternalAuth(conn);\n        if (err) goto write_error;\n        task->rdb_channel_state = ASM_AUTH_REPLY;\n        return;\n    }\n\n    if (task->rdb_channel_state == ASM_AUTH_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        /* The source node did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* Check `+OK` reply */\n        if (!strcmp(err, \"+OK\")) {\n            sdsfree(err);\n            err = NULL;\n            task->rdb_channel_state = ASM_RDBCHANNEL_REQUEST;\n            serverLog(LL_NOTICE, \"Source node replied to AUTH command, syncslots rdb channel operation can continue...\");\n        } else {\n            task_error_msg = sdscatprintf(sdsempty(),\n                \"Error reply to AUTH from source: %s\", err);\n            sdsfree(err);\n            goto error;\n        }\n    }\n\n    if (task->rdb_channel_state == ASM_RDBCHANNEL_REQUEST) {\n        err = sendCommand(conn, \"CLUSTER\", \"SYNCSLOTS\", \"RDBCHANNEL\", task->id, NULL);\n        if (err) goto write_error;\n        task->rdb_channel_state = ASM_RDBCHANNEL_REPLY;\n        return;\n    }\n\n    if (task->rdb_channel_state == ASM_RDBCHANNEL_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        /* The source node did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* Ignore empty newlines sent from the source node to keep the connection alive. 
*/\n        if (sdslen(err) == 0) {\n            serverLog(LL_DEBUG, \"Received an empty line in RDBCHANNEL reply, slots snapshot delivery will start later\");\n            sdsfree(err);\n            return;\n        }\n\n        /* Check `+SLOTSSNAPSHOT` reply */\n        if (!strncmp(err, \"+SLOTSSNAPSHOT\", strlen(\"+SLOTSSNAPSHOT\"))) {\n            sdsfree(err);\n            err = NULL;\n            task->state = ASM_ACCUMULATE_BUF;\n            /* The main channel buffers pending commands. */\n            connSetReadHandler(task->main_channel_conn, asmSyncBufferReadFromConn);\n\n            task->rdb_channel_state = ASM_RDBCHANNEL_TRANSFER;\n            client *c = createClient(conn);\n            c->flags |= (CLIENT_MASTER | CLIENT_INTERNAL | CLIENT_ASM_IMPORTING);\n            c->querybuf = sdsempty();\n            c->authenticated = 1;\n            c->user = NULL;\n            c->task = task;\n            serverLog(LL_NOTICE,\n                \"Source node replied to SLOTSSNAPSHOT, syncing slots snapshot can continue...\");\n        } else {\n            task_error_msg = sdscatprintf(sdsempty(),\n                \"Error reply to CLUSTER SYNCSLOTS RDBCHANNEL from the source: %s\", err);\n            sdsfree(err);\n            goto error;\n        }\n        return;\n    }\n    return;\n\nno_response_error:\n    task_error_msg = sdsnew(\"Source node did not respond to command during RDBCHANNELSYNCSLOTS handshake\");\n    /* Fall through to regular error handling */\n\nerror:\n    asmTaskSetFailed(task, \"RDB channel - Failed to sync with the source node: %s\",\n                     task_error_msg ? task_error_msg : connGetLastError(conn));\n    sdsfree(task_error_msg);\n    return;\n\nwrite_error: /* Handle sendCommand() errors. 
*/\n    task_error_msg = sdscatprintf(sdsempty(), \"Failed to send command to the source node: %s\", err);\n    sdsfree(err);\n    goto error;\n}\n\nchar *asmSendSlotRangesSync(connection *conn, asmTask *task) {\n    /* Prepare CLUSTER SYNCSLOTS SYNC command */\n    serverAssert(task->slots->num_ranges <= CLUSTER_SLOTS);\n    int argc = task->slots->num_ranges * 2 + 4;\n    char **args = zcalloc(sizeof(char*) * argc);\n    size_t *lens = zcalloc(sizeof(size_t) * argc);\n\n    args[0] = \"CLUSTER\";\n    args[1] = \"SYNCSLOTS\";\n    args[2] = \"SYNC\";\n    args[3] = task->id;\n    lens[0] = strlen(\"CLUSTER\");\n    lens[1] = strlen(\"SYNCSLOTS\");\n    lens[2] = strlen(\"SYNC\");\n    lens[3] = sdslen(task->id);\n\n    int i = 4;\n    for (int j = 0; j < task->slots->num_ranges; j++) {\n        slotRange *sr = &task->slots->ranges[j];\n        args[i] = sdscatprintf(sdsempty(), \"%d\", sr->start);\n        lens[i] = sdslen(args[i]);\n        args[i+1] = sdscatprintf(sdsempty(), \"%d\", sr->end);\n        lens[i+1] = sdslen(args[i+1]);\n        i += 2;\n    }\n    serverAssert(i == argc);\n\n    /* Send command to source node */\n    char *err = sendCommandArgv(conn, argc, args, lens);\n\n    /* Free allocated memory */\n    for (int j = 4; j < argc; j++) {\n        sdsfree(args[j]);\n    }\n    zfree(args);\n    zfree(lens);\n\n    return err;\n}\n\nvoid asmSyncWithSource(connection *conn) {\n    asmTask *task = connGetPrivateData(conn);\n    char *err = NULL;\n\n    /* Some task errors are not network issues, we record them explicitly. */\n    sds task_error_msg = NULL;\n\n    /* Check for errors in the socket: after a non blocking connect() we\n     * may find that the socket is in error state. 
*/\n    if (connGetState(conn) != CONN_STATE_CONNECTED)\n        goto error;\n\n    /* Check if the fail point is active for this channel and state */\n    if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_MAIN_CHANNEL, task->state))) {\n        char buf[1];\n        shutdown(conn->fd, SHUT_RDWR);\n        connRead(conn, buf, 1);\n    }\n\n    if (task->state == ASM_CONNECTING) {\n        connSetReadHandler(conn, asmSyncWithSource);\n        connSetWriteHandler(conn, NULL);\n        /* Send AUTH command to source node using internal auth */\n        err = asmSendInternalAuth(conn);\n        if (err) goto write_error;\n        task->state = ASM_AUTH_REPLY;\n        return;\n    }\n\n    if (task->state == ASM_AUTH_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        /* The source node did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* Check `+OK` reply */\n        if (!strcmp(err, \"+OK\")) {\n            sdsfree(err);\n            err = NULL;\n            task->state = ASM_SEND_HANDSHAKE;\n            serverLog(LL_NOTICE, \"Source node replied to AUTH command, syncslots can continue...\");\n        } else {\n            task_error_msg = sdscatprintf(sdsempty(),\n                \"Error reply to AUTH from the source: %s\", err);\n            sdsfree(err);\n            goto error;\n        }\n    }\n\n    if (task->state == ASM_SEND_HANDSHAKE) {\n        sds node_id = sdsnewlen(clusterNodeGetName(getMyClusterNode()), CLUSTER_NAMELEN);\n        err = sendCommand(conn, \"CLUSTER\", \"SYNCSLOTS\", \"CONF\", \"NODE-ID\", node_id, NULL);\n        sdsfree(node_id);\n        if (err) goto write_error;\n        task->state = ASM_HANDSHAKE_REPLY;\n        return;\n    }\n\n    if (task->state == ASM_HANDSHAKE_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        /* The source node did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* Check `+OK` reply */\n        if (!strcmp(err, \"+OK\")) {\n 
           sdsfree(err);\n            err = NULL;\n            task->state = ASM_SEND_SYNCSLOTS;\n            serverLog(LL_NOTICE, \"Source node replied to SYNCSLOTS CONF command, syncslots can continue...\");\n        } else {\n            task_error_msg = sdscatprintf(sdsempty(),\n                \"Error reply to CLUSTER SYNCSLOTS CONF from the source: %s\", err);\n            sdsfree(err);\n            goto error;\n        }\n    }\n\n    if (task->state == ASM_SEND_SYNCSLOTS) {\n        err = asmSendSlotRangesSync(conn, task);\n        if (err) goto write_error;\n\n        task->state = ASM_SYNCSLOTS_REPLY;\n        return;\n    }\n\n    if (task->state == ASM_SYNCSLOTS_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        /* The source node did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* Check `+RDBCHANNELSYNCSLOTS` reply */\n        if (!strncmp(err, \"+RDBCHANNELSYNCSLOTS\", strlen(\"+RDBCHANNELSYNCSLOTS\"))) {\n            sdsfree(err);\n            err = NULL;\n            task->state = ASM_INIT_RDBCHANNEL;\n            serverLog(LL_NOTICE,\n                \"Source node replied to SYNCSLOTS SYNC, syncslots can continue...\");\n        } else if (!strncmp(err, \"-NOTREADY\", strlen(\"-NOTREADY\"))) {\n            /* The source-side cluster is temporarily not ready to start a\n             * migration and replied -NOTREADY. We could fail this attempt and\n             * let the import task start another attempt later but that could\n             * trigger unnecessary cleanup in the cluster implementation.\n             * Instead, we'll retry sending SYNCSLOTS later in asmCron(). 
*/\n            sdsfree(err);\n            task->state = ASM_SEND_SYNCSLOTS;\n            serverLog(LL_NOTICE,\n                \"Source node replied to SYNCSLOTS SYNC with -NOTREADY, will retry later...\");\n            return;\n        } else {\n            task_error_msg = sdscatprintf(sdsempty(),\n                \"Error reply to CLUSTER SYNCSLOTS SYNC from the source: %s\", err);\n            sdsfree(err);\n            goto error;\n        }\n    }\n\n    if (task->state == ASM_INIT_RDBCHANNEL) {\n        /* Create RDB channel connection */\n        clusterNode *source_node = clusterLookupNode(task->source, CLUSTER_NAMELEN);\n        if (!source_node) {\n            task_error_msg = sdscatfmt(sdsempty(), \"Source node %.40s was not found\", task->source);\n            goto error;\n        }\n        char *ip = clusterNodeIp(source_node);\n        int port = server.tls_replication ? clusterNodeTlsPort(source_node) :\n                                            clusterNodeTcpPort(source_node);\n        task->rdb_channel_conn = connCreate(server.el, connTypeOfReplication());\n        if (connConnect(task->rdb_channel_conn, ip, port,\n                        server.bind_source_addr, asmRdbChannelSyncWithSource) == C_ERR)\n        {\n            serverLog(LL_WARNING, \"Unable to connect to the source node: %s\",\n                      connGetLastError(task->rdb_channel_conn));\n            goto error;\n        }\n        task->rdb_channel_state  = ASM_CONNECTING;\n        connSetPrivateData(task->rdb_channel_conn, task);\n        serverLog(LL_NOTICE,\n            \"RDB channel connection to source node %.40s established, waiting for AUTH reply...\",\n            task->source);\n\n        /* Main channel waits for the new event */\n        connSetReadHandler(conn, NULL);\n        return;\n    }\n    return;\n\nno_response_error:\n    serverLog(LL_WARNING, \"Source node did not respond to command during SYNCSLOTS handshake\");\n    /* Fall through to regular error 
handling */\n\nerror:\n    asmTaskSetFailed(task, \"Main channel - Failed to sync with source node: %s\",\n                     task_error_msg ? task_error_msg : connGetLastError(conn));\n    sdsfree(task_error_msg);\n    return;\n\nwrite_error: /* Handle sendCommand() errors. */\n    serverLog(LL_WARNING, \"Failed to send command to source node: %s\", err);\n    sdsfree(err);\n    goto error;\n}\n\nint asmImportSendACK(asmTask *task) {\n    serverAssert(task->operation == ASM_IMPORT && task->state == ASM_WAIT_STREAM_EOF);\n    serverLog(LL_DEBUG, \"Destination node applied offset is %lld\", task->dest_offset);\n\n    char offset[64];\n    ull2string(offset, sizeof(offset), task->dest_offset);\n\n    char *err = sendCommand(task->main_channel_conn, \"CLUSTER\", \"SYNCSLOTS\", \"ACK\",\n                    asmTaskStateToString(task->state), offset, NULL);\n    if (err) {\n        asmTaskSetFailed(task, \"Main channel - Failed to send ACK: %s\", err);\n        sdsfree(err);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Called when the RDB channel begins sending the snapshot.\n * From this point on, the main channel also starts sending incremental streams. */\nvoid asmSlotSnapshotAndStreamStart(struct asmTask *task) {\n    if (task == NULL || task->state != ASM_WAIT_BGSAVE_START) return;\n\n    if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_RDB_CHANNEL, task->state))) {\n        shutdown(task->rdb_channel_client->conn->fd, SHUT_RDWR);\n        return;\n    }\n    task->main_channel_client->replstate = SLAVE_STATE_SEND_BULK_AND_STREAM;\n\n    task->state = ASM_SEND_BULK_AND_STREAM;\n    task->rdb_channel_state = ASM_RDBCHANNEL_TRANSFER;\n\n    /* From the source node's perspective, the destination node begins to accumulate\n     * the buffer while the RDB channel starts applying the slot snapshot data. 
*/\n    task->dest_state = ASM_ACCUMULATE_BUF;\n    task->dest_slots_snapshot_time = server.mstime;\n}\n\n/* Called when the RDB channel has succeeded in sending the snapshot. */\nvoid asmSlotSnapshotSucceed(struct asmTask *task) {\n    if (task == NULL || task->state != ASM_SEND_BULK_AND_STREAM) return;\n\n    /* The destination starts sending ACKs to keep the main channel alive after\n     * receiving the snapshot, so here we need to update the last interaction\n     * time to avoid false timeout. */\n    task->main_channel_client->lastinteraction = server.unixtime;\n\n    task->state = ASM_SEND_STREAM;\n    task->rdb_channel_state = ASM_COMPLETED;\n}\n\n/* Called when the RDB channel fails to send the snapshot. */\nvoid asmSlotSnapshotFailed(struct asmTask *task) {\n    if (task == NULL || task->state != ASM_SEND_BULK_AND_STREAM) return;\n\n    asmTaskSetFailed(task, \"RDB channel - Failed to send slots snapshot\");\n}\n\n/* CLUSTER SYNCSLOTS SNAPSHOT-EOF\n *\n * This command is sent by the source node to the destination node to indicate\n * that the slots snapshot has ended. */\nvoid clusterSyncSlotsSnapshotEOF(client *c) {\n    /* This client is RDB channel connection. */\n    asmTask *task = c->task;\n    if (!task || task->rdb_channel_state != ASM_RDBCHANNEL_TRANSFER ||\n        c->conn != task->rdb_channel_conn)\n    {\n        /* Unexpected SNAPSHOT-EOF command */\n        serverLog(LL_WARNING, \"Unexpected CLUSTER SYNCSLOTS SNAPSHOT-EOF command: \"\n                              \"rdb_channel_state=%s\",\n                              asmTaskStateToString(task ? 
task->rdb_channel_state : ASM_NONE));\n        freeClientAsync(c);\n        return;\n    }\n\n    /* RDB channel state: ASM_RDBCHANNEL_TRANSFER */\n    if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_RDB_CHANNEL, task->rdb_channel_state))) {\n        freeClientAsync(c); /* Simulate a failure */\n        return;\n    }\n\n    /* Clear the RDB channel connection */\n    task->rdb_channel_conn = NULL;\n    task->rdb_channel_state = ASM_COMPLETED;\n    serverLog(LL_NOTICE, \"RDB channel snapshot transfer completed for the import task.\");\n\n    /* Free the RDB channel connection. */\n    c->task = NULL;\n    c->flags &= ~CLIENT_MASTER;\n    freeClientAsync(c);\n\n    /* We will start streaming the buffer to the DB, but don't start it here\n     * since we are in the context of executing a command; otherwise, Redis\n     * would generate a big MULTI-EXEC including all the commands in the buffer.\n     * Just update the state here and do the streaming in beforeSleep(). */\n    task->state = ASM_READY_TO_STREAM;\n    connSetReadHandler(task->main_channel_conn, NULL);\n}\n\n/* CLUSTER SYNCSLOTS STREAM-EOF\n *\n * This command is sent by the source node to the destination node to indicate\n * that the slot sync stream has ended and the slots can be handed off. */\nvoid clusterSyncSlotsStreamEOF(client *c) {\n    asmTask *task = c->task;\n\n    if (!task || task->operation != ASM_IMPORT) {\n        serverLog(LL_WARNING, \"Unexpected CLUSTER SYNCSLOTS STREAM-EOF command\");\n        freeClientAsync(c);\n        return;\n    }\n\n    if (task->state == ASM_STREAMING_BUF) {\n        /* We are still streaming the buffer to the DB, so just mark the EOF as\n         * received; we can take over once streaming finishes. Since we may\n         * release the context in asmImportTakeover, taking over here would\n         * break the streaming context. 
*/\n        task->stream_eof_during_streaming = 1;\n        serverLog(LL_NOTICE, \"CLUSTER SYNCSLOTS STREAM-EOF received during streaming buffer\");\n        return;\n    }\n\n    if (task->state != ASM_WAIT_STREAM_EOF) {\n        serverLog(LL_WARNING, \"Unexpected CLUSTER SYNCSLOTS STREAM-EOF state: %s\",\n                               asmTaskStateToString(task->state));\n        freeClientAsync(c);\n        return;\n    }\n    serverLog(LL_NOTICE, \"CLUSTER SYNCSLOTS STREAM-EOF received when waiting for STREAM-EOF\");\n\n    /* STREAM-EOF received, the source is ready to hand off; take over now. */\n    asmImportTakeover(task);\n}\n\n/* Start the import task. */\nstatic void asmStartImportTask(asmTask *task) {\n    if (task->operation != ASM_IMPORT || task->state != ASM_NONE) return;\n    sds slots_str = slotRangeArrayToString(task->slots);\n\n    /* Sanity check: Clean up any keys that exist in slots not owned by this node.\n     * This handles cases where users previously migrated slots using the legacy\n     * method but left behind orphaned keys, or the cluster missed cleaning them\n     * up during previous operations; either could interfere with the ASM import\n     * process. */\n    asmTrimSlotsIfNotOwned(task->slots);\n\n    /* Check if there is any trim job in progress for the slot ranges.\n     * We can't start the import task since the trim job will modify the data. */\n    int trim_in_progress = asmIsAnyTrimJobOverlaps(task->slots);\n\n    /* Notify the cluster implementation to prepare for the import task. */\n    int impl_ret = clusterAsmOnEvent(task->id, ASM_EVENT_IMPORT_PREP, task->slots);\n\n    /* We do not start the import task if trim is disabled by a module. */\n    int disabled_by_module = server.cluster_module_trim_disablers > 0;\n\n    static int start_blocked_logged = 0;\n    /* We cannot start the import task while a pause action is in effect.\n     * Otherwise, we would break the promise that no writes are performed\n     * during the pause. 
*/\n    if (isPausedActions(PAUSE_ACTION_CLIENT_ALL) ||\n        isPausedActions(PAUSE_ACTION_CLIENT_WRITE) ||\n        trim_in_progress ||\n        impl_ret != C_OK ||\n        disabled_by_module)\n    {\n        const char *reason = disabled_by_module ? \"trim is disabled by module\" :\n                             impl_ret != C_OK ? \"cluster is not ready\" :\n                             trim_in_progress ? \"trim in progress for some of the slots\" :\n                                                \"server paused\";\n        if (start_blocked_logged == 0) {\n            serverLog(LL_WARNING, \"Cannot start import task %s for slots: %s due to %s\",\n                                  task->id, slots_str, reason);\n            start_blocked_logged = 1;\n        }\n        sdsfree(slots_str);\n        return;\n    }\n    start_blocked_logged = 0; /* Reset the log flag */\n\n    /* Detect whether the cluster topology has changed. We should cancel the task\n     * if we cannot schedule it, and update the source node if needed. */\n    sds err = NULL;\n    clusterNode *source = validateImportSlotRanges(task->slots, &err, task);\n    if (!source) {\n        asmTaskCancel(task, err);\n        sdsfree(slots_str);\n        sdsfree(err);\n        return;\n    }\n    /* I am now the owner of the slot range, so cancel the import task. */\n    if (source == getMyClusterNode()) {\n        asmTaskCancel(task, \"slots owned by myself now\");\n        sdsfree(slots_str);\n        return;\n    }\n    /* Change the source node if needed. 
*/\n    if (memcmp(task->source, clusterNodeGetName(source), CLUSTER_NAMELEN)) {\n        memcpy(task->source, clusterNodeGetName(source), CLUSTER_NAMELEN);\n        serverLog(LL_NOTICE, \"Import task %s source node changed: slots=%s, \"\n                             \"new_source=%.40s\", task->id, slots_str, clusterNodeGetName(source));\n    }\n    sdsfree(slots_str);\n\n    task->state = ASM_CONNECTING;\n    task->start_time = server.mstime;\n    asmNotifyStateChange(task, ASM_EVENT_IMPORT_STARTED);\n\n    task->main_channel_conn = connCreate(server.el, connTypeOfReplication());\n    char *ip = clusterNodeIp(source);\n    int port = server.tls_replication ? clusterNodeTlsPort(source) :\n                                        clusterNodeTcpPort(source);\n    if (connConnect(task->main_channel_conn, ip, port, server.bind_source_addr,\n                    asmSyncWithSource) == C_ERR)\n    {\n        asmTaskSetFailed(task, \"Main channel - Failed to connect to source node: %s\",\n                         connGetLastError(task->main_channel_conn));\n        return;\n    }\n    connSetPrivateData(task->main_channel_conn, task);\n}\n\nvoid clusterSyncSlotsCommand(client *c) {\n    /* Only internal clients are allowed to execute this command to avoid\n     * potential attack, since some state changes are not well protected,\n     * external clients may damage the slot migration state. */\n    if (!(c->flags & (CLIENT_INTERNAL | CLIENT_MASTER))) {\n        addReplyError(c, \"CLUSTER SYNCSLOTS subcommands are only allowed for internal clients\");\n        c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n        return;\n    }\n\n    /* On replica, only allow master client to execute CONF subcommand. */\n    if (!clusterNodeIsMaster(getMyClusterNode())) {\n        if (!(c->flags & CLIENT_MASTER)) {\n            /* Not master client, reject all subcommands and close the connection. 
*/\n            addReplyError(c, \"CLUSTER SYNCSLOTS subcommands are only allowed for master\");\n            c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n            return;\n        } else {\n            /* Only allow CONF subcommand on replica. */\n            if (strcasecmp(c->argv[2]->ptr, \"conf\")) return;\n        }\n    }\n\n    if (!strcasecmp(c->argv[2]->ptr, \"sync\") && c->argc >= 6) {\n        /* CLUSTER SYNCSLOTS SYNC <ID> <start-slot> <end-slot> [<start-slot> <end-slot>] */\n        if (c->argc % 2 == 1) {\n            addReplyErrorArity(c);\n            return;\n        }\n\n        slotRangeArray *slots = parseSlotRangesOrReply(c, c->argc, 4);\n        if (!slots) return;\n\n        /* Validate that the slot ranges are valid and that migration can be\n         * initiated for them. */\n        sds err = NULL;\n        clusterNode *source = validateImportSlotRanges(slots, &err, NULL);\n        if (!source) {\n            addReplyErrorSds(c, err);\n            slotRangeArrayFree(slots);\n            return;\n        }\n\n        /* Check if the source node is the same as the current node. */\n        if (source != getMyClusterNode()) {\n            addReplyError(c, \"This node is not the owner of the slots\");\n            slotRangeArrayFree(slots);\n            return;\n        }\n\n        /* Verify the destination node is known and is a master. 
*/\n        if (c->node_id) {\n            clusterNode *dest = clusterLookupNode(c->node_id, CLUSTER_NAMELEN);\n            if (dest == NULL || !clusterNodeIsMaster(dest)) {\n                addReplyErrorFormat(c, \"Destination node %.40s is not a master\", c->node_id);\n                slotRangeArrayFree(slots);\n                return;\n            }\n        }\n\n        /* Check if there is any trim job in progress for the slot ranges.\n         * We can't start the migrate task since the trim job will modify the data. */\n        if (asmIsAnyTrimJobOverlaps(slots)) {\n            addReplyError(c, \"Trim job in progress for the slots\");\n            slotRangeArrayFree(slots);\n            return;\n        }\n\n        sds task_id = c->argv[3]->ptr;\n        /* Notify the cluster implementation to prepare for the migrate task. */\n        if (clusterAsmOnEvent(task_id, ASM_EVENT_MIGRATE_PREP, slots) != C_OK ||\n            asmDebugIsFailPointActive(ASM_MIGRATE_MAIN_CHANNEL, ASM_NONE))\n        {\n            addReplyError(c, \"-NOTREADY Cluster is not ready to migrate slots\");\n            slotRangeArrayFree(slots);\n            return;\n        }\n\n        /* We do not start the migrate task if trim is disabled by module. */\n        int disabled_by_module = server.cluster_module_trim_disablers > 0;\n        if (disabled_by_module) {\n            addReplyError(c, \"Trim is disabled by module\");\n            slotRangeArrayFree(slots);\n            return;\n        }\n\n        asmTask *task = listLength(asmManager->tasks) == 0 ? 
NULL :\n                            listNodeValue(listFirst(asmManager->tasks));\n        if (task && !strcmp(task->id, task_id) &&\n            task->operation == ASM_MIGRATE && task->state == ASM_FAILED &&\n            slotRangeArrayIsEqual(slots, task->slots) &&\n            memcmp(task->dest, c->node_id, CLUSTER_NAMELEN) == 0)\n        {\n            /* Reuse the failed task */\n            asmTaskReset(task);\n            slotRangeArrayFree(task->slots); /* Will be set again later */\n            task->retry_count++;\n        } else if (task) {\n            if (task->state == ASM_FAILED) {\n                /* We can create a new migrate task only if the current one is\n                 * failed, cancel the failed task to create a new one. */\n                asmTaskCancel(task, \"new migration requested\");\n                task = NULL;\n            } else {\n                addReplyError(c, \"Another ASM task is already in progress\");\n                slotRangeArrayFree(slots);\n                return;\n            }\n        }\n\n        /* Create the migrate slots task and add it to the list,\n         * otherwise reuse the existing one */\n        if (task == NULL) {\n            task = asmTaskCreate(task_id);\n            task->start_time = server.mstime; /* Start immediately */\n            serverAssert(listLength(asmManager->tasks) == 0);\n            listAddNodeTail(asmManager->tasks, task);\n        }\n\n        task->slots = slots;\n        task->operation = ASM_MIGRATE;\n        memcpy(task->source, clusterNodeGetName(getMyClusterNode()), CLUSTER_NAMELEN);\n        if (c->node_id) memcpy(task->dest, c->node_id, CLUSTER_NAMELEN);\n\n        task->main_channel_client = c;\n        c->task = task;\n\n        /* We mark the main channel client as a replica, so this client is limited\n         * by the client output buffer settings for replicas. The replstate has\n         * no real significance, just to prevent it from going online. 
*/\n        c->flags |= (CLIENT_SLAVE | CLIENT_ASM_MIGRATING);\n        c->replstate = SLAVE_STATE_WAIT_RDB_CHANNEL;\n        if (server.repl_disable_tcp_nodelay)\n            connDisableTcpNoDelay(c->conn);  /* Non-critical if it fails. */\n        listAddNodeTail(server.slaves, c);\n        createReplicationBacklogIfNeeded();\n\n        /* Wait for RDB channel to be ready */\n        task->state = ASM_WAIT_RDBCHANNEL;\n\n        sds slots_str = slotRangeArrayToString(slots);\n        serverLog(LL_NOTICE, \"Migrate task %s created: src=%.40s, dest=%.40s, slots=%s\",\n                              task->id, task->source, task->dest, slots_str);\n        sdsfree(slots_str);\n\n        asmNotifyStateChange(task, ASM_EVENT_MIGRATE_STARTED);\n\n        /* Keep the client in the main thread to avoid data races between the\n         * connWrite call below and the client's event handler in IO threads. */\n        if (c->tid != IOTHREAD_MAIN_THREAD_ID) keepClientInMainThread(c);\n\n        /* addReply*() is not suitable for clients in SLAVE_STATE_WAIT_RDB_CHANNEL state. 
*/\n        if (connWrite(c->conn, \"+RDBCHANNELSYNCSLOTS\\r\\n\", 22) != 22)\n            freeClientAsync(c);\n    } else if (!strcasecmp(c->argv[2]->ptr, \"rdbchannel\") && c->argc == 4) {\n        /* CLUSTER SYNCSLOTS RDBCHANNEL <task-id> */\n        sds task_id = c->argv[3]->ptr;\n        if (sdslen(task_id) != CLUSTER_NAMELEN) {\n            addReplyError(c, \"Invalid task id\");\n            return;\n        }\n\n        if (listLength(asmManager->tasks) == 0) {\n            addReplyError(c, \"No slot migration task in progress\");\n            return;\n        }\n\n        asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n        if (task->operation != ASM_MIGRATE || task->state != ASM_WAIT_RDBCHANNEL ||\n            strcmp(task->id, task_id) != 0)\n        {\n            addReplyError(c, \"Another migration task is already in progress\");\n            return;\n        }\n\n        if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_MAIN_CHANNEL, task->state))) {\n            /* Close the main channel client before rdb channel client connects */\n            if (task->main_channel_client)\n                freeClient(task->main_channel_client);\n        }\n\n        /* The main channel client must be present when setting RDB channel client */\n        if (task->main_channel_client == NULL) {\n            /* Maybe the main channel connection is closed. 
*/\n            addReplyError(c, \"Main channel connection is not established\");\n            return;\n        }\n\n        /* Mark the client as a slave to generate slots snapshot */\n        c->flags |= (CLIENT_SLAVE | CLIENT_REPL_RDB_CHANNEL | CLIENT_REPL_RDBONLY | CLIENT_ASM_MIGRATING);\n        c->slave_capa |= SLAVE_CAPA_EOF;\n        c->slave_req |= (SLAVE_REQ_SLOTS_SNAPSHOT | SLAVE_REQ_RDB_CHANNEL);\n        c->replstate = SLAVE_STATE_WAIT_BGSAVE_START;\n        c->repldbfd = -1;\n        if (server.repl_disable_tcp_nodelay)\n            connDisableTcpNoDelay(c->conn); /* Non-critical if it fails. */\n        listAddNodeTail(server.slaves, c);\n\n        /* Wait for bgsave to start for slots sync */\n        task->state = ASM_WAIT_BGSAVE_START;\n        task->rdb_channel_state = ASM_WAIT_BGSAVE_START;\n        task->rdb_channel_client = c;\n        c->task = task;\n\n        /* Keep the client in the main thread to avoid data races between the\n         * connWrite call in startBgsaveForReplication and the client's event\n         * handler in IO threads. 
*/\n        if (c->tid != IOTHREAD_MAIN_THREAD_ID) keepClientInMainThread(c);\n\n        if (!hasActiveChildProcess()) {\n            startBgsaveForReplication(c->slave_capa, c->slave_req);\n        } else {\n            serverLog(LL_NOTICE, \"BGSAVE for slots snapshot sync delayed\");\n        }\n    } else if (!strcasecmp(c->argv[2]->ptr, \"snapshot-eof\") && c->argc == 3) {\n        /* CLUSTER SYNCSLOTS SNAPSHOT-EOF */\n        clusterSyncSlotsSnapshotEOF(c);\n    } else if (!strcasecmp(c->argv[2]->ptr, \"stream-eof\") && c->argc == 3) {\n        /* CLUSTER SYNCSLOTS STREAM-EOF */\n        clusterSyncSlotsStreamEOF(c);\n    } else if (!strcasecmp(c->argv[2]->ptr, \"ack\") && c->argc == 5) {\n        /* CLUSTER SYNCSLOTS ACK <state> <offset> */\n        long long offset;\n        int dest_state;\n\n        if (!strcasecmp(c->argv[3]->ptr, asmTaskStateToString(ASM_STREAMING_BUF))) {\n            dest_state = ASM_STREAMING_BUF;\n        } else if (!strcasecmp(c->argv[3]->ptr, asmTaskStateToString(ASM_WAIT_STREAM_EOF))) {\n            dest_state = ASM_WAIT_STREAM_EOF;\n        } else {\n            return; /* Not support now. */\n        }\n\n        if ((getLongLongFromObject(c->argv[4], &offset) != C_OK))\n            return;\n\n        if (c->task && c->task->operation == ASM_MIGRATE) {\n            /* Update the state and ACKed offset from destination. */\n            asmTask *task = c->task;\n            task->dest_state = dest_state;\n            if (task->dest_offset > (unsigned long long) offset) {\n                serverLog(LL_WARNING, \"CLUSTER SYNCSLOTS ACK received, dest state: %s, \"\n                                      \"but offset %lld is less than the current dest offset %lld\",\n                        asmTaskStateToString(dest_state), offset, task->dest_offset);\n                return;\n            }\n            task->dest_offset = offset;\n            /* Detailed ACK progress log (for debugging handoff/drain issues). 
*/\n            serverLog(LL_DEBUG, \"CLUSTER SYNCSLOTS ACK received, dest state: %s, \"\n                                \"updated dest offset to %lld, source offset: %lld\",\n                asmTaskStateToString(dest_state), task->dest_offset, task->source_offset);\n\n            /* Record the time when the destination finishes applying the accumulated buffer */\n            if (task->dest_state == ASM_WAIT_STREAM_EOF && task->dest_accum_applied_time == 0)\n                task->dest_accum_applied_time = server.mstime;\n\n            /* Pause write if needed */\n            if (task->state == ASM_SEND_BULK_AND_STREAM || task->state == ASM_SEND_STREAM) {\n                /* Pause writes on the main channel if the lag is less than the threshold. */\n                if (task->dest_offset + server.asm_handoff_max_lag_bytes >= task->source_offset) {\n                    if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_MAIN_CHANNEL, ASM_HANDOFF_PREP)))\n                        return; /* Do not enter handoff prep state for testing buffer drain timeout. */\n\n                    serverLog(LL_NOTICE, \"The applied offset lag %lld is less than the threshold %lld, \"\n                                         \"pausing writes for slot handoff\",\n                                         task->source_offset - task->dest_offset,\n                                         server.asm_handoff_max_lag_bytes);\n                    task->state = ASM_HANDOFF_PREP;\n                    asmLogTaskEvent(task, ASM_EVENT_HANDOFF_PREP);\n                    clusterAsmOnEvent(task->id, ASM_EVENT_HANDOFF_PREP, task->slots);\n                }\n            }\n        }\n    } else if (!strcasecmp(c->argv[2]->ptr, \"fail\") && c->argc == 4) {\n        /* CLUSTER SYNCSLOTS FAIL <err> */\n        return; /* This is a no-op, just to handle the command syntax. 
*/\n    } else if (!strcasecmp(c->argv[2]->ptr, \"conf\") && c->argc >= 5) {\n        /* CLUSTER SYNCSLOTS CONF <option> <value> [<option> <value>] */\n        for (int j = 3; j < c->argc; j += 2) {\n            if (j + 1 >= c->argc) {\n                addReplyErrorArity(c);\n                return;\n            }\n            /* Handle each option here */\n            if (!strcasecmp(c->argv[j]->ptr, \"node-id\")) {\n                /* node-id <node-id> */\n                sds node_id = c->argv[j + 1]->ptr;\n                int node_id_len = (int) sdslen(node_id);\n                if (node_id_len != CLUSTER_NAMELEN) {\n                    addReplyErrorFormat(c, \"Invalid node id length %d\", node_id_len);\n                    return;\n                }\n\n                /* Lookup the node in the cluster. */\n                clusterNode *node = clusterLookupNode(node_id, node_id_len);\n                if (node == NULL) {\n                    addReplyErrorFormat(c, \"Node %s not found in cluster\", node_id);\n                    return;\n                }\n\n                if (c->node_id) sdsfree(c->node_id);\n                c->node_id = sdsdup(node_id);\n            } else if (!strcasecmp(c->argv[j]->ptr, \"slot-info\")) {\n                /* slot-info slot:key_size:expire_size */\n                int count;\n                long long slot, key_size, expire_size;\n                sds slot_info = c->argv[j + 1]->ptr;\n                sds *parts = sdssplitlen(slot_info, sdslen(slot_info), \":\", 1, &count);\n\n                /* Validate the slot info format, parse slot, key_size, expire_size */\n                if (parts == NULL || count != 3 ||\n                    (string2ll(parts[0], sdslen(parts[0]), &slot) == 0 || slot < 0 || slot >= CLUSTER_SLOTS) ||\n                    (string2ll(parts[1], sdslen(parts[1]), &key_size) == 0 || key_size < 0) ||\n                    (string2ll(parts[2], sdslen(parts[2]), &expire_size) == 0 || expire_size < 0))\n              
  {\n                    addReplyErrorFormat(c, \"Invalid slot info: %s\", slot_info);\n                    sdsfreesplitres(parts, count);\n                    return;\n                }\n\n                /* We resize individual slot specific dictionaries. */\n                redisDb *db = c->db;\n                serverAssert(db->id == 0); /* Only support DB 0 for cluster mode. */\n                kvstoreDictExpand(db->keys, slot, key_size);\n                kvstoreDictExpand(db->expires, slot, expire_size);\n\n                sdsfreesplitres(parts, count);\n            } else if (!strcasecmp(c->argv[j]->ptr, \"asm-task\")) {\n                /* asm-task task_id:source_node:dest_node:operation:state:slot_ranges */\n                if (clusterNodeIsMaster(getMyClusterNode())) {\n                    addReplyError(c, \"CLUSTER SYNCSLOTS CONF ASM-TASK only allowed on replica\");\n                    return;\n                }\n                if (asmReplicaHandleMasterTask(c->argv[j + 1]->ptr) != C_OK) {\n                    addReplyErrorFormat(c, \"Failed to handle master task: %s\",\n                                        (char *)c->argv[j + 1]->ptr);\n                }\n            } else if (!strcasecmp(c->argv[j]->ptr, \"capa\")) {\n                /* Ignore unrecognized capabilities. This is for future extensions. */\n            } else {\n                addReplyErrorFormat(c, \"Unknown option %s\", (char *)c->argv[j]->ptr);\n            }\n        }\n        addReply(c, shared.ok);\n    } else {\n        addReplyErrorObject(c, shared.syntaxerr);\n    }\n}\n\n/* Save a key-value pair to stream I/O using either RESTORE or AOF format. 
*/\nstatic int slotSnapshotSaveKeyValuePair(rio *rdb, kvobj *o, int dbid) {\n    /* Get the expire time */\n    long long expiretime = kvobjGetExpire(o);\n\n    /* Set up an on-stack string object for the key */\n    robj key;\n    initStaticStringObject(key, kvobjGetKey(o));\n\n    /* For module objects, or non-string objects that are not too big,\n     * use the RESTORE command (RDB format) to migrate data. Generally\n     * the RDB binary format is more efficient, but it may block the\n     * destination if the object is too large, so fall back to the AOF\n     * format if necessary. */\n    if ((o->type == OBJ_MODULE) ||\n        (o->type != OBJ_STRING && getObjectLength(o) <= ASM_AOF_MIN_ITEMS_PER_KEY))\n    {\n        if (rioWriteBulkCount(rdb, '*', 5) == 0) return C_ERR;\n        if (rioWriteBulkString(rdb, \"RESTORE\", 7) == 0) return C_ERR;\n        if (rioWriteBulkObject(rdb, &key) == 0) return C_ERR;\n        if (rioWriteBulkLongLong(rdb, expiretime == -1 ? 0 : expiretime) == 0) return C_ERR;\n\n        /* Create the DUMP encoded representation. */\n        rio payload;\n        createDumpPayload(&payload, o, &key, dbid, 1);\n        sds buf = payload.io.buffer.ptr;\n        if (rioWriteBulkString(rdb, buf, sdslen(buf)) == 0) {\n            sdsfree(payload.io.buffer.ptr);\n            return C_ERR;\n        }\n        sdsfree(payload.io.buffer.ptr);\n\n        /* Write ABSTTL */\n        if (rioWriteBulkString(rdb, \"ABSTTL\", 6) == 0) return C_ERR;\n    } else {\n        /* Use AOF format to migrate data */\n        if (rewriteObject(rdb, &key, o, dbid, expiretime) == C_ERR) return C_ERR;\n    }\n\n    return C_OK;\n}\n\n/* Modules can use RM_ClusterPropagateForSlotMigration() during the\n * CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE event to propagate commands\n * that should be delivered just before the slot snapshot delivery starts. This\n * function triggers the event, collects the commands and writes them to the rio. 
*/\nstatic int propagateModuleCommands(asmTask *task, rio *rdb) {\n    RedisModuleClusterSlotMigrationInfo info = {\n            .version = REDISMODULE_CLUSTER_SLOT_MIGRATION_INFO_VERSION,\n            .task_id = task->id,\n            .slots = (RedisModuleSlotRangeArray *) task->slots\n    };\n    memcpy(info.source_node_id, task->source, CLUSTER_NAMELEN);\n    memcpy(info.destination_node_id, task->dest, CLUSTER_NAMELEN);\n\n    task->pre_snapshot_module_cmds = zcalloc(sizeof(*task->pre_snapshot_module_cmds));\n    moduleFireServerEvent(REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION,\n                          REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE,\n                          &info\n    );\n\n    int ret = C_OK;\n    /* Write the module commands to the rio */\n    for (int i = 0; i < task->pre_snapshot_module_cmds->numops; i++) {\n        redisOp *op = &task->pre_snapshot_module_cmds->ops[i];\n        if (rioWriteBulkCount(rdb, '*', op->argc) == 0) {\n            ret = C_ERR;\n            break;\n        }\n        for (int j = 0; j < op->argc; j++) {\n            if (rioWriteBulkObject(rdb, op->argv[j]) == 0) {\n                ret = C_ERR;\n                break;\n            }\n        }\n        /* The inner break only exits the argument loop; stop writing\n         * further commands once an error has occurred. */\n        if (ret == C_ERR) break;\n    }\n    redisOpArrayFree(task->pre_snapshot_module_cmds);\n    zfree(task->pre_snapshot_module_cmds);\n    task->pre_snapshot_module_cmds = NULL;\n    return ret;\n}\n\n/* Save the slot ranges snapshot to the file. It generates the DUMP encoded\n * representation of each key in the slot ranges and writes it to the file.\n *\n * Returns C_OK on success, or C_ERR on error. 
*/\nint slotSnapshotSaveRio(int req, rio *rdb, int *error) {\n    serverAssert(req & SLAVE_REQ_SLOTS_SNAPSHOT);\n\n    dictEntry *de;\n    kvstoreDictIterator kvs_di;\n\n    if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_RDB_CHANNEL, ASM_SEND_BULK_AND_STREAM)))\n        rioAbort(rdb); /* Simulate a failure */\n\n    /* Disable RDB compression for slots snapshot since compression is too\n     * expensive both in source and destination. */\n    server.rdb_compression = 0;\n\n    /* Only support a single migrate task */\n    serverAssert(listLength(asmManager->tasks) == 1);\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    serverAssert(task->operation == ASM_MIGRATE);\n\n    if (propagateModuleCommands(task, rdb) == C_ERR) goto werr;\n\n    /* Dump functions and send them to the destination side. */\n    rio payload;\n    createFunctionDumpPayload(&payload);\n    sds functions = payload.io.buffer.ptr;\n    if (rioWriteBulkCount(rdb, '*', 4) == 0 ||\n        rioWriteBulkString(rdb, \"FUNCTION\", 8) == 0 ||\n        rioWriteBulkString(rdb, \"RESTORE\", 7) == 0 ||\n        rioWriteBulkString(rdb, functions, sdslen(functions)) == 0)\n    {\n        sdsfree(functions); /* Avoid leaking the payload buffer on write error. */\n        goto werr;\n    }\n    sdsfree(functions);\n    /* Add the REPLACE option to the RESTORE command, to avoid error\n     * when migrating to a node with existing libraries. */\n    if (rioWriteBulkString(rdb, \"REPLACE\", 7) == 0) goto werr;\n\n    for (int i = 0; i < server.dbnum; i++) {\n        char selectcmd[] = \"*2\\r\\n$6\\r\\nSELECT\\r\\n\";\n        redisDb *db = server.db + i;\n        if (kvstoreSize(db->keys) == 0) continue;\n\n        /* SELECT the new DB */\n        if (rioWrite(rdb,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;\n        if (rioWriteBulkLongLong(rdb, i) == 0) goto werr;\n\n        /* Iterate all slot ranges, and generate the DUMP encoded\n         * representation of each key in the DB. 
*/\n        for (int j = 0; j < task->slots->num_ranges; j++) {\n            slotRange *sr = &task->slots->ranges[j];\n            /* Iterate all keys in the slot range */\n            for (int k = sr->start; k <= sr->end; k++) {\n                int send_slot_info = 0;\n\n                /* Iterate the current DB's kvstore, not server.db (DB 0). */\n                kvstoreInitDictIterator(&kvs_di, db->keys, k);\n                while ((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n                    /* Send slot info before the first key in the slot */\n                    if (!send_slot_info) {\n                        /* Format slot info */\n                        char buf[128];\n                        int len = snprintf(buf, sizeof(buf), \"%d:%lu:%lu\",\n                                    k, kvstoreDictSize(db->keys, k),\n                                    kvstoreDictSize(db->expires, k));\n                        serverAssert(len > 0 && len < (int)sizeof(buf));\n\n                        /* Send slot info */\n                        if (rioWriteBulkCount(rdb, '*', 5) == 0) goto werr2;\n                        if (rioWriteBulkString(rdb, \"CLUSTER\", 7) == 0) goto werr2;\n                        if (rioWriteBulkString(rdb, \"SYNCSLOTS\", 9) == 0) goto werr2;\n                        if (rioWriteBulkString(rdb, \"CONF\", 4) == 0) goto werr2;\n                        if (rioWriteBulkString(rdb, \"SLOT-INFO\", 9) == 0) goto werr2;\n                        if (rioWriteBulkString(rdb, buf, len) == 0) goto werr2;\n                        send_slot_info = 1;\n                    }\n\n                    /* Save a key-value pair */\n                    kvobj *o = dictGetKV(de);\n                    if (slotSnapshotSaveKeyValuePair(rdb, o, db->id) == C_ERR) goto werr2;\n\n                    /* Delay return if required (for testing) */\n                    if (unlikely(server.rdb_key_save_delay)) {\n                        /* Send buffer to the destination ASAP. 
*/\n                        if (rioFlush(rdb) == 0) goto werr2;\n                        debugDelay(server.rdb_key_save_delay);\n                    }\n                }\n                kvstoreResetDictIterator(&kvs_di);\n            }\n        }\n    }\n\n    /* Write the end of the snapshot file command */\n    if (rioWriteBulkCount(rdb, '*', 3) == 0) goto werr;\n    if (rioWriteBulkString(rdb, \"CLUSTER\", 7) == 0) goto werr;\n    if (rioWriteBulkString(rdb, \"SYNCSLOTS\", 9) == 0) goto werr;\n    if (rioWriteBulkString(rdb, \"SNAPSHOT-EOF\", 12) == 0) goto werr;\n    return C_OK;\n\nwerr2:\n    kvstoreResetDictIterator(&kvs_di);\nwerr:\n    if (error) *error = errno;\n    return C_ERR;\n}\n\n/* Read error handler for sync buffer */\nstatic void asmReadSyncBufferErrorHandler(connection *conn) {\n    if (listLength(asmManager->tasks) == 0) return;\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->state != ASM_ACCUMULATE_BUF && task->state != ASM_STREAMING_BUF) return;\n\n    if (task->state == ASM_STREAMING_BUF) {\n        freeClient(connGetPrivateData(conn));\n    } else {\n        asmTaskSetFailed(task, \"Main channel - Read error: %s\", connGetLastError(conn));\n    }\n}\n\n/* Read data from connection into sync buffer. */\nstatic void asmSyncBufferReadFromConn(connection *conn) {\n    /* The task may be canceled (move to finished list) or failed during streaming buffer. 
*/\n    if (listLength(asmManager->tasks) == 0) return;\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->state != ASM_ACCUMULATE_BUF && task->state != ASM_STREAMING_BUF) return;\n\n    /* ASM_ACCUMULATE_BUF and ASM_STREAMING_BUF fail points are handled here */\n    if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_MAIN_CHANNEL, task->state)))\n        shutdown(conn->fd, SHUT_RDWR);\n\n    replDataBuf *buf = &task->sync_buffer;\n    if (task->state == ASM_STREAMING_BUF) {\n        /* While streaming accumulated buffers, we continue reading from the\n         * source to prevent accumulation on the source side as much as possible.\n         * However, we aim to drain the buffer eventually. To ensure we consume\n         * more than we read, we'll read at most one block after two blocks of\n         * buffers are consumed. */\n        if (listLength(buf->blocks) + 1 >= buf->last_num_blocks)\n            return;\n        buf->last_num_blocks = listLength(buf->blocks);\n    }\n\n    replDataBufReadFromConn(conn, buf, asmReadSyncBufferErrorHandler);\n}\n\nstatic void asmSyncBufferStreamYieldCallback(void *ctx) {\n    replDataBufToDbCtx *context = ctx;\n    asmTask *task = context->privdata;\n    client *c = context->client;\n\n    char offset[64];\n    ull2string(offset, sizeof(offset), context->applied_offset);\n\n    char *err = sendCommand(c->conn, \"CLUSTER\", \"SYNCSLOTS\", \"ACK\",\n                    asmTaskStateToString(task->state), offset, NULL);\n    if (err) {\n        serverLog(LL_WARNING, \"Error sending CLUSTER SYNCSLOTS ACK: %s\", err);\n        sdsfree(err);\n        freeClient(c);\n    }\n    serverLog(LL_DEBUG, \"Yielding sending ACK during streaming buffer, applied offset: %zu\",\n                         context->applied_offset);\n}\n\nstatic int asmSyncBufferStreamShouldContinue(void *ctx) {\n    replDataBufToDbCtx *context = ctx;\n\n    /* If the task has failed or been canceled, we should stop streaming immediately. 
*/\n    asmTask *task = context->privdata;\n    if (task->state == ASM_FAILED || task->state == ASM_CANCELED) return 0;\n\n    /* Check the client-close flag only if the task has not failed or been canceled,\n     * otherwise the client may have already been freed. */\n    if (context->client->flags & CLIENT_CLOSE_ASAP) return 0;\n\n    return 1;\n}\n\n/* Stream the sync buffer to the database. */\nvoid asmSyncBufferStreamToDb(asmTask *task) {\n    task->state = ASM_STREAMING_BUF;\n    serverLog(LL_NOTICE, \"Starting to stream accumulated buffer for the import task (%zu bytes)\",\n                         task->sync_buffer.used);\n\n    /* The buffered stream from the main channel connection into\n     * the database is processed by a fake client. */\n    client *c = createClient(task->main_channel_conn);\n    c->flags |= (CLIENT_MASTER | CLIENT_INTERNAL | CLIENT_ASM_IMPORTING);\n    c->querybuf = sdsempty();\n    c->authenticated = 1;\n    c->user = NULL;\n    c->task = task;\n\n    /* Record the current buffer block count. We'll use it to verify we\n     * consume faster than we read from the source side. */\n    task->sync_buffer.last_num_blocks = listLength(task->sync_buffer.blocks);\n\n    /* Continue accumulating during streaming to prevent accumulation on source side. */\n    connSetReadHandler(c->conn, asmSyncBufferReadFromConn);\n\n    replDataBufToDbCtx ctx = {\n        .privdata = task,\n        .client = c,\n        .applied_offset = 0,\n        .should_continue = asmSyncBufferStreamShouldContinue,\n        .yield_callback = asmSyncBufferStreamYieldCallback,\n    };\n\n    /* Start streaming the buffer to the DB. This task may fail due to network\n     * errors or cancellations. We never release the task immediately; instead,\n     * it may be moved to the finished list. The actual free happens in serverCron,\n     * which ensures there is no use-after-free issue. 
*/\n    int ret = replDataBufStreamToDb(&task->sync_buffer, &ctx);\n\n    if (ret == C_OK) {\n        if (task->stream_eof_during_streaming) {\n            /* STREAM-EOF received during streaming, we can take over now. */\n            asmImportTakeover(task);\n            return;\n        }\n\n        /* Update the dest offset according to applied bytes. */\n        task->dest_offset = ctx.applied_offset;\n        /* Wait for STREAM-EOF from the source node. */\n        task->state = ASM_WAIT_STREAM_EOF;\n        connSetReadHandler(task->main_channel_conn, readQueryFromClient);\n        serverLog(LL_NOTICE, \"Successfully streamed accumulated buffer for the import task, applied offset: %llu\",\n                             task->dest_offset);\n\n        if (unlikely(asmDebugIsFailPointActive(ASM_IMPORT_MAIN_CHANNEL, task->state)))\n            shutdown(task->main_channel_conn->fd, SHUT_RDWR); /* Simulate a failure */\n\n        /* ACK offset after streaming buffer is done. */\n        asmImportSendACK(task);\n    } else {\n        /* If the task is already canceled or failed, we don't need to do anything here. */\n        if (task->state == ASM_FAILED || task->state == ASM_CANCELED) return;\n\n        asmTaskSetFailed(task, \"Main channel - Failed to stream into the DB\");\n    }\n}\n\nvoid asmImportIncrAppliedBytes(struct asmTask *task, size_t bytes) {\n    if (!task || task->state != ASM_WAIT_STREAM_EOF) return;\n    task->dest_offset += bytes;\n}\n\n/* Send STREAM-EOF if the sync buffer stream is drained. */\nvoid asmSendStreamEofIfDrained(asmTask *task) {\n    client *c = task->main_channel_client;\n\n    /* The command streams for slot ranges have been drained. 
*/\n    if (!clientHasPendingReplies(c)) {\n        serverLog(LL_NOTICE, \"Slot migration command stream drained, sending STREAM-EOF to the destination\");\n\n        if (unlikely(asmDebugIsFailPointActive(ASM_MIGRATE_MAIN_CHANNEL, task->state)))\n            shutdown(c->conn->fd, SHUT_RDWR);\n\n        /* Send STREAM-EOF to indicate the end of the stream. */\n        char *err = sendCommand(c->conn, \"CLUSTER\", \"SYNCSLOTS\", \"STREAM-EOF\", NULL);\n        if (err) {\n            asmTaskSetFailed(task, \"Main channel - Failed to send STREAM-EOF: %s\", err);\n            sdsfree(err);\n            return;\n        }\n\n        /* Even though the main channel client is no longer needed, we\n         * can't close it directly because the destination may still be\n         * sending ACKs over this connection. Instead, we leave it to the\n         * destination to close it. We just clear the task and client\n         * references */\n        task->main_channel_client->task = NULL;\n        task->main_channel_client = NULL;\n\n        /* There may be a delay to handle the disconnection of RDB channel,\n         * so we clear the task and client references here. 
*/\n        if (task->rdb_channel_client != NULL) {\n            task->rdb_channel_state = ASM_COMPLETED;\n            task->rdb_channel_client->task = NULL;\n            freeClientAsync(task->rdb_channel_client);\n            task->rdb_channel_client = NULL;\n        }\n\n        task->state = ASM_STREAM_EOF;\n    }\n}\n\nvoid asmBeforeSleep(void) {\n    asmTrimJobProcessPending();\n\n    if (listLength(asmManager->tasks) == 0) return;\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n\n    if (task->operation == ASM_IMPORT) {\n        if (task->state == ASM_NONE)\n            asmStartImportTask(task);\n        else if (task->state == ASM_READY_TO_STREAM)\n            asmSyncBufferStreamToDb(task);\n    }\n\n    if (task->operation == ASM_MIGRATE) {\n        if (task->cross_slot_during_propagating) {\n            asmTaskCancel(task, \"propagating cross slot command\");\n            return;\n        }\n\n        /* Send STREAM-EOF if the destination drained the command stream. 
*/\n        if (task->state == ASM_HANDOFF)\n            asmSendStreamEofIfDrained(task);\n    }\n}\n\nvoid asmCron(void) {\n    static unsigned long long asm_cron_runs = 0;\n    asm_cron_runs++;\n\n    if (listLength(asmManager->tasks) == 0) return;\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n\n    if (task->operation == ASM_IMPORT) {\n        if (task->state == ASM_FAILED) {\n            /* Retry every 1 second */\n            if (asm_cron_runs % 10 == 0) {\n                asmTaskReset(task);\n                task->retry_count++;\n                serverAssert(task->state == ASM_NONE);\n                asmStartImportTask(task);\n            }\n        } else if (task->state == ASM_WAIT_STREAM_EOF) {\n            if (asmImportSendACK(task) == C_ERR) return;\n\n            /* Check if the main channel is timed out */\n            client *c = connGetPrivateData(task->main_channel_conn);\n            serverAssert(c->task == task);\n            if (server.unixtime - c->lastinteraction > server.repl_timeout)\n                asmTaskSetFailed(task, \"Main channel - Connection timeout\");\n        } else if (task->state == ASM_ACCUMULATE_BUF &&\n                   task->rdb_channel_state == ASM_RDBCHANNEL_TRANSFER)\n        {\n            /* Check if the RDB channel is timed out */\n            client *c = connGetPrivateData(task->rdb_channel_conn);\n            serverAssert(c->task == task);\n            if (server.unixtime - c->lastinteraction > server.repl_timeout)\n                asmTaskSetFailed(task, \"RDB channel - Connection timeout\");\n        } else if (task->state == ASM_SEND_SYNCSLOTS) {\n            /* Rare case: the source node replied to SYNCSLOTS with -NOTREADY\n             * because it wasn't ready to start a migration. We'll retry\n             * SYNCSLOTS every second instead of failing the attempt which could\n             * trigger unnecessary cleanup in the cluster implementation. 
*/\n            if (asm_cron_runs % 10 == 0)\n                asmSyncWithSource(task->main_channel_conn);\n        }\n    } else if (task->operation == ASM_MIGRATE) {\n        if (task->state == ASM_SEND_STREAM) {\n            /* Currently, we only need to check the main channel timeout when sending streams.\n             * For RDB channel connections, the timeout is handled by the socket itself\n             * during writes in slotSnapshotSaveRio. */\n            if (server.unixtime - task->main_channel_client->lastinteraction > server.repl_timeout)\n                asmTaskSetFailed(task, \"Main channel - Connection timeout\");\n\n            /* After the destination applies the accumulated buffer, the source continues\n             * sending commands for migrating slots. The destination keeps applying them,\n             * but the gap remains above the acceptable limit, which may cause endless\n             * synchronization. A timeout check is required to handle this case.\n             *\n             * The timeout is calculated as the maximum of two values:\n             * - A configurable timeout (cluster-slot-migration-sync-buffer-drain-timeout) to\n             *   avoid false positives.\n             * - A dynamic timeout based on the time that the destination took to apply the\n             *   slot snapshot and the accumulated buffer during slot snapshot delivery.\n             *   The destination should be able to drain the remaining sync buffer in less\n             *   time than this. We multiply it by 2 to be more conservative. 
*/\n            if (task->dest_state == ASM_WAIT_STREAM_EOF && task->dest_accum_applied_time &&\n                server.mstime - task->dest_accum_applied_time >\n                    max(server.asm_sync_buffer_drain_timeout,\n                        (task->dest_accum_applied_time - task->dest_slots_snapshot_time) * 2))\n            {\n                asmTaskSetFailed(task, \"Sync buffer drain timeout\");\n            }\n        } else if (task->state == ASM_HANDOFF || task->state == ASM_STREAM_EOF) {\n            /* In these states, writes are still paused while waiting for the \n             * destination to broadcast the slot ownership change. If the \n             * destination fails or becomes unreachable, the source could remain \n             * paused indefinitely, so we enforce a timeout and fail the task.\n             * \n             * NOTE: There is a tricky case where the destination node may \n             * advertise ownership of the slot after the source node resumes \n             * writes, causing a temporary configuration conflict. However, the\n             * configuration will eventually converge. In most cases, the\n             * destination node becomes the winner, since it bumps its config \n             * epoch before taking over slot ownership. 
During this window,\n             writes accepted by the source will not be replicated to the\n             destination and those writes will be lost. */\n            if (server.mstime - task->paused_time >= server.asm_write_pause_timeout) {\n                asmTaskSetFailed(task, \"Write pause timeout during slot handoff: destination did not take ownership within %lld ms.\",\n                                 server.asm_write_pause_timeout);\n                return;\n            }\n        }\n    }\n\n    /* Trim the archived tasks list if it grows too large */\n    while (listLength(asmManager->archived_tasks) > (unsigned long)server.asm_max_archived_tasks) {\n        asmTask *oldest = listNodeValue(listLast(asmManager->archived_tasks));\n        asmTaskFree(oldest);\n        listDelNode(asmManager->archived_tasks, listLast(asmManager->archived_tasks));\n    }\n}\n\n/* Cancel a specific task if ID is provided, otherwise cancel all tasks. */\nint clusterAsmCancel(const char *task_id, const char *reason) {\n    if (asmManager == NULL) return 0;\n\n    if (task_id) {\n        asmTask *task = asmLookupTaskById(task_id);\n        if (!task) return 0; /* Not found */\n\n        asmTaskCancel(task, reason);\n        return 1;\n    } else {\n        int num_cancelled = 0;\n        listIter li;\n        listNode *ln;\n\n        listRewind(asmManager->tasks, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            asmTask *task = listNodeValue(ln);\n            asmTaskCancel(task, reason);\n            num_cancelled++;\n        }\n        return num_cancelled;\n    }\n}\n\n/* Cancel all tasks that overlap with the given slot ranges.\n * If slots is NULL, cancel all tasks. 
*/\nint clusterAsmCancelBySlotRangeArray(struct slotRangeArray *slots, const char *reason) {\n    if (asmManager == NULL) return 0;\n\n    int num_cancelled = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (!slots || slotRangeArraysOverlap(task->slots, slots)) {\n            asmTaskCancel(task, reason);\n            num_cancelled++;\n        }\n    }\n    return num_cancelled;\n}\n\n/* Cancel the task that overlaps with the given slot. */\nint clusterAsmCancelBySlot(int slot, const char *reason) {\n    slotRange req = {slot, slot};\n    if (asmManager == NULL) return 0;\n\n    /* Cancel it if found. */\n    asmTask *task = lookupAsmTaskBySlotRange(&req);\n    if (task) asmTaskCancel(task, reason);\n\n    return task ? 1 : 0;\n}\n\n/* Cancel all tasks that involve the given node. */\nint clusterAsmCancelByNode(void *node, const char *reason) {\n    if (asmManager == NULL || node == NULL) return 0;\n\n    /* If the node to be deleted is myself, cancel all tasks. */\n    clusterNode *n = node;\n    if (n == getMyClusterNode()) return clusterAsmCancel(NULL, reason);\n\n    int num_cancelled = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        /* Cancel the task if either the source or the dest node is the\n         * one to be deleted. */\n        if (!memcmp(task->dest, clusterNodeGetName(n), CLUSTER_NAMELEN) ||\n            !memcmp(task->source, clusterNodeGetName(n), CLUSTER_NAMELEN))\n        {\n            asmTaskCancel(task, reason);\n            num_cancelled++;\n        }\n    }\n    return num_cancelled;\n}\n\n/* Check if the slot is in an active ASM task. 
*/\nint isSlotInAsmTask(int slot) {\n    slotRange req = {slot, slot};\n    if (!asmManager) return 0;\n\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->tasks, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        asmTask *task = listNodeValue(ln);\n        if (slotRangeArrayOverlaps(task->slots, &req))\n            return 1;\n    }\n    return 0;\n}\n\n/* Check if the slot is in a pending trim job. It may happen if we can't trim\n * the slots immediately due to a write pause or when active trim is in progress. */\nint isSlotInTrimJob(int slot) {\n    slotRange req = {slot, slot};\n\n    if (!asmManager || !asmIsTrimInProgress()) return 0;\n\n    /* Check if the slot is in any pending trim job. */\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->pending_trim_jobs, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        slotRangeArray *slots = listNodeValue(ln);\n        if (slotRangeArrayOverlaps(slots, &req))\n            return 1;\n    }\n\n    /* Check if the slot is in any active trim job. */\n    listRewind(asmManager->active_trim_jobs, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        activeTrimJob *job = listNodeValue(ln);\n        if (slotRangeArrayOverlaps(job->slots, &req))\n            return 1;\n    }\n    return 0;\n}\n\nint clusterAsmHandoff(const char *task_id, sds *err) {\n    serverAssert(task_id);\n\n    asmTask *task = asmLookupTaskById(task_id);\n    if (!task || task->state != ASM_HANDOFF_PREP) {\n        *err = sdscatprintf(sdsempty(), \"No suitable ASM task found for id: %s, task_state: %s\",\n                            task_id, task ? asmTaskStateToString(task->state) : \"null\");\n        return C_ERR;\n    }\n\n    task->state = ASM_HANDOFF;\n    task->paused_time = server.mstime;\n\n    return C_OK;\n}\n\n/* Notify Redis that the config is updated for the task. 
*/\nint asmNotifyConfigUpdated(asmTask *task, sds *err) {\n    int event = -1;\n\n    if (task->operation == ASM_IMPORT && task->state == ASM_TAKEOVER) {\n        event = ASM_EVENT_IMPORT_COMPLETED;\n    } else if (task->operation == ASM_MIGRATE && task->state == ASM_STREAM_EOF) {\n        event = ASM_EVENT_MIGRATE_COMPLETED;\n    } else {\n        *err = sdscatprintf(sdsempty(),\n                            \"ASM task is not in the correct state for config update: %s\",\n                            asmTaskStateToString(task->state));\n        asmTaskCancel(task, \"slots configuration updated\");\n        return C_ERR;\n    }\n\n    /* Reset per-slot statistics for the migrated/imported ranges.\n     * Note: cluster_legacy.c also cleans up, so this may run twice, but it is\n     * required if an alternative cluster implementation is in use. */\n    for (int i = 0; i < task->slots->num_ranges; i++) {\n        slotRange *sr = &task->slots->ranges[i];\n        for (int j = sr->start; j <= sr->end; j++)\n            clusterSlotStatReset(j);\n    }\n\n    /* Clear error message if successful. */\n    sdsfree(task->error);\n    task->error = sdsempty();\n    task->state = ASM_COMPLETED;\n\n    asmNotifyStateChange(task, event);\n    asmTaskFinalize(task);\n\n    /* Trim the slots after the migrate task is completed. */\n    if (event == ASM_EVENT_MIGRATE_COMPLETED)\n        asmTrimJobSchedule(task->slots);\n\n    return C_OK;\n}\n\n/* Import/Migrate task is done, config is updated. 
*/\nint clusterAsmDone(const char *task_id, sds *err) {\n    serverAssert(task_id);\n\n    asmTask *task = asmLookupTaskById(task_id);\n    if (!task) {\n        *err = sdscatprintf(sdsempty(), \"No ASM task found for id: %s\", task_id);\n        return C_ERR;\n    }\n    return asmNotifyConfigUpdated(task, err);\n}\n\nint clusterAsmProcess(const char *task_id, int event, void *arg, char **err) {\n    int ret, num_cancelled;\n    sds errsds = NULL;\n    static char buf[256];\n\n    if (err) *err = NULL;\n\n    switch (event) {\n        case ASM_EVENT_IMPORT_START: {\n            /* Validate the slot ranges. */\n            slotRangeArray *slots = slotRangeArrayDup(arg);\n            if (slotRangeArrayNormalizeAndValidate(slots, &errsds) != C_OK) {\n                slotRangeArrayFree(slots);\n                ret = C_ERR;\n                break;\n            }\n            ret = asmCreateImportTask(task_id, slots, &errsds) ? C_OK : C_ERR;\n            break;\n        }\n        case ASM_EVENT_CANCEL: {\n            num_cancelled = clusterAsmCancel(task_id, \"user request\");\n            if (arg) *((int *)arg) = num_cancelled;\n            ret = C_OK;\n            break;\n        }\n        case ASM_EVENT_HANDOFF: {\n            ret = clusterAsmHandoff(task_id, &errsds);\n            break;\n        }\n        case ASM_EVENT_DONE: {\n            ret = clusterAsmDone(task_id, &errsds);\n            break;\n        }\n        default: {\n            ret = C_ERR;\n            errsds = sdscatprintf(sdsempty(), \"Unknown operation: %d\", event);\n            break;\n        }\n    }\n\n    if (ret != C_OK && errsds && err) {\n        snprintf(buf, sizeof(buf), \"%s\", errsds);\n        *err = buf;\n    }\n    sdsfree(errsds);\n\n    return ret;\n}\n\n/* Propagate TRIMSLOTS command to AOF and replicas. 
*/\nstatic void propagateTrimSlots(slotRangeArray *slots) {\n    int argc = slots->num_ranges * 2 + 3;\n    robj **argv = zmalloc(sizeof(robj*) * argc);\n    argv[0] = createStringObject(\"TRIMSLOTS\", 9);\n    argv[1] = createStringObject(\"RANGES\", 6);\n    argv[2] = createStringObjectFromLongLong(slots->num_ranges);\n    for (int i = 0; i < slots->num_ranges; i++) {\n        argv[i*2+3] = createStringObjectFromLongLong(slots->ranges[i].start);\n        argv[i*2+4] = createStringObjectFromLongLong(slots->ranges[i].end);\n    }\n\n    enterExecutionUnit(1, 0);\n\n    int prev_replication_allowed = server.replication_allowed;\n    server.replication_allowed = 1;\n    alsoPropagate(-1, argv, argc, PROPAGATE_AOF | PROPAGATE_REPL);\n    server.replication_allowed = prev_replication_allowed;\n\n    exitExecutionUnit();\n    postExecutionUnitOperations();\n\n    for (int i = 0; i < argc; i++)\n        decrRefCount(argv[i]);\n    zfree(argv);\n}\n\n/* If this node is a replica and there is an active trim or a pending trim\n * job (due to a write pause), we cannot process commands from the master for\n * the slots that are waiting to be trimmed. Otherwise, the trim cycle could\n * mistakenly delete newly added keys. In this case, the master will be blocked\n * until the trim job finishes. This is supposed to be a rare event, as it\n * requires migrating slots away and importing them back before the trim job\n * is done. 
*/\nvoid asmUnblockMasterAfterTrim(void) {\n    if (server.master &&\n        server.master->flags & CLIENT_BLOCKED &&\n        server.master->bstate.btype == BLOCKED_POSTPONE_TRIM)\n    {\n        unblockClient(server.master, 1);\n        serverLog(LL_NOTICE, \"Unblocking master client after active trim is completed\");\n    }\n}\n\n/* Background Trim: Delete migrated keys asynchronously in BIO thread.\n *\n * It works by moving entire slot data structures (dictionaries) to temporary\n * kvstores, then handing them off to BIO thread for deletion.\n *\n * @param trim_ctx Context for slot ranges and histogram tracking  \n * @param migration_cleanup True if this is post-migration cleanup (fires module events)\n */\nvoid asmTriggerBackgroundTrim(asmTrimCtx *trim_ctx, int migration_cleanup) {\n    slotRangeArray *slots = trim_ctx->slots;\n    RedisModuleClusterSlotMigrationTrimInfoV1 fsi = {\n            REDISMODULE_CLUSTER_SLOT_MIGRATION_TRIMINFO_VERSION,\n            (RedisModuleSlotRangeArray *) slots\n    };\n\n    /* Fire the trim event to modules only if this is a migration cleanup. */\n    if (migration_cleanup)\n        moduleFireServerEvent(REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM,\n                REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_BACKGROUND,\n                &fsi);\n\n    signalFlushedDb(0, 1, slots);\n\n    /* Create temporary kvstores to hold the slot data we're about to move.\n     * These will be deleted in the BIO thread. 
*/\n    kvstore *keys = kvstoreCreate(&kvstoreBaseType, &dbDictType,\n                                  CLUSTER_SLOT_MASK_BITS,\n                                  KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n    kvstore *expires = kvstoreCreate(&kvstoreBaseType, &dbExpiresDictType,\n                                     CLUSTER_SLOT_MASK_BITS,\n                                     KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n    estore *subexpires = estoreCreate(&subexpiresBucketsType, CLUSTER_SLOT_MASK_BITS);\n    dict *stream_idmp_keys = dictCreate(&objectKeyNoValueDictType);\n\n    size_t total_keys = 0;\n\n    /* Move slot dictionaries from the main DB to the temp kvstores (O(1) per slot). */\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int slot = slots->ranges[i].start; slot <= slots->ranges[i].end; slot++) {\n            total_keys += kvstoreDictSize(server.db[0].keys, slot);\n            kvstoreMoveDict(server.db[0].keys, keys, slot);\n            kvstoreMoveDict(server.db[0].expires, expires, slot);\n            estoreMoveEbuckets(server.db[0].subexpires, subexpires, slot);\n            streamMoveIdmpKeys(server.db[0].stream_idmp_keys, stream_idmp_keys, slot);\n        }\n    }\n\n    emptyDbDataAsync(keys, expires, subexpires, stream_idmp_keys, trim_ctx);\n\n    sds str = slotRangeArrayToString(slots);\n    serverLog(LL_NOTICE, \"Background trim started for slots: %s, to trim %zu keys.\", str, total_keys);\n    sdsfree(str);\n\n    /* Unblock the master if it is blocked. This can only happen in a very\n     * unlikely case: the trim job is still in the pending list due to a write\n     * pause and the master sends commands for the slots that are waiting to be\n     * trimmed. We keep this call here to be defensive, as it is harmless. 
*/\n    asmUnblockMasterAfterTrim();\n}\n\n/* Trimming of slots can be triggered in several cases:\n *  - After a successful ASM migrate operation: slots are migrated away from\n *    this node and keys that are no longer owned must be removed.\n *  - After a failed ASM import operation: partially imported slot data must\n *    be cleaned up.\n *  - Due to a user-initiated SFLUSH command.\n *\n * Redis supports two trimming methods: background trim and active trim.\n *\n * Background trim: In cluster mode, Redis maintains per-slot data structures\n * for keys, expires, and subexpires. This makes it possible to efficiently\n * detach all data associated with a given slot in a single step. During\n * trimming, these slot-specific data structures are handed off to a BIO thread\n * for asynchronous cleanup, similar to how FLUSHALL or FLUSHDB operate. This is\n * the default trimming method.\n *\n * Active trim: Unlike Redis itself, some modules may not maintain per-slot data\n * structures and therefore cannot drop the related slot data in a single\n * operation. To support these cases, Redis introduces active trim, where key\n * deletion occurs in the main thread instead. This is not a blocking operation:\n * trimming runs incrementally in the main thread, periodically removing keys\n * during the cron loop. Each deletion triggers a keyspace notification so that\n * modules can react to individual key removals. While active trim is less\n * efficient, it ensures backward compatibility for modules during the\n * transition period.\n *\n * Before starting the trim, Redis checks whether any module is subscribed to\n * the REDISMODULE_NOTIFY_KEY_TRIMMED keyspace event. If such subscribers exist,\n * active trim is used; otherwise, background trim is triggered. 
Going forward,\n * modules are expected to adopt background trim, and active trim will be\n * phased out once modules migrate to the new method.\n *\n * Active trim is also preferred if there is any client using the client\n * tracking feature (client-side caching). In the client tracking protocol,\n * there is currently no mechanism to signal that only specific slots have been\n * flushed. So, iterating over all keys in the slots and sending invalidation\n * notifications would be a blocking operation. To avoid this, if there is any\n * client using the client tracking feature, Redis triggers active trim.\n * During trimming, it sends invalidation notifications for each key being\n * trimmed. In the future, the client tracking protocol can be extended to\n * support slot-based invalidation, allowing background trim to be used in this\n * case as well.\n *\n * Trim the slots and return the trim method used.\n * If client_id is non-zero, the client will be unblocked when trim completes.\n * If migration_cleanup is true, this is a migration cleanup of slots no longer owned. 
*/\n\n/* Create ASM trim context with refcount=1 */\nasmTrimCtx *asmTrimCtxCreate(slotRangeArray *slots, kvstore *target_kvstore) {\n    asmTrimCtx *ctx = zcalloc(sizeof(asmTrimCtx));\n    ctx->refcount = 1;\n    ctx->slots = slots;\n    ctx->target_kvstore = target_kvstore;\n    /* delta histograms are zero-initialized by zcalloc */\n    return ctx;\n}\n\n/* Increment refcount */\nvoid asmTrimCtxRetain(asmTrimCtx *ctx) {\n    if (!ctx) return;\n    ctx->refcount++;\n}\n\n/* Decrement refcount, free if reaches 0 */\nvoid asmTrimCtxRelease(asmTrimCtx *ctx) {\n    if (!ctx) return;\n\n    serverAssert(ctx->refcount > 0);\n    ctx->refcount--;\n\n    if (ctx->refcount == 0) {\n        slotRangeArrayFree(ctx->slots);\n        zfree(ctx);\n    }\n}\n\nint asmTrimSlots(asmTrimCtx *ctx, uint64_t client_id, int migration_cleanup) {\n    serverAssert(ctx != NULL);\n\n    if (asmManager->debug_trim_method == ASM_DEBUG_TRIM_NONE)\n        return ASM_TRIM_METHOD_NONE;\n\n    /* Trigger active trim for the following cases:\n     * 1. Debug override: trim method is set to 'active'.\n     * 2. There are clients using client side caching (client tracking is enabled):\n     *   There is no way to invalidate specific slots in the client tracking\n     *   protocol. For now, we just use active trim to trim the slots.\n     * 3. 
Module subscribers: If any module is subscribed to the TRIMMED event, we\n     *   assume the module needs per-key notifications and cannot use background trim.\n     */\n    int activetrim = server.tracking_clients != 0 ||\n                     (asmManager->debug_trim_method == ASM_DEBUG_TRIM_ACTIVE) ||\n                     (asmManager->debug_trim_method == ASM_DEBUG_TRIM_DEFAULT &&\n                      moduleHasSubscribersForKeyspaceEvent(NOTIFY_KEY_TRIMMED));\n    if (activetrim) {\n        asmTriggerActiveTrim(ctx->slots, client_id, migration_cleanup);\n    } else {\n        /* Background trim:\n         * - Retain ctx so kvsAsyncFreeDoneCB() can release it later.\n         * - Trigger background trim. Also updates the ctx delta histogram.\n         * - Schedule a completion cb to deduct the delta histogram from the DB. */\n        asmBgTrimCounterIncr();\n        asmTrimCtxRetain(ctx);\n        asmTriggerBackgroundTrim(ctx, migration_cleanup);\n        bioCreateCompRq(BIO_WORKER_LAZY_FREE, kvsAsyncFreeDoneCB, client_id, ctx);\n    }\n\n    return activetrim ? ASM_TRIM_METHOD_ACTIVE : ASM_TRIM_METHOD_BG;\n}\n\n/* Schedule a trim job for the specified slot ranges. The job will be\n * deferred and handled later in asmBeforeSleep(). We delay the trim jobs to\n * asmBeforeSleep() to ensure they only run when there is no write pause.\n * For trim method details, see asmTrimSlots(). */\nvoid asmTrimJobSchedule(slotRangeArray *slots) {\n    listAddNodeTail(asmManager->pending_trim_jobs, slotRangeArrayDup(slots));\n}\n\n/* Process any pending trim jobs. */\nvoid asmTrimJobProcessPending(void) {\n    /* Check if there are any pending trim jobs. */\n    if (listLength(asmManager->pending_trim_jobs) == 0 ||\n        asmManager->debug_trim_method == ASM_DEBUG_TRIM_NONE)\n    {\n        return;\n    }\n\n    /* If this node is a replica, it should not initiate slot trimming actively.\n     * Cancel the trim jobs and unblock the master if it is blocked. 
*/\n    if (clusterNodeIsSlave(getMyClusterNode())) {\n        asmCancelPendingTrimJobs();\n        asmUnblockMasterAfterTrim();\n        return;\n    }\n\n    /* Determine if we can start the trim job. Requirements:\n     * - client writes are not paused (so key deletions are allowed).\n     * - replica traffic is not paused (so TRIMSLOTS can be propagated).\n     * - trim is not disabled via RedisModule_ClusterDisableTrim().\n     */\n    static int logged = 0;\n    int disabled_by_module = server.cluster_module_trim_disablers > 0;\n\n    if (isPausedActions(PAUSE_ACTION_CLIENT_WRITE) ||\n        isPausedActions(PAUSE_ACTION_CLIENT_ALL) ||\n        isPausedActions(PAUSE_ACTION_REPLICA) ||\n        disabled_by_module)\n    {\n        if (logged == 0) {\n            logged = 1;\n            const char *reason = disabled_by_module ? \"trim is disabled by module\" :\n                                                      \"pause action is in effect\";\n            serverLog(LL_NOTICE, \"Trim job is deferred since %s.\", reason);\n        }\n        return;\n    }\n    logged = 0;\n\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->pending_trim_jobs, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        slotRangeArray *slots = listNodeValue(ln);\n        asmTrimCtx *ctx = asmTrimCtxCreate(slots, server.db[0].keys);\n        asmTrimSlots(ctx, CLIENT_ID_NONE, 1);\n        propagateTrimSlots(slots);\n        listDelNode(asmManager->pending_trim_jobs, ln);\n        asmTrimCtxRelease(ctx); /* Release ctx (if bg trim, released later by kvsAsyncFreeDoneCB). */\n    }\n}\n\n/* Trim keys in slots not owned by this node (if any). 
*/\nvoid asmTrimSlotsIfNotOwned(slotRangeArray *slots) {\n    if (!server.cluster_enabled || !clusterNodeIsMaster(getMyClusterNode())) return;\n\n    size_t num_keys = 0;\n    slotRangeArray *trim_slots = NULL;\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n            if (clusterIsMySlot(j) ||\n                kvstoreDictSize(server.db[0].keys, j) == 0 ||\n                isSlotInTrimJob(j))\n            {\n                continue;\n            }\n\n            trim_slots = slotRangeArrayAppend(trim_slots, j);\n            num_keys += kvstoreDictSize(server.db[0].keys, j);\n        }\n    }\n    if (!trim_slots) return;\n\n    sds str = slotRangeArrayToString(trim_slots);\n    serverLog(LL_NOTICE,\n              \"Detected keys in slots that do not belong to this node. \"\n              \"Scheduling trim for %zu keys in slots: %s\", num_keys, str);\n    sdsfree(str);\n\n    asmTrimJobSchedule(trim_slots);\n    slotRangeArrayFree(trim_slots);\n}\n\n/* Handle the master task when it is no longer used, trimming unowned slots if\n * necessary. This function is called when the replica has just been promoted\n * to master. */\nvoid asmFinalizeMasterTask(void) {\n    if (!server.cluster_enabled) return;\n\n    asmTask *task = asmManager->master_task;\n    if (task == NULL) return;\n\n    if (task->operation == ASM_IMPORT) {\n        /* Check if there is an ASM task that the master did not finish. */\n        if (task->state != ASM_COMPLETED && task->state != ASM_FAILED) {\n            sds slots_str = slotRangeArrayToString(task->slots);\n            serverLog(LL_WARNING, \"Import task %s from old master failed: slots=%s\",\n                                task->id, slots_str);\n            sdsfree(slots_str);\n            /* Mark the task as failed and notify the replicas. 
*/\n            task->state = ASM_FAILED;\n            asmNotifyStateChange(task, ASM_EVENT_IMPORT_FAILED);\n        }\n\n        /* Trim the slots if the import task has failed. */\n        if (clusterNodeIsMaster(getMyClusterNode()) && task->state == ASM_FAILED) {\n            asmTrimSlotsIfNotOwned(task->slots);\n        }\n    } else if (task->operation == ASM_MIGRATE) {\n        /* For migrate tasks, attempt to trim slots if necessary. After ASM completed,\n         * the previous master may not have initiated slot trimming before the failover\n         * occurred. In that case, we need to initiate slot trimming here.\n         * However, if ASM failed, slot ownership did not change, so no slot trimming\n         * is needed. */\n        if (clusterNodeIsMaster(getMyClusterNode()) && task->state != ASM_FAILED) {\n            asmTrimSlotsIfNotOwned(task->slots);\n        }\n    }\n\n    /* Clear the master task since this node is not a replica anymore. */\n    asmTaskFree(asmManager->master_task);\n    asmManager->master_task = NULL;\n}\n\n/* Replicas handle the master's ASM task information. */\nint asmReplicaHandleMasterTask(sds task_info) {\n    if (!server.cluster_enabled || !clusterNodeIsSlave(getMyClusterNode())) return C_ERR;\n\n    /* If the master task is migrating, just clear it when receiving new task info,\n     * even if the task info is empty, since that means the master finished the task. */\n    if (asmManager->master_task && asmManager->master_task->operation == ASM_MIGRATE) {\n        asmTaskFree(asmManager->master_task);\n        asmManager->master_task = NULL;\n    }\n\n    /* If the master task is empty, it means the master finished the task, and the\n     * replica should check the slot ownership to decide whether to raise a\n     * completed or a failed event. 
*/\n    if (!task_info || sdslen(task_info) == 0) {\n        asmTask *task = asmManager->master_task;\n        if (task && task->state != ASM_COMPLETED && task->state != ASM_FAILED) {\n            /* Check if the slots are owned by the master. Stop scanning as soon\n             * as a slot not covered by the master is found. */\n            int owned_by_master = 1;\n            clusterNode *master = clusterNodeGetMaster(getMyClusterNode());\n            for (int i = 0; i < task->slots->num_ranges && owned_by_master; i++) {\n                slotRange *sr = &task->slots->ranges[i];\n                for (int j = sr->start; j <= sr->end; j++) {\n                    if (!master || !clusterNodeCoversSlot(master, j)) {\n                        owned_by_master = 0;\n                        break;\n                    }\n                }\n            }\n            if (owned_by_master) {\n                task->state = ASM_COMPLETED;\n                asmNotifyStateChange(task, ASM_EVENT_IMPORT_COMPLETED);\n            } else {\n                task->state = ASM_FAILED;\n                asmNotifyStateChange(task, ASM_EVENT_IMPORT_FAILED);\n            }\n        }\n        return C_OK;\n    }\n\n    asmTask *task = asmTaskDeserialize(task_info);\n    if (!task) return C_ERR;\n\n    /* For a migrate task, the replica just keeps the task info and doesn't notify any event. */\n    if (task->operation == ASM_MIGRATE) {\n        if (asmManager->master_task) asmTaskFree(asmManager->master_task);\n        asmManager->master_task = task;\n        return C_OK;\n    }\n\n    int notify_event = 0;\n    int event = asmTaskStateToEvent(task);\n    if (asmManager->master_task) {\n        /* Notify when the task or event changes, to avoid duplicate notifications. 
*/\n        if (strcmp(task->id, asmManager->master_task->id) != 0 ||\n            event != asmTaskStateToEvent(asmManager->master_task))\n        {\n            notify_event = 1;\n        }\n        asmTaskFree(asmManager->master_task);\n    } else {\n        /* Ignore completed or failed task when there is no active master task. */\n        if (task->state != ASM_FAILED && task->state != ASM_COMPLETED)\n            notify_event = 1;\n    }\n\n    asmManager->master_task = task;\n    if (notify_event) asmNotifyStateChange(task, event);\n    return C_OK;\n}\n\n/* Cancel all pending trim jobs. */\nvoid asmCancelPendingTrimJobs(void) {\n    if (!asmManager) return;\n\n    listIter li;\n    listNode *ln;\n    listRewind(asmManager->pending_trim_jobs, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        slotRangeArray *slots = listNodeValue(ln);\n        listDelNode(asmManager->pending_trim_jobs, ln);\n        sds str = slotRangeArrayToString(slots);\n        serverLog(LL_NOTICE, \"Cancelling the pending trim job for slots: %s\", str);\n        sdsfree(str);\n        slotRangeArrayFree(slots);\n    }\n}\n\n/* Free an activeTrimJob and unblock pending client if needed. */\nvoid activeTrimJobFreeMethod(void *ptr) {\n    activeTrimJob *job = ptr;\n    if (job->client_id != 0) {\n        /* Reply with the slot ranges that requested to be trimmed. Generally we\n         * cancel trim jobs as the dataset is reset, no need to trim anymore. */\n        unblockClientForAsyncFlush(job->client_id, job->slots);\n    }\n    if (job->slots) slotRangeArrayFree(job->slots);\n    zfree(job);\n}\n\n/* Cancel all pending and active trim jobs. 
*/\nvoid asmCancelTrimJobs(void) {\n    if (!asmManager) return;\n\n    /* Unblock the master if it is blocked */\n    asmUnblockMasterAfterTrim();\n\n    /* Cancel pending trim jobs */\n    asmCancelPendingTrimJobs();\n\n    /* Cancel active trim jobs */\n    if (listLength(asmManager->active_trim_jobs) == 0)\n        return;\n\n    serverLog(LL_NOTICE, \"Cancelling all active trim jobs\");\n    asmManager->active_trim_cancelled += listLength(asmManager->active_trim_jobs);\n    asmActiveTrimEnd();\n    listEmpty(asmManager->active_trim_jobs);\n}\n\n/* It's used to trim slots after a migration is completed or an import has failed.\n * TRIMSLOTS RANGES <numranges> <start-slot> <end-slot> ... */\nvoid trimslotsCommand(client *c) {\n    long numranges = 0;\n\n    if (server.cluster_enabled == 0) {\n        addReplyError(c,\"This instance has cluster support disabled\");\n        return;\n    }\n\n    if (c->argc < 5) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    /* Validate the ranges argument */\n    if (strcasecmp(c->argv[1]->ptr, \"ranges\") != 0) {\n        addReplyError(c, \"missing ranges argument\");\n        return;\n    }\n\n    /* Get the number of ranges */\n    if (getLongFromObjectOrReply(c, c->argv[2], &numranges, NULL) != C_OK)\n        return;\n\n    /* Validate the number of ranges and argument count */\n    if (numranges < 1 || numranges > CLUSTER_SLOTS || c->argc != 3 + numranges * 2) {\n        addReplyError(c, \"invalid number of ranges\");\n        return;\n    }\n\n    /* Parse the slot ranges and start trimming */\n    slotRangeArray *slots = parseSlotRangesOrReply(c, c->argc, 3);\n    if (!slots) return;\n\n    if (c->id == CLIENT_ID_AOF) {\n        serverAssert(server.loading);\n        /* If we are loading the AOF, we can't trigger active trim because the\n         * next command may have an update for the same key that is supposed to\n         * be trimmed. We have to trim the keys synchronously. 
*/\n        clusterDelKeysInSlotRangeArray(slots, 1);\n        slotRangeArrayFree(slots);\n    } else {\n        /* We cannot trim any slot served by this node. */\n        if (clusterNodeIsMaster(getMyClusterNode())) {\n            for (int i = 0; i < slots->num_ranges; i++) {\n                for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n                    if (clusterCanAccessKeysInSlot(j)) {\n                        addReplyErrorFormat(c, \"the slot %d is served by this node\", j);\n                        slotRangeArrayFree(slots);\n                        return;\n                    }\n                }\n            }\n        }\n        asmTrimCtx *ctx = asmTrimCtxCreate(slots, server.db[0].keys);\n        asmTrimSlots(ctx, CLIENT_ID_NONE, 1);\n        /* Release ctx - if bg trim, will be freed when BIO completes */\n        asmTrimCtxRelease(ctx);\n    }\n\n    /* Command will not be propagated automatically since it does not modify\n     * the dataset. */\n    forceCommandPropagation(c, PROPAGATE_REPL | PROPAGATE_AOF);\n    addReply(c, shared.ok);\n}\n\n/* Start the active trim job. */\nvoid asmActiveTrimStart(void) {\n    activeTrimJob *job = listNodeValue(listFirst(asmManager->active_trim_jobs));\n    slotRangeArray *slots = job->slots;\n\n    serverAssert(asmManager->active_trim_it == NULL);\n    asmManager->active_trim_it = slotRangeArrayGetIterator(slots);\n    asmManager->active_trim_started++;\n    asmManager->active_trim_current_job_keys = 0;\n    asmManager->active_trim_current_job_trimmed = 0;\n\n    /* Count the number of keys to trim */\n    asmManager->active_trim_current_job_keys += getKeyCountInSlotRangeArray(slots);\n\n    RedisModuleClusterSlotMigrationTrimInfoV1 fsi = {\n            REDISMODULE_CLUSTER_SLOT_MIGRATION_TRIMINFO_VERSION,\n            (RedisModuleSlotRangeArray *) slots\n    };\n\n    /* Fire the trim event to modules only if this is a migration cleanup. 
*/\n    if (job->migration_cleanup)\n        moduleFireServerEvent(REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM,\n                              REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED,\n                              &fsi);\n\n    sds str = slotRangeArrayToString(slots);\n    serverLog(LL_NOTICE, \"Active trim initiated for slots: %s, to trim %llu keys.\",\n              str, asmManager->active_trim_current_job_keys);\n    sdsfree(str);\n}\n\n/* Schedule an active trim job with optional client waiting for completion. */\nvoid asmTriggerActiveTrim(slotRangeArray *slots, uint64_t client_id, int migration_cleanup) {\n    activeTrimJob *job = zmalloc(sizeof(*job));\n    job->slots = slotRangeArrayDup(slots);\n    job->client_id = client_id;\n    job->migration_cleanup = migration_cleanup;\n\n    listAddNodeTail(asmManager->active_trim_jobs, job);\n    sds str = slotRangeArrayToString(slots);\n    serverLog(LL_NOTICE, \"Active trim scheduled for slots: %s\", str);\n    sdsfree(str);\n\n    /* Start an active trim job if no active trim job is running. */\n    if (asmManager->active_trim_it == NULL) {\n        serverAssert(listLength(asmManager->active_trim_jobs) > 0);\n        asmActiveTrimStart();\n    }\n}\n\n/* End the active trim job. */\nvoid asmActiveTrimEnd(void) {\n    activeTrimJob *job = listNodeValue(listFirst(asmManager->active_trim_jobs));\n    slotRangeArray *slots = job->slots;\n\n    if (asmManager->active_trim_it) {\n        slotRangeArrayIteratorFree(asmManager->active_trim_it);\n        asmManager->active_trim_it = NULL;\n    }\n\n    /* Unblock the master if it is blocked */\n    asmUnblockMasterAfterTrim();\n\n    RedisModuleClusterSlotMigrationTrimInfoV1 fsi = {\n            REDISMODULE_CLUSTER_SLOT_MIGRATION_TRIMINFO_VERSION,\n            (RedisModuleSlotRangeArray *) slots\n    };\n\n    /* Fire the trim event to modules only if this is a migration cleanup. 
*/\n    if (job->migration_cleanup)\n        moduleFireServerEvent(REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM,\n                 REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_COMPLETED,\n                 &fsi);\n\n    sds str = slotRangeArrayToString(slots);\n    serverLog(LL_NOTICE, \"Active trim completed for slots: %s, %llu keys trimmed.\",\n              str, asmManager->active_trim_current_job_trimmed);\n    sdsfree(str);\n    listDelNode(asmManager->active_trim_jobs, listFirst(asmManager->active_trim_jobs));\n    asmManager->active_trim_completed++;\n}\n\n/* Check if the slot range array overlaps with any trim job. */\nint asmIsAnyTrimJobOverlaps(slotRangeArray *slots) {\n    if (!asmIsTrimInProgress()) return 0;\n    for (int i = 0; i < slots->num_ranges; i++) {\n        for (int j = slots->ranges[i].start; j <= slots->ranges[i].end; j++) {\n            if (isSlotInTrimJob(j)) return 1;\n        }\n    }\n    return 0;\n}\n\n/* Decrement background trim counter. Called from completion callback. */\nvoid asmBgTrimCounterDecr(void) {\n    if (!asmManager) return;\n    debugServerAssert(asmManager->bg_trim_running > 0);\n    asmManager->bg_trim_running--;\n}\n\n/* Increment background trim counter. */\nvoid asmBgTrimCounterIncr(void) {\n    if (!asmManager) return;\n    asmManager->bg_trim_running++;\n}\n\n/* Check if background trim is running (for skipping debug assertions). */\nint asmIsBgTrimRunning(void) {\n    if (!asmManager) return 0;\n    return asmManager->bg_trim_running > 0;\n}\n\n/* Check if there is any trim job in progress. */\nint asmIsTrimInProgress(void) {\n    if (!server.cluster_enabled) return 0;\n    return (listLength(asmManager->active_trim_jobs) != 0 ||\n            listLength(asmManager->pending_trim_jobs) != 0);\n}\n\n\n/* Check if the command is accessing keys in a slot being trimmed.\n * Return the slot if found, otherwise return -1. 
*/\nint asmGetTrimmingSlotForCommand(struct redisCommand *cmd, robj **argv, int argc) {\n    if (!asmIsTrimInProgress()) return -1;\n\n    /* Get the keys from the command */\n    getKeysResult result = GETKEYS_RESULT_INIT;\n    int numkeys = getKeysFromCommand(cmd, argv, argc, &result);\n\n    int last_checked_slot = -1;\n    for (int j = 0; j < numkeys; j++) {\n        robj *key = argv[result.keys[j].pos];\n        int slot = keyHashSlot((char*) key->ptr, sdslen(key->ptr));\n        if (slot == last_checked_slot) continue;\n        if (isSlotInTrimJob(slot)) {\n            getKeysFreeResult(&result);\n            return slot;\n        }\n        last_checked_slot = slot;\n    }\n    getKeysFreeResult(&result);\n    return -1;\n}\n\n/* Delete the key and notify the modules. */\nvoid asmActiveTrimDeleteKey(redisDb *db, robj *keyobj, int migration_cleanup) {\n    if (asmManager->debug_active_trim_delay > 0)\n        debugDelay(asmManager->debug_active_trim_delay);\n\n    /* The key needs to be converted from static to heap before deletion. */\n    int static_key = keyobj->refcount == OBJ_STATIC_REFCOUNT;\n    if (static_key) keyobj = createStringObject(keyobj->ptr, sdslen(keyobj->ptr));\n\n    dbDelete(db, keyobj);\n    keyModified(NULL, db, keyobj, NULL, 1);\n    if (migration_cleanup) {\n        /* Logically, the keys are not deleted from the database; they have just\n         * moved to another node. The modules need to know that these keys are\n         * no longer available locally, so send the keyspace notification only\n         * to the modules, not to clients. */\n        moduleNotifyKeyspaceEvent(NOTIFY_KEY_TRIMMED, \"key_trimmed\", keyobj, db->id, NULL, 0);\n    } else {\n        /* Not a migration cleanup: the key is really deleted from the database,\n         * so we need to notify the clients. 
*/\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", keyobj, db->id);\n    }\n    asmManager->active_trim_current_job_trimmed++;\n\n    if (static_key) decrRefCount(keyobj);\n}\n\n/* Trim keys in the active trim job. */\nvoid asmActiveTrimCycle(void) {\n    if (asmManager->debug_active_trim_delay < 0 ||\n        listLength(asmManager->active_trim_jobs) == 0)\n    {\n        return;\n    }\n\n    /* Verify client pause is not in effect and trim is not disabled by module,\n     * so we can delete keys. */\n    static int blocked = 0;\n    int disabled_by_module = server.cluster_module_trim_disablers > 0;\n    if (isPausedActions(PAUSE_ACTION_CLIENT_ALL) ||\n        isPausedActions(PAUSE_ACTION_CLIENT_WRITE) ||\n        disabled_by_module)\n    {\n        if (blocked == 0)  {\n            blocked = 1;\n            const char *reason = disabled_by_module ? \"trim is disabled by module\" :\n                                                       \"pause action is in effect\";\n            serverLog(LL_NOTICE, \"Active trim cycle is blocked since %s.\", reason);\n        }\n        return;\n    }\n    if (blocked) serverLog(LL_NOTICE, \"Active trim cycle is unblocked.\");\n    blocked = 0;\n\n    /* This works in a similar way to activeExpireCycle, in the sense that\n     * we do incremental work across calls. */\n    const int trim_cycle_time_perc = 25;\n    int time_exceeded = 0;\n    long long start = ustime(), timelimit;\n    unsigned long long num_deleted = 0;\n\n    /* Calculate the time limit in microseconds for this cycle. 
*/\n    timelimit = 1000000 * trim_cycle_time_perc / server.hz / 100;\n    if (timelimit <= 0) timelimit = 1;\n\n    activeTrimJob *job = listNodeValue(listFirst(asmManager->active_trim_jobs));\n\n    serverAssert(asmManager->active_trim_it);\n    int slot = slotRangeArrayGetCurrentSlot(asmManager->active_trim_it);\n\n    while (!time_exceeded && slot != -1) {\n        dictEntry *de;\n        kvstoreDictIterator kvs_di;\n        kvstoreInitDictSafeIterator(&kvs_di, server.db[0].keys, slot);\n        while ((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n            kvobj *kv = dictGetKV(de);\n            sds sdskey = kvobjGetKey(kv);\n\n            enterExecutionUnit(1, 0);\n            robj *keyobj = createStringObject(sdskey, sdslen(sdskey));\n            asmActiveTrimDeleteKey(&server.db[0], keyobj, job->migration_cleanup);\n            decrRefCount(keyobj);\n            exitExecutionUnit();\n            postExecutionUnitOperations();\n            num_deleted++;\n\n            /* Once every 32 deletions, check if we reached the time limit. */\n            if (num_deleted % 32 == 0 && (ustime() - start) > timelimit) {\n                time_exceeded = 1;\n                break;\n            }\n        }\n        kvstoreResetDictIterator(&kvs_di);\n        if (!time_exceeded) slot = slotRangeArrayNext(asmManager->active_trim_it);\n    }\n\n    if (slot == -1) {\n#if defined(USE_JEMALLOC)\n        jemalloc_purge();\n#endif\n        asmActiveTrimEnd();\n\n        /* Immediately start the next trim job upon completion of the current\n         * one. Eliminates gaps in notifications so modules are informed about\n         * trimming unowned keys, which is important for modules that\n         * continuously filter unowned keys from their replies. */\n        if (listLength(asmManager->active_trim_jobs) != 0)\n            asmActiveTrimStart();\n    }\n}\n\n/* Check if the key is in a trim job. 
*/\nint asmIsKeyInTrimJob(sds keyname) {\n    if (!asmIsTrimInProgress() || !isSlotInTrimJob(getKeySlot(keyname)))\n        return 0;\n    return 1;\n}\n\n/* Modules can use RM_ClusterPropagateForSlotMigration() during the\n * CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE event to propagate commands\n * that should be delivered just before the slot snapshot delivery starts. */\nint asmModulePropagateBeforeSlotSnapshot(struct redisCommand *cmd, robj **argv, int argc) {\n    /* This API is only called in the fork child. */\n    if (server.cluster_enabled == 0 ||\n        server.in_fork_child != CHILD_TYPE_RDB ||\n        listLength(asmManager->tasks) == 0)\n    {\n        errno = EBADF;\n        return C_ERR;\n    }\n\n    /* Check if the task state is right. */\n    asmTask *task = listNodeValue(listFirst(asmManager->tasks));\n    if (task->operation != ASM_MIGRATE ||\n        task->state != ASM_SEND_BULK_AND_STREAM ||\n        task->pre_snapshot_module_cmds == NULL)\n    {\n        errno = EBADF;\n        return C_ERR;\n    }\n\n    /* Ensure all arguments are converted to string encoding if necessary,\n     * since getSlotFromCommand expects them to be string-encoded. */\n    for (int i = 0; i < argc; i++) {\n        if (!sdsEncodedObject(argv[i])) {\n            serverAssert(argv[i]->encoding == OBJ_ENCODING_INT);\n            robj *old = argv[i];\n            argv[i] = createStringObjectFromLongLongWithSds((long)old->ptr);\n            decrRefCount(old);\n        }\n    }\n\n    /* Crossslot commands are not allowed */\n    int slot = getSlotFromCommand(cmd, argv, argc);\n    if (slot == CLUSTER_CROSSSLOT) {\n        errno = ENOTSUP;\n        return C_ERR;\n    }\n\n    /* Allow no-keys commands or if keys are in the slot range. 
*/\n    slotRange sr = {slot, slot};\n    if (slot != INVALID_CLUSTER_SLOT && !slotRangeArrayOverlaps(task->slots, &sr)) {\n        errno = ERANGE;\n        return C_ERR;\n    }\n\n    robj **argvcopy = zmalloc(sizeof(robj*) * argc);\n    for (int i = 0; i < argc; i++) {\n        argvcopy[i] = argv[i];\n        incrRefCount(argv[i]);\n    }\n\n    redisOpArrayAppend(task->pre_snapshot_module_cmds, 0, argvcopy, argc, 0);\n    return C_OK;\n}\n"
  },
  {
    "path": "src/cluster_asm.h",
    "content": "/* cluster_asm.h -- Atomic slot migration implementation for cluster\n *\n * Copyright (c) 2025-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef CLUSTER_ASM_H\n#define CLUSTER_ASM_H\n\nstruct asmTask;\nstruct slotRangeArray;\nstruct slotRange;\nstruct asmTrimCtx;\n\n#define ASM_TRIM_METHOD_NONE 0\n#define ASM_TRIM_METHOD_BG 1\n#define ASM_TRIM_METHOD_ACTIVE 2\n\nvoid asmInit(void);\nvoid asmBeforeSleep(void);\nvoid asmCron(void);\nvoid asmSlotSnapshotAndStreamStart(struct asmTask *task);\nvoid asmSlotSnapshotSucceed(struct asmTask *task);\nvoid asmSlotSnapshotFailed(struct asmTask *task);\nvoid asmCallbackOnFreeClient(client *c);\nint asmMigrateInProgress(void);\nint asmImportInProgress(void);\nvoid asmFeedMigrationClient(robj **argv, int argc);\nint asmDebugSetFailPoint(char *channel, char *state);\nint asmDebugSetTrimMethod(const char *method, int active_trim_delay);\n\nvoid asmImportIncrAppliedBytes(struct asmTask *task, size_t bytes);\nstruct slotRangeArray *asmTaskGetSlotRanges(const char *task_id);\nint asmNotifyConfigUpdated(struct asmTask *task, sds *err);\nsize_t asmGetPeakSyncBufferSize(void);\nsize_t asmGetImportInputBufferSize(void);\nsize_t asmGetMigrateOutputBufferSize(void);\nint clusterAsmCancel(const char *task_id, const char *reason);\nint clusterAsmCancelBySlot(int slot, const char *reason);\nint clusterAsmCancelBySlotRangeArray(struct slotRangeArray *slots, const char *reason);\nint clusterAsmCancelByNode(void *node, const char *reason);\nint isSlotInAsmTask(int slot);\nint isSlotInTrimJob(int slot);\nsds asmCatInfoString(sds info);\nvoid clusterMigrationCommand(client *c);\nvoid clusterSyncSlotsCommand(client *c);\nstruct asmTask *asmLookupTaskBySlotRangeArray(struct slotRangeArray *slots);\nvoid asmCancelTrimJobs(void);\nsds 
asmDumpActiveImportTask(void);\nint asmReplicaHandleMasterTask(sds task_info);\nvoid asmFinalizeMasterTask(void);\nint asmIsTrimInProgress(void);\nint asmGetTrimmingSlotForCommand(struct redisCommand *cmd, robj **argv, int argc);\nvoid asmActiveTrimCycle(void);\nint asmIsKeyInTrimJob(sds keyname);\nint asmModulePropagateBeforeSlotSnapshot(struct redisCommand *cmd, robj **argv, int argc);\nint asmTrimSlots(struct asmTrimCtx *ctx, uint64_t client_id, int migration_cleanup);\nint asmIsBgTrimRunning(void);\nvoid asmBgTrimCounterDecr(void);\nvoid asmBgTrimCounterIncr(void);\n\n/* Context for ASM background trim */\nstruct asmTrimCtx *asmTrimCtxCreate(struct slotRangeArray *slots, kvstore *target_kvstore);\nvoid asmTrimCtxRetain(struct asmTrimCtx *ctx);\nvoid asmTrimCtxRelease(struct asmTrimCtx *ctx);\n#endif\n\n"
  },
  {
    "path": "src/cluster_legacy.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/*\n * cluster_legacy.c contains the implementation of the cluster API that is\n * specific to the standard, Redis cluster-bus based clustering mechanism.\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"cluster_legacy.h\"\n#include \"cluster_asm.h\"\n#include \"cluster_slot_stats.h\"\n#include \"endianconv.h\"\n#include \"connection.h\"\n\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <arpa/inet.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <sys/stat.h>\n#include <math.h>\n#include <sys/file.h>\n\n/* A global reference to myself is handy to make code more clear.\n * Myself always points to server.cluster->myself, that is, the clusterNode\n * that represents this node. 
*/\nclusterNode *myself = NULL;\n\nclusterNode *createClusterNode(char *nodename, int flags);\nvoid clusterAddNode(clusterNode *node);\nvoid clusterAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask);\nvoid clusterReadHandler(connection *conn);\nvoid clusterSendPing(clusterLink *link, int type);\nvoid clusterSendFail(char *nodename);\nvoid clusterSendFailoverAuthIfNeeded(clusterNode *node, clusterMsg *request);\nvoid clusterUpdateState(void);\nint clusterNodeCoversSlot(clusterNode *n, int slot);\nlist *clusterGetNodesInMyShard(clusterNode *node);\nint clusterNodeAddSlave(clusterNode *master, clusterNode *slave);\nint clusterAddSlot(clusterNode *n, int slot);\nint clusterDelSlot(int slot);\nint clusterMoveNodeSlots(clusterNode *from_node, clusterNode *to_node);\nint clusterDelNodeSlots(clusterNode *node);\nint clusterNodeSetSlotBit(clusterNode *n, int slot);\nvoid clusterSetMaster(clusterNode *n);\nvoid clusterHandleSlaveFailover(void);\nvoid clusterHandleSlaveMigration(int max_slaves);\nint bitmapTestBit(unsigned char *bitmap, int pos);\nvoid bitmapSetBit(unsigned char *bitmap, int pos);\nvoid bitmapClearBit(unsigned char *bitmap, int pos);\nvoid clusterDoBeforeSleep(int flags);\nvoid clusterSendUpdate(clusterLink *link, clusterNode *node);\nvoid resetManualFailover(void);\nvoid clusterCloseAllSlots(void);\nvoid clusterSetNodeAsMaster(clusterNode *n);\nvoid clusterDelNode(clusterNode *delnode);\nsds representClusterNodeFlags(sds ci, uint16_t flags);\nsds representSlotInfo(sds ci, uint16_t *slot_info_pairs, int slot_info_pairs_count);\nvoid clusterFreeNodesSlotsInfo(clusterNode *n);\nuint64_t clusterGetMaxEpoch(void);\nint clusterBumpConfigEpochWithoutConsensus(void);\nvoid moduleCallClusterReceivers(const char *sender_id, uint64_t module_id, uint8_t type, const unsigned char *payload, uint32_t len);\nconst char *clusterGetMessageTypeString(int type);\nvoid removeChannelsInSlot(unsigned int slot);\nunsigned int countKeysInSlot(unsigned int 
hashslot);\nunsigned int countChannelsInSlot(unsigned int hashslot);\nunsigned int clusterDelKeysInSlot(unsigned int hashslot, int flags);\nvoid clusterAddNodeToShard(const char *shard_id, clusterNode *node);\nlist *clusterLookupNodeListByShardId(const char *shard_id);\nvoid clusterRemoveNodeFromShard(clusterNode *node);\nint auxShardIdSetter(clusterNode *n, void *value, int length);\nsds auxShardIdGetter(clusterNode *n, sds s);\nint auxShardIdPresent(clusterNode *n);\nint auxHumanNodenameSetter(clusterNode *n, void *value, int length);\nsds auxHumanNodenameGetter(clusterNode *n, sds s);\nint auxHumanNodenamePresent(clusterNode *n);\nint auxTcpPortSetter(clusterNode *n, void *value, int length);\nsds auxTcpPortGetter(clusterNode *n, sds s);\nint auxTcpPortPresent(clusterNode *n);\nint auxTlsPortSetter(clusterNode *n, void *value, int length);\nsds auxTlsPortGetter(clusterNode *n, sds s);\nint auxTlsPortPresent(clusterNode *n);\nstatic void clusterBuildMessageHdr(clusterMsg *hdr, int type, size_t msglen);\nvoid freeClusterLink(clusterLink *link);\nint verifyClusterNodeId(const char *name, int length);\nstatic void updateShardId(clusterNode *node, const char *shard_id);\n\nint getNodeDefaultClientPort(clusterNode *n) {\n    return server.tls_cluster ? n->tls_port : n->tcp_port;\n}\n\nstatic inline int getNodeDefaultReplicationPort(clusterNode *n) {\n    return server.tls_replication ? n->tls_port : n->tcp_port;\n}\n\nint clusterNodeClientPort(clusterNode *n, int use_tls) {\n    return use_tls ? n->tls_port : n->tcp_port;\n}\n\nstatic inline int defaultClientPort(void) {\n    return server.tls_cluster ? server.tls_port : server.port;\n}\n\n#define isSlotUnclaimed(slot) \\\n    (server.cluster->slots[slot] == NULL || \\\n        bitmapTestBit(server.cluster->owner_not_claiming_slot, slot))\n\n#define RCVBUF_INIT_LEN 1024\n#define RCVBUF_MAX_PREALLOC (1<<20) /* 1MB */\n\n/* Cluster nodes hash table, mapping nodes addresses 1.2.3.4:6379 to\n * clusterNode structures. 
*/\ndictType clusterNodesDictType = {\n        dictSdsHash,                /* hash function */\n        NULL,                       /* key dup */\n        NULL,                       /* val dup */\n        dictSdsKeyCompare,          /* key compare */\n        dictSdsDestructor,          /* key destructor */\n        NULL,                       /* val destructor */\n        NULL                        /* allow to expand */\n};\n\n/* Cluster re-addition blacklist. This maps node IDs to the time\n * we can re-add this node. The goal is to avoid re-adding a removed\n * node for some time. */\ndictType clusterNodesBlackListDictType = {\n        dictSdsCaseHash,            /* hash function */\n        NULL,                       /* key dup */\n        NULL,                       /* val dup */\n        dictSdsKeyCaseCompare,      /* key compare */\n        dictSdsDestructor,          /* key destructor */\n        NULL,                       /* val destructor */\n        NULL                        /* allow to expand */\n};\n\n/* Cluster shards hash table, mapping shard id to list of nodes */\ndictType clusterSdsToListType = {\n        dictSdsHash,                /* hash function */\n        NULL,                       /* key dup */\n        NULL,                       /* val dup */\n        dictSdsKeyCompare,          /* key compare */\n        dictSdsDestructor,          /* key destructor */\n        dictListDestructor,         /* val destructor */\n        NULL                        /* allow to expand */\n};\n\n/* Aux fields were introduced in Redis 7.2 to support the persistence\n * of various important node properties, such as shard id, in nodes.conf.\n * Aux fields take an explicit format of name=value pairs and have no\n * intrinsic order among them. Aux fields are always grouped together\n * at the end of the second column of each row after the node's IP\n * address/port/cluster_port and the optional hostname. Aux fields\n * are separated by ','. 
*/\n\n/* Aux field setter function prototype\n * return C_OK when the update is successful; C_ERR otherwise */\ntypedef int (aux_value_setter) (clusterNode* n, void *value, int length);\n/* Aux field getter function prototype\n * return an sds that is a concatenation of the input sds string and\n * the aux value */\ntypedef sds (aux_value_getter) (clusterNode* n, sds s);\n\ntypedef int (aux_value_present) (clusterNode* n);\n\ntypedef struct {\n    char *field;\n    aux_value_setter *setter;\n    aux_value_getter *getter;\n    aux_value_present *isPresent;\n} auxFieldHandler;\n\n/* Assign index to each aux field */\ntypedef enum {\n    af_shard_id,\n    af_human_nodename,\n    af_tcp_port,\n    af_tls_port,\n    af_count,\n} auxFieldIndex;\n\n/* Note that\n * 1. the order of the elements below must match that of their\n *    indices as defined in auxFieldIndex\n * 2. aux name can contain characters that pass the isValidAuxChar check only */\nauxFieldHandler auxFieldHandlers[] = {\n    {\"shard-id\", auxShardIdSetter, auxShardIdGetter, auxShardIdPresent},\n    {\"nodename\", auxHumanNodenameSetter, auxHumanNodenameGetter, auxHumanNodenamePresent},\n    {\"tcp-port\", auxTcpPortSetter, auxTcpPortGetter, auxTcpPortPresent},\n    {\"tls-port\", auxTlsPortSetter, auxTlsPortGetter, auxTlsPortPresent},\n};\n\nint auxShardIdSetter(clusterNode *n, void *value, int length) {\n    if (verifyClusterNodeId(value, length) == C_ERR) {\n        return C_ERR;\n    }\n    memcpy(n->shard_id, value, CLUSTER_NAMELEN);\n    /* if n already has replicas, make sure they all use\n     * the primary shard id */\n    for (int i = 0; i < n->numslaves; i++) {\n        if (memcmp(n->slaves[i]->shard_id, n->shard_id, CLUSTER_NAMELEN) != 0)\n            updateShardId(n->slaves[i], n->shard_id);\n    }\n    clusterAddNodeToShard(value, n);\n    return C_OK;\n}\n\nsds auxShardIdGetter(clusterNode *n, sds s) {\n    return sdscatprintf(s, \"%.40s\", n->shard_id);\n}\n\nint 
auxShardIdPresent(clusterNode *n) {\n    return strlen(n->shard_id);\n}\n\nint auxHumanNodenameSetter(clusterNode *n, void *value, int length) {\n    if (n && !strncmp(value, n->human_nodename, length)) {\n        return C_OK;\n    } else if (!n && (length == 0)) {\n        return C_OK;\n    }\n    if (n) {\n        n->human_nodename = sdscpylen(n->human_nodename, value, length);\n    } else if (sdslen(n->human_nodename) != 0) {\n        sdsclear(n->human_nodename);\n    } else {\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nsds auxHumanNodenameGetter(clusterNode *n, sds s) {\n    return sdscatprintf(s, \"%s\", n->human_nodename);\n}\n\nint auxHumanNodenamePresent(clusterNode *n) {\n    return sdslen(n->human_nodename);\n}\n\nint auxTcpPortSetter(clusterNode *n, void *value, int length) {\n    if (length > 5 || length < 1) {\n        return C_ERR;\n    }\n    char buf[length + 1];\n    memcpy(buf, (char*)value, length);\n    buf[length] = '\\0';\n    n->tcp_port = atoi(buf);\n    return (n->tcp_port < 0 || n->tcp_port >= 65536) ? C_ERR : C_OK;\n}\n\nsds auxTcpPortGetter(clusterNode *n, sds s) {\n    return sdscatprintf(s, \"%d\", n->tcp_port);\n}\n\nint auxTcpPortPresent(clusterNode *n) {\n    return n->tcp_port >= 0 && n->tcp_port < 65536;\n}\n\nint auxTlsPortSetter(clusterNode *n, void *value, int length) {\n    if (length > 5 || length < 1) {\n        return C_ERR;\n    }\n    char buf[length + 1];\n    memcpy(buf, (char*)value, length);\n    buf[length] = '\\0';\n    n->tls_port = atoi(buf);\n    return (n->tls_port < 0 || n->tls_port >= 65536) ? 
C_ERR : C_OK;\n}\n\nsds auxTlsPortGetter(clusterNode *n, sds s) {\n    return sdscatprintf(s, \"%d\", n->tls_port);\n}\n\nint auxTlsPortPresent(clusterNode *n) {\n    return n->tls_port >= 0 && n->tls_port < 65536;\n}\n\n/* clusterLink send queue blocks */\ntypedef struct {\n    size_t totlen; /* Total length of this block including the message */\n    int refcount;  /* Number of cluster link send msg queues containing the message */\n    clusterMsg msg[];\n} clusterMsgSendBlock;\n\n/* Helper function to extract a normal message from a send block. */\nstatic clusterMsg *getMessageFromSendBlock(clusterMsgSendBlock *msgblock) {\n    return &msgblock->msg[0];\n}\n\n/* -----------------------------------------------------------------------------\n * Initialization\n * -------------------------------------------------------------------------- */\n\n/* Load the cluster config from 'filename'.\n *\n * If the file does not exist or is zero-length (this may happen because\n * when we lock the nodes.conf file, we create a zero-length one for the\n * sake of locking if it does not already exist), C_ERR is returned.\n * If the configuration was loaded from the file, C_OK is returned. */\nint clusterLoadConfig(char *filename) {\n    FILE *fp = fopen(filename,\"r\");\n    struct stat sb;\n    char *line;\n    int maxline, j;\n\n    if (fp == NULL) {\n        if (errno == ENOENT) {\n            return C_ERR;\n        } else {\n            serverLog(LL_WARNING,\n                \"Loading the cluster node config from %s: %s\",\n                filename, strerror(errno));\n            exit(1);\n        }\n    }\n\n    if (redis_fstat(fileno(fp),&sb) == -1) {\n        serverLog(LL_WARNING,\n            \"Unable to obtain the cluster node config file stat %s: %s\",\n            filename, strerror(errno));\n        exit(1);\n    }\n    /* Check if the file is zero-length: if so return C_ERR to signal\n     * we have to write the config. 
*/\n    if (sb.st_size == 0) {\n        fclose(fp);\n        return C_ERR;\n    }\n\n    /* Parse the file. Note that single lines of the cluster config file can\n     * be really long as they include all the hash slots of the node.\n     * This means in the worst possible case, half of the Redis slots will be\n     * present in a single line, possibly in importing or migrating state, so\n     * together with the node ID of the sender/receiver.\n     *\n     * To simplify we allocate 1024+CLUSTER_SLOTS*128 bytes per line. */\n    maxline = 1024+CLUSTER_SLOTS*128;\n    line = zmalloc(maxline);\n    while(fgets(line,maxline,fp) != NULL) {\n        int argc, aux_argc;\n        sds *argv, *aux_argv;\n        clusterNode *n, *master;\n        char *p, *s;\n\n        /* Skip blank lines, they can be created either by users manually\n         * editing nodes.conf or by the config writing process if stopped\n         * before the truncate() call. */\n        if (line[0] == '\\n' || line[0] == '\\0') continue;\n\n        /* Split the line into arguments for processing. */\n        argv = sdssplitargs(line,&argc);\n        if (argv == NULL) goto fmterr;\n\n        /* Handle the special \"vars\" line. Don't pretend it is the last\n         * line even if it actually is when generated by Redis. 
*/\n        if (strcasecmp(argv[0],\"vars\") == 0) {\n            if (!(argc % 2)) goto fmterr;\n            for (j = 1; j < argc; j += 2) {\n                if (strcasecmp(argv[j],\"currentEpoch\") == 0) {\n                    server.cluster->currentEpoch =\n                            strtoull(argv[j+1],NULL,10);\n                } else if (strcasecmp(argv[j],\"lastVoteEpoch\") == 0) {\n                    server.cluster->lastVoteEpoch =\n                            strtoull(argv[j+1],NULL,10);\n                } else {\n                    serverLog(LL_NOTICE,\n                        \"Skipping unknown cluster config variable '%s'\",\n                        argv[j]);\n                }\n            }\n            sdsfreesplitres(argv,argc);\n            continue;\n        }\n\n        /* Regular config lines have at least eight fields */\n        if (argc < 8) {\n            sdsfreesplitres(argv,argc);\n            goto fmterr;\n        }\n\n        /* Create this node if it does not exist */\n        if (verifyClusterNodeId(argv[0], sdslen(argv[0])) == C_ERR) {\n            sdsfreesplitres(argv, argc);\n            goto fmterr;\n        }\n        n = clusterLookupNode(argv[0], sdslen(argv[0]));\n        if (!n) {\n            n = createClusterNode(argv[0],0);\n            clusterAddNode(n);\n        }\n        /* Format for the node address and auxiliary argument information:\n         * ip:port[@cport][,hostname][,aux=val]*] */\n\n        aux_argv = sdssplitlen(argv[1], sdslen(argv[1]), \",\", 1, &aux_argc);\n        if (aux_argv == NULL) {\n            sdsfreesplitres(argv,argc);\n            goto fmterr;\n        }\n\n        /* Hostname is an optional argument that defines the endpoint\n         * that can be reported to clients instead of IP. 
*/\n        if (aux_argc > 1 && sdslen(aux_argv[1]) > 0) {\n            n->hostname = sdscpy(n->hostname, aux_argv[1]);\n        } else if (sdslen(n->hostname) != 0) {\n            sdsclear(n->hostname);\n        }\n\n        /* All fields after hostname are auxiliary and they take on\n         * the format of \"aux=val\" where both aux and val can contain\n         * characters that pass the isValidAuxChar check only. The order\n         * of the aux fields is insignificant. */\n        int aux_tcp_port = 0;\n        int aux_tls_port = 0;\n        int aux_shard_id = 0;\n        for (int i = 2; i < aux_argc; i++) {\n            int field_argc;\n            sds *field_argv;\n            field_argv = sdssplitlen(aux_argv[i], sdslen(aux_argv[i]), \"=\", 1, &field_argc);\n            if (field_argv == NULL || field_argc != 2) {\n                /* Invalid aux field format */\n                if (field_argv != NULL) sdsfreesplitres(field_argv, field_argc);\n                sdsfreesplitres(aux_argv, aux_argc);\n                sdsfreesplitres(argv,argc);\n                goto fmterr;\n            }\n\n            /* Validate that both aux and value contain valid characters only */\n            for (unsigned j = 0; j < 2; j++) {\n                if (!isValidAuxString(field_argv[j],sdslen(field_argv[j]))){\n                    /* Invalid aux field format */\n                    sdsfreesplitres(field_argv, field_argc);\n                    sdsfreesplitres(aux_argv, aux_argc);\n                    sdsfreesplitres(argv,argc);\n                    goto fmterr;\n                }\n            }\n\n            /* Note that we don't expect lots of aux fields in the foreseeable\n             * future so a linear search is completely fine. 
*/\n            int field_found = 0;\n            for (unsigned j = 0; j < numElements(auxFieldHandlers); j++) {\n                if (sdslen(field_argv[0]) != strlen(auxFieldHandlers[j].field) ||\n                    memcmp(field_argv[0], auxFieldHandlers[j].field, sdslen(field_argv[0])) != 0) {\n                    continue;\n                }\n                field_found = 1;\n                aux_shard_id |= j == af_shard_id;\n                aux_tcp_port |= j == af_tcp_port;\n                aux_tls_port |= j == af_tls_port;\n                if (auxFieldHandlers[j].setter(n, field_argv[1], sdslen(field_argv[1])) != C_OK) {\n                    /* Invalid aux field format */\n                    sdsfreesplitres(field_argv, field_argc);\n                    sdsfreesplitres(aux_argv, aux_argc);\n                    sdsfreesplitres(argv,argc);\n                    goto fmterr;\n                }\n            }\n\n            if (field_found == 0) {\n                /* Invalid aux field format */\n                sdsfreesplitres(field_argv, field_argc);\n                sdsfreesplitres(aux_argv, aux_argc);\n                sdsfreesplitres(argv,argc);\n                goto fmterr;\n            }\n\n            sdsfreesplitres(field_argv, field_argc);\n        }\n        /* Address and port */\n        if ((p = strrchr(aux_argv[0],':')) == NULL) {\n            sdsfreesplitres(aux_argv, aux_argc);\n            sdsfreesplitres(argv,argc);\n            goto fmterr;\n        }\n        *p = '\\0';\n        memcpy(n->ip,aux_argv[0],strlen(aux_argv[0])+1);\n        char *port = p+1;\n        char *busp = strchr(port,'@');\n        if (busp) {\n            *busp = '\\0';\n            busp++;\n        }\n        /* If neither TCP or TLS port is found in aux field, it is considered\n         * an old version of nodes.conf file.*/\n        if (!aux_tcp_port && !aux_tls_port) {\n            if (server.tls_cluster) {\n                n->tls_port = atoi(port);\n            } else 
{\n                n->tcp_port = atoi(port);\n            }\n        } else if (!aux_tcp_port) {\n            n->tcp_port = atoi(port);\n        } else if (!aux_tls_port) {\n            n->tls_port = atoi(port);\n        }\n        /* In older versions of nodes.conf the \"@busport\" part is missing.\n         * In this case we set it to the default offset of 10000 from the\n         * base port. */\n        n->cport = busp ? atoi(busp) : (getNodeDefaultClientPort(n) + CLUSTER_PORT_INCR);\n\n        /* The plaintext port for client in a TLS cluster (n->pport) is not\n         * stored in nodes.conf. It is received later over the bus protocol. */\n\n        sdsfreesplitres(aux_argv, aux_argc);\n\n        /* Parse flags */\n        p = s = argv[2];\n        while(p) {\n            p = strchr(s,',');\n            if (p) *p = '\\0';\n            if (!strcasecmp(s,\"myself\")) {\n                serverAssert(server.cluster->myself == NULL);\n                myself = server.cluster->myself = n;\n                n->flags |= CLUSTER_NODE_MYSELF;\n            } else if (!strcasecmp(s,\"master\")) {\n                n->flags |= CLUSTER_NODE_MASTER;\n            } else if (!strcasecmp(s,\"slave\")) {\n                n->flags |= CLUSTER_NODE_SLAVE;\n            } else if (!strcasecmp(s,\"fail?\")) {\n                n->flags |= CLUSTER_NODE_PFAIL;\n            } else if (!strcasecmp(s,\"fail\")) {\n                n->flags |= CLUSTER_NODE_FAIL;\n                n->fail_time = mstime();\n            } else if (!strcasecmp(s,\"handshake\")) {\n                n->flags |= CLUSTER_NODE_HANDSHAKE;\n            } else if (!strcasecmp(s,\"noaddr\")) {\n                n->flags |= CLUSTER_NODE_NOADDR;\n            } else if (!strcasecmp(s,\"nofailover\")) {\n                n->flags |= CLUSTER_NODE_NOFAILOVER;\n            } else if (!strcasecmp(s,\"noflags\")) {\n                /* nothing to do */\n            } else {\n                serverPanic(\"Unknown flag in redis cluster 
config file\");\n            }\n            if (p) s = p+1;\n        }\n\n        /* Get master if any. Set the master and populate master's\n         * slave list. */\n        if (argv[3][0] != '-') {\n            if (verifyClusterNodeId(argv[3], sdslen(argv[3])) == C_ERR) {\n                sdsfreesplitres(argv, argc);\n                goto fmterr;\n            }\n            master = clusterLookupNode(argv[3], sdslen(argv[3]));\n            if (!master) {\n                master = createClusterNode(argv[3],0);\n                clusterAddNode(master);\n            }\n            /* shard_id can be absent if we are loading a nodes.conf generated\n             * by an older version of Redis; \n             * ignore replica's shard_id in the file, only use the primary's.\n             * If replica precedes primary in file, it will be corrected\n             * later by the auxShardIdSetter.\n             * Remove node from its old shard before adding it to the new one. */\n            if (aux_shard_id == 1) clusterRemoveNodeFromShard(n);\n            memcpy(n->shard_id, master->shard_id, CLUSTER_NAMELEN);\n            clusterAddNodeToShard(master->shard_id, n);\n            n->slaveof = master;\n            clusterNodeAddSlave(master,n);\n        } else if (aux_shard_id == 0) {\n            /* n is a primary but it does not have a persisted shard_id.\n             * This happens if we are loading a nodes.conf generated by\n             * an older version of Redis. We should manually update the\n             * shard membership in this case */\n            clusterAddNodeToShard(n->shard_id, n);\n        }\n\n        /* Set ping sent / pong received timestamps */\n        if (atoi(argv[4])) n->ping_sent = mstime();\n        if (atoi(argv[5])) n->pong_received = mstime();\n\n        /* Set configEpoch for this node.\n         * If the node is a replica, set its config epoch to 0.\n         * If it's a primary, load the config epoch from the configuration file. 
*/\n        n->configEpoch = (nodeIsSlave(n) && n->slaveof) ? 0 : strtoull(argv[6],NULL,10);\n\n        /* Populate hash slots served by this instance. */\n        for (j = 8; j < argc; j++) {\n            int start, stop;\n\n            if (argv[j][0] == '[') {\n                /* Here we handle migrating / importing slots */\n                int slot;\n                char direction;\n                clusterNode *cn;\n\n                p = strchr(argv[j],'-');\n                serverAssert(p != NULL);\n                *p = '\\0';\n                direction = p[1]; /* Either '>' or '<' */\n                slot = atoi(argv[j]+1);\n                if (slot < 0 || slot >= CLUSTER_SLOTS) {\n                    sdsfreesplitres(argv,argc);\n                    goto fmterr;\n                }\n                p += 3;\n\n                char *pr = strchr(p, ']');\n                size_t node_len = pr - p;\n                if (pr == NULL || verifyClusterNodeId(p, node_len) == C_ERR) {\n                    sdsfreesplitres(argv, argc);\n                    goto fmterr;\n                }\n                cn = clusterLookupNode(p, CLUSTER_NAMELEN);\n                if (!cn) {\n                    cn = createClusterNode(p,0);\n                    clusterAddNode(cn);\n                }\n                if (direction == '>') {\n                    server.cluster->migrating_slots_to[slot] = cn;\n                } else {\n                    server.cluster->importing_slots_from[slot] = cn;\n                }\n                continue;\n            } else if ((p = strchr(argv[j],'-')) != NULL) {\n                *p = '\\0';\n                start = atoi(argv[j]);\n                stop = atoi(p+1);\n            } else {\n                start = stop = atoi(argv[j]);\n            }\n            if (start < 0 || start >= CLUSTER_SLOTS ||\n                stop < 0 || stop >= CLUSTER_SLOTS)\n            {\n                sdsfreesplitres(argv,argc);\n                goto fmterr;\n       
     }\n            while(start <= stop) clusterAddSlot(n, start++);\n        }\n\n        sdsfreesplitres(argv,argc);\n    }\n    /* Config sanity check */\n    if (server.cluster->myself == NULL) goto fmterr;\n    if (!(myself->flags & (CLUSTER_NODE_MASTER | CLUSTER_NODE_SLAVE))) goto fmterr;\n    if (nodeIsSlave(myself) && myself->slaveof == NULL) goto fmterr;\n\n    zfree(line);\n    fclose(fp);\n\n    serverLog(LL_NOTICE,\"Node configuration loaded, I'm %.40s\", myself->name);\n\n    /* Something that should never happen: currentEpoch smaller than\n     * the max epoch found in the nodes configuration. However we handle this\n     * as some form of protection against manual editing of critical files. */\n    if (clusterGetMaxEpoch() > server.cluster->currentEpoch) {\n        server.cluster->currentEpoch = clusterGetMaxEpoch();\n    }\n    return C_OK;\n\nfmterr:\n    serverLog(LL_WARNING,\n        \"Unrecoverable error: corrupted cluster config file \\\"%s\\\".\", line);\n    zfree(line);\n    if (fp) fclose(fp);\n    exit(1);\n}\n\n/* Cluster node configuration is exactly the same as CLUSTER NODES output.\n *\n * This function writes the node config and returns C_OK (0); on error\n * C_ERR (-1) is returned.\n *\n * Note: we need to write the file in an atomic way from the point of view\n * of the POSIX filesystem semantics, so that if the server is stopped\n * or crashes during the write, we'll end up with either the old file or the\n * new one. To achieve this we write the full payload to a temporary file,\n * fsync it if requested, and atomically rename(2) it over the old config\n * file. 
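 *
 * The save path below follows the classic POSIX atomic-replace idiom; as a
 * rough sketch (error handling and the exact temp file name omitted):
 *
 *   fd = open("nodes.conf.tmp-<pid>-<ms>", O_WRONLY|O_CREAT, 0644);
 *   write(fd, payload, len);          // looped until fully written
 *   fsync(fd);                        // make the payload durable first
 *   rename(tmpfile, "nodes.conf");    // atomic: old or new, never a mix
 *   fsyncFileDir("nodes.conf");       // make the rename itself durable
 *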
*/\nint clusterSaveConfig(int do_fsync) {\n    sds ci,tmpfilename;\n    size_t content_size,offset = 0;\n    ssize_t written_bytes;\n    int fd = -1;\n    int retval = C_ERR;\n\n    server.cluster->todo_before_sleep &= ~CLUSTER_TODO_SAVE_CONFIG;\n\n    /* Get the nodes description and concatenate our \"vars\" directive to\n     * save currentEpoch and lastVoteEpoch. */\n    ci = clusterGenNodesDescription(NULL, CLUSTER_NODE_HANDSHAKE, 0);\n    ci = sdscatprintf(ci,\"vars currentEpoch %llu lastVoteEpoch %llu\\n\",\n        (unsigned long long) server.cluster->currentEpoch,\n        (unsigned long long) server.cluster->lastVoteEpoch);\n    content_size = sdslen(ci);\n\n    /* Create a temp file with the new content. */\n    tmpfilename = sdscatfmt(sdsempty(),\"%s.tmp-%i-%I\",\n        server.cluster_configfile,(int) getpid(),mstime());\n    if ((fd = open(tmpfilename,O_WRONLY|O_CREAT,0644)) == -1) {\n        serverLog(LL_WARNING,\"Could not open temp cluster config file: %s\",strerror(errno));\n        goto cleanup;\n    }\n\n    while (offset < content_size) {\n        written_bytes = write(fd,ci + offset,content_size - offset);\n        if (written_bytes <= 0) {\n            if (errno == EINTR) continue;\n            serverLog(LL_WARNING,\"Failed after writing (%zu) bytes to tmp cluster config file: %s\",\n                offset,strerror(errno));\n            goto cleanup;\n        }\n        offset += written_bytes;\n    }\n\n    if (do_fsync) {\n        server.cluster->todo_before_sleep &= ~CLUSTER_TODO_FSYNC_CONFIG;\n        if (redis_fsync(fd) == -1) {\n            serverLog(LL_WARNING,\"Could not sync tmp cluster config file: %s\",strerror(errno));\n            goto cleanup;\n        }\n    }\n\n    if (rename(tmpfilename, server.cluster_configfile) == -1) {\n        serverLog(LL_WARNING,\"Could not rename tmp cluster config file: %s\",strerror(errno));\n        goto cleanup;\n    }\n\n    if (do_fsync) {\n        if (fsyncFileDir(server.cluster_configfile) == 
-1) {\n            serverLog(LL_WARNING,\"Could not sync cluster config file dir: %s\",strerror(errno));\n            goto cleanup;\n        }\n    }\n    retval = C_OK; /* If we reached this point, everything is fine. */\n\ncleanup:\n    if (fd != -1) close(fd);\n    if (retval != C_OK) unlink(tmpfilename);\n    sdsfree(tmpfilename);\n    sdsfree(ci);\n    return retval;\n}\n\nvoid clusterSaveConfigOrDie(int do_fsync) {\n    if (clusterSaveConfig(do_fsync) == C_ERR) {\n        serverLog(LL_WARNING,\"Fatal: can't update cluster config file.\");\n        exit(1);\n    }\n}\n\n/* Lock the cluster config using flock(), and retain the file descriptor used to\n * acquire the lock so that the file will be locked as long as the process is up.\n *\n * The lock is acquired once at startup and held for the whole lifetime of the\n * process, so that two different nodes never start using the same\n * configuration file.\n *\n * On success C_OK is returned, otherwise an error is logged and\n * the function returns C_ERR to signal a lock was not acquired. */\nint clusterLockConfig(char *filename) {\n/* flock() does not exist on Solaris\n * and a fcntl-based solution won't help, as we constantly re-open that file,\n * which will release _all_ locks anyway\n */\n#if !defined(__sun)\n    /* To lock it, we need to open the file in a way that it is created if\n     * it does not exist, otherwise there is a race condition with other\n     * processes. */\n    int fd = open(filename,O_WRONLY|O_CREAT|O_CLOEXEC,0644);\n    if (fd == -1) {\n        serverLog(LL_WARNING,\n            \"Can't open %s in order to acquire a lock: %s\",\n            filename, strerror(errno));\n        return C_ERR;\n    }\n\n    if (flock(fd,LOCK_EX|LOCK_NB) == -1) {\n        if (errno == EWOULDBLOCK) {\n            serverLog(LL_WARNING,\n                 \"Sorry, the cluster configuration file %s is already used \"\n                 \"by a different Redis Cluster node. 
Please make sure that \"\n                 \"different nodes use different cluster configuration \"\n                 \"files.\", filename);\n        } else {\n            serverLog(LL_WARNING,\n                \"Impossible to lock %s: %s\", filename, strerror(errno));\n        }\n        close(fd);\n        return C_ERR;\n    }\n    /* Lock acquired: leak the 'fd' by not closing it until shutdown time, so that\n     * we'll retain the lock on the file as long as the process exists.\n     *\n     * After a fork, the child process inherits the fd opened by the parent\n     * process; we need to save `fd` in `cluster_config_file_lock_fd`, so that\n     * redisFork() can close it in the child process.\n     * If it is not closed and the main process is killed with -9 while a child\n     * process (e.g. redis-aof-rewrite) is still alive, the lock fd is still\n     * held by the child process, so a restarted main process will fail to\n     * acquire the lock, which means it will fail to start. */\n    server.cluster_config_file_lock_fd = fd;\n#else\n    UNUSED(filename);\n#endif /* __sun */\n\n    return C_OK;\n}\n\n/* Derives our ports to be announced in the cluster bus. */\nvoid deriveAnnouncedPorts(int *announced_tcp_port, int *announced_tls_port,\n                          int *announced_cport) {\n    /* Config overriding announced ports. */\n    *announced_tcp_port = server.cluster_announce_port ? \n                          server.cluster_announce_port : server.port;\n    *announced_tls_port = server.cluster_announce_tls_port ? \n                          server.cluster_announce_tls_port : server.tls_port;\n    /* Derive cluster bus port. 
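     *
     * For example, with a client port of 6379 and neither cluster-port nor
     * cluster-announce-bus-port configured, the announced bus port is
     * 6379 + CLUSTER_PORT_INCR = 16379.
     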
*/\n    if (server.cluster_announce_bus_port) {\n        *announced_cport = server.cluster_announce_bus_port;\n    } else if (server.cluster_port) {\n        *announced_cport = server.cluster_port;\n    } else {\n        *announced_cport = defaultClientPort() + CLUSTER_PORT_INCR;\n    }\n}\n\n/* Some flags (currently just the NOFAILOVER flag) may need to be updated\n * in the \"myself\" node based on the current configuration of the node,\n * which may change at runtime via CONFIG SET. This function changes the\n * set of flags in myself->flags accordingly. */\nvoid clusterUpdateMyselfFlags(void) {\n    if (!myself) return;\n    int oldflags = myself->flags;\n    int nofailover = server.cluster_slave_no_failover ?\n                     CLUSTER_NODE_NOFAILOVER : 0;\n    myself->flags &= ~CLUSTER_NODE_NOFAILOVER;\n    myself->flags |= nofailover;\n    if (myself->flags != oldflags) {\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_UPDATE_STATE);\n    }\n}\n\n\n/* We want to keep myself->tcp_port/tls_port/cport in sync with the\n * cluster-announce-port/cluster-announce-tls-port/cluster-announce-bus-port\n * options. These options can be set at runtime via CONFIG SET. */\nvoid clusterUpdateMyselfAnnouncedPorts(void) {\n    if (!myself) return;\n    deriveAnnouncedPorts(&myself->tcp_port,&myself->tls_port,&myself->cport);\n}\n\n/* We want to keep myself->ip in sync with the cluster-announce-ip option.\n * The option can be set at runtime via CONFIG SET. 
*/\nvoid clusterUpdateMyselfIp(void) {\n    if (!myself) return;\n    static char *prev_ip = NULL;\n    char *curr_ip = server.cluster_announce_ip;\n    int changed = 0;\n\n    if (prev_ip == NULL && curr_ip != NULL) changed = 1;\n    else if (prev_ip != NULL && curr_ip == NULL) changed = 1;\n    else if (prev_ip && curr_ip && strcmp(prev_ip,curr_ip)) changed = 1;\n\n    if (changed) {\n        if (prev_ip) zfree(prev_ip);\n\n        if (curr_ip) {\n            /* Keep a private copy of the announced IP address, by\n             * duplicating the string: this way later we can check if\n             * the address really changed. */\n            prev_ip = zstrdup(curr_ip);\n            redis_strlcpy(myself->ip,server.cluster_announce_ip,NET_IP_STR_LEN);\n        } else {\n            prev_ip = NULL;\n            myself->ip[0] = '\\0'; /* Force autodetection. */\n        }\n    }\n}\n\n/* Update the hostname for the specified node with the provided C string. */\nstatic void updateAnnouncedHostname(clusterNode *node, char *new) {\n    /* Previous and new hostname are the same, no need to update. 
*/\n    if (new && !strcmp(new, node->hostname)) {\n        return;\n    } else if (!new && (sdslen(node->hostname) == 0)) {\n        return;\n    }\n\n    if (new) {\n        node->hostname = sdscpy(node->hostname, new);\n    } else if (sdslen(node->hostname) != 0) {\n        sdsclear(node->hostname);\n    }\n    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n}\n\nstatic void updateAnnouncedHumanNodename(clusterNode *node, char *new) {\n    if (new && !strcmp(new, node->human_nodename)) {\n        return;\n    } else if (!new && (sdslen(node->human_nodename) == 0)) {\n        return;\n    }\n    \n    if (new) {\n        node->human_nodename = sdscpy(node->human_nodename, new);\n    } else if (sdslen(node->human_nodename) != 0) {\n        sdsclear(node->human_nodename);\n    }\n    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n}\n\nstatic void assignShardIdToNode(clusterNode *node, const char *shard_id, int flag) {\n    clusterRemoveNodeFromShard(node);\n    memcpy(node->shard_id, shard_id, CLUSTER_NAMELEN);\n    clusterAddNodeToShard(shard_id, node);\n    clusterDoBeforeSleep(flag); \n}\n\nstatic void updateShardId(clusterNode *node, const char *shard_id) {\n    if (shard_id && memcmp(node->shard_id, shard_id, CLUSTER_NAMELEN) != 0) {\n        /* We always make our best effort to keep the shard-id consistent\n         * between the master and its replicas:\n         *\n         * 1. When updating the master's shard-id, we simultaneously update the\n         *    shard-id of all its replicas to ensure consistency.\n         * 2. When updating replica's shard-id, if it differs from its master's shard-id,\n         *    we discard this replica's shard-id and continue using master's shard-id.\n         *    This applies even if the master does not support shard-id, in which\n         *    case we rely on the master's randomly generated shard-id. 
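         *
         * For example, if master M starts being seen with shard-id X while its
         * replica R still carries shard-id Y, rule 1 rewrites R's shard-id to
         * X; conversely, if only R reports a shard-id differing from M's,
         * rule 2 discards it and R keeps M's shard-id.
         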
*/\n        if (node->slaveof == NULL) {\n            assignShardIdToNode(node, shard_id, CLUSTER_TODO_SAVE_CONFIG);\n            for (int i = 0; i < clusterNodeNumSlaves(node); i++) {\n                clusterNode *slavenode = clusterNodeGetSlave(node, i);\n                if (memcmp(slavenode->shard_id, shard_id, CLUSTER_NAMELEN) != 0)\n                    assignShardIdToNode(slavenode, shard_id, CLUSTER_TODO_SAVE_CONFIG|CLUSTER_TODO_FSYNC_CONFIG);\n            }\n        } else if (memcmp(node->slaveof->shard_id, shard_id, CLUSTER_NAMELEN) == 0) {\n            assignShardIdToNode(node, shard_id, CLUSTER_TODO_SAVE_CONFIG);\n        }\n    }\n}\n\n/* Update my hostname based on server configuration values */\nvoid clusterUpdateMyselfHostname(void) {\n    if (!myself) return;\n    updateAnnouncedHostname(myself, server.cluster_announce_hostname);\n}\n\nvoid clusterUpdateMyselfHumanNodename(void) {\n    if (!myself) return;\n    updateAnnouncedHumanNodename(myself, server.cluster_announce_human_nodename);\n}\n\nvoid clusterInit(void) {\n    int saveconf = 0;\n\n    server.cluster = zmalloc(sizeof(struct clusterState));\n    server.cluster->myself = NULL;\n    server.cluster->currentEpoch = 0;\n    server.cluster->state = CLUSTER_FAIL;\n    server.cluster->size = 0;\n    server.cluster->todo_before_sleep = 0;\n    server.cluster->nodes = dictCreate(&clusterNodesDictType);\n    server.cluster->shards = dictCreate(&clusterSdsToListType);\n    server.cluster->nodes_black_list =\n        dictCreate(&clusterNodesBlackListDictType);\n    server.cluster->failover_auth_time = 0;\n    server.cluster->failover_auth_count = 0;\n    server.cluster->failover_auth_rank = 0;\n    server.cluster->failover_auth_epoch = 0;\n    server.cluster->cant_failover_reason = CLUSTER_CANT_FAILOVER_NONE;\n    server.cluster->lastVoteEpoch = 0;\n\n    /* Initialize stats */\n    for (int i = 0; i < CLUSTERMSG_TYPE_COUNT; i++) {\n        server.cluster->stats_bus_messages_sent[i] = 0;\n        
server.cluster->stats_bus_messages_received[i] = 0;\n    }\n    server.cluster->stats_pfail_nodes = 0;\n    server.cluster->stat_cluster_links_buffer_limit_exceeded = 0;\n\n    memset(server.cluster->slots,0, sizeof(server.cluster->slots));\n    clusterCloseAllSlots();\n\n    memset(server.cluster->owner_not_claiming_slot, 0, sizeof(server.cluster->owner_not_claiming_slot));\n\n    /* Lock the cluster config file to make sure every node uses\n     * its own nodes.conf. */\n    server.cluster_config_file_lock_fd = -1;\n    if (clusterLockConfig(server.cluster_configfile) == C_ERR)\n        exit(1);\n\n    /* Load or create a new nodes configuration. */\n    if (clusterLoadConfig(server.cluster_configfile) == C_ERR) {\n        /* No configuration found. We will just use the random name provided\n         * by the createClusterNode() function. */\n        myself = server.cluster->myself =\n            createClusterNode(NULL,CLUSTER_NODE_MYSELF|CLUSTER_NODE_MASTER);\n        serverLog(LL_NOTICE,\"No cluster configuration found, I'm %.40s\",\n            myself->name);\n        clusterAddNode(myself);\n        clusterAddNodeToShard(myself->shard_id, myself);\n        saveconf = 1;\n    }\n    if (saveconf) clusterSaveConfigOrDie(1);\n\n    /* Port sanity check II\n     * The other handshake port check is triggered too late to stop\n     * us from trying to use a too-high cluster port number. */\n    int port = defaultClientPort();\n    if (!server.cluster_port && port > (65535-CLUSTER_PORT_INCR)) {\n        serverLog(LL_WARNING, \"Redis port number too high. \"\n                   \"Cluster communication port is 10,000 port \"\n                   \"numbers higher than your Redis port. 
\"\n                   \"Your Redis port number must be 55535 or less.\");\n        exit(1);\n    }\n    if (!server.bindaddr_count) {\n        serverLog(LL_WARNING, \"No bind address is configured, but it is required for the Cluster bus.\");\n        exit(1);\n    }\n\n    /* Set myself->port/cport/pport to my listening ports, we'll just need to\n     * discover the IP address via MEET messages. */\n    deriveAnnouncedPorts(&myself->tcp_port, &myself->tls_port, &myself->cport);\n\n    server.cluster->mf_end = 0;\n    server.cluster->mf_slave = NULL;\n    resetManualFailover();\n    clusterUpdateMyselfFlags();\n    clusterUpdateMyselfIp();\n    clusterUpdateMyselfHostname();\n    clusterUpdateMyselfHumanNodename();\n\n    getRandomHexChars(server.cluster->internal_secret, CLUSTER_INTERNALSECRETLEN);\n}\n\nvoid clusterInitLast(void) {\n    if (connectionIndexByType(connTypeOfCluster()->get_type(NULL)) < 0) {\n        serverLog(LL_WARNING, \"Missing connection type %s, but it is required for the Cluster bus.\", connTypeOfCluster()->get_type(NULL));\n        exit(1);\n    }\n\n    int port = defaultClientPort();\n    connListener *listener = &server.clistener;\n    listener->count = 0;\n    listener->bindaddr = server.bindaddr;\n    listener->bindaddr_count = server.bindaddr_count;\n    listener->port = server.cluster_port ? server.cluster_port : port + CLUSTER_PORT_INCR;\n    listener->ct = connTypeOfCluster();\n    if (connListen(listener) == C_ERR ) {\n        /* Note: the following log text is matched by the test suite. 
*/\n        serverLog(LL_WARNING, \"Failed listening on port %u (cluster), aborting.\", listener->port);\n        exit(1);\n    }\n    \n    if (createSocketAcceptHandler(&server.clistener, clusterAcceptHandler) != C_OK) {\n        serverPanic(\"Unrecoverable error creating Redis Cluster socket accept handler.\");\n    }\n}\n\n/* Reset a node performing a soft or hard reset:\n *\n * 1) All other nodes are forgotten.\n * 2) All the assigned / open slots are released.\n * 3) If the node is a slave, it turns into a master.\n * 4) Only for hard reset: a new Node ID is generated.\n * 5) Only for hard reset: currentEpoch and configEpoch are set to 0.\n * 6) The new configuration is saved and the cluster state updated.\n * 7) If the node was a slave, the whole data set is flushed away. */\nvoid clusterReset(int hard) {\n    dictIterator di;\n    dictEntry *de;\n    int j;\n\n    /* Turn into master. */\n    if (nodeIsSlave(myself)) {\n        asmFinalizeMasterTask();\n        clusterSetNodeAsMaster(myself);\n        replicationUnsetMaster();\n        emptyData(-1,EMPTYDB_NO_FLAGS,NULL);\n    }\n\n    /* Close slots, reset manual failover state. */\n    clusterCloseAllSlots();\n    resetManualFailover();\n\n    /* Cancel all ASM tasks */\n    clusterAsmCancel(NULL, \"CLUSTER RESET\");\n    asmCancelTrimJobs();\n\n    /* Unassign all the slots. */\n    for (j = 0; j < CLUSTER_SLOTS; j++) clusterDelSlot(j);\n\n    /* Recreate shards dict */\n    dictEmpty(server.cluster->shards, NULL);\n\n    /* Forget all the nodes, but myself. */\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (node == myself) continue;\n        clusterDelNode(node);\n    }\n    dictResetIterator(&di);\n\n    /* Empty the nodes blacklist. */\n    dictEmpty(server.cluster->nodes_black_list, NULL);\n\n    /* Hard reset only: set epochs to 0, change node ID. 
*/\n    if (hard) {\n        sds oldname;\n\n        server.cluster->currentEpoch = 0;\n        server.cluster->lastVoteEpoch = 0;\n        myself->configEpoch = 0;\n        serverLog(LL_NOTICE, \"configEpoch set to 0 via CLUSTER RESET HARD\");\n\n        /* To change the Node ID we need to remove the old name from the\n         * nodes table, change the ID, and re-add back with new name. */\n        oldname = sdsnewlen(myself->name, CLUSTER_NAMELEN);\n        dictDelete(server.cluster->nodes,oldname);\n        sdsfree(oldname);\n        getRandomHexChars(myself->name, CLUSTER_NAMELEN);\n        getRandomHexChars(myself->shard_id, CLUSTER_NAMELEN);\n        clusterAddNode(myself);\n        serverLog(LL_NOTICE,\"Node hard reset, now I'm %.40s\", myself->name);\n    }\n\n    /* Re-populate shards */\n    clusterAddNodeToShard(myself->shard_id, myself);\n\n    /* Make sure to persist the new config and update the state. */\n    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                         CLUSTER_TODO_UPDATE_STATE|\n                         CLUSTER_TODO_FSYNC_CONFIG);\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER communication link\n * -------------------------------------------------------------------------- */\nstatic clusterMsgSendBlock *createClusterMsgSendBlock(int type, uint32_t msglen) {\n    uint32_t blocklen = msglen + sizeof(clusterMsgSendBlock);\n    clusterMsgSendBlock *msgblock = zcalloc(blocklen);\n    msgblock->refcount = 1;\n    msgblock->totlen = blocklen;\n    server.stat_cluster_links_memory += blocklen;\n    clusterBuildMessageHdr(getMessageFromSendBlock(msgblock),type,msglen);\n    return msgblock;\n}\n\nstatic void clusterMsgSendBlockDecrRefCount(void *node) {\n    clusterMsgSendBlock *msgblock = (clusterMsgSendBlock*)node;\n    msgblock->refcount--;\n    serverAssert(msgblock->refcount >= 0);\n    if (msgblock->refcount == 0) {\n        server.stat_cluster_links_memory -= 
msgblock->totlen;\n        zfree(msgblock);\n    }\n}\n\nclusterLink *createClusterLink(clusterNode *node) {\n    clusterLink *link = zmalloc(sizeof(*link));\n    link->ctime = mstime();\n    link->send_msg_queue = listCreate();\n    listSetFreeMethod(link->send_msg_queue, clusterMsgSendBlockDecrRefCount);\n    link->head_msg_send_offset = 0;\n    link->send_msg_queue_mem = sizeof(list);\n    link->rcvbuf = zmalloc(link->rcvbuf_alloc = RCVBUF_INIT_LEN);\n    link->rcvbuf_len = 0;\n    server.stat_cluster_links_memory += link->rcvbuf_alloc + link->send_msg_queue_mem;\n    link->conn = NULL;\n    link->node = node;\n    /* The related node can only be known at link creation time if this is an outbound link */\n    link->inbound = (node == NULL);\n    if (!link->inbound) {\n        node->link = link;\n    }\n    return link;\n}\n\n/* Free a cluster link, but do not free the associated node, of course.\n * This function just makes sure that the node originally associated\n * with this link has its 'link' field set to NULL. 
*/\nvoid freeClusterLink(clusterLink *link) {\n    if (link->conn) {\n        connClose(link->conn);\n        link->conn = NULL;\n    }\n    server.stat_cluster_links_memory -= sizeof(list) + listLength(link->send_msg_queue)*sizeof(listNode);\n    listRelease(link->send_msg_queue);\n    server.stat_cluster_links_memory -= link->rcvbuf_alloc;\n    zfree(link->rcvbuf);\n    if (link->node) {\n        if (link->node->link == link) {\n            serverAssert(!link->inbound);\n            link->node->link = NULL;\n        } else if (link->node->inbound_link == link) {\n            serverAssert(link->inbound);\n            link->node->inbound_link = NULL;\n        }\n    }\n    zfree(link);\n}\n\nvoid setClusterNodeToInboundClusterLink(clusterNode *node, clusterLink *link) {\n    serverAssert(!link->node);\n    serverAssert(link->inbound);\n    if (node->inbound_link) {\n        /* A peer may disconnect and then reconnect with us, and it's not guaranteed that\n         * we would always process the disconnection of the existing inbound link before\n         * accepting a new inbound link. Therefore, it's possible to have more than\n         * one inbound link from the same node at the same time. Our cleanup logic assumes\n         * a one-to-one relationship between nodes and inbound links, so we need to kill\n         * one of the links. The existing link is more likely the outdated one, but it's\n         * possible the other node may need to open another link. 
*/\n        serverLog(LL_DEBUG, \"Replacing inbound link fd %d from node %.40s with fd %d\",\n                node->inbound_link->conn->fd, node->name, link->conn->fd);\n        freeClusterLink(node->inbound_link);\n    }\n    serverAssert(!node->inbound_link);\n    node->inbound_link = link;\n    link->node = node;\n}\n\nstatic void clusterConnAcceptHandler(connection *conn) {\n    clusterLink *link;\n\n    if (connGetState(conn) != CONN_STATE_CONNECTED) {\n        serverLog(LL_VERBOSE,\n                \"Error accepting cluster node connection: %s\", connGetLastError(conn));\n        connClose(conn);\n        return;\n    }\n\n    /* Create a link object we use to handle the connection.\n     * It gets passed to the readable handler when data is available.\n     * Initially the link->node pointer is set to NULL as we don't know\n     * which node it is yet: the right node is referenced once we learn\n     * the node identity. */\n    link = createClusterLink(NULL);\n    link->conn = conn;\n    connSetPrivateData(conn, link);\n\n    /* Register read handler */\n    connSetReadHandler(conn, clusterReadHandler);\n}\n\n#define MAX_CLUSTER_ACCEPTS_PER_CALL 1000\nvoid clusterAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    int cport, cfd;\n    int max = MAX_CLUSTER_ACCEPTS_PER_CALL;\n    char cip[NET_IP_STR_LEN];\n    int require_auth = TLS_CLIENT_AUTH_YES;\n    UNUSED(el);\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    /* If the server is starting up, don't accept cluster connections:\n     * UPDATE messages may interact with the database content. 
*/\n    if (server.masterhost == NULL && server.loading) return;\n\n    while(max--) {\n        cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);\n        if (cfd == ANET_ERR) {\n            if (anetAcceptFailureNeedsRetry(errno))\n                continue;\n            if (errno != EWOULDBLOCK)\n                serverLog(LL_VERBOSE,\n                    \"Error accepting cluster node: %s\", server.neterr);\n            return;\n        }\n\n        connection *conn = connCreateAccepted(server.el, connTypeOfCluster(), cfd, &require_auth);\n\n        /* Make sure connection is not in an error state */\n        if (connGetState(conn) != CONN_STATE_ACCEPTING) {\n            serverLog(LL_VERBOSE,\n                \"Error creating an accepting connection for cluster node: %s\",\n                    connGetLastError(conn));\n            connClose(conn);\n            return;\n        }\n        connEnableTcpNoDelay(conn);\n        connKeepAlive(conn,server.cluster_node_timeout / 1000 * 2);\n\n        /* Use non-blocking I/O for cluster messages. */\n        serverLog(LL_VERBOSE,\"Accepting cluster node connection from %s:%d\", cip, cport);\n\n        /* Accept the connection now.  connAccept() may call our handler directly\n         * or schedule it for later depending on connection implementation.\n         */\n        if (connAccept(conn, clusterConnAcceptHandler) == C_ERR) {\n            if (connGetState(conn) == CONN_STATE_ERROR)\n                serverLog(LL_VERBOSE,\n                        \"Error accepting cluster node connection: %s\",\n                        connGetLastError(conn));\n            connClose(conn);\n            return;\n        }\n    }\n}\n\n/* Return the approximated number of sockets we are using in order to\n * take the cluster bus connections. */\nunsigned long getClusterConnectionsCount(void) {\n    /* We decrement the number of nodes by one, since there is the\n     * \"myself\" node too in the list. 
Each node uses two file descriptors,\n     * one incoming and one outgoing, thus the multiplication by 2. */\n    return server.cluster_enabled ?\n           ((dictSize(server.cluster->nodes)-1)*2) : 0;\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER node API\n * -------------------------------------------------------------------------- */\n\n/* Create a new cluster node, with the specified flags.\n * If \"nodename\" is NULL this is considered a first handshake and a random\n * node name is assigned to this node (it will be fixed later when we'll\n * receive the first pong).\n *\n * The node is created and returned to the user, but it is not automatically\n * added to the nodes hash table. */\nclusterNode *createClusterNode(char *nodename, int flags) {\n    clusterNode *node = zmalloc(sizeof(*node));\n\n    if (nodename)\n        memcpy(node->name, nodename, CLUSTER_NAMELEN);\n    else\n        getRandomHexChars(node->name, CLUSTER_NAMELEN);\n    getRandomHexChars(node->shard_id, CLUSTER_NAMELEN);\n    node->ctime = mstime();\n    node->configEpoch = 0;\n    node->flags = flags;\n    memset(node->slots,0,sizeof(node->slots));\n    node->slot_info_pairs = NULL;\n    node->slot_info_pairs_count = 0;\n    node->numslots = 0;\n    node->numslaves = 0;\n    node->slaves = NULL;\n    node->slaveof = NULL;\n    node->last_in_ping_gossip = 0;\n    node->ping_sent = node->pong_received = 0;\n    node->data_received = 0;\n    node->fail_time = 0;\n    node->link = NULL;\n    node->inbound_link = NULL;\n    memset(node->ip,0,sizeof(node->ip));\n    node->hostname = sdsempty();\n    node->human_nodename = sdsempty();\n    node->tcp_port = 0;\n    node->cport = 0;\n    node->tls_port = 0;\n    node->fail_reports = listCreate();\n    node->voted_time = 0;\n    node->orphaned_time = 0;\n    node->repl_offset_time = 0;\n    node->repl_offset = 0;\n    listSetFreeMethod(node->fail_reports,zfree);\n    return node;\n}\n\n/* This 
function is called every time we get a failure report from a node.\n * The side effect is to populate the fail_reports list (or to update\n * the timestamp of an existing report).\n *\n * 'failing' is the node that is in failure state according to the\n * 'sender' node.\n *\n * The function returns 0 if it just updates a timestamp of an existing\n * failure report from the same sender. 1 is returned if a new failure\n * report is created. */\nint clusterNodeAddFailureReport(clusterNode *failing, clusterNode *sender) {\n    list *l = failing->fail_reports;\n    listNode *ln;\n    listIter li;\n    clusterNodeFailReport *fr;\n\n    /* If a failure report from the same sender already exists, just update\n     * the timestamp. */\n    listRewind(l,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        fr = ln->value;\n        if (fr->node == sender) {\n            fr->time = mstime();\n            return 0;\n        }\n    }\n\n    /* Otherwise create a new report. */\n    fr = zmalloc(sizeof(*fr));\n    fr->node = sender;\n    fr->time = mstime();\n    listAddNodeTail(l,fr);\n    return 1;\n}\n\n/* Remove failure reports that are too old, where too old means reasonably\n * older than the global node timeout. Note that anyway for a node to be\n * flagged as FAIL we need to have a local PFAIL state that is at least\n * older than the global node timeout, so we don't just trust the number\n * of failure reports from other nodes. 
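 *
 * For example, with cluster-node-timeout 15000 and the (assumed default)
 * CLUSTER_FAIL_REPORT_VALIDITY_MULT of 2, failure reports older than 30
 * seconds are purged.
 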
*/\nvoid clusterNodeCleanupFailureReports(clusterNode *node) {\n    list *l = node->fail_reports;\n    listNode *ln;\n    listIter li;\n    clusterNodeFailReport *fr;\n    mstime_t maxtime = server.cluster_node_timeout *\n                     CLUSTER_FAIL_REPORT_VALIDITY_MULT;\n    mstime_t now = mstime();\n\n    listRewind(l,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        fr = ln->value;\n        if (now - fr->time > maxtime) listDelNode(l,ln);\n    }\n}\n\n/* Remove the failing report for 'node' if it was previously considered\n * failing by 'sender'. This function is called when a node informs us via\n * gossip that a node is OK from its point of view (no FAIL or PFAIL flags).\n *\n * Note that this function is called relatively often as it gets called even\n * when there are no nodes failing, and is O(N), however when the cluster is\n * fine the failure reports list is empty so the function runs in constant\n * time.\n *\n * The function returns 1 if the failure report was found and removed.\n * Otherwise 0 is returned. */\nint clusterNodeDelFailureReport(clusterNode *node, clusterNode *sender) {\n    list *l = node->fail_reports;\n    listNode *ln;\n    listIter li;\n    clusterNodeFailReport *fr;\n\n    /* Search for a failure report from this sender. */\n    listRewind(l,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        fr = ln->value;\n        if (fr->node == sender) break;\n    }\n    if (!ln) return 0; /* No failure report from this sender. */\n\n    /* Remove the failure report. */\n    listDelNode(l,ln);\n    clusterNodeCleanupFailureReports(node);\n    return 1;\n}\n\n/* Return the number of external nodes that believe 'node' is failing,\n * not including this node, that may have a PFAIL or FAIL state for this\n * node as well. 
*/\nint clusterNodeFailureReportsCount(clusterNode *node) {\n    clusterNodeCleanupFailureReports(node);\n    return listLength(node->fail_reports);\n}\n\nint clusterNodeRemoveSlave(clusterNode *master, clusterNode *slave) {\n    int j;\n\n    for (j = 0; j < master->numslaves; j++) {\n        if (master->slaves[j] == slave) {\n            if ((j+1) < master->numslaves) {\n                int remaining_slaves = (master->numslaves - j) - 1;\n                memmove(master->slaves+j,master->slaves+(j+1),\n                        (sizeof(*master->slaves) * remaining_slaves));\n            }\n            master->numslaves--;\n            if (master->numslaves == 0)\n                master->flags &= ~CLUSTER_NODE_MIGRATE_TO;\n            return C_OK;\n        }\n    }\n    return C_ERR;\n}\n\nint clusterNodeAddSlave(clusterNode *master, clusterNode *slave) {\n    int j;\n\n    /* If it's already a slave, don't add it again. */\n    for (j = 0; j < master->numslaves; j++)\n        if (master->slaves[j] == slave) return C_ERR;\n    master->slaves = zrealloc(master->slaves,\n        sizeof(clusterNode*)*(master->numslaves+1));\n    master->slaves[master->numslaves] = slave;\n    master->numslaves++;\n    master->flags |= CLUSTER_NODE_MIGRATE_TO;\n    return C_OK;\n}\n\nint clusterCountNonFailingSlaves(clusterNode *n) {\n    int j, okslaves = 0;\n\n    for (j = 0; j < n->numslaves; j++)\n        if (!nodeFailed(n->slaves[j])) okslaves++;\n    return okslaves;\n}\n\n/* Low level cleanup of the node structure. Only called by clusterDelNode(). */\nvoid freeClusterNode(clusterNode *n) {\n    sds nodename;\n    int j;\n\n    /* If the node has associated slaves, we have to set\n     * all the slaves->slaveof fields to NULL (unknown). */\n    for (j = 0; j < n->numslaves; j++)\n        n->slaves[j]->slaveof = NULL;\n\n    /* Remove this node from the list of slaves of its master. 
*/\n    if (nodeIsSlave(n) && n->slaveof) clusterNodeRemoveSlave(n->slaveof,n);\n\n    /* Unlink from the set of nodes. */\n    nodename = sdsnewlen(n->name, CLUSTER_NAMELEN);\n    serverAssert(dictDelete(server.cluster->nodes,nodename) == DICT_OK);\n    sdsfree(nodename);\n    sdsfree(n->hostname);\n    sdsfree(n->human_nodename);\n\n    /* Release links and associated data structures. */\n    if (n->link) freeClusterLink(n->link);\n    if (n->inbound_link) freeClusterLink(n->inbound_link);\n    listRelease(n->fail_reports);\n    zfree(n->slaves);\n    zfree(n);\n}\n\n/* Add a node to the nodes hash table */\nvoid clusterAddNode(clusterNode *node) {\n    int retval;\n\n    retval = dictAdd(server.cluster->nodes,\n            sdsnewlen(node->name,CLUSTER_NAMELEN), node);\n    serverAssert(retval == DICT_OK);\n}\n\n/* Remove a node from the cluster. The function performs the high level\n * cleanup, calling freeClusterNode() for the low level cleanup.\n * Here we do the following:\n *\n * 1) Mark all the slots handled by it as unassigned.\n * 2) Remove all the failure reports sent by this node and referenced by\n *    other nodes.\n * 3) Remove the node from the owning shard\n * 4) Cancel all ASM tasks that involve the node.\n * 5) Free the node with freeClusterNode() that will in turn remove it\n *    from the hash table and from the list of slaves of its master, if\n *    it is a slave node.\n */\nvoid clusterDelNode(clusterNode *delnode) {\n    int j;\n    dictIterator di;\n    dictEntry *de;\n\n    /* 1) Mark slots as unassigned. */\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (server.cluster->importing_slots_from[j] == delnode)\n            server.cluster->importing_slots_from[j] = NULL;\n        if (server.cluster->migrating_slots_to[j] == delnode)\n            server.cluster->migrating_slots_to[j] = NULL;\n        if (server.cluster->slots[j] == delnode)\n            clusterDelSlot(j);\n    }\n\n    /* 2) Remove failure reports. 
*/\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (node == delnode) continue;\n        clusterNodeDelFailureReport(node,delnode);\n    }\n    dictResetIterator(&di);\n\n    /* 3) Remove the node from the owning shard */\n    clusterRemoveNodeFromShard(delnode);\n\n    /* 4) Cancel all ASM tasks that involve the node. */\n    clusterAsmCancelByNode(delnode, \"node deleted\");\n\n    /* 5) Free the node, unlinking it from the cluster. */\n    freeClusterNode(delnode);\n}\n\n/* Node lookup by name */\nclusterNode *clusterLookupNode(const char *name, int length) {\n    if (verifyClusterNodeId(name, length) != C_OK) return NULL;\n    sds s = sdsnewlen(name, length);\n    dictEntry *de = dictFind(server.cluster->nodes, s);\n    sdsfree(s);\n    if (de == NULL) return NULL;\n    return dictGetVal(de);\n}\n\nconst char *clusterGetSecret(size_t *len) {\n    if (!server.cluster) {\n        return NULL;\n    }\n    *len = CLUSTER_INTERNALSECRETLEN;\n    return server.cluster->internal_secret;\n}\n\n/* Get all the nodes in my shard.\n * Note that the list returned is not computed on the fly\n * via slaveof; rather, it is maintained permanently to\n * track the shard membership and its life cycle is tied\n * to this Redis process. Therefore, the caller must not\n * release the list. */\nlist *clusterGetNodesInMyShard(clusterNode *node) {\n    sds s = sdsnewlen(node->shard_id, CLUSTER_NAMELEN);\n    dictEntry *de = dictFind(server.cluster->shards,s);\n    sdsfree(s);\n    return (de != NULL) ? dictGetVal(de) : NULL;\n}\n\n/* This is only used after the handshake. When we connect a given IP/PORT\n * as a result of CLUSTER MEET we don't have the node name yet, so we\n * pick a random one, and will fix it when we receive the PONG request using\n * this function. 
*/\nvoid clusterRenameNode(clusterNode *node, char *newname) {\n    int retval;\n    sds s = sdsnewlen(node->name, CLUSTER_NAMELEN);\n\n    serverLog(LL_DEBUG,\"Renaming node %.40s into %.40s\",\n        node->name, newname);\n    retval = dictDelete(server.cluster->nodes, s);\n    sdsfree(s);\n    serverAssert(retval == DICT_OK);\n    memcpy(node->name, newname, CLUSTER_NAMELEN);\n    clusterAddNode(node);\n    clusterAddNodeToShard(node->shard_id, node);\n}\n\nvoid clusterAddNodeToShard(const char *shard_id, clusterNode *node) {\n    sds s = sdsnewlen(shard_id, CLUSTER_NAMELEN);\n    dictEntry *de = dictFind(server.cluster->shards,s);\n    if (de == NULL) {\n        list *l = listCreate();\n        listAddNodeTail(l, node);\n        serverAssert(dictAdd(server.cluster->shards, s, l) == DICT_OK);\n    } else {\n        list *l = dictGetVal(de);\n        if (listSearchKey(l, node) == NULL) {\n            listAddNodeTail(l, node);\n        }\n        sdsfree(s);\n    }\n}\n\nvoid clusterRemoveNodeFromShard(clusterNode *node) {\n    sds s = sdsnewlen(node->shard_id, CLUSTER_NAMELEN);\n    dictEntry *de = dictFind(server.cluster->shards, s);\n    if (de != NULL) {\n        list *l = dictGetVal(de);\n        listNode *ln = listSearchKey(l, node);\n        if (ln != NULL) {\n            listDelNode(l, ln);\n        }\n        if (listLength(l) == 0) {\n            dictDelete(server.cluster->shards, s);\n        }\n    }\n    sdsfree(s);\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER config epoch handling\n * -------------------------------------------------------------------------- */\n\n/* Return the greatest configEpoch found in the cluster, or the current\n * epoch if greater than any node configEpoch. 
*/\nuint64_t clusterGetMaxEpoch(void) {\n    uint64_t max = 0;\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        if (node->configEpoch > max) max = node->configEpoch;\n    }\n    dictResetIterator(&di);\n    if (max < server.cluster->currentEpoch) max = server.cluster->currentEpoch;\n    return max;\n}\n\n/* If this node's epoch is zero or is not already the greatest across the\n * cluster (from the POV of the local configuration), this function will:\n *\n * 1) Generate a new config epoch, incrementing the current epoch.\n * 2) Assign the new epoch to this node, WITHOUT any consensus.\n * 3) Persist the configuration on disk before sending packets with the\n *    new configuration.\n *\n * If the new config epoch is generated and assigned, C_OK is returned,\n * otherwise C_ERR is returned (since the node already has the greatest\n * configuration epoch around) and no operation is performed.\n *\n * Important note: this function violates the principle that config epochs\n * should be generated with consensus and should be unique across the cluster.\n * However Redis Cluster uses these auto-generated config epochs in two\n * cases:\n *\n * 1) When slots are closed after importing. Otherwise resharding would be\n *    too expensive.\n * 2) When CLUSTER FAILOVER is called with options that force a slave to\n *    failover its master even if there is no master majority able to\n *    create a new configuration epoch.\n *\n * Redis Cluster will not explode using this function, even in the case of\n * a collision between this node and another node, generating the same\n * configuration epoch unilaterally, because the config epoch conflict\n * resolution algorithm will eventually move colliding nodes to different\n * config epochs. 
However using this function may violate the \"last failover\n * wins\" rule, so it should only be used with care. */\nint clusterBumpConfigEpochWithoutConsensus(void) {\n    uint64_t maxEpoch = clusterGetMaxEpoch();\n\n    if (myself->configEpoch == 0 ||\n        myself->configEpoch != maxEpoch)\n    {\n        server.cluster->currentEpoch++;\n        myself->configEpoch = server.cluster->currentEpoch;\n        /* Save the new config epoch and broadcast it to the other nodes. */\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_FSYNC_CONFIG|\n                             CLUSTER_TODO_BROADCAST_PONG);\n        serverLog(LL_NOTICE,\n            \"New configEpoch set to %llu\",\n            (unsigned long long) myself->configEpoch);\n        return C_OK;\n    } else {\n        return C_ERR;\n    }\n}\n\n/* This function is called when this node is a master, and we receive from\n * another master a configuration epoch that is equal to our configuration\n * epoch.\n *\n * BACKGROUND\n *\n * It is not possible for different slaves to get the same config\n * epoch during a failover election, because the slaves need to be voted\n * in by a majority. However when we perform a manual resharding of the cluster\n * the node will assign a configuration epoch to itself without asking\n * for agreement. Usually resharding happens when the cluster is working well\n * and is supervised by the sysadmin, however it is possible for a failover\n * to happen exactly while the node we are resharding a slot to assigns itself\n * a new configuration epoch, but before it is able to propagate it.\n *\n * So technically it is possible in this condition that two nodes end up with\n * the same configuration epoch.\n *\n * Another possibility is that there are bugs in the implementation causing\n * this to happen.\n *\n * Moreover when a new cluster is created, all the nodes start with the same\n * configEpoch. 
This collision resolution code allows nodes to automatically\n * end up with different configEpochs at startup.\n *\n * In all the cases, we want a mechanism that resolves this issue automatically\n * as a safeguard. The same configuration epoch for masters serving different\n * sets of slots is not harmful, but it is harmful if the nodes end up serving\n * the same slots for some reason (manual errors or software bugs) without a\n * proper failover procedure.\n *\n * In general we want a system that eventually always ends with different\n * masters having different configuration epochs whatever happened, since\n * nothing is worse than a split-brain condition in a distributed system.\n *\n * BEHAVIOR\n *\n * When this function gets called, what happens is that if this node\n * has the lexicographically smaller Node ID compared to the other node\n * with the conflicting epoch (the 'sender' node), it will assign itself\n * the greatest configuration epoch currently detected among nodes plus 1.\n *\n * This means that even if there are multiple nodes colliding, the node\n * with the greatest Node ID never moves forward, so eventually all the nodes\n * end up with a different configuration epoch.\n */\nvoid clusterHandleConfigEpochCollision(clusterNode *sender) {\n    /* Prerequisites: nodes have the same configEpoch and are both masters. */\n    if (sender->configEpoch != myself->configEpoch ||\n        !clusterNodeIsMaster(sender) || !clusterNodeIsMaster(myself)) return;\n    /* Don't act if the colliding node has a smaller Node ID. */\n    if (memcmp(sender->name,myself->name,CLUSTER_NAMELEN) <= 0) return;\n    /* Get the next available epoch to the best of this node's knowledge. */\n    server.cluster->currentEpoch++;\n    myself->configEpoch = server.cluster->currentEpoch;\n    clusterSaveConfigOrDie(1);\n    /* Broadcast new config epoch to the other nodes. 
*/\n    clusterDoBeforeSleep(CLUSTER_TODO_BROADCAST_PONG);\n    serverLog(LL_VERBOSE,\n        \"WARNING: configEpoch collision with node %.40s (%s).\"\n        \" configEpoch set to %llu\",\n        sender->name,sender->human_nodename,\n        (unsigned long long) myself->configEpoch);\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER nodes blacklist\n *\n * The nodes blacklist is just a way to ensure that a given node with a given\n * Node ID is not re-added before some time has elapsed (this time is specified\n * in seconds in CLUSTER_BLACKLIST_TTL).\n *\n * This is useful when we want to remove a node from the cluster completely:\n * when CLUSTER FORGET is called, it also puts the node into the blacklist so\n * that even if we receive gossip messages from other nodes that still remember\n * the node we want to remove, we don't re-add it for some time.\n *\n * Currently the CLUSTER_BLACKLIST_TTL is set to 1 minute; this means\n * that redis-cli has 60 seconds to send CLUSTER FORGET messages to nodes\n * in the cluster without dealing with the problem of other nodes re-adding\n * the node to nodes we already sent the FORGET command to.\n *\n * The data structure used is a hash table with an sds string representing\n * the node ID as key, and the time when it is ok to re-add the node as\n * value.\n * -------------------------------------------------------------------------- */\n\n#define CLUSTER_BLACKLIST_TTL 60      /* 1 minute. */\n\n\n/* Before the addNode() or Exists() operations we always remove expired\n * entries from the black list. This is an O(N) operation but it is not a\n * problem since add / exists operations are called very infrequently and\n * the hash table is supposed to contain very few elements at most.\n * However, without the cleanup, during long uptimes and with some automated\n * node add/removal procedures, entries could accumulate. 
*/\nvoid clusterBlacklistCleanup(void) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.cluster->nodes_black_list);\n    while((de = dictNext(&di)) != NULL) {\n        int64_t expire = dictGetUnsignedIntegerVal(de);\n\n        if (expire < server.unixtime)\n            dictDelete(server.cluster->nodes_black_list,dictGetKey(de));\n    }\n    dictResetIterator(&di);\n}\n\n/* Cleanup the blacklist and add a new node ID to the black list. */\nvoid clusterBlacklistAddNode(clusterNode *node) {\n    dictEntry *de;\n    sds id = sdsnewlen(node->name,CLUSTER_NAMELEN);\n\n    clusterBlacklistCleanup();\n    if (dictAdd(server.cluster->nodes_black_list,id,NULL) == DICT_OK) {\n        /* If the key was added, duplicate the sds string representation of\n         * the key for the next lookup. We'll free it at the end. */\n        id = sdsdup(id);\n    }\n    de = dictFind(server.cluster->nodes_black_list,id);\n    dictSetUnsignedIntegerVal(de,time(NULL)+CLUSTER_BLACKLIST_TTL);\n    sdsfree(id);\n}\n\n/* Return non-zero if the specified node ID exists in the blacklist.\n * You don't need to pass an sds string here, any pointer to 40 bytes\n * will work. 
*/\nint clusterBlacklistExists(char *nodeid, size_t len) {\n    sds id = sdsnewlen(nodeid,len);\n    int retval;\n\n    clusterBlacklistCleanup();\n    retval = dictFind(server.cluster->nodes_black_list,id) != NULL;\n    sdsfree(id);\n    return retval;\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER messages exchange - PING/PONG and gossip\n * -------------------------------------------------------------------------- */\n\n/* This function checks if a given node should be marked as FAIL.\n * It happens if the following conditions are met:\n *\n * 1) We received enough failure reports from other master nodes via gossip.\n *    Enough means that the majority of the masters signaled the node is\n *    down recently.\n * 2) We believe this node is in PFAIL state.\n *\n * If a failure is detected we also inform the whole cluster about this\n * event trying to force every other node to set the FAIL flag for the node.\n *\n * Note that the form of agreement used here is weak, as we collect the majority\n * of masters state during some time, and even if we force agreement by\n * propagating the FAIL message, because of partitions we may not reach every\n * node. However:\n *\n * 1) Either we reach the majority and eventually the FAIL state will propagate\n *    to all the cluster.\n * 2) Or there is no majority so no slave promotion will be authorized and the\n *    FAIL flag will be cleared after some time.\n */\nvoid markNodeAsFailingIfNeeded(clusterNode *node) {\n    int failures;\n    int needed_quorum = (server.cluster->size / 2) + 1;\n\n    if (!nodeTimedOut(node)) return; /* We can reach it. */\n    if (nodeFailed(node)) return; /* Already FAILing. */\n\n    failures = clusterNodeFailureReportsCount(node);\n    /* Also count myself as a voter if I'm a master. */\n    if (clusterNodeIsMaster(myself)) failures++;\n    if (failures < needed_quorum) return; /* No weak agreement from masters. 
*/\n\n    serverLog(LL_NOTICE,\n        \"Marking node %.40s (%s) as failing (quorum reached).\", node->name, node->human_nodename);\n\n    /* Mark the node as failing. */\n    node->flags &= ~CLUSTER_NODE_PFAIL;\n    node->flags |= CLUSTER_NODE_FAIL;\n    node->fail_time = mstime();\n\n    /* Broadcast the failing node name to everybody, forcing all the other\n     * reachable nodes to flag the node as FAIL.\n     * We do that even if this node is a replica and not a master: in any case\n     * the failing state is triggered by collecting failure reports from\n     * masters, so here the replica is only helping to propagate this status. */\n    clusterSendFail(node->name);\n    clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n}\n\n/* This function is called only if a node is marked as FAIL, but we are able\n * to reach it again. It checks whether the conditions to undo the FAIL\n * state are met. */\nvoid clearNodeFailureIfNeeded(clusterNode *node) {\n    mstime_t now = mstime();\n\n    serverAssert(nodeFailed(node));\n\n    /* For slaves we always clear the FAIL flag if we can contact the\n     * node again. */\n    if (nodeIsSlave(node) || node->numslots == 0) {\n        serverLog(LL_NOTICE,\n            \"Clear FAIL state for node %.40s (%s):%s is reachable again.\",\n                node->name,node->human_nodename,\n                nodeIsSlave(node) ? \"replica\" : \"master without slots\");\n        node->flags &= ~CLUSTER_NODE_FAIL;\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n    }\n\n    /* If it is a master and...\n     * 1) The FAIL state is old enough.\n     * 2) It is still serving slots from our point of view (it was not\n     *    failed over).\n     * Since apparently no one is going to fix these slots, clear the FAIL flag. 
*/\n    if (clusterNodeIsMaster(node) && node->numslots > 0 &&\n        (now - node->fail_time) >\n        (server.cluster_node_timeout * CLUSTER_FAIL_UNDO_TIME_MULT))\n    {\n        serverLog(LL_NOTICE,\n            \"Clear FAIL state for node %.40s (%s): is reachable again and nobody is serving its slots after some time.\",\n                node->name, node->human_nodename);\n        node->flags &= ~CLUSTER_NODE_FAIL;\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n    }\n}\n\n/* Return true if we already have a node in HANDSHAKE state matching the\n * specified ip address and port number. This function is used in order to\n * avoid adding a new handshake node for the same address multiple times. */\nint clusterHandshakeInProgress(char *ip, int port, int cport) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (!nodeInHandshake(node)) continue;\n        if (!strcasecmp(node->ip,ip) &&\n            getNodeDefaultClientPort(node) == port &&\n            node->cport == cport) break;\n    }\n    dictResetIterator(&di);\n    return de != NULL;\n}\n\n/* Start a handshake with the specified address if there is not one\n * already in progress. Returns non-zero if the handshake was actually\n * started. On error zero is returned and errno is set to one of the\n * following values:\n *\n * EAGAIN - There is already a handshake in progress for this address.\n * EINVAL - IP or port are not valid. 
*/\nint clusterStartHandshake(char *ip, int port, int cport) {\n    clusterNode *n;\n    char norm_ip[NET_IP_STR_LEN];\n    struct sockaddr_storage sa;\n\n    /* IP sanity check */\n    if (inet_pton(AF_INET,ip,\n            &(((struct sockaddr_in *)&sa)->sin_addr)))\n    {\n        sa.ss_family = AF_INET;\n    } else if (inet_pton(AF_INET6,ip,\n            &(((struct sockaddr_in6 *)&sa)->sin6_addr)))\n    {\n        sa.ss_family = AF_INET6;\n    } else {\n        errno = EINVAL;\n        return 0;\n    }\n\n    /* Port sanity check */\n    if (port <= 0 || port > 65535 || cport <= 0 || cport > 65535) {\n        errno = EINVAL;\n        return 0;\n    }\n\n    /* Set norm_ip as the normalized string representation of the node\n     * IP address. */\n    memset(norm_ip,0,NET_IP_STR_LEN);\n    if (sa.ss_family == AF_INET)\n        inet_ntop(AF_INET,\n            (void*)&(((struct sockaddr_in *)&sa)->sin_addr),\n            norm_ip,NET_IP_STR_LEN);\n    else\n        inet_ntop(AF_INET6,\n            (void*)&(((struct sockaddr_in6 *)&sa)->sin6_addr),\n            norm_ip,NET_IP_STR_LEN);\n\n    if (clusterHandshakeInProgress(norm_ip,port,cport)) {\n        errno = EAGAIN;\n        return 0;\n    }\n\n    /* Add the node with a random address (NULL as first argument to\n     * createClusterNode()). Everything will be fixed during the\n     * handshake. 
*/\n    n = createClusterNode(NULL,CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_MEET);\n    memcpy(n->ip,norm_ip,sizeof(n->ip));\n    if (server.tls_cluster) {\n        n->tls_port = port;\n    } else {\n        n->tcp_port = port;\n    }\n    n->cport = cport;\n    clusterAddNode(n);\n    return 1;\n}\n\nstatic void getClientPortFromClusterMsg(clusterMsg *hdr, int *tls_port, int *tcp_port) {\n    if (server.tls_cluster) {\n        *tls_port = ntohs(hdr->port);\n        *tcp_port = ntohs(hdr->pport);\n    } else {\n        *tls_port = ntohs(hdr->pport);\n        *tcp_port = ntohs(hdr->port);\n    }\n}\n\nstatic void getClientPortFromGossip(clusterMsgDataGossip *g, int *tls_port, int *tcp_port) {\n    if (server.tls_cluster) {\n        *tls_port = ntohs(g->port);\n        *tcp_port = ntohs(g->pport);\n    } else {\n        *tls_port = ntohs(g->pport);\n        *tcp_port = ntohs(g->port);\n    }\n}\n\n/* Returns a string with the byte representation of the node ID (i.e. nodename)\n * along with 8 trailing bytes for debugging purposes. */\nchar *getCorruptedNodeIdByteString(clusterMsgDataGossip *gossip_msg) {\n    const int num_bytes = CLUSTER_NAMELEN + 8;\n    /* Allocate enough room for 4 chars per byte + null terminator */\n    char *byte_string = (char*) zmalloc((num_bytes*4) + 1); \n    const char *name_ptr = gossip_msg->nodename;\n\n    /* Ensure we won't print beyond the bounds of the message */\n    serverAssert(name_ptr + num_bytes <= (char*)gossip_msg + sizeof(clusterMsgDataGossip));\n\n    for (int i = 0; i < num_bytes; i++) {\n        snprintf(byte_string + 4*i, 5, \"\\\\x%02hhX\", name_ptr[i]);\n    }\n    return byte_string;\n}\n\n/* Returns the number of nodes in the gossip with invalid IDs. 
*/\nint verifyGossipSectionNodeIds(clusterMsgDataGossip *g, uint16_t count) {\n    int invalid_ids = 0;\n    for (int i = 0; i < count; i++) {\n        const char *nodename = g[i].nodename;\n        if (verifyClusterNodeId(nodename, CLUSTER_NAMELEN) != C_OK) {\n            invalid_ids++;\n            /* Dump the bytes of the specific corrupted entry, not the first\n             * entry in the gossip section. */\n            char *raw_node_id = getCorruptedNodeIdByteString(&g[i]);\n            serverLog(LL_WARNING,\n                      \"Received gossip about a node with invalid ID %.40s. For debugging purposes, \"\n                      \"the 48 bytes including the invalid ID and 8 trailing bytes are: %s\",\n                      nodename, raw_node_id);\n            zfree(raw_node_id);\n        }\n    }\n    return invalid_ids;\n}\n\n/* Process the gossip section of PING or PONG packets.\n * Note that this function assumes that the caller has already sanity-checked\n * the length of the packet, though not the content of the gossip section. */\nvoid clusterProcessGossipSection(clusterMsg *hdr, clusterLink *link) {\n    uint16_t count = ntohs(hdr->count);\n    clusterMsgDataGossip *g = (clusterMsgDataGossip*) hdr->data.ping.gossip;\n    clusterNode *sender = link->node ? link->node : clusterLookupNode(hdr->sender, CLUSTER_NAMELEN);\n\n    /* Abort if the gossip contains invalid node IDs to avoid adding incorrect information to\n     * the nodes dictionary. An invalid ID indicates memory corruption on the sender side. 
*/\n    int invalid_ids = verifyGossipSectionNodeIds(g, count);\n    if (invalid_ids) {\n        if (sender) {\n            serverLog(LL_WARNING, \"Node %.40s (%s) gossiped %d nodes with invalid IDs.\", sender->name, sender->human_nodename, invalid_ids);\n        } else {\n            serverLog(LL_WARNING, \"Unknown node gossiped %d nodes with invalid IDs.\", invalid_ids);\n        }\n        return;\n    }\n\n    while(count--) {\n        uint16_t flags = ntohs(g->flags);\n        clusterNode *node;\n        sds ci;\n\n        if (server.verbosity == LL_DEBUG) {\n            ci = representClusterNodeFlags(sdsempty(), flags);\n            serverLog(LL_DEBUG,\"GOSSIP %.40s %s:%d@%d %s\",\n                g->nodename,\n                g->ip,\n                ntohs(g->port),\n                ntohs(g->cport),\n                ci);\n            sdsfree(ci);\n        }\n\n        /* Convert port and pport into TCP port and TLS port. */\n        int msg_tls_port, msg_tcp_port;\n        getClientPortFromGossip(g, &msg_tls_port, &msg_tcp_port);\n\n        /* Update our state accordingly to the gossip sections */\n        node = clusterLookupNode(g->nodename, CLUSTER_NAMELEN);\n        /* Ignore gossips about self. */\n        if (node && node != myself) {\n            /* We already know this node.\n               Handle failure reports, only when the sender is a master. 
*/\n            if (sender && clusterNodeIsMaster(sender)) {\n                if (flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) {\n                    if (clusterNodeAddFailureReport(node,sender)) {\n                        serverLog(LL_VERBOSE,\n                            \"Node %.40s (%s) reported node %.40s (%s) as not reachable.\",\n                            sender->name, sender->human_nodename, node->name, node->human_nodename);\n                    }\n                    markNodeAsFailingIfNeeded(node);\n                } else {\n                    if (clusterNodeDelFailureReport(node,sender)) {\n                        serverLog(LL_VERBOSE,\n                            \"Node %.40s (%s) reported node %.40s (%s) is back online.\",\n                            sender->name, sender->human_nodename, node->name, node->human_nodename);\n                    }\n                }\n            }\n\n            /* If from our POV the node is up (no failure flags are set),\n             * we have no pending ping for the node, nor we have failure\n             * reports for this node, update the last pong time with the\n             * one we see from the other nodes. */\n            if (!(flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) &&\n                node->ping_sent == 0 &&\n                clusterNodeFailureReportsCount(node) == 0)\n            {\n                mstime_t pongtime = ntohl(g->pong_received);\n                pongtime *= 1000; /* Convert back to milliseconds. */\n\n                /* Replace the pong time with the received one only if\n                 * it's greater than our view but is not in the future\n                 * (with 500 milliseconds tolerance) from the POV of our\n                 * clock. 
*/\n                if (pongtime <= (server.mstime+500) &&\n                    pongtime > node->pong_received)\n                {\n                    node->pong_received = pongtime;\n                }\n            }\n\n            /* If we already know this node, but it is not reachable, and\n             * we see a different address in the gossip section of a node that\n             * can talk with this other node, update the address, disconnect\n             * the old link if any, so that we'll attempt to connect with the\n             * new address. */\n            if (node->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL) &&\n                !(flags & CLUSTER_NODE_NOADDR) &&\n                !(flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) &&\n                (strcasecmp(node->ip,g->ip) ||\n                 node->tls_port != (server.tls_cluster ? ntohs(g->port) : ntohs(g->pport)) ||\n                 node->tcp_port != (server.tls_cluster ? ntohs(g->pport) : ntohs(g->port)) ||\n                 node->cport != ntohs(g->cport)))\n            {\n                if (node->link) freeClusterLink(node->link);\n                memcpy(node->ip,g->ip,NET_IP_STR_LEN);\n                node->tcp_port = msg_tcp_port;\n                node->tls_port = msg_tls_port;\n                node->cport = ntohs(g->cport);\n                node->flags &= ~CLUSTER_NODE_NOADDR;\n            }\n        } else if (!node) {\n            /* If it's not in NOADDR state and we don't have it, we\n             * add it to our trusted dict with exact nodeid and flag.\n             * Note that we cannot simply start a handshake against\n             * this IP/PORT pairs, since IP/PORT can be reused already,\n             * otherwise we risk joining another cluster.\n             *\n             * Note that we require that the sender of this gossip message\n             * is a well known node in our cluster, otherwise we risk\n             * joining another cluster. 
*/\n            if (sender &&\n                !(flags & CLUSTER_NODE_NOADDR) &&\n                !clusterBlacklistExists(g->nodename, CLUSTER_NAMELEN))\n            {\n                clusterNode *node;\n                node = createClusterNode(g->nodename, flags);\n                memcpy(node->ip,g->ip,NET_IP_STR_LEN);\n                node->tcp_port = msg_tcp_port;\n                node->tls_port = msg_tls_port;\n                node->cport = ntohs(g->cport);\n                clusterAddNode(node);\n                clusterAddNodeToShard(node->shard_id, node);\n            }\n        }\n\n        /* Next node */\n        g++;\n    }\n}\n\n/* IP -> string conversion. 'buf' is supposed to at least be 46 bytes.\n * If 'announced_ip' length is non-zero, it is used instead of extracting\n * the IP from the socket peer address. */\nint nodeIp2String(char *buf, clusterLink *link, char *announced_ip) {\n    if (announced_ip[0] != '\\0') {\n        memcpy(buf,announced_ip,NET_IP_STR_LEN);\n        buf[NET_IP_STR_LEN-1] = '\\0'; /* We are not sure the input is sane. */\n        return C_OK;\n    } else {\n        if (connAddrPeerName(link->conn, buf, NET_IP_STR_LEN, NULL) == -1) {\n            serverLog(LL_NOTICE, \"Error converting peer IP to string: %s\",\n                link->conn ? connGetLastError(link->conn) : \"no link\");\n            return C_ERR;\n        }\n        return C_OK;\n    }\n}\n\n/* Update the node address to the IP address that can be extracted\n * from link->fd, or if hdr->myip is non empty, to the address the node\n * is announcing us. The port is taken from the packet header as well.\n *\n * If the address or port changed, disconnect the node link so that we'll\n * connect again to the new address.\n *\n * If the ip/port pair are already correct no operation is performed at\n * all.\n *\n * The function returns 0 if the node address is still the same,\n * otherwise 1 is returned. 
*/\nint nodeUpdateAddressIfNeeded(clusterNode *node, clusterLink *link,\n                              clusterMsg *hdr)\n{\n    char ip[NET_IP_STR_LEN] = {0};\n    int cport = ntohs(hdr->cport);\n    int tcp_port, tls_port;\n    getClientPortFromClusterMsg(hdr, &tls_port, &tcp_port);\n\n    /* We don't proceed if the link is the same as the sender link, as this\n     * function is designed to see if the node link is consistent with the\n     * symmetric link that is used to receive PINGs from the node.\n     *\n     * As a side effect this function never frees the passed 'link', so\n     * it is safe to call during packet processing. */\n    if (link == node->link) return 0;\n\n    /* If the peer IP is unavailable for some reason, such as an invalid fd\n     * or a closed link, just give up on the update this time; it will be\n     * retried in the next round of PINGs. */\n    if (nodeIp2String(ip,link,hdr->myip) == C_ERR) return 0;\n\n    if (node->tcp_port == tcp_port && node->cport == cport && node->tls_port == tls_port &&\n        strcmp(ip,node->ip) == 0) return 0;\n\n    /* IP / port is different, update it. */\n    memcpy(node->ip,ip,sizeof(ip));\n    node->tcp_port = tcp_port;\n    node->tls_port = tls_port;\n    node->cport = cport;\n    if (node->link) freeClusterLink(node->link);\n    node->flags &= ~CLUSTER_NODE_NOADDR;\n    serverLog(LL_NOTICE,\"Address updated for node %.40s (%s), now %s:%d\",\n        node->name, node->human_nodename, node->ip, getNodeDefaultClientPort(node));\n\n    /* Check if this is our master and we have to change the\n     * replication target as well. */\n    if (nodeIsSlave(myself) && myself->slaveof == node)\n        replicationSetMaster(node->ip, getNodeDefaultReplicationPort(node));\n    return 1;\n}\n\n/* Reconfigure the specified node 'n' as a master. This function is called when\n * a node that we believed to be a slave is now acting as a master, in order to\n * update the state of the node. 
*/\nvoid clusterSetNodeAsMaster(clusterNode *n) {\n    if (clusterNodeIsMaster(n)) return;\n\n    if (n->slaveof) {\n        clusterNodeRemoveSlave(n->slaveof,n);\n        if (n != myself) n->flags |= CLUSTER_NODE_MIGRATE_TO;\n    }\n    n->flags &= ~CLUSTER_NODE_SLAVE;\n    n->flags |= CLUSTER_NODE_MASTER;\n    n->slaveof = NULL;\n\n    /* Update config and state. */\n    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                         CLUSTER_TODO_UPDATE_STATE);\n}\n\n/* This function is called when we receive a master configuration via a\n * PING, PONG or UPDATE packet. What we receive is a node, a configEpoch of the\n * node, and the set of slots claimed under this configEpoch.\n *\n * What we do is to rebind the slots with newer configuration compared to our\n * local configuration, and if needed, we turn ourselves into a replica of the\n * node (see the function comments for more info).\n *\n * The 'sender' is the node for which we received a configuration update.\n * Sometimes it is not actually the \"Sender\" of the information, as in the\n * case where we receive the info via an UPDATE packet. */\nvoid clusterUpdateSlotsConfigWith(clusterNode *sender, uint64_t senderConfigEpoch, unsigned char *slots) {\n    int j;\n    clusterNode *curmaster = NULL, *newmaster = NULL;\n    /* The dirty slots list is a list of slots for which we lost ownership\n     * while still having keys inside. This usually happens after a failover\n     * or after a manual cluster reconfiguration operated by the admin.\n     *\n     * If the update message is not able to demote a master to slave (in this\n     * case we'll resync with the master updating the whole key space), we\n     * need to delete all the keys in the slots we lost ownership of. 
*/\n    uint16_t dirty_slots[CLUSTER_SLOTS];\n    int dirty_slots_count = 0;\n\n    /* We should detect whether the sender is the new master of our shard.\n     * We will know it if all our slots were migrated to the sender, and the\n     * sender has no slots except ours. */\n    int sender_slots = 0;\n    int migrated_our_slots = 0;\n\n    /* Here we set curmaster to this node or the node this node\n     * replicates to if it's a slave. In the for loop we are\n     * interested in checking whether slots are taken away from curmaster. */\n    curmaster = clusterNodeIsMaster(myself) ? myself : myself->slaveof;\n\n    if (sender == myself) {\n        serverLog(LL_NOTICE,\"Discarding UPDATE message about myself.\");\n        return;\n    }\n\n    slotRangeArray *sra = NULL;\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (bitmapTestBit(slots,j)) {\n            sender_slots++;\n\n            /* The slot is already bound to the sender of this message. */\n            if (server.cluster->slots[j] == sender) {\n                bitmapClearBit(server.cluster->owner_not_claiming_slot, j);\n                continue;\n            }\n\n            /* The slot is in importing state, it should be modified only\n             * manually via redis-cli (example: a resharding is in progress\n             * and the migrating side slot was already closed and is advertising\n             * a new config. We still want the slot to be closed manually). */\n            if (server.cluster->importing_slots_from[j]) continue;\n\n            /* We rebind the slot to the new node claiming it if:\n             * 1) The slot was unassigned or the previous owner no longer owns the slot or\n             *    the new node claims it with a greater configEpoch.\n             * 2) We are not currently importing the slot. 
*/\n            if (isSlotUnclaimed(j) ||\n                server.cluster->slots[j]->configEpoch < senderConfigEpoch)\n            {\n                /* After completing slot ranges migration, the destination node\n                 * will broadcast a PONG message to all the nodes. We need to\n                 * detect that the slot was moved from us to the sender, and\n                 * call asmNotifyConfigUpdated() to notify the ASM state machine. */\n                if (server.cluster->slots[j] == myself && sender != myself)\n                    sra = slotRangeArrayAppend(sra, j);\n\n                /* Was this slot mine, and does it still contain keys? Mark it\n                 * as a dirty slot. */\n                if (server.cluster->slots[j] == myself &&\n                    countKeysInSlot(j) &&\n                    sender != myself)\n                {\n                    dirty_slots[dirty_slots_count] = j;\n                    dirty_slots_count++;\n                }\n\n                if (server.cluster->slots[j] == curmaster) {\n                    newmaster = sender;\n                    migrated_our_slots++;\n                }\n                clusterDelSlot(j);\n                clusterAddSlot(sender,j);\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                     CLUSTER_TODO_UPDATE_STATE|\n                                     CLUSTER_TODO_FSYNC_CONFIG);\n            }\n        } else if (server.cluster->slots[j] == sender) {\n            /* The slot is currently bound to the sender but the sender is no longer\n             * claiming it. We don't want to unbind the slot yet as it can cause the cluster\n             * to move to FAIL state and also throw client errors. Keeping the slot bound to\n             * the previous owner will cause a few client-side redirects, but won't throw\n             * any errors. 
We will keep track of the uncertainty in ownership to avoid\n             * propagating misinformation about this slot's ownership using UPDATE\n             * messages. */\n            bitmapSetBit(server.cluster->owner_not_claiming_slot, j);\n        }\n    }\n\n    /* Notify ASM about the config update */\n    struct asmTask *asm_task = NULL;\n    if (sra && sra->num_ranges > 0 && server.masterhost == NULL) {\n        sds err = NULL;\n        asm_task = asmLookupTaskBySlotRangeArray(sra);\n        if (!asm_task) {\n            /* If no task was found, it means the config update is not related\n             * to current ASM task, but this node learned about the config\n             * update from cluster protocol, and we need to cancel any\n             * conflicting tasks that overlap with the slot ranges. */\n            clusterAsmCancelBySlotRangeArray(sra, \"slots configuration updated\");\n        } else if (asmNotifyConfigUpdated(asm_task, &err) != C_OK) {\n            serverLog(LL_WARNING, \"ASM config update failed: %s\", err);\n            sdsfree(err);\n        }\n    }\n    slotRangeArrayFree(sra);\n\n    /* After updating the slots configuration, don't do any actual change\n     * in the state of the server if a module disabled Redis Cluster\n     * keys redirections. */\n    if (server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_REDIRECTION)\n        return;\n\n    /* If at least one slot was reassigned from a node to another node\n     * with a greater configEpoch, it is possible that:\n     * 1) We are a master left without slots. This means that we were\n     *    failed over and we should turn into a replica of the new\n     *    master.\n     * 2) We are a slave and our master is left without slots. We need\n     *    to replicate to the new slots owner. 
*/\n    if (newmaster && curmaster->numslots == 0 &&\n            (server.cluster_allow_replica_migration ||\n             sender_slots == migrated_our_slots)) {\n        serverLog(LL_NOTICE,\n            \"Configuration change detected. Reconfiguring myself \"\n            \"as a replica of %.40s (%s)\", sender->name, sender->human_nodename);\n        clusterSetMaster(sender);\n        /* Save the new config and broadcast it to the other nodes. */\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_UPDATE_STATE|\n                             CLUSTER_TODO_FSYNC_CONFIG|\n                             CLUSTER_TODO_BROADCAST_PONG);\n    } else if (myself->slaveof && myself->slaveof->slaveof &&\n               /* In some rare cases, when CLUSTER FAILOVER TAKEOVER is used, it\n                * can happen that myself is a replica of a replica of myself. If\n                * this happens, we do nothing to avoid a crash and wait for the\n                * admin to repair the cluster. */\n               myself->slaveof->slaveof != myself)\n    {\n        /* Safeguard against sub-replicas. A replica's master can turn itself\n         * into a replica if its last slot is removed. If no other node takes\n         * over the slot, there is nothing else to trigger replica migration. */\n        serverLog(LL_NOTICE,\n                  \"I'm a sub-replica! Reconfiguring myself as a replica of grandmaster %.40s (%s)\",\n                  myself->slaveof->slaveof->name, myself->slaveof->slaveof->human_nodename);\n        clusterSetMaster(myself->slaveof->slaveof);\n        /* Save the new config and broadcast to the other nodes. 
*/\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_UPDATE_STATE|\n                             CLUSTER_TODO_FSYNC_CONFIG|\n                             CLUSTER_TODO_BROADCAST_PONG);\n    } else if (dirty_slots_count && !asm_task) {\n        /* If we are here, we received an update message which removed\n         * our ownership of certain slots for which we still have keys, but\n         * we are still serving some slots, so this master node was not\n         * demoted to a slave.\n         *\n         * In order to maintain a consistent state between keys and slots\n         * we need to remove all the keys from the slots we lost. */\n        for (j = 0; j < dirty_slots_count; j++)\n            clusterDelKeysInSlot(dirty_slots[j], 0);\n    }\n}\n\n/* Cluster ping extensions.\n *\n * The ping/pong/meet messages support arbitrary extensions to add additional\n * metadata to the messages that are sent between the various nodes in the\n * cluster. The extensions take the form:\n * [ Header length + type (8 bytes) ]\n * [ Extension information (Arbitrary length, but must be 8 byte padded) ]\n */\n\n/* Returns the length of a given extension */\nstatic uint32_t getPingExtLength(clusterMsgPingExt *ext) {\n    return ntohl(ext->length);\n}\n\n/* Returns the initial position of ping extensions. May return an invalid\n * address if there are no ping extensions. */\nstatic clusterMsgPingExt *getInitialPingExt(clusterMsg *hdr, int count) {\n    clusterMsgPingExt *initial = (clusterMsgPingExt*) &(hdr->data.ping.gossip[count]);\n    return initial;\n}\n\n/* Given a current ping extension, returns the start of the next extension. May return\n * an invalid address if there are no further ping extensions. 
*/\nstatic clusterMsgPingExt *getNextPingExt(clusterMsgPingExt *ext) {\n    clusterMsgPingExt *next = (clusterMsgPingExt *) (((char *) ext) + getPingExtLength(ext));\n    return next;\n}\n\n/* All PING extensions must be 8-byte aligned */\nuint32_t getAlignedPingExtSize(uint32_t dataSize) {\n    return sizeof(clusterMsgPingExt) + EIGHT_BYTE_ALIGN(dataSize);\n}\n\nuint32_t getHostnamePingExtSize(void) {\n    if (sdslen(myself->hostname) == 0) {\n        return 0;\n    }\n    return getAlignedPingExtSize(sdslen(myself->hostname) + 1);\n}\n\nuint32_t getHumanNodenamePingExtSize(void) {\n    if (sdslen(myself->human_nodename) == 0) {\n        return 0;\n    }\n    return getAlignedPingExtSize(sdslen(myself->human_nodename) + 1);\n}\n\nuint32_t getShardIdPingExtSize(void) {\n    return getAlignedPingExtSize(sizeof(clusterMsgPingExtShardId));\n}\n\nuint32_t getInternalSecretPingExtSize(void) {\n    return getAlignedPingExtSize(sizeof(clusterMsgPingExtInternalSecret));\n}\n\nuint32_t getForgottenNodeExtSize(void) {\n    return getAlignedPingExtSize(sizeof(clusterMsgPingExtForgottenNode));\n}\n\nvoid *preparePingExt(clusterMsgPingExt *ext, uint16_t type, uint32_t length) {\n    ext->type = htons(type);\n    ext->length = htonl(length);\n    return &ext->ext[0];\n}\n\nclusterMsgPingExt *nextPingExt(clusterMsgPingExt *ext) {\n    return (clusterMsgPingExt *)((char*)ext + ntohl(ext->length));\n}\n\n/* 1. If a NULL hdr is provided, compute the total extension size;\n * 2. If a non-NULL hdr is provided, write the ping extensions\n *    starting at the cursor. This function\n *    will update the cursor to point to the end of the\n *    written extensions and will return the number of bytes\n *    written. 
*/\nuint32_t writePingExt(clusterMsg *hdr, int gossipcount) {\n    uint16_t extensions = 0;\n    uint32_t totlen = 0;\n    clusterMsgPingExt *cursor = NULL;\n    /* Set the initial extension position */\n    if (hdr != NULL) {\n        cursor = getInitialPingExt(hdr, gossipcount);\n    }\n\n    /* hostname is optional */\n    if (sdslen(myself->hostname) != 0) {\n        if (cursor != NULL) {\n            /* Populate hostname */\n            clusterMsgPingExtHostname *ext = preparePingExt(cursor, CLUSTERMSG_EXT_TYPE_HOSTNAME, getHostnamePingExtSize());\n            memcpy(ext->hostname, myself->hostname, sdslen(myself->hostname));\n\n            /* Move the write cursor */\n            cursor = nextPingExt(cursor);\n        }\n\n        totlen += getHostnamePingExtSize();\n        extensions++;\n    }\n\n    if (sdslen(myself->human_nodename) != 0) {\n        if (cursor != NULL) {\n            /* Populate human_nodename */\n            clusterMsgPingExtHumanNodename *ext = preparePingExt(cursor, CLUSTERMSG_EXT_TYPE_HUMAN_NODENAME, getHumanNodenamePingExtSize());\n            memcpy(ext->human_nodename, myself->human_nodename, sdslen(myself->human_nodename));\n\n            /* Move the write cursor */\n            cursor = nextPingExt(cursor);\n        }\n\n        totlen += getHumanNodenamePingExtSize();\n        extensions++;\n    }\n\n    /* Gossip forgotten nodes */\n    if (dictSize(server.cluster->nodes_black_list) > 0) {\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitIterator(&di, server.cluster->nodes_black_list);\n        while ((de = dictNext(&di)) != NULL) {\n            if (cursor != NULL) {\n                uint64_t expire = dictGetUnsignedIntegerVal(de);\n                if ((time_t)expire < server.unixtime) continue; /* already expired */\n                uint64_t ttl = expire - server.unixtime;\n                clusterMsgPingExtForgottenNode *ext = preparePingExt(cursor, CLUSTERMSG_EXT_TYPE_FORGOTTEN_NODE, 
getForgottenNodeExtSize());\n                memcpy(ext->name, dictGetKey(de), CLUSTER_NAMELEN);\n                ext->ttl = htonu64(ttl);\n\n                /* Move the write cursor */\n                cursor = nextPingExt(cursor);\n            }\n            totlen += getForgottenNodeExtSize();\n            extensions++;\n        }\n        dictResetIterator(&di);\n    }\n\n    /* Populate shard_id */\n    if (cursor != NULL) {\n        clusterMsgPingExtShardId *ext = preparePingExt(cursor, CLUSTERMSG_EXT_TYPE_SHARDID, getShardIdPingExtSize());\n        memcpy(ext->shard_id, myself->shard_id, CLUSTER_NAMELEN);\n\n        /* Move the write cursor */\n        cursor = nextPingExt(cursor);\n    }\n    totlen += getShardIdPingExtSize();\n    extensions++;\n\n    /* Populate internal secret */\n    if (cursor != NULL) {\n        clusterMsgPingExtInternalSecret *ext = preparePingExt(cursor, CLUSTERMSG_EXT_TYPE_INTERNALSECRET, getInternalSecretPingExtSize());\n        memcpy(ext->internal_secret, server.cluster->internal_secret, CLUSTER_INTERNALSECRETLEN);\n\n        /* Move the write cursor */\n        cursor = nextPingExt(cursor);\n    }\n    totlen += getInternalSecretPingExtSize();\n    extensions++;\n\n    if (hdr != NULL) {\n        hdr->extensions = htons(extensions);\n    }\n\n    return totlen;\n}\n\n/* We previously validated the extensions, so this function just needs to\n * handle the extensions. */\nvoid clusterProcessPingExtensions(clusterMsg *hdr, clusterLink *link) {\n    clusterNode *sender = link->node ? 
link->node : clusterLookupNode(hdr->sender, CLUSTER_NAMELEN);\n    char *ext_hostname = NULL;\n    char *ext_humannodename = NULL;\n    char *ext_shardid = NULL;\n    uint16_t extensions = ntohs(hdr->extensions);\n    /* Loop through all the extensions and process them */\n    clusterMsgPingExt *ext = getInitialPingExt(hdr, ntohs(hdr->count));\n    while (extensions--) {\n        uint16_t type = ntohs(ext->type);\n        if (type == CLUSTERMSG_EXT_TYPE_HOSTNAME) {\n            clusterMsgPingExtHostname *hostname_ext = (clusterMsgPingExtHostname *) &(ext->ext[0].hostname);\n            ext_hostname = hostname_ext->hostname;\n        } else if (type == CLUSTERMSG_EXT_TYPE_HUMAN_NODENAME) {\n            clusterMsgPingExtHumanNodename *humannodename_ext = (clusterMsgPingExtHumanNodename *) &(ext->ext[0].human_nodename);\n            ext_humannodename = humannodename_ext->human_nodename;\n        } else if (type == CLUSTERMSG_EXT_TYPE_FORGOTTEN_NODE) {\n            clusterMsgPingExtForgottenNode *forgotten_node_ext = &(ext->ext[0].forgotten_node);\n            clusterNode *n = clusterLookupNode(forgotten_node_ext->name, CLUSTER_NAMELEN);\n            if (n && n != myself && !(nodeIsSlave(myself) && myself->slaveof == n)) {\n                sds id = sdsnewlen(forgotten_node_ext->name, CLUSTER_NAMELEN);\n                dictEntry *de = dictAddOrFind(server.cluster->nodes_black_list, id);\n                if (dictGetKey(de) != id) sdsfree(id);\n                uint64_t expire = server.unixtime + ntohu64(forgotten_node_ext->ttl);\n                dictSetUnsignedIntegerVal(de, expire);\n                clusterDelNode(n);\n                clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|\n                                     CLUSTER_TODO_SAVE_CONFIG);\n            }\n        } else if (type == CLUSTERMSG_EXT_TYPE_SHARDID) {\n            clusterMsgPingExtShardId *shardid_ext = (clusterMsgPingExtShardId *) &(ext->ext[0].shard_id);\n            ext_shardid = 
shardid_ext->shard_id;\n        } else if (type == CLUSTERMSG_EXT_TYPE_INTERNALSECRET) {\n            clusterMsgPingExtInternalSecret *internal_secret_ext = (clusterMsgPingExtInternalSecret *) &(ext->ext[0].internal_secret);\n            if (memcmp(server.cluster->internal_secret, internal_secret_ext->internal_secret, CLUSTER_INTERNALSECRETLEN) > 0 ) {\n                memcpy(server.cluster->internal_secret, internal_secret_ext->internal_secret, CLUSTER_INTERNALSECRETLEN);\n            }\n        } else {\n            /* Unknown type, we will ignore it but log what happened. */\n            serverLog(LL_VERBOSE, \"Received unknown extension type %d\", type);\n        }\n\n        /* We know this will be valid since we validated it ahead of time */\n        ext = getNextPingExt(ext);\n    }\n\n    /* If the node did not send us a hostname extension, assume\n     * they don't have an announced hostname. Otherwise, we'll\n     * set it now. */\n    updateAnnouncedHostname(sender, ext_hostname);\n    updateAnnouncedHumanNodename(sender, ext_humannodename);\n    /* If the node did not send us a shard-id extension, it means the sender\n     * does not support it (old version), and node->shard_id is randomly generated.\n     * A cluster-wide consensus for the node's shard_id is not necessary.\n     * The key is maintaining consistency of the shard_id on each individual 7.2 node.\n     * As the cluster progressively upgrades to version 7.2, we can expect the shard_ids\n     * across all nodes to naturally converge and align.\n     *\n     * If sender is a replica, set the shard_id to the shard_id of its master.\n     * Otherwise, we'll set it now. 
*/\n    if (ext_shardid == NULL) ext_shardid = clusterNodeGetMaster(sender)->shard_id;\n\n    updateShardId(sender, ext_shardid);\n}\n\nstatic clusterNode *getNodeFromLinkAndMsg(clusterLink *link, clusterMsg *hdr) {\n    clusterNode *sender;\n    if (link->node && !nodeInHandshake(link->node)) {\n        /* If the link has an associated node, use that so that we don't have to look it\n         * up every time, except when the node is still in handshake, as the node still\n         * has a random name and is thus not truly \"known\". */\n        sender = link->node;\n    } else {\n        /* Otherwise, fetch sender based on the message */\n        sender = clusterLookupNode(hdr->sender, CLUSTER_NAMELEN);\n        /* We know the sender node but haven't associated it with the link. This must\n         * be an inbound link, because only for inbound links did we not know which\n         * node to associate with them when they were created. */\n        if (sender && !link->node) {\n            setClusterNodeToInboundClusterLink(sender, link);\n        }\n    }\n    return sender;\n}\n\n/* When this function is called, there is a packet to process starting\n * at link->rcvbuf. Releasing the buffer is up to the caller, so this\n * function should just handle the higher level stuff of processing the\n * packet, modifying the cluster state if needed.\n *\n * The function returns 1 if the link is still valid after the packet\n * was processed, otherwise 0 if the link was freed since the packet\n * processing led to some inconsistency error (for instance a PONG\n * received from the wrong sender ID). 
*/\nint clusterProcessPacket(clusterLink *link) {\n    clusterMsg *hdr = (clusterMsg*) link->rcvbuf;\n    uint32_t totlen = ntohl(hdr->totlen);\n    uint16_t type = ntohs(hdr->type);\n    mstime_t now = mstime();\n\n    if (type < CLUSTERMSG_TYPE_COUNT)\n        server.cluster->stats_bus_messages_received[type]++;\n    serverLog(LL_DEBUG,\"--- Processing packet of type %s, %lu bytes\",\n        clusterGetMessageTypeString(type), (unsigned long) totlen);\n\n    /* Perform sanity checks */\n    if (totlen < 16) return 1; /* At least signature, version, totlen, count. */\n    if (totlen > link->rcvbuf_len) return 1;\n\n    if (ntohs(hdr->ver) != CLUSTER_PROTO_VER) {\n        /* Can't handle messages of different versions. */\n        return 1;\n    }\n\n    if (type == server.cluster_drop_packet_filter) {\n        serverLog(LL_WARNING, \"Dropping packet that matches debug drop filter\");\n        return 1;\n    }\n\n    uint16_t flags = ntohs(hdr->flags);\n    uint16_t extensions = ntohs(hdr->extensions);\n    uint64_t senderCurrentEpoch = 0, senderConfigEpoch = 0;\n    uint32_t explen; /* expected length of this packet */\n    clusterNode *sender;\n\n    if (type == CLUSTERMSG_TYPE_PING || type == CLUSTERMSG_TYPE_PONG ||\n        type == CLUSTERMSG_TYPE_MEET)\n    {\n        uint16_t count = ntohs(hdr->count);\n\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n        explen += (sizeof(clusterMsgDataGossip)*count);\n\n        /* If there is extension data, which doesn't have a fixed length,\n         * loop through the extensions and validate their lengths now. 
*/\n        if (hdr->mflags[0] & CLUSTERMSG_FLAG0_EXT_DATA) {\n            clusterMsgPingExt *ext = getInitialPingExt(hdr, count);\n            while (extensions--) {\n                uint16_t extlen = getPingExtLength(ext);\n                if (extlen % 8 != 0) {\n                    serverLog(LL_WARNING, \"Received a %s packet without proper padding (%d bytes)\",\n                        clusterGetMessageTypeString(type), (int) extlen);\n                    return 1;\n                }\n                if ((totlen - explen) < extlen) {\n                    serverLog(LL_WARNING, \"Received invalid %s packet with extension data that exceeds \"\n                        \"total packet length (%lld)\", clusterGetMessageTypeString(type),\n                        (unsigned long long) totlen);\n                    return 1;\n                }\n                explen += extlen;\n                ext = getNextPingExt(ext);\n            }\n        }\n    } else if (type == CLUSTERMSG_TYPE_FAIL) {\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n        explen += sizeof(clusterMsgDataFail);\n    } else if (type == CLUSTERMSG_TYPE_PUBLISH || type == CLUSTERMSG_TYPE_PUBLISHSHARD) {\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n        explen += sizeof(clusterMsgDataPublish) -\n                8 +\n                ntohl(hdr->data.publish.msg.channel_len) +\n                ntohl(hdr->data.publish.msg.message_len);\n    } else if (type == CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST ||\n               type == CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK ||\n               type == CLUSTERMSG_TYPE_MFSTART)\n    {\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    } else if (type == CLUSTERMSG_TYPE_UPDATE) {\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n        explen += sizeof(clusterMsgDataUpdate);\n    } else if (type == CLUSTERMSG_TYPE_MODULE) {\n        explen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n  
      explen += sizeof(clusterMsgModule) -\n                3 + ntohl(hdr->data.module.msg.len);\n    } else {\n        /* We don't know this type of packet, so we assume it's well formed. */\n        explen = totlen;\n    }\n\n    if (totlen != explen) {\n        serverLog(LL_WARNING, \"Received invalid %s packet of length %lld but expected length %lld\",\n            clusterGetMessageTypeString(type), (unsigned long long) totlen, (unsigned long long) explen);\n        return 1;\n    }\n\n    sender = getNodeFromLinkAndMsg(link, hdr);\n    if (sender && (hdr->mflags[0] & CLUSTERMSG_FLAG0_EXT_DATA)) {\n        sender->flags |= CLUSTER_NODE_EXTENSIONS_SUPPORTED;\n    }\n\n    /* Update the last time we saw any data from this node. We\n     * use this in order to avoid detecting a timeout from a node that\n     * is just sending a lot of data in the cluster bus, for instance\n     * because of Pub/Sub. */\n    if (sender) sender->data_received = now;\n\n    if (sender && !nodeInHandshake(sender)) {\n        /* Update our currentEpoch if we see a newer epoch in the cluster. */\n        senderCurrentEpoch = ntohu64(hdr->currentEpoch);\n        senderConfigEpoch = ntohu64(hdr->configEpoch);\n        if (senderCurrentEpoch > server.cluster->currentEpoch)\n            server.cluster->currentEpoch = senderCurrentEpoch;\n        /* Update the sender configEpoch if it is publishing a newer one. */\n        if (senderConfigEpoch > sender->configEpoch) {\n            sender->configEpoch = senderConfigEpoch;\n            clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                 CLUSTER_TODO_FSYNC_CONFIG);\n        }\n        /* Update the replication offset info for this node. */\n        sender->repl_offset = ntohu64(hdr->offset);\n        sender->repl_offset_time = now;\n        /* If we are a slave performing a manual failover and our master\n         * sent its offset while already paused, populate the MF state. 
*/\n        if (server.cluster->mf_end &&\n            nodeIsSlave(myself) &&\n            myself->slaveof == sender &&\n            hdr->mflags[0] & CLUSTERMSG_FLAG0_PAUSED &&\n            server.cluster->mf_master_offset == -1)\n        {\n            server.cluster->mf_master_offset = sender->repl_offset;\n            clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_MANUALFAILOVER);\n            serverLog(LL_NOTICE,\n                \"Received replication offset for paused \"\n                \"master manual failover: %lld\",\n                server.cluster->mf_master_offset);\n        }\n    }\n\n    /* Initial processing of PING and MEET requests replying with a PONG. */\n    if (type == CLUSTERMSG_TYPE_PING || type == CLUSTERMSG_TYPE_MEET) {\n        /* We use incoming MEET messages in order to set the address\n         * for 'myself', since only other cluster nodes will send us\n         * MEET messages on handshakes, when the cluster joins, or\n         * later if we changed address, and those nodes will use our\n         * official address to connect to us. So obtaining this address\n         * from the socket is a simple way to discover / update our own\n         * address in the cluster without it being hardcoded in the config.\n         *\n         * However if we don't have an address at all, we update the address\n         * even with a normal PING packet. If it's wrong it will be fixed\n         * by MEET later. 
*/\n        if ((type == CLUSTERMSG_TYPE_MEET || myself->ip[0] == '\\0') &&\n            server.cluster_announce_ip == NULL)\n        {\n            char ip[NET_IP_STR_LEN];\n\n            if (connAddrSockName(link->conn,ip,sizeof(ip),NULL) != -1 &&\n                strcmp(ip,myself->ip))\n            {\n                memcpy(myself->ip,ip,NET_IP_STR_LEN);\n                serverLog(LL_NOTICE,\"IP address for this node updated to %s\",\n                    myself->ip);\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n            }\n        }\n\n        /* Add this node if it is new for us and the msg type is MEET.\n         * In this stage we don't try to add the node with the right\n         * flags, slaveof pointer, and so forth, as these details will be\n         * resolved when we receive PONGs from the node. */\n        if (!sender && type == CLUSTERMSG_TYPE_MEET) {\n            clusterNode *node;\n            char ip[NET_IP_STR_LEN] = {0};\n            if (nodeIp2String(ip, link, hdr->myip) != C_OK) {\n                /* Unable to retrieve the node's IP address from the connection. Without a\n                 * valid IP, the node becomes unusable in the cluster. This failure might be\n                 * due to the connection being closed. 
*/\n                serverLog(LL_NOTICE, \"Closing link even though we received a MEET packet on it, \"\n                                     \"because the connection has an error\");\n                freeClusterLink(link);\n                return 0;\n            }\n\n            node = createClusterNode(NULL,CLUSTER_NODE_HANDSHAKE);\n            memcpy(node->ip, ip, sizeof(ip));\n            getClientPortFromClusterMsg(hdr, &node->tls_port, &node->tcp_port);\n            node->cport = ntohs(hdr->cport);\n            clusterAddNode(node);\n            clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n        }\n\n        /* If this is a MEET packet from an unknown node, we still process\n         * the gossip section here since we have to trust the sender because\n         * of the message type. */\n        if (!sender && type == CLUSTERMSG_TYPE_MEET)\n            clusterProcessGossipSection(hdr,link);\n\n        /* Anyway reply with a PONG */\n        clusterSendPing(link,CLUSTERMSG_TYPE_PONG);\n    }\n\n    /* PING, PONG, MEET: process config information. */\n    if (type == CLUSTERMSG_TYPE_PING || type == CLUSTERMSG_TYPE_PONG ||\n        type == CLUSTERMSG_TYPE_MEET)\n    {\n        serverLog(LL_DEBUG,\"%s packet received: %.40s\",\n            clusterGetMessageTypeString(type),\n            link->node ? link->node->name : \"NULL\");\n        if (!link->inbound) {\n            if (nodeInHandshake(link->node)) {\n                /* If we already have this node, try to change the\n                 * IP/port of the node with the new one. 
*/\n                if (sender) {\n                    serverLog(LL_VERBOSE,\n                        \"Handshake: we already know node %.40s (%s), \"\n                        \"updating the address if needed.\", sender->name, sender->human_nodename);\n                    if (nodeUpdateAddressIfNeeded(sender,link,hdr))\n                    {\n                        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                             CLUSTER_TODO_UPDATE_STATE);\n                    }\n                    /* Free this node as we already have it. This will\n                     * cause the link to be freed as well. */\n                    clusterDelNode(link->node);\n                    return 0;\n                }\n\n                /* The first thing to do is to replace the random name with\n                 * the right node name if this was a handshake stage. */\n                clusterRenameNode(link->node, hdr->sender);\n                serverLog(LL_DEBUG,\"Handshake with node %.40s completed.\",\n                    link->node->name);\n                link->node->flags &= ~CLUSTER_NODE_HANDSHAKE;\n                link->node->flags |= flags&(CLUSTER_NODE_MASTER|CLUSTER_NODE_SLAVE);\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n            } else if (memcmp(link->node->name,hdr->sender,\n                        CLUSTER_NAMELEN) != 0)\n            {\n                /* If the reply has a non-matching node ID we\n                 * disconnect this node and set it as not having an associated\n                 * address. */\n                serverLog(LL_DEBUG,\"PONG contains mismatching sender ID. 
About node %.40s added %d ms ago, having flags %d\",\n                    link->node->name,\n                    (int)(now-(link->node->ctime)),\n                    link->node->flags);\n                link->node->flags |= CLUSTER_NODE_NOADDR;\n                link->node->ip[0] = '\\0';\n                link->node->tcp_port = 0;\n                link->node->tls_port = 0;\n                link->node->cport = 0;\n                freeClusterLink(link);\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n                return 0;\n            }\n        }\n\n        /* Copy the CLUSTER_NODE_NOFAILOVER flag from what the sender\n         * announced. This is a dynamic flag that we receive from the\n         * sender, and the latest status must be trusted. We need it to\n         * be propagated because the slave ranking, used to compute the\n         * delay of each slave in the voting process, needs to know\n         * which instances are really competing. */\n        if (sender) {\n            int nofailover = flags & CLUSTER_NODE_NOFAILOVER;\n            sender->flags &= ~CLUSTER_NODE_NOFAILOVER;\n            sender->flags |= nofailover;\n        }\n\n        /* Update the node address if it changed. 
*/\n        if (sender && type == CLUSTERMSG_TYPE_PING &&\n            !nodeInHandshake(sender) &&\n            nodeUpdateAddressIfNeeded(sender,link,hdr))\n        {\n            clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                 CLUSTER_TODO_UPDATE_STATE);\n        }\n\n        /* Update our info about the node */\n        if (!link->inbound && type == CLUSTERMSG_TYPE_PONG) {\n            link->node->pong_received = now;\n            link->node->ping_sent = 0;\n\n            /* The PFAIL condition can be reversed without external\n             * help if it is momentary (that is, if it does not\n             * turn into a FAIL state).\n             *\n             * The FAIL condition is also reversible under specific\n             * conditions detected by clearNodeFailureIfNeeded(). */\n            if (nodeTimedOut(link->node)) {\n                link->node->flags &= ~CLUSTER_NODE_PFAIL;\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                     CLUSTER_TODO_UPDATE_STATE);\n            } else if (nodeFailed(link->node)) {\n                clearNodeFailureIfNeeded(link->node);\n            }\n        }\n\n        /* Check for role switch: slave -> master or master -> slave. */\n        if (sender) {\n            if (!memcmp(hdr->slaveof,CLUSTER_NODE_NULL_NAME,\n                sizeof(hdr->slaveof)))\n            {\n                /* Node is a master. */\n                clusterSetNodeAsMaster(sender);\n            } else {\n                /* Node is a slave. */\n                clusterNode *master = clusterLookupNode(hdr->slaveof, CLUSTER_NAMELEN);\n\n                if (clusterNodeIsMaster(sender)) {\n                    /* Master turned into a slave! Reconfigure the node. 
*/\n                    if (master && !memcmp(master->shard_id, sender->shard_id, CLUSTER_NAMELEN)) {\n                        /* `sender` was a primary and was in the same shard as `master`, its new primary */\n                        if (sender->configEpoch > senderConfigEpoch) {\n                            serverLog(LL_NOTICE,\n                                    \"Ignore stale message from %.40s (%s) in shard %.40s;\"\n                                    \" gossip config epoch: %llu, current config epoch: %llu\",\n                                    sender->name,\n                                    sender->human_nodename,\n                                    sender->shard_id,\n                                    (unsigned long long)senderConfigEpoch,\n                                    (unsigned long long)sender->configEpoch);\n                        } else {\n                            /* A failover occurred in the shard that `sender` belongs to, and `sender` is no longer\n                             * a primary. 
Update slot assignment to `master`, which is the new primary in the shard */\n                            int slots = clusterMoveNodeSlots(sender, master);\n                            /* `master` is still a `slave` in this observer node's view; update its role and configEpoch */\n                            clusterSetNodeAsMaster(master);\n                            master->configEpoch = senderConfigEpoch;\n                            serverLog(LL_NOTICE, \"A failover occurred in shard %.40s; node %.40s (%s)\"\n                                    \" lost %d slot(s) to node %.40s (%s) with a config epoch of %llu\",\n                                    sender->shard_id,\n                                    sender->name,\n                                    sender->human_nodename,\n                                    slots,\n                                    master->name,\n                                    master->human_nodename,\n                                    (unsigned long long) master->configEpoch);\n                        }\n                    } else {\n                        /* `sender` was moved to another shard and has become a replica, remove its slot assignment */\n                        int slots = clusterDelNodeSlots(sender);\n                        serverLog(LL_NOTICE, \"Node %.40s (%s) is no longer master of shard %.40s;\"\n                                \" removed all %d slot(s) it used to own\",\n                                sender->name,\n                                sender->human_nodename,\n                                sender->shard_id,\n                                slots);\n                       if (master != NULL) {\n                           serverLog(LL_NOTICE, \"Node %.40s (%s) is now part of shard %.40s\",\n                                   sender->name,\n                                   sender->human_nodename,\n                                   master->shard_id);\n                        }\n                   
 }\n                    sender->flags &= ~(CLUSTER_NODE_MASTER|\n                                       CLUSTER_NODE_MIGRATE_TO);\n                    sender->flags |= CLUSTER_NODE_SLAVE;\n\n                    /* Update config and state. */\n                    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                         CLUSTER_TODO_UPDATE_STATE);\n                }\n\n                /* Master node changed for this slave? */\n                if (master && sender->slaveof != master) {\n                    if (sender->slaveof)\n                        clusterNodeRemoveSlave(sender->slaveof,sender);\n                    clusterNodeAddSlave(master,sender);\n                    sender->slaveof = master;\n\n                    /* Update the shard_id when a replica is connected to its\n                     * primary for the very first time. */\n                    updateShardId(sender, master->shard_id);\n\n                    /* Update config. */\n                    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG);\n                }\n            }\n        }\n\n        /* Update our info about served slots.\n         *\n         * Note: this MUST happen after we update the master/slave state\n         * so that CLUSTER_NODE_MASTER flag will be set. */\n\n        /* Many checks are only needed if the set of served slots this\n         * instance claims is different from the set of slots we have\n         * for it. Check this ASAP to avoid other computationally expensive\n         * checks later. */\n        clusterNode *sender_master = NULL; /* Sender or its master if slave. */\n        int dirty_slots = 0; /* Sender claimed slots don't match my view? */\n\n        if (sender) {\n            sender_master = clusterNodeIsMaster(sender) ? 
sender : sender->slaveof;\n            if (sender_master) {\n                dirty_slots = memcmp(sender_master->slots,\n                        hdr->myslots,sizeof(hdr->myslots)) != 0;\n            }\n        }\n\n        /* 1) If the sender of the message is a master, and we detected that\n         *    the set of slots it claims changed, scan the slots to see if we\n         *    need to update our configuration. */\n        if (sender && clusterNodeIsMaster(sender) && dirty_slots)\n            clusterUpdateSlotsConfigWith(sender,senderConfigEpoch,hdr->myslots);\n\n        /* 2) We also check for the reverse condition, that is, the sender\n         *    claims to serve slots we know are served by a master with a\n         *    greater configEpoch. If this happens we inform the sender.\n         *\n         * This is useful because sometimes after a partition heals, a\n         * reappearing master may be the last one to claim a given set of\n         * hash slots, but with a configuration that other instances know to\n         * be deprecated. Example:\n         *\n         * A and B are master and slave for slots 1,2,3.\n         * A is partitioned away, B gets promoted.\n         * B is partitioned away, and A returns available.\n         *\n         * Usually B would PING A publishing its set of served slots and its\n         * configEpoch, but because of the partition B can't inform A of the\n         * new configuration, so other nodes that have an updated table must\n         * do it. In this way A will stop acting as a master (or can try to\n         * fail over if the conditions to win the election are met). 
*/\n        if (sender && dirty_slots) {\n            int j;\n\n            for (j = 0; j < CLUSTER_SLOTS; j++) {\n                if (bitmapTestBit(hdr->myslots,j)) {\n                    if (server.cluster->slots[j] == sender ||\n                        isSlotUnclaimed(j)) continue;\n                    if (server.cluster->slots[j]->configEpoch >\n                        senderConfigEpoch)\n                    {\n                        serverLog(LL_VERBOSE,\n                            \"Node %.40s has old slots configuration, sending \"\n                            \"an UPDATE message about %.40s\",\n                                sender->name, server.cluster->slots[j]->name);\n                        clusterSendUpdate(sender->link,\n                            server.cluster->slots[j]);\n\n                        /* TODO: instead of exiting the loop, send an UPDATE\n                         * packet for every other node that is the new owner\n                         * of sender's slots. */\n                        break;\n                    }\n                }\n            }\n        }\n\n        /* If our config epoch collides with the sender's, try to fix\n         * the problem. 
*/\n        if (sender && clusterNodeIsMaster(myself) && clusterNodeIsMaster(sender) &&\n            senderConfigEpoch == myself->configEpoch)\n        {\n            clusterHandleConfigEpochCollision(sender);\n        }\n\n        /* Get info from the gossip section */\n        if (sender) {\n            clusterProcessGossipSection(hdr,link);\n            clusterProcessPingExtensions(hdr,link);\n        }\n    } else if (type == CLUSTERMSG_TYPE_FAIL) {\n        clusterNode *failing;\n\n        if (sender) {\n            failing = clusterLookupNode(hdr->data.fail.about.nodename, CLUSTER_NAMELEN);\n            if (failing &&\n                !(failing->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_MYSELF)))\n            {\n                serverLog(LL_NOTICE,\n                    \"FAIL message received from %.40s (%s) about %.40s (%s)\",\n                    hdr->sender, sender->human_nodename, hdr->data.fail.about.nodename, failing->human_nodename);\n                failing->flags |= CLUSTER_NODE_FAIL;\n                failing->fail_time = now;\n                failing->flags &= ~CLUSTER_NODE_PFAIL;\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                                     CLUSTER_TODO_UPDATE_STATE);\n            }\n        } else {\n            serverLog(LL_NOTICE,\n                \"Ignoring FAIL message from unknown node %.40s about %.40s\",\n                hdr->sender, hdr->data.fail.about.nodename);\n        }\n    } else if (type == CLUSTERMSG_TYPE_PUBLISH || type == CLUSTERMSG_TYPE_PUBLISHSHARD) {\n        if (!sender) return 1;  /* We don't know that node. */\n\n        robj *channel, *message;\n        uint32_t channel_len, message_len;\n\n        /* Don't bother creating useless objects if there are no\n         * Pub/Sub subscribers. 
*/\n        if ((type == CLUSTERMSG_TYPE_PUBLISH\n            && serverPubsubSubscriptionCount() > 0)\n        || (type == CLUSTERMSG_TYPE_PUBLISHSHARD\n            && serverPubsubShardSubscriptionCount() > 0))\n        {\n            channel_len = ntohl(hdr->data.publish.msg.channel_len);\n            message_len = ntohl(hdr->data.publish.msg.message_len);\n            channel = createStringObject(\n                        (char*)hdr->data.publish.msg.bulk_data,channel_len);\n            message = createStringObject(\n                        (char*)hdr->data.publish.msg.bulk_data+channel_len,\n                        message_len);\n            pubsubPublishMessage(channel, message, type == CLUSTERMSG_TYPE_PUBLISHSHARD);\n            decrRefCount(channel);\n            decrRefCount(message);\n        }\n    } else if (type == CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST) {\n        if (!sender) return 1;  /* We don't know that node. */\n        clusterSendFailoverAuthIfNeeded(sender,hdr);\n    } else if (type == CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK) {\n        if (!sender) return 1;  /* We don't know that node. */\n        /* We consider this vote only if the sender is a master serving\n         * a non-zero number of slots, and its currentEpoch is greater than\n         * or equal to the epoch at which this node started the election. */\n        if (clusterNodeIsMaster(sender) && sender->numslots > 0 &&\n            senderCurrentEpoch >= server.cluster->failover_auth_epoch)\n        {\n            server.cluster->failover_auth_count++;\n            /* Maybe we reached a quorum here, set a flag to make sure\n             * we check ASAP. */\n            clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_FAILOVER);\n        }\n    } else if (type == CLUSTERMSG_TYPE_MFSTART) {\n        /* This message is acceptable only if I'm a master and the sender\n         * is one of my slaves. 
*/\n        if (!sender || sender->slaveof != myself) return 1;\n        /* Cancel all ASM tasks when starting manual failover */\n        clusterAsmCancel(NULL, \"manual failover\");\n        /* Manual failover requested from slaves. Initialize the state\n         * accordingly. */\n        resetManualFailover();\n        server.cluster->mf_end = now + CLUSTER_MF_TIMEOUT;\n        server.cluster->mf_slave = sender;\n        pauseActions(PAUSE_DURING_FAILOVER,\n                     now + (CLUSTER_MF_TIMEOUT * CLUSTER_MF_PAUSE_MULT),\n                     PAUSE_ACTIONS_CLIENT_WRITE_SET);\n        serverLog(LL_NOTICE,\"Manual failover requested by replica %.40s (%s).\",\n            sender->name, sender->human_nodename);\n        /* We need to send a ping message to the replica, as it would carry\n         * `server.cluster->mf_master_offset`, which means the master paused clients\n         * at offset `server.cluster->mf_master_offset`, so that the replica would\n         * know that it is safe to set its `server.cluster->mf_can_start` to 1 so as\n         * to complete failover as quickly as possible. */\n        clusterSendPing(link, CLUSTERMSG_TYPE_PING);\n    } else if (type == CLUSTERMSG_TYPE_UPDATE) {\n        clusterNode *n; /* The node the update is about. */\n        uint64_t reportedConfigEpoch =\n                    ntohu64(hdr->data.update.nodecfg.configEpoch);\n\n        if (!sender) return 1;  /* We don't know the sender. */\n        n = clusterLookupNode(hdr->data.update.nodecfg.nodename, CLUSTER_NAMELEN);\n        if (!n) return 1;   /* We don't know the reported node. */\n        if (n->configEpoch >= reportedConfigEpoch) return 1; /* Nothing new. */\n\n        /* If in our current config the node is a slave, set it as a master. */\n        if (nodeIsSlave(n)) clusterSetNodeAsMaster(n);\n\n        /* Update the node's configEpoch. 
*/\n        n->configEpoch = reportedConfigEpoch;\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_FSYNC_CONFIG);\n\n        /* Check the bitmap of served slots and update our\n         * config accordingly. */\n        clusterUpdateSlotsConfigWith(n,reportedConfigEpoch,\n            hdr->data.update.nodecfg.slots);\n    } else if (type == CLUSTERMSG_TYPE_MODULE) {\n        if (!sender) return 1;  /* Protect the module from unknown nodes. */\n        /* We need to route this message back to the right module subscribed\n         * for the right message type. */\n        uint64_t module_id = hdr->data.module.msg.module_id; /* Endian-safe ID */\n        uint32_t len = ntohl(hdr->data.module.msg.len);\n        uint8_t type = hdr->data.module.msg.type;\n        unsigned char *payload = hdr->data.module.msg.bulk_data;\n        moduleCallClusterReceivers(sender->name,module_id,type,payload,len);\n    } else {\n        serverLog(LL_WARNING,\"Received unknown packet type: %d\", type);\n    }\n    return 1;\n}\n\n/* This function is called when we detect the link with this node is lost.\n   We set the node as no longer connected. The Cluster Cron will detect\n   this connection and will try to get it connected again.\n\n   Instead if the node is a temporary node used to accept a query, we\n   completely free the node on error. */\nvoid handleLinkIOError(clusterLink *link) {\n    freeClusterLink(link);\n}\n\n/* Send the messages queued for the link. 
*/\nvoid clusterWriteHandler(connection *conn) {\n    clusterLink *link = connGetPrivateData(conn);\n    ssize_t nwritten;\n    size_t totwritten = 0;\n\n    while (totwritten < NET_MAX_WRITES_PER_EVENT && listLength(link->send_msg_queue) > 0) {\n        listNode *head = listFirst(link->send_msg_queue);\n        clusterMsgSendBlock *msgblock = (clusterMsgSendBlock*)head->value;\n        clusterMsg *msg = getMessageFromSendBlock(msgblock);\n        size_t msg_offset = link->head_msg_send_offset;\n        size_t msg_len = ntohl(msg->totlen);\n\n        nwritten = connWrite(conn, (char*)msg + msg_offset, msg_len - msg_offset);\n        if (nwritten <= 0) {\n            serverLog(LL_DEBUG,\"I/O error writing to node link: %s\",\n                (nwritten == -1) ? connGetLastError(conn) : \"short write\");\n            handleLinkIOError(link);\n            return;\n        }\n        if (msg_offset + nwritten < msg_len) {\n            /* If full message wasn't written, record the offset\n             * and continue sending from this point next time */\n            link->head_msg_send_offset += nwritten;\n            return;\n        }\n        serverAssert((msg_offset + nwritten) == msg_len);\n        link->head_msg_send_offset = 0;\n\n        /* Delete the node and update our memory tracking */\n        uint32_t blocklen = msgblock->totlen;\n        listDelNode(link->send_msg_queue, head);\n        server.stat_cluster_links_memory -= sizeof(listNode);\n        link->send_msg_queue_mem -= sizeof(listNode) + blocklen;\n\n        totwritten += nwritten;\n    }\n\n    if (listLength(link->send_msg_queue) == 0)\n        connSetWriteHandler(link->conn, NULL);\n}\n\n/* A connect handler that gets called when a connection to another node\n * gets established.\n */\nvoid clusterLinkConnectHandler(connection *conn) {\n    clusterLink *link = connGetPrivateData(conn);\n    clusterNode *node = link->node;\n\n    /* Check if connection succeeded */\n    if (connGetState(conn) != 
CONN_STATE_CONNECTED) {\n        serverLog(LL_VERBOSE, \"Connection with Node %.40s at %s:%d failed: %s\",\n                node->name, node->ip, node->cport,\n                connGetLastError(conn));\n        freeClusterLink(link);\n        return;\n    }\n\n    /* Register a read handler from now on */\n    connSetReadHandler(conn, clusterReadHandler);\n\n    /* Queue a PING in the new connection ASAP: this is crucial\n     * to avoid false positives in failure detection.\n     *\n     * If the node is flagged as MEET, we send a MEET message instead\n     * of a PING one, to force the receiver to add us to its node\n     * table. */\n    mstime_t old_ping_sent = node->ping_sent;\n    clusterSendPing(link, node->flags & CLUSTER_NODE_MEET ?\n            CLUSTERMSG_TYPE_MEET : CLUSTERMSG_TYPE_PING);\n    if (old_ping_sent) {\n        /* If there was an active ping before the link was\n         * disconnected, we want to restore the ping time, which would\n         * otherwise be replaced by the clusterSendPing() call. */\n        node->ping_sent = old_ping_sent;\n    }\n    /* We can clear the flag after the first packet is sent.\n     * If we never receive a PONG, we'll never send new packets\n     * to this node. Instead, after the PONG is received and we\n     * are no longer in meet/handshake status, we want to send\n     * normal PING packets. */\n    node->flags &= ~CLUSTER_NODE_MEET;\n\n    serverLog(LL_DEBUG,\"Connecting with Node %.40s at %s:%d\",\n            node->name, node->ip, node->cport);\n}\n\n/* Read data. Try to read the first field of the header first to check the\n * full length of the packet. When a whole packet is in memory this function\n * will call the function that processes the packet, and so forth. 
*/\nvoid clusterReadHandler(connection *conn) {\n    clusterMsg buf[1];\n    ssize_t nread;\n    clusterMsg *hdr;\n    clusterLink *link = connGetPrivateData(conn);\n    unsigned int readlen, rcvbuflen;\n\n    while(1) { /* Read as long as there is data to read. */\n        rcvbuflen = link->rcvbuf_len;\n        if (rcvbuflen < 8) {\n            /* First, obtain the first 8 bytes to get the full message\n             * length. */\n            readlen = 8 - rcvbuflen;\n        } else {\n            /* Finally read the full message. */\n            hdr = (clusterMsg*) link->rcvbuf;\n            if (rcvbuflen == 8) {\n                /* Perform some sanity check on the message signature\n                 * and length. */\n                if (memcmp(hdr->sig,\"RCmb\",4) != 0 ||\n                    ntohl(hdr->totlen) < CLUSTERMSG_MIN_LEN)\n                {\n                    char ip[NET_IP_STR_LEN];\n                    int port;\n                    if (connAddrPeerName(conn, ip, sizeof(ip), &port) == -1) {\n                        serverLog(LL_WARNING,\n                            \"Bad message length or signature received \"\n                            \"on the Cluster bus.\");\n                    } else {\n                        serverLog(LL_WARNING,\n                            \"Bad message length or signature received \"\n                            \"on the Cluster bus from %s:%d\", ip, port);\n                    }\n                    handleLinkIOError(link);\n                    return;\n                }\n            }\n            readlen = ntohl(hdr->totlen) - rcvbuflen;\n            if (readlen > sizeof(buf)) readlen = sizeof(buf);\n        }\n\n        nread = connRead(conn,buf,readlen);\n        if (nread == -1 && (connGetState(conn) == CONN_STATE_CONNECTED)) return; /* No more data ready. */\n\n        if (nread <= 0) {\n            /* I/O error... 
*/\n            serverLog(LL_DEBUG,\"I/O error reading from node link: %s\",\n                (nread == 0) ? \"connection closed\" : connGetLastError(conn));\n            handleLinkIOError(link);\n            return;\n        } else {\n            /* Read data and recast the pointer to the new buffer. */\n            size_t unused = link->rcvbuf_alloc - link->rcvbuf_len;\n            if ((size_t)nread > unused) {\n                size_t required = link->rcvbuf_len + nread;\n                size_t prev_rcvbuf_alloc = link->rcvbuf_alloc;\n                /* If less than 1mb, grow to twice the needed size; if larger, grow by 1mb. */\n                link->rcvbuf_alloc = required < RCVBUF_MAX_PREALLOC ? required * 2: required + RCVBUF_MAX_PREALLOC;\n                link->rcvbuf = zrealloc(link->rcvbuf, link->rcvbuf_alloc);\n                server.stat_cluster_links_memory += link->rcvbuf_alloc - prev_rcvbuf_alloc;\n            }\n            memcpy(link->rcvbuf + link->rcvbuf_len, buf, nread);\n            link->rcvbuf_len += nread;\n            hdr = (clusterMsg*) link->rcvbuf;\n            rcvbuflen += nread;\n        }\n\n        /* Total length obtained? Process this packet. */\n        if (rcvbuflen >= 8 && rcvbuflen == ntohl(hdr->totlen)) {\n            if (clusterProcessPacket(link)) {\n                if (link->rcvbuf_alloc > RCVBUF_INIT_LEN) {\n                    size_t prev_rcvbuf_alloc = link->rcvbuf_alloc;\n                    zfree(link->rcvbuf);\n                    link->rcvbuf = zmalloc(link->rcvbuf_alloc = RCVBUF_INIT_LEN);\n                    server.stat_cluster_links_memory += link->rcvbuf_alloc - prev_rcvbuf_alloc;\n                }\n                link->rcvbuf_len = 0;\n            } else {\n                return; /* Link no longer valid. 
*/\n            }\n        }\n    }\n}\n\n/* Put the message block into the link's send queue.\n *\n * It is guaranteed that this function will never invalidate the link\n * as a side effect, so it is safe to call this function\n * from event handlers that will do stuff with the same link later. */\nvoid clusterSendMessage(clusterLink *link, clusterMsgSendBlock *msgblock) {\n    if (!link) {\n        return;\n    }\n    if (listLength(link->send_msg_queue) == 0 && getMessageFromSendBlock(msgblock)->totlen != 0)\n        connSetWriteHandlerWithBarrier(link->conn, clusterWriteHandler, 1);\n\n    listAddNodeTail(link->send_msg_queue, msgblock);\n    msgblock->refcount++;\n\n    /* Update memory tracking */\n    link->send_msg_queue_mem += sizeof(listNode) + msgblock->totlen;\n    server.stat_cluster_links_memory += sizeof(listNode);\n\n    /* Populate sent messages stats. */\n    uint16_t type = ntohs(getMessageFromSendBlock(msgblock)->type);\n    if (type < CLUSTERMSG_TYPE_COUNT)\n        server.cluster->stats_bus_messages_sent[type]++;\n}\n\n/* Send a message to all the nodes that are part of the cluster and\n * have a connected link.\n *\n * It is guaranteed that this function will never invalidate any\n * node->link as a side effect, so it is safe to call this function\n * from event handlers that will do stuff with node links later. */\nvoid clusterBroadcastMessage(clusterMsgSendBlock *msgblock) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (node->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_HANDSHAKE))\n            continue;\n        clusterSendMessage(node->link,msgblock);\n    }\n    dictResetIterator(&di);\n}\n\n/* Build the message header. hdr must point to a buffer at least\n * sizeof(clusterMsg) in bytes. 
*/\nstatic void clusterBuildMessageHdr(clusterMsg *hdr, int type, size_t msglen) {\n    uint64_t offset;\n    clusterNode *master;\n\n    /* If this node is a master, we send its slots bitmap and configEpoch.\n     * If this node is a slave we send the master's information instead (the\n     * node is flagged as slave so the receiver knows that it is NOT really\n     * in charge of these slots). */\n    master = (nodeIsSlave(myself) && myself->slaveof) ?\n              myself->slaveof : myself;\n\n    hdr->ver = htons(CLUSTER_PROTO_VER);\n    hdr->sig[0] = 'R';\n    hdr->sig[1] = 'C';\n    hdr->sig[2] = 'm';\n    hdr->sig[3] = 'b';\n    hdr->type = htons(type);\n    memcpy(hdr->sender,myself->name,CLUSTER_NAMELEN);\n\n    /* If cluster-announce-ip option is enabled, force the receivers of our\n     * packets to use the specified address for this node. Otherwise if the\n     * first byte is zero, they'll do auto discovery. */\n    memset(hdr->myip,0,NET_IP_STR_LEN);\n    if (server.cluster_announce_ip) {\n        redis_strlcpy(hdr->myip,server.cluster_announce_ip,NET_IP_STR_LEN);\n    }\n\n    /* Handle cluster-announce-[tls-|bus-]port. */\n    int announced_tcp_port, announced_tls_port, announced_cport;\n    deriveAnnouncedPorts(&announced_tcp_port, &announced_tls_port, &announced_cport);\n\n    memcpy(hdr->myslots,master->slots,sizeof(hdr->myslots));\n    memset(hdr->slaveof,0,CLUSTER_NAMELEN);\n    if (myself->slaveof != NULL)\n        memcpy(hdr->slaveof,myself->slaveof->name, CLUSTER_NAMELEN);\n    if (server.tls_cluster) {\n        hdr->port = htons(announced_tls_port);\n        hdr->pport = htons(announced_tcp_port);\n    } else {\n        hdr->port = htons(announced_tcp_port);\n        hdr->pport = htons(announced_tls_port);\n    }\n    hdr->cport = htons(announced_cport);\n    hdr->flags = htons(myself->flags);\n    hdr->state = server.cluster->state;\n\n    /* Set the currentEpoch and configEpochs. 
*/\n    hdr->currentEpoch = htonu64(server.cluster->currentEpoch);\n    hdr->configEpoch = htonu64(master->configEpoch);\n\n    /* Set the replication offset. */\n    if (nodeIsSlave(myself))\n        offset = replicationGetSlaveOffset();\n    else\n        offset = server.master_repl_offset;\n    hdr->offset = htonu64(offset);\n\n    /* Set the message flags. */\n    if (clusterNodeIsMaster(myself) && server.cluster->mf_end)\n        hdr->mflags[0] |= CLUSTERMSG_FLAG0_PAUSED;\n    hdr->mflags[0] |= CLUSTERMSG_FLAG0_EXT_DATA; /* Always make other nodes know that\n                                                  * this node supports extension data. */\n\n    hdr->totlen = htonl(msglen);\n}\n\n/* Set the i-th entry of the gossip section in the message pointed by 'hdr'\n * to the info of the specified node 'n'. */\nvoid clusterSetGossipEntry(clusterMsg *hdr, int i, clusterNode *n) {\n    clusterMsgDataGossip *gossip;\n    gossip = &(hdr->data.ping.gossip[i]);\n    memcpy(gossip->nodename,n->name,CLUSTER_NAMELEN);\n    gossip->ping_sent = htonl(n->ping_sent/1000);\n    gossip->pong_received = htonl(n->pong_received/1000);\n    memcpy(gossip->ip,n->ip,sizeof(n->ip));\n    if (server.tls_cluster) {\n        gossip->port = htons(n->tls_port);\n        gossip->pport = htons(n->tcp_port);\n    } else {\n        gossip->port = htons(n->tcp_port);\n        gossip->pport = htons(n->tls_port);\n    }\n    gossip->cport = htons(n->cport);\n    gossip->flags = htons(n->flags);\n    gossip->notused1 = 0;\n}\n\n/* Send a PING or PONG packet to the specified node, making sure to add enough\n * gossip information. */\nvoid clusterSendPing(clusterLink *link, int type) {\n    static unsigned long long cluster_pings_sent = 0;\n    cluster_pings_sent++;\n    int gossipcount = 0; /* Number of gossip sections added so far. */\n    int wanted; /* Number of gossip sections we want to append if possible. 
*/\n    int estlen; /* Upper bound on estimated packet length */\n    /* freshnodes is the max number of nodes we can hope to append at all:\n     * nodes available minus two (ourself and the node we are sending the\n     * message to). However practically there may be fewer valid nodes since\n     * nodes in handshake state or disconnected nodes are not considered. */\n    int freshnodes = dictSize(server.cluster->nodes)-2;\n\n    /* How many gossip sections do we want to add? 1/10 of the number of nodes\n     * and anyway at least 3. Why 1/10?\n     *\n     * If we have N masters, with N/10 entries, and we consider that in\n     * node_timeout we exchange with each other node at least 4 packets\n     * (we ping in the worst case in node_timeout/2 time, and we also\n     * receive two pings from the host), we have a total of 8 packets\n     * in the node_timeout*2 failure reports validity time. So we have\n     * that, for a single PFAIL node, we can expect to receive the following\n     * number of failure reports (in the specified window of time):\n     *\n     * PROB * GOSSIP_ENTRIES_PER_PACKET * TOTAL_PACKETS:\n     *\n     * PROB = probability of being featured in a single gossip entry,\n     *        which is 1 / NUM_OF_NODES.\n     * ENTRIES = N/10 (the number of gossip entries per packet).\n     * TOTAL_PACKETS = 2 * 4 * NUM_OF_MASTERS.\n     *\n     * If we assume we have just masters (so num of nodes and num of masters\n     * is the same), with 1/10 we always get over the majority, and specifically\n     * 80% of the number of nodes, to account for many masters failing at the\n     * same time.\n     *\n     * Since we have non-voting slaves that lower the probability of an entry\n     * to feature our node, we set the number of entries per packet as\n     * 10% of the total nodes we have. 
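     *\n     * A hedged worked example of the arithmetic above (illustrative\n     * numbers, not from the code): with NUM_OF_NODES = 30 masters,\n     * entries per packet = 30/10 = 3, TOTAL_PACKETS = 2 * 4 * 30 = 240,\n     * and PROB = 1/30, a single PFAIL node can expect about\n     * (1/30) * 3 * 240 = 24 failure reports: 80% of the 30 nodes and\n     * well above the majority of 16. 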
*/\n    wanted = floor(dictSize(server.cluster->nodes)/10);\n    if (wanted < 3) wanted = 3;\n    if (wanted > freshnodes) wanted = freshnodes;\n\n    /* Include all the nodes in PFAIL state, so that failure reports are\n     * faster to propagate to go from PFAIL to FAIL state. */\n    int pfail_wanted = server.cluster->stats_pfail_nodes;\n\n    /* Compute the maximum estlen to allocate our buffer. We'll fix the estlen\n     * later according to the number of gossip sections we really were able\n     * to put inside the packet. */\n    estlen = sizeof(clusterMsg) - sizeof(union clusterMsgData);\n    estlen += (sizeof(clusterMsgDataGossip)*(wanted + pfail_wanted));\n    if (link->node && nodeSupportsExtensions(link->node)) {\n        estlen += writePingExt(NULL, 0);\n    }\n    /* Note: clusterBuildMessageHdr() expects the buffer to be always at least\n     * sizeof(clusterMsg) or more. */\n    if (estlen < (int)sizeof(clusterMsg)) estlen = sizeof(clusterMsg);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(type, estlen);\n    clusterMsg *hdr = getMessageFromSendBlock(msgblock);\n\n    if (!link->inbound && type == CLUSTERMSG_TYPE_PING)\n        link->node->ping_sent = mstime();\n\n    /* Populate the gossip fields */\n    int maxiterations = wanted*3;\n    while(freshnodes > 0 && gossipcount < wanted && maxiterations--) {\n        dictEntry *de = dictGetRandomKey(server.cluster->nodes);\n        clusterNode *this = dictGetVal(de);\n\n        /* Don't include this node: the whole packet header is about us\n         * already, so we just gossip about other nodes.\n         * Also, don't include the receiver. Receiver will not update its state\n         * based on gossips about itself. */\n        if (this == myself || this == link->node) continue;\n\n        /* PFAIL nodes will be added later. 
*/\n        if (this->flags & CLUSTER_NODE_PFAIL) continue;\n\n        /* In the gossip section don't include:\n         * 1) Nodes in HANDSHAKE state.\n         * 2) Nodes with the NOADDR flag set.\n         * 3) Disconnected nodes if they don't have configured slots.\n         */\n        if (this->flags & (CLUSTER_NODE_HANDSHAKE|CLUSTER_NODE_NOADDR) ||\n            (this->link == NULL && this->numslots == 0))\n        {\n            freshnodes--; /* Technically not correct, but saves CPU. */\n            continue;\n        }\n\n        /* Do not add a node we already have. */\n        if (this->last_in_ping_gossip == cluster_pings_sent) continue;\n\n        /* Add it */\n        clusterSetGossipEntry(hdr,gossipcount,this);\n        this->last_in_ping_gossip = cluster_pings_sent;\n        freshnodes--;\n        gossipcount++;\n    }\n\n    /* If there are PFAIL nodes, add them at the end. */\n    if (pfail_wanted) {\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitSafeIterator(&di, server.cluster->nodes);\n        while((de = dictNext(&di)) != NULL && pfail_wanted > 0) {\n            clusterNode *node = dictGetVal(de);\n            if (node->flags & CLUSTER_NODE_HANDSHAKE) continue;\n            if (node->flags & CLUSTER_NODE_NOADDR) continue;\n            if (!(node->flags & CLUSTER_NODE_PFAIL)) continue;\n            clusterSetGossipEntry(hdr,gossipcount,node);\n            gossipcount++;\n            /* We take the count of the gossip sections we allocated, since the\n             * PFAIL stats may not match perfectly with the current number\n             * of PFAIL nodes. */\n            pfail_wanted--;\n        }\n        dictResetIterator(&di);\n    }\n\n    /* Compute the actual total length and send! 
*/\n    uint32_t totlen = 0;\n    if (link->node && nodeSupportsExtensions(link->node)) {\n        totlen += writePingExt(hdr, gossipcount);\n    }\n    totlen += sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    totlen += (sizeof(clusterMsgDataGossip)*gossipcount);\n    serverAssert(gossipcount < USHRT_MAX);\n    hdr->count = htons(gossipcount);\n    hdr->totlen = htonl(totlen);\n\n    clusterSendMessage(link,msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Send a PONG packet to every connected node that's not in handshake state\n * and for which we have a valid link.\n *\n * In Redis Cluster pongs are not used just for failure detection, but also\n * to carry important configuration information. So broadcasting a pong is\n * useful when something changes in the configuration and we want to make\n * the cluster aware ASAP (for instance after a slave promotion).\n *\n * The 'target' argument specifies the receiving instances using the\n * defines below:\n *\n * CLUSTER_BROADCAST_ALL -> All known instances.\n * CLUSTER_BROADCAST_LOCAL_SLAVES -> All slaves in my master-slaves ring.\n */\n#define CLUSTER_BROADCAST_ALL 0\n#define CLUSTER_BROADCAST_LOCAL_SLAVES 1\nvoid clusterBroadcastPong(int target) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (!node->link) continue;\n        if (node == myself || nodeInHandshake(node)) continue;\n        if (target == CLUSTER_BROADCAST_LOCAL_SLAVES) {\n            int local_slave =\n                nodeIsSlave(node) && node->slaveof &&\n                (node->slaveof == myself || node->slaveof == myself->slaveof);\n            if (!local_slave) continue;\n        }\n        clusterSendPing(node->link,CLUSTERMSG_TYPE_PONG);\n    }\n    dictResetIterator(&di);\n}\n\n/* Create a PUBLISH message block.\n *\n * Sanitizer suppression: In clusterMsgDataPublish, 
sizeof(bulk_data) is 8.\n * As the whole struct is used as a buffer, when more than 8 bytes are copied into\n * the 'bulk_data', the sanitizer generates an out-of-bounds error which is a false\n * positive in this context. */\nREDIS_NO_SANITIZE(\"bounds\")\nclusterMsgSendBlock *clusterCreatePublishMsgBlock(robj *channel, robj *message, uint16_t type) {\n\n    uint32_t channel_len, message_len;\n\n    channel = getDecodedObject(channel);\n    message = getDecodedObject(message);\n    channel_len = sdslen(channel->ptr);\n    message_len = sdslen(message->ptr);\n\n    size_t msglen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    msglen += sizeof(clusterMsgDataPublish) - 8 + channel_len + message_len;\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(type, msglen);\n\n    clusterMsg *hdr = getMessageFromSendBlock(msgblock);\n    hdr->data.publish.msg.channel_len = htonl(channel_len);\n    hdr->data.publish.msg.message_len = htonl(message_len);\n    memcpy(hdr->data.publish.msg.bulk_data,channel->ptr,sdslen(channel->ptr));\n    memcpy(hdr->data.publish.msg.bulk_data+sdslen(channel->ptr),\n        message->ptr,sdslen(message->ptr));\n\n    decrRefCount(channel);\n    decrRefCount(message);\n\n    return msgblock;\n}\n\n/* Send a FAIL message to all the nodes we are able to contact.\n * The FAIL message is sent when we detect that a node is failing\n * (CLUSTER_NODE_PFAIL) and we also receive a gossip confirmation of this:\n * we switch the node state to CLUSTER_NODE_FAIL and ask all the other\n * nodes to do the same ASAP. 
*/\nvoid clusterSendFail(char *nodename) {\n    uint32_t msglen = sizeof(clusterMsg) - sizeof(union clusterMsgData)\n        + sizeof(clusterMsgDataFail);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_FAIL, msglen);\n\n    clusterMsg *hdr = getMessageFromSendBlock(msgblock);\n    memcpy(hdr->data.fail.about.nodename,nodename,CLUSTER_NAMELEN);\n\n    clusterBroadcastMessage(msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Send an UPDATE message to the specified link carrying the specified 'node'\n * slots configuration. The node name, slots bitmap, and configEpoch info\n * are included. */\nvoid clusterSendUpdate(clusterLink *link, clusterNode *node) {\n    if (link == NULL) return;\n\n    uint32_t msglen = sizeof(clusterMsg) - sizeof(union clusterMsgData)\n        + sizeof(clusterMsgDataUpdate);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_UPDATE, msglen);\n\n    clusterMsg *hdr = getMessageFromSendBlock(msgblock);\n    memcpy(hdr->data.update.nodecfg.nodename,node->name,CLUSTER_NAMELEN);\n    hdr->data.update.nodecfg.configEpoch = htonu64(node->configEpoch);\n    memcpy(hdr->data.update.nodecfg.slots,node->slots,sizeof(node->slots));\n    for (unsigned int i = 0; i < sizeof(node->slots); i++) {\n        /* Don't advertise slots that the node stopped claiming */\n        hdr->data.update.nodecfg.slots[i] = hdr->data.update.nodecfg.slots[i] & (~server.cluster->owner_not_claiming_slot[i]);\n    }\n\n    clusterSendMessage(link,msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Send a MODULE message.\n *\n * If link is NULL, then the message is broadcasted to the whole cluster. 
*/\nvoid clusterSendModule(clusterLink *link, uint64_t module_id, uint8_t type,\n                       const char *payload, uint32_t len) {\n    uint32_t msglen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    msglen += sizeof(clusterMsgModule) - 3 + len;\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_MODULE, msglen);\n\n    clusterMsg *hdr = getMessageFromSendBlock(msgblock);\n    hdr->data.module.msg.module_id = module_id; /* Already endian adjusted. */\n    hdr->data.module.msg.type = type;\n    hdr->data.module.msg.len = htonl(len);\n    memcpy(hdr->data.module.msg.bulk_data,payload,len);\n\n    if (link)\n        clusterSendMessage(link,msgblock);\n    else\n        clusterBroadcastMessage(msgblock);\n\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* This function gets a cluster node ID string as target, the same way the nodes\n * addresses are represented in the modules side, resolves the node, and sends\n * the message. If the target is NULL the message is broadcasted.\n *\n * The function returns C_OK if the target is valid, otherwise C_ERR is\n * returned. */\nint clusterSendModuleMessageToTarget(const char *target, uint64_t module_id, uint8_t type, const char *payload, uint32_t len) {\n    clusterNode *node = NULL;\n\n    if (target != NULL) {\n        node = clusterLookupNode(target, strlen(target));\n        if (node == NULL || node->link == NULL) return C_ERR;\n    }\n\n    clusterSendModule(target ? node->link : NULL,\n                      module_id, type, payload, len);\n    return C_OK;\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER Pub/Sub support\n *\n * If `sharded` is 0:\n * For now we do very little, just propagating [S]PUBLISH messages across the whole\n * cluster. 
In the future we'll try to get smarter and avoid propagating those\n * messages to hosts without receivers for a given channel.\n * Otherwise:\n * Publish this message across the slot (primary/replica).\n * -------------------------------------------------------------------------- */\nvoid clusterPropagatePublish(robj *channel, robj *message, int sharded) {\n    clusterMsgSendBlock *msgblock;\n\n    if (!sharded) {\n        msgblock = clusterCreatePublishMsgBlock(channel, message, CLUSTERMSG_TYPE_PUBLISH);\n        clusterBroadcastMessage(msgblock);\n        clusterMsgSendBlockDecrRefCount(msgblock);\n        return;\n    }\n\n    listIter li;\n    listNode *ln;\n    list *nodes_for_slot = clusterGetNodesInMyShard(server.cluster->myself);\n    serverAssert(nodes_for_slot != NULL);\n    listRewind(nodes_for_slot, &li);\n    msgblock = clusterCreatePublishMsgBlock(channel, message, CLUSTERMSG_TYPE_PUBLISHSHARD);\n    while((ln = listNext(&li))) {\n        clusterNode *node = listNodeValue(ln);\n        if (node->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_HANDSHAKE))\n            continue;\n        clusterSendMessage(node->link,msgblock);\n    }\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* -----------------------------------------------------------------------------\n * SLAVE node specific functions\n * -------------------------------------------------------------------------- */\n\n/* This function sends a FAILOVER_AUTH_REQUEST message to every node in order to\n * see if there is the quorum for this slave instance to failover its failing\n * master.\n *\n * Note that we send the failover request to everybody, master and slave nodes,\n * but only the masters are supposed to reply to our query. 
*/\nvoid clusterRequestFailoverAuth(void) {\n    uint32_t msglen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST, msglen);\n\n    /* If this is a manual failover, set the CLUSTERMSG_FLAG0_FORCEACK bit\n     * in the header to communicate to the nodes receiving the message that\n     * they should authorize the failover even if the master is working. */\n    if (server.cluster->mf_end) msgblock->msg[0].mflags[0] |= CLUSTERMSG_FLAG0_FORCEACK;\n    clusterBroadcastMessage(msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Send a FAILOVER_AUTH_ACK message to the specified node. */\nvoid clusterSendFailoverAuth(clusterNode *node) {\n    if (!node->link) return;\n\n    uint32_t msglen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK, msglen);\n\n    clusterSendMessage(node->link,msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Send a MFSTART message to the specified node. */\nvoid clusterSendMFStart(clusterNode *node) {\n    if (!node->link) return;\n\n    uint32_t msglen = sizeof(clusterMsg)-sizeof(union clusterMsgData);\n    clusterMsgSendBlock *msgblock = createClusterMsgSendBlock(CLUSTERMSG_TYPE_MFSTART, msglen);\n\n    clusterSendMessage(node->link,msgblock);\n    clusterMsgSendBlockDecrRefCount(msgblock);\n}\n\n/* Vote for the node asking for our vote if the conditions are met. 
*/\nvoid clusterSendFailoverAuthIfNeeded(clusterNode *node, clusterMsg *request) {\n    clusterNode *master = node->slaveof;\n    uint64_t requestCurrentEpoch = ntohu64(request->currentEpoch);\n    uint64_t requestConfigEpoch = ntohu64(request->configEpoch);\n    unsigned char *claimed_slots = request->myslots;\n    int force_ack = request->mflags[0] & CLUSTERMSG_FLAG0_FORCEACK;\n    int j;\n\n    /* If we are not a master serving at least 1 slot, we don't have the\n     * right to vote, as the cluster size in Redis Cluster is the number\n     * of masters serving at least one slot, and the quorum is the cluster\n     * size / 2 + 1. */\n    if (nodeIsSlave(myself) || myself->numslots == 0) return;\n\n    /* Request epoch must be >= our currentEpoch.\n     * Note that it is impossible for it to actually be greater since\n     * our currentEpoch was updated as a side effect of receiving this\n     * request, if the request epoch was greater. */\n    if (requestCurrentEpoch < server.cluster->currentEpoch) {\n        serverLog(LL_WARNING,\n            \"Failover auth denied to %.40s (%s): reqEpoch (%llu) < curEpoch(%llu)\",\n            node->name, node->human_nodename,\n            (unsigned long long) requestCurrentEpoch,\n            (unsigned long long) server.cluster->currentEpoch);\n        return;\n    }\n\n    /* I already voted for this epoch? Return ASAP. */\n    if (server.cluster->lastVoteEpoch == server.cluster->currentEpoch) {\n        serverLog(LL_WARNING,\n                \"Failover auth denied to %.40s (%s): already voted for epoch %llu\",\n                node->name, node->human_nodename,\n                (unsigned long long) server.cluster->currentEpoch);\n        return;\n    }\n\n    /* Node must be a slave and its master down.\n     * The master can be non-failing if the request is flagged\n     * with CLUSTERMSG_FLAG0_FORCEACK (manual failover). 
*/\n    if (clusterNodeIsMaster(node) || master == NULL ||\n        (!nodeFailed(master) && !force_ack))\n    {\n        if (clusterNodeIsMaster(node)) {\n            serverLog(LL_WARNING,\n                    \"Failover auth denied to %.40s (%s): it is a master node\",\n                    node->name, node->human_nodename);\n        } else if (master == NULL) {\n            serverLog(LL_WARNING,\n                    \"Failover auth denied to %.40s (%s): I don't know its master\",\n                    node->name, node->human_nodename);\n        } else if (!nodeFailed(master)) {\n            serverLog(LL_WARNING,\n                    \"Failover auth denied to %.40s (%s): its master is up\",\n                    node->name, node->human_nodename);\n        }\n        return;\n    }\n\n    /* Refuse the vote if we already voted for a slave of this master within\n     * two times the node timeout. This is not strictly needed for correctness\n     * of the algorithm but makes the base case more linear. */\n    if (mstime() - node->slaveof->voted_time < server.cluster_node_timeout * 2)\n    {\n        serverLog(LL_WARNING,\n                \"Failover auth denied to %.40s %s: \"\n                \"can't vote about this master before %lld milliseconds\",\n                node->name, node->human_nodename,\n                (long long) ((server.cluster_node_timeout*2)-\n                             (mstime() - node->slaveof->voted_time)));\n        return;\n    }\n\n    /* The slave requesting the vote must have a configEpoch for the claimed\n     * slots that is >= that of the masters currently serving the same\n     * slots in the current configuration. 
*/\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (bitmapTestBit(claimed_slots, j) == 0) continue;\n        if (isSlotUnclaimed(j) ||\n            server.cluster->slots[j]->configEpoch <= requestConfigEpoch)\n        {\n            continue;\n        }\n        /* If we reached this point we found a slot that in our current slots\n         * is served by a master with a greater configEpoch than the one claimed\n         * by the slave requesting our vote. Refuse to vote for this slave. */\n        serverLog(LL_WARNING,\n                \"Failover auth denied to %.40s (%s): \"\n                \"slot %d epoch (%llu) > reqEpoch (%llu)\",\n                node->name, node->human_nodename, j,\n                (unsigned long long) server.cluster->slots[j]->configEpoch,\n                (unsigned long long) requestConfigEpoch);\n        return;\n    }\n\n    /* We can vote for this slave. */\n    server.cluster->lastVoteEpoch = server.cluster->currentEpoch;\n    node->slaveof->voted_time = mstime();\n    clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|CLUSTER_TODO_FSYNC_CONFIG);\n    clusterSendFailoverAuth(node);\n    serverLog(LL_NOTICE, \"Failover auth granted to %.40s (%s) for epoch %llu\",\n        node->name, node->human_nodename, (unsigned long long) server.cluster->currentEpoch);\n}\n\n/* This function returns the \"rank\" of this instance, a slave, in the context\n * of its master-slaves ring. The rank of the slave is given by the number of\n * other slaves for the same master that have a better replication offset\n * compared to the local one (better means greater, so they claim more data).\n *\n * A slave with rank 0 is the one with the greatest (most up to date)\n * replication offset, and so forth. Note that because of how the rank is computed\n * multiple slaves may have the same rank, in case they have the same offset.\n *\n * The slave rank is used to add a delay to start an election in order to\n * get voted and replace a failing master. 
Slaves with better replication\n * offsets are more likely to win. */\nint clusterGetSlaveRank(void) {\n    long long myoffset;\n    int j, rank = 0;\n    clusterNode *master;\n\n    serverAssert(nodeIsSlave(myself));\n    master = myself->slaveof;\n    if (master == NULL) return 0; /* Never called by slaves without master. */\n\n    myoffset = replicationGetSlaveOffset();\n    for (j = 0; j < master->numslaves; j++)\n        if (master->slaves[j] != myself &&\n            !nodeCantFailover(master->slaves[j]) &&\n            master->slaves[j]->repl_offset > myoffset) rank++;\n    return rank;\n}\n\n/* This function is called by clusterHandleSlaveFailover() in order to\n * let the slave log why it is not able to failover. Sometimes there are\n * not the conditions, but since the failover function is called again and\n * again, we can't log the same things continuously.\n *\n * This function works by logging only if a given set of conditions are\n * true:\n *\n * 1) The reason for which the failover can't be initiated changed.\n *    The reasons also include a NONE reason we reset the state to\n *    when the slave finds that its master is fine (no FAIL flag).\n * 2) Also, the log is emitted again if the master is still down and\n *    the reason for not failing over is still the same, but more than\n *    CLUSTER_CANT_FAILOVER_RELOG_PERIOD seconds elapsed.\n * 3) Finally, the function only logs if the slave is down for more than\n *    five seconds + NODE_TIMEOUT. This way nothing is logged when a\n *    failover starts in a reasonable time.\n *\n * The function is called with the reason why the slave can't failover\n * which is one of the integer macros CLUSTER_CANT_FAILOVER_*.\n *\n * The function is guaranteed to be called only if 'myself' is a slave. 
*/\nvoid clusterLogCantFailover(int reason) {\n    char *msg;\n    static time_t lastlog_time = 0;\n    mstime_t nolog_fail_time = server.cluster_node_timeout + 5000;\n\n    /* Don't log if we have the same reason for some time. */\n    if (reason == server.cluster->cant_failover_reason &&\n        time(NULL)-lastlog_time < CLUSTER_CANT_FAILOVER_RELOG_PERIOD)\n        return;\n\n    server.cluster->cant_failover_reason = reason;\n\n    /* We also don't emit any log if the master failed not long ago, as the\n     * goal of this function is to log slaves in a stalled condition for\n     * a long time. */\n    if (myself->slaveof &&\n        nodeFailed(myself->slaveof) &&\n        (mstime() - myself->slaveof->fail_time) < nolog_fail_time) return;\n\n    switch(reason) {\n    case CLUSTER_CANT_FAILOVER_DATA_AGE:\n        msg = \"Disconnected from master for longer than allowed. \"\n              \"Please check the 'cluster-replica-validity-factor' configuration \"\n              \"option.\";\n        break;\n    case CLUSTER_CANT_FAILOVER_WAITING_DELAY:\n        msg = \"Waiting the delay before I can start a new failover.\";\n        break;\n    case CLUSTER_CANT_FAILOVER_EXPIRED:\n        msg = \"Failover attempt expired.\";\n        break;\n    case CLUSTER_CANT_FAILOVER_WAITING_VOTES:\n        msg = \"Waiting for votes, but majority still not reached.\";\n        break;\n    default:\n        msg = \"Unknown reason code.\";\n        break;\n    }\n    lastlog_time = time(NULL);\n    serverLog(LL_NOTICE,\"Currently unable to failover: %s\", msg);\n\n    int cur_vote = server.cluster->failover_auth_count;\n    int cur_quorum = (server.cluster->size / 2) + 1;\n    /* Emit a log when an election is in progress and waiting for votes or when the failover attempt expired. */\n    if (reason == CLUSTER_CANT_FAILOVER_WAITING_VOTES || reason == CLUSTER_CANT_FAILOVER_EXPIRED) {\n        serverLog(LL_NOTICE, \"Needed quorum: %d. 
Number of votes received so far: %d\", cur_quorum, cur_vote);\n    } \n}\n\n/* This function implements the final part of automatic and manual failovers,\n * where the slave grabs its master's hash slots, and propagates the new\n * configuration.\n *\n * Note that it's up to the caller to be sure that the node got a new\n * configuration epoch already. */\nvoid clusterFailoverReplaceYourMaster(void) {\n    int j;\n    clusterNode *oldmaster = myself->slaveof;\n\n    if (clusterNodeIsMaster(myself) || oldmaster == NULL) return;\n\n    /* 1) Turn this node into a master. */\n    clusterSetNodeAsMaster(myself);\n    replicationUnsetMaster();\n\n    /* 2) Claim all the slots assigned to our master. */\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (clusterNodeCoversSlot(oldmaster, j)) {\n            clusterDelSlot(j);\n            clusterAddSlot(myself,j);\n        }\n    }\n\n    /* 3) Update state and save config. */\n    clusterUpdateState();\n    clusterSaveConfigOrDie(1);\n\n    /* 4) Pong all the other nodes so that they can update the state\n     *    accordingly and detect that we switched to master role. */\n    clusterDoBeforeSleep(CLUSTER_TODO_BROADCAST_PONG);\n\n    /* 5) If there was a manual failover in progress, clear the state. */\n    resetManualFailover();\n\n    /* 6) Handle the ASM task from previous master. 
*/\n    asmFinalizeMasterTask();\n}\n\n/* This function is called if we are a slave node and our master serving\n * a non-zero amount of hash slots is in FAIL state.\n *\n * The goal of this function is:\n * 1) To check if we are able to perform a failover, is our data updated?\n * 2) Try to get elected by masters.\n * 3) Perform the failover informing all the other nodes.\n */\nvoid clusterHandleSlaveFailover(void) {\n    mstime_t data_age;\n    mstime_t auth_age = mstime() - server.cluster->failover_auth_time;\n    int needed_quorum = (server.cluster->size / 2) + 1;\n    int manual_failover = server.cluster->mf_end != 0 &&\n                          server.cluster->mf_can_start;\n    mstime_t auth_timeout, auth_retry_time;\n\n    server.cluster->todo_before_sleep &= ~CLUSTER_TODO_HANDLE_FAILOVER;\n\n    /* Compute the failover timeout (the max time we have to send votes\n     * and wait for replies), and the failover retry time (the time to wait\n     * before trying to get voted again).\n     *\n     * Timeout is MAX(NODE_TIMEOUT*2,2000) milliseconds.\n     * Retry is two times the Timeout.\n     */\n    auth_timeout = server.cluster_node_timeout*2;\n    if (auth_timeout < 2000) auth_timeout = 2000;\n    auth_retry_time = auth_timeout*2;\n\n    /* Pre conditions to run the function, that must be met both in case\n     * of an automatic or manual failover:\n     * 1) We are a slave.\n     * 2) Our master is flagged as FAIL, or this is a manual failover.\n     * 3) We don't have the no failover configuration set, and this is\n     *    not a manual failover.\n     * 4) It is serving slots. 
*/\n    if (clusterNodeIsMaster(myself) ||\n        myself->slaveof == NULL ||\n        (!nodeFailed(myself->slaveof) && !manual_failover) ||\n        (server.cluster_slave_no_failover && !manual_failover) ||\n        myself->slaveof->numslots == 0)\n    {\n        /* There are no reasons to failover, so we set the reason why we\n         * are returning without failing over to NONE. */\n        server.cluster->cant_failover_reason = CLUSTER_CANT_FAILOVER_NONE;\n        return;\n    }\n\n    /* Set data_age to the number of milliseconds we are disconnected from\n     * the master. */\n    if (server.repl_state == REPL_STATE_CONNECTED) {\n        data_age = (mstime_t)(server.unixtime - server.master->lastinteraction)\n                   * 1000;\n    } else {\n        data_age = (mstime_t)(server.unixtime - server.repl_down_since) * 1000;\n    }\n\n    /* Remove the node timeout from the data age as it is fine that we are\n     * disconnected from our master at least for the time it was down to be\n     * flagged as FAIL, that's the baseline. */\n    if (data_age > server.cluster_node_timeout)\n        data_age -= server.cluster_node_timeout;\n\n    /* Check if our data is recent enough according to the slave validity\n     * factor configured by the user.\n     *\n     * Check bypassed for manual failovers. */\n    if (server.cluster_slave_validity_factor &&\n        data_age >\n        (((mstime_t)server.repl_ping_slave_period * 1000) +\n         (server.cluster_node_timeout * server.cluster_slave_validity_factor)))\n    {\n        if (!manual_failover) {\n            clusterLogCantFailover(CLUSTER_CANT_FAILOVER_DATA_AGE);\n            return;\n        }\n    }\n\n    /* If the previous failover attempt timed out and the retry time has\n     * elapsed, we can set up a new one. */\n    if (auth_age > auth_retry_time) {\n        server.cluster->failover_auth_time = mstime() +\n            500 + /* Fixed delay of 500 milliseconds, let FAIL msg propagate. 
*/\n            random() % 500; /* Random delay between 0 and 500 milliseconds. */\n        server.cluster->failover_auth_count = 0;\n        server.cluster->failover_auth_sent = 0;\n        server.cluster->failover_auth_rank = clusterGetSlaveRank();\n        /* We add another delay that is proportional to the slave rank.\n         * Specifically 1 second * rank. This way slaves that probably have a\n         * less updated replication offset are penalized. */\n        server.cluster->failover_auth_time +=\n            server.cluster->failover_auth_rank * 1000;\n        /* However if this is a manual failover, no delay is needed. */\n        if (server.cluster->mf_end) {\n            server.cluster->failover_auth_time = mstime();\n            server.cluster->failover_auth_rank = 0;\n            clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_FAILOVER);\n        }\n        serverLog(LL_NOTICE,\n            \"Start of election delayed for %lld milliseconds \"\n            \"(rank #%d, offset %lld).\",\n            server.cluster->failover_auth_time - mstime(),\n            server.cluster->failover_auth_rank,\n            replicationGetSlaveOffset());\n        /* Now that we have a scheduled election, broadcast our offset\n         * to all the other slaves so that they'll update their offsets\n         * if our offset is better. */\n        clusterBroadcastPong(CLUSTER_BROADCAST_LOCAL_SLAVES);\n        return;\n    }\n\n    /* It is possible that we received more updated offsets from other\n     * slaves for the same master since we computed our election delay.\n     * Update the delay if our rank changed.\n     *\n     * Not performed if this is a manual failover. 
*/\n    if (server.cluster->failover_auth_sent == 0 &&\n        server.cluster->mf_end == 0)\n    {\n        int newrank = clusterGetSlaveRank();\n        if (newrank > server.cluster->failover_auth_rank) {\n            long long added_delay =\n                (newrank - server.cluster->failover_auth_rank) * 1000;\n            server.cluster->failover_auth_time += added_delay;\n            server.cluster->failover_auth_rank = newrank;\n            serverLog(LL_NOTICE,\n                \"Replica rank updated to #%d, added %lld milliseconds of delay.\",\n                newrank, added_delay);\n        }\n    }\n\n    /* Return ASAP if we still can't start the election. */\n    if (mstime() < server.cluster->failover_auth_time) {\n        clusterLogCantFailover(CLUSTER_CANT_FAILOVER_WAITING_DELAY);\n        return;\n    }\n\n    /* Return ASAP if the election is too old to be valid. */\n    if (auth_age > auth_timeout) {\n        clusterLogCantFailover(CLUSTER_CANT_FAILOVER_EXPIRED);\n        return;\n    }\n\n    /* Ask for votes if needed. */\n    if (server.cluster->failover_auth_sent == 0) {\n        server.cluster->currentEpoch++;\n        server.cluster->failover_auth_epoch = server.cluster->currentEpoch;\n        serverLog(LL_NOTICE,\"Starting a failover election for epoch %llu.\",\n            (unsigned long long) server.cluster->currentEpoch);\n        clusterRequestFailoverAuth();\n        server.cluster->failover_auth_sent = 1;\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_UPDATE_STATE|\n                             CLUSTER_TODO_FSYNC_CONFIG);\n        return; /* Wait for replies. */\n    }\n\n    /* Check if we reached the quorum. */\n    if (server.cluster->failover_auth_count >= needed_quorum) {\n        /* We have the quorum, we can finally failover the master. 
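         *\n         * As a hedged illustration (example numbers, not from the code):\n         * with server.cluster->size == 5 masters, needed_quorum is\n         * 5/2 + 1 == 3, so failover_auth_count must reach at least 3\n         * FAILOVER_AUTH_ACK votes before the failover proceeds. 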
*/\n\n        serverLog(LL_NOTICE,\n            \"Failover election won: I'm the new master.\");\n\n        /* Update my configEpoch to the epoch of the election. */\n        if (myself->configEpoch < server.cluster->failover_auth_epoch) {\n            myself->configEpoch = server.cluster->failover_auth_epoch;\n            serverLog(LL_NOTICE,\n                \"configEpoch set to %llu after successful failover\",\n                (unsigned long long) myself->configEpoch);\n        }\n\n        /* Take responsibility for the cluster slots. */\n        clusterFailoverReplaceYourMaster();\n    } else {\n        clusterLogCantFailover(CLUSTER_CANT_FAILOVER_WAITING_VOTES);\n    }\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER slave migration\n *\n * Slave migration is the process that allows a slave of a master that is\n * already covered by at least one other slave, to \"migrate\" to a master that\n * is orphaned, that is, left with no working slaves.\n * ------------------------------------------------------------------------- */\n\n/* This function is responsible for deciding whether this replica should be\n * migrated to a different (orphaned) master. 
It is called by the clusterCron() function\n * only if:\n *\n * 1) We are a slave node.\n * 2) It was detected that there is at least one orphaned master in\n *    the cluster.\n * 3) We are a slave of one of the masters with the greatest number of\n *    slaves.\n *\n * These checks are performed by the caller, since it needs to iterate over\n * the nodes anyway, so we only spend time in clusterHandleSlaveMigration()\n * when definitely needed.\n *\n * The function is called with a pre-computed max_slaves, that is the max\n * number of working (not in FAIL state) slaves for a single master.\n *\n * Additional conditions for migration are examined inside the function.\n */\nvoid clusterHandleSlaveMigration(int max_slaves) {\n    int j, okslaves = 0;\n    clusterNode *mymaster = myself->slaveof, *target = NULL, *candidate = NULL;\n    dictIterator di;\n    dictEntry *de;\n\n    /* Step 1: Don't migrate if the cluster state is not ok. */\n    if (server.cluster->state != CLUSTER_OK) return;\n\n    /* Step 2: Don't migrate if my master will not be left with at least\n     *         'migration-barrier' slaves after my migration. */\n    if (mymaster == NULL) return;\n    for (j = 0; j < mymaster->numslaves; j++)\n        if (!nodeFailed(mymaster->slaves[j]) &&\n            !nodeTimedOut(mymaster->slaves[j])) okslaves++;\n    if (okslaves <= server.cluster_migration_barrier) return;\n\n    /* Step 3: Identify a candidate for migration, and check if among the\n     * masters with the greatest number of ok slaves, I'm the one with the\n     * smallest node ID (the \"candidate slave\").\n     *\n     * Note: this means that eventually a replica migration will occur\n     * since slaves that are reachable again always have their FAIL flag\n     * cleared, so eventually there must be a candidate.\n     * There is a possible race condition causing multiple\n     * slaves to migrate at the same time, but this is unlikely to\n     * happen and relatively harmless when it does. 
*/\n    candidate = myself;\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        int okslaves = 0, is_orphaned = 1;\n\n        /* We want to migrate only if this master is working, orphaned, and\n         * used to have slaves, or if it failed over a master that had slaves\n         * (MIGRATE_TO flag). This way we only migrate to instances that were\n         * supposed to have replicas. */\n        if (nodeIsSlave(node) || nodeFailed(node)) is_orphaned = 0;\n        if (!(node->flags & CLUSTER_NODE_MIGRATE_TO)) is_orphaned = 0;\n\n        /* Check number of working slaves. */\n        if (clusterNodeIsMaster(node)) okslaves = clusterCountNonFailingSlaves(node);\n        if (okslaves > 0) is_orphaned = 0;\n\n        if (is_orphaned) {\n            if (!target && node->numslots > 0) target = node;\n\n            /* Track the starting time of the orphaned condition for this\n             * master. */\n            if (!node->orphaned_time) node->orphaned_time = mstime();\n        } else {\n            node->orphaned_time = 0;\n        }\n\n        /* Check if I'm the slave candidate for the migration: attached\n         * to a master with the maximum number of slaves and with the smallest\n         * node ID. 
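The smallest-node-ID tie-break described above is just a `memcmp()` scan over fixed-length IDs. A minimal standalone sketch (all names are hypothetical; `NAMELEN` stands in for `CLUSTER_NAMELEN`, which is 40 in the real code):

```c
#include <assert.h>
#include <string.h>

#define NAMELEN 4 /* toy stand-in for CLUSTER_NAMELEN (40 bytes) */

/* Return the index of the lexicographically smallest fixed-length name,
 * ordered by memcmp() exactly as the candidate-selection loop does. */
static int smallest_name(const char names[][NAMELEN], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (memcmp(names[i], names[best], NAMELEN) < 0) best = i;
    return best;
}

/* Demo over three fabricated node IDs. */
static int smallest_name_demo(void) {
    static const char names[3][NAMELEN] = {"bbb", "aaa", "ccc"};
    return smallest_name(names, 3);
}
```

Because every node sees the same set of IDs, all replicas deterministically agree on the same single candidate, which is what keeps simultaneous migrations unlikely.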
*/\n        if (okslaves == max_slaves) {\n            for (j = 0; j < node->numslaves; j++) {\n                if (memcmp(node->slaves[j]->name,\n                           candidate->name,\n                           CLUSTER_NAMELEN) < 0)\n                {\n                    candidate = node->slaves[j];\n                }\n            }\n        }\n    }\n    dictResetIterator(&di);\n\n    /* Step 4: perform the migration if there is a target, and if I'm the\n     * candidate, but only if the master is continuously orphaned for a\n     * couple of seconds, so that during failovers, we give some time to\n     * the natural slaves of this instance to advertise their switch from\n     * the old master to the new one. */\n    if (target && candidate == myself &&\n        (mstime()-target->orphaned_time) > CLUSTER_SLAVE_MIGRATION_DELAY &&\n       !(server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_FAILOVER))\n    {\n        serverLog(LL_NOTICE,\"Migrating to orphaned master %.40s\",\n            target->name);\n        clusterSetMaster(target);\n        /* Save the new config and broadcast it to the other nodes. */\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_FSYNC_CONFIG|\n                             CLUSTER_TODO_BROADCAST_PONG);\n    }\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER manual failover\n *\n * These are the important steps performed by slaves during a manual failover:\n * 1) The user sends the CLUSTER FAILOVER command. 
The failover state is initialized\n *    setting mf_end to the millisecond unix time at which we'll abort the\n *    attempt.\n * 2) Slave sends an MFSTART message to the master requesting to pause clients\n *    for two times the manual failover timeout CLUSTER_MF_TIMEOUT.\n *    When master is paused for manual failover, it also starts to flag\n *    packets with CLUSTERMSG_FLAG0_PAUSED.\n * 3) Slave waits for master to send its replication offset flagged as PAUSED.\n * 4) If slave received the offset from the master, and its offset matches,\n *    mf_can_start is set to 1, and clusterHandleSlaveFailover() will perform\n *    the failover as usual, with the difference that the vote request\n *    will be modified to force masters to vote for a slave that has a\n *    working master.\n *\n * From the point of view of the master things are simpler: when a\n * PAUSE_CLIENTS packet is received the master sets mf_end as well and\n * the sender in mf_slave. During the time limit for the manual failover\n * the master will just send PINGs more often to this slave, flagged with\n * the PAUSED flag, so that the slave will set mf_master_offset when receiving\n * a packet from the master with this flag set.\n *\n * The goal of the manual failover is to perform a fast failover without\n * data loss due to the asynchronous master-slave replication.\n * -------------------------------------------------------------------------- */\n\n/* Reset the manual failover state. This works for both masters and slaves\n * as all the state about manual failover is cleared.\n *\n * The function can be used both to initialize the manual failover state at\n * startup or to abort a manual failover in progress. */\nvoid resetManualFailover(void) {\n    if (server.cluster->mf_slave) {\n        /* We were a master failing over, so we paused clients and related actions.\n         * Regardless of the outcome we unpause now to allow traffic again. 
*/\n        unpauseActions(PAUSE_DURING_FAILOVER);\n    }\n    server.cluster->mf_end = 0; /* No manual failover in progress. */\n    server.cluster->mf_can_start = 0;\n    server.cluster->mf_slave = NULL;\n    server.cluster->mf_master_offset = -1;\n}\n\n/* If a manual failover timed out, abort it. */\nvoid manualFailoverCheckTimeout(void) {\n    if (server.cluster->mf_end && server.cluster->mf_end < mstime()) {\n        serverLog(LL_WARNING,\"Manual failover timed out.\");\n        resetManualFailover();\n    }\n}\n\n/* This function is called from the cluster cron function in order to go\n * forward with a manual failover state machine. */\nvoid clusterHandleManualFailover(void) {\n    /* Return ASAP if no manual failover is in progress. */\n    if (server.cluster->mf_end == 0) return;\n\n    /* If mf_can_start is non-zero, the failover was already triggered so the\n     * next steps are performed by clusterHandleSlaveFailover(). */\n    if (server.cluster->mf_can_start) return;\n\n    if (server.cluster->mf_master_offset == -1) return; /* Wait for offset... */\n\n    if (server.cluster->mf_master_offset == replicationGetSlaveOffset()) {\n        /* Our replication offset matches the master replication offset\n         * announced after clients were paused. We can start the failover. 
*/\n        server.cluster->mf_can_start = 1;\n        serverLog(LL_NOTICE,\n            \"All master replication stream processed, \"\n            \"manual failover can start.\");\n        clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_FAILOVER);\n        return;\n    }\n    clusterDoBeforeSleep(CLUSTER_TODO_HANDLE_MANUALFAILOVER);\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER cron job\n * -------------------------------------------------------------------------- */\n\n/* Check if the node is disconnected and re-establish the connection.\n * Also update a few stats while we are here, that can be used to make\n * better decisions in other parts of the code. */\nstatic int clusterNodeCronHandleReconnect(clusterNode *node, mstime_t handshake_timeout, mstime_t now) {\n    /* Not interested in reconnecting the link with myself or nodes\n     * for which we have no address. */\n    if (node->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_NOADDR)) return 1;\n\n    if (node->flags & CLUSTER_NODE_PFAIL)\n        server.cluster->stats_pfail_nodes++;\n\n    /* A node in HANDSHAKE state has a limited lifespan equal to the\n     * configured node timeout. 
*/\n    if (nodeInHandshake(node) && now - node->ctime > handshake_timeout) {\n        clusterDelNode(node);\n        return 1;\n    }\n\n    if (node->link == NULL) {\n        clusterLink *link = createClusterLink(node);\n        link->conn = connCreate(server.el, connTypeOfCluster());\n        connSetPrivateData(link->conn, link);\n        if (connConnect(link->conn, node->ip, node->cport, server.bind_source_addr,\n                    clusterLinkConnectHandler) == C_ERR) {\n            /* We got a synchronous error from connect before\n             * clusterSendPing() had a chance to be called.\n             * If node->ping_sent is zero, failure detection can't work,\n             * so we claim we actually sent a ping now (that will\n             * be really sent as soon as the link is obtained). */\n            if (node->ping_sent == 0) node->ping_sent = mstime();\n            serverLog(LL_DEBUG, \"Unable to connect to \"\n                \"Cluster Node [%s]:%d -> %s\", node->ip,\n                node->cport, server.neterr);\n\n            freeClusterLink(link);\n            return 0;\n        }\n    }\n    return 0;\n}\n\nstatic void freeClusterLinkOnBufferLimitReached(clusterLink *link) {\n    if (link == NULL || server.cluster_link_msg_queue_limit_bytes == 0) {\n        return;\n    }\n\n    unsigned long long mem_link = link->send_msg_queue_mem;\n    if (mem_link > server.cluster_link_msg_queue_limit_bytes) {\n        serverLog(LL_WARNING, \"Freeing cluster link(%s node %.40s, used memory: %llu) due to \"\n                \"exceeding send buffer memory limit.\", link->inbound ? \"from\" : \"to\",\n                link->node ? link->node->name : \"\", mem_link);\n        freeClusterLink(link);\n        server.cluster->stat_cluster_links_buffer_limit_exceeded++;\n    }\n}\n\n/* Free outbound link to a node if its send buffer size exceeded limit. 
*/\nstatic void clusterNodeCronFreeLinkOnBufferLimitReached(clusterNode *node) {\n    freeClusterLinkOnBufferLimitReached(node->link);\n    freeClusterLinkOnBufferLimitReached(node->inbound_link);\n}\n\n/* This is executed 10 times every second */\nvoid clusterCron(void) {\n    dictIterator di;\n    dictEntry *de;\n    int update_state = 0;\n    int orphaned_masters; /* How many masters there are without ok slaves. */\n    int max_slaves; /* Max number of ok slaves for a single master. */\n    int this_slaves; /* Number of ok slaves for our master (if we are a slave). */\n    mstime_t min_pong = 0, now = mstime();\n    clusterNode *min_pong_node = NULL;\n    static unsigned long long iteration = 0;\n    mstime_t handshake_timeout;\n\n    iteration++; /* Number of times this function was called so far. */\n\n    clusterUpdateMyselfHostname();\n\n    /* The handshake timeout is the time after which a handshake node that was\n     * not turned into a normal node is removed from the nodes. Usually it is\n     * just the NODE_TIMEOUT value, but when NODE_TIMEOUT is too small we use\n     * the value of 1 second. */\n    handshake_timeout = server.cluster_node_timeout;\n    if (handshake_timeout < 1000) handshake_timeout = 1000;\n\n    /* Clear so clusterNodeCronHandleReconnect can count the number of nodes in PFAIL. */\n    server.cluster->stats_pfail_nodes = 0;\n    /* Run through some of the operations we want to do on each cluster node. */\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        /* We free the inbound or outbound link to the node if the link has an\n         * oversized message send queue and immediately try reconnecting. 
*/\n        clusterNodeCronFreeLinkOnBufferLimitReached(node);\n        /* The protocol is that function(s) below return non-zero if the node was\n         * terminated.\n         */\n        if(clusterNodeCronHandleReconnect(node, handshake_timeout, now)) continue;\n    }\n    dictResetIterator(&di);\n\n    /* Ping some random node 1 time every 10 iterations, so that we usually ping\n     * one random node every second. */\n    if (!(iteration % 10)) {\n        int j;\n\n        /* Check a few random nodes and ping the one with the oldest\n         * pong_received time. */\n        for (j = 0; j < 5; j++) {\n            de = dictGetRandomKey(server.cluster->nodes);\n            clusterNode *this = dictGetVal(de);\n\n            /* Don't ping nodes disconnected or with a ping currently active. */\n            if (this->link == NULL || this->ping_sent != 0) continue;\n            if (this->flags & (CLUSTER_NODE_MYSELF|CLUSTER_NODE_HANDSHAKE))\n                continue;\n            if (min_pong_node == NULL || min_pong > this->pong_received) {\n                min_pong_node = this;\n                min_pong = this->pong_received;\n            }\n        }\n        if (min_pong_node) {\n            serverLog(LL_DEBUG,\"Pinging node %.40s\", min_pong_node->name);\n            clusterSendPing(min_pong_node->link, CLUSTERMSG_TYPE_PING);\n        }\n    }\n\n    /* Iterate nodes to check if we need to flag something as failing.\n     * This loop is also responsible to:\n     * 1) Check if there are orphaned masters (masters without non failing\n     *    slaves).\n     * 2) Count the max number of non failing slaves for a single master.\n     * 3) Count the number of slaves for our master, if we are a slave. 
*/\n    orphaned_masters = 0;\n    max_slaves = 0;\n    this_slaves = 0;\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        now = mstime(); /* Use an updated time at every iteration. */\n\n        if (node->flags &\n            (CLUSTER_NODE_MYSELF|CLUSTER_NODE_NOADDR|CLUSTER_NODE_HANDSHAKE))\n                continue;\n\n        /* Orphaned master check, useful only if the current instance\n         * is a slave that may migrate to another master. */\n        if (nodeIsSlave(myself) && clusterNodeIsMaster(node) && !nodeFailed(node)) {\n            int okslaves = clusterCountNonFailingSlaves(node);\n\n            /* A master is orphaned if it is serving a non-zero number of\n             * slots, has no working slaves, but used to have at least one\n             * slave, or failed over a master that used to have slaves. */\n            if (okslaves == 0 && node->numslots > 0 &&\n                node->flags & CLUSTER_NODE_MIGRATE_TO)\n            {\n                orphaned_masters++;\n            }\n            if (okslaves > max_slaves) max_slaves = okslaves;\n            if (myself->slaveof == node)\n                this_slaves = okslaves;\n        }\n\n        /* If we are not receiving any data for more than half the cluster\n         * timeout, reconnect the link: maybe there is a connection\n         * issue even if the node is alive. 
*/\n        mstime_t ping_delay = now - node->ping_sent;\n        mstime_t data_delay = now - node->data_received;\n        if (node->link && /* is connected */\n            now - node->link->ctime >\n            server.cluster_node_timeout && /* was not already reconnected */\n            node->ping_sent && /* we already sent a ping */\n            /* and we are waiting for the pong more than timeout/2 */\n            ping_delay > server.cluster_node_timeout/2 &&\n            /* and in such interval we are not seeing any traffic at all. */\n            data_delay > server.cluster_node_timeout/2)\n        {\n            /* Disconnect the link, it will be reconnected automatically. */\n            freeClusterLink(node->link);\n        }\n\n        /* If we have currently no active ping in this instance, and the\n         * received PONG is older than half the cluster timeout, send\n         * a new ping now, to ensure all the nodes are pinged without\n         * a too big delay. */\n        mstime_t ping_interval = server.cluster_ping_interval ? \n            server.cluster_ping_interval : server.cluster_node_timeout/2;\n        if (node->link &&\n            node->ping_sent == 0 &&\n            (now - node->pong_received) > ping_interval)\n        {\n            clusterSendPing(node->link, CLUSTERMSG_TYPE_PING);\n            continue;\n        }\n\n        /* If we are a master and one of the slaves requested a manual\n         * failover, ping it continuously. */\n        if (server.cluster->mf_end &&\n            clusterNodeIsMaster(myself) &&\n            server.cluster->mf_slave == node &&\n            node->link)\n        {\n            clusterSendPing(node->link, CLUSTERMSG_TYPE_PING);\n            continue;\n        }\n\n        /* Check only if we have an active ping for this instance. 
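The liveness arithmetic around this point can be sketched standalone (helper names are hypothetical): the effective ping interval falls back to half the node timeout when `cluster-ping-interval` is not configured, and a node becomes suspect only when both the pong delay and the data delay exceed the node timeout:

```c
#include <assert.h>

/* Hypothetical sketch: configured interval if non-zero, otherwise
 * half the node timeout, as in the fallback above. */
static long long effective_ping_interval(long long configured, long long node_timeout) {
    return configured ? configured : node_timeout / 2;
}

/* Hypothetical sketch: a node is suspect only if BOTH the pong and any
 * inbound data are older than the node timeout (min of the two delays). */
static int node_suspect(long long ping_delay, long long data_delay, long long node_timeout) {
    long long node_delay = (ping_delay < data_delay) ? ping_delay : data_delay;
    return node_delay > node_timeout;
}
```

Taking the minimum of the two delays is what lets heavy inbound traffic count as proof of liveness even when pongs are late.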
*/\n        if (node->ping_sent == 0) continue;\n\n        /* Check if this node looks unreachable.\n         * Note that if we already received the PONG, then node->ping_sent\n         * is zero, so we can't reach this code at all, and we don't risk\n         * checking for a PONG delay if we didn't send the PING.\n         *\n         * We also consider every incoming data as proof of liveness, since\n         * our cluster bus link is also used for data: under heavy data\n         * load pong delays are possible. */\n        mstime_t node_delay = (ping_delay < data_delay) ? ping_delay :\n                                                          data_delay;\n\n        if (node_delay > server.cluster_node_timeout) {\n            /* Timeout reached. Set the node as possibly failing if it is\n             * not already in this state. */\n            if (!(node->flags & (CLUSTER_NODE_PFAIL|CLUSTER_NODE_FAIL))) {\n                node->flags |= CLUSTER_NODE_PFAIL;\n                update_state = 1;\n                if (clusterNodeIsMaster(myself) && server.cluster->size == 1) {\n                    markNodeAsFailingIfNeeded(node);\n                } else {\n                    serverLog(LL_DEBUG,\"*** NODE %.40s possibly failing\", node->name);\n                }\n            }\n        }\n    }\n    dictResetIterator(&di);\n\n    /* If we are a slave node but the replication is still turned off,\n     * enable it if we know the address of our master and it appears to\n     * be up. */\n    if (nodeIsSlave(myself) &&\n        server.masterhost == NULL &&\n        myself->slaveof &&\n        nodeHasAddr(myself->slaveof))\n    {\n        replicationSetMaster(myself->slaveof->ip, getNodeDefaultReplicationPort(myself->slaveof));\n    }\n\n    /* Abort a manual failover if the timeout is reached. 
*/\n    manualFailoverCheckTimeout();\n\n    if (nodeIsSlave(myself)) {\n        clusterHandleManualFailover();\n        if (!(server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_FAILOVER))\n            clusterHandleSlaveFailover();\n        /* If there are orphaned masters, and we are a slave among the masters\n         * with the max number of non-failing slaves, consider migrating to\n         * the orphaned masters. Note that it does not make sense to try\n         * a migration if there is no master with at least *two* working\n         * slaves. */\n        if (orphaned_masters && max_slaves >= 2 && this_slaves == max_slaves &&\n            server.cluster_allow_replica_migration)\n            clusterHandleSlaveMigration(max_slaves);\n    }\n\n    if (update_state || server.cluster->state == CLUSTER_FAIL)\n        clusterUpdateState();\n}\n\n/* This function is called before the event handler returns to sleep for\n * events. It is useful to perform operations that must be done ASAP in\n * reaction to events fired but that are not safe to perform inside event\n * handlers, or to perform potentially expensive tasks that we need to do\n * a single time before replying to clients. */\nvoid clusterBeforeSleep(void) {\n    int flags = server.cluster->todo_before_sleep;\n\n    /* Reset our flags (not strictly needed since every single function\n     * called for flags set should be able to clear its flag). 
*/\n    server.cluster->todo_before_sleep = 0;\n\n    if (flags & CLUSTER_TODO_HANDLE_MANUALFAILOVER) {\n        /* Handle the manual failover as soon as possible, so that we don't\n         * incur up to a 100 ms delay as we would if it were handled only in\n         * clusterCron(). */\n        if (nodeIsSlave(myself)) {\n            clusterHandleManualFailover();\n            if (!(server.cluster_module_flags & CLUSTER_MODULE_FLAG_NO_FAILOVER))\n                clusterHandleSlaveFailover();\n        }\n    } else if (flags & CLUSTER_TODO_HANDLE_FAILOVER) {\n        /* Handle failover: this is needed when it is likely that we already\n         * have the quorum from masters, in order to react fast. */\n        clusterHandleSlaveFailover();\n    }\n\n    /* Update the cluster state. */\n    if (flags & CLUSTER_TODO_UPDATE_STATE)\n        clusterUpdateState();\n\n    /* Save the config, possibly using fsync. */\n    if (flags & CLUSTER_TODO_SAVE_CONFIG) {\n        int fsync = flags & CLUSTER_TODO_FSYNC_CONFIG;\n        clusterSaveConfigOrDie(fsync);\n    }\n\n    /* Broadcast a PONG to all the nodes. */\n    if (flags & CLUSTER_TODO_BROADCAST_PONG)\n        clusterBroadcastPong(CLUSTER_BROADCAST_ALL);\n}\n\nvoid clusterDoBeforeSleep(int flags) {\n    server.cluster->todo_before_sleep |= flags;\n}\n\n/* -----------------------------------------------------------------------------\n * Slots management\n * -------------------------------------------------------------------------- */\n\n/* Test bit 'pos' in a generic bitmap. Return 1 if the bit is set,\n * otherwise 0. */\nint bitmapTestBit(unsigned char *bitmap, int pos) {\n    off_t byte = pos/8;\n    int bit = pos&7;\n    return (bitmap[byte] & (1<<bit)) != 0;\n}\n\n/* Set the bit at position 'pos' in a bitmap. */\nvoid bitmapSetBit(unsigned char *bitmap, int pos) {\n    off_t byte = pos/8;\n    int bit = pos&7;\n    bitmap[byte] |= 1<<bit;\n}\n\n/* Clear the bit at position 'pos' in a bitmap. 
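The three bitmap helpers above are small enough to exercise standalone. A sketch (all names hypothetical) using the same `byte = pos/8`, `bit = pos&7` arithmetic over a 16384-slot bitmap:

```c
#include <assert.h>
#include <string.h>

#define SLOTS 16384 /* same slot count as Redis Cluster */

/* Standalone copies of the bit helpers: one bit per slot. */
static int bm_test(const unsigned char *bm, int pos) {
    return (bm[pos/8] & (1 << (pos&7))) != 0;
}
static void bm_set(unsigned char *bm, int pos)   { bm[pos/8] |=  1 << (pos&7); }
static void bm_clear(unsigned char *bm, int pos) { bm[pos/8] &= ~(1 << (pos&7)); }

/* Set, test, and clear one slot; neighbours must stay untouched. */
static int bm_demo(void) {
    unsigned char bm[SLOTS/8];
    memset(bm, 0, sizeof(bm));
    bm_set(bm, 5000);
    int was_set = bm_test(bm, 5000);
    bm_clear(bm, 5000);
    return was_set && !bm_test(bm, 5000) && !bm_test(bm, 5001);
}
```

At 16384 slots the whole per-node bitmap is only 2 KB, which is why it travels cheaply in cluster bus messages.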
*/\nvoid bitmapClearBit(unsigned char *bitmap, int pos) {\n    off_t byte = pos/8;\n    int bit = pos&7;\n    bitmap[byte] &= ~(1<<bit);\n}\n\n/* Return non-zero if there is at least one master with slaves in the cluster.\n * Otherwise zero is returned. Used by clusterNodeSetSlotBit() to set the\n * MIGRATE_TO flag when a master gets its first slot. */\nint clusterMastersHaveSlaves(void) {\n    dictIterator di;\n    dictEntry *de;\n    int slaves = 0;\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (nodeIsSlave(node)) continue;\n        slaves += node->numslaves;\n    }\n    dictResetIterator(&di);\n    return slaves != 0;\n}\n\n/* Set the slot bit and return the old value. */\nint clusterNodeSetSlotBit(clusterNode *n, int slot) {\n    int old = bitmapTestBit(n->slots,slot);\n    if (!old) {\n        bitmapSetBit(n->slots,slot);\n        n->numslots++;\n        /* When a master gets its first slot, even if it has no slaves,\n         * it gets flagged with MIGRATE_TO, that is, the master is a valid\n         * target for replica migration, if and only if at least one of\n         * the other masters has slaves right now.\n         *\n         * Normally masters are valid targets of replica migration if:\n         * 1. They used to have slaves (but no longer do).\n         * 2. They are slaves failing over a master that used to have slaves.\n         *\n         * However new masters with slots assigned are considered valid\n         * migration targets if the rest of the cluster is not slave-less.\n         *\n         * See https://github.com/redis/redis/issues/3043 for more info. */\n        if (n->numslots == 1 && clusterMastersHaveSlaves())\n            n->flags |= CLUSTER_NODE_MIGRATE_TO;\n    }\n    return old;\n}\n\n/* Clear the slot bit and return the old value. 
*/\nint clusterNodeClearSlotBit(clusterNode *n, int slot) {\n    int old = bitmapTestBit(n->slots,slot);\n    if (old) {\n        bitmapClearBit(n->slots,slot);\n        n->numslots--;\n    }\n    return old;\n}\n\n/* Return the slot bit from the cluster node structure. */\nint clusterNodeCoversSlot(clusterNode *n, int slot) {\n    return bitmapTestBit(n->slots,slot);\n}\n\n/* Add the specified slot to the list of slots that node 'n' will\n * serve. Return C_OK if the operation ended with success.\n * If the slot is already assigned to another instance this is considered\n * an error and C_ERR is returned. */\nint clusterAddSlot(clusterNode *n, int slot) {\n    if (server.cluster->slots[slot]) return C_ERR;\n    clusterNodeSetSlotBit(n,slot);\n    server.cluster->slots[slot] = n;\n    /* Make owner_not_claiming_slot flag consistent with slot ownership information. */\n    bitmapClearBit(server.cluster->owner_not_claiming_slot, slot);\n    clusterSlotStatReset(slot);\n    return C_OK;\n}\n\n/* Delete the specified slot marking it as unassigned.\n * Returns C_OK if the slot was assigned, otherwise if the slot was\n * already unassigned C_ERR is returned. */\nint clusterDelSlot(int slot) {\n    clusterNode *n = server.cluster->slots[slot];\n\n    if (!n) return C_ERR;\n\n    /* Cleanup the channels in master/replica as part of slot deletion. */\n    removeChannelsInSlot(slot);\n    /* Clear the slot bit. */\n    serverAssert(clusterNodeClearSlotBit(n,slot) == 1);\n    server.cluster->slots[slot] = NULL;\n    /* Make owner_not_claiming_slot flag consistent with slot ownership information. */\n    bitmapClearBit(server.cluster->owner_not_claiming_slot, slot);\n    clusterSlotStatReset(slot);\n    return C_OK;\n}\n\n/* Transfer slots from `from_node` to `to_node`.\n * Iterates over all cluster slots, transferring each slot covered by `from_node` to `to_node`.\n * Counts and returns the number of slots transferred.  
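The transfer loop below can be modeled standalone as moving set bits between two bitmaps while counting them. A sketch with hypothetical names over a toy 16-slot space:

```c
#include <assert.h>

#define NSLOTS 16 /* toy stand-in for CLUSTER_SLOTS (16384) */

/* Hypothetical analog of the slot transfer: for every set bit in 'from',
 * clear it there, set it in 'to', and count how many were moved. */
static int move_bits(unsigned char *from, unsigned char *to) {
    int moved = 0;
    for (int j = 0; j < NSLOTS; j++) {
        if (from[j/8] & (1 << (j&7))) {
            from[j/8] &= ~(1 << (j&7));
            to[j/8]   |=   1 << (j&7);
            moved++;
        }
    }
    return moved;
}

/* Demo: slots 0..3 and 8 start on 'from'; all five must end on 'to'. */
static int move_demo(void) {
    unsigned char from[2] = {0x0F, 0x01};
    unsigned char to[2]   = {0, 0};
    int moved = move_bits(from, to);
    return moved == 5 && from[0] == 0 && from[1] == 0 &&
           to[0] == 0x0F && to[1] == 0x01;
}
```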
*/\nint clusterMoveNodeSlots(clusterNode *from_node, clusterNode *to_node) {\n    int processed = 0;\n\n    for (int j = 0; j < CLUSTER_SLOTS; j++) {\n        if (clusterNodeCoversSlot(from_node, j)) {\n            clusterDelSlot(j);\n            clusterAddSlot(to_node, j);\n            processed++;\n        }\n    }\n    return processed;\n}\n\n/* Delete all the slots associated with the specified node.\n * The number of deleted slots is returned. */\nint clusterDelNodeSlots(clusterNode *node) {\n    int deleted = 0, j;\n\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (clusterNodeCoversSlot(node, j)) {\n            clusterDelSlot(j);\n            deleted++;\n        }\n    }\n    return deleted;\n}\n\n/* Clear the migrating / importing state for all the slots.\n * This is useful at initialization and when turning a master into a slave. */\nvoid clusterCloseAllSlots(void) {\n    memset(server.cluster->migrating_slots_to,0,\n        sizeof(server.cluster->migrating_slots_to));\n    memset(server.cluster->importing_slots_from,0,\n        sizeof(server.cluster->importing_slots_from));\n}\n\n/* -----------------------------------------------------------------------------\n * Cluster state evaluation function\n * -------------------------------------------------------------------------- */\n\n/* The following are defines that are only used in the evaluation function\n * and are based on heuristics. Actually the main point about the rejoin and\n * writable delay is that they should be a few orders of magnitude larger\n * than the network latency. 
*/\n#define CLUSTER_MAX_REJOIN_DELAY 5000\n#define CLUSTER_MIN_REJOIN_DELAY 500\n#define CLUSTER_WRITABLE_DELAY 2000\n\nvoid clusterUpdateState(void) {\n    int j, new_state;\n    int reachable_masters = 0;\n    static mstime_t among_minority_time;\n    static mstime_t first_call_time = 0;\n\n    server.cluster->todo_before_sleep &= ~CLUSTER_TODO_UPDATE_STATE;\n\n    /* If this is a master node, wait some time before turning the state\n     * into OK, since it is not a good idea to rejoin the cluster as a writable\n     * master, after a reboot, without giving the cluster a chance to\n     * reconfigure this node. Note that the delay is calculated starting from\n     * the first call to this function and not since the server start, in order\n     * to not count the DB loading time. */\n    if (first_call_time == 0) first_call_time = mstime();\n    if (clusterNodeIsMaster(myself) &&\n        server.cluster->state == CLUSTER_FAIL &&\n        mstime() - first_call_time < CLUSTER_WRITABLE_DELAY) return;\n\n    /* Start assuming the state is OK. We'll turn it into FAIL if there\n     * are the right conditions. */\n    new_state = CLUSTER_OK;\n\n    /* Check if all the slots are covered. */\n    if (server.cluster_require_full_coverage) {\n        for (j = 0; j < CLUSTER_SLOTS; j++) {\n            if (server.cluster->slots[j] == NULL ||\n                server.cluster->slots[j]->flags & (CLUSTER_NODE_FAIL))\n            {\n                new_state = CLUSTER_FAIL;\n                break;\n            }\n        }\n    }\n\n    /* Compute the cluster size, that is the number of master nodes\n     * serving at least a single slot.\n     *\n     * At the same time count the number of reachable masters having\n     * at least one slot. 
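Two pieces of arithmetic in this function are easy to check standalone (hypothetical helper names, not part of this file): the majority quorum over masters serving slots, and the rejoin delay clamped into the [CLUSTER_MIN_REJOIN_DELAY, CLUSTER_MAX_REJOIN_DELAY] range:

```c
#include <assert.h>

/* Hypothetical sketch: strict majority of masters serving slots. */
static int majority_quorum(int cluster_size) {
    return cluster_size / 2 + 1;
}

/* Hypothetical sketch: the node timeout clamped into [500, 5000] ms,
 * mirroring the CLUSTER_MIN/MAX_REJOIN_DELAY bounds above. */
static long long clamp_rejoin_delay(long long node_timeout) {
    if (node_timeout > 5000) return 5000;
    if (node_timeout < 500)  return 500;
    return node_timeout;
}
```

With the default 15-second node timeout the rejoin delay is capped at 5 seconds, so a healed minority master is held back briefly but not for the whole timeout.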
*/\n    {\n        dictIterator di;\n        dictEntry *de;\n\n        server.cluster->size = 0;\n        dictInitSafeIterator(&di, server.cluster->nodes);\n        while((de = dictNext(&di)) != NULL) {\n            clusterNode *node = dictGetVal(de);\n\n            if (clusterNodeIsMaster(node) && node->numslots) {\n                server.cluster->size++;\n                if ((node->flags & (CLUSTER_NODE_FAIL|CLUSTER_NODE_PFAIL)) == 0)\n                    reachable_masters++;\n            }\n        }\n        dictResetIterator(&di);\n    }\n\n    /* If we are in a minority partition, change the cluster state\n     * to FAIL. */\n    {\n        int needed_quorum = (server.cluster->size / 2) + 1;\n\n        if (reachable_masters < needed_quorum) {\n            new_state = CLUSTER_FAIL;\n            among_minority_time = mstime();\n        }\n    }\n\n    /* Log a state change */\n    if (new_state != server.cluster->state) {\n        mstime_t rejoin_delay = server.cluster_node_timeout;\n\n        /* If the instance is a master and was partitioned away with the\n         * minority, don't let it accept queries for some time after the\n         * partition heals, to make sure there is enough time to receive\n         * a configuration update. */\n        if (rejoin_delay > CLUSTER_MAX_REJOIN_DELAY)\n            rejoin_delay = CLUSTER_MAX_REJOIN_DELAY;\n        if (rejoin_delay < CLUSTER_MIN_REJOIN_DELAY)\n            rejoin_delay = CLUSTER_MIN_REJOIN_DELAY;\n\n        if (new_state == CLUSTER_OK &&\n            clusterNodeIsMaster(myself) &&\n            mstime() - among_minority_time < rejoin_delay)\n        {\n            return;\n        }\n\n        /* Change the state and log the event. */\n        serverLog(new_state == CLUSTER_OK ? LL_NOTICE : LL_WARNING,\n            \"Cluster state changed: %s\",\n            new_state == CLUSTER_OK ? 
\"ok\" : \"fail\");\n        server.cluster->state = new_state;\n    }\n}\n\n/* Remove all the shard channel related information not owned by the current shard. */\nstatic inline void removeAllNotOwnedShardChannelSubscriptions(void) {\n    if (!kvstoreSize(server.pubsubshard_channels)) return;\n    clusterNode *currmaster = clusterNodeIsMaster(myself) ? myself : myself->slaveof;\n    for (int j = 0; j < CLUSTER_SLOTS; j++) {\n        if (server.cluster->slots[j] != currmaster) {\n            removeChannelsInSlot(j);\n        }\n    }\n}\n\n/* This function is called after node startup in order to check if there\n * are any slots that we have keys for, but that are assigned to no one. If so,\n * we take ownership of them. */\nvoid clusterClaimUnassignedSlots(void) {\n    if (nodeIsSlave(myself)) return;\n\n    int update_config = 0;\n    for (int i = 0; i < CLUSTER_SLOTS; i++) {\n        /* Skip if: no keys, already has an owner, or we are importing it. */\n        if (!countKeysInSlot(i) ||\n            server.cluster->slots[i] != NULL ||\n            server.cluster->importing_slots_from[i] != NULL)\n        {\n            continue;\n        }\n\n        /* If we are here, data and cluster config don't agree: slot 'i' is\n         * populated even though we are not importing it and no one else is\n         * assigned to it. Fix this condition by taking ownership. */\n        update_config++;\n        serverLog(LL_NOTICE, \"I have keys for unassigned slot %d. \"\n                             \"Taking responsibility for it.\", i);\n        clusterAddSlot(myself, i);\n    }\n    if (update_config) clusterSaveConfigOrDie(1);\n}\n\n/* -----------------------------------------------------------------------------\n * SLAVE nodes handling\n * -------------------------------------------------------------------------- */\n\n/* Set the specified node 'n' as master for this node.\n * If this node is currently a master, it is turned into a slave. 
*/\nvoid clusterSetMaster(clusterNode *n) {\n    serverAssert(n != myself);\n    serverAssert(myself->numslots == 0);\n\n    int was_master = clusterNodeIsMaster(myself);\n    if (was_master) {\n        myself->flags &= ~(CLUSTER_NODE_MASTER|CLUSTER_NODE_MIGRATE_TO);\n        myself->flags |= CLUSTER_NODE_SLAVE;\n        clusterCloseAllSlots();\n    } else {\n        if (myself->slaveof)\n            clusterNodeRemoveSlave(myself->slaveof,myself);\n    }\n    myself->slaveof = n;\n    updateShardId(myself, n->shard_id);\n    clusterNodeAddSlave(n,myself);\n    replicationSetMaster(n->ip, getNodeDefaultReplicationPort(n));\n    removeAllNotOwnedShardChannelSubscriptions();\n    resetManualFailover();\n\n    /* Cancel all ASM tasks when switching into slave */\n    if (was_master) clusterAsmCancel(NULL, \"switching to replica\");\n}\n\n/* -----------------------------------------------------------------------------\n * Nodes to string representation functions.\n * -------------------------------------------------------------------------- */\n\nstruct redisNodeFlags {\n    uint16_t flag;\n    char *name;\n};\n\nstatic struct redisNodeFlags redisNodeFlagsTable[] = {\n    {CLUSTER_NODE_MYSELF,       \"myself,\"},\n    {CLUSTER_NODE_MASTER,       \"master,\"},\n    {CLUSTER_NODE_SLAVE,        \"slave,\"},\n    {CLUSTER_NODE_PFAIL,        \"fail?,\"},\n    {CLUSTER_NODE_FAIL,         \"fail,\"},\n    {CLUSTER_NODE_HANDSHAKE,    \"handshake,\"},\n    {CLUSTER_NODE_NOADDR,       \"noaddr,\"},\n    {CLUSTER_NODE_NOFAILOVER,   \"nofailover,\"}\n};\n\n/* Concatenate the comma separated list of node flags to the given SDS\n * string 'ci'. 
*/\nsds representClusterNodeFlags(sds ci, uint16_t flags) {\n    size_t orig_len = sdslen(ci);\n    int i, size = sizeof(redisNodeFlagsTable)/sizeof(struct redisNodeFlags);\n    for (i = 0; i < size; i++) {\n        struct redisNodeFlags *nodeflag = redisNodeFlagsTable + i;\n        if (flags & nodeflag->flag) ci = sdscat(ci, nodeflag->name);\n    }\n    /* If no flag was added, add the \"noflags\" special flag. */\n    if (sdslen(ci) == orig_len) ci = sdscat(ci,\"noflags,\");\n    sdsIncrLen(ci,-1); /* Remove trailing comma. */\n    return ci;\n}\n\n/* Concatenate the slot ownership information to the given SDS string 'ci'.\n * If the slot ownership is in a contiguous block, it is represented as a\n * start-end pair, otherwise each slot is added separately. */\nsds representSlotInfo(sds ci, uint16_t *slot_info_pairs, int slot_info_pairs_count) {\n    for (int i = 0; i < slot_info_pairs_count; i += 2) {\n        unsigned long start = slot_info_pairs[i];\n        unsigned long end = slot_info_pairs[i+1];\n        if (start == end) {\n            ci = sdscatfmt(ci, \" %i\", start);\n        } else {\n            ci = sdscatfmt(ci, \" %i-%i\", start, end);\n        }\n    }\n    return ci;\n}\n\n/* Generate a csv-alike representation of the specified cluster node.\n * See clusterGenNodesDescription() top comment for more information.\n *\n * The function returns the string representation as an SDS string. 
*/\nsds clusterGenNodeDescription(client *c, clusterNode *node, int tls_primary) {\n    int j, start;\n    sds ci;\n    int port = clusterNodeClientPort(node, tls_primary);\n\n    /* Node coordinates */\n    ci = sdscatlen(sdsempty(),node->name,CLUSTER_NAMELEN);\n    ci = sdscatfmt(ci,\" %s:%i@%i\",\n        node->ip,\n        port,\n        node->cport);\n    if (sdslen(node->hostname) != 0) {\n        ci = sdscatfmt(ci,\",%s\", node->hostname);\n    }\n    /* Don't expose aux fields to any clients yet but do allow them\n     * to be persisted to nodes.conf */\n    if (c == NULL) {\n        if (sdslen(node->hostname) == 0) {\n            ci = sdscatfmt(ci,\",\", 1);\n        }\n        for (int i = af_count-1; i >=0; i--) {\n            if ((tls_primary && i == af_tls_port) || (!tls_primary && i == af_tcp_port)) {\n                continue;\n            }\n            if (auxFieldHandlers[i].isPresent(node)) {\n                ci = sdscatprintf(ci, \",%s=\", auxFieldHandlers[i].field);\n                ci = auxFieldHandlers[i].getter(node, ci);\n            }\n        }\n    }\n\n    /* Flags */\n    ci = sdscatlen(ci,\" \",1);\n    ci = representClusterNodeFlags(ci, node->flags);\n\n    /* Slave of... or just \"-\" */\n    ci = sdscatlen(ci,\" \",1);\n    if (node->slaveof)\n        ci = sdscatlen(ci,node->slaveof->name,CLUSTER_NAMELEN);\n    else\n        ci = sdscatlen(ci,\"-\",1);\n\n    unsigned long long nodeEpoch = node->configEpoch;\n    if (nodeIsSlave(node) && node->slaveof) {\n        nodeEpoch = node->slaveof->configEpoch;\n    }\n    /* Latency from the POV of this node, config epoch, link status */\n    ci = sdscatfmt(ci,\" %I %I %U %s\",\n        (long long) node->ping_sent,\n        (long long) node->pong_received,\n        nodeEpoch,\n        (node->link || node->flags & CLUSTER_NODE_MYSELF) ?\n                    \"connected\" : \"disconnected\");\n\n    /* Slots served by this instance. 
If we already have cached slots info,\n     * append it directly; otherwise, generate it from the slot bitmap,\n     * but only if the node serves at least one slot. */\n    if (node->slot_info_pairs) {\n        ci = representSlotInfo(ci, node->slot_info_pairs, node->slot_info_pairs_count);\n    } else if (node->numslots > 0) {\n        start = -1;\n        for (j = 0; j < CLUSTER_SLOTS; j++) {\n            int bit;\n\n            if ((bit = clusterNodeCoversSlot(node, j)) != 0) {\n                if (start == -1) start = j;\n            }\n            if (start != -1 && (!bit || j == CLUSTER_SLOTS-1)) {\n                if (bit && j == CLUSTER_SLOTS-1) j++;\n\n                if (start == j-1) {\n                    ci = sdscatfmt(ci,\" %i\",start);\n                } else {\n                    ci = sdscatfmt(ci,\" %i-%i\",start,j-1);\n                }\n                start = -1;\n            }\n        }\n    }\n\n    /* For the MYSELF node only, we also dump info about slots that\n     * we are migrating to other instances or importing from other\n     * instances. */\n    if (node->flags & CLUSTER_NODE_MYSELF) {\n        for (j = 0; j < CLUSTER_SLOTS; j++) {\n            if (server.cluster->migrating_slots_to[j]) {\n                ci = sdscatprintf(ci,\" [%d->-%.40s]\",j,\n                    server.cluster->migrating_slots_to[j]->name);\n            } else if (server.cluster->importing_slots_from[j]) {\n                ci = sdscatprintf(ci,\" [%d-<-%.40s]\",j,\n                    server.cluster->importing_slots_from[j]->name);\n            }\n        }\n    }\n    return ci;\n}\n\n/* Generate the slot topology for all nodes and store the string representation\n * in the slots_info struct on the node. This is used to improve the efficiency\n * of clusterGenNodesDescription() because it avoids looping over the slot space\n * for generating the slot info for each node individually. 
*/\nvoid clusterGenNodesSlotsInfo(int filter) {\n    clusterNode *n = NULL;\n    int start = -1;\n\n    for (int i = 0; i <= CLUSTER_SLOTS; i++) {\n        /* Find start node and slot id. */\n        if (n == NULL) {\n            if (i == CLUSTER_SLOTS) break;\n            n = server.cluster->slots[i];\n            start = i;\n            continue;\n        }\n\n        /* Emit the slots info for 'n' when we reach a slot owned by a\n         * different node, or the end of the slot space. */\n        if (i == CLUSTER_SLOTS || n != server.cluster->slots[i]) {\n            if (!(n->flags & filter)) {\n                if (!n->slot_info_pairs) {\n                    n->slot_info_pairs = zmalloc(2 * n->numslots * sizeof(uint16_t));\n                }\n                serverAssert((n->slot_info_pairs_count + 1) < (2 * n->numslots));\n                n->slot_info_pairs[n->slot_info_pairs_count++] = start;\n                n->slot_info_pairs[n->slot_info_pairs_count++] = i-1;\n            }\n            if (i == CLUSTER_SLOTS) break;\n            n = server.cluster->slots[i];\n            start = i;\n        }\n    }\n}\n\nvoid clusterFreeNodesSlotsInfo(clusterNode *n) {\n    zfree(n->slot_info_pairs);\n    n->slot_info_pairs = NULL;\n    n->slot_info_pairs_count = 0;\n}\n\n/* Generate a csv-alike representation of the nodes we are aware of,\n * including the \"myself\" node, and return an SDS string containing the\n * representation (it is up to the caller to free it).\n *\n * All the nodes matching at least one of the node flags specified in\n * \"filter\" are excluded from the output, so using zero as a filter will\n * include all the known nodes in the representation, including nodes in\n * the HANDSHAKE state.\n *\n * Set tls_primary to 1 to put the TLS port in the main <ip>:<port>\n * field and the TCP port in an aux field, instead of the opposite way.\n *\n * The representation obtained using this function is used for the output\n * of the CLUSTER NODES command, and as the format for the cluster\n * 
configuration file (nodes.conf) for a given node. */\nsds clusterGenNodesDescription(client *c, int filter, int tls_primary) {\n    sds ci = sdsempty(), ni;\n    dictIterator di;\n    dictEntry *de;\n\n    /* First, generate the slots info for all nodes. */\n    clusterGenNodesSlotsInfo(filter);\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n\n        if (node->flags & filter) continue;\n        ni = clusterGenNodeDescription(c, node, tls_primary);\n        ci = sdscatsds(ci,ni);\n        sdsfree(ni);\n        ci = sdscatlen(ci,\"\\n\",1);\n\n        /* Release slots info. */\n        clusterFreeNodesSlotsInfo(node);\n    }\n    dictResetIterator(&di);\n    return ci;\n}\n\n/* Add to the output buffer of the given client the description of the given cluster link.\n * The description is a map with each entry being an attribute of the link. */\nvoid addReplyClusterLinkDescription(client *c, clusterLink *link) {\n    addReplyMapLen(c, 6);\n\n    addReplyBulkCString(c, \"direction\");\n    addReplyBulkCString(c, link->inbound ? \"from\" : \"to\");\n\n    /* addReplyClusterLinkDescription is only called for links that have been\n     * associated with nodes. The association is always bi-directional, so\n     * in addReplyClusterLinkDescription, link->node should never be NULL. 
*/\n    serverAssert(link->node);\n    sds node_name = sdsnewlen(link->node->name, CLUSTER_NAMELEN);\n    addReplyBulkCString(c, \"node\");\n    addReplyBulkCString(c, node_name);\n    sdsfree(node_name);\n\n    addReplyBulkCString(c, \"create-time\");\n    addReplyLongLong(c, link->ctime);\n\n    char events[3], *p;\n    p = events;\n    if (link->conn) {\n        if (connHasReadHandler(link->conn)) *p++ = 'r';\n        if (connHasWriteHandler(link->conn)) *p++ = 'w';\n    }\n    *p = '\\0';\n    addReplyBulkCString(c, \"events\");\n    addReplyBulkCString(c, events);\n\n    addReplyBulkCString(c, \"send-buffer-allocated\");\n    addReplyLongLong(c, link->send_msg_queue_mem);\n\n    addReplyBulkCString(c, \"send-buffer-used\");\n    addReplyLongLong(c, link->send_msg_queue_mem);\n}\n\n/* Add to the output buffer of the given client an array of cluster link descriptions,\n * with array entry being a description of a single current cluster link. */\nvoid addReplyClusterLinksDescription(client *c) {\n    dictIterator di;\n    dictEntry *de;\n    void *arraylen_ptr = NULL;\n    int num_links = 0;\n\n    arraylen_ptr = addReplyDeferredLen(c);\n\n    dictInitSafeIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        if (node->link) {\n            num_links++;\n            addReplyClusterLinkDescription(c, node->link);\n        }\n        if (node->inbound_link) {\n            num_links++;\n            addReplyClusterLinkDescription(c, node->inbound_link);\n        }\n    }\n    dictResetIterator(&di);\n\n    setDeferredArrayLen(c, arraylen_ptr, num_links);\n}\n\n/* -----------------------------------------------------------------------------\n * CLUSTER command\n * -------------------------------------------------------------------------- */\n\nconst char *clusterGetMessageTypeString(int type) {\n    switch(type) {\n    case CLUSTERMSG_TYPE_PING: return \"ping\";\n    case 
CLUSTERMSG_TYPE_PONG: return \"pong\";\n    case CLUSTERMSG_TYPE_MEET: return \"meet\";\n    case CLUSTERMSG_TYPE_FAIL: return \"fail\";\n    case CLUSTERMSG_TYPE_PUBLISH: return \"publish\";\n    case CLUSTERMSG_TYPE_PUBLISHSHARD: return \"publishshard\";\n    case CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST: return \"auth-req\";\n    case CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK: return \"auth-ack\";\n    case CLUSTERMSG_TYPE_UPDATE: return \"update\";\n    case CLUSTERMSG_TYPE_MFSTART: return \"mfstart\";\n    case CLUSTERMSG_TYPE_MODULE: return \"module\";\n    }\n    return \"unknown\";\n}\n\nint checkSlotAssignmentsOrReply(client *c, unsigned char *slots, int del, int start_slot, int end_slot) {\n    int slot;\n    for (slot = start_slot; slot <= end_slot; slot++) {\n        if (del && server.cluster->slots[slot] == NULL) {\n            addReplyErrorFormat(c,\"Slot %d is already unassigned\", slot);\n            return C_ERR;\n        } else if (!del && server.cluster->slots[slot]) {\n            addReplyErrorFormat(c,\"Slot %d is already busy\", slot);\n            return C_ERR;\n        }\n        if (slots[slot]++ == 1) {\n            addReplyErrorFormat(c,\"Slot %d specified multiple times\",(int)slot);\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\nvoid clusterUpdateSlots(client *c, unsigned char *slots, int del) {\n    int j;\n    for (j = 0; j < CLUSTER_SLOTS; j++) {\n        if (slots[j]) {\n            int retval;\n                \n            /* If this slot was set as importing we can clear this\n             * state as now we are the real owner of the slot. */\n            if (server.cluster->importing_slots_from[j])\n                server.cluster->importing_slots_from[j] = NULL;\n\n            /* Cancel any ASM task that overlaps with the slot. */\n            clusterAsmCancelBySlot(j, \"slots configuration updated\");\n\n            retval = del ? 
clusterDelSlot(j) :\n                           clusterAddSlot(myself,j);\n            serverAssertWithInfo(c,NULL,retval == C_OK);\n        }\n    }\n}\n\nint clusterGetShardCount(void) {\n    return dictSize(server.cluster->shards);\n}\n\nvoid *clusterGetShardIterator(void) {\n    return dictGetSafeIterator(server.cluster->shards);\n}\n\nvoid *clusterNextShardHandle(void *shard_iterator) {\n    dictEntry *de = dictNext(shard_iterator);\n    if(de == NULL) return NULL;\n    return dictGetVal(de);\n}\n\nvoid clusterFreeShardIterator(void *shard_iterator) {\n    dictReleaseIterator(shard_iterator);\n}\n\nint clusterNodeHasSlotInfo(clusterNode *n) {\n    return n->slot_info_pairs != NULL;\n}\n\nint clusterNodeSlotInfoCount(clusterNode *n) {\n    return n->slot_info_pairs_count;\n}\n\nuint16_t clusterNodeSlotInfoEntry(clusterNode *n, int idx) {\n    return n->slot_info_pairs[idx];\n}\n\nint clusterGetShardNodeCount(void *shard) {\n    return listLength((list*)shard);\n}\n\nvoid *clusterShardHandleGetNodeIterator(void *shard) {\n    listIter *li = zmalloc(sizeof(listIter));\n    listRewind((list*)shard, li);\n    return li;\n}\n\nvoid clusterShardNodeIteratorFree(void *node_iterator) {\n    zfree(node_iterator);\n}\n\nclusterNode *clusterShardNodeIteratorNext(void *node_iterator) {\n    listNode *item = listNext((listIter*)node_iterator);\n    if (item == NULL) return NULL;\n    return listNodeValue(item);\n}\n\nclusterNode *clusterShardNodeFirst(void *shard) {\n    listNode *item = listFirst((list*)shard);\n    if (item == NULL) return NULL;\n    return listNodeValue(item);\n}\n\nint clusterNodeTcpPort(clusterNode *node) {\n    return node->tcp_port;\n}\n\nint clusterNodeTlsPort(clusterNode *node) {\n    return node->tls_port;\n}\n\nsds genClusterInfoString(void) {\n    sds info = sdsempty();\n    char *statestr[] = {\"ok\",\"fail\"};\n    int slots_assigned = 0, slots_ok = 0, slots_pfail = 0, slots_fail = 0;\n    uint64_t myepoch;\n    int j;\n\n    for (j = 0; j < 
CLUSTER_SLOTS; j++) {\n        clusterNode *n = server.cluster->slots[j];\n\n        if (n == NULL) continue;\n        slots_assigned++;\n        if (nodeFailed(n)) {\n            slots_fail++;\n        } else if (nodeTimedOut(n)) {\n            slots_pfail++;\n        } else {\n            slots_ok++;\n        }\n    }\n\n    myepoch = (nodeIsSlave(myself) && myself->slaveof) ?\n                myself->slaveof->configEpoch : myself->configEpoch;\n\n    info = sdscatprintf(info,\n        \"cluster_state:%s\\r\\n\"\n        \"cluster_slots_assigned:%d\\r\\n\"\n        \"cluster_slots_ok:%d\\r\\n\"\n        \"cluster_slots_pfail:%d\\r\\n\"\n        \"cluster_slots_fail:%d\\r\\n\"\n        \"cluster_known_nodes:%lu\\r\\n\"\n        \"cluster_size:%d\\r\\n\"\n        \"cluster_current_epoch:%llu\\r\\n\"\n        \"cluster_my_epoch:%llu\\r\\n\"\n        , statestr[server.cluster->state],\n        slots_assigned,\n        slots_ok,\n        slots_pfail,\n        slots_fail,\n        dictSize(server.cluster->nodes),\n        server.cluster->size,\n        (unsigned long long) server.cluster->currentEpoch,\n        (unsigned long long) myepoch\n    );\n\n    /* Show stats about messages sent and received. 
*/\n    long long tot_msg_sent = 0;\n    long long tot_msg_received = 0;\n\n    for (int i = 0; i < CLUSTERMSG_TYPE_COUNT; i++) {\n        if (server.cluster->stats_bus_messages_sent[i] == 0) continue;\n        tot_msg_sent += server.cluster->stats_bus_messages_sent[i];\n        info = sdscatprintf(info,\n            \"cluster_stats_messages_%s_sent:%lld\\r\\n\",\n            clusterGetMessageTypeString(i),\n            server.cluster->stats_bus_messages_sent[i]);\n    }\n    info = sdscatprintf(info,\n        \"cluster_stats_messages_sent:%lld\\r\\n\", tot_msg_sent);\n\n    for (int i = 0; i < CLUSTERMSG_TYPE_COUNT; i++) {\n        if (server.cluster->stats_bus_messages_received[i] == 0) continue;\n        tot_msg_received += server.cluster->stats_bus_messages_received[i];\n        info = sdscatprintf(info,\n            \"cluster_stats_messages_%s_received:%lld\\r\\n\",\n            clusterGetMessageTypeString(i),\n            server.cluster->stats_bus_messages_received[i]);\n    }\n    info = sdscatprintf(info,\n        \"cluster_stats_messages_received:%lld\\r\\n\", tot_msg_received);\n\n    info = sdscatprintf(info,\n        \"total_cluster_links_buffer_limit_exceeded:%llu\\r\\n\",\n        server.cluster->stat_cluster_links_buffer_limit_exceeded);\n\n    info = asmCatInfoString(info);\n\n    return info;\n}\n\n\nvoid removeChannelsInSlot(unsigned int slot) {\n    if (countChannelsInSlot(slot) == 0) return;\n\n    pubsubShardUnsubscribeAllChannelsInSlot(slot);\n}\n\n/* Get the count of the channels for a given slot. 
*/\nunsigned int countChannelsInSlot(unsigned int hashslot) {\n    return kvstoreDictSize(server.pubsubshard_channels, hashslot);\n}\n\nint clusterNodeIsMyself(clusterNode *n) {\n    return n == server.cluster->myself;\n}\n\nclusterNode *getMyClusterNode(void) {\n    return server.cluster->myself;\n}\n\nint clusterManualFailoverTimeLimit(void) {\n    return server.cluster->mf_end;\n}\n\nint getClusterSize(void) {\n    return dictSize(server.cluster->nodes);\n}\n\nint getMyShardSlotCount(void) {\n    if (!nodeIsSlave(server.cluster->myself)) {\n        return server.cluster->myself->numslots;\n    } else if (server.cluster->myself->slaveof) {\n        return server.cluster->myself->slaveof->numslots;\n    } else {\n        return 0;\n    }\n}\n\nchar **getClusterNodesList(size_t *numnodes) {\n    size_t count = dictSize(server.cluster->nodes);\n    char **ids = zmalloc((count+1)*CLUSTER_NAMELEN);\n    dictIterator di;\n    dictEntry *de;\n    int j = 0;\n\n    dictInitIterator(&di, server.cluster->nodes);\n    while((de = dictNext(&di)) != NULL) {\n        clusterNode *node = dictGetVal(de);\n        if (node->flags & (CLUSTER_NODE_NOADDR|CLUSTER_NODE_HANDSHAKE)) continue;\n        ids[j] = zmalloc(CLUSTER_NAMELEN);\n        memcpy(ids[j],node->name,CLUSTER_NAMELEN);\n        j++;\n    }\n    *numnodes = j;\n    ids[j] = NULL; /* Null term so that FreeClusterNodesList does not need\n                    * to also get the count argument. */\n    dictResetIterator(&di);\n    return ids;\n}\n\nint clusterNodeIsMaster(clusterNode *n) {\n    return n->flags & CLUSTER_NODE_MASTER;\n}\n\nint handleDebugClusterCommand(client *c) {\n    if (c->argc != 5 ||\n        strcasecmp(c->argv[1]->ptr, \"CLUSTERLINK\") ||\n        strcasecmp(c->argv[2]->ptr, \"KILL\")) {\n        return 0;\n    }\n\n    if (!server.cluster_enabled) {\n        addReplyError(c, \"Debug option only available for cluster mode enabled setup!\");\n        return 1;\n    }\n\n    /* Find the node. 
*/\n    clusterNode *n = clusterLookupNode(c->argv[4]->ptr, sdslen(c->argv[4]->ptr));\n    if (!n) {\n        addReplyErrorFormat(c, \"Unknown node %s\", (char *) c->argv[4]->ptr);\n        return 1;\n    }\n\n    /* Terminate the link based on the direction or all. */\n    if (!strcasecmp(c->argv[3]->ptr, \"from\")) {\n        if (n->inbound_link) freeClusterLink(n->inbound_link);\n    } else if (!strcasecmp(c->argv[3]->ptr, \"to\")) {\n        if (n->link) freeClusterLink(n->link);\n    } else if (!strcasecmp(c->argv[3]->ptr, \"all\")) {\n        if (n->link) freeClusterLink(n->link);\n        if (n->inbound_link) freeClusterLink(n->inbound_link);\n    } else {\n        addReplyErrorFormat(c, \"Unknown direction %s\", (char *) c->argv[3]->ptr);\n    }\n    addReply(c, shared.ok);\n\n    return 1;\n}\n\nint clusterNodePending(clusterNode  *node) {\n    return node->flags & (CLUSTER_NODE_NOADDR|CLUSTER_NODE_HANDSHAKE);\n}\n\nchar *clusterNodeIp(clusterNode *node) {\n    return node->ip;\n}\n\nint clusterNodeIsSlave(clusterNode *node) {\n    return node->flags & CLUSTER_NODE_SLAVE;\n}\n\nclusterNode *clusterNodeGetSlaveof(clusterNode *node) {\n    return node->slaveof;\n}\n\nclusterNode *clusterNodeGetMaster(clusterNode *node) {\n    while (node->slaveof != NULL) node = node->slaveof;\n    return node;\n}\n\nchar *clusterNodeGetName(clusterNode *node) {\n    return node->name;\n}\n\nint clusterNodeTimedOut(clusterNode *node) {\n    return nodeTimedOut(node);\n}\n\nint clusterNodeIsFailing(clusterNode *node) {\n    return nodeFailed(node);\n}\n\nint clusterNodeIsNoFailover(clusterNode *node) {\n    return node->flags & CLUSTER_NODE_NOFAILOVER;\n}\n\nconst char **clusterDebugCommandExtendedHelp(void) {\n    static const char *help[] = {\n        \"CLUSTERLINK KILL <to|from|all> <node-id>\",\n        \"    Kills the link based on the direction to/from (both) with the provided node.\",\n        NULL\n    };\n\n    return help;\n}\n\nchar 
*clusterNodeGetShardId(clusterNode *node) {\n    return node->shard_id;\n}\n\nint clusterCommandSpecial(client *c) {\n    if (!strcasecmp(c->argv[1]->ptr,\"meet\") && (c->argc == 4 || c->argc == 5)) {\n        /* CLUSTER MEET <ip> <port> [cport] */\n        long long port, cport;\n\n        if (getLongLongFromObject(c->argv[3], &port) != C_OK) {\n            addReplyErrorFormat(c,\"Invalid base port specified: %s\",\n                                (char*)c->argv[3]->ptr);\n            return 1;\n        }\n\n        if (c->argc == 5) {\n            if (getLongLongFromObject(c->argv[4], &cport) != C_OK) {\n                addReplyErrorFormat(c,\"Invalid bus port specified: %s\",\n                                    (char*)c->argv[4]->ptr);\n                return 1;\n            }\n        } else {\n            cport = port + CLUSTER_PORT_INCR;\n        }\n\n        if (clusterStartHandshake(c->argv[2]->ptr,port,cport) == 0 &&\n            errno == EINVAL)\n        {\n            addReplyErrorFormat(c,\"Invalid node address specified: %s:%s\",\n                            (char*)c->argv[2]->ptr, (char*)c->argv[3]->ptr);\n        } else {\n            addReply(c,shared.ok);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"flushslots\") && c->argc == 2) {\n        /* CLUSTER FLUSHSLOTS */\n        if (kvstoreSize(server.db[0].keys) != 0) {\n            addReplyError(c,\"DB must be empty to perform CLUSTER FLUSHSLOTS.\");\n            return 1;\n        }\n        clusterDelNodeSlots(myself);\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n        addReply(c,shared.ok);\n    } else if ((!strcasecmp(c->argv[1]->ptr,\"addslots\") ||\n                !strcasecmp(c->argv[1]->ptr,\"delslots\")) && c->argc >= 3) {\n        /* CLUSTER ADDSLOTS <slot> [slot] ... */\n        /* CLUSTER DELSLOTS <slot> [slot] ... 
*/\n        int j, slot;\n        unsigned char *slots = zmalloc(CLUSTER_SLOTS);\n        int del = !strcasecmp(c->argv[1]->ptr,\"delslots\");\n\n        memset(slots,0,CLUSTER_SLOTS);\n        /* Check that all the arguments are parseable.*/\n        for (j = 2; j < c->argc; j++) {\n            if ((slot = getSlotOrReply(c,c->argv[j])) == C_ERR) {\n                zfree(slots);\n                return 1;\n            }\n        }\n        /* Check that the slots are not already busy. */\n        for (j = 2; j < c->argc; j++) {\n            slot = getSlotOrReply(c,c->argv[j]);\n            if (checkSlotAssignmentsOrReply(c, slots, del, slot, slot) == C_ERR) {\n                zfree(slots);\n                return 1;\n            }\n        }\n        clusterUpdateSlots(c, slots, del);\n        zfree(slots);\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n        addReply(c,shared.ok);\n    } else if ((!strcasecmp(c->argv[1]->ptr,\"addslotsrange\") ||\n               !strcasecmp(c->argv[1]->ptr,\"delslotsrange\")) && c->argc >= 4) {\n        if (c->argc % 2 == 1) {\n            addReplyErrorArity(c);\n            return 1;\n        }\n        /* CLUSTER ADDSLOTSRANGE <start slot> <end slot> [<start slot> <end slot> ...] */\n        /* CLUSTER DELSLOTSRANGE <start slot> <end slot> [<start slot> <end slot> ...] */\n        int j, startslot, endslot;\n        unsigned char *slots = zmalloc(CLUSTER_SLOTS);\n        int del = !strcasecmp(c->argv[1]->ptr,\"delslotsrange\");\n\n        memset(slots,0,CLUSTER_SLOTS);\n        /* Check that all the arguments are parseable and that all the\n         * slots are not already busy. 
*/\n        for (j = 2; j < c->argc; j += 2) {\n            if ((startslot = getSlotOrReply(c,c->argv[j])) == C_ERR) {\n                zfree(slots);\n                return 1;\n            }\n            if ((endslot = getSlotOrReply(c,c->argv[j+1])) == C_ERR) {\n                zfree(slots);\n                return 1;\n            }\n            if (startslot > endslot) {\n                addReplyErrorFormat(c,\"start slot number %d is greater than end slot number %d\", startslot, endslot);\n                zfree(slots);\n                return 1;\n            }\n\n            if (checkSlotAssignmentsOrReply(c, slots, del, startslot, endslot) == C_ERR) {\n                zfree(slots);\n                return 1;\n            }\n        }\n        clusterUpdateSlots(c, slots, del);\n        zfree(slots);\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|CLUSTER_TODO_SAVE_CONFIG);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"setslot\") && c->argc >= 4) {\n        /* SETSLOT 10 MIGRATING <node ID> */\n        /* SETSLOT 10 IMPORTING <node ID> */\n        /* SETSLOT 10 STABLE */\n        /* SETSLOT 10 NODE <node ID> */\n        int slot;\n        clusterNode *n;\n\n        if (nodeIsSlave(myself)) {\n            addReplyError(c,\"Please use SETSLOT only with masters.\");\n            return 1;\n        }\n\n        if ((slot = getSlotOrReply(c, c->argv[2])) == -1) return 1;\n\n        /* Don't allow legacy slot migration if the slot is in an ASM task. */\n        if (isSlotInAsmTask(slot)) {\n            addReplyErrorFormat(c, \"Slot %d is currently in an active atomic slot migration. \"\n                \"CLUSTER SETSLOT cannot be used at this time. 
To perform a legacy slot migration \"\n                \"instead, first cancel the ongoing task with CLUSTER MIGRATION CANCEL\", slot);\n            return 1;\n        }\n\n        if (isSlotInTrimJob(slot)) {\n            addReplyErrorFormat(c, \"There is a pending trim job for slot %d. \"\n                \"Most probably, this is due to a failed atomic slot migration. \"\n                \"CLUSTER SETSLOT cannot be used at this time. \"\n                \"Please retry later once the trim job is completed.\", slot);\n            return 1;\n        }\n\n        if (!strcasecmp(c->argv[3]->ptr,\"migrating\") && c->argc == 5) {\n            if (server.cluster->slots[slot] != myself) {\n                addReplyErrorFormat(c,\"I'm not the owner of hash slot %u\",slot);\n                return 1;\n            }\n            n = clusterLookupNode(c->argv[4]->ptr, sdslen(c->argv[4]->ptr));\n            if (n == NULL) {\n                addReplyErrorFormat(c,\"I don't know about node %s\",\n                    (char*)c->argv[4]->ptr);\n                return 1;\n            }\n            if (nodeIsSlave(n)) {\n                addReplyError(c,\"Target node is not a master\");\n                return 1;\n            }\n            server.cluster->migrating_slots_to[slot] = n;\n        } else if (!strcasecmp(c->argv[3]->ptr,\"importing\") && c->argc == 5) {\n            if (server.cluster->slots[slot] == myself) {\n                addReplyErrorFormat(c,\n                    \"I'm already the owner of hash slot %u\",slot);\n                return 1;\n            }\n            n = clusterLookupNode(c->argv[4]->ptr, sdslen(c->argv[4]->ptr));\n            if (n == NULL) {\n                addReplyErrorFormat(c,\"I don't know about node %s\",\n                    (char*)c->argv[4]->ptr);\n                return 1;\n            }\n            if (nodeIsSlave(n)) {\n                addReplyError(c,\"Target node is not a master\");\n                return 1;\n            }\n        
    server.cluster->importing_slots_from[slot] = n;\n        } else if (!strcasecmp(c->argv[3]->ptr,\"stable\") && c->argc == 4) {\n            /* CLUSTER SETSLOT <SLOT> STABLE */\n            server.cluster->importing_slots_from[slot] = NULL;\n            server.cluster->migrating_slots_to[slot] = NULL;\n        } else if (!strcasecmp(c->argv[3]->ptr,\"node\") && c->argc == 5) {\n            /* CLUSTER SETSLOT <SLOT> NODE <NODE ID> */\n            n = clusterLookupNode(c->argv[4]->ptr, sdslen(c->argv[4]->ptr));\n            if (!n) {\n                addReplyErrorFormat(c,\"Unknown node %s\",\n                    (char*)c->argv[4]->ptr);\n                return 1;\n            }\n            if (nodeIsSlave(n)) {\n                addReplyError(c,\"Target node is not a master\");\n                return 1;\n            }\n            /* If this hash slot was served by 'myself' before the switch,\n             * make sure there are no longer local keys for this hash slot. */\n            if (server.cluster->slots[slot] == myself && n != myself) {\n                if (countKeysInSlot(slot) != 0) {\n                    addReplyErrorFormat(c,\n                        \"Can't assign hash slot %d to a different node \"\n                        \"while I still hold keys for this hash slot.\", slot);\n                    return 1;\n                }\n            }\n            /* If this slot is in migrating status but we have no keys\n             * for it, assigning the slot to another node will clear\n             * the migrating status. */\n            if (countKeysInSlot(slot) == 0 &&\n                server.cluster->migrating_slots_to[slot])\n                server.cluster->migrating_slots_to[slot] = NULL;\n\n            int slot_was_mine = server.cluster->slots[slot] == myself;\n            clusterDelSlot(slot);\n            clusterAddSlot(n,slot);\n\n            /* If we are a master left without slots, we should turn into a\n             * replica of the new master. 
*/\n            if (slot_was_mine &&\n                n != myself &&\n                myself->numslots == 0 &&\n                server.cluster_allow_replica_migration) {\n                serverLog(LL_NOTICE,\n                          \"Configuration change detected. Reconfiguring myself \"\n                          \"as a replica of %.40s (%s)\", n->name, n->human_nodename);\n                clusterSetMaster(n);\n                /* Save the new config and broadcast it to the other nodes. */\n                clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG |\n                                     CLUSTER_TODO_UPDATE_STATE |\n                                     CLUSTER_TODO_FSYNC_CONFIG |\n                                     CLUSTER_TODO_BROADCAST_PONG);\n            }\n\n            /* If this node was importing this slot, assigning the slot to\n             * itself also clears the importing status. */\n            if (n == myself &&\n                server.cluster->importing_slots_from[slot]) {\n                /* This slot was manually migrated, set this node configEpoch\n                 * to a new epoch so that the new version can be propagated\n                 * by the cluster.\n                 *\n                 * Note that if this ever results in a collision with another\n                 * node getting the same configEpoch, for example because a\n                 * failover happens at the same time we close the slot, the\n                 * configEpoch collision resolution will fix it assigning\n                 * a different epoch to each node. */\n                if (clusterBumpConfigEpochWithoutConsensus() == C_OK) {\n                    serverLog(LL_NOTICE,\n                        \"configEpoch updated after importing slot %d\", slot);\n                }\n                server.cluster->importing_slots_from[slot] = NULL;\n                /* After importing this slot, let the other nodes know as\n                 * soon as possible. 
*/\n                clusterDoBeforeSleep(CLUSTER_TODO_BROADCAST_PONG);\n            }\n        } else {\n            addReplyError(c,\n                \"Invalid CLUSTER SETSLOT action or number of arguments. Try CLUSTER HELP\");\n            return 1;\n        }\n        clusterDoBeforeSleep(CLUSTER_TODO_SAVE_CONFIG|CLUSTER_TODO_UPDATE_STATE);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"bumpepoch\") && c->argc == 2) {\n        /* CLUSTER BUMPEPOCH */\n        int retval = clusterBumpConfigEpochWithoutConsensus();\n        sds reply = sdscatprintf(sdsempty(),\"+%s %llu\\r\\n\",\n                (retval == C_OK) ? \"BUMPED\" : \"STILL\",\n                (unsigned long long) myself->configEpoch);\n        addReplySds(c,reply);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"saveconfig\") && c->argc == 2) {\n        int retval = clusterSaveConfig(1);\n\n        if (retval == 0)\n            addReply(c,shared.ok);\n        else\n            addReplyErrorFormat(c,\"error saving the cluster node config: %s\",\n                strerror(errno));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"forget\") && c->argc == 3) {\n        /* CLUSTER FORGET <NODE ID> */\n        clusterNode *n = clusterLookupNode(c->argv[2]->ptr, sdslen(c->argv[2]->ptr));\n        if (!n) {\n            if (clusterBlacklistExists((char*)c->argv[2]->ptr, sdslen(c->argv[2]->ptr)))\n                /* Already forgotten. The deletion may have been gossipped by\n                 * another node, so we pretend it succeeded. 
*/\n                addReply(c,shared.ok);\n            else\n                addReplyErrorFormat(c,\"Unknown node %s\", (char*)c->argv[2]->ptr);\n            return 1;\n        } else if (n == myself) {\n            addReplyError(c,\"I tried hard but I can't forget myself...\");\n            return 1;\n        } else if (nodeIsSlave(myself) && myself->slaveof == n) {\n            addReplyError(c,\"Can't forget my master!\");\n            return 1;\n        }\n        clusterBlacklistAddNode(n);\n        clusterDelNode(n);\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|\n                             CLUSTER_TODO_SAVE_CONFIG);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"replicate\") && c->argc == 3) {\n        /* CLUSTER REPLICATE <NODE ID> */\n        /* Lookup the specified node in our table. */\n        clusterNode *n = clusterLookupNode(c->argv[2]->ptr, sdslen(c->argv[2]->ptr));\n        if (!n) {\n            addReplyErrorFormat(c,\"Unknown node %s\", (char*)c->argv[2]->ptr);\n            return 1;\n        }\n\n        /* I can't replicate myself. */\n        if (n == myself) {\n            addReplyError(c,\"Can't replicate myself\");\n            return 1;\n        }\n\n        /* Can't replicate a slave. */\n        if (nodeIsSlave(n)) {\n            addReplyError(c,\"I can only replicate a master, not a replica.\");\n            return 1;\n        }\n\n        /* If the instance is currently a master, it should have no assigned\n         * slots nor keys to accept to replicate some other node.\n         * Slaves can switch to another master without issues. */\n        if (clusterNodeIsMaster(myself) &&\n            (myself->numslots != 0 || kvstoreSize(server.db[0].keys) != 0)) {\n            addReplyError(c,\n                \"To set a master the node must be empty and \"\n                \"without assigned slots.\");\n            return 1;\n        }\n\n        /* Set the master. 
*/\n        clusterSetMaster(n);\n        /* Save the new config and broadcast it to the other nodes. */\n        clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|\n                             CLUSTER_TODO_SAVE_CONFIG|\n                             CLUSTER_TODO_BROADCAST_PONG);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"count-failure-reports\") &&\n               c->argc == 3)\n    {\n        /* CLUSTER COUNT-FAILURE-REPORTS <NODE ID> */\n        clusterNode *n = clusterLookupNode(c->argv[2]->ptr, sdslen(c->argv[2]->ptr));\n\n        if (!n) {\n            addReplyErrorFormat(c,\"Unknown node %s\", (char*)c->argv[2]->ptr);\n            return 1;\n        } else {\n            addReplyLongLong(c,clusterNodeFailureReportsCount(n));\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"failover\") &&\n               (c->argc == 2 || c->argc == 3))\n    {\n        /* CLUSTER FAILOVER [FORCE|TAKEOVER] */\n        int force = 0, takeover = 0;\n\n        if (c->argc == 3) {\n            if (!strcasecmp(c->argv[2]->ptr,\"force\")) {\n                force = 1;\n            } else if (!strcasecmp(c->argv[2]->ptr,\"takeover\")) {\n                takeover = 1;\n                force = 1; /* Takeover also implies force. */\n            } else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return 1;\n            }\n        }\n\n        /* Check preconditions. 
*/\n        if (clusterNodeIsMaster(myself)) {\n            addReplyError(c,\"You should send CLUSTER FAILOVER to a replica\");\n            return 1;\n        } else if (myself->slaveof == NULL) {\n            addReplyError(c,\"I'm a replica but my master is unknown to me\");\n            return 1;\n        } else if (!force &&\n                   (nodeFailed(myself->slaveof) ||\n                    myself->slaveof->link == NULL))\n        {\n            addReplyError(c,\"Master is down or failed, \"\n                            \"please use CLUSTER FAILOVER FORCE\");\n            return 1;\n        }\n        resetManualFailover();\n        server.cluster->mf_end = mstime() + CLUSTER_MF_TIMEOUT;\n\n        if (takeover) {\n            /* A takeover does not perform any initial check. It just\n             * generates a new configuration epoch for this node without\n             * consensus, claims the master's slots, and broadcasts the new\n             * configuration. */\n            serverLog(LL_NOTICE,\"Taking over the master (user request).\");\n            clusterBumpConfigEpochWithoutConsensus();\n            clusterFailoverReplaceYourMaster();\n        } else if (force) {\n            /* If this is a forced failover, we don't need to talk with our\n             * master to agree about the offset. We just fail over, taking over\n             * it without coordination. 
*/\n            serverLog(LL_NOTICE,\"Forced failover user request accepted.\");\n            server.cluster->mf_can_start = 1;\n        } else {\n            serverLog(LL_NOTICE,\"Manual failover user request accepted.\");\n            clusterSendMFStart(myself->slaveof);\n        }\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set-config-epoch\") && c->argc == 3)\n    {\n        /* CLUSTER SET-CONFIG-EPOCH <epoch>\n         *\n         * The user is allowed to set the config epoch only when a node is\n         * totally fresh: no config epoch, no other known node, and so forth.\n         * This happens at cluster creation time to start with a cluster where\n         * every node has a different config epoch, without relying on the\n         * conflict resolution system, which is too slow when a big cluster\n         * is created. */\n        long long epoch;\n\n        if (getLongLongFromObjectOrReply(c,c->argv[2],&epoch,NULL) != C_OK)\n            return 1;\n\n        if (epoch < 0) {\n            addReplyErrorFormat(c,\"Invalid config epoch specified: %lld\",epoch);\n        } else if (dictSize(server.cluster->nodes) > 1) {\n            addReplyError(c,\"The user can assign a config epoch only when the \"\n                            \"node does not know any other node.\");\n        } else if (myself->configEpoch != 0) {\n            addReplyError(c,\"Node config epoch is already non-zero\");\n        } else {\n            myself->configEpoch = epoch;\n            serverLog(LL_NOTICE,\n                \"configEpoch set to %llu via CLUSTER SET-CONFIG-EPOCH\",\n                (unsigned long long) myself->configEpoch);\n\n            if (server.cluster->currentEpoch < (uint64_t)epoch)\n                server.cluster->currentEpoch = epoch;\n            /* No need to fsync the config here since in the unlucky event\n             * of a failure to persist the config, the conflict resolution code\n             * will assign a unique config to this 
node. */\n            clusterDoBeforeSleep(CLUSTER_TODO_UPDATE_STATE|\n                                 CLUSTER_TODO_SAVE_CONFIG);\n            addReply(c,shared.ok);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reset\") &&\n               (c->argc == 2 || c->argc == 3))\n    {\n        /* CLUSTER RESET [SOFT|HARD] */\n        int hard = 0;\n\n        /* Parse soft/hard argument. Default is soft. */\n        if (c->argc == 3) {\n            if (!strcasecmp(c->argv[2]->ptr,\"hard\")) {\n                hard = 1;\n            } else if (!strcasecmp(c->argv[2]->ptr,\"soft\")) {\n                hard = 0;\n            } else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return 1;\n            }\n        }\n\n        /* Slaves can be reset while containing data, but master nodes\n         * must be empty. */\n        if (clusterNodeIsMaster(myself) && kvstoreSize(c->db->keys) != 0) {\n            addReplyError(c,\"CLUSTER RESET can't be called with \"\n                            \"master nodes containing keys\");\n            return 1;\n        }\n        clusterReset(hard);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"links\") && c->argc == 2) {\n        /* CLUSTER LINKS */\n        addReplyClusterLinksDescription(c);\n    } else {\n        return 0;\n    }\n\n    return 1;\n}\n\nconst char **clusterCommandExtendedHelp(void) {\n    static const char *help[] = {\n        \"ADDSLOTS <slot> [<slot> ...]\",\n        \"    Assign slots to current node.\",\n        \"ADDSLOTSRANGE <start slot> <end slot> [<start slot> <end slot> ...]\",\n        \"    Assign slots which are between <start-slot> and <end-slot> to current node.\",\n        \"BUMPEPOCH\",\n        \"    Advance the cluster config epoch.\",\n        \"COUNT-FAILURE-REPORTS <node-id>\",\n        \"    Return number of failure reports for <node-id>.\",\n        \"DELSLOTS <slot> [<slot> ...]\",\n        \"    Delete slots 
information from current node.\",\n        \"DELSLOTSRANGE <start slot> <end slot> [<start slot> <end slot> ...]\",\n        \"    Delete slots information which are between <start-slot> and <end-slot> from current node.\",\n        \"FAILOVER [FORCE|TAKEOVER]\",\n        \"    Promote current replica node to master.\",\n        \"FORGET <node-id>\",\n        \"    Remove a node from the cluster.\",\n        \"FLUSHSLOTS\",\n        \"    Delete current node's own slots information.\",\n        \"MEET <ip> <port> [<bus-port>]\",\n        \"    Connect nodes into a working cluster.\",\n        \"REPLICATE <node-id>\",\n        \"    Configure current node as replica to <node-id>.\",\n        \"RESET [HARD|SOFT]\",\n        \"    Reset current node (default: soft).\",\n        \"SET-CONFIG-EPOCH <epoch>\",\n        \"    Set config epoch of current node.\",\n        \"SETSLOT <slot> (IMPORTING <node-id>|MIGRATING <node-id>|STABLE|NODE <node-id>)\",\n        \"    Set slot state.\",\n        \"SAVECONFIG\",\n        \"    Force saving cluster configuration on disk.\",\n        \"LINKS\",\n        \"    Return information about all network links between this node and its peers.\",\n        \"    Output format is an array where each array element is a map containing attributes of a link\",\n        \"MIGRATION IMPORT <start-slot end-slot [start-slot end-slot ...]> |\",\n        \"          STATUS [ID <task-id> | ALL] | CANCEL [ID <task-id> | ALL]\",\n        \"    Start, monitor and cancel slot migration.\",\n        NULL\n    };\n\n    return help;\n}\n\nint clusterNodeNumSlaves(clusterNode *node) {\n    return node->numslaves;\n}\n\nclusterNode *clusterNodeGetSlave(clusterNode *node, int slave_idx) {\n    return node->slaves[slave_idx];\n}\n\nclusterNode *getMigratingSlotDest(int slot) {\n    return server.cluster->migrating_slots_to[slot];\n}\n\nclusterNode *getImportingSlotSource(int slot) {\n    return server.cluster->importing_slots_from[slot];\n}\n\nint 
isClusterHealthy(void) {\n    return server.cluster->state == CLUSTER_OK;\n}\n\nclusterNode *getNodeBySlot(int slot) {\n    return server.cluster->slots[slot];\n}\n\nchar *clusterNodeHostname(clusterNode *node) {\n    return node->hostname;\n}\n\nlong long clusterNodeReplOffset(clusterNode *node) {\n    return node->repl_offset;\n}\n\nconst char *clusterNodePreferredEndpoint(clusterNode *n) {\n    char *hostname = clusterNodeHostname(n);\n    switch (server.cluster_preferred_endpoint_type) {\n        case CLUSTER_ENDPOINT_TYPE_IP:\n            return clusterNodeIp(n);\n        case CLUSTER_ENDPOINT_TYPE_HOSTNAME:\n            return (hostname != NULL && hostname[0] != '\\0') ? hostname : \"?\";\n        case CLUSTER_ENDPOINT_TYPE_UNKNOWN_ENDPOINT:\n            return \"\";\n    }\n    return \"unknown\";\n}\n\nint clusterAllowFailoverCmd(client *c) {\n    if (!server.cluster_enabled) {\n        return 1;\n    }\n    addReplyError(c,\"FAILOVER not allowed in cluster mode. \"\n                    \"Use CLUSTER FAILOVER command instead.\");\n    return 0;\n}\n\nvoid clusterPromoteSelfToMaster(void) {\n    replicationUnsetMaster();\n    asmFinalizeMasterTask();\n}\n\nint clusterAsmOnEvent(const char *task_id, int event, void *arg) {\n    sds str = NULL;\n\n    slotRangeArray *slots = asmTaskGetSlotRanges(task_id);\n    if (slots) str = slotRangeArrayToString(slots);\n    else if (arg) str = slotRangeArrayToString(arg);\n\n    serverLog(LL_VERBOSE, \"Slot migration task %s received event %d for slots: %s\",\n                          task_id, event, str ? 
str : \"unknown\");\n\n    switch (event) {\n        case ASM_EVENT_TAKEOVER:\n            for (int i = 0; i < slots->num_ranges; i++) {\n                slotRange *sr = &slots->ranges[i];\n                for (int j = sr->start; j <= sr->end; j++) {\n                    clusterDelSlot(j);\n                    clusterAddSlot(myself, j);\n                }\n            }\n            /* Bump config epoch and broadcast the new config to the other nodes. */\n            clusterBumpConfigEpochWithoutConsensus();\n            clusterSaveConfigOrDie(1);\n            clusterDoBeforeSleep(CLUSTER_TODO_BROADCAST_PONG);\n            clusterAsmProcess(task_id, ASM_EVENT_DONE, NULL, NULL);\n            break;\n        case ASM_EVENT_MIGRATE_FAILED:\n            unpauseActions(PAUSE_DURING_SLOT_HANDOFF);\n            break;\n        case ASM_EVENT_HANDOFF_PREP:\n            pauseActions(PAUSE_DURING_SLOT_HANDOFF,\n                         LLONG_MAX,\n                         PAUSE_ACTIONS_CLIENT_WRITE_SET);\n            clusterAsmProcess(task_id, ASM_EVENT_HANDOFF, NULL, NULL);\n            break;\n        case ASM_EVENT_MIGRATE_COMPLETED:\n            unpauseActions(PAUSE_DURING_SLOT_HANDOFF);\n            break;\n        default:\n            break;\n    }\n\n    sdsfree(str);\n    return C_OK;\n}\n"
  },
  {
    "path": "src/cluster_legacy.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#ifndef CLUSTER_LEGACY_H\n#define CLUSTER_LEGACY_H\n\n#define CLUSTER_PORT_INCR 10000 /* Cluster port = baseport + PORT_INCR */\n\n/* The following defines are amount of time, sometimes expressed as\n * multiplicators of the node timeout value (when ending with MULT). */\n#define CLUSTER_FAIL_REPORT_VALIDITY_MULT 2 /* Fail report validity. */\n#define CLUSTER_FAIL_UNDO_TIME_MULT 2 /* Undo fail if master is back. */\n#define CLUSTER_MF_TIMEOUT 5000 /* Milliseconds to do a manual failover. */\n#define CLUSTER_MF_PAUSE_MULT 2 /* Master pause manual failover mult. */\n#define CLUSTER_SLAVE_MIGRATION_DELAY 5000 /* Delay for slave migration. */\n\n/* Reasons why a slave is not able to failover. */\n#define CLUSTER_CANT_FAILOVER_NONE 0\n#define CLUSTER_CANT_FAILOVER_DATA_AGE 1\n#define CLUSTER_CANT_FAILOVER_WAITING_DELAY 2\n#define CLUSTER_CANT_FAILOVER_EXPIRED 3\n#define CLUSTER_CANT_FAILOVER_WAITING_VOTES 4\n#define CLUSTER_CANT_FAILOVER_RELOG_PERIOD (10) /* seconds. */\n\n/* clusterState todo_before_sleep flags. */\n#define CLUSTER_TODO_HANDLE_FAILOVER (1<<0)\n#define CLUSTER_TODO_UPDATE_STATE (1<<1)\n#define CLUSTER_TODO_SAVE_CONFIG (1<<2)\n#define CLUSTER_TODO_FSYNC_CONFIG (1<<3)\n#define CLUSTER_TODO_HANDLE_MANUALFAILOVER (1<<4)\n#define CLUSTER_TODO_BROADCAST_PONG (1<<5)\n\n/* clusterLink encapsulates everything needed to talk with a remote node. 
*/\ntypedef struct clusterLink {\n    mstime_t ctime;             /* Link creation time */\n    connection *conn;           /* Connection to remote node */\n    list *send_msg_queue;        /* List of messages to be sent */\n    size_t head_msg_send_offset; /* Number of bytes already sent of message at head of queue */\n    unsigned long long send_msg_queue_mem; /* Memory in bytes used by message queue */\n    char *rcvbuf;               /* Packet reception buffer */\n    size_t rcvbuf_len;          /* Used size of rcvbuf */\n    size_t rcvbuf_alloc;        /* Allocated size of rcvbuf */\n    clusterNode *node;          /* Node related to this link. Initialized to NULL when unknown */\n    int inbound;                /* 1 if this link is an inbound link accepted from the related node */\n} clusterLink;\n\n/* Cluster node flags and macros. */\n#define CLUSTER_NODE_MASTER 1     /* The node is a master */\n#define CLUSTER_NODE_SLAVE 2      /* The node is a slave */\n#define CLUSTER_NODE_PFAIL 4      /* Failure? Need acknowledge */\n#define CLUSTER_NODE_FAIL 8       /* The node is believed to be malfunctioning */\n#define CLUSTER_NODE_MYSELF 16    /* This node is myself */\n#define CLUSTER_NODE_HANDSHAKE 32 /* We have still to exchange the first ping */\n#define CLUSTER_NODE_NOADDR   64  /* We don't know the address of this node */\n#define CLUSTER_NODE_MEET 128     /* Send a MEET message to this node */\n#define CLUSTER_NODE_MIGRATE_TO 256 /* Master eligible for replica migration. */\n#define CLUSTER_NODE_NOFAILOVER 512 /* Slave will not try to failover. */\n#define CLUSTER_NODE_EXTENSIONS_SUPPORTED 1024 /* This node supports extensions. 
*/\n#define CLUSTER_NODE_NULL_NAME \"\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\\000\"\n\n#define nodeIsMaster(n) ((n)->flags & CLUSTER_NODE_MASTER)\n#define nodeIsSlave(n) ((n)->flags & CLUSTER_NODE_SLAVE)\n#define nodeInHandshake(n) ((n)->flags & CLUSTER_NODE_HANDSHAKE)\n#define nodeHasAddr(n) (!((n)->flags & CLUSTER_NODE_NOADDR))\n#define nodeTimedOut(n) ((n)->flags & CLUSTER_NODE_PFAIL)\n#define nodeFailed(n) ((n)->flags & CLUSTER_NODE_FAIL)\n#define nodeCantFailover(n) ((n)->flags & CLUSTER_NODE_NOFAILOVER)\n#define nodeSupportsExtensions(n) ((n)->flags & CLUSTER_NODE_EXTENSIONS_SUPPORTED)\n\n/* This structure represent elements of node->fail_reports. */\ntypedef struct clusterNodeFailReport {\n    clusterNode *node;         /* Node reporting the failure condition. */\n    mstime_t time;             /* Time of the last report from this node. */\n} clusterNodeFailReport;\n\n/* Redis cluster messages header */\n\n/* Message types.\n *\n * Note that the PING, PONG and MEET messages are actually the same exact\n * kind of packet. PONG is the reply to ping, in the exact format as a PING,\n * while MEET is a special PING that forces the receiver to add the sender\n * as a node (if it is not already in the list). */\n#define CLUSTERMSG_TYPE_PING 0          /* Ping */\n#define CLUSTERMSG_TYPE_PONG 1          /* Pong (reply to Ping) */\n#define CLUSTERMSG_TYPE_MEET 2          /* Meet \"let's join\" message */\n#define CLUSTERMSG_TYPE_FAIL 3          /* Mark node xxx as failing */\n#define CLUSTERMSG_TYPE_PUBLISH 4       /* Pub/Sub Publish propagation */\n#define CLUSTERMSG_TYPE_FAILOVER_AUTH_REQUEST 5 /* May I failover? 
*/\n#define CLUSTERMSG_TYPE_FAILOVER_AUTH_ACK 6     /* Yes, you have my vote */\n#define CLUSTERMSG_TYPE_UPDATE 7        /* Another node slots configuration */\n#define CLUSTERMSG_TYPE_MFSTART 8       /* Pause clients for manual failover */\n#define CLUSTERMSG_TYPE_MODULE 9        /* Module cluster API message. */\n#define CLUSTERMSG_TYPE_PUBLISHSHARD 10 /* Pub/Sub Publish shard propagation */\n#define CLUSTERMSG_TYPE_COUNT 11        /* Total number of message types. */\n\n/* Initially we don't know our \"name\", but we'll find it once we connect\n * to the first node, using the getsockname() function. Then we'll use this\n * address for all the next messages. */\ntypedef struct {\n    char nodename[CLUSTER_NAMELEN];\n    uint32_t ping_sent;\n    uint32_t pong_received;\n    char ip[NET_IP_STR_LEN];  /* IP address last time it was seen */\n    uint16_t port;              /* primary port last time it was seen */\n    uint16_t cport;             /* cluster port last time it was seen */\n    uint16_t flags;             /* node->flags copy */\n    uint16_t pport;             /* secondary port last time it was seen */\n    uint16_t notused1;\n} clusterMsgDataGossip;\n\ntypedef struct {\n    char nodename[CLUSTER_NAMELEN];\n} clusterMsgDataFail;\n\ntypedef struct {\n    uint32_t channel_len;\n    uint32_t message_len;\n    unsigned char bulk_data[8]; /* 8 bytes just as placeholder. */\n} clusterMsgDataPublish;\n\ntypedef struct {\n    uint64_t configEpoch; /* Config epoch of the specified instance. */\n    char nodename[CLUSTER_NAMELEN]; /* Name of the slots owner. */\n    unsigned char slots[CLUSTER_SLOTS/8]; /* Slots bitmap. */\n} clusterMsgDataUpdate;\n\ntypedef struct {\n    uint64_t module_id;     /* ID of the sender module. */\n    uint32_t len;           /* Length of the message payload. */\n    uint8_t type;           /* Type from 0 to 255. */\n    unsigned char bulk_data[3]; /* 3 bytes just as placeholder. 
*/\n} clusterMsgModule;\n\n/* The cluster supports optional extension messages that can be sent\n * along with ping/pong/meet messages to give additional info in a\n * consistent manner. */\ntypedef enum {\n    CLUSTERMSG_EXT_TYPE_HOSTNAME,\n    CLUSTERMSG_EXT_TYPE_HUMAN_NODENAME,\n    CLUSTERMSG_EXT_TYPE_FORGOTTEN_NODE,\n    CLUSTERMSG_EXT_TYPE_SHARDID,\n    CLUSTERMSG_EXT_TYPE_INTERNALSECRET,\n} clusterMsgPingtypes;\n\n/* Helper function for making sure extensions are eight byte aligned. */\n#define EIGHT_BYTE_ALIGN(size) ((((size) + 7) / 8) * 8)\n#define CLUSTER_INTERNALSECRETLEN 40      /* sha1 hex length */\n\ntypedef struct {\n    char hostname[1]; /* The announced hostname, ends with \\0. */\n} clusterMsgPingExtHostname;\n\ntypedef struct {\n    char human_nodename[1]; /* The announced nodename, ends with \\0. */\n} clusterMsgPingExtHumanNodename;\n\ntypedef struct {\n    char name[CLUSTER_NAMELEN]; /* Node name. */\n    uint64_t ttl; /* Remaining time to blacklist the node, in seconds. */\n} clusterMsgPingExtForgottenNode;\n\nstatic_assert(sizeof(clusterMsgPingExtForgottenNode) % 8 == 0, \"\");\n\ntypedef struct {\n    char shard_id[CLUSTER_NAMELEN]; /* The shard_id, 40 bytes fixed. */\n} clusterMsgPingExtShardId;\n\ntypedef struct {\n    char internal_secret[CLUSTER_INTERNALSECRETLEN]; /* Current shard internal secret */\n} clusterMsgPingExtInternalSecret;\n\ntypedef struct {\n    uint32_t length; /* Total length of this extension message (including this header) */\n    uint16_t type; /* Type of this extension message (see clusterMsgPingExtTypes) */\n    uint16_t unused; /* 16 bits of padding to make this structure 8 byte aligned. 
*/\n    union {\n        clusterMsgPingExtHostname hostname;\n        clusterMsgPingExtHumanNodename human_nodename;\n        clusterMsgPingExtForgottenNode forgotten_node;\n        clusterMsgPingExtShardId shard_id;\n        clusterMsgPingExtInternalSecret internal_secret;\n    } ext[]; /* Actual extension information, formatted so that the data is 8\n              * byte aligned, regardless of its content. */\n} clusterMsgPingExt;\n\nunion clusterMsgData {\n    /* PING, MEET and PONG */\n    struct {\n        /* Array of N clusterMsgDataGossip structures */\n        clusterMsgDataGossip gossip[1];\n        /* Extension data that can optionally be sent for ping/meet/pong\n         * messages. We can't explicitly define them here though, since\n         * the gossip array isn't the real length of the gossip data. */\n    } ping;\n\n    /* FAIL */\n    struct {\n        clusterMsgDataFail about;\n    } fail;\n\n    /* PUBLISH */\n    struct {\n        clusterMsgDataPublish msg;\n    } publish;\n\n    /* UPDATE */\n    struct {\n        clusterMsgDataUpdate nodecfg;\n    } update;\n\n    /* MODULE */\n    struct {\n        clusterMsgModule msg;\n    } module;\n};\n\n#define CLUSTER_PROTO_VER 1 /* Cluster bus protocol version. */\n\ntypedef struct {\n    char sig[4];        /* Signature \"RCmb\" (Redis Cluster message bus). */\n    uint32_t totlen;    /* Total length of this message */\n    uint16_t ver;       /* Protocol version, currently set to 1. */\n    uint16_t port;      /* Primary port number (TCP or TLS). */\n    uint16_t type;      /* Message type */\n    uint16_t count;     /* Only used for some kinds of messages. */\n    uint64_t currentEpoch;  /* The epoch accordingly to the sending node. */\n    uint64_t configEpoch;   /* The config epoch if it's a master, or the last\n                               epoch advertised by its master if it is a\n                               slave. 
*/\n    uint64_t offset;    /* Master replication offset if node is a master or\n                           processed replication offset if node is a slave. */\n    char sender[CLUSTER_NAMELEN]; /* Name of the sender node */\n    unsigned char myslots[CLUSTER_SLOTS/8];\n    char slaveof[CLUSTER_NAMELEN];\n    char myip[NET_IP_STR_LEN];    /* Sender IP, if not all zeroed. */\n    uint16_t extensions; /* Number of extensions sent along with this packet. */\n    char notused1[30];   /* 30 bytes reserved for future usage. */\n    uint16_t pport;      /* Secondary port number: if primary port is TCP port, this is\n                            TLS port, and if primary port is TLS port, this is TCP port.*/\n    uint16_t cport;      /* Sender TCP cluster bus port */\n    uint16_t flags;      /* Sender node flags */\n    unsigned char state; /* Cluster state from the POV of the sender */\n    unsigned char mflags[3]; /* Message flags: CLUSTERMSG_FLAG[012]_... */\n    union clusterMsgData data;\n} clusterMsg;\n\n/* clusterMsg defines the gossip wire protocol exchanged among Redis cluster\n * members, which can be running different versions of redis-server bits,\n * especially during cluster rolling upgrades.\n *\n * Therefore, fields in this struct should remain at the same offset from\n * release to release. 
The static asserts below ensure that incompatible\n * changes in clusterMsg are caught at compile time.\n */\n\nstatic_assert(offsetof(clusterMsg, sig) == 0, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, totlen) == 4, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, ver) == 8, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, port) == 10, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, type) == 12, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, count) == 14, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, currentEpoch) == 16, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, configEpoch) == 24, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, offset) == 32, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, sender) == 40, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, myslots) == 80, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, slaveof) == 2128, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, myip) == 2168, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, extensions) == 2214, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, notused1) == 2216, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, pport) == 2246, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, cport) == 2248, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, flags) == 2250, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, state) == 2252, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, mflags) == 2253, \"unexpected field offset\");\nstatic_assert(offsetof(clusterMsg, data) == 2256, \"unexpected field offset\");\n\n#define CLUSTERMSG_MIN_LEN (sizeof(clusterMsg)-sizeof(union clusterMsgData))\n\n/* Message flags better specify the packet content or are used to\n * provide some 
information about the node state. */\n#define CLUSTERMSG_FLAG0_PAUSED (1<<0) /* Master paused for manual failover. */\n#define CLUSTERMSG_FLAG0_FORCEACK (1<<1) /* Give ACK to AUTH_REQUEST even if\n                                            master is up. */\n#define CLUSTERMSG_FLAG0_EXT_DATA (1<<2) /* Message contains extension data */\n\nstruct _clusterNode {\n    mstime_t ctime; /* Node object creation time. */\n    char name[CLUSTER_NAMELEN]; /* Node name, hex string, sha1-size */\n    char shard_id[CLUSTER_NAMELEN]; /* shard id, hex string, sha1-size */\n    int flags;      /* CLUSTER_NODE_... */\n    uint64_t configEpoch; /* Last configEpoch observed for this node */\n    unsigned char slots[CLUSTER_SLOTS/8]; /* slots handled by this node */\n    uint16_t *slot_info_pairs; /* Slots info represented as (start/end) pair (consecutive index). */\n    int slot_info_pairs_count; /* Used number of slots in slot_info_pairs */\n    int numslots;   /* Number of slots handled by this node */\n    int numslaves;  /* Number of slave nodes, if this is a master */\n    clusterNode **slaves; /* pointers to slave nodes */\n    clusterNode *slaveof; /* pointer to the master node. Note that it\n                             may be NULL even if the node is a slave\n                             if we don't have the master node in our\n                             tables. 
*/\n    unsigned long long last_in_ping_gossip; /* The number of the last carried in the ping gossip section */\n    mstime_t ping_sent;      /* Unix time we sent latest ping */\n    mstime_t pong_received;  /* Unix time we received the pong */\n    mstime_t data_received;  /* Unix time we received any data */\n    mstime_t fail_time;      /* Unix time when FAIL flag was set */\n    mstime_t voted_time;     /* Last time we voted for a slave of this master */\n    mstime_t repl_offset_time;  /* Unix time we received offset for this node */\n    mstime_t orphaned_time;     /* Starting time of orphaned master condition */\n    long long repl_offset;      /* Last known repl offset for this node. */\n    char ip[NET_IP_STR_LEN];    /* Latest known IP address of this node */\n    sds hostname;               /* The known hostname for this node */\n    sds human_nodename;         /* The known human readable nodename for this node */\n    int tcp_port;               /* Latest known clients TCP port. */\n    int tls_port;               /* Latest known clients TLS port */\n    int cport;                  /* Latest known cluster port of this node. */\n    clusterLink *link;          /* TCP/IP link established toward this node */\n    clusterLink *inbound_link;  /* TCP/IP link accepted from this node */\n    list *fail_reports;         /* List of nodes signaling this as failing */\n};\n\nstruct clusterState {\n    clusterNode *myself;  /* This node */\n    uint64_t currentEpoch;\n    int state;            /* CLUSTER_OK, CLUSTER_FAIL, ... */\n    int size;             /* Num of master nodes with at least one slot */\n    dict *nodes;          /* Hash table of name -> clusterNode structures */\n    dict *shards;         /* Hash table of shard_id -> list (of nodes) structures */\n    dict *nodes_black_list; /* Nodes we don't re-add for a few seconds. 
*/\n    clusterNode *migrating_slots_to[CLUSTER_SLOTS];\n    clusterNode *importing_slots_from[CLUSTER_SLOTS];\n    clusterNode *slots[CLUSTER_SLOTS];\n    char internal_secret[CLUSTER_INTERNALSECRETLEN];\n    /* The following fields are used to take the slave state on elections. */\n    mstime_t failover_auth_time; /* Time of previous or next election. */\n    int failover_auth_count;    /* Number of votes received so far. */\n    int failover_auth_sent;     /* True if we already asked for votes. */\n    int failover_auth_rank;     /* This slave rank for current auth request. */\n    uint64_t failover_auth_epoch; /* Epoch of the current election. */\n    int cant_failover_reason;   /* Why a slave is currently not able to\n                                   failover. See the CANT_FAILOVER_* macros. */\n    /* Manual failover state in common. */\n    mstime_t mf_end;            /* Manual failover time limit (ms unixtime).\n                                   It is zero if there is no MF in progress. */\n    /* Manual failover state of master. */\n    clusterNode *mf_slave;      /* Slave performing the manual failover. */\n    /* Manual failover state of slave. */\n    long long mf_master_offset; /* Master offset the slave needs to start MF\n                                   or -1 if still not received. */\n    int mf_can_start;           /* If non-zero signal that the manual failover\n                                   can start requesting masters vote. */\n    /* The following fields are used by masters to take state on elections. */\n    uint64_t lastVoteEpoch;     /* Epoch of the last vote granted. */\n    int todo_before_sleep; /* Things to do in clusterBeforeSleep(). */\n    /* Stats */\n    /* Messages received and sent by type. 
*/\n    long long stats_bus_messages_sent[CLUSTERMSG_TYPE_COUNT];\n    long long stats_bus_messages_received[CLUSTERMSG_TYPE_COUNT];\n    long long stats_pfail_nodes;    /* Number of nodes in PFAIL status,\n                                       excluding nodes without address. */\n    unsigned long long stat_cluster_links_buffer_limit_exceeded;  /* Total number of cluster links freed due to exceeding buffer limit */\n\n    /* Bit map for slots that are no longer claimed by the owner in cluster PING\n     * messages. During slot migration, the owner will stop claiming the slot after\n     * the ownership transfer. Set the bit corresponding to the slot when a node\n     * stops claiming the slot. This prevents spreading incorrect information (that\n     * source still owns the slot) using UPDATE messages. */\n    unsigned char owner_not_claiming_slot[CLUSTER_SLOTS / 8];\n};\n\n\n#endif //CLUSTER_LEGACY_H\n"
  },
  {
    "path": "src/cluster_slot_stats.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"cluster_slot_stats.h\"\n#include \"cluster.h\"\n\ntypedef enum {\n    KEY_COUNT,\n    CPU_USEC,\n    MEMORY_BYTES,\n    NETWORK_BYTES_IN,\n    NETWORK_BYTES_OUT,\n    SLOT_STAT_COUNT,\n    INVALID\n} slotStatType;\n\n/* -----------------------------------------------------------------------------\n * CLUSTER SLOT-STATS command\n * -------------------------------------------------------------------------- */\n\n/* Struct used to temporarily hold slot statistics for sorting. */\ntypedef struct {\n    int slot;\n    uint64_t stat;\n} slotStatForSort;\n\nstatic int markSlotsAssignedToMyShard(unsigned char *assigned_slots, int start_slot, int end_slot) {\n    clusterNode *primary = clusterNodeGetMaster(getMyClusterNode());\n    int assigned_slots_count = 0;\n    for (int slot = start_slot; slot <= end_slot; slot++) {\n        if (!clusterNodeCoversSlot(primary, slot)) continue;\n        assigned_slots[slot]++;\n        assigned_slots_count++;\n    }\n    return assigned_slots_count;\n}\n\nstatic inline kvstoreDictMetadata *getSlotMeta(int slot, int createIfNeeded) {\n    return kvstoreGetDictMeta(server.db->keys, slot, createIfNeeded);\n}\n\nstatic uint64_t getSlotStat(int slot, slotStatType stat_type) {\n    kvstoreDictMetadata *meta = getSlotMeta(slot, 0);\n    switch (stat_type) {\n    case KEY_COUNT: return countKeysInSlot(slot);\n    case CPU_USEC: return meta ? meta->cpu_usec : 0;\n    case MEMORY_BYTES: return meta ? 
meta->alloc_size : 0;\n    case NETWORK_BYTES_IN: return meta ? meta->network_bytes_in : 0;\n    case NETWORK_BYTES_OUT: return meta ? meta->network_bytes_out : 0;\n    default: serverPanic(\"Invalid slot stat type %d was found.\", stat_type);\n    }\n}\n\n/* Compare by stat in ascending order. If stat is the same, compare by slot in ascending order. */\nstatic int slotStatForSortAscCmp(const void *a, const void *b) {\n    const slotStatForSort *entry_a = a;\n    const slotStatForSort *entry_b = b;\n    if (entry_a->stat == entry_b->stat) {\n        return entry_a->slot - entry_b->slot;\n    }\n    return entry_a->stat - entry_b->stat;\n}\n\n/* Compare by stat in descending order. If stat is the same, compare by slot in ascending order. */\nstatic int slotStatForSortDescCmp(const void *a, const void *b) {\n    const slotStatForSort *entry_a = a;\n    const slotStatForSort *entry_b = b;\n    if (entry_b->stat == entry_a->stat) {\n        return entry_a->slot - entry_b->slot;\n    }\n    return entry_b->stat - entry_a->stat;\n}\n\nstatic void collectAndSortSlotStats(slotStatForSort slot_stats[], slotStatType order_by, int desc) {\n    clusterNode *primary = clusterNodeGetMaster(getMyClusterNode());\n    int i = 0;\n    for (int slot = 0; slot < CLUSTER_SLOTS; slot++) {\n        if (!clusterNodeCoversSlot(primary, slot)) continue;\n        slot_stats[i].slot = slot;\n        slot_stats[i].stat = getSlotStat(slot, order_by);\n        i++;\n    }\n    qsort(slot_stats, i, sizeof(slotStatForSort), desc ? 
slotStatForSortDescCmp : slotStatForSortAscCmp);\n}\n\nstatic void addReplySlotStat(client *c, int slot) {\n    int cpu_enabled = server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_CPU;\n    int net_enabled = server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_NET;\n    int mem_enabled = (server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_MEM) && server.memory_tracking_enabled;\n\n    addReplyArrayLen(c, 2); /* Array of size 2, where 0th index represents (int) slot,\n                             * and 1st index represents (map) usage statistics. */\n    addReplyLongLong(c, slot);\n    /* Nested map representing slot usage statistics. */\n    addReplyMapLen(c, 1 +                                        /* key-count */\n                   (mem_enabled ? 1 : 0) +                       /* memory-bytes */\n                   (cpu_enabled ? 1 : 0) +                       /* cpu-usec */\n                   (net_enabled ? 2 : 0));                       /* network-bytes-in/out */\n    addReplyBulkCString(c, \"key-count\");\n    addReplyLongLong(c, countKeysInSlot(slot));\n\n    /* Any additional metrics aside from key-count come with a performance trade-off,\n     * and are aggregated and returned based on its server config. */\n    kvstoreDictMetadata *meta = getSlotMeta(slot, 0);\n    if (mem_enabled) {\n        addReplyBulkCString(c, \"memory-bytes\");\n        addReplyLongLong(c, meta ? meta->alloc_size : 0);\n    }\n    if (cpu_enabled) {\n        addReplyBulkCString(c, \"cpu-usec\");\n        addReplyLongLong(c, meta ? meta->cpu_usec : 0);\n    }\n    if (net_enabled) {\n        addReplyBulkCString(c, \"network-bytes-in\");\n        addReplyLongLong(c, meta ? meta->network_bytes_in : 0);\n        addReplyBulkCString(c, \"network-bytes-out\");\n        addReplyLongLong(c, meta ? meta->network_bytes_out : 0);\n    }\n}\n\n/* Adds reply for the SLOTSRANGE variant.\n * Response is ordered in ascending slot number. 
*/\nstatic void addReplySlotsRange(client *c, unsigned char *assigned_slots, int start_slot, int end_slot, int len) {\n    addReplyArrayLen(c, len); /* Top level RESP reply format is defined as an array, due to ordering invariance. */\n\n    for (int slot = start_slot; slot <= end_slot; slot++) {\n        if (assigned_slots[slot]) addReplySlotStat(c, slot);\n    }\n}\n\nstatic void addReplySortedSlotStats(client *c, slotStatForSort slot_stats[], long limit) {\n    int num_slots_assigned = getMyShardSlotCount();\n    int len = min(limit, num_slots_assigned);\n    addReplyArrayLen(c, len); /* Top level RESP reply format is defined as an array, due to ordering invariance. */\n\n    for (int i = 0; i < len; i++) {\n        addReplySlotStat(c, slot_stats[i].slot);\n    }\n}\n\nstatic int canAddNetworkBytesOut(client *c) {\n    return clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_NET) && c->slot != INVALID_CLUSTER_SLOT;\n}\n\n/* Accumulates egress bytes upon sending RESP responses back to user clients. */\nvoid clusterSlotStatsAddNetworkBytesOutForUserClient(client *c) {\n    if (!canAddNetworkBytesOut(c)) return;\n\n    serverAssert(c->slot >= 0 && c->slot < CLUSTER_SLOTS);\n    kvstoreDictMetadata *meta = getSlotMeta(c->slot, 1);\n    meta->network_bytes_out += c->net_output_bytes_curr_cmd;\n}\n\n/* Accumulates egress bytes upon sending replication stream. This only applies for primary nodes. */\nstatic void clusterSlotStatsUpdateNetworkBytesOutForReplication(long long len) {\n    client *c = server.current_client;\n    if (c == NULL || !canAddNetworkBytesOut(c)) return;\n\n    /* We multiply the bytes len by the number of replicas to account for us broadcasting to multiple replicas at once. 
*/\n    len *= (long long)listLength(server.slaves);\n    serverAssert(c->slot >= 0 && c->slot < CLUSTER_SLOTS);\n    serverAssert(clusterNodeIsMaster(getMyClusterNode()));\n    kvstoreDictMetadata *meta = getSlotMeta(c->slot, 1);\n    /* We sometimes want to adjust the counter downwards (for example when we want to undo accounting for\n     * SELECT commands that don't belong to any slot) so let's make sure we don't underflow the counter. */\n    debugServerAssert(len >= 0 || meta->network_bytes_out >= (uint64_t)-len);\n    meta->network_bytes_out += len;\n}\n\n/* Increment network bytes out for replication stream. This method will increment `len` value times the active replica\n * count. */\nvoid clusterSlotStatsIncrNetworkBytesOutForReplication(long long len) {\n    clusterSlotStatsUpdateNetworkBytesOutForReplication(len);\n}\n\n/* Decrement network bytes out for replication stream.\n * This is used to remove accounting of data which doesn't belong to any particular slots e.g. SELECT command.\n * This will decrement `len` value times the active replica count. */\nvoid clusterSlotStatsDecrNetworkBytesOutForReplication(long long len) {\n    clusterSlotStatsUpdateNetworkBytesOutForReplication(-len);\n}\n\n/* Upon SPUBLISH, two egress events are triggered.\n * 1) Internal propagation, for clients that are subscribed to the current node.\n * 2) External propagation, for other nodes within the same shard (could either be a primary or replica).\n *    This type is not aggregated, to stay consistent with server.stat_net_output_bytes aggregation.\n * This function covers the internal propagation component. */\nvoid clusterSlotStatsAddNetworkBytesOutForShardedPubSubInternalPropagation(client *c, int slot) {\n    /* For a blocked client, c->slot could be pre-filled.\n     * Thus c->slot is backed-up for restoration after aggregation is completed. 
*/\n    int save_slot = c->slot;\n    c->slot = slot;\n    if (canAddNetworkBytesOut(c)) {\n        serverAssert(c->slot >= 0 && c->slot < CLUSTER_SLOTS);\n        kvstoreDictMetadata *meta = getSlotMeta(c->slot, 1);\n        meta->network_bytes_out += c->net_output_bytes_curr_cmd;\n    }\n    /* For sharded pubsub, the client's network bytes metrics must be reset here,\n     * as resetClient() is not called until subscription ends. */\n    c->net_output_bytes_curr_cmd = 0;\n    c->slot = save_slot;\n}\n\n/* Adds reply for the ORDERBY variant.\n * Response is ordered based on the sort result. */\nstatic void addReplyOrderBy(client *c, slotStatType order_by, long limit, int desc) {\n    slotStatForSort slot_stats[CLUSTER_SLOTS];\n    collectAndSortSlotStats(slot_stats, order_by, desc);\n    addReplySortedSlotStats(c, slot_stats, limit);\n}\n\n/* Resets applicable slot statistics. */\nvoid clusterSlotStatReset(int slot) {\n    kvstoreDictMetadata *meta = getSlotMeta(slot, 0);\n    if (!meta) return;\n    meta->cpu_usec = 0;\n    meta->network_bytes_in = 0;\n    meta->network_bytes_out = 0;\n    kvstoreFreeDictIfNeeded(server.db->keys, slot);\n}\n\nvoid clusterSlotStatResetAll(void) {\n    for (int slot = 0; slot < CLUSTER_SLOTS; slot++)\n        clusterSlotStatReset(slot);\n}\n\n/* For cpu-usec accumulation, nested commands within EXEC, EVAL, FCALL are skipped.\n * This is due to their unique callstack, where the c->duration for\n * EXEC, EVAL and FCALL already includes all of its nested commands.\n * Meaning, the accumulation of cpu-usec for these nested commands\n * would equate to repeating the same calculation twice.\n */\nstatic int canAddCpuDuration(client *c) {\n    return clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_CPU) && /* CPU tracking should be enabled. */\n           c->slot != INVALID_CLUSTER_SLOT &&    /* Command should be slot specific. 
*/\n           (!server.execution_nesting ||         /* Either command should not be nested, */\n            (c->realcmd->flags & CMD_BLOCKING)); /* or it must be due to unblocking. */\n}\n\nvoid clusterSlotStatsAddCpuDuration(client *c, ustime_t duration) {\n    if (!canAddCpuDuration(c)) return;\n\n    serverAssert(c->slot >= 0 && c->slot < CLUSTER_SLOTS);\n    kvstoreDictMetadata *meta = getSlotMeta(c->slot, 1);\n    meta->cpu_usec += duration;\n}\n\n/* For cross-slot scripting, its caller client's slot must be invalidated,\n * such that its slot-stats aggregation is bypassed. */\nvoid clusterSlotStatsInvalidateSlotIfApplicable(scriptRunCtx *ctx) {\n    if (!(ctx->flags & SCRIPT_ALLOW_CROSS_SLOT)) return;\n\n    ctx->original_client->slot = -1;\n}\n\nstatic int canAddNetworkBytesIn(client *c) {\n    /* First, network tracking must be enabled.\n     * Second, command should target a specific slot.\n     * Third, blocked client is not aggregated, to avoid duplicate aggregation upon unblocking.\n     * Fourth, the server is not under a MULTI/EXEC transaction, to avoid duplicate aggregation of\n     * EXEC's 14 bytes RESP upon nested call()'s afterCommand(). 
*/\n    return clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_NET) && c->slot != INVALID_CLUSTER_SLOT &&\n        !(c->flags & CLIENT_BLOCKED) && !server.in_exec;\n}\n\n/* Adds network ingress bytes of the current command in execution,\n * calculated earlier within networking.c layer.\n *\n * Note: Below function should only be called once c->slot is parsed.\n * Otherwise, the aggregation will be skipped due to canAddNetworkBytesIn() check failure.\n * */\nvoid clusterSlotStatsAddNetworkBytesInForUserClient(client *c) {\n    if (!canAddNetworkBytesIn(c)) return;\n\n    if (c->cmd->proc == execCommand) {\n        /* Accumulate its corresponding MULTI RESP; *1\\r\\n$5\\r\\nmulti\\r\\n */\n        c->net_input_bytes_curr_cmd += 15;\n    }\n\n    kvstoreDictMetadata *meta = getSlotMeta(c->slot, 1);\n    meta->network_bytes_in += c->net_input_bytes_curr_cmd;\n}\n\nvoid clusterSlotStatsCommand(client *c) {\n    if (!server.cluster_enabled) {\n        addReplyError(c, \"This instance has cluster support disabled\");\n        return;\n    }\n\n    /* Parse additional arguments. */\n    if (c->argc == 5 && !strcasecmp(c->argv[2]->ptr, \"slotsrange\")) {\n        /* CLUSTER SLOT-STATS SLOTSRANGE start-slot end-slot */\n        int start_slot, end_slot;\n        if ((start_slot = getSlotOrReply(c, c->argv[3])) == -1 ||\n            (end_slot = getSlotOrReply(c, c->argv[4])) == -1) {\n            return;\n        }\n        if (start_slot > end_slot) {\n            addReplyErrorFormat(c, \"Start slot number %d is greater than end slot number %d\", start_slot, end_slot);\n            return;\n        }\n        /* Initialize slot assignment array. 
*/\n        unsigned char assigned_slots[CLUSTER_SLOTS] = {0};\n        int assigned_slots_count = markSlotsAssignedToMyShard(assigned_slots, start_slot, end_slot);\n        addReplySlotsRange(c, assigned_slots, start_slot, end_slot, assigned_slots_count);\n\n    } else if (c->argc >= 4 && !strcasecmp(c->argv[2]->ptr, \"orderby\")) {\n        /* CLUSTER SLOT-STATS ORDERBY metric [LIMIT limit] [ASC | DESC] */\n        int desc = 1;\n        slotStatType order_by = INVALID;\n        int cpu_enabled = server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_CPU;\n        int net_enabled = server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_NET;\n        int mem_enabled = (server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_MEM) && server.memory_tracking_enabled;\n        if (!strcasecmp(c->argv[3]->ptr, \"key-count\")) {\n            order_by = KEY_COUNT;\n        } else if (!strcasecmp(c->argv[3]->ptr, \"cpu-usec\") && cpu_enabled) {\n            order_by = CPU_USEC;\n        } else if (!strcasecmp(c->argv[3]->ptr, \"memory-bytes\") && mem_enabled) {\n            order_by = MEMORY_BYTES;\n        } else if (!strcasecmp(c->argv[3]->ptr, \"network-bytes-in\") && net_enabled) {\n            order_by = NETWORK_BYTES_IN;\n        } else if (!strcasecmp(c->argv[3]->ptr, \"network-bytes-out\") && net_enabled) {\n            order_by = NETWORK_BYTES_OUT;\n        } else {\n            addReplyError(c, \"Unrecognized sort metric for ORDERBY.\");\n            return;\n        }\n        int i = 4; /* Next argument index, following ORDERBY */\n        int limit_counter = 0, asc_desc_counter = 0;\n        long limit = CLUSTER_SLOTS;\n        while (i < c->argc) {\n            int moreargs = c->argc > i + 1;\n            if (!strcasecmp(c->argv[i]->ptr, \"limit\") && moreargs) {\n                if (getRangeLongFromObjectOrReply(\n                        c, c->argv[i + 1], 1, CLUSTER_SLOTS, &limit,\n                        \"Limit has to lie in between 1 and 16384 (maximum 
number of slots).\") != C_OK) {\n                    return;\n                }\n                i++;\n                limit_counter++;\n            } else if (!strcasecmp(c->argv[i]->ptr, \"asc\")) {\n                desc = 0;\n                asc_desc_counter++;\n            } else if (!strcasecmp(c->argv[i]->ptr, \"desc\")) {\n                desc = 1;\n                asc_desc_counter++;\n            } else {\n                addReplyErrorObject(c, shared.syntaxerr);\n                return;\n            }\n            if (limit_counter > 1 || asc_desc_counter > 1) {\n                addReplyError(c, \"Multiple filters of the same type are disallowed.\");\n                return;\n            }\n            i++;\n        }\n        addReplyOrderBy(c, order_by, limit, desc);\n\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n"
  },
  {
    "path": "src/cluster_slot_stats.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"script.h\"\n\n/* General use-cases. */\nvoid clusterSlotStatReset(int slot);\nvoid clusterSlotStatResetAll(void);\n\n/* cpu-usec metric. */\nvoid clusterSlotStatsAddCpuDuration(client *c, ustime_t duration);\nvoid clusterSlotStatsInvalidateSlotIfApplicable(scriptRunCtx *ctx);\n\n/* network-bytes-in metric. */\nvoid clusterSlotStatsAddNetworkBytesInForUserClient(client *c);\n\n/* network-bytes-out metric. */\nvoid clusterSlotStatsAddNetworkBytesOutForUserClient(client *c);\nvoid clusterSlotStatsIncrNetworkBytesOutForReplication(long long len);\nvoid clusterSlotStatsDecrNetworkBytesOutForReplication(long long len);\nvoid clusterSlotStatsAddNetworkBytesOutForShardedPubSubInternalPropagation(client *c, int slot);\n"
  },
  {
    "path": "src/commands/README.md",
    "content": "This directory contains JSON files, one for each of Redis commands.\n\nEach JSON contains all the information about the command itself, but these JSON files are not to be used directly!\nAny third party who needs access to command information must get it from `COMMAND INFO` and `COMMAND DOCS`.\nThe output can be extracted in a JSON format by using `redis-cli --json`, in the same manner as in `utils/generate-commands-json.py`.\n\nThe JSON files are used to generate commands.def (and https://github.com/redis/redis-doc/blob/master/commands.json) in Redis, and\ndespite looking similar to the output of `COMMAND` there are some fields and flags that are implicitly populated, and that's the\nreason one shouldn't rely on the raw files.\n\nThe structure of each JSON is somewhat documented in https://redis.io/commands/command-docs/ and https://redis.io/commands/command/\n\nThe `reply_schema` section is a standard JSON Schema (see https://json-schema.org/) that describes the reply of each command.\nIt is designed to someday be used to auto-generate code in client libraries, but is not yet mature and is not exposed externally.\n\n"
  },
  {
    "path": "src/commands/acl-cat.json",
    "content": "{\n    \"CAT\": {\n        \"summary\": \"Lists the ACL categories, or the commands inside a category.\",\n        \"complexity\": \"O(1) since the categories and commands are a fixed set.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"array\",\n                    \"description\": \"In case `category` was not given, a list of existing ACL categories\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"In case `category` was given, list of commands that fall under the provided ACL category\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"category\",\n                \"type\": \"string\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-deluser.json",
    "content": "{\n    \"DELUSER\": {\n        \"summary\": \"Deletes ACL users, and terminates their connections.\",\n        \"complexity\": \"O(1) amortized time considering the typical user.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -3,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],        \n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The number of users that were deleted\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"username\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-dryrun.json",
    "content": "{\n    \"DRYRUN\": {\n        \"summary\": \"Simulates the execution of a command by a user, without executing the command.\",\n        \"complexity\": \"O(1).\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -4,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"const\": \"OK\",\n                    \"description\": \"The given user may successfully execute the given command.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The description of the problem, in case the user is not allowed to run the given command.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"username\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"command\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-genpass.json",
    "content": "{\n    \"GENPASS\": {\n        \"summary\": \"Generates a pseudorandom, secure password that can be used to identify ACL users.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"Pseudorandom data. By default it contains 64 bytes, representing 256 bits of data. If `bits` was given, the output string length is the number of specified bits (rounded to the next multiple of 4) divided by 4.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"bits\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-getuser.json",
    "content": "{\n    \"GETUSER\": {\n        \"summary\": \"Lists the ACL rules of a user.\",\n        \"complexity\": \"O(N). Where N is the number of password, command and pattern rules that the user has.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 3,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added Pub/Sub channel patterns.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added selectors and changed the format of key and channel patterns from a list to their rule representation.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"username\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"a set of ACL rule definitions for the user\",\n                    \"type\": \"object\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"flags\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"passwords\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"string\"\n                            }\n                        },\n                        \"commands\": {\n                            \"description\": \"root selector's commands\",\n                            \"type\": \"string\"\n                        
},\n                        \"keys\": {\n                            \"description\": \"root selector's keys\",\n                            \"type\": \"string\"\n                        },\n                        \"channels\": {\n                            \"description\": \"root selector's channels\",\n                            \"type\": \"string\"\n                        },\n                        \"selectors\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"object\",\n                                \"additionalProperties\": false,\n                                \"properties\": {\n                                    \"commands\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"keys\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"channels\": {\n                                        \"type\": \"string\"\n                                    }\n                                }\n                            }\n                        }\n                    }\n                },\n                {\n                    \"description\": \"If user does not exist\",\n                    \"type\": \"null\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"A list of subcommands and their description\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-list.json",
    "content": "{\n    \"LIST\": {\n        \"summary\": \"Dumps the effective rules in ACL file format.\",\n        \"complexity\": \"O(N). Where N is the number of configured users.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"A list of currently active ACL rules\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-load.json",
    "content": "{\n    \"LOAD\": {\n        \"summary\": \"Reloads the rules from the configured ACL file.\",\n        \"complexity\": \"O(N). Where N is the number of configured users.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-log.json",
    "content": "{\n    \"LOG\": {\n        \"summary\": \"Lists recent security events generated due to ACL rules.\",\n        \"complexity\": \"O(N) with N being the number of entries shown.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"history\": [\n            [\n                \"7.2.0\",\n                \"Added entry ID, timestamp created, and timestamp last updated.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"In case `RESET` was not given, a list of recent ACL security events.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"additionalProperties\": false,\n                        \"properties\": {\n                            \"count\": {\n                                \"type\": \"integer\"\n                            },\n                            \"reason\": {\n                                \"type\": \"string\"\n                            },\n                            \"context\": {\n                                \"type\": \"string\"\n                            },\n                            \"object\": {\n                                \"type\": \"string\"\n                            },\n                            \"username\": {\n                                \"type\": \"string\"\n                            },\n                            \"age-seconds\": {\n                                \"type\": \"number\"\n                            },\n                            \"client-info\": {\n                                \"type\": \"string\"\n    
                        },\n                            \"entry-id\": {\n                                \"type\": \"integer\"\n                            },\n                            \"timestamp-created\": {\n                                \"type\": \"integer\"\n                            },\n                            \"timestamp-last-updated\": {\n                                \"type\": \"integer\"\n                            }\n                        }\n                    }\n                },\n                {\n                    \"const\": \"OK\",\n                    \"description\": \"In case `RESET` was given, OK indicates ACL log was cleared.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"operation\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"reset\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RESET\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-save.json",
    "content": "{\n    \"SAVE\": {\n        \"summary\": \"Saves the effective ACL rules in the configured ACL file.\",\n        \"complexity\": \"O(N). Where N is the number of configured users.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-setuser.json",
    "content": "{\n    \"SETUSER\": {\n        \"summary\": \"Creates and modifies an ACL user and its rules.\",\n        \"complexity\": \"O(N). Where N is the number of rules provided.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -3,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added Pub/Sub channel patterns.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added selectors and key based permissions.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"username\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"rule\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-users.json",
    "content": "{\n    \"USERS\": {\n        \"summary\": \"Lists all ACL users.\",\n        \"complexity\": \"O(N). Where N is the number of configured users.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of existing ACL users\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl-whoami.json",
    "content": "{\n    \"WHOAMI\": {\n        \"summary\": \"Returns the authenticated username of the current connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"ACL\",\n        \"function\": \"aclCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The username of the current connection.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/acl.json",
    "content": "{\n    \"ACL\": {\n        \"summary\": \"A container for Access Control List commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"6.0.0\",\n        \"arity\": -2,\n        \"command_flags\": [\n            \"SENTINEL\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/append.json",
    "content": "{\n    \"APPEND\": {\n        \"summary\": \"Appends a string to the value of a key. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1). The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.\",\n        \"group\": \"string\",\n        \"since\": \"2.0.0\",\n        \"arity\": 3,\n        \"function\": \"appendCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The length of the string after the append operation.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/asking.json",
    "content": "{\n    \"ASKING\": {\n        \"summary\": \"Signals that a cluster client is following an -ASK redirect.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 1,\n        \"function\": \"askingCommand\",\n        \"command_flags\": [\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/auth.json",
    "content": "{\n    \"AUTH\": {\n        \"summary\": \"Authenticates the connection.\",\n        \"complexity\": \"O(N) where N is the number of passwords defined for the user\",\n        \"group\": \"connection\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"authCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"Added ACL style (username and password).\"\n            ]\n        ],\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"NO_AUTH\",\n            \"SENTINEL\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"username\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"since\": \"6.0.0\"\n            },\n            {\n                \"name\": \"password\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/bgrewriteaof.json",
    "content": "{\n    \"BGREWRITEAOF\": {\n        \"summary\": \"Asynchronously rewrites the append-only file to disk.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"bgrewriteaofCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"A simple string reply indicating that the rewriting started or is about to start ASAP\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bgsave.json",
    "content": "{\n    \"BGSAVE\": {\n        \"summary\": \"Asynchronously saves the database(s) to disk.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"bgsaveCommand\",\n        \"history\": [\n            [\n                \"3.2.2\",\n                \"Added the `SCHEDULE` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"schedule\",\n                \"token\": \"SCHEDULE\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.2.2\"\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": \"Background saving started\"\n                },\n                {\n                    \"const\": \"Background saving scheduled\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bitcount.json",
    "content": "{\n    \"BITCOUNT\": {\n        \"summary\": \"Counts the number of set bits (population counting) in a string.\",\n        \"complexity\": \"O(N)\",\n        \"group\": \"bitmap\",\n        \"since\": \"2.6.0\",\n        \"arity\": -2,\n        \"function\": \"bitcountCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `BYTE|BIT` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"range\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"start\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"end\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"unit\",\n                        \"type\": \"oneof\",\n                        \"optional\": true,\n                        \"since\": \"7.0.0\",\n                        \"arguments\": [\n           
                 {\n                                \"name\": \"byte\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"BYTE\"\n                            },\n                            {\n                                \"name\": \"bit\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"BIT\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of bits set to 1.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bitfield.json",
    "content": "{\n    \"BITFIELD\": {\n        \"summary\": \"Performs arbitrary bitfield integer operations on strings.\",\n        \"complexity\": \"O(1) for each subcommand specified\",\n        \"group\": \"bitmap\",\n        \"since\": \"3.2.0\",\n        \"arity\": -2,\n        \"function\": \"bitfieldCommand\",\n        \"get_keys_function\": \"bitfieldGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"This command allows both access and modification of the key\",\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\",\n                    \"ACCESS\",\n                    \"VARIABLE_FLAGS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"operation\",\n                \"type\": \"oneof\",\n                \"multiple\": true,\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"GET\",\n                        \"name\": \"get-block\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"encoding\",\n                                \"type\": \"string\"\n                         
   },\n                            {\n                                \"name\": \"offset\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"write\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"OVERFLOW\",\n                                \"name\": \"overflow-block\",\n                                \"type\": \"oneof\",\n                                \"optional\": true,\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"wrap\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"WRAP\"\n                                    },\n                                    {\n                                        \"name\": \"sat\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"SAT\"\n                                    },\n                                    {\n                                        \"name\": \"fail\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"FAIL\"\n                                    }\n                                ]\n                            },\n                            {\n                                \"name\": \"write-operation\",\n                                \"type\": \"oneof\",\n                                \"arguments\": [\n                                    {\n                                        \"token\": \"SET\",\n                                        \"name\": \"set-block\",\n                                        \"type\": \"block\",\n         
                               \"arguments\": [\n                                            {\n                                                \"name\": \"encoding\",\n                                                \"type\": \"string\"\n                                            },\n                                            {\n                                                \"name\": \"offset\",\n                                                \"type\": \"integer\"\n                                            },\n                                            {\n                                                \"name\": \"value\",\n                                                \"type\": \"integer\"\n                                            }\n                                        ]\n                                    },\n                                    {\n                                        \"token\": \"INCRBY\",\n                                        \"name\": \"incrby-block\",\n                                        \"type\": \"block\",\n                                        \"arguments\": [\n                                            {\n                                                \"name\": \"encoding\",\n                                                \"type\": \"string\"\n                                            },\n                                            {\n                                                \"name\": \"offset\",\n                                                \"type\": \"integer\"\n                                            },\n                                            {\n                                                \"name\": \"increment\",\n                                                \"type\": \"integer\"\n                                            }\n                                        ]\n                                    }\n                                ]\n                            }\n    
                    ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The result of the subcommand at the same position\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"description\": \"In case OVERFLOW FAIL was given and overflows or underflows detected\",\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bitfield_ro.json",
    "content": "{\n    \"BITFIELD_RO\": {\n        \"summary\": \"Performs arbitrary read-only bitfield integer operations on strings.\",\n        \"complexity\": \"O(1) for each subcommand specified\",\n        \"group\": \"bitmap\",\n        \"since\": \"6.0.0\",\n        \"arity\": -2,\n        \"function\": \"bitfieldroCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"token\": \"GET\",\n                \"name\": \"get-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"multiple_token\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"encoding\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"description\": \"The result of the subcommand at the same 
position\",\n                \"type\": \"integer\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bitop.json",
    "content": "{\n    \"BITOP\": {\n        \"summary\": \"Performs bitwise operations on multiple strings, and stores the result.\",\n        \"complexity\": \"O(N)\",\n        \"group\": \"bitmap\",\n        \"since\": \"2.6.0\",\n        \"arity\": -4,\n        \"function\": \"bitopCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 3\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"operation\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"and\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"AND\"\n                    },\n                    {\n                        \"name\": \"or\",\n                        \"type\": \"pure-token\",\n         
               \"token\": \"OR\"\n                    },\n                    {\n                        \"name\": \"xor\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XOR\"\n                    },\n                    {\n                        \"name\": \"not\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NOT\"\n                    },\n                    {\n                        \"name\": \"diff\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DIFF\"\n                    },\n                    {\n                        \"name\": \"diff1\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DIFF1\"\n                    },\n                    {\n                        \"name\": \"andor\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ANDOR\"\n                    },\n                    {\n                        \"name\": \"one\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ONE\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"destkey\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the size of the string stored in the destination key, that is equal to the size of the longest input string\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/bitpos.json",
    "content": "{\n    \"BITPOS\": {\n        \"summary\": \"Finds the first set (1) or clear (0) bit in a string.\",\n        \"complexity\": \"O(N)\",\n        \"group\": \"bitmap\",\n        \"since\": \"2.8.7\",\n        \"arity\": -3,\n        \"function\": \"bitposCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `BYTE|BIT` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"bit\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"range\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"start\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"end-unit-block\",\n                        \"type\": \"block\",\n                        \"optional\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"end\",\n            
                    \"type\": \"integer\"\n                            },\n                            {\n                                \"name\": \"unit\",\n                                \"type\": \"oneof\",\n                                \"optional\": true,\n                                \"since\": \"7.0.0\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"byte\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"BYTE\"\n                                    },\n                                    {\n                                        \"name\": \"bit\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"BIT\"\n                                    }\n                                ]\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"the position of the first bit set to 1 or 0 according to the request\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                },\n                {\n                    \"description\": \"In case the `bit` argument is 1 and the string is empty or composed of just zero bytes\",\n                    \"const\": -1\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/blmove.json",
    "content": "{\n    \"BLMOVE\": {\n        \"summary\": \"Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"list\",\n        \"since\": \"6.2.0\",\n        \"arity\": 6,\n        \"function\": \"blmoveCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The popped element.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Operation timed-out\",\n                    \"type\": 
\"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"wherefrom\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"whereto\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/blmpop.json",
    "content": "{\n    \"BLMPOP\": {\n        \"summary\": \"Pops the first element from one of multiple lists. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N+M) where N is the number of provided keys and M is the number of elements returned.\",\n        \"group\": \"list\",\n        \"since\": \"7.0.0\",\n        \"arity\": -5,\n        \"function\": \"blmpopCommand\",\n        \"get_keys_function\": \"blmpopGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Operation timed-out\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"The key from which elements were popped and the popped elements\",\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"description\": \"List key from which elements were popped.\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            
\"description\": \"Array of popped elements.\",\n                            \"type\": \"array\",\n                            \"minItems\": 1,\n                            \"items\": {\n                                \"type\": \"string\"\n                            }\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"where\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/blpop.json",
    "content": "{\n    \"BLPOP\": {\n        \"summary\": \"Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N) where N is the number of provided keys.\",\n        \"group\": \"list\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"blpopCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"`timeout` is interpreted as a double instead of an integer.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -2,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"No element could be popped and timeout expired\"\n                },\n                {\n                    \"description\": \"The key from which the element was popped and the value of the popped element\",\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"description\": \"List key from which the element was popped.\",\n                            
\"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Value of the popped element.\",\n                            \"type\": \"string\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/brpop.json",
    "content": "{\n    \"BRPOP\": {\n        \"summary\": \"Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N) where N is the number of provided keys.\",\n        \"group\": \"list\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"brpopCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"`timeout` is interpreted as a double instead of an integer.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -2,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"No element could be popped and the timeout expired.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n               
     \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"description\": \"The name of the key where an element was popped \",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"The value of the popped element\",\n                            \"type\": \"string\"\n                        }\n                    ]\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/brpoplpush.json",
    "content": "{\n    \"BRPOPLPUSH\": {\n        \"summary\": \"Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"list\",\n        \"since\": \"2.2.0\",\n        \"arity\": 4,\n        \"function\": \"brpoplpushCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"`timeout` is interpreted as a double instead of an integer.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`BLMOVE` with the `RIGHT` and `LEFT` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                
    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The element being popped from source and pushed to destination.\"\n                },\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Timeout is reached.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/bzmpop.json",
    "content": "{\n    \"BZMPOP\": {\n        \"summary\": \"Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.\",\n        \"complexity\": \"O(K) + O(M*log(N)) where K is the number of provided keys, N is the number of elements in the sorted set, and M is the number of elements popped.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"7.0.0\",\n        \"arity\": -5,\n        \"function\": \"bzmpopCommand\",\n        \"get_keys_function\": \"blmpopGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Timeout reached and no elements were popped.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"The keyname and the popped members.\",\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"description\": \"Keyname\",\n                            \"type\": \"string\"\n                        },\n       
                 {\n                            \"description\": \"Popped members and their scores.\",\n                            \"type\": \"array\",\n                            \"uniqueItems\": true,\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"description\": \"Member\",\n                                        \"type\": \"string\"\n                                    },\n                                    {\n                                        \"description\": \"Score\",\n                                        \"type\": \"number\"\n                                    }\n                                ]\n                            }\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"where\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MAX\"\n                    }\n                ]\n            },\n            
{\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/bzpopmax.json",
    "content": "{\n    \"BZPOPMAX\": {\n        \"summary\": \"Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.\",\n        \"complexity\": \"O(log(N)) with N being the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"function\": \"bzpopmaxCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"`timeout` is interpreted as a double instead of an integer.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -2,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Timeout reached and no elements were popped.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"The keyname, popped member, and its score.\",\n                    \"type\": \"array\",\n                    \"minItems\": 3,\n                    \"maxItems\": 3,\n                    \"items\": [\n                        {\n                            \"description\": \"Keyname\",\n     
                       \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Member\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Score\",\n                            \"type\": \"number\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/bzpopmin.json",
    "content": "{\n    \"BZPOPMIN\": {\n        \"summary\": \"Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.\",\n        \"complexity\": \"O(log(N)) with N being the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"function\": \"bzpopminCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"`timeout` is interpreted as a double instead of an integer.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\",\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -2,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Timeout reached and no elements were popped.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"The keyname, popped member, and its score.\",\n                    \"type\": \"array\",\n                    \"minItems\": 3,\n                    \"maxItems\": 3,\n                    \"items\": [\n                        {\n                            \"description\": \"Keyname\",\n    
                        \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Member\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Score\",\n                            \"type\": \"number\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-caching.json",
    "content": "{\n    \"CACHING\": {\n        \"summary\": \"Instructs the server whether to track the keys in the next request.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"mode\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"yes\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"YES\"\n                    },\n                    {\n                        \"name\": \"no\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NO\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-getname.json",
    "content": "{\n    \"GETNAME\": {\n        \"summary\": \"Returns the name of the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"2.6.9\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The connection name of the current connection\"\n                },\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Connection name was not set\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-getredir.json",
    "content": "{\n    \"GETREDIR\": {\n        \"summary\": \"Returns the client ID to which the connection's tracking notifications are redirected.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 0,\n                    \"description\": \"Not redirecting notifications to any client.\"\n                },\n                {\n                    \"const\": -1,\n                    \"description\": \"Client tracking is not enabled.\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"ID of the client we are redirecting the notifications to.\",\n                    \"minimum\": 1\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-id.json",
    "content": "{\n    \"ID\": {\n        \"summary\": \"Returns the unique client ID of the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The id of the client\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-info.json",
    "content": "{\n    \"INFO\": {\n        \"summary\": \"Returns information about the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"a unique string, as described at the CLIENT LIST page, for the current client\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-kill.json",
    "content": "{\n    \"KILL\": {\n        \"summary\": \"Terminates open connections.\",\n        \"complexity\": \"O(N) where N is the number of client connections\",\n        \"group\": \"connection\",\n        \"since\": \"2.4.0\",\n        \"arity\": -3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"history\": [\n            [\n                \"2.8.12\",\n                \"Added new filter format.\"\n            ],\n            [\n                \"2.8.12\",\n                \"`ID` option.\"\n            ],\n            [\n                \"3.2.0\",\n                \"Added `master` type for the `TYPE` option.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Replaced `slave` `TYPE` with `replica`. `slave` still supported for backward compatibility.\"\n            ],\n            [\n                \"6.2.0\",\n                \"`LADDR` option.\"\n            ],\n            [\n                \"7.4.0\",\n                \"`MAXAGE` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"filter\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"old-format\",\n                        \"display\": \"ip:port\",\n                        \"type\": \"string\",\n                        \"deprecated_since\": \"2.8.12\"\n                    },\n                    {\n                        \"name\": \"new-format\",\n                        \"type\": \"oneof\",\n                        \"multiple\": true,\n                        \"arguments\": [\n                            {\n                                \"token\": \"ID\",\n  
                              \"name\": \"client-id\",\n                                \"type\": \"integer\",\n                                \"optional\": true,\n                                \"since\": \"2.8.12\"\n                            },\n                            {\n                                \"token\": \"TYPE\",\n                                \"name\": \"client-type\",\n                                \"type\": \"oneof\",\n                                \"optional\": true,\n                                \"since\": \"2.8.12\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"normal\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"normal\"\n                                    },\n                                    {\n                                        \"name\": \"master\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"master\",\n                                        \"since\": \"3.2.0\"\n                                    },\n                                    {\n                                        \"name\": \"slave\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"slave\"\n                                    },\n                                    {\n                                        \"name\": \"replica\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"replica\",\n                                        \"since\": \"5.0.0\"\n                                    },\n                                    {\n                                        \"name\": \"pubsub\",\n                                        \"type\": \"pure-token\",\n       
                                 \"token\": \"pubsub\"\n                                    }\n                                ]\n                            },\n                            {\n                                \"token\": \"USER\",\n                                \"name\": \"username\",\n                                \"type\": \"string\",\n                                \"optional\": true\n                            },\n                            {\n                                \"token\": \"ADDR\",\n                                \"name\": \"addr\",\n                                \"display\": \"ip:port\",\n                                \"type\": \"string\",\n                                \"optional\": true\n                            },\n                            {\n                                \"token\": \"LADDR\",\n                                \"name\": \"laddr\",\n                                \"display\": \"ip:port\",\n                                \"type\": \"string\",\n                                \"optional\": true,\n                                \"since\": \"6.2.0\"\n                            },\n                            {\n                                \"token\": \"SKIPME\",\n                                \"name\": \"skipme\",\n                                \"type\": \"oneof\",\n                                \"optional\": true,\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"yes\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"YES\"\n                                    },\n                                    {\n                                        \"name\": \"no\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"NO\"\n                                    
}\n                                ]\n                            },\n                            {\n                                \"token\": \"MAXAGE\",\n                                \"name\": \"maxage\",\n                                \"type\": \"integer\",\n                                \"optional\": true,\n                                \"since\": \"7.4.0\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"when called in 3 argument format\",\n                    \"const\": \"OK\"\n                },\n                {\n                    \"description\": \"when called in filter/value format, the number of clients killed\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-list.json",
    "content": "{\n    \"LIST\": {\n        \"summary\": \"Lists open connections.\",\n        \"complexity\": \"O(N) where N is the number of client connections\",\n        \"group\": \"connection\",\n        \"since\": \"2.4.0\",\n        \"arity\": -2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"history\": [\n            [\n                \"2.8.12\",\n                \"Added unique client `id` field.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Added optional `TYPE` filter.\"\n            ],\n            [\n                \"6.0.0\",\n                \"Added `user` field.\"\n            ],\n            [\n                \"6.2.0\",\n                \"Added `argv-mem`, `tot-mem`, `laddr` and `redir` fields and the optional `ID` filter.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added `resp`, `multi-mem`, `rbs` and `rbp` fields.\"\n            ],\n            [\n                \"7.0.3\",\n                \"Added `ssub` field.\"\n            ],\n            [\n                \"7.2.0\",\n                \"Added `lib-name` and `lib-ver` fields.\"\n            ],\n            [\n                \"7.4.0\",\n                \"Added `watch` field.\"\n            ],\n            [\n                \"8.0.0\",\n                \"Added `io-thread` field.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"Information and statistics about client connections\"\n        },\n        \"arguments\": [\n            {\n                \"token\": \"TYPE\",\n                
\"name\": \"client-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"5.0.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"normal\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"normal\"\n                    },\n                    {\n                        \"name\": \"master\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"master\"\n                    },\n                    {\n                        \"name\": \"replica\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"replica\"\n                    },\n                    {\n                        \"name\": \"pubsub\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"pubsub\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"client-id\",\n                \"token\": \"ID\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"since\": \"6.2.0\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-no-evict.json",
    "content": "{\n    \"NO-EVICT\": {\n        \"summary\": \"Sets the client eviction mode of the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"7.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"enabled\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"on\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ON\"\n                    },\n                    {\n                        \"name\": \"off\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"OFF\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-no-touch.json",
    "content": "{\n    \"NO-TOUCH\": {\n        \"summary\": \"Controls whether commands sent by the client affect the LRU/LFU of accessed keys.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"7.2.0\",\n        \"arity\": 3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"enabled\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"on\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ON\"\n                    },\n                    {\n                        \"name\": \"off\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"OFF\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-pause.json",
    "content": "{\n    \"PAUSE\": {\n        \"summary\": \"Suspends commands processing.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"3.0.0\",\n        \"arity\": -3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"`CLIENT PAUSE WRITE` mode added along with the `mode` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"timeout\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"mode\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"write\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"WRITE\"\n                    },\n                    {\n                        \"name\": \"all\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ALL\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-reply.json",
    "content": "{\n    \"REPLY\": {\n        \"summary\": \"Instructs the server whether to reply to commands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"3.2.0\",\n        \"arity\": 3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\",\n            \"description\": \"When called with either OFF or SKIP subcommands, no reply is made. When called with ON, reply is OK.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"action\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"on\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ON\"\n                    },\n                    {\n                        \"name\": \"off\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"OFF\"\n                    },\n                    {\n                        \"name\": \"skip\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SKIP\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-setinfo.json",
    "content": "{\n    \"SETINFO\": {\n        \"summary\": \"Sets information specific to the client or connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"7.2.0\",\n        \"arity\": 4,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientSetinfoCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],        \n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"attr\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"token\": \"lib-name\",\n                        \"name\": \"libname\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"token\": \"lib-ver\",\n                        \"name\": \"libver\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-setname.json",
    "content": "{\n    \"SETNAME\": {\n        \"summary\": \"Sets the connection name.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"2.6.9\",\n        \"arity\": 3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],        \n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"connection-name\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-tracking.json",
    "content": "{\n    \"TRACKING\": {\n        \"summary\": \"Controls server-assisted client-side caching for the connection.\",\n        \"complexity\": \"O(1). Some options may introduce additional complexity.\",\n        \"group\": \"connection\",\n        \"since\": \"6.0.0\",\n        \"arity\": -3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"status\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"on\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ON\"\n                    },\n                    {\n                        \"name\": \"off\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"OFF\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"REDIRECT\",\n                \"name\": \"client-id\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"PREFIX\",\n                \"name\": \"prefix\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"multiple_token\": true\n            },\n            {\n                \"name\": \"BCAST\",\n                \"token\": \"BCAST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"OPTIN\",\n                \"token\": \"OPTIN\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n           
 {\n                \"name\": \"OPTOUT\",\n                \"token\": \"OPTOUT\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"NOLOOP\",\n                \"token\": \"NOLOOP\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"if the client was successfully put into or taken out of tracking mode\",\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-trackinginfo.json",
    "content": "{\n    \"TRACKINGINFO\": {\n        \"summary\": \"Returns information about server-assisted client-side caching for the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"additionalProperties\": false,\n            \"properties\": {\n                \"flags\": {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"oneOf\": [\n                            {\n                                \"const\": \"off\",\n                                \"description\": \"The connection isn't using server assisted client side caching.\"\n                            },\n                            {\n                                \"const\": \"on\",\n                                \"description\": \"Server assisted client side caching is enabled for the connection.\"\n                            },\n                            {\n                                \"const\": \"bcast\",\n                                \"description\": \"The client uses broadcasting mode.\"\n                            },\n                            {\n                                \"const\": \"optin\",\n                                \"description\": \"The client does not cache keys by default.\"\n                            },\n                            {\n                                \"const\": \"optout\",\n                                \"description\": \"The client caches keys by default.\"\n                            },\n                            {\n            
                    \"const\": \"caching-yes\",\n                                \"description\": \"The next command will cache keys (exists only together with optin).\"\n                            },\n                            {\n                                \"const\": \"caching-no\",\n                                \"description\": \"The next command won't cache keys (exists only together with optout).\"\n                            },\n                            {\n                                \"const\": \"noloop\",\n                                \"description\": \"The client isn't notified about keys modified by itself.\"\n                            },\n                            {\n                                \"const\": \"broken_redirect\",\n                                \"description\": \"The client ID used for redirection isn't valid anymore.\"\n                            }\n                        ]\n                    }\n                },\n                \"redirect\": {\n                    \"type\": \"integer\",\n                    \"description\": \"The client ID used for notifications redirection, or -1 when none.\"\n                },\n                \"prefixes\": {\n                    \"type\": \"array\",\n                    \"description\": \"List of key prefixes for which notifications are sent to the client.\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client-unblock.json",
    "content": "{\n    \"UNBLOCK\": {\n        \"summary\": \"Unblocks a client blocked by a blocking command from a different connection.\",\n        \"complexity\": \"O(log N) where N is the number of client connections\",\n        \"group\": \"connection\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 0,\n                    \"description\": \"if the client wasn't unblocked\"\n                },\n                {\n                    \"const\": 1,\n                    \"description\": \"if the client was unblocked successfully\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"client-id\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"unblock-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"timeout\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"TIMEOUT\"\n                    },\n                    {\n                        \"name\": \"error\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ERROR\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/client-unpause.json",
    "content": "{\n    \"UNPAUSE\": {\n        \"summary\": \"Resumes processing commands from paused clients.\",\n        \"complexity\": \"O(N) where N is the number of paused clients\",\n        \"group\": \"connection\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"CLIENT\",\n        \"function\": \"clientCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/client.json",
    "content": "{\n    \"CLIENT\": {\n        \"summary\": \"A container for client connection commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"connection\",\n        \"since\": \"2.4.0\",\n        \"arity\": -2,\n        \"command_flags\": [\n            \"SENTINEL\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-addslots.json",
    "content": "{\n    \"ADDSLOTS\": {\n        \"summary\": \"Assigns new hash slots to a node.\",\n        \"complexity\": \"O(N) where N is the total number of hash slot arguments\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"slot\",\n                \"type\": \"integer\",\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-addslotsrange.json",
    "content": "{\n    \"ADDSLOTSRANGE\": {\n        \"summary\": \"Assigns new hash slot ranges to a node.\",\n        \"complexity\": \"O(N) where N is the total number of the slots between the start slot and end slot arguments.\",\n        \"group\": \"cluster\",\n        \"since\": \"7.0.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"range\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"start-slot\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"end-slot\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-bumpepoch.json",
    "content": "{\n    \"BUMPEPOCH\": {\n        \"summary\": \"Advances the cluster config epoch.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"if the epoch was incremented\",\n                    \"type\": \"string\",\n                    \"pattern\": \"^BUMPED [0-9]*$\"\n                },\n                {\n                    \"description\": \"if the node already has the greatest config epoch in the cluster\",\n                    \"type\": \"string\",\n                    \"pattern\": \"^STILL [0-9]*$\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-count-failure-reports.json",
    "content": "{\n    \"COUNT-FAILURE-REPORTS\": {\n        \"summary\": \"Returns the number of active failure reports for a node.\",\n        \"complexity\": \"O(N) where N is the number of failure reports\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"node-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of active failure reports for the node\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-countkeysinslot.json",
    "content": "{\n    \"COUNTKEYSINSLOT\": {\n        \"summary\": \"Returns the number of keys in a hash slot.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"slot\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of keys in the specified hash slot\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-delslots.json",
    "content": "{\n    \"DELSLOTS\": {\n        \"summary\": \"Sets hash slots as unbound for a node.\",\n        \"complexity\": \"O(N) where N is the total number of hash slot arguments\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"slot\",\n                \"type\": \"integer\",\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-delslotsrange.json",
    "content": "{\n    \"DELSLOTSRANGE\": {\n        \"summary\": \"Sets hash slot ranges as unbound for a node.\",\n        \"complexity\": \"O(N) where N is the total number of the slots between the start slot and end slot arguments.\",\n        \"group\": \"cluster\",\n        \"since\": \"7.0.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"range\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"start-slot\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"end-slot\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-failover.json",
    "content": "{\n    \"FAILOVER\": {\n        \"summary\": \"Forces a replica to perform a manual failover of its master.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"options\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"force\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"FORCE\"\n                    },\n                    {\n                        \"name\": \"takeover\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"TAKEOVER\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-flushslots.json",
    "content": "{\n    \"FLUSHSLOTS\": {\n        \"summary\": \"Deletes all slots information from a node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-forget.json",
    "content": "{\n    \"FORGET\": {\n        \"summary\": \"Removes a node from the nodes table.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"node-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-getkeysinslot.json",
    "content": "{\n    \"GETKEYSINSLOT\": {\n        \"summary\": \"Returns the key names in a hash slot.\",\n        \"complexity\": \"O(N) where N is the number of requested keys\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"slot\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"an array with up to count elements\",\n            \"type\": \"array\",\n            \"items\": {\n                \"description\": \"key name\",\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-info.json",
    "content": "{\n    \"INFO\": {\n        \"summary\": \"Returns information about the state of a node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"A map between named fields and values in the form of <field>:<value> lines separated by newlines composed by the two bytes CRLF\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-keyslot.json",
    "content": "{\n    \"KEYSLOT\": {\n        \"summary\": \"Returns the hash slot for a key.\",\n        \"complexity\": \"O(N) where N is the number of bytes in the key\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The hash slot number for the specified key\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-links.json",
    "content": "{\n    \"LINKS\": {\n        \"summary\": \"Returns a list of all TCP links to and from peer nodes.\",\n        \"complexity\": \"O(N) where N is the total number of Cluster nodes\",\n        \"group\": \"cluster\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"an array of cluster links and their attributes\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"object\",\n                \"properties\": {\n                    \"direction\": {\n                        \"description\": \"This link is established by the local node _to_ the peer, or accepted by the local node _from_ the peer.\",\n                        \"oneOf\": [\n                            {\n                                \"description\": \"connection initiated from peer\",\n                                \"const\": \"from\"\n                            },\n                            {\n                                \"description\": \"connection initiated to peer\",\n                                \"const\": \"to\"\n                            }\n                        ]\n                    },\n                    \"node\": {\n                        \"description\": \"the node id of the peer\",\n                        \"type\": \"string\"\n                    },\n                    \"create-time\": {\n                        \"description\": \"unix time creation time of the link. 
(In the case of a _to_ link, this is the time when the TCP link is created by the local node, not the time when it is actually established.)\",\n                        \"type\": \"integer\"\n                    },\n                    \"events\": {\n                        \"description\": \"events currently registered for the link. r means readable event, w means writable event\",\n                        \"type\": \"string\"\n                    },\n                    \"send-buffer-allocated\": {\n                        \"description\": \"allocated size of the link's send buffer, which is used to buffer outgoing messages toward the peer\",\n                        \"type\": \"integer\"\n                    },\n                    \"send-buffer-used\": {\n                        \"description\": \"size of the portion of the link's send buffer that is currently holding data(messages)\",\n                        \"type\": \"integer\"\n                    }\n                },\n                \"additionalProperties\": false\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-meet.json",
    "content": "{\n    \"MEET\": {\n        \"summary\": \"Forces a node to handshake with another node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Added the optional `cluster_bus_port` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"ip\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"port\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"cluster-bus-port\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"4.0.0\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-migration.json",
    "content": "{\n    \"MIGRATION\": {\n        \"summary\": \"Start, monitor and cancel slot migration.\",\n        \"complexity\": \"O(N) where N is the total number of the slots between the start slot and end slot arguments.\",\n        \"group\": \"cluster\",\n        \"since\": \"8.4.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"subcommand\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"import\",\n                        \"token\": \"IMPORT\",\n                        \"type\": \"block\",\n                        \"multiple\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"start-slot\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"name\": \"end-slot\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"cancel\",\n                        \"token\": \"CANCEL\",\n                        \"type\": \"oneof\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"ID\",\n                                \"name\": \"task-id\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"all\",\n                                \"token\": \"ALL\",\n                                \"type\": \"pure-token\"\n                            }\n          
              ]\n                    },\n                    {\n                        \"name\": \"status\",\n                        \"token\": \"STATUS\",\n                        \"type\": \"oneof\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"ID\",\n                                \"name\": \"task-id\",\n                                \"type\": \"string\",\n                                \"optional\": true\n                            },\n                            {\n                                \"name\": \"all\",\n                                \"token\": \"ALL\",\n                                \"type\": \"pure-token\",\n                                \"optional\": true\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Reply to CLUSTER MIGRATION IMPORT, returns the task ID.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Reply to CLUSTER MIGRATION CANCEL, number of cancelled migration operations.\",\n                    \"type\": \"integer\"\n                },\n                {\n                    \"description\": \"Reply to CLUSTER MIGRATION STATUS, array of migration operation details.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"additionalProperties\": false,\n                        \"properties\": {\n                            \"id\": {\n                                \"type\": \"string\"\n                            },\n                            \"slots\": {\n                                \"type\": \"string\"\n                            },\n                            \"source\": {\n         
                       \"type\": \"string\"\n                            },\n                            \"dest\": {\n                                \"type\": \"string\"\n                            },\n                            \"operation\": {\n                                \"oneOf\": [\n                                    {\n                                        \"const\": \"import\"\n                                    },\n                                    {\n                                        \"const\": \"migrate\"\n                                    }\n                                ]\n                            },\n                            \"state\": {\n                                \"type\": \"string\"\n                            },\n                            \"last_error\": {\n                                \"type\": \"string\"\n                            },\n                            \"retries\": {\n                                \"type\": \"integer\"\n                            },\n                            \"create_time\": {\n                                \"type\": \"integer\"\n                            },\n                            \"start_time\": {\n                                \"type\": \"integer\"\n                            },\n                            \"end_time\": {\n                                \"type\": \"integer\"\n                            },\n                            \"write_pause_ms\": {\n                                \"type\": \"integer\"\n                            }\n                        }\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-myid.json",
    "content": "{\n    \"MYID\": {\n        \"summary\": \"Returns the ID of a node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"the node id\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-myshardid.json",
    "content": "{\n    \"MYSHARDID\": {\n        \"summary\": \"Returns the shard ID of a node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"7.2.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"the node's shard id\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-nodes.json",
    "content": "{\n    \"NODES\": {\n        \"summary\": \"Returns the cluster configuration for a node.\",\n        \"complexity\": \"O(N) where N is the total number of Cluster nodes\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"the serialized cluster configuration\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-replicas.json",
    "content": "{\n    \"REPLICAS\": {\n        \"summary\": \"Lists the replica nodes of a master node.\",\n        \"complexity\": \"O(N) where N is the number of replicas.\",\n        \"group\": \"cluster\",\n        \"since\": \"5.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"node-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"a list of replica nodes replicating from the specified master node provided in the same format used by CLUSTER NODES\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\",\n                \"description\": \"the serialized cluster configuration\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-replicate.json",
    "content": "{\n    \"REPLICATE\": {\n        \"summary\": \"Configure a node as replica of a master node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"node-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Resets a node.\",\n        \"complexity\": \"O(N) where N is the number of known nodes. The command may execute a FLUSHALL as a side effect.\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"STALE\",\n            \"NOSCRIPT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"reset-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"hard\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"HARD\"\n                    },\n                    {\n                        \"name\": \"soft\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SOFT\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-saveconfig.json",
    "content": "{\n    \"SAVECONFIG\": {\n        \"summary\": \"Forces a node to save the cluster configuration to disk.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-set-config-epoch.json",
    "content": "{\n    \"SET-CONFIG-EPOCH\": {\n        \"summary\": \"Sets the configuration epoch for a new node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"config-epoch\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-setslot.json",
    "content": "{\n    \"SETSLOT\": {\n        \"summary\": \"Binds a hash slot to a node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"slot\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"subcommand\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"importing\",\n                        \"display\": \"node-id\",\n                        \"type\": \"string\",\n                        \"token\": \"IMPORTING\"\n                    },\n                    {\n                        \"name\": \"migrating\",\n                        \"display\": \"node-id\",\n                        \"type\": \"string\",\n                        \"token\": \"MIGRATING\"\n                    },\n                    {\n                        \"name\": \"node\",\n                        \"display\": \"node-id\",\n                        \"type\": \"string\",\n                        \"token\": \"NODE\"\n                    },\n                    {\n                        \"name\": \"stable\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"STABLE\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-shards.json",
    "content": "{\n    \"SHARDS\": {\n        \"summary\": \"Returns the mapping of cluster slots to shards.\",\n        \"complexity\": \"O(N) where N is the total number of cluster nodes\",\n        \"group\": \"cluster\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"a nested list of a map of hash ranges and shard nodes describing individual shards\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"slots\": {\n                        \"description\": \"an even number element array specifying the start and end slot numbers for slot ranges owned by this shard\",\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"integer\"\n                        }\n                    },\n                    \"nodes\": {\n                        \"description\": \"nodes that handle these slot ranges\",\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"object\",\n                            \"additionalProperties\": false,\n                            \"properties\": {\n                                \"id\": {\n                                    \"type\": \"string\"\n                                },\n                                \"port\": {\n                                    \"type\": \"integer\"\n                                },\n                                \"tls-port\": {\n                                    \"type\": 
\"integer\"\n                                },\n                                \"ip\": {\n                                    \"type\": \"string\"\n                                },\n                                \"endpoint\": {\n                                    \"type\": \"string\"\n                                },\n                                \"hostname\": {\n                                    \"type\": \"string\"\n                                },\n                                \"role\": {\n                                    \"oneOf\": [\n                                        {\n                                            \"const\": \"master\"\n                                        },\n                                        {\n                                            \"const\": \"replica\"\n                                        }\n                                    ]\n                                },\n                                \"replication-offset\": {\n                                    \"type\": \"integer\"\n                                },\n                                \"health\": {\n                                    \"oneOf\": [\n                                        {\n                                            \"const\": \"fail\"\n                                        },\n                                        {\n                                            \"const\": \"loading\"\n                                        },\n                                        {\n                                            \"const\": \"online\"\n                                        }\n                                    ]\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-slaves.json",
    "content": "{\n    \"SLAVES\": {\n        \"summary\": \"Lists the replica nodes of a master node.\",\n        \"complexity\": \"O(N) where N is the number of replicas.\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"deprecated_since\": \"5.0.0\",\n        \"replaced_by\": \"`CLUSTER REPLICAS`\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"node-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"a list of replica nodes replicating from the specified master node provided in the same format used by CLUSTER NODES\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\",\n                \"description\": \"the serialized cluster configuration\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-slot-stats.json",
    "content": "{\n    \"SLOT-STATS\": {\n        \"summary\": \"Return an array of slot usage statistics for slots assigned to the current node.\",\n        \"complexity\": \"O(N) where N is the total number of slots based on arguments. O(N*log(N)) with ORDERBY subcommand.\",\n        \"group\": \"cluster\",\n        \"since\": \"8.2.0\",\n        \"arity\": -4,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterSlotStatsCommand\",\n        \"command_flags\": [\n            \"STALE\",\n            \"LOADING\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Array of nested arrays, where the inner array element represents a slot and its respective usage statistics.\",\n            \"items\": {\n                \"type\": \"array\",\n                \"description\": \"Array of size 2, where 0th index represents (int) slot and 1st index represents (map) usage statistics.\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"Slot Number.\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"type\": \"object\",\n                        \"description\": \"Map of slot usage statistics.\",\n                        \"additionalProperties\": false,\n                        \"properties\": {\n                            \"key-count\": {\n                                \"type\": \"integer\"\n                            },\n                            \"memory-bytes\": {\n                                \"type\": \"integer\"\n                            },\n                            \"cpu-usec\": {\n                                \"type\": \"integer\"\n                            },\n            
                \"network-bytes-in\": {\n                                \"type\": \"integer\"\n                            },\n                            \"network-bytes-out\": {\n                                \"type\": \"integer\"\n                            }\n                        }\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"filter\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"token\": \"SLOTSRANGE\",\n                        \"name\": \"slotsrange\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"start-slot\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"name\": \"end-slot\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"token\": \"ORDERBY\",\n                        \"name\": \"orderby\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"metric\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"token\": \"LIMIT\",\n                                \"name\": \"limit\",\n                                \"type\": \"integer\",\n                                \"optional\": true\n                            },\n                            {\n                                \"name\": \"order\",\n                                \"type\": \"oneof\",\n                                \"optional\": true,\n                
                \"arguments\": [\n                                    {\n                                        \"name\": \"asc\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"ASC\"\n                                    },\n                                    {\n                                        \"name\": \"desc\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"DESC\"\n                                    }\n                                ]\n                            }\n                        ]\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-slots.json",
    "content": "{\n    \"SLOTS\": {\n        \"summary\": \"Returns the mapping of cluster slots to nodes.\",\n        \"complexity\": \"O(N) where N is the total number of Cluster nodes\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 2,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"deprecated_since\": \"7.0.0\",\n        \"replaced_by\": \"`CLUSTER SHARDS`\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Added node IDs.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added additional networking metadata field.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"nested list of slot ranges with networking information\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 3,\n                \"maxItems\": 4294967295,\n                \"items\": [\n                    {\n                        \"description\": \"start slot number\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"description\": \"end slot number\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"type\": \"array\",\n                        \"description\": \"Master node for the slot range\",\n                        \"minItems\": 4,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"endpoint description\",\n                                
\"oneOf\": [\n                                    {\n                                        \"description\": \"hostname or ip\",\n                                        \"type\": \"string\"\n                                    },\n                                    {\n                                        \"description\": \"unknown type\",\n                                        \"type\": \"null\"\n                                    }\n                                ]\n                            },\n                            {\n                                \"description\": \"port\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"description\": \"node name\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"array of node descriptions\",\n                                \"type\": \"object\",\n                                \"additionalProperties\": false,\n                                \"properties\": {\n                                    \"hostname\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"ip\": {\n                                        \"type\": \"string\"\n                                    }\n                                }\n                            }\n                        ]\n                    }\n                ],\n                \"additionalItems\": {\n                    \"type\": \"array\",\n                    \"description\": \"Replica node for the slot range\",\n                    \"minItems\": 4,\n                    \"maxItems\": 4,\n                    \"items\": [\n                        {\n                            \"description\": \"endpoint description\",\n                            
\"oneOf\": [\n                                {\n                                    \"description\": \"hostname or ip\",\n                                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"unknown type\",\n                                    \"type\": \"null\"\n                                }\n                            ]\n                        },\n                        {\n                            \"description\": \"port\",\n                            \"type\": \"integer\"\n                        },\n                        {\n                            \"description\": \"node name\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"array of node descriptions\",\n                            \"type\": \"object\",\n                            \"additionalProperties\": false,\n                            \"properties\": {\n                                \"hostname\": {\n                                    \"type\": \"string\"\n                                },\n                                \"ip\": {\n                                    \"type\": \"string\"\n                                }\n                            }\n                        }\n                    ]\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster-syncslots.json",
    "content": "{\n    \"SYNCSLOTS\": {\n        \"summary\": \"Internal command for atomic slot migration protocol between cluster nodes.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"8.4.0\",\n        \"arity\": -3,\n        \"container\": \"CLUSTER\",\n        \"function\": \"clusterCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"subcommand\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"sync\",\n                        \"token\": \"SYNC\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"task-id\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"slot-range\",\n                                \"type\": \"block\",\n                                \"multiple\": true,\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"start-slot\",\n                                        \"type\": \"integer\"\n                                    },\n                                    {\n                                        \"name\": \"end-slot\",\n                                        \"type\": \"integer\"\n                                    }\n                                ]\n                            }\n                        ]\n                    },\n                    {\n                        \"token\": \"RDBCHANNEL\",\n                        \"name\": \"task-id\",\n               
         \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"snapshot-eof\",\n                        \"token\": \"SNAPSHOT-EOF\",\n                        \"type\": \"pure-token\"\n                    },\n                    {\n                        \"name\": \"stream-eof\",\n                        \"token\": \"STREAM-EOF\",\n                        \"type\": \"pure-token\"\n                    },\n                    {\n                        \"name\": \"ack\",\n                        \"token\": \"ACK\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"state\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"offset\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"token\": \"FAIL\",\n                        \"name\": \"error\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"conf\",\n                        \"token\": \"CONF\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"option\",\n                                \"type\": \"string\",\n                                \"multiple\": true\n                            },\n                            {\n                                \"name\": \"value\",\n                                \"type\": \"string\",\n                                \"multiple\": true\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        
\"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Reply to CLUSTER SYNCSLOTS SYNC, returns special RDB channel sync response.\",\n                    \"const\": \"RDBCHANNELSYNCSLOTS\"\n                },\n                {\n                    \"description\": \"Reply to CLUSTER SYNCSLOTS CONF and other subcommands.\",\n                    \"const\": \"OK\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/cluster.json",
    "content": "{\n    \"CLUSTER\": {\n        \"summary\": \"A container for Redis Cluster commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/command-count.json",
    "content": "{\n    \"COUNT\": {\n        \"summary\": \"Returns a count of commands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 2,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandCountCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of total commands in this Redis server.\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/command-docs.json",
    "content": "{\n    \"DOCS\": {\n        \"summary\": \"Returns documentary information about one, multiple or all commands.\",\n        \"complexity\": \"O(N) where N is the number of commands to look up\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandDocsCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"A map where each key is a command name, and each value is the documentary information\",\n            \"type\": \"object\",\n            \"additionalProperties\": false,\n            \"patternProperties\": {\n                \"^.*$\": {\n                    \"type\": \"object\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"summary\": {\n                            \"description\": \"short command description\",\n                            \"type\": \"string\"\n                        },\n                        \"since\": {\n                            \"description\": \"the Redis version that added the command (or for module commands, the module version).\",\n                            \"type\": \"string\"\n                        },\n                        \"group\": {\n                            \"description\": \"the functional group to which the command belongs\",\n                            \"oneOf\": [\n                                {\n                                    \"const\": \"bitmap\"\n                                },\n                                {\n                                    \"const\": \"cluster\"\n                                },\n               
                 {\n                                    \"const\": \"connection\"\n                                },\n                                {\n                                    \"const\": \"generic\"\n                                },\n                                {\n                                    \"const\": \"geo\"\n                                },\n                                {\n                                    \"const\": \"hash\"\n                                },\n                                {\n                                    \"const\": \"hyperloglog\"\n                                },\n                                {\n                                    \"const\": \"list\"\n                                },\n                                {\n                                    \"const\": \"module\"\n                                },\n                                {\n                                    \"const\": \"pubsub\"\n                                },\n                                {\n                                    \"const\": \"scripting\"\n                                },\n                                {\n                                    \"const\": \"sentinel\"\n                                },\n                                {\n                                    \"const\": \"server\"\n                                },\n                                {\n                                    \"const\": \"set\"\n                                },\n                                {\n                                    \"const\": \"sorted-set\"\n                                },\n                                {\n                                    \"const\": \"stream\"\n                                },\n                                {\n                                    \"const\": \"string\"\n                                },\n                                {\n                     
               \"const\": \"transactions\"\n                                },\n                                {\n                                    \"const\": \"rate_limit\"\n                                }\n                            ]\n                        },\n                        \"complexity\": {\n                            \"description\": \"a short explanation about the command's time complexity.\",\n                            \"type\": \"string\"\n                        },\n                        \"module\": {\n                            \"type\": \"string\"\n                        },\n                        \"doc_flags\": {\n                            \"description\": \"an array of documentation flags\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"oneOf\": [\n                                    {\n                                        \"description\": \"the command is deprecated.\",\n                                        \"const\": \"deprecated\"\n                                    },\n                                    {\n                                        \"description\": \"a system command that isn't meant to be called by users.\",\n                                        \"const\": \"syscmd\"\n                                    }\n                                ]\n                            }\n                        },\n                        \"deprecated_since\": {\n                            \"description\": \"the Redis version that deprecated the command (or for module commands, the module version)\",\n                            \"type\": \"string\"\n                        },\n                        \"replaced_by\": {\n                            \"description\": \"the alternative for a deprecated command.\",\n                            \"type\": \"string\"\n                        },\n                        \"history\": {\n    
                        \"description\": \"an array of historical notes describing changes to the command's behavior or arguments.\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"type\": \"string\",\n                                        \"description\": \"The Redis version that the entry applies to.\"\n                                    },\n                                    {\n                                        \"type\": \"string\",\n                                        \"description\": \"The description of the change.\"\n                                    }\n                                ]\n                            }\n                        },\n                        \"arguments\": {\n                            \"description\": \"an array of maps that describe the command's arguments.\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"object\",\n                                \"additionalProperties\": false,\n                                \"properties\": {\n                                    \"name\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"type\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"display_text\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"key_spec_index\": {\n                                        \"type\": 
\"integer\"\n                                    },\n                                    \"token\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"summary\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"since\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"deprecated_since\": {\n                                        \"type\": \"string\"\n                                    },\n                                    \"flags\": {\n                                        \"type\": \"array\",\n                                        \"items\": {\n                                            \"type\": \"string\"\n                                        }\n                                    },\n                                    \"arguments\": {\n                                        \"type\": \"array\"\n                                    }\n                                }\n                            }\n                        },\n                        \"reply_schema\": {\n                            \"description\": \"command reply schema\",\n                            \"type\": \"object\"\n                        },\n                        \"subcommands\": {\n                            \"description\": \"A map where each key is a subcommand, and each value is the documentary information\",\n                            \"$ref\": \"#\"\n                        }\n                    }\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"command-name\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/command-getkeys.json",
    "content": "{\n    \"GETKEYS\": {\n        \"summary\": \"Extracts the key names from an arbitrary command.\",\n        \"complexity\": \"O(N) where N is the number of arguments to the command\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": -3,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandGetKeysCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of keys from the given Redis command.\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\"\n            },\n            \"uniqueItems\": true\n        },\n        \"arguments\": [\n            {\n                \"name\": \"command\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/command-getkeysandflags.json",
    "content": "{\n    \"GETKEYSANDFLAGS\": {\n        \"summary\": \"Extracts the key names and access flags for an arbitrary command.\",\n        \"complexity\": \"O(N) where N is the number of arguments to the command\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandGetKeysAndFlagsCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of keys from the given Redis command and their usage flags.\",\n            \"type\": \"array\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"Key name\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"description\": \"Set of key flags\",\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"command\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/command-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandHelpCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/command-info.json",
    "content": "{\n    \"INFO\": {\n        \"summary\": \"Returns information about one, multiple or all commands.\",\n        \"complexity\": \"O(N) where N is the number of commands to look up\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": -2,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandInfoCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Allowed to be called with no argument to get info on all commands.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"command-name\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"command does not exist\",\n                        \"type\": \"null\"\n                    },\n                    {\n                        \"description\": \"command info array output\",\n                        \"type\": \"array\",\n                        \"minItems\": 10,\n                        \"maxItems\": 10,\n                        \"items\": [\n                            {\n                                \"description\": \"command name\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"command arity\",\n                                \"type\": \"integer\"\n                            },\n                          
  {\n                                \"description\": \"command flags\",\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"description\": \"command flag\",\n                                    \"type\": \"string\"\n                                }\n                            },\n                            {\n                                \"description\": \"command first key index\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"description\": \"command last key index\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"description\": \"command key step index\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"description\": \"command categories\",\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"description\": \"command category\",\n                                    \"type\": \"string\"\n                                }\n                            },\n                            {\n                                \"description\": \"command tips\",\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"description\": \"command tip\",\n                                    \"type\": \"string\"\n                                }\n                            },\n                            {\n                                \"description\": \"command key specs\",\n                                \"type\": \"array\",\n                                \"items\": {\n                                   
 \"type\": \"object\",\n                                    \"additionalProperties\": false,\n                                    \"properties\": {\n                                        \"notes\": {\n                                            \"type\": \"string\"\n                                        },\n                                        \"flags\": {\n                                            \"type\": \"array\",\n                                            \"items\": {\n                                                \"type\": \"string\"\n                                            }\n                                        },\n                                        \"begin_search\": {\n                                            \"type\": \"object\",\n                                            \"additionalProperties\": false,\n                                            \"properties\": {\n                                                \"type\": {\n                                                    \"type\": \"string\"\n                                                },\n                                                \"spec\": {\n                                                    \"anyOf\": [\n                                                        {\n                                                            \"description\": \"unknown type, empty map\",\n                                                            \"type\": \"object\",\n                                                            \"additionalProperties\": false\n                                                        },\n                                                        {\n                                                            \"description\": \"index type\",\n                                                            \"type\": \"object\",\n                                                            \"additionalProperties\": false,\n                                 
                           \"properties\": {\n                                                                \"index\": {\n                                                                    \"type\": \"integer\"\n                                                                }\n                                                            }\n                                                        },\n                                                        {\n                                                            \"description\": \"keyword type\",\n                                                            \"type\": \"object\",\n                                                            \"additionalProperties\": false,\n                                                            \"properties\": {\n                                                                \"keyword\": {\n                                                                    \"type\": \"string\"\n                                                                },\n                                                                \"startfrom\": {\n                                                                    \"type\": \"integer\"\n                                                                }\n                                                            }\n                                                        }\n                                                    ]\n                                                }\n                                            }\n                                        },\n                                        \"find_keys\": {\n                                            \"type\": \"object\",\n                                            \"additionalProperties\": false,\n                                            \"properties\": {\n                                                \"type\": {\n                                                    
\"type\": \"string\"\n                                                },\n                                                \"spec\": {\n                                                    \"anyOf\": [\n                                                        {\n                                                            \"description\": \"unknown type\",\n                                                            \"type\": \"object\",\n                                                            \"additionalProperties\": false\n                                                        },\n                                                        {\n                                                            \"description\": \"range type\",\n                                                            \"type\": \"object\",\n                                                            \"additionalProperties\": false,\n                                                            \"properties\": {\n                                                                \"lastkey\": {\n                                                                    \"type\": \"integer\"\n                                                                },\n                                                                \"keystep\": {\n                                                                    \"type\": \"integer\"\n                                                                },\n                                                                \"limit\": {\n                                                                    \"type\": \"integer\"\n                                                                }\n                                                            }\n                                                        },\n                                                        {\n                                                            \"description\": \"keynum type\",\n        
                                                    \"type\": \"object\",\n                                                            \"additionalProperties\": false,\n                                                            \"properties\": {\n                                                                \"keynumidx\": {\n                                                                    \"type\": \"integer\"\n                                                                },\n                                                                \"firstkey\": {\n                                                                    \"type\": \"integer\"\n                                                                },\n                                                                \"keystep\": {\n                                                                    \"type\": \"integer\"\n                                                                }\n                                                            }\n                                                        }\n                                                    ]\n                                                }\n                                            }\n                                        }\n                                    }\n                                }\n                            },\n                            {\n                                \"type\": \"array\",\n                                \"description\": \"subcommands\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/command-list.json",
    "content": "{\n    \"LIST\": {\n        \"summary\": \"Returns a list of command names.\",\n        \"complexity\": \"O(N) where N is the total number of Redis commands\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"COMMAND\",\n        \"function\": \"commandListCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"filterby\",\n                \"token\": \"FILTERBY\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"module-name\",\n                        \"type\": \"string\",\n                        \"token\": \"MODULE\"\n                    },\n                    {\n                        \"name\": \"category\",\n                        \"type\": \"string\",\n                        \"token\": \"ACLCAT\"\n                    },\n                    {\n                        \"name\": \"pattern\",\n                        \"type\": \"pattern\",\n                        \"token\": \"PATTERN\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"description\": \"command name\",\n                \"type\": \"string\"\n            },\n            \"uniqueItems\": true\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/command.json",
    "content": "{\n    \"COMMAND\": {\n        \"summary\": \"Returns detailed information about all commands.\",\n        \"complexity\": \"O(N) where N is the total number of Redis commands\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": -1,\n        \"function\": \"commandCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/config-get.json",
    "content": "{\n    \"GET\": {\n        \"summary\": \"Returns the effective values of configuration parameters.\",\n        \"complexity\": \"O(N) where N is the number of configuration parameters provided\",\n        \"group\": \"server\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"container\": \"CONFIG\",\n        \"function\": \"configGetCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the ability to pass multiple pattern parameters in one call\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"additionalProperties\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"parameter\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/config-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"CONFIG\",\n        \"function\": \"configHelpCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/config-resetstat.json",
    "content": "{\n    \"RESETSTAT\": {\n        \"summary\": \"Resets the server's statistics.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.0.0\",\n        \"arity\": 2,\n        \"container\": \"CONFIG\",\n        \"function\": \"configResetStatCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/config-rewrite.json",
    "content": "{\n    \"REWRITE\": {\n        \"summary\": \"Persists the effective configuration to file.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.0\",\n        \"arity\": 2,\n        \"container\": \"CONFIG\",\n        \"function\": \"configRewriteCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n          \"REQUEST_POLICY:ALL_NODES\",\n          \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/config-set.json",
    "content": "{\n    \"SET\": {\n        \"summary\": \"Sets configuration parameters in-flight.\",\n        \"complexity\": \"O(N) where N is the number of configuration parameters provided\",\n        \"group\": \"server\",\n        \"since\": \"2.0.0\",\n        \"arity\": -4,\n        \"container\": \"CONFIG\",\n        \"function\": \"configSetCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the ability to set multiple parameters in one call.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"parameter\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/config.json",
    "content": "{\n    \"CONFIG\": {\n        \"summary\": \"A container for server configuration commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"2.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/copy.json",
    "content": "{\n    \"COPY\": {\n        \"summary\": \"Copies the value of a key to a new key.\",\n        \"complexity\": \"O(N) worst case for collections, where N is the number of nested items. O(1) for string values.\",\n        \"group\": \"generic\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"copyCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"token\": \"DB\",\n                \"name\": 
\"destination-db\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"replace\",\n                \"token\": \"REPLACE\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"source was copied\",\n                    \"const\": 1\n                },\n                {\n                    \"description\": \"source was not copied\",\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/dbsize.json",
    "content": "{\n    \"DBSIZE\": {\n        \"summary\": \"Returns the number of keys in the database.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"dbsizeCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The number of keys in the currently-selected database.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/debug.json",
    "content": "{\n    \"DEBUG\": {\n        \"summary\": \"A container for debugging commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"debugCommand\",\n        \"doc_flags\": [\n            \"SYSCMD\"\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"PROTECTED\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/decr.json",
    "content": "{\n    \"DECR\": {\n        \"summary\": \"Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"decrCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The value of the key after decrementing it.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/decrby.json",
    "content": "{\n    \"DECRBY\": {\n        \"summary\": \"Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"decrbyCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The value of the key after decrementing it.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"decrement\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/del.json",
    "content": "{\n    \"DEL\": {\n        \"summary\": \"Deletes one or more keys.\",\n        \"complexity\": \"O(N) where N is the number of keys that will be removed. When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1).\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"delCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RM\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of keys that were removed\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/delex.json",
    "content": "{\n    \"DELEX\": {\n        \"summary\": \"Conditionally removes the specified key based on value or digest comparison.\",\n        \"complexity\": \"O(1) for IFEQ/IFNE, O(N) for IFDEQ/IFDNE where N is the length of the string value.\",\n        \"group\": \"string\",\n        \"since\": \"8.4.0\",\n        \"arity\": -2,\n        \"function\": \"delexCommand\",\n        \"get_keys_function\": \"delexGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\",\n                    \"VARIABLE_FLAGS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The key exists but holds a non-string value\",\n                    \"const\": \"WRONGTYPE\"\n                },\n                {\n                    \"description\": \"The key does not exist or the specified condition was not met.\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"The key was deleted.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"condition\",\n                
\"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"ifeq-value\",\n                        \"type\": \"string\",\n                        \"token\": \"IFEQ\"\n                    },\n                    {\n                        \"name\": \"ifne-value\",\n                        \"type\": \"string\",\n                        \"token\": \"IFNE\"\n                    },\n                    {\n                        \"name\": \"ifdeq-digest\",\n                        \"type\": \"integer\",\n                        \"token\": \"IFDEQ\"\n                    },\n                    {\n                        \"name\": \"ifdne-digest\",\n                        \"type\": \"integer\",\n                        \"token\": \"IFDNE\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/digest.json",
    "content": "{\n    \"DIGEST\": {\n        \"summary\": \"Returns the XXH3 hash of a string value.\",\n        \"complexity\": \"O(N) where N is the length of the string value.\",\n        \"group\": \"string\",\n        \"since\": \"8.4.0\",\n        \"arity\": 2,\n        \"function\": \"digestCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The XXH3 64-bit hash of the string value as a signed integer.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Key does not exist\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/discard.json",
    "content": "{\n    \"DISCARD\": {\n        \"summary\": \"Discards a transaction.\",\n        \"complexity\": \"O(N), where N is the number of queued commands\",\n        \"group\": \"transactions\",\n        \"since\": \"2.0.0\",\n        \"arity\": 1,\n        \"function\": \"discardCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"TRANSACTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/dump.json",
    "content": "{\n    \"DUMP\": {\n        \"summary\": \"Returns a serialized representation of the value stored at a key.\",\n        \"complexity\": \"O(1) to access the key and additional O(N*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": 2,\n        \"function\": \"dumpCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The serialized value.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/echo.json",
    "content": "{\n    \"ECHO\": {\n        \"summary\": \"Returns the given string.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"echoCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"The given string\",\n            \"type\": \"string\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"message\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/eval.json",
    "content": "{\n    \"EVAL\": {\n        \"summary\": \"Executes a server-side Lua script.\",\n        \"complexity\": \"Depends on the script that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": -3,\n        \"function\": \"evalCommand\",\n        \"get_keys_function\": \"evalGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"MAY_REPLICATE\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"We cannot tell how the keys will be used so we assume the worst, RW and UPDATE\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"script\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the script that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/eval_ro.json",
    "content": "{\n    \"EVAL_RO\": {\n        \"summary\": \"Executes a read-only server-side Lua script.\",\n        \"complexity\": \"Depends on the script that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"evalRoCommand\",\n        \"get_keys_function\": \"evalGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\",\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"We cannot tell how the keys will be used so we assume the worst, RO and ACCESS\",\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"script\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the script that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/evalsha.json",
    "content": "{\n    \"EVALSHA\": {\n        \"summary\": \"Executes a server-side Lua script by SHA1 digest.\",\n        \"complexity\": \"Depends on the script that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": -3,\n        \"function\": \"evalShaCommand\",\n        \"get_keys_function\": \"evalGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"MAY_REPLICATE\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"sha1\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the script that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/evalsha_ro.json",
    "content": "{\n    \"EVALSHA_RO\": {\n        \"summary\": \"Executes a read-only server-side Lua script by SHA1 digest.\",\n        \"complexity\": \"Depends on the script that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"evalShaRoCommand\",\n        \"get_keys_function\": \"evalGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\",\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"sha1\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the script that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/exec.json",
    "content": "{\n    \"EXEC\": {\n        \"summary\": \"Executes all commands in a transaction.\",\n        \"complexity\": \"Depends on commands in the transaction\",\n        \"group\": \"transactions\",\n        \"since\": \"1.2.0\",\n        \"arity\": 1,\n        \"function\": \"execCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SKIP_SLOWLOG\"\n        ],\n        \"acl_categories\": [\n            \"TRANSACTION\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Each element being the reply to each of the commands in the atomic transaction.\",\n                    \"type\": \"array\"\n                },\n                {\n                    \"description\": \"The transaction was aborted because a `WATCH`ed key was touched\",\n                    \"type\": \"null\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/exists.json",
    "content": "{\n    \"EXISTS\": {\n        \"summary\": \"Determines whether one or more keys exist.\",\n        \"complexity\": \"O(N) where N is the number of keys to check.\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"existsCommand\",\n        \"history\": [\n            [\n                \"3.0.3\",\n                \"Accepts multiple `key` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of keys that exist from those specified as arguments.\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/expire.json",
    "content": "{\n    \"EXPIRE\": {\n        \"summary\": \"Sets the expiration time of a key in seconds.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"expireCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added options: \`NX\`, \`XX\`, \`GT\` and \`LT\`.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments.\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"The timeout was set.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"seconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"7.0.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/expireat.json",
    "content": "{\n    \"EXPIREAT\": {\n        \"summary\": \"Sets the expiration time of a key to a Unix timestamp.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.2.0\",\n        \"arity\": -3,\n        \"function\": \"expireatCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added options: \`NX\`, \`XX\`, \`GT\` and \`LT\`.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 1,\n                    \"description\": \"The timeout was set.\"\n                },\n                {\n                    \"const\": 0,\n                    \"description\": \"The timeout was not set. e.g. key doesn't exist, or operation skipped due to the provided arguments.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"unix-time-seconds\",\n                \"type\": \"unix-time\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"7.0.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/expiretime.json",
    "content": "{\n    \"EXPIRETIME\": {\n        \"summary\": \"Returns the expiration time of a key as a Unix timestamp.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"function\": \"expiretimeCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Expiration Unix timestamp in seconds.\",\n                    \"minimum\": 0\n                },\n                {\n                    \"const\": -1,\n                    \"description\": \"The key exists but has no associated expiration time.\"\n                },\n                {\n                    \"const\": -2,\n                    \"description\": \"The key does not exist.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/failover.json",
    "content": "{\n    \"FAILOVER\": {\n        \"summary\": \"Starts a coordinated failover from a server to one of its replicas.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"6.2.0\",\n        \"arity\": -1,\n        \"function\": \"failoverCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"target\",\n                \"token\": \"TO\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"host\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"port\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"token\": \"FORCE\",\n                        \"name\": \"force\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"token\": \"ABORT\",\n                \"name\": \"abort\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"TIMEOUT\",\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/fcall.json",
    "content": "{\n    \"FCALL\": {\n        \"summary\": \"Invokes a function.\",\n        \"complexity\": \"Depends on the function that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"fcallCommand\",\n        \"get_keys_function\": \"functionGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"MAY_REPLICATE\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"We cannot tell how the keys will be used so we assume the worst, RW and UPDATE\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"function\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the function that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/fcall_ro.json",
    "content": "{\n    \"FCALL_RO\": {\n        \"summary\": \"Invokes a read-only function.\",\n        \"complexity\": \"Depends on the function that is executed.\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"fcallroCommand\",\n        \"get_keys_function\": \"functionGetKeys\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"SKIP_MONITOR\",\n            \"NO_MANDATORY_KEYS\",\n            \"STALE\",\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"We cannot tell how the keys will be used so we assume the worst, RO and ACCESS\",\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"function\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Return value depends on the function that is executed\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/flushall.json",
    "content": "{\n    \"FLUSHALL\": {\n        \"summary\": \"Removes all keys from all databases.\",\n        \"complexity\": \"O(N) where N is the total number of keys in all databases\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"flushallCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Added the `ASYNC` flushing mode modifier.\"\n            ],\n            [\n                \"6.2.0\",\n                \"Added the `SYNC` flushing mode modifier.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"flush-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"async\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASYNC\",\n                        \"since\": \"4.0.0\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\",\n                        \"since\": \"6.2.0\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/flushdb.json",
    "content": "{\n    \"FLUSHDB\": {\n        \"summary\": \"Removes all keys from the current database.\",\n        \"complexity\": \"O(N) where N is the number of keys in the selected database\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"flushdbCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Added the \`ASYNC\` flushing mode modifier.\"\n            ],\n            [\n                \"6.2.0\",\n                \"Added the \`SYNC\` flushing mode modifier.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"flush-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"async\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASYNC\",\n                        \"since\": \"4.0.0\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\",\n                        \"since\": \"6.2.0\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/function-delete.json",
    "content": "{\n    \"DELETE\": {\n        \"summary\": \"Deletes a library and its functions.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": 3,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionDeleteCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"library-name\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-dump.json",
    "content": "{\n    \"DUMP\": {\n        \"summary\": \"Dumps all libraries into a serialized binary payload.\",\n        \"complexity\": \"O(N) where N is the number of functions\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionDumpCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"the serialized payload\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-flush.json",
    "content": "{\n    \"FLUSH\": {\n        \"summary\": \"Deletes all libraries and functions.\",\n        \"complexity\": \"O(N) where N is the number of functions deleted\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionFlushCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"flush-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"async\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASYNC\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionHelpCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-kill.json",
    "content": "{\n    \"KILL\": {\n        \"summary\": \"Terminates a function during execution.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionKillCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ONE_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-list.json",
    "content": "{\n    \"LIST\": {\n        \"summary\": \"Returns information about all libraries.\",\n        \"complexity\": \"O(N) where N is the number of functions\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionListCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"library_name\": {\n                        \"description\": \" the name of the library\",\n                        \"type\": \"string\"\n                    },\n                    \"engine\": {\n                        \"description\": \"the engine of the library\",\n                        \"type\": \"string\"\n                    },\n                    \"functions\": {\n                        \"description\": \"the list of functions in the library\",\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"object\",\n                            \"additionalProperties\": false,\n                            \"properties\": {\n                                \"name\": {\n                                    \"description\": \"the name of the function\",\n                                    \"type\": \"string\"\n                                },\n                                \"description\": {\n                                    \"description\": \"the function's description\",\n                                    \"oneOf\": [\n                                        {\n                              
              \"type\": \"null\"\n                                        },\n                                        {\n                                            \"type\": \"string\"\n                                        }\n                                    ]\n                                },\n                                \"flags\": {\n                                    \"description\": \"an array of function flags\",\n                                    \"type\": \"array\",\n                                    \"items\": {\n                                        \"type\": \"string\"\n                                    }\n                                }\n                            }\n                        }\n                    },\n                    \"library_code\": {\n                        \"description\": \"the library's source code (when given the WITHCODE modifier)\",\n                        \"type\": \"string\"\n                    }\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"library-name-pattern\",\n                \"type\": \"string\",\n                \"token\": \"LIBRARYNAME\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withcode\",\n                \"type\": \"pure-token\",\n                \"token\": \"WITHCODE\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/function-load.json",
    "content": "{\n    \"LOAD\": {\n        \"summary\": \"Creates a library.\",\n        \"complexity\": \"O(1) (considering compilation time is redundant)\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionLoadCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"replace\",\n                \"type\": \"pure-token\",\n                \"token\": \"REPLACE\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"function-code\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The library name that was loaded\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-restore.json",
    "content": "{\n    \"RESTORE\": {\n        \"summary\": \"Restores all libraries from a payload.\",\n        \"complexity\": \"O(N) where N is the number of functions on the payload\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionRestoreCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"serialized-value\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"policy\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"flush\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"FLUSH\"\n                    },\n                    {\n                        \"name\": \"append\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"APPEND\"\n                    },\n                    {\n                        \"name\": \"replace\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"REPLACE\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function-stats.json",
    "content": "{\n    \"STATS\": {\n        \"summary\": \"Returns information about a function during execution.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"container\": \"FUNCTION\",\n        \"function\": \"functionStatsCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"additionalProperties\": false,\n            \"properties\": {\n                \"running_script\": {\n                    \"description\": \"information about the running script.\",\n                    \"oneOf\": [\n                        {\n                            \"description\": \"If there's no in-flight function\",\n                            \"type\": \"null\"\n                        },\n                        {\n                            \"description\": \"a map with the information about the running script\",\n                            \"type\": \"object\",\n                            \"additionalProperties\": false,\n                            \"properties\": {\n                                \"name\": {\n                                    \"description\": \"the name of the function.\",\n                                    \"type\": \"string\"\n                                },\n                                \"command\": {\n                                    \"description\": \"the command and arguments used for invoking the function.\",\n                                    \"type\": \"array\",\n                                    \"items\": {\n                                        \"type\": \"string\"\n    
                                }\n                                },\n                                \"duration_ms\": {\n                                    \"description\": \"the function's runtime duration in milliseconds.\",\n                                    \"type\": \"integer\"\n                                }\n                            }\n                        }\n                    ]\n                },\n                \"engines\": {\n                    \"description\": \"A map when each entry in the map represent a single engine.\",\n                    \"type\": \"object\",\n                    \"patternProperties\": {\n                        \"^.*$\": {\n                            \"description\": \"Engine map contains statistics about the engine\",\n                            \"type\": \"object\",\n                            \"additionalProperties\": false,\n                            \"properties\": {\n                                \"libraries_count\": {\n                                    \"description\": \"number of libraries\",\n                                    \"type\": \"integer\"\n                                },\n                                \"functions_count\": {\n                                    \"description\": \"number of functions\",\n                                    \"type\": \"integer\"\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/function.json",
    "content": "{\n    \"FUNCTION\": {\n        \"summary\": \"A container for function commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"scripting\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/gcra.json",
    "content": "{\n    \"GCRA\": {\n        \"summary\": \"Rate limit via GCRA (Generic Cell Rate Algorithm).\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"rate_limit\",\n        \"since\": \"8.8.0\",\n        \"arity\": -5,\n        \"function\": \"gcraCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"RATE_LIMIT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"minItems\": 5,\n            \"maxItems\": 5,\n            \"description\": \"Rate limiting result\",\n            \"items\": [\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Limited: 0 if allowed, 1 if rate limited\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Max request tokens: always equal to max_burst+1\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Number of tokens available immediately\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Retry after: seconds after which the caller should retry. 
Always -1 if not limited\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Full burst after: seconds after which a full burst will be allowed\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"max-burst\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"tokens-per-period\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"period\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"token\": \"TOKENS\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/gcrasetvalue.json",
    "content": "{\n    \"GCRASETVALUE\": {\n        \"summary\": \"An internal command for recording a GCRA TAT value during AOF rewrite and replication.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"rate_limit\",\n        \"since\": \"8.8.0\",\n        \"arity\": 3,\n        \"function\": \"gcraSetValueCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"RATE_LIMIT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"tat\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/geoadd.json",
    "content": "{\n    \"GEOADD\": {\n        \"summary\": \"Adds one or more members to a geospatial index. The key is created if it doesn't exist.\",\n        \"complexity\": \"O(log(N)) for each item added, where N is the number of elements in the sorted set.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -5,\n        \"function\": \"geoaddCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `CH`, `NX` and `XX` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    }\n  
              ]\n            },\n            {\n                \"name\": \"change\",\n                \"token\": \"CH\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"longitude\",\n                        \"type\": \"double\"\n                    },\n                    {\n                        \"name\": \"latitude\",\n                        \"type\": \"double\"\n                    },\n                    {\n                        \"name\": \"member\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"When used without optional arguments, the number of elements added to the sorted set (excluding score updates).  If the CH option is specified, the number of elements that were changed (added or updated).\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/geodist.json",
    "content": "{\n    \"GEODIST\": {\n        \"summary\": \"Returns the distance between two members of a geospatial index.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -4,\n        \"function\": \"geodistCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member1\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"member2\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"unit\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"m\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"m\"\n                    },\n                    {\n                        \"name\": \"km\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"km\"\n                    },\n                    {\n                        \"name\": \"ft\",\n                        \"type\": 
\"pure-token\",\n                        \"token\": \"ft\"\n                    },\n                    {\n                        \"name\": \"mi\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"mi\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"one or both of elements are missing\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"distance as a double (represented as a string) in the specified units\",\n                    \"type\": \"string\",\n                    \"pattern\": \"^[0-9]*(.[0-9]*)?$\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/geohash.json",
    "content": "{\n    \"GEOHASH\": {\n        \"summary\": \"Returns members from a geospatial index as geohash strings.\",\n        \"complexity\": \"O(1) for each member requested.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -2,\n        \"function\": \"geohashCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true,\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"An array where each element is the Geohash corresponding to each member name passed as argument to the command.\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/geopos.json",
    "content": "{\n    \"GEOPOS\": {\n        \"summary\": \"Returns the longitude and latitude of members from a geospatial index.\",\n        \"complexity\": \"O(1) for each member requested.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -2,\n        \"function\": \"geoposCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true,\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"An array where each element is a two elements array representing longitude and latitude (x,y) of each member name passed as argument to the command\",\n            \"type\": \"array\",\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"Element does not exist\",\n                        \"type\": \"null\"\n                    },\n                    {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n           
             \"items\": [\n                            {\n                                \"description\": \"Latitude (x)\",\n                                \"type\": \"number\"\n                            },\n                            {\n                                \"description\": \"Longitude (y)\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/georadius.json",
    "content": "{\n    \"GEORADIUS\": {\n        \"summary\": \"Queries a geospatial index for members within a distance from a coordinate, optionally stores the result.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -6,\n        \"function\": \"georadiusCommand\",\n        \"get_keys_function\": \"georadiusGetKeys\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `ANY` option for `COUNT`.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"STORE\",\n                        
\"startfrom\": 6\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"STOREDIST\",\n                        \"startfrom\": 6\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"longitude\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"latitude\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"radius\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"unit\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"m\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"m\"\n                    },\n                    {\n                        \"name\": \"km\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"km\"\n                    },\n                    {\n                        \"name\": \"ft\",\n                        \"type\": \"pure-token\",\n 
                       \"token\": \"ft\"\n                    },\n                    {\n                        \"name\": \"mi\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"mi\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withcoord\",\n                \"token\": \"WITHCOORD\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withdist\",\n                \"token\": \"WITHDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withhash\",\n                \"token\": \"WITHHASH\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"any\",\n                        \"token\": \"ANY\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true,\n                        \"since\": \"6.2.0\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n 
                       \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"store\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"STORE\",\n                        \"name\": \"storekey\",\n                        \"display\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 1\n                    },\n                    {\n                        \"token\": \"STOREDIST\",\n                        \"name\": \"storedistkey\",\n                        \"display\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 2\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of matched members information\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If no WITH* option is specified, array of matched members names\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"name\",\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"Matched member name\",\n                                \"type\": \"string\"\n                            }\n                        ],\n                        \"additionalItems\": {\n                            \"oneOf\": [\n                                {\n                                    
\"description\": \"If WITHDIST option is specified, the distance from the center as a floating point number, in the same unit specified in the radius\",\n                                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"If WITHHASH option is specified, the geohash integer\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"If WITHCOORD option is specified, the coordinates as a two items x,y array (longitude,latitude)\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"longitude (x)\",\n                                            \"type\": \"number\"\n                                        },\n                                        {\n                                            \"description\": \"latitude (y)\",\n                                            \"type\": \"number\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"description\": \"number of items stored in key\",\n                    \"type\": \"integer\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/georadius_ro.json",
    "content": "{\n    \"GEORADIUS_RO\": {\n        \"summary\": \"Returns members from a geospatial index that are within a distance from a coordinate.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.10\",\n        \"arity\": -6,\n        \"function\": \"georadiusroCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `ANY` option for `COUNT`.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`GEOSEARCH` with the `BYRADIUS` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"longitude\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"latitude\",\n                \"type\": 
\"double\"\n            },\n            {\n                \"name\": \"radius\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"unit\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"m\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"m\"\n                    },\n                    {\n                        \"name\": \"km\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"km\"\n                    },\n                    {\n                        \"name\": \"ft\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ft\"\n                    },\n                    {\n                        \"name\": \"mi\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"mi\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withcoord\",\n                \"token\": \"WITHCOORD\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withdist\",\n                \"token\": \"WITHDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withhash\",\n                \"token\": \"WITHHASH\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n             
           \"name\": \"any\",\n                        \"token\": \"ANY\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true,\n                        \"since\": \"6.2.0\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of matched members information\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If no WITH* option is specified, array of matched members names\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"name\",\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"Matched member name\",\n                                \"type\": \"string\"\n                            }\n                        ],\n                        \"additionalItems\": {\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"If 
WITHDIST option is specified, the distance from the center as a floating point number, in the same unit specified in the radius\",\n                                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"If WITHHASH option is specified, the geohash integer\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"If WITHCOORD option is specified, the coordinates as a two items x,y array (longitude,latitude)\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"longitude (x)\",\n                                            \"type\": \"number\"\n                                        },\n                                        {\n                                            \"description\": \"latitude (y)\",\n                                            \"type\": \"number\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/georadiusbymember.json",
    "content": "{\n    \"GEORADIUSBYMEMBER\": {\n        \"summary\": \"Queries a geospatial index for members within a distance from a member, optionally stores the result.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.0\",\n        \"arity\": -5,\n        \"function\": \"georadiusbymemberCommand\",\n        \"get_keys_function\": \"georadiusGetKeys\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `ANY` option for `COUNT`.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` and `FROMMEMBER` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": 
\"STORE\",\n                        \"startfrom\": 5\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"STOREDIST\",\n                        \"startfrom\": 5\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"radius\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"unit\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"m\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"m\"\n                    },\n                    {\n                        \"name\": \"km\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"km\"\n                    },\n                    {\n                        \"name\": \"ft\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ft\"\n                    },\n           
         {\n                        \"name\": \"mi\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"mi\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withcoord\",\n                \"token\": \"WITHCOORD\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withdist\",\n                \"token\": \"WITHDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withhash\",\n                \"token\": \"WITHHASH\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"any\",\n                        \"token\": \"ANY\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n       
         \"name\": \"store\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"STORE\",\n                        \"name\": \"storekey\",\n                        \"display\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 1\n                    },\n                    {\n                        \"token\": \"STOREDIST\",\n                        \"name\": \"storedistkey\",\n                        \"display\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 2\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of matched members information\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If no WITH* option is specified, array of matched members names\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"name\",\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"Matched member name\",\n                                \"type\": \"string\"\n                            }\n                        ],\n                        \"additionalItems\": {\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"If WITHDIST option is specified, the distance from the center as a floating point number, in the same unit 
specified in the radius\",\n                                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"If WITHHASH option is specified, the geohash integer\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"If WITHCOORD option is specified, the coordinates as a two items x,y array (longitude,latitude)\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"longitude (x)\",\n                                            \"type\": \"number\"\n                                        },\n                                        {\n                                            \"description\": \"latitude (y)\",\n                                            \"type\": \"number\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"description\": \"number of items stored in key\",\n                    \"type\": \"integer\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/georadiusbymember_ro.json",
    "content": "{\n    \"GEORADIUSBYMEMBER_RO\": {\n        \"summary\": \"Returns members from a geospatial index that are within a distance from a member.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\n        \"group\": \"geo\",\n        \"since\": \"3.2.10\",\n        \"arity\": -5,\n        \"function\": \"georadiusbymemberroCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `ANY` option for `COUNT`.\"\n            ],\n            [\n                \"7.0.0\", \n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`GEOSEARCH` with the `BYRADIUS` and `FROMMEMBER` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"radius\",\n            
    \"type\": \"double\"\n            },\n            {\n                \"name\": \"unit\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"m\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"m\"\n                    },\n                    {\n                        \"name\": \"km\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"km\"\n                    },\n                    {\n                        \"name\": \"ft\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ft\"\n                    },\n                    {\n                        \"name\": \"mi\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"mi\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withcoord\",\n                \"token\": \"WITHCOORD\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withdist\",\n                \"token\": \"WITHDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withhash\",\n                \"token\": \"WITHHASH\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"any\",\n                        \"token\": \"ANY\",\n                  
      \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of matched members information\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If no WITH* option is specified, array of matched members names\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"name\",\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"Matched member name\",\n                                \"type\": \"string\"\n                            }\n                        ],\n                        \"additionalItems\": {\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"If WITHDIST option is specified, the distance from the center as a floating point number, in the same unit specified in the radius\",\n                 
                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"If WITHHASH option is specified, the geohash integer\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"If WITHCOORD option is specified, the coordinates as a two items x,y array (longitude,latitude)\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"longitude (x)\",\n                                            \"type\": \"number\"\n                                        },\n                                        {\n                                            \"description\": \"latitude (y)\",\n                                            \"type\": \"number\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/geosearch.json",
    "content": "{\n    \"GEOSEARCH\": {\n        \"summary\": \"Queries a geospatial index for members inside an area of a box or a circle.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape\",\n        \"group\": \"geo\",\n        \"since\": \"6.2.0\",\n        \"arity\": -7,\n        \"function\": \"geosearchCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"from\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"token\": \"FROMMEMBER\",\n                        \"name\": \"member\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"token\": \"FROMLONLAT\",\n                        \"name\": \"fromlonlat\",\n                        \"type\": \"block\",\n                        
\"arguments\": [\n                            {\n                                \"name\": \"longitude\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"latitude\",\n                                \"type\": \"double\"\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"name\": \"by\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"circle\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"BYRADIUS\",\n                                \"name\": \"radius\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"unit\",\n                                \"type\": \"oneof\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"m\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"m\"\n                                    },\n                                    {\n                                        \"name\": \"km\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"km\"\n                                    },\n                                    {\n                                        \"name\": \"ft\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"ft\"\n                                    },\n                                    {\n                      
                  \"name\": \"mi\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"mi\"\n                                    }\n                                ]\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"box\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"BYBOX\",\n                                \"name\": \"width\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"height\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"unit\",\n                                \"type\": \"oneof\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"m\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"m\"\n                                    },\n                                    {\n                                        \"name\": \"km\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"km\"\n                                    },\n                                    {\n                                        \"name\": \"ft\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"ft\"\n                                    },\n                                    {\n                                        \"name\": \"mi\",\n                                        \"type\": 
\"pure-token\",\n                                        \"token\": \"mi\"\n                                    }\n                                ]\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"any\",\n                        \"token\": \"ANY\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"name\": \"withcoord\",\n                \"token\": \"WITHCOORD\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withdist\",\n                \"token\": \"WITHDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withhash\",\n                \"token\": \"WITHHASH\",\n                \"type\": \"pure-token\",\n               
 \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of matched members information\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If no WITH* option is specified, array of matched members names\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"name\",\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 1,\n                        \"maxItems\": 4,\n                        \"items\": [\n                            {\n                                \"description\": \"Matched member name\",\n                                \"type\": \"string\"\n                            }\n                        ],\n                        \"additionalItems\": {\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"If WITHDIST option is specified, the distance from the center as a floating point number, in the same unit specified in the radius\",\n                                    \"type\": \"string\"\n                                },\n                                {\n                                    \"description\": \"If WITHHASH option is specified, the geohash integer\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"If WITHCOORD option is specified, the coordinates as a two items x,y array (longitude,latitude)\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    
\"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"longitude (x)\",\n                                            \"type\": \"number\"\n                                        },\n                                        {\n                                            \"description\": \"latitude (y)\",\n                                            \"type\": \"number\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/geosearchstore.json",
    "content": "{\n    \"GEOSEARCHSTORE\": {\n        \"summary\": \"Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result.\",\n        \"complexity\": \"O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape\",\n        \"group\": \"geo\",\n        \"since\": \"6.2.0\",\n        \"arity\": -8,\n        \"function\": \"geosearchstoreCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added support for uppercase unit names.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"GEO\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                
\"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"from\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"token\": \"FROMMEMBER\",\n                        \"name\": \"member\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"token\": \"FROMLONLAT\",\n                        \"name\": \"fromlonlat\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"longitude\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"latitude\",\n                                \"type\": \"double\"\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"name\": \"by\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"circle\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"BYRADIUS\",\n                                \"name\": \"radius\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"unit\",\n                                \"type\": \"oneof\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"m\",\n                      
                  \"type\": \"pure-token\",\n                                        \"token\": \"m\"\n                                    },\n                                    {\n                                        \"name\": \"km\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"km\"\n                                    },\n                                    {\n                                        \"name\": \"ft\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"ft\"\n                                    },\n                                    {\n                                        \"name\": \"mi\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"mi\"\n                                    }\n                                ]\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"box\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"token\": \"BYBOX\",\n                                \"name\": \"width\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"height\",\n                                \"type\": \"double\"\n                            },\n                            {\n                                \"name\": \"unit\",\n                                \"type\": \"oneof\",\n                                \"arguments\": [\n                                    {\n                                        \"name\": \"m\",\n                                        \"type\": \"pure-token\",\n                                        
\"token\": \"m\"\n                                    },\n                                    {\n                                        \"name\": \"km\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"km\"\n                                    },\n                                    {\n                                        \"name\": \"ft\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"ft\"\n                                    },\n                                    {\n                                        \"name\": \"mi\",\n                                        \"type\": \"pure-token\",\n                                        \"token\": \"mi\"\n                                    }\n                                ]\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"count-block\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"any\",\n        
                \"token\": \"ANY\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"name\": \"storedist\",\n                \"token\": \"STOREDIST\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of elements in the resulting set\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/get.json",
    "content": "{\n    \"GET\": {\n        \"summary\": \"Returns the string value of a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"getCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The value of the key.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/getbit.json",
    "content": "{\n    \"GETBIT\": {\n        \"summary\": \"Returns a bit value by offset.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"bitmap\",\n        \"since\": \"2.2.0\",\n        \"arity\": 3,\n        \"function\": \"getbitCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The bit value stored at offset.\",\n            \"oneOf\": [\n                {\n                    \"const\": 0\n                },\n                {\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"offset\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/getdel.json",
    "content": "{\n    \"GETDEL\": {\n        \"summary\": \"Returns the string value of a key after deleting the key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"function\": \"getdelCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The value of the key.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"The key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/getex.json",
    "content": "{\n    \"GETEX\": {\n        \"summary\": \"Returns the string value of a key after setting its expiration time.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"6.2.0\",\n        \"arity\": -2,\n        \"function\": \"getexCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"RW and UPDATE because it changes the TTL\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The value of the key.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"Key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"expiration\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"seconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"EX\"\n                    
},\n                    {\n                        \"name\": \"milliseconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"PX\"\n                    },\n                    {\n                        \"name\": \"unix-time-seconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"EXAT\"\n                    },\n                    {\n                        \"name\": \"unix-time-milliseconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"PXAT\"\n                    },\n                    {\n                        \"name\": \"persist\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"PERSIST\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/getrange.json",
    "content": "{\n    \"GETRANGE\": {\n        \"summary\": \"Returns a substring of the string stored at a key.\",\n        \"complexity\": \"O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.\",\n        \"group\": \"string\",\n        \"since\": \"2.4.0\",\n        \"arity\": 4,\n        \"function\": \"getrangeCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The substring of the string value stored at key, determined by the offsets start and end (both are inclusive).\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"end\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/getset.json",
    "content": "{\n    \"GETSET\": {\n        \"summary\": \"Returns the previous string value of a key after setting it to a new value.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"getsetCommand\",\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`SET` with the `!GET` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The old value stored at the key.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"The key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hdel.json",
    "content": "{\n    \"HDEL\": {\n        \"summary\": \"Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain.\",\n        \"complexity\": \"O(N) where N is the number of fields to be removed.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"hdelCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple `field` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The number of fields that were removed from the hash.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hello.json",
    "content": "{\n    \"HELLO\": {\n        \"summary\": \"Handshakes with the Redis server.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.0.0\",\n        \"arity\": -1,\n        \"function\": \"helloCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"`protover` made optional; when called without arguments the command reports the current connection's context.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"NO_AUTH\",\n            \"SENTINEL\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"additionalProperties\": false,\n            \"properties\": {\n                \"server\": {\n                    \"type\": \"string\"\n                },\n                \"version\": {\n                    \"type\": \"string\"\n                },\n                \"proto\": {\n                    \"const\": 3\n                },\n                \"id\": {\n                    \"type\": \"integer\"\n                },\n                \"mode\": {\n                    \"type\": \"string\"\n                },\n                \"role\": {\n                    \"type\": \"string\"\n                },\n                \"modules\": {\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"additionalProperties\": false,\n                        \"properties\": {\n                            \"name\": {\n                                \"type\": \"string\"\n                            },\n                            \"ver\": {\n                                \"type\": \"integer\"\n                            },\n                      
      \"path\": {\n                                \"type\": \"string\"\n                            },\n                            \"args\": {\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"type\": \"string\"\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"arguments\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"protover\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"token\": \"AUTH\",\n                        \"name\": \"auth\",\n                        \"type\": \"block\",\n                        \"optional\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"username\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"password\",\n                                \"type\": \"string\"\n                            }\n                        ]\n                    },\n                    {\n                        \"token\": \"SETNAME\",\n                        \"name\": \"clientname\",\n                        \"type\": \"string\",\n                        \"optional\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hexists.json",
    "content": "{\n    \"HEXISTS\": {\n        \"summary\": \"Determines whether a field exists in a hash.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 3,\n        \"function\": \"hexistsCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The hash does not contain the field, or key does not exist.\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"The hash contains the field.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hexpire.json",
    "content": "{\n    \"HEXPIRE\": {\n        \"summary\": \"Set expiry for hash field using relative time to expire (seconds)\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -6,\n        \"function\": \"hexpireCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"Specified NX | XX | GT | LT condition not met\",\n                        \"const\": 0\n                    },\n                    {\n                        \"description\": \"Expiration time was set or updated.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Field deleted because the specified expiration time is in the past.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"seconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                     
   \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hexpireat.json",
    "content": "{\n    \"HEXPIREAT\": {\n        \"summary\": \"Set expiry for hash field using an absolute Unix timestamp (seconds)\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -6,\n        \"function\": \"hexpireatCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"Specified NX | XX | GT | LT condition not met\",\n                        \"const\": 0\n                    },\n                    {\n                        \"description\": \"Expiration time was set or updated.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Field deleted because the specified expiration time is in the past.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"unix-time-seconds\",\n                \"type\": \"unix-time\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n         
               \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}"
  },
  {
    "path": "src/commands/hexpiretime.json",
    "content": "{\n    \"HEXPIRETIME\": {\n        \"summary\": \"Returns the expiration time of a hash field as a Unix timestamp, in seconds.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -5,\n        \"function\": \"hexpiretimeCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"The field exists but has no associated expire.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"Expiration Unix timestamp in seconds.\",\n                        \"type\": \"integer\",\n                        \"minimum\": 1\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hget.json",
    "content": "{\n    \"HGET\": {\n        \"summary\": \"Returns the value of a field in a hash.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 3,\n        \"function\": \"hgetCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The value associated with the field.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"If the field is not present in the hash or key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hgetall.json",
    "content": "{\n    \"HGETALL\": {\n        \"summary\": \"Returns all fields and values in a hash.\",\n        \"complexity\": \"O(N) where N is the size of the hash.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 2,\n        \"function\": \"hgetallCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"description\": \"Map of fields and their values stored in the hash, or an empty list when key does not exist. In RESP2 this is returned as a flat array.\",\n            \"additionalProperties\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hgetdel.json",
    "content": "{\n    \"HGETDEL\": {\n        \"summary\": \"Returns the value of a field and deletes it from the hash.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"8.0.0\",\n        \"arity\": -5,\n        \"function\": \"hgetdelCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of values associated with the given fields, in the same order as they are requested.\",\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": 
\"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hgetex.json",
    "content": "{\n    \"HGETEX\": {\n        \"summary\": \"Get the value of one or more fields of a given hash key, and optionally set their expiration.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"8.0.0\",\n        \"arity\": -5,\n        \"function\": \"hgetexCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"RW and UPDATE because it changes the TTL\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of values associated with the given fields, in the same order as they are requested.\",\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"expiration\",\n                \"type\": \"oneof\",\n                
\"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"seconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"EX\"\n                    },\n                    {\n                        \"name\": \"milliseconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"PX\"\n                    },\n                    {\n                        \"name\": \"unix-time-seconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"EXAT\"\n                    },\n                    {\n                        \"name\": \"unix-time-milliseconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"PXAT\"\n                    },\n                    {\n                        \"name\": \"persist\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"PERSIST\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hincrby.json",
    "content": "{\n    \"HINCRBY\": {\n        \"summary\": \"Increments the integer value of a field in a hash by a number. Uses 0 as initial value if the field doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 4,\n        \"function\": \"hincrbyCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The value of the field after the increment operation.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"increment\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hincrbyfloat.json",
    "content": "{\n    \"HINCRBYFLOAT\": {\n        \"summary\": \"Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.6.0\",\n        \"arity\": 4,\n        \"function\": \"hincrbyfloatCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The value of the field after the increment operation.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"increment\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hkeys.json",
    "content": "{\n    \"HKEYS\": {\n        \"summary\": \"Returns all fields in a hash.\",\n        \"complexity\": \"O(N) where N is the size of the hash.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 2,\n        \"function\": \"hkeysCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of fields in the hash, or an empty list when the key does not exist.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hlen.json",
    "content": "{\n    \"HLEN\": {\n        \"summary\": \"Returns the number of fields in a hash.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 2,\n        \"function\": \"hlenCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of the fields in the hash, or 0 when the key does not exist.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hmget.json",
    "content": "{\n    \"HMGET\": {\n        \"summary\": \"Returns the values of all fields in a hash.\",\n        \"complexity\": \"O(N) where N is the number of fields being requested.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"hmgetCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of values associated with the given fields, in the same order as they are requested.\",\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hmset.json",
    "content": "{\n    \"HMSET\": {\n        \"summary\": \"Sets the values of multiple fields.\",\n        \"complexity\": \"O(N) where N is the number of fields being set.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": -4,\n        \"function\": \"hsetCommand\",\n        \"deprecated_since\": \"4.0.0\",\n        \"replaced_by\": \"`HSET` with multiple field-value pairs\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hotkeys-get.json",
    "content": "{\n    \"GET\": {\n        \"summary\": \"Returns lists of top K hotkeys depending on metrics chosen in HOTKEYS START command.\",\n        \"complexity\": \"O(K) where K is the number of hotkeys returned.\",\n        \"group\": \"server\",\n        \"since\": \"8.6.0\",\n        \"arity\": 2,\n        \"container\": \"HOTKEYS\",\n        \"function\": \"hotkeysCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:SPECIAL\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Array of maps with various metrics (tracking-active, sample-ratio, selected-slots, time/network statistics), collection info (collection-start-time-unix-ms, collection-duration-ms, total-cpu-time-user-ms, total-cpu-time-sys-ms, total-net-bytes), and the requested lists of Top-K hotkeys (available metrics: by-cpu-time-us, by-net-bytes) where at most K hotkeys are returned.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"properties\": {\n                            \"tracking-active\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Whether hotkey tracking is currently active (1) or stopped (0).\"\n                            },\n                            \"sample-ratio\": {\n                                \"type\": \"integer\",\n                                \"description\": \"The sampling ratio used for tracking.\"\n                            },\n                            \"selected-slots\": {\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"type\": 
\"array\",\n                                    \"items\": {\n                                        \"type\": \"integer\"\n                                    },\n                                    \"minItems\": 1,\n                                    \"maxItems\": 2\n                                },\n                                \"description\": \"Array of slot ranges. Each element is an array: single-element [slot] for individual slots, or two-element [start, end] for inclusive ranges.\"\n                            },\n                            \"sampled-commands-selected-slots-us\": {\n                                \"type\": \"integer\",\n                                \"description\": \"CPU time in microseconds for sampled commands in selected slots (only present when sampling and slots are configured).\"\n                            },\n                            \"all-commands-selected-slots-us\": {\n                                \"type\": \"integer\",\n                                \"description\": \"CPU time in microseconds for all commands in selected slots (only present when slots are configured).\"\n                            },\n                            \"all-commands-all-slots-us\": {\n                                \"type\": \"integer\",\n                                \"description\": \"CPU time in microseconds for all commands across all slots.\"\n                            },\n                            \"net-bytes-sampled-commands-selected-slots\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Network bytes for sampled commands in selected slots (only present when sampling and slots are configured).\"\n                            },\n                            \"net-bytes-all-commands-selected-slots\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Network bytes for all commands in selected 
slots (only present when slots are configured).\"\n                            },\n                            \"net-bytes-all-commands-all-slots\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Network bytes for all commands across all slots.\"\n                            },\n                            \"collection-start-time-unix-ms\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Unix timestamp in milliseconds when collection started.\"\n                            },\n                            \"collection-duration-ms\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Duration of collection in milliseconds.\"\n                            },\n                            \"total-cpu-time-user-ms\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Total user CPU time in milliseconds (only present when CPU tracking is enabled).\"\n                            },\n                            \"total-cpu-time-sys-ms\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Total system CPU time in milliseconds (only present when CPU tracking is enabled).\"\n                            },\n                            \"total-net-bytes\": {\n                                \"type\": \"integer\",\n                                \"description\": \"Total network bytes (only present when NET tracking is enabled).\"\n                            },\n                            \"by-cpu-time-us\": {\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"oneOf\": [\n                                        {\n                                            \"type\": \"string\"\n                  
                      },\n                                        {\n                                            \"type\": \"integer\"\n                                        }\n                                    ]\n                                },\n                                \"description\": \"Flat array of key-value pairs (key1, cpu_time1, key2, cpu_time2, ...) for top-K hotkeys by CPU time in microseconds (only present when CPU tracking is enabled).\"\n                            },\n                            \"by-net-bytes\": {\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"oneOf\": [\n                                        {\n                                            \"type\": \"string\"\n                                        },\n                                        {\n                                            \"type\": \"integer\"\n                                        }\n                                    ]\n                                },\n                                \"description\": \"Flat array of key-value pairs (key1, bytes1, key2, bytes2, ...) for top-K hotkeys by network bytes (only present when NET tracking is enabled).\"\n                            }\n                        },\n                        \"additionalProperties\": false\n                    }\n                },\n                {\n                    \"description\": \"If no tracking is started\",\n                    \"type\": \"null\"\n                }\n            ]\n        }\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hotkeys-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Return helpful text about HOTKEYS command parameters.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"8.6.1\",\n        \"arity\": 2,\n        \"container\": \"HOTKEYS\",\n        \"function\": \"hotkeysCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/hotkeys-reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Release the resources used for hotkey tracking.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"8.6.0\",\n        \"arity\": 2,\n        \"container\": \"HOTKEYS\",\n        \"function\": \"hotkeysCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/hotkeys-start.json",
    "content": "{\n    \"START\": {\n        \"summary\": \"Starts hotkeys tracking.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"8.6.0\",\n        \"arity\": -2,\n        \"container\": \"HOTKEYS\",\n        \"function\": \"hotkeysCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"token\": \"METRICS\",\n                \"name\": \"metrics\",\n                \"type\": \"block\",\n                \"optional\": false,\n                \"arguments\": [\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"token\": \"CPU\",\n                        \"name\": \"cpu\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    },\n                    {\n                        \"token\": \"NET\",\n                        \"name\": \"net\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"k\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"DURATION\",\n                \"name\": \"seconds\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"SAMPLE\",\n                \"name\": \"ratio\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": 
\"SLOTS\",\n                \"name\": \"slots\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"slot\",\n                        \"type\": \"integer\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hotkeys-stop.json",
    "content": "{\n    \"STOP\": {\n        \"summary\": \"Stops hotkeys tracking.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"8.6.0\",\n        \"arity\": 2,\n        \"container\": \"HOTKEYS\",\n        \"function\": \"hotkeysCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hotkeys.json",
    "content": "{\n    \"HOTKEYS\": {\n        \"summary\": \"A container for hotkeys tracking commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"8.6.0\",\n        \"arity\": -2\n    }\n}\n\n"
  },
  {
    "path": "src/commands/hpersist.json",
    "content": "{\n    \"HPERSIST\": {\n        \"summary\": \"Removes the expiration time for each specified field\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -5,\n        \"function\": \"hpersistCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"The field exists but has no associated expire.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"Expiration time was removed\",\n                        \"const\": 1\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hpexpire.json",
    "content": "{\n    \"HPEXPIRE\": {\n        \"summary\": \"Set expiry for hash field using relative time to expire (milliseconds)\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -6,\n        \"function\": \"hpexpireCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"Specified NX | XX | GT | LT condition not met\",\n                        \"const\": 0\n                    },\n                    {\n                        \"description\": \"Expiration time was set or updated.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Field deleted because the specified expiration time is in the past.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                
        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}"
  },
  {
    "path": "src/commands/hpexpireat.json",
    "content": "{\n    \"HPEXPIREAT\": {\n        \"summary\": \"Set expiry for hash field using an absolute Unix timestamp (milliseconds)\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -6,\n        \"function\": \"hpexpireatCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"Specified NX | XX | GT | LT condition not met\",\n                        \"const\": 0\n                    },\n                    {\n                        \"description\": \"Expiration time was set or updated.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Field deleted because the specified expiration time is in the past.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"unix-time-milliseconds\",\n                \"type\": \"unix-time\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n    
                    \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}"
  },
  {
    "path": "src/commands/hpexpiretime.json",
    "content": "{\n    \"HPEXPIRETIME\": {\n        \"summary\": \"Returns the expiration time of a hash field as a Unix timestamp, in msec.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -5,\n        \"function\": \"hpexpiretimeCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"The field exists but has no associated expire.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"Expiration Unix timestamp in milliseconds.\",\n                        \"type\": \"integer\",\n                        \"minimum\": 1\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hpttl.json",
    "content": "{\n    \"HPTTL\": {\n        \"summary\": \"Returns the TTL in milliseconds of a hash field.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -5,\n        \"function\": \"hpttlCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. 
Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"The field exists but has no associated expire.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"TTL in milliseconds.\",\n                        \"type\": \"integer\",\n                        \"minimum\": 1\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hrandfield.json",
    "content": "{\n    \"HRANDFIELD\": {\n        \"summary\": \"Returns one or more random fields from a hash.\",\n        \"complexity\": \"O(N) where N is the number of fields returned\",\n        \"group\": \"hash\",\n        \"since\": \"6.2.0\",\n        \"arity\": -2,\n        \"function\": \"hrandfieldCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"Key doesn't exist\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"A single random field. Returned in case `COUNT` was not used.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"A list of fields. Returned in case `COUNT` was used.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"Fields and their values. Returned in case `COUNT` and `WITHVALUES` were used. 
In RESP2 this is returned as a flat array.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"Field\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"Value\",\n                                \"type\": \"string\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"options\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"withvalues\",\n                        \"token\": \"WITHVALUES\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hscan.json",
    "content": "{\n    \"HSCAN\": {\n        \"summary\": \"Iterates over fields and values of a hash.\",\n        \"complexity\": \"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\n        \"group\": \"hash\",\n        \"since\": \"2.8.0\",\n        \"arity\": -3,\n        \"function\": \"hscanCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"cursor\",\n                \"type\": \"integer\"\n            },\n            {\n                \"token\": \"MATCH\",\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"NOVALUES\",\n                \"name\": \"novalues\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"cursor and scan response in array form\",\n            \"type\": \"array\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"description\": \"cursor\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"list of key/value pairs from the hash where each even element is the key, and each odd element is the value, or when novalues option is on, a list of keys from the hash\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/hset.json",
    "content": "{\n    \"HSET\": {\n        \"summary\": \"Creates or modifies the value of a field in a hash.\",\n        \"complexity\": \"O(1) for each field/value pair added, so O(N) to add N field/value pairs when the command is called with multiple field/value pairs.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": -4,\n        \"function\": \"hsetCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Accepts multiple `field` and `value` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of fields that were added\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hsetex.json",
    "content": "{\n    \"HSETEX\": {\n        \"summary\": \"Set the value of one or more fields of a given hash key, and optionally set their expiration.\",\n        \"complexity\": \"O(N) where N is the number of fields being set.\",\n        \"group\": \"hash\",\n        \"since\": \"8.0.0\",\n        \"arity\": -6,\n        \"function\": \"hsetexCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"No field was set (due to FXX or FNX flags).\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"All the fields were set.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"fnx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"FNX\"\n                    },\n                    {\n                        \"name\": \"fxx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"FXX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"expiration\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"seconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"EX\"\n                    },\n                    {\n                        \"name\": \"milliseconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"PX\"\n                    },\n                    {\n                        \"name\": \"unix-time-seconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"EXAT\"\n                    },\n                    {\n                        \"name\": \"unix-time-milliseconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"PXAT\"\n                    },\n                    {\n                        \"name\": \"keepttl\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"KEEPTTL\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"data\",\n                        \"type\": \"block\",\n                        \"multiple\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"field\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"value\",\n                                \"type\": \"string\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hsetnx.json",
    "content": "{\n    \"HSETNX\": {\n        \"summary\": \"Sets the value of a field in a hash only when the field doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 4,\n        \"function\": \"hsetnxCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The field already exists in the hash and no operation was performed.\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"The field is a new field in the hash and value was set.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hstrlen.json",
    "content": "{\n    \"HSTRLEN\": {\n        \"summary\": \"Returns the length of the value of a field.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"hash\",\n        \"since\": \"3.2.0\",\n        \"arity\": 3,\n        \"function\": \"hstrlenCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"String length of the value associated with the field, or zero when the field is not present in the hash or key does not exist at all.\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"field\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/httl.json",
    "content": "{\n    \"HTTL\": {\n        \"summary\": \"Returns the TTL in seconds of a hash field.\",\n        \"complexity\": \"O(N) where N is the number of specified fields\",\n        \"group\": \"hash\",\n        \"since\": \"7.4.0\",\n        \"arity\": -5,\n        \"function\": \"httlCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. Returns empty array if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The field does not exist.\",\n                        \"const\": -2\n                    },\n                    {\n                        \"description\": \"The field exists but has no associated expire.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"TTL in seconds.\",\n                        \"type\": \"integer\",\n                        \"minimum\": 1\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"fields\",\n                \"token\": \"FIELDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numfields\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/hvals.json",
    "content": "{\n    \"HVALS\": {\n        \"summary\": \"Returns all values in a hash.\",\n        \"complexity\": \"O(N) where N is the size of the hash.\",\n        \"group\": \"hash\",\n        \"since\": \"2.0.0\",\n        \"arity\": 2,\n        \"function\": \"hvalsCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"HASH\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of values in the hash, or an empty list when the key does not exist.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/incr.json",
    "content": "{\n    \"INCR\": {\n        \"summary\": \"Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"incrCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The value of key after the increment\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/incrby.json",
    "content": "{\n    \"INCRBY\": {\n        \"summary\": \"Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"incrbyCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The value of the key after incrementing it.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"increment\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/incrbyfloat.json",
    "content": "{\n    \"INCRBYFLOAT\": {\n        \"summary\": \"Increment the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"2.6.0\",\n        \"arity\": 3,\n        \"function\": \"incrbyfloatCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The value of the key after incrementing it.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"increment\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/info.json",
    "content": "{\n    \"INFO\": {\n        \"summary\": \"Returns information and statistics about the server.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"infoCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added support for taking multiple section arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"A map of info fields, one field per line in the form of <field>:<value> where the value can be a comma separated map like <key>=<val>. Also contains section header lines starting with `#` and blank lines.\",\n            \"type\": \"string\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"section\",\n                \"type\": \"string\",\n                \"multiple\": true,\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/keys.json",
    "content": "{\n    \"KEYS\": {\n        \"summary\": \"Returns all key names that match a pattern.\",\n        \"complexity\": \"O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"keysCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"list of keys matching pattern\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/lastsave.json",
    "content": "{\n    \"LASTSAVE\": {\n        \"summary\": \"Returns the Unix timestamp of the last successful save to disk.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"lastsaveCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"acl_categories\": [\n            \"ADMIN\",\n            \"DANGEROUS\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"UNIX TIME of the last DB save executed with success.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-doctor.json",
    "content": "{\n    \"DOCTOR\": {\n        \"summary\": \"Returns a human-readable latency analysis report.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 2,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"A human-readable latency analysis report.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-graph.json",
    "content": "{\n    \"GRAPH\": {\n        \"summary\": \"Returns a latency graph for an event.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 3,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"event\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"Latency graph\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 2,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-histogram.json",
    "content": "{\n    \"HISTOGRAM\": {\n        \"summary\": \"Returns the cumulative distribution of latencies of a subset or all commands.\",\n        \"complexity\": \"O(N) where N is the number of commands with latency information being retrieved.\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"description\": \"A map where each key is a command name, and each value is a map with the total calls, and an inner map of the histogram time buckets.\",\n            \"patternProperties\": {\n                \"^.*$\": {\n                    \"type\": \"object\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"calls\": {\n                            \"description\": \"The total calls for the command.\",\n                            \"type\": \"integer\",\n                            \"minimum\": 0\n                        },\n                        \"histogram_usec\": {\n                            \"description\": \"Histogram map, bucket id to latency\",\n                            \"type\": \"object\",\n                            \"additionalProperties\": {\n                                \"type\": \"integer\"\n                            }\n                        }\n                    }\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"COMMAND\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-history.json",
    "content": "{\n    \"HISTORY\": {\n        \"summary\": \"Returns timestamp-latency samples for an event.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 3,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"An array where each element is a two-element array representing the timestamp and the latency of the event.\",\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"timestamp of the event\",\n                        \"type\": \"integer\",\n                        \"minimum\": 0\n                    },\n                    {\n                        \"description\": \"latency of the event\",\n                        \"type\": \"integer\",\n                        \"minimum\": 0\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"event\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-latest.json",
    "content": "{\n    \"LATEST\": {\n        \"summary\": \"Returns the latest latency samples for all events.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": 2,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"An array where each element is a four-element array representing the event's name, timestamp, latest and all-time latency measurements.\",\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 4,\n                \"maxItems\": 4,\n                \"items\": [\n                    {\n                        \"type\": \"string\",\n                        \"description\": \"Event name.\"\n                    },\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"Timestamp.\"\n                    },\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"Latest latency in milliseconds.\"\n                    },\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"Max latency in milliseconds.\"\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/latency-reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Resets the latency data for one or more events.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": -2,\n        \"container\": \"LATENCY\",\n        \"function\": \"latencyCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of event time series that were reset.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"event\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/latency.json",
    "content": "{\n    \"LATENCY\": {\n        \"summary\": \"A container for latency diagnostics commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"2.8.13\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/lcs.json",
    "content": "{\n    \"LCS\": {\n        \"summary\": \"Finds the longest common substring.\",\n        \"complexity\": \"O(N*M) where N and M are the lengths of s1 and s2, respectively\",\n        \"group\": \"string\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"lcsCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The longest common subsequence.\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"The length of the longest common subsequence when 'LEN' is given.\"\n                },\n                {\n                    \"type\": \"object\",\n                    \"description\": \"Array with the LCS length and all the ranges in both the strings when 'IDX' is given. 
In RESP2 this is returned as a flat array\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"matches\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 3,\n                                \"items\": [\n                                    {\n                                        \"type\": \"array\",\n                                        \"description\": \"Matched range in the first string.\",\n                                        \"minItems\": 2,\n                                        \"maxItems\": 2,\n                                        \"items\": {\n                                            \"type\": \"integer\"\n                                        }\n                                    },\n                                    {\n                                        \"type\": \"array\",\n                                        \"description\": \"Matched range in the second string.\",\n                                        \"minItems\": 2,\n                                        \"maxItems\": 2,\n                                        \"items\": {\n                                            \"type\": \"integer\"\n                                        }\n                                    }\n                                ],\n                                \"additionalItems\": {\n                                    \"type\": \"integer\",\n                                    \"description\": \"The length of the match when 'WITHMATCHLEN' is given.\"\n                                }\n                            }\n                        },\n                        \"len\": {\n                            \"type\": \"integer\",\n                            \"description\": 
\"Length of the longest common subsequence.\"\n                        }\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key1\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"key2\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"len\",\n                \"token\": \"LEN\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"idx\",\n                \"token\": \"IDX\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"MINMATCHLEN\",\n                \"name\": \"min-match-len\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"withmatchlen\",\n                \"token\": \"WITHMATCHLEN\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lindex.json",
    "content": "{\n    \"LINDEX\": {\n        \"summary\": \"Returns an element from a list by its index.\",\n        \"complexity\": \"O(N) where N is the number of elements to traverse to get to the element at index. This makes asking for the first or the last element of the list O(1).\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"lindexCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Index is out of range\"\n                },\n                {\n                    \"description\": \"The requested element\",\n                    \"type\": \"string\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"index\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/linsert.json",
    "content": "{\n    \"LINSERT\": {\n        \"summary\": \"Inserts an element before or after another element in a list.\",\n        \"complexity\": \"O(N) where N is the number of elements to traverse before seeing the value pivot. This means that inserting somewhere on the left end on the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).\",\n        \"group\": \"list\",\n        \"since\": \"2.2.0\",\n        \"arity\": 5,\n        \"function\": \"linsertCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"List length after a successful insert operation.\",\n                    \"type\": \"integer\",\n                    \"minimum\": 1\n                },\n                {\n                    \"description\": \"in case key doesn't exist.\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"when the pivot wasn't found.\",\n                    \"const\": -1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n      
          \"name\": \"where\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"before\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"BEFORE\"\n                    },\n                    {\n                        \"name\": \"after\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"AFTER\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"pivot\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/llen.json",
    "content": "{\n    \"LLEN\": {\n        \"summary\": \"Returns the length of a list.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"llenCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List length.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lmove.json",
    "content": "{\n    \"LMOVE\": {\n        \"summary\": \"Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"list\",\n        \"since\": \"6.2.0\",\n        \"arity\": 5,\n        \"function\": \"lmoveCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The element being popped and pushed.\",\n            \"type\": \"string\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n          
      \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"wherefrom\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"whereto\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lmpop.json",
    "content": "{\n    \"LMPOP\": {\n        \"summary\": \"Returns multiple elements from a list after removing them. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N+M) where N is the number of provided keys and M is the number of elements returned.\",\n        \"group\": \"list\",\n        \"since\": \"7.0.0\",\n        \"arity\": -4,\n        \"function\": \"lmpopCommand\",\n        \"get_keys_function\": \"lmpopGetKeys\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"If no element could be popped.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"List key from which elements were popped.\",\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"description\": \"Name of the key from which elements were popped.\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"Array of popped elements.\",\n                            
\"type\": \"array\",\n                            \"minItems\": 1,\n                            \"items\": {\n                                \"type\": \"string\"\n                            }\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"where\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"left\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LEFT\"\n                    },\n                    {\n                        \"name\": \"right\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"RIGHT\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lolwut.json",
    "content": "{\n    \"LOLWUT\": {\n        \"summary\": \"Displays computer art and the Redis version\",\n        \"group\": \"server\",\n        \"since\": \"5.0.0\",\n        \"arity\": -1,\n        \"function\": \"lolwutCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"String containing the generative computer art, and a text with the Redis version.\"\n        },\n        \"arguments\": [\n            {\n                \"token\": \"VERSION\",\n                \"name\": \"version\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lpop.json",
    "content": "{\n    \"LPOP\": {\n        \"summary\": \"Returns the first elements in a list after removing it. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N) where N is the number of elements returned\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"lpopCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `count` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Key does not exist.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"In case `count` argument was not given, the value of the first element.\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"In case `count` argument was given, a list of popped elements\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n\n        \"arguments\": [\n        
    {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lpos.json",
    "content": "{\n    \"LPOS\": {\n        \"summary\": \"Returns the index of matching elements in a list.\",\n        \"complexity\": \"O(N) where N is the number of elements in the list, for the average case. When searching for elements near the head or the tail of the list, or when the MAXLEN option is provided, the command may run in constant time.\",\n        \"group\": \"list\",\n        \"since\": \"6.0.6\",\n        \"arity\": -3,\n        \"function\": \"lposCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"In case there is no matching element\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"An integer representing the matching element\",\n                    \"type\": \"integer\"\n                },\n                {\n                    \"description\": \"If the COUNT option is given, an array of integers representing the matching elements (empty if there are no matches)\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"integer\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": 
[\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\"\n            },\n            {\n                \"token\": \"RANK\",\n                \"name\": \"rank\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"num-matches\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"MAXLEN\",\n                \"name\": \"len\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lpush.json",
    "content": "{\n    \"LPUSH\": {\n        \"summary\": \"Prepends one or more elements to a list. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"lpushCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple `element` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Length of the list after the push operations.\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lpushx.json",
    "content": "{\n    \"LPUSHX\": {\n        \"summary\": \"Prepends one or more elements to a list only when the list exists.\",\n        \"complexity\": \"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\n        \"group\": \"list\",\n        \"since\": \"2.2.0\",\n        \"arity\": -3,\n        \"function\": \"lpushxCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Accepts multiple `element` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"the length of the list after the push operation\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lrange.json",
    "content": "{\n    \"LRANGE\": {\n        \"summary\": \"Returns a range of elements from a list.\",\n        \"complexity\": \"O(S+N) where S is the distance of start offset from HEAD for small lists, from nearest end (HEAD or TAIL) for large lists; and N is the number of elements in the specified range.\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"lrangeCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"stop\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of elements in the specified range\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/lrem.json",
    "content": "{\n    \"LREM\": {\n        \"summary\": \"Removes elements from a list. Deletes the list if the last element was removed.\",\n        \"complexity\": \"O(N+M) where N is the length of the list and M is the number of elements removed.\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"lremCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of removed elements.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/lset.json",
    "content": "{\n    \"LSET\": {\n        \"summary\": \"Sets the value of an element in a list by its index.\",\n        \"complexity\": \"O(N) where N is the length of the list. Setting either the first or the last element of the list is O(1).\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"lsetCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"index\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/ltrim.json",
    "content": "{\n    \"LTRIM\": {\n        \"summary\": \"Removes elements from both ends a list. Deletes the list if all elements were trimmed.\",\n        \"complexity\": \"O(N) where N is the number of elements to be removed by the operation.\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"ltrimCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"stop\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-doctor.json",
    "content": "{\n    \"DOCTOR\": {\n        \"summary\": \"Outputs a memory problems report.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"memory problems report\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-malloc-stats.json",
    "content": "{\n    \"MALLOC-STATS\": {\n        \"summary\": \"Returns the allocator statistics.\",\n        \"complexity\": \"Depends on how much memory is allocated, could be slow\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The memory allocator's internal statistics report.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-purge.json",
    "content": "{\n    \"PURGE\": {\n        \"summary\": \"Asks the allocator to release memory.\",\n        \"complexity\": \"Depends on how much memory is allocated, could be slow\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-stats.json",
    "content": "{\n    \"STATS\": {\n        \"summary\": \"Returns details about memory usage.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"memory usage details\",\n            \"type\": \"object\",\n            \"additionalProperties\": false,\n            \"properties\": {\n                \"peak.allocated\": {\n                    \"type\": \"integer\"\n                },\n                \"total.allocated\": {\n                    \"type\": \"integer\"\n                },\n                \"startup.allocated\": {\n                    \"type\": \"integer\"\n                },\n                \"replication.backlog\": {\n                    \"type\": \"integer\"\n                },\n                \"replica.fullsync.buffer\": {\n                    \"type\": \"integer\"\n                },\n                \"clients.slaves\": {\n                    \"type\": \"integer\"\n                },\n                \"clients.normal\": {\n                    \"type\": \"integer\"\n                },\n                \"cluster.links\": {\n                    \"type\": \"integer\"\n                },\n                \"aof.buffer\": {\n                    \"type\": \"integer\"\n                },\n                \"lua.caches\": {\n                    \"type\": \"integer\"\n                },\n                \"script.VMs\": {\n                    \"type\": \"integer\"\n                },\n                \"functions.caches\": {\n                    \"type\": \"integer\"\n                },\n                \"overhead.db.hashtable.lut\": {\n                    \"type\": \"integer\"\n            
    },\n                \"overhead.db.hashtable.rehashing\": {\n                    \"type\": \"integer\"\n                },\n                \"overhead.total\": {\n                    \"type\": \"integer\"\n                },\n                \"db.dict.rehashing.count\": {\n                    \"type\": \"integer\"\n                },\n                \"keys.count\": {\n                    \"type\": \"integer\"\n                },\n                \"keys.bytes-per-key\": {\n                    \"type\": \"integer\"\n                },\n                \"dataset.bytes\": {\n                    \"type\": \"integer\"\n                },\n                \"dataset.percentage\": {\n                    \"type\": \"number\"\n                },\n                \"peak.percentage\": {\n                    \"type\": \"number\"\n                },\n                \"allocator.allocated\": {\n                    \"type\": \"integer\"\n                },\n                \"allocator.active\": {\n                    \"type\": \"integer\"\n                },\n                \"allocator.resident\": {\n                    \"type\": \"integer\"\n                },\n                \"allocator.muzzy\": {\n                    \"type\": \"integer\"\n                },\n                \"allocator-fragmentation.ratio\": {\n                    \"type\": \"number\"\n                },\n                \"allocator-fragmentation.bytes\": {\n                    \"type\": \"integer\"\n                },\n                \"allocator-rss.ratio\": {\n                    \"type\": \"number\"\n                },\n                \"allocator-rss.bytes\": {\n                    \"type\": \"integer\"\n                },\n                \"rss-overhead.ratio\": {\n                    \"type\": \"number\"\n                },\n                \"rss-overhead.bytes\": {\n                    \"type\": \"integer\"\n                },\n                \"fragmentation\": {\n                    \"type\": 
\"number\"\n                },\n                \"fragmentation.bytes\": {\n                    \"type\": \"integer\"\n                }\n            },\n            \"patternProperties\": {\n                \"^db\\\\.\\\\d+$\": {\n                    \"type\": \"object\",\n                    \"properties\": {\n                        \"overhead.hashtable.main\": {\n                            \"type\": \"integer\"\n                        },\n                        \"overhead.hashtable.expires\": {\n                            \"type\": \"integer\"\n                        }\n                    },\n                    \"additionalProperties\": false\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/memory-usage.json",
    "content": "{\n    \"USAGE\": {\n        \"summary\": \"Estimates the memory usage of a key.\",\n        \"complexity\": \"O(N) where N is the number of samples.\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": -3,\n        \"container\": \"MEMORY\",\n        \"function\": \"memoryCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Number of bytes that a key and its value require to be stored in RAM.\",\n                    \"type\": \"integer\"\n                },\n                {\n                    \"description\": \"Key does not exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"token\": \"SAMPLES\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/memory.json",
    "content": "{\n    \"MEMORY\": {\n        \"summary\": \"A container for memory diagnostics commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/mget.json",
    "content": "{\n    \"MGET\": {\n        \"summary\": \"Atomically returns the string values of one or more keys.\",\n        \"complexity\": \"O(N) where N is the number of keys to retrieve.\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"mgetCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of values at the specified keys.\",\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/migrate.json",
    "content": "{\n    \"MIGRATE\": {\n        \"summary\": \"Atomically transfers a key from one Redis instance to another.\",\n        \"complexity\": \"This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. Also an O(N) data transfer between the two instances is performed.\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": -6,\n        \"function\": \"migrateCommand\",\n        \"get_keys_function\": \"migrateGetKeys\",\n        \"history\": [\n            [\n                \"3.0.0\",\n                \"Added the `COPY` and `REPLACE` options.\"\n            ],\n            [\n                \"3.0.6\",\n                \"Added the `KEYS` option.\"\n            ],\n            [\n                \"4.0.7\",\n                \"Added the `AUTH` option.\"\n            ],\n            [\n                \"6.0.0\",\n                \"Added the `AUTH2` option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 3\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\",\n 
                   \"INCOMPLETE\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"KEYS\",\n                        \"startfrom\": -2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": \"OK\",\n                    \"description\": \"Success.\"\n                },\n                {\n                    \"const\": \"NOKEY\",\n                    \"description\": \"No keys were found in the source instance.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"host\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"port\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key-selector\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0\n                    },\n                    {\n                        \"name\": \"empty-string\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"\\\"\\\"\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"destination-db\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"copy\",\n                \"token\": \"COPY\",\n 
               \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.0\"\n            },\n            {\n                \"name\": \"replace\",\n                \"token\": \"REPLACE\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.0\"\n            },\n            {\n                \"name\": \"authentication\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"AUTH\",\n                        \"name\": \"auth\",\n                        \"display\": \"password\",\n                        \"type\": \"string\",\n                        \"since\": \"4.0.7\"\n                    },\n                    {\n                        \"token\": \"AUTH2\",\n                        \"name\": \"auth2\",\n                        \"type\": \"block\",\n                        \"since\": \"6.0.0\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"username\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"password\",\n                                \"type\": \"string\"\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"token\": \"KEYS\",\n                \"name\": \"keys\",\n                \"display\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"optional\": true,\n                \"multiple\": true,\n                \"since\": \"3.0.6\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/module-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"MODULE\",\n        \"function\": \"moduleCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/module-list.json",
    "content": "{\n    \"LIST\": {\n        \"summary\": \"Returns all loaded modules.\",\n        \"complexity\": \"O(N) where N is the number of loaded modules.\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 2,\n        \"container\": \"MODULE\",\n        \"function\": \"moduleCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Returns information about the modules loaded to the server.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"name\": {\n                        \"type\": \"string\",\n                        \"description\": \"Name of the module.\"\n                    },\n                    \"ver\": {\n                        \"type\": \"integer\",\n                        \"description\": \"Version of the module.\"\n                    },\n                    \"path\": {\n                        \"type\": \"string\",\n                        \"description\": \"Module path.\"\n                    },\n                    \"args\": {\n                        \"type\": \"array\",\n                        \"description\": \"Module arguments.\",\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/module-load.json",
    "content": "{\n    \"LOAD\": {\n        \"summary\": \"Loads a module.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": -3,\n        \"container\": \"MODULE\",\n        \"function\": \"moduleCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"PROTECTED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"path\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"arg\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/module-loadex.json",
    "content": "{\n    \"LOADEX\": {\n        \"summary\": \"Loads a module using extended parameters.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"container\": \"MODULE\",\n        \"function\": \"moduleCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"PROTECTED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"path\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"configs\",\n                \"token\": \"CONFIG\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"multiple_token\": true,\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"name\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"args\",\n                \"token\": \"ARGS\",\n                \"type\": \"string\",\n                \"multiple\": true,\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/module-unload.json",
    "content": "{\n    \"UNLOAD\": {\n        \"summary\": \"Unloads a module.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 3,\n        \"container\": \"MODULE\",\n        \"function\": \"moduleCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"PROTECTED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/module.json",
    "content": "{\n    \"MODULE\": {\n        \"summary\": \"A container for module commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/monitor.json",
    "content": "{\n    \"MONITOR\": {\n        \"summary\": \"Listens for all requests received by the server in real-time.\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"monitorCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/move.json",
    "content": "{\n    \"MOVE\": {\n        \"summary\": \"Moves a key to another database.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"moveCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"db\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"key was moved\",\n                    \"const\": 1\n                },\n                {\n                    \"description\": \"key wasn't moved\",\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/mset.json",
    "content": "{\n    \"MSET\": {\n        \"summary\": \"Atomically creates or modifies the string values of one or more keys.\",\n        \"complexity\": \"O(N) where N is the number of keys to set.\",\n        \"group\": \"string\",\n        \"since\": \"1.0.1\",\n        \"arity\": -3,\n        \"function\": \"msetCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 2,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/msetex.json",
    "content": "{\n    \"MSETEX\": {\n        \"summary\": \"Atomically sets multiple string keys with a shared expiration in a single operation. Supports flexible argument parsing where condition and expiration flags can appear in any order.\",\n        \"complexity\": \"O(N) where N is the number of keys to set.\",\n        \"group\": \"string\",\n        \"since\": \"8.4.0\",\n        \"arity\": -4,\n        \"function\": \"msetexCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 2\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"No key was set (at least one key failed).\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"All the keys were set.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    
{\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"expiration\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"seconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"EX\"\n                    },\n                    {\n                        \"name\": \"milliseconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"PX\"\n                    },\n                    {\n                        \"name\": \"unix-time-seconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"EXAT\"\n                    },\n                    {\n                        \"name\": \"unix-time-milliseconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"PXAT\"\n                    },\n                    {\n                        \"name\": \"keepttl\",\n                        \"type\": \"pure-token\",\n                        
\"token\": \"KEEPTTL\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/msetnx.json",
    "content": "{\n    \"MSETNX\": {\n        \"summary\": \"Atomically modifies the string values of one or more keys only when all keys don't exist.\",\n        \"complexity\": \"O(N) where N is the number of keys to set.\",\n        \"group\": \"string\",\n        \"since\": \"1.0.1\",\n        \"arity\": -3,\n        \"function\": \"msetnxCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 2,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"No key was set (at least one key already existed).\",\n                    \"const\": 0\n                },\n                {\n                    \"description\": \"All the keys were set.\",\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            
}\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/multi.json",
    "content": "{\n    \"MULTI\": {\n        \"summary\": \"Starts a transaction.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"transactions\",\n        \"since\": \"1.2.0\",\n        \"arity\": 1,\n        \"function\": \"multiCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"TRANSACTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object-encoding.json",
    "content": "{\n    \"ENCODING\": {\n        \"summary\": \"Returns the internal encoding of a Redis object.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.2.3\",\n        \"arity\": 3,\n        \"container\": \"OBJECT\",\n        \"function\": \"objectCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"key doesn't exist\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"encoding of the object\",\n                    \"type\": \"string\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object-freq.json",
    "content": "{\n    \"FREQ\": {\n        \"summary\": \"Returns the logarithmic access frequency counter of a Redis object.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"4.0.0\",\n        \"arity\": 3,\n        \"container\": \"OBJECT\",\n        \"function\": \"objectCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the counter's value\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"OBJECT\",\n        \"function\": \"objectCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object-idletime.json",
    "content": "{\n    \"IDLETIME\": {\n        \"summary\": \"Returns the time since the last access to a Redis object.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.2.3\",\n        \"arity\": 3,\n        \"container\": \"OBJECT\",\n        \"function\": \"objectCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the idle time in seconds\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object-refcount.json",
    "content": "{\n    \"REFCOUNT\": {\n        \"summary\": \"Returns the reference count of a value of a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.2.3\",\n        \"arity\": 3,\n        \"container\": \"OBJECT\",\n        \"function\": \"objectCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of references\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/object.json",
    "content": "{\n    \"OBJECT\": {\n        \"summary\": \"A container for object introspection commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"generic\",\n        \"since\": \"2.2.3\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/persist.json",
    "content": "{\n    \"PERSIST\": {\n        \"summary\": \"Removes the expiration time of a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.2.0\",\n        \"arity\": 2,\n        \"function\": \"persistCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 0,\n                    \"description\": \"Key does not exist or does not have an associated timeout.\"\n                },\n                {\n                    \"const\": 1,\n                    \"description\": \"The timeout has been removed.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pexpire.json",
    "content": "{\n    \"PEXPIRE\": {\n        \"summary\": \"Sets the expiration time of a key in milliseconds.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": -3,\n        \"function\": \"pexpireCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added options: `NX`, `XX`, `GT` and `LT`.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 0,\n                    \"description\": \"The timeout was not set. e.g. 
key doesn't exist, or operation skipped due to the provided arguments.\"\n                },\n                {\n                    \"const\": 1,\n                    \"description\": \"The timeout was set.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"7.0.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pexpireat.json",
    "content": "{\n    \"PEXPIREAT\": {\n        \"summary\": \"Sets the expiration time of a key to a Unix milliseconds timestamp.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": -3,\n        \"function\": \"pexpireatCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added options: `NX`, `XX`, `GT` and `LT`.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 1,\n                    \"description\": \"The timeout was set.\"\n                },\n                {\n                    \"const\": 0,\n                    \"description\": \"The timeout was not set. e.g. 
key doesn't exist, or operation skipped due to the provided arguments.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"unix-time-milliseconds\",\n                \"type\": \"unix-time\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"7.0.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pexpiretime.json",
    "content": "{\n    \"PEXPIRETIME\": {\n        \"summary\": \"Returns the expiration time of a key as a Unix milliseconds timestamp.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"7.0.0\",\n        \"arity\": 2,\n        \"function\": \"pexpiretimeCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Expiration Unix timestamp in milliseconds.\",\n                    \"minimum\": 0\n                },\n                {\n                    \"const\": -1,\n                    \"description\": \"The key exists but has no associated expiration time.\"\n                },\n                {\n                    \"const\": -2,\n                    \"description\": \"The key does not exist.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pfadd.json",
    "content": "{\n    \"PFADD\": {\n        \"summary\": \"Adds elements to a HyperLogLog key. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1) to add every element.\",\n        \"group\": \"hyperloglog\",\n        \"since\": \"2.8.9\",\n        \"arity\": -2,\n        \"function\": \"pfaddCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"HYPERLOGLOG\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"if at least 1 HyperLogLog internal register was altered\",\n                    \"const\": 1\n                },\n                {\n                    \"description\": \"if no HyperLogLog internal register were altered\",\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pfcount.json",
    "content": "{\n    \"PFCOUNT\": {\n        \"summary\": \"Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s).\",\n        \"complexity\": \"O(1) with a very small average constant time when called with a single key. O(N) with N being the number of keys, and much bigger constant times, when called with multiple keys.\",\n        \"group\": \"hyperloglog\",\n        \"since\": \"2.8.9\",\n        \"arity\": -2,\n        \"function\": \"pfcountCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"MAY_REPLICATE\"\n        ],\n        \"acl_categories\": [\n            \"HYPERLOGLOG\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"RW because it may change the internal representation of the key, and propagate to replicas\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The approximated number of unique elements observed via PFADD\",\n            \"type\": \"integer\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pfdebug.json",
    "content": "{\n    \"PFDEBUG\": {\n        \"summary\": \"Internal commands for debugging HyperLogLog values.\",\n        \"complexity\": \"N/A\",\n        \"group\": \"hyperloglog\",\n        \"since\": \"2.8.9\",\n        \"arity\": 3,\n        \"function\": \"pfdebugCommand\",\n        \"doc_flags\": [\n            \"SYSCMD\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"ADMIN\"\n        ],\n        \"acl_categories\": [\n            \"HYPERLOGLOG\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"subcommand\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pfmerge.json",
    "content": "{\n    \"PFMERGE\": {\n        \"summary\": \"Merges one or more HyperLogLog values into a single key.\",\n        \"complexity\": \"O(N) to merge N HyperLogLogs, but with high constant times.\",\n        \"group\": \"hyperloglog\",\n        \"since\": \"2.8.9\",\n        \"arity\": -2,\n        \"function\": \"pfmergeCommand\",\n        \"get_keys_function\": \"pfmergeGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"HYPERLOGLOG\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"destkey\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"sourcekey\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                
\"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pfselftest.json",
    "content": "{\n    \"PFSELFTEST\": {\n        \"summary\": \"An internal command for testing HyperLogLog values.\",\n        \"complexity\": \"N/A\",\n        \"group\": \"hyperloglog\",\n        \"since\": \"2.8.9\",\n        \"arity\": 1,\n        \"function\": \"pfselftestCommand\",\n        \"doc_flags\": [\n            \"SYSCMD\"\n        ],\n        \"command_flags\": [\n            \"ADMIN\"\n        ],\n        \"acl_categories\": [\n            \"HYPERLOGLOG\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/ping.json",
    "content": "{\n    \"PING\": {\n        \"summary\": \"Returns the server's liveliness response.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"pingCommand\",\n        \"command_flags\": [\n            \"FAST\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"const\": \"PONG\",\n                    \"description\": \"Default reply.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"Relay of given `message`.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"message\",\n                \"type\": \"string\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/psetex.json",
    "content": "{\n    \"PSETEX\": {\n        \"summary\": \"Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"2.6.0\",\n        \"arity\": 4,\n        \"function\": \"psetexCommand\",\n        \"deprecated_since\": \"2.6.12\",\n        \"replaced_by\": \"`SET` with the `PX` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/psubscribe.json",
    "content": "{\n    \"PSUBSCRIBE\": {\n        \"summary\": \"Listens for messages published to channels that match one or more patterns.\",\n        \"complexity\": \"O(N) where N is the number of patterns to subscribe to.\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.0.0\",\n        \"arity\": -2,\n        \"function\": \"psubscribeCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/psync.json",
    "content": "{\n    \"PSYNC\": {\n        \"summary\": \"An internal command used in replication.\",\n        \"group\": \"server\",\n        \"since\": \"2.8.0\",\n        \"arity\": -3,\n        \"function\": \"syncCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NO_MULTI\",\n            \"NOSCRIPT\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"replicationid\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"offset\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/pttl.json",
    "content": "{\n    \"PTTL\": {\n        \"summary\": \"Returns the expiration time in milliseconds of a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": 2,\n        \"function\": \"pttlCommand\",\n        \"history\": [\n            [\n                \"2.8.0\",\n                \"Added the -2 reply.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"TTL in milliseconds.\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                },\n                {\n                    \"description\": \"The key exists but has no associated expire.\",\n                    \"const\": -1\n                },\n                {\n                    \"description\": \"The key does not exist.\",\n                    \"const\": -2\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/publish.json",
    "content": "{\n    \"PUBLISH\": {\n        \"summary\": \"Posts a message to a channel.\",\n        \"complexity\": \"O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any client).\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.0.0\",\n        \"arity\": 3,\n        \"function\": \"publishCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"MAY_REPLICATE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"channel\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"message\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of clients that received the message. Note that in a Redis Cluster, only clients that are connected to the same node as the publishing client are included in the count\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-channels.json",
    "content": "{\n    \"CHANNELS\": {\n        \"summary\": \"Returns the active channels.\",\n        \"complexity\": \"O(N) where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns)\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.8.0\",\n        \"arity\": -2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"a list of active channels, optionally matching the specified pattern\",\n            \"type\": \"array\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"pubsub\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-numpat.json",
    "content": "{\n    \"NUMPAT\": {\n        \"summary\": \"Returns a count of unique pattern subscriptions.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.8.0\",\n        \"arity\": 2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of patterns all the clients are subscribed to\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-numsub.json",
    "content": "{\n    \"NUMSUB\": {\n        \"summary\": \"Returns a count of subscribers to channels.\",\n        \"complexity\": \"O(N) for the NUMSUB subcommand, where N is the number of requested channels\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.8.0\",\n        \"arity\": -2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"channel\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of subscribers per channel, each even element (including 0th) is channel name, each odd element is the number of subscribers\",\n            \"type\": \"array\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-shardchannels.json",
    "content": "{\n    \"SHARDCHANNELS\": {\n        \"summary\": \"Returns the active shard channels.\",\n        \"complexity\": \"O(N) where N is the number of active shard channels, and assuming constant time pattern matching (relatively short shard channels).\",\n        \"group\": \"pubsub\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"a list of active channels, optionally matching the specified pattern\",\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"string\"\n            },\n            \"uniqueItems\": true\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub-shardnumsub.json",
    "content": "{\n    \"SHARDNUMSUB\": {\n        \"summary\": \"Returns the count of subscribers of shard channels.\",\n        \"complexity\": \"O(N) for the SHARDNUMSUB subcommand, where N is the number of requested shard channels\",\n        \"group\": \"pubsub\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"PUBSUB\",\n        \"function\": \"pubsubCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"shardchannel\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of subscribers per shard channel, each even element (including 0th) is channel name, each odd element is the number of subscribers\",\n            \"type\": \"array\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/pubsub.json",
    "content": "{\n    \"PUBSUB\": {\n        \"summary\": \"A container for Pub/Sub commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.8.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/punsubscribe.json",
    "content": "{\n    \"PUNSUBSCRIBE\": {\n        \"summary\": \"Stops listening to messages published to channels that match one or more patterns.\",\n        \"complexity\": \"O(N) where N is the number of patterns to unsubscribe.\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.0.0\",\n        \"arity\": -1,\n        \"function\": \"punsubscribeCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/quit.json",
    "content": "{\n    \"QUIT\": {\n        \"summary\": \"Closes the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"quitCommand\",\n        \"deprecated_since\": \"7.2.0\",\n        \"replaced_by\": \"just closing the connection\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"ALLOW_BUSY\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"NO_AUTH\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/randomkey.json",
    "content": "{\n    \"RANDOMKEY\": {\n        \"summary\": \"Returns a random key name from the database.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"randomkeyCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"TOUCHES_ARBITRARY_KEYS\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:SPECIAL\",\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"when the database is empty\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"random key in db\",\n                    \"type\": \"string\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/readonly.json",
    "content": "{\n    \"READONLY\": {\n        \"summary\": \"Enables read-only queries for a connection to a Redis Cluster replica node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 1,\n        \"function\": \"readonlyCommand\",\n        \"command_flags\": [\n            \"FAST\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/readwrite.json",
    "content": "{\n    \"READWRITE\": {\n        \"summary\": \"Enables read-write queries for a connection to a Reids Cluster replica node.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"cluster\",\n        \"since\": \"3.0.0\",\n        \"arity\": 1,\n        \"function\": \"readwriteCommand\",\n        \"command_flags\": [\n            \"FAST\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/rename.json",
    "content": "{\n    \"RENAME\": {\n        \"summary\": \"Renames a key and overwrites the destination.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"renameCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"newkey\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/renamenx.json",
    "content": "{\n    \"RENAMENX\": {\n        \"summary\": \"Renames a key only when the target key name doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"renamenxCommand\",\n        \"history\": [\n            [\n                \"3.2.0\",\n                \"The command no longer returns an error when source and destination names are the same.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"newkey\",\n                
\"type\": \"key\",\n                \"key_spec_index\": 1\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"key was renamed to newkey\",\n                    \"const\": 1\n                },\n                {\n                    \"description\": \"new key already exists\",\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/replconf.json",
    "content": "{\n    \"REPLCONF\": {\n        \"summary\": \"An internal command for configuring the replication stream.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"3.0.0\",\n        \"arity\": -1,\n        \"function\": \"replconfCommand\",\n        \"doc_flags\": [\n            \"SYSCMD\"\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"ALLOW_BUSY\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/replicaof.json",
    "content": "{\n    \"REPLICAOF\": {\n        \"summary\": \"Configures a server as replica of another, or promotes it to a master.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"5.0.0\",\n        \"arity\": 3,\n        \"function\": \"replicaofCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"args\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"host-port\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"host\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"port\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"no-one\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"no\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"NO\"\n                            },\n                            {\n                                \"name\": \"one\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"ONE\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"replicaOf status\",\n            \"type\": \"string\",\n            
\"pattern\": \"OK*\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Resets the connection.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"6.2.0\",\n        \"arity\": 1,\n        \"function\": \"resetCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"NO_AUTH\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"RESET\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/restore-asking.json",
    "content": "{\n    \"RESTORE-ASKING\": {\n        \"summary\": \"An internal command for migrating keys in a cluster.\",\n        \"complexity\": \"O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).\",\n        \"group\": \"server\",\n        \"since\": \"3.0.0\",\n        \"arity\": -4,\n        \"function\": \"restoreCommand\",\n        \"history\": [\n            [\n                \"3.0.0\",\n                \"Added the `REPLACE` modifier.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Added the `ABSTTL` modifier.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Added the `IDLETIME` and `FREQ` options.\"\n            ]\n        ],\n        \"doc_flags\": [\n            \"SYSCMD\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"ASKING\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n        
        \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"ttl\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"serialized-value\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"replace\",\n                \"token\": \"REPLACE\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.0\"\n            },\n            {\n                \"name\": \"absttl\",\n                \"token\": \"ABSTTL\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            },\n            {\n                \"token\": \"IDLETIME\",\n                \"name\": \"seconds\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            },\n            {\n                \"token\": \"FREQ\",\n                \"name\": \"frequency\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/restore.json",
    "content": "{\n    \"RESTORE\": {\n        \"summary\": \"Creates a key from the serialized representation of a value.\",\n        \"complexity\": \"O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).\",\n        \"group\": \"generic\",\n        \"since\": \"2.6.0\",\n        \"arity\": -4,\n        \"function\": \"restoreCommand\",\n        \"history\": [\n            [\n                \"3.0.0\",\n                \"Added the `REPLACE` modifier.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Added the `ABSTTL` modifier.\"\n            ],\n            [\n                \"5.0.0\",\n                \"Added the `IDLETIME` and `FREQ` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                
\"key_spec_index\": 0\n            },\n            {\n                \"name\": \"ttl\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"serialized-value\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"replace\",\n                \"token\": \"REPLACE\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.0\"\n            },\n            {\n                \"name\": \"absttl\",\n                \"token\": \"ABSTTL\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            },\n            {\n                \"token\": \"IDLETIME\",\n                \"name\": \"seconds\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            },\n            {\n                \"token\": \"FREQ\",\n                \"name\": \"frequency\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"5.0.0\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/role.json",
    "content": "{\n    \"ROLE\": {\n        \"summary\": \"Returns the replication role.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.8.12\",\n        \"arity\": 1,\n        \"function\": \"roleCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"SENTINEL\"\n        ],\n        \"acl_categories\": [\n            \"ADMIN\",\n            \"DANGEROUS\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"array\",\n                    \"minItems\": 3,\n                    \"maxItems\": 3,\n                    \"items\": [\n                        {\n                            \"const\": \"master\"\n                        },\n                        {\n                            \"description\": \"current replication master offset\",\n                            \"type\": \"integer\"\n                        },\n                        {\n                            \"description\": \"connected replicas\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 3,\n                                \"maxItems\": 3,\n                                \"items\": [\n                                    {\n                                        \"description\": \"replica ip\",\n                                        \"type\": \"string\"\n                                    },\n                                    {\n                                        \"description\": \"replica port\",\n                                        \"type\": \"string\"\n                                    },\n                                    {\n                                        \"description\": \"last acknowledged replication 
offset\",\n                                        \"type\": \"string\"\n                                    }\n                                ]\n                            }\n                        }\n                    ]\n                },\n                {\n                    \"type\": \"array\",\n                    \"minItems\": 5,\n                    \"maxItems\": 5,\n                    \"items\": [\n                        {\n                            \"const\": \"slave\"\n                        },\n                        {\n                            \"description\": \"ip of master\",\n                            \"type\": \"string\"\n                        },\n                        {\n                            \"description\": \"port number of master\",\n                            \"type\": \"integer\"\n                        },\n                        {\n                            \"description\": \"state of the replication from the point of view of the master\",\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"the instance is in handshake with its master\",\n                                    \"const\": \"handshake\"\n                                },\n                                {\n                                    \"description\": \"the instance in not active\",\n                                    \"const\": \"none\"\n                                },\n                                {\n                                    \"description\": \"the instance needs to connect to its master\",\n                                    \"const\": \"connect\"\n                                },\n                                {\n                                    \"description\": \"the master-replica connection is in progress\",\n                                    \"const\": \"connecting\"\n                                },\n                       
         {\n                                    \"description\": \"the master and replica are trying to perform the synchronization\",\n                                    \"const\": \"sync\"\n                                },\n                                {\n                                    \"description\": \"the replica is online\",\n                                    \"const\": \"connected\"\n                                },\n                                {\n                                    \"description\": \"instance state is unknown\",\n                                    \"const\": \"unknown\"\n                                }\n                            ]\n                        },\n                        {\n                            \"description\": \"the amount of data received from the replica so far in terms of master replication offset\",\n                            \"type\": \"integer\"\n                        }\n                    ]\n                },\n                {\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"const\": \"sentinel\"\n                        },\n                        {\n                            \"description\": \"list of master names monitored by this sentinel instance\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"string\"\n                            }\n                        }\n                    ]\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/rpop.json",
    "content": "{\n    \"RPOP\": {\n        \"summary\": \"Returns and removes the last elements of a list. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(N) where N is the number of elements returned\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"rpopCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `count` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Key does not exist.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"When 'COUNT' was not given, the value of the last element.\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"When 'COUNT' was given, list of popped elements.\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": 
\"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/rpoplpush.json",
    "content": "{\n    \"RPOPLPUSH\": {\n        \"summary\": \"Returns the last element of a list after removing and pushing it to another list. Deletes the list if the last element was popped.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"list\",\n        \"since\": \"1.2.0\",\n        \"arity\": 3,\n        \"function\": \"rpoplpushCommand\",\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`LMOVE` with the `RIGHT` and `LEFT` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The element being popped and 
pushed.\"\n                },\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Source list is empty.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/rpush.json",
    "content": "{\n    \"RPUSH\": {\n        \"summary\": \"Appends one or more elements to a list. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\n        \"group\": \"list\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"rpushCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple `element` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Length of the list after the push operations.\",\n            \"type\": \"integer\",\n            \"minimum\": 1\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/rpushx.json",
    "content": "{\n    \"RPUSHX\": {\n        \"summary\": \"Appends an element to a list only when the list exists.\",\n        \"complexity\": \"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\n        \"group\": \"list\",\n        \"since\": \"2.2.0\",\n        \"arity\": -3,\n        \"function\": \"rpushxCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Accepts multiple `element` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"LIST\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Length of the list after the push operation.\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"element\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sadd.json",
    "content": "{\n    \"SADD\": {\n        \"summary\": \"Adds one or more members to a set. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"saddCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple `member` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of elements that were added to the set, not including elements already present in the set.\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/save.json",
    "content": "{\n    \"SAVE\": {\n        \"summary\": \"Synchronously saves the database(s) to disk.\",\n        \"complexity\": \"O(N) where N is the total number of keys in all databases\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"saveCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"NO_MULTI\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/scan.json",
    "content": "{\n    \"SCAN\": {\n        \"summary\": \"Iterates over the key names in the database.\",\n        \"complexity\": \"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\n        \"group\": \"generic\",\n        \"since\": \"2.8.0\",\n        \"arity\": -2,\n        \"function\": \"scanCommand\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"Added the `TYPE` subcommand.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"TOUCHES_ARBITRARY_KEYS\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\",\n            \"REQUEST_POLICY:SPECIAL\",\n            \"RESPONSE_POLICY:SPECIAL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"cursor\",\n                \"type\": \"integer\"\n            },\n            {\n                \"token\": \"MATCH\",\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"TYPE\",\n                \"name\": \"type\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"since\": \"6.0.0\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"cursor and scan response in array form\",\n            \"type\": \"array\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"description\": \"cursor\",\n                    \"type\": \"string\"\n       
         },\n                {\n                    \"description\": \"list of keys\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/scard.json",
    "content": "{\n    \"SCARD\": {\n        \"summary\": \"Returns the number of members in a set.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"scardCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The cardinality (number of elements) of the set, or 0 if key does not exist.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/script-debug.json",
    "content": "{\n    \"DEBUG\": {\n        \"summary\": \"Sets the debug mode of server-side Lua scripts.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"3.2.0\",\n        \"arity\": 3,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"mode\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"yes\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"YES\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\"\n                    },\n                    {\n                        \"name\": \"no\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NO\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script-exists.json",
    "content": "{\n    \"EXISTS\": {\n        \"summary\": \"Determines whether server-side Lua scripts exist in the script cache.\",\n        \"complexity\": \"O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation).\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": -3,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:AGG_LOGICAL_AND\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"sha1\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"An array of integers that correspond to the specified SHA1 digest arguments.\",\n            \"type\": \"array\",\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"sha1 hash exists in script cache\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"sha1 hash does not exist in script cache\",\n                        \"const\": 0\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script-flush.json",
    "content": "{\n    \"FLUSH\": {\n        \"summary\": \"Removes all server-side Lua scripts from the script cache.\",\n        \"complexity\": \"O(N) with N being the number of scripts in cache\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": -2,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `ASYNC` and `SYNC` flushing mode modifiers.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"NOSCRIPT\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"flush-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"async\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASYNC\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script-kill.json",
    "content": "{\n    \"KILL\": {\n        \"summary\": \"Terminates a server-side Lua script during execution.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": 2,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:ONE_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script-load.json",
    "content": "{\n    \"LOAD\": {\n        \"summary\": \"Loads a server-side Lua script to the script cache.\",\n        \"complexity\": \"O(N) with N being the length in bytes of the script body.\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": 3,\n        \"container\": \"SCRIPT\",\n        \"function\": \"scriptCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"SCRIPTING\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"script\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The SHA1 digest of the script added into the script cache\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/script.json",
    "content": "{\n    \"SCRIPT\": {\n        \"summary\": \"A container for Lua scripts management commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"scripting\",\n        \"since\": \"2.6.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/sdiff.json",
    "content": "{\n    \"SDIFF\": {\n        \"summary\": \"Returns the difference of multiple sets.\",\n        \"complexity\": \"O(N) where N is the total number of elements in all given sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"sdiffCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List with the members of the resulting set.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sdiffstore.json",
    "content": "{\n    \"SDIFFSTORE\": {\n        \"summary\": \"Stores the difference of multiple sets in a key.\",\n        \"complexity\": \"O(N) where N is the total number of elements in all given sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"sdiffstoreCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of the elements in the resulting set.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"key\",\n               
 \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/select.json",
    "content": "{\n    \"SELECT\": {\n        \"summary\": \"Changes the selected database.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"connection\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"selectCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"index\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-ckquorum.json",
    "content": "{\n    \"CKQUORUM\": {\n        \"summary\": \"Checks for a Redis Sentinel quorum.\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"Returns OK if the current Sentinel configuration is able to reach the quorum needed to failover a master, and the majority needed to authorize the failover.\",\n            \"pattern\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-config.json",
    "content": "{\n    \"CONFIG\": {\n        \"summary\": \"Configures Redis Sentinel.\",\n        \"complexity\": \"O(N) when N is the number of configuration parameters provided\",\n        \"group\": \"sentinel\",\n        \"since\": \"6.2.0\",\n        \"arity\": -4,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"history\": [\n            [\n                \"7.2.0\",\n                \"Added the ability to set and get multiple parameters in one call.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"object\",\n                    \"description\": \"When 'SENTINEL-CONFIG GET' is called, returns a map.\",\n                    \"properties\": {\n                        \"resolve-hostnames\": {\n                            \"oneOf\": [\n                                {\n                                    \"const\": \"yes\"\n                                },\n                                {\n                                    \"const\": \"no\"\n                                }\n                            ]\n                        },\n                        \"announce-hostnames\": {\n                            \"oneOf\": [\n                                {\n                                    \"const\": \"yes\"\n                                },\n                                {\n                                    \"const\": \"no\"\n                                }\n                            ]\n                        },\n                        \"announce-ip\": {\n                            \"type\": \"string\"\n                        },\n                        \"announce-port\": {\n                            \"type\": \"string\"\n                        },\n                        
\"sentinel-user\": {\n                            \"type\": \"string\"\n                        },\n                        \"sentinel-pass\": {\n                            \"type\": \"string\"\n                        },\n                        \"loglevel\": {\n                            \"oneOf\": [\n                                {\n                                    \"const\": \"debug\"\n                                },\n                                {\n                                    \"const\": \"verbose\"\n                                },\n                                {\n                                    \"const\": \"notice\"\n                                },\n                                {\n                                    \"const\": \"warning\"\n                                },\n                                {\n                                    \"const\": \"nothing\"\n                                },\n                                {\n                                    \"const\": \"unknown\"\n                                }\n                            ]\n                        }\n                    },\n                    \"additionalProperties\": false\n                },\n                {\n                    \"const\": \"OK\",\n                    \"description\": \"When 'SENTINEL-CONFIG SET' is called, returns OK on success.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\":\"action\",\n                \"type\":\"oneof\",\n                \"arguments\":[\n                    {\n                        \"name\":\"set\",\n                        \"token\":\"SET\",\n                        \"type\":\"block\",\n                        \"multiple\": true,\n                        \"arguments\":[\n                            {\n                                \"name\":\"parameter\",\n                                \"type\":\"string\"\n                 
           },\n                            {\n                                \"name\":\"value\",\n                                \"type\":\"string\"\n                            }\n                        ]\n                    },\n                    {\n                        \"token\":\"GET\",\n                        \"name\":\"parameter\",\n                        \"type\":\"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-debug.json",
    "content": "{\n    \"DEBUG\": {\n        \"summary\": \"Lists or updates the current configurable parameters of Redis Sentinel.\",\n        \"complexity\": \"O(N) where N is the number of configurable parameters\",\n        \"group\": \"sentinel\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The configuration update was successful.\",\n                    \"const\": \"OK\"\n                },\n                {\n                    \"description\": \"List of configurable time parameters and their values (milliseconds).\",\n                    \"type\": \"object\",\n                    \"additionalProperties\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"parameter\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-failover.json",
    "content": "{\n    \"FAILOVER\": {\n        \"summary\": \"Forces a Redis Sentinel failover.\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\",\n            \"description\": \"Forces a failover as if the master was not reachable, without asking other Sentinels for agreement.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-flushconfig.json",
    "content": "{\n    \"FLUSHCONFIG\": {\n        \"summary\": \"Rewrites the Redis Sentinel configuration file.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\",\n            \"description\": \"Force Sentinel to rewrite its configuration on disk, including the current Sentinel state.\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-get-master-addr-by-name.json",
    "content": "{\n    \"GET-MASTER-ADDR-BY-NAME\": {\n        \"summary\": \"Returns the port and address of a master Redis instance.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"type\": \"string\",\n                    \"description\": \"IP address or hostname.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"Port.\",\n                    \"pattern\": \"[0-9]+\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-info-cache.json",
    "content": "{\n    \"INFO-CACHE\": {\n        \"summary\": \"Returns the cached `INFO` replies from the deployment's instances.\",\n        \"complexity\": \"O(N) where N is the number of instances\",\n        \"group\": \"sentinel\",\n        \"since\": \"3.2.0\",\n        \"arity\": -3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"This is actually a map: the odd entries are a master name, and the even entries are the last cached INFO output from that master and all its replicas.\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": [\n                {\n                    \"oneOf\": [\n                        {\n                            \"type\": \"string\",\n                            \"description\": \"The master name.\"\n                        },\n                        {\n                            \"type\": \"array\",\n                            \"description\": \"This is an array of pairs: the odd entries are the INFO age, and the even entries are the cached INFO string. The first pair belongs to the master and the rest are its replicas.\",\n                            \"minItems\": 2,\n                            \"maxItems\": 2,\n                            \"items\": [\n                                {\n                                    \"description\": \"The number of milliseconds since the INFO was cached.\",\n                                    \"type\": \"integer\"\n                                },\n                                {\n                                    \"description\": \"The cached INFO string or null.\",\n                                    \"oneOf\": [\n                                        {\n                                            \"description\": \"The cached INFO string.\",\n                                            \"type\": \"string\"\n                                        },\n                                        {\n                                            \"description\": \"No cached INFO string.\",\n                                            \"type\": \"null\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"nodename\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-is-master-down-by-addr.json",
    "content": "{\n    \"IS-MASTER-DOWN-BY-ADDR\": {\n        \"summary\": \"Determines whether a master Redis instance is down.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 6,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"minItems\": 3,\n            \"maxItems\": 3,\n            \"items\": [\n                {\n                    \"oneOf\": [\n                        {\n                            \"const\": 0,\n                            \"description\": \"Master is up.\"\n                        },\n                        {\n                            \"const\": 1,\n                            \"description\": \"Master is down.\"\n                        }\n                    ]\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"Sentinel address.\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"Port.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"ip\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"port\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"current-epoch\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"runid\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-master.json",
    "content": "{\n    \"MASTER\": {\n        \"summary\": \"Returns the state of a master Redis instance.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"object\",\n            \"description\": \"The state and info of the specified master.\",\n            \"additionalProperties\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-masters.json",
    "content": "{\n    \"MASTERS\": {\n        \"summary\": \"Returns a list of monitored Redis masters.\",\n        \"complexity\": \"O(N) where N is the number of masters\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of monitored Redis masters, and their state.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": {\n                    \"type\": \"string\"\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-monitor.json",
    "content": "{\n    \"MONITOR\": {\n        \"summary\": \"Starts monitoring.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 6,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"name\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"ip\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"port\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"quorum\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-myid.json",
    "content": "{\n    \"MYID\": {\n        \"summary\": \"Returns the Redis Sentinel instance ID.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"description\": \"Node ID of the sentinel instance.\",\n            \"type\": \"string\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-pending-scripts.json",
    "content": "{\n    \"PENDING-SCRIPTS\": {\n        \"summary\": \"Returns information about pending scripts for Redis Sentinel.\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 2,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of pending scripts.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"argv\": {\n                        \"type\": \"array\",\n                        \"description\": \"Script arguments.\",\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    },\n                    \"flags\": {\n                        \"type\": \"string\",\n                        \"description\": \"Script flags.\"\n                    },\n                    \"pid\": {\n                        \"type\": \"string\",\n                        \"description\": \"Script pid.\"\n                    },\n                    \"run-time\": {\n                        \"type\": \"string\",\n                        \"description\": \"Script run-time.\"\n                    },\n                    \"run-delay\": {\n                        \"type\": \"string\",\n                        \"description\": \"Script run-delay.\"\n                    },\n                    \"retry-num\": {\n                        \"type\": \"string\",\n                        \"description\": \"Number of times we tried to execute the script.\"\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-remove.json",
    "content": "{\n    \"REMOVE\": {\n        \"summary\": \"Stops monitoring.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-replicas.json",
    "content": "{\n    \"REPLICAS\": {\n        \"summary\": \"Returns a list of the monitored Redis replicas.\",\n        \"complexity\": \"O(N) where N is the number of replicas\",\n        \"group\": \"sentinel\",\n        \"since\": \"5.0.0\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of replicas for this master, and their state.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": {\n                    \"type\": \"string\"\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Resets Redis masters by name matching a pattern.\",\n        \"complexity\": \"O(N) where N is the number of monitored masters\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The number of masters that were reset.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"pattern\",\n                \"type\": \"pattern\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-sentinels.json",
    "content": "{\n    \"SENTINELS\": {\n        \"summary\": \"Returns a list of Sentinel instances.\",\n        \"complexity\": \"O(N) where N is the number of Sentinels\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of sentinel instances, and their state.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": {\n                    \"type\": \"string\"\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-set.json",
    "content": "{\n    \"SET\": {\n        \"summary\": \"Changes the configuration of a monitored Redis master.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": -5,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"option\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-simulate-failure.json",
    "content": "{\n    \"SIMULATE-FAILURE\": {\n        \"summary\": \"Simulates failover scenarios.\",\n        \"group\": \"sentinel\",\n        \"since\": \"3.2.0\",\n        \"arity\": -3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The simulated flag was set.\",\n                    \"const\": \"OK\"\n                },\n                {\n                    \"description\": \"Supported simulation flags, returned when `HELP` is used.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"mode\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"crash-after-election\",\n                        \"type\": \"pure-token\"\n                    },\n                    {\n                        \"name\": \"crash-after-promotion\",\n                        \"type\": \"pure-token\"\n                    },\n                    {\n                        \"name\": \"help\",\n                        \"type\": \"pure-token\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel-slaves.json",
    "content": "{\n    \"SLAVES\": {\n        \"summary\": \"Returns a list of the monitored replicas.\",\n        \"complexity\": \"O(N) where N is the number of replicas.\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.0\",\n        \"arity\": 3,\n        \"container\": \"SENTINEL\",\n        \"function\": \"sentinelCommand\",\n        \"deprecated_since\": \"5.0.0\",\n        \"replaced_by\": \"`SENTINEL REPLICAS`\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of monitored replicas, and their state.\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": {\n                    \"type\": \"string\"\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"master-name\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sentinel.json",
    "content": "{\n    \"SENTINEL\": {\n        \"summary\": \"A container for Redis Sentinel commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"sentinel\",\n        \"since\": \"2.8.4\",\n        \"arity\": -2,\n        \"command_flags\": [\n            \"ADMIN\",\n            \"SENTINEL\",\n            \"ONLY_SENTINEL\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/set.json",
    "content": "{\n    \"SET\": {\n        \"summary\": \"Sets the string value of a key, ignoring its type. The key is created if it doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"setCommand\",\n        \"get_keys_function\": \"setGetKeys\",\n        \"history\": [\n            [\n                \"2.6.12\",\n                \"Added the `EX`, `PX`, `NX` and `XX` options.\"\n            ],\n            [\n                \"6.0.0\",\n                \"Added the `KEEPTTL` option.\"\n            ],\n            [\n                \"6.2.0\",\n                \"Added the `GET`, `EXAT` and `PXAT` options.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Allowed the `NX` and `GET` options to be used together.\"\n            ],\n            [\n                \"8.4.0\",\n                \"Added the `IFEQ`, `IFNE`, `IFDEQ` and `IFDNE` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"RW and ACCESS due to the optional `GET` argument\",\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\",\n                    \"VARIABLE_FLAGS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"`GET` not given: Operation was aborted (conflict with one of the `XX`/`NX`/`IFEQ`/`IFDEQ` options or condition not met).\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"`GET` not given: The key was set.\",\n                    \"const\": \"OK\"\n                },\n                {\n                    \"description\": \"`GET` given: The key didn't exist before the `SET`.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"`GET` given: The previous value of the key.\",\n                    \"type\": \"string\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"2.6.12\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    },\n                    {\n                        \"name\": \"ifeq-value\",\n                        \"type\": \"string\",\n                        \"token\": \"IFEQ\",\n                        \"since\": \"8.4.0\"\n                    },\n                    {\n                        \"name\": \"ifne-value\",\n                        \"type\": \"string\",\n                        \"token\": \"IFNE\",\n                        \"since\": \"8.4.0\"\n                    },\n                    {\n                        \"name\": \"ifdeq-digest\",\n                        \"type\": \"integer\",\n                        \"token\": \"IFDEQ\",\n                        \"since\": \"8.4.0\"\n                    },\n                    {\n                        \"name\": \"ifdne-digest\",\n                        \"type\": \"integer\",\n                        \"token\": \"IFDNE\",\n                        \"since\": \"8.4.0\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"get\",\n                \"token\": \"GET\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n            },\n            {\n                \"name\": \"expiration\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"seconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"EX\",\n                        \"since\": \"2.6.12\"\n                    },\n                    {\n                        \"name\": \"milliseconds\",\n                        \"type\": \"integer\",\n                        \"token\": \"PX\",\n                        \"since\": \"2.6.12\"\n                    },\n                    {\n                        \"name\": \"unix-time-seconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"EXAT\",\n                        \"since\": \"6.2.0\"\n                    },\n                    {\n                        \"name\": \"unix-time-milliseconds\",\n                        \"type\": \"unix-time\",\n                        \"token\": \"PXAT\",\n                        \"since\": \"6.2.0\"\n                    },\n                    {\n                        \"name\": \"keepttl\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"KEEPTTL\",\n                        \"since\": \"6.0.0\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/setbit.json",
    "content": "{\n    \"SETBIT\": {\n        \"summary\": \"Sets or clears the bit at offset of the string value. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"bitmap\",\n        \"since\": \"2.2.0\",\n        \"arity\": 4,\n        \"function\": \"setbitCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"BITMAP\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The original bit value stored at offset.\",\n            \"oneOf\": [\n                {\n                    \"const\": 0\n                },\n                {\n                    \"const\": 1\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"offset\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/setex.json",
    "content": "{\n    \"SETEX\": {\n        \"summary\": \"Sets the string value and expiration time of a key. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"2.0.0\",\n        \"arity\": 4,\n        \"function\": \"setexCommand\",\n        \"deprecated_since\": \"2.6.12\",\n        \"replaced_by\": \"`SET` with the `EX` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"seconds\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/setnx.json",
    "content": "{\n    \"SETNX\": {\n        \"summary\": \"Set the string value of a key only when the key doesn't exist.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"setnxCommand\",\n        \"deprecated_since\": \"2.6.12\",\n        \"replaced_by\": \"`SET` with the `NX` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"The key was set.\",\n                    \"const\": 1\n                },\n                {\n                    \"description\": \"The key was not set.\",\n                    \"const\": 0\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/setrange.json",
    "content": "{\n    \"SETRANGE\": {\n        \"summary\": \"Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the value argument.\",\n        \"group\": \"string\",\n        \"since\": \"2.2.0\",\n        \"arity\": 4,\n        \"function\": \"setrangeCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Length of the string after it was modified by the command.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"offset\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"value\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sflush.json",
    "content": "{\n    \"SFLUSH\": {\n        \"summary\": \"Remove all keys from selected range of slots.\",\n        \"complexity\": \"O(N)+O(k) where N is the number of keys and k is the number of slots.\",\n        \"group\": \"server\",\n        \"since\": \"8.0.0\",\n        \"arity\": -3,\n        \"function\": \"sflushCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"EXPERIMENTAL\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"command_tips\": [\n        ],\n        \"reply_schema\": {\n            \"description\": \"List of slot ranges\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"start slot number\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"description\": \"end slot number\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"slot-start\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"slot-last\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"flush-type\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"async\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ASYNC\"\n                    },\n                    {\n                        \"name\": \"sync\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SYNC\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/shutdown.json",
    "content": "{\n    \"SHUTDOWN\": {\n        \"summary\": \"Synchronously saves the database(s) to disk and shuts down the Redis server.\",\n        \"complexity\": \"O(N) when saving, where N is the total number of keys in all databases; otherwise O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": -1,\n        \"function\": \"shutdownCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `NOW`, `FORCE` and `ABORT` modifiers.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"NO_MULTI\",\n            \"SENTINEL\",\n            \"ALLOW_BUSY\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"save-selector\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"nosave\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NOSAVE\"\n                    },\n                    {\n                        \"name\": \"save\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SAVE\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"now\",\n                \"type\": \"pure-token\",\n                \"token\": \"NOW\",\n                \"optional\": true,\n                \"since\": \"7.0.0\"\n            },\n            {\n                \"name\": \"force\",\n                \"type\": \"pure-token\",\n                \"token\": \"FORCE\",\n                \"optional\": true,\n                \"since\": \"7.0.0\"\n            },\n            {\n                \"name\": \"abort\",\n                \"type\": \"pure-token\",\n                \"token\": \"ABORT\",\n                \"optional\": true,\n                \"since\": \"7.0.0\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"OK if ABORT was specified and shutdown was aborted. On successful shutdown, nothing is returned since the server quits and the connection is closed. On failure, an error is returned.\",\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sinter.json",
    "content": "{\n    \"SINTER\": {\n        \"summary\": \"Returns the intersect of multiple sets.\",\n        \"complexity\": \"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"sinterCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List with the members of the resulting set.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sintercard.json",
    "content": "{\n    \"SINTERCARD\": {\n        \"summary\": \"Returns the number of members of the intersect of multiple sets.\",\n        \"complexity\": \"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.\",\n        \"group\": \"set\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"sinterCardCommand\",\n        \"get_keys_function\": \"sintercardGetKeys\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of the elements in the resulting intersection.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sinterstore.json",
    "content": "{\n    \"SINTERSTORE\": {\n        \"summary\": \"Stores the intersect of multiple sets in a key.\",\n        \"complexity\": \"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"sinterstoreCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of the elements in the result set.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sismember.json",
    "content": "{\n    \"SISMEMBER\": {\n        \"summary\": \"Determines whether a member belongs to a set.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"sismemberCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 0,\n                    \"description\": \"The element is not a member of the set, or the key does not exist.\"\n                },\n                {\n                    \"const\": 1,\n                    \"description\": \"The element is a member of the set.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/slaveof.json",
    "content": "{\n    \"SLAVEOF\": {\n        \"summary\": \"Sets a Redis server as a replica of another, or promotes it to being a master.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 3,\n        \"function\": \"replicaofCommand\",\n        \"deprecated_since\": \"5.0.0\",\n        \"replaced_by\": \"`REPLICAOF`\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NOSCRIPT\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"args\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"host-port\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"host\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"name\": \"port\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"no-one\",\n                        \"type\": \"block\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"no\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"NO\"\n                            },\n                            {\n                                \"name\": \"one\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"ONE\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"slaveOf status\",\n            \"type\": \"string\",\n            \"pattern\": \"OK*\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/slowlog-get.json",
    "content": "{\n    \"GET\": {\n        \"summary\": \"Returns the slow log's entries.\",\n        \"complexity\": \"O(N) where N is the number of entries returned\",\n        \"group\": \"server\",\n        \"since\": \"2.2.12\",\n        \"arity\": -2,\n        \"container\": \"SLOWLOG\",\n        \"function\": \"slowlogCommand\",\n        \"history\": [\n            [\n                \"4.0.0\",\n                \"Added client IP address, port and name to the reply.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Entries from the slow log in chronological order.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 6,\n                \"maxItems\": 6,\n                \"items\": [\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"Slow log entry ID.\"\n                    },\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"The unix timestamp at which the logged command was processed.\",\n                        \"minimum\": 0\n                    },\n                    {\n                        \"type\": \"integer\",\n                        \"description\": \"The amount of time needed for its execution, in microseconds.\",\n                        \"minimum\": 0\n                    },\n                    {\n                        \"type\": \"array\",\n                        \"description\": \"The arguments of the command.\",\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    },\n                    {\n                        \"type\": \"string\",\n                        \"description\": \"Client IP address and port.\"\n                    },\n                    {\n                        \"type\": \"string\",\n                        \"description\": \"Client name if set via the CLIENT SETNAME command.\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/slowlog-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Show helpful text about the different subcommands\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"6.2.0\",\n        \"arity\": 2,\n        \"container\": \"SLOWLOG\",\n        \"function\": \"slowlogCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/slowlog-len.json",
    "content": "{\n    \"LEN\": {\n        \"summary\": \"Returns the number of entries in the slow log.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.2.12\",\n        \"arity\": 2,\n        \"container\": \"SLOWLOG\",\n        \"function\": \"slowlogCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:AGG_SUM\",\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of entries in the slow log.\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/slowlog-reset.json",
    "content": "{\n    \"RESET\": {\n        \"summary\": \"Clears all entries from the slow log.\",\n        \"complexity\": \"O(N) where N is the number of entries in the slowlog\",\n        \"group\": \"server\",\n        \"since\": \"2.2.12\",\n        \"arity\": 2,\n        \"container\": \"SLOWLOG\",\n        \"function\": \"slowlogCommand\",\n        \"command_flags\": [\n            \"ADMIN\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_NODES\",\n            \"RESPONSE_POLICY:ALL_SUCCEEDED\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/slowlog.json",
    "content": "{\n    \"SLOWLOG\": {\n        \"summary\": \"A container for slow log commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"server\",\n        \"since\": \"2.2.12\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/smembers.json",
    "content": "{\n    \"SMEMBERS\": {\n        \"summary\": \"Returns all members of a set.\",\n        \"complexity\": \"O(N) where N is the set cardinality.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"smembersCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"All elements of the set.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/smismember.json",
    "content": "{\n    \"SMISMEMBER\": {\n        \"summary\": \"Determines whether multiple members belong to a set.\",\n        \"complexity\": \"O(N) where N is the number of elements being checked for membership\",\n        \"group\": \"set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"smismemberCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List representing the membership of the given elements, in the same order as they are requested.\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"const\": 0,\n                        \"description\": \"Not a member of the set or the key does not exist.\"\n                    },\n                    {\n                        \"const\": 1,\n                        \"description\": \"A member of the set.\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/smove.json",
    "content": "{\n    \"SMOVE\": {\n        \"summary\": \"Moves a member from one set to another.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"smoveCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"const\": 1,\n                    \"description\": \"Element is moved.\"\n                },\n                {\n                    \"const\": 0,\n                    \"description\": \"The element is not a member of source and no operation was performed.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"source\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sort.json",
    "content": "{\n    \"SORT\": {\n        \"summary\": \"Sorts the elements in a list, a set, or a sorted set, optionally storing the result.\",\n        \"complexity\": \"O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N).\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"sortCommand\",\n        \"get_keys_function\": \"sortGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SET\",\n            \"SORTEDSET\",\n            \"LIST\",\n            \"DANGEROUS\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"notes\": \"For the optional BY/GET keyword. It is marked 'unknown' because the key names derive from the content of the key we sort\",\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"unknown\": null\n                },\n                \"find_keys\": {\n                    \"unknown\": null\n                }\n            },\n            {\n                \"notes\": \"For the optional STORE keyword. 
It is marked 'unknown' because the keyword can appear anywhere in the argument array\",\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"unknown\": null\n                },\n                \"find_keys\": {\n                    \"unknown\": null\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"token\": \"BY\",\n                \"name\": \"by-pattern\",\n                \"display\": \"pattern\",\n                \"type\": \"pattern\",\n                \"key_spec_index\": 1,\n                \"optional\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"GET\",\n                \"name\": \"get-pattern\",\n                \"display\": \"pattern\",\n                \"key_spec_index\": 1,\n                \"type\": \"pattern\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"multiple_token\": true\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": \"pure-token\",\n 
                       \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"sorting\",\n                \"token\": \"ALPHA\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"STORE\",\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 2,\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"when the store option is specified the command returns the number of sorted elements in the destination list\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                },\n                {\n                    \"description\": \"when not passing the store option the command returns a list of sorted elements\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"oneOf\": [\n                            {\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"GET option is specified, but no object was found\",\n                                \"type\": \"null\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sort_ro.json",
    "content": "{\n    \"SORT_RO\": {\n        \"summary\": \"Returns the sorted elements of a list, a set, or a sorted set.\",\n        \"complexity\": \"O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N).\",\n        \"group\": \"generic\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"function\": \"sortroCommand\",\n        \"get_keys_function\": \"sortROGetKeys\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\",\n            \"SORTEDSET\",\n            \"LIST\",\n            \"DANGEROUS\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"notes\": \"For the optional BY/GET keyword. 
It is marked 'unknown' because the key names derive from the content of the key we sort\",\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"unknown\": null\n                },\n                \"find_keys\": {\n                    \"unknown\": null\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"token\": \"BY\",\n                \"name\": \"by-pattern\",\n                \"display\": \"pattern\",\n                \"type\": \"pattern\",\n                \"key_spec_index\": 1,\n                \"optional\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"GET\",\n                \"name\": \"get-pattern\",\n                \"display\": \"pattern\",\n                \"key_spec_index\": 1,\n                \"type\": \"pattern\",\n                \"optional\": true,\n                \"multiple\": true,\n                \"multiple_token\": true\n            },\n            {\n                \"name\": \"order\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"asc\",\n                        \"type\": 
\"pure-token\",\n                        \"token\": \"ASC\"\n                    },\n                    {\n                        \"name\": \"desc\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DESC\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"sorting\",\n                \"token\": \"ALPHA\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"a list of sorted elements\",\n            \"type\": \"array\",\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"description\": \"GET option is specified, but no object was found\",\n                        \"type\": \"null\"\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/spop.json",
    "content": "{\n    \"SPOP\": {\n        \"summary\": \"Returns one or more random members from a set after removing them. Deletes the set if the last member was popped.\",\n        \"complexity\": \"Without the count argument O(1), otherwise O(N) where N is the value of the passed count.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"spopCommand\",\n        \"history\": [\n            [\n                \"3.2.0\",\n                \"Added the `count` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"The key does not exist.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"The removed member when 'COUNT' is not given.\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List to the removed members when 'COUNT' is given.\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n         
               \"type\": \"string\"\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"3.2.0\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/spublish.json",
    "content": "{\n    \"SPUBLISH\": {\n        \"summary\": \"Post a message to a shard channel\",\n        \"complexity\": \"O(N) where N is the number of clients subscribed to the receiving shard channel.\",\n        \"group\": \"pubsub\",\n        \"since\": \"7.0.0\",\n        \"arity\": 3,\n        \"function\": \"spublishCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"MAY_REPLICATE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"shardchannel\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"message\",\n                \"type\": \"string\"\n            }\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"NOT_KEY\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of clients that received the message. Note that in a Redis Cluster, only clients that are connected to the same node as the publishing client are included in the count\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/srandmember.json",
    "content": "{\n    \"SRANDMEMBER\": {\n        \"summary\": \"Get one or multiple random members from a set\",\n        \"complexity\": \"Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"srandmemberCommand\",\n        \"history\": [\n            [\n                \"2.6.0\",\n                \"Added the optional `count` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"2.6.0\"\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"In case `count` is not given and key doesn't exist\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"In case `count` is not given, randomly selected 
element\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"In case `count` is given, an array of elements\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    },\n                    \"minItems\": 1\n                },\n                {\n                    \"description\": \"In case `count` is given and key doesn't exist\",\n                    \"type\": \"array\",\n                    \"maxItems\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/srem.json",
    "content": "{\n    \"SREM\": {\n        \"summary\": \"Removes one or more members from a set. Deletes the set if the last member was removed.\",\n        \"complexity\": \"O(N) where N is the number of members to be removed.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"sremCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple `member` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of members that were removed from the set, not including non existing members.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sscan.json",
    "content": "{\n    \"SSCAN\": {\n        \"summary\": \"Iterates over members of a set.\",\n        \"complexity\": \"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\n        \"group\": \"set\",\n        \"since\": \"2.8.0\",\n        \"arity\": -3,\n        \"function\": \"sscanCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"cursor\",\n                \"type\": \"integer\"\n            },\n            {\n                \"token\": \"MATCH\",\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"cursor and scan response in array form\",\n            \"type\": \"array\",\n            
\"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"description\": \"cursor\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"list of set members\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/ssubscribe.json",
    "content": "{\n    \"SSUBSCRIBE\": {\n        \"summary\": \"Listens for messages published to shard channels.\",\n        \"complexity\": \"O(N) where N is the number of shard channels to subscribe to.\",\n        \"group\": \"pubsub\",\n        \"since\": \"7.0.0\",\n        \"arity\": -2,\n        \"function\": \"ssubscribeCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"shardchannel\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"NOT_KEY\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/strlen.json",
    "content": "{\n    \"STRLEN\": {\n        \"summary\": \"Returns the length of a string value.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"string\",\n        \"since\": \"2.2.0\",\n        \"arity\": 2,\n        \"function\": \"strlenCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The length of the string value stored at key, or 0 when key does not exist.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/subscribe.json",
    "content": "{\n    \"SUBSCRIBE\": {\n        \"summary\": \"Listens for messages published to channels.\",\n        \"complexity\": \"O(N) where N is the number of channels to subscribe to.\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.0.0\",\n        \"arity\": -2,\n        \"function\": \"subscribeCommand\",\n        \"history\": [],\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"channel\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/substr.json",
    "content": "{\n    \"SUBSTR\": {\n        \"summary\": \"Returns a substring from a string value.\",\n        \"complexity\": \"O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.\",\n        \"group\": \"string\",\n        \"since\": \"1.0.0\",\n        \"arity\": 4,\n        \"function\": \"getrangeCommand\",\n        \"deprecated_since\": \"2.0.0\",\n        \"replaced_by\": \"`GETRANGE`\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STRING\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"string\",\n            \"description\": \"The substring of the string value stored at key, determined by the offsets start and end (both are inclusive).\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"end\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sunion.json",
    "content": "{\n    \"SUNION\": {\n        \"summary\": \"Returns the union of multiple sets.\",\n        \"complexity\": \"O(N) where N is the total number of elements in all given sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -2,\n        \"function\": \"sunionCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT_ORDER\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List with the members of the resulting set.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sunionstore.json",
    "content": "{\n    \"SUNIONSTORE\": {\n        \"summary\": \"Stores the union of multiple sets in a key.\",\n        \"complexity\": \"O(N) where N is the total number of elements in all given sets.\",\n        \"group\": \"set\",\n        \"since\": \"1.0.0\",\n        \"arity\": -3,\n        \"function\": \"sunionstoreCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of the elements in the resulting set.\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"key\",\n                
\"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/sunsubscribe.json",
    "content": "{\n    \"SUNSUBSCRIBE\": {\n        \"summary\": \"Stops listening to messages posted to shard channels.\",\n        \"complexity\": \"O(N) where N is the number of shard channels to unsubscribe.\",\n        \"group\": \"pubsub\",\n        \"since\": \"7.0.0\",\n        \"arity\": -1,\n        \"function\": \"sunsubscribeCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"shardchannel\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"NOT_KEY\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/swapdb.json",
    "content": "{\n    \"SWAPDB\": {\n        \"summary\": \"Swaps two Redis databases.\",\n        \"complexity\": \"O(N) where N is the count of clients watching or blocking on keys from both databases.\",\n        \"group\": \"server\",\n        \"since\": \"4.0.0\",\n        \"arity\": 3,\n        \"function\": \"swapdbCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"index1\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"index2\",\n                \"type\": \"integer\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/sync.json",
    "content": "{\n    \"SYNC\": {\n        \"summary\": \"An internal command used in replication.\",\n        \"group\": \"server\",\n        \"since\": \"1.0.0\",\n        \"arity\": 1,\n        \"function\": \"syncCommand\",\n        \"command_flags\": [\n            \"NO_ASYNC_LOADING\",\n            \"ADMIN\",\n            \"NO_MULTI\",\n            \"NOSCRIPT\"\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/time.json",
    "content": "{\n    \"TIME\": {\n        \"summary\": \"Returns the server time.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"server\",\n        \"since\": \"2.6.0\",\n        \"arity\": 1,\n        \"function\": \"timeCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Array containing two elements: Unix time in seconds and microseconds.\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": {\n                \"type\": \"string\",\n                \"pattern\": \"[0-9]+\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/touch.json",
    "content": "{\n    \"TOUCH\": {\n        \"summary\": \"Returns the number of existing keys out of those specified after updating the time they were last accessed.\",\n        \"complexity\": \"O(N) where N is the number of keys that will be touched.\",\n        \"group\": \"generic\",\n        \"since\": \"3.2.1\",\n        \"arity\": -2,\n        \"function\": \"touchCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of touched keys\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/trimslots.json",
    "content": "{\n    \"TRIMSLOTS\": {\n        \"summary\": \"Trim the keys that belong to specified slots.\",\n        \"complexity\": \"O(N) where N is the total number of keys in all databases\",\n        \"group\": \"server\",\n        \"since\": \"8.4.0\",\n        \"arity\": -5,\n        \"function\": \"trimslotsCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\",\n            \"DANGEROUS\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"ranges\",\n                \"token\": \"RANGES\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numranges\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"slots\",\n                        \"type\": \"block\",\n                        \"multiple\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"startslot\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"name\": \"endslot\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/ttl.json",
    "content": "{\n    \"TTL\": {\n        \"summary\": \"Returns the expiration time in seconds of a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"ttlCommand\",\n        \"history\": [\n            [\n                \"2.8.0\",\n                \"Added the -2 reply.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"TTL in seconds.\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                },\n                {\n                    \"description\": \"The key exists but has no associated expire.\",\n                    \"const\": -1\n                },\n                {\n                    \"description\": \"The key does not exist.\",\n                    \"const\": -2\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/type.json",
    "content": "{\n    \"TYPE\": {\n        \"summary\": \"Determines the type of value stored at a key.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"1.0.0\",\n        \"arity\": 2,\n        \"function\": \"typeCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Key doesn't exist\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"Type of the key\",\n                    \"type\": \"string\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/unlink.json",
    "content": "{\n    \"UNLINK\": {\n        \"summary\": \"Asynchronously deletes one or more keys.\",\n        \"complexity\": \"O(1) for each key removed regardless of its size. Then the command does O(N) work in a different thread in order to reclaim memory, where N is the number of allocations the deleted objects where composed of.\",\n        \"group\": \"generic\",\n        \"since\": \"4.0.0\",\n        \"arity\": -2,\n        \"function\": \"unlinkCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"KEYSPACE\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:MULTI_SHARD\",\n            \"RESPONSE_POLICY:AGG_SUM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RM\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"the number of keys that were unlinked\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/unsubscribe.json",
    "content": "{\n    \"UNSUBSCRIBE\": {\n        \"summary\": \"Stops listening to messages posted to channels.\",\n        \"complexity\": \"O(N) where N is the number of channels to unsubscribe.\",\n        \"group\": \"pubsub\",\n        \"since\": \"2.0.0\",\n        \"arity\": -1,\n        \"function\": \"unsubscribeCommand\",\n        \"command_flags\": [\n            \"PUBSUB\",\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"SENTINEL\"\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"channel\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/unwatch.json",
    "content": "{\n    \"UNWATCH\": {\n        \"summary\": \"Forgets about watched keys of a transaction.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"transactions\",\n        \"since\": \"2.2.0\",\n        \"arity\": 1,\n        \"function\": \"unwatchCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"TRANSACTION\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/wait.json",
    "content": "{\n    \"WAIT\": {\n        \"summary\": \"Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"3.0.0\",\n        \"arity\": 3,\n        \"function\": \"waitCommand\",\n        \"command_flags\": [\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:AGG_MIN\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"The number of replicas reached by all the writes performed in the context of the current connection.\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numreplicas\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/waitaof.json",
    "content": "{\n    \"WAITAOF\": {\n        \"summary\": \"Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"generic\",\n        \"since\": \"7.2.0\",\n        \"arity\": 4,\n        \"function\": \"waitaofCommand\",\n        \"command_flags\": [\n            \"BLOCKING\"\n        ],\n        \"acl_categories\": [\n            \"CONNECTION\"\n        ],\n        \"command_tips\": [\n            \"REQUEST_POLICY:ALL_SHARDS\",\n            \"RESPONSE_POLICY:AGG_MIN\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Number of local and remote AOF files in sync.\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"description\": \"Number of local AOF files.\",\n                    \"type\": \"integer\",\n                    \"minimum\": 0\n                },\n                {\n                    \"description\": \"Number of replica AOF files.\",\n                    \"type\": \"number\",\n                    \"minimum\": 0\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numlocal\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"numreplicas\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"timeout\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/watch.json",
    "content": "{\n    \"WATCH\": {\n        \"summary\": \"Monitors changes to keys to determine the execution of a transaction.\",\n        \"complexity\": \"O(1) for every key.\",\n        \"group\": \"transactions\",\n        \"since\": \"2.2.0\",\n        \"arity\": -2,\n        \"function\": \"watchCommand\",\n        \"command_flags\": [\n            \"NOSCRIPT\",\n            \"LOADING\",\n            \"STALE\",\n            \"FAST\",\n            \"ALLOW_BUSY\"\n        ],\n        \"acl_categories\": [\n            \"TRANSACTION\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xack.json",
    "content": "{\n    \"XACK\": {\n        \"summary\": \"Returns the number of messages that were successfully acknowledged by the consumer group member of a stream.\",\n        \"complexity\": \"O(1) for each message ID processed.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -4,\n        \"function\": \"xackCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"ID\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The command returns the number of messages successfully acknowledged. Certain message IDs may no longer be part of the PEL (for example because they have already been acknowledged), and XACK will not count them as successfully acknowledged.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xackdel.json",
    "content": "{\n    \"XACKDEL\": {\n        \"summary\": \"Acknowledges and deletes one or multiple messages for a stream consumer group.\",\n        \"complexity\": \"O(1) for each message ID processed.\",\n        \"group\": \"stream\",\n        \"since\": \"8.2.0\",\n        \"arity\": -6,\n        \"function\": \"xackdelCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"keepref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"KEEPREF\"\n                    },\n                    {\n                        \"name\": \"delref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DELREF\"\n                    },\n                    {\n                        \"name\": \"acked\",\n                   
     \"type\": \"pure-token\",\n                        \"token\": \"ACKED\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"ids\",\n                \"token\": \"IDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numids\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. Returns an array with -1 for each requested ID if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The id does not exist in the provided stream key.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"Entry was acknowledged and deleted from the stream.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Entry was acknowledged but not deleted, there are still dangling references.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xadd.json",
    "content": "{\n    \"XADD\": {\n        \"summary\": \"Appends a new message to a stream. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(1) when adding a new entry, O(N) when trimming where N being the number of entries evicted.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -5,\n        \"function\": \"xaddCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `NOMKSTREAM` option, `MINID` trimming strategy and the `LIMIT` option.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added support for the `<ms>-*` explicit ID form.\"\n            ],\n            [\n                \"8.2.0\",\n                \"Added the `KEEPREF`, `DELREF` and `ACKED` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"notes\": \"UPDATE instead of INSERT because of the optional trimming feature\",\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"token\": \"NOMKSTREAM\",\n             
   \"name\": \"nomkstream\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"keepref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"KEEPREF\"\n                    },\n                    {\n                        \"name\": \"delref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DELREF\"\n                    },\n                    {\n                        \"name\": \"acked\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ACKED\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"idmp\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"8.6.0\",\n                \"arguments\": [\n                    {\n                        \"token\": \"IDMPAUTO\",\n                        \"type\": \"string\",\n                        \"name\": \"pid\",\n                        \"display_text\": \"producer-id\"\n                    },\n                    {\n                        \"token\": \"IDMP\",\n                        \"type\": \"block\",\n                        \"name\": \"idmp\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"pid\",\n                                \"type\": \"string\",\n                                \"display_text\": \"producer-id\"\n                            },\n                            {\n                                \"name\": \"iid\",\n                                \"type\": \"string\",\n                                
\"display_text\": \"idempotent-id\"\n                            }\n                        ]\n                    }\n                ]\n            },\n            {\n                \"name\": \"trim\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"strategy\",\n                        \"type\": \"oneof\",\n                        \"arguments\": [\n                            {\n                                \"name\": \"maxlen\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"MAXLEN\"\n                            },\n                            {\n                                \"name\": \"minid\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"MINID\",\n                                \"since\": \"6.2.0\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"operator\",\n                        \"type\": \"oneof\",\n                        \"optional\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"equal\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"=\"\n                            },\n                            {\n                                \"name\": \"approximately\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"~\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"threshold\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"token\": \"LIMIT\",\n                        
\"name\": \"count\",\n                        \"type\": \"integer\",\n                        \"optional\": true,\n                        \"since\": \"6.2.0\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"id-selector\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"auto-id\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"*\"\n                    },\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"field\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"value\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\":[\n                {\n                    \"description\": \"The ID of the added entry. The ID is the one auto-generated if * is passed as ID argument, otherwise the command just returns the same ID specified by the user during insertion.\",\n                    \"type\": \"string\",\n                    \"pattern\": \"[0-9]+-[0-9]+\"\n                },\n                {\n                    \"description\": \"The NOMKSTREAM option is given and the key doesn't exist.\",\n                    \"type\": \"null\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xautoclaim.json",
    "content": "{\n    \"XAUTOCLAIM\": {\n        \"summary\": \"Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to as consumer group member.\",\n        \"complexity\": \"O(1) if COUNT is small.\",\n        \"group\": \"stream\",\n        \"since\": \"6.2.0\",\n        \"arity\": -6,\n        \"function\": \"xautoclaimCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added an element to the reply array, containing deleted entries the command cleared from the PEL\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"Claimed stream entries (with data, if `JUSTID` was not given).\",\n                    \"type\": \"array\",\n                    \"minItems\": 3,\n                    \"maxItems\": 3,\n                    \"items\": [\n                        {\n                            \"description\": \"Cursor for next call.\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        {\n             
               \"type\": \"array\",\n                            \"uniqueItems\": true,\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"description\": \"Entry ID\",\n                                        \"type\": \"string\",\n                                        \"pattern\": \"[0-9]+-[0-9]+\"\n                                    },\n                                    {\n                                        \"description\": \"Data\",\n                                        \"type\": \"array\",\n                                        \"items\": {\n                                            \"type\": \"string\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        {\n                            \"description\": \"Entry IDs which no longer exist in the stream, and were deleted from the PEL in which they were found.\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"string\",\n                                \"pattern\": \"[0-9]+-[0-9]+\"\n                            }\n                        }\n                    ]\n                },\n                {\n                    \"description\": \"Claimed stream entries (without data, if `JUSTID` was given).\",\n                    \"type\": \"array\",\n                    \"minItems\": 3,\n                    \"maxItems\": 3,\n                    \"items\": [\n                        {\n                            \"description\": \"Cursor for next call.\",\n                            \"type\": 
\"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        {\n                            \"type\": \"array\",\n                            \"uniqueItems\": true,\n                            \"items\": {\n                                \"type\": \"string\",\n                                \"pattern\": \"[0-9]+-[0-9]+\"\n                            }\n                        },\n                        {\n                            \"description\": \"Entry IDs which no longer exist in the stream, and were deleted from the PEL in which they were found.\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"string\",\n                                \"pattern\": \"[0-9]+-[0-9]+\"\n                            }\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"consumer\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"min-idle-time\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"string\"\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"justid\",\n                \"token\": \"JUSTID\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xcfgset.json",
    "content": "{\n    \"XCFGSET\": {\n        \"summary\": \"Sets the IDMP configuration parameters for a stream.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"8.6.0\",\n        \"arity\": -2,\n        \"function\": \"xcfgsetCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        },\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"idmp-duration\",\n                \"type\": \"integer\",\n                \"token\": \"IDMP-DURATION\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"idmp-maxsize\",\n                \"type\": \"integer\",\n                \"token\": \"IDMP-MAXSIZE\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xclaim.json",
    "content": "{\n    \"XCLAIM\": {\n        \"summary\": \"Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member.\",\n        \"complexity\": \"O(log N) with N being the number of messages in the PEL of the consumer group.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -6,\n        \"function\": \"xclaimCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"consumer\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"min-idle-time\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"ID\",\n                \"type\": \"string\",\n                \"multiple\": true\n            },\n            {\n                \"token\": \"IDLE\",\n                \"name\": \"ms\",\n                \"type\": \"integer\",\n            
    \"optional\": true\n            },\n            {\n                \"token\": \"TIME\",\n                \"name\": \"unix-time-milliseconds\",\n                \"type\": \"unix-time\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"RETRYCOUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"force\",\n                \"token\": \"FORCE\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"justid\",\n                \"token\": \"JUSTID\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"lastid\",\n                \"token\": \"LASTID\",\n                \"type\": \"string\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Stream entries with IDs matching the specified range.\",\n            \"anyOf\": [\n                {\n                    \"description\": \"If JUSTID option is specified, return just an array of IDs of messages successfully claimed\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"description\": \"Entry ID\",\n                        \"type\": \"string\",\n                        \"pattern\": \"[0-9]+-[0-9]+\"\n                    }\n                },\n                {\n                    \"description\": \"array of stream entries that contains each entry as an array of 2 elements, the Entry ID and the entry data itself\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        
\"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"Entry ID\",\n                                \"type\": \"string\",\n                                \"pattern\": \"[0-9]+-[0-9]+\"\n                            },\n                            {\n                                \"description\": \"Data\",\n                                \"type\": \"array\",\n                                \"items\": {\n                                    \"type\": \"string\"\n                                }\n                            }\n                        ]\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xdel.json",
    "content": "{\n    \"XDEL\": {\n        \"summary\": \"Returns the number of messages after removing them from a stream.\",\n        \"complexity\": \"O(1) for each single item to delete in the stream, regardless of the stream size.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"function\": \"xdelCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"ID\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of entries actually deleted\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xdelex.json",
    "content": "{\n    \"XDELEX\": {\n        \"summary\": \"Deletes one or multiple entries from the stream.\",\n        \"complexity\": \"O(1) for each single item to delete in the stream, regardless of the stream size.\",\n        \"group\": \"stream\",\n        \"since\": \"8.2.0\",\n        \"arity\": -5,\n        \"function\": \"xdelexCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"keepref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"KEEPREF\"\n                    },\n                    {\n                        \"name\": \"delref\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"DELREF\"\n                    },\n                    {\n                        \"name\": \"acked\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"ACKED\"\n                    }\n                ]\n      
      },\n            {\n                \"name\": \"ids\",\n                \"token\": \"IDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numids\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array of results. Returns an array with -1 for each requested ID if the key does not exist.\",\n            \"type\": \"array\",\n            \"minItems\": 0,\n            \"maxItems\": 4294967295,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"description\": \"The id does not exist in the provided stream key.\",\n                        \"const\": -1\n                    },\n                    {\n                        \"description\": \"Entry was deleted from the stream.\",\n                        \"const\": 1\n                    },\n                    {\n                        \"description\": \"Entry was not deleted, but there are still dangling references.\",\n                        \"const\": 2\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-create.json",
    "content": "{\n    \"CREATE\": {\n        \"summary\": \"Creates a consumer group.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -5,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `entries_read` named argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"id-selector\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"new-id\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"$\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"MKSTREAM\",\n                
\"name\": \"mkstream\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"entriesread\",\n                \"display\": \"entries-read\",\n                \"token\": \"ENTRIESREAD\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-createconsumer.json",
    "content": "{\n    \"CREATECONSUMER\": {\n        \"summary\": \"Creates a consumer in a consumer group.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"6.2.0\",\n        \"arity\": 5,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"INSERT\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"consumer\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of created consumers (0 or 1)\",\n            \"oneOf\": [\n                {\n                    \"const\": 1\n                },\n                {\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-delconsumer.json",
    "content": "{\n    \"DELCONSUMER\": {\n        \"summary\": \"Deletes a consumer from a consumer group.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 5,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"consumer\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of pending messages that were still associated with the consumer when it was deleted\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-destroy.json",
    "content": "{\n    \"DESTROY\": {\n        \"summary\": \"Destroys a consumer group.\",\n        \"complexity\": \"O(N) where N is the number of entries in the group's pending entries list (PEL).\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 4,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of destroyed consumer groups (0 or 1)\",\n            \"oneOf\": [\n                {\n                    \"const\": 1\n                },\n                {\n                    \"const\": 0\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup-setid.json",
    "content": "{\n    \"SETID\": {\n        \"summary\": \"Sets the last-delivered ID of a consumer group.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -5,\n        \"container\": \"XGROUP\",\n        \"function\": \"xgroupCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the optional `entries_read` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"id-selector\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"new-id\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"$\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"entriesread\",\n                
\"display\": \"entries-read\",\n                \"token\": \"ENTRIESREAD\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xgroup.json",
    "content": "{\n    \"XGROUP\": {\n        \"summary\": \"A container for consumer group commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/xidmprecord.json",
    "content": "{\n    \"XIDMPRECORD\": {\n        \"summary\": \"An internal command for setting IDMP metadata on an existing stream message.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"8.6.2\",\n        \"arity\": 5,\n        \"function\": \"xidmprecordCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"pid\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"iid\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"stream-id\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xinfo-consumers.json",
    "content": "{\n    \"CONSUMERS\": {\n        \"summary\": \"Returns a list of the consumers in a consumer group.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 4,\n        \"container\": \"XINFO\",\n        \"function\": \"xinfoCommand\",\n        \"history\": [\n            [\n                \"7.2.0\",\n                \"Added the `inactive` field, and changed the meaning of `idle`.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Array list of consumers\",\n            \"type\": \"array\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"name\": {\n                        \"type\": \"string\"\n                    },\n                    \"pending\": 
{\n                        \"type\": \"integer\"\n                    },\n                    \"idle\": {\n                        \"type\": \"integer\"\n                    },\n                    \"inactive\": {\n                        \"type\": \"integer\"\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xinfo-groups.json",
    "content": "{\n    \"GROUPS\": {\n        \"summary\": \"Returns a list of the consumer groups of a stream.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 3,\n        \"container\": \"XINFO\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `entries-read` and `lag` fields\"\n            ]\n        ],\n        \"function\": \"xinfoCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"items\": {\n                \"type\": \"object\",\n                \"additionalProperties\": false,\n                \"properties\": {\n                    \"name\": {\n                        \"type\": \"string\"\n                    },\n                    \"consumers\": {\n                        \"type\": \"integer\"\n                    },\n                    \"pending\": {\n                        \"type\": \"integer\"\n                    },\n                    \"last-delivered-id\": {\n                        \"type\": \"string\",\n                        \"pattern\": \"[0-9]+-[0-9]+\"\n                    },\n                    \"entries-read\": {\n                        \"oneOf\": [\n                            {\n                        
        \"type\": \"null\"\n                            },\n                            {\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    },\n                    \"lag\": {\n                        \"oneOf\": [\n                            {\n                                \"type\": \"null\"\n                            },\n                            {\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    }\n                }\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xinfo-help.json",
    "content": "{\n    \"HELP\": {\n        \"summary\": \"Returns helpful text about the different subcommands.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"container\": \"XINFO\",\n        \"function\": \"xinfoCommand\",\n        \"command_flags\": [\n            \"LOADING\",\n            \"STALE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"Helpful text about subcommands.\",\n            \"items\": {\n                \"type\": \"string\"\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xinfo-stream.json",
    "content": "{\n    \"STREAM\": {\n        \"summary\": \"Returns information about a stream.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"container\": \"XINFO\",\n        \"history\": [\n            [\n                \"6.0.0\",\n                \"Added the `FULL` modifier.\"\n            ],\n            [\n                \"7.0.0\",\n                \"Added the `max-deleted-entry-id`, `entries-added`, `recorded-first-entry-id`, `entries-read` and `lag` fields\"\n            ],\n            [\n                \"7.2.0\",\n                \"Added the `active-time` field, and changed the meaning of `seen-time`.\"\n            ],\n            [\n                \"8.6.0\",\n                \"Added the `idmp-duration`, `idmp-maxsize`, `pids-tracked`, `iids-tracked`, `iids-added` and `iids-duplicates` fields for IDMP tracking.\"\n            ],\n            [\n                \"8.8.0\",\n                \"Added the `nacked-count` field to consumer groups in `FULL` output.\"\n            ]\n        ],\n        \"function\": \"xinfoCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Summary form, in case `FULL` was not given.\",\n  
                  \"type\": \"object\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"length\": {\n                            \"description\": \"the number of entries in the stream (see `XLEN`)\",\n                            \"type\": \"integer\"\n                        },\n                        \"radix-tree-keys\": {\n                            \"description\": \"the number of keys in the underlying radix data structure\",\n                            \"type\": \"integer\"\n                        },\n                        \"radix-tree-nodes\": {\n                            \"description\": \"the number of nodes in the underlying radix data structure\",\n                            \"type\": \"integer\"\n                        },\n                        \"last-generated-id\": {\n                            \"description\": \"the ID of the entry that was most recently added to the stream\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        \"max-deleted-entry-id\": {\n                            \"description\": \"the maximal entry ID that was deleted from the stream\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        \"recorded-first-entry-id\": {\n                            \"description\": \"cached copy of the first entry ID\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        \"entries-added\": {\n                            \"description\": \"the count of all entries added to the stream during its lifetime\",\n                            \"type\": \"integer\"\n                        },\n                        \"idmp-duration\": {\n      
                      \"description\": \"the duration in seconds that each iid is kept in the stream's IDMP map\",\n                            \"type\": \"integer\"\n                        },\n                        \"idmp-maxsize\": {\n                            \"description\": \"the maximum number of most recent iids kept for each pid in the stream's IDMP map\",\n                            \"type\": \"integer\"\n                        },\n                        \"pids-tracked\": {\n                            \"description\": \"the number of idempotent producer ids currently tracked in the stream\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-tracked\": {\n                            \"description\": \"the number of idempotent ids currently tracked in the stream\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-added\": {\n                            \"description\": \"the count of all entries with an idempotent id added to the stream during its lifetime\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-duplicates\": {\n                            \"description\": \"the count of duplicate idempotent ids detected during the stream's lifetime\",\n                            \"type\": \"integer\"\n                        },\n                        \"groups\": {\n                            \"description\": \"the number of consumer groups defined for the stream\",\n                            \"type\": \"integer\"\n                        },\n                        \"first-entry\": {\n                            \"description\": \"the first entry of the stream\",\n                            \"oneOf\": [\n                                {\n                                    \"type\": \"null\"\n                                },\n                                {\n  
                                  \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"entry ID\",\n                                            \"type\": \"string\",\n                                            \"pattern\": \"[0-9]+-[0-9]+\"\n                                        },\n                                        {\n                                            \"description\": \"data\",\n                                            \"type\": \"array\",\n                                            \"items\": {\n                                                \"type\": \"string\"\n                                            }\n                                        }\n                                    ]\n                                }\n                            ]\n                        },\n                        \"last-entry\": {\n                            \"description\": \"the last entry of the stream\",\n                            \"oneOf\": [\n                                {\n                                    \"type\": \"null\"\n                                },\n                                {\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"entry ID\",\n                                            \"type\": \"string\",\n                                            \"pattern\": \"[0-9]+-[0-9]+\"\n                                        },\n                                        {\n                                            
\"description\": \"data\",\n                                            \"type\": \"array\",\n                                            \"items\": {\n                                                \"type\": \"string\"\n                                            }\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                {\n                    \"description\": \"Extended form, in case `FULL` was given.\",\n                    \"type\": \"object\",\n                    \"additionalProperties\": false,\n                    \"properties\": {\n                        \"length\": {\n                            \"description\": \"the number of entries in the stream (see `XLEN`)\",\n                            \"type\": \"integer\"\n                        },\n                        \"radix-tree-keys\": {\n                            \"description\": \"the number of keys in the underlying radix data structure\",\n                            \"type\": \"integer\"\n                        },\n                        \"radix-tree-nodes\": {\n                            \"description\": \"the number of nodes in the underlying radix data structure\",\n                            \"type\": \"integer\"\n                        },\n                        \"last-generated-id\": {\n                            \"description\": \"the ID of the entry that was most recently added to the stream\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        \"max-deleted-entry-id\": {\n                            \"description\": \"the maximal entry ID that was deleted from the stream\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n         
               },\n                        \"recorded-first-entry-id\": {\n                            \"description\": \"cached copy of the first entry ID\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        \"entries-added\": {\n                            \"description\": \"the count of all entries added to the stream during its lifetime\",\n                            \"type\": \"integer\"\n                        },\n                        \"idmp-duration\": {\n                            \"description\": \"the duration in seconds that each iid is kept in the stream's IDMP map\",\n                            \"type\": \"integer\"\n                        },\n                        \"idmp-maxsize\": {\n                            \"description\": \"the maximum number of most recent iids kept for each pid in the stream's IDMP map\",\n                            \"type\": \"integer\"\n                        },\n                        \"pids-tracked\": {\n                            \"description\": \"the number of idempotent producer ids currently tracked in the stream\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-tracked\": {\n                            \"description\": \"the number of idempotent ids currently tracked in the stream\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-added\": {\n                            \"description\": \"the count of all entries with an idempotent id added to the stream during its lifetime\",\n                            \"type\": \"integer\"\n                        },\n                        \"iids-duplicates\": {\n                            \"description\": \"the count of duplicate idempotent ids detected during the stream's lifetime\",\n                            \"type\": 
\"integer\"\n                        },\n                        \"entries\": {\n                            \"description\": \"all the entries of the stream\",\n                            \"type\": \"array\",\n                            \"uniqueItems\": true,\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"description\": \"entry ID\",\n                                        \"type\": \"string\",\n                                        \"pattern\": \"[0-9]+-[0-9]+\"\n                                    },\n                                    {\n                                        \"description\": \"data\",\n                                        \"type\": \"array\",\n                                        \"items\": {\n                                            \"type\": \"string\"\n                                        }\n                                    }\n                                ]\n                            }\n                        },\n                        \"groups\": {\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"object\",\n                                \"additionalProperties\": false,\n                                \"properties\": {\n                                    \"name\": {\n                                        \"description\": \"group name\",\n                                        \"type\": \"string\"\n                                    },\n                                    \"last-delivered-id\": {\n                                        \"description\": \"last entry ID that was delivered to a consumer\",\n                                        
\"type\": \"string\",\n                                        \"pattern\": \"[0-9]+-[0-9]+\"\n                                    },\n                                    \"entries-read\": {\n                                        \"description\": \"total number of entries ever read by consumers in the group\",\n                                        \"oneOf\": [\n                                            {\n                                                \"type\": \"null\"\n                                            },\n                                            {\n                                                \"type\": \"integer\"\n                                            }\n                                        ]\n                                    },\n                                    \"lag\": {\n                                        \"description\": \"number of entries left to be consumed from the stream\",\n                                        \"oneOf\": [\n                                            {\n                                                \"type\": \"null\"\n                                            },\n                                            {\n                                                \"type\": \"integer\"\n                                            }\n                                        ]\n                                    },\n                                    \"pel-count\": {\n                                        \"description\": \"total number of unacknowledged entries\",\n                                        \"type\": \"integer\"\n                                    },\n                                    \"nacked-count\": {\n                                        \"description\": \"number of entries currently in the nacked zone\",\n                                        \"type\": \"integer\"\n                                    },\n                                    \"pending\": {\n         
                               \"description\": \"data about all of the unacknowledged entries\",\n                                        \"type\": \"array\",\n                                        \"items\": {\n                                            \"type\": \"array\",\n                                            \"minItems\": 4,\n                                            \"maxItems\": 4,\n                                            \"items\": [\n                                                {\n                                                    \"description\": \"Entry ID\",\n                                                    \"type\": \"string\",\n                                                    \"pattern\": \"[0-9]+-[0-9]+\"\n                                                },\n                                                {\n                                                    \"description\": \"Consumer name\",\n                                                    \"type\": \"string\"\n                                                },\n                                                {\n                                                    \"description\": \"Delivery timestamp\",\n                                                    \"type\": \"integer\"\n                                                },\n                                                {\n                                                    \"description\": \"Delivery count\",\n                                                    \"type\": \"integer\"\n                                                }\n                                            ]\n                                        }\n                                    },\n                                    \"consumers\": {\n                                        \"description\": \"data about all of the consumers of the group\",\n                                        \"type\": \"array\",\n                                
        \"items\": {\n                                            \"type\": \"object\",\n                                            \"additionalProperties\": false,\n                                            \"properties\": {\n                                                \"active-time\": {\n                                                    \"type\": \"integer\",\n                                                    \"description\": \"Last time this consumer was active (successful reading/claiming).\",\n                                                    \"minimum\": 0\n                                                },\n                                                \"name\": {\n                                                    \"description\": \"consumer name\",\n                                                    \"type\": \"string\"\n                                                },\n                                                \"seen-time\": {\n                                                    \"description\": \"timestamp of the last interaction attempt of the consumer\",\n                                                    \"type\": \"integer\",\n                                                    \"minimum\": 0\n                                                },\n                                                \"pel-count\": {\n                                                    \"description\": \"number of unacknowledged entries that belong to the consumer\",\n                                                    \"type\": \"integer\"\n                                                },\n                                                \"pending\": {\n                                                    \"description\": \"data about the unacknowledged entries\",\n                                                    \"type\": \"array\",\n                                                    \"items\": {\n                                                
        \"type\": \"array\",\n                                                        \"minItems\": 3,\n                                                        \"maxItems\": 3,\n                                                        \"items\": [\n                                                            {\n                                                                \"description\": \"Entry ID\",\n                                                                \"type\": \"string\",\n                                                                \"pattern\": \"[0-9]+-[0-9]+\"\n                                                            },\n                                                            {\n                                                                \"description\": \"Delivery timestamp\",\n                                                                \"type\": \"integer\"\n                                                            },\n                                                            {\n                                                                \"description\": \"Delivery count\",\n                                                                \"type\": \"integer\"\n                                                            }\n                                                        ]\n                                                    }\n                                                }\n                                            }\n                                        }\n                                    }\n                                }\n                            }\n                        }\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"full-block\",\n                \"type\": 
\"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"full\",\n                        \"token\": \"FULL\",\n                        \"type\": \"pure-token\"\n                    },\n                    {\n                        \"token\": \"COUNT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\",\n                        \"optional\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xinfo.json",
    "content": "{\n    \"XINFO\": {\n        \"summary\": \"A container for stream introspection commands.\",\n        \"complexity\": \"Depends on subcommand.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -2\n    }\n}\n"
  },
  {
    "path": "src/commands/xlen.json",
    "content": "{\n    \"XLEN\": {\n        \"summary\": \"Return the number of messages in a stream.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": 2,\n        \"function\": \"xlenCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of entries of the stream at key\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xnack.json",
    "content": "{\n    \"XNACK\": {\n        \"summary\": \"Releases claimed messages back to the group's PEL without acknowledging them, making them available for re-delivery.\",\n        \"complexity\": \"O(1) for each message ID processed.\",\n        \"group\": \"stream\",\n        \"since\": \"8.8.0\",\n        \"arity\": -7,\n        \"function\": \"xnackCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"mode\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"silent\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SILENT\"\n                    },\n                    {\n                        \"name\": \"fail\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"FAIL\"\n                    },\n                    {\n                        \"name\": \"fatal\",\n                        \"type\": \"pure-token\",\n               
         \"token\": \"FATAL\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"ids\",\n                \"token\": \"IDS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"numids\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"id\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            },\n            {\n                \"token\": \"RETRYCOUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"force\",\n                \"token\": \"FORCE\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of messages successfully NACKed (released back to the group PEL).\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xpending.json",
    "content": "{\n    \"XPENDING\": {\n        \"summary\": \"Returns the information and entries from a stream consumer group's pending entries list.\",\n        \"complexity\": \"O(N) with N being the number of elements returned, so asking for a small fixed number of entries per call is O(1). O(M), where M is the total number of entries scanned when used with the IDLE filter. When the command returns just the summary and the list of consumers is small, it runs in O(1) time; otherwise, an additional O(N) time for iterating every consumer.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"function\": \"xpendingCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `IDLE` option and exclusive range intervals.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"Extended form, in case `start` was given.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 4,\n                        \"maxItems\": 4,\n          
              \"items\": [\n                            {\n                                \"description\": \"Entry ID\",\n                                \"type\": \"string\",\n                                \"pattern\": \"[0-9]+-[0-9]+\"\n                            },\n                            {\n                                \"description\": \"Consumer name\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"Idle time\",\n                                \"type\": \"integer\"\n                            },\n                            {\n                                \"description\": \"Delivery count\",\n                                \"type\": \"integer\"\n                            }\n                        ]\n                    }\n                },\n                {\n                    \"description\": \"Summary form, in case `start` was not given.\",\n                    \"type\": \"array\",\n                    \"minItems\": 4,\n                    \"maxItems\": 4,\n                    \"items\": [\n                        {\n                            \"description\": \"Total number of pending messages\",\n                            \"type\": \"integer\"\n                        },\n                        {\n                            \"description\": \"Minimal pending entry ID\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        {\n                            \"description\": \"Maximal pending entry ID\",\n                            \"type\": \"string\",\n                            \"pattern\": \"[0-9]+-[0-9]+\"\n                        },\n                        {\n                            \"description\": \"Consumers with pending messages\",\n                            \"oneOf\": [\n                
                {\n                                    \"type\": \"array\",\n                                    \"items\": {\n                                        \"type\": \"array\",\n                                        \"minItems\": 2,\n                                        \"maxItems\": 2,\n                                        \"items\": [\n                                            {\n                                                \"description\": \"Consumer name\",\n                                                \"type\": \"string\"\n                                            },\n                                            {\n                                                \"description\": \"Number of pending messages\",\n                                                \"type\": \"string\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"type\": \"null\",\n                                    \"description\": \"When there are no consumers with pending messages\"\n                                }\n                            ]\n                        }\n                    ]\n                },\n                {\n                    \"description\": \"Summary form, in case `start` was not given and there are no pending messages.\",\n                    \"type\": \"array\",\n                    \"minItems\": 4,\n                    \"maxItems\": 4,\n                    \"items\": [\n                        {\n                            \"description\": \"Total number of pending messages\",\n                            \"const\": 0\n                        },\n                        {\n                            \"type\": \"null\"\n                        },\n                        {\n                            \"type\": 
\"null\"\n                        },\n                        {\n                            \"type\": \"null\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"group\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"filters\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"token\": \"IDLE\",\n                        \"name\": \"min-idle-time\",\n                        \"type\": \"integer\",\n                        \"optional\": true,\n                        \"since\": \"6.2.0\"\n                    },\n                    {\n                        \"name\": \"start\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"end\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"consumer\",\n                        \"type\": \"string\",\n                        \"optional\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xrange.json",
    "content": "{\n    \"XRANGE\": {\n        \"summary\": \"Returns the messages from a stream within a range of IDs.\",\n        \"complexity\": \"O(N) with N being the number of elements being returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -4,\n        \"function\": \"xrangeCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added exclusive ranges.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Stream entries with IDs matching the specified range.\",\n            \"type\": \"array\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"Entry ID\",\n                        \"type\": \"string\",\n                        \"pattern\": \"[0-9]+-[0-9]+\"\n                    },\n                    {\n                        \"description\": \"Data\",\n                        \"type\": \"array\",\n                        \"items\": {\n              
              \"type\": \"string\"\n                        }\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"end\",\n                \"type\": \"string\"\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/xread.json",
    "content": "{\n    \"XREAD\": {\n        \"summary\": \"Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -4,\n        \"function\": \"xreadCommand\",\n        \"get_keys_function\": \"xreadGetKeys\",\n        \"command_flags\": [\n            \"BLOCKING\",\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"STREAMS\",\n                        \"startfrom\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 2\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"BLOCK\",\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"streams\",\n                \"token\": \"STREAMS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0,\n                        \"multiple\": true\n                    },\n                    {\n                        
\"name\": \"ID\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"A map of key-value elements, where each element is composed of the key name and the entries reported for that key\",\n                    \"type\": \"object\",\n                    \"patternProperties\": {\n                        \"^.*$\": {\n                            \"description\": \"The entries reported for that key\",\n                            \"type\": \"array\",\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"description\": \"Entry ID\",\n                                        \"type\": \"string\",\n                                        \"pattern\": \"[0-9]+-[0-9]+\"\n                                    },\n                                    {\n                                        \"description\": \"Array of field-value pairs\",\n                                        \"type\": \"array\",\n                                        \"items\": {\n                                            \"type\": \"string\"\n                                        }\n                                    }\n                                ]\n                            }\n                        }\n                    }\n                },\n                {\n                    \"description\": \"If the BLOCK option is given and a timeout occurs, or there is no stream that can be served\",\n                    \"type\": \"null\"\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xreadgroup.json",
"content": "{\n    \"XREADGROUP\": {\n        \"summary\": \"Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise.\",\n        \"complexity\": \"For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). On the other hand, when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -7,\n        \"function\": \"xreadCommand\",\n        \"get_keys_function\": \"xreadGetKeys\",\n        \"command_flags\": [\n            \"BLOCKING\",\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"keyword\": {\n                        \"keyword\": \"STREAMS\",\n                        \"startfrom\": 4\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": -1,\n                        \"step\": 1,\n                        \"limit\": 2\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"token\": \"GROUP\",\n                \"name\": \"group-block\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"group\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"name\": \"consumer\",\n                        \"type\": \"string\"\n                    }\n                ]\n        
    },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"BLOCK\",\n                \"name\": \"milliseconds\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"CLAIM\",\n                \"name\": \"min-idle-time\",\n                \"type\": \"integer\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"noack\",\n                \"token\": \"NOACK\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"name\": \"streams\",\n                \"token\": \"STREAMS\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"key\",\n                        \"type\": \"key\",\n                        \"key_spec_index\": 0,\n                        \"multiple\": true\n                    },\n                    {\n                        \"name\": \"ID\",\n                        \"type\": \"string\",\n                        \"multiple\": true\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"If the BLOCK option is specified and the timeout expired\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"A map of key-value elements, where each element is composed of the key name and the entries reported for that key\",\n                    \"type\": \"object\",\n                    \"additionalProperties\": {\n                        \"description\": \"The entries reported for that key\",\n                        \"type\": \"array\",\n 
                       \"items\": {\n                            \"oneOf\": [\n                                {\n                                    \"description\": \"Entry without CLAIM option\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 2,\n                                    \"maxItems\": 2,\n                                    \"items\": [\n                                        {\n                                            \"description\": \"Stream id\",\n                                            \"type\": \"string\",\n                                            \"pattern\": \"[0-9]+-[0-9]+\"\n                                        },\n                                        {\n                                            \"oneOf\": [\n                                                {\n                                                    \"description\": \"Array of field-value pairs\",\n                                                    \"type\": \"array\",\n                                                    \"items\": {\n                                                        \"type\": \"string\"\n                                                    }\n                                                },\n                                                {\n                                                    \"type\": \"null\"\n                                                }\n                                            ]\n                                        }\n                                    ]\n                                },\n                                {\n                                    \"description\": \"Entry with CLAIM option - includes elapsed milliseconds and delivery count\",\n                                    \"type\": \"array\",\n                                    \"minItems\": 4,\n                                    \"maxItems\": 4,\n                        
            \"items\": [\n                                        {\n                                            \"description\": \"Stream id\",\n                                            \"type\": \"string\",\n                                            \"pattern\": \"[0-9]+-[0-9]+\"\n                                        },\n                                        {\n                                            \"oneOf\": [\n                                                {\n                                                    \"description\": \"Array of field-value pairs\",\n                                                    \"type\": \"array\",\n                                                    \"items\": {\n                                                        \"type\": \"string\"\n                                                    }\n                                                },\n                                                {\n                                                    \"type\": \"null\"\n                                                }\n                                            ]\n                                        },\n                                        {\n                                            \"description\": \"Milliseconds elapsed since last delivery\",\n                                            \"type\": \"integer\"\n                                        },\n                                        {\n                                            \"description\": \"Delivery count (0 for new messages, 1+ for claimed messages)\",\n                                            \"type\": \"integer\"\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xrevrange.json",
    "content": "{\n    \"XREVRANGE\": {\n        \"summary\": \"Returns the messages from a stream within a range of IDs in reverse order.\",\n        \"complexity\": \"O(N) with N being the number of elements returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -4,\n        \"function\": \"xrevrangeCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added exclusive ranges.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"end\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"string\"\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"An array of the entries with IDs matching the specified range\",\n           
 \"type\": \"array\",\n            \"items\": {\n                \"type\": \"array\",\n                \"minItems\": 2,\n                \"maxItems\": 2,\n                \"items\": [\n                    {\n                        \"description\": \"Stream id\",\n                        \"type\": \"string\",\n                        \"pattern\": \"[0-9]+-[0-9]+\"\n                    },\n                    {\n                        \"description\": \"Array of field-value pairs\",\n                        \"type\": \"array\",\n                        \"items\": {\n                            \"type\": \"string\"\n                        }\n                    }\n                ]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xsetid.json",
    "content": "{\n    \"XSETID\": {\n        \"summary\": \"An internal command for replicating stream values.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -3,\n        \"function\": \"xsetidCommand\",\n        \"history\": [\n            [\n                \"7.0.0\",\n                \"Added the `entries_added` and `max_deleted_entry_id` arguments.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"last-id\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"entries-added\",\n                \"token\": \"ENTRIESADDED\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"since\": \"7.0.0\"\n            },\n            {\n                \"name\": \"max-deleted-id\",\n                \"token\": \"MAXDELETEDID\",\n                \"type\": \"string\",\n                \"optional\": true,\n                \"since\": \"7.0.0\"\n            }\n        ],\n        \"reply_schema\": {\n           
 \"const\": \"OK\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/xtrim.json",
    "content": "{\n    \"XTRIM\": {\n        \"summary\": \"Deletes messages from the beginning of a stream.\",\n        \"complexity\": \"O(N), with N being the number of evicted entries. Constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.\",\n        \"group\": \"stream\",\n        \"since\": \"5.0.0\",\n        \"arity\": -4,\n        \"function\": \"xtrimCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `MINID` trimming strategy and the `LIMIT` option.\"\n            ],\n            [\n                \"8.2.0\",\n                \"Added the `KEEPREF`, `DELREF` and `ACKED` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"STREAM\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"trim\",\n                \"type\": \"block\",\n                \"arguments\": [\n                    {\n                        \"name\": \"strategy\",\n                        \"type\": \"oneof\",\n                        
\"arguments\": [\n                            {\n                                \"name\": \"maxlen\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"MAXLEN\"\n                            },\n                            {\n                                \"name\": \"minid\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"MINID\",\n                                \"since\": \"6.2.0\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"operator\",\n                        \"type\": \"oneof\",\n                        \"optional\": true,\n                        \"arguments\": [\n                            {\n                                \"name\": \"equal\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"=\"\n                            },\n                            {\n                                \"name\": \"approximately\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"~\"\n                            }\n                        ]\n                    },\n                    {\n                        \"name\": \"threshold\",\n                        \"type\": \"string\"\n                    },\n                    {\n                        \"token\": \"LIMIT\",\n                        \"name\": \"count\",\n                        \"type\": \"integer\",\n                        \"optional\": true,\n                        \"since\": \"6.2.0\"\n                    },\n                    {\n                        \"name\": \"condition\",\n                        \"type\": \"oneof\",\n                        \"optional\": true,\n                        \"arguments\": [\n                            {\n                                
\"name\": \"keepref\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"KEEPREF\"\n                            },\n                            {\n                                \"name\": \"delref\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"DELREF\"\n                            },\n                            {\n                                \"name\": \"acked\",\n                                \"type\": \"pure-token\",\n                                \"token\": \"ACKED\"\n                            }\n                        ]\n                    }\n                ]\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of entries deleted from the stream.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/zadd.json",
    "content": "{\n    \"ZADD\": {\n        \"summary\": \"Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist.\",\n        \"complexity\": \"O(log(N)) for each item added, where N is the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": -4,\n        \"function\": \"zaddCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple elements.\"\n            ],\n            [\n                \"3.0.2\",\n                \"Added the `XX`, `NX`, `CH` and `INCR` options.\"\n            ],\n            [\n                \"6.2.0\",\n                \"Added the `GT` and `LT` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\":[\n                {\n                    \"description\": \"Operation was aborted (conflict with one of the `XX`/`NX`/`LT`/`GT` options).\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"description\": \"The number of new members (when the `CH` option is not used)\",\n                    \"type\": \"integer\"\n                },\n                {\n       
             \"description\": \"The number of new or updated members (when the `CH` option is used)\",\n                    \"type\": \"integer\"\n                },\n                {\n                    \"description\": \"The updated score of the member (when the `INCR` option is used)\",\n                    \"type\": \"number\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"condition\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"3.0.2\",\n                \"arguments\": [\n                    {\n                        \"name\": \"nx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"NX\"\n                    },\n                    {\n                        \"name\": \"xx\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"XX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"comparison\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"gt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"GT\"\n                    },\n                    {\n                        \"name\": \"lt\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"LT\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"change\",\n                \"token\": \"CH\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.2\"\n            },\n           
 {\n                \"name\": \"increment\",\n                \"token\": \"INCR\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"3.0.2\"\n            },\n            {\n                \"name\": \"data\",\n                \"type\": \"block\",\n                \"multiple\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"score\",\n                        \"type\": \"double\"\n                    },\n                    {\n                        \"name\": \"member\",\n                        \"type\": \"string\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zcard.json",
    "content": "{\n    \"ZCARD\": {\n        \"summary\": \"Returns the number of members in a sorted set.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": 2,\n        \"function\": \"zcardCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The cardinality (number of elements) of the sorted set, or 0 if key does not exist\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zcount.json",
    "content": "{\n    \"ZCOUNT\": {\n        \"summary\": \"Returns the count of members in a sorted set that have scores within a range.\",\n        \"complexity\": \"O(log(N)) with N being the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": 4,\n        \"function\": \"zcountCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of elements in the specified score range\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zdiff.json",
    "content": "{\n    \"ZDIFF\": {\n        \"summary\": \"Returns the difference between multiple sorted sets.\",\n        \"complexity\": \"O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"zdiffCommand\",\n        \"get_keys_function\": \"zunionInterDiffGetKeys\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"A list of members. Returned in case `WITHSCORES` was not used.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"Members and their scores. Returned in case `WITHSCORES` was used. 
In RESP2 this is returned as a flat array\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"Member\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"Score\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zdiffstore.json",
    "content": "{\n    \"ZDIFFSTORE\": {\n        \"summary\": \"Stores the difference of multiple sorted sets in a key.\",\n        \"complexity\": \"O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -4,\n        \"function\": \"zdiffstoreCommand\",\n        \"get_keys_function\": \"zunionInterDiffStoreGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of elements in the resulting sorted set at `destination`\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": 
\"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zincrby.json",
    "content": "{\n    \"ZINCRBY\": {\n        \"summary\": \"Increments the score of a member in a sorted set.\",\n        \"complexity\": \"O(log(N)) where N is the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": 4,\n        \"function\": \"zincrbyCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The new score of `member`\",\n            \"type\": \"number\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"increment\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zinter.json",
    "content": "{\n    \"ZINTER\": {\n        \"summary\": \"Returns the intersect of multiple sorted sets.\",\n        \"complexity\": \"O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"zinterCommand\",\n        \"get_keys_function\": \"zunionInterDiffGetKeys\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"Result of intersection, containing only the member names. Returned in case `WITHSCORES` was not used.\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"Result of intersection, containing members and their scores. Returned in case `WITHSCORES` was used. 
In RESP2 this is returned as a flat array\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"Member\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"Score\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"WEIGHTS\",\n                \"name\": \"weight\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"AGGREGATE\",\n                \"name\": \"aggregate\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"sum\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SUM\"\n                    },\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                    
    \"token\": \"MAX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zintercard.json",
    "content": "{\n    \"ZINTERCARD\": {\n        \"summary\": \"Returns the number of members of the intersect of multiple sorted sets.\",\n        \"complexity\": \"O(N*K) worst case with N being the smallest input sorted set, K being the number of input sorted sets.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"7.0.0\",\n        \"arity\": -3,\n        \"function\": \"zinterCardCommand\",\n        \"get_keys_function\": \"zunionInterDiffGetKeys\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of elements in the resulting intersection.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zinterstore.json",
    "content": "{\n    \"ZINTERSTORE\": {\n        \"summary\": \"Stores the intersect of multiple sorted sets in a key.\",\n        \"complexity\": \"O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": -4,\n        \"function\": \"zinterstoreCommand\",\n        \"get_keys_function\": \"zunionInterDiffStoreGetKeys\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of 
elements in the resulting sorted set.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"WEIGHTS\",\n                \"name\": \"weight\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"AGGREGATE\",\n                \"name\": \"aggregate\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"sum\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SUM\"\n                    },\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MAX\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zlexcount.json",
    "content": "{\n    \"ZLEXCOUNT\": {\n        \"summary\": \"Returns the number of members in a sorted set within a lexicographical range.\",\n        \"complexity\": \"O(log(N)) with N being the number of elements in the sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.8.9\",\n        \"arity\": 4,\n        \"function\": \"zlexcountCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"Number of elements in the specified lexicographical range.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zmpop.json",
    "content": "{\n    \"ZMPOP\": {\n        \"summary\": \"Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped.\",\n        \"complexity\": \"O(K) + O(M*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"7.0.0\",\n        \"arity\": -4,\n        \"function\": \"zmpopCommand\",\n        \"get_keys_function\": \"zmpopGetKeys\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"description\": \"No element could be popped.\",\n                    \"type\": \"null\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"type\": \"string\",\n                            \"description\": \"Name of the key from which elements were popped.\"\n                        },\n                        {\n                            \"type\": \"array\",\n                            
\"description\": \"Popped elements.\",\n                            \"items\": {\n                                \"type\": \"array\",\n                                \"uniqueItems\": true,\n                                \"minItems\": 2,\n                                \"maxItems\": 2,\n                                \"items\": [\n                                    {\n                                        \"type\": \"string\",\n                                        \"description\": \"Name of the member.\"\n                                    },\n                                    {\n                                        \"type\": \"number\",\n                                        \"description\": \"Score.\"\n                                    }\n                                ]\n                            }\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"name\": \"where\",\n                \"type\": \"oneof\",\n                \"arguments\": [\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MAX\"\n                    }\n                ]\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zmscore.json",
    "content": "{\n    \"ZMSCORE\": {\n        \"summary\": \"Returns the score of one or more members in a sorted set.\",\n        \"complexity\": \"O(N) where N is the number of members being requested.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"zmscoreCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"minItems\": 1,\n            \"items\": {\n                \"oneOf\": [\n                    {\n                        \"type\": \"number\",\n                        \"description\": \"The score of the member (a double precision floating point number). 
In RESP2, this is returned as a string.\"\n                    },\n                    {\n                        \"type\": \"null\",\n                        \"description\": \"Member does not exist in the sorted set.\"\n                    }\n                ]\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zpopmax.json",
    "content": "{\n    \"ZPOPMAX\": {\n        \"summary\": \"Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.\",\n        \"complexity\": \"O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"5.0.0\",\n        \"arity\": -2,\n        \"function\": \"zpopmaxCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List of popped elements and scores when 'COUNT' isn't specified.\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"type\": \"string\",\n                            \"description\": \"Popped element.\"\n                        },\n                        {\n                            \"type\": \"number\",\n                            \"description\": \"Score.\"\n                        }\n                    ]\n                },\n                {\n                    \"type\": \"array\",\n            
        \"description\": \"List of popped elements and scores when 'COUNT' is specified.\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"type\": \"string\",\n                                \"description\": \"Popped element.\"\n                            },\n                            {\n                                \"type\": \"number\",\n                                \"description\": \"Score.\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zpopmin.json",
    "content": "{\n    \"ZPOPMIN\": {\n        \"summary\": \"Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.\",\n        \"complexity\": \"O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"5.0.0\",\n        \"arity\": -2,\n        \"function\": \"zpopminCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"ACCESS\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List of popped elements and scores when 'COUNT' isn't specified.\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"type\": \"string\",\n                            \"description\": \"Popped element.\"\n                        },\n                        {\n                            \"type\": \"number\",\n                            \"description\": \"Score.\"\n                        }\n                    ]\n                },\n                {\n                    \"type\": \"array\",\n             
       \"description\": \"List of popped elements and scores when 'COUNT' is specified.\",\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"type\": \"string\",\n                                \"description\": \"Popped element.\"\n                            },\n                            {\n                                \"type\": \"number\",\n                                \"description\": \"Score.\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrandmember.json",
    "content": "{\n    \"ZRANDMEMBER\": {\n        \"summary\": \"Returns one or more random members from a sorted set.\",\n        \"complexity\": \"O(N) where N is the number of members returned\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -2,\n        \"function\": \"zrandmemberCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Key does not exist.\"\n                },\n                {\n                    \"type\": \"string\",\n                    \"description\": \"Randomly selected element when 'COUNT' is not used.\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"Randomly selected elements when 'COUNT' is used.\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"Randomly selected elements when 'COUNT' and 'WITHSCORES' modifiers are used.\",\n                    \"items\": {\n                     
   \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"type\": \"string\",\n                                \"description\": \"Element.\"\n                            },\n                            {\n                                \"type\": \"number\",\n                                \"description\": \"Score.\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"options\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"withscores\",\n                        \"token\": \"WITHSCORES\",\n                        \"type\": \"pure-token\",\n                        \"optional\": true\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrange.json",
    "content": "{\n    \"ZRANGE\": {\n        \"summary\": \"Returns members in a sorted set within a range of indexes.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": -4,\n        \"function\": \"zrangeCommand\",\n        \"history\": [\n            [\n                \"6.2.0\",\n                \"Added the `REV`, `BYSCORE`, `BYLEX` and `LIMIT` options.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"A list of member elements\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"Members and their scores. Returned in case `WITHSCORES` was used. 
In RESP2 this is returned as a flat array\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"Member\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"Score\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"stop\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"sortby\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"byscore\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"BYSCORE\"\n                    },\n                    {\n                        \"name\": \"bylex\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"BYLEX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"rev\",\n                \"token\": \"REV\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"6.2.0\"\n       
     },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"since\": \"6.2.0\",\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrangebylex.json",
    "content": "{\n    \"ZRANGEBYLEX\": {\n        \"summary\": \"Returns members in a sorted set within a lexicographical range.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.8.9\",\n        \"arity\": -4,\n        \"function\": \"zrangebylexCommand\",\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`ZRANGE` with the `BYLEX` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of elements in the specified score range.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"max\",\n                
\"type\": \"string\"\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrangebyscore.json",
    "content": "{\n    \"ZRANGEBYSCORE\": {\n        \"summary\": \"Returns members in a sorted set within a range of scores.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.0.5\",\n        \"arity\": -4,\n        \"function\": \"zrangebyscoreCommand\",\n        \"history\": [\n            [\n                \"2.0.0\",\n                \"Added the `WITHSCORES` modifier.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`ZRANGE` with the `BYSCORE` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List of the elements in the specified score range, as not WITHSCORES\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"string\",\n                        \"description\": \"Element\"\n                    }\n                },\n              
  {\n                    \"type\": \"array\",\n                    \"description\": \"List of the elements and their scores in the specified score range, as WITHSCORES used\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"description\": \"Tuple of element and its score\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"element\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"score\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true,\n                \"since\": \"2.0.0\"\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                     
   \"type\": \"integer\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrangestore.json",
    "content": "{\n    \"ZRANGESTORE\": {\n        \"summary\": \"Stores a range of members from sorted set in a key.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements stored into the destination key.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -5,\n        \"function\": \"zrangestoreCommand\",\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of elements in the resulting sorted set.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"dst\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n    
            \"name\": \"src\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"sortby\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"byscore\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"BYSCORE\"\n                    },\n                    {\n                        \"name\": \"bylex\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"BYLEX\"\n                    }\n                ]\n            },\n            {\n                \"name\": \"rev\",\n                \"token\": \"REV\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrank.json",
    "content": "{\n    \"ZRANK\": {\n        \"summary\": \"Returns the index of a member in a sorted set ordered by ascending scores.\",\n        \"complexity\": \"O(log(N))\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"zrankCommand\",\n        \"history\": [\n            [\n                \"7.2.0\",\n                \"Added the optional `WITHSCORE` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Key does not exist or the member does not exist in the sorted set.\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"The rank of the member when 'WITHSCORE' is not used.\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"The rank and score of the member when 'WITHSCORE' is used.\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"type\": \"integer\"\n                        },\n               
         {\n                            \"type\": \"number\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"withscore\",\n                \"token\": \"WITHSCORE\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrem.json",
    "content": "{\n    \"ZREM\": {\n        \"summary\": \"Removes one or more members from a sorted set. Deletes the sorted set if all members were removed.\",\n        \"complexity\": \"O(M*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": -3,\n        \"function\": \"zremCommand\",\n        \"history\": [\n            [\n                \"2.4.0\",\n                \"Accepts multiple elements.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of members removed from the sorted set, not including non existing members.\",\n            \"type\": \"integer\",\n            \"minimum\": 0\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\",\n                \"multiple\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zremrangebylex.json",
    "content": "{\n    \"ZREMRANGEBYLEX\": {\n        \"summary\": \"Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.8.9\",\n        \"arity\": 4,\n        \"function\": \"zremrangebylexCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of elements removed.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zremrangebyrank.json",
    "content": "{\n    \"ZREMRANGEBYRANK\": {\n        \"summary\": \"Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": 4,\n        \"function\": \"zremrangebyrankCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of elements removed.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"stop\",\n                \"type\": \"integer\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zremrangebyscore.json",
    "content": "{\n    \"ZREMRANGEBYSCORE\": {\n        \"summary\": \"Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": 4,\n        \"function\": \"zremrangebyscoreCommand\",\n        \"command_flags\": [\n            \"WRITE\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RW\",\n                    \"DELETE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"integer\",\n            \"description\": \"Number of elements removed.\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"double\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrevrange.json",
    "content": "{\n    \"ZREVRANGE\": {\n        \"summary\": \"Returns members in a sorted set within a range of indexes in reverse order.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": -4,\n        \"function\": \"zrevrangeCommand\",\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`ZRANGE` with the `REV` argument\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"List of member elements.\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"List of the members and their scores. 
Returned in case `WITHSCORES` was used.\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"description\": \"member\",\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"description\": \"score\",\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"start\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"stop\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrevrangebylex.json",
    "content": "{\n    \"ZREVRANGEBYLEX\": {\n        \"summary\": \"Returns members in a sorted set within a lexicographical range in reverse order.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.8.9\",\n        \"arity\": -4,\n        \"function\": \"zrevrangebylexCommand\",\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`ZRANGE` with the `REV` and `BYLEX` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"type\": \"array\",\n            \"description\": \"List of the elements in the specified score range.\",\n            \"uniqueItems\": true,\n            \"items\": {\n                \"type\": \"string\"\n            }\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"string\"\n            },\n            {\n              
  \"name\": \"min\",\n                \"type\": \"string\"\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"integer\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrevrangebyscore.json",
    "content": "{\n    \"ZREVRANGEBYSCORE\": {\n        \"summary\": \"Returns members in a sorted set within a range of scores in reverse order.\",\n        \"complexity\": \"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.2.0\",\n        \"arity\": -4,\n        \"function\": \"zrevrangebyscoreCommand\",\n        \"history\": [\n            [\n                \"2.1.6\",\n                \"`min` and `max` can be exclusive.\"\n            ]\n        ],\n        \"deprecated_since\": \"6.2.0\",\n        \"replaced_by\": \"`ZRANGE` with the `REV` and `BYSCORE` arguments\",\n        \"doc_flags\": [\n            \"DEPRECATED\"\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List of the elements in the specified score range, as not WITHSCORES\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"string\",\n                        \"description\": \"Element\"\n                    
}\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"List of the elements and their scores in the specified score range, as WITHSCORES used\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        \"description\": \"Tuple of element and its score\",\n                        \"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"type\": \"string\",\n                                \"description\": \"element\"\n                            },\n                            {\n                                \"type\": \"number\",\n                                \"description\": \"score\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"max\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"min\",\n                \"type\": \"double\"\n            },\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"LIMIT\",\n                \"name\": \"limit\",\n                \"type\": \"block\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"offset\",\n                        \"type\": \"integer\"\n                    },\n                    {\n                        \"name\": \"count\",\n                       
 \"type\": \"integer\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zrevrank.json",
    "content": "{\n    \"ZREVRANK\": {\n        \"summary\": \"Returns the index of a member in a sorted set ordered by descending scores.\",\n        \"complexity\": \"O(log(N))\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": -3,\n        \"function\": \"zrevrankCommand\",\n        \"history\": [\n            [\n                \"7.2.0\",\n                \"Added the optional `WITHSCORE` argument.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Key does not exist or the member does not exist in the sorted set.\"\n                },\n                {\n                    \"type\": \"integer\",\n                    \"description\": \"The rank of the member when 'WITHSCORE' is not used.\"\n                },\n                {\n                    \"type\": \"array\",\n                    \"description\": \"The rank and score of the member when 'WITHSCORE' is used.\",\n                    \"minItems\": 2,\n                    \"maxItems\": 2,\n                    \"items\": [\n                        {\n                            \"type\": \"integer\"\n                        },\n        
                {\n                            \"type\": \"number\"\n                        }\n                    ]\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            },\n            {\n                \"name\": \"withscore\",\n                \"token\": \"WITHSCORE\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zscan.json",
    "content": "{\n    \"ZSCAN\": {\n        \"summary\": \"Iterates over members and scores of a sorted set.\",\n        \"complexity\": \"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.8.0\",\n        \"arity\": -3,\n        \"function\": \"zscanCommand\",\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"command_tips\": [\n            \"NONDETERMINISTIC_OUTPUT\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"cursor\",\n                \"type\": \"integer\"\n            },\n            {\n                \"token\": \"MATCH\",\n                \"name\": \"pattern\",\n                \"type\": \"pattern\",\n                \"optional\": true\n            },\n            {\n                \"token\": \"COUNT\",\n                \"name\": \"count\",\n                \"type\": \"integer\",\n                \"optional\": true\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"cursor and scan response in array form\",\n            
\"type\": \"array\",\n            \"minItems\": 2,\n            \"maxItems\": 2,\n            \"items\": [\n                {\n                    \"description\": \"cursor\",\n                    \"type\": \"string\"\n                },\n                {\n                    \"description\": \"list of elements of the sorted set, where each even element is the member, and each odd value is its associated score\",\n                    \"type\": \"array\",\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                }\n            ]\n        }\n    }\n}\n"
  },
  {
    "path": "src/commands/zscore.json",
    "content": "{\n    \"ZSCORE\": {\n        \"summary\": \"Returns the score of a member in a sorted set.\",\n        \"complexity\": \"O(1)\",\n        \"group\": \"sorted_set\",\n        \"since\": \"1.2.0\",\n        \"arity\": 3,\n        \"function\": \"zscoreCommand\",\n        \"command_flags\": [\n            \"READONLY\",\n            \"FAST\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"oneOf\": [\n                {\n                    \"type\": \"number\",\n                    \"description\": \"The score of the member (a double precision floating point number). In RESP2, this is returned as string.\"\n                },\n                {\n                    \"type\": \"null\",\n                    \"description\": \"Member does not exist in the sorted set, or key does not exist.\"\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"member\",\n                \"type\": \"string\"\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zunion.json",
    "content": "{\n    \"ZUNION\": {\n        \"summary\": \"Returns the union of multiple sorted sets.\",\n        \"complexity\": \"O(N)+O(M*log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"6.2.0\",\n        \"arity\": -3,\n        \"function\": \"zunionCommand\",\n        \"get_keys_function\": \"zunionInterDiffGetKeys\",\n        \"history\": [\n            [\n                \"8.8.0\",\n                \"Added `COUNT` aggregate option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"READONLY\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"anyOf\": [\n                {\n                    \"description\": \"The result of union when 'WITHSCORES' is not used.\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"string\"\n                    }\n                },\n                {\n                    \"description\": \"The result of union when 'WITHSCORES' is used.\",\n                    \"type\": \"array\",\n                    \"uniqueItems\": true,\n                    \"items\": {\n                        \"type\": \"array\",\n                        
\"minItems\": 2,\n                        \"maxItems\": 2,\n                        \"items\": [\n                            {\n                                \"type\": \"string\"\n                            },\n                            {\n                                \"type\": \"number\"\n                            }\n                        ]\n                    }\n                }\n            ]\n        },\n        \"arguments\": [\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"WEIGHTS\",\n                \"name\": \"weight\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"AGGREGATE\",\n                \"name\": \"aggregate\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"sum\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SUM\"\n                    },\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MAX\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"COUNT\",\n                        \"since\": \"8.8.0\"\n                    }\n                ]\n            
},\n            {\n                \"name\": \"withscores\",\n                \"token\": \"WITHSCORES\",\n                \"type\": \"pure-token\",\n                \"optional\": true\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands/zunionstore.json",
    "content": "{\n    \"ZUNIONSTORE\": {\n        \"summary\": \"Stores the union of multiple sorted sets in a key.\",\n        \"complexity\": \"O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.\",\n        \"group\": \"sorted_set\",\n        \"since\": \"2.0.0\",\n        \"arity\": -4,\n        \"function\": \"zunionstoreCommand\",\n        \"get_keys_function\": \"zunionInterDiffStoreGetKeys\",\n        \"history\": [\n            [\n                \"8.8.0\",\n                \"Added `COUNT` aggregate option.\"\n            ]\n        ],\n        \"command_flags\": [\n            \"WRITE\",\n            \"DENYOOM\"\n        ],\n        \"acl_categories\": [\n            \"SORTEDSET\"\n        ],\n        \"key_specs\": [\n            {\n                \"flags\": [\n                    \"OW\",\n                    \"UPDATE\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 1\n                    }\n                },\n                \"find_keys\": {\n                    \"range\": {\n                        \"lastkey\": 0,\n                        \"step\": 1,\n                        \"limit\": 0\n                    }\n                }\n            },\n            {\n                \"flags\": [\n                    \"RO\",\n                    \"ACCESS\"\n                ],\n                \"begin_search\": {\n                    \"index\": {\n                        \"pos\": 2\n                    }\n                },\n                \"find_keys\": {\n                    \"keynum\": {\n                        \"keynumidx\": 0,\n                        \"firstkey\": 1,\n                        \"step\": 1\n                    }\n                }\n            }\n        ],\n        \"reply_schema\": {\n            \"description\": \"The number of elements in the resulting sorted 
set.\",\n            \"type\": \"integer\"\n        },\n        \"arguments\": [\n            {\n                \"name\": \"destination\",\n                \"type\": \"key\",\n                \"key_spec_index\": 0\n            },\n            {\n                \"name\": \"numkeys\",\n                \"type\": \"integer\"\n            },\n            {\n                \"name\": \"key\",\n                \"type\": \"key\",\n                \"key_spec_index\": 1,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"WEIGHTS\",\n                \"name\": \"weight\",\n                \"type\": \"integer\",\n                \"optional\": true,\n                \"multiple\": true\n            },\n            {\n                \"token\": \"AGGREGATE\",\n                \"name\": \"aggregate\",\n                \"type\": \"oneof\",\n                \"optional\": true,\n                \"arguments\": [\n                    {\n                        \"name\": \"sum\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"SUM\"\n                    },\n                    {\n                        \"name\": \"min\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MIN\"\n                    },\n                    {\n                        \"name\": \"max\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"MAX\"\n                    },\n                    {\n                        \"name\": \"count\",\n                        \"type\": \"pure-token\",\n                        \"token\": \"COUNT\",\n                        \"since\": \"8.8.0\"\n                    }\n                ]\n            }\n        ]\n    }\n}\n"
  },
  {
    "path": "src/commands.c",
    "content": "#include \"commands.h\"\n#include \"server.h\"\n\n#define MAKE_CMD(name,summary,complexity,since,doc_flags,replaced,deprecated,group,group_enum,history,num_history,tips,num_tips,function,arity,flags,acl,key_specs,key_specs_num,get_keys,numargs) name,summary,complexity,since,doc_flags,replaced,deprecated,group_enum,history,num_history,tips,num_tips,function,arity,flags,acl,key_specs,key_specs_num,get_keys,numargs\n#define MAKE_ARG(name,type,key_spec_index,token,summary,since,flags,numsubargs,deprecated_since) name,type,key_spec_index,token,summary,since,flags,deprecated_since,numsubargs\n#define COMMAND_STRUCT redisCommand\n#define COMMAND_ARG redisCommandArg\n\n#ifdef LOG_REQ_RES\n#include \"commands_with_reply_schema.def\"\n#else\n#include \"commands.def\"\n#endif\n"
  },
  {
    "path": "src/commands.def",
    "content": "/* Automatically generated by generate-command-code.py, do not edit. */\n\n\n/* We have fabulous commands from\n * the fantastic\n * Redis Command Table! */\n\n/* Must match redisCommandGroup */\nconst char *COMMAND_GROUP_STR[] = {\n    \"generic\",\n    \"string\",\n    \"list\",\n    \"set\",\n    \"sorted-set\",\n    \"hash\",\n    \"pubsub\",\n    \"transactions\",\n    \"connection\",\n    \"server\",\n    \"scripting\",\n    \"hyperloglog\",\n    \"cluster\",\n    \"sentinel\",\n    \"geo\",\n    \"stream\",\n    \"bitmap\",\n    \"module\",\n    \"rate_limit\"\n};\n\nconst char *commandGroupStr(int index) {\n    return COMMAND_GROUP_STR[index];\n}\n/********** BITCOUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BITCOUNT history */\ncommandHistory BITCOUNT_History[] = {\n{\"7.0.0\",\"Added the `BYTE|BIT` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BITCOUNT tips */\n#define BITCOUNT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BITCOUNT key specs */\nkeySpec BITCOUNT_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BITCOUNT range unit argument table */\nstruct COMMAND_ARG BITCOUNT_range_unit_Subargs[] = {\n{MAKE_ARG(\"byte\",ARG_TYPE_PURE_TOKEN,-1,\"BYTE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"bit\",ARG_TYPE_PURE_TOKEN,-1,\"BIT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITCOUNT range argument table */\nstruct COMMAND_ARG BITCOUNT_range_Subargs[] = {\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=BITCOUNT_range_unit_Subargs},\n};\n\n/* BITCOUNT argument table */\nstruct COMMAND_ARG BITCOUNT_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"range\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=BITCOUNT_range_Subargs},\n};\n\n/********** BITFIELD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BITFIELD history */\n#define BITFIELD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BITFIELD tips */\n#define BITFIELD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BITFIELD key specs */\nkeySpec BITFIELD_Keyspecs[1] = {\n{\"This command allows both access and modification of the key\",CMD_KEY_RW|CMD_KEY_UPDATE|CMD_KEY_ACCESS|CMD_KEY_VARIABLE_FLAGS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BITFIELD operation get_block argument table */\nstruct COMMAND_ARG BITFIELD_operation_get_block_Subargs[] = {\n{MAKE_ARG(\"encoding\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITFIELD operation write overflow_block argument table */\nstruct COMMAND_ARG BITFIELD_operation_write_overflow_block_Subargs[] = {\n{MAKE_ARG(\"wrap\",ARG_TYPE_PURE_TOKEN,-1,\"WRAP\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sat\",ARG_TYPE_PURE_TOKEN,-1,\"SAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fail\",ARG_TYPE_PURE_TOKEN,-1,\"FAIL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITFIELD operation write write_operation set_block argument table */\nstruct COMMAND_ARG BITFIELD_operation_write_write_operation_set_block_Subargs[] = {\n{MAKE_ARG(\"encoding\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITFIELD operation write write_operation incrby_block argument table */\nstruct COMMAND_ARG BITFIELD_operation_write_write_operation_incrby_block_Subargs[] = 
{\n{MAKE_ARG(\"encoding\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITFIELD operation write write_operation argument table */\nstruct COMMAND_ARG BITFIELD_operation_write_write_operation_Subargs[] = {\n{MAKE_ARG(\"set-block\",ARG_TYPE_BLOCK,-1,\"SET\",NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=BITFIELD_operation_write_write_operation_set_block_Subargs},\n{MAKE_ARG(\"incrby-block\",ARG_TYPE_BLOCK,-1,\"INCRBY\",NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=BITFIELD_operation_write_write_operation_incrby_block_Subargs},\n};\n\n/* BITFIELD operation write argument table */\nstruct COMMAND_ARG BITFIELD_operation_write_Subargs[] = {\n{MAKE_ARG(\"overflow-block\",ARG_TYPE_ONEOF,-1,\"OVERFLOW\",NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=BITFIELD_operation_write_overflow_block_Subargs},\n{MAKE_ARG(\"write-operation\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BITFIELD_operation_write_write_operation_Subargs},\n};\n\n/* BITFIELD operation argument table */\nstruct COMMAND_ARG BITFIELD_operation_Subargs[] = {\n{MAKE_ARG(\"get-block\",ARG_TYPE_BLOCK,-1,\"GET\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BITFIELD_operation_get_block_Subargs},\n{MAKE_ARG(\"write\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BITFIELD_operation_write_Subargs},\n};\n\n/* BITFIELD argument table */\nstruct COMMAND_ARG BITFIELD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"operation\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,2,NULL),.subargs=BITFIELD_operation_Subargs},\n};\n\n/********** BITFIELD_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BITFIELD_RO history */\n#define BITFIELD_RO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BITFIELD_RO tips */\n#define BITFIELD_RO_Tips 
NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BITFIELD_RO key specs */\nkeySpec BITFIELD_RO_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BITFIELD_RO get_block argument table */\nstruct COMMAND_ARG BITFIELD_RO_get_block_Subargs[] = {\n{MAKE_ARG(\"encoding\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITFIELD_RO argument table */\nstruct COMMAND_ARG BITFIELD_RO_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"get-block\",ARG_TYPE_BLOCK,-1,\"GET\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE|CMD_ARG_MULTIPLE_TOKEN,2,NULL),.subargs=BITFIELD_RO_get_block_Subargs},\n};\n\n/********** BITOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BITOP history */\n#define BITOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BITOP tips */\n#define BITOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BITOP key specs */\nkeySpec BITOP_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={3},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* BITOP operation argument table */\nstruct COMMAND_ARG BITOP_operation_Subargs[] = 
{\n{MAKE_ARG(\"and\",ARG_TYPE_PURE_TOKEN,-1,\"AND\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"or\",ARG_TYPE_PURE_TOKEN,-1,\"OR\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xor\",ARG_TYPE_PURE_TOKEN,-1,\"XOR\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"not\",ARG_TYPE_PURE_TOKEN,-1,\"NOT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"diff\",ARG_TYPE_PURE_TOKEN,-1,\"DIFF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"diff1\",ARG_TYPE_PURE_TOKEN,-1,\"DIFF1\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"andor\",ARG_TYPE_PURE_TOKEN,-1,\"ANDOR\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"one\",ARG_TYPE_PURE_TOKEN,-1,\"ONE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITOP argument table */\nstruct COMMAND_ARG BITOP_Args[] = {\n{MAKE_ARG(\"operation\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,8,NULL),.subargs=BITOP_operation_Subargs},\n{MAKE_ARG(\"destkey\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** BITPOS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BITPOS history */\ncommandHistory BITPOS_History[] = {\n{\"7.0.0\",\"Added the `BYTE|BIT` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BITPOS tips */\n#define BITPOS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BITPOS key specs */\nkeySpec BITPOS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BITPOS range end_unit_block unit argument table */\nstruct COMMAND_ARG BITPOS_range_end_unit_block_unit_Subargs[] = {\n{MAKE_ARG(\"byte\",ARG_TYPE_PURE_TOKEN,-1,\"BYTE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"bit\",ARG_TYPE_PURE_TOKEN,-1,\"BIT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BITPOS range end_unit_block argument table */\nstruct COMMAND_ARG BITPOS_range_end_unit_block_Subargs[] = 
{\n{MAKE_ARG(\"end\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=BITPOS_range_end_unit_block_unit_Subargs},\n};\n\n/* BITPOS range argument table */\nstruct COMMAND_ARG BITPOS_range_Subargs[] = {\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-unit-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=BITPOS_range_end_unit_block_Subargs},\n};\n\n/* BITPOS argument table */\nstruct COMMAND_ARG BITPOS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"bit\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"range\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=BITPOS_range_Subargs},\n};\n\n/********** GETBIT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GETBIT history */\n#define GETBIT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GETBIT tips */\n#define GETBIT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GETBIT key specs */\nkeySpec GETBIT_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GETBIT argument table */\nstruct COMMAND_ARG GETBIT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SETBIT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SETBIT history */\n#define SETBIT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SETBIT tips */\n#define SETBIT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SETBIT key specs */\nkeySpec SETBIT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SETBIT argument table */\nstruct COMMAND_ARG SETBIT_Args[] 
= {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ASKING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ASKING history */\n#define ASKING_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ASKING tips */\n#define ASKING_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ASKING key specs */\n#define ASKING_Keyspecs NULL\n#endif\n\n/********** CLUSTER ADDSLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER ADDSLOTS history */\n#define CLUSTER_ADDSLOTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER ADDSLOTS tips */\n#define CLUSTER_ADDSLOTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER ADDSLOTS key specs */\n#define CLUSTER_ADDSLOTS_Keyspecs NULL\n#endif\n\n/* CLUSTER ADDSLOTS argument table */\nstruct COMMAND_ARG CLUSTER_ADDSLOTS_Args[] = {\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** CLUSTER ADDSLOTSRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER ADDSLOTSRANGE history */\n#define CLUSTER_ADDSLOTSRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER ADDSLOTSRANGE tips */\n#define CLUSTER_ADDSLOTSRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER ADDSLOTSRANGE key specs */\n#define CLUSTER_ADDSLOTSRANGE_Keyspecs NULL\n#endif\n\n/* CLUSTER ADDSLOTSRANGE range argument table */\nstruct COMMAND_ARG CLUSTER_ADDSLOTSRANGE_range_Subargs[] = {\n{MAKE_ARG(\"start-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER ADDSLOTSRANGE argument table */\nstruct COMMAND_ARG CLUSTER_ADDSLOTSRANGE_Args[] = 
{\n{MAKE_ARG(\"range\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=CLUSTER_ADDSLOTSRANGE_range_Subargs},\n};\n\n/********** CLUSTER BUMPEPOCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER BUMPEPOCH history */\n#define CLUSTER_BUMPEPOCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER BUMPEPOCH tips */\nconst char *CLUSTER_BUMPEPOCH_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER BUMPEPOCH key specs */\n#define CLUSTER_BUMPEPOCH_Keyspecs NULL\n#endif\n\n/********** CLUSTER COUNT_FAILURE_REPORTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER COUNT_FAILURE_REPORTS history */\n#define CLUSTER_COUNT_FAILURE_REPORTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER COUNT_FAILURE_REPORTS tips */\nconst char *CLUSTER_COUNT_FAILURE_REPORTS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER COUNT_FAILURE_REPORTS key specs */\n#define CLUSTER_COUNT_FAILURE_REPORTS_Keyspecs NULL\n#endif\n\n/* CLUSTER COUNT_FAILURE_REPORTS argument table */\nstruct COMMAND_ARG CLUSTER_COUNT_FAILURE_REPORTS_Args[] = {\n{MAKE_ARG(\"node-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER COUNTKEYSINSLOT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER COUNTKEYSINSLOT history */\n#define CLUSTER_COUNTKEYSINSLOT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER COUNTKEYSINSLOT tips */\n#define CLUSTER_COUNTKEYSINSLOT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER COUNTKEYSINSLOT key specs */\n#define CLUSTER_COUNTKEYSINSLOT_Keyspecs NULL\n#endif\n\n/* CLUSTER COUNTKEYSINSLOT argument table */\nstruct COMMAND_ARG CLUSTER_COUNTKEYSINSLOT_Args[] = {\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER DELSLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* 
CLUSTER DELSLOTS history */\n#define CLUSTER_DELSLOTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER DELSLOTS tips */\n#define CLUSTER_DELSLOTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER DELSLOTS key specs */\n#define CLUSTER_DELSLOTS_Keyspecs NULL\n#endif\n\n/* CLUSTER DELSLOTS argument table */\nstruct COMMAND_ARG CLUSTER_DELSLOTS_Args[] = {\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** CLUSTER DELSLOTSRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER DELSLOTSRANGE history */\n#define CLUSTER_DELSLOTSRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER DELSLOTSRANGE tips */\n#define CLUSTER_DELSLOTSRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER DELSLOTSRANGE key specs */\n#define CLUSTER_DELSLOTSRANGE_Keyspecs NULL\n#endif\n\n/* CLUSTER DELSLOTSRANGE range argument table */\nstruct COMMAND_ARG CLUSTER_DELSLOTSRANGE_range_Subargs[] = {\n{MAKE_ARG(\"start-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER DELSLOTSRANGE argument table */\nstruct COMMAND_ARG CLUSTER_DELSLOTSRANGE_Args[] = {\n{MAKE_ARG(\"range\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=CLUSTER_DELSLOTSRANGE_range_Subargs},\n};\n\n/********** CLUSTER FAILOVER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER FAILOVER history */\n#define CLUSTER_FAILOVER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER FAILOVER tips */\n#define CLUSTER_FAILOVER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER FAILOVER key specs */\n#define CLUSTER_FAILOVER_Keyspecs NULL\n#endif\n\n/* CLUSTER FAILOVER options argument table */\nstruct COMMAND_ARG CLUSTER_FAILOVER_options_Subargs[] = 
{\n{MAKE_ARG(\"force\",ARG_TYPE_PURE_TOKEN,-1,\"FORCE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"takeover\",ARG_TYPE_PURE_TOKEN,-1,\"TAKEOVER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER FAILOVER argument table */\nstruct COMMAND_ARG CLUSTER_FAILOVER_Args[] = {\n{MAKE_ARG(\"options\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=CLUSTER_FAILOVER_options_Subargs},\n};\n\n/********** CLUSTER FLUSHSLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER FLUSHSLOTS history */\n#define CLUSTER_FLUSHSLOTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER FLUSHSLOTS tips */\n#define CLUSTER_FLUSHSLOTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER FLUSHSLOTS key specs */\n#define CLUSTER_FLUSHSLOTS_Keyspecs NULL\n#endif\n\n/********** CLUSTER FORGET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER FORGET history */\n#define CLUSTER_FORGET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER FORGET tips */\n#define CLUSTER_FORGET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER FORGET key specs */\n#define CLUSTER_FORGET_Keyspecs NULL\n#endif\n\n/* CLUSTER FORGET argument table */\nstruct COMMAND_ARG CLUSTER_FORGET_Args[] = {\n{MAKE_ARG(\"node-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER GETKEYSINSLOT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER GETKEYSINSLOT history */\n#define CLUSTER_GETKEYSINSLOT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER GETKEYSINSLOT tips */\nconst char *CLUSTER_GETKEYSINSLOT_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER GETKEYSINSLOT key specs */\n#define CLUSTER_GETKEYSINSLOT_Keyspecs NULL\n#endif\n\n/* CLUSTER GETKEYSINSLOT argument table */\nstruct COMMAND_ARG CLUSTER_GETKEYSINSLOT_Args[] = 
{\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER HELP history */\n#define CLUSTER_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER HELP tips */\n#define CLUSTER_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER HELP key specs */\n#define CLUSTER_HELP_Keyspecs NULL\n#endif\n\n/********** CLUSTER INFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER INFO history */\n#define CLUSTER_INFO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER INFO tips */\nconst char *CLUSTER_INFO_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER INFO key specs */\n#define CLUSTER_INFO_Keyspecs NULL\n#endif\n\n/********** CLUSTER KEYSLOT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER KEYSLOT history */\n#define CLUSTER_KEYSLOT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER KEYSLOT tips */\n#define CLUSTER_KEYSLOT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER KEYSLOT key specs */\n#define CLUSTER_KEYSLOT_Keyspecs NULL\n#endif\n\n/* CLUSTER KEYSLOT argument table */\nstruct COMMAND_ARG CLUSTER_KEYSLOT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER LINKS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER LINKS history */\n#define CLUSTER_LINKS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER LINKS tips */\nconst char *CLUSTER_LINKS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER LINKS key specs */\n#define CLUSTER_LINKS_Keyspecs NULL\n#endif\n\n/********** CLUSTER MEET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER MEET history 
*/\ncommandHistory CLUSTER_MEET_History[] = {\n{\"4.0.0\",\"Added the optional `cluster_bus_port` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER MEET tips */\n#define CLUSTER_MEET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER MEET key specs */\n#define CLUSTER_MEET_Keyspecs NULL\n#endif\n\n/* CLUSTER MEET argument table */\nstruct COMMAND_ARG CLUSTER_MEET_Args[] = {\n{MAKE_ARG(\"ip\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"cluster-bus-port\",ARG_TYPE_INTEGER,-1,NULL,NULL,\"4.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** CLUSTER MIGRATION ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER MIGRATION history */\n#define CLUSTER_MIGRATION_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER MIGRATION tips */\n#define CLUSTER_MIGRATION_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER MIGRATION key specs */\n#define CLUSTER_MIGRATION_Keyspecs NULL\n#endif\n\n/* CLUSTER MIGRATION subcommand import argument table */\nstruct COMMAND_ARG CLUSTER_MIGRATION_subcommand_import_Subargs[] = {\n{MAKE_ARG(\"start-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER MIGRATION subcommand cancel argument table */\nstruct COMMAND_ARG CLUSTER_MIGRATION_subcommand_cancel_Subargs[] = {\n{MAKE_ARG(\"task-id\",ARG_TYPE_STRING,-1,\"ID\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"all\",ARG_TYPE_PURE_TOKEN,-1,\"ALL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER MIGRATION subcommand status argument table */\nstruct COMMAND_ARG CLUSTER_MIGRATION_subcommand_status_Subargs[] = {\n{MAKE_ARG(\"task-id\",ARG_TYPE_STRING,-1,\"ID\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"all\",ARG_TYPE_PURE_TOKEN,-1,\"ALL\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* CLUSTER MIGRATION 
subcommand argument table */\nstruct COMMAND_ARG CLUSTER_MIGRATION_subcommand_Subargs[] = {\n{MAKE_ARG(\"import\",ARG_TYPE_BLOCK,-1,\"IMPORT\",NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=CLUSTER_MIGRATION_subcommand_import_Subargs},\n{MAKE_ARG(\"cancel\",ARG_TYPE_ONEOF,-1,\"CANCEL\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_MIGRATION_subcommand_cancel_Subargs},\n{MAKE_ARG(\"status\",ARG_TYPE_ONEOF,-1,\"STATUS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_MIGRATION_subcommand_status_Subargs},\n};\n\n/* CLUSTER MIGRATION argument table */\nstruct COMMAND_ARG CLUSTER_MIGRATION_Args[] = {\n{MAKE_ARG(\"subcommand\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=CLUSTER_MIGRATION_subcommand_Subargs},\n};\n\n/********** CLUSTER MYID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER MYID history */\n#define CLUSTER_MYID_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER MYID tips */\n#define CLUSTER_MYID_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER MYID key specs */\n#define CLUSTER_MYID_Keyspecs NULL\n#endif\n\n/********** CLUSTER MYSHARDID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER MYSHARDID history */\n#define CLUSTER_MYSHARDID_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER MYSHARDID tips */\nconst char *CLUSTER_MYSHARDID_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER MYSHARDID key specs */\n#define CLUSTER_MYSHARDID_Keyspecs NULL\n#endif\n\n/********** CLUSTER NODES ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER NODES history */\n#define CLUSTER_NODES_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER NODES tips */\nconst char *CLUSTER_NODES_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER NODES key specs */\n#define CLUSTER_NODES_Keyspecs NULL\n#endif\n\n/********** CLUSTER REPLICAS 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER REPLICAS history */\n#define CLUSTER_REPLICAS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER REPLICAS tips */\nconst char *CLUSTER_REPLICAS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER REPLICAS key specs */\n#define CLUSTER_REPLICAS_Keyspecs NULL\n#endif\n\n/* CLUSTER REPLICAS argument table */\nstruct COMMAND_ARG CLUSTER_REPLICAS_Args[] = {\n{MAKE_ARG(\"node-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER REPLICATE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER REPLICATE history */\n#define CLUSTER_REPLICATE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER REPLICATE tips */\n#define CLUSTER_REPLICATE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER REPLICATE key specs */\n#define CLUSTER_REPLICATE_Keyspecs NULL\n#endif\n\n/* CLUSTER REPLICATE argument table */\nstruct COMMAND_ARG CLUSTER_REPLICATE_Args[] = {\n{MAKE_ARG(\"node-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER RESET history */\n#define CLUSTER_RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER RESET tips */\n#define CLUSTER_RESET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER RESET key specs */\n#define CLUSTER_RESET_Keyspecs NULL\n#endif\n\n/* CLUSTER RESET reset_type argument table */\nstruct COMMAND_ARG CLUSTER_RESET_reset_type_Subargs[] = {\n{MAKE_ARG(\"hard\",ARG_TYPE_PURE_TOKEN,-1,\"HARD\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"soft\",ARG_TYPE_PURE_TOKEN,-1,\"SOFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER RESET argument table */\nstruct COMMAND_ARG CLUSTER_RESET_Args[] = 
{\n{MAKE_ARG(\"reset-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=CLUSTER_RESET_reset_type_Subargs},\n};\n\n/********** CLUSTER SAVECONFIG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SAVECONFIG history */\n#define CLUSTER_SAVECONFIG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SAVECONFIG tips */\n#define CLUSTER_SAVECONFIG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SAVECONFIG key specs */\n#define CLUSTER_SAVECONFIG_Keyspecs NULL\n#endif\n\n/********** CLUSTER SET_CONFIG_EPOCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SET_CONFIG_EPOCH history */\n#define CLUSTER_SET_CONFIG_EPOCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SET_CONFIG_EPOCH tips */\n#define CLUSTER_SET_CONFIG_EPOCH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SET_CONFIG_EPOCH key specs */\n#define CLUSTER_SET_CONFIG_EPOCH_Keyspecs NULL\n#endif\n\n/* CLUSTER SET_CONFIG_EPOCH argument table */\nstruct COMMAND_ARG CLUSTER_SET_CONFIG_EPOCH_Args[] = {\n{MAKE_ARG(\"config-epoch\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER SETSLOT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SETSLOT history */\n#define CLUSTER_SETSLOT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SETSLOT tips */\n#define CLUSTER_SETSLOT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SETSLOT key specs */\n#define CLUSTER_SETSLOT_Keyspecs NULL\n#endif\n\n/* CLUSTER SETSLOT subcommand argument table */\nstruct COMMAND_ARG CLUSTER_SETSLOT_subcommand_Subargs[] = 
{\n{MAKE_ARG(\"importing\",ARG_TYPE_STRING,-1,\"IMPORTING\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"node-id\"},\n{MAKE_ARG(\"migrating\",ARG_TYPE_STRING,-1,\"MIGRATING\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"node-id\"},\n{MAKE_ARG(\"node\",ARG_TYPE_STRING,-1,\"NODE\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"node-id\"},\n{MAKE_ARG(\"stable\",ARG_TYPE_PURE_TOKEN,-1,\"STABLE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER SETSLOT argument table */\nstruct COMMAND_ARG CLUSTER_SETSLOT_Args[] = {\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"subcommand\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=CLUSTER_SETSLOT_subcommand_Subargs},\n};\n\n/********** CLUSTER SHARDS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SHARDS history */\n#define CLUSTER_SHARDS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SHARDS tips */\nconst char *CLUSTER_SHARDS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SHARDS key specs */\n#define CLUSTER_SHARDS_Keyspecs NULL\n#endif\n\n/********** CLUSTER SLAVES ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SLAVES history */\n#define CLUSTER_SLAVES_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SLAVES tips */\nconst char *CLUSTER_SLAVES_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SLAVES key specs */\n#define CLUSTER_SLAVES_Keyspecs NULL\n#endif\n\n/* CLUSTER SLAVES argument table */\nstruct COMMAND_ARG CLUSTER_SLAVES_Args[] = {\n{MAKE_ARG(\"node-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLUSTER SLOT_STATS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SLOT_STATS history */\n#define CLUSTER_SLOT_STATS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SLOT_STATS tips */\nconst char 
*CLUSTER_SLOT_STATS_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SLOT_STATS key specs */\n#define CLUSTER_SLOT_STATS_Keyspecs NULL\n#endif\n\n/* CLUSTER SLOT_STATS filter slotsrange argument table */\nstruct COMMAND_ARG CLUSTER_SLOT_STATS_filter_slotsrange_Subargs[] = {\n{MAKE_ARG(\"start-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER SLOT_STATS filter orderby order argument table */\nstruct COMMAND_ARG CLUSTER_SLOT_STATS_filter_orderby_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER SLOT_STATS filter orderby argument table */\nstruct COMMAND_ARG CLUSTER_SLOT_STATS_filter_orderby_Subargs[] = {\n{MAKE_ARG(\"metric\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_INTEGER,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=CLUSTER_SLOT_STATS_filter_orderby_order_Subargs},\n};\n\n/* CLUSTER SLOT_STATS filter argument table */\nstruct COMMAND_ARG CLUSTER_SLOT_STATS_filter_Subargs[] = {\n{MAKE_ARG(\"slotsrange\",ARG_TYPE_BLOCK,-1,\"SLOTSRANGE\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_SLOT_STATS_filter_slotsrange_Subargs},\n{MAKE_ARG(\"orderby\",ARG_TYPE_BLOCK,-1,\"ORDERBY\",NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=CLUSTER_SLOT_STATS_filter_orderby_Subargs},\n};\n\n/* CLUSTER SLOT_STATS argument table */\nstruct COMMAND_ARG CLUSTER_SLOT_STATS_Args[] = {\n{MAKE_ARG(\"filter\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_SLOT_STATS_filter_Subargs},\n};\n\n/********** CLUSTER SLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SLOTS history 
*/\ncommandHistory CLUSTER_SLOTS_History[] = {\n{\"4.0.0\",\"Added node IDs.\"},\n{\"7.0.0\",\"Added additional networking metadata field.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SLOTS tips */\nconst char *CLUSTER_SLOTS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SLOTS key specs */\n#define CLUSTER_SLOTS_Keyspecs NULL\n#endif\n\n/********** CLUSTER SYNCSLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER SYNCSLOTS history */\n#define CLUSTER_SYNCSLOTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER SYNCSLOTS tips */\nconst char *CLUSTER_SYNCSLOTS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER SYNCSLOTS key specs */\n#define CLUSTER_SYNCSLOTS_Keyspecs NULL\n#endif\n\n/* CLUSTER SYNCSLOTS subcommand sync slot_range argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_subcommand_sync_slot_range_Subargs[] = {\n{MAKE_ARG(\"start-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end-slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER SYNCSLOTS subcommand sync argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_subcommand_sync_Subargs[] = {\n{MAKE_ARG(\"task-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"slot-range\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=CLUSTER_SYNCSLOTS_subcommand_sync_slot_range_Subargs},\n};\n\n/* CLUSTER SYNCSLOTS subcommand ack argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_subcommand_ack_Subargs[] = {\n{MAKE_ARG(\"state\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLUSTER SYNCSLOTS subcommand conf argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_subcommand_conf_Subargs[] = 
{\n{MAKE_ARG(\"option\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* CLUSTER SYNCSLOTS subcommand argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_subcommand_Subargs[] = {\n{MAKE_ARG(\"sync\",ARG_TYPE_BLOCK,-1,\"SYNC\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_SYNCSLOTS_subcommand_sync_Subargs},\n{MAKE_ARG(\"task-id\",ARG_TYPE_STRING,-1,\"RDBCHANNEL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"snapshot-eof\",ARG_TYPE_PURE_TOKEN,-1,\"SNAPSHOT-EOF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stream-eof\",ARG_TYPE_PURE_TOKEN,-1,\"STREAM-EOF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ack\",ARG_TYPE_BLOCK,-1,\"ACK\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_SYNCSLOTS_subcommand_ack_Subargs},\n{MAKE_ARG(\"error\",ARG_TYPE_STRING,-1,\"FAIL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"conf\",ARG_TYPE_BLOCK,-1,\"CONF\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLUSTER_SYNCSLOTS_subcommand_conf_Subargs},\n};\n\n/* CLUSTER SYNCSLOTS argument table */\nstruct COMMAND_ARG CLUSTER_SYNCSLOTS_Args[] = {\n{MAKE_ARG(\"subcommand\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,7,NULL),.subargs=CLUSTER_SYNCSLOTS_subcommand_Subargs},\n};\n\n/* CLUSTER command table */\nstruct COMMAND_STRUCT CLUSTER_Subcommands[] = {\n{MAKE_CMD(\"addslots\",\"Assigns new hash slots to a node.\",\"O(N) where N is the total number of hash slot arguments\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_ADDSLOTS_History,0,CLUSTER_ADDSLOTS_Tips,0,clusterCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_ADDSLOTS_Keyspecs,0,NULL,1),.args=CLUSTER_ADDSLOTS_Args},\n{MAKE_CMD(\"addslotsrange\",\"Assigns new hash slot ranges to a node.\",\"O(N) where N is the total number of the slots between the start slot and end slot 
arguments.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_ADDSLOTSRANGE_History,0,CLUSTER_ADDSLOTSRANGE_Tips,0,clusterCommand,-4,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_ADDSLOTSRANGE_Keyspecs,0,NULL,1),.args=CLUSTER_ADDSLOTSRANGE_Args},\n{MAKE_CMD(\"bumpepoch\",\"Advances the cluster config epoch.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_BUMPEPOCH_History,0,CLUSTER_BUMPEPOCH_Tips,1,clusterCommand,2,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_BUMPEPOCH_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"count-failure-reports\",\"Returns the number of active failure reports for a node.\",\"O(N) where N is the number of failure reports\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_COUNT_FAILURE_REPORTS_History,0,CLUSTER_COUNT_FAILURE_REPORTS_Tips,1,clusterCommand,3,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,CLUSTER_COUNT_FAILURE_REPORTS_Keyspecs,0,NULL,1),.args=CLUSTER_COUNT_FAILURE_REPORTS_Args},\n{MAKE_CMD(\"countkeysinslot\",\"Returns the number of keys in a hash slot.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_COUNTKEYSINSLOT_History,0,CLUSTER_COUNTKEYSINSLOT_Tips,0,clusterCommand,3,CMD_STALE,0,CLUSTER_COUNTKEYSINSLOT_Keyspecs,0,NULL,1),.args=CLUSTER_COUNTKEYSINSLOT_Args},\n{MAKE_CMD(\"delslots\",\"Sets hash slots as unbound for a node.\",\"O(N) where N is the total number of hash slot arguments\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_DELSLOTS_History,0,CLUSTER_DELSLOTS_Tips,0,clusterCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_DELSLOTS_Keyspecs,0,NULL,1),.args=CLUSTER_DELSLOTS_Args},\n{MAKE_CMD(\"delslotsrange\",\"Sets hash slot ranges as unbound for a node.\",\"O(N) where N is the total number of the slots between the start slot and end slot 
arguments.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_DELSLOTSRANGE_History,0,CLUSTER_DELSLOTSRANGE_Tips,0,clusterCommand,-4,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_DELSLOTSRANGE_Keyspecs,0,NULL,1),.args=CLUSTER_DELSLOTSRANGE_Args},\n{MAKE_CMD(\"failover\",\"Forces a replica to perform a manual failover of its master.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_FAILOVER_History,0,CLUSTER_FAILOVER_Tips,0,clusterCommand,-2,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_FAILOVER_Keyspecs,0,NULL,1),.args=CLUSTER_FAILOVER_Args},\n{MAKE_CMD(\"flushslots\",\"Deletes all slots information from a node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_FLUSHSLOTS_History,0,CLUSTER_FLUSHSLOTS_Tips,0,clusterCommand,2,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_FLUSHSLOTS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"forget\",\"Removes a node from the nodes table.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_FORGET_History,0,CLUSTER_FORGET_Tips,0,clusterCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_FORGET_Keyspecs,0,NULL,1),.args=CLUSTER_FORGET_Args},\n{MAKE_CMD(\"getkeysinslot\",\"Returns the key names in a hash slot.\",\"O(N) where N is the number of requested keys\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_GETKEYSINSLOT_History,0,CLUSTER_GETKEYSINSLOT_Tips,1,clusterCommand,4,CMD_STALE,0,CLUSTER_GETKEYSINSLOT_Keyspecs,0,NULL,2),.args=CLUSTER_GETKEYSINSLOT_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_HELP_History,0,CLUSTER_HELP_Tips,0,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"info\",\"Returns information about the state of a 
node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_INFO_History,0,CLUSTER_INFO_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_INFO_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"keyslot\",\"Returns the hash slot for a key.\",\"O(N) where N is the number of bytes in the key\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_KEYSLOT_History,0,CLUSTER_KEYSLOT_Tips,0,clusterCommand,3,CMD_LOADING|CMD_STALE,0,CLUSTER_KEYSLOT_Keyspecs,0,NULL,1),.args=CLUSTER_KEYSLOT_Args},\n{MAKE_CMD(\"links\",\"Returns a list of all TCP links to and from peer nodes.\",\"O(N) where N is the total number of Cluster nodes\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_LINKS_History,0,CLUSTER_LINKS_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_LINKS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"meet\",\"Forces a node to handshake with another node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_MEET_History,1,CLUSTER_MEET_Tips,0,clusterCommand,-4,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_MEET_Keyspecs,0,NULL,3),.args=CLUSTER_MEET_Args},\n{MAKE_CMD(\"migration\",\"Start, monitor and cancel slot migration.\",\"O(N) where N is the total number of the slots between the start slot and end slot arguments.\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_MIGRATION_History,0,CLUSTER_MIGRATION_Tips,0,clusterCommand,-4,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_MIGRATION_Keyspecs,0,NULL,1),.args=CLUSTER_MIGRATION_Args},\n{MAKE_CMD(\"myid\",\"Returns the ID of a node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_MYID_History,0,CLUSTER_MYID_Tips,0,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_MYID_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"myshardid\",\"Returns the shard ID of a 
node.\",\"O(1)\",\"7.2.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_MYSHARDID_History,0,CLUSTER_MYSHARDID_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_MYSHARDID_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"nodes\",\"Returns the cluster configuration for a node.\",\"O(N) where N is the total number of Cluster nodes\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_NODES_History,0,CLUSTER_NODES_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_NODES_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"replicas\",\"Lists the replica nodes of a master node.\",\"O(N) where N is the number of replicas.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_REPLICAS_History,0,CLUSTER_REPLICAS_Tips,1,clusterCommand,3,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,CLUSTER_REPLICAS_Keyspecs,0,NULL,1),.args=CLUSTER_REPLICAS_Args},\n{MAKE_CMD(\"replicate\",\"Configures a node as a replica of a master node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_REPLICATE_History,0,CLUSTER_REPLICATE_Tips,0,clusterCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_REPLICATE_Keyspecs,0,NULL,1),.args=CLUSTER_REPLICATE_Args},\n{MAKE_CMD(\"reset\",\"Resets a node.\",\"O(N) where N is the number of known nodes. 
The command may execute a FLUSHALL as a side effect.\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_RESET_History,0,CLUSTER_RESET_Tips,0,clusterCommand,-2,CMD_ADMIN|CMD_STALE|CMD_NOSCRIPT,0,CLUSTER_RESET_Keyspecs,0,NULL,1),.args=CLUSTER_RESET_Args},\n{MAKE_CMD(\"saveconfig\",\"Forces a node to save the cluster configuration to disk.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SAVECONFIG_History,0,CLUSTER_SAVECONFIG_Tips,0,clusterCommand,2,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_SAVECONFIG_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"set-config-epoch\",\"Sets the configuration epoch for a new node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SET_CONFIG_EPOCH_History,0,CLUSTER_SET_CONFIG_EPOCH_Tips,0,clusterCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_SET_CONFIG_EPOCH_Keyspecs,0,NULL,1),.args=CLUSTER_SET_CONFIG_EPOCH_Args},\n{MAKE_CMD(\"setslot\",\"Binds a hash slot to a node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SETSLOT_History,0,CLUSTER_SETSLOT_Tips,0,clusterCommand,-4,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_SETSLOT_Keyspecs,0,NULL,2),.args=CLUSTER_SETSLOT_Args},\n{MAKE_CMD(\"shards\",\"Returns the mapping of cluster slots to shards.\",\"O(N) where N is the total number of cluster nodes\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SHARDS_History,0,CLUSTER_SHARDS_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_SHARDS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"slaves\",\"Lists the replica nodes of a master node.\",\"O(N) where N is the number of replicas.\",\"3.0.0\",CMD_DOC_DEPRECATED,\"`CLUSTER REPLICAS`\",\"5.0.0\",\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SLAVES_History,0,CLUSTER_SLAVES_Tips,1,clusterCommand,3,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,CLUSTER_SLAVES_Keyspecs,0,NULL,1),.args=CLUSTER_SLAVES_Args},\n{MAKE_CMD(\"slot-stats\",\"Returns an 
array of slot usage statistics for slots assigned to the current node.\",\"O(N) where N is the total number of slots based on arguments. O(N*log(N)) with ORDERBY subcommand.\",\"8.2.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SLOT_STATS_History,0,CLUSTER_SLOT_STATS_Tips,2,clusterSlotStatsCommand,-4,CMD_STALE|CMD_LOADING,0,CLUSTER_SLOT_STATS_Keyspecs,0,NULL,1),.args=CLUSTER_SLOT_STATS_Args},\n{MAKE_CMD(\"slots\",\"Returns the mapping of cluster slots to nodes.\",\"O(N) where N is the total number of Cluster nodes\",\"3.0.0\",CMD_DOC_DEPRECATED,\"`CLUSTER SHARDS`\",\"7.0.0\",\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SLOTS_History,2,CLUSTER_SLOTS_Tips,1,clusterCommand,2,CMD_LOADING|CMD_STALE,0,CLUSTER_SLOTS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"syncslots\",\"Internal command for atomic slot migration protocol between cluster nodes.\",\"O(1)\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_SYNCSLOTS_History,0,CLUSTER_SYNCSLOTS_Tips,1,clusterCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_STALE,0,CLUSTER_SYNCSLOTS_Keyspecs,0,NULL,1),.args=CLUSTER_SYNCSLOTS_Args},\n{0}\n};\n\n/********** CLUSTER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLUSTER history */\n#define CLUSTER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLUSTER tips */\n#define CLUSTER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLUSTER key specs */\n#define CLUSTER_Keyspecs NULL\n#endif\n\n/********** READONLY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* READONLY history */\n#define READONLY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* READONLY tips */\n#define READONLY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* READONLY key specs */\n#define READONLY_Keyspecs NULL\n#endif\n\n/********** READWRITE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* READWRITE history */\n#define READWRITE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* READWRITE tips */\n#define 
READWRITE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* READWRITE key specs */\n#define READWRITE_Keyspecs NULL\n#endif\n\n/********** AUTH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* AUTH history */\ncommandHistory AUTH_History[] = {\n{\"6.0.0\",\"Added ACL style (username and password).\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* AUTH tips */\n#define AUTH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* AUTH key specs */\n#define AUTH_Keyspecs NULL\n#endif\n\n/* AUTH argument table */\nstruct COMMAND_ARG AUTH_Args[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,\"6.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"password\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLIENT CACHING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT CACHING history */\n#define CLIENT_CACHING_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT CACHING tips */\n#define CLIENT_CACHING_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT CACHING key specs */\n#define CLIENT_CACHING_Keyspecs NULL\n#endif\n\n/* CLIENT CACHING mode argument table */\nstruct COMMAND_ARG CLIENT_CACHING_mode_Subargs[] = {\n{MAKE_ARG(\"yes\",ARG_TYPE_PURE_TOKEN,-1,\"YES\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"no\",ARG_TYPE_PURE_TOKEN,-1,\"NO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT CACHING argument table */\nstruct COMMAND_ARG CLIENT_CACHING_Args[] = {\n{MAKE_ARG(\"mode\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_CACHING_mode_Subargs},\n};\n\n/********** CLIENT GETNAME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT GETNAME history */\n#define CLIENT_GETNAME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT GETNAME tips */\n#define CLIENT_GETNAME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT GETNAME key specs */\n#define CLIENT_GETNAME_Keyspecs NULL\n#endif\n\n/********** CLIENT GETREDIR 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT GETREDIR history */\n#define CLIENT_GETREDIR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT GETREDIR tips */\n#define CLIENT_GETREDIR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT GETREDIR key specs */\n#define CLIENT_GETREDIR_Keyspecs NULL\n#endif\n\n/********** CLIENT HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT HELP history */\n#define CLIENT_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT HELP tips */\n#define CLIENT_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT HELP key specs */\n#define CLIENT_HELP_Keyspecs NULL\n#endif\n\n/********** CLIENT ID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT ID history */\n#define CLIENT_ID_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT ID tips */\n#define CLIENT_ID_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT ID key specs */\n#define CLIENT_ID_Keyspecs NULL\n#endif\n\n/********** CLIENT INFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT INFO history */\n#define CLIENT_INFO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT INFO tips */\nconst char *CLIENT_INFO_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT INFO key specs */\n#define CLIENT_INFO_Keyspecs NULL\n#endif\n\n/********** CLIENT KILL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT KILL history */\ncommandHistory CLIENT_KILL_History[] = {\n{\"2.8.12\",\"Added new filter format.\"},\n{\"2.8.12\",\"`ID` option.\"},\n{\"3.2.0\",\"Added `master` type for the `TYPE` option.\"},\n{\"5.0.0\",\"Replaced `slave` `TYPE` with `replica`. 
`slave` still supported for backward compatibility.\"},\n{\"6.2.0\",\"`LADDR` option.\"},\n{\"7.4.0\",\"`MAXAGE` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT KILL tips */\n#define CLIENT_KILL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT KILL key specs */\n#define CLIENT_KILL_Keyspecs NULL\n#endif\n\n/* CLIENT KILL filter new_format client_type argument table */\nstruct COMMAND_ARG CLIENT_KILL_filter_new_format_client_type_Subargs[] = {\n{MAKE_ARG(\"normal\",ARG_TYPE_PURE_TOKEN,-1,\"NORMAL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"master\",ARG_TYPE_PURE_TOKEN,-1,\"MASTER\",NULL,\"3.2.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"slave\",ARG_TYPE_PURE_TOKEN,-1,\"SLAVE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"replica\",ARG_TYPE_PURE_TOKEN,-1,\"REPLICA\",NULL,\"5.0.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pubsub\",ARG_TYPE_PURE_TOKEN,-1,\"PUBSUB\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT KILL filter new_format skipme argument table */\nstruct COMMAND_ARG CLIENT_KILL_filter_new_format_skipme_Subargs[] = {\n{MAKE_ARG(\"yes\",ARG_TYPE_PURE_TOKEN,-1,\"YES\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"no\",ARG_TYPE_PURE_TOKEN,-1,\"NO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT KILL filter new_format argument table */\nstruct COMMAND_ARG CLIENT_KILL_filter_new_format_Subargs[] = 
{\n{MAKE_ARG(\"client-id\",ARG_TYPE_INTEGER,-1,\"ID\",NULL,\"2.8.12\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"client-type\",ARG_TYPE_ONEOF,-1,\"TYPE\",NULL,\"2.8.12\",CMD_ARG_OPTIONAL,5,NULL),.subargs=CLIENT_KILL_filter_new_format_client_type_Subargs},\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,\"USER\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"addr\",ARG_TYPE_STRING,-1,\"ADDR\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL),.display_text=\"ip:port\"},\n{MAKE_ARG(\"laddr\",ARG_TYPE_STRING,-1,\"LADDR\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL),.display_text=\"ip:port\"},\n{MAKE_ARG(\"skipme\",ARG_TYPE_ONEOF,-1,\"SKIPME\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=CLIENT_KILL_filter_new_format_skipme_Subargs},\n{MAKE_ARG(\"maxage\",ARG_TYPE_INTEGER,-1,\"MAXAGE\",NULL,\"7.4.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* CLIENT KILL filter argument table */\nstruct COMMAND_ARG CLIENT_KILL_filter_Subargs[] = {\n{MAKE_ARG(\"old-format\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,\"2.8.12\"),.display_text=\"ip:port\"},\n{MAKE_ARG(\"new-format\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,7,NULL),.subargs=CLIENT_KILL_filter_new_format_Subargs},\n};\n\n/* CLIENT KILL argument table */\nstruct COMMAND_ARG CLIENT_KILL_Args[] = {\n{MAKE_ARG(\"filter\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_KILL_filter_Subargs},\n};\n\n/********** CLIENT LIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT LIST history */\ncommandHistory CLIENT_LIST_History[] = {\n{\"2.8.12\",\"Added unique client `id` field.\"},\n{\"5.0.0\",\"Added optional `TYPE` filter.\"},\n{\"6.0.0\",\"Added `user` field.\"},\n{\"6.2.0\",\"Added `argv-mem`, `tot-mem`, `laddr` and `redir` fields and the optional `ID` filter.\"},\n{\"7.0.0\",\"Added `resp`, `multi-mem`, `rbs` and `rbp` fields.\"},\n{\"7.0.3\",\"Added `ssub` field.\"},\n{\"7.2.0\",\"Added `lib-name` and `lib-ver` fields.\"},\n{\"7.4.0\",\"Added `watch` field.\"},\n{\"8.0.0\",\"Added `io-thread` 
field.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT LIST tips */\nconst char *CLIENT_LIST_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT LIST key specs */\n#define CLIENT_LIST_Keyspecs NULL\n#endif\n\n/* CLIENT LIST client_type argument table */\nstruct COMMAND_ARG CLIENT_LIST_client_type_Subargs[] = {\n{MAKE_ARG(\"normal\",ARG_TYPE_PURE_TOKEN,-1,\"NORMAL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"master\",ARG_TYPE_PURE_TOKEN,-1,\"MASTER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"replica\",ARG_TYPE_PURE_TOKEN,-1,\"REPLICA\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pubsub\",ARG_TYPE_PURE_TOKEN,-1,\"PUBSUB\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT LIST argument table */\nstruct COMMAND_ARG CLIENT_LIST_Args[] = {\n{MAKE_ARG(\"client-type\",ARG_TYPE_ONEOF,-1,\"TYPE\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,4,NULL),.subargs=CLIENT_LIST_client_type_Subargs},\n{MAKE_ARG(\"client-id\",ARG_TYPE_INTEGER,-1,\"ID\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** CLIENT NO_EVICT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT NO_EVICT history */\n#define CLIENT_NO_EVICT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT NO_EVICT tips */\n#define CLIENT_NO_EVICT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT NO_EVICT key specs */\n#define CLIENT_NO_EVICT_Keyspecs NULL\n#endif\n\n/* CLIENT NO_EVICT enabled argument table */\nstruct COMMAND_ARG CLIENT_NO_EVICT_enabled_Subargs[] = {\n{MAKE_ARG(\"on\",ARG_TYPE_PURE_TOKEN,-1,\"ON\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"off\",ARG_TYPE_PURE_TOKEN,-1,\"OFF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT NO_EVICT argument table */\nstruct COMMAND_ARG CLIENT_NO_EVICT_Args[] = {\n{MAKE_ARG(\"enabled\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_NO_EVICT_enabled_Subargs},\n};\n\n/********** CLIENT NO_TOUCH ********************/\n\n#ifndef 
SKIP_CMD_HISTORY_TABLE\n/* CLIENT NO_TOUCH history */\n#define CLIENT_NO_TOUCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT NO_TOUCH tips */\n#define CLIENT_NO_TOUCH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT NO_TOUCH key specs */\n#define CLIENT_NO_TOUCH_Keyspecs NULL\n#endif\n\n/* CLIENT NO_TOUCH enabled argument table */\nstruct COMMAND_ARG CLIENT_NO_TOUCH_enabled_Subargs[] = {\n{MAKE_ARG(\"on\",ARG_TYPE_PURE_TOKEN,-1,\"ON\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"off\",ARG_TYPE_PURE_TOKEN,-1,\"OFF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT NO_TOUCH argument table */\nstruct COMMAND_ARG CLIENT_NO_TOUCH_Args[] = {\n{MAKE_ARG(\"enabled\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_NO_TOUCH_enabled_Subargs},\n};\n\n/********** CLIENT PAUSE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT PAUSE history */\ncommandHistory CLIENT_PAUSE_History[] = {\n{\"6.2.0\",\"`CLIENT PAUSE WRITE` mode added along with the `mode` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT PAUSE tips */\n#define CLIENT_PAUSE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT PAUSE key specs */\n#define CLIENT_PAUSE_Keyspecs NULL\n#endif\n\n/* CLIENT PAUSE mode argument table */\nstruct COMMAND_ARG CLIENT_PAUSE_mode_Subargs[] = {\n{MAKE_ARG(\"write\",ARG_TYPE_PURE_TOKEN,-1,\"WRITE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"all\",ARG_TYPE_PURE_TOKEN,-1,\"ALL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT PAUSE argument table */\nstruct COMMAND_ARG CLIENT_PAUSE_Args[] = {\n{MAKE_ARG(\"timeout\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mode\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=CLIENT_PAUSE_mode_Subargs},\n};\n\n/********** CLIENT REPLY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT REPLY history */\n#define CLIENT_REPLY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* 
CLIENT REPLY tips */\n#define CLIENT_REPLY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT REPLY key specs */\n#define CLIENT_REPLY_Keyspecs NULL\n#endif\n\n/* CLIENT REPLY action argument table */\nstruct COMMAND_ARG CLIENT_REPLY_action_Subargs[] = {\n{MAKE_ARG(\"on\",ARG_TYPE_PURE_TOKEN,-1,\"ON\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"off\",ARG_TYPE_PURE_TOKEN,-1,\"OFF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"skip\",ARG_TYPE_PURE_TOKEN,-1,\"SKIP\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT REPLY argument table */\nstruct COMMAND_ARG CLIENT_REPLY_Args[] = {\n{MAKE_ARG(\"action\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=CLIENT_REPLY_action_Subargs},\n};\n\n/********** CLIENT SETINFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT SETINFO history */\n#define CLIENT_SETINFO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT SETINFO tips */\nconst char *CLIENT_SETINFO_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT SETINFO key specs */\n#define CLIENT_SETINFO_Keyspecs NULL\n#endif\n\n/* CLIENT SETINFO attr argument table */\nstruct COMMAND_ARG CLIENT_SETINFO_attr_Subargs[] = {\n{MAKE_ARG(\"libname\",ARG_TYPE_STRING,-1,\"LIB-NAME\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"libver\",ARG_TYPE_STRING,-1,\"LIB-VER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT SETINFO argument table */\nstruct COMMAND_ARG CLIENT_SETINFO_Args[] = {\n{MAKE_ARG(\"attr\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_SETINFO_attr_Subargs},\n};\n\n/********** CLIENT SETNAME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT SETNAME history */\n#define CLIENT_SETNAME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT SETNAME tips */\nconst char *CLIENT_SETNAME_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef 
SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT SETNAME key specs */\n#define CLIENT_SETNAME_Keyspecs NULL\n#endif\n\n/* CLIENT SETNAME argument table */\nstruct COMMAND_ARG CLIENT_SETNAME_Args[] = {\n{MAKE_ARG(\"connection-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** CLIENT TRACKING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT TRACKING history */\n#define CLIENT_TRACKING_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT TRACKING tips */\n#define CLIENT_TRACKING_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT TRACKING key specs */\n#define CLIENT_TRACKING_Keyspecs NULL\n#endif\n\n/* CLIENT TRACKING status argument table */\nstruct COMMAND_ARG CLIENT_TRACKING_status_Subargs[] = {\n{MAKE_ARG(\"on\",ARG_TYPE_PURE_TOKEN,-1,\"ON\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"off\",ARG_TYPE_PURE_TOKEN,-1,\"OFF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT TRACKING argument table */\nstruct COMMAND_ARG CLIENT_TRACKING_Args[] = {\n{MAKE_ARG(\"status\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=CLIENT_TRACKING_status_Subargs},\n{MAKE_ARG(\"client-id\",ARG_TYPE_INTEGER,-1,\"REDIRECT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"prefix\",ARG_TYPE_STRING,-1,\"PREFIX\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE|CMD_ARG_MULTIPLE_TOKEN,0,NULL)},\n{MAKE_ARG(\"bcast\",ARG_TYPE_PURE_TOKEN,-1,\"BCAST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"optin\",ARG_TYPE_PURE_TOKEN,-1,\"OPTIN\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"optout\",ARG_TYPE_PURE_TOKEN,-1,\"OPTOUT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"noloop\",ARG_TYPE_PURE_TOKEN,-1,\"NOLOOP\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** CLIENT TRACKINGINFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT TRACKINGINFO history */\n#define CLIENT_TRACKINGINFO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT TRACKINGINFO tips */\n#define 
CLIENT_TRACKINGINFO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT TRACKINGINFO key specs */\n#define CLIENT_TRACKINGINFO_Keyspecs NULL\n#endif\n\n/********** CLIENT UNBLOCK ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT UNBLOCK history */\n#define CLIENT_UNBLOCK_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT UNBLOCK tips */\n#define CLIENT_UNBLOCK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT UNBLOCK key specs */\n#define CLIENT_UNBLOCK_Keyspecs NULL\n#endif\n\n/* CLIENT UNBLOCK unblock_type argument table */\nstruct COMMAND_ARG CLIENT_UNBLOCK_unblock_type_Subargs[] = {\n{MAKE_ARG(\"timeout\",ARG_TYPE_PURE_TOKEN,-1,\"TIMEOUT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"error\",ARG_TYPE_PURE_TOKEN,-1,\"ERROR\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CLIENT UNBLOCK argument table */\nstruct COMMAND_ARG CLIENT_UNBLOCK_Args[] = {\n{MAKE_ARG(\"client-id\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unblock-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=CLIENT_UNBLOCK_unblock_type_Subargs},\n};\n\n/********** CLIENT UNPAUSE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT UNPAUSE history */\n#define CLIENT_UNPAUSE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT UNPAUSE tips */\n#define CLIENT_UNPAUSE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT UNPAUSE key specs */\n#define CLIENT_UNPAUSE_Keyspecs NULL\n#endif\n\n/* CLIENT command table */\nstruct COMMAND_STRUCT CLIENT_Subcommands[] = {\n{MAKE_CMD(\"caching\",\"Instructs the server whether to track the keys in the next request.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_CACHING_History,0,CLIENT_CACHING_Tips,0,clientCommand,3,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_CACHING_Keyspecs,0,NULL,1),.args=CLIENT_CACHING_Args},\n{MAKE_CMD(\"getname\",\"Returns 
the name of the connection.\",\"O(1)\",\"2.6.9\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_GETNAME_History,0,CLIENT_GETNAME_Tips,0,clientCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_GETNAME_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"getredir\",\"Returns the client ID to which the connection's tracking notifications are redirected.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_GETREDIR_History,0,CLIENT_GETREDIR_Tips,0,clientCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_GETREDIR_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_HELP_History,0,CLIENT_HELP_Tips,0,clientCommand,2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"id\",\"Returns the unique client ID of the connection.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_ID_History,0,CLIENT_ID_Tips,0,clientCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_ID_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"info\",\"Returns information about the connection.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_INFO_History,0,CLIENT_INFO_Tips,1,clientCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_INFO_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"kill\",\"Terminates open connections.\",\"O(N) where N is the number of client connections\",\"2.4.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_KILL_History,6,CLIENT_KILL_Tips,0,clientCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_KILL_Keyspecs,0,NULL,1),.args=CLIENT_KILL_Args},\n{MAKE_CMD(\"list\",\"Lists open 
connections.\",\"O(N) where N is the number of client connections\",\"2.4.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_LIST_History,9,CLIENT_LIST_Tips,1,clientCommand,-2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_LIST_Keyspecs,0,NULL,2),.args=CLIENT_LIST_Args},\n{MAKE_CMD(\"no-evict\",\"Sets the client eviction mode of the connection.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_NO_EVICT_History,0,CLIENT_NO_EVICT_Tips,0,clientCommand,3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_NO_EVICT_Keyspecs,0,NULL,1),.args=CLIENT_NO_EVICT_Args},\n{MAKE_CMD(\"no-touch\",\"Controls whether commands sent by the client affect the LRU/LFU of accessed keys.\",\"O(1)\",\"7.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_NO_TOUCH_History,0,CLIENT_NO_TOUCH_Tips,0,clientCommand,3,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,ACL_CATEGORY_CONNECTION,CLIENT_NO_TOUCH_Keyspecs,0,NULL,1),.args=CLIENT_NO_TOUCH_Args},\n{MAKE_CMD(\"pause\",\"Suspends commands processing.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_PAUSE_History,1,CLIENT_PAUSE_Tips,0,clientCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_PAUSE_Keyspecs,0,NULL,2),.args=CLIENT_PAUSE_Args},\n{MAKE_CMD(\"reply\",\"Instructs the server whether to reply to commands.\",\"O(1)\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_REPLY_History,0,CLIENT_REPLY_Tips,0,clientCommand,3,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_REPLY_Keyspecs,0,NULL,1),.args=CLIENT_REPLY_Args},\n{MAKE_CMD(\"setinfo\",\"Sets information specific to the client or 
connection.\",\"O(1)\",\"7.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_SETINFO_History,0,CLIENT_SETINFO_Tips,2,clientSetinfoCommand,4,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_SETINFO_Keyspecs,0,NULL,1),.args=CLIENT_SETINFO_Args},\n{MAKE_CMD(\"setname\",\"Sets the connection name.\",\"O(1)\",\"2.6.9\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_SETNAME_History,0,CLIENT_SETNAME_Tips,2,clientCommand,3,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_SETNAME_Keyspecs,0,NULL,1),.args=CLIENT_SETNAME_Args},\n{MAKE_CMD(\"tracking\",\"Controls server-assisted client-side caching for the connection.\",\"O(1). Some options may introduce additional complexity.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_TRACKING_History,0,CLIENT_TRACKING_Tips,0,clientCommand,-3,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_TRACKING_Keyspecs,0,NULL,7),.args=CLIENT_TRACKING_Args},\n{MAKE_CMD(\"trackinginfo\",\"Returns information about server-assisted client-side caching for the connection.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_TRACKINGINFO_History,0,CLIENT_TRACKINGINFO_Tips,0,clientCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_TRACKINGINFO_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"unblock\",\"Unblocks a client blocked by a blocking command from a different connection.\",\"O(log N) where N is the number of client connections\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_UNBLOCK_History,0,CLIENT_UNBLOCK_Tips,0,clientCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_UNBLOCK_Keyspecs,0,NULL,2),.args=CLIENT_UNBLOCK_Args},\n{MAKE_CMD(\"unpause\",\"Resumes processing commands from paused clients.\",\"O(N) Where N is the number of paused 
clients\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_UNPAUSE_History,0,CLIENT_UNPAUSE_Tips,0,clientCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,CLIENT_UNPAUSE_Keyspecs,0,NULL,0)},\n{0}\n};\n\n/********** CLIENT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CLIENT history */\n#define CLIENT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CLIENT tips */\n#define CLIENT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CLIENT key specs */\n#define CLIENT_Keyspecs NULL\n#endif\n\n/********** ECHO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ECHO history */\n#define ECHO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ECHO tips */\n#define ECHO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ECHO key specs */\n#define ECHO_Keyspecs NULL\n#endif\n\n/* ECHO argument table */\nstruct COMMAND_ARG ECHO_Args[] = {\n{MAKE_ARG(\"message\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HELLO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HELLO history */\ncommandHistory HELLO_History[] = {\n{\"6.2.0\",\"`protover` made optional; when called without arguments the command reports the current connection's context.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HELLO tips */\n#define HELLO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HELLO key specs */\n#define HELLO_Keyspecs NULL\n#endif\n\n/* HELLO arguments auth argument table */\nstruct COMMAND_ARG HELLO_arguments_auth_Subargs[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"password\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HELLO arguments argument table */\nstruct COMMAND_ARG HELLO_arguments_Subargs[] = 
{\n{MAKE_ARG(\"protover\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"auth\",ARG_TYPE_BLOCK,-1,\"AUTH\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=HELLO_arguments_auth_Subargs},\n{MAKE_ARG(\"clientname\",ARG_TYPE_STRING,-1,\"SETNAME\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* HELLO argument table */\nstruct COMMAND_ARG HELLO_Args[] = {\n{MAKE_ARG(\"arguments\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=HELLO_arguments_Subargs},\n};\n\n/********** PING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PING history */\n#define PING_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PING tips */\nconst char *PING_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PING key specs */\n#define PING_Keyspecs NULL\n#endif\n\n/* PING argument table */\nstruct COMMAND_ARG PING_Args[] = {\n{MAKE_ARG(\"message\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** QUIT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* QUIT history */\n#define QUIT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* QUIT tips */\n#define QUIT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* QUIT key specs */\n#define QUIT_Keyspecs NULL\n#endif\n\n/********** RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RESET history */\n#define RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RESET tips */\n#define RESET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RESET key specs */\n#define RESET_Keyspecs NULL\n#endif\n\n/********** SELECT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SELECT history */\n#define SELECT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SELECT tips */\n#define SELECT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SELECT key specs */\n#define SELECT_Keyspecs NULL\n#endif\n\n/* SELECT argument table 
*/\nstruct COMMAND_ARG SELECT_Args[] = {\n{MAKE_ARG(\"index\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** COPY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COPY history */\n#define COPY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COPY tips */\n#define COPY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COPY key specs */\nkeySpec COPY_Keyspecs[2] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* COPY argument table */\nstruct COMMAND_ARG COPY_Args[] = {\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination-db\",ARG_TYPE_INTEGER,-1,\"DB\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** DEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DEL history */\n#define DEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DEL tips */\nconst char *DEL_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DEL key specs */\nkeySpec DEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RM|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* DEL argument table */\nstruct COMMAND_ARG DEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** DUMP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DUMP history */\n#define DUMP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DUMP tips */\nconst char *DUMP_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DUMP key specs */\nkeySpec DUMP_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* DUMP argument table */\nstruct COMMAND_ARG DUMP_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** EXISTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EXISTS history */\ncommandHistory EXISTS_History[] = {\n{\"3.0.3\",\"Accepts multiple `key` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EXISTS tips */\nconst char *EXISTS_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EXISTS key specs */\nkeySpec EXISTS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* EXISTS argument table */\nstruct COMMAND_ARG EXISTS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** EXPIRE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EXPIRE history */\ncommandHistory EXPIRE_History[] = {\n{\"7.0.0\",\"Added options: `NX`, `XX`, `GT` and `LT`.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EXPIRE tips */\n#define EXPIRE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EXPIRE key specs */\nkeySpec EXPIRE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* EXPIRE condition argument table */\nstruct COMMAND_ARG EXPIRE_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* EXPIRE argument table */\nstruct COMMAND_ARG EXPIRE_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,4,NULL),.subargs=EXPIRE_condition_Subargs},\n};\n\n/********** EXPIREAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EXPIREAT history */\ncommandHistory EXPIREAT_History[] = {\n{\"7.0.0\",\"Added options: `NX`, `XX`, `GT` and `LT`.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EXPIREAT tips */\n#define EXPIREAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EXPIREAT key specs */\nkeySpec EXPIREAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* EXPIREAT condition argument table */\nstruct COMMAND_ARG EXPIREAT_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* EXPIREAT argument table */\nstruct COMMAND_ARG EXPIREAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,4,NULL),.subargs=EXPIREAT_condition_Subargs},\n};\n\n/********** EXPIRETIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EXPIRETIME history */\n#define EXPIRETIME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EXPIRETIME tips */\n#define EXPIRETIME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EXPIRETIME key specs */\nkeySpec EXPIRETIME_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* EXPIRETIME argument table */\nstruct COMMAND_ARG EXPIRETIME_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** KEYS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* KEYS history */\n#define KEYS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* KEYS tips */\nconst char *KEYS_Tips[] = {\n\"request_policy:all_shards\",\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* KEYS key specs */\n#define KEYS_Keyspecs NULL\n#endif\n\n/* KEYS argument table */\nstruct COMMAND_ARG KEYS_Args[] = {\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** MIGRATE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MIGRATE history */\ncommandHistory MIGRATE_History[] = {\n{\"3.0.0\",\"Added the `COPY` and `REPLACE` options.\"},\n{\"3.0.6\",\"Added the `KEYS` option.\"},\n{\"4.0.7\",\"Added the `AUTH` option.\"},\n{\"6.0.0\",\"Added the `AUTH2` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MIGRATE tips */\nconst char *MIGRATE_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MIGRATE key specs */\nkeySpec MIGRATE_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={3},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE|CMD_KEY_INCOMPLETE,KSPEC_BS_KEYWORD,.bs.keyword={\"KEYS\",-2},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* MIGRATE key_selector argument table */\nstruct COMMAND_ARG MIGRATE_key_selector_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"empty-string\",ARG_TYPE_PURE_TOKEN,-1,\"\\\"\\\"\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MIGRATE authentication auth2 argument table */\nstruct COMMAND_ARG 
MIGRATE_authentication_auth2_Subargs[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"password\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MIGRATE authentication argument table */\nstruct COMMAND_ARG MIGRATE_authentication_Subargs[] = {\n{MAKE_ARG(\"auth\",ARG_TYPE_STRING,-1,\"AUTH\",NULL,\"4.0.7\",CMD_ARG_NONE,0,NULL),.display_text=\"password\"},\n{MAKE_ARG(\"auth2\",ARG_TYPE_BLOCK,-1,\"AUTH2\",NULL,\"6.0.0\",CMD_ARG_NONE,2,NULL),.subargs=MIGRATE_authentication_auth2_Subargs},\n};\n\n/* MIGRATE argument table */\nstruct COMMAND_ARG MIGRATE_Args[] = {\n{MAKE_ARG(\"host\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key-selector\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=MIGRATE_key_selector_Subargs},\n{MAKE_ARG(\"destination-db\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"copy\",ARG_TYPE_PURE_TOKEN,-1,\"COPY\",NULL,\"3.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,\"3.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"authentication\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=MIGRATE_authentication_Subargs},\n{MAKE_ARG(\"keys\",ARG_TYPE_KEY,1,\"KEYS\",NULL,\"3.0.6\",CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL),.display_text=\"key\"},\n};\n\n/********** MOVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MOVE history */\n#define MOVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MOVE tips */\n#define MOVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MOVE key specs */\nkeySpec MOVE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* MOVE argument table */\nstruct 
COMMAND_ARG MOVE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"db\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** OBJECT ENCODING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT ENCODING history */\n#define OBJECT_ENCODING_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT ENCODING tips */\nconst char *OBJECT_ENCODING_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT ENCODING key specs */\nkeySpec OBJECT_ENCODING_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* OBJECT ENCODING argument table */\nstruct COMMAND_ARG OBJECT_ENCODING_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** OBJECT FREQ ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT FREQ history */\n#define OBJECT_FREQ_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT FREQ tips */\nconst char *OBJECT_FREQ_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT FREQ key specs */\nkeySpec OBJECT_FREQ_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* OBJECT FREQ argument table */\nstruct COMMAND_ARG OBJECT_FREQ_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** OBJECT HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT HELP history */\n#define OBJECT_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT HELP tips */\n#define OBJECT_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT HELP key specs */\n#define OBJECT_HELP_Keyspecs NULL\n#endif\n\n/********** OBJECT IDLETIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT IDLETIME history */\n#define OBJECT_IDLETIME_History 
NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT IDLETIME tips */\nconst char *OBJECT_IDLETIME_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT IDLETIME key specs */\nkeySpec OBJECT_IDLETIME_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* OBJECT IDLETIME argument table */\nstruct COMMAND_ARG OBJECT_IDLETIME_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** OBJECT REFCOUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT REFCOUNT history */\n#define OBJECT_REFCOUNT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT REFCOUNT tips */\nconst char *OBJECT_REFCOUNT_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT REFCOUNT key specs */\nkeySpec OBJECT_REFCOUNT_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* OBJECT REFCOUNT argument table */\nstruct COMMAND_ARG OBJECT_REFCOUNT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* OBJECT command table */\nstruct COMMAND_STRUCT OBJECT_Subcommands[] = {\n{MAKE_CMD(\"encoding\",\"Returns the internal encoding of a Redis object.\",\"O(1)\",\"2.2.3\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_ENCODING_History,0,OBJECT_ENCODING_Tips,1,objectCommand,3,CMD_READONLY,ACL_CATEGORY_KEYSPACE,OBJECT_ENCODING_Keyspecs,1,NULL,1),.args=OBJECT_ENCODING_Args},\n{MAKE_CMD(\"freq\",\"Returns the logarithmic access frequency counter of a Redis object.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_FREQ_History,0,OBJECT_FREQ_Tips,1,objectCommand,3,CMD_READONLY,ACL_CATEGORY_KEYSPACE,OBJECT_FREQ_Keyspecs,1,NULL,1),.args=OBJECT_FREQ_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different 
subcommands.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_HELP_History,0,OBJECT_HELP_Tips,0,objectCommand,2,CMD_LOADING|CMD_STALE,ACL_CATEGORY_KEYSPACE,OBJECT_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"idletime\",\"Returns the time since the last access to a Redis object.\",\"O(1)\",\"2.2.3\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_IDLETIME_History,0,OBJECT_IDLETIME_Tips,1,objectCommand,3,CMD_READONLY,ACL_CATEGORY_KEYSPACE,OBJECT_IDLETIME_Keyspecs,1,NULL,1),.args=OBJECT_IDLETIME_Args},\n{MAKE_CMD(\"refcount\",\"Returns the reference count of a value of a key.\",\"O(1)\",\"2.2.3\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_REFCOUNT_History,0,OBJECT_REFCOUNT_Tips,1,objectCommand,3,CMD_READONLY,ACL_CATEGORY_KEYSPACE,OBJECT_REFCOUNT_Keyspecs,1,NULL,1),.args=OBJECT_REFCOUNT_Args},\n{0}\n};\n\n/********** OBJECT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* OBJECT history */\n#define OBJECT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* OBJECT tips */\n#define OBJECT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* OBJECT key specs */\n#define OBJECT_Keyspecs NULL\n#endif\n\n/********** PERSIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PERSIST history */\n#define PERSIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PERSIST tips */\n#define PERSIST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PERSIST key specs */\nkeySpec PERSIST_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PERSIST argument table */\nstruct COMMAND_ARG PERSIST_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** PEXPIRE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PEXPIRE history */\ncommandHistory PEXPIRE_History[] = {\n{\"7.0.0\",\"Added options: `NX`, `XX`, `GT` and `LT`.\"},\n};\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* PEXPIRE tips */\n#define PEXPIRE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PEXPIRE key specs */\nkeySpec PEXPIRE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PEXPIRE condition argument table */\nstruct COMMAND_ARG PEXPIRE_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* PEXPIRE argument table */\nstruct COMMAND_ARG PEXPIRE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,4,NULL),.subargs=PEXPIRE_condition_Subargs},\n};\n\n/********** PEXPIREAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PEXPIREAT history */\ncommandHistory PEXPIREAT_History[] = {\n{\"7.0.0\",\"Added options: `NX`, `XX`, `GT` and `LT`.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PEXPIREAT tips */\n#define PEXPIREAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PEXPIREAT key specs */\nkeySpec PEXPIREAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PEXPIREAT condition argument table */\nstruct COMMAND_ARG PEXPIREAT_condition_Subargs[] = 
{\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* PEXPIREAT argument table */\nstruct COMMAND_ARG PEXPIREAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"7.0.0\",CMD_ARG_OPTIONAL,4,NULL),.subargs=PEXPIREAT_condition_Subargs},\n};\n\n/********** PEXPIRETIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PEXPIRETIME history */\n#define PEXPIRETIME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PEXPIRETIME tips */\n#define PEXPIRETIME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PEXPIRETIME key specs */\nkeySpec PEXPIRETIME_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PEXPIRETIME argument table */\nstruct COMMAND_ARG PEXPIRETIME_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** PTTL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PTTL history */\ncommandHistory PTTL_History[] = {\n{\"2.8.0\",\"Added the -2 reply.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PTTL tips */\nconst char *PTTL_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PTTL key specs */\nkeySpec PTTL_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PTTL argument table */\nstruct COMMAND_ARG PTTL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** RANDOMKEY 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RANDOMKEY history */\n#define RANDOMKEY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RANDOMKEY tips */\nconst char *RANDOMKEY_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:special\",\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RANDOMKEY key specs */\n#define RANDOMKEY_Keyspecs NULL\n#endif\n\n/********** RENAME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RENAME history */\n#define RENAME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RENAME tips */\n#define RENAME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RENAME key specs */\nkeySpec RENAME_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RENAME argument table */\nstruct COMMAND_ARG RENAME_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"newkey\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** RENAMENX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RENAMENX history */\ncommandHistory RENAMENX_History[] = {\n{\"3.2.0\",\"The command no longer returns an error when source and destination names are the same.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RENAMENX tips */\n#define RENAMENX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RENAMENX key specs */\nkeySpec RENAMENX_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RENAMENX argument table */\nstruct COMMAND_ARG RENAMENX_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"newkey\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** RESTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RESTORE history */\ncommandHistory RESTORE_History[] = {\n{\"3.0.0\",\"Added the `REPLACE` modifier.\"},\n{\"5.0.0\",\"Added the `ABSTTL` modifier.\"},\n{\"5.0.0\",\"Added the `IDLETIME` and `FREQ` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RESTORE tips */\n#define RESTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RESTORE key specs */\nkeySpec RESTORE_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RESTORE argument table */\nstruct COMMAND_ARG RESTORE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ttl\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"serialized-value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,\"3.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"absttl\",ARG_TYPE_PURE_TOKEN,-1,\"ABSTTL\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"IDLETIME\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"frequency\",ARG_TYPE_INTEGER,-1,\"FREQ\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SCAN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCAN history */\ncommandHistory SCAN_History[] = {\n{\"6.0.0\",\"Added the `TYPE` subcommand.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCAN tips */\nconst char *SCAN_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:special\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCAN key specs */\n#define SCAN_Keyspecs NULL\n#endif\n\n/* SCAN argument table */\nstruct COMMAND_ARG SCAN_Args[] = 
{\n{MAKE_ARG(\"cursor\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,\"MATCH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"type\",ARG_TYPE_STRING,-1,\"TYPE\",NULL,\"6.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SORT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SORT history */\n#define SORT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SORT tips */\n#define SORT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SORT key specs */\nkeySpec SORT_Keyspecs[3] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{\"For the optional BY/GET keyword. It is marked 'unknown' because the key names derive from the content of the key we sort\",CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_UNKNOWN,{{0}},KSPEC_FK_UNKNOWN,{{0}}},{\"For the optional STORE keyword. It is marked 'unknown' because the keyword can appear anywhere in the argument array\",CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_UNKNOWN,{{0}},KSPEC_FK_UNKNOWN,{{0}}}\n};\n#endif\n\n/* SORT limit argument table */\nstruct COMMAND_ARG SORT_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SORT order argument table */\nstruct COMMAND_ARG SORT_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SORT argument table */\nstruct COMMAND_ARG SORT_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"by-pattern\",ARG_TYPE_PATTERN,1,\"BY\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL),.display_text=\"pattern\"},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=SORT_limit_Subargs},\n{MAKE_ARG(\"get-pattern\",ARG_TYPE_PATTERN,1,\"GET\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE|CMD_ARG_MULTIPLE_TOKEN,0,NULL),.display_text=\"pattern\"},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=SORT_order_Subargs},\n{MAKE_ARG(\"sorting\",ARG_TYPE_PURE_TOKEN,-1,\"ALPHA\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,2,\"STORE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SORT_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SORT_RO history */\n#define SORT_RO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SORT_RO tips */\n#define SORT_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SORT_RO key specs */\nkeySpec SORT_RO_Keyspecs[2] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{\"For the optional BY/GET keyword. 
It is marked 'unknown' because the key names derive from the content of the key we sort\",CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_UNKNOWN,{{0}},KSPEC_FK_UNKNOWN,{{0}}}\n};\n#endif\n\n/* SORT_RO limit argument table */\nstruct COMMAND_ARG SORT_RO_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SORT_RO order argument table */\nstruct COMMAND_ARG SORT_RO_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SORT_RO argument table */\nstruct COMMAND_ARG SORT_RO_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"by-pattern\",ARG_TYPE_PATTERN,1,\"BY\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL),.display_text=\"pattern\"},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=SORT_RO_limit_Subargs},\n{MAKE_ARG(\"get-pattern\",ARG_TYPE_PATTERN,1,\"GET\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE|CMD_ARG_MULTIPLE_TOKEN,0,NULL),.display_text=\"pattern\"},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=SORT_RO_order_Subargs},\n{MAKE_ARG(\"sorting\",ARG_TYPE_PURE_TOKEN,-1,\"ALPHA\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** TOUCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* TOUCH history */\n#define TOUCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* TOUCH tips */\nconst char *TOUCH_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* TOUCH key specs */\nkeySpec TOUCH_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* TOUCH argument table */\nstruct COMMAND_ARG TOUCH_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** TTL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* TTL history */\ncommandHistory TTL_History[] = {\n{\"2.8.0\",\"Added the -2 reply.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* TTL tips */\nconst char *TTL_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* TTL key specs */\nkeySpec TTL_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* TTL argument table */\nstruct COMMAND_ARG TTL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** TYPE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* TYPE history */\n#define TYPE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* TYPE tips */\n#define TYPE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* TYPE key specs */\nkeySpec TYPE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* TYPE argument table */\nstruct COMMAND_ARG TYPE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** UNLINK ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* UNLINK history */\n#define UNLINK_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* UNLINK tips */\nconst char *UNLINK_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* UNLINK key specs */\nkeySpec UNLINK_Keyspecs[1] = {\n{NULL,CMD_KEY_RM|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* UNLINK argument table */\nstruct COMMAND_ARG UNLINK_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** WAIT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* WAIT history */\n#define 
WAIT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* WAIT tips */\nconst char *WAIT_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:agg_min\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* WAIT key specs */\n#define WAIT_Keyspecs NULL\n#endif\n\n/* WAIT argument table */\nstruct COMMAND_ARG WAIT_Args[] = {\n{MAKE_ARG(\"numreplicas\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** WAITAOF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* WAITAOF history */\n#define WAITAOF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* WAITAOF tips */\nconst char *WAITAOF_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:agg_min\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* WAITAOF key specs */\n#define WAITAOF_Keyspecs NULL\n#endif\n\n/* WAITAOF argument table */\nstruct COMMAND_ARG WAITAOF_Args[] = {\n{MAKE_ARG(\"numlocal\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numreplicas\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** GEOADD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEOADD history */\ncommandHistory GEOADD_History[] = {\n{\"6.2.0\",\"Added the `CH`, `NX` and `XX` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEOADD tips */\n#define GEOADD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEOADD key specs */\nkeySpec GEOADD_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEOADD condition argument table */\nstruct COMMAND_ARG GEOADD_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOADD data 
argument table */\nstruct COMMAND_ARG GEOADD_data_Subargs[] = {\n{MAKE_ARG(\"longitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"latitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOADD argument table */\nstruct COMMAND_ARG GEOADD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=GEOADD_condition_Subargs},\n{MAKE_ARG(\"change\",ARG_TYPE_PURE_TOKEN,-1,\"CH\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,3,NULL),.subargs=GEOADD_data_Subargs},\n};\n\n/********** GEODIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEODIST history */\n#define GEODIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEODIST tips */\n#define GEODIST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEODIST key specs */\nkeySpec GEODIST_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEODIST unit argument table */\nstruct COMMAND_ARG GEODIST_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEODIST argument table */\nstruct COMMAND_ARG GEODIST_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member1\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member2\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=GEODIST_unit_Subargs},\n};\n\n/********** GEOHASH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEOHASH history */\n#define GEOHASH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEOHASH tips */\n#define GEOHASH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEOHASH key specs */\nkeySpec GEOHASH_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEOHASH argument table */\nstruct COMMAND_ARG GEOHASH_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** GEOPOS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEOPOS history */\n#define GEOPOS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEOPOS tips */\n#define GEOPOS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEOPOS key specs */\nkeySpec GEOPOS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEOPOS argument table */\nstruct COMMAND_ARG GEOPOS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** GEORADIUS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEORADIUS history */\ncommandHistory GEORADIUS_History[] = {\n{\"6.2.0\",\"Added the `ANY` option for `COUNT`.\"},\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* 
GEORADIUS tips */\n#define GEORADIUS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEORADIUS key specs */\nkeySpec GEORADIUS_Keyspecs[3] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_KEYWORD,.bs.keyword={\"STORE\",6},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_KEYWORD,.bs.keyword={\"STOREDIST\",6},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEORADIUS unit argument table */\nstruct COMMAND_ARG GEORADIUS_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUS count_block argument table */\nstruct COMMAND_ARG GEORADIUS_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEORADIUS order argument table */\nstruct COMMAND_ARG GEORADIUS_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUS store argument table */\nstruct COMMAND_ARG GEORADIUS_store_Subargs[] = {\n{MAKE_ARG(\"storekey\",ARG_TYPE_KEY,1,\"STORE\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"key\"},\n{MAKE_ARG(\"storedistkey\",ARG_TYPE_KEY,2,\"STOREDIST\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"key\"},\n};\n\n/* GEORADIUS argument table */\nstruct COMMAND_ARG GEORADIUS_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"longitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"latitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEORADIUS_unit_Subargs},\n{MAKE_ARG(\"withcoord\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCOORD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withdist\",ARG_TYPE_PURE_TOKEN,-1,\"WITHDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withhash\",ARG_TYPE_PURE_TOKEN,-1,\"WITHHASH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUS_count_block_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUS_order_Subargs},\n{MAKE_ARG(\"store\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUS_store_Subargs},\n};\n\n/********** GEORADIUSBYMEMBER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEORADIUSBYMEMBER history */\ncommandHistory GEORADIUSBYMEMBER_History[] = {\n{\"6.2.0\",\"Added the `ANY` option for `COUNT`.\"},\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEORADIUSBYMEMBER tips */\n#define GEORADIUSBYMEMBER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEORADIUSBYMEMBER key specs */\nkeySpec GEORADIUSBYMEMBER_Keyspecs[3] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_KEYWORD,.bs.keyword={\"STORE\",5},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_KEYWORD,.bs.keyword={\"STOREDIST\",5},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEORADIUSBYMEMBER unit argument table */\nstruct COMMAND_ARG 
GEORADIUSBYMEMBER_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER count_block argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER order argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER store argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_store_Subargs[] = {\n{MAKE_ARG(\"storekey\",ARG_TYPE_KEY,1,\"STORE\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"key\"},\n{MAKE_ARG(\"storedistkey\",ARG_TYPE_KEY,2,\"STOREDIST\",NULL,NULL,CMD_ARG_NONE,0,NULL),.display_text=\"key\"},\n};\n\n/* GEORADIUSBYMEMBER argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEORADIUSBYMEMBER_unit_Subargs},\n{MAKE_ARG(\"withcoord\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCOORD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withdist\",ARG_TYPE_PURE_TOKEN,-1,\"WITHDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withhash\",ARG_TYPE_PURE_TOKEN,-1,\"WITHHASH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUSBYMEMBER_count_block_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUSBYMEMBER_order_Subargs},\n{MAKE_ARG(\"store\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUSBYMEMBER_store_Subargs},\n};\n\n/********** GEORADIUSBYMEMBER_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEORADIUSBYMEMBER_RO history */\ncommandHistory GEORADIUSBYMEMBER_RO_History[] = {\n{\"6.2.0\",\"Added the `ANY` option for `COUNT`.\"},\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEORADIUSBYMEMBER_RO tips */\n#define GEORADIUSBYMEMBER_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEORADIUSBYMEMBER_RO key specs */\nkeySpec GEORADIUSBYMEMBER_RO_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEORADIUSBYMEMBER_RO unit argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_RO_unit_Subargs[] = 
{\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER_RO count_block argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_RO_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER_RO order argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_RO_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUSBYMEMBER_RO argument table */\nstruct COMMAND_ARG GEORADIUSBYMEMBER_RO_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEORADIUSBYMEMBER_RO_unit_Subargs},\n{MAKE_ARG(\"withcoord\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCOORD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withdist\",ARG_TYPE_PURE_TOKEN,-1,\"WITHDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withhash\",ARG_TYPE_PURE_TOKEN,-1,\"WITHHASH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUSBYMEMBER_RO_count_block_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUSBYMEMBER_RO_order_Subargs},\n};\n\n/********** GEORADIUS_RO ********************/\n\n#ifndef 
SKIP_CMD_HISTORY_TABLE\n/* GEORADIUS_RO history */\ncommandHistory GEORADIUS_RO_History[] = {\n{\"6.2.0\",\"Added the `ANY` option for `COUNT`.\"},\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEORADIUS_RO tips */\n#define GEORADIUS_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEORADIUS_RO key specs */\nkeySpec GEORADIUS_RO_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEORADIUS_RO unit argument table */\nstruct COMMAND_ARG GEORADIUS_RO_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUS_RO count_block argument table */\nstruct COMMAND_ARG GEORADIUS_RO_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEORADIUS_RO order argument table */\nstruct COMMAND_ARG GEORADIUS_RO_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEORADIUS_RO argument table */\nstruct COMMAND_ARG GEORADIUS_RO_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"longitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"latitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEORADIUS_RO_unit_Subargs},\n{MAKE_ARG(\"withcoord\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCOORD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withdist\",ARG_TYPE_PURE_TOKEN,-1,\"WITHDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withhash\",ARG_TYPE_PURE_TOKEN,-1,\"WITHHASH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUS_RO_count_block_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEORADIUS_RO_order_Subargs},\n};\n\n/********** GEOSEARCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEOSEARCH history */\ncommandHistory GEOSEARCH_History[] = {\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEOSEARCH tips */\n#define GEOSEARCH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEOSEARCH key specs */\nkeySpec GEOSEARCH_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEOSEARCH from fromlonlat argument table */\nstruct COMMAND_ARG GEOSEARCH_from_fromlonlat_Subargs[] = {\n{MAKE_ARG(\"longitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"latitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCH from argument table */\nstruct COMMAND_ARG GEOSEARCH_from_Subargs[] = 
{\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,\"FROMMEMBER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fromlonlat\",ARG_TYPE_BLOCK,-1,\"FROMLONLAT\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCH_from_fromlonlat_Subargs},\n};\n\n/* GEOSEARCH by circle unit argument table */\nstruct COMMAND_ARG GEOSEARCH_by_circle_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCH by circle argument table */\nstruct COMMAND_ARG GEOSEARCH_by_circle_Subargs[] = {\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,\"BYRADIUS\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEOSEARCH_by_circle_unit_Subargs},\n};\n\n/* GEOSEARCH by box unit argument table */\nstruct COMMAND_ARG GEOSEARCH_by_box_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCH by box argument table */\nstruct COMMAND_ARG GEOSEARCH_by_box_Subargs[] = {\n{MAKE_ARG(\"width\",ARG_TYPE_DOUBLE,-1,\"BYBOX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"height\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEOSEARCH_by_box_unit_Subargs},\n};\n\n/* GEOSEARCH by argument table */\nstruct COMMAND_ARG GEOSEARCH_by_Subargs[] = 
{\n{MAKE_ARG(\"circle\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCH_by_circle_Subargs},\n{MAKE_ARG(\"box\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=GEOSEARCH_by_box_Subargs},\n};\n\n/* GEOSEARCH order argument table */\nstruct COMMAND_ARG GEOSEARCH_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCH count_block argument table */\nstruct COMMAND_ARG GEOSEARCH_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEOSEARCH argument table */\nstruct COMMAND_ARG GEOSEARCH_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"from\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCH_from_Subargs},\n{MAKE_ARG(\"by\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCH_by_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEOSEARCH_order_Subargs},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEOSEARCH_count_block_Subargs},\n{MAKE_ARG(\"withcoord\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCOORD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withdist\",ARG_TYPE_PURE_TOKEN,-1,\"WITHDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withhash\",ARG_TYPE_PURE_TOKEN,-1,\"WITHHASH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** GEOSEARCHSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GEOSEARCHSTORE history */\ncommandHistory GEOSEARCHSTORE_History[] = {\n{\"7.0.0\",\"Added support for uppercase unit names.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GEOSEARCHSTORE tips */\n#define GEOSEARCHSTORE_Tips 
NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GEOSEARCHSTORE key specs */\nkeySpec GEOSEARCHSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GEOSEARCHSTORE from fromlonlat argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_from_fromlonlat_Subargs[] = {\n{MAKE_ARG(\"longitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"latitude\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCHSTORE from argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_from_Subargs[] = {\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,\"FROMMEMBER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fromlonlat\",ARG_TYPE_BLOCK,-1,\"FROMLONLAT\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCHSTORE_from_fromlonlat_Subargs},\n};\n\n/* GEOSEARCHSTORE by circle unit argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_by_circle_unit_Subargs[] = {\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCHSTORE by circle argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_by_circle_Subargs[] = {\n{MAKE_ARG(\"radius\",ARG_TYPE_DOUBLE,-1,\"BYRADIUS\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEOSEARCHSTORE_by_circle_unit_Subargs},\n};\n\n/* GEOSEARCHSTORE by box unit argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_by_box_unit_Subargs[] = 
{\n{MAKE_ARG(\"m\",ARG_TYPE_PURE_TOKEN,-1,\"M\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"km\",ARG_TYPE_PURE_TOKEN,-1,\"KM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ft\",ARG_TYPE_PURE_TOKEN,-1,\"FT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mi\",ARG_TYPE_PURE_TOKEN,-1,\"MI\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCHSTORE by box argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_by_box_Subargs[] = {\n{MAKE_ARG(\"width\",ARG_TYPE_DOUBLE,-1,\"BYBOX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"height\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unit\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,4,NULL),.subargs=GEOSEARCHSTORE_by_box_unit_Subargs},\n};\n\n/* GEOSEARCHSTORE by argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_by_Subargs[] = {\n{MAKE_ARG(\"circle\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCHSTORE_by_circle_Subargs},\n{MAKE_ARG(\"box\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=GEOSEARCHSTORE_by_box_Subargs},\n};\n\n/* GEOSEARCHSTORE order argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_order_Subargs[] = {\n{MAKE_ARG(\"asc\",ARG_TYPE_PURE_TOKEN,-1,\"ASC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"desc\",ARG_TYPE_PURE_TOKEN,-1,\"DESC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GEOSEARCHSTORE count_block argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_count_block_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"any\",ARG_TYPE_PURE_TOKEN,-1,\"ANY\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* GEOSEARCHSTORE argument table */\nstruct COMMAND_ARG GEOSEARCHSTORE_Args[] = 
{\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"from\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCHSTORE_from_Subargs},\n{MAKE_ARG(\"by\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=GEOSEARCHSTORE_by_Subargs},\n{MAKE_ARG(\"order\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEOSEARCHSTORE_order_Subargs},\n{MAKE_ARG(\"count-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=GEOSEARCHSTORE_count_block_Subargs},\n{MAKE_ARG(\"storedist\",ARG_TYPE_PURE_TOKEN,-1,\"STOREDIST\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** HDEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HDEL history */\ncommandHistory HDEL_History[] = {\n{\"2.4.0\",\"Accepts multiple `field` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HDEL tips */\n#define HDEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HDEL key specs */\nkeySpec HDEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HDEL argument table */\nstruct COMMAND_ARG HDEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** HEXISTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HEXISTS history */\n#define HEXISTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HEXISTS tips */\n#define HEXISTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HEXISTS key specs */\nkeySpec HEXISTS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HEXISTS argument table */\nstruct COMMAND_ARG HEXISTS_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HEXPIRE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HEXPIRE history */\n#define HEXPIRE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HEXPIRE tips */\n#define HEXPIRE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HEXPIRE key specs */\nkeySpec HEXPIRE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HEXPIRE condition argument table */\nstruct COMMAND_ARG HEXPIRE_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HEXPIRE fields argument table */\nstruct COMMAND_ARG HEXPIRE_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HEXPIRE argument table */\nstruct COMMAND_ARG HEXPIRE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=HEXPIRE_condition_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HEXPIRE_fields_Subargs},\n};\n\n/********** HEXPIREAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HEXPIREAT history */\n#define HEXPIREAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HEXPIREAT tips */\n#define HEXPIREAT_Tips NULL\n#endif\n\n#ifndef 
SKIP_CMD_KEY_SPECS_TABLE\n/* HEXPIREAT key specs */\nkeySpec HEXPIREAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HEXPIREAT condition argument table */\nstruct COMMAND_ARG HEXPIREAT_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HEXPIREAT fields argument table */\nstruct COMMAND_ARG HEXPIREAT_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HEXPIREAT argument table */\nstruct COMMAND_ARG HEXPIREAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=HEXPIREAT_condition_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HEXPIREAT_fields_Subargs},\n};\n\n/********** HEXPIRETIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HEXPIRETIME history */\n#define HEXPIRETIME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HEXPIRETIME tips */\n#define HEXPIRETIME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HEXPIRETIME key specs */\nkeySpec HEXPIRETIME_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HEXPIRETIME fields argument table */\nstruct COMMAND_ARG HEXPIRETIME_fields_Subargs[] = 
{\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HEXPIRETIME argument table */\nstruct COMMAND_ARG HEXPIRETIME_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HEXPIRETIME_fields_Subargs},\n};\n\n/********** HGET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HGET history */\n#define HGET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HGET tips */\n#define HGET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HGET key specs */\nkeySpec HGET_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HGET argument table */\nstruct COMMAND_ARG HGET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HGETALL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HGETALL history */\n#define HGETALL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HGETALL tips */\nconst char *HGETALL_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HGETALL key specs */\nkeySpec HGETALL_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HGETALL argument table */\nstruct COMMAND_ARG HGETALL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HGETDEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HGETDEL history */\n#define HGETDEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HGETDEL tips */\n#define HGETDEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HGETDEL key specs */\nkeySpec 
HGETDEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HGETDEL fields argument table */\nstruct COMMAND_ARG HGETDEL_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HGETDEL argument table */\nstruct COMMAND_ARG HGETDEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HGETDEL_fields_Subargs},\n};\n\n/********** HGETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HGETEX history */\n#define HGETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HGETEX tips */\n#define HGETEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HGETEX key specs */\nkeySpec HGETEX_Keyspecs[1] = {\n{\"RW and UPDATE because it changes the TTL\",CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HGETEX expiration argument table */\nstruct COMMAND_ARG HGETEX_expiration_Subargs[] = {\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"EX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"PX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,\"EXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"PXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"persist\",ARG_TYPE_PURE_TOKEN,-1,\"PERSIST\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HGETEX fields argument table */\nstruct COMMAND_ARG HGETEX_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HGETEX argument table */\nstruct 
COMMAND_ARG HGETEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=HGETEX_expiration_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HGETEX_fields_Subargs},\n};\n\n/********** HINCRBY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HINCRBY history */\n#define HINCRBY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HINCRBY tips */\n#define HINCRBY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HINCRBY key specs */\nkeySpec HINCRBY_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HINCRBY argument table */\nstruct COMMAND_ARG HINCRBY_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HINCRBYFLOAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HINCRBYFLOAT history */\n#define HINCRBYFLOAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HINCRBYFLOAT tips */\n#define HINCRBYFLOAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HINCRBYFLOAT key specs */\nkeySpec HINCRBYFLOAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HINCRBYFLOAT argument table */\nstruct COMMAND_ARG HINCRBYFLOAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HKEYS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HKEYS history */\n#define 
HKEYS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HKEYS tips */\nconst char *HKEYS_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HKEYS key specs */\nkeySpec HKEYS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HKEYS argument table */\nstruct COMMAND_ARG HKEYS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HLEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HLEN history */\n#define HLEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HLEN tips */\n#define HLEN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HLEN key specs */\nkeySpec HLEN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HLEN argument table */\nstruct COMMAND_ARG HLEN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HMGET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HMGET history */\n#define HMGET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HMGET tips */\n#define HMGET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HMGET key specs */\nkeySpec HMGET_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HMGET argument table */\nstruct COMMAND_ARG HMGET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** HMSET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HMSET history */\n#define HMSET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HMSET tips */\n#define HMSET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HMSET key specs */\nkeySpec HMSET_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HMSET data argument table */\nstruct COMMAND_ARG HMSET_data_Subargs[] = {\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HMSET argument table */\nstruct COMMAND_ARG HMSET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=HMSET_data_Subargs},\n};\n\n/********** HPERSIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HPERSIST history */\n#define HPERSIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HPERSIST tips */\n#define HPERSIST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HPERSIST key specs */\nkeySpec HPERSIST_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HPERSIST fields argument table */\nstruct COMMAND_ARG HPERSIST_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HPERSIST argument table */\nstruct COMMAND_ARG HPERSIST_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HPERSIST_fields_Subargs},\n};\n\n/********** HPEXPIRE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HPEXPIRE history */\n#define HPEXPIRE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HPEXPIRE tips */\n#define HPEXPIRE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HPEXPIRE key specs */\nkeySpec HPEXPIRE_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HPEXPIRE condition argument table */\nstruct COMMAND_ARG HPEXPIRE_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HPEXPIRE fields argument table */\nstruct COMMAND_ARG HPEXPIRE_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HPEXPIRE argument table */\nstruct COMMAND_ARG HPEXPIRE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=HPEXPIRE_condition_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HPEXPIRE_fields_Subargs},\n};\n\n/********** HPEXPIREAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HPEXPIREAT history */\n#define HPEXPIREAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HPEXPIREAT tips */\n#define HPEXPIREAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HPEXPIREAT key specs */\nkeySpec HPEXPIREAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HPEXPIREAT condition argument table */\nstruct COMMAND_ARG HPEXPIREAT_condition_Subargs[] = 
{\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HPEXPIREAT fields argument table */\nstruct COMMAND_ARG HPEXPIREAT_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HPEXPIREAT argument table */\nstruct COMMAND_ARG HPEXPIREAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=HPEXPIREAT_condition_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HPEXPIREAT_fields_Subargs},\n};\n\n/********** HPEXPIRETIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HPEXPIRETIME history */\n#define HPEXPIRETIME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HPEXPIRETIME tips */\n#define HPEXPIRETIME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HPEXPIRETIME key specs */\nkeySpec HPEXPIRETIME_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HPEXPIRETIME fields argument table */\nstruct COMMAND_ARG HPEXPIRETIME_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HPEXPIRETIME argument table */\nstruct COMMAND_ARG HPEXPIRETIME_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HPEXPIRETIME_fields_Subargs},\n};\n\n/********** HPTTL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HPTTL history */\n#define HPTTL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HPTTL tips */\nconst char *HPTTL_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HPTTL key specs */\nkeySpec HPTTL_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HPTTL fields argument table */\nstruct COMMAND_ARG HPTTL_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HPTTL argument table */\nstruct COMMAND_ARG HPTTL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HPTTL_fields_Subargs},\n};\n\n/********** HRANDFIELD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HRANDFIELD history */\n#define HRANDFIELD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HRANDFIELD tips */\nconst char *HRANDFIELD_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HRANDFIELD key specs */\nkeySpec HRANDFIELD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HRANDFIELD options argument table */\nstruct COMMAND_ARG HRANDFIELD_options_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withvalues\",ARG_TYPE_PURE_TOKEN,-1,\"WITHVALUES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* HRANDFIELD argument table */\nstruct COMMAND_ARG 
HRANDFIELD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"options\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=HRANDFIELD_options_Subargs},\n};\n\n/********** HSCAN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HSCAN history */\n#define HSCAN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HSCAN tips */\nconst char *HSCAN_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HSCAN key specs */\nkeySpec HSCAN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HSCAN argument table */\nstruct COMMAND_ARG HSCAN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"cursor\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,\"MATCH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"novalues\",ARG_TYPE_PURE_TOKEN,-1,\"NOVALUES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** HSET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HSET history */\ncommandHistory HSET_History[] = {\n{\"4.0.0\",\"Accepts multiple `field` and `value` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HSET tips */\n#define HSET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HSET key specs */\nkeySpec HSET_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HSET data argument table */\nstruct COMMAND_ARG HSET_data_Subargs[] = {\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HSET argument table */\nstruct COMMAND_ARG HSET_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=HSET_data_Subargs},\n};\n\n/********** HSETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HSETEX history */\n#define HSETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HSETEX tips */\n#define HSETEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HSETEX key specs */\nkeySpec HSETEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HSETEX condition argument table */\nstruct COMMAND_ARG HSETEX_condition_Subargs[] = {\n{MAKE_ARG(\"fnx\",ARG_TYPE_PURE_TOKEN,-1,\"FNX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fxx\",ARG_TYPE_PURE_TOKEN,-1,\"FXX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HSETEX expiration argument table */\nstruct COMMAND_ARG HSETEX_expiration_Subargs[] = {\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"EX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"PX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,\"EXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"PXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"keepttl\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPTTL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HSETEX fields data argument table */\nstruct COMMAND_ARG HSETEX_fields_data_Subargs[] = {\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* HSETEX fields argument table */\nstruct COMMAND_ARG HSETEX_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=HSETEX_fields_data_Subargs},\n};\n\n/* HSETEX argument table 
*/\nstruct COMMAND_ARG HSETEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=HSETEX_condition_Subargs},\n{MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=HSETEX_expiration_Subargs},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HSETEX_fields_Subargs},\n};\n\n/********** HSETNX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HSETNX history */\n#define HSETNX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HSETNX tips */\n#define HSETNX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HSETNX key specs */\nkeySpec HSETNX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HSETNX argument table */\nstruct COMMAND_ARG HSETNX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HSTRLEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HSTRLEN history */\n#define HSTRLEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HSTRLEN tips */\n#define HSTRLEN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HSTRLEN key specs */\nkeySpec HSTRLEN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HSTRLEN argument table */\nstruct COMMAND_ARG HSTRLEN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** HTTL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HTTL history */\n#define HTTL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* 
HTTL tips */\nconst char *HTTL_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HTTL key specs */\nkeySpec HTTL_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HTTL fields argument table */\nstruct COMMAND_ARG HTTL_fields_Subargs[] = {\n{MAKE_ARG(\"numfields\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HTTL argument table */\nstruct COMMAND_ARG HTTL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fields\",ARG_TYPE_BLOCK,-1,\"FIELDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=HTTL_fields_Subargs},\n};\n\n/********** HVALS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HVALS history */\n#define HVALS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HVALS tips */\nconst char *HVALS_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HVALS key specs */\nkeySpec HVALS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* HVALS argument table */\nstruct COMMAND_ARG HVALS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** PFADD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PFADD history */\n#define PFADD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PFADD tips */\n#define PFADD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PFADD key specs */\nkeySpec PFADD_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PFADD argument table */\nstruct COMMAND_ARG PFADD_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PFCOUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PFCOUNT history */\n#define PFCOUNT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PFCOUNT tips */\n#define PFCOUNT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PFCOUNT key specs */\nkeySpec PFCOUNT_Keyspecs[1] = {\n{\"RW because it may change the internal representation of the key, and propagate to replicas\",CMD_KEY_RW|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* PFCOUNT argument table */\nstruct COMMAND_ARG PFCOUNT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PFDEBUG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PFDEBUG history */\n#define PFDEBUG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PFDEBUG tips */\n#define PFDEBUG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PFDEBUG key specs */\nkeySpec PFDEBUG_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PFDEBUG argument table */\nstruct COMMAND_ARG PFDEBUG_Args[] = {\n{MAKE_ARG(\"subcommand\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** PFMERGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PFMERGE history */\n#define PFMERGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PFMERGE tips */\n#define PFMERGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PFMERGE key specs */\nkeySpec PFMERGE_Keyspecs[2] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* PFMERGE argument table */\nstruct COMMAND_ARG PFMERGE_Args[] = {\n{MAKE_ARG(\"destkey\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sourcekey\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PFSELFTEST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PFSELFTEST history */\n#define PFSELFTEST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PFSELFTEST tips */\n#define PFSELFTEST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PFSELFTEST key specs */\n#define PFSELFTEST_Keyspecs NULL\n#endif\n\n/********** BLMOVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BLMOVE history */\n#define BLMOVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BLMOVE tips */\n#define BLMOVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BLMOVE key specs */\nkeySpec BLMOVE_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BLMOVE wherefrom argument table */\nstruct COMMAND_ARG BLMOVE_wherefrom_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BLMOVE whereto argument table */\nstruct COMMAND_ARG BLMOVE_whereto_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BLMOVE argument table */\nstruct COMMAND_ARG BLMOVE_Args[] = 
{\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"wherefrom\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BLMOVE_wherefrom_Subargs},\n{MAKE_ARG(\"whereto\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BLMOVE_whereto_Subargs},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** BLMPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BLMPOP history */\n#define BLMPOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BLMPOP tips */\n#define BLMPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BLMPOP key specs */\nkeySpec BLMPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* BLMPOP where argument table */\nstruct COMMAND_ARG BLMPOP_where_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BLMPOP argument table */\nstruct COMMAND_ARG BLMPOP_Args[] = {\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"where\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BLMPOP_where_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** BLPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BLPOP history */\ncommandHistory BLPOP_History[] = {\n{\"6.0.0\",\"`timeout` is interpreted as a double instead of an integer.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BLPOP tips */\n#define BLPOP_Tips NULL\n#endif\n\n#ifndef 
SKIP_CMD_KEY_SPECS_TABLE\n/* BLPOP key specs */\nkeySpec BLPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-2,1,0}}\n};\n#endif\n\n/* BLPOP argument table */\nstruct COMMAND_ARG BLPOP_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** BRPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BRPOP history */\ncommandHistory BRPOP_History[] = {\n{\"6.0.0\",\"`timeout` is interpreted as a double instead of an integer.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BRPOP tips */\n#define BRPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BRPOP key specs */\nkeySpec BRPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-2,1,0}}\n};\n#endif\n\n/* BRPOP argument table */\nstruct COMMAND_ARG BRPOP_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** BRPOPLPUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BRPOPLPUSH history */\ncommandHistory BRPOPLPUSH_History[] = {\n{\"6.0.0\",\"`timeout` is interpreted as a double instead of an integer.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BRPOPLPUSH tips */\n#define BRPOPLPUSH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BRPOPLPUSH key specs */\nkeySpec BRPOPLPUSH_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* BRPOPLPUSH argument table */\nstruct COMMAND_ARG BRPOPLPUSH_Args[] = 
{\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LINDEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LINDEX history */\n#define LINDEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LINDEX tips */\n#define LINDEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LINDEX key specs */\nkeySpec LINDEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LINDEX argument table */\nstruct COMMAND_ARG LINDEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"index\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LINSERT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LINSERT history */\n#define LINSERT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LINSERT tips */\n#define LINSERT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LINSERT key specs */\nkeySpec LINSERT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LINSERT where argument table */\nstruct COMMAND_ARG LINSERT_where_Subargs[] = {\n{MAKE_ARG(\"before\",ARG_TYPE_PURE_TOKEN,-1,\"BEFORE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"after\",ARG_TYPE_PURE_TOKEN,-1,\"AFTER\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* LINSERT argument table */\nstruct COMMAND_ARG LINSERT_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"where\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=LINSERT_where_Subargs},\n{MAKE_ARG(\"pivot\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LLEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LLEN history */\n#define LLEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LLEN tips */\n#define LLEN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LLEN key specs */\nkeySpec LLEN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LLEN argument table */\nstruct COMMAND_ARG LLEN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LMOVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LMOVE history */\n#define LMOVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LMOVE tips */\n#define LMOVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LMOVE key specs */\nkeySpec LMOVE_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LMOVE wherefrom argument table */\nstruct COMMAND_ARG LMOVE_wherefrom_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* LMOVE whereto argument table */\nstruct COMMAND_ARG LMOVE_whereto_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* LMOVE argument table */\nstruct COMMAND_ARG LMOVE_Args[] = 
{\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"wherefrom\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=LMOVE_wherefrom_Subargs},\n{MAKE_ARG(\"whereto\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=LMOVE_whereto_Subargs},\n};\n\n/********** LMPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LMPOP history */\n#define LMPOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LMPOP tips */\n#define LMPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LMPOP key specs */\nkeySpec LMPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* LMPOP where argument table */\nstruct COMMAND_ARG LMPOP_where_Subargs[] = {\n{MAKE_ARG(\"left\",ARG_TYPE_PURE_TOKEN,-1,\"LEFT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"right\",ARG_TYPE_PURE_TOKEN,-1,\"RIGHT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* LMPOP argument table */\nstruct COMMAND_ARG LMPOP_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"where\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=LMPOP_where_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** LPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LPOP history */\ncommandHistory LPOP_History[] = {\n{\"6.2.0\",\"Added the `count` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LPOP tips */\n#define LPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LPOP key specs */\nkeySpec LPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LPOP argument 
table */\nstruct COMMAND_ARG LPOP_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** LPOS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LPOS history */\n#define LPOS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LPOS tips */\n#define LPOS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LPOS key specs */\nkeySpec LPOS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LPOS argument table */\nstruct COMMAND_ARG LPOS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"rank\",ARG_TYPE_INTEGER,-1,\"RANK\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"num-matches\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"len\",ARG_TYPE_INTEGER,-1,\"MAXLEN\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** LPUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LPUSH history */\ncommandHistory LPUSH_History[] = {\n{\"2.4.0\",\"Accepts multiple `element` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LPUSH tips */\n#define LPUSH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LPUSH key specs */\nkeySpec LPUSH_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LPUSH argument table */\nstruct COMMAND_ARG LPUSH_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** LPUSHX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LPUSHX history */\ncommandHistory LPUSHX_History[] = {\n{\"4.0.0\",\"Accepts multiple `element` 
arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LPUSHX tips */\n#define LPUSHX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LPUSHX key specs */\nkeySpec LPUSHX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LPUSHX argument table */\nstruct COMMAND_ARG LPUSHX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** LRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LRANGE history */\n#define LRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LRANGE tips */\n#define LRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LRANGE key specs */\nkeySpec LRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LRANGE argument table */\nstruct COMMAND_ARG LRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stop\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LREM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LREM history */\n#define LREM_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LREM tips */\n#define LREM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LREM key specs */\nkeySpec LREM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LREM argument table */\nstruct COMMAND_ARG LREM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LSET 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LSET history */\n#define LSET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LSET tips */\n#define LSET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LSET key specs */\nkeySpec LSET_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LSET argument table */\nstruct COMMAND_ARG LSET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"index\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LTRIM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LTRIM history */\n#define LTRIM_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LTRIM tips */\n#define LTRIM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LTRIM key specs */\nkeySpec LTRIM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* LTRIM argument table */\nstruct COMMAND_ARG LTRIM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stop\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** RPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RPOP history */\ncommandHistory RPOP_History[] = {\n{\"6.2.0\",\"Added the `count` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RPOP tips */\n#define RPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RPOP key specs */\nkeySpec RPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RPOP argument table */\nstruct COMMAND_ARG RPOP_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** RPOPLPUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RPOPLPUSH history */\n#define RPOPLPUSH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RPOPLPUSH tips */\n#define RPOPLPUSH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RPOPLPUSH key specs */\nkeySpec RPOPLPUSH_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RPOPLPUSH argument table */\nstruct COMMAND_ARG RPOPLPUSH_Args[] = {\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** RPUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RPUSH history */\ncommandHistory RPUSH_History[] = {\n{\"2.4.0\",\"Accepts multiple `element` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RPUSH tips */\n#define RPUSH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RPUSH key specs */\nkeySpec RPUSH_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RPUSH argument table */\nstruct COMMAND_ARG RPUSH_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** RPUSHX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RPUSHX history */\ncommandHistory RPUSHX_History[] = {\n{\"4.0.0\",\"Accepts multiple `element` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RPUSHX tips */\n#define RPUSHX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RPUSHX 
key specs */\nkeySpec RPUSHX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RPUSHX argument table */\nstruct COMMAND_ARG RPUSHX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"element\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PSUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PSUBSCRIBE history */\n#define PSUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PSUBSCRIBE tips */\n#define PSUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PSUBSCRIBE key specs */\n#define PSUBSCRIBE_Keyspecs NULL\n#endif\n\n/* PSUBSCRIBE argument table */\nstruct COMMAND_ARG PSUBSCRIBE_Args[] = {\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PUBLISH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBLISH history */\n#define PUBLISH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBLISH tips */\n#define PUBLISH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBLISH key specs */\n#define PUBLISH_Keyspecs NULL\n#endif\n\n/* PUBLISH argument table */\nstruct COMMAND_ARG PUBLISH_Args[] = {\n{MAKE_ARG(\"channel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"message\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** PUBSUB CHANNELS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB CHANNELS history */\n#define PUBSUB_CHANNELS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB CHANNELS tips */\n#define PUBSUB_CHANNELS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB CHANNELS key specs */\n#define PUBSUB_CHANNELS_Keyspecs NULL\n#endif\n\n/* PUBSUB CHANNELS argument table */\nstruct COMMAND_ARG PUBSUB_CHANNELS_Args[] = 
{\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** PUBSUB HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB HELP history */\n#define PUBSUB_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB HELP tips */\n#define PUBSUB_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB HELP key specs */\n#define PUBSUB_HELP_Keyspecs NULL\n#endif\n\n/********** PUBSUB NUMPAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB NUMPAT history */\n#define PUBSUB_NUMPAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB NUMPAT tips */\n#define PUBSUB_NUMPAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB NUMPAT key specs */\n#define PUBSUB_NUMPAT_Keyspecs NULL\n#endif\n\n/********** PUBSUB NUMSUB ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB NUMSUB history */\n#define PUBSUB_NUMSUB_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB NUMSUB tips */\n#define PUBSUB_NUMSUB_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB NUMSUB key specs */\n#define PUBSUB_NUMSUB_Keyspecs NULL\n#endif\n\n/* PUBSUB NUMSUB argument table */\nstruct COMMAND_ARG PUBSUB_NUMSUB_Args[] = {\n{MAKE_ARG(\"channel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** PUBSUB SHARDCHANNELS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB SHARDCHANNELS history */\n#define PUBSUB_SHARDCHANNELS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB SHARDCHANNELS tips */\n#define PUBSUB_SHARDCHANNELS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB SHARDCHANNELS key specs */\n#define PUBSUB_SHARDCHANNELS_Keyspecs NULL\n#endif\n\n/* PUBSUB SHARDCHANNELS argument table */\nstruct COMMAND_ARG PUBSUB_SHARDCHANNELS_Args[] = {\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** PUBSUB 
SHARDNUMSUB ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB SHARDNUMSUB history */\n#define PUBSUB_SHARDNUMSUB_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB SHARDNUMSUB tips */\n#define PUBSUB_SHARDNUMSUB_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB SHARDNUMSUB key specs */\n#define PUBSUB_SHARDNUMSUB_Keyspecs NULL\n#endif\n\n/* PUBSUB SHARDNUMSUB argument table */\nstruct COMMAND_ARG PUBSUB_SHARDNUMSUB_Args[] = {\n{MAKE_ARG(\"shardchannel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* PUBSUB command table */\nstruct COMMAND_STRUCT PUBSUB_Subcommands[] = {\n{MAKE_CMD(\"channels\",\"Returns the active channels.\",\"O(N) where N is the number of active channels, and assuming constant time pattern matching (relatively short channels and patterns)\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_CHANNELS_History,0,PUBSUB_CHANNELS_Tips,0,pubsubCommand,-2,CMD_PUBSUB|CMD_LOADING|CMD_STALE,0,PUBSUB_CHANNELS_Keyspecs,0,NULL,1),.args=PUBSUB_CHANNELS_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_HELP_History,0,PUBSUB_HELP_Tips,0,pubsubCommand,2,CMD_LOADING|CMD_STALE,0,PUBSUB_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"numpat\",\"Returns a count of unique pattern subscriptions.\",\"O(1)\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_NUMPAT_History,0,PUBSUB_NUMPAT_Tips,0,pubsubCommand,2,CMD_PUBSUB|CMD_LOADING|CMD_STALE,0,PUBSUB_NUMPAT_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"numsub\",\"Returns a count of subscribers to channels.\",\"O(N) for the NUMSUB subcommand, where N is the number of requested 
channels\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_NUMSUB_History,0,PUBSUB_NUMSUB_Tips,0,pubsubCommand,-2,CMD_PUBSUB|CMD_LOADING|CMD_STALE,0,PUBSUB_NUMSUB_Keyspecs,0,NULL,1),.args=PUBSUB_NUMSUB_Args},\n{MAKE_CMD(\"shardchannels\",\"Returns the active shard channels.\",\"O(N) where N is the number of active shard channels, and assuming constant time pattern matching (relatively short shard channels).\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_SHARDCHANNELS_History,0,PUBSUB_SHARDCHANNELS_Tips,0,pubsubCommand,-2,CMD_PUBSUB|CMD_LOADING|CMD_STALE,0,PUBSUB_SHARDCHANNELS_Keyspecs,0,NULL,1),.args=PUBSUB_SHARDCHANNELS_Args},\n{MAKE_CMD(\"shardnumsub\",\"Returns the count of subscribers of shard channels.\",\"O(N) for the SHARDNUMSUB subcommand, where N is the number of requested shard channels\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_SHARDNUMSUB_History,0,PUBSUB_SHARDNUMSUB_Tips,0,pubsubCommand,-2,CMD_PUBSUB|CMD_LOADING|CMD_STALE,0,PUBSUB_SHARDNUMSUB_Keyspecs,0,NULL,1),.args=PUBSUB_SHARDNUMSUB_Args},\n{0}\n};\n\n/********** PUBSUB ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUBSUB history */\n#define PUBSUB_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUBSUB tips */\n#define PUBSUB_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUBSUB key specs */\n#define PUBSUB_Keyspecs NULL\n#endif\n\n/********** PUNSUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PUNSUBSCRIBE history */\n#define PUNSUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PUNSUBSCRIBE tips */\n#define PUNSUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PUNSUBSCRIBE key specs */\n#define PUNSUBSCRIBE_Keyspecs NULL\n#endif\n\n/* PUNSUBSCRIBE argument table */\nstruct COMMAND_ARG PUNSUBSCRIBE_Args[] = {\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** 
SPUBLISH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SPUBLISH history */\n#define SPUBLISH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SPUBLISH tips */\n#define SPUBLISH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SPUBLISH key specs */\nkeySpec SPUBLISH_Keyspecs[1] = {\n{NULL,CMD_KEY_NOT_KEY,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SPUBLISH argument table */\nstruct COMMAND_ARG SPUBLISH_Args[] = {\n{MAKE_ARG(\"shardchannel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"message\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SSUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SSUBSCRIBE history */\n#define SSUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SSUBSCRIBE tips */\n#define SSUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SSUBSCRIBE key specs */\nkeySpec SSUBSCRIBE_Keyspecs[1] = {\n{NULL,CMD_KEY_NOT_KEY,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SSUBSCRIBE argument table */\nstruct COMMAND_ARG SSUBSCRIBE_Args[] = {\n{MAKE_ARG(\"shardchannel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SUBSCRIBE history */\n#define SUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SUBSCRIBE tips */\n#define SUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SUBSCRIBE key specs */\n#define SUBSCRIBE_Keyspecs NULL\n#endif\n\n/* SUBSCRIBE argument table */\nstruct COMMAND_ARG SUBSCRIBE_Args[] = {\n{MAKE_ARG(\"channel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SUNSUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SUNSUBSCRIBE history */\n#define SUNSUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SUNSUBSCRIBE tips */\n#define 
SUNSUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SUNSUBSCRIBE key specs */\nkeySpec SUNSUBSCRIBE_Keyspecs[1] = {\n{NULL,CMD_KEY_NOT_KEY,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SUNSUBSCRIBE argument table */\nstruct COMMAND_ARG SUNSUBSCRIBE_Args[] = {\n{MAKE_ARG(\"shardchannel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** UNSUBSCRIBE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* UNSUBSCRIBE history */\n#define UNSUBSCRIBE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* UNSUBSCRIBE tips */\n#define UNSUBSCRIBE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* UNSUBSCRIBE key specs */\n#define UNSUBSCRIBE_Keyspecs NULL\n#endif\n\n/* UNSUBSCRIBE argument table */\nstruct COMMAND_ARG UNSUBSCRIBE_Args[] = {\n{MAKE_ARG(\"channel\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** GCRA ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GCRA history */\n#define GCRA_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GCRA tips */\n#define GCRA_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GCRA key specs */\nkeySpec GCRA_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GCRA argument table */\nstruct COMMAND_ARG GCRA_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max-burst\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"tokens-per-period\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"period\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"TOKENS\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** GCRASETVALUE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GCRASETVALUE history */\n#define 
GCRASETVALUE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GCRASETVALUE tips */\n#define GCRASETVALUE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GCRASETVALUE key specs */\nkeySpec GCRASETVALUE_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GCRASETVALUE argument table */\nstruct COMMAND_ARG GCRASETVALUE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"tat\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** EVAL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EVAL history */\n#define EVAL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EVAL tips */\n#define EVAL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EVAL key specs */\nkeySpec EVAL_Keyspecs[1] = {\n{\"We cannot tell how the keys will be used so we assume the worst, RW and UPDATE\",CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* EVAL argument table */\nstruct COMMAND_ARG EVAL_Args[] = {\n{MAKE_ARG(\"script\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** EVALSHA ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EVALSHA history */\n#define EVALSHA_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EVALSHA tips */\n#define EVALSHA_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EVALSHA key specs */\nkeySpec EVALSHA_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* EVALSHA argument table */\nstruct COMMAND_ARG 
EVALSHA_Args[] = {\n{MAKE_ARG(\"sha1\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** EVALSHA_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EVALSHA_RO history */\n#define EVALSHA_RO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EVALSHA_RO tips */\n#define EVALSHA_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EVALSHA_RO key specs */\nkeySpec EVALSHA_RO_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* EVALSHA_RO argument table */\nstruct COMMAND_ARG EVALSHA_RO_Args[] = {\n{MAKE_ARG(\"sha1\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** EVAL_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EVAL_RO history */\n#define EVAL_RO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EVAL_RO tips */\n#define EVAL_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EVAL_RO key specs */\nkeySpec EVAL_RO_Keyspecs[1] = {\n{\"We cannot tell how the keys will be used so we assume the worst, RO and ACCESS\",CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* EVAL_RO argument table */\nstruct COMMAND_ARG EVAL_RO_Args[] = 
{\n{MAKE_ARG(\"script\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** FCALL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FCALL history */\n#define FCALL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FCALL tips */\n#define FCALL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FCALL key specs */\nkeySpec FCALL_Keyspecs[1] = {\n{\"We cannot tell how the keys will be used so we assume the worst, RW and UPDATE\",CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* FCALL argument table */\nstruct COMMAND_ARG FCALL_Args[] = {\n{MAKE_ARG(\"function\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** FCALL_RO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FCALL_RO history */\n#define FCALL_RO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FCALL_RO tips */\n#define FCALL_RO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FCALL_RO key specs */\nkeySpec FCALL_RO_Keyspecs[1] = {\n{\"We cannot tell how the keys will be used so we assume the worst, RO and ACCESS\",CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* FCALL_RO argument table */\nstruct COMMAND_ARG FCALL_RO_Args[] = 
{\n{MAKE_ARG(\"function\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** FUNCTION DELETE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION DELETE history */\n#define FUNCTION_DELETE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION DELETE tips */\nconst char *FUNCTION_DELETE_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION DELETE key specs */\n#define FUNCTION_DELETE_Keyspecs NULL\n#endif\n\n/* FUNCTION DELETE argument table */\nstruct COMMAND_ARG FUNCTION_DELETE_Args[] = {\n{MAKE_ARG(\"library-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** FUNCTION DUMP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION DUMP history */\n#define FUNCTION_DUMP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION DUMP tips */\n#define FUNCTION_DUMP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION DUMP key specs */\n#define FUNCTION_DUMP_Keyspecs NULL\n#endif\n\n/********** FUNCTION FLUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION FLUSH history */\n#define FUNCTION_FLUSH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION FLUSH tips */\nconst char *FUNCTION_FLUSH_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION FLUSH key specs */\n#define FUNCTION_FLUSH_Keyspecs NULL\n#endif\n\n/* FUNCTION FLUSH flush_type argument table */\nstruct COMMAND_ARG FUNCTION_FLUSH_flush_type_Subargs[] = 
{\n{MAKE_ARG(\"async\",ARG_TYPE_PURE_TOKEN,-1,\"ASYNC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sync\",ARG_TYPE_PURE_TOKEN,-1,\"SYNC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* FUNCTION FLUSH argument table */\nstruct COMMAND_ARG FUNCTION_FLUSH_Args[] = {\n{MAKE_ARG(\"flush-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=FUNCTION_FLUSH_flush_type_Subargs},\n};\n\n/********** FUNCTION HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION HELP history */\n#define FUNCTION_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION HELP tips */\n#define FUNCTION_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION HELP key specs */\n#define FUNCTION_HELP_Keyspecs NULL\n#endif\n\n/********** FUNCTION KILL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION KILL history */\n#define FUNCTION_KILL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION KILL tips */\nconst char *FUNCTION_KILL_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:one_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION KILL key specs */\n#define FUNCTION_KILL_Keyspecs NULL\n#endif\n\n/********** FUNCTION LIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION LIST history */\n#define FUNCTION_LIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION LIST tips */\nconst char *FUNCTION_LIST_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION LIST key specs */\n#define FUNCTION_LIST_Keyspecs NULL\n#endif\n\n/* FUNCTION LIST argument table */\nstruct COMMAND_ARG FUNCTION_LIST_Args[] = {\n{MAKE_ARG(\"library-name-pattern\",ARG_TYPE_STRING,-1,\"LIBRARYNAME\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withcode\",ARG_TYPE_PURE_TOKEN,-1,\"WITHCODE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** FUNCTION LOAD ********************/\n\n#ifndef 
SKIP_CMD_HISTORY_TABLE\n/* FUNCTION LOAD history */\n#define FUNCTION_LOAD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION LOAD tips */\nconst char *FUNCTION_LOAD_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION LOAD key specs */\n#define FUNCTION_LOAD_Keyspecs NULL\n#endif\n\n/* FUNCTION LOAD argument table */\nstruct COMMAND_ARG FUNCTION_LOAD_Args[] = {\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"function-code\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** FUNCTION RESTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION RESTORE history */\n#define FUNCTION_RESTORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION RESTORE tips */\nconst char *FUNCTION_RESTORE_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION RESTORE key specs */\n#define FUNCTION_RESTORE_Keyspecs NULL\n#endif\n\n/* FUNCTION RESTORE policy argument table */\nstruct COMMAND_ARG FUNCTION_RESTORE_policy_Subargs[] = {\n{MAKE_ARG(\"flush\",ARG_TYPE_PURE_TOKEN,-1,\"FLUSH\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"append\",ARG_TYPE_PURE_TOKEN,-1,\"APPEND\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* FUNCTION RESTORE argument table */\nstruct COMMAND_ARG FUNCTION_RESTORE_Args[] = {\n{MAKE_ARG(\"serialized-value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"policy\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=FUNCTION_RESTORE_policy_Subargs},\n};\n\n/********** FUNCTION STATS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION STATS history */\n#define FUNCTION_STATS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* 
FUNCTION STATS tips */\nconst char *FUNCTION_STATS_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION STATS key specs */\n#define FUNCTION_STATS_Keyspecs NULL\n#endif\n\n/* FUNCTION command table */\nstruct COMMAND_STRUCT FUNCTION_Subcommands[] = {\n{MAKE_CMD(\"delete\",\"Deletes a library and its functions.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_DELETE_History,0,FUNCTION_DELETE_Tips,2,functionDeleteCommand,3,CMD_NOSCRIPT|CMD_WRITE,ACL_CATEGORY_SCRIPTING,FUNCTION_DELETE_Keyspecs,0,NULL,1),.args=FUNCTION_DELETE_Args},\n{MAKE_CMD(\"dump\",\"Dumps all libraries into a serialized binary payload.\",\"O(N) where N is the number of functions\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_DUMP_History,0,FUNCTION_DUMP_Tips,0,functionDumpCommand,2,CMD_NOSCRIPT,ACL_CATEGORY_SCRIPTING,FUNCTION_DUMP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"flush\",\"Deletes all libraries and functions.\",\"O(N) where N is the number of functions deleted\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_FLUSH_History,0,FUNCTION_FLUSH_Tips,2,functionFlushCommand,-2,CMD_NOSCRIPT|CMD_WRITE,ACL_CATEGORY_SCRIPTING,FUNCTION_FLUSH_Keyspecs,0,NULL,1),.args=FUNCTION_FLUSH_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_HELP_History,0,FUNCTION_HELP_Tips,0,functionHelpCommand,2,CMD_LOADING|CMD_STALE,ACL_CATEGORY_SCRIPTING,FUNCTION_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"kill\",\"Terminates a function during 
execution.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_KILL_History,0,FUNCTION_KILL_Tips,2,functionKillCommand,2,CMD_NOSCRIPT|CMD_ALLOW_BUSY,ACL_CATEGORY_SCRIPTING,FUNCTION_KILL_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"list\",\"Returns information about all libraries.\",\"O(N) where N is the number of functions\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_LIST_History,0,FUNCTION_LIST_Tips,1,functionListCommand,-2,CMD_NOSCRIPT,ACL_CATEGORY_SCRIPTING,FUNCTION_LIST_Keyspecs,0,NULL,2),.args=FUNCTION_LIST_Args},\n{MAKE_CMD(\"load\",\"Creates a library.\",\"O(1) (considering compilation time is redundant)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_LOAD_History,0,FUNCTION_LOAD_Tips,2,functionLoadCommand,-3,CMD_NOSCRIPT|CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SCRIPTING,FUNCTION_LOAD_Keyspecs,0,NULL,2),.args=FUNCTION_LOAD_Args},\n{MAKE_CMD(\"restore\",\"Restores all libraries from a payload.\",\"O(N) where N is the number of functions on the payload\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_RESTORE_History,0,FUNCTION_RESTORE_Tips,2,functionRestoreCommand,-3,CMD_NOSCRIPT|CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SCRIPTING,FUNCTION_RESTORE_Keyspecs,0,NULL,2),.args=FUNCTION_RESTORE_Args},\n{MAKE_CMD(\"stats\",\"Returns information about a function during execution.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_STATS_History,0,FUNCTION_STATS_Tips,3,functionStatsCommand,2,CMD_NOSCRIPT|CMD_ALLOW_BUSY,ACL_CATEGORY_SCRIPTING,FUNCTION_STATS_Keyspecs,0,NULL,0)},\n{0}\n};\n\n/********** FUNCTION ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FUNCTION history */\n#define FUNCTION_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FUNCTION tips */\n#define FUNCTION_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FUNCTION key specs */\n#define FUNCTION_Keyspecs 
NULL\n#endif\n\n/********** SCRIPT DEBUG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT DEBUG history */\n#define SCRIPT_DEBUG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT DEBUG tips */\n#define SCRIPT_DEBUG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT DEBUG key specs */\n#define SCRIPT_DEBUG_Keyspecs NULL\n#endif\n\n/* SCRIPT DEBUG mode argument table */\nstruct COMMAND_ARG SCRIPT_DEBUG_mode_Subargs[] = {\n{MAKE_ARG(\"yes\",ARG_TYPE_PURE_TOKEN,-1,\"YES\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sync\",ARG_TYPE_PURE_TOKEN,-1,\"SYNC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"no\",ARG_TYPE_PURE_TOKEN,-1,\"NO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SCRIPT DEBUG argument table */\nstruct COMMAND_ARG SCRIPT_DEBUG_Args[] = {\n{MAKE_ARG(\"mode\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=SCRIPT_DEBUG_mode_Subargs},\n};\n\n/********** SCRIPT EXISTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT EXISTS history */\n#define SCRIPT_EXISTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT EXISTS tips */\nconst char *SCRIPT_EXISTS_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:agg_logical_and\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT EXISTS key specs */\n#define SCRIPT_EXISTS_Keyspecs NULL\n#endif\n\n/* SCRIPT EXISTS argument table */\nstruct COMMAND_ARG SCRIPT_EXISTS_Args[] = {\n{MAKE_ARG(\"sha1\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SCRIPT FLUSH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT FLUSH history */\ncommandHistory SCRIPT_FLUSH_History[] = {\n{\"6.2.0\",\"Added the `ASYNC` and `SYNC` flushing mode modifiers.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT FLUSH tips */\nconst char *SCRIPT_FLUSH_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT FLUSH key 
specs */\n#define SCRIPT_FLUSH_Keyspecs NULL\n#endif\n\n/* SCRIPT FLUSH flush_type argument table */\nstruct COMMAND_ARG SCRIPT_FLUSH_flush_type_Subargs[] = {\n{MAKE_ARG(\"async\",ARG_TYPE_PURE_TOKEN,-1,\"ASYNC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sync\",ARG_TYPE_PURE_TOKEN,-1,\"SYNC\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SCRIPT FLUSH argument table */\nstruct COMMAND_ARG SCRIPT_FLUSH_Args[] = {\n{MAKE_ARG(\"flush-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=SCRIPT_FLUSH_flush_type_Subargs},\n};\n\n/********** SCRIPT HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT HELP history */\n#define SCRIPT_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT HELP tips */\n#define SCRIPT_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT HELP key specs */\n#define SCRIPT_HELP_Keyspecs NULL\n#endif\n\n/********** SCRIPT KILL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT KILL history */\n#define SCRIPT_KILL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT KILL tips */\nconst char *SCRIPT_KILL_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:one_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT KILL key specs */\n#define SCRIPT_KILL_Keyspecs NULL\n#endif\n\n/********** SCRIPT LOAD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT LOAD history */\n#define SCRIPT_LOAD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT LOAD tips */\nconst char *SCRIPT_LOAD_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT LOAD key specs */\n#define SCRIPT_LOAD_Keyspecs NULL\n#endif\n\n/* SCRIPT LOAD argument table */\nstruct COMMAND_ARG SCRIPT_LOAD_Args[] = {\n{MAKE_ARG(\"script\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SCRIPT command table */\nstruct COMMAND_STRUCT 
SCRIPT_Subcommands[] = {\n{MAKE_CMD(\"debug\",\"Sets the debug mode of server-side Lua scripts.\",\"O(1)\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_DEBUG_History,0,SCRIPT_DEBUG_Tips,0,scriptCommand,3,CMD_NOSCRIPT,ACL_CATEGORY_SCRIPTING,SCRIPT_DEBUG_Keyspecs,0,NULL,1),.args=SCRIPT_DEBUG_Args},\n{MAKE_CMD(\"exists\",\"Determines whether server-side Lua scripts exist in the script cache.\",\"O(N) with N being the number of scripts to check (so checking a single script is an O(1) operation).\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_EXISTS_History,0,SCRIPT_EXISTS_Tips,2,scriptCommand,-3,CMD_NOSCRIPT,ACL_CATEGORY_SCRIPTING,SCRIPT_EXISTS_Keyspecs,0,NULL,1),.args=SCRIPT_EXISTS_Args},\n{MAKE_CMD(\"flush\",\"Removes all server-side Lua scripts from the script cache.\",\"O(N) with N being the number of scripts in cache\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_FLUSH_History,1,SCRIPT_FLUSH_Tips,2,scriptCommand,-2,CMD_NOSCRIPT,ACL_CATEGORY_SCRIPTING,SCRIPT_FLUSH_Keyspecs,0,NULL,1),.args=SCRIPT_FLUSH_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_HELP_History,0,SCRIPT_HELP_Tips,0,scriptCommand,2,CMD_LOADING|CMD_STALE,ACL_CATEGORY_SCRIPTING,SCRIPT_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"kill\",\"Terminates a server-side Lua script during execution.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_KILL_History,0,SCRIPT_KILL_Tips,2,scriptCommand,2,CMD_NOSCRIPT|CMD_ALLOW_BUSY,ACL_CATEGORY_SCRIPTING,SCRIPT_KILL_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"load\",\"Loads a server-side Lua script to the script cache.\",\"O(N) with N being the length in bytes of the script 
body.\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_LOAD_History,0,SCRIPT_LOAD_Tips,2,scriptCommand,3,CMD_NOSCRIPT|CMD_STALE,ACL_CATEGORY_SCRIPTING,SCRIPT_LOAD_Keyspecs,0,NULL,1),.args=SCRIPT_LOAD_Args},\n{0}\n};\n\n/********** SCRIPT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCRIPT history */\n#define SCRIPT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCRIPT tips */\n#define SCRIPT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCRIPT key specs */\n#define SCRIPT_Keyspecs NULL\n#endif\n\n/********** SENTINEL CKQUORUM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL CKQUORUM history */\n#define SENTINEL_CKQUORUM_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL CKQUORUM tips */\n#define SENTINEL_CKQUORUM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL CKQUORUM key specs */\n#define SENTINEL_CKQUORUM_Keyspecs NULL\n#endif\n\n/* SENTINEL CKQUORUM argument table */\nstruct COMMAND_ARG SENTINEL_CKQUORUM_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL CONFIG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL CONFIG history */\ncommandHistory SENTINEL_CONFIG_History[] = {\n{\"7.2.0\",\"Added the ability to set and get multiple parameters in one call.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL CONFIG tips */\n#define SENTINEL_CONFIG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL CONFIG key specs */\n#define SENTINEL_CONFIG_Keyspecs NULL\n#endif\n\n/* SENTINEL CONFIG action set argument table */\nstruct COMMAND_ARG SENTINEL_CONFIG_action_set_Subargs[] = {\n{MAKE_ARG(\"parameter\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SENTINEL CONFIG action argument table */\nstruct COMMAND_ARG SENTINEL_CONFIG_action_Subargs[] = 
{\n{MAKE_ARG(\"set\",ARG_TYPE_BLOCK,-1,\"SET\",NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=SENTINEL_CONFIG_action_set_Subargs},\n{MAKE_ARG(\"parameter\",ARG_TYPE_STRING,-1,\"GET\",NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* SENTINEL CONFIG argument table */\nstruct COMMAND_ARG SENTINEL_CONFIG_Args[] = {\n{MAKE_ARG(\"action\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=SENTINEL_CONFIG_action_Subargs},\n};\n\n/********** SENTINEL DEBUG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL DEBUG history */\n#define SENTINEL_DEBUG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL DEBUG tips */\n#define SENTINEL_DEBUG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL DEBUG key specs */\n#define SENTINEL_DEBUG_Keyspecs NULL\n#endif\n\n/* SENTINEL DEBUG data argument table */\nstruct COMMAND_ARG SENTINEL_DEBUG_data_Subargs[] = {\n{MAKE_ARG(\"parameter\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SENTINEL DEBUG argument table */\nstruct COMMAND_ARG SENTINEL_DEBUG_Args[] = {\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,2,NULL),.subargs=SENTINEL_DEBUG_data_Subargs},\n};\n\n/********** SENTINEL FAILOVER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL FAILOVER history */\n#define SENTINEL_FAILOVER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL FAILOVER tips */\n#define SENTINEL_FAILOVER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL FAILOVER key specs */\n#define SENTINEL_FAILOVER_Keyspecs NULL\n#endif\n\n/* SENTINEL FAILOVER argument table */\nstruct COMMAND_ARG SENTINEL_FAILOVER_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL FLUSHCONFIG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL FLUSHCONFIG history 
*/\n#define SENTINEL_FLUSHCONFIG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL FLUSHCONFIG tips */\n#define SENTINEL_FLUSHCONFIG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL FLUSHCONFIG key specs */\n#define SENTINEL_FLUSHCONFIG_Keyspecs NULL\n#endif\n\n/********** SENTINEL GET_MASTER_ADDR_BY_NAME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL GET_MASTER_ADDR_BY_NAME history */\n#define SENTINEL_GET_MASTER_ADDR_BY_NAME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL GET_MASTER_ADDR_BY_NAME tips */\n#define SENTINEL_GET_MASTER_ADDR_BY_NAME_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL GET_MASTER_ADDR_BY_NAME key specs */\n#define SENTINEL_GET_MASTER_ADDR_BY_NAME_Keyspecs NULL\n#endif\n\n/* SENTINEL GET_MASTER_ADDR_BY_NAME argument table */\nstruct COMMAND_ARG SENTINEL_GET_MASTER_ADDR_BY_NAME_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL HELP history */\n#define SENTINEL_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL HELP tips */\n#define SENTINEL_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL HELP key specs */\n#define SENTINEL_HELP_Keyspecs NULL\n#endif\n\n/********** SENTINEL INFO_CACHE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL INFO_CACHE history */\n#define SENTINEL_INFO_CACHE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL INFO_CACHE tips */\n#define SENTINEL_INFO_CACHE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL INFO_CACHE key specs */\n#define SENTINEL_INFO_CACHE_Keyspecs NULL\n#endif\n\n/* SENTINEL INFO_CACHE argument table */\nstruct COMMAND_ARG SENTINEL_INFO_CACHE_Args[] = {\n{MAKE_ARG(\"nodename\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SENTINEL 
IS_MASTER_DOWN_BY_ADDR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL IS_MASTER_DOWN_BY_ADDR history */\n#define SENTINEL_IS_MASTER_DOWN_BY_ADDR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL IS_MASTER_DOWN_BY_ADDR tips */\n#define SENTINEL_IS_MASTER_DOWN_BY_ADDR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL IS_MASTER_DOWN_BY_ADDR key specs */\n#define SENTINEL_IS_MASTER_DOWN_BY_ADDR_Keyspecs NULL\n#endif\n\n/* SENTINEL IS_MASTER_DOWN_BY_ADDR argument table */\nstruct COMMAND_ARG SENTINEL_IS_MASTER_DOWN_BY_ADDR_Args[] = {\n{MAKE_ARG(\"ip\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"current-epoch\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"runid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL MASTER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL MASTER history */\n#define SENTINEL_MASTER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL MASTER tips */\n#define SENTINEL_MASTER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL MASTER key specs */\n#define SENTINEL_MASTER_Keyspecs NULL\n#endif\n\n/* SENTINEL MASTER argument table */\nstruct COMMAND_ARG SENTINEL_MASTER_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL MASTERS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL MASTERS history */\n#define SENTINEL_MASTERS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL MASTERS tips */\n#define SENTINEL_MASTERS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL MASTERS key specs */\n#define SENTINEL_MASTERS_Keyspecs NULL\n#endif\n\n/********** SENTINEL MONITOR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL MONITOR history */\n#define 
SENTINEL_MONITOR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL MONITOR tips */\n#define SENTINEL_MONITOR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL MONITOR key specs */\n#define SENTINEL_MONITOR_Keyspecs NULL\n#endif\n\n/* SENTINEL MONITOR argument table */\nstruct COMMAND_ARG SENTINEL_MONITOR_Args[] = {\n{MAKE_ARG(\"name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ip\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"quorum\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL MYID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL MYID history */\n#define SENTINEL_MYID_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL MYID tips */\n#define SENTINEL_MYID_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL MYID key specs */\n#define SENTINEL_MYID_Keyspecs NULL\n#endif\n\n/********** SENTINEL PENDING_SCRIPTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL PENDING_SCRIPTS history */\n#define SENTINEL_PENDING_SCRIPTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL PENDING_SCRIPTS tips */\n#define SENTINEL_PENDING_SCRIPTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL PENDING_SCRIPTS key specs */\n#define SENTINEL_PENDING_SCRIPTS_Keyspecs NULL\n#endif\n\n/********** SENTINEL REMOVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL REMOVE history */\n#define SENTINEL_REMOVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL REMOVE tips */\n#define SENTINEL_REMOVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL REMOVE key specs */\n#define SENTINEL_REMOVE_Keyspecs NULL\n#endif\n\n/* SENTINEL REMOVE argument table */\nstruct COMMAND_ARG SENTINEL_REMOVE_Args[] = 
{\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL REPLICAS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL REPLICAS history */\n#define SENTINEL_REPLICAS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL REPLICAS tips */\n#define SENTINEL_REPLICAS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL REPLICAS key specs */\n#define SENTINEL_REPLICAS_Keyspecs NULL\n#endif\n\n/* SENTINEL REPLICAS argument table */\nstruct COMMAND_ARG SENTINEL_REPLICAS_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL RESET history */\n#define SENTINEL_RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL RESET tips */\n#define SENTINEL_RESET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL RESET key specs */\n#define SENTINEL_RESET_Keyspecs NULL\n#endif\n\n/* SENTINEL RESET argument table */\nstruct COMMAND_ARG SENTINEL_RESET_Args[] = {\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL SENTINELS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL SENTINELS history */\n#define SENTINEL_SENTINELS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL SENTINELS tips */\n#define SENTINEL_SENTINELS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL SENTINELS key specs */\n#define SENTINEL_SENTINELS_Keyspecs NULL\n#endif\n\n/* SENTINEL SENTINELS argument table */\nstruct COMMAND_ARG SENTINEL_SENTINELS_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SENTINEL SET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL SET history */\n#define SENTINEL_SET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL SET 
tips */\n#define SENTINEL_SET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL SET key specs */\n#define SENTINEL_SET_Keyspecs NULL\n#endif\n\n/* SENTINEL SET data argument table */\nstruct COMMAND_ARG SENTINEL_SET_data_Subargs[] = {\n{MAKE_ARG(\"option\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SENTINEL SET argument table */\nstruct COMMAND_ARG SENTINEL_SET_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=SENTINEL_SET_data_Subargs},\n};\n\n/********** SENTINEL SIMULATE_FAILURE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL SIMULATE_FAILURE history */\n#define SENTINEL_SIMULATE_FAILURE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL SIMULATE_FAILURE tips */\n#define SENTINEL_SIMULATE_FAILURE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL SIMULATE_FAILURE key specs */\n#define SENTINEL_SIMULATE_FAILURE_Keyspecs NULL\n#endif\n\n/* SENTINEL SIMULATE_FAILURE mode argument table */\nstruct COMMAND_ARG SENTINEL_SIMULATE_FAILURE_mode_Subargs[] = {\n{MAKE_ARG(\"crash-after-election\",ARG_TYPE_PURE_TOKEN,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"crash-after-promotion\",ARG_TYPE_PURE_TOKEN,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"help\",ARG_TYPE_PURE_TOKEN,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SENTINEL SIMULATE_FAILURE argument table */\nstruct COMMAND_ARG SENTINEL_SIMULATE_FAILURE_Args[] = {\n{MAKE_ARG(\"mode\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,3,NULL),.subargs=SENTINEL_SIMULATE_FAILURE_mode_Subargs},\n};\n\n/********** SENTINEL SLAVES ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL SLAVES history */\n#define SENTINEL_SLAVES_History NULL\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* SENTINEL SLAVES tips */\n#define SENTINEL_SLAVES_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL SLAVES key specs */\n#define SENTINEL_SLAVES_Keyspecs NULL\n#endif\n\n/* SENTINEL SLAVES argument table */\nstruct COMMAND_ARG SENTINEL_SLAVES_Args[] = {\n{MAKE_ARG(\"master-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SENTINEL command table */\nstruct COMMAND_STRUCT SENTINEL_Subcommands[] = {\n{MAKE_CMD(\"ckquorum\",\"Checks for a Redis Sentinel quorum.\",NULL,\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_CKQUORUM_History,0,SENTINEL_CKQUORUM_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_CKQUORUM_Keyspecs,0,NULL,1),.args=SENTINEL_CKQUORUM_Args},\n{MAKE_CMD(\"config\",\"Configures Redis Sentinel.\",\"O(N) when N is the number of configuration parameters provided\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_CONFIG_History,1,SENTINEL_CONFIG_Tips,0,sentinelCommand,-4,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_CONFIG_Keyspecs,0,NULL,1),.args=SENTINEL_CONFIG_Args},\n{MAKE_CMD(\"debug\",\"Lists or updates the current configurable parameters of Redis Sentinel.\",\"O(N) where N is the number of configurable parameters\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_DEBUG_History,0,SENTINEL_DEBUG_Tips,0,sentinelCommand,-2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_DEBUG_Keyspecs,0,NULL,1),.args=SENTINEL_DEBUG_Args},\n{MAKE_CMD(\"failover\",\"Forces a Redis Sentinel failover.\",NULL,\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_FAILOVER_History,0,SENTINEL_FAILOVER_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_FAILOVER_Keyspecs,0,NULL,1),.args=SENTINEL_FAILOVER_Args},\n{MAKE_CMD(\"flushconfig\",\"Rewrites the Redis Sentinel configuration 
file.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_FLUSHCONFIG_History,0,SENTINEL_FLUSHCONFIG_Tips,0,sentinelCommand,2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_FLUSHCONFIG_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"get-master-addr-by-name\",\"Returns the port and address of a master Redis instance.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_GET_MASTER_ADDR_BY_NAME_History,0,SENTINEL_GET_MASTER_ADDR_BY_NAME_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_GET_MASTER_ADDR_BY_NAME_Keyspecs,0,NULL,1),.args=SENTINEL_GET_MASTER_ADDR_BY_NAME_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_HELP_History,0,SENTINEL_HELP_Tips,0,sentinelCommand,2,CMD_LOADING|CMD_STALE|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"info-cache\",\"Returns the cached `INFO` replies from the deployment's instances.\",\"O(N) where N is the number of instances\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_INFO_CACHE_History,0,SENTINEL_INFO_CACHE_Tips,0,sentinelCommand,-3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_INFO_CACHE_Keyspecs,0,NULL,1),.args=SENTINEL_INFO_CACHE_Args},\n{MAKE_CMD(\"is-master-down-by-addr\",\"Determines whether a master Redis instance is down.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_IS_MASTER_DOWN_BY_ADDR_History,0,SENTINEL_IS_MASTER_DOWN_BY_ADDR_Tips,0,sentinelCommand,6,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_IS_MASTER_DOWN_BY_ADDR_Keyspecs,0,NULL,4),.args=SENTINEL_IS_MASTER_DOWN_BY_ADDR_Args},\n{MAKE_CMD(\"master\",\"Returns the state of a master Redis 
instance.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_MASTER_History,0,SENTINEL_MASTER_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_MASTER_Keyspecs,0,NULL,1),.args=SENTINEL_MASTER_Args},\n{MAKE_CMD(\"masters\",\"Returns a list of monitored Redis masters.\",\"O(N) where N is the number of masters\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_MASTERS_History,0,SENTINEL_MASTERS_Tips,0,sentinelCommand,2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_MASTERS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"monitor\",\"Starts monitoring.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_MONITOR_History,0,SENTINEL_MONITOR_Tips,0,sentinelCommand,6,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_MONITOR_Keyspecs,0,NULL,4),.args=SENTINEL_MONITOR_Args},\n{MAKE_CMD(\"myid\",\"Returns the Redis Sentinel instance ID.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_MYID_History,0,SENTINEL_MYID_Tips,0,sentinelCommand,2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_MYID_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"pending-scripts\",\"Returns information about pending scripts for Redis Sentinel.\",NULL,\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_PENDING_SCRIPTS_History,0,SENTINEL_PENDING_SCRIPTS_Tips,0,sentinelCommand,2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_PENDING_SCRIPTS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"remove\",\"Stops monitoring.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_REMOVE_History,0,SENTINEL_REMOVE_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_REMOVE_Keyspecs,0,NULL,1),.args=SENTINEL_REMOVE_Args},\n{MAKE_CMD(\"replicas\",\"Returns a list of the monitored Redis replicas.\",\"O(N) where N is the number of 
replicas\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_REPLICAS_History,0,SENTINEL_REPLICAS_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_REPLICAS_Keyspecs,0,NULL,1),.args=SENTINEL_REPLICAS_Args},\n{MAKE_CMD(\"reset\",\"Resets Redis masters by name matching a pattern.\",\"O(N) where N is the number of monitored masters\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_RESET_History,0,SENTINEL_RESET_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_RESET_Keyspecs,0,NULL,1),.args=SENTINEL_RESET_Args},\n{MAKE_CMD(\"sentinels\",\"Returns a list of Sentinel instances.\",\"O(N) where N is the number of Sentinels\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_SENTINELS_History,0,SENTINEL_SENTINELS_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_SENTINELS_Keyspecs,0,NULL,1),.args=SENTINEL_SENTINELS_Args},\n{MAKE_CMD(\"set\",\"Changes the configuration of a monitored Redis master.\",\"O(1)\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_SET_History,0,SENTINEL_SET_Tips,0,sentinelCommand,-5,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_SET_Keyspecs,0,NULL,2),.args=SENTINEL_SET_Args},\n{MAKE_CMD(\"simulate-failure\",\"Simulates failover scenarios.\",NULL,\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_SIMULATE_FAILURE_History,0,SENTINEL_SIMULATE_FAILURE_Tips,0,sentinelCommand,-3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_SIMULATE_FAILURE_Keyspecs,0,NULL,1),.args=SENTINEL_SIMULATE_FAILURE_Args},\n{MAKE_CMD(\"slaves\",\"Returns a list of the monitored replicas.\",\"O(N) where N is the number of replicas.\",\"2.8.0\",CMD_DOC_DEPRECATED,\"`SENTINEL 
REPLICAS`\",\"5.0.0\",\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_SLAVES_History,0,SENTINEL_SLAVES_Tips,0,sentinelCommand,3,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_SLAVES_Keyspecs,0,NULL,1),.args=SENTINEL_SLAVES_Args},\n{0}\n};\n\n/********** SENTINEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SENTINEL history */\n#define SENTINEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SENTINEL tips */\n#define SENTINEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SENTINEL key specs */\n#define SENTINEL_Keyspecs NULL\n#endif\n\n/********** ACL CAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL CAT history */\n#define ACL_CAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL CAT tips */\n#define ACL_CAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL CAT key specs */\n#define ACL_CAT_Keyspecs NULL\n#endif\n\n/* ACL CAT argument table */\nstruct COMMAND_ARG ACL_CAT_Args[] = {\n{MAKE_ARG(\"category\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ACL DELUSER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL DELUSER history */\n#define ACL_DELUSER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL DELUSER tips */\nconst char *ACL_DELUSER_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL DELUSER key specs */\n#define ACL_DELUSER_Keyspecs NULL\n#endif\n\n/* ACL DELUSER argument table */\nstruct COMMAND_ARG ACL_DELUSER_Args[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ACL DRYRUN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL DRYRUN history */\n#define ACL_DRYRUN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL DRYRUN tips */\n#define ACL_DRYRUN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL DRYRUN key specs */\n#define ACL_DRYRUN_Keyspecs 
NULL\n#endif\n\n/* ACL DRYRUN argument table */\nstruct COMMAND_ARG ACL_DRYRUN_Args[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"command\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ACL GENPASS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL GENPASS history */\n#define ACL_GENPASS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL GENPASS tips */\n#define ACL_GENPASS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL GENPASS key specs */\n#define ACL_GENPASS_Keyspecs NULL\n#endif\n\n/* ACL GENPASS argument table */\nstruct COMMAND_ARG ACL_GENPASS_Args[] = {\n{MAKE_ARG(\"bits\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ACL GETUSER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL GETUSER history */\ncommandHistory ACL_GETUSER_History[] = {\n{\"6.2.0\",\"Added Pub/Sub channel patterns.\"},\n{\"7.0.0\",\"Added selectors and changed the format of key and channel patterns from a list to their rule representation.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL GETUSER tips */\n#define ACL_GETUSER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL GETUSER key specs */\n#define ACL_GETUSER_Keyspecs NULL\n#endif\n\n/* ACL GETUSER argument table */\nstruct COMMAND_ARG ACL_GETUSER_Args[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ACL HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL HELP history */\n#define ACL_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL HELP tips */\n#define ACL_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL HELP key specs */\n#define ACL_HELP_Keyspecs NULL\n#endif\n\n/********** ACL LIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL 
LIST history */\n#define ACL_LIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL LIST tips */\n#define ACL_LIST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL LIST key specs */\n#define ACL_LIST_Keyspecs NULL\n#endif\n\n/********** ACL LOAD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL LOAD history */\n#define ACL_LOAD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL LOAD tips */\n#define ACL_LOAD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL LOAD key specs */\n#define ACL_LOAD_Keyspecs NULL\n#endif\n\n/********** ACL LOG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL LOG history */\ncommandHistory ACL_LOG_History[] = {\n{\"7.2.0\",\"Added entry ID, timestamp created, and timestamp last updated.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL LOG tips */\n#define ACL_LOG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL LOG key specs */\n#define ACL_LOG_Keyspecs NULL\n#endif\n\n/* ACL LOG operation argument table */\nstruct COMMAND_ARG ACL_LOG_operation_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"reset\",ARG_TYPE_PURE_TOKEN,-1,\"RESET\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ACL LOG argument table */\nstruct COMMAND_ARG ACL_LOG_Args[] = {\n{MAKE_ARG(\"operation\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ACL_LOG_operation_Subargs},\n};\n\n/********** ACL SAVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL SAVE history */\n#define ACL_SAVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL SAVE tips */\nconst char *ACL_SAVE_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL SAVE key specs */\n#define ACL_SAVE_Keyspecs NULL\n#endif\n\n/********** ACL SETUSER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL SETUSER history */\ncommandHistory 
ACL_SETUSER_History[] = {\n{\"6.2.0\",\"Added Pub/Sub channel patterns.\"},\n{\"7.0.0\",\"Added selectors and key based permissions.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL SETUSER tips */\nconst char *ACL_SETUSER_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL SETUSER key specs */\n#define ACL_SETUSER_Keyspecs NULL\n#endif\n\n/* ACL SETUSER argument table */\nstruct COMMAND_ARG ACL_SETUSER_Args[] = {\n{MAKE_ARG(\"username\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"rule\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ACL USERS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL USERS history */\n#define ACL_USERS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL USERS tips */\n#define ACL_USERS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL USERS key specs */\n#define ACL_USERS_Keyspecs NULL\n#endif\n\n/********** ACL WHOAMI ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL WHOAMI history */\n#define ACL_WHOAMI_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL WHOAMI tips */\n#define ACL_WHOAMI_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL WHOAMI key specs */\n#define ACL_WHOAMI_Keyspecs NULL\n#endif\n\n/* ACL command table */\nstruct COMMAND_STRUCT ACL_Subcommands[] = {\n{MAKE_CMD(\"cat\",\"Lists the ACL categories, or the commands inside a category.\",\"O(1) since the categories and commands are a fixed set.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_CAT_History,0,ACL_CAT_Tips,0,aclCommand,-2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_CAT_Keyspecs,0,NULL,1),.args=ACL_CAT_Args},\n{MAKE_CMD(\"deluser\",\"Deletes ACL users, and terminates their connections.\",\"O(1) amortized time considering the typical 
user.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_DELUSER_History,0,ACL_DELUSER_Tips,2,aclCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_DELUSER_Keyspecs,0,NULL,1),.args=ACL_DELUSER_Args},\n{MAKE_CMD(\"dryrun\",\"Simulates the execution of a command by a user, without executing the command.\",\"O(1).\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_DRYRUN_History,0,ACL_DRYRUN_Tips,0,aclCommand,-4,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_DRYRUN_Keyspecs,0,NULL,3),.args=ACL_DRYRUN_Args},\n{MAKE_CMD(\"genpass\",\"Generates a pseudorandom, secure password that can be used to identify ACL users.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_GENPASS_History,0,ACL_GENPASS_Tips,0,aclCommand,-2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_GENPASS_Keyspecs,0,NULL,1),.args=ACL_GENPASS_Args},\n{MAKE_CMD(\"getuser\",\"Lists the ACL rules of a user.\",\"O(N). Where N is the number of password, command and pattern rules that the user has.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_GETUSER_History,2,ACL_GETUSER_Tips,0,aclCommand,3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_GETUSER_Keyspecs,0,NULL,1),.args=ACL_GETUSER_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_HELP_History,0,ACL_HELP_Tips,0,aclCommand,2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"list\",\"Dumps the effective rules in ACL file format.\",\"O(N). Where N is the number of configured users.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_LIST_History,0,ACL_LIST_Tips,0,aclCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_LIST_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"load\",\"Reloads the rules from the configured ACL file.\",\"O(N). 
Where N is the number of configured users.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_LOAD_History,0,ACL_LOAD_Tips,0,aclCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_LOAD_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"log\",\"Lists recent security events generated due to ACL rules.\",\"O(N) with N being the number of entries shown.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_LOG_History,1,ACL_LOG_Tips,0,aclCommand,-2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_LOG_Keyspecs,0,NULL,1),.args=ACL_LOG_Args},\n{MAKE_CMD(\"save\",\"Saves the effective ACL rules in the configured ACL file.\",\"O(N). Where N is the number of configured users.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_SAVE_History,0,ACL_SAVE_Tips,2,aclCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_SAVE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"setuser\",\"Creates and modifies an ACL user and its rules.\",\"O(N). Where N is the number of rules provided.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_SETUSER_History,2,ACL_SETUSER_Tips,2,aclCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_SETUSER_Keyspecs,0,NULL,2),.args=ACL_SETUSER_Args},\n{MAKE_CMD(\"users\",\"Lists all ACL users.\",\"O(N). 
Where N is the number of configured users.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_USERS_History,0,ACL_USERS_Tips,0,aclCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_USERS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"whoami\",\"Returns the authenticated username of the current connection.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_WHOAMI_History,0,ACL_WHOAMI_Tips,0,aclCommand,2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,ACL_WHOAMI_Keyspecs,0,NULL,0)},\n{0}\n};\n\n/********** ACL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ACL history */\n#define ACL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ACL tips */\n#define ACL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ACL key specs */\n#define ACL_Keyspecs NULL\n#endif\n\n/********** BGREWRITEAOF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BGREWRITEAOF history */\n#define BGREWRITEAOF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BGREWRITEAOF tips */\n#define BGREWRITEAOF_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BGREWRITEAOF key specs */\n#define BGREWRITEAOF_Keyspecs NULL\n#endif\n\n/********** BGSAVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BGSAVE history */\ncommandHistory BGSAVE_History[] = {\n{\"3.2.2\",\"Added the `SCHEDULE` option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BGSAVE tips */\n#define BGSAVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BGSAVE key specs */\n#define BGSAVE_Keyspecs NULL\n#endif\n\n/* BGSAVE argument table */\nstruct COMMAND_ARG BGSAVE_Args[] = {\n{MAKE_ARG(\"schedule\",ARG_TYPE_PURE_TOKEN,-1,\"SCHEDULE\",NULL,\"3.2.2\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** COMMAND COUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND COUNT history */\n#define COMMAND_COUNT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND COUNT tips */\n#define 
COMMAND_COUNT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND COUNT key specs */\n#define COMMAND_COUNT_Keyspecs NULL\n#endif\n\n/********** COMMAND DOCS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND DOCS history */\n#define COMMAND_DOCS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND DOCS tips */\nconst char *COMMAND_DOCS_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND DOCS key specs */\n#define COMMAND_DOCS_Keyspecs NULL\n#endif\n\n/* COMMAND DOCS argument table */\nstruct COMMAND_ARG COMMAND_DOCS_Args[] = {\n{MAKE_ARG(\"command-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** COMMAND GETKEYS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND GETKEYS history */\n#define COMMAND_GETKEYS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND GETKEYS tips */\n#define COMMAND_GETKEYS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND GETKEYS key specs */\n#define COMMAND_GETKEYS_Keyspecs NULL\n#endif\n\n/* COMMAND GETKEYS argument table */\nstruct COMMAND_ARG COMMAND_GETKEYS_Args[] = {\n{MAKE_ARG(\"command\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** COMMAND GETKEYSANDFLAGS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND GETKEYSANDFLAGS history */\n#define COMMAND_GETKEYSANDFLAGS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND GETKEYSANDFLAGS tips */\n#define COMMAND_GETKEYSANDFLAGS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND GETKEYSANDFLAGS key specs */\n#define COMMAND_GETKEYSANDFLAGS_Keyspecs NULL\n#endif\n\n/* COMMAND GETKEYSANDFLAGS argument table */\nstruct COMMAND_ARG COMMAND_GETKEYSANDFLAGS_Args[] = 
{\n{MAKE_ARG(\"command\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** COMMAND HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND HELP history */\n#define COMMAND_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND HELP tips */\n#define COMMAND_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND HELP key specs */\n#define COMMAND_HELP_Keyspecs NULL\n#endif\n\n/********** COMMAND INFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND INFO history */\ncommandHistory COMMAND_INFO_History[] = {\n{\"7.0.0\",\"Allowed to be called with no argument to get info on all commands.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND INFO tips */\nconst char *COMMAND_INFO_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND INFO key specs */\n#define COMMAND_INFO_Keyspecs NULL\n#endif\n\n/* COMMAND INFO argument table */\nstruct COMMAND_ARG COMMAND_INFO_Args[] = {\n{MAKE_ARG(\"command-name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** COMMAND LIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND LIST history */\n#define COMMAND_LIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND LIST tips */\nconst char *COMMAND_LIST_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND LIST key specs */\n#define COMMAND_LIST_Keyspecs NULL\n#endif\n\n/* COMMAND LIST filterby argument table */\nstruct COMMAND_ARG COMMAND_LIST_filterby_Subargs[] = 
{\n{MAKE_ARG(\"module-name\",ARG_TYPE_STRING,-1,\"MODULE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"category\",ARG_TYPE_STRING,-1,\"ACLCAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,\"PATTERN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* COMMAND LIST argument table */\nstruct COMMAND_ARG COMMAND_LIST_Args[] = {\n{MAKE_ARG(\"filterby\",ARG_TYPE_ONEOF,-1,\"FILTERBY\",NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=COMMAND_LIST_filterby_Subargs},\n};\n\n/* COMMAND command table */\nstruct COMMAND_STRUCT COMMAND_Subcommands[] = {\n{MAKE_CMD(\"count\",\"Returns a count of commands.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_COUNT_History,0,COMMAND_COUNT_Tips,0,commandCountCommand,2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_COUNT_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"docs\",\"Returns documentary information about one, multiple or all commands.\",\"O(N) where N is the number of commands to look up\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_DOCS_History,0,COMMAND_DOCS_Tips,1,commandDocsCommand,-2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_DOCS_Keyspecs,0,NULL,1),.args=COMMAND_DOCS_Args},\n{MAKE_CMD(\"getkeys\",\"Extracts the key names from an arbitrary command.\",\"O(N) where N is the number of arguments to the command\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_GETKEYS_History,0,COMMAND_GETKEYS_Tips,0,commandGetKeysCommand,-3,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_GETKEYS_Keyspecs,0,NULL,2),.args=COMMAND_GETKEYS_Args},\n{MAKE_CMD(\"getkeysandflags\",\"Extracts the key names and access flags for an arbitrary command.\",\"O(N) where N is the number of arguments to the 
command\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_GETKEYSANDFLAGS_History,0,COMMAND_GETKEYSANDFLAGS_Tips,0,commandGetKeysAndFlagsCommand,-3,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_GETKEYSANDFLAGS_Keyspecs,0,NULL,2),.args=COMMAND_GETKEYSANDFLAGS_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_HELP_History,0,COMMAND_HELP_Tips,0,commandHelpCommand,2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"info\",\"Returns information about one, multiple or all commands.\",\"O(N) where N is the number of commands to look up\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_INFO_History,1,COMMAND_INFO_Tips,1,commandInfoCommand,-2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_INFO_Keyspecs,0,NULL,1),.args=COMMAND_INFO_Args},\n{MAKE_CMD(\"list\",\"Returns a list of command names.\",\"O(N) where N is the total number of Redis commands\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_LIST_History,0,COMMAND_LIST_Tips,1,commandListCommand,-2,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_LIST_Keyspecs,0,NULL,1),.args=COMMAND_LIST_Args},\n{0}\n};\n\n/********** COMMAND ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* COMMAND history */\n#define COMMAND_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* COMMAND tips */\nconst char *COMMAND_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* COMMAND key specs */\n#define COMMAND_Keyspecs NULL\n#endif\n\n/********** CONFIG GET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG GET history */\ncommandHistory CONFIG_GET_History[] = {\n{\"7.0.0\",\"Added the ability to pass multiple pattern parameters in one call\"},\n};\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* CONFIG GET tips */\n#define CONFIG_GET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG GET key specs */\n#define CONFIG_GET_Keyspecs NULL\n#endif\n\n/* CONFIG GET argument table */\nstruct COMMAND_ARG CONFIG_GET_Args[] = {\n{MAKE_ARG(\"parameter\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** CONFIG HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG HELP history */\n#define CONFIG_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CONFIG HELP tips */\n#define CONFIG_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG HELP key specs */\n#define CONFIG_HELP_Keyspecs NULL\n#endif\n\n/********** CONFIG RESETSTAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG RESETSTAT history */\n#define CONFIG_RESETSTAT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CONFIG RESETSTAT tips */\nconst char *CONFIG_RESETSTAT_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG RESETSTAT key specs */\n#define CONFIG_RESETSTAT_Keyspecs NULL\n#endif\n\n/********** CONFIG REWRITE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG REWRITE history */\n#define CONFIG_REWRITE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CONFIG REWRITE tips */\nconst char *CONFIG_REWRITE_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG REWRITE key specs */\n#define CONFIG_REWRITE_Keyspecs NULL\n#endif\n\n/********** CONFIG SET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG SET history */\ncommandHistory CONFIG_SET_History[] = {\n{\"7.0.0\",\"Added the ability to set multiple parameters in one call.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CONFIG SET tips */\nconst char *CONFIG_SET_Tips[] = 
{\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG SET key specs */\n#define CONFIG_SET_Keyspecs NULL\n#endif\n\n/* CONFIG SET data argument table */\nstruct COMMAND_ARG CONFIG_SET_data_Subargs[] = {\n{MAKE_ARG(\"parameter\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* CONFIG SET argument table */\nstruct COMMAND_ARG CONFIG_SET_Args[] = {\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=CONFIG_SET_data_Subargs},\n};\n\n/* CONFIG command table */\nstruct COMMAND_STRUCT CONFIG_Subcommands[] = {\n{MAKE_CMD(\"get\",\"Returns the effective values of configuration parameters.\",\"O(N) when N is the number of configuration parameters provided\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_GET_History,1,CONFIG_GET_Tips,0,configGetCommand,-3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,CONFIG_GET_Keyspecs,0,NULL,1),.args=CONFIG_GET_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_HELP_History,0,CONFIG_HELP_Tips,0,configHelpCommand,2,CMD_LOADING|CMD_STALE,0,CONFIG_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"resetstat\",\"Resets the server's statistics.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_RESETSTAT_History,0,CONFIG_RESETSTAT_Tips,2,configResetStatCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,CONFIG_RESETSTAT_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"rewrite\",\"Persists the effective configuration to file.\",\"O(1)\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_REWRITE_History,0,CONFIG_REWRITE_Tips,2,configRewriteCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,CONFIG_REWRITE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"set\",\"Sets configuration 
parameters in-flight.\",\"O(N) when N is the number of configuration parameters provided\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_SET_History,1,CONFIG_SET_Tips,2,configSetCommand,-4,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,CONFIG_SET_Keyspecs,0,NULL,1),.args=CONFIG_SET_Args},\n{0}\n};\n\n/********** CONFIG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* CONFIG history */\n#define CONFIG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* CONFIG tips */\n#define CONFIG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* CONFIG key specs */\n#define CONFIG_Keyspecs NULL\n#endif\n\n/********** DBSIZE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DBSIZE history */\n#define DBSIZE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DBSIZE tips */\nconst char *DBSIZE_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DBSIZE key specs */\n#define DBSIZE_Keyspecs NULL\n#endif\n\n/********** DEBUG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DEBUG history */\n#define DEBUG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DEBUG tips */\n#define DEBUG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DEBUG key specs */\n#define DEBUG_Keyspecs NULL\n#endif\n\n/********** FAILOVER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FAILOVER history */\n#define FAILOVER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FAILOVER tips */\n#define FAILOVER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FAILOVER key specs */\n#define FAILOVER_Keyspecs NULL\n#endif\n\n/* FAILOVER target argument table */\nstruct COMMAND_ARG FAILOVER_target_Subargs[] = 
{\n{MAKE_ARG(\"host\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"force\",ARG_TYPE_PURE_TOKEN,-1,\"FORCE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* FAILOVER argument table */\nstruct COMMAND_ARG FAILOVER_Args[] = {\n{MAKE_ARG(\"target\",ARG_TYPE_BLOCK,-1,\"TO\",NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=FAILOVER_target_Subargs},\n{MAKE_ARG(\"abort\",ARG_TYPE_PURE_TOKEN,-1,\"ABORT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"TIMEOUT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** FLUSHALL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FLUSHALL history */\ncommandHistory FLUSHALL_History[] = {\n{\"4.0.0\",\"Added the `ASYNC` flushing mode modifier.\"},\n{\"6.2.0\",\"Added the `SYNC` flushing mode modifier.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FLUSHALL tips */\nconst char *FLUSHALL_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FLUSHALL key specs */\n#define FLUSHALL_Keyspecs NULL\n#endif\n\n/* FLUSHALL flush_type argument table */\nstruct COMMAND_ARG FLUSHALL_flush_type_Subargs[] = {\n{MAKE_ARG(\"async\",ARG_TYPE_PURE_TOKEN,-1,\"ASYNC\",NULL,\"4.0.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sync\",ARG_TYPE_PURE_TOKEN,-1,\"SYNC\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* FLUSHALL argument table */\nstruct COMMAND_ARG FLUSHALL_Args[] = {\n{MAKE_ARG(\"flush-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=FLUSHALL_flush_type_Subargs},\n};\n\n/********** FLUSHDB ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* FLUSHDB history */\ncommandHistory FLUSHDB_History[] = {\n{\"4.0.0\",\"Added the `ASYNC` flushing mode modifier.\"},\n{\"6.2.0\",\"Added the `SYNC` flushing mode modifier.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* FLUSHDB tips */\nconst char 
*FLUSHDB_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* FLUSHDB key specs */\n#define FLUSHDB_Keyspecs NULL\n#endif\n\n/* FLUSHDB flush_type argument table */\nstruct COMMAND_ARG FLUSHDB_flush_type_Subargs[] = {\n{MAKE_ARG(\"async\",ARG_TYPE_PURE_TOKEN,-1,\"ASYNC\",NULL,\"4.0.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sync\",ARG_TYPE_PURE_TOKEN,-1,\"SYNC\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* FLUSHDB argument table */\nstruct COMMAND_ARG FLUSHDB_Args[] = {\n{MAKE_ARG(\"flush-type\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=FLUSHDB_flush_type_Subargs},\n};\n\n/********** HOTKEYS GET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS GET history */\n#define HOTKEYS_GET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS GET tips */\nconst char *HOTKEYS_GET_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:special\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS GET key specs */\n#define HOTKEYS_GET_Keyspecs NULL\n#endif\n\n/********** HOTKEYS HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS HELP history */\n#define HOTKEYS_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS HELP tips */\n#define HOTKEYS_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS HELP key specs */\n#define HOTKEYS_HELP_Keyspecs NULL\n#endif\n\n/********** HOTKEYS RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS RESET history */\n#define HOTKEYS_RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS RESET tips */\nconst char *HOTKEYS_RESET_Tips[] = {\n\"request_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS RESET key specs */\n#define HOTKEYS_RESET_Keyspecs NULL\n#endif\n\n/********** HOTKEYS START ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS 
START history */\n#define HOTKEYS_START_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS START tips */\nconst char *HOTKEYS_START_Tips[] = {\n\"request_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS START key specs */\n#define HOTKEYS_START_Keyspecs NULL\n#endif\n\n/* HOTKEYS START metrics argument table */\nstruct COMMAND_ARG HOTKEYS_START_metrics_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"cpu\",ARG_TYPE_PURE_TOKEN,-1,\"CPU\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"net\",ARG_TYPE_PURE_TOKEN,-1,\"NET\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* HOTKEYS START slots argument table */\nstruct COMMAND_ARG HOTKEYS_START_slots_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"slot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* HOTKEYS START argument table */\nstruct COMMAND_ARG HOTKEYS_START_Args[] = {\n{MAKE_ARG(\"metrics\",ARG_TYPE_BLOCK,-1,\"METRICS\",NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=HOTKEYS_START_metrics_Subargs},\n{MAKE_ARG(\"k\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"DURATION\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"ratio\",ARG_TYPE_INTEGER,-1,\"SAMPLE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"slots\",ARG_TYPE_BLOCK,-1,\"SLOTS\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=HOTKEYS_START_slots_Subargs},\n};\n\n/********** HOTKEYS STOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS STOP history */\n#define HOTKEYS_STOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS STOP tips */\nconst char *HOTKEYS_STOP_Tips[] = {\n\"request_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS STOP key specs */\n#define HOTKEYS_STOP_Keyspecs NULL\n#endif\n\n/* HOTKEYS command table */\nstruct COMMAND_STRUCT 
HOTKEYS_Subcommands[] = {\n{MAKE_CMD(\"get\",\"Returns lists of the top K hotkeys for the metrics chosen in the HOTKEYS START command.\",\"O(K) where K is the number of hotkeys returned.\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_GET_History,0,HOTKEYS_GET_Tips,3,hotkeysCommand,2,CMD_ADMIN|CMD_NOSCRIPT,0,HOTKEYS_GET_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"help\",\"Returns helpful text about the HOTKEYS command parameters.\",\"O(1)\",\"8.6.1\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_HELP_History,0,HOTKEYS_HELP_Tips,0,hotkeysCommand,2,CMD_LOADING|CMD_STALE,0,HOTKEYS_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"reset\",\"Releases the resources used for hotkey tracking.\",\"O(1)\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_RESET_History,0,HOTKEYS_RESET_Tips,1,hotkeysCommand,2,CMD_ADMIN|CMD_NOSCRIPT,0,HOTKEYS_RESET_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"start\",\"Starts hotkey tracking.\",\"O(1)\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_START_History,0,HOTKEYS_START_Tips,1,hotkeysCommand,-2,CMD_ADMIN|CMD_NOSCRIPT,0,HOTKEYS_START_Keyspecs,0,NULL,5),.args=HOTKEYS_START_Args},\n{MAKE_CMD(\"stop\",\"Stops hotkey tracking.\",\"O(1)\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_STOP_History,0,HOTKEYS_STOP_Tips,1,hotkeysCommand,2,CMD_ADMIN|CMD_NOSCRIPT,0,HOTKEYS_STOP_Keyspecs,0,NULL,0)},\n{0}\n};\n\n/********** HOTKEYS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* HOTKEYS history */\n#define HOTKEYS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* HOTKEYS tips */\n#define HOTKEYS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* HOTKEYS key specs */\n#define HOTKEYS_Keyspecs NULL\n#endif\n\n/********** INFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* INFO history */\ncommandHistory INFO_History[] = {\n{\"7.0.0\",\"Added support for taking multiple section arguments.\"},\n};\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* INFO tips */\nconst char *INFO_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* INFO key specs */\n#define INFO_Keyspecs NULL\n#endif\n\n/* INFO argument table */\nstruct COMMAND_ARG INFO_Args[] = {\n{MAKE_ARG(\"section\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** LASTSAVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LASTSAVE history */\n#define LASTSAVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LASTSAVE tips */\nconst char *LASTSAVE_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LASTSAVE key specs */\n#define LASTSAVE_Keyspecs NULL\n#endif\n\n/********** LATENCY DOCTOR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY DOCTOR history */\n#define LATENCY_DOCTOR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY DOCTOR tips */\nconst char *LATENCY_DOCTOR_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_nodes\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY DOCTOR key specs */\n#define LATENCY_DOCTOR_Keyspecs NULL\n#endif\n\n/********** LATENCY GRAPH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY GRAPH history */\n#define LATENCY_GRAPH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY GRAPH tips */\nconst char *LATENCY_GRAPH_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_nodes\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY GRAPH key specs */\n#define LATENCY_GRAPH_Keyspecs NULL\n#endif\n\n/* LATENCY GRAPH argument table */\nstruct COMMAND_ARG LATENCY_GRAPH_Args[] = {\n{MAKE_ARG(\"event\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LATENCY HELP ********************/\n\n#ifndef 
SKIP_CMD_HISTORY_TABLE\n/* LATENCY HELP history */\n#define LATENCY_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY HELP tips */\n#define LATENCY_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY HELP key specs */\n#define LATENCY_HELP_Keyspecs NULL\n#endif\n\n/********** LATENCY HISTOGRAM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY HISTOGRAM history */\n#define LATENCY_HISTOGRAM_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY HISTOGRAM tips */\nconst char *LATENCY_HISTOGRAM_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_nodes\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY HISTOGRAM key specs */\n#define LATENCY_HISTOGRAM_Keyspecs NULL\n#endif\n\n/* LATENCY HISTOGRAM argument table */\nstruct COMMAND_ARG LATENCY_HISTOGRAM_Args[] = {\n{MAKE_ARG(\"command\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** LATENCY HISTORY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY HISTORY history */\n#define LATENCY_HISTORY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY HISTORY tips */\nconst char *LATENCY_HISTORY_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_nodes\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY HISTORY key specs */\n#define LATENCY_HISTORY_Keyspecs NULL\n#endif\n\n/* LATENCY HISTORY argument table */\nstruct COMMAND_ARG LATENCY_HISTORY_Args[] = {\n{MAKE_ARG(\"event\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LATENCY LATEST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY LATEST history */\n#define LATENCY_LATEST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY LATEST tips */\nconst char *LATENCY_LATEST_Tips[] = 
{\n\"nondeterministic_output\",\n\"request_policy:all_nodes\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY LATEST key specs */\n#define LATENCY_LATEST_Keyspecs NULL\n#endif\n\n/********** LATENCY RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY RESET history */\n#define LATENCY_RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY RESET tips */\nconst char *LATENCY_RESET_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:agg_sum\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY RESET key specs */\n#define LATENCY_RESET_Keyspecs NULL\n#endif\n\n/* LATENCY RESET argument table */\nstruct COMMAND_ARG LATENCY_RESET_Args[] = {\n{MAKE_ARG(\"event\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* LATENCY command table */\nstruct COMMAND_STRUCT LATENCY_Subcommands[] = {\n{MAKE_CMD(\"doctor\",\"Returns a human-readable latency analysis report.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_DOCTOR_History,0,LATENCY_DOCTOR_Tips,3,latencyCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_DOCTOR_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"graph\",\"Returns a latency graph for an event.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_GRAPH_History,0,LATENCY_GRAPH_Tips,3,latencyCommand,3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_GRAPH_Keyspecs,0,NULL,1),.args=LATENCY_GRAPH_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_HELP_History,0,LATENCY_HELP_Tips,0,latencyCommand,2,CMD_LOADING|CMD_STALE,0,LATENCY_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"histogram\",\"Returns the cumulative distribution of latencies of a subset or all commands.\",\"O(N) where N is the number of commands with latency information being 
retrieved.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_HISTOGRAM_History,0,LATENCY_HISTOGRAM_Tips,3,latencyCommand,-2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_HISTOGRAM_Keyspecs,0,NULL,1),.args=LATENCY_HISTOGRAM_Args},\n{MAKE_CMD(\"history\",\"Returns timestamp-latency samples for an event.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_HISTORY_History,0,LATENCY_HISTORY_Tips,3,latencyCommand,3,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_HISTORY_Keyspecs,0,NULL,1),.args=LATENCY_HISTORY_Args},\n{MAKE_CMD(\"latest\",\"Returns the latest latency samples for all events.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_LATEST_History,0,LATENCY_LATEST_Tips,3,latencyCommand,2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_LATEST_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"reset\",\"Resets the latency data for one or more events.\",\"O(1)\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_RESET_History,0,LATENCY_RESET_Tips,2,latencyCommand,-2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,LATENCY_RESET_Keyspecs,0,NULL,1),.args=LATENCY_RESET_Args},\n{0}\n};\n\n/********** LATENCY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LATENCY history */\n#define LATENCY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LATENCY tips */\n#define LATENCY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LATENCY key specs */\n#define LATENCY_Keyspecs NULL\n#endif\n\n/********** LOLWUT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LOLWUT history */\n#define LOLWUT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LOLWUT tips */\n#define LOLWUT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LOLWUT key specs */\n#define LOLWUT_Keyspecs NULL\n#endif\n\n/* LOLWUT argument table */\nstruct COMMAND_ARG LOLWUT_Args[] = 
{\n{MAKE_ARG(\"version\",ARG_TYPE_INTEGER,-1,\"VERSION\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** MEMORY DOCTOR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY DOCTOR history */\n#define MEMORY_DOCTOR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY DOCTOR tips */\nconst char *MEMORY_DOCTOR_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY DOCTOR key specs */\n#define MEMORY_DOCTOR_Keyspecs NULL\n#endif\n\n/********** MEMORY HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY HELP history */\n#define MEMORY_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY HELP tips */\n#define MEMORY_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY HELP key specs */\n#define MEMORY_HELP_Keyspecs NULL\n#endif\n\n/********** MEMORY MALLOC_STATS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY MALLOC_STATS history */\n#define MEMORY_MALLOC_STATS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY MALLOC_STATS tips */\nconst char *MEMORY_MALLOC_STATS_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY MALLOC_STATS key specs */\n#define MEMORY_MALLOC_STATS_Keyspecs NULL\n#endif\n\n/********** MEMORY PURGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY PURGE history */\n#define MEMORY_PURGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY PURGE tips */\nconst char *MEMORY_PURGE_Tips[] = {\n\"request_policy:all_shards\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY PURGE key specs */\n#define MEMORY_PURGE_Keyspecs NULL\n#endif\n\n/********** MEMORY STATS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY STATS history */\n#define 
MEMORY_STATS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY STATS tips */\nconst char *MEMORY_STATS_Tips[] = {\n\"nondeterministic_output\",\n\"request_policy:all_shards\",\n\"response_policy:special\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY STATS key specs */\n#define MEMORY_STATS_Keyspecs NULL\n#endif\n\n/********** MEMORY USAGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY USAGE history */\n#define MEMORY_USAGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY USAGE tips */\n#define MEMORY_USAGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY USAGE key specs */\nkeySpec MEMORY_USAGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* MEMORY USAGE argument table */\nstruct COMMAND_ARG MEMORY_USAGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"SAMPLES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* MEMORY command table */\nstruct COMMAND_STRUCT MEMORY_Subcommands[] = {\n{MAKE_CMD(\"doctor\",\"Outputs a memory problems report.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_DOCTOR_History,0,MEMORY_DOCTOR_Tips,3,memoryCommand,2,0,0,MEMORY_DOCTOR_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_HELP_History,0,MEMORY_HELP_Tips,0,memoryCommand,2,CMD_LOADING|CMD_STALE,0,MEMORY_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"malloc-stats\",\"Returns the allocator statistics.\",\"Depends on how much memory is allocated, could be slow\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_MALLOC_STATS_History,0,MEMORY_MALLOC_STATS_Tips,3,memoryCommand,2,0,0,MEMORY_MALLOC_STATS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"purge\",\"Asks the allocator to release memory.\",\"Depends on 
how much memory is allocated, could be slow\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_PURGE_History,0,MEMORY_PURGE_Tips,2,memoryCommand,2,0,0,MEMORY_PURGE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"stats\",\"Returns details about memory usage.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_STATS_History,0,MEMORY_STATS_Tips,3,memoryCommand,2,0,0,MEMORY_STATS_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"usage\",\"Estimates the memory usage of a key.\",\"O(N) where N is the number of samples.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_USAGE_History,0,MEMORY_USAGE_Tips,0,memoryCommand,-3,CMD_READONLY,0,MEMORY_USAGE_Keyspecs,1,NULL,2),.args=MEMORY_USAGE_Args},\n{0}\n};\n\n/********** MEMORY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MEMORY history */\n#define MEMORY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MEMORY tips */\n#define MEMORY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MEMORY key specs */\n#define MEMORY_Keyspecs NULL\n#endif\n\n/********** MODULE HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE HELP history */\n#define MODULE_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MODULE HELP tips */\n#define MODULE_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE HELP key specs */\n#define MODULE_HELP_Keyspecs NULL\n#endif\n\n/********** MODULE LIST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE LIST history */\n#define MODULE_LIST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MODULE LIST tips */\nconst char *MODULE_LIST_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE LIST key specs */\n#define MODULE_LIST_Keyspecs NULL\n#endif\n\n/********** MODULE LOAD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE LOAD history */\n#define MODULE_LOAD_History NULL\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* MODULE LOAD tips */\n#define MODULE_LOAD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE LOAD key specs */\n#define MODULE_LOAD_Keyspecs NULL\n#endif\n\n/* MODULE LOAD argument table */\nstruct COMMAND_ARG MODULE_LOAD_Args[] = {\n{MAKE_ARG(\"path\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"arg\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** MODULE LOADEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE LOADEX history */\n#define MODULE_LOADEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MODULE LOADEX tips */\n#define MODULE_LOADEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE LOADEX key specs */\n#define MODULE_LOADEX_Keyspecs NULL\n#endif\n\n/* MODULE LOADEX configs argument table */\nstruct COMMAND_ARG MODULE_LOADEX_configs_Subargs[] = {\n{MAKE_ARG(\"name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MODULE LOADEX argument table */\nstruct COMMAND_ARG MODULE_LOADEX_Args[] = {\n{MAKE_ARG(\"path\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"configs\",ARG_TYPE_BLOCK,-1,\"CONFIG\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE|CMD_ARG_MULTIPLE_TOKEN,2,NULL),.subargs=MODULE_LOADEX_configs_Subargs},\n{MAKE_ARG(\"args\",ARG_TYPE_STRING,-1,\"ARGS\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** MODULE UNLOAD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE UNLOAD history */\n#define MODULE_UNLOAD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MODULE UNLOAD tips */\n#define MODULE_UNLOAD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE UNLOAD key specs */\n#define MODULE_UNLOAD_Keyspecs NULL\n#endif\n\n/* MODULE UNLOAD argument table */\nstruct COMMAND_ARG MODULE_UNLOAD_Args[] = 
{\n{MAKE_ARG(\"name\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MODULE command table */\nstruct COMMAND_STRUCT MODULE_Subcommands[] = {\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_HELP_History,0,MODULE_HELP_Tips,0,moduleCommand,2,CMD_LOADING|CMD_STALE,0,MODULE_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"list\",\"Returns all loaded modules.\",\"O(N) where N is the number of loaded modules.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_LIST_History,0,MODULE_LIST_Tips,1,moduleCommand,2,CMD_ADMIN|CMD_NOSCRIPT,0,MODULE_LIST_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"load\",\"Loads a module.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_LOAD_History,0,MODULE_LOAD_Tips,0,moduleCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_PROTECTED,0,MODULE_LOAD_Keyspecs,0,NULL,2),.args=MODULE_LOAD_Args},\n{MAKE_CMD(\"loadex\",\"Loads a module using extended parameters.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_LOADEX_History,0,MODULE_LOADEX_Tips,0,moduleCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_PROTECTED,0,MODULE_LOADEX_Keyspecs,0,NULL,3),.args=MODULE_LOADEX_Args},\n{MAKE_CMD(\"unload\",\"Unloads a module.\",\"O(1)\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_UNLOAD_History,0,MODULE_UNLOAD_Tips,0,moduleCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_PROTECTED,0,MODULE_UNLOAD_Keyspecs,0,NULL,1),.args=MODULE_UNLOAD_Args},\n{0}\n};\n\n/********** MODULE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MODULE history */\n#define MODULE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MODULE tips */\n#define MODULE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MODULE key specs */\n#define MODULE_Keyspecs NULL\n#endif\n\n/********** MONITOR 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MONITOR history */\n#define MONITOR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MONITOR tips */\n#define MONITOR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MONITOR key specs */\n#define MONITOR_Keyspecs NULL\n#endif\n\n/********** PSYNC ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PSYNC history */\n#define PSYNC_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PSYNC tips */\n#define PSYNC_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PSYNC key specs */\n#define PSYNC_Keyspecs NULL\n#endif\n\n/* PSYNC argument table */\nstruct COMMAND_ARG PSYNC_Args[] = {\n{MAKE_ARG(\"replicationid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** REPLCONF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* REPLCONF history */\n#define REPLCONF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* REPLCONF tips */\n#define REPLCONF_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* REPLCONF key specs */\n#define REPLCONF_Keyspecs NULL\n#endif\n\n/********** REPLICAOF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* REPLICAOF history */\n#define REPLICAOF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* REPLICAOF tips */\n#define REPLICAOF_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* REPLICAOF key specs */\n#define REPLICAOF_Keyspecs NULL\n#endif\n\n/* REPLICAOF args host_port argument table */\nstruct COMMAND_ARG REPLICAOF_args_host_port_Subargs[] = {\n{MAKE_ARG(\"host\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* REPLICAOF args no_one argument table */\nstruct COMMAND_ARG REPLICAOF_args_no_one_Subargs[] = 
{\n{MAKE_ARG(\"no\",ARG_TYPE_PURE_TOKEN,-1,\"NO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"one\",ARG_TYPE_PURE_TOKEN,-1,\"ONE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* REPLICAOF args argument table */\nstruct COMMAND_ARG REPLICAOF_args_Subargs[] = {\n{MAKE_ARG(\"host-port\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=REPLICAOF_args_host_port_Subargs},\n{MAKE_ARG(\"no-one\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=REPLICAOF_args_no_one_Subargs},\n};\n\n/* REPLICAOF argument table */\nstruct COMMAND_ARG REPLICAOF_Args[] = {\n{MAKE_ARG(\"args\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=REPLICAOF_args_Subargs},\n};\n\n/********** RESTORE_ASKING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* RESTORE_ASKING history */\ncommandHistory RESTORE_ASKING_History[] = {\n{\"3.0.0\",\"Added the `REPLACE` modifier.\"},\n{\"5.0.0\",\"Added the `ABSTTL` modifier.\"},\n{\"5.0.0\",\"Added the `IDLETIME` and `FREQ` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* RESTORE_ASKING tips */\n#define RESTORE_ASKING_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* RESTORE_ASKING key specs */\nkeySpec RESTORE_ASKING_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* RESTORE_ASKING argument table */\nstruct COMMAND_ARG RESTORE_ASKING_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ttl\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"serialized-value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"replace\",ARG_TYPE_PURE_TOKEN,-1,\"REPLACE\",NULL,\"3.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"absttl\",ARG_TYPE_PURE_TOKEN,-1,\"ABSTTL\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"IDLETIME\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"frequency\",ARG_TYPE_INTEGER,-1,\"FREQ\",NULL,\"5.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ROLE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ROLE history */\n#define ROLE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ROLE tips */\n#define ROLE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ROLE key specs */\n#define ROLE_Keyspecs NULL\n#endif\n\n/********** SAVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SAVE history */\n#define SAVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SAVE tips */\n#define SAVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SAVE key specs */\n#define SAVE_Keyspecs NULL\n#endif\n\n/********** SHUTDOWN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SHUTDOWN history */\ncommandHistory SHUTDOWN_History[] = {\n{\"7.0.0\",\"Added the `NOW`, `FORCE` and `ABORT` modifiers.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SHUTDOWN tips */\n#define SHUTDOWN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SHUTDOWN key specs */\n#define SHUTDOWN_Keyspecs NULL\n#endif\n\n/* SHUTDOWN save_selector argument table */\nstruct COMMAND_ARG SHUTDOWN_save_selector_Subargs[] = {\n{MAKE_ARG(\"nosave\",ARG_TYPE_PURE_TOKEN,-1,\"NOSAVE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"save\",ARG_TYPE_PURE_TOKEN,-1,\"SAVE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SHUTDOWN argument table */\nstruct COMMAND_ARG 
SHUTDOWN_Args[] = {\n{MAKE_ARG(\"save-selector\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=SHUTDOWN_save_selector_Subargs},\n{MAKE_ARG(\"now\",ARG_TYPE_PURE_TOKEN,-1,\"NOW\",NULL,\"7.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"force\",ARG_TYPE_PURE_TOKEN,-1,\"FORCE\",NULL,\"7.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"abort\",ARG_TYPE_PURE_TOKEN,-1,\"ABORT\",NULL,\"7.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SLAVEOF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLAVEOF history */\n#define SLAVEOF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLAVEOF tips */\n#define SLAVEOF_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLAVEOF key specs */\n#define SLAVEOF_Keyspecs NULL\n#endif\n\n/* SLAVEOF args host_port argument table */\nstruct COMMAND_ARG SLAVEOF_args_host_port_Subargs[] = {\n{MAKE_ARG(\"host\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"port\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SLAVEOF args no_one argument table */\nstruct COMMAND_ARG SLAVEOF_args_no_one_Subargs[] = {\n{MAKE_ARG(\"no\",ARG_TYPE_PURE_TOKEN,-1,\"NO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"one\",ARG_TYPE_PURE_TOKEN,-1,\"ONE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* SLAVEOF args argument table */\nstruct COMMAND_ARG SLAVEOF_args_Subargs[] = {\n{MAKE_ARG(\"host-port\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=SLAVEOF_args_host_port_Subargs},\n{MAKE_ARG(\"no-one\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=SLAVEOF_args_no_one_Subargs},\n};\n\n/* SLAVEOF argument table */\nstruct COMMAND_ARG SLAVEOF_Args[] = {\n{MAKE_ARG(\"args\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=SLAVEOF_args_Subargs},\n};\n\n/********** SLOWLOG GET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLOWLOG GET history */\ncommandHistory SLOWLOG_GET_History[] = {\n{\"4.0.0\",\"Added client IP address, port and 
name to the reply.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLOWLOG GET tips */\nconst char *SLOWLOG_GET_Tips[] = {\n\"request_policy:all_nodes\",\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLOWLOG GET key specs */\n#define SLOWLOG_GET_Keyspecs NULL\n#endif\n\n/* SLOWLOG GET argument table */\nstruct COMMAND_ARG SLOWLOG_GET_Args[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SLOWLOG HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLOWLOG HELP history */\n#define SLOWLOG_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLOWLOG HELP tips */\n#define SLOWLOG_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLOWLOG HELP key specs */\n#define SLOWLOG_HELP_Keyspecs NULL\n#endif\n\n/********** SLOWLOG LEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLOWLOG LEN history */\n#define SLOWLOG_LEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLOWLOG LEN tips */\nconst char *SLOWLOG_LEN_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:agg_sum\",\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLOWLOG LEN key specs */\n#define SLOWLOG_LEN_Keyspecs NULL\n#endif\n\n/********** SLOWLOG RESET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLOWLOG RESET history */\n#define SLOWLOG_RESET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLOWLOG RESET tips */\nconst char *SLOWLOG_RESET_Tips[] = {\n\"request_policy:all_nodes\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLOWLOG RESET key specs */\n#define SLOWLOG_RESET_Keyspecs NULL\n#endif\n\n/* SLOWLOG command table */\nstruct COMMAND_STRUCT SLOWLOG_Subcommands[] = {\n{MAKE_CMD(\"get\",\"Returns the slow log's entries.\",\"O(N) where N is the number of entries 
returned\",\"2.2.12\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SLOWLOG_GET_History,1,SLOWLOG_GET_Tips,2,slowlogCommand,-2,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,SLOWLOG_GET_Keyspecs,0,NULL,1),.args=SLOWLOG_GET_Args},\n{MAKE_CMD(\"help\",\"Show helpful text about the different subcommands\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SLOWLOG_HELP_History,0,SLOWLOG_HELP_Tips,0,slowlogCommand,2,CMD_LOADING|CMD_STALE,0,SLOWLOG_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"len\",\"Returns the number of entries in the slow log.\",\"O(1)\",\"2.2.12\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SLOWLOG_LEN_History,0,SLOWLOG_LEN_Tips,3,slowlogCommand,2,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,SLOWLOG_LEN_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"reset\",\"Clears all entries from the slow log.\",\"O(N) where N is the number of entries in the slowlog\",\"2.2.12\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SLOWLOG_RESET_History,0,SLOWLOG_RESET_Tips,2,slowlogCommand,2,CMD_ADMIN|CMD_LOADING|CMD_STALE,0,SLOWLOG_RESET_Keyspecs,0,NULL,0)},\n{0}\n};\n\n/********** SLOWLOG ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SLOWLOG history */\n#define SLOWLOG_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SLOWLOG tips */\n#define SLOWLOG_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SLOWLOG key specs */\n#define SLOWLOG_Keyspecs NULL\n#endif\n\n/********** SWAPDB ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SWAPDB history */\n#define SWAPDB_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SWAPDB tips */\n#define SWAPDB_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SWAPDB key specs */\n#define SWAPDB_Keyspecs NULL\n#endif\n\n/* SWAPDB argument table */\nstruct COMMAND_ARG SWAPDB_Args[] = {\n{MAKE_ARG(\"index1\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"index2\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SYNC 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SYNC history */\n#define SYNC_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SYNC tips */\n#define SYNC_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SYNC key specs */\n#define SYNC_Keyspecs NULL\n#endif\n\n/********** TIME ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* TIME history */\n#define TIME_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* TIME tips */\nconst char *TIME_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* TIME key specs */\n#define TIME_Keyspecs NULL\n#endif\n\n/********** TRIMSLOTS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* TRIMSLOTS history */\n#define TRIMSLOTS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* TRIMSLOTS tips */\n#define TRIMSLOTS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* TRIMSLOTS key specs */\n#define TRIMSLOTS_Keyspecs NULL\n#endif\n\n/* TRIMSLOTS ranges slots argument table */\nstruct COMMAND_ARG TRIMSLOTS_ranges_slots_Subargs[] = {\n{MAKE_ARG(\"startslot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"endslot\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* TRIMSLOTS ranges argument table */\nstruct COMMAND_ARG TRIMSLOTS_ranges_Subargs[] = {\n{MAKE_ARG(\"numranges\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"slots\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=TRIMSLOTS_ranges_slots_Subargs},\n};\n\n/* TRIMSLOTS argument table */\nstruct COMMAND_ARG TRIMSLOTS_Args[] = {\n{MAKE_ARG(\"ranges\",ARG_TYPE_BLOCK,-1,\"RANGES\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=TRIMSLOTS_ranges_Subargs},\n};\n\n/********** SADD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SADD history */\ncommandHistory SADD_History[] = {\n{\"2.4.0\",\"Accepts multiple `member` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SADD tips */\n#define 
SADD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SADD key specs */\nkeySpec SADD_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SADD argument table */\nstruct COMMAND_ARG SADD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SCARD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SCARD history */\n#define SCARD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SCARD tips */\n#define SCARD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SCARD key specs */\nkeySpec SCARD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SCARD argument table */\nstruct COMMAND_ARG SCARD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SDIFF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SDIFF history */\n#define SDIFF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SDIFF tips */\nconst char *SDIFF_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SDIFF key specs */\nkeySpec SDIFF_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SDIFF argument table */\nstruct COMMAND_ARG SDIFF_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SDIFFSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SDIFFSTORE history */\n#define SDIFFSTORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SDIFFSTORE tips */\n#define SDIFFSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SDIFFSTORE key specs */\nkeySpec SDIFFSTORE_Keyspecs[2] = 
{\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SDIFFSTORE argument table */\nstruct COMMAND_ARG SDIFFSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SINTER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SINTER history */\n#define SINTER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SINTER tips */\nconst char *SINTER_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SINTER key specs */\nkeySpec SINTER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SINTER argument table */\nstruct COMMAND_ARG SINTER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SINTERCARD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SINTERCARD history */\n#define SINTERCARD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SINTERCARD tips */\n#define SINTERCARD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SINTERCARD key specs */\nkeySpec SINTERCARD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* SINTERCARD argument table */\nstruct COMMAND_ARG SINTERCARD_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_INTEGER,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SINTERSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SINTERSTORE history */\n#define SINTERSTORE_History NULL\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* SINTERSTORE tips */\n#define SINTERSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SINTERSTORE key specs */\nkeySpec SINTERSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SINTERSTORE argument table */\nstruct COMMAND_ARG SINTERSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SISMEMBER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SISMEMBER history */\n#define SISMEMBER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SISMEMBER tips */\n#define SISMEMBER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SISMEMBER key specs */\nkeySpec SISMEMBER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SISMEMBER argument table */\nstruct COMMAND_ARG SISMEMBER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SMEMBERS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SMEMBERS history */\n#define SMEMBERS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SMEMBERS tips */\nconst char *SMEMBERS_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SMEMBERS key specs */\nkeySpec SMEMBERS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SMEMBERS argument table */\nstruct COMMAND_ARG SMEMBERS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SMISMEMBER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* 
SMISMEMBER history */\n#define SMISMEMBER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SMISMEMBER tips */\n#define SMISMEMBER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SMISMEMBER key specs */\nkeySpec SMISMEMBER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SMISMEMBER argument table */\nstruct COMMAND_ARG SMISMEMBER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SMOVE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SMOVE history */\n#define SMOVE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SMOVE tips */\n#define SMOVE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SMOVE key specs */\nkeySpec SMOVE_Keyspecs[2] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SMOVE argument table */\nstruct COMMAND_ARG SMOVE_Args[] = {\n{MAKE_ARG(\"source\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SPOP history */\ncommandHistory SPOP_History[] = {\n{\"3.2.0\",\"Added the `count` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SPOP tips */\nconst char *SPOP_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SPOP key specs */\nkeySpec SPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SPOP argument table */\nstruct COMMAND_ARG 
SPOP_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,\"3.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SRANDMEMBER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SRANDMEMBER history */\ncommandHistory SRANDMEMBER_History[] = {\n{\"2.6.0\",\"Added the optional `count` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SRANDMEMBER tips */\nconst char *SRANDMEMBER_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SRANDMEMBER key specs */\nkeySpec SRANDMEMBER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SRANDMEMBER argument table */\nstruct COMMAND_ARG SRANDMEMBER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,\"2.6.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SREM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SREM history */\ncommandHistory SREM_History[] = {\n{\"2.4.0\",\"Accepts multiple `member` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SREM tips */\n#define SREM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SREM key specs */\nkeySpec SREM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SREM argument table */\nstruct COMMAND_ARG SREM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SSCAN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SSCAN history */\n#define SSCAN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SSCAN tips */\nconst char *SSCAN_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SSCAN key specs 
*/\nkeySpec SSCAN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SSCAN argument table */\nstruct COMMAND_ARG SSCAN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"cursor\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,\"MATCH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** SUNION ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SUNION history */\n#define SUNION_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SUNION tips */\nconst char *SUNION_Tips[] = {\n\"nondeterministic_output_order\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SUNION key specs */\nkeySpec SUNION_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SUNION argument table */\nstruct COMMAND_ARG SUNION_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** SUNIONSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SUNIONSTORE history */\n#define SUNIONSTORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SUNIONSTORE tips */\n#define SUNIONSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SUNIONSTORE key specs */\nkeySpec SUNIONSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* SUNIONSTORE argument table */\nstruct COMMAND_ARG SUNIONSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** BZMPOP 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BZMPOP history */\n#define BZMPOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BZMPOP tips */\n#define BZMPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BZMPOP key specs */\nkeySpec BZMPOP_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* BZMPOP where argument table */\nstruct COMMAND_ARG BZMPOP_where_Subargs[] = {\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* BZMPOP argument table */\nstruct COMMAND_ARG BZMPOP_Args[] = {\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"where\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=BZMPOP_where_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** BZPOPMAX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BZPOPMAX history */\ncommandHistory BZPOPMAX_History[] = {\n{\"6.0.0\",\"`timeout` is interpreted as a double instead of an integer.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BZPOPMAX tips */\n#define BZPOPMAX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BZPOPMAX key specs */\nkeySpec BZPOPMAX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-2,1,0}}\n};\n#endif\n\n/* BZPOPMAX argument table */\nstruct COMMAND_ARG BZPOPMAX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** BZPOPMIN 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* BZPOPMIN history */\ncommandHistory BZPOPMIN_History[] = {\n{\"6.0.0\",\"`timeout` is interpreted as a double instead of an integer.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* BZPOPMIN tips */\n#define BZPOPMIN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* BZPOPMIN key specs */\nkeySpec BZPOPMIN_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-2,1,0}}\n};\n#endif\n\n/* BZPOPMIN argument table */\nstruct COMMAND_ARG BZPOPMIN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"timeout\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZADD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZADD history */\ncommandHistory ZADD_History[] = {\n{\"2.4.0\",\"Accepts multiple elements.\"},\n{\"3.0.2\",\"Added the `XX`, `NX`, `CH` and `INCR` options.\"},\n{\"6.2.0\",\"Added the `GT` and `LT` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZADD tips */\n#define ZADD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZADD key specs */\nkeySpec ZADD_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZADD condition argument table */\nstruct COMMAND_ARG ZADD_condition_Subargs[] = {\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZADD comparison argument table */\nstruct COMMAND_ARG ZADD_comparison_Subargs[] = {\n{MAKE_ARG(\"gt\",ARG_TYPE_PURE_TOKEN,-1,\"GT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"lt\",ARG_TYPE_PURE_TOKEN,-1,\"LT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZADD data argument table */\nstruct COMMAND_ARG ZADD_data_Subargs[] = 
{\n{MAKE_ARG(\"score\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZADD argument table */\nstruct COMMAND_ARG ZADD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"3.0.2\",CMD_ARG_OPTIONAL,2,NULL),.subargs=ZADD_condition_Subargs},\n{MAKE_ARG(\"comparison\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=ZADD_comparison_Subargs},\n{MAKE_ARG(\"change\",ARG_TYPE_PURE_TOKEN,-1,\"CH\",NULL,\"3.0.2\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_PURE_TOKEN,-1,\"INCR\",NULL,\"3.0.2\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=ZADD_data_Subargs},\n};\n\n/********** ZCARD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZCARD history */\n#define ZCARD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZCARD tips */\n#define ZCARD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZCARD key specs */\nkeySpec ZCARD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZCARD argument table */\nstruct COMMAND_ARG ZCARD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZCOUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZCOUNT history */\n#define ZCOUNT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZCOUNT tips */\n#define ZCOUNT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZCOUNT key specs */\nkeySpec ZCOUNT_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZCOUNT argument table */\nstruct COMMAND_ARG ZCOUNT_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZDIFF ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZDIFF history */\n#define ZDIFF_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZDIFF tips */\n#define ZDIFF_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZDIFF key specs */\nkeySpec ZDIFF_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZDIFF argument table */\nstruct COMMAND_ARG ZDIFF_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZDIFFSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZDIFFSTORE history */\n#define ZDIFFSTORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZDIFFSTORE tips */\n#define ZDIFFSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZDIFFSTORE key specs */\nkeySpec ZDIFFSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZDIFFSTORE argument table */\nstruct COMMAND_ARG ZDIFFSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ZINCRBY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZINCRBY history */\n#define ZINCRBY_History NULL\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* ZINCRBY tips */\n#define ZINCRBY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZINCRBY key specs */\nkeySpec ZINCRBY_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZINCRBY argument table */\nstruct COMMAND_ARG ZINCRBY_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZINTER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZINTER history */\ncommandHistory ZINTER_History[] = {\n{\"8.8.0\",\"Added `COUNT` aggregate option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZINTER tips */\n#define ZINTER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZINTER key specs */\nkeySpec ZINTER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZINTER aggregate argument table */\nstruct COMMAND_ARG ZINTER_aggregate_Subargs[] = {\n{MAKE_ARG(\"sum\",ARG_TYPE_PURE_TOKEN,-1,\"SUM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_PURE_TOKEN,-1,\"COUNT\",NULL,\"8.8.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZINTER argument table */\nstruct COMMAND_ARG ZINTER_Args[] = 
{\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"weight\",ARG_TYPE_INTEGER,-1,\"WEIGHTS\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"aggregate\",ARG_TYPE_ONEOF,-1,\"AGGREGATE\",NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=ZINTER_aggregate_Subargs},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZINTERCARD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZINTERCARD history */\n#define ZINTERCARD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZINTERCARD tips */\n#define ZINTERCARD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZINTERCARD key specs */\nkeySpec ZINTERCARD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZINTERCARD argument table */\nstruct COMMAND_ARG ZINTERCARD_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_INTEGER,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZINTERSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZINTERSTORE history */\ncommandHistory ZINTERSTORE_History[] = {\n{\"8.8.0\",\"Added `COUNT` aggregate option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZINTERSTORE tips */\n#define ZINTERSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZINTERSTORE key specs */\nkeySpec ZINTERSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZINTERSTORE aggregate argument table */\nstruct COMMAND_ARG ZINTERSTORE_aggregate_Subargs[] = 
{\n{MAKE_ARG(\"sum\",ARG_TYPE_PURE_TOKEN,-1,\"SUM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_PURE_TOKEN,-1,\"COUNT\",NULL,\"8.8.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZINTERSTORE argument table */\nstruct COMMAND_ARG ZINTERSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"weight\",ARG_TYPE_INTEGER,-1,\"WEIGHTS\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"aggregate\",ARG_TYPE_ONEOF,-1,\"AGGREGATE\",NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=ZINTERSTORE_aggregate_Subargs},\n};\n\n/********** ZLEXCOUNT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZLEXCOUNT history */\n#define ZLEXCOUNT_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZLEXCOUNT tips */\n#define ZLEXCOUNT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZLEXCOUNT key specs */\nkeySpec ZLEXCOUNT_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZLEXCOUNT argument table */\nstruct COMMAND_ARG ZLEXCOUNT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZMPOP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZMPOP history */\n#define ZMPOP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZMPOP tips */\n#define ZMPOP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZMPOP key specs */\nkeySpec ZMPOP_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZMPOP where argument table */\nstruct COMMAND_ARG ZMPOP_where_Subargs[] = {\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZMPOP argument table */\nstruct COMMAND_ARG ZMPOP_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"where\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=ZMPOP_where_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZMSCORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZMSCORE history */\n#define ZMSCORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZMSCORE tips */\n#define ZMSCORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZMSCORE key specs */\nkeySpec ZMSCORE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZMSCORE argument table */\nstruct COMMAND_ARG ZMSCORE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ZPOPMAX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZPOPMAX history */\n#define ZPOPMAX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZPOPMAX tips */\n#define ZPOPMAX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZPOPMAX key specs */\nkeySpec ZPOPMAX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZPOPMAX argument table */\nstruct COMMAND_ARG ZPOPMAX_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZPOPMIN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZPOPMIN history */\n#define ZPOPMIN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZPOPMIN tips */\n#define ZPOPMIN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZPOPMIN key specs */\nkeySpec ZPOPMIN_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZPOPMIN argument table */\nstruct COMMAND_ARG ZPOPMIN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZRANDMEMBER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANDMEMBER history */\n#define ZRANDMEMBER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANDMEMBER tips */\nconst char *ZRANDMEMBER_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANDMEMBER key specs */\nkeySpec ZRANDMEMBER_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANDMEMBER options argument table */\nstruct COMMAND_ARG ZRANDMEMBER_options_Subargs[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* ZRANDMEMBER argument table */\nstruct COMMAND_ARG ZRANDMEMBER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"options\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANDMEMBER_options_Subargs},\n};\n\n/********** ZRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANGE history 
*/\ncommandHistory ZRANGE_History[] = {\n{\"6.2.0\",\"Added the `REV`, `BYSCORE`, `BYLEX` and `LIMIT` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANGE tips */\n#define ZRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANGE key specs */\nkeySpec ZRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANGE sortby argument table */\nstruct COMMAND_ARG ZRANGE_sortby_Subargs[] = {\n{MAKE_ARG(\"byscore\",ARG_TYPE_PURE_TOKEN,-1,\"BYSCORE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"bylex\",ARG_TYPE_PURE_TOKEN,-1,\"BYLEX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGE limit argument table */\nstruct COMMAND_ARG ZRANGE_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGE argument table */\nstruct COMMAND_ARG ZRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stop\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sortby\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGE_sortby_Subargs},\n{MAKE_ARG(\"rev\",ARG_TYPE_PURE_TOKEN,-1,\"REV\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGE_limit_Subargs},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZRANGEBYLEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANGEBYLEX history */\n#define ZRANGEBYLEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANGEBYLEX tips */\n#define ZRANGEBYLEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANGEBYLEX key specs */\nkeySpec 
ZRANGEBYLEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANGEBYLEX limit argument table */\nstruct COMMAND_ARG ZRANGEBYLEX_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGEBYLEX argument table */\nstruct COMMAND_ARG ZRANGEBYLEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGEBYLEX_limit_Subargs},\n};\n\n/********** ZRANGEBYSCORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANGEBYSCORE history */\ncommandHistory ZRANGEBYSCORE_History[] = {\n{\"2.0.0\",\"Added the `WITHSCORES` modifier.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANGEBYSCORE tips */\n#define ZRANGEBYSCORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANGEBYSCORE key specs */\nkeySpec ZRANGEBYSCORE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANGEBYSCORE limit argument table */\nstruct COMMAND_ARG ZRANGEBYSCORE_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGEBYSCORE argument table */\nstruct COMMAND_ARG ZRANGEBYSCORE_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,\"2.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGEBYSCORE_limit_Subargs},\n};\n\n/********** ZRANGESTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANGESTORE history */\n#define ZRANGESTORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANGESTORE tips */\n#define ZRANGESTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANGESTORE key specs */\nkeySpec ZRANGESTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANGESTORE sortby argument table */\nstruct COMMAND_ARG ZRANGESTORE_sortby_Subargs[] = {\n{MAKE_ARG(\"byscore\",ARG_TYPE_PURE_TOKEN,-1,\"BYSCORE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"bylex\",ARG_TYPE_PURE_TOKEN,-1,\"BYLEX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGESTORE limit argument table */\nstruct COMMAND_ARG ZRANGESTORE_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZRANGESTORE argument table */\nstruct COMMAND_ARG ZRANGESTORE_Args[] = 
{\n{MAKE_ARG(\"dst\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"src\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"sortby\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGESTORE_sortby_Subargs},\n{MAKE_ARG(\"rev\",ARG_TYPE_PURE_TOKEN,-1,\"REV\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZRANGESTORE_limit_Subargs},\n};\n\n/********** ZRANK ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZRANK history */\ncommandHistory ZRANK_History[] = {\n{\"7.2.0\",\"Added the optional `WITHSCORE` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZRANK tips */\n#define ZRANK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZRANK key specs */\nkeySpec ZRANK_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZRANK argument table */\nstruct COMMAND_ARG ZRANK_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscore\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZREM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREM history */\ncommandHistory ZREM_History[] = {\n{\"2.4.0\",\"Accepts multiple elements.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREM tips */\n#define ZREM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREM key specs */\nkeySpec ZREM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREM argument table */\nstruct COMMAND_ARG ZREM_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** ZREMRANGEBYLEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREMRANGEBYLEX history */\n#define ZREMRANGEBYLEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREMRANGEBYLEX tips */\n#define ZREMRANGEBYLEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREMRANGEBYLEX key specs */\nkeySpec ZREMRANGEBYLEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREMRANGEBYLEX argument table */\nstruct COMMAND_ARG ZREMRANGEBYLEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZREMRANGEBYRANK ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREMRANGEBYRANK history */\n#define ZREMRANGEBYRANK_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREMRANGEBYRANK tips */\n#define ZREMRANGEBYRANK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREMRANGEBYRANK key specs */\nkeySpec ZREMRANGEBYRANK_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREMRANGEBYRANK argument table */\nstruct COMMAND_ARG ZREMRANGEBYRANK_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stop\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZREMRANGEBYSCORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREMRANGEBYSCORE history */\n#define ZREMRANGEBYSCORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREMRANGEBYSCORE tips */\n#define 
ZREMRANGEBYSCORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREMRANGEBYSCORE key specs */\nkeySpec ZREMRANGEBYSCORE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREMRANGEBYSCORE argument table */\nstruct COMMAND_ARG ZREMRANGEBYSCORE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZREVRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREVRANGE history */\n#define ZREVRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREVRANGE tips */\n#define ZREVRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREVRANGE key specs */\nkeySpec ZREVRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREVRANGE argument table */\nstruct COMMAND_ARG ZREVRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stop\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZREVRANGEBYLEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREVRANGEBYLEX history */\n#define ZREVRANGEBYLEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREVRANGEBYLEX tips */\n#define ZREVRANGEBYLEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREVRANGEBYLEX key specs */\nkeySpec ZREVRANGEBYLEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREVRANGEBYLEX limit argument table */\nstruct COMMAND_ARG ZREVRANGEBYLEX_limit_Subargs[] = 
{\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZREVRANGEBYLEX argument table */\nstruct COMMAND_ARG ZREVRANGEBYLEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZREVRANGEBYLEX_limit_Subargs},\n};\n\n/********** ZREVRANGEBYSCORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREVRANGEBYSCORE history */\ncommandHistory ZREVRANGEBYSCORE_History[] = {\n{\"2.1.6\",\"`min` and `max` can be exclusive.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREVRANGEBYSCORE tips */\n#define ZREVRANGEBYSCORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREVRANGEBYSCORE key specs */\nkeySpec ZREVRANGEBYSCORE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREVRANGEBYSCORE limit argument table */\nstruct COMMAND_ARG ZREVRANGEBYSCORE_limit_Subargs[] = {\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZREVRANGEBYSCORE argument table */\nstruct COMMAND_ARG ZREVRANGEBYSCORE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"limit\",ARG_TYPE_BLOCK,-1,\"LIMIT\",NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=ZREVRANGEBYSCORE_limit_Subargs},\n};\n\n/********** ZREVRANK 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZREVRANK history */\ncommandHistory ZREVRANK_History[] = {\n{\"7.2.0\",\"Added the optional `WITHSCORE` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZREVRANK tips */\n#define ZREVRANK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZREVRANK key specs */\nkeySpec ZREVRANK_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZREVRANK argument table */\nstruct COMMAND_ARG ZREVRANK_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"withscore\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZSCAN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZSCAN history */\n#define ZSCAN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZSCAN tips */\nconst char *ZSCAN_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZSCAN key specs */\nkeySpec ZSCAN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZSCAN argument table */\nstruct COMMAND_ARG ZSCAN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"cursor\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pattern\",ARG_TYPE_PATTERN,-1,\"MATCH\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZSCORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZSCORE history */\n#define ZSCORE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZSCORE tips */\n#define ZSCORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZSCORE key specs */\nkeySpec ZSCORE_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* ZSCORE argument table */\nstruct COMMAND_ARG ZSCORE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"member\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** ZUNION ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZUNION history */\ncommandHistory ZUNION_History[] = {\n{\"8.8.0\",\"Added `COUNT` aggregate option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZUNION tips */\n#define ZUNION_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZUNION key specs */\nkeySpec ZUNION_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZUNION aggregate argument table */\nstruct COMMAND_ARG ZUNION_aggregate_Subargs[] = {\n{MAKE_ARG(\"sum\",ARG_TYPE_PURE_TOKEN,-1,\"SUM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_PURE_TOKEN,-1,\"COUNT\",NULL,\"8.8.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZUNION argument table */\nstruct COMMAND_ARG ZUNION_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"weight\",ARG_TYPE_INTEGER,-1,\"WEIGHTS\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"aggregate\",ARG_TYPE_ONEOF,-1,\"AGGREGATE\",NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=ZUNION_aggregate_Subargs},\n{MAKE_ARG(\"withscores\",ARG_TYPE_PURE_TOKEN,-1,\"WITHSCORES\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** ZUNIONSTORE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* ZUNIONSTORE history */\ncommandHistory ZUNIONSTORE_History[] = {\n{\"8.8.0\",\"Added 
`COUNT` aggregate option.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* ZUNIONSTORE tips */\n#define ZUNIONSTORE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* ZUNIONSTORE key specs */\nkeySpec ZUNIONSTORE_Keyspecs[2] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}},{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_KEYNUM,.fk.keynum={0,1,1}}\n};\n#endif\n\n/* ZUNIONSTORE aggregate argument table */\nstruct COMMAND_ARG ZUNIONSTORE_aggregate_Subargs[] = {\n{MAKE_ARG(\"sum\",ARG_TYPE_PURE_TOKEN,-1,\"SUM\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min\",ARG_TYPE_PURE_TOKEN,-1,\"MIN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"max\",ARG_TYPE_PURE_TOKEN,-1,\"MAX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_PURE_TOKEN,-1,\"COUNT\",NULL,\"8.8.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* ZUNIONSTORE argument table */\nstruct COMMAND_ARG ZUNIONSTORE_Args[] = {\n{MAKE_ARG(\"destination\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"weight\",ARG_TYPE_INTEGER,-1,\"WEIGHTS\",NULL,NULL,CMD_ARG_OPTIONAL|CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"aggregate\",ARG_TYPE_ONEOF,-1,\"AGGREGATE\",NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=ZUNIONSTORE_aggregate_Subargs},\n};\n\n/********** XACK ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XACK history */\n#define XACK_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XACK tips */\n#define XACK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XACK key specs */\nkeySpec XACK_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XACK argument table */\nstruct COMMAND_ARG XACK_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** XACKDEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XACKDEL history */\n#define XACKDEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XACKDEL tips */\n#define XACKDEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XACKDEL key specs */\nkeySpec XACKDEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XACKDEL condition argument table */\nstruct COMMAND_ARG XACKDEL_condition_Subargs[] = {\n{MAKE_ARG(\"keepref\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"delref\",ARG_TYPE_PURE_TOKEN,-1,\"DELREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"acked\",ARG_TYPE_PURE_TOKEN,-1,\"ACKED\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XACKDEL ids argument table */\nstruct COMMAND_ARG XACKDEL_ids_Subargs[] = {\n{MAKE_ARG(\"numids\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* XACKDEL argument table */\nstruct COMMAND_ARG XACKDEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=XACKDEL_condition_Subargs},\n{MAKE_ARG(\"ids\",ARG_TYPE_BLOCK,-1,\"IDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XACKDEL_ids_Subargs},\n};\n\n/********** XADD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XADD history */\ncommandHistory XADD_History[] = {\n{\"6.2.0\",\"Added the `NOMKSTREAM` option, `MINID` trimming strategy and the `LIMIT` option.\"},\n{\"7.0.0\",\"Added support 
for the `<ms>-*` explicit ID form.\"},\n{\"8.2.0\",\"Added the `KEEPREF`, `DELREF` and `ACKED` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XADD tips */\nconst char *XADD_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XADD key specs */\nkeySpec XADD_Keyspecs[1] = {\n{\"UPDATE instead of INSERT because of the optional trimming feature\",CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XADD condition argument table */\nstruct COMMAND_ARG XADD_condition_Subargs[] = {\n{MAKE_ARG(\"keepref\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"delref\",ARG_TYPE_PURE_TOKEN,-1,\"DELREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"acked\",ARG_TYPE_PURE_TOKEN,-1,\"ACKED\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD idmp idmp argument table */\nstruct COMMAND_ARG XADD_idmp_idmp_Subargs[] = {\n{MAKE_ARG(\"pid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"iid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD idmp argument table */\nstruct COMMAND_ARG XADD_idmp_Subargs[] = {\n{MAKE_ARG(\"pid\",ARG_TYPE_STRING,-1,\"IDMPAUTO\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"idmp\",ARG_TYPE_BLOCK,-1,\"IDMP\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XADD_idmp_idmp_Subargs},\n};\n\n/* XADD trim strategy argument table */\nstruct COMMAND_ARG XADD_trim_strategy_Subargs[] = {\n{MAKE_ARG(\"maxlen\",ARG_TYPE_PURE_TOKEN,-1,\"MAXLEN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"minid\",ARG_TYPE_PURE_TOKEN,-1,\"MINID\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD trim operator argument table */\nstruct COMMAND_ARG XADD_trim_operator_Subargs[] = {\n{MAKE_ARG(\"equal\",ARG_TYPE_PURE_TOKEN,-1,\"=\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"approximately\",ARG_TYPE_PURE_TOKEN,-1,\"~\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD trim argument table */\nstruct COMMAND_ARG 
XADD_trim_Subargs[] = {\n{MAKE_ARG(\"strategy\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XADD_trim_strategy_Subargs},\n{MAKE_ARG(\"operator\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=XADD_trim_operator_Subargs},\n{MAKE_ARG(\"threshold\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"LIMIT\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* XADD id_selector argument table */\nstruct COMMAND_ARG XADD_id_selector_Subargs[] = {\n{MAKE_ARG(\"auto-id\",ARG_TYPE_PURE_TOKEN,-1,\"*\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD data argument table */\nstruct COMMAND_ARG XADD_data_Subargs[] = {\n{MAKE_ARG(\"field\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XADD argument table */\nstruct COMMAND_ARG XADD_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"nomkstream\",ARG_TYPE_PURE_TOKEN,-1,\"NOMKSTREAM\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=XADD_condition_Subargs},\n{MAKE_ARG(\"idmp\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"8.6.0\",CMD_ARG_OPTIONAL,2,NULL),.subargs=XADD_idmp_Subargs},\n{MAKE_ARG(\"trim\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=XADD_trim_Subargs},\n{MAKE_ARG(\"id-selector\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XADD_id_selector_Subargs},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=XADD_data_Subargs},\n};\n\n/********** XAUTOCLAIM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XAUTOCLAIM history */\ncommandHistory XAUTOCLAIM_History[] = {\n{\"7.0.0\",\"Added an element to the reply array, containing deleted entries the command cleared from the 
PEL\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XAUTOCLAIM tips */\nconst char *XAUTOCLAIM_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XAUTOCLAIM key specs */\nkeySpec XAUTOCLAIM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XAUTOCLAIM argument table */\nstruct COMMAND_ARG XAUTOCLAIM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min-idle-time\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"justid\",ARG_TYPE_PURE_TOKEN,-1,\"JUSTID\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XCFGSET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XCFGSET history */\n#define XCFGSET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XCFGSET tips */\n#define XCFGSET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XCFGSET key specs */\nkeySpec XCFGSET_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XCFGSET argument table */\nstruct COMMAND_ARG XCFGSET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"idmp-duration\",ARG_TYPE_INTEGER,-1,\"IDMP-DURATION\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"idmp-maxsize\",ARG_TYPE_INTEGER,-1,\"IDMP-MAXSIZE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XCLAIM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XCLAIM history */\n#define XCLAIM_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XCLAIM tips */\nconst char 
*XCLAIM_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XCLAIM key specs */\nkeySpec XCLAIM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XCLAIM argument table */\nstruct COMMAND_ARG XCLAIM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"min-idle-time\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"ms\",ARG_TYPE_INTEGER,-1,\"IDLE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"TIME\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"RETRYCOUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"force\",ARG_TYPE_PURE_TOKEN,-1,\"FORCE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"justid\",ARG_TYPE_PURE_TOKEN,-1,\"JUSTID\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"lastid\",ARG_TYPE_STRING,-1,\"LASTID\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XDEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XDEL history */\n#define XDEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XDEL tips */\n#define XDEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XDEL key specs */\nkeySpec XDEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XDEL argument table */\nstruct COMMAND_ARG XDEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** XDELEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* 
XDELEX history */\n#define XDELEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XDELEX tips */\n#define XDELEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XDELEX key specs */\nkeySpec XDELEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XDELEX condition argument table */\nstruct COMMAND_ARG XDELEX_condition_Subargs[] = {\n{MAKE_ARG(\"keepref\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"delref\",ARG_TYPE_PURE_TOKEN,-1,\"DELREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"acked\",ARG_TYPE_PURE_TOKEN,-1,\"ACKED\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XDELEX ids argument table */\nstruct COMMAND_ARG XDELEX_ids_Subargs[] = {\n{MAKE_ARG(\"numids\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* XDELEX argument table */\nstruct COMMAND_ARG XDELEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=XDELEX_condition_Subargs},\n{MAKE_ARG(\"ids\",ARG_TYPE_BLOCK,-1,\"IDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XDELEX_ids_Subargs},\n};\n\n/********** XGROUP CREATE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP CREATE history */\ncommandHistory XGROUP_CREATE_History[] = {\n{\"7.0.0\",\"Added the `entries_read` named argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP CREATE tips */\n#define XGROUP_CREATE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP CREATE key specs */\nkeySpec XGROUP_CREATE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XGROUP CREATE id_selector argument table */\nstruct COMMAND_ARG XGROUP_CREATE_id_selector_Subargs[] = 
{\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"new-id\",ARG_TYPE_PURE_TOKEN,-1,\"$\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XGROUP CREATE argument table */\nstruct COMMAND_ARG XGROUP_CREATE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id-selector\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XGROUP_CREATE_id_selector_Subargs},\n{MAKE_ARG(\"mkstream\",ARG_TYPE_PURE_TOKEN,-1,\"MKSTREAM\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"entriesread\",ARG_TYPE_INTEGER,-1,\"ENTRIESREAD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL),.display_text=\"entries-read\"},\n};\n\n/********** XGROUP CREATECONSUMER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP CREATECONSUMER history */\n#define XGROUP_CREATECONSUMER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP CREATECONSUMER tips */\n#define XGROUP_CREATECONSUMER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP CREATECONSUMER key specs */\nkeySpec XGROUP_CREATECONSUMER_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XGROUP CREATECONSUMER argument table */\nstruct COMMAND_ARG XGROUP_CREATECONSUMER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XGROUP DELCONSUMER ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP DELCONSUMER history */\n#define XGROUP_DELCONSUMER_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP DELCONSUMER tips */\n#define XGROUP_DELCONSUMER_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP DELCONSUMER key specs */\nkeySpec 
XGROUP_DELCONSUMER_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XGROUP DELCONSUMER argument table */\nstruct COMMAND_ARG XGROUP_DELCONSUMER_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XGROUP DESTROY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP DESTROY history */\n#define XGROUP_DESTROY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP DESTROY tips */\n#define XGROUP_DESTROY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP DESTROY key specs */\nkeySpec XGROUP_DESTROY_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XGROUP DESTROY argument table */\nstruct COMMAND_ARG XGROUP_DESTROY_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XGROUP HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP HELP history */\n#define XGROUP_HELP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP HELP tips */\n#define XGROUP_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP HELP key specs */\n#define XGROUP_HELP_Keyspecs NULL\n#endif\n\n/********** XGROUP SETID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP SETID history */\ncommandHistory XGROUP_SETID_History[] = {\n{\"7.0.0\",\"Added the optional `entries_read` argument.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP SETID tips */\n#define XGROUP_SETID_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP SETID key specs */\nkeySpec XGROUP_SETID_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XGROUP SETID id_selector argument table */\nstruct COMMAND_ARG XGROUP_SETID_id_selector_Subargs[] = {\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"new-id\",ARG_TYPE_PURE_TOKEN,-1,\"$\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XGROUP SETID argument table */\nstruct COMMAND_ARG XGROUP_SETID_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id-selector\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XGROUP_SETID_id_selector_Subargs},\n{MAKE_ARG(\"entriesread\",ARG_TYPE_INTEGER,-1,\"ENTRIESREAD\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL),.display_text=\"entries-read\"},\n};\n\n/* XGROUP command table */\nstruct COMMAND_STRUCT XGROUP_Subcommands[] = {\n{MAKE_CMD(\"create\",\"Creates a consumer group.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_CREATE_History,1,XGROUP_CREATE_Tips,0,xgroupCommand,-5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STREAM,XGROUP_CREATE_Keyspecs,1,NULL,5),.args=XGROUP_CREATE_Args},\n{MAKE_CMD(\"createconsumer\",\"Creates a consumer in a consumer group.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_CREATECONSUMER_History,0,XGROUP_CREATECONSUMER_Tips,0,xgroupCommand,5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STREAM,XGROUP_CREATECONSUMER_Keyspecs,1,NULL,3),.args=XGROUP_CREATECONSUMER_Args},\n{MAKE_CMD(\"delconsumer\",\"Deletes a consumer from a consumer group.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_DELCONSUMER_History,0,XGROUP_DELCONSUMER_Tips,0,xgroupCommand,5,CMD_WRITE,ACL_CATEGORY_STREAM,XGROUP_DELCONSUMER_Keyspecs,1,NULL,3),.args=XGROUP_DELCONSUMER_Args},\n{MAKE_CMD(\"destroy\",\"Destroys a consumer group.\",\"O(N) where N is the number of entries in 
the group's pending entries list (PEL).\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_DESTROY_History,0,XGROUP_DESTROY_Tips,0,xgroupCommand,4,CMD_WRITE,ACL_CATEGORY_STREAM,XGROUP_DESTROY_Keyspecs,1,NULL,2),.args=XGROUP_DESTROY_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_HELP_History,0,XGROUP_HELP_Tips,0,xgroupCommand,2,CMD_LOADING|CMD_STALE,ACL_CATEGORY_STREAM,XGROUP_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"setid\",\"Sets the last-delivered ID of a consumer group.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_SETID_History,1,XGROUP_SETID_Tips,0,xgroupCommand,-5,CMD_WRITE,ACL_CATEGORY_STREAM,XGROUP_SETID_Keyspecs,1,NULL,4),.args=XGROUP_SETID_Args},\n{0}\n};\n\n/********** XGROUP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XGROUP history */\n#define XGROUP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XGROUP tips */\n#define XGROUP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XGROUP key specs */\n#define XGROUP_Keyspecs NULL\n#endif\n\n/********** XIDMPRECORD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XIDMPRECORD history */\n#define XIDMPRECORD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XIDMPRECORD tips */\n#define XIDMPRECORD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XIDMPRECORD key specs */\nkeySpec XIDMPRECORD_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XIDMPRECORD argument table */\nstruct COMMAND_ARG XIDMPRECORD_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"pid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"iid\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"stream-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XINFO CONSUMERS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XINFO CONSUMERS history */\ncommandHistory XINFO_CONSUMERS_History[] = {\n{\"7.2.0\",\"Added the `inactive` field, and changed the meaning of `idle`.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XINFO CONSUMERS tips */\nconst char *XINFO_CONSUMERS_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XINFO CONSUMERS key specs */\nkeySpec XINFO_CONSUMERS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XINFO CONSUMERS argument table */\nstruct COMMAND_ARG XINFO_CONSUMERS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XINFO GROUPS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XINFO GROUPS history */\ncommandHistory XINFO_GROUPS_History[] = {\n{\"7.0.0\",\"Added the `entries-read` and `lag` fields\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XINFO GROUPS tips */\n#define XINFO_GROUPS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XINFO GROUPS key specs */\nkeySpec XINFO_GROUPS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XINFO GROUPS argument table */\nstruct COMMAND_ARG XINFO_GROUPS_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XINFO HELP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XINFO HELP history */\n#define XINFO_HELP_History 
NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XINFO HELP tips */\n#define XINFO_HELP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XINFO HELP key specs */\n#define XINFO_HELP_Keyspecs NULL\n#endif\n\n/********** XINFO STREAM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XINFO STREAM history */\ncommandHistory XINFO_STREAM_History[] = {\n{\"6.0.0\",\"Added the `FULL` modifier.\"},\n{\"7.0.0\",\"Added the `max-deleted-entry-id`, `entries-added`, `recorded-first-entry-id`, `entries-read` and `lag` fields\"},\n{\"7.2.0\",\"Added the `active-time` field, and changed the meaning of `seen-time`.\"},\n{\"8.6.0\",\"Added the `idmp-duration`, `idmp-maxsize`, `pids-tracked`, `iids-tracked`, `iids-added` and `iids-duplicates` fields for IDMP tracking.\"},\n{\"8.8.0\",\"Added the `nacked-count` field to consumer groups in `FULL` output.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XINFO STREAM tips */\n#define XINFO_STREAM_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XINFO STREAM key specs */\nkeySpec XINFO_STREAM_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={2},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XINFO STREAM full_block argument table */\nstruct COMMAND_ARG XINFO_STREAM_full_block_Subargs[] = {\n{MAKE_ARG(\"full\",ARG_TYPE_PURE_TOKEN,-1,\"FULL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* XINFO STREAM argument table */\nstruct COMMAND_ARG XINFO_STREAM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"full-block\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=XINFO_STREAM_full_block_Subargs},\n};\n\n/* XINFO command table */\nstruct COMMAND_STRUCT XINFO_Subcommands[] = {\n{MAKE_CMD(\"consumers\",\"Returns a list of the consumers in a consumer 
group.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XINFO_CONSUMERS_History,1,XINFO_CONSUMERS_Tips,1,xinfoCommand,4,CMD_READONLY,ACL_CATEGORY_STREAM,XINFO_CONSUMERS_Keyspecs,1,NULL,2),.args=XINFO_CONSUMERS_Args},\n{MAKE_CMD(\"groups\",\"Returns a list of the consumer groups of a stream.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XINFO_GROUPS_History,1,XINFO_GROUPS_Tips,0,xinfoCommand,3,CMD_READONLY,ACL_CATEGORY_STREAM,XINFO_GROUPS_Keyspecs,1,NULL,1),.args=XINFO_GROUPS_Args},\n{MAKE_CMD(\"help\",\"Returns helpful text about the different subcommands.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XINFO_HELP_History,0,XINFO_HELP_Tips,0,xinfoCommand,2,CMD_LOADING|CMD_STALE,ACL_CATEGORY_STREAM,XINFO_HELP_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"stream\",\"Returns information about a stream.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XINFO_STREAM_History,5,XINFO_STREAM_Tips,0,xinfoCommand,-3,CMD_READONLY,ACL_CATEGORY_STREAM,XINFO_STREAM_Keyspecs,1,NULL,2),.args=XINFO_STREAM_Args},\n{0}\n};\n\n/********** XINFO ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XINFO history */\n#define XINFO_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XINFO tips */\n#define XINFO_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XINFO key specs */\n#define XINFO_Keyspecs NULL\n#endif\n\n/********** XLEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XLEN history */\n#define XLEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XLEN tips */\n#define XLEN_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XLEN key specs */\nkeySpec XLEN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XLEN argument table */\nstruct COMMAND_ARG XLEN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** XNACK 
********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XNACK history */\n#define XNACK_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XNACK tips */\n#define XNACK_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XNACK key specs */\nkeySpec XNACK_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XNACK mode argument table */\nstruct COMMAND_ARG XNACK_mode_Subargs[] = {\n{MAKE_ARG(\"silent\",ARG_TYPE_PURE_TOKEN,-1,\"SILENT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fail\",ARG_TYPE_PURE_TOKEN,-1,\"FAIL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"fatal\",ARG_TYPE_PURE_TOKEN,-1,\"FATAL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XNACK ids argument table */\nstruct COMMAND_ARG XNACK_ids_Subargs[] = {\n{MAKE_ARG(\"numids\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* XNACK argument table */\nstruct COMMAND_ARG XNACK_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"mode\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,3,NULL),.subargs=XNACK_mode_Subargs},\n{MAKE_ARG(\"ids\",ARG_TYPE_BLOCK,-1,\"IDS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XNACK_ids_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"RETRYCOUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"force\",ARG_TYPE_PURE_TOKEN,-1,\"FORCE\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XPENDING ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XPENDING history */\ncommandHistory XPENDING_History[] = {\n{\"6.2.0\",\"Added the `IDLE` option and exclusive range intervals.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XPENDING tips */\nconst char *XPENDING_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XPENDING 
key specs */\nkeySpec XPENDING_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XPENDING filters argument table */\nstruct COMMAND_ARG XPENDING_filters_Subargs[] = {\n{MAKE_ARG(\"min-idle-time\",ARG_TYPE_INTEGER,-1,\"IDLE\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/* XPENDING argument table */\nstruct COMMAND_ARG XPENDING_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"filters\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=XPENDING_filters_Subargs},\n};\n\n/********** XRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XRANGE history */\ncommandHistory XRANGE_History[] = {\n{\"6.2.0\",\"Added exclusive ranges.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XRANGE tips */\n#define XRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XRANGE key specs */\nkeySpec XRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XRANGE argument table */\nstruct COMMAND_ARG XRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XREAD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XREAD history */\n#define XREAD_History 
NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XREAD tips */\n#define XREAD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XREAD key specs */\nkeySpec XREAD_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_KEYWORD,.bs.keyword={\"STREAMS\",1},KSPEC_FK_RANGE,.fk.range={-1,1,2}}\n};\n#endif\n\n/* XREAD streams argument table */\nstruct COMMAND_ARG XREAD_streams_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* XREAD argument table */\nstruct COMMAND_ARG XREAD_Args[] = {\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"BLOCK\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"streams\",ARG_TYPE_BLOCK,-1,\"STREAMS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XREAD_streams_Subargs},\n};\n\n/********** XREADGROUP ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XREADGROUP history */\n#define XREADGROUP_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XREADGROUP tips */\n#define XREADGROUP_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XREADGROUP key specs */\nkeySpec XREADGROUP_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_KEYWORD,.bs.keyword={\"STREAMS\",4},KSPEC_FK_RANGE,.fk.range={-1,1,2}}\n};\n#endif\n\n/* XREADGROUP group_block argument table */\nstruct COMMAND_ARG XREADGROUP_group_block_Subargs[] = {\n{MAKE_ARG(\"group\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"consumer\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XREADGROUP streams argument table */\nstruct COMMAND_ARG XREADGROUP_streams_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n{MAKE_ARG(\"id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* XREADGROUP argument table */\nstruct COMMAND_ARG XREADGROUP_Args[] = 
{\n{MAKE_ARG(\"group-block\",ARG_TYPE_BLOCK,-1,\"GROUP\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XREADGROUP_group_block_Subargs},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"BLOCK\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"min-idle-time\",ARG_TYPE_INTEGER,-1,\"CLAIM\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"noack\",ARG_TYPE_PURE_TOKEN,-1,\"NOACK\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"streams\",ARG_TYPE_BLOCK,-1,\"STREAMS\",NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XREADGROUP_streams_Subargs},\n};\n\n/********** XREVRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XREVRANGE history */\ncommandHistory XREVRANGE_History[] = {\n{\"6.2.0\",\"Added exclusive ranges.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XREVRANGE tips */\n#define XREVRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XREVRANGE key specs */\nkeySpec XREVRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XREVRANGE argument table */\nstruct COMMAND_ARG XREVRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"COUNT\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XSETID ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XSETID history */\ncommandHistory XSETID_History[] = {\n{\"7.0.0\",\"Added the `entries_added` and `max_deleted_entry_id` arguments.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XSETID tips */\n#define XSETID_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XSETID key specs */\nkeySpec XSETID_Keyspecs[1] = 
{\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XSETID argument table */\nstruct COMMAND_ARG XSETID_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"last-id\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"entries-added\",ARG_TYPE_INTEGER,-1,\"ENTRIESADDED\",NULL,\"7.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"max-deleted-id\",ARG_TYPE_STRING,-1,\"MAXDELETEDID\",NULL,\"7.0.0\",CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** XTRIM ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* XTRIM history */\ncommandHistory XTRIM_History[] = {\n{\"6.2.0\",\"Added the `MINID` trimming strategy and the `LIMIT` option.\"},\n{\"8.2.0\",\"Added the `KEEPREF`, `DELREF` and `ACKED` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* XTRIM tips */\nconst char *XTRIM_Tips[] = {\n\"nondeterministic_output\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* XTRIM key specs */\nkeySpec XTRIM_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* XTRIM trim strategy argument table */\nstruct COMMAND_ARG XTRIM_trim_strategy_Subargs[] = {\n{MAKE_ARG(\"maxlen\",ARG_TYPE_PURE_TOKEN,-1,\"MAXLEN\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"minid\",ARG_TYPE_PURE_TOKEN,-1,\"MINID\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* XTRIM trim operator argument table */\nstruct COMMAND_ARG XTRIM_trim_operator_Subargs[] = {\n{MAKE_ARG(\"equal\",ARG_TYPE_PURE_TOKEN,-1,\"=\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"approximately\",ARG_TYPE_PURE_TOKEN,-1,\"~\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XTRIM trim condition argument table */\nstruct COMMAND_ARG XTRIM_trim_condition_Subargs[] = 
{\n{MAKE_ARG(\"keepref\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"delref\",ARG_TYPE_PURE_TOKEN,-1,\"DELREF\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"acked\",ARG_TYPE_PURE_TOKEN,-1,\"ACKED\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* XTRIM trim argument table */\nstruct COMMAND_ARG XTRIM_trim_Subargs[] = {\n{MAKE_ARG(\"strategy\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_NONE,2,NULL),.subargs=XTRIM_trim_strategy_Subargs},\n{MAKE_ARG(\"operator\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=XTRIM_trim_operator_Subargs},\n{MAKE_ARG(\"threshold\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"count\",ARG_TYPE_INTEGER,-1,\"LIMIT\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,3,NULL),.subargs=XTRIM_trim_condition_Subargs},\n};\n\n/* XTRIM argument table */\nstruct COMMAND_ARG XTRIM_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"trim\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_NONE,5,NULL),.subargs=XTRIM_trim_Subargs},\n};\n\n/********** APPEND ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* APPEND history */\n#define APPEND_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* APPEND tips */\n#define APPEND_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* APPEND key specs */\nkeySpec APPEND_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* APPEND argument table */\nstruct COMMAND_ARG APPEND_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** DECR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DECR history */\n#define DECR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DECR tips */\n#define DECR_Tips 
NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DECR key specs */\nkeySpec DECR_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* DECR argument table */\nstruct COMMAND_ARG DECR_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** DECRBY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DECRBY history */\n#define DECRBY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DECRBY tips */\n#define DECRBY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DECRBY key specs */\nkeySpec DECRBY_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* DECRBY argument table */\nstruct COMMAND_ARG DECRBY_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"decrement\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** DELEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DELEX history */\n#define DELEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DELEX tips */\n#define DELEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DELEX key specs */\nkeySpec DELEX_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_DELETE|CMD_KEY_VARIABLE_FLAGS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* DELEX condition argument table */\nstruct COMMAND_ARG DELEX_condition_Subargs[] = {\n{MAKE_ARG(\"ifeq-value\",ARG_TYPE_STRING,-1,\"IFEQ\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifne-value\",ARG_TYPE_STRING,-1,\"IFNE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifdeq-digest\",ARG_TYPE_INTEGER,-1,\"IFDEQ\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifdne-digest\",ARG_TYPE_INTEGER,-1,\"IFDNE\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* DELEX argument table */\nstruct COMMAND_ARG DELEX_Args[] = 
{\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,4,NULL),.subargs=DELEX_condition_Subargs},\n};\n\n/********** DIGEST ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DIGEST history */\n#define DIGEST_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DIGEST tips */\n#define DIGEST_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DIGEST key specs */\nkeySpec DIGEST_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* DIGEST argument table */\nstruct COMMAND_ARG DIGEST_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** GET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GET history */\n#define GET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GET tips */\n#define GET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GET key specs */\nkeySpec GET_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GET argument table */\nstruct COMMAND_ARG GET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** GETDEL ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GETDEL history */\n#define GETDEL_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GETDEL tips */\n#define GETDEL_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GETDEL key specs */\nkeySpec GETDEL_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_DELETE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GETDEL argument table */\nstruct COMMAND_ARG GETDEL_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** GETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GETEX history */\n#define 
GETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GETEX tips */\n#define GETEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GETEX key specs */\nkeySpec GETEX_Keyspecs[1] = {\n{\"RW and UPDATE because it changes the TTL\",CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GETEX expiration argument table */\nstruct COMMAND_ARG GETEX_expiration_Subargs[] = {\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"EX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"PX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,\"EXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"PXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"persist\",ARG_TYPE_PURE_TOKEN,-1,\"PERSIST\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* GETEX argument table */\nstruct COMMAND_ARG GETEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=GETEX_expiration_Subargs},\n};\n\n/********** GETRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* GETRANGE history */\n#define GETRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GETRANGE tips */\n#define GETRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GETRANGE key specs */\nkeySpec GETRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GETRANGE argument table */\nstruct COMMAND_ARG GETRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** GETSET ********************/\n\n#ifndef 
SKIP_CMD_HISTORY_TABLE\n/* GETSET history */\n#define GETSET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* GETSET tips */\n#define GETSET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* GETSET key specs */\nkeySpec GETSET_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* GETSET argument table */\nstruct COMMAND_ARG GETSET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** INCR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* INCR history */\n#define INCR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* INCR tips */\n#define INCR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* INCR key specs */\nkeySpec INCR_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* INCR argument table */\nstruct COMMAND_ARG INCR_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** INCRBY ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* INCRBY history */\n#define INCRBY_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* INCRBY tips */\n#define INCRBY_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* INCRBY key specs */\nkeySpec INCRBY_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* INCRBY argument table */\nstruct COMMAND_ARG INCRBY_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** INCRBYFLOAT ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* INCRBYFLOAT history */\n#define INCRBYFLOAT_History NULL\n#endif\n\n#ifndef 
SKIP_CMD_TIPS_TABLE\n/* INCRBYFLOAT tips */\n#define INCRBYFLOAT_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* INCRBYFLOAT key specs */\nkeySpec INCRBYFLOAT_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* INCRBYFLOAT argument table */\nstruct COMMAND_ARG INCRBYFLOAT_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"increment\",ARG_TYPE_DOUBLE,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** LCS ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* LCS history */\n#define LCS_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* LCS tips */\n#define LCS_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* LCS key specs */\nkeySpec LCS_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={1,1,0}}\n};\n#endif\n\n/* LCS argument table */\nstruct COMMAND_ARG LCS_Args[] = {\n{MAKE_ARG(\"key1\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"key2\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"len\",ARG_TYPE_PURE_TOKEN,-1,\"LEN\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"idx\",ARG_TYPE_PURE_TOKEN,-1,\"IDX\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"min-match-len\",ARG_TYPE_INTEGER,-1,\"MINMATCHLEN\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"withmatchlen\",ARG_TYPE_PURE_TOKEN,-1,\"WITHMATCHLEN\",NULL,NULL,CMD_ARG_OPTIONAL,0,NULL)},\n};\n\n/********** MGET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MGET history */\n#define MGET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MGET tips */\nconst char *MGET_Tips[] = {\n\"request_policy:multi_shard\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MGET key specs */\nkeySpec MGET_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* MGET 
argument table */\nstruct COMMAND_ARG MGET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/********** MSET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MSET history */\n#define MSET_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MSET tips */\nconst char *MSET_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MSET key specs */\nkeySpec MSET_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,2,0}}\n};\n#endif\n\n/* MSET data argument table */\nstruct COMMAND_ARG MSET_data_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MSET argument table */\nstruct COMMAND_ARG MSET_Args[] = {\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=MSET_data_Subargs},\n};\n\n/********** MSETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MSETEX history */\n#define MSETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MSETEX tips */\nconst char *MSETEX_Tips[] = {\n\"request_policy:multi_shard\",\n\"response_policy:all_succeeded\",\n};\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MSETEX key specs */\nkeySpec MSETEX_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_KEYNUM,.fk.keynum={0,1,2}}\n};\n#endif\n\n/* MSETEX data argument table */\nstruct COMMAND_ARG MSETEX_data_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MSETEX condition argument table */\nstruct COMMAND_ARG MSETEX_condition_Subargs[] = 
{\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MSETEX expiration argument table */\nstruct COMMAND_ARG MSETEX_expiration_Subargs[] = {\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"EX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"PX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,\"EXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"PXAT\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"keepttl\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPTTL\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MSETEX argument table */\nstruct COMMAND_ARG MSETEX_Args[] = {\n{MAKE_ARG(\"numkeys\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=MSETEX_data_Subargs},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,2,NULL),.subargs=MSETEX_condition_Subargs},\n{MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=MSETEX_expiration_Subargs},\n};\n\n/********** MSETNX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MSETNX history */\n#define MSETNX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MSETNX tips */\n#define MSETNX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MSETNX key specs */\nkeySpec MSETNX_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,2,0}}\n};\n#endif\n\n/* MSETNX data argument table */\nstruct COMMAND_ARG MSETNX_data_Subargs[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/* MSETNX argument table */\nstruct COMMAND_ARG MSETNX_Args[] = 
{\n{MAKE_ARG(\"data\",ARG_TYPE_BLOCK,-1,NULL,NULL,NULL,CMD_ARG_MULTIPLE,2,NULL),.subargs=MSETNX_data_Subargs},\n};\n\n/********** PSETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* PSETEX history */\n#define PSETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* PSETEX tips */\n#define PSETEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* PSETEX key specs */\nkeySpec PSETEX_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* PSETEX argument table */\nstruct COMMAND_ARG PSETEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SET ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SET history */\ncommandHistory SET_History[] = {\n{\"2.6.12\",\"Added the `EX`, `PX`, `NX` and `XX` options.\"},\n{\"6.0.0\",\"Added the `KEEPTTL` option.\"},\n{\"6.2.0\",\"Added the `GET`, `EXAT` and `PXAT` options.\"},\n{\"7.0.0\",\"Allowed the `NX` and `GET` options to be used together.\"},\n{\"8.4.0\",\"Added the `IFEQ`, `IFNE`, `IFDEQ` and `IFDNE` options.\"},\n};\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SET tips */\n#define SET_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SET key specs */\nkeySpec SET_Keyspecs[1] = {\n{\"RW and ACCESS due to the optional `GET` argument\",CMD_KEY_RW|CMD_KEY_ACCESS|CMD_KEY_UPDATE|CMD_KEY_VARIABLE_FLAGS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SET condition argument table */\nstruct COMMAND_ARG SET_condition_Subargs[] = 
{\n{MAKE_ARG(\"nx\",ARG_TYPE_PURE_TOKEN,-1,\"NX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"xx\",ARG_TYPE_PURE_TOKEN,-1,\"XX\",NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifeq-value\",ARG_TYPE_STRING,-1,\"IFEQ\",NULL,\"8.4.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifne-value\",ARG_TYPE_STRING,-1,\"IFNE\",NULL,\"8.4.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifdeq-digest\",ARG_TYPE_INTEGER,-1,\"IFDEQ\",NULL,\"8.4.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"ifdne-digest\",ARG_TYPE_INTEGER,-1,\"IFDNE\",NULL,\"8.4.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* SET expiration argument table */\nstruct COMMAND_ARG SET_expiration_Subargs[] = {\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,\"EX\",NULL,\"2.6.12\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"milliseconds\",ARG_TYPE_INTEGER,-1,\"PX\",NULL,\"2.6.12\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-seconds\",ARG_TYPE_UNIX_TIME,-1,\"EXAT\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"unix-time-milliseconds\",ARG_TYPE_UNIX_TIME,-1,\"PXAT\",NULL,\"6.2.0\",CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"keepttl\",ARG_TYPE_PURE_TOKEN,-1,\"KEEPTTL\",NULL,\"6.0.0\",CMD_ARG_NONE,0,NULL)},\n};\n\n/* SET argument table */\nstruct COMMAND_ARG SET_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"condition\",ARG_TYPE_ONEOF,-1,NULL,NULL,\"2.6.12\",CMD_ARG_OPTIONAL,6,NULL),.subargs=SET_condition_Subargs},\n{MAKE_ARG(\"get\",ARG_TYPE_PURE_TOKEN,-1,\"GET\",NULL,\"6.2.0\",CMD_ARG_OPTIONAL,0,NULL)},\n{MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=SET_expiration_Subargs},\n};\n\n/********** SETEX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SETEX history */\n#define SETEX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SETEX tips */\n#define SETEX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SETEX key specs */\nkeySpec SETEX_Keyspecs[1] = 
{\n{NULL,CMD_KEY_OW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SETEX argument table */\nstruct COMMAND_ARG SETEX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"seconds\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SETNX ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SETNX history */\n#define SETNX_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SETNX tips */\n#define SETNX_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SETNX key specs */\nkeySpec SETNX_Keyspecs[1] = {\n{NULL,CMD_KEY_OW|CMD_KEY_INSERT,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SETNX argument table */\nstruct COMMAND_ARG SETNX_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SETRANGE ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SETRANGE history */\n#define SETRANGE_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SETRANGE tips */\n#define SETRANGE_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SETRANGE key specs */\nkeySpec SETRANGE_Keyspecs[1] = {\n{NULL,CMD_KEY_RW|CMD_KEY_UPDATE,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SETRANGE argument table */\nstruct COMMAND_ARG SETRANGE_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"offset\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"value\",ARG_TYPE_STRING,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** STRLEN ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* STRLEN history */\n#define STRLEN_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* STRLEN tips */\n#define STRLEN_Tips 
NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* STRLEN key specs */\nkeySpec STRLEN_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* STRLEN argument table */\nstruct COMMAND_ARG STRLEN_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** SUBSTR ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* SUBSTR history */\n#define SUBSTR_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* SUBSTR tips */\n#define SUBSTR_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* SUBSTR key specs */\nkeySpec SUBSTR_Keyspecs[1] = {\n{NULL,CMD_KEY_RO|CMD_KEY_ACCESS,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={0,1,0}}\n};\n#endif\n\n/* SUBSTR argument table */\nstruct COMMAND_ARG SUBSTR_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"start\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n{MAKE_ARG(\"end\",ARG_TYPE_INTEGER,-1,NULL,NULL,NULL,CMD_ARG_NONE,0,NULL)},\n};\n\n/********** DISCARD ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* DISCARD history */\n#define DISCARD_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* DISCARD tips */\n#define DISCARD_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* DISCARD key specs */\n#define DISCARD_Keyspecs NULL\n#endif\n\n/********** EXEC ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* EXEC history */\n#define EXEC_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* EXEC tips */\n#define EXEC_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* EXEC key specs */\n#define EXEC_Keyspecs NULL\n#endif\n\n/********** MULTI ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* MULTI history */\n#define MULTI_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* MULTI tips */\n#define MULTI_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* MULTI key specs */\n#define MULTI_Keyspecs 
NULL\n#endif\n\n/********** UNWATCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* UNWATCH history */\n#define UNWATCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* UNWATCH tips */\n#define UNWATCH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* UNWATCH key specs */\n#define UNWATCH_Keyspecs NULL\n#endif\n\n/********** WATCH ********************/\n\n#ifndef SKIP_CMD_HISTORY_TABLE\n/* WATCH history */\n#define WATCH_History NULL\n#endif\n\n#ifndef SKIP_CMD_TIPS_TABLE\n/* WATCH tips */\n#define WATCH_Tips NULL\n#endif\n\n#ifndef SKIP_CMD_KEY_SPECS_TABLE\n/* WATCH key specs */\nkeySpec WATCH_Keyspecs[1] = {\n{NULL,CMD_KEY_RO,KSPEC_BS_INDEX,.bs.index={1},KSPEC_FK_RANGE,.fk.range={-1,1,0}}\n};\n#endif\n\n/* WATCH argument table */\nstruct COMMAND_ARG WATCH_Args[] = {\n{MAKE_ARG(\"key\",ARG_TYPE_KEY,0,NULL,NULL,NULL,CMD_ARG_MULTIPLE,0,NULL)},\n};\n\n/* Main command table */\nstruct COMMAND_STRUCT redisCommandTable[] = {\n/* bitmap */\n{MAKE_CMD(\"bitcount\",\"Counts the number of set bits (population counting) in a string.\",\"O(N)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,BITCOUNT_History,1,BITCOUNT_Tips,0,bitcountCommand,-2,CMD_READONLY,ACL_CATEGORY_BITMAP,BITCOUNT_Keyspecs,1,NULL,2),.args=BITCOUNT_Args},\n{MAKE_CMD(\"bitfield\",\"Performs arbitrary bitfield integer operations on strings.\",\"O(1) for each subcommand specified\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,BITFIELD_History,0,BITFIELD_Tips,0,bitfieldCommand,-2,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_BITMAP,BITFIELD_Keyspecs,1,bitfieldGetKeys,2),.args=BITFIELD_Args},\n{MAKE_CMD(\"bitfield_ro\",\"Performs arbitrary read-only bitfield integer operations on strings.\",\"O(1) for each subcommand 
specified\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,BITFIELD_RO_History,0,BITFIELD_RO_Tips,0,bitfieldroCommand,-2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_BITMAP,BITFIELD_RO_Keyspecs,1,NULL,2),.args=BITFIELD_RO_Args},\n{MAKE_CMD(\"bitop\",\"Performs bitwise operations on multiple strings, and stores the result.\",\"O(N)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,BITOP_History,0,BITOP_Tips,0,bitopCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_BITMAP,BITOP_Keyspecs,2,NULL,3),.args=BITOP_Args},\n{MAKE_CMD(\"bitpos\",\"Finds the first set (1) or clear (0) bit in a string.\",\"O(N)\",\"2.8.7\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,BITPOS_History,1,BITPOS_Tips,0,bitposCommand,-3,CMD_READONLY,ACL_CATEGORY_BITMAP,BITPOS_Keyspecs,1,NULL,3),.args=BITPOS_Args},\n{MAKE_CMD(\"getbit\",\"Returns a bit value by offset.\",\"O(1)\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,GETBIT_History,0,GETBIT_Tips,0,getbitCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_BITMAP,GETBIT_Keyspecs,1,NULL,2),.args=GETBIT_Args},\n{MAKE_CMD(\"setbit\",\"Sets or clears the bit at offset of the string value. 
Creates the key if it doesn't exist.\",\"O(1)\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"bitmap\",COMMAND_GROUP_BITMAP,SETBIT_History,0,SETBIT_Tips,0,setbitCommand,4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_BITMAP,SETBIT_Keyspecs,1,NULL,3),.args=SETBIT_Args},\n/* cluster */\n{MAKE_CMD(\"asking\",\"Signals that a cluster client is following an -ASK redirect.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,ASKING_History,0,ASKING_Tips,0,askingCommand,1,CMD_FAST,ACL_CATEGORY_CONNECTION,ASKING_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"cluster\",\"A container for Redis Cluster commands.\",\"Depends on subcommand.\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,CLUSTER_History,0,CLUSTER_Tips,0,NULL,-2,0,0,CLUSTER_Keyspecs,0,NULL,0),.subcommands=CLUSTER_Subcommands},\n{MAKE_CMD(\"readonly\",\"Enables read-only queries for a connection to a Redis Cluster replica node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,READONLY_History,0,READONLY_Tips,0,readonlyCommand,1,CMD_FAST|CMD_LOADING|CMD_STALE,ACL_CATEGORY_CONNECTION,READONLY_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"readwrite\",\"Enables read-write queries for a connection to a Redis Cluster replica node.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"cluster\",COMMAND_GROUP_CLUSTER,READWRITE_History,0,READWRITE_Tips,0,readwriteCommand,1,CMD_FAST|CMD_LOADING|CMD_STALE,ACL_CATEGORY_CONNECTION,READWRITE_Keyspecs,0,NULL,0)},\n/* connection */\n{MAKE_CMD(\"auth\",\"Authenticates the connection.\",\"O(N) where N is the number of passwords defined for the user\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,AUTH_History,1,AUTH_Tips,0,authCommand,-2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_NO_AUTH|CMD_SENTINEL|CMD_ALLOW_BUSY,ACL_CATEGORY_CONNECTION,AUTH_Keyspecs,0,NULL,2),.args=AUTH_Args},\n{MAKE_CMD(\"client\",\"A container for client connection commands.\",\"Depends on 
subcommand.\",\"2.4.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,CLIENT_History,0,CLIENT_Tips,0,NULL,-2,CMD_SENTINEL,0,CLIENT_Keyspecs,0,NULL,0),.subcommands=CLIENT_Subcommands},\n{MAKE_CMD(\"echo\",\"Returns the given string.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,ECHO_History,0,ECHO_Tips,0,echoCommand,2,CMD_LOADING|CMD_STALE|CMD_FAST,ACL_CATEGORY_CONNECTION,ECHO_Keyspecs,0,NULL,1),.args=ECHO_Args},\n{MAKE_CMD(\"hello\",\"Handshakes with the Redis server.\",\"O(1)\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,HELLO_History,1,HELLO_Tips,0,helloCommand,-1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_NO_AUTH|CMD_SENTINEL|CMD_ALLOW_BUSY,ACL_CATEGORY_CONNECTION,HELLO_Keyspecs,0,NULL,1),.args=HELLO_Args},\n{MAKE_CMD(\"ping\",\"Returns the server's liveliness response.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,PING_History,0,PING_Tips,2,pingCommand,-1,CMD_FAST|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,PING_Keyspecs,0,NULL,1),.args=PING_Args},\n{MAKE_CMD(\"quit\",\"Closes the connection.\",\"O(1)\",\"1.0.0\",CMD_DOC_DEPRECATED,\"just closing the connection\",\"7.2.0\",\"connection\",COMMAND_GROUP_CONNECTION,QUIT_History,0,QUIT_Tips,0,quitCommand,-1,CMD_ALLOW_BUSY|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_NO_AUTH,ACL_CATEGORY_CONNECTION,QUIT_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"reset\",\"Resets the connection.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,RESET_History,0,RESET_Tips,0,resetCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_NO_AUTH|CMD_ALLOW_BUSY,ACL_CATEGORY_CONNECTION,RESET_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"select\",\"Changes the selected 
database.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"connection\",COMMAND_GROUP_CONNECTION,SELECT_History,0,SELECT_Tips,0,selectCommand,2,CMD_LOADING|CMD_STALE|CMD_FAST,ACL_CATEGORY_CONNECTION,SELECT_Keyspecs,0,NULL,1),.args=SELECT_Args},\n/* generic */\n{MAKE_CMD(\"copy\",\"Copies the value of a key to a new key.\",\"O(N) worst case for collections, where N is the number of nested items. O(1) for string values.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,COPY_History,0,COPY_Tips,0,copyCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_KEYSPACE,COPY_Keyspecs,2,NULL,4),.args=COPY_Args},\n{MAKE_CMD(\"del\",\"Deletes one or more keys.\",\"O(N) where N is the number of keys that will be removed. When a key to remove holds a value other than a string, the individual complexity for this key is O(M) where M is the number of elements in the list, set, sorted set or hash. Removing a single key that holds a string value is O(1).\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,DEL_History,0,DEL_Tips,2,delCommand,-2,CMD_WRITE,ACL_CATEGORY_KEYSPACE,DEL_Keyspecs,1,NULL,1),.args=DEL_Args},\n{MAKE_CMD(\"dump\",\"Returns a serialized representation of the value stored at a key.\",\"O(1) to access the key and additional O(N*M) to serialize it, where N is the number of Redis objects composing the value and M their average size. 
For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1).\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,DUMP_History,0,DUMP_Tips,1,dumpCommand,2,CMD_READONLY,ACL_CATEGORY_KEYSPACE,DUMP_Keyspecs,1,NULL,1),.args=DUMP_Args},\n{MAKE_CMD(\"exists\",\"Determines whether one or more keys exist.\",\"O(N) where N is the number of keys to check.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,EXISTS_History,1,EXISTS_Tips,2,existsCommand,-2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,EXISTS_Keyspecs,1,NULL,1),.args=EXISTS_Args},\n{MAKE_CMD(\"expire\",\"Sets the expiration time of a key in seconds.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,EXPIRE_History,1,EXPIRE_Tips,0,expireCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,EXPIRE_Keyspecs,1,NULL,3),.args=EXPIRE_Args},\n{MAKE_CMD(\"expireat\",\"Sets the expiration time of a key to a Unix timestamp.\",\"O(1)\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,EXPIREAT_History,1,EXPIREAT_Tips,0,expireatCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,EXPIREAT_Keyspecs,1,NULL,3),.args=EXPIREAT_Args},\n{MAKE_CMD(\"expiretime\",\"Returns the expiration time of a key as a Unix timestamp.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,EXPIRETIME_History,0,EXPIRETIME_Tips,0,expiretimeCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,EXPIRETIME_Keyspecs,1,NULL,1),.args=EXPIRETIME_Args},\n{MAKE_CMD(\"keys\",\"Returns all key names that match a pattern.\",\"O(N) with N being the number of keys in the database, under the assumption that the key names in the database and the given pattern have limited length.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,KEYS_History,0,KEYS_Tips,2,keysCommand,2,CMD_READONLY,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,KEYS_Keyspecs,0,NULL,1),.args=KEYS_Args},\n{MAKE_CMD(\"migrate\",\"Atomically transfers a 
key from one Redis instance to another.\",\"This command actually executes a DUMP+DEL in the source instance, and a RESTORE in the target instance. See the pages of these commands for time complexity. Also an O(N) data transfer between the two instances is performed.\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,MIGRATE_History,4,MIGRATE_Tips,1,migrateCommand,-6,CMD_WRITE,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,MIGRATE_Keyspecs,2,migrateGetKeys,9),.args=MIGRATE_Args},\n{MAKE_CMD(\"move\",\"Moves a key to another database.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,MOVE_History,0,MOVE_Tips,0,moveCommand,3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,MOVE_Keyspecs,1,NULL,2),.args=MOVE_Args},\n{MAKE_CMD(\"object\",\"A container for object introspection commands.\",\"Depends on subcommand.\",\"2.2.3\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,OBJECT_History,0,OBJECT_Tips,0,NULL,-2,0,0,OBJECT_Keyspecs,0,NULL,0),.subcommands=OBJECT_Subcommands},\n{MAKE_CMD(\"persist\",\"Removes the expiration time of a key.\",\"O(1)\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,PERSIST_History,0,PERSIST_Tips,0,persistCommand,2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,PERSIST_Keyspecs,1,NULL,1),.args=PERSIST_Args},\n{MAKE_CMD(\"pexpire\",\"Sets the expiration time of a key in milliseconds.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,PEXPIRE_History,1,PEXPIRE_Tips,0,pexpireCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,PEXPIRE_Keyspecs,1,NULL,3),.args=PEXPIRE_Args},\n{MAKE_CMD(\"pexpireat\",\"Sets the expiration time of a key to a Unix milliseconds timestamp.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,PEXPIREAT_History,1,PEXPIREAT_Tips,0,pexpireatCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,PEXPIREAT_Keyspecs,1,NULL,3),.args=PEXPIREAT_Args},\n{MAKE_CMD(\"pexpiretime\",\"Returns the expiration time of a key as a 
Unix milliseconds timestamp.\",\"O(1)\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,PEXPIRETIME_History,0,PEXPIRETIME_Tips,0,pexpiretimeCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,PEXPIRETIME_Keyspecs,1,NULL,1),.args=PEXPIRETIME_Args},\n{MAKE_CMD(\"pttl\",\"Returns the expiration time in milliseconds of a key.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,PTTL_History,1,PTTL_Tips,1,pttlCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,PTTL_Keyspecs,1,NULL,1),.args=PTTL_Args},\n{MAKE_CMD(\"randomkey\",\"Returns a random key name from the database.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,RANDOMKEY_History,0,RANDOMKEY_Tips,3,randomkeyCommand,1,CMD_READONLY|CMD_TOUCHES_ARBITRARY_KEYS,ACL_CATEGORY_KEYSPACE,RANDOMKEY_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"rename\",\"Renames a key and overwrites the destination.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,RENAME_History,0,RENAME_Tips,0,renameCommand,3,CMD_WRITE,ACL_CATEGORY_KEYSPACE,RENAME_Keyspecs,2,NULL,2),.args=RENAME_Args},\n{MAKE_CMD(\"renamenx\",\"Renames a key only when the target key name doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,RENAMENX_History,1,RENAMENX_Tips,0,renamenxCommand,3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,RENAMENX_Keyspecs,2,NULL,2),.args=RENAMENX_Args},\n{MAKE_CMD(\"restore\",\"Creates a key from the serialized representation of a value.\",\"O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). 
However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,RESTORE_History,3,RESTORE_Tips,0,restoreCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,RESTORE_Keyspecs,1,NULL,7),.args=RESTORE_Args},\n{MAKE_CMD(\"scan\",\"Iterates over the key names in the database.\",\"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,SCAN_History,1,SCAN_Tips,3,scanCommand,-2,CMD_READONLY|CMD_TOUCHES_ARBITRARY_KEYS,ACL_CATEGORY_KEYSPACE,SCAN_Keyspecs,0,NULL,4),.args=SCAN_Args},\n{MAKE_CMD(\"sort\",\"Sorts the elements in a list, a set, or a sorted set, optionally storing the result.\",\"O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. When the elements are not sorted, complexity is O(N).\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,SORT_History,0,SORT_Tips,0,sortCommand,-2,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SET|ACL_CATEGORY_SORTEDSET|ACL_CATEGORY_LIST|ACL_CATEGORY_DANGEROUS,SORT_Keyspecs,3,sortGetKeys,7),.args=SORT_Args},\n{MAKE_CMD(\"sort_ro\",\"Returns the sorted elements of a list, a set, or a sorted set.\",\"O(N+M*log(M)) where N is the number of elements in the list or set to sort, and M the number of returned elements. 
When the elements are not sorted, complexity is O(N).\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,SORT_RO_History,0,SORT_RO_Tips,0,sortroCommand,-2,CMD_READONLY,ACL_CATEGORY_SET|ACL_CATEGORY_SORTEDSET|ACL_CATEGORY_LIST|ACL_CATEGORY_DANGEROUS,SORT_RO_Keyspecs,2,sortROGetKeys,6),.args=SORT_RO_Args},\n{MAKE_CMD(\"touch\",\"Returns the number of existing keys out of those specified after updating the time they were last accessed.\",\"O(N) where N is the number of keys that will be touched.\",\"3.2.1\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,TOUCH_History,0,TOUCH_Tips,2,touchCommand,-2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,TOUCH_Keyspecs,1,NULL,1),.args=TOUCH_Args},\n{MAKE_CMD(\"ttl\",\"Returns the expiration time in seconds of a key.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,TTL_History,1,TTL_Tips,1,ttlCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,TTL_Keyspecs,1,NULL,1),.args=TTL_Args},\n{MAKE_CMD(\"type\",\"Determines the type of value stored at a key.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,TYPE_History,0,TYPE_Tips,0,typeCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,TYPE_Keyspecs,1,NULL,1),.args=TYPE_Args},\n{MAKE_CMD(\"unlink\",\"Asynchronously deletes one or more keys.\",\"O(1) for each key removed regardless of its size. 
Then the command does O(N) work in a different thread in order to reclaim memory, where N is the number of allocations the deleted objects were composed of.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,UNLINK_History,0,UNLINK_Tips,2,unlinkCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE,UNLINK_Keyspecs,1,NULL,1),.args=UNLINK_Args},\n{MAKE_CMD(\"wait\",\"Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed.\",\"O(1)\",\"3.0.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,WAIT_History,0,WAIT_Tips,2,waitCommand,3,CMD_BLOCKING,ACL_CATEGORY_CONNECTION,WAIT_Keyspecs,0,NULL,2),.args=WAIT_Args},\n{MAKE_CMD(\"waitaof\",\"Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas.\",\"O(1)\",\"7.2.0\",CMD_DOC_NONE,NULL,NULL,\"generic\",COMMAND_GROUP_GENERIC,WAITAOF_History,0,WAITAOF_Tips,2,waitaofCommand,4,CMD_BLOCKING,ACL_CATEGORY_CONNECTION,WAITAOF_Keyspecs,0,NULL,3),.args=WAITAOF_Args},\n/* geo */\n{MAKE_CMD(\"geoadd\",\"Adds one or more members to a geospatial index. 
The key is created if it doesn't exist.\",\"O(log(N)) for each item added, where N is the number of elements in the sorted set.\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEOADD_History,1,GEOADD_Tips,0,geoaddCommand,-5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_GEO,GEOADD_Keyspecs,1,NULL,4),.args=GEOADD_Args},\n{MAKE_CMD(\"geodist\",\"Returns the distance between two members of a geospatial index.\",\"O(1)\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEODIST_History,0,GEODIST_Tips,0,geodistCommand,-4,CMD_READONLY,ACL_CATEGORY_GEO,GEODIST_Keyspecs,1,NULL,4),.args=GEODIST_Args},\n{MAKE_CMD(\"geohash\",\"Returns members from a geospatial index as geohash strings.\",\"O(1) for each member requested.\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEOHASH_History,0,GEOHASH_Tips,0,geohashCommand,-2,CMD_READONLY,ACL_CATEGORY_GEO,GEOHASH_Keyspecs,1,NULL,2),.args=GEOHASH_Args},\n{MAKE_CMD(\"geopos\",\"Returns the longitude and latitude of members from a geospatial index.\",\"O(1) for each member requested.\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEOPOS_History,0,GEOPOS_Tips,0,geoposCommand,-2,CMD_READONLY,ACL_CATEGORY_GEO,GEOPOS_Keyspecs,1,NULL,2),.args=GEOPOS_Args},\n{MAKE_CMD(\"georadius\",\"Queries a geospatial index for members within a distance from a coordinate, optionally stores the result.\",\"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\"3.2.0\",CMD_DOC_DEPRECATED,\"`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` argument\",\"6.2.0\",\"geo\",COMMAND_GROUP_GEO,GEORADIUS_History,2,GEORADIUS_Tips,0,georadiusCommand,-6,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_GEO,GEORADIUS_Keyspecs,3,georadiusGetKeys,11),.args=GEORADIUS_Args},\n{MAKE_CMD(\"georadiusbymember\",\"Queries a geospatial index for members within a distance from a member, optionally stores the result.\",\"O(N+log(M)) where N 
is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\"3.2.0\",CMD_DOC_DEPRECATED,\"`GEOSEARCH` and `GEOSEARCHSTORE` with the `BYRADIUS` and `FROMMEMBER` arguments\",\"6.2.0\",\"geo\",COMMAND_GROUP_GEO,GEORADIUSBYMEMBER_History,2,GEORADIUSBYMEMBER_Tips,0,georadiusbymemberCommand,-5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_GEO,GEORADIUSBYMEMBER_Keyspecs,3,georadiusGetKeys,10),.args=GEORADIUSBYMEMBER_Args},\n{MAKE_CMD(\"georadiusbymember_ro\",\"Returns members from a geospatial index that are within a distance from a member.\",\"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\"3.2.10\",CMD_DOC_DEPRECATED,\"`GEOSEARCH` with the `BYRADIUS` and `FROMMEMBER` arguments\",\"6.2.0\",\"geo\",COMMAND_GROUP_GEO,GEORADIUSBYMEMBER_RO_History,2,GEORADIUSBYMEMBER_RO_Tips,0,georadiusbymemberroCommand,-5,CMD_READONLY,ACL_CATEGORY_GEO,GEORADIUSBYMEMBER_RO_Keyspecs,1,NULL,9),.args=GEORADIUSBYMEMBER_RO_Args},\n{MAKE_CMD(\"georadius_ro\",\"Returns members from a geospatial index that are within a distance from a coordinate.\",\"O(N+log(M)) where N is the number of elements inside the bounding box of the circular area delimited by center and radius and M is the number of items inside the index.\",\"3.2.10\",CMD_DOC_DEPRECATED,\"`GEOSEARCH` with the `BYRADIUS` argument\",\"6.2.0\",\"geo\",COMMAND_GROUP_GEO,GEORADIUS_RO_History,2,GEORADIUS_RO_Tips,0,georadiusroCommand,-6,CMD_READONLY,ACL_CATEGORY_GEO,GEORADIUS_RO_Keyspecs,1,NULL,10),.args=GEORADIUS_RO_Args},\n{MAKE_CMD(\"geosearch\",\"Queries a geospatial index for members inside an area of a box or a circle.\",\"O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the 
shape\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEOSEARCH_History,1,GEOSEARCH_Tips,0,geosearchCommand,-7,CMD_READONLY,ACL_CATEGORY_GEO,GEOSEARCH_Keyspecs,1,NULL,8),.args=GEOSEARCH_Args},\n{MAKE_CMD(\"geosearchstore\",\"Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result.\",\"O(N+log(M)) where N is the number of elements in the grid-aligned bounding box area around the shape provided as the filter and M is the number of items inside the shape\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"geo\",COMMAND_GROUP_GEO,GEOSEARCHSTORE_History,1,GEOSEARCHSTORE_Tips,0,geosearchstoreCommand,-8,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_GEO,GEOSEARCHSTORE_Keyspecs,2,NULL,7),.args=GEOSEARCHSTORE_Args},\n/* hash */\n{MAKE_CMD(\"hdel\",\"Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain.\",\"O(N) where N is the number of fields to be removed.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HDEL_History,1,HDEL_Tips,0,hdelCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HDEL_Keyspecs,1,NULL,2),.args=HDEL_Args},\n{MAKE_CMD(\"hexists\",\"Determines whether a field exists in a hash.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HEXISTS_History,0,HEXISTS_Tips,0,hexistsCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HEXISTS_Keyspecs,1,NULL,2),.args=HEXISTS_Args},\n{MAKE_CMD(\"hexpire\",\"Set expiry for hash field using relative time to expire (seconds)\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HEXPIRE_History,0,HEXPIRE_Tips,0,hexpireCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HEXPIRE_Keyspecs,1,NULL,4),.args=HEXPIRE_Args},\n{MAKE_CMD(\"hexpireat\",\"Set expiry for hash field using an absolute Unix timestamp (seconds)\",\"O(N) where N is the number of specified 
fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HEXPIREAT_History,0,HEXPIREAT_Tips,0,hexpireatCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HEXPIREAT_Keyspecs,1,NULL,4),.args=HEXPIREAT_Args},\n{MAKE_CMD(\"hexpiretime\",\"Returns the expiration time of a hash field as a Unix timestamp, in seconds.\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HEXPIRETIME_History,0,HEXPIRETIME_Tips,0,hexpiretimeCommand,-5,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HEXPIRETIME_Keyspecs,1,NULL,2),.args=HEXPIRETIME_Args},\n{MAKE_CMD(\"hget\",\"Returns the value of a field in a hash.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HGET_History,0,HGET_Tips,0,hgetCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HGET_Keyspecs,1,NULL,2),.args=HGET_Args},\n{MAKE_CMD(\"hgetall\",\"Returns all fields and values in a hash.\",\"O(N) where N is the size of the hash.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HGETALL_History,0,HGETALL_Tips,1,hgetallCommand,2,CMD_READONLY,ACL_CATEGORY_HASH,HGETALL_Keyspecs,1,NULL,1),.args=HGETALL_Args},\n{MAKE_CMD(\"hgetdel\",\"Returns the value of a field and deletes it from the hash.\",\"O(N) where N is the number of specified fields\",\"8.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HGETDEL_History,0,HGETDEL_Tips,0,hgetdelCommand,-5,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HGETDEL_Keyspecs,1,NULL,2),.args=HGETDEL_Args},\n{MAKE_CMD(\"hgetex\",\"Get the value of one or more fields of a given hash key, and optionally set their expiration.\",\"O(N) where N is the number of specified fields\",\"8.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HGETEX_History,0,HGETEX_Tips,0,hgetexCommand,-5,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HGETEX_Keyspecs,1,NULL,3),.args=HGETEX_Args},\n{MAKE_CMD(\"hincrby\",\"Increments the integer value of a field in a hash by a number. 
Uses 0 as initial value if the field doesn't exist.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HINCRBY_History,0,HINCRBY_Tips,0,hincrbyCommand,4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HINCRBY_Keyspecs,1,NULL,3),.args=HINCRBY_Args},\n{MAKE_CMD(\"hincrbyfloat\",\"Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HINCRBYFLOAT_History,0,HINCRBYFLOAT_Tips,0,hincrbyfloatCommand,4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HINCRBYFLOAT_Keyspecs,1,NULL,3),.args=HINCRBYFLOAT_Args},\n{MAKE_CMD(\"hkeys\",\"Returns all fields in a hash.\",\"O(N) where N is the size of the hash.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HKEYS_History,0,HKEYS_Tips,1,hkeysCommand,2,CMD_READONLY,ACL_CATEGORY_HASH,HKEYS_Keyspecs,1,NULL,1),.args=HKEYS_Args},\n{MAKE_CMD(\"hlen\",\"Returns the number of fields in a hash.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HLEN_History,0,HLEN_Tips,0,hlenCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HLEN_Keyspecs,1,NULL,1),.args=HLEN_Args},\n{MAKE_CMD(\"hmget\",\"Returns the values of all fields in a hash.\",\"O(N) where N is the number of fields being requested.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HMGET_History,0,HMGET_Tips,0,hmgetCommand,-3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HMGET_Keyspecs,1,NULL,2),.args=HMGET_Args},\n{MAKE_CMD(\"hmset\",\"Sets the values of multiple fields.\",\"O(N) where N is the number of fields being set.\",\"2.0.0\",CMD_DOC_DEPRECATED,\"`HSET` with multiple field-value pairs\",\"4.0.0\",\"hash\",COMMAND_GROUP_HASH,HMSET_History,0,HMSET_Tips,0,hsetCommand,-4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HMSET_Keyspecs,1,NULL,2),.args=HMSET_Args},\n{MAKE_CMD(\"hpersist\",\"Removes the expiration time for each specified field\",\"O(N) where N is the number of specified 
fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HPERSIST_History,0,HPERSIST_Tips,0,hpersistCommand,-5,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HPERSIST_Keyspecs,1,NULL,2),.args=HPERSIST_Args},\n{MAKE_CMD(\"hpexpire\",\"Set expiry for hash field using relative time to expire (milliseconds)\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HPEXPIRE_History,0,HPEXPIRE_Tips,0,hpexpireCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HPEXPIRE_Keyspecs,1,NULL,4),.args=HPEXPIRE_Args},\n{MAKE_CMD(\"hpexpireat\",\"Set expiry for hash field using an absolute Unix timestamp (milliseconds)\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HPEXPIREAT_History,0,HPEXPIREAT_Tips,0,hpexpireatCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_HASH,HPEXPIREAT_Keyspecs,1,NULL,4),.args=HPEXPIREAT_Args},\n{MAKE_CMD(\"hpexpiretime\",\"Returns the expiration time of a hash field as a Unix timestamp, in msec.\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HPEXPIRETIME_History,0,HPEXPIRETIME_Tips,0,hpexpiretimeCommand,-5,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HPEXPIRETIME_Keyspecs,1,NULL,2),.args=HPEXPIRETIME_Args},\n{MAKE_CMD(\"hpttl\",\"Returns the TTL in milliseconds of a hash field.\",\"O(N) where N is the number of specified fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HPTTL_History,0,HPTTL_Tips,1,hpttlCommand,-5,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HPTTL_Keyspecs,1,NULL,2),.args=HPTTL_Args},\n{MAKE_CMD(\"hrandfield\",\"Returns one or more random fields from a hash.\",\"O(N) where N is the number of fields returned\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HRANDFIELD_History,0,HRANDFIELD_Tips,1,hrandfieldCommand,-2,CMD_READONLY,ACL_CATEGORY_HASH,HRANDFIELD_Keyspecs,1,NULL,2),.args=HRANDFIELD_Args},\n{MAKE_CMD(\"hscan\",\"Iterates over 
fields and values of a hash.\",\"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HSCAN_History,0,HSCAN_Tips,1,hscanCommand,-3,CMD_READONLY,ACL_CATEGORY_HASH,HSCAN_Keyspecs,1,NULL,5),.args=HSCAN_Args},\n{MAKE_CMD(\"hset\",\"Creates or modifies the value of a field in a hash.\",\"O(1) for each field/value pair added, so O(N) to add N field/value pairs when the command is called with multiple field/value pairs.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HSET_History,1,HSET_Tips,0,hsetCommand,-4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HSET_Keyspecs,1,NULL,2),.args=HSET_Args},\n{MAKE_CMD(\"hsetex\",\"Set the value of one or more fields of a given hash key, and optionally set their expiration.\",\"O(N) where N is the number of fields being set.\",\"8.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HSETEX_History,0,HSETEX_Tips,0,hsetexCommand,-6,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HSETEX_Keyspecs,1,NULL,4),.args=HSETEX_Args},\n{MAKE_CMD(\"hsetnx\",\"Sets the value of a field in a hash only when the field doesn't exist.\",\"O(1)\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HSETNX_History,0,HSETNX_Tips,0,hsetnxCommand,4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HASH,HSETNX_Keyspecs,1,NULL,3),.args=HSETNX_Args},\n{MAKE_CMD(\"hstrlen\",\"Returns the length of the value of a field.\",\"O(1)\",\"3.2.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HSTRLEN_History,0,HSTRLEN_Tips,0,hstrlenCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HSTRLEN_Keyspecs,1,NULL,2),.args=HSTRLEN_Args},\n{MAKE_CMD(\"httl\",\"Returns the TTL in seconds of a hash field.\",\"O(N) where N is the number of specified 
fields\",\"7.4.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HTTL_History,0,HTTL_Tips,1,httlCommand,-5,CMD_READONLY|CMD_FAST,ACL_CATEGORY_HASH,HTTL_Keyspecs,1,NULL,2),.args=HTTL_Args},\n{MAKE_CMD(\"hvals\",\"Returns all values in a hash.\",\"O(N) where N is the size of the hash.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"hash\",COMMAND_GROUP_HASH,HVALS_History,0,HVALS_Tips,1,hvalsCommand,2,CMD_READONLY,ACL_CATEGORY_HASH,HVALS_Keyspecs,1,NULL,1),.args=HVALS_Args},\n/* hyperloglog */\n{MAKE_CMD(\"pfadd\",\"Adds elements to a HyperLogLog key. Creates the key if it doesn't exist.\",\"O(1) to add every element.\",\"2.8.9\",CMD_DOC_NONE,NULL,NULL,\"hyperloglog\",COMMAND_GROUP_HYPERLOGLOG,PFADD_History,0,PFADD_Tips,0,pfaddCommand,-2,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_HYPERLOGLOG,PFADD_Keyspecs,1,NULL,2),.args=PFADD_Args},\n{MAKE_CMD(\"pfcount\",\"Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s).\",\"O(1) with a very small average constant time when called with a single key. 
O(N) with N being the number of keys, and much bigger constant times, when called with multiple keys.\",\"2.8.9\",CMD_DOC_NONE,NULL,NULL,\"hyperloglog\",COMMAND_GROUP_HYPERLOGLOG,PFCOUNT_History,0,PFCOUNT_Tips,0,pfcountCommand,-2,CMD_READONLY|CMD_MAY_REPLICATE,ACL_CATEGORY_HYPERLOGLOG,PFCOUNT_Keyspecs,1,NULL,1),.args=PFCOUNT_Args},\n{MAKE_CMD(\"pfdebug\",\"Internal commands for debugging HyperLogLog values.\",\"N/A\",\"2.8.9\",CMD_DOC_SYSCMD,NULL,NULL,\"hyperloglog\",COMMAND_GROUP_HYPERLOGLOG,PFDEBUG_History,0,PFDEBUG_Tips,0,pfdebugCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_ADMIN,ACL_CATEGORY_HYPERLOGLOG,PFDEBUG_Keyspecs,1,NULL,2),.args=PFDEBUG_Args},\n{MAKE_CMD(\"pfmerge\",\"Merges one or more HyperLogLog values into a single key.\",\"O(N) to merge N HyperLogLogs, but with high constant times.\",\"2.8.9\",CMD_DOC_NONE,NULL,NULL,\"hyperloglog\",COMMAND_GROUP_HYPERLOGLOG,PFMERGE_History,0,PFMERGE_Tips,0,pfmergeCommand,-2,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_HYPERLOGLOG,PFMERGE_Keyspecs,2,pfmergeGetKeys,2),.args=PFMERGE_Args},\n{MAKE_CMD(\"pfselftest\",\"An internal command for testing HyperLogLog values.\",\"N/A\",\"2.8.9\",CMD_DOC_SYSCMD,NULL,NULL,\"hyperloglog\",COMMAND_GROUP_HYPERLOGLOG,PFSELFTEST_History,0,PFSELFTEST_Tips,0,pfselftestCommand,1,CMD_ADMIN,ACL_CATEGORY_HYPERLOGLOG,PFSELFTEST_Keyspecs,0,NULL,0)},\n/* list */\n{MAKE_CMD(\"blmove\",\"Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,BLMOVE_History,0,BLMOVE_Tips,0,blmoveCommand,6,CMD_WRITE|CMD_DENYOOM|CMD_BLOCKING,ACL_CATEGORY_LIST,BLMOVE_Keyspecs,2,NULL,5),.args=BLMOVE_Args},\n{MAKE_CMD(\"blmpop\",\"Pops the first element from one of multiple lists. Blocks until an element is available otherwise. 
Deletes the list if the last element was popped.\",\"O(N+M) where N is the number of provided keys and M is the number of elements returned.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,BLMPOP_History,0,BLMPOP_Tips,0,blmpopCommand,-5,CMD_WRITE|CMD_BLOCKING,ACL_CATEGORY_LIST,BLMPOP_Keyspecs,1,blmpopGetKeys,5),.args=BLMPOP_Args},\n{MAKE_CMD(\"blpop\",\"Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\"O(N) where N is the number of provided keys.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,BLPOP_History,1,BLPOP_Tips,0,blpopCommand,-3,CMD_WRITE|CMD_BLOCKING,ACL_CATEGORY_LIST,BLPOP_Keyspecs,1,NULL,2),.args=BLPOP_Args},\n{MAKE_CMD(\"brpop\",\"Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\"O(N) where N is the number of provided keys.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,BRPOP_History,1,BRPOP_Tips,0,brpopCommand,-3,CMD_WRITE|CMD_BLOCKING,ACL_CATEGORY_LIST,BRPOP_Keyspecs,1,NULL,2),.args=BRPOP_Args},\n{MAKE_CMD(\"brpoplpush\",\"Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was popped.\",\"O(1)\",\"2.2.0\",CMD_DOC_DEPRECATED,\"`BLMOVE` with the `RIGHT` and `LEFT` arguments\",\"6.2.0\",\"list\",COMMAND_GROUP_LIST,BRPOPLPUSH_History,1,BRPOPLPUSH_Tips,0,brpoplpushCommand,4,CMD_WRITE|CMD_DENYOOM|CMD_BLOCKING,ACL_CATEGORY_LIST,BRPOPLPUSH_Keyspecs,2,NULL,3),.args=BRPOPLPUSH_Args},\n{MAKE_CMD(\"lindex\",\"Returns an element from a list by its index.\",\"O(N) where N is the number of elements to traverse to get to the element at index. 
This makes asking for the first or the last element of the list O(1).\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LINDEX_History,0,LINDEX_Tips,0,lindexCommand,3,CMD_READONLY,ACL_CATEGORY_LIST,LINDEX_Keyspecs,1,NULL,2),.args=LINDEX_Args},\n{MAKE_CMD(\"linsert\",\"Inserts an element before or after another element in a list.\",\"O(N) where N is the number of elements to traverse before seeing the value pivot. This means that inserting somewhere on the left end of the list (head) can be considered O(1) and inserting somewhere on the right end (tail) is O(N).\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LINSERT_History,0,LINSERT_Tips,0,linsertCommand,5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_LIST,LINSERT_Keyspecs,1,NULL,4),.args=LINSERT_Args},\n{MAKE_CMD(\"llen\",\"Returns the length of a list.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LLEN_History,0,LLEN_Tips,0,llenCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_LIST,LLEN_Keyspecs,1,NULL,1),.args=LLEN_Args},\n{MAKE_CMD(\"lmove\",\"Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LMOVE_History,0,LMOVE_Tips,0,lmoveCommand,5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_LIST,LMOVE_Keyspecs,2,NULL,4),.args=LMOVE_Args},\n{MAKE_CMD(\"lmpop\",\"Returns multiple elements from a list after removing them. Deletes the list if the last element was popped.\",\"O(N+M) where N is the number of provided keys and M is the number of elements returned.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LMPOP_History,0,LMPOP_Tips,0,lmpopCommand,-4,CMD_WRITE,ACL_CATEGORY_LIST,LMPOP_Keyspecs,1,lmpopGetKeys,4),.args=LMPOP_Args},\n{MAKE_CMD(\"lpop\",\"Returns the first elements in a list after removing them. 
Deletes the list if the last element was popped.\",\"O(N) where N is the number of elements returned\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LPOP_History,1,LPOP_Tips,0,lpopCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_LIST,LPOP_Keyspecs,1,NULL,2),.args=LPOP_Args},\n{MAKE_CMD(\"lpos\",\"Returns the index of matching elements in a list.\",\"O(N) where N is the number of elements in the list, for the average case. When searching for elements near the head or the tail of the list, or when the MAXLEN option is provided, the command may run in constant time.\",\"6.0.6\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LPOS_History,0,LPOS_Tips,0,lposCommand,-3,CMD_READONLY,ACL_CATEGORY_LIST,LPOS_Keyspecs,1,NULL,5),.args=LPOS_Args},\n{MAKE_CMD(\"lpush\",\"Prepends one or more elements to a list. Creates the key if it doesn't exist.\",\"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LPUSH_History,1,LPUSH_Tips,0,lpushCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_LIST,LPUSH_Keyspecs,1,NULL,2),.args=LPUSH_Args},\n{MAKE_CMD(\"lpushx\",\"Prepends one or more elements to a list only when the list exists.\",\"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LPUSHX_History,1,LPUSHX_Tips,0,lpushxCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_LIST,LPUSHX_Keyspecs,1,NULL,2),.args=LPUSHX_Args},\n{MAKE_CMD(\"lrange\",\"Returns a range of elements from a list.\",\"O(S+N) where S is the distance of start offset from HEAD for small lists, from nearest end (HEAD or TAIL) for large lists; and N is the number of elements in the specified 
range.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LRANGE_History,0,LRANGE_Tips,0,lrangeCommand,4,CMD_READONLY,ACL_CATEGORY_LIST,LRANGE_Keyspecs,1,NULL,3),.args=LRANGE_Args},\n{MAKE_CMD(\"lrem\",\"Removes elements from a list. Deletes the list if the last element was removed.\",\"O(N+M) where N is the length of the list and M is the number of elements removed.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LREM_History,0,LREM_Tips,0,lremCommand,4,CMD_WRITE,ACL_CATEGORY_LIST,LREM_Keyspecs,1,NULL,3),.args=LREM_Args},\n{MAKE_CMD(\"lset\",\"Sets the value of an element in a list by its index.\",\"O(N) where N is the length of the list. Setting either the first or the last element of the list is O(1).\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LSET_History,0,LSET_Tips,0,lsetCommand,4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_LIST,LSET_Keyspecs,1,NULL,3),.args=LSET_Args},\n{MAKE_CMD(\"ltrim\",\"Removes elements from both ends of a list. Deletes the list if all elements were trimmed.\",\"O(N) where N is the number of elements to be removed by the operation.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,LTRIM_History,0,LTRIM_Tips,0,ltrimCommand,4,CMD_WRITE,ACL_CATEGORY_LIST,LTRIM_Keyspecs,1,NULL,3),.args=LTRIM_Args},\n{MAKE_CMD(\"rpop\",\"Returns and removes the last elements of a list. Deletes the list if the last element was popped.\",\"O(N) where N is the number of elements returned\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,RPOP_History,1,RPOP_Tips,0,rpopCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_LIST,RPOP_Keyspecs,1,NULL,2),.args=RPOP_Args},\n{MAKE_CMD(\"rpoplpush\",\"Returns the last element of a list after removing and pushing it to another list. 
Deletes the list if the last element was popped.\",\"O(1)\",\"1.2.0\",CMD_DOC_DEPRECATED,\"`LMOVE` with the `RIGHT` and `LEFT` arguments\",\"6.2.0\",\"list\",COMMAND_GROUP_LIST,RPOPLPUSH_History,0,RPOPLPUSH_Tips,0,rpoplpushCommand,3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_LIST,RPOPLPUSH_Keyspecs,2,NULL,2),.args=RPOPLPUSH_Args},\n{MAKE_CMD(\"rpush\",\"Appends one or more elements to a list. Creates the key if it doesn't exist.\",\"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,RPUSH_History,1,RPUSH_Tips,0,rpushCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_LIST,RPUSH_Keyspecs,1,NULL,2),.args=RPUSH_Args},\n{MAKE_CMD(\"rpushx\",\"Appends an element to a list only when the list exists.\",\"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"list\",COMMAND_GROUP_LIST,RPUSHX_History,1,RPUSHX_Tips,0,rpushxCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_LIST,RPUSHX_Keyspecs,1,NULL,2),.args=RPUSHX_Args},\n/* pubsub */\n{MAKE_CMD(\"psubscribe\",\"Listens for messages published to channels that match one or more patterns.\",\"O(N) where N is the number of patterns to subscribe to.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PSUBSCRIBE_History,0,PSUBSCRIBE_Tips,0,psubscribeCommand,-2,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,PSUBSCRIBE_Keyspecs,0,NULL,1),.args=PSUBSCRIBE_Args},\n{MAKE_CMD(\"publish\",\"Posts a message to a channel.\",\"O(N+M) where N is the number of clients subscribed to the receiving channel and M is the total number of subscribed patterns (by any 
client).\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBLISH_History,0,PUBLISH_Tips,0,publishCommand,3,CMD_PUBSUB|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_MAY_REPLICATE|CMD_SENTINEL,0,PUBLISH_Keyspecs,0,NULL,2),.args=PUBLISH_Args},\n{MAKE_CMD(\"pubsub\",\"A container for Pub/Sub commands.\",\"Depends on subcommand.\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUBSUB_History,0,PUBSUB_Tips,0,NULL,-2,0,0,PUBSUB_Keyspecs,0,NULL,0),.subcommands=PUBSUB_Subcommands},\n{MAKE_CMD(\"punsubscribe\",\"Stops listening to messages published to channels that match one or more patterns.\",\"O(N) where N is the number of patterns to unsubscribe.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,PUNSUBSCRIBE_History,0,PUNSUBSCRIBE_Tips,0,punsubscribeCommand,-1,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,PUNSUBSCRIBE_Keyspecs,0,NULL,1),.args=PUNSUBSCRIBE_Args},\n{MAKE_CMD(\"spublish\",\"Posts a message to a shard channel.\",\"O(N) where N is the number of clients subscribed to the receiving shard channel.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,SPUBLISH_History,0,SPUBLISH_Tips,0,spublishCommand,3,CMD_PUBSUB|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_MAY_REPLICATE,0,SPUBLISH_Keyspecs,1,NULL,2),.args=SPUBLISH_Args},\n{MAKE_CMD(\"ssubscribe\",\"Listens for messages published to shard channels.\",\"O(N) where N is the number of shard channels to subscribe to.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,SSUBSCRIBE_History,0,SSUBSCRIBE_Tips,0,ssubscribeCommand,-2,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,SSUBSCRIBE_Keyspecs,1,NULL,1),.args=SSUBSCRIBE_Args},\n{MAKE_CMD(\"subscribe\",\"Listens for messages published to channels.\",\"O(N) where N is the number of channels to subscribe 
to.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,SUBSCRIBE_History,0,SUBSCRIBE_Tips,0,subscribeCommand,-2,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,SUBSCRIBE_Keyspecs,0,NULL,1),.args=SUBSCRIBE_Args},\n{MAKE_CMD(\"sunsubscribe\",\"Stops listening to messages posted to shard channels.\",\"O(N) where N is the number of shard channels to unsubscribe.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,SUNSUBSCRIBE_History,0,SUNSUBSCRIBE_Tips,0,sunsubscribeCommand,-1,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,SUNSUBSCRIBE_Keyspecs,1,NULL,1),.args=SUNSUBSCRIBE_Args},\n{MAKE_CMD(\"unsubscribe\",\"Stops listening to messages posted to channels.\",\"O(N) where N is the number of channels to unsubscribe.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"pubsub\",COMMAND_GROUP_PUBSUB,UNSUBSCRIBE_History,0,UNSUBSCRIBE_Tips,0,unsubscribeCommand,-1,CMD_PUBSUB|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SENTINEL,0,UNSUBSCRIBE_Keyspecs,0,NULL,1),.args=UNSUBSCRIBE_Args},\n/* rate_limit */\n{MAKE_CMD(\"gcra\",\"Rate limit via GCRA (Generic Cell Rate Algorithm).\",\"O(1)\",\"8.8.0\",CMD_DOC_NONE,NULL,NULL,\"rate_limit\",COMMAND_GROUP_RATE_LIMIT,GCRA_History,0,GCRA_Tips,0,gcraCommand,-5,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_RATE_LIMIT,GCRA_Keyspecs,1,NULL,5),.args=GCRA_Args},\n{MAKE_CMD(\"gcrasetvalue\",\"An internal command for recording a GCRA TAT value during AOF rewrite and replication.\",\"O(1)\",\"8.8.0\",CMD_DOC_NONE,NULL,NULL,\"rate_limit\",COMMAND_GROUP_RATE_LIMIT,GCRASETVALUE_History,0,GCRASETVALUE_Tips,0,gcraSetValueCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_RATE_LIMIT,GCRASETVALUE_Keyspecs,1,NULL,2),.args=GCRASETVALUE_Args},\n/* scripting */\n{MAKE_CMD(\"eval\",\"Executes a server-side Lua script.\",\"Depends on the script that is 
executed.\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,EVAL_History,0,EVAL_Tips,0,evalCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_MAY_REPLICATE|CMD_NO_MANDATORY_KEYS|CMD_STALE,ACL_CATEGORY_SCRIPTING,EVAL_Keyspecs,1,evalGetKeys,4),.args=EVAL_Args},\n{MAKE_CMD(\"evalsha\",\"Executes a server-side Lua script by SHA1 digest.\",\"Depends on the script that is executed.\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,EVALSHA_History,0,EVALSHA_Tips,0,evalShaCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_MAY_REPLICATE|CMD_NO_MANDATORY_KEYS|CMD_STALE,ACL_CATEGORY_SCRIPTING,EVALSHA_Keyspecs,1,evalGetKeys,4),.args=EVALSHA_Args},\n{MAKE_CMD(\"evalsha_ro\",\"Executes a read-only server-side Lua script by SHA1 digest.\",\"Depends on the script that is executed.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,EVALSHA_RO_History,0,EVALSHA_RO_Tips,0,evalShaRoCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_NO_MANDATORY_KEYS|CMD_STALE|CMD_READONLY,ACL_CATEGORY_SCRIPTING,EVALSHA_RO_Keyspecs,1,evalGetKeys,4),.args=EVALSHA_RO_Args},\n{MAKE_CMD(\"eval_ro\",\"Executes a read-only server-side Lua script.\",\"Depends on the script that is executed.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,EVAL_RO_History,0,EVAL_RO_Tips,0,evalRoCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_NO_MANDATORY_KEYS|CMD_STALE|CMD_READONLY,ACL_CATEGORY_SCRIPTING,EVAL_RO_Keyspecs,1,evalGetKeys,4),.args=EVAL_RO_Args},\n{MAKE_CMD(\"fcall\",\"Invokes a function.\",\"Depends on the function that is executed.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FCALL_History,0,FCALL_Tips,0,fcallCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_MAY_REPLICATE|CMD_NO_MANDATORY_KEYS|CMD_STALE,ACL_CATEGORY_SCRIPTING,FCALL_Keyspecs,1,functionGetKeys,4),.args=FCALL_Args},\n{MAKE_CMD(\"fcall_ro\",\"Invokes a read-only function.\",\"Depends on the function that is 
executed.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FCALL_RO_History,0,FCALL_RO_Tips,0,fcallroCommand,-3,CMD_NOSCRIPT|CMD_SKIP_MONITOR|CMD_NO_MANDATORY_KEYS|CMD_STALE|CMD_READONLY,ACL_CATEGORY_SCRIPTING,FCALL_RO_Keyspecs,1,functionGetKeys,4),.args=FCALL_RO_Args},\n{MAKE_CMD(\"function\",\"A container for function commands.\",\"Depends on subcommand.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,FUNCTION_History,0,FUNCTION_Tips,0,NULL,-2,0,0,FUNCTION_Keyspecs,0,NULL,0),.subcommands=FUNCTION_Subcommands},\n{MAKE_CMD(\"script\",\"A container for Lua scripts management commands.\",\"Depends on subcommand.\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"scripting\",COMMAND_GROUP_SCRIPTING,SCRIPT_History,0,SCRIPT_Tips,0,NULL,-2,0,0,SCRIPT_Keyspecs,0,NULL,0),.subcommands=SCRIPT_Subcommands},\n/* sentinel */\n{MAKE_CMD(\"sentinel\",\"A container for Redis Sentinel commands.\",\"Depends on subcommand.\",\"2.8.4\",CMD_DOC_NONE,NULL,NULL,\"sentinel\",COMMAND_GROUP_SENTINEL,SENTINEL_History,0,SENTINEL_Tips,0,NULL,-2,CMD_ADMIN|CMD_SENTINEL|CMD_ONLY_SENTINEL,0,SENTINEL_Keyspecs,0,NULL,0),.subcommands=SENTINEL_Subcommands},\n/* server */\n{MAKE_CMD(\"acl\",\"A container for Access Control List commands.\",\"Depends on subcommand.\",\"6.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ACL_History,0,ACL_Tips,0,NULL,-2,CMD_SENTINEL,0,ACL_Keyspecs,0,NULL,0),.subcommands=ACL_Subcommands},\n{MAKE_CMD(\"bgrewriteaof\",\"Asynchronously rewrites the append-only file to disk.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,BGREWRITEAOF_History,0,BGREWRITEAOF_Tips,0,bgrewriteaofCommand,1,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT,0,BGREWRITEAOF_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"bgsave\",\"Asynchronously saves the database(s) to 
disk.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,BGSAVE_History,1,BGSAVE_Tips,0,bgsaveCommand,-1,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT,0,BGSAVE_Keyspecs,0,NULL,1),.args=BGSAVE_Args},\n{MAKE_CMD(\"command\",\"Returns detailed information about all commands.\",\"O(N) where N is the total number of Redis commands\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,COMMAND_History,0,COMMAND_Tips,1,commandCommand,-1,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_CONNECTION,COMMAND_Keyspecs,0,NULL,0),.subcommands=COMMAND_Subcommands},\n{MAKE_CMD(\"config\",\"A container for server configuration commands.\",\"Depends on subcommand.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,CONFIG_History,0,CONFIG_Tips,0,NULL,-2,0,0,CONFIG_Keyspecs,0,NULL,0),.subcommands=CONFIG_Subcommands},\n{MAKE_CMD(\"dbsize\",\"Returns the number of keys in the database.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,DBSIZE_History,0,DBSIZE_Tips,2,dbsizeCommand,1,CMD_READONLY|CMD_FAST,ACL_CATEGORY_KEYSPACE,DBSIZE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"debug\",\"A container for debugging commands.\",\"Depends on subcommand.\",\"1.0.0\",CMD_DOC_SYSCMD,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,DEBUG_History,0,DEBUG_Tips,0,debugCommand,-2,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_PROTECTED,0,DEBUG_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"failover\",\"Starts a coordinated failover from a server to one of its replicas.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,FAILOVER_History,0,FAILOVER_Tips,0,failoverCommand,-1,CMD_ADMIN|CMD_NOSCRIPT|CMD_STALE,0,FAILOVER_Keyspecs,0,NULL,3),.args=FAILOVER_Args},\n{MAKE_CMD(\"flushall\",\"Removes all keys from all databases.\",\"O(N) where N is the total number of keys in all 
databases\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,FLUSHALL_History,2,FLUSHALL_Tips,2,flushallCommand,-1,CMD_WRITE,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,FLUSHALL_Keyspecs,0,NULL,1),.args=FLUSHALL_Args},\n{MAKE_CMD(\"flushdb\",\"Removes all keys from the current database.\",\"O(N) where N is the number of keys in the selected database\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,FLUSHDB_History,2,FLUSHDB_Tips,2,flushdbCommand,-1,CMD_WRITE,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,FLUSHDB_Keyspecs,0,NULL,1),.args=FLUSHDB_Args},\n{MAKE_CMD(\"hotkeys\",\"A container for hotkeys tracking commands.\",\"Depends on subcommand.\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,HOTKEYS_History,0,HOTKEYS_Tips,0,NULL,-2,0,0,HOTKEYS_Keyspecs,0,NULL,0),.subcommands=HOTKEYS_Subcommands},\n{MAKE_CMD(\"info\",\"Returns information and statistics about the server.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,INFO_History,1,INFO_Tips,3,infoCommand,-1,CMD_LOADING|CMD_STALE|CMD_SENTINEL,ACL_CATEGORY_DANGEROUS,INFO_Keyspecs,0,NULL,1),.args=INFO_Args},\n{MAKE_CMD(\"lastsave\",\"Returns the Unix timestamp of the last successful save to disk.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LASTSAVE_History,0,LASTSAVE_Tips,1,lastsaveCommand,1,CMD_LOADING|CMD_STALE|CMD_FAST,ACL_CATEGORY_ADMIN|ACL_CATEGORY_DANGEROUS,LASTSAVE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"latency\",\"A container for latency diagnostics commands.\",\"Depends on subcommand.\",\"2.8.13\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LATENCY_History,0,LATENCY_Tips,0,NULL,-2,0,0,LATENCY_Keyspecs,0,NULL,0),.subcommands=LATENCY_Subcommands},\n{MAKE_CMD(\"lolwut\",\"Displays computer art and the Redis 
version\",NULL,\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,LOLWUT_History,0,LOLWUT_Tips,0,lolwutCommand,-1,CMD_READONLY|CMD_FAST,0,LOLWUT_Keyspecs,0,NULL,1),.args=LOLWUT_Args},\n{MAKE_CMD(\"memory\",\"A container for memory diagnostics commands.\",\"Depends on subcommand.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MEMORY_History,0,MEMORY_Tips,0,NULL,-2,0,0,MEMORY_Keyspecs,0,NULL,0),.subcommands=MEMORY_Subcommands},\n{MAKE_CMD(\"module\",\"A container for module commands.\",\"Depends on subcommand.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MODULE_History,0,MODULE_Tips,0,NULL,-2,0,0,MODULE_Keyspecs,0,NULL,0),.subcommands=MODULE_Subcommands},\n{MAKE_CMD(\"monitor\",\"Listens for all requests received by the server in real-time.\",NULL,\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,MONITOR_History,0,MONITOR_Tips,0,monitorCommand,1,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE,0,MONITOR_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"psync\",\"An internal command used in replication.\",NULL,\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,PSYNC_History,0,PSYNC_Tips,0,syncCommand,-3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NO_MULTI|CMD_NOSCRIPT,0,PSYNC_Keyspecs,0,NULL,2),.args=PSYNC_Args},\n{MAKE_CMD(\"replconf\",\"An internal command for configuring the replication stream.\",\"O(1)\",\"3.0.0\",CMD_DOC_SYSCMD,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,REPLCONF_History,0,REPLCONF_Tips,0,replconfCommand,-1,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_ALLOW_BUSY,0,REPLCONF_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"replicaof\",\"Configures a server as replica of another, or promotes it to a master.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,REPLICAOF_History,0,REPLICAOF_Tips,0,replicaofCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_STALE,0,REPLICAOF_Keyspecs,0,NULL,1),.args=REPLICAOF_Args},\n{MAKE_CMD(\"restore-asking\",\"An internal command for migrating 
keys in a cluster.\",\"O(1) to create the new key and additional O(N*M) to reconstruct the serialized value, where N is the number of Redis objects composing the value and M their average size. For small string values the time complexity is thus O(1)+O(1*M) where M is small, so simply O(1). However for sorted set values the complexity is O(N*M*log(N)) because inserting values into sorted sets is O(log(N)).\",\"3.0.0\",CMD_DOC_SYSCMD,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,RESTORE_ASKING_History,3,RESTORE_ASKING_Tips,0,restoreCommand,-4,CMD_WRITE|CMD_DENYOOM|CMD_ASKING,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,RESTORE_ASKING_Keyspecs,1,NULL,7),.args=RESTORE_ASKING_Args},\n{MAKE_CMD(\"role\",\"Returns the replication role.\",\"O(1)\",\"2.8.12\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,ROLE_History,0,ROLE_Tips,0,roleCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_SENTINEL,ACL_CATEGORY_ADMIN|ACL_CATEGORY_DANGEROUS,ROLE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"save\",\"Synchronously saves the database(s) to disk.\",\"O(N) where N is the total number of keys in all databases\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SAVE_History,0,SAVE_Tips,0,saveCommand,1,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_NO_MULTI,0,SAVE_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"shutdown\",\"Synchronously saves the database(s) to disk and shuts down the Redis server.\",\"O(N) when saving, where N is the total number of keys in all databases when saving data, otherwise O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SHUTDOWN_History,1,SHUTDOWN_Tips,0,shutdownCommand,-1,CMD_ADMIN|CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_NO_MULTI|CMD_SENTINEL|CMD_ALLOW_BUSY,0,SHUTDOWN_Keyspecs,0,NULL,4),.args=SHUTDOWN_Args},\n{MAKE_CMD(\"slaveof\",\"Sets a Redis server as a replica of another, or promotes it to being a 
master.\",\"O(1)\",\"1.0.0\",CMD_DOC_DEPRECATED,\"`REPLICAOF`\",\"5.0.0\",\"server\",COMMAND_GROUP_SERVER,SLAVEOF_History,0,SLAVEOF_Tips,0,replicaofCommand,3,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NOSCRIPT|CMD_STALE,0,SLAVEOF_Keyspecs,0,NULL,1),.args=SLAVEOF_Args},\n{MAKE_CMD(\"slowlog\",\"A container for slow log commands.\",\"Depends on subcommand.\",\"2.2.12\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SLOWLOG_History,0,SLOWLOG_Tips,0,NULL,-2,0,0,SLOWLOG_Keyspecs,0,NULL,0),.subcommands=SLOWLOG_Subcommands},\n{MAKE_CMD(\"swapdb\",\"Swaps two Redis databases.\",\"O(N) where N is the count of clients watching or blocking on keys from both databases.\",\"4.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SWAPDB_History,0,SWAPDB_Tips,0,swapdbCommand,3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,SWAPDB_Keyspecs,0,NULL,2),.args=SWAPDB_Args},\n{MAKE_CMD(\"sync\",\"An internal command used in replication.\",NULL,\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,SYNC_History,0,SYNC_Tips,0,syncCommand,1,CMD_NO_ASYNC_LOADING|CMD_ADMIN|CMD_NO_MULTI|CMD_NOSCRIPT,0,SYNC_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"time\",\"Returns the server time.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,TIME_History,0,TIME_Tips,1,timeCommand,1,CMD_LOADING|CMD_STALE|CMD_FAST,0,TIME_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"trimslots\",\"Trims the keys that belong to the specified slots.\",\"O(N) where N is the total number of keys in all databases\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"server\",COMMAND_GROUP_SERVER,TRIMSLOTS_History,0,TRIMSLOTS_Tips,0,trimslotsCommand,-5,CMD_WRITE,ACL_CATEGORY_KEYSPACE|ACL_CATEGORY_DANGEROUS,TRIMSLOTS_Keyspecs,0,NULL,1),.args=TRIMSLOTS_Args},\n/* set */\n{MAKE_CMD(\"sadd\",\"Adds one or more members to a set. 
Creates the key if it doesn't exist.\",\"O(1) for each element added, so O(N) to add N elements when the command is called with multiple arguments.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SADD_History,1,SADD_Tips,0,saddCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_SET,SADD_Keyspecs,1,NULL,2),.args=SADD_Args},\n{MAKE_CMD(\"scard\",\"Returns the number of members in a set.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SCARD_History,0,SCARD_Tips,0,scardCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SET,SCARD_Keyspecs,1,NULL,1),.args=SCARD_Args},\n{MAKE_CMD(\"sdiff\",\"Returns the difference of multiple sets.\",\"O(N) where N is the total number of elements in all given sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SDIFF_History,0,SDIFF_Tips,1,sdiffCommand,-2,CMD_READONLY,ACL_CATEGORY_SET,SDIFF_Keyspecs,1,NULL,1),.args=SDIFF_Args},\n{MAKE_CMD(\"sdiffstore\",\"Stores the difference of multiple sets in a key.\",\"O(N) where N is the total number of elements in all given sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SDIFFSTORE_History,0,SDIFFSTORE_Tips,0,sdiffstoreCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SET,SDIFFSTORE_Keyspecs,2,NULL,2),.args=SDIFFSTORE_Args},\n{MAKE_CMD(\"sinter\",\"Returns the intersect of multiple sets.\",\"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SINTER_History,0,SINTER_Tips,1,sinterCommand,-2,CMD_READONLY,ACL_CATEGORY_SET,SINTER_Keyspecs,1,NULL,1),.args=SINTER_Args},\n{MAKE_CMD(\"sintercard\",\"Returns the number of members of the intersect of multiple sets.\",\"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of 
sets.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SINTERCARD_History,0,SINTERCARD_Tips,0,sinterCardCommand,-3,CMD_READONLY,ACL_CATEGORY_SET,SINTERCARD_Keyspecs,1,sintercardGetKeys,3),.args=SINTERCARD_Args},\n{MAKE_CMD(\"sinterstore\",\"Stores the intersect of multiple sets in a key.\",\"O(N*M) worst case where N is the cardinality of the smallest set and M is the number of sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SINTERSTORE_History,0,SINTERSTORE_Tips,0,sinterstoreCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SET,SINTERSTORE_Keyspecs,2,NULL,2),.args=SINTERSTORE_Args},\n{MAKE_CMD(\"sismember\",\"Determines whether a member belongs to a set.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SISMEMBER_History,0,SISMEMBER_Tips,0,sismemberCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SET,SISMEMBER_Keyspecs,1,NULL,2),.args=SISMEMBER_Args},\n{MAKE_CMD(\"smembers\",\"Returns all members of a set.\",\"O(N) where N is the set cardinality.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SMEMBERS_History,0,SMEMBERS_Tips,1,smembersCommand,2,CMD_READONLY,ACL_CATEGORY_SET,SMEMBERS_Keyspecs,1,NULL,1),.args=SMEMBERS_Args},\n{MAKE_CMD(\"smismember\",\"Determines whether multiple members belong to a set.\",\"O(N) where N is the number of elements being checked for membership\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SMISMEMBER_History,0,SMISMEMBER_Tips,0,smismemberCommand,-3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SET,SMISMEMBER_Keyspecs,1,NULL,2),.args=SMISMEMBER_Args},\n{MAKE_CMD(\"smove\",\"Moves a member from one set to another.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SMOVE_History,0,SMOVE_Tips,0,smoveCommand,4,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SET,SMOVE_Keyspecs,2,NULL,3),.args=SMOVE_Args},\n{MAKE_CMD(\"spop\",\"Returns one or more random members from a set after removing them. 
Deletes the set if the last member was popped.\",\"Without the count argument O(1), otherwise O(N) where N is the value of the passed count.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SPOP_History,1,SPOP_Tips,1,spopCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SET,SPOP_Keyspecs,1,NULL,2),.args=SPOP_Args},\n{MAKE_CMD(\"srandmember\",\"Returns one or more random members from a set.\",\"Without the count argument O(1), otherwise O(N) where N is the absolute value of the passed count.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SRANDMEMBER_History,1,SRANDMEMBER_Tips,1,srandmemberCommand,-2,CMD_READONLY,ACL_CATEGORY_SET,SRANDMEMBER_Keyspecs,1,NULL,2),.args=SRANDMEMBER_Args},\n{MAKE_CMD(\"srem\",\"Removes one or more members from a set. Deletes the set if the last member was removed.\",\"O(N) where N is the number of members to be removed.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SREM_History,1,SREM_Tips,0,sremCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SET,SREM_Keyspecs,1,NULL,2),.args=SREM_Args},\n{MAKE_CMD(\"sscan\",\"Iterates over members of a set.\",\"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. 
N is the number of elements inside the collection.\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SSCAN_History,0,SSCAN_Tips,1,sscanCommand,-3,CMD_READONLY,ACL_CATEGORY_SET,SSCAN_Keyspecs,1,NULL,4),.args=SSCAN_Args},\n{MAKE_CMD(\"sunion\",\"Returns the union of multiple sets.\",\"O(N) where N is the total number of elements in all given sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SUNION_History,0,SUNION_Tips,1,sunionCommand,-2,CMD_READONLY,ACL_CATEGORY_SET,SUNION_Keyspecs,1,NULL,1),.args=SUNION_Args},\n{MAKE_CMD(\"sunionstore\",\"Stores the union of multiple sets in a key.\",\"O(N) where N is the total number of elements in all given sets.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"set\",COMMAND_GROUP_SET,SUNIONSTORE_History,0,SUNIONSTORE_Tips,0,sunionstoreCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SET,SUNIONSTORE_Keyspecs,2,NULL,2),.args=SUNIONSTORE_Args},\n/* sorted_set */\n{MAKE_CMD(\"bzmpop\",\"Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.\",\"O(K) + O(M*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,BZMPOP_History,0,BZMPOP_Tips,0,bzmpopCommand,-5,CMD_WRITE|CMD_BLOCKING,ACL_CATEGORY_SORTEDSET,BZMPOP_Keyspecs,1,blmpopGetKeys,5),.args=BZMPOP_Args},\n{MAKE_CMD(\"bzpopmax\",\"Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. 
Deletes the sorted set if the last element was popped.\",\"O(log(N)) with N being the number of elements in the sorted set.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,BZPOPMAX_History,1,BZPOPMAX_Tips,0,bzpopmaxCommand,-3,CMD_WRITE|CMD_FAST|CMD_BLOCKING,ACL_CATEGORY_SORTEDSET,BZPOPMAX_Keyspecs,1,NULL,2),.args=BZPOPMAX_Args},\n{MAKE_CMD(\"bzpopmin\",\"Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.\",\"O(log(N)) with N being the number of elements in the sorted set.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,BZPOPMIN_History,1,BZPOPMIN_Tips,0,bzpopminCommand,-3,CMD_WRITE|CMD_FAST|CMD_BLOCKING,ACL_CATEGORY_SORTEDSET,BZPOPMIN_Keyspecs,1,NULL,2),.args=BZPOPMIN_Args},\n{MAKE_CMD(\"zadd\",\"Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist.\",\"O(log(N)) for each item added, where N is the number of elements in the sorted set.\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZADD_History,3,ZADD_Tips,0,zaddCommand,-4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZADD_Keyspecs,1,NULL,6),.args=ZADD_Args},\n{MAKE_CMD(\"zcard\",\"Returns the number of members in a sorted set.\",\"O(1)\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZCARD_History,0,ZCARD_Tips,0,zcardCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZCARD_Keyspecs,1,NULL,1),.args=ZCARD_Args},\n{MAKE_CMD(\"zcount\",\"Returns the count of members in a sorted set that have scores within a range.\",\"O(log(N)) with N being the number of elements in the sorted set.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZCOUNT_History,0,ZCOUNT_Tips,0,zcountCommand,4,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZCOUNT_Keyspecs,1,NULL,3),.args=ZCOUNT_Args},\n{MAKE_CMD(\"zdiff\",\"Returns 
the difference between multiple sorted sets.\",\"O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZDIFF_History,0,ZDIFF_Tips,0,zdiffCommand,-3,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZDIFF_Keyspecs,1,zunionInterDiffGetKeys,3),.args=ZDIFF_Args},\n{MAKE_CMD(\"zdiffstore\",\"Stores the difference of multiple sorted sets in a key.\",\"O(L + (N-K)log(N)) worst case where L is the total number of elements in all the sets, N is the size of the first set, and K is the size of the result set.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZDIFFSTORE_History,0,ZDIFFSTORE_Tips,0,zdiffstoreCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SORTEDSET,ZDIFFSTORE_Keyspecs,2,zunionInterDiffStoreGetKeys,3),.args=ZDIFFSTORE_Args},\n{MAKE_CMD(\"zincrby\",\"Increments the score of a member in a sorted set.\",\"O(log(N)) where N is the number of elements in the sorted set.\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZINCRBY_History,0,ZINCRBY_Tips,0,zincrbyCommand,4,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZINCRBY_Keyspecs,1,NULL,3),.args=ZINCRBY_Args},\n{MAKE_CMD(\"zinter\",\"Returns the intersect of multiple sorted sets.\",\"O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZINTER_History,1,ZINTER_Tips,0,zinterCommand,-3,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZINTER_Keyspecs,1,zunionInterDiffGetKeys,5),.args=ZINTER_Args},\n{MAKE_CMD(\"zintercard\",\"Returns the number of members of the intersect of multiple sorted sets.\",\"O(N*K) worst case with N being the smallest input sorted set, K being the number of input sorted 
sets.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZINTERCARD_History,0,ZINTERCARD_Tips,0,zinterCardCommand,-3,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZINTERCARD_Keyspecs,1,zunionInterDiffGetKeys,3),.args=ZINTERCARD_Args},\n{MAKE_CMD(\"zinterstore\",\"Stores the intersect of multiple sorted sets in a key.\",\"O(N*K)+O(M*log(M)) worst case with N being the smallest input sorted set, K being the number of input sorted sets and M being the number of elements in the resulting sorted set.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZINTERSTORE_History,1,ZINTERSTORE_Tips,0,zinterstoreCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SORTEDSET,ZINTERSTORE_Keyspecs,2,zunionInterDiffStoreGetKeys,5),.args=ZINTERSTORE_Args},\n{MAKE_CMD(\"zlexcount\",\"Returns the number of members in a sorted set within a lexicographical range.\",\"O(log(N)) with N being the number of elements in the sorted set.\",\"2.8.9\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZLEXCOUNT_History,0,ZLEXCOUNT_Tips,0,zlexcountCommand,4,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZLEXCOUNT_Keyspecs,1,NULL,3),.args=ZLEXCOUNT_Args},\n{MAKE_CMD(\"zmpop\",\"Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. 
Deletes the sorted set if the last member was popped.\",\"O(K) + O(M*log(N)) where K is the number of provided keys, N being the number of elements in the sorted set, and M being the number of elements popped.\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZMPOP_History,0,ZMPOP_Tips,0,zmpopCommand,-4,CMD_WRITE,ACL_CATEGORY_SORTEDSET,ZMPOP_Keyspecs,1,zmpopGetKeys,4),.args=ZMPOP_Args},\n{MAKE_CMD(\"zmscore\",\"Returns the score of one or more members in a sorted set.\",\"O(N) where N is the number of members being requested.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZMSCORE_History,0,ZMSCORE_Tips,0,zmscoreCommand,-3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZMSCORE_Keyspecs,1,NULL,2),.args=ZMSCORE_Args},\n{MAKE_CMD(\"zpopmax\",\"Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.\",\"O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZPOPMAX_History,0,ZPOPMAX_Tips,0,zpopmaxCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZPOPMAX_Keyspecs,1,NULL,2),.args=ZPOPMAX_Args},\n{MAKE_CMD(\"zpopmin\",\"Returns the lowest-scoring members from a sorted set after removing them. 
Deletes the sorted set if the last member was popped.\",\"O(log(N)*M) with N being the number of elements in the sorted set, and M being the number of elements popped.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZPOPMIN_History,0,ZPOPMIN_Tips,0,zpopminCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZPOPMIN_Keyspecs,1,NULL,2),.args=ZPOPMIN_Args},\n{MAKE_CMD(\"zrandmember\",\"Returns one or more random members from a sorted set.\",\"O(N) where N is the number of members returned\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANDMEMBER_History,0,ZRANDMEMBER_Tips,1,zrandmemberCommand,-2,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZRANDMEMBER_Keyspecs,1,NULL,2),.args=ZRANDMEMBER_Args},\n{MAKE_CMD(\"zrange\",\"Returns members in a sorted set within a range of indexes.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANGE_History,1,ZRANGE_Tips,0,zrangeCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZRANGE_Keyspecs,1,NULL,7),.args=ZRANGE_Args},\n{MAKE_CMD(\"zrangebylex\",\"Returns members in a sorted set within a lexicographical range.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\"2.8.9\",CMD_DOC_DEPRECATED,\"`ZRANGE` with the `BYLEX` argument\",\"6.2.0\",\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANGEBYLEX_History,0,ZRANGEBYLEX_Tips,0,zrangebylexCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZRANGEBYLEX_Keyspecs,1,NULL,4),.args=ZRANGEBYLEX_Args},\n{MAKE_CMD(\"zrangebyscore\",\"Returns members in a sorted set within a range of scores.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. 
always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\"1.0.5\",CMD_DOC_DEPRECATED,\"`ZRANGE` with the `BYSCORE` argument\",\"6.2.0\",\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANGEBYSCORE_History,1,ZRANGEBYSCORE_Tips,0,zrangebyscoreCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZRANGEBYSCORE_Keyspecs,1,NULL,5),.args=ZRANGEBYSCORE_Args},\n{MAKE_CMD(\"zrangestore\",\"Stores a range of members from sorted set in a key.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements stored into the destination key.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANGESTORE_History,0,ZRANGESTORE_Tips,0,zrangestoreCommand,-5,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SORTEDSET,ZRANGESTORE_Keyspecs,2,NULL,7),.args=ZRANGESTORE_Args},\n{MAKE_CMD(\"zrank\",\"Returns the index of a member in a sorted set ordered by ascending scores.\",\"O(log(N))\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZRANK_History,1,ZRANK_Tips,0,zrankCommand,-3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZRANK_Keyspecs,1,NULL,3),.args=ZRANK_Args},\n{MAKE_CMD(\"zrem\",\"Removes one or more members from a sorted set. Deletes the sorted set if all members were removed.\",\"O(M*log(N)) with N being the number of elements in the sorted set and M the number of elements to be removed.\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREM_History,1,ZREM_Tips,0,zremCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZREM_Keyspecs,1,NULL,2),.args=ZREM_Args},\n{MAKE_CMD(\"zremrangebylex\",\"Removes members in a sorted set within a lexicographical range. 
Deletes the sorted set if all members were removed.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\"2.8.9\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREMRANGEBYLEX_History,0,ZREMRANGEBYLEX_Tips,0,zremrangebylexCommand,4,CMD_WRITE,ACL_CATEGORY_SORTEDSET,ZREMRANGEBYLEX_Keyspecs,1,NULL,3),.args=ZREMRANGEBYLEX_Args},\n{MAKE_CMD(\"zremrangebyrank\",\"Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREMRANGEBYRANK_History,0,ZREMRANGEBYRANK_Tips,0,zremrangebyrankCommand,4,CMD_WRITE,ACL_CATEGORY_SORTEDSET,ZREMRANGEBYRANK_Keyspecs,1,NULL,3),.args=ZREMRANGEBYRANK_Args},\n{MAKE_CMD(\"zremrangebyscore\",\"Removes members in a sorted set within a range of scores. 
Deletes the sorted set if all members were removed.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements removed by the operation.\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREMRANGEBYSCORE_History,0,ZREMRANGEBYSCORE_Tips,0,zremrangebyscoreCommand,4,CMD_WRITE,ACL_CATEGORY_SORTEDSET,ZREMRANGEBYSCORE_Keyspecs,1,NULL,3),.args=ZREMRANGEBYSCORE_Args},\n{MAKE_CMD(\"zrevrange\",\"Returns members in a sorted set within a range of indexes in reverse order.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements returned.\",\"1.2.0\",CMD_DOC_DEPRECATED,\"`ZRANGE` with the `REV` argument\",\"6.2.0\",\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREVRANGE_History,0,ZREVRANGE_Tips,0,zrevrangeCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZREVRANGE_Keyspecs,1,NULL,4),.args=ZREVRANGE_Args},\n{MAKE_CMD(\"zrevrangebylex\",\"Returns members in a sorted set within a lexicographical range in reverse order.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\"2.8.9\",CMD_DOC_DEPRECATED,\"`ZRANGE` with the `REV` and `BYLEX` arguments\",\"6.2.0\",\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREVRANGEBYLEX_History,0,ZREVRANGEBYLEX_Tips,0,zrevrangebylexCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZREVRANGEBYLEX_Keyspecs,1,NULL,4),.args=ZREVRANGEBYLEX_Args},\n{MAKE_CMD(\"zrevrangebyscore\",\"Returns members in a sorted set within a range of scores in reverse order.\",\"O(log(N)+M) with N being the number of elements in the sorted set and M the number of elements being returned. If M is constant (e.g. 
always asking for the first 10 elements with LIMIT), you can consider it O(log(N)).\",\"2.2.0\",CMD_DOC_DEPRECATED,\"`ZRANGE` with the `REV` and `BYSCORE` arguments\",\"6.2.0\",\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREVRANGEBYSCORE_History,1,ZREVRANGEBYSCORE_Tips,0,zrevrangebyscoreCommand,-4,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZREVRANGEBYSCORE_Keyspecs,1,NULL,5),.args=ZREVRANGEBYSCORE_Args},\n{MAKE_CMD(\"zrevrank\",\"Returns the index of a member in a sorted set ordered by descending scores.\",\"O(log(N))\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZREVRANK_History,1,ZREVRANK_Tips,0,zrevrankCommand,-3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZREVRANK_Keyspecs,1,NULL,3),.args=ZREVRANK_Args},\n{MAKE_CMD(\"zscan\",\"Iterates over members and scores of a sorted set.\",\"O(1) for every call. O(N) for a complete iteration, including enough command calls for the cursor to return back to 0. N is the number of elements inside the collection.\",\"2.8.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZSCAN_History,0,ZSCAN_Tips,1,zscanCommand,-3,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZSCAN_Keyspecs,1,NULL,4),.args=ZSCAN_Args},\n{MAKE_CMD(\"zscore\",\"Returns the score of a member in a sorted set.\",\"O(1)\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZSCORE_History,0,ZSCORE_Tips,0,zscoreCommand,3,CMD_READONLY|CMD_FAST,ACL_CATEGORY_SORTEDSET,ZSCORE_Keyspecs,1,NULL,2),.args=ZSCORE_Args},\n{MAKE_CMD(\"zunion\",\"Returns the union of multiple sorted sets.\",\"O(N)+O(M*log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZUNION_History,1,ZUNION_Tips,0,zunionCommand,-3,CMD_READONLY,ACL_CATEGORY_SORTEDSET,ZUNION_Keyspecs,1,zunionInterDiffGetKeys,5),.args=ZUNION_Args},\n{MAKE_CMD(\"zunionstore\",\"Stores the union of multiple sorted sets in a 
key.\",\"O(N)+O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted set.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"sorted_set\",COMMAND_GROUP_SORTED_SET,ZUNIONSTORE_History,1,ZUNIONSTORE_Tips,0,zunionstoreCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_SORTEDSET,ZUNIONSTORE_Keyspecs,2,zunionInterDiffStoreGetKeys,5),.args=ZUNIONSTORE_Args},\n/* stream */\n{MAKE_CMD(\"xack\",\"Returns the number of messages that were successfully acknowledged by the consumer group member of a stream.\",\"O(1) for each message ID processed.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XACK_History,0,XACK_Tips,0,xackCommand,-4,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XACK_Keyspecs,1,NULL,3),.args=XACK_Args},\n{MAKE_CMD(\"xackdel\",\"Acknowledges and deletes one or multiple messages for a stream consumer group.\",\"O(1) for each message ID processed.\",\"8.2.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XACKDEL_History,0,XACKDEL_Tips,0,xackdelCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XACKDEL_Keyspecs,1,NULL,4),.args=XACKDEL_Args},\n{MAKE_CMD(\"xadd\",\"Appends a new message to a stream. 
Creates the key if it doesn't exist.\",\"O(1) when adding a new entry, O(N) when trimming where N is the number of entries evicted.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XADD_History,3,XADD_Tips,1,xaddCommand,-5,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STREAM,XADD_Keyspecs,1,NULL,7),.args=XADD_Args},\n{MAKE_CMD(\"xautoclaim\",\"Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member.\",\"O(1) if COUNT is small.\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XAUTOCLAIM_History,1,XAUTOCLAIM_Tips,1,xautoclaimCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XAUTOCLAIM_Keyspecs,1,NULL,7),.args=XAUTOCLAIM_Args},\n{MAKE_CMD(\"xcfgset\",\"Sets the IDMP configuration parameters for a stream.\",\"O(1)\",\"8.6.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XCFGSET_History,0,XCFGSET_Tips,0,xcfgsetCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XCFGSET_Keyspecs,1,NULL,3),.args=XCFGSET_Args},\n{MAKE_CMD(\"xclaim\",\"Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member.\",\"O(log N) with N being the number of messages in the PEL of the consumer group.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XCLAIM_History,0,XCLAIM_Tips,1,xclaimCommand,-6,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XCLAIM_Keyspecs,1,NULL,11),.args=XCLAIM_Args},\n{MAKE_CMD(\"xdel\",\"Returns the number of messages after removing them from a stream.\",\"O(1) for each single item to delete in the stream, regardless of the stream size.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XDEL_History,0,XDEL_Tips,0,xdelCommand,-3,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XDEL_Keyspecs,1,NULL,2),.args=XDEL_Args},\n{MAKE_CMD(\"xdelex\",\"Deletes one or multiple entries from the stream.\",\"O(1) for each single item to delete in the stream, regardless of the stream 
size.\",\"8.2.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XDELEX_History,0,XDELEX_Tips,0,xdelexCommand,-5,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XDELEX_Keyspecs,1,NULL,3),.args=XDELEX_Args},\n{MAKE_CMD(\"xgroup\",\"A container for consumer groups commands.\",\"Depends on subcommand.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XGROUP_History,0,XGROUP_Tips,0,NULL,-2,0,0,XGROUP_Keyspecs,0,NULL,0),.subcommands=XGROUP_Subcommands},\n{MAKE_CMD(\"xidmprecord\",\"An internal command for setting IDMP metadata on an existing stream message.\",\"O(1)\",\"8.6.2\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XIDMPRECORD_History,0,XIDMPRECORD_Tips,0,xidmprecordCommand,5,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STREAM,XIDMPRECORD_Keyspecs,1,NULL,4),.args=XIDMPRECORD_Args},\n{MAKE_CMD(\"xinfo\",\"A container for stream introspection commands.\",\"Depends on subcommand.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XINFO_History,0,XINFO_Tips,0,NULL,-2,0,0,XINFO_Keyspecs,0,NULL,0),.subcommands=XINFO_Subcommands},\n{MAKE_CMD(\"xlen\",\"Return the number of messages in a stream.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XLEN_History,0,XLEN_Tips,0,xlenCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_STREAM,XLEN_Keyspecs,1,NULL,1),.args=XLEN_Args},\n{MAKE_CMD(\"xnack\",\"Releases claimed messages back to the group's PEL without acknowledging them, making them available for re-delivery.\",\"O(1) for each message ID processed.\",\"8.8.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XNACK_History,0,XNACK_Tips,0,xnackCommand,-7,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STREAM,XNACK_Keyspecs,1,NULL,6),.args=XNACK_Args},\n{MAKE_CMD(\"xpending\",\"Returns the information and entries from a stream consumer group's pending entries list.\",\"O(N) with N being the number of elements returned, so asking for a small fixed number of entries per call is O(1). 
O(M), where M is the total number of entries scanned when used with the IDLE filter. When the command returns just the summary and the list of consumers is small, it runs in O(1) time; otherwise, it takes an additional O(N) time to iterate over every consumer.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XPENDING_History,1,XPENDING_Tips,1,xpendingCommand,-3,CMD_READONLY,ACL_CATEGORY_STREAM,XPENDING_Keyspecs,1,NULL,3),.args=XPENDING_Args},\n{MAKE_CMD(\"xrange\",\"Returns the messages from a stream within a range of IDs.\",\"O(N) with N being the number of elements being returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XRANGE_History,1,XRANGE_Tips,0,xrangeCommand,-4,CMD_READONLY,ACL_CATEGORY_STREAM,XRANGE_Keyspecs,1,NULL,4),.args=XRANGE_Args},\n{MAKE_CMD(\"xread\",\"Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise.\",NULL,\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XREAD_History,0,XREAD_Tips,0,xreadCommand,-4,CMD_BLOCKING|CMD_READONLY,ACL_CATEGORY_STREAM,XREAD_Keyspecs,1,xreadGetKeys,3),.args=XREAD_Args},\n{MAKE_CMD(\"xreadgroup\",\"Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise.\",\"For each stream mentioned: O(M) with M being the number of elements returned. If M is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1). 
On the other hand, when XREADGROUP blocks, XADD will pay the O(N) time in order to serve the N clients blocked on the stream getting new data.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XREADGROUP_History,0,XREADGROUP_Tips,0,xreadCommand,-7,CMD_BLOCKING|CMD_WRITE,ACL_CATEGORY_STREAM,XREADGROUP_Keyspecs,1,xreadGetKeys,6),.args=XREADGROUP_Args},\n{MAKE_CMD(\"xrevrange\",\"Returns the messages from a stream within a range of IDs in reverse order.\",\"O(N) with N being the number of elements returned. If N is constant (e.g. always asking for the first 10 elements with COUNT), you can consider it O(1).\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XREVRANGE_History,1,XREVRANGE_Tips,0,xrevrangeCommand,-4,CMD_READONLY,ACL_CATEGORY_STREAM,XREVRANGE_Keyspecs,1,NULL,4),.args=XREVRANGE_Args},\n{MAKE_CMD(\"xsetid\",\"An internal command for replicating stream values.\",\"O(1)\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XSETID_History,1,XSETID_Tips,0,xsetidCommand,-3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STREAM,XSETID_Keyspecs,1,NULL,4),.args=XSETID_Args},\n{MAKE_CMD(\"xtrim\",\"Deletes messages from the beginning of a stream.\",\"O(N), with N being the number of evicted entries. Constant times are very small however, since entries are organized in macro nodes containing multiple entries that can be released with a single deallocation.\",\"5.0.0\",CMD_DOC_NONE,NULL,NULL,\"stream\",COMMAND_GROUP_STREAM,XTRIM_History,2,XTRIM_Tips,1,xtrimCommand,-4,CMD_WRITE,ACL_CATEGORY_STREAM,XTRIM_Keyspecs,1,NULL,2),.args=XTRIM_Args},\n/* string */\n{MAKE_CMD(\"append\",\"Appends a string to the value of a key. Creates the key if it doesn't exist.\",\"O(1). 
The amortized time complexity is O(1) assuming the appended value is small and the already present value is of any size, since the dynamic string library used by Redis will double the free space available on every reallocation.\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,APPEND_History,0,APPEND_Tips,0,appendCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,APPEND_Keyspecs,1,NULL,2),.args=APPEND_Args},\n{MAKE_CMD(\"decr\",\"Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,DECR_History,0,DECR_Tips,0,decrCommand,2,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,DECR_Keyspecs,1,NULL,1),.args=DECR_Args},\n{MAKE_CMD(\"decrby\",\"Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,DECRBY_History,0,DECRBY_Tips,0,decrbyCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,DECRBY_Keyspecs,1,NULL,2),.args=DECRBY_Args},\n{MAKE_CMD(\"delex\",\"Conditionally removes the specified key based on value or digest comparison.\",\"O(1) for IFEQ/IFNE, O(N) for IFDEQ/IFDNE where N is the length of the string value.\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,DELEX_History,0,DELEX_Tips,0,delexCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STRING,DELEX_Keyspecs,1,delexGetKeys,2),.args=DELEX_Args},\n{MAKE_CMD(\"digest\",\"Returns the XXH3 hash of a string value.\",\"O(N) where N is the length of the string value.\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,DIGEST_History,0,DIGEST_Tips,0,digestCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_STRING,DIGEST_Keyspecs,1,NULL,1),.args=DIGEST_Args},\n{MAKE_CMD(\"get\",\"Returns the string value of a 
key.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,GET_History,0,GET_Tips,0,getCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_STRING,GET_Keyspecs,1,NULL,1),.args=GET_Args},\n{MAKE_CMD(\"getdel\",\"Returns the string value of a key after deleting the key.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,GETDEL_History,0,GETDEL_Tips,0,getdelCommand,2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STRING,GETDEL_Keyspecs,1,NULL,1),.args=GETDEL_Args},\n{MAKE_CMD(\"getex\",\"Returns the string value of a key after setting its expiration time.\",\"O(1)\",\"6.2.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,GETEX_History,0,GETEX_Tips,0,getexCommand,-2,CMD_WRITE|CMD_FAST,ACL_CATEGORY_STRING,GETEX_Keyspecs,1,NULL,2),.args=GETEX_Args},\n{MAKE_CMD(\"getrange\",\"Returns a substring of the string stored at a key.\",\"O(N) where N is the length of the returned string. The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.\",\"2.4.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,GETRANGE_History,0,GETRANGE_Tips,0,getrangeCommand,4,CMD_READONLY,ACL_CATEGORY_STRING,GETRANGE_Keyspecs,1,NULL,3),.args=GETRANGE_Args},\n{MAKE_CMD(\"getset\",\"Returns the previous string value of a key after setting it to a new value.\",\"O(1)\",\"1.0.0\",CMD_DOC_DEPRECATED,\"`SET` with the `!GET` argument\",\"6.2.0\",\"string\",COMMAND_GROUP_STRING,GETSET_History,0,GETSET_Tips,0,getsetCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,GETSET_Keyspecs,1,NULL,2),.args=GETSET_Args},\n{MAKE_CMD(\"incr\",\"Increments the integer value of a key by one. 
Uses 0 as initial value if the key doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,INCR_History,0,INCR_Tips,0,incrCommand,2,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,INCR_Keyspecs,1,NULL,1),.args=INCR_Args},\n{MAKE_CMD(\"incrby\",\"Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,INCRBY_History,0,INCRBY_Tips,0,incrbyCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,INCRBY_Keyspecs,1,NULL,2),.args=INCRBY_Args},\n{MAKE_CMD(\"incrbyfloat\",\"Increment the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist.\",\"O(1)\",\"2.6.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,INCRBYFLOAT_History,0,INCRBYFLOAT_Tips,0,incrbyfloatCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,INCRBYFLOAT_Keyspecs,1,NULL,2),.args=INCRBYFLOAT_Args},\n{MAKE_CMD(\"lcs\",\"Finds the longest common substring.\",\"O(N*M) where N and M are the lengths of s1 and s2, respectively\",\"7.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,LCS_History,0,LCS_Tips,0,lcsCommand,-3,CMD_READONLY,ACL_CATEGORY_STRING,LCS_Keyspecs,1,NULL,6),.args=LCS_Args},\n{MAKE_CMD(\"mget\",\"Atomically returns the string values of one or more keys.\",\"O(N) where N is the number of keys to retrieve.\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,MGET_History,0,MGET_Tips,1,mgetCommand,-2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_STRING,MGET_Keyspecs,1,NULL,1),.args=MGET_Args},\n{MAKE_CMD(\"mset\",\"Atomically creates or modifies the string values of one or more keys.\",\"O(N) where N is the number of keys to set.\",\"1.0.1\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,MSET_History,0,MSET_Tips,2,msetCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,MSET_Keyspecs,1,NULL,1),.args=MSET_Args},\n{MAKE_CMD(\"msetex\",\"Atomically sets multiple 
string keys with a shared expiration in a single operation. Supports flexible argument parsing where condition and expiration flags can appear in any order.\",\"O(N) where N is the number of keys to set.\",\"8.4.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,MSETEX_History,0,MSETEX_Tips,2,msetexCommand,-4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,MSETEX_Keyspecs,1,NULL,4),.args=MSETEX_Args},\n{MAKE_CMD(\"msetnx\",\"Atomically modifies the string values of one or more keys only when all keys don't exist.\",\"O(N) where N is the number of keys to set.\",\"1.0.1\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,MSETNX_History,0,MSETNX_Tips,0,msetnxCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,MSETNX_Keyspecs,1,NULL,1),.args=MSETNX_Args},\n{MAKE_CMD(\"psetex\",\"Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist.\",\"O(1)\",\"2.6.0\",CMD_DOC_DEPRECATED,\"`SET` with the `PX` argument\",\"2.6.12\",\"string\",COMMAND_GROUP_STRING,PSETEX_History,0,PSETEX_Tips,0,psetexCommand,4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,PSETEX_Keyspecs,1,NULL,3),.args=PSETEX_Args},\n{MAKE_CMD(\"set\",\"Sets the string value of a key, ignoring its type. The key is created if it doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,SET_History,5,SET_Tips,0,setCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,SET_Keyspecs,1,setGetKeys,5),.args=SET_Args},\n{MAKE_CMD(\"setex\",\"Sets the string value and expiration time of a key. 
Creates the key if it doesn't exist.\",\"O(1)\",\"2.0.0\",CMD_DOC_DEPRECATED,\"`SET` with the `EX` argument\",\"2.6.12\",\"string\",COMMAND_GROUP_STRING,SETEX_History,0,SETEX_Tips,0,setexCommand,4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,SETEX_Keyspecs,1,NULL,3),.args=SETEX_Args},\n{MAKE_CMD(\"setnx\",\"Set the string value of a key only when the key doesn't exist.\",\"O(1)\",\"1.0.0\",CMD_DOC_DEPRECATED,\"`SET` with the `NX` argument\",\"2.6.12\",\"string\",COMMAND_GROUP_STRING,SETNX_History,0,SETNX_Tips,0,setnxCommand,3,CMD_WRITE|CMD_DENYOOM|CMD_FAST,ACL_CATEGORY_STRING,SETNX_Keyspecs,1,NULL,2),.args=SETNX_Args},\n{MAKE_CMD(\"setrange\",\"Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist.\",\"O(1), not counting the time taken to copy the new string in place. Usually, this string is very small so the amortized complexity is O(1). Otherwise, complexity is O(M) with M being the length of the value argument.\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,SETRANGE_History,0,SETRANGE_Tips,0,setrangeCommand,4,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,SETRANGE_Keyspecs,1,NULL,3),.args=SETRANGE_Args},\n{MAKE_CMD(\"strlen\",\"Returns the length of a string value.\",\"O(1)\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,STRLEN_History,0,STRLEN_Tips,0,strlenCommand,2,CMD_READONLY|CMD_FAST,ACL_CATEGORY_STRING,STRLEN_Keyspecs,1,NULL,1),.args=STRLEN_Args},\n{MAKE_CMD(\"substr\",\"Returns a substring from a string value.\",\"O(N) where N is the length of the returned string. 
The complexity is ultimately determined by the returned length, but because creating a substring from an existing string is very cheap, it can be considered O(1) for small strings.\",\"1.0.0\",CMD_DOC_DEPRECATED,\"`GETRANGE`\",\"2.0.0\",\"string\",COMMAND_GROUP_STRING,SUBSTR_History,0,SUBSTR_Tips,0,getrangeCommand,4,CMD_READONLY,ACL_CATEGORY_STRING,SUBSTR_Keyspecs,1,NULL,3),.args=SUBSTR_Args},\n/* transactions */\n{MAKE_CMD(\"discard\",\"Discards a transaction.\",\"O(N), when N is the number of queued commands\",\"2.0.0\",CMD_DOC_NONE,NULL,NULL,\"transactions\",COMMAND_GROUP_TRANSACTIONS,DISCARD_History,0,DISCARD_Tips,0,discardCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_ALLOW_BUSY,ACL_CATEGORY_TRANSACTION,DISCARD_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"exec\",\"Executes all commands in a transaction.\",\"Depends on commands in the transaction\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"transactions\",COMMAND_GROUP_TRANSACTIONS,EXEC_History,0,EXEC_Tips,0,execCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_SKIP_SLOWLOG,ACL_CATEGORY_TRANSACTION,EXEC_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"multi\",\"Starts a transaction.\",\"O(1)\",\"1.2.0\",CMD_DOC_NONE,NULL,NULL,\"transactions\",COMMAND_GROUP_TRANSACTIONS,MULTI_History,0,MULTI_Tips,0,multiCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_ALLOW_BUSY,ACL_CATEGORY_TRANSACTION,MULTI_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"unwatch\",\"Forgets about watched keys of a transaction.\",\"O(1)\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"transactions\",COMMAND_GROUP_TRANSACTIONS,UNWATCH_History,0,UNWATCH_Tips,0,unwatchCommand,1,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_ALLOW_BUSY,ACL_CATEGORY_TRANSACTION,UNWATCH_Keyspecs,0,NULL,0)},\n{MAKE_CMD(\"watch\",\"Monitors changes to keys to determine the execution of a transaction.\",\"O(1) for every 
key.\",\"2.2.0\",CMD_DOC_NONE,NULL,NULL,\"transactions\",COMMAND_GROUP_TRANSACTIONS,WATCH_History,0,WATCH_Tips,0,watchCommand,-2,CMD_NOSCRIPT|CMD_LOADING|CMD_STALE|CMD_FAST|CMD_ALLOW_BUSY,ACL_CATEGORY_TRANSACTION,WATCH_Keyspecs,1,NULL,1),.args=WATCH_Args},\n{0}\n};\n"
  },
  {
    "path": "src/commands.h",
    "content": "#ifndef __REDIS_COMMANDS_H\n#define __REDIS_COMMANDS_H\n\n/* Must be synced with ARG_TYPE_STR and generate-command-code.py */\ntypedef enum {\n    ARG_TYPE_STRING,\n    ARG_TYPE_INTEGER,\n    ARG_TYPE_DOUBLE,\n    ARG_TYPE_KEY, /* A string, but represents a keyname */\n    ARG_TYPE_PATTERN,\n    ARG_TYPE_UNIX_TIME,\n    ARG_TYPE_PURE_TOKEN,\n    ARG_TYPE_ONEOF, /* Has subargs */\n    ARG_TYPE_BLOCK /* Has subargs */\n} redisCommandArgType;\n\n#define CMD_ARG_NONE            (0)\n#define CMD_ARG_OPTIONAL        (1<<0)\n#define CMD_ARG_MULTIPLE        (1<<1)\n#define CMD_ARG_MULTIPLE_TOKEN  (1<<2)\n\n/* Must be compatible with RedisModuleCommandArg. See moduleCopyCommandArgs. */\ntypedef struct redisCommandArg {\n    const char *name;\n    redisCommandArgType type;\n    int key_spec_index;\n    const char *token;\n    const char *summary;\n    const char *since;\n    int flags;\n    const char *deprecated_since;\n    int num_args;\n    struct redisCommandArg *subargs;\n    const char *display_text;\n} redisCommandArg;\n\n/* Returns the command group name by group number. */\nconst char *commandGroupStr(int index);\n\n#endif\n"
  },
  {
    "path": "src/config.c",
    "content": "/* Configuration file parsing and CONFIG GET/SET commands implementation.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"connection.h\"\n#include \"bio.h\"\n\n#include <fcntl.h>\n#include <sys/stat.h>\n#include <glob.h>\n#include <string.h>\n#include <locale.h>\n#include <ctype.h>\n\n/*-----------------------------------------------------------------------------\n * Config file name-value maps.\n *----------------------------------------------------------------------------*/\n\ntypedef struct deprecatedConfig {\n    const char *name;\n    const int argc_min;\n    const int argc_max;\n} deprecatedConfig;\n\nconfigEnum maxmemory_policy_enum[] = {\n    {\"volatile-lru\", MAXMEMORY_VOLATILE_LRU},\n    {\"volatile-lfu\", MAXMEMORY_VOLATILE_LFU},\n    {\"volatile-random\",MAXMEMORY_VOLATILE_RANDOM},\n    {\"volatile-ttl\",MAXMEMORY_VOLATILE_TTL},\n    {\"volatile-lrm\",MAXMEMORY_VOLATILE_LRM},\n    {\"allkeys-lru\",MAXMEMORY_ALLKEYS_LRU},\n    {\"allkeys-lfu\",MAXMEMORY_ALLKEYS_LFU},\n    {\"allkeys-random\",MAXMEMORY_ALLKEYS_RANDOM},\n    {\"allkeys-lrm\",MAXMEMORY_ALLKEYS_LRM},\n    {\"noeviction\",MAXMEMORY_NO_EVICTION},\n    {NULL, 0}\n};\n\nconfigEnum syslog_facility_enum[] = {\n    {\"user\",    LOG_USER},\n    {\"local0\",  LOG_LOCAL0},\n    {\"local1\",  LOG_LOCAL1},\n    {\"local2\",  LOG_LOCAL2},\n    {\"local3\",  LOG_LOCAL3},\n    {\"local4\",  LOG_LOCAL4},\n    {\"local5\",  LOG_LOCAL5},\n    {\"local6\",  LOG_LOCAL6},\n    {\"local7\",  LOG_LOCAL7},\n    {NULL, 
0}\n};\n\nconfigEnum loglevel_enum[] = {\n    {\"debug\", LL_DEBUG},\n    {\"verbose\", LL_VERBOSE},\n    {\"notice\", LL_NOTICE},\n    {\"warning\", LL_WARNING},\n    {\"nothing\", LL_NOTHING},\n    {NULL,0}\n};\n\nconfigEnum supervised_mode_enum[] = {\n    {\"upstart\", SUPERVISED_UPSTART},\n    {\"systemd\", SUPERVISED_SYSTEMD},\n    {\"auto\", SUPERVISED_AUTODETECT},\n    {\"no\", SUPERVISED_NONE},\n    {NULL, 0}\n};\n\nconfigEnum aof_fsync_enum[] = {\n    {\"everysec\", AOF_FSYNC_EVERYSEC},\n    {\"always\", AOF_FSYNC_ALWAYS},\n    {\"no\", AOF_FSYNC_NO},\n    {NULL, 0}\n};\n\nconfigEnum shutdown_on_sig_enum[] = {\n    {\"default\", 0},\n    {\"save\", SHUTDOWN_SAVE},\n    {\"nosave\", SHUTDOWN_NOSAVE},\n    {\"now\", SHUTDOWN_NOW},\n    {\"force\", SHUTDOWN_FORCE},\n    {NULL, 0}\n};\n\nconfigEnum cluster_slot_stats_enum[] = {\n    {\"no\", 0},\n    {\"yes\", CLUSTER_SLOT_STATS_ALL},\n    {\"cpu\", CLUSTER_SLOT_STATS_CPU},\n    {\"net\", CLUSTER_SLOT_STATS_NET},\n    {\"mem\", CLUSTER_SLOT_STATS_MEM},\n    {NULL, 0}\n};\n\nconfigEnum repl_diskless_load_enum[] = {\n    {\"disabled\", REPL_DISKLESS_LOAD_DISABLED},\n    {\"on-empty-db\", REPL_DISKLESS_LOAD_WHEN_DB_EMPTY},\n    {\"swapdb\", REPL_DISKLESS_LOAD_SWAPDB},\n    {\"flushdb\", REPL_DISKLESS_LOAD_ALWAYS},\n    {NULL, 0}\n};\n\nconfigEnum tls_auth_clients_enum[] = {\n    {\"no\", TLS_CLIENT_AUTH_NO},\n    {\"yes\", TLS_CLIENT_AUTH_YES},\n    {\"optional\", TLS_CLIENT_AUTH_OPTIONAL},\n    {NULL, 0}\n};\n\nconfigEnum tls_client_auth_user_enum[] = {\n    {\"CN\", TLS_CLIENT_FIELD_CN},\n    {\"off\", TLS_CLIENT_FIELD_OFF},\n    {NULL, 0}\n};\n\nconfigEnum oom_score_adj_enum[] = {\n    {\"no\", OOM_SCORE_ADJ_NO},\n    {\"yes\", OOM_SCORE_RELATIVE},\n    {\"relative\", OOM_SCORE_RELATIVE},\n    {\"absolute\", OOM_SCORE_ADJ_ABSOLUTE},\n    {NULL, 0}\n};\n\nconfigEnum acl_pubsub_default_enum[] = {\n    {\"allchannels\", SELECTOR_FLAG_ALLCHANNELS},\n    {\"resetchannels\", 0},\n    {NULL, 0}\n};\n\nconfigEnum 
sanitize_dump_payload_enum[] = {\n    {\"no\", SANITIZE_DUMP_NO},\n    {\"yes\", SANITIZE_DUMP_YES},\n    {\"clients\", SANITIZE_DUMP_CLIENTS},\n    {NULL, 0}\n};\n\nconfigEnum protected_action_enum[] = {\n    {\"no\", PROTECTED_ACTION_ALLOWED_NO},\n    {\"yes\", PROTECTED_ACTION_ALLOWED_YES},\n    {\"local\", PROTECTED_ACTION_ALLOWED_LOCAL},\n    {NULL, 0}\n};\n\nconfigEnum cluster_preferred_endpoint_type_enum[] = {\n    {\"ip\", CLUSTER_ENDPOINT_TYPE_IP},\n    {\"hostname\", CLUSTER_ENDPOINT_TYPE_HOSTNAME},\n    {\"unknown-endpoint\", CLUSTER_ENDPOINT_TYPE_UNKNOWN_ENDPOINT},\n    {NULL, 0}\n};\n\nconfigEnum propagation_error_behavior_enum[] = {\n    {\"ignore\", PROPAGATION_ERR_BEHAVIOR_IGNORE},\n    {\"panic\", PROPAGATION_ERR_BEHAVIOR_PANIC},\n    {\"panic-on-replicas\", PROPAGATION_ERR_BEHAVIOR_PANIC_ON_REPLICAS},\n    {NULL, 0}\n};\n\n/* Output buffer limits presets. */\nclientBufferLimitsConfig clientBufferLimitsDefaults[CLIENT_TYPE_OBUF_COUNT] = {\n    {0, 0, 0}, /* normal */\n    {1024*1024*256, 1024*1024*64, 60}, /* slave */\n    {1024*1024*32, 1024*1024*8, 60}  /* pubsub */\n};\n\n/* OOM Score defaults */\nint configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT] = { 0, 200, 800 };\n\n/* Generic config infrastructure function pointers\n * int is_valid_fn(val, err)\n *     Return 1 when val is valid, and 0 when invalid.\n *     Optionally set err to a static error string.\n */\n\n/* Configuration values that require no special handling to set, get, load or\n * rewrite. */\ntypedef struct boolConfigData {\n    int *config; /* The pointer to the server config this value is stored in */\n    int default_value; /* The default value of the config on rewrite */\n    int (*is_valid_fn)(int val, const char **err); /* Optional function to check validity of new value (generic doc above) */\n} boolConfigData;\n\ntypedef struct stringConfigData {\n    char **config; /* Pointer to the server config this value is stored in. 
*/\n    const char *default_value; /* Default value of the config on rewrite. */\n    int (*is_valid_fn)(char* val, const char **err); /* Optional function to check validity of new value (generic doc above) */\n    int convert_empty_to_null; /* Boolean indicating if empty strings should\n                                  be stored as a NULL value. */\n} stringConfigData;\n\ntypedef struct sdsConfigData {\n    sds *config; /* Pointer to the server config this value is stored in. */\n    char *default_value; /* Default value of the config on rewrite. */\n    int (*is_valid_fn)(sds val, const char **err); /* Optional function to check validity of new value (generic doc above) */\n    int convert_empty_to_null; /* Boolean indicating if empty SDS strings should\n                                  be stored as a NULL value. */\n} sdsConfigData;\n\ntypedef struct enumConfigData {\n    int *config; /* The pointer to the server config this value is stored in */\n    configEnum *enum_value; /* The underlying enum type this data represents */\n    int default_value; /* The default value of the config on rewrite */\n    int (*is_valid_fn)(int val, const char **err); /* Optional function to check validity of new value (generic doc above) */\n} enumConfigData;\n\ntypedef enum numericType {\n    NUMERIC_TYPE_INT,\n    NUMERIC_TYPE_UINT,\n    NUMERIC_TYPE_LONG,\n    NUMERIC_TYPE_ULONG,\n    NUMERIC_TYPE_LONG_LONG,\n    NUMERIC_TYPE_ULONG_LONG,\n    NUMERIC_TYPE_SIZE_T,\n    NUMERIC_TYPE_SSIZE_T,\n    NUMERIC_TYPE_OFF_T,\n    NUMERIC_TYPE_TIME_T,\n} numericType;\n\ntypedef struct numericConfigData {\n    union {\n        int *i;\n        unsigned int *ui;\n        long *l;\n        unsigned long *ul;\n        long long *ll;\n        unsigned long long *ull;\n        size_t *st;\n        ssize_t *sst;\n        off_t *ot;\n        time_t *tt;\n    } config; /* The pointer to the numeric config this value is stored in */\n    unsigned int flags;\n    numericType numeric_type; /* An 
enum indicating the type of this value */\n    long long lower_bound; /* The lower bound of this numeric value */\n    long long upper_bound; /* The upper bound of this numeric value */\n    long long default_value; /* The default value of the config on rewrite */\n    int (*is_valid_fn)(long long val, const char **err); /* Optional function to check validity of new value (generic doc above) */\n} numericConfigData;\n\ntypedef union typeData {\n    boolConfigData yesno;\n    stringConfigData string;\n    sdsConfigData sds;\n    enumConfigData enumd;\n    numericConfigData numeric;\n} typeData;\n\ntypedef struct standardConfig standardConfig;\n\ntypedef int (*apply_fn)(const char **err);\ntypedef struct typeInterface {\n    /* Called on server start, to init the server with default value */\n    void (*init)(standardConfig *config);\n    /* Called on server startup and CONFIG SET, returns 1 on success,\n     * 2 meaning no actual change done, 0 on error and can set a verbose err\n     * string */\n    int (*set)(standardConfig *config, sds *argv, int argc, const char **err);\n    /* Optional: called after `set()` to apply the config change. Used only in\n     * the context of CONFIG SET. Returns 1 on success, 0 on failure.\n     * Optionally set err to a static error string. 
*/\n    apply_fn apply;\n    /* Called on CONFIG GET, returns sds to be used in reply */\n    sds (*get)(standardConfig *config);\n    /* Called on CONFIG REWRITE, required to rewrite the config state */\n    void (*rewrite)(standardConfig *config, const char *name, struct rewriteConfigState *state);\n} typeInterface;\n\nstruct standardConfig {\n    const char *name; /* The user visible name of this config */\n    const char *alias; /* An alias that can also be used for this config */\n    unsigned int flags; /* Flags for this specific config */\n    typeInterface interface; /* The function pointers that define the type interface */\n    typeData data; /* The type specific data exposed used by the interface */\n    configType type; /* The type of config this is. */\n    void *privdata; /* privdata for this config, for module configs this is a ModuleConfig struct */\n};\n\ndict *configs = NULL; /* Runtime config values */\n\n/* Lookup a config by the provided sds string name, or return NULL\n * if the config does not exist */\nstatic standardConfig *lookupConfig(const sds name) {\n    dictEntry *de = dictFind(configs, name);\n    return de ? dictGetVal(de) : NULL;\n}\n\n/*-----------------------------------------------------------------------------\n * Enum access functions\n *----------------------------------------------------------------------------*/\n\n/* Get enum value from name. If there is no match INT_MIN is returned. 
*/\nint configEnumGetValue(configEnum *ce, sds *argv, int argc, int bitflags) {\n    if (argc == 0 || (!bitflags && argc != 1)) return INT_MIN;\n    int values = 0;\n    for (int i = 0; i < argc; i++) {\n        int matched = 0;\n        for (configEnum *ceItem = ce; ceItem->name != NULL; ceItem++) {\n            if (!strcasecmp(argv[i],ceItem->name)) {\n                values |= ceItem->val;\n                matched = 1;\n            }\n        }\n        if (!matched) return INT_MIN;\n    }\n    return values;\n}\n\n/* Get enum name/s from value. If no matches are found \"unknown\" is returned. */\nstatic sds configEnumGetName(configEnum *ce, int values, int bitflags) {\n    sds names = NULL;\n    int unmatched = values;\n    for( ; ce->name != NULL; ce++) {\n        if (values == ce->val) { /* Short path for perfect match */\n            sdsfree(names);\n            return sdsnew(ce->name);\n        }\n\n        /* Note: for bitflags, we want them sorted from high to low, so that if there are several / partially\n         * overlapping entries, we'll prefer the ones matching more bits. */\n        if (bitflags && ce->val && ce->val == (unmatched & ce->val)) {\n            names = names ? sdscatfmt(names, \" %s\", ce->name) : sdsnew(ce->name);\n            unmatched &= ~ce->val;\n        }\n    }\n    if (!names || unmatched) {\n        sdsfree(names);\n        return sdsnew(\"unknown\");\n    }\n    return names;\n}\n\n/* Used for INFO generation. 
*/\nconst char *evictPolicyToString(void) {\n    for (configEnum *ce = maxmemory_policy_enum; ce->name != NULL; ce++) {\n        if (server.maxmemory_policy == ce->val)\n            return ce->name;\n    }\n    serverPanic(\"unknown eviction policy\");\n}\n\n/*-----------------------------------------------------------------------------\n * Config file parsing\n *----------------------------------------------------------------------------*/\n\nint yesnotoi(char *s) {\n    if (!strcasecmp(s,\"yes\")) return 1;\n    else if (!strcasecmp(s,\"no\")) return 0;\n    else return -1;\n}\n\nvoid appendServerSaveParams(time_t seconds, int changes) {\n    server.saveparams = zrealloc(server.saveparams,sizeof(struct saveparam)*(server.saveparamslen+1));\n    server.saveparams[server.saveparamslen].seconds = seconds;\n    server.saveparams[server.saveparamslen].changes = changes;\n    server.saveparamslen++;\n}\n\nvoid resetServerSaveParams(void) {\n    zfree(server.saveparams);\n    server.saveparams = NULL;\n    server.saveparamslen = 0;\n}\n\nvoid queueLoadModule(sds path, sds *argv, int argc) {\n    int i;\n    struct moduleLoadQueueEntry *loadmod;\n\n    loadmod = zmalloc(sizeof(struct moduleLoadQueueEntry));\n    loadmod->argv = argc ? zmalloc(sizeof(robj*)*argc) : NULL;\n    loadmod->path = sdsnew(path);\n    loadmod->argc = argc;\n    for (i = 0; i < argc; i++) {\n        loadmod->argv[i] = createRawStringObject(argv[i],sdslen(argv[i]));\n    }\n    listAddNodeTail(server.loadmodule_queue,loadmod);\n}\n\n/* Parse an array of `arg_len` sds strings, validate and populate\n * server.client_obuf_limits if valid.\n * Used in CONFIG SET and configuration file parsing. 
*/\nstatic int updateClientOutputBufferLimit(sds *args, int arg_len, const char **err) {\n    int j;\n    int class;\n    unsigned long long hard, soft;\n    int hard_err, soft_err;\n    int soft_seconds;\n    char *soft_seconds_eptr;\n    clientBufferLimitsConfig values[CLIENT_TYPE_OBUF_COUNT];\n    int classes[CLIENT_TYPE_OBUF_COUNT] = {0};\n\n    /* We need a multiple of 4: <class> <hard> <soft> <soft_seconds> */\n    if (arg_len % 4) {\n        if (err) *err = \"Wrong number of arguments in \"\n                        \"buffer limit configuration.\";\n        return 0;\n    }\n\n    /* Sanity check of single arguments, so that we either refuse the\n     * whole configuration string or accept it all, even if a single\n     * error in a single client class is present. */\n    for (j = 0; j < arg_len; j += 4) {\n        class = getClientTypeByName(args[j]);\n        if (class == -1 || class == CLIENT_TYPE_MASTER) {\n            if (err) *err = \"Invalid client class specified in \"\n                            \"buffer limit configuration.\";\n            return 0;\n        }\n\n        hard = memtoull(args[j+1], &hard_err);\n        soft = memtoull(args[j+2], &soft_err);\n        soft_seconds = strtoll(args[j+3], &soft_seconds_eptr, 10);\n        if (hard_err || soft_err ||\n            soft_seconds < 0 || *soft_seconds_eptr != '\\0')\n        {\n            if (err) *err = \"Error in hard, soft or soft_seconds setting in \"\n                            \"buffer limit configuration.\";\n            return 0;\n        }\n\n        values[class].hard_limit_bytes = hard;\n        values[class].soft_limit_bytes = soft;\n        values[class].soft_limit_seconds = soft_seconds;\n        classes[class] = 1;\n    }\n\n    /* Finally set the new config. 
*/\n    for (j = 0; j < CLIENT_TYPE_OBUF_COUNT; j++) {\n        if (classes[j]) server.client_obuf_limits[j] = values[j];\n    }\n\n    return 1;\n}\n\n/* Note this is here to support detecting we're running a config set from\n * within conf file parsing. This is only needed to support the deprecated\n * abnormal aggregate `save T C` functionality. Remove in the future. */\nstatic int reading_config_file;\n\nvoid loadServerConfigFromString(char *config) {\n    deprecatedConfig deprecated_configs[] = {\n        {\"list-max-ziplist-entries\", 2, 2},\n        {\"list-max-ziplist-value\", 2, 2},\n        {\"lua-replicate-commands\", 2, 2},\n        {\"io-threads-do-reads\", 2, 2},\n        {NULL, 0},\n    };\n    char buf[1024];\n    const char *err = NULL;\n    int linenum = 0, totlines, i;\n    sds *lines;\n    sds *argv = NULL;\n    int argc;\n\n    reading_config_file = 1;\n    lines = sdssplitlen(config,strlen(config),\"\\n\",1,&totlines);\n\n    for (i = 0; i < totlines; i++) {\n        linenum = i+1;\n        lines[i] = sdstrim(lines[i],\" \\t\\r\\n\");\n\n        /* Skip comments and blank lines */\n        if (lines[i][0] == '#' || lines[i][0] == '\\0') continue;\n\n        /* Split into arguments */\n        argv = sdssplitargs(lines[i],&argc);\n        if (argv == NULL) {\n            err = \"Unbalanced quotes in configuration line\";\n            goto loaderr;\n        }\n\n        /* Skip this line if the resulting command vector is empty. 
*/\n        if (argc == 0) {\n            sdsfreesplitres(argv,argc);\n            argv = NULL;\n            continue;\n        }\n        sdstolower(argv[0]);\n\n        /* Iterate the configs that are standard */\n        standardConfig *config = lookupConfig(argv[0]);\n        if (config) {\n            /* For normal single arg configs enforce we have a single argument.\n             * Note that MULTI_ARG_CONFIGs need to validate arg count on their own */\n            if (!(config->flags & MULTI_ARG_CONFIG) && argc != 2) {\n                err = \"wrong number of arguments\";\n                goto loaderr;\n            }\n\n            if ((config->flags & MULTI_ARG_CONFIG) && argc == 2 && sdslen(argv[1])) {\n                /* For MULTI_ARG_CONFIGs, if we only have one argument, try to split it by spaces.\n                 * Only if the argument is not empty, otherwise something like --save \"\" will fail.\n                 * So that we can support something like --config \"arg1 arg2 arg3\". 
*/\n                sds *new_argv;\n                int new_argc;\n                new_argv = sdssplitargs(argv[1], &new_argc);\n                if (!config->interface.set(config, new_argv, new_argc, &err)) {\n                    if(new_argv) sdsfreesplitres(new_argv, new_argc);\n                    goto loaderr;\n                }\n                sdsfreesplitres(new_argv, new_argc);\n            } else {\n                /* Set config using all arguments that follows */\n                if (!config->interface.set(config, &argv[1], argc-1, &err)) {\n                    goto loaderr;\n                }\n            }\n\n            sdsfreesplitres(argv,argc);\n            argv = NULL;\n            continue;\n        } else {\n            int match = 0;\n            for (deprecatedConfig *config = deprecated_configs; config->name != NULL; config++) {\n                if (!strcasecmp(argv[0], config->name) && \n                    config->argc_min <= argc && \n                    argc <= config->argc_max) \n                {\n                    match = 1;\n                    break;\n                }\n            }\n            if (match) {\n                sdsfreesplitres(argv,argc);\n                argv = NULL;\n                continue;\n            }\n        }\n\n        /* Execute config directives */\n        if (!strcasecmp(argv[0],\"include\") && argc == 2) {\n            loadServerConfig(argv[1], 0, NULL);\n        } else if (!strcasecmp(argv[0],\"rename-command\") && argc == 3) {\n            struct redisCommand *cmd = lookupCommandBySds(argv[1]);\n            int retval;\n\n            if (!cmd) {\n                err = \"No such command in rename-command\";\n                goto loaderr;\n            }\n\n            /* If the target command name is the empty string we just\n             * remove it from the command table. 
*/\n            retval = dictDelete(server.commands, argv[1]);\n            serverAssert(retval == DICT_OK);\n\n            /* Otherwise we re-add the command under a different name. */\n            if (sdslen(argv[2]) != 0) {\n                sds copy = sdsdup(argv[2]);\n\n                retval = dictAdd(server.commands, copy, cmd);\n                if (retval != DICT_OK) {\n                    sdsfree(copy);\n                    err = \"Target command name already exists\"; goto loaderr;\n                }\n            }\n        } else if (!strcasecmp(argv[0],\"user\") && argc >= 2) {\n            int argc_err;\n            if (ACLAppendUserForLoading(argv,argc,&argc_err) == C_ERR) {\n                const char *errmsg = ACLSetUserStringError();\n                snprintf(buf,sizeof(buf),\"Error in user declaration '%s': %s\",\n                    argv[argc_err],errmsg);\n                err = buf;\n                goto loaderr;\n            }\n        } else if (!strcasecmp(argv[0],\"loadmodule\") && argc >= 2) {\n            queueLoadModule(argv[1],&argv[2],argc-2);\n        } else if (!strcasecmp(argv[0],\"sentinel\")) {\n            /* argc == 1 is handled by main() as we need to enter the sentinel\n             * mode ASAP. */\n            if (argc != 1) {\n                if (!server.sentinel_mode) {\n                    err = \"sentinel directive while not in sentinel mode\";\n                    goto loaderr;\n                }\n                queueSentinelConfig(argv+1,argc-1,linenum,lines[i]);\n            }\n        } else {\n            /* Collect all unknown configurations into `module_configs_queue`.\n             * These may include valid module configurations or invalid ones.\n             * They will be validated later by loadModuleConfigs() against the\n             * configurations declared by the loaded module(s). 
*/\n            \n            if (argc < 2) {\n                err = \"Bad directive or wrong number of arguments\";\n                goto loaderr;\n            }\n            sds name = sdsdup(argv[0]);\n            sds val = sdsdup(argv[1]);\n            for (int i = 2; i < argc; i++)\n                val = sdscatfmt(val, \" %S\", argv[i]);\n            if (!dictReplace(server.module_configs_queue, name, val)) sdsfree(name);\n        }\n        sdsfreesplitres(argv,argc);\n        argv = NULL;\n    }\n\n    if (server.logfile[0] != '\\0') {\n        FILE *logfp;\n\n        /* Test if we are able to open the file. The server will not\n         * be able to abort just for this problem later... */\n        logfp = fopen(server.logfile,\"a\");\n        if (logfp == NULL) {\n            err = sdscatprintf(sdsempty(),\n                               \"Can't open the log file: %s\", strerror(errno));\n            goto loaderr;\n        }\n        fclose(logfp);\n    }\n\n    /* Sanity checks. 
*/\n    if (server.cluster_enabled && server.masterhost) {\n        err = \"replicaof directive not allowed in cluster mode\";\n        goto loaderr;\n    }\n\n    /* in case cluster mode is enabled dbnum must be 1 */\n    if (server.cluster_enabled && server.dbnum > 1) {\n        serverLog(LL_WARNING, \"WARNING: Changing databases number from %d to 1 since we are in cluster mode\", server.dbnum);\n        server.dbnum = 1;\n    }\n\n    /* To ensure backward compatibility and work while hz is out of range */\n    if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;\n    if (server.config_hz > CONFIG_MAX_HZ) server.config_hz = CONFIG_MAX_HZ;\n\n    sdsfreesplitres(lines,totlines);\n    reading_config_file = 0;\n    return;\n\nloaderr:\n    if (argv) sdsfreesplitres(argv,argc);\n    fprintf(stderr, \"\\n*** FATAL CONFIG FILE ERROR (Redis %s) ***\\n\",\n        REDIS_VERSION);\n    if (i < totlines) {\n        fprintf(stderr, \"Reading the configuration file, at line %d\\n\", linenum);\n        fprintf(stderr, \">>> '%s'\\n\", lines[i]);\n    }\n    fprintf(stderr, \"%s\\n\", err);\n    exit(1);\n}\n\n/* Load the server configuration from the specified filename.\n * The function appends the additional configuration directives stored\n * in the 'options' string to the config file before loading.\n *\n * Both filename and options can be NULL, in such a case are considered\n * empty. This way loadServerConfig can be used to just load a file or\n * just load a string. 
*/\n#define CONFIG_READ_LEN 1024\nvoid loadServerConfig(char *filename, char config_from_stdin, char *options) {\n    sds config = sdsempty();\n    char buf[CONFIG_READ_LEN+1];\n    FILE *fp;\n    glob_t globbuf;\n\n    /* Load the file content */\n    if (filename) {\n\n        /* The logic for handling wildcards has slightly different behavior in cases where\n         * there is a failure to locate the included file.\n         * Whether or not a wildcard is specified, we should ALWAYS log errors when attempting\n         * to open included config files.\n         *\n         * However, we desire a behavioral difference between instances where a wildcard was\n         * specified and those where it hasn't:\n         *      no wildcards   : attempt to open the specified file and fail with a logged error\n         *                       if the file cannot be found and opened.\n         *      with wildcards : attempt to glob the specified pattern; if no files match the\n         *                       pattern, then gracefully continue on to the next entry in the\n         *                       config file, as if the current entry was never encountered.\n         *                       This will allow for empty conf.d directories to be included. 
*/\n\n        if (strchr(filename, '*') || strchr(filename, '?') || strchr(filename, '[')) {\n            /* A wildcard character detected in filename, so let us use glob */\n            if (glob(filename, 0, NULL, &globbuf) == 0) {\n\n                for (size_t i = 0; i < globbuf.gl_pathc; i++) {\n                    if ((fp = fopen(globbuf.gl_pathv[i], \"r\")) == NULL) {\n                        serverLog(LL_WARNING,\n                                  \"Fatal error, can't open config file '%s': %s\",\n                                  globbuf.gl_pathv[i], strerror(errno));\n                        exit(1);\n                    }\n                    while(fgets(buf,CONFIG_READ_LEN+1,fp) != NULL)\n                        config = sdscat(config,buf);\n                    fclose(fp);\n                }\n\n                globfree(&globbuf);\n            }\n        } else {\n            /* No wildcard in filename means we can use the original logic to read and\n             * potentially fail traditionally */\n            if ((fp = fopen(filename, \"r\")) == NULL) {\n                serverLog(LL_WARNING,\n                          \"Fatal error, can't open config file '%s': %s\",\n                          filename, strerror(errno));\n                exit(1);\n            }\n            while(fgets(buf,CONFIG_READ_LEN+1,fp) != NULL)\n                config = sdscat(config,buf);\n            fclose(fp);\n        }\n    }\n\n    /* Append content from stdin */\n    if (config_from_stdin) {\n        serverLog(LL_NOTICE,\"Reading config from stdin\");\n        fp = stdin;\n        while(fgets(buf,CONFIG_READ_LEN+1,fp) != NULL)\n            config = sdscat(config,buf);\n    }\n\n    /* Append the additional options */\n    if (options) {\n        config = sdscat(config,\"\\n\");\n        config = sdscat(config,options);\n    }\n    loadServerConfigFromString(config);\n    sdsfree(config);\n}\n\nstatic int performInterfaceSet(standardConfig *config, sds value, const char 
**errstr) {\n    sds *argv;\n    int argc, res;\n\n    if (config->flags & MULTI_ARG_CONFIG) {\n        argv = sdssplitlen(value, sdslen(value), \" \", 1, &argc);\n    } else {\n        argv = (char**)&value;\n        argc = 1;\n    }\n\n    /* Set the config */\n    res = config->interface.set(config, argv, argc, errstr);\n    if (config->flags & MULTI_ARG_CONFIG) sdsfreesplitres(argv, argc);\n    return res;\n}\n\n/* Find the config by name and attempt to set it to value. */\nint performModuleConfigSetFromName(sds name, sds value, const char **err) {\n    standardConfig *config = lookupConfig(name);\n    if (!config || !(config->flags & MODULE_CONFIG)) {\n        *err = \"Config name not found\";\n        return 0;\n    }\n    return performInterfaceSet(config, value, err);\n}\n\n/* Find config by name and attempt to set it to its default value. */\nint performModuleConfigSetDefaultFromName(sds name, const char **err) {\n    standardConfig *config = lookupConfig(name);\n    serverAssert(config);\n    if (!(config->flags & MODULE_CONFIG)) {\n        *err = \"Config name not found\";\n        return 0;\n    }\n    switch (config->type) {\n        case BOOL_CONFIG:\n            return setModuleBoolConfig(config->privdata, config->data.yesno.default_value, err);\n        case SDS_CONFIG:\n            return setModuleStringConfig(config->privdata, config->data.sds.default_value, err);\n        case NUMERIC_CONFIG:\n            return setModuleNumericConfig(config->privdata, config->data.numeric.default_value, err);\n        case ENUM_CONFIG:\n            return setModuleEnumConfig(config->privdata, config->data.enumd.default_value, err);\n        default:\n            serverPanic(\"Config type of module config is not allowed.\");\n    }\n    return 0;\n}\n\nstatic int configNeedsApply(standardConfig *config) {\n    return ((config->flags & MODULE_CONFIG) && moduleConfigNeedsApply(config->privdata)) ||\n           config->interface.apply;\n}\n\nstatic void 
restoreBackupConfig(standardConfig **set_configs, sds *old_values, int count) {\n    int i;\n    const char *errstr = \"unknown error\";\n    /* Set all backup values */\n    for (i = 0; i < count; i++) {\n        if (!performInterfaceSet(set_configs[i], old_values[i], &errstr))\n            serverLog(LL_WARNING, \"Failed restoring failed CONFIG SET command. Error setting %s to '%s': %s\",\n                      set_configs[i]->name, old_values[i], errstr);\n    }\n \n    for (i = 0; i < count; i++) {\n        if (!configNeedsApply(set_configs[i])) continue;\n\n        int applyres = 0;\n        if (set_configs[i]->flags & MODULE_CONFIG)\n            applyres = moduleConfigApply(set_configs[i]->privdata, &errstr);\n        else\n            applyres = set_configs[i]->interface.apply(&errstr);\n        \n        if (!applyres)\n            serverLog(LL_WARNING, \"Failed applying restored CONFIG SET command. Error applying %s: %s\",\n                      set_configs[i]->name, errstr);\n    }\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG SET implementation\n *----------------------------------------------------------------------------*/\n\nvoid configSetCommand(client *c) {\n    const char *errstr = NULL;\n    const char *invalid_arg_name = NULL;\n    const char *err_arg_name = NULL;\n    standardConfig **set_configs; /* TODO: make this a dict for better performance */\n    const char **config_names;\n    sds *new_values;\n    sds *old_values = NULL;\n    int config_count, i, j;\n    int invalid_args = 0, deny_loading_error = 0;\n    int *config_changed;\n\n    /* Make sure we have an even number of arguments: conf-val pairs */\n    if (c->argc & 1) {\n        addReplyErrorObject(c, shared.syntaxerr);\n        return;\n    }\n    config_count = (c->argc - 2) / 2;\n\n    set_configs = zcalloc(sizeof(standardConfig*)*config_count);\n    config_names = zcalloc(sizeof(char*)*config_count);\n    new_values = 
zmalloc(sizeof(sds*)*config_count);\n    old_values = zcalloc(sizeof(sds*)*config_count);\n    config_changed = zmalloc(sizeof(int)*config_count);\n\n    /* Find all relevant configs */\n    for (i = 0; i < config_count; i++) {\n        standardConfig *config = lookupConfig(c->argv[2+i*2]->ptr);\n        /* Fail if we couldn't find this config */\n        if (!config) {\n            if (!invalid_args) {\n                invalid_arg_name = c->argv[2+i*2]->ptr;\n                invalid_args = 1;\n            }\n            continue;\n        }\n\n        /* Note: it's important we run over ALL passed configs and check if we need to call `redactClientCommandArgument()`.\n         * This is in order to avoid anyone using this command for a log/slowlog/monitor/etc. displaying sensitive info.\n         * So even if we encounter an error we still continue running over the remaining arguments. */\n        if (config->flags & SENSITIVE_CONFIG) {\n            redactClientCommandArgument(c,2+i*2+1);\n        }\n\n        /* We continue to make sure we redact all the configs */ \n        if (invalid_args) continue;\n\n        if (config->flags & IMMUTABLE_CONFIG ||\n            (config->flags & PROTECTED_CONFIG && !allowProtectedAction(server.enable_protected_configs, c)))\n        {\n            /* Note: we don't abort the loop since we still want to handle redacting sensitive configs (above) */\n            errstr = (config->flags & IMMUTABLE_CONFIG) ? 
\"can't set immutable config\" : \"can't set protected config\";\n            err_arg_name = c->argv[2+i*2]->ptr;\n            invalid_args = 1;\n            continue;\n        }\n\n        if (server.loading && config->flags & DENY_LOADING_CONFIG) {\n            /* Note: we don't abort the loop since we still want to handle redacting sensitive configs (above) */\n            deny_loading_error = 1;\n            invalid_args = 1;\n            continue;\n        }\n\n        /* If this config appears twice then fail */\n        for (j = 0; j < i; j++) {\n            if (set_configs[j] == config) {\n                /* Note: we don't abort the loop since we still want to handle redacting sensitive configs (above) */\n                errstr = \"duplicate parameter\";\n                err_arg_name = c->argv[2+i*2]->ptr;\n                invalid_args = 1;\n                break;\n            }\n        }\n        set_configs[i] = config;\n        config_names[i] = config->name;\n        new_values[i] = c->argv[2+i*2+1]->ptr;\n    }\n    \n    if (invalid_args) goto err;\n\n    /* Backup old values before setting new ones */\n    for (i = 0; i < config_count; i++)\n        old_values[i] = set_configs[i]->interface.get(set_configs[i]);\n\n    /* Set all new values (don't apply yet) */\n    for (i = 0; i < config_count; i++) {\n        int res = performInterfaceSet(set_configs[i], new_values[i], &errstr);\n        if (!res) {\n            restoreBackupConfig(set_configs, old_values, i+1);\n            err_arg_name = set_configs[i]->name;\n            goto err;\n        }\n        if (res == 1) config_changed[i] = 1;\n        else config_changed[i] = 0;\n    }\n\n    /* Apply all configs that need it */\n    for (i = 0; i < config_count; i++) {\n        if (!config_changed[i]) continue;\n\n        /* A new value was set, if this config has an apply function try to apply\n         * it and restore if apply fails. 
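The back-up / set-all / apply-all / restore-on-failure flow above can be sketched in isolation. Below, `fake_cfg`, `cfg_set` and `cfg_apply` are hypothetical stand-ins for the standardConfig interface, not Redis APIs; the sketch only shows the transactional shape of configSetCommand() + restoreBackupConfig():

```c
#include <assert.h>

// Hypothetical config slot: a value plus a validity rule (sketch only).
typedef struct { int value; } fake_cfg;

static int cfg_set(fake_cfg *c, int v) { c->value = v; return 1; }
static int cfg_apply(fake_cfg *c) { return c->value >= 0; } // applying rejects negatives

// Set every config, then apply each one; on any apply failure restore
// the backed-up values, mirroring the rollback done by restoreBackupConfig().
static int set_all_or_rollback(fake_cfg *cfgs, const int *new_vals, int n) {
    int backup[16];
    for (int i = 0; i < n; i++) backup[i] = cfgs[i].value;   // back up old values
    for (int i = 0; i < n; i++) cfg_set(&cfgs[i], new_vals[i]); // set all (no apply yet)
    for (int i = 0; i < n; i++) {
        if (!cfg_apply(&cfgs[i])) {
            for (int j = 0; j < n; j++) cfg_set(&cfgs[j], backup[j]); // rollback
            return 0;
        }
    }
    return 1;
}
```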
*/\n        if (!configNeedsApply(set_configs[i])) continue;\n\n        int res = 0;\n        if (set_configs[i]->flags & MODULE_CONFIG)\n            res = moduleConfigApply(set_configs[i]->privdata, &errstr);\n        else\n            res = set_configs[i]->interface.apply(&errstr);\n        \n        if (!res) {\n            restoreBackupConfig(set_configs, old_values, config_count);\n            err_arg_name = set_configs[i]->name;\n            goto err;\n        }\n    }\n\n    RedisModuleConfigChangeV1 cc = {.num_changes = config_count, .config_names = config_names};\n    moduleFireServerEvent(REDISMODULE_EVENT_CONFIG, REDISMODULE_SUBEVENT_CONFIG_CHANGE, &cc);\n    addReply(c,shared.ok);\n    goto end;\n\nerr:\n    if (deny_loading_error) {\n        /* We give the loading error precedence because it may be handled by clients differently, unlike a plain -ERR. */\n        addReplyErrorObject(c,shared.loadingerr);\n    } else if (invalid_arg_name) {\n        addReplyErrorFormat(c,\"Unknown option or number of arguments for CONFIG SET - '%s'\", invalid_arg_name);\n    } else if (errstr) {\n        addReplyErrorFormat(c,\"CONFIG SET failed (possibly related to argument '%s') - %s\", err_arg_name, errstr);\n    } else {\n        addReplyErrorFormat(c,\"CONFIG SET failed (possibly related to argument '%s')\", err_arg_name);\n    }\nend:\n    zfree(set_configs);\n    zfree(config_names);\n    zfree(new_values);\n    for (i = 0; i < config_count; i++)\n        sdsfree(old_values[i]);\n    zfree(old_values);\n    zfree(config_changed);\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG GET implementation\n *----------------------------------------------------------------------------*/\n\nvoid configGetCommand(client *c) {\n    int i;\n    dictEntry *de;\n    dictIterator di;\n    /* Create a dictionary to store the matched configs */\n    dict *matches = dictCreate(&externalStringType);\n    for (i = 0; i < c->argc - 2; i++) 
{\n        robj *o = c->argv[2+i];\n        sds name = o->ptr;\n\n        /* If the string doesn't contain glob patterns, just directly\n         * look up the key in the dictionary. */\n        if (!strpbrk(name, \"[*?\")) {\n            if (dictFind(matches, name)) continue;\n            standardConfig *config = lookupConfig(name);\n\n            if (config) {\n                dictAdd(matches, name, config);\n            }\n            continue;\n        }\n\n        /* Otherwise, do a match against all items in the dictionary. */\n        dictInitIterator(&di, configs);\n        while ((de = dictNext(&di)) != NULL) {\n            standardConfig *config = dictGetVal(de);\n            /* Note that hidden configs require an exact match (not a pattern) */\n            if (config->flags & HIDDEN_CONFIG) continue;\n            if (dictFind(matches, config->name)) continue;\n            if (stringmatch(name, dictGetKey(de), 1)) {\n                dictAdd(matches, dictGetKey(de), config);\n            }\n        }\n        dictResetIterator(&di);\n    }\n\n    dictInitIterator(&di, matches);\n    addReplyMapLen(c, dictSize(matches));\n    while ((de = dictNext(&di)) != NULL) {\n        standardConfig *config = (standardConfig *) dictGetVal(de);\n        addReplyBulkCString(c, dictGetKey(de));\n        addReplyBulkSds(c, config->interface.get(config));\n    }\n    dictResetIterator(&di);\n    dictRelease(matches);\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG REWRITE implementation\n *----------------------------------------------------------------------------*/\n\n#define REDIS_CONFIG_REWRITE_SIGNATURE \"# Generated by CONFIG REWRITE\"\n\n/* We use the following dictionary type to store where a configuration\n * option is mentioned in the old configuration file, so it's\n * like \"maxmemory\" -> list of line numbers (first line is zero). 
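The fast path in configGetCommand() above only needs to know whether any glob metacharacter appears in the argument before deciding between a direct dictionary lookup and a full pattern scan. A minimal sketch of that check (the helper name is illustrative):

```c
#include <string.h>

// Returns 1 when the name needs pattern matching, 0 when a direct
// dictionary lookup suffices -- the same strpbrk() test the code performs.
static int has_glob_chars(const char *name) {
    return strpbrk(name, "[*?") != NULL;
}
```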
*/\nvoid dictListDestructor(dict *d, void *val);\n\n/* Sentinel config rewriting is implemented inside sentinel.c by\n * rewriteConfigSentinelOption(). */\nvoid rewriteConfigSentinelOption(struct rewriteConfigState *state);\n\ndictType optionToLineDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    dictListDestructor,         /* val destructor */\n    NULL                        /* allow to expand */\n};\n\ndictType optionSetDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* The config rewrite state. */\nstruct rewriteConfigState {\n    dict *option_to_line; /* Option -> list of config file lines map */\n    dict *rewritten;      /* Dictionary of already processed options */\n    int numlines;         /* Number of lines in current config */\n    sds *lines;           /* Current lines as an array of sds strings */\n    int needs_signature;  /* True if we need to append the rewrite\n                             signature. */\n    int force_write;      /* True if we want all keywords to be force\n                             written. Currently only used for testing\n                             and debug information. */\n};\n\n/* Free the configuration rewrite state. 
*/\nvoid rewriteConfigReleaseState(struct rewriteConfigState *state) {\n    sdsfreesplitres(state->lines,state->numlines);\n    dictRelease(state->option_to_line);\n    dictRelease(state->rewritten);\n    zfree(state);\n}\n\n/* Create the configuration rewrite state */\nstruct rewriteConfigState *rewriteConfigCreateState(void) {\n    struct rewriteConfigState *state = zmalloc(sizeof(*state));\n    state->option_to_line = dictCreate(&optionToLineDictType);\n    state->rewritten = dictCreate(&optionSetDictType);\n    state->numlines = 0;\n    state->lines = NULL;\n    state->needs_signature = 1;\n    state->force_write = 0;\n    return state;\n}\n\n/* Append the new line to the current configuration state. */\nvoid rewriteConfigAppendLine(struct rewriteConfigState *state, sds line) {\n    state->lines = zrealloc(state->lines, sizeof(char*) * (state->numlines+1));\n    state->lines[state->numlines++] = line;\n}\n\n/* Populate the option -> list of line numbers map. */\nvoid rewriteConfigAddLineNumberToOption(struct rewriteConfigState *state, sds option, int linenum) {\n    list *l = dictFetchValue(state->option_to_line,option);\n\n    if (l == NULL) {\n        l = listCreate();\n        dictAdd(state->option_to_line,sdsdup(option),l);\n    }\n    listAddNodeTail(l,(void*)(long)linenum);\n}\n\n/* Add the specified option to the set of processed options.\n * This is useful as only unused lines of processed options will be blanked\n * in the config file, while options the rewrite process does not understand\n * remain untouched. 
*/\nvoid rewriteConfigMarkAsProcessed(struct rewriteConfigState *state, const char *option) {\n    sds opt = sdsnew(option);\n\n    if (dictAdd(state->rewritten,opt,NULL) != DICT_OK) sdsfree(opt);\n}\n\n/* Read the old file, split it into lines to populate a newly created\n * config rewrite state, and return it to the caller.\n *\n * If it is impossible to read the old file, NULL is returned.\n * If the old file does not exist at all, an empty state is returned. */\nstruct rewriteConfigState *rewriteConfigReadOldFile(char *path) {\n    FILE *fp = fopen(path,\"r\");\n    if (fp == NULL && errno != ENOENT) return NULL;\n\n    struct redis_stat sb;\n    if (fp && redis_fstat(fileno(fp),&sb) == -1) {\n        fclose(fp);\n        return NULL;\n    }\n\n    int linenum = -1;\n    struct rewriteConfigState *state = rewriteConfigCreateState();\n\n    if (fp == NULL) {\n        return state;\n    }\n\n    if (sb.st_size == 0) {\n        fclose(fp);\n        return state;\n    } \n\n    /* Load the file content */\n    sds config = sdsnewlen(SDS_NOINIT,sb.st_size);\n    if (fread(config,1,sb.st_size,fp) == 0) {\n        sdsfree(config);\n        rewriteConfigReleaseState(state);\n        fclose(fp);\n        return NULL;\n    }\n\n    int i, totlines;\n    sds *lines = sdssplitlen(config,sdslen(config),\"\\n\",1,&totlines);\n\n    /* Read the old content line by line, populate the state. */\n    for (i = 0; i < totlines; i++) {\n        int argc;\n        sds *argv;\n        sds line = sdstrim(lines[i],\"\\r\\n\\t \");\n        lines[i] = NULL;\n\n        linenum++; /* Zero based, so we init at -1 */\n\n        /* Handle comments and empty lines. */\n        if (line[0] == '#' || line[0] == '\\0') {\n            if (state->needs_signature && !strcmp(line,REDIS_CONFIG_REWRITE_SIGNATURE))\n                state->needs_signature = 0;\n            rewriteConfigAppendLine(state,line);\n            continue;\n        }\n\n        /* Not a comment, split into arguments. 
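The per-line handling in rewriteConfigReadOldFile() above trims each line of the character set "\r\n\t " and then treats a leading '#' or an empty remainder as a comment line. A standalone sketch of those two steps (trim_line and is_comment_or_empty are illustrative names):

```c
#include <string.h>

// Trim, in place, the same character set rewriteConfigReadOldFile()
// strips, returning a pointer into the buffer past leading padding.
static char *trim_line(char *line) {
    const char *cset = "\r\n\t ";
    char *end = line + strlen(line);
    while (*line && strchr(cset, *line)) line++;
    while (end > line && strchr(cset, end[-1])) end--;
    *end = '\0';
    return line;
}

// Comment and empty lines are preserved verbatim by the rewrite state.
static int is_comment_or_empty(const char *line) {
    return line[0] == '#' || line[0] == '\0';
}
```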
*/\n        argv = sdssplitargs(line,&argc);\n\n        if (argv == NULL ||\n            (!lookupConfig(argv[0]) &&\n             /* The following is a list of config features that are only supported in\n              * config file parsing and are not recognized by lookupConfig */\n             strcasecmp(argv[0],\"include\") &&\n             strcasecmp(argv[0],\"rename-command\") &&\n             strcasecmp(argv[0],\"user\") &&\n             strcasecmp(argv[0],\"loadmodule\") &&\n             strcasecmp(argv[0],\"sentinel\")))\n        {\n            /* The line is unparsable for some reason, for\n             * instance it may have unbalanced quotes, or it may contain a\n             * config that doesn't exist anymore, for instance one from a module\n             * that got unloaded. Load it as a comment. */\n            sds aux = sdsnew(\"# ??? \");\n            aux = sdscatsds(aux,line);\n            if (argv) sdsfreesplitres(argv, argc);\n            sdsfree(line);\n            rewriteConfigAppendLine(state,aux);\n            continue;\n        }\n\n        sdstolower(argv[0]); /* We only want lowercase config directives. */\n\n        /* Now we populate the state according to the content of this line.\n         * Append the line and populate the option -> line numbers map. */\n        rewriteConfigAppendLine(state,line);\n\n        /* If this is an alias config, replace it with the original name. */\n        standardConfig *s_conf = lookupConfig(argv[0]);\n        if (s_conf && s_conf->flags & ALIAS_CONFIG) {\n            sdsfree(argv[0]);\n            argv[0] = sdsnew(s_conf->alias);\n        }\n\n        /* If this is a sentinel config, we use \"sentinel <config>\" as the option\n            to avoid messing up the sequence. 
*/\n        if (server.sentinel_mode && argc > 1 && !strcasecmp(argv[0],\"sentinel\")) {\n            sds sentinelOption = sdsempty();\n            sentinelOption = sdscatfmt(sentinelOption,\"%S %S\",argv[0],argv[1]);\n            rewriteConfigAddLineNumberToOption(state,sentinelOption,linenum);\n            sdsfree(sentinelOption);\n        } else {\n            rewriteConfigAddLineNumberToOption(state,argv[0],linenum);\n        }\n        sdsfreesplitres(argv,argc);\n    }\n    fclose(fp);\n    sdsfreesplitres(lines,totlines);\n    sdsfree(config);\n    return state;\n}\n\n/* Rewrite the specified configuration option with the new \"line\".\n * It progressively uses lines of the file that were already used for the same\n * configuration option in the old version of the file, removing that line from\n * the map of options -> line numbers.\n *\n * If there are no lines associated with a given configuration option and\n * \"force\" is non-zero, the line is appended to the configuration file.\n * Usually \"force\" is true when an option does not have its default value, so it\n * must be rewritten even if not present previously.\n *\n * The first time a line is appended into a configuration file, a comment\n * is added to show that starting from that point the config file was generated\n * by CONFIG REWRITE.\n *\n * \"line\" is either used, or freed, so the caller does not need to free it\n * in any way. */\nint rewriteConfigRewriteLine(struct rewriteConfigState *state, const char *option, sds line, int force) {\n    sds o = sdsnew(option);\n    list *l = dictFetchValue(state->option_to_line,o);\n\n    rewriteConfigMarkAsProcessed(state,option);\n\n    if (!l && !force && !state->force_write) {\n        /* Option not used previously, and we are not forced to use it. 
*/\n        sdsfree(line);\n        sdsfree(o);\n        return 0;\n    }\n\n    if (l) {\n        listNode *ln = listFirst(l);\n        int linenum = (long) ln->value;\n\n        /* There are still lines in the old configuration file we can reuse\n         * for this option. Replace the line with the new one. */\n        listDelNode(l,ln);\n        if (listLength(l) == 0) dictDelete(state->option_to_line,o);\n        sdsfree(state->lines[linenum]);\n        state->lines[linenum] = line;\n    } else {\n        /* Append a new line. */\n        if (state->needs_signature) {\n            rewriteConfigAppendLine(state,\n                sdsnew(REDIS_CONFIG_REWRITE_SIGNATURE));\n            state->needs_signature = 0;\n        }\n        rewriteConfigAppendLine(state,line);\n    }\n    sdsfree(o);\n    return 1;\n}\n\n/* Write the long long 'bytes' value as a string in a way that is parsable\n * inside redis.conf. If possible uses the GB, MB, KB notation. */\nint rewriteConfigFormatMemory(char *buf, size_t len, long long bytes) {\n    int gb = 1024*1024*1024;\n    int mb = 1024*1024;\n    int kb = 1024;\n\n    if (bytes && (bytes % gb) == 0) {\n        return snprintf(buf,len,\"%lldgb\",bytes/gb);\n    } else if (bytes && (bytes % mb) == 0) {\n        return snprintf(buf,len,\"%lldmb\",bytes/mb);\n    } else if (bytes && (bytes % kb) == 0) {\n        return snprintf(buf,len,\"%lldkb\",bytes/kb);\n    } else {\n        return snprintf(buf,len,\"%lld\",bytes);\n    }\n}\n\n/* Rewrite a simple \"option-name <bytes>\" configuration option. */\nvoid rewriteConfigBytesOption(struct rewriteConfigState *state, const char *option, long long value, long long defvalue) {\n    char buf[64];\n    int force = value != defvalue;\n    sds line;\n\n    rewriteConfigFormatMemory(buf,sizeof(buf),value);\n    line = sdscatprintf(sdsempty(),\"%s %s\",option,buf);\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite a simple \"option-name n%\" configuration option. 
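rewriteConfigFormatMemory() above picks the largest unit (gb, mb, kb) that divides the byte count exactly, falling back to raw bytes otherwise. A self-contained sketch of the same selection rule:

```c
#include <stdio.h>

// Same selection rule as rewriteConfigFormatMemory(): use gb/mb/kb only
// when the value is a non-zero exact multiple, otherwise print raw bytes.
static int format_memory(char *buf, size_t len, long long bytes) {
    long long gb = 1024*1024*1024, mb = 1024*1024, kb = 1024;
    if (bytes && bytes % gb == 0) return snprintf(buf, len, "%lldgb", bytes/gb);
    if (bytes && bytes % mb == 0) return snprintf(buf, len, "%lldmb", bytes/mb);
    if (bytes && bytes % kb == 0) return snprintf(buf, len, "%lldkb", bytes/kb);
    return snprintf(buf, len, "%lld", bytes);
}
```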
*/\nvoid rewriteConfigPercentOption(struct rewriteConfigState *state, const char *option, long long value, long long defvalue) {\n    int force = value != defvalue;\n    sds line = sdscatprintf(sdsempty(),\"%s %lld%%\",option,value);\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite a yes/no option. */\nvoid rewriteConfigYesNoOption(struct rewriteConfigState *state, const char *option, int value, int defvalue) {\n    int force = value != defvalue;\n    sds line = sdscatprintf(sdsempty(),\"%s %s\",option,\n        value ? \"yes\" : \"no\");\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite a string option. */\nvoid rewriteConfigStringOption(struct rewriteConfigState *state, const char *option, char *value, const char *defvalue) {\n    int force = 1;\n    sds line;\n\n    /* String options set to NULL need to be not present at all in the\n     * configuration file to be set to NULL again at the next reboot. */\n    if (value == NULL) {\n        rewriteConfigMarkAsProcessed(state,option);\n        return;\n    }\n\n    /* Set force to zero if the value is set to its default. */\n    if (defvalue && strcmp(value,defvalue) == 0) force = 0;\n\n    line = sdsnew(option);\n    line = sdscatlen(line, \" \", 1);\n    line = sdscatrepr(line, value, strlen(value));\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite a SDS string option. */\nvoid rewriteConfigSdsOption(struct rewriteConfigState *state, const char *option, sds value, const char *defvalue) {\n    int force = 1;\n    sds line;\n\n    /* If there is no value set, we don't want the SDS option\n     * to be present in the configuration at all. */\n    if (value == NULL) {\n        rewriteConfigMarkAsProcessed(state, option);\n        return;\n    }\n\n    /* Set force to zero if the value is set to its default. 
*/\n    if (defvalue && strcmp(value, defvalue) == 0) force = 0;\n\n    line = sdsnew(option);\n    line = sdscatlen(line, \" \", 1);\n    line = sdscatrepr(line, value, sdslen(value));\n\n    rewriteConfigRewriteLine(state, option, line, force);\n}\n\n/* Rewrite a numerical (long long range) option. */\nvoid rewriteConfigNumericalOption(struct rewriteConfigState *state, const char *option, long long value, long long defvalue) {\n    int force = value != defvalue;\n    sds line = sdscatprintf(sdsempty(),\"%s %lld\",option,value);\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite an octal option. */\nvoid rewriteConfigOctalOption(struct rewriteConfigState *state, const char *option, long long value, long long defvalue) {\n    int force = value != defvalue;\n    sds line = sdscatprintf(sdsempty(),\"%s %llo\",option,value);\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite an enumeration option. It takes as usually state and option name,\n * and in addition the enumeration array and the default value for the\n * option. */\nvoid rewriteConfigEnumOption(struct rewriteConfigState *state, const char *option, int value, standardConfig *config) {\n    int multiarg = config->flags & MULTI_ARG_CONFIG;\n    sds names = configEnumGetName(config->data.enumd.enum_value,value,multiarg);\n    sds line = sdscatfmt(sdsempty(),\"%s %s\",option,names);\n    sdsfree(names);\n    int force = value != config->data.enumd.default_value;\n\n    rewriteConfigRewriteLine(state,option,line,force);\n}\n\n/* Rewrite the save option. 
*/\nvoid rewriteConfigSaveOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    int j;\n    sds line;\n\n    /* In Sentinel mode we don't need to rewrite the save parameters */\n    if (server.sentinel_mode) {\n        rewriteConfigMarkAsProcessed(state,name);\n        return;\n    }\n\n    /* Rewrite the save parameters, or an empty 'save \"\"' line to prevent the\n     * defaults from being used.\n     */\n    if (!server.saveparamslen) {\n        rewriteConfigRewriteLine(state,name,sdsnew(\"save \\\"\\\"\"),1);\n    } else {\n        for (j = 0; j < server.saveparamslen; j++) {\n            line = sdscatprintf(sdsempty(),\"save %ld %d\",\n                (long) server.saveparams[j].seconds, server.saveparams[j].changes);\n            rewriteConfigRewriteLine(state,name,line,1);\n        }\n    }\n\n    /* Mark \"save\" as processed in case server.saveparamslen is zero. */\n    rewriteConfigMarkAsProcessed(state,name);\n}\n\n/* Rewrite the user option. */\nvoid rewriteConfigUserOption(struct rewriteConfigState *state) {\n    /* If there is a user file defined we just mark this configuration\n     * directive as processed, so that all the lines containing users\n     * inside the config file get discarded. */\n    if (server.acl_filename[0] != '\\0') {\n        rewriteConfigMarkAsProcessed(state,\"user\");\n        return;\n    }\n\n    /* Otherwise scan the list of users and rewrite every line. Note that\n     * in case the list here is empty, the effect will just be to comment out\n     * all the user directives inside the config file. 
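rewriteConfigSaveOption() above emits one "save <seconds> <changes>" line per configured rule, or a literal save "" when there are no rules, so the built-in defaults cannot sneak back in. A standalone sketch of that formatting (format_save_rules is an illustrative helper, not a Redis function):

```c
#include <stdio.h>

// Format save rules the way rewriteConfigSaveOption() emits them:
// one line per rule, or the literal save "" when no rules are set.
static int format_save_rules(char *buf, size_t len,
                             const long *secs, const int *changes, int n) {
    if (n == 0) return snprintf(buf, len, "save \"\"");
    size_t off = 0;
    for (int i = 0; i < n; i++) {
        off += snprintf(buf + off, len - off, "%ssave %ld %d",
                        i ? "\n" : "", secs[i], changes[i]);
    }
    return (int)off;
}
```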
*/\n    raxIterator ri;\n    raxStart(&ri,Users);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        user *u = ri.data;\n        sds line = sdsnew(\"user \");\n        line = sdscatsds(line,u->name);\n        line = sdscatlen(line,\" \",1);\n        robj *descr = ACLDescribeUser(u);\n        line = sdscatsds(line,descr->ptr);\n        decrRefCount(descr);\n        rewriteConfigRewriteLine(state,\"user\",line,1);\n    }\n    raxStop(&ri);\n\n    /* Mark \"user\" as processed in case there are no defined users. */\n    rewriteConfigMarkAsProcessed(state,\"user\");\n}\n\n/* Rewrite the dir option, always using absolute paths.*/\nvoid rewriteConfigDirOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    char cwd[1024];\n\n    if (getcwd(cwd,sizeof(cwd)) == NULL) {\n        rewriteConfigMarkAsProcessed(state,name);\n        return; /* no rewrite on error. */\n    }\n    rewriteConfigStringOption(state,name,cwd,NULL);\n}\n\n/* Rewrite the slaveof option. */\nvoid rewriteConfigReplicaOfOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    sds line;\n\n    /* If this is a master, we want all the slaveof config options\n     * in the file to be removed. Note that if this is a cluster instance\n     * we don't want a slaveof directive inside redis.conf. */\n    if (server.cluster_enabled || server.masterhost == NULL) {\n        rewriteConfigMarkAsProcessed(state, name);\n        return;\n    }\n    line = sdscatprintf(sdsempty(),\"%s %s %d\", name,\n        server.masterhost, server.masterport);\n    rewriteConfigRewriteLine(state,name,line,1);\n}\n\n/* Rewrite the notify-keyspace-events option. 
*/\nvoid rewriteConfigNotifyKeyspaceEventsOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    int force = server.notify_keyspace_events != 0;\n    sds line, flags;\n\n    flags = keyspaceEventsFlagsToString(server.notify_keyspace_events);\n    line = sdsnew(name);\n    line = sdscatlen(line, \" \", 1);\n    line = sdscatrepr(line, flags, sdslen(flags));\n    sdsfree(flags);\n    rewriteConfigRewriteLine(state,name,line,force);\n}\n\n/* Rewrite the client-output-buffer-limit option. */\nvoid rewriteConfigClientOutputBufferLimitOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    int j;\n    for (j = 0; j < CLIENT_TYPE_OBUF_COUNT; j++) {\n        int force = (server.client_obuf_limits[j].hard_limit_bytes !=\n                    clientBufferLimitsDefaults[j].hard_limit_bytes) ||\n                    (server.client_obuf_limits[j].soft_limit_bytes !=\n                    clientBufferLimitsDefaults[j].soft_limit_bytes) ||\n                    (server.client_obuf_limits[j].soft_limit_seconds !=\n                    clientBufferLimitsDefaults[j].soft_limit_seconds);\n        sds line;\n        char hard[64], soft[64];\n\n        rewriteConfigFormatMemory(hard,sizeof(hard),\n                server.client_obuf_limits[j].hard_limit_bytes);\n        rewriteConfigFormatMemory(soft,sizeof(soft),\n                server.client_obuf_limits[j].soft_limit_bytes);\n\n        char *typename = getClientTypeName(j);\n        if (!strcmp(typename,\"slave\")) typename = \"replica\";\n        line = sdscatprintf(sdsempty(),\"%s %s %s %s %ld\",\n                name, typename, hard, soft,\n                (long) server.client_obuf_limits[j].soft_limit_seconds);\n        rewriteConfigRewriteLine(state,name,line,force);\n    }\n}\n\n/* Rewrite the oom-score-adj-values option. 
*/\nvoid rewriteConfigOOMScoreAdjValuesOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    int force = 0;\n    int j;\n    sds line;\n\n    line = sdsnew(name);\n    line = sdscatlen(line, \" \", 1);\n    for (j = 0; j < CONFIG_OOM_COUNT; j++) {\n        if (server.oom_score_adj_values[j] != configOOMScoreAdjValuesDefaults[j])\n            force = 1;\n\n        line = sdscatprintf(line, \"%d\", server.oom_score_adj_values[j]);\n        if (j+1 != CONFIG_OOM_COUNT)\n            line = sdscatlen(line, \" \", 1);\n    }\n    rewriteConfigRewriteLine(state,name,line,force);\n}\n\n/* Rewrite the bind option. */\nvoid rewriteConfigBindOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    int force = 1;\n    sds line, addresses;\n    int is_default = 0;\n\n    /* Compare server.bindaddr with CONFIG_DEFAULT_BINDADDR */\n    if (server.bindaddr_count == CONFIG_DEFAULT_BINDADDR_COUNT) {\n        is_default = 1;\n        char *default_bindaddr[CONFIG_DEFAULT_BINDADDR_COUNT] = CONFIG_DEFAULT_BINDADDR;\n        for (int j = 0; j < CONFIG_DEFAULT_BINDADDR_COUNT; j++) {\n            if (strcmp(server.bindaddr[j], default_bindaddr[j]) != 0) {\n                is_default = 0;\n                break;\n            }\n        }\n    }\n\n    if (is_default) {\n        rewriteConfigMarkAsProcessed(state,name);\n        return;\n    }\n\n    /* Rewrite as bind <addr1> <addr2> ... <addrN> */\n    if (server.bindaddr_count > 0)\n        addresses = sdsjoin(server.bindaddr,server.bindaddr_count,\" \");\n    else\n        addresses = sdsnew(\"\\\"\\\"\");\n    line = sdsnew(name);\n    line = sdscatlen(line, \" \", 1);\n    line = sdscatsds(line, addresses);\n    sdsfree(addresses);\n\n    rewriteConfigRewriteLine(state,name,line,force);\n}\n\n/* Rewrite the loadmodule option. 
*/\nvoid rewriteConfigLoadmoduleOption(struct rewriteConfigState *state) {\n    sds line;\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        /* Internal modules don't have a path and are not part of the configuration file */\n        if (sdslen(module->loadmod->path) == 0) continue;\n\n        line = sdsnew(\"loadmodule \");\n        line = sdscatsds(line, module->loadmod->path);\n        for (int i = 0; i < module->loadmod->argc; i++) {\n            line = sdscatlen(line, \" \", 1);\n            line = sdscatsds(line, module->loadmod->argv[i]->ptr);\n        }\n        rewriteConfigRewriteLine(state,\"loadmodule\",line,1);\n    }\n    dictResetIterator(&di);\n    /* Mark \"loadmodule\" as processed in case modules is empty. */\n    rewriteConfigMarkAsProcessed(state,\"loadmodule\");\n}\n\n/* Glue together the configuration lines in the current configuration\n * rewrite state into a single string, stripping multiple empty lines. */\nsds rewriteConfigGetContentFromState(struct rewriteConfigState *state) {\n    sds content = sdsempty();\n    int j, was_empty = 0;\n\n    for (j = 0; j < state->numlines; j++) {\n        /* Every cluster of empty lines is turned into a single empty line. 
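rewriteConfigGetContentFromState() collapses every run of empty lines into a single empty line while gluing the state's lines together. The same technique, sketched over plain C strings with an illustrative helper name:

```c
#include <stdio.h>

// Join lines with '\n', turning every run of empty lines into one
// empty line via a was_empty flag, as the real glue function does.
static int join_collapsing_blanks(char *out, size_t len,
                                  const char **lines, int n) {
    size_t off = 0;
    int was_empty = 0;
    for (int i = 0; i < n; i++) {
        if (lines[i][0] == '\0') {
            if (was_empty) continue; // skip repeats within a blank run
            was_empty = 1;
        } else {
            was_empty = 0;
        }
        off += snprintf(out + off, len - off, "%s\n", lines[i]);
    }
    return (int)off;
}
```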
*/\n        if (sdslen(state->lines[j]) == 0) {\n            if (was_empty) continue;\n            was_empty = 1;\n        } else {\n            was_empty = 0;\n        }\n        content = sdscatsds(content,state->lines[j]);\n        content = sdscatlen(content,\"\\n\",1);\n    }\n    return content;\n}\n\n/* At the end of the rewrite process the state contains the remaining\n * map between \"option name\" => \"lines in the original config file\".\n * Lines used by the rewrite process were removed by the function\n * rewriteConfigRewriteLine(), all the other lines are \"orphaned\" and\n * should be replaced by empty lines.\n *\n * This function does just this, iterating all the option names and\n * blanking all the lines still associated with them. */\nvoid rewriteConfigRemoveOrphaned(struct rewriteConfigState *state) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, state->option_to_line);\n    while((de = dictNext(&di)) != NULL) {\n        list *l = dictGetVal(de);\n        sds option = dictGetKey(de);\n\n        /* Don't blank lines about options the rewrite process\n         * doesn't understand. */\n        if (dictFind(state->rewritten,option) == NULL) {\n            serverLog(LL_DEBUG,\"Not rewritten option: %s\", option);\n            continue;\n        }\n\n        while(listLength(l)) {\n            listNode *ln = listFirst(l);\n            int linenum = (long) ln->value;\n\n            sdsfree(state->lines[linenum]);\n            state->lines[linenum] = sdsempty();\n            listDelNode(l,ln);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* This function returns a string representation of all the config options\n * marked with DEBUG_CONFIG, which can be used to help with debugging. 
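rewriteConfigGetContentFromState above turns every run of empty lines into a single empty line while gluing the state back together. The same collapsing logic as a standalone sketch over plain `char *` lines instead of sds (illustrative name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy `in` to `out`, keeping only the first line of every cluster of
 * empty lines; returns the number of lines written to `out`. */
static size_t collapse_empty(const char **in, size_t n, const char **out) {
    size_t m = 0;
    int was_empty = 0;
    for (size_t j = 0; j < n; j++) {
        if (in[j][0] == '\0') {
            if (was_empty) continue; /* skip repeats in a cluster */
            was_empty = 1;
        } else {
            was_empty = 0;
        }
        out[m++] = in[j];
    }
    return m;
}
```
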
*/\nsds getConfigDebugInfo(void) {\n    struct rewriteConfigState *state = rewriteConfigCreateState();\n    state->force_write = 1; /* Force the output */\n    state->needs_signature = 0; /* Omit the rewrite signature */\n\n    /* Iterate the configs and \"rewrite\" the ones that have\n     * the debug flag. */\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, configs);\n    while ((de = dictNext(&di)) != NULL) {\n        standardConfig *config = dictGetVal(de);\n        if (!(config->flags & DEBUG_CONFIG)) continue;\n        config->interface.rewrite(config, config->name, state);\n    }\n    dictResetIterator(&di);\n    sds info = rewriteConfigGetContentFromState(state);\n    rewriteConfigReleaseState(state);\n    return info;\n}\n\n/* This function replaces the old configuration file with the new content\n * in an atomic manner.\n *\n * The function returns 0 on success, otherwise -1 is returned and errno\n * is set accordingly. */\nint rewriteConfigOverwriteFile(char *configfile, sds content) {\n    int fd = -1;\n    int retval = -1;\n    char tmp_conffile[PATH_MAX];\n    const char *tmp_suffix = \".XXXXXX\";\n    size_t offset = 0;\n    ssize_t written_bytes = 0;\n    int old_errno;\n\n    int tmp_path_len = snprintf(tmp_conffile, sizeof(tmp_conffile), \"%s%s\", configfile, tmp_suffix);\n    if (tmp_path_len <= 0 || (unsigned int)tmp_path_len >= sizeof(tmp_conffile)) {\n        serverLog(LL_WARNING, \"Config file full path is too long\");\n        errno = ENAMETOOLONG;\n        return retval;\n    }\n\n#if defined(_GNU_SOURCE) && !defined(__HAIKU__)\n    fd = mkostemp(tmp_conffile, O_CLOEXEC);\n#else\n    /* There's a theoretical chance here to leak the FD if a module thread forks & execv in the middle */\n    fd = mkstemp(tmp_conffile);\n#endif\n\n    if (fd == -1) {\n        serverLog(LL_WARNING, \"Could not create tmp config file (%s)\", strerror(errno));\n        return retval;\n    }\n\n    while (offset < sdslen(content)) {\n         
written_bytes = write(fd, content + offset, sdslen(content) - offset);\n         if (written_bytes <= 0) {\n             if (errno == EINTR) continue; /* FD is blocking, no other retryable errors */\n             serverLog(LL_WARNING, \"Failed after writing (%zu) bytes to tmp config file (%s)\", offset, strerror(errno));\n             goto cleanup;\n         }\n         offset+=written_bytes;\n    }\n\n    if (fsync(fd))\n        serverLog(LL_WARNING, \"Could not sync tmp config file to disk (%s)\", strerror(errno));\n    else if (fchmod(fd, 0644 & ~server.umask) == -1)\n        serverLog(LL_WARNING, \"Could not chmod config file (%s)\", strerror(errno));\n    else if (rename(tmp_conffile, configfile) == -1)\n        serverLog(LL_WARNING, \"Could not rename tmp config file (%s)\", strerror(errno));\n    else if (fsyncFileDir(configfile) == -1)\n        serverLog(LL_WARNING, \"Could not sync config file dir (%s)\", strerror(errno));\n    else {\n        retval = 0;\n        serverLog(LL_DEBUG, \"Rewritten config file (%s) successfully\", configfile);\n    }\n\ncleanup:\n    old_errno = errno;\n    close(fd);\n    if (retval) unlink(tmp_conffile);\n    errno = old_errno;\n    return retval;\n}\n\n/* Rewrite the configuration file at \"path\".\n * If the configuration file already exists, we do our best to retain comments\n * and overall structure.\n *\n * Configuration parameters that are at their default value, unless already\n * explicitly included in the old configuration file, are not rewritten.\n * The force_write flag overrides this behavior and forces everything to be\n * written. This is currently only used for testing purposes.\n *\n * On error -1 is returned and errno is set accordingly, otherwise 0. */\nint rewriteConfig(char *path, int force_write) {\n    struct rewriteConfigState *state;\n    sds newcontent;\n    int retval;\n\n    /* Step 1: read the old config into our rewrite state. 
*/\n    if ((state = rewriteConfigReadOldFile(path)) == NULL) return -1;\n    if (force_write) state->force_write = 1;\n\n    /* Step 2: rewrite every single option, replacing or appending it inside\n     * the rewrite state. */\n\n    /* Iterate the configs that are standard */\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, configs);\n    while ((de = dictNext(&di)) != NULL) {\n        standardConfig *config = dictGetVal(de);\n        /* Only rewrite the primary names */\n        if (config->flags & ALIAS_CONFIG) continue;\n        if (config->interface.rewrite) config->interface.rewrite(config, dictGetKey(de), state);\n    }\n    dictResetIterator(&di);\n\n    rewriteConfigUserOption(state);\n    rewriteConfigLoadmoduleOption(state);\n\n    /* Rewrite Sentinel config if in Sentinel mode. */\n    if (server.sentinel_mode) rewriteConfigSentinelOption(state);\n\n    /* Step 3: remove all the orphaned lines in the old file, that is, lines\n     * that were used by a config option and are no longer used, like in case\n     * of multiple \"save\" options or duplicated options. */\n    rewriteConfigRemoveOrphaned(state);\n\n    /* Step 4: generate a new configuration file from the modified state\n     * and write it into the original file. 
*/\n    newcontent = rewriteConfigGetContentFromState(state);\n    retval = rewriteConfigOverwriteFile(server.configfile,newcontent);\n\n    sdsfree(newcontent);\n    rewriteConfigReleaseState(state);\n    return retval;\n}\n\n/*-----------------------------------------------------------------------------\n * Configs that fit one of the major types and require no special handling\n *----------------------------------------------------------------------------*/\n#define LOADBUF_SIZE 256\nstatic char loadbuf[LOADBUF_SIZE];\n\n#define embedCommonConfig(config_name, config_alias, config_flags) \\\n    .name = (config_name), \\\n    .alias = (config_alias), \\\n    .flags = (config_flags),\n\n#define embedConfigInterface(initfn, setfn, getfn, rewritefn, applyfn) .interface = { \\\n    .init = (initfn), \\\n    .set = (setfn), \\\n    .get = (getfn), \\\n    .rewrite = (rewritefn), \\\n    .apply = (applyfn) \\\n},\n\n/* What follows are the generic config types that are supported. To add a new\n * config with one of these types, add it to the standardConfig table with\n * the creation macro for each type.\n *\n * Each type contains the following:\n * * A function defining how to load this type on startup.\n * * A function defining how to update this type on CONFIG SET.\n * * A function defining how to serialize this type on CONFIG GET.\n * * A function defining how to rewrite this type on CONFIG REWRITE.\n * * A macro defining how to create this type.\n */\n\n/* Bool Configs */\nstatic void boolConfigInit(standardConfig *config) {\n    *config->data.yesno.config = config->data.yesno.default_value;\n}\n\nstatic int boolConfigSetInternal(standardConfig *config, int yn, const char **err) {\n    if (yn == -1) {\n        *err = \"argument must be 'yes' or 'no'\";\n        return 0;\n    }\n    if (config->data.yesno.is_valid_fn && !config->data.yesno.is_valid_fn(yn, err))\n        return 0;\n    int prev = config->flags & MODULE_CONFIG ? 
getModuleBoolConfig(config->privdata) : *(config->data.yesno.config);\n    if (prev != yn) {\n        if (config->flags & MODULE_CONFIG) {\n            return setModuleBoolConfig(config->privdata, yn, err);\n        }\n        *(config->data.yesno.config) = yn;\n        return 1;\n    }\n    return (config->flags & VOLATILE_CONFIG) ? 1 : 2;\n\n}\n\nstatic int boolConfigSet(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(argc);\n    int yn = yesnotoi(argv[0]);\n    return boolConfigSetInternal(config, yn, err);\n}\n\nstatic sds boolConfigGet(standardConfig *config) {\n    if (config->flags & MODULE_CONFIG) {\n        return sdsnew(getModuleBoolConfig(config->privdata) ? \"yes\" : \"no\");\n    }\n    return sdsnew(*config->data.yesno.config ? \"yes\" : \"no\");\n}\n\nstatic void boolConfigRewrite(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    int val = config->flags & MODULE_CONFIG ? getModuleBoolConfig(config->privdata) : *(config->data.yesno.config);\n    rewriteConfigYesNoOption(state, name, val, config->data.yesno.default_value);\n}\n\n#define createBoolConfig(name, alias, flags, config_addr, default, is_valid, apply) { \\\n    embedCommonConfig(name, alias, flags) \\\n    embedConfigInterface(boolConfigInit, boolConfigSet, boolConfigGet, boolConfigRewrite, apply) \\\n    .type = BOOL_CONFIG, \\\n    .data.yesno = { \\\n        .config = &(config_addr), \\\n        .default_value = (default), \\\n        .is_valid_fn = (is_valid), \\\n    } \\\n}\n\n/* String Configs */\nstatic void stringConfigInit(standardConfig *config) {\n    *config->data.string.config = (config->data.string.convert_empty_to_null && !config->data.string.default_value) ? 
NULL : zstrdup(config->data.string.default_value);\n}\n\nstatic int stringConfigSetInternal(standardConfig *config, char *str, const char **err) {\n    if (config->data.string.is_valid_fn && !config->data.string.is_valid_fn(str, err))\n        return 0;\n    char *prev = *config->data.string.config;\n    char *new = (config->data.string.convert_empty_to_null && !str[0]) ? NULL : str;\n    if (new != prev && (new == NULL || prev == NULL || strcmp(prev, new))) {\n        *config->data.string.config = new != NULL ? zstrdup(new) : NULL;\n        zfree(prev);\n        return 1;\n    }\n    return (config->flags & VOLATILE_CONFIG) ? 1 : 2;\n}\n\nstatic int stringConfigSet(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(argc);\n    return stringConfigSetInternal(config, argv[0], err);\n}\n\nstatic sds stringConfigGet(standardConfig *config) {\n    return sdsnew(*config->data.string.config ? *config->data.string.config : \"\");\n}\n\nstatic void stringConfigRewrite(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    rewriteConfigStringOption(state, name, *(config->data.string.config), config->data.string.default_value);\n}\n\n/* SDS Configs */\nstatic void sdsConfigInit(standardConfig *config) {\n    *config->data.sds.config = (config->data.sds.convert_empty_to_null && !config->data.sds.default_value) ? NULL : sdsnew(config->data.sds.default_value);\n}\n\nstatic int sdsConfigSet(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(argc);\n    if (config->data.sds.is_valid_fn && !config->data.sds.is_valid_fn(argv[0], err))\n        return 0;\n\n    sds prev = config->flags & MODULE_CONFIG ? getModuleStringConfig(config->privdata) : *config->data.sds.config;\n    sds new = (config->data.sds.convert_empty_to_null && (sdslen(argv[0]) == 0)) ? 
NULL : argv[0];\n\n    /* if prev and new configuration are not equal, set the new one */\n    if (new != prev && (new == NULL || prev == NULL || sdscmp(prev, new))) {\n        /* If MODULE_CONFIG flag is set, then free temporary prev getModuleStringConfig returned.\n         * Otherwise, free the actual previous config value Redis held (Same action, different reasons) */\n        sdsfree(prev);\n\n        if (config->flags & MODULE_CONFIG) {\n            return setModuleStringConfig(config->privdata, new, err);\n        }\n        *config->data.sds.config = new != NULL ? sdsdup(new) : NULL;\n        return 1;\n    }\n    if (config->flags & MODULE_CONFIG && prev) sdsfree(prev);\n    return (config->flags & VOLATILE_CONFIG) ? 1 : 2;\n}\n\nstatic sds sdsConfigGet(standardConfig *config) {\n    sds val = config->flags & MODULE_CONFIG ? getModuleStringConfig(config->privdata) : *config->data.sds.config;\n    if (val) {\n        if (config->flags & MODULE_CONFIG) return val;\n        return sdsdup(val);\n    } else {\n        return sdsnew(\"\");\n    }\n}\n\nstatic void sdsConfigRewrite(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    sds val = config->flags & MODULE_CONFIG ? 
getModuleStringConfig(config->privdata) : *config->data.sds.config;\n    rewriteConfigSdsOption(state, name, val, config->data.sds.default_value);\n    if ((val) && (config->flags & MODULE_CONFIG)) sdsfree(val);\n}\n\n\n#define ALLOW_EMPTY_STRING 0\n#define EMPTY_STRING_IS_NULL 1\n\n#define createStringConfig(name, alias, flags, empty_to_null, config_addr, default, is_valid, apply) { \\\n    embedCommonConfig(name, alias, flags) \\\n    embedConfigInterface(stringConfigInit, stringConfigSet, stringConfigGet, stringConfigRewrite, apply) \\\n    .type = STRING_CONFIG, \\\n    .data.string = { \\\n        .config = &(config_addr), \\\n        .default_value = (default), \\\n        .is_valid_fn = (is_valid), \\\n        .convert_empty_to_null = (empty_to_null), \\\n    } \\\n}\n\n#define createSDSConfig(name, alias, flags, empty_to_null, config_addr, default, is_valid, apply) { \\\n    embedCommonConfig(name, alias, flags) \\\n    embedConfigInterface(sdsConfigInit, sdsConfigSet, sdsConfigGet, sdsConfigRewrite, apply) \\\n    .type = SDS_CONFIG, \\\n    .data.sds = { \\\n        .config = &(config_addr), \\\n        .default_value = (default), \\\n        .is_valid_fn = (is_valid), \\\n        .convert_empty_to_null = (empty_to_null), \\\n    } \\\n}\n\n/* Enum configs */\nstatic void enumConfigInit(standardConfig *config) {\n    *config->data.enumd.config = config->data.enumd.default_value;\n}\n\nstatic int enumConfigSet(standardConfig *config, sds *argv, int argc, const char **err) {\n    int enumval;\n    int bitflags = !!(config->flags & MULTI_ARG_CONFIG);\n    enumval = configEnumGetValue(config->data.enumd.enum_value, argv, argc, bitflags);\n\n    if (enumval == INT_MIN) {\n        sds enumerr = sdsnew(\"argument(s) must be one of the following: \");\n        configEnum *enumNode = config->data.enumd.enum_value;\n        while(enumNode->name != NULL) {\n            enumerr = sdscatlen(enumerr, enumNode->name,\n                                
strlen(enumNode->name));\n            enumerr = sdscatlen(enumerr, \", \", 2);\n            enumNode++;\n        }\n        sdsrange(enumerr,0,-3); /* Remove final \", \". */\n\n        redis_strlcpy(loadbuf, enumerr, LOADBUF_SIZE);\n\n        sdsfree(enumerr);\n        *err = loadbuf;\n        return 0;\n    }\n    if (config->data.enumd.is_valid_fn && !config->data.enumd.is_valid_fn(enumval, err))\n        return 0;\n    int prev = config->flags & MODULE_CONFIG ? getModuleEnumConfig(config->privdata) : *(config->data.enumd.config);\n    if (prev != enumval) {\n        if (config->flags & MODULE_CONFIG)\n            return setModuleEnumConfig(config->privdata, enumval, err);\n        *(config->data.enumd.config) = enumval;\n        return 1;\n    }\n    return (config->flags & VOLATILE_CONFIG) ? 1 : 2;\n}\n\nstatic sds enumConfigGet(standardConfig *config) {\n    int val = config->flags & MODULE_CONFIG ? getModuleEnumConfig(config->privdata) : *(config->data.enumd.config);\n    int bitflags = !!(config->flags & MULTI_ARG_CONFIG);\n    return configEnumGetName(config->data.enumd.enum_value,val,bitflags);\n}\n\nstatic void enumConfigRewrite(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    int val = config->flags & MODULE_CONFIG ? getModuleEnumConfig(config->privdata) : *(config->data.enumd.config);\n    rewriteConfigEnumOption(state, name, val, config);\n}\n\n#define createEnumConfig(name, alias, flags, enum, config_addr, default, is_valid, apply) { \\\n    embedCommonConfig(name, alias, flags) \\\n    embedConfigInterface(enumConfigInit, enumConfigSet, enumConfigGet, enumConfigRewrite, apply) \\\n    .type = ENUM_CONFIG, \\\n    .data.enumd = { \\\n        .config = &(config_addr), \\\n        .default_value = (default), \\\n        .is_valid_fn = (is_valid), \\\n        .enum_value = (enum), \\\n    } \\\n}\n\n/* Gets a 'long long val' and sets it into the union, using a macro to get\n * compile time type check. 
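The "must be one of" error string above is built by appending every enum name followed by ", " and then chopping the final separator, which is what `sdsrange(enumerr, 0, -3)` does. The same trick with a fixed buffer (illustrative helper, not the sds API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Join a NULL-terminated array of names with ", ", then truncate the
 * trailing separator, mirroring the sdsrange(..., 0, -3) chop. */
static void join_enum_names(char *buf, size_t buflen, const char **names) {
    size_t off = 0;
    buf[0] = '\0';
    for (int i = 0; names[i] != NULL; i++)
        off += snprintf(buf + off, buflen - off, "%s, ", names[i]);
    if (off >= 2 && off < buflen)
        buf[off - 2] = '\0'; /* remove final ", " */
}
```
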
*/\nint setNumericType(standardConfig *config, long long val, const char **err) {\n    if (config->data.numeric.numeric_type == NUMERIC_TYPE_INT) {\n        *(config->data.numeric.config.i) = (int) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_UINT) {\n        *(config->data.numeric.config.ui) = (unsigned int) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_LONG) {\n        *(config->data.numeric.config.l) = (long) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_ULONG) {\n        *(config->data.numeric.config.ul) = (unsigned long) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_LONG_LONG) {\n        if (config->flags & MODULE_CONFIG)\n            return setModuleNumericConfig(config->privdata, val, err);\n        else *(config->data.numeric.config.ll) = (long long) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_ULONG_LONG) {\n        *(config->data.numeric.config.ull) = (unsigned long long) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_SIZE_T) {\n        *(config->data.numeric.config.st) = (size_t) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_SSIZE_T) {\n        *(config->data.numeric.config.sst) = (ssize_t) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_OFF_T) {\n        *(config->data.numeric.config.ot) = (off_t) val;\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_TIME_T) {\n        *(config->data.numeric.config.tt) = (time_t) val;\n    }\n    return 1;\n}\n\n/* Gets a 'long long val' and sets it with the value from the union, using a\n * macro to get compile time type check. 
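setNumericType above funnels a single `long long` through a tagged union so each config variable keeps its natural C type. The shape of that dispatch, reduced to three types (all names here are illustrative, not the Redis structs):

```c
#include <assert.h>

typedef enum { NUM_INT, NUM_ULONG, NUM_LL } num_type;

/* One tag plus a union of pointers to the concrete storage, like the
 * data.numeric.config union in standardConfig. */
typedef struct {
    num_type type;
    union { int *i; unsigned long *ul; long long *ll; } p;
} num_config;

/* Store a long long "wire" value into whichever type is configured. */
static void num_set(num_config *c, long long v) {
    switch (c->type) {
    case NUM_INT:   *c->p.i = (int)v; break;
    case NUM_ULONG: *c->p.ul = (unsigned long)v; break;
    case NUM_LL:    *c->p.ll = v; break;
    }
}

/* Read the configured value back as a long long. */
static long long num_get(const num_config *c) {
    switch (c->type) {
    case NUM_INT:   return *c->p.i;
    case NUM_ULONG: return (long long)*c->p.ul;
    case NUM_LL:    return *c->p.ll;
    }
    return 0;
}
```
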
*/\n#define GET_NUMERIC_TYPE(val) \\\n    if (config->data.numeric.numeric_type == NUMERIC_TYPE_INT) { \\\n        val = *(config->data.numeric.config.i); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_UINT) { \\\n        val = *(config->data.numeric.config.ui); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_LONG) { \\\n        val = *(config->data.numeric.config.l); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_ULONG) { \\\n        val = *(config->data.numeric.config.ul); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_LONG_LONG) { \\\n        if (config->flags & MODULE_CONFIG) val = getModuleNumericConfig(config->privdata); \\\n        else val = *(config->data.numeric.config.ll); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_ULONG_LONG) { \\\n        val = *(config->data.numeric.config.ull); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_SIZE_T) { \\\n        val = *(config->data.numeric.config.st); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_SSIZE_T) { \\\n        val = *(config->data.numeric.config.sst); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_OFF_T) { \\\n        val = *(config->data.numeric.config.ot); \\\n    } else if (config->data.numeric.numeric_type == NUMERIC_TYPE_TIME_T) { \\\n        val = *(config->data.numeric.config.tt); \\\n    }\n\n/* Numeric configs */\nstatic void numericConfigInit(standardConfig *config) {\n    setNumericType(config, config->data.numeric.default_value, NULL);\n}\n\nstatic int numericBoundaryCheck(standardConfig *config, long long ll, const char **err) {\n    if (config->data.numeric.numeric_type == NUMERIC_TYPE_ULONG_LONG ||\n        config->data.numeric.numeric_type == NUMERIC_TYPE_UINT ||\n        config->data.numeric.numeric_type == NUMERIC_TYPE_SIZE_T) {\n        /* Boundary check for unsigned types */\n        unsigned long long ull = 
ll;\n        unsigned long long upper_bound = config->data.numeric.upper_bound;\n        unsigned long long lower_bound = config->data.numeric.lower_bound;\n        if (ull > upper_bound || ull < lower_bound) {\n            if (config->data.numeric.flags & OCTAL_CONFIG) {\n                snprintf(loadbuf, LOADBUF_SIZE,\n                    \"argument must be between %llo and %llo inclusive\",\n                    lower_bound,\n                    upper_bound);\n            } else {\n                snprintf(loadbuf, LOADBUF_SIZE,\n                    \"argument must be between %llu and %llu inclusive\",\n                    lower_bound,\n                    upper_bound);\n            }\n            *err = loadbuf;\n            return 0;\n        }\n    } else {\n        /* Boundary check for percentages */\n        if (config->data.numeric.flags & PERCENT_CONFIG && ll < 0) {\n            if (ll < config->data.numeric.lower_bound) {\n                snprintf(loadbuf, LOADBUF_SIZE,\n                         \"percentage argument must be less or equal to %lld\",\n                         -config->data.numeric.lower_bound);\n                *err = loadbuf;\n                return 0;\n            }\n        }\n        /* Boundary check for signed types */\n        else if (ll > config->data.numeric.upper_bound || ll < config->data.numeric.lower_bound) {\n            snprintf(loadbuf, LOADBUF_SIZE,\n                \"argument must be between %lld and %lld inclusive\",\n                config->data.numeric.lower_bound,\n                config->data.numeric.upper_bound);\n            *err = loadbuf;\n            return 0;\n        }\n    }\n    return 1;\n}\n\nstatic int numericParseString(standardConfig *config, sds value, const char **err, long long *res) {\n    /* First try to parse as memory */\n    if (config->data.numeric.flags & MEMORY_CONFIG) {\n        int memerr;\n        *res = memtoull(value, &memerr);\n        if (!memerr)\n            return 1;\n    }\n\n    
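`memtoull`, used above for MEMORY_CONFIG values, accepts unit suffixes where the short form is decimal and the `b` form is binary (e.g. `1k` = 1000, `1kb` = 1024). A hedged sketch of that convention with an abbreviated suffix table (this is not the real Redis helper, just the idea):

```c
#include <assert.h>
#include <stdlib.h>
#include <strings.h>

/* Parse "<number>[k|kb|m|mb|g|gb]" into bytes; sets *err on failure. */
static unsigned long long mem_to_ull(const char *s, int *err) {
    char *end;
    *err = 0;
    unsigned long long v = strtoull(s, &end, 10);
    if (end == s) { *err = 1; return 0; }
    if (*end == '\0') return v;                      /* bare number */
    if (!strcasecmp(end, "k"))  return v * 1000;     /* decimal units */
    if (!strcasecmp(end, "kb")) return v * 1024;     /* binary units */
    if (!strcasecmp(end, "m"))  return v * 1000 * 1000;
    if (!strcasecmp(end, "mb")) return v * 1024 * 1024;
    if (!strcasecmp(end, "g"))  return v * 1000 * 1000 * 1000;
    if (!strcasecmp(end, "gb")) return v * 1024ULL * 1024 * 1024;
    *err = 1; /* unknown suffix */
    return 0;
}
```
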
/* Attempt to parse as percent */\n    if (config->data.numeric.flags & PERCENT_CONFIG &&\n        sdslen(value) > 1 && value[sdslen(value)-1] == '%' &&\n        string2ll(value, sdslen(value)-1, res) &&\n        *res >= 0) {\n            /* We store percentages as negative values */\n            *res = -*res;\n            return 1;\n    }\n\n    /* Attempt to parse as an octal number */\n    if (config->data.numeric.flags & OCTAL_CONFIG) {\n        char *endptr;\n        errno = 0;\n        *res = strtoll(value, &endptr, 8);\n        if (errno == 0 && *endptr == '\\0')\n            return 1; /* No overflow or invalid characters */\n    }\n\n    /* Attempt a simple number (no special flags set) */\n    if (!config->data.numeric.flags && string2ll(value, sdslen(value), res))\n        return 1;\n\n    /* Select appropriate error string */\n    if (config->data.numeric.flags & MEMORY_CONFIG &&\n        config->data.numeric.flags & PERCENT_CONFIG)\n        *err = \"argument must be a memory or percent value\";\n    else if (config->data.numeric.flags & MEMORY_CONFIG)\n        *err = \"argument must be a memory value\";\n    else if (config->data.numeric.flags & OCTAL_CONFIG)\n        *err = \"argument couldn't be parsed as an octal number\";\n    else\n        *err = \"argument couldn't be parsed into an integer\";\n    return 0;\n}\n\nstatic int numericConfigSetInternal(standardConfig *config, long long ll, const char **err) {\n    if (!numericBoundaryCheck(config, ll, err))\n        return 0;\n\n    if (config->data.numeric.is_valid_fn && !config->data.numeric.is_valid_fn(ll, err))\n        return 0;\n\n    long long prev = 0;\n    GET_NUMERIC_TYPE(prev)\n    if (prev != ll) {\n        return setNumericType(config, ll, err);\n    }\n\n    return (config->flags & VOLATILE_CONFIG) ? 
1 : 2;\n}\n\nstatic int numericConfigSet(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(argc);\n    long long ll;\n\n    if (!numericParseString(config, argv[0], err, &ll))\n        return 0;\n\n    return numericConfigSetInternal(config, ll, err);\n}\n\nstatic sds numericConfigGet(standardConfig *config) {\n    char buf[128];\n\n    long long value = 0;\n    GET_NUMERIC_TYPE(value)\n\n    if (config->data.numeric.flags & PERCENT_CONFIG && value < 0) {\n        int len = ll2string(buf, sizeof(buf), -value);\n        buf[len] = '%';\n        buf[len+1] = '\\0';\n    }\n    else if (config->data.numeric.flags & MEMORY_CONFIG) {\n        ull2string(buf, sizeof(buf), value);\n    } else if (config->data.numeric.flags & OCTAL_CONFIG) {\n        snprintf(buf, sizeof(buf), \"%llo\", value);\n    } else {\n        ll2string(buf, sizeof(buf), value);\n    }\n    return sdsnew(buf);\n}\n\nstatic void numericConfigRewrite(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    long long value = 0;\n\n    GET_NUMERIC_TYPE(value)\n\n    if (config->data.numeric.flags & PERCENT_CONFIG && value < 0) {\n        rewriteConfigPercentOption(state, name, -value, config->data.numeric.default_value);\n    } else if (config->data.numeric.flags & MEMORY_CONFIG) {\n        rewriteConfigBytesOption(state, name, value, config->data.numeric.default_value);\n    } else if (config->data.numeric.flags & OCTAL_CONFIG) {\n        rewriteConfigOctalOption(state, name, value, config->data.numeric.default_value);\n    } else {\n        rewriteConfigNumericalOption(state, name, value, config->data.numeric.default_value);\n    }\n}\n\n#define embedCommonNumericalConfig(name, alias, _flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) { \\\n    embedCommonConfig(name, alias, _flags) \\\n    embedConfigInterface(numericConfigInit, numericConfigSet, numericConfigGet, numericConfigRewrite, apply) \\\n    .type = 
NUMERIC_CONFIG, \\\n    .data.numeric = { \\\n        .lower_bound = (lower), \\\n        .upper_bound = (upper), \\\n        .default_value = (default), \\\n        .is_valid_fn = (is_valid), \\\n        .flags = (num_conf_flags),\n\n#define createIntConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_INT, \\\n        .config.i = &(config_addr) \\\n    } \\\n}\n\n#define createUIntConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_UINT, \\\n        .config.ui = &(config_addr) \\\n    } \\\n}\n\n#define createLongConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_LONG, \\\n        .config.l = &(config_addr) \\\n    } \\\n}\n\n#define createULongConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_ULONG, \\\n        .config.ul = &(config_addr) \\\n    } \\\n}\n\n#define createLongLongConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_LONG_LONG, \\\n        .config.ll = &(config_addr) \\\n    } \\\n}\n\n#define createULongLongConfig(name, 
alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_ULONG_LONG, \\\n        .config.ull = &(config_addr) \\\n    } \\\n}\n\n#define createSizeTConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_SIZE_T, \\\n        .config.st = &(config_addr) \\\n    } \\\n}\n\n#define createSSizeTConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_SSIZE_T, \\\n        .config.sst = &(config_addr) \\\n    } \\\n}\n\n#define createTimeTConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_TIME_T, \\\n        .config.tt = &(config_addr) \\\n    } \\\n}\n\n#define createOffTConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n    embedCommonNumericalConfig(name, alias, flags, lower, upper, config_addr, default, num_conf_flags, is_valid, apply) \\\n        .numeric_type = NUMERIC_TYPE_OFF_T, \\\n        .config.ot = &(config_addr) \\\n    } \\\n}\n\n#define createSpecialConfig(name, alias, modifiable, setfn, getfn, rewritefn, applyfn) { \\\n    .type = SPECIAL_CONFIG, \\\n    embedCommonConfig(name, alias, modifiable) \\\n    embedConfigInterface(NULL, setfn, getfn, rewritefn, applyfn) \\\n}\n\nstatic int isValidActiveDefrag(int val, 
const char **err) {\n#ifndef HAVE_DEFRAG\n    if (val) {\n        *err = \"Active defragmentation cannot be enabled: it \"\n               \"requires a Redis server compiled with a modified Jemalloc \"\n               \"like the one shipped by default with the Redis source \"\n               \"distribution\";\n        return 0;\n    }\n#else\n    UNUSED(val);\n    UNUSED(err);\n#endif\n    return 1;\n}\n\nstatic int isValidDBfilename(char *val, const char **err) {\n    if (!pathIsBaseName(val)) {\n        *err = \"dbfilename can't be a path, just a filename\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int isValidAOFfilename(char *val, const char **err) {\n    if (!strcmp(val, \"\")) {\n        *err = \"appendfilename can't be empty\";\n        return 0;\n    }\n    if (!pathIsBaseName(val)) {\n        *err = \"appendfilename can't be a path, just a filename\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int isValidAOFdirname(char *val, const char **err) {\n    if (!strcmp(val, \"\")) {\n        *err = \"appenddirname can't be empty\";\n        return 0;\n    }\n    if (!pathIsBaseName(val)) {\n        *err = \"appenddirname can't be a path, just a dirname\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int isValidShutdownOnSigFlags(int val, const char **err) {\n    /* Individual arguments are validated by createEnumConfig logic.\n     * We just need to ensure valid combinations here. 
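The filename validators above all lean on `pathIsBaseName` to reject anything that looks like a path rather than a bare filename. A sketch of the assumed semantics, rejecting both Unix and Windows separators (illustrative name, not the real helper):

```c
#include <assert.h>
#include <string.h>

/* A value is a bare filename only if it contains no path separator. */
static int is_base_name(const char *p) {
    return strchr(p, '/') == NULL && strchr(p, '\\') == NULL;
}
```
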
*/\n    if (val & SHUTDOWN_NOSAVE && val & SHUTDOWN_SAVE) {\n        *err = \"shutdown options SAVE and NOSAVE can't be used simultaneously\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int updateMemoryTrackingEnabled(const char **err) {\n    int memory_tracking_enabled = server.key_memory_histograms || clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_MEM);\n    if (!server.memory_tracking_enabled && memory_tracking_enabled) {\n        *err = \"memory tracking cannot be enabled at runtime\";\n        return 0;\n    }\n    server.memory_tracking_enabled = memory_tracking_enabled;\n    return 1;\n}\n\nstatic int isValidAnnouncedNodename(char *val,const char **err) {\n    if (!(isValidAuxString(val,sdslen(val)))) {\n        *err = \"Announced human node name contained invalid character\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int isValidAnnouncedHostname(char *val, const char **err) {\n    if (strlen(val) >= NET_HOST_STR_LEN) {\n        *err = \"Hostnames must be less than \"\n            STRINGIFY(NET_HOST_STR_LEN) \" characters\";\n        return 0;\n    }\n\n    int i = 0;\n    char c;\n    while ((c = val[i])) {\n        /* We just validate the character set to make sure that everything\n         * is parsed and handled correctly. 
*/\n        if (!((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')\n            || (c >= '0' && c <= '9') || (c == '-') || (c == '.')))\n        {\n            *err = \"Hostnames may only contain alphanumeric characters, \"\n                \"hyphens or dots\";\n            return 0;\n        }\n        i++;\n    }\n    return 1;\n}\n\n/* Validate specified string is a valid proc-title-template */\nstatic int isValidProcTitleTemplate(char *val, const char **err) {\n    if (!validateProcTitleTemplate(val)) {\n        *err = \"template format is invalid or contains unknown variables\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int updateLocaleCollate(const char **err) {\n    const char *s = setlocale(LC_COLLATE, server.locale_collate);\n    if (s == NULL) {\n        *err = \"Invalid locale name\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int updateProcTitleTemplate(const char **err) {\n    if (redisSetProcTitle(NULL) == C_ERR) {\n        *err = \"failed to set process title\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int updateHZ(const char **err) {\n    UNUSED(err);\n    /* Hz is more a hint from the user, so we accept values out of range\n     * but cap them to reasonable values. */\n    if (server.config_hz < CONFIG_MIN_HZ) server.config_hz = CONFIG_MIN_HZ;\n    if (server.config_hz > CONFIG_MAX_HZ) server.config_hz = CONFIG_MAX_HZ;\n    server.hz = server.config_hz;\n    return 1;\n}\n\nstatic int updatePort(const char **err) {\n    connListener *listener = listenerByType(CONN_TYPE_SOCKET);\n\n    serverAssert(listener != NULL);\n    listener->bindaddr = server.bindaddr;\n    listener->bindaddr_count = server.bindaddr_count;\n    listener->port = server.port;\n    clusterUpdateMyselfAnnouncedPorts();\n    listener->ct = connectionByType(CONN_TYPE_SOCKET);\n    if (changeListener(listener) == C_ERR) {\n        *err = \"Unable to listen on this port. 
Check server logs.\";\n        return 0;\n    }\n\n    return 1;\n}\n\nstatic int updateDefragConfiguration(const char **err) {\n    UNUSED(err);\n    server.active_defrag_configuration_changed = 1;\n    return 1;\n}\n\nstatic int updateJemallocBgThread(const char **err) {\n    UNUSED(err);\n    set_jemalloc_bg_thread(server.jemalloc_bg_thread);\n    return 1;\n}\n\nstatic int updateReplBacklogSize(const char **err) {\n    UNUSED(err);\n    resizeReplicationBacklog();\n    return 1;\n}\n\nstatic int updateMaxmemory(const char **err) {\n    UNUSED(err);\n    if (server.maxmemory) {\n        size_t used = zmalloc_used_memory()-freeMemoryGetNotCountedMemory();\n        if (server.maxmemory < used) {\n            serverLog(LL_WARNING,\"WARNING: the new maxmemory value set via CONFIG SET (%llu) is smaller than the current memory usage (%zu). This will result in key eviction and/or the inability to accept new write commands depending on the maxmemory-policy.\", server.maxmemory, used);\n        }\n        startEvictionTimeProc();\n    }\n    return 1;\n}\n\nstatic int updateGoodSlaves(const char **err) {\n    UNUSED(err);\n    refreshGoodSlavesCount();\n    return 1;\n}\n\nstatic int updateWatchdogPeriod(const char **err) {\n    UNUSED(err);\n    applyWatchdogPeriod();\n    return 1;\n}\n\nstatic int updateAppendonly(const char **err) {\n    /* If loading flag is set, AOF might have been stopped temporarily, and it\n     * will be restarted depending on server.aof_enabled flag after loading is\n     * completed. So, we just need to update 'server.aof_enabled' which has been\n     * updated already before calling this function. */\n    if (server.loading)\n        return 1;\n\n    if (!server.aof_enabled && server.aof_state != AOF_OFF) {\n        stopAppendOnly();\n    } else if (server.aof_enabled && server.aof_state == AOF_OFF) {\n        if (startAppendOnly() == C_ERR) {\n            *err = \"Unable to turn on AOF. 
Check server logs.\";\n            return 0;\n        }\n    }\n    return 1;\n}\n\nstatic int updateAofAutoGCEnabled(const char **err) {\n    UNUSED(err);\n    if (!server.aof_disable_auto_gc) {\n        aofDelHistoryFiles();\n    }\n\n    return 1;\n}\n\nstatic int updateSighandlerEnabled(const char **err) {\n    UNUSED(err);\n    if (server.crashlog_enabled)\n        setupSigSegvHandler();\n    else\n        removeSigSegvHandlers();\n    return 1;\n}\n\nstatic int updateMaxclients(const char **err) {\n    unsigned int new_maxclients = server.maxclients;\n    adjustOpenFilesLimit();\n    if (server.maxclients != new_maxclients) {\n        static char msg[128];\n        snprintf(msg, sizeof(msg), \"The operating system is not able to handle the specified number of clients, try with %d\", server.maxclients);\n        *err = msg;\n        return 0;\n    }\n    size_t newsize = server.maxclients + CONFIG_FDSET_INCR;\n    if ((unsigned int) aeGetSetSize(server.el) < newsize) {\n        if (aeResizeSetSize(server.el, newsize) == AE_ERR ||\n            resizeAllIOThreadsEventLoops(newsize) == AE_ERR)\n        {\n            *err = \"The event loop API used by Redis is not able to handle the specified number of clients\";\n            return 0;\n        }\n    }\n    return 1;\n}\n\nstatic int updateOOMScoreAdj(const char **err) {\n    if (setOOMScoreAdj(-1) == C_ERR) {\n        *err = \"Failed to set current oom_score_adj. Check server logs.\";\n        return 0;\n    }\n\n    return 1;\n}\n\nint updateRequirePass(const char **err) {\n    UNUSED(err);\n    /* The old \"requirepass\" directive just translates to setting\n     * a password to the default user. The only thing we do\n     * additionally is to remember the cleartext password in this\n     * case, for backward compatibility with Redis <= 5. 
*/\n    ACLUpdateDefaultUserPassword(server.requirepass);\n    return 1;\n}\n\nint updateAppendFsync(const char **err) {\n    UNUSED(err);\n    if (server.aof_fsync == AOF_FSYNC_ALWAYS) {\n        /* Wait for all bio jobs related to AOF to drain before proceeding. This prevents a race\n         * between updates to `fsynced_reploff_pending` done in the main thread and those done on the\n         * worker thread. */\n        bioDrainWorker(BIO_AOF_FSYNC);\n    }\n    return 1;\n}\n\n/* applyBind affects both TCP and TLS (if enabled) together */\nstatic int applyBind(const char **err) {\n    connListener *tcp_listener = listenerByType(CONN_TYPE_SOCKET);\n    connListener *tls_listener = listenerByType(CONN_TYPE_TLS);\n\n    serverAssert(tcp_listener != NULL);\n    tcp_listener->bindaddr = server.bindaddr;\n    tcp_listener->bindaddr_count = server.bindaddr_count;\n    tcp_listener->port = server.port;\n    tcp_listener->ct = connectionByType(CONN_TYPE_SOCKET);\n    if (changeListener(tcp_listener) == C_ERR) {\n        *err = \"Failed to bind to specified addresses.\";\n        if (tls_listener)\n            closeListener(tls_listener); /* failed with TLS together */\n        return 0;\n    }\n\n    if (server.tls_port != 0) {\n        serverAssert(tls_listener != NULL);\n        tls_listener->bindaddr = server.bindaddr;\n        tls_listener->bindaddr_count = server.bindaddr_count;\n        tls_listener->port = server.tls_port;\n        tls_listener->ct = connectionByType(CONN_TYPE_TLS);\n        if (changeListener(tls_listener) == C_ERR) {\n            *err = \"Failed to bind to specified addresses.\";\n            closeListener(tcp_listener); /* failed with TCP together */\n            return 0;\n        }\n    }\n\n    return 1;\n}\n\nint updateClusterFlags(const char **err) {\n    UNUSED(err);\n    clusterUpdateMyselfFlags();\n    return 1;\n}\n\nstatic int updateClusterAnnouncedPort(const char **err) {\n    UNUSED(err);\n    
clusterUpdateMyselfAnnouncedPorts();\n    return 1;\n}\n\nstatic int updateClusterIp(const char **err) {\n    UNUSED(err);\n    clusterUpdateMyselfIp();\n    return 1;\n}\n\nint updateClusterHostname(const char **err) {\n    UNUSED(err);\n    clusterUpdateMyselfHostname();\n    return 1;\n}\n\nint updateClusterHumanNodename(const char **err) {\n    UNUSED(err);\n    clusterUpdateMyselfHumanNodename();\n    return 1;\n}\n\nstatic int applyTlsCfg(const char **err) {\n    UNUSED(err);\n\n    /* If TLS is enabled, try to configure OpenSSL. */\n    if ((server.tls_port || server.tls_replication || server.tls_cluster)\n         && connTypeConfigure(connectionTypeTls(), &server.tls_ctx_config, 1) == C_ERR) {\n        *err = \"Unable to update TLS configuration. Check server logs.\";\n        return 0;\n    }\n    return 1;\n}\n\nstatic int applyTLSPort(const char **err) {\n    /* Configure TLS in case it wasn't enabled */\n    if (connTypeConfigure(connectionTypeTls(), &server.tls_ctx_config, 0) == C_ERR) {\n        *err = \"Unable to update TLS configuration. Check server logs.\";\n        return 0;\n    }\n\n    connListener *listener = listenerByType(CONN_TYPE_TLS);\n    serverAssert(listener != NULL);\n    listener->bindaddr = server.bindaddr;\n    listener->bindaddr_count = server.bindaddr_count;\n    listener->port = server.tls_port;\n    listener->ct = connectionByType(CONN_TYPE_TLS);\n    clusterUpdateMyselfAnnouncedPorts();\n    if (changeListener(listener) == C_ERR) {\n        *err = \"Unable to listen on this port. 
Check server logs.\";\n        return 0;\n    }\n\n    return 1;\n}\n\nstatic int setConfigDirOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(config);\n    if (argc != 1) {\n        *err = \"wrong number of arguments\";\n        return 0;\n    }\n    if (chdir(argv[0]) == -1) {\n        *err = strerror(errno);\n        return 0;\n    }\n    return 1;\n}\n\nstatic sds getConfigDirOption(standardConfig *config) {\n    UNUSED(config);\n    char buf[1024];\n\n    if (getcwd(buf,sizeof(buf)) == NULL)\n        buf[0] = '\\0';\n\n    return sdsnew(buf);\n}\n\nstatic int setConfigSaveOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(config);\n    int j;\n\n    /* Special case: treat single arg \"\" as zero args indicating empty save configuration */\n    if (argc == 1 && !strcasecmp(argv[0],\"\")) {\n        resetServerSaveParams();\n        argc = 0;\n    }\n\n    /* Perform sanity check before setting the new config:\n    * - Even number of args\n    * - Seconds >= 1, changes >= 0 */\n    if (argc & 1) {\n        *err = \"Invalid save parameters\";\n        return 0;\n    }\n    for (j = 0; j < argc; j++) {\n        char *eptr;\n        long val;\n\n        val = strtoll(argv[j], &eptr, 10);\n        if (eptr[0] != '\\0' ||\n            ((j & 1) == 0 && val < 1) ||\n            ((j & 1) == 1 && val < 0)) {\n            *err = \"Invalid save parameters\";\n            return 0;\n        }\n    }\n    /* Finally set the new config */\n    if (!reading_config_file) {\n        resetServerSaveParams();\n    } else {\n        /* We don't reset save params before loading, because if they're not part\n         * of the file the defaults should be used.\n         */\n        static int save_loaded = 0;\n        if (!save_loaded) {\n            save_loaded = 1;\n            resetServerSaveParams();\n        }\n    }\n\n    for (j = 0; j < argc; j += 2) {\n        time_t seconds;\n        int changes;\n\n        
seconds = strtoll(argv[j],NULL,10);\n        changes = strtoll(argv[j+1],NULL,10);\n        appendServerSaveParams(seconds, changes);\n    }\n\n    return 1;\n}\n\nstatic sds getConfigSaveOption(standardConfig *config) {\n    UNUSED(config);\n    sds buf = sdsempty();\n    int j;\n\n    for (j = 0; j < server.saveparamslen; j++) {\n        buf = sdscatprintf(buf,\"%jd %d\",\n                           (intmax_t)server.saveparams[j].seconds,\n                           server.saveparams[j].changes);\n        if (j != server.saveparamslen-1)\n            buf = sdscatlen(buf,\" \",1);\n    }\n\n    return buf;\n}\n\nstatic int setConfigClientOutputBufferLimitOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(config);\n    return updateClientOutputBufferLimit(argv, argc, err);\n}\n\nstatic sds getConfigClientOutputBufferLimitOption(standardConfig *config) {\n    UNUSED(config);\n    sds buf = sdsempty();\n    int j;\n    for (j = 0; j < CLIENT_TYPE_OBUF_COUNT; j++) {\n        buf = sdscatprintf(buf,\"%s %llu %llu %ld\",\n                           getClientTypeName(j),\n                           server.client_obuf_limits[j].hard_limit_bytes,\n                           server.client_obuf_limits[j].soft_limit_bytes,\n                           (long) server.client_obuf_limits[j].soft_limit_seconds);\n        if (j != CLIENT_TYPE_OBUF_COUNT-1)\n            buf = sdscatlen(buf,\" \",1);\n    }\n    return buf;\n}\n\n/* Parse an array of CONFIG_OOM_COUNT sds strings, validate and populate\n * server.oom_score_adj_values if valid.\n */\nstatic int setConfigOOMScoreAdjValuesOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    int i;\n    int values[CONFIG_OOM_COUNT];\n    int change = 0;\n    UNUSED(config);\n\n    if (argc != CONFIG_OOM_COUNT) {\n        *err = \"wrong number of arguments\";\n        return 0;\n    }\n\n    for (i = 0; i < CONFIG_OOM_COUNT; i++) {\n        char *eptr;\n        long long val = 
strtoll(argv[i], &eptr, 10);\n\n        if (*eptr != '\\0' || val < -2000 || val > 2000) {\n            if (err) *err = \"Invalid oom-score-adj-values, elements must be between -2000 and 2000.\";\n            return 0;\n        }\n\n        values[i] = val;\n    }\n\n    /* Verify that the values make sense. If they don't, emit a warning but\n     * keep the configuration, which may still be valid for privileged processes.\n     */\n\n    if (values[CONFIG_OOM_REPLICA] < values[CONFIG_OOM_MASTER] ||\n        values[CONFIG_OOM_BGCHILD] < values[CONFIG_OOM_REPLICA])\n    {\n        serverLog(LL_WARNING,\n                  \"The oom-score-adj-values configuration may not work for non-privileged processes! \"\n                  \"Please consult the documentation.\");\n    }\n\n    for (i = 0; i < CONFIG_OOM_COUNT; i++) {\n        if (server.oom_score_adj_values[i] != values[i]) {\n            server.oom_score_adj_values[i] = values[i];\n            change = 1;\n        }\n    }\n\n    return change ? 1 : 2;\n}\n\nstatic sds getConfigOOMScoreAdjValuesOption(standardConfig *config) {\n    UNUSED(config);\n    sds buf = sdsempty();\n    int j;\n\n    for (j = 0; j < CONFIG_OOM_COUNT; j++) {\n        buf = sdscatprintf(buf,\"%d\", server.oom_score_adj_values[j]);\n        if (j != CONFIG_OOM_COUNT-1)\n            buf = sdscatlen(buf,\" \",1);\n    }\n\n    return buf;\n}\n\nstatic int setConfigNotifyKeyspaceEventsOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(config);\n    if (argc != 1) {\n        *err = \"wrong number of arguments\";\n        return 0;\n    }\n    int flags = keyspaceEventsStringToFlags(argv[0]);\n    if (flags == -1) {\n        *err = \"Invalid event class character. 
Use 'Ag$lshzxeKEtmdnocrSTIV'.\";\n        return 0;\n    }\n    server.notify_keyspace_events = flags;\n    return 1;\n}\n\nstatic sds getConfigNotifyKeyspaceEventsOption(standardConfig *config) {\n    UNUSED(config);\n    return keyspaceEventsFlagsToString(server.notify_keyspace_events);\n}\n\nstatic int setConfigBindOption(standardConfig *config, sds* argv, int argc, const char **err) {\n    UNUSED(config);\n    int j;\n\n    if (argc > CONFIG_BINDADDR_MAX) {\n        *err = \"Too many bind addresses specified.\";\n        return 0;\n    }\n\n    /* A single empty argument is treated as a zero bindaddr count */\n    if (argc == 1 && sdslen(argv[0]) == 0) argc = 0;\n\n    /* Free old bind addresses */\n    for (j = 0; j < server.bindaddr_count; j++) {\n        zfree(server.bindaddr[j]);\n    }\n    for (j = 0; j < argc; j++)\n        server.bindaddr[j] = zstrdup(argv[j]);\n    server.bindaddr_count = argc;\n\n    return 1;\n}\n\nstatic int setConfigReplicaOfOption(standardConfig *config, sds* argv, int argc, const char **err) {\n    UNUSED(config);\n\n    if (argc != 2) {\n        *err = \"wrong number of arguments\";\n        return 0;\n    }\n\n    sdsfree(server.masterhost);\n    server.masterhost = NULL;\n    if (!strcasecmp(argv[0], \"no\") && !strcasecmp(argv[1], \"one\")) {\n        return 1;\n    }\n    char *ptr;\n    server.masterport = strtol(argv[1], &ptr, 10);\n    if (server.masterport < 0 || server.masterport > 65535 || *ptr != '\\0') {\n        *err = \"Invalid master port\";\n        return 0;\n    }\n    server.masterhost = sdsnew(argv[0]);\n    server.repl_state = REPL_STATE_CONNECT;\n    return 1;\n}\n\nstatic sds getConfigBindOption(standardConfig *config) {\n    UNUSED(config);\n    return sdsjoin(server.bindaddr,server.bindaddr_count,\" \");\n}\n\nstatic sds getConfigReplicaOfOption(standardConfig *config) {\n    UNUSED(config);\n    char buf[256];\n    if (server.masterhost)\n        snprintf(buf,sizeof(buf),\"%s %d\",\n                 
server.masterhost, server.masterport);\n    else\n        buf[0] = '\\0';\n    return sdsnew(buf);\n}\n\nint allowProtectedAction(int config, client *c) {\n    return (config == PROTECTED_ACTION_ALLOWED_YES) ||\n           (config == PROTECTED_ACTION_ALLOWED_LOCAL && (connIsLocal(c->conn) == 1));\n}\n\n\nstatic int setConfigLatencyTrackingInfoPercentilesOutputOption(standardConfig *config, sds *argv, int argc, const char **err) {\n    UNUSED(config);\n    zfree(server.latency_tracking_info_percentiles);\n    server.latency_tracking_info_percentiles = NULL;\n    server.latency_tracking_info_percentiles_len = argc;\n\n    /* Special case: treat single arg \"\" as zero args indicating empty percentile configuration */\n    if (argc == 1 && sdslen(argv[0]) == 0)\n        server.latency_tracking_info_percentiles_len = 0;\n    else\n        server.latency_tracking_info_percentiles = zmalloc(sizeof(double)*argc);\n\n    for (int j = 0; j < server.latency_tracking_info_percentiles_len; j++) {\n        double percentile;\n        if (!string2d(argv[j], sdslen(argv[j]), &percentile)) {\n            *err = \"Invalid latency-tracking-info-percentiles parameters\";\n            goto configerr;\n        }\n        if (percentile > 100.0 || percentile < 0.0) {\n            *err = \"latency-tracking-info-percentiles parameters should sit between [0.0,100.0]\";\n            goto configerr;\n        }\n        server.latency_tracking_info_percentiles[j] = percentile;\n    }\n\n    return 1;\nconfigerr:\n    zfree(server.latency_tracking_info_percentiles);\n    server.latency_tracking_info_percentiles = NULL;\n    server.latency_tracking_info_percentiles_len = 0;\n    return 0;\n}\n\nstatic sds getConfigLatencyTrackingInfoPercentilesOutputOption(standardConfig *config) {\n    UNUSED(config);\n    sds buf = sdsempty();\n    for (int j = 0; j < server.latency_tracking_info_percentiles_len; j++) {\n        char fbuf[128];\n        size_t len = snprintf(fbuf, sizeof(fbuf), \"%f\", 
server.latency_tracking_info_percentiles[j]);\n        len = trimDoubleString(fbuf, len);\n        buf = sdscatlen(buf, fbuf, len);\n        if (j != server.latency_tracking_info_percentiles_len-1)\n            buf = sdscatlen(buf,\" \",1);\n    }\n    return buf;\n}\n\n/* Rewrite the latency-tracking-info-percentiles option. */\nvoid rewriteConfigLatencyTrackingInfoPercentilesOutputOption(standardConfig *config, const char *name, struct rewriteConfigState *state) {\n    UNUSED(config);\n    sds line = sdsnew(name);\n    /* Rewrite latency-tracking-info-percentiles parameters,\n     * or an empty 'latency-tracking-info-percentiles \"\"' line to avoid the\n     * defaults from being used.\n     */\n    if (!server.latency_tracking_info_percentiles_len) {\n        line = sdscat(line,\" \\\"\\\"\");\n    } else {\n        for (int j = 0; j < server.latency_tracking_info_percentiles_len; j++) {\n            char fbuf[128];\n            size_t len = snprintf(fbuf, sizeof(fbuf), \" %f\", server.latency_tracking_info_percentiles[j]);\n            len = trimDoubleString(fbuf, len);\n            line = sdscatlen(line, fbuf, len);\n        }\n    }\n    rewriteConfigRewriteLine(state,name,line,1);\n}\n\nstatic int applyClientMaxMemoryUsage(const char **err) {\n    UNUSED(err);\n    listIter li;\n    listNode *ln;\n\n    /* server.client_mem_usage_buckets is an indication that the previous config\n     * was non-zero, in which case we can exit and no apply is needed. */\n    if(server.maxmemory_clients !=0 && server.client_mem_usage_buckets)\n        return 1;\n    if (server.maxmemory_clients != 0)\n        initServerClientMemUsageBuckets();\n\n    pauseAllIOThreads();\n    /* When client eviction is enabled update memory buckets for all clients.\n     * When disabled, clear that data structure. 
*/\n    listRewind(server.clients, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n        if (server.maxmemory_clients == 0) {\n            /* Remove client from memory usage bucket. */\n            removeClientFromMemUsageBucket(c, 0);\n        } else {\n            /* Update each client(s) memory usage and add to appropriate bucket. */\n            updateClientMemUsageAndBucket(c);\n        }\n    }\n    resumeAllIOThreads();\n\n    if (server.maxmemory_clients == 0)\n        freeServerClientMemUsageBuckets();\n    return 1;\n}\n\nstandardConfig static_configs[] = {\n    /* Bool configs */\n    createBoolConfig(\"rdbchecksum\", NULL, IMMUTABLE_CONFIG, server.rdb_checksum, 1, NULL, NULL),\n    createBoolConfig(\"daemonize\", NULL, IMMUTABLE_CONFIG, server.daemonize, 0, NULL, NULL),\n    createBoolConfig(\"always-show-logo\", NULL, IMMUTABLE_CONFIG, server.always_show_logo, 0, NULL, NULL),\n    createBoolConfig(\"protected-mode\", NULL, MODIFIABLE_CONFIG, server.protected_mode, 1, NULL, NULL),\n    createBoolConfig(\"rdbcompression\", NULL, MODIFIABLE_CONFIG, server.rdb_compression, 1, NULL, NULL),\n    createBoolConfig(\"rdb-del-sync-files\", NULL, MODIFIABLE_CONFIG, server.rdb_del_sync_files, 0, NULL, NULL),\n    createBoolConfig(\"activerehashing\", NULL, MODIFIABLE_CONFIG, server.activerehashing, 1, NULL, NULL),\n    createBoolConfig(\"stop-writes-on-bgsave-error\", NULL, MODIFIABLE_CONFIG, server.stop_writes_on_bgsave_err, 1, NULL, NULL),\n    createBoolConfig(\"set-proc-title\", NULL, IMMUTABLE_CONFIG, server.set_proc_title, 1, NULL, NULL), /* Should setproctitle be used? 
*/\n    createBoolConfig(\"dynamic-hz\", NULL, MODIFIABLE_CONFIG, server.dynamic_hz, 1, NULL, NULL), /* Adapt hz to # of clients.*/\n    createBoolConfig(\"lazyfree-lazy-eviction\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.lazyfree_lazy_eviction, 0, NULL, NULL),\n    createBoolConfig(\"lazyfree-lazy-expire\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.lazyfree_lazy_expire, 0, NULL, NULL),\n    createBoolConfig(\"lazyfree-lazy-server-del\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.lazyfree_lazy_server_del, 0, NULL, NULL),\n    createBoolConfig(\"lazyfree-lazy-user-del\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.lazyfree_lazy_user_del , 0, NULL, NULL),\n    createBoolConfig(\"lazyfree-lazy-user-flush\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.lazyfree_lazy_user_flush , 0, NULL, NULL),\n    createBoolConfig(\"repl-disable-tcp-nodelay\", NULL, MODIFIABLE_CONFIG, server.repl_disable_tcp_nodelay, 0, NULL, NULL),\n    createBoolConfig(\"repl-diskless-sync\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.repl_diskless_sync, 1, NULL, NULL),\n    createBoolConfig(\"repl-rdb-channel\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, server.repl_rdb_channel, 1, NULL, NULL),\n    createBoolConfig(\"aof-rewrite-incremental-fsync\", NULL, MODIFIABLE_CONFIG, server.aof_rewrite_incremental_fsync, 1, NULL, NULL),\n    createBoolConfig(\"no-appendfsync-on-rewrite\", NULL, MODIFIABLE_CONFIG, server.aof_no_fsync_on_rewrite, 0, NULL, NULL),\n    createBoolConfig(\"cluster-require-full-coverage\", NULL, MODIFIABLE_CONFIG, server.cluster_require_full_coverage, 1, NULL, NULL),\n    createBoolConfig(\"rdb-save-incremental-fsync\", NULL, MODIFIABLE_CONFIG, server.rdb_save_incremental_fsync, 1, NULL, NULL),\n    createBoolConfig(\"aof-load-truncated\", NULL, MODIFIABLE_CONFIG, server.aof_load_truncated, 1, NULL, NULL),\n    createBoolConfig(\"aof-use-rdb-preamble\", NULL, MODIFIABLE_CONFIG, server.aof_use_rdb_preamble, 1, NULL, NULL),\n    
createBoolConfig(\"aof-timestamp-enabled\", NULL, MODIFIABLE_CONFIG, server.aof_timestamp_enabled, 0, NULL, NULL),\n    createBoolConfig(\"cluster-replica-no-failover\", \"cluster-slave-no-failover\", MODIFIABLE_CONFIG, server.cluster_slave_no_failover, 0, NULL, updateClusterFlags), /* Failover by default. */\n    createBoolConfig(\"replica-lazy-flush\", \"slave-lazy-flush\", MODIFIABLE_CONFIG, server.repl_slave_lazy_flush, 0, NULL, NULL),\n    createBoolConfig(\"replica-serve-stale-data\", \"slave-serve-stale-data\", MODIFIABLE_CONFIG, server.repl_serve_stale_data, 1, NULL, NULL),\n    createBoolConfig(\"replica-read-only\", \"slave-read-only\", DEBUG_CONFIG | MODIFIABLE_CONFIG, server.repl_slave_ro, 1, NULL, NULL),\n    createBoolConfig(\"replica-ignore-maxmemory\", \"slave-ignore-maxmemory\", MODIFIABLE_CONFIG, server.repl_slave_ignore_maxmemory, 1, NULL, NULL),\n    createBoolConfig(\"jemalloc-bg-thread\", NULL, MODIFIABLE_CONFIG, server.jemalloc_bg_thread, 1, NULL, updateJemallocBgThread),\n    createBoolConfig(\"activedefrag\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, server.active_defrag_enabled, 0, isValidActiveDefrag, NULL),\n    createBoolConfig(\"syslog-enabled\", NULL, IMMUTABLE_CONFIG, server.syslog_enabled, 0, NULL, NULL),\n    createBoolConfig(\"cluster-enabled\", NULL, IMMUTABLE_CONFIG, server.cluster_enabled, 0, NULL, NULL),\n    createBoolConfig(\"appendonly\", NULL, MODIFIABLE_CONFIG, server.aof_enabled, 0, NULL, updateAppendonly),\n    createBoolConfig(\"cluster-allow-reads-when-down\", NULL, MODIFIABLE_CONFIG, server.cluster_allow_reads_when_down, 0, NULL, NULL),\n    createBoolConfig(\"cluster-allow-pubsubshard-when-down\", NULL, MODIFIABLE_CONFIG, server.cluster_allow_pubsubshard_when_down, 1, NULL, NULL),\n    createBoolConfig(\"crash-log-enabled\", NULL, MODIFIABLE_CONFIG, server.crashlog_enabled, 1, NULL, updateSighandlerEnabled),\n    createBoolConfig(\"crash-memcheck-enabled\", NULL, MODIFIABLE_CONFIG, server.memcheck_enabled, 1, NULL, 
NULL),\n    createBoolConfig(\"use-exit-on-panic\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, server.use_exit_on_panic, 0, NULL, NULL),\n    createBoolConfig(\"disable-thp\", NULL, IMMUTABLE_CONFIG, server.disable_thp, 1, NULL, NULL),\n    createBoolConfig(\"cluster-allow-replica-migration\", NULL, MODIFIABLE_CONFIG, server.cluster_allow_replica_migration, 1, NULL, NULL),\n    createBoolConfig(\"replica-announced\", NULL, MODIFIABLE_CONFIG, server.replica_announced, 1, NULL, NULL),\n    createBoolConfig(\"latency-tracking\", NULL, MODIFIABLE_CONFIG, server.latency_tracking_enabled, 1, NULL, NULL),\n    createBoolConfig(\"aof-disable-auto-gc\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, server.aof_disable_auto_gc, 0, NULL, updateAofAutoGCEnabled),\n    createBoolConfig(\"replica-ignore-disk-write-errors\", NULL, MODIFIABLE_CONFIG, server.repl_ignore_disk_write_error, 0, NULL, NULL),\n    createBoolConfig(\"hide-user-data-from-log\", NULL, MODIFIABLE_CONFIG, server.hide_user_data_from_log, 0, NULL, NULL),\n    createBoolConfig(\"lazyexpire-nested-arbitrary-keys\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, server.lazyexpire_nested_arbitrary_keys, 1, NULL, NULL),\n    createEnumConfig(\"cluster-slot-stats-enabled\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, cluster_slot_stats_enum, server.cluster_slot_stats_enabled, 0, NULL, updateMemoryTrackingEnabled),\n    createBoolConfig(\"lua-enable-deprecated-api\", NULL, IMMUTABLE_CONFIG | HIDDEN_CONFIG, server.lua_enable_deprecated_api, 0, NULL, NULL),\n    createBoolConfig(\"key-memory-histograms\", NULL, MODIFIABLE_CONFIG, server.key_memory_histograms, 0, NULL, updateMemoryTrackingEnabled),\n\n    /* String Configs */\n    createStringConfig(\"aclfile\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.acl_filename, \"\", NULL, NULL),\n    createStringConfig(\"unixsocket\", NULL, IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, server.unixsocket, NULL, NULL, NULL),\n    createStringConfig(\"pidfile\", NULL, IMMUTABLE_CONFIG, 
EMPTY_STRING_IS_NULL, server.pidfile, NULL, NULL, NULL),\n    createStringConfig(\"replica-announce-ip\", \"slave-announce-ip\", MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.slave_announce_ip, NULL, NULL, NULL),\n    createStringConfig(\"masteruser\", NULL, MODIFIABLE_CONFIG | SENSITIVE_CONFIG, EMPTY_STRING_IS_NULL, server.masteruser, NULL, NULL, NULL),\n    createStringConfig(\"cluster-announce-ip\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.cluster_announce_ip, NULL, NULL, updateClusterIp),\n    createStringConfig(\"cluster-config-file\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.cluster_configfile, \"nodes.conf\", NULL, NULL),\n    createStringConfig(\"cluster-announce-hostname\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.cluster_announce_hostname, NULL, isValidAnnouncedHostname, updateClusterHostname),\n    createStringConfig(\"cluster-announce-human-nodename\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.cluster_announce_human_nodename, NULL, isValidAnnouncedNodename, updateClusterHumanNodename),\n    createStringConfig(\"syslog-ident\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.syslog_ident, \"redis\", NULL, NULL),\n    createStringConfig(\"dbfilename\", NULL, MODIFIABLE_CONFIG | PROTECTED_CONFIG, ALLOW_EMPTY_STRING, server.rdb_filename, \"dump.rdb\", isValidDBfilename, NULL),\n    createStringConfig(\"appendfilename\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.aof_filename, \"appendonly.aof\", isValidAOFfilename, NULL),\n    createStringConfig(\"appenddirname\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.aof_dirname, \"appendonlydir\", isValidAOFdirname, NULL),\n    createStringConfig(\"server-cpulist\", \"server_cpulist\", IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, server.server_cpulist, NULL, NULL, NULL),\n    createStringConfig(\"bio-cpulist\", \"bio_cpulist\", IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, server.bio_cpulist, NULL, NULL, NULL),\n    createStringConfig(\"aof-rewrite-cpulist\", 
\"aof_rewrite_cpulist\", IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, server.aof_rewrite_cpulist, NULL, NULL, NULL),\n    createStringConfig(\"bgsave-cpulist\", \"bgsave_cpulist\", IMMUTABLE_CONFIG, EMPTY_STRING_IS_NULL, server.bgsave_cpulist, NULL, NULL, NULL),\n    createStringConfig(\"ignore-warnings\", NULL, MODIFIABLE_CONFIG, ALLOW_EMPTY_STRING, server.ignore_warnings, \"\", NULL, NULL),\n    createStringConfig(\"proc-title-template\", NULL, MODIFIABLE_CONFIG, ALLOW_EMPTY_STRING, server.proc_title_template, CONFIG_DEFAULT_PROC_TITLE_TEMPLATE, isValidProcTitleTemplate, updateProcTitleTemplate),\n    createStringConfig(\"bind-source-addr\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.bind_source_addr, NULL, NULL, NULL),\n    createStringConfig(\"logfile\", NULL, IMMUTABLE_CONFIG, ALLOW_EMPTY_STRING, server.logfile, \"\", NULL, NULL),\n#ifdef LOG_REQ_RES\n    createStringConfig(\"req-res-logfile\", NULL, IMMUTABLE_CONFIG | HIDDEN_CONFIG, EMPTY_STRING_IS_NULL, server.req_res_logfile, NULL, NULL, NULL),\n#endif\n    createStringConfig(\"locale-collate\", NULL, MODIFIABLE_CONFIG, ALLOW_EMPTY_STRING, server.locale_collate, \"\", NULL, updateLocaleCollate),\n\n    /* SDS Configs */\n    createSDSConfig(\"masterauth\", NULL, MODIFIABLE_CONFIG | SENSITIVE_CONFIG, EMPTY_STRING_IS_NULL, server.masterauth, NULL, NULL, NULL),\n    createSDSConfig(\"requirepass\", NULL, MODIFIABLE_CONFIG | SENSITIVE_CONFIG, EMPTY_STRING_IS_NULL, server.requirepass, NULL, NULL, updateRequirePass),\n\n    /* Enum Configs */\n    createEnumConfig(\"supervised\", NULL, IMMUTABLE_CONFIG, supervised_mode_enum, server.supervised_mode, SUPERVISED_NONE, NULL, NULL),\n    createEnumConfig(\"syslog-facility\", NULL, IMMUTABLE_CONFIG, syslog_facility_enum, server.syslog_facility, LOG_LOCAL0, NULL, NULL),\n    createEnumConfig(\"repl-diskless-load\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG | DENY_LOADING_CONFIG, repl_diskless_load_enum, server.repl_diskless_load, REPL_DISKLESS_LOAD_DISABLED, NULL, 
NULL),\n    createEnumConfig(\"loglevel\", NULL, MODIFIABLE_CONFIG, loglevel_enum, server.verbosity, LL_NOTICE, NULL, NULL),\n    createEnumConfig(\"maxmemory-policy\", NULL, MODIFIABLE_CONFIG, maxmemory_policy_enum, server.maxmemory_policy, MAXMEMORY_NO_EVICTION, NULL, NULL),\n    createEnumConfig(\"appendfsync\", NULL, MODIFIABLE_CONFIG, aof_fsync_enum, server.aof_fsync, AOF_FSYNC_EVERYSEC, NULL, updateAppendFsync),\n    createEnumConfig(\"oom-score-adj\", NULL, MODIFIABLE_CONFIG, oom_score_adj_enum, server.oom_score_adj, OOM_SCORE_ADJ_NO, NULL, updateOOMScoreAdj),\n    createEnumConfig(\"acl-pubsub-default\", NULL, MODIFIABLE_CONFIG, acl_pubsub_default_enum, server.acl_pubsub_default, 0, NULL, NULL),\n    createEnumConfig(\"sanitize-dump-payload\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, sanitize_dump_payload_enum, server.sanitize_dump_payload, SANITIZE_DUMP_NO, NULL, NULL),\n    createEnumConfig(\"enable-protected-configs\", NULL, IMMUTABLE_CONFIG, protected_action_enum, server.enable_protected_configs, PROTECTED_ACTION_ALLOWED_NO, NULL, NULL),\n    createEnumConfig(\"enable-debug-command\", NULL, IMMUTABLE_CONFIG, protected_action_enum, server.enable_debug_cmd, PROTECTED_ACTION_ALLOWED_NO, NULL, NULL),\n    createEnumConfig(\"enable-module-command\", NULL, IMMUTABLE_CONFIG, protected_action_enum, server.enable_module_cmd, PROTECTED_ACTION_ALLOWED_NO, NULL, NULL),\n    createEnumConfig(\"cluster-preferred-endpoint-type\", NULL, MODIFIABLE_CONFIG, cluster_preferred_endpoint_type_enum, server.cluster_preferred_endpoint_type, CLUSTER_ENDPOINT_TYPE_IP, NULL, NULL),\n    createEnumConfig(\"propagation-error-behavior\", NULL, MODIFIABLE_CONFIG, propagation_error_behavior_enum, server.propagation_error_behavior, PROPAGATION_ERR_BEHAVIOR_IGNORE, NULL, NULL),\n    createEnumConfig(\"shutdown-on-sigint\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, shutdown_on_sig_enum, server.shutdown_on_sigint, 0, isValidShutdownOnSigFlags, NULL),\n    
createEnumConfig(\"shutdown-on-sigterm\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, shutdown_on_sig_enum, server.shutdown_on_sigterm, 0, isValidShutdownOnSigFlags, NULL),\n\n    /* Integer configs */\n    createIntConfig(\"databases\", NULL, IMMUTABLE_CONFIG, 1, INT_MAX, server.dbnum, 16, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"port\", NULL, MODIFIABLE_CONFIG, 0, 65535, server.port, 6379, INTEGER_CONFIG, NULL, updatePort), /* TCP port. */\n    createIntConfig(\"io-threads\", NULL, DEBUG_CONFIG | IMMUTABLE_CONFIG, 1, 128, server.io_threads_num, 1, INTEGER_CONFIG, NULL, NULL), /* Single threaded by default */\n    createIntConfig(\"prefetch-batch-max-size\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 0, PREFETCH_BATCH_MAX_SIZE, server.prefetch_batch_max_size, 16, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"auto-aof-rewrite-percentage\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.aof_rewrite_perc, 100, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"cluster-replica-validity-factor\", \"cluster-slave-validity-factor\", MODIFIABLE_CONFIG, 0, INT_MAX, server.cluster_slave_validity_factor, 10, INTEGER_CONFIG, NULL, NULL), /* Slave max data age factor. 
*/\n    createIntConfig(\"list-max-listpack-size\", \"list-max-ziplist-size\", MODIFIABLE_CONFIG, INT_MIN, INT_MAX, server.list_max_listpack_size, -2, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"tcp-keepalive\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.tcpkeepalive, 300, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"cluster-migration-barrier\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.cluster_migration_barrier, 1, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"active-defrag-cycle-min\", NULL, MODIFIABLE_CONFIG, 1, 99, server.active_defrag_cycle_min, 1, INTEGER_CONFIG, NULL, updateDefragConfiguration), /* Default: 1% CPU min (at lower threshold) */\n    createIntConfig(\"active-defrag-cycle-max\", NULL, MODIFIABLE_CONFIG, 1, 99, server.active_defrag_cycle_max, 25, INTEGER_CONFIG, NULL, updateDefragConfiguration), /* Default: 25% CPU max (at upper threshold) */\n    createIntConfig(\"active-defrag-threshold-lower\", NULL, MODIFIABLE_CONFIG, 0, 1000, server.active_defrag_threshold_lower, 10, INTEGER_CONFIG, NULL, NULL), /* Default: don't defrag when fragmentation is below 10% */\n    createIntConfig(\"active-defrag-threshold-upper\", NULL, MODIFIABLE_CONFIG, 0, 1000, server.active_defrag_threshold_upper, 100, INTEGER_CONFIG, NULL, updateDefragConfiguration), /* Default: maximum defrag force at 100% fragmentation */\n    createIntConfig(\"lfu-log-factor\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.lfu_log_factor, 10, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"lfu-decay-time\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.lfu_decay_time, 1, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"replica-priority\", \"slave-priority\", MODIFIABLE_CONFIG, 0, INT_MAX, server.slave_priority, 100, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"repl-diskless-sync-delay\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.repl_diskless_sync_delay, 5, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"maxmemory-samples\", NULL, MODIFIABLE_CONFIG, 
1, 64, server.maxmemory_samples, 5, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"maxmemory-eviction-tenacity\", NULL, MODIFIABLE_CONFIG, 0, 100, server.maxmemory_eviction_tenacity, 10, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"timeout\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.maxidletime, 0, INTEGER_CONFIG, NULL, NULL), /* Default client timeout: infinite */\n    createIntConfig(\"replica-announce-port\", \"slave-announce-port\", MODIFIABLE_CONFIG, 0, 65535, server.slave_announce_port, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"tcp-backlog\", NULL, IMMUTABLE_CONFIG, 0, INT_MAX, server.tcp_backlog, 511, INTEGER_CONFIG, NULL, NULL), /* TCP listen backlog. */\n    createIntConfig(\"cluster-port\", NULL, IMMUTABLE_CONFIG, 0, 65535, server.cluster_port, 0, INTEGER_CONFIG, NULL, NULL),    \n    createIntConfig(\"cluster-announce-bus-port\", NULL, MODIFIABLE_CONFIG, 0, 65535, server.cluster_announce_bus_port, 0, INTEGER_CONFIG, NULL, updateClusterAnnouncedPort), /* Default: Use +10000 offset. 
*/\n    createIntConfig(\"cluster-announce-port\", NULL, MODIFIABLE_CONFIG, 0, 65535, server.cluster_announce_port, 0, INTEGER_CONFIG, NULL, updateClusterAnnouncedPort), /* Use server.port */\n    createIntConfig(\"cluster-announce-tls-port\", NULL, MODIFIABLE_CONFIG, 0, 65535, server.cluster_announce_tls_port, 0, INTEGER_CONFIG, NULL, updateClusterAnnouncedPort), /* Use server.tls_port */\n    createIntConfig(\"repl-timeout\", NULL, MODIFIABLE_CONFIG, 1, INT_MAX, server.repl_timeout, 60, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"repl-ping-replica-period\", \"repl-ping-slave-period\", MODIFIABLE_CONFIG, 1, INT_MAX, server.repl_ping_slave_period, 10, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"list-compress-depth\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, 0, INT_MAX, server.list_compress_depth, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"rdb-key-save-delay\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, INT_MIN, INT_MAX, server.rdb_key_save_delay, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"key-load-delay\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, INT_MIN, INT_MAX, server.key_load_delay, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"active-expire-effort\", NULL, MODIFIABLE_CONFIG, 1, 10, server.active_expire_effort, 1, INTEGER_CONFIG, NULL, NULL), /* From 1 to 10. 
*/\n    createIntConfig(\"hz\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.config_hz, CONFIG_DEFAULT_HZ, INTEGER_CONFIG, NULL, updateHZ),\n    createIntConfig(\"min-replicas-to-write\", \"min-slaves-to-write\", MODIFIABLE_CONFIG, 0, INT_MAX, server.repl_min_slaves_to_write, 0, INTEGER_CONFIG, NULL, updateGoodSlaves),\n    createIntConfig(\"min-replicas-max-lag\", \"min-slaves-max-lag\", MODIFIABLE_CONFIG, 0, INT_MAX, server.repl_min_slaves_max_lag, 10, INTEGER_CONFIG, NULL, updateGoodSlaves),\n    createIntConfig(\"watchdog-period\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 0, INT_MAX, server.watchdog_period, 0, INTEGER_CONFIG, NULL, updateWatchdogPeriod),\n    createIntConfig(\"shutdown-timeout\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.shutdown_timeout, 10, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"repl-diskless-sync-max-replicas\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.repl_diskless_sync_max_replicas, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"cluster-compatibility-sample-ratio\", NULL, MODIFIABLE_CONFIG, 0, 100, server.cluster_compatibility_sample_ratio, 0, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"cluster-slot-migration-max-archived-tasks\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 1, INT_MAX, server.asm_max_archived_tasks, 32, INTEGER_CONFIG, NULL, NULL),\n    createIntConfig(\"lookahead\", NULL, MODIFIABLE_CONFIG, 1, INT_MAX, server.lookahead, REDIS_DEFAULT_LOOKAHEAD, INTEGER_CONFIG, NULL, NULL),\n\n    /* Unsigned int configs */\n    createUIntConfig(\"maxclients\", NULL, MODIFIABLE_CONFIG, 1, UINT_MAX, server.maxclients, 10000, INTEGER_CONFIG, NULL, updateMaxclients),\n    createUIntConfig(\"unixsocketperm\", NULL, IMMUTABLE_CONFIG, 0, 0777, server.unixsocketperm, 0, OCTAL_CONFIG, NULL, NULL),\n    createUIntConfig(\"socket-mark-id\", NULL, IMMUTABLE_CONFIG, 0, UINT_MAX, server.socket_mark_id, 0, INTEGER_CONFIG, NULL, NULL),\n    createUIntConfig(\"max-new-connections-per-cycle\", NULL, MODIFIABLE_CONFIG, 1, 1000, 
server.max_new_conns_per_cycle, 10, INTEGER_CONFIG, NULL, NULL),\n    createUIntConfig(\"max-new-tls-connections-per-cycle\", NULL, MODIFIABLE_CONFIG, 1, 1000, server.max_new_tls_conns_per_cycle, 1, INTEGER_CONFIG, NULL, NULL),\n#ifdef LOG_REQ_RES\n    createUIntConfig(\"client-default-resp\", NULL, IMMUTABLE_CONFIG | HIDDEN_CONFIG, 2, 3, server.client_default_resp, 2, INTEGER_CONFIG, NULL, NULL),\n#endif\n\n    /* Unsigned Long configs */\n    createULongConfig(\"active-defrag-max-scan-fields\", NULL, MODIFIABLE_CONFIG, 1, LONG_MAX, server.active_defrag_max_scan_fields, 1000, INTEGER_CONFIG, NULL, NULL), /* Default: keys with more than 1000 fields will be processed separately */\n    createULongConfig(\"slowlog-max-len\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.slowlog_max_len, 128, INTEGER_CONFIG, NULL, NULL),\n    createULongConfig(\"acllog-max-len\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.acllog_max_len, 128, INTEGER_CONFIG, NULL, NULL),\n\n    /* Long Long configs */\n    createLongLongConfig(\"busy-reply-threshold\", \"lua-time-limit\", MODIFIABLE_CONFIG, 0, LONG_MAX, server.busy_reply_threshold, 5000, INTEGER_CONFIG, NULL, NULL),/* milliseconds */\n    createLongLongConfig(\"cluster-node-timeout\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.cluster_node_timeout, 15000, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"cluster-ping-interval\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 0, LLONG_MAX, server.cluster_ping_interval, 0, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"cluster-slot-migration-handoff-max-lag-bytes\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.asm_handoff_max_lag_bytes, 1*1024*1024, MEMORY_CONFIG, NULL, NULL), /* 1MB */\n    createLongLongConfig(\"cluster-slot-migration-write-pause-timeout\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.asm_write_pause_timeout, 10*1000, INTEGER_CONFIG, NULL, NULL), /* 10 seconds */\n    createLongLongConfig(\"cluster-slot-migration-sync-buffer-drain-timeout\", NULL, 
MODIFIABLE_CONFIG | HIDDEN_CONFIG, 0, LLONG_MAX, server.asm_sync_buffer_drain_timeout, 60000, INTEGER_CONFIG, NULL, NULL), /* 60 seconds */\n    createLongLongConfig(\"slowlog-log-slower-than\", NULL, MODIFIABLE_CONFIG, -1, LLONG_MAX, server.slowlog_log_slower_than, 10000, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"latency-monitor-threshold\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.latency_monitor_threshold, 0, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"proto-max-bulk-len\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, 1024*1024, LONG_MAX, server.proto_max_bulk_len, 512ll*1024*1024, MEMORY_CONFIG, NULL, NULL), /* Bulk request max size */\n    createLongLongConfig(\"stream-node-max-entries\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.stream_node_max_entries, 100, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"stream-idmp-duration\", NULL, MODIFIABLE_CONFIG, CONFIG_STREAM_IDMP_MIN_DURATION, CONFIG_STREAM_IDMP_MAX_DURATION, server.stream_idmp_duration, 100, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"stream-idmp-maxsize\", NULL, MODIFIABLE_CONFIG, CONFIG_STREAM_IDMP_MIN_MAXSIZE, CONFIG_STREAM_IDMP_MAX_MAXSIZE, server.stream_idmp_maxsize, 100, INTEGER_CONFIG, NULL, NULL),\n    createLongLongConfig(\"repl-backlog-size\", NULL, MODIFIABLE_CONFIG, 1, LLONG_MAX, server.repl_backlog_size, 1024*1024, MEMORY_CONFIG, NULL, updateReplBacklogSize), /* Default: 1mb */\n    createLongLongConfig(\"replica-full-sync-buffer-limit\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.repl_full_sync_buffer_limit, 0, MEMORY_CONFIG, NULL, NULL), /* Default: Inherits 'client-output-buffer-limit <replica>' */\n\n    /* Unsigned Long Long configs */\n    createULongLongConfig(\"maxmemory\", NULL, MODIFIABLE_CONFIG, 0, ULLONG_MAX, server.maxmemory, 0, MEMORY_CONFIG, NULL, updateMaxmemory),\n    createULongLongConfig(\"cluster-link-sendbuf-limit\", NULL, MODIFIABLE_CONFIG, 0, ULLONG_MAX, server.cluster_link_msg_queue_limit_bytes, 0, 
MEMORY_CONFIG, NULL, NULL),\n\n    /* Size_t configs */\n    createSizeTConfig(\"hash-max-listpack-entries\", \"hash-max-ziplist-entries\", MODIFIABLE_CONFIG, 0, LONG_MAX, server.hash_max_listpack_entries, 512, INTEGER_CONFIG, NULL, NULL),\n    createSizeTConfig(\"set-max-intset-entries\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.set_max_intset_entries, 512, INTEGER_CONFIG, NULL, NULL),\n    createSizeTConfig(\"set-max-listpack-entries\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.set_max_listpack_entries, 128, INTEGER_CONFIG, NULL, NULL),\n    createSizeTConfig(\"set-max-listpack-value\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.set_max_listpack_value, 64, INTEGER_CONFIG, NULL, NULL),\n    createSizeTConfig(\"zset-max-listpack-entries\", \"zset-max-ziplist-entries\", MODIFIABLE_CONFIG, 0, LONG_MAX, server.zset_max_listpack_entries, 128, INTEGER_CONFIG, NULL, NULL),\n    createSizeTConfig(\"active-defrag-ignore-bytes\", NULL, MODIFIABLE_CONFIG, 1, LLONG_MAX, server.active_defrag_ignore_bytes, 100<<20, MEMORY_CONFIG, NULL, NULL), /* Default: don't defrag if frag overhead is below 100mb */\n    createSizeTConfig(\"hash-max-listpack-value\", \"hash-max-ziplist-value\", MODIFIABLE_CONFIG, 0, LONG_MAX, server.hash_max_listpack_value, 64, MEMORY_CONFIG, NULL, NULL),\n    createSizeTConfig(\"stream-node-max-bytes\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.stream_node_max_bytes, 4096, MEMORY_CONFIG, NULL, NULL),\n    createSizeTConfig(\"zset-max-listpack-value\", \"zset-max-ziplist-value\", MODIFIABLE_CONFIG, 0, LONG_MAX, server.zset_max_listpack_value, 64, MEMORY_CONFIG, NULL, NULL),\n    createSizeTConfig(\"hll-sparse-max-bytes\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.hll_sparse_max_bytes, 3000, MEMORY_CONFIG, NULL, NULL),\n    createSizeTConfig(\"tracking-table-max-keys\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.tracking_table_max_keys, 1000000, INTEGER_CONFIG, NULL, NULL), /* Default: 1 million keys max. 
*/\n    createSizeTConfig(\"client-query-buffer-limit\", NULL, DEBUG_CONFIG | MODIFIABLE_CONFIG, 1024*1024, LONG_MAX, server.client_max_querybuf_len, 1024*1024*1024, MEMORY_CONFIG, NULL, NULL), /* Default: 1GB max query buffer. */\n    createSSizeTConfig(\"maxmemory-clients\", NULL, MODIFIABLE_CONFIG, -100, SSIZE_MAX, server.maxmemory_clients, 0, MEMORY_CONFIG | PERCENT_CONFIG, NULL, applyClientMaxMemoryUsage),\n\n    /* Other configs */\n    createTimeTConfig(\"repl-backlog-ttl\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.repl_backlog_time_limit, 60*60, INTEGER_CONFIG, NULL, NULL), /* Default: 1 hour */\n    createOffTConfig(\"auto-aof-rewrite-min-size\", NULL, MODIFIABLE_CONFIG, 0, LLONG_MAX, server.aof_rewrite_min_size, 64*1024*1024, MEMORY_CONFIG, NULL, NULL),\n    createOffTConfig(\"loading-process-events-interval-bytes\", NULL, MODIFIABLE_CONFIG | HIDDEN_CONFIG, 1024, INT_MAX, server.loading_process_events_interval_bytes, 1024*512, INTEGER_CONFIG, NULL, NULL),\n    createOffTConfig(\"aof-load-corrupt-tail-max-size\", NULL, MODIFIABLE_CONFIG, 0, LONG_MAX, server.aof_load_corrupt_tail_max_size, 0, INTEGER_CONFIG, NULL, NULL),\n\n    createIntConfig(\"tls-port\", NULL, MODIFIABLE_CONFIG, 0, 65535, server.tls_port, 0, INTEGER_CONFIG, NULL, applyTLSPort), /* TCP port. 
*/\n    createIntConfig(\"tls-session-cache-size\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.tls_ctx_config.session_cache_size, 20*1024, INTEGER_CONFIG, NULL, applyTlsCfg),\n    createIntConfig(\"tls-session-cache-timeout\", NULL, MODIFIABLE_CONFIG, 0, INT_MAX, server.tls_ctx_config.session_cache_timeout, 300, INTEGER_CONFIG, NULL, applyTlsCfg),\n    createBoolConfig(\"tls-cluster\", NULL, MODIFIABLE_CONFIG, server.tls_cluster, 0, NULL, applyTlsCfg),\n    createBoolConfig(\"tls-replication\", NULL, MODIFIABLE_CONFIG, server.tls_replication, 0, NULL, applyTlsCfg),\n    createEnumConfig(\"tls-auth-clients\", NULL, MODIFIABLE_CONFIG, tls_auth_clients_enum, server.tls_auth_clients, TLS_CLIENT_AUTH_YES, NULL, NULL),\n    createEnumConfig(\"tls-auth-clients-user\", NULL, MODIFIABLE_CONFIG, tls_client_auth_user_enum, server.tls_ctx_config.client_auth_user, TLS_CLIENT_FIELD_OFF, NULL, NULL),\n    createBoolConfig(\"tls-prefer-server-ciphers\", NULL, MODIFIABLE_CONFIG, server.tls_ctx_config.prefer_server_ciphers, 0, NULL, applyTlsCfg),\n    createBoolConfig(\"tls-session-caching\", NULL, MODIFIABLE_CONFIG, server.tls_ctx_config.session_caching, 1, NULL, applyTlsCfg),\n    createStringConfig(\"tls-cert-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.cert_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-key-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.key_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-key-file-pass\", NULL, MODIFIABLE_CONFIG | SENSITIVE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.key_file_pass, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-client-cert-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.client_cert_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-client-key-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, 
server.tls_ctx_config.client_key_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-client-key-file-pass\", NULL, MODIFIABLE_CONFIG | SENSITIVE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.client_key_file_pass, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-dh-params-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.dh_params_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-ca-cert-file\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.ca_cert_file, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-ca-cert-dir\", NULL, VOLATILE_CONFIG | MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.ca_cert_dir, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-protocols\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.protocols, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-ciphers\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.ciphers, NULL, NULL, applyTlsCfg),\n    createStringConfig(\"tls-ciphersuites\", NULL, MODIFIABLE_CONFIG, EMPTY_STRING_IS_NULL, server.tls_ctx_config.ciphersuites, NULL, NULL, applyTlsCfg),\n\n    /* Special configs */\n    createSpecialConfig(\"dir\", NULL, MODIFIABLE_CONFIG | PROTECTED_CONFIG | DENY_LOADING_CONFIG, setConfigDirOption, getConfigDirOption, rewriteConfigDirOption, NULL),\n    createSpecialConfig(\"save\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, setConfigSaveOption, getConfigSaveOption, rewriteConfigSaveOption, NULL),\n    createSpecialConfig(\"client-output-buffer-limit\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, setConfigClientOutputBufferLimitOption, getConfigClientOutputBufferLimitOption, rewriteConfigClientOutputBufferLimitOption, NULL),\n    createSpecialConfig(\"oom-score-adj-values\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, setConfigOOMScoreAdjValuesOption, getConfigOOMScoreAdjValuesOption, 
rewriteConfigOOMScoreAdjValuesOption, updateOOMScoreAdj),\n    createSpecialConfig(\"notify-keyspace-events\", NULL, MODIFIABLE_CONFIG, setConfigNotifyKeyspaceEventsOption, getConfigNotifyKeyspaceEventsOption, rewriteConfigNotifyKeyspaceEventsOption, NULL),\n    createSpecialConfig(\"bind\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, setConfigBindOption, getConfigBindOption, rewriteConfigBindOption, applyBind),\n    createSpecialConfig(\"replicaof\", \"slaveof\", IMMUTABLE_CONFIG | MULTI_ARG_CONFIG, setConfigReplicaOfOption, getConfigReplicaOfOption, rewriteConfigReplicaOfOption, NULL),\n    createSpecialConfig(\"latency-tracking-info-percentiles\", NULL, MODIFIABLE_CONFIG | MULTI_ARG_CONFIG, setConfigLatencyTrackingInfoPercentilesOutputOption, getConfigLatencyTrackingInfoPercentilesOutputOption, rewriteConfigLatencyTrackingInfoPercentilesOutputOption, NULL),\n\n    /* NULL terminator; this is dropped when we convert to the runtime array. */\n    {NULL}\n};\n\n/* Create a new config by copying the passed-in config. Returns 1 on success\n * or 0 when there was already a config with the same name. */\nint registerConfigValue(const char *name, const standardConfig *config, int alias) {\n    standardConfig *new = zmalloc(sizeof(standardConfig));\n    memcpy(new, config, sizeof(standardConfig));\n    if (alias) {\n        new->flags |= ALIAS_CONFIG;\n        new->name = config->alias;\n        new->alias = config->name;\n    }\n\n    return dictAdd(configs, sdsnew(name), new) == DICT_OK;\n}\n\n/* Initialize configs to their default values and create and populate the\n * runtime configuration dictionary. */\nvoid initConfigValues(void) {\n    configs = dictCreate(&sdsHashDictType);\n    dictExpand(configs, sizeof(static_configs) / sizeof(standardConfig));\n    for (standardConfig *config = static_configs; config->name != NULL; config++) {\n        if (config->interface.init) config->interface.init(config);\n        /* Add the primary config to the dictionary. 
*/\n        int ret = registerConfigValue(config->name, config, 0);\n        serverAssert(ret);\n\n        /* Aliases are the same as their primary counterparts, but they\n         * also have a flag indicating they are the alias. */\n        if (config->alias) {\n            int ret = registerConfigValue(config->alias, config, ALIAS_CONFIG);\n            serverAssert(ret);\n        }\n    }\n}\n\n/* Remove a config by name from the configs dict. */\nvoid removeConfig(sds name) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return;\n    if (config->flags & MODULE_CONFIG) {\n        sdsfree((sds) config->name);\n        sdsfree((sds) config->alias);\n\n        switch (config->type) {\n            case BOOL_CONFIG:\n                break;\n            case NUMERIC_CONFIG:\n                break;\n            case SDS_CONFIG:\n                if (config->data.sds.default_value)\n                    sdsfree((sds)config->data.sds.default_value);\n                break;\n            case ENUM_CONFIG:\n                {\n                    configEnum *enumNode = config->data.enumd.enum_value;\n                    while (enumNode->name != NULL) {\n                        zfree(enumNode->name);\n                        enumNode++;\n                    }\n                    zfree(config->data.enumd.enum_value);\n                }\n                break;\n            case SPECIAL_CONFIG: /* Not used by modules */\n            case STRING_CONFIG: /* Not used by modules */\n            default:\n                serverAssert(0);\n                break;\n        }\n    }\n    dictDelete(configs, name);\n}\n\n/*-----------------------------------------------------------------------------\n * Module Config\n *----------------------------------------------------------------------------*/\n\n/* Create a bool/string/enum/numeric standardConfig for a module config in the configs dictionary */\n\n/* On removeConfig(), name and alias will be 
sdsfree() */\nvoid addModuleBoolConfig(sds name, sds alias, int flags, void *privdata, int default_val) {\n    int config_dummy_address;\n    standardConfig sc = createBoolConfig(name, alias, flags | MODULE_CONFIG, config_dummy_address, default_val, NULL, NULL);\n    sc.data.yesno.config = NULL;\n    sc.privdata = privdata;\n    registerConfigValue(name, &sc, 0);\n\n    /* If alias available, deep copy standardConfig and register again */\n    if (alias) {\n        sc.name = sdsdup(name);\n        sc.alias = sdsdup(alias);\n        registerConfigValue(sc.alias, &sc, 1);\n    }\n}\n\n/* On removeConfig(), name, default_val, and alias will be sdsfree() */\nvoid addModuleStringConfig(sds name, sds alias, int flags, void *privdata, sds default_val) {\n    sds config_dummy_address;\n    standardConfig sc = createSDSConfig(name, alias, flags | MODULE_CONFIG, 0, config_dummy_address, default_val, NULL, NULL);\n    sc.data.sds.config = NULL;\n    sc.privdata = privdata;\n    registerConfigValue(name, &sc, 0); /* memcpy sc */\n\n    /* If alias available, deep copy standardConfig and register again */\n    if (alias) {\n        sc.name = sdsdup(name);\n        sc.alias = sdsdup(alias);\n        if (default_val) sc.data.sds.default_value = sdsdup(default_val);\n        registerConfigValue(sc.alias, &sc, 1);\n    }\n}\n\n/* On removeConfig(), name, default_val, alias and enum_vals will be freed */\nvoid addModuleEnumConfig(sds name, sds alias, int flags, void *privdata, int default_val, configEnum *enum_vals, int num_enum_vals) {\n    int config_dummy_address;\n    standardConfig sc = createEnumConfig(name, alias, flags | MODULE_CONFIG, enum_vals, config_dummy_address, default_val, NULL, NULL);\n    sc.data.enumd.config = NULL;\n    sc.privdata = privdata;\n    registerConfigValue(name, &sc, 0);\n\n    /* If alias available, deep copy standardConfig and register again */\n    if (alias) {\n        sc.name = sdsdup(name);\n        sc.alias = sdsdup(alias);\n        
sc.data.enumd.enum_value = zmalloc((num_enum_vals + 1) * sizeof(configEnum));\n        for (int i = 0; i < num_enum_vals; i++) {\n            sc.data.enumd.enum_value[i].name = zstrdup(enum_vals[i].name);\n            sc.data.enumd.enum_value[i].val = enum_vals[i].val;\n        }\n        sc.data.enumd.enum_value[num_enum_vals].name = NULL;\n        sc.data.enumd.enum_value[num_enum_vals].val = 0;        \n        registerConfigValue(sc.alias, &sc, 1);\n    }    \n}\n\n/* On removeConfig(), it will free name, and alias if it is not NULL */\nvoid addModuleNumericConfig(sds name, sds alias, int flags, void *privdata, long long default_val, int conf_flags, long long lower, long long upper) {\n    long long config_dummy_address;\n    standardConfig sc = createLongLongConfig(name, alias, flags | MODULE_CONFIG, lower, upper, config_dummy_address, default_val, conf_flags, NULL, NULL);\n    sc.data.numeric.config.ll = NULL;\n    sc.privdata = privdata;\n    registerConfigValue(name, &sc, 0);\n\n    /* If alias available, deep copy standardConfig and register again */\n    if (alias) {\n        sc.name = sdsdup(name);\n        sc.alias = sdsdup(alias);\n        registerConfigValue(sc.alias, &sc, 1);\n    }\n}\n\n/*-----------------------------------------------------------------------------\n * API for modules to access the config\n *----------------------------------------------------------------------------*/\n\n/* If a config with the given `name` does not exist or is not mutable, return\n * NULL, else return the config. 
*/\nstatic standardConfig *getMutableConfig(client *c, const sds name, const char **errstr) {\n    standardConfig *config = lookupConfig(name);\n\n    if (!config) {\n        if (errstr) *errstr = \"Config name not found\";\n        return NULL;\n    }\n\n    if (config->flags & IMMUTABLE_CONFIG ||\n        (config->flags & PROTECTED_CONFIG &&\n         !allowProtectedAction(server.enable_protected_configs, c))) {\n        if (errstr) *errstr = config->flags & IMMUTABLE_CONFIG ? \"Config is immutable\" : \"Config is protected\";\n        return NULL;\n    }\n\n    if (server.loading && config->flags & DENY_LOADING_CONFIG) {\n        if (errstr) *errstr = \"Config is not allowed during loading\";\n        return NULL;\n    }\n\n    return config;\n}\n\ndictIterator *moduleGetConfigIterator(void) {\n    return dictGetSafeIterator(configs);\n}\n\nconst char *moduleConfigIteratorNext(dictIterator **iter, sds pattern, int is_glob, configType *typehint) {\n    if (*iter == NULL) return NULL;\n\n    standardConfig *config = NULL;\n\n    /* Special case for non-glob patterns - we only need to check if the config\n     * exists and return it. That saves us iteration cycles. 
*/\n    if (pattern && !is_glob) {\n        /* Release the iterator so we stop the iteration at this point */\n        dictReleaseIterator(*iter);\n        *iter = NULL;\n\n        dictEntry *de = dictFind(configs, pattern);\n        if (!de) return NULL;\n        config = dictGetVal(de);\n        if (typehint) *typehint = config->type;\n        return config->name;\n    }\n\n    dictEntry *de = NULL;\n    while ((de = dictNext(*iter)) != NULL) {\n        config = dictGetVal(de);\n\n        /* Note that hidden configs require an exact match (not a pattern) */\n        if (config->flags & HIDDEN_CONFIG) continue;\n\n        if (!pattern || stringmatch(pattern, config->name, 1))\n            break;\n    }\n    if (!de) return NULL;\n    if (typehint) *typehint = config->type;\n    return config->name;\n}\n\nint moduleGetConfigType(sds name, configType *res) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return 0;\n    if (res) *res = config->type;\n    return 1;\n}\n\nint moduleGetBoolConfig(sds name, int *res) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return 0;\n    if (config->type != BOOL_CONFIG) return 0;\n\n    if (res == NULL) return 1;\n\n    if (config->flags & MODULE_CONFIG) \n        *res = getModuleBoolConfig(config->privdata);\n    else\n        *res = *config->data.yesno.config;\n\n    return 1;\n}\n\nint moduleGetStringConfig(sds name, sds *res) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return 0;\n\n    if (res == NULL) return 1;\n\n    *res = config->interface.get(config);\n\n    return 1;\n}\n\nint moduleGetEnumConfig(sds name, sds *res) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return 0;\n    if (config->type != ENUM_CONFIG) return 0;\n\n    if (res != NULL) *res = enumConfigGet(config);\n\n    return 1;\n}\n\nint moduleGetNumericConfig(sds name, long long *res) {\n    standardConfig *config = lookupConfig(name);\n    if (!config) return 0;\n    
if (config->type != NUMERIC_CONFIG) return 0;\n\n    if (res == NULL) return 1;\n\n    if (config->flags & MODULE_CONFIG) \n        *res = getModuleNumericConfig(config->privdata);\n    else\n        GET_NUMERIC_TYPE(*res);\n\n    return 1;\n}\n\nstatic int configApply(standardConfig *config, sds old_value, const char **err) {\n    if (!configNeedsApply(config)) return 1;\n\n    int res = 0;\n    if (config->flags & MODULE_CONFIG) \n        res = moduleConfigApply(config->privdata, err);\n    else\n        res = config->interface.apply(err);\n\n    if (res) return res;\n\n    /* Apply failed - restore old value and apply it again since we don't know\n     * the side effects of the failed apply. */\n    restoreBackupConfig(&config, &old_value, 1);\n\n    return res;\n}\n\nint moduleSetBoolConfig(client *c, sds name, int val, const char **err) {\n    standardConfig *config = getMutableConfig(c, name, err);\n    if (!config) return 0;\n    if (config->type != BOOL_CONFIG) return 0;\n\n    /* Sanitize input */\n    if (val != 0 && val != 1) val = -1;\n\n    sds old_value = config->interface.get(config);\n    int res = boolConfigSetInternal(config, val, err);\n\n    /* We can't be sure if value was changed but setting still failed so we need\n     * to restore the old value */\n    if (!res)\n        restoreBackupConfig(&config, &old_value, 1);\n    else\n        res = configApply(config, old_value, err);\n\n    if (old_value) sdsfree(old_value);\n\n    return res;\n}\n\nint moduleSetStringConfig(client *c, sds name, const char *val, const char **err) {\n    standardConfig *config = getMutableConfig(c, name, err);\n    if (!config) return 0;\n\n    sds old_value = config->interface.get(config);\n\n    int res = 0;\n    if (config->type == STRING_CONFIG)\n        res = stringConfigSetInternal(config, (char *)val, err);\n    else {\n        sds sdsval = sdsnew(val);\n        res = performInterfaceSet(config, sdsval, err);\n        sdsfree(sdsval);\n    }\n\n    /* We 
can't be sure if value was changed but setting still failed so we need\n     * to restore the old value */\n    if (!res)\n        restoreBackupConfig(&config, &old_value, 1);\n    else\n        res = configApply(config, old_value, err);\n\n    if (old_value) sdsfree(old_value);\n\n    return res;\n}\n\nint moduleSetEnumConfig(client *c, sds name, sds *vals, int vals_cnt, const char **err) {\n    standardConfig *config = getMutableConfig(c, name, err);\n    if (!config) return 0;\n    if (config->type != ENUM_CONFIG) return 0;\n\n    sds old_value = config->interface.get(config);\n    int res = enumConfigSet(config, vals, vals_cnt, err);\n\n    /* We can't be sure if value was changed but setting still failed so we need\n     * to restore the old value */\n    if (!res)\n        restoreBackupConfig(&config, &old_value, 1);\n    else\n        res = configApply(config, old_value, err);\n\n    if (old_value) sdsfree(old_value);\n\n    return res;\n}\n\nint moduleSetNumericConfig(client *c, sds name, long long val, const char **err) {\n    standardConfig *config = getMutableConfig(c, name, err);\n    if (!config) return 0;\n    if (config->type != NUMERIC_CONFIG) return 0;\n\n    sds old_value = config->interface.get(config);\n    int res = numericConfigSetInternal(config, val, err);\n\n    /* We can't be sure if value was changed but setting still failed so we need\n     * to restore the old value */\n    if (!res)\n        restoreBackupConfig(&config, &old_value, 1);\n    else\n        res = configApply(config, old_value, err);\n\n    if (old_value) sdsfree(old_value);\n\n    return res;\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG HELP\n *----------------------------------------------------------------------------*/\n\nvoid configHelpCommand(client *c) {\n    const char *help[] = {\n\"GET <pattern>\",\n\"    Return parameters matching the glob-like <pattern> and their values.\",\n\"SET <directive> <value>\",\n
\"    Set the configuration <directive> to <value>.\",\n\"RESETSTAT\",\n\"    Reset statistics reported by the INFO command.\",\n\"REWRITE\",\n\"    Rewrite the configuration file.\",\nNULL\n    };\n\n    addReplyHelp(c, help);\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG RESETSTAT\n *----------------------------------------------------------------------------*/\n\nvoid configResetStatCommand(client *c) {\n    resetServerStats();\n    resetClusterStats();\n    resetCommandTableStats(server.commands);\n    resetErrorTableStats();\n    addReply(c,shared.ok);\n}\n\n/*-----------------------------------------------------------------------------\n * CONFIG REWRITE\n *----------------------------------------------------------------------------*/\n\nvoid configRewriteCommand(client *c) {\n    if (server.configfile == NULL) {\n        addReplyError(c,\"The server is running without a config file\");\n        return;\n    }\n    if (rewriteConfig(server.configfile, 0) == -1) {\n        /* save errno in case of being tainted. */\n        int err = errno;\n        serverLog(LL_WARNING,\"CONFIG REWRITE failed: %s\", strerror(err));\n        addReplyErrorFormat(c,\"Rewriting config file: %s\", strerror(err));\n    } else {\n        serverLog(LL_NOTICE,\"CONFIG REWRITE executed with success.\");\n        addReply(c,shared.ok);\n    }\n}\n\nint configExists(const sds name) {\n    return lookupConfig(name) != NULL;\n}\n"
  },
  {
    "path": "src/config.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __CONFIG_H\n#define __CONFIG_H\n\n#include <sys/param.h>\n\n#ifdef __APPLE__\n#include <fcntl.h> // for fcntl(fd, F_FULLFSYNC)\n#include <AvailabilityMacros.h>\n#endif\n\n#ifdef __linux__\n#include <features.h>\n#include <fcntl.h>\n#endif\n\n#if defined(__APPLE__) && defined(__MAC_OS_X_VERSION_MAX_ALLOWED) && __MAC_OS_X_VERSION_MAX_ALLOWED >= 1060\n#define MAC_OS_10_6_DETECTED\n#endif\n\n/* Define redis_fstat to fstat or fstat64() */\n#if defined(__APPLE__) && !defined(MAC_OS_10_6_DETECTED)\n#define redis_fstat fstat64\n#define redis_stat stat64\n#else\n#define redis_fstat fstat\n#define redis_stat stat\n#endif\n\n#ifndef CACHE_LINE_SIZE\n#if defined(__aarch64__) && defined(__APPLE__)\n#define CACHE_LINE_SIZE 128\n#else\n#define CACHE_LINE_SIZE 64\n#endif\n#endif\n\n/* Test for proc filesystem */\n#ifdef __linux__\n#define HAVE_PROC_STAT 1\n#define HAVE_PROC_MAPS 1\n#define HAVE_PROC_SMAPS 1\n#define HAVE_PROC_SOMAXCONN 1\n#define HAVE_PROC_OOM_SCORE_ADJ 1\n#define HAVE_EVENT_FD 1\n#endif\n\n/* Test for task_info() */\n#if defined(__APPLE__)\n#define HAVE_TASKINFO 1\n#endif\n\n/* Test for somaxconn check */\n#if defined(__APPLE__) || defined(__FreeBSD__)\n#define HAVE_SYSCTL_KIPC_SOMAXCONN 1\n#elif defined(__OpenBSD__)\n#define HAVE_SYSCTL_KERN_SOMAXCONN 1\n#endif\n\n/* Test for backtrace() */\n#if defined(__APPLE__) || (defined(__linux__) && defined(__GLIBC__)) || \\\n    defined(__FreeBSD__) || ((defined(__OpenBSD__) || defined(__NetBSD__) || defined(__sun)) && defined(USE_BACKTRACE))\\\n || defined(__DragonFly__) || (defined(__UCLIBC__) && defined(__UCLIBC_HAS_BACKTRACE__))\n#define HAVE_BACKTRACE 1\n#endif\n\n/* MSG_NOSIGNAL. 
*/\n#ifdef __linux__\n#define HAVE_MSG_NOSIGNAL 1\n#if defined(SO_MARK)\n#define HAVE_SOCKOPTMARKID 1\n#define SOCKOPTMARKID SO_MARK\n#endif\n#endif\n\n/* Test for polling API */\n#ifdef __linux__\n#define HAVE_EPOLL 1\n#endif\n\n/* Test for accept4() */\n#if defined(__linux__) || (defined(OpenBSD) && OpenBSD >= 201505) || \\\n    defined(__FreeBSD__) || \\\n    (defined(__NetBSD_Version__) && __NetBSD_Version__ >= 800000000) || \\\n    (defined(__DragonFly__) && __DragonFly_version >= 400305)\n#define HAVE_ACCEPT4 1\n#endif\n\n/* Detect for pipe2() */\n#if defined(__linux__) || \\\n    defined(__FreeBSD__) || \\\n    (defined(__OpenBSD__) && OpenBSD >= 201505) || \\\n    (defined(__DragonFly_version) && __DragonFly_version >= 400106) || \\\n    (defined(__NetBSD_Version__) && __NetBSD_Version__ >= 600000000)\n#define HAVE_PIPE2 1\n#endif\n\n/* Detect for kqueue */\n#if (defined(__APPLE__) && defined(MAC_OS_10_6_DETECTED)) || defined(__FreeBSD__) || \\\n    defined(__OpenBSD__) || defined (__NetBSD__) || defined(__DragonFly__)\n#define HAVE_KQUEUE 1\n#endif\n\n#ifdef __sun\n#include <sys/feature_tests.h>\n#ifdef _DTRACE_VERSION\n#define HAVE_EVPORT 1\n#define HAVE_PSINFO 1\n#endif\n#endif\n\n/* Test for __builtin_prefetch()\n * Supported in LLVM since 2.9: https://releases.llvm.org/2.9/docs/ReleaseNotes.html\n * Supported in GCC since 3.1 but we use 4.8 given it's too old: https://gcc.gnu.org/gcc-3.1/changes.html. 
*/\n#if defined(__clang__) && (__clang_major__ > 2 || (__clang_major__ == 2 && __clang_minor__ >= 9))\n#define HAS_BUILTIN_PREFETCH 1\n#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8))\n#define HAS_BUILTIN_PREFETCH 1\n#else\n#define HAS_BUILTIN_PREFETCH 0\n#endif\n\n#if HAS_BUILTIN_PREFETCH\n#define redis_prefetch_read(addr) __builtin_prefetch(addr, 0, 3)  /* Read with high locality */\n#define redis_prefetch_write(addr) __builtin_prefetch(addr, 1, 3) /* Write with high locality */\n#else\n#define redis_prefetch_read(addr) ((void)(addr))  /* No-op if unsupported */\n#define redis_prefetch_write(addr) ((void)(addr)) /* No-op if unsupported */\n#endif\n\n/* Define redis_fsync to fdatasync() in Linux and fsync() for all the rest */\n#if defined(__linux__)\n#define redis_fsync(fd) fdatasync(fd)\n#elif defined(__APPLE__)\n#define redis_fsync(fd) fcntl(fd, F_FULLFSYNC)\n#else\n#define redis_fsync(fd) fsync(fd)\n#endif\n\n#if defined(__FreeBSD__)\n#if defined(SO_USER_COOKIE)\n#define HAVE_SOCKOPTMARKID 1\n#define SOCKOPTMARKID SO_USER_COOKIE\n#endif\n#endif\n\n#if defined(__OpenBSD__)\n#if defined(SO_RTABLE)\n#define HAVE_SOCKOPTMARKID 1\n#define SOCKOPTMARKID SO_RTABLE\n#endif\n#endif\n\n#if __GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5)\n#define redis_unreachable __builtin_unreachable\n#else\n#define redis_unreachable abort\n#endif\n\n#if __GNUC__ >= 3\n#define likely(x) __builtin_expect(!!(x), 1)\n#define unlikely(x) __builtin_expect(!!(x), 0)\n#else\n#define likely(x) (x)\n#define unlikely(x) (x)\n#endif\n\n#if defined(__has_attribute)\n#if __has_attribute(no_sanitize)\n#define REDIS_NO_SANITIZE(sanitizer) __attribute__((no_sanitize(sanitizer)))\n#endif\n#endif\n#if !defined(REDIS_NO_SANITIZE)\n#define REDIS_NO_SANITIZE(sanitizer)\n#endif\n\n#if defined(__clang__)\n#define REDIS_NO_SANITIZE_MSAN(sanitizer) REDIS_NO_SANITIZE(sanitizer)\n#else\n#define REDIS_NO_SANITIZE_MSAN(sanitizer)\n#endif\n\n/* Define rdb_fsync_range to 
sync_file_range() on Linux, otherwise we use\n * the plain fsync() call. */\n#if (defined(__linux__) && defined(SYNC_FILE_RANGE_WAIT_BEFORE))\n#define HAVE_SYNC_FILE_RANGE 1\n#define rdb_fsync_range(fd,off,size) sync_file_range(fd,off,size,SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE)\n#elif defined(__APPLE__)\n#define rdb_fsync_range(fd,off,size) fcntl(fd, F_FULLFSYNC)\n#else\n#define rdb_fsync_range(fd,off,size) fsync(fd)\n#endif\n\n/* Check if we can use setproctitle().\n * BSD systems have support for it, we provide an implementation for\n * Linux and osx. */\n#if (defined __NetBSD__ || defined __FreeBSD__ || defined __OpenBSD__)\n#define USE_SETPROCTITLE\n#endif\n\n#if defined(__HAIKU__)\n#define ESOCKTNOSUPPORT 0\n#endif\n\n#if (defined __linux || defined __APPLE__)\n#define USE_SETPROCTITLE\n#define INIT_SETPROCTITLE_REPLACEMENT\nvoid spt_init(int argc, char *argv[]);\nvoid setproctitle(const char *fmt, ...);\n#endif\n\n/* Byte ordering detection */\n#include <sys/types.h> /* This will likely define BYTE_ORDER */\n\n#ifndef BYTE_ORDER\n#if (BSD >= 199103)\n# include <machine/endian.h>\n#else\n#if defined(linux) || defined(__linux__)\n# include <endian.h>\n#else\n#define\tLITTLE_ENDIAN\t1234\t/* least-significant byte first (vax, pc) */\n#define\tBIG_ENDIAN\t4321\t/* most-significant byte first (IBM, net) */\n#define\tPDP_ENDIAN\t3412\t/* LSB first in word, MSW first in long (pdp)*/\n\n#if defined(__i386__) || defined(__x86_64__) || defined(__amd64__) || \\\n   defined(vax) || defined(ns32000) || defined(sun386) || \\\n   defined(MIPSEL) || defined(_MIPSEL) || defined(BIT_ZERO_ON_RIGHT) || \\\n   defined(__alpha__) || defined(__alpha)\n#define BYTE_ORDER    LITTLE_ENDIAN\n#endif\n\n#if defined(sel) || defined(pyr) || defined(mc68000) || defined(sparc) || \\\n    defined(is68k) || defined(tahoe) || defined(ibm032) || defined(ibm370) || \\\n    defined(MIPSEB) || defined(_MIPSEB) || defined(_IBMR2) || defined(DGUX) ||\\\n    defined(apollo) || 
defined(__convex__) || defined(_CRAY) || \\\n    defined(__hppa) || defined(__hp9000) || \\\n    defined(__hp9000s300) || defined(__hp9000s700) || \\\n    defined (BIT_ZERO_ON_LEFT) || defined(m68k) || defined(__sparc)\n#define BYTE_ORDER\tBIG_ENDIAN\n#endif\n#endif /* linux */\n#endif /* BSD */\n#endif /* BYTE_ORDER */\n\n/* Sometimes after including an OS-specific header that defines the\n * endianness we end with __BYTE_ORDER but not with BYTE_ORDER that is what\n * the Redis code uses. In this case let's define everything without the\n * underscores. */\n#ifndef BYTE_ORDER\n#ifdef __BYTE_ORDER\n#if defined(__LITTLE_ENDIAN) && defined(__BIG_ENDIAN)\n#ifndef LITTLE_ENDIAN\n#define LITTLE_ENDIAN __LITTLE_ENDIAN\n#endif\n#ifndef BIG_ENDIAN\n#define BIG_ENDIAN __BIG_ENDIAN\n#endif\n#if (__BYTE_ORDER == __LITTLE_ENDIAN)\n#define BYTE_ORDER LITTLE_ENDIAN\n#else\n#define BYTE_ORDER BIG_ENDIAN\n#endif\n#endif\n#endif\n#endif\n\n#if !defined(BYTE_ORDER) || \\\n    (BYTE_ORDER != BIG_ENDIAN && BYTE_ORDER != LITTLE_ENDIAN)\n\t/* you must determine what the correct bit order is for\n\t * your compiler - the next line is an intentional error\n\t * which will force your compiles to bomb until you fix\n\t * the above macros.\n\t */\n#error \"Undefined or invalid BYTE_ORDER\"\n#endif\n\n#if (__i386 || __amd64 || __powerpc__) && __GNUC__\n#define GNUC_VERSION (__GNUC__ * 10000 + __GNUC_MINOR__ * 100 + __GNUC_PATCHLEVEL__)\n#if defined(__clang__)\n#define HAVE_ATOMIC\n#endif\n#if (defined(__GLIBC__) && defined(__GLIBC_PREREQ))\n#if (GNUC_VERSION >= 40100 && __GLIBC_PREREQ(2, 6))\n#define HAVE_ATOMIC\n#endif\n#endif\n#endif\n\n/* Make sure we can test for ARM just checking for __arm__, since sometimes\n * __arm is defined but __arm__ is not. */\n#if defined(__arm) && !defined(__arm__)\n#define __arm__\n#endif\n#if defined (__aarch64__) && !defined(__arm64__)\n#define __arm64__\n#endif\n\n/* Make sure we can test for SPARC just checking for __sparc__. 
*/\n#if defined(__sparc) && !defined(__sparc__)\n#define __sparc__\n#endif\n\n#if defined(__sparc__) || defined(__arm__)\n#define USE_ALIGNED_ACCESS\n#endif\n\n/* Define for redis_set_thread_title */\n#ifdef __linux__\n#define redis_set_thread_title(name) pthread_setname_np(pthread_self(), name)\n#else\n#if (defined __FreeBSD__ || defined __OpenBSD__)\n#include <pthread_np.h>\n#define redis_set_thread_title(name) pthread_set_name_np(pthread_self(), name)\n#elif defined __NetBSD__\n#include <pthread.h>\n#define redis_set_thread_title(name) pthread_setname_np(pthread_self(), \"%s\", name)\n#elif defined __HAIKU__\n#include <kernel/OS.h>\n#define redis_set_thread_title(name) rename_thread(find_thread(0), name)\n#else\n#if (defined __APPLE__ && defined(__MAC_OS_X_VERSION_MAX_ALLOWED) && __MAC_OS_X_VERSION_MAX_ALLOWED >= 1070)\nint pthread_setname_np(const char *name);\n#include <pthread.h>\n#define redis_set_thread_title(name) pthread_setname_np(name)\n#else\n#define redis_set_thread_title(name)\n#endif\n#endif\n#endif\n\n/* Check if we can use setcpuaffinity(). 
*/\n#if (defined __linux || defined __NetBSD__ || defined __FreeBSD__ || defined __DragonFly__)\n#define USE_SETCPUAFFINITY\nvoid setcpuaffinity(const char *cpulist);\n#endif\n\n/* Test for posix_fadvise() */\n#if defined(__linux__) || defined(__FreeBSD__)\n#define HAVE_FADVISE\n#endif\n\n#if defined(__x86_64__) && ((defined(__GNUC__) && __GNUC__ > 5) || (defined(__clang__)))\n    #if defined(__has_attribute) && __has_attribute(target)\n        #define HAVE_POPCNT\n        #define ATTRIBUTE_TARGET_POPCNT __attribute__((target(\"popcnt\")))\n    #else\n        #define ATTRIBUTE_TARGET_POPCNT\n    #endif\n#else\n    #define ATTRIBUTE_TARGET_POPCNT\n#endif\n\n/* Check if we can compile AVX2 code */\n#if defined (__x86_64__) && ((defined(__GNUC__) && __GNUC__ >= 5) || (defined(__clang__) && __clang_major__ >= 4))\n#if defined(__has_attribute) && __has_attribute(target)\n#define HAVE_AVX2\n#define ATTRIBUTE_TARGET_AVX2 __attribute__((target(\"avx2\")))\n#define ATTRIBUTE_TARGET_AVX2_POPCOUNT __attribute__((target(\"avx2,popcnt\")))\n#endif\n#endif\n\n/* Check if we can compile AVX512 code */\n#if defined (__x86_64__) && ((defined(__GNUC__) && __GNUC__ >= 5) || (defined(__clang__) && __clang_major__ >= 4))\n#if defined(__has_attribute) && __has_attribute(target)\n#define HAVE_AVX512\n#define ATTRIBUTE_TARGET_AVX512 __attribute__((target(\"avx512f\")))\n#define ATTRIBUTE_TARGET_AVX512_POPCOUNT __attribute__((target(\"avx512f,avx512vpopcntdq\")))\n#endif\n#endif\n\n/* Check for AArch64 (ARM v8) specific optimizations */\n#if defined(__aarch64__) && ((defined(__GNUC__) && __GNUC__ >= 5) || defined(__clang__))\n#if defined(__has_attribute) && __has_attribute(target)\n#define HAVE_AARCH64_NEON\n#endif\n#endif\n\n#endif\n"
  },
  {
    "path": "src/connection.c",
    "content": "/* ==========================================================================\n * connection.c - connection layer framework\n * --------------------------------------------------------------------------\n * Copyright (C) 2022  zhenwei pi\n *\n * Permission is hereby granted, free of charge, to any person obtaining a\n * copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to permit\n * persons to whom the Software is furnished to do so, subject to the\n * following conditions:\n *\n * The above copyright notice and this permission notice shall be included\n * in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN\n * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,\n * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE\n * USE OR OTHER DEALINGS IN THE SOFTWARE.\n * ==========================================================================\n */\n\n#include \"server.h\"\n#include \"connection.h\"\n\nstatic ConnectionType *connTypes[CONN_TYPE_MAX];\n\nint connTypeRegister(ConnectionType *ct) {\n    const char *typename = ct->get_type(NULL);\n    ConnectionType *tmpct;\n    int type;\n\n    /* find an empty slot to store the new connection type */\n    for (type = 0; type < CONN_TYPE_MAX; type++) {\n        tmpct = connTypes[type];\n        if (!tmpct)\n            break;\n\n        /* ignore case, we really don't care \"tls\"/\"TLS\" */\n        if (!strcasecmp(typename, tmpct->get_type(NULL))) {\n            serverLog(LL_WARNING, \"Connection type %s already registered\", typename);\n            return C_ERR;\n        }\n    }\n\n    serverAssert(type < CONN_TYPE_MAX);\n    serverLog(LL_VERBOSE, \"Connection type %s registered\", typename);\n    connTypes[type] = ct;\n\n    if (ct->init) {\n        ct->init();\n    }\n\n    return C_OK;\n}\n\nint connTypeInitialize(void) {\n    /* currently socket connection type is necessary  */\n    serverAssert(RedisRegisterConnectionTypeSocket() == C_OK);\n\n    /* currently unix socket connection type is necessary  */\n    serverAssert(RedisRegisterConnectionTypeUnix() == C_OK);\n\n    /* may fail if without BUILD_TLS=yes */\n    RedisRegisterConnectionTypeTLS();\n\n    return C_OK;\n}\n\nConnectionType *connectionByType(const char *typename) {\n    ConnectionType *ct;\n\n    for (int type = 0; type < CONN_TYPE_MAX; type++) {\n        ct = connTypes[type];\n        if (!ct)\n            break;\n\n        if (!strcasecmp(typename, ct->get_type(NULL)))\n            return ct;\n    }\n\n    serverLog(LL_WARNING, 
\"Missing implementation of connection type %s\", typename);\n\n    return NULL;\n}\n\n/* Cache TCP connection type, query it by string once */\nConnectionType *connectionTypeTcp(void) {\n    static ConnectionType *ct_tcp = NULL;\n\n    if (ct_tcp != NULL)\n        return ct_tcp;\n\n    ct_tcp = connectionByType(CONN_TYPE_SOCKET);\n    serverAssert(ct_tcp != NULL);\n\n    return ct_tcp;\n}\n\n/* Cache TLS connection type, query it by string once */\nConnectionType *connectionTypeTls(void) {\n    static ConnectionType *ct_tls = NULL;\n    static int cached = 0;\n\n    /* Unlike the TCP and Unix connections, the TLS one can be missing,\n     * so we need the cached pointer to handle NULL correctly too. */\n    if (!cached) {\n        cached = 1;\n        ct_tls = connectionByType(CONN_TYPE_TLS);\n    }\n\n    return ct_tls;\n}\n\n/* Cache Unix connection type, query it by string once */\nConnectionType *connectionTypeUnix(void) {\n    static ConnectionType *ct_unix = NULL;\n\n    if (ct_unix != NULL)\n        return ct_unix;\n\n    ct_unix = connectionByType(CONN_TYPE_UNIX);\n    return ct_unix;\n}\n\nint connectionIndexByType(const char *typename) {\n    ConnectionType *ct;\n\n    for (int type = 0; type < CONN_TYPE_MAX; type++) {\n        ct = connTypes[type];\n        if (!ct)\n            break;\n\n        if (!strcasecmp(typename, ct->get_type(NULL)))\n            return type;\n    }\n\n    return -1;\n}\n\nvoid connTypeCleanupAll(void) {\n    ConnectionType *ct;\n    int type;\n\n    for (type = 0; type < CONN_TYPE_MAX; type++) {\n        ct = connTypes[type];\n        if (!ct)\n            break;\n\n        if (ct->cleanup)\n            ct->cleanup();\n    }\n}\n\n/* walk all the connection types until one has pending data */\nint connTypeHasPendingData(struct aeEventLoop *el) {\n    ConnectionType *ct;\n    int type;\n    int ret = 0;\n\n    for (type = 0; type < CONN_TYPE_MAX; type++) {\n        ct = connTypes[type];\n        if (ct && ct->has_pending_data && (ret = 
ct->has_pending_data(el))) {\n            return ret;\n        }\n    }\n\n    return ret;\n}\n\n/* walk all the connection types and process pending data for each connection type */\nint connTypeProcessPendingData(struct aeEventLoop *el) {\n    ConnectionType *ct;\n    int type;\n    int ret = 0;\n\n    for (type = 0; type < CONN_TYPE_MAX; type++) {\n        ct = connTypes[type];\n        if (ct && ct->process_pending_data) {\n            ret += ct->process_pending_data(el);\n        }\n    }\n\n    return ret;\n}\n\nsds getListensInfoString(sds info) {\n    for (int j = 0; j < CONN_TYPE_MAX; j++) {\n        connListener *listener = &server.listeners[j];\n        if (listener->ct == NULL)\n            continue;\n\n        info = sdscatfmt(info, \"listener%i:name=%s\", j, listener->ct->get_type(NULL));\n        for (int i = 0; i < listener->count; i++) {\n            info = sdscatfmt(info, \",bind=%s\", listener->bindaddr[i]);\n        }\n\n        if (listener->port)\n            info = sdscatfmt(info, \",port=%i\", listener->port);\n\n        info = sdscatfmt(info, \"\\r\\n\");\n    }\n\n    return info;\n}\n"
  },
  {
    "path": "src/connection.h",
    "content": "\n/*\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __REDIS_CONNECTION_H\n#define __REDIS_CONNECTION_H\n\n#include <errno.h>\n#include <stdio.h>\n#include <string.h>\n#include <sys/uio.h>\n\n#include \"ae.h\"\n\n#define CONN_INFO_LEN   32\n#define CONN_ADDR_STR_LEN 128 /* Similar to INET6_ADDRSTRLEN, hoping to handle other protocols. */\n\nstruct aeEventLoop;\ntypedef struct connection connection;\ntypedef struct connListener connListener;\n\ntypedef enum {\n    CONN_STATE_NONE = 0,\n    CONN_STATE_CONNECTING,\n    CONN_STATE_ACCEPTING,\n    CONN_STATE_CONNECTED,\n    CONN_STATE_CLOSED,\n    CONN_STATE_ERROR\n} ConnectionState;\n\n#define CONN_FLAG_CLOSE_SCHEDULED   (1<<0)      /* Closed scheduled by a handler */\n#define CONN_FLAG_WRITE_BARRIER     (1<<1)      /* Write barrier requested */\n\n#define CONN_TYPE_SOCKET            \"tcp\"\n#define CONN_TYPE_UNIX              \"unix\"\n#define CONN_TYPE_TLS               \"tls\"\n#define CONN_TYPE_MAX               8           /* 8 is enough to be extendable */\n\ntypedef void (*ConnectionCallbackFunc)(struct connection *conn);\n\ntypedef struct ConnectionType {\n    /* connection type */\n    const char *(*get_type)(struct connection *conn);\n\n    /* connection type initialize & finalize & configure */\n    void (*init)(void); /* auto-call during register */\n    void (*cleanup)(void);\n    int (*configure)(void *priv, int reconfigure);\n\n    /* ae & accept & listen & error & address handler */\n    void (*ae_handler)(struct aeEventLoop *el, int fd, void *clientData, int mask);\n    aeFileProc *accept_handler;\n    int (*addr)(connection *conn, char *ip, size_t ip_len, int *port, int remote);\n    int (*is_local)(connection *conn);\n    int 
(*listen)(connListener *listener);\n\n    /* create/shutdown/close connection */\n    connection* (*conn_create)(struct aeEventLoop *el);\n    connection* (*conn_create_accepted)(struct aeEventLoop *el, int fd, void *priv);\n    void (*shutdown)(struct connection *conn);\n    void (*close)(struct connection *conn);\n\n    /* connect & accept */\n    int (*connect)(struct connection *conn, const char *addr, int port, const char *source_addr, ConnectionCallbackFunc connect_handler);\n    int (*blocking_connect)(struct connection *conn, const char *addr, int port, long long timeout);\n    int (*accept)(struct connection *conn, ConnectionCallbackFunc accept_handler);\n\n    /* IO */\n    int (*write)(struct connection *conn, const void *data, size_t data_len);\n    int (*writev)(struct connection *conn, const struct iovec *iov, int iovcnt);\n    int (*read)(struct connection *conn, void *buf, size_t buf_len);\n    int (*set_write_handler)(struct connection *conn, ConnectionCallbackFunc handler, int barrier);\n    int (*set_read_handler)(struct connection *conn, ConnectionCallbackFunc handler);\n    const char *(*get_last_error)(struct connection *conn);\n    ssize_t (*sync_write)(struct connection *conn, char *ptr, ssize_t size, long long timeout);\n    ssize_t (*sync_read)(struct connection *conn, char *ptr, ssize_t size, long long timeout);\n    ssize_t (*sync_readline)(struct connection *conn, char *ptr, ssize_t size, long long timeout);\n\n    /* event loop */\n    void (*unbind_event_loop)(struct connection *conn);\n    int (*rebind_event_loop)(struct connection *conn, aeEventLoop *el);\n\n    /* pending data */\n    int (*has_pending_data)(struct aeEventLoop *el);\n    int (*process_pending_data)(struct aeEventLoop *el);\n\n    /* TLS specified methods */\n    sds (*get_peer_cert)(struct connection *conn);\n\n    /* Get peer username based on connection type */\n    sds (*get_peer_username)(connection *conn);\n} ConnectionType;\n\nstruct connection {\n    
ConnectionType *type;\n    ConnectionState state;\n    int last_errno;\n    int fd;\n    short int flags;\n    short int refs;\n    unsigned short int iovcnt;\n    void *private_data;\n    struct aeEventLoop *el;\n    ConnectionCallbackFunc conn_handler;\n    ConnectionCallbackFunc write_handler;\n    ConnectionCallbackFunc read_handler;\n};\n\n#define CONFIG_BINDADDR_MAX 16\n\n/* Setup a listener by a connection type */\nstruct connListener {\n    int fd[CONFIG_BINDADDR_MAX];\n    int count;\n    char **bindaddr;\n    int bindaddr_count;\n    int port;\n    ConnectionType *ct;\n    void *priv; /* used by connection type specified data */\n};\n\n/* The connection module does not deal with listening and accepting sockets,\n * so we assume we have a socket when an incoming connection is created.\n *\n * The fd supplied should therefore be associated with an already accept()ed\n * socket.\n *\n * connAccept() may directly call accept_handler(), or return and call it\n * at a later time. This behavior is a bit awkward but aims to reduce the need\n * to wait for the next event loop, if no additional handshake is required.\n *\n * IMPORTANT: accept_handler may decide to close the connection, calling connClose().\n * To make this safe, the connection is only marked with CONN_FLAG_CLOSE_SCHEDULED\n * in this case, and connAccept() returns with an error.\n *\n * connAccept() callers must always check the return value and on error (C_ERR)\n * a connClose() must be called.\n */\n\nstatic inline int connAccept(connection *conn, ConnectionCallbackFunc accept_handler) {\n    return conn->type->accept(conn, accept_handler);\n}\n\n/* Establish a connection.  
The connect_handler will be called when the connection\n * is established, or if an error has occurred.\n *\n * The connection handler will be responsible to set up any read/write handlers\n * as needed.\n *\n * If C_ERR is returned, the operation failed and the connection handler shall\n * not be expected.\n */\nstatic inline int connConnect(connection *conn, const char *addr, int port, const char *src_addr,\n        ConnectionCallbackFunc connect_handler) {\n    return conn->type->connect(conn, addr, port, src_addr, connect_handler);\n}\n\n/* Blocking connect.\n *\n * NOTE: This is implemented in order to simplify the transition to the abstract\n * connections, but should probably be refactored out of cluster.c and replication.c,\n * in favor of a pure async implementation.\n */\nstatic inline int connBlockingConnect(connection *conn, const char *addr, int port, long long timeout) {\n    return conn->type->blocking_connect(conn, addr, port, timeout);\n}\n\n/* Write to connection, behaves the same as write(2).\n *\n * Like write(2), a short write is possible. A -1 return indicates an error.\n *\n * The caller should NOT rely on errno. Testing for an EAGAIN-like condition, use\n * connGetState() to see if the connection state is still CONN_STATE_CONNECTED.\n */\nstatic inline int connWrite(connection *conn, const void *data, size_t data_len) {\n    return conn->type->write(conn, data, data_len);\n}\n\n/* Gather output data from the iovcnt buffers specified by the members of the iov\n * array: iov[0], iov[1], ..., iov[iovcnt-1] and write to connection, behaves the same as writev(3).\n *\n * Like writev(3), a short write is possible. A -1 return indicates an error.\n *\n * The caller should NOT rely on errno. 
Testing for an EAGAIN-like condition, use\n * connGetState() to see if the connection state is still CONN_STATE_CONNECTED.\n */\nstatic inline int connWritev(connection *conn, const struct iovec *iov, int iovcnt) {\n    return conn->type->writev(conn, iov, iovcnt);\n}\n\n/* Read from the connection, behaves the same as read(2).\n * \n * Like read(2), a short read is possible.  A return value of 0 will indicate the\n * connection was closed, and -1 will indicate an error.\n *\n * The caller should NOT rely on errno. Testing for an EAGAIN-like condition, use\n * connGetState() to see if the connection state is still CONN_STATE_CONNECTED.\n */\nstatic inline int connRead(connection *conn, void *buf, size_t buf_len) {\n    int ret = conn->type->read(conn, buf, buf_len);\n    return ret;\n}\n\n/* Register a write handler, to be called when the connection is writable.\n * If NULL, the existing handler is removed.\n */\nstatic inline int connSetWriteHandler(connection *conn, ConnectionCallbackFunc func) {\n    return conn->type->set_write_handler(conn, func, 0);\n}\n\n/* Register a read handler, to be called when the connection is readable.\n * If NULL, the existing handler is removed.\n */\nstatic inline int connSetReadHandler(connection *conn, ConnectionCallbackFunc func) {\n    return conn->type->set_read_handler(conn, func);\n}\n\n/* Set a write handler, and possibly enable a write barrier, this flag is\n * cleared when write handler is changed or removed.\n * With barrier enabled, we never fire the event if the read handler already\n * fired in the same event loop iteration. Useful when you want to persist\n * things to disk before sending replies, and want to do that in a group fashion. 
*/\nstatic inline int connSetWriteHandlerWithBarrier(connection *conn, ConnectionCallbackFunc func, int barrier) {\n    return conn->type->set_write_handler(conn, func, barrier);\n}\n\nstatic inline void connShutdown(connection *conn) {\n    conn->type->shutdown(conn);\n}\n\nstatic inline void connClose(connection *conn) {\n    conn->type->close(conn);\n}\n\n/* Returns the last error encountered by the connection, as a string.  If no error,\n * a NULL is returned.\n */\nstatic inline const char *connGetLastError(connection *conn) {\n    return conn->type->get_last_error(conn);\n}\n\nstatic inline ssize_t connSyncWrite(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return conn->type->sync_write(conn, ptr, size, timeout);\n}\n\nstatic inline ssize_t connSyncRead(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return conn->type->sync_read(conn, ptr, size, timeout);\n}\n\nstatic inline ssize_t connSyncReadLine(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return conn->type->sync_readline(conn, ptr, size, timeout);\n}\n\n/* Return CONN_TYPE_* for the specified connection */\nstatic inline const char *connGetType(connection *conn) {\n    return conn->type->get_type(conn);\n}\n\nstatic inline int connLastErrorRetryable(connection *conn) {\n    return conn->last_errno == EINTR;\n}\n\n/* Get address information of a connection.\n * remote works as boolean type to get local/remote address */\nstatic inline int connAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {\n    if (conn && conn->type->addr) {\n        return conn->type->addr(conn, ip, ip_len, port, remote);\n    }\n\n    return -1;\n}\n\n/* Format an IP,port pair into something easy to parse. If IP is IPv6\n * (matches for \":\"), the ip is surrounded by []. IP and port are just\n * separated by colons. This the standard to display addresses within Redis. 
*/\nstatic inline int formatAddr(char *buf, size_t buf_len, char *ip, int port) {\n    return snprintf(buf, buf_len, strchr(ip,':') ?\n           \"[%s]:%d\" : \"%s:%d\", ip, port);\n}\n\nstatic inline int connFormatAddr(connection *conn, char *buf, size_t buf_len, int remote)\n{\n    char ip[CONN_ADDR_STR_LEN];\n    int port;\n\n    if (connAddr(conn, ip, sizeof(ip), &port, remote) < 0) {\n        return -1;\n    }\n\n    return formatAddr(buf, buf_len, ip, port);\n}\n\nstatic inline int connAddrPeerName(connection *conn, char *ip, size_t ip_len, int *port) {\n    return connAddr(conn, ip, ip_len, port, 1);\n}\n\nstatic inline int connAddrSockName(connection *conn, char *ip, size_t ip_len, int *port) {\n    return connAddr(conn, ip, ip_len, port, 0);\n}\n\n/* Test whether a connection is local or loopback.\n * Returns -1 on failure, 0 if it is not a local connection, 1 if it is */\nstatic inline int connIsLocal(connection *conn) {\n    if (conn && conn->type->is_local) {\n        return conn->type->is_local(conn);\n    }\n\n    return -1;\n}\n\nstatic inline int connGetState(connection *conn) {\n    return conn->state;\n}\n\n/* Returns true if a write handler is registered */\nstatic inline int connHasWriteHandler(connection *conn) {\n    return conn->write_handler != NULL;\n}\n\n/* Returns true if a read handler is registered */\nstatic inline int connHasReadHandler(connection *conn) {\n    return conn->read_handler != NULL;\n}\n\n/* Returns true if the connection is bound to an event loop */\nstatic inline int connHasEventLoop(connection *conn) {\n    return conn->el != NULL;\n}\n\n/* Unbind the current event loop from the connection, so that it can be\n * rebound to a different event loop in the future. 
*/\nstatic inline void connUnbindEventLoop(connection *conn) {\n    if (conn->el == NULL) return;\n    connSetReadHandler(conn, NULL);\n    connSetWriteHandler(conn, NULL);\n    if (conn->type->unbind_event_loop)\n        conn->type->unbind_event_loop(conn);\n    conn->el = NULL;\n}\n\n/* Rebind the connection to another event loop; read/write handlers must not\n * be installed in the current event loop */\nstatic inline int connRebindEventLoop(connection *conn, aeEventLoop *el) {\n    return conn->type->rebind_event_loop(conn, el);\n}\n\n/* Associate a private data pointer with the connection */\nstatic inline void connSetPrivateData(connection *conn, void *data) {\n    conn->private_data = data;\n}\n\n/* Get the associated private data pointer */\nstatic inline void *connGetPrivateData(connection *conn) {\n    return conn->private_data;\n}\n\n/* Return text that describes the connection, suitable for inclusion\n * in CLIENT LIST and similar outputs.\n *\n * For sockets, we always return \"fd=<fdnum>\" to maintain compatibility.\n */\nstatic inline const char *connGetInfo(connection *conn, char *buf, size_t buf_len) {\n    snprintf(buf, buf_len-1, \"fd=%i\", conn == NULL ? 
-1 : conn->fd);\n    return buf;\n}\n\n/* anet-style wrappers to conns */\nint connBlock(connection *conn);\nint connNonBlock(connection *conn);\nint connEnableTcpNoDelay(connection *conn);\nint connDisableTcpNoDelay(connection *conn);\nint connKeepAlive(connection *conn, int interval);\nint connSendTimeout(connection *conn, long long ms);\nint connRecvTimeout(connection *conn, long long ms);\n\n/* Get the peer cert for a secure connection */\nstatic inline sds connGetPeerCert(connection *conn) {\n    if (conn->type->get_peer_cert) {\n        return conn->type->get_peer_cert(conn);\n    }\n\n    return NULL;\n}\n\n/* Get the peer username based on connection type */\nstatic inline sds connGetPeerUsername(connection *conn) {\n    if (conn->type && conn->type->get_peer_username) {\n        return conn->type->get_peer_username(conn);\n    }\n    return NULL;\n}\n\n/* Initialize the redis connection framework */\nint connTypeInitialize(void);\n\n/* Register a connection type into the redis connection framework */\nint connTypeRegister(ConnectionType *ct);\n\n/* Lookup a connection type by type name */\nConnectionType *connectionByType(const char *typename);\n\n/* Fast path to get the TCP connection type */\nConnectionType *connectionTypeTcp(void);\n\n/* Fast path to get the TLS connection type */\nConnectionType *connectionTypeTls(void);\n\n/* Fast path to get the Unix connection type */\nConnectionType *connectionTypeUnix(void);\n\n/* Lookup the index of a connection type by type name, return -1 if not found */\nint connectionIndexByType(const char *typename);\n\n/* Create a connection of the specified type */\nstatic inline connection *connCreate(struct aeEventLoop *el, ConnectionType *ct) {\n    return ct->conn_create(el);\n}\n\n/* Create an accepted connection of the specified type.\n * priv is a connection-type-specific argument */\nstatic inline connection *connCreateAccepted(struct aeEventLoop *el, ConnectionType *ct, int fd, void *priv) {\n    return ct->conn_create_accepted(el, fd, 
priv);\n}\n\n/* Configure a connection type. A typical case is to configure TLS.\n * priv is a connection-type-specific argument;\n * reconfigure is a boolean specifying whether to overwrite the original config */\nstatic inline int connTypeConfigure(ConnectionType *ct, void *priv, int reconfigure) {\n    return ct->configure(priv, reconfigure);\n}\n\n/* Walk all the connection types and clean them up where possible */\nvoid connTypeCleanupAll(void);\n\n/* Test whether any connection type has pending data. */\nint connTypeHasPendingData(struct aeEventLoop *el);\n\n/* Walk all the connection types and process pending data for each connection type */\nint connTypeProcessPendingData(struct aeEventLoop *el);\n\n/* Listen on an initialized listener */\nstatic inline int connListen(connListener *listener) {\n    return listener->ct->listen(listener);\n}\n\n/* Get the accept_handler of a connection type */\nstatic inline aeFileProc *connAcceptHandler(ConnectionType *ct) {\n    if (ct)\n        return ct->accept_handler;\n    return NULL;\n}\n\n/* Get listener information; note that the caller should free the returned non-empty string */\nsds getListensInfoString(sds info);\n\nint RedisRegisterConnectionTypeSocket(void);\nint RedisRegisterConnectionTypeUnix(void);\nint RedisRegisterConnectionTypeTLS(void);\n\n/* Return 1 if the connection is using the TLS protocol, 0 otherwise. */\nstatic inline int connIsTLS(connection *conn) {\n    return conn && conn->type == connectionTypeTls();\n}\n\n#endif  /* __REDIS_CONNECTION_H */\n"
  },
  {
    "path": "src/connhelpers.h",
    "content": "\n/*\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __REDIS_CONNHELPERS_H\n#define __REDIS_CONNHELPERS_H\n\n#include \"connection.h\"\n\n/* These are helper functions that are common to different connection\n * implementations (currently sockets in connection.c and TLS in tls.c).\n *\n * Currently helpers implement the mechanisms for invoking connection\n * handlers and tracking connection references, to allow safe destruction\n * of connections from within a handler.\n */\n\n/* Increment connection references.\n *\n * Inside a connection handler, we guarantee refs >= 1 so it is always\n * safe to connClose().\n *\n * In other cases where we don't want to prematurely lose the connection,\n * it can go beyond 1 as well; currently it is only done by connAccept().\n */\nstatic inline void connIncrRefs(connection *conn) {\n    conn->refs++;\n}\n\n/* Decrement connection references.\n *\n * Note that this is not intended to provide any automatic free logic!\n * callHandler() takes care of that for the common flows, and anywhere an\n * explicit connIncrRefs() is used, the caller is expected to take care of\n * that.\n */\n\nstatic inline void connDecrRefs(connection *conn) {\n    conn->refs--;\n}\n\nstatic inline int connHasRefs(connection *conn) {\n    return conn->refs;\n}\n\n/* Helper for connection implementations to call handlers:\n * 1. Increment refs to protect the connection.\n * 2. Execute the handler (if set).\n * 3. 
Decrement refs and perform deferred close, if refs==0.\n */\nstatic inline int callHandler(connection *conn, ConnectionCallbackFunc handler) {\n    connIncrRefs(conn);\n    if (handler) handler(conn);\n    connDecrRefs(conn);\n    if (conn->flags & CONN_FLAG_CLOSE_SCHEDULED) {\n        if (!connHasRefs(conn)) connClose(conn);\n        return 0;\n    }\n    return 1;\n}\n\n#endif  /* __REDIS_CONNHELPERS_H */\n"
  },
  {
    "path": "src/crc16.c",
    "content": "#include \"server.h\"\n\n/*\n * Copyright 2001-2010 Georges Menie (www.menie.org)\n * Copyright 2010-current Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *     * Redistributions of source code must retain the above copyright\n *       notice, this list of conditions and the following disclaimer.\n *     * Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *     * Neither the name of the University of California, Berkeley nor the\n *       names of its contributors may be used to endorse or promote products\n *       derived from this software without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY\n * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n * DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY\n * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n/* CRC16 implementation according to CCITT standards.\n *\n * Note by @antirez: this is actually the XMODEM CRC 16 algorithm, using the\n * following parameters:\n *\n * Name                       : \"XMODEM\", also known as \"ZMODEM\", \"CRC-16/ACORN\"\n * Width                      : 16 bit\n * Poly                       : 1021 (That is actually x^16 + x^12 + x^5 + 1)\n * Initialization             : 0000\n * Reflect Input byte         : False\n * Reflect Output CRC         : False\n * Xor constant to output CRC : 0000\n * Output for \"123456789\"     : 31C3\n */\n\nstatic const uint16_t crc16tab[256]= {\n    0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7,\n    0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef,\n    0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6,\n    0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de,\n    0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485,\n    0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d,\n    0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4,\n    0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc,\n    0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823,\n    0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b,\n    0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12,\n    0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a,\n    0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41,\n    
0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49,\n    0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70,\n    0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78,\n    0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f,\n    0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067,\n    0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e,\n    0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256,\n    0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d,\n    0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405,\n    0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c,\n    0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634,\n    0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab,\n    0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3,\n    0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a,\n    0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92,\n    0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9,\n    0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1,\n    0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8,\n    0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0\n};\n\nuint16_t crc16(const char *buf, int len) {\n    int counter;\n    uint16_t crc = 0;\n    for (counter = 0; counter < len; counter++)\n            crc = (crc<<8) ^ crc16tab[((crc>>8) ^ *buf++)&0x00FF];\n    return crc;\n}\n"
  },
  {
    "path": "src/crc16_slottable.h",
    "content": "#ifndef _CRC16_TABLE_H__\n#define _CRC16_TABLE_H__\n\n/* A table of the shortest possible alphanumeric string that is mapped by redis' crc16\n * to any given redis cluster slot. \n * \n * The array indexes are slot numbers, so that given a desired slot, this string is guaranteed\n * to make redis cluster route a request to the shard holding this slot \n */\ntypedef char crc16_alphastring[4];\n\nconst crc16_alphastring crc16_slot_table[] = {\n\"06S\", \"Qi\", \"5L5\", \"4Iu\", \"4gY\", \"460\", \"1Y7\", \"1LV\", \"0QG\", \"ru\", \"7Ok\", \"4ji\", \"4DE\", \"65n\", \"2JH\", \"I8\", \"F9\", \"SX\", \"7nF\", \"4KD\", \n\"4eh\", \"6PK\", \"2ke\", \"1Ng\", \"0Sv\", \"4L\", \"491\", \"4hX\", \"4Ft\", \"5C4\", \"2Hy\", \"09R\", \"021\", \"0cX\", \"4Xv\", \"6mU\", \"6Cy\", \"42R\", \"0Mt\", \"nF\", \n\"cv\", \"1Pe\", \"5kK\", \"6NI\", \"74L\", \"4UF\", \"0nh\", \"MZ\", \"2TJ\", \"0ai\", \"4ZG\", \"6od\", \"6AH\", \"40c\", \"0OE\", \"lw\", \"aG\", \"0Bu\", \"5iz\", \"6Lx\", \n\"5R7\", \"4Ww\", \"0lY\", \"Ok\", \"5n3\", \"4ks\", \"8YE\", \"7g\", \"2KR\", \"1nP\", \"714\", \"64t\", \"69D\", \"4Ho\", \"07I\", \"Ps\", \"2hN\", \"1ML\", \"4fC\", \"7CA\", \n\"avs\", \"4iB\", \"0Rl\", \"5V\", \"2Ic\", \"08H\", \"4Gn\", \"66E\", \"aUo\", \"b4e\", \"05x\", \"RB\", \"8f\", \"8VD\", \"4dr\", \"5a2\", \"4zp\", \"6OS\", \"bl\", \"355\", \n\"0or\", \"1j2\", \"75V\", \"bno\", \"4Yl\", \"6lO\", \"Ap\", \"0bB\", \"0Ln\", \"2yM\", \"6Bc\", \"43H\", \"4xA\", \"6Mb\", \"22D\", \"14\", \"0mC\", \"Nq\", \"6cN\", \"4Vm\", \n\"ban\", \"aDl\", \"CA\", \"14Z\", \"8GG\", \"mm\", \"549\", \"41y\", \"53t\", \"464\", \"1Y3\", \"1LR\", \"06W\", \"Qm\", \"5L1\", \"4Iq\", \"4DA\", \"65j\", \"2JL\", \"1oN\", \n\"0QC\", \"6y\", \"7Oo\", \"4jm\", \"4el\", \"6PO\", \"9x\", \"1Nc\", \"04f\", \"2EM\", \"7nB\", \"bqs\", \"4Fp\", \"5C0\", \"d6F\", \"09V\", \"0Sr\", \"4H\", \"495\", \"bRo\", \n\"aio\", \"42V\", \"0Mp\", \"nB\", \"025\", \"17u\", \"4Xr\", \"6mQ\", \"74H\", \"4UB\", \"0nl\", \"3Kn\", 
\"cr\", \"1Pa\", \"5kO\", \"6NM\", \"6AL\", \"40g\", \"0OA\", \"ls\", \n\"2TN\", \"0am\", \"4ZC\", \"aEr\", \"5R3\", \"4Ws\", \"18t\", \"Oo\", \"aC\", \"0Bq\", \"bCl\", \"afn\", \"2KV\", \"1nT\", \"5Uz\", \"64p\", \"5n7\", \"4kw\", \"0PY\", \"7c\", \n\"2hJ\", \"1MH\", \"4fG\", \"6Sd\", \"7mi\", \"4Hk\", \"07M\", \"Pw\", \"2Ig\", \"08L\", \"4Gj\", \"66A\", \"7LD\", \"4iF\", \"0Rh\", \"5R\", \"8b\", \"1Oy\", \"4dv\", \"5a6\", \n\"7oX\", \"4JZ\", \"0qt\", \"RF\", \"0ov\", \"LD\", \"4A9\", \"4TX\", \"4zt\", \"6OW\", \"bh\", \"0AZ\", \"z9\", \"oX\", \"6Bg\", \"43L\", \"4Yh\", \"6lK\", \"At\", \"0bF\", \n\"0mG\", \"Nu\", \"6cJ\", \"4Vi\", \"4xE\", \"6Mf\", \"2vH\", \"10\", \"8GC\", \"mi\", \"5p5\", \"4uu\", \"5Kx\", \"4N8\", \"CE\", \"1pV\", \"0QO\", \"6u\", \"7Oc\", \"4ja\", \n\"4DM\", \"65f\", \"3Za\", \"I0\", \"0rS\", \"Qa\", \"68V\", \"b7F\", \"4gQ\", \"468\", \"dSo\", \"285\", \"274\", \"4D\", \"499\", \"4hP\", \"b8G\", \"67W\", \"0h3\", \"09Z\", \n\"F1\", \"SP\", \"7nN\", \"4KL\", \"51I\", \"6PC\", \"9t\", \"1No\", \"21g\", \"1Pm\", \"5kC\", \"6NA\", \"74D\", \"4UN\", \"X3\", \"MR\", \"029\", \"0cP\", \"bbM\", \"79t\", \n\"4c3\", \"42Z\", \"8Dd\", \"nN\", \"aO\", \"8Ke\", \"4yS\", \"4l2\", \"76u\", \"635\", \"0lQ\", \"Oc\", \"BS\", \"W2\", \"4ZO\", \"6ol\", \"7Qa\", \"40k\", \"0OM\", \"2zn\", \n\"69L\", \"4Hg\", \"07A\", \"2Fj\", \"2hF\", \"k6\", \"4fK\", \"6Sh\", \"7Ny\", \"6K9\", \"0PU\", \"7o\", \"2KZ\", \"1nX\", \"4EW\", \"4P6\", \"7oT\", \"4JV\", \"05p\", \"RJ\", \n\"8n\", \"1Ou\", \"4dz\", \"6QY\", \"7LH\", \"4iJ\", \"d7\", \"qV\", \"2Ik\", \"1li\", \"4Gf\", \"66M\", \"4Yd\", \"6lG\", \"Ax\", \"0bJ\", \"z5\", \"oT\", \"6Bk\", \"4wH\", \n\"4zx\", \"aeI\", \"bd\", \"0AV\", \"0oz\", \"LH\", \"4A5\", \"4TT\", \"5Kt\", \"4N4\", \"CI\", \"14R\", \"0NW\", \"me\", \"541\", \"41q\", \"4xI\", \"6Mj\", \"22L\", \"u4\", \n\"0mK\", \"Ny\", \"6cF\", \"4Ve\", \"4DI\", \"65b\", \"2JD\", \"I4\", \"0QK\", \"6q\", \"7Og\", \"4je\", \"4gU\", \"4r4\", \"2iX\", \"1LZ\", \"0rW\", 
\"Qe\", \"5L9\", \"4Iy\", \n\"4Fx\", \"5C8\", \"0h7\", \"1mw\", \"0Sz\", \"pH\", \"7MV\", \"4hT\", \"4ed\", \"6PG\", \"9p\", \"1Nk\", \"F5\", \"ST\", \"7nJ\", \"4KH\", \"7pH\", \"4UJ\", \"X7\", \"MV\", \n\"cz\", \"1Pi\", \"5kG\", \"6NE\", \"4c7\", \"4vV\", \"0Mx\", \"nJ\", \"0v5\", \"0cT\", \"4Xz\", \"6mY\", \"6bX\", \"5GZ\", \"0lU\", \"Og\", \"aK\", \"0By\", \"4yW\", \"4l6\", \n\"6AD\", \"40o\", \"0OI\", \"2zj\", \"BW\", \"W6\", \"4ZK\", \"6oh\", \"2hB\", \"k2\", \"4fO\", \"6Sl\", \"69H\", \"4Hc\", \"07E\", \"2Fn\", \"d5e\", \"83m\", \"4ES\", \"4P2\", \n\"a0F\", \"bQL\", \"0PQ\", \"7k\", \"8j\", \"1Oq\", \"50W\", \"hbv\", \"7oP\", \"4JR\", \"05t\", \"RN\", \"2Io\", \"08D\", \"4Gb\", \"66I\", \"7LL\", \"4iN\", \"d3\", \"5Z\", \n\"z1\", \"oP\", \"6Bo\", \"43D\", \"5IA\", \"6lC\", \"2Wm\", \"0bN\", \"8ff\", \"LL\", \"4A1\", \"4TP\", \"cPn\", \"aeM\", \"0T3\", \"0AR\", \"0NS\", \"ma\", \"545\", \"41u\", \n\"5Kp\", \"4N0\", \"CM\", \"14V\", \"0mO\", \"2Xl\", \"6cB\", \"4Va\", \"4xM\", \"6Mn\", \"22H\", \"18\", \"04s\", \"SI\", \"7nW\", \"4KU\", \"4ey\", \"6PZ\", \"9m\", \"1Nv\", \n\"e4\", \"pU\", \"7MK\", \"4hI\", \"4Fe\", \"67N\", \"2Hh\", \"09C\", \"06B\", \"Qx\", \"68O\", \"4Id\", \"4gH\", \"6Rk\", \"2iE\", \"j5\", \"0QV\", \"6l\", \"5o8\", \"4jx\", \n\"4DT\", \"4Q5\", \"2JY\", \"82j\", \"BJ\", \"0ax\", \"4ZV\", \"4O7\", \"552\", \"40r\", \"0OT\", \"lf\", \"aV\", \"t7\", \"4yJ\", \"6Li\", \"6bE\", \"4Wf\", \"0lH\", \"Oz\", \n\"2Vj\", \"0cI\", \"4Xg\", \"6mD\", \"6Ch\", \"42C\", \"0Me\", \"nW\", \"cg\", \"1Pt\", \"5kZ\", \"6NX\", \"7pU\", \"4UW\", \"0ny\", \"MK\", \"7LQ\", \"4iS\", \"267\", \"5G\", \n\"0i0\", \"08Y\", \"b9D\", \"66T\", \"7oM\", \"4JO\", \"G2\", \"RS\", \"8w\", \"1Ol\", \"4dc\", \"7Aa\", \"atS\", \"4kb\", \"0PL\", \"7v\", \"2KC\", \"H3\", \"4EN\", \"64e\", \n\"69U\", \"b6E\", \"07X\", \"Pb\", \"dRl\", \"296\", \"4fR\", \"4s3\", \"4xP\", \"4m1\", \"22U\", \"8Jf\", \"0mR\", \"0x3\", \"77v\", \"626\", \"5Km\", \"6no\", \"CP\", \"V1\", \n\"0NN\", \"3kL\", 
\"7Pb\", \"41h\", \"4za\", \"6OB\", \"20d\", \"0AO\", \"Y0\", \"LQ\", \"6an\", \"4TM\", \"bcN\", \"78w\", \"Aa\", \"0bS\", \"8Eg\", \"oM\", \"4b0\", \"43Y\", \n\"51T\", \"azL\", \"9i\", \"1Nr\", \"04w\", \"SM\", \"7nS\", \"4KQ\", \"4Fa\", \"67J\", \"2Hl\", \"09G\", \"e0\", \"4Y\", \"7MO\", \"4hM\", \"4gL\", \"6Ro\", \"2iA\", \"j1\", \n\"06F\", \"2Gm\", \"68K\", \"5YA\", \"4DP\", \"4Q1\", \"d4f\", \"82n\", \"0QR\", \"6h\", \"a1E\", \"bPO\", \"556\", \"40v\", \"0OP\", \"lb\", \"BN\", \"15U\", \"4ZR\", \"4O3\", \n\"6bA\", \"4Wb\", \"0lL\", \"2Yo\", \"aR\", \"t3\", \"4yN\", \"6Lm\", \"6Cl\", \"42G\", \"0Ma\", \"nS\", \"2Vn\", \"0cM\", \"4Xc\", \"79i\", \"74Y\", \"4US\", \"8ge\", \"MO\", \n\"cc\", \"1Pp\", \"bAL\", \"adN\", \"0i4\", \"1lt\", \"5WZ\", \"66P\", \"7LU\", \"4iW\", \"0Ry\", \"5C\", \"8s\", \"1Oh\", \"4dg\", \"6QD\", \"7oI\", \"4JK\", \"G6\", \"RW\", \n\"2KG\", \"H7\", \"4EJ\", \"64a\", \"7Nd\", \"4kf\", \"0PH\", \"7r\", \"1X8\", \"1MY\", \"4fV\", \"4s7\", \"69Q\", \"4Hz\", \"0sT\", \"Pf\", \"0mV\", \"Nd\", \"5S8\", \"4Vx\", \n\"4xT\", \"4m5\", \"22Q\", \"0Cz\", \"0NJ\", \"mx\", \"7Pf\", \"41l\", \"5Ki\", \"6nk\", \"CT\", \"V5\", \"Y4\", \"LU\", \"6aj\", \"4TI\", \"4ze\", \"6OF\", \"by\", \"0AK\", \n\"2l9\", \"oI\", \"4b4\", \"4wU\", \"4Yy\", \"6lZ\", \"Ae\", \"0bW\", \"0So\", \"4U\", \"7MC\", \"4hA\", \"4Fm\", \"67F\", \"3XA\", \"09K\", \"0ps\", \"SA\", \"aTl\", \"b5f\", \n\"4eq\", \"6PR\", \"9e\", \"8WG\", \"8XF\", \"6d\", \"5o0\", \"4jp\", \"707\", \"65w\", \"1z2\", \"1oS\", \"06J\", \"Qp\", \"68G\", \"4Il\", \"53i\", \"6Rc\", \"2iM\", \"1LO\", \n\"23G\", \"07\", \"4yB\", \"6La\", \"6bM\", \"4Wn\", \"18i\", \"Or\", \"BB\", \"0ap\", \"c4D\", \"aEo\", \"5q2\", \"40z\", \"8FD\", \"ln\", \"co\", \"346\", \"5kR\", \"6NP\", \n\"74U\", \"bol\", \"0nq\", \"MC\", \"2Vb\", \"0cA\", \"4Xo\", \"6mL\", \"7SA\", \"42K\", \"0Mm\", \"2xN\", \"7oE\", \"4JG\", \"05a\", \"2DJ\", \"2jf\", \"1Od\", \"4dk\", \"6QH\", \n\"482\", \"5yz\", \"0Ru\", \"5O\", \"0i8\", \"08Q\", 
\"4Gw\", \"5B7\", \"5M6\", \"4Hv\", \"07P\", \"Pj\", \"1X4\", \"1MU\", \"4fZ\", \"473\", \"7Nh\", \"4kj\", \"0PD\", \"sv\", \n\"2KK\", \"1nI\", \"4EF\", \"64m\", \"5Ke\", \"6ng\", \"CX\", \"V9\", \"0NF\", \"mt\", \"7Pj\", \"4uh\", \"4xX\", \"4m9\", \"1F6\", \"0Cv\", \"0mZ\", \"Nh\", \"5S4\", \"4Vt\", \n\"4Yu\", \"6lV\", \"Ai\", \"16r\", \"0Lw\", \"oE\", \"4b8\", \"43Q\", \"4zi\", \"6OJ\", \"bu\", \"0AG\", \"Y8\", \"LY\", \"6af\", \"4TE\", \"4Fi\", \"67B\", \"2Hd\", \"09O\", \n\"e8\", \"4Q\", \"7MG\", \"4hE\", \"4eu\", \"6PV\", \"9a\", \"1Nz\", \"0pw\", \"SE\", \"aTh\", \"4KY\", \"4DX\", \"4Q9\", \"1z6\", \"1oW\", \"0QZ\", \"rh\", \"5o4\", \"4jt\", \n\"4gD\", \"6Rg\", \"2iI\", \"j9\", \"06N\", \"Qt\", \"68C\", \"4Ih\", \"6bI\", \"4Wj\", \"0lD\", \"Ov\", \"aZ\", \"03\", \"4yF\", \"6Le\", \"5q6\", \"4tv\", \"0OX\", \"lj\", \n\"BF\", \"0at\", \"4ZZ\", \"6oy\", \"74Q\", \"5Ez\", \"0nu\", \"MG\", \"ck\", \"1Px\", \"5kV\", \"6NT\", \"6Cd\", \"42O\", \"0Mi\", \"2xJ\", \"2Vf\", \"0cE\", \"4Xk\", \"6mH\", \n\"2jb\", \"8VY\", \"4do\", \"6QL\", \"7oA\", \"4JC\", \"05e\", \"2DN\", \"d7E\", \"08U\", \"4Gs\", \"5B3\", \"486\", \"bSl\", \"0Rq\", \"5K\", \"1X0\", \"1MQ\", \"52w\", \"477\", \n\"5M2\", \"4Hr\", \"07T\", \"Pn\", \"2KO\", \"1nM\", \"4EB\", \"64i\", \"7Nl\", \"4kn\", \"8YX\", \"7z\", \"0NB\", \"mp\", \"7Pn\", \"41d\", \"5Ka\", \"6nc\", \"2UM\", \"14G\", \n\"19w\", \"Nl\", \"5S0\", \"4Vp\", \"bBo\", \"agm\", \"1F2\", \"0Cr\", \"0Ls\", \"oA\", \"ahl\", \"43U\", \"4Yq\", \"6lR\", \"Am\", \"16v\", \"0oo\", \"2ZL\", \"6ab\", \"4TA\", \n\"4zm\", \"6ON\", \"bq\", \"0AC\", \"2VY\", \"0cz\", \"4XT\", \"4M5\", \"570\", \"42p\", \"0MV\", \"nd\", \"cT\", \"v5\", \"5ki\", \"6Nk\", \"74n\", \"4Ud\", \"0nJ\", \"Mx\", \n\"By\", \"0aK\", \"4Ze\", \"6oF\", \"6Aj\", \"40A\", \"y4\", \"lU\", \"ae\", \"0BW\", \"4yy\", \"581\", \"4B4\", \"4WU\", \"18R\", \"OI\", \"06q\", \"QK\", \"7lU\", \"4IW\", \n\"53R\", \"6RX\", \"0I4\", \"1Lt\", \"g6\", \"rW\", \"7OI\", \"4jK\", \"4Dg\", \"65L\", 
\"2Jj\", \"1oh\", \"0pH\", \"Sz\", \"7nd\", \"4Kf\", \"4eJ\", \"6Pi\", \"2kG\", \"h7\", \n\"0ST\", \"4n\", \"7Mx\", \"4hz\", \"4FV\", \"4S7\", \"1x8\", \"09p\", \"4zR\", \"4o3\", \"bN\", \"8Hd\", \"0oP\", \"Lb\", \"75t\", \"604\", \"4YN\", \"6lm\", \"AR\", \"T3\", \n\"0LL\", \"2yo\", \"6BA\", \"43j\", \"4xc\", \"agR\", \"22f\", \"0CM\", \"0ma\", \"NS\", \"6cl\", \"4VO\", \"baL\", \"aDN\", \"Cc\", \"14x\", \"8Ge\", \"mO\", \"7PQ\", \"4uS\", \n\"7NS\", \"4kQ\", \"245\", \"7E\", \"0k2\", \"1nr\", \"coo\", \"64V\", \"69f\", \"4HM\", \"E0\", \"PQ\", \"2hl\", \"1Mn\", \"4fa\", \"6SB\", \"7Lb\", \"5yA\", \"0RN\", \"5t\", \n\"2IA\", \"J1\", \"4GL\", \"66g\", \"aUM\", \"b4G\", \"05Z\", \"0d3\", \"8D\", \"8Vf\", \"4dP\", \"459\", \"574\", \"42t\", \"0MR\", \"0X3\", \"dln\", \"17W\", \"4XP\", \"4M1\", \n\"74j\", \"5EA\", \"0nN\", \"3KL\", \"cP\", \"29\", \"5km\", \"6No\", \"6An\", \"40E\", \"y0\", \"lQ\", \"2Tl\", \"0aO\", \"4Za\", \"6oB\", \"4B0\", \"4WQ\", \"18V\", \"OM\", \n\"aa\", \"0BS\", \"bCN\", \"585\", \"53V\", \"axN\", \"0I0\", \"1Lp\", \"06u\", \"QO\", \"68x\", \"4IS\", \"4Dc\", \"65H\", \"2Jn\", \"1ol\", \"g2\", \"rS\", \"7OM\", \"4jO\", \n\"4eN\", \"6Pm\", \"9Z\", \"h3\", \"04D\", \"2Eo\", \"aTS\", \"4Kb\", \"4FR\", \"4S3\", \"d6d\", \"09t\", \"0SP\", \"4j\", \"a3G\", \"bRM\", \"0oT\", \"Lf\", \"6aY\", \"4Tz\", \n\"4zV\", \"4o7\", \"bJ\", \"0Ax\", \"0LH\", \"oz\", \"6BE\", \"43n\", \"4YJ\", \"6li\", \"AV\", \"T7\", \"0me\", \"NW\", \"6ch\", \"4VK\", \"4xg\", \"6MD\", \"22b\", \"0CI\", \n\"0Ny\", \"mK\", \"7PU\", \"4uW\", \"5KZ\", \"6nX\", \"Cg\", \"1pt\", \"0k6\", \"1nv\", \"4Ey\", \"64R\", \"7NW\", \"4kU\", \"241\", \"7A\", \"2hh\", \"1Mj\", \"4fe\", \"6SF\", \n\"69b\", \"4HI\", \"E4\", \"PU\", \"2IE\", \"J5\", \"4GH\", \"66c\", \"7Lf\", \"4id\", \"0RJ\", \"5p\", \"2jY\", \"8Vb\", \"4dT\", \"4q5\", \"5O8\", \"4Jx\", \"0qV\", \"Rd\", \n\"21E\", \"25\", \"5ka\", \"6Nc\", \"74f\", \"4Ul\", \"0nB\", \"Mp\", \"1f2\", \"0cr\", \"bbo\", \"79V\", \"578\", \"42x\", 
\"395\", \"nl\", \"am\", \"364\", \"4yq\", \"589\", \n\"76W\", \"bmn\", \"0ls\", \"OA\", \"Bq\", \"0aC\", \"4Zm\", \"6oN\", \"6Ab\", \"40I\", \"0Oo\", \"2zL\", \"0Qm\", \"6W\", \"7OA\", \"4jC\", \"4Do\", \"65D\", \"2Jb\", \"82Q\", \n\"06y\", \"QC\", \"68t\", \"b7d\", \"4gs\", \"5b3\", \"dSM\", \"8UE\", \"8ZD\", \"4f\", \"5m2\", \"4hr\", \"725\", \"67u\", \"1x0\", \"09x\", \"04H\", \"Sr\", \"7nl\", \"4Kn\", \n\"4eB\", \"6Pa\", \"9V\", \"1NM\", \"4YF\", \"6le\", \"AZ\", \"0bh\", \"0LD\", \"ov\", \"6BI\", \"43b\", \"4zZ\", \"6Oy\", \"bF\", \"0At\", \"0oX\", \"Lj\", \"5Q6\", \"4Tv\", \n\"5KV\", \"6nT\", \"Ck\", \"14p\", \"0Nu\", \"mG\", \"7PY\", \"41S\", \"4xk\", \"6MH\", \"22n\", \"0CE\", \"0mi\", \"2XJ\", \"6cd\", \"4VG\", \"69n\", \"4HE\", \"E8\", \"PY\", \n\"2hd\", \"1Mf\", \"4fi\", \"6SJ\", \"ath\", \"4kY\", \"0Pw\", \"7M\", \"2Kx\", \"1nz\", \"4Eu\", \"6pV\", \"5O4\", \"4Jt\", \"05R\", \"Rh\", \"8L\", \"1OW\", \"4dX\", \"451\", \n\"7Lj\", \"4ih\", \"0RF\", \"qt\", \"2II\", \"J9\", \"4GD\", \"66o\", \"74b\", \"4Uh\", \"0nF\", \"Mt\", \"cX\", \"21\", \"5ke\", \"6Ng\", \"5s4\", \"4vt\", \"0MZ\", \"nh\", \n\"1f6\", \"0cv\", \"4XX\", \"4M9\", \"4B8\", \"4WY\", \"0lw\", \"OE\", \"ai\", \"1Rz\", \"4yu\", \"6LV\", \"6Af\", \"40M\", \"y8\", \"lY\", \"Bu\", \"0aG\", \"4Zi\", \"6oJ\", \n\"4Dk\", \"6qH\", \"2Jf\", \"1od\", \"0Qi\", \"6S\", \"7OE\", \"4jG\", \"4gw\", \"5b7\", \"0I8\", \"1Lx\", \"0ru\", \"QG\", \"68p\", \"5Yz\", \"4FZ\", \"67q\", \"1x4\", \"1mU\", \n\"0SX\", \"4b\", \"5m6\", \"4hv\", \"4eF\", \"6Pe\", \"9R\", \"1NI\", \"04L\", \"Sv\", \"7nh\", \"4Kj\", \"8EX\", \"or\", \"6BM\", \"43f\", \"4YB\", \"6la\", \"2WO\", \"0bl\", \n\"8fD\", \"Ln\", \"5Q2\", \"4Tr\", \"cPL\", \"aeo\", \"bB\", \"0Ap\", \"0Nq\", \"mC\", \"ajn\", \"41W\", \"5KR\", \"6nP\", \"Co\", \"14t\", \"0mm\", \"2XN\", \"77I\", \"4VC\", \n\"4xo\", \"6ML\", \"22j\", \"0CA\", \"3xA\", \"1Mb\", \"4fm\", \"6SN\", \"69j\", \"4HA\", \"07g\", \"2FL\", \"d5G\", \"83O\", \"4Eq\", \"64Z\", \"a0d\", \"bQn\", 
\"0Ps\", \"7I\", \n\"8H\", \"1OS\", \"50u\", \"455\", \"5O0\", \"4Jp\", \"05V\", \"Rl\", \"2IM\", \"08f\", \"5Wa\", \"66k\", \"7Ln\", \"4il\", \"0RB\", \"5x\", \"Bh\", \"0aZ\", \"4Zt\", \"6oW\", \n\"4a9\", \"40P\", \"0Ov\", \"lD\", \"at\", \"0BF\", \"4yh\", \"6LK\", \"6bg\", \"4WD\", \"Z9\", \"OX\", \"2VH\", \"U8\", \"4XE\", \"6mf\", \"6CJ\", \"42a\", \"0MG\", \"nu\", \n\"cE\", \"1PV\", \"5kx\", \"4n8\", \"5P5\", \"4Uu\", \"8gC\", \"Mi\", \"04Q\", \"Sk\", \"5N7\", \"4Kw\", \"51r\", \"442\", \"9O\", \"1NT\", \"0SE\", \"pw\", \"7Mi\", \"4hk\", \n\"4FG\", \"67l\", \"2HJ\", \"09a\", \"3\", \"QZ\", \"68m\", \"4IF\", \"4gj\", \"6RI\", \"2ig\", \"1Le\", \"0Qt\", \"6N\", \"7OX\", \"4jZ\", \"4Dv\", \"5A6\", \"0j9\", \"1oy\", \n\"4xr\", \"6MQ\", \"22w\", \"377\", \"0mp\", \"NB\", \"77T\", \"blm\", \"5KO\", \"6nM\", \"Cr\", \"14i\", \"0Nl\", \"3kn\", \"ajs\", \"41J\", \"4zC\", \"aer\", \"20F\", \"36\", \n\"0oA\", \"Ls\", \"6aL\", \"4To\", \"bcl\", \"78U\", \"AC\", \"0bq\", \"386\", \"oo\", \"5r3\", \"4ws\", \"5l1\", \"4iq\", \"9Kf\", \"5e\", \"1y3\", \"1lR\", \"736\", \"66v\", \n\"7oo\", \"4Jm\", \"05K\", \"Rq\", \"8U\", \"1ON\", \"4dA\", \"6Qb\", \"7NB\", \"bQs\", \"0Pn\", \"7T\", \"2Ka\", \"1nc\", \"4El\", \"64G\", \"69w\", \"b6g\", \"07z\", \"1v2\", \n\"dRN\", \"8TF\", \"4fp\", \"5c0\", \"akm\", \"40T\", \"0Or\", \"1J2\", \"Bl\", \"15w\", \"4Zp\", \"6oS\", \"6bc\", \"5Ga\", \"0ln\", \"2YM\", \"ap\", \"0BB\", \"4yl\", \"6LO\", \n\"6CN\", \"42e\", \"0MC\", \"nq\", \"2VL\", \"0co\", \"4XA\", \"6mb\", \"5P1\", \"4Uq\", \"8gG\", \"Mm\", \"cA\", \"1PR\", \"bAn\", \"adl\", \"51v\", \"446\", \"9K\", \"1NP\", \n\"04U\", \"So\", \"5N3\", \"4Ks\", \"4FC\", \"67h\", \"2HN\", \"09e\", \"0SA\", \"ps\", \"7Mm\", \"4ho\", \"4gn\", \"6RM\", \"2ic\", \"1La\", \"7\", \"2GO\", \"68i\", \"4IB\", \n\"4Dr\", \"5A2\", \"d4D\", \"82L\", \"0Qp\", \"6J\", \"a1g\", \"bPm\", \"0mt\", \"NF\", \"6cy\", \"4VZ\", \"4xv\", \"6MU\", \"0V9\", \"0CX\", \"0Nh\", \"mZ\", \"7PD\", \"41N\", \n\"5KK\", \"6nI\", 
\"Cv\", \"14m\", \"0oE\", \"Lw\", \"6aH\", \"4Tk\", \"4zG\", \"6Od\", \"20B\", \"32\", \"0LY\", \"ok\", \"5r7\", \"4ww\", \"5Iz\", \"6lx\", \"AG\", \"0bu\", \n\"1y7\", \"1lV\", \"4GY\", \"4R8\", \"5l5\", \"4iu\", \"1Bz\", \"5a\", \"8Q\", \"i8\", \"4dE\", \"6Qf\", \"7ok\", \"4Ji\", \"05O\", \"Ru\", \"2Ke\", \"1ng\", \"4Eh\", \"64C\", \n\"7NF\", \"4kD\", \"f9\", \"7P\", \"2hy\", \"3m9\", \"4ft\", \"5c4\", \"69s\", \"4HX\", \"0sv\", \"PD\", \"23e\", \"0BN\", \"5iA\", \"6LC\", \"6bo\", \"4WL\", \"Z1\", \"OP\", \n\"0t3\", \"0aR\", \"c4f\", \"aEM\", \"4a1\", \"40X\", \"8Ff\", \"lL\", \"cM\", \"8Ig\", \"5kp\", \"4n0\", \"74w\", \"617\", \"0nS\", \"Ma\", \"3Fa\", \"U0\", \"4XM\", \"6mn\", \n\"6CB\", \"42i\", \"0MO\", \"2xl\", \"0SM\", \"4w\", \"7Ma\", \"4hc\", \"4FO\", \"67d\", \"2HB\", \"K2\", \"04Y\", \"Sc\", \"aTN\", \"b5D\", \"4eS\", \"4p2\", \"9G\", \"8We\", \n\"256\", \"6F\", \"7OP\", \"4jR\", \"cnl\", \"65U\", \"0j1\", \"1oq\", \"D3\", \"QR\", \"68e\", \"4IN\", \"4gb\", \"6RA\", \"2io\", \"1Lm\", \"5KG\", \"6nE\", \"Cz\", \"14a\", \n\"x7\", \"mV\", \"7PH\", \"41B\", \"4xz\", \"592\", \"0V5\", \"0CT\", \"0mx\", \"NJ\", \"4C7\", \"4VV\", \"4YW\", \"4L6\", \"AK\", \"0by\", \"0LU\", \"og\", \"563\", \"43s\", \n\"4zK\", \"6Oh\", \"bW\", \"w6\", \"0oI\", \"2Zj\", \"6aD\", \"4Tg\", \"7og\", \"4Je\", \"05C\", \"Ry\", \"2jD\", \"i4\", \"4dI\", \"6Qj\", \"5l9\", \"4iy\", \"0RW\", \"5m\", \n\"2IX\", \"08s\", \"4GU\", \"4R4\", \"7mV\", \"4HT\", \"07r\", \"PH\", \"0H7\", \"1Mw\", \"4fx\", \"5c8\", \"7NJ\", \"4kH\", \"f5\", \"sT\", \"2Ki\", \"1nk\", \"4Ed\", \"64O\", \n\"6bk\", \"4WH\", \"Z5\", \"OT\", \"ax\", \"0BJ\", \"4yd\", \"6LG\", \"4a5\", \"4tT\", \"0Oz\", \"lH\", \"Bd\", \"0aV\", \"4Zx\", \"aEI\", \"5P9\", \"4Uy\", \"0nW\", \"Me\", \n\"cI\", \"1PZ\", \"5kt\", \"4n4\", \"6CF\", \"42m\", \"0MK\", \"ny\", \"2VD\", \"U4\", \"4XI\", \"6mj\", \"4FK\", \"6sh\", \"2HF\", \"K6\", \"0SI\", \"4s\", \"7Me\", \"4hg\", \n\"4eW\", \"4p6\", \"9C\", \"1NX\", \"0pU\", \"Sg\", \"7ny\", 
\"6k9\", \"4Dz\", \"65Q\", \"0j5\", \"1ou\", \"0Qx\", \"6B\", \"7OT\", \"4jV\", \"4gf\", \"6RE\", \"2ik\", \"1Li\", \n\"D7\", \"QV\", \"68a\", \"4IJ\", \"x3\", \"mR\", \"7PL\", \"41F\", \"5KC\", \"6nA\", \"2Uo\", \"14e\", \"19U\", \"NN\", \"4C3\", \"4VR\", \"bBM\", \"596\", \"0V1\", \"0CP\", \n\"0LQ\", \"oc\", \"567\", \"43w\", \"4YS\", \"4L2\", \"AO\", \"16T\", \"0oM\", \"2Zn\", \"75i\", \"4Tc\", \"4zO\", \"6Ol\", \"bS\", \"w2\", \"8Y\", \"i0\", \"4dM\", \"6Qn\", \n\"7oc\", \"4Ja\", \"05G\", \"2Dl\", \"d7g\", \"08w\", \"4GQ\", \"4R0\", \"a2D\", \"bSN\", \"0RS\", \"5i\", \"0H3\", \"1Ms\", \"52U\", \"ayM\", \"7mR\", \"4HP\", \"07v\", \"PL\", \n\"2Km\", \"1no\", \"5UA\", \"64K\", \"7NN\", \"4kL\", \"f1\", \"7X\", \"5nw\", \"4k7\", \"fJ\", \"0Ex\", \"0kT\", \"Hf\", \"6eY\", \"4Pz\", \"5Mk\", \"6hi\", \"EV\", \"P7\", \n\"0HH\", \"kz\", \"6FE\", \"47n\", \"48o\", \"6ID\", \"26b\", \"0GI\", \"0ie\", \"JW\", \"6gh\", \"4RK\", \"5OZ\", \"6jX\", \"Gg\", \"0dU\", \"0Jy\", \"iK\", \"4d6\", \"4qW\", \n\"4z4\", \"4oU\", \"1DZ\", \"3A\", \"Ye\", \"0zW\", \"4Ay\", \"5D9\", \"6yj\", \"4LI\", \"A4\", \"TU\", \"zy\", \"0YK\", \"4be\", \"6WF\", \"6XG\", \"4md\", \"0VJ\", \"1p\", \n\"2ME\", \"N5\", \"4CH\", \"62c\", \"5K8\", \"4Nx\", \"0uV\", \"Vd\", \"xH\", \"8Rb\", \"5pu\", \"4u5\", \"D\", \"13W\", \"5Lq\", \"4I1\", \"534\", \"46t\", \"0IR\", \"28y\", \n\"gP\", \"69\", \"5om\", \"6Jo\", \"6dC\", \"5AA\", \"0jN\", \"3OL\", \"2Pl\", \"0eO\", \"aT1\", \"6kB\", \"6En\", \"44E\", \"98\", \"hQ\", \"ea\", \"0FS\", \"49u\", \"abL\", \n\"4F0\", \"4SQ\", \"8ag\", \"KM\", \"02u\", \"UO\", \"4X2\", \"4MS\", \"57V\", \"a8F\", \"0M0\", \"0XQ\", \"c2\", \"vS\", \"7KM\", \"4nO\", \"5PB\", \"61H\", \"2Nn\", \"1kl\", \n\"00D\", \"2Ao\", \"6zA\", \"4Ob\", \"4aN\", \"6Tm\", \"yR\", \"l3\", \"0WP\", \"0j\", \"a7G\", \"58W\", \"4BR\", \"4W3\", \"ZN\", \"84l\", \"0kP\", \"Hb\", \"71t\", \"644\", \n\"5ns\", \"4k3\", \"fN\", \"8Ld\", \"0HL\", \"29g\", \"6FA\", \"47j\", \"5Mo\", \"6hm\", \"ER\", \"P3\", 
\"0ia\", \"JS\", \"6gl\", \"4RO\", \"48k\", \"7Ya\", \"26f\", \"0GM\", \n\"8Ce\", \"iO\", \"4d2\", \"4qS\", \"beL\", \"hYw\", \"Gc\", \"0dQ\", \"Ya\", \"0zS\", \"cko\", \"60V\", \"4z0\", \"4oQ\", \"205\", \"3E\", \"2ll\", \"0YO\", \"4ba\", \"6WB\", \n\"6yn\", \"4LM\", \"A0\", \"TQ\", \"2MA\", \"N1\", \"4CL\", \"62g\", \"6XC\", \"59I\", \"0VN\", \"1t\", \"xL\", \"8Rf\", \"54y\", \"419\", \"aQM\", \"b0G\", \"01Z\", \"3PP\", \n\"530\", \"46p\", \"0IV\", \"jd\", \"DH\", \"0gz\", \"5Lu\", \"4I5\", \"6dG\", \"4Qd\", \"0jJ\", \"Ix\", \"gT\", \"r5\", \"5oi\", \"6Jk\", \"6Ej\", \"44A\", \"0Kg\", \"hU\", \n\"Fy\", \"0eK\", \"5ND\", \"6kF\", \"4F4\", \"4SU\", \"1xZ\", \"KI\", \"ee\", \"0FW\", \"49q\", \"5x9\", \"57R\", \"6VX\", \"0M4\", \"0XU\", \"02q\", \"UK\", \"4X6\", \"4MW\", \n\"5PF\", \"61L\", \"2Nj\", \"1kh\", \"c6\", \"vW\", \"7KI\", \"4nK\", \"4aJ\", \"6Ti\", \"yV\", \"l7\", \"0tH\", \"Wz\", \"6zE\", \"4Of\", \"4BV\", \"4W7\", \"ZJ\", \"0yx\", \n\"0WT\", \"0n\", \"6YY\", \"4lz\", \"5Mc\", \"6ha\", \"2SO\", \"0fl\", \"1Xa\", \"kr\", \"6FM\", \"47f\", \"bDm\", \"aao\", \"fB\", \"0Ep\", \"8bD\", \"Hn\", \"5U2\", \"4Pr\", \n\"5OR\", \"5Z3\", \"Go\", \"10t\", \"0Jq\", \"iC\", \"ann\", \"45W\", \"48g\", \"6IL\", \"ds\", \"0GA\", \"0im\", \"3Lo\", \"73I\", \"4RC\", \"6yb\", \"4LA\", \"03g\", \"2BL\", \n\"zq\", \"0YC\", \"4bm\", \"6WN\", \"a4d\", \"bUn\", \"0Ts\", \"3I\", \"Ym\", \"87O\", \"4Aq\", \"5D1\", \"5K0\", \"4Np\", \"01V\", \"Vl\", \"2nQ\", \"1KS\", \"54u\", \"415\", \n\"6XO\", \"4ml\", \"0VB\", \"1x\", \"2MM\", \"0xn\", \"5Sa\", \"62k\", \"gX\", \"61\", \"5oe\", \"6Jg\", \"6dK\", \"4Qh\", \"0jF\", \"It\", \"L\", \"0gv\", \"5Ly\", \"4I9\", \n\"5w4\", \"4rt\", \"0IZ\", \"jh\", \"ei\", \"1Vz\", \"5mT\", \"5x5\", \"4F8\", \"4SY\", \"0hw\", \"KE\", \"Fu\", \"0eG\", \"5NH\", \"6kJ\", \"6Ef\", \"44M\", \"90\", \"hY\", \n\"0Ui\", \"2S\", \"7KE\", \"4nG\", \"5PJ\", \"6uH\", \"Xw\", \"1kd\", \"0vu\", \"UG\", \"6xx\", \"790\", \"4cw\", \"5f7\", \"0M8\", \"0XY\", \"0WX\", 
\"0b\", \"5i6\", \"4lv\", \n\"4BZ\", \"63q\", \"ZF\", \"0yt\", \"00L\", \"Wv\", \"6zI\", \"4Oj\", \"4aF\", \"6Te\", \"yZ\", \"0Zh\", \"0HD\", \"kv\", \"6FI\", \"47b\", \"5Mg\", \"6he\", \"EZ\", \"0fh\", \n\"0kX\", \"Hj\", \"5U6\", \"4Pv\", \"7N9\", \"6Ky\", \"fF\", \"0Et\", \"0Ju\", \"iG\", \"6Dx\", \"45S\", \"5OV\", \"5Z7\", \"Gk\", \"0dY\", \"0ii\", \"3Lk\", \"6gd\", \"4RG\", \n\"48c\", \"6IH\", \"dw\", \"0GE\", \"zu\", \"0YG\", \"4bi\", \"6WJ\", \"6yf\", \"4LE\", \"A8\", \"TY\", \"Yi\", \"1jz\", \"4Au\", \"5D5\", \"4z8\", \"4oY\", \"0Tw\", \"3M\", \n\"xD\", \"1KW\", \"54q\", \"411\", \"5K4\", \"4Nt\", \"01R\", \"Vh\", \"2MI\", \"N9\", \"4CD\", \"62o\", \"6XK\", \"4mh\", \"0VF\", \"ut\", \"6dO\", \"4Ql\", \"0jB\", \"Ip\", \n\"25E\", \"65\", \"5oa\", \"6Jc\", \"538\", \"46x\", \"9Pg\", \"jl\", \"H\", \"0gr\", \"bfo\", \"aCm\", \"72W\", \"bin\", \"0hs\", \"KA\", \"em\", \"324\", \"49y\", \"5x1\", \n\"6Eb\", \"44I\", \"94\", \"3nm\", \"Fq\", \"0eC\", \"5NL\", \"6kN\", \"5PN\", \"61D\", \"Xs\", \"86Q\", \"0Um\", \"2W\", \"7KA\", \"4nC\", \"4cs\", \"5f3\", \"39W\", \"8QE\", \n\"02y\", \"UC\", \"aRn\", \"794\", \"765\", \"63u\", \"ZB\", \"0yp\", \"9Ne\", \"0f\", \"5i2\", \"4lr\", \"4aB\", \"6Ta\", \"2oO\", \"0Zl\", \"00H\", \"Wr\", \"6zM\", \"4On\", \n\"5lW\", \"5y6\", \"dj\", \"0GX\", \"0it\", \"JF\", \"6gy\", \"4RZ\", \"5OK\", \"6jI\", \"Gv\", \"0dD\", \"83\", \"iZ\", \"6De\", \"45N\", \"5nf\", \"6Kd\", \"24B\", \"72\", \n\"0kE\", \"Hw\", \"6eH\", \"4Pk\", \"5Mz\", \"6hx\", \"EG\", \"0fu\", \"0HY\", \"kk\", \"5v7\", \"4sw\", \"5h5\", \"4mu\", \"1Fz\", \"1a\", \"2MT\", \"0xw\", \"4CY\", \"4V8\", \n\"7kk\", \"4Ni\", \"01O\", \"Vu\", \"xY\", \"m8\", \"54l\", \"6Uf\", \"6Zg\", \"4oD\", \"b9\", \"3P\", \"Yt\", \"0zF\", \"4Ah\", \"60C\", \"4Y9\", \"4LX\", \"0wv\", \"TD\", \n\"zh\", \"0YZ\", \"4bt\", \"5g4\", \"Fl\", \"11w\", \"5NQ\", \"6kS\", \"aom\", \"44T\", \"0Kr\", \"1N2\", \"ep\", \"0FB\", \"49d\", \"6HO\", \"6fc\", \"5Ca\", \"0hn\", \"3Ml\", \n\"U\", \"0go\", 
\"bfr\", \"6ib\", \"6GN\", \"46e\", \"0IC\", \"jq\", \"gA\", \"0Ds\", \"bEn\", \"hyU\", \"5T1\", \"4Qq\", \"8cG\", \"Im\", \"00U\", \"Wo\", \"5J3\", \"4Os\", \n\"55v\", \"406\", \"yC\", \"0Zq\", \"0WA\", \"ts\", \"6YL\", \"4lo\", \"4BC\", \"63h\", \"2LN\", \"0ym\", \"02d\", \"2CO\", \"6xa\", \"4MB\", \"4cn\", \"6VM\", \"2mc\", \"1Ha\", \n\"0Up\", \"2J\", \"a5g\", \"bTm\", \"5PS\", \"5E2\", \"Xn\", \"86L\", \"0ip\", \"JB\", \"73T\", \"bhm\", \"48z\", \"5y2\", \"dn\", \"337\", \"87\", \"3on\", \"6Da\", \"45J\", \n\"5OO\", \"6jM\", \"Gr\", \"10i\", \"0kA\", \"Hs\", \"6eL\", \"4Po\", \"5nb\", \"aar\", \"24F\", \"76\", \"8AE\", \"ko\", \"5v3\", \"4ss\", \"bgl\", \"aBn\", \"EC\", \"0fq\", \n\"2MP\", \"0xs\", \"776\", \"62v\", \"5h1\", \"4mq\", \"9Of\", \"1e\", \"2nL\", \"1KN\", \"54h\", \"6Ub\", \"7ko\", \"4Nm\", \"01K\", \"Vq\", \"Yp\", \"0zB\", \"4Al\", \"60G\", \n\"6Zc\", \"bUs\", \"0Tn\", \"3T\", \"zl\", \"8PF\", \"4bp\", \"5g0\", \"aSm\", \"787\", \"03z\", \"1r2\", \"4e9\", \"44P\", \"0Kv\", \"hD\", \"Fh\", \"0eZ\", \"5NU\", \"6kW\", \n\"6fg\", \"4SD\", \"0hj\", \"KX\", \"et\", \"0FF\", \"5mI\", \"6HK\", \"6GJ\", \"46a\", \"0IG\", \"ju\", \"Q\", \"Q8\", \"5Ld\", \"6if\", \"5T5\", \"4Qu\", \"1zz\", \"Ii\", \n\"gE\", \"0Dw\", \"5ox\", \"4j8\", \"55r\", \"402\", \"yG\", \"0Zu\", \"00Q\", \"Wk\", \"5J7\", \"4Ow\", \"4BG\", \"63l\", \"2LJ\", \"0yi\", \"0WE\", \"tw\", \"6YH\", \"4lk\", \n\"4cj\", \"6VI\", \"2mg\", \"0XD\", \"0vh\", \"UZ\", \"6xe\", \"4MF\", \"5PW\", \"5E6\", \"Xj\", \"1ky\", \"0Ut\", \"2N\", \"7KX\", \"4nZ\", \"5OC\", \"6jA\", \"2Qo\", \"0dL\", \n\"1ZA\", \"iR\", \"6Dm\", \"45F\", \"48v\", \"acO\", \"db\", \"0GP\", \"94M\", \"JN\", \"4G3\", \"4RR\", \"5Mr\", \"4H2\", \"EO\", \"12T\", \"0HQ\", \"kc\", \"527\", \"47w\", \n\"5nn\", \"6Kl\", \"fS\", \"s2\", \"0kM\", \"3NO\", \"71i\", \"4Pc\", \"7kc\", \"4Na\", \"01G\", \"3PM\", \"xQ\", \"m0\", \"54d\", \"6Un\", \"a6D\", \"59T\", \"0VS\", \"1i\", \n\"197\", \"85o\", \"4CQ\", \"4V0\", \"4Y1\", \"4LP\", \"03v\", 
\"TL\", \"0L3\", \"0YR\", \"56U\", \"a9E\", \"6Zo\", \"4oL\", \"b1\", \"3X\", \"2Om\", \"0zN\", \"5QA\", \"60K\", \n\"ex\", \"0FJ\", \"49l\", \"6HG\", \"6fk\", \"4SH\", \"0hf\", \"KT\", \"Fd\", \"0eV\", \"5NY\", \"aAI\", \"4e5\", \"4pT\", \"0Kz\", \"hH\", \"gI\", \"1TZ\", \"5ot\", \"4j4\", \n\"5T9\", \"4Qy\", \"0jW\", \"Ie\", \"DU\", \"Q4\", \"5Lh\", \"6ij\", \"6GF\", \"46m\", \"0IK\", \"jy\", \"0WI\", \"0s\", \"6YD\", \"4lg\", \"4BK\", \"6wh\", \"ZW\", \"O6\", \n\"0tU\", \"Wg\", \"6zX\", \"6o9\", \"4aW\", \"4t6\", \"yK\", \"0Zy\", \"0Ux\", \"2B\", \"7KT\", \"4nV\", \"bzI\", \"61Q\", \"Xf\", \"1ku\", \"02l\", \"UV\", \"6xi\", \"4MJ\", \n\"4cf\", \"6VE\", \"2mk\", \"0XH\", \"0Jd\", \"iV\", \"6Di\", \"45B\", \"5OG\", \"6jE\", \"Gz\", \"0dH\", \"0ix\", \"JJ\", \"4G7\", \"4RV\", \"48r\", \"6IY\", \"df\", \"0GT\", \n\"0HU\", \"kg\", \"523\", \"47s\", \"5Mv\", \"4H6\", \"EK\", \"0fy\", \"0kI\", \"3NK\", \"6eD\", \"4Pg\", \"5nj\", \"6Kh\", \"fW\", \"s6\", \"xU\", \"m4\", \"5ph\", \"6Uj\", \n\"7kg\", \"4Ne\", \"01C\", \"Vy\", \"193\", \"1hZ\", \"4CU\", \"4V4\", \"5h9\", \"4my\", \"0VW\", \"1m\", \"zd\", \"0YV\", \"4bx\", \"5g8\", \"4Y5\", \"4LT\", \"03r\", \"TH\", \n\"Yx\", \"0zJ\", \"4Ad\", \"60O\", \"6Zk\", \"4oH\", \"b5\", \"wT\", \"6fo\", \"4SL\", \"0hb\", \"KP\", \"27e\", \"0FN\", \"49h\", \"6HC\", \"4e1\", \"44X\", \"8Bf\", \"hL\", \n\"0p3\", \"0eR\", \"bdO\", \"aAM\", \"70w\", \"657\", \"0jS\", \"Ia\", \"gM\", \"8Mg\", \"5op\", \"4j0\", \"6GB\", \"46i\", \"0IO\", \"28d\", \"Y\", \"Q0\", \"5Ll\", \"6in\", \n\"4BO\", \"63d\", \"ZS\", \"O2\", \"0WM\", \"0w\", \"7Ia\", \"4lc\", \"4aS\", \"4t2\", \"yO\", \"8Se\", \"00Y\", \"Wc\", \"aPN\", \"b1D\", \"bzM\", \"61U\", \"Xb\", \"1kq\", \n\"216\", \"2F\", \"7KP\", \"4nR\", \"4cb\", \"6VA\", \"2mo\", \"0XL\", \"02h\", \"UR\", \"6xm\", \"4MN\", \"5j7\", \"4ow\", \"0TY\", \"3c\", \"YG\", \"0zu\", \"5Qz\", \"60p\", \n\"6yH\", \"4Lk\", \"03M\", \"Tw\", \"2lJ\", \"0Yi\", \"4bG\", \"6Wd\", \"6Xe\", \"4mF\", \"0Vh\", \"1R\", \"2Mg\", 
\"0xD\", \"4Cj\", \"62A\", \"7kX\", \"4NZ\", \"0ut\", \"VF\", \n\"xj\", \"1Ky\", \"5pW\", \"5e6\", \"5nU\", \"6KW\", \"fh\", \"0EZ\", \"0kv\", \"HD\", \"4E9\", \"4PX\", \"5MI\", \"6hK\", \"Et\", \"0fF\", \"0Hj\", \"kX\", \"6Fg\", \"47L\", \n\"48M\", \"6If\", \"dY\", \"50\", \"0iG\", \"Ju\", \"6gJ\", \"4Ri\", \"5Ox\", \"4J8\", \"GE\", \"0dw\", \"1Zz\", \"ii\", \"5t5\", \"4qu\", \"02W\", \"Um\", \"5H1\", \"4Mq\", \n\"57t\", \"424\", \"2mP\", \"0Xs\", \"0UC\", \"2y\", \"7Ko\", \"4nm\", \"bzr\", \"61j\", \"2NL\", \"1kN\", \"00f\", \"2AM\", \"6zc\", \"bus\", \"4al\", \"6TO\", \"yp\", \"0ZB\", \n\"0Wr\", \"0H\", \"a7e\", \"58u\", \"4Bp\", \"5G0\", \"Zl\", \"84N\", \"f\", \"13u\", \"5LS\", \"5Y2\", \"amo\", \"46V\", \"0Ip\", \"jB\", \"gr\", \"1Ta\", \"5oO\", \"6JM\", \n\"6da\", \"4QB\", \"0jl\", \"3On\", \"2PN\", \"0em\", \"5Nb\", \"aAr\", \"6EL\", \"44g\", \"0KA\", \"hs\", \"eC\", \"0Fq\", \"49W\", \"abn\", \"5V3\", \"4Ss\", \"8aE\", \"Ko\", \n\"YC\", \"0zq\", \"754\", \"60t\", \"5j3\", \"4os\", \"9Md\", \"3g\", \"2lN\", \"0Ym\", \"4bC\", \"7GA\", \"6yL\", \"4Lo\", \"03I\", \"Ts\", \"2Mc\", \"1ha\", \"4Cn\", \"62E\", \n\"6Xa\", \"4mB\", \"0Vl\", \"1V\", \"xn\", \"8RD\", \"5pS\", \"5e2\", \"aQo\", \"b0e\", \"01x\", \"VB\", \"0kr\", \"1n2\", \"71V\", \"bjo\", \"5nQ\", \"6KS\", \"fl\", \"315\", \n\"0Hn\", \"29E\", \"6Fc\", \"47H\", \"5MM\", \"6hO\", \"Ep\", \"0fB\", \"0iC\", \"Jq\", \"6gN\", \"4Rm\", \"48I\", \"6Ib\", \"26D\", \"54\", \"8CG\", \"im\", \"509\", \"45y\", \n\"ben\", \"hYU\", \"GA\", \"0ds\", \"4cY\", \"420\", \"2mT\", \"0Xw\", \"02S\", \"Ui\", \"5H5\", \"4Mu\", \"5Pd\", \"61n\", \"XY\", \"M8\", \"0UG\", \"vu\", \"7Kk\", \"4ni\", \n\"4ah\", \"6TK\", \"yt\", \"0ZF\", \"B9\", \"WX\", \"6zg\", \"4OD\", \"4Bt\", \"5G4\", \"Zh\", \"0yZ\", \"0Wv\", \"0L\", \"4y9\", \"4lX\", \"6Gy\", \"46R\", \"0It\", \"jF\", \n\"b\", \"0gX\", \"5LW\", \"5Y6\", \"6de\", \"4QF\", \"0jh\", \"IZ\", \"gv\", \"0DD\", \"5oK\", \"6JI\", \"6EH\", \"44c\", \"0KE\", \"hw\", \"2PJ\", \"0ei\", 
\"5Nf\", \"6kd\", \n\"5V7\", \"4Sw\", \"0hY\", \"Kk\", \"eG\", \"0Fu\", \"49S\", \"6Hx\", \"7ia\", \"4Lc\", \"03E\", \"2Bn\", \"zS\", \"o2\", \"4bO\", \"6Wl\", \"a4F\", \"bUL\", \"0TQ\", \"3k\", \n\"YO\", \"87m\", \"4AS\", \"4T2\", \"7kP\", \"4NR\", \"01t\", \"VN\", \"xb\", \"1Kq\", \"54W\", \"hfv\", \"6Xm\", \"4mN\", \"1FA\", \"1Z\", \"2Mo\", \"0xL\", \"4Cb\", \"62I\", \n\"5MA\", \"6hC\", \"2Sm\", \"0fN\", \"0Hb\", \"kP\", \"6Fo\", \"47D\", \"bDO\", \"aaM\", \"0P3\", \"0ER\", \"8bf\", \"HL\", \"4E1\", \"4PP\", \"5Op\", \"4J0\", \"GM\", \"10V\", \n\"0JS\", \"ia\", \"505\", \"45u\", \"48E\", \"6In\", \"dQ\", \"58\", \"0iO\", \"3LM\", \"6gB\", \"4Ra\", \"0UK\", \"2q\", \"7Kg\", \"4ne\", \"5Ph\", \"61b\", \"XU\", \"M4\", \n\"0vW\", \"Ue\", \"5H9\", \"4My\", \"4cU\", \"4v4\", \"2mX\", \"1HZ\", \"0Wz\", \"tH\", \"4y5\", \"4lT\", \"4Bx\", \"5G8\", \"Zd\", \"0yV\", \"B5\", \"WT\", \"6zk\", \"4OH\", \n\"4ad\", \"6TG\", \"yx\", \"0ZJ\", \"gz\", \"0DH\", \"5oG\", \"6JE\", \"6di\", \"4QJ\", \"0jd\", \"IV\", \"n\", \"0gT\", \"680\", \"6iY\", \"4g7\", \"4rV\", \"0Ix\", \"jJ\", \n\"eK\", \"0Fy\", \"5mv\", \"4h6\", \"6fX\", \"5CZ\", \"0hU\", \"Kg\", \"FW\", \"S6\", \"5Nj\", \"6kh\", \"6ED\", \"44o\", \"0KI\", \"3nK\", \"zW\", \"o6\", \"4bK\", \"6Wh\", \n\"6yD\", \"4Lg\", \"03A\", \"2Bj\", \"YK\", \"0zy\", \"4AW\", \"4T6\", \"6ZX\", \"6O9\", \"0TU\", \"3o\", \"xf\", \"1Ku\", \"54S\", \"6UY\", \"7kT\", \"4NV\", \"01p\", \"VJ\", \n\"2Mk\", \"0xH\", \"4Cf\", \"62M\", \"6Xi\", \"4mJ\", \"0Vd\", \"uV\", \"0Hf\", \"kT\", \"6Fk\", \"4sH\", \"5ME\", \"6hG\", \"Ex\", \"0fJ\", \"0kz\", \"HH\", \"4E5\", \"4PT\", \n\"5nY\", \"aaI\", \"fd\", \"0EV\", \"0JW\", \"ie\", \"501\", \"45q\", \"5Ot\", \"4J4\", \"GI\", \"10R\", \"0iK\", \"Jy\", \"6gF\", \"4Re\", \"48A\", \"6Ij\", \"dU\", \"q4\", \n\"5Pl\", \"61f\", \"XQ\", \"M0\", \"0UO\", \"2u\", \"7Kc\", \"4na\", \"4cQ\", \"428\", \"39u\", \"8Qg\", \"0vS\", \"Ua\", \"aRL\", \"b3F\", \"bxO\", \"63W\", \"0l3\", \"0yR\", \n\"234\", \"0D\", \"4y1\", 
\"4lP\", \"55I\", \"6TC\", \"2om\", \"0ZN\", \"B1\", \"WP\", \"6zo\", \"4OL\", \"6dm\", \"4QN\", \"1zA\", \"IR\", \"25g\", \"0DL\", \"5oC\", \"6JA\", \n\"4g3\", \"46Z\", \"9PE\", \"jN\", \"j\", \"0gP\", \"684\", \"aCO\", \"72u\", \"675\", \"0hQ\", \"Kc\", \"eO\", \"8Oe\", \"5mr\", \"4h2\", \"7Ua\", \"44k\", \"0KM\", \"3nO\", \n\"FS\", \"S2\", \"5Nn\", \"6kl\", \"4x6\", \"4mW\", \"0Vy\", \"1C\", \"0m4\", \"0xU\", \"5SZ\", \"62P\", \"7kI\", \"4NK\", \"C6\", \"VW\", \"2nj\", \"1Kh\", \"54N\", \"6UD\", \n\"6ZE\", \"4of\", \"0TH\", \"3r\", \"YV\", \"L7\", \"4AJ\", \"60a\", \"6yY\", \"4Lz\", \"0wT\", \"Tf\", \"zJ\", \"0Yx\", \"4bV\", \"4w7\", \"5lu\", \"4i5\", \"dH\", \"0Gz\", \n\"0iV\", \"Jd\", \"5W8\", \"4Rx\", \"5Oi\", \"6jk\", \"GT\", \"R5\", \"0JJ\", \"ix\", \"6DG\", \"45l\", \"5nD\", \"6KF\", \"fy\", \"0EK\", \"0kg\", \"HU\", \"6ej\", \"4PI\", \n\"5MX\", \"5X9\", \"Ee\", \"0fW\", \"1XZ\", \"kI\", \"4f4\", \"4sU\", \"00w\", \"WM\", \"4Z0\", \"4OQ\", \"55T\", \"hgu\", \"ya\", \"0ZS\", \"a0\", \"0Y\", \"6Yn\", \"4lM\", \n\"4Ba\", \"63J\", \"2Ll\", \"0yO\", \"02F\", \"2Cm\", \"6xC\", \"aG0\", \"4cL\", \"6Vo\", \"2mA\", \"n1\", \"0UR\", \"2h\", \"a5E\", \"bTO\", \"5Pq\", \"4U1\", \"XL\", \"86n\", \n\"FN\", \"11U\", \"5Ns\", \"4K3\", \"516\", \"44v\", \"0KP\", \"hb\", \"eR\", \"p3\", \"49F\", \"6Hm\", \"6fA\", \"4Sb\", \"0hL\", \"3MN\", \"w\", \"0gM\", \"5LB\", \"7ya\", \n\"6Gl\", \"46G\", \"0Ia\", \"jS\", \"gc\", \"0DQ\", \"bEL\", \"hyw\", \"4D2\", \"4QS\", \"8ce\", \"IO\", \"0m0\", \"0xQ\", \"byL\", \"62T\", \"4x2\", \"4mS\", \"227\", \"1G\", \n\"2nn\", \"1Kl\", \"54J\", \"7Ea\", \"7kM\", \"4NO\", \"C2\", \"VS\", \"YR\", \"L3\", \"4AN\", \"60e\", \"6ZA\", \"4ob\", \"0TL\", \"3v\", \"zN\", \"8Pd\", \"4bR\", \"4w3\", \n\"aSO\", \"b2E\", \"03X\", \"Tb\", \"0iR\", \"3LP\", \"73v\", \"666\", \"48X\", \"4i1\", \"dL\", \"8Nf\", \"0JN\", \"3oL\", \"6DC\", \"45h\", \"5Om\", \"6jo\", \"GP\", \"R1\", \n\"0kc\", \"HQ\", \"6en\", \"4PM\", \"a09\", \"6KB\", \"24d\", \"0EO\", 
\"8Ag\", \"kM\", \"4f0\", \"47Y\", \"697\", \"aBL\", \"Ea\", \"0fS\", \"4ay\", \"5d9\", \"ye\", \"0ZW\", \n\"00s\", \"WI\", \"4Z4\", \"4OU\", \"4Be\", \"63N\", \"Zy\", \"0yK\", \"a4\", \"tU\", \"6Yj\", \"4lI\", \"4cH\", \"6Vk\", \"2mE\", \"n5\", \"02B\", \"Ux\", \"6xG\", \"4Md\", \n\"5Pu\", \"4U5\", \"XH\", \"86j\", \"0UV\", \"2l\", \"5k8\", \"4nx\", \"512\", \"44r\", \"0KT\", \"hf\", \"FJ\", \"0ex\", \"5Nw\", \"4K7\", \"6fE\", \"4Sf\", \"0hH\", \"Kz\", \n\"eV\", \"p7\", \"49B\", \"6Hi\", \"6Gh\", \"46C\", \"0Ie\", \"jW\", \"s\", \"0gI\", \"5LF\", \"6iD\", \"4D6\", \"4QW\", \"0jy\", \"IK\", \"gg\", \"0DU\", \"5oZ\", \"6JX\", \n\"7kA\", \"4NC\", \"01e\", \"3Po\", \"xs\", \"8RY\", \"54F\", \"6UL\", \"a6f\", \"59v\", \"0Vq\", \"1K\", \"d3E\", \"85M\", \"4Cs\", \"5F3\", \"5I2\", \"4Lr\", \"03T\", \"Tn\", \n\"zB\", \"0Yp\", \"56w\", \"437\", \"6ZM\", \"4on\", \"1Da\", \"3z\", \"2OO\", \"0zl\", \"4AB\", \"60i\", \"5Oa\", \"6jc\", \"2QM\", \"0dn\", \"0JB\", \"ip\", \"6DO\", \"45d\", \n\"48T\", \"acm\", \"1B2\", \"0Gr\", \"94o\", \"Jl\", \"5W0\", \"4Rp\", \"5MP\", \"5X1\", \"Em\", \"12v\", \"0Hs\", \"kA\", \"all\", \"47U\", \"5nL\", \"6KN\", \"fq\", \"0EC\", \n\"0ko\", \"3Nm\", \"6eb\", \"4PA\", \"a8\", \"0Q\", \"6Yf\", \"4lE\", \"4Bi\", \"63B\", \"Zu\", \"0yG\", \"0tw\", \"WE\", \"4Z8\", \"4OY\", \"4au\", \"5d5\", \"yi\", \"1Jz\", \n\"0UZ\", \"vh\", \"5k4\", \"4nt\", \"5Py\", \"4U9\", \"XD\", \"1kW\", \"02N\", \"Ut\", \"6xK\", \"4Mh\", \"4cD\", \"6Vg\", \"2mI\", \"n9\", \"eZ\", \"43\", \"49N\", \"6He\", \n\"6fI\", \"4Sj\", \"0hD\", \"Kv\", \"FF\", \"0et\", \"7n9\", \"6ky\", \"5u6\", \"4pv\", \"0KX\", \"hj\", \"gk\", \"0DY\", \"5oV\", \"5z7\", \"6dx\", \"5Az\", \"0ju\", \"IG\", \n\"Dw\", \"0gE\", \"5LJ\", \"6iH\", \"6Gd\", \"46O\", \"0Ii\", \"28B\", \"xw\", \"1Kd\", \"54B\", \"6UH\", \"7kE\", \"4NG\", \"01a\", \"3Pk\", \"0m8\", \"0xY\", \"4Cw\", \"5F7\", \n\"6Xx\", \"59r\", \"0Vu\", \"1O\", \"zF\", \"0Yt\", \"4bZ\", \"433\", \"5I6\", \"4Lv\", \"03P\", \"Tj\", \"YZ\", 
\"0zh\", \"4AF\", \"60m\", \"6ZI\", \"4oj\", \"0TD\", \"wv\", \n\"0JF\", \"it\", \"6DK\", \"4qh\", \"5Oe\", \"6jg\", \"GX\", \"R9\", \"0iZ\", \"Jh\", \"5W4\", \"4Rt\", \"48P\", \"4i9\", \"dD\", \"0Gv\", \"0Hw\", \"kE\", \"4f8\", \"47Q\", \n\"5MT\", \"5X5\", \"Ei\", \"12r\", \"0kk\", \"HY\", \"6ef\", \"4PE\", \"5nH\", \"6KJ\", \"fu\", \"0EG\", \"4Bm\", \"63F\", \"Zq\", \"0yC\", \"0Wo\", \"0U\", \"6Yb\", \"4lA\", \n\"4aq\", \"5d1\", \"ym\", \"8SG\", \"0ts\", \"WA\", \"aPl\", \"b1f\", \"747\", \"61w\", \"2NQ\", \"1kS\", \"9Lg\", \"2d\", \"5k0\", \"4np\", \"57i\", \"6Vc\", \"2mM\", \"0Xn\", \n\"02J\", \"Up\", \"6xO\", \"4Ml\", \"6fM\", \"4Sn\", \"1xa\", \"Kr\", \"27G\", \"47\", \"49J\", \"6Ha\", \"5u2\", \"44z\", \"8BD\", \"hn\", \"FB\", \"0ep\", \"bdm\", \"aAo\", \n\"70U\", \"bkl\", \"0jq\", \"IC\", \"go\", \"306\", \"5oR\", \"5z3\", \"7WA\", \"46K\", \"0Im\", \"28F\", \"Ds\", \"0gA\", \"5LN\", \"6iL\", \"0cY\", \"020\", \"6mT\", \"4Xw\", \n\"42S\", \"6Cx\", \"nG\", \"0Mu\", \"1Pd\", \"cw\", \"6NH\", \"5kJ\", \"4UG\", \"74M\", \"3Kk\", \"0ni\", \"0ah\", \"BZ\", \"6oe\", \"4ZF\", \"40b\", \"6AI\", \"lv\", \"0OD\", \n\"0Bt\", \"aF\", \"6Ly\", \"4yZ\", \"4Wv\", \"5R6\", \"Oj\", \"0lX\", \"Qh\", \"06R\", \"4It\", \"5L4\", \"461\", \"4gX\", \"1LW\", \"1Y6\", \"rt\", \"0QF\", \"4jh\", \"7Oj\", \n\"65o\", \"4DD\", \"I9\", \"2JI\", \"SY\", \"F8\", \"4KE\", \"7nG\", \"6PJ\", \"4ei\", \"1Nf\", \"2kd\", \"4M\", \"0Sw\", \"4hY\", \"490\", \"5C5\", \"4Fu\", \"09S\", \"2Hx\", \n\"6OR\", \"4zq\", \"354\", \"bm\", \"LA\", \"0os\", \"bnn\", \"75W\", \"6lN\", \"4Ym\", \"0bC\", \"Aq\", \"2yL\", \"0Lo\", \"43I\", \"6Bb\", \"6Mc\", \"5ha\", \"15\", \"22E\", \n\"Np\", \"0mB\", \"4Vl\", \"6cO\", \"aDm\", \"bao\", \"1pS\", \"1e2\", \"ml\", \"8GF\", \"41x\", \"548\", \"4kr\", \"5n2\", \"7f\", \"8YD\", \"1nQ\", \"2KS\", \"64u\", \"715\", \n\"4Hn\", \"69E\", \"Pr\", \"07H\", \"1MM\", \"2hO\", \"6Sa\", \"4fB\", \"4iC\", \"7LA\", \"5W\", \"0Rm\", \"08I\", \"2Ib\", \"66D\", \"4Go\", \"b4d\", 
\"aUn\", \"RC\", \"05y\", \n\"8VE\", \"8g\", \"5a3\", \"4ds\", \"42W\", \"ain\", \"nC\", \"0Mq\", \"17t\", \"024\", \"6mP\", \"4Xs\", \"4UC\", \"74I\", \"3Ko\", \"0nm\", \"8IY\", \"cs\", \"6NL\", \"5kN\", \n\"40f\", \"6AM\", \"lr\", \"8FX\", \"0al\", \"2TO\", \"6oa\", \"4ZB\", \"4Wr\", \"5R2\", \"On\", \"18u\", \"0Bp\", \"aB\", \"afo\", \"bCm\", \"465\", \"53u\", \"1LS\", \"1Y2\", \n\"Ql\", \"06V\", \"4Ip\", \"5L0\", \"65k\", \"5Ta\", \"1oO\", \"2JM\", \"6x\", \"0QB\", \"4jl\", \"7On\", \"6PN\", \"4em\", \"1Nb\", \"9y\", \"2EL\", \"04g\", \"4KA\", \"7nC\", \n\"5C1\", \"4Fq\", \"09W\", \"d6G\", \"4I\", \"0Ss\", \"bRn\", \"494\", \"LE\", \"0ow\", \"4TY\", \"4A8\", \"6OV\", \"4zu\", \"1Qz\", \"bi\", \"oY\", \"z8\", \"43M\", \"6Bf\", \n\"6lJ\", \"4Yi\", \"0bG\", \"Au\", \"Nt\", \"0mF\", \"4Vh\", \"6cK\", \"6Mg\", \"4xD\", \"11\", \"22A\", \"mh\", \"0NZ\", \"4ut\", \"5p4\", \"4N9\", \"5Ky\", \"1pW\", \"CD\", \n\"1nU\", \"2KW\", \"64q\", \"4EZ\", \"4kv\", \"5n6\", \"7b\", \"0PX\", \"1MI\", \"2hK\", \"6Se\", \"4fF\", \"4Hj\", \"69A\", \"Pv\", \"07L\", \"08M\", \"2If\", \"6rH\", \"4Gk\", \n\"4iG\", \"7LE\", \"5S\", \"0Ri\", \"1Ox\", \"8c\", \"5a7\", \"4dw\", \"5Zz\", \"7oY\", \"RG\", \"0qu\", \"1Pl\", \"21f\", \"adR\", \"5kB\", \"4UO\", \"74E\", \"MS\", \"X2\", \n\"0cQ\", \"028\", \"79u\", \"bbL\", \"4vS\", \"4c2\", \"nO\", \"8De\", \"8Kd\", \"aN\", \"4l3\", \"4yR\", \"634\", \"76t\", \"Ob\", \"0lP\", \"W3\", \"BR\", \"6om\", \"4ZN\", \n\"40j\", \"6AA\", \"2zo\", \"0OL\", \"6t\", \"0QN\", \"5zA\", \"7Ob\", \"65g\", \"4DL\", \"I1\", \"2JA\", \"0g3\", \"06Z\", \"b7G\", \"68W\", \"469\", \"4gP\", \"284\", \"dSn\", \n\"4E\", \"275\", \"4hQ\", \"498\", \"67V\", \"b8F\", \"1mr\", \"0h2\", \"SQ\", \"F0\", \"4KM\", \"7nO\", \"6PB\", \"4ea\", \"1Nn\", \"9u\", \"6lF\", \"4Ye\", \"0bK\", \"Ay\", \n\"oU\", \"z4\", \"43A\", \"6Bj\", \"6OZ\", \"4zy\", \"0AW\", \"be\", \"LI\", \"2O9\", \"4TU\", \"4A4\", \"4N5\", \"5Ku\", \"14S\", \"CH\", \"md\", \"0NV\", \"41p\", \"540\", \n\"6Mk\", 
\"4xH\", \"u5\", \"22M\", \"Nx\", \"0mJ\", \"4Vd\", \"6cG\", \"4Hf\", \"69M\", \"Pz\", \"0sH\", \"k7\", \"2hG\", \"6Si\", \"4fJ\", \"4kz\", \"7Nx\", \"7n\", \"0PT\", \n\"1nY\", \"dqh\", \"4P7\", \"4EV\", \"4JW\", \"7oU\", \"RK\", \"05q\", \"1Ot\", \"8o\", \"6QX\", \"50R\", \"4iK\", \"7LI\", \"qW\", \"d6\", \"08A\", \"2Ij\", \"66L\", \"4Gg\", \n\"4UK\", \"74A\", \"MW\", \"X6\", \"1Ph\", \"21b\", \"6ND\", \"5kF\", \"4vW\", \"4c6\", \"nK\", \"0My\", \"0cU\", \"0v4\", \"6mX\", \"5HZ\", \"4Wz\", \"6bY\", \"Of\", \"0lT\", \n\"0Bx\", \"aJ\", \"4l7\", \"4yV\", \"40n\", \"6AE\", \"lz\", \"0OH\", \"W7\", \"BV\", \"6oi\", \"4ZJ\", \"65c\", \"4DH\", \"I5\", \"2JE\", \"6p\", \"0QJ\", \"4jd\", \"7Of\", \n\"4r5\", \"4gT\", \"280\", \"2iY\", \"Qd\", \"0rV\", \"4Ix\", \"5L8\", \"5C9\", \"4Fy\", \"1mv\", \"0h6\", \"4A\", \"1CZ\", \"4hU\", \"7MW\", \"6PF\", \"4ee\", \"1Nj\", \"9q\", \n\"SU\", \"F4\", \"4KI\", \"7nK\", \"oQ\", \"z0\", \"43E\", \"6Bn\", \"6lB\", \"4Ya\", \"0bO\", \"2Wl\", \"LM\", \"8fg\", \"4TQ\", \"4A0\", \"aeL\", \"cPo\", \"0AS\", \"ba\", \n\"3kP\", \"0NR\", \"41t\", \"544\", \"4N1\", \"5Kq\", \"14W\", \"CL\", \"2Xm\", \"0mN\", \"5FA\", \"6cC\", \"6Mo\", \"4xL\", \"19\", \"22I\", \"k3\", \"2hC\", \"6Sm\", \"4fN\", \n\"4Hb\", \"69I\", \"2Fo\", \"07D\", \"83l\", \"d5d\", \"4P3\", \"4ER\", \"bQM\", \"a0G\", \"7j\", \"0PP\", \"1Op\", \"8k\", \"hbw\", \"50V\", \"4JS\", \"7oQ\", \"RO\", \"05u\", \n\"08E\", \"2In\", \"66H\", \"4Gc\", \"4iO\", \"7LM\", \"qS\", \"d2\", \"0ay\", \"BK\", \"4O6\", \"4ZW\", \"40s\", \"553\", \"lg\", \"0OU\", \"t6\", \"aW\", \"6Lh\", \"4yK\", \n\"4Wg\", \"6bD\", \"2Yj\", \"0lI\", \"0cH\", \"2Vk\", \"6mE\", \"4Xf\", \"42B\", \"6Ci\", \"nV\", \"0Md\", \"1Pu\", \"cf\", \"6NY\", \"bAI\", \"4UV\", \"7pT\", \"MJ\", \"0nx\", \n\"SH\", \"04r\", \"4KT\", \"7nV\", \"azI\", \"4ex\", \"1Nw\", \"9l\", \"pT\", \"e5\", \"4hH\", \"7MJ\", \"67O\", \"4Fd\", \"09B\", \"2Hi\", \"Qy\", \"06C\", \"4Ie\", \"68N\", \n\"6Rj\", \"4gI\", \"j4\", \"2iD\", \"6m\", \"0QW\", 
\"4jy\", \"5o9\", \"4Q4\", \"4DU\", \"1oZ\", \"2JX\", \"4m0\", \"4xQ\", \"8Jg\", \"22T\", \"Na\", \"0mS\", \"627\", \"77w\", \n\"6nn\", \"5Kl\", \"V0\", \"CQ\", \"3kM\", \"0NO\", \"41i\", \"7Pc\", \"6OC\", \"5jA\", \"0AN\", \"20e\", \"LP\", \"Y1\", \"4TL\", \"6ao\", \"78v\", \"bcO\", \"0bR\", \"0w3\", \n\"oL\", \"8Ef\", \"43X\", \"4b1\", \"4iR\", \"7LP\", \"5F\", \"266\", \"08X\", \"0i1\", \"66U\", \"b9E\", \"4JN\", \"7oL\", \"RR\", \"G3\", \"1Om\", \"8v\", \"6QA\", \"4db\", \n\"4kc\", \"7Na\", \"7w\", \"0PM\", \"H2\", \"2KB\", \"64d\", \"4EO\", \"b6D\", \"69T\", \"Pc\", \"07Y\", \"297\", \"dRm\", \"4s2\", \"4fS\", \"40w\", \"557\", \"lc\", \"0OQ\", \n\"15T\", \"BO\", \"4O2\", \"4ZS\", \"4Wc\", \"76i\", \"2Yn\", \"0lM\", \"t2\", \"aS\", \"6Ll\", \"4yO\", \"42F\", \"6Cm\", \"nR\", \"8Dx\", \"0cL\", \"2Vo\", \"6mA\", \"4Xb\", \n\"4UR\", \"74X\", \"MN\", \"8gd\", \"1Pq\", \"cb\", \"adO\", \"bAM\", \"azM\", \"51U\", \"1Ns\", \"9h\", \"SL\", \"04v\", \"4KP\", \"7nR\", \"67K\", \"5VA\", \"09F\", \"2Hm\", \n\"4X\", \"e1\", \"4hL\", \"7MN\", \"6Rn\", \"4gM\", \"j0\", \"3ya\", \"2Gl\", \"06G\", \"4Ia\", \"68J\", \"4Q0\", \"4DQ\", \"82o\", \"d4g\", \"6i\", \"0QS\", \"bPN\", \"a1D\", \n\"Ne\", \"0mW\", \"4Vy\", \"5S9\", \"4m4\", \"4xU\", \"1SZ\", \"22P\", \"my\", \"0NK\", \"41m\", \"7Pg\", \"6nj\", \"5Kh\", \"V4\", \"CU\", \"LT\", \"Y5\", \"4TH\", \"6ak\", \n\"6OG\", \"4zd\", \"0AJ\", \"bx\", \"oH\", \"0Lz\", \"4wT\", \"4b5\", \"78r\", \"4Yx\", \"0bV\", \"Ad\", \"1lu\", \"0i5\", \"66Q\", \"4Gz\", \"4iV\", \"7LT\", \"5B\", \"0Rx\", \n\"1Oi\", \"8r\", \"6QE\", \"4df\", \"4JJ\", \"7oH\", \"RV\", \"G7\", \"H6\", \"2KF\", \"6ph\", \"4EK\", \"4kg\", \"7Ne\", \"7s\", \"0PI\", \"1MX\", \"1X9\", \"4s6\", \"4fW\", \n\"5XZ\", \"69P\", \"Pg\", \"0sU\", \"06\", \"23F\", \"afr\", \"4yC\", \"4Wo\", \"6bL\", \"Os\", \"0lA\", \"0aq\", \"BC\", \"aEn\", \"c4E\", \"4ts\", \"5q3\", \"lo\", \"8FE\", \n\"347\", \"cn\", \"6NQ\", \"5kS\", \"bom\", \"74T\", \"MB\", \"0np\", \"17i\", \"2Vc\", \"6mM\", 
\"4Xn\", \"42J\", \"6Ca\", \"2xO\", \"0Ml\", \"4T\", \"0Sn\", \"5xa\", \"7MB\", \n\"67G\", \"4Fl\", \"09J\", \"2Ha\", \"1u2\", \"04z\", \"b5g\", \"aTm\", \"6PS\", \"4ep\", \"8WF\", \"9d\", \"6e\", \"8XG\", \"4jq\", \"5o1\", \"65v\", \"706\", \"1oR\", \"1z3\", \n\"Qq\", \"06K\", \"4Im\", \"68F\", \"6Rb\", \"4gA\", \"1LN\", \"2iL\", \"6nf\", \"5Kd\", \"V8\", \"CY\", \"mu\", \"0NG\", \"41a\", \"7Pk\", \"4m8\", \"4xY\", \"0Cw\", \"1F7\", \n\"Ni\", \"19r\", \"4Vu\", \"5S5\", \"6lW\", \"4Yt\", \"0bZ\", \"Ah\", \"oD\", \"0Lv\", \"43P\", \"4b9\", \"6OK\", \"4zh\", \"0AF\", \"bt\", \"LX\", \"Y9\", \"4TD\", \"6ag\", \n\"4JF\", \"7oD\", \"RZ\", \"0qh\", \"1Oe\", \"2jg\", \"6QI\", \"4dj\", \"4iZ\", \"483\", \"5N\", \"0Rt\", \"08P\", \"0i9\", \"5B6\", \"4Gv\", \"4Hw\", \"5M7\", \"Pk\", \"07Q\", \n\"1MT\", \"1X5\", \"472\", \"52r\", \"4kk\", \"7Ni\", \"sw\", \"0PE\", \"1nH\", \"2KJ\", \"64l\", \"4EG\", \"4Wk\", \"6bH\", \"Ow\", \"0lE\", \"02\", \"23B\", \"6Ld\", \"4yG\", \n\"4tw\", \"5q7\", \"lk\", \"0OY\", \"0au\", \"BG\", \"6ox\", \"5Jz\", \"4UZ\", \"74P\", \"MF\", \"0nt\", \"1Py\", \"cj\", \"6NU\", \"5kW\", \"42N\", \"6Ce\", \"nZ\", \"0Mh\", \n\"0cD\", \"2Vg\", \"6mI\", \"4Xj\", \"67C\", \"4Fh\", \"09N\", \"2He\", \"4P\", \"e9\", \"4hD\", \"7MF\", \"6PW\", \"4et\", \"3n9\", \"2ky\", \"SD\", \"0pv\", \"4KX\", \"7nZ\", \n\"4Q8\", \"4DY\", \"1oV\", \"1z7\", \"6a\", \"1Az\", \"4ju\", \"5o5\", \"6Rf\", \"4gE\", \"j8\", \"2iH\", \"Qu\", \"06O\", \"4Ii\", \"68B\", \"mq\", \"0NC\", \"41e\", \"7Po\", \n\"6nb\", \"bar\", \"14F\", \"2UL\", \"Nm\", \"19v\", \"4Vq\", \"5S1\", \"agl\", \"bBn\", \"0Cs\", \"1F3\", \"1I2\", \"0Lr\", \"43T\", \"ahm\", \"6lS\", \"4Yp\", \"16w\", \"Al\", \n\"2ZM\", \"0on\", \"5Da\", \"6ac\", \"6OO\", \"4zl\", \"0AB\", \"bp\", \"1Oa\", \"8z\", \"6QM\", \"4dn\", \"4JB\", \"aUs\", \"2DO\", \"05d\", \"08T\", \"d7D\", \"5B2\", \"4Gr\", \n\"bSm\", \"487\", \"5J\", \"0Rp\", \"1MP\", \"1X1\", \"476\", \"52v\", \"4Hs\", \"5M3\", \"Po\", \"07U\", \"1nL\", \"2KN\", 
\"64h\", \"4EC\", \"4ko\", \"7Nm\", \"ss\", \"0PA\", \n\"QJ\", \"06p\", \"4IV\", \"7lT\", \"6RY\", \"4gz\", \"1Lu\", \"0I5\", \"rV\", \"g7\", \"4jJ\", \"7OH\", \"65M\", \"4Df\", \"1oi\", \"2Jk\", \"2Ej\", \"04A\", \"4Kg\", \"7ne\", \n\"6Ph\", \"4eK\", \"h6\", \"2kF\", \"4o\", \"0SU\", \"5xZ\", \"7My\", \"4S6\", \"4FW\", \"09q\", \"1x9\", \"17R\", \"2VX\", \"4M4\", \"4XU\", \"42q\", \"571\", \"ne\", \"0MW\", \n\"v4\", \"cU\", \"6Nj\", \"5kh\", \"4Ue\", \"74o\", \"My\", \"0nK\", \"0aJ\", \"Bx\", \"6oG\", \"4Zd\", \"4tH\", \"6Ak\", \"lT\", \"y5\", \"0BV\", \"ad\", \"580\", \"4yx\", \n\"4WT\", \"4B5\", \"OH\", \"0lz\", \"4kP\", \"7NR\", \"7D\", \"244\", \"1ns\", \"0k3\", \"64W\", \"con\", \"4HL\", \"69g\", \"PP\", \"E1\", \"1Mo\", \"2hm\", \"6SC\", \"52I\", \n\"4ia\", \"7Lc\", \"5u\", \"0RO\", \"J0\", \"3Ya\", \"66f\", \"4GM\", \"b4F\", \"aUL\", \"Ra\", \"0qS\", \"8Vg\", \"8E\", \"458\", \"4dQ\", \"4o2\", \"4zS\", \"8He\", \"bO\", \n\"Lc\", \"0oQ\", \"605\", \"75u\", \"6ll\", \"4YO\", \"T2\", \"AS\", \"2yn\", \"0LM\", \"43k\", \"7Ra\", \"6MA\", \"4xb\", \"0CL\", \"22g\", \"NR\", \"19I\", \"4VN\", \"6cm\", \n\"aDO\", \"baM\", \"14y\", \"Cb\", \"mN\", \"8Gd\", \"41Z\", \"7PP\", \"axO\", \"53W\", \"1Lq\", \"0I1\", \"QN\", \"06t\", \"4IR\", \"68y\", \"65I\", \"4Db\", \"1om\", \"2Jo\", \n\"6Z\", \"g3\", \"4jN\", \"7OL\", \"6Pl\", \"4eO\", \"h2\", \"2kB\", \"2En\", \"04E\", \"4Kc\", \"7na\", \"4S2\", \"4FS\", \"09u\", \"d6e\", \"4k\", \"0SQ\", \"bRL\", \"a3F\", \n\"42u\", \"575\", \"na\", \"0MS\", \"17V\", \"dlo\", \"4M0\", \"4XQ\", \"4Ua\", \"74k\", \"3KM\", \"0nO\", \"28\", \"cQ\", \"6Nn\", \"5kl\", \"40D\", \"6Ao\", \"lP\", \"y1\", \n\"0aN\", \"2Tm\", \"6oC\", \"5JA\", \"4WP\", \"4B1\", \"OL\", \"18W\", \"0BR\", \"0W3\", \"584\", \"bCO\", \"1nw\", \"0k7\", \"64S\", \"4Ex\", \"4kT\", \"7NV\", \"sH\", \"0Pz\", \n\"1Mk\", \"2hi\", \"6SG\", \"4fd\", \"4HH\", \"69c\", \"PT\", \"E5\", \"J4\", \"2ID\", \"66b\", \"4GI\", \"4ie\", \"7Lg\", \"5q\", \"0RK\", \"1OZ\", \"8A\", 
\"4q4\", \"4dU\", \n\"4Jy\", \"5O9\", \"Re\", \"0qW\", \"Lg\", \"0oU\", \"5DZ\", \"6aX\", \"4o6\", \"4zW\", \"0Ay\", \"bK\", \"2yj\", \"0LI\", \"43o\", \"6BD\", \"6lh\", \"4YK\", \"T6\", \"AW\", \n\"NV\", \"0md\", \"4VJ\", \"6ci\", \"6ME\", \"4xf\", \"0CH\", \"22c\", \"mJ\", \"0Nx\", \"4uV\", \"7PT\", \"6nY\", \"baI\", \"1pu\", \"Cf\", \"6V\", \"0Ql\", \"4jB\", \"aus\", \n\"65E\", \"4Dn\", \"1oa\", \"2Jc\", \"QB\", \"06x\", \"b7e\", \"68u\", \"5b2\", \"4gr\", \"8UD\", \"dSL\", \"4g\", \"8ZE\", \"4hs\", \"5m3\", \"67t\", \"724\", \"09y\", \"1x1\", \n\"Ss\", \"04I\", \"4Ko\", \"7nm\", \"azr\", \"4eC\", \"1NL\", \"9W\", \"24\", \"21D\", \"6Nb\", \"bAr\", \"4Um\", \"74g\", \"Mq\", \"0nC\", \"0cs\", \"1f3\", \"79W\", \"bbn\", \n\"42y\", \"579\", \"nm\", \"394\", \"365\", \"al\", \"588\", \"4yp\", \"bmo\", \"76V\", \"1i2\", \"0lr\", \"0aB\", \"Bp\", \"6oO\", \"4Zl\", \"40H\", \"6Ac\", \"2zM\", \"0On\", \n\"4HD\", \"69o\", \"PX\", \"E9\", \"1Mg\", \"2he\", \"6SK\", \"4fh\", \"4kX\", \"7NZ\", \"7L\", \"0Pv\", \"3N9\", \"2Ky\", \"6pW\", \"4Et\", \"4Ju\", \"5O5\", \"Ri\", \"05S\", \n\"1OV\", \"8M\", \"450\", \"4dY\", \"4ii\", \"7Lk\", \"qu\", \"0RG\", \"J8\", \"2IH\", \"66n\", \"4GE\", \"6ld\", \"4YG\", \"0bi\", \"2WJ\", \"ow\", \"0LE\", \"43c\", \"6BH\", \n\"6Ox\", \"5jz\", \"0Au\", \"bG\", \"Lk\", \"0oY\", \"4Tw\", \"5Q7\", \"6nU\", \"5KW\", \"14q\", \"Cj\", \"mF\", \"0Nt\", \"41R\", \"7PX\", \"6MI\", \"4xj\", \"0CD\", \"22o\", \n\"NZ\", \"0mh\", \"4VF\", \"6ce\", \"65A\", \"4Dj\", \"1oe\", \"2Jg\", \"6R\", \"0Qh\", \"4jF\", \"7OD\", \"5b6\", \"4gv\", \"1Ly\", \"0I9\", \"QF\", \"0rt\", \"4IZ\", \"68q\", \n\"67p\", \"5Vz\", \"1mT\", \"1x5\", \"4c\", \"0SY\", \"4hw\", \"5m7\", \"6Pd\", \"4eG\", \"1NH\", \"9S\", \"Sw\", \"04M\", \"4Kk\", \"7ni\", \"4Ui\", \"74c\", \"Mu\", \"0nG\", \n\"20\", \"cY\", \"6Nf\", \"5kd\", \"4vu\", \"5s5\", \"ni\", \"390\", \"0cw\", \"1f7\", \"4M8\", \"4XY\", \"4WX\", \"4B9\", \"OD\", \"0lv\", \"0BZ\", \"ah\", \"6LW\", \"4yt\", \n\"40L\", \"6Ag\", 
\"lX\", \"y9\", \"0aF\", \"Bt\", \"6oK\", \"4Zh\", \"1Mc\", \"2ha\", \"6SO\", \"4fl\", \"5Xa\", \"69k\", \"2FM\", \"07f\", \"83N\", \"d5F\", \"6pS\", \"4Ep\", \n\"bQo\", \"a0e\", \"7H\", \"0Pr\", \"1OR\", \"8I\", \"454\", \"50t\", \"4Jq\", \"5O1\", \"Rm\", \"05W\", \"08g\", \"2IL\", \"66j\", \"4GA\", \"4im\", \"7Lo\", \"5y\", \"0RC\", \n\"os\", \"0LA\", \"43g\", \"6BL\", \"78I\", \"4YC\", \"0bm\", \"2WN\", \"Lo\", \"8fE\", \"4Ts\", \"5Q3\", \"aen\", \"cPM\", \"0Aq\", \"bC\", \"mB\", \"0Np\", \"41V\", \"ajo\", \n\"6nQ\", \"5KS\", \"14u\", \"Cn\", \"2XO\", \"0ml\", \"4VB\", \"6ca\", \"6MM\", \"4xn\", \"1Sa\", \"22k\", \"Sj\", \"04P\", \"4Kv\", \"5N6\", \"443\", \"4eZ\", \"1NU\", \"9N\", \n\"pv\", \"0SD\", \"4hj\", \"7Mh\", \"67m\", \"4FF\", \"1mI\", \"2HK\", \"2GJ\", \"2\", \"4IG\", \"68l\", \"6RH\", \"4gk\", \"1Ld\", \"2if\", \"6O\", \"0Qu\", \"5zz\", \"7OY\", \n\"5A7\", \"4Dw\", \"1ox\", \"0j8\", \"15r\", \"Bi\", \"6oV\", \"4Zu\", \"40Q\", \"4a8\", \"lE\", \"0Ow\", \"0BG\", \"au\", \"6LJ\", \"4yi\", \"4WE\", \"6bf\", \"OY\", \"Z8\", \n\"U9\", \"2VI\", \"6mg\", \"4XD\", \"4vh\", \"6CK\", \"nt\", \"0MF\", \"1PW\", \"cD\", \"4n9\", \"5ky\", \"4Ut\", \"5P4\", \"Mh\", \"0nZ\", \"4ip\", \"5l0\", \"5d\", \"9Kg\", \n\"08z\", \"1y2\", \"66w\", \"737\", \"4Jl\", \"7on\", \"Rp\", \"05J\", \"1OO\", \"8T\", \"6Qc\", \"50i\", \"4kA\", \"7NC\", \"7U\", \"0Po\", \"1nb\", \"dqS\", \"64F\", \"4Em\", \n\"b6f\", \"69v\", \"PA\", \"0ss\", \"8TG\", \"dRO\", \"5c1\", \"4fq\", \"6MP\", \"4xs\", \"376\", \"22v\", \"NC\", \"0mq\", \"bll\", \"77U\", \"6nL\", \"5KN\", \"14h\", \"Cs\", \n\"3ko\", \"0Nm\", \"41K\", \"7PA\", \"6Oa\", \"4zB\", \"37\", \"20G\", \"Lr\", \"8fX\", \"4Tn\", \"6aM\", \"78T\", \"bcm\", \"0bp\", \"AB\", \"on\", \"387\", \"43z\", \"5r2\", \n\"447\", \"51w\", \"1NQ\", \"9J\", \"Sn\", \"04T\", \"4Kr\", \"5N2\", \"67i\", \"4FB\", \"09d\", \"2HO\", \"4z\", \"1Ca\", \"4hn\", \"7Ml\", \"6RL\", \"4go\", \"8UY\", \"2ib\", \n\"2GN\", \"6\", \"4IC\", \"68h\", \"5A3\", \"4Ds\", 
\"82M\", \"d4E\", \"6K\", \"0Qq\", \"bPl\", \"a1f\", \"40U\", \"akl\", \"lA\", \"0Os\", \"15v\", \"Bm\", \"6oR\", \"4Zq\", \n\"4WA\", \"6bb\", \"2YL\", \"0lo\", \"0BC\", \"aq\", \"6LN\", \"4ym\", \"42d\", \"6CO\", \"np\", \"0MB\", \"0cn\", \"2VM\", \"6mc\", \"5Ha\", \"4Up\", \"5P0\", \"Ml\", \"8gF\", \n\"1PS\", \"1E2\", \"adm\", \"bAo\", \"1lW\", \"1y6\", \"4R9\", \"4GX\", \"4it\", \"5l4\", \"qh\", \"0RZ\", \"i9\", \"8P\", \"6Qg\", \"4dD\", \"4Jh\", \"7oj\", \"Rt\", \"05N\", \n\"1nf\", \"2Kd\", \"64B\", \"4Ei\", \"4kE\", \"7NG\", \"7Q\", \"f8\", \"1Mz\", \"2hx\", \"5c5\", \"4fu\", \"4HY\", \"69r\", \"PE\", \"0sw\", \"NG\", \"0mu\", \"5Fz\", \"6cx\", \n\"6MT\", \"4xw\", \"0CY\", \"0V8\", \"3kk\", \"0Ni\", \"41O\", \"7PE\", \"6nH\", \"5KJ\", \"14l\", \"Cw\", \"Lv\", \"0oD\", \"4Tj\", \"6aI\", \"6Oe\", \"4zF\", \"33\", \"bZ\", \n\"oj\", \"0LX\", \"4wv\", \"5r6\", \"6ly\", \"4YZ\", \"0bt\", \"AF\", \"4v\", \"0SL\", \"4hb\", \"awS\", \"67e\", \"4FN\", \"K3\", \"2HC\", \"Sb\", \"04X\", \"b5E\", \"aTO\", \n\"4p3\", \"4eR\", \"8Wd\", \"9F\", \"6G\", \"257\", \"4jS\", \"7OQ\", \"65T\", \"cnm\", \"1op\", \"0j0\", \"QS\", \"D2\", \"4IO\", \"68d\", \"7Ba\", \"4gc\", \"1Ll\", \"2in\", \n\"0BO\", \"23d\", \"6LB\", \"4ya\", \"4WM\", \"6bn\", \"OQ\", \"Z0\", \"0aS\", \"Ba\", \"aEL\", \"c4g\", \"40Y\", \"4a0\", \"lM\", \"8Fg\", \"8If\", \"cL\", \"4n1\", \"5kq\", \n\"616\", \"74v\", \"3KP\", \"0nR\", \"U1\", \"2VA\", \"6mo\", \"4XL\", \"42h\", \"6CC\", \"2xm\", \"0MN\", \"4Jd\", \"7of\", \"Rx\", \"05B\", \"i5\", \"2jE\", \"6Qk\", \"4dH\", \n\"4ix\", \"5l8\", \"5l\", \"0RV\", \"08r\", \"2IY\", \"4R5\", \"4GT\", \"4HU\", \"7mW\", \"PI\", \"07s\", \"1Mv\", \"0H6\", \"5c9\", \"4fy\", \"4kI\", \"7NK\", \"sU\", \"f4\", \n\"1nj\", \"2Kh\", \"64N\", \"4Ee\", \"6nD\", \"5KF\", \"1ph\", \"2Uj\", \"mW\", \"x6\", \"41C\", \"7PI\", \"593\", \"5hZ\", \"0CU\", \"0V4\", \"NK\", \"0my\", \"4VW\", \"4C6\", \n\"4L7\", \"4YV\", \"0bx\", \"AJ\", \"of\", \"0LT\", \"43r\", \"562\", \"6Oi\", \"4zJ\", 
\"w7\", \"bV\", \"Lz\", \"0oH\", \"4Tf\", \"6aE\", \"67a\", \"4FJ\", \"K7\", \"2HG\", \n\"4r\", \"0SH\", \"4hf\", \"7Md\", \"4p7\", \"4eV\", \"1NY\", \"9B\", \"Sf\", \"0pT\", \"4Kz\", \"7nx\", \"65P\", \"5TZ\", \"1ot\", \"0j4\", \"6C\", \"0Qy\", \"4jW\", \"7OU\", \n\"6RD\", \"4gg\", \"1Lh\", \"2ij\", \"QW\", \"D6\", \"4IK\", \"7lI\", \"4WI\", \"6bj\", \"OU\", \"Z4\", \"0BK\", \"ay\", \"6LF\", \"4ye\", \"4tU\", \"4a4\", \"lI\", \"2o9\", \n\"0aW\", \"Be\", \"6oZ\", \"4Zy\", \"4Ux\", \"5P8\", \"Md\", \"0nV\", \"8Ib\", \"cH\", \"4n5\", \"5ku\", \"42l\", \"6CG\", \"nx\", \"0MJ\", \"U5\", \"2VE\", \"6mk\", \"4XH\", \n\"i1\", \"8X\", \"6Qo\", \"4dL\", \"5ZA\", \"7ob\", \"2Dm\", \"05F\", \"08v\", \"d7f\", \"4R1\", \"4GP\", \"bSO\", \"a2E\", \"5h\", \"0RR\", \"1Mr\", \"0H2\", \"ayL\", \"52T\", \n\"4HQ\", \"69z\", \"PM\", \"07w\", \"1nn\", \"2Kl\", \"64J\", \"4Ea\", \"4kM\", \"7NO\", \"7Y\", \"f0\", \"mS\", \"x2\", \"41G\", \"7PM\", \"aDR\", \"5KB\", \"14d\", \"2Un\", \n\"NO\", \"19T\", \"4VS\", \"4C2\", \"597\", \"bBL\", \"0CQ\", \"0V0\", \"ob\", \"0LP\", \"43v\", \"566\", \"4L3\", \"4YR\", \"16U\", \"AN\", \"2Zo\", \"0oL\", \"4Tb\", \"6aA\", \n\"6Om\", \"4zN\", \"w3\", \"bR\", \"4oT\", \"4z5\", \"wH\", \"0Tz\", \"0zV\", \"Yd\", \"5D8\", \"4Ax\", \"4LH\", \"6yk\", \"TT\", \"A5\", \"0YJ\", \"zx\", \"6WG\", \"4bd\", \n\"4me\", \"6XF\", \"1q\", \"0VK\", \"N4\", \"2MD\", \"62b\", \"4CI\", \"4Ny\", \"5K9\", \"Ve\", \"0uW\", \"1KZ\", \"xI\", \"4u4\", \"5pt\", \"4k6\", \"5nv\", \"0Ey\", \"fK\", \n\"Hg\", \"0kU\", \"641\", \"6eX\", \"6hh\", \"5Mj\", \"P6\", \"EW\", \"29b\", \"0HI\", \"47o\", \"6FD\", \"6IE\", \"48n\", \"0GH\", \"dz\", \"JV\", \"0id\", \"4RJ\", \"6gi\", \n\"6jY\", \"beI\", \"0dT\", \"Gf\", \"iJ\", \"0Jx\", \"4qV\", \"4d7\", \"UN\", \"02t\", \"4MR\", \"4X3\", \"a8G\", \"57W\", \"0XP\", \"0M1\", \"2Z\", \"c3\", \"4nN\", \"7KL\", \n\"61I\", \"5PC\", \"1km\", \"2No\", \"2An\", \"00E\", \"4Oc\", \"7ja\", \"6Tl\", \"4aO\", \"l2\", \"yS\", \"0k\", \"0WQ\", \"58V\", 
\"a7F\", \"4W2\", \"4BS\", \"84m\", \"ZO\", \n\"13V\", \"E\", \"4I0\", \"5Lp\", \"46u\", \"535\", \"ja\", \"0IS\", \"68\", \"gQ\", \"6Jn\", \"5ol\", \"4Qa\", \"6dB\", \"3OM\", \"0jO\", \"0eN\", \"2Pm\", \"6kC\", \"5NA\", \n\"44D\", \"6Eo\", \"hP\", \"99\", \"0FR\", \"0S3\", \"abM\", \"49t\", \"4SP\", \"4F1\", \"KL\", \"8af\", \"0zR\", \"0o3\", \"60W\", \"ckn\", \"4oP\", \"4z1\", \"3D\", \"204\", \n\"0YN\", \"2lm\", \"6WC\", \"56I\", \"4LL\", \"6yo\", \"TP\", \"A1\", \"N0\", \"903\", \"62f\", \"4CM\", \"4ma\", \"6XB\", \"1u\", \"0VO\", \"8Rg\", \"xM\", \"418\", \"54x\", \n\"b0F\", \"aQL\", \"Va\", \"0uS\", \"Hc\", \"0kQ\", \"645\", \"71u\", \"4k2\", \"5nr\", \"8Le\", \"fO\", \"29f\", \"0HM\", \"47k\", \"7Va\", \"6hl\", \"5Mn\", \"P2\", \"ES\", \n\"JR\", \"1yA\", \"4RN\", \"6gm\", \"6IA\", \"48j\", \"0GL\", \"26g\", \"iN\", \"8Cd\", \"45Z\", \"4d3\", \"hYv\", \"beM\", \"0dP\", \"Gb\", \"6VY\", \"4cz\", \"0XT\", \"0M5\", \n\"UJ\", \"02p\", \"4MV\", \"4X7\", \"61M\", \"5PG\", \"1ki\", \"Xz\", \"vV\", \"c7\", \"4nJ\", \"7KH\", \"6Th\", \"4aK\", \"l6\", \"yW\", \"2Aj\", \"00A\", \"4Og\", \"6zD\", \n\"4W6\", \"4BW\", \"0yy\", \"ZK\", \"0o\", \"0WU\", \"58R\", \"6YX\", \"46q\", \"531\", \"je\", \"0IW\", \"13R\", \"A\", \"4I4\", \"5Lt\", \"4Qe\", \"6dF\", \"Iy\", \"0jK\", \n\"r4\", \"gU\", \"6Jj\", \"5oh\", \"4pH\", \"6Ek\", \"hT\", \"0Kf\", \"0eJ\", \"Fx\", \"6kG\", \"5NE\", \"4ST\", \"4F5\", \"KH\", \"0hz\", \"0FV\", \"ed\", \"5x8\", \"49p\", \n\"bvs\", \"6yc\", \"2BM\", \"03f\", \"0YB\", \"zp\", \"6WO\", \"4bl\", \"bUo\", \"a4e\", \"3H\", \"0Tr\", \"87N\", \"Yl\", \"5D0\", \"4Ap\", \"4Nq\", \"5K1\", \"Vm\", \"01W\", \n\"1KR\", \"xA\", \"414\", \"54t\", \"4mm\", \"6XN\", \"1y\", \"0VC\", \"0xo\", \"2ML\", \"62j\", \"4CA\", \"7xA\", \"5Mb\", \"0fm\", \"2SN\", \"ks\", \"0HA\", \"47g\", \"6FL\", \n\"aan\", \"bDl\", \"0Eq\", \"fC\", \"Ho\", \"8bE\", \"4Ps\", \"5U3\", \"5Z2\", \"5OS\", \"10u\", \"Gn\", \"iB\", \"0Jp\", \"45V\", \"ano\", \"6IM\", \"48f\", \"1Wa\", \"dr\", 
\n\"3Ln\", \"0il\", \"4RB\", \"6ga\", \"2R\", \"0Uh\", \"4nF\", \"7KD\", \"61A\", \"5PK\", \"1ke\", \"Xv\", \"UF\", \"0vt\", \"4MZ\", \"6xy\", \"5f6\", \"4cv\", \"0XX\", \"0M9\", \n\"0c\", \"0WY\", \"4lw\", \"5i7\", \"63p\", \"5Rz\", \"0yu\", \"ZG\", \"Ww\", \"00M\", \"4Ok\", \"6zH\", \"6Td\", \"4aG\", \"0Zi\", \"2oJ\", \"60\", \"gY\", \"6Jf\", \"5od\", \n\"4Qi\", \"6dJ\", \"Iu\", \"0jG\", \"0gw\", \"M\", \"4I8\", \"5Lx\", \"4ru\", \"5w5\", \"ji\", \"1Yz\", \"0FZ\", \"eh\", \"5x4\", \"5mU\", \"4SX\", \"4F9\", \"KD\", \"0hv\", \n\"0eF\", \"Ft\", \"6kK\", \"5NI\", \"44L\", \"6Eg\", \"hX\", \"91\", \"0YF\", \"zt\", \"6WK\", \"4bh\", \"4LD\", \"6yg\", \"TX\", \"A9\", \"0zZ\", \"Yh\", \"5D4\", \"4At\", \n\"4oX\", \"4z9\", \"3L\", \"0Tv\", \"1KV\", \"xE\", \"410\", \"54p\", \"4Nu\", \"5K5\", \"Vi\", \"01S\", \"N8\", \"2MH\", \"62n\", \"4CE\", \"4mi\", \"6XJ\", \"uu\", \"0VG\", \n\"kw\", \"0HE\", \"47c\", \"6FH\", \"6hd\", \"5Mf\", \"0fi\", \"2SJ\", \"Hk\", \"0kY\", \"4Pw\", \"5U7\", \"6Kx\", \"5nz\", \"0Eu\", \"fG\", \"iF\", \"0Jt\", \"45R\", \"6Dy\", \n\"5Z6\", \"5OW\", \"0dX\", \"Gj\", \"JZ\", \"0ih\", \"4RF\", \"6ge\", \"6II\", \"48b\", \"0GD\", \"dv\", \"61E\", \"5PO\", \"1ka\", \"Xr\", \"2V\", \"0Ul\", \"4nB\", \"aqs\", \n\"5f2\", \"4cr\", \"8QD\", \"39V\", \"UB\", \"02x\", \"795\", \"aRo\", \"63t\", \"764\", \"0yq\", \"ZC\", \"0g\", \"9Nd\", \"4ls\", \"5i3\", \"7DA\", \"4aC\", \"0Zm\", \"2oN\", \n\"Ws\", \"00I\", \"4Oo\", \"6zL\", \"4Qm\", \"6dN\", \"Iq\", \"0jC\", \"64\", \"25D\", \"6Jb\", \"bEr\", \"46y\", \"539\", \"jm\", \"9Pf\", \"0gs\", \"I\", \"aCl\", \"bfn\", \n\"bio\", \"72V\", \"1m2\", \"0hr\", \"325\", \"el\", \"5x0\", \"49x\", \"44H\", \"6Ec\", \"3nl\", \"95\", \"0eB\", \"Fp\", \"6kO\", \"5NM\", \"4mt\", \"5h4\", \"uh\", \"0VZ\", \n\"0xv\", \"2MU\", \"4V9\", \"4CX\", \"4Nh\", \"7kj\", \"Vt\", \"01N\", \"m9\", \"xX\", \"6Ug\", \"54m\", \"4oE\", \"6Zf\", \"3Q\", \"b8\", \"0zG\", \"Yu\", \"60B\", \"4Ai\", \n\"4LY\", \"4Y8\", \"TE\", \"0ww\", \"1Iz\", 
\"zi\", \"5g5\", \"4bu\", \"5y7\", \"5lV\", \"0GY\", \"dk\", \"JG\", \"0iu\", \"5Bz\", \"6gx\", \"6jH\", \"5OJ\", \"0dE\", \"Gw\", \n\"3ok\", \"82\", \"45O\", \"6Dd\", \"6Ke\", \"5ng\", \"73\", \"fZ\", \"Hv\", \"0kD\", \"4Pj\", \"6eI\", \"6hy\", \"7m9\", \"0ft\", \"EF\", \"kj\", \"0HX\", \"4sv\", \"5v6\", \n\"Wn\", \"00T\", \"4Or\", \"5J2\", \"407\", \"55w\", \"0Zp\", \"yB\", \"0z\", \"1Ga\", \"4ln\", \"6YM\", \"63i\", \"4BB\", \"0yl\", \"2LO\", \"2CN\", \"02e\", \"4MC\", \"7hA\", \n\"6VL\", \"4co\", \"0XA\", \"2mb\", \"2K\", \"0Uq\", \"bTl\", \"a5f\", \"5E3\", \"5PR\", \"86M\", \"Xo\", \"11v\", \"Fm\", \"6kR\", \"5NP\", \"44U\", \"aol\", \"hA\", \"0Ks\", \n\"0FC\", \"eq\", \"6HN\", \"49e\", \"4SA\", \"6fb\", \"3Mm\", \"0ho\", \"0gn\", \"T\", \"6ic\", \"5La\", \"46d\", \"6GO\", \"jp\", \"0IB\", \"0Dr\", \"1A2\", \"hyT\", \"bEo\", \n\"4Qp\", \"5T0\", \"Il\", \"8cF\", \"0xr\", \"2MQ\", \"62w\", \"777\", \"4mp\", \"5h0\", \"1d\", \"9Og\", \"1KO\", \"2nM\", \"6Uc\", \"54i\", \"4Nl\", \"7kn\", \"Vp\", \"01J\", \n\"0zC\", \"Yq\", \"60F\", \"4Am\", \"4oA\", \"6Zb\", \"3U\", \"0To\", \"8PG\", \"zm\", \"5g1\", \"4bq\", \"786\", \"aSl\", \"TA\", \"0ws\", \"JC\", \"0iq\", \"bhl\", \"73U\", \n\"5y3\", \"5lR\", \"336\", \"do\", \"3oo\", \"86\", \"45K\", \"7TA\", \"6jL\", \"5ON\", \"0dA\", \"Gs\", \"Hr\", \"8bX\", \"4Pn\", \"6eM\", \"6Ka\", \"5nc\", \"77\", \"24G\", \n\"kn\", \"8AD\", \"47z\", \"5v2\", \"aBo\", \"bgm\", \"0fp\", \"EB\", \"403\", \"4aZ\", \"0Zt\", \"yF\", \"Wj\", \"00P\", \"4Ov\", \"5J6\", \"63m\", \"4BF\", \"0yh\", \"ZZ\", \n\"tv\", \"0WD\", \"4lj\", \"6YI\", \"6VH\", \"4ck\", \"0XE\", \"2mf\", \"2CJ\", \"02a\", \"4MG\", \"6xd\", \"5E7\", \"5PV\", \"1kx\", \"Xk\", \"2O\", \"0Uu\", \"bTh\", \"7KY\", \n\"44Q\", \"4e8\", \"hE\", \"0Kw\", \"11r\", \"Fi\", \"6kV\", \"5NT\", \"4SE\", \"6ff\", \"KY\", \"0hk\", \"0FG\", \"eu\", \"6HJ\", \"49a\", \"4rh\", \"6GK\", \"jt\", \"0IF\", \n\"Q9\", \"P\", \"6ig\", \"5Le\", \"4Qt\", \"5T4\", \"Ih\", \"0jZ\", \"0Dv\", \"gD\", 
\"4j9\", \"5oy\", \"aD0\", \"7kb\", \"3PL\", \"01F\", \"m1\", \"xP\", \"6Uo\", \"54e\", \n\"59U\", \"a6E\", \"1h\", \"0VR\", \"85n\", \"196\", \"4V1\", \"4CP\", \"4LQ\", \"4Y0\", \"TM\", \"03w\", \"0YS\", \"za\", \"a9D\", \"56T\", \"4oM\", \"6Zn\", \"3Y\", \"b0\", \n\"0zO\", \"2Ol\", \"60J\", \"4Aa\", \"7za\", \"5OB\", \"0dM\", \"2Qn\", \"iS\", \"0Ja\", \"45G\", \"6Dl\", \"acN\", \"48w\", \"0GQ\", \"dc\", \"JO\", \"94L\", \"4RS\", \"4G2\", \n\"4H3\", \"5Ms\", \"12U\", \"EN\", \"kb\", \"0HP\", \"47v\", \"526\", \"6Km\", \"5no\", \"s3\", \"fR\", \"3NN\", \"0kL\", \"4Pb\", \"6eA\", \"0r\", \"0WH\", \"4lf\", \"6YE\", \n\"63a\", \"4BJ\", \"O7\", \"ZV\", \"Wf\", \"0tT\", \"4Oz\", \"6zY\", \"4t7\", \"4aV\", \"0Zx\", \"yJ\", \"2C\", \"0Uy\", \"4nW\", \"7KU\", \"61P\", \"5PZ\", \"1kt\", \"Xg\", \n\"UW\", \"02m\", \"4MK\", \"6xh\", \"6VD\", \"4cg\", \"0XI\", \"2mj\", \"0FK\", \"ey\", \"6HF\", \"49m\", \"4SI\", \"6fj\", \"KU\", \"0hg\", \"0eW\", \"Fe\", \"6kZ\", \"5NX\", \n\"4pU\", \"4e4\", \"hI\", \"2k9\", \"0Dz\", \"gH\", \"4j5\", \"5ou\", \"4Qx\", \"5T8\", \"Id\", \"0jV\", \"Q5\", \"DT\", \"6ik\", \"5Li\", \"46l\", \"6GG\", \"jx\", \"0IJ\", \n\"m5\", \"xT\", \"6Uk\", \"54a\", \"4Nd\", \"7kf\", \"Vx\", \"01B\", \"0xz\", \"192\", \"4V5\", \"4CT\", \"4mx\", \"5h8\", \"1l\", \"0VV\", \"0YW\", \"ze\", \"5g9\", \"4by\", \n\"4LU\", \"4Y4\", \"TI\", \"03s\", \"0zK\", \"Yy\", \"60N\", \"4Ae\", \"4oI\", \"6Zj\", \"wU\", \"b4\", \"iW\", \"0Je\", \"45C\", \"6Dh\", \"6jD\", \"5OF\", \"0dI\", \"2Qj\", \n\"JK\", \"0iy\", \"4RW\", \"4G6\", \"6IX\", \"48s\", \"0GU\", \"dg\", \"kf\", \"0HT\", \"47r\", \"522\", \"4H7\", \"5Mw\", \"0fx\", \"EJ\", \"Hz\", \"0kH\", \"4Pf\", \"6eE\", \n\"6Ki\", \"5nk\", \"s7\", \"fV\", \"63e\", \"4BN\", \"O3\", \"ZR\", \"0v\", \"0WL\", \"4lb\", \"6YA\", \"4t3\", \"4aR\", \"8Sd\", \"yN\", \"Wb\", \"00X\", \"b1E\", \"aPO\", \n\"61T\", \"bzL\", \"1kp\", \"Xc\", \"2G\", \"217\", \"4nS\", \"7KQ\", \"7Fa\", \"4cc\", \"0XM\", \"2mn\", \"US\", \"02i\", \"4MO\", \"6xl\", 
\"4SM\", \"6fn\", \"KQ\", \"0hc\", \n\"0FO\", \"27d\", \"6HB\", \"49i\", \"44Y\", \"4e0\", \"hM\", \"8Bg\", \"0eS\", \"Fa\", \"aAL\", \"bdN\", \"656\", \"70v\", \"3OP\", \"0jR\", \"8Mf\", \"gL\", \"4j1\", \"5oq\", \n\"46h\", \"6GC\", \"28e\", \"0IN\", \"Q1\", \"X\", \"6io\", \"5Lm\", \"6KV\", \"5nT\", \"1Uz\", \"fi\", \"HE\", \"0kw\", \"4PY\", \"4E8\", \"6hJ\", \"5MH\", \"0fG\", \"Eu\", \n\"kY\", \"0Hk\", \"47M\", \"6Ff\", \"6Ig\", \"48L\", \"51\", \"dX\", \"Jt\", \"0iF\", \"4Rh\", \"6gK\", \"4J9\", \"5Oy\", \"0dv\", \"GD\", \"ih\", \"0JZ\", \"4qt\", \"5t4\", \n\"4ov\", \"5j6\", \"3b\", \"0TX\", \"0zt\", \"YF\", \"60q\", \"4AZ\", \"4Lj\", \"6yI\", \"Tv\", \"03L\", \"0Yh\", \"zZ\", \"6We\", \"4bF\", \"4mG\", \"6Xd\", \"1S\", \"0Vi\", \n\"0xE\", \"2Mf\", \"6vH\", \"4Ck\", \"bth\", \"7kY\", \"VG\", \"0uu\", \"1Kx\", \"xk\", \"5e7\", \"5pV\", \"13t\", \"g\", \"5Y3\", \"5LR\", \"46W\", \"amn\", \"jC\", \"0Iq\", \n\"0DA\", \"gs\", \"6JL\", \"5oN\", \"4QC\", \"70I\", \"3Oo\", \"0jm\", \"0el\", \"2PO\", \"6ka\", \"5Nc\", \"44f\", \"6EM\", \"hr\", \"8BX\", \"0Fp\", \"eB\", \"abo\", \"49V\", \n\"4Sr\", \"5V2\", \"Kn\", \"8aD\", \"Ul\", \"02V\", \"4Mp\", \"5H0\", \"425\", \"57u\", \"0Xr\", \"2mQ\", \"2x\", \"0UB\", \"4nl\", \"7Kn\", \"61k\", \"5Pa\", \"1kO\", \"2NM\", \n\"2AL\", \"00g\", \"4OA\", \"6zb\", \"6TN\", \"4am\", \"0ZC\", \"yq\", \"0I\", \"0Ws\", \"58t\", \"a7d\", \"5G1\", \"4Bq\", \"84O\", \"Zm\", \"HA\", \"0ks\", \"bjn\", \"71W\", \n\"6KR\", \"5nP\", \"314\", \"fm\", \"29D\", \"0Ho\", \"47I\", \"6Fb\", \"6hN\", \"5ML\", \"0fC\", \"Eq\", \"Jp\", \"0iB\", \"4Rl\", \"6gO\", \"6Ic\", \"48H\", \"55\", \"26E\", \n\"il\", \"8CF\", \"45x\", \"508\", \"hYT\", \"beo\", \"0dr\", \"1a2\", \"0zp\", \"YB\", \"60u\", \"755\", \"4or\", \"5j2\", \"3f\", \"9Me\", \"0Yl\", \"2lO\", \"6Wa\", \"4bB\", \n\"4Ln\", \"6yM\", \"Tr\", \"03H\", \"0xA\", \"2Mb\", \"62D\", \"4Co\", \"4mC\", \"7HA\", \"1W\", \"0Vm\", \"8RE\", \"xo\", \"5e3\", \"54Z\", \"b0d\", \"aQn\", \"VC\", \"01y\", 
\n\"46S\", \"6Gx\", \"jG\", \"0Iu\", \"0gY\", \"c\", \"5Y7\", \"5LV\", \"4QG\", \"6dd\", \"3Ok\", \"0ji\", \"0DE\", \"gw\", \"6JH\", \"5oJ\", \"44b\", \"6EI\", \"hv\", \"0KD\", \n\"0eh\", \"FZ\", \"6ke\", \"5Ng\", \"4Sv\", \"5V6\", \"Kj\", \"0hX\", \"0Ft\", \"eF\", \"6Hy\", \"49R\", \"421\", \"4cX\", \"0Xv\", \"2mU\", \"Uh\", \"02R\", \"4Mt\", \"5H4\", \n\"61o\", \"5Pe\", \"M9\", \"XX\", \"vt\", \"0UF\", \"4nh\", \"7Kj\", \"6TJ\", \"4ai\", \"0ZG\", \"yu\", \"WY\", \"B8\", \"4OE\", \"6zf\", \"5G5\", \"4Bu\", \"1iz\", \"Zi\", \n\"0M\", \"0Ww\", \"4lY\", \"4y8\", \"6hB\", \"aW1\", \"0fO\", \"2Sl\", \"kQ\", \"0Hc\", \"47E\", \"6Fn\", \"aaL\", \"bDN\", \"0ES\", \"fa\", \"HM\", \"8bg\", \"4PQ\", \"4E0\", \n\"4J1\", \"5Oq\", \"10W\", \"GL\", \"3oP\", \"0JR\", \"45t\", \"504\", \"6Io\", \"48D\", \"59\", \"dP\", \"3LL\", \"0iN\", \"5BA\", \"6gC\", \"4Lb\", \"6yA\", \"2Bo\", \"03D\", \n\"o3\", \"zR\", \"6Wm\", \"4bN\", \"bUM\", \"a4G\", \"3j\", \"0TP\", \"87l\", \"YN\", \"4T3\", \"4AR\", \"4NS\", \"7kQ\", \"VO\", \"01u\", \"1Kp\", \"xc\", \"hfw\", \"54V\", \n\"4mO\", \"6Xl\", \"uS\", \"0Va\", \"0xM\", \"2Mn\", \"62H\", \"4Cc\", \"0DI\", \"25b\", \"6JD\", \"5oF\", \"4QK\", \"6dh\", \"IW\", \"0je\", \"0gU\", \"o\", \"6iX\", \"5LZ\", \n\"4rW\", \"4g6\", \"jK\", \"0Iy\", \"0Fx\", \"eJ\", \"4h7\", \"5mw\", \"4Sz\", \"6fY\", \"Kf\", \"0hT\", \"S7\", \"FV\", \"6ki\", \"5Nk\", \"44n\", \"6EE\", \"hz\", \"0KH\", \n\"2p\", \"0UJ\", \"4nd\", \"7Kf\", \"61c\", \"5Pi\", \"M5\", \"XT\", \"Ud\", \"0vV\", \"4Mx\", \"5H8\", \"4v5\", \"4cT\", \"0Xz\", \"2mY\", \"0A\", \"1GZ\", \"4lU\", \"4y4\", \n\"5G9\", \"4By\", \"0yW\", \"Ze\", \"WU\", \"B4\", \"4OI\", \"6zj\", \"6TF\", \"4ae\", \"0ZK\", \"yy\", \"kU\", \"0Hg\", \"47A\", \"6Fj\", \"6hF\", \"5MD\", \"0fK\", \"Ey\", \n\"HI\", \"2K9\", \"4PU\", \"4E4\", \"6KZ\", \"5nX\", \"0EW\", \"fe\", \"id\", \"0JV\", \"45p\", \"500\", \"4J5\", \"5Ou\", \"0dz\", \"GH\", \"Jx\", \"0iJ\", \"4Rd\", \"6gG\", \n\"6Ik\", \"5li\", \"q5\", \"dT\", \"o7\", \"zV\", 
\"6Wi\", \"4bJ\", \"4Lf\", \"6yE\", \"Tz\", \"0wH\", \"0zx\", \"YJ\", \"4T7\", \"4AV\", \"4oz\", \"6ZY\", \"3n\", \"0TT\", \n\"1Kt\", \"xg\", \"6UX\", \"54R\", \"4NW\", \"7kU\", \"VK\", \"01q\", \"0xI\", \"2Mj\", \"62L\", \"4Cg\", \"4mK\", \"6Xh\", \"uW\", \"0Ve\", \"4QO\", \"6dl\", \"IS\", \"0ja\", \n\"0DM\", \"25f\", \"7Za\", \"5oB\", \"4rS\", \"4g2\", \"jO\", \"9PD\", \"0gQ\", \"k\", \"aCN\", \"685\", \"674\", \"72t\", \"Kb\", \"0hP\", \"8Od\", \"eN\", \"4h3\", \"49Z\", \n\"44j\", \"6EA\", \"3nN\", \"0KL\", \"S3\", \"FR\", \"6km\", \"5No\", \"61g\", \"5Pm\", \"M1\", \"XP\", \"2t\", \"0UN\", \"ad0\", \"7Kb\", \"429\", \"4cP\", \"8Qf\", \"39t\", \n\"0c3\", \"02Z\", \"b3G\", \"aRM\", \"63V\", \"bxN\", \"0yS\", \"Za\", \"0E\", \"235\", \"4lQ\", \"4y0\", \"6TB\", \"4aa\", \"0ZO\", \"2ol\", \"WQ\", \"B0\", \"4OM\", \"6zn\", \n\"4i4\", \"5lt\", \"1WZ\", \"dI\", \"Je\", \"0iW\", \"4Ry\", \"5W9\", \"6jj\", \"5Oh\", \"R4\", \"GU\", \"iy\", \"0JK\", \"45m\", \"6DF\", \"6KG\", \"5nE\", \"0EJ\", \"fx\", \n\"HT\", \"0kf\", \"4PH\", \"6ek\", \"5X8\", \"5MY\", \"0fV\", \"Ed\", \"kH\", \"0Hz\", \"4sT\", \"4f5\", \"4mV\", \"4x7\", \"1B\", \"0Vx\", \"0xT\", \"0m5\", \"62Q\", \"4Cz\", \n\"4NJ\", \"7kH\", \"VV\", \"C7\", \"1Ki\", \"xz\", \"6UE\", \"54O\", \"4og\", \"6ZD\", \"3s\", \"0TI\", \"L6\", \"YW\", \"6th\", \"4AK\", \"6l9\", \"6yX\", \"Tg\", \"0wU\", \n\"0Yy\", \"zK\", \"4w6\", \"4bW\", \"11T\", \"FO\", \"4K2\", \"5Nr\", \"44w\", \"517\", \"hc\", \"0KQ\", \"p2\", \"eS\", \"6Hl\", \"49G\", \"4Sc\", \"72i\", \"3MO\", \"0hM\", \n\"0gL\", \"v\", \"6iA\", \"5LC\", \"46F\", \"6Gm\", \"jR\", \"1YA\", \"0DP\", \"gb\", \"hyv\", \"bEM\", \"4QR\", \"4D3\", \"IN\", \"8cd\", \"WL\", \"00v\", \"4OP\", \"4Z1\", \n\"hgt\", \"55U\", \"0ZR\", \"0O3\", \"0X\", \"a1\", \"4lL\", \"6Yo\", \"63K\", \"5RA\", \"0yN\", \"2Lm\", \"2Cl\", \"02G\", \"4Ma\", \"6xB\", \"6Vn\", \"4cM\", \"n0\", \"39i\", \n\"2i\", \"0US\", \"bTN\", \"a5D\", \"4U0\", \"5Pp\", \"86o\", \"XM\", \"Ja\", \"0iS\", \"667\", 
\"73w\", \"4i0\", \"48Y\", \"8Ng\", \"dM\", \"3oM\", \"0JO\", \"45i\", \"6DB\", \n\"6jn\", \"5Ol\", \"R0\", \"GQ\", \"HP\", \"0kb\", \"4PL\", \"6eo\", \"6KC\", \"5nA\", \"0EN\", \"24e\", \"kL\", \"8Af\", \"47X\", \"4f1\", \"aBM\", \"696\", \"0fR\", \"0s3\", \n\"0xP\", \"0m1\", \"62U\", \"byM\", \"4mR\", \"4x3\", \"1F\", \"226\", \"1Km\", \"2no\", \"6UA\", \"54K\", \"4NN\", \"7kL\", \"VR\", \"C3\", \"L2\", \"YS\", \"60d\", \"4AO\", \n\"4oc\", \"7Ja\", \"3w\", \"0TM\", \"8Pe\", \"zO\", \"4w2\", \"4bS\", \"b2D\", \"aSN\", \"Tc\", \"03Y\", \"44s\", \"513\", \"hg\", \"0KU\", \"0ey\", \"FK\", \"4K6\", \"5Nv\", \n\"4Sg\", \"6fD\", \"3MK\", \"0hI\", \"p6\", \"eW\", \"6Hh\", \"49C\", \"46B\", \"6Gi\", \"jV\", \"0Id\", \"0gH\", \"r\", \"6iE\", \"5LG\", \"4QV\", \"4D7\", \"IJ\", \"0jx\", \n\"0DT\", \"gf\", \"6JY\", \"bEI\", \"5d8\", \"4ax\", \"0ZV\", \"yd\", \"WH\", \"00r\", \"4OT\", \"4Z5\", \"63O\", \"4Bd\", \"0yJ\", \"Zx\", \"tT\", \"a5\", \"4lH\", \"6Yk\", \n\"6Vj\", \"4cI\", \"n4\", \"2mD\", \"Uy\", \"02C\", \"4Me\", \"6xF\", \"4U4\", \"5Pt\", \"1kZ\", \"XI\", \"2m\", \"0UW\", \"4ny\", \"5k9\", \"6jb\", \"ber\", \"0do\", \"2QL\", \n\"iq\", \"0JC\", \"45e\", \"6DN\", \"acl\", \"48U\", \"0Gs\", \"dA\", \"Jm\", \"94n\", \"4Rq\", \"5W1\", \"5X0\", \"5MQ\", \"12w\", \"El\", \"1M2\", \"0Hr\", \"47T\", \"alm\", \n\"6KO\", \"5nM\", \"0EB\", \"fp\", \"3Nl\", \"0kn\", \"bjs\", \"6ec\", \"4NB\", \"aQs\", \"3Pn\", \"01d\", \"1Ka\", \"xr\", \"6UM\", \"54G\", \"59w\", \"a6g\", \"1J\", \"0Vp\", \n\"85L\", \"d3D\", \"5F2\", \"4Cr\", \"4Ls\", \"5I3\", \"To\", \"03U\", \"0Yq\", \"zC\", \"436\", \"56v\", \"4oo\", \"6ZL\", \"ws\", \"0TA\", \"0zm\", \"2ON\", \"60h\", \"4AC\", \n\"42\", \"27B\", \"6Hd\", \"49O\", \"4Sk\", \"6fH\", \"Kw\", \"0hE\", \"0eu\", \"FG\", \"6kx\", \"5Nz\", \"4pw\", \"5u7\", \"hk\", \"0KY\", \"0DX\", \"gj\", \"5z6\", \"5oW\", \n\"4QZ\", \"6dy\", \"IF\", \"0jt\", \"0gD\", \"Dv\", \"6iI\", \"5LK\", \"46N\", \"6Ge\", \"jZ\", \"0Ih\", \"0P\", \"a9\", \"4lD\", \"6Yg\", 
\"63C\", \"4Bh\", \"0yF\", \"Zt\", \n\"WD\", \"0tv\", \"4OX\", \"4Z9\", \"5d4\", \"4at\", \"0ZZ\", \"yh\", \"2a\", \"1Ez\", \"4nu\", \"5k5\", \"4U8\", \"5Px\", \"1kV\", \"XE\", \"Uu\", \"02O\", \"4Mi\", \"6xJ\", \n\"6Vf\", \"4cE\", \"n8\", \"2mH\", \"iu\", \"0JG\", \"45a\", \"6DJ\", \"6jf\", \"5Od\", \"R8\", \"GY\", \"Ji\", \"1yz\", \"4Ru\", \"5W5\", \"4i8\", \"48Q\", \"0Gw\", \"dE\", \n\"kD\", \"0Hv\", \"47P\", \"4f9\", \"5X4\", \"5MU\", \"0fZ\", \"Eh\", \"HX\", \"0kj\", \"4PD\", \"6eg\", \"6KK\", \"5nI\", \"0EF\", \"ft\", \"1Ke\", \"xv\", \"6UI\", \"54C\", \n\"4NF\", \"7kD\", \"VZ\", \"0uh\", \"0xX\", \"0m9\", \"5F6\", \"4Cv\", \"4mZ\", \"6Xy\", \"1N\", \"0Vt\", \"0Yu\", \"zG\", \"432\", \"56r\", \"4Lw\", \"5I7\", \"Tk\", \"03Q\", \n\"0zi\", \"2OJ\", \"60l\", \"4AG\", \"4ok\", \"6ZH\", \"ww\", \"0TE\", \"4So\", \"6fL\", \"Ks\", \"0hA\", \"46\", \"27F\", \"7XA\", \"49K\", \"4ps\", \"5u3\", \"ho\", \"8BE\", \n\"0eq\", \"FC\", \"aAn\", \"bdl\", \"bkm\", \"70T\", \"IB\", \"0jp\", \"307\", \"gn\", \"5z2\", \"5oS\", \"46J\", \"6Ga\", \"28G\", \"0Il\", \"13i\", \"z\", \"6iM\", \"5LO\", \n\"63G\", \"4Bl\", \"0yB\", \"Zp\", \"0T\", \"0Wn\", \"58i\", \"6Yc\", \"5d0\", \"4ap\", \"8SF\", \"yl\", \"1q2\", \"00z\", \"b1g\", \"aPm\", \"61v\", \"746\", \"1kR\", \"XA\", \n\"2e\", \"9Lf\", \"4nq\", \"5k1\", \"6Vb\", \"4cA\", \"0Xo\", \"2mL\", \"Uq\", \"02K\", \"4Mm\", \"6xN\", \"8YG\", \"7e\", \"5n1\", \"4kq\", \"716\", \"64v\", \"2KP\", \"1nR\", \n\"07K\", \"Pq\", \"69F\", \"4Hm\", \"4fA\", \"6Sb\", \"2hL\", \"1MN\", \"0Rn\", \"5T\", \"7LB\", \"5ya\", \"4Gl\", \"66G\", \"2Ia\", \"08J\", \"05z\", \"1t2\", \"aUm\", \"b4g\", \n\"4dp\", \"5a0\", \"8d\", \"8VF\", \"bn\", \"357\", \"4zr\", \"6OQ\", \"75T\", \"bnm\", \"0op\", \"LB\", \"Ar\", \"16i\", \"4Yn\", \"6lM\", \"6Ba\", \"43J\", \"0Ll\", \"2yO\", \n\"22F\", \"16\", \"4xC\", \"agr\", \"6cL\", \"4Vo\", \"0mA\", \"Ns\", \"CC\", \"14X\", \"bal\", \"aDn\", \"5p3\", \"4us\", \"8GE\", \"mo\", \"5L7\", \"4Iw\", \"06Q\", \"Qk\", 
\n\"1Y5\", \"1LT\", \"53r\", \"462\", \"7Oi\", \"4jk\", \"0QE\", \"rw\", \"2JJ\", \"1oH\", \"4DG\", \"65l\", \"7nD\", \"4KF\", \"0ph\", \"SZ\", \"2kg\", \"1Ne\", \"4ej\", \"6PI\", \n\"493\", \"4hZ\", \"0St\", \"4N\", \"0h9\", \"09P\", \"4Fv\", \"5C6\", \"4Xt\", \"6mW\", \"023\", \"0cZ\", \"0Mv\", \"nD\", \"4c9\", \"42P\", \"5kI\", \"6NK\", \"ct\", \"1Pg\", \n\"X9\", \"MX\", \"74N\", \"4UD\", \"4ZE\", \"6of\", \"BY\", \"W8\", \"0OG\", \"lu\", \"6AJ\", \"40a\", \"4yY\", \"4l8\", \"aE\", \"0Bw\", \"18r\", \"Oi\", \"5R5\", \"4Wu\", \n\"4EY\", \"4P8\", \"2KT\", \"1nV\", \"8YC\", \"7a\", \"5n5\", \"4ku\", \"4fE\", \"6Sf\", \"2hH\", \"k8\", \"07O\", \"Pu\", \"69B\", \"4Hi\", \"4Gh\", \"66C\", \"2Ie\", \"08N\", \n\"d9\", \"5P\", \"7LF\", \"4iD\", \"4dt\", \"5a4\", \"2jy\", \"3o9\", \"0qv\", \"RD\", \"7oZ\", \"4JX\", \"6ay\", \"4TZ\", \"0ot\", \"LF\", \"bj\", \"0AX\", \"4zv\", \"6OU\", \n\"6Be\", \"43N\", \"0Lh\", \"oZ\", \"Av\", \"0bD\", \"4Yj\", \"6lI\", \"6cH\", \"4Vk\", \"0mE\", \"Nw\", \"22B\", \"12\", \"4xG\", \"6Md\", \"5p7\", \"4uw\", \"0NY\", \"mk\", \n\"CG\", \"1pT\", \"5Kz\", \"6nx\", \"1Y1\", \"1LP\", \"53v\", \"466\", \"5L3\", \"4Is\", \"06U\", \"Qo\", \"2JN\", \"1oL\", \"4DC\", \"65h\", \"7Om\", \"4jo\", \"0QA\", \"rs\", \n\"9z\", \"1Na\", \"4en\", \"6PM\", \"aTs\", \"4KB\", \"04d\", \"2EO\", \"d6D\", \"09T\", \"4Fr\", \"5C2\", \"497\", \"bRm\", \"0Sp\", \"4J\", \"0Mr\", \"1H2\", \"aim\", \"42T\", \n\"4Xp\", \"6mS\", \"027\", \"17w\", \"0nn\", \"3Kl\", \"74J\", \"5Ea\", \"5kM\", \"6NO\", \"cp\", \"1Pc\", \"0OC\", \"lq\", \"6AN\", \"40e\", \"4ZA\", \"6ob\", \"2TL\", \"0ao\", \n\"18v\", \"Om\", \"5R1\", \"4Wq\", \"bCn\", \"afl\", \"aA\", \"0Bs\", \"07C\", \"Py\", \"69N\", \"4He\", \"4fI\", \"6Sj\", \"2hD\", \"k4\", \"0PW\", \"7m\", \"5n9\", \"4ky\", \n\"4EU\", \"4P4\", \"2KX\", \"1nZ\", \"05r\", \"RH\", \"7oV\", \"4JT\", \"4dx\", \"5a8\", \"8l\", \"1Ow\", \"d5\", \"qT\", \"7LJ\", \"4iH\", \"4Gd\", \"66O\", \"2Ii\", \"08B\", \n\"Az\", \"0bH\", \"4Yf\", \"6lE\", 
\"6Bi\", \"43B\", \"z7\", \"oV\", \"bf\", \"0AT\", \"4zz\", \"6OY\", \"4A7\", \"4TV\", \"0ox\", \"LJ\", \"CK\", \"14P\", \"5Kv\", \"4N6\", \n\"543\", \"41s\", \"0NU\", \"mg\", \"22N\", \"u6\", \"4xK\", \"6Mh\", \"6cD\", \"4Vg\", \"0mI\", \"2Xj\", \"7Oa\", \"4jc\", \"0QM\", \"6w\", \"2JB\", \"I2\", \"4DO\", \"65d\", \n\"68T\", \"b7D\", \"06Y\", \"Qc\", \"dSm\", \"287\", \"4gS\", \"4r2\", \"7MP\", \"4hR\", \"276\", \"4F\", \"0h1\", \"09X\", \"b8E\", \"67U\", \"7nL\", \"4KN\", \"F3\", \"SR\", \n\"9v\", \"1Nm\", \"4eb\", \"6PA\", \"5kA\", \"6NC\", \"21e\", \"1Po\", \"X1\", \"MP\", \"74F\", \"4UL\", \"bbO\", \"79v\", \"0v3\", \"0cR\", \"8Df\", \"nL\", \"4c1\", \"42X\", \n\"4yQ\", \"4l0\", \"aM\", \"8Kg\", \"0lS\", \"Oa\", \"76w\", \"637\", \"4ZM\", \"6on\", \"BQ\", \"W0\", \"0OO\", \"2zl\", \"6AB\", \"40i\", \"4fM\", \"6Sn\", \"3xa\", \"k0\", \n\"07G\", \"2Fl\", \"69J\", \"4Ha\", \"4EQ\", \"4P0\", \"d5g\", \"83o\", \"0PS\", \"7i\", \"a0D\", \"bQN\", \"50U\", \"hbt\", \"8h\", \"1Os\", \"05v\", \"RL\", \"7oR\", \"4JP\", \n\"5WA\", \"66K\", \"2Im\", \"08F\", \"d1\", \"5X\", \"7LN\", \"4iL\", \"6Bm\", \"43F\", \"z3\", \"oR\", \"2Wo\", \"0bL\", \"4Yb\", \"6lA\", \"4A3\", \"4TR\", \"8fd\", \"LN\", \n\"bb\", \"0AP\", \"cPl\", \"aeO\", \"547\", \"41w\", \"0NQ\", \"mc\", \"CO\", \"14T\", \"5Kr\", \"4N2\", \"77i\", \"4Vc\", \"0mM\", \"2Xn\", \"22J\", \"u2\", \"4xO\", \"6Ml\", \n\"2JF\", \"I6\", \"4DK\", \"6qh\", \"7Oe\", \"4jg\", \"0QI\", \"6s\", \"1Y9\", \"1LX\", \"4gW\", \"4r6\", \"68P\", \"5YZ\", \"0rU\", \"Qg\", \"0h5\", \"1mu\", \"4Fz\", \"67Q\", \n\"7MT\", \"4hV\", \"0Sx\", \"4B\", \"9r\", \"1Ni\", \"4ef\", \"6PE\", \"7nH\", \"4KJ\", \"F7\", \"SV\", \"X5\", \"MT\", \"74B\", \"4UH\", \"5kE\", \"6NG\", \"cx\", \"1Pk\", \n\"0Mz\", \"nH\", \"4c5\", \"4vT\", \"4Xx\", \"79r\", \"0v7\", \"0cV\", \"0lW\", \"Oe\", \"5R9\", \"4Wy\", \"4yU\", \"4l4\", \"aI\", \"1RZ\", \"0OK\", \"ly\", \"6AF\", \"40m\", \n\"4ZI\", \"6oj\", \"BU\", \"W4\", \"265\", \"5E\", \"488\", \"4iQ\", \"b9F\", 
\"66V\", \"0i2\", \"1lr\", \"G0\", \"RQ\", \"7oO\", \"4JM\", \"4da\", \"6QB\", \"8u\", \"1On\", \n\"0PN\", \"7t\", \"7Nb\", \"aa0\", \"4EL\", \"64g\", \"2KA\", \"H1\", \"07Z\", \"0f3\", \"69W\", \"b6G\", \"4fP\", \"479\", \"dRn\", \"294\", \"22W\", \"8Jd\", \"4xR\", \"4m3\", \n\"77t\", \"624\", \"0mP\", \"Nb\", \"CR\", \"V3\", \"5Ko\", \"6nm\", \"ajS\", \"41j\", \"0NL\", \"3kN\", \"20f\", \"0AM\", \"4zc\", \"aeR\", \"6al\", \"4TO\", \"Y2\", \"LS\", \n\"Ac\", \"0bQ\", \"bcL\", \"78u\", \"4b2\", \"4wS\", \"8Ee\", \"oO\", \"7nU\", \"4KW\", \"04q\", \"SK\", \"9o\", \"1Nt\", \"51R\", \"6PX\", \"7MI\", \"4hK\", \"e6\", \"pW\", \n\"2Hj\", \"09A\", \"4Fg\", \"67L\", \"68M\", \"4If\", \"0rH\", \"Qz\", \"2iG\", \"j7\", \"4gJ\", \"6Ri\", \"7Ox\", \"4jz\", \"0QT\", \"6n\", \"1z8\", \"1oY\", \"4DV\", \"4Q7\", \n\"4ZT\", \"4O5\", \"BH\", \"0az\", \"0OV\", \"ld\", \"550\", \"40p\", \"4yH\", \"6Lk\", \"aT\", \"t5\", \"0lJ\", \"Ox\", \"6bG\", \"4Wd\", \"4Xe\", \"6mF\", \"2Vh\", \"0cK\", \n\"0Mg\", \"nU\", \"6Cj\", \"42A\", \"5kX\", \"6NZ\", \"ce\", \"1Pv\", \"2N9\", \"MI\", \"7pW\", \"4UU\", \"4Gy\", \"5B9\", \"0i6\", \"1lv\", \"1BZ\", \"5A\", \"7LW\", \"4iU\", \n\"4de\", \"6QF\", \"8q\", \"1Oj\", \"G4\", \"RU\", \"7oK\", \"4JI\", \"4EH\", \"64c\", \"2KE\", \"H5\", \"0PJ\", \"7p\", \"7Nf\", \"4kd\", \"4fT\", \"4s5\", \"2hY\", \"290\", \n\"0sV\", \"Pd\", \"5M8\", \"4Hx\", \"6cY\", \"4Vz\", \"0mT\", \"Nf\", \"1F8\", \"0Cx\", \"4xV\", \"4m7\", \"7Pd\", \"41n\", \"0NH\", \"mz\", \"CV\", \"V7\", \"5Kk\", \"6ni\", \n\"6ah\", \"4TK\", \"Y6\", \"LW\", \"20b\", \"0AI\", \"4zg\", \"6OD\", \"4b6\", \"4wW\", \"0Ly\", \"oK\", \"Ag\", \"0bU\", \"5IZ\", \"6lX\", \"9k\", \"1Np\", \"51V\", \"azN\", \n\"7nQ\", \"4KS\", \"04u\", \"SO\", \"2Hn\", \"09E\", \"4Fc\", \"67H\", \"7MM\", \"4hO\", \"e2\", \"pS\", \"2iC\", \"j3\", \"4gN\", \"6Rm\", \"68I\", \"4Ib\", \"06D\", \"2Go\", \n\"d4d\", \"82l\", \"4DR\", \"4Q3\", \"a1G\", \"bPM\", \"0QP\", \"6j\", \"0OR\", \"0Z3\", \"554\", \"40t\", \"4ZP\", \"4O1\", 
\"BL\", \"15W\", \"0lN\", \"2Ym\", \"6bC\", \"5GA\", \n\"4yL\", \"6Lo\", \"aP\", \"09\", \"0Mc\", \"nQ\", \"6Cn\", \"42E\", \"4Xa\", \"6mB\", \"2Vl\", \"0cO\", \"8gg\", \"MM\", \"7pS\", \"4UQ\", \"bAN\", \"adL\", \"ca\", \"1Pr\", \n\"G8\", \"RY\", \"7oG\", \"4JE\", \"4di\", \"6QJ\", \"2jd\", \"1Of\", \"0Rw\", \"5M\", \"480\", \"4iY\", \"4Gu\", \"5B5\", \"2Ix\", \"08S\", \"07R\", \"Ph\", \"5M4\", \"4Ht\", \n\"4fX\", \"471\", \"1X6\", \"1MW\", \"0PF\", \"st\", \"7Nj\", \"4kh\", \"4ED\", \"64o\", \"2KI\", \"H9\", \"CZ\", \"14A\", \"5Kg\", \"6ne\", \"7Ph\", \"41b\", \"0ND\", \"mv\", \n\"1F4\", \"0Ct\", \"4xZ\", \"6My\", \"5S6\", \"4Vv\", \"0mX\", \"Nj\", \"Ak\", \"0bY\", \"4Yw\", \"6lT\", \"6Bx\", \"43S\", \"0Lu\", \"oG\", \"bw\", \"0AE\", \"4zk\", \"6OH\", \n\"6ad\", \"4TG\", \"0oi\", \"2ZJ\", \"7MA\", \"4hC\", \"0Sm\", \"4W\", \"2Hb\", \"09I\", \"4Fo\", \"67D\", \"aTn\", \"b5d\", \"04y\", \"SC\", \"9g\", \"8WE\", \"4es\", \"6PP\", \n\"5o2\", \"4jr\", \"8XD\", \"6f\", \"1z0\", \"1oQ\", \"705\", \"65u\", \"68E\", \"4In\", \"06H\", \"Qr\", \"2iO\", \"1LM\", \"4gB\", \"6Ra\", \"5ia\", \"6Lc\", \"23E\", \"05\", \n\"0lB\", \"Op\", \"6bO\", \"4Wl\", \"c4F\", \"aEm\", \"1d2\", \"0ar\", \"8FF\", \"ll\", \"558\", \"40x\", \"5kP\", \"6NR\", \"cm\", \"344\", \"0ns\", \"MA\", \"74W\", \"bon\", \n\"4Xm\", \"6mN\", \"3FA\", \"0cC\", \"0Mo\", \"2xL\", \"6Cb\", \"42I\", \"4dm\", \"6QN\", \"8y\", \"1Ob\", \"05g\", \"2DL\", \"7oC\", \"4JA\", \"4Gq\", \"5B1\", \"d7G\", \"08W\", \n\"0Rs\", \"5I\", \"484\", \"bSn\", \"52u\", \"475\", \"1X2\", \"1MS\", \"07V\", \"Pl\", \"5M0\", \"4Hp\", \"5Ua\", \"64k\", \"2KM\", \"1nO\", \"0PB\", \"7x\", \"7Nn\", \"4kl\", \n\"7Pl\", \"41f\", \"8GX\", \"mr\", \"2UO\", \"14E\", \"5Kc\", \"6na\", \"5S2\", \"4Vr\", \"19u\", \"Nn\", \"1F0\", \"0Cp\", \"bBm\", \"ago\", \"ahn\", \"43W\", \"0Lq\", \"oC\", \n\"Ao\", \"16t\", \"4Ys\", \"6lP\", \"75I\", \"4TC\", \"0om\", \"2ZN\", \"bs\", \"0AA\", \"4zo\", \"6OL\", \"2Hf\", \"09M\", \"4Fk\", \"6sH\", \"7ME\", 
\"4hG\", \"0Si\", \"4S\", \n\"9c\", \"1Nx\", \"4ew\", \"6PT\", \"7nY\", \"bqh\", \"0pu\", \"SG\", \"1z4\", \"1oU\", \"4DZ\", \"65q\", \"5o6\", \"4jv\", \"0QX\", \"6b\", \"2iK\", \"1LI\", \"4gF\", \"6Re\", \n\"68A\", \"4Ij\", \"06L\", \"Qv\", \"0lF\", \"Ot\", \"6bK\", \"4Wh\", \"4yD\", \"6Lg\", \"aX\", \"01\", \"0OZ\", \"lh\", \"5q4\", \"4tt\", \"4ZX\", \"4O9\", \"BD\", \"0av\", \n\"0nw\", \"ME\", \"74S\", \"4UY\", \"5kT\", \"6NV\", \"ci\", \"1Pz\", \"0Mk\", \"nY\", \"6Cf\", \"42M\", \"4Xi\", \"6mJ\", \"2Vd\", \"0cG\", \"bL\", \"8Hf\", \"4zP\", \"4o1\", \n\"75v\", \"606\", \"0oR\", \"0z3\", \"AP\", \"T1\", \"4YL\", \"6lo\", \"6BC\", \"43h\", \"0LN\", \"2ym\", \"22d\", \"0CO\", \"4xa\", \"6MB\", \"6cn\", \"4VM\", \"0mc\", \"NQ\", \n\"Ca\", \"14z\", \"baN\", \"aDL\", \"7PS\", \"41Y\", \"8Gg\", \"mM\", \"247\", \"7G\", \"7NQ\", \"4kS\", \"com\", \"64T\", \"0k0\", \"1np\", \"E2\", \"PS\", \"69d\", \"4HO\", \n\"4fc\", \"7Ca\", \"2hn\", \"1Ml\", \"0RL\", \"5v\", \"avS\", \"4ib\", \"4GN\", \"66e\", \"2IC\", \"J3\", \"05X\", \"Rb\", \"aUO\", \"b4E\", \"4dR\", \"4q3\", \"8F\", \"8Vd\", \n\"4XV\", \"4M7\", \"1f8\", \"0cx\", \"0MT\", \"nf\", \"572\", \"42r\", \"5kk\", \"6Ni\", \"cV\", \"v7\", \"0nH\", \"Mz\", \"74l\", \"4Uf\", \"4Zg\", \"6oD\", \"2Tj\", \"0aI\", \n\"y6\", \"lW\", \"6Ah\", \"40C\", \"5iZ\", \"583\", \"ag\", \"0BU\", \"0ly\", \"OK\", \"4B6\", \"4WW\", \"7lW\", \"4IU\", \"06s\", \"QI\", \"0I6\", \"1Lv\", \"4gy\", \"5b9\", \n\"7OK\", \"4jI\", \"g4\", \"rU\", \"2Jh\", \"1oj\", \"4De\", \"65N\", \"7nf\", \"4Kd\", \"04B\", \"Sx\", \"2kE\", \"h5\", \"4eH\", \"6Pk\", \"5m8\", \"4hx\", \"0SV\", \"4l\", \n\"2HY\", \"09r\", \"4FT\", \"4S5\", \"5Q8\", \"4Tx\", \"0oV\", \"Ld\", \"bH\", \"0Az\", \"4zT\", \"4o5\", \"6BG\", \"43l\", \"0LJ\", \"ox\", \"AT\", \"T5\", \"4YH\", \"6lk\", \n\"6cj\", \"4VI\", \"0mg\", \"NU\", \"2vh\", \"0CK\", \"4xe\", \"6MF\", \"7PW\", \"4uU\", \"2n9\", \"mI\", \"Ce\", \"1pv\", \"5KX\", \"6nZ\", \"5UZ\", \"64P\", \"0k4\", \"1nt\", \n\"0Py\", \"7C\", 
\"7NU\", \"4kW\", \"4fg\", \"6SD\", \"2hj\", \"1Mh\", \"E6\", \"PW\", \"7mI\", \"4HK\", \"4GJ\", \"66a\", \"2IG\", \"J7\", \"0RH\", \"5r\", \"7Ld\", \"4if\", \n\"4dV\", \"4q7\", \"8B\", \"1OY\", \"0qT\", \"Rf\", \"7ox\", \"4Jz\", \"0MP\", \"nb\", \"576\", \"42v\", \"4XR\", \"4M3\", \"dll\", \"17U\", \"0nL\", \"3KN\", \"74h\", \"4Ub\", \n\"5ko\", \"6Nm\", \"cR\", \"v3\", \"y2\", \"lS\", \"6Al\", \"40G\", \"4Zc\", \"aER\", \"2Tn\", \"0aM\", \"18T\", \"OO\", \"4B2\", \"4WS\", \"bCL\", \"587\", \"ac\", \"0BQ\", \n\"0I2\", \"1Lr\", \"53T\", \"axL\", \"68z\", \"4IQ\", \"06w\", \"QM\", \"2Jl\", \"1on\", \"4Da\", \"65J\", \"7OO\", \"4jM\", \"g0\", \"6Y\", \"9X\", \"h1\", \"4eL\", \"6Po\", \n\"7nb\", \"aA0\", \"04F\", \"2Em\", \"d6f\", \"09v\", \"4FP\", \"4S1\", \"a3E\", \"bRO\", \"0SR\", \"4h\", \"AX\", \"T9\", \"4YD\", \"6lg\", \"6BK\", \"4wh\", \"0LF\", \"ot\", \n\"bD\", \"0Av\", \"4zX\", \"4o9\", \"5Q4\", \"4Tt\", \"0oZ\", \"Lh\", \"Ci\", \"14r\", \"5KT\", \"6nV\", \"ajh\", \"41Q\", \"0Nw\", \"mE\", \"22l\", \"0CG\", \"4xi\", \"6MJ\", \n\"6cf\", \"4VE\", \"0mk\", \"NY\", \"07a\", \"2FJ\", \"69l\", \"4HG\", \"4fk\", \"6SH\", \"2hf\", \"1Md\", \"0Pu\", \"7O\", \"7NY\", \"bQh\", \"4Ew\", \"6pT\", \"0k8\", \"1nx\", \n\"05P\", \"Rj\", \"5O6\", \"4Jv\", \"4dZ\", \"453\", \"8N\", \"1OU\", \"0RD\", \"qv\", \"7Lh\", \"4ij\", \"4GF\", \"66m\", \"2IK\", \"1lI\", \"5kc\", \"6Na\", \"21G\", \"27\", \n\"8gX\", \"Mr\", \"74d\", \"4Un\", \"bbm\", \"79T\", \"1f0\", \"0cp\", \"397\", \"nn\", \"5s2\", \"42z\", \"4ys\", \"6LP\", \"ao\", \"366\", \"0lq\", \"OC\", \"76U\", \"bml\", \n\"4Zo\", \"6oL\", \"Bs\", \"0aA\", \"0Om\", \"2zN\", \"7QA\", \"40K\", \"7OC\", \"4jA\", \"0Qo\", \"6U\", \"3ZA\", \"1ob\", \"4Dm\", \"65F\", \"68v\", \"b7f\", \"0rs\", \"QA\", \n\"dSO\", \"8UG\", \"4gq\", \"5b1\", \"5m0\", \"4hp\", \"8ZF\", \"4d\", \"1x2\", \"09z\", \"727\", \"67w\", \"7nn\", \"4Kl\", \"04J\", \"Sp\", \"9T\", \"1NO\", \"51i\", \"6Pc\", \n\"6BO\", \"43d\", \"0LB\", \"op\", \"2WM\", \"0bn\", 
\"5Ia\", \"6lc\", \"5Q0\", \"4Tp\", \"8fF\", \"Ll\", \"1D2\", \"0Ar\", \"cPN\", \"aem\", \"ajl\", \"41U\", \"0Ns\", \"mA\", \n\"Cm\", \"14v\", \"5KP\", \"6nR\", \"6cb\", \"4VA\", \"0mo\", \"2XL\", \"22h\", \"0CC\", \"4xm\", \"6MN\", \"4fo\", \"6SL\", \"2hb\", \"8TY\", \"07e\", \"2FN\", \"69h\", \"4HC\", \n\"4Es\", \"64X\", \"d5E\", \"83M\", \"0Pq\", \"7K\", \"a0f\", \"bQl\", \"50w\", \"457\", \"8J\", \"1OQ\", \"05T\", \"Rn\", \"5O2\", \"4Jr\", \"4GB\", \"66i\", \"2IO\", \"08d\", \n\"1Ba\", \"5z\", \"7Ll\", \"4in\", \"0nD\", \"Mv\", \"7ph\", \"4Uj\", \"5kg\", \"6Ne\", \"cZ\", \"23\", \"0MX\", \"nj\", \"5s6\", \"4vv\", \"4XZ\", \"6my\", \"1f4\", \"0ct\", \n\"0lu\", \"OG\", \"6bx\", \"5Gz\", \"4yw\", \"6LT\", \"ak\", \"0BY\", \"0Oi\", \"2zJ\", \"6Ad\", \"40O\", \"4Zk\", \"6oH\", \"Bw\", \"0aE\", \"2Jd\", \"1of\", \"4Di\", \"65B\", \n\"7OG\", \"4jE\", \"g8\", \"6Q\", \"2ix\", \"1Lz\", \"4gu\", \"5b5\", \"68r\", \"4IY\", \"0rw\", \"QE\", \"1x6\", \"1mW\", \"4FX\", \"4S9\", \"5m4\", \"4ht\", \"0SZ\", \"ph\", \n\"9P\", \"h9\", \"4eD\", \"6Pg\", \"7nj\", \"4Kh\", \"04N\", \"St\", \"22u\", \"375\", \"4xp\", \"598\", \"77V\", \"blo\", \"0mr\", \"1h2\", \"Cp\", \"14k\", \"5KM\", \"6nO\", \n\"7PB\", \"41H\", \"0Nn\", \"3kl\", \"20D\", \"34\", \"4zA\", \"6Ob\", \"6aN\", \"4Tm\", \"0oC\", \"Lq\", \"AA\", \"0bs\", \"bcn\", \"78W\", \"569\", \"43y\", \"384\", \"om\", \n\"9Kd\", \"5g\", \"5l3\", \"4is\", \"734\", \"66t\", \"1y1\", \"08y\", \"05I\", \"Rs\", \"7om\", \"4Jo\", \"4dC\", \"7AA\", \"8W\", \"1OL\", \"0Pl\", \"7V\", \"ats\", \"4kB\", \n\"4En\", \"64E\", \"2Kc\", \"1na\", \"07x\", \"PB\", \"69u\", \"b6e\", \"4fr\", \"5c2\", \"dRL\", \"8TD\", \"4Zv\", \"6oU\", \"Bj\", \"0aX\", \"0Ot\", \"lF\", \"6Ay\", \"40R\", \n\"4yj\", \"6LI\", \"av\", \"0BD\", \"0lh\", \"OZ\", \"6be\", \"4WF\", \"4XG\", \"6md\", \"2VJ\", \"0ci\", \"0ME\", \"nw\", \"6CH\", \"42c\", \"5kz\", \"6Nx\", \"cG\", \"1PT\", \n\"0nY\", \"Mk\", \"5P7\", \"4Uw\", \"5N5\", \"4Ku\", \"04S\", \"Si\", \"9M\", \"1NV\", 
\"4eY\", \"440\", \"7Mk\", \"4hi\", \"0SG\", \"pu\", \"2HH\", \"K8\", \"4FE\", \"67n\", \n\"68o\", \"4ID\", \"1\", \"QX\", \"2ie\", \"1Lg\", \"4gh\", \"6RK\", \"7OZ\", \"4jX\", \"0Qv\", \"6L\", \"2Jy\", \"3O9\", \"4Dt\", \"5A4\", \"4C9\", \"4VX\", \"0mv\", \"ND\", \n\"22q\", \"0CZ\", \"4xt\", \"6MW\", \"7PF\", \"41L\", \"x9\", \"mX\", \"Ct\", \"14o\", \"5KI\", \"6nK\", \"6aJ\", \"4Ti\", \"0oG\", \"Lu\", \"bY\", \"30\", \"4zE\", \"6Of\", \n\"5r5\", \"4wu\", \"380\", \"oi\", \"AE\", \"0bw\", \"4YY\", \"4L8\", \"5Wz\", \"66p\", \"1y5\", \"1lT\", \"0RY\", \"5c\", \"5l7\", \"4iw\", \"4dG\", \"6Qd\", \"8S\", \"1OH\", \n\"05M\", \"Rw\", \"7oi\", \"4Jk\", \"4Ej\", \"64A\", \"2Kg\", \"1ne\", \"0Ph\", \"7R\", \"7ND\", \"4kF\", \"4fv\", \"5c6\", \"0H9\", \"1My\", \"0st\", \"PF\", \"69q\", \"4HZ\", \n\"0Op\", \"lB\", \"ako\", \"40V\", \"4Zr\", \"6oQ\", \"Bn\", \"15u\", \"0ll\", \"2YO\", \"6ba\", \"4WB\", \"4yn\", \"6LM\", \"ar\", \"1Ra\", \"0MA\", \"ns\", \"6CL\", \"42g\", \n\"4XC\", \"79I\", \"2VN\", \"0cm\", \"8gE\", \"Mo\", \"5P3\", \"4Us\", \"bAl\", \"adn\", \"cC\", \"1PP\", \"9I\", \"1NR\", \"51t\", \"444\", \"5N1\", \"4Kq\", \"04W\", \"Sm\", \n\"2HL\", \"09g\", \"4FA\", \"67j\", \"7Mo\", \"4hm\", \"0SC\", \"4y\", \"2ia\", \"1Lc\", \"4gl\", \"6RO\", \"68k\", \"5Ya\", \"5\", \"2GM\", \"d4F\", \"82N\", \"4Dp\", \"5A0\", \n\"a1e\", \"bPo\", \"0Qr\", \"6H\", \"Cx\", \"14c\", \"5KE\", \"6nG\", \"7PJ\", \"4uH\", \"x5\", \"mT\", \"0V7\", \"0CV\", \"4xx\", \"590\", \"4C5\", \"4VT\", \"0mz\", \"NH\", \n\"AI\", \"16R\", \"4YU\", \"4L4\", \"561\", \"43q\", \"0LW\", \"oe\", \"bU\", \"w4\", \"4zI\", \"6Oj\", \"6aF\", \"4Te\", \"0oK\", \"Ly\", \"05A\", \"2Dj\", \"7oe\", \"4Jg\", \n\"4dK\", \"6Qh\", \"2jF\", \"i6\", \"0RU\", \"5o\", \"7Ly\", \"5yZ\", \"4GW\", \"4R6\", \"1y9\", \"08q\", \"07p\", \"PJ\", \"7mT\", \"4HV\", \"4fz\", \"6SY\", \"0H5\", \"1Mu\", \n\"f7\", \"sV\", \"7NH\", \"4kJ\", \"4Ef\", \"64M\", \"2Kk\", \"1ni\", \"4yb\", \"6LA\", \"23g\", \"0BL\", \"Z3\", \"OR\", \"6bm\", 
\"4WN\", \"c4d\", \"aEO\", \"Bb\", \"0aP\", \n\"8Fd\", \"lN\", \"4a3\", \"40Z\", \"5kr\", \"4n2\", \"cO\", \"8Ie\", \"0nQ\", \"Mc\", \"74u\", \"615\", \"4XO\", \"6ml\", \"2VB\", \"U2\", \"0MM\", \"2xn\", \"7Sa\", \"42k\", \n\"7Mc\", \"4ha\", \"0SO\", \"4u\", \"3Xa\", \"K0\", \"4FM\", \"67f\", \"aTL\", \"b5F\", \"0pS\", \"Sa\", \"9E\", \"8Wg\", \"4eQ\", \"448\", \"7OR\", \"4jP\", \"254\", \"6D\", \n\"0j3\", \"1os\", \"cnn\", \"65W\", \"68g\", \"4IL\", \"9\", \"QP\", \"2im\", \"1Lo\", \"53I\", \"6RC\", \"7PN\", \"41D\", \"x1\", \"mP\", \"2Um\", \"14g\", \"5KA\", \"6nC\", \n\"4C1\", \"4VP\", \"19W\", \"NL\", \"0V3\", \"0CR\", \"bBO\", \"594\", \"565\", \"43u\", \"0LS\", \"oa\", \"AM\", \"16V\", \"4YQ\", \"4L0\", \"6aB\", \"4Ta\", \"0oO\", \"2Zl\", \n\"bQ\", \"38\", \"4zM\", \"6On\", \"4dO\", \"6Ql\", \"2jB\", \"i2\", \"05E\", \"2Dn\", \"7oa\", \"4Jc\", \"4GS\", \"4R2\", \"d7e\", \"08u\", \"0RQ\", \"5k\", \"a2F\", \"bSL\", \n\"52W\", \"ayO\", \"0H1\", \"1Mq\", \"07t\", \"PN\", \"69y\", \"4HR\", \"4Eb\", \"64I\", \"2Ko\", \"1nm\", \"f3\", \"7Z\", \"7NL\", \"4kN\", \"Z7\", \"OV\", \"6bi\", \"4WJ\", \n\"4yf\", \"6LE\", \"az\", \"0BH\", \"0Ox\", \"lJ\", \"4a7\", \"4tV\", \"4Zz\", \"6oY\", \"Bf\", \"0aT\", \"0nU\", \"Mg\", \"74q\", \"5EZ\", \"5kv\", \"4n6\", \"cK\", \"1PX\", \n\"0MI\", \"2xj\", \"6CD\", \"42o\", \"4XK\", \"6mh\", \"2VF\", \"U6\", \"2HD\", \"K4\", \"4FI\", \"67b\", \"7Mg\", \"4he\", \"0SK\", \"4q\", \"9A\", \"1NZ\", \"4eU\", \"4p4\", \n\"5N9\", \"4Ky\", \"0pW\", \"Se\", \"0j7\", \"1ow\", \"4Dx\", \"5A8\", \"7OV\", \"4jT\", \"0Qz\", \"rH\", \"2ii\", \"1Lk\", \"4gd\", \"6RG\", \"68c\", \"4IH\", \"D5\", \"QT\", \n\"5Ls\", \"4I3\", \"F\", \"13U\", \"0IP\", \"jb\", \"536\", \"46v\", \"5oo\", \"6Jm\", \"gR\", \"r3\", \"0jL\", \"3ON\", \"6dA\", \"4Qb\", \"5NB\", \"aAR\", \"2Pn\", \"0eM\", \n\"0Ka\", \"hS\", \"6El\", \"44G\", \"49w\", \"abN\", \"ec\", \"0FQ\", \"8ae\", \"KO\", \"4F2\", \"4SS\", \"4X0\", \"4MQ\", \"02w\", \"UM\", \"0M2\", \"0XS\", \"57T\", \"a8D\", 
\n\"7KO\", \"4nM\", \"c0\", \"2Y\", \"2Nl\", \"1kn\", \"aJ1\", \"61J\", \"6zC\", \"aE0\", \"00F\", \"2Am\", \"yP\", \"l1\", \"4aL\", \"6To\", \"a7E\", \"58U\", \"0WR\", \"0h\", \n\"ZL\", \"84n\", \"4BP\", \"4W1\", \"fH\", \"0Ez\", \"5nu\", \"4k5\", \"5U8\", \"4Px\", \"0kV\", \"Hd\", \"ET\", \"P5\", \"5Mi\", \"6hk\", \"6FG\", \"47l\", \"0HJ\", \"kx\", \n\"dy\", \"0GK\", \"48m\", \"6IF\", \"6gj\", \"4RI\", \"0ig\", \"JU\", \"Ge\", \"0dW\", \"5OX\", \"5Z9\", \"4d4\", \"4qU\", \"1ZZ\", \"iI\", \"0Ty\", \"3C\", \"4z6\", \"4oW\", \n\"5QZ\", \"60P\", \"Yg\", \"0zU\", \"A6\", \"TW\", \"6yh\", \"4LK\", \"4bg\", \"6WD\", \"2lj\", \"0YI\", \"0VH\", \"1r\", \"6XE\", \"4mf\", \"4CJ\", \"62a\", \"2MG\", \"N7\", \n\"0uT\", \"Vf\", \"7kx\", \"4Nz\", \"5pw\", \"4u7\", \"xJ\", \"1KY\", \"0IT\", \"jf\", \"532\", \"46r\", \"5Lw\", \"4I7\", \"B\", \"0gx\", \"0jH\", \"Iz\", \"6dE\", \"4Qf\", \n\"5ok\", \"6Ji\", \"gV\", \"r7\", \"0Ke\", \"hW\", \"6Eh\", \"44C\", \"5NF\", \"6kD\", \"2Pj\", \"0eI\", \"0hy\", \"KK\", \"4F6\", \"4SW\", \"49s\", \"6HX\", \"eg\", \"0FU\", \n\"0M6\", \"0XW\", \"4cy\", \"5f9\", \"4X4\", \"4MU\", \"02s\", \"UI\", \"Xy\", \"1kj\", \"5PD\", \"61N\", \"7KK\", \"4nI\", \"c4\", \"vU\", \"yT\", \"l5\", \"4aH\", \"6Tk\", \n\"6zG\", \"4Od\", \"00B\", \"Wx\", \"ZH\", \"0yz\", \"4BT\", \"4W5\", \"5i8\", \"4lx\", \"0WV\", \"0l\", \"71v\", \"646\", \"0kR\", \"3NP\", \"fL\", \"8Lf\", \"5nq\", \"4k1\", \n\"6FC\", \"47h\", \"0HN\", \"29e\", \"EP\", \"P1\", \"5Mm\", \"6ho\", \"6gn\", \"4RM\", \"0ic\", \"JQ\", \"26d\", \"0GO\", \"48i\", \"6IB\", \"4d0\", \"45Y\", \"8Cg\", \"iM\", \n\"Ga\", \"0dS\", \"beN\", \"hYu\", \"ckm\", \"60T\", \"Yc\", \"0zQ\", \"207\", \"3G\", \"4z2\", \"4oS\", \"4bc\", \"7Ga\", \"2ln\", \"0YM\", \"A2\", \"TS\", \"6yl\", \"4LO\", \n\"4CN\", \"62e\", \"2MC\", \"N3\", \"0VL\", \"1v\", \"6XA\", \"4mb\", \"5ps\", \"4u3\", \"xN\", \"8Rd\", \"01X\", \"Vb\", \"aQO\", \"b0E\", \"5og\", \"6Je\", \"gZ\", \"63\", \n\"0jD\", \"Iv\", \"6dI\", \"4Qj\", \"7l9\", \"6iy\", 
\"N\", \"0gt\", \"0IX\", \"jj\", \"5w6\", \"4rv\", \"5mV\", \"5x7\", \"ek\", \"0FY\", \"0hu\", \"KG\", \"6fx\", \"5Cz\", \n\"5NJ\", \"6kH\", \"Fw\", \"0eE\", \"92\", \"3nk\", \"6Ed\", \"44O\", \"7KG\", \"4nE\", \"c8\", \"2Q\", \"Xu\", \"1kf\", \"5PH\", \"61B\", \"4X8\", \"4MY\", \"0vw\", \"UE\", \n\"2mx\", \"1Hz\", \"4cu\", \"5f5\", \"5i4\", \"4lt\", \"0WZ\", \"th\", \"ZD\", \"0yv\", \"4BX\", \"4W9\", \"6zK\", \"4Oh\", \"00N\", \"Wt\", \"yX\", \"l9\", \"4aD\", \"6Tg\", \n\"2SM\", \"0fn\", \"5Ma\", \"6hc\", \"6FO\", \"47d\", \"0HB\", \"kp\", \"24Y\", \"0Er\", \"bDo\", \"aam\", \"5U0\", \"4Pp\", \"8bF\", \"Hl\", \"Gm\", \"10v\", \"5OP\", \"5Z1\", \n\"anl\", \"45U\", \"0Js\", \"iA\", \"dq\", \"0GC\", \"48e\", \"6IN\", \"6gb\", \"4RA\", \"0io\", \"3Lm\", \"03e\", \"2BN\", \"7iA\", \"4LC\", \"4bo\", \"6WL\", \"zs\", \"0YA\", \n\"0Tq\", \"3K\", \"a4f\", \"bUl\", \"4As\", \"5D3\", \"Yo\", \"87M\", \"01T\", \"Vn\", \"5K2\", \"4Nr\", \"54w\", \"417\", \"xB\", \"1KQ\", \"1Fa\", \"1z\", \"6XM\", \"4mn\", \n\"4CB\", \"62i\", \"2MO\", \"0xl\", \"1za\", \"Ir\", \"6dM\", \"4Qn\", \"5oc\", \"6Ja\", \"25G\", \"67\", \"9Pe\", \"jn\", \"5w2\", \"46z\", \"bfm\", \"aCo\", \"J\", \"0gp\", \n\"0hq\", \"KC\", \"72U\", \"bil\", \"5mR\", \"5x3\", \"eo\", \"326\", \"96\", \"3no\", \"7UA\", \"44K\", \"5NN\", \"6kL\", \"Fs\", \"0eA\", \"Xq\", \"1kb\", \"5PL\", \"61F\", \n\"7KC\", \"4nA\", \"0Uo\", \"2U\", \"39U\", \"8QG\", \"4cq\", \"5f1\", \"aRl\", \"796\", \"0vs\", \"UA\", \"2LQ\", \"0yr\", \"767\", \"63w\", \"5i0\", \"4lp\", \"9Ng\", \"0d\", \n\"2oM\", \"0Zn\", \"55i\", \"6Tc\", \"6zO\", \"4Ol\", \"00J\", \"Wp\", \"6FK\", \"4sh\", \"0HF\", \"kt\", \"EX\", \"P9\", \"5Me\", \"6hg\", \"5U4\", \"4Pt\", \"0kZ\", \"Hh\", \n\"fD\", \"0Ev\", \"5ny\", \"4k9\", \"4d8\", \"45Q\", \"0Jw\", \"iE\", \"Gi\", \"10r\", \"5OT\", \"5Z5\", \"6gf\", \"4RE\", \"0ik\", \"JY\", \"du\", \"0GG\", \"48a\", \"6IJ\", \n\"4bk\", \"6WH\", \"zw\", \"0YE\", \"03a\", \"2BJ\", \"6yd\", \"4LG\", \"4Aw\", \"5D7\", \"Yk\", 
\"0zY\", \"0Tu\", \"3O\", \"6Zx\", \"bUh\", \"54s\", \"413\", \"xF\", \"1KU\", \n\"01P\", \"Vj\", \"5K6\", \"4Nv\", \"4CF\", \"62m\", \"2MK\", \"0xh\", \"0VD\", \"uv\", \"6XI\", \"4mj\", \"5NS\", \"6kQ\", \"Fn\", \"11u\", \"0Kp\", \"hB\", \"aoo\", \"44V\", \n\"49f\", \"6HM\", \"er\", \"1Va\", \"0hl\", \"3Mn\", \"6fa\", \"4SB\", \"5Lb\", \"7yA\", \"W\", \"0gm\", \"0IA\", \"js\", \"6GL\", \"46g\", \"bEl\", \"hyW\", \"gC\", \"0Dq\", \n\"8cE\", \"Io\", \"5T3\", \"4Qs\", \"5J1\", \"4Oq\", \"00W\", \"Wm\", \"yA\", \"0Zs\", \"55t\", \"404\", \"6YN\", \"4lm\", \"0WC\", \"0y\", \"2LL\", \"0yo\", \"4BA\", \"63j\", \n\"6xc\", \"bws\", \"02f\", \"2CM\", \"2ma\", \"0XB\", \"4cl\", \"6VO\", \"a5e\", \"bTo\", \"0Ur\", \"2H\", \"Xl\", \"86N\", \"5PQ\", \"5E0\", \"dh\", \"0GZ\", \"5lU\", \"5y4\", \n\"4G9\", \"4RX\", \"0iv\", \"JD\", \"Gt\", \"0dF\", \"5OI\", \"6jK\", \"6Dg\", \"45L\", \"81\", \"iX\", \"fY\", \"70\", \"5nd\", \"6Kf\", \"6eJ\", \"4Pi\", \"0kG\", \"Hu\", \n\"EE\", \"0fw\", \"5Mx\", \"4H8\", \"5v5\", \"4su\", \"1Xz\", \"ki\", \"0VY\", \"1c\", \"5h7\", \"4mw\", \"5Sz\", \"62p\", \"2MV\", \"0xu\", \"01M\", \"Vw\", \"7ki\", \"4Nk\", \n\"54n\", \"6Ud\", \"2nJ\", \"1KH\", \"0Th\", \"3R\", \"6Ze\", \"4oF\", \"4Aj\", \"60A\", \"Yv\", \"0zD\", \"0wt\", \"TF\", \"6yy\", \"4LZ\", \"4bv\", \"5g6\", \"zj\", \"0YX\", \n\"0Kt\", \"hF\", \"6Ey\", \"44R\", \"5NW\", \"6kU\", \"Fj\", \"0eX\", \"0hh\", \"KZ\", \"6fe\", \"4SF\", \"49b\", \"6HI\", \"ev\", \"0FD\", \"0IE\", \"jw\", \"6GH\", \"46c\", \n\"5Lf\", \"6id\", \"S\", \"0gi\", \"0jY\", \"Ik\", \"5T7\", \"4Qw\", \"5oz\", \"6Jx\", \"gG\", \"0Du\", \"yE\", \"0Zw\", \"4aY\", \"400\", \"5J5\", \"4Ou\", \"00S\", \"Wi\", \n\"ZY\", \"O8\", \"4BE\", \"63n\", \"6YJ\", \"4li\", \"0WG\", \"tu\", \"2me\", \"0XF\", \"4ch\", \"6VK\", \"6xg\", \"4MD\", \"02b\", \"UX\", \"Xh\", \"3K9\", \"5PU\", \"5E4\", \n\"7KZ\", \"4nX\", \"0Uv\", \"2L\", \"73V\", \"bho\", \"0ir\", \"1l2\", \"dl\", \"335\", \"48x\", \"5y0\", \"6Dc\", \"45H\", \"85\", \"3ol\", 
\"Gp\", \"0dB\", \"5OM\", \"6jO\", \n\"6eN\", \"4Pm\", \"0kC\", \"Hq\", \"24D\", \"74\", \"bDr\", \"6Kb\", \"529\", \"47y\", \"8AG\", \"km\", \"EA\", \"0fs\", \"bgn\", \"aBl\", \"774\", \"62t\", \"199\", \"0xq\", \n\"9Od\", \"1g\", \"5h3\", \"4ms\", \"54j\", \"7EA\", \"2nN\", \"1KL\", \"01I\", \"Vs\", \"7km\", \"4No\", \"4An\", \"60E\", \"Yr\", \"1ja\", \"0Tl\", \"3V\", \"6Za\", \"4oB\", \n\"4br\", \"5g2\", \"zn\", \"8PD\", \"03x\", \"TB\", \"aSo\", \"785\", \"49n\", \"6HE\", \"ez\", \"0FH\", \"0hd\", \"KV\", \"6fi\", \"4SJ\", \"bdI\", \"6kY\", \"Ff\", \"0eT\", \n\"0Kx\", \"hJ\", \"4e7\", \"4pV\", \"5ov\", \"4j6\", \"gK\", \"0Dy\", \"0jU\", \"Ig\", \"6dX\", \"5AZ\", \"5Lj\", \"6ih\", \"DW\", \"Q6\", \"0II\", \"28b\", \"6GD\", \"46o\", \n\"6YF\", \"4le\", \"0WK\", \"0q\", \"ZU\", \"O4\", \"4BI\", \"63b\", \"5J9\", \"4Oy\", \"0tW\", \"We\", \"yI\", \"1JZ\", \"4aU\", \"4t4\", \"7KV\", \"4nT\", \"0Uz\", \"vH\", \n\"Xd\", \"1kw\", \"5PY\", \"5E8\", \"6xk\", \"4MH\", \"02n\", \"UT\", \"2mi\", \"0XJ\", \"4cd\", \"6VG\", \"2Qm\", \"0dN\", \"5OA\", \"6jC\", \"6Do\", \"45D\", \"89\", \"iP\", \n\"0R3\", \"0GR\", \"48t\", \"acM\", \"4G1\", \"4RP\", \"94O\", \"JL\", \"EM\", \"12V\", \"5Mp\", \"4H0\", \"525\", \"47u\", \"0HS\", \"ka\", \"fQ\", \"78\", \"5nl\", \"6Kn\", \n\"6eB\", \"4Pa\", \"0kO\", \"3NM\", \"01E\", \"3PO\", \"7ka\", \"4Nc\", \"54f\", \"6Ul\", \"xS\", \"m2\", \"0VQ\", \"1k\", \"a6F\", \"59V\", \"4CS\", \"4V2\", \"195\", \"85m\", \n\"03t\", \"TN\", \"4Y3\", \"4LR\", \"56W\", \"a9G\", \"zb\", \"0YP\", \"b3\", \"3Z\", \"6Zm\", \"4oN\", \"4Ab\", \"60I\", \"2Oo\", \"0zL\", \"1xA\", \"KR\", \"6fm\", \"4SN\", \n\"49j\", \"6HA\", \"27g\", \"0FL\", \"8Bd\", \"hN\", \"4e3\", \"44Z\", \"bdM\", \"aAO\", \"Fb\", \"0eP\", \"0jQ\", \"Ic\", \"70u\", \"655\", \"5or\", \"4j2\", \"gO\", \"8Me\", \n\"0IM\", \"28f\", \"7Wa\", \"46k\", \"5Ln\", \"6il\", \"DS\", \"Q2\", \"ZQ\", \"O0\", \"4BM\", \"63f\", \"6YB\", \"4la\", \"0WO\", \"0u\", \"yM\", \"8Sg\", \"4aQ\", \"408\", \n\"aPL\", 
\"b1F\", \"0tS\", \"Wa\", \"0n3\", \"1ks\", \"bzO\", \"61W\", \"7KR\", \"4nP\", \"214\", \"2D\", \"2mm\", \"0XN\", \"57I\", \"6VC\", \"6xo\", \"4ML\", \"02j\", \"UP\", \n\"6Dk\", \"4qH\", \"0Jf\", \"iT\", \"Gx\", \"0dJ\", \"5OE\", \"6jG\", \"4G5\", \"4RT\", \"0iz\", \"JH\", \"dd\", \"0GV\", \"48p\", \"5y8\", \"521\", \"47q\", \"0HW\", \"ke\", \n\"EI\", \"12R\", \"5Mt\", \"4H4\", \"6eF\", \"4Pe\", \"0kK\", \"Hy\", \"fU\", \"s4\", \"5nh\", \"6Kj\", \"54b\", \"6Uh\", \"xW\", \"m6\", \"01A\", \"3PK\", \"7ke\", \"4Ng\", \n\"4CW\", \"4V6\", \"191\", \"0xy\", \"0VU\", \"1o\", \"6XX\", \"59R\", \"4bz\", \"6WY\", \"zf\", \"0YT\", \"03p\", \"TJ\", \"4Y7\", \"4LV\", \"4Af\", \"60M\", \"Yz\", \"0zH\", \n\"b7\", \"wV\", \"6Zi\", \"4oJ\", \"5H3\", \"4Ms\", \"02U\", \"Uo\", \"2mR\", \"0Xq\", \"57v\", \"426\", \"7Km\", \"4no\", \"0UA\", \"vs\", \"2NN\", \"1kL\", \"5Pb\", \"61h\", \n\"6za\", \"4OB\", \"00d\", \"2AO\", \"yr\", \"1Ja\", \"4an\", \"6TM\", \"a7g\", \"58w\", \"0Wp\", \"0J\", \"Zn\", \"84L\", \"4Br\", \"5G2\", \"5LQ\", \"5Y0\", \"d\", \"13w\", \n\"0Ir\", \"1L2\", \"amm\", \"46T\", \"5oM\", \"6JO\", \"gp\", \"0DB\", \"0jn\", \"3Ol\", \"6dc\", \"5Aa\", \"bdr\", \"6kb\", \"2PL\", \"0eo\", \"0KC\", \"hq\", \"6EN\", \"44e\", \n\"49U\", \"abl\", \"eA\", \"0Fs\", \"8aG\", \"Km\", \"5V1\", \"4Sq\", \"1Dz\", \"3a\", \"5j5\", \"4ou\", \"4AY\", \"4T8\", \"YE\", \"0zw\", \"03O\", \"Tu\", \"6yJ\", \"4Li\", \n\"4bE\", \"6Wf\", \"zY\", \"o8\", \"0Vj\", \"1P\", \"6Xg\", \"4mD\", \"4Ch\", \"62C\", \"2Me\", \"0xF\", \"0uv\", \"VD\", \"7kZ\", \"4NX\", \"5pU\", \"5e4\", \"xh\", \"3k9\", \n\"fj\", \"0EX\", \"5nW\", \"6KU\", \"6ey\", \"4PZ\", \"0kt\", \"HF\", \"Ev\", \"0fD\", \"5MK\", \"6hI\", \"6Fe\", \"47N\", \"0Hh\", \"kZ\", \"26B\", \"52\", \"48O\", \"6Id\", \n\"6gH\", \"4Rk\", \"0iE\", \"Jw\", \"GG\", \"0du\", \"5Oz\", \"6jx\", \"5t7\", \"4qw\", \"0JY\", \"ik\", \"2mV\", \"0Xu\", \"57r\", \"422\", \"5H7\", \"4Mw\", \"02Q\", \"Uk\", \n\"2NJ\", \"1kH\", \"5Pf\", \"61l\", \"7Ki\", \"4nk\", 
\"0UE\", \"vw\", \"yv\", \"0ZD\", \"4aj\", \"6TI\", \"6ze\", \"4OF\", \"0th\", \"WZ\", \"Zj\", \"0yX\", \"4Bv\", \"5G6\", \n\"6Yy\", \"4lZ\", \"0Wt\", \"0N\", \"0Iv\", \"jD\", \"4g9\", \"46P\", \"5LU\", \"5Y4\", \"Dh\", \"0gZ\", \"0jj\", \"IX\", \"6dg\", \"4QD\", \"5oI\", \"6JK\", \"gt\", \"0DF\", \n\"0KG\", \"hu\", \"6EJ\", \"44a\", \"5Nd\", \"6kf\", \"FY\", \"S8\", \"1xz\", \"Ki\", \"5V5\", \"4Su\", \"49Q\", \"4h8\", \"eE\", \"0Fw\", \"756\", \"60v\", \"YA\", \"0zs\", \n\"9Mf\", \"3e\", \"5j1\", \"4oq\", \"4bA\", \"6Wb\", \"2lL\", \"0Yo\", \"03K\", \"Tq\", \"6yN\", \"4Lm\", \"4Cl\", \"62G\", \"2Ma\", \"0xB\", \"0Vn\", \"1T\", \"6Xc\", \"59i\", \n\"54Y\", \"5e0\", \"xl\", \"8RF\", \"01z\", \"1p2\", \"aQm\", \"b0g\", \"71T\", \"bjm\", \"0kp\", \"HB\", \"fn\", \"317\", \"5nS\", \"6KQ\", \"6Fa\", \"47J\", \"0Hl\", \"29G\", \n\"Er\", \"12i\", \"5MO\", \"6hM\", \"6gL\", \"4Ro\", \"0iA\", \"Js\", \"26F\", \"56\", \"48K\", \"7YA\", \"5t3\", \"4qs\", \"8CE\", \"io\", \"GC\", \"0dq\", \"bel\", \"hYW\", \n\"7Ke\", \"4ng\", \"0UI\", \"2s\", \"XW\", \"M6\", \"5Pj\", \"6uh\", \"6xX\", \"6m9\", \"0vU\", \"Ug\", \"2mZ\", \"0Xy\", \"4cW\", \"4v6\", \"4y7\", \"4lV\", \"0Wx\", \"0B\", \n\"Zf\", \"0yT\", \"4Bz\", \"63Q\", \"6zi\", \"4OJ\", \"B7\", \"WV\", \"yz\", \"0ZH\", \"4af\", \"6TE\", \"5oE\", \"6JG\", \"gx\", \"0DJ\", \"0jf\", \"IT\", \"6dk\", \"4QH\", \n\"5LY\", \"5Y8\", \"l\", \"0gV\", \"0Iz\", \"jH\", \"4g5\", \"4rT\", \"5mt\", \"4h4\", \"eI\", \"1VZ\", \"0hW\", \"Ke\", \"5V9\", \"4Sy\", \"5Nh\", \"6kj\", \"FU\", \"S4\", \n\"0KK\", \"hy\", \"6EF\", \"44m\", \"03G\", \"2Bl\", \"6yB\", \"4La\", \"4bM\", \"6Wn\", \"zQ\", \"o0\", \"0TS\", \"3i\", \"a4D\", \"bUN\", \"4AQ\", \"4T0\", \"YM\", \"87o\", \n\"01v\", \"VL\", \"7kR\", \"4NP\", \"54U\", \"hft\", \"0N3\", \"1Ks\", \"0Vb\", \"1X\", \"6Xo\", \"4mL\", \"5SA\", \"62K\", \"2Mm\", \"0xN\", \"2So\", \"0fL\", \"5MC\", \"6hA\", \n\"6Fm\", \"47F\", \"1XA\", \"kR\", \"fb\", \"0EP\", \"bDM\", \"aaO\", \"4E3\", \"4PR\", \"8bd\", 
\"HN\", \"GO\", \"10T\", \"5Or\", \"4J2\", \"507\", \"45w\", \"0JQ\", \"ic\", \n\"dS\", \"q2\", \"48G\", \"6Il\", \"73i\", \"4Rc\", \"0iM\", \"3LO\", \"XS\", \"M2\", \"5Pn\", \"61d\", \"7Ka\", \"4nc\", \"0UM\", \"2w\", \"39w\", \"8Qe\", \"4cS\", \"4v2\", \n\"aRN\", \"b3D\", \"02Y\", \"Uc\", \"Zb\", \"0yP\", \"bxM\", \"63U\", \"4y3\", \"4lR\", \"236\", \"0F\", \"2oo\", \"0ZL\", \"4ab\", \"6TA\", \"6zm\", \"4ON\", \"B3\", \"WR\", \n\"0jb\", \"IP\", \"6do\", \"4QL\", \"5oA\", \"6JC\", \"25e\", \"0DN\", \"9PG\", \"jL\", \"4g1\", \"46X\", \"686\", \"aCM\", \"h\", \"0gR\", \"0hS\", \"Ka\", \"72w\", \"677\", \n\"49Y\", \"4h0\", \"eM\", \"8Og\", \"0KO\", \"3nM\", \"6EB\", \"44i\", \"5Nl\", \"6kn\", \"FQ\", \"S0\", \"4bI\", \"6Wj\", \"zU\", \"o4\", \"03C\", \"Ty\", \"6yF\", \"4Le\", \n\"4AU\", \"4T4\", \"YI\", \"1jZ\", \"0TW\", \"3m\", \"5j9\", \"4oy\", \"54Q\", \"5e8\", \"xd\", \"1Kw\", \"01r\", \"VH\", \"7kV\", \"4NT\", \"4Cd\", \"62O\", \"2Mi\", \"0xJ\", \n\"0Vf\", \"uT\", \"6Xk\", \"4mH\", \"6Fi\", \"47B\", \"0Hd\", \"kV\", \"Ez\", \"0fH\", \"5MG\", \"6hE\", \"4E7\", \"4PV\", \"0kx\", \"HJ\", \"ff\", \"0ET\", \"bDI\", \"6KY\", \n\"503\", \"45s\", \"0JU\", \"ig\", \"GK\", \"0dy\", \"5Ov\", \"4J6\", \"6gD\", \"4Rg\", \"0iI\", \"3LK\", \"dW\", \"q6\", \"48C\", \"6Ih\", \"4Z2\", \"4OS\", \"00u\", \"WO\", \n\"yc\", \"0ZQ\", \"55V\", \"hgw\", \"6Yl\", \"4lO\", \"a2\", \"tS\", \"2Ln\", \"0yM\", \"4Bc\", \"63H\", \"6xA\", \"4Mb\", \"02D\", \"2Co\", \"2mC\", \"n3\", \"4cN\", \"6Vm\", \n\"a5G\", \"bTM\", \"0UP\", \"2j\", \"XN\", \"86l\", \"5Ps\", \"4U3\", \"5Nq\", \"4K1\", \"FL\", \"11W\", \"0KR\", \"3nP\", \"514\", \"44t\", \"49D\", \"6Ho\", \"eP\", \"49\", \n\"0hN\", \"3ML\", \"6fC\", \"5CA\", \"aV1\", \"6iB\", \"u\", \"0gO\", \"0Ic\", \"jQ\", \"6Gn\", \"46E\", \"bEN\", \"hyu\", \"ga\", \"0DS\", \"8cg\", \"IM\", \"4D0\", \"4QQ\", \n\"1FZ\", \"1A\", \"4x4\", \"4mU\", \"4Cy\", \"5F9\", \"0m6\", \"0xW\", \"C4\", \"VU\", \"7kK\", \"4NI\", \"54L\", \"6UF\", \"xy\", \"1Kj\", \"0TJ\", 
\"3p\", \"6ZG\", \"4od\", \n\"4AH\", \"60c\", \"YT\", \"L5\", \"0wV\", \"Td\", \"5I8\", \"4Lx\", \"4bT\", \"4w5\", \"zH\", \"0Yz\", \"dJ\", \"0Gx\", \"5lw\", \"4i7\", \"6gY\", \"4Rz\", \"0iT\", \"Jf\", \n\"GV\", \"R7\", \"5Ok\", \"6ji\", \"6DE\", \"45n\", \"0JH\", \"iz\", \"24b\", \"0EI\", \"5nF\", \"6KD\", \"6eh\", \"4PK\", \"0ke\", \"HW\", \"Eg\", \"0fU\", \"5MZ\", \"6hX\", \n\"4f6\", \"4sW\", \"0Hy\", \"kK\", \"yg\", \"0ZU\", \"55R\", \"6TX\", \"4Z6\", \"4OW\", \"00q\", \"WK\", \"2Lj\", \"0yI\", \"4Bg\", \"63L\", \"6Yh\", \"4lK\", \"a6\", \"tW\", \n\"2mG\", \"n7\", \"4cJ\", \"6Vi\", \"6xE\", \"4Mf\", \"0vH\", \"Uz\", \"XJ\", \"1kY\", \"5Pw\", \"4U7\", \"7Kx\", \"4nz\", \"0UT\", \"2n\", \"0KV\", \"hd\", \"510\", \"44p\", \n\"5Nu\", \"4K5\", \"FH\", \"0ez\", \"0hJ\", \"Kx\", \"6fG\", \"4Sd\", \"5mi\", \"6Hk\", \"eT\", \"p5\", \"0Ig\", \"jU\", \"6Gj\", \"46A\", \"5LD\", \"6iF\", \"q\", \"0gK\", \n\"1zZ\", \"II\", \"4D4\", \"4QU\", \"5oX\", \"5z9\", \"ge\", \"0DW\", \"byN\", \"62V\", \"0m2\", \"0xS\", \"225\", \"1E\", \"4x0\", \"4mQ\", \"54H\", \"6UB\", \"2nl\", \"1Kn\", \n\"C0\", \"VQ\", \"7kO\", \"4NM\", \"4AL\", \"60g\", \"YP\", \"L1\", \"0TN\", \"3t\", \"6ZC\", \"ae0\", \"4bP\", \"439\", \"zL\", \"8Pf\", \"03Z\", \"0b3\", \"aSM\", \"b2G\", \n\"73t\", \"664\", \"0iP\", \"Jb\", \"dN\", \"8Nd\", \"48Z\", \"4i3\", \"6DA\", \"45j\", \"0JL\", \"3oN\", \"GR\", \"R3\", \"5Oo\", \"6jm\", \"6el\", \"4PO\", \"0ka\", \"HS\", \n\"24f\", \"0EM\", \"5nB\", \"aaR\", \"4f2\", \"4sS\", \"8Ae\", \"kO\", \"Ec\", \"0fQ\", \"695\", \"aBN\", \"6Yd\", \"4lG\", \"0Wi\", \"0S\", \"Zw\", \"0yE\", \"4Bk\", \"6wH\", \n\"6zx\", \"buh\", \"0tu\", \"WG\", \"yk\", \"0ZY\", \"4aw\", \"5d7\", \"5k6\", \"4nv\", \"0UX\", \"2b\", \"XF\", \"1kU\", \"741\", \"61q\", \"6xI\", \"4Mj\", \"02L\", \"Uv\", \n\"2mK\", \"0Xh\", \"4cF\", \"6Ve\", \"49L\", \"6Hg\", \"eX\", \"41\", \"0hF\", \"Kt\", \"6fK\", \"4Sh\", \"5Ny\", \"4K9\", \"FD\", \"0ev\", \"0KZ\", \"hh\", \"5u4\", \"4pt\", \n\"5oT\", \"5z5\", \"gi\", 
\"1Tz\", \"0jw\", \"IE\", \"4D8\", \"4QY\", \"5LH\", \"6iJ\", \"Du\", \"0gG\", \"0Ik\", \"jY\", \"6Gf\", \"46M\", \"01g\", \"3Pm\", \"7kC\", \"4NA\", \n\"54D\", \"6UN\", \"xq\", \"1Kb\", \"0Vs\", \"1I\", \"a6d\", \"59t\", \"4Cq\", \"5F1\", \"d3G\", \"85O\", \"03V\", \"Tl\", \"5I0\", \"4Lp\", \"56u\", \"435\", \"2lQ\", \"0Yr\", \n\"0TB\", \"3x\", \"6ZO\", \"4ol\", \"5Qa\", \"60k\", \"2OM\", \"0zn\", \"2QO\", \"0dl\", \"5Oc\", \"6ja\", \"6DM\", \"45f\", \"1Za\", \"ir\", \"dB\", \"0Gp\", \"48V\", \"aco\", \n\"5W2\", \"4Rr\", \"94m\", \"Jn\", \"Eo\", \"12t\", \"5MR\", \"5X3\", \"aln\", \"47W\", \"0Hq\", \"kC\", \"fs\", \"0EA\", \"5nN\", \"6KL\", \"71I\", \"4PC\", \"0km\", \"3No\", \n\"Zs\", \"0yA\", \"4Bo\", \"63D\", \"7IA\", \"4lC\", \"0Wm\", \"0W\", \"yo\", \"8SE\", \"4as\", \"5d3\", \"aPn\", \"b1d\", \"00y\", \"WC\", \"XB\", \"1kQ\", \"745\", \"61u\", \n\"5k2\", \"4nr\", \"9Le\", \"2f\", \"2mO\", \"0Xl\", \"4cB\", \"6Va\", \"6xM\", \"4Mn\", \"02H\", \"Ur\", \"0hB\", \"Kp\", \"6fO\", \"4Sl\", \"49H\", \"6Hc\", \"27E\", \"45\", \n\"8BF\", \"hl\", \"518\", \"44x\", \"bdo\", \"aAm\", \"2PQ\", \"0er\", \"0js\", \"IA\", \"70W\", \"bkn\", \"5oP\", \"5z1\", \"gm\", \"304\", \"0Io\", \"28D\", \"6Gb\", \"46I\", \n\"5LL\", \"6iN\", \"y\", \"0gC\", \"5pH\", \"6UJ\", \"xu\", \"1Kf\", \"C8\", \"VY\", \"7kG\", \"4NE\", \"4Cu\", \"5F5\", \"2Mx\", \"1hz\", \"0Vw\", \"1M\", \"4x8\", \"4mY\", \n\"4bX\", \"431\", \"zD\", \"0Yv\", \"03R\", \"Th\", \"5I4\", \"4Lt\", \"4AD\", \"60o\", \"YX\", \"L9\", \"0TF\", \"wt\", \"6ZK\", \"4oh\", \"6DI\", \"45b\", \"0JD\", \"iv\", \n\"GZ\", \"0dh\", \"5Og\", \"6je\", \"5W6\", \"4Rv\", \"0iX\", \"Jj\", \"dF\", \"0Gt\", \"48R\", \"6Iy\", \"6Fx\", \"47S\", \"0Hu\", \"kG\", \"Ek\", \"0fY\", \"5MV\", \"5X7\", \n\"6ed\", \"4PG\", \"0ki\", \"3Nk\", \"fw\", \"0EE\", \"5nJ\", \"6KH\", \"356\", \"bo\", \"6OP\", \"4zs\", \"bnl\", \"75U\", \"LC\", \"0oq\", \"0bA\", \"As\", \"6lL\", \"4Yo\", \n\"43K\", \"7RA\", \"2yN\", \"0Lm\", \"17\", \"22G\", \"6Ma\", \"4xB\", 
\"4Vn\", \"6cM\", \"Nr\", \"19i\", \"14Y\", \"CB\", \"aDo\", \"bam\", \"41z\", \"5p2\", \"mn\", \"8GD\", \n\"7d\", \"8YF\", \"4kp\", \"5n0\", \"64w\", \"717\", \"1nS\", \"2KQ\", \"Pp\", \"07J\", \"4Hl\", \"69G\", \"6Sc\", \"52i\", \"1MO\", \"2hM\", \"5U\", \"0Ro\", \"4iA\", \"7LC\", \n\"66F\", \"4Gm\", \"08K\", \"3YA\", \"RA\", \"0qs\", \"b4f\", \"aUl\", \"5a1\", \"4dq\", \"8VG\", \"8e\", \"6mV\", \"4Xu\", \"17r\", \"022\", \"nE\", \"0Mw\", \"42Q\", \"4c8\", \n\"6NJ\", \"5kH\", \"1Pf\", \"cu\", \"MY\", \"X8\", \"4UE\", \"74O\", \"6og\", \"4ZD\", \"W9\", \"BX\", \"lt\", \"0OF\", \"4th\", \"6AK\", \"4l9\", \"4yX\", \"0Bv\", \"aD\", \n\"Oh\", \"0lZ\", \"4Wt\", \"5R4\", \"4Iv\", \"5L6\", \"Qj\", \"06P\", \"1LU\", \"1Y4\", \"463\", \"4gZ\", \"4jj\", \"7Oh\", \"rv\", \"0QD\", \"1oI\", \"2JK\", \"65m\", \"4DF\", \n\"4KG\", \"7nE\", \"2EJ\", \"04a\", \"1Nd\", \"2kf\", \"6PH\", \"4ek\", \"5xz\", \"492\", \"4O\", \"0Su\", \"09Q\", \"0h8\", \"5C7\", \"4Fw\", \"5Dz\", \"6ax\", \"LG\", \"0ou\", \n\"0AY\", \"bk\", \"6OT\", \"4zw\", \"43O\", \"6Bd\", \"2yJ\", \"0Li\", \"0bE\", \"Aw\", \"6lH\", \"4Yk\", \"4Vj\", \"6cI\", \"Nv\", \"0mD\", \"13\", \"22C\", \"6Me\", \"4xF\", \n\"4uv\", \"5p6\", \"mj\", \"0NX\", \"1pU\", \"CF\", \"6ny\", \"7k9\", \"4P9\", \"4EX\", \"1nW\", \"2KU\", \"sh\", \"0PZ\", \"4kt\", \"5n4\", \"6Sg\", \"4fD\", \"k9\", \"2hI\", \n\"Pt\", \"07N\", \"4Hh\", \"69C\", \"66B\", \"4Gi\", \"08O\", \"2Id\", \"5Q\", \"d8\", \"4iE\", \"7LG\", \"5a5\", \"4du\", \"1Oz\", \"8a\", \"RE\", \"0qw\", \"4JY\", \"aUh\", \n\"nA\", \"0Ms\", \"42U\", \"ail\", \"6mR\", \"4Xq\", \"17v\", \"026\", \"3Km\", \"0no\", \"4UA\", \"74K\", \"6NN\", \"5kL\", \"1Pb\", \"cq\", \"lp\", \"0OB\", \"40d\", \"6AO\", \n\"6oc\", \"5Ja\", \"0an\", \"2TM\", \"Ol\", \"18w\", \"4Wp\", \"5R0\", \"afm\", \"bCo\", \"0Br\", \"1G2\", \"1LQ\", \"1Y0\", \"467\", \"53w\", \"4Ir\", \"5L2\", \"Qn\", \"06T\", \n\"1oM\", \"2JO\", \"65i\", \"4DB\", \"4jn\", \"7Ol\", \"6z\", \"1Aa\", \"8WY\", \"2kb\", \"6PL\", \"4eo\", 
\"4KC\", \"7nA\", \"2EN\", \"04e\", \"09U\", \"d6E\", \"5C3\", \"4Fs\", \n\"bRl\", \"496\", \"4K\", \"0Sq\", \"0bI\", \"2Wj\", \"6lD\", \"4Yg\", \"43C\", \"6Bh\", \"oW\", \"z6\", \"0AU\", \"bg\", \"6OX\", \"5jZ\", \"4TW\", \"4A6\", \"LK\", \"0oy\", \n\"14Q\", \"CJ\", \"4N7\", \"5Kw\", \"41r\", \"542\", \"mf\", \"0NT\", \"u7\", \"22O\", \"6Mi\", \"4xJ\", \"4Vf\", \"6cE\", \"Nz\", \"0mH\", \"Px\", \"07B\", \"4Hd\", \"69O\", \n\"6Sk\", \"4fH\", \"k5\", \"2hE\", \"7l\", \"0PV\", \"4kx\", \"5n8\", \"4P5\", \"4ET\", \"83j\", \"2KY\", \"RI\", \"05s\", \"4JU\", \"7oW\", \"5a9\", \"4dy\", \"1Ov\", \"8m\", \n\"qU\", \"d4\", \"4iI\", \"7LK\", \"66N\", \"4Ge\", \"08C\", \"2Ih\", \"6NB\", \"a59\", \"1Pn\", \"21d\", \"MQ\", \"X0\", \"4UM\", \"74G\", \"79w\", \"bbN\", \"0cS\", \"0v2\", \n\"nM\", \"8Dg\", \"42Y\", \"4c0\", \"4l1\", \"4yP\", \"8Kf\", \"aL\", \"0y3\", \"0lR\", \"636\", \"76v\", \"6oo\", \"4ZL\", \"W1\", \"BP\", \"2zm\", \"0ON\", \"40h\", \"6AC\", \n\"4jb\", \"auS\", \"6v\", \"0QL\", \"I3\", \"2JC\", \"65e\", \"4DN\", \"b7E\", \"68U\", \"Qb\", \"06X\", \"286\", \"dSl\", \"4r3\", \"4gR\", \"4hS\", \"7MQ\", \"4G\", \"277\", \n\"09Y\", \"0h0\", \"67T\", \"b8D\", \"4KO\", \"7nM\", \"SS\", \"F2\", \"1Nl\", \"9w\", \"azR\", \"4ec\", \"43G\", \"6Bl\", \"oS\", \"z2\", \"0bM\", \"2Wn\", \"78i\", \"4Yc\", \n\"4TS\", \"4A2\", \"LO\", \"8fe\", \"0AQ\", \"bc\", \"aeN\", \"cPm\", \"41v\", \"546\", \"mb\", \"0NP\", \"14U\", \"CN\", \"4N3\", \"5Ks\", \"4Vb\", \"6cA\", \"2Xo\", \"0mL\", \n\"u3\", \"22K\", \"6Mm\", \"4xN\", \"6So\", \"4fL\", \"k1\", \"2hA\", \"2Fm\", \"07F\", \"5XA\", \"69K\", \"4P1\", \"4EP\", \"83n\", \"d5f\", \"7h\", \"0PR\", \"bQO\", \"a0E\", \n\"hbu\", \"50T\", \"1Or\", \"8i\", \"RM\", \"05w\", \"4JQ\", \"7oS\", \"66J\", \"4Ga\", \"08G\", \"2Il\", \"5Y\", \"d0\", \"4iM\", \"7LO\", \"MU\", \"X4\", \"4UI\", \"74C\", \n\"6NF\", \"5kD\", \"1Pj\", \"cy\", \"nI\", \"2m9\", \"4vU\", \"4c4\", \"6mZ\", \"4Xy\", \"0cW\", \"0v6\", \"Od\", \"0lV\", \"4Wx\", \"5R8\", \"4l5\", 
\"4yT\", \"0Bz\", \"aH\", \n\"lx\", \"0OJ\", \"40l\", \"6AG\", \"6ok\", \"4ZH\", \"W5\", \"BT\", \"I7\", \"2JG\", \"65a\", \"4DJ\", \"4jf\", \"7Od\", \"6r\", \"0QH\", \"1LY\", \"1Y8\", \"4r7\", \"4gV\", \n\"4Iz\", \"68Q\", \"Qf\", \"0rT\", \"1mt\", \"0h4\", \"67P\", \"5VZ\", \"4hW\", \"7MU\", \"4C\", \"0Sy\", \"1Nh\", \"9s\", \"6PD\", \"4eg\", \"4KK\", \"7nI\", \"SW\", \"F6\", \n\"8Je\", \"22V\", \"4m2\", \"4xS\", \"625\", \"77u\", \"Nc\", \"0mQ\", \"V2\", \"CS\", \"6nl\", \"5Kn\", \"41k\", \"7Pa\", \"3kO\", \"0NM\", \"0AL\", \"20g\", \"6OA\", \"4zb\", \n\"4TN\", \"6am\", \"LR\", \"Y3\", \"0bP\", \"Ab\", \"78t\", \"bcM\", \"43Z\", \"4b3\", \"oN\", \"8Ed\", \"5D\", \"264\", \"4iP\", \"489\", \"66W\", \"b9G\", \"08Z\", \"0i3\", \n\"RP\", \"G1\", \"4JL\", \"7oN\", \"6QC\", \"50I\", \"1Oo\", \"8t\", \"7u\", \"0PO\", \"4ka\", \"7Nc\", \"64f\", \"4EM\", \"H0\", \"963\", \"Pa\", \"0sS\", \"b6F\", \"69V\", \n\"478\", \"4fQ\", \"295\", \"dRo\", \"4O4\", \"4ZU\", \"15R\", \"BI\", \"le\", \"0OW\", \"40q\", \"551\", \"6Lj\", \"4yI\", \"t4\", \"aU\", \"Oy\", \"0lK\", \"4We\", \"6bF\", \n\"6mG\", \"4Xd\", \"0cJ\", \"2Vi\", \"nT\", \"0Mf\", \"4vH\", \"6Ck\", \"adI\", \"5kY\", \"1Pw\", \"cd\", \"MH\", \"0nz\", \"4UT\", \"7pV\", \"4KV\", \"7nT\", \"SJ\", \"04p\", \n\"1Nu\", \"9n\", \"6PY\", \"4ez\", \"4hJ\", \"7MH\", \"pV\", \"e7\", \"1mi\", \"2Hk\", \"67M\", \"4Ff\", \"4Ig\", \"68L\", \"2Gj\", \"06A\", \"j6\", \"2iF\", \"6Rh\", \"4gK\", \n\"5zZ\", \"7Oy\", \"6o\", \"0QU\", \"1oX\", \"1z9\", \"4Q6\", \"4DW\", \"5FZ\", \"6cX\", \"Ng\", \"0mU\", \"0Cy\", \"1F9\", \"4m6\", \"4xW\", \"41o\", \"7Pe\", \"3kK\", \"0NI\", \n\"V6\", \"CW\", \"6nh\", \"5Kj\", \"4TJ\", \"6ai\", \"LV\", \"Y7\", \"0AH\", \"bz\", \"6OE\", \"4zf\", \"4wV\", \"4b7\", \"oJ\", \"0Lx\", \"0bT\", \"Af\", \"6lY\", \"4Yz\", \n\"5B8\", \"4Gx\", \"1lw\", \"0i7\", \"qH\", \"0Rz\", \"4iT\", \"7LV\", \"6QG\", \"4dd\", \"1Ok\", \"8p\", \"RT\", \"G5\", \"4JH\", \"7oJ\", \"64b\", \"4EI\", \"H4\", \"2KD\", \n\"7q\", \"0PK\", 
\"4ke\", \"7Ng\", \"4s4\", \"4fU\", \"1MZ\", \"2hX\", \"Pe\", \"0sW\", \"4Hy\", \"5M9\", \"la\", \"0OS\", \"40u\", \"555\", \"4O0\", \"4ZQ\", \"15V\", \"BM\", \n\"2Yl\", \"0lO\", \"4Wa\", \"6bB\", \"6Ln\", \"4yM\", \"08\", \"aQ\", \"nP\", \"0Mb\", \"42D\", \"6Co\", \"6mC\", \"5HA\", \"0cN\", \"2Vm\", \"ML\", \"8gf\", \"4UP\", \"74Z\", \n\"adM\", \"bAO\", \"1Ps\", \"0U3\", \"1Nq\", \"9j\", \"azO\", \"51W\", \"4KR\", \"7nP\", \"SN\", \"04t\", \"09D\", \"2Ho\", \"67I\", \"4Fb\", \"4hN\", \"7ML\", \"4Z\", \"e3\", \n\"j2\", \"2iB\", \"6Rl\", \"4gO\", \"4Ic\", \"68H\", \"2Gn\", \"06E\", \"82m\", \"d4e\", \"4Q2\", \"4DS\", \"bPL\", \"a1F\", \"6k\", \"0QQ\", \"1pH\", \"2UJ\", \"6nd\", \"5Kf\", \n\"41c\", \"7Pi\", \"mw\", \"0NE\", \"0Cu\", \"1F5\", \"6Mx\", \"5hz\", \"4Vw\", \"5S7\", \"Nk\", \"0mY\", \"0bX\", \"Aj\", \"6lU\", \"4Yv\", \"43R\", \"6By\", \"oF\", \"0Lt\", \n\"0AD\", \"bv\", \"6OI\", \"4zj\", \"4TF\", \"6ae\", \"LZ\", \"0oh\", \"RX\", \"G9\", \"4JD\", \"7oF\", \"6QK\", \"4dh\", \"1Og\", \"2je\", \"5L\", \"0Rv\", \"4iX\", \"481\", \n\"5B4\", \"4Gt\", \"08R\", \"2Iy\", \"Pi\", \"07S\", \"4Hu\", \"5M5\", \"470\", \"4fY\", \"1MV\", \"1X7\", \"su\", \"0PG\", \"4ki\", \"7Nk\", \"64n\", \"4EE\", \"H8\", \"2KH\", \n\"6Lb\", \"4yA\", \"04\", \"23D\", \"Oq\", \"0lC\", \"4Wm\", \"6bN\", \"aEl\", \"c4G\", \"0as\", \"BA\", \"lm\", \"8FG\", \"40y\", \"559\", \"6NS\", \"5kQ\", \"345\", \"cl\", \n\"1k2\", \"0nr\", \"boo\", \"74V\", \"6mO\", \"4Xl\", \"0cB\", \"2Va\", \"2xM\", \"0Mn\", \"42H\", \"6Cc\", \"4hB\", \"aws\", \"4V\", \"0Sl\", \"09H\", \"2Hc\", \"67E\", \"4Fn\", \n\"b5e\", \"aTo\", \"SB\", \"04x\", \"8WD\", \"9f\", \"6PQ\", \"4er\", \"4js\", \"5o3\", \"6g\", \"8XE\", \"1oP\", \"1z1\", \"65t\", \"704\", \"4Io\", \"68D\", \"Qs\", \"06I\", \n\"1LL\", \"2iN\", \"7BA\", \"4gC\", \"41g\", \"7Pm\", \"ms\", \"0NA\", \"14D\", \"2UN\", \"aDr\", \"5Kb\", \"4Vs\", \"5S3\", \"No\", \"19t\", \"0Cq\", \"1F1\", \"agn\", \"bBl\", \n\"43V\", \"aho\", \"oB\", \"0Lp\", \"16u\", \"An\", 
\"6lQ\", \"4Yr\", \"4TB\", \"6aa\", \"2ZO\", \"0ol\", \"1Qa\", \"br\", \"6OM\", \"4zn\", \"6QO\", \"4dl\", \"1Oc\", \"8x\", \n\"2DM\", \"05f\", \"5Za\", \"7oB\", \"5B0\", \"4Gp\", \"08V\", \"d7F\", \"5H\", \"0Rr\", \"bSo\", \"485\", \"474\", \"52t\", \"1MR\", \"1X3\", \"Pm\", \"07W\", \"4Hq\", \"5M1\", \n\"64j\", \"4EA\", \"1nN\", \"2KL\", \"7y\", \"0PC\", \"4km\", \"7No\", \"Ou\", \"0lG\", \"4Wi\", \"6bJ\", \"6Lf\", \"4yE\", \"00\", \"aY\", \"li\", \"8FC\", \"4tu\", \"5q5\", \n\"4O8\", \"4ZY\", \"0aw\", \"BE\", \"MD\", \"0nv\", \"4UX\", \"74R\", \"6NW\", \"5kU\", \"341\", \"ch\", \"nX\", \"0Mj\", \"42L\", \"6Cg\", \"6mK\", \"4Xh\", \"0cF\", \"2Ve\", \n\"09L\", \"2Hg\", \"67A\", \"4Fj\", \"4hF\", \"7MD\", \"4R\", \"0Sh\", \"1Ny\", \"9b\", \"6PU\", \"4ev\", \"4KZ\", \"7nX\", \"SF\", \"0pt\", \"1oT\", \"1z5\", \"65p\", \"5Tz\", \n\"4jw\", \"5o7\", \"6c\", \"0QY\", \"1LH\", \"2iJ\", \"6Rd\", \"4gG\", \"4Ik\", \"7li\", \"Qw\", \"06M\", \"7F\", \"246\", \"4kR\", \"7NP\", \"64U\", \"col\", \"1nq\", \"0k1\", \n\"PR\", \"E3\", \"4HN\", \"69e\", \"6SA\", \"4fb\", \"1Mm\", \"2ho\", \"5w\", \"0RM\", \"4ic\", \"7La\", \"66d\", \"4GO\", \"J2\", \"2IB\", \"Rc\", \"05Y\", \"b4D\", \"aUN\", \n\"4q2\", \"4dS\", \"8Ve\", \"8G\", \"8Hg\", \"bM\", \"4o0\", \"4zQ\", \"607\", \"75w\", \"La\", \"0oS\", \"T0\", \"AQ\", \"6ln\", \"4YM\", \"43i\", \"6BB\", \"2yl\", \"0LO\", \n\"0CN\", \"22e\", \"6MC\", \"5hA\", \"4VL\", \"6co\", \"NP\", \"0mb\", \"1ps\", \"0u3\", \"aDM\", \"baO\", \"41X\", \"7PR\", \"mL\", \"8Gf\", \"4IT\", \"7lV\", \"QH\", \"06r\", \n\"1Lw\", \"0I7\", \"5b8\", \"4gx\", \"4jH\", \"7OJ\", \"rT\", \"g5\", \"1ok\", \"2Ji\", \"65O\", \"4Dd\", \"4Ke\", \"7ng\", \"Sy\", \"04C\", \"h4\", \"2kD\", \"6Pj\", \"4eI\", \n\"4hy\", \"5m9\", \"4m\", \"0SW\", \"09s\", \"2HX\", \"4S4\", \"4FU\", \"4M6\", \"4XW\", \"0cy\", \"1f9\", \"ng\", \"0MU\", \"42s\", \"573\", \"6Nh\", \"5kj\", \"v6\", \"cW\", \n\"3KK\", \"0nI\", \"4Ug\", \"74m\", \"6oE\", \"4Zf\", \"0aH\", \"Bz\", \"lV\", \"y7\", 
\"40B\", \"6Ai\", \"582\", \"4yz\", \"0BT\", \"af\", \"OJ\", \"0lx\", \"4WV\", \"4B7\", \n\"64Q\", \"4Ez\", \"1nu\", \"0k5\", \"7B\", \"0Px\", \"4kV\", \"7NT\", \"6SE\", \"4ff\", \"1Mi\", \"2hk\", \"PV\", \"E7\", \"4HJ\", \"69a\", \"6rh\", \"4GK\", \"J6\", \"2IF\", \n\"5s\", \"0RI\", \"4ig\", \"7Le\", \"4q6\", \"4dW\", \"1OX\", \"8C\", \"Rg\", \"0qU\", \"5ZZ\", \"7oy\", \"4Ty\", \"5Q9\", \"Le\", \"0oW\", \"1QZ\", \"bI\", \"4o4\", \"4zU\", \n\"43m\", \"6BF\", \"oy\", \"0LK\", \"T4\", \"AU\", \"6lj\", \"4YI\", \"4VH\", \"6ck\", \"NT\", \"0mf\", \"0CJ\", \"22a\", \"6MG\", \"4xd\", \"4uT\", \"7PV\", \"mH\", \"0Nz\", \n\"1pw\", \"Cd\", \"aDI\", \"5KY\", \"1Ls\", \"0I3\", \"axM\", \"53U\", \"4IP\", \"7lR\", \"QL\", \"06v\", \"1oo\", \"2Jm\", \"65K\", \"5TA\", \"4jL\", \"7ON\", \"6X\", \"g1\", \n\"h0\", \"9Y\", \"6Pn\", \"4eM\", \"4Ka\", \"7nc\", \"2El\", \"04G\", \"09w\", \"d6g\", \"4S0\", \"4FQ\", \"bRN\", \"a3D\", \"4i\", \"0SS\", \"nc\", \"0MQ\", \"42w\", \"577\", \n\"4M2\", \"4XS\", \"17T\", \"dlm\", \"3KO\", \"0nM\", \"4Uc\", \"74i\", \"6Nl\", \"5kn\", \"v2\", \"cS\", \"lR\", \"y3\", \"40F\", \"6Am\", \"6oA\", \"4Zb\", \"0aL\", \"2To\", \n\"ON\", \"18U\", \"4WR\", \"4B3\", \"586\", \"bCM\", \"0BP\", \"ab\", \"PZ\", \"0sh\", \"4HF\", \"69m\", \"6SI\", \"4fj\", \"1Me\", \"2hg\", \"7N\", \"0Pt\", \"4kZ\", \"7NX\", \n\"6pU\", \"4Ev\", \"1ny\", \"0k9\", \"Rk\", \"05Q\", \"4Jw\", \"5O7\", \"452\", \"50r\", \"1OT\", \"8O\", \"qw\", \"0RE\", \"4ik\", \"7Li\", \"66l\", \"4GG\", \"08a\", \"2IJ\", \n\"T8\", \"AY\", \"6lf\", \"4YE\", \"43a\", \"6BJ\", \"ou\", \"0LG\", \"0Aw\", \"bE\", \"4o8\", \"4zY\", \"4Tu\", \"5Q5\", \"Li\", \"8fC\", \"14s\", \"Ch\", \"6nW\", \"5KU\", \n\"41P\", \"7PZ\", \"mD\", \"0Nv\", \"0CF\", \"22m\", \"6MK\", \"4xh\", \"4VD\", \"6cg\", \"NX\", \"0mj\", \"5za\", \"7OB\", \"6T\", \"0Qn\", \"1oc\", \"2Ja\", \"65G\", \"4Dl\", \n\"b7g\", \"68w\", \"1w2\", \"06z\", \"8UF\", \"dSN\", \"5b0\", \"4gp\", \"4hq\", \"5m1\", \"4e\", \"8ZG\", \"1mR\", \"1x3\", 
\"67v\", \"726\", \"4Km\", \"7no\", \"Sq\", \"04K\", \n\"1NN\", \"9U\", \"6Pb\", \"4eA\", \"adr\", \"5kb\", \"26\", \"21F\", \"Ms\", \"0nA\", \"4Uo\", \"74e\", \"79U\", \"bbl\", \"0cq\", \"1f1\", \"no\", \"396\", \"4vs\", \"5s3\", \n\"6LQ\", \"4yr\", \"367\", \"an\", \"OB\", \"0lp\", \"bmm\", \"76T\", \"6oM\", \"4Zn\", \"15i\", \"Br\", \"2zO\", \"0Ol\", \"40J\", \"6Aa\", \"6SM\", \"4fn\", \"1Ma\", \"2hc\", \n\"2FO\", \"07d\", \"4HB\", \"69i\", \"64Y\", \"4Er\", \"83L\", \"d5D\", \"7J\", \"0Pp\", \"bQm\", \"a0g\", \"456\", \"50v\", \"1OP\", \"8K\", \"Ro\", \"05U\", \"4Js\", \"5O3\", \n\"66h\", \"4GC\", \"08e\", \"2IN\", \"qs\", \"0RA\", \"4io\", \"7Lm\", \"43e\", \"6BN\", \"oq\", \"0LC\", \"0bo\", \"2WL\", \"6lb\", \"4YA\", \"4Tq\", \"5Q1\", \"Lm\", \"8fG\", \n\"0As\", \"bA\", \"ael\", \"cPO\", \"41T\", \"ajm\", \"1K2\", \"0Nr\", \"14w\", \"Cl\", \"6nS\", \"5KQ\", \"5Fa\", \"6cc\", \"2XM\", \"0mn\", \"0CB\", \"22i\", \"6MO\", \"4xl\", \n\"1og\", \"2Je\", \"65C\", \"4Dh\", \"4jD\", \"7OF\", \"6P\", \"g9\", \"3l9\", \"2iy\", \"5b4\", \"4gt\", \"4IX\", \"68s\", \"QD\", \"0rv\", \"1mV\", \"1x7\", \"4S8\", \"4FY\", \n\"4hu\", \"5m5\", \"4a\", \"1Cz\", \"h8\", \"9Q\", \"6Pf\", \"4eE\", \"4Ki\", \"7nk\", \"Su\", \"04O\", \"Mw\", \"0nE\", \"4Uk\", \"74a\", \"6Nd\", \"5kf\", \"22\", \"21B\", \n\"nk\", \"0MY\", \"4vw\", \"5s7\", \"6mx\", \"5Hz\", \"0cu\", \"1f5\", \"OF\", \"0lt\", \"4WZ\", \"6by\", \"6LU\", \"4yv\", \"0BX\", \"aj\", \"lZ\", \"0Oh\", \"40N\", \"6Ae\", \n\"6oI\", \"4Zj\", \"0aD\", \"Bv\", \"5f\", \"9Ke\", \"4ir\", \"5l2\", \"66u\", \"735\", \"08x\", \"1y0\", \"Rr\", \"05H\", \"4Jn\", \"7ol\", \"6Qa\", \"4dB\", \"1OM\", \"8V\", \n\"7W\", \"0Pm\", \"4kC\", \"7NA\", \"64D\", \"4Eo\", \"83Q\", \"2Kb\", \"PC\", \"07y\", \"b6d\", \"69t\", \"5c3\", \"4fs\", \"8TE\", \"dRM\", \"374\", \"22t\", \"599\", \"4xq\", \n\"bln\", \"77W\", \"NA\", \"0ms\", \"14j\", \"Cq\", \"6nN\", \"5KL\", \"41I\", \"7PC\", \"3km\", \"0No\", \"35\", \"20E\", \"6Oc\", \"5ja\", \"4Tl\", \"6aO\", 
\"Lp\", \"0oB\", \n\"0br\", \"1g2\", \"78V\", \"bco\", \"43x\", \"568\", \"ol\", \"385\", \"4Kt\", \"5N4\", \"Sh\", \"04R\", \"1NW\", \"9L\", \"441\", \"4eX\", \"4hh\", \"7Mj\", \"pt\", \"0SF\", \n\"K9\", \"2HI\", \"67o\", \"4FD\", \"4IE\", \"68n\", \"QY\", \"0\", \"1Lf\", \"2id\", \"6RJ\", \"4gi\", \"4jY\", \"auh\", \"6M\", \"0Qw\", \"1oz\", \"2Jx\", \"5A5\", \"4Du\", \n\"6oT\", \"4Zw\", \"0aY\", \"Bk\", \"lG\", \"0Ou\", \"40S\", \"6Ax\", \"6LH\", \"4yk\", \"0BE\", \"aw\", \"2YJ\", \"0li\", \"4WG\", \"6bd\", \"6me\", \"4XF\", \"0ch\", \"2VK\", \n\"nv\", \"0MD\", \"42b\", \"6CI\", \"6Ny\", \"7K9\", \"1PU\", \"cF\", \"Mj\", \"0nX\", \"4Uv\", \"5P6\", \"66q\", \"4GZ\", \"1lU\", \"1y4\", \"5b\", \"0RX\", \"4iv\", \"5l6\", \n\"6Qe\", \"4dF\", \"1OI\", \"8R\", \"Rv\", \"05L\", \"4Jj\", \"7oh\", \"6pH\", \"4Ek\", \"1nd\", \"2Kf\", \"7S\", \"0Pi\", \"4kG\", \"7NE\", \"5c7\", \"4fw\", \"1Mx\", \"0H8\", \n\"PG\", \"0su\", \"5Xz\", \"69p\", \"4VY\", \"4C8\", \"NE\", \"0mw\", \"1Sz\", \"22p\", \"6MV\", \"4xu\", \"41M\", \"7PG\", \"mY\", \"x8\", \"14n\", \"Cu\", \"6nJ\", \"5KH\", \n\"4Th\", \"6aK\", \"Lt\", \"0oF\", \"31\", \"bX\", \"6Og\", \"4zD\", \"4wt\", \"5r4\", \"oh\", \"0LZ\", \"0bv\", \"AD\", \"4L9\", \"4YX\", \"1NS\", \"9H\", \"445\", \"51u\", \n\"4Kp\", \"5N0\", \"Sl\", \"04V\", \"09f\", \"2HM\", \"67k\", \"5Va\", \"4hl\", \"7Mn\", \"4x\", \"0SB\", \"1Lb\", \"3yA\", \"6RN\", \"4gm\", \"4IA\", \"68j\", \"2GL\", \"4\", \n\"82O\", \"d4G\", \"5A1\", \"4Dq\", \"bPn\", \"a1d\", \"6I\", \"0Qs\", \"lC\", \"0Oq\", \"40W\", \"akn\", \"6oP\", \"4Zs\", \"15t\", \"Bo\", \"2YN\", \"0lm\", \"4WC\", \"76I\", \n\"6LL\", \"4yo\", \"0BA\", \"as\", \"nr\", \"8DX\", \"42f\", \"6CM\", \"6ma\", \"4XB\", \"0cl\", \"2VO\", \"Mn\", \"8gD\", \"4Ur\", \"5P2\", \"ado\", \"bAm\", \"1PQ\", \"cB\", \n\"Rz\", \"0qH\", \"4Jf\", \"7od\", \"6Qi\", \"4dJ\", \"i7\", \"2jG\", \"5n\", \"0RT\", \"4iz\", \"7Lx\", \"4R7\", \"4GV\", \"08p\", \"1y8\", \"PK\", \"07q\", \"4HW\", \"7mU\", \n\"6SX\", \"52R\", 
\"1Mt\", \"0H4\", \"sW\", \"f6\", \"4kK\", \"7NI\", \"64L\", \"4Eg\", \"1nh\", \"2Kj\", \"14b\", \"Cy\", \"6nF\", \"5KD\", \"41A\", \"7PK\", \"mU\", \"x4\", \n\"0CW\", \"0V6\", \"591\", \"4xy\", \"4VU\", \"4C4\", \"NI\", \"19R\", \"0bz\", \"AH\", \"4L5\", \"4YT\", \"43p\", \"560\", \"od\", \"0LV\", \"w5\", \"bT\", \"6Ok\", \"4zH\", \n\"4Td\", \"6aG\", \"Lx\", \"0oJ\", \"5xA\", \"7Mb\", \"4t\", \"0SN\", \"K1\", \"2HA\", \"67g\", \"4FL\", \"b5G\", \"aTM\", \"0e3\", \"04Z\", \"8Wf\", \"9D\", \"449\", \"4eP\", \n\"4jQ\", \"7OS\", \"6E\", \"255\", \"1or\", \"0j2\", \"65V\", \"cno\", \"4IM\", \"68f\", \"QQ\", \"8\", \"1Ln\", \"2il\", \"6RB\", \"4ga\", \"afR\", \"4yc\", \"0BM\", \"23f\", \n\"OS\", \"Z2\", \"4WO\", \"6bl\", \"aEN\", \"c4e\", \"0aQ\", \"Bc\", \"lO\", \"8Fe\", \"4tS\", \"4a2\", \"4n3\", \"5ks\", \"8Id\", \"cN\", \"Mb\", \"0nP\", \"614\", \"74t\", \n\"6mm\", \"4XN\", \"U3\", \"2VC\", \"2xo\", \"0ML\", \"42j\", \"6CA\", \"6Qm\", \"4dN\", \"i3\", \"8Z\", \"2Do\", \"05D\", \"4Jb\", \"aUS\", \"4R3\", \"4GR\", \"08t\", \"d7d\", \n\"5j\", \"0RP\", \"bSM\", \"a2G\", \"ayN\", \"52V\", \"1Mp\", \"0H0\", \"PO\", \"07u\", \"4HS\", \"69x\", \"64H\", \"4Ec\", \"1nl\", \"2Kn\", \"sS\", \"f2\", \"4kO\", \"7NM\", \n\"41E\", \"7PO\", \"mQ\", \"x0\", \"14f\", \"2Ul\", \"6nB\", \"aQ1\", \"4VQ\", \"4C0\", \"NM\", \"19V\", \"0CS\", \"0V2\", \"595\", \"bBN\", \"43t\", \"564\", \"0Y3\", \"0LR\", \n\"16W\", \"AL\", \"4L1\", \"4YP\", \"5DA\", \"6aC\", \"2Zm\", \"0oN\", \"39\", \"bP\", \"6Oo\", \"4zL\", \"K5\", \"2HE\", \"67c\", \"4FH\", \"4hd\", \"7Mf\", \"4p\", \"0SJ\", \n\"8Wb\", \"2kY\", \"4p5\", \"4eT\", \"4Kx\", \"5N8\", \"Sd\", \"0pV\", \"1ov\", \"0j6\", \"5A9\", \"4Dy\", \"4jU\", \"7OW\", \"6A\", \"1AZ\", \"1Lj\", \"2ih\", \"6RF\", \"4ge\", \n\"4II\", \"68b\", \"QU\", \"D4\", \"OW\", \"Z6\", \"4WK\", \"6bh\", \"6LD\", \"4yg\", \"0BI\", \"23b\", \"lK\", \"0Oy\", \"4tW\", \"4a6\", \"6oX\", \"5JZ\", \"0aU\", \"Bg\", \n\"Mf\", \"0nT\", \"4Uz\", \"74p\", \"4n7\", \"5kw\", \"1PY\", 
\"cJ\", \"nz\", \"0MH\", \"42n\", \"6CE\", \"6mi\", \"4XJ\", \"U7\", \"2VG\", \"4MP\", \"4X1\", \"UL\", \"02v\", \n\"0XR\", \"0M3\", \"a8E\", \"57U\", \"4nL\", \"7KN\", \"2X\", \"c1\", \"1ko\", \"2Nm\", \"61K\", \"5PA\", \"4Oa\", \"6zB\", \"2Al\", \"00G\", \"l0\", \"yQ\", \"6Tn\", \"4aM\", \n\"58T\", \"a7D\", \"0i\", \"0WS\", \"84o\", \"ZM\", \"4W0\", \"4BQ\", \"4I2\", \"5Lr\", \"13T\", \"G\", \"jc\", \"0IQ\", \"46w\", \"537\", \"6Jl\", \"5on\", \"r2\", \"gS\", \n\"3OO\", \"0jM\", \"4Qc\", \"70i\", \"6kA\", \"5NC\", \"0eL\", \"2Po\", \"hR\", \"8Bx\", \"44F\", \"6Em\", \"abO\", \"49v\", \"0FP\", \"eb\", \"KN\", \"8ad\", \"4SR\", \"4F3\", \n\"3B\", \"0Tx\", \"4oV\", \"4z7\", \"60Q\", \"4Az\", \"0zT\", \"Yf\", \"TV\", \"A7\", \"4LJ\", \"6yi\", \"6WE\", \"4bf\", \"0YH\", \"zz\", \"1s\", \"0VI\", \"4mg\", \"6XD\", \n\"6vh\", \"4CK\", \"N6\", \"2MF\", \"Vg\", \"0uU\", \"6n9\", \"7ky\", \"4u6\", \"5pv\", \"1KX\", \"xK\", \"1UZ\", \"fI\", \"4k4\", \"5nt\", \"4Py\", \"5U9\", \"He\", \"0kW\", \n\"P4\", \"EU\", \"6hj\", \"5Mh\", \"47m\", \"6FF\", \"ky\", \"0HK\", \"0GJ\", \"dx\", \"6IG\", \"48l\", \"4RH\", \"6gk\", \"JT\", \"0if\", \"0dV\", \"Gd\", \"5Z8\", \"5OY\", \n\"4qT\", \"4d5\", \"iH\", \"0Jz\", \"0XV\", \"0M7\", \"5f8\", \"4cx\", \"4MT\", \"4X5\", \"UH\", \"02r\", \"1kk\", \"Xx\", \"61O\", \"5PE\", \"4nH\", \"7KJ\", \"vT\", \"c5\", \n\"l4\", \"yU\", \"6Tj\", \"4aI\", \"4Oe\", \"6zF\", \"Wy\", \"00C\", \"1iZ\", \"ZI\", \"4W4\", \"4BU\", \"4ly\", \"5i9\", \"0m\", \"0WW\", \"jg\", \"0IU\", \"46s\", \"533\", \n\"4I6\", \"5Lv\", \"0gy\", \"C\", \"3OK\", \"0jI\", \"4Qg\", \"6dD\", \"6Jh\", \"5oj\", \"r6\", \"gW\", \"hV\", \"0Kd\", \"44B\", \"6Ei\", \"6kE\", \"5NG\", \"0eH\", \"Fz\", \n\"KJ\", \"0hx\", \"4SV\", \"4F7\", \"6HY\", \"49r\", \"0FT\", \"ef\", \"60U\", \"ckl\", \"0zP\", \"Yb\", \"3F\", \"206\", \"4oR\", \"4z3\", \"6WA\", \"4bb\", \"0YL\", \"2lo\", \n\"TR\", \"A3\", \"4LN\", \"6ym\", \"62d\", \"4CO\", \"N2\", \"2MB\", \"1w\", \"0VM\", \"4mc\", \"7Ha\", \"4u2\", 
\"54z\", \"8Re\", \"xO\", \"Vc\", \"01Y\", \"b0D\", \"aQN\", \n\"647\", \"71w\", \"Ha\", \"0kS\", \"8Lg\", \"fM\", \"4k0\", \"5np\", \"47i\", \"6FB\", \"29d\", \"0HO\", \"P0\", \"EQ\", \"6hn\", \"5Ml\", \"4RL\", \"6go\", \"JP\", \"0ib\", \n\"0GN\", \"26e\", \"6IC\", \"48h\", \"45X\", \"4d1\", \"iL\", \"8Cf\", \"0dR\", \"0q3\", \"hYt\", \"beO\", \"4nD\", \"7KF\", \"2P\", \"c9\", \"1kg\", \"Xt\", \"61C\", \"5PI\", \n\"4MX\", \"4X9\", \"UD\", \"0vv\", \"0XZ\", \"2my\", \"5f4\", \"4ct\", \"4lu\", \"5i5\", \"0a\", \"1Gz\", \"0yw\", \"ZE\", \"4W8\", \"4BY\", \"4Oi\", \"6zJ\", \"Wu\", \"00O\", \n\"l8\", \"yY\", \"6Tf\", \"4aE\", \"6Jd\", \"5of\", \"62\", \"25B\", \"Iw\", \"0jE\", \"4Qk\", \"6dH\", \"6ix\", \"5Lz\", \"0gu\", \"O\", \"jk\", \"0IY\", \"4rw\", \"5w7\", \n\"5x6\", \"5mW\", \"0FX\", \"ej\", \"KF\", \"0ht\", \"4SZ\", \"6fy\", \"6kI\", \"5NK\", \"0eD\", \"Fv\", \"hZ\", \"93\", \"44N\", \"6Ee\", \"2BO\", \"03d\", \"4LB\", \"6ya\", \n\"6WM\", \"4bn\", \"1Ia\", \"zr\", \"3J\", \"0Tp\", \"bUm\", \"a4g\", \"5D2\", \"4Ar\", \"87L\", \"Yn\", \"Vo\", \"01U\", \"4Ns\", \"5K3\", \"416\", \"54v\", \"1KP\", \"xC\", \n\"us\", \"0VA\", \"4mo\", \"6XL\", \"62h\", \"4CC\", \"0xm\", \"2MN\", \"0fo\", \"2SL\", \"6hb\", \"bgr\", \"47e\", \"6FN\", \"kq\", \"0HC\", \"0Es\", \"fA\", \"aal\", \"bDn\", \n\"4Pq\", \"5U1\", \"Hm\", \"8bG\", \"10w\", \"Gl\", \"5Z0\", \"5OQ\", \"45T\", \"anm\", \"1O2\", \"0Jr\", \"0GB\", \"dp\", \"6IO\", \"48d\", \"5Ba\", \"6gc\", \"3Ll\", \"0in\", \n\"1kc\", \"Xp\", \"61G\", \"5PM\", \"bTs\", \"7KB\", \"2T\", \"0Un\", \"8QF\", \"39T\", \"5f0\", \"4cp\", \"797\", \"aRm\", \"1s2\", \"02z\", \"0ys\", \"ZA\", \"63v\", \"766\", \n\"4lq\", \"5i1\", \"0e\", \"9Nf\", \"0Zo\", \"2oL\", \"6Tb\", \"4aA\", \"4Om\", \"6zN\", \"Wq\", \"00K\", \"Is\", \"0jA\", \"4Qo\", \"6dL\", \"7ZA\", \"5ob\", \"66\", \"25F\", \n\"jo\", \"9Pd\", \"4rs\", \"5w3\", \"aCn\", \"bfl\", \"0gq\", \"K\", \"KB\", \"0hp\", \"bim\", \"72T\", \"5x2\", \"49z\", \"327\", \"en\", \"3nn\", \"97\", 
\"44J\", \"6Ea\", \n\"6kM\", \"5NO\", \"11i\", \"Fr\", \"6WI\", \"4bj\", \"0YD\", \"zv\", \"TZ\", \"0wh\", \"4LF\", \"6ye\", \"5D6\", \"4Av\", \"0zX\", \"Yj\", \"3N\", \"0Tt\", \"4oZ\", \"6Zy\", \n\"412\", \"54r\", \"1KT\", \"xG\", \"Vk\", \"01Q\", \"4Nw\", \"5K7\", \"62l\", \"4CG\", \"0xi\", \"2MJ\", \"uw\", \"0VE\", \"4mk\", \"6XH\", \"47a\", \"6FJ\", \"ku\", \"0HG\", \n\"P8\", \"EY\", \"6hf\", \"5Md\", \"4Pu\", \"5U5\", \"Hi\", \"8bC\", \"0Ew\", \"fE\", \"4k8\", \"5nx\", \"45P\", \"4d9\", \"iD\", \"0Jv\", \"0dZ\", \"Gh\", \"5Z4\", \"5OU\", \n\"4RD\", \"6gg\", \"JX\", \"0ij\", \"0GF\", \"dt\", \"6IK\", \"5lI\", \"4Op\", \"5J0\", \"Wl\", \"00V\", \"0Zr\", \"2oQ\", \"405\", \"55u\", \"4ll\", \"6YO\", \"0x\", \"0WB\", \n\"0yn\", \"2LM\", \"63k\", \"5Ra\", \"4MA\", \"6xb\", \"2CL\", \"02g\", \"0XC\", \"39I\", \"6VN\", \"4cm\", \"bTn\", \"a5d\", \"2I\", \"0Us\", \"86O\", \"Xm\", \"5E1\", \"5PP\", \n\"6kP\", \"5NR\", \"11t\", \"Fo\", \"hC\", \"0Kq\", \"44W\", \"aon\", \"6HL\", \"49g\", \"0FA\", \"es\", \"3Mo\", \"0hm\", \"4SC\", \"72I\", \"6ia\", \"5Lc\", \"0gl\", \"V\", \n\"jr\", \"1Ya\", \"46f\", \"6GM\", \"hyV\", \"bEm\", \"0Dp\", \"gB\", \"In\", \"8cD\", \"4Qr\", \"5T2\", \"1b\", \"0VX\", \"4mv\", \"5h6\", \"62q\", \"4CZ\", \"0xt\", \"2MW\", \n\"Vv\", \"01L\", \"4Nj\", \"7kh\", \"6Ue\", \"54o\", \"1KI\", \"xZ\", \"3S\", \"0Ti\", \"4oG\", \"6Zd\", \"6tH\", \"4Ak\", \"0zE\", \"Yw\", \"TG\", \"0wu\", \"780\", \"6yx\", \n\"5g7\", \"4bw\", \"0YY\", \"zk\", \"1Wz\", \"di\", \"5y5\", \"5lT\", \"4RY\", \"4G8\", \"JE\", \"0iw\", \"0dG\", \"Gu\", \"6jJ\", \"5OH\", \"45M\", \"6Df\", \"iY\", \"80\", \n\"71\", \"fX\", \"6Kg\", \"5ne\", \"4Ph\", \"6eK\", \"Ht\", \"0kF\", \"0fv\", \"ED\", \"4H9\", \"5My\", \"4st\", \"5v4\", \"kh\", \"0HZ\", \"0Zv\", \"yD\", \"401\", \"4aX\", \n\"4Ot\", \"5J4\", \"Wh\", \"00R\", \"O9\", \"ZX\", \"63o\", \"4BD\", \"4lh\", \"6YK\", \"tt\", \"0WF\", \"0XG\", \"2md\", \"6VJ\", \"4ci\", \"4ME\", \"6xf\", \"UY\", \"02c\", \n\"1kz\", \"Xi\", \"5E5\", 
\"5PT\", \"4nY\", \"aqh\", \"2M\", \"0Uw\", \"hG\", \"0Ku\", \"44S\", \"6Ex\", \"6kT\", \"5NV\", \"0eY\", \"Fk\", \"3Mk\", \"0hi\", \"4SG\", \"6fd\", \n\"6HH\", \"49c\", \"0FE\", \"ew\", \"jv\", \"0ID\", \"46b\", \"6GI\", \"6ie\", \"5Lg\", \"0gh\", \"R\", \"Ij\", \"0jX\", \"4Qv\", \"5T6\", \"6Jy\", \"7O9\", \"0Dt\", \"gF\", \n\"62u\", \"775\", \"0xp\", \"198\", \"1f\", \"9Oe\", \"4mr\", \"5h2\", \"6Ua\", \"54k\", \"1KM\", \"2nO\", \"Vr\", \"01H\", \"4Nn\", \"7kl\", \"60D\", \"4Ao\", \"0zA\", \"Ys\", \n\"3W\", \"0Tm\", \"4oC\", \"7JA\", \"5g3\", \"4bs\", \"8PE\", \"zo\", \"TC\", \"03y\", \"784\", \"aSn\", \"bhn\", \"73W\", \"JA\", \"0is\", \"334\", \"dm\", \"5y1\", \"48y\", \n\"45I\", \"6Db\", \"3om\", \"84\", \"0dC\", \"Gq\", \"6jN\", \"5OL\", \"4Pl\", \"6eO\", \"Hp\", \"0kB\", \"75\", \"24E\", \"6Kc\", \"5na\", \"47x\", \"528\", \"kl\", \"8AF\", \n\"0fr\", \"1c2\", \"aBm\", \"bgo\", \"4ld\", \"6YG\", \"0p\", \"0WJ\", \"O5\", \"ZT\", \"63c\", \"4BH\", \"4Ox\", \"5J8\", \"Wd\", \"0tV\", \"0Zz\", \"yH\", \"4t5\", \"4aT\", \n\"4nU\", \"7KW\", \"2A\", \"1EZ\", \"1kv\", \"Xe\", \"5E9\", \"5PX\", \"4MI\", \"6xj\", \"UU\", \"02o\", \"0XK\", \"2mh\", \"6VF\", \"4ce\", \"6HD\", \"49o\", \"0FI\", \"27b\", \n\"KW\", \"0he\", \"4SK\", \"6fh\", \"6kX\", \"5NZ\", \"0eU\", \"Fg\", \"hK\", \"0Ky\", \"4pW\", \"4e6\", \"4j7\", \"5ow\", \"0Dx\", \"gJ\", \"If\", \"0jT\", \"4Qz\", \"6dY\", \n\"6ii\", \"5Lk\", \"Q7\", \"DV\", \"jz\", \"0IH\", \"46n\", \"6GE\", \"3PN\", \"01D\", \"4Nb\", \"aQS\", \"6Um\", \"54g\", \"m3\", \"xR\", \"1j\", \"0VP\", \"59W\", \"a6G\", \n\"4V3\", \"4CR\", \"85l\", \"194\", \"TO\", \"03u\", \"4LS\", \"4Y2\", \"a9F\", \"56V\", \"0YQ\", \"zc\", \"wS\", \"b2\", \"4oO\", \"6Zl\", \"60H\", \"4Ac\", \"0zM\", \"2On\", \n\"0dO\", \"2Ql\", \"6jB\", \"aU1\", \"45E\", \"6Dn\", \"iQ\", \"88\", \"0GS\", \"da\", \"acL\", \"48u\", \"4RQ\", \"4G0\", \"JM\", \"94N\", \"12W\", \"EL\", \"4H1\", \"5Mq\", \n\"47t\", \"524\", \"29y\", \"0HR\", \"79\", \"fP\", \"6Ko\", \"5nm\", 
\"aZ0\", \"6eC\", \"3NL\", \"0kN\", \"O1\", \"ZP\", \"63g\", \"4BL\", \"58I\", \"6YC\", \"0t\", \"0WN\", \n\"8Sf\", \"yL\", \"409\", \"4aP\", \"b1G\", \"aPM\", \"0a3\", \"00Z\", \"1kr\", \"Xa\", \"61V\", \"bzN\", \"4nQ\", \"7KS\", \"2E\", \"215\", \"0XO\", \"2ml\", \"6VB\", \"4ca\", \n\"4MM\", \"6xn\", \"UQ\", \"02k\", \"KS\", \"0ha\", \"4SO\", \"6fl\", \"7Xa\", \"49k\", \"0FM\", \"27f\", \"hO\", \"8Be\", \"4pS\", \"4e2\", \"aAN\", \"bdL\", \"0eQ\", \"Fc\", \n\"Ib\", \"0jP\", \"654\", \"70t\", \"4j3\", \"5os\", \"8Md\", \"gN\", \"28g\", \"0IL\", \"46j\", \"6GA\", \"6im\", \"5Lo\", \"Q3\", \"Z\", \"6Ui\", \"54c\", \"m7\", \"xV\", \n\"Vz\", \"0uH\", \"4Nf\", \"7kd\", \"4V7\", \"4CV\", \"0xx\", \"190\", \"1n\", \"0VT\", \"4mz\", \"6XY\", \"6WX\", \"56R\", \"0YU\", \"zg\", \"TK\", \"03q\", \"4LW\", \"4Y6\", \n\"60L\", \"4Ag\", \"0zI\", \"2Oj\", \"wW\", \"b6\", \"4oK\", \"6Zh\", \"45A\", \"6Dj\", \"iU\", \"0Jg\", \"0dK\", \"Gy\", \"6jF\", \"5OD\", \"4RU\", \"4G4\", \"JI\", \"1yZ\", \n\"0GW\", \"de\", \"5y9\", \"48q\", \"47p\", \"520\", \"kd\", \"0HV\", \"0fz\", \"EH\", \"4H5\", \"5Mu\", \"4Pd\", \"6eG\", \"Hx\", \"0kJ\", \"s5\", \"fT\", \"6Kk\", \"5ni\", \n\"5Y1\", \"5LP\", \"13v\", \"e\", \"jA\", \"0Is\", \"46U\", \"aml\", \"6JN\", \"5oL\", \"0DC\", \"gq\", \"3Om\", \"0jo\", \"4QA\", \"6db\", \"6kc\", \"5Na\", \"0en\", \"2PM\", \n\"hp\", \"0KB\", \"44d\", \"6EO\", \"abm\", \"49T\", \"0Fr\", \"1C2\", \"Kl\", \"8aF\", \"4Sp\", \"5V0\", \"4Mr\", \"5H2\", \"Un\", \"02T\", \"0Xp\", \"2mS\", \"427\", \"57w\", \n\"4nn\", \"7Kl\", \"2z\", \"1Ea\", \"1kM\", \"2NO\", \"61i\", \"5Pc\", \"4OC\", \"7jA\", \"2AN\", \"00e\", \"0ZA\", \"ys\", \"6TL\", \"4ao\", \"58v\", \"a7f\", \"0K\", \"0Wq\", \n\"84M\", \"Zo\", \"5G3\", \"4Bs\", \"0EY\", \"fk\", \"6KT\", \"5nV\", \"bjh\", \"6ex\", \"HG\", \"0ku\", \"0fE\", \"Ew\", \"6hH\", \"5MJ\", \"47O\", \"6Fd\", \"29B\", \"0Hi\", \n\"53\", \"dZ\", \"6Ie\", \"48N\", \"4Rj\", \"6gI\", \"Jv\", \"0iD\", \"0dt\", \"GF\", \"6jy\", \"7o9\", \"4qv\", 
\"5t6\", \"ij\", \"0JX\", \"wh\", \"0TZ\", \"4ot\", \"5j4\", \n\"4T9\", \"4AX\", \"0zv\", \"YD\", \"Tt\", \"03N\", \"4Lh\", \"6yK\", \"6Wg\", \"4bD\", \"o9\", \"zX\", \"1Q\", \"0Vk\", \"4mE\", \"6Xf\", \"62B\", \"4Ci\", \"0xG\", \"2Md\", \n\"VE\", \"0uw\", \"4NY\", \"aQh\", \"5e5\", \"5pT\", \"1Kz\", \"xi\", \"jE\", \"0Iw\", \"46Q\", \"4g8\", \"5Y5\", \"5LT\", \"13r\", \"a\", \"IY\", \"0jk\", \"4QE\", \"6df\", \n\"6JJ\", \"5oH\", \"0DG\", \"gu\", \"ht\", \"0KF\", \"4ph\", \"6EK\", \"6kg\", \"5Ne\", \"S9\", \"FX\", \"Kh\", \"0hZ\", \"4St\", \"5V4\", \"4h9\", \"49P\", \"0Fv\", \"eD\", \n\"0Xt\", \"2mW\", \"423\", \"4cZ\", \"4Mv\", \"5H6\", \"Uj\", \"02P\", \"1kI\", \"XZ\", \"61m\", \"5Pg\", \"4nj\", \"7Kh\", \"vv\", \"0UD\", \"0ZE\", \"yw\", \"6TH\", \"4ak\", \n\"4OG\", \"6zd\", \"2AJ\", \"00a\", \"0yY\", \"Zk\", \"5G7\", \"4Bw\", \"58r\", \"6Yx\", \"0O\", \"0Wu\", \"bjl\", \"71U\", \"HC\", \"0kq\", \"316\", \"fo\", \"6KP\", \"5nR\", \n\"47K\", \"7VA\", \"29F\", \"0Hm\", \"0fA\", \"Es\", \"6hL\", \"5MN\", \"4Rn\", \"6gM\", \"Jr\", \"1ya\", \"57\", \"26G\", \"6Ia\", \"48J\", \"45z\", \"5t2\", \"in\", \"8CD\", \n\"0dp\", \"GB\", \"hYV\", \"bem\", \"60w\", \"757\", \"0zr\", \"2OQ\", \"3d\", \"9Mg\", \"4op\", \"5j0\", \"6Wc\", \"56i\", \"0Yn\", \"2lM\", \"Tp\", \"03J\", \"4Ll\", \"6yO\", \n\"62F\", \"4Cm\", \"0xC\", \"dwS\", \"1U\", \"0Vo\", \"4mA\", \"6Xb\", \"5e1\", \"54X\", \"8RG\", \"xm\", \"VA\", \"0us\", \"b0f\", \"aQl\", \"6JF\", \"5oD\", \"0DK\", \"gy\", \n\"IU\", \"0jg\", \"4QI\", \"6dj\", \"5Y9\", \"5LX\", \"0gW\", \"m\", \"jI\", \"1YZ\", \"4rU\", \"4g4\", \"4h5\", \"5mu\", \"0Fz\", \"eH\", \"Kd\", \"0hV\", \"4Sx\", \"5V8\", \n\"6kk\", \"5Ni\", \"S5\", \"FT\", \"hx\", \"0KJ\", \"44l\", \"6EG\", \"4nf\", \"7Kd\", \"2r\", \"0UH\", \"M7\", \"XV\", \"61a\", \"5Pk\", \"4Mz\", \"6xY\", \"Uf\", \"0vT\", \n\"0Xx\", \"39r\", \"4v7\", \"4cV\", \"4lW\", \"4y6\", \"0C\", \"0Wy\", \"0yU\", \"Zg\", \"63P\", \"5RZ\", \"4OK\", \"6zh\", \"WW\", \"B6\", \"0ZI\", \"2oj\", 
\"6TD\", \"4ag\", \n\"0fM\", \"2Sn\", \"7xa\", \"5MB\", \"47G\", \"6Fl\", \"kS\", \"0Ha\", \"0EQ\", \"fc\", \"aaN\", \"bDL\", \"4PS\", \"4E2\", \"HO\", \"8be\", \"10U\", \"GN\", \"4J3\", \"5Os\", \n\"45v\", \"506\", \"ib\", \"0JP\", \"q3\", \"dR\", \"6Im\", \"48F\", \"4Rb\", \"6gA\", \"3LN\", \"0iL\", \"2Bm\", \"03F\", \"aF0\", \"6yC\", \"6Wo\", \"4bL\", \"o1\", \"zP\", \n\"3h\", \"0TR\", \"bUO\", \"a4E\", \"4T1\", \"4AP\", \"87n\", \"YL\", \"VM\", \"01w\", \"4NQ\", \"7kS\", \"hfu\", \"54T\", \"1Kr\", \"xa\", \"1Y\", \"0Vc\", \"4mM\", \"6Xn\", \n\"62J\", \"4Ca\", \"0xO\", \"2Ml\", \"IQ\", \"0jc\", \"4QM\", \"6dn\", \"6JB\", \"a19\", \"0DO\", \"25d\", \"jM\", \"9PF\", \"46Y\", \"4g0\", \"aCL\", \"687\", \"0gS\", \"i\", \n\"3MP\", \"0hR\", \"676\", \"72v\", \"4h1\", \"49X\", \"8Of\", \"eL\", \"3nL\", \"0KN\", \"44h\", \"6EC\", \"6ko\", \"5Nm\", \"S1\", \"FP\", \"M3\", \"XR\", \"61e\", \"5Po\", \n\"4nb\", \"aqS\", \"2v\", \"0UL\", \"8Qd\", \"39v\", \"4v3\", \"4cR\", \"b3E\", \"aRO\", \"Ub\", \"02X\", \"0yQ\", \"Zc\", \"63T\", \"bxL\", \"4lS\", \"4y2\", \"0G\", \"237\", \n\"0ZM\", \"2on\", \"7Da\", \"4ac\", \"4OO\", \"6zl\", \"WS\", \"B2\", \"47C\", \"6Fh\", \"kW\", \"0He\", \"0fI\", \"2Sj\", \"6hD\", \"5MF\", \"4PW\", \"4E6\", \"HK\", \"0ky\", \n\"0EU\", \"fg\", \"6KX\", \"5nZ\", \"45r\", \"502\", \"if\", \"0JT\", \"0dx\", \"GJ\", \"4J7\", \"5Ow\", \"4Rf\", \"6gE\", \"Jz\", \"0iH\", \"q7\", \"dV\", \"6Ii\", \"48B\", \n\"6Wk\", \"4bH\", \"o5\", \"zT\", \"Tx\", \"03B\", \"4Ld\", \"6yG\", \"4T5\", \"4AT\", \"0zz\", \"YH\", \"3l\", \"0TV\", \"4ox\", \"5j8\", \"5e9\", \"54P\", \"1Kv\", \"xe\", \n\"VI\", \"01s\", \"4NU\", \"7kW\", \"62N\", \"4Ce\", \"0xK\", \"2Mh\", \"uU\", \"0Vg\", \"4mI\", \"6Xj\", \"4K0\", \"5Np\", \"11V\", \"FM\", \"ha\", \"0KS\", \"44u\", \"515\", \n\"6Hn\", \"49E\", \"48\", \"eQ\", \"3MM\", \"0hO\", \"4Sa\", \"6fB\", \"6iC\", \"5LA\", \"0gN\", \"t\", \"jP\", \"0Ib\", \"46D\", \"6Go\", \"hyt\", \"bEO\", \"0DR\", \"0Q3\", \n\"IL\", \"8cf\", \"4QP\", 
\"4D1\", \"4OR\", \"4Z3\", \"WN\", \"00t\", \"0ZP\", \"yb\", \"hgv\", \"55W\", \"4lN\", \"6Ym\", \"0Z\", \"a3\", \"0yL\", \"2Lo\", \"63I\", \"4Bb\", \n\"4Mc\", \"7ha\", \"2Cn\", \"02E\", \"n2\", \"2mB\", \"6Vl\", \"4cO\", \"bTL\", \"a5F\", \"2k\", \"0UQ\", \"86m\", \"XO\", \"4U2\", \"5Pr\", \"0Gy\", \"dK\", \"4i6\", \"5lv\", \n\"5BZ\", \"6gX\", \"Jg\", \"0iU\", \"R6\", \"GW\", \"6jh\", \"5Oj\", \"45o\", \"6DD\", \"3oK\", \"0JI\", \"0EH\", \"fz\", \"6KE\", \"5nG\", \"4PJ\", \"6ei\", \"HV\", \"0kd\", \n\"0fT\", \"Ef\", \"6hY\", \"690\", \"4sV\", \"4f7\", \"kJ\", \"0Hx\", \"uH\", \"0Vz\", \"4mT\", \"4x5\", \"5F8\", \"4Cx\", \"0xV\", \"0m7\", \"VT\", \"C5\", \"4NH\", \"7kJ\", \n\"6UG\", \"54M\", \"1Kk\", \"xx\", \"3q\", \"0TK\", \"4oe\", \"6ZF\", \"60b\", \"4AI\", \"L4\", \"YU\", \"Te\", \"0wW\", \"4Ly\", \"5I9\", \"4w4\", \"4bU\", \"1IZ\", \"zI\", \n\"he\", \"0KW\", \"44q\", \"511\", \"4K4\", \"5Nt\", \"11R\", \"FI\", \"Ky\", \"0hK\", \"4Se\", \"6fF\", \"6Hj\", \"49A\", \"p4\", \"eU\", \"jT\", \"0If\", \"4rH\", \"6Gk\", \n\"6iG\", \"5LE\", \"0gJ\", \"p\", \"IH\", \"0jz\", \"4QT\", \"4D5\", \"5z8\", \"5oY\", \"0DV\", \"gd\", \"0ZT\", \"yf\", \"6TY\", \"4az\", \"4OV\", \"4Z7\", \"WJ\", \"00p\", \n\"0yH\", \"Zz\", \"63M\", \"4Bf\", \"4lJ\", \"6Yi\", \"tV\", \"a7\", \"n6\", \"2mF\", \"6Vh\", \"4cK\", \"4Mg\", \"6xD\", \"2Cj\", \"02A\", \"1kX\", \"XK\", \"4U6\", \"5Pv\", \n\"6N9\", \"7Ky\", \"2o\", \"0UU\", \"665\", \"73u\", \"Jc\", \"0iQ\", \"8Ne\", \"dO\", \"4i2\", \"5lr\", \"45k\", \"7Ta\", \"3oO\", \"0JM\", \"R2\", \"GS\", \"6jl\", \"5On\", \n\"4PN\", \"6em\", \"HR\", \"8bx\", \"0EL\", \"24g\", \"6KA\", \"5nC\", \"47Z\", \"4f3\", \"kN\", \"8Ad\", \"0fP\", \"Eb\", \"aBO\", \"694\", \"62W\", \"byO\", \"0xR\", \"0m3\", \n\"1D\", \"224\", \"4mP\", \"4x1\", \"6UC\", \"54I\", \"1Ko\", \"2nm\", \"VP\", \"C1\", \"4NL\", \"7kN\", \"60f\", \"4AM\", \"L0\", \"YQ\", \"3u\", \"0TO\", \"4oa\", \"6ZB\", \n\"438\", \"4bQ\", \"8Pg\", \"zM\", \"Ta\", \"0wS\", \"b2F\", \"aSL\", \"6Hf\", 
\"49M\", \"40\", \"eY\", \"Ku\", \"0hG\", \"4Si\", \"6fJ\", \"4K8\", \"5Nx\", \"0ew\", \"FE\", \n\"hi\", \"8BC\", \"4pu\", \"5u5\", \"5z4\", \"5oU\", \"0DZ\", \"gh\", \"ID\", \"0jv\", \"4QX\", \"4D9\", \"6iK\", \"5LI\", \"0gF\", \"Dt\", \"jX\", \"0Ij\", \"46L\", \"6Gg\", \n\"4lF\", \"6Ye\", \"0R\", \"0Wh\", \"0yD\", \"Zv\", \"63A\", \"4Bj\", \"4OZ\", \"6zy\", \"WF\", \"0tt\", \"0ZX\", \"yj\", \"5d6\", \"4av\", \"4nw\", \"5k7\", \"2c\", \"0UY\", \n\"1kT\", \"XG\", \"61p\", \"5Pz\", \"4Mk\", \"6xH\", \"Uw\", \"02M\", \"0Xi\", \"2mJ\", \"6Vd\", \"4cG\", \"0dm\", \"2QN\", \"7zA\", \"5Ob\", \"45g\", \"6DL\", \"is\", \"0JA\", \n\"0Gq\", \"dC\", \"acn\", \"48W\", \"4Rs\", \"5W3\", \"Jo\", \"94l\", \"12u\", \"En\", \"5X2\", \"5MS\", \"47V\", \"alo\", \"kB\", \"0Hp\", \"1Ua\", \"fr\", \"6KM\", \"5nO\", \n\"4PB\", \"6ea\", \"3Nn\", \"0kl\", \"3Pl\", \"01f\", \"bts\", \"7kB\", \"6UO\", \"54E\", \"1Kc\", \"xp\", \"1H\", \"0Vr\", \"59u\", \"a6e\", \"5F0\", \"4Cp\", \"85N\", \"d3F\", \n\"Tm\", \"03W\", \"4Lq\", \"5I1\", \"434\", \"56t\", \"0Ys\", \"zA\", \"3y\", \"0TC\", \"4om\", \"6ZN\", \"60j\", \"4AA\", \"0zo\", \"2OL\", \"Kq\", \"0hC\", \"4Sm\", \"6fN\", \n\"6Hb\", \"49I\", \"44\", \"27D\", \"hm\", \"8BG\", \"44y\", \"519\", \"aAl\", \"bdn\", \"0es\", \"FA\", \"1o2\", \"0jr\", \"bko\", \"70V\", \"5z0\", \"5oQ\", \"305\", \"gl\", \n\"28E\", \"0In\", \"46H\", \"6Gc\", \"6iO\", \"5LM\", \"0gB\", \"x\", \"1ia\", \"Zr\", \"63E\", \"4Bn\", \"4lB\", \"6Ya\", \"0V\", \"0Wl\", \"8SD\", \"yn\", \"5d2\", \"4ar\", \n\"b1e\", \"aPo\", \"WB\", \"00x\", \"1kP\", \"XC\", \"61t\", \"744\", \"4ns\", \"5k3\", \"2g\", \"9Ld\", \"0Xm\", \"2mN\", \"7FA\", \"4cC\", \"4Mo\", \"6xL\", \"Us\", \"02I\", \n\"45c\", \"6DH\", \"iw\", \"0JE\", \"0di\", \"2QJ\", \"6jd\", \"5Of\", \"4Rw\", \"5W7\", \"Jk\", \"0iY\", \"0Gu\", \"dG\", \"6Ix\", \"48S\", \"47R\", \"6Fy\", \"kF\", \"0Ht\", \n\"0fX\", \"Ej\", \"5X6\", \"5MW\", \"4PF\", \"6ee\", \"HZ\", \"0kh\", \"0ED\", \"fv\", \"6KI\", \"5nK\", \"6UK\", \"54A\", 
\"1Kg\", \"xt\", \"VX\", \"C9\", \"4ND\", \"7kF\", \n\"5F4\", \"4Ct\", \"0xZ\", \"2My\", \"1L\", \"0Vv\", \"4mX\", \"4x9\", \"430\", \"4bY\", \"0Yw\", \"zE\", \"Ti\", \"03S\", \"4Lu\", \"5I5\", \"60n\", \"4AE\", \"L8\", \"YY\", \n\"wu\", \"0TG\", \"4oi\", \"6ZJ\" };\n\n\n#endif\n\n"
  },
  {
    "path": "src/crc64.c",
    "content": "/* Copyright (c) 2014, Matt Stancliff <matt@genges.com>\n * Copyright (c) 2020, Amazon Web Services\n * All rights reserved.\n * \n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE. 
*/\n\n#include <stdlib.h>\n#include \"crc64.h\"\n#include \"crcspeed.h\"\n#include \"redisassert.h\"\n#include \"testhelp.h\"\nstatic uint64_t crc64_table[8][256] = {{0}};\n\n#define POLY UINT64_C(0xad93d23594c935a9)\n/******************** BEGIN GENERATED PYCRC FUNCTIONS ********************/\n/**\n * Generated on Sun Dec 21 14:14:07 2014,\n * by pycrc v0.8.2, https://www.tty1.net/pycrc/\n *\n * LICENSE ON GENERATED CODE:\n * ==========================\n * As of version 0.6, pycrc is released under the terms of the MIT licence.\n * The code generated by pycrc is not considered a substantial portion of the\n * software, therefore the author of pycrc will not claim any copyright on\n * the generated code.\n * ==========================\n *\n * CRC configuration:\n *    Width        = 64\n *    Poly         = 0xad93d23594c935a9\n *    XorIn        = 0xffffffffffffffff\n *    ReflectIn    = True\n *    XorOut       = 0x0000000000000000\n *    ReflectOut   = True\n *    Algorithm    = bit-by-bit-fast\n *\n * Modifications after generation (by matt):\n *   - included finalize step in-line with update for single-call generation\n *   - re-worked some inner variable architectures\n *   - adjusted function parameters to match expected prototypes.\n *****************************************************************************/\n\n/**\n * Reflect all bits of a \\a data word of \\a data_len bytes.\n *\n * \\param data         The data word to be reflected.\n * \\param data_len     The width of \\a data expressed in number of bits.\n * \\return             The reflected data.\n *****************************************************************************/\nstatic inline uint_fast64_t crc_reflect(uint_fast64_t data, size_t data_len) {\n    /* only ever called for data_len == 64 in this codebase\n     *\n     * Borrowed from bit twiddling hacks, original in the public domain.\n     * https://graphics.stanford.edu/~seander/bithacks.html#ReverseParallel\n     * Extended to 64 bits, 
and added byteswap for final 3 steps.\n     * 16-30x 64-bit operations, no comparisons (16 for native byteswap, 30 for pure C)\n     */\n\n    assert(data_len <= 64);\n    /* swap odd and even bits */\n    data = ((data >> 1) & 0x5555555555555555ULL) | ((data & 0x5555555555555555ULL) << 1);\n    /* swap consecutive pairs */\n    data = ((data >> 2) & 0x3333333333333333ULL) | ((data & 0x3333333333333333ULL) << 2);\n    /* swap nibbles ... */\n    data = ((data >> 4) & 0x0F0F0F0F0F0F0F0FULL) | ((data & 0x0F0F0F0F0F0F0F0FULL) << 4);\n#if defined(__GNUC__) || defined(__clang__)\n    data = __builtin_bswap64(data);\n#else\n    /* swap bytes */\n    data = ((data >> 8) & 0x00FF00FF00FF00FFULL) | ((data & 0x00FF00FF00FF00FFULL) << 8);\n    /* swap 2-byte long pairs */\n    data = ( data >> 16 &     0xFFFF0000FFFFULL) | ((data &     0xFFFF0000FFFFULL) << 16);\n    /* swap 4-byte quads */\n    data = ( data >> 32 &         0xFFFFFFFFULL) | ((data &         0xFFFFFFFFULL) << 32);\n#endif\n    /* adjust for non-64-bit reversals */\n    return data >> (64 - data_len);\n}\n\n/**\n *  Update the crc value with new data.\n *\n * \\param crc      The current crc value.\n * \\param data     Pointer to a buffer of \\a data_len bytes.\n * \\param data_len Number of bytes in the \\a data buffer.\n * \\return         The updated crc value.\n ******************************************************************************/\nuint64_t _crc64(uint_fast64_t crc, const void *in_data, const uint64_t len) {\n    const uint8_t *data = in_data;\n    unsigned long long bit;\n\n    for (uint64_t offset = 0; offset < len; offset++) {\n        uint8_t c = data[offset];\n        for (uint_fast8_t i = 0x01; i & 0xff; i <<= 1) {\n            bit = crc & 0x8000000000000000;\n            if (c & i) {\n                bit = !bit;\n            }\n\n            crc <<= 1;\n            if (bit) {\n                crc ^= POLY;\n            }\n        }\n\n        crc &= 0xffffffffffffffff;\n    }\n\n    crc = 
crc & 0xffffffffffffffff;\n    return crc_reflect(crc, 64) ^ 0x0000000000000000;\n}\n\n/******************** END GENERATED PYCRC FUNCTIONS ********************/\n\n/* Initializes the 16KB lookup tables. */\nvoid crc64_init(void) {\n    crcspeed64native_init(_crc64, crc64_table);\n}\n\n/* Compute crc64 */\nuint64_t crc64(uint64_t crc, const unsigned char *s, uint64_t l) {\n    return crcspeed64native(crc64_table, crc, (void *) s, l);\n}\n\n/* Test main */\n#ifdef REDIS_TEST\n#include <stdio.h>\n\nstatic void genBenchmarkRandomData(char *data, int count);\nstatic int bench_crc64(unsigned char *data, uint64_t size, long long passes, uint64_t check, char *name, int csv);\nstatic void bench_combine(char *label, uint64_t size, uint64_t expect, int csv);\nlong long _ustime(void);\n\n#include <inttypes.h>\n#include <string.h>\n#include <stdlib.h>\n#include <time.h>\n#include <sys/time.h>\n#include <unistd.h>\n\n#include \"zmalloc.h\"\n#include \"crccombine.h\"\n\nlong long _ustime(void) {\n    struct timeval tv;\n    long long ust;\n\n    gettimeofday(&tv, NULL);\n    ust = ((long long)tv.tv_sec)*1000000;\n    ust += tv.tv_usec;\n    return ust;\n}\n\nstatic int bench_crc64(unsigned char *data, uint64_t size, long long passes, uint64_t check, char *name, int csv) {\n    uint64_t min = size, hash = 0;\n    long long original_start = _ustime(), original_end;\n    for (long long i=passes; i > 0; i--) {\n        hash = crc64(0, data, size);\n    }\n    original_end = _ustime();\n    min = (original_end - original_start) * 1000 / passes;\n    /* approximate nanoseconds without nstime */\n    if (csv) {\n        printf(\"%s,%\" PRIu64 \",%\" PRIu64 \",%d\\n\",\n               name, size, (1000 * size) / min, hash == check);\n    } else {\n        printf(\"test size=%\" PRIu64 \" algorithm=%s %\" PRIu64 \" M/sec matches=%d\\n\",\n               size, name, (1000 * size) / min, hash == check);\n    }\n    return hash != check;\n}\n\nconst uint64_t BENCH_RPOLY = 
UINT64_C(0x95ac9329ac4bc9b5);\n\nstatic void bench_combine(char *label, uint64_t size, uint64_t expect, int csv) {\n    uint64_t min = size, start = expect, thash = expect ^ (expect >> 17);\n    long long original_start = _ustime(), original_end;\n    for (int i=0; i < 1000; i++) {\n        crc64_combine(thash, start, size, BENCH_RPOLY, 64);\n    }\n    original_end = _ustime();\n    /* ran 1000 times, want ns per, counted us per 1000 ... */\n    min = original_end - original_start;\n    if (csv) {\n        printf(\"%s,%\" PRIu64 \",%\" PRIu64 \"\\n\", label, size, min);\n    } else {\n        printf(\"%s size=%\" PRIu64 \" in %\" PRIu64 \" nsec\\n\", label, size, min);\n    }\n}\n\nstatic void genBenchmarkRandomData(char *data, int count) {\n    static uint32_t state = 1234;\n    int i = 0;\n\n    while (count--) {\n        state = (state*1103515245+12345);\n        data[i++] = '0'+((state>>16)&63);\n    }\n}\n\n#define UNUSED(x) (void)(x)\nint crc64Test(int argc, char *argv[], int flags) {\n\n    uint64_t crc64_test_size = 0;\n    int i, lastarg, csv = 0, loop = 0, combine = 0, testAll = 0;\n    \nagain:\n    if ((argc>=4) && (!strcmp(argv[3],\"custom\"))) {        \n        for (i = 4; i < argc; i++) {\n            lastarg = (i == (argc - 1));\n            if (!strcmp(argv[i], \"--help\")) {\n                goto usage;\n            } else if (!strcmp(argv[i], \"--csv\")) {\n                csv = 1;\n            } else if (!strcmp(argv[i], \"-l\")) {\n                loop = 1;\n            } else if (!strcmp(argv[i], \"--crc\")) {\n                if (lastarg) goto invalid;\n                crc64_test_size = atoll(argv[++i]);\n            } else if (!strcmp(argv[i], \"--combine\")) {\n                combine = 1;\n            } else {\n                invalid:\n                printf(\"Invalid option \\\"%s\\\" or option argument missing\\n\\n\",\n                       argv[i]);\n                usage:\n                printf(\n                        \"Usage: 
crc64 [OPTIONS]\\n\\n\"\n                        \" --csv              Output in CSV format\\n\"\n                        \" -l                 Loop. Run the tests forever\\n\"\n                        \" --crc <bytes>      Benchmark crc64 faster options, using a buffer this big, and quit when done.\\n\"\n                        \" --combine          Benchmark crc64 combine value ranges and timings.\\n\"\n                );\n                return 1;\n            }\n        }\n    } else {\n        crc64_test_size = 50000; \n        testAll = 1;\n        if (flags & REDIS_TEST_ACCURATE) crc64_test_size = 5000000;\n    }\n    \n    if ((crc64_test_size == 0 && combine == 0) || testAll) {\n        crc64_init();\n        printf(\"[calcula]: e9c6d914c4b8d9ca == %016\" PRIx64 \"\\n\",\n            (uint64_t)_crc64(0, \"123456789\", 9));\n        printf(\"[64speed]: e9c6d914c4b8d9ca == %016\" PRIx64 \"\\n\",\n            (uint64_t)crc64(0, (unsigned char*)\"123456789\", 9));\n        char li[] = \"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed \"\n                    \"do eiusmod tempor incididunt ut labore et dolore magna \"\n                    \"aliqua. Ut enim ad minim veniam, quis nostrud exercitation \"\n                    \"ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis \"\n                    \"aute irure dolor in reprehenderit in voluptate velit esse \"\n                    \"cillum dolore eu fugiat nulla pariatur. 
Excepteur sint \"\n                    \"occaecat cupidatat non proident, sunt in culpa qui officia \"\n                    \"deserunt mollit anim id est laborum.\";\n        printf(\"[calcula]: c7794709e69683b3 == %016\" PRIx64 \"\\n\",\n            (uint64_t)_crc64(0, li, sizeof(li)));\n        printf(\"[64speed]: c7794709e69683b3 == %016\" PRIx64 \"\\n\",\n            (uint64_t)crc64(0, (unsigned char*)li, sizeof(li)));\n        \n        if (!testAll) return 0;\n    }\n\n    int init_this_loop = 1;\n    long long init_start, init_end;\n\n    do {\n        unsigned char* data = NULL;\n        uint64_t passes = 0;\n        if (crc64_test_size) {\n            data = zmalloc(crc64_test_size);\n            genBenchmarkRandomData((char*)data, crc64_test_size);\n            /* We want to hash about 1 gig of data in total, looped, to get a good\n             * idea of our performance.\n             */\n            passes = (UINT64_C(0x100000000) / crc64_test_size);\n            passes = passes >= 2 ? passes : 2;\n            passes = passes <= 1000 ? 
passes : 1000;\n        }\n\n        crc64_init();\n        /* warm up the cache */\n        set_crc64_cutoffs(crc64_test_size+1, crc64_test_size+1);\n        uint64_t expect = crc64(0, data, crc64_test_size);\n\n        if ((!combine || testAll) && crc64_test_size) {\n            if (csv && init_this_loop) printf(\"algorithm,buffer,performance,crc64_matches\\n\");\n\n            /* get the single-character version for single-byte Redis behavior */\n            set_crc64_cutoffs(0, crc64_test_size+1);\n            assert(!bench_crc64(data, crc64_test_size, passes, expect, \"crc_1byte\", csv));\n\n            set_crc64_cutoffs(crc64_test_size+1, crc64_test_size+1);\n            /* run with 8-byte \"single\" path, crcfaster */\n            assert(!(bench_crc64(data, crc64_test_size, passes, expect, \"crcspeed\", csv)));\n\n            /* run with dual 8-byte paths */\n            set_crc64_cutoffs(1, crc64_test_size+1);\n            assert(!(bench_crc64(data, crc64_test_size, passes, expect, \"crcdual\", csv)));\n\n            /* run with tri 8-byte paths */\n            set_crc64_cutoffs(1, 1);\n            assert(!(bench_crc64(data, crc64_test_size, passes, expect, \"crctri\", csv)));\n\n            /* Be free memory region, be free. 
*/\n            zfree(data);\n            data = NULL;\n        }\n\n        uint64_t INIT_SIZE = UINT64_C(0xffffffffffffffff);\n        if (combine || testAll) {\n            if (init_this_loop) {\n                init_start = _ustime();\n                crc64_combine(\n                    UINT64_C(0xdeadbeefdeadbeef),\n                    UINT64_C(0xfeebdaedfeebdaed),\n                    INIT_SIZE,\n                    BENCH_RPOLY, 64);\n                init_end = _ustime();\n\n                init_end -= init_start;\n                init_end *= 1000;\n                if (csv) {\n                    printf(\"operation,size,nanoseconds\\n\");\n                    printf(\"init_64,%\" PRIu64 \",%\" PRIu64 \"\\n\", INIT_SIZE, (uint64_t)init_end);\n                } else {\n                    printf(\"init_64 size=%\" PRIu64 \" in %\" PRIu64 \" nsec\\n\", INIT_SIZE, (uint64_t)init_end);\n                }\n                /* use the hash itself as the size (unpredictable) */\n                bench_combine(\"hash_as_size_combine\", crc64_test_size, expect, csv);\n\n                /* let's do something big (predictable, so fast) */\n                bench_combine(\"largest_combine\", INIT_SIZE, expect, csv);\n            }\n            bench_combine(\"combine\", crc64_test_size, expect, csv);\n        }\n        init_this_loop = 0;\n        /* step down by ~1.641 for a range of test sizes */\n        crc64_test_size -= (crc64_test_size >> 2) + (crc64_test_size >> 3) + (crc64_test_size >> 6);\n    } while (crc64_test_size > 3);\n    if (loop) goto again;\n    return 0;\n}\n\n#endif\n"
  },
  {
    "path": "src/crc64.h",
    "content": "#ifndef CRC64_H\n#define CRC64_H\n\n#include <stdint.h>\n\nvoid crc64_init(void);\nuint64_t crc64(uint64_t crc, const unsigned char *s, uint64_t l);\n\n#ifdef REDIS_TEST\nint crc64Test(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/crccombine.c",
    "content": "#include <stdint.h>\n#include <stdio.h>\n#include <strings.h>\n#if defined(__i386__) || defined(__X86_64__)\n#include <immintrin.h>\n#endif\n#include \"crccombine.h\"\n\n/* Copyright (C) 2013 Mark Adler\n * Copyright (C) 2019-2024 Josiah Carlson\n * Portions originally from: crc64.c Version 1.4  16 Dec 2013  Mark Adler\n * Modifications by Josiah Carlson <josiah.carlson@gmail.com>\n *   - Added implementation variations with sample timings for gf_matrix_times*()\n *   - Most folks would be best using gf2_matrix_times_vec or\n *\t   gf2_matrix_times_vec2, unless some processor does AVX2 fast.\n *   - This is the implementation of the MERGE_CRC macro defined in\n *     crcspeed.c (which calls crc_combine()), and is a specialization of the\n *     generic crc_combine() (and related from the 2013 edition of Mark Adler's\n *     crc64.c)) for the sake of clarity and performance.\n\n  This software is provided 'as-is', without any express or implied\n  warranty.  In no event will the author be held liable for any damages\n  arising from the use of this software.\n\n  Permission is granted to anyone to use this software for any purpose,\n  including commercial applications, and to alter it and redistribute it\n  freely, subject to the following restrictions:\n\n  1. The origin of this software must not be misrepresented; you must not\n\t claim that you wrote the original software. If you use this software\n\t in a product, an acknowledgment in the product documentation would be\n\t appreciated but is not required.\n  2. Altered source versions must be plainly marked as such, and must not be\n\t misrepresented as being the original software.\n  3. This notice may not be removed or altered from any source distribution.\n\n  Mark Adler\n  madler@alumni.caltech.edu\n*/\n\n#define STATIC_ASSERT(VVV) do {int test = 1 / (VVV);test++;} while (0)\n\n#if !((defined(__i386__) || defined(__X86_64__)))\n\n/* This cuts 40% of the time vs bit-by-bit. 
*/\n\nuint64_t gf2_matrix_times_switch(uint64_t *mat, uint64_t vec) {\n\t/*\n\t * Without using any vector math, this handles 4 bits at a time,\n\t * and saves 40+% of the time compared to the bit-by-bit version. Use if you\n\t * have no vector compile option available to you. With cache, we see:\n\t * E5-2670 ~1-2us to extend ~1 meg 64 bit hash\n\t */\n\tuint64_t sum;\n\n\tsum = 0;\n\twhile (vec) {\n\t\t/* reversing the case order is ~10% slower on Xeon E5-2670 */\n\t\tswitch (vec & 15) {\n\t\tcase 15:\n\t\t\tsum ^= *mat ^ *(mat+1) ^ *(mat+2) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 14:\n\t\t\tsum ^= *(mat+1) ^ *(mat+2) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 13:\n\t\t\tsum ^= *mat ^ *(mat+2) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 12:\n\t\t\tsum ^= *(mat+2) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 11:\n\t\t\tsum ^= *mat ^ *(mat+1) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 10:\n\t\t\tsum ^= *(mat+1) ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 9:\n\t\t\tsum ^= *mat ^ *(mat+3);\n\t\t\tbreak;\n\t\tcase 8:\n\t\t\tsum ^= *(mat+3);\n\t\t\tbreak;\n\t\tcase 7:\n\t\t\tsum ^= *mat ^ *(mat+1) ^ *(mat+2);\n\t\t\tbreak;\n\t\tcase 6:\n\t\t\tsum ^= *(mat+1) ^ *(mat+2);\n\t\t\tbreak;\n\t\tcase 5:\n\t\t\tsum ^= *mat ^ *(mat+2);\n\t\t\tbreak;\n\t\tcase 4:\n\t\t\tsum ^= *(mat+2);\n\t\t\tbreak;\n\t\tcase 3:\n\t\t\tsum ^= *mat ^ *(mat+1);\n\t\t\tbreak;\n\t\tcase 2:\n\t\t\tsum ^= *(mat+1);\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\tsum ^= *mat;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tbreak;\n\t\t}\n\t\tvec >>= 4;\n\t\tmat += 4;\n\t}\n\treturn sum;\n}\n\n#define CRC_MULTIPLY gf2_matrix_times_switch\n\n#else\n\n/*\n\tWarning: here there be dragons involving vector math, and macros to save us\n\tfrom repeating the same information over and over.\n*/\n\nuint64_t gf2_matrix_times_vec2(uint64_t *mat, uint64_t vec) {\n\t/*\n\t * Uses xmm registers on x86, works basically everywhere fast, doing\n\t * cycles of movqda, mov, shr, pand, and, pxor, at least on gcc 8.\n\t * Is 9-11x faster than original.\n\t * E5-2670 ~29us to extend ~1 meg 64 
bit hash\n\t * i3-8130U ~22us to extend ~1 meg 64 bit hash\n\t */\n\tv2uq sum = {0, 0},\n\t\t*mv2 = (v2uq*)mat;\n\t/* this table allows us to eliminate conditions during gf2_matrix_times_vec2() */\n\tstatic v2uq masks2[4] = {\n\t\t{0,0},\n\t\t{-1,0},\n\t\t{0,-1},\n\t\t{-1,-1},\n\t};\n\n\t/* Almost as beautiful as gf2_matrix_times_vec, but only half as many\n\t * bits per step, so we need 2 per chunk4 operation. Faster in my tests. */\n\n#define DO_CHUNK4() \\\n\t\tsum ^= (*mv2++) & masks2[vec & 3]; \\\n\t\tvec >>= 2; \\\n\t\tsum ^= (*mv2++) & masks2[vec & 3]; \\\n\t\tvec >>= 2\n\n#define DO_CHUNK16() \\\n\t\tDO_CHUNK4(); \\\n\t\tDO_CHUNK4(); \\\n\t\tDO_CHUNK4(); \\\n\t\tDO_CHUNK4()\n\n\tDO_CHUNK16();\n\tDO_CHUNK16();\n\tDO_CHUNK16();\n\tDO_CHUNK16();\n\n\tSTATIC_ASSERT(sizeof(uint64_t) == 8);\n\tSTATIC_ASSERT(sizeof(long long unsigned int) == 8);\n\treturn sum[0] ^ sum[1];\n}\n\n#undef DO_CHUNK16\n#undef DO_CHUNK4\n\n#define CRC_MULTIPLY gf2_matrix_times_vec2\n#endif\n\nstatic void gf2_matrix_square(uint64_t *square, uint64_t *mat, uint8_t dim) {\n\tunsigned n;\n\n\tfor (n = 0; n < dim; n++)\n\t\tsquare[n] = CRC_MULTIPLY(mat, mat[n]);\n}\n\n/* Turns out our Redis / Jones CRC cycles at this point, so we can support\n * more than 64 bits of extension if we want. Trivially. */\nstatic uint64_t combine_cache[64][64];\n\n/* Mark Adler has some amazing updates to crc.c in his crcany repository. I\n * like static caches, and not worrying about finding cycles generally. We are\n * okay to spend the 32k of memory here, leaving the algorithm unchanged from\n * as it was a decade ago, and be happy that it costs <200 microseconds to\n * init, and that subsequent calls to the combine function take under 100\n * nanoseconds. 
We also note that the crcany/crc.c code applies to any CRC, and\n * we are currently targeting one: Jones CRC64.\n */\n\nvoid init_combine_cache(uint64_t poly, uint8_t dim) {\n\tunsigned n, cache_num = 0;\n\tcombine_cache[1][0] = poly;\n\tint prev = 1;\n\tuint64_t row = 1;\n\tfor (n = 1; n < dim; n++)\n\t{\n\t\tcombine_cache[1][n] = row;\n\t\trow <<= 1;\n\t}\n\n\tgf2_matrix_square(combine_cache[0], combine_cache[1], dim);\n\tgf2_matrix_square(combine_cache[1], combine_cache[0], dim);\n\n\t/* do/while to overwrite the first two layers, they are not used, but are\n\t * re-generated in the last two layers for the Redis polynomial */\n\tdo {\n\t\tgf2_matrix_square(combine_cache[cache_num], combine_cache[cache_num + prev], dim);\n\t\tprev = -1;\n\t} while (++cache_num < 64);\n}\n\n/* Return the CRC-64 of two sequential blocks, where crc1 is the CRC-64 of the\n * first block, crc2 is the CRC-64 of the second block, and len2 is the length\n * of the second block.\n *\n * If you want reflections on your CRCs; do them outside before / after.\n * WARNING: if you enable USE_STATIC_COMBINE_CACHE to make this fast, you MUST\n * ALWAYS USE THE SAME POLYNOMIAL, otherwise you will get the wrong results.\n * You MAY bzero() the even/odd static arrays, which will induce a re-cache on\n * next call as a work-around, but ... 
maybe just parameterize the cached\n * models at that point like Mark Adler does in modern crcany/crc.c .\n */\nuint64_t crc64_combine(uint64_t crc1, uint64_t crc2, uintmax_t len2, uint64_t poly, uint8_t dim) {\n\t/* degenerate case */\n\tif (len2 == 0)\n\t\treturn crc1;\n\n\tunsigned cache_num = 0;\n\tif (combine_cache[0][0] == 0) {\n\t\tinit_combine_cache(poly, dim);\n\t}\n\n\t/* apply len2 zeros to crc1 (first square will put the operator for one\n\t   zero byte, eight zero bits, in even) */\n\tdo\n\t{\n\t\t/* apply zeros operator for this bit of len2 */\n\t\tif (len2 & 1)\n\t\t\tcrc1 = CRC_MULTIPLY(combine_cache[cache_num], crc1);\n\t\tlen2 >>= 1;\n\t\tcache_num = (cache_num + 1) & 63;\n\t\t/* if no more bits set, then done */\n\t} while (len2 != 0);\n\n\t/* return combined crc */\n\tcrc1 ^= crc2;\n\treturn crc1;\n}\n\n#undef CRC_MULTIPLY\n"
  },
  {
    "path": "src/crccombine.h",
    "content": "\n#include <stdint.h>\n\n\n/* mask types */\ntypedef unsigned long long v2uq __attribute__ ((vector_size (16)));\n\nuint64_t gf2_matrix_times_vec2(uint64_t *mat, uint64_t vec);\nvoid init_combine_cache(uint64_t poly, uint8_t dim);\nuint64_t crc64_combine(uint64_t crc1, uint64_t crc2, uintmax_t len2, uint64_t poly, uint8_t dim);\n"
  },
  {
    "path": "src/crcspeed.c",
    "content": "/*\n * Copyright (C) 2013 Mark Adler\n * Copyright (C) 2019-2024 Josiah Carlson\n * Originally by: crc64.c Version 1.4  16 Dec 2013  Mark Adler\n * Modifications by Matt Stancliff <matt@genges.com>:\n *   - removed CRC64-specific behavior\n *   - added generation of lookup tables by parameters\n *   - removed inversion of CRC input/result\n *   - removed automatic initialization in favor of explicit initialization\n * Modifications by Josiah Carlson <josiah.carlson@gmail.com>\n *   - Added case/vector/AVX/+ versions of crc combine function; see crccombine.c\n *     - added optional static cache\n *   - Modified to use 1 thread to:\n *     - Partition large crc blobs into 2-3 segments\n *     - Process the 2-3 segments in parallel\n *     - Merge the resulting crcs\n *     -> Resulting in 10-90% performance boost for data > 1 meg\n *     - macro-ized to reduce copy/pasta\n\n  This software is provided 'as-is', without any express or implied\n  warranty.  In no event will the author be held liable for any damages\n  arising from the use of this software.\n\n  Permission is granted to anyone to use this software for any purpose,\n  including commercial applications, and to alter it and redistribute it\n  freely, subject to the following restrictions:\n\n  1. The origin of this software must not be misrepresented; you must not\n     claim that you wrote the original software. If you use this software\n     in a product, an acknowledgment in the product documentation would be\n     appreciated but is not required.\n  2. Altered source versions must be plainly marked as such, and must not be\n     misrepresented as being the original software.\n  3. 
This notice may not be removed or altered from any source distribution.\n\n  Mark Adler\n  madler@alumni.caltech.edu\n */\n\n#include \"crcspeed.h\"\n#include \"crccombine.h\"\n\n#define CRC64_LEN_MASK UINT64_C(0x7ffffffffffffff8)\n#define CRC64_REVERSED_POLY UINT64_C(0x95ac9329ac4bc9b5)\n\n/* Fill in a CRC constants table. */\nvoid crcspeed64little_init(crcfn64 crcfn, uint64_t table[8][256]) {\n    uint64_t crc;\n\n    /* generate CRCs for all single byte sequences */\n    for (int n = 0; n < 256; n++) {\n        unsigned char v = n;\n        table[0][n] = crcfn(0, &v, 1);\n    }\n\n    /* generate nested CRC table for future slice-by-8/16/24+ lookup */\n    for (int n = 0; n < 256; n++) {\n        crc = table[0][n];\n        for (int k = 1; k < 8; k++) {\n            crc = table[0][crc & 0xff] ^ (crc >> 8);\n            table[k][n] = crc;\n        }\n    }\n#if USE_STATIC_COMBINE_CACHE\n    /* initialize combine cache for CRC stapling for slice-by 16/24+ */\n    init_combine_cache(CRC64_REVERSED_POLY, 64);\n#endif\n}\n\nvoid crcspeed16little_init(crcfn16 crcfn, uint16_t table[8][256]) {\n    uint16_t crc;\n\n    /* generate CRCs for all single byte sequences */\n    for (int n = 0; n < 256; n++) {\n        unsigned char v = n; /* take the low byte regardless of endianness */\n        table[0][n] = crcfn(0, &v, 1);\n    }\n\n    /* generate nested CRC table for future slice-by-8 lookup */\n    for (int n = 0; n < 256; n++) {\n        crc = table[0][n];\n        for (int k = 1; k < 8; k++) {\n            crc = table[0][(crc >> 8) & 0xff] ^ (crc << 8);\n            table[k][n] = crc;\n        }\n    }\n}\n\n/* Reverse the bytes in a 64-bit word. 
*/\nstatic inline uint64_t rev8(uint64_t a) {\n#if defined(__GNUC__) || defined(__clang__)\n    return __builtin_bswap64(a);\n#else\n    uint64_t m;\n\n    m = UINT64_C(0xff00ff00ff00ff);\n    a = ((a >> 8) & m) | (a & m) << 8;\n    m = UINT64_C(0xffff0000ffff);\n    a = ((a >> 16) & m) | (a & m) << 16;\n    return a >> 32 | a << 32;\n#endif\n}\n\n/* This function is called once to initialize the CRC table for use on a\n   big-endian architecture. */\nvoid crcspeed64big_init(crcfn64 fn, uint64_t big_table[8][256]) {\n    /* Create the little endian table then reverse all the entries. */\n    crcspeed64little_init(fn, big_table);\n    for (int k = 0; k < 8; k++) {\n        for (int n = 0; n < 256; n++) {\n            big_table[k][n] = rev8(big_table[k][n]);\n        }\n    }\n}\n\nvoid crcspeed16big_init(crcfn16 fn, uint16_t big_table[8][256]) {\n    /* Create the little endian table then reverse all the entries. */\n    crcspeed16little_init(fn, big_table);\n    for (int k = 0; k < 8; k++) {\n        for (int n = 0; n < 256; n++) {\n            big_table[k][n] = rev8(big_table[k][n]);\n        }\n    }\n}\n\n/* Note: doing all of our crc/next modifications *before* the crc table\n * references is an absolute speedup on all CPUs tested. So... 
keep these\n * macros separate.\n */\n\n#define DO_8_1(crc, next)                            \\\n    crc ^= *(uint64_t *)next;                        \\\n    next += 8\n\n#define DO_8_2(crc)                                  \\\n    crc = little_table[7][(uint8_t)crc] ^            \\\n             little_table[6][(uint8_t)(crc >> 8)] ^  \\\n             little_table[5][(uint8_t)(crc >> 16)] ^ \\\n             little_table[4][(uint8_t)(crc >> 24)] ^ \\\n             little_table[3][(uint8_t)(crc >> 32)] ^ \\\n             little_table[2][(uint8_t)(crc >> 40)] ^ \\\n             little_table[1][(uint8_t)(crc >> 48)] ^ \\\n             little_table[0][crc >> 56]\n\n#define CRC64_SPLIT(div) \\\n    olen = len; \\\n    next2 = next1 + ((len / div) & CRC64_LEN_MASK); \\\n    len = (next2 - next1)\n\n#define MERGE_CRC(crcn) \\\n    crc1 = crc64_combine(crc1, crcn, next2 - next1, CRC64_REVERSED_POLY, 64)\n\n#define MERGE_END(last, DIV) \\\n    len = olen - ((next2 - next1) * DIV); \\\n    next1 = last\n\n/* Variables so we can change for benchmarking; these seem to be fairly\n * reasonable for Intel CPUs made since 2010. Please adjust as necessary if\n * or when your CPU has more load / execute units. We've written benchmark code\n * to help you tune your platform, see crc64Test. */\n#if defined(__i386__) || defined(__x86_64__)\nstatic size_t CRC64_TRI_CUTOFF = (2*1024);\nstatic size_t CRC64_DUAL_CUTOFF = (128);\n#else\nstatic size_t CRC64_TRI_CUTOFF = (16*1024);\nstatic size_t CRC64_DUAL_CUTOFF = (1024);\n#endif\n\n\nvoid set_crc64_cutoffs(size_t dual_cutoff, size_t tri_cutoff) {\n    CRC64_DUAL_CUTOFF = dual_cutoff;\n    CRC64_TRI_CUTOFF = tri_cutoff;\n}\n\n/* Calculate a non-inverted CRC multiple bytes at a time on a little-endian\n * architecture. 
If you need inverted CRC, invert *before* calling and invert\n * *after* calling.\n * 64 bit crc = process 8/16/24 bytes at once;\n */\nuint64_t crcspeed64little(uint64_t little_table[8][256], uint64_t crc1,\n                          void *buf, size_t len) {\n    unsigned char *next1 = buf;\n\n    if (CRC64_DUAL_CUTOFF < 1) {\n        goto final;\n    }\n\n    /* process individual bytes until we reach an 8-byte aligned pointer */\n    while (len && ((uintptr_t)next1 & 7) != 0) {\n        crc1 = little_table[0][(crc1 ^ *next1++) & 0xff] ^ (crc1 >> 8);\n        len--;\n    }\n\n    if (len >  CRC64_TRI_CUTOFF) {\n        /* 24 bytes per loop, doing 3 parallel 8 byte chunks at a time */\n        unsigned char *next2, *next3;\n        uint64_t olen, crc2=0, crc3=0;\n        CRC64_SPLIT(3);\n        /* len is now the length of the first segment, the 3rd segment possibly\n         * having extra bytes to clean up at the end\n         */\n        next3 = next2 + len;\n        while (len >= 8) {\n            len -= 8;\n            DO_8_1(crc1, next1);\n            DO_8_1(crc2, next2);\n            DO_8_1(crc3, next3);\n            DO_8_2(crc1);\n            DO_8_2(crc2);\n            DO_8_2(crc3);\n        }\n\n        /* merge the 3 crcs */\n        MERGE_CRC(crc2);\n        MERGE_CRC(crc3);\n        MERGE_END(next3, 3);\n    } else if (len > CRC64_DUAL_CUTOFF) {\n        /* 16 bytes per loop, doing 2 parallel 8 byte chunks at a time */\n        unsigned char *next2;\n        uint64_t olen, crc2=0;\n        CRC64_SPLIT(2);\n        /* len is now the length of the first segment, the 2nd segment possibly\n         * having extra bytes to clean up at the end\n         */\n        while (len >= 8) {\n            len -= 8;\n            DO_8_1(crc1, next1);\n            DO_8_1(crc2, next2);\n            DO_8_2(crc1);\n            DO_8_2(crc2);\n        }\n\n        /* merge the 2 crcs */\n        MERGE_CRC(crc2);\n        MERGE_END(next2, 2);\n    }\n    /* We fall through 
here to handle our <CRC64_DUAL_CUTOFF inputs, and for any trailing\n     * bytes that weren't evenly divisible by 16 or 24 above. */\n\n    /* fast processing, 8 bytes (aligned!) per loop */\n    while (len >= 8) {\n        len -= 8;\n        DO_8_1(crc1, next1);\n        DO_8_2(crc1);\n    }\nfinal:\n    /* process remaining bytes (can't be larger than 8) */\n    while (len) {\n        crc1 = little_table[0][(crc1 ^ *next1++) & 0xff] ^ (crc1 >> 8);\n        len--;\n    }\n\n    return crc1;\n}\n\n/* clean up our namespace */\n#undef DO_8_1\n#undef DO_8_2\n#undef CRC64_SPLIT\n#undef MERGE_CRC\n#undef MERGE_END\n#undef CRC64_REVERSED_POLY\n#undef CRC64_LEN_MASK\n\n\n/* note: similar perf advantages can be had for long strings in crc16 using all\n * of the same optimizations as above, though this is unnecessary: crc16 is\n * normally used to shard keys, not to hash / verify data, so it is used on\n * shorter data that doesn't warrant such changes. */\n\nuint16_t crcspeed16little(uint16_t little_table[8][256], uint16_t crc,\n                          void *buf, size_t len) {\n    unsigned char *next = buf;\n\n    /* process individual bytes until we reach an 8-byte aligned pointer */\n    while (len && ((uintptr_t)next & 7) != 0) {\n        crc = little_table[0][((crc >> 8) ^ *next++) & 0xff] ^ (crc << 8);\n        len--;\n    }\n\n    /* fast middle processing, 8 bytes (aligned!) 
per loop */\n    while (len >= 8) {\n        uint64_t n = *(uint64_t *)next;\n        crc = little_table[7][(n & 0xff) ^ ((crc >> 8) & 0xff)] ^\n              little_table[6][((n >> 8) & 0xff) ^ (crc & 0xff)] ^\n              little_table[5][(n >> 16) & 0xff] ^\n              little_table[4][(n >> 24) & 0xff] ^\n              little_table[3][(n >> 32) & 0xff] ^\n              little_table[2][(n >> 40) & 0xff] ^\n              little_table[1][(n >> 48) & 0xff] ^\n              little_table[0][n >> 56];\n        next += 8;\n        len -= 8;\n    }\n\n    /* process remaining bytes (can't be larger than 8) */\n    while (len) {\n        crc = little_table[0][((crc >> 8) ^ *next++) & 0xff] ^ (crc << 8);\n        len--;\n    }\n\n    return crc;\n}\n\n/* Calculate a non-inverted CRC eight bytes at a time on a big-endian\n * architecture.\n */\nuint64_t crcspeed64big(uint64_t big_table[8][256], uint64_t crc, void *buf,\n                       size_t len) {\n    unsigned char *next = buf;\n\n    crc = rev8(crc);\n    while (len && ((uintptr_t)next & 7) != 0) {\n        crc = big_table[0][(crc >> 56) ^ *next++] ^ (crc << 8);\n        len--;\n    }\n\n    /* note: alignment + 2/3-way processing can probably be handled here nearly\n       the same as above, using our updated DO_8_2 macro. Not included in these\n       changes, as other authors, I don't have big-endian to test with. 
*/\n\n    while (len >= 8) {\n        crc ^= *(uint64_t *)next;\n        crc = big_table[0][crc & 0xff] ^\n              big_table[1][(crc >> 8) & 0xff] ^\n              big_table[2][(crc >> 16) & 0xff] ^\n              big_table[3][(crc >> 24) & 0xff] ^\n              big_table[4][(crc >> 32) & 0xff] ^\n              big_table[5][(crc >> 40) & 0xff] ^\n              big_table[6][(crc >> 48) & 0xff] ^\n              big_table[7][crc >> 56];\n        next += 8;\n        len -= 8;\n    }\n\n    while (len) {\n        crc = big_table[0][(crc >> 56) ^ *next++] ^ (crc << 8);\n        len--;\n    }\n\n    return rev8(crc);\n}\n\n/* WARNING: Completely untested on big endian architecture.  Possibly broken. */\nuint16_t crcspeed16big(uint16_t big_table[8][256], uint16_t crc_in, void *buf,\n                       size_t len) {\n    unsigned char *next = buf;\n    uint64_t crc = crc_in;\n\n    crc = rev8(crc);\n    while (len && ((uintptr_t)next & 7) != 0) {\n        crc = big_table[0][((crc >> (56 - 8)) ^ *next++) & 0xff] ^ (crc >> 8);\n        len--;\n    }\n\n    while (len >= 8) {\n        uint64_t n = *(uint64_t *)next;\n        crc = big_table[0][(n & 0xff) ^ ((crc >> (56 - 8)) & 0xff)] ^\n              big_table[1][((n >> 8) & 0xff) ^ (crc & 0xff)] ^\n              big_table[2][(n >> 16) & 0xff] ^\n              big_table[3][(n >> 24) & 0xff] ^\n              big_table[4][(n >> 32) & 0xff] ^\n              big_table[5][(n >> 40) & 0xff] ^\n              big_table[6][(n >> 48) & 0xff] ^\n              big_table[7][n >> 56];\n        next += 8;\n        len -= 8;\n    }\n\n    while (len) {\n        crc = big_table[0][((crc >> (56 - 8)) ^ *next++) & 0xff] ^ (crc >> 8);\n        len--;\n    }\n\n    return rev8(crc);\n}\n\n/* Return the CRC of buf[0..len-1] with initial crc, processing eight bytes\n   at a time using passed-in lookup table.\n   This selects one of two routines depending on the endianness of\n   the architecture. 
*/\nuint64_t crcspeed64native(uint64_t table[8][256], uint64_t crc, void *buf,\n                          size_t len) {\n    uint64_t n = 1;\n\n    return *(char *)&n ? crcspeed64little(table, crc, buf, len)\n                       : crcspeed64big(table, crc, buf, len);\n}\n\nuint16_t crcspeed16native(uint16_t table[8][256], uint16_t crc, void *buf,\n                          size_t len) {\n    uint64_t n = 1;\n\n    return *(char *)&n ? crcspeed16little(table, crc, buf, len)\n                       : crcspeed16big(table, crc, buf, len);\n}\n\n/* Initialize CRC lookup table in architecture-dependent manner. */\nvoid crcspeed64native_init(crcfn64 fn, uint64_t table[8][256]) {\n    uint64_t n = 1;\n\n    *(char *)&n ? crcspeed64little_init(fn, table)\n                : crcspeed64big_init(fn, table);\n}\n\nvoid crcspeed16native_init(crcfn16 fn, uint16_t table[8][256]) {\n    uint64_t n = 1;\n\n    *(char *)&n ? crcspeed16little_init(fn, table)\n                : crcspeed16big_init(fn, table);\n}\n"
  },
  {
    "path": "src/crcspeed.h",
    "content": "/* Copyright (c) 2014, Matt Stancliff <matt@genges.com>\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE. 
*/\n\n#ifndef CRCSPEED_H\n#define CRCSPEED_H\n\n#include <inttypes.h>\n#include <stdio.h>\n\ntypedef uint64_t (*crcfn64)(uint64_t, const void *, const uint64_t);\ntypedef uint16_t (*crcfn16)(uint16_t, const void *, const uint64_t);\n\nvoid set_crc64_cutoffs(size_t dual_cutoff, size_t tri_cutoff);\n\n/* CRC-64 */\nvoid crcspeed64little_init(crcfn64 fn, uint64_t table[8][256]);\nvoid crcspeed64big_init(crcfn64 fn, uint64_t table[8][256]);\nvoid crcspeed64native_init(crcfn64 fn, uint64_t table[8][256]);\n\nuint64_t crcspeed64little(uint64_t table[8][256], uint64_t crc, void *buf,\n                          size_t len);\nuint64_t crcspeed64big(uint64_t table[8][256], uint64_t crc, void *buf,\n                       size_t len);\nuint64_t crcspeed64native(uint64_t table[8][256], uint64_t crc, void *buf,\n                          size_t len);\n\n/* CRC-16 */\nvoid crcspeed16little_init(crcfn16 fn, uint16_t table[8][256]);\nvoid crcspeed16big_init(crcfn16 fn, uint16_t table[8][256]);\nvoid crcspeed16native_init(crcfn16 fn, uint16_t table[8][256]);\n\nuint16_t crcspeed16little(uint16_t table[8][256], uint16_t crc, void *buf,\n                          size_t len);\nuint16_t crcspeed16big(uint16_t table[8][256], uint16_t crc, void *buf,\n                       size_t len);\nuint16_t crcspeed16native(uint16_t table[8][256], uint16_t crc, void *buf,\n                          size_t len);\n#endif\n"
  },
  {
    "path": "src/db.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"vector.h\"\n#include \"cluster.h\"\n#include \"atomicvar.h\"\n#include \"latency.h\"\n#include \"script.h\"\n#include \"functions.h\"\n#include \"cluster_asm.h\"\n#include \"redisassert.h\"\n\n#include <signal.h>\n#include <ctype.h>\n#include \"bio.h\"\n#include \"keymeta.h\"\n\n/*-----------------------------------------------------------------------------\n * C-level DB API\n *----------------------------------------------------------------------------*/\n\nstatic_assert(MAX_KEYSIZES_TYPES == OBJ_TYPE_BASIC_MAX, \"Must be equal\");\n\n/* Flags for expireIfNeeded */\n#define EXPIRE_FORCE_DELETE_EXPIRED 1\n#define EXPIRE_AVOID_DELETE_EXPIRED 2\n#define EXPIRE_ALLOW_ACCESS_EXPIRED 4\n#define EXPIRE_ALLOW_ACCESS_TRIMMED 8\n\n/* Return values for expireIfNeeded */\ntypedef enum {\n    KEY_VALID = 0, /* Could be volatile and not yet expired, non-volatile, or even non-existing key. */\n    KEY_EXPIRED, /* Logically expired but not yet deleted. */\n    KEY_DELETED, /* The key was deleted now. */\n    KEY_TRIMMED  /* Logically trimmed but not yet deleted. */\n} keyStatus;\n\nstatic keyStatus expireIfNeeded(redisDb *db, robj *key, kvobj *kv, int flags);\n\n/* Update LFU when an object is accessed.\n * Firstly, decrement the counter if the decrement time is reached.\n * Then logarithmically increment the counter, and update the access time. 
*/\nvoid updateLFU(robj *val) {\n    unsigned long counter = LFUDecrAndReturn(val);\n    counter = LFULogIncr(counter);\n    val->lru = (LFUGetTimeInMinutes()<<8) | counter;\n}\n\n/* Update LRM when an object is modified. */\nvoid updateLRM(robj *o) {\n    if (o->refcount == OBJ_SHARED_REFCOUNT)\n        return;\n    if (server.maxmemory_policy & MAXMEMORY_FLAG_LRM) {\n        o->lru = LRU_CLOCK();\n    }\n}\n\n/* \n * Update histogram of key sizes\n * \n * It is used to track the distribution of key sizes in the dataset. It is updated \n * every time a key's length is modified. Available to the user via the INFO command. \n * \n * The histogram is a base-2 logarithmic histogram, with 60 bins. The i'th bin\n * counts the number of keys with a size in the range [2^(i-1), 2^i), as shown \n * in the example mapping below. oldLen/newLen must be smaller than 2^48, and if their value \n * equals -1, it means that the key is being created/deleted, respectively. Each\n * data type has its own histogram and it is maintained per database.\n *\n * Example mapping of key lengths to bins:\n *               [1,2)->1 [2,4)->2 [4,8)->3 [8,16)->4 ...\n *\n * Since strings can be zero length, the histogram also tracks:\n *               [0,1)->0\n */\nvoid kvsUpdateHistogram(keysizesHist kvstoreHist, uint32_t type, int64_t oldLen, int64_t newLen) {\n    if(unlikely(type >= OBJ_TYPE_BASIC_MAX))\n        return;\n\n    if (oldLen > 0) {\n        int old_bin = log2ceil(oldLen) + 1;\n        debugServerAssert(old_bin < MAX_KEYSIZES_BINS);\n        kvstoreHist[type][old_bin]--;\n        debugServerAssert(kvstoreHist[type][old_bin] >= 0);\n    } else {\n        /* here, oldLen can be either 0 or -1 */\n        if (oldLen == 0) {\n            /* Only strings can be empty. Yet, a command flow might temporarily\n             * dbAdd() an empty collection, and only afterwards add elements. 
*/\n            kvstoreHist[type][0]--;\n            debugServerAssert(kvstoreHist[type][0] >= 0);\n        }\n    }\n    \n    if (newLen > 0) {\n        int new_bin = log2ceil(newLen) + 1;\n        debugServerAssert(new_bin < MAX_KEYSIZES_BINS);\n        kvstoreHist[type][new_bin]++;\n    } else {\n        /* here, newLen can be either 0 or -1 */\n        if (newLen == 0) {\n            /* Only strings can be empty. Yet, a command flow might temporarily\n             * dbAdd() an empty collection, and only afterwards add elements. */\n            kvstoreHist[type][0]++;\n        }\n    }\n}\n\nvoid updateKeysizesHist(redisDb *db, uint32_t type, int64_t oldLen, int64_t newLen) {\n    kvstoreMetadata *kvstoreMeta = kvstoreGetMetadata(db->keys);\n    kvsUpdateHistogram(kvstoreMeta->keysizes_hist, type, oldLen, newLen);\n}\n\nvoid updateSlotAllocSize(redisDb *db, int didx, kvobj *kv, int64_t oldsize, int64_t newsize) {\n    debugServerAssert(server.memory_tracking_enabled);\n    kvstoreDictMetadata *dictMeta = kvstoreGetDictMeta(db->keys, didx, 0);\n\n    /* Early return if nothing changed */\n    if (oldsize == newsize) return;\n\n    if (dictMeta) {\n        /* Handle -1 as a marker for deletion or type change */\n        if (oldsize >= 0) {\n            debugServerAssert((size_t)oldsize <= dictMeta->alloc_size);\n            dictMeta->alloc_size -= oldsize;\n        }\n        if (newsize >= 0) {\n            dictMeta->alloc_size += newsize;\n        }\n    }\n\n    /* Update allocation size histogram */\n    kvstoreMetadata *kvstoreMeta = kvstoreGetMetadata(db->keys);\n    kvsUpdateHistogram(kvstoreMeta->allocsizes_hist, kv->type, oldsize, newsize);\n}\n\nstatic void dbgAssertHist(kvstore *kvs, keysizesHist hist,\n                          size_t (*fn)(kvobj *), const char *name) {\n    /* Build the expected histogram by scanning all keys in the DB */\n    int64_t scanHist[MAX_KEYSIZES_TYPES][MAX_KEYSIZES_BINS] = {{0}};\n    dictEntry *de;\n    kvstoreIterator kvs_it;\n    
kvstoreIteratorInit(&kvs_it, kvs);\n    while ((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n        kvobj *kv = dictGetKV(de);\n        if (kv->type < OBJ_TYPE_BASIC_MAX) {\n            int64_t len = fn(kv);\n            scanHist[kv->type][(len == 0) ? 0 : log2ceil(len) + 1]++;\n        }\n    }\n    kvstoreIteratorReset(&kvs_it);\n    for (int type = 0; type < OBJ_TYPE_BASIC_MAX; type++) {\n        volatile int64_t *keysizesHist = hist[type];\n        for (int i = 0; i < MAX_KEYSIZES_BINS; i++) {\n            if (scanHist[type][i] == keysizesHist[i])\n                continue;\n\n            /* print scanStr vs. expected histograms for debugging */\n            char scanStr[500] = {0}, keysizesStr[500] = {0};\n            int l1 = 0, l2 = 0;\n            for (int j = 0; (j < MAX_KEYSIZES_BINS) && (l1 < 500) && (l2 < 500); j++) {\n                if (scanHist[type][j])\n                    l1 += snprintf(scanStr + l1, sizeof(scanStr) - l1,\n                                        \"[%d]=%\"PRId64\" \", j, scanHist[type][j]);\n                if (keysizesHist[j])\n                    l2 += snprintf(keysizesStr + l2, sizeof(keysizesStr) - l2,\n                                            \"[%d]=%\"PRId64\" \", j, keysizesHist[j]);\n            }\n            serverPanic(\"%s: type=%d\\nscanStr=%s\\nkeysizes=%s\\n\",\n                        name, type, scanStr, keysizesStr);\n        }\n    }\n}\n\n/* Assert keysizes histogram (For debugging only) */\nstatic void dbgAssertKeysizesHist(redisDb *db) {\n    kvstoreMetadata *meta = kvstoreGetMetadata(db->keys);\n    dbgAssertHist(db->keys, meta->keysizes_hist, getObjectLength, \"dbgAssertKeysizesHist\");\n}\n\n/* Assert per-slot alloc_size (For debugging only) */\nstatic void dbgAssertAllocSizePerSlot(redisDb *db) {\n    if (!server.memory_tracking_enabled) return;\n    \n    /* Check allocsizes histogram per db */\n    kvstoreMetadata *meta = kvstoreGetMetadata(db->keys);\n    dbgAssertHist(db->keys, 
meta->allocsizes_hist, kvobjAllocSize, \"dbgAssertAllocsizesHist\");\n    \n    /* Check alloc_size per slot */    \n    size_t slot_sizes[CLUSTER_SLOTS] = {0};\n    dictEntry *de;\n    kvstoreIterator kvs_it;\n    kvstoreIteratorInit(&kvs_it, db->keys);\n    while ((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n        int slot = kvstoreIteratorGetCurrentDictIndex(&kvs_it);\n        kvobj *kv = dictGetKV(de);\n        slot_sizes[slot] += kvobjAllocSize(kv);\n    }\n    kvstoreIteratorReset(&kvs_it);\n\n    int num_slots = kvstoreNumDicts(db->keys);\n    for (int slot = 0; slot < num_slots; slot++) {\n        kvstoreDictMetadata *dictMeta = kvstoreGetDictMeta(db->keys, slot, 0);\n        size_t want = slot_sizes[slot];\n        size_t have = dictMeta ? dictMeta->alloc_size : 0;\n        if (have == want) continue;\n        serverPanic(\"dbgAssertAllocSizePerSlot: slot=%d expected=%zu actual=%zu\",\n                    slot, want, have);\n    }\n}\n\n/* Run debug assertions based on server.dbg_assert_flags.\n *\n * DBG_ASSERT_KEYSIZES:   Triggered by DEBUG KEYSIZES-HIST-ASSERT 1\n * DBG_ASSERT_ALLOC_SLOT: Triggered by DEBUG ALLOCSIZE-SLOTS-ASSERT 1\n */\nvoid dbgRunAssertions(redisDb *db) {\n    /* Don't assert during nested calls. Intermediate state may be inconsistent. */\n    if (server.execution_nesting) return;\n\n    /* Don't assert during RDB loading. Database may be in inconsistent state. */\n    if (server.loading || server.async_loading) return;\n\n    /* Don't assert during ASM background trim or import.\n     * - During background trim, histogram delta hasn't been applied yet.\n     * - During import, assertions can introduce slowdown and cause ASM tests to fail. 
*/\n    if (asmIsBgTrimRunning() || asmImportInProgress()) return;\n\n    if (server.dbg_assert_flags & DBG_ASSERT_KEYSIZES)\n        dbgAssertKeysizesHist(db);\n\n    if (server.dbg_assert_flags & DBG_ASSERT_ALLOC_SLOT)\n        dbgAssertAllocSizePerSlot(db);\n}\n\n/* Lookup a kvobj for read or write operations, or return NULL if it is not\n * found in the specified DB. This function implements the functionality of\n * lookupKeyRead(), lookupKeyWrite() and their ...WithFlags() variants.\n *\n * link - If the key is found, return the link of the key.\n *        If the key is not found, return the bucket link where the key should be added,\n *        or NULL if the dict wasn't allocated yet.\n *\n * Side-effects of calling this function:\n *\n * 1. A key gets expired if it reached its TTL.\n * 2. The key's last access time is updated.\n * 3. The global keys hits/misses stats are updated (reported in INFO).\n * 4. If keyspace notifications are enabled, a \"keymiss\" notification is fired.\n *\n * Flags change the behavior of this command:\n *\n *  LOOKUP_NONE (or zero): No special flags are passed.\n *  LOOKUP_NOTOUCH: Don't alter the last access time of the key.\n *  LOOKUP_NONOTIFY: Don't trigger keyspace event on key miss.\n *  LOOKUP_NOSTATS: Don't increment key hits/misses counters.\n *  LOOKUP_WRITE: Prepare the key for writing (delete expired keys even on\n *                replicas, use separate keyspace stats and events (TODO)).\n *  LOOKUP_NOEXPIRE: Perform expiration check, but avoid deleting the key,\n *                   so that we don't have to propagate the deletion.\n *\n * Note: this function also returns NULL if the key is logically expired but\n * still existing, in case this is a replica and LOOKUP_WRITE is not set.\n * Even if the key expiry is master-driven, we can correctly report a key is\n * expired on replicas even if the master is lagging in expiring our key via DELs\n * in the replication link. 
*/\nkvobj *lookupKey(redisDb *db, robj *key, int flags, dictEntryLink *link) {\n\n    kvobj *val = dbFindByLink(db, key->ptr, link);\n\n    if (val) {\n        /* Forcing deletion of expired keys on a replica makes the replica\n         * inconsistent with the master. We forbid it on readonly replicas, but\n         * we have to allow it on writable replicas to make write commands\n         * behave consistently.\n         *\n         * It's possible that the WRITE flag is set even during a readonly\n         * command, since the command may trigger events that cause modules to\n         * perform additional writes. */\n        int is_ro_replica = server.masterhost && server.repl_slave_ro;\n        int expire_flags = 0;\n        if (flags & LOOKUP_WRITE && !is_ro_replica)\n            expire_flags |= EXPIRE_FORCE_DELETE_EXPIRED;\n        if (flags & LOOKUP_NOEXPIRE)\n            expire_flags |= EXPIRE_AVOID_DELETE_EXPIRED;\n        if (flags & LOOKUP_ACCESS_EXPIRED)\n            expire_flags |= EXPIRE_ALLOW_ACCESS_EXPIRED;\n        if (flags & LOOKUP_ACCESS_TRIMMED)\n            expire_flags |= EXPIRE_ALLOW_ACCESS_TRIMMED;\n        if (expireIfNeeded(db, key, val, expire_flags) != KEY_VALID) {\n            /* The key is no longer valid. */\n            val = NULL;\n            if (link) *link = NULL;\n        }\n    }\n\n    if (val) {\n        /* Update the access time for the ageing algorithm.\n         * Don't do it if we have a saving child, as this will trigger\n         * a copy on write madness. 
*/\n        if (((flags & LOOKUP_NOTOUCH) == 0) &&\n            (server.current_client && server.current_client->flags & CLIENT_NO_TOUCH) &&\n            (server.executing_client && server.executing_client->cmd->proc != touchCommand))\n            flags |= LOOKUP_NOTOUCH;\n        if (!hasActiveChildProcess() && !(flags & LOOKUP_NOTOUCH)){\n            if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {\n                updateLFU(val);\n            } else if (!(server.maxmemory_policy & MAXMEMORY_FLAG_LRM)) {\n                /* LRM policy should NOT update timestamp on reads. */\n                val->lru = LRU_CLOCK();\n            }\n        }\n\n        if (!(flags & (LOOKUP_NOSTATS | LOOKUP_WRITE)))\n            server.stat_keyspace_hits++;\n        /* TODO: Use separate hits stats for WRITE */\n    } else {\n        if (!(flags & (LOOKUP_NONOTIFY | LOOKUP_WRITE)))\n            notifyKeyspaceEvent(NOTIFY_KEY_MISS, \"keymiss\", key, db->id);\n        if (!(flags & (LOOKUP_NOSTATS | LOOKUP_WRITE)))\n            server.stat_keyspace_misses++;\n        /* TODO: Use separate misses stats and notify event for WRITE */\n    }\n\n    return val;\n}\n\n/* Lookup a key for read operations, or return NULL if the key is not found\n * in the specified DB.\n *\n * This API should not be used when we write to the key after obtaining\n * the object linked to the key, but only for read only operations.\n *\n * This function is equivalent to lookupKey(). The point of using this function\n * rather than lookupKey() directly is to indicate that the purpose is to read\n * the key. */\nkvobj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags) {\n    serverAssert(!(flags & LOOKUP_WRITE));\n    return lookupKey(db, key, flags, NULL);\n}\n\n/* Like lookupKeyReadWithFlags(), but does not use any flag, which is the\n * common case. 
*/\nkvobj *lookupKeyRead(redisDb *db, robj *key) {\n    return lookupKeyReadWithFlags(db,key,LOOKUP_NONE);\n}\n\n/* Lookup a key for write operations, and as a side effect, if needed, expires\n * the key if its TTL is reached. It's equivalent to lookupKey() with the\n * LOOKUP_WRITE flag added.\n *\n * Returns the linked value object if the key exists or NULL if the key\n * does not exist in the specified DB. */\nkvobj *lookupKeyWriteWithFlags(redisDb *db, robj *key, int flags) {\n    return lookupKey(db, key, flags | LOOKUP_WRITE, NULL);\n}\n\nkvobj *lookupKeyWrite(redisDb *db, robj *key) {\n    return lookupKeyWriteWithFlags(db, key, LOOKUP_NONE);\n}\n\n/* Like lookupKeyWrite(), but accepts ref to optional `link`\n *\n * link - If key found, updated to link the key.\n *        If key not found, updated to the bucket where the key should be added.\n *        If key not found and dict is empty, it is set to NULL\n */\nkvobj *lookupKeyWriteWithLink(redisDb *db, robj *key, dictEntryLink *link) {\n    return lookupKey(db, key, LOOKUP_NONE | LOOKUP_WRITE, link);\n}\n\nkvobj *lookupKeyReadOrReply(client *c, robj *key, robj *reply) {\n    kvobj *kv = lookupKeyRead(c->db, key);\n    if (!kv) addReplyOrErrorObject(c, reply);\n    return kv;\n}\n\nkvobj *lookupKeyWriteOrReply(client *c, robj *key, robj *reply) {\n    kvobj *kv = lookupKeyWrite(c->db, key);\n    if (!kv) addReplyOrErrorObject(c, reply);\n    return kv;\n}\n\n/* Add a key-value entry to the DB.\n *\n * A copy of 'key' is stored in the database. The caller must ensure the\n * `key` is properly freed by calling decrRefcount(key).\n *\n * The value may (if its reference counter == 1) be reallocated and become\n * invalid after a call to this function. 
The (possibly reallocated) value is\n * stored in the database and the 'valref' pointer is updated to point to the\n * new allocation.\n *\n * The reference counter of the value pointed to by valref is not incremented,\n * so the caller should not free the value using decrRefcount after calling this\n * function.\n *\n * link - Optional link to the bucket where the key should be added.\n *          On return, it gets updated, as needed, to the inserted key.\n *          \n * keymeta - Defines the metadata to be attached to the key, including optional \n *           expiration and modules metadata to be copied (REQUIRED).\n */\nkvobj *dbAddInternal(redisDb *db, robj *key, robj **valref, dictEntryLink *link, \n                     const KeyMetaSpec *keymeta) \n{\n    int slot = getKeySlot(key->ptr);\n    dictEntryLink tmp = NULL;\n    if (link == NULL) link = &tmp;\n    robj *val = *valref;\n    kvobj *kv = kvobjSet(key->ptr, val, keymeta->metabits);\n    initObjectLRUOrLFU(kv);\n    kvstoreDictSetAtLink(db->keys, slot, kv, link, 1);\n    \n    /* Handle metadata (expiration and modules metadata) */\n    if (keymeta->metabits) {\n        if (keymeta->metabits & KEY_META_MASK_EXPIRE) {\n            /* Expiry is always the first meta (from last) */\n            long long expire = keymeta->meta[KEY_META_ID_MAX - 1];\n            kvobj *newkv = setExpireByLink(NULL, db, key->ptr, expire, *link);\n            serverAssert(newkv == kv);\n        }\n        \n        /* memcpy modules metadata to the beginning of the kvobj */\n        if (keymeta->metabits & KEY_META_MASK_MODULES)\n            /* This also trivially overwrites expire */\n            memcpy(kvobjGetAllocPtr(kv), \n                   keymeta->meta + KEY_META_ID_MAX - keymeta->numMeta, \n                   keymeta->numMeta * sizeof(uint64_t));\n    }\n\n    signalKeyAsReady(db, key, kv->type);\n    notifyKeyspaceEvent(NOTIFY_NEW,\"new\",key,db->id);\n    updateKeysizesHist(db, kv->type, -1, getObjectLength(kv)); /* add hist */\n    if 
(server.memory_tracking_enabled)\n        updateSlotAllocSize(db, slot, kv, -1, kvobjAllocSize(kv));\n    *valref = kv;\n    return kv;\n}\n\n/* Read dbAddInternal() comment */\nkvobj *dbAdd(redisDb *db, robj *key, robj **valref) {\n    KeyMetaSpec keyMetaEmpty; /* No metadata added */\n    keyMetaSpecInit(&keyMetaEmpty);\n    return dbAddInternal(db, key, valref, NULL, &keyMetaEmpty);\n}\n\nkvobj *dbAddByLink(redisDb *db, robj *key, robj **valref, dictEntryLink *link) {\n    KeyMetaSpec keyMetaEmpty; /* No metadata added */\n    keyMetaSpecInit(&keyMetaEmpty);\n    return dbAddInternal(db, key, valref, link, &keyMetaEmpty);\n}\n\n/* Returns the key's hash slot when cluster mode is enabled, or 0 when disabled.\n * The only difference between this function and getKeySlot() is that it does not use the cached key slot from the current_client\n * and always calculates the CRC hash.\n * This is useful when the slot needs to be calculated for a key that the user didn't request, such as in the case of eviction. */\nint calculateKeySlot(sds key) {\n    return server.cluster_enabled ? keyHashSlot(key, (int) sdslen(key)) : 0;\n}\n\n/* Return the slot-specific dictionary index for the key, based on the key's hash slot, when cluster mode is enabled, else 0. */\nint getKeySlot(sds key) {\n    if (!server.cluster_enabled) return 0;\n    /* This is a performance optimization that uses the pre-set slot id from the current command,\n     * in order to avoid calculating the key hash.\n     *\n     * This optimization is only used when the current_client flag `CLIENT_EXECUTING_COMMAND` is set.\n     * It only gets set during the execution of a command under the `call` method. 
Other flows requesting\n     * the key slot fall back to calculateKeySlot.\n     */\n    if (server.current_client && server.current_client->slot >= 0 && server.current_client->flags & CLIENT_EXECUTING_COMMAND) {\n        debugServerAssertWithInfo(server.current_client, NULL,\n                                  (int)keyHashSlot(key, (int)sdslen(key)) == server.current_client->slot);\n        return server.current_client->slot;\n    }\n    int slot = keyHashSlot(key, (int)sdslen(key));\n    return slot;\n}\n\n/* Return the slot of the key in the command.\n * INVALID_CLUSTER_SLOT if no keys, CLUSTER_CROSSSLOT if cross slot, otherwise the slot number. */\nint getSlotFromCommand(struct redisCommand *cmd, robj **argv, int argc) {\n    if (!cmd || !server.cluster_enabled) return INVALID_CLUSTER_SLOT;\n\n    /* Get the keys from the command */\n    getKeysResult result = GETKEYS_RESULT_INIT;\n    getKeysFromCommand(cmd, argv, argc, &result);\n\n    /* Extract slot from the keys result. */\n    int slot = extractSlotFromKeysResult(argv, &result);\n    getKeysFreeResult(&result);\n    return slot;\n}\n\n/* This is a special version of dbAdd() that is used only when loading\n * keys from the RDB file: the key is passed as an SDS string that is\n * copied by the function and freed by the caller.\n *\n * Moreover this function will not abort if the key is already busy, to\n * give more control to the caller, nor will it signal the key as ready\n * since it is not useful in this context.\n *\n * If added to the db, a pointer to the object is returned; otherwise NULL is returned.\n */\nkvobj *dbAddRDBLoad(redisDb *db, sds key, robj **valref, const KeyMetaSpec *keyMetaSpec) {\n    /* Add new kvobj to the db. 
*/\n    int slot = getKeySlot(key);\n\n    dictEntryLink link, bucket;\n    link = kvstoreDictFindLink(db->keys, slot, key, &bucket);\n\n    /* If already exists, return NULL */\n    if (link != NULL)\n        return NULL;\n\n    /* Create kvobj with metadata bits from KeyMetaSpec */\n    robj *val = *valref;\n    kvobj *kv = kvobjSet(key, val, keyMetaSpec->metabits);\n    initObjectLRUOrLFU(kv);\n    kvstoreDictSetAtLink(db->keys, slot, kv, &bucket, 1);\n\n    /* Handle metadata (expiration and modules metadata) */\n    if (keyMetaSpec->metabits) {\n        if (keyMetaSpec->metabits & KEY_META_MASK_EXPIRE) {\n            /* Expiry is always the first meta (from last) */\n            long long expire = keyMetaSpec->meta[KEY_META_ID_MAX - 1];\n            kvobj *newkv = setExpireByLink(NULL, db, key, expire, bucket);\n            serverAssert(newkv == kv);\n        }\n\n        /* memcpy modules metadata to beginning of kvobj */\n        if (keyMetaSpec->metabits & KEY_META_MASK_MODULES)\n            memcpy(kvobjGetAllocPtr(kv),\n                   keyMetaSpec->meta + KEY_META_ID_MAX - keyMetaSpec->numMeta,\n                   keyMetaSpec->numMeta * sizeof(uint64_t));\n    }\n\n    updateKeysizesHist(db, kv->type, -1, (int64_t) getObjectLength(kv));\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(db, slot, kv, -1, kvobjAllocSize(kv));\n    return *valref = kv;\n}\n\n/**\n * Overwrite an existing key's value in db with a new value.\n *\n * - If the reference count of 'valref' is 1 the ownership of the value is\n *   transferred to this function. The value may be reallocated, potentially\n *   invalidating any external references to it. 
The (potentially reallocated)\n *   value is stored in the database, and the 'valref' pointer is updated to\n *   reflect the new allocation, if one occurs.\n * - The reference counter of the value referenced by 'valref' is not incremented,\n *   so the caller must refrain from releasing it using decrRefCount after this\n *   function is called.\n * - This function does not modify the expire time of the existing key.\n * - The 'overwrite' flag indicates whether this is done as part of a\n *   complete replacement of the key, which can be thought of as a deletion and\n *   replacement (in which case we need to emit deletion signals), or just an\n *   update of the value of an existing key (when false).\n * - The `link` is optional; if provided, it saves a lookup.\n */\nstatic void dbSetValue(redisDb *db, robj *key, robj **valref, dictEntryLink link, \n                       int overwrite, int updateKeySizes, int keepTTL) {\n    int freeModuleMeta = 0;\n    robj *val = *valref;\n    int slot = getKeySlot(key->ptr);\n    size_t oldsize = 0;\n    if (!link) {\n        link = kvstoreDictFindLink(db->keys, slot, key->ptr, NULL);\n        serverAssertWithInfo(NULL, key, link != NULL); /* expected to exist */\n    }\n    kvobj *old = dictGetKV(*link);\n    kvobj *kvNew;\n\n    int64_t oldlen = (int64_t) getObjectLength(old);\n    int oldtype = old->type;\n\n    /* If a hash with HFEs, take care to remove it from the global HFE DS before\n     * attempting to manipulate and maybe free the old kvobj */\n    if (old->type == OBJ_HASH)\n        estoreRemove(db->subexpires, slot, old);\n\n    if (old->type == OBJ_STREAM)\n        streamKeyRemoved(db, key, old);\n\n    long long oldExpire = getExpire(db, key->ptr, old);\n\n    /* All metadata will be kept for the new object if not `overwrite` */\n    uint32_t newKeyMetaBits = old->metabits;\n    /* clear expire if not keepTTL or no old expire */\n    if ((!keepTTL) || (oldExpire == -1))\n        newKeyMetaBits &= ~KEY_META_MASK_EXPIRE; 
\n\n    if (overwrite) {\n        /* On overwrite, discard module metadata excluding expire if set */\n        newKeyMetaBits &= KEY_META_MASK_EXPIRE;\n        /* RM_StringDMA may call dbUnshareStringValue which may free val, so we\n         * need to incr to retain old */\n        incrRefCount(old);\n\n        /* Free related metadata. Ignore builtin metadata (currently only expire) */\n        if (getModuleMetaBits(old->metabits)) {\n            keyMetaOnUnlink(db, key, old);\n            freeModuleMeta = 1;\n        }\n\n        /* Although the key is not really deleted from the database, we regard\n         * overwrite as two steps of unlink+add, so we still need to call the unlink\n         * callback of the module. */\n        moduleNotifyKeyUnlink(key,old,db->id,DB_FLAG_KEY_OVERWRITE);\n        /* We want to try to unblock any module clients or clients using a blocking XREADGROUP */\n        signalDeletedKeyAsReady(db,key,old->type);\n        decrRefCount(old);\n        /* Because of RM_StringDMA, old may have changed, so we need to get old again */\n        old = dictGetKV(*link);\n    }\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(old);\n\n    if ((old->refcount == 1 && old->encoding != OBJ_ENCODING_EMBSTR) &&\n        (val->refcount == 1 && val->encoding != OBJ_ENCODING_EMBSTR) && (!freeModuleMeta))\n    {\n        /* Keep the old object in the database. Just swap its ptr, type and\n         * encoding with the content of val. */\n        robj tmp = *old;\n        old->type = val->type;\n        old->encoding = val->encoding;\n        old->ptr = val->ptr;\n        val->type = tmp.type;\n        val->encoding = tmp.encoding;\n        val->ptr = tmp.ptr;\n        /* Set new to old to keep the old object. Set old to val to be freed below. 
*/\n        kvNew = old;\n        old = val;\n\n        /* Handle TTL in the optimization path */\n        if ((!keepTTL) && (oldExpire >= 0))\n            removeExpire(db, key);\n    } else {\n        /* Replace the old value at its location in the key space. */\n        val->lru = old->lru;\n        \n        kvNew = kvobjSet(key->ptr, val, newKeyMetaBits);\n        kvstoreDictSetAtLink(db->keys, slot, kvNew, &link, 0);\n\n        /* If there's an expiry, replace the old value at its location in the expire space. */\n        if (oldExpire != -1) {\n            if (keepTTL) {\n                kvobjSetExpire(kvNew, oldExpire); /* kvNew not reallocated here */\n                dictEntryLink exLink = kvstoreDictFindLink(db->expires, slot,\n                                                           key->ptr, NULL);\n                serverAssertWithInfo(NULL, key, exLink != NULL);\n                kvstoreDictSetAtLink(db->expires, slot, kvNew, &exLink, 0);\n            } else {\n                kvstoreDictDelete(db->expires, slot, key->ptr);\n            }\n        }\n\n        if (newKeyMetaBits & KEY_META_MASK_MODULES)\n            keyMetaTransition(old, kvNew);\n    }\n\n    /* Remove the old key and add the new key to the KEYSIZES histogram */\n    int64_t newlen = (int64_t) getObjectLength(kvNew);\n    if (updateKeySizes) {\n        /* Save one call if old and new are the same type */\n        if (oldtype == kvNew->type) {\n            updateKeysizesHist(db, oldtype, oldlen, newlen);\n        } else {\n            updateKeysizesHist(db, oldtype, oldlen, -1);\n            updateKeysizesHist(db, kvNew->type, -1, newlen);\n        }\n    }\n\n    if (server.memory_tracking_enabled) {\n        /* Save one call if old and new are the same type */\n        if (oldtype == kvNew->type) {\n            updateSlotAllocSize(db, slot, kvNew, oldsize, kvobjAllocSize(kvNew));\n        } else {\n            updateSlotAllocSize(db, slot, old, oldsize, -1);\n            updateSlotAllocSize(db, slot, 
kvNew, -1, kvobjAllocSize(kvNew));\n        }\n    }\n\n    if (server.io_threads_num > 1 && old->encoding == OBJ_ENCODING_RAW && old->refcount == 1) {\n        /* In multi-threaded mode, the OBJ_ENCODING_RAW string object usually is\n         * allocated in the IO thread, so we defer the free to the IO thread.\n         * Besides, we never free a string object in BIO threads, so, even with\n         * lazyfree-lazy-server-del enabled, a fallback to main thread freeing\n         * due to defer free failure doesn't go against the config intention. */\n        tryDeferFreeClientObject(server.current_client, DEFERRED_OBJECT_TYPE_ROBJ, old);\n    } else if (server.lazyfree_lazy_server_del) {\n        freeObjAsync(key, old, db->id);\n    } else {\n        decrRefCount(old);\n    }\n    *valref = kvNew;\n}\n\n/* Replace an existing key with a new value; we just replace the value and don't\n * emit any events */\nvoid dbReplaceValue(redisDb *db, robj *key, robj **valref, int updateKeySizes) {\n    dbSetValue(db, key, valref, NULL, 0, updateKeySizes, 1);\n}\n\n/* Replace an existing key with a new value (don't emit any events)\n *\n * The parameter 'link' is optional. If provided, it saves a lookup.\n */\nvoid dbReplaceValueWithLink(redisDb *db, robj *key, robj **val, dictEntryLink link) {\n    dbSetValue(db, key, val, link, 0, 1, 1);\n}\n\n/* High level Set operation. This function can be used in order to set\n * a key, whether it already existed or not, to a new object.\n *\n * 1) The value may be reallocated when adding it to the database. The value\n *    pointer 'valref' is updated to point to the reallocated object. 
The\n *    reference count of the value object is *not* incremented.\n * 2) Clients WATCHing the destination key are notified.\n * 3) The expire time of the key is reset (the key is made persistent),\n *    unless 'SETKEY_KEEPTTL' is enabled in flags.\n * 4) The key lookup can take place outside this interface; its outcome is then\n *    delivered with 'SETKEY_ALREADY_EXIST' or 'SETKEY_DOESNT_EXIST'.\n *\n * All the new keys in the database should be created via this interface.\n * The client 'c' argument may be set to NULL if the operation is performed\n * in a context where there is no clear client performing the operation. */\nvoid setKey(client *c, redisDb *db, robj *key, robj **valref, int flags) {\n    setKeyByLink(c, db, key, valref, flags, NULL);\n}\n\n/* Like setKey(), but accepts an optional link\n *\n * - If flags is set with SETKEY_ALREADY_EXIST, then `link` must be provided\n * - If flags is set with SETKEY_DOESNT_EXIST, then `link` is optional. If\n *   provided, it will point to the bucket where the key should be added.\n * - If neither flag is set (0), then add or update the key, and `link` must be NULL\n * On return, link gets updated, as needed, to the inserted kvobj.\n */\nvoid setKeyByLink(client *c, redisDb *db, robj *key, robj **valref, int flags, dictEntryLink *plink) {\n    dictEntryLink dummy = NULL, *link = plink ? 
plink : &dummy;\n    int exists;\n    kvobj *oldval = NULL;\n\n    if (flags & SETKEY_ALREADY_EXIST) {\n        debugServerAssert((*link) != NULL);\n        oldval = dictGetKV(**link);\n        exists = 1;\n    } else if (flags & SETKEY_DOESNT_EXIST) {\n        /* link is optional */\n        exists = 0;\n    } else {\n        /* Add or update key */\n        oldval = lookupKeyWriteWithLink(db, key, link);\n        exists = oldval != NULL;\n    }\n\n    if (exists) {\n        int oldtype = oldval->type;\n        int newtype = (*valref)->type;\n\n        /* Update the value of an existing key */\n        dbSetValue(db, key, valref, *link, 1, 1, flags & SETKEY_KEEPTTL);\n\n        /* Notify keyspace events for override and type change */\n        notifyKeyspaceEvent(NOTIFY_OVERWRITTEN, \"overwritten\", key, db->id);\n        if (oldtype != newtype)\n            notifyKeyspaceEvent(NOTIFY_TYPE_CHANGED, \"type_changed\", key, db->id);\n    } else {\n        /* Add the new key to the database */\n        dbAddByLink(db, key, valref, link);\n    }\n\n    /* Signal key modification and update LRM timestamp. */\n    keyModified(c,db,key,*valref,!(flags & SETKEY_NO_SIGNAL));\n}\n\n/* During atomic slot migration, keys that are being imported are in an\n * intermediate state. We cannot access them and therefore skip them.\n *\n * This callback function is currently used by:\n * - dbRandomKey\n * - keysCommand\n * - scanCommand\n */\nstatic int accessKeysShouldSkipDictIndex(int didx) {\n    return !clusterCanAccessKeysInSlot(didx);\n}\n\n/* Return a random key, in the form of a Redis object.\n * If there are no keys, NULL is returned.\n *\n * The function makes sure to return keys not already expired. 
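The bounded-retry escape hatch used below can be isolated into a small sketch; all names here are illustrative stand-ins, not functions from this file:

```c
#include <assert.h>
#include <stddef.h>

// Hypothetical predicate: returns 1 if the candidate is acceptable.
typedef int (*accept_fn)(int candidate);

// Sample predicate used only for illustration: accept even values.
static int accept_even(int candidate) { return (candidate % 2) == 0; }

// Pick a candidate from 'pool' of length 'n', retrying up to 'maxtries'
// times. If every attempt is rejected, fall back to the last candidate
// rather than looping forever (mirrors the maxtries escape hatch here).
static int pick_with_retries(const int *pool, size_t n, accept_fn accept, int maxtries) {
    if (n == 0) return -1;
    int candidate = -1;
    for (int i = 0; i < maxtries; i++) {
        candidate = pool[(size_t)i % n];  // stand-in for a random pick
        if (accept(candidate)) return candidate;
    }
    return candidate;  // possibly stale, but termination is guaranteed
}
```

As in dbRandomKey(), termination is guaranteed by returning the last candidate once the retry budget is exhausted, even if that candidate may already be stale.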
*/\nrobj *dbRandomKey(redisDb *db) {\n    dictEntry *de;\n    int maxtries = 100;\n    int allvolatile = kvstoreSize(db->keys) == kvstoreSize(db->expires);\n\n    while(1) {\n        robj *keyobj;\n        int randomSlot = kvstoreGetFairRandomDictIndex(db->keys, accessKeysShouldSkipDictIndex, 16, 1);\n        if (randomSlot == -1) return NULL;\n        de = kvstoreDictGetFairRandomKey(db->keys, randomSlot);\n        if (de == NULL) return NULL;\n\n        kvobj *kv = dictGetKV(de);\n        sds key = kvobjGetKey(kv);\n        keyobj = createStringObject(key,sdslen(key));\n        if (allvolatile && (server.masterhost || isPausedActions(PAUSE_ACTION_EXPIRE)) && --maxtries == 0) {\n            /* If the DB is composed only of keys with an expire set,\n             * it could happen that all the keys are already logically\n             * expired in the slave, so the function cannot stop because\n             * expireIfNeeded() is false, nor can it stop because\n             * dictGetFairRandomKey() returns NULL (there are keys to return).\n             * To prevent the infinite loop we make a limited number of tries,\n             * and if the conditions for an infinite loop persist, we\n             * eventually return a key name that may be already expired. */\n            return keyobj;\n        }\n        if (expireIfNeeded(db, keyobj, kv, 0) != KEY_VALID) {\n            decrRefCount(keyobj);\n            continue; /* search for another key. This one expired. */\n        }\n\n        return keyobj;\n    }\n}\n\n/* Helper for sync and async delete. 
*/\nint dbGenericDelete(redisDb *db, robj *key, int async, int flags) {\n    dictEntryLink link;\n    int table;\n    int slot = getKeySlot(key->ptr);\n    link = kvstoreDictTwoPhaseUnlinkFind(db->keys, slot, key->ptr, &table);\n\n    if (link) {\n        kvobj *kv = dictGetKV(*link);\n\n        int64_t oldlen = (int64_t) getObjectLength(kv);\n        int type = kv->type;\n\n        /* If hash object with expiry on fields, remove it from HFE DS of DB */\n        if (type == OBJ_HASH)\n            estoreRemove(db->subexpires, slot, kv);\n\n        /* If stream with IDMP tracking, remove it from stream_idmp_keys */\n        if (type == OBJ_STREAM)\n            streamKeyRemoved(db, key, kv);\n\n        /* RM_StringDMA may call dbUnshareStringValue which may free kv, so we\n         * need to incr to retain kv */\n        incrRefCount(kv); /* refcnt=1->2 */\n        /* Metadata hook: notify unlink for key metadata cleanup. */\n        if (getModuleMetaBits(kv->metabits)) keyMetaOnUnlink(db, key, kv);\n        /* Tells the module that the key has been unlinked from the database. */\n        moduleNotifyKeyUnlink(key, kv, db->id, flags);\n        /* We want to try to unblock any module clients or clients using a blocking XREADGROUP */\n        signalDeletedKeyAsReady(db,key,type);\n        /* We should call decr before freeObjAsync. If not, the refcount may be\n         * greater than 1, so freeObjAsync doesn't work */\n        decrRefCount(kv);\n\n        /* Because of dbUnshareStringValue, the val in db may change. 
*/\n        kv = dictGetKV(*link);\n\n        /* If expirable, delete the entry from the expires dict. Doing so does\n         * not decrRefCount the kvobj. */\n        if (kvobjGetExpire(kv) != -1)\n            kvstoreDictDelete(db->expires, slot, key->ptr);\n\n        if (async) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(db, slot, kv, kvobjAllocSize(kv), -1);\n            freeObjAsync(key, kv, db->id);\n            /* Set the key to NULL in the main dictionary. */\n            kvstoreDictSetAtLink(db->keys, slot, NULL, &link, 0);\n        }\n        kvstoreDictTwoPhaseUnlinkFree(db->keys, slot, link, table);\n\n        /* Remove key from keysizes histogram */\n        if (!(flags & DB_FLAG_NO_UPDATE_KEYSIZES))\n            updateKeysizesHist(db, type, oldlen, -1);\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Delete a key, value, and associated expiration entry if any, from the DB */\nint dbSyncDelete(redisDb *db, robj *key) {\n    return dbGenericDelete(db, key, 0, DB_FLAG_KEY_DELETED);\n}\n\n/* Delete a key, value, and associated expiration entry if any, from the DB. If\n * the value consists of many allocations, it may be freed asynchronously. */\nint dbAsyncDelete(redisDb *db, robj *key) {\n    return dbGenericDelete(db, key, 1, DB_FLAG_KEY_DELETED);\n}\n\n/* This is a wrapper whose behavior depends on the Redis lazy free\n * configuration. Deletes the key synchronously or asynchronously. */\nint dbDelete(redisDb *db, robj *key) {\n    return dbGenericDelete(db, key, server.lazyfree_lazy_server_del, DB_FLAG_KEY_DELETED);\n}\n\n/* Similar to dbDelete(), but does not update the keysizes histogram.\n * This is used when we want to delete a key without affecting the histogram,\n * typically in cases where a command flow deletes elements from a collection\n * and then deletes the collection itself. In such cases, using dbDelete()\n * would incorrectly decrement bin #0. 
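To make the bin #0 hazard concrete, here is a minimal, hypothetical model of the histogram bookkeeping (none of these names exist in this codebase):

```c
#include <assert.h>

// Minimal model of a keysizes histogram: hist[b] counts keys whose length
// falls in bin b. For brevity, bin 0 = empty key, bin 1 = non-empty key.
#define BIN_OF(len) ((len) > 0 ? 1 : 0)

typedef struct { int hist[2]; } keysizes_model;

// Popping the last element untracks the key from the histogram entirely,
// so a subsequent delete must NOT decrement again.
static void model_pop_last(keysizes_model *m, int old_len) {
    m->hist[BIN_OF(old_len)]--;  // key is no longer tracked
}

// What a plain delete would do: decrement the bin of the current length.
static void model_delete(keysizes_model *m, int len) {
    m->hist[BIN_OF(len)]--;
}
```

In this model, a flow that pops every element and then issues a plain delete drives bin #0 negative; skipping the histogram update on the final delete avoids the underflow.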
A corresponding test should be added\n * to `info-keysizes.tcl`. */\nint dbDeleteSkipKeysizesUpdate(redisDb *db, robj *key) {\n    return dbGenericDelete(db, key, server.lazyfree_lazy_server_del,\n                    DB_FLAG_KEY_DELETED | DB_FLAG_NO_UPDATE_KEYSIZES);\n}\n\n/* Prepare the string object stored at 'key' to be modified destructively\n * to implement commands like SETBIT or APPEND.\n *\n * An object is usually ready to be modified unless one of the two conditions\n * is true:\n *\n * 1) The object 'o' is shared (refcount > 1), we don't want to affect\n *    other users.\n * 2) The object encoding is not \"RAW\".\n *\n * If the function finds the object in one of the above conditions (or both),\n * an unshared / not-encoded copy of the string object is stored\n * at 'key' in the specified 'db'. Otherwise the object 'o' itself is\n * returned.\n *\n * USAGE:\n *\n * The object 'o' is what the caller already obtained by looking up 'key'\n * in 'db', the usage pattern looks like this:\n *\n * o = lookupKeyWrite(db,key);\n * if (checkType(c,o,OBJ_STRING)) return;\n * o = dbUnshareStringValue(db,key,o);\n *\n * At this point the caller is ready to modify the object, for example\n * using an sdscat() call to append some data, or anything else.\n */\nkvobj *dbUnshareStringValue(redisDb *db, robj *key, kvobj *kv) {\n    return dbUnshareStringValueByLink(db,key,kv,NULL);\n}\n\n/* Like dbUnshareStringValue(), but accepts an optional link,\n * which can be used if we already have one, thus saving the dbFind call. 
*/\nkvobj *dbUnshareStringValueByLink(redisDb *db, robj *key, kvobj *o, dictEntryLink link) {\n    serverAssert(o->type == OBJ_STRING);\n    if (o->refcount != 1 || o->encoding != OBJ_ENCODING_RAW) {\n        robj *decoded = getDecodedObject(o);\n        o = createRawStringObject(decoded->ptr, sdslen(decoded->ptr));\n        decrRefCount(decoded);\n        dbReplaceValueWithLink(db, key, &o, link);\n    }\n    return o;\n}\n\n/* Remove all keys from the database(s) structure. The dbarray argument\n * may not be the server main DBs (could be a temporary DB).\n *\n * The dbnum can be -1 if all the DBs should be emptied, or the specified\n * DB index if we want to empty only a single database.\n * The function returns the number of keys removed from the database(s). */\nlong long emptyDbStructure(redisDb *dbarray, int dbnum, int async,\n                           void(callback)(dict*))\n{\n    long long removed = 0;\n    int startdb, enddb;\n\n    if (dbnum == -1) {\n        startdb = 0;\n        enddb = server.dbnum-1;\n    } else {\n        startdb = enddb = dbnum;\n    }\n\n    for (int j = startdb; j <= enddb; j++) {\n        removed += kvstoreSize(dbarray[j].keys);\n        if (async) {\n            emptyDbAsync(&dbarray[j]);\n        } else {\n            /* Destroy sub-expires before deleting the kv-objects since ebuckets\n             * data structure is embedded in the stored kv-objects. */\n            estoreEmpty(dbarray[j].subexpires);\n            kvstoreEmpty(dbarray[j].keys, callback);\n            kvstoreEmpty(dbarray[j].expires, callback);\n            dictEmpty(dbarray[j].stream_idmp_keys, callback);\n        }\n        /* Because all keys of database are removed, reset average ttl. */\n        dbarray[j].avg_ttl = 0;\n        dbarray[j].expires_cursor = 0;\n    }\n\n    return removed;\n}\n\n/* Remove all data (keys and functions) from all the databases in a\n * Redis server. 
If a callback is given, it is called from\n * time to time to signal that work is in progress.\n *\n * The dbnum can be -1 if all the DBs should be flushed, or the specified\n * DB number if we want to flush only a single Redis database number.\n *\n * Flags can be EMPTYDB_NO_FLAGS if no special flags are specified or\n * EMPTYDB_ASYNC if we want the memory to be freed in a different thread\n * and the function to return ASAP. EMPTYDB_NOFUNCTIONS can also be set\n * to specify that we do not want to delete the functions.\n *\n * On success the function returns the number of keys removed from the\n * database(s). Otherwise -1 is returned in the specific case the\n * DB number is out of range, and errno is set to EINVAL. */\nlong long emptyData(int dbnum, int flags, void(callback)(dict*)) {\n    int async = (flags & EMPTYDB_ASYNC);\n    int with_functions = !(flags & EMPTYDB_NOFUNCTIONS);\n    RedisModuleFlushInfoV1 fi = {REDISMODULE_FLUSHINFO_VERSION,!async,dbnum};\n    long long removed = 0;\n\n    if (dbnum < -1 || dbnum >= server.dbnum) {\n        errno = EINVAL;\n        return -1;\n    }\n\n    if (dbnum == -1 || dbnum == 0)\n        asmCancelTrimJobs();\n\n    /* Fire the flushdb modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,\n                          REDISMODULE_SUBEVENT_FLUSHDB_START,\n                          &fi);\n\n    /* Make sure the WATCHed keys are affected by the FLUSH* commands.\n     * Note that we need to call the function while the keys are still\n     * there. */\n    signalFlushedDb(dbnum, async, NULL);\n\n    /* Empty redis database structure. */\n    removed = emptyDbStructure(server.db, dbnum, async, callback);\n\n    if (dbnum == -1) flushSlaveKeysWithExpireList();\n\n    if (with_functions) {\n        serverAssert(dbnum == -1);\n        functionsLibCtxClearCurrent(async);\n    }\n\n    /* Also fire the end event. 
Note that this event will fire almost\n     * immediately after the start event if the flush is asynchronous. */\n    moduleFireServerEvent(REDISMODULE_EVENT_FLUSHDB,\n                          REDISMODULE_SUBEVENT_FLUSHDB_END,\n                          &fi);\n\n    return removed;\n}\n\n/* Initialize temporary db on replica for use during diskless replication. */\nredisDb *initTempDb(void) {\n    int slot_count_bits = 0;\n    int flags = KVSTORE_ALLOCATE_DICTS_ON_DEMAND;\n    if (server.cluster_enabled) {\n        slot_count_bits = CLUSTER_SLOT_MASK_BITS;\n        flags |= KVSTORE_FREE_EMPTY_DICTS;\n    }\n    redisDb *tempDb = zcalloc(sizeof(redisDb)*server.dbnum);\n    for (int i=0; i<server.dbnum; i++) {\n        tempDb[i].id = i;\n        tempDb[i].keys = kvstoreCreate(&kvstoreExType, &dbDictType, slot_count_bits,\n                                       flags);\n        tempDb[i].expires = kvstoreCreate(&kvstoreBaseType, &dbExpiresDictType,\n                                          slot_count_bits, flags);\n        tempDb[i].subexpires = estoreCreate(&subexpiresBucketsType, slot_count_bits);\n        tempDb[i].stream_idmp_keys = dictCreate(&objectKeyNoValueDictType);\n    }\n\n    return tempDb;\n}\n\n/* Discard tempDb, this can be slow (similar to FLUSHALL), but it's always async. */\nvoid discardTempDb(redisDb *tempDb) {\n    int async = 1;\n\n    /* Release temp DBs. */\n    emptyDbStructure(tempDb, -1, async, NULL);\n    for (int i=0; i<server.dbnum; i++) {\n        /* Destroy sub-expires before deleting the kv-objects since ebuckets\n         * data structure is embedded in the stored kv-objects. */\n        estoreRelease(tempDb[i].subexpires);\n        kvstoreRelease(tempDb[i].keys);\n        kvstoreRelease(tempDb[i].expires);\n        dictRelease(tempDb[i].stream_idmp_keys);\n    }\n\n    zfree(tempDb);\n}\n\n/* Move entries whose robj keys belong to the given slot from src dict to dst.\n * Matching entries are removed from src and added to dst. 
*/\nvoid streamMoveIdmpKeys(dict *src, dict *dst, int slot) {\n    if (dictSize(src) == 0) return;\n\n    dictIterator *di = dictGetSafeIterator(src);\n    dictEntry *de;\n    while ((de = dictNext(di)) != NULL) {\n        robj *key = dictGetKey(de);\n        if (calculateKeySlot(key->ptr) == slot) {\n            if (dictAddRaw(dst, key, NULL)) {\n                incrRefCount(key);\n            }\n            dictDelete(src, key);\n        }\n    }\n    dictReleaseIterator(di);\n}\n\nint selectDb(client *c, int id) {\n    if (id < 0 || id >= server.dbnum)\n        return C_ERR;\n    c->db = &server.db[id];\n    return C_OK;\n}\n\nlong long dbTotalServerKeyCount(void) {\n    long long total = 0;\n    int j;\n    for (j = 0; j < server.dbnum; j++) {\n        total += kvstoreSize(server.db[j].keys);\n    }\n    return total;\n}\n\n/*-----------------------------------------------------------------------------\n * Hooks for key space changes.\n *\n * Every time a key in the database is modified the function\n * keyModified() is called.\n *\n * Every time a DB is flushed the function signalFlushDb() is called.\n *----------------------------------------------------------------------------*/\n\n/* Called when a key is modified to update LRM timestamp\n * and optionally signal watchers/tracking clients.\n *\n * Arguments:\n * - c: client (may be NULL if the key was modified out of a context of a client)\n * - db: database containing the key\n * - key: the key that was modified\n * - val: the value object (if NULL, LRM won't be updated, e.g., for deleted keys)\n * - signal: if true, trigger WATCH and client-side tracking invalidation\n */\nvoid keyModified(client *c, redisDb *db, robj *key, robj *val, int signal) {\n    if (val) updateLRM(val);\n    if (signal) {\n        touchWatchedKey(db,key);\n        trackingInvalidateKey(c,key,1);\n    }\n}\n\nvoid signalFlushedDb(int dbid, int async, slotRangeArray *slots) {\n    int startdb, enddb;\n    if (dbid == -1) {\n        
startdb = 0;\n        enddb = server.dbnum-1;\n    } else {\n        startdb = enddb = dbid;\n    }\n\n    for (int j = startdb; j <= enddb; j++) {\n        scanDatabaseForDeletedKeys(&server.db[j], NULL, slots);\n        touchAllWatchedKeysInDb(&server.db[j], NULL, slots);\n    }\n\n    trackingInvalidateKeysOnFlush(async);\n\n    /* Changes in this method may take place in swapMainDbWithTempDb as well,\n     * where we execute similar calls, but with subtle differences as it's\n     * not simply flushing db. */\n}\n\n/*-----------------------------------------------------------------------------\n * Type agnostic commands operating on the key space\n *----------------------------------------------------------------------------*/\n\n/* Return the set of flags to use for the emptyData() call for FLUSHALL\n * and FLUSHDB commands.\n *\n * sync: flushes the database in a sync manner.\n * async: flushes the database in an async manner.\n * no option: determine sync or async according to the value of lazyfree-lazy-user-flush.\n *\n * On success C_OK is returned and the flags are stored in *flags, otherwise\n * C_ERR is returned and the function sends an error to the client. */\nint getFlushCommandFlags(client *c, int *flags) {\n    /* Parse the optional SYNC/ASYNC option. */\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"sync\")) {\n        *flags = EMPTYDB_NO_FLAGS;\n    } else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"async\")) {\n        *flags = EMPTYDB_ASYNC;\n    } else if (c->argc == 1) {\n        *flags = server.lazyfree_lazy_user_flush ? EMPTYDB_ASYNC : EMPTYDB_NO_FLAGS;\n    } else {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Flushes the whole server data set. 
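The SYNC/ASYNC option handling in getFlushCommandFlags() above boils down to a three-way parse; a standalone sketch with hypothetical MY_-prefixed stand-ins for the real flags (unlike the real function, this sketch reports a syntax error via the return value instead of replying to the client):

```c
#include <string.h>
#include <strings.h>

// Hypothetical stand-ins for the real EMPTYDB_* flags and config.
#define MY_EMPTYDB_NO_FLAGS 0
#define MY_EMPTYDB_ASYNC (1 << 0)

// argc/argv mimic the command arity: argv[0] is the command name.
// Returns 0 on success (result stored in *flags), -1 on syntax error.
static int parse_flush_flags(int argc, char **argv, int lazy_default, int *flags) {
    if (argc == 2 && strcasecmp(argv[1], "sync") == 0) {
        *flags = MY_EMPTYDB_NO_FLAGS;
    } else if (argc == 2 && strcasecmp(argv[1], "async") == 0) {
        *flags = MY_EMPTYDB_ASYNC;
    } else if (argc == 1) {
        // No option: fall back to the lazyfree-lazy-user-flush config.
        *flags = lazy_default ? MY_EMPTYDB_ASYNC : MY_EMPTYDB_NO_FLAGS;
    } else {
        return -1;  // syntax error
    }
    return 0;
}
```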
*/\nvoid flushAllDataAndResetRDB(int flags) {\n    server.dirty += emptyData(-1,flags,NULL);\n    if (server.child_type == CHILD_TYPE_RDB) killRDBChild();\n    if (server.saveparamslen > 0) {\n        rdbSaveInfo rsi, *rsiptr;\n        rsiptr = rdbPopulateSaveInfo(&rsi);\n        rdbSave(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE);\n    }\n\n#if defined(USE_JEMALLOC)\n    /* jemalloc 5 doesn't release pages back to the OS when there's no traffic.\n     * for large databases, flushdb blocks for long anyway, so a bit more won't\n     * harm and this way the flush and purge will be synchronous. */\n    if (!(flags & EMPTYDB_ASYNC)) {\n        /* Only clear the current thread cache.\n         * Ignore the return call since this will fail if the tcache is disabled. */\n        je_mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n\n        jemalloc_purge();\n    }\n#endif\n}\n\n/* Block client for blocking ASYNC FLUSH operation (FLUSH*, SFLUSH). */\nvoid blockClientForAsyncFlush(client *c) {\n    /* measure bg job till completion as elapsed time of flush command */\n    elapsedStart(&c->bstate.lazyfreeStartTime);\n\n    c->bstate.timeout = 0;\n    /* We still need to perform cleanup operations for the command, including\n     * updating the replication offset, so mark this command as pending to\n     * avoid command from being reset during unblock. */\n    c->flags |= CLIENT_PENDING_COMMAND;\n    blockClient(c, BLOCKED_LAZYFREE);\n}\n\n/* CB function on blocking ASYNC FLUSH/TRIM completion.\n * We will unblock the client and send the proper reply if provided. 
*/\nvoid kvsAsyncFreeDoneCB(uint64_t client_id, void *userdata) {\n\n    /* If ASM Trim context provided, apply histogram delta */\n    asmTrimCtx *ctx = userdata;\n    if (ctx) {\n        kvstoreMetadata *meta = kvstoreGetMetadata(server.db[0].keys);\n        /* Apply histogram delta only if target_kvstore hasn't changed */\n        if (ctx->target_kvstore == server.db[0].keys && meta) {\n            for (int type = 0; type < MAX_KEYSIZES_TYPES; type++) {\n                for (int bin = 0; bin < MAX_KEYSIZES_BINS; bin++) {\n                    meta->keysizes_hist[type][bin] -= ctx->delta_keysizes_hist[type][bin];\n                    meta->allocsizes_hist[type][bin] -= ctx->delta_allocsizes_hist[type][bin];\n                }\n            }\n        }\n        /* Decrement counter unconditionally to track job completion. If kvstore was\n         * replaced (e.g., by FLUSHALL), the new histogram is already consistent (reset\n         * to 0 for empty DB), so it's safe to resume assertions when counter reaches 0. */\n        asmBgTrimCounterDecr();\n    }\n\n    unblockClientForAsyncFlush(client_id, (ctx) ? ctx->slots : NULL);\n\n    /* Release context and slots */\n    asmTrimCtxRelease(ctx);\n}\n\n/* Unblock client on async flush/trim completion */\nvoid unblockClientForAsyncFlush(uint64_t client_id, struct slotRangeArray *slots) {\n    client *c = lookupClientByID(client_id);\n\n    /* Verify that the client still exists and is still blocked. */\n    if (!(c && c->flags & CLIENT_BLOCKED)) {\n        return;\n    }\n\n    /* Update current_client (Called functions might rely on it) */\n    client *old_client = server.current_client;\n    server.current_client = c;\n\n    /* Don't update blocked_us since command was processed in bg by lazy_free thread */\n    updateStatsOnUnblock(c, 0 /*blocked_us*/, elapsedUs(c->bstate.lazyfreeStartTime), 0);\n\n    /* Only the SFLUSH command passes a user data pointer. 
*/\n    if (slots)\n        replySlotsFlush(c, slots);\n    else\n        addReply(c, shared.ok);\n\n    /* mark client as unblocked */\n    unblockClient(c, 1);\n\n    if (c->flags & CLIENT_PENDING_COMMAND) {\n        c->flags &= ~CLIENT_PENDING_COMMAND;\n        /* The FLUSH command won't be reprocessed, FLUSH command is finished, but\n         * we still need to complete its full processing flow, including updating\n         * the replication offset. */\n        commandProcessed(c);\n    }\n\n    /* On flush completion, update the client's memory */\n    updateClientMemUsageAndBucket(c);\n\n    /* restore current_client */\n    server.current_client = old_client;\n}\n\n/* Common flush command implementation for FLUSHALL, FLUSHDB and SFLUSH.\n *\n * Returns 1 if flush SYNC is actually running in bg as blocking ASYNC.\n * Returns 0 otherwise.\n *\n * trim_ctx - provided only by the SFLUSH command, otherwise NULL. Contains slots\n *            to be used on completion to reply with the slots flush result.\n */\nint flushCommandCommon(client *c, int type, int flags, asmTrimCtx *trim_ctx) {\n    int blocking_async = 0; /* Flush SYNC option to run as blocking ASYNC */\n\n    /* In case of SYNC, check if we can optimize and run it in bg as blocking ASYNC */\n    if ((!(flags & EMPTYDB_ASYNC)) && (!(c->flags & CLIENT_AVOID_BLOCKING_ASYNC_FLUSH))) {\n        /* Run as ASYNC */\n        flags |= EMPTYDB_ASYNC;\n        blocking_async = 1;\n    }\n\n    /* Cancel all ASM tasks that overlap with the given slot ranges. */\n    clusterAsmCancelBySlotRangeArray(trim_ctx ? trim_ctx->slots : NULL, c->argv[0]->ptr);\n\n    if (type == FLUSH_TYPE_ALL)\n        flushAllDataAndResetRDB(flags | EMPTYDB_NOFUNCTIONS);\n    else\n        server.dirty += emptyData(c->db->id,flags | EMPTYDB_NOFUNCTIONS,NULL);\n\n    /* Without the forceCommandPropagation call, when the DB(s) were already empty,\n     * FLUSHALL/FLUSHDB will not be replicated nor put into the AOF. 
*/\n    forceCommandPropagation(c, PROPAGATE_REPL | PROPAGATE_AOF);\n\n    /* If blocking ASYNC, block the client and add a completion job request to the\n     * BIO lazyfree worker's queue. It will be called, and reply with OK, only after\n     * all preceding pending lazyfree jobs in the queue were processed */\n    if (blocking_async) {\n        blockClientForAsyncFlush(c);\n        /* Retain trim_ctx if provided so kvsAsyncFreeDoneCB can release it later */\n        if (trim_ctx) {\n            asmBgTrimCounterIncr();\n            asmTrimCtxRetain(trim_ctx);\n        }\n        bioCreateCompRq(BIO_WORKER_LAZY_FREE, kvsAsyncFreeDoneCB, c->id, trim_ctx);\n    }\n\n#if defined(USE_JEMALLOC)\n    /* jemalloc 5 doesn't release pages back to the OS when there's no traffic.\n     * for large databases, flushdb blocks for long anyway, so a bit more won't\n     * harm and this way the flush and purge will be synchronous.\n     *\n     * Take care to purge only in the FLUSHDB sync flow; the FLUSHALL sync flow was\n     * already handled in flushAllDataAndResetRDB, and the async flow will purge\n     * only later on. */\n    if ((type != FLUSH_TYPE_ALL) && (!(flags & EMPTYDB_ASYNC))) {\n        /* Only clear the current thread cache.\n         * Ignore the return call since this will fail if the tcache is disabled. */\n        je_mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n\n        jemalloc_purge();\n    }\n#endif\n    return blocking_async;\n}\n\n/* FLUSHALL [SYNC|ASYNC]\n *\n * Flushes the whole server data set. */\nvoid flushallCommand(client *c) {\n    int flags;\n    if (getFlushCommandFlags(c,&flags) == C_ERR) return;\n\n    /* If FLUSH SYNC isn't running as blocking async, then reply */\n    if (flushCommandCommon(c, FLUSH_TYPE_ALL, flags, NULL) == 0)\n        addReply(c, shared.ok);\n}\n\n/* FLUSHDB [SYNC|ASYNC]\n *\n * Flushes the currently SELECTed Redis DB. 
*/\nvoid flushdbCommand(client *c) {\n    int flags;\n    if (getFlushCommandFlags(c,&flags) == C_ERR) return;\n\n    /* If FLUSH SYNC isn't running as blocking async, then reply */\n    if (flushCommandCommon(c, FLUSH_TYPE_DB, flags, NULL) == 0)\n        addReply(c, shared.ok);\n\n}\n\n/* This command implements DEL and UNLINK. */\nvoid delGenericCommand(client *c, int lazy) {\n    int numdel = 0, j;\n\n    for (j = 1; j < c->argc; j++) {\n        if (expireIfNeeded(c->db, c->argv[j], NULL, 0) == KEY_DELETED)\n            continue;\n        int deleted  = lazy ? dbAsyncDelete(c->db,c->argv[j]) :\n                              dbSyncDelete(c->db,c->argv[j]);\n        if (deleted) {\n            keyModified(c,c->db,c->argv[j],NULL,1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\n                \"del\",c->argv[j],c->db->id);\n            server.dirty++;\n            numdel++;\n        }\n    }\n    addReplyLongLong(c,numdel);\n}\n\nvoid delCommand(client *c) {\n    delGenericCommand(c,server.lazyfree_lazy_user_del);\n}\n\n/* DELEX key [IFEQ match-value|IFNE match-value|IFDEQ match-digest|IFDNE match-digest]\n *\n * Conditionally removes the specified key. 
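The IFEQ/IFNE value conditions described below reduce to a compare-then-decide step; a minimal standalone sketch with hypothetical names (the real command compares sds strings and honors the lazy-free config):

```c
#include <string.h>

// Hypothetical conditions mirroring IFEQ and IFNE.
typedef enum { COND_IFEQ, COND_IFNE } delex_cond;

// Decide whether a key holding 'value' should be deleted, given the
// condition and the user-supplied match value.
static int should_delete_value(const char *value, delex_cond cond, const char *match) {
    int equal = strcmp(value, match) == 0;
    return (cond == COND_IFEQ) ? equal : !equal;
}
```

The IFDEQ/IFDNE variants follow the same shape, comparing a hex digest of the value instead of the value itself.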
A key is ignored if it does not\n * exist.\n * If no condition is specified the behavior is the same as DEL command.\n * If condition is specified the key must be of STRING type.\n *\n * IFEQ/IFNE conditions check the match-value against the value of the key\n * IFDEQ/IFDNE conditions check the match-digest against the digest of the key's value.*/\nvoid delexCommand(client *c) {\n    kvobj *o;\n    int deleted = 0, should_delete = 0;\n\n    /* If there are no conditions specified we just delete the key */\n    if (c->argc == 2) {\n        delGenericCommand(c, server.lazyfree_lazy_server_del);\n        return;\n    }\n\n    /* If we have more than two arguments the next two are condition and\n     * match-value */\n    if (c->argc != 4) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    robj *key = c->argv[1];\n    o = lookupKeyRead(c->db, key);\n    if (o == NULL) {\n        addReplyLongLong(c, 0);\n        return;\n    }\n\n    /* If any conditions are specified the only supported key type for now is\n     * string */\n    if (o->type != OBJ_STRING) {\n        addReplyError(c, \"Key should be of string type if conditions are specified\");\n        return;\n    }\n\n    char *condition = c->argv[2]->ptr;\n    if (!strcasecmp(\"ifeq\", condition)) {\n        robj *valueobj = getDecodedObject(o);\n        sds match_value = c->argv[3]->ptr;\n        if (sdscmp(valueobj->ptr, match_value) == 0)\n            should_delete = 1;\n\n        decrRefCount(valueobj);\n    } else if (!strcasecmp(\"ifne\", condition)) {\n        robj *valueobj = getDecodedObject(o);\n        sds match_value = c->argv[3]->ptr;\n        if (sdscmp(valueobj->ptr, match_value) != 0)\n           should_delete = 1;\n\n        decrRefCount(valueobj);\n    } else if (!strcasecmp(\"ifdeq\", condition)) {\n        if (validateHexDigest(c, c->argv[3]->ptr) != C_OK)\n            return;\n\n        sds current_digest = stringDigest(o);\n        if (strcasecmp(current_digest, c->argv[3]->ptr) 
== 0)\n            should_delete = 1;\n\n        sdsfree(current_digest);\n    } else if (!strcasecmp(\"ifdne\", condition)) {\n        if (validateHexDigest(c, c->argv[3]->ptr) != C_OK)\n            return;\n\n        sds current_digest = stringDigest(o);\n        if (strcasecmp(current_digest, c->argv[3]->ptr) != 0)\n            should_delete = 1;\n\n        sdsfree(current_digest);\n    } else {\n        addReplyError(c, \"Invalid condition. Use IFEQ, IFNE, IFDEQ, or IFDNE\");\n        return;\n    }\n\n    if (should_delete) {\n        deleted = server.lazyfree_lazy_server_del ?\n                  dbAsyncDelete(c->db, key) :\n                  dbSyncDelete(c->db, key);\n    }\n\n    if (deleted) {\n        rewriteClientCommandVector(c, 2, shared.del, key);\n        keyModified(c, c->db, key, NULL, 1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, c->db->id);\n        KSN_INVALIDATE_KVOBJ(o);\n        server.dirty++;\n    }\n\n    addReplyLongLong(c, deleted);\n}\n\nvoid unlinkCommand(client *c) {\n    delGenericCommand(c,1);\n}\n\n/* EXISTS key1 key2 ... key_N.\n * Return value is the number of keys existing. 
*/\nvoid existsCommand(client *c) {\n    long long count = 0;\n    int j;\n\n    for (j = 1; j < c->argc; j++) {\n        if (lookupKeyReadWithFlags(c->db,c->argv[j],LOOKUP_NOTOUCH)) count++;\n    }\n    addReplyLongLong(c,count);\n}\n\nvoid selectCommand(client *c) {\n    int id;\n\n    if (getIntFromObjectOrReply(c, c->argv[1], &id, NULL) != C_OK)\n        return;\n\n    if (server.cluster_enabled && id != 0) {\n        addReplyError(c,\"SELECT is not allowed in cluster mode\");\n        return;\n    }\n\n    if (id != 0) {\n        server.stat_cluster_incompatible_ops++;\n    }\n\n    if (selectDb(c,id) == C_ERR) {\n        addReplyError(c,\"DB index is out of range\");\n    } else {\n        addReply(c,shared.ok);\n    }\n}\n\nvoid randomkeyCommand(client *c) {\n    robj *key;\n\n    if ((key = dbRandomKey(c->db)) == NULL) {\n        addReplyNull(c);\n        return;\n    }\n\n    addReplyBulk(c,key);\n    decrRefCount(key);\n}\n\nvoid keysCommand(client *c) {\n    dictEntry *de;\n    sds pattern = c->argv[1]->ptr;\n    int plen = sdslen(pattern), allkeys, pslot = -1;\n    unsigned long numkeys = 0;\n    void *replylen = addReplyDeferredLen(c);\n    allkeys = (pattern[0] == '*' && plen == 1);\n    if (server.cluster_enabled && !allkeys) {\n        pslot = patternHashSlot(pattern, plen);\n    }\n    int has_slot = pslot != -1;\n    union {\n        kvstoreDictIterator kvs_di;\n        kvstoreIterator kvs_it;\n    } it;\n    if (has_slot) {\n        if (!kvstoreDictSize(c->db->keys, pslot) || accessKeysShouldSkipDictIndex(pslot)) {\n            /* Requested slot is empty */\n            setDeferredArrayLen(c,replylen,0);\n            return;\n        }\n        kvstoreInitDictSafeIterator(&it.kvs_di, c->db->keys, pslot);\n    } else {\n        kvstoreIteratorInit(&it.kvs_it, c->db->keys);\n    }\n\n    while ((de = has_slot ? 
kvstoreDictIteratorNext(&it.kvs_di) : kvstoreIteratorNext(&it.kvs_it)) != NULL) {\n        if (!has_slot && accessKeysShouldSkipDictIndex(kvstoreIteratorGetCurrentDictIndex(&it.kvs_it))) {\n            continue;\n        }\n\n        kvobj *kv = dictGetKV(de);\n        sds key = kvobjGetKey(kv);\n\n        if (allkeys || stringmatchlen(pattern,plen,key,sdslen(key),0)) {\n            if (!keyIsExpired(c->db, NULL, kv)) {\n                addReplyBulkCBuffer(c, key, sdslen(key));\n                numkeys++;\n            }\n        }\n        if (c->flags & CLIENT_CLOSE_ASAP)\n            break;\n    }\n    if (has_slot)\n        kvstoreResetDictIterator(&it.kvs_di);\n    else\n        kvstoreIteratorReset(&it.kvs_it);\n    setDeferredArrayLen(c,replylen,numkeys);\n}\n\n/* Data used by the dict scan callback. */\ntypedef struct {\n    vec *keys;    /* elements collected from dict */\n    robj *o;      /* o must be a hash/set/zset object, NULL means current db */\n    long long type; /* the particular type when scan the db */\n    sds pattern;  /* pattern string, NULL means no pattern */\n    long sampled; /* cumulative number of keys sampled */\n    int no_values; /* set to 1 means to return keys only */\n    sds typename; /* typename string, NULL means no type filter */\n    redisDb *db;  /* database reference for expiration checks */\n} scanData;\n\n/* Helper function to compare key type in scan commands */\nint objectTypeCompare(robj *o, long long target) {\n    if (o->type != OBJ_MODULE) {\n        if (o->type != target) \n            return 0;\n        else \n            return 1;\n    }\n    /* module type compare */\n    moduleType *type = ((moduleValue *)o->ptr)->type;\n    long long mt = (long long)REDISMODULE_TYPE_SIGN(type->entity.id);\n    if (target != -mt)\n        return 0;\n    else \n        return 1;\n}\n/* This callback is used by scanGenericCommand in order to collect elements\n * returned by the dictionary iterator into a list. 
*/\nvoid scanCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    Entry *hashEntry = NULL;\n    scanData *data = (scanData *)privdata;\n    vec *keys = data->keys;\n    robj *o = data->o;\n    sds val = NULL;\n    void *key = NULL;  /* if OBJ_HASH then key is of type `hfield`. Otherwise, `sds` */\n    void *keyStr;\n    data->sampled++;\n\n    /* o and typename can not have values at the same time. */\n    serverAssert(!((data->type != LLONG_MAX) && o));\n\n    kvobj *kv = NULL;\n    zskiplistNode *znode = NULL;\n    if (!o) { /* If scanning keyspace */\n        kv = dictGetKV(de);\n        keyStr = kvobjGetKey(kv);\n    } else if (o->type == OBJ_HASH) {\n        hashEntry = dictGetKey(de);\n        keyStr = entryGetField(hashEntry);\n    } else if (o->type == OBJ_ZSET) {\n        znode = dictGetKey(de);\n        keyStr = zslGetNodeElement(znode);\n    } else {\n        keyStr = dictGetKey(de);\n    }\n    \n    /* Filter element if it does not match the pattern. */\n    if (data->pattern) {\n        if (!stringmatchlen(data->pattern, sdslen(data->pattern), keyStr, sdslen(keyStr), 0)) {\n            return;\n        }\n    }\n    \n    if (!o) {\n        /* Expiration check first - only for database keyspace scanning.\n         * Use kv obj to avoid robj creation. 
*/\n        if (expireIfNeeded(data->db, NULL, kv, 0) != KEY_VALID)\n            return;\n\n        /* Type filtering - only for database keyspace scanning */\n        if (data->typename) {\n            /* For unknown types (LLONG_MAX), skip all keys */\n            if (data->type == LLONG_MAX)\n                return;\n            /* For known types, skip keys that don't match */\n            if (!objectTypeCompare(kv, data->type))\n                return;\n        }\n    }\n\n    if (o == NULL) {\n        key = keyStr;\n    } else if (o->type == OBJ_SET) {\n        key = keyStr;\n    } else if (o->type == OBJ_HASH) {\n        key = keyStr;\n        val = entryGetValue(hashEntry);\n\n        /* If field is expired, then ignore */\n        if (entryIsExpired(hashEntry))\n            return;\n\n    } else if (o->type == OBJ_ZSET) {\n        char buf[MAX_LONG_DOUBLE_CHARS];\n        int len = ld2string(buf, sizeof(buf), znode->score, LD_STR_AUTO);\n        key = sdsdup(keyStr);\n        val = sdsnewlen(buf, len);\n    } else {\n        serverPanic(\"Type not handled in SCAN callback.\");\n    }\n\n    vecPush(keys, key);\n    if (val && !data->no_values) vecPush(keys, val);\n}\n\n/* Try to parse a SCAN cursor stored at object 'o':\n * if the cursor is valid, store it as unsigned integer into *cursor and\n * returns C_OK. Otherwise return C_ERR and send an error to the\n * client. 
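string2ull() does the strict parsing here; a hypothetical strtoull-based equivalent (rejecting signs, trailing junk, and overflow) would look like:

```c
#include <errno.h>
#include <stdlib.h>

// Hypothetical stand-in for string2ull(): parse the ENTIRE string as an
// unsigned 64-bit cursor. Returns 1 on success, 0 on any malformed input.
static int parse_cursor(const char *s, unsigned long long *out) {
    if (s[0] == '\0' || s[0] == '-' || s[0] == '+') return 0;
    errno = 0;
    char *end;
    unsigned long long v = strtoull(s, &end, 10);
    if (errno == ERANGE || *end != '\0') return 0;  // overflow or trailing junk
    *out = v;
    return 1;
}
```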
*/\nint parseScanCursorOrReply(client *c, robj *o, unsigned long long *cursor) {\n    if (!string2ull(o->ptr, cursor)) {\n        addReplyError(c, \"invalid cursor\");\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nchar *obj_type_name[OBJ_TYPE_MAX] = {\n    \"string\", \n    \"list\", \n    \"set\", \n    \"zset\", \n    \"hash\", \n    NULL, /* module type is special */\n    \"stream\",\n    \"gcra\"\n};\n\n/* Helper function to get type from a string in scan commands */\nlong long getObjectTypeByName(char *name) {\n\n    for (long long i = 0; i < OBJ_TYPE_MAX; i++) {\n        if (obj_type_name[i] && !strcasecmp(name, obj_type_name[i])) {\n            return i;\n        }\n    }\n\n    moduleType *mt = moduleTypeLookupModuleByNameIgnoreCase(name);\n    if (mt != NULL) return -(REDISMODULE_TYPE_SIGN(mt->entity.id));\n\n    return LLONG_MAX;\n}\n\nchar *getObjectTypeName(robj *o) {\n    if (o == NULL) {\n        return \"none\";\n    }\n\n    serverAssert(o->type >= 0 && o->type < OBJ_TYPE_MAX);\n\n    if (o->type == OBJ_MODULE) {\n        moduleValue *mv = o->ptr;\n        return mv->type->entity.name;\n    } else {\n        return obj_type_name[o->type];\n    }\n}\n\nstatic int scanShouldSkipDict(dict *d, int didx) {\n    UNUSED(d);\n    return accessKeysShouldSkipDictIndex(didx);\n}\n\n/* This command implements SCAN, HSCAN and SSCAN commands.\n * If object 'o' is passed, then it must be a Hash, Set or Zset object, otherwise\n * if 'o' is NULL the command will operate on the dictionary associated with\n * the current database.\n *\n * When 'o' is not NULL the function assumes that the first argument in\n * the client arguments vector is a key so it skips it before iterating\n * in order to parse options.\n *\n * In the case of a Hash object the function returns both the field and value\n * of every element on the Hash. 
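From the caller's side, the contract of a cursor-based scan is: feed back the returned cursor until it comes back as 0. A toy model of that loop, with a fake_scan that simply walks an array in fixed-size steps (purely illustrative, not how the real cursor is encoded):

```c
#include <stddef.h>

// Copy up to batch_cap items starting at 'cursor' into 'batch'.
// Returns the next cursor, or 0 when the iteration is complete.
static unsigned long long fake_scan(const int *items, size_t n,
                                    unsigned long long cursor,
                                    int *batch, size_t batch_cap,
                                    size_t *batch_len) {
    size_t i = (size_t)cursor, out = 0;
    while (i < n && out < batch_cap) batch[out++] = items[i++];
    *batch_len = out;
    return i < n ? (unsigned long long)i : 0;  // 0 terminates the iteration
}
```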
*/\nvoid scanGenericCommand(client *c, robj *o, unsigned long long cursor) {\n    int i, j;\n    long count = 10;\n    sds pat = NULL;\n    sds typename = NULL;\n    long long type = LLONG_MAX;\n    int patlen = 0, use_pattern = 0, no_values = 0;\n    dict *ht;\n\n    /* Object must be NULL (to iterate keys names), or the type of the object\n     * must be Set, Sorted Set, or Hash. */\n    serverAssert(o == NULL || o->type == OBJ_SET || o->type == OBJ_HASH ||\n                o->type == OBJ_ZSET);\n\n    /* Set i to the first option argument. The previous one is the cursor. */\n    i = (o == NULL) ? 2 : 3; /* Skip the key argument if needed. */\n\n    /* Step 1: Parse options. */\n    while (i < c->argc) {\n        j = c->argc - i;\n        if (!strcasecmp(c->argv[i]->ptr, \"count\") && j >= 2) {\n            if (getLongFromObjectOrReply(c, c->argv[i+1], &count, NULL)\n                != C_OK)\n            {\n                return;\n            }\n\n            if (count < 1) {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n\n            i += 2;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"match\") && j >= 2) {\n            pat = c->argv[i+1]->ptr;\n            patlen = sdslen(pat);\n\n            /* The pattern always matches if it is exactly \"*\", so it is\n             * equivalent to disabling it. 
*/\n            use_pattern = !(patlen == 1 && pat[0] == '*');\n\n            i += 2;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"type\") && o == NULL && j >= 2) {\n            /* SCAN for a particular type only applies to the db dict */\n            typename = c->argv[i+1]->ptr;\n            type = getObjectTypeByName(typename);\n            if (type == LLONG_MAX) {\n                /* TODO: uncomment in redis 8.0\n                addReplyErrorFormat(c, \"unknown type name '%s'\", typename);\n                return; */\n            }\n            i+= 2;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"novalues\")) {\n            if (!o || o->type != OBJ_HASH) {\n                addReplyError(c, \"NOVALUES option can only be used in HSCAN\");\n                return;\n            }\n            no_values = 1;\n            i++;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* Step 2: Iterate the collection.\n     *\n     * Note that if the object is encoded with a listpack, intset, or any other\n     * representation that is not a hash table, we are sure that it is also\n     * composed of a small number of elements. So to avoid taking state we\n     * just return everything inside the object in a single call, setting the\n     * cursor to zero to signal the end of the iteration. */\n\n    /* Handle the case of a hash table. 
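The iteration budget computed a few lines below guards count * 10 against signed overflow by saturating; as a standalone sketch of that idiom:

```c
#include <limits.h>

// Overflow-safe "ten times count", mirroring the maxiterations computation:
// if count * 10 would overflow a long, saturate at LONG_MAX instead.
static long ten_times_clamped(long count) {
    return (count > LONG_MAX / 10) ? LONG_MAX : count * 10;
}
```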
*/\n    ht = NULL;\n    if (o == NULL) {\n        ht = NULL;\n    } else if (o->type == OBJ_SET && o->encoding == OBJ_ENCODING_HT) {\n        ht = o->ptr;\n    } else if (o->type == OBJ_HASH && o->encoding == OBJ_ENCODING_HT) {\n        ht = o->ptr;\n    } else if (o->type == OBJ_ZSET && o->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = o->ptr;\n        ht = zs->dict;\n    }\n\n    vec keys;\n    void *keys_stack[256];\n    vecInit(&keys, keys_stack, 256);\n    /* Hash on dict only has pointers to dict entries; other paths allocate\n     * temporary sds that must be released. */\n    if (o && (!ht || o->type == OBJ_ZSET))\n        vecSetFreeMethod(&keys, sdsfreegeneric);\n\n    /* For main dictionary scan or data structure using hashtable. */\n    if (!o || ht) {\n        /* We set the max number of iterations to ten times the specified\n         * COUNT, so if the hash table is in a pathological state (very\n         * sparsely populated) we avoid blocking for too long at the cost\n         * of returning no or very few elements. */\n        long maxiterations = (count > LONG_MAX / 10) ? LONG_MAX : count * 10;\n\n        /* We pass scanData, which carries the following fields, to the callback:\n         * 1. data.keys: the list to which it will add new elements;\n         * 2. data.o: the object containing the dictionary so that\n         * it is possible to fetch more data in a type-dependent way;\n         * 3. data.type: the type specified for a db scan, LLONG_MAX means\n         * no type matching is needed;\n         * 4. data.pattern: the pattern string;\n         * 5. data.sampled: the cumulative number of sampled keys; together\n         * with the maxiterations limit it prevents a long hang when the dict\n         * has many empty buckets, or when most sampled keys are filtered out;\n         * 6. 
data.no_values: to control whether values will be returned or\n         * only keys are returned. */\n        scanData data = {\n            .keys = &keys,\n            .o = o,\n            .type = type,\n            .pattern = use_pattern ? pat : NULL,\n            .sampled = 0,\n            .no_values = no_values,\n            .typename = typename,\n            .db = c->db,\n        };\n\n        /* A pattern may restrict all matching keys to one cluster slot. */\n        int onlydidx = -1;\n        if (o == NULL && use_pattern && server.cluster_enabled) {\n            onlydidx = patternHashSlot(pat, patlen);\n        }\n        do {\n            /* In cluster mode there is a separate dictionary for each slot.\n             * If cursor is empty, we should try exploring next non-empty slot. */\n            if (o == NULL) {\n                cursor = kvstoreScan(c->db->keys, cursor, onlydidx, scanCallback, scanShouldSkipDict, &data);\n            } else {\n                cursor = dictScan(ht, cursor, scanCallback, &data);\n            }\n        } while (cursor && maxiterations-- && data.sampled < count);\n    } else if (o->type == OBJ_SET) {\n        unsigned long array_reply_len = 0;\n        void *replylen = NULL;\n        vecRelease(&keys);\n        char *str;\n        char buf[LONG_STR_SIZE];\n        size_t len;\n        int64_t llele;\n        /* Reply to the client. 
*/\n        addReplyArrayLen(c, 2);\n        /* Cursor is always 0 given we iterate over all set */\n        addReplyBulkLongLong(c,0);\n        /* If there is no pattern the length is the entire set size, otherwise we defer the reply size */\n        if (use_pattern)\n            replylen = addReplyDeferredLen(c);\n        else {\n            array_reply_len = setTypeSize(o);\n            addReplyArrayLen(c, array_reply_len);\n        }\n\n        setTypeIterator si;\n        unsigned long cur_length = 0;\n        setTypeInitIterator(&si, o);\n        while (setTypeNext(&si, &str, &len, &llele) != -1) {\n            if (str == NULL) {\n                len = ll2string(buf, sizeof(buf), llele);\n            }\n            char *key = str ? str : buf;\n            if (use_pattern && !stringmatchlen(pat, patlen, key, len, 0)) {\n                continue;\n            }\n            addReplyBulkCBuffer(c, key, len);\n            cur_length++;\n        }\n        setTypeResetIterator(&si);\n        if (use_pattern)\n            setDeferredArrayLen(c,replylen,cur_length);\n        else\n            serverAssert(cur_length == array_reply_len); /* fail on corrupt data */\n        return;\n    } else if ((o->type == OBJ_HASH || o->type == OBJ_ZSET) &&\n               o->encoding == OBJ_ENCODING_LISTPACK)\n    {\n        unsigned char *p = lpFirst(o->ptr);\n        unsigned char *str;\n        int64_t len;\n        unsigned long array_reply_len = 0;\n        unsigned char intbuf[LP_INTBUF_SIZE];\n        void *replylen = NULL;\n        vecRelease(&keys);\n\n        /* Reply to the client. */\n        addReplyArrayLen(c, 2);\n        /* Cursor is always 0 given we iterate over all set */\n        addReplyBulkLongLong(c,0);\n        /* If there is no pattern the length is the entire set size, otherwise we defer the reply size */\n        if (use_pattern)\n            replylen = addReplyDeferredLen(c);\n        else {\n            array_reply_len = o->type == OBJ_HASH ? 
hashTypeLength(o, 0) : zsetLength(o);\n            if (!no_values) {\n                array_reply_len *= 2;\n            }\n            addReplyArrayLen(c, array_reply_len);\n        }\n        unsigned long cur_length = 0;\n        while(p) {\n            str = lpGet(p, &len, intbuf);\n            /* point to the value */\n            p = lpNext(o->ptr, p);\n            if (use_pattern && !stringmatchlen(pat, patlen, (char *)str, len, 0)) {\n                /* jump to the next key/val pair */\n                p = lpNext(o->ptr, p);\n                continue;\n            }\n            /* add key object */\n            addReplyBulkCBuffer(c, str, len);\n            cur_length++;\n            /* add value object */\n            if (!no_values) {\n                str = lpGet(p, &len, intbuf);\n                addReplyBulkCBuffer(c, str, len);\n                cur_length++;\n            }\n            p = lpNext(o->ptr, p);\n        }\n        if (use_pattern)\n            setDeferredArrayLen(c,replylen,cur_length);\n        else\n            serverAssert(cur_length == array_reply_len); /* fail on corrupt data */\n        return;\n    } else if (o->type == OBJ_HASH && o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        int64_t len;\n        long long expire_at;\n        unsigned char *lp = hashTypeListpackGetLp(o);\n        unsigned char *p = lpFirst(lp);\n        unsigned char *str, *val;\n        unsigned char intbuf[LP_INTBUF_SIZE];\n        void *replylen = NULL;\n\n        vecRelease(&keys);\n        /* Reply to the client. 
*/\n        addReplyArrayLen(c, 2);\n        /* Cursor is always 0 given we iterate over all set */\n        addReplyBulkLongLong(c,0);\n        /* In the case of OBJ_ENCODING_LISTPACK_EX we always defer the reply size given some fields might be expired */\n        replylen = addReplyDeferredLen(c);\n        unsigned long cur_length = 0;\n\n        while (p) {\n            str = lpGet(p, &len, intbuf);\n            p = lpNext(lp, p);\n            val = p; /* Keep pointer to value */\n\n            p = lpNext(lp, p);\n            serverAssert(p && lpGetIntegerValue(p, &expire_at));\n\n            if (hashTypeIsExpired(o, expire_at) ||\n               (use_pattern && !stringmatchlen(pat, patlen, (char *)str, len, 0)))\n            {\n                /* jump to the next key/val pair */\n                p = lpNext(lp, p);\n                continue;\n            }\n\n            /* add key object */\n            addReplyBulkCBuffer(c, str, len);\n            cur_length++;\n            /* add value object */\n            if (!no_values) {\n                str = lpGet(val, &len, intbuf);\n                addReplyBulkCBuffer(c, str, len);\n                cur_length++;\n            }\n            p = lpNext(lp, p);\n        }\n        setDeferredArrayLen(c,replylen,cur_length);\n        return;\n    } else {\n        serverPanic(\"Not handled encoding in SCAN.\");\n    }\n\n    /* Step 3: Reply to the client. */\n    addReplyArrayLen(c, 2);\n    addReplyBulkLongLong(c,cursor);\n\n    addReplyArrayLen(c, vecSize(&keys));\n    for (size_t i = 0; i < vecSize(&keys); i++) {\n        sds key = vecGet(&keys, i);\n        addReplyBulkCBuffer(c, key, sdslen(key));\n    }\n\n    vecRelease(&keys);\n}\n\n/* The SCAN command completely relies on scanGenericCommand. 
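MATCH filtering throughout this file relies on stringmatchlen(); a much-reduced, hypothetical glob matcher covering only the two most common metacharacters looks like:

```c
// Minimal recursive glob match supporting only '*' (any run of chars) and
// '?' (exactly one char). The real stringmatchlen() also handles character
// classes and escaping; this is just the core idea.
static int glob_match(const char *pat, const char *str) {
    if (*pat == '\0') return *str == '\0';
    if (*pat == '*')
        return glob_match(pat + 1, str) ||            // '*' matches nothing
               (*str != '\0' && glob_match(pat, str + 1)); // or one more char
    if (*str == '\0') return 0;
    if (*pat == '?' || *pat == *str) return glob_match(pat + 1, str + 1);
    return 0;
}
```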
*/\nvoid scanCommand(client *c) {\n    unsigned long long cursor;\n    if (parseScanCursorOrReply(c,c->argv[1],&cursor) == C_ERR) return;\n    scanGenericCommand(c,NULL,cursor);\n}\n\nvoid dbsizeCommand(client *c) {\n    addReplyLongLong(c,dbSize(c->db));\n}\n\nvoid lastsaveCommand(client *c) {\n    addReplyLongLong(c,server.lastsave);\n}\n\nvoid typeCommand(client *c) {\n    kvobj *kv = lookupKeyReadWithFlags(c->db,c->argv[1],LOOKUP_NOTOUCH);\n    addReplyStatus(c, getObjectTypeName(kv));\n}\n\nvoid shutdownCommand(client *c) {\n    int flags = SHUTDOWN_NOFLAGS;\n    int abort = 0;\n    for (int i = 1; i < c->argc; i++) {\n        if (!strcasecmp(c->argv[i]->ptr,\"nosave\")) {\n            flags |= SHUTDOWN_NOSAVE;\n        } else if (!strcasecmp(c->argv[i]->ptr,\"save\")) {\n            flags |= SHUTDOWN_SAVE;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"now\")) {\n            flags |= SHUTDOWN_NOW;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"force\")) {\n            flags |= SHUTDOWN_FORCE;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"abort\")) {\n            abort = 1;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n    if ((abort && flags != SHUTDOWN_NOFLAGS) ||\n        (flags & SHUTDOWN_NOSAVE && flags & SHUTDOWN_SAVE))\n    {\n        /* Illegal combo. */\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    if (abort) {\n        if (abortShutdown() == C_OK)\n            addReply(c, shared.ok);\n        else\n            addReplyError(c, \"No shutdown in progress.\");\n        return;\n    }\n\n    if (!(flags & SHUTDOWN_NOW) && c->flags & CLIENT_DENY_BLOCKING) {\n        addReplyError(c, \"SHUTDOWN without NOW or ABORT isn't allowed for DENY BLOCKING client\");\n        return;\n    }\n\n    if (!(flags & SHUTDOWN_NOSAVE) && isInsideYieldingLongCommand()) {\n        /* Script timed out. Shutdown allowed only with the NOSAVE flag. 
See\n         * also processCommand where these errors are returned. */\n        if (server.busy_module_yield_flags && server.busy_module_yield_reply) {\n            addReplyErrorFormat(c, \"-BUSY %s\", server.busy_module_yield_reply);\n        } else if (server.busy_module_yield_flags) {\n            addReplyErrorObject(c, shared.slowmoduleerr);\n        } else if (scriptIsEval()) {\n            addReplyErrorObject(c, shared.slowevalerr);\n        } else {\n            addReplyErrorObject(c, shared.slowscripterr);\n        }\n        return;\n    }\n\n    blockClientShutdown(c);\n    if (prepareForShutdown(flags) == C_OK) exit(0);\n    /* If we're here, then shutdown is ongoing (the client is still blocked) or\n     * failed (the client has received an error). */\n}\n\nvoid renameGenericCommand(client *c, int nx) {\n    kvobj *o;\n    int samekey = 0;\n    uint64_t minHashExpireTime = EB_EXPIRE_TIME_INVALID;\n\n    /* When source and dest key are the same, no operation is performed if\n     * the key exists; however, we still return an error for a nonexistent key. */\n    if (sdscmp(c->argv[1]->ptr,c->argv[2]->ptr) == 0) samekey = 1;\n\n    if ((o = lookupKeyWriteOrReply(c,c->argv[1],shared.nokeyerr)) == NULL)\n        return;\n\n    if (samekey) {\n        addReply(c,nx ? shared.czero : shared.ok);\n        return;\n    }\n\n    incrRefCount(o);\n    kvobj *destval = lookupKeyWrite(c->db,c->argv[2]);\n    int overwritten = 0;\n    int desttype = -1;\n    if (destval != NULL) {\n        if (nx) {\n            decrRefCount(o);\n            addReply(c,shared.czero);\n            return;\n        }\n\n        /* Overwrite: delete the old key before creating the new one\n         * with the same name. */\n        desttype = destval->type;\n        dbDelete(c->db,c->argv[2]);\n        overwritten = 1;\n    }\n\n    /* If hash with expiration on fields then remove it from global HFE DS and\n     * keep next expiration time. 
Otherwise, dbDelete() will remove it from the\n     * global HFE DS and we will lose the expiration time. */\n    int srctype = o->type;\n    if (srctype == OBJ_HASH)\n        minHashExpireTime = estoreRemove(c->db->subexpires, getKeySlot(c->argv[1]->ptr), o);\n\n    /* Prepare metadata for the renamed key */\n    KeyMetaSpec keymeta;\n    keyMetaSpecInit(&keymeta);\n    if (o->metabits) keyMetaOnRename(c->db, o, c->argv[1], c->argv[2], &keymeta);\n\n    dbDelete(c->db,c->argv[1]);\n    \n    dbAddInternal(c->db, c->argv[2], &o, NULL, &keymeta);\n\n    /* If hash with HFEs, register in DB subexpires */\n    if (minHashExpireTime != EB_EXPIRE_TIME_INVALID)\n        estoreAdd(c->db->subexpires, getKeySlot(c->argv[2]->ptr), o, minHashExpireTime);\n\n    /* Re-register stream IDMP tracking under the new key name. */\n    if (srctype == OBJ_STREAM)\n        streamKeyLoaded(c->db, c->argv[2], o);\n\n    keyModified(c,c->db,c->argv[1],NULL,1);\n    keyModified(c,c->db,c->argv[2],o,1);\n    notifyKeyspaceEvent(NOTIFY_GENERIC, \"rename_from\", c->argv[1],c->db->id);\n    notifyKeyspaceEvent(NOTIFY_GENERIC, \"rename_to\", c->argv[2],c->db->id);\n    KSN_INVALIDATE_KVOBJ(o);\n    if (overwritten) {\n        notifyKeyspaceEvent(NOTIFY_OVERWRITTEN, \"overwritten\", c->argv[2], c->db->id);\n        if (desttype != srctype)\n            notifyKeyspaceEvent(NOTIFY_TYPE_CHANGED, \"type_changed\", c->argv[2], c->db->id);\n    }\n    server.dirty++;\n    addReply(c,nx ? 
shared.cone : shared.ok);\n}\n\nvoid renameCommand(client *c) {\n    renameGenericCommand(c,0);\n}\n\nvoid renamenxCommand(client *c) {\n    renameGenericCommand(c,1);\n}\n\nvoid moveCommand(client *c) {\n    redisDb *src, *dst;\n    int srcid, dbid;\n    uint64_t hashExpireTime = EB_EXPIRE_TIME_INVALID;\n\n    if (server.cluster_enabled) {\n        addReplyError(c,\"MOVE is not allowed in cluster mode\");\n        return;\n    }\n\n    /* Obtain source and target DB pointers */\n    src = c->db;\n    srcid = c->db->id;\n\n    if (getIntFromObjectOrReply(c, c->argv[2], &dbid, NULL) != C_OK)\n        return;\n\n    if (selectDb(c,dbid) == C_ERR) {\n        addReplyError(c,\"DB index is out of range\");\n        return;\n    }\n    dst = c->db;\n    selectDb(c,srcid); /* Back to the source DB */\n\n    /* If the user specifies the source DB itself as the target DB,\n     * it is probably an error. */\n    if (src == dst) {\n        addReplyErrorObject(c,shared.sameobjecterr);\n        return;\n    }\n\n    /* Count operations that would be incompatible with cluster mode */\n    server.stat_cluster_incompatible_ops++;\n\n    /* Check if the element exists and get a reference */\n    kvobj *kv = lookupKeyWrite(c->db,c->argv[1]);\n    if (!kv) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    /* Return zero if the key already exists in the target DB */\n    dictEntryLink dstBucket;\n    if (lookupKey(dst, c->argv[1], LOOKUP_WRITE, &dstBucket) != NULL) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    int slot = getKeySlot(c->argv[1]->ptr);\n\n    /* If hash with expiration on fields, remove it from DB subexpires and keep\n     * aside registered expiration time. 
Must be before removal of the\n     * object since it embeds ExpireMeta that is used by subexpires */\n    if (kv->type == OBJ_HASH)\n        hashExpireTime = estoreRemove(src->subexpires, slot, kv);\n\n    /* Move a side metadata before dbDelete() */\n    KeyMetaSpec keymeta;\n    keyMetaSpecInit(&keymeta);\n    keyMetaOnMove(kv, c->argv[1], srcid, dbid, &keymeta);\n\n    incrRefCount(kv);            /* ref counter = 1->2 */\n    dbDelete(src,c->argv[1]);    /* ref counter = 2->1 */\n\n    dbAddInternal(dst, c->argv[1], &kv, &dstBucket, &keymeta);\n\n    /* If object of type hash with expiration on fields. Taken care to add the\n     * hash to subexpires of `dst` only after dbDelete(). */\n    if (hashExpireTime != EB_EXPIRE_TIME_INVALID)\n        estoreAdd(dst->subexpires, slot, kv, hashExpireTime);\n\n    /* Register stream IDMP tracking in the destination DB. */\n    if (kv->type == OBJ_STREAM)\n        streamKeyLoaded(dst, c->argv[1], kv);\n\n    keyModified(c,src,c->argv[1],NULL,1);\n    keyModified(c,dst,c->argv[1],kv,1);\n    notifyKeyspaceEvent(NOTIFY_GENERIC, \"move_from\", c->argv[1],src->id);\n    notifyKeyspaceEvent(NOTIFY_GENERIC, \"move_to\", c->argv[1],dst->id);\n    KSN_INVALIDATE_KVOBJ(kv);\n\n    server.dirty++;\n    addReply(c,shared.cone);\n}\n\nvoid copyCommand(client *c) {\n    kvobj *o;\n    redisDb *src, *dst;\n    int srcid, dbid;\n    int j, replace = 0, delete = 0;\n\n    /* Obtain source and target DB pointers \n     * Default target DB is the same as the source DB \n     * Parse the REPLACE option and targetDB option. 
*/\n    src = c->db;\n    dst = c->db;\n    srcid = c->db->id;\n    dbid = c->db->id;\n    for (j = 3; j < c->argc; j++) {\n        int additional = c->argc - j - 1;\n        if (!strcasecmp(c->argv[j]->ptr,\"replace\")) {\n            replace = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr, \"db\") && additional >= 1) {\n            if (getIntFromObjectOrReply(c, c->argv[j+1], &dbid, NULL) != C_OK)\n                return;\n\n            if (selectDb(c, dbid) == C_ERR) {\n                addReplyError(c,\"DB index is out of range\");\n                return;\n            }\n            dst = c->db;\n            selectDb(c,srcid); /* Back to the source DB */\n            j++; /* Consume additional arg. */\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    if ((server.cluster_enabled == 1) && (srcid != 0 || dbid != 0)) {\n        addReplyError(c,\"Copying to another database is not allowed in cluster mode\");\n        return;\n    }\n\n    /* If the user selects the same DB as the source DB and uses the same\n     * key as newkey, it is probably an error. */\n    robj *key = c->argv[1];\n    robj *newkey = c->argv[2];\n    if (src == dst && (sdscmp(key->ptr, newkey->ptr) == 0)) {\n        addReplyErrorObject(c,shared.sameobjecterr);\n        return;\n    }\n\n    if (srcid != 0 || dbid != 0) {\n        server.stat_cluster_incompatible_ops++;\n    }\n\n    /* Check if the element exists and get a reference */\n    o = lookupKeyRead(c->db, key);\n    if (!o) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    /* Return zero if the key already exists in the target DB.\n     * If REPLACE option is selected, delete newkey from targetDB. 
*/\n    kvobj *destval = lookupKeyWrite(dst,newkey);\n    if (destval != NULL) {\n        if (replace) {\n            delete = 1;\n        } else {\n            addReply(c,shared.czero);\n            return;\n        }\n    }\n    int destoldtype = destval ? destval->type : -1;\n    int destnewtype = o->type;\n\n    /* Duplicate object according to object's type. */\n    robj *newobj;\n    uint64_t minHashExpire = EB_EXPIRE_TIME_INVALID; /* HFE feature */\n    switch(o->type) {\n        case OBJ_STRING: newobj = dupStringObject(o); break;\n        case OBJ_LIST: newobj = listTypeDup(o); break;\n        case OBJ_SET: newobj = setTypeDup(o); break;\n        case OBJ_ZSET: newobj = zsetDup(o); break;\n        case OBJ_HASH: newobj = hashTypeDup(o, &minHashExpire); break;\n        case OBJ_STREAM: newobj = streamDup(o); break;\n        case OBJ_GCRA: newobj = gcraDup(o); break;\n        case OBJ_MODULE:\n            newobj = moduleTypeDupOrReply(c, key, newkey, dst->id, o);\n            if (!newobj) return;\n            break;\n        default:\n            addReplyError(c, \"unknown type object\");\n            return;\n    }\n\n    if (delete) {\n        dbDelete(dst,newkey);\n    }\n\n    /* Prepare metadata for the new key */\n    KeyMetaSpec keymeta;\n    keyMetaSpecInit(&keymeta);\n    if (o->metabits) keyMetaOnCopy(o, key, newkey, c->db->id, dst->id, &keymeta);\n\n    kvobj *kvCopy = dbAddInternal(dst, newkey, &newobj, NULL, &keymeta);\n\n    /* If minExpiredField was set, then the object is hash with expiration\n     * on fields and need to register it in global HFE DS */\n    if (minHashExpire != EB_EXPIRE_TIME_INVALID)\n        estoreAdd(dst->subexpires, getKeySlot(newkey->ptr), kvCopy, minHashExpire);\n\n    /* Register copied stream with IDMP producers for cron-based expiration. */\n    if (kvCopy->type == OBJ_STREAM)\n        streamKeyLoaded(dst, newkey, kvCopy);\n\n    /* OK! key copied. 
Signal modification */\n    keyModified(c,dst,c->argv[2],kvCopy,1);\n    notifyKeyspaceEvent(NOTIFY_GENERIC,\"copy_to\",c->argv[2],dst->id);\n    KSN_INVALIDATE_KVOBJ(kvCopy);\n\n    /* `delete` implies the destination key was overwritten */\n    if (delete) {\n        notifyKeyspaceEvent(NOTIFY_OVERWRITTEN, \"overwritten\", c->argv[2], dst->id);\n        if (destoldtype != destnewtype)\n            notifyKeyspaceEvent(NOTIFY_TYPE_CHANGED, \"type_changed\", c->argv[2], dst->id);\n    }\n\n    server.dirty++;\n    addReply(c,shared.cone);\n}\n\n/* Helper function for dbSwapDatabases(): scans the list of keys that have\n * one or more blocked clients for B[LR]POP or other blocking commands\n * and signal the keys as ready if they are of the right type. See the comment\n * where the function is used for more info. */\nvoid scanDatabaseForReadyKeys(redisDb *db) {\n    dictEntry *de;\n    dictIterator di;\n    dictInitSafeIterator(&di, db->blocking_keys);\n    while((de = dictNext(&di)) != NULL) {\n        robj *key = dictGetKey(de);\n        kvobj *kv = dbFind(db, key->ptr);\n        if (kv)\n            signalKeyAsReady(db, key, kv->type);\n    }\n    dictResetIterator(&di);\n}\n\n/* Since we are unblocking XREADGROUP clients in the event the key was\n * deleted/overwritten we must do the same in case the database was\n * flushed/swapped. If 'slots' is not NULL, only keys in the specified slot\n * range are considered. */\nvoid scanDatabaseForDeletedKeys(redisDb *emptied, redisDb *replaced_with, slotRangeArray *slots) {\n    dictEntry *de;\n    dictIterator di;\n\n    dictInitSafeIterator(&di, emptied->blocking_keys);\n    while((de = dictNext(&di)) != NULL) {\n        robj *key = dictGetKey(de);\n        /* Check if key belongs to the slot range. 
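The slot check here ultimately rests on the hashtag rule: when a key contains a tag, only the tag is hashed. A hedged sketch of just the tag-extraction step (substring between the first brace pair, if non-empty; the CRC16-to-slot step is omitted):

```c
#include <string.h>

// Extract the cluster hashtag from a key: the substring between the first
// '{' and the next '}', if non-empty; otherwise use the whole key.
// Returns a pointer into key and stores the tag length in *len.
static const char *hashtag_of(const char *key, size_t keylen, size_t *len) {
    const char *open = memchr(key, '{', keylen);
    if (open) {
        const char *close = memchr(open + 1, '}',
                                   keylen - (size_t)(open - key) - 1);
        if (close && close > open + 1) {  // empty tag "{}" hashes whole key
            *len = (size_t)(close - open - 1);
            return open + 1;
        }
    }
    *len = keylen;
    return key;
}
```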
*/\n        if (slots && !slotRangeArrayContains(slots, keyHashSlot(key->ptr, sdslen(key->ptr))))\n            continue;\n        int existed = 0, exists = 0;\n        int original_type = -1, curr_type = -1;\n\n        kvobj *kv = dbFind(emptied, key->ptr);\n        if (kv) {\n            original_type = kv->type;\n            existed = 1;\n        }\n\n        if (replaced_with) {\n            kv = dbFind(replaced_with, key->ptr);\n            if (kv) {\n                curr_type = kv->type;\n                exists = 1;\n            }\n        }\n        /* We want to try to unblock any client using a blocking XREADGROUP */\n        if ((existed && !exists) || original_type != curr_type)\n            signalDeletedKeyAsReady(emptied, key, original_type);\n    }\n    dictResetIterator(&di);\n}\n\n/* Swap two databases at runtime so that all clients will magically see\n * the new database even if already connected. Note that the client\n * structure c->db points to a given DB, so we need to be smarter and\n * swap the underlying referenced structures, otherwise we would need\n * to fix all the references to the Redis DB structure.\n *\n * Returns C_ERR if at least one of the DB ids are out of range, otherwise\n * C_OK is returned. */\nint dbSwapDatabases(int id1, int id2) {\n    if (id1 < 0 || id1 >= server.dbnum ||\n        id2 < 0 || id2 >= server.dbnum) return C_ERR;\n    if (id1 == id2) return C_OK;\n    redisDb aux = server.db[id1];\n    redisDb *db1 = &server.db[id1], *db2 = &server.db[id2];\n\n    /* Swapdb should make transaction fail if there is any\n     * client watching keys */\n    touchAllWatchedKeysInDb(db1, db2, NULL);\n    touchAllWatchedKeysInDb(db2, db1, NULL);\n\n    /* Try to unblock any XREADGROUP clients if the key no longer exists. */\n    scanDatabaseForDeletedKeys(db1, db2, NULL);\n    scanDatabaseForDeletedKeys(db2, db1, NULL);\n\n    /* Swap hash tables. 
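The field-by-field swap performed just below exchanges data through a stack-local aux copy while deliberately leaving client-facing fields untouched; a toy model of that pattern (all names hypothetical):

```c
// Model of the partial-swap idiom: snapshot one struct into a local aux,
// then exchange only the data-bearing fields, leaving the rest in place
// (like blocking_keys, ready_keys and watched_keys in the real code).
typedef struct { int keys; int expires; int watchers; } toydb;

static void swap_data_fields(toydb *a, toydb *b) {
    toydb aux = *a;            // snapshot a before overwriting it
    a->keys = b->keys;         // move b's data into a
    a->expires = b->expires;
    b->keys = aux.keys;        // move the snapshot into b
    b->expires = aux.expires;
    // 'watchers' is intentionally NOT swapped
}
```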
Note that we don't swap blocking_keys,\n     * ready_keys and watched_keys, since we want clients to\n     * remain in the same DB they were. */\n    db1->keys = db2->keys;\n    db1->expires = db2->expires;\n    db1->subexpires = db2->subexpires;\n    db1->stream_idmp_keys = db2->stream_idmp_keys;\n    db1->avg_ttl = db2->avg_ttl;\n    db1->expires_cursor = db2->expires_cursor;\n\n    db2->keys = aux.keys;\n    db2->expires = aux.expires;\n    db2->subexpires = aux.subexpires;\n    db2->stream_idmp_keys = aux.stream_idmp_keys;\n    db2->avg_ttl = aux.avg_ttl;\n    db2->expires_cursor = aux.expires_cursor;\n\n    /* Now we need to handle clients blocked on lists: as an effect\n     * of swapping the two DBs, a client that was waiting for list\n     * X in a given DB, may now actually be unblocked if X happens\n     * to exist in the new version of the DB, after the swap.\n     *\n     * However normally we only do this check for efficiency reasons\n     * in dbAdd() when a list is created. So here we need to rescan\n     * the list of clients blocked on lists and signal lists as ready\n     * if needed. */\n    scanDatabaseForReadyKeys(db1);\n    scanDatabaseForReadyKeys(db2);\n    return C_OK;\n}\n\n/* Logically, this discards (flushes) the old main database, and apply the newly loaded\n * database (temp) as the main (active) database, the actual freeing of old database\n * (which will now be placed in the temp one) is done later. */\nvoid swapMainDbWithTempDb(redisDb *tempDb) {\n    for (int i=0; i<server.dbnum; i++) {\n        redisDb aux = server.db[i];\n        redisDb *activedb = &server.db[i], *newdb = &tempDb[i];\n\n        /* Swapping databases should make transaction fail if there is any\n         * client watching keys. */\n        touchAllWatchedKeysInDb(activedb, newdb, NULL);\n\n        /* Try to unblock any XREADGROUP clients if the key no longer exists. */\n        scanDatabaseForDeletedKeys(activedb, newdb, NULL);\n\n        /* Swap hash tables. 
Note that we don't swap blocking_keys,\n         * ready_keys and watched_keys, since clients \n         * remain in the same DB they were. */\n        activedb->keys = newdb->keys;\n        activedb->expires = newdb->expires;\n        activedb->subexpires = newdb->subexpires;\n        activedb->stream_idmp_keys = newdb->stream_idmp_keys;\n        activedb->avg_ttl = newdb->avg_ttl;\n        activedb->expires_cursor = newdb->expires_cursor;\n\n        newdb->keys = aux.keys;\n        newdb->expires = aux.expires;\n        newdb->subexpires = aux.subexpires;\n        newdb->stream_idmp_keys = aux.stream_idmp_keys;\n        newdb->avg_ttl = aux.avg_ttl;\n        newdb->expires_cursor = aux.expires_cursor;\n\n        /* Now we need to handle clients blocked on lists: as an effect\n         * of swapping the two DBs, a client that was waiting for list\n         * X in a given DB, may now actually be unblocked if X happens\n         * to exist in the new version of the DB, after the swap.\n         *\n         * However normally we only do this check for efficiency reasons\n         * in dbAdd() when a list is created. So here we need to rescan\n         * the list of clients blocked on lists and signal lists as ready\n         * if needed. */\n        scanDatabaseForReadyKeys(activedb);\n    }\n\n    trackingInvalidateKeysOnFlush(1);\n    flushSlaveKeysWithExpireList();\n}\n\n/* SWAPDB db1 db2 */\nvoid swapdbCommand(client *c) {\n    int id1, id2;\n\n    /* Not allowed in cluster mode: we have just DB 0 there. */\n    if (server.cluster_enabled) {\n        addReplyError(c,\"SWAPDB is not allowed in cluster mode\");\n        return;\n    }\n\n    /* Get the two DBs indexes. */\n    if (getIntFromObjectOrReply(c, c->argv[1], &id1,\n        \"invalid first DB index\") != C_OK)\n        return;\n\n    if (getIntFromObjectOrReply(c, c->argv[2], &id2,\n        \"invalid second DB index\") != C_OK)\n        return;\n\n    /* Swap... 
*/\n    if (dbSwapDatabases(id1,id2) == C_ERR) {\n        addReplyError(c,\"DB index is out of range\");\n        return;\n    } else {\n        RedisModuleSwapDbInfo si = {REDISMODULE_SWAPDBINFO_VERSION,id1,id2};\n        moduleFireServerEvent(REDISMODULE_EVENT_SWAPDB,0,&si);\n        server.dirty++;\n        server.stat_cluster_incompatible_ops++;\n        addReply(c,shared.ok);\n    }\n}\n\n/*-----------------------------------------------------------------------------\n * Expires API\n *----------------------------------------------------------------------------*/\n\n/* Remove expiry from key\n *\n *  Remove the object from db->expires and set the TTL attached to the KV to -1\n */\nint removeExpire(redisDb *db, robj *key) {\n    int table;\n    int slot = getKeySlot(key->ptr);\n    dictEntryLink link = kvstoreDictTwoPhaseUnlinkFind(db->expires, slot, key->ptr, &table);\n\n    if (link == NULL) return 0;\n    dictEntry *de = *link;\n    kvobj *kv = dictGetKV(de);\n    kvobj *newkv = kvobjSetExpire(kv, -1);\n    serverAssert(newkv == kv);\n    kvstoreDictTwoPhaseUnlinkFree(db->expires, slot, link, table);\n    return 1;\n}\n\n\n/* Set an expire to the specified key. If the expire is set in the context\n * of a user calling a command, 'c' is the client, otherwise 'c' is set\n * to NULL. The 'when' parameter is the absolute unix time in milliseconds\n * after which the key will no longer be considered valid.\n * \n * Note: It may reallocate kvobj. The returned ref may point to a new object. 
*/\nkvobj *setExpire(client *c, redisDb *db, robj *key, long long when) {\n    return setExpireByLink(c,db,key->ptr,when,NULL);\n}\n\n/* Like setExpire(), but accepts an optional `keyLink` to save lookup */\nkvobj *setExpireByLink(client *c, redisDb *db, sds key, long long when, dictEntryLink keyLink) {\n    /* Reuse the sds from the main dict in the expire dict */\n    int slot = getKeySlot(key);\n    size_t oldsize = 0;\n    if (!keyLink) {\n        keyLink = kvstoreDictFindLink(db->keys, slot, key, NULL);\n        serverAssert(keyLink != NULL);\n    }\n    kvobj *kv = dictGetKV(*keyLink);\n    long long old_when = kvobjGetExpire(kv);\n\n    if (old_when != -1) { /* old expire */\n        kvobj *kvnew = kvobjSetExpire(kv, when); /* release kv if reallocated */\n        /* Val already had an expire field, so it was not reallocated. */\n        serverAssert(kv == kvnew);\n    } else { /* No old expire */\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(kv);\n        uint64_t subexpiry = EB_EXPIRE_TIME_INVALID;\n        /* If hash with HFEs, take care to remove from global HFE DS before attempting\n         * to manipulate and maybe free kv object */\n        if (kv->type == OBJ_HASH)\n            subexpiry = estoreRemove(db->subexpires, slot, kv);\n\n        kvobj *kvnew = kvobjSetExpire(kv, when); /* release kv if reallocated */\n        /* if kvobj was reallocated, update dict */\n        if (kv != kvnew) {\n            kvstoreDictSetAtLink(db->keys, slot, kvnew, &keyLink, 0);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(db, slot, kvnew, oldsize, kvobjAllocSize(kvnew));\n            kv = kvnew;\n        }\n        /* Now add to expires */\n        dictEntry *de = kvstoreDictAddRaw(db->expires, slot, kv, NULL);\n        serverAssert(de != NULL);\n\n        if (subexpiry != EB_EXPIRE_TIME_INVALID)\n            estoreAdd(db->subexpires, slot, kv, subexpiry);\n    }\n\n    int writable_slave = 
server.masterhost && server.repl_slave_ro == 0;\n    if (c && writable_slave && !(c->flags & CLIENT_MASTER))\n        rememberSlaveKeyWithExpire(db,key);\n    return kv;\n}\n\n/* Retrieve the expiration time for the specified key.\n * Returns -1 if the key has no expiration set or doesn't exist\n *\n * To avoid a lookup, pass the key-value object (`kv`) instead of `key`.\n */\nlong long getExpire(redisDb *db, sds key, kvobj *kv) {\n    if (kv == NULL) kv = dbFindExpires(db, key);\n    if (kv == NULL) return -1;\n    return kvobjGetExpire(kv);\n}\n\n/* Delete the specified expired or evicted key and propagate to replicas.\n * Currently notify_type can only be NOTIFY_EXPIRED or NOTIFY_EVICTED,\n * and it affects other aspects like the latency monitor event name,\n * which config to check for lazy free, which stats var to increment, and so on.\n *\n * key_mem_freed is an out parameter which contains the estimated\n * amount of memory freed due to the deletion (may be NULL) */\nstatic void deleteKeyAndPropagate(redisDb *db, robj *keyobj, int notify_type, long long *key_mem_freed) {\n    mstime_t latency;\n    int del_flag = notify_type == NOTIFY_EXPIRED ? DB_FLAG_KEY_EXPIRED : DB_FLAG_KEY_EVICTED;\n    int lazy_flag = notify_type == NOTIFY_EXPIRED ? server.lazyfree_lazy_expire : server.lazyfree_lazy_eviction;\n    char *latency_name = notify_type == NOTIFY_EXPIRED ? \"expire-del\" : \"evict-del\";\n    char *notify_name = notify_type == NOTIFY_EXPIRED ? \"expired\" : \"evicted\";\n\n    /* The key needs to be converted from static to heap before being deleted */\n    int static_key = keyobj->refcount == OBJ_STATIC_REFCOUNT;\n    if (static_key) {\n        keyobj = createStringObject(keyobj->ptr, sdslen(keyobj->ptr));\n    }\n\n    serverLog(LL_DEBUG,\"key %s %s: deleting it\", redactLogCstr((char*)keyobj->ptr), notify_type == NOTIFY_EXPIRED ? 
\"expired\" : \"evicted\");\n\n    /* We compute the amount of memory freed by db*Delete() alone.\n     * It is possible that the memory needed to propagate\n     * the DEL in the AOF and replication link is actually greater than the amount\n     * we free by removing the key, but we can't account for\n     * that, otherwise we would never exit the loop.\n     *\n     * Same for CSC invalidation messages generated by keyModified.\n     *\n     * AOF and output buffer memory will be freed eventually so\n     * we only care about memory used by the key space.\n     *\n     * The code here used to first propagate and then record the delta\n     * using only zmalloc_used_memory, but in CRDT we can't do that,\n     * so we use freeMemoryGetNotCountedMemory to avoid counting\n     * AOF and slave buffers */\n    if (key_mem_freed) *key_mem_freed = (long long) zmalloc_used_memory() - freeMemoryGetNotCountedMemory();\n    latencyStartMonitor(latency);\n    dbGenericDelete(db, keyobj, lazy_flag, del_flag);\n    latencyEndMonitor(latency);\n    latencyAddSampleIfNeeded(latency_name, latency);\n    if (key_mem_freed) *key_mem_freed -= (long long) zmalloc_used_memory() - freeMemoryGetNotCountedMemory();\n\n    notifyKeyspaceEvent(notify_type, notify_name,keyobj, db->id);\n    keyModified(NULL, db, keyobj, NULL, 1);\n    propagateDeletion(db, keyobj, lazy_flag);\n\n    if (notify_type == NOTIFY_EXPIRED)\n        server.stat_expiredkeys++;\n    else\n        server.stat_evictedkeys++;\n\n    if (static_key)\n        decrRefCount(keyobj);\n}\n\n/* Delete the specified expired key and propagate. */\nvoid deleteExpiredKeyAndPropagate(redisDb *db, robj *keyobj) {\n    deleteKeyAndPropagate(db, keyobj, NOTIFY_EXPIRED, NULL);\n}\n\n/* Delete the specified evicted key and propagate. 
*/\nvoid deleteEvictedKeyAndPropagate(redisDb *db, robj *keyobj, long long *key_mem_freed) {\n    deleteKeyAndPropagate(db, keyobj, NOTIFY_EVICTED, key_mem_freed);\n}\n\n/* Propagate an implicit key deletion into replicas and the AOF file.\n * When a key was deleted in the master by eviction, expiration or a similar\n * mechanism, a DEL/UNLINK operation for this key is sent\n * to all the replicas and the AOF file if enabled.\n *\n * This way the key deletion is centralized in one place, and since both\n * AOF and the replication link guarantee operation ordering, everything\n * will be consistent even if we allow write operations against deleted\n * keys.\n *\n * This function may be called from:\n * 1. Within call(): Example: Lazy-expire on key access.\n *    In this case the caller doesn't have to do anything\n *    because call() handles server.also_propagate(); or\n * 2. Outside of call(): Example: Active-expire, eviction, slot ownership changed.\n *    In this case the caller must remember to call\n *    postExecutionUnitOperations, preferably just after a\n *    single deletion batch, so that DEL/UNLINK will NOT be wrapped\n *    in MULTI/EXEC */\nvoid propagateDeletion(redisDb *db, robj *key, int lazy) {\n    robj *argv[2];\n\n    argv[0] = lazy ? shared.unlink : shared.del;\n    argv[1] = key;\n    incrRefCount(argv[0]);\n    incrRefCount(argv[1]);\n\n    /* If the master decided to delete a key, we must propagate it to replicas no matter what.\n     * Even if a module executed a command without asking for propagation. 
*/\n    int prev_replication_allowed = server.replication_allowed;\n    server.replication_allowed = 1;\n    alsoPropagate(db->id,argv,2,PROPAGATE_AOF|PROPAGATE_REPL);\n    server.replication_allowed = prev_replication_allowed;\n\n    decrRefCount(argv[0]);\n    decrRefCount(argv[1]);\n}\n\n/* Check if the key is expired\n *\n * Provide either the key name for a lookup or the KV object (to save a lookup)\n */\nint keyIsExpired(redisDb *db, sds key, kvobj *kv) {\n    /* Don't expire anything while loading. It will be done later. */\n    if (server.loading || server.allow_access_expired) return 0;\n    mstime_t when = getExpire(db, key, kv);\n    if (when < 0) return 0; /* No expire for this key */\n    const mstime_t now = commandTimeSnapshot();\n    /* The key expired if the current (virtual or real) time is greater\n     * than the expire time of the key. */\n    return now > when;\n}\n\n/* Check if user configuration allows the key to be deleted due to expiry */\nint confAllowsExpireDel(void) {\n    if (server.lazyexpire_nested_arbitrary_keys)\n        return 1;\n\n    /* This configuration specifically targets nested commands, to align with RE's feature of replication between dbs.\n     * Transactions (from scripts or multi-exec) containing commands like SCAN and RANDOMKEY will execute locally, but their\n     * lazy-expiration DELs may induce CROSS-SLOT errors on a remote proxy in replica-of mode (RED-161574) */\n    return !(server.execution_nesting > 1 && server.executing_client->cmd->flags & CMD_TOUCHES_ARBITRARY_KEYS);\n}\n\n/* This function is called when we are going to perform some operation\n * on a given key, but the key may already be logically expired even if\n * it still exists in the database. The main way this function is called\n * is via the lookupKey*() family of functions.\n *\n * The behavior of the function depends on the replication role of the\n * instance, because by default replicas do not delete expired keys. 
They\n * wait for DELs from the master for consistency reasons. However, even\n * replicas will try to have a coherent return value for the function,\n * so that read commands executed on the replica side will be able to\n * behave as if the key is expired even if it is still present (because the\n * master has yet to propagate the DEL).\n *\n * In masters, as a side effect of finding a key which is expired, such\n * key will be evicted from the database. This may also trigger the\n * propagation of a DEL/UNLINK command in the AOF / replication stream.\n *\n * On replicas, this function does not delete expired keys by default, but\n * it still returns KEY_EXPIRED if the key is logically expired. To force deletion\n * of logically expired keys even on replicas, use the EXPIRE_FORCE_DELETE_EXPIRED\n * flag. Note though that if the current client is executing\n * replicated commands from the master, keys are never considered expired.\n *\n * On the other hand, if you just want an expiration check, but need to avoid\n * the actual key deletion and propagation of the deletion, use the\n * EXPIRE_AVOID_DELETE_EXPIRED flag. If you also need to read the expired key (which\n * hasn't been deleted yet), then use EXPIRE_ALLOW_ACCESS_EXPIRED.\n *\n * The return value of the function is KEY_VALID if the key is still valid.\n * The function returns KEY_EXPIRED if the key is expired BUT not deleted,\n * or returns KEY_DELETED if the key is expired and deleted. 
If the key is in a\n * trim job due to slot migration, the function returns KEY_TRIMMED, unless\n * EXPIRE_ALLOW_ACCESS_TRIMMED is set, in which case it returns KEY_VALID.\n *\n * You can optionally pass `kv` to save a lookup.\n */\nkeyStatus expireIfNeeded(redisDb *db, robj *key, kvobj *kv, int flags) {\n    debugAssert(key != NULL || kv != NULL);\n\n    /* NOTE: Keys in slots scheduled for trimming can still exist for a while.\n     * We don't delete it here, return KEY_VALID if allowing access to trimmed\n     * keys, and return KEY_TRIMMED otherwise. */\n    sds key_name = key ? key->ptr : kvobjGetKey(kv);\n    if (asmIsKeyInTrimJob(key_name)) {\n        if (server.allow_access_trimmed || (flags & EXPIRE_ALLOW_ACCESS_TRIMMED))\n            return KEY_VALID;\n\n        /* If the slot is not served by this node, we should not allow access\n         * to the key, we consider it as trimmed. */\n        if (!clusterCanAccessKeysInSlot(getKeySlot(key_name)))\n            return KEY_TRIMMED;\n    }\n\n    if ((flags & EXPIRE_ALLOW_ACCESS_EXPIRED) ||\n        (!keyIsExpired(db,  key ? key->ptr : NULL, kv)))\n        return KEY_VALID;\n\n    /* If we are running in the context of a replica, instead of\n     * evicting the expired key from the database, we return ASAP:\n     * the replica key expiration is controlled by the master that will\n     * send us synthesized DEL operations for expired keys. 
The\n     * exception is when write operations are performed on writable\n     * replicas.\n     *\n     * In cluster mode, we also return ASAP if we are importing data\n     * from the source, to avoid deleting keys that are still in use.\n     * We create a fake master client for data import, which can be\n     * identified using the CLIENT_MASTER flag.\n     *\n     * Still we try to return the right information to the caller,\n     * that is, KEY_VALID if we think the key should still be valid,\n     * KEY_EXPIRED if we think the key is expired but don't want to delete it at this time.\n     *\n     * When replicating commands from the master, keys are never considered\n     * expired. */\n    if (server.masterhost != NULL || server.cluster_enabled) {\n        if (server.current_client && (server.current_client->flags & CLIENT_MASTER)) return KEY_VALID;\n        if (server.masterhost != NULL && !(flags & EXPIRE_FORCE_DELETE_EXPIRED)) return KEY_EXPIRED;\n    }\n\n    /* Check if user configuration disables lazy-expire deletions in current state.\n     * This will only apply if the server doesn't mandate key deletion to operate correctly (write commands). */\n    if (!(flags & EXPIRE_FORCE_DELETE_EXPIRED) && !confAllowsExpireDel())\n        return KEY_EXPIRED;\n\n    /* In some cases we're explicitly instructed to return an indication of a\n     * missing key without actually deleting it, even on masters. */\n    if (flags & EXPIRE_AVOID_DELETE_EXPIRED)\n        return KEY_EXPIRED;\n\n    /* If 'expire' action is paused, for whatever reason, then don't expire any key.\n     * Typically, at the end of the pause we will properly expire the key OR we\n     * will have failed over and the new primary will send us the expire. 
*/\n    if (isPausedActionsWithUpdate(PAUSE_ACTION_EXPIRE)) return KEY_EXPIRED;\n\n    /* Perform deletion */\n    if (key) {\n        deleteExpiredKeyAndPropagate(db, key);\n    } else {\n        sds keyname = kvobjGetKey(kv);\n        robj *tmpkey = createStringObject(keyname, sdslen(keyname));\n        deleteExpiredKeyAndPropagate(db, tmpkey);\n        decrRefCount(tmpkey);\n    }\n    return KEY_DELETED;\n}\n\n/* CB passed to kvstoreExpand.\n * The purpose is to skip expansion of unused dicts in cluster mode (all\n * dicts not mapped to *my* slots) */\nstatic int dbExpandSkipSlot(int slot) {\n    return !clusterNodeCoversSlot(getMyClusterNode(), slot);\n}\n\n/*\n * This function increases the size of the main/expires db to match the desired number.\n * In cluster mode it resizes all individual dictionaries for slots that this node owns.\n *\n * Based on the parameter `try_expand`, the appropriate dict expand API is invoked:\n * if try_expand is set to 1, `dictTryExpand` is used, else `dictExpand`.\n * Both APIs return either `DICT_OK` or `DICT_ERR`. A `DICT_OK` response indicates a successful\n * expansion. A `DICT_ERR` response signifies an allocation failure in the `dictTryExpand` call,\n * while for the `dictExpand` call it signifies that no expansion was performed.\n */\nstatic int dbExpandGeneric(kvstore *kvs, uint64_t db_size, int try_expand) {\n    int ret;\n    if (server.cluster_enabled) {\n        /* We don't know the exact number of keys that will fall into each slot, but we can\n         * approximate it by assuming an even distribution and dividing by the number of slots. */\n        int slots = getMyShardSlotCount();\n        if (slots == 0) return C_OK;\n        db_size = db_size / slots;\n        ret = kvstoreExpand(kvs, db_size, try_expand, dbExpandSkipSlot);\n    } else {\n        ret = kvstoreExpand(kvs, db_size, try_expand, NULL);\n    }\n\n    return ret? 
C_OK : C_ERR;\n}\n\nint dbExpand(redisDb *db, uint64_t db_size, int try_expand) {\n    return dbExpandGeneric(db->keys, db_size, try_expand);\n}\n\nint dbExpandExpires(redisDb *db, uint64_t db_size, int try_expand) {\n    return dbExpandGeneric(db->expires, db_size, try_expand);\n}\n\nstatic kvobj *dbFindGeneric(kvstore *kvs, sds key) {\n    dictEntry *res = kvstoreDictFind(kvs, getKeySlot(key), key);\n    return (res) ? dictGetKey(res) : NULL;\n}\n\nkvobj *dbFind(redisDb *db, sds key) {\n    return dbFindGeneric(db->keys, key);\n}\n\n/* Find a KV in the main db. Return also link to it.\n *\n * plink - If found, set to the link of the key in the dict.\n *         If not found, set to the bucket where the key should be added.\n *         If set to NULL, then HT of dict not allocated yet.\n */\nkvobj *dbFindByLink(redisDb *db, sds key, dictEntryLink *plink) {\n    int slot = getKeySlot(key);\n    dictEntryLink link, bucket;\n\n    link = kvstoreDictFindLink(db->keys, slot, key, &bucket);\n    if (link == NULL) {\n        if (plink) *plink = bucket;\n        return NULL;\n    } else {\n        if (plink) *plink = link;\n        return dictGetKV(*link);\n    }\n}\n\nkvobj *dbFindExpires(redisDb *db, sds key) {\n    return dbFindGeneric(db->expires, key);\n}\n\nunsigned long long dbSize(redisDb *db) {\n    unsigned long long total = kvstoreSize(db->keys);\n\n    if (server.cluster_enabled) {\n        /* If we are the master and there is no import or trim in progress,\n         * then we can return the total count. If not, we need to subtract\n         * the number of keys in slots that are not accessible, as below. */\n        if (clusterNodeIsMaster(getMyClusterNode()) &&\n            !asmImportInProgress() &&\n            !asmIsTrimInProgress())\n        {\n            return total;\n        }\n\n        /* Besides, we don't know the slot migration states on replicas, so we\n         * need to check each slot to see if it's accessible. 
*/\n        for (int i = 0; i < CLUSTER_SLOTS; i++) {\n            dict *d = kvstoreGetDict(db->keys, i);\n            if (d && !clusterCanAccessKeysInSlot(i)) {\n                total -= kvstoreDictSize(db->keys, i);\n            }\n        }\n    }\n\n    return total;\n}\n\nunsigned long long dbScan(redisDb *db, unsigned long long cursor, dictScanFunction *scan_cb, void *privdata) {\n    return kvstoreScan(db->keys, cursor, -1, scan_cb, scanShouldSkipDict, privdata);\n}\n\n/* -----------------------------------------------------------------------------\n * API to get key arguments from commands\n * ---------------------------------------------------------------------------*/\n\n/* Prepare the getKeysResult struct to hold numkeys, either by using the\n * pre-allocated keysbuf or by allocating a new array on the heap.\n *\n * This function must be called at least once before starting to populate\n * the result, and can be called repeatedly to enlarge the result array.\n */\nkeyReference *getKeysPrepareResult(getKeysResult *result, int numkeys) {\n    /* GETKEYS_RESULT_INIT initializes keys to NULL, point it to the pre-allocated stack\n     * buffer here. 
*/\n    if (!result->keys) {\n        serverAssert(!result->numkeys);\n        result->keys = result->keysbuf;\n    }\n\n    /* Resize if necessary */\n    if (numkeys > result->size) {\n        if (result->keys != result->keysbuf) {\n            /* We're not using a static buffer, just (re)alloc */\n            result->keys = zrealloc(result->keys, numkeys * sizeof(keyReference));\n        } else {\n            /* We are using a static buffer, copy its contents */\n            result->keys = zmalloc(numkeys * sizeof(keyReference));\n            if (result->numkeys)\n                memcpy(result->keys, result->keysbuf, result->numkeys * sizeof(keyReference));\n        }\n        result->size = numkeys;\n    }\n\n    return result->keys;\n}\n\n/* Returns a bitmask with all the flags found in any of the key specs of the command.\n * The 'inv' argument means we'll return a mask with all flags that are missing in at least one spec. */\nint64_t getAllKeySpecsFlags(struct redisCommand *cmd, int inv) {\n    int64_t flags = 0;\n    for (int j = 0; j < cmd->key_specs_num; j++) {\n        keySpec *spec = cmd->key_specs + j;\n        flags |= inv? ~spec->flags : spec->flags;\n    }\n    return flags;\n}\n\n/* Fetch the keys based on the provided key specs. Returns the number of keys found, or -1 on error.\n * There are several flags that can be used to modify how this function finds keys in a command.\n * \n * GET_KEYSPEC_INCLUDE_NOT_KEYS: Return 'fake' keys as if they were keys.\n * GET_KEYSPEC_RETURN_PARTIAL:   Skips invalid and incomplete keyspecs but returns the keys\n *                               found in other valid keyspecs. 
\n */\nint getKeysUsingKeySpecs(struct redisCommand *cmd, robj **argv, int argc, int search_flags, getKeysResult *result) {\n    long j, i, last, first, step;\n    keyReference *keys;\n    serverAssert(result->numkeys == 0); /* caller should initialize or reset it */\n\n    for (j = 0; j < cmd->key_specs_num; j++) {\n        keySpec *spec = cmd->key_specs + j;\n        serverAssert(spec->begin_search_type != KSPEC_BS_INVALID);\n        /* Skip specs that represent 'fake' keys */\n        if ((spec->flags & CMD_KEY_NOT_KEY) && !(search_flags & GET_KEYSPEC_INCLUDE_NOT_KEYS)) {\n            continue;\n        }\n\n        first = 0;\n        if (spec->begin_search_type == KSPEC_BS_INDEX) {\n            first = spec->bs.index.pos;\n        } else if (spec->begin_search_type == KSPEC_BS_KEYWORD) {\n            int start_index = spec->bs.keyword.startfrom > 0 ? spec->bs.keyword.startfrom : argc+spec->bs.keyword.startfrom;\n            int end_index = spec->bs.keyword.startfrom > 0 ? argc-1: 1;\n            for (i = start_index; i != end_index; i = start_index <= end_index ? 
i + 1 : i - 1) {\n                if (i >= argc || i < 1)\n                    break;\n                if (!strcasecmp((char*)argv[i]->ptr,spec->bs.keyword.keyword)) {\n                    first = i+1;\n                    break;\n                }\n            }\n            /* keyword not found */\n            if (!first) {\n                continue;\n            }\n        } else {\n            /* unknown spec */\n            goto invalid_spec;\n        }\n\n        if (spec->find_keys_type == KSPEC_FK_RANGE) {\n            step = spec->fk.range.keystep;\n            if (spec->fk.range.lastkey >= 0) {\n                last = first + spec->fk.range.lastkey;\n            } else {\n                if (!spec->fk.range.limit) {\n                    last = argc + spec->fk.range.lastkey;\n                } else {\n                    serverAssert(spec->fk.range.lastkey == -1);\n                    last = first + ((argc-first)/spec->fk.range.limit + spec->fk.range.lastkey);\n                }\n            }\n        } else if (spec->find_keys_type == KSPEC_FK_KEYNUM) {\n            step = spec->fk.keynum.keystep;\n            long long numkeys;\n            long keynumidx = first + spec->fk.keynum.keynumidx;\n            if (keynumidx >= argc || keynumidx < 0)\n                goto invalid_spec;\n\n            sds keynum_str = argv[keynumidx]->ptr;\n            if (!string2ll(keynum_str,sdslen(keynum_str),&numkeys) || numkeys < 0) {\n                /* Unable to parse the numkeys argument or it was invalid */\n                goto invalid_spec;\n            }\n\n            first += spec->fk.keynum.firstkey;\n            last = first + ((long)numkeys - 1) * step;\n        } else {\n            /* unknown spec */\n            goto invalid_spec;\n        }\n\n        /* First or last is out of bounds, which indicates a syntax error */\n        if (last >= argc || last < first || first >= argc) {\n            goto invalid_spec;\n        }\n\n        int count = ((last - 
first)+1);\n        keys = getKeysPrepareResult(result, result->numkeys + count);\n\n        for (i = first; i <= last; i += step) {\n            if (i >= argc || i < first) {\n                /* Modules commands, and standard commands with a not fixed number\n                 * of arguments (negative arity parameter) do not have dispatch\n                 * time arity checks, so we need to handle the case where the user\n                 * passed an invalid number of arguments here. In this case we\n                 * return no keys and expect the command implementation to report\n                 * an arity or syntax error. */\n                if (cmd->flags & CMD_MODULE || cmd->arity < 0) {\n                    continue;\n                } else {\n                    serverPanic(\"Redis built-in command declared keys positions not matching the arity requirements.\");\n                }\n            }\n            keys[result->numkeys].pos = i;\n            keys[result->numkeys].flags = spec->flags;\n            result->numkeys++;\n        }\n\n        /* Handle incomplete specs (only after we added the current spec\n         * to `keys`, just in case GET_KEYSPEC_RETURN_PARTIAL was given) */\n        if (spec->flags & CMD_KEY_INCOMPLETE) {\n            goto invalid_spec;\n        }\n\n        /* Done with this spec */\n        continue;\n\ninvalid_spec:\n        if (search_flags & GET_KEYSPEC_RETURN_PARTIAL) {\n            continue;\n        } else {\n            result->numkeys = 0;\n            return -1;\n        }\n    }\n\n    return result->numkeys;\n}\n\n/* Return all the arguments that are keys in the command passed via argc / argv. \n * This function will eventually replace getKeysFromCommand.\n *\n * The command returns the positions of all the key arguments inside the array,\n * so the actual return value is a heap allocated array of integers. 
The\n * length of the array is returned by reference into *numkeys.\n * \n * Along with the position, this command also returns the flags that are\n * associated with how Redis will access the key.\n *\n * 'cmd' must point to the corresponding entry in the redisCommand\n * table, according to the command name in argv[0]. */\nint getKeysFromCommandWithSpecs(struct redisCommand *cmd, robj **argv, int argc, int search_flags, getKeysResult *result) {\n    /* The command has at least one key-spec not marked as NOT_KEY */\n    int has_keyspec = (getAllKeySpecsFlags(cmd, 1) & CMD_KEY_NOT_KEY);\n    /* The command has at least one key-spec marked as VARIABLE_FLAGS */\n    int has_varflags = (getAllKeySpecsFlags(cmd, 0) & CMD_KEY_VARIABLE_FLAGS);\n\n    /* We prefer key-specs if there are any, and their flags are reliable. */\n    if (has_keyspec && !has_varflags) {\n        int ret = getKeysUsingKeySpecs(cmd,argv,argc,search_flags,result);\n        if (ret >= 0)\n            return ret;\n        /* If the specs returned with an error (probably an INVALID or INCOMPLETE spec),\n         * fall back to the callback method. */\n    }\n\n    /* Resort to getkeys callback methods. */\n    if (cmd->flags & CMD_MODULE_GETKEYS)\n        return moduleGetCommandKeysViaAPI(cmd,argv,argc,result);\n\n    /* We use native getkeys as a last resort, since not all these native getkeys provide\n     * flags properly (only the ones that correspond to INVALID, INCOMPLETE or VARIABLE_FLAGS do).*/\n    if (cmd->getkeys_proc)\n        return cmd->getkeys_proc(cmd,argv,argc,result);\n    return 0;\n}\n\n/* This function checks whether the command may have keys. 
*/\nint doesCommandHaveKeys(struct redisCommand *cmd) {\n    return cmd->getkeys_proc ||                                 /* has getkeys_proc (non modules) */\n        (cmd->flags & CMD_MODULE_GETKEYS) ||                    /* module with GETKEYS */\n        (getAllKeySpecsFlags(cmd, 1) & CMD_KEY_NOT_KEY);        /* has at least one key-spec not marked as NOT_KEY */\n}\n\n/* A simplified channel spec table that contains all of the redis commands\n * and which channels they have and how they are accessed. */\ntypedef struct ChannelSpecs {\n    redisCommandProc *proc; /* Command procedure to match against */\n    uint64_t flags;         /* CMD_CHANNEL_* flags for this command */\n    int start;              /* The initial position of the first channel */\n    int count;              /* The number of channels, or -1 if all remaining\n                             * arguments are channels. */\n} ChannelSpecs;\n\nChannelSpecs commands_with_channels[] = {\n    {subscribeCommand, CMD_CHANNEL_SUBSCRIBE, 1, -1},\n    {ssubscribeCommand, CMD_CHANNEL_SUBSCRIBE, 1, -1},\n    {unsubscribeCommand, CMD_CHANNEL_UNSUBSCRIBE, 1, -1},\n    {sunsubscribeCommand, CMD_CHANNEL_UNSUBSCRIBE, 1, -1},\n    {psubscribeCommand, CMD_CHANNEL_PATTERN | CMD_CHANNEL_SUBSCRIBE, 1, -1},\n    {punsubscribeCommand, CMD_CHANNEL_PATTERN | CMD_CHANNEL_UNSUBSCRIBE, 1, -1},\n    {publishCommand, CMD_CHANNEL_PUBLISH, 1, 1},\n    {spublishCommand, CMD_CHANNEL_PUBLISH, 1, 1},\n    {NULL,0} /* Terminator. */\n};\n\n/* Returns 1 if the command may access any channels matched by the flags\n * argument. */\nint doesCommandHaveChannelsWithFlags(struct redisCommand *cmd, int flags) {\n    /* If a module declares get channels, we are just going to assume it\n     * has channels. This API is allowed to return false positives. 
*/\n    if (cmd->flags & CMD_MODULE_GETCHANNELS) {\n        return 1;\n    }\n    for (ChannelSpecs *spec = commands_with_channels; spec->proc != NULL; spec += 1) {\n        if (cmd->proc == spec->proc) {\n            return !!(spec->flags & flags);\n        }\n    }\n    return 0;\n}\n\n/* Return all the arguments that are channels in the command passed via argc / argv. \n * This function behaves similarly to getKeysFromCommandWithSpecs, but with channels \n * instead of keys.\n * \n * The command returns the positions of all the channel arguments inside the array,\n * so the actual return value is a heap allocated array of integers. The\n * length of the array is returned by reference into *numkeys.\n * \n * Along with the position, this command also returns the flags that are\n * associated with how Redis will access the channel.\n *\n * 'cmd' must point to the corresponding entry into the redisCommand\n * table, according to the command name in argv[0]. */\nint getChannelsFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    keyReference *keys;\n    /* If a module declares get channels, use that. */\n    if (cmd->flags & CMD_MODULE_GETCHANNELS) {\n        return moduleGetCommandChannelsViaAPI(cmd, argv, argc, result);\n    }\n    /* Otherwise check the channel spec table (stop at the NULL terminator\n     * so we never walk past the end of the array). */\n    for (ChannelSpecs *spec = commands_with_channels; spec->proc != NULL; spec += 1) {\n        if (cmd->proc == spec->proc) {\n            int start = spec->start;\n            int stop = (spec->count == -1) ? 
argc : start + spec->count;\n            if (stop > argc) stop = argc;\n            int count = 0;\n            keys = getKeysPrepareResult(result, stop - start);\n            for (int i = start; i < stop; i++) {\n                keys[count].pos = i;\n                keys[count++].flags = spec->flags;\n            }\n            result->numkeys = count;\n            return count;\n        }\n    }\n    return 0;\n}\n\n/* Extract keys/channels from a command and calculate the cluster slot.\n * Returns the number of keys/channels extracted.\n * The slot number is returned by reference into *slot.\n *\n * This function handles both regular commands (keys) and sharded pubsub\n * commands (channels), but excludes regular pubsub commands which don't\n * have slots.\n */\nint extractKeysAndSlot(struct redisCommand *cmd, robj **argv, int argc,\n                       getKeysResult *result, int *slot) {\n    int num_keys = -1;\n\n    if (!doesCommandHaveChannelsWithFlags(cmd, CMD_CHANNEL_PUBLISH | CMD_CHANNEL_SUBSCRIBE)) {\n        num_keys = getKeysFromCommandWithSpecs(cmd, argv, argc, GET_KEYSPEC_DEFAULT, result);\n    } else {\n        /* Only extract channels for commands that have key_specs (sharded pubsub).\n         * Regular pubsub commands (PUBLISH, SUBSCRIBE) don't have slots. */\n        if (cmd->key_specs_num > 0) {\n            num_keys = getChannelsFromCommand(cmd, argv, argc, result);\n        } else {\n            num_keys = 0;\n        }\n    }\n\n    *slot = extractSlotFromKeysResult(argv, result);\n    return num_keys;\n}\n\n/* The base case is to use the key positions as given in the command table\n * (firstkey, lastkey, step).\n * This function works only on commands with the legacy_range_key_spec;\n * all other commands should be handled by getkeys_proc. 
\n * \n * If the command's keyspec is incomplete, no keys will be returned, and the provided\n * keys function should be called instead.\n * \n * NOTE: This function does not guarantee populating the flags for \n * the keys; in order to get flags you should use getKeysUsingKeySpecs. */\nint getKeysUsingLegacyRangeSpec(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int j, i = 0, last, first, step;\n    keyReference *keys;\n    UNUSED(argv);\n\n    if (cmd->legacy_range_key_spec.begin_search_type == KSPEC_BS_INVALID) {\n        result->numkeys = 0;\n        return 0;\n    }\n\n    first = cmd->legacy_range_key_spec.bs.index.pos;\n    last = cmd->legacy_range_key_spec.fk.range.lastkey;\n    if (last >= 0)\n        last += first;\n    step = cmd->legacy_range_key_spec.fk.range.keystep;\n\n    if (last < 0) last = argc+last;\n\n    int count = ((last - first)+1);\n    keys = getKeysPrepareResult(result, count);\n\n    for (j = first; j <= last; j += step) {\n        if (j >= argc || j < first) {\n            /* Modules commands, and standard commands with a not fixed number\n             * of arguments (negative arity parameter) do not have dispatch\n             * time arity checks, so we need to handle the case where the user\n             * passed an invalid number of arguments here. In this case we\n             * return no keys and expect the command implementation to report\n             * an arity or syntax error. 
*/\n            if (cmd->flags & CMD_MODULE || cmd->arity < 0) {\n                result->numkeys = 0;\n                return 0;\n            } else {\n                serverPanic(\"Redis built-in command declared keys positions not matching the arity requirements.\");\n            }\n        }\n        keys[i].pos = j;\n        /* Flags are omitted from legacy key specs */\n        keys[i++].flags = 0;\n    }\n    result->numkeys = i;\n    return i;\n}\n\n/* Return all the arguments that are keys in the command passed via argc / argv.\n *\n * The command returns the positions of all the key arguments inside the array,\n * so the actual return value is a heap allocated array of integers. The\n * length of the array is returned by reference into *numkeys.\n *\n * 'cmd' must point to the corresponding entry into the redisCommand\n * table, according to the command name in argv[0].\n *\n * This function uses the command table if a command-specific helper function\n * is not required, otherwise it calls the command-specific function. */\nint getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    if (cmd->flags & CMD_MODULE_GETKEYS) {\n        return moduleGetCommandKeysViaAPI(cmd,argv,argc,result);\n    } else if (cmd->getkeys_proc) {\n        return cmd->getkeys_proc(cmd,argv,argc,result);\n    } else {\n        return getKeysUsingLegacyRangeSpec(cmd,argv,argc,result);\n    }\n}\n\n/* Free the result of getKeysFromCommand. */\nvoid getKeysFreeResult(getKeysResult *result) {\n    if (result && result->keys != result->keysbuf)\n        zfree(result->keys);\n}\n\n/* Helper function to extract keys from the following commands:\n * COMMAND [destkey] <num-keys> <key> [...] <key> [...] ... <options>\n *\n * eg:\n * ZUNION <num-keys> <key> <key> ... <key> <options>\n * ZUNIONSTORE <destkey> <num-keys> <key> <key> ... 
<key> <options>\n *\n * 'storeKeyOfs': destkey index, 0 means there is no destkey.\n * 'keyCountOfs': num-keys index.\n * 'firstKeyOfs': firstkey index.\n * 'keyStep': the interval of each key, usually this value is 1.\n * \n * The commands using this function have a fully defined keyspec, so returning flags isn't needed. */\nint genericGetKeys(int storeKeyOfs, int keyCountOfs, int firstKeyOfs, int keyStep,\n                    robj **argv, int argc, getKeysResult *result) {\n    int i, num;\n    keyReference *keys;\n\n    if (keyCountOfs >= argc) {\n        result->numkeys = 0;\n        return 0;\n    }\n    num = atoi(argv[keyCountOfs]->ptr);\n    /* Sanity check. Don't return any key if the command is going to\n     * reply with a syntax error (no input keys). */\n    if (num < 1 || num > (argc - firstKeyOfs)/keyStep) {\n        result->numkeys = 0;\n        return 0;\n    }\n\n    int numkeys = storeKeyOfs ? num + 1 : num;\n    keys = getKeysPrepareResult(result, numkeys);\n    result->numkeys = numkeys;\n\n    /* Add all key positions for argv[firstKeyOfs...n] to keys[] */\n    for (i = 0; i < num; i++) {\n        keys[i].pos = firstKeyOfs+(i*keyStep);\n        keys[i].flags = 0;\n    }\n\n    if (storeKeyOfs) {\n        keys[num].pos = storeKeyOfs;\n        keys[num].flags = 0;\n    }\n    return result->numkeys;\n}\n\nint sintercardGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 1, 2, 1, argv, argc, result);\n}\n\nint zunionInterDiffStoreGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(1, 2, 3, 1, argv, argc, result);\n}\n\nint zunionInterDiffGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 1, 2, 1, argv, argc, result);\n}\n\nint evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    
UNUSED(cmd);\n    return genericGetKeys(0, 2, 3, 1, argv, argc, result);\n}\n\nint functionGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 2, 3, 1, argv, argc, result);\n}\n\nint lmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 1, 2, 1, argv, argc, result);\n}\n\nint blmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 2, 3, 1, argv, argc, result);\n}\n\nint zmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 1, 2, 1, argv, argc, result);\n}\n\nint bzmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    UNUSED(cmd);\n    return genericGetKeys(0, 2, 3, 1, argv, argc, result);\n}\n\n/* Helper function to extract keys from the SORT RO command.\n *\n * SORT <sort-key>\n *\n * The second argument of SORT is always a key, however an arbitrary number of\n * keys may be accessed while doing the sort (the BY and GET args), so the\n * key-spec declares incomplete keys which is why we have to provide a concrete\n * implementation to fetch the keys.\n *\n * This command declares incomplete keys, so the flags are correctly set for this function */\nint sortROGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    keyReference *keys;\n    UNUSED(cmd);\n    UNUSED(argv);\n    UNUSED(argc);\n\n    keys = getKeysPrepareResult(result, 1);\n    keys[0].pos = 1; /* <sort-key> is always present. */\n    keys[0].flags = CMD_KEY_RO | CMD_KEY_ACCESS;\n    result->numkeys = 1;\n    return result->numkeys;\n}\n\n/* Helper function to extract keys from the SORT command.\n *\n * SORT <sort-key> ... 
STORE <store-key> ...\n *\n * The first argument of SORT is always a key, however a list of options\n * follows in SQL-alike style. Here we parse just the minimum in order to\n * correctly identify keys in the \"STORE\" option. \n * \n * This command declares incomplete keys, so the flags are correctly set for this function */\nint sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int i, j, num, found_store = 0;\n    keyReference *keys;\n    UNUSED(cmd);\n\n    num = 0;\n    keys = getKeysPrepareResult(result, 2); /* Alloc 2 places for the worst case. */\n    keys[num].pos = 1; /* <sort-key> is always present. */\n    keys[num++].flags = CMD_KEY_RO | CMD_KEY_ACCESS;\n\n    /* Search for STORE option. By default we consider options not to\n     * have arguments, so if we find an unknown option name we scan the\n     * next. However there are options with 1 or 2 arguments, so we\n     * provide a list here in order to skip the right number of args. */\n    struct {\n        char *name;\n        int skip;\n    } skiplist[] = {\n        {\"limit\", 2},\n        {\"get\", 1},\n        {\"by\", 1},\n        {NULL, 0} /* End of elements. */\n    };\n\n    for (i = 2; i < argc; i++) {\n        for (j = 0; skiplist[j].name != NULL; j++) {\n            if (!strcasecmp(argv[i]->ptr,skiplist[j].name)) {\n                i += skiplist[j].skip;\n                break;\n            } else if (!strcasecmp(argv[i]->ptr,\"store\") && i+1 < argc) {\n                /* Note: we don't increment \"num\" here and continue the loop\n                 * to be sure to process the *last* \"STORE\" option if multiple\n                 * ones are provided. This is the same behavior as SORT. 
*/\n                found_store = 1;\n                keys[num].pos = i+1; /* <store-key> */\n                keys[num].flags = CMD_KEY_OW | CMD_KEY_UPDATE;\n                break;\n            }\n        }\n    }\n    result->numkeys = num + found_store;\n    return result->numkeys;\n}\n\nint pfmergeGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int i, numkeys;\n    keyReference *keys;\n    UNUSED(cmd);\n    UNUSED(argv);\n\n    numkeys = argc - 1; /* destkey + all sourcekeys */\n    keys = getKeysPrepareResult(result, numkeys);\n\n    /* destkey at argv[1] */\n    keys[0].pos = 1;\n    keys[0].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_INSERT;\n\n    /* sourcekeys at argv[2..argc-1], may be zero */\n    for (i = 2; i < argc; i++) {\n        keys[i - 1].pos = i;\n        keys[i - 1].flags = CMD_KEY_RO | CMD_KEY_ACCESS;\n    }\n\n    result->numkeys = numkeys;\n    return result->numkeys;\n}\n\n/* This command declares incomplete keys, so the flags are correctly set for this function */\nint migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int i, j, num, first;\n    keyReference *keys;\n    UNUSED(cmd);\n\n    /* Assume the obvious form. */\n    first = 3;\n    num = 1;\n\n    /* But check for the extended one with the KEYS option. */\n    struct {\n        char* name;\n        int skip;\n    } skip_keywords[] = {       \n        {\"copy\", 0},\n        {\"replace\", 0},\n        {\"auth\", 1},\n        {\"auth2\", 2},\n        {NULL, 0}\n    };\n    if (argc > 6) {\n        for (i = 6; i < argc; i++) {\n            if (!strcasecmp(argv[i]->ptr, \"keys\")) {\n                if (sdslen(argv[3]->ptr) > 0) {\n                    /* This is a syntax error. So ignore the keys and leave\n                     * the syntax error to be handled by migrateCommand. 
*/\n                    num = 0; \n                } else {\n                    first = i + 1;\n                    num = argc - first;\n                }\n                break;\n            }\n            for (j = 0; skip_keywords[j].name != NULL; j++) {\n                if (!strcasecmp(argv[i]->ptr, skip_keywords[j].name)) {\n                    i += skip_keywords[j].skip;\n                    break;\n                }\n            }\n        }\n    }\n\n    keys = getKeysPrepareResult(result, num);\n    for (i = 0; i < num; i++) {\n        keys[i].pos = first+i;\n        keys[i].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_DELETE;\n    } \n    result->numkeys = num;\n    return num;\n}\n\n/* Helper function to extract keys from following commands:\n * GEORADIUS key x y radius unit [WITHDIST] [WITHHASH] [WITHCOORD] [ASC|DESC]\n *                             [COUNT count] [STORE key|STOREDIST key]\n * GEORADIUSBYMEMBER key member radius unit ... options ...\n * \n * This command has a fully defined keyspec, so returning flags isn't needed. */\nint georadiusGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int i, num;\n    keyReference *keys;\n    UNUSED(cmd);\n\n    /* Check for the presence of the stored key in the command */\n    int stored_key = -1;\n    for (i = 5; i < argc; i++) {\n        char *arg = argv[i]->ptr;\n        /* For the case when user specifies both \"store\" and \"storedist\" options, the\n         * second key specified would override the first key. This behavior is kept\n         * the same as in georadiusCommand method.\n         */\n        if ((!strcasecmp(arg, \"store\") || !strcasecmp(arg, \"storedist\")) && ((i+1) < argc)) {\n            stored_key = i+1;\n            i++;\n        }\n    }\n    num = 1 + (stored_key == -1 ? 
0 : 1);\n\n    /* Keys in the command come from two places:\n     * argv[1] = key,\n     * argv[5...n] = stored key if present\n     */\n    keys = getKeysPrepareResult(result, num);\n\n    /* Add all key positions to keys[] */\n    keys[0].pos = 1;\n    keys[0].flags = 0;\n    if(num > 1) {\n         keys[1].pos = stored_key;\n         keys[1].flags = 0;\n    }\n    result->numkeys = num;\n    return num;\n}\n\n/* XREAD [BLOCK <milliseconds>] [COUNT <count>] [GROUP <groupname> <ttl>]\n *       STREAMS key_1 key_2 ... key_N ID_1 ID_2 ... ID_N\n *\n * This command has a fully defined keyspec, so returning flags isn't needed. */\nint xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    int i, num = 0;\n    keyReference *keys;\n    UNUSED(cmd);\n\n    /* We need to parse the options of the command in order to seek the first\n     * \"STREAMS\" string which is actually the option. This is needed because\n     * \"STREAMS\" could also be the name of the consumer group and even the\n     * name of the stream key. */\n    int streams_pos = -1;\n    for (i = 1; i < argc; i++) {\n        char *arg = argv[i]->ptr;\n        if (!strcasecmp(arg, \"block\")) {\n            i++; /* Skip option argument. */\n        } else if (!strcasecmp(arg, \"count\")) {\n            i++; /* Skip option argument. */\n        } else if (!strcasecmp(arg, \"group\")) {\n            i += 2; /* Skip option argument. */\n        } else if (!strcasecmp(arg, \"noack\")) {\n            /* Nothing to do. */\n        } else if (!strcasecmp(arg, \"streams\")) {\n            streams_pos = i;\n            break;\n        } else {\n            break; /* Syntax error. */\n        }\n    }\n    if (streams_pos != -1) num = argc - streams_pos - 1;\n\n    /* Syntax error. 
*/\n    if (streams_pos == -1 || num == 0 || num % 2 != 0) {\n        result->numkeys = 0;\n        return 0;\n    }\n    num /= 2; /* We have half the keys as there are arguments because\n                 there are also the IDs, one per key. */\n\n    keys = getKeysPrepareResult(result, num);\n    for (i = streams_pos+1; i < argc-num; i++) {\n        keys[i-streams_pos-1].pos = i;\n        keys[i-streams_pos-1].flags = 0; \n    } \n    result->numkeys = num;\n    return num;\n}\n\n/* Helper function to extract keys from the SET command, which may have\n * an RW flag if the GET, IF* arguments are present, OW otherwise. */\nint setGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    keyReference *keys;\n    UNUSED(cmd);\n\n    keys = getKeysPrepareResult(result, 1);\n    keys[0].pos = 1; /* We always know the position */\n    result->numkeys = 1;\n    int actual = CMD_KEY_OW;\n    int logical = CMD_KEY_UPDATE;\n\n    for (int i = 3; i < argc; i++) {\n        char *arg = argv[i]->ptr;\n        if ((arg[0] == 'g' || arg[0] == 'G') &&\n            (arg[1] == 'e' || arg[1] == 'E') &&\n            (arg[2] == 't' || arg[2] == 'T') && arg[3] == '\\0')\n        {\n            actual = CMD_KEY_RW;\n            logical |= CMD_KEY_ACCESS;\n        } else if (!strcasecmp(arg, \"ifeq\") || !strcasecmp(arg, \"ifne\") ||\n                   !strcasecmp(arg, \"ifdeq\") || !strcasecmp(arg, \"ifdne\"))\n        {\n            actual = CMD_KEY_RW;\n        }\n    }\n\n    keys[0].flags = actual | logical;\n\n    return 1;\n}\n\n/* Helper function to extract keys from the DELEX command, which may have\n * an RW flag if the IF* arguments are present, RM otherwise. 
*/\nint delexGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    keyReference *keys;\n    UNUSED(cmd);\n\n    keys = getKeysPrepareResult(result, 1);\n    keys[0].pos = 1; /* We always know the position */\n    result->numkeys = 1;\n    int actual = CMD_KEY_RM;\n    int logical = CMD_KEY_DELETE;\n\n    for (int i = 2; i < argc; i++) {\n        char *arg = argv[i]->ptr;\n        if (!strcasecmp(arg, \"ifeq\") || !strcasecmp(arg, \"ifne\") ||\n            !strcasecmp(arg, \"ifdeq\") || !strcasecmp(arg, \"ifdne\"))\n        {\n            actual = CMD_KEY_RW;\n        }\n    }\n\n    keys[0].flags = actual | logical;\n\n    return 1;\n}\n\n/* Helper function to extract keys from the BITFIELD command, which may be\n * read-only if the BITFIELD GET subcommand is used. */\nint bitfieldGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    keyReference *keys;\n    int readonly = 1;\n    UNUSED(cmd);\n\n    keys = getKeysPrepareResult(result, 1);\n    keys[0].pos = 1; /* We always know the position */\n    result->numkeys = 1;\n\n    for (int i = 2; i < argc; i++) {\n        int remargs = argc - i - 1; /* Remaining args other than current. */\n        char *arg = argv[i]->ptr;\n        if (!strcasecmp(arg, \"get\") && remargs >= 2) {\n            i += 2;\n        } else if ((!strcasecmp(arg, \"set\") || !strcasecmp(arg, \"incrby\")) && remargs >= 3) {\n            readonly = 0;\n            i += 3;\n            break;\n        } else if (!strcasecmp(arg, \"overflow\") && remargs >= 1) {\n            i += 1;\n        } else {\n            readonly = 0; /* Syntax error. safer to assume non-RO. */\n            break;\n        }\n    }\n\n    if (readonly) {\n        keys[0].flags = CMD_KEY_RO | CMD_KEY_ACCESS;\n    } else {\n        keys[0].flags = CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_UPDATE;\n    }\n    return 1;\n}\n"
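The `genericGetKeys()` helper above encodes the `COMMAND [destkey] <num-keys> <key> ...` layout shared by ZUNIONSTORE, EVAL, LMPOP and friends. A minimal standalone sketch of that arithmetic follows; the name `extract_key_positions` and the plain `int`/`char *` interface are hypothetical, since the real helper operates on `robj` arguments and fills a `getKeysResult`:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical standalone sketch of the genericGetKeys() layout: read the
 * num-keys argument, sanity-check it against the remaining argument count,
 * then record the argv indexes that hold keys (store key, if any, last). */
static int extract_key_positions(int store_ofs, int count_ofs, int first_ofs,
                                 int step, const char **argv, int argc,
                                 int *pos, int max_pos) {
    if (count_ofs >= argc) return 0;
    int num = atoi(argv[count_ofs]);
    /* Reject counts that can't fit in the remaining arguments, mirroring
     * the "reply with a syntax error" bail-out in the real helper. */
    if (num < 1 || num > (argc - first_ofs) / step) return 0;
    int total = store_ofs ? num + 1 : num;
    if (total > max_pos) return 0;
    for (int i = 0; i < num; i++) pos[i] = first_ofs + i * step;
    if (store_ofs) pos[num] = store_ofs;
    return total;
}
```

For `ZUNIONSTORE dest 2 k1 k2` (store_ofs=1, count_ofs=2, first_ofs=3, step=1) the recorded positions are 3 and 4 for the source keys, then 1 for the destination key, the same ordering the real helper produces.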
  },
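The `bitfieldGetKeys()` scan above classifies a BITFIELD invocation as read-only only when every subcommand is GET or OVERFLOW. A compilable sketch of just that decision; the name `bitfield_is_readonly` is hypothetical, and unlike the real function it returns the verdict directly instead of filling in the key position and flags:

```c
#include <assert.h>
#include <strings.h> /* strcasecmp (POSIX) */

/* Hypothetical sketch of the bitfieldGetKeys() subcommand walk: GET skips
 * its two arguments, OVERFLOW skips one, SET/INCRBY (or anything
 * unrecognized) makes the command non-read-only. */
static int bitfield_is_readonly(const char **argv, int argc) {
    for (int i = 2; i < argc; i++) {
        int remargs = argc - i - 1; /* Remaining args other than current. */
        if (!strcasecmp(argv[i], "get") && remargs >= 2) {
            i += 2;
        } else if ((!strcasecmp(argv[i], "set") ||
                    !strcasecmp(argv[i], "incrby")) && remargs >= 3) {
            return 0; /* Writes to the key. */
        } else if (!strcasecmp(argv[i], "overflow") && remargs >= 1) {
            i += 1;
        } else {
            return 0; /* Syntax error: safer to assume non-RO. */
        }
    }
    return 1;
}
```

`BITFIELD k GET u8 0` comes out read-only, while `BITFIELD k SET u8 0 255` does not, matching the CMD_KEY_RO versus CMD_KEY_RW split in the original function.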
  {
    "path": "src/debug.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"util.h\"\n#include \"sha1.h\"   /* SHA1 is used for DEBUG DIGEST */\n#include \"crc64.h\"\n#include \"bio.h\"\n#include \"quicklist.h\"\n#include \"fpconv_dtoa.h\"\n#include \"fast_float_strtod.h\"\n#include \"cluster.h\"\n#include \"threads_mngr.h\"\n#include \"script.h\"\n#include \"cluster_asm.h\"\n\n#include <arpa/inet.h>\n#include <signal.h>\n#include <dlfcn.h>\n#include <fcntl.h>\n#include <sys/mman.h>\n#include <unistd.h>\n\n#ifdef HAVE_BACKTRACE\n#include <execinfo.h>\n#ifndef __OpenBSD__\n#include <ucontext.h>\n#else\ntypedef ucontext_t sigcontext_t;\n#endif\n#endif /* HAVE_BACKTRACE */\n\n#ifdef __CYGWIN__\n#ifndef SA_ONSTACK\n#define SA_ONSTACK 0x08000000\n#endif\n#endif\n\n#if defined(__APPLE__) && defined(__arm64__)\n#include <mach/mach.h>\n#endif\n\n/* Globals */\nstatic int bug_report_start = 0; /* True if bug report header was already logged. */\nstatic pthread_mutex_t bug_report_start_mutex;\nstatic pthread_mutexattr_t bug_report_start_attr;\n\n/* Mutex for a case when two threads crash at the same time. 
*/\nstatic pthread_mutex_t signal_handler_lock;\nstatic pthread_mutexattr_t signal_handler_lock_attr;\nstatic volatile int signal_handler_lock_initialized = 0;\n/* Forward declarations */\nint bugReportStart(void);\nvoid printCrashReport(void);\nvoid bugReportEnd(int killViaSignal, int sig);\nvoid logStackTrace(void *eip, int uplevel, int current_thread);\nvoid sigalrmSignalHandler(int sig, siginfo_t *info, void *secret);\n\n/* ================================= Debugging ============================== */\n\n/* Compute the sha1 of string at 's' with 'len' bytes long.\n * The SHA1 is then xored against the string pointed by digest.\n * Since xor is commutative, this operation is used in order to\n * \"add\" digests relative to unordered elements.\n *\n * So digest(a,b,c,d) will be the same of digest(b,a,c,d) */\nvoid xorDigest(unsigned char *digest, const void *ptr, size_t len) {\n    SHA1_CTX ctx;\n    unsigned char hash[20];\n    int j;\n\n    SHA1Init(&ctx);\n    SHA1Update(&ctx,ptr,len);\n    SHA1Final(hash,&ctx);\n\n    for (j = 0; j < 20; j++)\n        digest[j] ^= hash[j];\n}\n\nvoid xorStringObjectDigest(unsigned char *digest, robj *o) {\n    o = getDecodedObject(o);\n    xorDigest(digest,o->ptr,sdslen(o->ptr));\n    decrRefCount(o);\n}\n\n/* This function instead of just computing the SHA1 and xoring it\n * against digest, also perform the digest of \"digest\" itself and\n * replace the old value with the new one.\n *\n * So the final digest will be:\n *\n * digest = SHA1(digest xor SHA1(data))\n *\n * This function is used every time we want to preserve the order so\n * that digest(a,b,c,d) will be different than digest(b,c,d,a)\n *\n * Also note that mixdigest(\"foo\") followed by mixdigest(\"bar\")\n * will lead to a different digest compared to \"fo\", \"obar\".\n */\nvoid mixDigest(unsigned char *digest, const void *ptr, size_t len) {\n    SHA1_CTX ctx;\n\n    xorDigest(digest,ptr,len);\n    SHA1Init(&ctx);\n    SHA1Update(&ctx,digest,20);\n    
SHA1Final(digest,&ctx);\n}\n\nvoid mixStringObjectDigest(unsigned char *digest, robj *o) {\n    o = getDecodedObject(o);\n    mixDigest(digest,o->ptr,sdslen(o->ptr));\n    decrRefCount(o);\n}\n\nvoid mixGCRAObjectDigest(unsigned char *digest, robj *o) {\n    char buf[LONG_STR_SIZE];\n    long long val;\n    getLongLongFromGCRAObject(o, &val);\n    int len = ll2string(buf, sizeof(buf), val);\n    mixDigest(digest,buf,len);\n}\n\n/* This function computes the digest of a data structure stored in the\n * object 'o'. It is the core of the DEBUG DIGEST command: when taking the\n * digest of a whole dataset, we take the digest of the key and the value\n * pair, and xor all those together.\n *\n * Note that this function does not reset the initial 'digest' passed, it\n * will continue mixing this object digest to anything that was already\n * present. */\nvoid xorObjectDigest(redisDb *db, robj *keyobj, unsigned char *digest, robj *o) {\n    uint32_t aux = htonl(o->type);\n    mixDigest(digest,&aux,sizeof(aux));\n    long long expiretime = getExpire(db, keyobj->ptr, NULL);\n    char buf[128];\n\n    /* Save the key and associated value */\n    if (o->type == OBJ_STRING) {\n        mixStringObjectDigest(digest,o);\n    } else if (o->type == OBJ_LIST) {\n        listTypeIterator li;\n        listTypeEntry entry;\n        listTypeInitIterator(&li, o, 0, LIST_TAIL);\n        while(listTypeNext(&li, &entry)) {\n            robj *eleobj = listTypeGet(&entry);\n            mixStringObjectDigest(digest,eleobj);\n            decrRefCount(eleobj);\n        }\n        listTypeResetIterator(&li);\n    } else if (o->type == OBJ_SET) {\n        setTypeIterator si;\n        sds sdsele;\n        setTypeInitIterator(&si, o);\n        while((sdsele = setTypeNextObject(&si)) != NULL) {\n            xorDigest(digest,sdsele,sdslen(sdsele));\n            sdsfree(sdsele);\n        }\n        setTypeResetIterator(&si);\n    } else if (o->type == OBJ_ZSET) {\n        unsigned char 
eledigest[20];\n\n        if (o->encoding == OBJ_ENCODING_LISTPACK) {\n            unsigned char *zl = o->ptr;\n            unsigned char *eptr, *sptr;\n            unsigned char *vstr;\n            unsigned int vlen;\n            long long vll;\n            double score;\n\n            eptr = lpSeek(zl,0);\n            serverAssert(eptr != NULL);\n            sptr = lpNext(zl,eptr);\n            serverAssert(sptr != NULL);\n\n            while (eptr != NULL) {\n                vstr = lpGetValue(eptr,&vlen,&vll);\n                score = zzlGetScore(sptr);\n\n                memset(eledigest,0,20);\n                if (vstr != NULL) {\n                    mixDigest(eledigest,vstr,vlen);\n                } else {\n                    ll2string(buf,sizeof(buf),vll);\n                    mixDigest(eledigest,buf,strlen(buf));\n                }\n                const int len = fpconv_dtoa(score, buf);\n                buf[len] = '\\0';\n                mixDigest(eledigest,buf,strlen(buf));\n                xorDigest(digest,eledigest,20);\n                zzlNext(zl,&eptr,&sptr);\n            }\n        } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = o->ptr;\n            dictIterator di;\n            dictEntry *de;\n\n            dictInitIterator(&di, zs->dict);\n            while((de = dictNext(&di)) != NULL) {\n                zskiplistNode *znode = dictGetKey(de);\n                sds sdsele = zslGetNodeElement(znode);\n                const int len = fpconv_dtoa(znode->score, buf);\n                buf[len] = '\\0';\n                memset(eledigest,0,20);\n                mixDigest(eledigest,sdsele,sdslen(sdsele));\n                mixDigest(eledigest,buf,strlen(buf));\n                xorDigest(digest,eledigest,20);\n            }\n            dictResetIterator(&di);\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else if (o->type == OBJ_HASH) {\n        hashTypeIterator hi;\n        
hashTypeInitIterator(&hi, o);\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            unsigned char eledigest[20];\n            sds sdsele;\n\n            /* field */\n            memset(eledigest,0,20);\n            sdsele = hashTypeCurrentObjectNewSds(&hi,OBJ_HASH_KEY);\n            mixDigest(eledigest,sdsele,sdslen(sdsele));\n            sdsfree(sdsele);\n            /* val */\n            sdsele = hashTypeCurrentObjectNewSds(&hi,OBJ_HASH_VALUE);\n            mixDigest(eledigest,sdsele,sdslen(sdsele));\n            sdsfree(sdsele);\n            /* hash-field expiration (HFE) */\n            if (hi.expire_time != EB_EXPIRE_TIME_INVALID)\n                xorDigest(eledigest,\"!!hexpire!!\",11);\n            xorDigest(digest,eledigest,20);\n        }\n        hashTypeResetIterator(&hi);\n    } else if (o->type == OBJ_STREAM) {\n        streamIterator si;\n        streamIteratorStart(&si,o->ptr,NULL,NULL,0);\n        streamID id;\n        int64_t numfields;\n\n        while(streamIteratorGetID(&si,&id,&numfields)) {\n            sds itemid = sdscatfmt(sdsempty(),\"%U.%U\",id.ms,id.seq);\n            mixDigest(digest,itemid,sdslen(itemid));\n            sdsfree(itemid);\n\n            while(numfields--) {\n                unsigned char *field, *value;\n                int64_t field_len, value_len;\n                streamIteratorGetField(&si,&field,&value,\n                                           &field_len,&value_len);\n                mixDigest(digest,field,field_len);\n                mixDigest(digest,value,value_len);\n            }\n        }\n        streamIteratorStop(&si);\n    } else if (o->type == OBJ_GCRA) {\n        mixGCRAObjectDigest(digest, o);\n    } else if (o->type == OBJ_MODULE) {\n        RedisModuleDigest md = {{0},{0},keyobj,db->id};\n        moduleValue *mv = o->ptr;\n        moduleType *mt = mv->type;\n        moduleInitDigestContext(md);\n        if (mt->digest) {\n            mt->digest(&md,mv->value);\n            
xorDigest(digest,md.x,sizeof(md.x));\n        }\n    } else {\n        serverPanic(\"Unknown object type\");\n    }\n    /* If the key has an expire, add it to the mix */\n    if (expiretime != -1) xorDigest(digest,\"!!expire!!\",10);\n}\n\n/* Compute the dataset digest. Since keys, set elements and hash elements\n * are not ordered, we use a trick: every aggregate digest is the xor\n * of the digests of its elements, so the order does not change the\n * result (xor is commutative and associative). For lists, instead, we\n * feed the output digest back as input, so that differently ordered\n * lists result in different digests. */\nvoid computeDatasetDigest(unsigned char *final) {\n    unsigned char digest[20];\n    dictEntry *de;\n    int j;\n    uint32_t aux;\n\n    memset(final,0,20); /* Start with a clean result */\n\n    for (j = 0; j < server.dbnum; j++) {\n        redisDb *db = server.db+j;\n        if (kvstoreSize(db->keys) == 0)\n            continue;\n\n        /* Hash the DB id, so the same dataset moved to a different DB will lead to a different digest */\n        aux = htonl(j);\n        mixDigest(final,&aux,sizeof(aux));\n\n        /* Iterate this DB writing every entry */\n        kvstoreIterator kvs_it;\n        kvstoreIteratorInit(&kvs_it, db->keys);\n        while((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n            robj *keyobj;\n\n            memset(digest,0,20); /* This key-val digest */\n            kvobj *kv = dictGetKV(de);\n            sds key = kvobjGetKey(kv);\n            keyobj = createStringObject(key,sdslen(key));\n\n            mixDigest(digest,key,sdslen(key));\n\n            xorObjectDigest(db, keyobj, digest, kv);\n\n            /* We can finally xor the key-val digest to the final digest */\n            xorDigest(final,digest,20);\n            decrRefCount(keyobj);\n        }\n        kvstoreIteratorReset(&kvs_it);\n    }\n}\n\n#ifdef USE_JEMALLOC\nvoid mallctl_int(client *c, robj **argv, int argc) {\n    int ret;\n    
/* start with the biggest size (int64), and if that fails, try smaller sizes (int32, bool) */\n    int64_t old = 0, val;\n    if (argc > 1) {\n        long long ll;\n        if (getLongLongFromObjectOrReply(c, argv[1], &ll, NULL) != C_OK)\n            return;\n        val = ll;\n    }\n    size_t sz = sizeof(old);\n    while (sz > 0) {\n        size_t zz = sz;\n        if ((ret=je_mallctl(argv[0]->ptr, &old, &zz, argc > 1? &val: NULL, argc > 1?sz: 0))) {\n            if (ret == EPERM && argc > 1) {\n                /* if this option is write only, try just writing to it. */\n                if (!(ret=je_mallctl(argv[0]->ptr, NULL, 0, &val, sz))) {\n                    addReply(c, shared.ok);\n                    return;\n                }\n            }\n            if (ret==EINVAL) {\n                /* size might be wrong, try a smaller one */\n                sz /= 2;\n#if BYTE_ORDER == BIG_ENDIAN\n                val <<= 8*sz;\n#endif\n                continue;\n            }\n            addReplyErrorFormat(c,\"%s\", strerror(ret));\n            return;\n        } else {\n#if BYTE_ORDER == BIG_ENDIAN\n            old >>= 64 - 8*sz;\n#endif\n            addReplyLongLong(c, old);\n            return;\n        }\n    }\n    addReplyErrorFormat(c,\"%s\", strerror(EINVAL));\n}\n\nvoid mallctl_string(client *c, robj **argv, int argc) {\n    int rret, wret = 0; /* wret is only read after a write was attempted. */\n    char *old;\n    size_t sz = sizeof(old);\n    /* for strings, it seems we need to first get the old value before overriding it. */\n    if ((rret=je_mallctl(argv[0]->ptr, &old, &sz, NULL, 0))) {\n        /* return error unless this option is write only. 
*/\n        if (!(rret == EPERM && argc > 1)) {\n            addReplyErrorFormat(c,\"%s\", strerror(rret));\n            return;\n        }\n    }\n    if(argc > 1) {\n        char *val = argv[1]->ptr;\n        char **valref = &val;\n        if ((!strcmp(val,\"VOID\")))\n            valref = NULL, sz = 0;\n        wret = je_mallctl(argv[0]->ptr, NULL, 0, valref, sz);\n    }\n    if (!rret)\n        addReplyBulkCString(c, old);\n    else if (wret)\n        addReplyErrorFormat(c,\"%s\", strerror(wret));\n    else\n        addReply(c, shared.ok);\n}\n#endif\n\nvoid debugCommand(client *c) {\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"AOF-FLUSH-SLEEP <microsec>\",\n\"    Make the server sleep before flushing the AOF; used for testing.\",\n\"ASSERT\",\n\"    Crash the server with a failed assertion.\",\n\"CHANGE-REPL-ID\",\n\"    Change the replication IDs of the instance.\",\n\"    Dangerous: should be used only for testing the replication subsystem.\",\n\"CONFIG-REWRITE-FORCE-ALL\",\n\"    Like CONFIG REWRITE but writes all configuration options, including\",\n\"    keywords not listed in the original configuration file or default values.\",\n\"CRASH-AND-RECOVER [<milliseconds>]\",\n\"    Hard crash and restart after a <milliseconds> delay (default 0).\",\n\"DIGEST\",\n\"    Output a hex signature representing the current DB content.\",\n\"DIGEST-VALUE <key> [<key> ...]\",\n\"    Output a hex signature of the values of all the specified keys.\",\n\"INTERNAL_SECRET\",\n\"    Return the cluster internal secret (hashed with crc16) or an error if not in cluster mode.\",\n\"ERROR <string>\",\n\"    Return a Redis protocol error with <string> as message. 
Useful for clients\",\n\"    unit tests to simulate Redis errors.\",\n\"LEAK <string>\",\n\"    Create a memory leak of the input string.\",\n\"LOG <message>\",\n\"    Write <message> to the server log.\",\n\"HTSTATS <dbid> [full]\",\n\"    Return hash table statistics of the specified Redis database.\",\n\"HTSTATS-KEY <key> [full]\",\n\"    Like HTSTATS but for the hash table stored at <key>'s value.\",\n\"KEYSIZES-HIST-ASSERT <0|1>\",\n\"    Enable/disable keysizes histogram assertion after each command.\",\n\"LOADAOF\",\n\"    Flush the AOF buffers on disk and reload the AOF in memory.\",\n\"REPLICATE <string>\",\n\"    Replicates the provided string to replicas, allowing data divergence.\",\n#ifdef USE_JEMALLOC\n\"MALLCTL <key> [<val>]\",\n\"    Get or set a malloc tuning integer.\",\n\"MALLCTL-STR <key> [<val>]\",\n\"    Get or set a malloc tuning string.\",\n#endif\n\"OBJECT <key>\",\n\"    Show low level info about `key` and associated value.\",\n\"DROP-CLUSTER-PACKET-FILTER <packet-type>\",\n\"    Drop all packets that match the filtered type. Set to -1 allow all packets.\",\n\"ENABLE-KEYMETA-RUNTIME-REGISTRATION <0|1>\",\n\"    Allow keymeta class registration outside server startup (for testing).\",\n\"OOM\",\n\"    Crash the server simulating an out-of-memory error.\",\n\"PANIC\",\n\"    Crash the server simulating a panic.\",\n\"POPULATE <count> [<prefix>] [<size>]\",\n\"    Create <count> string keys named key:<num>. If <prefix> is specified then\",\n\"    it is used instead of the 'key' prefix. These are not propagated to\",\n\"    replicas. Cluster slots are not respected so keys not belonging to the\",\n\"    current node can be created in cluster mode.\",\n\"PROTOCOL <type>\",\n\"    Reply with a test value of the specified type. <type> can be: string,\",\n\"    integer, double, bignum, null, array, set, map, attrib, push, verbatim,\",\n\"    true, false.\",\n\"RELOAD [option ...]\",\n\"    Save the RDB on disk and reload it back to memory. 
Valid <option> values:\",\n\"    * MERGE: conflicting keys will be loaded from RDB.\",\n\"    * NOFLUSH: the existing database will not be removed before load, but\",\n\"      conflicting keys will generate an exception and kill the server.\",\n\"    * NOSAVE: the database will be loaded from an existing RDB file.\",\n\"    Examples:\",\n\"    * DEBUG RELOAD: verify that the server is able to persist, flush and reload\",\n\"      the database.\",\n\"    * DEBUG RELOAD NOSAVE: replace the current database with the contents of an\",\n\"      existing RDB file.\",\n\"    * DEBUG RELOAD NOSAVE NOFLUSH MERGE: add the contents of an existing RDB\",\n\"      file to the database.\",\n\"RESTART [<milliseconds>]\",\n\"    Graceful restart: save config, db, restart after a <milliseconds> delay (default 0).\",\n\"SDSLEN <key>\",\n\"    Show low level SDS string info representing `key` and value.\",\n\"SEGFAULT\",\n\"    Crash the server with sigsegv.\",\n\"SET-ACTIVE-EXPIRE <0|1>\",\n\"    Setting it to 0 disables expiring keys (and hash-fields) in the\",\n\"    background when they are not accessed (the default Redis behavior).\",\n\"    Setting it to 1 re-enables the default.\",\n\"SET-ALLOW-ACCESS-EXPIRED <0|1>\",\n\"    Setting it to 0 prevents access to expired keys (and hash-fields),\",\n\"    simulating the standard Redis behavior. Setting it to 1 allows\",\n\"    access to expired keys (and hash-fields) without triggering deletion.\",\n\"QUICKLIST-PACKED-THRESHOLD <size>\",\n\"    Sets the threshold for elements to be inserted as plain vs packed nodes.\",\n\"    The default is 1GB; values up to 4GB are allowed. Setting it to 0 restores the default.\",\n\"SET-SKIP-CHECKSUM-VALIDATION <0|1>\",\n\"    Enables or disables checksum checks for RDB files and RESTORE's payload.\",\n\"SLEEP <seconds>\",\n\"    Stop the server for <seconds>. 
Decimals allowed.\",\n\"STRINGMATCH-TEST\",\n\"    Run a fuzz tester against the stringmatchlen() function.\",\n\"STRUCTSIZE\",\n\"    Return the size of different Redis core C structures.\",\n\"LISTPACK <key>\",\n\"    Show low level info about the listpack encoding of <key>.\",\n\"QUICKLIST <key> [<0|1>]\",\n\"    Show low level info about the quicklist encoding of <key>.\",\n\"    The optional argument (0 by default) sets the level of detail.\",\n\"CLIENT-EVICTION\",\n\"    Show low level client eviction pools info (maxmemory-clients).\",\n\"PAUSE-CRON <0|1>\",\n\"    Stop periodic cron job processing.\",\n\"REPLYBUFFER PEAK-RESET-TIME <NEVER|RESET|time>\",\n\"    Sets the time (in milliseconds) to wait between client reply buffer peak resets.\",\n\"    In case NEVER is provided the last observed peak will never be reset.\",\n\"    In case RESET is provided the peak reset time will be restored to the default value.\",\n\"REPLYBUFFER RESIZING <0|1>\",\n\"    Enable or disable the reply buffer resize cron job.\",\n\"REPLY-COPY-AVOIDANCE <0|1>\",\n\"    Enable/disable reply copy avoidance optimization.\",\n\"REPL-PAUSE <clear|after-fork|before-rdb-channel|on-streaming-repl-buf>\",\n\"    Pause the server's main process during various replication steps.\",\n\"DICT-RESIZING <0|1>\",\n\"    Enable or disable the main dict and expire dict resizing.\",\n\"SCRIPT <LIST|<sha>>\",\n\"    Output SHA and content of all scripts or of a specific script with its SHA.\",\n\"MARK-INTERNAL-CLIENT [UNMARK]\",\n\"    Promote the current connection to an internal connection.\",\n\"ASM-FAILPOINT <channel> <state>\",\n\"    Set a fail point for the specified channel and state for cluster atomic slot migration.\",\n\"ASM-TRIM-METHOD <default|none|active|bg> <active-trim-delay>\",\n\"    Disable trimming or force active/background trimming for cluster atomic slot migration.\",\n\"    Active trim delay is used only when method is 'active'. 
If it is negative,\",\n\"    active trim is disabled.\",\nNULL\n        };\n        addExtendedReplyHelp(c, help, clusterDebugCommandExtendedHelp());\n    } else if (!strcasecmp(c->argv[1]->ptr,\"segfault\")) {\n        /* The compiler warns about writing to a random address,\n         * e.g. \"*((char*)-1) = 'x';\". As a workaround, we map a read-only area\n         * and try to write there to trigger a segmentation fault. */\n        char *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0);\n        *p = 'x';\n    } else if (!strcasecmp(c->argv[1]->ptr,\"panic\")) {\n        serverPanic(\"DEBUG PANIC called at Unix time %lld\", (long long)time(NULL));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"restart\") ||\n               !strcasecmp(c->argv[1]->ptr,\"crash-and-recover\"))\n    {\n        long long delay = 0;\n        if (c->argc >= 3) {\n            if (getLongLongFromObjectOrReply(c, c->argv[2], &delay, NULL)\n                != C_OK) return;\n            if (delay < 0) delay = 0;\n        }\n        int flags = !strcasecmp(c->argv[1]->ptr,\"restart\") ?\n            (RESTART_SERVER_GRACEFULLY|RESTART_SERVER_CONFIG_REWRITE) :\n             RESTART_SERVER_NONE;\n        restartServer(flags,delay);\n        addReplyError(c,\"failed to restart the server. Check server logs.\");\n    } else if (!strcasecmp(c->argv[1]->ptr,\"oom\")) {\n        void *ptr = zmalloc(SIZE_MAX/2); /* Should trigger an out-of-memory error. 
*/\n        zfree(ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"assert\")) {\n        serverAssertWithInfo(c,c->argv[0],1 == 2);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"KEYSIZES-HIST-ASSERT\") && c->argc == 3) {\n        long long flag;\n        if (getLongLongFromObjectOrReply(c, c->argv[2], &flag, NULL) != C_OK)\n            return;\n        if (flag)\n            server.dbg_assert_flags |= DBG_ASSERT_KEYSIZES;\n        else\n            server.dbg_assert_flags &= ~DBG_ASSERT_KEYSIZES;\n        addReply(c, shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"ALLOCSIZE-SLOTS-ASSERT\") && c->argc == 3) {\n        long long flag;\n        if (getLongLongFromObjectOrReply(c, c->argv[2], &flag, NULL) != C_OK)\n            return;\n        if (flag)\n            server.dbg_assert_flags |= DBG_ASSERT_ALLOC_SLOT;\n        else\n            server.dbg_assert_flags &= ~DBG_ASSERT_ALLOC_SLOT;\n        addReply(c, shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"log\") && c->argc == 3) {\n        serverLog(LL_WARNING, \"DEBUG LOG: %s\", (char*)c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"leak\") && c->argc == 3) {\n        sdsdup(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reload\")) {\n        int flush = 1, save = 1;\n        int flags = RDBFLAGS_NONE;\n\n        /* Parse the additional options that modify the RELOAD\n         * behavior. 
*/\n        for (int j = 2; j < c->argc; j++) {\n            char *opt = c->argv[j]->ptr;\n            if (!strcasecmp(opt,\"MERGE\")) {\n                flags |= RDBFLAGS_ALLOW_DUP;\n            } else if (!strcasecmp(opt,\"NOFLUSH\")) {\n                flush = 0;\n            } else if (!strcasecmp(opt,\"NOSAVE\")) {\n                save = 0;\n            } else {\n                addReplyError(c,\"DEBUG RELOAD only supports the \"\n                                \"MERGE, NOFLUSH and NOSAVE options.\");\n                return;\n            }\n        }\n\n        /* The default behavior is to save the RDB file before loading\n         * it back. */\n        if (save) {\n            rdbSaveInfo rsi, *rsiptr;\n            rsiptr = rdbPopulateSaveInfo(&rsi);\n            if (rdbSave(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE) != C_OK) {\n                addReplyErrorObject(c,shared.err);\n                return;\n            }\n        }\n\n        /* The default behavior is to remove the current dataset from\n         * memory before loading the RDB file, however when MERGE is\n         * used together with NOFLUSH, we are able to merge two datasets. 
*/\n        if (flush) emptyData(-1,EMPTYDB_NO_FLAGS,NULL);\n\n        protectClient(c);\n        int ret = rdbLoad(server.rdb_filename,NULL,flags);\n        unprotectClient(c);\n        if (ret != RDB_OK) {\n            addReplyError(c,\"Error trying to load the RDB dump, check server logs.\");\n            return;\n        }\n        applyAppendOnlyConfig(); /* Check if AOF config was changed while loading */\n        serverLog(LL_NOTICE,\"DB reloaded by DEBUG RELOAD\");\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"loadaof\")) {\n        if (server.aof_state != AOF_OFF) flushAppendOnlyFile(1);\n        emptyData(-1,EMPTYDB_NO_FLAGS,NULL);\n        protectClient(c);\n        if (server.aof_manifest) aofManifestFree(server.aof_manifest);\n        aofLoadManifestFromDisk();\n        aofDelHistoryFiles();\n        int ret = loadAppendOnlyFiles(server.aof_manifest);\n        unprotectClient(c);\n        if (ret != AOF_OK && ret != AOF_EMPTY) {\n            addReplyError(c, \"Error trying to load the AOF files, check server logs.\");\n            return;\n        }\n        applyAppendOnlyConfig(); /* Check if AOF config was changed while loading */\n        server.dirty = 0; /* Prevent AOF / replication */\n        serverLog(LL_NOTICE,\"Append Only File loaded by DEBUG LOADAOF\");\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"drop-cluster-packet-filter\") && c->argc == 3) {\n        long packet_type;\n        if (getLongFromObjectOrReply(c, c->argv[2], &packet_type, NULL) != C_OK)\n            return;\n        server.cluster_drop_packet_filter = packet_type;\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"object\") && c->argc == 3) {\n        kvobj *kv;\n        char *strenc;\n\n        if ((kv = dbFind(c->db, c->argv[2]->ptr)) == NULL) {\n            addReplyErrorObject(c,shared.nokeyerr);\n            return;\n        }\n\n        strenc = strEncoding(kv->encoding);\n\n    
    char extra[138] = {0};\n        if (kv->encoding == OBJ_ENCODING_QUICKLIST) {\n            char *nextra = extra;\n            int remaining = sizeof(extra);\n            quicklist *ql = kv->ptr;\n            /* Add number of quicklist nodes */\n            int used = snprintf(nextra, remaining, \" ql_nodes:%lu\", ql->len);\n            nextra += used;\n            remaining -= used;\n            /* Add average quicklist fill factor */\n            double avg = (double)ql->count/ql->len;\n            used = snprintf(nextra, remaining, \" ql_avg_node:%.2f\", avg);\n            nextra += used;\n            remaining -= used;\n            /* Add quicklist fill level / max listpack size */\n            used = snprintf(nextra, remaining, \" ql_listpack_max:%d\", ql->fill);\n            nextra += used;\n            remaining -= used;\n            /* Add isCompressed? */\n            int compressed = ql->compress != 0;\n            used = snprintf(nextra, remaining, \" ql_compressed:%d\", compressed);\n            nextra += used;\n            remaining -= used;\n            /* Add total uncompressed size */\n            unsigned long sz = 0;\n            for (quicklistNode *node = ql->head; node; node = node->next) {\n                sz += node->sz;\n            }\n            used = snprintf(nextra, remaining, \" ql_uncompressed_size:%lu\", sz);\n            nextra += used;\n            remaining -= used;\n        }\n\n        addReplyStatusFormat(c,\n            \"Value at:%p refcount:%d \"\n            \"encoding:%s serializedlength:%zu \"\n            \"lru:%d lru_seconds_idle:%llu%s\",\n            (void*)kv, kv->refcount,\n            strenc, rdbSavedObjectLen(kv, c->argv[2], c->db->id),\n            kv->lru, estimateObjectIdleTime(kv)/1000, extra);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"sdslen\") && c->argc == 3) {\n        robj *val;\n        sds key;\n        kvobj *kv;\n\n        if ((kv = dbFind(c->db, c->argv[2]->ptr)) == NULL) {\n            
addReplyErrorObject(c,shared.nokeyerr);\n            return;\n        }\n        \n        val = kv;\n        key = kvobjGetKey(kv);\n        if (kv->type != OBJ_STRING || !sdsEncodedObject(val)) {\n            addReplyError(c,\"Not an sds encoded string.\");\n        } else {\n            /* The key's allocation size reflects the entire robj allocation.  \n             * For embedded values, report an allocation size of 0. */\n            size_t obj_alloc = zmalloc_usable_size(val);\n            size_t val_alloc = val->encoding == OBJ_ENCODING_RAW ? sdsAllocSize(val->ptr) : 0;\n            addReplyStatusFormat(c,\n                \"key_sds_len:%lld, key_sds_avail:%lld, key_zmalloc: %lld, \"\n                \"val_sds_len:%lld, val_sds_avail:%lld, val_zmalloc: %lld\",\n                (long long) sdslen(key),\n                (long long) sdsavail(key),\n                (long long) obj_alloc,\n                (long long) sdslen(val->ptr),\n                (long long) sdsavail(val->ptr),\n                (long long) val_alloc);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"listpack\") && c->argc == 3) {\n        kvobj *o;\n\n        if ((o = kvobjCommandLookupOrReply(c, c->argv[2], shared.nokeyerr))\n                == NULL) return;\n\n        if (o->encoding != OBJ_ENCODING_LISTPACK && o->encoding != OBJ_ENCODING_LISTPACK_EX) {\n            addReplyError(c,\"Not a listpack encoded object.\");\n        } else {\n            if (o->encoding == OBJ_ENCODING_LISTPACK)\n                lpRepr(o->ptr);\n            else if (o->encoding == OBJ_ENCODING_LISTPACK_EX)\n                lpRepr(((listpackEx*)o->ptr)->lp);\n\n            addReplyStatus(c,\"Listpack structure printed on stdout\");\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"quicklist\") && (c->argc == 3 || c->argc == 4)) {\n        kvobj *o;\n\n        if ((o = kvobjCommandLookupOrReply(c, c->argv[2], shared.nokeyerr))\n            == NULL) return;\n\n        int full = 0;\n        if 
(c->argc == 4)\n            full = atoi(c->argv[3]->ptr);\n        if (o->encoding != OBJ_ENCODING_QUICKLIST) {\n            addReplyError(c,\"Not a quicklist encoded object.\");\n        } else {\n            quicklistRepr(o->ptr, full);\n            addReplyStatus(c,\"Quicklist structure printed on stdout\");\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"populate\") &&\n               c->argc >= 3 && c->argc <= 5) {\n        long keys, j;\n        robj *key, *val;\n        char buf[128];\n\n        if (getPositiveLongFromObjectOrReply(c, c->argv[2], &keys, NULL) != C_OK)\n            return;\n\n        if (server.loading || server.async_loading) {\n            addReplyErrorObject(c, shared.loadingerr);\n            return;\n        }\n\n        if (dbExpand(c->db, keys, 1) == C_ERR) {\n            addReplyError(c, \"OOM in dictTryExpand\");\n            return;\n        }\n        long valsize = 0;\n        if ( c->argc == 5 && getPositiveLongFromObjectOrReply(c, c->argv[4], &valsize, NULL) != C_OK ) \n            return;\n\n        for (j = 0; j < keys; j++) {\n            snprintf(buf,sizeof(buf),\"%s:%lu\",\n                (c->argc == 3) ? \"key\" : (char*)c->argv[3]->ptr, j);\n            key = createStringObject(buf,strlen(buf));\n            if (lookupKeyWrite(c->db,key) != NULL) {\n                decrRefCount(key);\n                continue;\n            }\n            snprintf(buf,sizeof(buf),\"value:%lu\",j);\n            if (valsize==0)\n                val = createStringObject(buf,strlen(buf));\n            else {\n                int buflen = strlen(buf);\n                val = createStringObject(NULL,valsize);\n                memcpy(val->ptr, buf, valsize<=buflen? 
valsize: buflen);\n            }\n            dbAdd(c->db, key, &val);\n            keyModified(c,c->db,key,val,1);\n            decrRefCount(key);\n        }\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"digest\") && c->argc == 2) {\n        /* DEBUG DIGEST (form without keys specified) */\n        unsigned char digest[20];\n        sds d = sdsempty();\n\n        computeDatasetDigest(digest);\n        for (int i = 0; i < 20; i++) d = sdscatprintf(d, \"%02x\",digest[i]);\n        addReplyStatus(c,d);\n        sdsfree(d);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"internal_secret\") && c->argc == 2) {\n        size_t len;\n        const char *internal_secret = clusterGetSecret(&len);\n        if (!internal_secret) {\n            addReplyError(c, \"Internal secret is missing\");\n        } else {\n            uint16_t hash = crc16(internal_secret, len);\n            addReplyLongLong(c, hash);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"digest-value\") && c->argc >= 2) {\n        /* DEBUG DIGEST-VALUE key key key ... key. 
*/\n        addReplyArrayLen(c,c->argc-2);\n        for (int j = 2; j < c->argc; j++) {\n            unsigned char digest[20];\n            memset(digest,0,20); /* Start with a clean result */\n\n            /* We don't use lookupKey because a debug command should\n             * work on logically expired keys */\n            kvobj *o = dbFind(c->db, c->argv[j]->ptr);\n            if (o) xorObjectDigest(c->db,c->argv[j],digest,o);\n\n            sds d = sdsempty();\n            for (int i = 0; i < 20; i++) d = sdscatprintf(d, \"%02x\",digest[i]);\n            addReplyStatus(c,d);\n            sdsfree(d);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"protocol\") && c->argc == 3) {\n        /* DEBUG PROTOCOL [string|integer|double|bignum|null|array|set|map|\n         *                 attrib|push|verbatim|true|false] */\n        char *name = c->argv[2]->ptr;\n        if (!strcasecmp(name,\"string\")) {\n            addReplyBulkCString(c,\"Hello World\");\n        } else if (!strcasecmp(name,\"integer\")) {\n            addReplyLongLong(c,12345);\n        } else if (!strcasecmp(name,\"double\")) {\n            addReplyDouble(c,3.141);\n        } else if (!strcasecmp(name,\"bignum\")) {\n            addReplyBigNum(c,\"1234567999999999999999999999999999999\",37);\n        } else if (!strcasecmp(name,\"null\")) {\n            addReplyNull(c);\n        } else if (!strcasecmp(name,\"array\")) {\n            addReplyArrayLen(c,3);\n            for (int j = 0; j < 3; j++) addReplyLongLong(c,j);\n        } else if (!strcasecmp(name,\"set\")) {\n            addReplySetLen(c,3);\n            for (int j = 0; j < 3; j++) addReplyLongLong(c,j);\n        } else if (!strcasecmp(name,\"map\")) {\n            addReplyMapLen(c,3);\n            for (int j = 0; j < 3; j++) {\n                addReplyLongLong(c,j);\n                addReplyBool(c, j == 1);\n            }\n        } else if (!strcasecmp(name,\"attrib\")) {\n            if (c->resp >= 3) {\n                
addReplyAttributeLen(c,1);\n                addReplyBulkCString(c,\"key-popularity\");\n                addReplyArrayLen(c,2);\n                addReplyBulkCString(c,\"key:123\");\n                addReplyLongLong(c,90);\n            }\n            /* Attributes are not real replies, so a well formed reply should\n             * also have a normal reply type after the attribute. */\n            addReplyBulkCString(c,\"Some real reply following the attribute\");\n        } else if (!strcasecmp(name,\"push\")) {\n            if (c->resp < 3) {\n                addReplyError(c,\"RESP2 is not supported by this command\");\n                return;\n            }\n            uint64_t old_flags = c->flags;\n            c->flags |= CLIENT_PUSHING;\n            addReplyPushLen(c,2);\n            addReplyBulkCString(c,\"server-cpu-usage\");\n            addReplyLongLong(c,42);\n            if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n            /* Push replies are not synchronous replies, so we also emit a\n             * normal reply: blocking clients that just discard the push\n             * reply can then consume the normal reply and continue. */\n            addReplyBulkCString(c,\"Some real reply following the push reply\");\n        } else if (!strcasecmp(name,\"true\")) {\n            addReplyBool(c,1);\n        } else if (!strcasecmp(name,\"false\")) {\n            addReplyBool(c,0);\n        } else if (!strcasecmp(name,\"verbatim\")) {\n            addReplyVerbatim(c,\"This is a verbatim\\nstring\",25,\"txt\");\n        } else {\n            addReplyError(c,\"Wrong protocol type name. 
Please use one of the following: string|integer|double|bignum|null|array|set|map|attrib|push|verbatim|true|false\");\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"sleep\") && c->argc == 3) {\n        double dtime = fast_float_strtod(c->argv[2]->ptr,sdslen(c->argv[2]->ptr),NULL);\n        long long utime = dtime*1000000;\n        struct timespec tv;\n\n        tv.tv_sec = utime / 1000000;\n        tv.tv_nsec = (utime % 1000000) * 1000;\n        nanosleep(&tv, NULL);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set-active-expire\") &&\n               c->argc == 3)\n    {\n        server.active_expire_enabled = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set-allow-access-expired\") &&\n               c->argc == 3)\n    {\n        server.allow_access_expired = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"quicklist-packed-threshold\") &&\n               c->argc == 3)\n    {\n        int memerr;\n        unsigned long long sz = memtoull((const char *)c->argv[2]->ptr, &memerr);\n        if (memerr || !quicklistSetPackedThreshold(sz)) {\n            addReplyError(c, \"argument must be a memory value bigger than 1 and smaller than 4gb\");\n        } else {\n            addReply(c,shared.ok);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set-skip-checksum-validation\") &&\n               c->argc == 3)\n    {\n        server.skip_checksum_validation = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"enable-keymeta-runtime-registration\") &&\n               c->argc == 3)\n    {\n        server.allow_keymeta_registration = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"aof-flush-sleep\") &&\n               c->argc == 3)\n    {\n        server.aof_flush_sleep = atoi(c->argv[2]->ptr);\n        
addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"replicate\") && c->argc >= 3) {\n        replicationFeedSlaves(server.slaves, -1,\n                c->argv + 2, c->argc - 2);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"error\") && c->argc == 3) {\n        sds errstr = sdsnewlen(\"-\",1);\n\n        errstr = sdscatsds(errstr,c->argv[2]->ptr);\n        errstr = sdsmapchars(errstr,\"\\n\\r\",\"  \",2); /* no newlines in errors. */\n        errstr = sdscatlen(errstr,\"\\r\\n\",2);\n        addReplySds(c,errstr);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"structsize\") && c->argc == 2) {\n        sds sizes = sdsempty();\n        sizes = sdscatprintf(sizes,\"bits:%d \",(sizeof(void*) == 8)?64:32);\n        sizes = sdscatprintf(sizes,\"robj:%d \",(int)sizeof(robj));\n        sizes = sdscatprintf(sizes,\"dictentry:%d \",(int)dictEntryMemUsage(0));\n        sizes = sdscatprintf(sizes,\"sdshdr5:%d \",(int)sizeof(struct sdshdr5));\n        sizes = sdscatprintf(sizes,\"sdshdr8:%d \",(int)sizeof(struct sdshdr8));\n        sizes = sdscatprintf(sizes,\"sdshdr16:%d \",(int)sizeof(struct sdshdr16));\n        sizes = sdscatprintf(sizes,\"sdshdr32:%d \",(int)sizeof(struct sdshdr32));\n        sizes = sdscatprintf(sizes,\"sdshdr64:%d \",(int)sizeof(struct sdshdr64));\n        addReplyBulkSds(c,sizes);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"htstats\") && c->argc >= 3) {\n        long dbid;\n        sds stats = sdsempty();\n        char buf[4096];\n        int full = 0;\n\n        if (getLongFromObjectOrReply(c, c->argv[2], &dbid, NULL) != C_OK) {\n            sdsfree(stats);\n            return;\n        }\n        if (dbid < 0 || dbid >= server.dbnum) {\n            sdsfree(stats);\n            addReplyError(c,\"Out of range database\");\n            return;\n        }\n        if (c->argc >= 4 && !strcasecmp(c->argv[3]->ptr,\"full\"))\n            full = 1;\n\n        stats = sdscatprintf(stats,\"[Dictionary HT]\\n\");\n 
       kvstoreGetStats(server.db[dbid].keys, buf, sizeof(buf), full);\n        stats = sdscat(stats,buf);\n\n        stats = sdscatprintf(stats,\"[Expires HT]\\n\");\n        kvstoreGetStats(server.db[dbid].expires, buf, sizeof(buf), full);\n        stats = sdscat(stats,buf);\n\n        addReplyVerbatim(c,stats,sdslen(stats),\"txt\");\n        sdsfree(stats);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"htstats-key\") && c->argc >= 3) {\n        kvobj *o;\n        dict *ht = NULL;\n        int full = 0;\n\n        if (c->argc >= 4 && !strcasecmp(c->argv[3]->ptr,\"full\"))\n            full = 1;\n\n        if ((o = kvobjCommandLookupOrReply(c,c->argv[2],shared.nokeyerr))\n                == NULL) return;\n\n        /* Get the hash table reference from the object, if possible. */\n        switch (o->encoding) {\n        case OBJ_ENCODING_SKIPLIST:\n            {\n                zset *zs = o->ptr;\n                ht = zs->dict;\n            }\n            break;\n        case OBJ_ENCODING_HT:\n            ht = o->ptr;\n            break;\n        }\n\n        if (ht == NULL) {\n            addReplyError(c,\"The value stored at the specified key is not \"\n                            \"represented using a hash table\");\n        } else {\n            char buf[4096];\n            dictGetStats(buf,sizeof(buf),ht,full);\n            addReplyVerbatim(c,buf,strlen(buf),\"txt\");\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"change-repl-id\") && c->argc == 2) {\n        serverLog(LL_NOTICE,\"Changing replication IDs after receiving DEBUG change-repl-id\");\n        changeReplicationId();\n        clearReplicationId2();\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"stringmatch-test\") && c->argc == 2)\n    {\n        stringmatchlen_fuzz_test();\n        addReplyStatus(c,\"Apparently Redis did not crash: test passed\");\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set-disable-deny-scripts\") && c->argc == 3)\n    {\n        
server.script_disable_deny_script = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"config-rewrite-force-all\") && c->argc == 2)\n    {\n        if (rewriteConfig(server.configfile, 1) == -1)\n            addReplyErrorFormat(c, \"CONFIG-REWRITE-FORCE-ALL failed: %s\", strerror(errno));\n        else\n            addReply(c, shared.ok);\n    } else if(!strcasecmp(c->argv[1]->ptr,\"client-eviction\") && c->argc == 2) {\n        if (!server.client_mem_usage_buckets) {\n            addReplyError(c,\"maxmemory-clients is disabled.\");\n            return;\n        }\n        sds bucket_info = sdsempty();\n        for (int j = 0; j < CLIENT_MEM_USAGE_BUCKETS; j++) {\n            if (j == 0)\n                bucket_info = sdscatprintf(bucket_info, \"bucket          0\");\n            else\n                bucket_info = sdscatprintf(bucket_info, \"bucket %10zu\", (size_t)1<<(j-1+CLIENT_MEM_USAGE_BUCKET_MIN_LOG));\n            if (j == CLIENT_MEM_USAGE_BUCKETS-1)\n                bucket_info = sdscatprintf(bucket_info, \"+            : \");\n            else\n                bucket_info = sdscatprintf(bucket_info, \" - %10zu: \", ((size_t)1<<(j+CLIENT_MEM_USAGE_BUCKET_MIN_LOG))-1);\n            bucket_info = sdscatprintf(bucket_info, \"tot-mem: %10zu, clients: %lu\\n\",\n                server.client_mem_usage_buckets[j].mem_usage_sum,\n                server.client_mem_usage_buckets[j].clients->len);\n        }\n        addReplyVerbatim(c,bucket_info,sdslen(bucket_info),\"txt\");\n        sdsfree(bucket_info);\n#ifdef USE_JEMALLOC\n    } else if(!strcasecmp(c->argv[1]->ptr,\"mallctl\") && c->argc >= 3) {\n        mallctl_int(c, c->argv+2, c->argc-2);\n        return;\n    } else if(!strcasecmp(c->argv[1]->ptr,\"mallctl-str\") && c->argc >= 3) {\n        mallctl_string(c, c->argv+2, c->argc-2);\n        return;\n#endif\n    } else if (!strcasecmp(c->argv[1]->ptr,\"pause-cron\") && c->argc == 3)\n    {\n        
server.pause_cron = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"replybuffer\") && c->argc == 4 ) {\n        if(!strcasecmp(c->argv[2]->ptr, \"peak-reset-time\")) {\n            if (!strcasecmp(c->argv[3]->ptr, \"never\")) {\n                server.reply_buffer_peak_reset_time = -1;\n            } else if(!strcasecmp(c->argv[3]->ptr, \"reset\")) {\n                server.reply_buffer_peak_reset_time = REPLY_BUFFER_DEFAULT_PEAK_RESET_TIME;\n            } else {\n                if (getLongFromObjectOrReply(c, c->argv[3], &server.reply_buffer_peak_reset_time, NULL) != C_OK)\n                    return;\n            }\n        } else if(!strcasecmp(c->argv[2]->ptr,\"resizing\")) {\n            server.reply_buffer_resizing_enabled = atoi(c->argv[3]->ptr);\n        } else {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n        addReply(c, shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reply-copy-avoidance\") && c->argc == 3) {\n        server.reply_copy_avoidance_enabled = atoi(c->argv[2]->ptr);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr, \"repl-pause\") && c->argc == 3) {\n        if (!strcasecmp(c->argv[2]->ptr, \"clear\")) {\n            server.repl_debug_pause = REPL_DEBUG_PAUSE_NONE;\n        } else if (!strcasecmp(c->argv[2]->ptr,\"after-fork\")) {\n            server.repl_debug_pause |= REPL_DEBUG_AFTER_FORK;\n        } else if (!strcasecmp(c->argv[2]->ptr,\"before-rdb-channel\")) {\n            server.repl_debug_pause |= REPL_DEBUG_BEFORE_RDB_CHANNEL;\n        } else if (!strcasecmp(c->argv[2]->ptr, \"on-streaming-repl-buf\")) {\n            server.repl_debug_pause |= REPL_DEBUG_ON_STREAMING_REPL_BUF;\n        } else {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n        addReply(c, shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr, \"dict-resizing\") && c->argc == 3) {\n        
server.dict_resizing = atoi(c->argv[2]->ptr);\n        addReply(c, shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"script\") && c->argc == 3) {\n        if (server.hide_user_data_from_log) {\n            addReplyError(c, \"DEBUG SCRIPT is disabled when hide-user-data-from-log is enabled\");\n            return;\n        }\n        if (!strcasecmp(c->argv[2]->ptr,\"list\")) {\n            dictIterator di;\n            dictEntry *de;\n            dictInitIterator(&di, evalScriptsDict());\n            while ((de = dictNext(&di)) != NULL) {\n                luaScript *script = dictGetVal(de);\n                sds *sha = dictGetKey(de);\n                serverLog(LL_WARNING, \"SCRIPT SHA: %s\\n%s\", (char*)sha, (char*)script->body->ptr);\n            }\n            dictResetIterator(&di);\n        } else if (sdslen(c->argv[2]->ptr) == 40) {\n            dictEntry *de;\n            if ((de = dictFind(evalScriptsDict(), c->argv[2]->ptr)) == NULL) {\n                addReplyErrorObject(c, shared.noscripterr);\n                return;\n            }\n            luaScript *script = dictGetVal(de);\n            serverLog(LL_WARNING, \"SCRIPT SHA: %s\\n%s\", (char*)c->argv[2]->ptr, (char*)script->body->ptr);\n        } else {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n        addReply(c,shared.ok);\n    } else if(!strcasecmp(c->argv[1]->ptr,\"mark-internal-client\") && c->argc < 4) {\n        if (c->argc == 2) {\n            c->flags |= CLIENT_INTERNAL;\n            addReply(c, shared.ok);\n        } else if (c->argc == 3 && !strcasecmp(c->argv[2]->ptr, \"unmark\")) {\n            c->flags &= ~CLIENT_INTERNAL;\n            addReply(c, shared.ok);\n        } else {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n    } else if(!strcasecmp(c->argv[1]->ptr,\"asm-failpoint\") && c->argc == 4) {\n        if (asmDebugSetFailPoint(c->argv[2]->ptr, c->argv[3]->ptr) != C_OK) {\n            addReplyError(c, 
\"Failed to set ASM fail point\");\n        } else {\n            addReply(c, shared.ok);\n        }\n    } else if(!strcasecmp(c->argv[1]->ptr,\"asm-trim-method\") && c->argc >= 3) {\n        int delay = c->argc == 4 ? atoi(c->argv[3]->ptr) : 0;\n        if (asmDebugSetTrimMethod(c->argv[2]->ptr, delay) != C_OK) {\n            addReplyError(c, \"Failed to set ASM trim method\");\n        } else {\n            addReply(c, shared.ok);\n        }\n    } else if(!handleDebugClusterCommand(c)) {\n        addReplySubcommandSyntaxError(c);\n        return;\n    }\n}\n\n/* =========================== Crash handling  ============================== */\n\n/* When hide-user-data-from-log is enabled, to avoid leaking user info, we only\n * print tokens of the current command into the log. First, we collect command\n * tokens into this struct (command tokens are defined in the json schema).\n * Later, we check each argument against the token list. */\n#define CMD_TOKEN_MAX_COUNT 128 /* Max token count in a command's json schema */\nstruct cmdToken {\n    const char *tokens[CMD_TOKEN_MAX_COUNT];\n    int n_token;\n};\n\n/* Collect tokens from command arguments recursively. */\nstatic void cmdTokenCollect(struct cmdToken *tk, redisCommandArg *args, int argc) {\n    if (args == NULL)\n        return;\n\n    for (int i = 0; i < argc && tk->n_token < CMD_TOKEN_MAX_COUNT; i++) {\n        if (args[i].token)\n            tk->tokens[tk->n_token++] = args[i].token;\n        cmdTokenCollect(tk, args[i].subargs, args[i].num_args);\n    }\n}\n\n/* Get tokens of the command. */\nstatic void cmdTokenGetFromCommand(struct cmdToken *tk, struct redisCommand *cmd) {\n    tk->n_token = 0;\n    cmdTokenCollect(tk, cmd->args, cmd->num_args);\n}\n\n/* Check if object is one of command's tokens. 
*/\nstatic int cmdTokenCheck(struct cmdToken *tk, robj *o) {\n    if (o->type != OBJ_STRING || !sdsEncodedObject(o))\n        return 0;\n\n    for (int i = 0; i < tk->n_token; i++) {\n        if (strcasecmp(tk->tokens[i], o->ptr) == 0)\n            return 1;\n    }\n    return 0;\n}\n\n__attribute__ ((noinline))\nvoid _serverAssert(const char *estr, const char *file, int line) {\n    int new_report = bugReportStart();\n    serverLog(LL_WARNING,\"=== %sASSERTION FAILED ===\", new_report ? \"\" : \"RECURSIVE \");\n    serverLog(LL_WARNING,\"==> %s:%d '%s' is not true\",file,line,estr);\n\n    if (server.crashlog_enabled) {\n#ifdef HAVE_BACKTRACE\n        logStackTrace(NULL, 1, 0);\n#endif\n        /* If this was a recursive assertion, it was most likely generated\n         * from printCrashReport. */\n        if (new_report) printCrashReport();\n    }\n\n    // remove the signal handler so on abort() we will output the crash report.\n    removeSigSegvHandlers();\n    bugReportEnd(0, 0);\n}\n\nvoid _serverAssertPrintClientInfo(const client *c) {\n    int j;\n    char conninfo[CONN_INFO_LEN];\n    struct redisCommand *cmd = NULL;\n    struct cmdToken tokens = {{0}};\n\n    bugReportStart();\n    serverLog(LL_WARNING,\"=== ASSERTION FAILED CLIENT CONTEXT ===\");\n    serverLog(LL_WARNING,\"client->flags = %llu\", (unsigned long long) c->flags);\n    serverLog(LL_WARNING,\"client->conn = %s\", connGetInfo(c->conn, conninfo, sizeof(conninfo)));\n    serverLog(LL_WARNING,\"client->argc = %d\", c->argc);\n    if (server.hide_user_data_from_log) {\n        cmd = lookupCommand(c->argv, c->argc);\n        if (cmd)\n            cmdTokenGetFromCommand(&tokens, cmd);\n    }\n\n    for (j=0; j < c->argc; j++) {\n        char buf[128];\n        char *arg;\n\n        /* Allow command name, subcommand name and command tokens in the log. 
*/\n        if (server.hide_user_data_from_log && (j != 0 && !(j == 1 && cmd && cmd->parent))) {\n            if (!cmdTokenCheck(&tokens, c->argv[j])) {\n                serverLog(LL_WARNING, \"client->argv[%d] = *redacted*\", j);\n                continue;\n            }\n        }\n\n        if (c->argv[j]->type == OBJ_STRING && sdsEncodedObject(c->argv[j])) {\n            arg = (char*) c->argv[j]->ptr;\n        } else {\n            snprintf(buf,sizeof(buf),\"Object type: %u, encoding: %u\",\n                c->argv[j]->type, c->argv[j]->encoding);\n            arg = buf;\n        }\n        serverLog(LL_WARNING,\"client->argv[%d] = \\\"%s\\\" (refcount: %d)\",\n            j, arg, c->argv[j]->refcount);\n    }\n}\n\nvoid serverLogObjectDebugInfo(const robj *o) {\n    serverLog(LL_WARNING,\"Object type: %u\", o->type);\n    serverLog(LL_WARNING,\"Object encoding: %u\", o->encoding);\n    serverLog(LL_WARNING,\"Object refcount: %d\", o->refcount);\n#if UNSAFE_CRASH_REPORT\n    /* This code is now disabled. o->ptr may be unreliable to print. in some\n     * cases a ziplist could have already been freed by realloc, but not yet\n     * updated to o->ptr. in other cases the call to ziplistLen may need to\n     * iterate on all the items in the list (and possibly crash again).\n     * For some cases it may be ok to crash here again, but these could cause\n     * invalid memory access which will bother valgrind and also possibly cause\n     * random memory portion to be \"leaked\" into the logfile. 
*/\n    if (o->type == OBJ_STRING && sdsEncodedObject(o)) {\n        serverLog(LL_WARNING,\"Object raw string len: %zu\", sdslen(o->ptr));\n        if (!server.hide_user_data_from_log && sdslen(o->ptr) < 4096) {\n            sds repr = sdscatrepr(sdsempty(),o->ptr,sdslen(o->ptr));\n            serverLog(LL_WARNING,\"Object raw string content: %s\", repr);\n            sdsfree(repr);\n        }\n    } else if (o->type == OBJ_LIST) {\n        serverLog(LL_WARNING,\"List length: %d\", (int) listTypeLength(o));\n    } else if (o->type == OBJ_SET) {\n        serverLog(LL_WARNING,\"Set size: %d\", (int) setTypeSize(o));\n    } else if (o->type == OBJ_HASH) {\n        serverLog(LL_WARNING,\"Hash size: %d\", (int) hashTypeLength(o, 0));\n    } else if (o->type == OBJ_ZSET) {\n        serverLog(LL_WARNING,\"Sorted set size: %d\", (int) zsetLength(o));\n        if (o->encoding == OBJ_ENCODING_SKIPLIST)\n            serverLog(LL_WARNING,\"Skiplist level: %d\", (int) ((const zset*)o->ptr)->zsl->level);\n    } else if (o->type == OBJ_STREAM) {\n        serverLog(LL_WARNING,\"Stream size: %d\", (int) streamLength(o));\n    } else if (o->type == OBJ_GCRA) {\n#if UINTPTR_MAX == 0xffffffffffffffff\n        serverLog(LL_WARNING, \"GCRA object: %lld\", (long long)o->ptr);\n#endif\n    }\n#endif\n}\n\nvoid _serverAssertPrintObject(const robj *o) {\n    bugReportStart();\n    serverLog(LL_WARNING,\"=== ASSERTION FAILED OBJECT CONTEXT ===\");\n    serverLogObjectDebugInfo(o);\n}\n\nvoid _serverAssertWithInfo(const client *c, const robj *o, const char *estr, const char *file, int line) {\n    if (c) _serverAssertPrintClientInfo(c);\n    if (o) _serverAssertPrintObject(o);\n    _serverAssert(estr,file,line);\n}\n\n__attribute__ ((noinline))\nvoid _serverPanic(const char *file, int line, const char *msg, ...) 
{\n    va_list ap;\n    va_start(ap,msg);\n    char fmtmsg[256];\n    vsnprintf(fmtmsg,sizeof(fmtmsg),msg,ap);\n    va_end(ap);\n\n    int new_report = bugReportStart();\n    serverLog(LL_WARNING,\"------------------------------------------------\");\n    serverLog(LL_WARNING,\"!!! Software Failure. Press left mouse button to continue\");\n    serverLog(LL_WARNING,\"Guru Meditation: %s #%s:%d\",fmtmsg,file,line);\n\n    if (server.crashlog_enabled) {\n#ifdef HAVE_BACKTRACE\n        logStackTrace(NULL, 1, 0);\n#endif\n        /* If this was a recursive panic, it was most likely generated\n         * from printCrashReport. */\n        if (new_report) printCrashReport();\n    }\n\n    // remove the signal handler so on abort() we will output the crash report.\n    removeSigSegvHandlers();\n    bugReportEnd(0, 0);\n}\n\n/* Start a bug report, returning 1 if this is the first time this function was called, 0 otherwise. */\nint bugReportStart(void) {\n    pthread_mutex_lock(&bug_report_start_mutex);\n    if (bug_report_start == 0) {\n        bug_report_start = 1;\n        serverLogRaw(LL_WARNING|LL_RAW,\n        \"\\n\\n=== REDIS BUG REPORT START: Cut & paste starting from here ===\\n\");\n        pthread_mutex_unlock(&bug_report_start_mutex);\n        return 1;\n    }\n    pthread_mutex_unlock(&bug_report_start_mutex);\n    return 0;\n}\n\n#ifdef HAVE_BACKTRACE\n\n/* Returns the current eip and sets it to the given new value (if it's not NULL) */\nstatic void* getAndSetMcontextEip(ucontext_t *uc, void *eip) {\n#define NOT_SUPPORTED() do {\\\n    UNUSED(uc);\\\n    UNUSED(eip);\\\n    return NULL;\\\n} while(0)\n#define GET_SET_RETURN(target_var, new_val) do {\\\n    void *old_val = (void*)target_var; \\\n    if (new_val) { \\\n        void **temp = (void**)&target_var; \\\n        *temp = new_val; \\\n    } \\\n    return old_val; \\\n} while(0)\n#if defined(__APPLE__) && !defined(MAC_OS_10_6_DETECTED)\n    /* OSX < 10.6 */\n    #if defined(__x86_64__)\n    
GET_SET_RETURN(uc->uc_mcontext->__ss.__rip, eip);\n    #elif defined(__i386__)\n    GET_SET_RETURN(uc->uc_mcontext->__ss.__eip, eip);\n    #else\n    GET_SET_RETURN(uc->uc_mcontext->__ss.__srr0, eip);\n    #endif\n#elif defined(__APPLE__) && defined(MAC_OS_10_6_DETECTED)\n    /* OSX >= 10.6 */\n    #if defined(_STRUCT_X86_THREAD_STATE64) && !defined(__i386__)\n    GET_SET_RETURN(uc->uc_mcontext->__ss.__rip, eip);\n    #elif defined(__i386__)\n    GET_SET_RETURN(uc->uc_mcontext->__ss.__eip, eip);\n    #else\n    /* OSX ARM64 */\n    void *old_val = (void*)arm_thread_state64_get_pc(uc->uc_mcontext->__ss);\n    if (eip) {\n        arm_thread_state64_set_pc_fptr(uc->uc_mcontext->__ss, eip);\n    }\n    return old_val;\n    #endif\n#elif defined(__linux__)\n    /* Linux */\n    #if defined(__i386__) || ((defined(__X86_64__) || defined(__x86_64__)) && defined(__ILP32__))\n    GET_SET_RETURN(uc->uc_mcontext.gregs[14], eip);\n    #elif defined(__X86_64__) || defined(__x86_64__)\n    GET_SET_RETURN(uc->uc_mcontext.gregs[16], eip);\n    #elif defined(__ia64__) /* Linux IA64 */\n    GET_SET_RETURN(uc->uc_mcontext.sc_ip, eip);\n    #elif defined(__riscv) /* Linux RISC-V */\n    GET_SET_RETURN(uc->uc_mcontext.__gregs[REG_PC], eip);\n    #elif defined(__arm__) /* Linux ARM */\n    GET_SET_RETURN(uc->uc_mcontext.arm_pc, eip);\n    #elif defined(__aarch64__) /* Linux AArch64 */\n    GET_SET_RETURN(uc->uc_mcontext.pc, eip);\n    #else\n    NOT_SUPPORTED();\n    #endif\n#elif defined(__FreeBSD__)\n    /* FreeBSD */\n    #if defined(__i386__)\n    GET_SET_RETURN(uc->uc_mcontext.mc_eip, eip);\n    #elif defined(__x86_64__)\n    GET_SET_RETURN(uc->uc_mcontext.mc_rip, eip);\n    #else\n    NOT_SUPPORTED();\n    #endif\n#elif defined(__OpenBSD__)\n    /* OpenBSD */\n    #if defined(__i386__)\n    GET_SET_RETURN(uc->sc_eip, eip);\n    #elif defined(__x86_64__)\n    GET_SET_RETURN(uc->sc_rip, eip);\n    #else\n    NOT_SUPPORTED();\n    #endif\n#elif defined(__NetBSD__)\n    #if 
defined(__i386__)\n    GET_SET_RETURN(uc->uc_mcontext.__gregs[_REG_EIP], eip);\n    #elif defined(__x86_64__)\n    GET_SET_RETURN(uc->uc_mcontext.__gregs[_REG_RIP], eip);\n    #else\n    NOT_SUPPORTED();\n    #endif\n#elif defined(__DragonFly__)\n    GET_SET_RETURN(uc->uc_mcontext.mc_rip, eip);\n#elif defined(__sun) && defined(__x86_64__)\n    GET_SET_RETURN(uc->uc_mcontext.gregs[REG_RIP], eip);\n#else\n    NOT_SUPPORTED();\n#endif\n#undef NOT_SUPPORTED\n}\n\nREDIS_NO_SANITIZE_MSAN(\"memory\")\nREDIS_NO_SANITIZE(\"address\")\nvoid logStackContent(void **sp) {\n    if (server.hide_user_data_from_log) {\n        serverLog(LL_NOTICE,\"hide-user-data-from-log is on, skip logging stack content to avoid spilling PII.\");\n        return;\n    }\n    int i;\n    for (i = 15; i >= 0; i--) {\n        unsigned long addr = (unsigned long) sp+i;\n        unsigned long val = (unsigned long) sp[i];\n\n        if (sizeof(long) == 4)\n            serverLog(LL_WARNING, \"(%08lx) -> %08lx\", addr, val);\n        else\n            serverLog(LL_WARNING, \"(%016lx) -> %016lx\", addr, val);\n    }\n}\n\n/* Log dump of processor registers */\nvoid logRegisters(ucontext_t *uc) {\n    serverLog(LL_WARNING|LL_RAW, \"\\n------ REGISTERS ------\\n\");\n#define NOT_SUPPORTED() do {\\\n    UNUSED(uc);\\\n    serverLog(LL_WARNING,\\\n              \"  Dumping of registers not supported for this OS/arch\");\\\n} while(0)\n\n/* OSX */\n#if defined(__APPLE__) && defined(MAC_OS_10_6_DETECTED)\n  /* OSX AMD64 */\n    #if defined(_STRUCT_X86_THREAD_STATE64) && !defined(__i386__)\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"RAX:%016lx RBX:%016lx\\nRCX:%016lx RDX:%016lx\\n\"\n    \"RDI:%016lx RSI:%016lx\\nRBP:%016lx RSP:%016lx\\n\"\n    \"R8 :%016lx R9 :%016lx\\nR10:%016lx R11:%016lx\\n\"\n    \"R12:%016lx R13:%016lx\\nR14:%016lx R15:%016lx\\n\"\n    \"RIP:%016lx EFL:%016lx\\nCS :%016lx FS:%016lx  GS:%016lx\",\n        (unsigned long) uc->uc_mcontext->__ss.__rax,\n        (unsigned long) 
uc->uc_mcontext->__ss.__rbx,\n        (unsigned long) uc->uc_mcontext->__ss.__rcx,\n        (unsigned long) uc->uc_mcontext->__ss.__rdx,\n        (unsigned long) uc->uc_mcontext->__ss.__rdi,\n        (unsigned long) uc->uc_mcontext->__ss.__rsi,\n        (unsigned long) uc->uc_mcontext->__ss.__rbp,\n        (unsigned long) uc->uc_mcontext->__ss.__rsp,\n        (unsigned long) uc->uc_mcontext->__ss.__r8,\n        (unsigned long) uc->uc_mcontext->__ss.__r9,\n        (unsigned long) uc->uc_mcontext->__ss.__r10,\n        (unsigned long) uc->uc_mcontext->__ss.__r11,\n        (unsigned long) uc->uc_mcontext->__ss.__r12,\n        (unsigned long) uc->uc_mcontext->__ss.__r13,\n        (unsigned long) uc->uc_mcontext->__ss.__r14,\n        (unsigned long) uc->uc_mcontext->__ss.__r15,\n        (unsigned long) uc->uc_mcontext->__ss.__rip,\n        (unsigned long) uc->uc_mcontext->__ss.__rflags,\n        (unsigned long) uc->uc_mcontext->__ss.__cs,\n        (unsigned long) uc->uc_mcontext->__ss.__fs,\n        (unsigned long) uc->uc_mcontext->__ss.__gs\n    );\n    logStackContent((void**)uc->uc_mcontext->__ss.__rsp);\n    #elif defined(__i386__)\n    /* OSX x86 */\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\\n\"\n    \"EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\\n\"\n    \"SS:%08lx  EFL:%08lx EIP:%08lx CS :%08lx\\n\"\n    \"DS:%08lx  ES:%08lx  FS :%08lx GS :%08lx\",\n        (unsigned long) uc->uc_mcontext->__ss.__eax,\n        (unsigned long) uc->uc_mcontext->__ss.__ebx,\n        (unsigned long) uc->uc_mcontext->__ss.__ecx,\n        (unsigned long) uc->uc_mcontext->__ss.__edx,\n        (unsigned long) uc->uc_mcontext->__ss.__edi,\n        (unsigned long) uc->uc_mcontext->__ss.__esi,\n        (unsigned long) uc->uc_mcontext->__ss.__ebp,\n        (unsigned long) uc->uc_mcontext->__ss.__esp,\n        (unsigned long) uc->uc_mcontext->__ss.__ss,\n        (unsigned long) uc->uc_mcontext->__ss.__eflags,\n        (unsigned long) 
uc->uc_mcontext->__ss.__eip,\n        (unsigned long) uc->uc_mcontext->__ss.__cs,\n        (unsigned long) uc->uc_mcontext->__ss.__ds,\n        (unsigned long) uc->uc_mcontext->__ss.__es,\n        (unsigned long) uc->uc_mcontext->__ss.__fs,\n        (unsigned long) uc->uc_mcontext->__ss.__gs\n    );\n    logStackContent((void**)uc->uc_mcontext->__ss.__esp);\n    #else\n    /* OSX ARM64 */\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"x0:%016lx x1:%016lx x2:%016lx x3:%016lx\\n\"\n    \"x4:%016lx x5:%016lx x6:%016lx x7:%016lx\\n\"\n    \"x8:%016lx x9:%016lx x10:%016lx x11:%016lx\\n\"\n    \"x12:%016lx x13:%016lx x14:%016lx x15:%016lx\\n\"\n    \"x16:%016lx x17:%016lx x18:%016lx x19:%016lx\\n\"\n    \"x20:%016lx x21:%016lx x22:%016lx x23:%016lx\\n\"\n    \"x24:%016lx x25:%016lx x26:%016lx x27:%016lx\\n\"\n    \"x28:%016lx fp:%016lx lr:%016lx\\n\"\n    \"sp:%016lx pc:%016lx cpsr:%08lx\\n\",\n        (unsigned long) uc->uc_mcontext->__ss.__x[0],\n        (unsigned long) uc->uc_mcontext->__ss.__x[1],\n        (unsigned long) uc->uc_mcontext->__ss.__x[2],\n        (unsigned long) uc->uc_mcontext->__ss.__x[3],\n        (unsigned long) uc->uc_mcontext->__ss.__x[4],\n        (unsigned long) uc->uc_mcontext->__ss.__x[5],\n        (unsigned long) uc->uc_mcontext->__ss.__x[6],\n        (unsigned long) uc->uc_mcontext->__ss.__x[7],\n        (unsigned long) uc->uc_mcontext->__ss.__x[8],\n        (unsigned long) uc->uc_mcontext->__ss.__x[9],\n        (unsigned long) uc->uc_mcontext->__ss.__x[10],\n        (unsigned long) uc->uc_mcontext->__ss.__x[11],\n        (unsigned long) uc->uc_mcontext->__ss.__x[12],\n        (unsigned long) uc->uc_mcontext->__ss.__x[13],\n        (unsigned long) uc->uc_mcontext->__ss.__x[14],\n        (unsigned long) uc->uc_mcontext->__ss.__x[15],\n        (unsigned long) uc->uc_mcontext->__ss.__x[16],\n        (unsigned long) uc->uc_mcontext->__ss.__x[17],\n        (unsigned long) uc->uc_mcontext->__ss.__x[18],\n        (unsigned long) 
uc->uc_mcontext->__ss.__x[19],\n        (unsigned long) uc->uc_mcontext->__ss.__x[20],\n        (unsigned long) uc->uc_mcontext->__ss.__x[21],\n        (unsigned long) uc->uc_mcontext->__ss.__x[22],\n        (unsigned long) uc->uc_mcontext->__ss.__x[23],\n        (unsigned long) uc->uc_mcontext->__ss.__x[24],\n        (unsigned long) uc->uc_mcontext->__ss.__x[25],\n        (unsigned long) uc->uc_mcontext->__ss.__x[26],\n        (unsigned long) uc->uc_mcontext->__ss.__x[27],\n        (unsigned long) uc->uc_mcontext->__ss.__x[28],\n        (unsigned long) arm_thread_state64_get_fp(uc->uc_mcontext->__ss),\n        (unsigned long) arm_thread_state64_get_lr(uc->uc_mcontext->__ss),\n        (unsigned long) arm_thread_state64_get_sp(uc->uc_mcontext->__ss),\n        (unsigned long) arm_thread_state64_get_pc(uc->uc_mcontext->__ss),\n        (unsigned long) uc->uc_mcontext->__ss.__cpsr\n    );\n    logStackContent((void**) arm_thread_state64_get_sp(uc->uc_mcontext->__ss));\n    #endif\n/* Linux */\n#elif defined(__linux__)\n    /* Linux x86 */\n    #if defined(__i386__) || ((defined(__X86_64__) || defined(__x86_64__)) && defined(__ILP32__))\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\\n\"\n    \"EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\\n\"\n    \"SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\\n\"\n    \"DS :%08lx ES :%08lx FS :%08lx GS:%08lx\",\n        (unsigned long) uc->uc_mcontext.gregs[11],\n        (unsigned long) uc->uc_mcontext.gregs[8],\n        (unsigned long) uc->uc_mcontext.gregs[10],\n        (unsigned long) uc->uc_mcontext.gregs[9],\n        (unsigned long) uc->uc_mcontext.gregs[4],\n        (unsigned long) uc->uc_mcontext.gregs[5],\n        (unsigned long) uc->uc_mcontext.gregs[6],\n        (unsigned long) uc->uc_mcontext.gregs[7],\n        (unsigned long) uc->uc_mcontext.gregs[18],\n        (unsigned long) uc->uc_mcontext.gregs[17],\n        (unsigned long) uc->uc_mcontext.gregs[14],\n        (unsigned long) 
uc->uc_mcontext.gregs[15],\n        (unsigned long) uc->uc_mcontext.gregs[3],\n        (unsigned long) uc->uc_mcontext.gregs[2],\n        (unsigned long) uc->uc_mcontext.gregs[1],\n        (unsigned long) uc->uc_mcontext.gregs[0]\n    );\n    logStackContent((void**)uc->uc_mcontext.gregs[7]);\n    #elif defined(__X86_64__) || defined(__x86_64__)\n    /* Linux AMD64 */\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"RAX:%016lx RBX:%016lx\\nRCX:%016lx RDX:%016lx\\n\"\n    \"RDI:%016lx RSI:%016lx\\nRBP:%016lx RSP:%016lx\\n\"\n    \"R8 :%016lx R9 :%016lx\\nR10:%016lx R11:%016lx\\n\"\n    \"R12:%016lx R13:%016lx\\nR14:%016lx R15:%016lx\\n\"\n    \"RIP:%016lx EFL:%016lx\\nCSGSFS:%016lx\",\n        (unsigned long) uc->uc_mcontext.gregs[13],\n        (unsigned long) uc->uc_mcontext.gregs[11],\n        (unsigned long) uc->uc_mcontext.gregs[14],\n        (unsigned long) uc->uc_mcontext.gregs[12],\n        (unsigned long) uc->uc_mcontext.gregs[8],\n        (unsigned long) uc->uc_mcontext.gregs[9],\n        (unsigned long) uc->uc_mcontext.gregs[10],\n        (unsigned long) uc->uc_mcontext.gregs[15],\n        (unsigned long) uc->uc_mcontext.gregs[0],\n        (unsigned long) uc->uc_mcontext.gregs[1],\n        (unsigned long) uc->uc_mcontext.gregs[2],\n        (unsigned long) uc->uc_mcontext.gregs[3],\n        (unsigned long) uc->uc_mcontext.gregs[4],\n        (unsigned long) uc->uc_mcontext.gregs[5],\n        (unsigned long) uc->uc_mcontext.gregs[6],\n        (unsigned long) uc->uc_mcontext.gregs[7],\n        (unsigned long) uc->uc_mcontext.gregs[16],\n        (unsigned long) uc->uc_mcontext.gregs[17],\n        (unsigned long) uc->uc_mcontext.gregs[18]\n    );\n    logStackContent((void**)uc->uc_mcontext.gregs[15]);\n    #elif defined(__riscv) /* Linux RISC-V */\n    serverLog(LL_WARNING,\n\t\"\\n\"\n    \"ra:%016lx gp:%016lx\\ntp:%016lx t0:%016lx\\n\"\n    \"t1:%016lx t2:%016lx\\ns0:%016lx s1:%016lx\\n\"\n    \"a0:%016lx a1:%016lx\\na2:%016lx a3:%016lx\\n\"\n    \"a4:%016lx 
a5:%016lx\\na6:%016lx a7:%016lx\\n\"\n    \"s2:%016lx s3:%016lx\\ns4:%016lx s5:%016lx\\n\"\n    \"s6:%016lx s7:%016lx\\ns8:%016lx s9:%016lx\\n\"\n    \"s10:%016lx s11:%016lx\\nt3:%016lx t4:%016lx\\n\"\n    \"t5:%016lx t6:%016lx\\n\",\n        (unsigned long) uc->uc_mcontext.__gregs[1],\n        (unsigned long) uc->uc_mcontext.__gregs[3],\n        (unsigned long) uc->uc_mcontext.__gregs[4],\n        (unsigned long) uc->uc_mcontext.__gregs[5],\n        (unsigned long) uc->uc_mcontext.__gregs[6],\n        (unsigned long) uc->uc_mcontext.__gregs[7],\n        (unsigned long) uc->uc_mcontext.__gregs[8],\n        (unsigned long) uc->uc_mcontext.__gregs[9],\n        (unsigned long) uc->uc_mcontext.__gregs[10],\n        (unsigned long) uc->uc_mcontext.__gregs[11],\n        (unsigned long) uc->uc_mcontext.__gregs[12],\n        (unsigned long) uc->uc_mcontext.__gregs[13],\n        (unsigned long) uc->uc_mcontext.__gregs[14],\n        (unsigned long) uc->uc_mcontext.__gregs[15],\n        (unsigned long) uc->uc_mcontext.__gregs[16],\n        (unsigned long) uc->uc_mcontext.__gregs[17],\n        (unsigned long) uc->uc_mcontext.__gregs[18],\n        (unsigned long) uc->uc_mcontext.__gregs[19],\n        (unsigned long) uc->uc_mcontext.__gregs[20],\n        (unsigned long) uc->uc_mcontext.__gregs[21],\n        (unsigned long) uc->uc_mcontext.__gregs[22],\n        (unsigned long) uc->uc_mcontext.__gregs[23],\n        (unsigned long) uc->uc_mcontext.__gregs[24],\n        (unsigned long) uc->uc_mcontext.__gregs[25],\n        (unsigned long) uc->uc_mcontext.__gregs[26],\n        (unsigned long) uc->uc_mcontext.__gregs[27],\n        (unsigned long) uc->uc_mcontext.__gregs[28],\n        (unsigned long) uc->uc_mcontext.__gregs[29],\n        (unsigned long) uc->uc_mcontext.__gregs[30],\n        (unsigned long) uc->uc_mcontext.__gregs[31]\n    );\n    logStackContent((void**)uc->uc_mcontext.__gregs[REG_SP]);\n    #elif defined(__aarch64__) /* Linux AArch64 */\n    serverLog(LL_WARNING,\n\t  
    \"\\n\"\n\t      \"X18:%016lx X19:%016lx\\nX20:%016lx X21:%016lx\\n\"\n\t      \"X22:%016lx X23:%016lx\\nX24:%016lx X25:%016lx\\n\"\n\t      \"X26:%016lx X27:%016lx\\nX28:%016lx X29:%016lx\\n\"\n\t      \"X30:%016lx\\n\"\n\t      \"pc:%016lx sp:%016lx\\npstate:%016lx fault_address:%016lx\\n\",\n\t      (unsigned long) uc->uc_mcontext.regs[18],\n\t      (unsigned long) uc->uc_mcontext.regs[19],\n\t      (unsigned long) uc->uc_mcontext.regs[20],\n\t      (unsigned long) uc->uc_mcontext.regs[21],\n\t      (unsigned long) uc->uc_mcontext.regs[22],\n\t      (unsigned long) uc->uc_mcontext.regs[23],\n\t      (unsigned long) uc->uc_mcontext.regs[24],\n\t      (unsigned long) uc->uc_mcontext.regs[25],\n\t      (unsigned long) uc->uc_mcontext.regs[26],\n\t      (unsigned long) uc->uc_mcontext.regs[27],\n\t      (unsigned long) uc->uc_mcontext.regs[28],\n\t      (unsigned long) uc->uc_mcontext.regs[29],\n\t      (unsigned long) uc->uc_mcontext.regs[30],\n\t      (unsigned long) uc->uc_mcontext.pc,\n\t      (unsigned long) uc->uc_mcontext.sp,\n\t      (unsigned long) uc->uc_mcontext.pstate,\n\t      (unsigned long) uc->uc_mcontext.fault_address\n\t\t      );\n\t      logStackContent((void**)uc->uc_mcontext.sp);\n    #elif defined(__arm__) /* Linux ARM */\n    serverLog(LL_WARNING,\n\t      \"\\n\"\n\t      \"R10:%016lx R9 :%016lx\\nR8 :%016lx R7 :%016lx\\n\"\n\t      \"R6 :%016lx R5 :%016lx\\nR4 :%016lx R3 :%016lx\\n\"\n\t      \"R2 :%016lx R1 :%016lx\\nR0 :%016lx EC :%016lx\\n\"\n\t      \"fp: %016lx ip:%016lx\\n\"\n\t      \"pc:%016lx sp:%016lx\\ncpsr:%016lx fault_address:%016lx\\n\",\n\t      (unsigned long) uc->uc_mcontext.arm_r10,\n\t      (unsigned long) uc->uc_mcontext.arm_r9,\n\t      (unsigned long) uc->uc_mcontext.arm_r8,\n\t      (unsigned long) uc->uc_mcontext.arm_r7,\n\t      (unsigned long) uc->uc_mcontext.arm_r6,\n\t      (unsigned long) uc->uc_mcontext.arm_r5,\n\t      (unsigned long) uc->uc_mcontext.arm_r4,\n\t      (unsigned long) 
uc->uc_mcontext.arm_r3,\n\t      (unsigned long) uc->uc_mcontext.arm_r2,\n\t      (unsigned long) uc->uc_mcontext.arm_r1,\n\t      (unsigned long) uc->uc_mcontext.arm_r0,\n\t      (unsigned long) uc->uc_mcontext.error_code,\n\t      (unsigned long) uc->uc_mcontext.arm_fp,\n\t      (unsigned long) uc->uc_mcontext.arm_ip,\n\t      (unsigned long) uc->uc_mcontext.arm_pc,\n\t      (unsigned long) uc->uc_mcontext.arm_sp,\n\t      (unsigned long) uc->uc_mcontext.arm_cpsr,\n\t      (unsigned long) uc->uc_mcontext.fault_address\n\t\t      );\n\t      logStackContent((void**)uc->uc_mcontext.arm_sp);\n    #else\n\tNOT_SUPPORTED();\n    #endif\n#elif defined(__FreeBSD__)\n    #if defined(__x86_64__)\n    serverLog(LL_WARNING,\n    \"\\n\"\n    \"RAX:%016lx RBX:%016lx\\nRCX:%016lx RDX:%016lx\\n\"\n    \"RDI:%016lx RSI:%016lx\\nRBP:%016lx RSP:%016lx\\n\"\n    \"R8 :%016lx R9 :%016lx\\nR10:%016lx R11:%016lx\\n\"\n    \"R12:%016lx R13:%016lx\\nR14:%016lx R15:%016lx\\n\"\n    \"RIP:%016lx EFL:%016lx\\nCSGSFS:%016lx\",\n        (unsigned long) uc->uc_mcontext.mc_rax,\n        (unsigned long) uc->uc_mcontext.mc_rbx,\n        (unsigned long) uc->uc_mcontext.mc_rcx,\n        (unsigned long) uc->uc_mcontext.mc_rdx,\n        (unsigned long) uc->uc_mcontext.mc_rdi,\n        (unsigned long) uc->uc_mcontext.mc_rsi,\n        (unsigned long) uc->uc_mcontext.mc_rbp,\n        (unsigned long) uc->uc_mcontext.mc_rsp,\n        (unsigned long) uc->uc_mcontext.mc_r8,\n        (unsigned long) uc->uc_mcontext.mc_r9,\n        (unsigned long) uc->uc_mcontext.mc_r10,\n        (unsigned long) uc->uc_mcontext.mc_r11,\n        (unsigned long) uc->uc_mcontext.mc_r12,\n        (unsigned long) uc->uc_mcontext.mc_r13,\n        (unsigned long) uc->uc_mcontext.mc_r14,\n        (unsigned long) uc->uc_mcontext.mc_r15,\n        (unsigned long) uc->uc_mcontext.mc_rip,\n        (unsigned long) uc->uc_mcontext.mc_rflags,\n        (unsigned long) uc->uc_mcontext.mc_cs\n    );\n    
    logStackContent((void**)uc->uc_mcontext.mc_rsp);
    #elif defined(__i386__)
    serverLog(LL_WARNING,
    "\n"
    "EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
    "EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
    "SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
    "DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
        (unsigned long) uc->uc_mcontext.mc_eax,
        (unsigned long) uc->uc_mcontext.mc_ebx,
        (unsigned long) uc->uc_mcontext.mc_ecx,
        (unsigned long) uc->uc_mcontext.mc_edx,
        (unsigned long) uc->uc_mcontext.mc_edi,
        (unsigned long) uc->uc_mcontext.mc_esi,
        (unsigned long) uc->uc_mcontext.mc_ebp,
        (unsigned long) uc->uc_mcontext.mc_esp,
        (unsigned long) uc->uc_mcontext.mc_ss,
        (unsigned long) uc->uc_mcontext.mc_eflags,
        (unsigned long) uc->uc_mcontext.mc_eip,
        (unsigned long) uc->uc_mcontext.mc_cs,
        (unsigned long) uc->uc_mcontext.mc_ds,
        (unsigned long) uc->uc_mcontext.mc_es,
        (unsigned long) uc->uc_mcontext.mc_fs,
        (unsigned long) uc->uc_mcontext.mc_gs
    );
    logStackContent((void**)uc->uc_mcontext.mc_esp);
    #else
    NOT_SUPPORTED();
    #endif
#elif defined(__OpenBSD__)
    #if defined(__x86_64__)
    serverLog(LL_WARNING,
    "\n"
    "RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
    "RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
    "R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
    "R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
    "RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
        (unsigned long) uc->sc_rax,
        (unsigned long) uc->sc_rbx,
        (unsigned long) uc->sc_rcx,
        (unsigned long) uc->sc_rdx,
        (unsigned long) uc->sc_rdi,
        (unsigned long) uc->sc_rsi,
        (unsigned long) uc->sc_rbp,
        (unsigned long) uc->sc_rsp,
        (unsigned long) uc->sc_r8,
        (unsigned long) uc->sc_r9,
        (unsigned long) uc->sc_r10,
        (unsigned long) uc->sc_r11,
        (unsigned long) uc->sc_r12,
        (unsigned long) uc->sc_r13,
        (unsigned long) uc->sc_r14,
        (unsigned long) uc->sc_r15,
        (unsigned long) uc->sc_rip,
        (unsigned long) uc->sc_rflags,
        (unsigned long) uc->sc_cs
    );
    logStackContent((void**)uc->sc_rsp);
    #elif defined(__i386__)
    serverLog(LL_WARNING,
    "\n"
    "EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
    "EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
    "SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
    "DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
        (unsigned long) uc->sc_eax,
        (unsigned long) uc->sc_ebx,
        (unsigned long) uc->sc_ecx,
        (unsigned long) uc->sc_edx,
        (unsigned long) uc->sc_edi,
        (unsigned long) uc->sc_esi,
        (unsigned long) uc->sc_ebp,
        (unsigned long) uc->sc_esp,
        (unsigned long) uc->sc_ss,
        (unsigned long) uc->sc_eflags,
        (unsigned long) uc->sc_eip,
        (unsigned long) uc->sc_cs,
        (unsigned long) uc->sc_ds,
        (unsigned long) uc->sc_es,
        (unsigned long) uc->sc_fs,
        (unsigned long) uc->sc_gs
    );
    logStackContent((void**)uc->sc_esp);
    #else
    NOT_SUPPORTED();
    #endif
#elif defined(__NetBSD__)
    #if defined(__x86_64__)
    serverLog(LL_WARNING,
    "\n"
    "RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
    "RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
    "R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
    "R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
    "RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RAX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RBX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RCX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RDX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RDI],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RSI],
        (unsigned long)
uc->uc_mcontext.__gregs[_REG_RBP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RSP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R8],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R9],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R10],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R11],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R12],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R13],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R14],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_R15],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RIP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_RFLAGS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_CS]
    );
    logStackContent((void**)uc->uc_mcontext.__gregs[_REG_RSP]);
    #elif defined(__i386__)
    serverLog(LL_WARNING,
    "\n"
    "EAX:%08lx EBX:%08lx ECX:%08lx EDX:%08lx\n"
    "EDI:%08lx ESI:%08lx EBP:%08lx ESP:%08lx\n"
    "SS :%08lx EFL:%08lx EIP:%08lx CS:%08lx\n"
    "DS :%08lx ES :%08lx FS :%08lx GS:%08lx",
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EAX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EBX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_ECX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EDX],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EDI],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_ESI],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EBP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_ESP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_SS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EFLAGS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_EIP],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_CS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_DS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_ES],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_FS],
        (unsigned long) uc->uc_mcontext.__gregs[_REG_GS]
    );
    #else
    NOT_SUPPORTED();
    #endif
#elif defined(__DragonFly__)
    serverLog(LL_WARNING,
    "\n"
    "RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
    "RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
    "R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
    "R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
    "RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
        (unsigned long) uc->uc_mcontext.mc_rax,
        (unsigned long) uc->uc_mcontext.mc_rbx,
        (unsigned long) uc->uc_mcontext.mc_rcx,
        (unsigned long) uc->uc_mcontext.mc_rdx,
        (unsigned long) uc->uc_mcontext.mc_rdi,
        (unsigned long) uc->uc_mcontext.mc_rsi,
        (unsigned long) uc->uc_mcontext.mc_rbp,
        (unsigned long) uc->uc_mcontext.mc_rsp,
        (unsigned long) uc->uc_mcontext.mc_r8,
        (unsigned long) uc->uc_mcontext.mc_r9,
        (unsigned long) uc->uc_mcontext.mc_r10,
        (unsigned long) uc->uc_mcontext.mc_r11,
        (unsigned long) uc->uc_mcontext.mc_r12,
        (unsigned long) uc->uc_mcontext.mc_r13,
        (unsigned long) uc->uc_mcontext.mc_r14,
        (unsigned long) uc->uc_mcontext.mc_r15,
        (unsigned long) uc->uc_mcontext.mc_rip,
        (unsigned long) uc->uc_mcontext.mc_rflags,
        (unsigned long) uc->uc_mcontext.mc_cs
    );
    logStackContent((void**)uc->uc_mcontext.mc_rsp);
#elif defined(__sun)
    #if defined(__x86_64__)
    serverLog(LL_WARNING,
    "\n"
    "RAX:%016lx RBX:%016lx\nRCX:%016lx RDX:%016lx\n"
    "RDI:%016lx RSI:%016lx\nRBP:%016lx RSP:%016lx\n"
    "R8 :%016lx R9 :%016lx\nR10:%016lx R11:%016lx\n"
    "R12:%016lx R13:%016lx\nR14:%016lx R15:%016lx\n"
    "RIP:%016lx EFL:%016lx\nCSGSFS:%016lx",
        (unsigned long) uc->uc_mcontext.gregs[REG_RAX],
        (unsigned long) uc->uc_mcontext.gregs[REG_RBX],
        (unsigned long) uc->uc_mcontext.gregs[REG_RCX],
        (unsigned long) uc->uc_mcontext.gregs[REG_RDX],
        (unsigned long) uc->uc_mcontext.gregs[REG_RDI],
        (unsigned long) uc->uc_mcontext.gregs[REG_RSI],
        (unsigned long) uc->uc_mcontext.gregs[REG_RBP],
        (unsigned long) uc->uc_mcontext.gregs[REG_RSP],
        (unsigned long) uc->uc_mcontext.gregs[REG_R8],
        (unsigned long) uc->uc_mcontext.gregs[REG_R9],
        (unsigned long) uc->uc_mcontext.gregs[REG_R10],
        (unsigned long) uc->uc_mcontext.gregs[REG_R11],
        (unsigned long) uc->uc_mcontext.gregs[REG_R12],
        (unsigned long) uc->uc_mcontext.gregs[REG_R13],
        (unsigned long) uc->uc_mcontext.gregs[REG_R14],
        (unsigned long) uc->uc_mcontext.gregs[REG_R15],
        (unsigned long) uc->uc_mcontext.gregs[REG_RIP],
        (unsigned long) uc->uc_mcontext.gregs[REG_RFL],
        (unsigned long) uc->uc_mcontext.gregs[REG_CS]
    );
    logStackContent((void**)uc->uc_mcontext.gregs[REG_RSP]);
    #endif
#else
    NOT_SUPPORTED();
#endif
#undef NOT_SUPPORTED
}

#endif /* HAVE_BACKTRACE */

/* Return a file descriptor to write directly to the Redis log with the
 * write(2) syscall, that can be used in critical sections of the code
 * where the rest of Redis can't be trusted (for example during the memory
 * test) or when an API call requires a raw fd.
 *
 * Close it with closeDirectLogFiledes(). */
int openDirectLogFiledes(void) {
    int log_to_stdout = server.logfile[0] == '\0';
    int fd = log_to_stdout ?
        STDOUT_FILENO :
        open(server.logfile, O_APPEND|O_CREAT|O_WRONLY, 0644);
    return fd;
}

/* Used to close what openDirectLogFiledes() returns. */
void closeDirectLogFiledes(int fd) {
    int log_to_stdout = server.logfile[0] == '\0';
    if (!log_to_stdout) close(fd);
}

#if defined(HAVE_BACKTRACE) && defined(__linux__)
static int stacktrace_pipe[2] = {0};
static void setupStacktracePipe(void) {
    if (-1 == anetPipe(stacktrace_pipe, O_CLOEXEC | O_NONBLOCK, O_CLOEXEC | O_NONBLOCK)) {
        serverLog(LL_WARNING, "setupStacktracePipe failed: %s", strerror(errno));
    }
}
#else
static void setupStacktracePipe(void) {/* We don't need a pipe to write the stacktraces. */}
#endif
#ifdef HAVE_BACKTRACE
#define BACKTRACE_MAX_SIZE 100

#ifdef __linux__
#if !defined(_GNU_SOURCE)
#define _GNU_SOURCE
#endif
#include <sys/prctl.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <dirent.h>

#define TIDS_MAX_SIZE 50
static size_t get_ready_to_signal_threads_tids(int sig_num, pid_t tids[TIDS_MAX_SIZE]);

typedef struct {
    char thread_name[16];
    int trace_size;
    pid_t tid;
    void *trace[BACKTRACE_MAX_SIZE];
} stacktrace_data;

__attribute__ ((noinline)) static void collect_stacktrace_data(void) {
    stacktrace_data trace_data = {{0}};

    /* Get the stack trace first! */
    trace_data.trace_size = backtrace(trace_data.trace, BACKTRACE_MAX_SIZE);

    /* Get the thread name. */
    prctl(PR_GET_NAME, trace_data.thread_name);

    /* Get the thread id. */
    trace_data.tid = syscall(SYS_gettid);

    /* Send the output to the main process. */
    if (write(stacktrace_pipe[1], &trace_data, sizeof(trace_data)) == -1) {/* Avoid warning.
*/};
}

__attribute__ ((noinline))
static void writeStacktraces(int fd, int uplevel) {
    /* Get the list of all the process's threads that don't block or ignore
     * the THREADS_SIGNAL. */
    pid_t tids[TIDS_MAX_SIZE];
    size_t len_tids = get_ready_to_signal_threads_tids(THREADS_SIGNAL, tids);
    if (!len_tids) {
        serverLogRawFromHandler(LL_WARNING, "writeStacktraces(): Failed to get the process's threads.");
    }

    char buff[PIPE_BUF];
    /* Clear the stacktraces pipe. */
    while (read(stacktrace_pipe[0], &buff, sizeof(buff)) > 0) {}

    /* ThreadsManager_runOnThreads returns 0 if it is already running. */
    if (!ThreadsManager_runOnThreads(tids, len_tids, collect_stacktrace_data)) return;

    size_t collected = 0;

    pid_t calling_tid = syscall(SYS_gettid);

    /* Read the stacktrace_pipe until it's empty. */
    stacktrace_data curr_stacktrace_data = {{0}};
    while (read(stacktrace_pipe[0], &curr_stacktrace_data, sizeof(curr_stacktrace_data)) > 0) {
        /* The stacktrace header includes the tid and the thread's name. */
        snprintf_async_signal_safe(buff, sizeof(buff), "\n%d %s", curr_stacktrace_data.tid, curr_stacktrace_data.thread_name);
        if (write(fd,buff,strlen(buff)) == -1) {/* Avoid warning. */};

        /* Skip the kernel call to the signal handler, the signal handler
         * itself and the callback addresses. */
        int curr_uplevel = 3;

        if (curr_stacktrace_data.tid == calling_tid) {
            /* Skip the signal syscall and ThreadsManager_runOnThreads. */
            curr_uplevel += uplevel + 2;
            /* Add an indication to the header of the thread that is handling
             * the log file. */
            if (write(fd," *\n",strlen(" *\n")) == -1) {/* Avoid warning. */};
        } else {
            /* Just add a new line. */
            if (write(fd,"\n",strlen("\n")) == -1) {/* Avoid warning. */};
        }

        /* Add the stacktrace. */
        backtrace_symbols_fd(curr_stacktrace_data.trace+curr_uplevel, curr_stacktrace_data.trace_size-curr_uplevel, fd);

        ++collected;
    }

    snprintf_async_signal_safe(buff, sizeof(buff), "\n%lu/%lu expected stacktraces.\n", (long unsigned)(collected), (long unsigned)len_tids);
    if (write(fd,buff,strlen(buff)) == -1) {/* Avoid warning. */};
}

#endif /* __linux__ */
__attribute__ ((noinline))
static void writeCurrentThreadsStackTrace(int fd, int uplevel) {
    void *trace[BACKTRACE_MAX_SIZE];

    int trace_size = backtrace(trace, BACKTRACE_MAX_SIZE);

    char *msg = "\nBacktrace:\n";
    if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning. */};
    backtrace_symbols_fd(trace+uplevel, trace_size-uplevel, fd);
}

/* Logs the stack trace using the backtrace() call. This function is designed
 * to be called from signal handlers safely.
 * The eip argument is optional (can take NULL).
 * The uplevel argument indicates how many of the calling functions to skip.
 * Functions that are taken into consideration in "uplevel" should be declared
 * with __attribute__ ((noinline)) to make sure the compiler won't inline them.
 */
__attribute__ ((noinline))
void logStackTrace(void *eip, int uplevel, int current_thread) {
    int fd = openDirectLogFiledes();
    char *msg;
    uplevel++; /* Skip this function. */

    if (fd == -1) return; /* If we can't log, there is nothing we can do. */

    msg = "\n------ STACK TRACE ------\n";
    if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning. */};

    if (eip) {
        /* Write EIP to the log file. */
        msg = "EIP:\n";
        if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning.
*/};
        backtrace_symbols_fd(&eip, 1, fd);
    }

    /* Write symbols to log file. */
    ++uplevel;
#ifdef __linux__
    if (current_thread) {
        writeCurrentThreadsStackTrace(fd, uplevel);
    } else {
        writeStacktraces(fd, uplevel);
    }
#else
    /* Outside of linux, we only support writing the current thread. */
    UNUSED(current_thread);
    writeCurrentThreadsStackTrace(fd, uplevel);
#endif
    msg = "\n------ STACK TRACE DONE ------\n";
    if (write(fd,msg,strlen(msg)) == -1) {/* Avoid warning. */};

    /* Cleanup */
    closeDirectLogFiledes(fd);
}

#endif /* HAVE_BACKTRACE */

sds genClusterDebugString(sds infostring) {
    sds cluster_info = genClusterInfoString();
    sds cluster_nodes = clusterGenNodesDescription(NULL, 0, 0);

    infostring = sdscatprintf(infostring, "\r\n# Cluster info\r\n");
    infostring = sdscatsds(infostring, cluster_info);
    infostring = sdscatprintf(infostring, "\n------ CLUSTER NODES OUTPUT ------\n");
    infostring = sdscatsds(infostring, cluster_nodes);

    sdsfree(cluster_info);
    sdsfree(cluster_nodes);

    return infostring;
}

/* Log global server info. */
void logServerInfo(void) {
    sds infostring, clients;
    serverLogRaw(LL_WARNING|LL_RAW, "\n------ INFO OUTPUT ------\n");
    int all = 0, everything = 0;
    robj *argv[1];
    argv[0] = createStringObject("all", strlen("all"));
    dict *section_dict = genInfoSectionDict(argv, 1, NULL, &all, &everything);
    infostring = genRedisInfoString(section_dict, all, everything);
    if (server.cluster_enabled) {
        infostring = genClusterDebugString(infostring);
    }
    serverLogRaw(LL_WARNING|LL_RAW, infostring);
    serverLogRaw(LL_WARNING|LL_RAW, "\n------ CLIENT LIST OUTPUT ------\n");
    clients = getAllClientsInfoString(-1);
    serverLogRaw(LL_WARNING|LL_RAW, clients);
    sdsfree(infostring);
    sdsfree(clients);
    releaseInfoSectionDict(section_dict);
    decrRefCount(argv[0]);
}

/* Log certain config values, which can be used for debugging. */
void logConfigDebugInfo(void) {
    sds configstring;
    configstring = getConfigDebugInfo();
    serverLogRaw(LL_WARNING|LL_RAW, "\n------ CONFIG DEBUG OUTPUT ------\n");
    serverLogRaw(LL_WARNING|LL_RAW, configstring);
    sdsfree(configstring);
}

/* Log modules info. Something we want to do last since we fear it may crash. */
void logModulesInfo(void) {
    serverLogRaw(LL_WARNING|LL_RAW, "\n------ MODULES INFO OUTPUT ------\n");
    sds infostring = modulesCollectInfo(sdsempty(), NULL, 1, 0);
    serverLogRaw(LL_WARNING|LL_RAW, infostring);
    sdsfree(infostring);
}

/* Log information about the "current" client, that is, the client that is
 * currently being served by Redis. May be NULL if Redis is not serving a
 * client right now. */
void logCurrentClient(client *cc, const char *title) {
    if (cc == NULL) return;

    sds client;
    int j;
    struct redisCommand *cmd = NULL;
    struct cmdToken tokens = {{0}};

    serverLog(LL_WARNING|LL_RAW, "\n------ %s CLIENT INFO ------\n", title);
    client = catClientInfoString(sdsempty(),cc);
    serverLog(LL_WARNING|LL_RAW,"%s\n", client);
    sdsfree(client);
    serverLog(LL_WARNING|LL_RAW,"argc: '%d'\n", cc->argc);
    if (server.hide_user_data_from_log) {
        cmd = lookupCommand(cc->argv, cc->argc);
        if (cmd)
            cmdTokenGetFromCommand(&tokens, cmd);
    }

    for (j = 0; j < cc->argc; j++) {
        /* Allow command name, subcommand name and command tokens in the log.
*/
        if (server.hide_user_data_from_log && (j != 0 && !(j == 1 && cmd && cmd->parent))) {
            if (!cmdTokenCheck(&tokens, cc->argv[j])) {
                serverLog(LL_WARNING|LL_RAW, "argv[%d]: '*redacted*'\n", j);
                continue;
            }
        }
        robj *decoded;
        decoded = getDecodedObject(cc->argv[j]);
        sds repr = sdscatrepr(sdsempty(),decoded->ptr, min(sdslen(decoded->ptr), 1024));
        serverLog(LL_WARNING|LL_RAW,"argv[%d]: '%s'\n", j, (char*)repr);
        if (!strcasecmp(decoded->ptr, "auth") || !strcasecmp(decoded->ptr, "auth2")) {
            sdsfree(repr);
            decrRefCount(decoded);
            break;
        }
        sdsfree(repr);
        decrRefCount(decoded);
    }
    /* Check if the first argument, usually a key, is found inside the
     * selected DB, and if so print info about the associated object. */
    if (cc->argc > 1) {
        robj *key = getDecodedObject(cc->argv[1]);
        kvobj *kv = dbFind(cc->db, key->ptr);
        if (kv) {
            serverLog(LL_WARNING,"key '%s' found in DB containing the following object:", redactLogCstr((char*)key->ptr));
            serverLogObjectDebugInfo(kv);
        }
        decrRefCount(key);
    }
}

#if defined(HAVE_PROC_MAPS)

#define MEMTEST_MAX_REGIONS 128

/* A non-destructive memory test executed during segfault.
*/
int memtest_test_linux_anonymous_maps(void) {
    FILE *fp;
    char line[1024];
    char logbuf[1024];
    size_t start_addr, end_addr, size;
    size_t start_vect[MEMTEST_MAX_REGIONS];
    size_t size_vect[MEMTEST_MAX_REGIONS];
    int regions = 0, j;

    int fd = openDirectLogFiledes();
    if (fd == -1) return 0;

    fp = fopen("/proc/self/maps","r");
    if (!fp) {
        closeDirectLogFiledes(fd);
        return 0;
    }
    while(fgets(line,sizeof(line),fp) != NULL) {
        char *start, *end, *p = line;

        start = p;
        p = strchr(p,'-');
        if (!p) continue;
        *p++ = '\0';
        end = p;
        p = strchr(p,' ');
        if (!p) continue;
        *p++ = '\0';
        if (strstr(p,"stack") ||
            strstr(p,"vdso") ||
            strstr(p,"vsyscall")) continue;
        if (!strstr(p,"00:00")) continue;
        if (!strstr(p,"rw")) continue;

        start_addr = strtoul(start,NULL,16);
        end_addr = strtoul(end,NULL,16);
        size = end_addr-start_addr;

        start_vect[regions] = start_addr;
        size_vect[regions] = size;
        snprintf(logbuf,sizeof(logbuf),
            "*** Preparing to test memory region %lx (%lu bytes)\n",
                (unsigned long) start_vect[regions],
                (unsigned long) size_vect[regions]);
        if (write(fd,logbuf,strlen(logbuf)) == -1) { /* Nothing to do. */ }
        regions++;
    }

    int errors = 0;
    for (j = 0; j < regions; j++) {
        if (write(fd,".",1) == -1) { /* Nothing to do. */ }
        errors += memtest_preserving_test((void*)start_vect[j],size_vect[j],1);
        if (write(fd, errors ? "E" : "O",1) == -1) { /* Nothing to do. */ }
    }
    if (write(fd,"\n",1) == -1) { /* Nothing to do. */ }

    /* NOTE: It is very important to close the file descriptor only now,
     * because closing it earlier may result in unmapping of some memory
     * region that we are testing. */
    fclose(fp);
    closeDirectLogFiledes(fd);
    return errors;
}
#endif /* HAVE_PROC_MAPS */

static void killMainThread(void) {
    int err;
    if (pthread_self() != server.main_thread_id && pthread_cancel(server.main_thread_id) == 0) {
        if ((err = pthread_join(server.main_thread_id,NULL)) != 0) {
            serverLog(LL_WARNING, "main thread cannot be joined: %s", strerror(err));
        } else {
            serverLog(LL_WARNING, "main thread terminated");
        }
    }
}

/* Kill the running threads (other than current) in an unclean way. This function
 * should be used only when it's critical to stop the threads for some reason.
 * Currently Redis does this only on crash (for instance on SIGSEGV) in order
 * to perform a fast memory check without other threads messing with memory. */
void killThreads(void) {
    killMainThread();
    bioKillThreads();
    killIOThreads();
}

void doFastMemoryTest(void) {
#if defined(HAVE_PROC_MAPS)
    if (server.memcheck_enabled) {
        /* Test memory. */
        serverLogRaw(LL_WARNING|LL_RAW, "\n------ FAST MEMORY TEST ------\n");
        killThreads();
        if (memtest_test_linux_anonymous_maps()) {
            serverLogRaw(LL_WARNING|LL_RAW,
                "!!! MEMORY ERROR DETECTED! Check your memory ASAP !!!\n");
        } else {
            serverLogRaw(LL_WARNING|LL_RAW,
                "Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.\n");
        }
    }
#endif /* HAVE_PROC_MAPS */
}

/* Scans the (assumed) x86 code starting at addr, for a max of `len`
 * bytes, searching for E8 (callq) opcodes, and dumping the symbols
 * and the call offset if they appear to be valid.
*/
void dumpX86Calls(void *addr, size_t len) {
    size_t j;
    unsigned char *p = addr;
    Dl_info info;
    /* Hash table to best-effort avoid printing the same symbol
     * multiple times. */
    unsigned long ht[256] = {0};

    if (len < 5) return;
    for (j = 0; j < len-4; j++) {
        if (p[j] != 0xE8) continue; /* Not an E8 CALL opcode. */
        unsigned long target = (unsigned long)addr+j+5;
        uint32_t tmp;
        memcpy(&tmp, p+j+1, sizeof(tmp));
        target += tmp;
        if (dladdr((void*)target, &info) != 0 && info.dli_sname != NULL) {
            if (ht[target&0xff] != target) {
                printf("Function at 0x%lx is %s\n",target,info.dli_sname);
                ht[target&0xff] = target;
            }
            j += 4; /* Skip the 32 bit immediate. */
        }
    }
}

void dumpCodeAroundEIP(void *eip) {
    Dl_info info;
    if (dladdr(eip, &info) != 0) {
        serverLog(LL_WARNING|LL_RAW,
            "\n------ DUMPING CODE AROUND EIP ------\n"
            "Symbol: %s (base: %p)\n"
            "Module: %s (base %p)\n"
            "$ xxd -r -p /tmp/dump.hex /tmp/dump.bin\n"
            "$ objdump --adjust-vma=%p -D -b binary -m i386:x86-64 /tmp/dump.bin\n"
            "------\n",
            info.dli_sname, info.dli_saddr, info.dli_fname, info.dli_fbase,
            info.dli_saddr);
        size_t len = (long)eip - (long)info.dli_saddr;
        unsigned long sz = sysconf(_SC_PAGESIZE);
        if (len < 1<<13) { /* We don't have functions over 8k (verified). */
            /* Find the address of the next page, which is our "safety"
             * limit when dumping. Then try to dump just 128 bytes more
             * than EIP if there is room, or stop sooner. */
            void *base = (void *)info.dli_saddr;
            unsigned long next = ((unsigned long)eip + sz) & ~(sz-1);
            unsigned long end = (unsigned long)eip + 128;
            if (end > next) end = next;
            len = end - (unsigned long)base;
            serverLogHexDump(LL_WARNING, "dump of function",
                base, len);
            dumpX86Calls(base, len);
        }
    }
}

void invalidFunctionWasCalled(void) {}

typedef void (*invalidFunctionWasCalledType)(void);

__attribute__ ((noinline))
static void sigsegvHandler(int sig, siginfo_t *info, void *secret) {
    UNUSED(secret);
    UNUSED(info);
    int print_full_crash_info = 1;
    /* Check if it is safe to enter the signal handler. A second thread
     * crashing at the same time would deadlock. */
    if (pthread_mutex_lock(&signal_handler_lock) == EDEADLK) {
        /* If this thread already owns the lock (meaning we crashed while
         * handling a signal) switch to printing the minimal information
         * about the crash. */
        serverLogRawFromHandler(LL_WARNING,
            "Crashed running signal handler. Providing reduced version of recursive crash report.");
        print_full_crash_info = 0;
    }

    bugReportStart();
    serverLog(LL_WARNING,
        "Redis %s crashed by signal: %d, si_code: %d", REDIS_VERSION, sig, info->si_code);
    if (sig == SIGSEGV || sig == SIGBUS) {
        serverLog(LL_WARNING,
        "Accessing address: %p", (void*)info->si_addr);
    }
    if (info->si_code == SI_USER && info->si_pid != -1) {
        serverLog(LL_WARNING, "Killed by PID: %ld, UID: %d", (long) info->si_pid, info->si_uid);
    }

#ifdef HAVE_BACKTRACE
    ucontext_t *uc = (ucontext_t*) secret;
    void *eip = getAndSetMcontextEip(uc, NULL);
    if (eip != NULL) {
        serverLog(LL_WARNING,
        "Crashed running the instruction at: %p", eip);
    }

    if (eip == info->si_addr) {
        /* When eip matches the bad address, it's an indication that we crashed
         * when calling a non-mapped function pointer. In that case the call to
         * backtrace will crash trying to access that address and we won't get
         * a crash report logged. Set it to a valid point to avoid that crash. */

        /* This trick allows us to avoid a compiler warning. */
        void *ptr;
        invalidFunctionWasCalledType *ptr_ptr = (invalidFunctionWasCalledType*)&ptr;
        *ptr_ptr = invalidFunctionWasCalled;
        getAndSetMcontextEip(uc, ptr);
    }

    /* When printing the reduced crash info, just print the current thread
     * to avoid race conditions with the multi-threaded stack collector.
*/
    logStackTrace(eip, 1, !print_full_crash_info);

    if (eip == info->si_addr) {
        /* Restore old eip. */
        getAndSetMcontextEip(uc, eip);
    }

    logRegisters(uc);
#endif

    if (print_full_crash_info) printCrashReport();

#ifdef HAVE_BACKTRACE
    if (eip != NULL)
        dumpCodeAroundEIP(eip);
#endif

    bugReportEnd(1, sig);
}

void setupDebugSigHandlers(void) {
    setupStacktracePipe();

    setupSigSegvHandler();

    struct sigaction act;

    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_SIGINFO;
    act.sa_sigaction = sigalrmSignalHandler;
    sigaction(SIGALRM, &act, NULL);
}

void setupSigSegvHandler(void) {
    /* Initialize the signal handler lock. Attempting to initialize an already
     * initialized mutex or mutexattr results in undefined behavior. */
    if (!signal_handler_lock_initialized) {
        /* Use the error-checking mutex type, so that re-locking from within
         * the same thread returns an error. */
        pthread_mutexattr_init(&signal_handler_lock_attr);
        pthread_mutexattr_settype(&signal_handler_lock_attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&signal_handler_lock, &signal_handler_lock_attr);

        pthread_mutexattr_init(&bug_report_start_attr);
        /* Use a recursive mutex to avoid deadlock when a signal is raised
         * during bugReportStart(). */
        pthread_mutexattr_settype(&bug_report_start_attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&bug_report_start_mutex, &bug_report_start_attr);

        signal_handler_lock_initialized = 1;
    }

    struct sigaction act;

    sigemptyset(&act.sa_mask);
    /* SA_NODEFER disables adding the signal to the signal mask of the
     * calling process on entry to the signal handler, unless it is included
     * in the sa_mask field. */
    /* The SA_SIGINFO flag selects the handler defined in sa_sigaction;
     * otherwise, sa_handler is used. */
    act.sa_flags = SA_NODEFER | SA_SIGINFO;
    act.sa_sigaction = sigsegvHandler;
    if (server.crashlog_enabled) {
        sigaction(SIGSEGV, &act, NULL);
        sigaction(SIGBUS, &act, NULL);
        sigaction(SIGFPE, &act, NULL);
        sigaction(SIGILL, &act, NULL);
        sigaction(SIGABRT, &act, NULL);
    }
}

void removeSigSegvHandlers(void) {
    struct sigaction act;
    sigemptyset(&act.sa_mask);
    act.sa_flags = SA_NODEFER | SA_RESETHAND;
    act.sa_handler = SIG_DFL;
    sigaction(SIGSEGV, &act, NULL);
    sigaction(SIGBUS, &act, NULL);
    sigaction(SIGFPE, &act, NULL);
    sigaction(SIGILL, &act, NULL);
    sigaction(SIGABRT, &act, NULL);
}

void printCrashReport(void) {
    atomicSet(server.crashing, 1);

    /* Log INFO and CLIENT LIST. */
    logServerInfo();

    /* Log the current client. */
    logCurrentClient(server.current_client, "CURRENT");
    logCurrentClient(server.executing_client, "EXECUTING");

    /* Log modules info. Something we want to do last since we fear it may crash. */
    logModulesInfo();

    /* Log debug config information: a few config values that may be
     * useful for debugging crashes. */
    logConfigDebugInfo();

    /* Run memory test in case the crash was triggered by memory corruption. */
    doFastMemoryTest();
}

void bugReportEnd(int killViaSignal, int sig) {
    struct sigaction act;

    serverLogRawFromHandler(LL_WARNING|LL_RAW,
"\n=== REDIS BUG REPORT END. Make sure to include from START to END. ===\n\n"
"       Please report the crash by opening an issue on github:\n\n"
"           http://github.com/redis/redis/issues\n\n"
"  If a Redis module was involved, please open an issue in the module's repo instead.\n\n"
"  Suspect RAM error? Use redis-server --test-memory to verify it.\n\n"
"  Some other issues could be detected by redis-server --check-system\n"
);

    /* free(messages); Don't call free() with possibly corrupted memory. */
    if (server.daemonize && server.supervised == 0 && server.pidfile) unlink(server.pidfile);

    if (!killViaSignal) {
        /* To avoid issues with valgrind, we may want to exit rather than
         * generate a signal. */
        if (server.use_exit_on_panic) {
            /* Use _exit() to bypass false leak reports by GCC ASAN. */
            fflush(stdout);
            _exit(1);
        }
        abort();
    }

    /* Make sure we exit with the right signal at the end. So for instance
     * the core will be dumped if enabled. */
    sigemptyset(&act.sa_mask);
    act.sa_flags = 0;
    act.sa_handler = SIG_DFL;
    sigaction(sig, &act, NULL);
    kill(getpid(),sig);
}

/* ==================== Logging functions for debugging ===================== */

void serverLogHexDump(int level, char *descr, void *value, size_t len) {
    char buf[65], *b;
    unsigned char *v = value;
    char charset[] = "0123456789abcdef";

    serverLog(level,"%s (hexdump of %zu bytes):", descr, len);
    b = buf;
    while(len) {
        b[0] = charset[(*v)>>4];
        b[1] = charset[(*v)&0xf];
        b[2] = '\0';
        b += 2;
        len--;
        v++;
        if (b-buf == 64 || len == 0) {
            serverLogRaw(level|LL_RAW,buf);
            b = buf;
        }
    }
    serverLogRaw(level|LL_RAW,"\n");
}

/* =========================== Software Watchdog ============================ */
#include <sys/time.h>

void sigalrmSignalHandler(int sig, siginfo_t *info, void *secret) {
    /* Save and restore errno to avoid spoiling its value, as caught by
     * "WARNING: ThreadSanitizer: signal handler spoils errno". */
    int save_errno = errno;
#ifdef HAVE_BACKTRACE
    ucontext_t *uc = (ucontext_t*) secret;
#else
(void)secret;\n#endif\n    UNUSED(sig);\n\n    /* SIGALRM can be sent explicitly to the process by calling kill() to get the stack traces,\n     * or is delivered every watchdog_period interval. In the latter case, si_pid is not set. */\n    if(info->si_pid == 0) {\n        serverLogRawFromHandler(LL_WARNING,\"\\n--- WATCHDOG TIMER EXPIRED ---\");\n    } else {\n        serverLogRawFromHandler(LL_WARNING, \"\\nReceived SIGALRM\");\n    }\n#ifdef HAVE_BACKTRACE\n    logStackTrace(getAndSetMcontextEip(uc, NULL), 1, 0);\n#else\n    serverLogRawFromHandler(LL_WARNING,\"Sorry: no support for backtrace().\");\n#endif\n    serverLogRawFromHandler(LL_WARNING,\"--------\\n\");\n    errno = save_errno;\n}\n\n/* Schedule a SIGALRM delivery after the specified period in milliseconds.\n * If a timer is already scheduled, this function will re-schedule it to the\n * specified time. If period is 0 the current timer is disabled. */\nvoid watchdogScheduleSignal(int period) {\n    struct itimerval it;\n\n    /* Will stop the timer if period is 0. */\n    it.it_value.tv_sec = period/1000;\n    it.it_value.tv_usec = (period%1000)*1000;\n    /* Don't automatically restart. */\n    it.it_interval.tv_sec = 0;\n    it.it_interval.tv_usec = 0;\n    setitimer(ITIMER_REAL, &it, NULL);\n}\n\nvoid applyWatchdogPeriod(void) {\n    /* Disable watchdog when period is 0 */\n    if (server.watchdog_period == 0) {\n        watchdogScheduleSignal(0); /* Stop the current timer. */\n    } else {\n        /* If the configured period is smaller than twice the timer period, it is\n         * too short for the software watchdog to work reliably. Fix it now\n         * if needed. */\n        int min_period = (1000/server.hz)*2;\n        if (server.watchdog_period < min_period) server.watchdog_period = min_period;\n        watchdogScheduleSignal(server.watchdog_period); /* Adjust the current timer. 
*/\n    }\n}\n\nvoid debugPauseProcess(void) {\n    serverLog(LL_NOTICE, \"Process is about to stop.\");\n    raise(SIGSTOP);\n    serverLog(LL_NOTICE, \"Process has been continued.\");\n}\n\n/* Positive input is sleep time in microseconds. Negative input is fractions\n * of microseconds, i.e. -10 means 100 nanoseconds. */\nvoid debugDelay(int usec) {\n    /* Since even the shortest sleep results in context switch and system call,\n     * the way we achieve short sleeps is by statistically sleeping less often. */\n    if (usec < 0) usec = (rand() % -usec) == 0 ? 1: 0;\n    if (usec) usleep(usec);\n}\n\n#ifdef HAVE_BACKTRACE\n#ifdef __linux__\n\n/* =========================== Stacktrace Utils ============================ */\n\n/** Returns 1 if thread tid neither blocks nor ignores sig_num (the thread will handle the signal).\n * Returns 0 if thread tid blocks or ignores sig_num (the thread is not ready to catch the signal).\n * Also returns 0 if something went wrong, after printing a warning message to the log file. **/\nstatic int is_thread_ready_to_signal(const char *proc_pid_task_path, const char *tid, int sig_num) {\n    /* Open the thread's status file path /proc/<pid>/task/<tid>/status */\n    char path_buff[PATH_MAX];\n    snprintf_async_signal_safe(path_buff, PATH_MAX, \"%s/%s/status\", proc_pid_task_path, tid);\n\n    int thread_status_file = open(path_buff, O_RDONLY);\n    char buff[PATH_MAX];\n    if (thread_status_file == -1) {\n        serverLogFromHandler(LL_WARNING, \"tid:%s: failed to open %s file\", tid, path_buff);\n        return 0;\n    }\n\n    int ret = 1;\n    size_t field_name_len = strlen(\"SigBlk:\\t\"); /* SigIgn has the same length */\n    char *line = NULL;\n    size_t fields_count = 2;\n    while ((line = fgets_async_signal_safe(buff, PATH_MAX, thread_status_file)) && fields_count) {\n        /* iterate the file until we reach the SigBlk or SigIgn field line */\n        if (!strncmp(buff, \"SigBlk:\\t\", field_name_len) || !strncmp(buff, \"SigIgn:\\t\", 
field_name_len)) {\n            line = buff + field_name_len;\n            unsigned long sig_mask;\n            if (-1 == string2ul_base16_async_signal_safe(line, sizeof(buff), &sig_mask)) {\n                serverLogRawFromHandler(LL_WARNING, \"Can't convert signal mask to an unsigned long due to an overflow\");\n                ret = 0;\n                break;\n            }\n\n            /* The bit position in a signal mask aligns with the signal number. Since signal numbers\n             * start from 1, we subtract 1 from the signal number to align it with the zero-based\n             * bit indexing used by the mask. */\n            if (sig_mask & (1L << (sig_num - 1))) { /* if the signal is blocked/ignored return 0 */\n                ret = 0;\n                break;\n            }\n            --fields_count;\n        }\n    }\n\n    close(thread_status_file);\n\n    /* if we reached EOF, it means we haven't found SigBlk and/or SigIgn, something is wrong */\n    if (line == NULL) {\n        ret = 0;\n        serverLogFromHandler(LL_WARNING, \"tid:%s: failed to find SigBlk and/or SigIgn field(s) in %s/%s/status file\", tid, proc_pid_task_path, tid);\n    }\n    return ret;\n}\n\n/** We are using syscall(SYS_getdents64) to read directories, which, unlike opendir(), is considered\n * async-signal-safe. The glibc wrapper getdents64() is only available as of glibc 2.30.\n * To support earlier versions of glibc, we use syscall(SYS_getdents64), which requires defining\n * linux_dirent64 ourselves. This structure is very old and stable: it will not change unless the kernel\n * chooses to break compatibility with all existing binaries. 
Highly unlikely.\n*/\nstruct linux_dirent64 {\n   unsigned long long d_ino;\n   long long d_off;\n   unsigned short d_reclen;     /* Length of this linux_dirent */\n   unsigned char  d_type;\n   char           d_name[256];  /* Filename (null-terminated) */\n};\n\n/** Returns the number of the process's threads that can receive signal sig_num.\n * Writes into tids the tids of these threads.\n * If it fails, returns 0.\n*/\nstatic size_t get_ready_to_signal_threads_tids(int sig_num, pid_t tids[TIDS_MAX_SIZE]) {\n    /* Open the /proc/<pid>/task directory. */\n    char path_buff[PATH_MAX];\n    snprintf_async_signal_safe(path_buff, PATH_MAX, \"/proc/%d/task\", getpid());\n\n    int dir;\n    if (-1 == (dir = open(path_buff, O_RDONLY | O_DIRECTORY))) return 0;\n\n    size_t tids_count = 0;\n    pid_t calling_tid = syscall(SYS_gettid);\n    int current_thread_index = -1;\n    long nread;\n    char buff[PATH_MAX] = {0};\n\n    /* readdir() is not async-signal-safe (AS-safe).\n     * Hence, we read the directory using SYS_getdents64, which is AS-safe. */\n    while ((nread = syscall(SYS_getdents64, dir, buff, PATH_MAX))) {\n        if (nread == -1) {\n            close(dir);\n            serverLogRawFromHandler(LL_WARNING, \"get_ready_to_signal_threads_tids(): Failed to read the process's task directory\");\n            return 0;\n        }\n        /* Each thread is represented by a directory */\n        for (long pos = 0; pos < nread;) {\n            struct linux_dirent64 *entry = (struct linux_dirent64 *)(buff + pos);\n            pos += entry->d_reclen;\n            /* Skip irrelevant directories. */\n            if (strcmp(entry->d_name, \".\") == 0 || strcmp(entry->d_name, \"..\") == 0) continue;\n\n            /* the thread's directory name is equivalent to its tid. 
*/\n            long tid;\n            string2l(entry->d_name, strlen(entry->d_name), &tid);\n\n            if(!is_thread_ready_to_signal(path_buff, entry->d_name, sig_num)) continue;\n\n            if(tid == calling_tid) {\n                current_thread_index = tids_count;\n            }\n\n            /* save the thread id */\n            tids[tids_count++] = tid;\n\n            /* Stop if we reached the maximum number of threads. */\n            if(tids_count == TIDS_MAX_SIZE) {\n                serverLogRawFromHandler(LL_WARNING, \"get_ready_to_signal_threads_tids(): Reached the limit of the tids buffer.\");\n                break;\n            }\n        }\n\n        if(tids_count == TIDS_MAX_SIZE) break;\n    }\n\n    /* Swap the last tid with the current thread id */\n    if(current_thread_index != -1) {\n        pid_t last_tid = tids[tids_count - 1];\n\n        tids[tids_count - 1] = calling_tid;\n        tids[current_thread_index] = last_tid;\n    }\n\n    close(dir);\n\n    return tids_count;\n}\n#endif /* __linux__ */\n#endif /* HAVE_BACKTRACE */\n"
  },
  {
    "path": "src/debugmacro.h",
    "content": "/* This file contains debugging macros to be used when investigating issues.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef _REDIS_DEBUGMACRO_H_\n#define _REDIS_DEBUGMACRO_H_\n\n#include <stdio.h>\n#define D(...)                                                               \\\n    do {                                                                     \\\n        FILE *fp = fopen(\"/tmp/log.txt\",\"a\");                                \\\n        fprintf(fp,\"%s:%s:%d:\\t\", __FILE__, __func__, __LINE__);             \\\n        fprintf(fp,__VA_ARGS__);                                             \\\n        fprintf(fp,\"\\n\");                                                    \\\n        fclose(fp);                                                          \\\n    } while (0)\n\n#endif /* _REDIS_DEBUGMACRO_H_ */\n"
  },
  {
    "path": "src/defrag.c",
    "content": "/* \n * Active memory defragmentation\n * Try to find key / value allocations that need to be re-allocated in order \n * to reduce external fragmentation.\n * We do that by scanning the keyspace and for each pointer we have, we can try to\n * ask the allocator if moving it to a new address will help reduce fragmentation.\n *\n * Copyright (c) 2020-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include <stddef.h>\n#include <math.h>\n\n#ifdef HAVE_DEFRAG\n\n#define DEFRAG_CYCLE_US 500 /* Standard duration of defrag cycle (in microseconds) */\n\ntypedef enum { DEFRAG_NOT_DONE = 0,\n               DEFRAG_DONE = 1 } doneStatus;\n\n/*\n * Defragmentation is performed in stages. Each stage is serviced by a stage function\n * (defragStageFn). The stage function is passed a context (void*) to defrag. The contents of that\n * context are unique to the particular stage - and may even be NULL for some stage functions. The\n * same stage function can be used multiple times (for different stages) each having a different\n * context.\n *\n * Parameters:\n *  endtime     - This is the monotonic time that the function should end and return. This ensures\n *                a bounded latency due to defrag.\n *  ctx         - A pointer to context which is unique to the stage function.\n *\n * Returns:\n *  - DEFRAG_DONE if the stage is complete\n *  - DEFRAG_NOT_DONE if there is more work to do\n */\ntypedef doneStatus (*defragStageFn)(void *ctx, monotime endtime);\n\n/* Function pointer type for freeing context in defragmentation stages. 
*/\ntypedef void (*defragStageContextFreeFn)(void *ctx);\ntypedef struct {\n    defragStageFn stage_fn; /* The function to be invoked for the stage */\n    defragStageContextFreeFn ctx_free_fn; /* Function to free the context */\n    void *ctx; /* Context, unique to the stage function */\n} StageDescriptor;\n\n/* Globals needed for the main defrag processing logic.\n * Doesn't include variables specific to a stage or type of data. */\nstruct DefragContext {\n    monotime start_cycle;           /* Time of beginning of defrag cycle */\n    long long start_defrag_hits;    /* server.stat_active_defrag_hits captured at beginning of cycle */\n    long long start_defrag_misses;  /* server.stat_active_defrag_misses captured at beginning of cycle */\n    float start_frag_pct;           /* Fragmentation percent at beginning of defrag cycle */\n    float decay_rate;               /* Defrag speed decay rate */\n\n    list *remaining_stages;         /* List of stages which remain to be processed */\n    listNode *current_stage;        /* The list node of the stage that's currently being processed */\n\n    long long timeproc_id;      /* Eventloop ID of the timerproc (or AE_DELETED_EVENT_ID) */\n    monotime timeproc_end_time; /* Ending time of previous timerproc execution */\n    long timeproc_overage_us;   /* A correction value if over target CPU percent */\n};\nstatic struct DefragContext defrag = {0, 0, 0, 0, 1.0f};\n\n#define ITER_SLOT_DEFRAG_LUT (-2)\n#define ITER_SLOT_UNASSIGNED (-1)\n\n/* There are a number of stages which process a kvstore. To simplify this, a stage helper function\n * `defragStageKvstoreHelper()` is defined. This function aids in iterating over the kvstore. It\n * uses these definitions.\n */\n/* State of the kvstore helper. The context passed to the kvstore helper MUST BEGIN\n * with a kvstoreIterState (or be passed as NULL). */\ntypedef struct {\n    kvstore *kvs;\n    int slot;   /* See the ITER_SLOT_XXX defines for special values. 
*/\n    unsigned long cursor;\n} kvstoreIterState;\n#define INIT_KVSTORE_STATE(kvs) ((kvstoreIterState){(kvs), ITER_SLOT_DEFRAG_LUT, 0})\n\n/* The kvstore helper uses this function to perform tasks before continuing the iteration. For the\n * main dictionary, large items are set aside and processed by this function before continuing with\n * iteration over the kvstore.\n *  endtime     - This is the monotonic time that the function should end and return.\n *  ctx         - Context for functions invoked by the helper. If provided in the call to\n *                `defragStageKvstoreHelper()`, the `kvstoreIterState` portion (at the beginning)\n *                will be updated with the current kvstore iteration status.\n *\n * Returns:\n *  - DEFRAG_DONE if the pre-continue work is complete\n *  - DEFRAG_NOT_DONE if there is more work to do\n */\ntypedef doneStatus (*kvstoreHelperPreContinueFn)(void *ctx, monotime endtime);\n\ntypedef struct {\n    kvstoreIterState kvstate;\n    int dbid;\n\n    /* When scanning a main kvstore, large elements are queued for later handling rather than\n     * causing a large latency spike while processing a hash table bucket. This list is only used\n     * for stage: \"defragStageDbKeys\". It will only contain values for the current kvstore being\n     * defragged.\n     * Note that this is a list of key names. It's possible that the key may be deleted or modified\n     * before \"later\" and we will search by key name to find the entry when we defrag the item later. */\n    list *defrag_later;\n    unsigned long defrag_later_cursor;\n} defragKeysCtx;\nstatic_assert(offsetof(defragKeysCtx, kvstate) == 0, \"defragStageKvstoreHelper requires this\");\n\n/* Context for subexpires */\ntypedef struct {\n    estore *subexpires;\n    int slot; /* Consider defines ITER_SLOT_XXX for special values. 
*/\n    int dbid;\n    unsigned long cursor;\n} defragSubexpiresCtx;\n\n/* Context for pubsub kvstores */\ntypedef dict *(*getClientChannelsFn)(client *);\ntypedef struct {\n    kvstoreIterState kvstate;\n    getClientChannelsFn getPubSubChannels;\n} defragPubSubCtx;\nstatic_assert(offsetof(defragPubSubCtx, kvstate) == 0, \"defragStageKvstoreHelper requires this\");\n\ntypedef struct {\n    sds module_name;\n    unsigned long cursor;\n} defragModuleCtx;\n\n/* this method was added to jemalloc in order to help us understand which\n * pointers are worthwhile moving and which aren't */\nint je_get_defrag_hint(void* ptr);\n\n#if !defined(DEBUG_DEFRAG_FORCE)\n/* Defrag helper for generic allocations without freeing old pointer.\n *\n * Note: The caller is responsible for freeing the old pointer if this function\n * returns a non-NULL value. */\nvoid* activeDefragAllocWithoutFree(void *ptr) {\n    size_t size;\n    void *newptr;\n    if(!je_get_defrag_hint(ptr)) {\n        server.stat_active_defrag_misses++;\n        return NULL;\n    }\n    /* move this allocation to a new allocation.\n     * make sure not to use the thread cache. so that we don't get back the same\n     * pointers we try to free */\n    size = zmalloc_usable_size(ptr);\n    newptr = zmalloc_no_tcache(size);\n    memcpy(newptr, ptr, size);\n    server.stat_active_defrag_hits++;\n    return newptr;\n}\n\nvoid activeDefragFree(void *ptr) {\n    zfree_no_tcache(ptr);\n}\n\n/* Defrag helper for generic allocations.\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. */\nvoid* activeDefragAlloc(void *ptr) {\n    void *newptr = activeDefragAllocWithoutFree(ptr);\n    if (newptr)\n        activeDefragFree(ptr);\n    return newptr;\n}\n\n/* Raw memory allocation for defrag, avoid using tcache. 
*/\nvoid *activeDefragAllocRaw(size_t size) {\n    return zmalloc_no_tcache(size);\n}\n\n/* Raw memory free for defrag, avoid using tcache. */\nvoid activeDefragFreeRaw(void *ptr) {\n    activeDefragFree(ptr);\n    server.stat_active_defrag_hits++;\n}\n#else\nvoid *activeDefragAllocWithoutFree(void *ptr) {\n    size_t size;\n    void *newptr;\n    size = zmalloc_usable_size(ptr);\n    newptr = zmalloc(size);\n    memcpy(newptr, ptr, size);\n    server.stat_active_defrag_hits++;\n    return newptr;\n}\n\nvoid activeDefragFree(void *ptr) {\n    zfree(ptr);\n}\n\nvoid *activeDefragAlloc(void *ptr) {\n    void *newptr = activeDefragAllocWithoutFree(ptr);\n    if (newptr)\n        activeDefragFree(ptr);\n    return newptr;\n}\n\nvoid *activeDefragAllocRaw(size_t size) {\n    return zmalloc(size);\n}\n\nvoid activeDefragFreeRaw(void *ptr) {\n    zfree(ptr);\n    server.stat_active_defrag_hits++;\n}\n#endif\n\n/*Defrag helper for sds strings\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. */\nsds activeDefragSds(sds sdsptr) {\n    void* ptr = sdsAllocPtr(sdsptr);\n    void* newptr = activeDefragAlloc(ptr);\n    if (newptr) {\n        size_t offset = sdsptr - (char*)ptr;\n        sdsptr = (char*)newptr + offset;\n        return sdsptr;\n    }\n    return NULL;\n}\n\n/* Defrag helper for hfield (entry) strings\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. 
*/\nEntry *activeDefragEntry(Entry *entry) {\n    Entry *ret = NULL;\n\n    /* First, defrag the entry allocation itself */\n    void *ptr = entryGetAllocPtr(entry);\n    void *newptr = activeDefragAlloc(ptr);\n    if (newptr) {\n        size_t offset = (char*)entry - (char*)ptr;\n        entry = (Entry *)((char*)newptr + offset);\n        ret = entry;\n    }\n\n    /* Then defrag the value if it's not embedded (using the potentially new entry) */\n    sds *valuePtr = entryGetValuePtrRef(entry);\n    if (valuePtr) {\n        sds new_value = activeDefragSds(*valuePtr);\n        if (new_value) *valuePtr = new_value;\n    }\n\n    return ret;\n}\n\n/* Defrag helper for hfield strings and update the reference in the dict.\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. */\nvoid *activeDefragHfieldAndUpdateRef(void *ptr, void *privdata) {\n    dict *d = privdata;\n    dictEntryLink link;\n\n    /* Before the key is released, obtain the link to\n     * ensure we can safely access and update the key. */\n    link = dictFindLink(d, ptr, NULL);\n    serverAssert(link);\n\n    Entry *newEntry = activeDefragEntry(ptr);\n    if (newEntry)\n        dictSetKeyAtLink(d, newEntry, &link, 0);\n    return newEntry;\n}\n\n/* Defrag helper for robj and/or string objects with expected refcount.\n *\n * Like activeDefragStringOb, but it requires the caller to pass in the expected\n * reference count. In some cases, the caller needs to update a robj whose\n * reference count is not 1, in these cases, the caller must explicitly pass\n * in the reference count, otherwise defragmentation will not be performed.\n * Note that the caller is responsible for updating any other references to the robj. 
*/\nrobj *activeDefragStringObEx(robj* ob, int expected_refcount) {\n    robj *ret = NULL;\n    if (ob->refcount!=expected_refcount)\n        return NULL;\n\n    /* try to defrag robj (only if not an EMBSTR type, which is handled below). */\n    if (ob->type!=OBJ_STRING || ob->encoding!=OBJ_ENCODING_EMBSTR) {\n        if ((ret = activeDefragAlloc(ob))) {\n            ob = ret;\n        }\n    }\n\n    /* try to defrag string object */\n    if (ob->type == OBJ_STRING) {\n        if(ob->encoding==OBJ_ENCODING_RAW) {\n            sds newsds = activeDefragSds((sds)ob->ptr);\n            if (newsds) {\n                ob->ptr = newsds;\n            }\n        } else if (ob->encoding==OBJ_ENCODING_EMBSTR) {\n            /* The sds is embedded in the object allocation, calculate the\n             * offset and update the pointer in the new allocation. */\n            long ofs = (intptr_t)ob->ptr - (intptr_t)ob;\n            if ((ret = activeDefragAlloc(ob))) {\n                ret->ptr = (void*)((intptr_t)ret + ofs);\n            }\n        } else if (ob->encoding!=OBJ_ENCODING_INT) {\n            serverPanic(\"Unknown string encoding\");\n        }\n    }\n    return ret;\n}\n\n/* Defrag helper for robj and/or string objects\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. */\nrobj *activeDefragStringOb(robj* ob) {\n    return activeDefragStringObEx(ob, 1);\n}\n\n/* Defrag helper for lua scripts\n *\n * returns NULL in case the allocation wasn't moved.\n * when it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. 
*/\nluaScript *activeDefragLuaScript(luaScript *script) {\n    luaScript *ret = NULL;\n\n    /* try to defrag script struct */\n    if ((ret = activeDefragAlloc(script))) {\n        script = ret;\n    }\n\n    /* try to defrag actual script object */\n    robj *ob = activeDefragStringOb(script->body);\n    if (ob) script->body = ob;\n\n    return ret;\n}\n\n/* Defrag helper for dict main allocations (dict struct, and hash tables).\n * Receives a pointer to the dict* and return a new dict* when the dict\n * struct itself was moved.\n * \n * Returns NULL in case the allocation wasn't moved.\n * When it returns a non-null value, the old pointer was already released\n * and should NOT be accessed. */\ndict *dictDefragTables(dict *d) {\n    dict *ret = NULL;\n    dictEntry **newtable;\n    /* handle the dict struct */\n    if ((ret = activeDefragAlloc(d)))\n        d = ret;\n    /* handle the first hash table */\n    if (!d->ht_table[0]) return ret; /* created but unused */\n    newtable = activeDefragAlloc(d->ht_table[0]);\n    if (newtable)\n        d->ht_table[0] = newtable;\n    /* handle the second hash table */\n    if (d->ht_table[1]) {\n        newtable = activeDefragAlloc(d->ht_table[1]);\n        if (newtable)\n            d->ht_table[1] = newtable;\n    }\n    return ret;\n}\n\n/* Internal function used by activeDefragZsetNode */\nvoid zslUpdateNode(zskiplist *zsl, zskiplistNode *oldnode, zskiplistNode *newnode, zskiplistNode **update) {\n    int i;\n    for (i = 0; i < zsl->level; i++) {\n        if (update[i]->level[i].forward == oldnode)\n            update[i]->level[i].forward = newnode;\n    }\n    serverAssert(zsl->header!=oldnode);\n    if (newnode->level[0].forward) {\n        serverAssert(newnode->level[0].forward->backward==oldnode);\n        newnode->level[0].forward->backward = newnode;\n    } else {\n        serverAssert(zsl->tail==oldnode);\n        zsl->tail = newnode;\n    }\n}\n\n/* Defrag a single zset node, update dictEntry and skiplist 
struct */\nvoid activeDefragZsetNode(zset *zs, dictEntry *de, dictEntryLink plink) {\n    zskiplistNode *znode = dictGetKey(de);\n\n    /* Try to defrag the skiplist node first */\n    zskiplistNode *newnode = activeDefragAllocWithoutFree(znode);\n    if (!newnode) return; /* No defrag needed */\n\n    /* Node was defragged, now we need to update all skiplist pointers */\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *iter;\n    int i;\n    double score = newnode->score;\n    sds ele = zslGetNodeElement(newnode);\n\n    /* Find all pointers that need to be updated */\n    iter = zs->zsl->header;\n    for (i = zs->zsl->level-1; i >= 0; i--) {\n        while (iter->level[i].forward &&\n            iter->level[i].forward != znode &&\n            zslCompareWithNode(score, ele, iter->level[i].forward) > 0)\n            iter = iter->level[i].forward;\n        update[i] = iter;\n    }\n\n    /* Verify we found the right node */\n    iter = iter->level[0].forward;\n    serverAssert(iter && iter == znode);\n\n    /* Update all skiplist pointers and dict key */\n    zslUpdateNode(zs->zsl, znode, newnode, update);\n    dictSetKeyAtLink(zs->dict, newnode, &plink, 0);\n\n    /* Free the old node now that all pointers have been updated */\n    activeDefragFree(znode);\n}\n\n#define DEFRAG_SDS_DICT_NO_VAL 0\n#define DEFRAG_SDS_DICT_VAL_IS_SDS 1\n#define DEFRAG_SDS_DICT_VAL_IS_STROB 2\n#define DEFRAG_SDS_DICT_VAL_VOID_PTR 3\n#define DEFRAG_SDS_DICT_VAL_LUA_SCRIPT 4\n\nvoid activeDefragSdsDictCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    UNUSED(privdata);\n    UNUSED(de);\n}\n\nvoid activeDefragLuaScriptDictCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    UNUSED(privdata);\n\n    /* If this luaScript is in the LRU list, unconditionally update the node's\n     * value pointer to the current dict key (regardless of reallocation). 
*/\n    luaScript *script = dictGetVal(de);\n    if (script->node)\n        script->node->value = dictGetKey(de);\n}\n\nvoid activeDefragHfieldDictCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    dict *d = privdata;\n    Entry *newEntry = NULL, *entry = dictGetKey(de);\n\n    /* If the hfield does not have TTL, we directly defrag it.\n     * Fields with TTL are skipped here and will be defragmented later\n     * during the hash expiry ebuckets defragmentation phase. */\n    if (entryGetExpiry(entry) == EB_EXPIRE_TIME_INVALID) {\n        if ((newEntry = activeDefragEntry(entry))) {\n            /* Hash dicts use no_value=1, so we must use dictSetKeyAtLink */\n            dictSetKeyAtLink(d, newEntry, &plink, 0);\n        }\n    }\n}\n\n/* Defrag a dict with sds key and optional value (either ptr, sds or robj string) */\nvoid activeDefragSdsDict(dict* d, int val_type) {\n    unsigned long cursor = 0;\n    dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragKey = (dictDefragAllocFunction *)activeDefragSds,\n        .defragVal = (val_type == DEFRAG_SDS_DICT_VAL_IS_SDS ? (dictDefragAllocFunction *)activeDefragSds :\n                      val_type == DEFRAG_SDS_DICT_VAL_IS_STROB ? (dictDefragAllocFunction *)activeDefragStringOb :\n                      val_type == DEFRAG_SDS_DICT_VAL_VOID_PTR ? (dictDefragAllocFunction *)activeDefragAlloc :\n                      val_type == DEFRAG_SDS_DICT_VAL_LUA_SCRIPT ? (dictDefragAllocFunction *)activeDefragLuaScript :\n                      NULL)\n    };\n    dictScanFunction *fn = (val_type == DEFRAG_SDS_DICT_VAL_LUA_SCRIPT ?\n        activeDefragLuaScriptDictCallback : activeDefragSdsDictCallback);\n    do {\n        cursor = dictScanDefrag(d, cursor, fn,\n                                &defragfns, NULL);\n    } while (cursor != 0);\n}\n\n/* Defrag a dict with hfield key (no separate value - value is part of entry). 
*/\nvoid activeDefragHfieldDict(dict *d) {\n    unsigned long cursor = 0;\n    dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc, /* Only defrag dictEntry */\n        .defragKey = NULL, /* Will be defragmented in activeDefragHfieldDictCallback. */\n        .defragVal = NULL  /* No separate value - value is part of the entry (hfield). */\n    };\n    do {\n        cursor = dictScanDefrag(d, cursor, activeDefragHfieldDictCallback,\n                                &defragfns, d);\n    } while (cursor != 0);\n\n    /* Continue with defragmentation of hash fields that have a TTL.\n     * During the dictionary defragmentation above, we skipped fields with TTL;\n     * now we continue to defrag those fields by using the expiry buckets. */\n    if (d->type == &entryHashDictTypeWithHFE) {\n        cursor = 0;\n        ebDefragFunctions eb_defragfns = {\n            .defragAlloc = activeDefragAlloc,\n            .defragItem = activeDefragHfieldAndUpdateRef\n        };\n        ebuckets *eb = hashTypeGetDictMetaHFE(d);\n        while (ebScanDefrag(eb, &hashFieldExpireBucketsType, &cursor, &eb_defragfns, d)) {}\n    }\n}\n\n/* Defrag a quicklist node and its listpack entry, updating the neighbor and list pointers. */\nvoid activeDefragQuickListNode(quicklist *ql, quicklistNode **node_ref) {\n    quicklistNode *newnode, *node = *node_ref;\n    unsigned char *newzl;\n    if ((newnode = activeDefragAlloc(node))) {\n        if (newnode->prev)\n            newnode->prev->next = newnode;\n        else\n            ql->head = newnode;\n        if (newnode->next)\n            newnode->next->prev = newnode;\n        else\n            ql->tail = newnode;\n        *node_ref = node = newnode;\n    }\n    if ((newzl = activeDefragAlloc(node->entry)))\n        node->entry = newzl;\n}\n\nvoid activeDefragQuickListNodes(quicklist *ql) {\n    quicklistNode *node = ql->head;\n    while (node) {\n        activeDefragQuickListNode(ql, &node);\n        node = node->next;\n    }\n}\n\n/* when the value has lots 
of elements, we want to handle it later and not as\n * part of the main dictionary scan. This is needed in order to prevent latency\n * spikes when handling large items */\nvoid defragLater(defragKeysCtx *ctx, kvobj *kv) {\n    if (!ctx->defrag_later) {\n        ctx->defrag_later = listCreate();\n        listSetFreeMethod(ctx->defrag_later, sdsfreegeneric);\n        ctx->defrag_later_cursor = 0;\n    }\n    sds key = sdsdup(kvobjGetKey(kv));\n    listAddNodeTail(ctx->defrag_later, key);\n}\n\n/* returns 0 if no more work needs to be done, and 1 if time is up and more work is needed. */\nlong scanLaterList(robj *ob, unsigned long *cursor, monotime endtime) {\n    quicklist *ql = ob->ptr;\n    quicklistNode *node;\n    long iterations = 0;\n    int bookmark_failed = 0;\n    serverAssert(ob->type == OBJ_LIST && ob->encoding == OBJ_ENCODING_QUICKLIST);\n\n    if (*cursor == 0) {\n        /* if cursor is 0, we start a new iteration */\n        node = ql->head;\n    } else {\n        node = quicklistBookmarkFind(ql, \"_AD\");\n        if (!node) {\n            /* if the bookmark was deleted, it means we reached the end. */\n            *cursor = 0;\n            return 0;\n        }\n        node = node->next;\n    }\n\n    (*cursor)++;\n    while (node) {\n        activeDefragQuickListNode(ql, &node);\n        server.stat_active_defrag_scanned++;\n        if (++iterations > 128 && !bookmark_failed) {\n            if (getMonotonicUs() > endtime) {\n                if (!quicklistBookmarkCreate(&ql, \"_AD\", node)) {\n                    bookmark_failed = 1;\n                } else {\n                    ob->ptr = ql; /* bookmark creation may have re-allocated the quicklist */\n                    return 1;\n                }\n            }\n            iterations = 0;\n        }\n        node = node->next;\n    }\n    quicklistBookmarkDelete(ql, \"_AD\");\n    *cursor = 0;\n    return bookmark_failed? 
1: 0;\n}\n\ntypedef struct {\n    zset *zs;\n} scanLaterZsetData;\n\nvoid scanZsetCallback(void *privdata, const dictEntry *_de, dictEntryLink plink) {\n    dictEntry *de = (dictEntry*)_de;\n    scanLaterZsetData *data = privdata;\n    activeDefragZsetNode(data->zs, de, plink);\n    server.stat_active_defrag_scanned++;\n}\n\nvoid scanLaterZset(robj *ob, unsigned long *cursor) {\n    serverAssert(ob->type == OBJ_ZSET && ob->encoding == OBJ_ENCODING_SKIPLIST);\n    zset *zs = (zset*)ob->ptr;\n    dict *d = zs->dict;\n    scanLaterZsetData data = {zs};\n    dictDefragFunctions defragfns = {.defragAlloc = activeDefragAlloc};\n    *cursor = dictScanDefrag(d, *cursor, scanZsetCallback, &defragfns, &data);\n}\n\n/* Used as scan callback when all the work is done in the dictDefragFunctions. */\nvoid scanCallbackCountScanned(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    UNUSED(privdata);\n    UNUSED(de);\n    server.stat_active_defrag_scanned++;\n}\n\nvoid scanLaterSet(robj *ob, unsigned long *cursor) {\n    serverAssert(ob->type == OBJ_SET && ob->encoding == OBJ_ENCODING_HT);\n    dict *d = ob->ptr;\n    dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragKey = (dictDefragAllocFunction *)activeDefragSds\n    };\n    *cursor = dictScanDefrag(d, *cursor, scanCallbackCountScanned, &defragfns, NULL);\n}\n\nvoid scanLaterHash(robj *ob, unsigned long *cursor) {\n    serverAssert(ob->type == OBJ_HASH && ob->encoding == OBJ_ENCODING_HT);\n    dict *d = ob->ptr;\n\n    typedef enum {\n        HASH_DEFRAG_NONE = 0,\n        HASH_DEFRAG_DICT = 1,\n        HASH_DEFRAG_EBUCKETS = 2\n    } hashDefragPhase;\n    static hashDefragPhase defrag_phase = HASH_DEFRAG_NONE;\n\n    /* Start a new hash defrag. */\n    if (!*cursor || defrag_phase == HASH_DEFRAG_NONE)\n        defrag_phase = HASH_DEFRAG_DICT;\n\n    /* Defrag hash dictionary but skip TTL fields. 
*/\n    if (defrag_phase == HASH_DEFRAG_DICT) {\n        dictDefragFunctions defragfns = {\n            .defragAlloc = activeDefragAlloc,\n            .defragKey = NULL, /* Will be defragmented in activeDefragHfieldDictCallback. */\n            .defragVal = NULL  /* value stored along with key as part of Entry */\n        };\n        *cursor = dictScanDefrag(d, *cursor, activeDefragHfieldDictCallback, &defragfns, d);\n\n        /* Move to next phase. */\n        if (!*cursor) defrag_phase = HASH_DEFRAG_EBUCKETS;\n    }\n\n    /* Defrag ebuckets and TTL fields. */\n    if (defrag_phase == HASH_DEFRAG_EBUCKETS) {\n        if (d->type == &entryHashDictTypeWithHFE) {\n            ebDefragFunctions eb_defragfns = {\n                .defragAlloc = activeDefragAlloc,\n                .defragItem = activeDefragHfieldAndUpdateRef\n            };\n            ebuckets *eb = hashTypeGetDictMetaHFE(d);\n            ebScanDefrag(eb, &hashFieldExpireBucketsType, cursor, &eb_defragfns, d);\n        } else {\n            /* Finish defragmentation if this dict doesn't have expired fields. 
*/\n            *cursor = 0;\n        }\n        if (!*cursor) defrag_phase = HASH_DEFRAG_NONE;\n    }\n}\n\nvoid defragQuicklist(defragKeysCtx *ctx, kvobj *kv) {\n    quicklist *ql = kv->ptr, *newql;\n    serverAssert(kv->type == OBJ_LIST && kv->encoding == OBJ_ENCODING_QUICKLIST);\n    if ((newql = activeDefragAlloc(ql)))\n        kv->ptr = ql = newql;\n    if (ql->len > server.active_defrag_max_scan_fields)\n        defragLater(ctx, kv);\n    else\n        activeDefragQuickListNodes(ql);\n}\n\nvoid defragZsetSkiplist(defragKeysCtx *ctx, kvobj *ob) {\n    zset *zs = (zset*)ob->ptr;\n    zset *newzs;\n    zskiplist *newzsl;\n    dict *newdict;\n    struct zskiplistNode *newheader;\n    serverAssert(ob->type == OBJ_ZSET && ob->encoding == OBJ_ENCODING_SKIPLIST);\n    if ((newzs = activeDefragAlloc(zs)))\n        ob->ptr = zs = newzs;\n    if ((newzsl = activeDefragAlloc(zs->zsl)))\n        zs->zsl = newzsl;\n    if ((newheader = activeDefragAlloc(zs->zsl->header)))\n        zs->zsl->header = newheader;\n    if (dictSize(zs->dict) > server.active_defrag_max_scan_fields)\n        defragLater(ctx, ob);\n    else {\n        /* Use dictScanDefrag to iterate and defrag both dictEntry structures and skiplist nodes.\n         * dictScanDefrag handles defragging dictEntry/dictEntryNoValue structures via defragfns,\n         * and calls our callback with plink for each entry so we can defrag skiplist nodes. 
*/\n        scanLaterZsetData data = {zs};\n        dictDefragFunctions defragfns = {.defragAlloc = activeDefragAlloc};\n        unsigned long cursor = 0;\n        do {\n            cursor = dictScanDefrag(zs->dict, cursor, scanZsetCallback, &defragfns, &data);\n        } while (cursor != 0);\n    }\n    /* defrag the dict struct and tables */\n    if ((newdict = dictDefragTables(zs->dict)))\n        zs->dict = newdict;\n}\n\nvoid defragHash(defragKeysCtx *ctx, kvobj *ob) {\n    dict *d, *newd;\n    serverAssert(ob->type == OBJ_HASH && ob->encoding == OBJ_ENCODING_HT);\n    d = ob->ptr;\n    if (dictSize(d) > server.active_defrag_max_scan_fields)\n        defragLater(ctx, ob);\n    else\n        activeDefragHfieldDict(d);\n    /* defrag the dict struct and tables */\n    if ((newd = dictDefragTables(ob->ptr)))\n        ob->ptr = newd;\n}\n\nvoid defragSet(defragKeysCtx *ctx, kvobj *ob) {\n    dict *d, *newd;\n    serverAssert(ob->type == OBJ_SET && ob->encoding == OBJ_ENCODING_HT);\n    d = ob->ptr;\n    if (dictSize(d) > server.active_defrag_max_scan_fields)\n        defragLater(ctx, ob);\n    else\n        activeDefragSdsDict(d, DEFRAG_SDS_DICT_NO_VAL);\n    /* defrag the dict struct and tables */\n    if ((newd = dictDefragTables(ob->ptr)))\n        ob->ptr = newd;\n}\n\n/* Defrag callback for radix tree iterator, called for each node,\n * used in order to defrag the nodes' allocations. */\nint defragRaxNode(raxNode **noderef, void *privdata) {\n    UNUSED(privdata);\n    raxNode *newnode = activeDefragAlloc(*noderef);\n    if (newnode) {\n        *noderef = newnode;\n        return 1;\n    }\n    return 0;\n}\n\n/* returns 0 if no more work needs to be done, and 1 if time is up and more work is needed.
*/\nint scanLaterStreamListpacks(robj *ob, unsigned long *cursor, monotime endtime) {\n    static unsigned char next[sizeof(streamID)];\n    raxIterator ri;\n    long iterations = 0;\n    serverAssert(ob->type == OBJ_STREAM && ob->encoding == OBJ_ENCODING_STREAM);\n\n    stream *s = ob->ptr;\n    raxStart(&ri,s->rax);\n    if (*cursor == 0) {\n        /* if cursor is 0, we start a new iteration */\n        defragRaxNode(&s->rax->head, NULL);\n        /* assign the iterator node callback before the seek, so that the\n         * initial nodes that are processed till the first item are covered */\n        ri.node_cb = defragRaxNode;\n        raxSeek(&ri,\"^\",NULL,0);\n    } else {\n        /* if cursor is non-zero, we seek to the static 'next'.\n         * Since node_cb is set after the seek operation, any node traversed during seek wouldn't\n         * be defragmented. To prevent this, we advance to next node before exiting previous\n         * run, ensuring it gets defragmented instead of being skipped during current seek. */\n        if (!raxSeek(&ri,\">=\", next, sizeof(next))) {\n            *cursor = 0;\n            raxStop(&ri);\n            return 0;\n        }\n        /* assign the iterator node callback after the seek, so that the\n         * initial nodes that are processed till now aren't covered */\n        ri.node_cb = defragRaxNode;\n    }\n\n    (*cursor)++;\n    while (raxNext(&ri)) {\n        void *newdata = activeDefragAlloc(ri.data);\n        if (newdata)\n            raxSetData(ri.node, ri.data=newdata);\n        server.stat_active_defrag_scanned++;\n        if (++iterations > 128) {\n            if (getMonotonicUs() > endtime) {\n                /* Move to next node.
*/\n                if (!raxNext(&ri)) {\n                    /* If we reached the end, we can stop */\n                    *cursor = 0;\n                    raxStop(&ri);\n                    return 0;\n                }\n                serverAssert(ri.key_len==sizeof(next));\n                memcpy(next,ri.key,ri.key_len);\n                raxStop(&ri);\n                return 1;\n            }\n            iterations = 0;\n        }\n    }\n    raxStop(&ri);\n    *cursor = 0;\n    return 0;\n}\n\n/* optional callback used to defrag each rax element (not including the element pointer itself) */\ntypedef void *(raxDefragFunction)(raxIterator *ri, void *privdata);\n\n/* defrag radix tree including:\n * 1) rax struct\n * 2) rax nodes\n * 3) rax entry data (only if defrag_data is specified)\n * 4) call a callback per element, and allow the callback to return a new pointer for the element */\nvoid defragRadixTree(rax **raxref, int defrag_data, raxDefragFunction *element_cb, void *element_cb_data) {\n    raxIterator ri;\n    rax* rax;\n    if ((rax = activeDefragAlloc(*raxref)))\n        *raxref = rax;\n    rax = *raxref;\n    raxStart(&ri,rax);\n    ri.node_cb = defragRaxNode;\n    defragRaxNode(&rax->head, NULL);\n    raxSeek(&ri,\"^\",NULL,0);\n    while (raxNext(&ri)) {\n        void *newdata = NULL;\n        if (element_cb)\n            newdata = element_cb(&ri, element_cb_data);\n        if (defrag_data && !newdata)\n            newdata = activeDefragAlloc(ri.data);\n        if (newdata)\n            raxSetData(ri.node, ri.data=newdata);\n    }\n    raxStop(&ri);\n}\n\nvoid* defragStreamConsumerPendingEntry(raxIterator *ri, void *privdata) {\n    streamConsumer *c = privdata;\n    streamNACK *nack = ri->data;\n    /* NACKs are already defragged by the CG PEL walk (defragStreamCGPendingEntry).\n     * cgroup_ref_node->value is also updated there for all NACKs (including\n     * unowned NACK-zone entries that have no consumer PEL walk).\n     * Here we only fix up
the back-pointer to the possibly-relocated consumer. */\n    nack->consumer = c;\n    return NULL;\n}\n\nvoid* defragStreamCGPendingEntry(raxIterator *ri, void *privdata) {\n    streamCG *cg = privdata;\n    streamNACK *nack = ri->data, *newnack;\n    /* Update cgroup_ref_node to the possibly-relocated CG for every NACK.\n     * Consumer-owned entries will get this overwritten again redundantly by\n     * defragStreamConsumerPendingEntry; unowned (NACK zone) entries have no\n     * consumer PEL walk, so this is their only chance. */\n    nack->cgroup_ref_node->value = cg;\n    newnack = activeDefragAlloc(nack);\n    if (newnack) {\n        /* If this NACK is owned by a consumer, update the consumer's PEL. */\n        if (newnack->consumer) {\n            void *prev;\n            raxInsert(newnack->consumer->pel, ri->key, ri->key_len, newnack, &prev);\n            serverAssert(prev == nack);\n        }\n        if (newnack->pel_prev) {\n            newnack->pel_prev->pel_next = newnack;\n        } else {\n            cg->pel_time_head = newnack;\n        }\n        if (newnack->pel_next) {\n            newnack->pel_next->pel_prev = newnack;\n        } else {\n            cg->pel_time_tail = newnack;\n        }\n        if (cg->pel_nack_tail == nack) {\n            cg->pel_nack_tail = newnack;\n        }\n    }\n    return newnack;\n}\n\nvoid* defragStreamConsumer(raxIterator *ri, void *privdata) {\n    stream *s = privdata;\n    streamConsumer *c = ri->data;\n    void *newc = activeDefragAlloc(c);\n    if (newc) {\n        c = newc;\n    }\n    sds newsds = activeDefragSds(c->name);\n    if (newsds)\n        c->name = newsds;\n    if (c->pel) {\n        /* Update pel back-pointer to new stream */\n        c->pel->alloc_size = &s->alloc_size;\n        defragRadixTree(&c->pel, 0, defragStreamConsumerPendingEntry, c);\n    }\n    return newc; /* returns NULL if c was not defragged */\n}\n\nvoid* defragStreamConsumerGroup(raxIterator *ri, void *privdata) {\n    stream 
*s = privdata;\n    streamCG *newcg, *cg = ri->data;\n    if ((newcg = activeDefragAlloc(cg)))\n        cg = newcg;\n    if (cg->pel) {\n        /* Update pel back-pointer to new stream */\n        cg->pel->alloc_size = &s->alloc_size;\n        defragRadixTree(&cg->pel, 0, defragStreamCGPendingEntry, cg);\n    }\n    if (cg->consumers) {\n        /* Update consumers back-pointer to new stream */\n        cg->consumers->alloc_size = &s->alloc_size;\n        defragRadixTree(&cg->consumers, 0, defragStreamConsumer, s);\n    }\n    return cg;\n}\n\n/* Defrag a single idmpProducer's dict and linked list entries. */\nstatic void defragIdmpProducer(idmpProducer *producer) {\n    if (producer->idmp_dict == NULL) return;\n\n    dict *newdict = dictDefragTables(producer->idmp_dict);\n    if (newdict)\n        producer->idmp_dict = newdict;\n\n    idmpEntry *prev = NULL;\n    idmpEntry *entry = producer->idmp_head;\n    while (entry != NULL) {\n        idmpEntry *next = entry->next;\n        idmpEntry *newentry = activeDefragAllocWithoutFree(entry);\n        if (newentry) {\n            dictEntry *de = dictFind(producer->idmp_dict, entry);\n            serverAssert(de);\n            dictSetKey(producer->idmp_dict, de, newentry);\n            if (prev)\n                prev->next = newentry;\n            else\n                producer->idmp_head = newentry;\n            if (producer->idmp_tail == entry)\n                producer->idmp_tail = newentry;\n            activeDefragFree(entry);\n            entry = newentry;\n        }\n        prev = entry;\n        entry = next;\n    }\n}\n\nstatic void* defragIdmpProducerCallback(raxIterator *ri, void *privdata) {\n    UNUSED(privdata);\n    idmpProducer *producer = ri->data;\n    idmpProducer *newproducer = activeDefragAlloc(producer);\n    if (newproducer) {\n        producer = newproducer;\n    }\n    defragIdmpProducer(producer);\n    return newproducer; /* returns NULL if producer was not defragged */\n}\n\nvoid 
defragStream(defragKeysCtx *ctx, kvobj *ob) {\n    serverAssert(ob->type == OBJ_STREAM && ob->encoding == OBJ_ENCODING_STREAM);\n    stream *s = ob->ptr, *news;\n\n    /* handle the main struct */\n    if ((news = activeDefragAlloc(s)))\n        ob->ptr = s = news;\n\n    /* Update rax back-pointer to new stream */\n    s->rax->alloc_size = &s->alloc_size;\n    if (raxSize(s->rax) > server.active_defrag_max_scan_fields) {\n        rax *newrax = activeDefragAlloc(s->rax);\n        if (newrax)\n            s->rax = newrax;\n        defragLater(ctx, ob);\n    } else\n        defragRadixTree(&s->rax, 1, NULL, NULL);\n\n    if (s->cgroups) {\n        /* Update cgroups back-pointer to new stream */\n        s->cgroups->alloc_size = &s->alloc_size;\n        defragRadixTree(&s->cgroups, 0, defragStreamConsumerGroup, s);\n    }\n\n    if (s->cgroups_ref) {\n        /* Update cgroups_ref back-pointer to new stream */\n        s->cgroups_ref->alloc_size = &s->alloc_size;\n    }\n\n    if (s->idmp_producers) {\n        /* Update idmp_producers back-pointer to new stream */\n        s->idmp_producers->alloc_size = &s->alloc_size;\n        defragRadixTree(&s->idmp_producers, 0, defragIdmpProducerCallback, NULL);\n    }\n}\n\n/* Defrag a module key. This is either done immediately or scheduled\n * for later. 
 */\nvoid defragModule(defragKeysCtx *ctx, redisDb *db, kvobj *kv) {\n    serverAssert(kv->type == OBJ_MODULE);\n    robj keyobj;\n    initStaticStringObject(keyobj, kvobjGetKey(kv));\n    if (!moduleDefragValue(&keyobj, kv, db->id))\n        defragLater(ctx, kv);\n}\n\n/* Defrag a kvobj structure, taking into account optional preceding metadata.\n * For EMBSTR strings, also defrags the embedded string value in the same allocation.\n * For RAW strings and other types, only the kvobj wrapper is defragged here;\n * the value's internal data structures are defragged separately in defragKey().\n *\n * Returns NULL if the allocation wasn't moved.\n * When it returns a non-null value, the old pointer was already released\n * (unless without_free is set) and should NOT be accessed. */\nrobj *activeDefragKvobj(kvobj* kv, int without_free) {\n    void *alloc, *newalloc;\n    kvobj *kvNew = NULL;\n    /* Use LONG_MIN as sentinel to detect if we have an EMBSTR string */\n    long offsetEmbstr = LONG_MIN;\n\n    /* Don't defrag kvobjs with multiple references (refcount > 1) */\n    if (kv->refcount != 1)\n        return NULL;\n\n    /* Calculate offset for EMBSTR strings */\n    if ((kv->type == OBJ_STRING) && (kv->encoding == OBJ_ENCODING_EMBSTR))\n        offsetEmbstr = (intptr_t)kv->ptr - (intptr_t)kv;\n\n    /* Defrag the kvobj allocation (including optional metadata prefix).\n     * For EMBSTR strings, this allocation also contains the embedded string data,\n     * so we'll need to recalculate the ptr offset after defragmentation (see below).
*/\n\n    alloc = kvobjGetAllocPtr(kv);\n    size_t metaBytes = (char *)kv - (char *)alloc;\n    if (without_free)\n        newalloc = activeDefragAllocWithoutFree(alloc);\n    else\n        newalloc = activeDefragAlloc(alloc);\n\n    if (!newalloc)\n        return NULL;\n\n    /* Update kv pointer to new allocation */\n    kvNew = (kvobj *)((char *)newalloc + metaBytes);\n\n    /* For EMBSTR strings, recalculate ptr to point to the embedded string data\n     * at the same offset within the new allocation */\n    if (offsetEmbstr != LONG_MIN)\n        kvNew->ptr = (void*)((intptr_t)kvNew + offsetEmbstr);\n\n    return kvNew;\n}\n\n/* for each key we scan in the main dict, this function will attempt to defrag\n * all the various pointers it has. */\nvoid defragKey(defragKeysCtx *ctx, dictEntry *de, dictEntryLink link) {\n    UNUSED(link);\n    dictEntryLink exlink = NULL;\n    kvobj *kvnew = NULL, *ob = dictGetKV(de);\n    size_t oldsize = 0;\n    redisDb *db = &server.db[ctx->dbid];\n    int slot = ctx->kvstate.slot;\n    unsigned char *newzl;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(ob);\n\n    long long expire = kvobjGetExpire(ob);\n    /* We can't search in db->expires for that KV after we've released\n     * the pointer it holds, since it won't be able to do the string\n     * compare. Search it before, if needed. */\n    if (expire != -1) {\n        exlink = kvstoreDictFindLink(db->expires, slot, kvobjGetKey(ob), NULL);\n        serverAssert(exlink != NULL);\n    }\n\n    /* Try to defrag robj. For hash objects with HFEs,\n     * defer defragmentation until processing db's subexpires. */\n    if (!(ob->type == OBJ_HASH && hashTypeGetMinExpire(ob, 0) != EB_EXPIRE_TIME_INVALID)) {\n        /* If the dict doesn't have metadata, we directly defrag it.
*/\n        kvnew = activeDefragKvobj(ob, 0);\n    }\n    if (kvnew) {\n        kvstoreDictSetAtLink(db->keys, slot, kvnew, &link, 0);\n        if (expire != -1)\n            kvstoreDictSetAtLink(db->expires, slot, kvnew, &exlink, 0);\n        ob = kvnew;\n    }\n\n    if (ob->type == OBJ_STRING) {\n        /* Only defrag strings with refcount==1 (String might be shared as dict \n         * keys, e.g. pub/sub channels, and may be accessed by IO threads. Other \n         * types are never used as dict keys) */\n        if ((ob->refcount==1) && (ob->encoding == OBJ_ENCODING_RAW)) {\n            /* For RAW strings, defrag the separate SDS allocation */\n            sds newsds = activeDefragSds((sds)ob->ptr);\n            if (newsds) ob->ptr = newsds;\n        } \n    } else if (ob->type == OBJ_LIST) {\n        if (ob->encoding == OBJ_ENCODING_QUICKLIST) {\n            defragQuicklist(ctx, ob);\n        } else if (ob->encoding == OBJ_ENCODING_LISTPACK) {\n            if ((newzl = activeDefragAlloc(ob->ptr)))\n                ob->ptr = newzl;\n        } else {\n            serverPanic(\"Unknown list encoding\");\n        }\n    } else if (ob->type == OBJ_SET) {\n        if (ob->encoding == OBJ_ENCODING_HT) {\n            defragSet(ctx, ob);\n        } else if (ob->encoding == OBJ_ENCODING_INTSET ||\n                   ob->encoding == OBJ_ENCODING_LISTPACK)\n        {\n            void *newptr, *ptr = ob->ptr;\n            if ((newptr = activeDefragAlloc(ptr)))\n                ob->ptr = newptr;\n        } else {\n            serverPanic(\"Unknown set encoding\");\n        }\n    } else if (ob->type == OBJ_ZSET) {\n        if (ob->encoding == OBJ_ENCODING_LISTPACK) {\n            if ((newzl = activeDefragAlloc(ob->ptr)))\n                ob->ptr = newzl;\n        } else if (ob->encoding == OBJ_ENCODING_SKIPLIST) {\n            defragZsetSkiplist(ctx, ob);\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else if (ob->type == 
OBJ_HASH) {\n        if (ob->encoding == OBJ_ENCODING_LISTPACK) {\n            if ((newzl = activeDefragAlloc(ob->ptr)))\n                ob->ptr = newzl;\n        } else if (ob->encoding == OBJ_ENCODING_LISTPACK_EX) {\n            listpackEx *newlpt, *lpt = (listpackEx*)ob->ptr;\n            if ((newlpt = activeDefragAlloc(lpt)))\n                ob->ptr = lpt = newlpt;\n            if ((newzl = activeDefragAlloc(lpt->lp)))\n                lpt->lp = newzl;\n        } else if (ob->encoding == OBJ_ENCODING_HT) {\n            defragHash(ctx, ob);\n        } else {\n            serverPanic(\"Unknown hash encoding\");\n        }\n    } else if (ob->type == OBJ_STREAM) {\n        defragStream(ctx, ob);\n    } else if (ob->type == OBJ_GCRA) {\n        /* GCRA object is just an allocation to a long long value */\n#if UINTPTR_MAX == 0xffffffff\n        void *newptr, *ptr = ob->ptr;\n        if ((newptr = activeDefragAlloc(ptr)))\n            ob->ptr = newptr;\n#endif\n    } else if (ob->type == OBJ_MODULE) {\n        defragModule(ctx,db, ob);\n    } else {\n        serverPanic(\"Unknown object type\");\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(db, slot, ob, oldsize, kvobjAllocSize(ob));\n}\n\n/* Defrag scan callback for the main db dictionary. */\nstatic void dbKeysScanCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    long long hits_before = server.stat_active_defrag_hits;\n    defragKey((defragKeysCtx *)privdata, (dictEntry *)de, plink);\n    if (server.stat_active_defrag_hits != hits_before)\n        server.stat_active_defrag_key_hits++;\n    else\n        server.stat_active_defrag_key_misses++;\n    server.stat_active_defrag_scanned++;\n}\n\n#if !defined(DEBUG_DEFRAG_FORCE)\n/* Utility function to get the fragmentation ratio from jemalloc.\n * It is critical to do that by comparing only heap maps that belong to\n * jemalloc, and skip ones the jemalloc keeps as spare. 
Since we use this\n * fragmentation ratio in order to decide if a defrag action should be taken\n * or not, a false detection can cause the defragmenter to waste a lot of CPU\n * without the possibility of getting any results. */\nfloat getAllocatorFragmentation(size_t *out_frag_bytes) {\n    size_t resident, active, allocated, frag_smallbins_bytes;\n    zmalloc_get_allocator_info(1, &allocated, &active, &resident, NULL, NULL, &frag_smallbins_bytes);\n\n    if (server.lua_arena != UINT_MAX) {\n        size_t lua_resident, lua_active, lua_allocated, lua_frag_smallbins_bytes;\n        zmalloc_get_allocator_info_by_arena(server.lua_arena, 0, &lua_allocated, &lua_active, &lua_resident, &lua_frag_smallbins_bytes);\n        resident -= lua_resident;\n        active -= lua_active;\n        allocated -= lua_allocated;\n        frag_smallbins_bytes -= lua_frag_smallbins_bytes;\n    }\n\n    /* Calculate the fragmentation ratio as the proportion of wasted memory in small\n     * bins (which are defraggable) relative to the total allocated memory (including large bins).\n     * This is because otherwise, if most of the memory usage is large bins, we may show high percentage,\n     * despite the fact it's not a lot of memory for the user. 
*/\n    float frag_pct = (float)frag_smallbins_bytes / allocated * 100;\n    float rss_pct = ((float)resident / allocated)*100 - 100;\n    size_t rss_bytes = resident - allocated;\n    if (out_frag_bytes)\n        *out_frag_bytes = frag_smallbins_bytes;\n    serverLog(LL_DEBUG,\n        \"allocated=%zu, active=%zu, resident=%zu, frag=%.2f%% (%.2f%% rss), frag_bytes=%zu (%zu rss)\",\n        allocated, active, resident, frag_pct, rss_pct, frag_smallbins_bytes, rss_bytes);\n    return frag_pct;\n}\n#else\nfloat getAllocatorFragmentation(size_t *out_frag_bytes) {\n    if (out_frag_bytes)\n        *out_frag_bytes = SIZE_MAX;\n    return 99; /* The maximum percentage of fragmentation */\n}\n#endif\n\n/* Defrag scan callback for the pubsub dictionary. */\nvoid defragPubsubScanCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    defragPubSubCtx *ctx = privdata;\n    kvstore *pubsub_channels = ctx->kvstate.kvs;\n    robj *newchannel, *channel = dictGetKey(de);\n    dict *newclients, *clients = dictGetVal(de);\n\n    /* Try to defrag the channel name. */\n    serverAssert(channel->refcount == (int)dictSize(clients) + 1);\n    newchannel = activeDefragStringObEx(channel, dictSize(clients) + 1);\n    if (newchannel) {\n        kvstoreDictSetKey(pubsub_channels, ctx->kvstate.slot, (dictEntry*)de, newchannel);\n\n        /* The channel name is shared by the client's pubsub(shard) and server's\n         * pubsub(shard); after defragging the channel name, we need to update\n         * the reference in the clients' dictionary.
*/\n        dictIterator di;\n        dictEntry *clientde;\n        dictInitIterator(&di, clients);\n        while ((clientde = dictNext(&di)) != NULL) {\n            client *c = dictGetKey(clientde);\n            dict *client_channels = ctx->getPubSubChannels(c);\n            uint64_t hash = dictGetHash(client_channels, newchannel);\n            dictEntry *pubsub_channel = dictFindByHashAndPtr(client_channels, channel, hash);\n            serverAssert(pubsub_channel);\n            dictSetKey(ctx->getPubSubChannels(c), pubsub_channel, newchannel);\n        }\n        dictResetIterator(&di);\n    }\n\n    /* Try to defrag the dictionary of clients that is stored as the value part. */\n    if ((newclients = dictDefragTables(clients)))\n        kvstoreDictSetVal(pubsub_channels, ctx->kvstate.slot, (dictEntry *)de, newclients);\n\n    server.stat_active_defrag_scanned++;\n}\n\n/* returns 0 if done for now (more work may or may not be needed, see non-zero cursor),\n * and 1 if time is up and more work is needed.
*/\nint defragLaterItem(kvobj *ob, unsigned long *cursor, monotime endtime, int dbid) {\n    if (ob) {\n        if (ob->type == OBJ_LIST && ob->encoding == OBJ_ENCODING_QUICKLIST) {\n            return scanLaterList(ob, cursor, endtime);\n        } else if (ob->type == OBJ_SET && ob->encoding == OBJ_ENCODING_HT) {\n            scanLaterSet(ob, cursor);\n        } else if (ob->type == OBJ_ZSET && ob->encoding == OBJ_ENCODING_SKIPLIST) {\n            scanLaterZset(ob, cursor);\n        } else if (ob->type == OBJ_HASH && ob->encoding == OBJ_ENCODING_HT) {\n            scanLaterHash(ob, cursor);\n        } else if (ob->type == OBJ_STREAM && ob->encoding == OBJ_ENCODING_STREAM) {\n            return scanLaterStreamListpacks(ob, cursor, endtime);\n        } else if (ob->type == OBJ_MODULE) {\n            robj keyobj;\n            initStaticStringObject(keyobj, kvobjGetKey(ob));\n            return moduleLateDefrag(&keyobj, ob, cursor, endtime, dbid);\n        } else {\n            *cursor = 0; /* object type/encoding may have changed since we scheduled it for later */\n        }\n    } else {\n        *cursor = 0; /* object may have been deleted already */\n    }\n    return 0;\n}\n\nstatic int defragIsRunning(void) {\n    return (defrag.timeproc_id > 0);\n}\n\n/* A kvstoreHelperPreContinueFn */\nstatic doneStatus defragLaterStep(void *ctx, monotime endtime) {\n    defragKeysCtx *defrag_keys_ctx = ctx;\n    redisDb *db = &server.db[defrag_keys_ctx->dbid];\n    int slot = defrag_keys_ctx->kvstate.slot;\n    size_t oldsize = 0;\n\n    unsigned int iterations = 0;\n    unsigned long long prev_defragged = server.stat_active_defrag_hits;\n    unsigned long long prev_scanned = server.stat_active_defrag_scanned;\n\n    while (defrag_keys_ctx->defrag_later && listLength(defrag_keys_ctx->defrag_later) > 0) {\n        listNode *head = listFirst(defrag_keys_ctx->defrag_later);\n        sds key = head->value;\n        dictEntry *de = kvstoreDictFind(defrag_keys_ctx->kvstate.kvs,
defrag_keys_ctx->kvstate.slot, key);\n        kvobj *kv = de ? dictGetKV(de) : NULL;\n\n        long long key_defragged = server.stat_active_defrag_hits;\n        if (server.memory_tracking_enabled && kv)\n            oldsize = kvobjAllocSize(kv);\n        int timeout = (defragLaterItem(kv, &defrag_keys_ctx->defrag_later_cursor, endtime, defrag_keys_ctx->dbid) == 1);\n        if (server.memory_tracking_enabled && kv)\n            updateSlotAllocSize(db, slot, kv, oldsize, kvobjAllocSize(kv));\n        if (key_defragged != server.stat_active_defrag_hits) {\n            server.stat_active_defrag_key_hits++;\n        } else {\n            server.stat_active_defrag_key_misses++;\n        }\n\n        if (timeout) break;\n\n        if (defrag_keys_ctx->defrag_later_cursor == 0) {\n            /* the item is finished, move on */\n            listDelNode(defrag_keys_ctx->defrag_later, head);\n        }\n\n        if (++iterations > 16 || server.stat_active_defrag_hits - prev_defragged > 512 ||\n            server.stat_active_defrag_scanned - prev_scanned > 64) {\n            if (getMonotonicUs() > endtime) break;\n            iterations = 0;\n            prev_defragged = server.stat_active_defrag_hits;\n            prev_scanned = server.stat_active_defrag_scanned;\n        }\n    }\n\n    return (!defrag_keys_ctx->defrag_later || listLength(defrag_keys_ctx->defrag_later) == 0) ? DEFRAG_DONE : DEFRAG_NOT_DONE;\n}\n\n#define INTERPOLATE(x, x1, x2, y1, y2) ( (y1) + ((x)-(x1)) * ((y2)-(y1)) / ((x2)-(x1)) )\n#define LIMIT(y, min, max) ((y)<(min)? min: ((y)>(max)? max: (y)))\n\n/* decide if defrag is needed, and at what CPU effort to invest in it */\nvoid computeDefragCycles(void) {\n    size_t frag_bytes;\n    float frag_pct = getAllocatorFragmentation(&frag_bytes);\n    /* If we're not already running, and below the threshold, exit. 
*/\n    if (!server.active_defrag_running) {\n        if (frag_pct < server.active_defrag_threshold_lower || frag_bytes < server.active_defrag_ignore_bytes)\n            return;\n    }\n\n    /* Calculate the adaptive aggressiveness of the defrag based on the current\n     * fragmentation and configurations. */\n    int cpu_pct = INTERPOLATE(frag_pct,\n            server.active_defrag_threshold_lower,\n            server.active_defrag_threshold_upper,\n            server.active_defrag_cycle_min,\n            server.active_defrag_cycle_max);\n    cpu_pct *= defrag.decay_rate;\n    cpu_pct = LIMIT(cpu_pct,\n            server.active_defrag_cycle_min,\n            server.active_defrag_cycle_max);\n\n    /* Normally we allow increasing the aggressiveness during a scan, but don't\n     * reduce it, since we should not lower the aggressiveness when fragmentation\n     * drops. But when a configuration change is made, we should reconsider it. */\n    if (cpu_pct > server.active_defrag_running ||\n        server.active_defrag_configuration_changed)\n    {\n        server.active_defrag_configuration_changed = 0;\n        if (defragIsRunning()) {\n            serverLog(LL_VERBOSE, \"Changing active defrag CPU, frag=%.0f%%, frag_bytes=%zu, cpu=%d%%\",\n                      frag_pct, frag_bytes, cpu_pct);\n        } else {\n            serverLog(LL_VERBOSE,\n                \"Starting active defrag, frag=%.0f%%, frag_bytes=%zu, cpu=%d%%\",\n                frag_pct, frag_bytes, cpu_pct);\n        }\n        server.active_defrag_running = cpu_pct;\n    }\n}\n\n/* This helper function handles most of the work for iterating over a kvstore. 'privdata', if\n * provided, MUST begin with 'kvstoreIterState' and this part is automatically updated by this\n * function during the iteration.
*/\nstatic doneStatus defragStageKvstoreHelper(monotime endtime,\n                                           void *ctx,\n                                           dictScanFunction scan_fn,\n                                           kvstoreHelperPreContinueFn precontinue_fn,\n                                           dictDefragFunctions *defragfns)\n{\n    unsigned int iterations = 0;\n    unsigned long long prev_defragged = server.stat_active_defrag_hits;\n    unsigned long long prev_scanned = server.stat_active_defrag_scanned;\n    kvstoreIterState *state = (kvstoreIterState*)ctx;\n\n    if (state->slot == ITER_SLOT_DEFRAG_LUT) {\n        /* Before we start scanning the kvstore, handle the main structures */\n        do {\n            state->cursor = kvstoreDictLUTDefrag(state->kvs, state->cursor, dictDefragTables);\n            if (getMonotonicUs() >= endtime) return DEFRAG_NOT_DONE;\n        } while (state->cursor != 0);\n        state->slot = ITER_SLOT_UNASSIGNED;\n    }\n\n    while (1) {\n        if (++iterations > 16 || server.stat_active_defrag_hits - prev_defragged > 512 || server.stat_active_defrag_scanned - prev_scanned > 64) {\n            if (getMonotonicUs() >= endtime) break;\n            iterations = 0;\n            prev_defragged = server.stat_active_defrag_hits;\n            prev_scanned = server.stat_active_defrag_scanned;\n        }\n\n        if (precontinue_fn) {\n            if (precontinue_fn(ctx, endtime) == DEFRAG_NOT_DONE) return DEFRAG_NOT_DONE;\n        }\n\n        if (!state->cursor) {\n            /* If there's no cursor, we're ready to begin a new kvstore slot. 
*/\n            if (state->slot == ITER_SLOT_UNASSIGNED) {\n                state->slot = kvstoreGetFirstNonEmptyDictIndex(state->kvs);\n            } else {\n                state->slot = kvstoreGetNextNonEmptyDictIndex(state->kvs, state->slot);\n            }\n\n            if (state->slot == ITER_SLOT_UNASSIGNED) return DEFRAG_DONE;\n        }\n\n        /* Whatever privdata's actual type, this function requires that it begins with kvstoreIterState. */\n        state->cursor = kvstoreDictScanDefrag(state->kvs, state->slot, state->cursor,\n                                             scan_fn, defragfns, ctx);\n    }\n\n    return DEFRAG_NOT_DONE;\n}\n\nstatic doneStatus defragStageDbKeys(void *ctx, monotime endtime) {\n    defragKeysCtx *defrag_keys_ctx = ctx;\n    redisDb *db = &server.db[defrag_keys_ctx->dbid];\n    if (db->keys != defrag_keys_ctx->kvstate.kvs) {\n        /* There has been a change of the kvs (flushdb, swapdb, etc.). Just complete the stage. */\n        return DEFRAG_DONE;\n    }\n\n    /* Note: for DB keys, we use the start/finish callback to fix an expires table entry if\n     * the main DB entry has been moved. */\n    static dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragKey = NULL, /* Handled by dbKeysScanCallback */\n        .defragVal = NULL, /* Handled by dbKeysScanCallback */\n    };\n\n    return defragStageKvstoreHelper(endtime, ctx,\n        dbKeysScanCallback, defragLaterStep, &defragfns);\n}\n\nstatic doneStatus defragStageExpiresKvstore(void *ctx, monotime endtime) {\n    defragKeysCtx *defrag_keys_ctx = ctx;\n    redisDb *db = &server.db[defrag_keys_ctx->dbid];\n    if (db->expires != defrag_keys_ctx->kvstate.kvs) {\n        /* There has been a change of the kvs (flushdb, swapdb, etc.). Just complete the stage. 
*/\n        return DEFRAG_DONE;\n    }\n\n    static dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragKey = NULL, /* Not needed for expires (just a ref) */\n        .defragVal = NULL, /* Not needed for expires (no value) */\n    };\n    return defragStageKvstoreHelper(endtime, ctx,\n        scanCallbackCountScanned, NULL, &defragfns);\n}\n\n/* Defrag (hash) object with subexpiry and update its reference in the DB keys. */\nvoid *activeDefragSubexpiresOB(void *ptr, void *privdata) {\n    redisDb *db = privdata;\n    dictEntryLink link, exlink = NULL;\n    kvobj *newkv, *kv = ptr;\n    sds keystr = kvobjGetKey(kv);\n    unsigned int slot = calculateKeySlot(keystr);\n\n    serverAssert(kv->type == OBJ_HASH); /* Currently relevant only for hashes */\n\n    long long expire = kvobjGetExpire(kv);\n    /* We can't search in db->expires for that KV after we've released\n     * the pointer it holds, since it won't be able to do the string\n     * compare. Search it before, if needed. */\n    if (expire != -1) {\n        exlink = kvstoreDictFindLink(db->expires, slot, keystr, NULL);\n        serverAssert(exlink != NULL);\n    }\n\n    if ((newkv = activeDefragKvobj(kv, 1))) {\n        /* Update its reference in the DB keys. 
*/\n        link = kvstoreDictFindLink(db->keys, slot, keystr, NULL);\n        serverAssert(link != NULL);\n        kvstoreDictSetAtLink(db->keys, slot, newkv, &link, 0);\n        if (expire != -1)\n            kvstoreDictSetAtLink(db->expires, slot, newkv, &exlink, 0);\n        activeDefragFree(kvobjGetAllocPtr(kv));\n    }\n    return newkv;\n}\n\nstatic doneStatus defragStageSubexpires(void *ctx, monotime endtime) {\n    unsigned int iterations = 0;\n    unsigned long long prev_defragged = server.stat_active_defrag_hits;\n    unsigned long long prev_scanned = server.stat_active_defrag_scanned;\n    defragSubexpiresCtx *subctx = ctx;\n    redisDb *db = &server.db[subctx->dbid];\n    estore *subexpires = db->subexpires;\n\n    /* If the estore changed (flushdb, swapdb, etc.), just complete the stage. */\n    if (db->subexpires != subctx->subexpires) {\n        return DEFRAG_DONE;\n    }\n\n    ebDefragFunctions eb_defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragItem = activeDefragSubexpiresOB\n    };\n\n    while (1) {\n        if (++iterations > 16 ||\n            server.stat_active_defrag_hits - prev_defragged > 512 ||\n            server.stat_active_defrag_scanned - prev_scanned > 64)\n        {\n            if (getMonotonicUs() >= endtime) break;\n            iterations = 0;\n            prev_defragged = server.stat_active_defrag_hits;\n            prev_scanned = server.stat_active_defrag_scanned;\n        }\n\n        /* If there's no cursor, we're ready to begin a new estore slot. 
*/\n        if (!subctx->cursor) {\n            if (subctx->slot == ITER_SLOT_UNASSIGNED) {\n                subctx->slot = estoreGetFirstNonEmptyBucket(subexpires);\n            } else {\n                subctx->slot = estoreGetNextNonEmptyBucket(subexpires, subctx->slot);\n            }\n\n            if (subctx->slot == ITER_SLOT_UNASSIGNED) return DEFRAG_DONE;\n        }\n\n        /* Get the ebuckets for the current slot and scan it */\n        ebuckets *bucket = estoreGetBuckets(subexpires, subctx->slot);\n        if (!ebScanDefrag(bucket, &subexpiresBucketsType, &subctx->cursor, &eb_defragfns, db))\n            subctx->cursor = 0; /* Reset cursor to move to next slot */\n    }\n\n    return DEFRAG_NOT_DONE;\n}\n\nstatic doneStatus defragStagePubsubKvstore(void *ctx, monotime endtime) {\n    static dictDefragFunctions defragfns = {\n        .defragAlloc = activeDefragAlloc,\n        .defragKey = NULL, /* Handled by defragPubsubScanCallback */\n        .defragVal = NULL, /* Handled by defragPubsubScanCallback */\n    };\n\n    return defragStageKvstoreHelper(endtime, ctx,\n        defragPubsubScanCallback, NULL, &defragfns);\n}\n\nstatic doneStatus defragLuaScripts(void *ctx, monotime endtime) {\n    UNUSED(endtime);\n    UNUSED(ctx);\n    activeDefragSdsDict(evalScriptsDict(), DEFRAG_SDS_DICT_VAL_LUA_SCRIPT);\n    return DEFRAG_DONE;\n}\n\n/* Handles defragmentation of module global data. This is a stage function\n * that gets called periodically during the active defragmentation process. */\nstatic doneStatus defragModuleGlobals(void *ctx, monotime endtime) {\n    defragModuleCtx *defrag_module_ctx = ctx;\n\n    RedisModule *module = moduleGetHandleByName(defrag_module_ctx->module_name);\n    if (!module) {\n        /* Module has been unloaded, nothing to defrag. 
*/\n        return DEFRAG_DONE;\n    }\n    /* Interval shouldn't exceed 1 hour  */\n    serverAssert(!endtime || llabs((long long)endtime - (long long)getMonotonicUs()) < 60*60*1000*1000LL);\n\n    /* Call appropriate version of module's defrag callback:\n     * 1. Version 2 (defrag_cb_2): Supports incremental defrag and returns whether more work is needed\n     * 2. Version 1 (defrag_cb): Legacy version, performs all work in one call.\n     *    Note: V1 doesn't support incremental defragmentation, may block for longer periods. */\n    RedisModuleDefragCtx defrag_ctx = { endtime, &defrag_module_ctx->cursor, NULL, -1, -1, -1 };\n    if (module->defrag_cb_2) {\n        return module->defrag_cb_2(&defrag_ctx) ? DEFRAG_NOT_DONE : DEFRAG_DONE;\n    } else if (module->defrag_cb) {\n        module->defrag_cb(&defrag_ctx);\n        return DEFRAG_DONE;\n    } else {\n        redis_unreachable();\n    }\n}\n\nstatic void freeDefragKeysContext(void *ctx) {\n    defragKeysCtx *defrag_keys_ctx = ctx;\n    if (defrag_keys_ctx->defrag_later) {\n        listRelease(defrag_keys_ctx->defrag_later);\n    }\n    zfree(defrag_keys_ctx);\n}\n\nstatic void freeDefragModelContext(void *ctx) {\n    defragModuleCtx *defrag_model_ctx = ctx;\n    sdsfree(defrag_model_ctx->module_name);\n    zfree(defrag_model_ctx);\n}\n\nstatic void freeDefragContext(void *ptr) {\n    StageDescriptor *stage = ptr;\n    if (stage->ctx_free_fn)\n        stage->ctx_free_fn(stage->ctx);\n    zfree(stage);\n}\n\nstatic void addDefragStage(defragStageFn stage_fn, defragStageContextFreeFn ctx_free_fn, void *ctx) {\n    StageDescriptor *stage = zmalloc(sizeof(StageDescriptor));\n    stage->stage_fn = stage_fn;\n    stage->ctx_free_fn = ctx_free_fn;\n    stage->ctx = ctx;\n    listAddNodeTail(defrag.remaining_stages, stage);\n}\n\n/* Updates the defrag decay rate based on the observed effectiveness of the defrag process.\n * The decay rate is used to gradually slow down defrag when it's not being effective. 
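 * For example (a worked reading of the code below): each ineffective cycle\n * multiplies decay_rate by 0.9, so three consecutive ineffective cycles scale\n * the CPU budget computed by computeDefragCycles() to 0.9^3, about 73% of its\n * interpolated value.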
*/\nstatic void updateDefragDecayRate(float frag_pct) {\n    long long last_hits = server.stat_active_defrag_hits - defrag.start_defrag_hits;\n    long long last_misses = server.stat_active_defrag_misses - defrag.start_defrag_misses;\n    float last_frag_pct_change = defrag.start_frag_pct - frag_pct;\n    /* When defragmentation efficiency is low, we gradually reduce the\n     * speed for the next cycle to avoid CPU waste. However, in the\n     * following two cases, we keep the normal speed:\n     * 1) If the fragmentation percentage has increased or decreased by more than 2%.\n     * 2) If the fragmentation percentage decrease is small, but hits are above 1%,\n     *    we still keep the normal speed. */\n    if (fabs(last_frag_pct_change) > 2 ||\n        (last_frag_pct_change < 0 && last_hits >= (last_hits + last_misses) * 0.01))\n    {\n        defrag.decay_rate = 1.0f;\n    } else {\n        defrag.decay_rate *= 0.9;\n    }\n}\n\n/* Called at the end of a complete defrag cycle, or when defrag is terminated */\nstatic void endDefragCycle(int normal_termination) {\n    if (normal_termination) {\n        /* For normal termination, we expect... 
*/\n        serverAssert(!defrag.current_stage);\n        serverAssert(listLength(defrag.remaining_stages) == 0);\n    } else {\n        /* Defrag is being terminated abnormally */\n        aeDeleteTimeEvent(server.el, defrag.timeproc_id);\n\n        if (defrag.current_stage) {\n            listDelNode(defrag.remaining_stages, defrag.current_stage);\n            defrag.current_stage = NULL;\n        }\n    }\n    defrag.timeproc_id = AE_DELETED_EVENT_ID;\n\n    listRelease(defrag.remaining_stages);\n    defrag.remaining_stages = NULL;\n\n    size_t frag_bytes;\n    float frag_pct = getAllocatorFragmentation(&frag_bytes);\n    serverLog(LL_VERBOSE, \"Active defrag done in %dms, reallocated=%d, frag=%.0f%%, frag_bytes=%zu\",\n              (int)elapsedMs(defrag.start_cycle), (int)(server.stat_active_defrag_hits - defrag.start_defrag_hits),\n              frag_pct, frag_bytes);\n\n    server.stat_total_active_defrag_time += elapsedUs(server.stat_last_active_defrag_time);\n    server.stat_last_active_defrag_time = 0;\n    server.active_defrag_running = 0;\n\n    updateDefragDecayRate(frag_pct);\n    moduleDefragEnd();\n\n    /* Immediately check to see if we should start another defrag cycle. */\n    activeDefragCycle();\n}\n\n/* Must be called at the start of the timeProc as it measures the delay from the end of the previous\n * timeProc invocation when performing the computation. */\nstatic int computeDefragCycleUs(void) {\n    long dutyCycleUs;\n\n    int targetCpuPercent = server.active_defrag_running;\n    serverAssert(targetCpuPercent > 0 && targetCpuPercent < 100);\n\n    static int prevCpuPercent = 0; /* STATIC - this persists */\n    if (targetCpuPercent != prevCpuPercent) {\n        /* If the targetCpuPercent changes, the value might be different from when the last wait\n         * time was computed. In this case, don't consider wait time. (This is really only an\n         * issue in crazy tests that dramatically increase CPU while defrag is running.) 
*/\n        defrag.timeproc_end_time = 0;\n        prevCpuPercent = targetCpuPercent;\n    }\n\n    /* Given when the last duty cycle ended, compute time needed to achieve the desired percentage. */\n    if (defrag.timeproc_end_time == 0) {\n        /* Either the first call to the timeProc, or we were paused for some reason. */\n        defrag.timeproc_overage_us = 0;\n        dutyCycleUs = DEFRAG_CYCLE_US;\n    } else {\n        long waitedUs = getMonotonicUs() - defrag.timeproc_end_time;\n        /* Given the elapsed wait time between calls, compute the necessary duty time needed to\n         * achieve the desired CPU percentage.\n         * With:  D = duty time, W = wait time, P = percent\n         * Solve:    D          P\n         *         -----   =  -----\n         *         D + W       100\n         * Solving for D:\n         *     D = P * W / (100 - P)\n         *\n         * Note that dutyCycleUs addresses starvation. If the wait time was long, we will compensate\n         * with a proportionately long duty-cycle. This won't significantly affect perceived\n         * latency, because clients are already being impacted by the long cycle time which caused\n         * the starvation of the timer. */\n        dutyCycleUs = targetCpuPercent * waitedUs / (100 - targetCpuPercent);\n\n        /* Also adjust for any accumulated overage. */\n        dutyCycleUs -= defrag.timeproc_overage_us;\n        defrag.timeproc_overage_us = 0;\n\n        if (dutyCycleUs < DEFRAG_CYCLE_US) {\n            /* We never reduce our cycle time, that would increase overhead. Instead, we track this\n             * as part of the overage, and increase wait time between cycles. 
*/\n            defrag.timeproc_overage_us = DEFRAG_CYCLE_US - dutyCycleUs;\n            dutyCycleUs = DEFRAG_CYCLE_US;\n        } else if (dutyCycleUs > DEFRAG_CYCLE_US * 10) {\n            /* Add a time limit for the defrag duty cycle to prevent excessive latency.\n             * When latency is already high (indicated by a long time between calls),\n             * we don't want to make it worse by running defrag for too long. */\n            dutyCycleUs = DEFRAG_CYCLE_US * 10;\n        }\n    }\n    return dutyCycleUs;\n}\n\n/* Must be called at the end of the timeProc as it records the timeproc_end_time for use in the next\n * computeDefragCycleUs computation. */\nstatic int computeDelayMs(monotime intendedEndtime) {\n    defrag.timeproc_end_time = getMonotonicUs();\n    long overage = defrag.timeproc_end_time - intendedEndtime;\n    defrag.timeproc_overage_us += overage; /* track over/under desired CPU */\n    /* Allow negative overage (underage) to count against existing overage, but don't allow\n     * underage (from short stages) to be accumulated. */\n    if (defrag.timeproc_overage_us < 0) defrag.timeproc_overage_us = 0;\n\n    int targetCpuPercent = server.active_defrag_running;\n    serverAssert(targetCpuPercent > 0 && targetCpuPercent < 100);\n\n    /* Given the desired duty cycle, what inter-cycle delay do we need to achieve that? */\n    /* We want to achieve a specific CPU percent. To do that, we can't use a skewed computation. */\n    /* Example, if we run for 1ms and delay 10ms, that's NOT 10%, because the total cycle time is 11ms. */\n    /* Instead, if we run for 1ms, our total time should be 10ms. So the delay is only 9ms. 
*/\n    long totalCycleTimeUs = DEFRAG_CYCLE_US * 100 / targetCpuPercent;\n    long delayUs = totalCycleTimeUs - DEFRAG_CYCLE_US;\n    /* Only increase delay by the fraction of the overage that would be non-duty-cycle */\n    delayUs += defrag.timeproc_overage_us * (100 - targetCpuPercent) / 100;\n    if (delayUs < 0) delayUs = 0;\n    long delayMs = delayUs / 1000; /* round down */\n    return delayMs;\n}\n\n/* An independent time proc for defrag. While defrag is running, this is called much more often\n * than the server cron. Frequent short calls provide a low latency impact. */\nstatic int activeDefragTimeProc(struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    UNUSED(eventLoop);\n    UNUSED(id);\n    UNUSED(clientData);\n\n    /* This timer shouldn't be registered unless there's work to do. */\n    serverAssert(defrag.current_stage || listLength(defrag.remaining_stages) > 0);\n\n    if (!server.active_defrag_enabled) {\n        /* Defrag has been disabled while running */\n        endDefragCycle(0);\n        return AE_NOMORE;\n    }\n\n    if (hasActiveChildProcess()) {\n        /* If there's a child process, pause the defrag, polling until the child completes. 
*/\n        defrag.timeproc_end_time = 0; /* prevent starvation recovery */\n        return 100;\n    }\n\n    monotime starttime = getMonotonicUs();\n    int dutyCycleUs = computeDefragCycleUs();\n#if defined(DEBUG_DEFRAG_FULLY)\n    dutyCycleUs = 30*1000*1000LL; /* 30 seconds */\n#endif\n    monotime endtime = starttime + dutyCycleUs;\n    int haveMoreWork = 1;\n\n    mstime_t latency;\n    latencyStartMonitor(latency);\n\n    do {\n        if (!defrag.current_stage) {\n            defrag.current_stage = listFirst(defrag.remaining_stages);\n        }\n\n        StageDescriptor *stage = listNodeValue(defrag.current_stage);\n        doneStatus status = stage->stage_fn(stage->ctx, endtime);\n        if (status == DEFRAG_DONE) {\n            listDelNode(defrag.remaining_stages, defrag.current_stage);\n            defrag.current_stage = NULL;\n        }\n\n        haveMoreWork = (defrag.current_stage || listLength(defrag.remaining_stages) > 0);\n        /* If we've completed a stage early, and still have a standard time allotment remaining,\n         * we'll start another stage. This can happen when defrag is running infrequently, and\n         * starvation protection has increased the duty-cycle. */\n    } while (haveMoreWork && getMonotonicUs() <= endtime - DEFRAG_CYCLE_US);\n\n    latencyEndMonitor(latency);\n    latencyAddSampleIfNeeded(\"active-defrag-cycle\", latency);\n\n    if (haveMoreWork) {\n        return computeDelayMs(endtime);\n    } else {\n        endDefragCycle(1);\n        return AE_NOMORE; /* Ends the timer proc */\n    }\n}\n\n/* During long running scripts, or while loading, there is a periodic function for handling other\n * actions. This interface allows defrag to continue running, avoiding a single long defrag step\n * after the long operation completes. */\nvoid defragWhileBlocked(void) {\n    /* This is called infrequently, while timers are not active. We might need to start defrag. 
*/\n    if (!defragIsRunning()) activeDefragCycle();\n\n    if (!defragIsRunning()) return;\n\n    /* Save off the timeproc_id. If we have a normal termination, it will be cleared. */\n    long long timeproc_id = defrag.timeproc_id;\n\n    /* Simulate a single call of the timer proc */\n    long long reschedule_delay = activeDefragTimeProc(NULL, 0, NULL);\n    if (reschedule_delay == AE_NOMORE) {\n        /* If it's done, deregister the timer */\n        aeDeleteTimeEvent(server.el, timeproc_id);\n    }\n    /* Otherwise, just ignore the reschedule_delay, the timer will pop the next time that the\n     * event loop can process timers again. */\n}\n\nstatic void beginDefragCycle(void) {\n    serverAssert(!defragIsRunning());\n\n    moduleDefragStart();\n\n    serverAssert(defrag.remaining_stages == NULL);\n    defrag.remaining_stages = listCreate();\n    listSetFreeMethod(defrag.remaining_stages, freeDefragContext);\n\n    for (int dbid = 0; dbid < server.dbnum; dbid++) {\n        redisDb *db = &server.db[dbid];\n\n        /* Add stage for keys. */\n        defragKeysCtx *defrag_keys_ctx = zcalloc(sizeof(defragKeysCtx));\n        defrag_keys_ctx->kvstate = INIT_KVSTORE_STATE(db->keys);\n        defrag_keys_ctx->dbid = dbid;\n        addDefragStage(defragStageDbKeys, freeDefragKeysContext, defrag_keys_ctx);\n\n        /* Add stage for expires. */\n        defragKeysCtx *defrag_expires_ctx = zcalloc(sizeof(defragKeysCtx));\n        defrag_expires_ctx->kvstate = INIT_KVSTORE_STATE(db->expires);\n        defrag_expires_ctx->dbid = dbid;\n        addDefragStage(defragStageExpiresKvstore, freeDefragKeysContext, defrag_expires_ctx);\n\n        /* Add stage for subexpires. 
*/\n        defragSubexpiresCtx *defrag_subexpires_ctx = zcalloc(sizeof(defragSubexpiresCtx));\n        defrag_subexpires_ctx->subexpires = db->subexpires;\n        defrag_subexpires_ctx->slot = ITER_SLOT_UNASSIGNED;\n        defrag_subexpires_ctx->cursor = 0;\n        defrag_subexpires_ctx->dbid = dbid;\n        addDefragStage(defragStageSubexpires, zfree, defrag_subexpires_ctx);\n    }\n\n    /* Add stage for pubsub channels. */\n    defragPubSubCtx *defrag_pubsub_ctx = zmalloc(sizeof(defragPubSubCtx));\n    defrag_pubsub_ctx->kvstate = INIT_KVSTORE_STATE(server.pubsub_channels);\n    defrag_pubsub_ctx->getPubSubChannels = getClientPubSubChannels;\n    addDefragStage(defragStagePubsubKvstore, zfree, defrag_pubsub_ctx);\n\n    /* Add stage for pubsubshard channels. */\n    defragPubSubCtx *defrag_pubsubshard_ctx = zmalloc(sizeof(defragPubSubCtx));\n    defrag_pubsubshard_ctx->kvstate = INIT_KVSTORE_STATE(server.pubsubshard_channels);\n    defrag_pubsubshard_ctx->getPubSubChannels = getClientPubSubShardChannels;\n    addDefragStage(defragStagePubsubKvstore, zfree, defrag_pubsubshard_ctx);\n\n    addDefragStage(defragLuaScripts, NULL, NULL);\n\n    /* Add stages for modules. 
*/\n    dictIterator di;\n    dictEntry *de;\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        if (module->defrag_cb || module->defrag_cb_2) {\n            defragModuleCtx *ctx = zmalloc(sizeof(defragModuleCtx));\n            ctx->cursor = 0;\n            ctx->module_name = sdsnew(module->name);\n            addDefragStage(defragModuleGlobals, freeDefragModelContext, ctx);\n        }\n    }\n    dictResetIterator(&di);\n\n    defrag.current_stage = NULL;\n    defrag.start_cycle = getMonotonicUs();\n    defrag.start_defrag_hits = server.stat_active_defrag_hits;\n    defrag.start_defrag_misses = server.stat_active_defrag_misses;\n    defrag.start_frag_pct = getAllocatorFragmentation(NULL);\n    defrag.timeproc_end_time = 0;\n    defrag.timeproc_overage_us = 0;\n    defrag.timeproc_id = aeCreateTimeEvent(server.el, 0, activeDefragTimeProc, NULL, NULL);\n\n    elapsedStart(&server.stat_last_active_defrag_time);\n}\n\nvoid activeDefragCycle(void) {\n    if (!server.active_defrag_enabled) return;\n\n    /* Defrag gets paused while a child process is active. So there's no point in starting a new\n     * cycle or adjusting the CPU percentage for an existing cycle. */\n    if (hasActiveChildProcess()) return;\n\n    computeDefragCycles();\n\n    if (server.active_defrag_running > 0 && !defragIsRunning()) beginDefragCycle();\n}\n\n#else /* HAVE_DEFRAG */\n\nvoid activeDefragCycle(void) {\n    /* Not implemented yet. */\n}\n\nvoid *activeDefragAlloc(void *ptr) {\n    UNUSED(ptr);\n    return NULL;\n}\n\nvoid *activeDefragAllocRaw(size_t size) {\n    /* fallback to regular allocation */\n    return zmalloc(size);\n}\n\nvoid activeDefragFreeRaw(void *ptr) {\n    /* fallback to regular free */\n    zfree(ptr);\n}\n\nrobj *activeDefragStringOb(robj *ob) {\n    UNUSED(ob);\n    return NULL;\n}\n\nvoid defragWhileBlocked(void) {\n}\n\n#endif\n"
  },
  {
    "path": "src/dict.c",
"content": "/* Hash Tables Implementation.\n *\n * This file implements in-memory hash tables with insert/del/replace/find/\n * get-random-element operations. Hash tables auto-resize if needed; tables of\n * power-of-two size are used, and collisions are handled by chaining. See the\n * source code for more information... :)\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdint.h>\n#include <string.h>\n#include <stdarg.h>\n#include <limits.h>\n#include <sys/time.h>\n#include <stddef.h>\n\n#include \"dict.h\"\n#include \"zmalloc.h\"\n#include \"redisassert.h\"\n#include \"monotonic.h\"\n#include \"util.h\"\n\n/* Using dictSetResizeEnabled() it is possible to disable\n * resizing and rehashing of the hash table as needed. This is very important\n * for Redis, as we use copy-on-write and don't want to move too much memory\n * around when there is a child performing saving operations.\n *\n * Note that even when dict_can_resize is set to DICT_RESIZE_AVOID, not all\n * resizes are prevented:\n *  - A hash table is still allowed to expand if the ratio between the number\n *    of elements and the buckets >= dict_force_resize_ratio.\n *  - A hash table is still allowed to shrink if the ratio between the number\n *    of elements and the buckets <= 1 / (HASHTABLE_MIN_FILL * dict_force_resize_ratio). 
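 * For example, with dict_force_resize_ratio = 4 (as set below), a table of\n * 1024 buckets is still allowed to expand under DICT_RESIZE_AVOID once it\n * holds 4096 or more entries.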
*/\nstatic dictResizeEnable dict_can_resize = DICT_RESIZE_ENABLE;\nstatic unsigned int dict_force_resize_ratio = 4;\n\n/* -------------------------- types ----------------------------------------- */\nstruct dictEntry {\n    struct dictEntry *next;  /* Must be first */\n    void *key;               /* Must be second */\n    union {\n        void *val;\n        uint64_t u64;\n        int64_t s64;\n        double d;\n    } v;\n};\n\ntypedef struct dictEntryNoValue {\n    dictEntry *next; /* Must be first */\n    void *key;       /* Must be second */\n} dictEntryNoValue;\n\nstatic_assert(offsetof(dictEntry, next) == offsetof(dictEntryNoValue, next), \"dictEntry & dictEntryNoValue next not aligned\");\nstatic_assert(offsetof(dictEntry, key) == offsetof(dictEntryNoValue, key), \"dictEntry & dictEntryNoValue key not aligned\");\n\n/* -------------------------- private prototypes ---------------------------- */\n\nstatic int _dictExpandIfNeeded(dict *d);\nstatic void _dictShrinkIfNeeded(dict *d);\nstatic void _dictRehashStepIfNeeded(dict *d, uint64_t visitedIdx);\nstatic signed char _dictNextExp(unsigned long size);\nstatic int _dictInit(dict *d, dictType *type);\nstatic dictEntryLink dictGetNextLink(dictEntry *de);\nstatic void dictSetNext(dictEntry *de, dictEntry *next);\nstatic int dictDefaultCompare(dictCmpCache *cache, const void *key1, const void *key2);\nstatic dictEntryLink dictFindLinkInternal(dict *d, const void *key, dictEntryLink *bucket);\ndictEntryLink dictFindLinkForInsert(dict *d, const void *key, dictEntry **existing);\nstatic dictEntry *dictInsertKeyAtLink(dict *d, void *key __stored_key, dictEntryLink link);\n\n/* -------------------------- unused  --------------------------- */\nvoid dictSetSignedIntegerVal(dictEntry *de, int64_t val);\nint64_t dictGetSignedIntegerVal(const dictEntry *de);\ndouble dictIncrDoubleVal(dictEntry *de, double val);\nvoid *dictEntryMetadata(dictEntry *de);\nint64_t dictIncrSignedIntegerVal(dictEntry *de, int64_t val);\n\n/* 
-------------------------- misc inline functions -------------------------------- */\n\ntypedef int (*keyCmpFunc)(dictCmpCache *cache, const void *key1, const void *key2);\nstatic inline keyCmpFunc dictGetCmpFunc(dict *d) {\n    if (d->type->keyCompare)\n        return d->type->keyCompare;\n    return dictDefaultCompare;\n}\n\nstatic const void *dictStoredKey2Key(dict *d, const void *key __stored_key) {\n    return (d->type->keyFromStoredKey) ? d->type->keyFromStoredKey(key) : key;\n}\n\n/* -------------------------- hash functions -------------------------------- */\n\nstatic uint8_t dict_hash_function_seed[16];\n\nvoid dictSetHashFunctionSeed(uint8_t *seed) {\n    memcpy(dict_hash_function_seed,seed,sizeof(dict_hash_function_seed));\n}\n\n/* The default hashing function uses the SipHash implementation\n * in siphash.c. */\n\nuint64_t siphash(const uint8_t *in, const size_t inlen, const uint8_t *k);\nuint64_t siphash_nocase(const uint8_t *in, const size_t inlen, const uint8_t *k);\n\nuint64_t dictGenHashFunction(const void *key, size_t len) {\n    return siphash(key, len, dict_hash_function_seed);\n}\n\nuint64_t dictGenCaseHashFunction(const unsigned char *buf, size_t len) {\n    return siphash_nocase(buf,len,dict_hash_function_seed);\n}\n\n/* --------------------- dictEntry pointer bit tricks ----------------------  */\n\n/* The 3 least significant bits in a pointer to a dictEntry determine what the\n * pointer actually points to. If the least bit is set, it's a key. Otherwise,\n * the bit pattern of the least 3 significant bits marks the kind of entry. */\n\n#define ENTRY_PTR_MASK        7 /* 111 */\n#define ENTRY_PTR_NORMAL      0 /* 000 : If a pointer to an entry with value. */\n#define ENTRY_PTR_IS_ODD_KEY  1 /* XX1 : If a pointer to odd key address (must be 1). */\n#define ENTRY_PTR_IS_EVEN_KEY 2 /* 010 : If a pointer to even key address (must be 2 or 4). */\n#define ENTRY_PTR_UNUSED      4 /* 100 : Unused. 
*/\n\n/* Returns 1 if the entry pointer is a pointer to a key, rather than to an\n * allocated entry. Returns 0 otherwise. */\nstatic inline int entryIsKey(const dictEntry *de) {\n    return ((uintptr_t)de & (ENTRY_PTR_IS_ODD_KEY | ENTRY_PTR_IS_EVEN_KEY));\n}\n\n/* Returns 1 if the pointer is actually a pointer to a dictEntry struct. Returns\n * 0 otherwise. */\nstatic inline int entryIsNormal(const dictEntry *de) {\n    return ((uintptr_t)(void *)de & ENTRY_PTR_MASK) == ENTRY_PTR_NORMAL;\n}\n\n/* Creates an entry without a value field. */\nstatic inline dictEntry *createEntryNoValue(void *key __stored_key, dictEntry *next) {\n    dictEntryNoValue *entry = zmalloc(sizeof(*entry));\n    entry->key = key;\n    entry->next = next;\n    return (dictEntry *) entry;\n}\n\nstatic inline dictEntry *encodeMaskedPtr(const void *ptr, unsigned int bits) {\n    assert(((uintptr_t)ptr & ENTRY_PTR_MASK) == 0);\n    return (dictEntry *)(void *)((uintptr_t)ptr | bits);\n}\n\nstatic inline void *decodeMaskedPtr(const dictEntry *de) {\n    return (void *)((uintptr_t)(void *)de & ~ENTRY_PTR_MASK);\n}\n\n/* Encode a key pointer for storage in a no_value dict bucket.\n * For odd keys (like SDS strings), the key can be stored directly.\n * For even keys, we need to tag it with ENTRY_PTR_IS_EVEN_KEY. */\nstatic inline dictEntry *encodeEntryKey(dict *d, void *key) {\n    if (d->type->keys_are_odd) {\n        debugAssert(((uintptr_t)key & ENTRY_PTR_IS_ODD_KEY) == ENTRY_PTR_IS_ODD_KEY);\n        return key;\n    } else {\n        return encodeMaskedPtr(key, ENTRY_PTR_IS_EVEN_KEY);\n    }\n}\n\n/* Decodes the pointer to an entry without value, when you know it is an entry\n * without value. Hint: Use entryIsNoValue to check. */\nstatic inline dictEntryNoValue *decodeEntryNoValue(const dictEntry *de) {\n    return decodeMaskedPtr(de);\n}\n\n/* Returns 1 if the entry has a value field and 0 otherwise. 
*/\nstatic inline int entryHasValue(const dictEntry *de) {\n    return entryIsNormal(de);\n}\n\n/* ----------------------------- API implementation ------------------------- */\n\n/* Reset hash table parameters already initialized with _dictInit()*/\nstatic void _dictReset(dict *d, int htidx)\n{\n    d->ht_table[htidx] = NULL;\n    d->ht_size_exp[htidx] = -1;\n    d->ht_used[htidx] = 0;\n}\n\n/* Create a new hash table */\ndict *dictCreate(dictType *type)\n{\n    size_t metasize = type->dictMetadataBytes ? type->dictMetadataBytes(NULL) : 0;\n    dict *d = zmalloc(sizeof(*d)+metasize);\n    if (metasize > 0) {\n        memset(dictMetadata(d), 0, metasize);\n    }\n    _dictInit(d,type);\n    return d;\n}\n\n/* Change dictType of dict to another one with metadata support\n * Rest of dictType's values must stay the same */\nvoid dictTypeAddMeta(dict **d, dictType *typeWithMeta) {\n    /* Verify new dictType is compatible with the old one */\n    dictType toCmp = *typeWithMeta;\n    /* Ignore 'dictMetadataBytes' and 'onDictRelease' in comparison */\n    toCmp.dictMetadataBytes = (*d)->type->dictMetadataBytes;\n    toCmp.onDictRelease = (*d)->type->onDictRelease;\n    assert(memcmp((*d)->type, &toCmp, sizeof(dictType)) == 0); /* The rest of the dictType fields must be the same */\n\n    *d = zrealloc(*d, sizeof(dict) + typeWithMeta->dictMetadataBytes(*d));\n    (*d)->type = typeWithMeta;\n}\n\n/* Initialize the hash table */\nint _dictInit(dict *d, dictType *type)\n{\n    _dictReset(d, 0);\n    _dictReset(d, 1);\n    d->type = type;\n    d->rehashidx = -1;\n    d->pauserehash = 0;\n    d->pauseAutoResize = 0;\n    return DICT_OK;\n}\n\n/* Resize or create the hash table,\n * when malloc_failed is non-NULL, it'll avoid panic if malloc fails (in which case it'll be set to 1).\n * Returns DICT_OK if resize was performed, and DICT_ERR if skipped. 
*/\nint _dictResize(dict *d, unsigned long size, int* malloc_failed)\n{\n    if (malloc_failed) *malloc_failed = 0;\n\n    /* We can't rehash twice if rehashing is ongoing. */\n    assert(!dictIsRehashing(d));\n\n    /* the new hash table */\n    dictEntry **new_ht_table;\n    unsigned long new_ht_used;\n    signed char new_ht_size_exp = _dictNextExp(size);\n\n    /* Detect overflows */\n    size_t newsize = DICTHT_SIZE(new_ht_size_exp);\n    if (newsize < size || newsize * sizeof(dictEntry*) < newsize)\n        return DICT_ERR;\n\n    /* Rehashing to the same table size is not useful. */\n    if (new_ht_size_exp == d->ht_size_exp[0]) return DICT_ERR;\n\n    /* Allocate the new hash table and initialize all pointers to NULL */\n    if (malloc_failed) {\n        new_ht_table = ztrycalloc(newsize*sizeof(dictEntry*));\n        *malloc_failed = new_ht_table == NULL;\n        if (*malloc_failed)\n            return DICT_ERR;\n    } else\n        new_ht_table = zcalloc(newsize*sizeof(dictEntry*));\n\n    new_ht_used = 0;\n\n    /* Prepare a second hash table for incremental rehashing.\n     * We do this even for the first initialization, so that we can trigger the\n     * rehashingStarted more conveniently, we will clean it up right after. */\n    d->ht_size_exp[1] = new_ht_size_exp;\n    d->ht_used[1] = new_ht_used;\n    d->ht_table[1] = new_ht_table;\n    d->rehashidx = 0;\n    if (d->type->rehashingStarted) d->type->rehashingStarted(d);\n    if (d->type->bucketChanged)\n        d->type->bucketChanged(d, DICTHT_SIZE(d->ht_size_exp[1]));\n\n    /* Is this the first initialization or is the first hash table empty? If so\n     * it's not really a rehashing, we can just set the first hash table so that\n     * it can accept keys. 
*/\n    if (d->ht_table[0] == NULL || d->ht_used[0] == 0) {\n        if (d->type->rehashingCompleted) d->type->rehashingCompleted(d);\n        if (d->type->bucketChanged)\n            d->type->bucketChanged(d, -(long long)DICTHT_SIZE(d->ht_size_exp[0]));\n        if (d->ht_table[0]) zfree(d->ht_table[0]);\n        d->ht_size_exp[0] = new_ht_size_exp;\n        d->ht_used[0] = new_ht_used;\n        d->ht_table[0] = new_ht_table;\n        _dictReset(d, 1);\n        d->rehashidx = -1;\n        return DICT_OK;\n    }\n\n    /* Force a full rehashing of the dictionary */\n    if (d->type->force_full_rehash) {\n        while (dictRehash(d, 1000)) {\n            /* Continue rehashing */\n        }\n    }\n    return DICT_OK;\n}\n\nint _dictExpand(dict *d, unsigned long size, int* malloc_failed) {\n    /* the size is invalid if it is smaller than the size of the hash table \n     * or smaller than the number of elements already inside the hash table */\n    if (dictIsRehashing(d) || d->ht_used[0] > size || DICTHT_SIZE(d->ht_size_exp[0]) >= size)\n        return DICT_ERR;\n    return _dictResize(d, size, malloc_failed);\n}\n\n/* return DICT_ERR if expand was not performed */\nint dictExpand(dict *d, unsigned long size) {\n    return _dictExpand(d, size, NULL);\n}\n\n/* return DICT_ERR if expand failed due to memory allocation failure */\nint dictTryExpand(dict *d, unsigned long size) {\n    int malloc_failed = 0;\n    _dictExpand(d, size, &malloc_failed);\n    return malloc_failed? 
DICT_ERR : DICT_OK;\n}\n\n/* return DICT_ERR if shrink was not performed */\nint dictShrink(dict *d, unsigned long size) {\n    /* the size is invalid if it is bigger than the size of the hash table\n     * or smaller than the number of elements already inside the hash table */\n    if (dictIsRehashing(d) || d->ht_used[0] > size || DICTHT_SIZE(d->ht_size_exp[0]) <= size)\n        return DICT_ERR;\n    return _dictResize(d, size, NULL);\n}\n\n/* Helper function for `dictRehash` and `dictBucketRehash` which rehashes all the keys\n * in a bucket at index `idx` from the old to the new hash HT. */\nstatic void rehashEntriesInBucketAtIndex(dict *d, uint64_t idx) {\n    dictEntry *de = d->ht_table[0][idx];\n    uint64_t h;\n    dictEntry *nextde;\n    while (de) {\n        nextde = dictGetNext(de);\n        void *storedKey = dictGetKey(de);\n        /* Get the index in the new hash table */\n        if (d->ht_size_exp[1] > d->ht_size_exp[0]) {\n            const void *key = dictStoredKey2Key(d, storedKey);\n            h = dictGetHash(d, key) & DICTHT_SIZE_MASK(d->ht_size_exp[1]);\n        } else {\n            /* We're shrinking the table. The tables sizes are powers of\n             * two, so we simply mask the bucket index in the larger table\n             * to get the bucket index in the smaller table. */\n            h = idx & DICTHT_SIZE_MASK(d->ht_size_exp[1]);\n        }\n        if (d->type->no_value) {\n            if (!d->ht_table[1][h]) {\n                /* The destination bucket is empty, allowing the key to be stored \n                 * directly without allocating a dictEntry. If an old entry was \n                 * previously allocated, free its memory. */                \n                if (!entryIsKey(de)) zfree(decodeMaskedPtr(de));\n                \n                de = encodeEntryKey(d, storedKey);\n                \n            } else if (entryIsKey(de)) {\n                /* We don't have an allocated entry but we need one. 
*/\n                de = createEntryNoValue(storedKey, d->ht_table[1][h]);\n            } else {\n                dictSetNext(de, d->ht_table[1][h]);\n            }\n        } else {\n            dictSetNext(de, d->ht_table[1][h]);\n        }\n        d->ht_table[1][h] = de;\n        d->ht_used[0]--;\n        d->ht_used[1]++;\n        de = nextde;\n    }\n    d->ht_table[0][idx] = NULL;\n}\n\n/* This checks if we already rehashed the whole table and if more rehashing is required */\nstatic int dictCheckRehashingCompleted(dict *d) {\n    if (d->ht_used[0] != 0) return 0;\n    \n    if (d->type->rehashingCompleted) d->type->rehashingCompleted(d);\n    if (d->type->bucketChanged)\n        d->type->bucketChanged(d, -(long long)DICTHT_SIZE(d->ht_size_exp[0]));\n    zfree(d->ht_table[0]);\n    /* Copy the new ht onto the old one */\n    d->ht_table[0] = d->ht_table[1];\n    d->ht_used[0] = d->ht_used[1];\n    d->ht_size_exp[0] = d->ht_size_exp[1];\n    _dictReset(d, 1);\n    d->rehashidx = -1;\n    return 1;\n}\n\n/* Performs N steps of incremental rehashing. Returns 1 if there are still\n * keys to move from the old to the new hash table, otherwise 0 is returned.\n *\n * Note that a rehashing step consists in moving a bucket (that may have more\n * than one key as we use chaining) from the old to the new hash table, however\n * since part of the hash table may be composed of empty spaces, it is not\n * guaranteed that this function will rehash even a single bucket, since it\n * will visit at max N*10 empty buckets in total, otherwise the amount of\n * work it does would be unbound and the function may block for a long time. */\nint dictRehash(dict *d, int n) {\n    int empty_visits = n*10; /* Max number of empty buckets to visit. 
*/\n    unsigned long s0 = DICTHT_SIZE(d->ht_size_exp[0]);\n    unsigned long s1 = DICTHT_SIZE(d->ht_size_exp[1]);\n    if (dict_can_resize == DICT_RESIZE_FORBID || !dictIsRehashing(d)) return 0;\n    /* If dict_can_resize is DICT_RESIZE_AVOID, we want to avoid rehashing. \n     * - If expanding, the threshold is dict_force_resize_ratio which is 4.\n     * - If shrinking, the threshold is 1 / (HASHTABLE_MIN_FILL * dict_force_resize_ratio) which is 1/32. */\n    if (dict_can_resize == DICT_RESIZE_AVOID && \n        ((s1 > s0 && s1 < dict_force_resize_ratio * s0) ||\n         (s1 < s0 && s0 < HASHTABLE_MIN_FILL * dict_force_resize_ratio * s1)))\n    {\n        return 0;\n    }\n\n    while(n-- && d->ht_used[0] != 0) {\n        /* Note that rehashidx can't overflow as we are sure there are more\n         * elements because ht[0].used != 0 */\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) > (unsigned long)d->rehashidx);\n        while(d->ht_table[0][d->rehashidx] == NULL) {\n            d->rehashidx++;\n            if (--empty_visits == 0) return 1;\n        }\n        /* Move all the keys in this bucket from the old to the new hash HT */\n        rehashEntriesInBucketAtIndex(d, d->rehashidx);\n        d->rehashidx++;\n    }\n\n    return !dictCheckRehashingCompleted(d);\n}\n\nlong long timeInMilliseconds(void) {\n    struct timeval tv;\n\n    gettimeofday(&tv,NULL);\n    return (((long long)tv.tv_sec)*1000)+(tv.tv_usec/1000);\n}\n\n/* Rehash in us+\"delta\" microseconds. The value of \"delta\" is larger\n * than 0, and is smaller than 1000 in most cases. 
The exact upper bound\n * depends on the running time of dictRehash(d,100).*/\nint dictRehashMicroseconds(dict *d, uint64_t us) {\n    if (d->pauserehash > 0) return 0;\n\n    monotime timer;\n    elapsedStart(&timer);\n    int rehashes = 0;\n\n    while(dictRehash(d,100)) {\n        rehashes += 100;\n        if (elapsedUs(timer) >= us) break;\n    }\n    return rehashes;\n}\n\n/* This function performs just a step of rehashing, and only if hashing has\n * not been paused for our hash table. When we have iterators in the\n * middle of a rehashing we can't mess with the two hash tables otherwise\n * some elements can be missed or duplicated.\n *\n * This function is called by common lookup or update operations in the\n * dictionary so that the hash table automatically migrates from H1 to H2\n * while it is actively used. */\nstatic void _dictRehashStep(dict *d) {\n    if (d->pauserehash == 0) dictRehash(d,1);\n}\n\n/* Performs rehashing on a single bucket. */\nint _dictBucketRehash(dict *d, uint64_t idx) {\n    if (d->pauserehash != 0) return 0;\n    unsigned long s0 = DICTHT_SIZE(d->ht_size_exp[0]);\n    unsigned long s1 = DICTHT_SIZE(d->ht_size_exp[1]);\n    if (dict_can_resize == DICT_RESIZE_FORBID || !dictIsRehashing(d)) return 0;\n    /* If dict_can_resize is DICT_RESIZE_AVOID, we want to avoid rehashing. \n     * - If expanding, the threshold is dict_force_resize_ratio which is 4.\n     * - If shrinking, the threshold is 1 / (HASHTABLE_MIN_FILL * dict_force_resize_ratio) which is 1/32. 
*/\n    if (dict_can_resize == DICT_RESIZE_AVOID && \n        ((s1 > s0 && s1 < dict_force_resize_ratio * s0) ||\n         (s1 < s0 && s0 < HASHTABLE_MIN_FILL * dict_force_resize_ratio * s1)))\n    {\n        return 0;\n    }\n    rehashEntriesInBucketAtIndex(d, idx);\n    dictCheckRehashingCompleted(d);\n    return 1;\n}\n\n/* Add an element to the target hash table */\nint dictAdd(dict *d, void *key __stored_key, void *val)\n{\n    dictEntry *entry = dictAddRaw(d,key,NULL);\n\n    if (!entry) return DICT_ERR;\n    if (!d->type->no_value) dictSetVal(d, entry, val);\n    return DICT_OK;\n}\n\nint dictCompareKeys(dict *d, const void *key1, const void *key2) {\n    dictCmpCache cache = {0};\n    keyCmpFunc cmpFunc = dictGetCmpFunc(d);\n    return cmpFunc(&cache, key1, key2);\n}\n\n/* Low level add or find:\n * This function adds the entry but instead of setting a value returns the\n * dictEntry structure to the user, that will make sure to fill the value\n * field as they wish.\n *\n * This function is also directly exposed to the user API to be called\n * mainly in order to store non-pointers inside the hash value, example:\n *\n * entry = dictAddRaw(dict,mykey,NULL);\n * if (entry != NULL) dictSetSignedIntegerVal(entry,1000);\n *\n * Return values:\n *\n * If key already exists NULL is returned, and \"*existing\" is populated\n * with the existing entry if existing is not NULL.\n *\n * If key was added, the hash entry is returned to be manipulated by the caller.\n */\ndictEntry *dictAddRaw(dict *d, void *key __stored_key, dictEntry **existing)\n{\n    /* Get the position for the new key or NULL if the key already exists. */\n    void *position = dictFindLinkForInsert(d, dictStoredKey2Key(d, key), existing);\n    if (!position) return NULL;\n\n    /* Dup the key if necessary. 
*/\n    if (d->type->keyDup) key = d->type->keyDup(d, key);\n\n    return dictInsertKeyAtLink(d, key, position);\n}\n\n/* Adds a key in the dict's hashtable at the link returned by a preceding\n * call to dictFindLinkForInsert(). This is a low level function which allows\n * splitting dictAddRaw in two parts. Normally, dictAddRaw or dictAdd should be\n * used instead. It assumes that dictExpandIfNeeded() was called before. */\ndictEntry *dictInsertKeyAtLink(dict *d, void *key __stored_key, dictEntryLink link) {\n    dictEntryLink bucket = link; /* It's a bucket, but the API hides that. */\n    dictEntry *entry;\n    /* If rehashing is ongoing, we insert in table 1, otherwise in table 0.\n     * Assert that the provided bucket is the right table. */\n    int htidx = dictIsRehashing(d) ? 1 : 0;\n    assert(bucket >= &d->ht_table[htidx][0] &&\n           bucket <= &d->ht_table[htidx][DICTHT_SIZE_MASK(d->ht_size_exp[htidx])]);\n    if (d->type->no_value) {\n        if (!*bucket) {\n            /* We can store the key directly in the destination bucket without \n             * allocating dictEntry.\n             */\n            entry = encodeEntryKey(d, key);\n            assert(entryIsKey(entry));\n        } else {\n            /* Allocate an entry without value. */\n            entry = createEntryNoValue(key, *bucket);\n        }\n    } else {\n        /* Allocate the memory and store the new entry.\n         * Insert the element in top, with the assumption that in a database\n         * system it is more likely that recently added entries are accessed\n         * more frequently. 
*/\n        entry = zmalloc(sizeof(*entry));\n        assert(entryIsNormal(entry)); /* Check alignment of allocation */\n        entry->key = key;\n        entry->next = *bucket;\n    }\n    *bucket = entry;\n    d->ht_used[htidx]++;\n\n    return entry;\n}\n\n/* Add or Overwrite:\n * Add an element, discarding the old value if the key already exists.\n * Return 1 if the key was added from scratch, 0 if there was already an\n * element with such a key and dictReplace() just performed a value update\n * operation. */\nint dictReplace(dict *d, void *key __stored_key, void *val)\n{\n    dictEntry *entry, *existing;\n\n    /* Try to add the element. If the key\n     * does not exist, dictAdd will succeed. */\n    entry = dictAddRaw(d,key,&existing);\n    if (entry) {\n        dictSetVal(d, entry, val);\n        return 1;\n    }\n\n    /* Set the new value and free the old one. Note that it is important\n     * to do that in this order, as the value may just be exactly the same\n     * as the previous one. In this context, think of reference counting:\n     * you want to increment (set), and then decrement (free), and not the\n     * reverse. */\n    void *oldval = dictGetVal(existing);\n    dictSetVal(d, existing, val);\n    if (d->type->valDestructor)\n        d->type->valDestructor(d, oldval);\n    return 0;\n}\n\n/* Add or Find:\n * dictAddOrFind() is simply a version of dictAddRaw() that always\n * returns the hash entry of the specified key, even if the key already\n * exists and can't be added (in that case the entry of the already\n * existing key is returned.)\n *\n * See dictAddRaw() for more information. */\ndictEntry *dictAddOrFind(dict *d, void *key __stored_key) {\n    dictEntry *entry, *existing;\n    entry = dictAddRaw(d,key,&existing);\n    return entry ? entry : existing;\n}\n\n/* Search and remove an element. This is a helper function for\n * dictDelete() and dictUnlink(), please check the top comment\n * of those functions. 
*/\nstatic dictEntry *dictGenericDelete(dict *d, const void *key, int nofree) {\n    dictCmpCache cmpCache = {0};\n    uint64_t h, idx;\n    dictEntry *he, *prevHe;\n    int table;\n\n    /* dict is empty */\n    if (dictSize(d) == 0) return NULL;\n\n    h = dictGetHash(d, key);\n    idx = h & DICTHT_SIZE_MASK(d->ht_size_exp[0]);\n\n    /* Rehash the hash table if needed */\n    _dictRehashStepIfNeeded(d,idx);\n\n    keyCmpFunc cmpFunc = dictGetCmpFunc(d);\n\n    for (table = 0; table <= 1; table++) {\n        if (table == 0 && (long)idx < d->rehashidx) continue;\n        idx = h & DICTHT_SIZE_MASK(d->ht_size_exp[table]);\n        he = d->ht_table[table][idx];\n        prevHe = NULL;\n        while(he) {\n            const void *he_key = dictStoredKey2Key(d, dictGetKey(he));\n            if (key == he_key || cmpFunc(&cmpCache, key, he_key)) {\n                /* Unlink the element from the list */\n                if (prevHe)\n                    dictSetNext(prevHe, dictGetNext(he));\n                else\n                    d->ht_table[table][idx] = dictGetNext(he);\n                if (!nofree) {\n                    dictFreeUnlinkedEntry(d, he);\n                }\n                d->ht_used[table]--;\n                _dictShrinkIfNeeded(d);\n                return he;\n            }\n            prevHe = he;\n            he = dictGetNext(he);\n        }\n        if (!dictIsRehashing(d)) break;\n    }\n    return NULL; /* not found */\n}\n\n/* Remove an element, returning DICT_OK on success or DICT_ERR if the\n * element was not found. */\nint dictDelete(dict *ht, const void *key) {\n    return dictGenericDelete(ht,key,0) ? DICT_OK : DICT_ERR;\n}\n\n/* Remove an element from the table, but without actually releasing\n * the key, value and dictionary entry. 
The dictionary entry is returned\n * if the element was found (and unlinked from the table), and the user\n * should later call `dictFreeUnlinkedEntry()` with it in order to release it.\n * Otherwise if the key is not found, NULL is returned.\n *\n * This function is useful when we want to remove something from the hash\n * table but want to use its value before actually deleting the entry.\n * Without this function the pattern would require two lookups:\n *\n *  entry = dictFind(...);\n *  // Do something with entry\n *  dictDelete(dictionary,entry);\n *\n * Thanks to this function it is possible to avoid this, and use\n * instead:\n *\n * entry = dictUnlink(dictionary,entry);\n * // Do something with entry\n * dictFreeUnlinkedEntry(entry); // <- This does not need to lookup again.\n */\ndictEntry *dictUnlink(dict *d, const void *key) {\n    return dictGenericDelete(d,key,1);\n}\n\n/* You need to call this function to really free the entry after a call\n * to dictUnlink(). It's safe to call this function with 'he' = NULL. */\nvoid dictFreeUnlinkedEntry(dict *d, dictEntry *he) {\n    if (he == NULL) return;\n    dictFreeKey(d, he);\n    dictFreeVal(d, he);\n    if (!entryIsKey(he)) zfree(decodeMaskedPtr(he));\n}\n\n/* Destroy an entire dictionary */\nint _dictClear(dict *d, int htidx, void(callback)(dict*)) {\n    unsigned long i;\n\n    /* Free all the elements */\n    for (i = 0; i < DICTHT_SIZE(d->ht_size_exp[htidx]) && d->ht_used[htidx] > 0; i++) {\n        dictEntry *he, *nextHe;\n        /* Callback will be called once for every 65535 deletions. 
Beware,\n         * if the dict has fewer than 65535 items, it will not be called at all. */\n        if (callback && i != 0 && (i & 65535) == 0) callback(d);\n\n        if ((he = d->ht_table[htidx][i]) == NULL) continue;\n        while(he) {\n            nextHe = dictGetNext(he);\n            dictFreeKey(d, he);\n            dictFreeVal(d, he);\n            if (!entryIsKey(he)) zfree(decodeMaskedPtr(he));\n            d->ht_used[htidx]--;\n            he = nextHe;\n        }\n    }\n    /* Free the table and the allocated cache structure */\n    zfree(d->ht_table[htidx]);\n    /* Re-initialize the table */\n    _dictReset(d, htidx);\n    return DICT_OK; /* never fails */\n}\n\n/* Clear & Release the hash table */\nvoid dictRelease(dict *d)\n{\n    /* Someone may be monitoring a dict that started rehashing: before\n     * destroying the dict, fake the rehashing completion. */\n    if (dictIsRehashing(d) && d->type->rehashingCompleted)\n        d->type->rehashingCompleted(d);\n\n    /* Subtract the size of all buckets. */\n    if (d->type->bucketChanged)\n        d->type->bucketChanged(d, -(long long)dictBuckets(d));\n\n    if (d->type->onDictRelease)\n        d->type->onDictRelease(d);\n\n    _dictClear(d,0,NULL);\n    _dictClear(d,1,NULL);\n    zfree(d);\n}\n\n/* Finds a given key. Like dictFindLink(), but searches the bucket even if the\n * dict is empty.\n * \n * Returns a dictEntryLink reference if found. Otherwise, returns NULL.\n * \n * bucket - returns a pointer to the bucket the key maps to, 
unless dict is empty.\n */\nstatic dictEntryLink dictFindLinkInternal(dict *d, const void *key, dictEntryLink *bucket) {\n    dictCmpCache cmpCache = {0};\n    dictEntryLink link;\n    uint64_t idx;\n    int table;\n    \n    if (bucket) {\n        *bucket = NULL;\n    } else {\n        /* If dict is empty and no need to find bucket, return NULL */\n        if (dictSize(d) == 0) return NULL; \n    }\n\n    const uint64_t hash = dictGetHash(d, key);\n    idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[0]);\n    keyCmpFunc cmpFunc = dictGetCmpFunc(d);\n\n    /* Rehash the hash table if needed */\n    _dictRehashStepIfNeeded(d,idx);\n\n    int tables = (dictIsRehashing(d)) ? 2 : 1;\n    for (table = 0; table < tables; table++) {\n        if (table == 0 && (long)idx < d->rehashidx) continue;\n        idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[table]);\n\n        link = &(d->ht_table[table][idx]);\n        if (bucket) *bucket = link;\n        while(link && *link) {\n            const void *visitedKey = dictStoredKey2Key(d, dictGetKey(*link));\n\n            if (key == visitedKey || cmpFunc( &cmpCache, key, visitedKey))                \n                return link;\n\n            link = dictGetNextLink(*link);\n        }\n    }\n    return NULL;\n}\n\ndictEntry *dictFind(dict *d, const void *key)\n{\n    dictEntryLink link = dictFindLink(d, key, NULL);\n    return (link) ? *link : NULL;\n}\n\n/* Finds the dictEntry using pointer and pre-calculated hash.\n * oldkey is a dead pointer and should not be accessed.\n * the hash value should be provided using dictGetHash.\n * no string / key comparison is performed.\n * return value is a pointer to the dictEntry if found, or NULL if not found. 
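\n * \n * Illustrative usage sketch (hypothetical caller; use() is a made-up\n * placeholder):\n * \n *   uint64_t h = dictGetHash(d, keyptr);   // while keyptr is still valid\n *   ... keyptr may be freed here; only its pointer value is kept ...\n *   dictEntry *de = dictFindByHashAndPtr(d, keyptr, h);\n *   if (de) use(de);\n 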
*/\ndictEntry *dictFindByHashAndPtr(dict *d, const void *oldptr, const uint64_t hash) {\n    dictEntry *he;\n    unsigned long idx, table;\n\n    if (dictSize(d) == 0) return NULL; /* dict is empty */\n    for (table = 0; table <= 1; table++) {\n        idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[table]);\n        if (table == 0 && (long)idx < d->rehashidx) continue;\n        he = d->ht_table[table][idx];\n        while(he) {\n            if (oldptr == dictGetKey(he))\n                return he;\n            he = dictGetNext(he);\n        }\n        if (!dictIsRehashing(d)) return NULL;\n    }\n    return NULL;\n}\n\n/* Find a key and return its dictEntryLink reference. Otherwise, return NULL.\n * \n * A dictEntryLink is used to locate the link preceding the searched item. \n * It is useful for deletion, addition, unlinking and updating, especially for \n * a dict configured with 'no_value'. In that case returning only a `dictEntry` \n * from a lookup may be insufficient, since the entry might actually be a bare \n * key stored directly in the bucket rather than an allocated dictEntry. \n * By locating the preceding link (dictEntryLink) these ops can be handled properly. \n * \n * After calling link = dictFindLink(...), any necessary update based on the returned \n * link or bucket must be performed immediately after, by calling dictSetKeyAtLink() \n * without any intervening operations on the given dict. Otherwise, the `dictEntryLink` \n * may become invalid. Example with a kvobj, replacing a key with a new key:\n * \n *      link = dictFindLink(d, key, &bucket);\n *      ... Do something, but don't modify the dict ...\n *      // assert(link != NULL);\n *      dictSetKeyAtLink(d, kv, &link, 0);\n *      \n * To add a new value (if there is no space for the new key, the dict will be\n * expanded by dictSetKeyAtLink() and the bucket will be looked up again):\n *   \n *      link = dictFindLink(d, key, &bucket);\n *      ... 
Do something, but don't modify the dict ...\n *      // assert(link == NULL);\n *      dictSetKeyAtLink(d, kv, &bucket, 1);\n *  \n *  bucket - returns the link to the bucket the key maps to, unless the dict is empty.\n */\ndictEntryLink dictFindLink(dict *d, const void *key, dictEntryLink *bucket) {\n    if (bucket) *bucket = NULL;\n    if (unlikely(dictSize(d) == 0))\n        return NULL;\n    \n    return dictFindLinkInternal(d, key, bucket);\n}\n\n/* Set the key at the given link.\n *\n * link:    - When `newItem` is set, `link` points to the bucket of the key.\n *          - When `newItem` is not set, `link` points to the link of the key.\n *          - If *link is NULL, dictFindLink() will be called to locate the key.\n *          - On return, it is updated, if needed, to the inserted key. \n *\n * newItem: 1 = Add a key with a new dictEntry.\n *          0 = Set a key to an existing dictEntry. \n */\nvoid dictSetKeyAtLink(dict *d, void *key __stored_key, dictEntryLink *link, int newItem) {\n    dictEntryLink dummy = NULL;\n    if (link == NULL) link = &dummy;\n    void *addedKey = (d->type->keyDup) ? d->type->keyDup(d, key) : key;\n    \n    if (newItem) {\n        signed char snap[2] = {d->ht_size_exp[0], d->ht_size_exp[1] };\n\n        /* Make room if needed for the new key */\n        dictExpandIfNeeded(d);\n        \n        /* Look up the key's link again if the tables were reallocated or if\n         * the given link is set to NULL */\n        if (snap[0] != d->ht_size_exp[0] || snap[1] != d->ht_size_exp[1] || *link == NULL) {\n            dictEntryLink bucket;\n            /* Bypass dictFindLink() to search the bucket even if the dict is empty! 
*/\n            *link = dictFindLinkInternal(d, dictStoredKey2Key(d, key), &bucket);\n            assert(bucket != NULL);\n            assert(*link == NULL);\n            *link = bucket; /* On newItem the link should be the bucket */\n        }\n        dictInsertKeyAtLink(d, addedKey, *link);\n        return;\n    } \n    \n    /* Setting key of existing dictEntry (newItem == 0) */\n    \n    if (*link == NULL) {\n        *link = dictFindLink(d, key, NULL);\n        assert(*link != NULL);\n    }\n    \n    dictEntry **de = *link;\n    if (entryIsKey(*de)) {\n        /* `*de` is actually a bare key, not an allocated entry. Replace the\n         * key, keeping the lsb tag bits. */\n        *de = encodeEntryKey(d, addedKey);\n    } else {\n        /* either dictEntry or dictEntryNoValue */\n        (*de)->key = addedKey;\n    }\n}\n\nvoid *dictFetchValue(dict *d, const void *key) {\n    dictEntry *he;\n\n    he = dictFind(d,key);\n    return he ? dictGetVal(he) : NULL;\n}\n\n/* Find an element from the table. A link is returned if the element is found, and\n * the user should later call `dictTwoPhaseUnlinkFree` with it in order to unlink\n * and release it. Otherwise if the key is not found, NULL is returned. These two\n * functions should be used as a pair.\n * `dictTwoPhaseUnlinkFind` pauses rehashing and `dictTwoPhaseUnlinkFree` resumes it.\n *\n * They can be used like this:\n *\n * dictEntryLink link = dictTwoPhaseUnlinkFind(db->dict,key->ptr, &table);\n * // Do something, but we can't modify the dict\n * dictTwoPhaseUnlinkFree(db->dict, link, table); // We don't need to lookup again\n *\n * Finding an entry before deleting it this way is an optimization that avoids\n * dictFind followed by dictDelete. i.e. 
the first API is a find, and it gives some info\n * to the second one to avoid repeating the lookup\n */\ndictEntryLink dictTwoPhaseUnlinkFind(dict *d, const void *key, int *table_index) {\n    dictCmpCache cmpCache = {0};\n    uint64_t h, idx, table;\n\n    if (dictSize(d) == 0) return NULL; /* dict is empty */\n    if (dictIsRehashing(d)) _dictRehashStep(d);\n\n    h = dictGetHash(d, key);    \n    keyCmpFunc cmpFunc = dictGetCmpFunc(d);\n\n    for (table = 0; table <= 1; table++) {\n        idx = h & DICTHT_SIZE_MASK(d->ht_size_exp[table]);\n        if (table == 0 && (long)idx < d->rehashidx) continue;\n        dictEntry **ref = &d->ht_table[table][idx];\n        while (ref && *ref) {\n            const void *de_key = dictStoredKey2Key(d, dictGetKey(*ref));\n            if (key == de_key || cmpFunc(&cmpCache, key, de_key)) {\n                *table_index = table;\n                dictPauseRehashing(d);\n                return ref;\n            }\n            ref = dictGetNextLink(*ref);\n        }\n        if (!dictIsRehashing(d)) return NULL;\n    }\n    return NULL;\n}\n\nvoid dictTwoPhaseUnlinkFree(dict *d, dictEntryLink plink, int table_index) {\n    if (plink == NULL || *plink == NULL) return;\n    dictEntry *de = *plink;\n    d->ht_used[table_index]--;\n\n    *plink = dictGetNext(de);\n    dictFreeKey(d, de);\n    dictFreeVal(d, de);\n    if (!entryIsKey(de)) zfree(decodeMaskedPtr(de));\n    _dictShrinkIfNeeded(d);\n    dictResumeRehashing(d);\n}\n\nvoid dictSetKey(dict *d, dictEntry* de, void *key __stored_key) {\n    assert(!d->type->no_value);\n    if (d->type->keyDup)\n        de->key = d->type->keyDup(d, key);\n    else\n        de->key = key;\n}\n\nvoid dictSetVal(dict *d, dictEntry *de, void *val) {\n    assert(entryHasValue(de));\n    de->v.val = d->type->valDup ? 
d->type->valDup(d, val) : val;\n}\n\nvoid dictSetSignedIntegerVal(dictEntry *de, int64_t val) {\n    assert(entryHasValue(de));\n    de->v.s64 = val;\n}\n\nvoid dictSetUnsignedIntegerVal(dictEntry *de, uint64_t val) {\n    assert(entryHasValue(de));\n    de->v.u64 = val;\n}\n\nvoid dictSetDoubleVal(dictEntry *de, double val) {\n    assert(entryHasValue(de));\n    de->v.d = val;\n}\n\nint64_t dictIncrSignedIntegerVal(dictEntry *de, int64_t val) {\n    assert(entryHasValue(de));\n    return de->v.s64 += val;\n}\n\nuint64_t dictIncrUnsignedIntegerVal(dictEntry *de, uint64_t val) {\n    assert(entryHasValue(de));\n    return de->v.u64 += val;\n}\n\ndouble dictIncrDoubleVal(dictEntry *de, double val) {\n    assert(entryHasValue(de));\n    return de->v.d += val;\n}\n\nint dictEntryIsKey(const dictEntry *de) {\n    return entryIsKey(de);\n}\n\nvoid *dictGetKey(const dictEntry *de) {\n    /* if entryIsKey() */\n    if ((uintptr_t)de & ENTRY_PTR_IS_ODD_KEY) return (void *) de;\n    if ((uintptr_t)de & ENTRY_PTR_IS_EVEN_KEY) return decodeMaskedPtr(de);    \n    /* Regular entry */\n    return de->key;\n}\n\nvoid *dictGetVal(const dictEntry *de) {\n    assert(entryHasValue(de));\n    return de->v.val;\n}\n\nint64_t dictGetSignedIntegerVal(const dictEntry *de) {\n    assert(entryHasValue(de));\n    return de->v.s64;\n}\n\nuint64_t dictGetUnsignedIntegerVal(const dictEntry *de) {\n    assert(entryHasValue(de));\n    return de->v.u64;\n}\n\ndouble dictGetDoubleVal(const dictEntry *de) {\n    assert(entryHasValue(de));\n    return de->v.d;\n}\n\n/* Returns a mutable reference to the value as a double within the entry. */\ndouble *dictGetDoubleValPtr(dictEntry *de) {\n    assert(entryHasValue(de));\n    return &de->v.d;\n}\n\n/* Returns the 'next' field of the entry or NULL if the entry doesn't have a\n * 'next' field. 
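\n *\n * Illustrative chain walk over one bucket (hypothetical sketch; use() is a\n * made-up placeholder). Note that a bare-key entry has no 'next' field, so\n * dictGetNext() returns NULL and ends the walk:\n *\n *   for (dictEntry *he = d->ht_table[0][idx]; he != NULL; he = dictGetNext(he))\n *       use(dictGetKey(he));\n 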
*/\ndictEntry *dictGetNext(const dictEntry *de) {\n    if (entryIsKey(de)) return NULL; /* there's no next */\n    /* Must come after entryIsKey() check */\n    return de->next;\n}\n\n/* Returns a pointer to the 'next' field in the entry or NULL if the entry\n * doesn't have a next field. */\nstatic dictEntryLink dictGetNextLink(dictEntry *de) {\n    if (entryIsKey(de)) return NULL;\n    /* Must come after entryIsKey() check */\n    return &de->next;\n}\n\nstatic void dictSetNext(dictEntry *de, dictEntry *next) {\n    assert(!entryIsKey(de));\n    /* dictEntryNoValue & dictEntry are layout-compatible */\n    de->next = next;\n}\n\n/* Returns the memory usage in bytes of the dict, excluding the size of the keys\n * and values. */\nsize_t dictMemUsage(const dict *d) {\n    return dictSize(d) * sizeof(dictEntry) +\n        dictBuckets(d) * sizeof(dictEntry*);\n}\n\nsize_t dictEntryMemUsage(int noValueDict) {\n    return (noValueDict) ? sizeof(dictEntryNoValue) : sizeof(dictEntry);\n}\n\n/* A fingerprint is a 64 bit number that represents the state of the dictionary\n * at a given time; it's just a few dict properties xored together.\n * When an unsafe iterator is initialized, we get the dict fingerprint, and check\n * the fingerprint again when the iterator is released.\n * If the two fingerprints are different it means that the user of the iterator\n * performed forbidden operations against the dictionary while iterating. */\nunsigned long long dictFingerprint(dict *d) {\n    unsigned long long integers[6], hash = 0;\n    int j;\n\n    integers[0] = (long) d->ht_table[0];\n    integers[1] = d->ht_size_exp[0];\n    integers[2] = d->ht_used[0];\n    integers[3] = (long) d->ht_table[1];\n    integers[4] = d->ht_size_exp[1];\n    integers[5] = d->ht_used[1];\n\n    /* We hash N integers by summing every successive integer with the integer\n     * hashing of the previous sum. 
Basically:\n     *\n     * Result = hash(hash(hash(int1)+int2)+int3) ...\n     *\n     * This way the same set of integers in a different order will (likely) hash\n     * to a different number. */\n    for (j = 0; j < 6; j++) {\n        hash += integers[j];\n        /* For the hashing step we use Tomas Wang's 64 bit integer hash. */\n        hash = (~hash) + (hash << 21); // hash = (hash << 21) - hash - 1;\n        hash = hash ^ (hash >> 24);\n        hash = (hash + (hash << 3)) + (hash << 8); // hash * 265\n        hash = hash ^ (hash >> 14);\n        hash = (hash + (hash << 2)) + (hash << 4); // hash * 21\n        hash = hash ^ (hash >> 28);\n        hash = hash + (hash << 31);\n    }\n    return hash;\n}\n\nvoid dictInitIterator(dictIterator *iter, dict *d)\n{\n    iter->d = d;\n    iter->table = 0;\n    iter->index = -1;\n    iter->safe = 0;\n    iter->entry = NULL;\n    iter->nextEntry = NULL;\n}\n\nvoid dictInitSafeIterator(dictIterator *iter, dict *d)\n{\n    dictInitIterator(iter, d);\n    iter->safe = 1;\n}\n\nvoid dictResetIterator(dictIterator *iter)\n{\n    if (!(iter->index == -1 && iter->table == 0)) {\n        if (iter->safe)\n            dictResumeRehashing(iter->d);\n        else\n            assert(iter->fingerprint == dictFingerprint(iter->d));\n    }\n}\n\ndictIterator *dictGetIterator(dict *d)\n{\n    dictIterator *iter = zmalloc(sizeof(*iter));\n    dictInitIterator(iter, d);\n    return iter;\n}\n\ndictIterator *dictGetSafeIterator(dict *d) {\n    dictIterator *i = dictGetIterator(d);\n\n    i->safe = 1;\n    return i;\n}\n\ndictEntry *dictNext(dictIterator *iter)\n{\n    while (1) {\n        if (iter->entry == NULL) {\n            if (iter->index == -1 && iter->table == 0) {\n                if (iter->safe)\n                    dictPauseRehashing(iter->d);\n                else\n                    iter->fingerprint = dictFingerprint(iter->d);\n\n                /* skip the rehashed slots in table[0] */\n                if 
(dictIsRehashing(iter->d)) {\n                    iter->index = iter->d->rehashidx - 1;\n                }\n            }\n            iter->index++;\n            if (iter->index >= (long) DICTHT_SIZE(iter->d->ht_size_exp[iter->table])) {\n                if (dictIsRehashing(iter->d) && iter->table == 0) {\n                    iter->table++;\n                    iter->index = 0;\n                } else {\n                    break;\n                }\n            }\n            iter->entry = iter->d->ht_table[iter->table][iter->index];\n        } else {\n            iter->entry = iter->nextEntry;\n        }\n        if (iter->entry) {\n            /* We need to save the 'next' here, the iterator user\n             * may delete the entry we are returning. */\n            iter->nextEntry = dictGetNext(iter->entry);\n            return iter->entry;\n        }\n    }\n    return NULL;\n}\n\nvoid dictReleaseIterator(dictIterator *iter)\n{\n    dictResetIterator(iter);\n    zfree(iter);\n}\n\n/* Return a random entry from the hash table. Useful to\n * implement randomized algorithms */\ndictEntry *dictGetRandomKey(dict *d)\n{\n    dictEntry *he, *orighe;\n    unsigned long h;\n    int listlen, listele;\n\n    if (dictSize(d) == 0) return NULL;\n    if (dictIsRehashing(d)) _dictRehashStep(d);\n    if (dictIsRehashing(d)) {\n        unsigned long s0 = DICTHT_SIZE(d->ht_size_exp[0]);\n        do {\n            /* We are sure there are no elements in indexes from 0\n             * to rehashidx-1 */\n            h = d->rehashidx + (randomULong() % (dictBuckets(d) - d->rehashidx));\n            he = (h >= s0) ? 
d->ht_table[1][h - s0] : d->ht_table[0][h];\n        } while(he == NULL);\n    } else {\n        unsigned long m = DICTHT_SIZE_MASK(d->ht_size_exp[0]);\n        do {\n            h = randomULong() & m;\n            he = d->ht_table[0][h];\n        } while(he == NULL);\n    }\n\n    /* Now we found a non-empty bucket, but it is a linked\n     * list and we need to get a random element from the list.\n     * The only sane way to do so is counting the elements and\n     * selecting a random index. */\n    listlen = 0;\n    orighe = he;\n    while(he) {\n        he = dictGetNext(he);\n        listlen++;\n    }\n    listele = random() % listlen;\n    he = orighe;\n    while(listele--) he = dictGetNext(he);\n    return he;\n}\n\n/* This function samples the dictionary to return a few keys from random\n * locations.\n *\n * It does not guarantee to return all the keys specified in 'count', nor\n * does it guarantee to return non-duplicated elements; however, it makes\n * some effort to do both things.\n *\n * Returned pointers to hash table entries are stored into 'des', which\n * points to an array of dictEntry pointers. The array must have room for\n * at least 'count' elements; that is the argument we pass to the function\n * to tell how many random elements we need.\n *\n * The function returns the number of items stored into 'des', which may\n * be less than 'count' if the hash table holds fewer than 'count' elements,\n * or if not enough elements were found in a reasonable number of\n * steps.\n *\n * Note that this function is not suitable when you need a good distribution\n * of the returned items, but only when you need to \"sample\" a given number\n * of continuous elements to run some kind of algorithm or to produce\n * statistics. However the function is much faster than dictGetRandomKey()\n * at producing N elements. */\nunsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count) {\n    unsigned long j; /* internal hash table id, 0 or 1. 
*/\n    unsigned long tables; /* 1 or 2 tables? */\n    unsigned long stored = 0, maxsizemask;\n    unsigned long maxsteps;\n\n    if (dictSize(d) < count) count = dictSize(d);\n    maxsteps = count*10;\n\n    /* Try to do a rehashing work proportional to 'count'. */\n    for (j = 0; j < count; j++) {\n        if (dictIsRehashing(d))\n            _dictRehashStep(d);\n        else\n            break;\n    }\n\n    tables = dictIsRehashing(d) ? 2 : 1;\n    maxsizemask = DICTHT_SIZE_MASK(d->ht_size_exp[0]);\n    if (tables > 1 && maxsizemask < DICTHT_SIZE_MASK(d->ht_size_exp[1]))\n        maxsizemask = DICTHT_SIZE_MASK(d->ht_size_exp[1]);\n\n    /* Pick a random point inside the larger table. */\n    unsigned long i = randomULong() & maxsizemask;\n    unsigned long emptylen = 0; /* Continuous empty entries so far. */\n    while(stored < count && maxsteps--) {\n        for (j = 0; j < tables; j++) {\n            /* Invariant of the dict.c rehashing: up to the indexes already\n             * visited in ht[0] during the rehashing, there are no populated\n             * buckets, so we can skip ht[0] for indexes between 0 and idx-1. */\n            if (tables == 2 && j == 0 && i < (unsigned long) d->rehashidx) {\n                /* Moreover, if we are currently out of range in the second\n                 * table, there will be no elements in both tables up to\n                 * the current rehashing index, so we jump if possible.\n                 * (this happens when going from big to small table). */\n                if (i >= DICTHT_SIZE(d->ht_size_exp[1]))\n                    i = d->rehashidx;\n                else\n                    continue;\n            }\n            if (i >= DICTHT_SIZE(d->ht_size_exp[j])) continue; /* Out of range for this table. */\n            dictEntry *he = d->ht_table[j][i];\n\n            /* Count contiguous empty buckets, and jump to other\n             * locations if they reach 'count' (with a minimum of 5). 
*/\n            if (he == NULL) {\n                emptylen++;\n                if (emptylen >= 5 && emptylen > count) {\n                    i = randomULong() & maxsizemask;\n                    emptylen = 0;\n                }\n            } else {\n                emptylen = 0;\n                while (he) {\n                    /* Collect all the elements of the buckets found non empty while iterating.\n                     * To avoid the issue of being unable to sample the end of a long chain,\n                     * we utilize the Reservoir Sampling algorithm to optimize the sampling process.\n                     * This means that even when the maximum number of samples has been reached,\n                     * we continue sampling until we reach the end of the chain.\n                     * See https://en.wikipedia.org/wiki/Reservoir_sampling. */\n                    if (stored < count) {\n                        des[stored] = he;\n                    } else {\n                        unsigned long r = randomULong() % (stored + 1);\n                        if (r < count) des[r] = he;\n                    }\n\n                    he = dictGetNext(he);\n                    stored++;\n                }\n                if (stored >= count) goto end;\n            }\n        }\n        i = (i+1) & maxsizemask;\n    }\n\nend:\n    return stored > count ? count : stored;\n}\n\n\n/* Reallocate the dictEntry, key and value allocations in a bucket using the\n * provided allocation functions in order to defrag them. */\nstatic void dictDefragBucket(dict *d, dictEntry **bucketref, dictDefragFunctions *defragfns) {\n    dictDefragAllocFunction *defragalloc = defragfns->defragAlloc;\n    dictDefragAllocFunction *defragkey = defragfns->defragKey;\n    dictDefragAllocFunction *defragval = defragfns->defragVal;\n    while (bucketref && *bucketref) {\n        dictEntry *de = *bucketref, *newde = NULL;\n        void *newkey = defragkey ? 
defragkey(dictGetKey(de)) : NULL;\n\n        if (d->type->no_value) {\n            if (entryIsKey(de)) {\n                if (newkey) *bucketref = encodeEntryKey(d, newkey);\n            } else {\n                dictEntryNoValue *entry = decodeEntryNoValue(de), *newentry;\n                if ((newentry = defragalloc(entry))) {\n                    newde = (dictEntry *) newentry;\n                    entry = newentry;\n                }\n                if (newkey) entry->key = newkey;\n            }\n        } else {\n            void *newval = defragval ? defragval(dictGetVal(de)) : NULL;\n            assert(entryIsNormal(de));\n            newde = defragalloc(de);\n            if (newde) de = newde;\n            if (newkey) de->key = newkey;\n            if (newval) de->v.val = newval;\n        }\n        if (newde) {\n            *bucketref = newde;\n        }\n        bucketref = dictGetNextLink(*bucketref);\n    }\n}\n\n/* This is like dictGetRandomKey() from the POV of the API, but will do more\n * work to ensure a better distribution of the returned element.\n *\n * This function improves the distribution because the dictGetRandomKey()\n * problem is that it selects a random bucket, then it selects a random\n * element from the chain in the bucket. However elements being in different\n * chain lengths will have different probabilities of being reported. With\n * this function instead what we do is to consider a \"linear\" range of the table\n * that may be constituted of N buckets with chains of different lengths\n * appearing one after the other. Then we report a random element in the range.\n * In this way we smooth away the problem of different chain lengths. 
*/\n#define GETFAIR_NUM_ENTRIES 15\ndictEntry *dictGetFairRandomKey(dict *d) {\n    dictEntry *entries[GETFAIR_NUM_ENTRIES];\n    unsigned int count = dictGetSomeKeys(d,entries,GETFAIR_NUM_ENTRIES);\n    /* Note that dictGetSomeKeys() may return zero elements in an unlucky\n     * run, even if there are actually elements inside the hash table. So\n     * when we get zero, we call the true dictGetRandomKey() that will always\n     * yield an element if the hash table has at least one. */\n    if (count == 0) return dictGetRandomKey(d);\n    unsigned int idx = rand() % count;\n    return entries[idx];\n}\n\n/* Function to reverse bits. Algorithm from:\n * http://graphics.stanford.edu/~seander/bithacks.html#ReverseParallel */\nstatic unsigned long rev(unsigned long v) {\n    unsigned long s = CHAR_BIT * sizeof(v); // bit size; must be power of 2\n    unsigned long mask = ~0UL;\n    while ((s >>= 1) > 0) {\n        mask ^= (mask << s);\n        v = ((v >> s) & mask) | ((v << s) & ~mask);\n    }\n    return v;\n}\n\n/* dictScan() is used to iterate over the elements of a dictionary.\n *\n * Iterating works the following way:\n *\n * 1) Initially you call the function using a cursor (v) value of 0.\n * 2) The function performs one step of the iteration, and returns the\n *    new cursor value you must use in the next call.\n * 3) When the returned cursor is 0, the iteration is complete.\n *\n * The function guarantees all elements present in the\n * dictionary get returned between the start and end of the iteration.\n * However it is possible some elements get returned multiple times.\n *\n * For every element returned, the callback argument 'fn' is\n * called with 'privdata' as first argument and the dictionary entry\n * 'de' as second argument.\n *\n * HOW IT WORKS.\n *\n * The iteration algorithm was designed by Pieter Noordhuis.\n * The main idea is to increment a cursor starting from the higher order\n * bits. 
That is, instead of incrementing the cursor normally, the bits\n * of the cursor are reversed, then the cursor is incremented, and finally\n * the bits are reversed again.\n *\n * This strategy is needed because the hash table may be resized between\n * iteration calls.\n *\n * dict.c hash tables are always power of two in size, and they\n * use chaining, so the position of an element in a given table is given\n * by computing the bitwise AND between Hash(key) and SIZE-1\n * (where SIZE-1 is always the mask that is equivalent to taking the rest\n *  of the division between the Hash of the key and SIZE).\n *\n * For example if the current hash table size is 16, the mask is\n * (in binary) 1111. The position of a key in the hash table will always be\n * the last four bits of the hash output, and so forth.\n *\n * WHAT HAPPENS IF THE TABLE CHANGES IN SIZE?\n *\n * If the hash table grows, elements can go anywhere in one multiple of\n * the old bucket: for example let's say we already iterated with\n * a 4 bit cursor 1100 (the mask is 1111 because hash table size = 16).\n *\n * If the hash table will be resized to 64 elements, then the new mask will\n * be 111111. The new buckets you obtain by substituting in ??1100\n * with either 0 or 1 can be targeted only by keys we already visited\n * when scanning the bucket 1100 in the smaller hash table.\n *\n * By iterating the higher bits first, because of the inverted counter, the\n * cursor does not need to restart if the table size gets bigger. 
It will\n * continue iterating using cursors without '1100' at the end, and also\n * without any other combination of the final 4 bits already explored.\n *\n * Similarly when the table size shrinks over time, for example going from\n * 16 to 8, if a combination of the lower three bits (the mask for size 8\n * is 111) were already completely explored, it would not be visited again\n * because we are sure we tried, for example, both 0111 and 1111 (all the\n * variations of the higher bit) so we don't need to test it again.\n *\n * WAIT... YOU HAVE *TWO* TABLES DURING REHASHING!\n *\n * Yes, this is true, but we always iterate the smaller table first, then\n * we test all the expansions of the current cursor into the larger\n * table. For example if the current cursor is 101 and we also have a\n * larger table of size 16, we also test (0)101 and (1)101 inside the larger\n * table. This reduces the problem back to having only one table, where\n * the larger one, if it exists, is just an expansion of the smaller one.\n *\n * LIMITATIONS\n *\n * This iterator is completely stateless, and this is a huge advantage,\n * including no additional memory used.\n *\n * The disadvantages resulting from this design are:\n *\n * 1) It is possible we return elements more than once. 
However, this is usually\n *    easy to deal with at the application level.\n * 2) The iterator must return multiple elements per call, as it needs to always\n *    return all the keys chained in a given bucket, and all the expansions, so\n *    we are sure we don't miss keys moving during rehashing.\n * 3) The reverse cursor is somewhat hard to understand at first, but this\n *    comment is supposed to help.\n */\nunsigned long dictScan(dict *d,\n                       unsigned long v,\n                       dictScanFunction *fn,\n                       void *privdata)\n{\n    return dictScanDefrag(d, v, fn, NULL, privdata);\n}\n\nvoid dictScanDefragBucket(dict *d, dictScanFunction *fn,\n                          dictDefragFunctions *defragfns,\n                          void *privdata,\n                          dictEntry **bucketref) {\n    dictEntry **plink, *de, *next;\n\n    /* Emit entries at bucket */\n    if (defragfns) dictDefragBucket(d, bucketref, defragfns);\n\n    de = *bucketref;\n    plink = bucketref;\n    while (de) {\n        next = dictGetNext(de);\n        fn(privdata, de, plink);\n\n        if (!next) break; /* if last element, break */\n\n        /* If `*plink` still points to 'de', the visited item\n         * wasn't deleted by fn() */\n        if (*plink == de)\n            plink = &(de->next);\n\n        de = next;\n    }\n}\n\n/* Like dictScan, but additionally reallocates the memory used by the dict\n * entries using the provided allocation function. This feature was added for\n * the active defrag feature.\n *\n * The 'defragfns' callbacks are called with a pointer to memory that the\n * callback can reallocate. The callbacks should return a new memory address\n * or NULL, where NULL means that no reallocation happened and the old memory\n * is still valid. 
*/\nunsigned long dictScanDefrag(dict *d,\n                             unsigned long v,\n                             dictScanFunction *fn,\n                             dictDefragFunctions *defragfns,\n                             void *privdata)\n{\n    int htidx0, htidx1;\n    unsigned long m0, m1;\n\n    if (dictSize(d) == 0) return 0;\n\n    /* This is needed in case the scan callback tries to do dictFind or alike. */\n    dictPauseRehashing(d);\n\n    if (!dictIsRehashing(d)) {\n        htidx0 = 0;\n        m0 = DICTHT_SIZE_MASK(d->ht_size_exp[htidx0]);\n        dictScanDefragBucket(d, fn, defragfns, privdata, &d->ht_table[htidx0][v & m0]);\n\n        /* Set unmasked bits so incrementing the reversed cursor\n         * operates on the masked bits */\n        v |= ~m0;\n\n        /* Increment the reverse cursor */\n        v = rev(v);\n        v++;\n        v = rev(v);\n\n    } else {\n        htidx0 = 0;\n        htidx1 = 1;\n\n        /* Make sure t0 is the smaller and t1 is the bigger table */\n        if (DICTHT_SIZE(d->ht_size_exp[htidx0]) > DICTHT_SIZE(d->ht_size_exp[htidx1])) {\n            htidx0 = 1;\n            htidx1 = 0;\n        }\n\n        m0 = DICTHT_SIZE_MASK(d->ht_size_exp[htidx0]);\n        m1 = DICTHT_SIZE_MASK(d->ht_size_exp[htidx1]);\n\n        dictScanDefragBucket(d, fn, defragfns, privdata, &d->ht_table[htidx0][v & m0]);\n\n        /* Iterate over indices in larger table that are the expansion\n         * of the index pointed to by the cursor in the smaller table */\n        do {\n            dictScanDefragBucket(d, fn, defragfns, privdata, &d->ht_table[htidx1][v & m1]);\n\n            /* Increment the reverse cursor not covered by the smaller mask.*/\n            v |= ~m1;\n            v = rev(v);\n            v++;\n            v = rev(v);\n\n            /* Continue while bits covered by mask difference is non-zero */\n        } while (v & (m0 ^ m1));\n    }\n\n    dictResumeRehashing(d);\n\n    return v;\n}\n\n/* 
------------------------- private functions ------------------------------ */\n\n/* Because we may need to allocate a huge memory chunk at once when the dict\n * resizes, we check whether the allocation is allowed if the dict type has\n * a resizeAllowed member function. */\nstatic int dictTypeResizeAllowed(dict *d, size_t size) {\n    if (d->type->resizeAllowed == NULL) return 1;\n    return d->type->resizeAllowed(\n                    DICTHT_SIZE(_dictNextExp(size)) * sizeof(dictEntry*),\n                    (double)d->ht_used[0] / DICTHT_SIZE(d->ht_size_exp[0]));\n}\n\n/* Returning DICT_OK indicates that the expand succeeded, or that the dictionary\n * is undergoing rehashing and there is nothing else we need to do about this\n * dictionary currently. DICT_ERR indicates that the expand was not triggered\n * (maybe try shrinking?). */\nint dictExpandIfNeeded(dict *d) {\n    /* Incremental rehashing already in progress. Return. */\n    if (dictIsRehashing(d)) return DICT_OK;\n\n    /* If the hash table is empty, expand it to the initial size. */\n    if (DICTHT_SIZE(d->ht_size_exp[0]) == 0) {\n        dictExpand(d, DICT_HT_INITIAL_SIZE);\n        return DICT_OK;\n    }\n\n    /* If we reached the 1:1 ratio, and we are allowed to resize the hash\n     * table (global setting) or we should avoid it but the ratio between\n     * elements/buckets is over the \"safe\" threshold, we resize doubling\n     * the number of buckets. 
*/\n    if ((dict_can_resize == DICT_RESIZE_ENABLE &&\n         d->ht_used[0] >= DICTHT_SIZE(d->ht_size_exp[0])) ||\n        (dict_can_resize != DICT_RESIZE_FORBID &&\n         d->ht_used[0] >= dict_force_resize_ratio * DICTHT_SIZE(d->ht_size_exp[0])))\n    {\n        if (dictTypeResizeAllowed(d, d->ht_used[0] + 1))\n            dictExpand(d, d->ht_used[0] + 1);\n        return DICT_OK;\n    }\n    return DICT_ERR;\n}\n\n/* Expand the hash table if needed (OK=Expanded, ERR=Not expanded) */\nstatic int _dictExpandIfNeeded(dict *d) {\n    /* Automatic resizing is disallowed. Return */\n    if (d->pauseAutoResize > 0) return DICT_ERR;\n\n    return dictExpandIfNeeded(d);\n}\n\n/* Returning DICT_OK indicates that the shrink succeeded, or that the dictionary\n * is undergoing rehashing and there is nothing else we need to do about this\n * dictionary currently. DICT_ERR indicates that the shrink was not triggered\n * (maybe try expanding?). */\nint dictShrinkIfNeeded(dict *d) {\n    /* Incremental rehashing already in progress. Return. */\n    if (dictIsRehashing(d)) return DICT_OK;\n\n    /* If the size of the hash table is DICT_HT_INITIAL_SIZE, don't shrink it. */\n    if (DICTHT_SIZE(d->ht_size_exp[0]) <= DICT_HT_INITIAL_SIZE) return DICT_OK;\n\n    /* If we reached below a 1:8 elements/buckets ratio, and we are allowed to resize\n     * the hash table (global setting) or we should avoid it but the ratio is below 1:32,\n     * we'll trigger a resize of the hash table. 
*/\n    if ((dict_can_resize == DICT_RESIZE_ENABLE &&\n         d->ht_used[0] * HASHTABLE_MIN_FILL <= DICTHT_SIZE(d->ht_size_exp[0])) ||\n        (dict_can_resize != DICT_RESIZE_FORBID &&\n         d->ht_used[0] * HASHTABLE_MIN_FILL * dict_force_resize_ratio <= DICTHT_SIZE(d->ht_size_exp[0])))\n    {\n        if (dictTypeResizeAllowed(d, d->ht_used[0]))\n            dictShrink(d, d->ht_used[0]);\n        return DICT_OK;\n    }\n    return DICT_ERR;\n}\n\nstatic void _dictShrinkIfNeeded(dict *d)\n{\n    /* Automatic resizing is disallowed. Return */\n    if (d->pauseAutoResize > 0) return;\n\n    dictShrinkIfNeeded(d);\n}\n\nstatic void _dictRehashStepIfNeeded(dict *d, uint64_t visitedIdx) {\n    if ((!dictIsRehashing(d)) || (d->pauserehash != 0))\n        return;\n    /* rehashing not in progress if rehashidx == -1 */\n    if ((long)visitedIdx >= d->rehashidx && d->ht_table[0][visitedIdx]) {\n        /* If we have a valid hash entry at `idx` in ht0, we perform\n         * rehash on the bucket at `idx` (being more CPU cache friendly) */\n        _dictBucketRehash(d, visitedIdx);\n    } else {\n        /* If the hash entry is not in ht0, we rehash the buckets based\n         * on the rehashidx (not CPU cache friendly). */\n        dictRehash(d,1);\n    }\n}\n\n/* Our hash table capacity is a power of two */\nstatic signed char _dictNextExp(unsigned long size)\n{\n    if (size <= DICT_HT_INITIAL_SIZE) return DICT_HT_INITIAL_EXP;\n    if (size >= LONG_MAX) return (8*sizeof(long)-1);\n\n    return 8*sizeof(long) - __builtin_clzl(size-1);\n}\n\n/* Finds and returns the link within the dict where the provided key should\n * be inserted using dictInsertKeyAtLink() if the key does not already exist in\n * the dict. If the key exists in the dict, NULL is returned and the optional\n * 'existing' entry pointer is populated, if provided. 
*/\ndictEntryLink dictFindLinkForInsert(dict *d, const void *key, dictEntry **existing) {\n    unsigned long idx, table;\n    dictCmpCache cmpCache = {0};\n    dictEntry *he;\n    uint64_t hash = dictGetHash(d, key);\n    if (existing) *existing = NULL;\n    idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[0]);\n\n    /* Rehash the hash table if needed */\n    _dictRehashStepIfNeeded(d,idx);\n\n    /* Expand the hash table if needed */\n    _dictExpandIfNeeded(d);\n    keyCmpFunc cmpFunc = dictGetCmpFunc(d);\n\n    for (table = 0; table <= 1; table++) {\n        if (table == 0 && (long)idx < d->rehashidx) continue;\n        idx = hash & DICTHT_SIZE_MASK(d->ht_size_exp[table]);\n        /* Search if this slot does not already contain the given key */\n        he = d->ht_table[table][idx];\n        while(he) {\n            const void *he_key = dictStoredKey2Key(d, dictGetKey(he));\n            if (key == he_key || cmpFunc(&cmpCache, key, he_key)) {\n                if (existing) *existing = he;\n                return NULL;\n            }\n            he = dictGetNext(he);\n        }\n        if (!dictIsRehashing(d)) break;\n    }\n\n    /* If we are in the process of rehashing the hash table, the bucket is\n     * always returned in the context of the second (new) hash table. */\n    dictEntry **bucket = &d->ht_table[dictIsRehashing(d) ? 1 : 0][idx];\n    return bucket;\n}\n\n\nvoid dictEmpty(dict *d, void(callback)(dict*)) {\n    /* Someone may be monitoring a dict that started rehashing; before\n     * destroying the dict, fake rehashing completion. */\n    if (dictIsRehashing(d) && d->type->rehashingCompleted)\n        d->type->rehashingCompleted(d);\n\n    /* Subtract the size of all buckets. 
*/\n    if (d->type->bucketChanged)\n        d->type->bucketChanged(d, -(long long)dictBuckets(d));\n\n    _dictClear(d,0,callback);\n    _dictClear(d,1,callback);\n    d->rehashidx = -1;\n    d->pauserehash = 0;\n    d->pauseAutoResize = 0;\n}\n\nvoid dictSetResizeEnabled(dictResizeEnable enable) {\n    dict_can_resize = enable;\n}\n\n/* Compiler inlines this for internal calls within dict.c (verified with -O3). */\nuint64_t dictGetHash(dict *d, const void *key) {\n    return d->type->hashFunction(key);\n}\n\n/* Provides the old and new ht size for a given dictionary during rehashing. This method\n * should only be invoked during initialization/rehashing. */\nvoid dictRehashingInfo(dict *d, unsigned long long *from_size, unsigned long long *to_size) {\n    /* Invalid method usage if rehashing isn't ongoing. */\n    assert(dictIsRehashing(d));\n    *from_size = DICTHT_SIZE(d->ht_size_exp[0]);\n    *to_size = DICTHT_SIZE(d->ht_size_exp[1]);\n}\n\n/* ------------------------------- Debugging ---------------------------------*/\n#define DICT_STATS_VECTLEN 50\nvoid dictFreeStats(dictStats *stats) {\n    zfree(stats->clvector);\n    zfree(stats);\n}\n\nvoid dictCombineStats(dictStats *from, dictStats *into) {\n    into->buckets += from->buckets;\n    into->maxChainLen = (from->maxChainLen > into->maxChainLen) ? 
from->maxChainLen : into->maxChainLen;\n    into->totalChainLen += from->totalChainLen;\n    into->htSize += from->htSize;\n    into->htUsed += from->htUsed;\n    for (int i = 0; i < DICT_STATS_VECTLEN; i++) {\n        into->clvector[i] += from->clvector[i];\n    }\n}\n\ndictStats *dictGetStatsHt(dict *d, int htidx, int full) {\n    unsigned long *clvector = zcalloc(sizeof(unsigned long) * DICT_STATS_VECTLEN);\n    dictStats *stats = zcalloc(sizeof(dictStats));\n    stats->htidx = htidx;\n    stats->clvector = clvector;\n    stats->htSize = DICTHT_SIZE(d->ht_size_exp[htidx]);\n    stats->htUsed = d->ht_used[htidx];\n    if (!full) return stats;\n    /* Compute stats. */\n    for (unsigned long i = 0; i < DICTHT_SIZE(d->ht_size_exp[htidx]); i++) {\n        dictEntry *he;\n\n        if (d->ht_table[htidx][i] == NULL) {\n            clvector[0]++;\n            continue;\n        }\n        stats->buckets++;\n        /* For each hash entry on this slot... */\n        unsigned long chainlen = 0;\n        he = d->ht_table[htidx][i];\n        while(he) {\n            chainlen++;\n            he = dictGetNext(he);\n        }\n        clvector[(chainlen < DICT_STATS_VECTLEN) ? chainlen : (DICT_STATS_VECTLEN-1)]++;\n        if (chainlen > stats->maxChainLen) stats->maxChainLen = chainlen;\n        stats->totalChainLen += chainlen;\n    }\n\n    return stats;\n}\n\n/* Generates human readable stats. */\nsize_t dictGetStatsMsg(char *buf, size_t bufsize, dictStats *stats, int full) {\n    if (stats->htUsed == 0) {\n        return snprintf(buf,bufsize,\n            \"Hash table %d stats (%s):\\n\"\n            \"No stats available for empty dictionaries\\n\",\n            stats->htidx, (stats->htidx == 0) ? 
\"main hash table\" : \"rehashing target\");\n    }\n    size_t l = 0;\n    l += snprintf(buf + l, bufsize - l,\n                  \"Hash table %d stats (%s):\\n\"\n                  \" table size: %lu\\n\"\n                  \" number of elements: %lu\\n\",\n                  stats->htidx, (stats->htidx == 0) ? \"main hash table\" : \"rehashing target\",\n                  stats->htSize, stats->htUsed);\n    if (full) {\n        l += snprintf(buf + l, bufsize - l,\n                      \" different slots: %lu\\n\"\n                      \" max chain length: %lu\\n\"\n                      \" avg chain length (counted): %.02f\\n\"\n                      \" avg chain length (computed): %.02f\\n\"\n                      \" Chain length distribution:\\n\",\n                      stats->buckets, stats->maxChainLen,\n                      (float) stats->totalChainLen / stats->buckets, (float) stats->htUsed / stats->buckets);\n\n        for (unsigned long i = 0; i < DICT_STATS_VECTLEN - 1; i++) {\n            if (stats->clvector[i] == 0) continue;\n            if (l >= bufsize) break;\n            l += snprintf(buf + l, bufsize - l,\n                          \"   %ld: %ld (%.02f%%)\\n\",\n                          i, stats->clvector[i], ((float) stats->clvector[i] / stats->htSize) * 100);\n        }\n    }\n\n    /* Make sure there is a NULL term at the end. */\n    buf[bufsize-1] = '\\0';\n    /* Unlike snprintf(), return the number of characters actually written. 
*/\n    return strlen(buf);\n}\n\nvoid dictGetStats(char *buf, size_t bufsize, dict *d, int full) {\n    size_t l;\n    char *orig_buf = buf;\n    size_t orig_bufsize = bufsize;\n\n    dictStats *mainHtStats = dictGetStatsHt(d, 0, full);\n    l = dictGetStatsMsg(buf, bufsize, mainHtStats, full);\n    dictFreeStats(mainHtStats);\n    buf += l;\n    bufsize -= l;\n    if (dictIsRehashing(d) && bufsize > 0) {\n        dictStats *rehashHtStats = dictGetStatsHt(d, 1, full);\n        dictGetStatsMsg(buf, bufsize, rehashHtStats, full);\n        dictFreeStats(rehashHtStats);\n    }\n    /* Make sure there is a NULL term at the end. */\n    orig_buf[orig_bufsize-1] = '\\0';\n}\n\nstatic int dictDefaultCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    (void)(cache); /*unused*/\n    return key1 == key2;\n}\n\n/* ------------------------------- Benchmark ---------------------------------*/\n\n#ifdef REDIS_TEST\n#include \"testhelp.h\"\n\n#define UNUSED(V) ((void) V)\n#define TEST(name) printf(\"test — %s\\n\", name);\n\nuint64_t hashCallback(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\nint compareCallback(dictCmpCache *cache, const void *key1, const void *key2) {\n    int l1,l2;\n    UNUSED(cache);\n\n    l1 = strlen((char*)key1);\n    l2 = strlen((char*)key2);\n    if (l1 != l2) return 0;\n    return memcmp(key1, key2, l1) == 0;\n}\n\nvoid freeCallback(dict *d, void *val) {\n    UNUSED(d);\n\n    zfree(val);\n}\n\nchar *stringFromLongLong(long long value) {\n    char buf[32];\n    int len;\n    char *s;\n\n    len = snprintf(buf,sizeof(buf),\"%lld\",value);\n    s = zmalloc(len+1);\n    memcpy(s, buf, len);\n    s[len] = '\\0';\n    return s;\n}\n\nchar *stringFromSubstring(void) {\n    #define LARGE_STRING_SIZE 10000\n    #define MIN_STRING_SIZE 100\n    #define MAX_STRING_SIZE 500\n    static char largeString[LARGE_STRING_SIZE + 1];\n    static int init = 0;\n    if (init == 0) {\n        /* 
Generate a large string */\n        for (size_t i = 0; i < LARGE_STRING_SIZE; i++) {\n            /* Random printable ASCII character (33 to 126) */\n            largeString[i] = 33 + (rand() % 94);\n        }\n        /* Null-terminate the large string */\n        largeString[LARGE_STRING_SIZE] = '\\0';\n        init = 1;\n    }\n    /* Randomly choose a size between minSize and maxSize */\n    size_t substringSize = MIN_STRING_SIZE + (rand() % (MAX_STRING_SIZE - MIN_STRING_SIZE + 1));\n    size_t startIndex = rand() % (LARGE_STRING_SIZE - substringSize + 1);\n    /* Allocate memory for the substring (+1 for null terminator) */\n    char *s = zmalloc(substringSize + 1);\n    memcpy(s, largeString + startIndex, substringSize); // Copy the substring\n    s[substringSize] = '\\0'; // Null-terminate the string\n    return s;\n}\n\ndictType BenchmarkDictType = {\n    hashCallback,\n    NULL,\n    NULL,\n    compareCallback,\n    freeCallback,\n    NULL,\n    NULL\n};\n\n#define start_benchmark() start = timeInMilliseconds()\n#define end_benchmark(msg) do { \\\n    elapsed = timeInMilliseconds()-start; \\\n    printf(msg \": %ld items in %lld ms\\n\", count, elapsed); \\\n} while(0)\n\n/* ./redis-server test dict [<count> | --accurate] */\nint dictTest(int argc, char **argv, int flags) {\n    long j;\n    long long start, elapsed;\n    int retval;\n    dict *d = dictCreate(&BenchmarkDictType);\n    dictEntry* de = NULL;\n    dictEntry* existing = NULL;\n    long count = 0;\n    unsigned long new_dict_size, current_dict_used, remain_keys;\n    int accurate = (flags & REDIS_TEST_ACCURATE);\n\n    if (argc == 4) {\n        if (accurate) {\n            count = 5000000;\n        } else {\n            count = strtol(argv[3],NULL,10);\n        }\n    } else {\n        count = 5000;\n    }\n\n    TEST(\"Add 16 keys and verify dict resize is ok\") {\n        dictSetResizeEnabled(DICT_RESIZE_ENABLE);\n        for (j = 0; j < 16; j++) {\n            retval = 
dictAdd(d,stringFromLongLong(j),(void*)j);\n            assert(retval == DICT_OK);\n        }\n        while (dictIsRehashing(d)) dictRehashMicroseconds(d,1000);\n        assert(dictSize(d) == 16);\n        assert(dictBuckets(d) == 16);\n    }\n\n    TEST(\"Use DICT_RESIZE_AVOID to disable the dict resize and pad to (dict_force_resize_ratio * 16)\") {\n        /* Use DICT_RESIZE_AVOID to disable the dict resize, and pad\n         * the number of keys to (dict_force_resize_ratio * 16), so we can satisfy\n         * dict_force_resize_ratio in next test. */\n        dictSetResizeEnabled(DICT_RESIZE_AVOID);\n        for (j = 16; j < (long)dict_force_resize_ratio * 16; j++) {\n            retval = dictAdd(d,stringFromLongLong(j),(void*)j);\n            assert(retval == DICT_OK);\n        }\n        current_dict_used = dict_force_resize_ratio * 16;\n        assert(dictSize(d) == current_dict_used);\n        assert(dictBuckets(d) == 16);\n    }\n\n    TEST(\"Add one more key, trigger the dict resize\") {\n        retval = dictAdd(d,stringFromLongLong(current_dict_used),(void*)(current_dict_used));\n        assert(retval == DICT_OK);\n        current_dict_used++;\n        new_dict_size = 1UL << _dictNextExp(current_dict_used);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == 16);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == new_dict_size);\n\n        /* Wait for rehashing. */\n        dictSetResizeEnabled(DICT_RESIZE_ENABLE);\n        while (dictIsRehashing(d)) dictRehashMicroseconds(d,1000);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == new_dict_size);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == 0);\n    }\n\n    TEST(\"Delete keys until we can trigger shrink in next test\") {\n        /* Delete keys until we can satisfy (1 / HASHTABLE_MIN_FILL) in the next test. 
*/\n        for (j = new_dict_size / HASHTABLE_MIN_FILL + 1; j < (long)current_dict_used; j++) {\n            char *key = stringFromLongLong(j);\n            retval = dictDelete(d, key);\n            zfree(key);\n            assert(retval == DICT_OK);\n        }\n        current_dict_used = new_dict_size / HASHTABLE_MIN_FILL + 1;\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == new_dict_size);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == 0);\n    }\n\n    TEST(\"Delete one more key, trigger the dict resize\") {\n        current_dict_used--;\n        char *key = stringFromLongLong(current_dict_used);\n        retval = dictDelete(d, key);\n        zfree(key);\n        unsigned long oldDictSize = new_dict_size;\n        new_dict_size = 1UL << _dictNextExp(current_dict_used);\n        assert(retval == DICT_OK);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == oldDictSize);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == new_dict_size);\n\n        /* Wait for rehashing. */\n        while (dictIsRehashing(d)) dictRehashMicroseconds(d,1000);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == new_dict_size);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == 0);\n    }\n\n    TEST(\"Empty the dictionary and add 128 keys\") {\n        dictEmpty(d, NULL);\n        for (j = 0; j < 128; j++) {\n            retval = dictAdd(d,stringFromLongLong(j),(void*)j);\n            assert(retval == DICT_OK);\n        }\n        while (dictIsRehashing(d)) dictRehashMicroseconds(d,1000);\n        assert(dictSize(d) == 128);\n        assert(dictBuckets(d) == 128);\n    }\n\n    TEST(\"Use DICT_RESIZE_AVOID to disable the dict resize and reduce to 3\") {\n        /* Use DICT_RESIZE_AVOID to disable the dict resize, and reduce\n         * the number of keys until we can trigger shrinking in the next test. 
*/\n        dictSetResizeEnabled(DICT_RESIZE_AVOID);\n        remain_keys = DICTHT_SIZE(d->ht_size_exp[0]) / (HASHTABLE_MIN_FILL * dict_force_resize_ratio) + 1;\n        for (j = remain_keys; j < 128; j++) {\n            char *key = stringFromLongLong(j);\n            retval = dictDelete(d, key);\n            zfree(key);\n            assert(retval == DICT_OK);\n        }\n        current_dict_used = remain_keys;\n        assert(dictSize(d) == remain_keys);\n        assert(dictBuckets(d) == 128);\n    }\n\n    TEST(\"Delete one more key, trigger the dict resize\") {\n        current_dict_used--;\n        char *key = stringFromLongLong(current_dict_used);\n        retval = dictDelete(d, key);\n        zfree(key);\n        new_dict_size = 1UL << _dictNextExp(current_dict_used);\n        assert(retval == DICT_OK);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == 128);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == new_dict_size);\n\n        /* Wait for rehashing. 
*/\n        dictSetResizeEnabled(DICT_RESIZE_ENABLE);\n        while (dictIsRehashing(d)) dictRehashMicroseconds(d,1000);\n        assert(dictSize(d) == current_dict_used);\n        assert(DICTHT_SIZE(d->ht_size_exp[0]) == new_dict_size);\n        assert(DICTHT_SIZE(d->ht_size_exp[1]) == 0);\n    }\n\n    TEST(\"Restore to original state\") {\n        dictEmpty(d, NULL);\n        dictSetResizeEnabled(DICT_RESIZE_ENABLE);\n    }\n    srand(12345);\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        /* Create a dynamically allocated substring */\n        char *key = stringFromSubstring();\n\n        /* Insert the range directly from the large string */\n        de = dictAddRaw(d, key, &existing);\n        assert(de != NULL || existing != NULL);\n        /* If key already exists NULL is returned so we need to free the temp key string */\n        if (de == NULL) zfree(key);\n    }\n    end_benchmark(\"Inserting random substrings (100-500B) from large string with symbols\");\n    assert((long)dictSize(d) <= count);\n    dictEmpty(d, NULL);\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        retval = dictAdd(d,stringFromLongLong(j),(void*)j);\n        assert(retval == DICT_OK);\n    }\n    end_benchmark(\"Inserting via dictAdd() non existing\");\n    assert((long)dictSize(d) == count);\n\n    dictEmpty(d, NULL);\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        de = dictAddRaw(d,stringFromLongLong(j),NULL);\n        assert(de != NULL);\n    }\n    end_benchmark(\"Inserting via dictAddRaw() non existing\");\n    assert((long)dictSize(d) == count);\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        void *key = stringFromLongLong(j);\n        de = dictAddRaw(d,key,&existing);\n        assert(existing != NULL);\n        zfree(key);\n    }\n    end_benchmark(\"Inserting via dictAddRaw() existing (no insertion)\");\n    assert((long)dictSize(d) == count);\n\n    /* Wait for rehashing. 
*/\n    while (dictIsRehashing(d)) {\n        dictRehashMicroseconds(d,100*1000);\n    }\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        char *key = stringFromLongLong(j);\n        dictEntry *de = dictFind(d,key);\n        assert(de != NULL);\n        zfree(key);\n    }\n    end_benchmark(\"Linear access of existing elements\");\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        char *key = stringFromLongLong(j);\n        dictEntry *de = dictFind(d,key);\n        assert(de != NULL);\n        zfree(key);\n    }\n    end_benchmark(\"Linear access of existing elements (2nd round)\");\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        char *key = stringFromLongLong(rand() % count);\n        dictEntry *de = dictFind(d,key);\n        assert(de != NULL);\n        zfree(key);\n    }\n    end_benchmark(\"Random access of existing elements\");\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        dictEntry *de = dictGetRandomKey(d);\n        assert(de != NULL);\n    }\n    end_benchmark(\"Accessing random keys\");\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        char *key = stringFromLongLong(rand() % count);\n        key[0] = 'X';\n        dictEntry *de = dictFind(d,key);\n        assert(de == NULL);\n        zfree(key);\n    }\n    end_benchmark(\"Accessing missing\");\n\n    start_benchmark();\n    for (j = 0; j < count; j++) {\n        char *key = stringFromLongLong(j);\n        retval = dictDelete(d,key);\n        assert(retval == DICT_OK);\n        key[0] += 17; /* Change first number to letter. 
*/\n        retval = dictAdd(d,key,(void*)j);\n        assert(retval == DICT_OK);\n    }\n    end_benchmark(\"Removing and adding\");\n    dictRelease(d);\n\n    TEST(\"Use dict without values (no_value=1)\") {\n        dictType dt = BenchmarkDictType;\n        dt.no_value = 1;\n\n        /* Allocate array of size count and fill it with keys (stringFromLongLong(j)) */\n        char **lookupKeys = zmalloc(sizeof(char*) * count);\n        for (long j = 0; j < count; j++)\n            lookupKeys[j] = stringFromLongLong(j);\n\n        /* Add keys without values. */\n        dict *d = dictCreate(&dt);\n        for (j = 0; j < count; j++) {\n            retval = dictAdd(d,lookupKeys[j],NULL);\n            assert(retval == DICT_OK);\n        }\n\n        /* Now, we should be able to find the keys. */\n        for (j = 0; j < count; j++) {\n            dictEntry *de = dictFind(d,lookupKeys[j]);\n            assert(de != NULL);\n        }\n\n        /* Find non-existing keys. */\n        for (j = 0; j < count; j++) {\n            /* Temporarily override first char of key */\n            char tmp = lookupKeys[j][0];\n            lookupKeys[j][0] = 'X';\n            dictEntry *de = dictFind(d,lookupKeys[j]);\n            lookupKeys[j][0] = tmp;\n            assert(de == NULL);\n        }\n\n        dictRelease(d);\n        zfree(lookupKeys);\n    }\n\n    TEST(\"Test dictFindLink() functionality\") {\n        dictType dt = BenchmarkDictType;\n        dict *d = dictCreate(&dt);\n        \n        /* find in empty dict */\n        dictEntryLink link = dictFindLink(d, \"key\", NULL);\n        assert(link == NULL);\n\n        /* Add keys to dict and test */\n        for (j = 0; j < 10; j++) {\n            /* Add another key to dict */\n            char *key = stringFromLongLong(j);\n            retval = dictAdd(d, key, (void*)j);\n            assert(retval == DICT_OK);\n            /* find existing keys with dictFindLink() */\n            dictEntryLink link = dictFindLink(d, key, 
NULL);\n            assert(link != NULL);\n            assert(*link != NULL);\n            assert(dictGetKey(*link) != NULL);\n            \n            /* Test that the key found is the correct one */\n            void *foundKey = dictGetKey(*link);\n            assert(compareCallback( NULL, foundKey, key));\n\n            /* Test finding a non-existing key */\n            char *nonExistingKey = stringFromLongLong(j + 10);\n            link = dictFindLink(d, nonExistingKey, NULL);\n            assert(link == NULL);\n\n            /* Test with bucket parameter */\n            dictEntryLink bucket = NULL;\n            link = dictFindLink(d, key, &bucket);\n            assert(link != NULL);\n            assert(bucket != NULL);\n\n            /* Test bucket parameter with non-existing key */\n            link = dictFindLink(d, nonExistingKey, &bucket);\n            assert(link == NULL);\n            assert(bucket != NULL); /* Bucket should still be set even for non-existing keys */\n\n            /* Clean up */\n            zfree(nonExistingKey);\n        }\n\n        dictRelease(d);\n    }\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/dict.h",
    "content": "/* Hash Tables Implementation.\n *\n * This file implements in-memory hash tables with insert/del/replace/find/\n * get-random-element operations. Hash tables will auto-resize as needed;\n * tables of power-of-two size are used, and collisions are handled by\n * chaining. See the source code for more information... :)\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Dict usage of pointer tagging\n * -----------------------------\n * In the \"normal\" case (no_value=0), a dict slot contains only a pointer to a \n * dictEntry, and dictEntry holds untagged pointers to key and value. But when a \n * dict is used as a set (no_value=1), we optimize by storing direct key pointers \n * when possible, avoiding dictEntry allocation. This happens when a bucket contains \n * only one key, or at the tail of a collision chain. Redis dicts use pointer \n * tagging to distinguish direct key pointers from dictEntry pointers, i.e. embedding \n * metadata in the lowest three bits of pointers. This requires 8-byte alignment, \n * which zmalloc() guarantees on both 32-bit and 64-bit systems (via jemalloc/tcmalloc, \n * or standard malloc with explicit PREFIX_SIZE=8).\n * \n * Besides distinguishing direct key pointers from dictEntry pointers, we also \n * need to distinguish between even and odd key pointers that are stored in the \n * dict. Therefore, we use the following tagging scheme:\n * - dictEntry pointer: Points to a dictEntry structure (8-byte aligned). 
Left intact: \n *   ENTRY_PTR_NORMAL=000\n * - Odd-address key (keys_are_odd=1): Direct pointer to a \n *   key with odd address (e.g., all SDS strings), Left intact: \n *   ENTRY_PTR_IS_ODD_KEY=XX1\n * - Even-address key  (keys_are_odd=0): Direct pointer to a key with \n *   even address. Since 8-byte alignment yields bits = 000, same as dictEntry, \n *   we tag it by setting bit 1 which results with: \n *   ENTRY_PTR_IS_EVEN_KEY=010.\n */\n\n#ifndef __DICT_H\n#define __DICT_H\n\n#include \"mt19937-64.h\"\n#include <limits.h>\n#include <stdint.h>\n#include <stdlib.h>\n\n#define DICT_OK 0\n#define DICT_ERR 1\n\n/* Hash table parameters */\n#define HASHTABLE_MIN_FILL        8      /* Minimal hash table fill 12.5%(100/8) */\n\n/* stored-key vs. key\n * ------------------\n * If dictType.keyFromStoredKey is non-NULL, then dict distinguishes between the\n * lookup key and the actual stored-key object. In this case, \"key\" is used to \n * locate entries, while \"storedKey\" is the actual element stored in the dict.\n * If dictType.keyFromStoredKey is NULL, the lookup \"key\" and the stored-key are the\n * same. This API is primarily relevant for no_value=1 dicts, where the key and value\n * might be packed together. When values are stored separately, this identity \n * distinction does not arise. The marker __stored_key is used to indicate that \n * the pointer refers to the stored-key rather than the lookup key.\n */\n#define __stored_key\n\ntypedef struct dictEntry dictEntry; /* opaque */\ntypedef struct dict dict;\ntypedef dictEntry **dictEntryLink; /* See description of dictFindLink() */\n\n/* Searching for a key in a dict may involve few comparisons.\n * If extracting the looked-up key is expensive (e.g., sdslen(), kvobjGetKey()),  \n * caching can be used to reduce those repetitive computations.  \n *  \n * This struct, passed to the comparison function as temporary caching, if \n * needed by the function across comparison of a given lookup. 
\n * for the looked-up key and resets before each new lookup. */\ntypedef struct dictCmpCache {\n    int useCache;\n    \n    union {\n        uint64_t u64;\n        int64_t i64;\n        int i;\n        size_t sz;\n        void *p;\n    } data[2];\n} dictCmpCache;\n\ntypedef struct dictType {\n    /* Callbacks */\n    uint64_t (*hashFunction)(const void *key);\n    void *(*keyDup)(dict *d, const void *key __stored_key);\n    void *(*valDup)(dict *d, const void *obj);\n    int (*keyCompare)(dictCmpCache *cache, const void *key1, const void *key2);\n    void (*keyDestructor)(dict *d, void *key __stored_key);\n    void (*valDestructor)(dict *d, void *obj);\n    int (*resizeAllowed)(size_t moreMem, double usedRatio);\n    /* Invoked at the start of dict initialization/rehashing (old and new ht are already created) */\n    void (*rehashingStarted)(dict *d);\n    /* Invoked at the end of dict initialization/rehashing of all the entries from old to new ht. Both ht still exists\n     * and are cleaned up after this callback.  */\n    void (*rehashingCompleted)(dict *d);\n    /* Invoked when the size of the dictionary changes.\n     * The `delta` parameter can be positive (size increase) or negative (size decrease). */\n    void (*bucketChanged)(dict *d, long long delta);\n    /* Allow a dict to carry extra caller-defined metadata. The\n     * extra memory is initialized to 0 when a dict is allocated. */\n    size_t (*dictMetadataBytes)(dict *d);\n\n    /* Data */\n    void *userdata;\n\n    /* Flags */\n    /* The 'no_value' flag, if set, indicates that values are not used, i.e. the\n     * dict is a set. When this flag is set, it's not possible to access the\n     * value of a dictEntry and it's also impossible to use dictSetKey(). It \n     * enables an optimization to store a key directly without an allocating \n     * dictEntry in between, if it is the only key in the bucket. 
*/\n    unsigned int no_value:1;\n    /* This flag is required for `no_value` optimization since the optimization\n     * reuses LSB bits as metadata */ \n    unsigned int keys_are_odd:1;\n\n    /* Ensures that the entire hash table is rehashed at once if set. */\n    unsigned int force_full_rehash:1;\n    \n    /* Callback to extract key from stored-key object. When set, the dict can\n     * store keys in one format (e.g., a structure) but look them up using a\n     * different format, extracted from the stored-key. (e.g., sds or integer). \n     * Set to NULL if key and stored-key object are the same. Relevant only for\n     * no_value=1 dicts. */\n    const void *(*keyFromStoredKey)(const void *key __stored_key);\n\n    /* Optional callback called when the dict is destroyed. */\n    void (*onDictRelease)(dict *d);\n} dictType;\n\n#define DICTHT_SIZE(exp) ((exp) == -1 ? 0 : (unsigned long)1<<(exp))\n#define DICTHT_SIZE_MASK(exp) ((exp) == -1 ? 0 : (DICTHT_SIZE(exp))-1)\n\nstruct dict {\n    dictType *type;\n\n    dictEntry **ht_table[2];\n    unsigned long ht_used[2];\n\n    long rehashidx; /* rehashing not in progress if rehashidx == -1 */\n\n    /* Note: pauserehash is a full unsigned so iterator increments\n     * don't perform RMW on the same storage unit as other bitfields. */\n    unsigned pauserehash; /* If >0 rehashing is paused */\n\n    /* Keep small vars at end for optimal (minimal) struct padding */\n    signed char ht_size_exp[2]; /* exponent of size. (size = 1<<exp) */\n    int16_t pauseAutoResize;  /* If >0 automatic resizing is disallowed (<0 indicates coding error) */\n    void *metadata[];\n};\n\n/* If safe is set to 1 this is a safe iterator, that means, you can call\n * dictAdd, dictFind, and other functions against the dictionary even while\n * iterating. Otherwise it is a non safe iterator, and only dictNext()\n * should be called while iterating. 
*/\ntypedef struct dictIterator {\n    dict *d;\n    long index;\n    int table, safe;\n    dictEntry *entry, *nextEntry;\n    /* unsafe iterator fingerprint for misuse detection. */\n    unsigned long long fingerprint;\n} dictIterator;\n\ntypedef struct dictStats {\n    int htidx;\n    unsigned long buckets;\n    unsigned long maxChainLen;\n    unsigned long totalChainLen;\n    unsigned long htSize;\n    unsigned long htUsed;\n    unsigned long *clvector;\n} dictStats;\n\ntypedef void (dictScanFunction)(void *privdata, const dictEntry *de, dictEntry **plink);\ntypedef void *(dictDefragAllocFunction)(void *ptr);\ntypedef struct {\n    dictDefragAllocFunction *defragAlloc; /* Used for entries etc. */\n    dictDefragAllocFunction *defragKey;   /* Defrag-realloc keys (optional) */\n    dictDefragAllocFunction *defragVal;   /* Defrag-realloc values (optional) */\n} dictDefragFunctions;\n\n/* This is the initial size of every hash table */\n#define DICT_HT_INITIAL_EXP      2\n#define DICT_HT_INITIAL_SIZE     (1<<(DICT_HT_INITIAL_EXP))\n\n/* ------------------------------- Macros ------------------------------------*/\n#define dictFreeVal(d, entry) do {                     \\\n    if ((d)->type->valDestructor)                      \\\n        (d)->type->valDestructor((d), dictGetVal(entry)); \\\n   } while(0)\n\n#define dictFreeKey(d, entry) \\\n    if ((d)->type->keyDestructor) \\\n        (d)->type->keyDestructor((d), dictGetKey(entry))\n\n#define dictMetadata(d) (&(d)->metadata)\n#define dictMetadataSize(d) ((d)->type->dictMetadataBytes \\\n                             ? 
(d)->type->dictMetadataBytes(d) : 0)\n\n#define dictBuckets(d) (DICTHT_SIZE((d)->ht_size_exp[0])+DICTHT_SIZE((d)->ht_size_exp[1]))\n#define dictSize(d) ((d)->ht_used[0]+(d)->ht_used[1])\n#define dictIsEmpty(d) ((d)->ht_used[0] == 0 && (d)->ht_used[1] == 0)\n#define dictIsRehashing(d) ((d)->rehashidx != -1)\n#define dictPauseRehashing(d) ((d)->pauserehash++)\n#define dictResumeRehashing(d) ((d)->pauserehash--)\n#define dictIsRehashingPaused(d) ((d)->pauserehash > 0)\n#define dictPauseAutoResize(d) ((d)->pauseAutoResize++)\n#define dictResumeAutoResize(d) ((d)->pauseAutoResize--)\n\n/* If our unsigned long type can store a 64 bit number, use a 64 bit PRNG. */\n#if ULONG_MAX >= 0xffffffffffffffff\n#define randomULong() ((unsigned long) genrand64_int64())\n#else\n#define randomULong() random()\n#endif\n\ntypedef enum {\n    DICT_RESIZE_ENABLE,\n    DICT_RESIZE_AVOID,\n    DICT_RESIZE_FORBID,\n} dictResizeEnable;\n\n/* API */\ndict *dictCreate(dictType *type);\nvoid dictTypeAddMeta(dict **d, dictType *typeWithMeta);\nint dictExpand(dict *d, unsigned long size);\nint dictTryExpand(dict *d, unsigned long size);\nint dictShrink(dict *d, unsigned long size);\nint dictAdd(dict *d, void *key __stored_key, void *val);\ndictEntry *dictAddRaw(dict *d, void *key __stored_key, dictEntry **existing);\ndictEntry *dictAddOrFind(dict *d, void *key __stored_key);\nint dictReplace(dict *d, void *key __stored_key, void *val);\nint dictDelete(dict *d, const void *key);\ndictEntry *dictUnlink(dict *d, const void *key);\nvoid dictFreeUnlinkedEntry(dict *d, dictEntry *he);\ndictEntryLink dictTwoPhaseUnlinkFind(dict *d, const void *key, int *table_index);\nvoid dictTwoPhaseUnlinkFree(dict *d, dictEntryLink llink, int table_index);\nvoid dictRelease(dict *d);\ndictEntry * dictFind(dict *d, const void *key);\ndictEntry *dictFindByHashAndPtr(dict *d, const void *oldptr, const uint64_t hash);\nint dictShrinkIfNeeded(dict *d);\nint dictExpandIfNeeded(dict *d);\nvoid *dictGetKey(const dictEntry 
*de);\nint dictEntryIsKey(const dictEntry *de);\nint dictCompareKeys(dict *d, const void *key1, const void *key2);\nsize_t dictMemUsage(const dict *d);\nsize_t dictEntryMemUsage(int noValueDict);\ndictIterator *dictGetIterator(dict *d);\ndictIterator *dictGetSafeIterator(dict *d);\nvoid dictInitIterator(dictIterator *iter, dict *d);\nvoid dictInitSafeIterator(dictIterator *iter, dict *d);\nvoid dictResetIterator(dictIterator *iter);\ndictEntry *dictNext(dictIterator *iter);\ndictEntry *dictGetNext(const dictEntry *de);\nvoid dictReleaseIterator(dictIterator *iter);\ndictEntry *dictGetRandomKey(dict *d);\ndictEntry *dictGetFairRandomKey(dict *d);\nunsigned int dictGetSomeKeys(dict *d, dictEntry **des, unsigned int count);\nvoid dictGetStats(char *buf, size_t bufsize, dict *d, int full);\nuint64_t dictGenHashFunction(const void *key, size_t len);\nuint64_t dictGenCaseHashFunction(const unsigned char *buf, size_t len);\nvoid dictEmpty(dict *d, void(callback)(dict*));\nvoid dictSetResizeEnabled(dictResizeEnable enable);\nint dictRehash(dict *d, int n);\nint dictRehashMicroseconds(dict *d, uint64_t us);\nvoid dictSetHashFunctionSeed(uint8_t *seed);\nunsigned long dictScan(dict *d, unsigned long v, dictScanFunction *fn, void *privdata);\nunsigned long dictScanDefrag(dict *d, unsigned long v, dictScanFunction *fn, dictDefragFunctions *defragfns, void *privdata);\nuint64_t dictGetHash(dict *d, const void *key);\nvoid dictRehashingInfo(dict *d, unsigned long long *from_size, unsigned long long *to_size);\n\nsize_t dictGetStatsMsg(char *buf, size_t bufsize, dictStats *stats, int full);\ndictStats* dictGetStatsHt(dict *d, int htidx, int full);\nvoid dictCombineStats(dictStats *from, dictStats *into);\nvoid dictFreeStats(dictStats *stats);\n\ndictEntryLink dictFindLink(dict *d, const void *key, dictEntryLink *bucket);\nvoid dictSetKeyAtLink(dict *d, void *key __stored_key, dictEntryLink *link, int newItem);\n\n/* API relevant only when dict is used as a hash-map (no_value=0) 
*/ \nvoid dictSetKey(dict *d, dictEntry* de, void *key __stored_key);\nvoid dictSetVal(dict *d, dictEntry *de, void *val);\nvoid *dictGetVal(const dictEntry *de);\nvoid dictSetDoubleVal(dictEntry *de, double val);\ndouble dictGetDoubleVal(const dictEntry *de);\ndouble *dictGetDoubleValPtr(dictEntry *de);\nvoid *dictFetchValue(dict *d, const void *key);\nvoid dictSetUnsignedIntegerVal(dictEntry *de, uint64_t val);\nuint64_t dictIncrUnsignedIntegerVal(dictEntry *de, uint64_t val);\nuint64_t dictGetUnsignedIntegerVal(const dictEntry *de);\n\n#define dictForEach(d, ty, m, ...) do { \\\n    dictIterator di; \\\n    dictEntry *de; \\\n    dictInitIterator(&di, d); \\\n    while ((de = dictNext(&di)) != NULL) { \\\n        ty *m = dictGetVal(de); \\\n        do { \\\n            __VA_ARGS__ \\\n        } while(0); \\\n    } \\\n    dictResetIterator(&di); \\\n} while(0);\n\n#ifdef REDIS_TEST\nint dictTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* __DICT_H */\n"
  },
  {
    "path": "src/ebuckets.c",
    "content": "/*\n * Copyright Redis Ltd. 2024 - present\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdio.h>\n#include <stddef.h>\n#include <stdlib.h>\n#include <inttypes.h>\n#include <string.h>\n#include \"zmalloc.h\"\n#include \"redisassert.h\"\n#include \"config.h\"\n#include \"ebuckets.h\"\n\n#define UNUSED(x) (void)(x)\n\n\n/*** DEBUGGING & VALIDATION\n *\n * To validate DS on add(), remove() and ebExpire()\n * #define EB_VALIDATE_DEBUG 1\n */\n\n#if (REDIS_TEST || EB_VALIDATE_DEBUG) && !defined(EB_TEST_BENCHMARK)\n#define EB_VALIDATE_STRUCTURE(eb, type) ebValidate(eb, type)\n#else\n#define EB_VALIDATE_STRUCTURE(eb, type) // Do nothing\n#endif\n\n/*** BENCHMARK\n *\n * To benchmark ebuckets creation and active-expire with 10 million items, apply\n * the following command such that `EB_TEST_BENCHMARK` gets the desired distribution\n * of expiration times:\n *\n *   # 0=1msec, 1=1sec, 2=1min, 3=1hour, 4=1day, 5=1week, 6=1month\n *   make REDIS_CFLAGS='-DREDIS_TEST -DEB_TEST_BENCHMARK=3' && ./src/redis-server test ebuckets\n */\n\n/*\n *  Keep just enough bytes of bucket-key, taking into consideration the configured\n *  EB_BUCKET_KEY_PRECISION, and ignoring LSB bits that have no impact.\n *\n * The main motivation is that since the bucket-key size determines the maximum\n * depth of the rax tree, we can prune the tree to be more shallow and thus\n * reduce the maintenance and traversal of each node in the B-tree.\n */\n#if EB_BUCKET_KEY_PRECISION < 8\n#define EB_KEY_SIZE 6\n#elif EB_BUCKET_KEY_PRECISION >= 8 && EB_BUCKET_KEY_PRECISION < 16\n#define EB_KEY_SIZE 5\n#else\n#define EB_KEY_SIZE 4\n#endif\n\n/*\n * EB_SEG_MAX_ITEMS - Maximum number of items in rax-segment before trying to\n * split. 
To simplify, it has the same value as EB_LIST_MAX_ITEMS.\n */\n#define EB_SEG_MAX_ITEMS 16\n#define EB_LIST_MAX_ITEMS EB_SEG_MAX_ITEMS\n\n/* From expiration time to bucket-key */\n#define EB_BUCKET_KEY(exptime) ((exptime) >> EB_BUCKET_KEY_PRECISION)\n\n /* From bucket-key to expiration time */\n#define EB_BUCKET_EXP_TIME(bucketKey) ((uint64_t)(bucketKey) << EB_BUCKET_KEY_PRECISION)\n\n/*** structs ***/\n\ntypedef struct CommonSegHdr {\n    eItem head;\n} CommonSegHdr;\n\n\n/* FirstSegHdr - Header of first segment of a bucket.\n *\n * A bucket in rax tree with a single segment will be as follows:\n *\n *            +-------------+     +------------+             +------------+\n *            | FirstSegHdr |     | eItem(1)   |             | eItem(N)   |\n * [rax] -->  | eItem head  | --> | void *next | --> ... --> | void *next | --+\n *            +-------------+     +------------+             +------------+   |\n *                    ^                                                       |\n *                    |                                                       |\n *                    +-------------------------------------------------------+\n *\n * Note that the cyclic references assist to update locally the segment(s) without\n * the need to \"heavy\" traversal of the rax tree for each change.\n */\ntypedef struct FirstSegHdr {\n    eItem head;          /* first item in the list */\n    uint32_t totalItems; /* total items in the bucket, across chained segments */\n    uint32_t numSegs;    /* number of segments in the bucket */\n} FirstSegHdr;\n\n/* NextSegHdr - Header of next segment in an extended-segment (bucket)\n *\n * Here is the layout of an extended-segment, after adding another item to a single,\n * full (EB_SEG_MAX_ITEMS=16), segment (all items must have same bucket-key value):\n *\n *            +-------------+     +------------+      +------------+     +------------+             +------------+\n *            | FirstSegHdr |     | eItem(17)  |     
 | NextSegHdr |     | eItem(1)   |             | eItem(16)  |\n * [rax] -->  | eItem head  | --> | void *next | -->  | eItem head | --> | void *next | --> ... --> | void *next | --+\n *            +-------------+     +------------+      +------------+     +------------+             +------------+   |\n *                    ^                                  |    ^                                                      |\n *                    |                                  |    |                                                      |\n *                    +------------- firstSeg / prevSeg -+    +------------------------------------------------------+\n */\ntypedef struct NextSegHdr {\n    eItem head;\n    CommonSegHdr *prevSeg; /* pointer to previous segment */\n    FirstSegHdr *firstSeg; /* pointer to first segment of the bucket */\n} NextSegHdr;\n\n/* Selective copy of ifndef from server.h instead of including it */\n#ifndef static_assert\n#define static_assert(expr, lit) extern char __static_assert_failure[(expr) ? 1:-1]\n#endif\n/* Verify that \"head\" field is aligned in FirstSegHdr, NextSegHdr and CommonSegHdr */\nstatic_assert(offsetof(FirstSegHdr, head) == 0, \"FirstSegHdr head is not aligned\");\nstatic_assert(offsetof(NextSegHdr, head) == 0, \"NextSegHdr head is not aligned\");\nstatic_assert(offsetof(CommonSegHdr, head) == 0, \"CommonSegHdr head is not aligned\");\n/* Verify attached metadata to rax is aligned */\nstatic_assert(offsetof(rax, metadata) % sizeof(void*) == 0, \"metadata field is not aligned in rax\");\n\n/* EBucketNew - Indicates to the caller that a new bucket should be created\n * following the addition of another item to a bucket (either single-segment or\n * extended-segment). 
 */\ntypedef struct EBucketNew {\n    FirstSegHdr segment;\n    ExpireMeta *mLast;  /* last item in the chain */\n    uint64_t ebKey;\n} EBucketNew;\n\nstatic void ebNewBucket(EbucketsType *type, EBucketNew *newBucket, eItem item, uint64_t key);\nstatic int ebBucketPrint(uint64_t bucketKey, EbucketsType *type, FirstSegHdr *firstSeg);\nstatic uint64_t *ebRaxNumItems(rax *rax);\n\n/*** Static functions ***/\n\n/* Extract pointer to rax from ebuckets handler */\nstatic inline rax *ebGetRaxPtr(ebuckets eb) { return (rax *)eb; }\n\n/* The lsb in ebuckets pointer determines whether the pointer points to rax or list. */\nstatic inline int ebIsList(ebuckets eb) {\n    return (((uintptr_t)(void *)eb & 0x1) == 1);\n}\n/* Set lsb in ebuckets pointer to 1 to mark it as list. Unless empty (NULL) */\nstatic inline ebuckets ebMarkAsList(eItem item) {\n    if (item == NULL) return item;\n\n    /* whether or not 'itemsAddrAreOdd', we end up with the lsb set to 1 */\n    return (void *) ((uintptr_t) item | 1);\n}\n\n/* Extract pointer to the list from ebuckets handler */\nstatic inline eItem ebGetListPtr(EbucketsType *type, ebuckets eb) {\n    /* if 'itemsAddrAreOdd' then no need to reset lsb bit */\n    if (type->itemsAddrAreOdd)\n        return eb;\n    else\n        return (void*)((uintptr_t)(eb) & ~1);\n}\n\n/* Converts the logical starting time value of a given bucket-key to its equivalent\n * \"physical\" value in the context of an rax tree (rax-key). Although their values\n * are the same, their memory layouts differ. The raxKey layout orders bytes in\n * memory from the MSB to the LSB, and the length of the key is EB_KEY_SIZE. 
 */\nstatic inline void bucketKey2RaxKey(uint64_t bucketKey, unsigned char *raxKey) {\n    for (int i = EB_KEY_SIZE-1; i >= 0; --i) {\n        raxKey[i] = (unsigned char) (bucketKey & 0xFF);\n        bucketKey >>= 8;\n    }\n}\n\n/* Converts the \"physical\" value of rax-key to its logical counterpart, representing\n * the starting time value of a bucket. The values are equivalent, but their memory\n * layouts differ. The raxKey is assumed to be ordered from the MSB to the LSB with\n * a length of EB_KEY_SIZE. The resulting bucket-key is the logical representation\n * with respect to ebuckets. */\nstatic inline uint64_t raxKey2BucketKey(unsigned char *raxKey) {\n    uint64_t bucketKey = 0;\n    for (int i = 0; i < EB_KEY_SIZE ; ++i)\n        bucketKey = (bucketKey<<8) + raxKey[i];\n    return bucketKey;\n}\n\n/* Add another item to a bucket that consists of extended-segments. In this\n * scenario, all items in the bucket share the same bucket-key value and the first\n * segment is already full (if not, the function ebSegAddAvail() would have been\n * called). This requires the creation of another segment. The layout of the\n * segments before and after the addition of the new item is as follows:\n *\n *  Before:                               [segHdr] -> {item1,..,item16} -> [..]\n *  After:   [segHdr] -> {newItem} -> [nextSegHdr] -> {item1,..,item16} -> [..]\n *\n *  Care is taken to keep `segHdr` the same instance after the change.\n *  This is important because the rax tree is pointing to it. 
*/\nstatic int ebSegAddExtended(EbucketsType *type, FirstSegHdr *firstSegHdr, eItem newItem) {\n    /* Allocate nextSegHdr and let it take the items of first segment header */\n    NextSegHdr *nextSegHdr = zmalloc(sizeof(NextSegHdr));\n    nextSegHdr->head = firstSegHdr->head;\n    /* firstSegHdr will stay the first and new nextSegHdr will follow it */\n    nextSegHdr->prevSeg = (CommonSegHdr *) firstSegHdr;\n    nextSegHdr->firstSeg = firstSegHdr;\n\n    ExpireMeta *mIter = type->getExpireMeta(nextSegHdr->head);\n    mIter->firstItemBucket = 0;\n    for (int i = 0 ; i < EB_SEG_MAX_ITEMS-1 ; i++)\n        mIter = type->getExpireMeta(mIter->next);\n\n    if (mIter->lastItemBucket) {\n        mIter->next = nextSegHdr;\n    } else {\n        /* Update next-next-segment to point back to next-segment */\n        NextSegHdr *nextNextSegHdr = mIter->next;\n        nextNextSegHdr->prevSeg = (CommonSegHdr *) nextSegHdr;\n    }\n\n    firstSegHdr->numSegs += 1;\n    firstSegHdr->totalItems += 1;\n    firstSegHdr->head = newItem;\n\n    ExpireMeta *mNewItem = type->getExpireMeta(newItem);\n    mNewItem->numItems = 1;\n    mNewItem->next = nextSegHdr;\n    mNewItem->firstItemBucket = 1;\n    mNewItem->lastInSegment = 1;\n\n    return 0;\n}\n\n/* Add another eItem to a segment with available space. 
Keep items sorted in ascending order */\nstatic int ebSegAddAvail(EbucketsType *type, FirstSegHdr *seg, eItem item) {\n    eItem head = seg->head;\n    ExpireMeta *nextMeta;\n    ExpireMeta *mHead = type->getExpireMeta(head);\n    ExpireMeta *mItem = type->getExpireMeta(item);\n    uint64_t itemExpireTime = ebGetMetaExpTime(mItem);\n\n    seg->totalItems++;\n\n    assert(mHead->numItems < EB_SEG_MAX_ITEMS);\n\n    /* if new item expiry time is smaller than the head then add it before the head */\n    if (ebGetMetaExpTime(mHead) > itemExpireTime) {\n        /* Insert item as the new head */\n        mItem->next = head;\n        mItem->firstItemBucket = mHead->firstItemBucket;\n        mItem->numItems = mHead->numItems + 1;\n        mHead->firstItemBucket = 0;\n        mHead->numItems = 0;\n        seg->head = item;\n        return 0;\n    }\n\n    /* Insert item in the middle of segment */\n    ExpireMeta *mIter = mHead;\n    for (int i = 1 ; i < mHead->numItems ; i++) {\n        nextMeta = type->getExpireMeta(mIter->next);\n        /* Insert item in the middle */\n        if (ebGetMetaExpTime(nextMeta) > itemExpireTime) {\n            mHead->numItems = mHead->numItems + 1;\n            mItem->next = mIter->next;\n            mIter->next = item;\n            return 0;\n        }\n        mIter = nextMeta;\n    }\n\n    /* Insert item as the last item of the segment. Inherit flags from previous last item */\n    mHead->numItems = mHead->numItems + 1;\n    mItem->next = mIter->next;\n    mItem->lastInSegment = mIter->lastInSegment;\n    mItem->lastItemBucket = mIter->lastItemBucket;\n    mIter->lastInSegment = 0;\n    mIter->lastItemBucket = 0;\n    mIter->next = item;\n    return 0;\n}\n\n/* Return 1 if split segment to two succeeded. Else, return 0. 
The only reason\n * the split can fail is that all the items in the segment have the same bucket-key */\nstatic int ebTrySegSplit(EbucketsType *type, FirstSegHdr *seg, EBucketNew *newBucket) {\n    int minMidDist=(EB_SEG_MAX_ITEMS / 2), bestMiddleIndex = -1;\n    uint64_t splitKey = -1;\n    eItem firstItemSecondPart;\n    ExpireMeta *mLastItemFirstPart, *mFirstItemSecondPart;\n\n    eItem head = seg->head;\n    ExpireMeta *mHead = type->getExpireMeta(head);\n    ExpireMeta *mNext, *mIter = mHead;\n\n    /* Search for best middle index to split the segment into two segments. As the\n     * items are arranged in ascending order, it cannot split between two items that\n     * have the same expiration time and therefore the split won't necessarily be\n     * balanced (Or won't be possible to split at all if all have the same exp-time!)\n     */\n    for (int i = 0 ; i < EB_SEG_MAX_ITEMS-1 ; i++) {\n        mNext = type->getExpireMeta(mIter->next);\n        if (EB_BUCKET_KEY(ebGetMetaExpTime(mNext)) > EB_BUCKET_KEY(\n                                                         ebGetMetaExpTime(mIter))) {\n            /* If found better middle index before reaching halfway, save it */\n            if (i < (EB_SEG_MAX_ITEMS/2)) {\n                splitKey = EB_BUCKET_KEY(ebGetMetaExpTime(mNext));\n                bestMiddleIndex = i;\n                mLastItemFirstPart = mIter;\n                mFirstItemSecondPart = mNext;\n                firstItemSecondPart = mIter->next;\n                minMidDist = (EB_SEG_MAX_ITEMS / 2) - bestMiddleIndex;\n            } else {\n                /* after crossing the middle need only to look for the first diff */\n                if (minMidDist > (i + 1 - EB_SEG_MAX_ITEMS / 2)) {\n                    splitKey = EB_BUCKET_KEY(ebGetMetaExpTime(mNext));\n                    bestMiddleIndex = i;\n                    mLastItemFirstPart = mIter;\n                    mFirstItemSecondPart = mNext;\n                    firstItemSecondPart = 
mIter->next;\n                    minMidDist = i + 1 - EB_SEG_MAX_ITEMS / 2;\n                }\n            }\n        }\n        mIter = mNext;\n    }\n\n    /* If cannot find index to split because all with same EB_BUCKET_KEY(), then\n     * segment should be treated as extended segment */\n    if (bestMiddleIndex == -1)\n        return 0;\n\n    /* New bucket */\n    newBucket->segment.head = firstItemSecondPart;\n    newBucket->segment.numSegs = 1;\n    newBucket->segment.totalItems = EB_SEG_MAX_ITEMS - bestMiddleIndex - 1;\n    mFirstItemSecondPart->numItems = EB_SEG_MAX_ITEMS - bestMiddleIndex - 1;\n    newBucket->mLast = mIter;\n    newBucket->ebKey = splitKey;\n    mIter->lastInSegment = 1;\n    mIter->lastItemBucket = 1;\n    mIter->next = &newBucket->segment; /* to be updated by caller */\n    mFirstItemSecondPart->firstItemBucket = 1;\n\n    /* update existing bucket */\n    seg->totalItems = bestMiddleIndex + 1;\n    mHead->numItems = bestMiddleIndex + 1;\n    mLastItemFirstPart->lastInSegment = 1;\n    mLastItemFirstPart->lastItemBucket = 1;\n    mLastItemFirstPart->next = seg;\n    return 1;\n}\n\n/* Return 1 if managed to expire the entire segment. Returns 0 otherwise. */\nint ebSingleSegExpire(FirstSegHdr *firstSegHdr,\n                             EbucketsType *type,\n                             ExpireInfo *info,\n                             eItem *updateList)\n{\n    uint64_t itemExpTime;\n    eItem iter = firstSegHdr->head;\n    ExpireMeta *mIter = type->getExpireMeta(iter);\n    uint32_t i=0, numItemsInSeg = mIter->numItems;\n\n    while (info->itemsExpired < info->maxToExpire) {\n        itemExpTime = ebGetMetaExpTime(mIter);\n\n        /* Items are arranged in ascending expire-time order in a segment. Stops\n         * active expiration when an item's expire time is greater than `now`. 
*/\n        if (itemExpTime > info->now)\n            break;\n\n        /* keep aside next before deletion of iter */\n        eItem next = mIter->next;\n        mIter->trash = 1;\n        ExpireAction act = info->onExpireItem(iter, info->ctx);\n\n        /* if (act == ACT_REMOVE_EXP_ITEM)\n         *  then don't touch the item. Assume it got deleted */\n\n        /* If indicated to stop then break (cb didn't delete the item) */\n        if (act == ACT_STOP_ACTIVE_EXP) {\n            mIter->trash = 0;\n            break;\n        }\n\n        /* If indicated to re-insert the item, then chain it to updateList.\n         * it will be ebAdd() back to ebuckets at the end of ebExpire() */\n        if (act == ACT_UPDATE_EXP_ITEM) {\n            mIter->next = *updateList;\n            *updateList = iter;\n        }\n\n        ++info->itemsExpired;\n\n        /* if deleted all items in segment, delete header and return */\n        if (++i == numItemsInSeg) {\n            zfree(firstSegHdr);\n            return 1;\n        }\n\n        /* More items in the segment. Set iter to next item and update mIter */\n        iter = next;\n        mIter = type->getExpireMeta(iter);\n    }\n\n    /* Update the single-segment with remaining items */\n    mIter->numItems = numItemsInSeg - i;\n    mIter->firstItemBucket = 1;\n    firstSegHdr->head = iter;\n    firstSegHdr->totalItems -= i;\n\n    /* Update nextExpireTime */\n    info->nextExpireTime = ebGetMetaExpTime(mIter);\n\n    return 0;\n}\n\n/* return 1 if managed to expire the entire segment. Returns 0 otherwise. 
*/\nstatic int ebSegExpire(FirstSegHdr *firstSegHdr,\n                       EbucketsType *type,\n                       ExpireInfo *info,\n                       eItem *updateList)\n{\n    eItem iter = firstSegHdr->head;\n    uint32_t numSegs = firstSegHdr->numSegs;\n    void *nextSegHdr = firstSegHdr;\n\n    if (numSegs == 1)\n        return ebSingleSegExpire(firstSegHdr, type, info, updateList);\n\n    /*\n     * In an extended-segment, there's no need to verify the expiration time of\n     * each item. This is because all items in an extended-segment share the same\n     * bucket-key. Therefore, we can remove all items without checking their\n     * individual expiration times. This is different from a single-segment\n     * scenario, where items can have different bucket-keys.\n     */\n    for (uint32_t seg=0 ; seg < numSegs ; seg++) {\n        uint32_t i;\n        ExpireMeta *mIter = type->getExpireMeta(iter);\n        uint32_t numItemsInSeg = mIter->numItems;\n\n        for (i = 0; (i < numItemsInSeg) && (info->itemsExpired < info->maxToExpire) ; ++i) {\n            mIter = type->getExpireMeta(iter);\n\n            /* keep aside `next` before removing `iter` by onExpireItem */\n            eItem next = mIter->next;\n            mIter->trash = 1;\n            ExpireAction act = info->onExpireItem(iter, info->ctx);\n\n            /* if (act == ACT_REMOVE_EXP_ITEM)\n             *  then don't touch the item. 
Assume it got deleted */\n\n            /* If indicated to stop then break (callback didn't delete the item) */\n            if (act == ACT_STOP_ACTIVE_EXP) {\n                mIter->trash = 0;\n                break;\n            }\n\n            /* If indicated to re-insert the item, then chain it to updateList.\n             * it will be ebAdd() back to ebuckets at the end of ebExpire() */\n            if (act == ACT_UPDATE_EXP_ITEM) {\n                mIter->next = *updateList;\n                *updateList = iter;\n            }\n\n            /* Item was REMOVED/UPDATED. Advance to `next` item */\n            iter = next;\n            ++info->itemsExpired;\n            firstSegHdr->totalItems -= 1;\n        }\n\n        /* if deleted all items in segment */\n        if (i == numItemsInSeg) {\n            /* If not last segment in bucket, then delete segment header */\n            if (seg + 1 < numSegs) {\n                nextSegHdr = iter;\n                iter = ((NextSegHdr *) nextSegHdr)->head;\n                zfree(nextSegHdr);\n                firstSegHdr->numSegs -= 1;\n                firstSegHdr->head = iter;\n                mIter = type->getExpireMeta(iter);\n                mIter->firstItemBucket = 1;\n            }\n        } else {\n            /* We reached here because the for-loop above broke due to\n             * ACT_STOP_ACTIVE_EXP or reaching maxToExpire */\n            firstSegHdr->head = iter;\n            mIter = type->getExpireMeta(iter);\n            mIter->numItems = numItemsInSeg - i;\n            mIter->firstItemBucket = 1;\n            info->nextExpireTime = ebGetMetaExpTime(mIter);\n\n            /* If one or more segments were deleted, update prevSeg of the next\n             * segment to point to firstSegHdr. If it is the last segment, then\n             * its last item needs to point to firstSegHdr */\n            if (seg>0) {\n                int numItems = mIter->numItems;\n                for (int i = 0; i < numItems - 1; i++)\n                    mIter = 
type->getExpireMeta(mIter->next);\n\n                if (mIter->lastItemBucket) {\n                    mIter->next = firstSegHdr;\n                } else {\n                    /* Update next-segment to point back to firstSegHdr */\n                    NextSegHdr *nsh = mIter->next;\n                    nsh->prevSeg = (CommonSegHdr *) firstSegHdr;\n                }\n            }\n\n            return 0;\n        }\n    }\n\n    /* deleted last segment in bucket */\n    zfree(firstSegHdr);\n    return 1;\n}\n\n/*** Static functions of list ***/\n\n/* Convert a list to rax.\n *\n * To create a new rax, the function first converts the list to a segment by\n * allocating a segment header and attaching to it the already existing list.\n * Then, it adds the new segment to the rax as the first bucket. */\nstatic rax *ebConvertListToRax(eItem listHead, EbucketsType *type) {\n    FirstSegHdr *firstSegHdr = zmalloc(sizeof(FirstSegHdr));\n    firstSegHdr->head = listHead;\n    firstSegHdr->totalItems = EB_LIST_MAX_ITEMS;\n    firstSegHdr->numSegs = 1;\n\n    /* update last item to point to the segment header */\n    ExpireMeta *metaItem = type->getExpireMeta(listHead);\n    uint64_t bucketKey = EB_BUCKET_KEY(ebGetMetaExpTime(metaItem));\n    while (metaItem->lastItemBucket == 0)\n        metaItem = type->getExpireMeta(metaItem->next);\n    metaItem->next = firstSegHdr;\n\n    /* Use min expire-time for the first segment in rax */\n    unsigned char raxKey[EB_KEY_SIZE];\n    bucketKey2RaxKey(bucketKey, raxKey);\n    rax *rax = raxNewWithMetadata(sizeof(uint64_t), NULL);\n    *ebRaxNumItems(rax) = EB_LIST_MAX_ITEMS;\n    raxInsert(rax, raxKey, EB_KEY_SIZE, firstSegHdr, NULL);\n    return rax;\n}\n\n/**\n * Adds another 'item' to the ebucket of type list, keeping the list sorted by\n * ascending expiration time.\n *\n * @param eb - Pointer to the ebuckets handler of type list. 
Gets updated if the item is\n * added as the new head.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param item - The eItem to be added to the list.\n *\n * @return 1 if the maximum list length is reached; otherwise, return 0.\n */\nstatic int ebAddToList(ebuckets *eb, EbucketsType *type, eItem item) {\n    ExpireMeta *metaItem = type->getExpireMeta(item);\n\n    /* if ebucket-list is empty (NULL), then create a new list by marking 'item'\n     * as the head and tail of the list */\n    if (unlikely(ebIsEmpty(*eb))) {\n        metaItem->next = NULL;\n        metaItem->numItems = 1;\n        metaItem->lastInSegment = 1;\n        metaItem->firstItemBucket = 1;\n        metaItem->lastItemBucket = 1;\n        *eb = ebMarkAsList(item);\n        return 0;\n    }\n\n    eItem head = ebGetListPtr(type, *eb);\n    ExpireMeta *metaHead = type->getExpireMeta(head);\n\n    /* If reached max items in list, then return 1 */\n    if (metaHead->numItems == EB_LIST_MAX_ITEMS)\n        return 1;\n\n    /* if expiry time of 'item' is smaller than the head then add it as the new head */\n    if (ebGetMetaExpTime(metaHead) > ebGetMetaExpTime(metaItem)) {\n        /* Insert item as the new head */\n        metaItem->next = head;\n        metaItem->firstItemBucket = 1;\n        metaItem->numItems = metaHead->numItems + 1;\n        metaHead->firstItemBucket = 0;\n        metaHead->numItems = 0;\n        *eb = ebMarkAsList(item);\n        return 0;\n    }\n\n\n    /* Try insert item in the middle of list */\n    ExpireMeta *mIter = metaHead;\n    for (int i = 1 ; i < metaHead->numItems ; i++) {\n        ExpireMeta *nextMeta = type->getExpireMeta(mIter->next);\n        /* Insert item in the middle */\n        if (ebGetMetaExpTime(nextMeta) > ebGetMetaExpTime(metaItem)) {\n            metaHead->numItems += 1;\n            metaItem->next = mIter->next;\n            mIter->next = item;\n            return 0;\n        }\n        mIter = nextMeta;\n   
 }\n\n    /* Insert item as the last item of the list. */\n    metaHead->numItems += 1;\n    metaItem->next = NULL;\n    metaItem->lastInSegment = 1;\n    metaItem->lastItemBucket = 1;\n    /* Update obsolete last item */\n    mIter->lastInSegment = 0;\n    mIter->lastItemBucket = 0;\n    mIter->next = item;\n    return 0;\n}\n\n/* return 1 if removed from list. Otherwise, return 0 */\nstatic int ebRemoveFromList(ebuckets *eb, EbucketsType *type, eItem item) {\n    if (ebIsEmpty(*eb))\n        return 0; /* not removed */\n\n    ExpireMeta *metaItem = type->getExpireMeta(item);\n    eItem head = ebGetListPtr(type, *eb);\n\n    /* if item is the head of the list */\n    if (head == item) {\n        eItem newHead = metaItem->next;\n        if (newHead != NULL) {\n            ExpireMeta *mNewHead = type->getExpireMeta(newHead);\n            mNewHead->numItems = metaItem->numItems - 1;\n            mNewHead->firstItemBucket = 1;\n            *eb = ebMarkAsList(newHead);\n            return 1; /* removed */\n        }\n        *eb = NULL;\n        return 1; /* removed */\n    }\n\n    /* item is not the head of the list */\n    ExpireMeta *metaHead = type->getExpireMeta(head);\n\n    eItem iter = head;\n    while (iter != NULL) {\n        ExpireMeta *metaIter = type->getExpireMeta(iter);\n        if (metaIter->next == item) {\n            metaIter->next = metaItem->next;\n            /* If deleted item is the last in the list, then update new last item */\n            if (metaItem->next == NULL) {\n                metaIter->lastInSegment = 1;\n                metaIter->lastItemBucket = 1;\n            }\n            metaHead->numItems -= 1;\n            return 1; /* removed */\n        }\n        iter = metaIter->next;\n    }\n    return 0; /* not removed */\n}\n\n/* return 1 if none left. 
Otherwise return 0 */\nstatic int ebListExpire(ebuckets *eb,\n                        EbucketsType *type,\n                        ExpireInfo *info,\n                        eItem *updateList)\n{\n    uint32_t expired = 0;\n    eItem item = ebGetListPtr(type, *eb);\n    ExpireMeta *metaItem = type->getExpireMeta(item);\n    uint32_t numItems = metaItem->numItems; /* first item must exist */\n\n    while (item != NULL) {\n        metaItem = type->getExpireMeta(item);\n        uint64_t itemExpTime = ebGetMetaExpTime(metaItem);\n\n        /* Items are arranged in ascending expire-time order in a list. Stops list\n         * active expiration when an item's expiration time is greater than `now`. */\n        if (itemExpTime > info->now)\n            break;\n\n        if (info->itemsExpired == info->maxToExpire)\n            break;\n\n        /* keep aside `next` before removing `iter` by onExpireItem */\n        eItem next = metaItem->next;\n        metaItem->trash = 1;\n        ExpireAction act = info->onExpireItem(item, info->ctx);\n\n        /* if (act == ACT_REMOVE_EXP_ITEM)\n         *  then don't touch the item. 
Assume it got deleted */\n\n        /* If indicated to stop then break (cb didn't delete the item) */\n        if (act == ACT_STOP_ACTIVE_EXP) {\n            metaItem->trash = 0;\n            break;\n        }\n\n        /* If indicated to re-insert the item, then chain it to updateList.\n         * it will be ebAdd() back to ebuckets at the end of ebExpire() */\n        if (act == ACT_UPDATE_EXP_ITEM) {\n            metaItem->next = *updateList;\n            *updateList = item;\n        }\n\n        ++expired;\n        ++(info->itemsExpired);\n        item = next;\n    }\n\n    if (expired == numItems) {\n        *eb = NULL;\n        info->nextExpireTime = EB_EXPIRE_TIME_INVALID;\n        return 1;\n    }\n\n    metaItem->numItems = numItems - expired;\n    metaItem->firstItemBucket = 1;\n    info->nextExpireTime = ebGetMetaExpTime(metaItem);\n    *eb = ebMarkAsList(item);\n    return 0;\n}\n\n/* Validate the general structure of the list */\nstatic void ebValidateList(eItem head, EbucketsType *type) {\n    if (head == NULL)\n        return;\n\n    ExpireMeta *mHead = type->getExpireMeta(head);\n    eItem iter = head;\n    ExpireMeta *mIter = type->getExpireMeta(iter), *mIterPrev = NULL;\n\n    for (int i = 0; i < mHead->numItems ; ++i) {\n        mIter = type->getExpireMeta(iter);\n        assert(mIter->trash == 0);\n        if (i == 0) {\n            /* first item */\n            assert(mIter->numItems > 0 && mIter->numItems <= EB_LIST_MAX_ITEMS);\n            assert(mIter->firstItemBucket == 1);\n        } else  {\n            /* Verify that expire time of previous item is smaller or equal */\n            assert(ebGetMetaExpTime(mIterPrev) <= ebGetMetaExpTime(mIter));\n            assert(mIter->numItems == 0);\n            assert(mIter->firstItemBucket == 0);\n        }\n\n        if (i == (mHead->numItems - 1)) {\n            /* last item */\n            assert(mIter->lastInSegment == 1);\n            assert(mIter->lastItemBucket == 1);\n            
assert(mIter->next == NULL);\n        } else {\n            assert(mIter->lastInSegment == 0);\n            assert(mIter->lastItemBucket == 0);\n            assert(mIter->next != NULL);\n            mIterPrev = mIter;\n            iter = mIter->next;\n        }\n    }\n}\n\n/*** Static functions of ebuckets / rax ***/\n\nstatic uint64_t *ebRaxNumItems(rax *rax) {\n    return (uint64_t*) rax->metadata;\n}\n\n/* Allocate a single segment with a single item */\nstatic void ebNewBucket(EbucketsType *type, EBucketNew *newBucket, eItem item, uint64_t key) {\n    ExpireMeta *mItem = type->getExpireMeta(item);\n\n    newBucket->segment.head = item;\n    newBucket->segment.totalItems = 1;\n    newBucket->segment.numSegs = 1;\n    newBucket->mLast = type->getExpireMeta(item);\n    newBucket->ebKey = key;\n    mItem->numItems = 1;\n    mItem->firstItemBucket = 1;\n    mItem->lastInSegment = 1;\n    mItem->lastItemBucket = 1;\n    mItem->next = &newBucket->segment;\n}\n\n/*\n * ebBucketPrint - Prints all the segments in the bucket and time expiration\n * of each item in the following fashion:\n *\n *      Bucket(tot=0008,sgs=0001) :    [11, 21, 26, 27, 29, 49, 59, 62]\n *      Bucket(tot=0007,sgs=0001) :    [67, 86, 90, 92, 115, 123, 126]\n *      Bucket(tot=0005,sgs=0001) :    [130, 135, 135, 136, 140]\n *      Bucket(tot=0009,sgs=0002) :    [182]\n *                                     [162, 163, 167, 168, 172, 177, 183, 186]\n *      Bucket(tot=0001,sgs=0001) :    [193]\n */\nstatic int ebBucketPrint(uint64_t bucketKey, EbucketsType *type, FirstSegHdr *firstSeg) {\n    eItem iter;\n    ExpireMeta *mIter, *mHead;\n    static int PRINT_EXPIRE_META_FLAGS=0;\n\n    iter = firstSeg->head;\n    mHead = type->getExpireMeta(iter);\n\n    printf(\"Bucket(key=%06\" PRIu64 \",tot=%04d,sgs=%04d) :\", bucketKey, firstSeg->totalItems, firstSeg->numSegs);\n    while (1) {\n        mIter = type->getExpireMeta(iter);  /* not really needed. 
Just to hush the compiler */\n        printf(\"    [\");\n        for (int i = 0; i < mHead->numItems ; ++i) {\n            mIter = type->getExpireMeta(iter);\n            uint64_t expireTime = ebGetMetaExpTime(mIter);\n\n            if (i == 0 && PRINT_EXPIRE_META_FLAGS)\n                printf(\"%\" PRIu64 \"<n=%d,f=%d,ls=%d,lb=%d>, \",\n                       expireTime, mIter->numItems, mIter->firstItemBucket,\n                       mIter->lastInSegment, mIter->lastItemBucket);\n            else if (i == (mHead->numItems - 1) && PRINT_EXPIRE_META_FLAGS) {\n                printf(\"%\" PRIu64 \"<n=%d,f=%d,ls=%d,lb=%d>\",\n                       expireTime, mIter->numItems, mIter->firstItemBucket,\n                       mIter->lastInSegment, mIter->lastItemBucket);\n            } else\n                printf(\"%\" PRIu64 \"%s\", expireTime, (i == mHead->numItems - 1) ? \"\" : \", \");\n\n            iter = mIter->next;\n        }\n\n        if (mIter->lastItemBucket) {\n            printf(\"]\\n\");\n            break;\n        }\n        printf(\"]\\n                           \");\n        iter = ((NextSegHdr *) mIter->next)->head;\n        mHead = type->getExpireMeta(iter);\n\n    }\n    return 0;\n}\n\n/* Add another eItem to bucket. 
If needed return 'newBucket' for insertion in rax tree.\n *\n * 1) If the bucket is based on a single, not full segment, then add the item to the segment.\n * 2) If a single, full segment, then try to split it and then add the item.\n * 3) If failed to split, then all items in the bucket have the same bucket-key.\n *    - If the new item has the same bucket-key, then extend the segment to\n *      be an extended-segment, if not already, and add the item to it.\n *    - If the new item has a different bucket-key, then allocate a new bucket\n *      for it.\n */\nstatic int ebAddToBucket(EbucketsType *type,\n                         FirstSegHdr *firstSegBkt,\n                         eItem item,\n                         EBucketNew *newBucket,\n                         uint64_t *updateBucketKey)\n{\n    newBucket->segment.head = NULL; /* no new bucket as default */\n\n    if (firstSegBkt->numSegs == 1) {\n        /* If bucket is a single, not full segment, then add the item to the segment */\n        if (firstSegBkt->totalItems < EB_SEG_MAX_ITEMS)\n            return ebSegAddAvail(type, firstSegBkt, item);\n\n        /* If bucket is a single, full segment, and segment split succeeded */\n        if (ebTrySegSplit(type, firstSegBkt, newBucket) == 1) {\n            /* Split succeeded. A failure would mean all items in the segment\n             * share the same bucket-key (handled below) */\n            ExpireMeta *mItem = type->getExpireMeta(item);\n\n            /* Check which of the two segments the new item should be added to. Note that\n             * after the split, bucket-key of `newBucket` is bigger than bucket-key of\n             * `firstSegBkt`. 
That is, `firstSegBkt` preserves the bucket-key value\n             * (and its location in the rax tree) it had before the split */\n            if (EB_BUCKET_KEY(ebGetMetaExpTime(type->getExpireMeta(item))) < newBucket->ebKey) {\n                return ebSegAddAvail(type, firstSegBkt, item);\n            } else {\n                /* Add the `item` to the new bucket */\n                ebSegAddAvail(type, &(newBucket->segment), item);\n\n                /* if new item is now last item in the segment, then update lastItemBucket */\n                if (mItem->lastItemBucket)\n                    newBucket->mLast = mItem;\n                return 0;\n            }\n        }\n    }\n\n    /* If reached here, then either:\n     * (1) a bucket with multiple segments\n     * (2) Or, a single, full segment which failed to split.\n     *\n     * Either way, all items in the bucket have the same bucket-key value. Thus:\n     * (A) If 'item' has the same bucket-key as the ones in this bucket, then add it as well\n     * (B) Else, allocate a new bucket for it.\n     */\n\n    ExpireMeta *mHead = type->getExpireMeta(firstSegBkt->head);\n    ExpireMeta *mItem = type->getExpireMeta(item);\n\n    uint64_t bucketKey = EB_BUCKET_KEY(ebGetMetaExpTime(mHead)); /* same for all items in the segment */\n    uint64_t itemKey = EB_BUCKET_KEY(ebGetMetaExpTime(mItem));\n\n    if (bucketKey == itemKey) {\n        /* New item has the same bucket-key as the ones in this bucket, add it as well */\n        if (mHead->numItems < EB_SEG_MAX_ITEMS)\n            return ebSegAddAvail(type, firstSegBkt, item); /* Add item to first segment */\n        else {\n            /* If a regular segment becomes extended-segment, then update the\n             * bucket-key to be aligned with the expiration-time of the items\n             * it contains */\n            if (firstSegBkt->numSegs == 1)\n                *updateBucketKey = bucketKey;\n\n            return ebSegAddExtended(type, firstSegBkt, item); /* Add item in a 
new segment */\n        }\n    } else {\n        /* If the item cannot be added to the visited (extended-segment) bucket\n         * because it has a key not equal to bucket-key, then we need to allocate a\n         * new bucket for the item. If the key of the item is below the bucket-key of\n         * the visited bucket, then the new item will be added to a new segment\n         * before it and the visited bucket key will be updated to accurately\n         * reflect the bucket-key of the (extended-segment) bucket */\n        if (bucketKey > itemKey)\n            *updateBucketKey = bucketKey;\n\n        ebNewBucket(type, newBucket, item, EB_BUCKET_KEY(ebGetMetaExpTime(mItem)));\n        return 0;\n    }\n}\n\n/*\n * Remove item from rax\n *\n * Return 1 if removed. Otherwise, return 0\n *\n * Note: The function is optimized to remove items locally from segments without\n *       traversing the rax tree or stepping through long extended-segments.\n *       Therefore, it is assumed that the item is present in the bucket without\n *       verification.\n *\n * TODO: Written in a straightforward manner. 
Should be optimized to merge small segments.\n */\nstatic int ebRemoveFromRax(ebuckets *eb, EbucketsType *type, eItem item) {\n    ExpireMeta *mItem = type->getExpireMeta(item);\n    rax *rax = ebGetRaxPtr(*eb);\n\n    /* if item is the only one left in a single-segment bucket, then delete bucket */\n    if (unlikely(mItem->firstItemBucket && mItem->lastItemBucket)) {\n        raxIterator ri;\n        raxStart(&ri, rax);\n        unsigned char raxKey[EB_KEY_SIZE];\n        bucketKey2RaxKey(EB_BUCKET_KEY(ebGetMetaExpTime(mItem)), raxKey);\n        raxSeek(&ri, \"<=\", raxKey, EB_KEY_SIZE);\n\n        if (raxNext(&ri) == 0)\n            return 0; /* not removed */\n\n        FirstSegHdr *segHdr = ri.data;\n\n        if (segHdr->head != item)\n            return 0; /* not removed */\n\n        zfree(segHdr);\n        raxRemove(ri.rt, ri.key, EB_KEY_SIZE, NULL);\n        raxStop(&ri);\n\n        /* If last bucket in rax, then delete the rax */\n        if (rax->numele == 0) {\n            raxFree(rax);\n            *eb = NULL;\n            return 1; /* removed */\n        }\n    } else if (mItem->numItems == 1) {\n        /* If the `item` is the only one in its segment, there must be additional\n         * items and segments in this bucket. If there weren't, the item would\n         * have been removed by the previous condition. 
*/\n\n        if (mItem->firstItemBucket) {\n            /* If the first item/segment in extended-segments, then\n             * - Remove current segment (with single item) and promote next-segment to be first.\n             * - Update first item of next-segment to be firstItemBucket\n             * - Update `prevSeg` next-of-next segment to point new header of next-segment\n             * - Update FirstSegHdr to totalItems-1, numSegs-1 */\n            NextSegHdr *nextHdr = mItem->next;\n            FirstSegHdr *firstHdr = (FirstSegHdr *) nextHdr->prevSeg;\n            firstHdr->head = nextHdr->head;\n            firstHdr->totalItems--;\n            firstHdr->numSegs--;\n            zfree(nextHdr);\n            eItem *iter = firstHdr->head;\n            ExpireMeta *mIter = type->getExpireMeta(iter);\n            mIter->firstItemBucket = 1;\n            while (mIter->lastInSegment == 0) {\n                iter = mIter->next;\n                mIter = type->getExpireMeta(iter);\n            }\n            if (mIter->lastItemBucket)\n                mIter->next = firstHdr;\n            else\n                ((NextSegHdr *) mIter->next)->prevSeg = (CommonSegHdr *) firstHdr;\n\n        } else if (mItem->lastItemBucket) {\n            /* If last item/segment in bucket, then\n             * - promote previous segment to be last segment\n             * - Update FirstSegHdr to totalItems-1, numSegs-1 */\n            NextSegHdr *currHdr = mItem->next;\n            CommonSegHdr *prevHdr = currHdr->prevSeg;\n            eItem iter = prevHdr->head;\n            ExpireMeta *mIter = type->getExpireMeta(iter);\n            while (mIter->lastInSegment == 0) {\n                iter = mIter->next;\n                mIter = type->getExpireMeta(iter);\n            }\n            currHdr->firstSeg->totalItems--;\n            currHdr->firstSeg->numSegs--;\n            mIter->next = prevHdr;\n            mIter->lastItemBucket = 1;\n            zfree(currHdr);\n\n        } else {\n           
 /* item/segment is not the first or last item/segment.\n             * - Update previous segment to point next segment.\n             * - Update `prevSeg` of next segment\n             * - Update FirstSegHdr to totalItems-1, numSegs-1 */\n            NextSegHdr *nextHdr = mItem->next;\n            NextSegHdr *currHdr = (NextSegHdr *) nextHdr->prevSeg;\n            CommonSegHdr *prevHdr = currHdr->prevSeg;\n\n            ExpireMeta *mIter = type->getExpireMeta(prevHdr->head);\n            while (mIter->lastInSegment == 0)\n                mIter = type->getExpireMeta(mIter->next);\n\n            mIter->next = nextHdr;\n            nextHdr->prevSeg = prevHdr;\n            nextHdr->firstSeg->totalItems--;\n            nextHdr->firstSeg->numSegs--;\n            zfree(currHdr);\n\n        }\n    } else {\n        /* At least 2 items in current segment */\n        if (mItem->numItems) {\n            /* If item is first item in segment (Must be numItems>1), then\n             * - Find segment header and update to point next item.\n             * - Let next inherit 'item' flags {firstItemBucket, numItems-1}\n             * - Update FirstSegHdr to totalItems-1 */\n            ExpireMeta *mIter = mItem;\n            CommonSegHdr *currHdr;\n            while (mIter->lastInSegment == 0)\n                mIter = type->getExpireMeta(mIter->next);\n            if (mIter->lastItemBucket)\n                currHdr = (CommonSegHdr *) mIter->next;\n            else\n                currHdr = (CommonSegHdr *) ((NextSegHdr *) mIter->next)->prevSeg;\n\n            if (mItem->firstItemBucket)\n                ((FirstSegHdr *) currHdr)->totalItems--;\n            else\n                ((NextSegHdr *) currHdr)->firstSeg->totalItems--;\n\n            eItem *newHead = mItem->next;\n            ExpireMeta *mNewHead = type->getExpireMeta(newHead);\n            mNewHead->firstItemBucket = mItem->firstItemBucket;\n            mNewHead->numItems = mItem->numItems - 1;\n            currHdr->head = 
newHead;\n\n        } else if (mItem->lastInSegment) {\n            /* If item is last in segment, then\n             * - find previous item and let it inherit (next, lastInSegment, lastItemBucket)\n             * - Find and update segment header to numItems-1\n             * - Update FirstSegHdr to totalItems-1 */\n            CommonSegHdr *currHdr;\n            if (mItem->lastItemBucket)\n                currHdr = (CommonSegHdr *) mItem->next;\n            else\n                currHdr = (CommonSegHdr *) ((NextSegHdr *) mItem->next)->prevSeg;\n\n            ExpireMeta *mHead = type->getExpireMeta(currHdr->head);\n            mHead->numItems--;\n            ExpireMeta *mIter = mHead;\n            while (mIter->next != item)\n                mIter = type->getExpireMeta(mIter->next);\n\n            mIter->next = mItem->next;\n            mIter->lastInSegment = mItem->lastInSegment;\n            mIter->lastItemBucket = mItem->lastItemBucket;\n\n            if (mHead->firstItemBucket)\n                ((FirstSegHdr *) currHdr)->totalItems--;\n            else\n                ((NextSegHdr *) currHdr)->firstSeg->totalItems--;\n\n        } else {\n            /* - Item is in the middle of segment. 
Find previous item and update to point next.\n             * - Find and update segment header to numItems-1\n             * - Update FirstSegHdr to totalItems-1 */\n            ExpireMeta *mIter = mItem;\n            CommonSegHdr *currHdr;\n            while (mIter->lastInSegment == 0)\n                mIter = type->getExpireMeta(mIter->next);\n            if (mIter->lastItemBucket)\n                currHdr = (CommonSegHdr *) mIter->next;\n            else\n                currHdr = (CommonSegHdr *) ((NextSegHdr *) mIter->next)->prevSeg;\n\n            ExpireMeta *mHead = type->getExpireMeta(currHdr->head);\n            mHead->numItems--;\n            mIter = mHead;\n            while (mIter->next != item)\n                mIter = type->getExpireMeta(mIter->next);\n\n            mIter->next = mItem->next;\n            mIter->lastInSegment = mItem->lastInSegment;\n            mIter->lastItemBucket = mItem->lastItemBucket;\n\n            if (mHead->firstItemBucket)\n                ((FirstSegHdr *) currHdr)->totalItems--;\n            else\n                ((NextSegHdr *) currHdr)->firstSeg->totalItems--;\n        }\n    }\n    *ebRaxNumItems(rax) -= 1;\n    return 1; /* removed */\n}\n\nint ebAddToRax(ebuckets *eb, EbucketsType *type, eItem item, uint64_t bucketKeyItem) {\n    EBucketNew newBucket; /* ebAddToBucket takes care to update newBucket.segment.head */\n    raxIterator iter;\n    unsigned char raxKey[EB_KEY_SIZE];\n    bucketKey2RaxKey(bucketKeyItem, raxKey);\n    rax *rax = ebGetRaxPtr(*eb);\n    raxStart(&iter,rax);\n    raxSeek(&iter, \"<=\", raxKey, EB_KEY_SIZE);\n    *ebRaxNumItems(rax) += 1;\n    /* If expireTime of the item is below the bucket-key of first bucket in rax,\n     * then we need to add it as a new bucket at the beginning of the rax. 
*/\n    if(raxNext(&iter) == 0) {\n        FirstSegHdr *firstSegHdr = zmalloc(sizeof(FirstSegHdr));\n        firstSegHdr->head = item;\n        firstSegHdr->totalItems = 1;\n        firstSegHdr->numSegs = 1;\n\n        /* update last item to point to the segment header */\n        ExpireMeta *metaItem = type->getExpireMeta(item);\n        metaItem->lastItemBucket = 1;\n        metaItem->lastInSegment = 1;\n        metaItem->firstItemBucket = 1;\n        metaItem->numItems = 1;\n        metaItem->next = firstSegHdr;\n        bucketKey2RaxKey(bucketKeyItem, raxKey);\n        raxInsert(rax, raxKey, EB_KEY_SIZE, firstSegHdr, NULL);\n        raxStop(&iter);\n        return 0;\n    }\n\n    /* Add the new item into the first segment of the bucket that we found */\n    uint64_t updateBucketKey = 0;\n    ebAddToBucket(type, iter.data, item, &newBucket, &updateBucketKey);\n\n    /* If, following the addition, we need to update the bucket-key of the found\n     * bucket in rax (signaled via `updateBucketKey`) */\n    if(unlikely(updateBucketKey && updateBucketKey != raxKey2BucketKey(iter.key))) {\n        raxRemove(iter.rt, iter.key, EB_KEY_SIZE, NULL);\n        bucketKey2RaxKey(updateBucketKey, raxKey);\n        raxInsert(iter.rt, raxKey, EB_KEY_SIZE, iter.data, NULL);\n    }\n\n    /* If ebAddToBucket() returned a new bucket, then add the bucket to rax.\n     *\n     * This might happen when trying to add another item to a bucket that is:\n     * 1. A single, full segment. Will result in a bucket (segment) split.\n     * 2. 
Extended segment with a different bucket-key than the new item.\n     *    Will result in a new bucket (of size 1) for the new item.\n     */\n    if (newBucket.segment.head != NULL) {\n        /* Allocate segment header for the new bucket */\n        FirstSegHdr *newSeg = zmalloc(sizeof(FirstSegHdr));\n        /* Move the segment from 'newBucket' to allocated segment header */\n        *newSeg = newBucket.segment;\n        /* Update 'next' of last item in segment to point to `FirstSegHdr` */\n        newBucket.mLast->next = newSeg;\n        /* Insert the new bucket to rax */\n        bucketKey2RaxKey(newBucket.ebKey, raxKey);\n        raxInsert(iter.rt, raxKey, EB_KEY_SIZE, newSeg, NULL);\n    }\n\n    raxStop(&iter);\n    return 0;\n}\n\n/* Validate the general structure of the buckets in rax */\nstatic void ebValidateRax(rax *rax, EbucketsType *type) {\n    uint64_t numItemsTotal = 0;\n    raxIterator raxIter;\n    raxStart(&raxIter, rax);\n    raxSeek(&raxIter, \"^\", NULL, 0);\n    while (raxNext(&raxIter)) {\n        int expectFirstItemBucket = 1;\n        FirstSegHdr *firstSegHdr = raxIter.data;\n        eItem iter;\n        ExpireMeta *mIter, *mHead;\n        iter = firstSegHdr->head;\n        mHead = type->getExpireMeta(iter);\n        uint64_t numItemsBucket = 0, countSegments = 0;\n\n        int extendedSeg = (firstSegHdr->numSegs > 1) ? 
1 : 0;\n        void *segHdr = firstSegHdr;\n\n        mIter = type->getExpireMeta(iter);\n        while (1) {\n            uint64_t curBktKey, prevBktKey;\n            for (int i = 0; i < mHead->numItems ; ++i) {\n                assert(iter != NULL);\n                mIter = type->getExpireMeta(iter);\n                curBktKey = EB_BUCKET_KEY(ebGetMetaExpTime(mIter));\n                assert(mIter->trash == 0);\n                if (i == 0) {\n                    assert(mIter->numItems > 0 && mIter->numItems <= EB_SEG_MAX_ITEMS);\n                    assert(mIter->firstItemBucket == expectFirstItemBucket);\n                    expectFirstItemBucket = 0;\n                    prevBktKey = curBktKey;\n                } else  {\n                    assert( (extendedSeg && prevBktKey == curBktKey) ||\n                            (!extendedSeg && prevBktKey <= curBktKey) );\n                    assert(mIter->numItems == 0);\n                    assert(mIter->firstItemBucket == 0);\n                    prevBktKey = curBktKey;\n                }\n\n                if (i == mHead->numItems - 1)\n                    assert(mIter->lastInSegment == 1);\n                else\n                    assert(mIter->lastInSegment == 0);\n\n                iter = mIter->next;\n            }\n\n            numItemsBucket += mHead->numItems;\n            countSegments += 1;\n\n            if (mIter->lastItemBucket)\n                break;\n\n            NextSegHdr *nextSegHdr = mIter->next;\n            assert(nextSegHdr->firstSeg == firstSegHdr);\n            assert(nextSegHdr->prevSeg == segHdr);\n            iter = nextSegHdr->head;\n            mHead = type->getExpireMeta(iter);\n            segHdr = nextSegHdr;\n        }\n        /* Verify next of last item, `totalItems` and `numSegs` in iterated bucket */\n        assert(mIter->next == segHdr);\n        assert(numItemsBucket == firstSegHdr->totalItems);\n        assert(countSegments == firstSegHdr->numSegs);\n        
numItemsTotal += numItemsBucket;\n    }\n    raxStop(&raxIter);\n    assert(numItemsTotal == *ebRaxNumItems(rax));\n}\n\nstruct deleteCbCtx { EbucketsType *type; void *userCtx; };\nvoid ebRaxDeleteCb(void *item, void *context) {\n    struct deleteCbCtx *ctx = context;\n    FirstSegHdr *firstSegHdr = item;\n    eItem itemIter = firstSegHdr->head;\n    uint32_t numSegs = firstSegHdr->numSegs;\n    void *nextSegHdr = firstSegHdr;\n\n    for (uint32_t seg=0 ; seg < numSegs ; seg++) {\n        zfree(nextSegHdr);\n\n        ExpireMeta *mIter = ctx->type->getExpireMeta(itemIter);\n        uint32_t numItemsInSeg = mIter->numItems;\n\n        for (uint32_t i = 0; i < numItemsInSeg ; ++i) {\n            mIter = ctx->type->getExpireMeta(itemIter);\n            eItem toDelete = itemIter;\n            mIter->trash = 1;\n            itemIter = mIter->next;\n            if (ctx->type->onDeleteItem) ctx->type->onDeleteItem(toDelete, &ctx->userCtx);\n        }\n        nextSegHdr = itemIter;\n\n        if (seg + 1 < numSegs)\n            itemIter = ((NextSegHdr *) nextSegHdr)->head;\n    }\n\n}\n\nstatic void _ebPrint(ebuckets eb, EbucketsType *type, int64_t usedMem, int printItems) {\n    if (ebIsEmpty(eb)) {\n        printf(\"Empty ebuckets\\n\");\n        return;\n    }\n\n    if (ebIsList(eb)) {\n        /* mock rax segment */\n        eItem head = ebGetListPtr(type, eb);\n        ExpireMeta *metaHead = type->getExpireMeta(head);\n        FirstSegHdr mockSeg = { head, metaHead->numItems, 1};\n        if (printItems)\n            ebBucketPrint(0, type, &mockSeg);\n        return;\n    }\n\n    uint64_t totalItems = 0;\n    uint64_t numBuckets = 0;\n    uint64_t numSegments = 0;\n\n    rax *rax = ebGetRaxPtr(eb);\n    raxIterator iter;\n    raxStart(&iter, rax);\n    raxSeek(&iter, \"^\", NULL, 0);\n    while (raxNext(&iter)) {\n        FirstSegHdr *seg = iter.data;\n        if (printItems)\n            ebBucketPrint(raxKey2BucketKey(iter.key), type, seg);\n        totalItems += 
seg->totalItems;\n        numBuckets++;\n        numSegments += seg->numSegs;\n    }\n\n    printf(\"Total number of items              : %\" PRIu64 \"\\n\", totalItems);\n    printf(\"Total number of buckets            : %\" PRIu64 \"\\n\", numBuckets);\n    printf(\"Total number of segments           : %\" PRIu64 \"\\n\", numSegments);\n    printf(\"Average items per bucket           : %.2f\\n\",\n           (double) totalItems / numBuckets);\n    printf(\"Average items per segment          : %.2f\\n\",\n           (double) totalItems / numSegments);\n    printf(\"Average segments per bucket        : %.2f\\n\",\n           (double) numSegments / numBuckets);\n\n    if (usedMem != -1)\n    {\n        printf(\"\\nEbuckets memory usage (including FirstSegHdr/NextSegHdr):\\n\");\n        printf(\"Total                              : %.2f KBytes\\n\",\n               (double) usedMem / 1024);\n        printf(\"Average per bucket                 : %\" PRIu64 \" Bytes\\n\",\n               usedMem / numBuckets);\n        printf(\"Average per item                   : %\" PRIu64 \" Bytes\\n\",\n               usedMem / totalItems);\n        printf(\"EB_BUCKET_KEY_PRECISION            : %d\\n\",\n               EB_BUCKET_KEY_PRECISION);\n        printf(\"EB_SEG_MAX_ITEMS                   : %d\\n\",\n               EB_SEG_MAX_ITEMS);\n    }\n    raxStop(&iter);\n}\n\n/*** API functions ***/\n\n/**\n * Deletes all items from the given ebucket, invoking optional item deletion callbacks.\n *\n * @param eb - The ebucket to be deleted.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param ctx - A context pointer that can be used in optional item deletion callbacks.\n */\nvoid ebDestroy(ebuckets *eb, EbucketsType *type, void *ctx) {\n    if (ebIsEmpty(*eb))\n        return;\n\n    if (ebIsList(*eb)) {\n        eItem head = ebGetListPtr(type, *eb);\n        eItem *pItemNext = &head;\n        while ( (*pItemNext) != NULL) {\n            eItem 
toDelete = *pItemNext;\n            ExpireMeta *metaToDelete = type->getExpireMeta(toDelete);\n            *pItemNext = metaToDelete->next;\n            metaToDelete->trash = 1;\n            if (type->onDeleteItem) type->onDeleteItem(toDelete, ctx);\n        }\n    } else {\n        struct deleteCbCtx deleteCtx = {type, ctx};\n        raxFreeWithCbAndContext(ebGetRaxPtr(*eb), ebRaxDeleteCb, &deleteCtx);\n    }\n\n    *eb = NULL;\n}\n\n/**\n * Removes the specified item from the given ebucket, updating the ebuckets handler\n * accordingly. The function is optimized to remove items locally from segments\n * without traversing the rax tree or stepping through long extended-segments.\n * Therefore, it is assumed that the item is present in the bucket without\n * verification.\n *\n * @param eb   - Pointer to the ebuckets handler, which may get updated if the removal\n *               affects the structure.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param item - The eItem to be removed from the ebucket.\n *\n * @return 1 if the item was successfully removed; otherwise, return 0.\n */\nint ebRemove(ebuckets *eb, EbucketsType *type, eItem item) {\n\n    if (ebIsEmpty(*eb))\n        return 0; /* not removed */\n\n    int res;\n    if (ebIsList(*eb))\n        res = ebRemoveFromList(eb, type, item);\n    else  /* rax */\n        res = ebRemoveFromRax(eb, type, item);\n\n    /* if removed then mark as trash */\n    if (res)\n        type->getExpireMeta(item)->trash = 1;\n\n    EB_VALIDATE_STRUCTURE(*eb, type);\n\n    return res;\n}\n\n/**\n * Adds the specified item to the ebucket structure based on expiration time.\n * If the ebucket is a list or empty, it attempts to add the item to the list.\n * Otherwise, it adds the item to rax. If the list reaches its maximum size, it\n * is converted to rax. 
The ebuckets handler may be updated accordingly.\n *\n * @param eb - Pointer to the ebuckets handler, which may get updated\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param item - The eItem to be added to the ebucket.\n * @param expireTime - The expiration time of the item.\n *\n * @return 0 (C_OK) if the item was successfully added;\n *         Otherwise, return -1 (C_ERR) on failure.\n */\nint ebAdd(ebuckets *eb, EbucketsType *type, eItem item, uint64_t expireTime) {\n    int res;\n\n    assert(expireTime <= EB_EXPIRE_TIME_MAX);\n\n    /* Set expire-time and reset segment flags */\n    ExpireMeta *itemMeta = type->getExpireMeta(item);\n    ebSetMetaExpTime(itemMeta, expireTime);\n    itemMeta->lastInSegment = 0;\n    itemMeta->firstItemBucket = 0;\n    itemMeta->lastItemBucket = 0;\n    itemMeta->numItems = 0;\n    itemMeta->trash = 0;\n\n    if (ebIsList(*eb) || (ebIsEmpty(*eb))) {\n        /* Try add item to list */\n        if ( (res = ebAddToList(eb, type, item)) == 1) {\n            /* Failed to add since list reached maximum size. 
Convert to rax */\n            *eb = ebConvertListToRax(ebGetListPtr(type, *eb), type);\n            res = ebAddToRax(eb, type, item, EB_BUCKET_KEY(expireTime));\n        }\n    } else {\n        /* Add item to rax */\n        res = ebAddToRax(eb, type, item, EB_BUCKET_KEY(expireTime));\n    }\n\n    EB_VALIDATE_STRUCTURE(*eb, type);\n\n    return res;\n}\n\n/**\n * Performs expiration on the given ebucket, removing items that have expired.\n *\n * If all items in the data structure are expired, 'eb' will be set to NULL.\n *\n * @param eb - Pointer to the ebuckets handler, which may get updated\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param info - Provides information about the expiration action.\n */\nvoid ebExpire(ebuckets *eb, EbucketsType *type, ExpireInfo *info) {\n    /* updateList - maintain a list of expired items that the callback `onExpireItem`\n     * indicated to update their expiration time rather than removing them.\n     * At the end of this function, the items will be added back via `ebAdd()`.\n     *\n     * Note, this list of items does not allocate any memory, but temporarily reuses\n     * the `next` pointer of the `ExpireMeta` structure of the expired items. 
*/\n    eItem updateList = NULL;\n\n    /* reset info outputs */\n    info->nextExpireTime = EB_EXPIRE_TIME_INVALID;\n    info->itemsExpired = 0;\n\n    /* if empty ebuckets */\n    if (ebIsEmpty(*eb)) return;\n\n    if (ebIsList(*eb)) {\n        ebListExpire(eb, type, info, &updateList);\n        goto END_ACTEXP;\n    }\n\n    /* handle rax expiry */\n\n    rax *rax = ebGetRaxPtr(*eb);\n    raxIterator iter;\n\n    raxStart(&iter, rax);\n\n    uint64_t nowKey = EB_BUCKET_KEY(info->now);\n    uint64_t itemsExpiredBefore = info->itemsExpired;\n\n    while (1) {\n        raxSeek(&iter,\"^\",NULL,0);\n        if (!raxNext(&iter)) break;\n\n        uint64_t bucketKey = raxKey2BucketKey(iter.key);\n\n        FirstSegHdr *firstSegHdr = iter.data;\n\n        /* We need to take into consideration EB_BUCKET_KEY_PRECISION. The value of\n         * \"info->now\" will be adjusted to lookup only for all buckets with assigned\n         * keys that are older than 1<<EB_BUCKET_KEY_PRECISION msec ago. That is, it\n         * is needed to visit only the buckets with keys that are \"<\" than:\n         * EB_BUCKET_KEY(info->now). 
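For example, assuming EB_BUCKET_KEY(t) is\n         * simply (t >> EB_BUCKET_KEY_PRECISION) (an assumption for illustration),\n         * a bucket with key K covers the expiration-time range [K << P, (K+1) << P).\n         * Hence K < EB_BUCKET_KEY(info->now) guarantees that the entire range lies\n         * in the past and every item in the bucket is expired. 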
*/\n        if (bucketKey >= nowKey) {\n            /* Take care to update next expire time based on next segment to expire */\n            info->nextExpireTime = ebGetMetaExpTime(\n                    type->getExpireMeta(firstSegHdr->head));\n            break;\n        }\n\n        /* If we did not manage to expire the entire bucket, then stop */\n        if (ebSegExpire(firstSegHdr, type, info, &updateList) == 0)\n            break;\n\n        raxRemove(iter.rt, iter.key, EB_KEY_SIZE, NULL);\n    }\n\n    raxStop(&iter);\n    *ebRaxNumItems(rax) -= info->itemsExpired - itemsExpiredBefore;\n\n    if(raxEOF(&iter) && (updateList == 0)) {\n        raxFree(rax);\n        *eb = NULL;\n    }\n\nEND_ACTEXP:\n    /* Add back items with updated expiration time */\n    while (updateList) {\n        ExpireMeta *mItem = type->getExpireMeta(updateList);\n        eItem next = mItem->next;\n        uint64_t expireAt = ebGetMetaExpTime(mItem);\n\n        /* Update next minimum expire time if needed.\n         * Condition is valid also if nextExpireTime is EB_EXPIRE_TIME_INVALID */\n        if (expireAt < info->nextExpireTime)\n            info->nextExpireTime = expireAt;\n\n        ebAdd(eb, type, updateList, expireAt);\n        updateList = next;\n    }\n\n    EB_VALIDATE_STRUCTURE(*eb, type);\n\n    return;\n}\n\n/* Performs active expiration dry-run to evaluate the number of expired items\n *\n * It is faster than actual active-expire because it iterates only over the\n * headers of the buckets until the first non-expired bucket, and no more than\n * EB_SEG_MAX_ITEMS items in the last bucket\n *\n * @param eb - The ebucket to be checked.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param now - The current time in milliseconds.\n */\nuint64_t ebExpireDryRun(ebuckets eb, EbucketsType *type, uint64_t now) {\n    if (ebIsEmpty(eb)) return 0;\n\n    uint64_t numExpired = 0;\n\n    /* If list, then iterate and count expired ones */\n    if 
(ebIsList(eb)) {\n        ExpireMeta *mIter = type->getExpireMeta(ebGetListPtr(type, eb));\n        while (1) {\n            if (ebGetMetaExpTime(mIter) >= now)\n                return numExpired;\n\n            numExpired++;\n\n            if (mIter->lastInSegment)\n                return numExpired;\n\n            mIter = type->getExpireMeta(mIter->next);\n        }\n    }\n\n    /* Handle rax active-expire */\n    rax *rax = ebGetRaxPtr(eb);\n    raxIterator iter;\n    raxStart(&iter, rax);\n    uint64_t nowKey = EB_BUCKET_KEY(now);\n    raxSeek(&iter,\"^\",NULL,0);\n    assert(raxNext(&iter)); /* must be at least one bucket */\n    FirstSegHdr *currBucket = iter.data;\n\n    while (1) {\n        /* if 'currBucket' is last bucket, then break */\n        if(!raxNext(&iter)) break;\n        FirstSegHdr *nextBucket = iter.data;\n\n        /* if 'nextBucket' is not less than now then break */\n        if (raxKey2BucketKey(iter.key) >= nowKey) break;\n\n        /* nextBucket less than now. 
For sure all items in currBucket are expired */\n        numExpired += currBucket->totalItems;\n        currBucket = nextBucket;\n    }\n    raxStop(&iter);\n\n    /* If single segment bucket, iterate over items and count expired ones */\n    if (currBucket->numSegs == 1) {\n        ExpireMeta *mIter = type->getExpireMeta(currBucket->head);\n        while (1) {\n            if (ebGetMetaExpTime(mIter) >= now)\n                return numExpired;\n\n            numExpired++;\n\n            if (mIter->lastInSegment)\n                return numExpired;\n\n            mIter = type->getExpireMeta(mIter->next);\n        }\n    }\n\n    /* Bucket key exactly reflects the expiration time of all items (currBucket->numSegs > 1) */\n    if (EB_BUCKET_KEY_PRECISION == 0) {\n        if (ebGetMetaExpTime(type->getExpireMeta(currBucket->head)) >= now)\n            return numExpired;\n        else\n            return numExpired + currBucket->totalItems;\n    }\n\n    /* Iterate extended-segment and count expired ones */\n\n    /* Unreachable code, provided for completeness. The following operation is not\n     * bounded in time, and this is the main reason why EB_BUCKET_KEY_PRECISION is\n     * set to 0 above and we have the early return on the previous condition */\n\n    ExpireMeta *mIter = type->getExpireMeta(currBucket->head);\n    while(1) {\n        if (ebGetMetaExpTime(mIter) < now)\n            numExpired++;\n\n        if (mIter->lastItemBucket)\n            return numExpired;\n\n        if (mIter->lastInSegment)\n            mIter = type->getExpireMeta(((NextSegHdr *) mIter->next)->head);\n        else\n            mIter = type->getExpireMeta(mIter->next);\n    }\n}\n\n/**\n * Retrieves the expiration time of the item with the nearest expiration\n *\n * @param eb - The ebucket to be checked.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n *\n * @return The expiration time of the item with the nearest expiration time in\n *         the ebucket. 
If empty, return EB_EXPIRE_TIME_INVALID. If ebuckets is\n *         of type rax and the minimal bucket is an extended-segment, then it might\n *         not return an accurate result, up to (1<<EB_BUCKET_KEY_PRECISION)-1 msec\n *         off (we don't want to traverse the entire extended-segment since it\n *         might not be bounded).\n */\nuint64_t ebGetNextTimeToExpire(ebuckets eb, EbucketsType *type) {\n    if (ebIsEmpty(eb))\n        return EB_EXPIRE_TIME_INVALID;\n\n    if (ebIsList(eb))\n        return ebGetMetaExpTime(type->getExpireMeta(ebGetListPtr(type, eb)));\n\n    /* rax */\n    uint64_t minExpire;\n    rax *rax = ebGetRaxPtr(eb);\n    raxIterator iter;\n    raxStart(&iter, rax);\n    raxSeek(&iter, \"^\", NULL, 0);\n    raxNext(&iter); /* seek to the first bucket */\n    FirstSegHdr *firstSegHdr = iter.data;\n    if ((firstSegHdr->numSegs == 1) || (EB_BUCKET_KEY_PRECISION == 0)) {\n        /* Single segment, or extended-segments that all have same expiration time.\n         * return the first item with the nearest expiration time */\n        minExpire = ebGetMetaExpTime(type->getExpireMeta(firstSegHdr->head));\n    } else {\n\n        /* If reached here, then it is because this is an extended segment and the\n         * buckets have lower precision than 1 msec. 
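(For example, with the\n         * hypothetical value EB_BUCKET_KEY_PRECISION=6, a bucket key only\n         * resolves expiration times to within 1<<6 = 64 msec.) 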
In that case it is better not to\n         * iterate extended-segments, which might be unbounded, and just return the\n         * worst possible expiration time in this bucket.\n         *\n         * The reason we blindly return the worst-case expiration time value in\n         * this bucket is that the only usage of this function is to figure out\n         * when the next active expiration should be performed, and it is better\n         * to do it only after 1 or more items were expired and not the other way\n         * around.\n         */\n        uint64_t expTime = ebGetMetaExpTime(type->getExpireMeta(firstSegHdr->head));\n        minExpire = expTime | ((1<<EB_BUCKET_KEY_PRECISION)-1);\n    }\n    raxStop(&iter);\n    return minExpire;\n}\n\n/**\n * Retrieves the expiration time of the item with the latest expiration\n *\n * However, precision loss (EB_BUCKET_KEY_PRECISION) in rax tree buckets\n * may result in slight inaccuracies, up to a variation of\n * 1<<EB_BUCKET_KEY_PRECISION msec.\n *\n * @param eb - The ebucket to be checked.\n * @param type - Pointer to the EbucketsType structure defining the type of ebucket.\n * @param accurate - If 1, then the function will return an accurate result.\n *                   Otherwise, it might return the upper limit with a slight\n *                   inaccuracy of 1<<EB_BUCKET_KEY_PRECISION msec.\n *\n *                   This special case is relevant only when the last bucket\n *                   is of type extended-segment. In this case, we might not\n *                   want to traverse the entire bucket to find the accurate\n *                   expiration time since there might be an unbounded number of\n *                   items in the extended-segment. If EB_BUCKET_KEY_PRECISION\n *                   is 0, then the function will return an accurate result anyway.\n *\n * @return The expiration time of the item with the latest expiration time in\n *         the ebucket. 
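As a worked example, assuming the\n *         hypothetical value EB_BUCKET_KEY_PRECISION=6 (so mask=63), the\n *         inaccurate upper limit for a head item with expire-time 1000 is\n *         (1000+64) & ~63 = 1024. 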
If empty, return EB_EXPIRE_TIME_INVALID.\n */\nuint64_t ebGetMaxExpireTime(ebuckets eb, EbucketsType *type, int accurate) {\n    if (ebIsEmpty(eb))\n        return EB_EXPIRE_TIME_INVALID;\n\n    if (ebIsList(eb)) {\n        eItem item = ebGetListPtr(type, eb);\n        ExpireMeta *em = type->getExpireMeta(item);\n        while (em->lastInSegment == 0)\n            em = type->getExpireMeta(em->next);\n        return ebGetMetaExpTime(em);\n    }\n\n    /* rax */\n    uint64_t maxExpire;\n    rax *rax = ebGetRaxPtr(eb);\n    raxIterator iter;\n    raxStart(&iter, rax);\n    raxSeek(&iter, \"$\", NULL, 0);\n    raxNext(&iter); /* seek to the last bucket */\n    FirstSegHdr *firstSegHdr = iter.data;\n    if (firstSegHdr->numSegs == 1) {\n        /* Single segment. return the last item with the highest expiration time */\n        ExpireMeta *em = type->getExpireMeta(firstSegHdr->head);\n        while (em->lastInSegment == 0)\n            em = type->getExpireMeta(em->next);\n        maxExpire = ebGetMetaExpTime(em);\n    } else if (EB_BUCKET_KEY_PRECISION == 0) {\n        /* Extended-segments that all have same expiration time */\n        maxExpire = ebGetMetaExpTime(type->getExpireMeta(firstSegHdr->head));\n    } else {\n        if (accurate == 0) {\n            /* return upper limit of the last bucket */\n            int mask = (1<<EB_BUCKET_KEY_PRECISION)-1;\n            uint64_t expTime = ebGetMetaExpTime(type->getExpireMeta(firstSegHdr->head));\n            maxExpire = (expTime + (mask+1)) & (~mask);\n        } else {\n            maxExpire = 0;\n            ExpireMeta *mIter = type->getExpireMeta(firstSegHdr->head);\n            while(1) {\n                while(1) {\n                    if (maxExpire < ebGetMetaExpTime(mIter))\n                        maxExpire = ebGetMetaExpTime(mIter);\n                    if (mIter->lastInSegment == 1) break;\n                    mIter = type->getExpireMeta(mIter->next);\n                }\n\n                if 
(mIter->lastItemBucket) break;\n                mIter = type->getExpireMeta(((NextSegHdr *) mIter->next)->head);\n            }\n        }\n    }\n    raxStop(&iter);\n    return maxExpire;\n}\n\n/**\n * Retrieves the total number of items in the ebucket.\n */\nuint64_t ebGetTotalItems(ebuckets eb, EbucketsType *type) {\n    if (ebIsEmpty(eb))\n        return 0;\n\n    if (ebIsList(eb))\n        return type->getExpireMeta(ebGetListPtr(type, eb))->numItems;\n    else\n        return *ebRaxNumItems(ebGetRaxPtr(eb));\n}\n\n/* Print expiration-time of items, the ebuckets layout and some statistics */\nvoid ebPrint(ebuckets eb, EbucketsType *type) {\n    _ebPrint(eb, type, -1, 1);\n}\n\n/* Validate the general structure of ebuckets. Calls assert(0) on error. */\nvoid ebValidate(ebuckets eb, EbucketsType *type) {\n    if (ebIsEmpty(eb))\n        return;\n\n    if (ebIsList(eb))\n        ebValidateList(ebGetListPtr(type, eb), type);\n    else\n        ebValidateRax(ebGetRaxPtr(eb), type);\n}\n\n/* Defrag callback for the radix tree iterator, called for each node,\n * used in order to defrag the nodes' allocations. */\nint ebDefragRaxNode(raxNode **noderef, void *privdata) {\n    ebDefragFunctions *defragfns = privdata;\n    raxNode *newnode = defragfns->defragAlloc(*noderef);\n    if (newnode) {\n        *noderef = newnode;\n        return 1;\n    }\n    return 0;\n}\n\n/* Defragments items in a list-based bucket. */\nvoid ebDefragList(ebuckets *eb, EbucketsType *type, ebDefragFunctions *defragfns, void *privdata) {\n    ExpireMeta *previtem = NULL;\n    eItem newitem, curitem = ebGetListPtr(type, *eb);\n    while (curitem != NULL) {\n        if ((newitem = defragfns->defragItem(curitem, privdata))) {\n            curitem = newitem;\n            if (previtem) {\n                previtem->next = curitem;\n            } else {\n                *eb = ebMarkAsList(curitem);\n            }\n        }\n        /* Move to the next item in the list. 
*/\n        previtem = type->getExpireMeta(curitem);\n        curitem = previtem->next;\n    }\n}\n\n/* Defragments a single bucket in rax, including its segments and items. */\nvoid ebDefragRaxBucket(EbucketsType *type, raxIterator *ri,\n                       ebDefragFunctions *defragfns, void *privdata)\n{\n    CommonSegHdr *currentSegHdr = ri->data;\n    CommonSegHdr *firstSegHdr = currentSegHdr;\n    eItem iter = ((FirstSegHdr*)currentSegHdr)->head;\n    ExpireMeta *mHead = type->getExpireMeta(iter);\n    ExpireMeta *prevSegLastItem = NULL; /* The last item of the previous segment */\n\n    while (1) {\n        unsigned int numItems = mHead->numItems;\n        assert(numItems);  /* Avoid compiler warning with old build chain. */\n        ExpireMeta *prevIter = NULL;\n        ExpireMeta *mIter = NULL;\n\n        for (unsigned int i = 0; i < numItems; ++i) {\n            eItem newiter = defragfns->defragItem(iter, privdata);\n            if (newiter) {\n                iter = newiter;\n\n                if (prevIter == NULL) {\n                    /* If this is the first item in the segment, update the segment\n                     * header to point to the new item location. */\n                    currentSegHdr->head = iter;\n                } else {\n                    /* Update the previous item's next pointer to point to the newly defragmented item */\n                    prevIter->next = iter;\n                }\n            }\n            mIter = type->getExpireMeta(iter);\n            prevIter = mIter;\n            iter = mIter->next;\n        }\n\n        /* Try to defragment the current segment. */\n        CommonSegHdr *newSegHdr = defragfns->defragAlloc(currentSegHdr);\n        if (newSegHdr) {\n            if (currentSegHdr == ri->data) {\n                /* If it's the first segment, update the rax data pointer. 
*/\n                raxSetData(ri->node, ri->data=newSegHdr);\n                firstSegHdr = newSegHdr;\n            } else {\n                /* For non-first segments, update the previous segment's next\n                 * item to the new pointer. */\n                prevSegLastItem->next = newSegHdr;\n            }\n            currentSegHdr = newSegHdr;\n        }\n\n        /* Remember last item in this segment for next iteration */\n        prevSegLastItem = mIter;\n\n        if (mIter->lastItemBucket) {\n            /* The last eitem needs to point back to the segment. */\n            if (newSegHdr) mIter->next = currentSegHdr;\n            break;\n        }\n\n        NextSegHdr *nextSegHdr = mIter->next;\n        nextSegHdr->firstSeg = (FirstSegHdr *)firstSegHdr;\n        if (newSegHdr) {\n            /* Update next segment's prev to point to the defragmented segment. */\n            nextSegHdr->prevSeg = newSegHdr;\n        }\n\n        /* Update pointers for next segment iteration */\n        iter = nextSegHdr->head;\n        mHead = type->getExpireMeta(iter);\n        currentSegHdr = (CommonSegHdr *)nextSegHdr;\n    }\n}\n\n/* Defragments items in a rax-based bucket.\n * Returns 0 if no more work needs to be done, and 1 if more work is needed. 
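*\n * A minimal resume-loop sketch (illustrative only; 'defragfns' and 'privdata'\n * stand for whatever the caller set up; in practice this function is driven\n * through ebScanDefrag() rather than called directly):\n *\n *   unsigned long cursor = 0;\n *   while (ebDefragRax(&eb, &type, &cursor, &defragfns, privdata))\n *       ;   (resume with the same cursor until 0 is returned)\n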
*/\nint ebDefragRax(ebuckets *eb, EbucketsType *type, unsigned long *cursor,\n                ebDefragFunctions *defragfns, void *privdata)\n{\n    rax *newrax, *rax = ebGetRaxPtr(*eb);\n    raxIterator ri;\n    static unsigned char next[EB_KEY_SIZE];\n\n    /* defrag the rax struct */\n    if (!*cursor) {\n        if ((newrax = defragfns->defragAlloc(rax))) {\n            *eb = newrax;\n            rax = newrax;\n        }\n    }\n\n    raxStart(&ri,rax);\n    if (!*cursor) {\n        ebDefragRaxNode(&rax->head, defragfns);\n        /* assign the iterator node callback before the seek, so that the\n         * initial nodes that are processed till the first item are covered */\n        ri.node_cb = ebDefragRaxNode;\n        ri.privdata = defragfns;\n        raxSeek(&ri, \"^\", NULL, 0);\n    } else {\n        /* if cursor is non-zero, we seek to the static 'next'.\n         * Since node_cb is set after the seek operation, any node traversed during\n         * the seek wouldn't be defragmented. To prevent this, we advance to the\n         * next node before exiting the previous run, ensuring it gets defragmented\n         * instead of being skipped during the current seek. */\n        if (!raxSeek(&ri, \">=\", next, EB_KEY_SIZE)) {\n            *cursor = 0;\n            raxStop(&ri);\n            return 0;\n        }\n        /* assign the iterator node callback after the seek, so that the\n         * initial nodes that were already processed till now aren't covered */\n        ri.node_cb = ebDefragRaxNode;\n        ri.privdata = defragfns;\n    }\n\n    /* Defrag the bucket in the rax node. */\n    assert(raxNext(&ri));\n    ebDefragRaxBucket(type, &ri, defragfns, privdata);\n\n    /* Move to next node. */\n    if (!raxNext(&ri)) {\n        /* If we reached the end, we can stop. 
*/\n        *cursor = 0;\n        raxStop(&ri);\n        return 0;\n    }\n\n    (*cursor)++;\n    assert(ri.key_len == sizeof(next));\n    memcpy(next, ri.key, ri.key_len);\n    raxStop(&ri);\n    return 1;\n}\n\n/* Reallocates the memory used by ebucket components (segments and items)\n * using the provided allocation functions. This was added for the active\n * defrag feature.\n *\n * The 'defragfns' callbacks are called with a pointer to memory that the\n * callback can reallocate. The callbacks should return a new memory address\n * or NULL, where NULL means that no reallocation happened and the old memory\n * is still valid.\n *\n * Returns 0 if no more work needs to be done. Otherwise 1.\n */\nint ebScanDefrag(ebuckets *eb, EbucketsType *type, unsigned long *cursor,\n                 ebDefragFunctions *defragfns, void *privdata)\n{\n    if (ebIsEmpty(*eb)) {\n        *cursor = 0;\n        return 0;\n    }\n\n    if (ebIsList(*eb)) {\n        ebDefragList(eb, type, defragfns, privdata);\n        *cursor = 0;\n        return 0;\n    } else {\n        return ebDefragRax(eb, type, cursor, defragfns, privdata);\n    }\n}\n\n/* Retrieves the expiration time associated with the given item. If the associated\n * ExpireMeta is marked as trash, then return EB_EXPIRE_TIME_INVALID */\nuint64_t ebGetExpireTime(EbucketsType *type, eItem item) {\n    ExpireMeta *meta = type->getExpireMeta(item);\n    if (unlikely(meta->trash)) return EB_EXPIRE_TIME_INVALID;\n    return ebGetMetaExpTime(meta);\n}\n\n/* Init ebuckets iterator\n *\n * This is a non-safe iterator. Any modification to ebuckets will invalidate the\n * iterator. Calling this function references the first item in ebuckets, the\n * one with the minimal expiration time. 
If no items to iterate, then\n * iter->currItem will be NULL and iter->itemsCurrBucket will be set to 0.\n */\nvoid ebStart(EbucketsIterator *iter, ebuckets eb, EbucketsType *type) {\n    iter->eb = eb;\n    iter->type = type;\n    iter->isRax = 0;\n\n    if (ebIsEmpty(eb)) {\n        iter->currItem = NULL;\n        iter->itemsCurrBucket = 0;\n    } else if (ebIsList(eb)) {\n        iter->currItem = ebGetListPtr(type, eb);\n        iter->itemsCurrBucket = type->getExpireMeta(iter->currItem)->numItems;\n    } else {\n        rax *rax = ebGetRaxPtr(eb);\n        raxStart(&iter->raxIter, rax);\n        raxSeek(&iter->raxIter, \"^\", NULL, 0);\n        raxNext(&iter->raxIter);\n        FirstSegHdr *firstSegHdr = iter->raxIter.data;\n        iter->itemsCurrBucket = firstSegHdr->totalItems;\n        iter->currItem = firstSegHdr->head;\n        iter->isRax = 1;\n    }\n}\n\n/* Advance iterator to the next item\n *\n * Returns:\n *   - 0 if the end of ebuckets has been reached, setting `iter->currItem`\n *       to NULL.\n *   - 1 otherwise, updating `iter->currItem` to the next item.\n */\nint ebNext(EbucketsIterator *iter) {\n    if (iter->currItem == NULL)\n        return 0;\n\n    eItem item = iter->currItem;\n    ExpireMeta *meta = iter->type->getExpireMeta(item);\n    if (iter->isRax) {\n        if (meta->lastItemBucket) {\n            if (raxNext(&iter->raxIter)) {\n                FirstSegHdr *firstSegHdr = iter->raxIter.data;\n                iter->currItem = firstSegHdr->head;\n                iter->itemsCurrBucket = firstSegHdr->totalItems;\n            } else {\n                iter->currItem = NULL;\n            }\n        } else if (meta->lastInSegment) {\n            NextSegHdr *nextSegHdr = meta->next;\n            iter->currItem = nextSegHdr->head;\n        } else {\n            iter->currItem = meta->next;\n        }\n    } else {\n        iter->currItem = meta->next;\n    }\n\n    if (iter->currItem == NULL) {\n        iter->itemsCurrBucket = 0;\n        
return 0;\n    }\n\n    return 1;\n}\n\n/* Advance the iterator to the next bucket\n *\n * Returns:\n *   - 0 if no more ebuckets are available, setting `iter->currItem` to NULL\n *       and `iter->itemsCurrBucket` to 0.\n *   - 1 otherwise, updating `iter->currItem` and `iter->itemsCurrBucket` for the\n *       next ebucket.\n */\nint ebNextBucket(EbucketsIterator *iter) {\n    if (iter->currItem == NULL)\n        return 0;\n\n    if ((iter->isRax) && (raxNext(&iter->raxIter))) {\n        FirstSegHdr *currSegHdr = iter->raxIter.data;\n        iter->currItem = currSegHdr->head;\n        iter->itemsCurrBucket = currSegHdr->totalItems;\n    } else {\n        iter->currItem = NULL;\n        iter->itemsCurrBucket = 0;\n    }\n    return 1;\n}\n\n/* Stop and cleanup the ebuckets iterator */\nvoid ebStop(EbucketsIterator *iter) {\n    if (iter->isRax)\n        raxStop(&iter->raxIter);\n}\n\n/*** Unit tests ***/\n\n#ifdef REDIS_TEST\n#include <stddef.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <string.h>\n#include \"testhelp.h\"\n\n#define TEST(name) printf(\"[TEST] >>> %s\\n\", name);\n#define TEST_COND(name, cond) printf(\"[%s] >>> %s\\n\", (cond) ? \"TEST\" : \"BYPS\", name);  if (cond)\n\ntypedef struct MyItem {\n    int index;\n    ExpireMeta mexpire;\n} MyItem;\n\ntypedef struct TimeRange {\n    uint64_t start;\n    uint64_t end;\n} TimeRange;\n\nExpireMeta *getMyItemExpireMeta(const eItem item) {\n    return &((MyItem *)item)->mexpire;\n}\n\nExpireAction expireItemCb(eItem item, void *ctx);\nvoid deleteItemCb(eItem item, void *ctx);\nEbucketsType myEbucketsType = {\n    .getExpireMeta = getMyItemExpireMeta,\n    .onDeleteItem = deleteItemCb,\n    .itemsAddrAreOdd = 0,\n};\n\nEbucketsType myEbucketsType2 = {\n    .getExpireMeta = getMyItemExpireMeta,\n    .onDeleteItem = NULL,\n    .itemsAddrAreOdd = 0,\n};\n\n/* XOR over all items' time-expiration. 
Must be 0 after all addition/removal */\nuint64_t expItemsHashValue = 0;\n\nExpireAction expireItemCb(eItem item, void *ctx) {\n    ExpireMeta *meta = myEbucketsType.getExpireMeta(item);\n    uint64_t expTime = ebGetMetaExpTime(meta);\n    expItemsHashValue = expItemsHashValue ^ expTime;\n\n    TimeRange *range = (TimeRange *) ctx;\n    /* Verify expiration time is within the range */\n    if (range != NULL) assert(expTime >= range->start && expTime <= range->end);\n\n/* If benchmarking, avoid the heavyweight free operation. It is user-side logic. */\n#ifndef EB_TEST_BENCHMARK\n    zfree(item);\n#endif\n    return ACT_REMOVE_EXP_ITEM;\n}\n\nExpireAction expireUpdateThirdItemCb(eItem item, void *ctx) {\n    uint64_t expTime = (uint64_t) (uintptr_t) ctx;\n    static int calls = 0;\n    if ((calls++) == 3) {\n        ebSetMetaExpTime(&(((MyItem *)item)->mexpire), expTime );\n        return ACT_UPDATE_EXP_ITEM;\n    }\n\n    return ACT_REMOVE_EXP_ITEM;\n}\n\nvoid deleteItemCb(eItem item, void *ctx) {\n    UNUSED(ctx);\n    zfree(item);\n}\n\nvoid addItems(ebuckets *eb, uint64_t startExpire, int step, uint64_t numItems, MyItem **ar) {\n    for (uint64_t i = 0 ; i < numItems ; i++) {\n        uint64_t expireTime = startExpire + (i * step);\n        expItemsHashValue = expItemsHashValue ^ expireTime;\n        MyItem *item = zmalloc(sizeof(MyItem));\n        if (ar) ar[i] = item;\n        ebAdd(eb, &myEbucketsType, item, expireTime);\n    }\n}\n\n/* expireRanges - given as bucket-keys to be agnostic to the different configurations\n *                of EB_BUCKET_KEY_PRECISION */\nvoid distributeTest(int lowestTime,\n                    uint64_t *expireRanges,\n                    const int *ItemsPerRange,\n                    int numRanges,\n                    int isExpire,\n                    int printStat) {\n    struct timeval timeBefore, timeAfter, timeDryRun, timeCreation, timeDestroy;\n    ebuckets eb = ebCreate();\n\n    /* create items with random expiry 
*/\n    uint64_t startRange = lowestTime;\n\n    expItemsHashValue = 0;\n    void *listOfItems = NULL;\n    for (int i = 0; i < numRanges; i++) {\n        uint64_t endRange = EB_BUCKET_EXP_TIME(expireRanges[i]);\n        for (int j = 0; j < ItemsPerRange[i]; j++) {\n            uint64_t randomExpiry = (rand() % (endRange - startRange)) + startRange;\n            expItemsHashValue = expItemsHashValue ^ (uint32_t) randomExpiry;\n            MyItem *item = zmalloc(sizeof(MyItem));\n            getMyItemExpireMeta(item)->next = listOfItems;\n            listOfItems = item;\n            ebSetMetaExpTime(getMyItemExpireMeta(item), randomExpiry);\n        }\n        startRange = EB_BUCKET_EXP_TIME(expireRanges[i]); /* next start range */\n    }\n\n    /* Take a memory sample after all items are allocated and before insertion into ebuckets */\n    size_t usedMemBefore = zmalloc_used_memory();\n\n    gettimeofday(&timeBefore, NULL);\n    while (listOfItems) {\n        MyItem *item = (MyItem *)listOfItems;\n        listOfItems = getMyItemExpireMeta(item)->next;\n        uint64_t expireTime = ebGetMetaExpTime(&item->mexpire);\n        ebAdd(&eb, &myEbucketsType, item, expireTime);\n    }\n    gettimeofday(&timeAfter, NULL);\n    timersub(&timeAfter, &timeBefore, &timeCreation);\n\n    gettimeofday(&timeBefore, NULL);\n    ebExpireDryRun(eb, &myEbucketsType, 0xFFFFFFFFFFFF);  /* expire dry-run all */\n    gettimeofday(&timeAfter, NULL);\n    timersub(&timeAfter, &timeBefore, &timeDryRun);\n\n    if (printStat) {\n        _ebPrint(eb, &myEbucketsType, zmalloc_used_memory() - usedMemBefore, 0);\n    }\n\n    gettimeofday(&timeBefore, NULL);\n    if (isExpire) {\n        startRange = lowestTime;\n        /* Active expire according to the ranges */\n        for (int i = 0 ; i < numRanges ; i++) {\n\n            /* When checking how many items are expired, we need to take into\n             * consideration EB_BUCKET_KEY_PRECISION. 
The value of \"info->now\"\n             * will be adjusted by ebActiveExpire() to look up only the buckets\n             * with assigned keys that are older than 1<<EB_BUCKET_KEY_PRECISION\n             * msec ago. That is, we need to visit only the buckets with keys\n             * that are \"<\" EB_BUCKET_KEY(info->now) and not \"<=\".\n             * But if there is a list behind ebuckets, then this limitation does\n             * not apply and the operator \"<=\" will be used instead.\n             *\n             * The '-1' in the case of a list makes both cases aligned to give\n             * the same result */\n            uint64_t now = EB_BUCKET_EXP_TIME(expireRanges[i]) + (ebIsList(eb) ? -1 : 0);\n\n            TimeRange range = {EB_BUCKET_EXP_TIME(startRange), EB_BUCKET_EXP_TIME(expireRanges[i]) };\n            ExpireInfo info = {\n                    .maxToExpire = 0xFFFFFFFF,\n                    .onExpireItem = expireItemCb,\n                    .ctx = &range,\n                    .now = now,\n                    .itemsExpired = 0};\n\n            ebExpire(&eb, &myEbucketsType, &info);\n\n            assert( (eb==NULL && (i + 1 == numRanges)) || (eb!=NULL && (i + 1 < numRanges)) );\n            assert( info.itemsExpired == (uint64_t) ItemsPerRange[i]);\n            startRange = expireRanges[i];\n        }\n        assert(eb == NULL);\n        assert( (expItemsHashValue & 0xFFFFFFFF) == 0);\n    }\n    ebDestroy(&eb, &myEbucketsType, NULL);\n    gettimeofday(&timeAfter, NULL);\n    timersub(&timeAfter, &timeBefore, &timeDestroy);\n\n    if (printStat) {\n        printf(\"Time elapsed ebuckets creation     : %ld.%06ld\\n\", (long int)timeCreation.tv_sec, (long int)timeCreation.tv_usec);\n        printf(\"Time elapsed active-expire dry-run : %ld.%06ld\\n\", (long int)timeDryRun.tv_sec, (long int)timeDryRun.tv_usec);\n        if (isExpire)\n            printf(\"Time elapsed active-expire         : %ld.%06ld\\n\", (long int)timeDestroy.tv_sec, (long 
int)timeDestroy.tv_usec);\n        else\n            printf(\"Time elapsed destroy               : %ld.%06ld\\n\", (long int)timeDestroy.tv_sec, (long int)timeDestroy.tv_usec);\n    }\n\n}\n\n#define UNUSED(x) (void)(x)\n#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))\n\nvoid *defragCallback(void *ptr) {\n    size_t size = zmalloc_usable_size(ptr);\n    void *newitem = zmalloc(size);\n    memcpy(newitem, ptr, size);\n    zfree(ptr);\n    return newitem;\n}\n\nvoid *defragItemCallback(void *ptr, void *privdata) {\n    MyItem *item = ptr;\n    MyItem **items = privdata;\n    int index = item->index;\n    void *newitem = defragCallback(ptr);\n    if (newitem)\n        items[index] = newitem;\n    return newitem;\n}\n\nint ebucketsTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    srand(0);\n\n    int verbose = (flags & REDIS_TEST_VERBOSE) ? 2 : 1;\n    UNUSED(verbose);\n\n#ifdef EB_TEST_BENCHMARK\n    TEST(\"ebuckets - benchmark 10 million items: alloc + add + activeExpire\") {\n\n        struct TestParams {\n            uint64_t minExpire;\n            uint64_t maxExpire;\n            int items;\n            const char *description;\n        } testCases[] = {\n            { 1805092100000, 1805092100000 + (uint64_t) 1,                10000000, \"1 msec distribution\"  },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000,             10000000, \"1 sec distribution\"   },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000*60,          10000000, \"1 min distribution\"   },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000*60*60,       10000000, \"1 hour distribution\"  },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000*60*60*24,    10000000, \"1 day distribution\"   },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000*60*60*24*7,  10000000, \"1 week distribution\"  },\n            { 1805092100000, 1805092100000 + (uint64_t) 1000*60*60*24*30, 10000000, \"1 month 
distribution\" }\n        };\n\n        /* selected test */\n        uint32_t tid = EB_TEST_BENCHMARK;\n\n        printf(\"\\n------ TEST EBUCKETS: %s ------\\n\", testCases[tid].description);\n        uint64_t expireRanges[] = { testCases[tid].minExpire, testCases[tid].maxExpire };\n        int itemsPerRange[] = { 0, testCases[tid].items };\n\n        /* expireRanges[] is provided to distributeTest() as bucket-key values */\n        for (uint32_t j = 0; j < ARRAY_SIZE(expireRanges); ++j) {\n            expireRanges[j] = expireRanges[j] >> EB_BUCKET_KEY_PRECISION;\n        }\n\n        distributeTest(0, expireRanges, itemsPerRange, ARRAY_SIZE(expireRanges), 1, 1);\n        return 0;\n    }\n#endif\n\n    TEST(\"basic iterator test\") {\n        MyItem *items[100];\n        for (uint32_t numItems = 0 ; numItems < ARRAY_SIZE(items) ; ++numItems) {\n            ebuckets eb = NULL;\n            EbucketsIterator iter;\n\n            /* Create and add items to ebuckets */\n            for (uint32_t i = 0; i < numItems; i++) {\n                items[i] = zmalloc(sizeof(MyItem));\n                ebAdd(&eb, &myEbucketsType, items[i], i);\n            }\n\n            /* iterate items */\n            ebStart(&iter, eb, &myEbucketsType);\n            for (uint32_t i = 0; i < numItems; i++) {\n                assert(iter.currItem == items[i]);\n                int res = ebNext(&iter);\n                if (i+1<numItems) {\n                    assert(res == 1);\n                    assert(iter.currItem != NULL);\n                } else {\n                    assert(res == 0);\n                    assert(iter.currItem == NULL);\n                }\n            }\n            ebStop(&iter);\n\n            /* iterate buckets */\n            ebStart(&iter, eb, &myEbucketsType);\n            uint32_t countItems = 0;\n\n            uint32_t countBuckets = 0;\n            while (1) {\n                countItems += iter.itemsCurrBucket;\n                if (!ebNextBucket(&iter)) 
break;\n                countBuckets++;\n            }\n            ebStop(&iter);\n            assert(countItems == numItems);\n            if (numItems>=8) assert(numItems/8 >= countBuckets);\n            ebDestroy(&eb, &myEbucketsType, NULL);\n        }\n    }\n\n    TEST(\"list - Create a single item, get TTL, and remove\") {\n        MyItem *singleItem = zmalloc(sizeof(MyItem));\n        ebuckets eb = NULL;\n        ebAdd(&eb, &myEbucketsType, singleItem, 1000);\n        assert(ebGetExpireTime(&myEbucketsType, singleItem) == 1000 );\n\n        /* remove the item */\n        assert(ebRemove(&eb, &myEbucketsType, singleItem));\n        /* now the ebuckets is empty */\n        assert(ebRemove(&eb, &myEbucketsType, singleItem) == 0);\n\n        zfree(singleItem);\n\n        ebDestroy(&eb, &myEbucketsType, NULL);\n    }\n\n    TEST(\"list - Create few items on different times, get TTL, and then remove\") {\n        MyItem *items[EB_LIST_MAX_ITEMS];\n        ebuckets eb = NULL;\n        for (int i = 0 ; i < EB_LIST_MAX_ITEMS  ; i++) {\n            items[i] = zmalloc(sizeof(MyItem));\n            ebAdd(&eb, &myEbucketsType, items[i], i);\n        }\n\n        for (uint64_t i = 0 ; i < EB_LIST_MAX_ITEMS ; i++) {\n            assert(ebGetExpireTime(&myEbucketsType, items[i]) == i );\n            assert(ebRemove(&eb, &myEbucketsType, items[i]));\n        }\n\n        for (int i = 0 ; i < EB_LIST_MAX_ITEMS  ; i++)\n            zfree(items[i]);\n\n        ebDestroy(&eb, &myEbucketsType, NULL);\n    }\n\n    TEST(\"list - Create few items on different times, get TTL, and then delete\") {\n        MyItem *items[EB_LIST_MAX_ITEMS];\n        ebuckets eb = NULL;\n        for (int i = 0 ; i < EB_LIST_MAX_ITEMS  ; i++) {\n            items[i] = zmalloc(sizeof(MyItem));\n            ebAdd(&eb, &myEbucketsType, items[i], i);\n        }\n\n        for (uint64_t i = 0 ; i < EB_LIST_MAX_ITEMS ; i++) {\n            assert(ebGetExpireTime(&myEbucketsType, items[i]) == i );\n        
}\n\n        ebDestroy(&eb, &myEbucketsType, NULL);\n    }\n\n    TEST_COND(\"ebuckets - Add items with increased/decreased expiration time and then expire\",\n              EB_BUCKET_KEY_PRECISION > 0)\n    {\n        ebuckets eb = NULL;\n\n        for (int isDecr = 0; isDecr < 2; ++isDecr) {\n            for (uint32_t numItems = 1; numItems < 64; ++numItems) {\n                uint64_t step = 1 << EB_BUCKET_KEY_PRECISION;\n\n                if (isDecr == 0)\n                    addItems(&eb, 0, step, numItems, NULL);\n                else\n                    addItems(&eb, (numItems - 1) * step, -step, numItems, NULL);\n\n                for (uint32_t i = 1; i <= numItems; i++) {\n                    TimeRange range = {EB_BUCKET_EXP_TIME(i - 1), EB_BUCKET_EXP_TIME(i)};\n                    ExpireInfo info = {\n                            .maxToExpire = 1,\n                            .onExpireItem = expireItemCb,\n                            .ctx = &range,\n                            .now = EB_BUCKET_EXP_TIME(i),\n                            .itemsExpired = 0};\n\n                    ebExpire(&eb, &myEbucketsType, &info);\n                    assert(info.itemsExpired == 1);\n                    if (i == numItems) { /* if last item */\n                        assert(eb == NULL);\n                        assert(info.nextExpireTime == EB_EXPIRE_TIME_INVALID);\n                    } else {\n                        assert(info.nextExpireTime == EB_BUCKET_EXP_TIME(i));\n                    }\n                }\n            }\n        }\n    }\n\n    TEST_COND(\"ebuckets - Create items with same expiration time and then expire\",\n              EB_BUCKET_KEY_PRECISION > 0)\n    {\n        ebuckets eb = NULL;\n        uint64_t expirePerIter = 2;\n        for (uint32_t numIterations = 1; numIterations < 100; ++numIterations) {\n            uint32_t numItems = numIterations * expirePerIter;\n            uint64_t expireTime = (1 << EB_BUCKET_KEY_PRECISION) + 1;\n           
 addItems(&eb, expireTime, 0, numItems, NULL);\n\n            for (uint32_t i = 1; i <= numIterations; i++) {\n                ExpireInfo info = {\n                        .maxToExpire = expirePerIter,\n                        .onExpireItem = expireItemCb,\n                        .ctx = NULL,\n                        .now = (2 << EB_BUCKET_KEY_PRECISION),\n                        .itemsExpired = 0};\n                ebExpire(&eb, &myEbucketsType, &info);\n                assert(info.itemsExpired == expirePerIter);\n                if (i == numIterations) { /* if last item */\n                    assert(eb == NULL);\n                    assert(info.nextExpireTime == EB_EXPIRE_TIME_INVALID);\n                } else {\n                    assert(info.nextExpireTime == expireTime);\n                }\n            }\n        }\n    }\n\n    TEST(\"list - Create few items on random times and then expire/delete \") {\n        for (int isExpire = 0 ; isExpire <= 1 ; ++isExpire ) {\n            uint64_t expireRanges[] = {1000};   /* bucket-keys */\n            int itemsPerRange[] = {EB_LIST_MAX_ITEMS};\n            distributeTest(0, expireRanges, itemsPerRange,\n                           ARRAY_SIZE(expireRanges), isExpire, 0);\n        }\n    }\n\n    TEST(\"list - Create few items (list) on same time and then active expire/delete \") {\n        for (int isExpire = 0 ; isExpire <= 1 ; ++isExpire ) {\n            uint64_t expireRanges[] = {1, 2};  /* bucket-keys */\n            int itemsPerRange[] = {0, EB_LIST_MAX_ITEMS};\n\n            distributeTest(0, expireRanges, itemsPerRange,\n                           ARRAY_SIZE(expireRanges), isExpire, 0);\n        }\n    }\n\n    TEST(\"ebuckets - Create many items on same time and then active expire/delete \") {\n        for (int isExpire = 1 ; isExpire <= 1 ; ++isExpire ) {\n            uint64_t expireRanges[] = {1, 2}; /* bucket-keys */\n            int itemsPerRange[] = {0, 20};\n\n            distributeTest(0, 
expireRanges, itemsPerRange,\n                           ARRAY_SIZE(expireRanges), isExpire, 0);\n        }\n    }\n\n    TEST(\"ebuckets - Create items on different times and then expire/delete \") {\n        for (int isExpire = 0 ; isExpire <= 0 ; ++isExpire ) {\n            for (int numItems = 1 ; numItems < 100 ; ++numItems ) {\n                uint64_t expireRanges[] = {1000000}; /* bucket-keys */\n                int itemsPerRange[] = {numItems};\n                distributeTest(0, expireRanges, itemsPerRange,\n                               ARRAY_SIZE(expireRanges), 1, 0);\n            }\n        }\n    }\n\n    TEST(\"ebuckets - Create items on different times and then ebRemove() \") {\n        ebuckets eb = NULL;\n\n        for (int step = -1 ; step <= 1 ; ++step) {\n            for (int numItems = 1; numItems <= EB_SEG_MAX_ITEMS*3; ++numItems) {\n                for (int offset = 0; offset < numItems; offset++) {\n                    MyItem *items[numItems];\n                    uint64_t startValue = 1000 << EB_BUCKET_KEY_PRECISION;\n                    int stepValue = step * (1 << EB_BUCKET_KEY_PRECISION);\n                    addItems(&eb, startValue, stepValue, numItems, items);\n                    for (int i = 0; i < numItems; i++) {\n                        int at = (i + offset) % numItems;\n                        assert(ebRemove(&eb, &myEbucketsType, items[at]));\n                        zfree(items[at]);\n                    }\n                    assert(eb == NULL);\n                }\n            }\n        }\n    }\n\n    TEST(\"ebuckets - test min/max expire time\") {\n        ebuckets eb = NULL;\n        MyItem items[3*EB_SEG_MAX_ITEMS];\n        for (int numItems = 1 ; numItems < (int)ARRAY_SIZE(items) ; numItems++) {\n            uint64_t minExpTime = RAND_MAX, maxExpTime = 0;\n            for (int i = 0; i < numItems; i++) {\n                 /* generate random expiration time */\n                uint64_t expireTime = rand();\n             
   if (expireTime < minExpTime) minExpTime = expireTime;\n                if (expireTime > maxExpTime) maxExpTime = expireTime;\n                ebAdd(&eb, &myEbucketsType2, items + i, expireTime);\n                assert(ebGetNextTimeToExpire(eb, &myEbucketsType2) == minExpTime);\n                assert(ebGetMaxExpireTime(eb, &myEbucketsType2, 0) == maxExpTime);\n            }\n            ebDestroy(&eb, &myEbucketsType2, NULL);\n        }\n    }\n\n    TEST_COND(\"ebuckets - test min/max expire time, with extended-segment\",\n              (1<<EB_BUCKET_KEY_PRECISION) > 2*EB_SEG_MAX_ITEMS) {\n        ebuckets eb = NULL;\n        MyItem items[(2*EB_SEG_MAX_ITEMS)-1];\n        for (int numItems = EB_SEG_MAX_ITEMS+1 ; numItems < (int)ARRAY_SIZE(items) ; numItems++) {\n            /* First reach extended-segment (two chained segments in a bucket) */\n            for (int i = 0; i <= EB_SEG_MAX_ITEMS; i++) {\n                uint64_t itemExpireTime = (1<<EB_BUCKET_KEY_PRECISION) + i;\n                ebAdd(&eb, &myEbucketsType2, items + i, itemExpireTime);\n            }\n\n            /* Now start adding more items to extended-segment and verify min/max */\n            for (int i = EB_SEG_MAX_ITEMS+1; i < numItems; i++) {\n                uint64_t itemExpireTime = (1<<EB_BUCKET_KEY_PRECISION) + i;\n                ebAdd(&eb, &myEbucketsType2, items + i, itemExpireTime);\n                assert(ebGetNextTimeToExpire(eb, &myEbucketsType2) == (uint64_t)(2<<EB_BUCKET_KEY_PRECISION));\n                assert(ebGetMaxExpireTime(eb, &myEbucketsType2, 0) == (uint64_t)(2<<EB_BUCKET_KEY_PRECISION));\n                assert(ebGetMaxExpireTime(eb, &myEbucketsType2, 1) == (uint64_t)((1<<EB_BUCKET_KEY_PRECISION) + i));\n            }\n            ebDestroy(&eb, &myEbucketsType2, NULL);\n        }\n    }\n\n    TEST(\"ebuckets - active-expire dry-run\") {\n        ebuckets eb = NULL;\n        MyItem items[2*EB_SEG_MAX_ITEMS];\n\n        for (int numItems = 1 ; numItems < 
(int)ARRAY_SIZE(items) ; numItems++) {\n            int maxExpireKey = (numItems % 2) ? 40 : 2;\n            /* Allocate numItems and add to ebuckets */\n            for (int i = 0; i < numItems; i++) {\n                /* generate random expiration time */\n                uint64_t expireTime = (rand() % maxExpireKey) << EB_BUCKET_KEY_PRECISION;\n                ebAdd(&eb, &myEbucketsType2, items + i, expireTime);\n            }\n\n            for (int i = 0 ; i <= maxExpireKey ; ++i) {\n                uint64_t now = i << EB_BUCKET_KEY_PRECISION;\n\n                /* Count how many items are expired */\n                uint64_t expectedNumExpired = 0;\n                for (int j = 0; j < numItems; j++) {\n                    if (ebGetExpireTime(&myEbucketsType2, items + j) < now)\n                        expectedNumExpired++;\n                }\n                /* Perform dry-run and verify number of expired items */\n                assert(ebExpireDryRun(eb, &myEbucketsType2, now) == expectedNumExpired);\n            }\n            ebDestroy(&eb, &myEbucketsType2, NULL);\n        }\n    }\n\n    TEST(\"ebuckets - active expire callback returns ACT_UPDATE_EXP_ITEM\") {\n        ebuckets eb = NULL;\n        MyItem items[2*EB_SEG_MAX_ITEMS];\n        int numItems = 2*EB_SEG_MAX_ITEMS;\n\n        /* timeline */\n        int expiredAt           = 2,\n            applyActiveExpireAt = 3,\n            updateItemTo        = 5,\n            expectedExpiredAt   = 6;\n\n        /* Allocate numItems and add to ebuckets */\n        for (int i = 0; i < numItems; i++)\n            ebAdd(&eb, &myEbucketsType2, items + i, expiredAt << EB_BUCKET_KEY_PRECISION);\n\n        /* active-expire. 
Expected that all but one will be expired */\n        ExpireInfo info = {\n                .maxToExpire = 0xFFFFFFFF,\n                .onExpireItem = expireUpdateThirdItemCb,\n                .ctx = (void *) (uintptr_t) (updateItemTo << EB_BUCKET_KEY_PRECISION),\n                .now = applyActiveExpireAt << EB_BUCKET_KEY_PRECISION,\n                .itemsExpired = 0};\n        ebExpire(&eb, &myEbucketsType2, &info);\n        assert(info.itemsExpired == (uint64_t) numItems);\n        assert(info.nextExpireTime == (uint64_t)updateItemTo << EB_BUCKET_KEY_PRECISION);\n        assert(ebGetTotalItems(eb, &myEbucketsType2) == 1);\n\n        /* active-expire. Expected that all will be expired */\n        ExpireInfo info2 = {\n                .maxToExpire = 0xFFFFFFFF,\n                .onExpireItem = expireUpdateThirdItemCb,\n                .ctx = (void *) (uintptr_t) (updateItemTo << EB_BUCKET_KEY_PRECISION),\n                .now = expectedExpiredAt << EB_BUCKET_KEY_PRECISION,\n                .itemsExpired = 0};\n        ebExpire(&eb, &myEbucketsType2, &info2);\n        assert(info2.itemsExpired == (uint64_t) 1);\n        assert(info2.nextExpireTime == EB_EXPIRE_TIME_INVALID);\n        assert(ebGetTotalItems(eb, &myEbucketsType2) == 0);\n\n        ebDestroy(&eb, &myEbucketsType2, NULL);\n\n    }\n\n    TEST(\"item defragmentation\") {\n        for (int s = 1; s <= EB_LIST_MAX_ITEMS * 3; s++) {\n            ebuckets eb = NULL;\n            MyItem *items[s];\n            for (int i = 0; i < s; i++) {\n                items[i] = zmalloc(sizeof(MyItem));\n                items[i]->index = i;\n                ebAdd(&eb, &myEbucketsType, items[i], i);\n            }\n            assert((s <= EB_LIST_MAX_ITEMS) ? ebIsList(eb) : !ebIsList(eb));\n            /* Defrag all the items. 
*/\n            unsigned long cursor = 0;\n            ebDefragFunctions defragfns = {\n                .defragAlloc = defragCallback,\n                .defragItem = defragItemCallback,\n            };\n            while (ebScanDefrag(&eb, &myEbucketsType, &cursor, &defragfns, items)) {}\n            /* Verify that the data is not corrupted. */\n            ebValidate(eb, &myEbucketsType);\n            for (int i = 0; i < s; i++)\n                assert(items[i]->index == i);\n            ebDestroy(&eb, &myEbucketsType, NULL);\n        }\n    }\n\n//    TEST(\"segment - Add smaller item to full segment that all share same ebucket-key\")\n//    TEST(\"segment - Add item to full segment and make it extended-segment (all share same ebucket-key)\")\n//    TEST(\"ebuckets - Create rax tree with extended-segment and add item before\")\n\n    return 0;\n}\n\n#endif\n"
  },
  {
    "path": "src/ebuckets.h",
    "content": "/*\n * Copyright Redis Ltd. 2024 - present\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n *\n * WHAT IS EBUCKETS?\n * -----------------\n * ebuckets is used to store items that are set with an expiration-time. It\n * supports the basic API of add, remove and active expiration. The implementation\n * is based on a rax-tree, or a plain linked-list when small. The expiration time\n * of the items is used as the key to traverse the rax-tree.\n *\n * Instead of holding a distinct item in each leaf of the rax-tree we can aggregate\n * items into small segments and hold one in each leaf. This way we can avoid\n * frequent modification of the rax-tree, since many of the modifications\n * will be done only at the segment level. It will also save memory because\n * a rax-tree can be costly, around 40 bytes per leaf (with the rax-key limited\n * to 6 bytes), whereas each additional item in a segment costs only the size of\n * the 'next' pointer in a list (8 bytes) and a few more bytes for maintenance of\n * the segment.\n *\n * EBUCKETS STRUCTURE\n * ------------------\n * The ebuckets data structure is organized in a hierarchical manner as follows:\n *\n * 1. ebuckets: This is the top-level data structure. It can be either a rax tree\n *    or a plain linked list. It contains one or more buckets, each representing\n *    an interval in time.\n *\n * 2. bucket: Each bucket represents an interval in time and contains one or more\n *    segments. The key in the rax-tree for each bucket represents the lower\n *    bound of the expiration-time for the items within this bucket. The key of\n *    the following bucket represents the upper bound expiration-time.\n *\n * 3. segment: Each segment within a bucket can hold up to `EB_SEG_MAX_ITEMS`\n *    items as a linked list. 
If there are more, the segment will try to\n *    split the bucket. To avoid wasting memory, it is a singly linked list (only\n *    a next-item pointer). It is a cyclic linked-list to allow efficient removal\n *    of items from the middle of the segment without traversing the rax tree.\n *\n * 4. item: Each item that is stored in ebuckets should embed the ExpireMeta\n *    struct and supply a getter function (see EbucketsType.getExpireMeta). This\n *    struct holds the expire-time of the item and a few more fields that are used\n *    to maintain the segment's data-structure.\n *\n * SPLITTING BUCKET\n * ----------------\n * Each segment can hold up to `EB_SEG_MAX_ITEMS` items. On insertion of a new\n * item, it will try to split the segment. Here is an example of adding an item\n * with expiration of 42 to a segment that already reached its maximum capacity,\n * which causes the segment, and in turn the bucket, to split into finer\n * grained ranges:\n *\n *       BUCKETS                             BUCKETS\n *      [ 00-10 ] -> size(Seg0) = 11   ==>  [ 00-10 ] -> size(Seg0) = 11\n *      [ 11-76 ] -> size(Seg1) = 16        [ 11-36 ] -> size(Seg1) = 9\n *                                          [ 37-76 ] -> size(Seg2) = 7\n *\n * EXTENDING BUCKET\n * ----------------\n * In the example above, the reason it wasn't split evenly is that Seg1 must have\n * been holding items with the same TTL and they must reside together in the same\n * bucket after the split. Which brings us to another important point. If there\n * is a segment that reached its maximum capacity and all the items have the same\n * expiration-time key, then we cannot split the bucket. Instead we aggregate all\n * the items with the same expiration-time key by allocating an extended-segment\n * and chaining it to the first segment in the visited bucket. 
In that sense, extended\n * segments will only hold items with the same expiration-time key.\n *\n *       BUCKETS                            BUCKETS\n *      [ 00-10 ] -> size(Seg0)=11   ==>   [ 00-10 ] -> size(Seg0)=11\n *      [ 11-12 ] -> size(Seg1)=16         [ 11-12 ] -> size(Seg1)=1 -> size(Seg2)=16\n *\n * LIMITING RAX TREE DEPTH\n * -----------------------\n * The rax tree is basically a B-tree and its depth is bounded by the size of\n * the key. Holding 6 bytes for the expiration-time key is more than enough to\n * represent unix-time in msec, and in turn the depth of the tree is limited to\n * 6 levels. At first glance this might look sufficient, but we need to take into\n * consideration the heavyweight maintenance and traversal of each node in the\n * B-tree.\n *\n * And so, we can further prune the tree, since holding keys with msec precision\n * in the tree doesn't bring much value. The active-expiration operation can\n * live with deletion of expired items, say, older than 1 sec, which means the size\n * of the expiration-time keys in the rax tree becomes no more than ~4.5 bytes and\n * we also get rid of the \"noisy\" bits which would most probably cause yet another\n * branching and modification of the rax tree for items whose expiration times\n * differ by less than 1 second. The lazy expiration will still be precise and\n * without compromise on accuracy because the exact expiration-time is also kept\n * attached to each item, in `ExpireMeta`, and each traversal of an item with\n * expiration will behave as expected down to the msec. 
Take care to configure\n * `EB_BUCKET_KEY_PRECISION` according to your needs.\n *\n * EBUCKET KEY\n * -----------\n * Taking into account the configured value of `EB_BUCKET_KEY_PRECISION`, two items\n * with expiration-time t1 and t2 will be considered to have the same key in the\n * rax-tree/buckets if and only if:\n *\n *              EB_BUCKET_KEY(t1) == EB_BUCKET_KEY(t2)\n *\n * EBUCKETS CREATION\n * -----------------\n * To avoid the cost of allocating the rax data-structure for only a few elements,\n * ebuckets will start as a simple linked-list and will be converted to a rax only\n * when it reaches some threshold.\n *\n * TODO\n * ----\n * - ebRemove() optimize to merge small segments into one segment.\n * - ebAdd() Fix pathological case of cascade addition of items into rax such\n *   that their values are smaller/bigger than the visited extended-segment, which\n *   ends up with multiple segments holding a single item each.\n */\n\n#ifndef __EBUCKETS_H\n#define __EBUCKETS_H\n\n#include <stdlib.h>\n#include <sys/types.h>\n#include <stdarg.h>\n#include <stdint.h>\n#include \"rax.h\"\n\n/*\n * EB_BUCKET_KEY_PRECISION - Defines the number of bits to ignore from the\n * expiration-time when mapping to buckets. The higher the value, the more items\n * with similar expiration-time will be aggregated into the same bucket. The lower\n * the value, the more \"accurate\" the active expiration of buckets will be.\n *\n * Note that the accurate time expiration of each item is preserved anyway and\n * enforced by lazy expiration. 
It only impacts active expiration,\n * which will only work on buckets that expired more than\n * (1<<EB_BUCKET_KEY_PRECISION) msec ago. For example, if\n * EB_BUCKET_KEY_PRECISION is 10, then active expiration\n * will work only on buckets that already expired at least 1 sec ago.\n *\n * The idea is to trim the rax tree depth, avoid having too many branches,\n * and reduce frequent modifications of the tree to the minimum.\n */\n#define EB_BUCKET_KEY_PRECISION 0   /* TBD: modify to 10 */\n\n/* From expiration time to bucket-key */\n#define EB_BUCKET_KEY(exptime) ((exptime) >> EB_BUCKET_KEY_PRECISION)\n\n\n#define EB_EXPIRE_TIME_MAX     ((uint64_t)0x0000FFFFFFFFFFFF) /* Maximum expire-time. */\n#define EB_EXPIRE_TIME_INVALID (EB_EXPIRE_TIME_MAX+1) /* assumed bigger than max */\n\n/* Handler to ebuckets DS. Pointer to a list, rax or NULL (empty DS). See also ebIsList(). */\ntypedef void *ebuckets;\n\n/* Users of ebuckets will store `eItem`, which is just a void pointer to their\n * element. In addition, eItem should embed the ExpireMeta struct and supply a\n * getter function (see EbucketsType.getExpireMeta).\n */\ntypedef void *eItem;\n\n/* This struct should be embedded inside `eItem` and must be aligned in memory. */\ntypedef struct ExpireMeta {\n    /* 48 bits of unix-time in msec. This value is sufficient to represent, in\n     * unix-time, until the date of 02 August, 10889\n     */\n    uint32_t expireTimeLo;              /* Low bits of expireTime. */\n    uint16_t expireTimeHi;              /* High bits of expireTime. */\n\n    unsigned int lastInSegment    : 1;  /* Last item in segment. If set, then 'next' will\n                                           point to the NextSegHdr, unless lastItemBucket=1,\n                                           in which case it will point to the segment\n                                           header of the current segment. */\n    unsigned int firstItemBucket  : 1;  /* First item in bucket. 
This flag assists in\n                                           manipulating segments directly without\n                                           the need to traverse the rax tree from\n                                           the start. */\n    unsigned int lastItemBucket   : 1;  /* Last item in bucket. This flag assists in\n                                           manipulating segments directly without\n                                           the need to traverse the rax tree from\n                                           the start. */\n    unsigned int numItems         : 5;  /* Only the first item in a segment\n                                           maintains this value. */\n\n    unsigned int trash            : 1;  /* This flag indicates whether the ExpireMeta\n                                           associated with the item is leftover.\n                                           There is always a potential to reuse the\n                                           item after removal/deletion. Note that\n                                           the user can still safely perform an O(1)\n                                           TTL lookup of a given item and verify\n                                           whether the attached TTL is valid or\n                                           leftover. See function\n                                           ebGetExpireTime(). */\n\n    unsigned int userData         : 3;  /* ebuckets can be used to store, in the same\n                                           instance, a few different types of items,\n                                           such as listpack and hash. 
This field\n                                           is reserved to store such identification\n                                           associated with the item and can help\n                                           to distinguish between them in delete or\n                                           expire callbacks. It is not used by\n                                           ebuckets internally and should be\n                                           maintained by the user. */\n\n    unsigned int reserved         : 4;\n\n    void *next;                       /* - If not last item in segment then next\n                                           points to next eItem (lastInSegment=0).\n                                         - If last in segment but not last in\n                                           bucket (lastItemBucket=0) then it\n                                           points to next segment header.\n                                         - If last in bucket then it points to\n                                           current segment header (Can be either\n                                           of type FirstSegHdr or NextSegHdr). */\n} ExpireMeta;\n\n/* Each instance of ebuckets needs to have a corresponding EbucketsType that holds\n * the necessary callbacks and configuration to operate correctly on the type\n * of items that are stored in it. Conceptually, each ebuckets instance should\n * hold a reference to its type, but to save memory we pass it as an argument to\n * each API call. */\ntypedef struct EbucketsType {\n    /* getter to extract the ExpireMeta from the item */\n    ExpireMeta* (*getExpireMeta)(const eItem item);\n\n    /* Called during ebDestroy(). Set to NULL if not needed. */\n    void (*onDeleteItem)(eItem item, void *ctx);\n\n    /* Indicates whether item addresses are odd in memory. ebuckets takes this\n     * into account to distinguish between an ebuckets pointer to a rax versus a\n     * pointer to an item that is the head of a list. 
*/\n    unsigned int itemsAddrAreOdd;\n} EbucketsType;\n\n/* Value returned by the `onExpireItem` callback to indicate the action to be\n * taken by ebExpire(). */\ntypedef enum ExpireAction {\n    ACT_REMOVE_EXP_ITEM=0,      /* Remove the item from ebuckets. */\n    ACT_UPDATE_EXP_ITEM,        /* Re-insert the item with updated expiration-time.\n                                   Before returning this value, the callback needs\n                                   to update the expiration time of the item using\n                                   the helper function ebSetMetaExpTime(). The item\n                                   will be kept aside and will be added again to\n                                   ebuckets at the end of ebExpire(). */\n    ACT_STOP_ACTIVE_EXP         /* Stop active-expiration. It will assume that\n                                   the provided 'item' wasn't deleted by the callback. */\n} ExpireAction;\n\n/* ExpireInfo is used to pass input and output parameters to ebExpire(). */\ntypedef struct ExpireInfo {\n    /* onExpireItem - Called during active-expiration by ebExpire() */\n    ExpireAction (*onExpireItem)(eItem item, void *ctx);\n\n    uint64_t maxToExpire;         /* [INPUT ] Maximum number of expired items to scan */\n    void *ctx;                    /* [INPUT ] context to pass to onExpireItem */\n    uint64_t now;                 /* [INPUT ] Current time in msec. */\n    uint64_t itemsExpired;        /* [OUTPUT] Returns the number of expired or updated items. */\n    uint64_t nextExpireTime;      /* [OUTPUT] Next expiration time. Returns\n                                     EB_EXPIRE_TIME_INVALID if none left. */\n} ExpireInfo;\n\n/* Iterator to traverse ebuckets items */\ntypedef struct EbucketsIterator {\n    /* private data of iterator */\n    ebuckets eb;\n    EbucketsType *type;\n    raxIterator raxIter;\n    int isRax;\n\n    /* public read only */\n    eItem currItem;               /* Current item ref. 
Use ebGetMetaExpTime()\n                                     on `currItem` to get expiration time.*/\n    uint64_t itemsCurrBucket;     /* Number of items in current bucket. */\n} EbucketsIterator;\n\ntypedef void *(ebDefragAllocFunction)(void *ptr);\ntypedef void *(ebDefragAllocItemFunction)(void *ptr, void *privdata);\ntypedef struct {\n    ebDefragAllocFunction *defragAlloc; /* Used for rax nodes, segment etc. */\n    ebDefragAllocItemFunction *defragItem;  /* Defrag-realloc eitem */\n} ebDefragFunctions;\n\n/* ebuckets API */\n\nstatic inline ebuckets ebCreate(void) { return NULL; } /* Empty ebuckets */\n\nvoid ebDestroy(ebuckets *eb, EbucketsType *type, void *deletedItemsCbCtx);\n\nvoid ebExpire(ebuckets *eb, EbucketsType *type, ExpireInfo *info);\n\nuint64_t ebExpireDryRun(ebuckets eb, EbucketsType *type, uint64_t now);\n\nstatic inline int ebIsEmpty(ebuckets eb) { return eb == NULL; }\n\nuint64_t ebGetNextTimeToExpire(ebuckets eb, EbucketsType *type);\n\nuint64_t ebGetMaxExpireTime(ebuckets eb, EbucketsType *type, int accurate);\n\nuint64_t ebGetTotalItems(ebuckets eb, EbucketsType *type);\n\n/* Item related API */\n\nint ebRemove(ebuckets *eb, EbucketsType *type, eItem item);\n\nint ebAdd(ebuckets *eb, EbucketsType *type, eItem item, uint64_t expireTime);\n\nuint64_t ebGetExpireTime(EbucketsType *type, eItem item);\n\nvoid ebStart(EbucketsIterator *iter, ebuckets eb, EbucketsType *type);\n\nvoid ebStop(EbucketsIterator *iter);\n\nint ebNext(EbucketsIterator *iter);\n\nint ebNextBucket(EbucketsIterator *iter);\n\nint ebScanDefrag(ebuckets *eb, EbucketsType *type, unsigned long *cursor,\n                 ebDefragFunctions *defragfns, void *privdata);\n\nstatic inline uint64_t ebGetMetaExpTime(ExpireMeta *expMeta) {\n    return (((uint64_t)(expMeta)->expireTimeHi << 32) | (expMeta)->expireTimeLo);\n}\n\nstatic inline void ebSetMetaExpTime(ExpireMeta *expMeta, uint64_t t) {\n    expMeta->expireTimeLo = (uint32_t)(t&0xFFFFFFFF);\n    expMeta->expireTimeHi = 
(uint16_t)((t) >> 32);\n}\n\n/* Debug API */\n\nvoid ebValidate(ebuckets eb, EbucketsType *type);\n\nvoid ebPrint(ebuckets eb, EbucketsType *type);\n\n#ifdef REDIS_TEST\nint ebucketsTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* __EBUCKETS_H */\n"
  },
  {
    "path": "src/endianconv.c",
    "content": "/* endianconv.c -- Endian conversion utilities.\n *\n * These functions are never called directly, but always through the macros\n * defined in endianconv.h; this way we define everything as a no-op\n * if the arch is already little endian.\n *\n * Redis tries to encode everything as little endian (but a few things that need\n * to be backward compatible are still in big endian) because most of the\n * production environments are little endian, and we have a lot of conversions\n * in a few places because ziplists, intsets and zipmaps need to be endian-neutral\n * even in memory, since they are serialized to RDB files directly with a single\n * write(2) without other additional steps.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n\n#include <stdint.h>\n\n/* Toggle the 16 bit unsigned integer pointed to by *p from little endian to\n * big endian */\nvoid memrev16(void *p) {\n    unsigned char *x = p, t;\n\n    t = x[0];\n    x[0] = x[1];\n    x[1] = t;\n}\n\n/* Toggle the 32 bit unsigned integer pointed to by *p from little endian to\n * big endian */\nvoid memrev32(void *p) {\n    unsigned char *x = p, t;\n\n    t = x[0];\n    x[0] = x[3];\n    x[3] = t;\n    t = x[1];\n    x[1] = x[2];\n    x[2] = t;\n}\n\n/* Toggle the 64 bit unsigned integer pointed to by *p from little endian to\n * big endian */\nvoid memrev64(void *p) {\n    unsigned char *x = p, t;\n\n    t = x[0];\n    x[0] = x[7];\n    x[7] = t;\n    t = x[1];\n    x[1] = x[6];\n    x[6] = t;\n    t = x[2];\n    x[2] = x[5];\n    x[5] = t;\n    t = x[3];\n    x[3] = x[4];\n    x[4] = t;\n}\n\nuint16_t intrev16(uint16_t v) {\n    memrev16(&v);\n    return v;\n}\n\nuint32_t 
intrev32(uint32_t v) {\n    memrev32(&v);\n    return v;\n}\n\nuint64_t intrev64(uint64_t v) {\n    memrev64(&v);\n    return v;\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n\n#define UNUSED(x) (void)(x)\nint endianconvTest(int argc, char *argv[], int flags) {\n    char buf[32];\n\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    snprintf(buf,sizeof(buf),\"ciaoroma\");\n    memrev16(buf);\n    printf(\"%s\\n\", buf);\n\n    snprintf(buf,sizeof(buf),\"ciaoroma\");\n    memrev32(buf);\n    printf(\"%s\\n\", buf);\n\n    snprintf(buf,sizeof(buf),\"ciaoroma\");\n    memrev64(buf);\n    printf(\"%s\\n\", buf);\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/endianconv.h",
    "content": "/* See endianconv.c top comments for more information\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __ENDIANCONV_H\n#define __ENDIANCONV_H\n\n#include \"config.h\"\n#include <stdint.h>\n\n/* --------------------------------------------------------------------------\n * Optimized endian conversion helpers\n * -------------------------------------------------------------------------- */\n\n/* For GCC, Clang — use builtins that compile to a single instruction */\n#if defined(__GNUC__) || defined(__clang__)\n#define REDIS_BSWAP64(v) __builtin_bswap64(v)\n#else\n#define REDIS_BSWAP64(v) intrev64(v)\n#endif\n\nvoid memrev16(void *p);\nvoid memrev32(void *p);\nvoid memrev64(void *p);\nuint16_t intrev16(uint16_t v);\nuint32_t intrev32(uint32_t v);\nuint64_t intrev64(uint64_t v);\n\n/* variants of the function doing the actual conversion only if the target\n * host is big endian */\n#if (BYTE_ORDER == LITTLE_ENDIAN)\n#define memrev16ifbe(p) ((void)(0))\n#define memrev32ifbe(p) ((void)(0))\n#define memrev64ifbe(p) ((void)(0))\n#define intrev16ifbe(v) (v)\n#define intrev32ifbe(v) (v)\n#define intrev64ifbe(v) (v)\n#else\n#define memrev16ifbe(p) memrev16(p)\n#define memrev32ifbe(p) memrev32(p)\n#define memrev64ifbe(p) memrev64(p)\n#define intrev16ifbe(v) intrev16(v)\n#define intrev32ifbe(v) intrev32(v)\n#define intrev64ifbe(v) intrev64(v)\n#endif\n\n/* The functions htonu64() and ntohu64() convert the specified value to\n * network byte ordering and back. In big endian systems they are no-ops. 
*/\n#if (BYTE_ORDER == BIG_ENDIAN)\n#define htonu64(v) (v)\n#define ntohu64(v) (v)\n#else\n#define htonu64(v) REDIS_BSWAP64(v)\n#define ntohu64(v) REDIS_BSWAP64(v)\n#endif\n\n#ifdef REDIS_TEST\nint endianconvTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/entry.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n */\n\n#include \"server.h\"\n#include \"redisassert.h\"\n#include \"entry.h\"\n\n/* Aggregates parameters for entry layout and serialization.\n * Populated by setEntryWriteInfo() and consumed by needsNewAlloc() and entryWrite*(). */\ntypedef struct EntryWriteInfo {\n    int isEmbdNewVal;          /* Whether value should be embedded */\n    int embdFieldType;         /* SDS type for the field */\n    size_t embdFieldSize;      /* Allocated size for SDS field */\n    size_t embdValueSize;      /* Required size for new embd value (SDS_TYPE_8) */\n    size_t expirySize;         /* Size of expiry metadata (0 if none) */\n    size_t newEntryAllocSize;  /* Total allocation size needed for new entry */\n    uint32_t flags;            /* Entry creation/update flags */\n} EntryWriteInfo;\n\nenum {\n    /* SDS aux flag. If set, it indicates that the entry has TTL metadata set. */\n    FIELD_SDS_AUX_BIT_ENTRY_HAS_EXPIRY = 0,\n    /* SDS aux flag. If set, it indicates that the entry has an embedded value\n     * pointer located in memory before the embedded field. If unset, the entry\n     * instead has an embedded value located after the embedded field. */\n    FIELD_SDS_AUX_BIT_ENTRY_HAS_VALUE_PTR = 1,\n    FIELD_SDS_AUX_BIT_ENTRY_MAX\n};\nstatic_assert(FIELD_SDS_AUX_BIT_ENTRY_MAX + SDS_TYPE_BITS < sizeof(char) * CHAR_BIT, \n              \"too many sds bits are used for entry metadata\");\n\n/* Returns true if the entry's value is not embedded (stored by pointer). */\nstatic inline int entryHasValuePtr(const Entry *entry) {\n    return sdsGetAuxBit(entryGetField(entry), FIELD_SDS_AUX_BIT_ENTRY_HAS_VALUE_PTR);\n}\n\n/* Returns true if the entry's value is embedded in the entry. 
*/\nstatic inline int entryHasEmbeddedValue(Entry *entry) {\n    return !entryHasValuePtr(entry);\n}\n\n/* Returns true if the entry has an expiration timestamp. */\nint entryHasExpiry(const Entry *entry) {\n    return sdsGetAuxBit(entryGetField(entry), FIELD_SDS_AUX_BIT_ENTRY_HAS_EXPIRY);\n}\n\n/* Returns the location of a pointer to a separately allocated value. Only for\n * an entry without an embedded value. */\nstatic sds *entryGetValueRef(const Entry *entry) {\n    serverAssert(entryHasValuePtr(entry));\n    char *fieldHdr = sdsAllocPtr(entryGetField(entry));\n    char *valuePtr = fieldHdr - sizeof(sds);\n    return (sds *) valuePtr;\n}\n\n/* A pointer to the value pointer. If embedded or doesn't have a value, returns NULL. */\nsds *entryGetValuePtrRef(const Entry *entry) {\n    return entryHasValuePtr(entry) ? entryGetValueRef(entry) : NULL;\n}\n\n/* Returns the sds of the entry's value. */\nsds entryGetValue(const Entry *entry) {\n    if (entryHasValuePtr(entry)) {\n        return *entryGetValueRef(entry);\n    } else {\n        /* Skip field content, field null terminator and value sds8 hdr. */\n        size_t offset = sdslen(entryGetField(entry)) + 1 + sdsHdrSize(SDS_TYPE_8);\n        return (sds)((char *)entry + offset);\n    }\n}\n\n/* Returns the address of the entry allocation. 
*/\nvoid *entryGetAllocPtr(const Entry *entry) {\n    char *buf = sdsAllocPtr(entryGetField(entry));\n    if (entryHasValuePtr(entry)) buf -= sizeof(sds);\n    if (entryHasExpiry(entry)) buf -= sizeof(ExpireMeta);\n    return buf;\n}\n\n/**************************************** Entry Expiry API *****************************************/\n\n/* Returns the entry expiration timestamp, or EB_EXPIRE_TIME_INVALID */\nuint64_t entryGetExpiry(const Entry *entry) {\n    if (!entryHasExpiry(entry))\n        return EB_EXPIRE_TIME_INVALID;\n\n    ExpireMeta *expireMeta = (ExpireMeta *)entryGetAllocPtr(entry);\n    if (expireMeta->trash)\n        return EB_EXPIRE_TIME_INVALID;\n\n    return ebGetMetaExpTime(expireMeta);\n}\n\nint entryIsExpired(const Entry *entry) {\n    if (server.allow_access_expired)\n        return 0;\n\n    /* Condition remains valid even if entryGetExpiry() returns EB_EXPIRE_TIME_INVALID,\n     * as the constant is equivalent to (EB_EXPIRE_TIME_MAX + 1). */\n    uint64_t expireTime = entryGetExpiry(entry);\n    return (mstime_t)expireTime < commandTimeSnapshot();\n}\n\n/**************************************** Entry Expiry API - End *****************************************/\n\n/* Calculate entry allocation size based on SDS alloc fields.\n * This is used for size accounting. */\nsize_t entryMemUsage(Entry *entry) {\n    size_t size = sdsAllocSize(entryGetField(entry));\n    size += sdsAllocSize(entryGetValue(entry));\n    if (entryHasValuePtr(entry)) size += sizeof(sds);\n    if (entryHasExpiry(entry)) size += sizeof(ExpireMeta);\n    return size;\n}\n\nvoid entryFree(Entry *entry, size_t *usable) {\n    if (usable) *usable = entryMemUsage(entry);\n\n    if (entryHasValuePtr(entry))\n        sdsfree(entryGetValue(entry));\n\n    zfree(entryGetAllocPtr(entry));\n}\n\n/* Determine the appropriate SDS type for the field based on length and expiry.\n * SDS_TYPE_5 cannot be used if expiry is present (no aux bits available). 
*/\nstatic inline int entryCalcFieldSdsType(size_t fieldLen, int hasExpiry) {\n    int sdsType = sdsReqType(fieldLen);\n    if (sdsType == SDS_TYPE_5 && hasExpiry) {\n        sdsType = SDS_TYPE_8;\n    }\n    return sdsType;\n}\n\n/* Calculate required size and layout for an entry */\nstatic inline void setEntryWriteInfo(EntryWriteInfo *info, sds field, sds value, uint32_t flags) {\n    info->flags = flags;\n\n    /* Calculate expiry allocation size */\n    info->expirySize = (flags & ENTRY_HAS_EXPIRY) ? sizeof(ExpireMeta) : 0;\n\n    /* Calculate field allocation size */\n    size_t fieldLen = sdslen(field);\n    info->embdFieldType = entryCalcFieldSdsType(fieldLen, info->expirySize > 0);\n    info->embdFieldSize = sdsReqSize(fieldLen, info->embdFieldType);\n    info->isEmbdNewVal = 0;\n\n    /* Calculate value allocation size (Always use SDS_TYPE_8 for embedded values) */\n    size_t valueLen = value ? sdslen(value) : 0;\n    info->embdValueSize = value ? sdsReqSize(valueLen, SDS_TYPE_8) : 0;\n\n    /* Start with field and expiry */\n    size_t allocSize = info->embdFieldSize + info->expirySize;\n    \n    if (unlikely(!value)) {\n        info->newEntryAllocSize = allocSize;\n        return;\n    }\n    \n    /* Decide whether to embed the value or use a pointer */\n    if (allocSize + info->embdValueSize <= EMBED_VALUE_MAX_ALLOC_SIZE) {\n        /* Embed field and value (SDS_TYPE_8). 
Unused space is kept as padding in the value's SDS allocation.\n         *   [ExpireMeta | Field hdr \"foo\"\\0 | Value hdr8 \"bar\"\\0] \n         */\n        info->isEmbdNewVal = 1;\n        allocSize += info->embdValueSize;\n    } else {\n        /* Embed field only (>= SDS_TYPE_8 to encode value ptr flag).\n         *   [ExpireMeta | Value Ptr | Field hdr8 \"foo\"\\0] \n         */\n        info->isEmbdNewVal = 0;\n        allocSize += sizeof(sds);\n\n        /* Upgrade field to SDS_TYPE_8 if needed for aux bits */\n        if (info->embdFieldType == SDS_TYPE_5) {\n            info->embdFieldType = SDS_TYPE_8;\n            info->embdFieldSize = sdsReqSize(fieldLen, info->embdFieldType);\n            allocSize = info->embdFieldSize + info->expirySize + sizeof(sds);\n        }\n    }\n    info->newEntryAllocSize = allocSize;\n}\n\n/* Serialize the content of the entry into the provided buffer */\nstatic Entry *entryWriteNew(EntryWriteInfo *info, sds field, sds value) {\n    char *fieldBuf, *buf = zmalloc(info->newEntryAllocSize);\n\n    /* Take into account expiry metadata if present */\n    if (info->expirySize) {\n        ExpireMeta *expMeta = (ExpireMeta *)buf;\n        expMeta->trash = 1;  /* Mark as trash until added to ebuckets */\n        fieldBuf = buf + info->expirySize;\n    } else {\n        fieldBuf = buf;\n    }\n\n    /* Write value (either as pointer or embedded) */\n    if (value) {\n        if (info->isEmbdNewVal) {\n            /* Embed the value after the field - always copy content */\n            sdsnewplacement(fieldBuf + info->embdFieldSize, info->embdValueSize,\n                          SDS_TYPE_8, value, sdslen(value));\n            /* Free value only if ownership was transferred */\n            if (info->flags & ENTRY_TAKE_VALUE)\n                sdsfree(value);\n            info->flags &= ~ENTRY_TAKE_VALUE;\n        } else {\n            /* Store value as a pointer. 
Dup if ownership wasn't transferred */\n            *(sds *)fieldBuf = (info->flags & ENTRY_TAKE_VALUE) ? value : sdsdup(value);\n            info->flags &= ~ENTRY_TAKE_VALUE;\n            fieldBuf += sizeof(sds);\n        }\n    }\n\n    /* Write the field */\n    Entry *newEntry = (Entry *)sdsnewplacement(fieldBuf, info->embdFieldSize,\n                                               info->embdFieldType, field, sdslen(field));\n\n    /* Set aux bits to encode entry type */\n    sdsSetAuxBit(entryGetField(newEntry), FIELD_SDS_AUX_BIT_ENTRY_HAS_VALUE_PTR, !info->isEmbdNewVal);\n    sdsSetAuxBit(entryGetField(newEntry), FIELD_SDS_AUX_BIT_ENTRY_HAS_EXPIRY, info->expirySize > 0);\n\n    /* Verify the entry was built correctly */\n    debugServerAssert(entryHasValuePtr(newEntry) == !info->isEmbdNewVal);\n    debugServerAssert(entryHasExpiry(newEntry) == (info->expirySize > 0));\n\n    return newEntry;\n}\n\nEntry *entryCreate(sds field, sds value, uint32_t flags, size_t *usable) {\n    /* Calculate required size and layout */\n    EntryWriteInfo info;\n    setEntryWriteInfo(&info, field, value, flags);\n\n    /* Allocate and write the entry */\n    Entry *entry = entryWriteNew(&info, field, value);\n\n    /* Calculate usable size if requested */\n    if (usable)\n        *usable = entryMemUsage(entry);\n\n    return entry;\n}\n\n/* Helper: Check if we need to create a new entry allocation during update.\n * Returns true if a new allocation is needed, false if we can update in-place. 
*/\nstatic inline int needsNewAlloc(Entry *e,\n                                EntryWriteInfo *info,\n                                int isUpdateValue,\n                                int expiryAddRemove)\n{\n    /* If we need to add/remove expiration metadata */\n    if (expiryAddRemove)\n        return 1;\n    \n    /* If not updating value, no need to allocate new entry */\n    if (!isUpdateValue)\n        return 0;\n    \n    /* If the value representation changed (embedded <-> pointer) */\n    if (info->isEmbdNewVal != entryHasEmbeddedValue(e))\n        return 1;\n    \n    /* If old & new are both pointers, no need to allocate new entry */\n    if (!info->isEmbdNewVal)\n        return 0;\n    \n    /* Reuse the old allocation only if the new embedded value fits and\n     * doesn't waste more than a quarter of it */\n    size_t oldAllocSize = sdsAllocSize(entryGetValue(e));\n    size_t newReqSize = info->embdValueSize;\n    return !((newReqSize <= oldAllocSize) && (newReqSize >= oldAllocSize * 3 / 4));\n}\n\n/* Helper: Update entry in-place */\nstatic Entry *entryWriteOver(Entry *e, EntryWriteInfo *info, sds value)\n{\n    /* No need to touch expiration metadata. It's done by the caller */\n\n    if (entryHasValuePtr(e)) {\n        /* Replace pointer value */\n        sds *valueRef = entryGetValueRef(e);\n        sdsfree(*valueRef);\n        *valueRef = (info->flags & ENTRY_TAKE_VALUE) ? 
value : sdsdup(value);\n    } else {\n        /* Update embedded value in-place - always copy content */\n        sds oldValue = entryGetValue(e);\n        /* Use the old value's allocation size */\n        size_t valueSize = sdsAllocSize(oldValue);\n        sdsnewplacement(sdsAllocPtr(oldValue), valueSize, SDS_TYPE_8, value, sdslen(value));\n        /* Free value only if we took ownership */\n        if (info->flags & ENTRY_TAKE_VALUE)\n            sdsfree(value);\n    }\n    info->flags &= ~ENTRY_TAKE_VALUE;\n    return e;\n}\n\n/* Modify the entry's value and/or reserve space for expiration metadata */\nEntry *entryUpdate(Entry *entry, sds value, uint32_t flags, ssize_t *usableDiff) {\n    /* Check if we need to add/remove expiration metadata */\n    int oldHasExpiry = entryHasExpiry(entry) != 0;\n    int newHasExpiry = (flags & ENTRY_HAS_EXPIRY) != 0;\n    int expiryAddRemove = (oldHasExpiry != newHasExpiry);\n    \n    int isUpdateVal = (value != NULL);\n\n    /* Early return if nothing changes */\n    if (!isUpdateVal && !expiryAddRemove) {\n        if (usableDiff)\n            *usableDiff = 0;\n        if (flags & ENTRY_TAKE_VALUE)\n            sdsfree(value);\n        return entry;\n    }\n\n    /* Calculate old usable size before any modifications */\n    size_t oldUsable = entryMemUsage(entry);\n\n    /* Get the value to use (either provided or existing) */\n    if (!value)\n        value = entryGetValue(entry);\n\n    /* Calculate required size and layout for the updated entry */\n    EntryWriteInfo info;\n    setEntryWriteInfo(&info, entryGetField(entry), value, flags);\n\n    /* Decide whether to update in-place or create a new entry */\n    Entry *newEntry;\n    size_t newUsable;\n\n    if (needsNewAlloc(entry, &info, isUpdateVal, expiryAddRemove)) {\n        Entry *oldEntry = entry;\n        /* If not updating value */\n        if (!isUpdateVal) {\n            /* Should not flag ownership of value if not updating value */\n            
debugServerAssert((info.flags & ENTRY_TAKE_VALUE) == 0);\n            \n            /* Try to reuse the existing value */\n            value = entryGetValue(oldEntry);\n            \n            /* If value is a pointer, we can transfer it from old to new entry */\n            if (entryHasValuePtr(oldEntry)) {\n                sds *oldValuePtr = entryGetValueRef(oldEntry);\n                *oldValuePtr = NULL;\n                info.flags |= ENTRY_TAKE_VALUE;\n            }\n        }\n        \n        newEntry = entryWriteNew(&info, entryGetField(oldEntry), value);\n        entryFree(oldEntry, NULL);\n\n        newUsable = entryMemUsage(newEntry);\n    } else {\n        /* Update in-place */\n        newEntry = entryWriteOver(entry, &info, value);\n        newUsable = entryMemUsage(newEntry);\n    }\n\n    /* Calculate the difference in memory usage */\n    if (usableDiff)\n        *usableDiff = (ssize_t)newUsable - (ssize_t)oldUsable;\n\n    /* Verify the entry was built correctly */\n    debugServerAssert(entryHasValuePtr(newEntry) == !info.isEmbdNewVal);\n    debugServerAssert(entryHasExpiry(newEntry) == (info.expirySize != 0));\n    debugServerAssert((info.flags & ENTRY_TAKE_VALUE) == 0); /* verify the flag is cleared */\n    serverAssert(newEntry);\n\n    return newEntry;\n}\n\n/* Defragments a hashtable entry (field-value pair) if needed, using the\n * provided defrag functions. The defrag functions return NULL if the allocation\n * was not moved, otherwise they return a pointer to the new memory location.\n * A separate sds defrag function is needed because of the unique memory layout\n * of sds strings.\n * If the location of the entry changed, we return the new location,\n * otherwise we return NULL. 
*/\nEntry *entryDefrag(Entry *e, void *(*defragfn)(void *), sds (*sdsdefragfn)(sds)) {\n    if (entryHasValuePtr(e)) {\n        sds *valueRef = entryGetValueRef(e);\n        sds newValue = sdsdefragfn(*valueRef);\n        if (newValue) *valueRef = newValue;\n    }\n    char *allocation = entryGetAllocPtr(e);\n    char *newAllocation = defragfn(allocation);\n    if (newAllocation != NULL) {\n        /* Return the same offset into the new allocation as the entry's offset\n         * in the old allocation. */\n        int entryPointerOffset = (char *)e - allocation;\n        return (Entry *)(newAllocation + entryPointerOffset);\n    }\n    return NULL;\n}\n\n/* Used for releasing memory to OS to avoid unnecessary CoW. Called when we've\n * forked and memory won't be used again. See zmadvise_dontneed() */\nvoid entryDismissMemory(Entry *entry) {\n    /* Only dismiss values memory since the field size usually is small. */\n    if (entryHasValuePtr(entry)) {\n        dismissSds(*entryGetValueRef(entry));\n    }\n}\n"
  },
  {
    "path": "src/entry.h",
"content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n */\n\n/* -----------------------------------------------------------------------------\n * Entry \n * -----------------------------------------------------------------------------\n * An Entry represents a packed field-value pair with optional expiration metadata.\n * Currently it is used only by Hash.\n * \n * There are 3 different formats for the \"entry\".  In all cases, the \"entry\" \n * pointer points into the allocation and is identical to the \"field\" sds pointer.\n *\n * Type 1: Field sds type is an SDS_TYPE_5\n *     With this type, both the field and value are embedded in the entry. Expiration \n *     is not allowed as the SDS_TYPE_5 (on field) doesn't contain any aux bits \n *     to encode the existence of an expiration.  Extra padding is included in \n *     the value to the size of the physical block.\n *\n *             entry\n *               |\n *     +---------V------------+----------------------------+\n *     |       Field          |      Value                 |\n *     | sdshdr5 | \"foo\" \\0   | sdshdr8 \"bar\" \\0 (padding) |\n *     +---------+------------+----------------------------+\n *\n *     Identified by: field sds type is SDS_TYPE_5\n *\n *\n * Type 2: Field sds type is SDS_TYPE_8\n *     With this type, both the field and value are embedded.  Extra bits in the \n *     sdshdr8 (on field) are used to encode aux flags which may indicate the \n *     presence of an optional expiration. 
Extra padding is included in the value \n *     to the size of the physical block.\n *\n *                            entry\n *                              |\n *     +--------------+---------V------------+----------------------------+\n *     | Expire (opt) |       Field          |      Value                 |\n *     |  ExpireMeta  | sdshdr8 | \"foo\" \\0   | sdshdr8 \"bar\" \\0 (padding) |\n *     +--------------+---------+------------+----------------------------+\n *\n *     Identified by: sds type is SDS_TYPE_8  AND  has embedded value\n *\n *\n * Type 3: Value is an sds, referenced by pointer\n *     With this type, the field is embedded, and the value is an sds, referenced \n *     by pointer.  Extra bits in the sdshdr8/16/32 (on field) are used to encode \n *     aux flags which indicate the presence of a value by pointer.  An aux bit \n *     may indicate the presence of an optional expiration.  Note that the \n *     \"field\" is not padded, so there's no direct way to identify the length of the allocation.\n *\n *                                             entry\n *                                               |\n *     +--------------+---------------+----------V----------+--------+\n *     | Expire (opt) |     Value     |        Field        | / / / /|\n *     |  ExpireMeta  | sds (pointer) | sdshdr8+ | \"foo\" \\0 |/ / / / |\n *     +--------------+-------+-------+----------+----------+--------+\n *                            |\n *                            +-> sds value\n *\n *     Identified by: Aux bit FIELD_SDS_AUX_BIT_ENTRY_HAS_VALUE_PTR\n */\n#ifndef _ENTRY_H_\n#define _ENTRY_H_\n\n#include \"sds.h\"\n#include \"ebuckets.h\"\ntypedef struct _entry Entry;\n\n/* The maximum allocation size we want to use for entries with embedded values. 
*/\n#define EMBED_VALUE_MAX_ALLOC_SIZE 128\n\n/* Flags for entryCreate() and entryUpdate() */\n#define ENTRY_TAKE_VALUE   (1<<1)  /* Take ownership of value if possible (not embedded) */\n#define ENTRY_HAS_EXPIRY   (1<<2)  /* Entry has expiration */\n\n/* Returns the value string (sds) from the entry. */\nsds entryGetValue(const Entry *entry);\n\n/* Returns a pointer to the value pointer. If the value is embedded or absent, returns NULL. */\nsds *entryGetValuePtrRef(const Entry *entry);\n\n/* Gets the expiration timestamp (UNIX time in milliseconds). */\nuint64_t entryGetExpiry(const Entry *entry);\n\nint entryIsExpired(const Entry *entry);\n\n/* Returns true if the entry has an expiration timestamp set. */\nint entryHasExpiry(const Entry *entry);\n\n/* Frees the memory used by the entry (including field/value). */\nvoid entryFree(Entry *entry, size_t *usable);\n\n/* Creates a new entry with the given field, value, and optional expiry.\n * Flags can be ENTRY_TAKE_VALUE (take ownership of value if not embedded) and\n * ENTRY_HAS_EXPIRY (entry has expiration metadata).\n * If usable is not NULL, it will be set to the actual allocated size. */\nEntry *entryCreate(sds field, sds value, uint32_t flags, size_t *usable);\n\n/* Updates the value and/or expiry of an existing entry.\n * In case value is NULL, will use the existing entry value.\n * Flags can be ENTRY_TAKE_VALUE (take ownership of value if not embedded) and\n * ENTRY_HAS_EXPIRY (reserve space for existing expiry).\n * If usableDiff is not NULL, it will be set to the diff in memory usage (newUsable - oldUsable). */\nEntry *entryUpdate(Entry *entry, sds value, uint32_t flags, ssize_t *usableDiff);\n\n/* Calculate entry allocation size based on SDS alloc fields.\n * This is used for size accounting. */\nsize_t entryMemUsage(Entry *entry);\n\n/* Returns the address of the entry allocation. */\nvoid *entryGetAllocPtr(const Entry *entry);\n\n/* Defragments the entry and returns the new pointer (if moved). 
*/\nEntry *entryDefrag(Entry *entry, void *(*defragfn)(void *), sds (*sdsdefragfn)(sds));\n\n/* Advises allocator to dismiss memory used by entry. */\nvoid entryDismissMemory(Entry *entry);\n\n/* Get a reference to the expiry metadata if present, NULL otherwise. */\nstatic inline ExpireMeta *entryRefExpiryMeta(Entry *entry) {\n    return entryHasExpiry(entry) ? (ExpireMeta *)entryGetAllocPtr(entry) : NULL;\n}\n\n/* The entry pointer is the field sds, but that's an implementation detail. */\nstatic inline sds entryGetField(const Entry *entry) {\n    /* Note: The Entry pointer is identical to the field sds pointer.\n     * This is a fundamental design assumption verified by the implementation. */\n    return (sds)entry;\n}\n\nstatic inline size_t entryFieldLen(const Entry *entry) {\n    return sdslen(entryGetField(entry));\n}\n\n#endif\n"
  },
  {
    "path": "src/estore.c",
"content": "/*\n * estore.c -- Expiration Store implementation\n * \n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n#include \"estore.h\"\n#include \"zmalloc.h\"\n#include \"redisassert.h\"\n#include \"server.h\"\n#include <string.h>\n\n/* Definition of the estore structure */\nstruct _estore {\n    int flags;                  /* Flags for configuration options */\n    EbucketsType *bucket_type;  /* Type of buckets used in this store */\n    ebuckets *ebArray;          /* Array of ebuckets (one per slot in cluster mode, or just one) */\n    int num_buckets_bits;       /* Log2 of the number of buckets */\n    int num_buckets;            /* Number of buckets (1 << num_buckets_bits) */\n    unsigned long long count;   /* Total number of items in this estore */\n    fenwickTree *buckets_sizes; /* Binary indexed tree (BIT) tracking cumulative item counts across buckets */\n};\n\n/* Get the appropriate bucket for a given eidx */\nebuckets *estoreGetBuckets(estore *es, int eidx) {\n    debugAssert(eidx < es->num_buckets);\n    return &(es->ebArray[eidx]);\n}\n\n/* Create a new expiration store\n * type             - Pointer to a static EbucketsType defining the bucket behavior.\n * num_buckets_bits - The log2 of the number of buckets (0 for 1 bucket,\n *                    CLUSTER_SLOT_MASK_BITS for CLUSTER_SLOTS buckets)\n */\nestore *estoreCreate(EbucketsType *type, int num_buckets_bits) {\n    /* We can't support more than 2^16 buckets to be consistent with kvstore */\n    assert(num_buckets_bits <= 16);\n\n    estore *es = zmalloc(sizeof(estore));\n    /* Store the bucket type */\n    es->bucket_type = type;\n\n    /* Calculate number of buckets based on num_buckets_bits 
*/\n    es->num_buckets_bits = num_buckets_bits;\n    es->num_buckets = 1 << num_buckets_bits;\n    es->buckets_sizes = es->num_buckets > 1 ? fwTreeCreate(num_buckets_bits) : NULL;\n\n    /* Allocate the buckets array */\n    es->ebArray = zcalloc(sizeof(ebuckets) * es->num_buckets);\n\n    /* Initialize all buckets */\n    for (int i = 0; i < es->num_buckets; i++) {\n        es->ebArray[i] = ebCreate();\n    }\n\n    es->count = 0;\n    return es;\n}\n\n/* Empty an expiration store (clear all entries but keep the structure) */\nvoid estoreEmpty(estore *es) {\n    if (es == NULL) return;\n\n    for (int i = 0; i < es->num_buckets; i++) {\n        ebDestroy(&es->ebArray[i], es->bucket_type, NULL);\n        es->ebArray[i] = ebCreate();\n    }\n\n    if (es->buckets_sizes) fwTreeClear(es->buckets_sizes);\n    es->count = 0;\n}\n\n/* Check if the expiration store is empty */\nint estoreIsEmpty(estore *es) {\n    return es->count == 0;\n}\n\n/* Get the first non-empty bucket index in the estore */\nint estoreGetFirstNonEmptyBucket(estore *es) {\n    if (es->num_buckets == 1 || estoreSize(es) == 0)\n        return 0;\n    return fwTreeFindFirstNonEmpty(es->buckets_sizes);\n}\n\n/* Get the next non-empty bucket index after the given index */\nint estoreGetNextNonEmptyBucket(estore *es, int eidx) {\n    if (es->num_buckets == 1) {\n        assert(eidx == 0);\n        return -1;\n    }\n    return fwTreeFindNextNonEmpty(es->buckets_sizes, eidx);\n}\n\n/* Release an expiration store (free all memory) */\nvoid estoreRelease(estore *es) {\n    if (es == NULL) return;\n\n    for (int i = 0; i < es->num_buckets; i++) {\n        if (es->ebArray[i])\n            ebDestroy(&es->ebArray[i], es->bucket_type, NULL);\n    }\n    fwTreeDestroy(es->buckets_sizes);\n    zfree(es->ebArray);\n    zfree(es);\n}\n\n/* Perform active expiration on a specific bucket */\nvoid estoreActiveExpire(estore *es, int eidx, ExpireInfo *info) {\n    ebuckets *eb = estoreGetBuckets(es, eidx);\n    
uint64_t before = ebGetTotalItems(*eb, es->bucket_type);\n    ebExpire(eb, es->bucket_type, info);\n    /* If items expired (or updated), update the BIT and estore count */\n    if (info->itemsExpired) {\n        uint64_t diff = before - ebGetTotalItems(*eb, es->bucket_type);\n        fwTreeUpdate(es->buckets_sizes, eidx, (long long) diff);\n        es->count -= diff;\n    }\n}\n\n/* Add item to estore with the given expiration time. The item must have\n * expireMeta already allocated. */\nvoid estoreAdd(estore *es, int eidx, eItem item, uint64_t when) {\n    debugAssert(es != NULL && item != NULL);\n\n    /* Currently only used by hash field expiration. Verify it has expireMeta */\n    debugAssert((((robj *)item)->encoding == OBJ_ENCODING_LISTPACK_EX) ||\n                ((((robj *)item)->encoding == OBJ_ENCODING_HT) &&\n                 ((dict *) ((robj *)item)->ptr)->type == &entryHashDictTypeWithHFE));\n\n    ebuckets *bucket = estoreGetBuckets(es, eidx);\n    if (ebAdd(bucket, es->bucket_type, item, when) == 0) {\n        es->count++;\n        fwTreeUpdate(es->buckets_sizes, eidx, 1);\n    }\n}\n\n/* Remove an item from the expiration store. Returns the expire time or EB_EXPIRE_TIME_INVALID */\nuint64_t estoreRemove(estore *es, int eidx, eItem item) {\n    uint64_t expireTime;\n    debugAssert(es != NULL && item != NULL);\n\n    /* Currently only used by hash field expiration. 
Gracefully ignore otherwise */\n    kvobj *kv = (kvobj *) item;\n    if ( (kv->type != OBJ_HASH) ||\n         (kv->encoding == OBJ_ENCODING_LISTPACK) ||\n         ((kv->encoding == OBJ_ENCODING_HT) && (((dict *)kv->ptr)->type != &entryHashDictTypeWithHFE)))\n        return EB_EXPIRE_TIME_INVALID;\n\n    /* If the ExpireMeta of kv is marked as trash, then it is already removed */\n    if ((expireTime = ebGetExpireTime(es->bucket_type, item)) == EB_EXPIRE_TIME_INVALID)\n        return EB_EXPIRE_TIME_INVALID;\n\n    ebuckets *bucket = estoreGetBuckets(es, eidx);\n    serverAssert(ebRemove(bucket, es->bucket_type, item) == 1);\n    es->count--;\n    fwTreeUpdate(es->buckets_sizes, eidx, -1);\n\n    return expireTime;\n}\n\n/* Update an item's expiration time in the store */\nvoid estoreUpdate(estore *es, int eidx, eItem item, uint64_t when) {\n    debugAssert(es != NULL && item != NULL);\n\n    /* Currently only used by hash field expiration. Verify it has expireMeta */\n    debugAssert((((robj *)item)->encoding == OBJ_ENCODING_LISTPACK_EX) ||\n                ((((robj *)item)->encoding == OBJ_ENCODING_HT) &&\n                 ((dict *) ((robj *)item)->ptr)->type == &entryHashDictTypeWithHFE));\n\n    debugAssert(ebGetExpireTime(es->bucket_type, item) != EB_EXPIRE_TIME_INVALID);\n\n    ebuckets *bucket = estoreGetBuckets(es, eidx);\n\n    /* Remove the item from its current position */\n    serverAssert(ebRemove(bucket, es->bucket_type, item) != 0);\n\n    /* Add the item back with the new expiration time */\n    serverAssert(ebAdd(bucket, es->bucket_type, item, when) == 0);\n\n    /* Note that estore count remains unchanged */\n}\n\n/* Get the total number of items in the expiration store */\nuint64_t estoreSize(estore *es) {\n    return es->count;\n}\n\n/* Move ebuckets from one estore to another */\nvoid estoreMoveEbuckets(estore *src, estore *dst, int eidx) {\n    serverAssert(src->num_buckets > eidx);\n    serverAssert(src->num_buckets == dst->num_buckets);\n    
serverAssert(ebIsEmpty(dst->ebArray[eidx])); /* Destination bucket must be empty (i.e., NULL) */\n\n    /* Adjust source estore */\n    ebuckets eb = src->ebArray[eidx];\n    if (ebIsEmpty(eb)) return;\n    int64_t count = (int64_t)ebGetTotalItems(eb, src->bucket_type);\n    src->count -= count;\n    fwTreeUpdate(src->buckets_sizes, eidx, -count);\n    src->ebArray[eidx] = ebCreate(); /* Effectively sets it to NULL. */\n\n    /* Move ebuckets to destination estore */\n    dst->ebArray[eidx] = eb;\n    dst->count += count;\n    fwTreeUpdate(dst->buckets_sizes, eidx, count);\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n#include \"testhelp.h\"\n\n#define TEST(name) printf(\"test — %s\\n\", name);\n\n/* Test item structure for estore testing */\ntypedef struct TestItem {\n    kvobj kv;  /* mimic kvobj of type HASH to pass checks in estore */\n    ExpireMeta mexpire;\n    int index;\n} TestItem;\n\n/* Test EbucketsType for estore testing */\nExpireMeta *getTestItemExpireMeta(const eItem item) {\n    return &((TestItem *)item)->mexpire;\n}\n\nvoid deleteTestItemCb(eItem item, void *ctx) {\n    UNUSED(ctx);\n    zfree(item);\n}\n\nEbucketsType testEbucketsType = {\n    .getExpireMeta = getTestItemExpireMeta,\n    .onDeleteItem = deleteTestItemCb,\n    .itemsAddrAreOdd = 0,\n};\n\n/* Helper function to create a test item */\nstatic TestItem *createTestItem(int index) {\n    TestItem *item = zmalloc(sizeof(TestItem));\n    item->index = index;\n    item->mexpire.trash = 1;\n    /* mimic kvobj of type HASH to pass checks in estore */\n    item->kv.type = OBJ_HASH;\n    item->kv.encoding = OBJ_ENCODING_LISTPACK_EX;\n    return item;\n}\n\nstatic ExpireAction activeExpireTestCb(eItem item, void *ctx) {\n    UNUSED(ctx);\n    zfree(item);\n    return 0;\n}\n\nint estoreTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    /* Initialize minimal server state needed for testing */\n    server.hz = 10;\n    server.unixtime = time(NULL);\n\n    TEST(\"Create and destroy 
estore\") {\n        estore *es = estoreCreate(&testEbucketsType, 0);\n        assert(es != NULL);\n        assert(estoreIsEmpty(es));\n        assert(estoreSize(es) == 0);\n        estoreRelease(es);\n    }\n\n    TEST(\"Create estore with multiple buckets\") {\n        estore *es = estoreCreate(&testEbucketsType, 2); /* 4 buckets */\n        assert(es != NULL);\n        assert(estoreIsEmpty(es));\n        assert(estoreSize(es) == 0);\n\n        /* Test bucket access */\n        ebuckets *bucket0 = estoreGetBuckets(es, 0);\n        ebuckets *bucket1 = estoreGetBuckets(es, 1);\n        ebuckets *bucket2 = estoreGetBuckets(es, 2);\n        ebuckets *bucket3 = estoreGetBuckets(es, 3);\n\n        assert(bucket0 != NULL);\n        assert(bucket1 != NULL);\n        assert(bucket2 != NULL);\n        assert(bucket3 != NULL);\n\n        /* All buckets should be different */\n        assert(bucket0 != bucket1);\n        assert(bucket0 != bucket2);\n        assert(bucket1 != bucket3);\n\n        estoreRelease(es);\n    }\n\n    TEST(\"Add and remove items\") {\n        estore *es = estoreCreate(&testEbucketsType, 1); /* 2 buckets */\n\n        /* Test initial state */\n        assert(estoreSize(es) == 0);\n        assert(estoreIsEmpty(es));\n\n        /* Create test items */\n        TestItem *item1 = createTestItem(1);\n        TestItem *item2 = createTestItem(2);\n        TestItem *item3 = createTestItem(3);\n\n        /* Add items to different buckets */\n        estoreAdd(es, 0, item1, 1000);\n        assert(estoreSize(es) == 1);\n        assert(!estoreIsEmpty(es));\n\n        estoreAdd(es, 1, item2, 2000);\n        assert(estoreSize(es) == 2);\n\n        estoreAdd(es, 0, item3, 3000);  /* Add another item to bucket 0 */\n        assert(estoreSize(es) == 3);\n\n        /* Verify expiration times are set correctly */\n        assert(ebGetMetaExpTime(&item1->mexpire) == 1000);\n        assert(ebGetMetaExpTime(&item2->mexpire) == 2000);\n        
assert(ebGetMetaExpTime(&item3->mexpire) == 3000);\n\n        /* Remove items */\n        uint64_t expireTime1 = estoreRemove(es, 0, item1);\n        assert(expireTime1 == 1000);\n        assert(estoreSize(es) == 2);\n        zfree(item1);\n\n        uint64_t expireTime2 = estoreRemove(es, 1, item2);\n        assert(expireTime2 == 2000);\n        assert(estoreSize(es) == 1);\n        zfree(item2);\n\n        uint64_t expireTime3 = estoreRemove(es, 0, item3);\n        assert(expireTime3 == 3000);\n        assert(estoreSize(es) == 0);\n        assert(estoreIsEmpty(es));\n        zfree(item3);\n\n        /* Clean up - items are freed by the onDeleteItem callback */\n        estoreRelease(es);\n    }\n\n    TEST(\"Update item expiration\") {\n        estore *es = estoreCreate(&testEbucketsType, 0); /* 1 bucket */\n\n        /* Create and add a test item */\n        TestItem *item = createTestItem(1);\n        estoreAdd(es, 0, item, 1000);\n        assert(estoreSize(es) == 1);\n\n        /* Verify initial expiration time */\n        assert(ebGetMetaExpTime(&item->mexpire) == 1000);\n\n        /* Update expiration time */\n        estoreUpdate(es, 0, item, 2000);\n        assert(estoreSize(es) == 1); /* Size should remain the same */\n        assert(ebGetMetaExpTime(&item->mexpire) == 2000);\n\n        /* Update again to a different time */\n        estoreUpdate(es, 0, item, 500);\n        assert(estoreSize(es) == 1);\n        assert(ebGetMetaExpTime(&item->mexpire) == 500);\n\n        /* Clean up */\n        estoreRemove(es, 0, item);\n        assert(estoreSize(es) == 0);\n        zfree(item);\n\n        estoreRelease(es);\n    }\n\n    TEST(\"Non-empty bucket iteration\") {\n        estore *es = estoreCreate(&testEbucketsType, 2); /* 4 buckets */\n\n        /* Test bucket iteration on empty store */\n        assert(estoreGetFirstNonEmptyBucket(es) == 0); /* Returns 0 when empty */\n        assert(estoreGetNextNonEmptyBucket(es, 0) == -1); /* No next bucket when empty 
*/\n\n        /* Create test items and add to specific buckets */\n        TestItem *item1 = createTestItem(1);\n        TestItem *item2 = createTestItem(2);\n        TestItem *item3 = createTestItem(3);\n\n        /* Add to bucket 1 and 3 (skip 0 and 2) */\n        estoreAdd(es, 1, item1, 1000);\n        estoreAdd(es, 3, item2, 2000);\n        estoreAdd(es, 3, item3, 3000);  /* Add another item to bucket 3 */\n\n        assert(estoreSize(es) == 3);\n\n        /* Test iteration through non-empty buckets */\n        int firstBucket = estoreGetFirstNonEmptyBucket(es);\n        assert(firstBucket == 1);\n\n        int nextBucket = estoreGetNextNonEmptyBucket(es, firstBucket);\n        assert(nextBucket == 3);\n\n        int lastBucket = estoreGetNextNonEmptyBucket(es, nextBucket);\n        assert(lastBucket == -1); /* No more non-empty buckets */\n\n        /* Test iteration from different starting points */\n        assert(estoreGetNextNonEmptyBucket(es, 0) == 1);\n        assert(estoreGetNextNonEmptyBucket(es, 2) == 3);\n\n        /* Clean up */\n        estoreRemove(es, 1, item1);\n        zfree(item1);\n        estoreRemove(es, 3, item2);\n        zfree(item2);\n        estoreRemove(es, 3, item3);\n        zfree(item3);\n        assert(estoreSize(es) == 0);\n\n        estoreRelease(es);\n    }\n\n    TEST(\"Empty estore\") {\n        estore *es = estoreCreate(&testEbucketsType, 1); /* 2 buckets */\n\n        /* Add some items */\n        TestItem *item1 = createTestItem(1);\n        TestItem *item2 = createTestItem(2);\n        TestItem *item3 = createTestItem(3);\n\n        estoreAdd(es, 0, item1, 1000);\n        estoreAdd(es, 1, item2, 2000);\n        estoreAdd(es, 0, item3, 3000);\n        assert(estoreSize(es) == 3);\n        assert(!estoreIsEmpty(es));\n\n        /* Empty the store - this should call onDeleteItem for all items */\n        estoreEmpty(es);\n        assert(estoreSize(es) == 0);\n        assert(estoreIsEmpty(es));\n\n        /* Verify buckets 
are empty */\n        assert(estoreGetFirstNonEmptyBucket(es) == 0); /* Returns 0 when empty */\n        assert(estoreGetNextNonEmptyBucket(es, 0) == -1);\n\n        estoreRelease(es);\n    }\n\n    TEST(\"Active expiration\") {\n        estore *es = estoreCreate(&testEbucketsType, 14); /* 2^14 buckets */\n\n        /* Create items with different expiration times */\n        TestItem *expiredItem1 = createTestItem(1);\n        TestItem *expiredItem2 = createTestItem(2);\n        TestItem *expiredItem3 = createTestItem(3);\n        TestItem *futureItem = createTestItem(4);\n\n        estoreAdd(es, 0, expiredItem1, 1023);\n        estoreAdd(es, 0, expiredItem2, 2047);\n        estoreAdd(es, 1, expiredItem3, 127);\n        estoreAdd(es, 0, futureItem, 4095);\n        assert(estoreSize(es) == 4);\n\n        /* Perform active expiration */\n        ExpireInfo expireInfo = {\n            .maxToExpire = UINT64_MAX,\n            .onExpireItem = activeExpireTestCb,\n            .ctx = NULL,\n            .now = 2048,  /* Current time in milliseconds */\n            .itemsExpired = 0\n        };\n\n        estoreActiveExpire(es, 0, &expireInfo);\n\n        /* The expired items should be removed, future item should remain */\n        assert(expireInfo.itemsExpired == 2);\n        assert(estoreSize(es) == 2);\n        \n        estoreActiveExpire(es, 1, &expireInfo);\n        assert(expireInfo.itemsExpired == 1);\n        assert(estoreSize(es) == 1);\n\n        /* Clean up remaining item */\n        estoreRemove(es, 0, futureItem);\n        zfree(futureItem);\n        assert(estoreSize(es) == 0);\n\n        estoreRelease(es);\n    }\n    \n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/estore.h",
    "content": "/*\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * estore.h -- Expiration Store implementation\n *\n * ESTORE (Expiration Store)\n * =========================\n * \n * Index-based expiration store implementation. Similar to kvstore, but built\n * on top of ebuckets instead of dict. Items stored in estore must embed an\n * ExpireMeta, enabling efficient active-expiration.\n *\n * Estore is currently used to manage \"subexpiry\" only for hash objects with\n * field-level expiration (HFE). Each hash with HFE is registered in estore\n * with the earliest expiration time among its fields.\n *\n * USAGE IN REDIS\n * ==============\n * This implementation is used to efficiently track hash objects that have\n * field-level expiration (HFE):\n * - Each hash with HFE is registered with its earliest field expiration time\n * - Enables efficient active expiration of hash fields\n * - Uses Fenwick tree for efficient iteration through non-empty buckets\n * - Supports cluster mode with per-slot buckets\n *\n * IMPLEMENTATION NOTES\n * ====================\n * - Built on top of ebuckets data structure for expiration management\n * - Uses Fenwick tree to track cumulative item counts across buckets\n * - Supports both single bucket (standalone) and multiple buckets (cluster) modes\n * - All operations have O(log n) time complexity for bucket selection\n *\n * STRUCTURE\n * =========\n * - ebArray: Array of ebuckets (one per slot in cluster mode, or just one)\n * - buckets_sizes: Fenwick tree tracking cumulative counts for efficient iteration\n * - bucket_type: EbucketsType defining callbacks for the stored items\n */\n\n#ifndef __ESTORE_H\n#define __ESTORE_H\n\n#include \"ebuckets.h\"\n#include \"fwtree.h\"\n\n/* Forward declaration of 
the estore structure */\ntypedef struct _estore estore;\n\n/* Estore API */\n\nestore *estoreCreate(EbucketsType *type, int num_buckets_bits);\n\nvoid estoreEmpty(estore *es);\n\nint estoreIsEmpty(estore *es);\n\nvoid estoreRelease(estore *es);\n\nvoid estoreActiveExpire(estore *es, int eidx, ExpireInfo *info);\n\nuint64_t estoreRemove(estore *es, int eidx, eItem item);\n\nvoid estoreAdd(estore *es, int eidx, eItem item, uint64_t when);\n\nvoid estoreUpdate(estore *es, int eidx, eItem item, uint64_t when);\n\nuint64_t estoreSize(estore *es);\n\nebuckets *estoreGetBuckets(estore *es, int eidx);\n\nint estoreGetFirstNonEmptyBucket(estore *es);\n\nint estoreGetNextNonEmptyBucket(estore *es, int eidx);\n\nvoid estoreMoveEbuckets(estore *src, estore *dst, int eidx);\n\n/* Hash-specific function to get ExpireMeta from a hash kvobj.\n * Once another data type supports subexpiry, we should refactor\n * ExpireMeta to optionally reside as part of the kvobj struct. */\nExpireMeta *hashGetExpireMeta(const eItem kvobjHash);\n\n#ifdef REDIS_TEST\nint estoreTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* __ESTORE_H */\n"
  },
  {
    "path": "src/eval.c",
    "content": "/*\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"sha1.h\"\n#include \"rand.h\"\n#include \"cluster.h\"\n#include \"monotonic.h\"\n#include \"resp_parser.h\"\n#include \"script_lua.h\"\n\n#include <lua.h>\n#include <lauxlib.h>\n#include <lualib.h>\n#if defined(USE_JEMALLOC)\n#include <lstate.h>\n#endif\n#include <ctype.h>\n#include <math.h>\n\nstatic int gc_count = 0; /* Counter for the number of GC requests, reset after each GC execution */\n\nvoid ldbInit(void);\nvoid ldbDisable(client *c);\nvoid ldbEnable(client *c);\nvoid evalGenericCommandWithDebugging(client *c, int evalsha);\nsds ldbCatStackValue(sds s, lua_State *lua, int idx);\nlistNode *luaScriptsLRUAdd(client *c, sds sha, int evalsha);\n\nstatic void dictLuaScriptDestructor(dict *d, void *val) {\n    UNUSED(d);\n    if (val == NULL) return; /* Lazy freeing will set value to NULL. */\n    decrRefCount(((luaScript*)val)->body);\n    zfree(val);\n}\n\nstatic uint64_t dictStrCaseHash(const void *key) {\n    return dictGenCaseHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\n/* lctx.lua_scripts sha (as sds string) -> scripts (as luaScript) cache. */\ndictType shaScriptObjectDictType = {\n        dictStrCaseHash,            /* hash function */\n        NULL,                       /* key dup */\n        NULL,                       /* val dup */\n        dictSdsKeyCaseCompare,      /* key compare */\n        dictSdsDestructor,          /* key destructor */\n        dictLuaScriptDestructor,    /* val destructor */\n        NULL                        /* allow to expand */\n};\n\n/* Lua context */\nstruct luaCtx {\n    lua_State *lua; /* The Lua interpreter. 
We use just one for all clients */\n    client *lua_client;   /* The \"fake client\" to query Redis from Lua */\n    dict *lua_scripts;         /* A dictionary of SHA1 -> Lua scripts */\n    list *lua_scripts_lru_list; /* A list of SHA1, first in first out LRU eviction. */\n    unsigned long long lua_scripts_mem;  /* Cached scripts' memory + oh */\n} lctx;\n\n/* Debugger shared state is stored inside this global structure. */\n#define LDB_BREAKPOINTS_MAX 64  /* Max number of breakpoints. */\n#define LDB_MAX_LEN_DEFAULT 256 /* Default len limit for replies / var dumps. */\nstruct ldbState {\n    connection *conn; /* Connection of the debugging client. */\n    int active; /* Are we debugging EVAL right now? */\n    int forked; /* Is this a fork()ed debugging session? */\n    list *logs; /* List of messages to send to the client. */\n    list *traces; /* Messages about Redis commands executed since last stop.*/\n    list *children; /* All forked debugging sessions pids. */\n    int bp[LDB_BREAKPOINTS_MAX]; /* An array of breakpoints line numbers. */\n    int bpcount; /* Number of valid entries inside bp. */\n    int step;   /* Stop at next line regardless of breakpoints. */\n    int luabp;  /* Stop at next line because redis.breakpoint() was called. */\n    sds *src;   /* Lua script source code split by line. */\n    int lines;  /* Number of lines in 'src'. */\n    int currentline;    /* Current line number. */\n    sds cbuf;   /* Debugger client command buffer. */\n    size_t maxlen;  /* Max var dump / reply length. */\n    int maxlen_hint_sent; /* Did we already hint about \"set maxlen\"? */\n} ldb;\n\n/* ---------------------------------------------------------------------------\n * Utility functions.\n * ------------------------------------------------------------------------- */\n\n/* Perform the SHA1 of the input string. 
We use this both for hashing script\n * bodies in order to obtain the Lua function name, and in the implementation\n * of redis.sha1().\n *\n * 'digest' should point to a 41-byte buffer: 40 bytes for the SHA1 converted\n * into hexadecimal, plus 1 byte for the null terminator. */\nvoid sha1hex(char *digest, char *script, size_t len) {\n    SHA1_CTX ctx;\n    unsigned char hash[20];\n    char *cset = \"0123456789abcdef\";\n    int j;\n\n    SHA1Init(&ctx);\n    SHA1Update(&ctx,(unsigned char*)script,len);\n    SHA1Final(hash,&ctx);\n\n    for (j = 0; j < 20; j++) {\n        digest[j*2] = cset[((hash[j]&0xF0)>>4)];\n        digest[j*2+1] = cset[(hash[j]&0xF)];\n    }\n    digest[40] = '\\0';\n}\n\n/* redis.breakpoint()\n *\n * Allows stopping execution during a debugging session from within\n * the Lua code, as if a breakpoint had been set in the code\n * immediately after the function. */\nint luaRedisBreakpointCommand(lua_State *lua) {\n    if (ldb.active) {\n        ldb.luabp = 1;\n        lua_pushboolean(lua,1);\n    } else {\n        lua_pushboolean(lua,0);\n    }\n    return 1;\n}\n\n/* redis.debug()\n *\n * Log a string message into the output console.\n * Can take multiple arguments that will be separated by commas.\n * Nothing is returned to the caller. */\nint luaRedisDebugCommand(lua_State *lua) {\n    if (!ldb.active) return 0;\n    int argc = lua_gettop(lua);\n    sds log = sdscatprintf(sdsempty(),\"<debug> line %d: \", ldb.currentline);\n    while(argc--) {\n        log = ldbCatStackValue(log,lua,-1 - argc);\n        if (argc != 0) log = sdscatlen(log,\", \",2);\n    }\n    ldbLog(log);\n    return 0;\n}\n\n/* redis.replicate_commands()\n *\n * DEPRECATED: Now does nothing and always returns true.\n * It used to turn on single-command replication if the script had not yet\n * called a write command, returning true. Otherwise, if the script had\n * already started to write, it returned false and stuck to whole-script\n * replication, which is our default.
*/\nint luaRedisReplicateCommandsCommand(lua_State *lua) {\n    lua_pushboolean(lua,1);\n    return 1;\n}\n\n/* Initialize the scripting environment.\n *\n * This function is called the first time at server startup with\n * the 'setup' argument set to 1.\n *\n * It can be called again multiple times during the lifetime of the Redis\n * process, with 'setup' set to 0, and following a scriptingRelease() call,\n * in order to reset the Lua scripting environment.\n *\n * However it is simpler to just call scriptingReset() that does just that. */\nvoid scriptingInit(int setup) {\n    if (setup) {\n        lctx.lua_client = NULL;\n        server.script_disable_deny_script = 0;\n        ldbInit();\n    }\n\n    lua_State *lua = createLuaState();\n    if (lua == NULL) {\n        serverLog(LL_WARNING, \"Failed creating the lua VM.\");\n        exit(1);\n    }\n\n    /* Initialize a dictionary we use to map SHAs to scripts.\n     * Initialize a list we use for lua script evictions, it shares the\n     * sha with the dictionary, so free fn is not set. 
*/\n    lctx.lua_scripts = dictCreate(&shaScriptObjectDictType);\n    lctx.lua_scripts_lru_list = listCreate();\n    lctx.lua_scripts_mem = 0;\n\n    luaRegisterRedisAPI(lua);\n\n    /* register debug commands */\n    lua_getglobal(lua,\"redis\");\n\n    /* redis.breakpoint */\n    lua_pushstring(lua,\"breakpoint\");\n    lua_pushcfunction(lua,luaRedisBreakpointCommand);\n    lua_settable(lua,-3);\n\n    /* redis.debug */\n    lua_pushstring(lua,\"debug\");\n    lua_pushcfunction(lua,luaRedisDebugCommand);\n    lua_settable(lua,-3);\n\n    /* redis.replicate_commands */\n    lua_pushstring(lua, \"replicate_commands\");\n    lua_pushcfunction(lua, luaRedisReplicateCommandsCommand);\n    lua_settable(lua, -3);\n\n    lua_setglobal(lua,\"redis\");\n\n    /* Add a helper function we use for pcall error reporting.\n     * Note that when the error is in the C function we want to report the\n     * information about the caller, that's what makes sense from the point\n     * of view of the user debugging a script. */\n    {\n        char *errh_func =       \"local dbg = debug\\n\"\n                                \"debug = nil\\n\"\n                                \"function __redis__err__handler(err)\\n\"\n                                \"  local i = dbg.getinfo(2,'nSl')\\n\"\n                                \"  if i and i.what == 'C' then\\n\"\n                                \"    i = dbg.getinfo(3,'nSl')\\n\"\n                                \"  end\\n\"\n                                \"  if type(err) ~= 'table' then\\n\"\n                                \"    err = {err='ERR ' .. 
tostring(err)}\"\n                                \"  end\"\n                                \"  if i then\\n\"\n                                \"    err['source'] = i.source\\n\"\n                                \"    err['line'] = i.currentline\\n\"\n                                \"  end\"\n                                \"  return err\\n\"\n                                \"end\\n\";\n        luaL_loadbuffer(lua,errh_func,strlen(errh_func),\"@err_handler_def\");\n        lua_pcall(lua,0,0,0);\n    }\n\n    /* Create the (non connected) client that we use to execute Redis commands\n     * inside the Lua interpreter.\n     * Note: there is no need to create it again when this function is called\n     * by scriptingReset(). */\n    if (lctx.lua_client == NULL) {\n        lctx.lua_client = createClient(NULL);\n        lctx.lua_client->flags |= CLIENT_SCRIPT;\n\n        /* We do not want to allow blocking commands inside Lua */\n        lctx.lua_client->flags |= CLIENT_DENY_BLOCKING;\n    }\n\n    /* Lock the global table from any changes */\n    lua_pushvalue(lua, LUA_GLOBALSINDEX);\n    luaSetErrorMetatable(lua);\n    /* Recursively lock all tables that can be reached from the global table */\n    luaSetTableProtectionRecursively(lua);\n    lua_pop(lua, 1);\n    /* Set metatables of basic types (string, number, nil etc.) readonly. */\n    luaSetTableProtectionForBasicTypes(lua);\n\n    lctx.lua = lua;\n}\n\n/* Free lua_scripts dict and close lua interpreter. */\nvoid freeLuaScriptsSync(dict *lua_scripts, list *lua_scripts_lru_list, lua_State *lua) {\n    dictRelease(lua_scripts);\n    listRelease(lua_scripts_lru_list);\n\n#if defined(USE_JEMALLOC)\n    /* When lua is closed, destroy the previously used private tcache. 
*/\n    void *ud = (global_State*)G(lua)->ud;\n    unsigned int lua_tcache = (unsigned int)(uintptr_t)ud;\n#endif\n\n    lua_gc(lua, LUA_GCCOLLECT, 0);\n    lua_close(lua);\n\n#if defined(USE_JEMALLOC)\n    je_mallctl(\"tcache.destroy\", NULL, NULL, (void *)&lua_tcache, sizeof(unsigned int));\n#endif\n}\n\n/* Release resources related to Lua scripting.\n * This function is used in order to reset the scripting environment. */\nvoid scriptingRelease(int async) {\n    if (async)\n        freeLuaScriptsAsync(lctx.lua_scripts, lctx.lua_scripts_lru_list, lctx.lua);\n    else\n        freeLuaScriptsSync(lctx.lua_scripts, lctx.lua_scripts_lru_list, lctx.lua);\n}\n\nvoid scriptingReset(int async) {\n    scriptingRelease(async);\n    scriptingInit(0);\n}\n\n/* ---------------------------------------------------------------------------\n * EVAL and SCRIPT commands implementation\n * ------------------------------------------------------------------------- */\n\nstatic void evalCalcFunctionName(int evalsha, sds script, char *out_funcname) {\n    /* We obtain the script SHA1, then check if this function is already\n     * defined into the Lua state */\n    out_funcname[0] = 'f';\n    out_funcname[1] = '_';\n    if (!evalsha) {\n        /* Hash the code if this is an EVAL call */\n        sha1hex(out_funcname+2,script,sdslen(script));\n    } else {\n        /* We already have the SHA if it is an EVALSHA */\n        int j;\n        char *sha = script;\n\n        /* Convert to lowercase. We don't use tolower since the function\n         * managed to always show up in the profiler output consuming\n         * a non trivial amount of time. 
*/\n        for (j = 0; j < 40; j++)\n            out_funcname[j+2] = (sha[j] >= 'A' && sha[j] <= 'Z') ?\n                sha[j]+('a'-'A') : sha[j];\n        out_funcname[42] = '\\0';\n    }\n}\n\n/* Helper function to try and extract shebang flags from the script body.\n * If no shebang is found, return with success and COMPAT mode flag.\n * The err arg is optional, can be used to get a detailed error string.\n * The out_shebang_len arg is optional, can be used to trim the shebang from the script.\n * Returns C_OK on success, and C_ERR on error. */\nint evalExtractShebangFlags(sds body, uint64_t *out_flags, ssize_t *out_shebang_len, sds *err) {\n    ssize_t shebang_len = 0;\n    uint64_t script_flags = SCRIPT_FLAG_EVAL_COMPAT_MODE;\n    if (!strncmp(body, \"#!\", 2)) {\n        int numparts,j;\n        char *shebang_end = strchr(body, '\\n');\n        if (shebang_end == NULL) {\n            if (err)\n                *err = sdsnew(\"Invalid script shebang\");\n            return C_ERR;\n        }\n        shebang_len = shebang_end - body;\n        sds shebang = sdsnewlen(body, shebang_len);\n        sds *parts = sdssplitargs(shebang, &numparts);\n        sdsfree(shebang);\n        if (!parts || numparts == 0) {\n            if (err)\n                *err = sdsnew(\"Invalid engine in script shebang\");\n            sdsfreesplitres(parts, numparts);\n            return C_ERR;\n        }\n        /* Verify lua interpreter was specified */\n        if (strcmp(parts[0], \"#!lua\")) {\n            if (err)\n                *err = sdscatfmt(sdsempty(), \"Unexpected engine in script shebang: %s\", parts[0]);\n            sdsfreesplitres(parts, numparts);\n            return C_ERR;\n        }\n        script_flags &= ~SCRIPT_FLAG_EVAL_COMPAT_MODE;\n        for (j = 1; j < numparts; j++) {\n            if (!strncmp(parts[j], \"flags=\", 6)) {\n                sdsrange(parts[j], 6, -1);\n                int numflags, jj;\n                sds *flags = sdssplitlen(parts[j], 
sdslen(parts[j]), \",\", 1, &numflags);\n                for (jj = 0; jj < numflags; jj++) {\n                    scriptFlag *sf;\n                    for (sf = scripts_flags_def; sf->flag; sf++) {\n                        if (!strcmp(flags[jj], sf->str)) break;\n                    }\n                    if (!sf->flag) {\n                        if (err)\n                            *err = sdscatfmt(sdsempty(), \"Unexpected flag in script shebang: %s\", flags[jj]);\n                        sdsfreesplitres(flags, numflags);\n                        sdsfreesplitres(parts, numparts);\n                        return C_ERR;\n                    }\n                    script_flags |= sf->flag;\n                }\n                sdsfreesplitres(flags, numflags);\n            } else {\n                /* We only support function flags options for lua scripts */\n                if (err)\n                    *err = sdscatfmt(sdsempty(), \"Unknown lua shebang option: %s\", parts[j]);\n                sdsfreesplitres(parts, numparts);\n                return C_ERR;\n            }\n        }\n        sdsfreesplitres(parts, numparts);\n    }\n    if (out_shebang_len)\n        *out_shebang_len = shebang_len;\n    *out_flags = script_flags;\n    return C_OK;\n}\n\n/* Try to extract command flags if we can, returns the modified flags.\n * Note that it does not guarantee the command arguments are right. 
*/\nuint64_t evalGetCommandFlags(client *c, uint64_t cmd_flags) {\n    char funcname[43];\n    int evalsha = c->cmd->proc == evalShaCommand || c->cmd->proc == evalShaRoCommand;\n    if (evalsha && sdslen(c->argv[1]->ptr) != 40)\n        return cmd_flags;\n    uint64_t script_flags;\n    evalCalcFunctionName(evalsha, c->argv[1]->ptr, funcname);\n    char *lua_cur_script = funcname + 2;\n    c->cur_script = dictFind(lctx.lua_scripts, lua_cur_script);\n    if (!c->cur_script) {\n        if (evalsha)\n            return cmd_flags;\n        if (evalExtractShebangFlags(c->argv[1]->ptr, &script_flags, NULL, NULL) == C_ERR)\n            return cmd_flags;\n    } else {\n        luaScript *l = dictGetVal(c->cur_script);\n        script_flags = l->flags;\n    }\n    if (script_flags & SCRIPT_FLAG_EVAL_COMPAT_MODE)\n        return cmd_flags;\n    return scriptFlagsToCmdFlags(cmd_flags, script_flags);\n}\n\n/* Define a Lua function with the specified body.\n * The function name will be generated in the following form:\n *\n *   f_<hex sha1 sum>\n *\n * The function increments the reference count of the 'body' object as a\n * side effect of a successful call.\n *\n * On success a pointer to an SDS string representing the function SHA1 of the\n * just added function is returned (and will be valid until the next call\n * to scriptingReset() function), otherwise NULL is returned.\n *\n * The function handles the fact of being called with a script that already\n * exists, and in such a case, it behaves like in the success case.\n *\n * If 'c' is not NULL, on error the client is informed with an appropriate\n * error describing the nature of the problem and the Lua interpreter error.\n *\n * 'evalsha' indicating whether the lua function is created from the EVAL context\n * or from the SCRIPT LOAD. 
*/\nsds luaCreateFunction(client *c, robj *body, int evalsha) {\n    char funcname[43];\n    dictEntry *de;\n    uint64_t script_flags;\n\n    funcname[0] = 'f';\n    funcname[1] = '_';\n    sha1hex(funcname+2,body->ptr,sdslen(body->ptr));\n\n    if ((de = dictFind(lctx.lua_scripts,funcname+2)) != NULL) {\n        return dictGetKey(de);\n    }\n\n    /* Handle shebang header in script code */\n    ssize_t shebang_len = 0;\n    sds err = NULL;\n    if (evalExtractShebangFlags(body->ptr, &script_flags, &shebang_len, &err) == C_ERR) {\n        if (c != NULL) {\n            addReplyErrorSds(c, err);\n        }\n        return NULL;\n    }\n\n    /* Note that in case of a shebang line we skip it but keep the line feed to conserve the user's line numbers */\n    if (luaL_loadbuffer(lctx.lua,(char*)body->ptr + shebang_len,sdslen(body->ptr) - shebang_len,\"@user_script\")) {\n        if (c != NULL) {\n            addReplyErrorFormat(c,\n                \"Error compiling script (new function): %s\",\n                lua_tostring(lctx.lua,-1));\n        }\n        lua_pop(lctx.lua,1);\n        luaGC(lctx.lua, &gc_count);\n        return NULL;\n    }\n\n    serverAssert(lua_isfunction(lctx.lua, -1));\n\n    lua_setfield(lctx.lua, LUA_REGISTRYINDEX, funcname);\n\n    /* We also save a SHA1 -> Original script map in a dictionary\n     * so that we can replicate / write in the AOF all the\n     * EVALSHA commands as EVAL using the original script. */\n    luaScript *l = zcalloc(sizeof(luaScript));\n    l->body = body;\n    l->flags = script_flags;\n    sds sha = sdsnewlen(funcname+2,40);\n    l->node = luaScriptsLRUAdd(c, sha, evalsha);\n    int retval = dictAdd(lctx.lua_scripts,sha,l);\n    serverAssertWithInfo(c ? 
c : lctx.lua_client,NULL,retval == DICT_OK);\n    lctx.lua_scripts_mem += sdsZmallocSize(sha) + getStringObjectSdsUsedMemory(body);\n    incrRefCount(body);\n\n    /* Perform GC after creating the script and adding it to the LRU list,\n     * as script may be evicted during addition. */\n    luaGC(lctx.lua, &gc_count);\n\n    return sha;\n}\n\n/* Delete a Lua function with the specified sha.\n *\n * This will delete the lua function from the lua interpreter and delete\n * the lua function from server. */\nvoid luaDeleteFunction(client *c, sds sha) {\n    /* Delete the script from lua interpreter. */\n    char funcname[43];\n    funcname[0] = 'f';\n    funcname[1] = '_';\n    memcpy(funcname+2, sha, 40);\n    funcname[42] = '\\0';\n    lua_pushnil(lctx.lua);\n    lua_setfield(lctx.lua, LUA_REGISTRYINDEX, funcname);\n\n    /* Delete the script from server. */\n    dictEntry *de = dictUnlink(lctx.lua_scripts, sha);\n    serverAssertWithInfo(c ? c : lctx.lua_client, NULL, de);\n    luaScript *l = dictGetVal(de);\n    /* We only delete `EVAL` scripts, which must exist in the LRU list. */\n    serverAssert(l->node);\n    listDelNode(lctx.lua_scripts_lru_list, l->node);\n    lctx.lua_scripts_mem -= sdsZmallocSize(sha) + getStringObjectSdsUsedMemory(l->body);\n    dictFreeUnlinkedEntry(lctx.lua_scripts, de);\n}\n\n/* Users who abuse EVAL will generate a new lua script on each call, which can\n * consume large amounts of memory over time. Since EVAL is mostly the one that\n * abuses the lua cache, and these won't have pipeline issues (scripts won't\n * disappear when EVALSHA needs it and cause failure), we implement script eviction\n * only for these (not for one loaded with SCRIPT LOAD). 
Considering that we don't\n * have many scripts, then unlike keys, we don't need to worry about the memory\n * usage of keeping a true sorted LRU linked list.\n *\n * 'evalsha' indicating whether the lua function is added from the EVAL context\n * or from the SCRIPT LOAD.\n *\n * Returns the corresponding node added, which is used to save it in luaScript\n * and use it for quick removal and re-insertion into an LRU list each time the\n * script is used. */\n#define LRU_LIST_LENGTH 500\nlistNode *luaScriptsLRUAdd(client *c, sds sha, int evalsha) {\n    /* Script eviction only applies to EVAL, not SCRIPT LOAD. */\n    if (evalsha) return NULL;\n\n    /* Evict oldest. */\n    while (listLength(lctx.lua_scripts_lru_list) >= LRU_LIST_LENGTH) {\n        listNode *ln = listFirst(lctx.lua_scripts_lru_list);\n        sds oldest = listNodeValue(ln);\n        luaDeleteFunction(c, oldest);\n        server.stat_evictedscripts++;\n    }\n\n    /* Add current. */\n    listAddNodeTail(lctx.lua_scripts_lru_list, sha);\n    return listLast(lctx.lua_scripts_lru_list);\n}\n\nvoid evalGenericCommand(client *c, int evalsha) {\n    lua_State *lua = lctx.lua;\n    char funcname[43];\n    long long numkeys;\n\n    /* Get the number of arguments that are keys */\n    if (getLongLongFromObjectOrReply(c,c->argv[2],&numkeys,NULL) != C_OK)\n        return;\n    if (numkeys > (c->argc - 3)) {\n        addReplyError(c,\"Number of keys can't be greater than number of args\");\n        return;\n    } else if (numkeys < 0) {\n        addReplyError(c,\"Number of keys can't be negative\");\n        return;\n    }\n\n    if (c->cur_script) {\n        funcname[0] = 'f', funcname[1] = '_';\n        memcpy(funcname+2, dictGetKey(c->cur_script), 40);\n        funcname[42] = '\\0';\n    } else\n        evalCalcFunctionName(evalsha, c->argv[1]->ptr, funcname);\n\n    /* Push the pcall error handler function on the stack. 
*/\n    lua_getglobal(lua, \"__redis__err__handler\");\n\n    /* Try to lookup the Lua function */\n    lua_getfield(lua, LUA_REGISTRYINDEX, funcname);\n    if (lua_isnil(lua,-1)) {\n        lua_pop(lua,1); /* remove the nil from the stack */\n        /* Function not defined... let's define it if we have the\n         * body of the function. If this is an EVALSHA call we can just\n         * return an error. */\n        if (evalsha) {\n            lua_pop(lua,1); /* remove the error handler from the stack. */\n            addReplyErrorObject(c, shared.noscripterr);\n            return;\n        }\n        if (luaCreateFunction(c, c->argv[1], evalsha) == NULL) {\n            lua_pop(lua,1); /* remove the error handler from the stack. */\n            /* The error is sent to the client by luaCreateFunction()\n             * itself when it returns NULL. */\n            return;\n        }\n        /* Now the following is guaranteed to return non nil */\n        lua_getfield(lua, LUA_REGISTRYINDEX, funcname);\n        serverAssert(!lua_isnil(lua,-1));\n    }\n\n    char *lua_cur_script = funcname + 2;\n    dictEntry *de = c->cur_script;\n    if (!de)\n        de = dictFind(lctx.lua_scripts, lua_cur_script);\n    luaScript *l = dictGetVal(de);\n    int ro = c->cmd->proc == evalRoCommand || c->cmd->proc == evalShaRoCommand;\n\n    scriptRunCtx rctx;\n    if (scriptPrepareForRun(&rctx, lctx.lua_client, c, lua_cur_script, l->flags, ro) != C_OK) {\n        lua_pop(lua,2); /* Remove the function and error handler. */\n        return;\n    }\n    rctx.flags |= SCRIPT_EVAL_MODE; /* mark the current run as EVAL (as opposed to FCALL) so we'll\n                                      get appropriate error messages and logs */\n\n    if (l->node) {\n        /* Quick removal and re-insertion after the script is called to\n         * maintain the LRU list. 
*/\n        listUnlinkNode(lctx.lua_scripts_lru_list, l->node);\n        listLinkNodeTail(lctx.lua_scripts_lru_list, l->node);\n    }\n\n    luaCallFunction(&rctx, lua, c->argv+3, numkeys, c->argv+3+numkeys, c->argc-3-numkeys, ldb.active);\n    lua_pop(lua,1); /* Remove the error handler. */\n    scriptResetRun(&rctx);\n    luaGC(lua, &gc_count);\n\n    /* We can no longer touch 'l' here, as it may have been reallocated by activedefrag\n     * during AOF loading of long-running scripts. This issue does not affect newly\n     * generated AOF files, in which scripts propagate their effects rather than the\n     * script itself. */\n}\n\nvoid evalCommand(client *c) {\n    /* Explicitly feed monitor here so that lua commands appear after their\n     * script command. */\n    replicationFeedMonitors(c,server.monitors,c->db->id,c->argv,c->argc);\n    if (!(c->flags & CLIENT_LUA_DEBUG))\n        evalGenericCommand(c,0);\n    else\n        evalGenericCommandWithDebugging(c,0);\n}\n\nvoid evalRoCommand(client *c) {\n    evalCommand(c);\n}\n\nvoid evalShaCommand(client *c) {\n    /* Explicitly feed monitor here so that lua commands appear after their\n     * script command. */\n    replicationFeedMonitors(c,server.monitors,c->db->id,c->argv,c->argc);\n    if (sdslen(c->argv[1]->ptr) != 40) {\n        /* We know that a match is not possible if the provided SHA is\n         * not the right length.
So we return an error ASAP, this way\n         * evalGenericCommand() can be implemented without string length\n         * sanity check */\n        addReplyErrorObject(c, shared.noscripterr);\n        return;\n    }\n    if (!(c->flags & CLIENT_LUA_DEBUG))\n        evalGenericCommand(c,1);\n    else {\n        addReplyError(c,\"Please use EVAL instead of EVALSHA for debugging\");\n        return;\n    }\n}\n\nvoid evalShaRoCommand(client *c) {\n    evalShaCommand(c);\n}\n\nvoid scriptCommand(client *c) {\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"DEBUG (YES|SYNC|NO)\",\n\"    Set the debug mode for subsequent scripts executed.\",\n\"EXISTS <sha1> [<sha1> ...]\",\n\"    Return information about the existence of the scripts in the script cache.\",\n\"FLUSH [ASYNC|SYNC]\",\n\"    Flush the Lua scripts cache. Very dangerous on replicas.\",\n\"    When called without the optional mode argument, the behavior is determined by the\",\n\"    lazyfree-lazy-user-flush configuration directive. Valid modes are:\",\n\"    * ASYNC: Asynchronously flush the scripts cache.\",\n\"    * SYNC: Synchronously flush the scripts cache.\",\n\"KILL\",\n\"    Kill the currently executing Lua script.\",\n\"LOAD <script>\",\n\"    Load a script into the scripts cache without executing it.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (c->argc >= 2 && !strcasecmp(c->argv[1]->ptr,\"flush\")) {\n        int async = 0;\n        if (c->argc == 3 && !strcasecmp(c->argv[2]->ptr,\"sync\")) {\n            async = 0;\n        } else if (c->argc == 3 && !strcasecmp(c->argv[2]->ptr,\"async\")) {\n            async = 1;\n        } else if (c->argc == 2) {\n            async = server.lazyfree_lazy_user_flush ? 
1 : 0;\n        } else {\n            addReplyError(c,\"SCRIPT FLUSH only supports the SYNC|ASYNC option\");\n            return;\n        }\n        scriptingReset(async);\n        addReply(c,shared.ok);\n    } else if (c->argc >= 2 && !strcasecmp(c->argv[1]->ptr,\"exists\")) {\n        int j;\n\n        addReplyArrayLen(c, c->argc-2);\n        for (j = 2; j < c->argc; j++) {\n            if (dictFind(lctx.lua_scripts,c->argv[j]->ptr))\n                addReply(c,shared.cone);\n            else\n                addReply(c,shared.czero);\n        }\n    } else if (c->argc == 3 && !strcasecmp(c->argv[1]->ptr,\"load\")) {\n        sds sha = luaCreateFunction(c, c->argv[2], 1);\n        if (sha == NULL) return; /* The error was sent by luaCreateFunction(). */\n        addReplyBulkCBuffer(c,sha,40);\n    } else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"kill\")) {\n        scriptKill(c, 1);\n    } else if (c->argc == 3 && !strcasecmp(c->argv[1]->ptr,\"debug\")) {\n        if (clientHasPendingReplies(c)) {\n            addReplyError(c,\"SCRIPT DEBUG must be called outside a pipeline\");\n            return;\n        }\n        if (!strcasecmp(c->argv[2]->ptr,\"no\")) {\n            ldbDisable(c);\n            addReply(c,shared.ok);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"yes\")) {\n            ldbEnable(c);\n            addReply(c,shared.ok);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"sync\")) {\n            ldbEnable(c);\n            addReply(c,shared.ok);\n            c->flags |= CLIENT_LUA_DEBUG_SYNC;\n        } else {\n            addReplyError(c,\"Use SCRIPT DEBUG YES/SYNC/NO\");\n            return;\n        }\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\nunsigned long evalScriptsMemoryVM(void) {\n    return luaMemory(lctx.lua);\n}\n\ndict* evalScriptsDict(void) {\n    return lctx.lua_scripts;\n}\n\nunsigned long evalScriptsMemoryEngine(void) {\n    return lctx.lua_scripts_mem +\n            dictMemUsage(lctx.lua_scripts) 
+\n            dictSize(lctx.lua_scripts) * sizeof(luaScript) +\n            listLength(lctx.lua_scripts_lru_list) * sizeof(listNode);\n}\n\n/* ---------------------------------------------------------------------------\n * LDB: Redis Lua debugging facilities\n * ------------------------------------------------------------------------- */\n\n/* Initialize Lua debugger data structures. */\nvoid ldbInit(void) {\n    ldb.conn = NULL;\n    ldb.active = 0;\n    ldb.logs = listCreate();\n    listSetFreeMethod(ldb.logs, sdsfreegeneric);\n    ldb.children = listCreate();\n    ldb.src = NULL;\n    ldb.lines = 0;\n    ldb.cbuf = sdsempty();\n}\n\n/* Remove all the pending messages in the specified list. */\nvoid ldbFlushLog(list *log) {\n    listNode *ln;\n\n    while((ln = listFirst(log)) != NULL)\n        listDelNode(log,ln);\n}\n\nint ldbIsEnabled(void){\n    return ldb.active && ldb.step;\n}\n\n/* Enable debug mode of Lua scripts for this client. */\nvoid ldbEnable(client *c) {\n    c->flags |= CLIENT_LUA_DEBUG;\n    ldbFlushLog(ldb.logs);\n    ldb.conn = c->conn;\n    ldb.step = 1;\n    ldb.bpcount = 0;\n    ldb.luabp = 0;\n    sdsfree(ldb.cbuf);\n    ldb.cbuf = sdsempty();\n    ldb.maxlen = LDB_MAX_LEN_DEFAULT;\n    ldb.maxlen_hint_sent = 0;\n}\n\n/* Exit debugging mode from the POV of client. This function is not enough\n * to properly shut down a client debugging session, see ldbEndSession()\n * for more information. */\nvoid ldbDisable(client *c) {\n    c->flags &= ~(CLIENT_LUA_DEBUG|CLIENT_LUA_DEBUG_SYNC);\n}\n\n/* Append a log entry to the specified LDB log. */\nvoid ldbLog(sds entry) {\n    listAddNodeTail(ldb.logs,entry);\n}\n\n/* A version of ldbLog() which prevents producing logs greater than\n * ldb.maxlen. The first time the limit is reached a hint is generated\n * to inform the user that reply trimming can be disabled using the\n * debugger \"maxlen\" command. 
*/\nvoid ldbLogWithMaxLen(sds entry) {\n    int trimmed = 0;\n    if (ldb.maxlen && sdslen(entry) > ldb.maxlen) {\n        sdsrange(entry,0,ldb.maxlen-1);\n        entry = sdscatlen(entry,\" ...\",4);\n        trimmed = 1;\n    }\n    ldbLog(entry);\n    if (trimmed && ldb.maxlen_hint_sent == 0) {\n        ldb.maxlen_hint_sent = 1;\n        ldbLog(sdsnew(\n        \"<hint> The above reply was trimmed. Use 'maxlen 0' to disable trimming.\"));\n    }\n}\n\n/* Send ldb.logs to the debugging client as a multi-bulk reply\n * consisting of simple strings. Log entries which include newlines have them\n * replaced with spaces. The entries sent are also consumed. */\nvoid ldbSendLogs(void) {\n    sds proto = sdsempty();\n    proto = sdscatfmt(proto,\"*%i\\r\\n\", (int)listLength(ldb.logs));\n    while(listLength(ldb.logs)) {\n        listNode *ln = listFirst(ldb.logs);\n        proto = sdscatlen(proto,\"+\",1);\n        sdsmapchars(ln->value,\"\\r\\n\",\"  \",2);\n        proto = sdscatsds(proto,ln->value);\n        proto = sdscatlen(proto,\"\\r\\n\",2);\n        listDelNode(ldb.logs,ln);\n    }\n    if (connWrite(ldb.conn,proto,sdslen(proto)) == -1) {\n        /* Avoid warning. We don't check the return value of write()\n         * since the next read() will catch the I/O error and will\n         * close the debugging session. */\n    }\n    sdsfree(proto);\n}\n\n/* Start a debugging session before calling EVAL implementation.\n * The technique we use is to capture the client socket file descriptor,\n * in order to perform direct I/O with it from within Lua hooks. 
This\n * way we don't have to re-enter Redis in order to handle I/O.\n *\n * The function returns 1 if the caller should proceed to call EVAL,\n * and 0 if instead the caller should abort the operation (this happens\n * for the parent in a forked session, since it's up to the children\n * to continue, or when fork returned an error).\n *\n * The caller should call ldbEndSession() only if ldbStartSession()\n * returned 1. */\nint ldbStartSession(client *c) {\n    ldb.forked = (c->flags & CLIENT_LUA_DEBUG_SYNC) == 0;\n    if (ldb.forked) {\n        pid_t cp = redisFork(CHILD_TYPE_LDB);\n        if (cp == -1) {\n            addReplyErrorFormat(c,\"Fork() failed: can't run EVAL in debugging mode: %s\", strerror(errno));\n            return 0;\n        } else if (cp == 0) {\n            /* Child. Let's ignore important signals handled by the parent. */\n            struct sigaction act;\n            sigemptyset(&act.sa_mask);\n            act.sa_flags = 0;\n            act.sa_handler = SIG_IGN;\n            sigaction(SIGTERM, &act, NULL);\n            sigaction(SIGINT, &act, NULL);\n\n            /* Log the creation of the child and close the listening\n             * socket to make sure if the parent crashes a reset is sent\n             * to the clients. */\n            serverLog(LL_NOTICE,\"Redis forked for debugging eval\");\n        } else {\n            /* Parent */\n            listAddNodeTail(ldb.children,(void*)(unsigned long)cp);\n            freeClientAsync(c); /* Close the client in the parent side. */\n            return 0;\n        }\n    } else {\n        serverLog(LL_NOTICE,\n            \"Redis synchronous debugging eval session started\");\n    }\n\n    /* Setup our debugging session. */\n    connBlock(ldb.conn);\n    connSendTimeout(ldb.conn,5000);\n    ldb.active = 1;\n\n    /* First argument of EVAL is the script itself. We split it into different\n     * lines since this is the way the debugger accesses the source code. 
*/\n    sds srcstring = sdsdup(c->argv[1]->ptr);\n    size_t srclen = sdslen(srcstring);\n    while(srclen && (srcstring[srclen-1] == '\\n' ||\n                     srcstring[srclen-1] == '\\r'))\n    {\n        srcstring[--srclen] = '\\0';\n    }\n    sdssetlen(srcstring,srclen);\n    ldb.src = sdssplitlen(srcstring,sdslen(srcstring),\"\\n\",1,&ldb.lines);\n    sdsfree(srcstring);\n    return 1;\n}\n\n/* End a debugging session after the EVAL call with debugging enabled\n * has returned. */\nvoid ldbEndSession(client *c) {\n    /* Emit the remaining logs and an <endsession> mark. */\n    ldbLog(sdsnew(\"<endsession>\"));\n    ldbSendLogs();\n\n    /* If it's a fork()ed session, we just exit. */\n    if (ldb.forked) {\n        writeToClient(c,0);\n        serverLog(LL_NOTICE,\"Lua debugging session child exiting\");\n        exitFromChild(0, 0);\n    } else {\n        serverLog(LL_NOTICE,\n            \"Redis synchronous debugging eval session ended\");\n    }\n\n    /* Otherwise let's restore client's state. */\n    connNonBlock(ldb.conn);\n    connSendTimeout(ldb.conn,0);\n\n    /* Close the client connection after sending the final EVAL reply\n     * in order to signal the end of the debugging session. */\n    c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n\n    /* Cleanup. */\n    sdsfreesplitres(ldb.src,ldb.lines);\n    ldb.lines = 0;\n    ldb.active = 0;\n}\n\n/* If the specified pid is among the list of children spawned for\n * forked debugging sessions, it is removed from the children list.\n * If the pid was found, non-zero is returned. */\nint ldbRemoveChild(pid_t pid) {\n    listNode *ln = listSearchKey(ldb.children,(void*)(unsigned long)pid);\n    if (ln) {\n        listDelNode(ldb.children,ln);\n        return 1;\n    }\n    return 0;\n}\n\n/* Return the number of children for which we have not yet received a\n * termination acknowledgment via wait() in the parent process. 
*/\nint ldbPendingChildren(void) {\n    return listLength(ldb.children);\n}\n\n/* Kill all the forked sessions. */\nvoid ldbKillForkedSessions(void) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(ldb.children,&li);\n    while((ln = listNext(&li))) {\n        pid_t pid = (unsigned long) ln->value;\n        serverLog(LL_NOTICE,\"Killing debugging session %ld\",(long)pid);\n        kill(pid,SIGKILL);\n    }\n    listRelease(ldb.children);\n    ldb.children = listCreate();\n}\n\n/* Wrapper for EVAL / EVALSHA that enables debugging, and makes sure\n * that when EVAL returns, whatever happened, the session is ended. */\nvoid evalGenericCommandWithDebugging(client *c, int evalsha) {\n    if (ldbStartSession(c)) {\n        evalGenericCommand(c,evalsha);\n        ldbEndSession(c);\n    } else {\n        ldbDisable(c);\n    }\n}\n\n/* Return a pointer to ldb.src source code line, considering line to be\n * one-based, and returning a special string for out of range lines. */\nchar *ldbGetSourceLine(int line) {\n    int idx = line-1;\n    if (idx < 0 || idx >= ldb.lines) return \"<out of range source code line>\";\n    return ldb.src[idx];\n}\n\n/* Return true if there is a breakpoint in the specified line. */\nint ldbIsBreakpoint(int line) {\n    int j;\n\n    for (j = 0; j < ldb.bpcount; j++)\n        if (ldb.bp[j] == line) return 1;\n    return 0;\n}\n\n/* Add the specified breakpoint. Ignore it if we already reached the max.\n * Returns 1 if the breakpoint was added (or was already set). 0 if there is\n * no space for the breakpoint or if the line is invalid. */\nint ldbAddBreakpoint(int line) {\n    if (line <= 0 || line > ldb.lines) return 0;\n    if (!ldbIsBreakpoint(line) && ldb.bpcount != LDB_BREAKPOINTS_MAX) {\n        ldb.bp[ldb.bpcount++] = line;\n        return 1;\n    }\n    return 0;\n}\n\n/* Remove the specified breakpoint, returning 1 if the operation was\n * performed or 0 if there was no such breakpoint. 
*/\nint ldbDelBreakpoint(int line) {\n    int j;\n\n    for (j = 0; j < ldb.bpcount; j++) {\n        if (ldb.bp[j] == line) {\n            ldb.bpcount--;\n            /* Shift the remaining entries down; note the count is in bytes,\n             * so we must scale by the element size. */\n            memmove(ldb.bp+j,ldb.bp+j+1,sizeof(int)*(ldb.bpcount-j));\n            return 1;\n        }\n    }\n    return 0;\n}\n\n/* Expect a valid multi-bulk command in the debugging client query buffer.\n * On success the command is parsed and returned as an array of SDS strings,\n * otherwise NULL is returned and more data should be read into the buffer\n * (or, if 'err' was set, a protocol error was detected). */\nsds *ldbReplParseCommand(int *argcp, char** err) {\n    static char* protocol_error = \"protocol error\";\n    sds *argv = NULL;\n    int argc = 0;\n    if (sdslen(ldb.cbuf) == 0) return NULL;\n\n    /* Working on a copy is simpler in this case. We can modify it freely\n     * for the sake of simpler parsing. */\n    sds copy = sdsdup(ldb.cbuf);\n    char *p = copy;\n\n    /* This Redis protocol parser is a joke... just the simplest thing that\n     * works in this context. It is also very forgiving regarding broken\n     * protocol. */\n\n    /* Seek and parse *<count>\\r\\n. */\n    p = strchr(p,'*'); if (!p) goto protoerr;\n    char *plen = p+1; /* Multi bulk len pointer. */\n    p = strstr(p,\"\\r\\n\"); if (!p) goto keep_reading;\n    *p = '\\0'; p += 2;\n    *argcp = atoi(plen);\n    if (*argcp <= 0 || *argcp > 1024) goto protoerr;\n\n    /* Parse each argument. */\n    argv = zmalloc(sizeof(sds)*(*argcp));\n    argc = 0;\n    while(argc < *argcp) {\n        /* Reached the end but there should be more data to read. */\n        if (*p == '\\0') goto keep_reading;\n\n        if (*p != '$') goto protoerr;\n        plen = p+1; /* Bulk string len pointer. */\n        p = strstr(p,\"\\r\\n\"); if (!p) goto keep_reading;\n        *p = '\\0'; p += 2;\n        int slen = atoi(plen); /* Length of this arg. 
*/\n        if (slen <= 0 || slen > 1024) goto protoerr;\n        if ((size_t)(p + slen + 2 - copy) > sdslen(copy) ) goto keep_reading;\n        argv[argc++] = sdsnewlen(p,slen);\n        p += slen; /* Skip the already parsed argument. */\n        if (p[0] != '\\r' || p[1] != '\\n') goto protoerr;\n        p += 2; /* Skip \\r\\n. */\n    }\n    sdsfree(copy);\n    return argv;\n\nprotoerr:\n    *err = protocol_error;\nkeep_reading:\n    sdsfreesplitres(argv,argc);\n    sdsfree(copy);\n    return NULL;\n}\n\n/* Log the specified line in the Lua debugger output. */\nvoid ldbLogSourceLine(int lnum) {\n    char *line = ldbGetSourceLine(lnum);\n    char *prefix;\n    int bp = ldbIsBreakpoint(lnum);\n    int current = ldb.currentline == lnum;\n\n    if (current && bp)\n        prefix = \"->#\";\n    else if (current)\n        prefix = \"-> \";\n    else if (bp)\n        prefix = \"  #\";\n    else\n        prefix = \"   \";\n    sds thisline = sdscatprintf(sdsempty(),\"%s%-3d %s\", prefix, lnum, line);\n    ldbLog(thisline);\n}\n\n/* Implement the \"list\" command of the Lua debugger. If around is 0\n * the whole file is listed, otherwise only a small portion of the file\n * around the specified line is shown. When a line number is specified\n * the amount of context (lines before/after) is specified via the\n * 'context' argument. */\nvoid ldbList(int around, int context) {\n    int j;\n\n    for (j = 1; j <= ldb.lines; j++) {\n        if (around != 0 && abs(around-j) > context) continue;\n        ldbLogSourceLine(j);\n    }\n}\n\n/* Append a human readable representation of the Lua value at position 'idx'\n * on the stack of the 'lua' state, to the SDS string passed as argument.\n * The new SDS string with the represented value attached is returned.\n * Used in order to implement ldbLogStackValue().\n *\n * The element is not automatically removed from the stack, nor it is\n * converted to a different type. 
*/\n#define LDB_MAX_VALUES_DEPTH (LUA_MINSTACK/2)\nsds ldbCatStackValueRec(sds s, lua_State *lua, int idx, int level) {\n    int t = lua_type(lua,idx);\n\n    if (level++ == LDB_MAX_VALUES_DEPTH)\n        return sdscat(s,\"<max recursion level reached! Nested table?>\");\n\n    switch(t) {\n    case LUA_TSTRING:\n        {\n        size_t strl;\n        char *strp = (char*)lua_tolstring(lua,idx,&strl);\n        s = sdscatrepr(s,strp,strl);\n        }\n        break;\n    case LUA_TBOOLEAN:\n        s = sdscat(s,lua_toboolean(lua,idx) ? \"true\" : \"false\");\n        break;\n    case LUA_TNUMBER:\n        s = sdscatprintf(s,\"%g\",(double)lua_tonumber(lua,idx));\n        break;\n    case LUA_TNIL:\n        s = sdscatlen(s,\"nil\",3);\n        break;\n    case LUA_TTABLE:\n        {\n        int expected_index = 1; /* First index we expect in an array. */\n        int is_array = 1; /* Will be set to null if check fails. */\n        /* Note: we create two representations at the same time, one\n         * assuming the table is an array, one assuming it is not. At the\n         * end we know what is true and select the right one. */\n        sds repr1 = sdsempty();\n        sds repr2 = sdsempty();\n        lua_pushnil(lua); /* The first key to start the iteration is nil. */\n        while (lua_next(lua,idx-1)) {\n            /* Test if so far the table looks like an array. */\n            if (is_array &&\n                (lua_type(lua,-2) != LUA_TNUMBER ||\n                 lua_tonumber(lua,-2) != expected_index)) is_array = 0;\n            /* Stack now: table, key, value */\n            /* Array repr. */\n            repr1 = ldbCatStackValueRec(repr1,lua,-1,level);\n            repr1 = sdscatlen(repr1,\"; \",2);\n            /* Full repr. 
*/\n            repr2 = sdscatlen(repr2,\"[\",1);\n            repr2 = ldbCatStackValueRec(repr2,lua,-2,level);\n            repr2 = sdscatlen(repr2,\"]=\",2);\n            repr2 = ldbCatStackValueRec(repr2,lua,-1,level);\n            repr2 = sdscatlen(repr2,\"; \",2);\n            lua_pop(lua,1); /* Stack: table, key. Ready for next iteration. */\n            expected_index++;\n        }\n        /* Strip the last \" ;\" from both the representations. */\n        if (sdslen(repr1)) sdsrange(repr1,0,-3);\n        if (sdslen(repr2)) sdsrange(repr2,0,-3);\n        /* Select the right one and discard the other. */\n        s = sdscatlen(s,\"{\",1);\n        s = sdscatsds(s,is_array ? repr1 : repr2);\n        s = sdscatlen(s,\"}\",1);\n        sdsfree(repr1);\n        sdsfree(repr2);\n        }\n        break;\n    case LUA_TFUNCTION:\n    case LUA_TUSERDATA:\n    case LUA_TTHREAD:\n    case LUA_TLIGHTUSERDATA:\n        {\n        const void *p = lua_topointer(lua,idx);\n        char *typename = \"unknown\";\n        if (t == LUA_TFUNCTION) typename = \"function\";\n        else if (t == LUA_TUSERDATA) typename = \"userdata\";\n        else if (t == LUA_TTHREAD) typename = \"thread\";\n        else if (t == LUA_TLIGHTUSERDATA) typename = \"light-userdata\";\n        s = sdscatprintf(s,\"\\\"%s@%p\\\"\",typename,p);\n        }\n        break;\n    default:\n        s = sdscat(s,\"\\\"<unknown-lua-type>\\\"\");\n        break;\n    }\n    return s;\n}\n\n/* Higher level wrapper for ldbCatStackValueRec() that just uses an initial\n * recursion level of '0'. */\nsds ldbCatStackValue(sds s, lua_State *lua, int idx) {\n    return ldbCatStackValueRec(s,lua,idx,0);\n}\n\n/* Produce a debugger log entry representing the value of the Lua object\n * currently on the top of the stack. The element is not popped nor modified.\n * Check ldbCatStackValue() for the actual implementation. 
*/\nvoid ldbLogStackValue(lua_State *lua, char *prefix) {\n    sds s = sdsnew(prefix);\n    s = ldbCatStackValue(s,lua,-1);\n    ldbLogWithMaxLen(s);\n}\n\nchar *ldbRedisProtocolToHuman_Int(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Bulk(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Status(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_MultiBulk(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Set(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Map(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Null(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Bool(sds *o, char *reply);\nchar *ldbRedisProtocolToHuman_Double(sds *o, char *reply);\n\n/* Get Redis protocol from 'reply' and appends it in human readable form to\n * the passed SDS string 'o'.\n *\n * Note that the SDS string is passed by reference (pointer of pointer to\n * char*) so that we can return a modified pointer, as for SDS semantics. */\nchar *ldbRedisProtocolToHuman(sds *o, char *reply) {\n    char *p = reply;\n    switch(*p) {\n    case ':': p = ldbRedisProtocolToHuman_Int(o,reply); break;\n    case '$': p = ldbRedisProtocolToHuman_Bulk(o,reply); break;\n    case '+': p = ldbRedisProtocolToHuman_Status(o,reply); break;\n    case '-': p = ldbRedisProtocolToHuman_Status(o,reply); break;\n    case '*': p = ldbRedisProtocolToHuman_MultiBulk(o,reply); break;\n    case '~': p = ldbRedisProtocolToHuman_Set(o,reply); break;\n    case '%': p = ldbRedisProtocolToHuman_Map(o,reply); break;\n    case '_': p = ldbRedisProtocolToHuman_Null(o,reply); break;\n    case '#': p = ldbRedisProtocolToHuman_Bool(o,reply); break;\n    case ',': p = ldbRedisProtocolToHuman_Double(o,reply); break;\n    }\n    return p;\n}\n\n/* The following functions are helpers for ldbRedisProtocolToHuman(), each\n * take care of a given Redis return type. 
*/\n\nchar *ldbRedisProtocolToHuman_Int(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    *o = sdscatlen(*o,reply+1,p-reply-1);\n    return p+2;\n}\n\nchar *ldbRedisProtocolToHuman_Bulk(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    long long bulklen;\n\n    string2ll(reply+1,p-reply-1,&bulklen);\n    if (bulklen == -1) {\n        *o = sdscatlen(*o,\"NULL\",4);\n        return p+2;\n    } else {\n        *o = sdscatrepr(*o,p+2,bulklen);\n        return p+2+bulklen+2;\n    }\n}\n\nchar *ldbRedisProtocolToHuman_Status(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n\n    *o = sdscatrepr(*o,reply,p-reply);\n    return p+2;\n}\n\nchar *ldbRedisProtocolToHuman_MultiBulk(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    long long mbulklen;\n    int j = 0;\n\n    string2ll(reply+1,p-reply-1,&mbulklen);\n    p += 2;\n    if (mbulklen == -1) {\n        *o = sdscatlen(*o,\"NULL\",4);\n        return p;\n    }\n    *o = sdscatlen(*o,\"[\",1);\n    for (j = 0; j < mbulklen; j++) {\n        p = ldbRedisProtocolToHuman(o,p);\n        if (j != mbulklen-1) *o = sdscatlen(*o,\",\",1);\n    }\n    *o = sdscatlen(*o,\"]\",1);\n    return p;\n}\n\nchar *ldbRedisProtocolToHuman_Set(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    long long mbulklen;\n    int j = 0;\n\n    string2ll(reply+1,p-reply-1,&mbulklen);\n    p += 2;\n    *o = sdscatlen(*o,\"~(\",2);\n    for (j = 0; j < mbulklen; j++) {\n        p = ldbRedisProtocolToHuman(o,p);\n        if (j != mbulklen-1) *o = sdscatlen(*o,\",\",1);\n    }\n    *o = sdscatlen(*o,\")\",1);\n    return p;\n}\n\nchar *ldbRedisProtocolToHuman_Map(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    long long mbulklen;\n    int j = 0;\n\n    string2ll(reply+1,p-reply-1,&mbulklen);\n    p += 2;\n    *o = sdscatlen(*o,\"{\",1);\n    for (j = 0; j < mbulklen; j++) {\n        p = ldbRedisProtocolToHuman(o,p);\n        *o = sdscatlen(*o,\" => \",4);\n        
p = ldbRedisProtocolToHuman(o,p);\n        if (j != mbulklen-1) *o = sdscatlen(*o,\",\",1);\n    }\n    *o = sdscatlen(*o,\"}\",1);\n    return p;\n}\n\nchar *ldbRedisProtocolToHuman_Null(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    *o = sdscatlen(*o,\"(null)\",6);\n    return p+2;\n}\n\nchar *ldbRedisProtocolToHuman_Bool(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    if (reply[1] == 't')\n        *o = sdscatlen(*o,\"#true\",5);\n    else\n        *o = sdscatlen(*o,\"#false\",6);\n    return p+2;\n}\n\nchar *ldbRedisProtocolToHuman_Double(sds *o, char *reply) {\n    char *p = strchr(reply+1,'\\r');\n    *o = sdscatlen(*o,\"(double) \",9);\n    *o = sdscatlen(*o,reply+1,p-reply-1);\n    return p+2;\n}\n\n/* Log a Redis reply as debugger output, in a human readable format.\n * If the resulting string is longer than 'len' plus a few more chars\n * used as prefix, it gets truncated. */\nvoid ldbLogRedisReply(char *reply) {\n    sds log = sdsnew(\"<reply> \");\n    ldbRedisProtocolToHuman(&log,reply);\n    ldbLogWithMaxLen(log);\n}\n\n/* Implements the \"print <var>\" command of the Lua debugger. It scans for Lua\n * var \"varname\" starting from the current stack frame up to the top stack\n * frame. The first matching variable is printed. */\nvoid ldbPrint(lua_State *lua, char *varname) {\n    lua_Debug ar;\n\n    int l = 0; /* Stack level. */\n    while (lua_getstack(lua,l,&ar) != 0) {\n        l++;\n        const char *name;\n        int i = 1; /* Variable index. */\n        while((name = lua_getlocal(lua,&ar,i)) != NULL) {\n            i++;\n            if (strcmp(varname,name) == 0) {\n                ldbLogStackValue(lua,\"<value> \");\n                lua_pop(lua,1);\n                return;\n            } else {\n                lua_pop(lua,1); /* Discard the var name on the stack. 
*/\n            }\n        }\n    }\n\n    /* Let's try with global vars in two selected cases */\n    if (!strcmp(varname,\"ARGV\") || !strcmp(varname,\"KEYS\")) {\n        lua_getglobal(lua, varname);\n        ldbLogStackValue(lua,\"<value> \");\n        lua_pop(lua,1);\n    } else {\n        ldbLog(sdsnew(\"No such variable.\"));\n    }\n}\n\n/* Implements the \"print\" command (without arguments) of the Lua debugger.\n * Prints all the variables in the current stack frame. */\nvoid ldbPrintAll(lua_State *lua) {\n    lua_Debug ar;\n    int vars = 0;\n\n    if (lua_getstack(lua,0,&ar) != 0) {\n        const char *name;\n        int i = 1; /* Variable index. */\n        while((name = lua_getlocal(lua,&ar,i)) != NULL) {\n            i++;\n            if (!strstr(name,\"(*temporary)\")) {\n                sds prefix = sdscatprintf(sdsempty(),\"<value> %s = \",name);\n                ldbLogStackValue(lua,prefix);\n                sdsfree(prefix);\n                vars++;\n            }\n            lua_pop(lua,1);\n        }\n    }\n\n    if (vars == 0) {\n        ldbLog(sdsnew(\"No local variables in the current context.\"));\n    }\n}\n\n/* Implements the break command to list, add and remove breakpoints. */\nvoid ldbBreak(sds *argv, int argc) {\n    if (argc == 1) {\n        if (ldb.bpcount == 0) {\n            ldbLog(sdsnew(\"No breakpoints set. 
Use 'b <line>' to add one.\"));\n            return;\n        } else {\n            ldbLog(sdscatfmt(sdsempty(),\"%i breakpoints set:\",ldb.bpcount));\n            int j;\n            for (j = 0; j < ldb.bpcount; j++)\n                ldbLogSourceLine(ldb.bp[j]);\n        }\n    } else {\n        int j;\n        for (j = 1; j < argc; j++) {\n            char *arg = argv[j];\n            long line;\n            if (!string2l(arg,sdslen(arg),&line)) {\n                ldbLog(sdscatfmt(sdsempty(),\"Invalid argument:'%s'\",arg));\n            } else {\n                if (line == 0) {\n                    ldb.bpcount = 0;\n                    ldbLog(sdsnew(\"All breakpoints removed.\"));\n                } else if (line > 0) {\n                    if (ldb.bpcount == LDB_BREAKPOINTS_MAX) {\n                        ldbLog(sdsnew(\"Too many breakpoints set.\"));\n                    } else if (ldbAddBreakpoint(line)) {\n                        ldbList(line,1);\n                    } else {\n                        ldbLog(sdsnew(\"Wrong line number.\"));\n                    }\n                } else if (line < 0) {\n                    if (ldbDelBreakpoint(-line))\n                        ldbLog(sdsnew(\"Breakpoint removed.\"));\n                    else\n                        ldbLog(sdsnew(\"No breakpoint in the specified line.\"));\n                }\n            }\n        }\n    }\n}\n\n/* Implements the Lua debugger \"eval\" command. It just compiles the user\n * passed fragment of code and executes it, showing the result left on\n * the stack. */\nvoid ldbEval(lua_State *lua, sds *argv, int argc) {\n    /* Glue the script together if it is composed of multiple arguments. */\n    sds code = sdsjoinsds(argv+1,argc-1,\" \",1);\n    sds expr = sdscatsds(sdsnew(\"return \"),code);\n\n    /* Try to compile it as an expression, prepending \"return \". */\n    if (luaL_loadbuffer(lua,expr,sdslen(expr),\"@ldb_eval\")) {\n        lua_pop(lua,1);\n        /* Failed? 
Try as a statement. */\n        if (luaL_loadbuffer(lua,code,sdslen(code),\"@ldb_eval\")) {\n            ldbLog(sdscatfmt(sdsempty(),\"<error> %s\",lua_tostring(lua,-1)));\n            lua_pop(lua,1);\n            sdsfree(code);\n            sdsfree(expr);\n            return;\n        }\n    }\n\n    /* Call it. */\n    sdsfree(code);\n    sdsfree(expr);\n    if (lua_pcall(lua,0,1,0)) {\n        ldbLog(sdscatfmt(sdsempty(),\"<error> %s\",lua_tostring(lua,-1)));\n        lua_pop(lua,1);\n        return;\n    }\n    ldbLogStackValue(lua,\"<retval> \");\n    lua_pop(lua,1);\n}\n\n/* Implement the debugger \"redis\" command. We use a trick in order to make\n * the implementation very simple: we just call the Lua redis.call() command\n * implementation, with ldb.step enabled, so as a side effect the Redis command\n * and its reply are logged. */\nvoid ldbRedis(lua_State *lua, sds *argv, int argc) {\n    int j;\n\n    if (!lua_checkstack(lua, argc + 1)) {\n        /* Increase the Lua stack if needed to make sure there is enough room\n         * to push 'argc + 1' elements to the stack. On failure, return error.\n         * Notice that we need, in worst case, 'argc + 1' elements because we push all the arguments\n         * given by the user (without the first argument) and we also push the 'redis' global table and\n         * 'redis.call' function so:\n         * (1 (redis table)) + (1 (redis.call function)) + (argc - 1 (all arguments without the first)) = argc + 1*/\n        ldbLogRedisReply(\"max lua stack reached\");\n        return;\n    }\n\n    lua_getglobal(lua,\"redis\");\n    lua_pushstring(lua,\"call\");\n    lua_gettable(lua,-2);       /* Stack: redis, redis.call */\n    for (j = 1; j < argc; j++)\n        lua_pushlstring(lua,argv[j],sdslen(argv[j]));\n    ldb.step = 1;               /* Force redis.call() to log. */\n    lua_pcall(lua,argc-1,1,0);  /* Stack: redis, result */\n    ldb.step = 0;               /* Disable logging. 
*/\n    lua_pop(lua,2);             /* Discard the result and clean the stack. */\n}\n\n/* Implements \"trace\" command of the Lua debugger. It just prints a backtrace\n * querying Lua starting from the current callframe back to the outer one. */\nvoid ldbTrace(lua_State *lua) {\n    lua_Debug ar;\n    int level = 0;\n\n    while(lua_getstack(lua,level,&ar)) {\n        lua_getinfo(lua,\"Snl\",&ar);\n        if(strstr(ar.short_src,\"user_script\") != NULL) {\n            ldbLog(sdscatprintf(sdsempty(),\"%s %s:\",\n                (level == 0) ? \"In\" : \"From\",\n                ar.name ? ar.name : \"top level\"));\n            ldbLogSourceLine(ar.currentline);\n        }\n        level++;\n    }\n    if (level == 0) {\n        ldbLog(sdsnew(\"<error> Can't retrieve Lua stack.\"));\n    }\n}\n\n/* Implements the debugger \"maxlen\" command. It just queries or sets the\n * ldb.maxlen variable. */\nvoid ldbMaxlen(sds *argv, int argc) {\n    if (argc == 2) {\n        int newval = atoi(argv[1]);\n        ldb.maxlen_hint_sent = 1; /* User knows about this command. */\n        if (newval != 0 && newval <= 60) newval = 60;\n        ldb.maxlen = newval;\n    }\n    if (ldb.maxlen) {\n        ldbLog(sdscatprintf(sdsempty(),\"<value> replies are truncated at %d bytes.\",(int)ldb.maxlen));\n    } else {\n        ldbLog(sdscatprintf(sdsempty(),\"<value> replies are unlimited.\"));\n    }\n}\n\n/* Read debugging commands from client.\n * Return C_OK if the debugging session is continuing, otherwise\n * C_ERR if the client closed the connection or is timing out. */\nint ldbRepl(lua_State *lua) {\n    sds *argv;\n    int argc;\n    char* err = NULL;\n\n    /* We continue processing commands until a command that should return\n     * to the Lua interpreter is found. 
*/\n    while(1) {\n        while((argv = ldbReplParseCommand(&argc, &err)) == NULL) {\n            char buf[1024];\n            if (err) {\n                luaPushError(lua, err);\n                luaError(lua);\n            }\n            int nread = connRead(ldb.conn,buf,sizeof(buf));\n            if (nread <= 0) {\n                /* Make sure the script runs without user input since the\n                 * client is no longer connected. */\n                ldb.step = 0;\n                ldb.bpcount = 0;\n                return C_ERR;\n            }\n            ldb.cbuf = sdscatlen(ldb.cbuf,buf,nread);\n            /* after 1M we will exit with an error\n             * so that the client will not blow the memory\n             */\n            if (sdslen(ldb.cbuf) > 1<<20) {\n                sdsfree(ldb.cbuf);\n                ldb.cbuf = sdsempty();\n                luaPushError(lua, \"max client buffer reached\");\n                luaError(lua);\n            }\n        }\n\n        /* Flush the old buffer. */\n        sdsfree(ldb.cbuf);\n        ldb.cbuf = sdsempty();\n\n        /* Execute the command. */\n        if (!strcasecmp(argv[0],\"h\") || !strcasecmp(argv[0],\"help\")) {\nldbLog(sdsnew(\"Redis Lua debugger help:\"));\nldbLog(sdsnew(\"[h]elp               Show this help.\"));\nldbLog(sdsnew(\"[s]tep               Run current line and stop again.\"));\nldbLog(sdsnew(\"[n]ext               Alias for step.\"));\nldbLog(sdsnew(\"[c]ontinue           Run till next breakpoint.\"));\nldbLog(sdsnew(\"[l]ist               List source code around current line.\"));\nldbLog(sdsnew(\"[l]ist [line]        List source code around [line].\"));\nldbLog(sdsnew(\"                     line = 0 means: current position.\"));\nldbLog(sdsnew(\"[l]ist [line] [ctx]  In this form [ctx] specifies how many lines\"));\nldbLog(sdsnew(\"                     to show before/after [line].\"));\nldbLog(sdsnew(\"[w]hole              List all source code. 
Alias for 'list 1 1000000'.\"));\nldbLog(sdsnew(\"[p]rint              Show all the local variables.\"));\nldbLog(sdsnew(\"[p]rint <var>        Show the value of the specified variable.\"));\nldbLog(sdsnew(\"                     Can also show global vars KEYS and ARGV.\"));\nldbLog(sdsnew(\"[b]reak              Show all breakpoints.\"));\nldbLog(sdsnew(\"[b]reak <line>       Add a breakpoint to the specified line.\"));\nldbLog(sdsnew(\"[b]reak -<line>      Remove breakpoint from the specified line.\"));\nldbLog(sdsnew(\"[b]reak 0            Remove all breakpoints.\"));\nldbLog(sdsnew(\"[t]race              Show a backtrace.\"));\nldbLog(sdsnew(\"[e]val <code>        Execute some Lua code (in a different callframe).\"));\nldbLog(sdsnew(\"[r]edis <cmd>        Execute a Redis command.\"));\nldbLog(sdsnew(\"[m]axlen [len]       Trim logged Redis replies and Lua var dumps to len.\"));\nldbLog(sdsnew(\"                     Specifying zero as <len> means unlimited.\"));\nldbLog(sdsnew(\"[a]bort              Stop the execution of the script. 
In sync\"));\nldbLog(sdsnew(\"                     mode dataset changes will be retained.\"));\nldbLog(sdsnew(\"\"));\nldbLog(sdsnew(\"Debugger functions you can call from Lua scripts:\"));\nldbLog(sdsnew(\"redis.debug()        Produce logs in the debugger console.\"));\nldbLog(sdsnew(\"redis.breakpoint()   Stop execution like if there was a breakpoint in the\"));\nldbLog(sdsnew(\"                     next line of code.\"));\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"s\") || !strcasecmp(argv[0],\"step\") ||\n                   !strcasecmp(argv[0],\"n\") || !strcasecmp(argv[0],\"next\")) {\n            ldb.step = 1;\n            break;\n        } else if (!strcasecmp(argv[0],\"c\") || !strcasecmp(argv[0],\"continue\")){\n            break;\n        } else if (!strcasecmp(argv[0],\"t\") || !strcasecmp(argv[0],\"trace\")) {\n            ldbTrace(lua);\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"m\") || !strcasecmp(argv[0],\"maxlen\")) {\n            ldbMaxlen(argv,argc);\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"b\") || !strcasecmp(argv[0],\"break\")) {\n            ldbBreak(argv,argc);\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"e\") || !strcasecmp(argv[0],\"eval\")) {\n            ldbEval(lua,argv,argc);\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"a\") || !strcasecmp(argv[0],\"abort\")) {\n            luaPushError(lua, \"script aborted for user request\");\n            luaError(lua);\n        } else if (argc > 1 &&\n                   (!strcasecmp(argv[0],\"r\") || !strcasecmp(argv[0],\"redis\"))) {\n            ldbRedis(lua,argv,argc);\n            ldbSendLogs();\n        } else if ((!strcasecmp(argv[0],\"p\") || !strcasecmp(argv[0],\"print\"))) {\n            if (argc == 2)\n                ldbPrint(lua,argv[1]);\n            else\n                ldbPrintAll(lua);\n            ldbSendLogs();\n        } else if 
(!strcasecmp(argv[0],\"l\") || !strcasecmp(argv[0],\"list\")){\n            int around = ldb.currentline, ctx = 5;\n            if (argc > 1) {\n                int num = atoi(argv[1]);\n                if (num > 0) around = num;\n            }\n            if (argc > 2) ctx = atoi(argv[2]);\n            ldbList(around,ctx);\n            ldbSendLogs();\n        } else if (!strcasecmp(argv[0],\"w\") || !strcasecmp(argv[0],\"whole\")){\n            ldbList(1,1000000);\n            ldbSendLogs();\n        } else {\n            ldbLog(sdsnew(\"<error> Unknown Redis Lua debugger command or \"\n                          \"wrong number of arguments.\"));\n            ldbSendLogs();\n        }\n\n        /* Free the command vector. */\n        sdsfreesplitres(argv,argc);\n    }\n\n    /* Free the current command argv if we break inside the while loop. */\n    sdsfreesplitres(argv,argc);\n    return C_OK;\n}\n\n/* This is the core of our Lua debugger, called each time Lua is about\n * to start executing a new line. */\nvoid luaLdbLineHook(lua_State *lua, lua_Debug *ar) {\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n    lua_getstack(lua,0,ar);\n    lua_getinfo(lua,\"Sl\",ar);\n    ldb.currentline = ar->currentline;\n\n    int bp = ldbIsBreakpoint(ldb.currentline) || ldb.luabp;\n    int timeout = 0;\n\n    /* Events outside our script are not interesting. */\n    if(strstr(ar->short_src,\"user_script\") == NULL) return;\n\n    /* Check if a timeout occurred. */\n    if (ar->event == LUA_HOOKCOUNT && ldb.step == 0 && bp == 0) {\n        mstime_t elapsed = elapsedMs(rctx->start_time);\n        mstime_t timelimit = server.busy_reply_threshold ?\n                             server.busy_reply_threshold : 5000;\n        if (elapsed >= timelimit) {\n            timeout = 1;\n            ldb.step = 1;\n        } else {\n            return; /* No timeout, ignore the COUNT event. 
*/\n        }\n    }\n\n    if (ldb.step || bp) {\n        char *reason = \"step over\";\n        if (bp) reason = ldb.luabp ? \"redis.breakpoint() called\" :\n                                     \"break point\";\n        else if (timeout) reason = \"timeout reached, infinite loop?\";\n        ldb.step = 0;\n        ldb.luabp = 0;\n        ldbLog(sdscatprintf(sdsempty(),\n            \"* Stopped at %d, stop reason = %s\",\n            ldb.currentline, reason));\n        ldbLogSourceLine(ldb.currentline);\n        ldbSendLogs();\n        if (ldbRepl(lua) == C_ERR && timeout) {\n            /* If the client closed the connection and we have a timeout\n             * connection, let's kill the script otherwise the process\n             * will remain blocked indefinitely. */\n            luaPushError(lua, \"timeout during Lua debugging with client closing connection\");\n            luaError(lua);\n        }\n        rctx->start_time = getMonotonicUs();\n    }\n}\n"
  },
  {
    "path": "src/eventnotifier.c",
    "content": "/* eventnotifier.c -- An event notifier based on eventfd or pipe.\n *\n * Copyright (c) 2024-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"eventnotifier.h\"\n\n#include <stdint.h>\n#include <unistd.h>\n#include <fcntl.h>\n#ifdef HAVE_EVENT_FD\n#include <sys/eventfd.h>\n#endif\n\n#include \"anet.h\"\n#include \"zmalloc.h\"\n\neventNotifier* createEventNotifier(void) {\n    eventNotifier *en = zmalloc(sizeof(eventNotifier));\n    if (!en) return NULL;\n\n#ifdef HAVE_EVENT_FD\n    if ((en->efd = eventfd(0, EFD_NONBLOCK| EFD_CLOEXEC)) != -1) {\n        return en;\n    }\n#else\n    if (anetPipe(en->pipefd, O_CLOEXEC|O_NONBLOCK, O_CLOEXEC|O_NONBLOCK) != -1) {\n        return en;\n    }\n#endif\n\n    /* Clean up if error. */\n    zfree(en);\n    return NULL;\n}\n\nint getReadEventFd(struct eventNotifier *en) {\n#ifdef HAVE_EVENT_FD\n    return en->efd;\n#else\n    return en->pipefd[0];\n#endif\n}\n\nint getWriteEventFd(struct eventNotifier *en) {\n#ifdef HAVE_EVENT_FD\n    return en->efd;\n#else\n    return en->pipefd[1];\n#endif\n}\n\nint triggerEventNotifier(struct eventNotifier *en) {\n#ifdef HAVE_EVENT_FD\n    uint64_t u = 1;\n    if (write(en->efd, &u, sizeof(uint64_t)) == -1) {\n        return EN_ERR;\n    }\n#else\n    char buf[1] = {'R'};\n    if (write(en->pipefd[1], buf, 1) == -1) {\n        return EN_ERR;\n    }\n#endif\n    return EN_OK;\n}\n\nint handleEventNotifier(struct eventNotifier *en) {\n#ifdef HAVE_EVENT_FD\n    uint64_t u;\n    if (read(en->efd, &u, sizeof(uint64_t)) == -1) {\n        return EN_ERR;\n    }\n#else\n    char buf[1];\n    if (read(en->pipefd[0], buf, 1) == -1) {\n        return EN_ERR;\n    }\n#endif\n    return EN_OK;\n}\n\nvoid freeEventNotifier(struct eventNotifier *en) {\n#ifdef 
HAVE_EVENT_FD\n    close(en->efd);\n#else\n    close(en->pipefd[0]);\n    close(en->pipefd[1]);\n#endif\n\n    /* Free memory */\n    zfree(en);\n}\n"
  },
  {
    "path": "src/eventnotifier.h",
    "content": "/* eventnotifier.h -- An event notifier based on eventfd or pipe.\n *\n * Copyright (c) 2024-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef EVENTNOTIFIER_H\n#define EVENTNOTIFIER_H\n\n#include \"config.h\"\n\n#define EN_OK 0\n#define EN_ERR -1\n\ntypedef struct eventNotifier {\n#ifdef HAVE_EVENT_FD\n    int efd;\n#else\n    int pipefd[2];\n#endif\n} eventNotifier;\n\neventNotifier* createEventNotifier(void);\nint getReadEventFd(struct eventNotifier *en);\nint getWriteEventFd(struct eventNotifier *en);\nint triggerEventNotifier(struct eventNotifier *en);\nint handleEventNotifier(struct eventNotifier *en);\nvoid freeEventNotifier(struct eventNotifier *en);\n\n#endif\n"
  },
  {
    "path": "src/evict.c",
    "content": "/* Maxmemory directive handling (LRU eviction and other policies).\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"bio.h\"\n#include \"atomicvar.h\"\n#include \"script.h\"\n#include \"cluster.h\"\n#include \"cluster_asm.h\"\n#include <math.h>\n\n/* ----------------------------------------------------------------------------\n * Data structures\n * --------------------------------------------------------------------------*/\n\n/* To improve the quality of the LRU approximation we take a set of keys\n * that are good candidate for eviction across performEvictions() calls.\n *\n * Entries inside the eviction pool are taken ordered by idle time, putting\n * greater idle times to the right (ascending order).\n *\n * When an LFU policy is used instead, a reverse frequency indication is used\n * instead of the idle time, so that we still evict by larger value (larger\n * inverse frequency means to evict keys with the least frequent accesses).\n *\n * Empty entries have the key pointer set to NULL. */\n#define EVPOOL_SIZE 16\n#define EVPOOL_CACHED_SDS_SIZE 255\nstruct evictionPoolEntry {\n    unsigned long long idle;    /* Object idle time (inverse frequency for LFU) */\n    sds key;                    /* Key name. */\n    sds cached;                 /* Cached SDS object for key name. */\n    int dbid;                   /* Key DB number. */\n    int slot;                   /* Slot. 
*/\n};\n\nstatic struct evictionPoolEntry *EvictionPoolLRU;\n\n/* ----------------------------------------------------------------------------\n * Implementation of eviction, aging and LRU\n * --------------------------------------------------------------------------*/\n\n/* Return the LRU clock, based on the clock resolution. This is a time\n * in a reduced-bits format that can be used to set and check the\n * object->lru field of redisObject structures. */\nunsigned int getLRUClock(void) {\n    return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;\n}\n\n/* This function is used to obtain the current LRU clock.\n * If the current resolution is lower than the frequency at which we refresh\n * the LRU clock (as it should be in production servers), we return the\n * precomputed value; otherwise we resort to a system call. */\nunsigned int LRU_CLOCK(void) {\n    unsigned int lruclock;\n    if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {\n        lruclock = server.lruclock;\n    } else {\n        lruclock = getLRUClock();\n    }\n    return lruclock;\n}\n\n/* Given an object, return the minimum number of milliseconds for which the\n * object was not requested, using an approximated LRU algorithm. */\nunsigned long long estimateObjectIdleTime(robj *o) {\n    unsigned long long lruclock = LRU_CLOCK();\n    if (lruclock >= o->lru) {\n        return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;\n    } else {\n        return (lruclock + (LRU_CLOCK_MAX - o->lru)) *\n                    LRU_CLOCK_RESOLUTION;\n    }\n}\n\n/* During atomic slot migration, keys that are being imported are in an\n * intermediate state. We cannot evict them and therefore skip them. */\nstatic int randomEvictionShouldSkipDictIndex(int didx) {\n    return !clusterCanAccessKeysInSlot(didx);\n}\n\n/* LRU approximation algorithm\n *\n * Redis uses an approximation of the LRU algorithm that runs in constant\n * memory. 
Every time there is a key to expire, we sample N keys (with\n * N very small, usually around 5) to populate a pool of M best keys to\n * evict (the pool size is defined by EVPOOL_SIZE).\n *\n * The N keys sampled are added in the pool of good keys to expire (the ones\n * with an old access time) if they are better than one of the current keys\n * in the pool.\n *\n * After the pool is populated, the best key we have in the pool is expired.\n * However note that we don't remove keys from the pool when they are deleted,\n * so the pool may contain keys that no longer exist.\n *\n * When we try to evict a key and none of the entries in the pool exist\n * anymore, we populate it again. This time we'll be sure the pool has at\n * least one key that can be evicted, if there is at least one key that can\n * be evicted in the whole database. */\n\n/* Create a new eviction pool. */\nvoid evictionPoolAlloc(void) {\n    struct evictionPoolEntry *ep;\n    int j;\n\n    ep = zmalloc(sizeof(*ep)*EVPOOL_SIZE);\n    for (j = 0; j < EVPOOL_SIZE; j++) {\n        ep[j].idle = 0;\n        ep[j].key = NULL;\n        ep[j].cached = sdsnewlen(NULL,EVPOOL_CACHED_SDS_SIZE);\n        ep[j].dbid = 0;\n    }\n    EvictionPoolLRU = ep;\n}\n\n/* This is a helper function for performEvictions(); it is used in order\n * to populate the evictionPool with a few entries every time we want to\n * expire a key. Keys with an idle time bigger than one of the current\n * keys are added. Keys are always added if there are free entries.\n *\n * We insert keys in place in ascending order, so keys with smaller\n * idle times are on the left, and keys with higher idle times on the\n * right. */\nint evictionPoolPopulate(redisDb *db, kvstore *samplekvs, struct evictionPoolEntry *pool) {\n    int j, k, count;\n    dictEntry *samples[server.maxmemory_samples];\n\n    /* Don't retry, since we will call evictionPoolPopulate multiple times if needed. 
*/\n    int slot = kvstoreGetFairRandomDictIndex(samplekvs, randomEvictionShouldSkipDictIndex, 1, 0);\n    if (slot == -1) return 0;\n    count = kvstoreDictGetSomeKeys(samplekvs,slot,samples,server.maxmemory_samples);\n    for (j = 0; j < count; j++) {\n        unsigned long long idle;\n        \n        dictEntry *de = samples[j];\n        kvobj *kv = dictGetKV(de);\n        sds key = kvobjGetKey(kv);\n        \n        /* Calculate the idle time according to the policy. This is called\n         * idle just because the code initially handled LRU, but is in fact\n         * just a score where a higher score means better candidate. */\n        if (server.maxmemory_policy & (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LRM)) {\n            idle = estimateObjectIdleTime(kv);\n        } else if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {\n            /* When we use an LRU policy, we sort the keys by idle time\n             * so that we expire keys starting from greater idle time.\n             * However when the policy is an LFU one, we have a frequency\n             * estimation, and we want to evict keys with lower frequency\n             * first. So inside the pool we put objects using the inverted\n             * frequency subtracting the actual frequency to the maximum\n             * frequency of 255. */\n            idle = 255-LFUDecrAndReturn(kv);\n        } else if (server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL) {\n            /* In this case the sooner the expire the better. */\n            idle = ULLONG_MAX - kvobjGetExpire(kv);\n        } else {\n            serverPanic(\"Unknown eviction policy in evictionPoolPopulate()\");\n        }\n\n        /* Insert the element inside the pool.\n         * First, find the first empty bucket or the first populated\n         * bucket that has an idle time smaller than our idle time. 
*/\n        k = 0;\n        while (k < EVPOOL_SIZE &&\n               pool[k].key &&\n               pool[k].idle < idle) k++;\n        if (k == 0 && pool[EVPOOL_SIZE-1].key != NULL) {\n            /* Can't insert if the element is < the worst element we have\n             * and there are no empty buckets. */\n            continue;\n        } else if (k < EVPOOL_SIZE && pool[k].key == NULL) {\n            /* Inserting into empty position. No setup needed before insert. */\n        } else {\n            /* Inserting in the middle. Now k points to the first element\n             * greater than the element to insert.  */\n            if (pool[EVPOOL_SIZE-1].key == NULL) {\n                /* Free space on the right? Insert at k shifting\n                 * all the elements from k to end to the right. */\n\n                /* Save SDS before overwriting. */\n                sds cached = pool[EVPOOL_SIZE-1].cached;\n                memmove(pool+k+1,pool+k,\n                    sizeof(pool[0])*(EVPOOL_SIZE-k-1));\n                pool[k].cached = cached;\n            } else {\n                /* No free space on right? Insert at k-1 */\n                k--;\n                /* Shift all elements on the left of k (included) to the\n                 * left, so we discard the element with smaller idle time. */\n                sds cached = pool[0].cached; /* Save SDS before overwriting. */\n                if (pool[0].key != pool[0].cached) sdsfree(pool[0].key);\n                memmove(pool,pool+1,sizeof(pool[0])*k);\n                pool[k].cached = cached;\n            }\n        }\n\n        /* Try to reuse the cached SDS string allocated in the pool entry,\n         * because allocating and deallocating this object is costly\n         * (according to the profiler, not my fantasy. Remember:\n         * premature optimization bla bla bla. 
*/\n        int klen = sdslen(key);\n        if (klen > EVPOOL_CACHED_SDS_SIZE) {\n            pool[k].key = sdsdup(key);\n        } else {\n            memcpy(pool[k].cached,key,klen+1);\n            sdssetlen(pool[k].cached,klen);\n            pool[k].key = pool[k].cached;\n        }\n        pool[k].idle = idle;\n        pool[k].dbid = db->id;\n        pool[k].slot = slot;\n    }\n\n    return count;\n}\n\n/* ----------------------------------------------------------------------------\n * LFU (Least Frequently Used) implementation.\n\n * We have 24 total bits of space in each object in order to implement\n * an LFU (Least Frequently Used) eviction policy, since we re-use the\n * LRU field for this purpose.\n *\n * We split the 24 bits into two fields:\n *\n *            16 bits       8 bits\n *     +------------------+--------+\n *     + Last access time | LOG_C  |\n *     +------------------+--------+\n *\n * LOG_C is a logarithmic counter that provides an indication of the access\n * frequency. However this field must also be decremented otherwise what used\n * to be a frequently accessed key in the past, will remain ranked like that\n * forever, while we want the algorithm to adapt to access pattern changes.\n *\n * So the remaining 16 bits are used in order to store the \"access time\",\n * a reduced-precision Unix time (we take 16 bits of the time converted\n * in minutes since we don't care about wrapping around) where the LOG_C\n * counter decays every minute by default (depends on lfu-decay-time).\n *\n * New keys don't start at zero, in order to have the ability to collect\n * some accesses before being trashed away, so they start at LFU_INIT_VAL.\n * The logarithmic increment performed on LOG_C takes care of LFU_INIT_VAL\n * when incrementing the key, so that keys starting at LFU_INIT_VAL\n * (or having a smaller value) have a very high chance of being incremented\n * on access. 
(The chance depends on counter and lfu-log-factor.)\n *\n * During decrement, the value of the logarithmic counter is decremented by\n * one every time lfu-decay-time minutes elapse.\n * --------------------------------------------------------------------------*/\n\n/* Return the current time in minutes, just taking the least significant\n * 16 bits. The returned time is suitable to be stored as LDT (last access\n * time) for the LFU implementation. */\nunsigned long LFUGetTimeInMinutes(void) {\n    return (server.unixtime/60) & 65535;\n}\n\n/* Given an object ldt (last access time), compute the minimum number of minutes\n * that elapsed since the last access. Handle overflow (ldt greater than\n * the current 16 bits minutes time) considering the time as wrapping\n * exactly once. */\nunsigned long LFUTimeElapsed(unsigned long ldt) {\n    unsigned long now = LFUGetTimeInMinutes();\n    if (now >= ldt) return now-ldt;\n    return 65535-ldt+now;\n}\n\n/* Logarithmically increment a counter. The greater the current counter value,\n * the less likely it is to really get incremented. Saturate it at 255. */\nuint8_t LFULogIncr(uint8_t counter) {\n    if (counter == 255) return 255;\n    double r = (double)rand()/RAND_MAX;\n    double baseval = counter - LFU_INIT_VAL;\n    if (baseval < 0) baseval = 0;\n    double p = 1.0/(baseval*server.lfu_log_factor+1);\n    if (r < p) counter++;\n    return counter;\n}\n\n/* If the object's ldt (last access time) is reached, decrement the LFU counter\n * but do not update the LFU fields of the object: we update the access time\n * and counter explicitly when the object is really accessed.\n * The counter is decremented by one for every server.lfu_decay_time\n * minutes that elapsed since the last access.\n * Return the object frequency counter.\n *\n * This function is used in order to scan the dataset for the best object\n * to fit: as we check for the candidate, we incrementally decrement the\n * counter of the scanned objects if needed. 
*/\nunsigned long LFUDecrAndReturn(robj *o) {\n    unsigned long ldt = o->lru >> 8;\n    unsigned long counter = o->lru & 255;\n    unsigned long num_periods = server.lfu_decay_time ? LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;\n    if (num_periods)\n        counter = (num_periods > counter) ? 0 : counter - num_periods;\n    return counter;\n}\n\n/* We don't want to count AOF buffers and slaves output buffers as\n * used memory: eviction should mostly use data size, because counting\n * those buffers can cause a feedback loop: we push DELs into them,\n * more and more DELs make them bigger, and if we count them, we need to\n * evict more keys, which generates more DELs, possibly causing a\n * massive eviction loop even after all keys are evicted.\n *\n * This function returns the sum of AOF and replication buffer. */\nsize_t freeMemoryGetNotCountedMemory(void) {\n    size_t overhead = 0;\n\n    /* Since all replicas and the replication backlog share the global\n     * replication buffer, we consider only the part exceeding the backlog\n     * size as the extra separate consumption of replicas.\n     *\n     * Note that although the backlog is also initially incrementally grown\n     * (pushing DELs consumes memory), it'll eventually stop growing and\n     * remain constant in size, so even if its creation will cause some\n     * eviction, it's capped, and also here to stay (no resonance effect).\n     *\n     * Note that, because we trim the backlog incrementally in the background,\n     * the backlog size may exceed our setting if slow replicas that reference\n     * vast replication buffer blocks disconnect. To avoid a massive eviction\n     * loop, we don't count the delayed freed replication backlog into used\n     * memory even if there are no replicas, i.e. we still regard this memory\n     * as replicas'. 
*/\n    if ((long long)server.repl_buffer_mem > server.repl_backlog_size) {\n        /* We use a list structure to manage replication buffer blocks, so the\n         * backlog also occupies some extra memory. We can't know the exact\n         * number of blocks, so we compute an approximate size from the\n         * per-block size. */\n        size_t extra_approx_size =\n            (server.repl_backlog_size/PROTO_REPLY_CHUNK_BYTES + 1) *\n            (sizeof(replBufBlock)+sizeof(listNode));\n        size_t counted_mem = server.repl_backlog_size + extra_approx_size;\n        if (server.repl_buffer_mem > counted_mem) {\n            overhead += (server.repl_buffer_mem - counted_mem);\n        }\n    }\n\n    /* The migrate client is like a replica: we also push DELs into it when\n     * evicting keys belonging to the migrating slot, so we don't count its\n     * output buffer, to avoid an eviction loop. */\n    overhead += asmGetMigrateOutputBufferSize();\n\n    if (server.aof_state != AOF_OFF) {\n        overhead += sdsAllocSize(server.aof_buf);\n    }\n    return overhead;\n}\n\n/* Get the memory status from the point of view of the maxmemory directive:\n * if the memory used is under the maxmemory setting then C_OK is returned.\n * Otherwise, if we are over the memory limit, the function returns\n * C_ERR.\n *\n * The function may return additional info via reference, only if the\n * pointers to the respective arguments are not NULL. 
Certain fields are\n * populated only when C_ERR is returned:\n *\n *  'total'     total amount of bytes used.\n *              (Populated both for C_ERR and C_OK)\n *\n *  'logical'   the amount of memory used minus the slaves/AOF buffers.\n *              (Populated when C_ERR is returned)\n *\n *  'tofree'    the amount of memory that should be released\n *              in order to return back into the memory limits.\n *              (Populated when C_ERR is returned)\n *\n *  'level'     this usually ranges from 0 to 1, and reports the amount of\n *              memory currently used. May be > 1 if we are over the memory\n *              limit.\n *              (Populated both for C_ERR and C_OK)\n */\nint getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level) {\n    size_t mem_reported, mem_used, mem_tofree;\n\n    /* Check if we are over the memory usage limit. If we are not, no need\n     * to subtract the slaves output buffers. We can just return ASAP. */\n    mem_reported = zmalloc_used_memory();\n    if (total) *total = mem_reported;\n\n    /* We may return ASAP if there is no need to compute the level. */\n    if (!server.maxmemory) {\n        if (level) *level = 0;\n        return C_OK;\n    }\n    if (mem_reported <= server.maxmemory && !level) return C_OK;\n\n    /* Remove the size of slaves output buffers and AOF buffer from the\n     * count of used memory. */\n    mem_used = mem_reported;\n    size_t overhead = freeMemoryGetNotCountedMemory();\n    mem_used = (mem_used > overhead) ? mem_used-overhead : 0;\n\n    /* Compute the ratio of memory usage. */\n    if (level) *level = (float)mem_used / (float)server.maxmemory;\n\n    if (mem_reported <= server.maxmemory) return C_OK;\n\n    /* Check if we are still over the memory limit. */\n    if (mem_used <= server.maxmemory) return C_OK;\n\n    /* Compute how much memory we need to free. 
*/\n    mem_tofree = mem_used - server.maxmemory;\n\n    if (logical) *logical = mem_used;\n    if (tofree) *tofree = mem_tofree;\n\n    return C_ERR;\n}\n\n/* Return 1 if used memory is more than maxmemory after allocating more memory,\n * return 0 if not. Redis may reject user's requests or evict some keys if used\n * memory exceeds maxmemory, especially, when we allocate huge memory at once. */\nint overMaxmemoryAfterAlloc(size_t moremem) {\n    if (!server.maxmemory) return  0; /* No limit. */\n\n    /* Check quickly. */\n    size_t mem_used = zmalloc_used_memory();\n    if (mem_used + moremem <= server.maxmemory) return 0;\n\n    size_t overhead = freeMemoryGetNotCountedMemory();\n    mem_used = (mem_used > overhead) ? mem_used - overhead : 0;\n    return mem_used + moremem > server.maxmemory;\n}\n\n/* The evictionTimeProc is started when \"maxmemory\" has been breached and\n * could not immediately be resolved.  This will spin the event loop with short\n * eviction cycles until the \"maxmemory\" condition has resolved or there are no\n * more evictable items.  */\nstatic int isEvictionProcRunning = 0;\nstatic int evictionTimeProc(\n        struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    UNUSED(eventLoop);\n    UNUSED(id);\n    UNUSED(clientData);\n\n    if (performEvictions() == EVICT_RUNNING) return 0;  /* keep evicting */\n\n    /* For EVICT_OK - things are good, no need to keep evicting.\n     * For EVICT_FAIL - there is nothing left to evict.  
*/\n    isEvictionProcRunning = 0;\n    return AE_NOMORE;\n}\n\nvoid startEvictionTimeProc(void) {\n    if (!isEvictionProcRunning) {\n        isEvictionProcRunning = 1;\n        aeCreateTimeEvent(server.el, 0,\n                evictionTimeProc, NULL, NULL);\n    }\n}\n\n/* Check if it's safe to perform evictions.\n *   Returns 1 if evictions can be performed\n *   Returns 0 if eviction processing should be skipped\n */\nstatic int isSafeToPerformEvictions(void) {\n    /* - There must be no script in timeout condition.\n     * - We must not be loading data right now.  */\n    if (isInsideYieldingLongCommand() || server.loading) return 0;\n\n    /* By default replicas should ignore maxmemory\n     * and just be masters' exact copies. */\n    if (server.masterhost && server.repl_slave_ignore_maxmemory) return 0;\n\n    /* Disable eviction during slot migration import to avoid delays and errors\n     * caused by failed evictions. A special client is created for data import,\n     * identified by the CLIENT_MASTER and CLIENT_ASM_IMPORTING flags. */\n    if (server.current_client && server.current_client->flags & CLIENT_MASTER &&\n        server.current_client->flags & CLIENT_ASM_IMPORTING)\n        return 0;\n\n    /* If the 'evict' action is paused, for whatever reason, then return false. */\n    if (isPausedActionsWithUpdate(PAUSE_ACTION_EVICT)) return 0;\n\n    return 1;\n}\n\n/* Algorithm for converting tenacity (0-100) to a time limit.  
*/\nstatic unsigned long evictionTimeLimitUs(void) {\n    serverAssert(server.maxmemory_eviction_tenacity >= 0);\n    serverAssert(server.maxmemory_eviction_tenacity <= 100);\n\n    if (server.maxmemory_eviction_tenacity <= 10) {\n        /* A linear progression from 0..500us */\n        return 50uL * server.maxmemory_eviction_tenacity;\n    }\n\n    if (server.maxmemory_eviction_tenacity < 100) {\n        /* A 15% geometric progression, resulting in a limit of ~2 min at tenacity==99  */\n        return (unsigned long)(500.0 * pow(1.15, server.maxmemory_eviction_tenacity - 10.0));\n    }\n\n    return ULONG_MAX;   /* No limit to eviction time */\n}\n\n/* Check that memory usage is within the current \"maxmemory\" limit.  If over\n * \"maxmemory\", attempt to free memory by evicting data (if it's safe to do so).\n *\n * It's possible for Redis to suddenly be significantly over the \"maxmemory\"\n * setting.  This can happen if there is a large allocation (like a hash table\n * resize) or even if the \"maxmemory\" setting is manually adjusted.  Because of\n * this, it's important to evict for a managed period of time - otherwise Redis\n * would become unresponsive while evicting.\n *\n * The goal of this function is to improve the memory situation - not to\n * immediately resolve it.  In the case that some items have been evicted but\n * the \"maxmemory\" limit has not been achieved, an aeTimeProc will be started\n * which will continue to evict items until memory limits are achieved or\n * nothing more is evictable.\n *\n * This should be called before execution of commands.  
If EVICT_FAIL is\n * returned, commands which will result in increased memory usage should be\n * rejected.\n *\n * Returns:\n *   EVICT_OK       - memory is OK or it's not possible to perform evictions now\n *   EVICT_RUNNING  - memory is over the limit, but eviction is still processing\n *   EVICT_FAIL     - memory is over the limit, and there's nothing to evict\n * */\nint performEvictions(void) {\n    /* Note, we don't goto update_metrics here because this check skips eviction\n     * as if it wasn't triggered. it's a fake EVICT_OK. */\n    if (!isSafeToPerformEvictions()) return EVICT_OK;\n\n    int keys_freed = 0;\n    size_t mem_reported, mem_tofree;\n    long long mem_freed; /* May be negative */\n    mstime_t latency;\n    int slaves = listLength(server.slaves);\n    int result = EVICT_FAIL;\n\n    if (getMaxmemoryState(&mem_reported,NULL,&mem_tofree,NULL) == C_OK) {\n        result = EVICT_OK;\n        goto update_metrics;\n    }\n\n    if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION) {\n        result = EVICT_FAIL;  /* We need to free memory, but policy forbids. */\n        goto update_metrics;\n    }\n\n    unsigned long eviction_time_limit_us = evictionTimeLimitUs();\n\n    mem_freed = 0;\n\n    latencyStartMonitor(latency);\n\n    monotime evictionTimer;\n    elapsedStart(&evictionTimer);\n\n    /* Try to smoke-out bugs (server.also_propagate should be empty here) */\n    serverAssert(server.also_propagate.numops == 0);\n    /* Evictions are performed on random keys that have nothing to do with the current command slot. 
*/\n\n    while (mem_freed < (long long)mem_tofree) {\n        int j, k, i;\n        static unsigned int next_db = 0;\n        sds bestkey = NULL;\n        int bestdbid;\n        redisDb *db;\n        dictEntry *de;\n\n        if (server.maxmemory_policy & (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LFU|MAXMEMORY_FLAG_LRM) ||\n            server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL)\n        {\n            struct evictionPoolEntry *pool = EvictionPoolLRU;\n            while (bestkey == NULL) {\n                unsigned long total_keys = 0;\n                unsigned long total_sampled_keys = 0;\n\n                /* We don't want to make local-db choices when expiring keys,\n                 * so to start populate the eviction pool sampling keys from\n                 * every DB. */\n                for (i = 0; i < server.dbnum; i++) {\n                    db = server.db+i;\n                    kvstore *kvs;\n                    if (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) {\n                        kvs = db->keys;\n                    } else {\n                        kvs = db->expires;\n                    }\n                    unsigned long sampled_keys = 0;\n                    unsigned long current_db_keys = kvstoreSize(kvs);\n                    if (current_db_keys == 0) continue;\n\n                    total_keys += current_db_keys;\n                    int l = kvstoreNumNonEmptyDicts(kvs);\n                    /* Do not exceed the number of non-empty slots when looping. */\n                    while (l--) {\n                        sampled_keys += evictionPoolPopulate(db, kvs, pool);\n                        total_sampled_keys += sampled_keys;\n                        /* We have sampled enough keys in the current db, exit the loop. 
*/\n                        if (sampled_keys >= (unsigned long) server.maxmemory_samples)\n                            break;\n                        /* If there are not a lot of keys in the current db, dicts may be very\n                         * sparsely populated, exit the loop without meeting the sampling\n                         * requirement. */\n                        if (current_db_keys < (unsigned long) server.maxmemory_samples*10)\n                            break;\n                    }\n                }\n                if (!total_keys) break; /* No keys to evict. */\n\n                /* If we iterated all the DBs and all non-empty slot dicts, but\n                 * did not sample any key, stop sampling. */\n                if (!total_sampled_keys) break;\n\n                /* Go backward from best to worst element to evict. */\n                for (k = EVPOOL_SIZE-1; k >= 0; k--) {\n                    if (pool[k].key == NULL) continue;\n                    bestdbid = pool[k].dbid;\n\n                    kvstore *kvs;\n                    if (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) {\n                        kvs = server.db[bestdbid].keys;\n                    } else {\n                        kvs = server.db[bestdbid].expires;\n                    }\n                    de = kvstoreDictFind(kvs, pool[k].slot, pool[k].key);\n\n                    /* Remove the entry from the pool. */\n                    if (pool[k].key != pool[k].cached)\n                        sdsfree(pool[k].key);\n                    pool[k].key = NULL;\n                    pool[k].idle = 0;\n\n                    /* If the key exists, it is our pick. Otherwise it is\n                     * a ghost and we need to try the next element. */\n                    if (de) {\n                        bestkey = kvobjGetKey(dictGetKV(de));\n                        break;\n                    } else {\n                        /* Ghost... Iterate again. 
*/\n                    }\n                }\n            }\n        }\n\n        /* volatile-random and allkeys-random policy */\n        else if (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM ||\n                 server.maxmemory_policy == MAXMEMORY_VOLATILE_RANDOM)\n        {\n            /* When evicting a random key, we try to evict a key for\n             * each DB, so we use the static 'next_db' variable to\n             * incrementally visit all DBs. */\n            for (i = 0; i < server.dbnum; i++) {\n                j = (++next_db) % server.dbnum;\n                db = server.db+j;\n                kvstore *kvs;\n                if (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM) {\n                    kvs = db->keys;\n                } else {\n                    kvs = db->expires;\n                }\n                int slot = kvstoreGetFairRandomDictIndex(kvs, randomEvictionShouldSkipDictIndex, 16, 0);\n                if (slot == -1) continue;\n                de = kvstoreDictGetRandomKey(kvs, slot);\n                if (de) {\n                    kvobj *kv = dictGetKV(de);\n                    bestkey = kvobjGetKey(kv);\n                    bestdbid = j;\n                    break;\n                }\n            }\n        }\n\n        /* Finally remove the selected key. 
*/\n        if (bestkey) {\n            long long key_mem_freed;\n            db = server.db+bestdbid;\n\n            enterExecutionUnit(1, 0);\n            robj *keyobj = createStringObject(bestkey,sdslen(bestkey));\n            deleteEvictedKeyAndPropagate(db, keyobj, &key_mem_freed);\n            decrRefCount(keyobj);\n            exitExecutionUnit();\n            /* Propagate the DEL command */\n            postExecutionUnitOperations();\n\n            mem_freed += key_mem_freed;\n            keys_freed++;\n\n            if (keys_freed % 16 == 0) {\n                /* When the memory to free starts to be big enough, we may\n                 * start spending so much time here that it is impossible to\n                 * deliver data to the replicas fast enough, so we force the\n                 * transmission here inside the loop. */\n                if (slaves) flushSlavesOutputBuffers();\n\n                /* Normally our stop condition is the ability to release\n                 * a fixed, pre-computed amount of memory. However when we\n                 * are deleting objects in another thread, it's better to\n                 * check, from time to time, if we already reached our target\n                 * memory, since the \"mem_freed\" amount is computed only\n                 * across the dbAsyncDelete() call, while the thread can\n                 * release the memory all the time. */\n                if (server.lazyfree_lazy_eviction) {\n                    if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {\n                        break;\n                    }\n                }\n\n                /* After some time, exit the loop early - even if memory limit\n                 * hasn't been reached.  If we suddenly need to free a lot of\n                 * memory, we don't want to spend too much time here.  
*/\n                if (elapsedUs(evictionTimer) > eviction_time_limit_us) {\n                    // We still need to free memory - start eviction timer proc\n                    startEvictionTimeProc();\n                    break;\n                }\n            }\n        } else {\n            goto cant_free; /* nothing to free... */\n        }\n    }\n    /* at this point, the memory is OK, or we have reached the time limit */\n    result = (isEvictionProcRunning) ? EVICT_RUNNING : EVICT_OK;\n\ncant_free:\n    if (result == EVICT_FAIL) {\n        /* At this point, we have run out of evictable items.  It's possible\n         * that some items are being freed in the lazyfree thread.  Perform a\n         * short wait here if such jobs exist, but don't wait long.  */\n        mstime_t lazyfree_latency;\n        latencyStartMonitor(lazyfree_latency);\n        while (bioPendingJobsOfType(BIO_LAZY_FREE) &&\n              elapsedUs(evictionTimer) < eviction_time_limit_us) {\n            if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {\n                result = EVICT_OK;\n                break;\n            }\n            usleep(eviction_time_limit_us < 1000 ? eviction_time_limit_us : 1000);\n        }\n        latencyEndMonitor(lazyfree_latency);\n        latencyAddSampleIfNeeded(\"eviction-lazyfree\",lazyfree_latency);\n    }\n\n    latencyEndMonitor(latency);\n    latencyAddSampleIfNeeded(\"eviction-cycle\",latency);\n\nupdate_metrics:\n    if (result == EVICT_RUNNING || result == EVICT_FAIL) {\n        if (server.stat_last_eviction_exceeded_time == 0)\n            elapsedStart(&server.stat_last_eviction_exceeded_time);\n    } else if (result == EVICT_OK) {\n        if (server.stat_last_eviction_exceeded_time != 0) {\n            server.stat_total_eviction_exceeded_time += elapsedUs(server.stat_last_eviction_exceeded_time);\n            server.stat_last_eviction_exceeded_time = 0;\n        }\n    }\n    return result;\n}\n"
  },
  {
    "path": "src/expire.c",
    "content": "/* Implementation of EXPIRE (keys with fixed time to live).\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"redisassert.h\"\n\n/*-----------------------------------------------------------------------------\n * Incremental collection of expired keys.\n *\n * When keys are accessed they are expired on-access. However we need a\n * mechanism in order to ensure keys are eventually removed when expired even\n * if no access is performed on them.\n *----------------------------------------------------------------------------*/\n\n/* Constants table from pow(0.98, 1) to pow(0.98, 16). \n * Help calculating the db->avg_ttl. */\nstatic double avg_ttl_factor[16] = {0.98, 0.9604, 0.941192, 0.922368, 0.903921, 0.885842, 0.868126, 0.850763, 0.833748, 0.817073, 0.800731, 0.784717, 0.769022, 0.753642, 0.738569, 0.723798};\n\n/* Helper function for the activeExpireCycle() function.\n * This function will try to expire the key-value entry that is stored in the \n * hash table entry 'de' of the 'expires' hash table of a Redis database.\n *\n * If the key is found to be expired, it is removed from the database and\n * 1 is returned. Otherwise no operation is performed and 0 is returned.\n *\n * When a key is expired, server.stat_expiredkeys is incremented.\n *\n * The parameter 'now' is the current time in milliseconds as is passed\n * to the function to avoid too many gettimeofday() syscalls. 
*/\nint activeExpireCycleTryExpire(redisDb *db, kvobj *kv, long long now) {\n    if (now < kvobjGetExpire(kv))\n        return 0;\n\n    enterExecutionUnit(1, 0);\n    sds key = kvobjGetKey(kv);\n    robj *keyobj = createStringObject(key,sdslen(key));\n    deleteExpiredKeyAndPropagate(db,keyobj);\n    server.stat_expiredkeys_active++;\n    decrRefCount(keyobj);\n    exitExecutionUnit();\n    /* Propagate the DEL command */\n    postExecutionUnitOperations();\n    return 1;\n}\n\n/* Try to expire a few timed out keys. The algorithm used is adaptive and\n * will use few CPU cycles if there are few expiring keys, otherwise\n * it will get more aggressive to avoid too much memory being used by\n * keys that could be removed from the keyspace.\n *\n * Every expire cycle tests multiple databases: the next call will start\n * again from the next db. No more than CRON_DBS_PER_CALL databases are\n * tested at every iteration.\n *\n * The function can perform more or less work, depending on the \"type\"\n * argument. It can execute a \"fast cycle\" or a \"slow cycle\". The slow\n * cycle is the main way we collect expired keys: this happens with\n * the \"server.hz\" frequency (usually 10 hertz).\n *\n * However the slow cycle can exit on timeout, once it has used too much time.\n * For this reason the function is also invoked to perform a fast cycle\n * at every event loop cycle, in the beforeSleep() function. 
The fast cycle\n * will try to perform less work, but will do it much more often.\n *\n * The following are the details of the two expire cycles and their stop\n * conditions:\n *\n * If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a\n * \"fast\" expire cycle that takes no longer than ACTIVE_EXPIRE_CYCLE_FAST_DURATION\n * microseconds, and is not repeated again before the same amount of time.\n * The cycle will also refuse to run at all if the latest slow cycle did not\n * terminate because of a time limit condition.\n *\n * If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is\n * executed, where the time limit is a percentage of the REDIS_HZ period\n * as specified by the ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC define. In the\n * fast cycle, the check of every database is interrupted once the number\n * of already expired keys in the database is estimated to be lower than\n * a given percentage, in order to avoid doing too much work to gain too\n * little memory.\n *\n * The configured expire \"effort\" will modify the baseline parameters in\n * order to do more work in both the fast and slow expire cycles.\n */\n\n#define ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP 20 /* Keys for each DB loop. */\n#define ACTIVE_EXPIRE_CYCLE_FAST_DURATION 1000 /* Microseconds. */\n#define ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC 25 /* Max % of CPU to use. */\n#define ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE 10 /* % of stale keys after which\n                                                   we do extra efforts. */\n\n#define HFE_DB_BASE_ACTIVE_EXPIRE_FIELDS_PER_SEC 10000\n\n/* Data used by the expire dict scan callback. 
*/\ntypedef struct {\n    redisDb *db;\n    long long now;\n    unsigned long sampled; /* num keys checked */\n    unsigned long expired; /* num keys expired */\n    long long ttl_sum; /* sum of ttl for key with ttl not yet expired */\n    int ttl_samples; /* num keys with ttl not yet expired */\n} expireScanData;\n\nvoid expireScanCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    kvobj *kv = dictGetKV(de);\n    expireScanData *data = privdata;\n    long long ttl  = kvobjGetExpire(kv) - data->now;\n    if (activeExpireCycleTryExpire(data->db, kv, data->now)) {\n        data->expired++;\n    }\n    if (ttl > 0) {\n        /* We want the average TTL of keys yet not expired. */\n        data->ttl_sum += ttl;\n        data->ttl_samples++;\n    }\n    data->sampled++;\n}\n\nstatic inline int expirySamplingShouldSkipDict(dict *d, int didx) {\n    long long numkeys = dictSize(d);\n    unsigned long buckets = dictBuckets(d);\n    /* When there are less than 1% filled buckets, sampling the key\n     * space is expensive, so stop here waiting for better times...\n     * The dictionary will be resized asap. */\n    if (buckets > DICT_HT_INITIAL_SIZE && (numkeys * 100/buckets < 1)) {\n        return 1;\n    }\n\n    /* During atomic slot migration, keys that are being imported are in an\n     * intermediate state. we cannot expire them and therefore skip them. 
*/\n    if (!clusterCanAccessKeysInSlot(didx)) return 1;\n\n    return 0;\n}\n\n/* SubexpireCtx passed to activeSubexpiresCb() */\ntypedef struct SubexpireCtx {\n    uint32_t fieldsToExpireQuota;\n    redisDb *db;\n    int slot;\n} SubexpireCtx;\n\n/*\n * Active sub-expiration callback\n *\n * Called by activeSubexpires() for each key registered in the subexpires DB\n * with an expiration time on its \"elements\" that is less than or equal to the\n * current time.\n *\n * This callback performs the following actions for each hash:\n * - Delete expired fields by calling ebExpire(hash)\n * - If afterward there are future fields to expire, it will update the hash in\n *   HFE DB with the next hash-field minimum expiration time by returning\n *   ACT_UPDATE_EXP_ITEM.\n * - If the hash has no more fields to expire, it is removed from the HFE DB\n *   by returning ACT_REMOVE_EXP_ITEM.\n * - If the hash has no more fields afterward, it will remove the hash from the keyspace.\n */\nstatic ExpireAction activeSubexpiresCb(eItem item, void *ctx) {\n    SubexpireCtx *subexCtx = ctx;\n\n    /* If no more quota left for this callback, stop */\n    if (subexCtx->fieldsToExpireQuota == 0)\n        return ACT_STOP_ACTIVE_EXP;\n\n    kvobj *kv = (kvobj *) item;\n\n    /* currently we only support hash type sub-expire */\n    assert(kv->type == OBJ_HASH);\n    uint64_t nextExpTime = hashTypeExpire(subexCtx->db, kv, &subexCtx->fieldsToExpireQuota, 0, 1);\n\n    /* If the hash has no more fields to expire or got deleted, tell\n     * the caller ebExpire() to remove it from the HFE DB */\n    if (nextExpTime == EB_EXPIRE_TIME_INVALID || nextExpTime == 0) {\n        return ACT_REMOVE_EXP_ITEM;\n    } else {\n        /* Hash has more fields to expire. 
Update next expiration time of the hash\n         * and indicate to add it back to global HFE DS */\n        ebSetMetaExpTime(hashGetExpireMeta(item), nextExpTime);\n        return ACT_UPDATE_EXP_ITEM;\n    }\n}\n\n/* DB active expire and update hashes with time-expiration on fields.\n *\n * The callback function activeSubexpiresCb() is invoked for each hash registered\n * in the subexpires DB with an expiration-time less than or equal to the\n * current time. This callback performs the following actions for each hash:\n * - If the hash has one or more fields to expire, it will delete those fields.\n * - If there are more fields to expire, it will update the hash with the next\n *   expiration time in subexpires DB.\n * - If the hash has no more fields to expire, it is removed from the subexpires DB.\n * - If the hash has no more fields, it is removed from the main DB.\n *\n * Returns number of fields active-expired.\n */\nuint64_t activeSubexpires(redisDb *db, int slot, uint32_t maxFieldsToExpire) {\n    SubexpireCtx ctx = { .db = db, .fieldsToExpireQuota = maxFieldsToExpire, .slot = slot };\n    ExpireInfo info = {\n            .maxToExpire = UINT64_MAX, /* Only maxFieldsToExpire play a role */\n            .onExpireItem = activeSubexpiresCb,\n            .ctx = &ctx,\n            .now = commandTimeSnapshot(),\n            .itemsExpired = 0};\n\n    estoreActiveExpire(db->subexpires, slot, &info);\n\n    /* Return number of fields active-expired */\n    return maxFieldsToExpire - ctx.fieldsToExpireQuota;\n}\n\n/* Active expiration Cycle for hash-fields.\n *\n * Note that releasing fields is expected to be more predictable and rewarding\n * than releasing keys because it is stored in `ebuckets` DS which optimized for\n * active expiration and in addition the deletion of fields is simple to handle. 
*/\nstatic inline void activeSubexpiresCycle(int type) {\n    /* Remember current db across calls */\n    static unsigned int currentDb = 0;\n    static int currentSlot = -1;\n\n    /* Tracks the count of fields actively expired for the current database.\n     * This count continues as long as it fails to actively expire all expired\n     * fields of currentDb, indicating a possible need to adjust the value of\n     * maxToExpire. */\n    static uint64_t activeExpirySequence = 0;\n    /* Threshold for adjusting maxToExpire */\n    const uint32_t EXPIRED_FIELDS_TH = 1000000;\n\n    redisDb *db = server.db + currentDb;\n\n    /* If db is empty, move to next db and return */\n    if (estoreIsEmpty(db->subexpires)) {\n        activeExpirySequence = 0;\n        currentDb = (currentDb + 1) % server.dbnum;\n        return;\n    }\n    if (currentSlot == -1)\n        currentSlot = estoreGetFirstNonEmptyBucket(db->subexpires);\n\n    /* During atomic slot migration, keys that are being imported are in an\n     * intermediate state. We cannot expire them and therefore skip them. */\n    if (!clusterCanAccessKeysInSlot(currentSlot)) {\n        /* Move to next non-empty subexpires slot */\n        currentSlot = estoreGetNextNonEmptyBucket(db->subexpires, currentSlot);\n        if (currentSlot == -1)\n            currentDb = (currentDb + 1) % server.dbnum; /* Move to next db */\n        return;\n    }\n\n    /* Maximum number of fields to actively expire on a single call */\n    uint32_t maxToExpire = HFE_DB_BASE_ACTIVE_EXPIRE_FIELDS_PER_SEC / server.hz;\n\n    /* If running for a while and didn't manage to active-expire all expired fields of\n     * currentDb (i.e. 
activeExpirySequence becomes significant) then adjust maxToExpire */\n    if ((activeExpirySequence > EXPIRED_FIELDS_TH) && (type == ACTIVE_EXPIRE_CYCLE_SLOW)) {\n        /* maxToExpire is multiplied by a factor between 1 and 32, proportional to\n         * the number of times activeExpirySequence exceeded EXPIRED_FIELDS_TH */\n        uint64_t factor = activeExpirySequence / EXPIRED_FIELDS_TH;\n        maxToExpire *= (factor<32) ? factor : 32;\n    }\n\n    if (activeSubexpires(db, currentSlot, maxToExpire) == maxToExpire) {\n        /* active-expire reached maxToExpire limit */\n        activeExpirySequence += maxToExpire;\n    } else {\n        /* Managed to active-expire all expired fields of currentDb */\n        activeExpirySequence = 0;\n        /* Move to next non-empty subexpires slot */\n        currentSlot = estoreGetNextNonEmptyBucket(db->subexpires, currentSlot);\n        if (currentSlot == -1)\n            currentDb = (currentDb + 1) % server.dbnum;\n    }\n}\n\nvoid activeExpireCycle(int type) {\n    /* Adjust the running parameters according to the configured expire\n     * effort. The default effort is 1, and the maximum configurable effort\n     * is 10. */\n    unsigned long\n    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */\n    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +\n                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort,\n    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +\n                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort,\n    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +\n                                  2*effort,\n    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-\n                                    effort;\n\n    /* This function has some global state in order to continue the work\n     * incrementally across calls. */\n    static unsigned int current_db = 0; /* Next DB to test. 
*/\n    static int timelimit_exit = 0;      /* Time limit hit in previous call? */\n    static long long last_fast_cycle = 0; /* When last fast cycle ran. */\n\n    int j, iteration = 0;\n    int dbs_per_call = CRON_DBS_PER_CALL;\n    int dbs_performed = 0;\n    long long start = ustime(), timelimit, elapsed;\n\n    /* If 'expire' action is paused, for whatever reason, then don't expire any key.\n     * Typically, at the end of the pause we will properly expire the key OR we\n     * will have failed over and the new primary will send us the expire. */\n    if (isPausedActionsWithUpdate(PAUSE_ACTION_EXPIRE)) return;\n\n    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {\n        /* Don't start a fast cycle if the previous cycle did not exit\n         * for time limit, unless the percentage of estimated stale keys is\n         * too high. Also never repeat a fast cycle for the same period\n         * as the fast cycle total duration itself. */\n        if (!timelimit_exit &&\n            server.stat_expired_stale_perc < config_cycle_acceptable_stale)\n            return;\n\n        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)\n            return;\n\n        last_fast_cycle = start;\n    }\n\n    /* We usually should test CRON_DBS_PER_CALL per iteration, with\n     * two exceptions:\n     *\n     * 1) Don't test more DBs than we have.\n     * 2) If last time we hit the time limit, we want to scan all DBs\n     * in this iteration, as there is work to do in some DB and we don't want\n     * expired keys to use memory for too much time. */\n    if (dbs_per_call > server.dbnum || timelimit_exit)\n        dbs_per_call = server.dbnum;\n\n    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU\n     * time per iteration. Since this function gets called with a frequency of\n     * server.hz times per second, the following is the max amount of\n     * microseconds we can spend in this function. 
*/\n    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;\n    timelimit_exit = 0;\n    if (timelimit <= 0) timelimit = 1;\n\n    if (type == ACTIVE_EXPIRE_CYCLE_FAST)\n        timelimit = config_cycle_fast_duration; /* in microseconds. */\n\n    /* Accumulate some global stats as we expire keys, to have some idea\n     * about the number of keys that are already logically expired, but still\n     * existing inside the database. */\n    long total_sampled = 0;\n    long total_expired = 0;\n\n    /* Try to smoke-out bugs (server.also_propagate should be empty here) */\n    serverAssert(server.also_propagate.numops == 0);\n\n    /* Stop iteration when one of the following conditions is met:\n     *\n     * 1) We have checked a sufficient number of databases with expiration time.\n     * 2) The time limit has been exceeded.\n     * 3) All databases have been traversed. */\n    for (j = 0; dbs_performed < dbs_per_call && timelimit_exit == 0 && j < server.dbnum; j++) {\n        /* Scan callback data including expired and checked count per iteration. */\n        expireScanData data;\n        data.ttl_sum = 0;\n        data.ttl_samples = 0;\n\n        redisDb *db = server.db+(current_db % server.dbnum);\n        data.db = db;\n\n        int db_done = 0; /* The scan of the current DB is done? */\n        int update_avg_ttl_times = 0, repeat = 0;\n\n        /* Increment the DB now so we are sure if we run out of time\n         * in the current DB we'll restart from the next. This allows to\n         * distribute the time evenly across DBs. */\n        current_db++;\n\n        /* Interleaving sub-expiration with key expiration. 
Better call it before\n         * handling expired keys because ebuckets is optimized for active expiration */\n        activeSubexpiresCycle(type);\n\n        if (kvstoreSize(db->expires))\n            dbs_performed++;\n\n        /* Continue to expire if at the end of the cycle there are still\n         * a big percentage of keys to expire, compared to the number of keys\n         * we scanned. The percentage, stored in config_cycle_acceptable_stale\n         * is not fixed, but depends on the Redis configured \"expire effort\". */\n        do {\n            unsigned long num;\n            iteration++;\n\n            /* If there is nothing to expire try next DB ASAP. */\n            if ((num = kvstoreSize(db->expires)) == 0) {\n                db->avg_ttl = 0;\n                break;\n            }\n            data.now = mstime();\n\n            /* The main collection cycle. Scan through keys among keys\n             * with an expire set, checking for expired ones. */\n            data.sampled = 0;\n            data.expired = 0;\n\n            if (num > config_keys_per_loop)\n                num = config_keys_per_loop;\n\n            /* Here we access the low level representation of the hash table\n             * for speed concerns: this makes this code coupled with dict.c,\n             * but it hardly changed in ten years.\n             *\n             * Note that certain places of the hash table may be empty,\n             * so we want also a stop condition about the number of\n             * buckets that we scanned. However scanning for free buckets\n             * is very fast: we are in the cache line scanning a sequential\n             * array of NULL pointers, so we can scan a lot more buckets\n             * than keys in the same time. 
*/\n            long max_buckets = num*20;\n            long checked_buckets = 0;\n\n            int origin_ttl_samples = data.ttl_samples;\n\n            while (data.sampled < num && checked_buckets < max_buckets) {\n                db->expires_cursor = kvstoreScan(db->expires, db->expires_cursor, -1, expireScanCallback, expirySamplingShouldSkipDict, &data);\n                if (db->expires_cursor == 0) {\n                    db_done = 1;\n                    break;\n                }\n                checked_buckets++;\n            }\n            total_expired += data.expired;\n            total_sampled += data.sampled;\n\n            /* If find keys with ttl not yet expired, we need to update the average TTL stats once. */\n            if (data.ttl_samples - origin_ttl_samples > 0) update_avg_ttl_times++;\n\n            /* We don't repeat the cycle for the current database if the db is done\n             * for scanning or an acceptable number of stale keys (logically expired\n             * but yet not reclaimed). */\n            repeat = db_done ? 0 : (data.sampled == 0 || (data.expired * 100 / data.sampled) > config_cycle_acceptable_stale);\n\n            /* We can't block forever here even if there are many keys to\n             * expire. So after a given amount of microseconds return to the\n             * caller waiting for the other active expire cycle. */\n            if ((iteration & 0xf) == 0 || !repeat) { /* Update the average TTL stats every 16 iterations or about to exit. */\n                /* Update the average TTL stats for this database, \n                 * because this may reach the time limit. */\n                if (data.ttl_samples) {\n                    long long avg_ttl = data.ttl_sum / data.ttl_samples;\n\n                    /* Do a simple running average with a few samples.\n                     * We just use the current estimate with a weight of 2%\n                     * and the previous estimate with a weight of 98%. 
*/\n                    if (db->avg_ttl == 0) {\n                        db->avg_ttl = avg_ttl;\n                    } else {\n                        /* The original code was as follows:\n                         * for (int i = 0; i < update_avg_ttl_times; i++) {\n                         *   db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);\n                         * }\n                         * We can convert the loop into a sum of a geometric progression.\n                         * db->avg_ttl = db->avg_ttl * pow(0.98, update_avg_ttl_times) +\n                         *                  avg_ttl / 50 * (pow(0.98, update_avg_ttl_times - 1) + ... + 1)\n                         *             = db->avg_ttl * pow(0.98, update_avg_ttl_times) +\n                         *                  avg_ttl * (1 - pow(0.98, update_avg_ttl_times))\n                         *             = avg_ttl + (db->avg_ttl - avg_ttl) * pow(0.98, update_avg_ttl_times)\n                         * Since update_avg_ttl_times is between 1 and 16, we use a constant table\n                         * to accelerate the calculation of pow(0.98, update_avg_ttl_times). */\n                        db->avg_ttl = avg_ttl + (db->avg_ttl - avg_ttl) * avg_ttl_factor[update_avg_ttl_times - 1];\n                    }\n                    update_avg_ttl_times = 0;\n                    data.ttl_sum = 0;\n                    data.ttl_samples = 0;\n                }\n                if ((iteration & 0xf) == 0) { /* check time limit every 16 iterations. 
*/\n                    elapsed = ustime()-start;\n                    if (elapsed > timelimit) {\n                        timelimit_exit = 1;\n                        server.stat_expired_time_cap_reached_count++;\n                        break;\n                    }\n                }\n            }\n        } while (repeat);\n    }\n\n    elapsed = ustime()-start;\n    server.stat_expire_cycle_time_used += elapsed;\n    latencyAddSampleIfNeeded(\"expire-cycle\",elapsed/1000);\n\n    /* Update our estimate of keys existing but yet to be expired.\n     * Running average with this sample accounting for 5%. */\n    double current_perc;\n    if (total_sampled) {\n        current_perc = (double)total_expired/total_sampled;\n    } else\n        current_perc = 0;\n    server.stat_expired_stale_perc = (current_perc*0.05)+\n                                     (server.stat_expired_stale_perc*0.95);\n}\n\n/*-----------------------------------------------------------------------------\n * Expires of keys created in writable slaves\n *\n * Normally slaves do not process expires: they wait the masters to synthesize\n * DEL operations in order to retain consistency. However writable slaves are\n * an exception: if a key is created in the slave and an expire is assigned\n * to it, we need a way to expire such a key, since the master does not know\n * anything about such a key.\n *\n * In order to do so, we track keys created in the slave side with an expire\n * set, and call the expireSlaveKeys() function from time to time in order to\n * reclaim the keys if they already expired.\n *\n * Note that the use case we are trying to cover here, is a popular one where\n * slaves are put in writable mode in order to compute slow operations in\n * the slave side that are mostly useful to actually read data in a more\n * processed way. 
Think of set intersections stored in a tmp key, with an expire so\n * that it can also be used as a cache to avoid intersecting every time.\n *\n * This implementation is currently not perfect but a lot better than leaking\n * the keys as implemented in 3.2.\n *----------------------------------------------------------------------------*/\n\n/* The dictionary where we remember the key names and database IDs of keys we\n * may want to expire from the slave. Since this feature is not often used we\n * don't even care to initialize the dictionary at startup. We'll do it once\n * the feature is used the first time, that is, when rememberSlaveKeyWithExpire()\n * is called.\n *\n * The dictionary has an SDS string representing the key as the hash table\n * key, while the value is a 64 bit unsigned integer with the bits corresponding\n * to the DBs where the key may exist set to 1. Currently keys created\n * with a DB id > 63 are not expired, but a trivial fix is to set the bitmap\n * to the max 64 bit unsigned value when we know there is a key with a DB\n * ID greater than 63, and check all the configured DBs in such a case. */\ndict *slaveKeysWithExpire = NULL;\n\n/* Check the set of keys created by the slave with an expire set in order to\n * check if they should be evicted. */\nvoid expireSlaveKeys(void) {\n    if (slaveKeysWithExpire == NULL ||\n        dictSize(slaveKeysWithExpire) == 0) return;\n\n    int cycles = 0, noexpire = 0;\n    mstime_t start = mstime();\n    while(1) {\n        dictEntry *de = dictGetRandomKey(slaveKeysWithExpire);\n        sds keyname = dictGetKey(de);\n        uint64_t dbids = dictGetUnsignedIntegerVal(de);\n        uint64_t new_dbids = 0;\n\n        /* Check the key against every database corresponding to the\n         * bits set in the value bitmap. 
*/\n        int dbid = 0;\n        while(dbids && dbid < server.dbnum) {\n            if ((dbids & 1) != 0) {\n                redisDb *db = server.db+dbid;\n                kvobj *kv = dbFindExpires(db, keyname);\n                int expired = kv && activeExpireCycleTryExpire(db, kv, start);\n\n                /* If the key was not expired in this DB, we need to set the\n                 * corresponding bit in the new bitmap we set as value.\n                 * At the end of the loop if the bitmap is zero, it means we\n                 * no longer need to keep track of this key. */\n                if (kv && !expired) {\n                    noexpire++;\n                    new_dbids |= (uint64_t)1 << dbid;\n                }\n            }\n            dbid++;\n            dbids >>= 1;\n        }\n\n        /* Set the new bitmap as the value of the key in the dictionary\n         * of keys with an expire set directly in the writable slave. If\n         * the bitmap is zero instead, we no longer need to keep track of it. */\n        if (new_dbids)\n            dictSetUnsignedIntegerVal(de,new_dbids);\n        else\n            dictDelete(slaveKeysWithExpire,keyname);\n\n        /* Stop conditions: more than 3 keys in a row that we can't expire,\n         * or the time limit was reached. */\n        cycles++;\n        if (noexpire > 3) break;\n        if ((cycles % 64) == 0 && mstime()-start > 1) break;\n        if (dictSize(slaveKeysWithExpire) == 0) break;\n    }\n}\n\n/* Track keys that received an EXPIRE or similar command in the context\n * of a writable slave. 
*/\nvoid rememberSlaveKeyWithExpire(redisDb *db, sds key) {\n    if (slaveKeysWithExpire == NULL) {\n        static dictType dt = {\n            dictSdsHash,                /* hash function */\n            NULL,                       /* key dup */\n            NULL,                       /* val dup */\n            dictSdsKeyCompare,          /* key compare */\n            dictSdsDestructor,          /* key destructor */\n            NULL,                       /* val destructor */\n            NULL                        /* allow to expand */\n        };\n        slaveKeysWithExpire = dictCreate(&dt);\n    }\n    if (db->id > 63) return;\n\n    dictEntry *de = dictAddOrFind(slaveKeysWithExpire, key);\n    /* If the entry was just created, set it to a copy of the SDS string\n     * representing the key: we don't want to have to keep those keys\n     * in sync with the main DB. The keys will be removed by expireSlaveKeys()\n     * as it scans to find keys to remove. */\n    if (dictGetKey(de) == key) {\n        dictSetKey(slaveKeysWithExpire, de, sdsdup(key));\n        dictSetUnsignedIntegerVal(de,0);\n    }\n\n    uint64_t dbids = dictGetUnsignedIntegerVal(de);\n    dbids |= (uint64_t)1 << db->id;\n    dictSetUnsignedIntegerVal(de,dbids);\n}\n\n/* Return the number of keys we are tracking. */\nsize_t getSlaveKeyWithExpireCount(void) {\n    if (slaveKeysWithExpire == NULL) return 0;\n    return dictSize(slaveKeysWithExpire);\n}\n\n/* Remove the keys in the hash table. We need to do that when data is\n * flushed from the server. We may receive new keys from the master with\n * the same name/db and it is no longer a good idea to expire them.\n *\n * Note: technically we should handle the case of a single DB being flushed,\n * but it is not worth it since anyway race conditions using the same set\n * of key names in a writable slave and in its master will lead to\n * inconsistencies. This is just a best-effort thing we do. 
*/\nvoid flushSlaveKeysWithExpireList(void) {\n    if (slaveKeysWithExpire) {\n        dictRelease(slaveKeysWithExpire);\n        slaveKeysWithExpire = NULL;\n    }\n}\n\nint checkAlreadyExpired(long long when) {\n    /* EXPIRE with a negative TTL, or EXPIREAT with a timestamp in the past,\n     * should never be executed as a DEL when loading the AOF or in the context\n     * of a slave instance.\n     *\n     * Instead we add the already expired key to the database with an expire time\n     * (possibly in the past) and wait for an explicit DEL from the master. */\n    if (server.current_client && server.current_client->flags & CLIENT_MASTER) return 0;\n    return (when <= commandTimeSnapshot() && !server.loading && !server.masterhost);\n}\n\n#define EXPIRE_NX (1<<0)\n#define EXPIRE_XX (1<<1)\n#define EXPIRE_GT (1<<2)\n#define EXPIRE_LT (1<<3)\n\n/* Parse the additional flags of expire commands.\n *\n * Supported flags:\n * - NX: set expiry only when the key has no expiry\n * - XX: set expiry only when the key has an existing expiry\n * - GT: set expiry only when the new expiry is greater than the current one\n * - LT: set expiry only when the new expiry is less than the current one */\nint parseExtendedExpireArgumentsOrReply(client *c, int *flags) {\n    int nx = 0, xx = 0, gt = 0, lt = 0;\n\n    int j = 3;\n    while (j < c->argc) {\n        char *opt = c->argv[j]->ptr;\n        if (!strcasecmp(opt,\"nx\")) {\n            *flags |= EXPIRE_NX;\n            nx = 1;\n        } else if (!strcasecmp(opt,\"xx\")) {\n            *flags |= EXPIRE_XX;\n            xx = 1;\n        } else if (!strcasecmp(opt,\"gt\")) {\n            *flags |= EXPIRE_GT;\n            gt = 1;\n        } else if (!strcasecmp(opt,\"lt\")) {\n            *flags |= EXPIRE_LT;\n            lt = 1;\n        } else {\n            addReplyErrorFormat(c, \"Unsupported option %s\", opt);\n            return C_ERR;\n        }\n        j++;\n    }\n\n    if ((nx && xx) || (nx && gt) || (nx && lt)) {\n        
addReplyError(c, \"NX and XX, GT or LT options at the same time are not compatible\");\n        return C_ERR;\n    }\n\n    if (gt && lt) {\n        addReplyError(c, \"GT and LT options at the same time are not compatible\");\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\n/*-----------------------------------------------------------------------------\n * Expires Commands\n *----------------------------------------------------------------------------*/\n\n/* This is the generic command implementation for EXPIRE, PEXPIRE, EXPIREAT\n * and PEXPIREAT. Because the command's second argument may be relative or\n * absolute, the \"basetime\" argument is used to signal what the base time is\n * (either 0 for *AT variants of the command, or the current time for\n * relative expires).\n *\n * unit is either UNIT_SECONDS or UNIT_MILLISECONDS, and is only used for\n * the argv[2] parameter. The basetime is always specified in milliseconds.\n *\n * Additional flags are supported and parsed via\n * parseExtendedExpireArgumentsOrReply(). */\nvoid expireGenericCommand(client *c, long long basetime, int unit) {\n    robj *key = c->argv[1], *param = c->argv[2];\n    long long when; /* unix time in milliseconds when the key will expire. */\n    long long current_expire = -1;\n    int flag = 0;\n\n    /* Check the optional flags. */\n    if (parseExtendedExpireArgumentsOrReply(c, &flag) != C_OK) {\n        return;\n    }\n\n    if (getLongLongFromObjectOrReply(c, param, &when, NULL) != C_OK)\n        return;\n\n    /* EXPIRE allows negative numbers, but we can at least detect an\n     * overflow by either unit conversion or basetime addition. */\n    if (unit == UNIT_SECONDS) {\n        if (when > LLONG_MAX / 1000 || when < LLONG_MIN / 1000) {\n            addReplyErrorExpireTime(c);\n            return;\n        }\n        when *= 1000;\n    }\n\n    if (when > LLONG_MAX - basetime) {\n        addReplyErrorExpireTime(c);\n        return;\n    }\n    when += basetime;\n\n    /* No key, return zero. 
*/\n    kvobj *kv = lookupKeyWrite(c->db,key);\n    if (kv == NULL) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    if (flag) {\n        current_expire = kvobjGetExpire(kv);\n\n        /* NX option is set, check current expiry */\n        if (flag & EXPIRE_NX) {\n            if (current_expire != -1) {\n                addReply(c,shared.czero);\n                return;\n            }\n        }\n\n        /* XX option is set, check current expiry */\n        if (flag & EXPIRE_XX) {\n            if (current_expire == -1) {\n                /* reply 0 when the key has no expiry */\n                addReply(c,shared.czero);\n                return;\n            }\n        }\n\n        /* GT option is set, check current expiry */\n        if (flag & EXPIRE_GT) {\n            /* When current_expire is -1, we consider it an infinite TTL,\n             * so an expire command with GT always fails the check. */\n            if (when <= current_expire || current_expire == -1) {\n                /* reply 0 when the new expiry is not greater than current */\n                addReply(c,shared.czero);\n                return;\n            }\n        }\n\n        /* LT option is set, check current expiry */\n        if (flag & EXPIRE_LT) {\n            /* When current_expire is -1, we consider it an infinite TTL,\n             * but 'when' can still be negative at this point, so if there is\n             * an expiry on the key and it's not less than current, we fail the LT. 
*/\n            if (current_expire != -1 && when >= current_expire) {\n                /* reply 0 when the new expiry is not less than current */\n                addReply(c,shared.czero);\n                return;\n            }\n        }\n    }\n\n    if (checkAlreadyExpired(when)) {\n        robj *aux;\n\n        int deleted = dbGenericDelete(c->db,key,server.lazyfree_lazy_expire,DB_FLAG_KEY_EXPIRED);\n        serverAssertWithInfo(c,key,deleted);\n        server.dirty++;\n\n        /* Replicate/AOF this as an explicit DEL or UNLINK. */\n        aux = server.lazyfree_lazy_expire ? shared.unlink : shared.del;\n        rewriteClientCommandVector(c,2,aux,key);\n        keyModified(c,c->db,key,NULL,1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",key,c->db->id);\n        addReply(c, shared.cone);\n        return;\n    } else {\n        kv = setExpire(c,c->db,key,when); /* might realloc kv */\n        addReply(c,shared.cone);\n        /* Propagate as PEXPIREAT millisecond-timestamp\n         * Only rewrite the command arg if not already PEXPIREAT */\n        if (c->cmd->proc != pexpireatCommand) {\n            rewriteClientCommandArgument(c,0,shared.pexpireat);\n        }\n\n        /* Avoid creating a string object when it's the same as argv[2] parameter  */\n        if (basetime != 0 || unit == UNIT_SECONDS) {\n            robj *when_obj = createStringObjectFromLongLong(when);\n            rewriteClientCommandArgument(c,2,when_obj);\n            decrRefCount(when_obj);\n        }\n\n        keyModified(c,c->db,key,kv,1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",key,c->db->id);\n        KSN_INVALIDATE_KVOBJ(kv);\n        server.dirty++;\n        return;\n    }\n}\n\n/* EXPIRE key seconds [ NX | XX | GT | LT] */\nvoid expireCommand(client *c) {\n    expireGenericCommand(c,commandTimeSnapshot(),UNIT_SECONDS);\n}\n\n/* EXPIREAT key unix-time-seconds [ NX | XX | GT | LT] */\nvoid expireatCommand(client *c) {\n    
expireGenericCommand(c,0,UNIT_SECONDS);\n}\n\n/* PEXPIRE key milliseconds [ NX | XX | GT | LT] */\nvoid pexpireCommand(client *c) {\n    expireGenericCommand(c,commandTimeSnapshot(),UNIT_MILLISECONDS);\n}\n\n/* PEXPIREAT key unix-time-milliseconds [ NX | XX | GT | LT] */\nvoid pexpireatCommand(client *c) {\n    expireGenericCommand(c,0,UNIT_MILLISECONDS);\n}\n\n/* Implements TTL, PTTL, EXPIRETIME and PEXPIRETIME */\nvoid ttlGenericCommand(client *c, int output_ms, int output_abs) {\n    long long expire, ttl = -1;\n\n    /* If the key does not exist at all, return -2 */\n    kvobj *kv = lookupKeyReadWithFlags(c->db,c->argv[1],LOOKUP_NOTOUCH);\n    if (kv == NULL) {\n        addReplyLongLong(c,-2);\n        return;\n    }\n\n    /* The key exists. Return -1 if it has no expire, or the actual\n     * TTL value otherwise. */\n    expire = kvobjGetExpire(kv);\n    if (expire != -1) {\n        ttl = output_abs ? expire : expire-commandTimeSnapshot();\n        if (ttl < 0) ttl = 0;\n    }\n    if (ttl == -1) {\n        addReplyLongLong(c,-1);\n    } else {\n        addReplyLongLong(c,output_ms ? 
ttl : ((ttl+500)/1000));\n    }\n}\n\n/* TTL key */\nvoid ttlCommand(client *c) {\n    ttlGenericCommand(c, 0, 0);\n}\n\n/* PTTL key */\nvoid pttlCommand(client *c) {\n    ttlGenericCommand(c, 1, 0);\n}\n\n/* EXPIRETIME key */\nvoid expiretimeCommand(client *c) {\n    ttlGenericCommand(c, 0, 1);\n}\n\n/* PEXPIRETIME key */\nvoid pexpiretimeCommand(client *c) {\n    ttlGenericCommand(c, 1, 1);\n}\n\n/* PERSIST key */\nvoid persistCommand(client *c) {\n    kvobj *kv;\n    if ((kv = lookupKeyWrite(c->db,c->argv[1]))) {\n        if (removeExpire(c->db,c->argv[1])) {\n            keyModified(c,c->db,c->argv[1],kv,1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"persist\",c->argv[1],c->db->id);\n            KSN_INVALIDATE_KVOBJ(kv);\n            addReply(c,shared.cone);\n            server.dirty++;\n        } else {\n            addReply(c,shared.czero);\n        }\n    } else {\n        addReply(c,shared.czero);\n    }\n}\n\n/* TOUCH key1 [key2 key3 ... keyN] */\nvoid touchCommand(client *c) {\n    int touched = 0;\n    for (int j = 1; j < c->argc; j++)\n        if (lookupKeyRead(c->db,c->argv[j]) != NULL) touched++;\n    addReplyLongLong(c,touched);\n}\n"
  },
  {
    "path": "src/fast_float_strtod.c",
    "content": "/* fast_float_strtod.c - Fast string to double conversion\n *\n * This is a C conversion of a subset of the fast_float C++ library,\n * implementing only what Redis needs: parsing decimal floating-point strings.\n *\n * Original fast_float library:\n *   https://github.com/fastfloat/fast_float\n *   by Daniel Lemire and João Paulo Magalhaes\n *\n * MIT License\n *\n * Copyright (c) 2021 The fast_float authors\n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to deal\n * in the Software without restriction, including without limitation the rights\n * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n * copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in all\n * copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n * SOFTWARE.\n */\n\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <math.h>\n#include <float.h>\n\n#include \"fast_float_strtod.h\"\n#include \"config.h\"\n#include \"zmalloc.h\"\n\n/* Powers of 10 from 10^0 to 10^22 (exact in double precision).\n * These are the only powers of 10 that can be exactly represented as doubles. 
*/\nstatic const double powers_of_ten[] = {\n    1e0,  1e1,  1e2,  1e3,  1e4,  1e5,  1e6,  1e7,  1e8,  1e9,  1e10, 1e11,\n    1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19, 1e20, 1e21, 1e22\n};\n\n/* Maximum mantissa for fast path: 2^53 */\n#define MAX_MANTISSA_FAST_PATH 9007199254740992ULL  /* 2^53 */\n\n/* Exponent limits for fast path */\n#define MIN_EXPONENT_FAST_PATH -22\n#define MAX_EXPONENT_FAST_PATH 22\n\n/* Maximum number of significant digits we track before overflow */\n#define MAX_DIGITS 19\n\n/* Case-insensitive match against known lowercase literals using `| 0x20`.\n * Only valid when the target characters are ASCII letters (a-z). */\nstatic inline int strcasecmp_3(const char *s, char c0, char c1, char c2) {\n    return ((s[0] | 0x20) == c0) & ((s[1] | 0x20) == c1) & ((s[2] | 0x20) == c2);\n}\n\n/* Case-insensitive comparison for first n characters.\n * Only valid when the target characters are ASCII letters (a-z). */\nstatic int strncasecmp_local(const char *s1, const char *s2, size_t n) {\n    for (size_t i = 0; i < n; i++) {\n        int diff = (s1[i] | 0x20) - s2[i];\n        if (diff) return diff;\n    }\n    return 0;\n}\n\n/* Parse inf/nan special values.\n * Returns 1 if parsed successfully, 0 otherwise.\n * On success, *endptr points past the parsed value. */\nstatic inline int parse_infnan(const char *p, const char *pend, double *result, const char **endptr) {\n    int negative = (*p == '-');\n    if (*p == '-' || *p == '+') p++;\n    size_t remaining = pend - p;\n\n    if (remaining >= 3) {\n        if (strcasecmp_3(p, 'n', 'a', 'n')) {\n            *result = negative ? 
-NAN : NAN;\n            p += 3;\n            /* Check for optional nan(n-char-seq) */\n            if (p < pend && *p == '(') {\n                const char *start = p;\n                p++;\n                while (p < pend) {\n                    char c = *p;\n                    if (c == ')') {\n                        p++;\n                        break;\n                    }\n                    if (!((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||\n                          (c >= '0' && c <= '9') || c == '_')) {\n                        /* Invalid character, revert to position after \"nan\" */\n                        p = start;\n                        break;\n                    }\n                    p++;\n                }\n                /* If we didn't find closing ')', revert */\n                if (p[-1] != ')') {\n                    p = start;\n                }\n            }\n            if (endptr) *endptr = (char *)p;\n            return 1;\n        }\n        if (strcasecmp_3(p, 'i', 'n', 'f')) {\n            *result = negative ? -INFINITY : INFINITY;\n            p += 3;\n            /* Check for optional \"inity\" suffix */\n            if (remaining == 8 && strncasecmp_local(p, \"inity\", 5) == 0) {\n                p += 5;\n            }\n            if (endptr) *endptr = (char *)p;\n            return 1;\n        }\n    }\n    return 0;\n}\n\n/* SWAR (SIMD Within A Register) helpers for batch digit parsing. */\n\nstatic inline uint64_t read8_to_u64(const char *p) {\n    uint64_t val;\n    memcpy(&val, p, sizeof(uint64_t));\n#if BYTE_ORDER == BIG_ENDIAN\n    /* SWAR digit parsing assumes first char in LSB (little-endian layout). 
*/\n#if defined(__GNUC__) || defined(__clang__)\n    val = __builtin_bswap64(val);\n#else\n    val = ((val & 0x00000000FFFFFFFFULL) << 32) | ((val & 0xFFFFFFFF00000000ULL) >> 32);\n    val = ((val & 0x0000FFFF0000FFFFULL) << 16) | ((val & 0xFFFF0000FFFF0000ULL) >> 16);\n    val = ((val & 0x00FF00FF00FF00FFULL) << 8)  | ((val & 0xFF00FF00FF00FF00ULL) >> 8);\n#endif\n#endif\n    return val;\n}\n\nstatic inline int is_made_of_eight_digits(uint64_t val) {\n    return !((((val + 0x4646464646464646ULL) | (val - 0x3030303030303030ULL)) &\n              0x8080808080808080ULL));\n}\n\nstatic inline uint32_t parse_eight_digits_swar(uint64_t val) {\n    uint64_t const mask = 0x000000FF000000FFULL;\n    uint64_t const mul1 = 0x000F424000000064ULL; /* 100 + (1000000ULL << 32) */\n    uint64_t const mul2 = 0x0000271000000001ULL; /* 1 + (10000ULL << 32) */\n    val -= 0x3030303030303030ULL;\n    val = (val * 10) + (val >> 8);\n    val = (((val & mask) * mul1) + (((val >> 16) & mask) * mul2)) >> 32;\n    return (uint32_t)val;\n}\n\n/* Parse a decimal number string into components.\n * This follows the fast_float algorithm closely. 
*/\nstatic inline int parse_number_string(const char *p, const char *pend, double *result, const char **endptr) {\n    uint64_t mantissa = 0;  /* Mantissa digits as uint64 */\n    int64_t exponent = 0;   /* Decimal exponent (adjusted for decimal point) */\n    int negative = 0;       /* Sign flag */\n    *endptr = p;\n\n    if (p == pend) return 0;\n\n    /* Parse sign */\n    negative = (*p == '-');\n    if (*p == '-' || *p == '+') {\n        p++;\n        if (p == pend) return 0;\n    }\n\n    const char *start_digits = p;\n\n    /* Parse integer part */\n    mantissa = 0;\n    while (pend - p >= 8) {\n        uint64_t val = read8_to_u64(p);\n        if (!is_made_of_eight_digits(val)) break;\n        mantissa = mantissa * 100000000 + parse_eight_digits_swar(val);\n        p += 8;\n    }\n    while (p != pend && *p >= '0' && *p <= '9') {\n        mantissa = mantissa * 10 + (*p - '0');\n        p++;\n    }\n\n    int64_t digit_count = p - start_digits;\n\n    /* Parse decimal point and fractional part */\n    exponent = 0;\n    int has_decimal = (p != pend && *p == '.');\n\n    if (has_decimal) {\n        p++;\n        const char *before = p;\n        while (pend - p >= 8) {\n            uint64_t val = read8_to_u64(p);\n            if (!is_made_of_eight_digits(val)) break;\n            mantissa = mantissa * 100000000 + parse_eight_digits_swar(val);\n            p += 8;\n        }\n        while (p != pend && *p >= '0' && *p <= '9') {\n            mantissa = mantissa * 10 + (*p - '0');\n            p++;\n        }\n        exponent = before - p;  /* Negative: number of fractional digits */\n        digit_count += (p - before);\n    }\n\n    /* Must have at least one digit */\n    if (digit_count == 0) return 0;\n\n    /* Parse exponent */\n    int64_t exp_number = 0;\n    if (p != pend && (*p == 'e' || *p == 'E')) {\n        const char *exp_start = p;\n        p++;\n\n        int neg_exp = 0;\n        if (p != pend && *p == '-') {\n            neg_exp = 1;\n         
   p++;\n        } else if (p != pend && *p == '+') {\n            p++;\n        }\n\n        if (p == pend || *p < '0' || *p > '9') {\n            /* No digits after e/E, revert to position before 'e' */\n            p = exp_start;\n        } else {\n            while (p != pend && *p >= '0' && *p <= '9') {\n                if (exp_number < 0x10000000) {\n                    exp_number = exp_number * 10 + (*p - '0');\n                }\n                p++;\n            }\n            if (neg_exp) exp_number = -exp_number;\n            exponent += exp_number;\n        }\n    }\n\n    *endptr = p;\n    \n    /* Handle overflow in mantissa: if we have too many digits,\n     * we need to reparse more carefully */\n    if (digit_count > MAX_DIGITS) {\n        /* Skip leading zeros to get actual digit count */\n        const char *s = start_digits;\n        while (s != pend && (*s == '0' || *s == '.')) {\n            if (*s == '0') digit_count--;\n            s++;\n        }\n\n        if (digit_count > MAX_DIGITS) return 0;\n    }\n\n    /* Check if we're within fast path bounds */\n    if (exponent < MIN_EXPONENT_FAST_PATH) return 0;\n    if (exponent > MAX_EXPONENT_FAST_PATH) return 0;\n\n    double value;\n    if (mantissa <= MAX_MANTISSA_FAST_PATH) {\n        /* Clinger fast path: all operands exact in double precision,\n         * single multiply/divide produces a correctly-rounded result. 
*/\n        value = (double)mantissa;\n        if (exponent < 0)       value = value / powers_of_ten[-exponent];\n        else if (exponent > 0)  value = value * powers_of_ten[exponent];\n    } else {\n#ifdef __SIZEOF_INT128__\n        /* Widened fast path for 17-19 significant-digit mantissas.\n         *\n         * (double)mantissa alone loses up to 11 bits when mantissa > 2^53,\n         * so the existing Clinger path would yield up to 1 ULP vs strtod.\n         * We recover full precision by doing the multiply/divide in 128-bit\n         * integer arithmetic (correctly-rounded by construction). Cases\n         * outside the supported exponent range fall through to strtod.\n         *\n         * Requires __uint128_t (GCC/Clang builtin, available on every 64-bit\n         * target Redis supports). 32-bit builds take the strtod() fallback. */\n        if (exponent < -19 || exponent > 19) return 0;\n\n        if (exponent >= 0) {\n            /* (mantissa * 10^e) fits in 128 bits. Convert exactly: the\n             * single (double) cast from __uint128_t rounds to nearest. */\n            __uint128_t prod = (__uint128_t)mantissa * (uint64_t)powers_of_ten[exponent];\n            uint64_t hi = (uint64_t)(prod >> 64);\n            uint64_t lo = (uint64_t)prod;\n            /* (double)hi * 2^64 has no rounding error (hi up to 2^64-1 rounds\n             * once, then * 2^64 is exact). Adding lo rounds once. Total:\n             * matches strtod on every tested case with e in [0,19]. */\n            value = (double)hi * 18446744073709551616.0 + (double)lo;\n        } else {\n            /* mantissa / 10^|e|: scale numerator up by 2^64 before integer\n             * division to preserve precision, then descale by multiplying by\n             * 2^-64 (exact power-of-two scaling, does not round). 
The single\n             * (double) cast of the integer quotient produces IEEE round-to-\n             * nearest-even, matching strtod() bit-exactly for every tested\n             * 16-19 significant digit case. */\n            uint64_t divisor = (uint64_t)powers_of_ten[-exponent];\n            __uint128_t scaled = (__uint128_t)mantissa << 64;\n            __uint128_t q = scaled / divisor;\n            uint64_t hi = (uint64_t)(q >> 64);\n            uint64_t lo = (uint64_t)q;\n            value = ((double)hi * 18446744073709551616.0 + (double)lo)\n                  * 5.421010862427522170037e-20; /* 2^-64 */\n        }\n#else\n        /* 32-bit target without __uint128_t: fall through to the strtod()\n         * fallback. Correctness is preserved (it's the same path that shipped\n         * in 8.8-M02); only the perf gain is 64-bit-target-specific. */\n        return 0;\n#endif\n    }\n\n    if (negative) value = -value;\n    *result = value;\n    return 1;\n}\n\n/* Main conversion function.\n *\n * This function behaves similarly to the standard strtod function, converting\n * the initial portion of the string pointed to by `nptr` to a `double` value.\n * If the conversion fails, errno is set to EINVAL error code.\n *\n * @param nptr   A pointer to the null-terminated byte string to be interpreted.\n * @param endptr A pointer to a pointer to character. If `endptr` is not NULL,\n *               it will point to the character after the last character used\n *               in the conversion.\n * @return       The converted value as a double. 
If no valid conversion could\n *               be performed, returns 0.0.\n */\nstatic inline int fast_float_try_fast(const char *nptr, const char *pend, double *result, const char **endptr) {\n    if (nptr == pend) {\n        errno = EINVAL;\n        if (endptr) *endptr = (char *)nptr;\n        return 0;\n    }\n\n    /* Parse the number string */\n    if (parse_number_string(nptr, pend, result, endptr)) {\n        return 1;\n    }\n\n    /* Not a valid decimal number, try inf/nan special values */\n    if (parse_infnan(nptr, pend, result, endptr)) {\n        return 1;\n    }\n\n    return 0;\n}\n\nstatic double fast_float_strtod_fallback(const char *nptr, size_t len, char **endptr) {\n    /* Since the input may not be null-terminated, we must copy it into a temporary buffer. */\n    char static_buf[128];\n    char *buf = static_buf;\n    if (len >= sizeof(static_buf))\n        buf = zmalloc(len + 1);\n    memcpy(buf, nptr, len);\n    buf[len] = '\\0';\n\n    char *fallback_end;\n    double result = strtod(buf, &fallback_end);\n    if (endptr) *endptr = (char *)nptr + (fallback_end - buf);\n\n    /* If strtod failed to parse, set errno */\n    if (fallback_end == buf) {\n        errno = EINVAL;\n    }\n\n    if (buf != static_buf) zfree(buf);\n    return result;\n}\n\n/* Convert string to double, with explicit length (string need NOT be null-terminated).\n * Falls back to strtod by copying to a temporary null-terminated buffer. */\ndouble fast_float_strtod(const char *nptr, size_t len, char **endptr) {\n    double result = 0.0;\n    const char *pend = nptr + len;\n    const char *eptr;\n\n    /* Use fast path for non-null-terminated strings */\n    if (likely(fast_float_try_fast(nptr, pend, &result, &eptr) && eptr == pend)) {\n        if (endptr) *endptr = (char *)eptr;\n#if UINTPTR_MAX == 0xffffffff\n        /* On 32-bit x86 with x87 FPU, the fast-path fdiv/fmul result lives in\n         * an 80-bit extended-precision register. 
With optimisation the compiler\n         * may return that value in st(0) without ever storing it to a 64-bit\n         * memory slot, so the caller would receive an 80-bit value that differs\n         * from the correctly-rounded 64-bit double.  Writing through a volatile\n         * forces a real fstpl (store + pop to 64-bit memory) followed by fldl\n         * (reload into st(0) from that 64-bit slot), ensuring the return value\n         * is truncated to double precision before it reaches the caller. */\n        volatile double ret = result;\n        return ret;\n#else\n        return result;\n#endif\n    }\n    \n    /* Fall back to strtod for complex cases:\n     * - Very large or very small exponents\n     * - Too many digits (need precise rounding)\n     * This ensures we get correctly-rounded results for edge cases. */\n    return fast_float_strtod_fallback(nptr, len, endptr);\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n#include \"testhelp.h\"\n\n#define UNUSED(x) (void)(x)\n#define COUNTOF(arr) (int)(sizeof(arr) / sizeof((arr)[0]))\n\ntypedef struct {\n    const char *input;\n    double expected;\n} ff_testcase;\n\nstatic int ff_eq(double a, double b) {\n    if (isnan(a)) return isnan(b);\n    if (isinf(a)) return isinf(b) && (a > 0) == (b > 0);\n    return a == b;\n}\n\nstatic void run_ff_tests(ff_testcase *cases, int n, int expect_failed) {\n    for (int i = 0; i < n; i++) {\n        const char *s = cases[i].input;\n        size_t len = strlen(s);\n        char *eptr;\n\n        errno = 0;\n        double d = fast_float_strtod(s, len, &eptr);\n        int failed = ((size_t)(eptr - s) != len) || errno == EINVAL ||\n            (errno == ERANGE && (d == HUGE_VAL || d == -HUGE_VAL || fpclassify(d) == FP_ZERO));\n        int ok = (expect_failed == failed) && ff_eq(d, cases[i].expected);\n        char descr[128];\n        if (ok)\n            snprintf(descr, sizeof(descr), \"\\\"%s\\\" -> expect %s(%.20g)\",\n                     s, expect_failed ? 
\"fail\" : \"ok\", cases[i].expected);\n        else\n            snprintf(descr, sizeof(descr), \"\\\"%s\\\" -> expect %s(%.20g) but got %s(%.20g)\",\n                     s, expect_failed ? \"fail\" : \"ok\", cases[i].expected, failed ? \"fail\" : \"ok\", d);\n        test_cond(descr, ok);\n    }\n}\n\nint fastFloatTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    /* Finite decimals: fast path, exponent ±22 edges, mantissa 2^53, strtod fallback. */\n    ff_testcase decimal_ok[] = {\n        {\"0\", 0.0},\n        {\"+0\", 0.0},\n        {\"-0\", -0.0},\n        {\"42\", 42.0},\n        {\"+42\", 42.0},\n        {\"-42\", -42.0},\n        {\"00007\", 7.0},\n        {\"00.25\", 0.25},\n        {\"3.14\", 3.14},\n        {\".5\", 0.5},\n        {\"+.5\", 0.5},\n        {\"1.\", 1.0},\n        {\"0.\", 0.0},\n        {\".0\", 0.0},\n        {\"-1.5e2\", -150.0},\n        {\"1e5\", 1e5},\n        {\"1E5\", 1e5},\n        {\"2E3\", 2000.0},\n        {\"3e+5\", 3e5},\n        {\"1e-10\", 1e-10},\n        {\"1e-22\", 1e-22},\n        {\"1e+22\", 1e22},\n        {\"1e-23\", 1e-23},\n        {\"1e+100\", 1e100},\n        {\"1e-100\", 1e-100},\n        {\"9007199254740992\", 9007199254740992.0},\n        {\"9007199254740993\", 9007199254740992.0},\n        {\"12345678901234567890\", 1.2345678901234567e19},\n        {\"2.2250738585072012e-308\", 2.2250738585072012e-308}, /* Near DBL_MIN boundary */\n        {\"0x10\", 16.0},\n\n        /* Widened fast path: mantissa > 2^53 (==9007199254740992), |exp| in [1,19].\n         * These cover the __uint128_t code path that avoids the strtod() fallback.\n         * Each expected value is the IEEE-correct round-to-nearest double. 
*/\n\n        /* 17-19 significant digit mantissas — negative exponent (scores in [0,1)) */\n        {\"0.49606648747577575\", 0.49606648747577575}, /* 17 sig digits, ZADD hot case */\n        {\"0.8731899671198792\",  0.8731899671198792},  /* 16 sig digits */\n        {\"0.34912978268081996\", 0.34912978268081996}, /* 17 sig digits */\n        {\"0.0033318113277969186\", 0.0033318113277969186}, /* 19 sig digits after leading-zero strip */\n        {\"0.9955843393406656\",  0.9955843393406656},\n        {\"0.999999999999999\",   0.999999999999999},   /* repunit-ish, ULP boundary */\n\n        /* Mantissa just above 2^53: triggers the widened path */\n        {\"9007199254740993.0\",  9007199254740992.0},  /* rounds down */\n        {\"9007199254740995.0\",  9007199254740996.0},  /* ties-to-even up */\n        {\"9007199254740996.0\",  9007199254740996.0},\n        {\"10000000000000000\",   1e16},                /* exact 10^16, mantissa = 10^16 */\n        {\"99999999999999999\",   1e17},                /* one less than 10^17 */\n\n        /* 18-digit mantissa with various exponents */\n        {\"1234567890123456789\",    1.2345678901234568e18}, /* 19 digits, integer form */\n        {\"1234567890123456789e0\",  1.2345678901234568e18},\n        {\"1234567890123456789e-5\", 12345678901234.568},\n        {\"1234567890123456789e-19\", 0.12345678901234568},\n        {\"1234567890123456789e5\",  1.2345678901234569e23}, /* 19-digit mantissa × 10^5 — widened path */\n\n        /* Boundary: exponent exactly ±19 (widened-path limit) */\n        {\"1234567890123.456789e-19\", 1.2345678901234568e-7}, /* effective exp = -25, falls back to strtod */\n        {\"9999999999999999e19\",       9.999999999999999e34},\n        {\"9999999999999999e-19\",      9.999999999999999e-4},\n\n        /* Negative numbers exercising the widened path */\n        {\"-0.49606648747577575\", -0.49606648747577575},\n        {\"-9007199254740993\",    -9007199254740992.0},\n    };\n    
run_ff_tests(decimal_ok, COUNTOF(decimal_ok), 0);\n\n    /* No valid prefix for full buffer, or trailing junk. */\n    ff_testcase decimal_bad[] = {\n        {\"1abc\", 1.0},\n        {\"1e\", 1.0},\n        {\"1e+\", 1.0},\n        {\"1e-\", 1.0},\n        {\"1e+z\", 1.0},\n        {\"12.34.56\", 12.34},\n        {\"..1\", 0.0},\n        {\"e10\", 0.0},\n        {\"E10\", 0.0},\n        {\"+\", 0.0},\n        {\"-\", 0.0},\n        {\"foo\", 0.0},\n        {\"1 \", 1.0},\n        {\"3.14!\", 3.14},\n    };\n    run_ff_tests(decimal_bad, COUNTOF(decimal_bad), 1);\n\n    ff_testcase inf_valid[] = {\n        {\"inf\", INFINITY},\n        {\"INF\", INFINITY},\n        {\"Inf\", INFINITY},\n        {\"infinity\", INFINITY},\n        {\"INFINITY\", INFINITY},\n        {\"Infinity\", INFINITY},\n        {\"+inf\", INFINITY},\n        {\"-inf\", -INFINITY},\n        {\"+infinity\", INFINITY},\n        {\"-INFINITY\", -INFINITY},\n    };\n    run_ff_tests(inf_valid, COUNTOF(inf_valid), 0);\n\n    ff_testcase inf_invalid[] = {\n        {\"in\", 0},\n        {\"infin\", INFINITY},\n        {\"infini1\", INFINITY},\n        {\"infinitx\", INFINITY},\n        {\"infinityy\", INFINITY},\n        {\"info\", INFINITY},\n        {\"ina\", 0},\n        {\"INFI\", INFINITY},\n        {\"iNf0\", INFINITY},\n    };\n    run_ff_tests(inf_invalid, COUNTOF(inf_invalid), 1);\n\n    ff_testcase nan_valid[] = {\n        {\"nan\", NAN},\n        {\"NAN\", NAN},\n        {\"Nan\", NAN},\n        {\"nan(123)\", NAN},\n        {\"nan(abc)\", NAN},\n        {\"nan(123abc)\", NAN},\n    };\n    run_ff_tests(nan_valid, COUNTOF(nan_valid), 0);\n\n    ff_testcase nan_invalid[] = {\n        {\"na\", 0},\n        {\"nan(\", NAN},         /* unclosed paren */\n        {\"nan(abc\", NAN},      /* missing closing paren */\n        {\"nan(ab!c)\", NAN},    /* invalid char in paren */\n        {\"nan(ab c)\", NAN},    /* space in paren */\n        {\"nanx\", NAN},         /* trailing garbage */\n    };\n   
 run_ff_tests(nan_invalid, COUNTOF(nan_invalid), 1);\n\n    /* Large input that exceeds static_buf (128 bytes), exercising the zmalloc fallback path. */\n    {\n        /* Build a string \"000...00042.0\" with total length > 128. */\n        char big[256];\n        memset(big, '0', sizeof(big));\n        big[sizeof(big) - 4] = '2';\n        big[sizeof(big) - 3] = '.';\n        big[sizeof(big) - 2] = '0';\n        big[sizeof(big) - 1] = '\\0';\n        char *eptr;\n        double d = fast_float_strtod(big, strlen(big), &eptr);\n        test_cond(\"large input (>128 bytes) zmalloc fallback path\",\n                  (size_t)(eptr - big) == strlen(big) && ff_eq(d, 2.0));\n\n        /* Large input that is completely invalid. */\n        memset(big, 'x', sizeof(big) - 1);\n        big[sizeof(big) - 1] = '\\0';\n        d = fast_float_strtod(big, strlen(big), &eptr);\n        test_cond(\"invalid large input (>128 bytes) zmalloc fallback path\",\n                  eptr == big && ff_eq(d, 0.0));\n    }\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/fast_float_strtod.h",
    "content": "\n#ifndef __FAST_FLOAT_STRTOD_H__\n#define __FAST_FLOAT_STRTOD_H__\n\n#include <stddef.h>\n\ndouble fast_float_strtod(const char *nptr, size_t len, char **endptr);\n\n#ifdef REDIS_TEST\nint fastFloatTest(int argc, char **argv, int flags);\n#endif\n\n#endif /* __FAST_FLOAT_STRTOD_H__ */\n"
  },
  {
    "path": "src/fmacros.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef _REDIS_FMACRO_H\n#define _REDIS_FMACRO_H\n\n#define _BSD_SOURCE\n\n#if defined(__linux__)\n#define _GNU_SOURCE\n#define _DEFAULT_SOURCE\n#endif\n\n#if defined(_AIX)\n#define _ALL_SOURCE\n#endif\n\n#if defined(__linux__) || defined(__OpenBSD__)\n#define _XOPEN_SOURCE 700\n/*\n * On NetBSD, _XOPEN_SOURCE undefines _NETBSD_SOURCE and\n * thus hides inet_aton etc.\n */\n#elif !defined(__NetBSD__)\n#define _XOPEN_SOURCE\n#endif\n\n#if defined(__sun)\n#define _POSIX_C_SOURCE 199506L\n#endif\n\n#define _LARGEFILE_SOURCE\n#define _FILE_OFFSET_BITS 64\n\n/* deprecate unsafe functions\n *\n * NOTE: We do not use the poison pragma since it\n * will error on stdlib definitions in files as well*/\n#if (__GNUC__ && __GNUC__ >= 4) && !defined __APPLE__\nint sprintf(char *str, const char *format, ...) __attribute__((deprecated(\"please avoid use of unsafe C functions. prefer use of snprintf instead\")));\nchar *strcpy(char *restrict dest, const char *src) __attribute__((deprecated(\"please avoid use of unsafe C functions. prefer use of redis_strlcpy instead\")));\nchar *strcat(char *restrict dest, const char *restrict src) __attribute__((deprecated(\"please avoid use of unsafe C functions. prefer use of redis_strlcat instead\")));\n#endif\n\n#ifdef __linux__\n/* features.h uses the defines above to set feature specific defines.  */\n#include <features.h>\n#endif\n\n#endif\n"
  },
  {
    "path": "src/fmtargs.h",
    "content": "/*\n * Copyright Redis Contributors.\n * All rights reserved.\n * SPDX-License-Identifier: BSD 3-Clause\n *\n * To make it easier to map each part of the format string with each argument,\n * this file provides a way to write\n *\n *     printf(\"a = %s, b = %s, c = %s\\n\",\n *            arg1, arg2, arg3);\n *\n * as\n *\n *     printf(FMTARGS(\"a = %s, \", arg1,\n *                    \"b = %s, \", arg2,\n *                    \"c = %s\\n\", arg3));\n *\n * FMTARGS is variadic macro which is implemented by passing on its arguments to\n * two other variadic macros of which one extracts the odd (the formats) and the\n * other extracts the even (the arguments). The definitions of these macros\n * include counting the number of macro arguments. Therefore, they don't accept\n * an unlimited number of arguments. Currently it is fixed to a maximum of 120\n * formats and arguments.\n */\n#ifndef FMTARGS_H\n#define FMTARGS_H\n\n/* A macro to count the number of arguments. */\n#define NARG(...) NARG_I(__VA_ARGS__,RSEQ_N())\n#define NARG_I(...) ARG_N(__VA_ARGS__)\n\n/* Define a macro which will call an arbitrary macro appended with a number indicating\n * the number of arguments it has. */\n#define VFUNC_N_(name, n) name##n\n#define VFUNC_N(name, n) VFUNC_N_(name, n)\n#define VFUNC(func, ...) VFUNC_N(func, NARG(__VA_ARGS__)) (__VA_ARGS__)\n\n/* Macros to extract the formats and the arguments from the fmt-arg pairs and\n * then combine them again with all formats first and the arguments last. */\n#define COMPACT_FMT(...) VFUNC(COMPACT_FMT_, __VA_ARGS__)\n#define COMPACT_VALUES(...) VFUNC(COMPACT_VALUES_, __VA_ARGS__)\n#define FMTARGS(...) COMPACT_FMT(__VA_ARGS__), COMPACT_VALUES(__VA_ARGS__)\n\n/* Everything below this line is automatically generated by\n * generate-fmtargs.py. Do not manually edit. 
*/\n\n#define ARG_N(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, _51, _52, _53, _54, _55, _56, _57, _58, _59, _60, _61, _62, _63, _64, _65, _66, _67, _68, _69, _70, _71, _72, _73, _74, _75, _76, _77, _78, _79, _80, _81, _82, _83, _84, _85, _86, _87, _88, _89, _90, _91, _92, _93, _94, _95, _96, _97, _98, _99, _100, _101, _102, _103, _104, _105, _106, _107, _108, _109, _110, _111, _112, _113, _114, _115, _116, _117, _118, _119, _120, _121, _122, _123, _124, _125, _126, _127, _128, _129, _130, _131, _132, _133, _134, _135, _136, _137, _138, _139, _140, _141, _142, _143, _144, _145, _146, _147, _148, _149, _150, _151, _152, _153, _154, _155, _156, _157, _158, _159, _160, N, ...) N\n\n#define RSEQ_N() 160, 159, 158, 157, 156, 155, 154, 153, 152, 151, 150, 149, 148, 147, 146, 145, 144, 143, 142, 141, 140, 139, 138, 137, 136, 135, 134, 133, 132, 131, 130, 129, 128, 127, 126, 125, 124, 123, 122, 121, 120, 119, 118, 117, 116, 115, 114, 113, 112, 111, 110, 109, 108, 107, 106, 105, 104, 103, 102, 101, 100, 99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85, 84, 83, 82, 81, 80, 79, 78, 77, 76, 75, 74, 73, 72, 71, 70, 69, 68, 67, 66, 65, 64, 63, 62, 61, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0\n\n#define COMPACT_FMT_2(fmt, value) fmt\n#define COMPACT_FMT_4(fmt, value, ...) fmt COMPACT_FMT_2(__VA_ARGS__)\n#define COMPACT_FMT_6(fmt, value, ...) fmt COMPACT_FMT_4(__VA_ARGS__)\n#define COMPACT_FMT_8(fmt, value, ...) fmt COMPACT_FMT_6(__VA_ARGS__)\n#define COMPACT_FMT_10(fmt, value, ...) fmt COMPACT_FMT_8(__VA_ARGS__)\n#define COMPACT_FMT_12(fmt, value, ...) 
fmt COMPACT_FMT_10(__VA_ARGS__)\n#define COMPACT_FMT_14(fmt, value, ...) fmt COMPACT_FMT_12(__VA_ARGS__)\n#define COMPACT_FMT_16(fmt, value, ...) fmt COMPACT_FMT_14(__VA_ARGS__)\n#define COMPACT_FMT_18(fmt, value, ...) fmt COMPACT_FMT_16(__VA_ARGS__)\n#define COMPACT_FMT_20(fmt, value, ...) fmt COMPACT_FMT_18(__VA_ARGS__)\n#define COMPACT_FMT_22(fmt, value, ...) fmt COMPACT_FMT_20(__VA_ARGS__)\n#define COMPACT_FMT_24(fmt, value, ...) fmt COMPACT_FMT_22(__VA_ARGS__)\n#define COMPACT_FMT_26(fmt, value, ...) fmt COMPACT_FMT_24(__VA_ARGS__)\n#define COMPACT_FMT_28(fmt, value, ...) fmt COMPACT_FMT_26(__VA_ARGS__)\n#define COMPACT_FMT_30(fmt, value, ...) fmt COMPACT_FMT_28(__VA_ARGS__)\n#define COMPACT_FMT_32(fmt, value, ...) fmt COMPACT_FMT_30(__VA_ARGS__)\n#define COMPACT_FMT_34(fmt, value, ...) fmt COMPACT_FMT_32(__VA_ARGS__)\n#define COMPACT_FMT_36(fmt, value, ...) fmt COMPACT_FMT_34(__VA_ARGS__)\n#define COMPACT_FMT_38(fmt, value, ...) fmt COMPACT_FMT_36(__VA_ARGS__)\n#define COMPACT_FMT_40(fmt, value, ...) fmt COMPACT_FMT_38(__VA_ARGS__)\n#define COMPACT_FMT_42(fmt, value, ...) fmt COMPACT_FMT_40(__VA_ARGS__)\n#define COMPACT_FMT_44(fmt, value, ...) fmt COMPACT_FMT_42(__VA_ARGS__)\n#define COMPACT_FMT_46(fmt, value, ...) fmt COMPACT_FMT_44(__VA_ARGS__)\n#define COMPACT_FMT_48(fmt, value, ...) fmt COMPACT_FMT_46(__VA_ARGS__)\n#define COMPACT_FMT_50(fmt, value, ...) fmt COMPACT_FMT_48(__VA_ARGS__)\n#define COMPACT_FMT_52(fmt, value, ...) fmt COMPACT_FMT_50(__VA_ARGS__)\n#define COMPACT_FMT_54(fmt, value, ...) fmt COMPACT_FMT_52(__VA_ARGS__)\n#define COMPACT_FMT_56(fmt, value, ...) fmt COMPACT_FMT_54(__VA_ARGS__)\n#define COMPACT_FMT_58(fmt, value, ...) fmt COMPACT_FMT_56(__VA_ARGS__)\n#define COMPACT_FMT_60(fmt, value, ...) fmt COMPACT_FMT_58(__VA_ARGS__)\n#define COMPACT_FMT_62(fmt, value, ...) fmt COMPACT_FMT_60(__VA_ARGS__)\n#define COMPACT_FMT_64(fmt, value, ...) fmt COMPACT_FMT_62(__VA_ARGS__)\n#define COMPACT_FMT_66(fmt, value, ...) 
fmt COMPACT_FMT_64(__VA_ARGS__)\n#define COMPACT_FMT_68(fmt, value, ...) fmt COMPACT_FMT_66(__VA_ARGS__)\n#define COMPACT_FMT_70(fmt, value, ...) fmt COMPACT_FMT_68(__VA_ARGS__)\n#define COMPACT_FMT_72(fmt, value, ...) fmt COMPACT_FMT_70(__VA_ARGS__)\n#define COMPACT_FMT_74(fmt, value, ...) fmt COMPACT_FMT_72(__VA_ARGS__)\n#define COMPACT_FMT_76(fmt, value, ...) fmt COMPACT_FMT_74(__VA_ARGS__)\n#define COMPACT_FMT_78(fmt, value, ...) fmt COMPACT_FMT_76(__VA_ARGS__)\n#define COMPACT_FMT_80(fmt, value, ...) fmt COMPACT_FMT_78(__VA_ARGS__)\n#define COMPACT_FMT_82(fmt, value, ...) fmt COMPACT_FMT_80(__VA_ARGS__)\n#define COMPACT_FMT_84(fmt, value, ...) fmt COMPACT_FMT_82(__VA_ARGS__)\n#define COMPACT_FMT_86(fmt, value, ...) fmt COMPACT_FMT_84(__VA_ARGS__)\n#define COMPACT_FMT_88(fmt, value, ...) fmt COMPACT_FMT_86(__VA_ARGS__)\n#define COMPACT_FMT_90(fmt, value, ...) fmt COMPACT_FMT_88(__VA_ARGS__)\n#define COMPACT_FMT_92(fmt, value, ...) fmt COMPACT_FMT_90(__VA_ARGS__)\n#define COMPACT_FMT_94(fmt, value, ...) fmt COMPACT_FMT_92(__VA_ARGS__)\n#define COMPACT_FMT_96(fmt, value, ...) fmt COMPACT_FMT_94(__VA_ARGS__)\n#define COMPACT_FMT_98(fmt, value, ...) fmt COMPACT_FMT_96(__VA_ARGS__)\n#define COMPACT_FMT_100(fmt, value, ...) fmt COMPACT_FMT_98(__VA_ARGS__)\n#define COMPACT_FMT_102(fmt, value, ...) fmt COMPACT_FMT_100(__VA_ARGS__)\n#define COMPACT_FMT_104(fmt, value, ...) fmt COMPACT_FMT_102(__VA_ARGS__)\n#define COMPACT_FMT_106(fmt, value, ...) fmt COMPACT_FMT_104(__VA_ARGS__)\n#define COMPACT_FMT_108(fmt, value, ...) fmt COMPACT_FMT_106(__VA_ARGS__)\n#define COMPACT_FMT_110(fmt, value, ...) fmt COMPACT_FMT_108(__VA_ARGS__)\n#define COMPACT_FMT_112(fmt, value, ...) fmt COMPACT_FMT_110(__VA_ARGS__)\n#define COMPACT_FMT_114(fmt, value, ...) fmt COMPACT_FMT_112(__VA_ARGS__)\n#define COMPACT_FMT_116(fmt, value, ...) fmt COMPACT_FMT_114(__VA_ARGS__)\n#define COMPACT_FMT_118(fmt, value, ...) fmt COMPACT_FMT_116(__VA_ARGS__)\n#define COMPACT_FMT_120(fmt, value, ...) 
fmt COMPACT_FMT_118(__VA_ARGS__)\n#define COMPACT_FMT_122(fmt, value, ...) fmt COMPACT_FMT_120(__VA_ARGS__)\n#define COMPACT_FMT_124(fmt, value, ...) fmt COMPACT_FMT_122(__VA_ARGS__)\n#define COMPACT_FMT_126(fmt, value, ...) fmt COMPACT_FMT_124(__VA_ARGS__)\n#define COMPACT_FMT_128(fmt, value, ...) fmt COMPACT_FMT_126(__VA_ARGS__)\n#define COMPACT_FMT_130(fmt, value, ...) fmt COMPACT_FMT_128(__VA_ARGS__)\n#define COMPACT_FMT_132(fmt, value, ...) fmt COMPACT_FMT_130(__VA_ARGS__)\n#define COMPACT_FMT_134(fmt, value, ...) fmt COMPACT_FMT_132(__VA_ARGS__)\n#define COMPACT_FMT_136(fmt, value, ...) fmt COMPACT_FMT_134(__VA_ARGS__)\n#define COMPACT_FMT_138(fmt, value, ...) fmt COMPACT_FMT_136(__VA_ARGS__)\n#define COMPACT_FMT_140(fmt, value, ...) fmt COMPACT_FMT_138(__VA_ARGS__)\n#define COMPACT_FMT_142(fmt, value, ...) fmt COMPACT_FMT_140(__VA_ARGS__)\n#define COMPACT_FMT_144(fmt, value, ...) fmt COMPACT_FMT_142(__VA_ARGS__)\n#define COMPACT_FMT_146(fmt, value, ...) fmt COMPACT_FMT_144(__VA_ARGS__)\n#define COMPACT_FMT_148(fmt, value, ...) fmt COMPACT_FMT_146(__VA_ARGS__)\n#define COMPACT_FMT_150(fmt, value, ...) fmt COMPACT_FMT_148(__VA_ARGS__)\n#define COMPACT_FMT_152(fmt, value, ...) fmt COMPACT_FMT_150(__VA_ARGS__)\n#define COMPACT_FMT_154(fmt, value, ...) fmt COMPACT_FMT_152(__VA_ARGS__)\n#define COMPACT_FMT_156(fmt, value, ...) fmt COMPACT_FMT_154(__VA_ARGS__)\n#define COMPACT_FMT_158(fmt, value, ...) fmt COMPACT_FMT_156(__VA_ARGS__)\n#define COMPACT_FMT_160(fmt, value, ...) fmt COMPACT_FMT_158(__VA_ARGS__)\n\n#define COMPACT_VALUES_2(fmt, value) value\n#define COMPACT_VALUES_4(fmt, value, ...) value, COMPACT_VALUES_2(__VA_ARGS__)\n#define COMPACT_VALUES_6(fmt, value, ...) value, COMPACT_VALUES_4(__VA_ARGS__)\n#define COMPACT_VALUES_8(fmt, value, ...) value, COMPACT_VALUES_6(__VA_ARGS__)\n#define COMPACT_VALUES_10(fmt, value, ...) value, COMPACT_VALUES_8(__VA_ARGS__)\n#define COMPACT_VALUES_12(fmt, value, ...) 
value, COMPACT_VALUES_10(__VA_ARGS__)\n#define COMPACT_VALUES_14(fmt, value, ...) value, COMPACT_VALUES_12(__VA_ARGS__)\n#define COMPACT_VALUES_16(fmt, value, ...) value, COMPACT_VALUES_14(__VA_ARGS__)\n#define COMPACT_VALUES_18(fmt, value, ...) value, COMPACT_VALUES_16(__VA_ARGS__)\n#define COMPACT_VALUES_20(fmt, value, ...) value, COMPACT_VALUES_18(__VA_ARGS__)\n#define COMPACT_VALUES_22(fmt, value, ...) value, COMPACT_VALUES_20(__VA_ARGS__)\n#define COMPACT_VALUES_24(fmt, value, ...) value, COMPACT_VALUES_22(__VA_ARGS__)\n#define COMPACT_VALUES_26(fmt, value, ...) value, COMPACT_VALUES_24(__VA_ARGS__)\n#define COMPACT_VALUES_28(fmt, value, ...) value, COMPACT_VALUES_26(__VA_ARGS__)\n#define COMPACT_VALUES_30(fmt, value, ...) value, COMPACT_VALUES_28(__VA_ARGS__)\n#define COMPACT_VALUES_32(fmt, value, ...) value, COMPACT_VALUES_30(__VA_ARGS__)\n#define COMPACT_VALUES_34(fmt, value, ...) value, COMPACT_VALUES_32(__VA_ARGS__)\n#define COMPACT_VALUES_36(fmt, value, ...) value, COMPACT_VALUES_34(__VA_ARGS__)\n#define COMPACT_VALUES_38(fmt, value, ...) value, COMPACT_VALUES_36(__VA_ARGS__)\n#define COMPACT_VALUES_40(fmt, value, ...) value, COMPACT_VALUES_38(__VA_ARGS__)\n#define COMPACT_VALUES_42(fmt, value, ...) value, COMPACT_VALUES_40(__VA_ARGS__)\n#define COMPACT_VALUES_44(fmt, value, ...) value, COMPACT_VALUES_42(__VA_ARGS__)\n#define COMPACT_VALUES_46(fmt, value, ...) value, COMPACT_VALUES_44(__VA_ARGS__)\n#define COMPACT_VALUES_48(fmt, value, ...) value, COMPACT_VALUES_46(__VA_ARGS__)\n#define COMPACT_VALUES_50(fmt, value, ...) value, COMPACT_VALUES_48(__VA_ARGS__)\n#define COMPACT_VALUES_52(fmt, value, ...) value, COMPACT_VALUES_50(__VA_ARGS__)\n#define COMPACT_VALUES_54(fmt, value, ...) value, COMPACT_VALUES_52(__VA_ARGS__)\n#define COMPACT_VALUES_56(fmt, value, ...) value, COMPACT_VALUES_54(__VA_ARGS__)\n#define COMPACT_VALUES_58(fmt, value, ...) value, COMPACT_VALUES_56(__VA_ARGS__)\n#define COMPACT_VALUES_60(fmt, value, ...) 
value, COMPACT_VALUES_58(__VA_ARGS__)\n#define COMPACT_VALUES_62(fmt, value, ...) value, COMPACT_VALUES_60(__VA_ARGS__)\n#define COMPACT_VALUES_64(fmt, value, ...) value, COMPACT_VALUES_62(__VA_ARGS__)\n#define COMPACT_VALUES_66(fmt, value, ...) value, COMPACT_VALUES_64(__VA_ARGS__)\n#define COMPACT_VALUES_68(fmt, value, ...) value, COMPACT_VALUES_66(__VA_ARGS__)\n#define COMPACT_VALUES_70(fmt, value, ...) value, COMPACT_VALUES_68(__VA_ARGS__)\n#define COMPACT_VALUES_72(fmt, value, ...) value, COMPACT_VALUES_70(__VA_ARGS__)\n#define COMPACT_VALUES_74(fmt, value, ...) value, COMPACT_VALUES_72(__VA_ARGS__)\n#define COMPACT_VALUES_76(fmt, value, ...) value, COMPACT_VALUES_74(__VA_ARGS__)\n#define COMPACT_VALUES_78(fmt, value, ...) value, COMPACT_VALUES_76(__VA_ARGS__)\n#define COMPACT_VALUES_80(fmt, value, ...) value, COMPACT_VALUES_78(__VA_ARGS__)\n#define COMPACT_VALUES_82(fmt, value, ...) value, COMPACT_VALUES_80(__VA_ARGS__)\n#define COMPACT_VALUES_84(fmt, value, ...) value, COMPACT_VALUES_82(__VA_ARGS__)\n#define COMPACT_VALUES_86(fmt, value, ...) value, COMPACT_VALUES_84(__VA_ARGS__)\n#define COMPACT_VALUES_88(fmt, value, ...) value, COMPACT_VALUES_86(__VA_ARGS__)\n#define COMPACT_VALUES_90(fmt, value, ...) value, COMPACT_VALUES_88(__VA_ARGS__)\n#define COMPACT_VALUES_92(fmt, value, ...) value, COMPACT_VALUES_90(__VA_ARGS__)\n#define COMPACT_VALUES_94(fmt, value, ...) value, COMPACT_VALUES_92(__VA_ARGS__)\n#define COMPACT_VALUES_96(fmt, value, ...) value, COMPACT_VALUES_94(__VA_ARGS__)\n#define COMPACT_VALUES_98(fmt, value, ...) value, COMPACT_VALUES_96(__VA_ARGS__)\n#define COMPACT_VALUES_100(fmt, value, ...) value, COMPACT_VALUES_98(__VA_ARGS__)\n#define COMPACT_VALUES_102(fmt, value, ...) value, COMPACT_VALUES_100(__VA_ARGS__)\n#define COMPACT_VALUES_104(fmt, value, ...) value, COMPACT_VALUES_102(__VA_ARGS__)\n#define COMPACT_VALUES_106(fmt, value, ...) value, COMPACT_VALUES_104(__VA_ARGS__)\n#define COMPACT_VALUES_108(fmt, value, ...) 
value, COMPACT_VALUES_106(__VA_ARGS__)\n#define COMPACT_VALUES_110(fmt, value, ...) value, COMPACT_VALUES_108(__VA_ARGS__)\n#define COMPACT_VALUES_112(fmt, value, ...) value, COMPACT_VALUES_110(__VA_ARGS__)\n#define COMPACT_VALUES_114(fmt, value, ...) value, COMPACT_VALUES_112(__VA_ARGS__)\n#define COMPACT_VALUES_116(fmt, value, ...) value, COMPACT_VALUES_114(__VA_ARGS__)\n#define COMPACT_VALUES_118(fmt, value, ...) value, COMPACT_VALUES_116(__VA_ARGS__)\n#define COMPACT_VALUES_120(fmt, value, ...) value, COMPACT_VALUES_118(__VA_ARGS__)\n#define COMPACT_VALUES_122(fmt, value, ...) value, COMPACT_VALUES_120(__VA_ARGS__)\n#define COMPACT_VALUES_124(fmt, value, ...) value, COMPACT_VALUES_122(__VA_ARGS__)\n#define COMPACT_VALUES_126(fmt, value, ...) value, COMPACT_VALUES_124(__VA_ARGS__)\n#define COMPACT_VALUES_128(fmt, value, ...) value, COMPACT_VALUES_126(__VA_ARGS__)\n#define COMPACT_VALUES_130(fmt, value, ...) value, COMPACT_VALUES_128(__VA_ARGS__)\n#define COMPACT_VALUES_132(fmt, value, ...) value, COMPACT_VALUES_130(__VA_ARGS__)\n#define COMPACT_VALUES_134(fmt, value, ...) value, COMPACT_VALUES_132(__VA_ARGS__)\n#define COMPACT_VALUES_136(fmt, value, ...) value, COMPACT_VALUES_134(__VA_ARGS__)\n#define COMPACT_VALUES_138(fmt, value, ...) value, COMPACT_VALUES_136(__VA_ARGS__)\n#define COMPACT_VALUES_140(fmt, value, ...) value, COMPACT_VALUES_138(__VA_ARGS__)\n#define COMPACT_VALUES_142(fmt, value, ...) value, COMPACT_VALUES_140(__VA_ARGS__)\n#define COMPACT_VALUES_144(fmt, value, ...) value, COMPACT_VALUES_142(__VA_ARGS__)\n#define COMPACT_VALUES_146(fmt, value, ...) value, COMPACT_VALUES_144(__VA_ARGS__)\n#define COMPACT_VALUES_148(fmt, value, ...) value, COMPACT_VALUES_146(__VA_ARGS__)\n#define COMPACT_VALUES_150(fmt, value, ...) value, COMPACT_VALUES_148(__VA_ARGS__)\n#define COMPACT_VALUES_152(fmt, value, ...) value, COMPACT_VALUES_150(__VA_ARGS__)\n#define COMPACT_VALUES_154(fmt, value, ...) 
value, COMPACT_VALUES_152(__VA_ARGS__)\n#define COMPACT_VALUES_156(fmt, value, ...) value, COMPACT_VALUES_154(__VA_ARGS__)\n#define COMPACT_VALUES_158(fmt, value, ...) value, COMPACT_VALUES_156(__VA_ARGS__)\n#define COMPACT_VALUES_160(fmt, value, ...) value, COMPACT_VALUES_158(__VA_ARGS__)\n\n#endif\n"
  },
  {
    "path": "src/function_lua.c",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/*\n * function_lua.c unit provides the Lua engine functionality.\n * Including registering the engine and implementing the engine\n * callbacks:\n * * Create a function from blob (usually text)\n * * Invoke a function\n * * Free function memory\n * * Get memory usage\n *\n * Uses script_lua.c to run the Lua code.\n */\n\n#include \"functions.h\"\n#include \"script_lua.h\"\n#include <lua.h>\n#include <lauxlib.h>\n#include <lualib.h>\n#if defined(USE_JEMALLOC)\n#include <lstate.h>\n#endif\n\n#define LUA_ENGINE_NAME \"LUA\"\n#define REGISTRY_ENGINE_CTX_NAME \"__ENGINE_CTX__\"\n#define REGISTRY_ERROR_HANDLER_NAME \"__ERROR_HANDLER__\"\n#define REGISTRY_LOAD_CTX_NAME \"__LIBRARY_CTX__\"\n#define LIBRARY_API_NAME \"__LIBRARY_API__\"\n#define GLOBALS_API_NAME \"__GLOBALS_API__\"\n\nstatic int gc_count = 0; /* Counter for the number of GC requests, reset after each GC execution */\n\n/* Lua engine ctx */\ntypedef struct luaEngineCtx {\n    lua_State *lua;\n} luaEngineCtx;\n\n/* Lua function ctx */\ntypedef struct luaFunctionCtx {\n    /* Special ID that allows getting the Lua function object from the Lua registry */\n    int lua_function_ref;\n} luaFunctionCtx;\n\ntypedef struct loadCtx {\n    functionLibInfo *li;\n    monotime start_time;\n    size_t timeout;\n} loadCtx;\n\ntypedef struct registerFunctionArgs {\n    sds name;\n    sds desc;\n    luaFunctionCtx *lua_f_ctx;\n    uint64_t f_flags;\n} registerFunctionArgs;\n\n/* Hook for FUNCTION LOAD execution.\n * Used to cancel the execution in case of a timeout (500ms).\n * This execution should be fast and should only register\n * functions so 500ms should be more than enough. 
*/\nstatic void luaEngineLoadHook(lua_State *lua, lua_Debug *ar) {\n    UNUSED(ar);\n    loadCtx *load_ctx = luaGetFromRegistry(lua, REGISTRY_LOAD_CTX_NAME);\n    serverAssert(load_ctx); /* Only supported inside script invocation */\n    uint64_t duration = elapsedMs(load_ctx->start_time);\n    if (load_ctx->timeout > 0 && duration > load_ctx->timeout) {\n        lua_sethook(lua, luaEngineLoadHook, LUA_MASKLINE, 0);\n\n        luaPushError(lua,\"FUNCTION LOAD timeout\");\n        luaError(lua);\n    }\n}\n\n/*\n * Compile a given blob and save it on the registry.\n * Return a function ctx with a Lua ref that allows the function to be\n * retrieved later from the registry.\n *\n * Return C_ERR on compilation error and set the error message in the\n * 'err' variable; return C_OK on success.\n */\nstatic int luaEngineCreate(void *engine_ctx, functionLibInfo *li, sds blob, size_t timeout, sds *err) {\n    int ret = C_ERR;\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n    lua_State *lua = lua_engine_ctx->lua;\n\n    /* set load library globals */\n    lua_getmetatable(lua, LUA_GLOBALSINDEX);\n    lua_enablereadonlytable(lua, -1, 0); /* disable global protection */\n    lua_getfield(lua, LUA_REGISTRYINDEX, LIBRARY_API_NAME);\n    lua_setfield(lua, -2, \"__index\");\n    lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 1); /* enable global protection */\n    lua_pop(lua, 1); /* pop the metatable */\n\n    /* compile the code */\n    if (luaL_loadbuffer(lua, blob, sdslen(blob), \"@user_function\")) {\n        *err = sdscatprintf(sdsempty(), \"Error compiling function: %s\", lua_tostring(lua, -1));\n        lua_pop(lua, 1); /* pops the error */\n        goto done;\n    }\n    serverAssert(lua_isfunction(lua, -1));\n\n    loadCtx load_ctx = {\n        .li = li,\n        .start_time = getMonotonicUs(),\n        .timeout = timeout,\n    };\n    luaSaveOnRegistry(lua, REGISTRY_LOAD_CTX_NAME, &load_ctx);\n\n    lua_sethook(lua,luaEngineLoadHook,LUA_MASKCOUNT,100000);\n    /* Run the compiled code to allow it to register 
functions */\n    if (lua_pcall(lua,0,0,0)) {\n        errorInfo err_info = {0};\n        luaExtractErrorInformation(lua, &err_info);\n        *err = sdscatprintf(sdsempty(), \"Error registering functions: %s\", err_info.msg);\n        lua_pop(lua, 1); /* pops the error */\n        luaErrorInformationDiscard(&err_info);\n        goto done;\n    }\n\n    ret = C_OK;\n\ndone:\n    /* restore original globals */\n    lua_getmetatable(lua, LUA_GLOBALSINDEX);\n    lua_enablereadonlytable(lua, -1, 0); /* disable global protection */\n    lua_getfield(lua, LUA_REGISTRYINDEX, GLOBALS_API_NAME);\n    lua_setfield(lua, -2, \"__index\");\n    lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 1); /* enable global protection */\n    lua_pop(lua, 1); /* pop the metatable */\n\n    lua_sethook(lua,NULL,0,0); /* Disable hook */\n    luaSaveOnRegistry(lua, REGISTRY_LOAD_CTX_NAME, NULL);\n    luaGC(lua, &gc_count);\n    return ret;\n}\n\n/*\n * Invoke the given function with the given keys and args\n */\nstatic void luaEngineCall(scriptRunCtx *run_ctx,\n                          void *engine_ctx,\n                          void *compiled_function,\n                          robj **keys,\n                          size_t nkeys,\n                          robj **args,\n                          size_t nargs)\n{\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n    lua_State *lua = lua_engine_ctx->lua;\n    luaFunctionCtx *f_ctx = compiled_function;\n\n    /* Push error handler */\n    lua_pushstring(lua, REGISTRY_ERROR_HANDLER_NAME);\n    lua_gettable(lua, LUA_REGISTRYINDEX);\n\n    lua_rawgeti(lua, LUA_REGISTRYINDEX, f_ctx->lua_function_ref);\n\n    serverAssert(lua_isfunction(lua, -1));\n\n    luaCallFunction(run_ctx, lua, keys, nkeys, args, nargs, 0);\n    lua_pop(lua, 1); /* Pop error handler */\n    luaGC(lua, &gc_count);\n}\n\nstatic size_t luaEngineGetUsedMemoy(void *engine_ctx) {\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n    return 
luaMemory(lua_engine_ctx->lua);\n}\n\nstatic size_t luaEngineFunctionMemoryOverhead(void *compiled_function) {\n    return zmalloc_size(compiled_function);\n}\n\nstatic size_t luaEngineMemoryOverhead(void *engine_ctx) {\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n    return zmalloc_size(lua_engine_ctx);\n}\n\nstatic void luaEngineFreeFunction(void *engine_ctx, void *compiled_function) {\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n    lua_State *lua = lua_engine_ctx->lua;\n    luaFunctionCtx *f_ctx = compiled_function;\n    lua_unref(lua, f_ctx->lua_function_ref);\n    zfree(f_ctx);\n}\n\nstatic void luaEngineFreeCtx(void *engine_ctx) {\n    luaEngineCtx *lua_engine_ctx = engine_ctx;\n#if defined(USE_JEMALLOC)\n    /* When lua is closed, destroy the previously used private tcache. */\n    void *ud = (global_State*)G(lua_engine_ctx->lua)->ud;\n    unsigned int lua_tcache = (unsigned int)(uintptr_t)ud;\n#endif\n\n    lua_gc(lua_engine_ctx->lua, LUA_GCCOLLECT, 0);\n    lua_close(lua_engine_ctx->lua);\n    zfree(lua_engine_ctx);\n\n#if defined(USE_JEMALLOC)\n    je_mallctl(\"tcache.destroy\", NULL, NULL, (void *)&lua_tcache, sizeof(unsigned int));\n#endif\n}\n\nstatic void luaRegisterFunctionArgsInitialize(registerFunctionArgs *register_f_args,\n    sds name,\n    sds desc,\n    luaFunctionCtx *lua_f_ctx,\n    uint64_t flags)\n{\n    *register_f_args = (registerFunctionArgs){\n        .name = name,\n        .desc = desc,\n        .lua_f_ctx = lua_f_ctx,\n        .f_flags = flags,\n    };\n}\n\nstatic void luaRegisterFunctionArgsDispose(lua_State *lua, registerFunctionArgs *register_f_args) {\n    sdsfree(register_f_args->name);\n    if (register_f_args->desc) sdsfree(register_f_args->desc);\n    lua_unref(lua, register_f_args->lua_f_ctx->lua_function_ref);\n    zfree(register_f_args->lua_f_ctx);\n}\n\n/* Read function flags located on the top of the Lua stack.\n * On success, return C_OK and set the flags in the 'flags' out parameter.\n * Return C_ERR if it encounters 
an unknown flag. */\nstatic int luaRegisterFunctionReadFlags(lua_State *lua, uint64_t *flags) {\n    int j = 1;\n    int ret = C_ERR;\n    int f_flags = 0;\n    while(1) {\n        lua_pushnumber(lua,j++);\n        lua_gettable(lua,-2);\n        int t = lua_type(lua,-1);\n        if (t == LUA_TNIL) {\n            lua_pop(lua,1);\n            break;\n        }\n        if (!lua_isstring(lua, -1)) {\n            lua_pop(lua,1);\n            goto done;\n        }\n\n        const char *flag_str = lua_tostring(lua, -1);\n        int found = 0;\n        for (scriptFlag *flag = scripts_flags_def; flag->str ; ++flag) {\n            if (!strcasecmp(flag->str, flag_str)) {\n                f_flags |= flag->flag;\n                found = 1;\n                break;\n            }\n        }\n        /* pops the value to continue the iteration */\n        lua_pop(lua,1);\n        if (!found) {\n            /* flag not found */\n            goto done;\n        }\n    }\n\n    *flags = f_flags;\n    ret = C_OK;\n\ndone:\n    return ret;\n}\n\nstatic int luaRegisterFunctionReadNamedArgs(lua_State *lua, registerFunctionArgs *register_f_args) {\n    char *err = NULL;\n    sds name = NULL;\n    sds desc = NULL;\n    luaFunctionCtx *lua_f_ctx = NULL;\n    uint64_t flags = 0;\n    if (!lua_istable(lua, 1)) {\n        err = \"calling redis.register_function with a single argument is only applicable to Lua table (representing named arguments).\";\n        goto error;\n    }\n\n    /* Iterating on all the named arguments */\n    lua_pushnil(lua);\n    while (lua_next(lua, -2)) {\n        /* Stack now: table, key, value */\n        if (!lua_isstring(lua, -2)) {\n            err = \"named argument key given to redis.register_function is not a string\";\n            goto error;\n        }\n        const char *key = lua_tostring(lua, -2);\n        if (!strcasecmp(key, \"function_name\")) {\n            if (!(name = luaGetStringSds(lua, -1))) {\n                err = \"function_name argument 
given to redis.register_function must be a string\";\n                goto error;\n            }\n        } else if (!strcasecmp(key, \"description\")) {\n            if (!(desc = luaGetStringSds(lua, -1))) {\n                err = \"description argument given to redis.register_function must be a string\";\n                goto error;\n            }\n        } else if (!strcasecmp(key, \"callback\")) {\n            if (!lua_isfunction(lua, -1)) {\n                err = \"callback argument given to redis.register_function must be a function\";\n                goto error;\n            }\n            int lua_function_ref = luaL_ref(lua, LUA_REGISTRYINDEX);\n\n            lua_f_ctx = zmalloc(sizeof(*lua_f_ctx));\n            lua_f_ctx->lua_function_ref = lua_function_ref;\n            continue; /* value was already popped, so no need to pop it out. */\n        } else if (!strcasecmp(key, \"flags\")) {\n            if (!lua_istable(lua, -1)) {\n                err = \"flags argument to redis.register_function must be a table representing function flags\";\n                goto error;\n            }\n            if (luaRegisterFunctionReadFlags(lua, &flags) != C_OK) {\n                err = \"unknown flag given\";\n                goto error;\n            }\n        } else {\n            /* unknown argument was given, raise an error */\n            err = \"unknown argument given to redis.register_function\";\n            goto error;\n        }\n        lua_pop(lua, 1); /* pop the value to continue the iteration */\n    }\n\n    if (!name) {\n        err = \"redis.register_function must get a function name argument\";\n        goto error;\n    }\n\n    if (!lua_f_ctx) {\n        err = \"redis.register_function must get a callback argument\";\n        goto error;\n    }\n\n    luaRegisterFunctionArgsInitialize(register_f_args, name, desc, lua_f_ctx, flags);\n\n    return C_OK;\n\nerror:\n    if (name) sdsfree(name);\n    if (desc) sdsfree(desc);\n    if (lua_f_ctx) {\n    
    lua_unref(lua, lua_f_ctx->lua_function_ref);\n        zfree(lua_f_ctx);\n    }\n    luaPushError(lua, err);\n    return C_ERR;\n}\n\nstatic int luaRegisterFunctionReadPositionalArgs(lua_State *lua, registerFunctionArgs *register_f_args) {\n    char *err = NULL;\n    sds name = NULL;\n    sds desc = NULL;\n    luaFunctionCtx *lua_f_ctx = NULL;\n    if (!(name = luaGetStringSds(lua, 1))) {\n        err = \"first argument to redis.register_function must be a string\";\n        goto error;\n    }\n\n    if (!lua_isfunction(lua, 2)) {\n        err = \"second argument to redis.register_function must be a function\";\n        goto error;\n    }\n\n    int lua_function_ref = luaL_ref(lua, LUA_REGISTRYINDEX);\n\n    lua_f_ctx = zmalloc(sizeof(*lua_f_ctx));\n    lua_f_ctx->lua_function_ref = lua_function_ref;\n\n    luaRegisterFunctionArgsInitialize(register_f_args, name, NULL, lua_f_ctx, 0);\n\n    return C_OK;\n\nerror:\n    if (name) sdsfree(name);\n    if (desc) sdsfree(desc);\n    luaPushError(lua, err);\n    return C_ERR;\n}\n\nstatic int luaRegisterFunctionReadArgs(lua_State *lua, registerFunctionArgs *register_f_args) {\n    int argc = lua_gettop(lua);\n    if (argc < 1 || argc > 2) {\n        luaPushError(lua, \"wrong number of arguments to redis.register_function\");\n        return C_ERR;\n    }\n\n    if (argc == 1) {\n        return luaRegisterFunctionReadNamedArgs(lua, register_f_args);\n    } else {\n        return luaRegisterFunctionReadPositionalArgs(lua, register_f_args);\n    }\n}\n\nstatic int luaRegisterFunction(lua_State *lua) {\n    registerFunctionArgs register_f_args = {0};\n\n    loadCtx *load_ctx = luaGetFromRegistry(lua, REGISTRY_LOAD_CTX_NAME);\n    if (!load_ctx) {\n        luaPushError(lua, \"redis.register_function can only be called on FUNCTION LOAD command\");\n        return luaError(lua);\n    }\n\n    if (luaRegisterFunctionReadArgs(lua, &register_f_args) != C_OK) {\n        return luaError(lua);\n    }\n\n    sds err = NULL;\n    if 
(functionLibCreateFunction(register_f_args.name, register_f_args.lua_f_ctx, load_ctx->li, register_f_args.desc, register_f_args.f_flags, &err) != C_OK) {\n        luaRegisterFunctionArgsDispose(lua, &register_f_args);\n        luaPushError(lua, err);\n        sdsfree(err);\n        return luaError(lua);\n    }\n\n    return 0;\n}\n\n/* Initialize Lua engine, should be called once on start. */\nint luaEngineInitEngine(void) {\n    luaEngineCtx *lua_engine_ctx = zmalloc(sizeof(*lua_engine_ctx));\n    lua_engine_ctx->lua = createLuaState();\n\n    luaRegisterRedisAPI(lua_engine_ctx->lua);\n\n    /* Register the library commands table and fields and store it to registry */\n    lua_newtable(lua_engine_ctx->lua); /* load library globals */\n    lua_newtable(lua_engine_ctx->lua); /* load library `redis` table */\n\n    lua_pushstring(lua_engine_ctx->lua, \"register_function\");\n    lua_pushcfunction(lua_engine_ctx->lua, luaRegisterFunction);\n    lua_settable(lua_engine_ctx->lua, -3);\n\n    luaRegisterLogFunction(lua_engine_ctx->lua);\n    luaRegisterVersion(lua_engine_ctx->lua);\n\n    luaSetErrorMetatable(lua_engine_ctx->lua);\n    lua_setfield(lua_engine_ctx->lua, -2, REDIS_API_NAME);\n\n    luaSetErrorMetatable(lua_engine_ctx->lua);\n    luaSetTableProtectionRecursively(lua_engine_ctx->lua); /* protect load library globals */\n    lua_setfield(lua_engine_ctx->lua, LUA_REGISTRYINDEX, LIBRARY_API_NAME);\n\n    /* Save error handler to registry */\n    lua_pushstring(lua_engine_ctx->lua, REGISTRY_ERROR_HANDLER_NAME);\n    char *errh_func =       \"local dbg = debug\\n\"\n                            \"debug = nil\\n\"\n                            \"local error_handler = function (err)\\n\"\n                            \"  local i = dbg.getinfo(2,'nSl')\\n\"\n                            \"  if i and i.what == 'C' then\\n\"\n                            \"    i = dbg.getinfo(3,'nSl')\\n\"\n                            \"  end\\n\"\n                            \"  if 
type(err) ~= 'table' then\\n\"\n                            \"    err = {err='ERR ' .. tostring(err)}\"\n                            \"  end\"\n                            \"  if i then\\n\"\n                            \"    err['source'] = i.source\\n\"\n                            \"    err['line'] = i.currentline\\n\"\n                            \"  end\"\n                            \"  return err\\n\"\n                            \"end\\n\"\n                            \"return error_handler\";\n    luaL_loadbuffer(lua_engine_ctx->lua, errh_func, strlen(errh_func), \"@err_handler_def\");\n    lua_pcall(lua_engine_ctx->lua,0,1,0);\n    lua_settable(lua_engine_ctx->lua, LUA_REGISTRYINDEX);\n\n    lua_pushvalue(lua_engine_ctx->lua, LUA_GLOBALSINDEX);\n    luaSetErrorMetatable(lua_engine_ctx->lua);\n    luaSetTableProtectionRecursively(lua_engine_ctx->lua); /* protect globals */\n    lua_pop(lua_engine_ctx->lua, 1);\n\n    /* Save default globals to registry */\n    lua_pushvalue(lua_engine_ctx->lua, LUA_GLOBALSINDEX);\n    lua_setfield(lua_engine_ctx->lua, LUA_REGISTRYINDEX, GLOBALS_API_NAME);\n\n    /* save the engine_ctx on the registry so we can get it from the Lua interpreter */\n    luaSaveOnRegistry(lua_engine_ctx->lua, REGISTRY_ENGINE_CTX_NAME, lua_engine_ctx);\n\n    /* Create new empty table to be the new globals, we will be able to control the real globals\n     * using metatable */\n    lua_newtable(lua_engine_ctx->lua); /* new globals */\n    lua_newtable(lua_engine_ctx->lua); /* new globals metatable */\n    lua_pushvalue(lua_engine_ctx->lua, LUA_GLOBALSINDEX);\n    lua_setfield(lua_engine_ctx->lua, -2, \"__index\");\n    lua_enablereadonlytable(lua_engine_ctx->lua, -1, 1); /* protect the metatable */\n    lua_setmetatable(lua_engine_ctx->lua, -2);\n    lua_enablereadonlytable(lua_engine_ctx->lua, -1, 1); /* protect the new global table */\n    lua_replace(lua_engine_ctx->lua, LUA_GLOBALSINDEX); /* set new global table as the new globals */\n\n    
/* Set metatables of basic types (string, number, nil etc.) readonly. */\n    luaSetTableProtectionForBasicTypes(lua_engine_ctx->lua);\n\n    engine *lua_engine = zmalloc(sizeof(*lua_engine));\n    *lua_engine = (engine) {\n        .engine_ctx = lua_engine_ctx,\n        .create = luaEngineCreate,\n        .call = luaEngineCall,\n        .get_used_memory = luaEngineGetUsedMemoy,\n        .get_function_memory_overhead = luaEngineFunctionMemoryOverhead,\n        .get_engine_memory_overhead = luaEngineMemoryOverhead,\n        .free_function = luaEngineFreeFunction,\n        .free_ctx = luaEngineFreeCtx,\n    };\n    return functionsRegisterEngine(LUA_ENGINE_NAME, lua_engine);\n}\n"
  },
  {
    "path": "src/functions.c",
    "content": "/*\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"functions.h\"\n#include \"sds.h\"\n#include \"dict.h\"\n#include \"adlist.h\"\n#include \"atomicvar.h\"\n\n#define LOAD_TIMEOUT_MS 500\n\ntypedef enum {\n    restorePolicy_Flush, restorePolicy_Append, restorePolicy_Replace\n} restorePolicy;\n\nstatic size_t engine_cache_memory = 0;\n\n/* Forward declaration */\nstatic void engineFunctionDispose(dict *d, void *obj);\nstatic void engineStatsDispose(dict *d, void *obj);\nstatic void engineLibraryDispose(dict *d, void *obj);\nstatic void engineDispose(dict *d, void *obj);\nstatic int functionsVerifyName(sds name);\n\ntypedef struct functionsLibEngineStats {\n    size_t n_lib;\n    size_t n_functions;\n} functionsLibEngineStats;\n\nstruct functionsLibCtx {\n    dict *libraries;     /* Library name -> Library object */\n    dict *functions;     /* Function name -> Function object that can be used to run the function */\n    size_t cache_memory; /* Overhead memory (structs, dictionaries, ..) 
used by all the functions */\n    dict *engines_stats; /* Per engine statistics */\n};\n\ntypedef struct functionsLibMataData {\n    sds engine;\n    sds name;\n    sds code;\n} functionsLibMataData;\n\ndictType engineDictType = {\n        dictSdsCaseHash,       /* hash function */\n        dictSdsDup,            /* key dup */\n        NULL,                  /* val dup */\n        dictSdsKeyCaseCompare, /* key compare */\n        dictSdsDestructor,     /* key destructor */\n        engineDispose,         /* val destructor */\n        NULL                   /* allow to expand */\n};\n\ndictType functionDictType = {\n        dictSdsCaseHash,      /* hash function */\n        dictSdsDup,           /* key dup */\n        NULL,                 /* val dup */\n        dictSdsKeyCaseCompare,/* key compare */\n        dictSdsDestructor,    /* key destructor */\n        NULL,                 /* val destructor */\n        NULL                  /* allow to expand */\n};\n\ndictType engineStatsDictType = {\n        dictSdsCaseHash,      /* hash function */\n        dictSdsDup,           /* key dup */\n        NULL,                 /* val dup */\n        dictSdsKeyCaseCompare,/* key compare */\n        dictSdsDestructor,    /* key destructor */\n        engineStatsDispose,   /* val destructor */\n        NULL                  /* allow to expand */\n};\n\ndictType libraryFunctionDictType = {\n        dictSdsHash,          /* hash function */\n        dictSdsDup,           /* key dup */\n        NULL,                 /* val dup */\n        dictSdsKeyCompare,    /* key compare */\n        dictSdsDestructor,    /* key destructor */\n        engineFunctionDispose,/* val destructor */\n        NULL                  /* allow to expand */\n};\n\ndictType librariesDictType = {\n        dictSdsHash,          /* hash function */\n        dictSdsDup,           /* key dup */\n        NULL,                 /* val dup */\n        dictSdsKeyCompare,    /* key compare */\n        
dictSdsDestructor,    /* key destructor */\n        engineLibraryDispose, /* val destructor */\n        NULL                  /* allow to expand */\n};\n\n/* Dictionary of engines */\nstatic dict *engines = NULL;\n\n/* Libraries Ctx. */\nstatic functionsLibCtx *curr_functions_lib_ctx = NULL;\n\nstatic size_t functionMallocSize(functionInfo *fi) {\n    return zmalloc_size(fi) + sdsZmallocSize(fi->name)\n            + (fi->desc ? sdsZmallocSize(fi->desc) : 0)\n            + fi->li->ei->engine->get_function_memory_overhead(fi->function);\n}\n\nstatic size_t libraryMallocSize(functionLibInfo *li) {\n    return zmalloc_size(li) + sdsZmallocSize(li->name)\n            + sdsZmallocSize(li->code);\n}\n\nstatic void engineStatsDispose(dict *d, void *obj) {\n    UNUSED(d);\n    functionsLibEngineStats *stats = obj;\n    zfree(stats);\n}\n\n/* Dispose function memory */\nstatic void engineFunctionDispose(dict *d, void *obj) {\n    UNUSED(d);\n    if (!obj) {\n        return;\n    }\n    functionInfo *fi = obj;\n    sdsfree(fi->name);\n    if (fi->desc) {\n        sdsfree(fi->desc);\n    }\n    engine *engine = fi->li->ei->engine;\n    engine->free_function(engine->engine_ctx, fi->function);\n    zfree(fi);\n}\n\nstatic void engineLibraryFree(functionLibInfo* li) {\n    if (!li) {\n        return;\n    }\n    dictRelease(li->functions);\n    sdsfree(li->name);\n    sdsfree(li->code);\n    zfree(li);\n}\n\nstatic void engineLibraryFreeGeneric(void *li) {\n    engineLibraryFree((functionLibInfo *)li);\n}\n\nstatic void engineLibraryDispose(dict *d, void *obj) {\n    UNUSED(d);\n    engineLibraryFree(obj);\n}\n\nstatic void engineDispose(dict *d, void *obj) {\n    UNUSED(d);\n    engineInfo *ei = obj;\n    freeClient(ei->c);\n    sdsfree(ei->name);\n    ei->engine->free_ctx(ei->engine->engine_ctx);\n    zfree(ei->engine);\n    zfree(ei);\n}\n\n/* Clear all the functions from the given library ctx */\nvoid functionsLibCtxClear(functionsLibCtx *lib_ctx) {\n    
dictEmpty(lib_ctx->functions, NULL);\n    dictEmpty(lib_ctx->libraries, NULL);\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, lib_ctx->engines_stats);\n    while ((entry = dictNext(&iter))) {\n        functionsLibEngineStats *stats = dictGetVal(entry);\n        stats->n_functions = 0;\n        stats->n_lib = 0;\n    }\n    dictResetIterator(&iter);\n    lib_ctx->cache_memory = 0;\n}\n\nvoid functionsLibCtxClearCurrent(int async) {\n    if (async) {\n        functionsLibCtx *old_l_ctx = curr_functions_lib_ctx;\n        dict *old_engines = engines;\n        freeFunctionsAsync(old_l_ctx, old_engines);\n    } else {\n        functionsLibCtxFree(curr_functions_lib_ctx);\n        dictRelease(engines);\n    }\n    functionsInit();\n}\n\n/* Free the given functions ctx */\nvoid functionsLibCtxFree(functionsLibCtx *functions_lib_ctx) {\n    functionsLibCtxClear(functions_lib_ctx);\n    dictRelease(functions_lib_ctx->functions);\n    dictRelease(functions_lib_ctx->libraries);\n    dictRelease(functions_lib_ctx->engines_stats);\n    zfree(functions_lib_ctx);\n}\n\n/* Swap the current functions ctx with the given one.\n * Free the old functions ctx. 
*/\nvoid functionsLibCtxSwapWithCurrent(functionsLibCtx *new_lib_ctx) {\n    functionsLibCtxFree(curr_functions_lib_ctx);\n    curr_functions_lib_ctx = new_lib_ctx;\n}\n\n/* return the current functions ctx */\nfunctionsLibCtx* functionsLibCtxGetCurrent(void) {\n    return curr_functions_lib_ctx;\n}\n\n/* Create a new functions ctx */\nfunctionsLibCtx* functionsLibCtxCreate(void) {\n    functionsLibCtx *ret = zmalloc(sizeof(functionsLibCtx));\n    ret->libraries = dictCreate(&librariesDictType);\n    ret->functions = dictCreate(&functionDictType);\n    ret->engines_stats = dictCreate(&engineStatsDictType);\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, engines);\n    while ((entry = dictNext(&iter))) {\n        engineInfo *ei = dictGetVal(entry);\n        functionsLibEngineStats *stats = zcalloc(sizeof(*stats));\n        dictAdd(ret->engines_stats, ei->name, stats);\n    }\n    dictResetIterator(&iter);\n    ret->cache_memory = 0;\n    return ret;\n}\n\n/*\n * Create a function inside the given library.\n * On success, return C_OK.\n * On error, return C_ERR and set the 'err' output parameter with a relevant error message.\n *\n * Note: the code assumes 'name' is NULL terminated but does not require it to be binary safe.\n *       The function verifies that the given name follows the naming format\n *       and returns an error if it does not.\n */\nint functionLibCreateFunction(sds name, void *function, functionLibInfo *li, sds desc, uint64_t f_flags, sds *err) {\n    if (functionsVerifyName(name) != C_OK) {\n        *err = sdsnew(\"Library names can only contain letters, numbers, or underscores(_) and must be at least one character long\");\n        return C_ERR;\n    }\n\n    if (dictFetchValue(li->functions, name)) {\n        *err = sdsnew(\"Function already exists in the library\");\n        return C_ERR;\n    }\n\n    functionInfo *fi = zmalloc(sizeof(*fi));\n    *fi = (functionInfo) {\n        .name = name,\n        .function = 
function,\n        .li = li,\n        .desc = desc,\n        .f_flags = f_flags,\n    };\n\n    int res = dictAdd(li->functions, fi->name, fi);\n    serverAssert(res == DICT_OK);\n\n    return C_OK;\n}\n\nstatic functionLibInfo* engineLibraryCreate(sds name, engineInfo *ei, sds code) {\n    functionLibInfo *li = zmalloc(sizeof(*li));\n    *li = (functionLibInfo) {\n        .name = sdsdup(name),\n        .functions = dictCreate(&libraryFunctionDictType),\n        .ei = ei,\n        .code = sdsdup(code),\n    };\n    return li;\n}\n\nstatic void libraryUnlink(functionsLibCtx *lib_ctx, functionLibInfo* li) {\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, li->functions);\n    while ((entry = dictNext(&iter))) {\n        functionInfo *fi = dictGetVal(entry);\n        int ret = dictDelete(lib_ctx->functions, fi->name);\n        serverAssert(ret == DICT_OK);\n        lib_ctx->cache_memory -= functionMallocSize(fi);\n    }\n    dictResetIterator(&iter);\n    entry = dictUnlink(lib_ctx->libraries, li->name);\n    dictSetVal(lib_ctx->libraries, entry, NULL);\n    dictFreeUnlinkedEntry(lib_ctx->libraries, entry);\n    lib_ctx->cache_memory -= libraryMallocSize(li);\n\n    /* update stats */\n    functionsLibEngineStats *stats = dictFetchValue(lib_ctx->engines_stats, li->ei->name);\n    serverAssert(stats);\n    stats->n_lib--;\n    stats->n_functions -= dictSize(li->functions);\n}\n\nstatic void libraryLink(functionsLibCtx *lib_ctx, functionLibInfo* li) {\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, li->functions);\n    while ((entry = dictNext(&iter))) {\n        functionInfo *fi = dictGetVal(entry);\n        dictAdd(lib_ctx->functions, fi->name, fi);\n        lib_ctx->cache_memory += functionMallocSize(fi);\n    }\n    dictResetIterator(&iter);\n\n    dictAdd(lib_ctx->libraries, li->name, li);\n    lib_ctx->cache_memory += libraryMallocSize(li);\n\n    /* update stats */\n    functionsLibEngineStats 
*stats = dictFetchValue(lib_ctx->engines_stats, li->ei->name);\n    serverAssert(stats);\n    stats->n_lib++;\n    stats->n_functions += dictSize(li->functions);\n}\n\n/* Take all libraries from lib_ctx_src and add them to lib_ctx_dst.\n * On collision, if the 'replace' argument is true, replace the existing library with the new one.\n * Otherwise abort and leave 'lib_ctx_dst' and 'lib_ctx_src' untouched.\n * Return C_OK on success and C_ERR if aborted. If C_ERR is returned, set a relevant\n * error message on the 'err' out parameter.\n */\nstatic int libraryJoin(functionsLibCtx *functions_lib_ctx_dst, functionsLibCtx *functions_lib_ctx_src, int replace, sds *err) {\n    int ret = C_ERR;\n    dictIterator iter;\n    /* Stores the libraries we need to replace in case a revert is required.\n     * Only initialized when needed */\n    list *old_libraries_list = NULL;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, functions_lib_ctx_src->libraries);\n    while ((entry = dictNext(&iter))) {\n        functionLibInfo *li = dictGetVal(entry);\n        functionLibInfo *old_li = dictFetchValue(functions_lib_ctx_dst->libraries, li->name);\n        if (old_li) {\n            if (!replace) {\n                /* library already exists, fail the restore. 
*/\n                *err = sdscatfmt(sdsempty(), \"Library %s already exists\", li->name);\n                dictResetIterator(&iter);\n                goto done;\n            } else {\n                if (!old_libraries_list) {\n                    old_libraries_list = listCreate();\n                    listSetFreeMethod(old_libraries_list, engineLibraryFreeGeneric);\n                }\n                libraryUnlink(functions_lib_ctx_dst, old_li);\n                listAddNodeTail(old_libraries_list, old_li);\n            }\n        }\n    }\n    dictResetIterator(&iter);\n\n    /* Make sure there are no function collisions */\n    dictInitIterator(&iter, functions_lib_ctx_src->functions);\n    while ((entry = dictNext(&iter))) {\n        functionInfo *fi = dictGetVal(entry);\n        if (dictFetchValue(functions_lib_ctx_dst->functions, fi->name)) {\n            *err = sdscatfmt(sdsempty(), \"Function %s already exists\", fi->name);\n            dictResetIterator(&iter);\n            goto done;\n        }\n    }\n    dictResetIterator(&iter);\n\n    /* No collision, it is safe to link all the new libraries. 
*/\n    dictInitIterator(&iter, functions_lib_ctx_src->libraries);\n    while ((entry = dictNext(&iter))) {\n        functionLibInfo *li = dictGetVal(entry);\n        libraryLink(functions_lib_ctx_dst, li);\n        dictSetVal(functions_lib_ctx_src->libraries, entry, NULL);\n    }\n    dictResetIterator(&iter);\n\n    functionsLibCtxClear(functions_lib_ctx_src);\n    if (old_libraries_list) {\n        listRelease(old_libraries_list);\n        old_libraries_list = NULL;\n    }\n    ret = C_OK;\n\ndone:\n    if (old_libraries_list) {\n        /* Link back all libraries on tmp_l_ctx */\n        while (listLength(old_libraries_list) > 0) {\n            listNode *head = listFirst(old_libraries_list);\n            functionLibInfo *li = listNodeValue(head);\n            listNodeValue(head) = NULL;\n            libraryLink(functions_lib_ctx_dst, li);\n            listDelNode(old_libraries_list, head);\n        }\n        listRelease(old_libraries_list);\n    }\n    return ret;\n}\n\n/* Register an engine, should be called once by the engine on startup and give the following:\n *\n * - engine_name - name of the engine to register\n * - engine_ctx - the engine ctx that should be used by Redis to interact with the engine */\nint functionsRegisterEngine(const char *engine_name, engine *engine) {\n    sds engine_name_sds = sdsnew(engine_name);\n    if (dictFetchValue(engines, engine_name_sds)) {\n        serverLog(LL_WARNING, \"Same engine was registered twice\");\n        sdsfree(engine_name_sds);\n        return C_ERR;\n    }\n\n    client *c = createClient(NULL);\n    c->flags |= (CLIENT_DENY_BLOCKING | CLIENT_SCRIPT);\n    engineInfo *ei = zmalloc(sizeof(*ei));\n    *ei = (engineInfo ) { .name = engine_name_sds, .engine = engine, .c = c,};\n\n    dictAdd(engines, engine_name_sds, ei);\n\n    engine_cache_memory += zmalloc_size(ei) + sdsZmallocSize(ei->name) +\n            zmalloc_size(engine) +\n            engine->get_engine_memory_overhead(engine->engine_ctx);\n\n    
return C_OK;\n}\n\n/*\n * FUNCTION STATS\n */\nvoid functionStatsCommand(client *c) {\n    if (scriptIsRunning() && scriptIsEval()) {\n        addReplyErrorObject(c, shared.slowevalerr);\n        return;\n    }\n\n    addReplyMapLen(c, 2);\n\n    addReplyBulkCString(c, \"running_script\");\n    if (!scriptIsRunning()) {\n        addReplyNull(c);\n    } else {\n        addReplyMapLen(c, 3);\n        addReplyBulkCString(c, \"name\");\n        addReplyBulkCString(c, scriptCurrFunction());\n        addReplyBulkCString(c, \"command\");\n        client *script_client = scriptGetCaller();\n        addReplyArrayLen(c, script_client->argc);\n        for (int i = 0 ; i < script_client->argc ; ++i) {\n            addReplyBulkCBuffer(c, script_client->argv[i]->ptr, sdslen(script_client->argv[i]->ptr));\n        }\n        addReplyBulkCString(c, \"duration_ms\");\n        addReplyLongLong(c, scriptRunDuration());\n    }\n\n    addReplyBulkCString(c, \"engines\");\n    addReplyMapLen(c, dictSize(engines));\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, engines);\n    while ((entry = dictNext(&iter))) {\n        engineInfo *ei = dictGetVal(entry);\n        addReplyBulkCString(c, ei->name);\n        addReplyMapLen(c, 2);\n        functionsLibEngineStats *e_stats = dictFetchValue(curr_functions_lib_ctx->engines_stats, ei->name);\n        addReplyBulkCString(c, \"libraries_count\");\n        addReplyLongLong(c, e_stats->n_lib);\n        addReplyBulkCString(c, \"functions_count\");\n        addReplyLongLong(c, e_stats->n_functions);\n    }\n    dictResetIterator(&iter);\n}\n\nstatic void functionListReplyFlags(client *c, functionInfo *fi) {\n    /* First count the number of flags we have */\n    int flagcount = 0;\n    for (scriptFlag *flag = scripts_flags_def; flag->str ; ++flag) {\n        if (fi->f_flags & flag->flag) {\n            ++flagcount;\n        }\n    }\n\n    addReplySetLen(c, flagcount);\n\n    for (scriptFlag *flag = 
scripts_flags_def; flag->str ; ++flag) {\n        if (fi->f_flags & flag->flag) {\n            addReplyStatus(c, flag->str);\n        }\n    }\n}\n\n/*\n * FUNCTION LIST [LIBRARYNAME PATTERN] [WITHCODE]\n *\n * Return general information about all the libraries:\n * * Library name\n * * The engine used to run the Library\n * * Functions list\n * * Library code (if WITHCODE is given)\n *\n * It is also possible to give a library name pattern using the\n * LIBRARYNAME argument; if given, return only the libraries\n * that match the given pattern.\n */\nvoid functionListCommand(client *c) {\n    int with_code = 0;\n    sds library_name = NULL;\n    for (int i = 2 ; i < c->argc ; ++i) {\n        robj *next_arg = c->argv[i];\n        if (!with_code && !strcasecmp(next_arg->ptr, \"withcode\")) {\n            with_code = 1;\n            continue;\n        }\n        if (!library_name && !strcasecmp(next_arg->ptr, \"libraryname\")) {\n            if (i >= c->argc - 1) {\n                addReplyError(c, \"library name argument was not given\");\n                return;\n            }\n            library_name = c->argv[++i]->ptr;\n            continue;\n        }\n        addReplyErrorSds(c, sdscatfmt(sdsempty(), \"Unknown argument %s\", next_arg->ptr));\n        return;\n    }\n    size_t reply_len = 0;\n    void *len_ptr = NULL;\n    if (library_name) {\n        len_ptr = addReplyDeferredLen(c);\n    } else {\n        /* If no pattern was given we know the reply len and can just set it */\n        addReplyArrayLen(c, dictSize(curr_functions_lib_ctx->libraries));\n    }\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    dictInitIterator(&iter, curr_functions_lib_ctx->libraries);\n    while ((entry = dictNext(&iter))) {\n        functionLibInfo *li = dictGetVal(entry);\n        if (library_name) {\n            if (!stringmatchlen(library_name, sdslen(library_name), li->name, sdslen(li->name), 1)) {\n                continue;\n            }\n        }\n        
++reply_len;\n        addReplyMapLen(c, with_code? 4 : 3);\n        addReplyBulkCString(c, \"library_name\");\n        addReplyBulkCBuffer(c, li->name, sdslen(li->name));\n        addReplyBulkCString(c, \"engine\");\n        addReplyBulkCBuffer(c, li->ei->name, sdslen(li->ei->name));\n\n        addReplyBulkCString(c, \"functions\");\n        addReplyArrayLen(c, dictSize(li->functions));\n        dictIterator functions_iter;\n        dictEntry *function_entry = NULL;\n        dictInitIterator(&functions_iter, li->functions);\n        while ((function_entry = dictNext(&functions_iter))) {\n            functionInfo *fi = dictGetVal(function_entry);\n            addReplyMapLen(c, 3);\n            addReplyBulkCString(c, \"name\");\n            addReplyBulkCBuffer(c, fi->name, sdslen(fi->name));\n            addReplyBulkCString(c, \"description\");\n            if (fi->desc) {\n                addReplyBulkCBuffer(c, fi->desc, sdslen(fi->desc));\n            } else {\n                addReplyNull(c);\n            }\n            addReplyBulkCString(c, \"flags\");\n            functionListReplyFlags(c, fi);\n        }\n        dictResetIterator(&functions_iter);\n\n        if (with_code) {\n            addReplyBulkCString(c, \"library_code\");\n            addReplyBulkCBuffer(c, li->code, sdslen(li->code));\n        }\n    }\n    dictResetIterator(&iter);\n    if (len_ptr) {\n        setDeferredArrayLen(c, len_ptr, reply_len);\n    }\n}\n\n/*\n * FUNCTION DELETE <LIBRARY NAME>\n */\nvoid functionDeleteCommand(client *c) {\n    robj *function_name = c->argv[2];\n    functionLibInfo *li = dictFetchValue(curr_functions_lib_ctx->libraries, function_name->ptr);\n    if (!li) {\n        addReplyError(c, \"Library not found\");\n        return;\n    }\n\n    libraryUnlink(curr_functions_lib_ctx, li);\n    engineLibraryFree(li);\n    /* Indicate that the command changed the data so it will be replicated and\n     * counted as a data change (for persistence configuration) */\n    
server.dirty++;\n    addReply(c, shared.ok);\n}\n\n/* FUNCTION KILL */\nvoid functionKillCommand(client *c) {\n    scriptKill(c, 0);\n}\n\n/* Try to extract command flags if we can, returns the modified flags.\n * Note that it does not guarantee the command arguments are right. */\nuint64_t fcallGetCommandFlags(client *c, uint64_t cmd_flags) {\n    robj *function_name = c->argv[1];\n    c->cur_script = dictFind(curr_functions_lib_ctx->functions, function_name->ptr);\n    if (!c->cur_script)\n        return cmd_flags;\n    functionInfo *fi = dictGetVal(c->cur_script);\n    uint64_t script_flags = fi->f_flags;\n    return scriptFlagsToCmdFlags(cmd_flags, script_flags);\n}\n\nstatic void fcallCommandGeneric(client *c, int ro) {\n    /* Functions need to be fed to monitors before the commands they execute. */\n    replicationFeedMonitors(c,server.monitors,c->db->id,c->argv,c->argc);\n\n    robj *function_name = c->argv[1];\n    dictEntry *de = c->cur_script;\n    if (!de)\n        de = dictFind(curr_functions_lib_ctx->functions, function_name->ptr);\n    if (!de) {\n        addReplyError(c, \"Function not found\");\n        return;\n    }\n    functionInfo *fi = dictGetVal(de);\n    engine *engine = fi->li->ei->engine;\n\n    long long numkeys;\n    /* Get the number of arguments that are keys */\n    if (getLongLongFromObject(c->argv[2], &numkeys) != C_OK) {\n        addReplyError(c, \"Bad number of keys provided\");\n        return;\n    }\n    if (numkeys > (c->argc - 3)) {\n        addReplyError(c, \"Number of keys can't be greater than number of args\");\n        return;\n    } else if (numkeys < 0) {\n        addReplyError(c, \"Number of keys can't be negative\");\n        return;\n    }\n\n    scriptRunCtx run_ctx;\n\n    if (scriptPrepareForRun(&run_ctx, fi->li->ei->c, c, fi->name, fi->f_flags, ro) != C_OK)\n        return;\n\n    engine->call(&run_ctx, engine->engine_ctx, fi->function, c->argv + 3, numkeys,\n                 c->argv + 3 + numkeys, c->argc - 3 
- numkeys);\n    scriptResetRun(&run_ctx);\n}\n\n/*\n * FCALL <FUNCTION NAME> nkeys <key1 .. keyn> <arg1 .. argn>\n */\nvoid fcallCommand(client *c) {\n    fcallCommandGeneric(c, 0);\n}\n\n/*\n * FCALL_RO <FUNCTION NAME> nkeys <key1 .. keyn> <arg1 .. argn>\n */\nvoid fcallroCommand(client *c) {\n    fcallCommandGeneric(c, 1);\n}\n\n/*\n * Returns a binary payload representing all the libraries.\n * Can be loaded using FUNCTION RESTORE.\n *\n * The payload structure is the same as on RDB. Each library\n * is saved separately with the following information:\n * * Library name\n * * Engine name\n * * Library code\n * RDB_OPCODE_FUNCTION2 is saved before each library to indicate\n * that the payload is a library.\n * The RDB version and crc64 are saved at the end of the payload.\n * The RDB version is saved for backward compatibility.\n * crc64 is saved so we can verify the payload content.\n */\nvoid createFunctionDumpPayload(rio *payload) {\n    uint64_t crc;\n    unsigned char buf[2];\n\n    rioInitWithBuffer(payload, sdsempty());\n\n    rdbSaveFunctions(payload);\n\n    /* RDB version */\n    buf[0] = RDB_VERSION & 0xff;\n    buf[1] = (RDB_VERSION >> 8) & 0xff;\n    payload->io.buffer.ptr = sdscatlen(payload->io.buffer.ptr, buf, 2);\n\n    /* CRC64 */\n    crc = crc64(0, (unsigned char*) payload->io.buffer.ptr,\n                sdslen(payload->io.buffer.ptr));\n    memrev64ifbe(&crc);\n    payload->io.buffer.ptr = sdscatlen(payload->io.buffer.ptr, &crc, 8);\n}\n\n/*\n * FUNCTION DUMP\n */\nvoid functionDumpCommand(client *c) {\n    rio payload;\n    createFunctionDumpPayload(&payload);\n\n    addReplyBulkSds(c, payload.io.buffer.ptr);\n}\n\n/*\n * FUNCTION RESTORE <payload> [FLUSH|APPEND|REPLACE]\n *\n * Restore the libraries represented by the given payload.\n * A restore policy can be given to control how to handle existing libraries (default APPEND):\n * * FLUSH: delete all existing libraries.\n * * APPEND: appends the restored libraries to the existing libraries. 
On collision, abort.\n * * REPLACE: appends the restored libraries to the existing libraries.\n *   On collision, replace the old libraries with the new libraries.\n */\nvoid functionRestoreCommand(client *c) {\n    if (c->argc > 4) {\n        addReplySubcommandSyntaxError(c);\n        return;\n    }\n\n    restorePolicy restore_policy = restorePolicy_Append; /* default policy: APPEND */\n    sds data = c->argv[2]->ptr;\n    size_t data_len = sdslen(data);\n    rio payload;\n    sds err = NULL;\n\n    if (c->argc == 4) {\n        const char *restore_policy_str = c->argv[3]->ptr;\n        if (!strcasecmp(restore_policy_str, \"append\")) {\n            restore_policy = restorePolicy_Append;\n        } else if (!strcasecmp(restore_policy_str, \"replace\")) {\n            restore_policy = restorePolicy_Replace;\n        } else if (!strcasecmp(restore_policy_str, \"flush\")) {\n            restore_policy = restorePolicy_Flush;\n        } else {\n            addReplyError(c, \"Wrong restore policy given, value should be either FLUSH, APPEND or REPLACE.\");\n            return;\n        }\n    }\n\n    uint16_t rdbver;\n    if (verifyDumpPayload((unsigned char*)data, data_len, &rdbver) != C_OK) {\n        addReplyError(c, \"DUMP payload version or checksum are wrong\");\n        return;\n    }\n\n    functionsLibCtx *functions_lib_ctx = functionsLibCtxCreate();\n    rioInitWithBuffer(&payload, data);\n\n    /* Read until reaching the last 10 bytes that should contain the RDB version and checksum. */\n    while (data_len - payload.io.buffer.pos > 10) {\n        int type;\n        if ((type = rdbLoadType(&payload)) == -1) {\n            err = sdsnew(\"can not read data type\");\n            goto load_error;\n        }\n        if (type == RDB_OPCODE_FUNCTION_PRE_GA) {\n            err = sdsnew(\"Pre-GA function format not supported\");\n            goto load_error;\n        }\n        if (type != RDB_OPCODE_FUNCTION2) {\n            err = sdsnew(\"given type is not a function\");\n            goto load_error;\n        }\n        if (rdbFunctionLoad(&payload, rdbver, functions_lib_ctx, RDBFLAGS_NONE, &err) != C_OK) {\n            if (!err) {\n                err = sdsnew(\"failed loading the given functions payload\");\n            }\n            goto load_error;\n        }\n    }\n\n    if (restore_policy == restorePolicy_Flush) {\n        functionsLibCtxSwapWithCurrent(functions_lib_ctx);\n        functions_lib_ctx = NULL; /* avoid releasing the f_ctx in the end */\n    } else {\n        if (libraryJoin(curr_functions_lib_ctx, functions_lib_ctx, restore_policy == restorePolicy_Replace, &err) != C_OK) {\n            goto load_error;\n        }\n    }\n\n    /* Indicate that the command changed the data so it will be replicated and\n     * counted as a data change (for persistence configuration) */\n    server.dirty++;\n\nload_error:\n    if (err) {\n        addReplyErrorSds(c, err);\n    } else {\n        addReply(c, shared.ok);\n    }\n    if (functions_lib_ctx) {\n        functionsLibCtxFree(functions_lib_ctx);\n    }\n}\n\n/* FUNCTION FLUSH [ASYNC | SYNC] */\nvoid functionFlushCommand(client *c) {\n    if (c->argc > 3) {\n        addReplySubcommandSyntaxError(c);\n        return;\n    }\n    int async = 0;\n    if (c->argc == 3 && !strcasecmp(c->argv[2]->ptr,\"sync\")) {\n        async = 0;\n    } else if (c->argc == 3 && !strcasecmp(c->argv[2]->ptr,\"async\")) {\n        async = 1;\n    } else if (c->argc == 2) {\n        async = 
server.lazyfree_lazy_user_flush ? 1 : 0;\n    } else {\n        addReplyError(c,\"FUNCTION FLUSH only supports SYNC|ASYNC option\");\n        return;\n    }\n\n    functionsLibCtxClearCurrent(async);\n\n    /* Indicate that the command changed the data so it will be replicated and\n     * counted as a data change (for persistence configuration) */\n    server.dirty++;\n    addReply(c,shared.ok);\n}\n\n/* FUNCTION HELP */\nvoid functionHelpCommand(client *c) {\n    const char *help[] = {\n\"LOAD [REPLACE] <FUNCTION CODE>\",\n\"    Create a new library with the given library name and code.\",\n\"DELETE <LIBRARY NAME>\",\n\"    Delete the given library.\",\n\"LIST [LIBRARYNAME PATTERN] [WITHCODE]\",\n\"    Return general information on all the libraries:\",\n\"    * Library name\",\n\"    * The engine used to run the library\",\n\"    * Functions list\",\n\"    * Library code (if WITHCODE is given)\",\n\"    It is also possible to get only the libraries that match a pattern using the LIBRARYNAME argument.\",\n\"STATS\",\n\"    Return information about the currently running function:\",\n\"    * Function name\",\n\"    * Command used to run the function\",\n\"    * Duration in MS that the function is running\",\n\"    If no function is running, return nil\",\n\"    In addition, returns a list of available engines.\",\n\"KILL\",\n\"    Kill the currently running function.\",\n\"FLUSH [ASYNC|SYNC]\",\n\"    Delete all the libraries.\",\n\"    When called without the optional mode argument, the behavior is determined by the\",\n\"    lazyfree-lazy-user-flush configuration directive. 
Valid modes are:\",\n\"    * ASYNC: Asynchronously flush the libraries.\",\n\"    * SYNC: Synchronously flush the libraries.\",\n\"DUMP\",\n\"    Return a serialized payload representing the current libraries; it can be restored using the FUNCTION RESTORE command\",\n\"RESTORE <PAYLOAD> [FLUSH|APPEND|REPLACE]\",\n\"    Restore the libraries represented by the given payload. It is possible to give a restore policy to\",\n\"    control how to handle existing libraries (default APPEND):\",\n\"    * FLUSH: delete all existing libraries.\",\n\"    * APPEND: appends the restored libraries to the existing libraries. On collision, abort.\",\n\"    * REPLACE: appends the restored libraries to the existing libraries. On collision, replace the old\",\n\"      libraries with the new libraries (notice that even with this option there is a chance of failure\",\n\"      in case of a function name collision with another library).\",\nNULL };\n    addReplyHelp(c, help);\n}\n\n/* Verify that the function name is of the format: [a-zA-Z0-9_]+ 
*/\nstatic int functionsVerifyName(sds name) {\n    if (sdslen(name) == 0) {\n        return C_ERR;\n    }\n    for (size_t i = 0 ; i < sdslen(name) ; ++i) {\n        char curr_char = name[i];\n        if ((curr_char >= 'a' && curr_char <= 'z') ||\n            (curr_char >= 'A' && curr_char <= 'Z') ||\n            (curr_char >= '0' && curr_char <= '9') ||\n            (curr_char == '_'))\n        {\n            continue;\n        }\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nint functionExtractLibMetaData(sds payload, functionsLibMataData *md, sds *err) {\n    sds name = NULL;\n    sds engine = NULL;\n    if (strncmp(payload, \"#!\", 2) != 0) {\n        *err = sdsnew(\"Missing library metadata\");\n        return C_ERR;\n    }\n    char *shebang_end = strchr(payload, '\\n');\n    if (shebang_end == NULL) {\n        *err = sdsnew(\"Invalid library metadata\");\n        return C_ERR;\n    }\n    size_t shebang_len = shebang_end - payload;\n    sds shebang = sdsnewlen(payload, shebang_len);\n    int numparts;\n    sds *parts = sdssplitargs(shebang, &numparts);\n    sdsfree(shebang);\n    if (!parts || numparts == 0) {\n        *err = sdsnew(\"Invalid library metadata\");\n        sdsfreesplitres(parts, numparts);\n        return C_ERR;\n    }\n    engine = sdsdup(parts[0]);\n    sdsrange(engine, 2, -1);\n    for (int i = 1 ; i < numparts ; ++i) {\n        sds part = parts[i];\n        if (strncasecmp(part, \"name=\", 5) == 0) {\n            if (name) {\n                *err = sdscatfmt(sdsempty(), \"Invalid metadata value, name argument was given multiple times\");\n                goto error;\n            }\n            name = sdsdup(part);\n            sdsrange(name, 5, -1);\n            continue;\n        }\n        *err = sdscatfmt(sdsempty(), \"Invalid metadata value given: %s\", part);\n        goto error;\n    }\n\n    if (!name) {\n        *err = sdsnew(\"Library name was not given\");\n        goto error;\n    }\n\n    sdsfreesplitres(parts, 
numparts);\n\n    md->name = name;\n    md->code = sdsnewlen(shebang_end, sdslen(payload) - shebang_len);\n    md->engine = engine;\n\n    return C_OK;\n\nerror:\n    if (name) sdsfree(name);\n    if (engine) sdsfree(engine);\n    sdsfreesplitres(parts, numparts);\n    return C_ERR;\n}\n\nvoid functionFreeLibMetaData(functionsLibMataData *md) {\n    if (md->code) sdsfree(md->code);\n    if (md->name) sdsfree(md->name);\n    if (md->engine) sdsfree(md->engine);\n}\n\n/* Compile and save the given library, return the loaded library name on success\n * and NULL on failure. In case of failure the err out param is set with a relevant error message */\nsds functionsCreateWithLibraryCtx(sds code, int replace, sds* err, functionsLibCtx *lib_ctx, size_t timeout) {\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    functionLibInfo *new_li = NULL;\n    functionLibInfo *old_li = NULL;\n    functionsLibMataData md = {0};\n    if (functionExtractLibMetaData(code, &md, err) != C_OK) {\n        return NULL;\n    }\n\n    if (functionsVerifyName(md.name)) {\n        *err = sdsnew(\"Library names can only contain letters, numbers, or underscores (_) and must be at least one character long\");\n        goto error;\n    }\n\n    engineInfo *ei = dictFetchValue(engines, md.engine);\n    if (!ei) {\n        *err = sdscatfmt(sdsempty(), \"Engine '%S' not found\", md.engine);\n        goto error;\n    }\n    engine *engine = ei->engine;\n\n    old_li = dictFetchValue(lib_ctx->libraries, md.name);\n    if (old_li && !replace) {\n        old_li = NULL;\n        *err = sdscatfmt(sdsempty(), \"Library '%S' already exists\", md.name);\n        goto error;\n    }\n\n    if (old_li) {\n        libraryUnlink(lib_ctx, old_li);\n    }\n\n    new_li = engineLibraryCreate(md.name, ei, code);\n    if (engine->create(engine->engine_ctx, new_li, md.code, timeout, err) != C_OK) {\n        goto error;\n    }\n\n    if (dictSize(new_li->functions) == 0) {\n        *err = sdsnew(\"No functions 
registered\");\n        goto error;\n    }\n\n    /* Verify no duplicate functions */\n    dictInitIterator(&iter, new_li->functions);\n    while ((entry = dictNext(&iter))) {\n        functionInfo *fi = dictGetVal(entry);\n        if (dictFetchValue(lib_ctx->functions, fi->name)) {\n            /* functions name collision, abort. */\n            *err = sdscatfmt(sdsempty(), \"Function %s already exists\", fi->name);\n            dictResetIterator(&iter);\n            goto error;\n        }\n    }\n    dictResetIterator(&iter);\n\n    libraryLink(lib_ctx, new_li);\n\n    if (old_li) {\n        engineLibraryFree(old_li);\n    }\n\n    sds loaded_lib_name = md.name;\n    md.name = NULL;\n    functionFreeLibMetaData(&md);\n\n    return loaded_lib_name;\n\nerror:\n    if (new_li) engineLibraryFree(new_li);\n    if (old_li) libraryLink(lib_ctx, old_li);\n    functionFreeLibMetaData(&md);\n    return NULL;\n}\n\n/*\n * FUNCTION LOAD [REPLACE] <LIBRARY CODE>\n * REPLACE         - optional, replace existing library\n * LIBRARY CODE    - library code to pass to the engine\n */\nvoid functionLoadCommand(client *c) {\n    int replace = 0;\n    int argc_pos = 2;\n    while (argc_pos < c->argc - 1) {\n        robj *next_arg = c->argv[argc_pos++];\n        if (!strcasecmp(next_arg->ptr, \"replace\")) {\n            replace = 1;\n            continue;\n        }\n        addReplyErrorFormat(c, \"Unknown option given: %s\", (char*)next_arg->ptr);\n        return;\n    }\n\n    if (argc_pos >= c->argc) {\n        addReplyError(c, \"Function code is missing\");\n        return;\n    }\n\n    robj *code = c->argv[argc_pos];\n    sds err = NULL;\n    sds library_name = NULL;\n    size_t timeout = LOAD_TIMEOUT_MS;\n    if (mustObeyClient(c)) {\n        timeout = 0;\n    }\n    if (!(library_name = functionsCreateWithLibraryCtx(code->ptr, replace, &err, curr_functions_lib_ctx, timeout)))\n    {\n        addReplyErrorSds(c, err);\n        return;\n    }\n    /* Indicate that the command 
changed the data so it will be replicated and\n     * counted as a data change (for persistence configuration) */\n    server.dirty++;\n    addReplyBulkSds(c, library_name);\n}\n\n/* Return memory usage of all the engines combined */\nunsigned long functionsMemoryVM(void) {\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    size_t engines_memory = 0;\n\n    dictInitIterator(&iter, engines);\n    while ((entry = dictNext(&iter))) {\n        engineInfo *ei = dictGetVal(entry);\n        engine *engine = ei->engine;\n        engines_memory += engine->get_used_memory(engine->engine_ctx);\n    }\n    dictResetIterator(&iter);\n\n    return engines_memory;\n}\n\n/* Return memory overhead of all the engines combined */\nunsigned long functionsMemoryEngine(void) {\n    size_t memory_overhead = dictMemUsage(engines);\n    memory_overhead += dictMemUsage(curr_functions_lib_ctx->functions);\n    memory_overhead += sizeof(functionsLibCtx);\n    memory_overhead += curr_functions_lib_ctx->cache_memory;\n    memory_overhead += engine_cache_memory;\n\n    return memory_overhead;\n}\n\n/* Returns the number of functions */\nunsigned long functionsNum(void) {\n    return dictSize(curr_functions_lib_ctx->functions);\n}\n\nunsigned long functionsLibNum(void) {\n    return dictSize(curr_functions_lib_ctx->libraries);\n}\n\ndict* functionsLibGet(void) {\n    return curr_functions_lib_ctx->libraries;\n}\n\nsize_t functionsLibCtxFunctionsLen(functionsLibCtx *functions_ctx) {\n    return dictSize(functions_ctx->functions);\n}\n\n/* Initialize engine data structures.\n * Should be called once on server initialization */\nint functionsInit(void) {\n    engines = dictCreate(&engineDictType);\n\n    if (luaEngineInitEngine() != C_OK) {\n        return C_ERR;\n    }\n\n    /* Must be initialized after engines initialization */\n    curr_functions_lib_ctx = functionsLibCtxCreate();\n\n    return C_OK;\n}\n"
  },
  {
    "path": "src/functions.h",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __FUNCTIONS_H_\n#define __FUNCTIONS_H_\n\n/*\n * functions.c unit provides the Redis Functions API:\n * * FUNCTION LOAD\n * * FUNCTION LIST\n * * FUNCTION CALL (FCALL and FCALL_RO)\n * * FUNCTION DELETE\n * * FUNCTION STATS\n * * FUNCTION KILL\n * * FUNCTION FLUSH\n * * FUNCTION DUMP\n * * FUNCTION RESTORE\n * * FUNCTION HELP\n *\n * Also contains implementation for:\n * * Saving/loading functions from the RDB\n * * Registering engines\n */\n\n#include \"server.h\"\n#include \"script.h\"\n#include \"redismodule.h\"\n\ntypedef struct functionLibInfo functionLibInfo;\n\ntypedef struct engine {\n    /* engine specific context */\n    void *engine_ctx;\n\n    /* Create function callback, gets the engine_ctx and the function code.\n     * engine_ctx - opaque struct that was created on engine initialization\n     * li - library information that needs to be provided and filled when adding functions\n     * code - the library code\n     * timeout - timeout for the library creation (0 for no timeout)\n     * err - description of error (if occurred)\n     * returns C_ERR on error and sets err to the error message */\n    int (*create)(void *engine_ctx, functionLibInfo *li, sds code, size_t timeout, sds *err);\n\n    /* Invoke a function; r_ctx is an opaque object (from the engine's POV).\n     * The r_ctx should be used by the engine to interact with Redis;\n     * such interaction could include running commands, setting the resp, or setting the\n     * replication mode\n     */\n    void (*call)(scriptRunCtx *r_ctx, void *engine_ctx, void *compiled_function,\n            robj **keys, size_t nkeys, robj **args, size_t nargs);\n\n    /* get current used memory by the engine */\n    size_t (*get_used_memory)(void 
*engine_ctx);\n\n    /* Return memory overhead for a given function,\n     * such memory is not counted as engine memory but as general\n     * structs memory that hold different information */\n    size_t (*get_function_memory_overhead)(void *compiled_function);\n\n    /* Return memory overhead for the engine (the size of the struct holding the engine) */\n    size_t (*get_engine_memory_overhead)(void *engine_ctx);\n\n    /* free the given function */\n    void (*free_function)(void *engine_ctx, void *compiled_function);\n\n    /* Free the engine context. */\n    void (*free_ctx)(void *engine_ctx);\n} engine;\n\n/* Hold information about an engine.\n * Used on rdb.c so it must be declared here. */\ntypedef struct engineInfo {\n    sds name;       /* Name of the engine */\n    engine *engine; /* engine callbacks that allow interaction with the engine */\n    client *c;      /* Client that is used to run commands */\n} engineInfo;\n\n/* Hold information about the specific function.\n * Used on rdb.c so it must be declared here. */\ntypedef struct functionInfo {\n    sds name;            /* Function name */\n    void *function;      /* Opaque object set by the function's engine that allows it\n                            to run the function; usually it's the function's compiled code. */\n    functionLibInfo* li; /* Pointer to the library that created the function */\n    sds desc;            /* Function description */\n    uint64_t f_flags;    /* Function flags */\n} functionInfo;\n\n/* Hold information about the specific library.\n * Used on rdb.c so it must be declared here. 
*/\nstruct functionLibInfo {\n    sds name;        /* Library name */\n    dict *functions; /* Functions dictionary */\n    engineInfo *ei;  /* Pointer to the function engine */\n    sds code;        /* Library code */\n};\n\nint functionsRegisterEngine(const char *engine_name, engine *engine_ctx);\nsds functionsCreateWithLibraryCtx(sds code, int replace, sds* err, functionsLibCtx *lib_ctx, size_t timeout);\nunsigned long functionsMemoryVM(void);\nunsigned long functionsMemoryEngine(void);\nunsigned long functionsNum(void);\nunsigned long functionsLibNum(void);\ndict* functionsLibGet(void);\nsize_t functionsLibCtxFunctionsLen(functionsLibCtx *functions_ctx);\nfunctionsLibCtx* functionsLibCtxGetCurrent(void);\nfunctionsLibCtx* functionsLibCtxCreate(void);\nvoid functionsLibCtxClearCurrent(int async);\nvoid functionsLibCtxFree(functionsLibCtx *lib_ctx);\nvoid functionsLibCtxClear(functionsLibCtx *lib_ctx);\nvoid functionsLibCtxSwapWithCurrent(functionsLibCtx *lib_ctx);\n\nint functionLibCreateFunction(sds name, void *function, functionLibInfo *li, sds desc, uint64_t f_flags, sds *err);\n\nint luaEngineInitEngine(void);\nint functionsInit(void);\nvoid functionsFree(functionsLibCtx *lib_ctx, dict *engs);\n\nvoid createFunctionDumpPayload(rio *payload);\n\n#endif /* __FUNCTIONS_H_ */\n"
  },
  {
    "path": "src/fwtree.c",
    "content": "/*\n * fwtree.c -- FENWICK TREE (Binary Indexed Tree)\n * \n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"fwtree.h\"\n#include \"zmalloc.h\"\n#include \"redisassert.h\"\n#include <string.h>\n\nstruct _fenwickTree {\n    unsigned long long *tree;\n    int size_bits;\n    int size;\n    uint64_t total;\n};\n\n/* Create a new Fenwick tree with 2^sizeBits elements (all initialized to 0) */\nfenwickTree *fwTreeCreate(int sizeBits) {\n    fenwickTree *ft = zmalloc(sizeof(fenwickTree));\n    ft->size_bits = sizeBits;\n    ft->size = 1 << sizeBits;\n    /* Fenwick tree is 1-based, so we need size + 1 elements */\n    ft->tree = zcalloc(sizeof(unsigned long long) * (ft->size + 1));\n    ft->total = 0;\n    return ft;\n}\n\nvoid fwTreeDestroy(fenwickTree *ft) {\n    if (!ft) return;\n    zfree(ft->tree);\n    zfree(ft);\n}\n\n/* Query cumulative sum from index 0 to idx (inclusive, 0-based) */\nunsigned long long fwTreePrefixSum(fenwickTree *ft, int idx) {\n    if (!ft || idx < 0) return 0;\n    if (idx >= ft->size) idx = ft->size - 1;\n\n    /* Convert to 1-based indexing */\n    idx++;\n\n    unsigned long long sum = 0;\n    while (idx > 0) {\n        sum += ft->tree[idx];\n        idx -= (idx & -idx);\n    }\n    return sum;\n}\n\n/* Update the tree by adding delta to the element at idx (0-based) */\nvoid fwTreeUpdate(fenwickTree *ft, int idx, long long delta) {\n    if (!ft || idx < 0 || idx >= ft->size) return;\n\n    /* Convert to 1-based indexing */\n    idx++;\n    ft->total += delta;\n\n    while (idx <= ft->size) {\n        if (delta < 0) {\n            assert(ft->tree[idx] >= (unsigned long long)(-delta));\n        }\n        ft->tree[idx] += delta;\n        idx += (idx & -idx);\n    
}\n    debugAssert(ft->total == fwTreePrefixSum(ft, ft->size - 1));\n}\n\n/* Find the 0-based index where the cumulative sum first reaches or exceeds target.\n * target should be in range [1..total].\n * Returns the 0-based index, or 0 if target is 0 or the tree is empty.\n */\nint fwTreeFindIndex(fenwickTree *ft, unsigned long long target) {\n    debugAssert(ft);\n\n    if (target == 0) return 0;\n\n    int result = 0, bit_mask = 1 << ft->size_bits;\n    for (int i = bit_mask; i != 0; i >>= 1) {\n        int current = result + i;\n        /* When the target is greater than the 'current' node's value, we update\n         * the target and continue the search in the 'current' node's subtree. */\n        if (target > ft->tree[current]) {\n            target -= ft->tree[current];\n            result = current;\n        }\n    }\n    /* Adjust the result to get the correct index:\n     * 1. result += 1;\n     *    After the calculations, the index of target in the tree should be the next one,\n     *    so we should add 1.\n     * 2. result -= 1;\n     *    Unlike BIT (tree is 1-based), the API uses 0-based indexing, so we need to subtract 1.\n     * As the addition and subtraction cancel each other out, we can simply return the result. 
*/\n    return result;\n}\n\n/* Find the first non-empty index (equivalent to fwTreeFindIndex(ft, 1)) */\nint fwTreeFindFirstNonEmpty(fenwickTree *ft) {\n    debugAssert(ft);\n    return fwTreeFindIndex(ft, 1);\n}\n\n/* Find the next non-empty index after idx (0-based).\n * Returns the 0-based index of the next non-empty element, or -1 if no such element exists.\n * If idx is -1, finds the first non-empty index.\n * Time complexity: O(log n)\n */\nint fwTreeFindNextNonEmpty(fenwickTree *ft, int idx) {\n    if (!ft || idx < 0 || idx >= ft->size) return -1;\n    /* Get cumulative sum up to current index */\n    unsigned long long next_sum = fwTreePrefixSum(ft, idx) + 1;\n    /* Find the index that contains the next key (curr_sum + 1) */\n    return (next_sum <= ft->total) ? fwTreeFindIndex(ft, next_sum) : -1;\n}\n\n/* Clear all values in the tree */\nvoid fwTreeClear(fenwickTree *ft) {\n    debugAssert(ft);\n    memset(ft->tree, 0, sizeof(unsigned long long) * (ft->size + 1));\n    ft->total = 0;\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n\n#define TEST(name) printf(\"%s\\n\", name);\n\nint fwtreeTest(int argc, char *argv[], int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n    /* Test basic operations */\n    int sizeBits = 3; /*size = 8*/\n    fenwickTree *ft = fwTreeCreate(sizeBits);\n    assert(ft != NULL);\n\n    TEST(\"estore - Test updates and queries\") {\n        fwTreeUpdate(ft, 0, 5);  /* index 0 += 5 */\n        fwTreeUpdate(ft, 2, 3);  /* index 2 += 3 */\n        fwTreeUpdate(ft, 4, 7);  /* index 4 += 7 */\n        fwTreeUpdate(ft, 6, 2);  /* index 6 += 2 */\n    }\n\n    TEST(\"estore - Test cumulative queries\") {\n        assert(fwTreePrefixSum(ft, 0) == 5);   /* sum[0..0] = 5 */\n        assert(fwTreePrefixSum(ft, 1) == 5);   /* sum[0..1] = 5 */\n        assert(fwTreePrefixSum(ft, 2) == 8);   /* sum[0..2] = 5+3 = 8 */\n        assert(fwTreePrefixSum(ft, 3) == 8);   /* sum[0..3] = 8 */\n        assert(fwTreePrefixSum(ft, 4) == 
15);  /* sum[0..4] = 5+3+7 = 15 */\n        assert(fwTreePrefixSum(ft, 5) == 15);  /* sum[0..5] = 15 */\n        assert(fwTreePrefixSum(ft, 6) == 17);  /* sum[0..6] = 5+3+7+2 = 17 */\n        assert(fwTreePrefixSum(ft, 7) == 17);  /* sum[0..7] = 17 */\n    }\n\n\n\n    TEST(\"estore - Test find_index functionality\") {\n        assert(fwTreeFindIndex(ft, 1) == 0);  /* target 1 -> index 0 */\n        assert(fwTreeFindIndex(ft, 5) == 0);  /* target 5 -> index 0 */\n        assert(fwTreeFindIndex(ft, 6) == 2);  /* target 6 -> index 2 */\n        assert(fwTreeFindIndex(ft, 8) == 2);  /* target 8 -> index 2 */\n        assert(fwTreeFindIndex(ft, 9) == 4);  /* target 9 -> index 4 */\n        assert(fwTreeFindIndex(ft, 15) == 4); /* target 15 -> index 4 */\n        assert(fwTreeFindIndex(ft, 16) == 6); /* target 16 -> index 6 */\n        assert(fwTreeFindIndex(ft, 17) == 6); /* target 17 -> index 6 */\n    }\n\n    TEST(\"estore - Test fwTreeFindNextNonEmpty functionality\") {\n        /* Current state: indices 0, 2, 4, 6 have values 5, 3, 7, 2 respectively */\n        assert(fwTreeFindNextNonEmpty(ft, -1) == -1);  /* Invalid index */\n        assert(fwTreeFindNextNonEmpty(ft, 0) == 2);   /* Next after 0 is index 2 */\n        assert(fwTreeFindNextNonEmpty(ft, 1) == 2);   /* Next after 1 is index 2 */\n        assert(fwTreeFindNextNonEmpty(ft, 2) == 4);   /* Next after 2 is index 4 */\n        assert(fwTreeFindNextNonEmpty(ft, 3) == 4);   /* Next after 3 is index 4 */\n        assert(fwTreeFindNextNonEmpty(ft, 4) == 6);   /* Next after 4 is index 6 */\n        assert(fwTreeFindNextNonEmpty(ft, 5) == 6);   /* Next after 5 is index 6 */\n        assert(fwTreeFindNextNonEmpty(ft, 6) == -1);  /* No next after 6 */\n        assert(fwTreeFindNextNonEmpty(ft, 7) == -1);  /* No next after 7 */\n    }\n\n    TEST(\"estore - Test negative updates\") {\n        fwTreeUpdate(ft, 2, -1);  /* index 2 -= 1 */\n        assert(fwTreePrefixSum(ft, 2) == 7);   /* sum[0..2] = 5+2 = 7 */\n    
    assert(fwTreePrefixSum(ft, 7) == 16);  /* total = 16 */\n    }\n\n    TEST(\"estore - Test making an index empty\") {\n        fwTreeUpdate(ft, 2, -2);  /* index 2 -= 2, should become empty */\n        assert(fwTreePrefixSum(ft, 2) == 5);   /* sum[0..2] = 5+0 = 5 */\n    }\n    \n    TEST(\"estore - Test fwTreeFindNextNonEmpty after making index 2 empty\") {\n        /* Current state: indices 0, 4, 6 have values 5, 7, 2 respectively (index 2 is now empty) */\n        assert(fwTreeFindNextNonEmpty(ft, 0) == 4);   /* Next after 0 is now index 4 (skipping empty 2) */\n        assert(fwTreeFindNextNonEmpty(ft, 1) == 4);   /* Next after 1 is index 4 */\n        assert(fwTreeFindNextNonEmpty(ft, 2) == 4);   /* Next after 2 is index 4 */\n        assert(fwTreeFindNextNonEmpty(ft, 3) == 4);   /* Next after 3 is index 4 */\n    }\n\n    TEST(\"estore - Operations on empty tree\") {\n        fwTreeClear(ft);\n        assert(fwTreePrefixSum(ft, 7) == 0);\n    \n        /* Test fwTreeFindNextNonEmpty on empty tree */\n        assert(fwTreeFindNextNonEmpty(ft, -1) == -1);  /* Empty tree */\n        assert(fwTreeFindNextNonEmpty(ft, 0) == -1);   /* Empty tree */\n    }\n\n    fwTreeDestroy(ft);\n\n    TEST(\"estore - misc\") {\n        ft = fwTreeCreate(0);\n        \n        fwTreeUpdate(ft, 0, 10);  /* add 10 to index 0 */\n        assert(fwTreePrefixSum(ft, 0) == 10);\n    \n        assert(fwTreeFindIndex(ft, 5) == 0);\n    \n        /* Test fwTreeFindNextNonEmpty on single element tree */\n        assert(fwTreeFindNextNonEmpty(ft, -1) == -1);  /* Invalid index */\n        assert(fwTreeFindNextNonEmpty(ft, 0) == -1);   /* No next after 0 in single element tree */\n    \n        fwTreeDestroy(ft);\n    }\n    \n    return 0;\n}\n\n#endif\n"
  },
  {
    "path": "src/fwtree.h",
    "content": "/*\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n * \n *\n * FENWICK TREE (Binary Indexed Tree)\n * ----------------------------------\n * A Fenwick tree is a data structure that efficiently supports:\n * - Point updates: Add/subtract values at specific indices in O(log n) time\n * - Prefix sum queries: Calculate cumulative sums from index 0 to any index in O(log n) time\n * - Range queries: Calculate sums over any range [i,j] in O(log n) time\n * - Space complexity: O(n)\n *\n * USAGE IN REDIS\n * --------------\n * This implementation is used by KVSTORE and ESTORE to efficiently track:\n * - Cumulative key counts across dictionary slots (KVSTORE)\n * - Cumulative item counts across expiration buckets (ESTORE)\n *\n * This enables efficient operations like:\n * - Finding which dictionary/bucket contains the Nth key/item\n * - Iterating through non-empty dictionaries/buckets\n * - Load balancing and random key selection\n *\n * IMPLEMENTATION NOTES\n * -------------------\n * - The tree uses 1-based indexing internally for mathematical convenience\n * - The public API uses 0-based indexing for consistency with Redis codebase\n * - Tree size must be a power of 2 (specified as sizeBits where size = 2^sizeBits)\n * - All operations have O(log n) time complexity where n is the tree size\n *\n * REFERENCES\n * ----------\n * For more details on Fenwick trees: https://en.wikipedia.org/wiki/Fenwick_tree\n */\n\n#ifndef __FWTREE_H\n#define __FWTREE_H\n\n#include <stdint.h>\n\n/* Forward declaration of the fenwick tree structure */\ntypedef struct _fenwickTree fenwickTree;\n\n/* Fenwick Tree API */\n\nfenwickTree *fwTreeCreate(int sizeBits);\n\nvoid fwTreeDestroy(fenwickTree *ft);\n\nunsigned long long fwTreePrefixSum(fenwickTree 
*ft, int idx);\n\nvoid fwTreeUpdate(fenwickTree *ft, int idx, long long delta);\n\nint fwTreeFindIndex(fenwickTree *ft, unsigned long long target);\n\nint fwTreeFindFirstNonEmpty(fenwickTree *ft);\n\nint fwTreeFindNextNonEmpty(fenwickTree *ft, int idx);\n\nvoid fwTreeClear(fenwickTree *ft);\n\n#ifdef REDIS_TEST\nint fwtreeTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* __FWTREE_H */\n"
  },
  {
    "path": "src/gcra.c",
    "content": "/*\n * Copyright (c) 2026-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n#include \"server.h\"\n#include <math.h>\n\n/* GCRA algorithm for rate limiting.\n * Implementation is heavily based on the implementation of (redis-cell)\n * [https://github.com/brandur/redis-cell] by (brandur)[https://github.com/brandur].\n *\n * It is a leaky-bucket type algorithm but instead of periodically dripping, we\n * calculate the next time the bucket has capacity - called\n * Theoretical arrival time(TaT) by the algorithm. We allow requests at a\n * sustained rate (f.e 5 request per 10 seconds, i.e 1 request per 2 seconds)\n * but also allow bursts of multiple request at one time.\n *\n * Explanation of the algorithm follows using the leaky-bucket analogy.\n *\n * GCRA works by keeping track of the next TaT and updating it after requests\n * are allowed. Let T be the emission interval for a request - in the\n * leaky-bucket analogy this will be the period at which the bucket drips.\n * Using N requests will result in the (next TaT) = (current TaT) + N * T (time\n * needed to drain the bucket). To determine if a request can be allowed we can\n * calculate the time at which \"the bucket dripped\", which is TaT-T.\n * If this time is in the past the request is allowed, otherwise we wait and TaT\n * is not updated. This only accounts for 1 request though. 
In order to allow\n * bursts we can imagine a full burst fully filling an empty bucket, thus\n * we need to calculate the time after which \"the bucket will completely drain\"\n * the requests of the burst - this will be t = T * max_burst.\n * Finally, the allowance check will be:\n *\n *   \"now\" >= TaT - (T + t)\n *\n * And in this case a picture is worth about 250 words:\n *\n * +-------------------+\n * |  ALLOWED REQUEST  |\n * +-------------------+\n * \n *   +-----------+          +-----+    +-----+\n *   | allow at  |          | now |    | TaT |\n *   |  (past)   |          +-----+    +-----+\n *   +-----------+            |          |\n *                            |          |\n * ---+-----------------------+----------+-----------> time\n *    |//////////////////////////////////|\n *    |//////////////////////////////////|\n *    +----------------------------------+\n *    |                                  |\n *    |<------------- t + T ------------>|\n * \n * \n *    +------------------------------------------+\n *    | T     = Emission interval                |\n *    | t     = Capacity of bucket               |\n *    | t + T = Delay variation tolerance        |\n *    | TaT   = Theoretical arrival time         |\n *    | now   = Actual time of request           |\n *    +------------------------------------------+\n *\n * (ASCII art adapted from https://brandur.org/rate-limiting). 
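\n * Worked example (illustrative numbers, added for clarity): with\n * tokens_per_period = 5 and period = 10s, T = 2s; with max_burst = 4 the\n * tolerance is t + T = 10s. Starting from TaT = now, a burst of 5\n * requests (TOKENS 5) is allowed and moves TaT to now + 10s; an\n * immediately following request would have allow at = now + 2s > now,\n * so it is limited and can be retried after 2 seconds. 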
*/\n\n/* GCRA key max_burst tokens_per_period period [TOKENS count]\n *\n * key: Key related to specific rate limiting case\n * max_burst: Maximum tokens allowed as burst (in addition to sustained rate)\n * tokens_per_period: Number of tokens allowed per period\n * period: Period in seconds for calculating sustained rate\n * tokens: Optional, cost of this request (default: 1)\n */\nvoid gcraCommand(client *c) {\n    robj *key = c->argv[1];\n\n    /* GCRA parameters */\n    long max_burst;\n    long tokens_per_period;\n    long num_tokens = 1;\n    double period;\n\n    /* Variables used in the reply */\n    int limited; /* Whether or not the request was limited */\n    long long remaining = 0; /* Number of requests available immediately */\n    long long retry_after_s = -1; /* Time in seconds after which the caller can retry */\n    long long reset_after_s = 0; /* Number of seconds after which a full burst will be allowed */\n\n    if (c->argc > 7) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if (getPositiveLongFromObjectOrReply(c, c->argv[2], &max_burst, NULL) != C_OK) {\n        return;\n    }\n    if (likely(max_burst < LONG_MAX)) max_burst += 1;\n\n    if (getRangeLongFromObjectOrReply(c, c->argv[3], 1, LONG_MAX, &tokens_per_period, NULL) != C_OK) {\n        return;\n    }\n\n    if (getDoubleFromObjectOrReply(c, c->argv[4], &period, NULL) != C_OK) {\n        return;\n    }\n    if (period <= 0 || period >= 1e12) {\n        addReplyError(c, \"period must be > 0 and < 1e12\");\n        return;\n    }\n\n    if (c->argc >= 6) {\n        if (strcasecmp(\"tokens\", c->argv[5]->ptr)) {\n            addReplyErrorObject(c, shared.syntaxerr);\n            return;\n        }\n        if (c->argc == 6) {\n            addReplyError(c, \"Missing TOKENS value\");\n            return;\n        }\n        if (getRangeLongFromObjectOrReply(c, c->argv[6], 1, LONG_MAX, &num_tokens, NULL) != C_OK) {\n            return;\n        }\n    }\n\n    ustime_t now = 
commandTimeSnapshot() * 1000;\n\n    long long tat_us, new_tat_us;\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db, key, &link);\n    if (checkType(c, kv, OBJ_GCRA)) {\n        return;\n    }\n    if (kv != NULL) {\n        getLongLongFromGCRAObject(kv, &tat_us);\n    } else {\n        tat_us = now;\n    }\n\n    /* microsecond accuracy */\n    double period_us = period * 1000000.;\n\n    /* Emission interval is the minimum amount of time between requests.\n     * Note on calculations:\n     * Even if emission_interval_us becomes less than 1us, we assume a\n     * minimum of 1us. The API is already in seconds granularity so it is\n     * expected the user won't need sub-microsecond accuracy. */\n    long long emission_interval_us = (long long)(period_us / tokens_per_period + 0.5);\n    if (unlikely(emission_interval_us == 0)) emission_interval_us = 1;\n\n    /* overflow checks. In normal circumstances we shouldn't get these but the\n     * user may have wrongfully specified very large values.\n     * Note that all values are positive. */\n    if (emission_interval_us > LLONG_MAX / num_tokens) {\n        addReplyError(c, \"GCRA limiting uses microsecond accuracy. Combination of period, tokens_per_period and TOKENS would cause an overflow\");\n        return;\n    }\n    if (emission_interval_us > LLONG_MAX / max_burst) {\n        addReplyError(c, \"GCRA limiting uses microsecond accuracy. Combination of period, tokens_per_period and max_burst would cause an overflow\");\n        return;\n    }\n\n    /* Max burst gives us the number of requests we can use up at one time.\n     * The variance is the amount of time that many requests need\n     * to \"refill the bucket\". */\n    long long variance_us = emission_interval_us * max_burst;\n\n    /* If a request is allowed the next TaT is after an emission_interval_us time.\n     * Hence for multiple requests we multiply by their number. 
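For example\n     * (illustrative): with T = 2s (emission_interval_us = 2000000) and\n     * TOKENS 3, increment_us = 6000000, i.e. the TaT advances by 6 seconds. 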
*/\n    long long increment_us = emission_interval_us * num_tokens;\n\n    long long base_us = (now > tat_us) ? now : tat_us;\n    if (LLONG_MAX - base_us < increment_us) {\n        addReplyError(c, \"GCRA limiting uses microsecond accuracy. Combination of period, tokens_per_period and TOKENS would cause an overflow\");\n        return;\n    }\n    new_tat_us = base_us + increment_us;\n\n    /* Calculate the time a request is allowed. This is TaT, but because we allow\n     * a burst we move that time in the past. If the allow time is before the\n     * time we ask (i.e now) we allow the request, otherwise we limit it and\n     * calculate after how much time the user should retry. */\n    long long allow_at = new_tat_us - variance_us;\n    long long diff_us = now - allow_at;\n    long long ttl_us;\n    if (diff_us < 0) {\n        limited = 1;\n        /* NOTE: if increment is more than variance, then number of requests is\n         * more than what is maximally allowed (i.e max_bursts + 1) so we leave\n         * retry_after_s to -1 in this case, as it should never be retried. */\n        if (increment_us <= variance_us) {\n            retry_after_s = ceil((-diff_us) / 1000000.);\n        }\n        ttl_us = tat_us - now;\n    } else {\n        limited = 0;\n        ttl_us = new_tat_us - now;\n        robj *tatobj = createGCRAObject(new_tat_us);\n        setKeyByLink(c, c->db, key, &tatobj, kv ? SETKEY_ALREADY_EXIST : SETKEY_DOESNT_EXIST, &link);\n        notifyKeyspaceEvent(NOTIFY_RATE_LIMIT,\"gcra\",key,c->db->id);\n\n        /* The key implicitly sets its own expiry time (which is basically the\n         * TaT after which time the value is no longer of any use). 
That way even\n * if only one GCRA command is called on a key it will automatically\n * expire after reaching its TaT without the user needing to explicitly call\n * DEL on it.\n * These keys are expected to be numerous and short-lived, hence the\n * decision to keep the implicit expiry.\n * NOTE: the idea is the same as in redis-cell. */\n        long long when = new_tat_us / 1000;\n        kv = setExpireByLink(c, c->db, key->ptr, when, link);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",key,c->db->id);\n\n        /* Replicating the command directly would mess up TaT as we use\n         * commandTimeSnapshot. We instead rewrite the command as GCRASETVALUE\n         * with the appropriate TaT. */\n        robj *gcrasetvalue = createStringObject(\"GCRASETVALUE\", 12);\n        robj *newtatstr = createStringObjectFromLongLong(new_tat_us);\n        rewriteClientCommandVector(c, 3, gcrasetvalue, key, newtatstr);\n        decrRefCount(gcrasetvalue);\n        decrRefCount(newtatstr);\n\n        server.dirty++;\n    }\n\n    long long next_us = variance_us - ttl_us;\n    if (next_us > -emission_interval_us) {\n        remaining = next_us / emission_interval_us;\n    }\n    reset_after_s = ceil(ttl_us / 1000000.);\n\n    addReplyArrayLen(c, 5);\n    addReply(c, limited ? shared.cone : shared.czero);\n    addReplyLongLong(c, max_burst);\n    addReplyLongLong(c, remaining);\n    addReplyLongLong(c, retry_after_s);\n    addReplyLongLong(c, reset_after_s);\n}\n\n/* GCRASETVALUE key tat\n *\n * Internal command used during AOF rewrite to record a GCRA TaT value. The GCRA\n * command is also rewritten as GCRASETVALUE for replication since GCRA uses\n * commandTimeSnapshot. 
*/\nvoid gcraSetValueCommand(client *c) {\n    robj *key = c->argv[1];\n    robj *tat = c->argv[2];\n    long long when;\n\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db, key, &link);\n    if (checkType(c, kv, OBJ_GCRA)) return;\n\n    if (getLongLongFromObjectOrReply(c, tat, &when, \"Invalid TaT value\") == C_ERR) {\n        return;\n    }\n    if (when < 0) {\n        addReplyError(c, \"Invalid negative TaT value\");\n        return;\n    }\n\n    robj *tatobj = createGCRAObject(when);\n    setKeyByLink(c, c->db, key, &tatobj, kv ? SETKEY_ALREADY_EXIST : SETKEY_DOESNT_EXIST, &link);\n    notifyKeyspaceEvent(NOTIFY_RATE_LIMIT,\"gcra\",key,c->db->id);\n\n    /* Just like the base GCRA command we set the expire time of the key implicitly. */\n    long long when_ms = when / 1000;\n    kv = setExpireByLink(c, c->db, key->ptr, when_ms, link);\n    notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",key,c->db->id);\n    server.dirty++;\n\n    addReply(c, shared.ok);\n}\n\nrobj *gcraDup(robj *o) {\n    long long val;\n    getLongLongFromGCRAObject(o, &val);\n    return createGCRAObject(val);\n}\n"
  },
  {
    "path": "src/geo.c",
    "content": "/*\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>.\n * Copyright (c) 2015-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"geo.h\"\n#include \"geohash_helper.h\"\n#include \"debugmacro.h\"\n#include \"pqsort.h\"\n\n/* Things exported from t_zset.c only for geo.c, since it is the only other\n * part of Redis that requires close zset introspection. 
*/\nunsigned char *zzlFirstInRange(unsigned char *zl, zrangespec *range);\nint zslValueLteMax(double value, zrangespec *spec);\n\n/* ====================================================================\n * This file implements the following commands:\n *\n *   - geoadd - add coordinates for value to geoset\n *   - georadius - search radius by coordinates in geoset\n *   - georadiusbymember - search radius based on geoset member position\n * ==================================================================== */\n\n/* ====================================================================\n * geoArray implementation\n * ==================================================================== */\n\n/* Create a new array of geoPoints. */\ngeoArray *geoArrayCreate(void) {\n    geoArray *ga = zmalloc(sizeof(*ga));\n    /* It gets allocated on first geoArrayAppend() call. */\n    ga->array = NULL;\n    ga->buckets = 0;\n    ga->used = 0;\n    return ga;\n}\n\n/* Add and populate with data a new entry to the geoArray. */\ngeoPoint *geoArrayAppend(geoArray *ga, double *xy, double dist,\n                         double score, char *member)\n{\n    if (ga->used == ga->buckets) {\n        ga->buckets = (ga->buckets == 0) ? 8 : ga->buckets*2;\n        ga->array = zrealloc(ga->array,sizeof(geoPoint)*ga->buckets);\n    }\n    geoPoint *gp = ga->array+ga->used;\n    gp->longitude = xy[0];\n    gp->latitude = xy[1];\n    gp->dist = dist;\n    gp->member = member;\n    gp->score = score;\n    ga->used++;\n    return gp;\n}\n\n/* Destroy a geoArray created with geoArrayCreate(). 
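Typical lifecycle\n * (illustrative sketch):\n *\n *   geoArray *ga = geoArrayCreate();\n *   ... append matches with geoArrayAppend(ga, xy, dist, score, member) ...\n *   geoArrayFree(ga);\n * 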
*/\nvoid geoArrayFree(geoArray *ga) {\n    size_t i;\n    for (i = 0; i < ga->used; i++) sdsfree(ga->array[i].member);\n    zfree(ga->array);\n    zfree(ga);\n}\n\n/* ====================================================================\n * Helpers\n * ==================================================================== */\nint decodeGeohash(double bits, double *xy) {\n    GeoHashBits hash = { .bits = (uint64_t)bits, .step = GEO_STEP_MAX };\n    return geohashDecodeToLongLatWGS84(hash, xy);\n}\n\n/* Input Argument Helper */\n/* Take a pointer to the longitude arg then use the next arg for latitude.\n * On parse error C_ERR is returned, otherwise C_OK. */\nint extractLongLatOrReply(client *c, robj **argv, double *xy) {\n    int i;\n    for (i = 0; i < 2; i++) {\n        if (getDoubleFromObjectOrReply(c, argv[i], xy + i, NULL) !=\n            C_OK) {\n            return C_ERR;\n        }\n    }\n    if (xy[0] < GEO_LONG_MIN || xy[0] > GEO_LONG_MAX ||\n        xy[1] < GEO_LAT_MIN  || xy[1] > GEO_LAT_MAX) {\n        addReplyErrorFormat(c,\n            \"invalid longitude,latitude pair %f,%f\",xy[0],xy[1]);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Input Argument Helper */\n/* Decode lat/long from a zset member's score.\n * Returns C_OK on successful decoding, otherwise C_ERR is returned. */\nint longLatFromMember(robj *zobj, robj *member, double *xy) {\n    double score = 0;\n\n    if (zsetScore(zobj, member->ptr, &score) == C_ERR) return C_ERR;\n    if (!decodeGeohash(score, xy)) return C_ERR;\n    return C_OK;\n}\n\n/* Check that the unit argument matches one of the known units, and returns\n * the conversion factor to meters (you need to divide meters by the conversion\n * factor to convert to the right unit).\n *\n * If the unit is not valid, an error is reported to the client, and a value\n * less than zero is returned. 
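For example (illustrative):\n * \"km\" returns 1000, so a distance of 2500 meters divided by 1000\n * gives 2.5 km. 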
*/\ndouble extractUnitOrReply(client *c, robj *unit) {\n    char *u = unit->ptr;\n\n    if (!strcasecmp(u, \"m\")) {\n        return 1;\n    } else if (!strcasecmp(u, \"km\")) {\n        return 1000;\n    } else if (!strcasecmp(u, \"ft\")) {\n        return 0.3048;\n    } else if (!strcasecmp(u, \"mi\")) {\n        return 1609.34;\n    } else {\n        addReplyError(c,\n            \"unsupported unit provided. please use M, KM, FT, MI\");\n        return -1;\n    }\n}\n\n/* Input Argument Helper.\n * Extract the distance from the specified two arguments starting at 'argv'\n * that should be in the form: <number> <unit>. Returns C_OK on success or\n * C_ERR on failure.\n * *conversion is populated with the coefficient to use in order to convert meters to the unit.*/\nint extractDistanceOrReply(client *c, robj **argv,\n                              double *conversion, double *radius) {\n    double distance;\n    if (getDoubleFromObjectOrReply(c, argv[0], &distance,\n                                   \"need numeric radius\") != C_OK) {\n        return C_ERR;\n    }\n\n    if (distance < 0) {\n        addReplyError(c,\"radius cannot be negative\");\n        return C_ERR;\n    }\n    if (radius) *radius = distance;\n\n    double to_meters = extractUnitOrReply(c,argv[1]);\n    if (to_meters < 0) {\n        return C_ERR;\n    }\n\n    if (conversion) *conversion = to_meters;\n    return C_OK;\n}\n\n/* Input Argument Helper.\n * Extract height and width from the specified three arguments starting at 'argv'\n * that should be in the form: <number> <number> <unit>. Returns C_OK on success\n * or C_ERR on failure.\n * *conversion is populated with the coefficient to use in order to convert meters to the unit.*/\nint extractBoxOrReply(client *c, robj **argv, double *conversion,\n                         double *width, double *height) {\n    double h, w;\n    if ((getDoubleFromObjectOrReply(c, argv[0], &w, \"need numeric width\") != C_OK) ||\n        
(getDoubleFromObjectOrReply(c, argv[1], &h, \"need numeric height\") != C_OK)) {\n        return C_ERR;\n    }\n\n    if (h < 0 || w < 0) {\n        addReplyError(c, \"height or width cannot be negative\");\n        return C_ERR;\n    }\n    if (height) *height = h;\n    if (width) *width = w;\n\n    double to_meters = extractUnitOrReply(c,argv[2]);\n    if (to_meters < 0) {\n        return C_ERR;\n    }\n\n    if (conversion) *conversion = to_meters;\n    return C_OK;\n}\n\n/* The default addReplyDouble has too much accuracy.  We use this\n * for returning location distances. \"5.2145 meters away\" is nicer\n * than \"5.2144992818115 meters away.\" We provide 4 digits after the dot\n * so that the returned value is decently accurate even when the unit is\n * the kilometer. */\nvoid addReplyDoubleDistance(client *c, double d) {\n    char dbuf[128];\n    const int dlen = fixedpoint_d2string(dbuf, sizeof(dbuf), d, 4);\n    addReplyBulkCBuffer(c, dbuf, dlen);\n}\n\n/* Helper function for geoGetPointsInRange(): given a sorted set score\n * representing a point, and a GeoShape, checks if the point is within the search area.\n *\n * shape: the search area (circle or rectangle)\n * score: the encoded version of lat,long\n * xy: output variable, the decoded lat,long\n * distance: output variable, the distance between the center of the shape and the point\n *\n * Return values:\n *\n * The return value is C_OK if the point is within search area, or C_ERR if it is outside.\n * \"*xy\" is populated with the decoded lat,long.\n * \"*distance\" is populated with the distance between the center of the shape and the point.\n */\nint geoWithinShape(GeoShape *shape, double score, double *xy, double *distance) {\n    if (!decodeGeohash(score,xy)) return C_ERR; /* Can't decode. */\n    /* Note that geohashGetDistanceIfInRadiusWGS84() takes arguments in\n     * reverse order: longitude first, latitude later. 
*/\n    if (shape->type == CIRCULAR_TYPE) {\n        if (!geohashGetDistanceIfInRadiusWGS84(shape->xy[0], shape->xy[1], xy[0], xy[1],\n                                               shape->t.radius*shape->conversion, distance))\n            return C_ERR;\n    } else if (shape->type == RECTANGLE_TYPE) {\n        if (!geohashGetDistanceIfInRectangle(shape->t.r.width * shape->conversion,\n                                             shape->t.r.height * shape->conversion,\n                                             shape->xy[0], shape->xy[1], xy[0], xy[1], distance))\n            return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Query a Redis sorted set to extract all the elements between 'min' and\n * 'max', appending them into the array of geoPoint structures 'geoArray'.\n * The function returns the number of elements added to the array.\n *\n * Elements which are farther than 'radius' from the specified 'x' and 'y'\n * coordinates are not included.\n *\n * The ability of this function to append to an existing set of points is\n * important for good performance because querying by radius is performed\n * using multiple queries to the sorted set, that we later need to sort\n * via qsort. Similarly we need to be able to reject points outside the search\n * radius area ASAP in order to avoid allocating and processing more points than needed. 
*/\nint geoGetPointsInRange(robj *zobj, double min, double max, GeoShape *shape, geoArray *ga, unsigned long limit) {\n    /* minex 0 = include min in range; maxex 1 = exclude max in range */\n    /* That's: min <= val < max */\n    zrangespec range = { .min = min, .max = max, .minex = 0, .maxex = 1 };\n    size_t origincount = ga->used;\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr = NULL;\n        unsigned int vlen = 0;\n        long long vlong = 0;\n        double score = 0;\n\n        if ((eptr = zzlFirstInRange(zl, &range)) == NULL) {\n            /* Nothing exists starting at our min.  No results. */\n            return 0;\n        }\n\n        sptr = lpNext(zl, eptr);\n        while (eptr) {\n            double xy[2];\n            double distance = 0;\n            score = zzlGetScore(sptr);\n\n            /* If we fell out of range, break. */\n            if (!zslValueLteMax(score, &range))\n                break;\n\n            vstr = lpGetValue(eptr, &vlen, &vlong);\n            if (geoWithinShape(shape, score, xy, &distance) == C_OK) {\n                /* Append the new element. */\n                char *member = (vstr == NULL) ? sdsfromlonglong(vlong) : sdsnewlen(vstr, vlen);\n                geoArrayAppend(ga, xy, distance, score, member);\n            }\n            if (ga->used && limit && ga->used >= limit) break;\n            zzlNext(zl, &eptr, &sptr);\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n\n        if ((ln = zslNthInRange(zsl, &range, 0, NULL)) == NULL) {\n            /* Nothing exists starting at our min.  No results. */\n            return 0;\n        }\n\n        while (ln) {\n            double xy[2];\n            double distance = 0;\n            /* Abort when the node is no longer in range. 
*/\n            if (!zslValueLteMax(ln->score, &range))\n                break;\n            if (geoWithinShape(shape, ln->score, xy, &distance) == C_OK) {\n                /* Append the new element. */\n                sds ele = zslGetNodeElement(ln);\n                geoArrayAppend(ga, xy, distance, ln->score, sdsdup(ele));\n            }\n            if (ga->used && limit && ga->used >= limit) break;\n            ln = ln->level[0].forward;\n        }\n    }\n    return ga->used - origincount;\n}\n\n/* Compute the sorted set scores min (inclusive), max (exclusive) we should\n * query in order to retrieve all the elements inside the specified area\n * 'hash'. The two scores are returned by reference in *min and *max. */\nvoid scoresOfGeoHashBox(GeoHashBits hash, GeoHashFix52Bits *min, GeoHashFix52Bits *max) {\n    /* We want to compute the sorted set scores that will include all the\n     * elements inside the specified Geohash 'hash', which has as many\n     * bits as specified by hash.step * 2.\n     *\n     * So if step is, for example, 3, and the hash value in binary\n     * is 101010, since our score is 52 bits we want every element which\n     * is in binary: 101010?????????????????????????????????????????????\n     * Where ? can be 0 or 1.\n     *\n     * To get the min score we just use the initial hash value left\n     * shifted enough to get the 52 bit value. Later we increment the\n     * 6 bit prefix (see the hash.bits++ statement), and get the new\n     * prefix: 101011, which we align again to 52 bits to get the maximum\n     * value (which is excluded from the search). 
So we get everything\n     * between the two following scores (represented in binary):\n     *\n     * 1010100000000000000000000000000000000000000000000000 (included)\n     * and\n     * 1010110000000000000000000000000000000000000000000000 (excluded).\n     */\n    *min = geohashAlign52Bits(hash);\n    hash.bits++;\n    *max = geohashAlign52Bits(hash);\n}\n\n/* Obtain all members between the min/max of this geohash bounding box.\n * Populate a geoArray of GeoPoints by calling geoGetPointsInRange().\n * Return the number of points added to the array. */\nint membersOfGeoHashBox(robj *zobj, GeoHashBits hash, geoArray *ga, GeoShape *shape, unsigned long limit) {\n    GeoHashFix52Bits min, max;\n\n    scoresOfGeoHashBox(hash,&min,&max);\n    return geoGetPointsInRange(zobj, min, max, shape, ga, limit);\n}\n\n/* Search all eight neighbors + self geohash box */\nint membersOfAllNeighbors(robj *zobj, const GeoHashRadius *n, GeoShape *shape, geoArray *ga, unsigned long limit) {\n    GeoHashBits neighbors[9];\n    unsigned int i, count = 0, last_processed = 0;\n    int debugmsg = 0;\n\n    neighbors[0] = n->hash;\n    neighbors[1] = n->neighbors.north;\n    neighbors[2] = n->neighbors.south;\n    neighbors[3] = n->neighbors.east;\n    neighbors[4] = n->neighbors.west;\n    neighbors[5] = n->neighbors.north_east;\n    neighbors[6] = n->neighbors.north_west;\n    neighbors[7] = n->neighbors.south_east;\n    neighbors[8] = n->neighbors.south_west;\n\n    /* For each neighbor (*and* our own hashbox), get all the matching\n     * members and add them to the potential result list. */\n    for (i = 0; i < sizeof(neighbors) / sizeof(*neighbors); i++) {\n        if (HASHISZERO(neighbors[i])) {\n            if (debugmsg) D(\"neighbors[%d] is zero\",i);\n            continue;\n        }\n\n        /* Debugging info. 
*/\n        if (debugmsg) {\n            GeoHashRange long_range, lat_range;\n            geohashGetCoordRange(&long_range,&lat_range);\n            GeoHashArea myarea = {{0}};\n            geohashDecode(long_range, lat_range, neighbors[i], &myarea);\n\n            /* Dump center square. */\n            D(\"neighbors[%d]:\\n\",i);\n            D(\"area.longitude.min: %f\\n\", myarea.longitude.min);\n            D(\"area.longitude.max: %f\\n\", myarea.longitude.max);\n            D(\"area.latitude.min: %f\\n\", myarea.latitude.min);\n            D(\"area.latitude.max: %f\\n\", myarea.latitude.max);\n            D(\"\\n\");\n        }\n\n        /* When a huge Radius (in the 5000 km range or more) is used,\n         * adjacent neighbors can be the same, leading to duplicated\n         * elements. Skip every range which is the same as the one\n         * processed previously. */\n        if (last_processed &&\n            neighbors[i].bits == neighbors[last_processed].bits &&\n            neighbors[i].step == neighbors[last_processed].step)\n        {\n            if (debugmsg)\n                D(\"Skipping processing of %d, same as previous\\n\",i);\n            continue;\n        }\n        if (ga->used && limit && ga->used >= limit) break;\n        count += membersOfGeoHashBox(zobj, neighbors[i], ga, shape, limit);\n        last_processed = i;\n    }\n    return count;\n}\n\n/* Sort comparators for qsort() */\nstatic int sort_gp_asc(const void *a, const void *b) {\n    const struct geoPoint *gpa = a, *gpb = b;\n    /* We can't do adist - bdist because they are doubles and\n     * the comparator returns an int. 
*/\n    if (gpa->dist > gpb->dist)\n        return 1;\n    else if (gpa->dist == gpb->dist)\n        return 0;\n    else\n        return -1;\n}\n\nstatic int sort_gp_desc(const void *a, const void *b) {\n    return -sort_gp_asc(a, b);\n}\n\n/* ====================================================================\n * Commands\n * ==================================================================== */\n\n/* GEOADD key [CH] [NX|XX] long lat name [long2 lat2 name2 ... longN latN nameN] */\nvoid geoaddCommand(client *c) {\n    int xx = 0, nx = 0, longidx = 2;\n    int i;\n\n    /* Parse options. At the end 'longidx' is set to the argument position\n     * of the longitude of the first element. */\n    while (longidx < c->argc) {\n        char *opt = c->argv[longidx]->ptr;\n        if (!strcasecmp(opt,\"nx\")) nx = 1;\n        else if (!strcasecmp(opt,\"xx\")) xx = 1;\n        else if (!strcasecmp(opt,\"ch\")) { /* Handle in zaddCommand. */ }\n        else break;\n        longidx++;\n    }\n\n    if ((c->argc - longidx) % 3 || (xx && nx)) {\n        /* Need a multiple of three coordinate arguments if we got this far,\n         * and NX and XX are mutually exclusive. */\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Set up the vector for calling ZADD. */\n    int elements = (c->argc - longidx) / 3;\n    int argc = longidx+elements*2; /* ZADD key [CH] [NX|XX] score ele ... */\n    robj **argv = zcalloc(argc*sizeof(robj*));\n    argv[0] = createRawStringObject(\"zadd\",4);\n    for (i = 1; i < longidx; i++) {\n        argv[i] = c->argv[i];\n        incrRefCount(argv[i]);\n    }\n\n    /* Create the argument vector to call ZADD in order to add all\n     * the score,value pairs to the requested zset, where score is actually\n     * an encoded version of lat,long. 
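For example (score value\n     * illustrative): GEOADD Sicily 13.361389 38.115556 \"Palermo\" becomes\n     * ZADD Sicily 3479099956230698 \"Palermo\". 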
*/\n    for (i = 0; i < elements; i++) {\n        double xy[2];\n\n        if (extractLongLatOrReply(c, (c->argv+longidx)+(i*3),xy) == C_ERR) {\n            for (i = 0; i < argc; i++)\n                if (argv[i]) decrRefCount(argv[i]);\n            zfree(argv);\n            return;\n        }\n\n        /* Turn the coordinates into the score of the element. */\n        GeoHashBits hash;\n        geohashEncodeWGS84(xy[0], xy[1], GEO_STEP_MAX, &hash);\n        GeoHashFix52Bits bits = geohashAlign52Bits(hash);\n        robj *score = createStringObjectFromLongLongWithSds(bits);\n        robj *val = c->argv[longidx + i * 3 + 2];\n        argv[longidx+i*2] = score;\n        argv[longidx+1+i*2] = val;\n        incrRefCount(val);\n    }\n\n    /* Finally call ZADD that will do the work for us. */\n    replaceClientCommandVector(c,argc,argv);\n    zaddCommand(c);\n}\n\n#define SORT_NONE 0\n#define SORT_ASC 1\n#define SORT_DESC 2\n\n#define RADIUS_COORDS (1<<0)    /* Search around coordinates. */\n#define RADIUS_MEMBER (1<<1)    /* Search around member. */\n#define RADIUS_NOSTORE (1<<2)   /* Do not accept STORE/STOREDIST option. */\n#define GEOSEARCH (1<<3)        /* GEOSEARCH command variant (different arguments supported) */\n#define GEOSEARCHSTORE (1<<4)   /* GEOSEARCHSTORE accepts only the STOREDIST option */\n\n/* GEORADIUS key x y radius unit [WITHDIST] [WITHHASH] [WITHCOORD] [ASC|DESC]\n *                               [COUNT count [ANY]] [STORE key|STOREDIST key]\n * GEORADIUSBYMEMBER key member radius unit ... 
options ...\n * GEOSEARCH key [FROMMEMBER member] [FROMLONLAT long lat] [BYRADIUS radius unit]\n *               [BYBOX width height unit] [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC|DESC]\n * GEOSEARCHSTORE dest_key src_key [FROMMEMBER member] [FROMLONLAT long lat] [BYRADIUS radius unit]\n *               [BYBOX width height unit] [COUNT count [ANY]] [ASC|DESC] [STOREDIST]\n */\nvoid georadiusGeneric(client *c, int srcKeyIndex, int flags) {\n    robj *storekey = NULL;\n    int storedist = 0; /* 0 for STORE, 1 for STOREDIST. */\n\n    /* Look up the requested zset */\n    kvobj *zobj = lookupKeyRead(c->db, c->argv[srcKeyIndex]);\n    if (checkType(c, zobj, OBJ_ZSET)) return;\n\n    /* Find long/lat to use for radius or box search based on inquiry type */\n    int base_args;\n    GeoShape shape = {0};\n    if (flags & RADIUS_COORDS) {\n        /* GEORADIUS or GEORADIUS_RO */\n        base_args = 6;\n        shape.type = CIRCULAR_TYPE;\n        if (extractLongLatOrReply(c, c->argv + 2, shape.xy) == C_ERR) return;\n        if (extractDistanceOrReply(c, c->argv+base_args-2, &shape.conversion, &shape.t.radius) != C_OK) return;\n    } else if ((flags & RADIUS_MEMBER) && !zobj) {\n        /* We don't have a source key, but we need to proceed with argument\n         * parsing, so we know which reply to use depending on the STORE flag. 
*/\n        base_args = 5;\n    } else if (flags & RADIUS_MEMBER) {\n        /* GEORADIUSBYMEMBER or GEORADIUSBYMEMBER_RO */\n        base_args = 5;\n        shape.type = CIRCULAR_TYPE;\n        robj *member = c->argv[2];\n        if (longLatFromMember(zobj, member, shape.xy) == C_ERR) {\n            addReplyError(c, \"could not decode requested zset member\");\n            return;\n        }\n        if (extractDistanceOrReply(c, c->argv+base_args-2, &shape.conversion, &shape.t.radius) != C_OK) return;\n    } else if (flags & GEOSEARCH) {\n        /* GEOSEARCH or GEOSEARCHSTORE */\n        base_args = 2;\n        if (flags & GEOSEARCHSTORE) {\n            base_args = 3;\n            storekey = c->argv[1];\n        }\n    } else {\n        addReplyError(c, \"Unknown georadius search type\");\n        return;\n    }\n\n    /* Discover and populate all optional parameters. */\n    int withdist = 0, withhash = 0, withcoords = 0;\n    int frommember = 0, fromloc = 0, byradius = 0, bybox = 0;\n    int sort = SORT_NONE;\n    int any = 0; /* any=1 means a limited search, stop as soon as enough results were found. */\n    long long count = 0;  /* Max number of results to return. 0 means unlimited. 
*/\n    if (c->argc > base_args) {\n        int remaining = c->argc - base_args;\n        for (int i = 0; i < remaining; i++) {\n            char *arg = c->argv[base_args + i]->ptr;\n            if (!strcasecmp(arg, \"withdist\")) {\n                withdist = 1;\n            } else if (!strcasecmp(arg, \"withhash\")) {\n                withhash = 1;\n            } else if (!strcasecmp(arg, \"withcoord\")) {\n                withcoords = 1;\n            } else if (!strcasecmp(arg, \"any\")) {\n                any = 1;\n            } else if (!strcasecmp(arg, \"asc\")) {\n                sort = SORT_ASC;\n            } else if (!strcasecmp(arg, \"desc\")) {\n                sort = SORT_DESC;\n            } else if (!strcasecmp(arg, \"count\") && (i+1) < remaining) {\n                if (getLongLongFromObjectOrReply(c, c->argv[base_args+i+1],\n                                                 &count, NULL) != C_OK) return;\n                if (count <= 0) {\n                    addReplyError(c,\"COUNT must be > 0\");\n                    return;\n                }\n                i++;\n            } else if (!strcasecmp(arg, \"store\") &&\n                       (i+1) < remaining &&\n                       !(flags & RADIUS_NOSTORE) &&\n                       !(flags & GEOSEARCH))\n            {\n                storekey = c->argv[base_args+i+1];\n                storedist = 0;\n                i++;\n            } else if (!strcasecmp(arg, \"storedist\") &&\n                       (i+1) < remaining &&\n                       !(flags & RADIUS_NOSTORE) &&\n                       !(flags & GEOSEARCH))\n            {\n                storekey = c->argv[base_args+i+1];\n                storedist = 1;\n                i++;\n            } else if (!strcasecmp(arg, \"storedist\") &&\n                       (flags & GEOSEARCH) &&\n                       (flags & GEOSEARCHSTORE))\n            {\n                storedist = 1;\n            } else if (!strcasecmp(arg, 
\"frommember\") &&\n                      (i+1) < remaining &&\n                      flags & GEOSEARCH &&\n                      !fromloc)\n            {\n                /* No source key, proceed with argument parsing and return an error when done. */\n                if (zobj == NULL) {\n                    frommember = 1;\n                    i++;\n                    continue;\n                }\n\n                if (longLatFromMember(zobj, c->argv[base_args+i+1], shape.xy) == C_ERR) {\n                    addReplyError(c, \"could not decode requested zset member\");\n                    return;\n                }\n                frommember = 1;\n                i++;\n            } else if (!strcasecmp(arg, \"fromlonlat\") &&\n                       (i+2) < remaining &&\n                       flags & GEOSEARCH &&\n                       !frommember)\n            {\n                if (extractLongLatOrReply(c, c->argv+base_args+i+1, shape.xy) == C_ERR) return;\n                fromloc = 1;\n                i += 2;\n            } else if (!strcasecmp(arg, \"byradius\") &&\n                       (i+2) < remaining &&\n                       flags & GEOSEARCH &&\n                       !bybox)\n            {\n                if (extractDistanceOrReply(c, c->argv+base_args+i+1, &shape.conversion, &shape.t.radius) != C_OK)\n                    return;\n                shape.type = CIRCULAR_TYPE;\n                byradius = 1;\n                i += 2;\n            } else if (!strcasecmp(arg, \"bybox\") &&\n                       (i+3) < remaining &&\n                       flags & GEOSEARCH &&\n                       !byradius)\n            {\n                if (extractBoxOrReply(c, c->argv+base_args+i+1, &shape.conversion, &shape.t.r.width,\n                        &shape.t.r.height) != C_OK) return;\n                shape.type = RECTANGLE_TYPE;\n                bybox = 1;\n                i += 3;\n            } else {\n                
addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n    }\n\n    /* Trap options not compatible with STORE and STOREDIST. */\n    if (storekey && (withdist || withhash || withcoords)) {\n        addReplyErrorFormat(c,\n            \"%s is not compatible with WITHDIST, WITHHASH and WITHCOORD options\",\n            flags & GEOSEARCHSTORE? \"GEOSEARCHSTORE\": \"STORE option in GEORADIUS\");\n        return;\n    }\n\n    if ((flags & GEOSEARCH) && !(frommember || fromloc)) {\n        addReplyErrorFormat(c,\n            \"exactly one of FROMMEMBER or FROMLONLAT can be specified for %s\",\n            (char *)c->argv[0]->ptr);\n        return;\n    }\n\n    if ((flags & GEOSEARCH) && !(byradius || bybox)) {\n        addReplyErrorFormat(c,\n            \"exactly one of BYRADIUS and BYBOX can be specified for %s\",\n            (char *)c->argv[0]->ptr);\n        return;\n    }\n\n    if (any && !count) {\n        addReplyError(c, \"the ANY argument requires COUNT argument\");\n        return;\n    }\n\n    /* Return ASAP when src key does not exist. */\n    if (zobj == NULL) {\n        if (storekey) {\n            /* store key is not NULL, try to delete it and return 0. */\n            if (dbDelete(c->db, storekey)) {\n                keyModified(c, c->db, storekey, NULL, 1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", storekey, c->db->id);\n                server.dirty++;\n            }\n            addReply(c, shared.czero);\n        } else {\n            /* Otherwise we return an empty array. */\n            addReply(c, shared.emptyarray);\n        }\n        return;\n    }\n\n    /* COUNT without ordering does not make much sense (we need to\n     * sort in order to return the closest N entries),\n     * force ASC ordering if COUNT was specified but no sorting was\n     * requested. Note that this is not needed for ANY option. 
*/\n    if (count != 0 && sort == SORT_NONE && !any) sort = SORT_ASC;\n\n    /* Get all neighbor geohash boxes for our radius search */\n    GeoHashRadius georadius = geohashCalculateAreasByShapeWGS84(&shape);\n\n    /* Search the zset for all matching points */\n    geoArray *ga = geoArrayCreate();\n    membersOfAllNeighbors(zobj, &georadius, &shape, ga, any ? count : 0);\n\n    /* If no matching results, the user gets an empty reply. */\n    if (ga->used == 0 && storekey == NULL) {\n        addReply(c,shared.emptyarray);\n        geoArrayFree(ga);\n        return;\n    }\n\n    long result_length = ga->used;\n    long returned_items = (count == 0 || result_length < count) ?\n                          result_length : count;\n    long option_length = 0;\n\n    /* Process [optional] requested sorting */\n    if (sort != SORT_NONE) {\n        int (*sort_gp_callback)(const void *a, const void *b) = NULL;\n        if (sort == SORT_ASC) {\n            sort_gp_callback = sort_gp_asc;\n        } else if (sort == SORT_DESC) {\n            sort_gp_callback = sort_gp_desc;\n        }\n\n        if (returned_items == result_length) {\n            qsort(ga->array, result_length, sizeof(geoPoint), sort_gp_callback);\n        } else {\n            pqsort(ga->array, result_length, sizeof(geoPoint), sort_gp_callback,\n                0, (returned_items - 1));\n        }\n    }\n\n    if (storekey == NULL) {\n        /* No target key, return results to user. */\n\n        /* Our options are self-contained nested multibulk replies, so we\n         * only need to track how many of those nested replies we return. */\n        if (withdist)\n            option_length++;\n\n        if (withcoords)\n            option_length++;\n\n        if (withhash)\n            option_length++;\n\n        /* The array len we send is exactly returned_items. 
The result is\n         * either all strings of just zset members  *or* a nested multi-bulk\n         * reply containing the zset member string _and_ all the additional\n         * options the user enabled for this request. */\n        addReplyArrayLen(c, returned_items);\n\n        /* Finally send results back to the caller */\n        int i;\n        for (i = 0; i < returned_items; i++) {\n            geoPoint *gp = ga->array+i;\n            gp->dist /= shape.conversion; /* Fix according to unit. */\n\n            /* If we have options in option_length, return each sub-result\n             * as a nested multi-bulk.  Add 1 to account for result value\n             * itself. */\n            if (option_length)\n                addReplyArrayLen(c, option_length + 1);\n\n            addReplyBulkSds(c,gp->member);\n            gp->member = NULL;\n\n            if (withdist)\n                addReplyDoubleDistance(c, gp->dist);\n\n            if (withhash)\n                addReplyLongLong(c, gp->score);\n\n            if (withcoords) {\n                addReplyArrayLen(c, 2);\n                addReplyDouble(c,gp->longitude);\n                addReplyDouble(c,gp->latitude);\n            }\n        }\n    } else {\n        /* Target key, create a sorted set with the results. */\n        robj *zobj;\n        zset *zs;\n        int i;\n        size_t maxelelen = 0, totelelen = 0;\n\n        if (returned_items) {\n            zobj = createZsetObject();\n            zs = zobj->ptr;\n        }\n\n        for (i = 0; i < returned_items; i++) {\n            zskiplistNode *znode;\n            geoPoint *gp = ga->array+i;\n            gp->dist /= shape.conversion; /* Fix according to unit. */\n            double score = storedist ? 
gp->dist : gp->score;\n            size_t elelen = sdslen(gp->member);\n\n            if (maxelelen < elelen) maxelelen = elelen;\n            totelelen += elelen;\n            znode = zslInsert(zs->zsl,score,gp->member);\n            serverAssert(dictAdd(zs->dict, znode, NULL) == DICT_OK);\n            sdsfree(gp->member); /* zslInsert copies the sds, so free the original */\n            gp->member = NULL;\n        }\n\n        if (returned_items) {\n            zsetConvertToListpackIfNeeded(zobj,maxelelen,totelelen);\n            setKey(c,c->db,storekey,&zobj,0);\n            notifyKeyspaceEvent(NOTIFY_ZSET,flags & GEOSEARCH ? \"geosearchstore\" : \"georadiusstore\",storekey,\n                                c->db->id);\n            server.dirty += returned_items;\n        } else if (dbDelete(c->db,storekey)) {\n            keyModified(c,c->db,storekey,NULL,1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",storekey,c->db->id);\n            server.dirty++;\n        }\n        addReplyLongLong(c, returned_items);\n    }\n    geoArrayFree(ga);\n}\n\n/* GEORADIUS wrapper function. */\nvoid georadiusCommand(client *c) {\n    georadiusGeneric(c, 1, RADIUS_COORDS);\n}\n\n/* GEORADIUSBYMEMBER wrapper function. */\nvoid georadiusbymemberCommand(client *c) {\n    georadiusGeneric(c, 1, RADIUS_MEMBER);\n}\n\n/* GEORADIUS_RO wrapper function. */\nvoid georadiusroCommand(client *c) {\n    georadiusGeneric(c, 1, RADIUS_COORDS|RADIUS_NOSTORE);\n}\n\n/* GEORADIUSBYMEMBER_RO wrapper function. */\nvoid georadiusbymemberroCommand(client *c) {\n    georadiusGeneric(c, 1, RADIUS_MEMBER|RADIUS_NOSTORE);\n}\n\nvoid geosearchCommand(client *c) {\n    georadiusGeneric(c, 1, GEOSEARCH);\n}\n\nvoid geosearchstoreCommand(client *c) {\n    georadiusGeneric(c, 2, GEOSEARCH|GEOSEARCHSTORE);\n}\n\n/* GEOHASH key ele1 ele2 ... eleN\n *\n * Returns an array with an 11-character geohash representation of the\n * position of the specified elements. 
*/\nvoid geohashCommand(client *c) {\n    char *geoalphabet= \"0123456789bcdefghjkmnpqrstuvwxyz\";\n    int j;\n\n    /* Look up the requested zset */\n    kvobj *zobj = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c, zobj, OBJ_ZSET)) return;\n\n    /* Geohash elements one after the other, using a null bulk reply for\n     * missing elements. */\n    addReplyArrayLen(c,c->argc-2);\n    for (j = 2; j < c->argc; j++) {\n        double score;\n        if (!zobj || zsetScore(zobj, c->argv[j]->ptr, &score) == C_ERR) {\n            addReplyNull(c);\n        } else {\n            /* The internal format we use for geocoding is a bit different\n             * than the standard, since we use as initial latitude range\n             * -85,85, while the normal geohashing algorithm uses -90,90.\n             * So we have to decode our position and re-encode using the\n             * standard ranges in order to output a valid geohash string. */\n\n            /* Decode... */\n            double xy[2];\n            if (!decodeGeohash(score,xy)) {\n                addReplyNull(c);\n                continue;\n            }\n\n            /* Re-encode */\n            GeoHashRange r[2];\n            GeoHashBits hash;\n            r[0].min = -180;\n            r[0].max = 180;\n            r[1].min = -90;\n            r[1].max = 90;\n            geohashEncode(&r[0],&r[1],xy[0],xy[1],26,&hash);\n\n            char buf[12];\n            int i;\n            for (i = 0; i < 11; i++) {\n                int idx;\n                if (i == 10) {\n                    /* We have just 52 bits, but the API used to output\n                     * an 11 bytes geohash. For compatibility we assume\n                     * zero. 
*/\n                    idx = 0;\n                } else {\n                    idx = (hash.bits >> (52-((i+1)*5))) & 0x1f;\n                }\n                buf[i] = geoalphabet[idx];\n            }\n            buf[11] = '\\0';\n            addReplyBulkCBuffer(c,buf,11);\n        }\n    }\n}\n\n/* GEOPOS key ele1 ele2 ... eleN\n *\n * Returns an array of two-item arrays representing the x,y position of each\n * element specified in the arguments. For missing elements NULL is returned. */\nvoid geoposCommand(client *c) {\n    int j;\n\n    /* Look up the requested zset */\n    kvobj *zobj = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c, zobj, OBJ_ZSET)) return;\n\n    /* Report elements one after the other, using a null bulk reply for\n     * missing elements. */\n    addReplyArrayLen(c,c->argc-2);\n    for (j = 2; j < c->argc; j++) {\n        double score;\n        if (!zobj || zsetScore(zobj, c->argv[j]->ptr, &score) == C_ERR) {\n            addReplyNullArray(c);\n        } else {\n            /* Decode... */\n            double xy[2];\n            if (!decodeGeohash(score,xy)) {\n                addReplyNullArray(c);\n                continue;\n            }\n            addReplyArrayLen(c,2);\n            addReplyDouble(c,xy[0]);\n            addReplyDouble(c,xy[1]);\n        }\n    }\n}\n\n/* GEODIST key ele1 ele2 [unit]\n *\n * Return the distance, in meters by default, otherwise according to \"unit\",\n * between points ele1 and ele2. If one or more elements are missing NULL\n * is returned. */\nvoid geodistCommand(client *c) {\n    double to_meter = 1;\n\n    /* Check whether a unit was specified, otherwise assume meters. 
*/\n    if (c->argc == 5) {\n        to_meter = extractUnitOrReply(c,c->argv[4]);\n        if (to_meter < 0) return;\n    } else if (c->argc > 5) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Look up the requested zset */\n    kvobj *zobj = NULL;\n    if ((zobj = lookupKeyReadOrReply(c, c->argv[1], shared.null[c->resp]))\n        == NULL || checkType(c, zobj, OBJ_ZSET)) return;\n\n    /* Get the scores. We need both otherwise NULL is returned. */\n    double score1, score2, xyxy[4];\n    if (zsetScore(zobj, c->argv[2]->ptr, &score1) == C_ERR ||\n        zsetScore(zobj, c->argv[3]->ptr, &score2) == C_ERR)\n    {\n        addReplyNull(c);\n        return;\n    }\n\n    /* Decode & compute the distance. */\n    if (!decodeGeohash(score1,xyxy) || !decodeGeohash(score2,xyxy+2))\n        addReplyNull(c);\n    else\n        addReplyDoubleDistance(c,\n            geohashGetDistance(xyxy[0],xyxy[1],xyxy[2],xyxy[3]) / to_meter);\n}\n"
  },
  {
    "path": "src/geo.h",
    "content": "#ifndef __GEO_H__\n#define __GEO_H__\n\n#include \"server.h\"\n\n/* Structures used inside geo.c in order to represent points and array of\n * points on the earth. */\ntypedef struct geoPoint {\n    double longitude;\n    double latitude;\n    double dist;\n    double score;\n    char *member;\n} geoPoint;\n\ntypedef struct geoArray {\n    struct geoPoint *array;\n    size_t buckets;\n    size_t used;\n} geoArray;\n\n#endif\n"
  },
  {
    "path": "src/geohash.c",
    "content": "/*\n * Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>.\n * Copyright (c) 2015-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *  * Redistributions of source code must retain the above copyright notice,\n *    this list of conditions and the following disclaimer.\n *  * Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *  * Neither the name of Redis nor the names of its contributors may be used\n *    to endorse or promote products derived from this software without\n *    specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS\n * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n * THE POSSIBILITY OF SUCH DAMAGE.\n */\n#include \"geohash.h\"\n\n/**\n * Hashing works like this:\n * Divide the world into 4 buckets.  
Label each one as such:\n *  -----------------\n *  |       |       |\n *  |       |       |\n *  | 0,1   | 1,1   |\n *  -----------------\n *  |       |       |\n *  |       |       |\n *  | 0,0   | 1,0   |\n *  -----------------\n */\n\n/* Interleave lower bits of x and y, so the bits of x\n * are in the even positions and bits from y in the odd;\n * x and y must initially be less than 2**32 (4294967296).\n * From:  https://graphics.stanford.edu/~seander/bithacks.html#InterleaveBMN\n */\nstatic inline uint64_t interleave64(uint32_t xlo, uint32_t ylo) {\n    static const uint64_t B[] = {0x5555555555555555ULL, 0x3333333333333333ULL,\n                                 0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,\n                                 0x0000FFFF0000FFFFULL};\n    static const unsigned int S[] = {1, 2, 4, 8, 16};\n\n    uint64_t x = xlo;\n    uint64_t y = ylo;\n\n    x = (x | (x << S[4])) & B[4];\n    y = (y | (y << S[4])) & B[4];\n\n    x = (x | (x << S[3])) & B[3];\n    y = (y | (y << S[3])) & B[3];\n\n    x = (x | (x << S[2])) & B[2];\n    y = (y | (y << S[2])) & B[2];\n\n    x = (x | (x << S[1])) & B[1];\n    y = (y | (y << S[1])) & B[1];\n\n    x = (x | (x << S[0])) & B[0];\n    y = (y | (y << S[0])) & B[0];\n\n    return x | (y << 1);\n}\n\n/* reverse the interleave process\n * derived from http://stackoverflow.com/questions/4909263\n */\nstatic inline uint64_t deinterleave64(uint64_t interleaved) {\n    static const uint64_t B[] = {0x5555555555555555ULL, 0x3333333333333333ULL,\n                                 0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,\n                                 0x0000FFFF0000FFFFULL, 0x00000000FFFFFFFFULL};\n    static const unsigned int S[] = {0, 1, 2, 4, 8, 16};\n\n    uint64_t x = interleaved;\n    uint64_t y = interleaved >> 1;\n\n    x = (x | (x >> S[0])) & B[0];\n    y = (y | (y >> S[0])) & B[0];\n\n    x = (x | (x >> S[1])) & B[1];\n    y = (y | (y >> S[1])) & B[1];\n\n    x = (x | (x >> S[2])) & B[2];\n    y = (y 
| (y >> S[2])) & B[2];\n\n    x = (x | (x >> S[3])) & B[3];\n    y = (y | (y >> S[3])) & B[3];\n\n    x = (x | (x >> S[4])) & B[4];\n    y = (y | (y >> S[4])) & B[4];\n\n    x = (x | (x >> S[5])) & B[5];\n    y = (y | (y >> S[5])) & B[5];\n\n    return x | (y << 32);\n}\n\nvoid geohashGetCoordRange(GeoHashRange *long_range, GeoHashRange *lat_range) {\n    /* These are constraints from EPSG:900913 / EPSG:3785 / OSGEO:41001 */\n    /* We can't geocode at the north/south pole. */\n    long_range->max = GEO_LONG_MAX;\n    long_range->min = GEO_LONG_MIN;\n    lat_range->max = GEO_LAT_MAX;\n    lat_range->min = GEO_LAT_MIN;\n}\n\nint geohashEncode(const GeoHashRange *long_range, const GeoHashRange *lat_range,\n                  double longitude, double latitude, uint8_t step,\n                  GeoHashBits *hash) {\n    /* Check basic arguments sanity. */\n    if (hash == NULL || step > 32 || step == 0 ||\n        RANGEPISZERO(lat_range) || RANGEPISZERO(long_range)) return 0;\n\n    /* Return an error when trying to index outside the supported\n     * constraints. 
*/\n    if (longitude > GEO_LONG_MAX || longitude < GEO_LONG_MIN ||\n        latitude > GEO_LAT_MAX || latitude < GEO_LAT_MIN) return 0;\n\n    hash->bits = 0;\n    hash->step = step;\n\n    if (latitude < lat_range->min || latitude > lat_range->max ||\n        longitude < long_range->min || longitude > long_range->max) {\n        return 0;\n    }\n\n    double lat_offset =\n        (latitude - lat_range->min) / (lat_range->max - lat_range->min);\n    double long_offset =\n        (longitude - long_range->min) / (long_range->max - long_range->min);\n\n    /* convert to fixed point based on the step size */\n    lat_offset *= (1ULL << step);\n    long_offset *= (1ULL << step);\n    hash->bits = interleave64(lat_offset, long_offset);\n    return 1;\n}\n\nint geohashEncodeType(double longitude, double latitude, uint8_t step, GeoHashBits *hash) {\n    GeoHashRange r[2] = {{0}};\n    geohashGetCoordRange(&r[0], &r[1]);\n    return geohashEncode(&r[0], &r[1], longitude, latitude, step, hash);\n}\n\nint geohashEncodeWGS84(double longitude, double latitude, uint8_t step,\n                       GeoHashBits *hash) {\n    return geohashEncodeType(longitude, latitude, step, hash);\n}\n\nint geohashDecode(const GeoHashRange long_range, const GeoHashRange lat_range,\n                   const GeoHashBits hash, GeoHashArea *area) {\n    if (HASHISZERO(hash) || NULL == area || RANGEISZERO(lat_range) ||\n        RANGEISZERO(long_range)) {\n        return 0;\n    }\n\n    area->hash = hash;\n    uint8_t step = hash.step;\n    uint64_t hash_sep = deinterleave64(hash.bits); /* hash = [LAT][LONG] */\n\n    double lat_scale = lat_range.max - lat_range.min;\n    double long_scale = long_range.max - long_range.min;\n\n    uint32_t ilato = hash_sep;       /* get lat part of deinterleaved hash */\n    uint32_t ilono = hash_sep >> 32; /* shift over to get long part of hash */\n\n    /* divide by 2**step.\n     * Then, for 0-1 coordinate, multiply times scale and add\n       to the min to get 
the absolute coordinate. */\n    area->latitude.min =\n        lat_range.min + (ilato * 1.0 / (1ull << step)) * lat_scale;\n    area->latitude.max =\n        lat_range.min + ((ilato + 1) * 1.0 / (1ull << step)) * lat_scale;\n    area->longitude.min =\n        long_range.min + (ilono * 1.0 / (1ull << step)) * long_scale;\n    area->longitude.max =\n        long_range.min + ((ilono + 1) * 1.0 / (1ull << step)) * long_scale;\n\n    return 1;\n}\n\nint geohashDecodeType(const GeoHashBits hash, GeoHashArea *area) {\n    GeoHashRange r[2] = {{0}};\n    geohashGetCoordRange(&r[0], &r[1]);\n    return geohashDecode(r[0], r[1], hash, area);\n}\n\nint geohashDecodeWGS84(const GeoHashBits hash, GeoHashArea *area) {\n    return geohashDecodeType(hash, area);\n}\n\nint geohashDecodeAreaToLongLat(const GeoHashArea *area, double *xy) {\n    if (!xy) return 0;\n    xy[0] = (area->longitude.min + area->longitude.max) / 2;\n    if (xy[0] > GEO_LONG_MAX) xy[0] = GEO_LONG_MAX;\n    if (xy[0] < GEO_LONG_MIN) xy[0] = GEO_LONG_MIN;\n    xy[1] = (area->latitude.min + area->latitude.max) / 2;\n    if (xy[1] > GEO_LAT_MAX) xy[1] = GEO_LAT_MAX;\n    if (xy[1] < GEO_LAT_MIN) xy[1] = GEO_LAT_MIN;\n    return 1;\n}\n\nint geohashDecodeToLongLatType(const GeoHashBits hash, double *xy) {\n    GeoHashArea area = {{0}};\n    if (!xy || !geohashDecodeType(hash, &area))\n        return 0;\n    return geohashDecodeAreaToLongLat(&area, xy);\n}\n\nint geohashDecodeToLongLatWGS84(const GeoHashBits hash, double *xy) {\n    return geohashDecodeToLongLatType(hash, xy);\n}\n\nstatic void geohash_move_x(GeoHashBits *hash, int8_t d) {\n    if (d == 0)\n        return;\n\n    uint64_t x = hash->bits & 0xaaaaaaaaaaaaaaaaULL;\n    uint64_t y = hash->bits & 0x5555555555555555ULL;\n\n    uint64_t zz = 0x5555555555555555ULL >> (64 - hash->step * 2);\n\n    if (d > 0) {\n        x = x + (zz + 1);\n    } else {\n        x = x | zz;\n        x = x - (zz + 1);\n    }\n\n    x &= (0xaaaaaaaaaaaaaaaaULL >> (64 - 
hash->step * 2));\n    hash->bits = (x | y);\n}\n\nstatic void geohash_move_y(GeoHashBits *hash, int8_t d) {\n    if (d == 0)\n        return;\n\n    uint64_t x = hash->bits & 0xaaaaaaaaaaaaaaaaULL;\n    uint64_t y = hash->bits & 0x5555555555555555ULL;\n\n    uint64_t zz = 0xaaaaaaaaaaaaaaaaULL >> (64 - hash->step * 2);\n    if (d > 0) {\n        y = y + (zz + 1);\n    } else {\n        y = y | zz;\n        y = y - (zz + 1);\n    }\n    y &= (0x5555555555555555ULL >> (64 - hash->step * 2));\n    hash->bits = (x | y);\n}\n\nvoid geohashNeighbors(const GeoHashBits *hash, GeoHashNeighbors *neighbors) {\n    neighbors->east = *hash;\n    neighbors->west = *hash;\n    neighbors->north = *hash;\n    neighbors->south = *hash;\n    neighbors->south_east = *hash;\n    neighbors->south_west = *hash;\n    neighbors->north_east = *hash;\n    neighbors->north_west = *hash;\n\n    geohash_move_x(&neighbors->east, 1);\n    geohash_move_y(&neighbors->east, 0);\n\n    geohash_move_x(&neighbors->west, -1);\n    geohash_move_y(&neighbors->west, 0);\n\n    geohash_move_x(&neighbors->south, 0);\n    geohash_move_y(&neighbors->south, -1);\n\n    geohash_move_x(&neighbors->north, 0);\n    geohash_move_y(&neighbors->north, 1);\n\n    geohash_move_x(&neighbors->north_west, -1);\n    geohash_move_y(&neighbors->north_west, 1);\n\n    geohash_move_x(&neighbors->north_east, 1);\n    geohash_move_y(&neighbors->north_east, 1);\n\n    geohash_move_x(&neighbors->south_east, 1);\n    geohash_move_y(&neighbors->south_east, -1);\n\n    geohash_move_x(&neighbors->south_west, -1);\n    geohash_move_y(&neighbors->south_west, -1);\n}\n"
  },
  {
    "path": "src/geohash.h",
    "content": "/*\n * Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>.\n * Copyright (c) 2015-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *  * Redistributions of source code must retain the above copyright notice,\n *    this list of conditions and the following disclaimer.\n *  * Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *  * Neither the name of Redis nor the names of its contributors may be used\n *    to endorse or promote products derived from this software without\n *    specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS\n * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n * THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef GEOHASH_H_\n#define GEOHASH_H_\n\n#include <stddef.h>\n#include <stdint.h>\n\n#if defined(__cplusplus)\nextern \"C\" {\n#endif\n\n#define HASHISZERO(r) (!(r).bits && !(r).step)\n#define RANGEISZERO(r) (!(r).max && !(r).min)\n#define RANGEPISZERO(r) (r == NULL || RANGEISZERO(*r))\n\n#define GEO_STEP_MAX 26 /* 26*2 = 52 bits. */\n\n/* Limits from EPSG:900913 / EPSG:3785 / OSGEO:41001 */\n#define GEO_LAT_MIN -85.05112878\n#define GEO_LAT_MAX 85.05112878\n#define GEO_LONG_MIN -180\n#define GEO_LONG_MAX 180\n\ntypedef enum {\n    GEOHASH_NORTH = 0,\n    GEOHASH_EAST,\n    GEOHASH_WEST,\n    GEOHASH_SOUTH,\n    GEOHASH_SOUTH_WEST,\n    GEOHASH_SOUTH_EAST,\n    GEOHASH_NORT_WEST,\n    GEOHASH_NORT_EAST\n} GeoDirection;\n\ntypedef struct {\n    uint64_t bits;\n    uint8_t step;\n} GeoHashBits;\n\ntypedef struct {\n    double min;\n    double max;\n} GeoHashRange;\n\ntypedef struct {\n    GeoHashBits hash;\n    GeoHashRange longitude;\n    GeoHashRange latitude;\n} GeoHashArea;\n\ntypedef struct {\n    GeoHashBits north;\n    GeoHashBits east;\n    GeoHashBits west;\n    GeoHashBits south;\n    GeoHashBits north_east;\n    GeoHashBits south_east;\n    GeoHashBits north_west;\n    GeoHashBits south_west;\n} GeoHashNeighbors;\n\n#define CIRCULAR_TYPE 1\n#define RECTANGLE_TYPE 2\ntypedef struct {\n    int type; /* search type */\n    double xy[2]; /* search center point, xy[0]: lon, xy[1]: lat */\n    double conversion; /* km: 
1000 */\n    double bounds[4]; /* bounds[0]: min_lon, bounds[1]: min_lat\n                       * bounds[2]: max_lon, bounds[3]: max_lat */\n    union {\n        /* CIRCULAR_TYPE */\n        double radius;\n        /* RECTANGLE_TYPE */\n        struct {\n            double height;\n            double width;\n        } r;\n    } t;\n} GeoShape;\n\n/*\n * 0:success\n * -1:failed\n */\nvoid geohashGetCoordRange(GeoHashRange *long_range, GeoHashRange *lat_range);\nint geohashEncode(const GeoHashRange *long_range, const GeoHashRange *lat_range,\n                  double longitude, double latitude, uint8_t step,\n                  GeoHashBits *hash);\nint geohashEncodeType(double longitude, double latitude,\n                      uint8_t step, GeoHashBits *hash);\nint geohashEncodeWGS84(double longitude, double latitude, uint8_t step,\n                       GeoHashBits *hash);\nint geohashDecode(const GeoHashRange long_range, const GeoHashRange lat_range,\n                  const GeoHashBits hash, GeoHashArea *area);\nint geohashDecodeType(const GeoHashBits hash, GeoHashArea *area);\nint geohashDecodeWGS84(const GeoHashBits hash, GeoHashArea *area);\nint geohashDecodeAreaToLongLat(const GeoHashArea *area, double *xy);\nint geohashDecodeToLongLatType(const GeoHashBits hash, double *xy);\nint geohashDecodeToLongLatWGS84(const GeoHashBits hash, double *xy);\nvoid geohashNeighbors(const GeoHashBits *hash, GeoHashNeighbors *neighbors);\n\n#if defined(__cplusplus)\n}\n#endif\n#endif /* GEOHASH_H_ */\n"
  },
  {
    "path": "src/geohash_helper.c",
    "content": "/*\n * Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>.\n * Copyright (c) 2015-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *  * Redistributions of source code must retain the above copyright notice,\n *    this list of conditions and the following disclaimer.\n *  * Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *  * Neither the name of Redis nor the names of its contributors may be used\n *    to endorse or promote products derived from this software without\n *    specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS\n * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n * THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n/* This is a C++ to C conversion from the ardb project.\n * This file started out as:\n * https://github.com/yinqiwen/ardb/blob/d42503/src/geo/geohash_helper.cpp\n */\n\n#include \"fmacros.h\"\n#include \"geohash_helper.h\"\n#include \"debugmacro.h\"\n#include <math.h>\n\n#define D_R (M_PI / 180.0)\n#define R_MAJOR 6378137.0\n#define R_MINOR 6356752.3142\n#define RATIO (R_MINOR / R_MAJOR)\n#define ECCENT (sqrt(1.0 - (RATIO *RATIO)))\n#define COM (0.5 * ECCENT)\n\n/// @brief The usual PI/180 constant\nconst double DEG_TO_RAD = 0.017453292519943295769236907684886;\n/// @brief Earth's quadratic mean radius for WGS-84\nconst double EARTH_RADIUS_IN_METERS = 6372797.560856;\n\nconst double MERCATOR_MAX = 20037726.37;\nconst double MERCATOR_MIN = -20037726.37;\n\nstatic inline double deg_rad(double ang) { return ang * D_R; }\nstatic inline double rad_deg(double ang) { return ang / D_R; }\n\n/* This function is used in order to estimate the step (bits precision)\n * of the 9 search area boxes during radius queries. */\nuint8_t geohashEstimateStepsByRadius(double range_meters, double lat) {\n    if (range_meters == 0) return 26;\n    int step = 1;\n    while (range_meters < MERCATOR_MAX) {\n        range_meters *= 2;\n        step++;\n    }\n    step -= 2; /* Make sure range is included in most of the base cases. */\n\n    /* Wider range towards the poles... 
Note: it is possible to do better\n     * than this approximation by computing the distance between meridians\n     * at this latitude, but this does the trick for now. */\n    if (lat > 66 || lat < -66) {\n        step--;\n        if (lat > 80 || lat < -80) step--;\n    }\n\n    /* Frame to valid range. */\n    if (step < 1) step = 1;\n    if (step > 26) step = 26;\n    return step;\n}\n\n/* Return the bounding box of the search area by shape (see geohash.h GeoShape)\n * bounds[0] - bounds[2] is the minimum and maximum longitude\n * while bounds[1] - bounds[3] is the minimum and maximum latitude.\n * since the higher the latitude, the shorter the arc length, the box shape is as follows\n * (left and right edges are actually bent), as shown in the following diagram:\n *\n *    \\-----------------/          --------               \\-----------------/\n *     \\               /         /          \\              \\               /\n *      \\  (long,lat) /         / (long,lat) \\              \\  (long,lat) /\n *       \\           /         /              \\             /             \\\n *         ---------          /----------------\\           /---------------\\\n *  Northern Hemisphere       Southern Hemisphere         Around the equator\n */\nint geohashBoundingBox(GeoShape *shape, double *bounds) {\n    if (!bounds) return 0;\n    double longitude = shape->xy[0];\n    double latitude = shape->xy[1];\n    double height = shape->conversion * (shape->type == CIRCULAR_TYPE ? shape->t.radius : shape->t.r.height/2);\n    double width = shape->conversion * (shape->type == CIRCULAR_TYPE ? 
shape->t.radius : shape->t.r.width/2);\n\n    const double lat_delta = rad_deg(height/EARTH_RADIUS_IN_METERS);\n    const double long_delta_top = rad_deg(width/EARTH_RADIUS_IN_METERS/cos(deg_rad(latitude+lat_delta)));\n    const double long_delta_bottom = rad_deg(width/EARTH_RADIUS_IN_METERS/cos(deg_rad(latitude-lat_delta)));\n    /* The directions of the northern and southern hemispheres\n     * are opposite, so we choose different points as min/max long/lat */\n    int southern_hemisphere = latitude < 0 ? 1 : 0;\n    bounds[0] = southern_hemisphere ? longitude-long_delta_bottom : longitude-long_delta_top;\n    bounds[2] = southern_hemisphere ? longitude+long_delta_bottom : longitude+long_delta_top;\n    bounds[1] = latitude - lat_delta;\n    bounds[3] = latitude + lat_delta;\n    return 1;\n}\n\n/* Calculate a set of areas (center + 8) that are able to cover a range query\n * for the specified position and shape (see geohash.h GeoShape).\n * The bounding box is saved in shape.bounds */\nGeoHashRadius geohashCalculateAreasByShapeWGS84(GeoShape *shape) {\n    GeoHashRange long_range, lat_range;\n    GeoHashRadius radius;\n    GeoHashBits hash;\n    GeoHashNeighbors neighbors;\n    GeoHashArea area;\n    double min_lon, max_lon, min_lat, max_lat;\n    int steps;\n\n    geohashBoundingBox(shape, shape->bounds);\n    min_lon = shape->bounds[0];\n    min_lat = shape->bounds[1];\n    max_lon = shape->bounds[2];\n    max_lat = shape->bounds[3];\n\n    double longitude = shape->xy[0];\n    double latitude = shape->xy[1];\n    /* radius_meters is calculated differently in different search types:\n     * 1) CIRCULAR_TYPE, just use radius.\n     * 2) RECTANGLE_TYPE, we use sqrt((width/2)^2 + (height/2)^2) to\n     * calculate the distance from the center point to the corner */\n    double radius_meters = shape->type == CIRCULAR_TYPE ? 
shape->t.radius :\n            sqrt((shape->t.r.width/2)*(shape->t.r.width/2) + (shape->t.r.height/2)*(shape->t.r.height/2));\n    radius_meters *= shape->conversion;\n\n    steps = geohashEstimateStepsByRadius(radius_meters,latitude);\n\n    geohashGetCoordRange(&long_range,&lat_range);\n    geohashEncode(&long_range,&lat_range,longitude,latitude,steps,&hash);\n    geohashNeighbors(&hash,&neighbors);\n    geohashDecode(long_range,lat_range,hash,&area);\n\n    /* Check if the step is enough at the limits of the covered area.\n     * Sometimes when the search area is near an edge of the\n     * area, the estimated step is not small enough, since one of the\n     * north / south / west / east square is too near to the search area\n     * to cover everything. */\n    int decrease_step = 0;\n    {\n        GeoHashArea north, south, east, west;\n\n        geohashDecode(long_range, lat_range, neighbors.north, &north);\n        geohashDecode(long_range, lat_range, neighbors.south, &south);\n        geohashDecode(long_range, lat_range, neighbors.east, &east);\n        geohashDecode(long_range, lat_range, neighbors.west, &west);\n\n        if (north.latitude.max < max_lat) \n            decrease_step = 1;\n        if (south.latitude.min > min_lat) \n            decrease_step = 1;\n        if (east.longitude.max < max_lon) \n            decrease_step = 1;\n        if (west.longitude.min > min_lon)  \n            decrease_step = 1;\n    }\n\n    if (steps > 1 && decrease_step) {\n        steps--;\n        geohashEncode(&long_range,&lat_range,longitude,latitude,steps,&hash);\n        geohashNeighbors(&hash,&neighbors);\n        geohashDecode(long_range,lat_range,hash,&area);\n    }\n\n    /* Exclude the search areas that are useless. 
*/\n    if (steps >= 2) {\n        if (area.latitude.min < min_lat) {\n            GZERO(neighbors.south);\n            GZERO(neighbors.south_west);\n            GZERO(neighbors.south_east);\n        }\n        if (area.latitude.max > max_lat) {\n            GZERO(neighbors.north);\n            GZERO(neighbors.north_east);\n            GZERO(neighbors.north_west);\n        }\n        if (area.longitude.min < min_lon) {\n            GZERO(neighbors.west);\n            GZERO(neighbors.south_west);\n            GZERO(neighbors.north_west);\n        }\n        if (area.longitude.max > max_lon) {\n            GZERO(neighbors.east);\n            GZERO(neighbors.south_east);\n            GZERO(neighbors.north_east);\n        }\n    }\n    radius.hash = hash;\n    radius.neighbors = neighbors;\n    radius.area = area;\n    return radius;\n}\n\nGeoHashFix52Bits geohashAlign52Bits(const GeoHashBits hash) {\n    uint64_t bits = hash.bits;\n    bits <<= (52 - hash.step * 2);\n    return bits;\n}\n\n/* Calculate distance using simplified haversine great circle distance formula.\n * Given longitude diff is 0 the asin(sqrt(a)) on the haversine is asin(sin(abs(u))).\n * arcsin(sin(x)) equal to x when x ∈[−𝜋/2,𝜋/2]. Given latitude is between [−𝜋/2,𝜋/2]\n * we can simplify arcsin(sin(x)) to x.\n */\ndouble geohashGetLatDistance(double lat1d, double lat2d) {\n    return EARTH_RADIUS_IN_METERS * fabs(deg_rad(lat2d) - deg_rad(lat1d));\n}\n\n/* Calculate distance using haversine great circle distance formula. 
*/\ndouble geohashGetDistance(double lon1d, double lat1d, double lon2d, double lat2d) {\n    double lat1r, lon1r, lat2r, lon2r, u, v, a;\n    lon1r = deg_rad(lon1d);\n    lon2r = deg_rad(lon2d);\n    v = sin((lon2r - lon1r) / 2);\n    /* if v == 0 we can avoid doing expensive math when lons are practically the same */\n    if (v == 0.0)\n        return geohashGetLatDistance(lat1d, lat2d);\n    lat1r = deg_rad(lat1d);\n    lat2r = deg_rad(lat2d);\n    u = sin((lat2r - lat1r) / 2);\n    a = u * u + cos(lat1r) * cos(lat2r) * v * v;\n    return 2.0 * EARTH_RADIUS_IN_METERS * asin(sqrt(a));\n}\n\nint geohashGetDistanceIfInRadius(double x1, double y1,\n                                 double x2, double y2, double radius,\n                                 double *distance) {\n    *distance = geohashGetDistance(x1, y1, x2, y2);\n    if (*distance > radius) return 0;\n    return 1;\n}\n\nint geohashGetDistanceIfInRadiusWGS84(double x1, double y1, double x2,\n                                      double y2, double radius,\n                                      double *distance) {\n    return geohashGetDistanceIfInRadius(x1, y1, x2, y2, radius, distance);\n}\n\n/* Judge whether a point is in the axis-aligned rectangle, when the distance\n * between a searched point and the center point is less than or equal to\n * height/2 or width/2 in height and width, the point is in the rectangle.\n *\n * width_m, height_m: the rectangle\n * x1, y1 : the center of the box\n * x2, y2 : the point to be searched\n */\nint geohashGetDistanceIfInRectangle(double width_m, double height_m, double x1, double y1,\n                                    double x2, double y2, double *distance) {\n    /* latitude distance is less expensive to compute than longitude distance\n     * so we check first for the latitude condition */\n    double lat_distance = geohashGetLatDistance(y2, y1);\n    if (lat_distance > height_m/2) {\n        return 0;\n    }\n    double lon_distance = geohashGetDistance(x2, y2, 
x1, y2);\n    if (lon_distance > width_m/2) {\n        return 0;\n    }\n    *distance = geohashGetDistance(x1, y1, x2, y2);\n    return 1;\n}\n"
  },
  {
    "path": "src/geohash_helper.h",
    "content": "/*\n * Copyright (c) 2013-2014, yinqiwen <yinqiwen@gmail.com>\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>.\n * Copyright (c) 2015-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *  * Redistributions of source code must retain the above copyright notice,\n *    this list of conditions and the following disclaimer.\n *  * Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *  * Neither the name of Redis nor the names of its contributors may be used\n *    to endorse or promote products derived from this software without\n *    specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS\n * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF\n * THE POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef GEOHASH_HELPER_HPP_\n#define GEOHASH_HELPER_HPP_\n\n#include \"geohash.h\"\n\n#define GZERO(s) s.bits = s.step = 0;\n#define GISZERO(s) (!s.bits && !s.step)\n#define GISNOTZERO(s) (s.bits || s.step)\n\ntypedef uint64_t GeoHashFix52Bits;\ntypedef uint64_t GeoHashVarBits;\n\ntypedef struct {\n    GeoHashBits hash;\n    GeoHashArea area;\n    GeoHashNeighbors neighbors;\n} GeoHashRadius;\n\nuint8_t geohashEstimateStepsByRadius(double range_meters, double lat);\nint geohashBoundingBox(GeoShape *shape, double *bounds);\nGeoHashRadius geohashCalculateAreasByShapeWGS84(GeoShape *shape);\nGeoHashFix52Bits geohashAlign52Bits(const GeoHashBits hash);\ndouble geohashGetDistance(double lon1d, double lat1d,\n                          double lon2d, double lat2d);\nint geohashGetDistanceIfInRadius(double x1, double y1,\n                                 double x2, double y2, double radius,\n                                 double *distance);\nint geohashGetDistanceIfInRadiusWGS84(double x1, double y1, double x2,\n                                      double y2, double radius,\n                                      double *distance);\nint geohashGetDistanceIfInRectangle(double width_m, double height_m, double x1, double y1,\n                                    double x2, double y2, double *distance);\n\n#endif /* GEOHASH_HELPER_HPP_ */\n"
  },
  {
    "path": "src/hotkeys.c",
    "content": "/* Hotkey tracking related functionality\n *\n * Copyright (c) 2026-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"chk.h\"\n#include \"cluster.h\"\n#include <sys/resource.h>\n\nstatic inline int nearestNextPowerOf2(unsigned int count) {\n    if (count <= 1) return 1;\n    return 1 << (32 - __builtin_clz(count-1));\n}\n\n/* Comparison function for qsort to sort slot indices */\nstatic inline int slotCompare(const void *a, const void *b) {\n    return (*(const int *)a) - (*(const int *)b);\n}\n\n/* Initialize the hotkeys structure and start tracking. If tracking keys in\n * specific slots is desired the user should pass along an already allocated and\n * populated slotRangeArray. The hotkeys structure takes ownership of the array\n * and will free it upon release. On failure the slots memory is released. */\nhotkeyStats *hotkeyStatsCreate(int count, int duration, int sample_ratio,\n                               slotRangeArray *slots, uint64_t tracked_metrics)\n{\n    serverAssert(tracked_metrics & (HOTKEYS_TRACK_CPU | HOTKEYS_TRACK_NET));\n\n    hotkeyStats *hotkeys = zcalloc(sizeof(hotkeyStats));\n\n    /* We track count * 10 keys for better accuracy. Numbuckets is roughly 10\n     * times the elements we track (actually num_buckets == 7-8 * count is\n     * enough) again for better accuracy. Note the CHK implementation uses a\n     * power of 2 numbuckets for better cache locality. 
*/\n    if (tracked_metrics & HOTKEYS_TRACK_CPU)\n        hotkeys->cpu = chkTopKCreate(count * 10, nearestNextPowerOf2((unsigned)count * 100), 1.08);\n\n    if (tracked_metrics & HOTKEYS_TRACK_NET)\n        hotkeys->net = chkTopKCreate(count * 10, nearestNextPowerOf2((unsigned)count * 100), 1.08);\n\n    hotkeys->tracked_metrics = tracked_metrics;\n    hotkeys->tracking_count = count;\n    hotkeys->duration = duration;\n    hotkeys->sample_ratio = sample_ratio;\n    hotkeys->slots = slots;\n    hotkeys->active = 1;\n    hotkeys->keys_result = (getKeysResult)GETKEYS_RESULT_INIT;\n    hotkeys->start = server.mstime;\n\n    /* Store initial rusage for CPU time tracking */\n    struct rusage rusage;\n    getrusage(RUSAGE_SELF, &rusage);\n    hotkeys->ru_utime = rusage.ru_utime;\n    hotkeys->ru_stime = rusage.ru_stime;\n\n    return hotkeys;\n}\n\nvoid hotkeyStatsRelease(hotkeyStats *hotkeys) {\n    if (!hotkeys) return;\n    if (hotkeys->cpu) chkTopKRelease(hotkeys->cpu);\n    if (hotkeys->net) chkTopKRelease(hotkeys->net);\n    slotRangeArrayFree(hotkeys->slots);\n    getKeysFreeResult(&hotkeys->keys_result);\n\n    zfree(hotkeys);\n}\n\n/* Helper function for hotkey tracking to check if a slot is in the selected\n * slots list. If slots is NULL then all slots are selected. */\nstatic inline int isSlotSelected(hotkeyStats *hotkeys, int slot) {\n    if (hotkeys->slots == NULL) return 1;\n    return slotRangeArrayContains(hotkeys->slots, slot);\n}\n\n/* Preparation for updates of the hotkeyStats for the current command, e.g.\n * cache the current client and the getKeysResult. */\nvoid hotkeyStatsPreCurrentCmd(hotkeyStats *hotkeys, client *c) {\n    if (!hotkeys || !hotkeys->active) return;\n\n    robj **argv = c->original_argv ? c->original_argv : c->argv;\n    int argc = c->original_argv ? 
c->original_argc : c->argc;\n\n    hotkeys->keys_result = (getKeysResult)GETKEYS_RESULT_INIT;\n    if (getKeysFromCommandWithSpecs(c->realcmd, argv, argc, GET_KEYSPEC_DEFAULT,\n                                    &hotkeys->keys_result) == 0)\n    {\n        return;\n    }\n\n    /* Check if command is sampled */\n    hotkeys->is_sampled = 1;\n    if (hotkeys->sample_ratio > 1 &&\n        (double)rand() / RAND_MAX >= 1.0 / hotkeys->sample_ratio)\n    {\n        hotkeys->is_sampled = 0;\n    }\n\n    hotkeys->is_in_selected_slots = isSlotSelected(hotkeys, c->slot);\n\n    hotkeys->current_client = c;\n}\n\n/* Update the hotkeyStats with passed metrics. This can be called multiple times\n * between the calls to hotkeyStatsPreCurrentCmd and hotkeyStatsPostCurrentCmd */\nvoid hotkeyStatsUpdateCurrentCmd(hotkeyStats *hotkeys, hotkeyMetrics metrics) {\n    if (!hotkeys || !hotkeys->active) return;\n    if (hotkeys->keys_result.numkeys == 0) return;\n\n    /* Don't update stats for nested calls, except when inside MULTI/EXEC\n     * where we want to track each individual command. 
*/\n    if (server.execution_nesting && !server.in_exec) return;\n\n    serverAssert(hotkeys->current_client);\n\n    int numkeys = hotkeys->keys_result.numkeys;\n    uint64_t duration_per_key = metrics.cpu_time_usec / numkeys;\n    uint64_t total_bytes = metrics.net_bytes;\n    uint64_t bytes_per_key = total_bytes / numkeys;\n\n    /* Update statistics counters */\n    hotkeys->time_all_commands_all_slots += metrics.cpu_time_usec;\n    hotkeys->net_bytes_all_commands_all_slots += total_bytes;\n\n    if (hotkeys->is_in_selected_slots) {\n        hotkeys->time_all_commands_selected_slots += metrics.cpu_time_usec;\n        hotkeys->net_bytes_all_commands_selected_slots += total_bytes;\n\n        if (hotkeys->is_sampled && hotkeys->sample_ratio > 1) {\n            hotkeys->time_sampled_commands_selected_slots += metrics.cpu_time_usec;\n            hotkeys->net_bytes_sampled_commands_selected_slots += total_bytes;\n        }\n    }\n\n    /* Only add keys to topK structure if command was sampled and is in selected\n     * slots. */\n    if (!hotkeys->is_sampled || !hotkeys->is_in_selected_slots) {\n        return;\n    }\n\n    mstime_t start_time = ustime();\n\n    /* Keys we've cached in the keys_result only track positions in the client's\n     * argv array so we must fetch it. */\n    client *c = hotkeys->current_client;\n    robj **argv = c->original_argv ? 
c->original_argv : c->argv;\n\n    /* Add all keys to topK structure */\n    for (int i = 0; i < numkeys; ++i) {\n        int pos = hotkeys->keys_result.keys[i].pos;\n\n        if (hotkeys->tracked_metrics & HOTKEYS_TRACK_CPU) {\n            sds ret = chkTopKUpdate(hotkeys->cpu, argv[pos]->ptr, sdslen(argv[pos]->ptr), duration_per_key);\n            if (ret) sdsfree(ret);\n        }\n\n        if (hotkeys->tracked_metrics & HOTKEYS_TRACK_NET) {\n            sds ret = chkTopKUpdate(hotkeys->net, argv[pos]->ptr, sdslen(argv[pos]->ptr), bytes_per_key);\n            if (ret) sdsfree(ret);\n        }\n    }\n\n    /* Track CPU time spent updating the topk structures. */\n    mstime_t end_time = ustime();\n    hotkeys->cpu_time += (end_time - start_time)/1000;\n}\n\n/* Some cleanup work for hotkeyStats after the command has finished execution */\nvoid hotkeyStatsPostCurrentCmd(hotkeyStats *hotkeys) {\n    if (!hotkeys || !hotkeys->active) return;\n\n    getKeysFreeResult(&hotkeys->keys_result);\n    hotkeys->keys_result = (getKeysResult)GETKEYS_RESULT_INIT;\n\n    hotkeys->current_client = NULL;\n    hotkeys->is_sampled = 0;\n    hotkeys->is_in_selected_slots = 0;\n}\n\nsize_t hotkeysGetMemoryUsage(hotkeyStats *hotkeys) {\n    if (!hotkeys) return 0;\n\n    size_t memory_usage = sizeof(hotkeyStats);\n    if (hotkeys->cpu) {\n        memory_usage += chkTopKGetMemoryUsage(hotkeys->cpu);\n    }\n    if (hotkeys->net) {\n        memory_usage += chkTopKGetMemoryUsage(hotkeys->net);\n    }\n    /* Add memory for slotRangeArray if present */\n    if (hotkeys->slots) {\n        memory_usage += sizeof(slotRangeArray) + sizeof(slotRange) * hotkeys->slots->num_ranges;\n    }\n\n    return memory_usage;\n}\n\nstatic int64_t time_diff_ms(struct timeval a, struct timeval b) {\n    int64_t sec = (int64_t)(a.tv_sec - b.tv_sec);\n    int64_t usec = (int64_t)(a.tv_usec - b.tv_usec);\n\n    if (usec < 0) {\n        sec--;\n        usec += 1000000;\n    }\n\n    return sec * 1000 + usec / 
1000;\n}\n\n/* Helper function to output a slotRangeArray as array of arrays.\n * Single slots become 1-element arrays, ranges become 2-element arrays. */\nstatic void addReplySlotRangeArray(client *c, slotRangeArray *slots) {\n    addReplyArrayLen(c, slots->num_ranges);\n    for (int i = 0; i < slots->num_ranges; i++) {\n        if (slots->ranges[i].start == slots->ranges[i].end) {\n            /* Single slot */\n            addReplyArrayLen(c, 1);\n            addReplyLongLong(c, slots->ranges[i].start);\n        } else {\n            /* Range */\n            addReplyArrayLen(c, 2);\n            addReplyLongLong(c, slots->ranges[i].start);\n            addReplyLongLong(c, slots->ranges[i].end);\n        }\n    }\n}\n\n/* Helper function to output selected-slots as array of arrays.\n * If slots is NULL, outputs the local node's slot ranges (all slots in non-cluster mode). */\nstatic void addReplySelectedSlots(client *c, hotkeyStats *hotkeys) {\n    if (hotkeys->slots == NULL) {\n        /* No specific slots selected - return the local node's slot ranges */\n        slotRangeArray *slots = clusterGetLocalSlotRanges();\n        addReplySlotRangeArray(c, slots);\n        slotRangeArrayFree(slots);\n        return;\n    }\n\n    /* Slots are already stored as a sorted/merged slotRangeArray */\n    addReplySlotRangeArray(c, hotkeys->slots);\n}\n\n/* HOTKEYS command implementation\n *\n * HOTKEYS START\n *         <METRICS count [CPU] [NET]>\n *         [COUNT k]\n *         [DURATION duration]\n *         [SAMPLE ratio]\n *         [SLOTS count slot…]\n * HOTKEYS STOP\n * HOTKEYS RESET\n * HOTKEYS GET\n */\nvoid hotkeysCommand(client *c) {\n    if (c->argc < 2) {\n        addReplyError(c, \"HOTKEYS subcommand required\");\n        return;\n    }\n\n    char *sub = c->argv[1]->ptr;\n\n    if (!strcasecmp(sub, \"HELP\")) {\n        const char *help[] = {\n            \"START <METRICS count [CPU] [NET]> [COUNT k] [DURATION duration] [SAMPLE ratio] [SLOTS count 
slot...]\",\n            \"    Starts hotkeys tracking with specified metrics.\",\n            \"    * METRICS count [CPU] [NET]\",\n            \"        Specify count of metrics and choose amongst:\",\n            \"        - CPU: Track hotkeys by CPU time percentage\",\n            \"        - NET: Track hotkeys by network bytes percentage\",\n            \"    * COUNT k\",\n            \"        Specifies the value of K for the top-K hotkeys tracking. Default: 10\",\n            \"    * DURATION duration\",\n            \"        Specifies tracking duration in seconds. 0 means tracking will continue until manually stopped. Default: 0\",\n            \"    * SAMPLE ratio\",\n            \"        Keys are tracked with probability 1/ratio. Default: 1 (tracks every key)\",\n            \"    * SLOTS count slot...\",\n            \"        Specify which slots to track keys from. Only available in cluster mode. Default: empty (track all slots)\",\n            \"STOP\",\n            \"    Stop hotkeys tracking. Results are still available via GET\",\n            \"GET\",\n            \"    Get results from hotkeys tracking.\",\n            \"RESET\",\n            \"    Reset memory used for hotkeys tracking. 
Tracking must have been stopped.\",\n            \"    Results will no longer be available after this command.\",\n            NULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(sub, \"START\")) {\n        /* HOTKEYS START\n         *         <METRICS count [CPU] [NET]>\n         *         [COUNT k]\n         *         [DURATION seconds]\n         *         [SAMPLE ratio]\n         *         [SLOTS count slot…] */\n        /* Return error if a session is already started */\n        if (server.hotkeys && server.hotkeys->active) {\n            addReplyError(c, \"hotkey tracking session already in progress\");\n            return;\n        }\n\n        /* METRICS is required and must be the first argument */\n        if (c->argc < 4 || strcasecmp(c->argv[2]->ptr, \"METRICS\")) {\n            addReplyError(c, \"METRICS parameter is required\");\n            return;\n        }\n\n        long metrics_count;\n        char errmsg[128];\n        snprintf(errmsg, 128, \"METRICS count must be > 0 and <= %d\", HOTKEYS_METRICS_COUNT);\n        if (getRangeLongFromObjectOrReply(c, c->argv[3], 1, HOTKEYS_METRICS_COUNT,\n                &metrics_count, errmsg) != C_OK)\n        {\n            return;\n        }\n\n        uint64_t tracked_metrics = 0;\n\n        int j = 4;\n\n        /* Parse CPU and NET tokens */\n        int metrics_parsed = 0;\n        int valid_metrics = 0;\n        while (j < c->argc && metrics_parsed < metrics_count) {\n            if (!strcasecmp(c->argv[j]->ptr, \"CPU\")) {\n                if (tracked_metrics & HOTKEYS_TRACK_CPU) {\n                    addReplyError(c, \"METRICS CPU defined more than once!\");\n                    return;\n                }\n                tracked_metrics |= HOTKEYS_TRACK_CPU;\n                ++valid_metrics;\n            } else if (!strcasecmp(c->argv[j]->ptr, \"NET\")) {\n                if (tracked_metrics & HOTKEYS_TRACK_NET) {\n                    addReplyError(c, \"METRICS NET defined 
more than once!\");\n                    return;\n                }\n                tracked_metrics |= HOTKEYS_TRACK_NET;\n                ++valid_metrics;\n            }\n            ++metrics_parsed;\n            ++j;\n        }\n\n        if (metrics_parsed != metrics_count) {\n            addReplyError(c, \"METRICS count does not match number of metric types provided\");\n            return;\n        }\n\n        if (valid_metrics == 0) {\n            addReplyError(c, \"METRICS no valid metrics passed. Supported: CPU|NET\");\n            return;\n        }\n\n        int count = 10;  /* default */\n        long duration = 0;  /* default: no auto-stop */\n        int sample_ratio = 1;  /* default: track every key */\n        slotRangeArray *slots = NULL;\n        while (j < c->argc) {\n            int moreargs = (c->argc-1) - j;\n            if (moreargs && !strcasecmp(c->argv[j]->ptr, \"COUNT\")) {\n                long count_val;\n                if (getRangeLongFromObjectOrReply(c, c->argv[j+1], 1, 64,\n                        &count_val, \"COUNT must be between 1 and 64\") != C_OK)\n                {\n                    slotRangeArrayFree(slots);\n                    return;\n                }\n                count = (int)count_val;\n                j += 2;\n            } else if (moreargs && !strcasecmp(c->argv[j]->ptr, \"DURATION\")) {\n                /* Arbitrary 1 million seconds limit, so we don't overflow the\n                 * duration member which is kept in milliseconds */\n                if (getRangeLongFromObjectOrReply(c, c->argv[j+1], 1, 1000000,\n                        &duration, \"DURATION must be between 1 and 1000000\") != C_OK)\n                {\n                    slotRangeArrayFree(slots);\n                    return;\n                }\n                duration *= 1000;\n                j += 2;\n            } else if (moreargs && !strcasecmp(c->argv[j]->ptr, \"SAMPLE\")) {\n                long ratio_val;\n                if 
(getRangeLongFromObjectOrReply(c, c->argv[j+1], 1, INT_MAX,\n                        &ratio_val, \"SAMPLE ratio must be positive\") != C_OK)\n                {\n                    slotRangeArrayFree(slots);\n                    return;\n                }\n                sample_ratio = (int)ratio_val;\n                j += 2;\n            } else if (moreargs && !strcasecmp(c->argv[j]->ptr, \"SLOTS\")) {\n                if (!server.cluster_enabled) {\n                    addReplyError(c, \"SLOTS parameter cannot be used in non-cluster mode\");\n                    return;\n                }\n\n                if (slots) {\n                    addReplyError(c, \"SLOTS parameter already specified\");\n                    slotRangeArrayFree(slots);\n                    return;\n                }\n                long slots_count_val;\n                char msg[64];\n                snprintf(msg, 64, \"SLOTS count must be between 1 and %d\",\n                         CLUSTER_SLOTS);\n                if (getRangeLongFromObjectOrReply(c, c->argv[j+1], 1,\n                        CLUSTER_SLOTS, &slots_count_val, msg) != C_OK)\n                {\n                    return;\n                }\n                int slots_count = (int)slots_count_val;\n\n                /* Parse slot numbers */\n                if (j + 1 + slots_count >= c->argc) {\n                    addReplyError(c, \"not enough slot numbers provided\");\n                    return;\n                }\n\n                /* Collect slots into a temporary array for sorting */\n                int *temp_slots = zmalloc(sizeof(int) * slots_count);\n                for (int i = 0; i < slots_count; i++) {\n                    long slot_val;\n                    if ((slot_val = getSlotOrReply(c, c->argv[j+2+i])) == -1) {\n                        zfree(temp_slots);\n                        return;\n                    }\n                    if (!clusterNodeCoversSlot(getMyClusterNode(), slot_val)) {\n              
          addReplyErrorFormat(c, \"slot %ld not handled by this node\", slot_val);\n                        zfree(temp_slots);\n                        return;\n                    }\n\n                    /* Check for duplicate slot */\n                    for (int k = 0; k < i; k++) {\n                        if (temp_slots[k] == slot_val) {\n                            addReplyError(c, \"duplicate slot number\");\n                            zfree(temp_slots);\n                            return;\n                        }\n                    }\n\n                    temp_slots[i] = (int)slot_val;\n                }\n\n                /* Sort the slots array */\n                qsort(temp_slots, slots_count, sizeof(int), slotCompare);\n\n                /* Build slotRangeArray from sorted slots */\n                for (int i = 0; i < slots_count; i++) {\n                    slots = slotRangeArrayAppend(slots, temp_slots[i]);\n                }\n                zfree(temp_slots);\n\n                j += 2 + slots_count;\n            } else {\n                addReplyError(c, \"syntax error\");\n                slotRangeArrayFree(slots);\n                return;\n            }\n        }\n\n        hotkeyStats *hotkeys = hotkeyStatsCreate(count, duration, sample_ratio,\n                                                 slots, tracked_metrics);\n \n        hotkeyStatsRelease(server.hotkeys);\n        server.hotkeys = hotkeys;\n\n        addReply(c, shared.ok);\n\n    } else if (!strcasecmp(sub, \"STOP\")) {\n        /* HOTKEYS STOP */\n        if (c->argc != 2) {\n            addReplyError(c, \"wrong number of arguments for 'hotkeys|stop' command\");\n            return;\n        }\n\n        if (!server.hotkeys || !server.hotkeys->active) {\n            addReplyNull(c);\n            return;\n        }\n\n        server.hotkeys->active = 0;\n        server.hotkeys->duration = server.mstime - server.hotkeys->start;\n        addReply(c, shared.ok);\n\n    } else if 
(!strcasecmp(sub, \"GET\")) {\n        /* HOTKEYS GET */\n        if (c->argc != 2) {\n            addReplyError(c, \"wrong number of arguments for 'hotkeys|get' command\");\n            return;\n        }\n\n        /* If no tracking is started, return (nil) */\n        if (!server.hotkeys) {\n            addReplyNull(c);\n            return;\n        }\n\n        serverAssert(server.hotkeys->tracked_metrics);\n\n        /* Calculate duration */\n        int duration = 0;\n        if (!server.hotkeys->active) {\n            duration = server.hotkeys->duration;\n        } else {\n            duration = server.mstime - server.hotkeys->start;\n        }\n\n        /* Get total CPU time using rusage (RUSAGE_SELF) -\n         * only if CPU tracking is enabled */\n        uint64_t total_cpu_user_msec = 0;\n        uint64_t total_cpu_sys_msec = 0;\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_CPU) {\n            struct rusage current_ru;\n            getrusage(RUSAGE_SELF, &current_ru);\n\n            /* Calculate difference in user and sys time */\n            total_cpu_user_msec = time_diff_ms(current_ru.ru_utime, server.hotkeys->ru_utime);\n            total_cpu_sys_msec = time_diff_ms(current_ru.ru_stime, server.hotkeys->ru_stime);\n        }\n\n        /* Get totals and lists for enabled metrics */\n        uint64_t total_net_bytes = 0;\n        chkHeapBucket *cpu = NULL;\n        chkHeapBucket *net = NULL;\n        int cpu_count = 0;\n        int net_count = 0;\n\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_CPU) {\n            cpu = chkTopKList(server.hotkeys->cpu);\n            for (int i = 0; i < server.hotkeys->tracking_count; ++i) {\n                if (cpu[i].count == 0) break;\n                cpu_count++;\n            }\n        }\n\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_NET) {\n            total_net_bytes = server.hotkeys->net->total;\n            net = chkTopKList(server.hotkeys->net);\n            for 
(int i = 0; i < server.hotkeys->tracking_count; ++i) {\n                if (net[i].count == 0) break;\n                net_count++;\n            }\n        }\n\n        int has_selected_slots = (server.hotkeys->slots != NULL);\n        int has_sampling = (server.hotkeys->sample_ratio > 1);\n\n        /* We return an array of maps for easy aggregation of results from\n         * different nodes. */\n        addReplyArrayLen(c, 1);\n\n        int total_len = 7;\n        void *maplenptr = addReplyDeferredLen(c);\n\n        /* tracking-active */\n        addReplyBulkCString(c, \"tracking-active\");\n        addReplyLongLong(c, server.hotkeys->active ? 1 : 0);\n\n        /* sample-ratio */\n        addReplyBulkCString(c, \"sample-ratio\");\n        addReplyLongLong(c, server.hotkeys->sample_ratio);\n\n        /* selected-slots - array of arrays with merged ranges */\n        addReplyBulkCString(c, \"selected-slots\");\n        addReplySelectedSlots(c, server.hotkeys);\n\n        /* sampled-commands-selected-slots-us (conditional) */\n        if (has_sampling && has_selected_slots) {\n            addReplyBulkCString(c, \"sampled-commands-selected-slots-us\");\n            addReplyLongLong(c, server.hotkeys->time_sampled_commands_selected_slots);\n\n            total_len++;\n        }\n\n        /* all-commands-selected-slots-us (conditional) */\n        if (has_selected_slots) {\n            addReplyBulkCString(c, \"all-commands-selected-slots-us\");\n            addReplyLongLong(c, server.hotkeys->time_all_commands_selected_slots);\n\n            ++total_len;\n        }\n\n        /* all-commands-all-slots-us */\n        addReplyBulkCString(c, \"all-commands-all-slots-us\");\n        addReplyLongLong(c, server.hotkeys->time_all_commands_all_slots);\n\n        /* net-bytes-sampled-commands-selected-slots (conditional) */\n        if (has_sampling && has_selected_slots) {\n            addReplyBulkCString(c, \"net-bytes-sampled-commands-selected-slots\");\n            
addReplyLongLong(c, server.hotkeys->net_bytes_sampled_commands_selected_slots);\n\n            ++total_len;\n        }\n\n        /* net-bytes-all-commands-selected-slots (conditional) */\n        if (has_selected_slots) {\n            addReplyBulkCString(c, \"net-bytes-all-commands-selected-slots\");\n            addReplyLongLong(c,\n                server.hotkeys->net_bytes_all_commands_selected_slots);\n\n            ++total_len;\n        }\n\n        /* net-bytes-all-commands-all-slots */\n        addReplyBulkCString(c, \"net-bytes-all-commands-all-slots\");\n        addReplyLongLong(c, server.hotkeys->net_bytes_all_commands_all_slots);\n\n        /* collection-start-time-unix-ms */\n        addReplyBulkCString(c, \"collection-start-time-unix-ms\");\n        addReplyLongLong(c, server.hotkeys->start);\n\n        /* collection-duration-ms */\n        addReplyBulkCString(c, \"collection-duration-ms\");\n        addReplyLongLong(c, duration);\n\n        /* total-cpu-time-user-ms (in milliseconds) - only if CPU tracking is enabled */\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_CPU) {\n            addReplyBulkCString(c, \"total-cpu-time-user-ms\");\n            addReplyLongLong(c, total_cpu_user_msec);\n\n            /* total-cpu-time-sys-ms (in milliseconds) */\n            addReplyBulkCString(c, \"total-cpu-time-sys-ms\");\n            addReplyLongLong(c, total_cpu_sys_msec);\n\n            total_len += 2;\n        }\n\n        /* total-net-bytes - only if NET tracking is enabled */\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_NET) {\n            addReplyBulkCString(c, \"total-net-bytes\");\n            addReplyLongLong(c, total_net_bytes);\n\n            ++total_len;\n        }\n\n        /* by-cpu-time-us - only if CPU tracking is enabled */\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_CPU) {\n            addReplyBulkCString(c, \"by-cpu-time-us\");\n            /* Nested array of key-value pairs */\n            
addReplyArrayLen(c, 2 * cpu_count);\n            for (int i = 0; i < cpu_count; ++i) {\n                addReplyBulkCBuffer(c, cpu[i].item, sdslen(cpu[i].item));\n                /* Return raw microsec value */\n                addReplyLongLong(c, cpu[i].count);\n            }\n            zfree(cpu);\n\n            ++total_len;\n        }\n\n        /* by-net-bytes - only if NET tracking is enabled */\n        if (server.hotkeys->tracked_metrics & HOTKEYS_TRACK_NET) {\n            addReplyBulkCString(c, \"by-net-bytes\");\n            /* Nested array of key-value pairs */\n            addReplyArrayLen(c, 2 * net_count);\n            for (int i = 0; i < net_count; ++i) {\n                addReplyBulkCBuffer(c, net[i].item, sdslen(net[i].item));\n                /* Return raw byte value */\n                addReplyLongLong(c, net[i].count);\n            }\n            zfree(net);\n\n            ++total_len;\n        }\n\n        setDeferredMapLen(c, maplenptr, total_len);\n\n    } else if (!strcasecmp(sub, \"RESET\")) {\n        /* HOTKEYS RESET */\n        if (c->argc != 2) {\n            addReplyError(c,\n                \"wrong number of arguments for 'hotkeys|reset' command\");\n            return;\n        }\n\n        /* Return error if session is in progress and not yet completed */\n        if (server.hotkeys && server.hotkeys->active) {\n            addReplyError(c,\n                \"hotkey tracking session in progress, stop tracking first\");\n            return;\n        }\n\n        /* Release the resources used for hotkey tracking */\n        hotkeyStatsRelease(server.hotkeys);\n        server.hotkeys = NULL;\n \n        addReply(c, shared.ok);\n    } else {\n        addReplyError(c, \"unknown subcommand or wrong number of arguments\");\n    }\n}\n"
  },
  {
    "path": "src/hyperloglog.c",
    "content": "/* hyperloglog.c - Redis HyperLogLog probabilistic cardinality approximation.\n * This file implements the algorithm and the exported Redis commands.\n *\n * Copyright (c) 2014-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n\n#include <stdint.h>\n#include <math.h>\n\n#ifdef HAVE_AVX2\n/* Define __MM_MALLOC_H to prevent importing the memory aligned\n * allocation functions, which we don't use. */\n#define __MM_MALLOC_H\n#include <immintrin.h>\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n#include <arm_neon.h>\n#endif\n\n#undef MAX\n#define MAX(a, b) ((a) > (b) ? (a) : (b))\n\n/* The Redis HyperLogLog implementation is based on the following ideas:\n *\n * * The use of a 64 bit hash function as proposed in [1], in order to estimate\n *   cardinalities larger than 10^9, at the cost of just 1 additional bit per\n *   register.\n * * The use of 16384 6-bit registers for a great level of accuracy, using\n *   a total of 12k per key.\n * * The use of the Redis string data type. No new type is introduced.\n * * No attempt is made to compress the data structure as in [1]. Also the\n *   algorithm used is the original HyperLogLog Algorithm as in [2], with\n *   the only difference that a 64 bit hash function is used, so no correction\n *   is performed for values near 2^32 as in [1].\n *\n * [1] Heule, Nunkesser, Hall: HyperLogLog in Practice: Algorithmic\n *     Engineering of a State of The Art Cardinality Estimation Algorithm.\n *\n * [2] P. Flajolet, Éric Fusy, O. Gandouet, and F. Meunier. 
Hyperloglog: The\n *     analysis of a near-optimal cardinality estimation algorithm.\n *\n * Redis uses two representations:\n *\n * 1) A \"dense\" representation where every entry is represented by\n *    a 6-bit integer.\n * 2) A \"sparse\" representation using run length compression suitable\n *    for representing HyperLogLogs with many registers set to 0 in\n *    a memory efficient way.\n *\n *\n * HLL header\n * ===\n *\n * Both the dense and sparse representation have a 16 byte header as follows:\n *\n * +------+---+-----+----------+\n * | HYLL | E | N/U | Cardin.  |\n * +------+---+-----+----------+\n *\n * The first 4 bytes are a magic string set to the bytes \"HYLL\".\n * \"E\" is one byte encoding, currently set to HLL_DENSE or\n * HLL_SPARSE. N/U are three unused bytes.\n *\n * The \"Cardin.\" field is a 64 bit integer stored in little endian format\n * with the latest cardinality computed that can be reused if the data\n * structure was not modified since the last computation (this is useful\n * because there are high probabilities that HLLADD operations don't\n * modify the actual data structure and hence the approximated cardinality).\n *\n * When the most significant bit in the most significant byte of the cached\n * cardinality is set, it means that the data structure was modified and\n * we can't reuse the cached value that must be recomputed.\n *\n * Dense representation\n * ===\n *\n * The dense representation used by Redis is the following:\n *\n * +--------+--------+--------+------//      //--+\n * |11000000|22221111|33333322|55444444 ....     |\n * +--------+--------+--------+------//      //--+\n *\n * The 6-bit counters are encoded one after the other starting from the\n * LSB to the MSB, and using the next bytes as needed.\n *\n * Sparse representation\n * ===\n *\n * The sparse representation encodes registers using a run length\n * encoding composed of three opcodes, two using one byte, and one using\n * two bytes. 
The opcodes are called ZERO, XZERO and VAL.\n *\n * ZERO opcode is represented as 00xxxxxx. The 6-bit integer represented\n * by the six bits 'xxxxxx', plus 1, means that there are N registers set\n * to 0. This opcode can represent from 1 to 64 contiguous registers set\n * to the value of 0.\n *\n * XZERO opcode is represented by two bytes 01xxxxxx yyyyyyyy. The 14-bit\n * integer represented by the bits 'xxxxxx' as most significant bits and\n * 'yyyyyyyy' as least significant bits, plus 1, means that there are N\n * registers set to 0. This opcode can represent from 1 to 16384 contiguous\n * registers set to the value of 0.\n *\n * VAL opcode is represented as 1vvvvvxx. It contains a 5-bit integer\n * representing the value of a register, and a 2-bit integer representing\n * the number of contiguous registers set to the value 'vvvvv'.\n * To obtain the value and run length, the integers vvvvv and xx must be\n * incremented by one. This opcode can represent values from 1 to 32,\n * repeated from 1 to 4 times.\n *\n * The sparse representation can't represent registers with a value greater\n * than 32, however it is very unlikely that we find such a register in an\n * HLL with a cardinality where the sparse representation is still more\n * memory efficient than the dense representation. When this happens the\n * HLL is converted to the dense representation.\n *\n * The sparse representation is purely positional. 
For example a sparse\n * representation of an empty HLL is just: XZERO:16384.\n *\n * An HLL having only 3 non-zero registers at position 1000, 1020, 1021\n * respectively set to 2, 3, 3, is represented by the following three\n * opcodes:\n *\n * XZERO:1000 (Registers 0-999 are set to 0)\n * VAL:2,1    (1 register set to value 2, that is register 1000)\n * ZERO:19    (Registers 1001-1019 set to 0)\n * VAL:3,2    (2 registers set to value 3, that is registers 1020,1021)\n * XZERO:15362 (Registers 1022-16383 set to 0)\n *\n * In the example the sparse representation used just 7 bytes instead\n * of 12k in order to represent the HLL registers. In general for low\n * cardinality there is a big win in terms of space efficiency, traded\n * with CPU time since the sparse representation is slower to access.\n *\n * The following table shows average cardinality vs bytes used, 100\n * samples per cardinality (when the set was not representable because\n * of registers with too big value, the dense representation size was used\n * as a sample).\n *\n * 100 267\n * 200 485\n * 300 678\n * 400 859\n * 500 1033\n * 600 1205\n * 700 1375\n * 800 1544\n * 900 1713\n * 1000 1882\n * 2000 3480\n * 3000 4879\n * 4000 6089\n * 5000 7138\n * 6000 8042\n * 7000 8823\n * 8000 9500\n * 9000 10088\n * 10000 10591\n *\n * The dense representation uses 12288 bytes, so there is a big win up to\n * a cardinality of ~2000-3000. For bigger cardinalities the constant times\n * involved in updating the sparse representation is not justified by the\n * memory savings. The exact maximum length of the sparse representation\n * when this implementation switches to the dense representation is\n * configured via the define server.hll_sparse_max_bytes.\n */\n\nstruct hllhdr {\n    char magic[4];      /* \"HYLL\" */\n    uint8_t encoding;   /* HLL_DENSE or HLL_SPARSE. */\n    uint8_t notused[3]; /* Reserved for future use, must be zero. */\n    uint8_t card[8];    /* Cached cardinality, little endian. 
*/\n    uint8_t registers[]; /* Data bytes. */\n};\n\n/* The cached cardinality MSB is used to signal validity of the cached value. */\n#define HLL_INVALIDATE_CACHE(hdr) (hdr)->card[7] |= (1<<7)\n#define HLL_VALID_CACHE(hdr) (((hdr)->card[7] & (1<<7)) == 0)\n\n#define HLL_P 14 /* The greater is P, the smaller the error. */\n#define HLL_Q (64-HLL_P) /* The number of bits of the hash value used for\n                            determining the number of leading zeros. */\n#define HLL_REGISTERS (1<<HLL_P) /* With P=14, 16384 registers. */\n#define HLL_P_MASK (HLL_REGISTERS-1) /* Mask to index register. */\n#define HLL_BITS 6 /* Enough to count up to 63 leading zeroes. */\n#define HLL_REGISTER_MAX ((1<<HLL_BITS)-1)\n#define HLL_HDR_SIZE sizeof(struct hllhdr)\n#define HLL_DENSE_SIZE (HLL_HDR_SIZE+((HLL_REGISTERS*HLL_BITS+7)/8))\n#define HLL_DENSE 0 /* Dense encoding. */\n#define HLL_SPARSE 1 /* Sparse encoding. */\n#define HLL_RAW 255 /* Only used internally, never exposed. */\n#define HLL_MAX_ENCODING 1\n\nstatic char *invalid_hll_err = \"-INVALIDOBJ Corrupted HLL object detected\";\n\n#if defined(HAVE_AVX2) || defined(HAVE_AARCH64_NEON)\nstatic int simd_enabled = 1;\n#endif\n\n#ifdef HAVE_AVX2\n#define HLL_USE_AVX2 (simd_enabled && __builtin_cpu_supports(\"avx2\"))\n#else\n#define HLL_USE_AVX2 0\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n#define HLL_USE_NEON (simd_enabled)\n#else\n#define HLL_USE_NEON 0\n#endif\n\n/* =========================== Low level bit macros ========================= */\n\n/* Macros to access the dense representation.\n *\n * We need to get and set 6 bit counters in an array of 8 bit bytes.\n * We use macros to make sure the code is inlined since speed is critical\n * especially in order to compute the approximated cardinality in\n * HLLCOUNT where we need to access all the registers at once.\n * For the same reason we also want to avoid conditionals in this code path.\n *\n * +--------+--------+--------+------//\n * 
|11000000|22221111|33333322|55444444\n * +--------+--------+--------+------//\n *\n * Note: in the above representation the most significant bit (MSB)\n * of every byte is on the left. We start using bits from the LSB to MSB,\n * and so forth passing to the next byte.\n *\n * Example, we want to access the counter at pos = 1 (\"111111\" in the\n * illustration above).\n *\n * The index of the first byte b0 containing our data is:\n *\n *  b0 = 6 * pos / 8 = 0\n *\n *   +--------+\n *   |11000000|  <- Our byte at b0\n *   +--------+\n *\n * The position of the first bit (counting from the LSB = 0) in the byte\n * is given by:\n *\n *  fb = 6 * pos % 8 -> 6\n *\n * Right shift b0 of 'fb' bits.\n *\n *   +--------+\n *   |11000000|  <- Initial value of b0\n *   |00000011|  <- After right shift of 6 bits.\n *   +--------+\n *\n * Left shift b1 of 8-fb bits (2 bits)\n *\n *   +--------+\n *   |22221111|  <- Initial value of b1\n *   |22111100|  <- After left shift of 2 bits.\n *   +--------+\n *\n * OR the two bytes, and finally AND with 111111 (63 in decimal) to\n * clean the higher order bits we are not interested in:\n *\n *   +--------+\n *   |00000011|  <- b0 right shifted\n *   |22111100|  <- b1 left shifted\n *   |22111111|  <- b0 OR b1\n *   |  111111|  <- (b0 OR b1) AND 63, our value.\n *   +--------+\n *\n * We can try with a different example, like pos = 0. 
In this case\n * the 6-bit counter is actually contained in a single byte.\n *\n *  b0 = 6 * pos / 8 = 0\n *\n *   +--------+\n *   |11000000|  <- Our byte at b0\n *   +--------+\n *\n *  fb = 6 * pos % 8 = 0\n *\n *  So we right shift of 0 bits (no shift in practice) and\n *  left shift the next byte of 8 bits, even if we don't use it,\n *  but this has the effect of clearing the bits so the result\n *  will not be affected after the OR.\n *\n * -------------------------------------------------------------------------\n *\n * Setting the register is a bit more complex, let's assume that 'val'\n * is the value we want to set, already in the right range.\n *\n * We need two steps, in one we need to clear the bits, and in the other\n * we need to bitwise-OR the new bits.\n *\n * Let's try with 'pos' = 1, so our first byte at 'b' is 0,\n *\n * \"fb\" is 6 in this case.\n *\n *   +--------+\n *   |11000000|  <- Our byte at b0\n *   +--------+\n *\n * To create an AND-mask to clear the bits about this position, we just\n * initialize the mask with the value 63, left shift it of \"fb\" bits,\n * and finally invert the result.\n *\n *   +--------+\n *   |00111111|  <- \"mask\" starts at 63\n *   |11000000|  <- \"mask\" after left shift of \"fb\" bits.\n *   |00111111|  <- \"mask\" after invert.\n *   +--------+\n *\n * Now we can bitwise-AND the byte at \"b\" with the mask, and bitwise-OR\n * it with \"val\" left-shifted of \"fb\" bits to set the new bits.\n *\n * Now let's focus on the next byte b1:\n *\n *   +--------+\n *   |22221111|  <- Initial value of b1\n *   +--------+\n *\n * To build the AND mask we start again with the 63 value, right shift\n * it by 8-fb bits, and invert it.\n *\n *   +--------+\n *   |00111111|  <- \"mask\" set at 2^6-1\n *   |00001111|  <- \"mask\" after the right shift by 8-fb = 2 bits\n *   |11110000|  <- \"mask\" after bitwise not.\n *   +--------+\n *\n * Now we can mask it with b+1 to clear the old bits, and bitwise-OR\n * with \"val\" 
right-shifted by 8-fb bits to set the new value.\n */\n\n/* Note: if we access the last counter, we will also access the b+1 byte\n * that is out of the array, but sds strings always have an implicit null\n * term, so the byte exists, and we can skip the conditional (or the need\n * to allocate 1 byte more explicitly). */\n\n/* Store the value of the register at position 'regnum' into variable 'target'.\n * 'p' is an array of unsigned bytes. */\n#define HLL_DENSE_GET_REGISTER(target,p,regnum) do { \\\n    uint8_t *_p = (uint8_t*) p; \\\n    unsigned long _byte = regnum*HLL_BITS/8; \\\n    unsigned long _fb = regnum*HLL_BITS&7; \\\n    unsigned long _fb8 = 8 - _fb; \\\n    unsigned long b0 = _p[_byte]; \\\n    unsigned long b1 = _p[_byte+1]; \\\n    target = ((b0 >> _fb) | (b1 << _fb8)) & HLL_REGISTER_MAX; \\\n} while(0)\n\n/* Set the value of the register at position 'regnum' to 'val'.\n * 'p' is an array of unsigned bytes. */\n#define HLL_DENSE_SET_REGISTER(p,regnum,val) do { \\\n    uint8_t *_p = (uint8_t*) p; \\\n    unsigned long _byte = (regnum)*HLL_BITS/8; \\\n    unsigned long _fb = (regnum)*HLL_BITS&7; \\\n    unsigned long _fb8 = 8 - _fb; \\\n    unsigned long _v = (val); \\\n    _p[_byte] &= ~(HLL_REGISTER_MAX << _fb); \\\n    _p[_byte] |= _v << _fb; \\\n    _p[_byte+1] &= ~(HLL_REGISTER_MAX >> _fb8); \\\n    _p[_byte+1] |= _v >> _fb8; \\\n} while(0)\n\n/* Macros to access the sparse representation.\n * The macros parameter is expected to be a uint8_t pointer. 
*/\n#define HLL_SPARSE_XZERO_BIT 0x40 /* 01xxxxxx */\n#define HLL_SPARSE_VAL_BIT 0x80 /* 1vvvvvxx */\n#define HLL_SPARSE_IS_ZERO(p) (((*(p)) & 0xc0) == 0) /* 00xxxxxx */\n#define HLL_SPARSE_IS_XZERO(p) (((*(p)) & 0xc0) == HLL_SPARSE_XZERO_BIT)\n#define HLL_SPARSE_IS_VAL(p) ((*(p)) & HLL_SPARSE_VAL_BIT)\n#define HLL_SPARSE_ZERO_LEN(p) (((*(p)) & 0x3f)+1)\n#define HLL_SPARSE_XZERO_LEN(p) (((((*(p)) & 0x3f) << 8) | (*((p)+1)))+1)\n#define HLL_SPARSE_VAL_VALUE(p) ((((*(p)) >> 2) & 0x1f)+1)\n#define HLL_SPARSE_VAL_LEN(p) (((*(p)) & 0x3)+1)\n#define HLL_SPARSE_VAL_MAX_VALUE 32\n#define HLL_SPARSE_VAL_MAX_LEN 4\n#define HLL_SPARSE_ZERO_MAX_LEN 64\n#define HLL_SPARSE_XZERO_MAX_LEN 16384\n#define HLL_SPARSE_VAL_SET(p,val,len) do { \\\n    *(p) = (((val)-1)<<2|((len)-1))|HLL_SPARSE_VAL_BIT; \\\n} while(0)\n#define HLL_SPARSE_ZERO_SET(p,len) do { \\\n    *(p) = (len)-1; \\\n} while(0)\n#define HLL_SPARSE_XZERO_SET(p,len) do { \\\n    int _l = (len)-1; \\\n    *(p) = (_l>>8) | HLL_SPARSE_XZERO_BIT; \\\n    *((p)+1) = (_l&0xff); \\\n} while(0)\n#define HLL_ALPHA_INF 0.721347520444481703680 /* constant for 0.5/ln(2) */\n\n/* ========================= HyperLogLog algorithm  ========================= */\n\n/* Our hash function is MurmurHash2, 64 bit version.\n * It was modified for Redis in order to provide the same result in\n * big and little endian archs (endian neutral). 
*/\nREDIS_NO_SANITIZE(\"alignment\")\nuint64_t MurmurHash64A (const void * key, size_t len, unsigned int seed) {\n    const uint64_t m = 0xc6a4a7935bd1e995;\n    const int r = 47;\n    uint64_t h = seed ^ (len * m);\n    const uint8_t *data = (const uint8_t *)key;\n    const uint8_t *end = data + (len-(len&7));\n\n    while(data != end) {\n        uint64_t k;\n\n#if (BYTE_ORDER == LITTLE_ENDIAN)\n    #ifdef USE_ALIGNED_ACCESS\n        memcpy(&k,data,sizeof(uint64_t));\n    #else\n        k = *((uint64_t*)data);\n    #endif\n#else\n        k = (uint64_t) data[0];\n        k |= (uint64_t) data[1] << 8;\n        k |= (uint64_t) data[2] << 16;\n        k |= (uint64_t) data[3] << 24;\n        k |= (uint64_t) data[4] << 32;\n        k |= (uint64_t) data[5] << 40;\n        k |= (uint64_t) data[6] << 48;\n        k |= (uint64_t) data[7] << 56;\n#endif\n\n        k *= m;\n        k ^= k >> r;\n        k *= m;\n        h ^= k;\n        h *= m;\n        data += 8;\n    }\n\n    switch(len & 7) {\n    case 7: h ^= (uint64_t)data[6] << 48; /* fall-thru */\n    case 6: h ^= (uint64_t)data[5] << 40; /* fall-thru */\n    case 5: h ^= (uint64_t)data[4] << 32; /* fall-thru */\n    case 4: h ^= (uint64_t)data[3] << 24; /* fall-thru */\n    case 3: h ^= (uint64_t)data[2] << 16; /* fall-thru */\n    case 2: h ^= (uint64_t)data[1] << 8; /* fall-thru */\n    case 1: h ^= (uint64_t)data[0];\n            h *= m; /* fall-thru */\n    };\n\n    h ^= h >> r;\n    h *= m;\n    h ^= h >> r;\n    return h;\n}\n\n/* Given a string element to add to the HyperLogLog, returns the length\n * of the pattern 000..1 of the element hash. As a side effect 'regp' is\n * set to the register index this element hashes to. */\nint hllPatLen(unsigned char *ele, size_t elesize, long *regp) {\n    uint64_t hash, index;\n    int count;\n\n    /* Count the number of zeroes starting from bit HLL_REGISTERS\n     * (that is a power of two corresponding to the first bit we don't use\n     * as index). 
The max run can be 64-P+1 = Q+1 bits.\n *\n * Note that the final \"1\" ending the sequence of zeroes must be\n * included in the count, so if we find \"001\" the count is 3, and\n * the smallest count possible is no zeroes at all, just a 1 bit\n * at the first position, that is a count of 1. */\n    hash = MurmurHash64A(ele,elesize,0xadc83b19ULL);\n    index = hash & HLL_P_MASK; /* Register index. */\n    hash >>= HLL_P; /* Remove bits used to address the register. */\n    hash |= ((uint64_t)1<<HLL_Q); /* Make sure the loop terminates\n                                     and count will be <= Q+1. */\n\n    count = __builtin_ctzll(hash) + 1;\n    *regp = (int) index;\n    return count;\n}\n\n/* ================== Dense representation implementation  ================== */\n\n/* Low level function to set the dense HLL register at 'index' to the\n * specified value if the current value is smaller than 'count'.\n *\n * 'registers' is expected to have room for HLL_REGISTERS plus an\n * additional byte on the right. This requirement is met by sds strings\n * automatically since they are implicitly null terminated.\n *\n * The function always succeeds; however, if as a result of the operation\n * the approximated cardinality changed, 1 is returned. Otherwise 0\n * is returned. */\nint hllDenseSet(uint8_t *registers, long index, uint8_t count) {\n    uint8_t oldcount;\n\n    HLL_DENSE_GET_REGISTER(oldcount,registers,index);\n    if (count > oldcount) {\n        HLL_DENSE_SET_REGISTER(registers,index,count);\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* \"Add\" the element in the dense hyperloglog data structure.\n * Actually nothing is added, but the max 0 pattern counter of the subset\n * the element belongs to is incremented if needed.\n *\n * This is just a wrapper to hllDenseSet(), performing the hashing of the\n * element in order to retrieve the index and zero-run count. 
*/\nint hllDenseAdd(uint8_t *registers, unsigned char *ele, size_t elesize) {\n    long index;\n    uint8_t count = hllPatLen(ele,elesize,&index);\n    /* Update the register if this element produced a longer run of zeroes. */\n    return hllDenseSet(registers,index,count);\n}\n\n/* Compute the register histogram in the dense representation. */\nvoid hllDenseRegHisto(uint8_t *registers, int* reghisto) {\n    int j;\n\n    /* Redis default is to use 16384 registers 6 bits each. The code works\n     * with other values by modifying the defines, but for our target value\n     * we take a faster path with unrolled loops. */\n    if (HLL_REGISTERS == 16384 && HLL_BITS == 6) {\n        uint8_t *r = registers;\n        unsigned long r0, r1, r2, r3, r4, r5, r6, r7, r8, r9,\n                      r10, r11, r12, r13, r14, r15;\n        for (j = 0; j < 1024; j++) {\n            /* Handle 16 registers per iteration. */\n            r0 = r[0] & 63;\n            r1 = (r[0] >> 6 | r[1] << 2) & 63;\n            r2 = (r[1] >> 4 | r[2] << 4) & 63;\n            r3 = (r[2] >> 2) & 63;\n            r4 = r[3] & 63;\n            r5 = (r[3] >> 6 | r[4] << 2) & 63;\n            r6 = (r[4] >> 4 | r[5] << 4) & 63;\n            r7 = (r[5] >> 2) & 63;\n            r8 = r[6] & 63;\n            r9 = (r[6] >> 6 | r[7] << 2) & 63;\n            r10 = (r[7] >> 4 | r[8] << 4) & 63;\n            r11 = (r[8] >> 2) & 63;\n            r12 = r[9] & 63;\n            r13 = (r[9] >> 6 | r[10] << 2) & 63;\n            r14 = (r[10] >> 4 | r[11] << 4) & 63;\n            r15 = (r[11] >> 2) & 63;\n\n            reghisto[r0]++;\n            reghisto[r1]++;\n            reghisto[r2]++;\n            reghisto[r3]++;\n            reghisto[r4]++;\n            reghisto[r5]++;\n            reghisto[r6]++;\n            reghisto[r7]++;\n            reghisto[r8]++;\n            reghisto[r9]++;\n            reghisto[r10]++;\n            reghisto[r11]++;\n            reghisto[r12]++;\n            reghisto[r13]++;\n            
reghisto[r14]++;\n            reghisto[r15]++;\n\n            r += 12;\n        }\n    } else {\n        for(j = 0; j < HLL_REGISTERS; j++) {\n            unsigned long reg;\n            HLL_DENSE_GET_REGISTER(reg,registers,j);\n            reghisto[reg]++;\n        }\n    }\n}\n\n/* ================== Sparse representation implementation  ================= */\n\n/* Convert the HLL with sparse representation given as input in its dense\n * representation. Both representations are represented by SDS strings, and\n * the input representation is freed as a side effect.\n *\n * The function returns C_OK if the sparse representation was valid,\n * otherwise C_ERR is returned if the representation was corrupted. */\nint hllSparseToDense(robj *o) {\n    sds sparse = o->ptr, dense;\n    struct hllhdr *hdr, *oldhdr = (struct hllhdr*)sparse;\n    int idx = 0, runlen, regval;\n    uint8_t *p = (uint8_t*)sparse, *end = p+sdslen(sparse);\n    int valid = 1;\n\n    /* If the representation is already the right one return ASAP. */\n    hdr = (struct hllhdr*) sparse;\n    if (hdr->encoding == HLL_DENSE) return C_OK;\n\n    /* Create a string of the right size filled with zero bytes.\n     * Note that the cached cardinality is set to 0 as a side effect\n     * that is exactly the cardinality of an empty HLL. */\n    dense = sdsnewlen(NULL,HLL_DENSE_SIZE);\n    hdr = (struct hllhdr*) dense;\n    *hdr = *oldhdr; /* This will copy the magic and cached cardinality. */\n    hdr->encoding = HLL_DENSE;\n\n    /* Now read the sparse representation and set non-zero registers\n     * accordingly. */\n    p += HLL_HDR_SIZE;\n    while(p < end) {\n        if (HLL_SPARSE_IS_ZERO(p)) {\n            runlen = HLL_SPARSE_ZERO_LEN(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. 
*/\n                valid = 0;\n                break;\n            }\n            idx += runlen;\n            p++;\n        } else if (HLL_SPARSE_IS_XZERO(p)) {\n            runlen = HLL_SPARSE_XZERO_LEN(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. */\n                valid = 0;\n                break;\n            }\n            idx += runlen;\n            p += 2;\n        } else {\n            runlen = HLL_SPARSE_VAL_LEN(p);\n            regval = HLL_SPARSE_VAL_VALUE(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. */\n                valid = 0;\n                break;\n            }\n            while(runlen--) {\n                HLL_DENSE_SET_REGISTER(hdr->registers,idx,regval);\n                idx++;\n            }\n            p++;\n        }\n    }\n\n    /* If the sparse representation was valid, we expect to find idx\n     * set to HLL_REGISTERS. */\n    if (!valid || idx != HLL_REGISTERS) {\n        sdsfree(dense);\n        return C_ERR;\n    }\n\n    /* Free the old representation and set the new one. */\n    sdsfree(o->ptr);\n    o->ptr = dense;\n    return C_OK;\n}\n\n/* Low level function to set the sparse HLL register at 'index' to the\n * specified value if the current value is smaller than 'count'.\n *\n * The object 'o' is the String object holding the HLL. The function requires\n * a reference to the object in order to be able to enlarge the string if\n * needed.\n *\n * On success, the function returns 1 if the cardinality changed, or 0\n * if the register for this element was not updated.\n * On error (if the representation is invalid) -1 is returned.\n *\n * As a side effect the function may promote the HLL representation from\n * sparse to dense: this happens when a register requires to be set to a value\n * not representable with the sparse representation, or when the resulting\n * size would be greater than server.hll_sparse_max_bytes. 
*/\nint hllSparseSet(robj *o, long index, uint8_t count) {\n    struct hllhdr *hdr;\n    uint8_t oldcount, *sparse, *end, *p, *prev, *next;\n    long first, span;\n    long is_zero = 0, is_xzero = 0, is_val = 0, runlen = 0;\n\n    /* If the count is too big to be representable by the sparse representation\n     * switch to dense representation. */\n    if (count > HLL_SPARSE_VAL_MAX_VALUE) goto promote;\n\n    /* When updating a sparse representation, sometimes we may need to enlarge the\n     * buffer by up to 3 bytes in the worst case (XZERO split into XZERO-VAL-XZERO),\n     * and the following code handles the enlargement.\n     * We use a greedy strategy, enlarging by more than 3 bytes, to avoid the need\n     * for future reallocations on incremental growth. But we do not allocate more than\n     * 'server.hll_sparse_max_bytes' bytes for the sparse representation.\n     * If the available space of the hyperloglog sds string is not enough for the\n     * increment we need, we promote the hyperloglog to the dense representation in 'step 3'.\n     */\n    if (sdsalloc(o->ptr) < server.hll_sparse_max_bytes && sdsavail(o->ptr) < 3) {\n        size_t newlen = sdslen(o->ptr) + 3;\n        newlen += min(newlen, 300); /* Greediness: double 'newlen' if it is smaller than 300, or add 300 to it when it exceeds 300 */\n        if (newlen > server.hll_sparse_max_bytes)\n            newlen = server.hll_sparse_max_bytes;\n        o->ptr = sdsResize(o->ptr, newlen, 1);\n    }\n\n    /* Step 1: we need to locate the opcode we need to modify to check\n     * if a value update is actually needed. */\n    sparse = p = ((uint8_t*)o->ptr) + HLL_HDR_SIZE;\n    end = p + sdslen(o->ptr) - HLL_HDR_SIZE;\n\n    first = 0;\n    prev = NULL; /* Points to the previous opcode at the end of the loop. */\n    next = NULL; /* Points to the next opcode at the end of the loop. 
*/\n    span = 0;\n    while(p < end) {\n        long oplen;\n\n        /* Set span to the number of registers covered by this opcode.\n         *\n         * This is the most performance-critical loop of the sparse\n         * representation. Sorting the conditionals from the most to the\n         * least frequent opcode makes this faster for many-byte sparse HLLs. */\n        oplen = 1;\n        if (HLL_SPARSE_IS_ZERO(p)) {\n            span = HLL_SPARSE_ZERO_LEN(p);\n        } else if (HLL_SPARSE_IS_VAL(p)) {\n            span = HLL_SPARSE_VAL_LEN(p);\n        } else { /* XZERO. */\n            span = HLL_SPARSE_XZERO_LEN(p);\n            oplen = 2;\n        }\n        /* Break if this opcode covers the register at 'index'. */\n        if (index <= first+span-1) break;\n        prev = p;\n        p += oplen;\n        first += span;\n    }\n    if (span == 0 || p >= end) return -1; /* Invalid format. */\n\n    next = HLL_SPARSE_IS_XZERO(p) ? p+2 : p+1;\n    if (next >= end) next = NULL;\n\n    /* Cache the current opcode type to avoid using the macro again and\n     * again for something that will not change.\n     * Also cache the run-length of the opcode. 
*/\n    if (HLL_SPARSE_IS_ZERO(p)) {\n        is_zero = 1;\n        runlen = HLL_SPARSE_ZERO_LEN(p);\n    } else if (HLL_SPARSE_IS_XZERO(p)) {\n        is_xzero = 1;\n        runlen = HLL_SPARSE_XZERO_LEN(p);\n    } else {\n        is_val = 1;\n        runlen = HLL_SPARSE_VAL_LEN(p);\n    }\n\n    /* Step 2: After the loop:\n     *\n     * 'first' stores the index of the first register covered\n     *  by the current opcode, which is pointed to by 'p'.\n     *\n     * 'next' and 'prev' store respectively the next and previous opcode,\n     *  or NULL if the opcode at 'p' is respectively the last or first.\n     *\n     * 'span' is set to the number of registers covered by the current\n     *  opcode.\n     *\n     * There are different cases in order to update the data structure\n     * in place without generating it from scratch:\n     *\n     * A) If it is a VAL opcode already set to a value >= our 'count'\n     *    no update is needed, regardless of the VAL run-length field.\n     *    In this case PFADD returns 0 since no changes are performed.\n     *\n     * B) If it is a VAL opcode with len = 1 (representing only our\n     *    register) and the value is less than 'count', we just update it\n     *    since this is a trivial case. */\n    if (is_val) {\n        oldcount = HLL_SPARSE_VAL_VALUE(p);\n        /* Case A. */\n        if (oldcount >= count) return 0;\n\n        /* Case B. */\n        if (runlen == 1) {\n            HLL_SPARSE_VAL_SET(p,count,1);\n            goto updated;\n        }\n    }\n\n    /* C) Another trivial case to handle is a ZERO opcode with a len of 1.\n     * We can just replace it with a VAL opcode with our value and len of 1. 
*/\n    if (is_zero && runlen == 1) {\n        HLL_SPARSE_VAL_SET(p,count,1);\n        goto updated;\n    }\n\n    /* D) General case.\n     *\n     * The other cases are more complex: our register needs to be updated\n     * and is either currently represented by a VAL opcode with len > 1,\n     * by a ZERO opcode with len > 1, or by an XZERO opcode.\n     *\n     * In those cases the original opcode must be split into multiple\n     * opcodes. The worst case is an XZERO split in the middle resulting in\n     * XZERO - VAL - XZERO, so the maximum length of the resulting sequence\n     * is 5 bytes.\n     *\n     * We perform the split by writing the new sequence into the 'seq' buffer,\n     * computing its length 'seqlen'. Later the new sequence is inserted in place\n     * of the old one, possibly moving what is on the right a few bytes\n     * if the new sequence is longer than the old one. */\n    uint8_t seq[5], *n = seq;\n    int last = first+span-1; /* Last register covered by the sequence. */\n    int len;\n\n    if (is_zero || is_xzero) {\n        /* Handle splitting of ZERO / XZERO. */\n        if (index != first) {\n            len = index-first;\n            if (len > HLL_SPARSE_ZERO_MAX_LEN) {\n                HLL_SPARSE_XZERO_SET(n,len);\n                n += 2;\n            } else {\n                HLL_SPARSE_ZERO_SET(n,len);\n                n++;\n            }\n        }\n        HLL_SPARSE_VAL_SET(n,count,1);\n        n++;\n        if (index != last) {\n            len = last-index;\n            if (len > HLL_SPARSE_ZERO_MAX_LEN) {\n                HLL_SPARSE_XZERO_SET(n,len);\n                n += 2;\n            } else {\n                HLL_SPARSE_ZERO_SET(n,len);\n                n++;\n            }\n        }\n    } else {\n        /* Handle splitting of VAL. 
*/\n        int curval = HLL_SPARSE_VAL_VALUE(p);\n\n        if (index != first) {\n            len = index-first;\n            HLL_SPARSE_VAL_SET(n,curval,len);\n            n++;\n        }\n        HLL_SPARSE_VAL_SET(n,count,1);\n        n++;\n        if (index != last) {\n            len = last-index;\n            HLL_SPARSE_VAL_SET(n,curval,len);\n            n++;\n        }\n    }\n\n    /* Step 3: substitute the old sequence with the new one.\n     *\n     * Note that we already allocated space on the sds string\n     * calling sdsResize(). */\n    int seqlen = n-seq;\n    int oldlen = is_xzero ? 2 : 1;\n    int deltalen = seqlen-oldlen;\n\n    if (deltalen > 0 &&\n        sdslen(o->ptr) + deltalen > server.hll_sparse_max_bytes) goto promote;\n    serverAssert(sdslen(o->ptr) + deltalen <= sdsalloc(o->ptr));\n    if (deltalen && next) memmove(next+deltalen,next,end-next);\n    sdsIncrLen(o->ptr,deltalen);\n    memcpy(p,seq,seqlen);\n    end += deltalen;\n\nupdated:\n    /* Step 4: Merge adjacent values if possible.\n     *\n     * The representation was updated; however, the resulting representation\n     * may not be optimal: adjacent VAL opcodes can sometimes be merged into\n     * a single one. */\n    p = prev ? prev : sparse;\n    int scanlen = 5; /* Scan up to 5 opcodes starting from prev. */\n    while (p < end && scanlen--) {\n        if (HLL_SPARSE_IS_XZERO(p)) {\n            p += 2;\n            continue;\n        } else if (HLL_SPARSE_IS_ZERO(p)) {\n            p++;\n            continue;\n        }\n        /* We need two adjacent VAL opcodes to try a merge, having\n         * the same value, and a len that fits the VAL opcode max len. 
*/\n        if (p+1 < end && HLL_SPARSE_IS_VAL(p+1)) {\n            int v1 = HLL_SPARSE_VAL_VALUE(p);\n            int v2 = HLL_SPARSE_VAL_VALUE(p+1);\n            if (v1 == v2) {\n                int len = HLL_SPARSE_VAL_LEN(p)+HLL_SPARSE_VAL_LEN(p+1);\n                if (len <= HLL_SPARSE_VAL_MAX_LEN) {\n                    HLL_SPARSE_VAL_SET(p+1,v1,len);\n                    memmove(p,p+1,end-p);\n                    sdsIncrLen(o->ptr,-1);\n                    end--;\n                    /* After a merge we reiterate without incrementing 'p'\n                     * in order to try to merge the just merged value with\n                     * a value on its right. */\n                    continue;\n                }\n            }\n        }\n        p++;\n    }\n\n    /* Invalidate the cached cardinality. */\n    hdr = o->ptr;\n    HLL_INVALIDATE_CACHE(hdr);\n    return 1;\n\npromote: /* Promote to dense representation. */\n    if (hllSparseToDense(o) == C_ERR) return -1; /* Corrupted HLL. */\n    hdr = o->ptr;\n\n    /* We need to call hllDenseSet() to perform the operation after the\n     * conversion. However the result must be 1, since if we need to\n     * convert from sparse to dense a register needs to be updated.\n     *\n     * Note that this in turn means that PFADD will make sure the command\n     * is propagated to slaves / AOF, so if there is a sparse -> dense\n     * conversion, it will be performed in all the slaves as well. */\n    int dense_retval = hllDenseSet(hdr->registers,index,count);\n    serverAssert(dense_retval == 1);\n    return dense_retval;\n}\n\n/* \"Add\" the element in the sparse hyperloglog data structure.\n * Actually nothing is added, but the max 0 pattern counter of the subset\n * the element belongs to is incremented if needed.\n *\n * This function is actually a wrapper for hllSparseSet(); it only performs\n * the hashing of the element to obtain the index and zeros run length. 
*/\nint hllSparseAdd(robj *o, unsigned char *ele, size_t elesize) {\n    long index;\n    uint8_t count = hllPatLen(ele,elesize,&index);\n    /* Update the register if this element produced a longer run of zeroes. */\n    return hllSparseSet(o,index,count);\n}\n\n/* Compute the register histogram in the sparse representation. */\nvoid hllSparseRegHisto(uint8_t *sparse, int sparselen, int *invalid, int* reghisto) {\n    int idx = 0, runlen, regval;\n    uint8_t *end = sparse+sparselen, *p = sparse;\n    int valid = 1;\n\n    while(p < end) {\n        if (HLL_SPARSE_IS_ZERO(p)) {\n            runlen = HLL_SPARSE_ZERO_LEN(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. */\n                valid = 0;\n                break;\n            }\n            idx += runlen;\n            reghisto[0] += runlen;\n            p++;\n        } else if (HLL_SPARSE_IS_XZERO(p)) {\n            runlen = HLL_SPARSE_XZERO_LEN(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. */\n                valid = 0;\n                break;\n            }\n            idx += runlen;\n            reghisto[0] += runlen;\n            p += 2;\n        } else {\n            runlen = HLL_SPARSE_VAL_LEN(p);\n            regval = HLL_SPARSE_VAL_VALUE(p);\n            if ((runlen + idx) > HLL_REGISTERS) { /* Overflow. */\n                valid = 0;\n                break;\n            }\n            idx += runlen;\n            reghisto[regval] += runlen;\n            p++;\n        }\n    }\n    if ((!valid || idx != HLL_REGISTERS) && invalid) *invalid = 1;\n}\n\n/* ========================= HyperLogLog Count ==============================\n * This is the core of the algorithm where the approximated count is computed.\n * The function uses the lower level hllDenseRegHisto() and hllSparseRegHisto()\n * functions as helpers to compute the histogram of register values, the\n * representation-specific part of the computation, while all the rest is common. 
*/\n\n/* Implements the register histogram calculation for uint8_t data type\n * which is only used internally as speedup for PFCOUNT with multiple keys. */\nvoid hllRawRegHisto(uint8_t *registers, int* reghisto) {\n    uint64_t *word = (uint64_t*) registers;\n    uint8_t *bytes;\n    int j;\n\n    for (j = 0; j < HLL_REGISTERS/8; j++) {\n        if (*word == 0) {\n            reghisto[0] += 8;\n        } else {\n            bytes = (uint8_t*) word;\n            reghisto[bytes[0]]++;\n            reghisto[bytes[1]]++;\n            reghisto[bytes[2]]++;\n            reghisto[bytes[3]]++;\n            reghisto[bytes[4]]++;\n            reghisto[bytes[5]]++;\n            reghisto[bytes[6]]++;\n            reghisto[bytes[7]]++;\n        }\n        word++;\n    }\n}\n\n/* Helper function sigma as defined in\n * \"New cardinality estimation algorithms for HyperLogLog sketches\"\n * Otmar Ertl, arXiv:1702.01284 */\ndouble hllSigma(double x) {\n    if (x == 1.) return INFINITY;\n    double zPrime;\n    double y = 1;\n    double z = x;\n    do {\n        x *= x;\n        zPrime = z;\n        z += x * y;\n        y += y;\n    } while(zPrime != z);\n    return z;\n}\n\n/* Helper function tau as defined in\n * \"New cardinality estimation algorithms for HyperLogLog sketches\"\n * Otmar Ertl, arXiv:1702.01284 */\ndouble hllTau(double x) {\n    if (x == 0. || x == 1.) return 0.;\n    double zPrime;\n    double y = 1.0;\n    double z = 1 - x;\n    do {\n        x = sqrt(x);\n        zPrime = z;\n        y *= 0.5;\n        z -= pow(1 - x, 2)*y;\n    } while(zPrime != z);\n    return z / 3;\n}\n\n/* Return the approximated cardinality of the set based on the harmonic\n * mean of the registers values. 
'hdr' points to the start of the SDS\n * representing the String object holding the HLL representation.\n *\n * If the sparse representation of the HLL object is not valid, the integer\n * pointed to by 'invalid' is set to non-zero, otherwise it is left untouched.\n *\n * hllCount() supports a special internal-only encoding of HLL_RAW, that\n * is, hdr->registers will point to a uint8_t array of HLL_REGISTERS elements.\n * This is useful to speed up PFCOUNT when called against multiple\n * keys (no need to work with the 6-bit integer encoding). */\nuint64_t hllCount(struct hllhdr *hdr, int *invalid) {\n    double m = HLL_REGISTERS;\n    double E;\n    int j;\n    /* Note that reghisto size could be just HLL_Q+2, because HLL_Q+1 is\n     * the maximum frequency of the \"000...1\" sequence the hash function is\n     * able to return. However it is slow to check the sanity of the\n     * input: instead we allocate the histogram array at a safe size, so\n     * overflows will just write data to wrong, but correctly allocated, places. */\n    int reghisto[64] = {0};\n\n    /* Compute register histogram */\n    if (hdr->encoding == HLL_DENSE) {\n        hllDenseRegHisto(hdr->registers,reghisto);\n    } else if (hdr->encoding == HLL_SPARSE) {\n        hllSparseRegHisto(hdr->registers,\n                         sdslen((sds)hdr)-HLL_HDR_SIZE,invalid,reghisto);\n    } else if (hdr->encoding == HLL_RAW) {\n        hllRawRegHisto(hdr->registers,reghisto);\n    } else {\n        serverPanic(\"Unknown HyperLogLog encoding in hllCount()\");\n    }\n\n    /* Estimate cardinality from register histogram. 
See:\n     * \"New cardinality estimation algorithms for HyperLogLog sketches\"\n     * Otmar Ertl, arXiv:1702.01284 */\n    double z = m * hllTau((m-reghisto[HLL_Q+1])/(double)m);\n    for (j = HLL_Q; j >= 1; --j) {\n        z += reghisto[j];\n        z *= 0.5;\n    }\n    z += m * hllSigma(reghisto[0]/(double)m);\n    E = llroundl(HLL_ALPHA_INF*m*m/z);\n\n    return (uint64_t) E;\n}\n\n/* Call hllDenseAdd() or hllSparseAdd() according to the HLL encoding. */\nint hllAdd(robj *o, unsigned char *ele, size_t elesize) {\n    struct hllhdr *hdr = o->ptr;\n    switch(hdr->encoding) {\n    case HLL_DENSE: return hllDenseAdd(hdr->registers,ele,elesize);\n    case HLL_SPARSE: return hllSparseAdd(o,ele,elesize);\n    default: return -1; /* Invalid representation. */\n    }\n}\n\n#ifdef HAVE_AVX2\n/* A specialized version of hllMergeDense, optimized for default configurations.\n * \n * Requirements:\n * 1) HLL_REGISTERS == 16384 && HLL_BITS == 6\n * 2) The CPU supports AVX2 (checked at runtime in hllMergeDense)\n *\n * reg_raw: pointer to the raw representation array (16384 bytes, one byte per register)\n * reg_dense: pointer to the dense representation array (12288 bytes, 6 bits per register)\n */\nATTRIBUTE_TARGET_AVX2\nvoid hllMergeDenseAVX2(uint8_t *reg_raw, const uint8_t *reg_dense) {\n    const __m256i shuffle = _mm256_setr_epi8( //\n        4, 5, 6, -1,                          //\n        7, 8, 9, -1,                          //\n        10, 11, 12, -1,                       //\n        13, 14, 15, -1,                       //\n        0, 1, 2, -1,                          //\n        3, 4, 5, -1,                          //\n        6, 7, 8, -1,                          //\n        9, 10, 11, -1                         //\n    );\n\n    /* Merge the first 8 registers (6 bytes) normally \n     * as the AVX2 algorithm needs 4 padding bytes at the start */\n    uint8_t val;\n    for (int i = 0; i < 8; i++) {\n        HLL_DENSE_GET_REGISTER(val, reg_dense, i);\n        
reg_raw[i] = MAX(reg_raw[i], val);\n    }\n\n    /* Dense to Raw:\n     *\n     * 4 registers in 3 bytes:\n     * {bbaaaaaa|ccccbbbb|ddddddcc}\n     * \n     * LOAD 32 bytes (32 registers) per iteration:\n     * 4(padding) + 12(16 registers) + 12(16 registers) + 4(padding)\n     * {XXXX|AAAB|BBCC|CDDD|EEEF|FFGG|GHHH|XXXX}\n     * \n     * SHUFFLE to:\n     * {AAA0|BBB0|CCC0|DDD0|EEE0|FFF0|GGG0|HHH0}\n     * {bbaaaaaa|ccccbbbb|ddddddcc|00000000} x8\n     *\n     * AVX2 is little endian, each of the 8 groups is a little-endian int32.\n     * A group (int32) contains 3 valid bytes (4 registers) and a zero byte.\n     * \n     * extract registers in each group with AND and SHIFT:\n     * {00aaaaaa|00000000|00000000|00000000} x8 (<<0)\n     * {00000000|00bbbbbb|00000000|00000000} x8 (<<2)\n     * {00000000|00000000|00cccccc|00000000} x8 (<<4)\n     * {00000000|00000000|00000000|00dddddd} x8 (<<6)\n     *\n     * merge the extracted registers with OR:\n     * {00aaaaaa|00bbbbbb|00cccccc|00dddddd} x8\n     *\n     * Finally, compute MAX(reg_raw, merged) and STORE it back to reg_raw\n     */\n\n    /* Skip 8 registers (6 bytes) */ \n    const uint8_t *r = reg_dense + 6 - 4;\n    uint8_t *t = reg_raw + 8;\n\n    for (int i = 0; i < HLL_REGISTERS / 32 - 1; ++i) {\n        __m256i x0, x;\n        x0 = _mm256_loadu_si256((__m256i *)r);\n        x = _mm256_shuffle_epi8(x0, shuffle);\n\n        __m256i a1, a2, a3, a4;\n        a1 = _mm256_and_si256(x, _mm256_set1_epi32(0x0000003f));\n        a2 = _mm256_and_si256(x, _mm256_set1_epi32(0x00000fc0));\n        a3 = _mm256_and_si256(x, _mm256_set1_epi32(0x0003f000));\n        a4 = _mm256_and_si256(x, _mm256_set1_epi32(0x00fc0000));\n\n        a2 = _mm256_slli_epi32(a2, 2);\n        a3 = _mm256_slli_epi32(a3, 4);\n        a4 = _mm256_slli_epi32(a4, 6);\n\n        __m256i y1, y2, y;\n        y1 = _mm256_or_si256(a1, a2);\n        y2 = _mm256_or_si256(a3, a4);\n        y = _mm256_or_si256(y1, y2);\n\n        __m256i z = 
_mm256_loadu_si256((__m256i *)t);\n\n        z = _mm256_max_epu8(z, y);\n\n        _mm256_storeu_si256((__m256i *)t, z);\n\n        r += 24;\n        t += 32;\n    }\n\n    /* Merge the last 24 registers normally \n     * as the AVX2 algorithm needs 4 padding bytes at the end */\n    for (int i = HLL_REGISTERS - 24; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_GET_REGISTER(val, reg_dense, i);\n        reg_raw[i] = MAX(reg_raw[i], val);\n    }\n}\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n/* A specialized version of hllMergeDense, optimized for default configurations.\n * Based on the AVX2 version.\n * \n * Requirements:\n * 1) HLL_REGISTERS == 16384 && HLL_BITS == 6\n * 2) Aarch64 CPU supports NEON (checked at runtime in hllMergeDense)\n *\n * reg_raw: pointer to the raw representation array (16384 bytes, one byte per register)\n * reg_dense: pointer to the dense representation array (12288 bytes, 6 bits per register)\n */\nvoid hllMergeDenseAarch64(uint8_t *reg_raw, const uint8_t *reg_dense) {\n    const uint8_t *r = reg_dense;\n    uint8_t *t = reg_raw;\n\n    /* Shuffle pattern to expand each 12-byte packed group (16 regs x 6 bits)\n     * to 16 bytes by inserting zeroes at bytes 3, 7, 11 and 15. */\n    const uint8x16_t shuffle = {\n        0, 1, 2, -1,\n        3, 4, 5, -1,\n        6, 7, 8, -1,\n        9, 10, 11, -1\n    };\n\n    for (int i = 0; i < HLL_REGISTERS / 16 - 1; ++i) {\n        /* Load 16 bytes (12 meaningful) and apply table; zeros fill pad positions. */\n        uint8x16_t x, x0;\n        x0 = vld1q_u8(r);\n        x = vqtbl1q_u8(x0, shuffle);\n\n        /* Treat as 4x32-bit lanes (LE); each lane now holds 3 packed bytes + one zero. */\n        uint32x4_t x32 = vreinterpretq_u32_u8(x);\n\n        /* Extract the four 6-bit fields per 32-bit lane. 
*/\n        uint32x4_t a1, a2, a3, a4;\n        a1 = vandq_u32(x32, vdupq_n_u32(0x0000003f));\n        a2 = vandq_u32(x32, vdupq_n_u32(0x00000fc0));\n        a3 = vandq_u32(x32, vdupq_n_u32(0x0003f000));\n        a4 = vandq_u32(x32, vdupq_n_u32(0x00fc0000));\n\n        /* Align fields to byte boundaries within each lane. */\n        a2 = vshlq_n_u32(a2, 2);\n        a3 = vshlq_n_u32(a3, 4);\n        a4 = vshlq_n_u32(a4, 6);\n\n        /* Combine fields per lane (32-bit). */\n        uint32x4_t y32 = vorrq_u32(vorrq_u32(a1, a2), vorrq_u32(a3, a4));\n\n        /* Reinterpret to actual 8-bit uints for comparison. */\n        uint8x16_t y = vreinterpretq_u8_u32(y32);\n\n        /* Max-merge with existing raw registers. */\n        uint8x16_t z = vld1q_u8(t);\n        z = vmaxq_u8(z, y);\n\n        /* Store the results. */\n        vst1q_u8(t, z);\n\n        r += 12;\n        t += 16;\n    }\n\n    /* Process remaining registers, we do this manually because we don't want to over-read 4 bytes */\n    uint8_t val;\n    for (int i = HLL_REGISTERS - 16; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_GET_REGISTER(val, reg_dense, i);\n        reg_raw[i] = MAX(reg_raw[i], val);\n    }\n}\n#endif /* HAVE_AARCH64_NEON */\n\n/* Merge dense-encoded registers to raw registers array. 
*/\nvoid hllMergeDense(uint8_t* reg_raw, const uint8_t* reg_dense) {\n#if HLL_REGISTERS == 16384 && HLL_BITS == 6\n#ifdef HAVE_AVX2\n    if (HLL_USE_AVX2) {\n        hllMergeDenseAVX2(reg_raw, reg_dense);\n        return;\n    }\n#endif\n#ifdef HAVE_AARCH64_NEON\n    if (HLL_USE_NEON) {\n        hllMergeDenseAarch64(reg_raw, reg_dense);\n        return;\n    }\n#endif\n#endif\n\n    uint8_t val;\n    for (int i = 0; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_GET_REGISTER(val, reg_dense, i);\n        reg_raw[i] = MAX(reg_raw[i], val);\n    }\n}\n\n/* Merge the HyperLogLog 'hll' into the array of HLL_REGISTERS uint8_t\n * registers pointed to by 'max', computing max[i] = MAX(max[i],hll[i])\n * for each register.\n *\n * The hll object must be already validated via isHLLObjectOrReply()\n * or in some other way.\n *\n * If the HyperLogLog is sparse and is found to be invalid, C_ERR\n * is returned, otherwise the function always succeeds. */\nint hllMerge(uint8_t *max, robj *hll) {\n    struct hllhdr *hdr = hll->ptr;\n    int i;\n\n    if (hdr->encoding == HLL_DENSE) {\n        hllMergeDense(max, hdr->registers);\n    } else {\n        uint8_t *p = hll->ptr, *end = p + sdslen(hll->ptr);\n        long runlen, regval;\n        int valid = 1;\n\n        p += HLL_HDR_SIZE;\n        i = 0;\n        while(p < end) {\n            if (HLL_SPARSE_IS_ZERO(p)) {\n                runlen = HLL_SPARSE_ZERO_LEN(p);\n                if ((runlen + i) > HLL_REGISTERS) { /* Overflow. */\n                    valid = 0;\n                    break;\n                }\n                i += runlen;\n                p++;\n            } else if (HLL_SPARSE_IS_XZERO(p)) {\n                runlen = HLL_SPARSE_XZERO_LEN(p);\n                if ((runlen + i) > HLL_REGISTERS) { /* Overflow. 
*/\n                    valid = 0;\n                    break;\n                }\n                i += runlen;\n                p += 2;\n            } else {\n                runlen = HLL_SPARSE_VAL_LEN(p);\n                regval = HLL_SPARSE_VAL_VALUE(p);\n                if ((runlen + i) > HLL_REGISTERS) { /* Overflow. */\n                    valid = 0;\n                    break;\n                }\n                while(runlen--) {\n                    if (regval > max[i]) max[i] = regval;\n                    i++;\n                }\n                p++;\n            }\n        }\n        if (!valid || i != HLL_REGISTERS) return C_ERR;\n    }\n    return C_OK;\n}\n\n#ifdef HAVE_AVX2\n/* A specialized version of hllDenseCompress, optimized for default configurations.\n * \n * Requirements:\n * 1) HLL_REGISTERS == 16384 && HLL_BITS == 6\n * 2) The CPU supports AVX2 (checked at runtime in hllDenseCompress)\n *\n * reg_dense: pointer to the dense representation array (12288 bytes, 6 bits per register)\n * reg_raw: pointer to the raw representation array (16384 bytes, one byte per register)\n */\nATTRIBUTE_TARGET_AVX2\nvoid hllDenseCompressAVX2(uint8_t *reg_dense, const uint8_t *reg_raw) {\n    const __m256i shuffle = _mm256_setr_epi8( //\n        0, 1, 2,                              //\n        4, 5, 6,                              //\n        8, 9, 10,                             //\n        12, 13, 14,                           //\n        -1, -1, -1, -1,                       //\n        0, 1, 2,                              //\n        4, 5, 6,                              //\n        8, 9, 10,                             //\n        12, 13, 14,                           //\n        -1, -1, -1, -1                        //\n    );\n\n    /* Raw to Dense:\n     * \n     * LOAD 32 bytes (32 registers) per iteration:\n     * {00aaaaaa|00bbbbbb|00cccccc|00dddddd} x8\n     *\n     * AVX2 is little endian, each of the 8 groups is a little-endian int32.\n     * A 
group (int32) contains 4 registers.\n     * \n     * move the registers to correct positions with AND and SHIFT:\n     * {00aaaaaa|00000000|00000000|00000000} x8 (>>0)\n     * {bb000000|0000bbbb|00000000|00000000} x8 (>>2)\n     * {00000000|cccc0000|000000cc|00000000} x8 (>>4)\n     * {00000000|00000000|dddddd00|00000000} x8 (>>6)\n     *\n     * merge the registers with OR:\n     * {bbaaaaaa|ccccbbbb|ddddddcc|00000000} x8\n     * {AAA0|BBB0|CCC0|DDD0|EEE0|FFF0|GGG0|HHH0}\n     *\n     * SHUFFLE to:\n     * {AAAB|BBCC|CDDD|0000|EEEF|FFGG|GHHH|0000}\n     *\n     * STORE the lower half and higher half respectively:\n     * AAABBBCCCDDD0000 \n     *             EEEFFFGGGHHH0000\n     * AAABBBCCCDDDEEEFFFGGGHHH0000\n     *\n     * Note that the last 4 bytes are padding bytes.\n     */\n\n    const uint8_t *r = reg_raw;\n    uint8_t *t = reg_dense;\n\n    for (int i = 0; i < HLL_REGISTERS / 32 - 1; ++i) {\n        __m256i x = _mm256_loadu_si256((__m256i *)r);\n\n        __m256i a1, a2, a3, a4;\n        a1 = _mm256_and_si256(x, _mm256_set1_epi32(0x0000003f));\n        a2 = _mm256_and_si256(x, _mm256_set1_epi32(0x00003f00));\n        a3 = _mm256_and_si256(x, _mm256_set1_epi32(0x003f0000));\n        a4 = _mm256_and_si256(x, _mm256_set1_epi32(0x3f000000));\n\n        a2 = _mm256_srli_epi32(a2, 2);\n        a3 = _mm256_srli_epi32(a3, 4);\n        a4 = _mm256_srli_epi32(a4, 6);\n\n        __m256i y1, y2, y;\n        y1 = _mm256_or_si256(a1, a2);\n        y2 = _mm256_or_si256(a3, a4);\n        y = _mm256_or_si256(y1, y2);\n        y = _mm256_shuffle_epi8(y, shuffle);\n\n        __m128i lower, higher;\n        lower = _mm256_castsi256_si128(y);\n        higher = _mm256_extracti128_si256(y, 1);\n\n        _mm_storeu_si128((__m128i *)t, lower);\n        _mm_storeu_si128((__m128i *)(t + 12), higher);\n\n        r += 32;\n        t += 24;\n    }\n\n    /* Merge the last 32 registers normally \n     * as the AVX2 algorithm needs 4 padding bytes at the end */\n    for (int i = 
HLL_REGISTERS - 32; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_SET_REGISTER(reg_dense, i, reg_raw[i]);\n    }\n}\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n/* A specialized version of hllDenseCompress, optimized for default configurations.\n * Based on the AVX2 version.\n * \n * Requirements:\n * 1) HLL_REGISTERS == 16384 && HLL_BITS == 6\n * 2) Aarch64 CPU supports NEON (checked at runtime in hllDenseCompress)\n *\n * reg_dense: pointer to the dense representation array (12288 bytes, 6 bits per register)\n * reg_raw: pointer to the raw representation array (16384 bytes, one byte per register)\n */\nvoid hllDenseCompressAarch64(uint8_t *reg_dense, const uint8_t *reg_raw) {\n    const uint8_t *r = reg_raw;\n    uint8_t *t = reg_dense;\n\n    /* Shuffle pattern to collapse 16 raw bytes (16 regs x 8 bits)\n     * into 12 bytes (16 regs x 6 bits) by dropping padding bytes 3, 7, 11, 15. */\n    const uint8x16_t shuffle = {\n        0, 1, 2,\n        4, 5, 6,\n        8, 9, 10,\n        12, 13, 14,\n        -1, -1, -1, -1\n    };\n\n    for (int i = 0; i < HLL_REGISTERS / 16 - 1; ++i) {\n        /* Load 16 raw registers as four 32-bit lanes (LE). */\n        const uint32x4_t x = vld1q_u32((const uint32_t *)r);\n\n        /* Extract the four 6-bit fields per 32-bit lane. */\n        uint32x4_t a1, a2, a3, a4;\n        a1 = vandq_u32(x, vdupq_n_u32(0x0000003f));\n        a2 = vandq_u32(x, vdupq_n_u32(0x00003f00));\n        a3 = vandq_u32(x, vdupq_n_u32(0x003f0000));\n        a4 = vandq_u32(x, vdupq_n_u32(0x3f000000));\n\n        /* Align fields to packed positions within each lane. */\n        a2 = vshrq_n_u32(a2, 2);\n        a3 = vshrq_n_u32(a3, 4);\n        a4 = vshrq_n_u32(a4, 6);\n\n        /* Combine fields per lane (32-bit). */\n        uint32x4_t y32 = vorrq_u32(vorrq_u32(a1, a2), vorrq_u32(a3, a4));\n\n        /* Reinterpret to 8-bit uints; each lane now holds 3 packed bytes + one pad. 
*/\n        uint8x16_t y = vreinterpretq_u8_u32(y32);\n\n        /* Compact to 12 bytes by removing each lane's pad byte. */\n        y = vqtbl1q_u8(y, shuffle);\n\n        /* Store the results. */\n        vst1q_u8(t, y);\n\n        r += 16;\n        t += 12;\n    }\n\n    /* Merge the last 16 registers normally \n     * as the NEON algorithm needs 4 padding bytes at the end */\n    for (int i = HLL_REGISTERS - 16; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_SET_REGISTER(reg_dense, i, reg_raw[i]);\n    }\n}\n#endif\n\n/* Compress raw registers to dense representation. */\nvoid hllDenseCompress(uint8_t *reg_dense, const uint8_t *reg_raw) {\n#if HLL_REGISTERS == 16384 && HLL_BITS == 6\n#ifdef HAVE_AVX2\n    if (HLL_USE_AVX2) {\n        hllDenseCompressAVX2(reg_dense, reg_raw);\n        return;\n    }\n#endif\n\n#ifdef HAVE_AARCH64_NEON\n    if (HLL_USE_NEON) {\n        hllDenseCompressAarch64(reg_dense, reg_raw);\n        return;\n    }\n#endif\n#endif\n\n    for (int i = 0; i < HLL_REGISTERS; i++) {\n        HLL_DENSE_SET_REGISTER(reg_dense, i, reg_raw[i]);\n    }\n}\n\n/* ========================== HyperLogLog commands ========================== */\n\n/* Create an HLL object. We always create the HLL using sparse encoding.\n * This will be upgraded to the dense representation as needed. */\nrobj *createHLLObject(void) {\n    robj *o;\n    struct hllhdr *hdr;\n    sds s;\n    uint8_t *p;\n    int sparselen = HLL_HDR_SIZE +\n                    (((HLL_REGISTERS+(HLL_SPARSE_XZERO_MAX_LEN-1)) /\n                     HLL_SPARSE_XZERO_MAX_LEN)*2);\n    int aux;\n\n    /* Populate the sparse representation with as many XZERO opcodes as\n     * needed to represent all the registers. 
*/\n    aux = HLL_REGISTERS;\n    s = sdsnewlen(NULL,sparselen);\n    p = (uint8_t*)s + HLL_HDR_SIZE;\n    while(aux) {\n        int xzero = HLL_SPARSE_XZERO_MAX_LEN;\n        if (xzero > aux) xzero = aux;\n        HLL_SPARSE_XZERO_SET(p,xzero);\n        p += 2;\n        aux -= xzero;\n    }\n    serverAssert((p-(uint8_t*)s) == sparselen);\n\n    /* Create the actual object. */\n    o = createObject(OBJ_STRING,s);\n    hdr = o->ptr;\n    memcpy(hdr->magic,\"HYLL\",4);\n    hdr->encoding = HLL_SPARSE;\n    return o;\n}\n\n/* Check if the object is a String with a valid HLL representation.\n * Return C_OK if this is true, otherwise reply to the client\n * with an error and return C_ERR. */\nint isHLLObjectOrReply(client *c, robj *o) {\n    struct hllhdr *hdr;\n\n    /* Key exists, check type */\n    if (checkType(c,o,OBJ_STRING))\n        return C_ERR; /* Error already sent. */\n\n    if (!sdsEncodedObject(o)) goto invalid;\n    if (stringObjectLen(o) < sizeof(*hdr)) goto invalid;\n    hdr = o->ptr;\n\n    /* Magic should be \"HYLL\". */\n    if (hdr->magic[0] != 'H' || hdr->magic[1] != 'Y' ||\n        hdr->magic[2] != 'L' || hdr->magic[3] != 'L') goto invalid;\n\n    if (hdr->encoding > HLL_MAX_ENCODING) goto invalid;\n\n    /* Dense representation string length should match exactly. */\n    if (hdr->encoding == HLL_DENSE &&\n        stringObjectLen(o) != HLL_DENSE_SIZE) goto invalid;\n\n    /* All tests passed. */\n    return C_OK;\n\ninvalid:\n    addReplyError(c,\"-WRONGTYPE Key is not a valid \"\n               \"HyperLogLog string value.\");\n    return C_ERR;\n}\n\n/* PFADD var ele ele ele ... 
ele => :0 or :1 */\nvoid pfaddCommand(client *c) {\n    uint64_t oldlen;\n    dictEntryLink link;\n    size_t oldsize = 0;\n    kvobj *kv = lookupKeyWriteWithLink(c->db,c->argv[1], &link);\n\n    struct hllhdr *hdr;\n    int updated = 0, j;\n\n    if (kv == NULL) {\n        /* Create the key with a string value of the exact length to\n         * hold our HLL data structure. sdsnewlen() when NULL is passed\n         * is guaranteed to return bytes initialized to zero. */\n        robj *o = createHLLObject();\n        kv = dbAddByLink(c->db, c->argv[1], &o, &link);\n        updated++;\n    } else {\n        if (isHLLObjectOrReply(c,kv) != C_OK) return;\n        kv = dbUnshareStringValue(c->db,c->argv[1],kv);\n    }\n    oldlen = stringObjectLen(kv);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n\n    /* Perform the low level ADD operation for every element. */\n    for (j = 2; j < c->argc; j++) {\n        int retval = hllAdd(kv, (unsigned char*)c->argv[j]->ptr,\n                               sdslen(c->argv[j]->ptr));\n        switch(retval) {\n        case 1:\n            updated++;\n            break;\n        case -1:\n            addReplyError(c,invalid_hll_err);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n            return;\n        }\n    }\n\n    hdr = kv->ptr;\n    updateKeysizesHist(c->db, OBJ_STRING, oldlen, stringObjectLen(kv));\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n    if (updated) {\n        HLL_INVALIDATE_CACHE(hdr);\n        keyModified(c,c->db,c->argv[1],kv,1);\n        notifyKeyspaceEvent(NOTIFY_STRING,\"pfadd\",c->argv[1],c->db->id);\n        server.dirty += updated;\n    }\n    addReply(c, updated ? shared.cone : shared.czero);\n}\n\n/* PFCOUNT var -> approximated cardinality of set. 
*/\nvoid pfcountCommand(client *c) {\n    struct hllhdr *hdr;\n    uint64_t card;\n\n    /* Case 1: multiple keys, cardinality of the union.\n     *\n     * When multiple keys are specified, PFCOUNT actually computes\n     * the cardinality of the merge of the N HLLs specified. */\n    if (c->argc > 2) {\n        uint8_t max[HLL_HDR_SIZE+HLL_REGISTERS], *registers;\n        int j;\n\n        /* Compute an HLL with M[i] = MAX(M[i]_j). */\n        memset(max,0,sizeof(max));\n        hdr = (struct hllhdr*) max;\n        hdr->encoding = HLL_RAW; /* Special internal-only encoding. */\n        registers = max + HLL_HDR_SIZE;\n        for (j = 1; j < c->argc; j++) {\n            /* Check type and size. */\n            kvobj *o = lookupKeyRead(c->db,c->argv[j]);\n            if (o == NULL) continue; /* Assume empty HLL for non existing var.*/\n            if (isHLLObjectOrReply(c,o) != C_OK) return;\n\n            /* Merge this HLL with our 'max' HLL by setting max[i]\n             * to MAX(max[i],hll[i]). */\n            if (hllMerge(registers,o) == C_ERR) {\n                addReplyError(c,invalid_hll_err);\n                return;\n            }\n        }\n\n        /* Compute cardinality of the resulting set. */\n        addReplyLongLong(c,hllCount(hdr,NULL));\n        return;\n    }\n\n    /* Case 2: cardinality of the single HLL.\n     *\n     * The user specified a single key. Either return the cached value\n     * or compute one and update the cache.\n     *\n     * Since an HLL is a regular Redis string type value, updating the cache does\n     * modify the value. We do a lookupKeyRead anyway since this is flagged as a\n     * read-only command. The difference is that with lookupKeyWrite, a\n     * logically expired key on a replica is deleted, while with lookupKeyRead\n     * it isn't, but the lookup returns NULL either way if the key is logically\n     * expired, which is what matters here. 
*/\n    kvobj *o = lookupKeyRead(c->db, c->argv[1]);\n    if (o == NULL) {\n        /* No key? Cardinality is zero since no element was added, otherwise\n         * we would have a key as PFADD creates it as a side effect. */\n        addReply(c,shared.czero);\n    } else {\n        if (isHLLObjectOrReply(c,o) != C_OK) return;\n        o = dbUnshareStringValue(c->db,c->argv[1],o);\n\n        /* Check if the cached cardinality is valid. */\n        hdr = o->ptr;\n        if (HLL_VALID_CACHE(hdr)) {\n            /* Just return the cached value. */\n            card = (uint64_t)hdr->card[0];\n            card |= (uint64_t)hdr->card[1] << 8;\n            card |= (uint64_t)hdr->card[2] << 16;\n            card |= (uint64_t)hdr->card[3] << 24;\n            card |= (uint64_t)hdr->card[4] << 32;\n            card |= (uint64_t)hdr->card[5] << 40;\n            card |= (uint64_t)hdr->card[6] << 48;\n            card |= (uint64_t)hdr->card[7] << 56;\n        } else {\n            int invalid = 0;\n            /* Recompute it and update the cached value. */\n            card = hllCount(hdr,&invalid);\n            if (invalid) {\n                addReplyError(c,invalid_hll_err);\n                return;\n            }\n            hdr->card[0] = card & 0xff;\n            hdr->card[1] = (card >> 8) & 0xff;\n            hdr->card[2] = (card >> 16) & 0xff;\n            hdr->card[3] = (card >> 24) & 0xff;\n            hdr->card[4] = (card >> 32) & 0xff;\n            hdr->card[5] = (card >> 40) & 0xff;\n            hdr->card[6] = (card >> 48) & 0xff;\n            hdr->card[7] = (card >> 56) & 0xff;\n            /* This is considered a read-only command even if the cached value\n             * may be modified; given that the HLL is a Redis string,\n             * we need to propagate the change. */\n            keyModified(c,c->db,c->argv[1],o,1);\n            server.dirty++;\n        }\n        addReplyLongLong(c,card);\n    }\n}\n\n/* PFMERGE dest src1 src2 src3 ... 
srcN => OK */\nvoid pfmergeCommand(client *c) {\n    uint8_t max[HLL_REGISTERS];\n    struct hllhdr *hdr;\n    int j;\n    int use_dense = 0; /* Use dense representation as target? */\n    size_t oldsize = 0;\n\n    /* Compute an HLL with M[i] = MAX(M[i]_j).\n     * We store the maximum into the max array of registers. We'll write\n     * it to the target variable later. */\n    memset(max,0,sizeof(max));\n    for (j = 1; j < c->argc; j++) {\n        /* Check type and size. */\n        kvobj *o = lookupKeyRead(c->db, c->argv[j]);\n        if (o == NULL) continue; /* Assume empty HLL for non existing var. */\n        if (isHLLObjectOrReply(c, o) != C_OK) return;\n\n        /* If at least one involved HLL is dense, use the dense representation\n         * as target ASAP to save time and avoid the conversion step. */\n        hdr = o->ptr;\n        if (hdr->encoding == HLL_DENSE) use_dense = 1;\n\n        /* Merge this HLL with our 'max' HLL by setting max[i]\n         * to MAX(max[i],hll[i]). */\n        if (hllMerge(max,o) == C_ERR) {\n            addReplyError(c,invalid_hll_err);\n            return;\n        }\n    }\n\n    /* Create / unshare the destination key's value if needed. */\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db,c->argv[1],&link);\n    if (kv == NULL) {\n        /* Create the key with a string value of the exact length to\n         * hold our HLL data structure. sdsnewlen() when NULL is passed\n         * is guaranteed to return bytes initialized to zero. */\n        robj *o = createHLLObject();\n        kv = dbAddByLink(c->db, c->argv[1], &o, &link);\n    } else {\n        /* If key exists we are sure it's of the right type/size\n         * since we checked when merging the different HLLs, so we\n         * don't check again. 
*/\n        kv = dbUnshareStringValue(c->db,c->argv[1],kv);\n    }\n\n    uint64_t oldLen = stringObjectLen(kv);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n\n    /* Convert the destination object to dense representation if at least\n     * one of the inputs was dense. */\n    if (use_dense && hllSparseToDense(kv) == C_ERR) {\n        addReplyError(c,invalid_hll_err);\n        return;\n    }\n\n    /* Write the resulting HLL to the destination HLL registers and\n     * invalidate the cached value. */\n    if (use_dense) {\n        hdr = kv->ptr;\n        hllDenseCompress(hdr->registers, max);\n    } else {\n        for (j = 0; j < HLL_REGISTERS; j++) {\n            if (max[j] == 0) continue;\n            hdr = kv->ptr;\n            switch (hdr->encoding) {\n                case HLL_DENSE: hllDenseSet(hdr->registers,j,max[j]); break;\n                case HLL_SPARSE: hllSparseSet(kv,j,max[j]); break;\n            }\n        }\n    }\n    hdr = kv->ptr; /* kv->ptr may be different now, as a side effect of the\n                     last hllSparseSet() call. */\n    HLL_INVALIDATE_CACHE(hdr);\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n    keyModified(c,c->db,c->argv[1],kv,1);\n    /* We generate a PFADD event for PFMERGE for semantic simplicity\n     * since in theory this is a mass-add of elements. */\n    notifyKeyspaceEvent(NOTIFY_STRING,\"pfadd\",c->argv[1],c->db->id);\n\n    updateKeysizesHist(c->db, OBJ_STRING, oldLen, stringObjectLen(kv));\n    server.dirty++;\n    addReply(c,shared.ok);\n}\n\n/* ========================== Testing / Debugging  ========================== */\n\n/* PFSELFTEST\n * This command performs a self-test of the HLL registers implementation.\n * Something that is not easy to test from the outside. 
*/\n#define HLL_TEST_CYCLES 1000\nvoid pfselftestCommand(client *c) {\n    unsigned int j, i;\n    sds bitcounters = sdsnewlen(NULL,HLL_DENSE_SIZE);\n    struct hllhdr *hdr = (struct hllhdr*) bitcounters, *hdr2;\n    robj *o = NULL;\n    uint8_t bytecounters[HLL_REGISTERS];\n\n    /* Test 1: access registers.\n     * The test is conceived to verify that the different counters of our data\n     * structure are accessible and that setting their values both retains\n     * the correct value and leaves adjacent values untouched. */\n    for (j = 0; j < HLL_TEST_CYCLES; j++) {\n        /* Set the HLL counters and an array of unsigned bytes of the\n         * same size to the same set of random values. */\n        for (i = 0; i < HLL_REGISTERS; i++) {\n            unsigned int r = rand() & HLL_REGISTER_MAX;\n\n            bytecounters[i] = r;\n            HLL_DENSE_SET_REGISTER(hdr->registers,i,r);\n        }\n        /* Check that we are able to retrieve the same values. */\n        for (i = 0; i < HLL_REGISTERS; i++) {\n            unsigned int val;\n\n            HLL_DENSE_GET_REGISTER(val,hdr->registers,i);\n            if (val != bytecounters[i]) {\n                addReplyErrorFormat(c,\n                    \"TESTFAILED Register %d should be %d but is %d\",\n                    i, (int) bytecounters[i], (int) val);\n                goto cleanup;\n            }\n        }\n    }\n\n    /* Test 2: approximation error.\n     * The test adds unique elements and checks that the estimated value\n     * is always within reasonable bounds.\n     *\n     * We check that the error is smaller than a few times the expected\n     * standard error, to make it very unlikely for the test to fail because\n     * of a \"bad\" run.\n     *\n     * The test is performed with both dense and sparse HLLs at the same\n     * time also verifying that the computed cardinality is the same. 
*/\n    memset(hdr->registers,0,HLL_DENSE_SIZE-HLL_HDR_SIZE);\n    o = createHLLObject();\n    double relerr = 1.04/sqrt(HLL_REGISTERS);\n    int64_t checkpoint = 1;\n    uint64_t seed = (uint64_t)rand() | (uint64_t)rand() << 32;\n    uint64_t ele;\n    for (j = 1; j <= 10000000; j++) {\n        ele = j ^ seed;\n        hllDenseAdd(hdr->registers,(unsigned char*)&ele,sizeof(ele));\n        hllAdd(o,(unsigned char*)&ele,sizeof(ele));\n\n        /* Make sure that for small cardinalities we use sparse\n         * encoding. */\n        if (j == checkpoint && j < server.hll_sparse_max_bytes/2) {\n            hdr2 = o->ptr;\n            if (hdr2->encoding != HLL_SPARSE) {\n                addReplyError(c, \"TESTFAILED sparse encoding not used\");\n                goto cleanup;\n            }\n        }\n\n        /* Check that dense and sparse representations agree. */\n        if (j == checkpoint && hllCount(hdr,NULL) != hllCount(o->ptr,NULL)) {\n                addReplyError(c, \"TESTFAILED dense/sparse disagree\");\n                goto cleanup;\n        }\n\n        /* Check error. */\n        if (j == checkpoint) {\n            int64_t abserr = checkpoint - (int64_t)hllCount(hdr,NULL);\n            uint64_t maxerr = ceil(relerr*6*checkpoint);\n\n            /* Adjust the max error we expect for cardinality 10\n             * since from time to time it is statistically likely to get\n             * much higher error due to collisions, resulting in a false\n             * positive. */\n            if (j == 10) maxerr = 1;\n\n            if (abserr < 0) abserr = -abserr;\n            if (abserr > (int64_t)maxerr) {\n                addReplyErrorFormat(c,\n                    \"TESTFAILED Too big error. card:%llu abserr:%llu\",\n                    (unsigned long long) checkpoint,\n                    (unsigned long long) abserr);\n                goto cleanup;\n            }\n            checkpoint *= 10;\n        }\n    }\n\n    /* Success! 
*/\n    addReply(c,shared.ok);\n\ncleanup:\n    sdsfree(bitcounters);\n    if (o) decrRefCount(o);\n}\n\n/* Different debugging-related operations about the HLL implementation.\n *\n * PFDEBUG GETREG <key>\n * PFDEBUG DECODE <key>\n * PFDEBUG ENCODING <key>\n * PFDEBUG TODENSE <key>\n * PFDEBUG SIMD (ON|OFF)\n */\nvoid pfdebugCommand(client *c) {\n    char *cmd = c->argv[1]->ptr;\n    struct hllhdr *hdr;\n    kvobj *o;\n    int j;\n    size_t oldsize = 0;\n\n    if (!strcasecmp(cmd, \"simd\")) {\n        if (c->argc != 3) goto arityerr;\n\n        if (!strcasecmp(c->argv[2]->ptr, \"on\")) {\n#if defined(HAVE_AVX2) || defined(HAVE_AARCH64_NEON)\n            simd_enabled = 1;\n#endif\n        } else if (!strcasecmp(c->argv[2]->ptr, \"off\")) {\n#if defined(HAVE_AVX2) || defined(HAVE_AARCH64_NEON)\n            simd_enabled = 0;\n#endif\n        } else {\n            addReplyError(c, \"Argument must be ON or OFF\");\n            return;\n        }\n\n        addReplyStatus(c, HLL_USE_AVX2 || HLL_USE_NEON ? \"enabled\" : \"disabled\");\n\n        return;\n    }\n\n    o = lookupKeyWrite(c->db,c->argv[2]);\n    if (o == NULL) {\n        addReplyError(c,\"The specified key does not exist\");\n        return;\n    }\n    if (isHLLObjectOrReply(c,o) != C_OK) return;\n    o = dbUnshareStringValue(c->db,c->argv[2],o);\n    hdr = o->ptr;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n\n    /* PFDEBUG GETREG <key> */\n    if (!strcasecmp(cmd,\"getreg\")) {\n        if (c->argc != 3) goto arityerr;\n\n        if (hdr->encoding == HLL_SPARSE) {\n            uint64_t oldlen = (uint64_t) stringObjectLen(o);\n            if (hllSparseToDense(o) == C_ERR) {\n                addReplyError(c,invalid_hll_err);\n                return;\n            }\n            updateKeysizesHist(c->db, OBJ_STRING, oldlen, stringObjectLen(o));\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[2]->ptr), o, oldsize, 
kvobjAllocSize(o));\n            server.dirty++; /* Force propagation on encoding change. */\n        }\n\n        hdr = o->ptr;\n        addReplyArrayLen(c,HLL_REGISTERS);\n        for (j = 0; j < HLL_REGISTERS; j++) {\n            uint8_t val;\n\n            HLL_DENSE_GET_REGISTER(val,hdr->registers,j);\n            addReplyLongLong(c,val);\n        }\n    }\n    /* PFDEBUG DECODE <key> */\n    else if (!strcasecmp(cmd,\"decode\")) {\n        if (c->argc != 3) goto arityerr;\n\n        uint8_t *p = o->ptr, *end = p+sdslen(o->ptr);\n        sds decoded = sdsempty();\n\n        if (hdr->encoding != HLL_SPARSE) {\n            sdsfree(decoded);\n            addReplyError(c,\"HLL encoding is not sparse\");\n            return;\n        }\n\n        p += HLL_HDR_SIZE;\n        while(p < end) {\n            int runlen, regval;\n\n            if (HLL_SPARSE_IS_ZERO(p)) {\n                runlen = HLL_SPARSE_ZERO_LEN(p);\n                p++;\n                decoded = sdscatprintf(decoded,\"z:%d \",runlen);\n            } else if (HLL_SPARSE_IS_XZERO(p)) {\n                runlen = HLL_SPARSE_XZERO_LEN(p);\n                p += 2;\n                decoded = sdscatprintf(decoded,\"Z:%d \",runlen);\n            } else {\n                runlen = HLL_SPARSE_VAL_LEN(p);\n                regval = HLL_SPARSE_VAL_VALUE(p);\n                p++;\n                decoded = sdscatprintf(decoded,\"v:%d,%d \",regval,runlen);\n            }\n        }\n        decoded = sdstrim(decoded,\" \");\n        addReplyBulkCBuffer(c,decoded,sdslen(decoded));\n        sdsfree(decoded);\n    }\n    /* PFDEBUG ENCODING <key> */\n    else if (!strcasecmp(cmd,\"encoding\")) {\n        char *encodingstr[2] = {\"dense\",\"sparse\"};\n        if (c->argc != 3) goto arityerr;\n\n        addReplyStatus(c,encodingstr[hdr->encoding]);\n    }\n    /* PFDEBUG TODENSE <key> */\n    else if (!strcasecmp(cmd,\"todense\")) {\n        int conv = 0;\n        if (c->argc != 3) goto arityerr;\n\n        if 
(hdr->encoding == HLL_SPARSE) {\n            int64_t oldlen = (int64_t) stringObjectLen(o);\n            if (hllSparseToDense(o) == C_ERR) {\n                addReplyError(c,invalid_hll_err);\n                return;\n            }\n            updateKeysizesHist(c->db, OBJ_STRING, oldlen, stringObjectLen(o));\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[2]->ptr), o, oldsize, kvobjAllocSize(o));\n            conv = 1;\n            server.dirty++; /* Force propagation on encoding change. */\n        }\n        addReply(c,conv ? shared.cone : shared.czero);\n    } else {\n        addReplyErrorFormat(c,\"Unknown PFDEBUG subcommand '%s'\", cmd);\n    }\n    return;\n\narityerr:\n    addReplyErrorFormat(c,\n        \"Wrong number of arguments for the '%s' subcommand\",cmd);\n}\n\n"
  },
  {
    "path": "src/intset.c",
    "content": "/*\n * Copyright (c) 2009-2012, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2009-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"intset.h\"\n#include \"zmalloc.h\"\n#include \"endianconv.h\"\n#include \"redisassert.h\"\n\n/* Note that these encodings are ordered, so:\n * INTSET_ENC_INT16 < INTSET_ENC_INT32 < INTSET_ENC_INT64. 
*/\n#define INTSET_ENC_INT16 (sizeof(int16_t))\n#define INTSET_ENC_INT32 (sizeof(int32_t))\n#define INTSET_ENC_INT64 (sizeof(int64_t))\n\n/* Return the required encoding for the provided value. */\nstatic uint8_t _intsetValueEncoding(int64_t v) {\n    if (v < INT32_MIN || v > INT32_MAX)\n        return INTSET_ENC_INT64;\n    else if (v < INT16_MIN || v > INT16_MAX)\n        return INTSET_ENC_INT32;\n    else\n        return INTSET_ENC_INT16;\n}\n\n/* Return the value at pos, given an encoding. */\nstatic int64_t _intsetGetEncoded(intset *is, int pos, uint8_t enc) {\n    int64_t v64;\n    int32_t v32;\n    int16_t v16;\n\n    if (enc == INTSET_ENC_INT64) {\n        memcpy(&v64,((int64_t*)is->contents)+pos,sizeof(v64));\n        memrev64ifbe(&v64);\n        return v64;\n    } else if (enc == INTSET_ENC_INT32) {\n        memcpy(&v32,((int32_t*)is->contents)+pos,sizeof(v32));\n        memrev32ifbe(&v32);\n        return v32;\n    } else {\n        memcpy(&v16,((int16_t*)is->contents)+pos,sizeof(v16));\n        memrev16ifbe(&v16);\n        return v16;\n    }\n}\n\n/* Return the value at pos, using the configured encoding. */\nstatic int64_t _intsetGet(intset *is, int pos) {\n    return _intsetGetEncoded(is,pos,intrev32ifbe(is->encoding));\n}\n\n/* Set the value at pos, using the configured encoding. */\nstatic void _intsetSet(intset *is, int pos, int64_t value) {\n    uint32_t encoding = intrev32ifbe(is->encoding);\n\n    if (encoding == INTSET_ENC_INT64) {\n        ((int64_t*)is->contents)[pos] = value;\n        memrev64ifbe(((int64_t*)is->contents)+pos);\n    } else if (encoding == INTSET_ENC_INT32) {\n        ((int32_t*)is->contents)[pos] = value;\n        memrev32ifbe(((int32_t*)is->contents)+pos);\n    } else {\n        ((int16_t*)is->contents)[pos] = value;\n        memrev16ifbe(((int16_t*)is->contents)+pos);\n    }\n}\n\n/* Create an empty intset. 
*/\nintset *intsetNew(void) {\n    intset *is = zmalloc(sizeof(intset));\n    is->encoding = intrev32ifbe(INTSET_ENC_INT16);\n    is->length = 0;\n    return is;\n}\n\nsize_t intsetAllocSize(intset *is) {\n    uint64_t size = (uint64_t)intrev32ifbe(is->length)*intrev32ifbe(is->encoding);\n    assert(size <= SIZE_MAX - sizeof(intset));\n    return sizeof(intset)+size;\n}\n\n/* Resize the intset */\nstatic intset *intsetResize(intset *is, uint32_t len) {\n    uint64_t size = (uint64_t)len*intrev32ifbe(is->encoding);\n    assert(size <= SIZE_MAX - sizeof(intset));\n    is = zrealloc(is,sizeof(intset)+size);\n    return is;\n}\n\n/* Search for the position of \"value\". Return 1 when the value was found and\n * sets \"pos\" to the position of the value within the intset. Return 0 when\n * the value is not present in the intset and sets \"pos\" to the position\n * where \"value\" can be inserted. */\nstatic uint8_t intsetSearch(intset *is, int64_t value, uint32_t *pos) {\n    int min = 0, max = intrev32ifbe(is->length)-1, mid = -1;\n    int64_t cur = -1;\n\n    /* The value can never be found when the set is empty */\n    if (intrev32ifbe(is->length) == 0) {\n        if (pos) *pos = 0;\n        return 0;\n    } else {\n        /* Check for the case where we know we cannot find the value,\n         * but do know the insert position. 
*/\n        if (value > _intsetGet(is,max)) {\n            if (pos) *pos = intrev32ifbe(is->length);\n            return 0;\n        } else if (value < _intsetGet(is,0)) {\n            if (pos) *pos = 0;\n            return 0;\n        }\n    }\n\n    while(max >= min) {\n        mid = ((unsigned int)min + (unsigned int)max) >> 1;\n        cur = _intsetGet(is,mid);\n        if (value > cur) {\n            min = mid+1;\n        } else if (value < cur) {\n            max = mid-1;\n        } else {\n            break;\n        }\n    }\n\n    if (value == cur) {\n        if (pos) *pos = mid;\n        return 1;\n    } else {\n        if (pos) *pos = min;\n        return 0;\n    }\n}\n\n/* Upgrades the intset to a larger encoding and inserts the given integer. */\nstatic intset *intsetUpgradeAndAdd(intset *is, int64_t value) {\n    uint8_t curenc = intrev32ifbe(is->encoding);\n    uint8_t newenc = _intsetValueEncoding(value);\n    int length = intrev32ifbe(is->length);\n    int prepend = value < 0 ? 1 : 0;\n\n    /* First set new encoding and resize */\n    is->encoding = intrev32ifbe(newenc);\n    is = intsetResize(is,intrev32ifbe(is->length)+1);\n\n    /* Upgrade back-to-front so we don't overwrite values.\n     * Note that the \"prepend\" variable is used to make sure we have an empty\n     * space at either the beginning or the end of the intset. */\n    while(length--)\n        _intsetSet(is,length+prepend,_intsetGetEncoded(is,length,curenc));\n\n    /* Set the value at the beginning or the end. 
*/\n    if (prepend)\n        _intsetSet(is,0,value);\n    else\n        _intsetSet(is,intrev32ifbe(is->length),value);\n    is->length = intrev32ifbe(intrev32ifbe(is->length)+1);\n    return is;\n}\n\nstatic void intsetMoveTail(intset *is, uint32_t from, uint32_t to) {\n    void *src, *dst;\n    uint32_t bytes = intrev32ifbe(is->length)-from;\n    uint32_t encoding = intrev32ifbe(is->encoding);\n\n    if (encoding == INTSET_ENC_INT64) {\n        src = (int64_t*)is->contents+from;\n        dst = (int64_t*)is->contents+to;\n        bytes *= sizeof(int64_t);\n    } else if (encoding == INTSET_ENC_INT32) {\n        src = (int32_t*)is->contents+from;\n        dst = (int32_t*)is->contents+to;\n        bytes *= sizeof(int32_t);\n    } else {\n        src = (int16_t*)is->contents+from;\n        dst = (int16_t*)is->contents+to;\n        bytes *= sizeof(int16_t);\n    }\n    memmove(dst,src,bytes);\n}\n\n/* Insert an integer in the intset */\nintset *intsetAdd(intset *is, int64_t value, uint8_t *success) {\n    uint8_t valenc = _intsetValueEncoding(value);\n    uint32_t pos;\n    if (success) *success = 1;\n\n    /* Upgrade encoding if necessary. If we need to upgrade, we know that\n     * this value should be either appended (if > 0) or prepended (if < 0),\n     * because it lies outside the range of existing values. */\n    if (valenc > intrev32ifbe(is->encoding)) {\n        /* This always succeeds, so we don't need to curry *success. */\n        return intsetUpgradeAndAdd(is,value);\n    } else {\n        /* Abort if the value is already present in the set.\n         * This call will populate \"pos\" with the right position to insert\n         * the value when it cannot be found. 
*/\n        if (intsetSearch(is,value,&pos)) {\n            if (success) *success = 0;\n            return is;\n        }\n\n        is = intsetResize(is,intrev32ifbe(is->length)+1);\n        if (pos < intrev32ifbe(is->length)) intsetMoveTail(is,pos,pos+1);\n    }\n\n    _intsetSet(is,pos,value);\n    is->length = intrev32ifbe(intrev32ifbe(is->length)+1);\n    return is;\n}\n\n/* Delete integer from intset */\nintset *intsetRemove(intset *is, int64_t value, int *success) {\n    uint8_t valenc = _intsetValueEncoding(value);\n    uint32_t pos;\n    if (success) *success = 0;\n\n    if (valenc <= intrev32ifbe(is->encoding) && intsetSearch(is,value,&pos)) {\n        uint32_t len = intrev32ifbe(is->length);\n\n        /* We know we can delete */\n        if (success) *success = 1;\n\n        /* Overwrite value with tail and update length */\n        if (pos < (len-1)) intsetMoveTail(is,pos+1,pos);\n        is = intsetResize(is,len-1);\n        is->length = intrev32ifbe(len-1);\n    }\n    return is;\n}\n\n/* Determine whether a value belongs to this set */\nuint8_t intsetFind(intset *is, int64_t value) {\n    uint8_t valenc = _intsetValueEncoding(value);\n    return valenc <= intrev32ifbe(is->encoding) && intsetSearch(is,value,NULL);\n}\n\n/* Return random member */\nint64_t intsetRandom(intset *is) {\n    uint32_t len = intrev32ifbe(is->length);\n    assert(len); /* avoid division by zero on corrupt intset payload. */\n    return _intsetGet(is,rand()%len);\n}\n\n/* Return the largest member. */\nint64_t intsetMax(intset *is) {\n    uint32_t len = intrev32ifbe(is->length);\n    return _intsetGet(is, len - 1);\n}\n\n/* Return the smallest member. */\nint64_t intsetMin(intset *is) {\n    return _intsetGet(is, 0);\n}\n\n/* Get the value at the given position. When this position is\n * out of range the function returns 0, when in range it returns 1. 
*/\nuint8_t intsetGet(intset *is, uint32_t pos, int64_t *value) {\n    if (pos < intrev32ifbe(is->length)) {\n        *value = _intsetGet(is,pos);\n        return 1;\n    }\n    return 0;\n}\n\n/* Return intset length */\nuint32_t intsetLen(const intset *is) {\n    return intrev32ifbe(is->length);\n}\n\n/* Return intset blob size in bytes. */\nsize_t intsetBlobLen(intset *is) {\n    return sizeof(intset)+(size_t)intrev32ifbe(is->length)*intrev32ifbe(is->encoding);\n}\n\n/* Validate the integrity of the data structure.\n * when `deep` is 0, only the integrity of the header is validated.\n * when `deep` is 1, we make sure there are no duplicate or out of order records. */\nint intsetValidateIntegrity(const unsigned char *p, size_t size, int deep) {\n    intset *is = (intset *)p;\n    /* check that we can actually read the header. */\n    if (size < sizeof(*is))\n        return 0;\n\n    uint32_t encoding = intrev32ifbe(is->encoding);\n\n    size_t record_size;\n    if (encoding == INTSET_ENC_INT64) {\n        record_size = INTSET_ENC_INT64;\n    } else if (encoding == INTSET_ENC_INT32) {\n        record_size = INTSET_ENC_INT32;\n    } else if (encoding == INTSET_ENC_INT16){\n        record_size = INTSET_ENC_INT16;\n    } else {\n        return 0;\n    }\n\n    /* check that the size matches (all records are inside the buffer). */\n    uint32_t count = intrev32ifbe(is->length);\n    if (sizeof(*is) + count*record_size != size)\n        return 0;\n\n    /* check that the set is not empty. */\n    if (count==0)\n        return 0;\n\n    if (!deep)\n        return 1;\n\n    /* check that there are no dup or out of order records. 
*/\n    int64_t prev = _intsetGet(is,0);\n    for (uint32_t i=1; i<count; i++) {\n        int64_t cur = _intsetGet(is,i);\n        if (cur <= prev)\n            return 0;\n        prev = cur;\n    }\n\n    return 1;\n}\n\n#ifdef REDIS_TEST\n#include <sys/time.h>\n#include <time.h>\n\n#if 0\nstatic void intsetRepr(intset *is) {\n    for (uint32_t i = 0; i < intrev32ifbe(is->length); i++) {\n        printf(\"%lld\\n\", (long long)_intsetGet(is,i));\n    }\n    printf(\"\\n\");\n}\n\nstatic void error(char *err) {\n    printf(\"%s\\n\", err);\n    exit(1);\n}\n#endif\n\nstatic void ok(void) {\n    printf(\"OK\\n\");\n}\n\nstatic long long usec(void) {\n    struct timeval tv;\n    gettimeofday(&tv,NULL);\n    return (((long long)tv.tv_sec)*1000000)+tv.tv_usec;\n}\n\nstatic intset *createSet(int bits, int size) {\n    uint64_t mask = (1ULL<<bits)-1; /* use a 64-bit shift so bits > 31 is well defined */\n    uint64_t value;\n    intset *is = intsetNew();\n\n    for (int i = 0; i < size; i++) {\n        if (bits > 32) {\n            value = ((uint64_t)rand()*rand()) & mask; /* widen before multiply to avoid int overflow */\n        } else {\n            value = rand() & mask;\n        }\n        is = intsetAdd(is,value,NULL);\n    }\n    return is;\n}\n\nstatic void checkConsistency(intset *is) {\n    for (uint32_t i = 0; i < (intrev32ifbe(is->length)-1); i++) {\n        uint32_t encoding = intrev32ifbe(is->encoding);\n\n        if (encoding == INTSET_ENC_INT16) {\n            int16_t *i16 = (int16_t*)is->contents;\n            assert(i16[i] < i16[i+1]);\n        } else if (encoding == INTSET_ENC_INT32) {\n            int32_t *i32 = (int32_t*)is->contents;\n            assert(i32[i] < i32[i+1]);\n        } else {\n            int64_t *i64 = (int64_t*)is->contents;\n            assert(i64[i] < i64[i+1]);\n        }\n    }\n}\n\n#define UNUSED(x) (void)(x)\nint intsetTest(int argc, char **argv, int flags) {\n    uint8_t success;\n    int i;\n    intset *is;\n    srand(time(NULL));\n\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    printf(\"Value encodings: \"); {\n        
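/* Hedged extra check (editor's addition, not in the original test): zero\n         * lies inside the int16 range, so it should get the smallest encoding. */\n        assert(_intsetValueEncoding(0) == INTSET_ENC_INT16);\n        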
assert(_intsetValueEncoding(-32768) == INTSET_ENC_INT16);\n        assert(_intsetValueEncoding(+32767) == INTSET_ENC_INT16);\n        assert(_intsetValueEncoding(-32769) == INTSET_ENC_INT32);\n        assert(_intsetValueEncoding(+32768) == INTSET_ENC_INT32);\n        assert(_intsetValueEncoding(-2147483648) == INTSET_ENC_INT32);\n        assert(_intsetValueEncoding(+2147483647) == INTSET_ENC_INT32);\n        assert(_intsetValueEncoding(-2147483649) == INTSET_ENC_INT64);\n        assert(_intsetValueEncoding(+2147483648) == INTSET_ENC_INT64);\n        assert(_intsetValueEncoding(-9223372036854775808ull) ==\n                    INTSET_ENC_INT64);\n        assert(_intsetValueEncoding(+9223372036854775807ull) ==\n                    INTSET_ENC_INT64);\n        ok();\n    }\n\n    printf(\"Basic adding: \"); {\n        is = intsetNew();\n        is = intsetAdd(is,5,&success); assert(success);\n        is = intsetAdd(is,6,&success); assert(success);\n        is = intsetAdd(is,4,&success); assert(success);\n        is = intsetAdd(is,4,&success); assert(!success);\n        assert(6 == intsetMax(is));\n        assert(4 == intsetMin(is));\n        ok();\n        zfree(is);\n    }\n\n    printf(\"Large number of random adds: \"); {\n        uint32_t inserts = 0;\n        is = intsetNew();\n        for (i = 0; i < 1024; i++) {\n            is = intsetAdd(is,rand()%0x800,&success);\n            if (success) inserts++;\n        }\n        assert(intrev32ifbe(is->length) == inserts);\n        checkConsistency(is);\n        ok();\n        zfree(is);\n    }\n\n    printf(\"Upgrade from int16 to int32: \"); {\n        is = intsetNew();\n        is = intsetAdd(is,32,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT16);\n        is = intsetAdd(is,65535,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT32);\n        assert(intsetFind(is,32));\n        assert(intsetFind(is,65535));\n        checkConsistency(is);\n        zfree(is);\n\n        is = 
intsetNew();\n        is = intsetAdd(is,32,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT16);\n        is = intsetAdd(is,-65535,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT32);\n        assert(intsetFind(is,32));\n        assert(intsetFind(is,-65535));\n        checkConsistency(is);\n        ok();\n        zfree(is);\n    }\n\n    printf(\"Upgrade from int16 to int64: \"); {\n        is = intsetNew();\n        is = intsetAdd(is,32,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT16);\n        is = intsetAdd(is,4294967295,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT64);\n        assert(intsetFind(is,32));\n        assert(intsetFind(is,4294967295));\n        checkConsistency(is);\n        zfree(is);\n\n        is = intsetNew();\n        is = intsetAdd(is,32,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT16);\n        is = intsetAdd(is,-4294967295,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT64);\n        assert(intsetFind(is,32));\n        assert(intsetFind(is,-4294967295));\n        checkConsistency(is);\n        ok();\n        zfree(is);\n    }\n\n    printf(\"Upgrade from int32 to int64: \"); {\n        is = intsetNew();\n        is = intsetAdd(is,65535,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT32);\n        is = intsetAdd(is,4294967295,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT64);\n        assert(intsetFind(is,65535));\n        assert(intsetFind(is,4294967295));\n        checkConsistency(is);\n        zfree(is);\n\n        is = intsetNew();\n        is = intsetAdd(is,65535,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT32);\n        is = intsetAdd(is,-4294967295,NULL);\n        assert(intrev32ifbe(is->encoding) == INTSET_ENC_INT64);\n        assert(intsetFind(is,65535));\n        assert(intsetFind(is,-4294967295));\n        checkConsistency(is);\n        ok();\n        
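/* Hedged additions (editor's sketch): 65535 and -4294967295 were both\n         * inserted above, so min/max should reflect the two extremes. */\n        assert(intsetMin(is) == -4294967295);\n        assert(intsetMax(is) == 65535);\n        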
zfree(is);\n    }\n\n    printf(\"Stress lookups: \"); {\n        long num = 100000, size = 10000;\n        int i, bits = 20;\n        long long start;\n        is = createSet(bits,size);\n        checkConsistency(is);\n\n        start = usec();\n        for (i = 0; i < num; i++) intsetSearch(is,rand() % ((1<<bits)-1),NULL);\n        printf(\"%ld lookups, %ld element set, %lldusec\\n\",\n               num,size,usec()-start);\n        zfree(is);\n    }\n\n    printf(\"Stress add+delete: \"); {\n        int i, v1, v2;\n        is = intsetNew();\n        for (i = 0; i < 0xffff; i++) {\n            v1 = rand() % 0xfff;\n            is = intsetAdd(is,v1,NULL);\n            assert(intsetFind(is,v1));\n\n            v2 = rand() % 0xfff;\n            is = intsetRemove(is,v2,NULL);\n            assert(!intsetFind(is,v2));\n        }\n        checkConsistency(is);\n        ok();\n        zfree(is);\n    }\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/intset.h",
    "content": "/*\n * Copyright (c) 2009-2012, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2009-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef __INTSET_H\n#define __INTSET_H\n#include <stdint.h>\n\ntypedef struct intset {\n    uint32_t encoding;\n    uint32_t length;\n    int8_t contents[];\n} intset;\n\nintset *intsetNew(void);\nintset *intsetAdd(intset *is, int64_t value, uint8_t *success);\nintset *intsetRemove(intset *is, int64_t value, int *success);\nuint8_t intsetFind(intset *is, int64_t value);\nint64_t intsetRandom(intset *is);\nint64_t intsetMax(intset *is);\nint64_t intsetMin(intset *is);\nuint8_t intsetGet(intset *is, uint32_t pos, int64_t *value);\nuint32_t intsetLen(const intset *is);\nsize_t intsetBlobLen(intset *is);\nint intsetValidateIntegrity(const unsigned char *is, size_t size, int deep);\nsize_t intsetAllocSize(intset *is);\n\n#ifdef REDIS_TEST\nint intsetTest(int argc, char *argv[], int flags);\n#endif\n\n#endif // __INTSET_H\n"
  },
  {
    "path": "src/iothread.c",
    "content": "/* iothread.c -- The threaded io implementation.\n *\n * Copyright (c) 2024-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n\n/* IO threads. */\nstatic IOThread IOThreads[IO_THREADS_MAX_NUM];\n\n/* For main thread */\nstatic list *mainThreadPendingClientsToIOThreads[IO_THREADS_MAX_NUM]; /* Clients to IO threads */\nstatic list *mainThreadProcessingClients[IO_THREADS_MAX_NUM]; /* Clients in processing */\nstatic list *mainThreadPendingClients[IO_THREADS_MAX_NUM]; /* Pending clients from IO threads */\nstatic pthread_mutex_t mainThreadPendingClientsMutexes[IO_THREADS_MAX_NUM]; /* Mutex for pending clients */\nstatic eventNotifier* mainThreadPendingClientsNotifiers[IO_THREADS_MAX_NUM]; /* Notifier for pending clients */\n\n/* Send the clients to the main thread for processing when the number of clients\n * in pending list reaches IO_THREAD_MAX_PENDING_CLIENTS, or check_size is 0. */\nstatic inline void sendPendingClientsToMainThreadIfNeeded(IOThread *t, int check_size) {\n    size_t len = listLength(t->pending_clients_to_main_thread);\n    if (len == 0 || (check_size && len < IO_THREAD_MAX_PENDING_CLIENTS)) return;\n\n    int running = 0, pending = 0;\n    pthread_mutex_lock(&mainThreadPendingClientsMutexes[t->id]);\n    pending = listLength(mainThreadPendingClients[t->id]);\n    listJoin(mainThreadPendingClients[t->id], t->pending_clients_to_main_thread);\n    pthread_mutex_unlock(&mainThreadPendingClientsMutexes[t->id]);\n    if (!pending) atomicGetWithSync(server.running, running);\n\n    /* Only notify main thread if it is not running and no pending clients to process,\n     * to avoid unnecessary notify/wakeup. If the main thread is running, it will\n     * process the clients in beforeSleep. 
If there are pending clients, we may\n     * have already notified the main thread if needed. */\n    if (!running && !pending) {\n        triggerEventNotifier(mainThreadPendingClientsNotifiers[t->id]);\n    }\n}\n\n/* When moving a client from IO thread to main thread we may need to update\n * some of its variables as they are duplicated to avoid contention with main\n * thread.\n * For now this is valid only for master or slave clients. */\nvoid updateClientDataFromIOThread(client *c) {\n    if (!(c->flags & CLIENT_MASTER) && !(c->flags & CLIENT_SLAVE)) return;\n\n    serverAssert(c->tid != IOTHREAD_MAIN_THREAD_ID &&\n                 c->running_tid == IOTHREAD_MAIN_THREAD_ID);\n\n    if (c->io_repl_ack_time > c->repl_ack_time) {\n        serverAssert(c->flags & CLIENT_SLAVE);\n        c->repl_ack_time = c->io_repl_ack_time;\n    }\n    if (c->io_lastinteraction > c->lastinteraction) {\n        serverAssert(c->flags & CLIENT_MASTER);\n        c->lastinteraction = c->io_lastinteraction;\n    }\n    if (c->io_read_reploff > c->read_reploff) {\n        serverAssert(c->flags & CLIENT_MASTER);\n        c->read_reploff = c->io_read_reploff;\n    }\n\n    /* Update replication buffer referenced node if IO thread has sent some data. */\n    if (c->flags & CLIENT_SLAVE && c->ref_repl_buf_node != NULL &&\n        (c->io_curr_repl_node != c->ref_repl_buf_node ||\n         c->io_curr_block_pos != c->ref_block_pos))\n    {\n        ((replBufBlock*)listNodeValue(c->ref_repl_buf_node))->refcount--;\n        ((replBufBlock*)listNodeValue(c->io_curr_repl_node))->refcount++;\n        c->ref_block_pos = c->io_curr_block_pos;\n        c->ref_repl_buf_node = c->io_curr_repl_node;\n        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n    }\n}\n\n/* Check to see if the client needs any cron jobs run for it. 
Return 1 if the\n * client should be terminated. */\nint runClientCronFromIOThread(client *c) {\n    if (c->flags & CLIENT_MASTER &&\n        c->io_last_repl_cron + 1000 <= server.mstime)\n    {\n        c->io_last_repl_cron = server.mstime;\n        if (replicationCronRunMasterClient()) return 1;\n    }\n\n    /* Run the client cron task once per second, or when the client is marked as pending cron. */\n    if (c->io_last_client_cron + 1000 <= server.mstime ||\n        c->io_flags & CLIENT_IO_PENDING_CRON)\n    {\n        c->io_last_client_cron = server.mstime;\n        if (clientsCronRunClient(c)) return 1;\n    } else {\n        /* Update the client's memory usage here only when clientsCronRunClient\n         * is not called, since that function already performs the update. */\n        updateClientMemUsageAndBucket(c);\n    }\n\n    return 0;\n}\n\n/* When an IO thread reads a complete query from a client, or wants to free a\n * client, it should remove the client from its clients list and put it in the\n * list for the main thread; we will send these clients to the main thread in\n * IOThreadBeforeSleep. */\nvoid enqueuePendingClientsToMainThread(client *c, int unbind) {\n    /* If the IO thread will no longer manage the client, e.g. when closing it,\n     * we should unbind the client from the event loop, so the main thread\n     * doesn't need to do this costly operation. */\n    if (unbind) connUnbindEventLoop(c->conn);\n    /* Just skip if it has already been transferred. */\n    if (c->io_thread_client_list_node) {\n        IOThread *t = &IOThreads[c->tid];\n        /* If there are several clients to process, let the main thread handle them ASAP.\n         * Since the client being added to the queue may still need to be processed by\n         * the IO thread, we must call this before adding it to the queue to avoid\n         * races with the main thread. */\n        sendPendingClientsToMainThreadIfNeeded(t, 1);\n        /* Disable read and write to avoid races when the main thread processes it. 
*/\n        c->io_flags &= ~(CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED);\n        /* Remove the client from the IO thread, add it to the main thread's pending list. */\n        listUnlinkNode(t->clients, c->io_thread_client_list_node);\n        listLinkNodeTail(t->pending_clients_to_main_thread, c->io_thread_client_list_node);\n        c->io_thread_client_list_node = NULL;\n    }\n}\n\nvoid enqueuePendingClienstToIOThreads(client *c) {\n    serverAssert(c->tid != IOTHREAD_MAIN_THREAD_ID &&\n                 c->running_tid == IOTHREAD_MAIN_THREAD_ID);\n\n    if (c->flags & CLIENT_PENDING_WRITE) {\n        c->flags &= ~CLIENT_PENDING_WRITE;\n        listUnlinkNode(server.clients_pending_write, &c->clients_pending_write_node);\n    }\n    if (c->flags & CLIENT_SLAVE) {\n        serverAssert(c->ref_repl_buf_node != NULL);\n\n        c->io_repl_ack_time = c->repl_ack_time;\n        c->io_curr_repl_node = c->ref_repl_buf_node;\n        c->io_curr_block_pos = c->ref_block_pos;\n        c->io_bound_repl_node = listLast(server.repl_buffer_blocks);\n        c->io_bound_block_pos = ((replBufBlock*)listNodeValue(c->io_bound_repl_node))->used;\n    }\n    if (c->flags & CLIENT_MASTER) {\n        c->io_read_reploff = c->read_reploff;\n        c->io_lastinteraction = c->lastinteraction;\n    }\n\n    c->running_tid = c->tid;\n    listAddNodeHead(mainThreadPendingClientsToIOThreads[c->tid], c);\n}\n\n/* Unbind the client's connection from the io thread event loop; the write and\n * read handlers are also removed, ensuring that we can operate on the client\n * safely. */\nvoid unbindClientFromIOThreadEventLoop(client *c) {\n    serverAssert(c->tid != IOTHREAD_MAIN_THREAD_ID &&\n                 c->running_tid == IOTHREAD_MAIN_THREAD_ID);\n    if (!connHasEventLoop(c->conn)) return;\n    /* Since this is called from the main thread, we should pause the io thread to make it safe. 
*/\n    pauseIOThread(c->tid);\n    connUnbindEventLoop(c->conn);\n    resumeIOThread(c->tid);\n}\n\n/* When the main thread is processing a client from an IO thread, and wants to\n * keep it, we should first unbind the client's connection from the io thread\n * event loop, and then bind it into the server's event loop. */\nvoid keepClientInMainThread(client *c) {\n    if (c->tid == IOTHREAD_MAIN_THREAD_ID) return;\n    serverAssert(c->running_tid == IOTHREAD_MAIN_THREAD_ID);\n    /* The IO thread no longer manages it. */\n    server.io_threads_clients_num[c->tid]--;\n    /* Unbind connection of client from io thread event loop. */\n    unbindClientFromIOThreadEventLoop(c);\n    /* Update the client's data in case it was just fetched from the IO thread. */\n    updateClientDataFromIOThread(c);\n    /* Let the main thread run it: rebind the event loop and read handler. */\n    connRebindEventLoop(c->conn, server.el);\n    connSetReadHandler(c->conn, readQueryFromClient);\n    c->io_flags |= CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED;\n    c->tid = IOTHREAD_MAIN_THREAD_ID;\n    freeClientDeferredObjects(c, 1); /* Free deferred objects. */\n    freeClientIODeferredObjects(c, 1); /* Free IO deferred objects. */\n    tryUnlinkClientFromPendingRefReply(c, 0);\n    /* Main thread starts to manage it. */\n    server.io_threads_clients_num[c->tid]++;\n}\n\n/* If the client is managed by an IO thread, we should fetch it from the IO\n * thread, and then the main thread can process it. This mirrors how an IO\n * thread transfers a client to the main thread for processing. */\nvoid fetchClientFromIOThread(client *c) {\n    serverAssert(c->tid != IOTHREAD_MAIN_THREAD_ID &&\n                 c->running_tid != IOTHREAD_MAIN_THREAD_ID);\n    pauseIOThread(c->tid);\n    /* Remove the client from the clients list of the IO thread or main thread. 
*/\n    if (c->io_thread_client_list_node) {\n        listDelNode(IOThreads[c->tid].clients, c->io_thread_client_list_node);\n        c->io_thread_client_list_node = NULL;\n    } else {\n        list *clients[5] = {\n            IOThreads[c->tid].pending_clients,\n            IOThreads[c->tid].pending_clients_to_main_thread,\n            mainThreadPendingClients[c->tid],\n            mainThreadProcessingClients[c->tid],\n            mainThreadPendingClientsToIOThreads[c->tid]\n        };\n        for (int i = 0; i < 5; i++) {\n            listNode *ln = listSearchKey(clients[i], c);\n            if (ln) {\n                listDelNode(clients[i], ln);\n                /* A client can only be in one client list. */\n                break;\n            }\n        }\n    }\n    /* Unbind connection of client from io thread event loop. */\n    connUnbindEventLoop(c->conn);\n    /* Now main thread can process it. */\n    resumeIOThread(c->tid);\n\n    /* Keep the client in main thread. */\n    c->running_tid = IOTHREAD_MAIN_THREAD_ID;\n    keepClientInMainThread(c);\n}\n\n/* Some clients must be handled in the main thread, since handling them in IO\n * threads would cause data races.\n *\n * - Close ASAP: we must free the client in the main thread.\n * - Pubsub, monitor, blocked, tracking clients: the main thread may\n *   directly write them a reply when conditions are met.\n * - Script command with debug may operate on the connection directly.\n * - Master/Replica are only handled by an IO thread when RDB replication is\n *   completed. Note we need to check them after checking for other flags\n *   that may overlap with CLIENT_MASTER/SLAVE - CLOSE_ASAP, MONITOR,\n *   (UN)BLOCKED, TRACKING. 
*/\nint isClientMustHandledByMainThread(client *c) {\n    if (c->flags & (CLIENT_CLOSE_ASAP |\n                    CLIENT_PUBSUB | CLIENT_MONITOR | CLIENT_BLOCKED |\n                    CLIENT_UNBLOCKED | CLIENT_TRACKING | CLIENT_LUA_DEBUG |\n                    CLIENT_LUA_DEBUG_SYNC | CLIENT_ASM_MIGRATING |\n                    CLIENT_ASM_IMPORTING))\n    {\n        return 1;\n    }\n\n    /* If RDB replication is done it's safe to move the master client to an IO thread.\n     * Note that we keep the master client in the main thread during failover so as\n     * not to slow down the failover process by waiting for the master replication\n     * cron in the IO thread. */\n    if (c->flags & CLIENT_MASTER &&\n        server.repl_state == REPL_STATE_CONNECTED &&\n        server.repl_rdb_ch_state == REPL_RDB_CH_STATE_NONE &&\n        server.failover_state == NO_FAILOVER)\n    {\n        return 0;\n    }\n\n    /* If RDB replication is done for this slave it's safe to move it to an IO thread.\n     * Note that we also check if the ref_repl_buf_node is initialized in order\n     * to prevent race conditions with the main thread when it feeds the replication\n     * buffer. */\n    if (c->flags & CLIENT_SLAVE &&\n        (c->replstate == SLAVE_STATE_ONLINE ||\n         c->replstate == SLAVE_STATE_SEND_BULK_AND_STREAM) &&\n        c->repl_start_cmd_stream_on_ack == 0 &&\n        c->ref_repl_buf_node != NULL)\n    {\n        return 0;\n    }\n\n    if (c->flags & (CLIENT_MASTER | CLIENT_SLAVE)) return 1;\n\n    return 0;\n}\n\n/* When the main thread accepts a new client or transfers clients to IO threads,\n * it assigns the client to the IO thread with the fewest clients. */\nvoid assignClientToIOThread(client *c) {\n    serverAssert(c->tid == IOTHREAD_MAIN_THREAD_ID);\n    /* Find the IO thread with the fewest clients. 
*/\n    int min_id = 0;\n    int min = INT_MAX;\n    for (int i = 1; i < server.io_threads_num; i++) {\n        if (server.io_threads_clients_num[i] < min) {\n            min = server.io_threads_clients_num[i];\n            min_id = i;\n        }\n    }\n\n    /* Assign the client to the IO thread. */\n    server.io_threads_clients_num[c->tid]--;\n    c->tid = min_id;\n    server.io_threads_clients_num[min_id]++;\n\n    /* A client running in an IO thread needs to have a deferred objects array. */\n    c->deferred_objects = zmalloc(sizeof(deferredObject) * CLIENT_MAX_DEFERRED_OBJECTS);\n\n    /* Unbind the client's connection from the main thread event loop, disable\n     * read and write, and then put it in the list; the main thread will send\n     * these clients to the IO thread in beforeSleep. */\n    connUnbindEventLoop(c->conn);\n    c->io_flags &= ~(CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED);\n\n    enqueuePendingClienstToIOThreads(c);\n}\n\n/* When updating the maxclients config, we resize not only the event loop of the\n * main thread but also the event loops of all io threads; if any one of them\n * fails, the whole operation fails, since an fd can be distributed to any IO\n * thread. */\nint resizeAllIOThreadsEventLoops(size_t newsize) {\n    int result = AE_OK;\n    if (server.io_threads_num <= 1) return result;\n\n    /* To make the context safe. */\n    pauseAllIOThreads();\n    for (int i = 1; i < server.io_threads_num; i++) {\n        IOThread *t = &IOThreads[i];\n        if (aeResizeSetSize(t->el, newsize) == AE_ERR)\n            result = AE_ERR;\n    }\n    resumeAllIOThreads();\n    return result;\n}\n\n/* In the main thread, we may want to operate on data owned by io threads:\n * uninstall an event handler, access a query/output buffer, or resize an event\n * loop. We need a clean and safe context to do that. We pause the io thread in\n * IOThreadBeforeSleep, do some jobs and then resume it. To avoid the thread\n * being suspended, we use busy waiting to confirm the target status. 
Besides, we use atomic variables to ensure memory visibility\n * and ordering.\n *\n * Make sure that only the main thread calls these functions:\n *  - pauseIOThread, resumeIOThread\n *  - pauseAllIOThreads, resumeAllIOThreads\n *  - pauseIOThreadsRange, resumeIOThreadsRange\n *\n * The main thread will pause the io thread, and then wait for the io thread to\n * be paused. The io thread will check the paused status in IOThreadBeforeSleep,\n * and then pause itself.\n *\n * The main thread will resume the io thread, and then wait for the io thread to\n * be resumed. The io thread will check the paused status in IOThreadBeforeSleep,\n * and then resume itself.\n */\n\n/* We may pause the same io thread in a nested way, so we record the pause\n * count: we actually pause the io thread only when the count goes from 0 to 1,\n * and actually resume it only when the count goes from 1 to 0. */\nstatic int PausedIOThreads[IO_THREADS_MAX_NUM] = {0};\n\n/* Pause the specific range of io threads, and wait for them to be paused. */\nvoid pauseIOThreadsRange(int start, int end) {\n    if (!server.io_threads_active) return;\n    serverAssert(start >= 1 && end < server.io_threads_num && start <= end);\n    serverAssert(pthread_equal(pthread_self(), server.main_thread_id));\n\n    /* Try to make all io threads paused in parallel */\n    for (int i = start; i <= end; i++) {\n        PausedIOThreads[i]++;\n        /* Skip if already paused */\n        if (PausedIOThreads[i] > 1) continue;\n\n        int paused;\n        atomicGetWithSync(IOThreads[i].paused, paused);\n        /* Reentrant calls are not supported. */\n        serverAssert(paused == IO_THREAD_UNPAUSED);\n        atomicSetWithSync(IOThreads[i].paused, IO_THREAD_PAUSING);\n        /* Just notify the io thread, there is no actual job; io threads check the\n         * paused status in IOThreadBeforeSleep, so just wake it up if it is waiting in poll. 
*/\n        triggerEventNotifier(IOThreads[i].pending_clients_notifier);\n    }\n\n    /* Wait for all io threads to be paused */\n    for (int i = start; i <= end; i++) {\n        if (PausedIOThreads[i] > 1) continue;\n        int paused = IO_THREAD_PAUSING;\n        while (paused != IO_THREAD_PAUSED) {\n            atomicGetWithSync(IOThreads[i].paused, paused);\n        }\n    }\n}\n\n/* Resume the specific range of io threads, and wait for them to be resumed. */\nvoid resumeIOThreadsRange(int start, int end) {\n    if (!server.io_threads_active) return;\n    serverAssert(start >= 1 && end < server.io_threads_num && start <= end);\n    serverAssert(pthread_equal(pthread_self(), server.main_thread_id));\n\n    for (int i = start; i <= end; i++) {\n        serverAssert(PausedIOThreads[i] > 0);\n        PausedIOThreads[i]--;\n        if (PausedIOThreads[i] > 0) continue;\n\n        int paused;\n        /* Check if it is paused, since we must call 'pause' and\n         * 'resume' in pairs. */\n        atomicGetWithSync(IOThreads[i].paused, paused);\n        serverAssert(paused == IO_THREAD_PAUSED);\n        /* Resume */\n        atomicSetWithSync(IOThreads[i].paused, IO_THREAD_RESUMING);\n        while (paused != IO_THREAD_UNPAUSED) {\n            atomicGetWithSync(IOThreads[i].paused, paused);\n        }\n    }\n}\n\n/* The IO thread checks whether it is being paused, and if so, it pauses itself\n * and waits for resuming, corresponding to the pause/resumeIOThread* functions.\n * Currently, this is only called in IOThreadBeforeSleep, as there are no pending\n * I/O events at this point, with a clean context. */\nvoid handlePauseAndResume(IOThread *t) {\n    int paused;\n    /* Check if I am being paused. 
*/\n    atomicGetWithSync(t->paused, paused);\n    if (paused == IO_THREAD_PAUSING) {\n        atomicSetWithSync(t->paused, IO_THREAD_PAUSED);\n        /* Wait for resuming */\n        while (paused != IO_THREAD_RESUMING) {\n            atomicGetWithSync(t->paused, paused);\n        }\n        atomicSetWithSync(t->paused, IO_THREAD_UNPAUSED);\n    }\n}\n\n/* Pause the specific io thread, and wait for it to be paused. */\nvoid pauseIOThread(int id) {\n    pauseIOThreadsRange(id, id);\n}\n\n/* Resume the specific io thread, and wait for it to be resumed. */\nvoid resumeIOThread(int id) {\n    resumeIOThreadsRange(id, id);\n}\n\n/* Pause all io threads, and wait for them to be paused. */\nvoid pauseAllIOThreads(void) {\n    pauseIOThreadsRange(1, server.io_threads_num-1);\n}\n\n/* Resume all io threads, and wait for them to be resumed. */\nvoid resumeAllIOThreads(void) {\n    resumeIOThreadsRange(1, server.io_threads_num-1);\n}\n\n/* Add the pending clients to the list of IO threads, and trigger an event to\n * notify io threads to handle. */\nint sendPendingClientsToIOThreads(void) {\n    int processed = 0;\n    for (int i = 1; i < server.io_threads_num; i++) {\n        int len = listLength(mainThreadPendingClientsToIOThreads[i]);\n        if (len > 0) {\n            IOThread *t = &IOThreads[i];\n            pthread_mutex_lock(&t->pending_clients_mutex);\n            listJoin(t->pending_clients, mainThreadPendingClientsToIOThreads[i]);\n            pthread_mutex_unlock(&t->pending_clients_mutex);\n            /* Trigger an event, maybe an error is returned when buffer is full\n             * if using pipe, but no worry, io thread will handle all clients\n             * in list when receiving a notification. */\n            triggerEventNotifier(t->pending_clients_notifier);\n        }\n        processed += len;\n    }\n    return processed;\n}\n\n/* Prefetch the commands from the IO thread. The return value is the number\n * of clients that have been prefetched. 
*/\nint prefetchIOThreadCommands(IOThread *t) {\n    int len = listLength(mainThreadProcessingClients[t->id]);\n    int to_prefetch = determinePrefetchCount(len);\n    if (to_prefetch == 0) return 0;\n\n    /* Two-phase approach to optimize cache utilization:\n     * Phase 1: Issue prefetch hints for client structures\n     * Phase 2: Access the now-cached client data and add commands to batch */\n    /* Since we double the configured size for better performance,\n     * see also `determinePrefetchCount` */\n    static client *c[PREFETCH_BATCH_MAX_SIZE*2];\n    serverAssert(PREFETCH_BATCH_MAX_SIZE*2 >= to_prefetch );\n    int clients = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(mainThreadProcessingClients[t->id], &li);\n    /* Phase 1: Issue prefetch instructions for client struct and pending_cmds.\n     * These prefetches will bring data into cache asynchronously. */\n    for (int i = 0; i < to_prefetch && (ln = listNext(&li)); i++) {\n        c[i] = listNodeValue(ln);\n        redis_prefetch_read(c[i]);\n        redis_prefetch_read(&c[i]->pending_cmds);\n        redis_prefetch_read(&c[i]->io_deferred_objects);\n    }\n    /* Phase 2: Access client data (now likely in cache) and add to batch.\n     * Also prefetch additional fields (reply, mem_usage_bucket) that will be\n     * needed later during command execution. */\n    for (int i = 0; i < to_prefetch; i++) {\n        if (addCommandToBatch(c[i]) == C_ERR) break;\n        if (c[i]->reply) redis_prefetch_read(c[i]->reply);\n        redis_prefetch_read(&c[i]->mem_usage_bucket);\n        if (c[i]->io_deferred_objects) redis_prefetch_read(c[i]->io_deferred_objects);\n        clients++;\n    }\n    /* Prefetch the commands in the batch. */\n    prefetchCommands();\n    return clients;\n}\n\nextern int ProcessingEventsWhileBlocked;\n\n/* Send the pending clients to the IO thread if the number of pending clients\n * is greater than IO_THREAD_MAX_PENDING_CLIENTS, or if size_check is 0. 
*/\nstatic inline void sendPendingClientsToIOThreadIfNeeded(IOThread *t, int size_check) {\n    size_t len = listLength(mainThreadPendingClientsToIOThreads[t->id]);\n    if (len == 0 || (size_check && len < IO_THREAD_MAX_PENDING_CLIENTS)) return;\n\n    /* If the AOF fsync policy is always, we should not let the io thread handle\n     * these clients now, since we have not flushed the AOF buffer to the file\n     * and synced it yet. Sending these clients to io threads is delayed until\n     * beforeSleep, after flushAppendOnlyFile.\n     *\n     * If we are in processEventsWhileBlocked, we don't send clients to io threads\n     * now, since we want to update server.events_processed_while_blocked accurately. */\n    if (server.aof_fsync != AOF_FSYNC_ALWAYS && !ProcessingEventsWhileBlocked) {\n        int running = 0, pending = 0;\n        pthread_mutex_lock(&(t->pending_clients_mutex));\n        pending = listLength(t->pending_clients);\n        listJoin(t->pending_clients, mainThreadPendingClientsToIOThreads[t->id]);\n        pthread_mutex_unlock(&(t->pending_clients_mutex));\n        if (!pending) atomicGetWithSync(t->running, running);\n\n        /* Only notify the io thread if it is not running and has no pending\n         * clients to process, to avoid unnecessary notify/wakeup. If the io\n         * thread is running, it will process the clients in beforeSleep. If\n         * there are pending clients, we have already notified the io thread\n         * if needed. */\n        if (!running && !pending) triggerEventNotifier(t->pending_clients_notifier);\n    }\n}\n\n/* The main thread processes the clients from IO threads; these clients may have\n * a complete command to execute or need to be freed. Note that IO threads never\n * free a client, since this operation accesses much server data.\n *\n * Please notice that this function may be called reentrantly, i.e., the same goes\n * for handleClientsFromIOThread and processClientsOfAllIOThreads. 
For example,\n * when processing a script command, it may call processEventsWhileBlocked to\n * process new events; if the clients with fired events are from the same io\n * thread, it may call this function reentrantly. */\nint processClientsFromIOThread(IOThread *t) {\n    /* Get the list of clients to process. */\n    pthread_mutex_lock(&mainThreadPendingClientsMutexes[t->id]);\n    listJoin(mainThreadProcessingClients[t->id], mainThreadPendingClients[t->id]);\n    pthread_mutex_unlock(&mainThreadPendingClientsMutexes[t->id]);\n    size_t processed = listLength(mainThreadProcessingClients[t->id]);\n    if (processed == 0) return 0;\n\n    int prefetch_clients = 0;\n    /* We may call processClientsFromIOThread reentrantly, and besides, users\n     * may change the configured prefetch batch size, so we need to reset the\n     * prefetching batch. */\n    resetCommandsBatch();\n\n    listNode *node = NULL;\n    while (listLength(mainThreadProcessingClients[t->id])) {\n        if (prefetch_clients <= 0) {\n            /* Reset the prefetching batch if we have processed all clients. */\n            resetCommandsBatch();\n            /* Prefetch the commands if there are no clients in the batch. */\n            prefetch_clients = prefetchIOThreadCommands(t);\n        }\n        prefetch_clients--;\n\n        /* Each time we pop only the first client to process, to guarantee\n         * reentrancy safety. */\n        if (node) zfree(node);\n        node = listFirst(mainThreadProcessingClients[t->id]);\n        listUnlinkNode(mainThreadProcessingClients[t->id], node);\n        client *c = listNodeValue(node);\n\n        /* Make sure the client is neither readable nor writable in the io\n         * thread, to avoid data races. */\n        serverAssert(!(c->io_flags & (CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED)));\n        serverAssert(!(c->flags & CLIENT_CLOSE_ASAP));\n\n        /* Let the main thread run it; set the running thread id first. 
*/\n        c->running_tid = IOTHREAD_MAIN_THREAD_ID;\n\n        /* Free objects queued by the IO thread for deferred freeing. */\n        freeClientIODeferredObjects(c, 0);\n        tryUnlinkClientFromPendingRefReply(c, 0);\n\n        /* If a read error occurs, handle it in the main thread first, since we\n         * want to print logs about client information before freeing. */\n        if (isClientReadErrorFatal(c)) handleClientReadError(c);\n\n        /* The client was asked to close in the IO thread. */\n        if (c->io_flags & CLIENT_IO_CLOSE_ASAP) {\n            freeClient(c);\n            continue;\n        }\n\n        /* Update some of the client's members while we are in the main thread\n         * so we avoid data races. */\n        updateClientDataFromIOThread(c);\n\n        /* Check if we need to run a cron job for the client. */\n        if (runClientCronFromIOThread(c)) continue;\n\n        /* Process the pending command and input buffer. */\n        if (!isClientReadErrorFatal(c) && c->io_flags & CLIENT_IO_PENDING_COMMAND) {\n            /* IO-thread reads may enqueue one-by-one complete commands that are\n             * executed in the main thread without re-entering processInputBuffer().\n             * Account this client as active before processing that handoff path. */\n            statsUpdateActiveClients(c);\n            c->flags |= CLIENT_PENDING_COMMAND;\n            if (processPendingCommandAndInputBuffer(c) == C_ERR) {\n                /* If the client is no longer valid, it must be freed safely. */\n                continue;\n            }\n        }\n\n        /* We may have pending replies if the io thread did not finish writing\n         * the reply to the client, in which case we have not put the client in\n         * the pending write queue yet. We should do that first, since we may\n         * keep the client in the main thread instead of returning it to the\n         * io threads. 
*/\n        if (!(c->flags & CLIENT_PENDING_WRITE) && clientHasPendingReplies(c))\n            putClientInPendingWriteQueue(c);\n\n        /* The client can only be processed in the main thread, otherwise data\n         * races will happen, since we may touch the client's data in the main\n         * thread. */\n        if (isClientMustHandledByMainThread(c)) {\n            keepClientInMainThread(c);\n            continue;\n        }\n\n        /* Handle replica clients in putReplicasInPendingClientsToIOThreads in\n         * beforeSleep. */\n        if (c->flags & CLIENT_SLAVE) continue;\n\n        /* Remove this client from the main thread's pending write clients queue.\n         * Note that some clients may not have a reply if CLIENT REPLY OFF/SKIP\n         * is used. */\n        if (c->flags & CLIENT_PENDING_WRITE) {\n            c->flags &= ~CLIENT_PENDING_WRITE;\n            listUnlinkNode(server.clients_pending_write, &c->clients_pending_write_node);\n        }\n        c->running_tid = c->tid;\n        listLinkNodeHead(mainThreadPendingClientsToIOThreads[c->tid], node);\n        node = NULL;\n\n        /* If there are several clients to process, let the io thread handle\n         * them ASAP. */\n        sendPendingClientsToIOThreadIfNeeded(t, 1);\n    }\n    if (node) zfree(node);\n\n    /* Send the clients to the io thread without the pending size check: the\n     * main thread may process clients from other io threads, so we need to\n     * send them to the io thread to process in parallel. */\n    sendPendingClientsToIOThreadIfNeeded(t, 0);\n\n    return processed;\n}\n\n/* When the io thread finishes processing the client with the read event, it will\n * notify the main thread through event triggering in IOThreadBeforeSleep. The main\n * thread handles the event through this function. */\nvoid handleClientsFromIOThread(struct aeEventLoop *el, int fd, void *ptr, int mask) {\n    UNUSED(el);\n    UNUSED(mask);\n\n    IOThread *t = ptr;\n\n    /* Handle the fd event first. 
*/\n    serverAssert(fd == getReadEventFd(mainThreadPendingClientsNotifiers[t->id]));\n    handleEventNotifier(mainThreadPendingClientsNotifiers[t->id]);\n\n    /* Process the clients from IO threads. */\n    processClientsFromIOThread(t);\n}\n\n/* In the new threaded io design, one thread may process multiple clients, so when\n * an io thread notifies the main thread of an event, there may be multiple clients\n * with commands that need to be processed. But the event handler function\n * handleClientsFromIOThread may be blocked when processing a specific command:\n * the previous clients cannot get a reply, and the subsequent clients cannot be\n * processed, so we need to handle this scenario in beforeSleep. This function\n * processes the commands of the subsequent clients from io threads, and another\n * function, sendPendingClientsToIOThreads, makes sure clients from io threads\n * can get replies. See also beforeSleep.\n *\n * In beforeSleep, we also call this function to handle the clients that are\n * transferred from io threads without notification. */\nint processClientsOfAllIOThreads(void) {\n    int processed = 0;\n    for (int i = 1; i < server.io_threads_num; i++) {\n        processed += processClientsFromIOThread(&IOThreads[i]);\n    }\n    return processed;\n}\n\n/* After the main thread processes the clients, it will send the clients back to\n * the io threads to handle and fire an event; the io thread handles the event in\n * this function. */\nvoid handleClientsFromMainThread(struct aeEventLoop *ae, int fd, void *ptr, int mask) {\n    UNUSED(ae);\n    UNUSED(mask);\n\n    IOThread *t = ptr;\n\n    /* Handle fd event first. */\n    serverAssert(fd == getReadEventFd(t->pending_clients_notifier));\n    handleEventNotifier(t->pending_clients_notifier);\n\n    /* Process the clients from main thread. 
*/\n    processClientsFromMainThread(t);\n}\n\n/* Process clients that have finished executing commands in the main thread.\n * If the client is not bound to the event loop, we should bind it first and\n * install the read handler. If the client still has a query buffer, we should\n * process the input buffer. If the client has pending replies, we just reply\n * to the client, and then install the write handler if needed. */\nint processClientsFromMainThread(IOThread *t) {\n    pthread_mutex_lock(&t->pending_clients_mutex);\n    listJoin(t->processing_clients, t->pending_clients);\n    pthread_mutex_unlock(&t->pending_clients_mutex);\n    size_t processed = listLength(t->processing_clients);\n    if (processed == 0) return 0;\n\n    listIter li;\n    listNode *ln;\n    listRewind(t->processing_clients, &li);\n    while((ln = listNext(&li))) {\n        client *c = listNodeValue(ln);\n        serverAssert(!(c->io_flags & (CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED)));\n        /* The main thread must handle clients with the CLIENT_CLOSE_ASAP flag,\n         * since we only set io_flags when clients in the io thread are freed\n         * ASAP. */\n        serverAssert(!(c->flags & CLIENT_CLOSE_ASAP));\n\n        /* Link the client in the IO thread clients list first. */\n        serverAssert(c->io_thread_client_list_node == NULL);\n        listUnlinkNode(t->processing_clients, ln);\n        listLinkNodeTail(t->clients, ln);\n        c->io_thread_client_list_node = listLast(t->clients);\n\n        /* The client now is in the IO thread, let's free deferred objects. */\n        freeClientDeferredObjects(c, 0);\n\n        /* The client is asked to close; we just let the main thread free it. */\n        if (c->io_flags & CLIENT_IO_CLOSE_ASAP) {\n            enqueuePendingClientsToMainThread(c, 1);\n            continue;\n        }\n\n        /* Enable read and write and reset some flags. 
*/\n        c->io_flags |= CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED;\n        c->io_flags &= ~(CLIENT_IO_PENDING_COMMAND | CLIENT_IO_PENDING_CRON);\n\n        /* Only bind once; we never remove the read handler unless freeing the\n         * client. */\n        if (!connHasEventLoop(c->conn)) {\n            connRebindEventLoop(c->conn, t->el);\n            serverAssert(!connHasReadHandler(c->conn));\n            connSetReadHandler(c->conn, readQueryFromClient);\n        }\n\n        /* If the client has pending replies, write replies to the client. */\n        if (clientHasPendingReplies(c)) {\n            writeToClient(c, 0);\n            if (!(c->io_flags & CLIENT_IO_CLOSE_ASAP) && clientHasPendingReplies(c)) {\n                connSetWriteHandler(c->conn, sendReplyToClient);\n            }\n        }\n    }\n    /* All clients must be processed. */\n    serverAssert(listLength(t->processing_clients) == 0);\n    return processed;\n}\n\nvoid IOThreadBeforeSleep(struct aeEventLoop *el) {\n    IOThread *t = el->privdata[0];\n\n    /* Handle pending data (typically TLS). */\n    connTypeProcessPendingData(el);\n\n    /* If any connection type (typically TLS) still has pending unread data,\n     * don't sleep at all. */\n    int dont_sleep = connTypeHasPendingData(el);\n\n    /* Process clients from the main thread, since the main thread may deliver\n     * clients without notification while the IO thread is processing events. */\n    if (processClientsFromMainThread(t) > 0) {\n        /* If any clients were processed, we should not sleep: the main thread\n         * may want to continue delivering clients without notification, so the\n         * IO thread can process them ASAP and the main thread can avoid the\n         * costly notification (write fd and wake up). */\n        dont_sleep = 1;\n    }\n    if (!dont_sleep) {\n        atomicSetWithSync(t->running, 0); /* Not running if going to sleep. 
*/\n        /* Try to process clients from the main thread again, since before we\n         * set running to 0, the main thread may have delivered clients to this\n         * io thread. */\n        processClientsFromMainThread(t);\n    }\n    aeSetDontWait(t->el, dont_sleep);\n\n    /* Check if I am being paused; if so, pause myself and wait to be resumed. */\n    handlePauseAndResume(t);\n\n    /* Send clients to the main thread to process. We don't check the size here\n     * since we want to send all clients to the main thread before going to\n     * sleep. */\n    sendPendingClientsToMainThreadIfNeeded(t, 0);\n}\n\nvoid IOThreadAfterSleep(struct aeEventLoop *el) {\n    IOThread *t = el->privdata[0];\n\n    /* Set the IO thread to the running state, so the main thread can deliver\n     * clients to it without extra notifications. */\n    atomicSetWithSync(t->running, 1);\n}\n\n/* Periodically transfer part of the clients to the main thread for processing. */\nvoid IOThreadClientsCron(IOThread *t) {\n    /* Process at least a few clients while we are at it, even if we need\n     * to process less than CLIENTS_CRON_MIN_ITERATIONS to meet our contract\n     * of processing each client once per second. */\n    int iterations = listLength(t->clients) / CONFIG_DEFAULT_HZ;\n    if (iterations < CLIENTS_CRON_MIN_ITERATIONS) {\n        iterations = CLIENTS_CRON_MIN_ITERATIONS;\n    }\n\n    listIter li;\n    listNode *ln;\n    listRewind(t->clients, &li);\n    while ((ln = listNext(&li)) && iterations--) {\n        client *c = listNodeValue(ln);\n        /* Mark the client as pending cron; the main thread will process it. */\n        c->io_flags |= CLIENT_IO_PENDING_CRON;\n        enqueuePendingClientsToMainThread(c, 0);\n    }\n}\n\n/* This is the IO thread timer interrupt, CONFIG_DEFAULT_HZ times per second.\n * The current responsibility is to detect clients that have been stuck in the\n * IO thread for too long and hand them over to the main thread for handling. 
*/\nint IOThreadCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    UNUSED(eventLoop);\n    UNUSED(id);\n    IOThread *t = clientData;\n\n    /* Run cron tasks for the clients in the IO thread. */\n    IOThreadClientsCron(t);\n\n    return 1000/CONFIG_DEFAULT_HZ;\n}\n\n/* The main function of the IO thread; it runs an event loop. The main thread\n * and the IO thread communicate through an event notifier. */\nvoid *IOThreadMain(void *ptr) {\n    IOThread *t = ptr;\n    char thdname[16];\n    snprintf(thdname, sizeof(thdname), \"io_thd_%d\", t->id);\n    redis_set_thread_title(thdname);\n    redisSetCpuAffinity(server.server_cpulist);\n    makeThreadKillable();\n    aeSetBeforeSleepProc(t->el, IOThreadBeforeSleep);\n    aeSetAfterSleepProc(t->el, IOThreadAfterSleep);\n    aeMain(t->el);\n    return NULL;\n}\n\n/* Initialize the data structures needed for threaded I/O. */\nvoid initThreadedIO(void) {\n    if (server.io_threads_num <= 1) return;\n\n    server.io_threads_active = 1;\n\n    if (server.io_threads_num > IO_THREADS_MAX_NUM) {\n        serverLog(LL_WARNING,\"Fatal: too many I/O threads configured. \"\n                             \"The maximum number is %d.\", IO_THREADS_MAX_NUM);\n        exit(1);\n    }\n\n    /* Spawn and initialize the I/O threads. 
*/\n    for (int i = 1; i < server.io_threads_num; i++) {\n        IOThread *t = &IOThreads[i];\n        t->id = i;\n        t->el = aeCreateEventLoop(server.maxclients+CONFIG_FDSET_INCR);\n        t->el->privdata[0] = t;\n        t->pending_clients = listCreate();\n        t->processing_clients = listCreate();\n        t->pending_clients_to_main_thread = listCreate();\n        t->clients = listCreate();\n        atomicSetWithSync(t->paused, IO_THREAD_UNPAUSED);\n        atomicSetWithSync(t->running, 0);\n\n        pthread_mutexattr_t *attr = NULL;\n        #if defined(__linux__) && defined(__GLIBC__)\n        attr = zmalloc(sizeof(pthread_mutexattr_t));\n        pthread_mutexattr_init(attr);\n        pthread_mutexattr_settype(attr, PTHREAD_MUTEX_ADAPTIVE_NP);\n        #endif\n        pthread_mutex_init(&t->pending_clients_mutex, attr);\n\n        t->pending_clients_notifier = createEventNotifier();\n        if (aeCreateFileEvent(t->el, getReadEventFd(t->pending_clients_notifier),\n                              AE_READABLE, handleClientsFromMainThread, t) != AE_OK)\n        {\n            serverLog(LL_WARNING, \"Fatal: Can't register file event for IO thread notifications.\");\n            exit(1);\n        }\n\n        /* This is the timer callback of the IO thread, used to gradually handle \n         * some background operations, such as clients cron. 
*/\n        if (aeCreateTimeEvent(t->el, 1, IOThreadCron, t, NULL) == AE_ERR) {\n            serverLog(LL_WARNING, \"Fatal: Can't create event loop timers in IO thread.\");\n            exit(1);\n        }\n\n        /* Create the IO thread. */\n        if (pthread_create(&t->tid, NULL, IOThreadMain, (void*)t) != 0) {\n            serverLog(LL_WARNING, \"Fatal: Can't initialize IO thread.\");\n            exit(1);\n        }\n\n        /* For the main thread. */\n        mainThreadPendingClientsToIOThreads[i] = listCreate();\n        mainThreadPendingClients[i] = listCreate();\n        mainThreadProcessingClients[i] = listCreate();\n        pthread_mutex_init(&mainThreadPendingClientsMutexes[i], attr);\n        mainThreadPendingClientsNotifiers[i] = createEventNotifier();\n        if (aeCreateFileEvent(server.el, getReadEventFd(mainThreadPendingClientsNotifiers[i]),\n                              AE_READABLE, handleClientsFromIOThread, t) != AE_OK)\n        {\n            serverLog(LL_WARNING, \"Fatal: Can't register file event for main thread notifications.\");\n            exit(1);\n        }\n        if (attr) zfree(attr);\n    }\n}\n\n/* Kill the IO threads. TODO: release the allocated resources. */\nvoid killIOThreads(void) {\n    if (server.io_threads_num <= 1) return;\n\n    int err, j;\n    for (j = 1; j < server.io_threads_num; j++) {\n        if (IOThreads[j].tid == pthread_self()) continue;\n        if (IOThreads[j].tid && pthread_cancel(IOThreads[j].tid) == 0) {\n            if ((err = pthread_join(IOThreads[j].tid,NULL)) != 0) {\n                serverLog(LL_WARNING,\n                    \"IO thread(tid:%lu) can not be joined: %s\",\n                        (unsigned long)IOThreads[j].tid, strerror(err));\n            } else {\n                serverLog(LL_WARNING,\n                    \"IO thread(tid:%lu) terminated\",(unsigned long)IOThreads[j].tid);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "src/keymeta.c",
    "content": "/* Read keymeta.h for high-level overview. */\n\n#include \"server.h\"\n#include <string.h>\n\n/* Encoding constants for metadata class names and serialization */\n#define KM_NAME_LEN           4    /* Short name length (e.g., \"KMT1\") */\n#define KM_PREFIX             \"META-\"\n#define KM_PREFIX_LEN         5    /* Length of \"META-\" prefix */\n#define KM_FULLNAME_LEN       9    /* Full name length: \"META-xxxx\" */\n#define KM_ENC_CHAR_BITS      6    /* Bits per character in encoding */\n#define KM_CHARSET_SIZE       64   /* Size of character set (2^6) */\n#define KM_VER_BITS           5    /* Bits for version in 32-bit class spec */\n#define KM_VER_MAX            31   /* Max version value (2^5 - 1) */\n#define KM_FLAGS_BITS         3    /* Bits for flags in 32-bit class spec */\n#define KM_FLAGS_MASK         0x7  /* Mask for 3-bit flags */\n#define KM_VER_MASK           0x1F /* Mask for 5-bit version */\n#define KM_CHAR_MASK          0x3F /* Mask for 6-bit character */\n#define KM_ENTITY_VER_BITS    10  /* Bits for version in 64-bit entity ID */\n#define KM_CLASS_SPEC_SIZE    4    /* Size of 32-bit class spec in bytes */\n#define KM_EXPIRE_RESET_VALUE ((uint64_t)-1) /* Sentinel: no expiration */\n\n/* Cast const away only for initialization */\n#define KM_SET_CONST_CONF(conf)  (*((KeyMetaClassConf *) (&conf)))\n\n/* Character set for metadata class names (same as module types). */\nstatic const char *keyMetaCharSet = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n                                    \"abcdefghijklmnopqrstuvwxyz\"\n                                    \"0123456789_-\";\n\ntypedef enum KeyMetaClassState {\n    CLASS_STATE_FREE = 0, /* Free must be 0. 
*/\n    CLASS_STATE_INUSE = 1,\n    CLASS_STATE_RELEASED = 2,\n} KeyMetaClassState;\n\nstatic_assert(CLASS_STATE_FREE == 0, \"CLASS_STATE_FREE must be 0 for memset initialization\");\n\n/* Key metadata class */\ntypedef struct KeyMetaClass {\n    char name[5];                 /* 4-char name of the class */\n    ModuleEntityId entity;        /* module key metadata name and ID. */\n    const KeyMetaClassConf conf;  /* copy of config */\n    KeyMetaClassState state;      /* FREE/INUSE/RELEASED */\n    uint32_t classSpecEncoded;    /* See keyMetaClassEncode() */\n} KeyMetaClass;\nstatic KeyMetaClass keyMetaClass[KEY_META_ID_MAX];\n\n/* Add metadata to keymeta spec, handling out-of-order metaid */\nstatic void keyMetaSpecAddUnordered(KeyMetaSpec *keymeta, int metaid, uint64_t metaval);\n\n\n/* Encode a 64-bit module entity ID and a 32-bit class spec for RDB.\n *\n * Takes a 4-character name (e.g., \"KMT1\"), a version (0-31), and flags, and\n * validates that the 4-char name uses the valid character set. The version is\n * 5 bits (0-31).\n * \n * >> ENCODING 32-BIT CLASS SPEC\n * Encodes compact 32-bit class Spec for RDB/DUMP serialization:\n *            31                           8 7     3 2   0\n *            ┌───────────────────────────────┬───────┬─────┐\n *            │   4-char name \"xxxx\"(24 bits) │ ver   │flags│\n *            │   (6 bits per char)           │(5 bit)│(3b) │\n *            └───────────────────────────────┴───────┴─────┘\n *\n * >> ENCODING MODULE-STYLE ID\n * Generates a 9-char entity name with the \"META-\" prefix (e.g., \"META-KMT1\"),\n * 54 bits in total, plus a 10-bit version field (values 0-31). 
Compatible with moduleTypeEncodeId:\n *            63                                     10 9           0\n *            ┌───────────────────────────────────────┬─────────────┐\n *            │   9-char name (54 bits) \"META-xxxx\"   │  ver (0-31) │\n *            │   (6 bits per char)                   │   (10 bit)  │\n *            └───────────────────────────────────────┴─────────────┘\n * \n */\nstatic uint64_t keyMetaClassEncode(const char *name, int metaver, uint64_t flags,\n                                char *fullname, uint32_t *rdbEncodedValue) {\n    /* Validate name is exactly 4 characters */\n    if (strlen(name) != KM_NAME_LEN) return 0;\n\n    /* Validate version range (5 bits = 0-31 for metadata classes) */\n    if (metaver < 0 || metaver > KM_VER_MAX) return 0;\n\n    /* Generate 9-char name with \"META-\" prefix */\n    memcpy(fullname, KM_PREFIX, KM_PREFIX_LEN);\n    memcpy(fullname + KM_PREFIX_LEN, name, KM_NAME_LEN);\n    fullname[KM_FULLNAME_LEN] = '\\0';\n\n    /* Encode 9-char name into 64-bit entityId (module-style ID, 54 bits name\n     * plus 10 bits version) */\n    uint64_t encName9Chars = 0;\n    /* Encode last 4-char into 32-bit serialized class ID (24b name + 5b version + 3b flags) */\n    uint32_t encName4chars = 0;\n    for (int j = 0; j < KM_FULLNAME_LEN; j++) {\n        char *p = strchr(keyMetaCharSet, fullname[j]);\n        if (!p) return 0; /* Invalid character in name */\n        unsigned long pos = p - keyMetaCharSet;\n        encName9Chars = (encName9Chars << KM_ENC_CHAR_BITS) | pos;\n        if (j >= KM_PREFIX_LEN) encName4chars = (encName4chars << KM_ENC_CHAR_BITS) | pos;\n    }\n\n    /* Encodes compact 32-bit RDB/DUMP serialized class Spec */\n    *rdbEncodedValue = ((encName4chars << KM_VER_BITS) | metaver) << KM_FLAGS_BITS | (flags & KM_FLAGS_MASK);\n\n    /* Encodes the 9-char name into 64-bit ID (compatible with moduleTypeEncodeId) */\n    uint64_t entityId = (encName9Chars << KM_ENTITY_VER_BITS) | metaver;\n    
return entityId;\n}\n\n/* Decode 32-bit class spec from RDB/DUMP format\n *\n * Takes a 32-bit keyMetaClassSer and extracts:\n * - 4-character name (24 bits, 6 bits per char)\n * - version (5 bits, 0-31)\n * - flags (3 bits)\n *\n * This is the reverse of the encoding done in keyMetaClassEncode().\n *\n * Cannot fail: all 32-bit values are valid (6-bit char mask ensures valid charset\n * indices, and all 32 bits are consumed by design: 3 + 5 + 24 = 32).\n */\nvoid keyMetaClassDecode(uint32_t value, char *name, int *metaver, uint8_t *flags) {\n    debugServerAssert(name && metaver && flags);\n\n    /* Extract flags (lowest 3 bits) */\n    *flags = value & KM_FLAGS_MASK;\n    value >>= KM_FLAGS_BITS;\n\n    /* Extract version (next 5 bits) */\n    *metaver = value & KM_VER_MASK;\n    value >>= KM_VER_BITS;\n\n    /* Extract 4-char name (24 bits, 6 bits per char, big-endian) */\n    for (int i = KM_NAME_LEN - 1; i >= 0; i--) {\n        unsigned int pos = value & KM_CHAR_MASK;\n        debugServerAssert(pos < KM_CHARSET_SIZE); /* 6-bit value always < 64 */\n        name[i] = keyMetaCharSet[pos];\n        value >>= KM_ENC_CHAR_BITS;\n    }\n    name[KM_NAME_LEN] = '\\0';\n\n    /* All 32 bits should be consumed (3 + 5 + 24 = 32) */\n    debugServerAssert(value == 0);\n}\n\n/* Return -1 if not found, 1..7 for slot if INUSE, alreadyReleased if found but released */\nstatic int keyMetaClassLookupByName(const char *name, int *alreadyReleased) {\n    *alreadyReleased = 0;\n    if (!name) return -1;\n\n    for (int i = KEY_META_ID_MODULE_FIRST; i <= KEY_META_ID_MODULE_LAST; i++) {\n        if (keyMetaClass[i].state == CLASS_STATE_FREE)\n            continue;\n        if (memcmp(keyMetaClass[i].name, name, KM_NAME_LEN) != 0)\n            continue;\n        if (keyMetaClass[i].state == CLASS_STATE_INUSE)\n            return i;\n        if (keyMetaClass[i].state == CLASS_STATE_RELEASED) {\n            *alreadyReleased = 1;\n            return i;\n        }\n    }\n    return 
-1;\n}\n\n/* Initialize server.keyMeta with defaults and reserve built-in classes. */\nvoid keyMetaInit(void) {\n    memset(keyMetaClass, 0, sizeof(KeyMetaClass) * KEY_META_ID_MAX);\n\n    /* Slot 0 is EXPIRE, built-in and always active. */\n    keyMetaClass[KEY_META_ID_EXPIRE].state = CLASS_STATE_INUSE;\n    KM_SET_CONST_CONF(keyMetaClass[KEY_META_ID_EXPIRE].conf).flags = 0;\n    KM_SET_CONST_CONF(keyMetaClass[KEY_META_ID_EXPIRE].conf).reset_value = KM_EXPIRE_RESET_VALUE;\n}\n\n/* Prepare key metadata spec for copy of `srcKv` */\nvoid keyMetaOnCopy(kvobj *kv, robj *srcKey, robj *dstKey, int srcDbId, int dstDbId,\n                   KeyMetaSpec *keymeta)\n{\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n    if (kv->metabits & KEY_META_MASK_EXPIRE) {\n        if (*pMeta != KM_EXPIRE_RESET_VALUE)\n            keyMetaSpecAdd(keymeta, KEY_META_ID_EXPIRE, *pMeta);\n        pMeta--;\n    }\n\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return;\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    struct RedisModuleKeyOptCtx ctx = {srcKey, dstKey, srcDbId, dstDbId };\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n            /* Copy metadata from kv to temporary storage keymeta */\n            uint64_t tmpMeta = *pMeta--;\n            if (tmpMeta != keyMetaClass[keyMetaId].conf.reset_value &&\n                keyMetaClass[keyMetaId].conf.copy &&\n                keyMetaClass[keyMetaId].conf.copy(&ctx, &tmpMeta))\n                keyMetaSpecAdd(keymeta, keyMetaId, tmpMeta);\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n\n/* Prepare metadata spec for rename of `kv` */\nvoid keyMetaOnRename(struct redisDb *db,  kvobj *kv, robj *oldKey, robj *newKey, KeyMetaSpec *kms) {\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n\n    /* Handle builtin expire: add only if set and value != -1, but always advance\n     * the pointer when 
the expire bit is set since the slot exists either way. */\n    if (kv->metabits & KEY_META_MASK_EXPIRE) {\n        if (*pMeta != KM_EXPIRE_RESET_VALUE)\n            keyMetaSpecAdd(kms, KEY_META_ID_EXPIRE, *pMeta);\n        pMeta--; /* skip expire slot */\n    }\n\n    /* Process module metadata. Default on rename: keep if no callback. */\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return;\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    struct RedisModuleKeyOptCtx ctx = { oldKey, newKey, db ? db->id : -1, db ? db->id : -1 };\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n            uint64_t tmpMeta = *pMeta; /* read current module slot */\n            if (tmpMeta != keyMetaClass[keyMetaId].conf.reset_value &&\n                (!keyMetaClass[keyMetaId].conf.rename || \n                 keyMetaClass[keyMetaId].conf.rename(&ctx, &tmpMeta))) \n            {\n                keyMetaSpecAdd(kms, keyMetaId, tmpMeta);\n                /* Set old metadata slot to reset_value to prevent free callback */\n                *pMeta = keyMetaClass[keyMetaId].conf.reset_value;\n            }\n            pMeta--; /* advance to next module slot */\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n\n/* Prepare metadata spec for move of `kv` from srcDbId to dstDbId */\nvoid keyMetaOnMove(kvobj *kv, robj *key, int srcDbId, int dstDbId, KeyMetaSpec *kms) {\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n\n    /* Handle builtin expire: add only if set and value != -1, but always advance\n     * the pointer when the expire bit is set since the slot exists either way. */\n    if (kv->metabits & KEY_META_MASK_EXPIRE) {\n        if (*pMeta != KM_EXPIRE_RESET_VALUE)\n            keyMetaSpecAdd(kms, KEY_META_ID_EXPIRE, *pMeta);\n        pMeta--; /* skip expire slot */\n    }\n\n    /* Process module metadata. 
Default on move: keep if no callback. */\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return;\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    struct RedisModuleKeyOptCtx ctx = { key, NULL, srcDbId, dstDbId};\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n            uint64_t tmpMeta = *pMeta; /* read current module slot */\n            if (tmpMeta != keyMetaClass[keyMetaId].conf.reset_value &&\n                (!keyMetaClass[keyMetaId].conf.move || \n                 keyMetaClass[keyMetaId].conf.move(&ctx, &tmpMeta))) \n            {\n                keyMetaSpecAdd(kms, keyMetaId, tmpMeta);\n                /* If keep, set old metadata to reset_value to prevent free callback */\n                *pMeta = keyMetaClass[keyMetaId].conf.reset_value;\n            }\n            pMeta--; /* advance to next module slot */\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n\n/*\n * keyMetaOnUnlink() - when a key is logically overwritten/removed from the DB\n *\n * - Runs before the value object is actually freed (see keyMetaOnFree()).\n * - Runs on the main thread (same timing as moduleNotifyKeyUnlink()).\n * - Allows modules to detach per-key metadata from external structures, update\n *   auxiliary indexes, stats, etc.\n * - Skips the built-in EXPIRE slot (handled by caller).\n * - Iterates over module metadata bits and, for every set bit, invokes the\n *   class-specific unlink callback if provided.\n */\nvoid keyMetaOnUnlink(redisDb *db, robj *key, kvobj *kv) {\n    /* Skip builtin expire slot if present; no action for expire itself here. */\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n    if (kv->metabits & KEY_META_MASK_EXPIRE)\n        pMeta--;\n\n    /* Iterate module metadata and invoke per-class unlink if provided. 
*/\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return;\n\n    /* Build operation context for modules: from_key = key name, to_key = NULL. */\n    struct RedisModuleKeyOptCtx ctx = { key, NULL, db ? db->id : -1, -1 };\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n\n            if (*pMeta != keyMetaClass[keyMetaId].conf.reset_value &&\n                keyMetaClass[keyMetaId].conf.unlink) \n            {\n                keyMetaClass[keyMetaId].conf.unlink(&ctx, pMeta);\n            }\n            pMeta--;\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n\n/*\n * keyMetaOnFree() - when kvobj's metadata is actually being freed\n *\n * - Called after the key has been logically unlinked (see keyMetaOnUnlink()).\n * - This is the place to reclaim resources associated with per-key metadata (e.g.,\n *   free external allocations referenced by the 8-byte metadata value).\n * - May run in a background thread; therefore module code invoked here must NOT\n *   access Redis keyspace or perform operations that require the main thread.\n *   Only perform thread-safe memory cleanup pertinent to the metadata.\n * - For each attached metadata value, invokes the class-specific 'free' callback if given.\n */\nvoid keyMetaOnFree(kvobj *kv) {\n    /* Skip builtin expire slot if present; no action needed for expire itself. 
*/\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return;\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    const char *keyname = kvobjGetKey(kv);\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n            uint64_t meta = *pMeta--; /* consume this module's metadata slot */\n            if (meta != keyMetaClass[keyMetaId].conf.reset_value &&\n                keyMetaClass[keyMetaId].conf.free)\n                keyMetaClass[keyMetaId].conf.free(keyname, meta);\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n\n/* Free any metadata stored in a KeyMetaSpec. This is called when RDB load fails \n * after some metadata has been loaded. It invokes the free cb for each metadata \n * class that was already loaded, preventing memory leaks from partially-loaded metadata.\n *\n * Note: \n * - We pass NULL for keyname since the key doesn't exist yet.\n * - The kms->meta[] array is stored in reverse order: smallest metaid at the end.\n */\nvoid keyMetaSpecCleanup(KeyMetaSpec *kms) {\n    if (kms->numMeta == 0) return;\n\n    /* Iterate through the metadata array in reverse order (largest to smallest ID) */\n    int startIdx = KEY_META_ID_MAX - kms->numMeta;\n    uint32_t mbits = kms->metabits;\n\n    for (int i = startIdx ; mbits != 0 ; i++) {\n        /* Find the highest metaid remaining in mbits */\n        int metaid = 31 - __builtin_clz((unsigned)mbits);\n\n        /* Get the metadata value for this slot */\n        uint64_t meta = kms->meta[i];\n\n        /* Call free callback if metadata is not reset value */\n        KeyMetaClass *pClass = &keyMetaClass[metaid];\n        if (pClass->state == CLASS_STATE_INUSE &&\n            meta != pClass->conf.reset_value &&\n            pClass->conf.free)\n        {\n            pClass->conf.free(NULL, meta);\n        }\n\n        /* Clear this bit and continue to next slot */\n       
 mbits &= ~(1 << metaid);\n    }\n    kms->numMeta = 0;\n    kms->metabits = 0;\n}\n\nint rdbLoadSkipMetaIfAllowed(rio *rdb, char *cname, int flags) {\n    static int countDownNotice = 0;\n    static rio *lastRdb = NULL;\n    if (lastRdb != rdb) {\n        countDownNotice = 10;\n        lastRdb = rdb;\n    }\n\n    /* Check ALLOW_IGNORE flag */\n    if (flags & (1 << KEY_META_FLAG_ALLOW_IGNORE)) {\n        if (countDownNotice-- > 0) {\n            /* Skip this metadata gracefully */\n            serverLog(LL_NOTICE, \"Skipping metadata for class '%s' (not registered or missing rdb_load)\", cname);\n        }\n\n        /* Skip the metadata value by loading and discarding it.\n         * The metadata format is: VALUE (variable length) + EOF marker.\n         *\n         * The VALUE is saved using RedisModule_Save* functions which use module opcodes\n         * (RDB_MODULE_OPCODE_SINT, etc.), so we use rdbLoadCheckModuleValue() to skip it.\n         *\n         * Note: rdbLoadCheckModuleValue() reads opcodes until it finds RDB_MODULE_OPCODE_EOF,\n         * so it consumes the EOF marker as well. We don't need to read it separately. 
*/\n        robj *dummy = rdbLoadCheckModuleValue(rdb, cname, 1);\n        if (dummy == NULL) {\n            serverLog(LL_WARNING, \"Corrupted metadata value for class '%s'\", cname);\n            return -1;\n        }\n\n        decrRefCount(dummy);\n        return 0;\n    } else {\n        serverLog(LL_WARNING, \"RDB load key metadata failed: Class '%s' not registered or missing rdb_load().\", cname);\n        return -1;\n    }\n}\n\n/* Load module metadata from RDB.\n * Returns 0 on success, -1 on error.\n * Stores loaded metadata in the provided KeyMetaSpec structure.\n *\n * Format (same as save):\n *   1B: NUM_CLASSES (already read by caller)\n *   For each class:\n *     4B: CLASS_SPEC (32-bit classSpecEncoded)\n *     ?B: VALUE (from rdb_load callback)\n *     1B: RDB_MODULE_OPCODE_EOF\n */\nint rdbLoadKeyMetadata(rio *rdb, int dbid, int numClasses, KeyMetaSpec *kms) {\n    if (numClasses > KEY_META_MAX_NUM_MODULES) {\n        serverLog(LL_WARNING, \"Too many metadata classes: %d (max %d)\",\n                  numClasses, KEY_META_MAX_NUM_MODULES);\n        return -1;\n    }\n\n    for (int i = 0; i < numClasses; i++) {\n        /* Read 32-bit encoded class spec */\n        uint32_t encClassSpec;\n        if (rioRead(rdb, &encClassSpec, KM_CLASS_SPEC_SIZE) == 0) goto error;\n\n        /* Deserialize to get name, version, flags */\n        char name[5];\n        int metaver;\n        uint8_t flags;\n        keyMetaClassDecode(encClassSpec, name, &metaver, &flags);\n\n        /* Lookup class by name */\n        int alreadyReleased = 0;\n        KeyMetaClassId classId = keyMetaClassLookupByName(name, &alreadyReleased);\n\n        /* If class not found or released, check ALLOW_IGNORE flag */\n        if (classId == -1 || alreadyReleased) {\n            int rc = rdbLoadSkipMetaIfAllowed(rdb, name, flags);\n            if (rc == -1) goto error;\n            continue;\n        }\n\n        /* Class is registered; metaver is forwarded to its rdb_load callback below */\n        KeyMetaClass *pClass = 
&keyMetaClass[classId];\n        debugServerAssert(pClass->state == CLASS_STATE_INUSE);\n\n        /* If no rdb_load callback, check ALLOW_IGNORE flag */\n        if (pClass->conf.rdb_load == NULL) {\n            /* No rdb_load callback - check ALLOW_IGNORE flag */\n            int rc = rdbLoadSkipMetaIfAllowed(rdb, name, flags);\n            if (rc == -1) goto error;\n            continue;\n        }\n\n        RedisModuleIO io;\n        /* We don't have the key yet, so pass NULL for now */\n        moduleInitIOContext(&io, &pClass->entity, rdb, NULL, dbid);\n\n        uint64_t meta = 0;\n        int rc = pClass->conf.rdb_load(&io, &meta, metaver);\n\n        /* Read EOF marker */\n        uint64_t eof = rdbLoadLen(rdb, NULL);\n        if (eof != RDB_MODULE_OPCODE_EOF) {\n            serverLog(LL_WARNING, \"Missing EOF after key metadata '%s' (got 0x%llx)\",\n                      name, (unsigned long long)eof);\n            io.error = 1;\n        }\n\n        if (io.ctx) {\n            moduleFreeContext(io.ctx);\n            zfree(io.ctx);\n        }\n\n        if (io.error) {\n            /* rdb_load succeeded but loading EOF failed */\n            if (rc == 1) keyMetaSpecAddUnordered(kms, classId, meta);\n            goto error;\n        }\n\n        /* Handle rdb_load return value:\n         *   1: Attach metadata to key (success)\n         *   0: Ignore/skip metadata (not an error)\n         *  -1: Error - abort RDB load (module should clean up before returning -1) */\n        if (rc == 1) {\n            /* Add metadata, handling out-of-order classIds that may occur when\n             * modules register in different order at load time vs save time */\n            keyMetaSpecAddUnordered(kms, classId, meta);\n        } else if (rc == 0) {\n            /* Ignore/skip - don't attach metadata, continue loading */\n        } else if (rc == -1) {\n            /* Error - abort RDB load */\n            serverLog(LL_WARNING,\n                \"RDB load failed: 
rdb_load callback for metadata class '%s' returned error\", name);\n            goto error;\n        } else {\n            /* Invalid return value */\n            serverLog(LL_WARNING,\n                \"RDB load failed: rdb_load callback for metadata class '%s' \"\n                \"returned invalid value %d (expected -1, 0, or 1)\",\n                name, rc);\n            goto error;\n        }\n    }\n\n    return 0; /* Success */\n\nerror:\n    /* Clean up any metadata that was successfully loaded before the error */\n    keyMetaSpecCleanup(kms);\n    return -1;\n}\n\n/* Save all key metadata to RDB using lazy header writing.\n * We accumulate class data (CLASS_SPEC + VALUE + EOF) in a temporary buffer,\n * counting classes that actually write data. Only if count > 0, we write the\n * opcode and NUM_CLASSES to RDB, followed by the accumulated payload.\n * This avoids writing RDB_OPCODE_KEY_META when no module writes any data.\n *\n * Format:\n *   1B: RDB_OPCODE_KEY_META\n *   ?B: NUM_CLASSES (count of classes that wrote data)\n *   For each class:\n *     4B: CLASS_SPEC (32-bit classSpecEncoded)\n *     ?B: VALUE (from rdb_save callback)\n *     1B: RDB_MODULE_OPCODE_EOF\n *     \n  * Returns -1 on error, 0 on success.\n */\nint rdbSaveKeyMetadata(rio *rdb, robj *key, kvobj *kv, int dbid) {\n\n    /* Check if there are any module metadata bits set */\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return 0; /* No module metadata */\n\n    /* Skip builtin expire slot if present */\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n    if (kv->metabits & KEY_META_MASK_EXPIRE)\n        pMeta--;\n\n    /* Create temporary buffer for payload (class data only, no headers) */\n    rio payload_rio;\n    rioInitWithBuffer(&payload_rio, sdsempty());\n\n    /* Iterate through classes and accumulate payload */\n    int numClasses = 0;\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    uint32_t mbits_copy = mbits;\n\n    do {\n     
   /* Check if metadata is attached for this class */\n        if (mbits_copy & 1) {\n            KeyMetaClass *pClass = &keyMetaClass[keyMetaId];\n            serverAssert(pClass->state == CLASS_STATE_INUSE);\n\n            if (*pMeta != pClass->conf.reset_value && pClass->conf.rdb_save) {\n                /* Write 32-bit class spec to payload buffer */\n                uint32_t classSpec = pClass->classSpecEncoded;\n                if (rdbWriteRaw(&payload_rio, &classSpec, KM_CLASS_SPEC_SIZE) == -1) goto error;\n\n                size_t bytes_before = sdslen(payload_rio.io.buffer.ptr);\n\n                /* Call module's rdb_save callback */\n                RedisModuleIO io;\n                moduleInitIOContext(&io, &pClass->entity, &payload_rio, key, dbid);\n                pClass->conf.rdb_save(&io, NULL, pMeta);\n\n                if (io.ctx) {\n                    moduleFreeContext(io.ctx);\n                    zfree(io.ctx);\n                }\n\n                if (io.error) goto error;\n\n                size_t bytes_after = sdslen(payload_rio.io.buffer.ptr);\n\n                /* Check if module actually wrote any data */\n                if (bytes_after > bytes_before) {\n                    /* Module wrote data - add EOF marker and count it */\n                    if (rdbSaveLen(&payload_rio, RDB_MODULE_OPCODE_EOF) == -1) goto error;\n                    numClasses++;\n                } else {\n                    /* Module didn't write data - remove the class spec we wrote.\n                     * bytes_before is the length after writing the class spec, so we want\n                     * to keep bytes_before - KM_CLASS_SPEC_SIZE bytes. We also need to update the RIO's pos to match. 
*/\n                    sdssubstr(payload_rio.io.buffer.ptr, 0, bytes_before - KM_CLASS_SPEC_SIZE);\n                    payload_rio.io.buffer.pos = bytes_before - KM_CLASS_SPEC_SIZE;\n                }\n            }\n\n            pMeta--; /* Move to next metadata slot */\n        }\n        keyMetaId++;\n        mbits_copy >>= 1;\n    } while (mbits_copy);\n\n    /* If no classes wrote data, discard everything */\n    if (numClasses == 0) {\n        sdsfree(payload_rio.io.buffer.ptr);\n        return 0;\n    }\n\n    /* Now write: [RDB_OPCODE_KEY_META][numClasses][payload] */\n    if ((rdbSaveType(rdb, RDB_OPCODE_KEY_META) == -1) ||\n        (rdbSaveLen(rdb, numClasses) == -1) ||\n        (rdbWriteRaw(rdb, payload_rio.io.buffer.ptr, sdslen(payload_rio.io.buffer.ptr)) == -1))\n    {\n        goto error;\n    }\n    \n    sdsfree(payload_rio.io.buffer.ptr);\n    return 0;\n\nerror:\n    sdsfree(payload_rio.io.buffer.ptr);\n    return -1;\n}\n\n/* returns 0 on error, 1 on success. */\nint keyMetaOnAof(rio *r, robj *key, kvobj *kv, int dbid) {\n    /* Skip builtin expire slot if present; no action needed for expire itself. 
*/\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n    if (kv->metabits & KEY_META_MASK_EXPIRE)\n        pMeta--;\n\n    /* Iterate module metadata and invoke per-class aof_rewrite if provided */\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbits == 0)) return 1;\n\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    do {\n        if (mbits & 1) {\n            serverAssert(keyMetaClass[keyMetaId].state == CLASS_STATE_INUSE);\n\n            uint64_t meta = *pMeta;\n            if (meta != keyMetaClass[keyMetaId].conf.reset_value &&\n                keyMetaClass[keyMetaId].conf.aof_rewrite) \n            {\n                RedisModuleIO io;\n                moduleInitIOContext(&io, &keyMetaClass[keyMetaId].entity, r, key, dbid);\n                keyMetaClass[keyMetaId].conf.aof_rewrite(&io, NULL, meta);\n                if (io.ctx) {\n                    moduleFreeContext(io.ctx);\n                    zfree(io.ctx);\n                }\n                if (io.error) return 0;\n            }\n            pMeta--;\n        }\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n\n    return 1;\n}\n\n/* Move entire metadata from old to new kvobj as is */\nvoid keyMetaTransition(kvobj *kvOld, kvobj *kvNew) {\n    /* Precondition: old kv must carry module metadata */\n    debugServerAssert(kvOld->metabits >> KEY_META_ID_MODULE_FIRST);\n    \n    /* Skip builtin expire slot if present; no action needed for expire itself. 
*/\n    uint64_t *pMetaOld = ((uint64_t *)kvOld) - 1;\n    if (kvOld->metabits & KEY_META_MASK_EXPIRE) pMetaOld--;\n    uint64_t *pMetaNew = ((uint64_t *)kvNew) - 1;\n    if (kvNew->metabits & KEY_META_MASK_EXPIRE) pMetaNew--;\n    \n    uint32_t mbitsOld = kvOld->metabits >> KEY_META_ID_MODULE_FIRST;\n    uint32_t mbitsNew = kvNew->metabits >> KEY_META_ID_MODULE_FIRST;\n    if (likely(mbitsOld == 0)) return;\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    do {\n        if (mbitsOld & 1) {\n            if (mbitsNew & 1) {\n                /* Transition metadata from old to new */\n                *pMetaNew-- = *pMetaOld;\n                /* Reset old metadata value to prevent double-free */\n                *pMetaOld-- = keyMetaClass[keyMetaId].conf.reset_value;\n            } else {\n                /* Leave metadata in old key as is */\n                pMetaOld--;\n            }\n        } else {\n            /* Update pMetaNew if needed (No need to reset value in new key, \n             * assuming it was initialized earlier). */\n            pMetaNew -= mbitsNew & 1;  \n        }\n        \n        mbitsOld >>= 1;\n        mbitsNew >>= 1;\n        keyMetaId++;\n    } while (mbitsOld);\n}\n\n/* Create a new metadata class. Returns class ID (1-7) on success, 0 on failure.\n * \n * context - In case of a module, pass the module pointer. Otherwise NULL.\n */\nKeyMetaClassId keyMetaClassCreate(RedisModule *context, const char *name,\n                                  int metaver, KeyMetaClassConf *conf) {\n    if (!conf) return 0;\n\n    /* Validate and encode ID. This also validates 4-char name and generates \"META-\" prefix. 
*/\n    char fullname[KM_FULLNAME_LEN+1];\n    uint32_t classSpecEncoded;\n    /* Resolve: entityId, fullname, keyMetaClassSer */\n    uint64_t entityId = keyMetaClassEncode(name,\n                                        metaver,\n                                        conf->flags & KEY_META_FLAGS_RDB_MASK,\n                                        fullname,\n                                        &classSpecEncoded);\n    if (entityId == 0) return 0;\n\n    /* Check for name conflicts using 4-char name. Allow reuse of RELEASED; forbid if INUSE. */\n    int alreadyReleased;\n    int keyMetaId = keyMetaClassLookupByName(name, &alreadyReleased);\n\n    if (alreadyReleased) {\n        /* If already released, then reuse the keyMetaId. */\n    } else {\n        /* Assert class is registered for first time */\n        serverAssert(keyMetaId == -1);\n\n        /* Find free keyMetaId */\n        for (int i = KEY_META_ID_MODULE_FIRST; i <= KEY_META_ID_MODULE_LAST; i++) {\n            if (keyMetaClass[i].state == CLASS_STATE_FREE) {\n                keyMetaId = i;\n                break;\n            }\n        }\n        if (keyMetaId == -1) return 0; /* no free keyMetaId */\n    }\n\n    KeyMetaClass *pKeyMetaClass = &keyMetaClass[keyMetaId];\n\n    /* Store 4-char short name */\n    memcpy(pKeyMetaClass->name, name, KM_NAME_LEN);\n    pKeyMetaClass->name[KM_NAME_LEN] = '\\0';\n\n    /* Store 9-char full name with \"META-\" prefix */\n    memcpy(pKeyMetaClass->entity.name, fullname, KM_FULLNAME_LEN+1);\n    pKeyMetaClass->entity.id = entityId;\n    pKeyMetaClass->entity.module = context;\n    pKeyMetaClass->state = CLASS_STATE_INUSE;\n    pKeyMetaClass->classSpecEncoded = classSpecEncoded;\n    KM_SET_CONST_CONF(pKeyMetaClass->conf) = *conf; /* Copy config as is. */\n    return keyMetaId; /* Return handle (1..7). */\n}\n\n/* Destroy (release) a class by its ID. Returns 1 on success, 0 on failure. 
*/\nint keyMetaClassRelease(KeyMetaClassId id) {\n    if (!(id >= KEY_META_ID_MODULE_FIRST && id <= KEY_META_ID_MODULE_LAST))\n        return 0;\n\n    if (keyMetaClass[id].state != CLASS_STATE_INUSE)\n        return 0;\n\n    keyMetaClass[id].state = CLASS_STATE_RELEASED;\n    return 1;\n}\n\n/* Set a module metadata value on an opened key. Returns the new kvobj pointer (may be reallocated).\n * Returns NULL on failure. The caller must update any references to the old kv pointer. */\nkvobj *keyMetaSetMetadata(redisDb *db, kvobj *kv, KeyMetaClassId id, uint64_t metadata) {\n    serverAssert(id >= KEY_META_ID_MODULE_FIRST && id <= KEY_META_ID_MODULE_LAST);\n\n    /* Class must be active */\n    if (keyMetaClass[id].state != CLASS_STATE_INUSE)\n        return NULL;\n\n    /* If metadata already attached, just update it in place. */\n    if (kv->metabits & (1u << id)) {\n        *kvobjMetaRef(kv, id) = metadata;\n        return kv;\n    }\n\n    /* We need to grow kv to add a new 8-byte metadata slot. This may reallocate\n     * the object, so we must carefully preserve and restore:\n     * - The key's expires dictionary entry (if TTL is set)\n     * - The global Hash Field Expires (HFE) registration for hash objects\n     * - All existing metadata values (including expire value)\n     */\n\n    sds key = kvobjGetKey(kv);\n    int slot = getKeySlot(key);\n\n    /* Preserve HFE registration for hash objects (embedded in object memory). */\n    uint64_t subexpiry = EB_EXPIRE_TIME_INVALID;\n    if (kv->type == OBJ_HASH)\n        subexpiry = estoreRemove(db->subexpires, slot, kv);\n\n    /* Preserve existing expire value (and whether an expires entry exists). */\n    long long old_expire_val = kvobjGetExpire(kv);\n    \n    /* We'll need the key's link in the main dictionary to update pointer if reallocated. 
*/\n    dictEntryLink keyLink = kvstoreDictFindLink(db->keys, slot, key, NULL);\n    serverAssert(keyLink != NULL);\n\n    /* If the key has an actual TTL (expire != -1), also preserve the expires dict link. */\n    dictEntryLink exLink = NULL;\n    if (old_expire_val != -1) {\n        exLink = kvstoreDictFindLink(db->expires, slot, key, NULL);\n        serverAssert(exLink != NULL);\n    }\n\n    /* Reallocate kv with the new metadata bit enabled. kvobjSet may return a new \n     * ptr. Takes care to transition existing metadata as needed. */\n    size_t oldsize = 0;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n    kv = kvobjSet(key, kv, kv->metabits | (1u << id));\n    kvstoreDictSetAtLink(db->keys, slot, kv, &keyLink, 0);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(db, slot, kv, oldsize, kvobjAllocSize(kv));\n\n    /* Set new metadata */\n    *kvobjMetaRef(kv, id) = metadata;\n    \n    /* If there was an expires entry (expire != -1), update its kv pointer. */\n    if (exLink) {\n        ((uint64_t *)kv)[-1] = old_expire_val; /* expiry must be first meta */\n        kvstoreDictSetAtLink(db->expires, slot, kv, &exLink, 0);\n    }\n\n    /* Re-register in HFE if needed. */\n    if (subexpiry != EB_EXPIRE_TIME_INVALID)\n        estoreAdd(db->subexpires, slot, kv, subexpiry);\n\n    return kv;\n}\n\n/* Retrieve a module metadata value from an opened key. Returns 1 on success, 0 otherwise. */\nint keyMetaGetMetadata(KeyMetaClassId kmcId, kvobj *kv, uint64_t *metadata) {\n    serverAssert(kmcId >= KEY_META_ID_MODULE_FIRST && kmcId <= KEY_META_ID_MODULE_LAST);\n    \n    if (keyMetaClass[kmcId].state != CLASS_STATE_INUSE) \n        return 0;\n    \n    if (!(kv->metabits & (1u << kmcId))) \n        return 0; /* metadata not attached */\n\n    *metadata = *kvobjMetaRef(kv, kmcId);\n    return 1;\n}\n\n/* Add metadata to keymeta spec. Must be in range 0..7 and in order! 
*/\nvoid keyMetaSpecAdd(KeyMetaSpec *keymeta, int metaid, uint64_t metaval) {\n    /* Verify added in order and for the first time */\n    debugServerAssert(keymeta->metabits == 0 || (1<<metaid) > keymeta->metabits);\n    keymeta->metabits |= 1 << metaid ;\n    keymeta->numMeta++;\n    /* populated in reverse order */\n    keymeta->meta[KEY_META_ID_MAX - keymeta->numMeta] = metaval;\n}\n\n/* Add metadata to keymeta spec, handling out-of-order metaid addition.\n * This is useful when metadata may arrive in different order than class IDs\n * (e.g., RDB load with different module registration order).\n * The function maintains the sorted order of the reverse-populated array. */\nstatic void keyMetaSpecAddUnordered(KeyMetaSpec *keymeta, int metaid, uint64_t metaval) {\n    debugServerAssert(metaid >= 0 && metaid < KEY_META_ID_MAX);\n    debugServerAssert((keymeta->metabits & (1 << metaid)) == 0); /* Not already added */\n\n    /* The meta array is populated in reverse order from the end backward. smallest \n     * metaid is at the end. Iterate through array slots upward, but find metaids \n     * by scanning downward (highest to lowest) to match the reverse-order layout. 
*/\n    int startIdx = KEY_META_ID_MAX - keymeta->numMeta;\n    uint16_t tmpBits = keymeta->metabits;\n    int slot = startIdx;\n\n    while (tmpBits) {\n        /* Find highest metaid in tmpBits (scanning downward from highest bit) */\n        int id = 31 - __builtin_clz((unsigned)tmpBits);\n\n        /* break if we found the slot for the new metaid */\n        if (id < metaid) break;\n\n        /* This id is bigger, shift it down */\n        keymeta->meta[slot - 1] = keymeta->meta[slot];\n        tmpBits &= ~(1 << id);\n        slot++;\n    }\n\n    /* Insert new metaid at position slot - 1 */\n    keymeta->meta[slot - 1] = metaval;\n    keymeta->metabits |= 1 << metaid;\n    keymeta->numMeta++;\n}\n\n/* Blindly reset modules metadata values to reset_value */\nvoid keyMetaResetModuleValues(kvobj *kv) {\n    /* Precondition: only called for module metadata (bits 1-7) */\n    debugServerAssert(kv->metabits & KEY_META_MASK_MODULES);\n\n    /* Skip expire slot (bit 0) if present, start directly at module metadata */\n    uint64_t *pMeta = ((uint64_t *)kv) - 1;\n    if (kv->metabits & KEY_META_MASK_EXPIRE)\n        pMeta--;\n\n    /* Process only module metadata bits (1-7) */\n    uint32_t mbits = kv->metabits >> KEY_META_ID_MODULE_FIRST;\n    int keyMetaId = KEY_META_ID_MODULE_FIRST;\n    do {\n        if (mbits & 1)\n            *pMeta-- = keyMetaClass[keyMetaId].conf.reset_value;\n\n        mbits >>= 1;\n        keyMetaId++;\n    } while (mbits != 0);\n}\n"
  },
  {
    "path": "src/keymeta.h",
    "content": "/*\n * Key Metadata (keymeta)\n *\n * High-level idea\n * ----------------\n * keymeta is a framework for attaching & maintaining metadata to keys. \n * \n * - Up to 8 different metadata classes can be registered globally. First one\n *   is reserved for EXPIRE.\n * - Each class has a unique ID and name, like modules datat-types names, yet \n *   it has its own namespace.\n * - Each key metadata class provides a set of callbacks for key lifecycle operations, \n *   ensuring consistent handling across copy, rename, logical removal (unlink), \n *   actual deallocation (free), persistence (RDB/AOF), and defragmentation. \n * - Each key can carry up to 8 independent metadata values. Each value is related \n *   to a specific metadata class. \n * - The 8-byte slot can either hold inline data or a pointer/handle to a larger, \n *   externally managed structure.\n *\n * Relation to other components\n * ----------------------------\n * - kvobj: 8 bits metabits field in kvobj is used to indicate active metadata.\n *   bit number corresponds to class ID. \n * - Expiration: class ID 0 is reserved for TTL/expire; \n * - Registration: redisServer.keyMetaClass[] stores registered classes. Modules\n *   register via keyMetaClassCreate (see redismodule.h) and may provide callbacks\n *   for persistence, copy/rename behavior, and lifecycle hooks (unlink/free).\n * - modules: modules can register metadata classes and provide callbacks.  
\n */\n\n#ifndef __KEYMETA_H\n#define __KEYMETA_H\n\n#include <stdint.h>\n#include <stddef.h>\n#include \"sds.h\"\n#include \"object.h\"\n\n/* fwd decls */\nstruct redisDb;\nstruct redisObject;\nstruct RedisModuleIO;\nstruct RedisModuleKeyOptCtx;\nstruct RedisModuleDefragCtx;\nstruct RedisModule;\ntypedef int KeyMetaClassId; /* Index into redisServer.keyMetaClass[] */\n\n/* kvmeta - Metadata to be attached to kvobj */\n#define KEY_META_ID_EXPIRE        0 /* Must be first */\n/* IDs 1..7 are available for modules */\n#define KEY_META_ID_MODULE_FIRST  1\n#define KEY_META_ID_MODULE_LAST   7\n#define KEY_META_ID_MAX           8\n\n#define KEY_META_MAX_NUM_MODULES  (KEY_META_ID_MODULE_LAST - KEY_META_ID_MODULE_FIRST + 1)\n\n#define KEY_META_MASK_NONE        0\n#define KEY_META_MASK_MODULES     (((1U << KEY_META_MAX_NUM_MODULES) - 1) << KEY_META_ID_MODULE_FIRST)\n#define KEY_META_MASK_EXPIRE      (1U << KEY_META_ID_EXPIRE)\n\n/* RDB load callback: Return 1 to attach, 0 to skip, -1 on error */\ntypedef int (*KeyMetaLoadFunc)(RedisModuleIO *rdb, uint64_t *meta, int encver);\ntypedef void (*KeyMetaSaveFunc)(RedisModuleIO *rdb, void *reserved, uint64_t *meta);\ntypedef void (*KeyMetaAOFRewriteFunc)(RedisModuleIO *aof, void *reserved, uint64_t meta);\ntypedef void (*KeyMetaFreeFunc)(const char *keyname, uint64_t meta);\ntypedef int (*KeyMetaCopyFunc)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*KeyMetaRenameFunc)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*KeyMetaDefragFunc)(RedisModuleDefragCtx *ctx, RedisModuleString *keyname, uint64_t meta);\ntypedef size_t (*KeyMetaMemUsageFunc)(struct RedisModuleKeyOptCtx *ctx, size_t sample_size, uint64_t meta);\ntypedef size_t (*KeyMetaFreeEffortFunc)(struct RedisModuleKeyOptCtx *ctx, uint64_t meta);\ntypedef void (*KeyMetaUnlinkFunc)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*KeyMetaMoveFunc)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n\n/* For explanation, see 
struct RedisModuleKeyMetaClassConfig */\ntypedef struct KeyMetaClassConf {\n#define KEY_META_FLAGS_RDB_MASK      0x7 /* First 3 flags are serialized into RDB with key */\n#define KEY_META_FLAG_ALLOW_IGNORE   0   /* Aligned with: REDISMODULE_META_ALLOW_IGNORE */\n#define KEY_META_FLAG_RDB_RESERVED_1 1   /* Reserved for future use */\n#define KEY_META_FLAG_RDB_RESERVED_2 2   /* Reserved for future use */\n    uint64_t flags;\n    \n    /* Sentinel value meaning \"no resource attached\". It guarantees callbacks are \n     * ONLY invoked when meta != reset_value. This prevents double-free, avoids \n     * persisting sentinels to RDB/AOF, and simplifies module logic. */\n    uint64_t reset_value;\n\n    int (*copy)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n    int (*rename)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n    int (*move)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n    void (*unlink)(struct RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n    void (*free)(const char *keyname, uint64_t meta);\n    int (*rdb_load)(struct RedisModuleIO *rdb, uint64_t *meta, int metaver);\n    void (*rdb_save)(struct RedisModuleIO *rdb, void *reserved, uint64_t *meta);\n    void (*aof_rewrite)(struct RedisModuleIO *aof, void *reserved, uint64_t meta);\n    \n    /****************************** TBD: ******************************/\n    int (*defrag) (struct RedisModuleDefragCtx *ctx, struct redisObject *key, uint64_t meta);\n    size_t (*mem_usage)(struct RedisModuleKeyOptCtx *ctx, size_t sample_size, uint64_t meta);\n    size_t (*free_effort)(struct RedisModuleKeyOptCtx *ctx, uint64_t meta);\n} KeyMetaClassConf;\n\n/* KeyMetaSpec - Used by dbAddInternal() to describe metadata of a new key */\ntypedef struct KeyMetaSpec {\n    uint16_t numMeta; /* Num active metadata entries. Aligned with metabits */\n    uint16_t metabits;\n\n    /* Array of metadata values. 
Entries are populated in reverse order\n     * (from the end of the array backward) to make bulk copying with\n     * memcpy more efficient. During insertion, the next slot is:\n     *            meta[KEY_META_ID_MAX - (++numMeta)]\n     *\n     * For example, if numMeta=2 and metabits=0b101, then the last entry holds\n     * the value for class 0, and the previous entry holds the value for class 2.\n     */\n    uint64_t meta[KEY_META_ID_MAX];\n} KeyMetaSpec;\n\n/* Initialize key metadata on server startup */\nvoid keyMetaInit(void);\n\n/* Key metadata event callbacks */\nvoid keyMetaOnUnlink(struct redisDb *db, robj *key, kvobj *kv);\nvoid keyMetaOnFree(kvobj *kv);\nvoid keyMetaOnRename(struct redisDb *db, kvobj *kv, robj *oldKey, robj *newKey, KeyMetaSpec *kms);\nvoid keyMetaOnMove(kvobj *kv, robj *key, int srcDbId, int dstDbId, KeyMetaSpec *kms);\nvoid keyMetaOnCopy(kvobj *kv, robj *srcKey, robj *dstKey, int srcDbId, int dstDbId, KeyMetaSpec *kms);\nint keyMetaOnAof(rio *r, robj *key, kvobj *kv, int dbid);\n\n/* RDB serialization */\nint rdbSaveKeyMetadata(rio *rdb, robj *key, kvobj *kv, int dbid);\nint rdbLoadKeyMetadata(rio *rdb, int dbid, int numClasses, KeyMetaSpec *kms);\n\nvoid keyMetaResetModuleValues(kvobj *kv);\nvoid keyMetaTransition(kvobj *kvOld, kvobj *kvNew);\n\n/* Return 0 if creation failed. Otherwise return a handle (between 1 and 7) */\nKeyMetaClassId keyMetaClassCreate(struct RedisModule *ctx, const char *metaname, int metaver, KeyMetaClassConf *conf);\n/* Destroy (release) a previously created class. Return 1 on success, 0 on failure. 
*/\nint keyMetaClassRelease(KeyMetaClassId class_id);\n\nkvobj *keyMetaSetMetadata(struct redisDb *db, kvobj *kv, KeyMetaClassId kmcId, uint64_t metadata);\nint keyMetaGetMetadata(KeyMetaClassId kmcId, kvobj *kv, uint64_t *metadata);\nint keyMetaRemoveMetadata(KeyMetaClassId kmcId, RedisModuleKey *key);\n\n/* bit operations on metabits */\nstatic inline uint32_t getNumMeta(uint16_t metabits);\nstatic inline uint32_t getModuleMetaBits(uint16_t metabits);\n\n/********** Inline functions **********/\n\nstatic inline void keyMetaResetValues(kvobj *kv) {\n    if (unlikely(kv->metabits & KEY_META_MASK_MODULES))\n        keyMetaResetModuleValues(kv);\n    /* Must be first meta (optimized) */\n    if (kv->metabits & KEY_META_MASK_EXPIRE)\n        ((uint64_t *)kv)[-1] = -1;\n}\n\nstatic inline void keyMetaSpecInit(KeyMetaSpec *keymeta) {\n    /* Enough to init metabits and numMeta. meta[] is not used. */\n    keymeta->metabits = 0;\n    keymeta->numMeta = 0;\n}\n\n/* Add metadata to keymeta spec. metaid must be in range 0..7 and added in order! */\nvoid keyMetaSpecAdd(KeyMetaSpec *keymeta, int metaid, uint64_t metaval);\n\n/* Free any metadata stored in a KeyMetaSpec. This is called when RDB load fails after\n * some metadata has been loaded. It invokes the free cb for each metadata class that \n * was already loaded, preventing memory leaks from partially-loaded metadata. */\nvoid keyMetaSpecCleanup(KeyMetaSpec *kms);\n\nstatic inline uint32_t getNumMeta(uint16_t metabits) {\n    /* Assumed expire is always first meta */\n    return __builtin_popcount(metabits);\n}\n\nstatic inline uint32_t getModuleMetaBits(uint16_t metabits) {\n    return metabits & KEY_META_MASK_MODULES;\n}\n\n#endif // __KEYMETA_H\n"
  },
  {
    "path": "src/kvstore.c",
    "content": "/*\n * Copyright (c) 2011-Present, Redis Ltd. and contributors.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"fmacros.h\"\n\n#include <string.h>\n#include <stddef.h>\n\n#include \"zmalloc.h\"\n#include \"kvstore.h\"\n#include \"fwtree.h\"\n#include \"redisassert.h\"\n#include \"monotonic.h\"\n\n#define UNUSED(V) ((void) V)\n\nstruct _kvstore {\n    int flags;\n    kvstoreType *type;\n    dictType dtype;\n    dict **dicts;\n    long long num_dicts;\n    long long num_dicts_bits;\n    list *rehashing;                       /* List of dictionaries in this kvstore that are currently rehashing. */\n    int resize_cursor;                     /* Cron job uses this cursor to gradually resize dictionaries (only used if num_dicts > 1). */\n    int allocated_dicts;                   /* The number of allocated dicts. */\n    int non_empty_dicts;                   /* The number of non-empty dicts. */\n    unsigned long long key_count;          /* Total number of keys in this kvstore. */\n    unsigned long long bucket_count;       /* Total number of buckets in this kvstore across dictionaries. */\n    fenwickTree *dict_sizes;               /* Binary indexed tree (BIT) that describes cumulative key frequencies up until given dict-index. */\n    size_t overhead_hashtable_rehashing;   /* The overhead of dictionaries rehashing. 
*/\n    void *metadata[];                      /* conditionally allocated based on \"flags\" */\n};\n\n/**********************************/\n/*** Helpers **********************/\n/**********************************/\n\n/* Get the dictionary pointer based on dict-index. */\ndict *kvstoreGetDict(kvstore *kvs, int didx) {\n    return kvs->dicts[didx];\n}\n\nstatic dict **kvstoreGetDictRef(kvstore *kvs, int didx) {\n    return &kvs->dicts[didx];\n}\n\nstatic int kvstoreDictIsRehashingPaused(kvstore *kvs, int didx)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    return d ? dictIsRehashingPaused(d) : 0;\n}\n\nstatic void addDictIndexToCursor(kvstore *kvs, int didx, unsigned long long *cursor) {\n    if (kvs->num_dicts == 1)\n        return;\n    /* didx can be -1 when iteration is over and there are no more dicts to visit. */\n    if (didx < 0)\n        return;\n    *cursor = (*cursor << kvs->num_dicts_bits) | didx;\n}\n\nstatic int getAndClearDictIndexFromCursor(kvstore *kvs, unsigned long long *cursor) {\n    if (kvs->num_dicts == 1)\n        return 0;\n    int didx = (int) (*cursor & (kvs->num_dicts-1));\n    *cursor = *cursor >> kvs->num_dicts_bits;\n    return didx;\n}\n\n/* Updates binary index tree (Fenwick tree), updates key count for a given dict */\nstatic void cumulativeKeyCountAdd(kvstore *kvs, int didx, long delta) {\n    kvs->key_count += delta;\n\n    dict *d = kvstoreGetDict(kvs, didx);\n    size_t dsize = dictSize(d);\n    /* Increment if dsize is 1 and delta is positive (first element inserted, dict becomes non-empty).\n     * Decrement if dsize is 0 (dict becomes empty). */\n    int non_empty_dicts_delta = (dsize == 1 && delta > 0) ? 1 : (dsize == 0) ? -1 : 0;\n    kvs->non_empty_dicts += non_empty_dicts_delta;\n\n    /* BIT does not need to be calculated when there's only one dict. 
*/\n    if (kvs->num_dicts == 1)\n        return;\n\n    /* Update the BIT */\n    fwTreeUpdate(kvs->dict_sizes, didx, delta);\n}\n\n/* Create the dict if it does not exist and return it. */\nstatic dict *createDictIfNeeded(kvstore *kvs, int didx) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (d) return d;\n\n    kvs->dicts[didx] = dictCreate(&kvs->dtype);\n    kvs->allocated_dicts++;\n    return kvs->dicts[didx];\n}\n\n/* Called when the dict will delete entries, the function will check\n * KVSTORE_FREE_EMPTY_DICTS to determine whether the empty dict needs\n * to be freed.\n *\n * Note that for rehashing dicts, that is, in the case of safe iterators\n * and Scan, we won't delete the dict. We will check whether it needs\n * to be deleted when we're releasing the iterator. */\nstatic void freeDictIfNeeded(kvstore *kvs, int didx) {\n    if (!(kvs->flags & KVSTORE_FREE_EMPTY_DICTS) ||\n        !kvstoreGetDict(kvs, didx) ||\n        kvstoreDictSize(kvs, didx) != 0 ||\n        kvstoreDictIsRehashingPaused(kvs, didx))\n        return;\n\n    /* Use callback if provided to check if dict can be freed */\n    if (kvs->type->canFreeDict && !kvs->type->canFreeDict(kvs, didx))\n        return;\n\n    dictRelease(kvs->dicts[didx]);\n    kvs->dicts[didx] = NULL;\n    kvs->allocated_dicts--;\n}\n\nvoid kvstoreFreeDictIfNeeded(kvstore *kvs, int didx) {\n    freeDictIfNeeded(kvs, didx);\n}\n\n/**********************************/\n/*** dict callbacks ***************/\n/**********************************/\n\n/* Adds dictionary to the rehashing list, which allows us\n * to quickly find rehash targets during incremental rehashing.\n *\n * If there are multiple dicts, updates the bucket count for the given dictionary\n * in a DB, bucket count incremented with the new ht size during the rehashing phase.\n * If there's one dict, bucket count can be retrieved directly from single dict bucket. 
*/\nstatic void kvstoreDictRehashingStarted(dict *d) {\n    kvstore *kvs = d->type->userdata;\n    kvstoreDictMetaBase *metadata = (kvstoreDictMetaBase *)dictMetadata(d);\n    listAddNodeTail(kvs->rehashing, d);\n    metadata->rehashing_node = listLast(kvs->rehashing);\n\n    unsigned long long from, to;\n    dictRehashingInfo(d, &from, &to);\n    kvs->overhead_hashtable_rehashing += from;\n}\n\n/* Remove dictionary from the rehashing list.\n *\n * Updates the bucket count for the given dictionary in a DB. It removes\n * the old ht size of the dictionary from the total sum of buckets for a DB.  */\nstatic void kvstoreDictRehashingCompleted(dict *d) {\n    kvstore *kvs = d->type->userdata;\n    kvstoreDictMetaBase *metadata = (kvstoreDictMetaBase *)dictMetadata(d);\n    if (metadata->rehashing_node) {\n        listDelNode(kvs->rehashing, metadata->rehashing_node);\n        metadata->rehashing_node = NULL;\n    }\n\n    unsigned long long from, to;\n    dictRehashingInfo(d, &from, &to);\n    kvs->overhead_hashtable_rehashing -= from;\n}\n\n/* Updates the bucket count for the given dictionary in a DB. It adds the new ht size\n * of the dictionary or removes the old ht size of the dictionary from the total\n * sum of buckets for a DB. */\nstatic void kvstoreDictBucketChanged(dict *d, long long delta) {\n    kvstore *kvs = d->type->userdata;\n    kvs->bucket_count += delta;\n}\n\n/* Returns the size of the DB dict extended metadata in bytes. */\nstatic size_t kvstoreDictBaseMetaSize(dict *d) {\n    UNUSED(d);\n    return sizeof(kvstoreDictMetaBase);\n}\n\n/**********************************/\n/*** API **************************/\n/**********************************/\n\n/* Create an array of dictionaries\n * num_dicts_bits is the log2 of the amount of dictionaries needed (e.g. 0 for 1 dict,\n * 3 for 8 dicts, etc.) 
*/\nkvstore *kvstoreCreate(kvstoreType *type, dictType *dtype, int num_dicts_bits, int flags) {\n    /* We can't support more than 2^16 dicts because we want to save 48 bits\n     * for the dict cursor, see kvstoreScan */\n    assert(num_dicts_bits <= 16);\n    assert(!type->dictMetadataBytes || type->dictMetadataBytes(NULL) >= sizeof(kvstoreDictMetaBase));\n\n    /* Calc kvstore size */   \n    size_t kvsize = sizeof(kvstore);\n    /* Conditionally calc also histogram size */\n    if (type->kvstoreMetadataBytes)\n        kvsize += type->kvstoreMetadataBytes(NULL);\n    \n    kvstore *kvs = zcalloc(kvsize);\n    memcpy(&kvs->dtype, dtype, sizeof(kvs->dtype));\n    kvs->flags = flags;\n    kvs->type = type;\n\n    /* kvstore must be the one to set these callbacks, so we make sure the\n     * caller didn't do it */\n    assert(!dtype->userdata);\n    assert(!dtype->dictMetadataBytes);\n    assert(!dtype->rehashingStarted);\n    assert(!dtype->rehashingCompleted);\n    kvs->dtype.userdata = kvs;\n    kvs->dtype.dictMetadataBytes = type->dictMetadataBytes ?\n        type->dictMetadataBytes : kvstoreDictBaseMetaSize;\n    kvs->dtype.rehashingStarted = kvstoreDictRehashingStarted;\n    kvs->dtype.rehashingCompleted = kvstoreDictRehashingCompleted;\n    kvs->dtype.bucketChanged = kvstoreDictBucketChanged;\n\n    kvs->num_dicts_bits = num_dicts_bits;\n    kvs->num_dicts = 1 << kvs->num_dicts_bits;\n    kvs->dicts = zcalloc(sizeof(dict*) * kvs->num_dicts);\n    if (!(kvs->flags & KVSTORE_ALLOCATE_DICTS_ON_DEMAND)) {\n        for (int i = 0; i < kvs->num_dicts; i++)\n            createDictIfNeeded(kvs, i);\n    }\n\n    kvs->rehashing = listCreate();\n    kvs->key_count = 0;\n    kvs->non_empty_dicts = 0;\n    kvs->resize_cursor = 0;\n    kvs->dict_sizes = kvs->num_dicts > 1 ? 
fwTreeCreate(kvs->num_dicts_bits) : NULL;\n    kvs->bucket_count = 0;\n    kvs->overhead_hashtable_rehashing = 0;\n    return kvs;\n}\n\nvoid kvstoreEmpty(kvstore *kvs, void(callback)(dict*)) {\n    for (int didx = 0; didx < kvs->num_dicts; didx++) {\n        dict *d = kvstoreGetDict(kvs, didx);\n        if (!d)\n            continue;\n        kvstoreDictMetaBase *metadata = (kvstoreDictMetaBase *)dictMetadata(d);\n        if (metadata->rehashing_node)\n            metadata->rehashing_node = NULL;\n        dictEmpty(d, callback);\n        if (kvs->type->onDictEmpty) kvs->type->onDictEmpty(kvs, didx);\n        freeDictIfNeeded(kvs, didx);\n    }\n\n    if (kvs->type->onKvstoreEmpty) kvs->type->onKvstoreEmpty(kvs);\n\n    listEmpty(kvs->rehashing);\n\n    kvs->key_count = 0;\n    kvs->non_empty_dicts = 0;\n    kvs->resize_cursor = 0;\n    kvs->bucket_count = 0;\n    if (kvs->dict_sizes)\n        fwTreeClear(kvs->dict_sizes);\n    kvs->overhead_hashtable_rehashing = 0;\n}\n\nvoid kvstoreRelease(kvstore *kvs) {\n    for (int didx = 0; didx < kvs->num_dicts; didx++) {\n        dict *d = kvstoreGetDict(kvs, didx);\n        if (!d)\n            continue;\n        kvstoreDictMetaBase *metadata = (kvstoreDictMetaBase *)dictMetadata(d);\n        if (metadata->rehashing_node)\n            metadata->rehashing_node = NULL;\n        if (kvs->type->onDictEmpty) kvs->type->onDictEmpty(kvs, didx);\n        dictRelease(d);\n    }\n    zfree(kvs->dicts);\n\n    listRelease(kvs->rehashing);\n    if (kvs->dict_sizes)\n        fwTreeDestroy(kvs->dict_sizes);\n\n    zfree(kvs);\n}\n\nunsigned long long int kvstoreSize(kvstore *kvs) {\n    return kvs->key_count;\n}\n\n/* This method provides the cumulative sum of all the dictionary buckets\n * across dictionaries in a database. */\nunsigned long kvstoreBuckets(kvstore *kvs) {\n    if (kvs->num_dicts != 1) {\n        return kvs->bucket_count;\n    } else {\n        return kvs->dicts[0]? 
dictBuckets(kvs->dicts[0]) : 0;\n    }\n}\n\nsize_t kvstoreMemUsage(kvstore *kvs) {\n    size_t mem = sizeof(*kvs);\n    size_t metaSize = kvs->dtype.dictMetadataBytes(NULL);\n    unsigned long long keys_count = kvstoreSize(kvs);\n    mem += keys_count * dictEntryMemUsage(kvs->dtype.no_value) +\n           kvstoreBuckets(kvs) * sizeof(dictEntry*) +\n           kvs->allocated_dicts * (sizeof(dict) + metaSize);\n\n    /* Values are dict* shared with kvs->dicts */\n    mem += listLength(kvs->rehashing) * sizeof(listNode);\n\n    return mem;\n}\n\n/*\n * This method is used to iterate over the elements of the entire kvstore specifically across dicts.\n * It's a three pronged approach.\n *\n * 1. It uses the provided cursor `cursor` to retrieve the dict index from it.\n * 2. If the dictionary is in a valid state checked through the provided callback `dictScanValidFunction`,\n *    it performs a dictScan over the appropriate `keyType` dictionary of `db`.\n * 3. If the dict is entirely scanned i.e. the cursor has reached 0, the next non empty dict is discovered.\n *    The dict information is embedded into the cursor and returned.\n *\n * To restrict the scan to a single dict, pass a valid dict index as\n * 'onlydidx', otherwise pass -1.\n */\nunsigned long long kvstoreScan(kvstore *kvs, unsigned long long cursor,\n                               int onlydidx, dictScanFunction *scan_cb,\n                               kvstoreScanShouldSkipDict *skip_cb,\n                               void *privdata)\n{\n    unsigned long long _cursor = 0;\n    /* During dictionary traversal, 48 upper bits in the cursor are used for positioning in the HT.\n     * Following lower bits are used for the dict index number, ranging from 0 to 2^num_dicts_bits-1.\n     * Dict index is always 0 at the start of iteration and can be incremented only if there are\n     * multiple dicts. 
*/\n    int didx = getAndClearDictIndexFromCursor(kvs, &cursor);\n    if (onlydidx >= 0) {\n        if (didx < onlydidx) {\n            /* Fast-forward to onlydidx. */\n            assert(onlydidx < kvs->num_dicts);\n            didx = onlydidx;\n            cursor = 0;\n        } else if (didx > onlydidx) {\n            /* The cursor is already past onlydidx. */\n            return 0;\n        }\n    }\n\n    dict *d = kvstoreGetDict(kvs, didx);\n\n    int skip = !d || (skip_cb && skip_cb(d, didx));\n    if (!skip) {\n        _cursor = dictScan(d, cursor, scan_cb, privdata);\n        /* In dictScan, scan_cb may delete entries (e.g., in active expire case). */\n        freeDictIfNeeded(kvs, didx);\n    }\n    /* Scanning of the current dictionary is done, or scanning wasn't possible; move to the next dict index. */\n    if (_cursor == 0 || skip) {\n        if (onlydidx >= 0)\n            return 0;\n        didx = kvstoreGetNextNonEmptyDictIndex(kvs, didx);\n    }\n    if (didx == -1) {\n        return 0;\n    }\n    addDictIndexToCursor(kvs, didx, &_cursor);\n    return _cursor;\n}\n\n/*\n * This function increases the size of the kvstore to match the desired number.\n * It resizes all individual dictionaries, unless skip_cb indicates otherwise.\n *\n * Based on the parameter `try_expand`, the appropriate dict expand API is invoked:\n * if try_expand is set to 1, `dictTryExpand` is used, else `dictExpand`.\n * The return code is either `DICT_OK`/`DICT_ERR` for both the API(s).\n * `DICT_OK` response is for successful expansion. 
However, `DICT_ERR` response signifies failure in allocation in\n * `dictTryExpand` call and in case of `dictExpand` call it signifies no expansion was performed.\n */\nint kvstoreExpand(kvstore *kvs, uint64_t newsize, int try_expand, kvstoreExpandShouldSkipDictIndex *skip_cb) {\n    for (int i = 0; i < kvs->num_dicts; i++) {\n        if (skip_cb && skip_cb(i)) continue;\n        dict *d = createDictIfNeeded(kvs, i);\n        if (!d) continue;\n\n        int result = try_expand ? dictTryExpand(d, newsize) : dictExpand(d, newsize);\n        if (try_expand && result == DICT_ERR)\n            return 0;\n    }\n\n    return 1;\n}\n\n/* Returns fair random dict index, probability of each dict being returned is\n * proportional to the number of elements that dictionary holds.\n * This function guarantees that it returns a dict-index of a non-empty dict,\n * unless the entire kvstore is empty or all dicts are skipped.\n *\n * Parameters:\n * - kvs: the kvstore instance\n * - skip_cb: callback to determine if a dict should be skipped (NULL means no skipping)\n * - fair_attempts: number of fair selection attempts before falling back\n * - slow_fallback: if 1, uses systematic search when fair attempts fail\n *\n * Returns:\n * - Valid dict index (>= 0) on success\n * - -1 if no valid dict found (either slow_fallback is 0 or all dicts are skipped)\n *\n * Time complexity: O(fair_attempts * log(kvs->num_dicts)) for fair attempts,\n * plus O(kvs->num_dicts) for systematic fallback if enabled.\n */\nint kvstoreGetFairRandomDictIndex(kvstore *kvs, kvstoreRandomShouldSkipDictIndex *skip_cb,\n                                  int fair_attempts, int slow_fallback)\n{\n    if (kvs->num_dicts == 1 || kvstoreSize(kvs) == 0)\n        return 0;\n\n    unsigned long long total_size = kvstoreSize(kvs);\n\n    /* Try fair attempts first. If skip_cb is not applicable, execute only once. 
*/\n    for (int attempt = 0; attempt < fair_attempts; attempt++) {\n        unsigned long target = (randomULong() % total_size) + 1;\n        int didx = kvstoreFindDictIndexByKeyIndex(kvs, target);\n        if (!skip_cb || !skip_cb(didx)) {\n            return didx;\n        }\n    }\n\n    /* If fair attempts failed and slow fallback is allowed */\n    if (slow_fallback) {\n        /* systematic check from random start */\n        int start = randomULong() % kvs->num_dicts;\n        for (int i = 0; i < kvs->num_dicts; i++) {\n            int didx = (start + i) % kvs->num_dicts;\n            dict *d = kvstoreGetDict(kvs, didx);\n            if (d && (!skip_cb || !skip_cb(didx))) {\n                return didx;\n            }\n        }\n    }\n\n    /* Failed to find valid dict that has elements */\n    return -1;\n}\n\nvoid kvstoreGetStats(kvstore *kvs, char *buf, size_t bufsize, int full) {\n    buf[0] = '\\0';\n\n    size_t l;\n    char *orig_buf = buf;\n    size_t orig_bufsize = bufsize;\n    dictStats *mainHtStats = NULL;\n    dictStats *rehashHtStats = NULL;\n    dict *d;\n    kvstoreIterator kvs_it;\n\n    kvstoreIteratorInit(&kvs_it, kvs);\n    while ((d = kvstoreIteratorNextDict(&kvs_it))) {\n        dictStats *stats = dictGetStatsHt(d, 0, full);\n        if (!mainHtStats) {\n            mainHtStats = stats;\n        } else {\n            dictCombineStats(stats, mainHtStats);\n            dictFreeStats(stats);\n        }\n        if (dictIsRehashing(d)) {\n            stats = dictGetStatsHt(d, 1, full);\n            if (!rehashHtStats) {\n                rehashHtStats = stats;\n            } else {\n                dictCombineStats(stats, rehashHtStats);\n                dictFreeStats(stats);\n            }\n        }\n    }\n    kvstoreIteratorReset(&kvs_it);\n\n    if (mainHtStats && bufsize > 0) {\n        l = dictGetStatsMsg(buf, bufsize, mainHtStats, full);\n        dictFreeStats(mainHtStats);\n        buf += l;\n        bufsize -= l;\n    }\n\n    
if (rehashHtStats && bufsize > 0) {\n        l = dictGetStatsMsg(buf, bufsize, rehashHtStats, full);\n        dictFreeStats(rehashHtStats);\n        buf += l;\n        bufsize -= l;\n    }\n    /* Make sure there is a NULL term at the end. */\n    if (orig_bufsize) orig_buf[orig_bufsize - 1] = '\\0';\n}\n\n/* Finds a dict containing target element in a key space ordered by dict index.\n * Consider this example. Dictionaries are represented by brackets and keys by dots:\n *  #0   #1   #2     #3    #4\n * [..][....][...][.......][.]\n *                    ^\n *                 target\n *\n * In this case dict #3 contains key that we are trying to find.\n *\n * The return value is 0 based dict-index, and the range of the target is [1..kvstoreSize], kvstoreSize inclusive.\n *\n * To find the dict, we start with the root node of the binary index tree and search through its children\n * from the highest index (2^num_dicts_bits in our case) to the lowest index. At each node, we check if the target\n * value is greater than the node's value. If it is, we remove the node's value from the target and recursively\n * search for the new target using the current node as the parent.\n * Time complexity of this function is O(log(kvs->num_dicts))\n */\nint kvstoreFindDictIndexByKeyIndex(kvstore *kvs, unsigned long target) {\n    if (kvs->num_dicts == 1 || kvstoreSize(kvs) == 0)\n        return 0;\n    assert(target <= kvstoreSize(kvs));\n\n    return fwTreeFindIndex(kvs->dict_sizes, target);\n}\n\n/* Get the first non-empty dict index in the kvstore. Returns -1 if kvstore is empty. */\nint kvstoreGetFirstNonEmptyDictIndex(kvstore *kvs) {\n    if (kvstoreSize(kvs) == 0)\n        return -1;\n    if (kvs->num_dicts == 1)\n        return 0;\n    return fwTreeFindFirstNonEmpty(kvs->dict_sizes);\n}\n\n/* Returns next non-empty dict index strictly after given one, or -1 if provided didx is the last one. 
*/\nint kvstoreGetNextNonEmptyDictIndex(kvstore *kvs, int didx) {\n    if (kvs->num_dicts == 1) {\n        assert(didx == 0);\n        return -1;\n    }\n    return fwTreeFindNextNonEmpty(kvs->dict_sizes, didx);\n}\n\nint kvstoreNumNonEmptyDicts(kvstore *kvs) {\n    return kvs->non_empty_dicts;\n}\n\nint kvstoreNumAllocatedDicts(kvstore *kvs) {\n    return kvs->allocated_dicts;\n}\n\nint kvstoreNumDicts(kvstore *kvs) {\n    return kvs->num_dicts;\n}\n\n/* Move dict from one kvstore to another. */\nvoid kvstoreMoveDict(kvstore *kvs, kvstore *dst, int didx) {\n    assert(kvs->num_dicts > didx);\n    assert(kvs->num_dicts == dst->num_dicts);\n    assert(dst->dicts[didx] == NULL);\n\n    dict *d = kvs->dicts[didx];\n    if (d == NULL) return;\n\n    /* Adjust source kvstore */\n    kvs->allocated_dicts -= 1;\n    cumulativeKeyCountAdd(kvs, didx, -((long long)dictSize(d)));\n    kvstoreDictBucketChanged(d, -((long long) dictBuckets(d)));\n    /* If rehashing, stop it. */\n    if (dictIsRehashing(d))\n        kvstoreDictRehashingCompleted(d);\n    /* Clear dict from source kvstore and create a new one if needed */\n    kvs->dicts[didx] = NULL;\n    if (!(kvs->flags & (KVSTORE_ALLOCATE_DICTS_ON_DEMAND | KVSTORE_FREE_EMPTY_DICTS)))\n        createDictIfNeeded(kvs, didx);\n\n    /* Move dict to destination kvstore */\n    dst->dicts[didx] = d;\n    dst->dicts[didx]->type = &dst->dtype;\n    dst->allocated_dicts += 1;\n    cumulativeKeyCountAdd(dst, didx, dictSize(d));\n    kvstoreDictBucketChanged(d, dictBuckets(d));\n    if (dictIsRehashing(dst->dicts[didx]))\n        kvstoreDictRehashingStarted(dst->dicts[didx]);\n}\n\n/* Returns kvstore iterator that can be used to iterate through sub-dictionaries.\n *\n * The caller should reset kvs_it with kvstoreIteratorReset. 
*/\nvoid kvstoreIteratorInit(kvstoreIterator *kvs_it, kvstore *kvs) {\n    kvs_it->kvs = kvs;\n    kvs_it->didx = -1;\n    kvs_it->next_didx = kvstoreGetFirstNonEmptyDictIndex(kvs_it->kvs); /* Finds first non-empty dict index. */\n    dictInitSafeIterator(&kvs_it->di, NULL);\n}\n\n/* Reset the kvs_it initialized by kvstoreIteratorInit. */\nvoid kvstoreIteratorReset(kvstoreIterator *kvs_it) {\n    dictIterator *iter = &kvs_it->di;\n    dictResetIterator(iter);\n    /* In the safe iterator context, we may delete entries. */\n    if (kvs_it->didx != -1)\n        freeDictIfNeeded(kvs_it->kvs, kvs_it->didx);\n}\n\n/* Returns next dictionary from the iterator, or NULL if iteration is complete.\n *\n * - Takes care to reset the iter of the previous dict before moving to the next dict.\n */\ndict *kvstoreIteratorNextDict(kvstoreIterator *kvs_it) {\n    if (kvs_it->next_didx == -1)\n        return NULL;\n\n    /* The dict may be deleted during the iteration process, so we need to check for NULL. */\n    if (kvs_it->didx != -1 && kvstoreGetDict(kvs_it->kvs, kvs_it->didx)) {\n        /* Before we move to the next dict, reset the iter of the previous dict. */\n        dictIterator *iter = &kvs_it->di;\n        dictResetIterator(iter);\n        /* In the safe iterator context, we may delete entries. */\n        freeDictIfNeeded(kvs_it->kvs, kvs_it->didx);\n    }\n\n    kvs_it->didx = kvs_it->next_didx;\n    kvs_it->next_didx = kvstoreGetNextNonEmptyDictIndex(kvs_it->kvs, kvs_it->didx);\n    return kvs_it->kvs->dicts[kvs_it->didx];\n}\n\nint kvstoreIteratorGetCurrentDictIndex(kvstoreIterator *kvs_it) {\n    assert(kvs_it->didx >= 0 && kvs_it->didx < kvs_it->kvs->num_dicts);\n    return kvs_it->didx;\n}\n\n/* Returns next entry. */\ndictEntry *kvstoreIteratorNext(kvstoreIterator *kvs_it) {\n    dictEntry *de = kvs_it->di.d ? dictNext(&kvs_it->di) : NULL;\n    if (!de) { /* No current dict or reached the end of the dictionary. 
*/\n\n        /* Before we move to the next dict, function kvstoreIteratorNextDict()\n         * resets the iter of the previous dict & calls freeDictIfNeeded(). */\n        dict *d = kvstoreIteratorNextDict(kvs_it);\n\n        if (!d)\n            return NULL;\n\n        dictInitSafeIterator(&kvs_it->di, d);\n        de = dictNext(&kvs_it->di);\n    }\n    return de;\n}\n\n/* This method traverses through kvstore dictionaries and triggers a resize.\n * It first tries to shrink if needed, and if shrinking isn't needed, it tries to expand. */\nvoid kvstoreTryResizeDicts(kvstore *kvs, int limit) {\n    if (limit > kvs->num_dicts)\n        limit = kvs->num_dicts;\n\n    for (int i = 0; i < limit; i++) {\n        int didx = kvs->resize_cursor;\n        dict *d = kvstoreGetDict(kvs, didx);\n        if (d && dictShrinkIfNeeded(d) == DICT_ERR) {\n            dictExpandIfNeeded(d);\n        }\n        kvs->resize_cursor = (didx + 1) % kvs->num_dicts;\n    }\n}\n\n/* Our hash table implementation performs rehashing incrementally while\n * we write/read from the hash table. Still, if the server is idle, the hash\n * table will use two tables for a long time. So we try to use threshold_us\n * of CPU time at every call of this function to perform some rehashing.\n *\n * The function returns the amount of microsecs spent if some rehashing was\n * performed, otherwise 0 is returned. */\nuint64_t kvstoreIncrementallyRehash(kvstore *kvs, uint64_t threshold_us) {\n    if (listLength(kvs->rehashing) == 0)\n        return 0;\n\n    /* Our goal is to rehash as many dictionaries as we can before reaching threshold_us;\n     * after each dictionary completes rehashing, it removes itself from the list. 
*/\n    listNode *node;\n    monotime timer;\n    uint64_t elapsed_us = 0;\n    elapsedStart(&timer);\n    while ((node = listFirst(kvs->rehashing))) {\n        dictRehashMicroseconds(listNodeValue(node), threshold_us - elapsed_us);\n\n        elapsed_us = elapsedUs(timer);\n        if (elapsed_us >= threshold_us) {\n            break;  /* Reached the time limit. */\n        }\n    }\n    return elapsed_us;\n}\n\nsize_t kvstoreOverheadHashtableLut(kvstore *kvs) {\n    return kvs->bucket_count * sizeof(dictEntry *);\n}\n\nsize_t kvstoreOverheadHashtableRehashing(kvstore *kvs) {\n    return kvs->overhead_hashtable_rehashing * sizeof(dictEntry *);\n}\n\nunsigned long kvstoreDictRehashingCount(kvstore *kvs) {\n    return listLength(kvs->rehashing);\n}\n\nunsigned long kvstoreDictSize(kvstore *kvs, int didx)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return 0;\n    return dictSize(d);\n}\n\nvoid kvstoreInitDictIterator(kvstoreDictIterator *kvs_di, kvstore *kvs, int didx)\n{\n    kvs_di->kvs = kvs;\n    kvs_di->didx = didx;\n    dictInitIterator(&kvs_di->di, kvstoreGetDict(kvs, didx));\n}\n\nvoid kvstoreInitDictSafeIterator(kvstoreDictIterator *kvs_di, kvstore *kvs, int didx)\n{\n    kvs_di->kvs = kvs;\n    kvs_di->didx = didx;\n    dictInitSafeIterator(&kvs_di->di, kvstoreGetDict(kvs, didx));\n}\n\n/* Reset the kvs_di initialized by kvstoreInitDictIterator and kvstoreInitDictSafeIterator. */\nvoid kvstoreResetDictIterator(kvstoreDictIterator *kvs_di)\n{\n    /* The dict may be deleted during the iteration process, so we need to check for NULL. */\n    if (kvstoreGetDict(kvs_di->kvs, kvs_di->didx)) {\n        dictResetIterator(&kvs_di->di);\n        /* In the safe iterator context, we may delete entries. */\n        freeDictIfNeeded(kvs_di->kvs, kvs_di->didx);\n    }\n}\n\n/* Get the next element of the dict through kvstoreDictIterator and dictNext. 
*/\ndictEntry *kvstoreDictIteratorNext(kvstoreDictIterator *kvs_di)\n{\n    /* The dict may be deleted during the iteration process, so here need to check for NULL. */\n    dict *d = kvstoreGetDict(kvs_di->kvs, kvs_di->didx);\n    if (!d) return NULL;\n\n    return dictNext(&kvs_di->di);\n}\n\ndictEntry *kvstoreDictGetRandomKey(kvstore *kvs, int didx)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return NULL;\n    return dictGetRandomKey(d);\n}\n\ndictEntry *kvstoreDictGetFairRandomKey(kvstore *kvs, int didx)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return NULL;\n    return dictGetFairRandomKey(d);\n}\n\nunsigned int kvstoreDictGetSomeKeys(kvstore *kvs, int didx, dictEntry **des, unsigned int count)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return 0;\n    return dictGetSomeKeys(d, des, count);\n}\n\nint kvstoreDictExpand(kvstore *kvs, int didx, unsigned long size)\n{\n    dict *d = createDictIfNeeded(kvs, didx);\n    if (!d)\n        return DICT_ERR;\n    return dictExpand(d, size);\n}\n\nunsigned long kvstoreDictScanDefrag(kvstore *kvs, int didx, unsigned long v, dictScanFunction *fn, dictDefragFunctions *defragfns, void *privdata)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return 0;\n    return dictScanDefrag(d, v, fn, defragfns, privdata);\n}\n\n/* Unlike kvstoreDictScanDefrag(), this method doesn't defrag the data(keys and values)\n * within dict, it only reallocates the memory used by the dict structure itself using \n * the provided allocation function. This feature was added for the active defrag feature.\n *\n * With 16k dictionaries for cluster mode with 1 shard, this operation may require substantial time\n * to execute.  A \"cursor\" is used to perform the operation iteratively.  When first called, a\n * cursor value of 0 should be provided.  The return value is an updated cursor which should be\n * provided on the next iteration.  
The operation is complete when 0 is returned.\n *\n * The 'defragfn' callback is called with a reference to the dict that callback can reallocate. */\nunsigned long kvstoreDictLUTDefrag(kvstore *kvs, unsigned long cursor, kvstoreDictLUTDefragFunction *defragfn) {\n    for (int didx = cursor; didx < kvs->num_dicts; didx++) {\n        dict **d = kvstoreGetDictRef(kvs, didx), *newd;\n        if (!*d)\n            continue;\n        if ((newd = defragfn(*d))) {\n            *d = newd;\n\n            /* After defragmenting the dict, update its corresponding\n             * rehashing node in the kvstore's rehashing list. */\n            kvstoreDictMetaBase *metadata = (kvstoreDictMetaBase *)dictMetadata(*d);\n            if (metadata->rehashing_node)\n                metadata->rehashing_node->value = *d;\n        }\n        return (didx + 1);\n    }\n    return 0;\n}\n\nvoid *kvstoreDictFetchValue(kvstore *kvs, int didx, const void *key)\n{\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return NULL;\n    assert(d->type->no_value == 0); \n    return dictFetchValue(d, key);\n}\n\ndictEntry *kvstoreDictFind(kvstore *kvs, int didx, void *key) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return NULL;\n    return dictFind(d, key);\n}\n\n/* Find a link to a key in the specified kvstore. If not found return NULL.\n *\n * This function is a wrapper around dictFindLink(), used to locate a key in a dict\n * from a kvstore. \n *\n * The caller may provide a bucket pointer to receive the reference to the bucket \n * where the key is stored or need to be added.\n *\n * Returns:\n *   A reference to the dictEntry if found, otherwise NULL.\n *   \n * Important: \n * After calling kvstoreDictFindLink(), any necessary updates based on returned \n * link or bucket must be made immediately after, commonly by kvstoreDictSetAtLink() \n * without any operations in between that might modify the dict. Otherwise, \n * the link or bucket may become invalid. 
Example usage:\n *\n *      link = kvstoreDictFindLink(kvs, didx, key, &bucket);\n *      ... Do something, but don't modify kvs->dicts[didx] ...\n *      if (link)\n *          kvstoreDictSetAtLink(kvs, didx, kv, &link, 0);   // Update existing entry\n *      else\n *          kvstoreDictSetAtLink(kvs, didx, kv, &bucket, 1); // Insert new entry\n */\ndictEntryLink kvstoreDictFindLink(kvstore *kvs, int didx, void *key, dictEntryLink *bucket) {\n    if (bucket) *bucket = NULL;    \n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d) return NULL;\n    return dictFindLink(d, key, bucket);\n}\n\n/* Set a key (or key-value) in the specified kvstore. \n *\n * This function inserts a new key or updates an existing one, depending on \n * the `newItem` flag.\n *\n * Parameters:\n * link:      - When `newItem` is set, `link` points to the bucket of the key.\n *            - When `newItem` is not set, `link` points to the link of the key.\n *            - If link is NULL, dictFindLink() will be called to locate the link.\n *          \n * newItem: - If set, add a new key with a new dictEntry.\n *          - If not set, update the key of an existing dictEntry.\n */\nvoid kvstoreDictSetAtLink(kvstore *kvs, int didx, void *kv, dictEntryLink *link, int newItem) {\n    dict *d;\n    if (newItem) {\n        d = createDictIfNeeded(kvs, didx);\n        dictSetKeyAtLink(d, kv, link, newItem);\n        cumulativeKeyCountAdd(kvs, didx, 1); /* must be called only after updating dict */\n    } else {\n        d = kvstoreGetDict(kvs, didx);\n        dictSetKeyAtLink(d, kv, link, newItem);\n    }\n}\n\ndictEntry *kvstoreDictAddRaw(kvstore *kvs, int didx, void *key, dictEntry **existing) {\n    dict *d = createDictIfNeeded(kvs, didx);\n    dictEntry *ret = dictAddRaw(d, key, existing);\n    if (ret)\n        cumulativeKeyCountAdd(kvs, didx, 1);\n    return ret;\n}\n\nvoid kvstoreDictSetKey(kvstore *kvs, int didx, dictEntry* de, void *key) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    
dictSetKey(d, de, key);\n}\n\nvoid kvstoreDictSetVal(kvstore *kvs, int didx, dictEntry *de, void *val) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    assert(d->type->no_value == 0);\n    dictSetVal(d, de, val);\n}\n\ndictEntryLink kvstoreDictTwoPhaseUnlinkFind(kvstore *kvs, int didx, const void *key, int *table_index) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return NULL;\n    return dictTwoPhaseUnlinkFind(d, key, table_index);\n}\n\nvoid kvstoreDictTwoPhaseUnlinkFree(kvstore *kvs, int didx, dictEntryLink link, int table_index) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    dictTwoPhaseUnlinkFree(d, link, table_index);\n    cumulativeKeyCountAdd(kvs, didx, -1);\n    freeDictIfNeeded(kvs, didx);\n}\n\nint kvstoreDictDelete(kvstore *kvs, int didx, const void *key) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d)\n        return DICT_ERR;\n    int ret = dictDelete(d, key);\n    if (ret == DICT_OK) {\n        cumulativeKeyCountAdd(kvs, didx, -1);\n        freeDictIfNeeded(kvs, didx);\n    }\n    return ret;\n}\n\nvoid *kvstoreGetDictMeta(kvstore *kvs, int didx, int createIfNeeded) {\n    dict *d = kvstoreGetDict(kvs, didx);\n    if (!d) {\n        if (!createIfNeeded) return NULL;\n        d = createDictIfNeeded(kvs, didx);\n    }\n    return dictMetadata(d);\n}\n\nvoid *kvstoreGetMetadata(kvstore *kvs) {\n    if (!kvs->type->kvstoreMetadataBytes)\n        return NULL;\n    return &kvs->metadata;\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n#include \"testhelp.h\"\n\n#define TEST(name) printf(\"test — %s\\n\", name);\n\nuint64_t hashTestCallback(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\nvoid freeTestCallback(dict *d, void *val) {\n    UNUSED(d);\n    zfree(val);\n}\n\nvoid *defragAllocTest(void *ptr) {\n    size_t size = zmalloc_usable_size(ptr);\n    void *newptr = zmalloc(size);\n    memcpy(newptr, ptr, size);\n    zfree(ptr);\n    return 
newptr;\n}\n\ndict *defragLUTTestCallback(dict *d) {\n    /* handle the dict struct */\n    d = defragAllocTest(d);\n    /* handle the first hash table */\n    d->ht_table[0] = defragAllocTest(d->ht_table[0]);\n    /* handle the second hash table */\n    if (d->ht_table[1])\n        d->ht_table[1] = defragAllocTest(d->ht_table[1]);\n    return d; \n}\n\ndictType KvstoreDictTestType = {\n    hashTestCallback,\n    NULL,\n    NULL,\n    NULL,\n    freeTestCallback,\n    NULL,\n    NULL\n};\n\nkvstoreType KvstoreTestType = {\n    NULL, /* kvstore metadata size */\n    NULL, /* dict metadata size */\n    NULL, /* can free dict */\n    NULL, /* on kvstore empty */\n    NULL, /* on dict empty */\n};\n\nchar *stringFromInt(int value) {\n    char buf[32];\n    int len;\n    char *s;\n\n    len = snprintf(buf, sizeof(buf), \"%d\",value);\n    s = zmalloc(len+1);\n    memcpy(s, buf, len);\n    s[len] = '\\0';\n    return s;\n}\n\n/* ./redis-server test kvstore */\nint kvstoreTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    int i;\n    void *key;\n    dictEntry *de;\n    kvstoreIterator kvs_it;\n    kvstoreDictIterator kvs_di;\n\n    /* Test also dictType with no_value=1 */\n    dictType KvstoreDictNovalTestType = KvstoreDictTestType;\n    KvstoreDictNovalTestType.no_value = 1;\n\n    int didx = 0;\n    int curr_slot = 0;\n    kvstore *kvs1 = kvstoreCreate(&KvstoreTestType, &KvstoreDictTestType, 0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n    kvstore *kvs2 = kvstoreCreate(&KvstoreTestType, &KvstoreDictNovalTestType, 0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND | KVSTORE_FREE_EMPTY_DICTS);\n\n    TEST(\"Add 16 keys\") {\n        for (i = 0; i < 16; i++) {\n            de = kvstoreDictAddRaw(kvs1, didx, stringFromInt(i), NULL);\n            assert(de != NULL);\n            de = kvstoreDictAddRaw(kvs2, didx, stringFromInt(i), NULL);\n            assert(de != NULL);\n        }\n        assert(kvstoreDictSize(kvs1, didx) == 16);\n        
assert(kvstoreSize(kvs1) == 16);\n        assert(kvstoreDictSize(kvs2, didx) == 16);\n        assert(kvstoreSize(kvs2) == 16);\n    }\n\n    TEST(\"kvstoreIterator creating and releasing without kvstoreIteratorNextDict()\") {\n        kvstore *kvs = kvstoreCreate(&KvstoreTestType, &KvstoreDictNovalTestType, 0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND | KVSTORE_FREE_EMPTY_DICTS);\n        kvstoreIterator kvs_iter;\n        kvstoreIteratorInit(&kvs_iter, kvs);\n        kvstoreIteratorReset(&kvs_iter);\n        kvstoreRelease(kvs);\n    }\n\n    TEST(\"kvstoreIterator case 1: removing all keys does not delete the empty dict\") {\n        kvstoreIteratorInit(&kvs_it, kvs1);\n        while((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n            curr_slot = kvstoreIteratorGetCurrentDictIndex(&kvs_it);\n            key = dictGetKey(de);\n            assert(kvstoreDictDelete(kvs1, curr_slot, key) == DICT_OK);\n        }\n        kvstoreIteratorReset(&kvs_it);\n\n        dict *d = kvstoreGetDict(kvs1, didx);\n        assert(d != NULL);\n        assert(kvstoreDictSize(kvs1, didx) == 0);\n        assert(kvstoreSize(kvs1) == 0);\n    }\n\n    TEST(\"kvstoreIterator case 2: removing all keys will delete the empty dict\") {\n        kvstoreIteratorInit(&kvs_it, kvs2);\n        while((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n            curr_slot = kvstoreIteratorGetCurrentDictIndex(&kvs_it);\n            key = dictGetKey(de);\n            assert(kvstoreDictDelete(kvs2, curr_slot, key) == DICT_OK);\n        }\n        kvstoreIteratorReset(&kvs_it);\n\n        /* Make sure the dict was removed from the rehashing list. 
*/\n        while (kvstoreIncrementallyRehash(kvs2, 1000)) {}\n\n        dict *d = kvstoreGetDict(kvs2, didx);\n        assert(d == NULL);\n        assert(kvstoreDictSize(kvs2, didx) == 0);\n        assert(kvstoreSize(kvs2) == 0);\n    }\n\n    TEST(\"Add 16 keys again\") {\n        for (i = 0; i < 16; i++) {\n            de = kvstoreDictAddRaw(kvs1, didx, stringFromInt(i), NULL);\n            assert(de != NULL);\n            de = kvstoreDictAddRaw(kvs2, didx, stringFromInt(i), NULL);\n            assert(de != NULL);\n        }\n        assert(kvstoreDictSize(kvs1, didx) == 16);\n        assert(kvstoreSize(kvs1) == 16);\n        assert(kvstoreDictSize(kvs2, didx) == 16);\n        assert(kvstoreSize(kvs2) == 16);\n    }\n\n    TEST(\"kvstoreDictIterator case 1: removing all keys does not delete the empty dict\") {\n        kvstoreInitDictSafeIterator(&kvs_di, kvs1, didx);\n        while((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n            key = dictGetKey(de);\n            assert(kvstoreDictDelete(kvs1, didx, key) == DICT_OK);\n        }\n        kvstoreResetDictIterator(&kvs_di);\n\n        dict *d = kvstoreGetDict(kvs1, didx);\n        assert(d != NULL);\n        assert(kvstoreDictSize(kvs1, didx) == 0);\n        assert(kvstoreSize(kvs1) == 0);\n    }\n\n    TEST(\"kvstoreDictIterator case 2: removing all keys will delete the empty dict\") {\n        kvstoreInitDictSafeIterator(&kvs_di, kvs2, didx);\n        while((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n            key = dictGetKey(de);\n            assert(kvstoreDictDelete(kvs2, didx, key) == DICT_OK);\n        }\n        kvstoreResetDictIterator(&kvs_di);\n\n        dict *d = kvstoreGetDict(kvs2, didx);\n        assert(d == NULL);\n        assert(kvstoreDictSize(kvs2, didx) == 0);\n        assert(kvstoreSize(kvs2) == 0);\n    }\n\n    TEST(\"Verify that a rehashing dict's node in the rehashing list is correctly updated after defragmentation\") {\n        unsigned long cursor = 0;\n        
kvstore *kvs = kvstoreCreate(&KvstoreTestType, &KvstoreDictTestType, 0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n        for (i = 0; i < 256; i++) {\n            de = kvstoreDictAddRaw(kvs, 0, stringFromInt(i), NULL);\n            if (listLength(kvs->rehashing)) break;\n        }\n        assert(listLength(kvs->rehashing));\n        while ((cursor = kvstoreDictLUTDefrag(kvs, cursor, defragLUTTestCallback)) != 0) {}\n        while (kvstoreIncrementallyRehash(kvs, 1000)) {}\n        kvstoreRelease(kvs);\n    }\n\n    TEST(\"Verify non-empty dict count is correctly updated\") {\n        kvstore *kvs = kvstoreCreate(&KvstoreTestType, &KvstoreDictTestType, 2, \n                            KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n        for (int idx = 0; idx < 4; idx++) {\n            for (i = 0; i < 16; i++) {\n                de = kvstoreDictAddRaw(kvs, idx, stringFromInt(i), NULL);\n                assert(de != NULL);\n                /* When the first element is inserted, the number of non-empty dictionaries is increased by 1. */\n                if (i == 0) assert(kvstoreNumNonEmptyDicts(kvs) == idx + 1);\n            }\n        }\n\n        /* Step by step, clear all dictionaries and ensure non-empty dict count is updated */\n        for (int idx = 0; idx < 4; idx++) {\n            kvstoreInitDictSafeIterator(&kvs_di, kvs, idx);\n            while((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n                key = dictGetKey(de);\n                assert(kvstoreDictDelete(kvs, idx, key) == DICT_OK);\n                /* When the dictionary is emptied, the number of non-empty dictionaries is reduced by 1. */\n                if (kvstoreDictSize(kvs, idx) == 0) assert(kvstoreNumNonEmptyDicts(kvs) == 3 - idx);\n            }\n            kvstoreResetDictIterator(&kvs_di);\n        }\n        kvstoreRelease(kvs);\n    }\n\n    kvstoreRelease(kvs1);\n    kvstoreRelease(kvs2);\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/kvstore.h",
"content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * KVSTORE\n * -------\n * Index-based KV store implementation. This file implements a KV store comprised\n * of an array of dicts (see dict.c). The purpose of this KV store is to have easy\n * access to all keys that belong in the same dict (i.e. are in the same dict-index).\n *\n * For example, when Redis is running in cluster mode, we use kvstore to save\n * all keys that map to the same hash-slot in a separate dict within the kvstore\n * struct.\n * This enables us to easily access all keys that map to a specific hash-slot.\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#ifndef DICTARRAY_H_\n#define DICTARRAY_H_\n\n#include \"dict.h\"\n#include \"adlist.h\"\n\ntypedef struct _kvstore kvstore;\n\n/* Structure for kvstore iterator that allows iterating across multiple dicts. */\ntypedef struct _kvstoreIterator {\n    kvstore *kvs;\n    long long didx;\n    long long next_didx;\n    dictIterator di;\n} kvstoreIterator;\n\n/* Structure for kvstore dict iterator that allows iterating the corresponding dict. */\ntypedef struct _kvstoreDictIterator {\n    kvstore *kvs;\n    long long didx;\n    dictIterator di;\n} kvstoreDictIterator;\n\ntypedef int (kvstoreScanShouldSkipDict)(dict *d, int didx);\ntypedef int (kvstoreExpandShouldSkipDictIndex)(int didx);\ntypedef int (kvstoreRandomShouldSkipDictIndex)(int didx);\n\n/* kvstoreType: Callback structure for kvstore-specific operations.\n * Similar to dictType for dict, this allows kvstore to be a generic data structure\n * without hardcoding dependencies on specific subsystems. 
*/\ntypedef struct kvstoreType {\n    /* Allow kvstore to carry extra caller-defined metadata. The\n     * extra memory is initialized to 0 when a kvstore is allocated. */\n    size_t (*kvstoreMetadataBytes)(kvstore *kvs);\n\n    /* Allow per-slot dicts to carry extra caller-defined metadata. The\n     * extra memory is initialized to 0 when each dict is allocated. */\n    size_t (*dictMetadataBytes)(dict *d);\n\n    /* Check if a dict at a given index can be freed. Used by kvstore when\n     * KVSTORE_FREE_EMPTY_DICTS is set. Return 1 if it can be freed, 0 otherwise.\n     * Parameters: kvstore pointer, dict index */\n    int (*canFreeDict)(kvstore *kvs, int didx);\n\n    /* Called when kvstore becomes empty. */\n    void (*onKvstoreEmpty)(kvstore *kvs);\n\n    /* Called when a per-slot dict becomes empty. Parameters: kvstore pointer,\n     * dict index. */\n    void (*onDictEmpty)(kvstore *kvs, int didx);\n} kvstoreType;\n\n/* Basic metadata allocated per dict.\n * If additional metadata is needed, embed this structure as the first member\n * in a new, larger structure. 
*/\ntypedef struct {\n    listNode *rehashing_node;   /* list node in rehashing list */\n} kvstoreDictMetaBase;\n\n#define KVSTORE_ALLOCATE_DICTS_ON_DEMAND (1<<0)\n#define KVSTORE_FREE_EMPTY_DICTS (1<<1)\nkvstore *kvstoreCreate(kvstoreType *type, dictType *dtype, int num_dicts_bits, int flags);\nvoid kvstoreEmpty(kvstore *kvs, void(callback)(dict*));\nvoid kvstoreRelease(kvstore *kvs);\nunsigned long long kvstoreSize(kvstore *kvs);\nunsigned long kvstoreBuckets(kvstore *kvs);\nsize_t kvstoreMemUsage(kvstore *kvs);\nunsigned long long kvstoreScan(kvstore *kvs, unsigned long long cursor,\n                               int onlydidx, dictScanFunction *scan_cb,\n                               kvstoreScanShouldSkipDict *skip_cb,\n                               void *privdata);\nint kvstoreExpand(kvstore *kvs, uint64_t newsize, int try_expand, kvstoreExpandShouldSkipDictIndex *skip_cb);\nint kvstoreGetFairRandomDictIndex(kvstore *kvs, kvstoreExpandShouldSkipDictIndex *skip_cb,\n                                  int fair_attempts, int slow_fallback);\nvoid kvstoreGetStats(kvstore *kvs, char *buf, size_t bufsize, int full);\n\nint kvstoreFindDictIndexByKeyIndex(kvstore *kvs, unsigned long target);\nint kvstoreGetFirstNonEmptyDictIndex(kvstore *kvs);\nint kvstoreGetNextNonEmptyDictIndex(kvstore *kvs, int didx);\nint kvstoreNumNonEmptyDicts(kvstore *kvs);\nint kvstoreNumAllocatedDicts(kvstore *kvs);\nint kvstoreNumDicts(kvstore *kvs);\nvoid kvstoreMoveDict(kvstore *kvs, kvstore *dst, int didx);\n\n/* kvstore iterator specific functions */\nvoid kvstoreIteratorInit(kvstoreIterator *kvs_it, kvstore *kvs);\nvoid kvstoreIteratorReset(kvstoreIterator *kvs_it);\ndict *kvstoreIteratorNextDict(kvstoreIterator *kvs_it);\nint kvstoreIteratorGetCurrentDictIndex(kvstoreIterator *kvs_it);\ndictEntry *kvstoreIteratorNext(kvstoreIterator *kvs_it);\n\n/* Rehashing */\nvoid kvstoreTryResizeDicts(kvstore *kvs, int limit);\nuint64_t kvstoreIncrementallyRehash(kvstore *kvs, uint64_t 
threshold_us);\nsize_t kvstoreOverheadHashtableLut(kvstore *kvs);\nsize_t kvstoreOverheadHashtableRehashing(kvstore *kvs);\nunsigned long kvstoreDictRehashingCount(kvstore *kvs);\n\n/* Specific dict access by dict-index */\nunsigned long kvstoreDictSize(kvstore *kvs, int didx);\nvoid kvstoreInitDictIterator(kvstoreDictIterator *kvs_di, kvstore *kvs, int didx);\nvoid kvstoreInitDictSafeIterator(kvstoreDictIterator *kvs_di, kvstore *kvs, int didx);\nvoid kvstoreResetDictIterator(kvstoreDictIterator *kvs_di);\ndictEntry *kvstoreDictIteratorNext(kvstoreDictIterator *kvs_di);\ndictEntry *kvstoreDictGetRandomKey(kvstore *kvs, int didx);\ndictEntry *kvstoreDictGetFairRandomKey(kvstore *kvs, int didx);\nunsigned int kvstoreDictGetSomeKeys(kvstore *kvs, int didx, dictEntry **des, unsigned int count);\nint kvstoreDictExpand(kvstore *kvs, int didx, unsigned long size);\nunsigned long kvstoreDictScanDefrag(kvstore *kvs, int didx, unsigned long v, dictScanFunction *fn, dictDefragFunctions *defragfns, void *privdata);\ntypedef dict *(kvstoreDictLUTDefragFunction)(dict *d);\nunsigned long kvstoreDictLUTDefrag(kvstore *kvs, unsigned long cursor, kvstoreDictLUTDefragFunction *defragfn);\nvoid *kvstoreDictFetchValue(kvstore *kvs, int didx, const void *key);\ndictEntry *kvstoreDictFind(kvstore *kvs, int didx, void *key);\ndictEntry *kvstoreDictAddRaw(kvstore *kvs, int didx, void *key, dictEntry **existing);\ndictEntryLink kvstoreDictTwoPhaseUnlinkFind(kvstore *kvs, int didx, const void *key, int *table_index);\nvoid kvstoreDictTwoPhaseUnlinkFree(kvstore *kvs, int didx, dictEntryLink plink, int table_index);\nint kvstoreDictDelete(kvstore *kvs, int didx, const void *key);\ndict *kvstoreGetDict(kvstore *kvs, int didx);\nvoid kvstoreFreeDictIfNeeded(kvstore *kvs, int didx);\nvoid *kvstoreGetDictMeta(kvstore *kvs, int didx, int createIfNeeded);\nvoid *kvstoreGetMetadata(kvstore *kvs);\n\ndictEntryLink kvstoreDictFindLink(kvstore *kvs, int didx, void *key, dictEntryLink *bucket);\nvoid 
kvstoreDictSetAtLink(kvstore *kvs, int didx, void *kv, dictEntryLink *link, int newItem);\n\n/* dict with distinct key & value (no_value=1) is currently used only by pubsub. */\nvoid kvstoreDictSetKey(kvstore *kvs, int didx, dictEntry* de, void *key);\nvoid kvstoreDictSetVal(kvstore *kvs, int didx, dictEntry *de, void *val);\n\n#ifdef REDIS_TEST\nint kvstoreTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* DICTARRAY_H_ */\n"
  },
  {
    "path": "src/latency.c",
"content": "/* The latency monitor makes it easy to observe the sources of latency\n * in a Redis instance using the LATENCY command. Different latency\n * sources are monitored, like disk I/O, execution of commands, the fork\n * system call, and so forth.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2014-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"hdr_histogram.h\"\n\n/* Dictionary type for latency events. */\nint dictStringKeyCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    UNUSED(cache);\n    return strcmp(key1,key2) == 0;\n}\n\nuint64_t dictStringHash(const void *key) {\n    return dictGenHashFunction(key, strlen(key));\n}\n\nvoid dictVanillaFree(dict *d, void *val);\n\ndictType latencyTimeSeriesDictType = {\n    dictStringHash,             /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictStringKeyCompare,       /* key compare */\n    dictVanillaFree,            /* key destructor */\n    dictVanillaFree,            /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* ------------------------- Utility functions ------------------------------ */\n\n/* Report the amount of AnonHugePages in smap, in bytes. If the return\n * value of the function is non-zero, the process is being targeted by\n * THP support, and is likely to have memory usage / latency issues. */\nint THPGetAnonHugePagesSize(void) {\n    return zmalloc_get_smap_bytes_by_field(\"AnonHugePages:\",-1);\n}\n\n/* ---------------------------- Latency API --------------------------------- */\n\n/* Latency monitor initialization. 
We just need to create the dictionary\n * of time series; each time series is created on demand in order to avoid\n * having a fixed list to maintain. */\nvoid latencyMonitorInit(void) {\n    server.latency_events = dictCreate(&latencyTimeSeriesDictType);\n}\n\n/* Add the specified sample to the specified time series \"event\".\n * This function is usually called via latencyAddSampleIfNeeded(), which\n * is a macro that only adds the sample if the latency is higher than\n * server.latency_monitor_threshold. */\nvoid latencyAddSample(const char *event, mstime_t latency) {\n    struct latencyTimeSeries *ts = dictFetchValue(server.latency_events,event);\n    time_t now = time(NULL);\n    int prev;\n\n    /* Create the time series if it does not exist. */\n    if (ts == NULL) {\n        ts = zmalloc(sizeof(*ts));\n        ts->idx = 0;\n        ts->max = 0;\n        memset(ts->samples,0,sizeof(ts->samples));\n        dictAdd(server.latency_events,zstrdup(event),ts);\n    }\n\n    if (latency > ts->max) ts->max = latency;\n\n    /* If the previous sample is in the same second, we update our old sample\n     * if this latency is greater than the old one, or just return. */\n    prev = (ts->idx + LATENCY_TS_LEN - 1) % LATENCY_TS_LEN;\n    if (ts->samples[prev].time == now) {\n        if (latency > ts->samples[prev].latency)\n            ts->samples[prev].latency = latency;\n        return;\n    }\n\n    ts->samples[ts->idx].time = now;\n    ts->samples[ts->idx].latency = latency;\n\n    ts->idx++;\n    if (ts->idx == LATENCY_TS_LEN) ts->idx = 0;\n}\n\n/* Reset data for the specified event, or all the events' data if 'event' is\n * NULL.\n *\n * Note: this is O(N) even when event_to_reset is not NULL because it makes\n * the code simpler and we have a small fixed max number of events. 
*/\nint latencyResetEvent(char *event_to_reset) {\n    dictIterator di;\n    dictEntry *de;\n    int resets = 0;\n\n    dictInitSafeIterator(&di, server.latency_events);\n    while((de = dictNext(&di)) != NULL) {\n        char *event = dictGetKey(de);\n\n        if (event_to_reset == NULL || strcasecmp(event,event_to_reset) == 0) {\n            dictDelete(server.latency_events, event);\n            resets++;\n        }\n    }\n    dictResetIterator(&di);\n    return resets;\n}\n\n/* ------------------------ Latency reporting (doctor) ---------------------- */\n\n/* Analyze the samples available for a given event and return a structure\n * populated with different metrics: average, MAD, min, max, and so forth.\n * Check the definition of struct latencyStats in latency.h for more info.\n * If the specified event has no elements, the structure is populated with\n * zero values. */\nvoid analyzeLatencyForEvent(char *event, struct latencyStats *ls) {\n    struct latencyTimeSeries *ts = dictFetchValue(server.latency_events,event);\n    int j;\n    uint64_t sum;\n\n    ls->all_time_high = ts ? ts->max : 0;\n    ls->avg = 0;\n    ls->min = 0;\n    ls->max = 0;\n    ls->mad = 0;\n    ls->samples = 0;\n    ls->period = 0;\n    if (!ts) return;\n\n    /* First pass, populate everything but the MAD. */\n    sum = 0;\n    for (j = 0; j < LATENCY_TS_LEN; j++) {\n        if (ts->samples[j].time == 0) continue;\n        ls->samples++;\n        if (ls->samples == 1) {\n            ls->min = ls->max = ts->samples[j].latency;\n        } else {\n            if (ls->min > ts->samples[j].latency)\n                ls->min = ts->samples[j].latency;\n            if (ls->max < ts->samples[j].latency)\n                ls->max = ts->samples[j].latency;\n        }\n        sum += ts->samples[j].latency;\n\n        /* Track the oldest event time in ls->period. 
*/\n        if (ls->period == 0 || ts->samples[j].time < ls->period)\n            ls->period = ts->samples[j].time;\n    }\n\n    /* So far avg is actually the sum of the latencies, and period is\n     * the oldest event time. We need to make the first an average and\n     * the second a range of seconds. */\n    if (ls->samples) {\n        ls->avg = sum / ls->samples;\n        ls->period = time(NULL) - ls->period;\n        if (ls->period == 0) ls->period = 1;\n    }\n\n    /* Second pass, compute MAD. */\n    sum = 0;\n    for (j = 0; j < LATENCY_TS_LEN; j++) {\n        int64_t delta;\n\n        if (ts->samples[j].time == 0) continue;\n        delta = (int64_t)ls->avg - ts->samples[j].latency;\n        if (delta < 0) delta = -delta;\n        sum += delta;\n    }\n    if (ls->samples) ls->mad = sum / ls->samples;\n}\n\n/* Create a human readable report of latency events for this Redis instance. */\nsds createLatencyReport(void) {\n    sds report = sdsempty();\n    int advise_better_vm = 0;       /* Better virtual machines. */\n    int advise_slowlog_enabled = 0; /* Enable slowlog. */\n    int advise_slowlog_tuning = 0;  /* Reconfigure slowlog. */\n    int advise_slowlog_inspect = 0; /* Check your slowlog. */\n    int advise_disk_contention = 0; /* Try to lower disk contention. */\n    int advise_scheduler = 0;       /* Intrinsic latency. */\n    int advise_data_writeback = 0;  /* data=writeback. */\n    int advise_no_appendfsync = 0;  /* don't fsync during rewrites. */\n    int advise_local_disk = 0;      /* Avoid remote disks. */\n    int advise_ssd = 0;             /* Use an SSD drive. */\n    int advise_write_load_info = 0; /* Print info about AOF and write load. */\n    int advise_hz = 0;              /* Use higher HZ. */\n    int advise_large_objects = 0;   /* Deletion of large objects. */\n    int advise_mass_eviction = 0;   /* Avoid mass eviction of keys. */\n    int advise_relax_fsync_policy = 0; /* appendfsync always is slow. 
*/\n    int advise_disable_thp = 0;     /* AnonHugePages detected. */\n    int advices = 0;\n\n    /* Return ASAP if the latency engine is disabled and it looks like it\n     * was never enabled so far. */\n    if (dictSize(server.latency_events) == 0 &&\n        server.latency_monitor_threshold == 0)\n    {\n        report = sdscat(report,\"I'm sorry, Dave, I can't do that. Latency monitoring is disabled in this Redis instance. You may use \\\"CONFIG SET latency-monitor-threshold <milliseconds>.\\\" in order to enable it. If we weren't in a deep space mission I'd suggest to take a look at https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/latency-monitor.\\n\");\n        return report;\n    }\n\n    /* Show all the events stats and add for each event some event-related\n     * comment depending on the values. */\n    dictIterator di;\n    dictEntry *de;\n    int eventnum = 0;\n\n    dictInitSafeIterator(&di, server.latency_events);\n    while((de = dictNext(&di)) != NULL) {\n        char *event = dictGetKey(de);\n        struct latencyTimeSeries *ts = dictGetVal(de);\n        struct latencyStats ls;\n\n        if (ts == NULL) continue;\n        eventnum++;\n        if (eventnum == 1) {\n            report = sdscat(report,\"Dave, I have observed latency spikes in this Redis instance. You don't mind talking about it, do you Dave?\\n\\n\");\n        }\n        analyzeLatencyForEvent(event,&ls);\n\n        report = sdscatprintf(report,\n            \"%d. %s: %d latency spikes (average %lums, mean deviation %lums, period %.2f sec). 
Worst all time event %lums.\",\n            eventnum, event,\n            ls.samples,\n            (unsigned long) ls.avg,\n            (unsigned long) ls.mad,\n            (double) ls.period/ls.samples,\n            (unsigned long) ts->max);\n\n        /* Fork */\n        if (!strcasecmp(event,\"fork\")) {\n            char *fork_quality;\n            if (server.stat_fork_rate < 10) {\n                fork_quality = \"terrible\";\n                advise_better_vm = 1;\n                advices++;\n            } else if (server.stat_fork_rate < 25) {\n                fork_quality = \"poor\";\n                advise_better_vm = 1;\n                advices++;\n            } else if (server.stat_fork_rate < 100) {\n                fork_quality = \"good\";\n            } else {\n                fork_quality = \"excellent\";\n            }\n            report = sdscatprintf(report,\n                \" Fork rate is %.2f GB/sec (%s).\", server.stat_fork_rate,\n                fork_quality);\n        }\n\n        /* Potentially commands. */\n        if (!strcasecmp(event,\"command\")) {\n            if (server.slowlog_log_slower_than < 0 || server.slowlog_max_len == 0) {\n                advise_slowlog_enabled = 1;\n                advices++;\n            } else if (server.slowlog_log_slower_than/1000 >\n                       server.latency_monitor_threshold)\n            {\n                advise_slowlog_tuning = 1;\n                advices++;\n            }\n            advise_slowlog_inspect = 1;\n            advise_large_objects = 1;\n            advices += 2;\n        }\n\n        /* fast-command. */\n        if (!strcasecmp(event,\"fast-command\")) {\n            advise_scheduler = 1;\n            advices++;\n        }\n\n        /* AOF and I/O. 
*/\n        if (!strcasecmp(event,\"aof-write-pending-fsync\")) {\n            advise_local_disk = 1;\n            advise_disk_contention = 1;\n            advise_ssd = 1;\n            advise_data_writeback = 1;\n            advices += 4;\n        }\n\n        if (!strcasecmp(event,\"aof-write-active-child\")) {\n            advise_no_appendfsync = 1;\n            advise_data_writeback = 1;\n            advise_ssd = 1;\n            advices += 3;\n        }\n\n        if (!strcasecmp(event,\"aof-write-alone\")) {\n            advise_local_disk = 1;\n            advise_data_writeback = 1;\n            advise_ssd = 1;\n            advices += 3;\n        }\n\n        if (!strcasecmp(event,\"aof-fsync-always\")) {\n            advise_relax_fsync_policy = 1;\n            advices++;\n        }\n\n        if (!strcasecmp(event,\"aof-fstat\") ||\n            !strcasecmp(event,\"rdb-unlink-temp-file\")) {\n            advise_disk_contention = 1;\n            advise_local_disk = 1;\n            advices += 2;\n        }\n\n        if (!strcasecmp(event,\"aof-rewrite-diff-write\") ||\n            !strcasecmp(event,\"aof-rename\")) {\n            advise_write_load_info = 1;\n            advise_data_writeback = 1;\n            advise_ssd = 1;\n            advise_local_disk = 1;\n            advices += 4;\n        }\n\n        /* Expire cycle. */\n        if (!strcasecmp(event,\"expire-cycle\")) {\n            advise_hz = 1;\n            advise_large_objects = 1;\n            advices += 2;\n        }\n\n        /* Eviction cycle. */\n        if (!strcasecmp(event,\"eviction-del\")) {\n            advise_large_objects = 1;\n            advices++;\n        }\n\n        if (!strcasecmp(event,\"eviction-cycle\")) {\n            advise_mass_eviction = 1;\n            advices++;\n        }\n\n        report = sdscatlen(report,\"\\n\",1);\n    }\n    dictResetIterator(&di);\n\n    /* Add non event based advices. 
*/\n    if (THPGetAnonHugePagesSize() > 0) {\n        advise_disable_thp = 1;\n        advices++;\n    }\n\n    if (eventnum == 0 && advices == 0) {\n        report = sdscat(report,\"Dave, no latency spike was observed during the lifetime of this Redis instance, not in the slightest bit. I honestly think you ought to sit down calmly, take a stress pill, and think things over.\\n\");\n    } else if (eventnum > 0 && advices == 0) {\n        report = sdscat(report,\"\\nWhile there are latency events logged, I'm not able to suggest any easy fix. Please use the Redis community to get some help, providing this report in your help request.\\n\");\n    } else {\n        /* Add all the suggestions accumulated so far. */\n\n        /* Better VM. */\n        report = sdscat(report,\"\\nI have a few pieces of advice for you:\\n\\n\");\n        if (advise_better_vm) {\n            report = sdscat(report,\"- If you are using a virtual machine, consider upgrading it with a faster one using a hypervisor that provides less latency during fork() calls. Xen is known to have poor fork() performance. Even in the context of the same VM provider, certain kinds of instances can execute fork faster than others.\\n\");\n        }\n\n        /* Slow log. */\n        if (advise_slowlog_enabled) {\n            report = sdscatprintf(report,\"- There are latency issues with potentially slow commands you are using. Try to enable the Slow Log Redis feature using the command 'CONFIG SET slowlog-log-slower-than %llu'. If the Slow log is disabled Redis is not able to log slow command execution for you.\\n\", (unsigned long long)server.latency_monitor_threshold*1000);\n        }\n\n        if (advise_slowlog_tuning) {\n            report = sdscatprintf(report,\"- Your current Slow Log configuration only logs events that are slower than your configured latency monitor threshold. 
Please use 'CONFIG SET slowlog-log-slower-than %llu'.\\n\", (unsigned long long)server.latency_monitor_threshold*1000);\n        }\n\n        if (advise_slowlog_inspect) {\n            report = sdscat(report,\"- Check your Slow Log to understand which of the commands you are running are too slow to execute. Please check https://redis.io/commands/slowlog for more information.\\n\");\n        }\n\n        /* Intrinsic latency. */\n        if (advise_scheduler) {\n            report = sdscat(report,\"- The system is slow to execute Redis code paths not containing system calls. This usually means the system does not provide Redis CPU time to run for long periods. You should try to:\\n\"\n            \"  1) Lower the system load.\\n\"\n            \"  2) Use a computer / VM just for Redis if you are running other software on the same system.\\n\"\n            \"  3) Check if you have a \\\"noisy neighbour\\\" problem.\\n\"\n            \"  4) Check the intrinsic latency of your system with 'redis-cli --intrinsic-latency 100'.\\n\"\n            \"  5) Check if the problem is allocator-related by recompiling Redis with MALLOC=libc, if you are using Jemalloc. However, this may create fragmentation problems.\\n\");\n        }\n\n        /* AOF / Disk latency. */\n        if (advise_local_disk) {\n            report = sdscat(report,\"- It is strongly advised to use local disks for persistence, especially if you are using AOF. Remote disks provided by platform-as-a-service providers are known to be slow.\\n\");\n        }\n\n        if (advise_ssd) {\n            report = sdscat(report,\"- SSD disks are able to reduce fsync latency, and total time needed for snapshotting and AOF log rewriting (resulting in smaller memory usage). With extremely high write load SSD disks can be a good option. However Redis should perform reasonably with high load using normal disks. 
Use this advice as a last resort.\\n\");\n        }\n\n        if (advise_data_writeback) {\n            report = sdscat(report,\"- Mounting ext3/4 filesystems with data=writeback can provide a performance boost compared to data=ordered, however this mode of operation provides fewer guarantees, and sometimes it can happen that after a hard crash the AOF file will have a half-written command at the end and will need to be repaired before Redis restarts.\\n\");\n        }\n\n        if (advise_disk_contention) {\n            report = sdscat(report,\"- Try to lower the disk contention. This is often caused by other disk-intensive processes running on the same computer (including other Redis instances).\\n\");\n        }\n\n        if (advise_no_appendfsync) {\n            report = sdscat(report,\"- Assuming that, from the point of view of data safety, this is viable in your environment, you could try to enable the 'no-appendfsync-on-rewrite' option, so that fsync will not be performed while there is a child rewriting the AOF file or producing an RDB file (the moment where there is high disk contention).\\n\");\n        }\n\n        if (advise_relax_fsync_policy && server.aof_fsync == AOF_FSYNC_ALWAYS) {\n            report = sdscat(report,\"- Your fsync policy is set to 'always'. It is very hard to get good performance with such a setup; if possible, try to relax the fsync policy to 'everysec'.\\n\");\n        }\n\n        if (advise_write_load_info) {\n            report = sdscat(report,\"- Latency during the AOF atomic rename operation, or when the final difference is flushed to the AOF file at the end of the rewrite, is sometimes caused by a very high write load, causing the AOF buffer to get very large. 
If possible try to send fewer commands to accomplish the same work, or use Lua scripts to group multiple operations into a single EVALSHA call.\\n\");\n        }\n\n        if (advise_hz && server.hz < 100) {\n            report = sdscat(report,\"- In order to make the Redis key expiration process more incremental, try to set the 'hz' configuration parameter to 100 using 'CONFIG SET hz 100'.\\n\");\n        }\n\n        if (advise_large_objects) {\n            report = sdscat(report,\"- Deleting, expiring or evicting (because of maxmemory policy) large objects is a blocking operation. If you have very large objects that are often deleted, expired, or evicted, try to fragment those objects into multiple smaller objects.\\n\");\n        }\n\n        if (advise_mass_eviction) {\n            report = sdscat(report,\"- Sudden changes to the 'maxmemory' setting via 'CONFIG SET', or allocation of large objects via sets or sorted sets intersections, the STORE option of SORT, or Redis Cluster large key migrations (RESTORE command), may create sudden memory pressure forcing the server to block trying to evict keys.\\n\");\n        }\n\n        if (advise_disable_thp) {\n            report = sdscat(report,\"- I detected a non-zero amount of anonymous huge pages used by your process. This creates very serious latency events in different conditions, especially when Redis is persisting on disk. To disable THP support use the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled', and make sure to also add it into /etc/rc.local so that the command will be executed again after a reboot. 
Note that even if you have already disabled THP, you still need to restart the Redis process to get rid of the huge pages already created.\\n\");\n        }\n    }\n\n    return report;\n}\n\n/* ---------------------- Latency command implementation -------------------- */\n\n/* latencyCommand() helper to produce a map of time buckets,\n * each representing a latency range,\n * between 1 nanosecond and roughly 1 second.\n * Each bucket covers twice the previous bucket's range.\n * Empty buckets are not printed.\n * Everything above 1 sec is considered +Inf.\n * At max there will be log2(1000000000)=30 buckets */\nvoid fillCommandCDF(client *c, struct hdr_histogram* histogram) {\n    addReplyMapLen(c,2);\n    addReplyBulkCString(c,\"calls\");\n    addReplyLongLong(c,(long long) histogram->total_count);\n    addReplyBulkCString(c,\"histogram_usec\");\n    void *replylen = addReplyDeferredLen(c);\n    int samples = 0;\n    struct hdr_iter iter;\n    hdr_iter_log_init(&iter,histogram,1024,2);\n    int64_t previous_count = 0;\n    while (hdr_iter_next(&iter)) {\n        const int64_t micros = iter.highest_equivalent_value / 1000;\n        const int64_t cumulative_count = iter.cumulative_count;\n        if(cumulative_count > previous_count){\n            addReplyLongLong(c,(long long) micros);\n            addReplyLongLong(c,(long long) cumulative_count);\n            samples++;\n        }\n        previous_count = cumulative_count;\n    }\n    setDeferredMapLen(c,replylen,samples);\n}\n\n/* latencyCommand() helper to produce for all commands,\n * a per command cumulative distribution of latencies. 
*/\nvoid latencyAllCommandsFillCDF(client *c, dict *commands, int *command_with_data) {\n    dictIterator di;\n    dictEntry *de;\n    struct redisCommand *cmd;\n\n    dictInitSafeIterator(&di, commands);\n    while((de = dictNext(&di)) != NULL) {\n        cmd = (struct redisCommand *) dictGetVal(de);\n        if (cmd->latency_histogram) {\n            addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n            fillCommandCDF(c, cmd->latency_histogram);\n            (*command_with_data)++;\n        }\n\n        if (cmd->subcommands) {\n            latencyAllCommandsFillCDF(c, cmd->subcommands_dict, command_with_data);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* latencyCommand() helper to produce for a specific command set,\n * a per command cumulative distribution of latencies. */\nvoid latencySpecificCommandsFillCDF(client *c) {\n    void *replylen = addReplyDeferredLen(c);\n    int command_with_data = 0;\n    for (int j = 2; j < c->argc; j++){\n        struct redisCommand *cmd = lookupCommandBySds(c->argv[j]->ptr);\n        /* If the command does not exist we skip the reply */\n        if (cmd == NULL) {\n            continue;\n        }\n\n        if (cmd->latency_histogram) {\n            addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n            fillCommandCDF(c, cmd->latency_histogram);\n            command_with_data++;\n        }\n\n        if (cmd->subcommands_dict) {\n            dictEntry *de;\n            dictIterator di;\n\n            dictInitSafeIterator(&di, cmd->subcommands_dict);\n            while ((de = dictNext(&di)) != NULL) {\n                struct redisCommand *sub = dictGetVal(de);\n                if (sub->latency_histogram) {\n                    addReplyBulkCBuffer(c, sub->fullname, sdslen(sub->fullname));\n                    fillCommandCDF(c, sub->latency_histogram);\n                    command_with_data++;\n                }\n            }\n            dictResetIterator(&di);\n        }\n    
}\n    setDeferredMapLen(c,replylen,command_with_data);\n}\n\n/* latencyCommand() helper to produce a time-delay reply for all the samples\n * in memory for the specified time series. */\nvoid latencyCommandReplyWithSamples(client *c, struct latencyTimeSeries *ts) {\n    void *replylen = addReplyDeferredLen(c);\n    int samples = 0, j;\n\n    for (j = 0; j < LATENCY_TS_LEN; j++) {\n        int i = (ts->idx + j) % LATENCY_TS_LEN;\n\n        if (ts->samples[i].time == 0) continue;\n        addReplyArrayLen(c,2);\n        addReplyLongLong(c,ts->samples[i].time);\n        addReplyLongLong(c,ts->samples[i].latency);\n        samples++;\n    }\n    setDeferredArrayLen(c,replylen,samples);\n}\n\n/* latencyCommand() helper to produce the reply for the LATEST subcommand,\n * listing the last latency sample for every event type registered so far. */\nvoid latencyCommandReplyWithLatestEvents(client *c) {\n    dictIterator di;\n    dictEntry *de;\n\n    addReplyArrayLen(c,dictSize(server.latency_events));\n    dictInitIterator(&di, server.latency_events);\n    while((de = dictNext(&di)) != NULL) {\n        char *event = dictGetKey(de);\n        struct latencyTimeSeries *ts = dictGetVal(de);\n        int last = (ts->idx + LATENCY_TS_LEN - 1) % LATENCY_TS_LEN;\n\n        addReplyArrayLen(c,4);\n        addReplyBulkCString(c,event);\n        addReplyLongLong(c,ts->samples[last].time);\n        addReplyLongLong(c,ts->samples[last].latency);\n        addReplyLongLong(c,ts->max);\n    }\n    dictResetIterator(&di);\n}\n\n#define LATENCY_GRAPH_COLS 80\nsds latencyCommandGenSparkeline(char *event, struct latencyTimeSeries *ts) {\n    int j;\n    struct sequence *seq = createSparklineSequence();\n    sds graph = sdsempty();\n    uint32_t min = 0, max = 0;\n\n    for (j = 0; j < LATENCY_TS_LEN; j++) {\n        int i = (ts->idx + j) % LATENCY_TS_LEN;\n        int elapsed;\n        char buf[64];\n\n        if (ts->samples[i].time == 0) continue;\n        /* Update min and max. 
*/\n        if (seq->length == 0) {\n            min = max = ts->samples[i].latency;\n        } else {\n            if (ts->samples[i].latency > max) max = ts->samples[i].latency;\n            if (ts->samples[i].latency < min) min = ts->samples[i].latency;\n        }\n        /* Use as label the number of seconds / minutes / hours / days\n         * ago the event happened. */\n        elapsed = time(NULL) - ts->samples[i].time;\n        if (elapsed < 60)\n            snprintf(buf,sizeof(buf),\"%ds\",elapsed);\n        else if (elapsed < 3600)\n            snprintf(buf,sizeof(buf),\"%dm\",elapsed/60);\n        else if (elapsed < 3600*24)\n            snprintf(buf,sizeof(buf),\"%dh\",elapsed/3600);\n        else\n            snprintf(buf,sizeof(buf),\"%dd\",elapsed/(3600*24));\n        sparklineSequenceAddSample(seq,ts->samples[i].latency,buf);\n    }\n\n    graph = sdscatprintf(graph,\n        \"%s - high %lu ms, low %lu ms (all time high %lu ms)\\n\", event,\n        (unsigned long) max, (unsigned long) min, (unsigned long) ts->max);\n    for (j = 0; j < LATENCY_GRAPH_COLS; j++)\n        graph = sdscatlen(graph,\"-\",1);\n    graph = sdscatlen(graph,\"\\n\",1);\n    graph = sparklineRender(graph,seq,LATENCY_GRAPH_COLS,4,SPARKLINE_FILL);\n    freeSparklineSequence(seq);\n    return graph;\n}\n\n/* LATENCY command implementations.\n *\n * LATENCY HISTORY: return time-latency samples for the specified event.\n * LATENCY LATEST: return the latest latency for all the events classes.\n * LATENCY DOCTOR: returns a human readable analysis of instance latency.\n * LATENCY GRAPH: provide an ASCII graph of the latency of the specified event.\n * LATENCY RESET: reset data of a specified event or all the data if no event provided.\n * LATENCY HISTOGRAM: return a cumulative distribution of latencies in the format of an histogram for the specified command names.\n */\nvoid latencyCommand(client *c) {\n    struct latencyTimeSeries *ts;\n\n    if 
(!strcasecmp(c->argv[1]->ptr,\"history\") && c->argc == 3) {\n        /* LATENCY HISTORY <event> */\n        ts = dictFetchValue(server.latency_events,c->argv[2]->ptr);\n        if (ts == NULL) {\n            addReplyArrayLen(c,0);\n        } else {\n            latencyCommandReplyWithSamples(c,ts);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"graph\") && c->argc == 3) {\n        /* LATENCY GRAPH <event> */\n        sds graph;\n        dictEntry *de;\n        char *event;\n\n        de = dictFind(server.latency_events,c->argv[2]->ptr);\n        if (de == NULL) goto nodataerr;\n        ts = dictGetVal(de);\n        event = dictGetKey(de);\n\n        graph = latencyCommandGenSparkeline(event,ts);\n        addReplyVerbatim(c,graph,sdslen(graph),\"txt\");\n        sdsfree(graph);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"latest\") && c->argc == 2) {\n        /* LATENCY LATEST */\n        latencyCommandReplyWithLatestEvents(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"doctor\") && c->argc == 2) {\n        /* LATENCY DOCTOR */\n        sds report = createLatencyReport();\n\n        addReplyVerbatim(c,report,sdslen(report),\"txt\");\n        sdsfree(report);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reset\") && c->argc >= 2) {\n        /* LATENCY RESET */\n        if (c->argc == 2) {\n            addReplyLongLong(c,latencyResetEvent(NULL));\n        } else {\n            int j, resets = 0;\n\n            for (j = 2; j < c->argc; j++)\n                resets += latencyResetEvent(c->argv[j]->ptr);\n            addReplyLongLong(c,resets);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"histogram\") && c->argc >= 2) {\n        /* LATENCY HISTOGRAM*/\n        if (c->argc == 2) {\n            int command_with_data = 0;\n            void *replylen = addReplyDeferredLen(c);\n            latencyAllCommandsFillCDF(c, server.commands, &command_with_data);\n            setDeferredMapLen(c, replylen, command_with_data);\n        } else {\n            
latencySpecificCommandsFillCDF(c);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"help\") && c->argc == 2) {\n        const char *help[] = {\n\"DOCTOR\",\n\"    Return a human readable latency analysis report.\",\n\"GRAPH <event>\",\n\"    Return an ASCII latency graph for the <event> class.\",\n\"HISTORY <event>\",\n\"    Return time-latency samples for the <event> class.\",\n\"LATEST\",\n\"    Return the latest latency samples for all events.\",\n\"RESET [<event> ...]\",\n\"    Reset latency data of one or more <event> classes.\",\n\"    (default: reset all data for all event classes)\",\n\"HISTOGRAM [COMMAND ...]\",\n\"    Return a cumulative distribution of latencies in the format of a histogram for the specified command names.\",\n\"    If no commands are specified then all histograms are replied.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n    return;\n\nnodataerr:\n    /* Common error when the user asks for an event we have no latency\n     * information about. */\n    addReplyErrorFormat(c,\n        \"No samples available for event '%s'\", (char*) c->argv[2]->ptr);\n}\n\nvoid durationAddSample(int type, monotime duration) {\n    if (type >= EL_DURATION_TYPE_NUM) {\n        return;\n    }\n    durationStats* ds = &server.duration_stats[type];\n    ds->cnt++;\n    ds->sum += duration;\n    if (duration > ds->max) {\n        ds->max = duration;\n    }\n}\n"
  },
  {
    "path": "src/latency.h",
    "content": "/* latency.h -- latency monitor API header file\n * See latency.c for more information.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2014-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __LATENCY_H\n#define __LATENCY_H\n\n#define LATENCY_TS_LEN 160 /* History length for every monitored event. */\n\n/* Representation of a latency sample: the sampling time and the latency\n * observed in milliseconds. */\nstruct latencySample {\n    int32_t time; /* We don't use time_t to force 4 bytes usage everywhere. */\n    uint32_t latency; /* Latency in milliseconds. */\n};\n\n/* The latency time series for a given event. */\nstruct latencyTimeSeries {\n    int idx; /* Index of the next sample to store. */\n    uint32_t max; /* Max latency observed for this event. */\n    struct latencySample samples[LATENCY_TS_LEN]; /* Latest history. */\n};\n\n/* Latency statistics structure. */\nstruct latencyStats {\n    uint32_t all_time_high; /* Absolute max observed since latest reset. */\n    uint32_t avg;           /* Average of current samples. */\n    uint32_t min;           /* Min of current samples. */\n    uint32_t max;           /* Max of current samples. */\n    uint32_t mad;           /* Mean absolute deviation. */\n    uint32_t samples;       /* Number of non-zero samples. */\n    time_t period;          /* Number of seconds since first event and now. */\n};\n\nvoid latencyMonitorInit(void);\nvoid latencyAddSample(const char *event, mstime_t latency);\n\n/* Latency monitoring macros. */\n\n/* Start monitoring an event. We just set the current time. 
*/\n#define latencyStartMonitor(var) if (server.latency_monitor_threshold) { \\\n    var = mstime(); \\\n} else { \\\n    var = 0; \\\n}\n\n/* End monitoring an event, compute the difference with the current time\n * to check the amount of time elapsed. */\n#define latencyEndMonitor(var) if (server.latency_monitor_threshold) { \\\n    var = mstime() - var; \\\n}\n\n/* Add the sample only if the elapsed time is >= to the configured threshold. */\n#define latencyAddSampleIfNeeded(event,var) \\\n    if (server.latency_monitor_threshold && \\\n        (var) >= server.latency_monitor_threshold) \\\n          latencyAddSample((event),(var));\n\n/* Remove time from a nested event. */\n#define latencyRemoveNestedEvent(event_var,nested_var) \\\n    event_var += nested_var;\n\ntypedef struct durationStats {\n    unsigned long long cnt;\n    unsigned long long sum;\n    unsigned long long max;\n} durationStats;\n\ntypedef enum {\n    EL_DURATION_TYPE_EL = 0, // cumulative time duration metric of the whole eventloop\n    EL_DURATION_TYPE_CMD,    // cumulative time duration metric of executing commands\n    EL_DURATION_TYPE_AOF,    // cumulative time duration metric of flushing AOF in eventloop\n    EL_DURATION_TYPE_CRON,   // cumulative time duration metric of cron (serverCron and beforeSleep, but excluding IO and AOF)\n    EL_DURATION_TYPE_NUM\n} DurationType;\n\nvoid durationAddSample(int type, monotime duration);\n\n#endif /* __LATENCY_H */\n"
  },
  {
    "path": "src/lazyfree.c",
    "content": "#include \"server.h\"\n#include \"bio.h\"\n#include \"atomicvar.h\"\n#include \"functions.h\"\n#include \"cluster.h\"\n#include \"cluster_asm.h\"\n#include \"ebuckets.h\"\n\nstatic redisAtomic size_t lazyfree_objects = 0;\nstatic redisAtomic size_t lazyfreed_objects = 0;\n\n/* Release objects from the lazyfree thread. It's just decrRefCount()\n * updating the count of objects to release. */\nvoid lazyfreeFreeObject(void *args[]) {\n    robj *o = (robj *) args[0];\n    decrRefCount(o);\n    atomicDecr(lazyfree_objects,1);\n    atomicIncr(lazyfreed_objects,1);\n}\n\n/* Populate delta histograms by iterating through keys in the kvstore. To be \n * deduced from the main db histogram later on kvsAsyncFreeDoneCB */\nstatic void populateDeltaHistograms(kvstore *kvs, asmTrimCtx *ctx) {\n    kvstoreIterator kvs_it;\n    kvstoreIteratorInit(&kvs_it, kvs);\n    dictEntry *de;\n\n    while ((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n        kvobj *kv = dictGetKV(de);\n        if ((!kv) || (kv->type >= OBJ_TYPE_BASIC_MAX)) continue;\n\n        /* Update keysizes_hist delta */\n        size_t len = getObjectLength(kv);\n        int sizeBin = (len == 0) ? 0 : log2ceil(len) + 1; /* Only strings can be empty */\n        debugServerAssert(sizeBin < MAX_KEYSIZES_BINS);\n        ctx->delta_keysizes_hist[kv->type][sizeBin]++;\n\n        /* Update allocsizes_hist delta */\n        if (server.memory_tracking_enabled) {\n            size_t alloc_size = kvobjAllocSize(kv);\n            int allocBin = (alloc_size == 0) ? 0 : log2ceil(alloc_size) + 1;\n            debugServerAssert(allocBin < MAX_KEYSIZES_BINS);\n            ctx->delta_allocsizes_hist[kv->type][allocBin]++;\n        }\n    }\n    kvstoreIteratorReset(&kvs_it);\n}\n\n/* Release a database from the lazyfree thread. 
The 'db' pointer is the\n * database which was substituted with a fresh one in the main thread\n * when the database was logically deleted.\n *\n * If args[4] is provided, it's an asmTrimCtx for tracking histogram deltas\n * during ASM background trim. */\nvoid kvsLazyfreeFree(void *args[]) {\n    kvstore *da1 = args[0];\n    kvstore *da2 = args[1];\n    estore *subexpires = args[2];\n    dict *stream_idmp_keys = args[3];\n    asmTrimCtx *ctx = args[4];  /* Optional: ASM trim context */\n\n    /* If an ASM context was provided, populate delta histograms. */\n    if (ctx) populateDeltaHistograms(da1, ctx);\n\n    estoreRelease(subexpires);\n    dictRelease(stream_idmp_keys);\n    size_t numkeys = kvstoreSize(da1);\n    kvstoreRelease(da1);\n    kvstoreRelease(da2);\n    atomicDecr(lazyfree_objects,numkeys);\n    atomicIncr(lazyfreed_objects,numkeys);\n\n#if defined(USE_JEMALLOC)\n    /* Only clear the current thread cache.\n     * Ignore the return value since this will fail if the tcache is disabled. */\n    je_mallctl(\"thread.tcache.flush\", NULL, NULL, NULL, 0);\n\n    jemalloc_purge();\n#endif\n}\n\n/* Release the key tracking table. */\nvoid lazyFreeTrackingTable(void *args[]) {\n    rax *rt = args[0];\n    size_t len = rt->numele;\n    freeTrackingRadixTree(rt);\n    atomicDecr(lazyfree_objects,len);\n    atomicIncr(lazyfreed_objects,len);\n}\n\n/* Release the error stats rax tree. */\nvoid lazyFreeErrors(void *args[]) {\n    rax *errors = args[0];\n    size_t len = errors->numele;\n    raxFreeWithCallback(errors, zfree);\n    atomicDecr(lazyfree_objects,len);\n    atomicIncr(lazyfreed_objects,len);\n}\n\n/* Release the lua_scripts dict. 
*/\nvoid lazyFreeLuaScripts(void *args[]) {\n    dict *lua_scripts = args[0];\n    list *lua_scripts_lru_list = args[1];\n    lua_State *lua = args[2];\n    long long len = dictSize(lua_scripts);\n    freeLuaScriptsSync(lua_scripts, lua_scripts_lru_list, lua);\n    atomicDecr(lazyfree_objects,len);\n    atomicIncr(lazyfreed_objects,len);\n}\n\n/* Release the functions ctx. */\nvoid lazyFreeFunctionsCtx(void *args[]) {\n    functionsLibCtx *functions_lib_ctx = args[0];\n    dict *engs = args[1];\n    size_t len = functionsLibCtxFunctionsLen(functions_lib_ctx);\n    functionsLibCtxFree(functions_lib_ctx);\n    len += dictSize(engs);\n    dictRelease(engs);\n    atomicDecr(lazyfree_objects,len);\n    atomicIncr(lazyfreed_objects,len);\n}\n\n/* Release replication backlog referencing memory. */\nvoid lazyFreeReplicationBacklogRefMem(void *args[]) {\n    list *blocks = args[0];\n    rax *index = args[1];\n    long long len = listLength(blocks);\n    len += raxSize(index);\n    listRelease(blocks);\n    raxFree(index);\n    atomicDecr(lazyfree_objects,len);\n    atomicIncr(lazyfreed_objects,len);\n}\n\n/* Return the number of currently pending objects to free. */\nsize_t lazyfreeGetPendingObjectsCount(void) {\n    size_t aux;\n    atomicGet(lazyfree_objects,aux);\n    return aux;\n}\n\n/* Return the number of objects that have been freed. 
*/\nsize_t lazyfreeGetFreedObjectsCount(void) {\n    size_t aux;\n    atomicGet(lazyfreed_objects,aux);\n    return aux;\n}\n\nvoid lazyfreeResetStats(void) {\n    atomicSet(lazyfreed_objects,0);\n}\n\n/* Return the amount of work needed in order to free an object.\n * The return value is not always the actual number of allocations the\n * object is composed of, but a number proportional to it.\n *\n * For strings the function always returns 1.\n *\n * For aggregated objects represented by hash tables or other data structures\n * the function just returns the number of elements the object is composed of.\n *\n * Objects composed of single allocations are always reported as having a\n * single item even if they are actually logically composed of multiple\n * elements.\n *\n * For lists the function returns the number of elements in the quicklist\n * representing the list. */\nsize_t lazyfreeGetFreeEffort(robj *key, robj *obj, int dbid) {\n    if (obj->type == OBJ_LIST && obj->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklist *ql = obj->ptr;\n        return ql->len;\n    } else if (obj->type == OBJ_SET && obj->encoding == OBJ_ENCODING_HT) {\n        dict *ht = obj->ptr;\n        return dictSize(ht);\n    } else if (obj->type == OBJ_ZSET && obj->encoding == OBJ_ENCODING_SKIPLIST){\n        zset *zs = obj->ptr;\n        return zs->zsl->length;\n    } else if (obj->type == OBJ_HASH && obj->encoding == OBJ_ENCODING_HT) {\n        dict *ht = obj->ptr;\n        return dictSize(ht);\n    } else if (obj->type == OBJ_STREAM) {\n        size_t effort = 0;\n        stream *s = obj->ptr;\n\n        /* Make a best effort estimate to maintain constant runtime. Every macro\n         * node in the Stream is one allocation. */\n        effort += s->rax->numnodes;\n\n        /* Every consumer group is an allocation and so are the entries in its\n         * PEL. We use the size of the first group's PEL as an estimate for all\n         * others. 
*/\n        if (s->cgroups && raxSize(s->cgroups)) {\n            raxIterator ri;\n            streamCG *cg;\n            raxStart(&ri,s->cgroups);\n            raxSeek(&ri,\"^\",NULL,0);\n            /* There must be at least one group so the following should always\n             * work. */\n            serverAssert(raxNext(&ri));\n            cg = ri.data;\n            effort += raxSize(s->cgroups)*(1+raxSize(cg->pel));\n            raxStop(&ri);\n        }\n        return effort;\n    } else if (obj->type == OBJ_MODULE) {\n        size_t effort = moduleGetFreeEffort(key, obj, dbid);\n        /* If the module's free_effort returns 0, we will use asynchronous free\n         * memory by default. */\n        return effort == 0 ? ULONG_MAX : effort;\n    } else {\n        return 1; /* Everything else is a single allocation. */\n    }\n}\n\n/* If there are enough allocations to free the value object asynchronously, it\n * may be put into a lazy free list instead of being freed synchronously. The\n * lazy free list will be reclaimed in a different bio.c thread. If the value is\n * composed of a few allocations, to free in a lazy way is actually just\n * slower... So under a certain limit we just free the object synchronously. */\n#define LAZYFREE_THRESHOLD 64\n\n/* Free an object, if the object is huge enough, free it in async way. */\nvoid freeObjAsync(robj *key, robj *obj, int dbid) {\n    size_t free_effort = lazyfreeGetFreeEffort(key,obj,dbid);\n    /* Note that if the object is shared, to reclaim it now it is not\n     * possible. This rarely happens, however sometimes the implementation\n     * of parts of the Redis core may call incrRefCount() to protect\n     * objects, and then call dbDelete(). 
*/\n    if (free_effort > LAZYFREE_THRESHOLD && obj->refcount == 1) {\n        atomicIncr(lazyfree_objects,1);\n        bioCreateLazyFreeJob(lazyfreeFreeObject,1,obj);\n    } else {\n        decrRefCount(obj);\n    }\n}\n\n/* Duplicate client reply objects that reference database objects to avoid race\n * conditions with bio threads during async flushdb.\n *\n * Since incrRefCount/decrRefCount are not thread-safe, and bio thread may\n * free database objects while main thread/IO threads send client replies, we need to\n * create independent copies of the string objects to avoid concurrent access. */\nstatic void protectClientReplyObjects(void) {\n    /* If there are no clients with pending ref replies, exit ASAP. */\n    if (!listLength(server.clients_with_pending_ref_reply))\n        return;\n\n    /* Pause all IO threads to safely duplicate string objects. */\n    int allpaused = 0;\n    if (server.io_threads_num > 1) {\n        serverAssert(pthread_equal(server.main_thread_id, pthread_self()));\n        allpaused = 1;\n        pauseAllIOThreads();\n    }\n\n    listNode *ln;\n    listIter li;\n    listRewind(server.clients_with_pending_ref_reply, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n\n        /* Process c->buf if it's encoded */\n        if (c->buf_encoded && c->bufpos > 0) {\n            char *ptr = c->buf;\n            while (ptr < c->buf + c->bufpos) {\n                payloadHeader *header = (payloadHeader *)ptr;\n                ptr += sizeof(payloadHeader);\n\n                if (header->payload_type == BULK_STR_REF) {\n                    bulkStrRef *str_ref = (bulkStrRef *)ptr;\n                    if (str_ref->obj != NULL) {\n                        /* Duplicate the string object */\n                        robj *new_obj = dupStringObject(str_ref->obj);\n                        decrRefCount(str_ref->obj);\n                        str_ref->obj = new_obj;\n                    }\n                }\n     
           ptr += header->payload_len;\n            }\n        }\n\n        /* Process reply list */\n        if (c->reply && listLength(c->reply)) {\n            listIter reply_li;\n            listNode *reply_ln;\n            listRewind(c->reply, &reply_li);\n            while ((reply_ln = listNext(&reply_li))) {\n                clientReplyBlock *block = listNodeValue(reply_ln);\n                if (block && block->buf_encoded) {\n                    char *ptr = block->buf;\n                    while (ptr < block->buf + block->used) {\n                        payloadHeader *header = (payloadHeader *)ptr;\n                        ptr += sizeof(payloadHeader);\n\n                        if (header->payload_type == BULK_STR_REF) {\n                            bulkStrRef *str_ref = (bulkStrRef *)ptr;\n                            if (str_ref->obj != NULL) {\n                                /* Duplicate the string object */\n                                robj *new_obj = dupStringObject(str_ref->obj);\n                                decrRefCount(str_ref->obj);\n                                str_ref->obj = new_obj;\n                            }\n                        }\n                        ptr += header->payload_len;\n                    }\n                }\n            }\n        }\n\n        /* Process references in IO deferred objects and remove client from\n         * pending ref list since all refs have been duplicated above. */\n        freeClientIODeferredObjects(c, 0);\n        tryUnlinkClientFromPendingRefReply(c, 1);\n    }\n\n    if (allpaused) resumeAllIOThreads();\n}\n\n/* Empty a Redis DB asynchronously. What the function does actually is to\n * create a new empty set of hash tables and scheduling the old ones for\n * lazy freeing. 
*/\nvoid emptyDbAsync(redisDb *db) {\n    int slot_count_bits = 0;\n    int flags = KVSTORE_ALLOCATE_DICTS_ON_DEMAND;\n    if (server.cluster_enabled) {\n        slot_count_bits = CLUSTER_SLOT_MASK_BITS;\n        flags |= KVSTORE_FREE_EMPTY_DICTS;\n    }\n    kvstore *oldkeys = db->keys, *oldexpires = db->expires;\n    estore *oldsubexpires = db->subexpires;\n    dict *old_stream_idmp_keys = db->stream_idmp_keys;\n    db->keys = kvstoreCreate(&kvstoreExType, &dbDictType, slot_count_bits, flags);\n    db->expires = kvstoreCreate(&kvstoreBaseType, &dbExpiresDictType, slot_count_bits, flags);\n    db->subexpires = estoreCreate(&subexpiresBucketsType, slot_count_bits);\n    db->stream_idmp_keys = dictCreate(&objectKeyNoValueDictType);\n    protectClientReplyObjects(); /* Protect client reply objects before async free. */\n    emptyDbDataAsync(oldkeys, oldexpires, oldsubexpires, old_stream_idmp_keys, NULL);\n}\n\n/* Empty a kvstore data asynchronously. */\nvoid emptyDbDataAsync(kvstore *keys, kvstore *expires, ebuckets hexpires, dict *stream_idmp_keys, asmTrimCtx *ctx) {\n    atomicIncr(lazyfree_objects, kvstoreSize(keys));\n    bioCreateLazyFreeJob(kvsLazyfreeFree, 5, keys, expires, hexpires, stream_idmp_keys, ctx);\n}\n\n/* Free the key tracking table.\n * If the table is huge enough, free it in async way. */\nvoid freeTrackingRadixTreeAsync(rax *tracking) {\n    /* Because this rax has only keys and no values so we use numnodes. */\n    if (tracking->numnodes > LAZYFREE_THRESHOLD) {\n        atomicIncr(lazyfree_objects,tracking->numele);\n        bioCreateLazyFreeJob(lazyFreeTrackingTable,1,tracking);\n    } else {\n        freeTrackingRadixTree(tracking);\n    }\n}\n\n/* Free the error stats rax tree.\n * If the rax tree is huge enough, free it in async way. */\nvoid freeErrorsRadixTreeAsync(rax *errors) {\n    /* Because this rax has only keys and no values so we use numnodes. 
*/\n    if (errors->numnodes > LAZYFREE_THRESHOLD) {\n        atomicIncr(lazyfree_objects,errors->numele);\n        bioCreateLazyFreeJob(lazyFreeErrors,1,errors);\n    } else {\n        raxFreeWithCallback(errors, zfree);\n    }\n}\n\n/* Free lua_scripts dict and lru list, if the dict is huge enough, free them in async way.\n * Close lua interpreter, if there are a lot of lua scripts, close it in async way. */\nvoid freeLuaScriptsAsync(dict *lua_scripts, list *lua_scripts_lru_list, lua_State *lua) {\n    if (dictSize(lua_scripts) > LAZYFREE_THRESHOLD) {\n        atomicIncr(lazyfree_objects,dictSize(lua_scripts));\n        bioCreateLazyFreeJob(lazyFreeLuaScripts,3,lua_scripts,lua_scripts_lru_list,lua);\n    } else {\n        freeLuaScriptsSync(lua_scripts, lua_scripts_lru_list, lua);\n    }\n}\n\n/* Free functions ctx, if the functions ctx contains enough functions, free it in async way. */\nvoid freeFunctionsAsync(functionsLibCtx *functions_lib_ctx, dict *engs) {\n    if (functionsLibCtxFunctionsLen(functions_lib_ctx) > LAZYFREE_THRESHOLD) {\n        atomicIncr(lazyfree_objects,functionsLibCtxFunctionsLen(functions_lib_ctx)+dictSize(engs));\n        bioCreateLazyFreeJob(lazyFreeFunctionsCtx,2,functions_lib_ctx,engs);\n    } else {\n        functionsLibCtxFree(functions_lib_ctx);\n        dictRelease(engs);\n    }\n}\n\n/* Free replication backlog referencing buffer blocks and rax index. */\nvoid freeReplicationBacklogRefMemAsync(list *blocks, rax *index) {\n    if (listLength(blocks) > LAZYFREE_THRESHOLD ||\n        raxSize(index) > LAZYFREE_THRESHOLD)\n    {\n        atomicIncr(lazyfree_objects,listLength(blocks)+raxSize(index));\n        bioCreateLazyFreeJob(lazyFreeReplicationBacklogRefMem,2,blocks,index);\n    } else {\n        listRelease(blocks);\n        raxFree(index);\n    }\n}\n"
  },
  {
    "path": "src/listpack.c",
    "content": "/* Listpack -- A lists of strings serialization format\n *\n * This file implements the specification you can find at:\n *\n *  https://github.com/antirez/listpack\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdint.h>\n#include <limits.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include <string.h>\n#include <stdio.h>\n\n#include \"listpack.h\"\n#include \"listpack_malloc.h\"\n#include \"redisassert.h\"\n#include \"util.h\"\n\n#define LP_HDR_SIZE 6       /* 32 bit total len + 16 bit number of elements. */\n#define LP_HDR_NUMELE_UNKNOWN UINT16_MAX\n#define LP_MAX_INT_ENCODING_LEN 9\n#define LP_MAX_BACKLEN_SIZE 5\n#define LP_ENCODING_INT 0\n#define LP_ENCODING_STRING 1\n\n#define LP_ENCODING_7BIT_UINT 0\n#define LP_ENCODING_7BIT_UINT_MASK 0x80\n#define LP_ENCODING_IS_7BIT_UINT(byte) (((byte)&LP_ENCODING_7BIT_UINT_MASK)==LP_ENCODING_7BIT_UINT)\n#define LP_ENCODING_7BIT_UINT_ENTRY_SIZE 2\n\n#define LP_ENCODING_6BIT_STR 0x80\n#define LP_ENCODING_6BIT_STR_MASK 0xC0\n#define LP_ENCODING_IS_6BIT_STR(byte) (((byte)&LP_ENCODING_6BIT_STR_MASK)==LP_ENCODING_6BIT_STR)\n\n#define LP_ENCODING_13BIT_INT 0xC0\n#define LP_ENCODING_13BIT_INT_MASK 0xE0\n#define LP_ENCODING_IS_13BIT_INT(byte) (((byte)&LP_ENCODING_13BIT_INT_MASK)==LP_ENCODING_13BIT_INT)\n#define LP_ENCODING_13BIT_INT_ENTRY_SIZE 3\n\n#define LP_ENCODING_12BIT_STR 0xE0\n#define LP_ENCODING_12BIT_STR_MASK 0xF0\n#define LP_ENCODING_IS_12BIT_STR(byte) (((byte)&LP_ENCODING_12BIT_STR_MASK)==LP_ENCODING_12BIT_STR)\n\n#define LP_ENCODING_16BIT_INT 0xF1\n#define LP_ENCODING_16BIT_INT_MASK 0xFF\n#define LP_ENCODING_IS_16BIT_INT(byte) (((byte)&LP_ENCODING_16BIT_INT_MASK)==LP_ENCODING_16BIT_INT)\n#define LP_ENCODING_16BIT_INT_ENTRY_SIZE 4\n\n#define 
LP_ENCODING_24BIT_INT 0xF2\n#define LP_ENCODING_24BIT_INT_MASK 0xFF\n#define LP_ENCODING_IS_24BIT_INT(byte) (((byte)&LP_ENCODING_24BIT_INT_MASK)==LP_ENCODING_24BIT_INT)\n#define LP_ENCODING_24BIT_INT_ENTRY_SIZE 5\n\n#define LP_ENCODING_32BIT_INT 0xF3\n#define LP_ENCODING_32BIT_INT_MASK 0xFF\n#define LP_ENCODING_IS_32BIT_INT(byte) (((byte)&LP_ENCODING_32BIT_INT_MASK)==LP_ENCODING_32BIT_INT)\n#define LP_ENCODING_32BIT_INT_ENTRY_SIZE 6\n\n#define LP_ENCODING_64BIT_INT 0xF4\n#define LP_ENCODING_64BIT_INT_MASK 0xFF\n#define LP_ENCODING_IS_64BIT_INT(byte) (((byte)&LP_ENCODING_64BIT_INT_MASK)==LP_ENCODING_64BIT_INT)\n#define LP_ENCODING_64BIT_INT_ENTRY_SIZE 10\n\n#define LP_ENCODING_32BIT_STR 0xF0\n#define LP_ENCODING_32BIT_STR_MASK 0xFF\n#define LP_ENCODING_IS_32BIT_STR(byte) (((byte)&LP_ENCODING_32BIT_STR_MASK)==LP_ENCODING_32BIT_STR)\n\n#define LP_EOF 0xFF\n\n#define LP_ENCODING_6BIT_STR_LEN(p) ((p)[0] & 0x3F)\n#define LP_ENCODING_12BIT_STR_LEN(p) ((((p)[0] & 0xF) << 8) | (p)[1])\n#define LP_ENCODING_32BIT_STR_LEN(p) (((uint32_t)(p)[1]<<0) | \\\n                                      ((uint32_t)(p)[2]<<8) | \\\n                                      ((uint32_t)(p)[3]<<16) | \\\n                                      ((uint32_t)(p)[4]<<24))\n\n#define lpGetTotalBytes(p)           (((uint32_t)(p)[0]<<0) | \\\n                                      ((uint32_t)(p)[1]<<8) | \\\n                                      ((uint32_t)(p)[2]<<16) | \\\n                                      ((uint32_t)(p)[3]<<24))\n\n#define lpGetNumElements(p)          (((uint32_t)(p)[4]<<0) | \\\n                                      ((uint32_t)(p)[5]<<8))\n#define lpSetTotalBytes(p,v) do { \\\n    (p)[0] = (v)&0xff; \\\n    (p)[1] = ((v)>>8)&0xff; \\\n    (p)[2] = ((v)>>16)&0xff; \\\n    (p)[3] = ((v)>>24)&0xff; \\\n} while(0)\n\n#define lpSetNumElements(p,v) do { \\\n    (p)[4] = (v)&0xff; \\\n    (p)[5] = ((v)>>8)&0xff; \\\n} while(0)\n\n/* Validates that 'p' is not outside the listpack.\n * All 
functions that return a pointer to an element in the listpack will assert\n * that this element is valid, so it can be freely used.\n * Generally functions such as lpNext and lpDelete assume the input pointer is\n * already validated (since it's the return value of another function). */\n#define ASSERT_INTEGRITY(lp, p) do { \\\n    assert((p) >= (lp)+LP_HDR_SIZE && (p) < (lp)+lpGetTotalBytes((lp))); \\\n} while (0)\n\n/* Similar to the above, but validates the entire element length rather than just\n * its pointer. */\n#define ASSERT_INTEGRITY_LEN(lp, p, len) do { \\\n    assert((p) >= (lp)+LP_HDR_SIZE && (p)+(len) < (lp)+lpGetTotalBytes((lp))); \\\n} while (0)\n\nstatic inline void lpAssertValidEntry(unsigned char* lp, size_t lpbytes, unsigned char *p);\n\n/* Don't let listpacks grow over 1GB in any case, don't wanna risk overflow in\n * Total Bytes header field */\n#define LISTPACK_MAX_SAFETY_SIZE (1<<30)\nint lpSafeToAdd(unsigned char* lp, size_t add) {\n    size_t len = lp? lpGetTotalBytes(lp): 0;\n    if (add > LISTPACK_MAX_SAFETY_SIZE || len > LISTPACK_MAX_SAFETY_SIZE - add)\n        return 0;\n    return 1;\n}\n\n/* Convert a string into a signed 64 bit integer.\n * The function returns 1 if the string could be parsed into a (non-overflowing)\n * signed 64 bit int, 0 otherwise. The 'value' will be set to the parsed value\n * when the function returns success.\n *\n * Note that this function demands that the string strictly represents\n * an int64 value: no spaces or other characters before or after the string\n * representing the number are accepted, nor zeroes at the start if not\n * for the string \"0\" representing the zero number.\n *\n * Because of its strictness, it is safe to use this function to check if\n * you can convert a string into a long long, and obtain back the string\n * from the number without any loss in the string representation. 
*\n *\n * -----------------------------------------------------------------------------\n *\n * Credits: this function was adapted from the Redis source code, file\n * \"utils.c\", function string2ll(), and is copyright:\n *\n * Copyright(C) 2011, Pieter Noordhuis\n * Copyright(C) 2011-current, Redis Ltd.\n *\n * The function is released under the BSD 3-clause license.\n */\nint lpStringToInt64(const char *s, unsigned long slen, int64_t *value) {\n    const char *p = s;\n    unsigned long plen = 0;\n    int negative = 0;\n    uint64_t v;\n\n    /* Abort if length indicates this cannot possibly be an int */\n    if (slen == 0 || slen >= LONG_STR_SIZE)\n        return 0;\n\n    /* Special case: first and only digit is 0. */\n    if (slen == 1 && p[0] == '0') {\n        if (value != NULL) *value = 0;\n        return 1;\n    }\n\n    if (p[0] == '-') {\n        negative = 1;\n        p++; plen++;\n\n        /* Abort on only a negative sign. */\n        if (plen == slen)\n            return 0;\n    }\n\n    /* First digit should be 1-9, otherwise the string should just be 0. */\n    if (p[0] >= '1' && p[0] <= '9') {\n        v = p[0]-'0';\n        p++; plen++;\n    } else {\n        return 0;\n    }\n\n    while (plen < slen && p[0] >= '0' && p[0] <= '9') {\n        if (v > (UINT64_MAX / 10)) /* Overflow. */\n            return 0;\n        v *= 10;\n\n        if (v > (UINT64_MAX - (p[0]-'0'))) /* Overflow. */\n            return 0;\n        v += p[0]-'0';\n\n        p++; plen++;\n    }\n\n    /* Return if not all bytes were used. */\n    if (plen < slen)\n        return 0;\n\n    if (negative) {\n        if (v > ((uint64_t)(-(INT64_MIN+1))+1)) /* Overflow. */\n            return 0;\n        if (value != NULL) *value = -v;\n    } else {\n        if (v > INT64_MAX) /* Overflow. 
*/\n            return 0;\n        if (value != NULL) *value = v;\n    }\n    return 1;\n}\n\n/* Create a new, empty listpack.\n * On success the new listpack is returned, otherwise an error is returned.\n * Pre-allocate at least `capacity` bytes of memory,\n * over-allocated memory can be shrunk by `lpShrinkToFit`.\n * */\nunsigned char *lpNew(size_t capacity) {\n    unsigned char *lp = lp_malloc(capacity > LP_HDR_SIZE+1 ? capacity : LP_HDR_SIZE+1);\n    if (lp == NULL) return NULL;\n    lpSetTotalBytes(lp,LP_HDR_SIZE+1);\n    lpSetNumElements(lp,0);\n    lp[LP_HDR_SIZE] = LP_EOF;\n    return lp;\n}\n\n/* Free the specified listpack. */\nvoid lpFree(unsigned char *lp) {\n    lp_free(lp);\n}\n\n/* Shrink the memory to fit. */\nunsigned char* lpShrinkToFit(unsigned char *lp) {\n    size_t size = lpGetTotalBytes(lp);\n    if (size < lp_malloc_size(lp)) {\n        return lp_realloc(lp, size);\n    } else {\n        return lp;\n    }\n}\n\n/* Stores the integer encoded representation of 'v' in the 'intenc' buffer. */\nstatic inline void lpEncodeIntegerGetType(int64_t v, unsigned char *intenc, uint64_t *enclen) {\n    if (v >= 0 && v <= 127) {\n        /* Single byte 0-127 integer. */\n        if (intenc != NULL) intenc[0] = v;\n        if (enclen != NULL) *enclen = 1;\n    } else if (v >= -4096 && v <= 4095) {\n        /* 13 bit integer. */\n        if (v < 0) v = ((int64_t)1<<13)+v;\n        if (intenc != NULL) {\n            intenc[0] = (v>>8)|LP_ENCODING_13BIT_INT;\n            intenc[1] = v&0xff;\n        }\n        if (enclen != NULL) *enclen = 2;\n    } else if (v >= -32768 && v <= 32767) {\n        /* 16 bit integer. */\n        if (v < 0) v = ((int64_t)1<<16)+v;\n        if (intenc != NULL) {\n            intenc[0] = LP_ENCODING_16BIT_INT;\n            intenc[1] = v&0xff;\n            intenc[2] = v>>8;\n        }\n        if (enclen != NULL) *enclen = 3;\n    } else if (v >= -8388608 && v <= 8388607) {\n        /* 24 bit integer. 
*/\n        if (v < 0) v = ((int64_t)1<<24)+v;\n        if (intenc != NULL) {\n            intenc[0] = LP_ENCODING_24BIT_INT;\n            intenc[1] = v&0xff;\n            intenc[2] = (v>>8)&0xff;\n            intenc[3] = v>>16;\n        }\n        if (enclen != NULL) *enclen = 4;\n    } else if (v >= -2147483648 && v <= 2147483647) {\n        /* 32 bit integer. */\n        if (v < 0) v = ((int64_t)1<<32)+v;\n        if (intenc != NULL) {\n            intenc[0] = LP_ENCODING_32BIT_INT;\n            intenc[1] = v&0xff;\n            intenc[2] = (v>>8)&0xff;\n            intenc[3] = (v>>16)&0xff;\n            intenc[4] = v>>24;\n        }\n        if (enclen != NULL) *enclen = 5;\n    } else {\n        /* 64 bit integer. */\n        uint64_t uv = v;\n        if (intenc != NULL) {\n            intenc[0] = LP_ENCODING_64BIT_INT;\n            intenc[1] = uv&0xff;\n            intenc[2] = (uv>>8)&0xff;\n            intenc[3] = (uv>>16)&0xff;\n            intenc[4] = (uv>>24)&0xff;\n            intenc[5] = (uv>>32)&0xff;\n            intenc[6] = (uv>>40)&0xff;\n            intenc[7] = (uv>>48)&0xff;\n            intenc[8] = uv>>56;\n        }\n        if (enclen != NULL) *enclen = 9;\n    }\n}\n\n/* Given an element 'ele' of size 'size', determine if the element can be\n * represented inside the listpack encoded as integer, and returns\n * LP_ENCODING_INT if so. Otherwise returns LP_ENCODING_STR if no integer\n * encoding is possible.\n *\n * If the LP_ENCODING_INT is returned, the function stores the integer encoded\n * representation of the element in the 'intenc' buffer.\n *\n * Regardless of the returned encoding, 'enclen' is populated by reference to\n * the number of bytes that the string or integer encoded element will require\n * in order to be represented. 
*/\nstatic inline int lpEncodeGetType(unsigned char *ele, uint32_t size, unsigned char *intenc, uint64_t *enclen) {\n    int64_t v;\n    if (lpStringToInt64((const char*)ele, size, &v)) {\n        lpEncodeIntegerGetType(v, intenc, enclen);\n        return LP_ENCODING_INT;\n    } else {\n        if (size < 64) *enclen = 1+size;\n        else if (size < 4096) *enclen = 2+size;\n        else *enclen = 5+(uint64_t)size;\n        return LP_ENCODING_STRING;\n    }\n}\n\n/* Store a reverse-encoded variable length field, representing the length\n * of the previous element of size 'l', in the target buffer 'buf'.\n * The function returns the number of bytes used to encode it, from\n * 1 to 5. If 'buf' is NULL the function just returns the number of bytes\n * needed in order to encode the backlen. */\nstatic inline unsigned long lpEncodeBacklen(unsigned char *buf, uint64_t l) {\n    if (l <= 127) {\n        if (buf) buf[0] = l;\n        return 1;\n    } else if (l < 16383) {\n        if (buf) {\n            buf[0] = l>>7;\n            buf[1] = (l&127)|128;\n        }\n        return 2;\n    } else if (l < 2097151) {\n        if (buf) {\n            buf[0] = l>>14;\n            buf[1] = ((l>>7)&127)|128;\n            buf[2] = (l&127)|128;\n        }\n        return 3;\n    } else if (l < 268435455) {\n        if (buf) {\n            buf[0] = l>>21;\n            buf[1] = ((l>>14)&127)|128;\n            buf[2] = ((l>>7)&127)|128;\n            buf[3] = (l&127)|128;\n        }\n        return 4;\n    } else {\n        if (buf) {\n            buf[0] = l>>28;\n            buf[1] = ((l>>21)&127)|128;\n            buf[2] = ((l>>14)&127)|128;\n            buf[3] = ((l>>7)&127)|128;\n            buf[4] = (l&127)|128;\n        }\n        return 5;\n    }\n}\n\n/* Calculate the number of bytes required to reverse-encode a variable length\n * field representing the length of the previous element of size 'l', ranging\n * from 1 to 5. 
*/\nstatic inline unsigned long lpEncodeBacklenBytes(uint64_t l) {\n    if (l <= 127) {\n        return 1;\n    } else if (l < 16383) {\n        return 2;\n    } else if (l < 2097151) {\n        return 3;\n    } else if (l < 268435455) {\n        return 4;\n    } else {\n        return 5;\n    }\n}\n\n/* Decode the backlen and returns it. If the encoding looks invalid (more than\n * 5 bytes are used), UINT64_MAX is returned to report the problem.\n *\n * Optimized for the common case: most backlen values fit in one or two bytes\n * due to listpack size limits. This version avoids a loop and pointer\n * mutation, reducing overhead in hot paths while keeping the same encoding\n * semantics.\n *\n * Note: the caller guarantees that up to 5 bytes preceding 'p' are readable,\n * as ensured by listpack invariants. */\nstatic inline uint64_t lpDecodeBacklen(unsigned char *p) {\n    uint64_t val;\n\n    /* Fast path: single byte (most common for small entries <= 127 bytes) */\n    if (likely(!(p[0] & 128))) {\n        return p[0] & 127;\n    }\n\n    /* Two bytes */\n    val = (uint64_t)(p[0] & 127);\n    if (!(p[-1] & 128)) {\n        return val | ((uint64_t)(p[-1] & 127) << 7);\n    }\n\n    /* Three bytes */\n    val |= (uint64_t)(p[-1] & 127) << 7;\n    if (!(p[-2] & 128)) {\n        return val | ((uint64_t)(p[-2] & 127) << 14);\n    }\n\n    /* Four bytes */\n    val |= (uint64_t)(p[-2] & 127) << 14;\n    if (!(p[-3] & 128)) {\n        return val | ((uint64_t)(p[-3] & 127) << 21);\n    }\n\n    /* Five bytes */\n    val |= (uint64_t)(p[-3] & 127) << 21;\n    if (!(p[-4] & 128)) {\n        return val | ((uint64_t)(p[-4] & 127) << 28);\n    }\n\n    /* Invalid: more than 5 bytes */\n    return UINT64_MAX;\n}\n\n/* Encode the string element pointed by 's' of size 'len' in the target\n * buffer 's'. The function should be called with 'buf' having always enough\n * space for encoding the string. This is done by calling lpEncodeGetType()\n * before calling this function. 
*/\nstatic inline void lpEncodeString(unsigned char *buf, unsigned char *s, uint32_t len) {\n    if (len < 64) {\n        buf[0] = len | LP_ENCODING_6BIT_STR;\n        memcpy(buf+1,s,len);\n    } else if (len < 4096) {\n        buf[0] = (len >> 8) | LP_ENCODING_12BIT_STR;\n        buf[1] = len & 0xff;\n        memcpy(buf+2,s,len);\n    } else {\n        buf[0] = LP_ENCODING_32BIT_STR;\n        buf[1] = len & 0xff;\n        buf[2] = (len >> 8) & 0xff;\n        buf[3] = (len >> 16) & 0xff;\n        buf[4] = (len >> 24) & 0xff;\n        memcpy(buf+5,s,len);\n    }\n}\n\n/* Return the encoded length of the listpack element pointed by 'p'.\n * This includes the encoding byte, length bytes, and the element data itself.\n * If the element encoding is wrong then 0 is returned.\n * Note that this method may access additional bytes (in case of 12 and 32 bit\n * str), so should only be called when we know 'p' was already validated by\n * lpCurrentEncodedSizeBytes or ASSERT_INTEGRITY_LEN (possibly since 'p' is\n * a return value of another function that validated its return. 
*/\nstatic inline uint32_t lpCurrentEncodedSizeUnsafe(unsigned char *p) {\n    if (LP_ENCODING_IS_7BIT_UINT(p[0])) return 1;\n    if (LP_ENCODING_IS_6BIT_STR(p[0])) return 1+LP_ENCODING_6BIT_STR_LEN(p);\n    if (LP_ENCODING_IS_13BIT_INT(p[0])) return 2;\n    if (LP_ENCODING_IS_16BIT_INT(p[0])) return 3;\n    if (LP_ENCODING_IS_24BIT_INT(p[0])) return 4;\n    if (LP_ENCODING_IS_32BIT_INT(p[0])) return 5;\n    if (LP_ENCODING_IS_64BIT_INT(p[0])) return 9;\n    if (LP_ENCODING_IS_12BIT_STR(p[0])) return 2+LP_ENCODING_12BIT_STR_LEN(p);\n    if (LP_ENCODING_IS_32BIT_STR(p[0])) return 5+LP_ENCODING_32BIT_STR_LEN(p);\n    if (p[0] == LP_EOF) return 1;\n    return 0;\n}\n\n/* Return bytes needed to encode the length of the listpack element pointed by 'p'.\n * This includes just the encoding byte, and the bytes needed to encode the length\n * of the element (excluding the element data itself)\n * If the element encoding is wrong then 0 is returned. */\nstatic inline uint32_t lpCurrentEncodedSizeBytes(const unsigned char encoding) {\n    if (LP_ENCODING_IS_7BIT_UINT(encoding)) return 1;\n    if (LP_ENCODING_IS_6BIT_STR(encoding)) return 1;\n    if (LP_ENCODING_IS_13BIT_INT(encoding)) return 1;\n    if (LP_ENCODING_IS_16BIT_INT(encoding)) return 1;\n    if (LP_ENCODING_IS_24BIT_INT(encoding)) return 1;\n    if (LP_ENCODING_IS_32BIT_INT(encoding)) return 1;\n    if (LP_ENCODING_IS_64BIT_INT(encoding)) return 1;\n    if (LP_ENCODING_IS_12BIT_STR(encoding)) return 2;\n    if (LP_ENCODING_IS_32BIT_STR(encoding)) return 5;\n    if (encoding == LP_EOF) return 1;\n    return 0;\n}\n\n/* Skip the current entry returning the next. It is invalid to call this\n * function if the current element is the EOF element at the end of the\n * listpack, however, while this function is used to implement lpNext(),\n * it does not return NULL when the EOF element is encountered. 
*/\nstatic inline unsigned char *lpSkip(unsigned char *p) {\n    unsigned long entrylen = lpCurrentEncodedSizeUnsafe(p);\n    entrylen += lpEncodeBacklenBytes(entrylen);\n    p += entrylen;\n    return p;\n}\n\n/* This is similar to lpNext() but avoids the inner call to lpBytes when you already know the listpack size. */\nunsigned char *lpNextWithBytes(unsigned char *lp, unsigned char *p, const size_t lpbytes) {\n    assert(p);\n    p = lpSkip(p);\n    if (p[0] == LP_EOF) return NULL;\n    lpAssertValidEntry(lp, lpbytes, p);\n    return p;\n}\n\n/* If 'p' points to an element of the listpack, calling lpNext() will return\n * the pointer to the next element (the one on the right), or NULL if 'p'\n * already pointed to the last element of the listpack. */\nunsigned char *lpNext(unsigned char *lp, unsigned char *p) {\n    assert(p);\n    p = lpSkip(p);\n    if (p[0] == LP_EOF) return NULL;\n    lpAssertValidEntry(lp, lpBytes(lp), p);\n    return p;\n}\n\n/* If 'p' points to an element of the listpack, calling lpPrev() will return\n * the pointer to the previous element (the one on the left), or NULL if 'p'\n * already pointed to the first element of the listpack. */\nunsigned char *lpPrev(unsigned char *lp, unsigned char *p) {\n    assert(p);\n    if (p-lp == LP_HDR_SIZE) return NULL;\n    p--; /* Seek the first backlen byte of the last element. */\n    uint64_t prevlen = lpDecodeBacklen(p);\n    prevlen += lpEncodeBacklenBytes(prevlen);\n    p -= prevlen-1; /* Seek the first byte of the previous entry. */\n    lpAssertValidEntry(lp, lpBytes(lp), p);\n    return p;\n}\n\n/* Return a pointer to the first element of the listpack, or NULL if the\n * listpack has no elements. */\nunsigned char *lpFirst(unsigned char *lp) {\n    unsigned char *p = lp + LP_HDR_SIZE; /* Skip the header. 
*/\n    if (p[0] == LP_EOF) return NULL;\n    lpAssertValidEntry(lp, lpBytes(lp), p);\n    return p;\n}\n\n/* Return a pointer to the last element of the listpack, or NULL if the\n * listpack has no elements. */\nunsigned char *lpLast(unsigned char *lp) {\n    unsigned char *p = lp+lpGetTotalBytes(lp)-1; /* Seek EOF element. */\n    return lpPrev(lp,p); /* Will return NULL if EOF is the only element. */\n}\n\n/* Return the number of elements inside the listpack. This function attempts\n * to use the cached value when within range, otherwise a full scan is\n * needed. As a side effect of calling this function, the listpack header\n * could be modified, because if the count is found to be already within\n * the 'numele' header field range, the new value is set. */\nunsigned long lpLength(unsigned char *lp) {\n    uint32_t numele = lpGetNumElements(lp);\n    if (numele != LP_HDR_NUMELE_UNKNOWN) return numele;\n\n    /* Too many elements inside the listpack. We need to scan in order\n     * to get the total number. */\n    uint32_t count = 0;\n    unsigned char *p = lpFirst(lp);\n    while(p) {\n        count++;\n        p = lpNext(lp,p);\n    }\n\n    /* If the count is again within range of the header numele field,\n     * set it. */\n    if (count < LP_HDR_NUMELE_UNKNOWN) lpSetNumElements(lp,count);\n    return count;\n}\n\n/* Return the listpack element pointed by 'p'.\n *\n * The function changes behavior depending on the passed 'intbuf' value.\n * Specifically, if 'intbuf' is NULL:\n *\n * If the element is internally encoded as an integer, the function returns\n * NULL and populates the integer value by reference in 'count'. 
Otherwise if\n * the element is encoded as a string a pointer to the string (pointing inside\n * the listpack itself) is returned, and 'count' is set to the length of the\n * string.\n *\n * If instead 'intbuf' points to a buffer passed by the caller, that must be\n * at least LP_INTBUF_SIZE bytes, the function always returns the element as\n * if it were a string (returning the pointer to the string and setting the\n * 'count' argument to the string length by reference). However if the element\n * is encoded as an integer, the 'intbuf' buffer is used in order to store\n * the string representation.\n *\n * The user should use one or the other form depending on what the value will\n * be used for. If there is immediate usage for an integer value returned\n * by the function, then passing a buffer (and converting it back to a number)\n * is of course useless.\n *\n * If 'entry_size' is not NULL, *entry_size is set to the entry length of the\n * listpack element pointed by 'p'. This includes the encoding bytes, length\n * bytes, the element data itself, and the backlen bytes.\n *\n * If the function is called against a badly encoded listpack, so that there\n * is no valid way to parse it, the function returns as if there were an\n * integer encoded with value 12345678900000000 + <unrecognized byte>, this may\n * be a hint to understand that something is wrong. Crashing in this case is\n * not sensible because of the different requirements of the application using\n * this lib.\n *\n * Similarly, there is no error returned since the listpack normally can be\n * assumed to be valid, so that would be a very high API cost. */\nstatic inline unsigned char *lpGetWithSize(unsigned char *p, int64_t *count, unsigned char *intbuf, uint64_t *entry_size) {\n    int64_t val;\n    uint64_t uval, negstart, negmax;\n\n    assert(p); /* assertion for valgrind (avoid NPD) */\n    if (LP_ENCODING_IS_7BIT_UINT(p[0])) {\n        negstart = UINT64_MAX; /* 7 bit ints are always positive. 
*/\n        negmax = 0;\n        uval = p[0] & 0x7f;\n        if (entry_size) *entry_size = LP_ENCODING_7BIT_UINT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_6BIT_STR(p[0])) {\n        *count = LP_ENCODING_6BIT_STR_LEN(p);\n        if (entry_size) *entry_size = 1 + *count + lpEncodeBacklenBytes(*count + 1);\n        return p+1;\n    } else if (LP_ENCODING_IS_13BIT_INT(p[0])) {\n        uval = ((p[0]&0x1f)<<8) | p[1];\n        negstart = (uint64_t)1<<12;\n        negmax = 8191;\n        if (entry_size) *entry_size = LP_ENCODING_13BIT_INT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_16BIT_INT(p[0])) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8;\n        negstart = (uint64_t)1<<15;\n        negmax = UINT16_MAX;\n        if (entry_size) *entry_size = LP_ENCODING_16BIT_INT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_24BIT_INT(p[0])) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16;\n        negstart = (uint64_t)1<<23;\n        negmax = UINT32_MAX>>8;\n        if (entry_size) *entry_size = LP_ENCODING_24BIT_INT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_32BIT_INT(p[0])) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16 |\n               (uint64_t)p[4]<<24;\n        negstart = (uint64_t)1<<31;\n        negmax = UINT32_MAX;\n        if (entry_size) *entry_size = LP_ENCODING_32BIT_INT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_64BIT_INT(p[0])) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16 |\n               (uint64_t)p[4]<<24 |\n               (uint64_t)p[5]<<32 |\n               (uint64_t)p[6]<<40 |\n               (uint64_t)p[7]<<48 |\n               (uint64_t)p[8]<<56;\n        negstart = (uint64_t)1<<63;\n        negmax = UINT64_MAX;\n        if (entry_size) *entry_size = LP_ENCODING_64BIT_INT_ENTRY_SIZE;\n    } else if (LP_ENCODING_IS_12BIT_STR(p[0])) {\n        *count = 
LP_ENCODING_12BIT_STR_LEN(p);\n        if (entry_size) *entry_size = 2 + *count + lpEncodeBacklenBytes(*count + 2);\n        return p+2;\n    } else if (LP_ENCODING_IS_32BIT_STR(p[0])) {\n        *count = LP_ENCODING_32BIT_STR_LEN(p);\n        if (entry_size) *entry_size = 5 + *count + lpEncodeBacklenBytes(*count + 5);\n        return p+5;\n    } else {\n        uval = 12345678900000000ULL + p[0];\n        negstart = UINT64_MAX;\n        negmax = 0;\n    }\n\n    /* We reach this code path only for integer encodings.\n     * Convert the unsigned value to the signed one using two's complement\n     * rule. */\n    if (uval >= negstart) {\n        /* This three-step conversion should avoid undefined behaviors\n         * in the unsigned -> signed conversion. */\n        uval = negmax-uval;\n        val = uval;\n        val = -val-1;\n    } else {\n        val = uval;\n    }\n\n    /* Return the string representation of the integer or the value itself\n     * depending on intbuf being NULL or not. */\n    if (intbuf) {\n        *count = ll2string((char*)intbuf,LP_INTBUF_SIZE,(long long)val);\n        return intbuf;\n    } else {\n        *count = val;\n        return NULL;\n    }\n}\n\n/* Return the listpack element pointed by 'p'.\n *\n * The function has the same behaviour as lpGetWithSize when 'entry_size' is NULL,\n * but avoids a lot of unnecessary branching performance penalties. 
*/\nstatic inline unsigned char *lpGetWithBuf(unsigned char *p, int64_t *count, unsigned char *intbuf) {\n    int64_t val;\n    uint64_t uval, negstart, negmax;\n    assert(p); /* assertion for valgrind (avoid NPD) */\n    const unsigned char encoding = p[0];\n\n    /* string encoding */\n    if (LP_ENCODING_IS_6BIT_STR(encoding)) {\n        *count = LP_ENCODING_6BIT_STR_LEN(p);\n        return p+1;\n    }\n    if (LP_ENCODING_IS_12BIT_STR(encoding)) {\n        *count = LP_ENCODING_12BIT_STR_LEN(p);\n        return p+2;\n    }\n    if (LP_ENCODING_IS_32BIT_STR(encoding)) {\n        *count = LP_ENCODING_32BIT_STR_LEN(p);\n        return p+5;\n    }\n    /* int encoding */\n    if (LP_ENCODING_IS_7BIT_UINT(encoding)) {\n        negstart = UINT64_MAX; /* 7 bit ints are always positive. */\n        negmax = 0;\n        uval = encoding & 0x7f;\n    } else if (LP_ENCODING_IS_13BIT_INT(encoding)) {\n        uval = ((encoding&0x1f)<<8) | p[1];\n        negstart = (uint64_t)1<<12;\n        negmax = 8191;\n    } else if (LP_ENCODING_IS_16BIT_INT(encoding)) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8;\n        negstart = (uint64_t)1<<15;\n        negmax = UINT16_MAX;\n    } else if (LP_ENCODING_IS_24BIT_INT(encoding)) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16;\n        negstart = (uint64_t)1<<23;\n        negmax = UINT32_MAX>>8;\n    } else if (LP_ENCODING_IS_32BIT_INT(encoding)) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16 |\n               (uint64_t)p[4]<<24;\n        negstart = (uint64_t)1<<31;\n        negmax = UINT32_MAX;\n    } else if (LP_ENCODING_IS_64BIT_INT(encoding)) {\n        uval = (uint64_t)p[1] |\n               (uint64_t)p[2]<<8 |\n               (uint64_t)p[3]<<16 |\n               (uint64_t)p[4]<<24 |\n               (uint64_t)p[5]<<32 |\n               (uint64_t)p[6]<<40 |\n               (uint64_t)p[7]<<48 
|\n               (uint64_t)p[8]<<56;\n        negstart = (uint64_t)1<<63;\n        negmax = UINT64_MAX;\n    } else {\n        uval = 12345678900000000ULL + encoding;\n        negstart = UINT64_MAX;\n        negmax = 0;\n    }\n\n    /* We reach this code path only for integer encodings.\n     * Convert the unsigned value to the signed one using two's complement\n     * rule. */\n    if (uval >= negstart) {\n        /* This three steps conversion should avoid undefined behaviors\n         * in the unsigned -> signed conversion. */\n        uval = negmax-uval;\n        val = uval;\n        val = -val-1;\n    } else {\n        val = uval;\n    }\n\n    /* Return the string representation of the integer or the value itself\n     * depending on intbuf being NULL or not. */\n    if (intbuf) {\n        *count = ll2string((char*)intbuf,LP_INTBUF_SIZE,(long long)val);\n        return intbuf;\n    } else {\n        *count = val;\n        return NULL;\n    }\n}\n\nunsigned char *lpGet(unsigned char *p, int64_t *count, unsigned char *intbuf) {\n    return lpGetWithBuf(p, count, intbuf);\n}\n\n/* This is just a wrapper to lpGet() that is able to get entry value directly.\n * When the function returns NULL, it populates the integer value by reference in 'lval'.\n * Otherwise if the element is encoded as a string a pointer to the string (pointing\n * inside the listpack itself) is returned, and 'slen' is set to the length of the\n * string. */\nunsigned char *lpGetValue(unsigned char *p, unsigned int *slen, long long *lval) {\n    unsigned char *vstr;\n    int64_t ele_len;\n\n    vstr = lpGet(p, &ele_len, NULL);\n    if (vstr) {\n        *slen = ele_len;\n    } else {\n        *lval = ele_len;\n    }\n    return vstr;\n}\n\n/* This is just a wrapper to lpGet() that is able to get an integer from an entry directly.\n * Returns 1 and stores the integer in 'lval' if the entry is an integer.\n * Returns 0 if the entry is a string. 
*/\nint lpGetIntegerValue(unsigned char *p, long long *lval) {\n    int64_t ele_len;\n    if (!lpGet(p, &ele_len, NULL)) {\n        *lval = ele_len;\n        return 1;\n    }\n    return 0;\n}\n\n/* Find pointer to the entry with a comparator callback.\n *\n * 'cmp' is a comparator callback. If it returns zero, the current entry pointer\n * will be returned. 'user' is passed to this callback.\n * Skip 'skip' entries between every comparison.\n * Returns NULL when the field could not be found. */\nstatic inline unsigned char *lpFindCbInternal(unsigned char *lp, unsigned char *p,\n                                              void *user, lpCmp cmp, unsigned int skip)\n{\n    int skipcnt = 0;\n    unsigned char *value;\n    int64_t ll;\n    uint64_t entry_size = 123456789; /* initialized to avoid warning. */\n    uint32_t lp_bytes = lpBytes(lp);\n\n    if (!p)\n        p = lpFirst(lp);\n\n    while (p) {\n        if (skipcnt == 0) {\n            value = lpGetWithSize(p, &ll, NULL, &entry_size);\n            if (value) {\n                /* check the value doesn't reach outside the listpack before accessing it */\n                assert(p >= lp + LP_HDR_SIZE && p + entry_size < lp + lp_bytes);\n            }\n\n            if (unlikely(cmp(lp, p, user, value, ll) == 0))\n                return p;\n\n            /* Reset skip count */\n            skipcnt = skip;\n            p += entry_size;\n        } else {\n            /* Skip entry */\n            skipcnt--;\n\n            /* Move to the next entry. Avoid using `lpNext` because `lpAssertValidEntry`\n             * inside `lpNext` calls `lpBytes`, which would cause performance degradation. */\n            p = lpSkip(p);\n        }\n\n        /* The next call to lpGetWithSize could read at most 8 bytes past `p`.\n         * We use the slower validation call only when necessary. 
*/\n        if (p + 8 >= lp + lp_bytes)\n            lpAssertValidEntry(lp, lp_bytes, p);\n        else\n            assert(p >= lp + LP_HDR_SIZE && p < lp + lp_bytes);\n        if (p[0] == LP_EOF) break;\n    }\n\n    return NULL;\n}\n\nunsigned char *lpFindCb(unsigned char *lp, unsigned char *p,\n                        void *user, lpCmp cmp, unsigned int skip)\n{\n    return lpFindCbInternal(lp, p, user, cmp, skip);\n}\n\nstruct lpFindArg {\n    unsigned char *s; /* Item to search */\n    uint32_t slen;    /* Item len */\n    int vencoding;\n    int64_t vll;\n};\n\n/* Comparator function to find item */\nstatic inline int lpFindCmp(const unsigned char *lp, unsigned char *p,\n                            void *user, unsigned char *s, long long slen) {\n    (void) lp;\n    (void) p;\n    struct lpFindArg *arg = user;\n\n    if (s) {\n        if (slen == arg->slen && memcmp(arg->s, s, slen) == 0) {\n            return 0;\n        }\n    } else {\n        /* Find out if the searched field can be encoded. Note that\n         * we do it only the first time; once done, vencoding is set\n         * to non-zero and vll is set to the integer value. */\n        if (arg->vencoding == 0) {\n            /* If the entry can be encoded as an integer we set it to\n             * 1, else we set it to UCHAR_MAX, so that we don't retry\n             * the next time. */\n            if (arg->slen >= 32 || arg->slen == 0 || !lpStringToInt64((const char*)arg->s, arg->slen, &arg->vll)) {\n                arg->vencoding = UCHAR_MAX;\n            } else {\n                arg->vencoding = 1;\n            }\n        }\n\n        /* Compare the current entry with the specified entry, but only\n         * if vencoding != UCHAR_MAX, because if there is no encoding\n         * possible for the field it can't be a valid integer. 
*/\n        if (arg->vencoding != UCHAR_MAX && slen == arg->vll) {\n            return 0;\n        }\n    }\n\n    return 1;\n}\n\n/* Find pointer to the entry equal to the specified entry. Skip 'skip' entries\n * between every comparison. Returns NULL when the field could not be found. */\nunsigned char *lpFind(unsigned char *lp, unsigned char *p, unsigned char *s,\n                      uint32_t slen, unsigned int skip)\n{\n    struct lpFindArg arg = {\n        .s = s,\n        .slen = slen\n    };\n    return lpFindCbInternal(lp, p, &arg, lpFindCmp, skip);\n}\n\n/* Insert, delete or replace the specified string element 'elestr' of length\n * 'size' or integer element 'eleint' at the specified position 'p', with 'p'\n * being a listpack element pointer obtained with lpFirst(), lpLast(), lpNext(),\n * lpPrev() or lpSeek().\n *\n * The element is inserted before, after, or replaces the element pointed\n * by 'p' depending on the 'where' argument, which can be LP_BEFORE, LP_AFTER\n * or LP_REPLACE.\n *\n * If both 'elestr' and 'eleint' are NULL, the function removes the element\n * pointed by 'p' instead of inserting one.\n * If 'eleint' is non-NULL, 'size' is the length of 'eleint', and the function\n * inserts or replaces with a 64 bit integer, which is stored in the 'eleint' buffer.\n * If 'elestr' is non-NULL, 'size' is the length of 'elestr', and the function\n * inserts or replaces with a string, which is stored in the 'elestr' buffer.\n *\n * Returns NULL on out of memory or when the listpack total length would exceed\n * the max allowed size of 2^32-1, otherwise the new pointer to the listpack\n * holding the new element is returned (and the old pointer passed is no longer\n * considered valid).\n *\n * If 'newp' is not NULL, at the end of a successful call '*newp' will be set\n * to the address of the element just added, so that it will be possible to\n * continue an iteration with lpNext() and lpPrev().\n *\n * For deletion operations (both 'elestr' and 'eleint' set 
to NULL) 'newp' is\n * set to the next element, on the right of the deleted one, or to NULL if the\n * deleted element was the last one. */\nunsigned char *lpInsert(unsigned char *lp, unsigned char *elestr, unsigned char *eleint,\n                        uint32_t size, unsigned char *p, int where, unsigned char **newp)\n{\n    unsigned char intenc[LP_MAX_INT_ENCODING_LEN];\n    unsigned char backlen[LP_MAX_BACKLEN_SIZE];\n\n    uint64_t enclen; /* The length of the encoded element. */\n    int delete = (elestr == NULL && eleint == NULL);\n\n    /* When deleting, we conceptually replace the element with a\n     * zero-length element. So whatever we get passed as 'where', set\n     * it to LP_REPLACE. */\n    if (delete) where = LP_REPLACE;\n\n    /* If we need to insert after the current element, we just jump to the\n     * next element (that could be the EOF one) and handle the case of\n     * inserting before. So the function will actually deal with just two\n     * cases: LP_BEFORE and LP_REPLACE. */\n    if (where == LP_AFTER) {\n        p = lpSkip(p);\n        where = LP_BEFORE;\n        ASSERT_INTEGRITY(lp, p);\n    }\n\n    /* Store the offset of the element 'p', so that we can obtain its\n     * address again after a reallocation. */\n    unsigned long poff = p-lp;\n\n    int enctype;\n    if (elestr) {\n        /* Calling lpEncodeGetType() stores the encoded version of the\n         * element into 'intenc' in case it is representable as\n         * an integer: in that case, the function returns LP_ENCODING_INT.\n         * Otherwise, if LP_ENCODING_STR is returned, we'll have to call\n         * lpEncodeString() to actually write the encoded string in place later.\n         *\n         * Whatever the returned encoding is, 'enclen' is populated with the\n         * length of the encoded element. 
*/\n        enctype = lpEncodeGetType(elestr,size,intenc,&enclen);\n        if (enctype == LP_ENCODING_INT) eleint = intenc;\n    } else if (eleint) {\n        enctype = LP_ENCODING_INT;\n        enclen = size; /* 'size' is the length of the encoded integer element. */\n    } else {\n        enctype = -1;\n        enclen = 0;\n    }\n\n    /* We need to also encode the backward-parsable length of the element\n     * and append it to the end: this allows traversing the listpack from\n     * the end to the start. */\n    unsigned long backlen_size = (!delete) ? lpEncodeBacklen(backlen,enclen) : 0;\n    uint64_t old_listpack_bytes = lpGetTotalBytes(lp);\n    uint32_t replaced_len = 0;\n    if (where == LP_REPLACE) {\n        replaced_len = lpCurrentEncodedSizeUnsafe(p);\n        replaced_len += lpEncodeBacklenBytes(replaced_len);\n        ASSERT_INTEGRITY_LEN(lp, p, replaced_len);\n    }\n\n    uint64_t new_listpack_bytes = old_listpack_bytes + enclen + backlen_size\n                                  - replaced_len;\n    if (new_listpack_bytes > UINT32_MAX) return NULL;\n\n    /* We now need to reallocate in order to make space or shrink the\n     * allocation (in case the 'where' value is LP_REPLACE and the new element is\n     * smaller). However, we do that before memmoving the memory to\n     * make room for the new element if the final allocation will get\n     * larger, or we do it after if the final allocation will get smaller. */\n\n    unsigned char *dst = lp + poff; /* May be updated after reallocation. */\n\n    /* Realloc before: we need more room. */\n    if (new_listpack_bytes > old_listpack_bytes &&\n        new_listpack_bytes > lp_malloc_size(lp)) {\n        if ((lp = lp_realloc(lp,new_listpack_bytes)) == NULL) return NULL;\n        dst = lp + poff;\n    }\n\n    /* Set up the listpack, relocating the elements to make the exact room\n     * we need to store the new one. 
*/\n    if (where == LP_BEFORE) {\n        memmove(dst+enclen+backlen_size,dst,old_listpack_bytes-poff);\n    } else { /* LP_REPLACE. */\n        memmove(dst+enclen+backlen_size,\n                dst+replaced_len,\n                old_listpack_bytes-poff-replaced_len);\n    }\n\n    /* Realloc after: we need to free space. */\n    if (new_listpack_bytes < old_listpack_bytes) {\n        if ((lp = lp_realloc(lp,new_listpack_bytes)) == NULL) return NULL;\n        dst = lp + poff;\n    }\n\n    /* Store the entry. */\n    if (newp) {\n        *newp = dst;\n        /* In case of deletion, set 'newp' to NULL if the next element is\n         * the EOF element. */\n        if (delete && dst[0] == LP_EOF) *newp = NULL;\n    }\n    if (!delete) {\n        if (enctype == LP_ENCODING_INT) {\n            memcpy(dst,eleint,enclen);\n        } else if (elestr) {\n            lpEncodeString(dst,elestr,size);\n        } else {\n            redis_unreachable();\n        }\n        dst += enclen;\n        memcpy(dst,backlen,backlen_size);\n        dst += backlen_size;\n    }\n\n    /* Update header. */\n    if (where != LP_REPLACE || delete) {\n        uint32_t num_elements = lpGetNumElements(lp);\n        if (num_elements != LP_HDR_NUMELE_UNKNOWN) {\n            if (!delete)\n                lpSetNumElements(lp,num_elements+1);\n            else\n                lpSetNumElements(lp,num_elements-1);\n        }\n    }\n    lpSetTotalBytes(lp,new_listpack_bytes);\n\n#if 0\n    /* This code path is normally disabled: what it does is to force listpack\n     * to return *always* a new pointer after performing some modification to\n     * the listpack, even if the previous allocation was enough. This is useful\n     * in order to spot bugs in code using listpacks: by doing so we can find\n     * if the caller forgets to set the new pointer where the listpack reference\n     * is stored, after an update. 
*/\n    unsigned char *oldlp = lp;\n    lp = lp_malloc(new_listpack_bytes);\n    memcpy(lp,oldlp,new_listpack_bytes);\n    if (newp) {\n        unsigned long offset = (*newp)-oldlp;\n        *newp = lp + offset;\n    }\n    /* Make sure the old allocation contains garbage. */\n    memset(oldlp,'A',new_listpack_bytes);\n    lp_free(oldlp);\n#endif\n\n    return lp;\n}\n\n/* Insert the specified elements with 'entries' and 'len' at the specified\n * position 'p', with 'p' being a listpack element pointer obtained with\n * lpFirst(), lpLast(), lpNext(), lpPrev() or lpSeek().\n *\n * This is similar to lpInsert() but allows you to insert a batch of entries in\n * one call. This function is more efficient than inserting entries one by one\n * as it does a single realloc()/memmove() call for all the entries.\n *\n * In each listpackEntry, if 'sval' is not null, it is assumed the entry is a string\n * and 'sval' and 'slen' will be used. Otherwise, 'lval' will be used to append\n * the integer entry.\n *\n * The elements are inserted before or after the element pointed by 'p'\n * depending on the 'where' argument, which can be LP_BEFORE or LP_AFTER.\n *\n * If 'newp' is not NULL, at the end of a successful call '*newp' will be set\n * to the address of the element just added, so that it will be possible to\n * continue an iteration with lpNext() and lpPrev().\n *\n * Returns NULL on out of memory or when the listpack total length would exceed\n * the max allowed size of 2^32-1, otherwise the new pointer to the listpack\n * holding the new element is returned (and the old pointer passed is no longer\n * considered valid). 
*/\nunsigned char *lpBatchInsert(unsigned char *lp, unsigned char *p, int where,\n                             listpackEntry *entries, unsigned int len,\n                             unsigned char **newp)\n{\n    assert(where == LP_BEFORE || where == LP_AFTER);\n    assert(entries != NULL && len > 0);\n\n    struct listpackInsertEntry {\n        int enctype;\n        uint64_t enclen;\n        unsigned char intenc[LP_MAX_INT_ENCODING_LEN];\n        unsigned char backlen[LP_MAX_BACKLEN_SIZE];\n        unsigned long backlen_size;\n    };\n\n    uint64_t addedlen = 0;       /* The encoded length of the added elements. */\n    struct listpackInsertEntry tmp[3];  /* Encoded entries */\n    struct listpackInsertEntry *enc = tmp;\n\n    if (len > sizeof(tmp) / sizeof(struct listpackInsertEntry)) {\n        /* If 'len' is larger than the local buffer size, allocate on the heap.\n         * Use lp_malloc() so the buffer pairs with the lp_free() calls below. */\n        enc = lp_malloc(len * sizeof(struct listpackInsertEntry));\n    }\n\n    /* If we need to insert after the current element, we just jump to the\n     * next element (that could be the EOF one) and handle the case of\n     * inserting before. So the function will actually deal with just one\n     * case: LP_BEFORE. */\n    if (where == LP_AFTER) {\n        p = lpSkip(p);\n        where = LP_BEFORE;\n        ASSERT_INTEGRITY(lp, p);\n    }\n\n    for (unsigned int i = 0; i < len; i++) {\n        listpackEntry *e = &entries[i];\n        if (e->sval) {\n           /* Calling lpEncodeGetType() stores the encoded version of the\n            * element into 'intenc' in case it is representable as\n            * an integer: in that case, the function returns LP_ENCODING_INT.\n            * Otherwise, if LP_ENCODING_STR is returned, we'll have to call\n            * lpEncodeString() to actually write the encoded string in place\n            * later.\n            *\n            * Whatever the returned encoding is, 'enclen' is populated with the\n            * length of the encoded element. 
*/\n            enc[i].enctype = lpEncodeGetType(e->sval, e->slen,\n                                             enc[i].intenc, &enc[i].enclen);\n        } else {\n            enc[i].enctype = LP_ENCODING_INT;\n            lpEncodeIntegerGetType(e->lval, enc[i].intenc, &enc[i].enclen);\n        }\n        addedlen += enc[i].enclen;\n\n        /* We need to also encode the backward-parsable length of the element\n         * and append it to the end: this allows to traverse the listpack from\n         * the end to the start. */\n        enc[i].backlen_size = lpEncodeBacklen(enc[i].backlen, enc[i].enclen);\n        addedlen += enc[i].backlen_size;\n    }\n\n    uint64_t old_listpack_bytes = lpGetTotalBytes(lp);\n    uint64_t new_listpack_bytes = old_listpack_bytes + addedlen;\n    if (new_listpack_bytes > UINT32_MAX) {\n        if (enc != tmp) lp_free(enc);\n        return NULL;\n    }\n\n    /* Store the offset of the element 'p', so that we can obtain its\n     * address again after a reallocation. */\n    unsigned long poff = p-lp;\n    unsigned char *dst = lp + poff; /* May be updated after reallocation. */\n\n    /* Realloc before: we need more room. */\n    if (new_listpack_bytes > old_listpack_bytes &&\n        new_listpack_bytes > lp_malloc_size(lp)) {\n        if ((lp = lp_realloc(lp,new_listpack_bytes)) == NULL) {\n            if (enc != tmp) lp_free(enc);\n            return NULL;\n        }\n        dst = lp + poff;\n    }\n\n    /* Setup the listpack relocating the elements to make the exact room\n     * we need to store the new ones. 
*/\n    memmove(dst+addedlen,dst,old_listpack_bytes-poff);\n\n    for (unsigned int i = 0; i < len; i++) {\n        listpackEntry *ent = &entries[i];\n\n        if (newp)\n            *newp = dst;\n\n        if (enc[i].enctype == LP_ENCODING_INT)\n            memcpy(dst, enc[i].intenc, enc[i].enclen);\n        else\n            lpEncodeString(dst, ent->sval, ent->slen);\n\n        dst += enc[i].enclen;\n        memcpy(dst, enc[i].backlen, enc[i].backlen_size);\n        dst += enc[i].backlen_size;\n    }\n\n    /* Update header. */\n    uint32_t num_elements = lpGetNumElements(lp);\n    if (num_elements != LP_HDR_NUMELE_UNKNOWN) {\n        if ((int64_t) len > (int64_t) LP_HDR_NUMELE_UNKNOWN - (int64_t) num_elements)\n            lpSetNumElements(lp, LP_HDR_NUMELE_UNKNOWN);\n        else\n            lpSetNumElements(lp,num_elements + len);\n    }\n    lpSetTotalBytes(lp,new_listpack_bytes);\n    if (enc != tmp) lp_free(enc);\n\n    return lp;\n}\n\n/* This is just a wrapper for lpInsert() to directly use a string. */\nunsigned char *lpInsertString(unsigned char *lp, unsigned char *s, uint32_t slen,\n                              unsigned char *p, int where, unsigned char **newp)\n{\n    return lpInsert(lp, s, NULL, slen, p, where, newp);\n}\n\n/* This is just a wrapper for lpInsert() to directly use a 64 bit integer\n * instead of a string. */\nunsigned char *lpInsertInteger(unsigned char *lp, long long lval, unsigned char *p, int where, unsigned char **newp) {\n    uint64_t enclen; /* The length of the encoded element. */\n    unsigned char intenc[LP_MAX_INT_ENCODING_LEN];\n\n    lpEncodeIntegerGetType(lval, intenc, &enclen);\n    return lpInsert(lp, NULL, intenc, enclen, p, where, newp);\n}\n\n/* Prepend the specified element 's' of length 'slen' to the head of the listpack. 
*/\nunsigned char *lpPrepend(unsigned char *lp, unsigned char *s, uint32_t slen) {\n    unsigned char *p = lpFirst(lp);\n    if (!p) return lpAppend(lp, s, slen);\n    return lpInsert(lp, s, NULL, slen, p, LP_BEFORE, NULL);\n}\n\n/* Prepend the specified integer element 'lval' to the head of the listpack. */\nunsigned char *lpPrependInteger(unsigned char *lp, long long lval) {\n    unsigned char *p = lpFirst(lp);\n    if (!p) return lpAppendInteger(lp, lval);\n    return lpInsertInteger(lp, lval, p, LP_BEFORE, NULL);\n}\n\n/* Append the specified element 'ele' of length 'size' at the end of the\n * listpack. It is implemented in terms of lpInsert(), so the return value is\n * the same as lpInsert(). */\nunsigned char *lpAppend(unsigned char *lp, unsigned char *ele, uint32_t size) {\n    uint64_t listpack_bytes = lpGetTotalBytes(lp);\n    unsigned char *eofptr = lp + listpack_bytes - 1;\n    return lpInsert(lp,ele,NULL,size,eofptr,LP_BEFORE,NULL);\n}\n\n/* Append the specified integer element 'lval' at the end of the listpack. */\nunsigned char *lpAppendInteger(unsigned char *lp, long long lval) {\n    uint64_t listpack_bytes = lpGetTotalBytes(lp);\n    unsigned char *eofptr = lp + listpack_bytes - 1;\n    return lpInsertInteger(lp, lval, eofptr, LP_BEFORE, NULL);\n}\n\n/* Append a batch of entries to the listpack.\n *\n * This call is more efficient than multiple lpAppend() calls as it only does\n * a single realloc() for all the given entries.\n *\n * In each listpackEntry, if 'sval' is not null, it is assumed the entry is a string\n * and 'sval' and 'slen' will be used. Otherwise, 'lval' will be used to append\n * the integer entry. 
*/\nunsigned char *lpBatchAppend(unsigned char *lp, listpackEntry *entries, unsigned long len) {\n    uint64_t listpack_bytes = lpGetTotalBytes(lp);\n    unsigned char *eofptr = lp + listpack_bytes - 1;\n    return lpBatchInsert(lp, eofptr, LP_BEFORE, entries, len, NULL);\n}\n\n/* This is just a wrapper for lpInsert() to directly use a string to replace\n * the current element. The function returns the new listpack as the return\n * value, and also updates the current cursor by updating '*p'. */\nunsigned char *lpReplace(unsigned char *lp, unsigned char **p, unsigned char *s, uint32_t slen) {\n    return lpInsert(lp, s, NULL, slen, *p, LP_REPLACE, p);\n}\n\n/* This is just a wrapper for lpInsertInteger() to directly use a 64 bit integer\n * instead of a string to replace the current element. The function returns\n * the new listpack as the return value, and also updates the current cursor\n * by updating '*p'. */\nunsigned char *lpReplaceInteger(unsigned char *lp, unsigned char **p, long long lval) {\n    return lpInsertInteger(lp, lval, *p, LP_REPLACE, p);\n}\n\n/* Remove the element pointed by 'p', and return the resulting listpack.\n * If 'newp' is not NULL, the next element pointer (to the right of the\n * deleted one) is returned by reference. If the deleted element was the\n * last one, '*newp' is set to NULL. */\nunsigned char *lpDelete(unsigned char *lp, unsigned char *p, unsigned char **newp) {\n    return lpInsert(lp,NULL,NULL,0,p,LP_REPLACE,newp);\n}\n\n/* Delete a range of entries from the listpack, starting with the element pointed by 'p'. */\nunsigned char *lpDeleteRangeWithEntry(unsigned char *lp, unsigned char **p, unsigned long num) {\n    size_t bytes = lpBytes(lp);\n    unsigned long deleted = 0;\n    unsigned char *eofptr = lp + bytes - 1;\n    unsigned char *first, *tail;\n    first = tail = *p;\n\n    if (num == 0) return lp;  /* Nothing to delete, return ASAP. 
*/\n\n    /* Find the entry following the last entry that needs to be deleted.\n     * lpLength may be unreliable due to corrupt data, so we cannot\n     * treat 'num' as the number of elements to be deleted. */\n    while (num--) {\n        deleted++;\n        tail = lpSkip(tail);\n        if (tail[0] == LP_EOF) break;\n        lpAssertValidEntry(lp, bytes, tail);\n    }\n\n    /* Store the offset of the element 'first', so that we can obtain its\n     * address again after a reallocation. */\n    unsigned long poff = first-lp;\n\n    /* Move tail to the front of the listpack */\n    memmove(first, tail, eofptr - tail + 1);\n    lpSetTotalBytes(lp, bytes - (tail - first));\n    uint32_t numele = lpGetNumElements(lp);\n    if (numele != LP_HDR_NUMELE_UNKNOWN)\n        lpSetNumElements(lp, numele-deleted);\n    lp = lpShrinkToFit(lp);\n\n    /* Update the returned entry pointer. */\n    *p = lp+poff;\n    if ((*p)[0] == LP_EOF) *p = NULL;\n\n    return lp;\n}\n\n/* Delete a range of entries from the listpack. */\nunsigned char *lpDeleteRange(unsigned char *lp, long index, unsigned long num) {\n    unsigned char *p;\n    uint32_t numele = lpGetNumElements(lp);\n\n    if (num == 0) return lp; /* Nothing to delete, return ASAP. */\n    if ((p = lpSeek(lp, index)) == NULL) return lp;\n\n    /* If we know we're going to delete beyond the end of the listpack, we can just move\n     * the EOF marker, and there's no need to iterate through the entries,\n     * but if we can't be sure how many entries there are, we'd rather avoid calling lpLength\n     * since that means an additional iteration on all elements.\n     *\n     * Note that index could overflow, but we use the value after seek, so when we\n     * use it no overflow happens. 
*/\n    if (numele != LP_HDR_NUMELE_UNKNOWN && index < 0) index = (long)numele + index;\n    if (numele != LP_HDR_NUMELE_UNKNOWN && (numele - (unsigned long)index) <= num) {\n        p[0] = LP_EOF;\n        lpSetTotalBytes(lp, p - lp + 1);\n        lpSetNumElements(lp, index);\n        lp = lpShrinkToFit(lp);\n    } else {\n        lp = lpDeleteRangeWithEntry(lp, &p, num);\n    }\n\n    return lp;\n}\n\n/* Delete the elements 'ps' passed as an array of 'count' element pointers and\n * return the resulting listpack. The elements must be given in the same order\n * as they appear in the listpack. */\nunsigned char *lpBatchDelete(unsigned char *lp, unsigned char **ps, unsigned long count) {\n    if (count == 0) return lp;\n    unsigned char *dst = ps[0];\n    size_t total_bytes = lpGetTotalBytes(lp);\n    unsigned char *lp_end = lp + total_bytes; /* After the EOF element. */\n    assert(lp_end[-1] == LP_EOF);\n    /*\n     * ----+--------+-----------+--------+---------+-----+---+\n     * ... | Delete | Keep      | Delete | Keep    | ... |EOF|\n     * ... |xxxxxxxx|           |xxxxxxxx|         | ... |   |\n     * ----+--------+-----------+--------+---------+-----+---+\n     *     ^        ^           ^                            ^\n     *     |        |           |                            |\n     *     ps[i]    |           ps[i+1]                      |\n     *     skip     keep_start  keep_end                     lp_end\n     *\n     * The loop memmoves the bytes between keep_start and keep_end to dst.\n     */\n    for (unsigned long i = 0; i < count; i++) {\n        unsigned char *skip = ps[i];\n        assert(skip != NULL && skip[0] != LP_EOF);\n        unsigned char *keep_start = lpSkip(skip);\n        unsigned char *keep_end;\n        if (i + 1 < count) {\n            keep_end = ps[i + 1];\n            /* Deleting consecutive elements. Nothing to keep between them. 
*/\n            if (keep_start == keep_end) continue;\n        } else {\n            /* Keep the rest of the listpack including the EOF marker. */\n            keep_end = lp_end;\n        }\n        assert(keep_end > keep_start);\n        size_t bytes_to_keep = keep_end - keep_start;\n        memmove(dst, keep_start, bytes_to_keep);\n        dst += bytes_to_keep;\n    }\n    /* Update total size and num elements. */\n    size_t deleted_bytes = lp_end - dst;\n    total_bytes -= deleted_bytes;\n    assert(lp[total_bytes - 1] == LP_EOF);\n    lpSetTotalBytes(lp, total_bytes);\n    uint32_t numele = lpGetNumElements(lp);\n    if (numele != LP_HDR_NUMELE_UNKNOWN) lpSetNumElements(lp, numele - count);\n    return lpShrinkToFit(lp);\n}\n\n/* Merge listpacks 'first' and 'second' by appending 'second' to 'first'.\n *\n * NOTE: The larger listpack is reallocated to contain the new merged listpack.\n * Either 'first' or 'second' can be used for the result. The parameter not\n * used will be free'd and set to NULL.\n *\n * After calling this function, the input parameters are no longer valid since\n * they are changed and free'd in-place.\n *\n * The result listpack is the contents of 'first' followed by 'second'.\n *\n * On failure: returns NULL if the merge is impossible.\n * On success: returns the merged listpack (which is an expanded version of either\n * 'first' or 'second'), frees the other unused input listpack, and sets the\n * input listpack argument equal to the newly reallocated listpack return value. */\nunsigned char *lpMerge(unsigned char **first, unsigned char **second) {\n    /* If any params are null, we can't merge, so return NULL. */\n    if (first == NULL || *first == NULL || second == NULL || *second == NULL)\n        return NULL;\n\n    /* Can't merge same list into itself. 
*/\n    if (*first == *second)\n        return NULL;\n\n    size_t first_bytes = lpBytes(*first);\n    unsigned long first_len = lpLength(*first);\n\n    size_t second_bytes = lpBytes(*second);\n    unsigned long second_len = lpLength(*second);\n\n    int append;\n    unsigned char *source, *target;\n    size_t target_bytes, source_bytes;\n    /* Pick the larger listpack so we can resize easily in-place.\n     * We must also track if we are now appending or prepending to\n     * the target listpack. */\n    if (first_bytes >= second_bytes) {\n        /* retain first, append second to first. */\n        target = *first;\n        target_bytes = first_bytes;\n        source = *second;\n        source_bytes = second_bytes;\n        append = 1;\n    } else {\n        /* else, retain second, prepend first to second. */\n        target = *second;\n        target_bytes = second_bytes;\n        source = *first;\n        source_bytes = first_bytes;\n        append = 0;\n    }\n\n    /* Calculate final bytes (subtract one pair of metadata) */\n    unsigned long long lpbytes = (unsigned long long)first_bytes + second_bytes - LP_HDR_SIZE - 1;\n    assert(lpbytes < UINT32_MAX); /* larger values can't be stored */\n    unsigned long lplength = first_len + second_len;\n\n    /* The combined lp length is capped at UINT16_MAX */\n    lplength = lplength < UINT16_MAX ? lplength : UINT16_MAX;\n\n    /* Extend target to new lpbytes then append or prepend source. 
*/\n    target = lp_realloc(target, lpbytes);\n    if (append) {\n        /* append == appending to target */\n        /* Copy source after target (copying over original [END]):\n         *   [TARGET - END, SOURCE - HEADER] */\n        memcpy(target + target_bytes - 1,\n               source + LP_HDR_SIZE,\n               source_bytes - LP_HDR_SIZE);\n    } else {\n        /* !append == prepending to target */\n        /* Move target *contents* exactly size of (source - [END]),\n         * then copy source into vacated space (source - [END]):\n         *   [SOURCE - END, TARGET - HEADER] */\n        memmove(target + source_bytes - 1,\n                target + LP_HDR_SIZE,\n                target_bytes - LP_HDR_SIZE);\n        memcpy(target, source, source_bytes - 1);\n    }\n\n    lpSetNumElements(target, lplength);\n    lpSetTotalBytes(target, lpbytes);\n\n    /* Now free and NULL out what we didn't realloc */\n    if (append) {\n        lp_free(*second);\n        *second = NULL;\n        *first = target;\n    } else {\n        lp_free(*first);\n        *first = NULL;\n        *second = target;\n    }\n\n    return target;\n}\n\nunsigned char *lpDup(unsigned char *lp) {\n    size_t lpbytes = lpBytes(lp);\n    unsigned char *newlp = lp_malloc(lpbytes);\n    memcpy(newlp, lp, lpbytes);\n    return newlp;\n}\n\n/* Return the total number of bytes the listpack is composed of. */\nsize_t lpBytes(unsigned char *lp) {\n    return lpGetTotalBytes(lp);\n}\n\n/* Returns the size 'lval' will require when encoded, in bytes */\nsize_t lpEntrySizeInteger(long long lval) {\n    uint64_t enclen;\n    lpEncodeIntegerGetType(lval, NULL, &enclen);\n    unsigned long backlen = lpEncodeBacklenBytes(enclen);\n    return enclen + backlen;\n}\n\n/* Returns the size of a listpack consisting of an integer repeated 'rep' times. 
*/\nsize_t lpEstimateBytesRepeatedInteger(long long lval, unsigned long rep) {\n    return LP_HDR_SIZE + lpEntrySizeInteger(lval) * rep + 1;\n}\n\n/* Seek the specified element and return a pointer to the sought element.\n * Positive indexes specify the zero-based element to seek from the head to\n * the tail, negative indexes specify elements starting from the tail, where\n * -1 means the last element, -2 the penultimate and so forth. If the index\n * is out of range, NULL is returned. */\nunsigned char *lpSeek(unsigned char *lp, long index) {\n    int forward = 1; /* Seek forward by default. */\n\n    /* We want to seek from left to right or the other way around\n     * depending on the listpack length and the element position.\n     * However, if the listpack length cannot be obtained in constant time,\n     * we always seek from left to right. */\n    uint32_t numele = lpGetNumElements(lp);\n    if (numele != LP_HDR_NUMELE_UNKNOWN) {\n        if (index < 0) index = (long)numele+index;\n        if (index < 0) return NULL; /* Index still < 0 means out of range. */\n        if (index >= (long)numele) return NULL; /* Out of range on the other side. */\n        /* We want to scan right-to-left if the element we are looking for\n         * is past the half of the listpack. */\n        if (index > (long)numele/2) {\n            forward = 0;\n            /* Right to left scanning always expects a negative index. Convert\n             * our index to negative form. */\n            index -= numele;\n        }\n    } else {\n        /* If the listpack length is unspecified, for negative indexes we\n         * want to always scan right-to-left. */\n        if (index < 0) forward = 0;\n    }\n\n    /* Forward and backward scanning is trivially based on lpNext()/lpPrev(). 
*/\n    if (forward) {\n        unsigned char *ele = lpFirst(lp);\n        while (index > 0 && ele) {\n            ele = lpNext(lp,ele);\n            index--;\n        }\n        return ele;\n    } else {\n        unsigned char *ele = lpLast(lp);\n        while (index < -1 && ele) {\n            ele = lpPrev(lp,ele);\n            index++;\n        }\n        return ele;\n    }\n}\n\n/* Same as lpFirst but without validation assert, to be used right before lpValidateNext. */\nunsigned char *lpValidateFirst(unsigned char *lp) {\n    unsigned char *p = lp + LP_HDR_SIZE; /* Skip the header. */\n    if (p[0] == LP_EOF) return NULL;\n    return p;\n}\n\n/* Validate the integrity of a single listpack entry and move to the next one.\n * The input argument 'pp' is a reference to the current record and is advanced on exit.\n * The data pointed to by 'lp' will not be modified by the function.\n * Returns 1 if valid, 0 if invalid. */\nint lpValidateNext(unsigned char *lp, unsigned char **pp, size_t lpbytes) {\n#define OUT_OF_RANGE(p) ( \\\n        (p) < lp + LP_HDR_SIZE || \\\n        (p) > lp + lpbytes - 1)\n    unsigned char *p = *pp;\n    if (!p)\n        return 0;\n\n    /* Before accessing p, make sure it's valid. */\n    if (OUT_OF_RANGE(p))\n        return 0;\n\n    if (*p == LP_EOF) {\n        *pp = NULL;\n        return 1;\n    }\n\n    /* check that we can read the encoded size */\n    uint32_t lenbytes = lpCurrentEncodedSizeBytes(p[0]);\n    if (!lenbytes)\n        return 0;\n\n    /* make sure the encoded entry length doesn't reach outside the edge of the listpack */\n    if (OUT_OF_RANGE(p + lenbytes))\n        return 0;\n\n    /* get the entry length and encoded backlen. 
*/\n    unsigned long entrylen = lpCurrentEncodedSizeUnsafe(p);\n    unsigned long encodedBacklen = lpEncodeBacklenBytes(entrylen);\n    entrylen += encodedBacklen;\n\n    /* make sure the entry doesn't reach outside the edge of the listpack */\n    if (OUT_OF_RANGE(p + entrylen))\n        return 0;\n\n    /* move to the next entry */\n    p += entrylen;\n\n    /* make sure the encoded length at the end matches the one at the beginning. */\n    uint64_t prevlen = lpDecodeBacklen(p-1);\n    if (prevlen + encodedBacklen != entrylen)\n        return 0;\n\n    *pp = p;\n    return 1;\n#undef OUT_OF_RANGE\n}\n\n/* Validate that the entry doesn't reach outside the listpack allocation. */\nstatic inline void lpAssertValidEntry(unsigned char* lp, size_t lpbytes, unsigned char *p) {\n    assert(lpValidateNext(lp, &p, lpbytes));\n}\n\n/* Validate the integrity of the data structure.\n * When `deep` is 0, only the integrity of the header is validated.\n * When `deep` is 1, we scan all the entries one by one. */\nint lpValidateIntegrity(unsigned char *lp, size_t size, int deep, \n                        listpackValidateEntryCB entry_cb, void *cb_userdata) {\n    /* Check that we can actually read the header. (and EOF) */\n    if (size < LP_HDR_SIZE + 1)\n        return 0;\n\n    /* Check that the encoded size in the header matches the allocated size. */\n    size_t bytes = lpGetTotalBytes(lp);\n    if (bytes != size)\n        return 0;\n\n    /* The last byte must be the terminator. */\n    if (lp[size-1] != LP_EOF)\n        return 0;\n\n    if (!deep)\n        return 1;\n\n    /* Validate the individual entries. */\n    uint32_t count = 0;\n    uint32_t numele = lpGetNumElements(lp);\n    unsigned char *p = lp + LP_HDR_SIZE;\n    while(p && p[0] != LP_EOF) {\n        unsigned char *prev = p;\n\n        /* Validate this entry and move to the next entry in advance\n         * to avoid callback crash due to corrupt listpack. 
*/\n        if (!lpValidateNext(lp, &p, bytes))\n            return 0;\n\n        /* Optionally let the caller validate the entry too. */\n        if (entry_cb && !entry_cb(prev, numele, cb_userdata))\n            return 0;\n\n        count++;\n    }\n\n    /* Make sure 'p' really does point to the end of the listpack. */\n    if (p != lp + size - 1)\n        return 0;\n\n    /* Check that the count in the header is correct */\n    if (numele != LP_HDR_NUMELE_UNKNOWN && numele != count)\n        return 0;\n\n    return 1;\n}\n\n/* Compare the entry pointed to by 'p' with the string 's' of length 'slen'.\n * If 'cached_valid' is not NULL, the integer conversion of 's' is performed at\n * most once and cached in 'cached_longval' across calls.\n * Return 1 if equal. */\nunsigned int lpCompare(unsigned char *p, unsigned char *s, uint32_t slen,\n                       long long *cached_longval, int *cached_valid) {\n    unsigned char *value;\n    int64_t sz;\n    if (p[0] == LP_EOF) return 0;\n\n    value = lpGet(p, &sz, NULL);\n    if (value) {\n        return (slen == sz) && memcmp(value,s,slen) == 0;\n    } else {\n        int64_t sval;\n        /* We use lpStringToInt64() to get an integer representation of the\n         * string 's' and compare it to the entry's integer value; this is much\n         * faster than converting the integer to a string and comparing. 
*/\n        if (cached_valid != NULL) {\n            /* Use caching */\n            if (*cached_valid == 0) {\n                if (lpStringToInt64((const char*)s, slen, (int64_t*)cached_longval)) {\n                    *cached_valid = 1;\n                } else {\n                    *cached_valid = -1;\n                }\n            }\n            return (*cached_valid == 1 && sz == *cached_longval);\n        } else {\n            /* No caching - direct conversion */\n            if (lpStringToInt64((const char*)s, slen, &sval))\n                return sz == sval;\n        }\n    }\n\n    return 0;\n}\n\n/* uint compare for qsort */\nstatic int uintCompare(const void *a, const void *b) {\n    return (*(unsigned int *) a - *(unsigned int *) b);\n}\n\n/* Helper method to store a string from val or lval into dest */\nstatic inline void lpSaveValue(unsigned char *val, unsigned int len, int64_t lval, listpackEntry *dest) {\n    dest->sval = val;\n    dest->slen = len;\n    dest->lval = lval;\n}\n\n/* Randomly select a key-value pair.\n * 'total_count' is the pre-computed tuple count of the listpack\n * (length/tuple_len), passed in to avoid calls to lpLength().\n * 'key' and 'val' are used to store the resulting key-value pair.\n * 'val' can be NULL if the value is not needed.\n * 'tuple_len' indicates the entry count of a single logical item. It should be\n * 2 if the listpack holds key-value pairs, or more for\n * key-value-...(n_entries) tuples. 
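\n *\n * Usage sketch (assuming 'lp' holds field-value pairs):\n *\n *     listpackEntry k, v;\n *     lpRandomPair(lp, lpLength(lp)/2, &k, &v, 2);\n *     // k.sval/k.slen hold a string key, or k.sval == NULL\n *     // and k.lval holds an integer key; same for v.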
*/\nvoid lpRandomPair(unsigned char *lp, unsigned long total_count,\n                  listpackEntry *key, listpackEntry *val, int tuple_len)\n{\n    unsigned char *p;\n\n    assert(tuple_len >= 2);\n\n    /* Avoid div by zero on corrupt listpack */\n    assert(total_count);\n\n    int r = (rand() % total_count) * tuple_len;\n    assert((p = lpSeek(lp, r)));\n    key->sval = lpGetValue(p, &(key->slen), &(key->lval));\n\n    if (!val)\n        return;\n    assert((p = lpNext(lp, p)));\n    val->sval = lpGetValue(p, &(val->slen), &(val->lval));\n}\n\n/* Randomly select 'count' entries and store them in the 'entries' array, which\n * needs to have space for 'count' listpackEntry structs. The order is random\n * and duplicates are possible. */\nvoid lpRandomEntries(unsigned char *lp, unsigned int count, listpackEntry *entries) {\n    struct pick {\n        unsigned int index;\n        unsigned int order;\n    } *picks = lp_malloc(count * sizeof(struct pick));\n    unsigned int total_size = lpLength(lp);\n    assert(total_size);\n    for (unsigned int i = 0; i < count; i++) {\n        picks[i].index = rand() % total_size;\n        picks[i].order = i;\n    }\n\n    /* Sort by index. */\n    qsort(picks, count, sizeof(struct pick), uintCompare);\n\n    /* Iterate over listpack in index order and store the values in the entries\n     * array respecting the original order. */\n    unsigned char *p = lpFirst(lp);\n    unsigned int j = 0; /* index in listpack */\n    for (unsigned int i = 0; i < count; i++) {\n        /* Advance the listpack pointer until we reach entry 'index'. 
*/\n        while (j < picks[i].index) {\n            p = lpNext(lp, p);\n            j++;\n        }\n        int storeorder = picks[i].order;\n        unsigned int len = 0;\n        long long llval = 0;\n        unsigned char *str = lpGetValue(p, &len, &llval);\n        lpSaveValue(str, len, llval, &entries[storeorder]);\n    }\n    lp_free(picks);\n}\n\n/* Randomly select 'count' key-value pairs and store them into the 'keys' and\n * 'vals' args. The order of the picked entries is random, and the selections\n * are non-unique (repetitions are possible).\n * The 'vals' arg can be NULL, in which case the values are skipped.\n * 'tuple_len' indicates the entry count of a single logical item. It should be\n * 2 if the listpack holds key-value pairs, or more for\n * key-value-...(n_entries) tuples. */\nvoid lpRandomPairs(unsigned char *lp, unsigned int count, listpackEntry *keys, listpackEntry *vals, int tuple_len) {\n    unsigned char *p, *key, *value;\n    unsigned int klen = 0, vlen = 0;\n    long long klval = 0, vlval = 0;\n\n    assert(tuple_len >= 2);\n\n    /* Notice: the index member must be first due to the use in uintCompare */\n    typedef struct {\n        unsigned int index;\n        unsigned int order;\n    } rand_pick;\n    rand_pick *picks = lp_malloc(sizeof(rand_pick)*count);\n    unsigned int total_size = lpLength(lp)/tuple_len;\n\n    /* Avoid div by zero on corrupt listpack */\n    assert(total_size);\n\n    /* create a pool of random indexes (some may be duplicates). */\n    for (unsigned int i = 0; i < count; i++) {\n        /* Generate indexes at which keys exist */\n        picks[i].index = (rand() % total_size) * tuple_len;\n        /* keep track of the order we picked them */\n        picks[i].order = i;\n    }\n\n    /* sort by index. */\n    qsort(picks, count, sizeof(rand_pick), uintCompare);\n\n    /* fetch the elements from the listpack into an output array respecting the original order. 
*/\n    unsigned int lpindex = picks[0].index, pickindex = 0;\n    p = lpSeek(lp, lpindex);\n    while (p && pickindex < count) {\n        key = lpGetValue(p, &klen, &klval);\n        assert((p = lpNext(lp, p)));\n        value = lpGetValue(p, &vlen, &vlval);\n        while (pickindex < count && lpindex == picks[pickindex].index) {\n            int storeorder = picks[pickindex].order;\n            lpSaveValue(key, klen, klval, &keys[storeorder]);\n            if (vals)\n                lpSaveValue(value, vlen, vlval, &vals[storeorder]);\n            pickindex++;\n        }\n        lpindex += tuple_len;\n\n        for (int i = 0; i < tuple_len - 1; i++) {\n            p = lpNext(lp, p);\n        }\n    }\n\n    lp_free(picks);\n}\n\n/* Randomly select 'count' key-value pairs and store them into the 'keys' and\n * 'vals' args. The selections are unique (no repetitions), and the order of\n * the picked entries is NOT random.\n * The 'vals' arg can be NULL, in which case the values are skipped.\n * 'tuple_len' indicates the entry count of a single logical item. It should be\n * 2 if the listpack holds key-value pairs, or more for\n * key-value-...(n_entries) tuples.\n * The return value is the number of items picked, which can be lower than the\n * requested count if the listpack doesn't hold enough pairs. 
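\n *\n * Usage sketch (assuming 'lp' holds key-value pairs):\n *\n *     listpackEntry keys[8], vals[8];\n *     unsigned int picked = lpRandomPairsUnique(lp, 8, keys, vals, 2);\n *     // picked <= 8, and also capped at lpLength(lp)/2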
*/\nunsigned int lpRandomPairsUnique(unsigned char *lp, unsigned int count,\n                                 listpackEntry *keys, listpackEntry *vals,\n                                 int tuple_len)\n{\n    assert(tuple_len >= 2);\n\n    unsigned char *p, *key;\n    unsigned int klen = 0;\n    long long klval = 0;\n    unsigned int total_size = lpLength(lp)/tuple_len;\n    unsigned int index = 0;\n    if (count > total_size)\n        count = total_size;\n\n    p = lpFirst(lp);\n    unsigned int picked = 0, remaining = count;\n    while (picked < count && p) {\n        assert((p = lpNextRandom(lp, p, &index, remaining, tuple_len)));\n        key = lpGetValue(p, &klen, &klval);\n        lpSaveValue(key, klen, klval, &keys[picked]);\n        assert((p = lpNext(lp, p)));\n        index++;\n        if (vals) {\n            key = lpGetValue(p, &klen, &klval);\n            lpSaveValue(key, klen, klval, &vals[picked]);\n        }\n        p = lpNext(lp, p);\n        remaining--;\n        picked++;\n        index++;\n    }\n    return picked;\n}\n\n/* Iterates forward to the \"next random\" element, given that we still need to\n * pick 'remaining' unique elements between the starting element 'p' (inclusive)\n * and the end of the list. The 'index' needs to be initialized according to the\n * current zero-based index matching the position of the starting element 'p'\n * and is updated to match the returned element's zero-based index.\n * 'tuple_len' indicates the entry count of a single logical item; e.g. if the\n * listpack represents key-value pairs, tuple_len should be two and only even\n * indexes will be picked.\n *\n * Note that this function can return p. In order to skip the previously\n * returned element, you need to call lpNext() or lpDelete() after each call to\n * lpNextRandom(). 
Idea:\n *\n *     assert(remaining <= lpLength(lp));\n *     p = lpFirst(lp);\n *     i = 0;\n *     while (remaining > 0) {\n *         p = lpNextRandom(lp, p, &i, remaining--, 1);\n *\n *         // ... Do stuff with p ...\n *\n *         p = lpNext(lp, p);\n *         i++;\n *     }\n */\nunsigned char *lpNextRandom(unsigned char *lp, unsigned char *p, unsigned int *index,\n                            unsigned int remaining, int tuple_len)\n{\n    assert(tuple_len > 0);\n    /* To iterate only once, every time we consider a member, the probability\n     * of picking it is the number of elements we still need to pick divided\n     * by the number not yet visited. This way every member is equally likely\n     * to be picked. */\n    unsigned int i = *index;\n    unsigned int total_size = lpLength(lp);\n    while (i < total_size && p != NULL) {\n        if (i % tuple_len != 0) {\n            p = lpNext(lp, p);\n            i++;\n            continue;\n        }\n\n        /* Do we pick this element? 
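\n         * Pick it with probability remaining/available, i.e. the fraction of\n         * tuples we still need among those not yet visited. Scanning this way\n         * selects exactly 'remaining' tuples, each equally likely (selection\n         * sampling, as in Knuth's Algorithm S).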
*/\n        unsigned int available = (total_size - i) / tuple_len;\n        double randomDouble = ((double)rand()) / RAND_MAX;\n        double threshold = ((double)remaining) / available;\n        if (randomDouble <= threshold) {\n            *index = i;\n            return p;\n        }\n\n        p = lpNext(lp, p);\n        i++;\n    }\n\n    return NULL;\n}\n\n/* Print info of listpack which is used in debugCommand */\nvoid lpRepr(unsigned char *lp) {\n    unsigned char *p, *vstr;\n    int64_t vlen;\n    unsigned char intbuf[LP_INTBUF_SIZE];\n    int index = 0;\n\n    printf(\"{total bytes %zu} {num entries %lu}\\n\", lpBytes(lp), lpLength(lp));\n        \n    p = lpFirst(lp);\n    while(p) {\n        uint32_t encoded_size_bytes = lpCurrentEncodedSizeBytes(p[0]);\n        uint32_t encoded_size = lpCurrentEncodedSizeUnsafe(p);\n        unsigned long back_len = lpEncodeBacklenBytes(encoded_size);\n        printf(\n            \"{\\n\"\n                \"\\taddr: 0x%08lx,\\n\"\n                \"\\tindex: %2d,\\n\"\n                \"\\toffset: %1lu,\\n\"\n                \"\\thdr+entrylen+backlen: %2lu,\\n\"\n                \"\\thdrlen: %3u,\\n\"\n                \"\\tbacklen: %2lu,\\n\"\n                \"\\tpayload: %1u\\n\",\n            (long unsigned)p,\n            index,\n            (unsigned long) (p-lp),\n            encoded_size + back_len,\n            encoded_size_bytes,\n            back_len,\n            encoded_size - encoded_size_bytes);\n        printf(\"\\tbytes: \");\n        for (unsigned int i = 0; i < (encoded_size + back_len); i++) {\n            printf(\"%02x|\",p[i]);\n        }\n        printf(\"\\n\");\n\n        vstr = lpGet(p, &vlen, intbuf);\n        printf(\"\\t[str]\");\n        if (vlen > 40) {\n            if (fwrite(vstr, 40, 1, stdout) == 0) perror(\"fwrite\");\n            printf(\"...\");\n        } else {\n            if (fwrite(vstr, vlen, 1, stdout) == 0) perror(\"fwrite\");\n        }\n        printf(\"\\n}\\n\");\n      
  index++;\n        p = lpNext(lp, p);\n    }\n    printf(\"{end}\\n\\n\");\n}\n\n#ifdef REDIS_TEST\n\n#include <sys/time.h>\n#include \"adlist.h\"\n#include \"sds.h\"\n#include \"testhelp.h\"\n\n#define UNUSED(x) (void)(x)\n#define TEST(name) printf(\"test — %s\\n\", name);\n\nchar *mixlist[] = {\"hello\", \"foo\", \"quux\", \"1024\"};\nchar *intlist[] = {\"4294967296\", \"-100\", \"100\", \"128000\", \n                   \"non integer\", \"much much longer non integer\"};\n\nstatic unsigned char *createList(void) {\n    unsigned char *lp = lpNew(0);\n    lp = lpAppend(lp, (unsigned char*)mixlist[1], strlen(mixlist[1]));\n    lp = lpAppend(lp, (unsigned char*)mixlist[2], strlen(mixlist[2]));\n    lp = lpPrepend(lp, (unsigned char*)mixlist[0], strlen(mixlist[0]));\n    lp = lpAppend(lp, (unsigned char*)mixlist[3], strlen(mixlist[3]));\n    return lp;\n}\n\nstatic unsigned char *createIntList(void) {\n    unsigned char *lp = lpNew(0);\n    lp = lpAppend(lp, (unsigned char*)intlist[2], strlen(intlist[2]));\n    lp = lpAppend(lp, (unsigned char*)intlist[3], strlen(intlist[3]));\n    lp = lpPrepend(lp, (unsigned char*)intlist[1], strlen(intlist[1]));\n    lp = lpPrepend(lp, (unsigned char*)intlist[0], strlen(intlist[0]));\n    lp = lpAppend(lp, (unsigned char*)intlist[4], strlen(intlist[4]));\n    lp = lpAppend(lp, (unsigned char*)intlist[5], strlen(intlist[5]));\n    return lp;\n}\n\nstatic long long usec(void) {\n    struct timeval tv;\n    gettimeofday(&tv, NULL);\n    return (((long long)tv.tv_sec)*1000000)+tv.tv_usec;\n}\n\nstatic void stress(int pos, int num, int maxsize, int dnum) {\n    int i, j, k;\n    unsigned char *lp;\n    char posstr[2][5] = { \"HEAD\", \"TAIL\" };\n    long long start;\n    for (i = 0; i < maxsize; i+=dnum) {\n        lp = lpNew(0);\n        for (j = 0; j < i; j++) {\n            lp = lpAppend(lp, (unsigned char*)\"quux\", 4);\n        }\n\n        /* Do num times a push+pop from pos */\n        start = usec();\n        for (k = 0; k < 
num; k++) {\n            if (pos == 0) {\n                lp = lpPrepend(lp, (unsigned char*)\"quux\", 4);\n            } else {\n                lp = lpAppend(lp, (unsigned char*)\"quux\", 4);\n\n            }\n            lp = lpDelete(lp, lpFirst(lp), NULL);\n        }\n        printf(\"List size: %8d, bytes: %8zu, %dx push+pop (%s): %6lld usec\\n\",\n               i, lpBytes(lp), num, posstr[pos], usec()-start);\n        lpFree(lp);\n    }\n}\n\nstatic unsigned char *pop(unsigned char *lp, int where) {\n    unsigned char *p, *vstr;\n    int64_t vlen;\n\n    p = lpSeek(lp, where == 0 ? 0 : -1);\n    vstr = lpGet(p, &vlen, NULL);\n    if (where == 0)\n        printf(\"Pop head: \");\n    else\n        printf(\"Pop tail: \");\n\n    if (vstr) {\n        if (vlen && fwrite(vstr, vlen, 1, stdout) == 0) perror(\"fwrite\");\n    } else {\n        printf(\"%lld\", (long long)vlen);\n    }\n\n    printf(\"\\n\");\n    return lpDelete(lp, p, &p);\n}\n\nstatic int randstring(char *target, unsigned int min, unsigned int max) {\n    int p = 0;\n    int len = min+rand()%(max-min+1);\n    int minval, maxval;\n    switch(rand() % 3) {\n    case 0:\n        minval = 0;\n        maxval = 255;\n    break;\n    case 1:\n        minval = 48;\n        maxval = 122;\n    break;\n    case 2:\n        minval = 48;\n        maxval = 52;\n    break;\n    default:\n        assert(NULL);\n    }\n\n    while(p < len)\n        target[p++] = minval+rand()%(maxval-minval+1);\n    return len;\n}\n\nstatic void verifyEntry(unsigned char *p, unsigned char *s, size_t slen) {\n    assert(lpCompare(p, s, slen, NULL, NULL));\n}\n\nstatic int lpValidation(unsigned char *p, unsigned int head_count, void *userdata) {\n    UNUSED(p);\n    UNUSED(head_count);\n\n    int ret;\n    long *count = userdata;\n    ret = lpCompare(p, (unsigned char *)mixlist[*count], strlen(mixlist[*count]), NULL, NULL);\n    (*count)++;\n    return ret;\n}\n\nstatic int lpFindCbCmp(const unsigned char *lp, unsigned char *p, 
void *user, unsigned char *s, long long slen) {\n    assert(lp);\n    assert(p);\n\n    char *n = user;\n\n    if (!s) {\n        int64_t sval;\n        if (lpStringToInt64((const char*)n, strlen(n), &sval))\n            return slen == sval ? 0 : 1;\n    } else {\n        if (strlen(n) == (size_t) slen && memcmp(n, s, slen) == 0)\n            return 0;\n    }\n\n    return 1;\n}\n\nint listpackTest(int argc, char *argv[], int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n\n    int i;\n    unsigned char *lp, *p, *vstr;\n    int64_t vlen;\n    unsigned char intbuf[LP_INTBUF_SIZE];\n    int accurate = (flags & REDIS_TEST_ACCURATE);\n\n    TEST(\"Create int list\") {\n        lp = createIntList();\n        assert(lpLength(lp) == 6);\n        lpFree(lp);\n    }\n\n    TEST(\"Create list\") {\n        lp = createList();\n        assert(lpLength(lp) == 4);\n        lpFree(lp);\n    }\n\n    TEST(\"Test lpPrepend\") {\n        lp = lpNew(0);\n        lp = lpPrepend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpPrepend(lp, (unsigned char*)\"1024\", 4);\n        verifyEntry(lpSeek(lp, 0), (unsigned char*)\"1024\", 4);\n        verifyEntry(lpSeek(lp, 1), (unsigned char*)\"abc\", 3);\n        lpFree(lp);\n    }\n\n    TEST(\"Test lpPrependInteger\") {\n        lp = lpNew(0);\n        lp = lpPrependInteger(lp, 127);\n        lp = lpPrependInteger(lp, 4095);\n        lp = lpPrependInteger(lp, 32767);\n        lp = lpPrependInteger(lp, 8388607);\n        lp = lpPrependInteger(lp, 2147483647);\n        lp = lpPrependInteger(lp, 9223372036854775807);\n        verifyEntry(lpSeek(lp, 0), (unsigned char*)\"9223372036854775807\", 19);\n        verifyEntry(lpSeek(lp, -1), (unsigned char*)\"127\", 3);\n        lpFree(lp);\n    }\n\n    TEST(\"Get element at index\") {\n        lp = createList();\n        verifyEntry(lpSeek(lp, 0), (unsigned char*)\"hello\", 5);\n        verifyEntry(lpSeek(lp, 3), (unsigned char*)\"1024\", 4);\n        verifyEntry(lpSeek(lp, -1), (unsigned 
char*)\"1024\", 4);\n        verifyEntry(lpSeek(lp, -4), (unsigned char*)\"hello\", 5);\n        assert(lpSeek(lp, 4) == NULL);\n        assert(lpSeek(lp, -5) == NULL);\n        lpFree(lp);\n    }\n    \n    TEST(\"Pop list\") {\n        lp = createList();\n        lp = pop(lp, 1);\n        lp = pop(lp, 0);\n        lp = pop(lp, 1);\n        lp = pop(lp, 1);\n        lpFree(lp);\n    }\n\n    TEST(\"Iterate list from 0 to end\") {\n        lp = createList();\n        p = lpFirst(lp);\n        i = 0;\n        while (p) {\n            verifyEntry(p, (unsigned char*)mixlist[i], strlen(mixlist[i]));\n            p = lpNext(lp, p);\n            i++;\n        }\n        lpFree(lp);\n    }\n    \n    TEST(\"Iterate list from 1 to end\") {\n        lp = createList();\n        i = 1;\n        p = lpSeek(lp, i);\n        while (p) {\n            verifyEntry(p, (unsigned char*)mixlist[i], strlen(mixlist[i]));\n            p = lpNext(lp, p);\n            i++;\n        }\n        lpFree(lp);\n    }\n    \n    TEST(\"Iterate list from 2 to end\") {\n        lp = createList();\n        i = 2;\n        p = lpSeek(lp, i);\n        while (p) {\n            verifyEntry(p, (unsigned char*)mixlist[i], strlen(mixlist[i]));\n            p = lpNext(lp, p);\n            i++;\n        }\n        lpFree(lp);\n    }\n    \n    TEST(\"Iterate from back to front\") {\n        lp = createList();\n        p = lpLast(lp);\n        i = 3;\n        while (p) {\n            verifyEntry(p, (unsigned char*)mixlist[i], strlen(mixlist[i]));\n            p = lpPrev(lp, 
p);\n            i--;\n        }\n        lpFree(lp);\n    }\n    \n    TEST(\"Iterate from back to front, deleting all items\") {\n        lp = createList();\n        p = lpLast(lp);\n        i = 3;\n        while ((p = lpLast(lp))) {\n            verifyEntry(p, (unsigned char*)mixlist[i], strlen(mixlist[i]));\n            lp = lpDelete(lp, p, &p);\n            assert(p == NULL);\n            i--;\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"Delete whole listpack when num == -1\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 0, -1);\n        assert(lpLength(lp) == 0);\n        assert(lp[LP_HDR_SIZE] == LP_EOF);\n        assert(lpBytes(lp) == (LP_HDR_SIZE + 1));\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpFirst(lp);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, -1);\n        assert(lpLength(lp) == 0);\n        assert(lp[LP_HDR_SIZE] == LP_EOF);\n        assert(lpBytes(lp) == (LP_HDR_SIZE + 1));\n        zfree(lp);\n    }\n\n    TEST(\"Delete whole listpack with negative index\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, -4, 4);\n        assert(lpLength(lp) == 0);\n        assert(lp[LP_HDR_SIZE] == LP_EOF);\n        assert(lpBytes(lp) == (LP_HDR_SIZE + 1));\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpSeek(lp, -4);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, 4);\n        assert(lpLength(lp) == 0);\n        assert(lp[LP_HDR_SIZE] == LP_EOF);\n        assert(lpBytes(lp) == (LP_HDR_SIZE + 1));\n        zfree(lp);\n    }\n\n    TEST(\"Delete inclusive range 0,0\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 0, 1);\n        assert(lpLength(lp) == 3);\n        assert(lpSkip(lpLast(lp))[0] == LP_EOF); /* check set LP_EOF correctly */\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpFirst(lp);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, 1);\n        assert(lpLength(lp) == 3);\n     
   assert(lpSkip(lpLast(lp))[0] == LP_EOF); /* check set LP_EOF correctly */\n        zfree(lp);\n    }\n\n    TEST(\"Delete inclusive range 0,1\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 0, 2);\n        assert(lpLength(lp) == 2);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[2], strlen(mixlist[2]));\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpFirst(lp);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, 2);\n        assert(lpLength(lp) == 2);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[2], strlen(mixlist[2]));\n        zfree(lp);\n    }\n\n    TEST(\"Delete inclusive range 1,2\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 1, 2);\n        assert(lpLength(lp) == 2);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[0], strlen(mixlist[0]));\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpSeek(lp, 1);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, 2);\n        assert(lpLength(lp) == 2);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[0], strlen(mixlist[0]));\n        zfree(lp);\n    }\n    \n    TEST(\"Delete with start index out of range\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 5, 1);\n        assert(lpLength(lp) == 4);\n        zfree(lp);\n    }\n\n    TEST(\"Delete with num overflow\");\n    {\n        lp = createList();\n        lp = lpDeleteRange(lp, 1, 5);\n        assert(lpLength(lp) == 1);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[0], strlen(mixlist[0]));\n        zfree(lp);\n\n        lp = createList();\n        unsigned char *ptr = lpSeek(lp, 1);\n        lp = lpDeleteRangeWithEntry(lp, &ptr, 5);\n        assert(lpLength(lp) == 1);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[0], strlen(mixlist[0]));\n        zfree(lp);\n    }\n\n    TEST(\"Batch append\") {\n        listpackEntry ent[6] = {\n                {.sval = (unsigned 
char*)mixlist[0], .slen = strlen(mixlist[0])},\n                {.sval = (unsigned char*)mixlist[1], .slen = strlen(mixlist[1])},\n                {.sval = (unsigned char*)mixlist[2], .slen = strlen(mixlist[2])},\n                {.lval = 4294967296},\n                {.sval = (unsigned char*)mixlist[3], .slen = strlen(mixlist[3])},\n                {.lval = -100}\n        };\n\n        lp = lpNew(0);\n        lp = lpBatchAppend(lp, ent, 2);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        assert(lpLength(lp) == 2);\n\n        lp = lpBatchAppend(lp, &ent[2], 1);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        verifyEntry(lpSeek(lp, 2), ent[2].sval, ent[2].slen);\n        assert(lpLength(lp) == 3);\n\n        lp = lpDeleteRange(lp, 1, 1);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[2].sval, ent[2].slen);\n        assert(lpLength(lp) == 2);\n\n        lp = lpBatchAppend(lp, &ent[3], 3);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[2].sval, ent[2].slen);\n        verifyEntry(lpSeek(lp, 2), (unsigned char*) \"4294967296\", 10);\n        verifyEntry(lpSeek(lp, 3), ent[4].sval, ent[4].slen);\n        verifyEntry(lpSeek(lp, 4), (unsigned char*) \"-100\", 4);\n        assert(lpLength(lp) == 5);\n\n        lp = lpDeleteRange(lp, 1, 3);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), (unsigned char*) \"-100\", 4);\n        assert(lpLength(lp) == 2);\n\n        lpFree(lp);\n    }\n\n    TEST(\"Batch insert\") {\n        lp = lpNew(0);\n        listpackEntry ent[6] = {\n                {.sval = (unsigned char*)mixlist[0], .slen = strlen(mixlist[0])},\n                {.sval = (unsigned char*)mixlist[1], .slen = strlen(mixlist[1])},\n          
      {.sval = (unsigned char*)mixlist[2], .slen = strlen(mixlist[2])},\n                {.lval = 4294967296},\n                {.sval = (unsigned char*)mixlist[3], .slen = strlen(mixlist[3])},\n                {.lval = -100}\n        };\n\n        lp = lpBatchAppend(lp, ent, 4);\n        assert(lpLength(lp) == 4);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        verifyEntry(lpSeek(lp, 2), ent[2].sval, ent[2].slen);\n        verifyEntry(lpSeek(lp, 3), (unsigned char*)\"4294967296\", 10);\n\n        /* Insert with LP_BEFORE */\n        p = lpSeek(lp, 3);\n        lp = lpBatchInsert(lp, p, LP_BEFORE, &ent[4], 2, &p);\n        verifyEntry(p, (unsigned char*)\"-100\", 4);\n        assert(lpLength(lp) == 6);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        verifyEntry(lpSeek(lp, 2), ent[2].sval, ent[2].slen);\n        verifyEntry(lpSeek(lp, 3), ent[4].sval, ent[4].slen);\n        verifyEntry(lpSeek(lp, 4), (unsigned char*)\"-100\", 4);\n        verifyEntry(lpSeek(lp, 5), (unsigned char*)\"4294967296\", 10);\n\n        lp = lpDeleteRange(lp, 1, 2);\n        assert(lpLength(lp) == 4);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[4].sval, ent[4].slen);\n        verifyEntry(lpSeek(lp, 2), (unsigned char*)\"-100\", 4);\n        verifyEntry(lpSeek(lp, 3), (unsigned char*)\"4294967296\", 10);\n\n        /* Insert with LP_AFTER */\n        p = lpSeek(lp, 0);\n        lp = lpBatchInsert(lp, p, LP_AFTER, &ent[1], 2, &p);\n        verifyEntry(p, ent[2].sval, ent[2].slen);\n        assert(lpLength(lp) == 6);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        verifyEntry(lpSeek(lp, 2), ent[2].sval, ent[2].slen);\n        verifyEntry(lpSeek(lp, 3), ent[4].sval, 
ent[4].slen);\n        verifyEntry(lpSeek(lp, 4), (unsigned char*)\"-100\", 4);\n        verifyEntry(lpSeek(lp, 5), (unsigned char*)\"4294967296\", 10);\n\n        lp = lpDeleteRange(lp, 2, 4);\n        assert(lpLength(lp) == 2);\n        p = lpSeek(lp, 1);\n        lp = lpBatchInsert(lp, p, LP_AFTER, &ent[2], 1, &p);\n        verifyEntry(p, ent[2].sval, ent[2].slen);\n        assert(lpLength(lp) == 3);\n        verifyEntry(lpSeek(lp, 0), ent[0].sval, ent[0].slen);\n        verifyEntry(lpSeek(lp, 1), ent[1].sval, ent[1].slen);\n        verifyEntry(lpSeek(lp, 2), ent[2].sval, ent[2].slen);\n\n        lpFree(lp);\n    }\n\n    TEST(\"Batch delete\") {\n        unsigned char *lp = createList(); /* char *mixlist[] = {\"hello\", \"foo\", \"quux\", \"1024\"} */\n        assert(lpLength(lp) == 4); /* Pre-condition */\n        unsigned char *p0 = lpFirst(lp),\n            *p1 = lpNext(lp, p0),\n            *p2 = lpNext(lp, p1),\n            *p3 = lpNext(lp, p2);\n        unsigned char *ps[] = {p0, p1, p3};\n        lp = lpBatchDelete(lp, ps, 3);\n        assert(lpLength(lp) == 1);\n        verifyEntry(lpFirst(lp), (unsigned char*)mixlist[2], strlen(mixlist[2]));\n        assert(lpValidateIntegrity(lp, lpBytes(lp), 1, NULL, NULL) == 1);\n        lpFree(lp);\n    }\n\n    TEST(\"Delete foo while iterating\") {\n        lp = createList();\n        p = lpFirst(lp);\n        while (p) {\n            if (lpCompare(p, (unsigned char*)\"foo\", 3, NULL, NULL)) {\n                lp = lpDelete(lp, p, &p);\n            } else {\n                p = lpNext(lp, p);\n            }\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"Replace with same size\") {\n        lp = createList(); /* \"hello\", \"foo\", \"quux\", \"1024\" */\n        unsigned char *orig_lp = lp;\n        p = lpSeek(lp, 0);\n        lp = lpReplace(lp, &p, (unsigned char*)\"zoink\", 5);\n        p = lpSeek(lp, 3);\n        lp = lpReplace(lp, &p, (unsigned char*)\"y\", 1);\n        p = lpSeek(lp, 1);\n        lp = 
lpReplace(lp, &p, (unsigned char*)\"65536\", 5);\n        p = lpSeek(lp, 0);\n        assert(!memcmp((char*)p,\n                       \"\\x85zoink\\x06\"\n                       \"\\xf2\\x00\\x00\\x01\\x04\" /* 65536 as int24 */\n                       \"\\x84quux\\05\" \"\\x81y\\x02\" \"\\xff\",\n                       22));\n        assert(lp == orig_lp); /* no reallocations have happened */\n        lpFree(lp);\n    }\n\n    TEST(\"Replace with different size\") {\n        lp = createList(); /* \"hello\", \"foo\", \"quux\", \"1024\" */\n        p = lpSeek(lp, 1);\n        lp = lpReplace(lp, &p, (unsigned char*)\"squirrel\", 8);\n        p = lpSeek(lp, 0);\n        assert(!strncmp((char*)p,\n                        \"\\x85hello\\x06\" \"\\x88squirrel\\x09\" \"\\x84quux\\x05\"\n                        \"\\xc4\\x00\\x02\" \"\\xff\",\n                        27));\n        lpFree(lp);\n    }\n\n    TEST(\"Regression test for >255 byte strings\") {\n        char v1[257] = {0}, v2[257] = {0};\n        memset(v1,'x',256);\n        memset(v2,'y',256);\n        lp = lpNew(0);\n        lp = lpAppend(lp, (unsigned char*)v1 ,strlen(v1));\n        lp = lpAppend(lp, (unsigned char*)v2 ,strlen(v2));\n\n        /* Pop values again and compare their value. 
*/\n        p = lpFirst(lp);\n        vstr = lpGet(p, &vlen, NULL);\n        assert(strncmp(v1, (char*)vstr, vlen) == 0);\n        p = lpSeek(lp, 1);\n        vstr = lpGet(p, &vlen, NULL);\n        assert(strncmp(v2, (char*)vstr, vlen) == 0);\n        lpFree(lp);\n    }\n\n    TEST(\"Create long list and check indices\") {\n        lp = lpNew(0);\n        char buf[32];\n        int i,len;\n        for (i = 0; i < 1000; i++) {\n            len = snprintf(buf, sizeof(buf), \"%d\", i);\n            lp = lpAppend(lp, (unsigned char*)buf, len);\n        }\n        for (i = 0; i < 1000; i++) {\n            p = lpSeek(lp, i);\n            vstr = lpGet(p, &vlen, NULL);\n            assert(i == vlen);\n\n            p = lpSeek(lp, -i-1);\n            vstr = lpGet(p, &vlen, NULL);\n            assert(999-i == vlen);\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"Compare strings with listpack entries\") {\n        lp = createList();\n        p = lpSeek(lp,0);\n        assert(lpCompare(p,(unsigned char*)\"hello\",5,NULL,NULL));\n        assert(!lpCompare(p,(unsigned char*)\"hella\",5,NULL,NULL));\n\n        p = lpSeek(lp,3);\n        assert(lpCompare(p,(unsigned char*)\"1024\",4,NULL,NULL));\n        assert(!lpCompare(p,(unsigned char*)\"1025\",4,NULL,NULL));\n        lpFree(lp);\n    }\n\n    TEST(\"lpMerge two empty listpacks\") {\n        unsigned char *lp1 = lpNew(0);\n        unsigned char *lp2 = lpNew(0);\n\n        /* Merge two empty listpacks, get empty result back. 
*/\n        lp1 = lpMerge(&lp1, &lp2);\n        assert(lpLength(lp1) == 0);\n        zfree(lp1);\n    }\n\n    TEST(\"lpMerge two listpacks - first larger than second\") {\n        unsigned char *lp1 = createIntList();\n        unsigned char *lp2 = createList();\n\n        size_t lp1_bytes = lpBytes(lp1);\n        size_t lp2_bytes = lpBytes(lp2);\n        unsigned long lp1_len = lpLength(lp1);\n        unsigned long lp2_len = lpLength(lp2);\n\n        unsigned char *lp3 = lpMerge(&lp1, &lp2);\n        assert(lp3 == lp1);\n        assert(lp2 == NULL);\n        assert(lpLength(lp3) == (lp1_len + lp2_len));\n        assert(lpBytes(lp3) == (lp1_bytes + lp2_bytes - LP_HDR_SIZE - 1));\n        verifyEntry(lpSeek(lp3, 0), (unsigned char*)\"4294967296\", 10);\n        verifyEntry(lpSeek(lp3, 5), (unsigned char*)\"much much longer non integer\", 28);\n        verifyEntry(lpSeek(lp3, 6), (unsigned char*)\"hello\", 5);\n        verifyEntry(lpSeek(lp3, -1), (unsigned char*)\"1024\", 4);\n        zfree(lp3);\n    }\n\n    TEST(\"lpMerge two listpacks - second larger than first\") {\n        unsigned char *lp1 = createList();\n        unsigned char *lp2 = createIntList();\n\n        size_t lp1_bytes = lpBytes(lp1);\n        size_t lp2_bytes = lpBytes(lp2);\n        unsigned long lp1_len = lpLength(lp1);\n        unsigned long lp2_len = lpLength(lp2);\n\n        unsigned char *lp3 = lpMerge(&lp1, &lp2);\n        assert(lp3 == lp2);\n        assert(lp1 == NULL);\n        assert(lpLength(lp3) == (lp1_len + lp2_len));\n        assert(lpBytes(lp3) == (lp1_bytes + lp2_bytes - LP_HDR_SIZE - 1));\n        verifyEntry(lpSeek(lp3, 0), (unsigned char*)\"hello\", 5);\n        verifyEntry(lpSeek(lp3, 3), (unsigned char*)\"1024\", 4);\n        verifyEntry(lpSeek(lp3, 4), (unsigned char*)\"4294967296\", 10);\n        verifyEntry(lpSeek(lp3, -1), (unsigned char*)\"much much longer non integer\", 28);\n        zfree(lp3);\n    }\n\n    TEST(\"lpNextRandom normal usage\") {\n        /* Create 
some data */\n        unsigned char *lp = lpNew(0);\n        unsigned char buf[100] = \"asdf\";\n        unsigned int size = 100;\n        for (size_t i = 0; i < size; i++) {\n            lp = lpAppend(lp, buf, i);\n        }\n        assert(lpLength(lp) == size);\n\n        /* Pick a subset of the elements of every possible subset size */\n        for (unsigned int count = 0; count <= size; count++) {\n            unsigned int remaining = count;\n            unsigned char *p = lpFirst(lp);\n            unsigned char *prev = NULL;\n            unsigned index = 0;\n            while (remaining > 0) {\n                assert(p != NULL);\n                p = lpNextRandom(lp, p, &index, remaining--, 1);\n                assert(p != NULL);\n                assert(p != prev);\n                prev = p;\n                p = lpNext(lp, p);\n                index++;\n            }\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"lpNextRandom corner cases\") {\n        unsigned char *lp = lpNew(0);\n        unsigned i = 0;\n\n        /* Pick from empty listpack returns NULL. */\n        assert(lpNextRandom(lp, NULL, &i, 2, 1) == NULL);\n\n        /* Add some elements and find their pointers within the listpack. */\n        lp = lpAppend(lp, (unsigned char *)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char *)\"def\", 3);\n        lp = lpAppend(lp, (unsigned char *)\"ghi\", 3);\n        assert(lpLength(lp) == 3);\n        unsigned char *p0 = lpFirst(lp);\n        unsigned char *p1 = lpNext(lp, p0);\n        unsigned char *p2 = lpNext(lp, p1);\n        assert(lpNext(lp, p2) == NULL);\n\n        /* Pick zero elements returns NULL. */\n        i = 0; assert(lpNextRandom(lp, lpFirst(lp), &i, 0, 1) == NULL);\n\n        /* Pick all returns all. 
*/\n        i = 0; assert(lpNextRandom(lp, p0, &i, 3, 1) == p0 && i == 0);\n        i = 1; assert(lpNextRandom(lp, p1, &i, 2, 1) == p1 && i == 1);\n        i = 2; assert(lpNextRandom(lp, p2, &i, 1, 1) == p2 && i == 2);\n\n        /* Pick more than one when there's only one left returns the last one. */\n        i = 2; assert(lpNextRandom(lp, p2, &i, 42, 1) == p2 && i == 2);\n\n        /* Pick all even elements returns p0 and p2. */\n        i = 0; assert(lpNextRandom(lp, p0, &i, 10, 2) == p0 && i == 0);\n        i = 1; assert(lpNextRandom(lp, p1, &i, 10, 2) == p2 && i == 2);\n\n        /* Don't crash even for bad index. */\n        for (int j = 0; j < 100; j++) {\n            unsigned char *p;\n            switch (j % 4) {\n            case 0: p = p0; break;\n            case 1: p = p1; break;\n            case 2: p = p2; break;\n            case 3: p = NULL; break;\n            }\n            i = j % 7;\n            unsigned int remaining = j % 5;\n            p = lpNextRandom(lp, p, &i, remaining, 1);\n            assert(p == p0 || p == p1 || p == p2 || p == NULL);\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"Random pair with one element\") {\n        listpackEntry key, val;\n        unsigned char *lp = lpNew(0);\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lpRandomPair(lp, 1, &key, &val, 2);\n        assert(memcmp(key.sval, \"abc\", key.slen) == 0);\n        assert(val.lval == 123);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pair with many elements\") {\n        listpackEntry key, val;\n        unsigned char *lp = lpNew(0);\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        lpRandomPair(lp, 2, &key, &val, 2);\n        if (key.sval) {\n            assert(!memcmp(key.sval, \"abc\", key.slen));\n   
         assert(key.slen == 3);\n            assert(val.lval == 123);\n        }\n        if (!key.sval) {\n            assert(key.lval == 456);\n            assert(!memcmp(val.sval, \"def\", val.slen));\n        }\n        lpFree(lp);\n    }\n\n    TEST(\"Random pair with tuple_len 3\") {\n        listpackEntry key, val;\n        unsigned char *lp = lpNew(0);\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"281474976710655\", 15);\n        lp = lpAppend(lp, (unsigned char*)\"789\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n\n        for (int i = 0; i < 5; i++) {\n            lpRandomPair(lp, 3, &key, &val, 3);\n            if (key.sval) {\n                if (!memcmp(key.sval, \"abc\", key.slen)) {\n                    assert(key.slen == 3);\n                    assert(val.lval == 123);\n                } else {\n                    assert(0);\n                };\n            }\n            if (!key.sval) {\n                if (key.lval == 456)\n                    assert(!memcmp(val.sval, \"def\", val.slen));\n                else if (key.lval == 281474976710655LL)\n                    assert(val.lval == 789);\n                else\n                    assert(0);\n            }\n        }\n\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs with one element\") {\n        int count = 5;\n        unsigned char *lp = lpNew(0);\n        listpackEntry *keys = zmalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zmalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lpRandomPairs(lp, count, keys, 
vals, 2);\n        assert(memcmp(keys[4].sval, \"abc\", keys[4].slen) == 0);\n        assert(vals[4].lval == 123);\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs with many elements\") {\n        int count = 5;\n        lp = lpNew(0);\n        listpackEntry *keys = zmalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zmalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        lpRandomPairs(lp, count, keys, vals, 2);\n        for (int i = 0; i < count; i++) {\n            if (keys[i].sval) {\n                assert(!memcmp(keys[i].sval, \"abc\", keys[i].slen));\n                assert(keys[i].slen == 3);\n                assert(vals[i].lval == 123);\n            }\n            if (!keys[i].sval) {\n                assert(keys[i].lval == 456);\n                assert(!memcmp(vals[i].sval, \"def\", vals[i].slen));\n            }\n        }\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs with many elements and tuple_len 3\") {\n        int count = 5;\n        lp = lpNew(0);\n        listpackEntry *keys = zcalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zcalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"281474976710655\", 15);\n        lp = lpAppend(lp, (unsigned char*)\"789\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n\n        
lpRandomPairs(lp, count, keys, vals, 3);\n        for (int i = 0; i < count; i++) {\n            if (keys[i].sval) {\n                if (!memcmp(keys[i].sval, \"abc\", keys[i].slen)) {\n                    assert(keys[i].slen == 3);\n                    assert(vals[i].lval == 123);\n                } else {\n                    assert(0);\n                };\n            }\n            if (!keys[i].sval) {\n                if (keys[i].lval == 456)\n                    assert(!memcmp(vals[i].sval, \"def\", vals[i].slen));\n                else if (keys[i].lval == 281474976710655LL)\n                    assert(vals[i].lval == 789);\n                else\n                    assert(0);\n            }\n        }\n\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs unique with one element\") {\n        unsigned picked;\n        int count = 5;\n        lp = lpNew(0);\n        listpackEntry *keys = zmalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zmalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        picked = lpRandomPairsUnique(lp, count, keys, vals, 2);\n        assert(picked == 1);\n        assert(memcmp(keys[0].sval, \"abc\", keys[0].slen) == 0);\n        assert(vals[0].lval == 123);\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs unique with many elements\") {\n        unsigned picked;\n        int count = 5;\n        lp = lpNew(0);\n        listpackEntry *keys = zmalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zmalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        picked = lpRandomPairsUnique(lp, count, 
keys, vals, 2);\n        assert(picked == 2);\n        for (int i = 0; i < 2; i++) {\n            if (keys[i].sval) {\n                assert(!memcmp(keys[i].sval, \"abc\", keys[i].slen));\n                assert(keys[i].slen == 3);\n                assert(vals[i].lval == 123);\n            }\n            if (!keys[i].sval) {\n                assert(keys[i].lval == 456);\n                assert(!memcmp(vals[i].sval, \"def\", vals[i].slen));\n            }\n        }\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"Random pairs unique with many elements and tuple_len 3\") {\n        unsigned picked;\n        int count = 5;\n        lp = lpNew(0);\n        listpackEntry *keys = zmalloc(sizeof(listpackEntry) * count);\n        listpackEntry *vals = zmalloc(sizeof(listpackEntry) * count);\n\n        lp = lpAppend(lp, (unsigned char*)\"abc\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"123\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"456\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"def\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"281474976710655\", 15);\n        lp = lpAppend(lp, (unsigned char*)\"789\", 3);\n        lp = lpAppend(lp, (unsigned char*)\"xxx\", 3);\n        picked = lpRandomPairsUnique(lp, count, keys, vals, 3);\n        assert(picked == 3);\n        for (int i = 0; i < 3; i++) {\n            if (keys[i].sval) {\n                if (!memcmp(keys[i].sval, \"abc\", keys[i].slen)) {\n                    assert(keys[i].slen == 3);\n                    assert(vals[i].lval == 123);\n                } else {\n                    assert(0);\n                };\n            }\n            if (!keys[i].sval) {\n                if (keys[i].lval == 456)\n                    assert(!memcmp(vals[i].sval, \"def\", vals[i].slen));\n                else if (keys[i].lval == 281474976710655LL)\n        
            assert(vals[i].lval == 789);\n                else\n                    assert(0);\n            }\n        }\n        zfree(keys);\n        zfree(vals);\n        lpFree(lp);\n    }\n\n    TEST(\"push various encodings\") {\n        lp = lpNew(0);\n\n        /* Push integer encode element using lpAppend */\n        lp = lpAppend(lp, (unsigned char*)\"127\", 3);\n        assert(LP_ENCODING_IS_7BIT_UINT(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)\"4095\", 4);\n        assert(LP_ENCODING_IS_13BIT_INT(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)\"32767\", 5);\n        assert(LP_ENCODING_IS_16BIT_INT(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)\"8388607\", 7);\n        assert(LP_ENCODING_IS_24BIT_INT(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)\"2147483647\", 10);\n        assert(LP_ENCODING_IS_32BIT_INT(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)\"9223372036854775807\", 19);\n        assert(LP_ENCODING_IS_64BIT_INT(lpLast(lp)[0]));\n\n        /* Push integer encode element using lpAppendInteger */\n        lp = lpAppendInteger(lp, 127);\n        assert(LP_ENCODING_IS_7BIT_UINT(lpLast(lp)[0]));\n        verifyEntry(lpLast(lp), (unsigned char*)\"127\", 3);\n        lp = lpAppendInteger(lp, 4095);\n        verifyEntry(lpLast(lp), (unsigned char*)\"4095\", 4);\n        assert(LP_ENCODING_IS_13BIT_INT(lpLast(lp)[0]));\n        lp = lpAppendInteger(lp, 32767);\n        verifyEntry(lpLast(lp), (unsigned char*)\"32767\", 5);\n        assert(LP_ENCODING_IS_16BIT_INT(lpLast(lp)[0]));\n        lp = lpAppendInteger(lp, 8388607);\n        verifyEntry(lpLast(lp), (unsigned char*)\"8388607\", 7);\n        assert(LP_ENCODING_IS_24BIT_INT(lpLast(lp)[0]));\n        lp = lpAppendInteger(lp, 2147483647);\n        verifyEntry(lpLast(lp), (unsigned char*)\"2147483647\", 10);\n        assert(LP_ENCODING_IS_32BIT_INT(lpLast(lp)[0]));\n        lp = lpAppendInteger(lp, 9223372036854775807);\n        
verifyEntry(lpLast(lp), (unsigned char*)\"9223372036854775807\", 19);\n        assert(LP_ENCODING_IS_64BIT_INT(lpLast(lp)[0]));\n\n        /* string encode */\n        unsigned char *str = zmalloc(65535);\n        memset(str, 0, 65535);\n        lp = lpAppend(lp, (unsigned char*)str, 63);\n        assert(LP_ENCODING_IS_6BIT_STR(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)str, 4095);\n        assert(LP_ENCODING_IS_12BIT_STR(lpLast(lp)[0]));\n        lp = lpAppend(lp, (unsigned char*)str, 65535);\n        assert(LP_ENCODING_IS_32BIT_STR(lpLast(lp)[0]));\n        zfree(str);\n        lpFree(lp);\n    }\n\n    TEST(\"Test lpFind\") {\n        lp = createList();\n        assert(lpFind(lp, lpFirst(lp), (unsigned char*)\"abc\", 3, 0) == NULL);\n        verifyEntry(lpFind(lp, lpFirst(lp), (unsigned char*)\"hello\", 5, 0), (unsigned char*)\"hello\", 5);\n        verifyEntry(lpFind(lp, lpFirst(lp), (unsigned char*)\"1024\", 4, 0), (unsigned char*)\"1024\", 4);\n        lpFree(lp);\n    }\n\n    TEST(\"Test lpFindCb\") {\n        lp = createList(); /* \"hello\", \"foo\", \"quux\", \"1024\" */\n        assert(lpFindCb(lp, lpFirst(lp), \"abc\", lpFindCbCmp, 0) == NULL);\n        verifyEntry(lpFindCb(lp, NULL, \"hello\", lpFindCbCmp, 0), (unsigned char*)\"hello\", 5);\n        verifyEntry(lpFindCb(lp, NULL, \"1024\", lpFindCbCmp, 0), (unsigned char*)\"1024\", 4);\n        verifyEntry(lpFindCb(lp, NULL, \"quux\", lpFindCbCmp, 0), (unsigned char*)\"quux\", 4);\n        verifyEntry(lpFindCb(lp, NULL, \"foo\", lpFindCbCmp, 0), (unsigned char*)\"foo\", 3);\n        lpFree(lp);\n\n        lp = lpNew(0);\n        assert(lpFindCb(lp, lpFirst(lp), \"hello\", lpFindCbCmp, 0) == NULL);\n        assert(lpFindCb(lp, lpFirst(lp), \"1024\", lpFindCbCmp, 0) == NULL);\n        lpFree(lp);\n    }\n\n    TEST(\"Test lpValidateIntegrity\") {\n        lp = createList();\n        long count = 0;\n        assert(lpValidateIntegrity(lp, lpBytes(lp), 1, lpValidation, &count) == 1);\n     
   lpFree(lp);\n    }\n\n    TEST(\"Test number of elements exceeds LP_HDR_NUMELE_UNKNOWN\") {\n        lp = lpNew(0);\n        for (int i = 0; i < LP_HDR_NUMELE_UNKNOWN + 1; i++)\n            lp = lpAppend(lp, (unsigned char*)\"1\", 1);\n\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN);\n        assert(lpLength(lp) == LP_HDR_NUMELE_UNKNOWN+1);\n\n        lp = lpDeleteRange(lp, -2, 2);\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN);\n        assert(lpLength(lp) == LP_HDR_NUMELE_UNKNOWN-1);\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN-1); /* update length after lpLength */\n        lpFree(lp);\n    }\n\n    TEST(\"Test number of elements exceeds LP_HDR_NUMELE_UNKNOWN with batch insert\") {\n        listpackEntry ent[2] = {\n                {.sval = (unsigned char*)mixlist[0], .slen = strlen(mixlist[0])},\n                {.sval = (unsigned char*)mixlist[1], .slen = strlen(mixlist[1])}\n        };\n\n        lp = lpNew(0);\n        for (int i = 0; i < (LP_HDR_NUMELE_UNKNOWN/2) + 1; i++)\n            lp = lpBatchAppend(lp, ent, 2);\n\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN);\n        assert(lpLength(lp) == LP_HDR_NUMELE_UNKNOWN+1);\n\n        lp = lpDeleteRange(lp, -2, 2);\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN);\n        assert(lpLength(lp) == LP_HDR_NUMELE_UNKNOWN-1);\n        assert(lpGetNumElements(lp) == LP_HDR_NUMELE_UNKNOWN-1); /* update length after lpLength */\n        lpFree(lp);\n    }\n\n    TEST(\"Stress with random payloads of different encoding\") {\n        unsigned long long start = usec();\n        int i,j,len,where;\n        unsigned char *p;\n        char buf[1024];\n        int buflen;\n        list *ref;\n        listNode *refnode;\n\n        int iteration = accurate ? 
20000 : 20;\n        for (i = 0; i < iteration; i++) {\n            lp = lpNew(0);\n            ref = listCreate();\n            listSetFreeMethod(ref, sdsfreegeneric);\n            len = rand() % 256;\n\n            /* Create lists */\n            for (j = 0; j < len; j++) {\n                where = (rand() & 1) ? 0 : 1;\n                if (rand() % 2) {\n                    buflen = randstring(buf,1,sizeof(buf)-1);\n                } else {\n                    switch(rand() % 3) {\n                    case 0:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()) >> 20);\n                        break;\n                    case 1:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()));\n                        break;\n                    case 2:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()) << 20);\n                        break;\n                    default:\n                        assert(NULL);\n                    }\n                }\n\n                /* Add to listpack */\n                if (where == 0) {\n                    lp = lpPrepend(lp, (unsigned char*)buf, buflen);\n                } else {\n                    lp = lpAppend(lp, (unsigned char*)buf, buflen);\n                }\n\n                /* Add to reference list */\n                if (where == 0) {\n                    listAddNodeHead(ref,sdsnewlen(buf, buflen));\n                } else if (where == 1) {\n                    listAddNodeTail(ref,sdsnewlen(buf, buflen));\n                } else {\n                    assert(NULL);\n                }\n            }\n\n            assert(listLength(ref) == lpLength(lp));\n            for (j = 0; j < len; j++) {\n                /* Naive way to get elements, but similar to the stresser\n                 * executed from the Tcl test suite. 
*/\n                p = lpSeek(lp,j);\n                refnode = listIndex(ref,j);\n\n                vstr = lpGet(p, &vlen, intbuf);\n                assert(memcmp(vstr,listNodeValue(refnode),vlen) == 0);\n            }\n            lpFree(lp);\n            listRelease(ref);\n        }\n        printf(\"Done. usec=%lld\\n\\n\", usec()-start);\n    }\n\n    TEST(\"Stress with variable listpack size\") {\n        unsigned long long start = usec();\n        int maxsize = accurate ? 16384 : 16;\n        stress(0,100000,maxsize,256);\n        stress(1,100000,maxsize,256);\n        printf(\"Done. usec=%lld\\n\\n\", usec()-start);\n    }\n\n    /* Benchmarks */\n    {\n        int iteration = accurate ? 100000 : 100;\n        lp = lpNew(0);\n        TEST(\"Benchmark lpAppend\") {\n            unsigned long long start = usec();\n            for (int i=0; i<iteration; i++) {\n                char buf[4096] = \"asdf\";\n                lp = lpAppend(lp, (unsigned char*)buf, 4);\n                lp = lpAppend(lp, (unsigned char*)buf, 40);\n                lp = lpAppend(lp, (unsigned char*)buf, 400);\n                lp = lpAppend(lp, (unsigned char*)buf, 4000);\n                lp = lpAppend(lp, (unsigned char*)\"1\", 1);\n                lp = lpAppend(lp, (unsigned char*)\"10\", 2);\n                lp = lpAppend(lp, (unsigned char*)\"100\", 3);\n                lp = lpAppend(lp, (unsigned char*)\"1000\", 4);\n                lp = lpAppend(lp, (unsigned char*)\"10000\", 5);\n                lp = lpAppend(lp, (unsigned char*)\"100000\", 6);\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpFind string\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *fptr = lpFirst(lp);\n                fptr = lpFind(lp, fptr, (unsigned char*)\"nothing\", 7, 1);\n            }\n            printf(\"Done. 
usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpFind number\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *fptr = lpFirst(lp);\n                fptr = lpFind(lp, fptr, (unsigned char*)\"99999\", 5, 1);\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpSeek\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                lpSeek(lp, 99999);\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpValidateIntegrity\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                lpValidateIntegrity(lp, lpBytes(lp), 1, NULL, NULL);\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpCompare with string\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *eptr = lpSeek(lp,0);\n                while (eptr != NULL) {\n                    lpCompare(eptr,(unsigned char*)\"nothing\",7,NULL,NULL);\n                    eptr = lpNext(lp,eptr);\n                }\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpCompare with number\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *eptr = lpSeek(lp,0);\n                while (eptr != NULL) {\n                    lpCompare(eptr, (unsigned char*)\"99999\", 5, NULL, NULL);\n                    eptr = lpNext(lp,eptr);\n                }\n            }\n            printf(\"Done. 
usec=%lld\\n\", usec()-start);\n        }\n\n        TEST(\"Benchmark lpCompare with number and caching\") {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *eptr = lpSeek(lp,0);\n                long long cached_val = 0;\n                int cached_valid = 0;\n                while (eptr != NULL) {\n                    lpCompare(eptr, (unsigned char*)\"99999\", 5, &cached_val, &cached_valid);\n                    eptr = lpNext(lp,eptr);\n                }\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        lpFree(lp);\n    }\n\n    return 0;\n}\n\n#endif\n"
  },
  {
    "path": "src/listpack.h",
    "content": "/* Listpack -- A list of strings serialization format\n *\n * This file implements the specification you can find at:\n *\n *  https://github.com/antirez/listpack\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __LISTPACK_H\n#define __LISTPACK_H\n\n#include <stdlib.h>\n#include <stdint.h>\n\n#define LP_INTBUF_SIZE 21 /* 20 digits of -2^63 + 1 null term = 21. */\n\n/* Possible values for the 'where' argument of lpInsert(): */\n#define LP_BEFORE 0\n#define LP_AFTER 1\n#define LP_REPLACE 2\n\n/* Each entry in the listpack is either a string or an integer. */\ntypedef struct {\n    /* When string is used, it is provided with the length (slen). */\n    unsigned char *sval;\n    uint32_t slen;\n    /* When integer is used, 'sval' is NULL, and lval holds the value. 
*/\n    long long lval;\n} listpackEntry;\n\nunsigned char *lpNew(size_t capacity);\nvoid lpFree(unsigned char *lp);\nunsigned char* lpShrinkToFit(unsigned char *lp);\nunsigned char *lpInsertString(unsigned char *lp, unsigned char *s, uint32_t slen,\n                              unsigned char *p, int where, unsigned char **newp);\nunsigned char *lpInsertInteger(unsigned char *lp, long long lval,\n                               unsigned char *p, int where, unsigned char **newp);\nunsigned char *lpPrepend(unsigned char *lp, unsigned char *s, uint32_t slen);\nunsigned char *lpPrependInteger(unsigned char *lp, long long lval);\nunsigned char *lpAppend(unsigned char *lp, unsigned char *s, uint32_t slen);\nunsigned char *lpAppendInteger(unsigned char *lp, long long lval);\nunsigned char *lpReplace(unsigned char *lp, unsigned char **p, unsigned char *s, uint32_t slen);\nunsigned char *lpReplaceInteger(unsigned char *lp, unsigned char **p, long long lval);\nunsigned char *lpDelete(unsigned char *lp, unsigned char *p, unsigned char **newp);\nunsigned char *lpDeleteRangeWithEntry(unsigned char *lp, unsigned char **p, unsigned long num);\nunsigned char *lpDeleteRange(unsigned char *lp, long index, unsigned long num);\nunsigned char *lpBatchAppend(unsigned char *lp, listpackEntry *entries, unsigned long len);\nunsigned char *lpBatchInsert(unsigned char *lp, unsigned char *p, int where,\n                             listpackEntry *entries, unsigned int len, unsigned char **newp);\nunsigned char *lpBatchDelete(unsigned char *lp, unsigned char **ps, unsigned long count);\nunsigned char *lpMerge(unsigned char **first, unsigned char **second);\nunsigned char *lpDup(unsigned char *lp);\nunsigned long lpLength(unsigned char *lp);\nunsigned char *lpGet(unsigned char *p, int64_t *count, unsigned char *intbuf);\nunsigned char *lpGetValue(unsigned char *p, unsigned int *slen, long long *lval);\nint lpGetIntegerValue(unsigned char *p, long long *lval);\nunsigned char *lpFind(unsigned 
char *lp, unsigned char *p, unsigned char *s, uint32_t slen, unsigned int skip);\ntypedef int (*lpCmp)(const unsigned char *lp, unsigned char *p, void *user, unsigned char *s, long long slen);\nunsigned char *lpFindCb(unsigned char *lp, unsigned char *p, void *user, lpCmp cmp, unsigned int skip);\nunsigned char *lpFirst(unsigned char *lp);\nunsigned char *lpLast(unsigned char *lp);\nunsigned char *lpNext(unsigned char *lp, unsigned char *p);\nunsigned char *lpNextWithBytes(unsigned char *lp, unsigned char *p, const size_t lpbytes);\nunsigned char *lpPrev(unsigned char *lp, unsigned char *p);\nsize_t lpBytes(unsigned char *lp);\nsize_t lpEntrySizeInteger(long long lval);\nsize_t lpEstimateBytesRepeatedInteger(long long lval, unsigned long rep);\nunsigned char *lpSeek(unsigned char *lp, long index);\ntypedef int (*listpackValidateEntryCB)(unsigned char *p, unsigned int head_count, void *userdata);\nint lpValidateIntegrity(unsigned char *lp, size_t size, int deep,\n                        listpackValidateEntryCB entry_cb, void *cb_userdata);\nunsigned char *lpValidateFirst(unsigned char *lp);\nint lpValidateNext(unsigned char *lp, unsigned char **pp, size_t lpbytes);\nunsigned int lpCompare(unsigned char *p, unsigned char *s, uint32_t slen, long long *cached_longval, int *cached_valid);\nvoid lpRandomPair(unsigned char *lp, unsigned long total_count,\n                  listpackEntry *key, listpackEntry *val, int tuple_len);\nvoid lpRandomPairs(unsigned char *lp, unsigned int count,\n                   listpackEntry *keys, listpackEntry *vals, int tuple_len);\nunsigned int lpRandomPairsUnique(unsigned char *lp, unsigned int count,\n                                 listpackEntry *keys, listpackEntry *vals, int tuple_len);\nvoid lpRandomEntries(unsigned char *lp, unsigned int count, listpackEntry *entries);\nunsigned char *lpNextRandom(unsigned char *lp, unsigned char *p, unsigned int *index,\n                            unsigned int remaining, int tuple_len);\nint 
lpSafeToAdd(unsigned char* lp, size_t add);\nvoid lpRepr(unsigned char *lp);\n\n#ifdef REDIS_TEST\nint listpackTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/listpack_malloc.h",
    "content": "/* Listpack -- A list of strings serialization format\n * https://github.com/antirez/listpack\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* Allocator selection.\n *\n * This file is used in order to change the listpack allocator at compile time.\n * Just set the following defines to the allocator you want to use. Also add\n * the include of your alternate allocator if needed (not needed in order\n * to use the default libc allocator). */\n\n#ifndef LISTPACK_ALLOC_H\n#define LISTPACK_ALLOC_H\n#include \"zmalloc.h\"\n/* We use zmalloc_usable/zrealloc_usable instead of zmalloc/zrealloc\n * to ensure the safe invocation of 'zmalloc_usable_size()'.\n * See comment in zmalloc_usable_size(). */\n#define lp_malloc(sz) zmalloc_usable(sz,NULL)\n#define lp_realloc(ptr,sz) zrealloc_usable(ptr,sz,NULL,NULL)\n#define lp_free zfree\n#define lp_malloc_size zmalloc_usable_size\n#endif\n"
  },
  {
    "path": "src/localtime.c",
    "content": "/*\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <time.h>\n\n/* This is a safe version of localtime() which contains no locks and is\n * fork() friendly. Even the _r version of localtime() cannot be used safely\n * in Redis. Another thread may be calling localtime() while the main thread\n * forks(). Later when the child process calls localtime() again, for instance\n * in order to log something to the Redis log, it may deadlock: in the copy\n * of the address space of the forked process the lock will never be released.\n *\n * This function takes the timezone 'tz' as argument, and the 'dst' flag is\n * used to check if daylight saving time is currently in effect. The caller\n * of this function should call tzset() ASAP in the main() function to obtain\n * the timezone offset from the 'timezone' global variable. To learn whether\n * daylight saving time is currently active or not,\n * one trick is to call localtime() in main() ASAP as well, and get the\n * information from the tm_isdst field of the tm structure. However the daylight\n * time may switch in the future for long running processes, so this information\n * should be refreshed at safe times.\n *\n * Note that this function does not work for dates < 1/1/1970; it is solely\n * designed to work with what time(NULL) may return, and to support Redis\n * logging of dates; it's not a complete implementation. */\nstatic int is_leap_year(time_t year) {\n    if (year % 4) return 0;         /* A year not divisible by 4 is not leap. */\n    else if (year % 100) return 1;  /* If div by 4 and not by 100 it is surely leap. */\n    else if (year % 400) return 0;  /* If div by 100 *and* not by 400 it is not leap. 
*/\n    else return 1;                  /* If div by 100 and by 400 it is leap. */\n}\n\nvoid nolocks_localtime(struct tm *tmp, time_t t, time_t tz, int dst) {\n    const time_t secs_min = 60;\n    const time_t secs_hour = 3600;\n    const time_t secs_day = 3600*24;\n\n    t -= tz;                            /* Adjust for timezone. */\n    t += 3600*dst;                      /* Adjust for daylight time. */\n    time_t days = t / secs_day;         /* Days passed since epoch. */\n    time_t seconds = t % secs_day;      /* Remaining seconds. */\n\n    tmp->tm_isdst = dst;\n    tmp->tm_hour = seconds / secs_hour;\n    tmp->tm_min = (seconds % secs_hour) / secs_min;\n    tmp->tm_sec = (seconds % secs_hour) % secs_min;\n\n    /* 1/1/1970 was a Thursday, that is, day 4 from the POV of the tm structure\n     * where Sunday = 0, so to calculate the day of the week we have to add 4\n     * and take the modulo by 7. */\n    tmp->tm_wday = (days+4)%7;\n\n    /* Calculate the current year. */\n    tmp->tm_year = 1970;\n    while(1) {\n        /* Leap years have one day more. */\n        time_t days_this_year = 365 + is_leap_year(tmp->tm_year);\n        if (days_this_year > days) break;\n        days -= days_this_year;\n        tmp->tm_year++;\n    }\n    tmp->tm_yday = days;  /* Zero-based day of the current year. */\n\n    /* We need to calculate in which month and day of the month we are. To do\n     * so we need to skip days according to how many days there are in each\n     * month, and adjust for the leap year that has one more day in February. */\n    int mdays[12] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};\n    mdays[1] += is_leap_year(tmp->tm_year);\n\n    tmp->tm_mon = 0;\n    while(days >= mdays[tmp->tm_mon]) {\n        days -= mdays[tmp->tm_mon];\n        tmp->tm_mon++;\n    }\n\n    tmp->tm_mday = days+1;  /* Add 1 since our 'days' is zero-based. */\n    tmp->tm_year -= 1900;   /* Surprisingly tm_year is year-1900. 
*/\n}\n\n#ifdef LOCALTIME_TEST_MAIN\n#include <stdio.h>\n\nint main(void) {\n    /* Obtain timezone and daylight info. */\n    tzset(); /* Now 'timezone' global is populated. */\n    time_t t = time(NULL);\n    struct tm *aux = localtime(&t);\n    int daylight_active = aux->tm_isdst;\n\n    struct tm tm;\n    char buf[1024];\n\n    nolocks_localtime(&tm,t,timezone,daylight_active);\n    strftime(buf,sizeof(buf),\"%d %b %H:%M:%S\",&tm);\n    printf(\"[timezone: %d, dl: %d] %s\\n\", (int)timezone, (int)daylight_active, buf);\n}\n#endif\n"
  },
  {
    "path": "src/logreqres.c",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* This file implements the interface for logging clients' requests and\n * responses into a file.\n * This feature requires compiling with the LOG_REQ_RES macro and is turned\n * on by the req-res-logfile config.\n *\n * Some examples:\n *\n * PING:\n *\n * 4\n * ping\n * 12\n * __argv_end__\n * +PONG\n *\n * LRANGE:\n *\n * 6\n * lrange\n * 4\n * list\n * 1\n * 0\n * 2\n * -1\n * 12\n * __argv_end__\n * *1\n * $3\n * ele\n *\n * The request is everything up until the __argv_end__ marker.\n * The format is:\n * <number of characters>\n * <the argument>\n *\n * After __argv_end__ the response appears, and the format is\n * RESP (2 or 3, depending on what the client has configured).\n */\n\n#include \"server.h\"\n#include <ctype.h>\n\n#ifdef LOG_REQ_RES\n\n/* ----- Helpers ----- */\n\nstatic int reqresShouldLog(client *c) {\n    if (!server.req_res_logfile)\n        return 0;\n\n    /* Ignore clients with a streaming non-standard response */\n    if (c->flags & (CLIENT_PUBSUB|CLIENT_MONITOR|CLIENT_SLAVE))\n        return 0;\n\n    /* We only work on masters (didn't implement reqresAppendResponse to work on shared slave buffers) */\n    if (getClientType(c) == CLIENT_TYPE_MASTER)\n        return 0;\n\n    return 1;\n}\n\nstatic size_t reqresAppendBuffer(client *c, void *buf, size_t len) {\n    if (!c->reqres.buf) {\n        c->reqres.capacity = max(len, 1024);\n        c->reqres.buf = zmalloc(c->reqres.capacity);\n    } else if (c->reqres.capacity - c->reqres.used < len) {\n        c->reqres.capacity += len;\n        c->reqres.buf = zrealloc(c->reqres.buf, c->reqres.capacity);\n    }\n\n    memcpy(c->reqres.buf + c->reqres.used, buf, len);\n    c->reqres.used += len;\n    
return len;\n}\n\n/* Functions for requests */\n\nstatic size_t reqresAppendArg(client *c, char *arg, size_t arg_len) {\n    char argv_len_buf[LONG_STR_SIZE];\n    size_t argv_len_buf_len = ll2string(argv_len_buf,sizeof(argv_len_buf),(long)arg_len);\n    size_t ret = reqresAppendBuffer(c, argv_len_buf, argv_len_buf_len);\n    ret += reqresAppendBuffer(c, \"\\r\\n\", 2);\n    ret += reqresAppendBuffer(c, arg, arg_len);\n    ret += reqresAppendBuffer(c, \"\\r\\n\", 2);\n    return ret;\n}\n\n/* Helper function to decode and append encoded buffer content.\n * Encoded buffers contain payloadHeader structures followed by payloads.\n * For PLAIN_REPLY: just copy the payload data.\n * For BULK_STR_REF: expand to \"$<len>\\r\\n<string>\\r\\n\" format. */\nstatic size_t reqresAppendEncodedBuffer(client *c, char *buf, size_t len) {\n    size_t ret = 0;\n    char *ptr = buf;\n    char *end = buf + len;\n\n    while (ptr < end) {\n        payloadHeader *header = (payloadHeader *)ptr;\n        if (header->payload_type == PLAIN_REPLY) {\n            /* Plain reply data - copy directly */\n            ret += reqresAppendBuffer(c, ptr + sizeof(payloadHeader), header->payload_len);\n        } else {\n            /* BULK_STR_REF - expand to full RESP format */\n            bulkStrRef *str_ref = (bulkStrRef *)(ptr + sizeof(payloadHeader));\n\n            /* Append prefix: \"$<len>\\r\\n\" */\n            ret += reqresAppendBuffer(c, str_ref->prefix, str_ref->prefix_cnt);\n            /* Append string content */\n            ret += reqresAppendBuffer(c, str_ref->obj->ptr, sdslen(str_ref->obj->ptr));\n            /* Append trailing CRLF */\n            ret += reqresAppendBuffer(c, str_ref->crlf, 2);\n        }\n        ptr += sizeof(payloadHeader) + header->payload_len;\n    }\n\n    return ret;\n}\n\n/* ----- API ----- */\n\n\n/* Zero out the clientReqResInfo struct inside the client,\n * and free the buffer if needed */\nvoid reqresReset(client *c, int free_buf) {\n    if (free_buf 
&& c->reqres.buf)\n        zfree(c->reqres.buf);\n    memset(&c->reqres, 0, sizeof(c->reqres));\n}\n\n/* Save the offset of the reply buffer (or the reply list).\n * Should be called when adding a reply (but it will only save the offset\n * the very first time it's called, because of c->reqres.offset.saved).\n * The idea is:\n * 1. When a client is executing a command, we save the reply offset.\n * 2. During the execution, the reply offset may grow, as addReply* functions are called.\n * 3. When the client is done with the command (commandProcessed), reqresAppendResponse\n *    is called.\n * 4. reqresAppendResponse will append the diff between the current offset and the one from step (1).\n * 5. When the client is reset before the next command, we clear c->reqres.offset.saved and start again.\n *\n * We cannot rely on c->sentlen to keep track because it depends on the network\n * (reqresAppendResponse will always write the whole buffer, unlike writeToClient).\n *\n * Ideally, we would just have this code inside reqresAppendRequest, which is called\n * from processCommand, but we cannot save the reply offset inside processCommand\n * because of the following pipelining scenario:\n * set rd [redis_deferring_client]\n * set buf \"\"\n * append buf \"SET key value\\r\\n\"\n * append buf \"BLPOP mylist 0\\r\\n\"\n * $rd write $buf\n * $rd flush\n *\n * Let's assume we save the reply offset in processCommand.\n * When BLPOP is processed the offset is 5 (+OK\\r\\n from the SET).\n * Then beforeSleep is called, the +OK is written to the network, and bufpos is 0.\n * When the client is finally unblocked, the cached offset is 5, but bufpos is already\n * 0, so we would miss the first 5 bytes of the reply.\n **/\nvoid reqresSaveClientReplyOffset(client *c) {\n    if (!reqresShouldLog(c))\n        return;\n\n    if (c->reqres.offset.saved)\n        return;\n\n    c->reqres.offset.saved = 1;\n\n    c->reqres.offset.bufpos = c->bufpos;\n    if (listLength(c->reply) && 
listNodeValue(listLast(c->reply))) {\n        c->reqres.offset.last_node.index = listLength(c->reply) - 1;\n        c->reqres.offset.last_node.used = ((clientReplyBlock *)listNodeValue(listLast(c->reply)))->used;\n    } else {\n        c->reqres.offset.last_node.index = 0;\n        c->reqres.offset.last_node.used = 0;\n    }\n}\n\nsize_t reqresAppendRequest(client *c) {\n    robj **argv = c->argv;\n    int argc = c->argc;\n\n    serverAssert(argc);\n\n    if (!reqresShouldLog(c))\n        return 0;\n\n    /* Ignore commands that have streaming non-standard response */\n    sds cmd = argv[0]->ptr;\n    if (!strcasecmp(cmd,\"debug\") || /* because of DEBUG SEGFAULT */\n        !strcasecmp(cmd,\"sync\") ||\n        !strcasecmp(cmd,\"psync\") ||\n        !strcasecmp(cmd,\"monitor\") ||\n        !strcasecmp(cmd,\"subscribe\") ||\n        !strcasecmp(cmd,\"unsubscribe\") ||\n        !strcasecmp(cmd,\"ssubscribe\") ||\n        !strcasecmp(cmd,\"sunsubscribe\") ||\n        !strcasecmp(cmd,\"psubscribe\") ||\n        !strcasecmp(cmd,\"punsubscribe\"))\n    {\n        return 0;\n    }\n\n    c->reqres.argv_logged = 1;\n\n    size_t ret = 0;\n    for (int i = 0; i < argc; i++) {\n        if (sdsEncodedObject(argv[i])) {\n            ret += reqresAppendArg(c, argv[i]->ptr, sdslen(argv[i]->ptr));\n        } else if (argv[i]->encoding == OBJ_ENCODING_INT) {\n            char buf[LONG_STR_SIZE];\n            size_t len = ll2string(buf,sizeof(buf),(long)argv[i]->ptr);\n            ret += reqresAppendArg(c, buf, len);\n        } else {\n            serverPanic(\"Wrong encoding in reqresAppendRequest()\");\n        }\n    }\n    return ret + reqresAppendArg(c, \"__argv_end__\", 12);\n}\n\nsize_t reqresAppendResponse(client *c) {\n    size_t ret = 0;\n\n    if (!reqresShouldLog(c))\n        return 0;\n\n    if (!c->reqres.argv_logged) /* Example: UNSUBSCRIBE */\n        return 0;\n\n    if (!c->reqres.offset.saved) /* Example: module client blocked on keys + CLIENT KILL */\n        
return 0;\n\n    /* First append the static reply buffer */\n    if (c->bufpos > c->reqres.offset.bufpos) {\n        size_t written;\n        if (!c->buf_encoded) {\n            /* Plain buffer - copy directly */\n            written = reqresAppendBuffer(c, c->buf + c->reqres.offset.bufpos, c->bufpos - c->reqres.offset.bufpos);\n        } else {\n            /* Decode and append encoded buffer */\n            written = reqresAppendEncodedBuffer(c, c->buf + c->reqres.offset.bufpos, c->bufpos - c->reqres.offset.bufpos);\n        }\n        ret += written;\n    }\n\n    int curr_index = 0;\n    size_t curr_used = 0;\n    if (listLength(c->reply)) {\n        curr_index = listLength(c->reply) - 1;\n        curr_used = ((clientReplyBlock *)listNodeValue(listLast(c->reply)))->used;\n    }\n\n    /* Now, append reply bytes from the reply list */\n    if (curr_index > c->reqres.offset.last_node.index ||\n        curr_used > c->reqres.offset.last_node.used)\n    {\n        int i = 0;\n        listIter iter;\n        listNode *curr;\n        clientReplyBlock *o;\n        listRewind(c->reply, &iter);\n        while ((curr = listNext(&iter)) != NULL) {\n            size_t written = 0;\n\n            /* Skip nodes we had already processed */\n            if (i < c->reqres.offset.last_node.index) {\n                i++;\n                continue;\n            }\n            o = listNodeValue(curr);\n            if (o->used == 0) {\n                i++;\n                continue;\n            }\n\n            if (!o->buf_encoded) {\n                if (i == c->reqres.offset.last_node.index) {\n                    /* Write the potentially incomplete node, which had data from\n                     * before the current command started */\n                    written = reqresAppendBuffer(c, o->buf + c->reqres.offset.last_node.used,\n                                                 o->used - c->reqres.offset.last_node.used);\n                } else {\n                    /* New node 
*/\n                    written = reqresAppendBuffer(c, o->buf, o->used);\n                }\n            } else {\n                /* Encoded buffer - decode and append */\n                if (i == c->reqres.offset.last_node.index) {\n                    /* Write the potentially incomplete node, which had data from\n                     * before the current command started */\n                    written = reqresAppendEncodedBuffer(c, o->buf + c->reqres.offset.last_node.used,\n                                                        o->used - c->reqres.offset.last_node.used);\n                } else {\n                    /* New node */\n                    written = reqresAppendEncodedBuffer(c, o->buf, o->used);\n                }\n            }\n\n            ret += written;\n            i++;\n        }\n    }\n    serverAssert(ret);\n\n    /* Flush both request and response to file */\n    FILE *fp = fopen(server.req_res_logfile, \"a\");\n    serverAssert(fp);\n    fwrite(c->reqres.buf, c->reqres.used, 1, fp);\n    fclose(fp);\n\n    return ret;\n}\n\n#else /* #ifdef LOG_REQ_RES */\n\n/* Just mimic the API without doing anything */\n\nvoid reqresReset(client *c, int free_buf) {\n    UNUSED(c);\n    UNUSED(free_buf);\n}\n\ninline void reqresSaveClientReplyOffset(client *c) {\n    UNUSED(c);\n}\n\ninline size_t reqresAppendRequest(client *c) {\n    UNUSED(c);\n    return 0;\n}\n\ninline size_t reqresAppendResponse(client *c) {\n    UNUSED(c);\n    return 0;\n}\n\n#endif /* #ifdef LOG_REQ_RES */\n"
  },
  {
    "path": "src/lolwut.c",
    "content": "/*\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * ----------------------------------------------------------------------------\n *\n * This file implements the LOLWUT command. The command should do something\n * fun and interesting, and should be replaced by a new implementation at\n * each new version of Redis.\n */\n\n#include \"server.h\"\n#include \"lolwut.h\"\n#include <math.h>\n\nvoid lolwut5Command(client *c);\nvoid lolwut6Command(client *c);\nvoid lolwut8Command(client *c);\n\n/* The default target for LOLWUT if no matching version was found.\n * This is what unstable versions of Redis will display. */\nvoid lolwutUnstableCommand(client *c) {\n    sds rendered = sdsnew(\"Redis ver. \");\n    rendered = sdscat(rendered,REDIS_VERSION);\n    rendered = sdscatlen(rendered,\"\\n\",1);\n    addReplyVerbatim(c,rendered,sdslen(rendered),\"txt\");\n    sdsfree(rendered);\n}\n\n/* LOLWUT [VERSION <version>] [... version specific arguments ...] */\nvoid lolwutCommand(client *c) {\n    char *v = REDIS_VERSION;\n    char verstr[64];\n\n    if (c->argc >= 3 && !strcasecmp(c->argv[1]->ptr,\"version\")) {\n        long ver;\n        if (getLongFromObjectOrReply(c,c->argv[2],&ver,NULL) != C_OK) return;\n        snprintf(verstr,sizeof(verstr),\"%u.0.0\",(unsigned int)ver);\n        v = verstr;\n\n        /* Adjust argv/argc to filter the \"VERSION ...\" option, since the\n         * specific LOLWUT version implementations don't know about it\n         * and expect their arguments. */\n        c->argv += 2;\n        c->argc -= 2;\n    }\n\n    if ((v[0] == '5' && v[1] == '.' && v[2] != '9') ||\n        (v[0] == '4' && v[1] == '.' && v[2] == '9'))\n        lolwut5Command(c);\n    else if ((v[0] == '6' && v[1] == '.' 
&& v[2] != '9') ||\n             (v[0] == '5' && v[1] == '.' && v[2] == '9'))\n        lolwut6Command(c);\n    else if ((v[0] == '8' && v[1] == '.' && v[2] != '9') ||\n             (v[0] == '7' && v[1] == '.' && v[2] == '9'))\n        lolwut8Command(c);\n    else\n        lolwutUnstableCommand(c);\n\n    /* Fix back argc/argv in case of VERSION argument. */\n    if (v == verstr) {\n        c->argv -= 2;\n        c->argc += 2;\n    }\n}\n\n/* ========================== LOLWUT Canvas ===============================\n * Many LOLWUT versions will likely print some computer art to the screen.\n * This is the case with LOLWUT 5 and LOLWUT 6, so here is a generic\n * canvas implementation that can be reused.  */\n\n/* Allocate and return a new canvas of the specified size. */\nlwCanvas *lwCreateCanvas(int width, int height, int bgcolor) {\n    lwCanvas *canvas = zmalloc(sizeof(*canvas));\n    canvas->width = width;\n    canvas->height = height;\n    canvas->pixels = zmalloc((size_t)width*height);\n    memset(canvas->pixels,bgcolor,(size_t)width*height);\n    return canvas;\n}\n\n/* Free the canvas created by lwCreateCanvas(). */\nvoid lwFreeCanvas(lwCanvas *canvas) {\n    zfree(canvas->pixels);\n    zfree(canvas);\n}\n\n/* Set a pixel to the specified color. Color is 0 or 1, where zero means no\n * dot will be displayed, and 1 means the dot will be displayed.\n * Coordinates are arranged so that the top-left corner is 0,0. Writing\n * outside the canvas bounds is silently ignored. */\nvoid lwDrawPixel(lwCanvas *canvas, int x, int y, int color) {\n    if (x < 0 || x >= canvas->width ||\n        y < 0 || y >= canvas->height) return;\n    canvas->pixels[x+y*canvas->width] = color;\n}\n\n/* Return the value of the specified pixel on the canvas. 
*/\nint lwGetPixel(lwCanvas *canvas, int x, int y) {\n    if (x < 0 || x >= canvas->width ||\n        y < 0 || y >= canvas->height) return 0;\n    return canvas->pixels[x+y*canvas->width];\n}\n\n/* Draw a line from x1,y1 to x2,y2 using the Bresenham algorithm. */\nvoid lwDrawLine(lwCanvas *canvas, int x1, int y1, int x2, int y2, int color) {\n    int dx = abs(x2-x1);\n    int dy = abs(y2-y1);\n    int sx = (x1 < x2) ? 1 : -1;\n    int sy = (y1 < y2) ? 1 : -1;\n    int err = dx-dy, e2;\n\n    while(1) {\n        lwDrawPixel(canvas,x1,y1,color);\n        if (x1 == x2 && y1 == y2) break;\n        e2 = err*2;\n        if (e2 > -dy) {\n            err -= dy;\n            x1 += sx;\n        }\n        if (e2 < dx) {\n            err += dx;\n            y1 += sy;\n        }\n    }\n}\n\n/* Draw a square centered at the specified x,y coordinates, with the specified\n * rotation angle and size. In order to draw a rotated square, we use the\n * trivial fact that the parametric equation:\n *\n *  x = sin(k)\n *  y = cos(k)\n *\n * describes a circle for values going from 0 to 2*PI. So basically if we start\n * at 45 degrees, that is k = PI/4, with the first point, and then we find\n * the other three points incrementing k by PI/2 (90 degrees), we'll have the\n * points of the square. In order to rotate the square, we just start with\n * k = PI/4 + rotation_angle, and we are done.\n *\n * Of course the vanilla equations above will describe the square inside a\n * circle of radius 1, so in order to draw larger squares we'll have to\n * multiply the obtained coordinates, and then translate them. However this\n * is much simpler than implementing the abstract concept of a 2D shape and then\n * performing the rotation/translation transformation, so for LOLWUT it's\n * a good approach. 
*/\nvoid lwDrawSquare(lwCanvas *canvas, int x, int y, float size, float angle, int color) {\n    int px[4], py[4];\n\n    /* Adjust the desired size according to the fact that the square inscribed\n     * into a circle of radius 1 has the side of length SQRT(2). This way\n     * size becomes a simple multiplication factor we can use with our\n     * coordinates to magnify them. */\n    size /= 1.4142135623;\n    size = round(size);\n\n    /* Compute the four points. */\n    float k = M_PI/4 + angle;\n    for (int j = 0; j < 4; j++) {\n        px[j] = round(sin(k) * size + x);\n        py[j] = round(cos(k) * size + y);\n        k += M_PI/2;\n    }\n\n    /* Draw the square. */\n    for (int j = 0; j < 4; j++)\n        lwDrawLine(canvas,px[j],py[j],px[(j+1)%4],py[(j+1)%4],color);\n}\n"
  },
  {
    "path": "src/lolwut.h",
    "content": "/*\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* This structure represents our canvas. Drawing functions will take a pointer\n * to a canvas to write to it. Later the canvas can be rendered to a string\n * suitable to be printed on the screen, using unicode Braille characters. */\n\n/* This represents a very simple generic canvas in order to draw stuff.\n * It's up to each LOLWUT version to translate what it draws to the\n * screen, depending on the result it wants to accomplish. */\n\n#ifndef __LOLWUT_H\n#define __LOLWUT_H\n\ntypedef struct lwCanvas {\n    int width;\n    int height;\n    char *pixels;\n} lwCanvas;\n\n/* Drawing functions implemented inside lolwut.c. */\nlwCanvas *lwCreateCanvas(int width, int height, int bgcolor);\nvoid lwFreeCanvas(lwCanvas *canvas);\nvoid lwDrawPixel(lwCanvas *canvas, int x, int y, int color);\nint lwGetPixel(lwCanvas *canvas, int x, int y);\nvoid lwDrawLine(lwCanvas *canvas, int x1, int y1, int x2, int y2, int color);\nvoid lwDrawSquare(lwCanvas *canvas, int x, int y, float size, float angle, int color);\n\n#endif\n"
  },
  {
    "path": "src/lolwut5.c",
    "content": "/*\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * ----------------------------------------------------------------------------\n *\n * This file implements the LOLWUT command. The command should do something\n * fun and interesting, and should be replaced by a new implementation at\n * each new version of Redis.\n */\n\n#include \"server.h\"\n#include \"lolwut.h\"\n#include <math.h>\n\n/* Translate a group of 8 pixels (2x4 vertical rectangle) to the corresponding\n * braille character. The byte should correspond to the pixels arranged as\n * follows, where 0 is the least significant bit, and 7 the most significant\n * bit:\n *\n *   0 3\n *   1 4\n *   2 5\n *   6 7\n *\n * The corresponding utf8 encoded character is set into the three bytes\n * pointed to by 'output'.\n */\n#include <stdio.h>\nvoid lwTranslatePixelsGroup(int byte, char *output) {\n    int code = 0x2800 + byte;\n    /* Convert to unicode. This is in the U0800-UFFFF range, so we need to\n     * emit it like this in three bytes:\n     * 1110xxxx 10xxxxxx 10xxxxxx. */\n    output[0] = 0xE0 | (code >> 12);          /* 1110-xxxx */\n    output[1] = 0x80 | ((code >> 6) & 0x3F);  /* 10-xxxxxx */\n    output[2] = 0x80 | (code & 0x3F);         /* 10-xxxxxx */\n}\n\n/* Schotter, the output of LOLWUT of Redis 5, is a computer graphic art piece\n * generated by Georg Nees in the 60s. It explores the relationship between\n * chaos and order.\n *\n * The function creates the canvas itself, depending on the columns available\n * in the output display and the number of squares per row and per column\n * requested by the caller. */\nlwCanvas *lwDrawSchotter(int console_cols, int squares_per_row, int squares_per_col) {\n    /* Calculate the canvas size. 
*/\n    int canvas_width = console_cols*2;\n    int padding = canvas_width > 4 ? 2 : 0;\n    float square_side = (float)(canvas_width-padding*2) / squares_per_row;\n    int canvas_height = square_side * squares_per_col + padding*2;\n    lwCanvas *canvas = lwCreateCanvas(canvas_width, canvas_height, 0);\n\n    for (int y = 0; y < squares_per_col; y++) {\n        for (int x = 0; x < squares_per_row; x++) {\n            int sx = x * square_side + square_side/2 + padding;\n            int sy = y * square_side + square_side/2 + padding;\n            /* Rotate and translate randomly as we go down to lower\n             * rows. */\n            float angle = 0;\n            if (y > 1) {\n                float r1 = (float)rand() / (float) RAND_MAX / squares_per_col * y;\n                float r2 = (float)rand() / (float) RAND_MAX / squares_per_col * y;\n                float r3 = (float)rand() / (float) RAND_MAX / squares_per_col * y;\n                if (rand() % 2) r1 = -r1;\n                if (rand() % 2) r2 = -r2;\n                if (rand() % 2) r3 = -r3;\n                angle = r1;\n                sx += r2*square_side/3;\n                sy += r3*square_side/3;\n            }\n            lwDrawSquare(canvas,sx,sy,square_side,angle,1);\n        }\n    }\n\n    return canvas;\n}\n\n/* Converts the canvas to an SDS string representing the UTF8 characters to\n * print to the terminal in order to obtain a graphical representation of the\n * logical canvas. The actual returned string will require a terminal that is\n * width/2 wide and height/4 tall in order to hold the whole image without\n * overflowing or scrolling, since each Braille character is 2x4. */\nstatic sds renderCanvas(lwCanvas *canvas) {\n    sds text = sdsempty();\n    for (int y = 0; y < canvas->height; y += 4) {\n        for (int x = 0; x < canvas->width; x += 2) {\n            /* We need to emit groups of 8 bits according to a specific\n             * arrangement. 
See lwTranslatePixelsGroup() for more info. */\n            int byte = 0;\n            if (lwGetPixel(canvas,x,y)) byte |= (1<<0);\n            if (lwGetPixel(canvas,x,y+1)) byte |= (1<<1);\n            if (lwGetPixel(canvas,x,y+2)) byte |= (1<<2);\n            if (lwGetPixel(canvas,x+1,y)) byte |= (1<<3);\n            if (lwGetPixel(canvas,x+1,y+1)) byte |= (1<<4);\n            if (lwGetPixel(canvas,x+1,y+2)) byte |= (1<<5);\n            if (lwGetPixel(canvas,x,y+3)) byte |= (1<<6);\n            if (lwGetPixel(canvas,x+1,y+3)) byte |= (1<<7);\n            char unicode[3];\n            lwTranslatePixelsGroup(byte,unicode);\n            text = sdscatlen(text,unicode,3);\n        }\n        if (y != canvas->height-1) text = sdscatlen(text,\"\\n\",1);\n    }\n    return text;\n}\n\n/* The LOLWUT command:\n *\n * LOLWUT [terminal columns] [squares-per-row] [squares-per-col]\n *\n * By default the command uses 66 columns, 8 squares per row, 12 squares\n * per column.\n */\nvoid lolwut5Command(client *c) {\n    long cols = 66;\n    long squares_per_row = 8;\n    long squares_per_col = 12;\n\n    /* Parse the optional arguments if any. */\n    if (c->argc > 1 &&\n        getLongFromObjectOrReply(c,c->argv[1],&cols,NULL) != C_OK)\n        return;\n\n    if (c->argc > 2 &&\n        getLongFromObjectOrReply(c,c->argv[2],&squares_per_row,NULL) != C_OK)\n        return;\n\n    if (c->argc > 3 &&\n        getLongFromObjectOrReply(c,c->argv[3],&squares_per_col,NULL) != C_OK)\n        return;\n\n    /* Limits. We want LOLWUT to be always reasonably fast and cheap to execute\n     * so we have maximum number of columns, rows, and output resolution. */\n    if (cols < 1) cols = 1;\n    if (cols > 1000) cols = 1000;\n    if (squares_per_row < 1) squares_per_row = 1;\n    if (squares_per_row > 200) squares_per_row = 200;\n    if (squares_per_col < 1) squares_per_col = 1;\n    if (squares_per_col > 200) squares_per_col = 200;\n\n    /* Generate some computer art and reply. 
*/\n    lwCanvas *canvas = lwDrawSchotter(cols,squares_per_row,squares_per_col);\n    sds rendered = renderCanvas(canvas);\n    rendered = sdscat(rendered,\n        \"\\nGeorg Nees - schotter, plotter on paper, 1968. Redis ver. \");\n    rendered = sdscat(rendered,REDIS_VERSION);\n    rendered = sdscatlen(rendered,\"\\n\",1);\n    addReplyVerbatim(c,rendered,sdslen(rendered),\"txt\");\n    sdsfree(rendered);\n    lwFreeCanvas(canvas);\n}\n"
  },
  {
    "path": "src/lolwut6.c",
    "content": "/*\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * ----------------------------------------------------------------------------\n *\n * This file implements the LOLWUT command. The command should do something\n * fun and interesting, and should be replaced by a new implementation at\n * each new version of Redis.\n *\n * Thanks to Michele Hiki Falcone for the original image, part of his game\n * Plaguemon, that inspired this one.\n *\n * Thanks to the Shhh computer art collective for the help in tuning the\n * output to have a better artistic effect.\n */\n\n#include \"server.h\"\n#include \"lolwut.h\"\n\n/* Render the canvas using the four gray levels of the standard color\n * terminal: they match the grayscale display of the gameboy very well. */\nstatic sds renderCanvas(lwCanvas *canvas) {\n    sds text = sdsempty();\n    for (int y = 0; y < canvas->height; y++) {\n        for (int x = 0; x < canvas->width; x++) {\n            int color = lwGetPixel(canvas,x,y);\n            char *ce; /* Color escape sequence. */\n\n            /* Note that we set both the foreground and background color.\n             * This way we are able to get a more consistent result among\n             * different terminal implementations. */\n            switch(color) {\n            case 0: ce = \"0;30;40m\"; break;    /* Black */\n            case 1: ce = \"0;90;100m\"; break;   /* Gray 1 */\n            case 2: ce = \"0;37;47m\"; break;    /* Gray 2 */\n            case 3: ce = \"0;97;107m\"; break;   /* White */\n            default: ce = \"0;30;40m\"; break;   /* Just for safety. 
*/\n            }\n            text = sdscatprintf(text,\"\\033[%s \\033[0m\",ce);\n        }\n        if (y != canvas->height-1) text = sdscatlen(text,\"\\n\",1);\n    }\n    return text;\n}\n\n/* Draw a skyscraper on the canvas, according to the parameters in the\n * 'skyscraper' structure. Window colors are random and are always one\n * of the two grays. */\nstruct skyscraper {\n    int xoff;       /* X offset. */\n    int width;      /* Pixels width. */\n    int height;     /* Pixels height. */\n    int windows;    /* Draw windows if true. */\n    int color;      /* Color of the skyscraper. */\n};\n\nvoid generateSkyscraper(lwCanvas *canvas, struct skyscraper *si) {\n    int starty = canvas->height-1;\n    int endy = starty - si->height + 1;\n    for (int y = starty; y >= endy; y--) {\n        for (int x = si->xoff; x < si->xoff+si->width; x++) {\n            /* The roof is four pixels less wide. */\n            if (y == endy && (x <= si->xoff+1 || x >= si->xoff+si->width-2))\n                continue;\n            int color = si->color;\n            /* Alter the color if this is a place where we want to\n             * draw a window. We check that we are in the inner part of the\n             * skyscraper, so that windows are far from the borders. */\n            if (si->windows &&\n                x > si->xoff+1 &&\n                x < si->xoff+si->width-2 &&\n                y > endy+1 &&\n                y < starty-1)\n            {\n                /* Calculate the x,y position relative to the start of\n                 * the window area. */\n                int relx = x - (si->xoff+1);\n                int rely = y - (endy+1);\n\n                /* Note that we want the windows to be two pixels wide\n                 * but just one pixel tall, because terminal \"pixels\"\n                 * (characters) are not square. 
*/\n                if (relx/2 % 2 && rely % 2) {\n                    do {\n                        color = 1 + rand() % 2;\n                    } while (color == si->color);\n                    /* Except we want adjacent pixels creating the same\n                     * window to be the same color. */\n                    if (relx % 2) color = lwGetPixel(canvas,x-1,y);\n                }\n            }\n            lwDrawPixel(canvas,x,y,color);\n        }\n    }\n}\n\n/* Generate a skyline inspired by the parallax backgrounds of 8 bit games. */\nvoid generateSkyline(lwCanvas *canvas) {\n    struct skyscraper si;\n\n    /* First draw the background skyscrapers without windows, using the\n     * two different grays. We use two passes to make sure that the lighter\n     * ones are always in the background. */\n    for (int color = 2; color >= 1; color--) {\n        si.color = color;\n        for (int offset = -10; offset < canvas->width;) {\n            offset += rand() % 8;\n            si.xoff = offset;\n            si.width = 10 + rand()%9;\n            if (color == 2)\n                si.height = canvas->height/2 + rand()%canvas->height/2;\n            else\n                si.height = canvas->height/2 + rand()%canvas->height/3;\n            si.windows = 0;\n            generateSkyscraper(canvas, &si);\n            if (color == 2)\n                offset += si.width/2;\n            else\n                offset += si.width+1;\n        }\n    }\n\n    /* Now draw the foreground skyscrapers with the windows. 
*/\n    si.color = 0;\n    for (int offset = -10; offset < canvas->width;) {\n        offset += rand() % 8;\n        si.xoff = offset;\n        si.width = 5 + rand()%14;\n        if (si.width % 4) si.width += (si.width % 3);\n        si.height = canvas->height/3 + rand()%canvas->height/2;\n        si.windows = 1;\n        generateSkyscraper(canvas, &si);\n        offset += si.width+5;\n    }\n}\n\n/* The LOLWUT 6 command:\n *\n * LOLWUT [columns] [rows]\n *\n * By default the command uses 80 columns and 20 rows.\n */\nvoid lolwut6Command(client *c) {\n    long cols = 80;\n    long rows = 20;\n\n    /* Parse the optional arguments if any. */\n    if (c->argc > 1 &&\n        getLongFromObjectOrReply(c,c->argv[1],&cols,NULL) != C_OK)\n        return;\n\n    if (c->argc > 2 &&\n        getLongFromObjectOrReply(c,c->argv[2],&rows,NULL) != C_OK)\n        return;\n\n    /* Limits. We want LOLWUT to always be reasonably fast and cheap to\n     * execute, so we have a maximum number of columns, rows, and output\n     * resolution. */\n    if (cols < 1) cols = 1;\n    if (cols > 1000) cols = 1000;\n    if (rows < 1) rows = 1;\n    if (rows > 1000) rows = 1000;\n\n    /* Generate the city skyline and reply. */\n    lwCanvas *canvas = lwCreateCanvas(cols,rows,3);\n    generateSkyline(canvas);\n    sds rendered = renderCanvas(canvas);\n    rendered = sdscat(rendered,\n        \"\\nDedicated to the 8 bit game developers of past and present.\\n\"\n        \"Original 8 bit image from Plaguemon by hikikomori. Redis ver. \");\n    rendered = sdscat(rendered,REDIS_VERSION);\n    rendered = sdscatlen(rendered,\"\\n\",1);\n    addReplyVerbatim(c,rendered,sdslen(rendered),\"txt\");\n    sdsfree(rendered);\n    lwFreeCanvas(canvas);\n}\n"
  },
  {
    "path": "src/lolwut8.c",
    "content": "/*\n * Copyright (c) 2025-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Originally authored by: Salvatore Sanfilippo.\n * Algorithm based on the Almanacco Bompiani description and the Python\n * code written by Emiliano Russo.\n */\n\n#include \"server.h\"\n#include <ctype.h>\n\n/* The LOLWUT 8 command:\n *\n * LOLWUT [EN|IT]\n *\n * By default the command produces verses in English language, in order for\n * the output to be more universally accessible. However, passing IT as argument\n * it is possible to reproduce the original output, exactly like done by\n * Nanni Balestrini in TAPE MARK I, and described in the Almanacco Letterario\n * Bompiani, 1962.\n */\n\n// Structure to represent a verse with its metrical characteristics.\ntypedef struct {\n    char text_en[100];    // English verse text.\n    char text_it[100];    // Italian verse text.\n    char fraction1[5];    // First fraction (rhythm/meter indicator).\n    char fraction2[5];    // Second fraction (rhythm/meter indicator).\n    char group[2];        // Group number (1-3 representing different\n                          // literary sources).\n} Verse;\n\n// Fisher-Yates shuffle algorithm to randomize verse order.\nstatic void shuffle(Verse *array, int size) {\n    for (int i = size - 1; i > 0; i--) {\n        int j = rand() % (i + 1);\n        Verse temp = array[j];\n        array[j] = array[i];\n        array[i] = temp;\n    }\n}\n\nvoid lolwut8Command(client *c) {\n    int en_lang = 1;  // Default to English.\n\n    /* Parse the optional arguments if any. 
*/\n    if (c->argc > 1 && !strcasecmp(c->argv[1]->ptr,\"IT\"))\n        en_lang = 0;\n\n    // Define verses from three literary sources with their metrical fractions:\n    // Group 1: Diary of Hiroshima by Michihito Hachiya.\n    // Group 2: The Mystery of the Elevator by Paul Goldwin.\n    // Group 3: Tao Te Ching by Lao Tse.\n    Verse verses[] = {\n        // Group 1: Hiroshima verses.\n        {\" The blinding / globe / of fire \",\n         \" l accecante   /  globo  /  di fuoco  \", \"1/4\", \"2/3\", \"1\"},\n        {\" It expands / rapidly \",\n         \" si espande   /  rapidamente  \", \"1/2\", \"3/4\", \"1\"},\n        {\" Thirty times / brighter / than the sun \",\n         \" trenta volte  / piu luminoso  / del sole \", \"2/3\", \"2/4\", \"1\"},\n        {\" When it reaches / the stratosphere \",\n         \" quando  raggiunge / la stratosfera  \", \"3/4\", \"1/2\", \"1\"},\n        {\" The summit / of the cloud \",\n         \" la  sommita  /  della nuvola \", \"1/3\", \"2/3\", \"1\"},\n        {\" Assumes / the well-known shape / of a mushroom \",\n         \" assume   / la ben nota forma  / di fungo \", \"2/4\", \"3/4\", \"1\"},\n\n        // Group 2: Elevator mystery verses.\n        {\" The head / pressed / upon the shoulder \",\n         \" la testa / premuta  / sulla spalla  \", \"1/4\", \"2/4\", \"2\"},\n        {\" The hair / between the lips \",\n         \" i  capelli   /  tra le labbra \", \"1/4\", \"2/4\", \"2\"},\n        {\" They lay / motionless / without speaking \",\n         \" giacquero  /   immobili / senza parlare \", \"2/3\", \"2/3\", \"2\"},\n        {\" Till he moved / his fingers / slowly \",\n         \" finche non mosse  /  le dita  / lentamente    \", \"3/4\", \"1/3\", \"2\"},\n        {\" Trying / to grasp \",\n         \" cercando / di afferrare  \", \"3/4\", \"1/2\", \"2\"},\n\n        // Group 3: Tao Te Ching verses.\n        {\" While the multitude / of things / comes into being \",\n         \" mentre la 
moltitudine  /  delle cose  /   accade   \", \"1/2\", \"1/2\", \"3\"},\n        {\" I envisage / their return \",\n         \" io contemplo  /  il loro ritorno    \", \"2/3\", \"3/4\", \"3\"},\n        {\" Although / things / flourish \",\n         \" malgrado / che le cose  /  fioriscano    \", \"1/2\", \"2/3\", \"3\"},\n        {\" They all return / to / their roots \",\n         \" esse tornano  / tutte    / alla loro radice   \", \"2/3\", \"1/4\", \"3\"}\n    };\n\n    // Calculate the total number of verses.\n    int num_verses = sizeof(verses) / sizeof(verses[0]);\n\n    // Create a working copy of verses for manipulation.\n    Verse *working_verses = zmalloc(num_verses * sizeof(Verse));\n    memcpy(working_verses, verses, num_verses * sizeof(Verse));\n\n    // Step 1: Shuffle the verses randomly.\n    shuffle(working_verses, num_verses);\n\n    // Step 2: Build the stanza by finding compatible verses.\n    // Each subsequent verse must:\n    // - Have compatible metrical fractions (connecting criteria).\n    // - Belong to a different group than the previous verse.\n    Verse stanza[10];\n\n    int j; // At the end, it will contain the number of verses added to\n           // the stanza.\n    for (j = 0; j < 10; j++) {\n        int i = 0;\n        int found = 0;\n\n        // Search for a compatible verse among the remaining verses.\n        while (i < num_verses) {\n            // Metrical compatibility check: this is used to select verses\n            // that go somewhat well together, if their fractions match.\n            // The algorithm checks if the current verse's first fraction\n            // matches the previous verse's second fraction in various ways,\n            // and forces successive verses to be of different groups.\n            if (j == 0 || // First verse is always accepted.\n                ((working_verses[i].fraction1[0] == stanza[j-1].fraction2[0] ||\n                  working_verses[i].fraction1[2] == stanza[j-1].fraction2[0] ||\n                  
working_verses[i].fraction1[2] == stanza[j-1].fraction2[2]) &&\n                 strcmp(working_verses[i].group, stanza[j-1].group) != 0))\n            {\n\n                // Add compatible verse to stanza.\n                stanza[j] = working_verses[i];\n\n                // Remove selected verse from working set, to avoid reuse.\n                for (int k = i; k < num_verses - 1; k++)\n                    working_verses[k] = working_verses[k + 1];\n                num_verses--;\n\n                found = 1;\n                break;\n            }\n            i++;\n        }\n\n        // Exit if there are no longer matching verses.\n        if (!found) break;\n    }\n    zfree(working_verses);\n\n    // Step 3: Combine all stanza verses into single SDS string.\n    sds combined = sdsempty();\n    for (int i = 0; i < j; i++) {\n        if (en_lang) {\n            combined = sdscat(combined, stanza[i].text_en);\n        } else {\n            combined = sdscat(combined, stanza[i].text_it);\n        }\n        combined = sdscat(combined, \"\\n\");\n    }\n\n    // Step 4: Make uppercase, and strip the \"/\".\n    for (size_t j = 0; j < sdslen(combined); j++) {\n        combined[j] = toupper(combined[j]);\n        if (combined[j] == '/') combined[j] = ' ';\n    }\n\n    // Step 5: Add background info about what the user just saw.\n    combined = sdscat(combined,\n        \"\\nIn 1961, Nanni Balestrini created one of the first computer-generated poems, TAPE MARK I, using an IBM 7090 mainframe. Each execution combined verses from three literary sources following algorithmic rules based on metrical compatibility and group constraints. 
This LOLWUT command reproduces Balestrini's original algorithm, generating new stanzas through the same computational poetry process described in Almanacco Letterario Bompiani, 1962.\\n\\n\"\n        \"https://en.wikipedia.org/wiki/Digital_poetry\\n\"\n        \"https://www.youtube.com/watch?v=8i7uFCK7G0o (English subs)\\n\\n\"\n        \"Use: LOLWUT IT for the original Italian output. Redis ver. \");\n    combined = sdscat(combined,REDIS_VERSION);\n    combined = sdscatlen(combined,\"\\n\",1);\n\n    addReplyVerbatim(c,combined,sdslen(combined),\"txt\");\n    sdsfree(combined);\n}\n"
  },
  {
    "path": "src/lzf.h",
    "content": "/*\n * Copyright (c) 2000-2008 Marc Alexander Lehmann <schmorp@schmorp.de>\n *\n * Redistribution and use in source and binary forms, with or without modifica-\n * tion, are permitted provided that the following conditions are met:\n *\n *   1.  Redistributions of source code must retain the above copyright notice,\n *       this list of conditions and the following disclaimer.\n *\n *   2.  Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-\n * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO\n * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-\n * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,\n * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-\n * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n * OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * Alternatively, the contents of this file may be used under the terms of\n * the GNU General Public License (\"GPL\") version 2 or any later version,\n * in which case the provisions of the GPL are applicable instead of\n * the above. If you wish to allow the use of your version of this file\n * only under the terms of the GPL and not to allow others to use your\n * version of this file under the BSD license, indicate your decision\n * by deleting the provisions above and replace them with the notice\n * and other provisions required by the GPL. 
If you do not delete the\n * provisions above, a recipient may use your version of this file under\n * either the BSD or the GPL.\n */\n\n#ifndef LZF_H\n#define LZF_H\n\n/***********************************************************************\n**\n**\tlzf -- an extremely fast/free compression/decompression-method\n**\thttp://liblzf.plan9.de/\n**\n**\tThis algorithm is believed to be patent-free.\n**\n***********************************************************************/\n\n#define LZF_VERSION 0x0105 /* 1.5, API version */\n\n/*\n * Compress in_len bytes stored at the memory block starting at\n * in_data and write the result to out_data, up to a maximum length\n * of out_len bytes.\n *\n * If the output buffer is not large enough or any error occurs return 0,\n * otherwise return the number of bytes used, which might be considerably\n * more than in_len (but less than 104% of the original size), so it\n * makes sense to always use out_len == in_len - 1, to ensure _some_\n * compression, and store the data uncompressed otherwise (with a flag, of\n * course).\n *\n * lzf_compress might use different algorithms on different systems and\n * even different runs, thus might result in different compressed strings\n * depending on the phase of the moon or similar factors. However, all\n * these strings are architecture-independent and will result in the\n * original data when decompressed using lzf_decompress.\n *\n * The buffers must not be overlapping.\n *\n * If the option LZF_STATE_ARG is enabled, an extra argument must be\n * supplied which is not reflected in this header file. Refer to lzfP.h\n * and lzf_c.c.\n *\n */\nsize_t\nlzf_compress (const void *const in_data,  size_t in_len,\n              void             *out_data, size_t out_len);\n\n/*\n * Decompress data compressed with some version of the lzf_compress\n * function and stored at location in_data and length in_len. 
The result\n * will be stored at out_data up to a maximum of out_len characters.\n *\n * If the output buffer is not large enough to hold the decompressed\n * data, a 0 is returned and errno is set to E2BIG. Otherwise the number\n * of decompressed bytes (i.e. the original length of the data) is\n * returned.\n *\n * If an error in the compressed data is detected, a zero is returned and\n * errno is set to EINVAL.\n *\n * This function is very fast, about as fast as a copying loop.\n */\nsize_t\nlzf_decompress (const void *const in_data,  size_t in_len,\n                void             *out_data, size_t out_len);\n\n#endif\n\n"
  },
  {
    "path": "src/lzfP.h",
    "content": "/*\n * Copyright (c) 2000-2007 Marc Alexander Lehmann <schmorp@schmorp.de>\n *\n * Redistribution and use in source and binary forms, with or without modifica-\n * tion, are permitted provided that the following conditions are met:\n *\n *   1.  Redistributions of source code must retain the above copyright notice,\n *       this list of conditions and the following disclaimer.\n *\n *   2.  Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-\n * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO\n * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-\n * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,\n * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-\n * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n * OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * Alternatively, the contents of this file may be used under the terms of\n * the GNU General Public License (\"GPL\") version 2 or any later version,\n * in which case the provisions of the GPL are applicable instead of\n * the above. If you wish to allow the use of your version of this file\n * only under the terms of the GPL and not to allow others to use your\n * version of this file under the BSD license, indicate your decision\n * by deleting the provisions above and replace them with the notice\n * and other provisions required by the GPL. 
If you do not delete the\n * provisions above, a recipient may use your version of this file under\n * either the BSD or the GPL.\n */\n\n#ifndef LZFP_h\n#define LZFP_h\n\n#define STANDALONE 1 /* at the moment, this is ok. */\n\n#ifndef STANDALONE\n# include \"lzf.h\"\n#endif\n\n/*\n * Size of hashtable is (1 << HLOG) * sizeof (char *)\n * decompression is independent of the hash table size\n * the difference between 15 and 14 is very small\n * for small blocks (and 14 is usually a bit faster).\n * For a low-memory/faster configuration, use HLOG == 13;\n * For best compression, use 15 or 16 (or more, up to 22).\n */\n#ifndef HLOG\n# define HLOG 16\n#endif\n\n/*\n * Sacrifice very little compression quality in favour of compression speed.\n * This gives almost the same compression as the default code, and is\n * (very roughly) 15% faster. This is the preferred mode of operation.\n */\n#ifndef VERY_FAST\n# define VERY_FAST 1\n#endif\n\n/*\n * Sacrifice some more compression quality in favour of compression speed.\n * (roughly 1-2% worse compression for large blocks and\n * 9-10% for small, redundant, blocks and >>20% better speed in both cases)\n * In short: when in need for speed, enable this for binary data,\n * possibly disable this for text data.\n */\n#ifndef ULTRA_FAST\n# define ULTRA_FAST 0\n#endif\n\n/*\n * Unconditionally aligning does not cost very much, so do it if unsure\n */\n#ifndef STRICT_ALIGN\n# if !(defined(__i386) || defined (__amd64))\n#  define STRICT_ALIGN 1\n# else\n#  define STRICT_ALIGN 0\n# endif\n#endif\n\n/*\n * You may choose to pre-set the hash table (might be faster on some\n * modern cpus and large (>>64k) blocks, and also makes compression\n * deterministic/repeatable when the configuration otherwise is the same).\n */\n#ifndef INIT_HTAB\n# define INIT_HTAB 0\n#endif\n\n/*\n * Avoid assigning values to errno variable? for some embedding purposes\n * (linux kernel for example), this is necessary. 
NOTE: this breaks\n * the documentation in lzf.h. Avoiding errno has no speed impact.\n */\n#ifndef AVOID_ERRNO\n# define AVOID_ERRNO 0\n#endif\n\n/*\n * Whether to pass the LZF_STATE variable as argument, or allocate it\n * on the stack. For small-stack environments, define this to 1.\n * NOTE: this breaks the prototype in lzf.h.\n */\n#ifndef LZF_STATE_ARG\n# define LZF_STATE_ARG 0\n#endif\n\n/*\n * Whether to add extra checks for input validity in lzf_decompress\n * and return EINVAL if the input stream has been corrupted. This\n * only shields against overflowing the input buffer and will not\n * detect most corrupted streams.\n * This check is not normally noticeable on modern hardware\n * (<1% slowdown), but might slow down older cpus considerably.\n */\n#ifndef CHECK_INPUT\n# define CHECK_INPUT 1\n#endif\n\n/*\n * Whether to store pointers or offsets inside the hash table. On\n * 64 bit architectures, pointers take up twice as much space,\n * and might also be slower. Default is to autodetect.\n * Notice: don't set this value to 1, as it will result in 'LZF_HSLOT'\n * not being able to store offsets above UINT32_MAX on 64-bit systems. 
*/\n#define LZF_USE_OFFSETS 0\n\n/*****************************************************************************/\n/* nothing should be changed below */\n\n#ifdef __cplusplus\n# include <cstring>\n# include <climits>\nusing namespace std;\n#else\n# include <string.h>\n# include <limits.h>\n#endif\n\n#ifndef LZF_USE_OFFSETS\n# if defined (WIN32)\n#  define LZF_USE_OFFSETS defined(_M_X64)\n# else\n#  if __cplusplus > 199711L\n#   include <cstdint>\n#  else\n#   include <stdint.h>\n#  endif\n#  define LZF_USE_OFFSETS (UINTPTR_MAX > 0xffffffffU)\n# endif\n#endif\n\ntypedef unsigned char u8;\n\n#if LZF_USE_OFFSETS\n# define LZF_HSLOT_BIAS ((const u8 *)in_data)\n  typedef unsigned int LZF_HSLOT;\n#else\n# define LZF_HSLOT_BIAS 0\n  typedef const u8 *LZF_HSLOT;\n#endif\n\ntypedef LZF_HSLOT LZF_STATE[1 << (HLOG)];\n\n#if !STRICT_ALIGN\n/* for unaligned accesses we need a 16 bit datatype. */\n# if USHRT_MAX == 65535\n    typedef unsigned short u16;\n# elif UINT_MAX == 65535\n    typedef unsigned int u16;\n# else\n#  undef STRICT_ALIGN\n#  define STRICT_ALIGN 1\n# endif\n#endif\n\n#if ULTRA_FAST\n# undef VERY_FAST\n#endif\n\n#endif\n\n"
  },
  {
    "path": "src/lzf_c.c",
    "content": "/*\n * Copyright (c) 2000-2010 Marc Alexander Lehmann <schmorp@schmorp.de>\n *\n * Redistribution and use in source and binary forms, with or without modifica-\n * tion, are permitted provided that the following conditions are met:\n *\n *   1.  Redistributions of source code must retain the above copyright notice,\n *       this list of conditions and the following disclaimer.\n *\n *   2.  Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-\n * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO\n * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-\n * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,\n * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-\n * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n * OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * Alternatively, the contents of this file may be used under the terms of\n * the GNU General Public License (\"GPL\") version 2 or any later version,\n * in which case the provisions of the GPL are applicable instead of\n * the above. If you wish to allow the use of your version of this file\n * only under the terms of the GPL and not to allow others to use your\n * version of this file under the BSD license, indicate your decision\n * by deleting the provisions above and replace them with the notice\n * and other provisions required by the GPL. 
If you do not delete the\n * provisions above, a recipient may use your version of this file under\n * either the BSD or the GPL.\n */\n\n#include \"lzfP.h\"\n\n#define HSIZE (1 << (HLOG))\n\n/*\n * don't play with this unless you benchmark!\n * the data format is not dependent on the hash function.\n * the hash function might seem strange, just believe me,\n * it works ;)\n */\n#ifndef FRST\n# define FRST(p) (((p[0]) << 8) | p[1])\n# define NEXT(v,p) (((v) << 8) | p[2])\n# if ULTRA_FAST\n#  define IDX(h) ((( h             >> (3*8 - HLOG)) - h  ) & (HSIZE - 1))\n# elif VERY_FAST\n#  define IDX(h) ((( h             >> (3*8 - HLOG)) - h*5) & (HSIZE - 1))\n# else\n#  define IDX(h) ((((h ^ (h << 5)) >> (3*8 - HLOG)) - h*5) & (HSIZE - 1))\n# endif\n#endif\n/*\n * IDX works because it is very similar to a multiplicative hash, e.g.\n * ((h * 57321 >> (3*8 - HLOG)) & (HSIZE - 1))\n * the latter is also quite fast on newer CPUs, and compresses similarly.\n *\n * the next one is also quite good, albeit slow ;)\n * (int)(cos(h & 0xffffff) * 1e6)\n */\n\n#if 0\n/* original lzv-like hash function, much worse and thus slower */\n# define FRST(p) (p[0] << 5) ^ p[1]\n# define NEXT(v,p) ((v) << 5) ^ p[2]\n# define IDX(h) ((h) & (HSIZE - 1))\n#endif\n\n#define        MAX_LIT        (1 <<  5)\n#define        MAX_OFF        (1 << 13)\n#define        MAX_REF        ((1 << 8) + (1 << 3))\n\n#if __GNUC__ >= 3\n# define expect(expr,value)         __builtin_expect ((expr),(value))\n# define inline                     inline\n#else\n# define expect(expr,value)         (expr)\n# define inline                     static\n#endif\n\n#define expect_false(expr) expect ((expr) != 0, 0)\n#define expect_true(expr)  expect ((expr) != 0, 1)\n\n#if defined(__has_attribute)\n# if __has_attribute(no_sanitize)\n#  define NO_SANITIZE(sanitizer) __attribute__((no_sanitize(sanitizer)))\n# endif\n#endif\n\n#if !defined(NO_SANITIZE)\n# define NO_SANITIZE(sanitizer)\n#endif\n\n#if defined(__clang__)\n#define 
NO_SANITIZE_MSAN(sanitizer) NO_SANITIZE(sanitizer)\n#else\n#define NO_SANITIZE_MSAN(sanitizer)\n#endif\n\n/*\n * compressed format\n *\n * 000LLLLL <L+1>    ; literal, L+1=1..33 octets\n * LLLooooo oooooooo ; backref L+1=1..7 octets, o+1=1..4096 offset\n * 111ooooo LLLLLLLL oooooooo ; backref L+8 octets, o+1=1..4096 offset\n *\n */\nNO_SANITIZE(\"alignment\")\nNO_SANITIZE_MSAN(\"memory\")\nsize_t\nlzf_compress (const void *const in_data, size_t in_len,\n\t      void *out_data, size_t out_len\n#if LZF_STATE_ARG\n              , LZF_STATE htab\n#endif\n              )\n{\n#if !LZF_STATE_ARG\n  LZF_STATE htab;\n#endif\n  const u8 *ip = (const u8 *)in_data;\n        u8 *op = (u8 *)out_data;\n  const u8 *in_end  = ip + in_len;\n        u8 *out_end = op + out_len;\n  const u8 *ref;\n\n  /* off requires a type wide enough to hold a general pointer difference.\n   * ISO C doesn't have that (size_t might not be enough and ptrdiff_t only\n   * works for differences within a single object). We also assume that\n   * no bit pattern traps. Since the only platform that is both non-POSIX\n   * and fails to support both assumptions is windows 64 bit, we make a\n   * special workaround for it.\n   */\n#if defined (WIN32) && defined (_M_X64)\n  unsigned _int64 off; /* workaround for missing POSIX compliance */\n#else\n  size_t off;\n#endif\n  unsigned int hval;\n  int lit;\n\n  if (!in_len || !out_len)\n    return 0;\n\n#if INIT_HTAB\n  memset (htab, 0, sizeof (htab));\n#endif\n\n  lit = 0; op++; /* start run */\n\n  hval = FRST (ip);\n  while (ip < in_end - 2)\n    {\n      LZF_HSLOT *hslot;\n\n      hval = NEXT (hval, ip);\n      hslot = htab + IDX (hval);\n      ref = *hslot ? 
(*hslot + LZF_HSLOT_BIAS) : NULL; /* avoid applying zero offset to null pointer */\n      *hslot = ip - LZF_HSLOT_BIAS;\n\n      if (1\n#if INIT_HTAB\n          && ref < ip /* the next test will actually take care of this, but this is faster */\n#endif\n          && (off = ip - ref - 1) < MAX_OFF\n          && ref > (u8 *)in_data\n          && ref[2] == ip[2]\n#if STRICT_ALIGN\n          && ((ref[1] << 8) | ref[0]) == ((ip[1] << 8) | ip[0])\n#else\n          && *(u16 *)ref == *(u16 *)ip\n#endif\n        )\n        {\n          /* match found at *ref++ */\n          unsigned int len = 2;\n          size_t maxlen = in_end - ip - len;\n          maxlen = maxlen > MAX_REF ? MAX_REF : maxlen;\n\n          if (expect_false (op + 3 + 1 >= out_end)) /* first a faster conservative test */\n            if (op - !lit + 3 + 1 >= out_end) /* second the exact but rare test */\n              return 0;\n\n          op [- lit - 1] = lit - 1; /* stop run */\n          op -= !lit; /* undo run if length is zero */\n\n          for (;;)\n            {\n              if (expect_true (maxlen > 16))\n                {\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                  len++; if (ref [len] 
!= ip [len]) break;\n                  len++; if (ref [len] != ip [len]) break;\n                }\n\n              do\n                len++;\n              while (len < maxlen && ref[len] == ip[len]);\n\n              break;\n            }\n\n          len -= 2; /* len is now #octets - 1 */\n          ip++;\n\n          if (len < 7)\n            {\n              *op++ = (off >> 8) + (len << 5);\n            }\n          else\n            {\n              *op++ = (off >> 8) + (  7 << 5);\n              *op++ = len - 7;\n            }\n\n          *op++ = off;\n\n          lit = 0; op++; /* start run */\n\n          ip += len + 1;\n\n          if (expect_false (ip >= in_end - 2))\n            break;\n\n#if ULTRA_FAST || VERY_FAST\n          --ip;\n# if VERY_FAST && !ULTRA_FAST\n          --ip;\n# endif\n          hval = FRST (ip);\n\n          hval = NEXT (hval, ip);\n          htab[IDX (hval)] = ip - LZF_HSLOT_BIAS;\n          ip++;\n\n# if VERY_FAST && !ULTRA_FAST\n          hval = NEXT (hval, ip);\n          htab[IDX (hval)] = ip - LZF_HSLOT_BIAS;\n          ip++;\n# endif\n#else\n          ip -= len + 1;\n\n          do\n            {\n              hval = NEXT (hval, ip);\n              htab[IDX (hval)] = ip - LZF_HSLOT_BIAS;\n              ip++;\n            }\n          while (len--);\n#endif\n        }\n      else\n        {\n          /* one more literal byte we must copy */\n          if (expect_false (op >= out_end))\n            return 0;\n\n          lit++; *op++ = *ip++;\n\n          if (expect_false (lit == MAX_LIT))\n            {\n              op [- lit - 1] = lit - 1; /* stop run */\n              lit = 0; op++; /* start run */\n            }\n        }\n    }\n\n  if (op + 3 > out_end) /* at most 3 bytes can be missing here */\n    return 0;\n\n  while (ip < in_end)\n    {\n      lit++; *op++ = *ip++;\n\n      if (expect_false (lit == MAX_LIT))\n        {\n          op [- lit - 1] = lit - 1; /* stop run */\n          lit = 0; op++; /* start run 
*/\n        }\n    }\n\n  op [- lit - 1] = lit - 1; /* end run */\n  op -= !lit; /* undo run if length is zero */\n\n  return op - (u8 *)out_data;\n}\n\n"
  },
  {
    "path": "src/lzf_d.c",
    "content": "/*\n * Copyright (c) 2000-2010 Marc Alexander Lehmann <schmorp@schmorp.de>\n *\n * Redistribution and use in source and binary forms, with or without modifica-\n * tion, are permitted provided that the following conditions are met:\n *\n *   1.  Redistributions of source code must retain the above copyright notice,\n *       this list of conditions and the following disclaimer.\n *\n *   2.  Redistributions in binary form must reproduce the above copyright\n *       notice, this list of conditions and the following disclaimer in the\n *       documentation and/or other materials provided with the distribution.\n *\n * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED\n * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MER-\n * CHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO\n * EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPE-\n * CIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;\n * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,\n * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTH-\n * ERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED\n * OF THE POSSIBILITY OF SUCH DAMAGE.\n *\n * Alternatively, the contents of this file may be used under the terms of\n * the GNU General Public License (\"GPL\") version 2 or any later version,\n * in which case the provisions of the GPL are applicable instead of\n * the above. If you wish to allow the use of your version of this file\n * only under the terms of the GPL and not to allow others to use your\n * version of this file under the BSD license, indicate your decision\n * by deleting the provisions above and replace them with the notice\n * and other provisions required by the GPL. 
If you do not delete the\n * provisions above, a recipient may use your version of this file under\n * either the BSD or the GPL.\n */\n\n#include \"lzfP.h\"\n\n#if AVOID_ERRNO\n# define SET_ERRNO(n)\n#else\n# include <errno.h>\n# define SET_ERRNO(n) errno = (n)\n#endif\n\n#if USE_REP_MOVSB /* small win on amd, big loss on intel */\n#if (__i386 || __amd64) && __GNUC__ >= 3\n# define lzf_movsb(dst, src, len)                \\\n   asm (\"rep movsb\"                              \\\n        : \"=D\" (dst), \"=S\" (src), \"=c\" (len)     \\\n        :  \"0\" (dst),  \"1\" (src),  \"2\" (len));\n#endif\n#endif\n\n#if defined(__GNUC__) && __GNUC__ >= 7\n#pragma GCC diagnostic push\n#pragma GCC diagnostic ignored \"-Wimplicit-fallthrough\"\n#endif\nsize_t\nlzf_decompress (const void *const in_data,  size_t in_len,\n                void             *out_data, size_t out_len)\n{\n  u8 const *ip = (const u8 *)in_data;\n  u8       *op = (u8 *)out_data;\n  u8 const *const in_end  = ip + in_len;\n  u8       *const out_end = op + out_len;\n\n  while (ip < in_end)\n    {\n      unsigned int ctrl;\n      ctrl = *ip++;\n\n      if (ctrl < (1 << 5)) /* literal run */\n        {\n          ctrl++;\n\n          if (op + ctrl > out_end)\n            {\n              SET_ERRNO (E2BIG);\n              return 0;\n            }\n\n#if CHECK_INPUT\n          if (ip + ctrl > in_end)\n            {\n              SET_ERRNO (EINVAL);\n              return 0;\n            }\n#endif\n\n#ifdef lzf_movsb\n          lzf_movsb (op, ip, ctrl);\n#else\n          switch (ctrl)\n            {\n              case 32: *op++ = *ip++; case 31: *op++ = *ip++; case 30: *op++ = *ip++; case 29: *op++ = *ip++;\n              case 28: *op++ = *ip++; case 27: *op++ = *ip++; case 26: *op++ = *ip++; case 25: *op++ = *ip++;\n              case 24: *op++ = *ip++; case 23: *op++ = *ip++; case 22: *op++ = *ip++; case 21: *op++ = *ip++;\n              case 20: *op++ = *ip++; case 19: *op++ = *ip++; case 18: *op++ = 
*ip++; case 17: *op++ = *ip++;\n              case 16: *op++ = *ip++; case 15: *op++ = *ip++; case 14: *op++ = *ip++; case 13: *op++ = *ip++;\n              case 12: *op++ = *ip++; case 11: *op++ = *ip++; case 10: *op++ = *ip++; case  9: *op++ = *ip++;\n              case  8: *op++ = *ip++; case  7: *op++ = *ip++; case  6: *op++ = *ip++; case  5: *op++ = *ip++;\n              case  4: *op++ = *ip++; case  3: *op++ = *ip++; case  2: *op++ = *ip++; case  1: *op++ = *ip++;\n            }\n#endif\n        }\n      else /* back reference */\n        {\n          unsigned int len = ctrl >> 5;\n\n          u8 *ref = op - ((ctrl & 0x1f) << 8) - 1;\n\n#if CHECK_INPUT\n          if (ip >= in_end)\n            {\n              SET_ERRNO (EINVAL);\n              return 0;\n            }\n#endif\n          if (len == 7)\n            {\n              len += *ip++;\n#if CHECK_INPUT\n              if (ip >= in_end)\n                {\n                  SET_ERRNO (EINVAL);\n                  return 0;\n                }\n#endif\n            }\n\n          ref -= *ip++;\n\n          if (op + len + 2 > out_end)\n            {\n              SET_ERRNO (E2BIG);\n              return 0;\n            }\n\n          if (ref < (u8 *)out_data)\n            {\n              SET_ERRNO (EINVAL);\n              return 0;\n            }\n\n#ifdef lzf_movsb\n          len += 2;\n          lzf_movsb (op, ref, len);\n#else\n          switch (len)\n            {\n              default:\n                len += 2;\n\n                if (op >= ref + len)\n                  {\n                    /* disjunct areas */\n                    memcpy (op, ref, len);\n                    op += len;\n                  }\n                else\n                  {\n                    /* overlapping, use octet by octet copying */\n                    do\n                      *op++ = *ref++;\n                    while (--len);\n                  }\n\n                break;\n\n              case 9: *op++ = *ref++; 
/* fall-thru */\n              case 8: *op++ = *ref++; /* fall-thru */\n              case 7: *op++ = *ref++; /* fall-thru */\n              case 6: *op++ = *ref++; /* fall-thru */\n              case 5: *op++ = *ref++; /* fall-thru */\n              case 4: *op++ = *ref++; /* fall-thru */\n              case 3: *op++ = *ref++; /* fall-thru */\n              case 2: *op++ = *ref++; /* fall-thru */\n              case 1: *op++ = *ref++; /* fall-thru */\n              case 0: *op++ = *ref++; /* two octets more */\n                      *op++ = *ref++; /* fall-thru */\n            }\n#endif\n        }\n    }\n\n  return op - (u8 *)out_data;\n}\n#if defined(__GNUC__) && __GNUC__ >= 7\n#pragma GCC diagnostic pop\n#endif\n"
  },
  {
    "path": "src/memory_prefetch.c",
    "content": "/*\n * This file utilizes prefetching keys and data for multiple commands in a batch,\n * to improve performance by amortizing memory access costs across multiple operations.\n *\n * Copyright (c) 2025-Present, Redis Ltd. and contributors.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"memory_prefetch.h\"\n#include \"server.h\"\n#include \"dict.h\"\n\ntypedef enum { HT_IDX_FIRST = 0, HT_IDX_SECOND = 1, HT_IDX_INVALID = -1 } HashTableIndex;\n\ntypedef enum {\n    PREFETCH_BUCKET,     /* Initial state, determines which hash table to use and prefetch the table's bucket */\n    PREFETCH_ENTRY,      /* prefetch entries associated with the given key's hash */\n    PREFETCH_KVOBJ,      /* prefetch the kv object of the entry found in the previous step */\n    PREFETCH_VALDATA,    /* prefetch the value data of the kv object found in the previous step */\n    PREFETCH_DONE        /* Indicates that prefetching for this key is complete */\n} PrefetchState;\n\n\n/************************************ State machine diagram for the prefetch operation. 
********************************\n                                                           │\n                                                         start\n                                                           │\n                                                  ┌────────▼─────────┐\n                                       ┌─────────►│  PREFETCH_BUCKET ├────►────────┐\n                                       │          └────────┬─────────┘            no more tables -> done\n                                       |             bucket|found                  |\n                                       │                   |                       │\n        entry not found - goto next table         ┌────────▼────────┐              │\n                                       └────◄─────┤ PREFETCH_ENTRY  |              ▼\n                                    ┌────────────►└────────┬────────┘              │\n                                    |                 Entry│found                  │\n                                    │                      |                       │\n                                    |              ┌───────▼────────┐              │\n                                    │              | PREFETCH_KVOBJ |              ▼\n                                    │              └───────┬────────┘              │\n        kvobj not found - goto next entry                  |                       |\n                                    │          ┌───────────▼────────────┐          │\n                                    └──────◄───│    PREFETCH_VALDATA    │          ▼\n                                               └───────────┬────────────┘          │\n                                                           |                       │\n                                                 ┌───────-─▼─────────────┐         │\n                                                 │     PREFETCH_DONE     │◄────────┘\n                                                 
└───────────────────────┘\n**********************************************************************************************************************/\n\ntypedef void *(*GetValueDataFunc)(const void *val);\n\ntypedef struct KeyPrefetchInfo {\n    PrefetchState state;      /* Current state of the prefetch operation */\n    HashTableIndex ht_idx;    /* Index of the current hash table (0 or 1 for rehashing) */\n    uint64_t bucket_idx;      /* Index of the bucket in the current hash table */\n    uint64_t key_hash;        /* Hash value of the key being prefetched */\n    dictEntry *current_entry; /* Pointer to the current entry being processed */\n    kvobj *current_kv;        /* Pointer to the kv object being prefetched */\n} KeyPrefetchInfo;\n\n/* PrefetchCommandsBatch structure holds the state of the current batch of client commands being processed. */\ntypedef struct PrefetchCommandsBatch {\n    size_t cur_idx;                 /* Index of the current key being processed */\n    size_t key_count;               /* Number of keys in the current batch */\n    size_t client_count;            /* Number of clients in the current batch */\n    size_t pcmd_count;              /* Number of pending commands in the current batch */\n    size_t max_prefetch_size;       /* Maximum number of keys to prefetch in a batch */\n    void **keys;                    /* Array of keys to prefetch in the current batch */\n    client **clients;               /* Array of clients in the current batch */\n    pendingCommand **pending_cmds;  /* Array of pending commands in the current batch */\n    dict **keys_dicts;              /* Main dict for each key */\n    dict **current_dicts;           /* Points to dict to prefetch from */\n    KeyPrefetchInfo *prefetch_info; /* Prefetch info for each key */\n    GetValueDataFunc get_value_data_func; /* Function to get the value data */\n} PrefetchCommandsBatch;\n\nstatic PrefetchCommandsBatch *batch = NULL;\n\nvoid freePrefetchCommandsBatch(void) {\n    
if (batch == NULL) {\n        return;\n    }\n\n    zfree(batch->clients);\n    zfree(batch->pending_cmds);\n    zfree(batch->keys);\n    zfree(batch->keys_dicts);\n    zfree(batch->prefetch_info);\n    zfree(batch);\n    batch = NULL;\n}\n\nvoid prefetchCommandsBatchInit(void) {\n    serverAssert(!batch);\n\n    /* To avoid prefetching small batches, we set the max size to twice\n     * the configured size, so if not exceeding twice the limit, we can\n     * prefetch all of it. See also `determinePrefetchCount` */\n    size_t max_prefetch_size = server.prefetch_batch_max_size * 2;\n\n    if (max_prefetch_size == 0) {\n        return;\n    }\n\n    batch = zcalloc(sizeof(PrefetchCommandsBatch));\n    batch->max_prefetch_size = max_prefetch_size;\n    batch->clients = zcalloc(max_prefetch_size * sizeof(client *));\n    batch->pending_cmds = zcalloc(max_prefetch_size * sizeof(pendingCommand *));\n    batch->keys = zcalloc(max_prefetch_size * sizeof(void *));\n    batch->keys_dicts = zcalloc(max_prefetch_size * sizeof(dict *));\n    batch->prefetch_info = zcalloc(max_prefetch_size * sizeof(KeyPrefetchInfo));\n}\n\nvoid onMaxBatchSizeChange(void) {\n    if (batch && batch->client_count > 0) {\n        /* We need to process the current batch before updating the size */\n        return;\n    }\n\n    freePrefetchCommandsBatch();\n    prefetchCommandsBatchInit();\n}\n\n/* Prefetch the given pointer and move to the next key in the batch. */\nstatic inline void prefetchAndMoveToNextKey(void *addr) {\n    redis_prefetch_read(addr);\n    /* While the prefetch is in progress, we can continue to the next key */\n    batch->cur_idx = (batch->cur_idx + 1) % batch->key_count;\n}\n\nstatic inline void markKeyAsdone(KeyPrefetchInfo *info) {\n    info->state = PREFETCH_DONE;\n    server.stat_total_prefetch_entries++;\n}\n\n/* Returns the next KeyPrefetchInfo structure that needs to be processed. 
*/\nstatic KeyPrefetchInfo *getNextPrefetchInfo(void) {\n    size_t start_idx = batch->cur_idx;\n    do {\n        KeyPrefetchInfo *info = &batch->prefetch_info[batch->cur_idx];\n        if (info->state != PREFETCH_DONE) return info;\n        batch->cur_idx = (batch->cur_idx + 1) % batch->key_count;\n    } while (batch->cur_idx != start_idx);\n    return NULL;\n}\n\nstatic void initBatchInfo(dict **dicts, GetValueDataFunc func) {\n    batch->current_dicts = dicts;\n    batch->get_value_data_func = func;\n\n    /* Initialize the prefetch info */\n    for (size_t i = 0; i < batch->key_count; i++) {\n        KeyPrefetchInfo *info = &batch->prefetch_info[i];\n        if (!batch->current_dicts[i] || dictSize(batch->current_dicts[i]) == 0) {\n            info->state = PREFETCH_DONE;\n            continue;\n        }\n\n        /* We skip prefetch during loading, so ht_table[0] should never be NULL\n         * when dictSize() > 0 (which only happens mid-dictEmpty via _dictReset). */\n        serverAssert(batch->current_dicts[i]->ht_table[0]);\n\n        info->ht_idx = HT_IDX_INVALID;\n        info->current_entry = NULL;\n        info->current_kv = NULL;\n        info->state = PREFETCH_BUCKET;\n        info->key_hash = dictGetHash(batch->current_dicts[i], batch->keys[i]);\n    }\n}\n\n/* Prefetch the bucket of the next hash table index.\n * If no tables are left, move to the PREFETCH_DONE state. */\nstatic void prefetchBucket(KeyPrefetchInfo *info) {\n    size_t i = batch->cur_idx;\n\n    /* Determine which hash table to use */\n    if (info->ht_idx == HT_IDX_INVALID) {\n        info->ht_idx = HT_IDX_FIRST;\n    } else if (info->ht_idx == HT_IDX_FIRST && dictIsRehashing(batch->current_dicts[i])) {\n        info->ht_idx = HT_IDX_SECOND;\n    } else {\n        /* No more tables left - mark as done. 
*/\n        markKeyAsdone(info);\n        return;\n    }\n\n    /* Prefetch the bucket */\n    info->bucket_idx = info->key_hash & DICTHT_SIZE_MASK(batch->current_dicts[i]->ht_size_exp[info->ht_idx]);\n    prefetchAndMoveToNextKey(&batch->current_dicts[i]->ht_table[info->ht_idx][info->bucket_idx]);\n    info->current_entry = NULL;\n    info->state = PREFETCH_ENTRY;\n}\n\n/* Prefetch the entry in the bucket and move to the PREFETCH_KVOBJ state.\n * If no more entries in the bucket, move to the PREFETCH_BUCKET state to look at the next table. */\nstatic void prefetchEntry(KeyPrefetchInfo *info) {\n    size_t i = batch->cur_idx;\n\n    if (info->current_entry) {\n        /* We already found an entry in the bucket - move to the next entry */\n        info->current_entry = dictGetNext(info->current_entry);\n    } else {\n        /* Go to the first entry in the bucket */\n        info->current_entry = batch->current_dicts[i]->ht_table[info->ht_idx][info->bucket_idx];\n    }\n\n    if (info->current_entry) {\n        prefetchAndMoveToNextKey(info->current_entry);\n        info->current_kv = NULL;\n        info->state = PREFETCH_KVOBJ;\n    } else {\n        /* No entry found in the bucket - try the bucket in the next table */\n        info->state = PREFETCH_BUCKET;\n    }\n}\n\n/* Prefetch the kv object in the dict entry, and move to the PREFETCH_VALDATA state. */\nstatic inline void prefetchKVOject(KeyPrefetchInfo *info) {\n    kvobj *kv = dictGetKey(info->current_entry);\n    int is_kv = dictEntryIsKey(info->current_entry);\n\n    info->current_kv = kv;\n    info->state = PREFETCH_VALDATA;\n    /* If the entry is a pointer to the kv object, we don't need to prefetch it */\n    if (!is_kv) prefetchAndMoveToNextKey(kv);\n}\n\n/* Prefetch the value data of the kv object found in dict entry. */\nstatic void prefetchValueData(KeyPrefetchInfo *info) {\n    size_t i = batch->cur_idx;\n    kvobj *kv = info->current_kv;\n    sds key = kvobjGetKey(kv);\n\n    /* 1. 
If this is the last element, we assume a hit and don't compare the keys\n     * 2. This kv object is the target of the lookup. */\n    if ((!dictGetNext(info->current_entry) && !dictIsRehashing(batch->current_dicts[i])) ||\n        dictCompareKeys(batch->current_dicts[i], batch->keys[i], key))\n    {\n        if (batch->get_value_data_func) {\n            void *value_data = batch->get_value_data_func(kv);\n            if (value_data) prefetchAndMoveToNextKey(value_data);\n        }\n        markKeyAsdone(info);\n    } else {\n        /* Not found in the current entry, move to the next entry */\n        info->state = PREFETCH_ENTRY;\n    }\n}\n\n/* Prefetch dictionary data for an array of keys.\n *\n * This function takes an array of dictionaries and keys, attempting to bring\n * data closer to the L1 cache that might be needed for dictionary operations\n * on those keys.\n *\n * The dictFind algorithm:\n * 1. Evaluate the hash of the key\n * 2. Access the index in the first table\n * 3. Walk the entries linked list until the key is found\n *    If the key hasn't been found and the dictionary is in the middle of rehashing,\n *    access the index on the second table and repeat step 3\n *\n * dictPrefetch executes the same algorithm as dictFind, but one step at a time\n * for each key. 
Instead of waiting for data to be read from memory, it prefetches\n * the data and then moves on to execute the next prefetch for another key.\n *\n * dicts - An array of dictionaries to prefetch data from.\n * get_val_data_func - A callback function that dictPrefetch can invoke\n * to bring the key's value data closer to the L1 cache as well.\n */\nstatic void dictPrefetch(dict **dicts, GetValueDataFunc get_val_data_func) {\n    initBatchInfo(dicts, get_val_data_func);\n    KeyPrefetchInfo *info;\n    while ((info = getNextPrefetchInfo())) {\n        switch (info->state) {\n        case PREFETCH_BUCKET: prefetchBucket(info); break;\n        case PREFETCH_ENTRY: prefetchEntry(info); break;\n        case PREFETCH_KVOBJ: prefetchKVOject(info); break;\n        case PREFETCH_VALDATA: prefetchValueData(info); break;\n        default: serverPanic(\"Unknown prefetch state %d\", info->state);\n        }\n    }\n}\n\n/* Helper function to get the value pointer of a kv object. */\nstatic void *getObjectValuePtr(const void *value) {\n    kvobj *kv = (kvobj *)value;\n    return (kv->type == OBJ_STRING && kv->encoding == OBJ_ENCODING_RAW) ? kv->ptr : NULL;\n}\n\nvoid resetCommandsBatch(void) {\n    if (batch == NULL) {\n        /* Handle the case where prefetching becomes enabled from disabled. */\n        if (server.prefetch_batch_max_size) prefetchCommandsBatchInit();\n        return;\n    }\n\n    batch->cur_idx = 0;\n    batch->key_count = 0;\n    batch->client_count = 0;\n    batch->pcmd_count = 0;\n\n    /* Handle the case where the max prefetch size has been changed. */\n    if (batch->max_prefetch_size != (size_t)server.prefetch_batch_max_size * 2) {\n        onMaxBatchSizeChange();\n    }\n}\n\n/* Prefetching in very small batches tends to be ineffective because the technique\n * relies on a small gap—typically a few CPU cycles—between issuing the prefetch\n * and performing the actual memory access. 
If the batch is too small, this delay\n * cannot be effectively inserted, and the prefetching yields little to no benefit.\n *\n * To avoid wasting effort, when the remaining data is small (less than twice the\n * maximum batch size), we simply prefetch all of it at once. Otherwise, we only\n * prefetch a limited portion, capped at the configured maximum. */\nint determinePrefetchCount(int len) {\n    if (!batch) return 0;\n\n    /* The batch max size is double of the configured size. */\n    int config_size = batch->max_prefetch_size / 2;\n    return len < (int)batch->max_prefetch_size ? len : config_size;\n}\n\n/* Prefetch command-related data:\n * 1. Prefetch the command arguments allocated by the I/O thread to bring them\n *    closer to the L1 cache.\n * 2. Prefetch the io_deferred_objects for all clients.\n * 3. Prefetch the keys and values for all commands in the current batch from\n *    the main dictionaries. */\nvoid prefetchCommands(void) {\n    if (!batch || server.loading) return;\n\n    /* Prefetch argv's for all pending commands */\n    for (size_t i = 0; i < batch->pcmd_count; i++) {\n        pendingCommand *pcmd = batch->pending_cmds[i];\n        if (unlikely(pcmd->argc <= 0)) continue;\n        for (int j = 0; j < pcmd->argc; j++) {\n            redis_prefetch_read(pcmd->argv[j]);\n        }\n    }\n\n    /* Prefetch the argv->ptr if required */\n    for (size_t i = 0; i < batch->pcmd_count; i++) {\n        pendingCommand *pcmd = batch->pending_cmds[i];\n        if (unlikely(pcmd->argc <= 1)) continue;\n        /* Skip the first argument (command name), as it's typically short */\n        for (int j = 1; j < pcmd->argc; j++) {\n            if (pcmd->argv[j]->encoding == OBJ_ENCODING_RAW) {\n                redis_prefetch_read(pcmd->argv[j]->ptr);\n            }\n        }\n    }\n\n    /* Prefetch io_deferred_objects for all clients */\n    for (size_t i = 0; i < batch->client_count; i++) {\n        client *c = batch->clients[i];\n        if 
(!c->io_deferred_objects || c->io_deferred_objects_num == 0) continue;\n        for (int j = 0; j < c->io_deferred_objects_num; j++)\n            redis_prefetch_read(c->io_deferred_objects[j]);\n    }\n\n    /* Get the keys ptrs - we do it here after the key obj was prefetched. */\n    for (size_t i = 0; i < batch->key_count; i++) {\n        batch->keys[i] = ((robj *)batch->keys[i])->ptr;\n    }\n\n    /* Prefetch dict keys for all commands.\n     * Prefetching is beneficial only if there are more than one key. */\n    if (batch->key_count > 1) {\n        server.stat_total_prefetch_batches++;\n        /* Prefetch keys from the main dict */\n        dictPrefetch(batch->keys_dicts, getObjectValuePtr);\n    }\n}\n\n/* Adds the client's command to the current batch.\n *\n * Returns C_OK if the command was added successfully, C_ERR otherwise. */\nint addCommandToBatch(client *c) {\n    if (unlikely(!batch)) return C_ERR;\n\n    /* If the batch is full, process it.\n     * We also check the client count to handle cases where\n     * no keys exist for the clients' commands. */\n    if (batch->client_count == batch->max_prefetch_size ||\n        batch->key_count == batch->max_prefetch_size ||\n        batch->pcmd_count == batch->max_prefetch_size)\n    {\n        return C_ERR;\n    }\n\n    /* Avoid partial prefetching: if the batch already has commands and adding this\n     * client's ready commands would likely exceed the batch size limit, reject\n     * the entire client. This is a conservative estimate using command count as\n     * a proxy for key count to ensure all keys from a client are either fully\n     * prefetched together or not prefetched at all. 
*/\n    if (batch->pcmd_count > 0 &&\n        (c->pending_cmds.ready_len + batch->key_count > batch->max_prefetch_size ||\n         c->pending_cmds.ready_len + batch->pcmd_count > batch->max_prefetch_size))\n    {\n        return C_ERR;\n    }\n\n    batch->clients[batch->client_count++] = c;\n\n    pendingCommand *pcmd = c->pending_cmds.head;\n    while (pcmd != NULL && batch->pcmd_count < batch->max_prefetch_size) {\n        if (pcmd->next) redis_prefetch_read(pcmd->next);\n\n        /* Skip commands that have not been preprocessed, or have errors. */\n        if ((pcmd->flags & PENDING_CMD_FLAG_INCOMPLETE) || !pcmd->cmd || pcmd->read_error) break;\n\n        batch->pending_cmds[batch->pcmd_count++] = pcmd;\n\n        serverAssert(pcmd->flags & PENDING_CMD_KEYS_RESULT_VALID);\n        for (int i = 0; i < pcmd->keys_result.numkeys && batch->key_count < batch->max_prefetch_size; i++) {\n            batch->keys[batch->key_count] = pcmd->argv[pcmd->keys_result.keys[i].pos];\n            batch->keys_dicts[batch->key_count] =\n                kvstoreGetDict(c->db->keys, pcmd->slot > 0 ? pcmd->slot : 0);\n            batch->key_count++;\n        }\n        pcmd = pcmd->next;\n    }\n\n    return C_OK;\n}\n"
  },
  {
    "path": "src/memory_prefetch.h",
    "content": "/*\n * Copyright (c) 2025-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#ifndef MEMORY_PREFETCH_H\n#define MEMORY_PREFETCH_H\n\nstruct client;\n\nvoid prefetchCommandsBatchInit(void);\nint determinePrefetchCount(int len);\nint addCommandToBatch(struct client *c);\nvoid resetCommandsBatch(void);\nvoid prefetchCommands(void);\n\n#endif /* MEMORY_PREFETCH_H */\n"
  },
  {
    "path": "src/memtest.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n#include <stdint.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <limits.h>\n#include <errno.h>\n#include <termios.h>\n#include <sys/ioctl.h>\n#if defined(__sun)\n#include <stropts.h>\n#endif\n#include \"config.h\"\n#include \"redisassert.h\"\n\n#if (ULONG_MAX == 4294967295UL)\n#define MEMTEST_32BIT\n#elif (ULONG_MAX == 18446744073709551615ULL)\n#define MEMTEST_64BIT\n#else\n#error \"ULONG_MAX value not supported.\"\n#endif\n\n#ifdef MEMTEST_32BIT\n#define ULONG_ONEZERO 0xaaaaaaaaUL\n#define ULONG_ZEROONE 0x55555555UL\n#else\n#define ULONG_ONEZERO 0xaaaaaaaaaaaaaaaaUL\n#define ULONG_ZEROONE 0x5555555555555555UL\n#endif\n\nstatic struct winsize ws;\nsize_t progress_printed; /* Printed chars in screen-wide progress bar. */\nsize_t progress_full; /* How many chars to write to fill the progress bar. */\n\nvoid memtest_progress_start(char *title, int pass) {\n    int j;\n\n    printf(\"\\x1b[H\\x1b[2J\");    /* Cursor home, clear screen. */\n    /* Fill with dots. */\n    for (j = 0; j < ws.ws_col*(ws.ws_row-2); j++) printf(\".\");\n    printf(\"Please keep the test running several minutes per GB of memory.\\n\");\n    printf(\"Also check http://www.memtest86.com/ and http://pyropus.ca/software/memtester/\");\n    printf(\"\\x1b[H\\x1b[2K\");          /* Cursor home, clear current line.  */\n    printf(\"%s [%d]\\n\", title, pass); /* Print title. */\n    progress_printed = 0;\n    progress_full = (size_t)ws.ws_col*(ws.ws_row-3);\n    fflush(stdout);\n}\n\nvoid memtest_progress_end(void) {\n    printf(\"\\x1b[H\\x1b[2J\");    /* Cursor home, clear screen. 
*/\n}\n\nvoid memtest_progress_step(size_t curr, size_t size, char c) {\n    size_t chars = ((unsigned long long)curr*progress_full)/size, j;\n\n    for (j = 0; j < chars-progress_printed; j++) printf(\"%c\",c);\n    progress_printed = chars;\n    fflush(stdout);\n}\n\n/* Test that addressing is fine. Every location is populated with its own\n * address, and finally verified. This test is very fast and may quickly\n * detect serious issues with the memory subsystem. */\nint memtest_addressing(unsigned long *l, size_t bytes, int interactive) {\n    unsigned long words = bytes/sizeof(unsigned long);\n    unsigned long j, *p;\n\n    /* Fill */\n    p = l;\n    for (j = 0; j < words; j++) {\n        *p = (unsigned long)p;\n        p++;\n        if ((j & 0xffff) == 0 && interactive)\n            memtest_progress_step(j,words*2,'A');\n    }\n    /* Test */\n    p = l;\n    for (j = 0; j < words; j++) {\n        if (*p != (unsigned long)p) {\n            if (interactive) {\n                printf(\"\\n*** MEMORY ADDRESSING ERROR: %p contains %lu\\n\",\n                    (void*) p, *p);\n                exit(1);\n            }\n            return 1;\n        }\n        p++;\n        if ((j & 0xffff) == 0 && interactive)\n            memtest_progress_step(j+words,words*2,'A');\n    }\n    return 0;\n}\n\n/* Fill words stepping a single page at every write, so we continue to\n * touch all the pages in the smallest amount of time reducing the\n * effectiveness of caches, and making it hard for the OS to transfer\n * pages on the swap.\n *\n * In this test we can't call rand() since the system may be completely\n * unable to handle library calls, so we have to resort to our own\n * PRNG that only uses local state. We use an xorshift* PRNG. 
*/\n#define xorshift64star_next() do { \\\n        rseed ^= rseed >> 12; \\\n        rseed ^= rseed << 25; \\\n        rseed ^= rseed >> 27; \\\n        rout = rseed * UINT64_C(2685821657736338717); \\\n} while(0)\n\nvoid memtest_fill_random(unsigned long *l, size_t bytes, int interactive) {\n    unsigned long step = 4096/sizeof(unsigned long);\n    unsigned long words = bytes/sizeof(unsigned long)/2;\n    unsigned long iwords = words/step;  /* words per iteration */\n    unsigned long off, w, *l1, *l2;\n    uint64_t rseed = UINT64_C(0xd13133de9afdb566); /* Just a random seed. */\n    uint64_t rout = 0;\n\n    assert((bytes & 4095) == 0);\n    for (off = 0; off < step; off++) {\n        l1 = l+off;\n        l2 = l1+words;\n        for (w = 0; w < iwords; w++) {\n            xorshift64star_next();\n            *l1 = *l2 = (unsigned long) rout;\n            l1 += step;\n            l2 += step;\n            if ((w & 0xffff) == 0 && interactive)\n                memtest_progress_step(w+iwords*off,words,'R');\n        }\n    }\n}\n\n/* Like memtest_fill_random() but uses the two specified values to fill\n * memory, in an alternated way (v1|v2|v1|v2|...) */\nvoid memtest_fill_value(unsigned long *l, size_t bytes, unsigned long v1,\n                        unsigned long v2, char sym, int interactive)\n{\n    unsigned long step = 4096/sizeof(unsigned long);\n    unsigned long words = bytes/sizeof(unsigned long)/2;\n    unsigned long iwords = words/step;  /* words per iteration */\n    unsigned long off, w, *l1, *l2, v;\n\n    assert((bytes & 4095) == 0);\n    for (off = 0; off < step; off++) {\n        l1 = l+off;\n        l2 = l1+words;\n        v = (off & 1) ? 
v2 : v1;\n        for (w = 0; w < iwords; w++) {\n#ifdef MEMTEST_32BIT\n            *l1 = *l2 = ((unsigned long)     v) |\n                        (((unsigned long)    v) << 16);\n#else\n            *l1 = *l2 = ((unsigned long)     v) |\n                        (((unsigned long)    v) << 16) |\n                        (((unsigned long)    v) << 32) |\n                        (((unsigned long)    v) << 48);\n#endif\n            l1 += step;\n            l2 += step;\n            if ((w & 0xffff) == 0 && interactive)\n                memtest_progress_step(w+iwords*off,words,sym);\n        }\n    }\n}\n\nint memtest_compare(unsigned long *l, size_t bytes, int interactive) {\n    unsigned long words = bytes/sizeof(unsigned long)/2;\n    unsigned long w, *l1, *l2;\n\n    assert((bytes & 4095) == 0);\n    l1 = l;\n    l2 = l1+words;\n    for (w = 0; w < words; w++) {\n        if (*l1 != *l2) {\n            if (interactive) {\n                printf(\"\\n*** MEMORY ERROR DETECTED: %p != %p (%lu vs %lu)\\n\",\n                    (void*)l1, (void*)l2, *l1, *l2);\n                exit(1);\n            }\n            return 1;\n        }\n        l1 ++;\n        l2 ++;\n        if ((w & 0xffff) == 0 && interactive)\n            memtest_progress_step(w,words,'=');\n    }\n    return 0;\n}\n\nint memtest_compare_times(unsigned long *m, size_t bytes, int pass, int times,\n                          int interactive)\n{\n    int j;\n    int errors = 0;\n\n    for (j = 0; j < times; j++) {\n        if (interactive) memtest_progress_start(\"Compare\",pass);\n        errors += memtest_compare(m,bytes,interactive);\n        if (interactive) memtest_progress_end();\n    }\n    return errors;\n}\n\n/* Test the specified memory. The number of bytes must be a multiple of 4096.\n * If interactive is true the program exits with an error and prints\n * ASCII art to show progress. 
When interactive is 0, instead, it can\n * be used as an API call, and it returns 1 if memory errors were found or\n * 0 if no errors were detected. */\nint memtest_test(unsigned long *m, size_t bytes, int passes, int interactive) {\n    int pass = 0;\n    int errors = 0;\n\n    while (pass != passes) {\n        pass++;\n\n        if (interactive) memtest_progress_start(\"Addressing test\",pass);\n        errors += memtest_addressing(m,bytes,interactive);\n        if (interactive) memtest_progress_end();\n\n        if (interactive) memtest_progress_start(\"Random fill\",pass);\n        memtest_fill_random(m,bytes,interactive);\n        if (interactive) memtest_progress_end();\n        errors += memtest_compare_times(m,bytes,pass,4,interactive);\n\n        if (interactive) memtest_progress_start(\"Solid fill\",pass);\n        memtest_fill_value(m,bytes,0,(unsigned long)-1,'S',interactive);\n        if (interactive) memtest_progress_end();\n        errors += memtest_compare_times(m,bytes,pass,4,interactive);\n\n        if (interactive) memtest_progress_start(\"Checkerboard fill\",pass);\n        memtest_fill_value(m,bytes,ULONG_ONEZERO,ULONG_ZEROONE,'C',interactive);\n        if (interactive) memtest_progress_end();\n        errors += memtest_compare_times(m,bytes,pass,4,interactive);\n    }\n    return errors;\n}\n\n/* A version of memtest_test() that tests memory in small pieces\n * in order to restore the memory content at exit.\n *\n * One problem with this approach is that the cache can prevent\n * real memory accesses, and we can't test big chunks of memory at the\n * same time, because we need to back them up on the stack (the allocator\n * may not be usable or we may already be in an out-of-memory condition).\n * So we try to trash the cache with useless memory accesses\n * between the fill and compare cycles. 
*/\n#define MEMTEST_BACKUP_WORDS (1024*(1024/sizeof(long)))\n/* Random accesses of MEMTEST_DECACHE_SIZE are performed at the start and\n * end of the region between fill and compare cycles in order to trash\n * the cache. */\n#define MEMTEST_DECACHE_SIZE (1024*8)\n\nREDIS_NO_SANITIZE(\"undefined\")\nint memtest_preserving_test(unsigned long *m, size_t bytes, int passes) {\n    unsigned long backup[MEMTEST_BACKUP_WORDS];\n    unsigned long *p = m;\n    unsigned long *end = (unsigned long*) (((unsigned char*)m)+(bytes-MEMTEST_DECACHE_SIZE));\n    size_t left = bytes;\n    int errors = 0;\n\n    if (bytes & 4095) return 0; /* Can't test across 4k page boundaries. */\n    if (bytes < 4096*2) return 0; /* Can't test a single page. */\n\n    while(left) {\n        /* If we have to test a single final page, go back a single page\n         * so that we can test two pages, since the code can't test a single\n         * page but at least two. */\n        if (left == 4096) {\n            left += 4096;\n            p -= 4096/sizeof(unsigned long);\n        }\n\n        int pass = 0;\n        size_t len = (left > sizeof(backup)) ? sizeof(backup) : left;\n\n        /* Always test an even number of pages. */\n        if (len/4096 % 2) len -= 4096;\n\n        memcpy(backup,p,len); /* Backup. 
*/\n        while(pass != passes) {\n            pass++;\n            errors += memtest_addressing(p,len,0);\n            memtest_fill_random(p,len,0);\n            if (bytes >= MEMTEST_DECACHE_SIZE) {\n                memtest_compare_times(m,MEMTEST_DECACHE_SIZE,pass,1,0);\n                memtest_compare_times(end,MEMTEST_DECACHE_SIZE,pass,1,0);\n            }\n            errors += memtest_compare_times(p,len,pass,4,0);\n            memtest_fill_value(p,len,0,(unsigned long)-1,'S',0);\n            if (bytes >= MEMTEST_DECACHE_SIZE) {\n                memtest_compare_times(m,MEMTEST_DECACHE_SIZE,pass,1,0);\n                memtest_compare_times(end,MEMTEST_DECACHE_SIZE,pass,1,0);\n            }\n            errors += memtest_compare_times(p,len,pass,4,0);\n            memtest_fill_value(p,len,ULONG_ONEZERO,ULONG_ZEROONE,'C',0);\n            if (bytes >= MEMTEST_DECACHE_SIZE) {\n                memtest_compare_times(m,MEMTEST_DECACHE_SIZE,pass,1,0);\n                memtest_compare_times(end,MEMTEST_DECACHE_SIZE,pass,1,0);\n            }\n            errors += memtest_compare_times(p,len,pass,4,0);\n        }\n        memcpy(p,backup,len); /* Restore. */\n        left -= len;\n        p += len/sizeof(unsigned long);\n    }\n    return errors;\n}\n\n/* Perform an interactive test allocating the specified number of megabytes. 
*/\nvoid memtest_alloc_and_test(size_t megabytes, int passes) {\n    size_t bytes = megabytes*1024*1024;\n    unsigned long *m = malloc(bytes);\n\n    if (m == NULL) {\n        fprintf(stderr,\"Unable to allocate %zu megabytes: %s\",\n            megabytes, strerror(errno));\n        exit(1);\n    }\n    memtest_test(m,bytes,passes,1);\n    free(m);\n}\n\nvoid memtest(size_t megabytes, int passes) {\n#if !defined(__HAIKU__)\n    if (ioctl(1, TIOCGWINSZ, &ws) == -1) {\n        ws.ws_col = 80;\n        ws.ws_row = 20;\n    }\n#else\n    ws.ws_col = 80;\n    ws.ws_row = 20;\n#endif\n    memtest_alloc_and_test(megabytes,passes);\n    printf(\"\\nYour memory passed this test.\\n\");\n    printf(\"If you are still in doubt, please use the following two tools:\\n\");\n    printf(\"1) memtest86: http://www.memtest86.com/\\n\");\n    printf(\"2) memtester: http://pyropus.ca/software/memtester/\\n\");\n    exit(0);\n}\n"
  },
  {
    "path": "src/mkreleasehdr.sh",
    "content": "#!/bin/sh\nGIT_SHA1=`(git show-ref --head --hash=8 2> /dev/null || echo 00000000) | head -n1`\nGIT_DIRTY=`git diff --no-ext-diff -- ../src ../deps 2> /dev/null | wc -l`\nBUILD_ID=`uname -n`\"-\"`date +%s`\nif [ -n \"$SOURCE_DATE_EPOCH\" ]; then\n  BUILD_ID=$(date -u -d \"@$SOURCE_DATE_EPOCH\" +%s 2>/dev/null || date -u -r \"$SOURCE_DATE_EPOCH\" +%s 2>/dev/null || date -u +%s)\nfi\ntest -f release.h || touch release.h\n(cat release.h | grep SHA1 | grep $GIT_SHA1) && \\\n(cat release.h | grep DIRTY | grep $GIT_DIRTY) && exit 0 # Already up-to-date\necho \"#define REDIS_GIT_SHA1 \\\"$GIT_SHA1\\\"\" > release.h\necho \"#define REDIS_GIT_DIRTY \\\"$GIT_DIRTY\\\"\" >> release.h\necho \"#define REDIS_BUILD_ID \\\"$BUILD_ID\\\"\" >> release.h\necho \"#include \\\"version.h\\\"\" >> release.h\necho \"#define REDIS_BUILD_ID_RAW REDIS_VERSION REDIS_BUILD_ID REDIS_GIT_DIRTY REDIS_GIT_SHA1\" >> release.h\ntouch release.c # Force recompile of release.c\n"
  },
  {
    "path": "src/module.c",
    "content": "/*\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/* --------------------------------------------------------------------------\n * Modules API documentation information\n *\n * The comments in this file are used to generate the API documentation on the\n * Redis website.\n *\n * Each function starting with RM_ and preceded by a block comment is included\n * in the API documentation. To hide an RM_ function, put a blank line between\n * the comment and the function definition or put the comment inside the\n * function body.\n *\n * The functions are divided into sections. Each section is preceded by a\n * documentation block, which is a comment block starting with a markdown level 2\n * heading, i.e. a line starting with ##, on the first line of the comment block\n * (with the exception of a ----- line which can appear first). Other comment\n * blocks, which are not intended for the modules API user, such as this comment\n * block, do NOT start with a markdown level 2 heading, so they are not included in\n * the generated API documentation.\n *\n * The documentation comments may contain markdown formatting. Some automatic\n * replacements are done, such as the replacement of RM with RedisModule in\n * function names. 
For details, see the script src/modules/gendoc.rb.\n * -------------------------------------------------------------------------- */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"cluster_asm.h\"\n#include \"slowlog.h\"\n#include \"rdb.h\"\n#include \"monotonic.h\"\n#include \"script.h\"\n#include \"call_reply.h\"\n#include \"hdr_histogram.h\"\n#include \"crc16_slottable.h\"\n#include <dlfcn.h>\n#include <sys/stat.h>\n#include <sys/wait.h>\n#include <fcntl.h>\n#include <string.h>\n\n/* --------------------------------------------------------------------------\n * Private data structures used by the modules system. Those are data\n * structures that are never exposed to Redis Modules, if not as void\n * pointers that have an API the module can call with them)\n * -------------------------------------------------------------------------- */\n\nstruct RedisModuleInfoCtx {\n    struct RedisModule *module;\n    dict *requested_sections;\n    sds info;           /* info string we collected so far */\n    int sections;       /* number of sections we collected so far */\n    int in_section;     /* indication if we're in an active section or not */\n    int in_dict_field;  /* indication that we're currently appending to a dict */\n};\n\n/* This represents a shared API. Shared APIs will be used to populate\n * the server.sharedapi dictionary, mapping names of APIs exported by\n * modules for other modules to use, to their structure specifying the\n * function pointer that can be called. */\nstruct RedisModuleSharedAPI {\n    void *func;\n    RedisModule *module;\n};\ntypedef struct RedisModuleSharedAPI RedisModuleSharedAPI;\ntypedef struct RedisModuleKeyOptCtx RedisModuleKeyOptCtx;\n\ndict *modules; /* Hash table of modules. SDS -> RedisModule ptr.*/\n\n/* Entries in the context->amqueue array, representing objects to free\n * when the callback returns. */\nstruct AutoMemEntry {\n    void *ptr;\n    int type;\n};\n\n/* AutoMemEntry type field values. 
*/\n#define REDISMODULE_AM_KEY 0\n#define REDISMODULE_AM_STRING 1\n#define REDISMODULE_AM_REPLY 2\n#define REDISMODULE_AM_FREED 3 /* Explicitly freed by user already. */\n#define REDISMODULE_AM_DICT 4\n#define REDISMODULE_AM_INFO 5\n#define REDISMODULE_AM_CONFIG 6\n#define REDISMODULE_AM_SLOTRANGEARRAY 7\n\n/* The pool allocator block. Redis Modules can allocate memory via this special\n * allocator that will automatically release it all once the callback returns.\n * This means that it can only be used for ephemeral allocations. However\n * there are two advantages for modules to use this API:\n *\n * 1) The memory is automatically released when the callback returns.\n * 2) This allocator is faster for many small allocations since whole blocks\n *    are allocated, and small pieces returned to the caller just advancing\n *    the index of the allocation.\n *\n * Allocations are always rounded to the size of the void pointer in order\n * to always return aligned memory chunks. */\n\n#define REDISMODULE_POOL_ALLOC_MIN_SIZE (1024*8)\n#define REDISMODULE_POOL_ALLOC_ALIGN (sizeof(void*))\n\ntypedef struct RedisModulePoolAllocBlock {\n    uint32_t size;\n    uint32_t used;\n    struct RedisModulePoolAllocBlock *next;\n    char memory[];\n} RedisModulePoolAllocBlock;\n\n/* This structure represents the context in which Redis modules operate.\n * Most APIs module can access, get a pointer to the context, so that the API\n * implementation can hold state across calls, or remember what to free after\n * the call and so forth.\n *\n * Note that not all the context structure is always filled with actual values\n * but only the fields needed in a given context. */\n\nstruct RedisModuleBlockedClient;\nstruct RedisModuleUser;\n\nstruct RedisModuleCtx {\n    void *getapifuncptr;            /* NOTE: Must be the first field. */\n    struct RedisModule *module;     /* Module reference. */\n    client *client;                 /* Client calling a command. 
*/\n    struct RedisModuleBlockedClient *blocked_client; /* Blocked client for\n                                                        thread safe context. */\n    struct AutoMemEntry *amqueue;   /* Auto memory queue of objects to free. */\n    int amqueue_len;                /* Number of slots in amqueue. */\n    int amqueue_used;               /* Number of used slots in amqueue. */\n    int flags;                      /* REDISMODULE_CTX_... flags. */\n    void **postponed_arrays;        /* To set with RM_ReplySetArrayLength(). */\n    int postponed_arrays_count;     /* Number of entries in postponed_arrays. */\n    void *blocked_privdata;         /* Privdata set when unblocking a client. */\n    RedisModuleString *blocked_ready_key; /* Key ready when the reply callback\n                                             gets called for clients blocked\n                                             on keys. */\n\n    /* Used if there is the REDISMODULE_CTX_KEYS_POS_REQUEST or \n     * REDISMODULE_CTX_CHANNEL_POS_REQUEST flag set. 
*/\n    getKeysResult *keys_result;\n\n    struct RedisModulePoolAllocBlock *pa_head;\n    long long next_yield_time;\n\n    const struct RedisModuleUser *user;  /* RedisModuleUser commands executed via\n                                            RM_Call should be executed as, if set */\n};\ntypedef struct RedisModuleCtx RedisModuleCtx;\n\n#define REDISMODULE_CTX_NONE (0)\n#define REDISMODULE_CTX_AUTO_MEMORY (1<<0)\n#define REDISMODULE_CTX_KEYS_POS_REQUEST (1<<1)\n#define REDISMODULE_CTX_BLOCKED_REPLY (1<<2)\n#define REDISMODULE_CTX_BLOCKED_TIMEOUT (1<<3)\n#define REDISMODULE_CTX_THREAD_SAFE (1<<4)\n#define REDISMODULE_CTX_BLOCKED_DISCONNECTED (1<<5)\n#define REDISMODULE_CTX_TEMP_CLIENT (1<<6) /* Return client object to the pool\n                                              when the context is destroyed */\n#define REDISMODULE_CTX_NEW_CLIENT (1<<7)  /* Free client object when the\n                                              context is destroyed */\n#define REDISMODULE_CTX_CHANNELS_POS_REQUEST (1<<8)\n#define REDISMODULE_CTX_COMMAND (1<<9) /* Context created to serve a command from call() or AOF (which calls cmd->proc directly) */\n\n\n/* This represents a Redis key opened with RM_OpenKey(). */\nstruct RedisModuleKey {\n    RedisModuleCtx *ctx;\n    redisDb *db;\n    robj *key;      /* Key name object. */\n    kvobj *kv;      /* Key-Value object, or NULL if the key was not found. */\n    void *iter;     /* Iterator. */\n    int mode;       /* Opening mode. */\n\n    union {\n        struct {\n            /* List, use only if value->type == OBJ_LIST */\n            listTypeEntry entry;   /* Current entry in iteration. */\n            long index;            /* Current 0-based index in iteration. */\n        } list;\n        struct {\n            /* Zset iterator, use only if value->type == OBJ_ZSET */\n            uint32_t type;         /* REDISMODULE_ZSET_RANGE_* */\n            zrangespec rs;         /* Score range. 
*/\n            zlexrangespec lrs;     /* Lex range. */\n            uint32_t start;        /* Start pos for positional ranges. */\n            uint32_t end;          /* End pos for positional ranges. */\n            void *current;         /* Zset iterator current node. */\n            int er;                /* Zset iterator end reached flag\n                                       (true if end was reached). */\n        } zset;\n        struct {\n            /* Stream, use only if value->type == OBJ_STREAM */\n            streamID currentid;    /* Current entry while iterating. */\n            int64_t numfieldsleft; /* Fields left to fetch for current entry. */\n            int signalready;       /* Flag that signalKeyAsReady() is needed. */\n        } stream;\n    } u;\n};\n\n/* RedisModuleKey 'ztype' values. */\n#define REDISMODULE_ZSET_RANGE_NONE 0       /* This must always be 0. */\n#define REDISMODULE_ZSET_RANGE_LEX 1\n#define REDISMODULE_ZSET_RANGE_SCORE 2\n#define REDISMODULE_ZSET_RANGE_POS 3\n\n/* Function pointer type of a function representing a command inside\n * a Redis module. */\nstruct RedisModuleBlockedClient;\ntypedef int (*RedisModuleCmdFunc) (RedisModuleCtx *ctx, void **argv, int argc);\ntypedef int (*RedisModuleAuthCallback)(RedisModuleCtx *ctx, void *username, void *password, RedisModuleString **err);\ntypedef void (*RedisModuleDisconnectFunc) (RedisModuleCtx *ctx, struct RedisModuleBlockedClient *bc);\n\n/* This struct holds the information about a command registered by a module.*/\nstruct RedisModuleCommand {\n    struct RedisModule *module;\n    RedisModuleCmdFunc func;\n    struct redisCommand *rediscmd;\n};\ntypedef struct RedisModuleCommand RedisModuleCommand;\n\n#define REDISMODULE_REPLYFLAG_NONE 0\n#define REDISMODULE_REPLYFLAG_TOPARSE (1<<0) /* Protocol must be parsed. */\n#define REDISMODULE_REPLYFLAG_NESTED (1<<1)  /* Nested reply object. No proto\n                                                or struct free. 
*/\n\n/* Reply of RM_Call() function. The function is filled in a lazy\n * way depending on the function called on the reply structure. By default\n * only the type, proto and protolen are filled. */\ntypedef struct CallReply RedisModuleCallReply;\n\n/* Structure to hold the module auth callback & the Module implementing it. */\ntypedef struct RedisModuleAuthCtx {\n    struct RedisModule *module;\n    RedisModuleAuthCallback auth_cb;\n} RedisModuleAuthCtx;\n\n/* Structure representing a blocked client. We get a pointer to such\n * an object when blocking from modules. */\ntypedef struct RedisModuleBlockedClient {\n    client *client;  /* Pointer to the blocked client. or NULL if the client\n                        was destroyed during the life of this object. */\n    RedisModule *module;    /* Module blocking the client. */\n    RedisModuleCmdFunc reply_callback; /* Reply callback on normal completion.*/\n    RedisModuleAuthCallback auth_reply_cb; /* Reply callback on completing blocking\n                                                    module authentication. */\n    RedisModuleCmdFunc timeout_callback; /* Reply callback on timeout. */\n    RedisModuleDisconnectFunc disconnect_callback; /* Called on disconnection.*/\n    void (*free_privdata)(RedisModuleCtx*,void*);/* privdata cleanup callback.*/\n    void *privdata;     /* Module private data that may be used by the reply\n                           or timeout callback. It is set via the\n                           RedisModule_UnblockClient() API. */\n    client *thread_safe_ctx_client; /* Fake client to be used for thread safe\n                                       context so that no lock is required. */\n    client *reply_client;           /* Fake client used to accumulate replies\n                                       in thread safe contexts. */\n    int dbid;           /* Database number selected by the original client. */\n    int blocked_on_keys;    /* If blocked via RM_BlockClientOnKeys(). 
*/\n    int unblocked;          /* Already on the moduleUnblocked list. */\n    monotime background_timer; /* Timer tracking the start of background work */\n    uint64_t background_duration; /* Current command background time duration.\n                                     Used for measuring latency of blocking cmds */\n    int blocked_on_keys_explicit_unblock; /* Set to 1 only in the case of an explicit RM_Unblock on\n                                           * a client that is blocked on keys. In this case we will\n                                           * call the timeout call back from within\n                                           * moduleHandleBlockedClients which runs from the main thread */\n} RedisModuleBlockedClient;\n\n/* This is a list of Module Auth Contexts. Each time a Module registers a callback, a new ctx is\n * added to this list. Multiple modules can register auth callbacks and the same Module can have\n * multiple auth callbacks. */\nstatic list *moduleAuthCallbacks;\n\nstatic pthread_mutex_t moduleUnblockedClientsMutex = PTHREAD_MUTEX_INITIALIZER;\nstatic list *moduleUnblockedClients;\n\n/* Pool for temporary client objects. Creating and destroying a client object is\n * costly. We manage a pool of clients to avoid this cost. Pool expands when\n * more clients are needed and shrinks when unused. Please see modulesCron()\n * for more details. */\nstatic client **moduleTempClients;\nstatic size_t moduleTempClientCap = 0;\nstatic size_t moduleTempClientCount = 0;    /* Client count in pool */\nstatic size_t moduleTempClientMinCount = 0; /* Min client count in pool since\n                                               the last cron. */\n\n/* We need a mutex that is unlocked / relocked in beforeSleep() in order to\n * allow thread safe contexts to execute commands at a safe moment. */\nstatic pthread_mutex_t moduleGIL = PTHREAD_MUTEX_INITIALIZER;\n\n/* Function pointer type for keyspace event notification subscriptions from modules. 
*/\ntypedef int (*RedisModuleNotificationFunc) (RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key);\n\n/* Function pointer type for keyspace event notifications with subkeys from modules. */\ntypedef void (*RedisModuleNotificationWithSubkeysFunc)(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key, RedisModuleString **subkeys, int count);\n\n/* Function pointer type for post jobs */\ntypedef void (*RedisModulePostNotificationJobFunc) (RedisModuleCtx *ctx, void *pd);\n\n/* Keyspace notification subscriber information.\n * See RM_SubscribeToKeyspaceEvents() for more information. */\ntypedef struct RedisModuleKeyspaceSubscriber {\n    /* The module subscribed to the event */\n    RedisModule *module;\n    /* Notification callback in the module*/\n    RedisModuleNotificationFunc notify_callback;\n    /* Extended notification callback with subkeys */\n    RedisModuleNotificationWithSubkeysFunc notify_callback_with_subkeys;\n    /* A bit mask of the events the module is interested in */\n    int event_mask;\n    /* Delivery flags for subkey notifications, controlling when the callback is invoked. */\n    int flags;\n    /* Active flag set on entry, to avoid reentrant subscribers\n     * calling themselves */\n    int active;\n} RedisModuleKeyspaceSubscriber;\n\ntypedef struct RedisModulePostExecUnitJob {\n    /* The module subscribed to the event */\n    RedisModule *module;\n    RedisModulePostNotificationJobFunc callback;\n    void *pd;\n    void (*free_pd)(void*);\n    int dbid;\n} RedisModulePostExecUnitJob;\n\n/* The module keyspace notification subscribers list */\nstatic list *moduleKeyspaceSubscribers;\n\n/* Cached event types that have at least one subscriber.\n * Updated on subscribe/unsubscribe to avoid traversing the list on every event. 
*/\nstatic int moduleKeyspaceSubscribersTypes = 0;\nstatic int moduleKeyspaceSubscribersWithSubkeysTypes = 0;\n\n/* The module post keyspace jobs list */\nstatic list *modulePostExecUnitJobs;\n\n/* Data structures related to the exported dictionary data structure. */\ntypedef struct RedisModuleDict {\n    rax *rax;                       /* The radix tree. */\n    size_t alloc_size;              /* Total memory used (in bytes) by this dict. */\n} RedisModuleDict;\n\ntypedef struct RedisModuleDictIter {\n    RedisModuleDict *dict;\n    raxIterator ri;\n} RedisModuleDictIter;\n\ntypedef struct RedisModuleCommandFilterCtx {\n    RedisModuleString **argv;\n    int argv_len;\n    int argc;\n    client *c;\n} RedisModuleCommandFilterCtx;\n\ntypedef void (*RedisModuleCommandFilterFunc) (RedisModuleCommandFilterCtx *filter);\n\ntypedef struct RedisModuleCommandFilter {\n    /* The module that registered the filter */\n    RedisModule *module;\n    /* Filter callback function */\n    RedisModuleCommandFilterFunc callback;\n    /* REDISMODULE_CMDFILTER_* flags */\n    int flags;\n} RedisModuleCommandFilter;\n\n/* Registered filters */\nstatic list *moduleCommandFilters;\n\ntypedef void (*RedisModuleForkDoneHandler) (int exitcode, int bysignal, void *user_data);\n\nstatic struct RedisModuleForkInfo {\n    RedisModuleForkDoneHandler done_handler;\n    void* done_handler_user_data;\n} moduleForkInfo = {0};\n\ntypedef struct RedisModuleServerInfoData {\n    rax *rax;                       /* parsed info data. */\n} RedisModuleServerInfoData;\n\ntypedef struct RedisModuleConfigIterator {\n    dictIterator *di; /* Iterator for the configs dict. */\n    sds pattern; /* Pattern to filter configs by name. */\n    int is_glob; /* Is the pattern a glob-pattern or a fixed string? */\n} RedisModuleConfigIterator;\n\n/* Flags for moduleCreateArgvFromUserFormat(). 
*/\n#define REDISMODULE_ARGV_REPLICATE (1<<0)\n#define REDISMODULE_ARGV_NO_AOF (1<<1)\n#define REDISMODULE_ARGV_NO_REPLICAS (1<<2)\n#define REDISMODULE_ARGV_RESP_3 (1<<3)\n#define REDISMODULE_ARGV_RESP_AUTO (1<<4)\n#define REDISMODULE_ARGV_RUN_AS_USER (1<<5)\n#define REDISMODULE_ARGV_SCRIPT_MODE (1<<6)\n#define REDISMODULE_ARGV_NO_WRITES (1<<7)\n#define REDISMODULE_ARGV_CALL_REPLIES_AS_ERRORS (1<<8)\n#define REDISMODULE_ARGV_RESPECT_DENY_OOM (1<<9)\n#define REDISMODULE_ARGV_DRY_RUN (1<<10)\n#define REDISMODULE_ARGV_ALLOW_BLOCK (1<<11)\n\n/* Determine whether Redis should signal modified key implicitly.\n * In case 'ctx' has no 'module' member (and therefore no module->options),\n * we assume default behavior, that is, Redis signals.\n * (see RM_GetThreadSafeContext) */\n#define SHOULD_SIGNAL_MODIFIED_KEYS(ctx) \\\n    ((ctx)->module? !((ctx)->module->options & REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED) : 1)\n\n/* Server events hooks data structures and defines: this modules API\n * allow modules to subscribe to certain events in Redis, such as\n * the start and end of an RDB or AOF save, the change of role in replication,\n * and similar other events. */\n\ntypedef struct RedisModuleEventListener {\n    RedisModule *module;\n    RedisModuleEvent event;\n    RedisModuleEventCallback callback;\n} RedisModuleEventListener;\n\nlist *RedisModule_EventListeners; /* Global list of all the active events. */\n\n/* Data structures related to the redis module users */\n\n/* This is the object returned by RM_CreateModuleUser(). The module API is\n * able to create users, set ACLs to such users, and later authenticate\n * clients using such newly created users. 
*/\ntypedef struct RedisModuleUser {\n    user *user; /* Reference to the real redis user */\n    int free_user; /* Indicates that user should also be freed when this object is freed */\n} RedisModuleUser;\n\n/* Data structures related to redis module configurations */\n/* The function signatures for module config get callbacks. These are identical to the ones exposed in redismodule.h. */\ntypedef RedisModuleString * (*RedisModuleConfigGetStringFunc)(const char *name, void *privdata);\ntypedef long long (*RedisModuleConfigGetNumericFunc)(const char *name, void *privdata);\ntypedef int (*RedisModuleConfigGetBoolFunc)(const char *name, void *privdata);\ntypedef int (*RedisModuleConfigGetEnumFunc)(const char *name, void *privdata);\n/* The function signatures for module config set callbacks. These are identical to the ones exposed in redismodule.h. */\ntypedef int (*RedisModuleConfigSetStringFunc)(const char *name, RedisModuleString *val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetNumericFunc)(const char *name, long long val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetBoolFunc)(const char *name, int val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetEnumFunc)(const char *name, int val, void *privdata, RedisModuleString **err);\n/* Apply signature, identical to redismodule.h */\ntypedef int (*RedisModuleConfigApplyFunc)(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err);\n\n/* Struct representing a module config. These are stored in a list in the module struct */\nstruct ModuleConfig {\n    sds name;           /* Fullname of the config (as it appears in the config file) */\n    sds alias;          /* Optional alias for the configuration. NULL if none exists */\n\n    int unprefixedFlag; /* Indicates if the REDISMODULE_CONFIG_UNPREFIXED flag was set. 
\n                         * If the configuration name was prefixed,during get_fn/set_fn \n                         * callbacks, it should be reported without the prefix */\n\n    void *privdata; /* Optional data passed into the module config callbacks */\n    union get_fn { /* The get callback specified by the module */\n        RedisModuleConfigGetStringFunc get_string;\n        RedisModuleConfigGetNumericFunc get_numeric;\n        RedisModuleConfigGetBoolFunc get_bool;\n        RedisModuleConfigGetEnumFunc get_enum;\n    } get_fn;\n    union set_fn { /* The set callback specified by the module */\n        RedisModuleConfigSetStringFunc set_string;\n        RedisModuleConfigSetNumericFunc set_numeric;\n        RedisModuleConfigSetBoolFunc set_bool;\n        RedisModuleConfigSetEnumFunc set_enum;\n    } set_fn;\n    RedisModuleConfigApplyFunc apply_fn;\n    RedisModule *module;\n};\n\ntypedef struct RedisModuleAsyncRMCallPromise{\n    size_t ref_count;\n    void *private_data;\n    RedisModule *module;\n    RedisModuleOnUnblocked on_unblocked;\n    client *c;\n    RedisModuleCtx *ctx;\n} RedisModuleAsyncRMCallPromise;\n\n/* --------------------------------------------------------------------------\n * Prototypes\n * -------------------------------------------------------------------------- */\n\nvoid RM_FreeCallReply(RedisModuleCallReply *reply);\nvoid RM_CloseKey(RedisModuleKey *key);\nvoid autoMemoryCollect(RedisModuleCtx *ctx);\nrobj **moduleCreateArgvFromUserFormat(const char *cmdname, const char *fmt, int *argcp, int *flags, va_list ap);\nvoid RM_ZsetRangeStop(RedisModuleKey *kp);\nstatic void zsetKeyReset(RedisModuleKey *key);\nstatic void moduleInitKeyTypeSpecific(RedisModuleKey *key);\nvoid RM_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d);\nvoid RM_FreeServerInfo(RedisModuleCtx *ctx, RedisModuleServerInfoData *data);\nvoid RM_ConfigIteratorRelease(RedisModuleCtx *ctx, RedisModuleConfigIterator *iter);\nvoid RM_ClusterFreeSlotRanges(RedisModuleCtx *ctx, 
RedisModuleSlotRangeArray *slots);\n\n/* Helpers for RM_SetCommandInfo. */\nstatic int moduleValidateCommandInfo(const RedisModuleCommandInfo *info);\nstatic int64_t moduleConvertKeySpecsFlags(int64_t flags, int from_api);\nstatic int moduleValidateCommandArgs(RedisModuleCommandArg *args,\n                                     const RedisModuleCommandInfoVersion *version);\nstatic struct redisCommandArg *moduleCopyCommandArgs(RedisModuleCommandArg *args,\n                                                     const RedisModuleCommandInfoVersion *version);\nstatic redisCommandArgType moduleConvertArgType(RedisModuleCommandArgType type, int *error);\nstatic int moduleConvertArgFlags(int flags);\nvoid moduleCreateContext(RedisModuleCtx *out_ctx, RedisModule *module, int ctx_flags);\n\n/* Common helper functions. */\nint moduleVerifyResourceName(const char *name);\n\n/* --------------------------------------------------------------------------\n * ## Heap allocation raw functions\n *\n * Memory allocated with these functions are taken into account by Redis key\n * eviction algorithms and are reported in Redis memory usage information.\n * -------------------------------------------------------------------------- */\n\n/* Use like malloc(). Memory allocated with this function is reported in\n * Redis INFO memory, used for keys eviction according to maxmemory settings\n * and in general is taken into account as memory allocated by Redis.\n * You should avoid using malloc().\n * This function panics if unable to allocate enough memory. */\nvoid *RM_Alloc(size_t bytes) {\n    /* Use 'zmalloc_usable()' instead of 'zmalloc()' to allow the compiler\n     * to recognize the additional memory size, which means that modules can\n     * use the memory reported by 'RM_MallocUsableSize()' safely. 
In theory this\n     * isn't really needed since this API can't be inlined (not even for embedded\n     * modules like TLS; we use function pointers for module APIs), and the API doesn't\n     * have the malloc_size attribute, but it's hard to predict how smart future compilers\n     * will be, so better safe than sorry. */\n    return zmalloc_usable(bytes,NULL);\n}\n\n/* Similar to RM_Alloc, but returns NULL in case of allocation failure, instead\n * of panicking. */\nvoid *RM_TryAlloc(size_t bytes) {\n    return ztrymalloc_usable(bytes,NULL);\n}\n\n/* Use like calloc(). Memory allocated with this function is reported in\n * Redis INFO memory, used for keys eviction according to maxmemory settings\n * and in general is taken into account as memory allocated by Redis.\n * You should avoid using calloc() directly. */\nvoid *RM_Calloc(size_t nmemb, size_t size) {\n    return zcalloc_usable(nmemb*size,NULL);\n}\n\n/* Similar to RM_Calloc, but returns NULL in case of allocation failure, instead\n * of panicking. */\nvoid *RM_TryCalloc(size_t nmemb, size_t size) {\n    return ztrycalloc_usable(nmemb*size,NULL);\n}\n\n/* Use like realloc() for memory obtained with RedisModule_Alloc(). */\nvoid* RM_Realloc(void *ptr, size_t bytes) {\n    return zrealloc_usable(ptr,bytes,NULL,NULL);\n}\n\n/* Similar to RM_Realloc, but returns NULL in case of allocation failure,\n * instead of panicking. */\nvoid *RM_TryRealloc(void *ptr, size_t bytes) {\n    return ztryrealloc_usable(ptr,bytes,NULL,NULL);\n}\n\n/* Use like free() for memory obtained by RedisModule_Alloc() and\n * RedisModule_Realloc(). However you should never try to free with\n * RedisModule_Free() memory allocated with malloc() inside your module. */\nvoid RM_Free(void *ptr) {\n    zfree(ptr);\n}\n\n/* Like strdup() but returns memory allocated with RedisModule_Alloc(). 
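For example, a module might copy a C\n * string into module-tracked memory and release it later (a minimal sketch,\n * the string content is arbitrary):\n *\n *     char *copy = RedisModule_Strdup(\"some string\");\n *     ... use the copy ...\n *     RedisModule_Free(copy);\n 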
*/\nchar *RM_Strdup(const char *str) {\n    return zstrdup(str);\n}\n\n/* --------------------------------------------------------------------------\n * Pool allocator\n * -------------------------------------------------------------------------- */\n\n/* Release the chain of blocks used for pool allocations. */\nvoid poolAllocRelease(RedisModuleCtx *ctx) {\n    RedisModulePoolAllocBlock *head = ctx->pa_head, *next;\n\n    while(head != NULL) {\n        next = head->next;\n        zfree(head);\n        head = next;\n    }\n    ctx->pa_head = NULL;\n}\n\n/* Return heap-allocated memory that will be freed automatically when the\n * module callback function returns. Mostly suitable for small allocations\n * that are short-lived and must be released when the callback returns\n * anyway. The returned memory is aligned to the architecture word size\n * if at least word size bytes are requested, otherwise it is just\n * aligned to the next power of two, so for example a 3 bytes request is\n * 4 bytes aligned while a 2 bytes request is 2 bytes aligned.\n *\n * There is no realloc-style function, because when reallocation is needed,\n * using the pool allocator is not a good idea.\n *\n * The function returns NULL if `bytes` is 0. */\nvoid *RM_PoolAlloc(RedisModuleCtx *ctx, size_t bytes) {\n    if (bytes == 0) return NULL;\n    RedisModulePoolAllocBlock *b = ctx->pa_head;\n    size_t left = b ? b->size - b->used : 0;\n\n    /* Fix alignment. */\n    if (left >= bytes) {\n        size_t alignment = REDISMODULE_POOL_ALLOC_ALIGN;\n        while (bytes < alignment && alignment/2 >= bytes) alignment /= 2;\n        if (b->used % alignment)\n            b->used += alignment - (b->used % alignment);\n        left = (b->used > b->size) ? 0 : b->size - b->used;\n    }\n\n    /* Create a new block if needed. 
*/\n    if (left < bytes) {\n        size_t blocksize = REDISMODULE_POOL_ALLOC_MIN_SIZE;\n        if (blocksize < bytes) blocksize = bytes;\n        b = zmalloc(sizeof(*b) + blocksize);\n        b->size = blocksize;\n        b->used = 0;\n        b->next = ctx->pa_head;\n        ctx->pa_head = b;\n    }\n\n    char *retval = b->memory + b->used;\n    b->used += bytes;\n    return retval;\n}\n\n/* --------------------------------------------------------------------------\n * Helpers for modules API implementation\n * -------------------------------------------------------------------------- */\n\nclient *moduleAllocTempClient(void) {\n    client *c = NULL;\n\n    if (moduleTempClientCount > 0) {\n        c = moduleTempClients[--moduleTempClientCount];\n        if (moduleTempClientCount < moduleTempClientMinCount)\n            moduleTempClientMinCount = moduleTempClientCount;\n    } else {\n        c = createClient(NULL);\n        c->flags |= CLIENT_MODULE;\n        c->user = NULL; /* Root user */\n    }\n    return c;\n}\n\nstatic void freeRedisModuleAsyncRMCallPromise(RedisModuleAsyncRMCallPromise *promise) {\n    if (--promise->ref_count > 0) {\n        return;\n    }\n    /* When the promise is finally freed it can not have a client attached to it.\n     * Either releasing the client or RM_CallReplyPromiseAbort would have removed it. */\n    serverAssert(!promise->c);\n    zfree(promise);\n}\n\nvoid moduleReleaseTempClient(client *c) {\n    if (moduleTempClientCount == moduleTempClientCap) {\n        moduleTempClientCap = moduleTempClientCap ? 
moduleTempClientCap*2 : 32;\n        moduleTempClients = zrealloc(moduleTempClients, sizeof(c)*moduleTempClientCap);\n    }\n    clearClientConnectionState(c);\n    listEmpty(c->reply);\n    c->reply_bytes = 0;\n    c->duration = 0;\n    resetClient(c, -1);\n    serverAssert(c->all_argv_len_sum == 0);\n    c->bufpos = 0;\n    c->flags = CLIENT_MODULE;\n    c->user = NULL; /* Root user */\n    c->cmd = c->lastcmd = c->realcmd = NULL;\n    if (c->bstate.async_rm_call_handle) {\n        RedisModuleAsyncRMCallPromise *promise = c->bstate.async_rm_call_handle;\n        promise->c = NULL; /* Remove the client from the promise so it will no longer be possible to abort it. */\n        freeRedisModuleAsyncRMCallPromise(promise);\n        c->bstate.async_rm_call_handle = NULL;\n    }\n    moduleTempClients[moduleTempClientCount++] = c;\n}\n\n/* Create an empty key of the specified type. `key` must point to a key object\n * opened for writing where the `.value` member is set to NULL because the\n * key was found to be non existing.\n *\n * On success REDISMODULE_OK is returned and the key is populated with\n * the value of the specified type. The function fails and returns\n * REDISMODULE_ERR if:\n *\n * 1. The key is not open for writing.\n * 2. The key is not empty.\n * 3. The specified type is unknown.\n */\nint moduleCreateEmptyKey(RedisModuleKey *key, int type) {\n    robj *obj;\n\n    /* The key must be open for writing and non existing to proceed. 
*/\n    if (!(key->mode & REDISMODULE_WRITE) || key->kv)\n        return REDISMODULE_ERR;\n\n    switch(type) {\n    case REDISMODULE_KEYTYPE_LIST:\n        obj = createListListpackObject();\n        break;\n    case REDISMODULE_KEYTYPE_ZSET:\n        obj = createZsetListpackObject();\n        break;\n    case REDISMODULE_KEYTYPE_HASH:\n        obj = createHashObject();\n        break;\n    case REDISMODULE_KEYTYPE_STREAM:\n        obj = createStreamObject();\n        break;\n    default: return REDISMODULE_ERR;\n    }\n\n    key->kv = dbAdd(key->db, key->key, &obj);\n    moduleInitKeyTypeSpecific(key);\n    return REDISMODULE_OK;\n}\n\n/* Frees key->iter and sets it to NULL. */\nstatic void moduleFreeKeyIterator(RedisModuleKey *key) {\n    serverAssert(key->iter != NULL);\n    switch (key->kv->type) {\n    case OBJ_LIST:\n        listTypeResetIterator(key->iter);\n        zfree(key->iter);\n        break;\n    case OBJ_STREAM:\n        streamIteratorStop(key->iter);\n        zfree(key->iter);\n        break;\n    default: serverAssert(0); /* No key->iter for other types. */\n    }\n    key->iter = NULL;\n}\n\n/* Callback for listTypeTryConversion().\n * Frees list iterator and sets it to NULL. */\nstatic void moduleFreeListIterator(void *data) {\n    RedisModuleKey *key = (RedisModuleKey*)data;\n    serverAssert(key->kv->type == OBJ_LIST);\n    if (key->iter) moduleFreeKeyIterator(key);\n}\n\n/* This function is called in low-level API implementation functions in order\n * to check if the value associated with the key remained empty after an\n * operation that removed elements from an aggregate data type.\n *\n * If this happens, the key is deleted from the DB and the key object state\n * is set to the right one in order to be targeted again by write operations\n * possibly recreating the key if needed.\n *\n * The function returns 1 if the key value object is found empty and is\n * deleted, otherwise 0 is returned. 
*/\nint moduleDelKeyIfEmpty(RedisModuleKey *key) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->kv == NULL) return 0;\n    int isempty;\n    robj *o = key->kv;\n\n    switch(o->type) {\n    case OBJ_LIST: isempty = listTypeLength(o) == 0; break;\n    case OBJ_SET: isempty = setTypeSize(o) == 0; break;\n    case OBJ_ZSET: isempty = zsetLength(o) == 0; break;\n    case OBJ_HASH: isempty = hashTypeLength(o, 0) == 0; break;\n    case OBJ_STREAM: isempty = streamLength(o) == 0; break;\n    default: isempty = 0;\n    }\n\n    if (isempty) {\n        if (key->iter) moduleFreeKeyIterator(key);\n        dbDelete(key->db,key->key);\n        key->kv = NULL;\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Update the cached subscriber types by walking the subscriber list.\n * Called after subscribe/unsubscribe operations. */\nstatic void moduleUpdateKeyspaceSubscribersTypes(void) {\n    int mask = 0, subkeys_mask = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(moduleKeyspaceSubscribers,&li);\n    while((ln = listNext(&li))) {\n        RedisModuleKeyspaceSubscriber *sub = ln->value;\n        mask |= sub->event_mask;\n        if (sub->notify_callback_with_subkeys)\n            subkeys_mask |= sub->event_mask;\n    }\n    moduleKeyspaceSubscribersTypes = mask;\n    moduleKeyspaceSubscribersWithSubkeysTypes = subkeys_mask;\n}\n\n/* --------------------------------------------------------------------------\n * Service API exported to modules\n *\n * Note that all the exported APIs are called RM_<funcname> in the core\n * and RedisModule_<funcname> in the module side (defined as function\n * pointers in redismodule.h). 
In this way the dynamic linker does not\n * mess with our global function pointers, overriding them with the symbols\n * defined in the main executable having the same names.\n * -------------------------------------------------------------------------- */\n\nint RM_GetApi(const char *funcname, void **targetPtrPtr) {\n    /* Lookup the requested module API and store the function pointer into the\n     * target pointer. The function returns REDISMODULE_ERR if there is no such\n     * named API, otherwise REDISMODULE_OK.\n     *\n     * This function is not meant to be used by module developers; it is only\n     * used implicitly by including redismodule.h. */\n    dictEntry *he = dictFind(server.moduleapi, funcname);\n    if (!he) return REDISMODULE_ERR;\n    *targetPtrPtr = dictGetVal(he);\n    return REDISMODULE_OK;\n}\n\nvoid modulePostExecutionUnitOperations(void) {\n    if (server.execution_nesting)\n        return;\n\n    if (server.busy_module_yield_flags) {\n        blockingOperationEnds();\n        server.busy_module_yield_flags = BUSY_MODULE_YIELD_NONE;\n        if (server.current_client)\n            unprotectClient(server.current_client);\n        unblockPostponedClients();\n    }\n}\n\n/* Free the context after the user function was called. 
*/\nvoid moduleFreeContext(RedisModuleCtx *ctx) {\n    /* See comment in moduleCreateContext */\n    if (!(ctx->flags & (REDISMODULE_CTX_THREAD_SAFE|REDISMODULE_CTX_COMMAND))) {\n        exitExecutionUnit();\n        postExecutionUnitOperations();\n    }\n    autoMemoryCollect(ctx);\n    poolAllocRelease(ctx);\n    if (ctx->postponed_arrays) {\n        zfree(ctx->postponed_arrays);\n        ctx->postponed_arrays_count = 0;\n        serverLog(LL_WARNING,\n            \"API misuse detected in module %s: \"\n            \"RedisModule_ReplyWith*(REDISMODULE_POSTPONED_LEN) \"\n            \"not matched by the same number of RedisModule_SetReply*Len() \"\n            \"calls.\",\n            ctx->module->name);\n    }\n    /* If this context has a temp client, we return it back to the pool.\n     * If this context created a new client (e.g detached context), we free it.\n     * If the client is assigned manually, e.g ctx->client = someClientInstance,\n     * none of these flags will be set and we do not attempt to free it. */\n    if (ctx->flags & REDISMODULE_CTX_TEMP_CLIENT)\n        moduleReleaseTempClient(ctx->client);\n    else if (ctx->flags & REDISMODULE_CTX_NEW_CLIENT)\n        freeClient(ctx->client);\n}\n\nstatic CallReply *moduleParseReply(client *c, RedisModuleCtx *ctx) {\n    /* Convert the result of the Redis command into a module reply. */\n    sds proto = sdsnewlen(c->buf,c->bufpos);\n    c->bufpos = 0;\n    while(listLength(c->reply)) {\n        clientReplyBlock *o = listNodeValue(listFirst(c->reply));\n\n        proto = sdscatlen(proto,o->buf,o->used);\n        listDelNode(c->reply,listFirst(c->reply));\n    }\n    CallReply *reply = callReplyCreate(proto, c->deferred_reply_errors, ctx);\n    c->deferred_reply_errors = NULL; /* now the responsibility of the reply object. 
*/\n    return reply;\n}\n\nvoid moduleCallCommandUnblockedHandler(client *c) {\n    RedisModuleCtx ctx;\n    RedisModuleAsyncRMCallPromise *promise = c->bstate.async_rm_call_handle;\n    serverAssert(promise);\n    RedisModule *module = promise->module;\n    if (!promise->on_unblocked) {\n        moduleReleaseTempClient(c);\n        return; /* module did not set any unblock callback. */\n    }\n    moduleCreateContext(&ctx, module, REDISMODULE_CTX_TEMP_CLIENT);\n    selectDb(ctx.client, c->db->id);\n\n    CallReply *reply = moduleParseReply(c, NULL);\n    module->in_call++;\n    promise->on_unblocked(&ctx, reply, promise->private_data);\n    module->in_call--;\n\n    moduleFreeContext(&ctx);\n    moduleReleaseTempClient(c);\n}\n\n/* Create a module ctx and keep track of the nesting level.\n *\n * Note: When creating ctx for threads (RM_GetThreadSafeContext and\n * RM_GetDetachedThreadSafeContext) we do not bump up the nesting level\n * because we only need to keep track of the nesting level in the main thread\n * (only the main thread uses propagatePendingCommands). */\nvoid moduleCreateContext(RedisModuleCtx *out_ctx, RedisModule *module, int ctx_flags) {\n    memset(out_ctx, 0 ,sizeof(RedisModuleCtx));\n    out_ctx->getapifuncptr = (void*)(unsigned long)&RM_GetApi;\n    out_ctx->module = module;\n    out_ctx->flags = ctx_flags;\n    if (ctx_flags & REDISMODULE_CTX_TEMP_CLIENT)\n        out_ctx->client = moduleAllocTempClient();\n    else if (ctx_flags & REDISMODULE_CTX_NEW_CLIENT)\n        out_ctx->client = createClient(NULL);\n\n    /* Calculate the initial yield time for long blocked contexts.\n     * In loading we depend on the server hz, but in other cases we also wait\n     * for busy_reply_threshold.\n     * Note that in theory we could have started processing BUSY_MODULE_YIELD_EVENTS\n     * sooner, and only delay the processing for clients till the busy_reply_threshold,\n     * but this carries some overheads of frequently marking clients with BLOCKED_POSTPONE\n   
  * and releasing them, i.e. if modules only block for short periods. */\n    if (server.loading)\n        out_ctx->next_yield_time = getMonotonicUs() + 1000000 / server.hz;\n    else\n        out_ctx->next_yield_time = getMonotonicUs() + server.busy_reply_threshold * 1000;\n\n    /* Increment the execution_nesting counter (module is about to execute some code),\n     * except in the following cases:\n     * 1. We came here from cmd->proc (either call() or AOF load).\n     *    In the former, the counter has been already incremented from within\n     *    call() and in the latter we don't care about execution_nesting\n     * 2. If we are running in a thread (execution_nesting will be dealt with\n     *    when locking/unlocking the GIL) */\n    if (!(ctx_flags & (REDISMODULE_CTX_THREAD_SAFE|REDISMODULE_CTX_COMMAND))) {\n        enterExecutionUnit(1, 0);\n    }\n}\n\n/* This Redis command binds the normal Redis command invocation with commands\n * exported by modules. */\nvoid RedisModuleCommandDispatcher(client *c) {\n    RedisModuleCommand *cp = c->cmd->module_cmd;\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, cp->module, REDISMODULE_CTX_COMMAND);\n\n    ctx.client = c;\n    cp->func(&ctx,(void**)c->argv,c->argc);\n    moduleFreeContext(&ctx);\n\n    /* In some cases processMultibulkBuffer uses sdsMakeRoomFor to\n     * expand the query buffer, and in order to avoid a big object copy\n     * the query buffer SDS may be used directly as the SDS string backing\n     * the client argument vectors: sometimes this will result in the SDS\n     * string having unused space at the end. Later if a module takes ownership\n     * of the RedisString, such space will be wasted forever. Inside the\n     * Redis core this is not a problem because tryObjectEncoding() is called\n     * before storing strings in the key space. Here we need to do it\n     * for the module. 
*/\n    for (int i = 0; i < c->argc; i++) {\n        /* Only do the work if the module took ownership of the object:\n         * in that case the refcount is no longer 1. */\n        if (c->argv[i]->refcount > 1)\n            trimStringObjectIfNeeded(c->argv[i], 0);\n    }\n}\n\n/* This function returns the list of keys, with the same interface as the\n * 'getkeys' function of the native commands, for module commands that exported\n * the \"getkeys-api\" flag during the registration. This is done when the\n * list of keys are not at fixed positions, so that first/last/step cannot\n * be used.\n *\n * In order to accomplish its work, the module command is called, flagging\n * the context in a way that the command can recognize this is a special\n * \"get keys\" call by calling RedisModule_IsKeysPositionRequest(ctx). */\nint moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    RedisModuleCommand *cp = cmd->module_cmd;\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, cp->module, REDISMODULE_CTX_KEYS_POS_REQUEST);\n\n    /* Initialize getKeysResult */\n    getKeysPrepareResult(result, MAX_KEYS_BUFFER);\n    ctx.keys_result = result;\n\n    cp->func(&ctx,(void**)argv,argc);\n    /* We currently always use the array allocated by RM_KeyAtPos() and don't try\n     * to optimize for the pre-allocated buffer.\n     */\n    moduleFreeContext(&ctx);\n    return result->numkeys;\n}\n\n/* This function returns the list of channels, with the same interface as\n * moduleGetCommandKeysViaAPI, for modules that declare \"getchannels-api\"\n * during registration. Unlike keys, this is the only way to declare channels. 
*/\nint moduleGetCommandChannelsViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result) {\n    RedisModuleCommand *cp = cmd->module_cmd;\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, cp->module, REDISMODULE_CTX_CHANNELS_POS_REQUEST);\n\n    /* Initialize getKeysResult */\n    getKeysPrepareResult(result, MAX_KEYS_BUFFER);\n    ctx.keys_result = result;\n\n    cp->func(&ctx,(void**)argv,argc);\n    /* We currently always use the array allocated by RM_ChannelAtPosWithFlags() and don't try\n     * to optimize for the pre-allocated buffer. */\n    moduleFreeContext(&ctx);\n    return result->numkeys;\n}\n\n/* --------------------------------------------------------------------------\n * ## Commands API\n *\n * These functions are used to implement custom Redis commands.\n *\n * For examples, see https://redis.io/docs/latest/develop/reference/modules/.\n * -------------------------------------------------------------------------- */\n\n/* Return non-zero if a module command, that was declared with the\n * flag \"getkeys-api\", is called in a special way to get the keys positions\n * and not to get executed. Otherwise zero is returned. 
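A command that was registered\n * with the \"getkeys-api\" flag typically checks this at the top of its\n * implementation, for example (the callback name is hypothetical):\n *\n *     int MyCmd_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n *         if (RedisModule_IsKeysPositionRequest(ctx)) {\n *             RedisModule_KeyAtPos(ctx, 1);\n *             return REDISMODULE_OK;\n *         }\n *         ... normal command logic ...\n *     }\n 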
*/\nint RM_IsKeysPositionRequest(RedisModuleCtx *ctx) {\n    return (ctx->flags & REDISMODULE_CTX_KEYS_POS_REQUEST) != 0;\n}\n\n/* When a module command is called in order to obtain the position of\n * keys, since it was flagged as \"getkeys-api\" during the registration,\n * the command implementation checks for this special call using the\n * RedisModule_IsKeysPositionRequest() API and uses this function in\n * order to report keys.\n *\n * The supported flags are the ones used by RM_SetCommandInfo, see REDISMODULE_CMD_KEY_*.\n *\n * The following is an example of how it could be used:\n *\n *     if (RedisModule_IsKeysPositionRequest(ctx)) {\n *         RedisModule_KeyAtPosWithFlags(ctx, 2, REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);\n *         RedisModule_KeyAtPosWithFlags(ctx, 1, REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE | REDISMODULE_CMD_KEY_ACCESS);\n *     }\n *\n *  Note: in the example above the get keys API could have been handled by key-specs (preferred).\n *  Implementing the getkeys-api is required only when it is not possible to declare key-specs that cover all keys.\n *\n */\nvoid RM_KeyAtPosWithFlags(RedisModuleCtx *ctx, int pos, int flags) {\n    if (!(ctx->flags & REDISMODULE_CTX_KEYS_POS_REQUEST) || !ctx->keys_result) return;\n    if (pos <= 0) return;\n\n    getKeysResult *res = ctx->keys_result;\n\n    /* Check overflow */\n    if (res->numkeys == res->size) {\n        int newsize = res->size + (res->size > 8192 ? 8192 : res->size);\n        getKeysPrepareResult(res, newsize);\n    }\n\n    res->keys[res->numkeys].pos = pos;\n    res->keys[res->numkeys].flags = moduleConvertKeySpecsFlags(flags, 1);\n    res->numkeys++;\n}\n\n/* This API existed before RM_KeyAtPosWithFlags was added. It is now deprecated,\n * but can still be used for compatibility with older versions, from before key-specs\n * and flags were introduced. 
*/\nvoid RM_KeyAtPos(RedisModuleCtx *ctx, int pos) {\n    /* Default flags require full access */\n    int flags = moduleConvertKeySpecsFlags(CMD_KEY_FULL_ACCESS, 0);\n    RM_KeyAtPosWithFlags(ctx, pos, flags);\n}\n\n/* Return non-zero if a module command, that was declared with the\n * flag \"getchannels-api\", is called in a special way to get the channel positions\n * and not to get executed. Otherwise zero is returned. */\nint RM_IsChannelsPositionRequest(RedisModuleCtx *ctx) {\n    return (ctx->flags & REDISMODULE_CTX_CHANNELS_POS_REQUEST) != 0;\n}\n\n/* When a module command is called in order to obtain the position of\n * channels, since it was flagged as \"getchannels-api\" during the\n * registration, the command implementation checks for this special call\n * using the RedisModule_IsChannelsPositionRequest() API and uses this\n * function in order to report the channels.\n * \n * The supported flags are:\n * * REDISMODULE_CMD_CHANNEL_SUBSCRIBE: This command will subscribe to the channel.\n * * REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE: This command will unsubscribe from this channel.\n * * REDISMODULE_CMD_CHANNEL_PUBLISH: This command will publish to this channel.\n * * REDISMODULE_CMD_CHANNEL_PATTERN: Instead of acting on a specific channel, will act on any \n *                                    channel specified by the pattern. This is the same access\n *                                    used by the PSUBSCRIBE and PUNSUBSCRIBE commands available \n *                                    in Redis. 
Not intended to be used with PUBLISH permissions.\n *\n * The following is an example of how it could be used:\n *\n *     if (RedisModule_IsChannelsPositionRequest(ctx)) {\n *         RedisModule_ChannelAtPosWithFlags(ctx, 1, REDISMODULE_CMD_CHANNEL_SUBSCRIBE | REDISMODULE_CMD_CHANNEL_PATTERN);\n *         RedisModule_ChannelAtPosWithFlags(ctx, 1, REDISMODULE_CMD_CHANNEL_PUBLISH);\n *     }\n *\n * Note: One usage of declaring channels is for evaluating ACL permissions. In this context,\n * unsubscribing is always allowed, so commands will only be checked against subscribe and\n * publish permissions. This is preferred over using RM_ACLCheckChannelPermissions, since\n * it allows the ACLs to be checked before the command is executed. */\nvoid RM_ChannelAtPosWithFlags(RedisModuleCtx *ctx, int pos, int flags) {\n    if (!(ctx->flags & REDISMODULE_CTX_CHANNELS_POS_REQUEST) || !ctx->keys_result) return;\n    if (pos <= 0) return;\n\n    getKeysResult *res = ctx->keys_result;\n\n    /* Check overflow */\n    if (res->numkeys == res->size) {\n        int newsize = res->size + (res->size > 8192 ? 
8192 : res->size);\n        getKeysPrepareResult(res, newsize);\n    }\n\n    int new_flags = 0;\n    if (flags & REDISMODULE_CMD_CHANNEL_SUBSCRIBE) new_flags |= CMD_CHANNEL_SUBSCRIBE;\n    if (flags & REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE) new_flags |= CMD_CHANNEL_UNSUBSCRIBE;\n    if (flags & REDISMODULE_CMD_CHANNEL_PUBLISH) new_flags |= CMD_CHANNEL_PUBLISH;\n    if (flags & REDISMODULE_CMD_CHANNEL_PATTERN) new_flags |= CMD_CHANNEL_PATTERN;\n\n    res->keys[res->numkeys].pos = pos;\n    res->keys[res->numkeys].flags = new_flags;\n    res->numkeys++;\n}\n\n/* Returns 1 if name is valid, otherwise returns 0.\n *\n * We want to block some chars in module command names that we know can\n * mess things up.\n *\n * There are these characters:\n * ' ' (space) - issues with old inline protocol.\n * '\\r', '\\n' (newline) - can mess up the protocol on acl error replies.\n * '|' - sub-commands.\n * '@' - ACL categories.\n * '=', ',' - info and client list fields (':' handled by getSafeInfoString).\n * */\nint isCommandNameValid(const char *name) {\n    const char *block_chars = \" \\r\\n|@=,\";\n\n    if (strpbrk(name, block_chars))\n        return 0;\n    return 1;\n}\n\n/* Helper for RM_CreateCommand(). Turns a string representing command\n * flags into the command flags used by the Redis core.\n *\n * It returns the set of flags, or -1 if unknown flags are found. 
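For example,\n * given the mappings below:\n *\n *     char spec[] = \"write deny-oom\";\n *     int64_t flags = commandFlagsFromString(spec); // CMD_WRITE|CMD_DENYOOM\n *\n *     char bad[] = \"write frobnicate\";\n *     int64_t none = commandFlagsFromString(bad);   // -1, unknown token\n 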
*/\nint64_t commandFlagsFromString(char *s) {\n    int count, j;\n    int64_t flags = 0;\n    sds *tokens = sdssplitlen(s,strlen(s),\" \",1,&count);\n    for (j = 0; j < count; j++) {\n        char *t = tokens[j];\n        if (!strcasecmp(t,\"write\")) flags |= CMD_WRITE;\n        else if (!strcasecmp(t,\"readonly\")) flags |= CMD_READONLY;\n        else if (!strcasecmp(t,\"admin\")) flags |= CMD_ADMIN;\n        else if (!strcasecmp(t,\"deny-oom\")) flags |= CMD_DENYOOM;\n        else if (!strcasecmp(t,\"deny-script\")) flags |= CMD_NOSCRIPT;\n        else if (!strcasecmp(t,\"allow-loading\")) flags |= CMD_LOADING;\n        else if (!strcasecmp(t,\"pubsub\")) flags |= CMD_PUBSUB;\n        else if (!strcasecmp(t,\"random\")) { /* Deprecated. Silently ignore. */ }\n        else if (!strcasecmp(t,\"blocking\")) flags |= CMD_BLOCKING;\n        else if (!strcasecmp(t,\"allow-stale\")) flags |= CMD_STALE;\n        else if (!strcasecmp(t,\"no-monitor\")) flags |= CMD_SKIP_MONITOR;\n        else if (!strcasecmp(t,\"no-slowlog\")) flags |= CMD_SKIP_SLOWLOG;\n        else if (!strcasecmp(t,\"fast\")) flags |= CMD_FAST;\n        else if (!strcasecmp(t,\"no-auth\")) flags |= CMD_NO_AUTH;\n        else if (!strcasecmp(t,\"may-replicate\")) flags |= CMD_MAY_REPLICATE;\n        else if (!strcasecmp(t,\"getkeys-api\")) flags |= CMD_MODULE_GETKEYS;\n        else if (!strcasecmp(t,\"getchannels-api\")) flags |= CMD_MODULE_GETCHANNELS;\n        else if (!strcasecmp(t,\"no-cluster\")) flags |= CMD_MODULE_NO_CLUSTER;\n        else if (!strcasecmp(t,\"no-mandatory-keys\")) flags |= CMD_NO_MANDATORY_KEYS;\n        else if (!strcasecmp(t,\"allow-busy\")) flags |= CMD_ALLOW_BUSY;\n        else if (!strcasecmp(t,\"internal\")) flags |= (CMD_INTERNAL|CMD_NOSCRIPT); /* We also disallow internal commands in scripts. 
*/\n        else if (!strcasecmp(t,\"touches-arbitrary-keys\")) flags |= CMD_TOUCHES_ARBITRARY_KEYS;\n        else break;\n    }\n    sdsfreesplitres(tokens,count);\n    if (j != count) return -1; /* Some token not processed correctly. */\n    return flags;\n}\n\nRedisModuleCommand *moduleCreateCommandProxy(struct RedisModule *module, sds declared_name, sds fullname, RedisModuleCmdFunc cmdfunc, int64_t flags, int firstkey, int lastkey, int keystep);\n\n/* Register a new command in the Redis server, that will be handled by\n * calling the function pointer 'cmdfunc' using the RedisModule calling\n * convention.\n *\n * The function returns REDISMODULE_ERR in these cases:\n * - If creation of module command is called outside the RedisModule_OnLoad.\n * - The specified command is already busy.\n * - The command name contains some chars that are not allowed.\n * - A set of invalid flags were passed.\n *\n * Otherwise REDISMODULE_OK is returned and the new command is registered.\n *\n * This function must be called during the initialization of the module\n * inside the RedisModule_OnLoad() function. Calling this function outside\n * of the initialization function is not defined.\n *\n * The command function type is the following:\n *\n *      int MyCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);\n *\n * And is supposed to always return REDISMODULE_OK.\n *\n * The set of flags 'strflags' specify the behavior of the command, and should\n * be passed as a C string composed of space separated words, like for\n * example \"write deny-oom\". 
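For\n * instance, a module could register a read-only command from within\n * RedisModule_OnLoad() like this (the command and callback names are\n * hypothetical):\n *\n *     if (RedisModule_CreateCommand(ctx,\"mymodule.hello\",\n *         Hello_RedisCommand,\"readonly fast\",0,0,0) == REDISMODULE_ERR)\n *         return REDISMODULE_ERR;\n *\n * 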
The set of flags are:\n *\n * * **\"write\"**:     The command may modify the data set (it may also read\n *                    from it).\n * * **\"readonly\"**:  The command returns data from keys but never writes.\n * * **\"admin\"**:     The command is an administrative command (may change\n *                    replication or perform similar tasks).\n * * **\"deny-oom\"**:  The command may use additional memory and should be\n *                    denied during out of memory conditions.\n * * **\"deny-script\"**:   Don't allow this command in Lua scripts.\n * * **\"allow-loading\"**: Allow this command while the server is loading data.\n *                        Only commands not interacting with the data set\n *                        should be allowed to run in this mode. If not sure\n *                        don't use this flag.\n * * **\"pubsub\"**:    The command publishes things on Pub/Sub channels.\n * * **\"random\"**:    The command may have different outputs even starting\n *                    from the same input arguments and key values.\n *                    Starting from Redis 7.0 this flag has been deprecated.\n *                    Declaring a command as \"random\" can be done using\n *                    command tips, see https://redis.io/docs/latest/develop/reference/command-tips/.\n * * **\"allow-stale\"**: The command is allowed to run on slaves that don't\n *                      serve stale data. Don't use if you don't know what\n *                      this means.\n * * **\"no-monitor\"**: Don't propagate the command on monitor. Use this if\n *                     the command has sensitive data among the arguments.\n * * **\"no-slowlog\"**: Don't log this command in the slowlog. 
Use this if\n *                     the command has sensitive data among the arguments.\n * * **\"fast\"**:      The command time complexity is not greater\n *                    than O(log(N)), where N is the size of the collection or\n *                    whatever else represents the normal scalability issue\n *                    of the command.\n * * **\"getkeys-api\"**: The command implements the interface to return\n *                      the arguments that are keys. Used when start/stop/step\n *                      is not enough because of the command syntax.\n * * **\"no-cluster\"**: The command should not be registered in Redis Cluster\n *                     since it is not designed to work with it because, for\n *                     example, it is unable to report the position of the\n *                     keys, it programmatically creates key names, or for\n *                     some other reason.\n * * **\"no-auth\"**:    This command can be run by an un-authenticated client.\n *                     Normally this is used by commands that authenticate\n *                     a client.\n * * **\"may-replicate\"**: This command may generate replication traffic, even\n *                        though it's not a write command.\n * * **\"no-mandatory-keys\"**: All the keys this command may take are optional.\n * * **\"blocking\"**: The command has the potential to block the client.\n * * **\"allow-busy\"**: Permit the command while the server is blocked either by\n *                     a script or by a slow module command, see\n *                     RM_Yield.\n * * **\"getchannels-api\"**: The command implements the interface to return\n *                          the arguments that are channels.\n * * **\"internal\"**: Internal command, one that should not be exposed to user\n *                   connections. For example, module commands that are called\n *                   by other modules, or\n *                   commands that do not perform ACL validations (relying on 
earlier checks)\n * * **\"touches-arbitrary-keys\"**: This command may modify arbitrary keys (i.e. not provided via argv).\n *                   This flag is used so we don't wrap the replicated commands with MULTI/EXEC.\n *\n * The last three parameters specify which arguments of the new command are\n * Redis keys. See https://redis.io/commands/command for more information.\n *\n * * `firstkey`: One-based index of the first argument that's a key.\n *               Position 0 is always the command name itself.\n *               0 for commands with no keys.\n * * `lastkey`:  One-based index of the last argument that's a key.\n *               Negative numbers count backwards from the last\n *               argument (-1 means the last argument provided).\n *               0 for commands with no keys.\n * * `keystep`:  Step between first and last key indexes.\n *               0 for commands with no keys.\n *\n * This information is used by ACL, Cluster and the `COMMAND` command.\n *\n * NOTE: The scheme described above serves a limited purpose and can\n * only be used to find keys that exist at constant indices.\n * For non-trivial key arguments, you may pass 0,0,0 and use\n * RedisModule_SetCommandInfo to set key specs using a more advanced scheme, and\n * RedisModule_SetCommandACLCategories to set the Redis ACL categories of the command. */\nint RM_CreateCommand(RedisModuleCtx *ctx, const char *name, RedisModuleCmdFunc cmdfunc, const char *strflags, int firstkey, int lastkey, int keystep) {\n    if (!ctx->module->onload)\n        return REDISMODULE_ERR;\n    int64_t flags = strflags ? 
commandFlagsFromString((char*)strflags) : 0;\n    if (flags == -1) return REDISMODULE_ERR;\n    if ((flags & CMD_MODULE_NO_CLUSTER) && server.cluster_enabled)\n        return REDISMODULE_ERR;\n\n    /* We would have encountered an error above if cluster was enabled. */\n    if (flags & CMD_MODULE_NO_CLUSTER)\n        server.stat_cluster_incompatible_ops++;\n\n    /* Check if the command name is valid. */\n    if (!isCommandNameValid(name))\n        return REDISMODULE_ERR;\n\n    /* Check if the command name is busy. */\n    if (lookupCommandByCString(name) != NULL)\n        return REDISMODULE_ERR;\n\n    sds declared_name = sdsnew(name);\n    RedisModuleCommand *cp = moduleCreateCommandProxy(ctx->module, declared_name, sdsdup(declared_name), cmdfunc, flags, firstkey, lastkey, keystep);\n    cp->rediscmd->arity = cmdfunc ? -1 : -2; /* Default value, can be changed later via dedicated API */\n\n    pauseAllIOThreads();\n    serverAssert(dictAdd(server.commands, sdsdup(declared_name), cp->rediscmd) == DICT_OK);\n    serverAssert(dictAdd(server.orig_commands, sdsdup(declared_name), cp->rediscmd) == DICT_OK);\n    resumeAllIOThreads();\n\n    cp->rediscmd->id = ACLGetCommandID(declared_name); /* ID used for ACL. 
*/\n    return REDISMODULE_OK;\n}\n\n/* A proxy that helps create a module command / subcommand.\n *\n * 'declared_name': it contains the sub_name, which is just the fullname for non-subcommands.\n * 'fullname': sds string representing the command fullname.\n *\n * The function takes ownership of both the 'declared_name' and 'fullname' SDS strings.\n */\nRedisModuleCommand *moduleCreateCommandProxy(struct RedisModule *module, sds declared_name, sds fullname, RedisModuleCmdFunc cmdfunc, int64_t flags, int firstkey, int lastkey, int keystep) {\n    struct redisCommand *rediscmd;\n    RedisModuleCommand *cp;\n\n    /* Create a command \"proxy\", which is a structure that is referenced\n     * in the command table, so that the generic command that works as a\n     * binding between modules and Redis can know what function to call\n     * and what the module is. */\n    cp = zcalloc(sizeof(*cp));\n    cp->module = module;\n    cp->func = cmdfunc;\n    cp->rediscmd = zcalloc(sizeof(*rediscmd));\n    cp->rediscmd->declared_name = declared_name; /* SDS for module commands */\n    cp->rediscmd->fullname = fullname;\n    cp->rediscmd->group = COMMAND_GROUP_MODULE;\n    cp->rediscmd->proc = RedisModuleCommandDispatcher;\n    cp->rediscmd->flags = flags | CMD_MODULE;\n    cp->rediscmd->module_cmd = cp;\n    if (firstkey != 0) {\n        cp->rediscmd->key_specs_num = 1;\n        cp->rediscmd->key_specs = zcalloc(sizeof(keySpec));\n        cp->rediscmd->key_specs[0].flags = CMD_KEY_FULL_ACCESS;\n        if (flags & CMD_MODULE_GETKEYS)\n            cp->rediscmd->key_specs[0].flags |= CMD_KEY_VARIABLE_FLAGS;\n        cp->rediscmd->key_specs[0].begin_search_type = KSPEC_BS_INDEX;\n        cp->rediscmd->key_specs[0].bs.index.pos = firstkey;\n        cp->rediscmd->key_specs[0].find_keys_type = KSPEC_FK_RANGE;\n        cp->rediscmd->key_specs[0].fk.range.lastkey = lastkey < 0 ? 
lastkey : (lastkey-firstkey);\n        cp->rediscmd->key_specs[0].fk.range.keystep = keystep;\n        cp->rediscmd->key_specs[0].fk.range.limit = 0;\n    } else {\n        cp->rediscmd->key_specs_num = 0;\n        cp->rediscmd->key_specs = NULL;\n    }\n    populateCommandLegacyRangeSpec(cp->rediscmd);\n    cp->rediscmd->microseconds = 0;\n    cp->rediscmd->calls = 0;\n    cp->rediscmd->rejected_calls = 0;\n    cp->rediscmd->failed_calls = 0;\n    return cp;\n}\n\n/* Get an opaque structure, representing a module command, by command name.\n * This structure is used in some of the command-related APIs.\n *\n * NULL is returned in case of the following errors:\n *\n * * Command not found\n * * The command is not a module command\n * * The command doesn't belong to the calling module\n */\nRedisModuleCommand *RM_GetCommand(RedisModuleCtx *ctx, const char *name) {\n    struct redisCommand *cmd = lookupCommandByCString(name);\n\n    if (!cmd || !(cmd->flags & CMD_MODULE))\n        return NULL;\n\n    RedisModuleCommand *cp = cmd->module_cmd;\n    if (cp->module != ctx->module)\n        return NULL;\n\n    return cp;\n}\n\n/* Very similar to RedisModule_CreateCommand except that it is used to create\n * a subcommand, associated with another container command.\n *\n * Example: If a module has a configuration command, MODULE.CONFIG, then\n * GET and SET should be individual subcommands, while MODULE.CONFIG is\n * a command, but should not be registered with a valid `funcptr`:\n *\n *      if (RedisModule_CreateCommand(ctx,\"module.config\",NULL,\"\",0,0,0) == REDISMODULE_ERR)\n *          return REDISMODULE_ERR;\n *\n *      RedisModuleCommand *parent = RedisModule_GetCommand(ctx,\"module.config\");\n *\n *      if (RedisModule_CreateSubcommand(parent,\"set\",cmd_config_set,\"\",0,0,0) == REDISMODULE_ERR)\n *         return REDISMODULE_ERR;\n *\n *      if (RedisModule_CreateSubcommand(parent,\"get\",cmd_config_get,\"\",0,0,0) == REDISMODULE_ERR)\n *         return 
REDISMODULE_ERR;\n *\n * Returns REDISMODULE_OK on success and REDISMODULE_ERR in case of the following errors:\n *\n * * Error while parsing `strflags`\n * * Command is marked as `no-cluster` but cluster mode is enabled\n * * `parent` is already a subcommand (we do not allow more than one level of command nesting)\n * * `parent` is a command with an implementation (RedisModuleCmdFunc) (A parent command should be a pure container of subcommands)\n * * `parent` already has a subcommand called `name`\n * * The subcommand is created outside of RedisModule_OnLoad.\n */\nint RM_CreateSubcommand(RedisModuleCommand *parent, const char *name, RedisModuleCmdFunc cmdfunc, const char *strflags, int firstkey, int lastkey, int keystep) {\n    if (!parent->module->onload)\n        return REDISMODULE_ERR;\n    int64_t flags = strflags ? commandFlagsFromString((char*)strflags) : 0;\n    if (flags == -1) return REDISMODULE_ERR;\n    if ((flags & CMD_MODULE_NO_CLUSTER) && server.cluster_enabled)\n        return REDISMODULE_ERR;\n\n    /* We would have encountered an error above if cluster was enabled. */\n    if (flags & CMD_MODULE_NO_CLUSTER)\n        server.stat_cluster_incompatible_ops++;\n\n    struct redisCommand *parent_cmd = parent->rediscmd;\n\n    if (parent_cmd->parent)\n        return REDISMODULE_ERR; /* We don't allow more than one level of subcommands */\n\n    RedisModuleCommand *parent_cp = parent_cmd->module_cmd;\n    if (parent_cp->func)\n        return REDISMODULE_ERR; /* A parent command should be a pure container of subcommands */\n\n    /* Check if the command name is valid. */\n    if (!isCommandNameValid(name))\n        return REDISMODULE_ERR;\n\n    /* Check if the command name is busy within the parent command. 
*/\n    sds declared_name = sdsnew(name);\n    if (parent_cmd->subcommands_dict && lookupSubcommand(parent_cmd, declared_name) != NULL) {\n        sdsfree(declared_name);\n        return REDISMODULE_ERR;\n    }\n\n    sds fullname = catSubCommandFullname(parent_cmd->fullname, name);\n    RedisModuleCommand *cp = moduleCreateCommandProxy(parent->module, declared_name, fullname, cmdfunc, flags, firstkey, lastkey, keystep);\n    cp->rediscmd->arity = -2;\n\n    commandAddSubcommand(parent_cmd, cp->rediscmd, name);\n    return REDISMODULE_OK;\n}\n\n/* Accessors of array elements of structs where the element size is stored\n * separately in the version struct. */\nstatic RedisModuleCommandHistoryEntry *\nmoduleCmdHistoryEntryAt(const RedisModuleCommandInfoVersion *version,\n                        RedisModuleCommandHistoryEntry *entries, int index) {\n    off_t offset = index * version->sizeof_historyentry;\n    return (RedisModuleCommandHistoryEntry *)((char *)(entries) + offset);\n}\nstatic RedisModuleCommandKeySpec *\nmoduleCmdKeySpecAt(const RedisModuleCommandInfoVersion *version,\n                   RedisModuleCommandKeySpec *keyspecs, int index) {\n    off_t offset = index * version->sizeof_keyspec;\n    return (RedisModuleCommandKeySpec *)((char *)(keyspecs) + offset);\n}\nstatic RedisModuleCommandArg *\nmoduleCmdArgAt(const RedisModuleCommandInfoVersion *version,\n               const RedisModuleCommandArg *args, int index) {\n    off_t offset = index * version->sizeof_arg;\n    return (RedisModuleCommandArg *)((char *)(args) + offset);\n}\n\n/* Recursively populate the args structure (setting num_args to the number of\n * subargs) and return the number of args. 
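For\n * illustration only (a hypothetical arg table, not part of the module API),\n * the input array is terminated by an entry whose name is NULL:\n *\n *     struct redisCommandArg args[] = {\n *         {.name = \"key\"},\n *         {.name = \"value\"},\n *         {0}     <- the NULL name terminates the array\n *     };\n *\n * populateArgsStructure(args) then returns 2.\n 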
*/\nint populateArgsStructure(struct redisCommandArg *args) {\n    if (!args)\n        return 0;\n    int count = 0;\n    while (args->name) {\n        serverAssert(count < INT_MAX);\n        args->num_args = populateArgsStructure(args->subargs);\n        count++;\n        args++;\n    }\n    return count;\n}\n\n/* RedisModule_AddACLCategory can be used to add new ACL command categories. Category names\n * can only contain alphanumeric characters, underscores, or dashes. Categories can only be added\n * during the RedisModule_OnLoad function. Once a category has been added, it can not be removed. \n * Any module can register a command to any added categories using RedisModule_SetCommandACLCategories.\n * \n * Returns:\n * - REDISMODULE_OK on successfully adding the new ACL category. \n * - REDISMODULE_ERR on failure.\n * \n * On error the errno is set to:\n * - EINVAL if the name contains invalid characters.\n * - EBUSY if the category name already exists.\n * - ENOMEM if the number of categories reached the max limit of 64 categories.\n */\nint RM_AddACLCategory(RedisModuleCtx *ctx, const char *name) {\n    if (!ctx->module->onload) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    if (moduleVerifyResourceName(name) == REDISMODULE_ERR) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    if (ACLGetCommandCategoryFlagByName(name)) {\n        errno = EBUSY;\n        return REDISMODULE_ERR;\n    }\n\n    if (ACLAddCommandCategory(name, 0)) {\n        ctx->module->num_acl_categories_added++;\n        return REDISMODULE_OK;\n    } else {\n        errno = ENOMEM;\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Helper for categoryFlagsFromString(). 
Attempts to find an ACL flag representing the provided flag string\n * and adds that flag to acl_categories_flags if a match is found.\n *\n * Returns 1 if the ACL category flag is recognized,\n * or 0 if it is not recognized. */\nint matchAclCategoryFlag(char *flag, int64_t *acl_categories_flags) {\n    uint64_t this_flag = ACLGetCommandCategoryFlagByName(flag);\n    if (this_flag) {\n        *acl_categories_flags |= (int64_t) this_flag;\n        return 1;\n    }\n    return 0; /* Unrecognized */\n}\n\n/* Helper for RM_SetCommandACLCategories(). Turns a string of ACL category\n * names into the ACL category flags used by the Redis ACL, which allows users\n * to access the module commands by ACL categories.\n *\n * It returns the set of ACL flags, or -1 if unknown flags are found. */\nint64_t categoryFlagsFromString(char *aclflags) {\n    int count, j;\n    int64_t acl_categories_flags = 0;\n    sds *tokens = sdssplitlen(aclflags,strlen(aclflags),\" \",1,&count);\n    for (j = 0; j < count; j++) {\n        char *t = tokens[j];\n        if (!matchAclCategoryFlag(t, &acl_categories_flags)) {\n            serverLog(LL_WARNING,\"Unrecognized categories flag %s on module load\", t);\n            break;\n        }\n    }\n    sdsfreesplitres(tokens,count);\n    if (j != count) return -1; /* Some token not processed correctly. */\n    return acl_categories_flags;\n}\n\n/* RedisModule_SetCommandACLCategories can be used to set ACL categories to module\n * commands and subcommands. The set of ACL categories should be passed as\n * a space separated C string 'aclflags'.\n *\n * For example, the ACL flags 'write slow' mark the command as part of the\n * write and slow ACL categories.\n *\n * On success REDISMODULE_OK is returned. On error REDISMODULE_ERR is returned.\n *\n * This function can only be called during the RedisModule_OnLoad function. 
If called\n * outside of this function, an error is returned.\n */\nint RM_SetCommandACLCategories(RedisModuleCommand *command, const char *aclflags) {\n    if (!command || !command->module || !command->module->onload) return REDISMODULE_ERR;\n    int64_t categories_flags = aclflags ? categoryFlagsFromString((char*)aclflags) : 0;\n    if (categories_flags == -1) return REDISMODULE_ERR;\n    struct redisCommand *rcmd = command->rediscmd;\n    rcmd->acl_categories = categories_flags; /* ACL categories flags for module command */\n    command->module->num_commands_with_acl_categories++;\n    return REDISMODULE_OK;\n}\n\n/* Set additional command information.\n *\n * Affects the output of `COMMAND`, `COMMAND INFO` and `COMMAND DOCS`, Cluster,\n * ACL and is used to filter commands with the wrong number of arguments before\n * the call reaches the module code.\n *\n * This function can be called after creating a command using RM_CreateCommand\n * and fetching the command pointer using RM_GetCommand. The information can\n * only be set once for each command and has the following structure:\n *\n *     typedef struct RedisModuleCommandInfo {\n *         const RedisModuleCommandInfoVersion *version;\n *         const char *summary;\n *         const char *complexity;\n *         const char *since;\n *         RedisModuleCommandHistoryEntry *history;\n *         const char *tips;\n *         int arity;\n *         RedisModuleCommandKeySpec *key_specs;\n *         RedisModuleCommandArg *args;\n *     } RedisModuleCommandInfo;\n *\n * All fields except `version` are optional. 
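For example, a minimal,\n * hypothetical info block (the command pointer `mycmd` and the summary text\n * are placeholders) could be set as follows:\n *\n *     RedisModuleCommandInfo info = {\n *         .version = REDISMODULE_COMMAND_INFO_VERSION,\n *         .summary = \"A short description\",\n *         .arity = -2,\n *     };\n *     if (RedisModule_SetCommandInfo(mycmd, &info) == REDISMODULE_ERR)\n *         return REDISMODULE_ERR;\n *\n * 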
Explanation of the fields:\n *\n * - `version`: This field enables compatibility with different Redis versions.\n *   Always set this field to REDISMODULE_COMMAND_INFO_VERSION.\n *\n * - `summary`: A short description of the command (optional).\n *\n * - `complexity`: Complexity description (optional).\n *\n * - `since`: The version where the command was introduced (optional).\n *   Note: The version specified should be the module's, not Redis version.\n *\n * - `history`: An array of RedisModuleCommandHistoryEntry (optional), which is\n *   a struct with the following fields:\n *\n *         const char *since;\n *         const char *changes;\n *\n *     `since` is a version string and `changes` is a string describing the\n *     changes. The array is terminated by a zeroed entry, i.e. an entry with\n *     both strings set to NULL.\n *\n * - `tips`: A string of space-separated tips regarding this command, meant for\n *   clients and proxies. See https://redis.io/docs/latest/develop/reference/command-tips/.\n *\n * - `arity`: Number of arguments, including the command name itself. A positive\n *   number specifies an exact number of arguments and a negative number\n *   specifies a minimum number of arguments, so use -N to say >= N. Redis\n *   validates a call before passing it to a module, so this can replace an\n *   arity check inside the module command implementation. A value of 0 (or an\n *   omitted arity field) is equivalent to -2 if the command has sub commands\n *   and -1 otherwise.\n *\n * - `key_specs`: An array of RedisModuleCommandKeySpec, terminated by an\n *   element memset to zero. This is a scheme that tries to describe the\n *   positions of key arguments better than the old RM_CreateCommand arguments\n *   `firstkey`, `lastkey`, `keystep` and is needed if those three are not\n *   enough to describe the key positions. 
There are two steps to retrieve key\n *   positions: *begin search* (BS), which finds the index at which the first\n *   key starts, and *find keys* (FK), which, relative to the output of BS,\n *   describes which arguments are keys. Additionally, there are key-specific\n *   flags.\n *\n *     Key-specs cause the triplet (firstkey, lastkey, keystep) given in\n *     RM_CreateCommand to be recomputed, but it is still useful to provide\n *     these three parameters in RM_CreateCommand, to better support old Redis\n *     versions where RM_SetCommandInfo is not available.\n *\n *     Note that key-specs don't fully replace the \"getkeys-api\" (see\n *     RM_CreateCommand, RM_IsKeysPositionRequest and RM_KeyAtPosWithFlags) so\n *     it may be a good idea to supply both key-specs and implement the\n *     getkeys-api.\n *\n *     A key-spec has the following structure:\n *\n *         typedef struct RedisModuleCommandKeySpec {\n *             const char *notes;\n *             uint64_t flags;\n *             RedisModuleKeySpecBeginSearchType begin_search_type;\n *             union {\n *                 struct {\n *                     int pos;\n *                 } index;\n *                 struct {\n *                     const char *keyword;\n *                     int startfrom;\n *                 } keyword;\n *             } bs;\n *             RedisModuleKeySpecFindKeysType find_keys_type;\n *             union {\n *                 struct {\n *                     int lastkey;\n *                     int keystep;\n *                     int limit;\n *                 } range;\n *                 struct {\n *                     int keynumidx;\n *                     int firstkey;\n *                     int keystep;\n *                 } keynum;\n *             } fk;\n *         } RedisModuleCommandKeySpec;\n *\n *     Explanation of the fields of RedisModuleCommandKeySpec:\n *\n *     * `notes`: Optional notes or clarifications about this key spec.\n *\n *     
* `flags`: A bitwise or of key-spec flags described below.\n *\n *     * `begin_search_type`: This describes how the first key is discovered.\n *       There are two ways to determine the first key:\n *\n *         * `REDISMODULE_KSPEC_BS_UNKNOWN`: There is no way to tell where the\n *           key args start.\n *         * `REDISMODULE_KSPEC_BS_INDEX`: Key args start at a constant index.\n *         * `REDISMODULE_KSPEC_BS_KEYWORD`: Key args start just after a\n *           specific keyword.\n *\n *     * `bs`: This is a union in which the `index` or `keyword` branch is used\n *       depending on the value of the `begin_search_type` field.\n *\n *         * `bs.index.pos`: The index from which we start the search for keys.\n *           (`REDISMODULE_KSPEC_BS_INDEX` only.)\n *\n *         * `bs.keyword.keyword`: The keyword (string) that indicates the\n *           beginning of key arguments. (`REDISMODULE_KSPEC_BS_KEYWORD` only.)\n *\n *         * `bs.keyword.startfrom`: An index in argv from which to start\n *           searching. Can be negative, which means start search from the end,\n *           in reverse. Example: -2 means to start in reverse from the\n *           penultimate argument. (`REDISMODULE_KSPEC_BS_KEYWORD` only.)\n *\n *     * `find_keys_type`: After the \"begin search\", this describes which\n *       arguments are keys. 
The strategies are:\n *\n *         * `REDISMODULE_KSPEC_FK_UNKNOWN`: There is no way to tell where the\n *           key args are located.\n *         * `REDISMODULE_KSPEC_FK_RANGE`: Keys end at a specific index (or\n *           relative to the last argument).\n *         * `REDISMODULE_KSPEC_FK_KEYNUM`: There's an argument that contains\n *           the number of key args somewhere before the keys themselves.\n *\n *       `find_keys_type` and `fk` can be omitted if this keyspec describes\n *       exactly one key.\n *\n *     * `fk`: This is a union in which the `range` or `keynum` branch is used\n *       depending on the value of the `find_keys_type` field.\n *\n *         * `fk.range` (for `REDISMODULE_KSPEC_FK_RANGE`): A struct with the\n *           following fields:\n *\n *             * `lastkey`: Index of the last key relative to the result of the\n *               begin search step. Can be negative, in which case it's not\n *               relative. -1 indicates the last argument, -2 one before the\n *               last and so on.\n *\n *             * `keystep`: How many arguments should we skip after finding a\n *               key, in order to find the next one?\n *\n *             * `limit`: If `lastkey` is -1, we use `limit` to stop the search\n *               by a factor. 0 and 1 mean no limit. 2 means 1/2 of the\n *               remaining args, 3 means 1/3, and so on.\n *\n *         * `fk.keynum` (for `REDISMODULE_KSPEC_FK_KEYNUM`): A struct with the\n *           following fields:\n *\n *             * `keynumidx`: Index of the argument containing the number of\n *               keys to come, relative to the result of the begin search step.\n *\n *             * `firstkey`: Index of the first key relative to the result of the\n *               begin search step. 
(Usually it's just after `keynumidx`, in\n *               which case it should be set to `keynumidx + 1`.)\n *\n *             * `keystep`: How many arguments should we skip after finding a\n *               key, in order to find the next one?\n *\n *     Key-spec flags:\n *\n *     The first four refer to what the command actually does with the *value or\n *     metadata of the key*, and not necessarily the user data or how it affects\n *     it. Each key-spec must have exactly one of these. Any operation\n *     that's not distinctly deletion, overwrite or read-only would be marked as\n *     RW.\n *\n *     * `REDISMODULE_CMD_KEY_RO`: Read-Only. Reads the value of the key, but\n *       doesn't necessarily return it.\n *\n *     * `REDISMODULE_CMD_KEY_RW`: Read-Write. Modifies the data stored in the\n *       value of the key or its metadata.\n *\n *     * `REDISMODULE_CMD_KEY_OW`: Overwrite. Overwrites the data stored in the\n *       value of the key.\n *\n *     * `REDISMODULE_CMD_KEY_RM`: Deletes the key.\n *\n *     The next four refer to *user data inside the value of the key*, not the\n *     metadata like LRU, type, cardinality. It refers to the logical operation\n *     on the user's data (actual input strings or TTL), being\n *     used/returned/copied/changed. It doesn't refer to modification or\n *     returning of metadata (like type, count, presence of data). ACCESS can be\n *     combined with one of the write operations INSERT, DELETE or UPDATE. 
Any\n *     write that's not an INSERT or a DELETE would be UPDATE.\n *\n *     * `REDISMODULE_CMD_KEY_ACCESS`: Returns, copies or uses the user data\n *       from the value of the key.\n *\n *     * `REDISMODULE_CMD_KEY_UPDATE`: Updates data in the value; the new value\n *       may depend on the old value.\n *\n *     * `REDISMODULE_CMD_KEY_INSERT`: Adds data to the value with no chance of\n *       modification or deletion of existing data.\n *\n *     * `REDISMODULE_CMD_KEY_DELETE`: Explicitly deletes some content from the\n *       value of the key.\n *\n *     Other flags:\n *\n *     * `REDISMODULE_CMD_KEY_NOT_KEY`: The key is not actually a key, but\n *       should be routed in cluster mode as if it was a key.\n *\n *     * `REDISMODULE_CMD_KEY_INCOMPLETE`: The keyspec might not point out all\n *       the keys it should cover.\n *\n *     * `REDISMODULE_CMD_KEY_VARIABLE_FLAGS`: Some keys might have different\n *       flags depending on arguments.\n *\n * - `args`: An array of RedisModuleCommandArg, terminated by an element memset\n *   to zero. RedisModuleCommandArg is a structure with the fields described\n *   below.\n *\n *         typedef struct RedisModuleCommandArg {\n *             const char *name;\n *             RedisModuleCommandArgType type;\n *             int key_spec_index;\n *             const char *token;\n *             const char *summary;\n *             const char *since;\n *             int flags;\n *             struct RedisModuleCommandArg *subargs;\n *         } RedisModuleCommandArg;\n *\n *     Explanation of the fields:\n *\n *     * `name`: Name of the argument.\n *\n *     * `type`: The type of the argument. See below for details. The types\n *       `REDISMODULE_ARG_TYPE_ONEOF` and `REDISMODULE_ARG_TYPE_BLOCK` require\n *       an argument to have sub-arguments, i.e. 
`subargs`.\n *\n *     * `key_spec_index`: If the `type` is `REDISMODULE_ARG_TYPE_KEY` you must\n *       provide the index of the key-spec associated with this argument. See\n *       `key_specs` above. If the argument is not a key, you may specify -1.\n *\n *     * `token`: The token preceding the argument (optional). Example: the\n *       argument `seconds` in `SET` has a token `EX`. If the argument consists\n *       of only a token (for example `NX` in `SET`) the type should be\n *       `REDISMODULE_ARG_TYPE_PURE_TOKEN` and `value` should be NULL.\n *\n *     * `summary`: A short description of the argument (optional).\n *\n *     * `since`: The first version which included this argument (optional).\n *\n *     * `flags`: A bitwise or of the macros `REDISMODULE_CMD_ARG_*`. See below.\n *\n *     * `value`: The display-value of the argument. This string is what should\n *       be displayed when creating the command syntax from the output of\n *       `COMMAND`. If `token` is not NULL, it should also be displayed.\n *\n *     Explanation of `RedisModuleCommandArgType`:\n *\n *     * `REDISMODULE_ARG_TYPE_STRING`: String argument.\n *     * `REDISMODULE_ARG_TYPE_INTEGER`: Integer argument.\n *     * `REDISMODULE_ARG_TYPE_DOUBLE`: Double-precision float argument.\n *     * `REDISMODULE_ARG_TYPE_KEY`: String argument representing a keyname.\n *     * `REDISMODULE_ARG_TYPE_PATTERN`: String, but regex pattern.\n *     * `REDISMODULE_ARG_TYPE_UNIX_TIME`: Integer, but Unix timestamp.\n *     * `REDISMODULE_ARG_TYPE_PURE_TOKEN`: Argument doesn't have a placeholder.\n *       It's just a token without a value. Example: the `KEEPTTL` option of the\n *       `SET` command.\n *     * `REDISMODULE_ARG_TYPE_ONEOF`: Used when the user can choose only one of\n *       a few sub-arguments. Requires `subargs`. 
Example: the `NX` and `XX`\n *       options of `SET`.\n *     * `REDISMODULE_ARG_TYPE_BLOCK`: Used when one wants to group together\n *       several sub-arguments, usually to apply something on all of them, like\n *       making the entire group \"optional\". Requires `subargs`. Example: the\n *       `LIMIT offset count` parameters in `ZRANGE`.\n *\n *     Explanation of the command argument flags:\n *\n *     * `REDISMODULE_CMD_ARG_OPTIONAL`: The argument is optional (like GET in\n *       the SET command).\n *     * `REDISMODULE_CMD_ARG_MULTIPLE`: The argument may repeat itself (like\n *       key in DEL).\n *     * `REDISMODULE_CMD_ARG_MULTIPLE_TOKEN`: The argument may repeat itself,\n *       and so does its token (like `GET pattern` in SORT).\n *\n * On success REDISMODULE_OK is returned. On error REDISMODULE_ERR is returned\n * and `errno` is set to EINVAL if invalid info was provided or EEXIST if info\n * has already been set. If the info is invalid, a warning is logged explaining\n * which part of the info is invalid and why. */\nint RM_SetCommandInfo(RedisModuleCommand *command, const RedisModuleCommandInfo *info) {\n    if (!moduleValidateCommandInfo(info)) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    struct redisCommand *cmd = command->rediscmd;\n\n    /* Check if any info has already been set. Overwriting info involves freeing\n     * the old info, which is not implemented. */\n    if (cmd->summary || cmd->complexity || cmd->since || cmd->history ||\n        cmd->tips || cmd->args ||\n        !(cmd->key_specs_num == 0 ||\n          /* Allow key spec populated from legacy (first,last,step) to exist. 
*/\n          (cmd->key_specs_num == 1 &&\n           cmd->key_specs[0].begin_search_type == KSPEC_BS_INDEX &&\n           cmd->key_specs[0].find_keys_type == KSPEC_FK_RANGE))) {\n        errno = EEXIST;\n        return REDISMODULE_ERR;\n    }\n\n    if (info->summary) cmd->summary = zstrdup(info->summary);\n    if (info->complexity) cmd->complexity = zstrdup(info->complexity);\n    if (info->since) cmd->since = zstrdup(info->since);\n\n    const RedisModuleCommandInfoVersion *version = info->version;\n    if (info->history) {\n        size_t count = 0;\n        while (moduleCmdHistoryEntryAt(version, info->history, count)->since)\n            count++;\n        serverAssert(count < SIZE_MAX / sizeof(commandHistory));\n        cmd->history = zmalloc(sizeof(commandHistory) * (count + 1));\n        for (size_t j = 0; j < count; j++) {\n            RedisModuleCommandHistoryEntry *entry =\n                moduleCmdHistoryEntryAt(version, info->history, j);\n            cmd->history[j].since = zstrdup(entry->since);\n            cmd->history[j].changes = zstrdup(entry->changes);\n        }\n        cmd->history[count].since = NULL;\n        cmd->history[count].changes = NULL;\n        cmd->num_history = count;\n    }\n\n    if (info->tips) {\n        int count;\n        sds *tokens = sdssplitlen(info->tips, strlen(info->tips), \" \", 1, &count);\n        if (tokens) {\n            cmd->tips = zmalloc(sizeof(char *) * (count + 1));\n            for (int j = 0; j < count; j++) {\n                cmd->tips[j] = zstrdup(tokens[j]);\n            }\n            cmd->tips[count] = NULL;\n            cmd->num_tips = count;\n            sdsfreesplitres(tokens, count);\n        }\n    }\n\n    if (info->arity) cmd->arity = info->arity;\n\n    if (info->key_specs) {\n        /* Count and allocate the key specs. 
*/\n        size_t count = 0;\n        while (moduleCmdKeySpecAt(version, info->key_specs, count)->begin_search_type)\n            count++;\n        serverAssert(count < INT_MAX);\n        zfree(cmd->key_specs);\n        cmd->key_specs = zmalloc(sizeof(keySpec) * count);\n\n        /* Copy the contents of the RedisModuleCommandKeySpec array. */\n        cmd->key_specs_num = count;\n        for (size_t j = 0; j < count; j++) {\n            RedisModuleCommandKeySpec *spec =\n                moduleCmdKeySpecAt(version, info->key_specs, j);\n            cmd->key_specs[j].notes = spec->notes ? zstrdup(spec->notes) : NULL;\n            cmd->key_specs[j].flags = moduleConvertKeySpecsFlags(spec->flags, 1);\n            switch (spec->begin_search_type) {\n            case REDISMODULE_KSPEC_BS_UNKNOWN:\n                cmd->key_specs[j].begin_search_type = KSPEC_BS_UNKNOWN;\n                break;\n            case REDISMODULE_KSPEC_BS_INDEX:\n                cmd->key_specs[j].begin_search_type = KSPEC_BS_INDEX;\n                cmd->key_specs[j].bs.index.pos = spec->bs.index.pos;\n                break;\n            case REDISMODULE_KSPEC_BS_KEYWORD:\n                cmd->key_specs[j].begin_search_type = KSPEC_BS_KEYWORD;\n                cmd->key_specs[j].bs.keyword.keyword = zstrdup(spec->bs.keyword.keyword);\n                cmd->key_specs[j].bs.keyword.startfrom = spec->bs.keyword.startfrom;\n                break;\n            default:\n                /* Can't happen; stopped in moduleValidateCommandInfo(). */\n                serverPanic(\"Unknown begin_search_type\");\n            }\n\n            switch (spec->find_keys_type) {\n            case REDISMODULE_KSPEC_FK_OMITTED:\n                /* Omitted field is shorthand to say that it's a single key. 
*/\n                cmd->key_specs[j].find_keys_type = KSPEC_FK_RANGE;\n                cmd->key_specs[j].fk.range.lastkey = 0;\n                cmd->key_specs[j].fk.range.keystep = 1;\n                cmd->key_specs[j].fk.range.limit = 0;\n                break;\n            case REDISMODULE_KSPEC_FK_UNKNOWN:\n                cmd->key_specs[j].find_keys_type = KSPEC_FK_UNKNOWN;\n                break;\n            case REDISMODULE_KSPEC_FK_RANGE:\n                cmd->key_specs[j].find_keys_type = KSPEC_FK_RANGE;\n                cmd->key_specs[j].fk.range.lastkey = spec->fk.range.lastkey;\n                cmd->key_specs[j].fk.range.keystep = spec->fk.range.keystep;\n                cmd->key_specs[j].fk.range.limit = spec->fk.range.limit;\n                break;\n            case REDISMODULE_KSPEC_FK_KEYNUM:\n                cmd->key_specs[j].find_keys_type = KSPEC_FK_KEYNUM;\n                cmd->key_specs[j].fk.keynum.keynumidx = spec->fk.keynum.keynumidx;\n                cmd->key_specs[j].fk.keynum.firstkey = spec->fk.keynum.firstkey;\n                cmd->key_specs[j].fk.keynum.keystep = spec->fk.keynum.keystep;\n                break;\n            default:\n                /* Can't happen; stopped in moduleValidateCommandInfo(). */\n                serverPanic(\"Unknown find_keys_type\");\n            }\n        }\n\n        /* Update the legacy (first,last,step) spec and \"movablekeys\" flag used by the COMMAND command,\n         * by trying to \"glue\" consecutive range key specs. 
*/\n        populateCommandLegacyRangeSpec(cmd);\n    }\n\n    if (info->args) {\n        cmd->args = moduleCopyCommandArgs(info->args, version);\n        /* Populate arg.num_args with the number of subargs, recursively */\n        cmd->num_args = populateArgsStructure(cmd->args);\n    }\n\n    /* Fields added in future versions to be added here, under conditions like\n     * `if (info->version >= 2) { access version 2 fields here }` */\n\n    return REDISMODULE_OK;\n}\n\n/* Returns 1 if v is a power of two, 0 otherwise. */\nstatic inline int isPowerOfTwo(uint64_t v) {\n    return v && !(v & (v - 1));\n}\n\n/* Returns 1 if the command info is valid and 0 otherwise. */\nstatic int moduleValidateCommandInfo(const RedisModuleCommandInfo *info) {\n    const RedisModuleCommandInfoVersion *version = info->version;\n    if (!version) {\n        serverLog(LL_WARNING, \"Invalid command info: version missing\");\n        return 0;\n    }\n\n    /* No validation for the fields summary, complexity, since, tips (strings or\n     * NULL) and arity (any integer). */\n\n    /* History: If since is set, changes must also be set. */\n    if (info->history) {\n        for (size_t j = 0;\n             moduleCmdHistoryEntryAt(version, info->history, j)->since;\n             j++)\n        {\n            if (!moduleCmdHistoryEntryAt(version, info->history, j)->changes) {\n                serverLog(LL_WARNING, \"Invalid command info: history[%zd].changes missing\", j);\n                return 0;\n            }\n        }\n    }\n\n    /* Key specs. 
*/\n    if (info->key_specs) {\n        for (size_t j = 0;\n             moduleCmdKeySpecAt(version, info->key_specs, j)->begin_search_type;\n             j++)\n        {\n            RedisModuleCommandKeySpec *spec =\n                moduleCmdKeySpecAt(version, info->key_specs, j);\n            if (j >= INT_MAX) {\n                serverLog(LL_WARNING, \"Invalid command info: Too many key specs\");\n                return 0; /* redisCommand.key_specs_num is an int. */\n            }\n\n            /* Flags. Exactly one flag in a group is set if and only if the\n             * masked bits is a power of two. */\n            uint64_t key_flags =\n                REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_RW |\n                REDISMODULE_CMD_KEY_OW | REDISMODULE_CMD_KEY_RM;\n            uint64_t write_flags =\n                REDISMODULE_CMD_KEY_INSERT | REDISMODULE_CMD_KEY_DELETE |\n                REDISMODULE_CMD_KEY_UPDATE;\n            if (!isPowerOfTwo(spec->flags & key_flags)) {\n                serverLog(LL_WARNING,\n                          \"Invalid command info: key_specs[%zd].flags: \"\n                          \"Exactly one of the flags RO, RW, OW, RM required\", j);\n                return 0;\n            }\n            if ((spec->flags & write_flags) != 0 &&\n                !isPowerOfTwo(spec->flags & write_flags))\n            {\n                serverLog(LL_WARNING,\n                          \"Invalid command info: key_specs[%zd].flags: \"\n                          \"INSERT, DELETE and UPDATE are mutually exclusive\", j);\n                return 0;\n            }\n\n            switch (spec->begin_search_type) {\n            case REDISMODULE_KSPEC_BS_UNKNOWN: break;\n            case REDISMODULE_KSPEC_BS_INDEX: break;\n            case REDISMODULE_KSPEC_BS_KEYWORD:\n                if (spec->bs.keyword.keyword == NULL) {\n                    serverLog(LL_WARNING,\n                              \"Invalid command info: 
key_specs[%zd].bs.keyword.keyword \"\n                              \"required when begin_search_type is KEYWORD\", j);\n                    return 0;\n                }\n                break;\n            default:\n                serverLog(LL_WARNING,\n                          \"Invalid command info: key_specs[%zd].begin_search_type: \"\n                          \"Invalid value %d\", j, spec->begin_search_type);\n                return 0;\n            }\n\n            /* Validate find_keys_type. */\n            switch (spec->find_keys_type) {\n            case REDISMODULE_KSPEC_FK_OMITTED: break; /* short for RANGE {0,1,0} */\n            case REDISMODULE_KSPEC_FK_UNKNOWN: break;\n            case REDISMODULE_KSPEC_FK_RANGE: break;\n            case REDISMODULE_KSPEC_FK_KEYNUM: break;\n            default:\n                serverLog(LL_WARNING,\n                          \"Invalid command info: key_specs[%zd].find_keys_type: \"\n                          \"Invalid value %d\", j, spec->find_keys_type);\n                return 0;\n            }\n        }\n    }\n\n    /* Args, subargs (recursive) */\n    return moduleValidateCommandArgs(info->args, version);\n}\n\n/* When from_api is true, converts from REDISMODULE_CMD_KEY_* flags to CMD_KEY_* flags.\n * When from_api is false, converts from CMD_KEY_* flags to REDISMODULE_CMD_KEY_* flags. 
*/\nstatic int64_t moduleConvertKeySpecsFlags(int64_t flags, int from_api) {\n    int64_t out = 0;\n    int64_t map[][2] = {\n        {REDISMODULE_CMD_KEY_RO, CMD_KEY_RO},\n        {REDISMODULE_CMD_KEY_RW, CMD_KEY_RW},\n        {REDISMODULE_CMD_KEY_OW, CMD_KEY_OW},\n        {REDISMODULE_CMD_KEY_RM, CMD_KEY_RM},\n        {REDISMODULE_CMD_KEY_ACCESS, CMD_KEY_ACCESS},\n        {REDISMODULE_CMD_KEY_INSERT, CMD_KEY_INSERT},\n        {REDISMODULE_CMD_KEY_UPDATE, CMD_KEY_UPDATE},\n        {REDISMODULE_CMD_KEY_DELETE, CMD_KEY_DELETE},\n        {REDISMODULE_CMD_KEY_NOT_KEY, CMD_KEY_NOT_KEY},\n        {REDISMODULE_CMD_KEY_INCOMPLETE, CMD_KEY_INCOMPLETE},\n        {REDISMODULE_CMD_KEY_VARIABLE_FLAGS, CMD_KEY_VARIABLE_FLAGS},\n        {0,0}};\n\n    int from_idx = from_api ? 0 : 1, to_idx = !from_idx;\n    for (int i=0; map[i][0]; i++)\n        if (flags & map[i][from_idx]) out |= map[i][to_idx];\n    return out;\n}\n\n/* Validates an array of RedisModuleCommandArg. Returns 1 if it's valid and 0 if\n * it's invalid. */\nstatic int moduleValidateCommandArgs(RedisModuleCommandArg *args,\n                                     const RedisModuleCommandInfoVersion *version) {\n    if (args == NULL) return 1; /* Missing args is OK. 
*/\n    for (size_t j = 0; moduleCmdArgAt(version, args, j)->name != NULL; j++) {\n        RedisModuleCommandArg *arg = moduleCmdArgAt(version, args, j);\n        int arg_type_error = 0;\n        moduleConvertArgType(arg->type, &arg_type_error);\n        if (arg_type_error) {\n            serverLog(LL_WARNING,\n                      \"Invalid command info: Argument \\\"%s\\\": Undefined type %d\",\n                      arg->name, arg->type);\n            return 0;\n        }\n        if (arg->type == REDISMODULE_ARG_TYPE_PURE_TOKEN && !arg->token) {\n            serverLog(LL_WARNING,\n                      \"Invalid command info: Argument \\\"%s\\\": \"\n                      \"token required when type is PURE_TOKEN\", arg->name);\n            return 0;\n        }\n\n        if (arg->type == REDISMODULE_ARG_TYPE_KEY) {\n            if (arg->key_spec_index < 0) {\n                serverLog(LL_WARNING,\n                          \"Invalid command info: Argument \\\"%s\\\": \"\n                          \"key_spec_index required when type is KEY\",\n                          arg->name);\n                return 0;\n            }\n        } else if (arg->key_spec_index != -1 && arg->key_spec_index != 0) {\n            /* 0 is allowed for convenience, to allow it to be omitted in\n             * compound struct literals of the form `.field = value`. 
*/\n            serverLog(LL_WARNING,\n                      \"Invalid command info: Argument \\\"%s\\\": \"\n                      \"key_spec_index specified but type isn't KEY\",\n                      arg->name);\n            return 0;\n        }\n\n        if (arg->flags & ~(_REDISMODULE_CMD_ARG_NEXT - 1)) {\n            serverLog(LL_WARNING,\n                      \"Invalid command info: Argument \\\"%s\\\": Invalid flags\",\n                      arg->name);\n            return 0;\n        }\n\n        if (arg->type == REDISMODULE_ARG_TYPE_ONEOF ||\n            arg->type == REDISMODULE_ARG_TYPE_BLOCK)\n        {\n            if (arg->subargs == NULL) {\n                serverLog(LL_WARNING,\n                          \"Invalid command info: Argument \\\"%s\\\": \"\n                          \"subargs required when type is ONEOF or BLOCK\",\n                          arg->name);\n                return 0;\n            }\n            if (!moduleValidateCommandArgs(arg->subargs, version)) return 0;\n        } else {\n            if (arg->subargs != NULL) {\n                serverLog(LL_WARNING,\n                          \"Invalid command info: Argument \\\"%s\\\": \"\n                          \"subargs specified but type isn't ONEOF nor BLOCK\",\n                          arg->name);\n                return 0;\n            }\n        }\n    }\n    return 1;\n}\n\n/* Converts an array of RedisModuleCommandArg into a freshly allocated array of\n * struct redisCommandArg. 
*/\nstatic struct redisCommandArg *moduleCopyCommandArgs(RedisModuleCommandArg *args,\n                                                     const RedisModuleCommandInfoVersion *version) {\n    size_t count = 0;\n    while (moduleCmdArgAt(version, args, count)->name) count++;\n    serverAssert(count < SIZE_MAX / sizeof(struct redisCommandArg));\n    struct redisCommandArg *realargs = zcalloc((count+1) * sizeof(redisCommandArg));\n\n    for (size_t j = 0; j < count; j++) {\n        RedisModuleCommandArg *arg = moduleCmdArgAt(version, args, j);\n        realargs[j].name = zstrdup(arg->name);\n        realargs[j].type = moduleConvertArgType(arg->type, NULL);\n        if (arg->type == REDISMODULE_ARG_TYPE_KEY)\n            realargs[j].key_spec_index = arg->key_spec_index;\n        else\n            realargs[j].key_spec_index = -1;\n        if (arg->token) realargs[j].token = zstrdup(arg->token);\n        if (arg->summary) realargs[j].summary = zstrdup(arg->summary);\n        if (arg->since) realargs[j].since = zstrdup(arg->since);\n        if (arg->deprecated_since) realargs[j].deprecated_since = zstrdup(arg->deprecated_since);\n        if (arg->display_text) realargs[j].display_text = zstrdup(arg->display_text);\n        realargs[j].flags = moduleConvertArgFlags(arg->flags);\n        if (arg->subargs) realargs[j].subargs = moduleCopyCommandArgs(arg->subargs, version);\n    }\n    return realargs;\n}\n\nstatic redisCommandArgType moduleConvertArgType(RedisModuleCommandArgType type, int *error) {\n    if (error) *error = 0;\n    switch (type) {\n    case REDISMODULE_ARG_TYPE_STRING: return ARG_TYPE_STRING;\n    case REDISMODULE_ARG_TYPE_INTEGER: return ARG_TYPE_INTEGER;\n    case REDISMODULE_ARG_TYPE_DOUBLE: return ARG_TYPE_DOUBLE;\n    case REDISMODULE_ARG_TYPE_KEY: return ARG_TYPE_KEY;\n    case REDISMODULE_ARG_TYPE_PATTERN: return ARG_TYPE_PATTERN;\n    case REDISMODULE_ARG_TYPE_UNIX_TIME: return ARG_TYPE_UNIX_TIME;\n    case REDISMODULE_ARG_TYPE_PURE_TOKEN: return 
ARG_TYPE_PURE_TOKEN;\n    case REDISMODULE_ARG_TYPE_ONEOF: return ARG_TYPE_ONEOF;\n    case REDISMODULE_ARG_TYPE_BLOCK: return ARG_TYPE_BLOCK;\n    default:\n        if (error) *error = 1;\n        return -1;\n    }\n}\n\nstatic int moduleConvertArgFlags(int flags) {\n    int realflags = 0;\n    if (flags & REDISMODULE_CMD_ARG_OPTIONAL) realflags |= CMD_ARG_OPTIONAL;\n    if (flags & REDISMODULE_CMD_ARG_MULTIPLE) realflags |= CMD_ARG_MULTIPLE;\n    if (flags & REDISMODULE_CMD_ARG_MULTIPLE_TOKEN) realflags |= CMD_ARG_MULTIPLE_TOKEN;\n    return realflags;\n}\n\n/* Return `struct RedisModule *` as `void *` to avoid exposing it outside of module.c. */\nvoid *moduleGetHandleByName(char *modulename) {\n    return dictFetchValue(modules,modulename);\n}\n\n/* Returns 1 if `cmd` is a command of the module whose handle is `module_handle`,\n * 0 otherwise. */\nint moduleIsModuleCommand(void *module_handle, struct redisCommand *cmd) {\n    if (cmd->proc != RedisModuleCommandDispatcher)\n        return 0;\n    if (module_handle == NULL)\n        return 0;\n    RedisModuleCommand *cp = cmd->module_cmd;\n    return (cp->module == module_handle);\n}\n\n/* --------------------------------------------------------------------------\n * ## Module information and time measurement\n * -------------------------------------------------------------------------- */\n\nint moduleListConfigMatch(void *config, void *name) {\n    ModuleConfig *mc = (ModuleConfig *) config;\n    /* Compare the provided name with the config's name and alias if it exists */\n    return strcasecmp(mc->name, (char *) name) == 0 ||\n            ((mc->alias) && strcasecmp(mc->alias, (char *) name) == 0);\n}\n\nvoid moduleListFree(void *config) {\n    ModuleConfig *module_config = (ModuleConfig *) config;\n    sdsfree(module_config->name);\n    sdsfree(module_config->alias);\n    zfree(config);\n}\n\nvoid RM_SetModuleAttribs(RedisModuleCtx *ctx, const char *name, int ver, int apiver) {\n    /* Called by RM_Init() to set up the `ctx->module` 
structure.\n     *\n     * This is an internal function; Redis module developers don't need\n     * to use it. */\n    RedisModule *module;\n\n    if (ctx->module != NULL) return;\n    module = zmalloc(sizeof(*module));\n    module->name = sdsnew(name);\n    module->ver = ver;\n    module->apiver = apiver;\n    module->types = listCreate();\n    module->usedby = listCreate();\n    module->using = listCreate();\n    module->filters = listCreate();\n    module->module_configs = listCreate();\n    listSetMatchMethod(module->module_configs, moduleListConfigMatch);\n    listSetFreeMethod(module->module_configs, moduleListFree);\n    module->in_call = 0;\n    module->configs_initialized = 0;\n    module->in_hook = 0;\n    module->options = 0;\n    module->info_cb = 0;\n    module->defrag_cb = 0;\n    module->defrag_cb_2 = 0;\n    module->defrag_start_cb = 0;\n    module->defrag_end_cb = 0;\n    module->loadmod = NULL;\n    module->num_commands_with_acl_categories = 0;\n    module->onload = 1;\n    module->num_acl_categories_added = 0;\n    ctx->module = module;\n}\n\n/* Return non-zero if the module name is busy.\n * Otherwise zero is returned. */\nint RM_IsModuleNameBusy(const char *name) {\n    sds modulename = sdsnew(name);\n    dictEntry *de = dictFind(modules,modulename);\n    sdsfree(modulename);\n    return de != NULL;\n}\n\n/* Return the current UNIX time in milliseconds. */\nmstime_t RM_Milliseconds(void) {\n    return mstime();\n}\n\n/* Return a monotonic counter of microseconds, relative to an arbitrary point in time. 
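 *\n * For example, to measure an elapsed interval inside a module (a minimal\n * sketch; the surrounding code is hypothetical):\n *\n *     uint64_t start = RedisModule_MonotonicMicroseconds();\n *     // ... do some work ...\n *     uint64_t elapsed_us = RedisModule_MonotonicMicroseconds() - start;\n *\n * Unlike RedisModule_Microseconds(), this counter is not affected by changes\n * to the system clock, so it is suitable for measuring durations.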
*/\nuint64_t RM_MonotonicMicroseconds(void) {\n    return getMonotonicUs();\n}\n\n/* Return the current UNIX time in microseconds. */\nustime_t RM_Microseconds(void) {\n    return ustime();\n}\n\n/* Return the cached UNIX time in microseconds.\n * It is updated in the server cron job and before executing a command.\n * It is useful for complex call stacks, such as a command causing a\n * key space notification, causing a module to execute a RedisModule_Call,\n * causing another notification, etc.\n * It makes sense for all these callbacks to use the same clock. */\nustime_t RM_CachedMicroseconds(void) {\n    return server.ustime;\n}\n\n/* Mark a point in time that will be used as the start time to calculate\n * the elapsed execution time when RM_BlockedClientMeasureTimeEnd() is called.\n * Within the same command, RM_BlockedClientMeasureTimeStart() and\n * RM_BlockedClientMeasureTimeEnd() can be called multiple times to accumulate\n * independent time intervals into the background duration.\n * This method always returns REDISMODULE_OK.\n *\n * This function is not thread safe. If it is used from a module thread and\n * from the blocked-client callback (possibly the main thread) simultaneously,\n * it is recommended to protect them with a lock owned by the caller instead of\n * the GIL. */\nint RM_BlockedClientMeasureTimeStart(RedisModuleBlockedClient *bc) {\n    elapsedStart(&(bc->background_timer));\n    return REDISMODULE_OK;\n}\n\n/* Mark a point in time that will be used as the end time\n * to calculate the elapsed execution time.\n * On success REDISMODULE_OK is returned.\n * This method only returns REDISMODULE_ERR if no start time was\n * previously defined (meaning RM_BlockedClientMeasureTimeStart was not called).\n *\n * This function is not thread safe. If it is used from a module thread and\n * from the blocked-client callback (possibly the main thread) simultaneously,\n * it is recommended to protect them with a lock owned by the caller instead of\n * the GIL. 
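 *\n * A minimal sketch of the intended usage, assuming a module that blocks a\n * client and performs its work in a background thread (MyThreadMain and the\n * unblocking logic are hypothetical):\n *\n *     void *MyThreadMain(void *arg) {\n *         RedisModuleBlockedClient *bc = arg;\n *         RedisModule_BlockedClientMeasureTimeStart(bc);\n *         // ... perform the slow background work ...\n *         RedisModule_BlockedClientMeasureTimeEnd(bc);\n *         RedisModule_UnblockClient(bc, NULL);\n *         return NULL;\n *     }\n *\n * The accumulated background duration is then included in the blocked\n * command's reported execution time when the client is unblocked.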
*/\nint RM_BlockedClientMeasureTimeEnd(RedisModuleBlockedClient *bc) {\n    /* If the counter is 0 then we haven't called RM_BlockedClientMeasureTimeStart. */\n    if (!bc->background_timer)\n        return REDISMODULE_ERR;\n    bc->background_duration += elapsedUs(bc->background_timer);\n    return REDISMODULE_OK;\n}\n\n/* This API allows modules to let Redis process background tasks, and some\n * commands during long blocking execution of a module command.\n * The module can call this API periodically.\n * The `flags` argument is a bit mask of these:\n *\n * - `REDISMODULE_YIELD_FLAG_NONE`: No special flags. Redis can perform some\n *                                  background operations, but not process client commands.\n * - `REDISMODULE_YIELD_FLAG_CLIENTS`: Redis can also process client commands.\n *\n * The `busy_reply` argument is optional, and can be used to control the verbose\n * error string after the `-BUSY` error code.\n *\n * When `REDISMODULE_YIELD_FLAG_CLIENTS` is used, Redis will only start\n * processing client commands after the time defined by the\n * `busy-reply-threshold` config, in which case Redis will start rejecting most\n * commands with a `-BUSY` error, but allow the ones marked with the `allow-busy`\n * flag to be executed.\n * This API can also be used in thread safe context (while locked), and during\n * loading (in the `rdb_load` callback, in which case it'll reject commands with\n * the -LOADING error).\n */\nvoid RM_Yield(RedisModuleCtx *ctx, int flags, const char *busy_reply) {\n    static int yield_nesting = 0;\n    /* Avoid nested calls to RM_Yield. */\n    if (yield_nesting)\n        return;\n    yield_nesting++;\n\n    long long now = getMonotonicUs();\n    if (now >= ctx->next_yield_time) {\n        /* In loading mode, there's no need to handle busy_module_yield_reply,\n         * and busy_module_yield_flags, since redis is anyway rejecting all\n         * commands with -LOADING. 
*/\n        if (server.loading) {\n            /* Let redis process events */\n            processEventsWhileBlocked();\n        } else {\n            const char *prev_busy_module_yield_reply = server.busy_module_yield_reply;\n            server.busy_module_yield_reply = busy_reply;\n            /* start the blocking operation if not already started. */\n            if (!server.busy_module_yield_flags) {\n                server.busy_module_yield_flags = BUSY_MODULE_YIELD_EVENTS;\n                blockingOperationStarts();\n                if (server.current_client)\n                    protectClient(server.current_client);\n            }\n            if (flags & REDISMODULE_YIELD_FLAG_CLIENTS)\n                server.busy_module_yield_flags |= BUSY_MODULE_YIELD_CLIENTS;\n\n            /* Let redis process events */\n            if (!pthread_equal(server.main_thread_id, pthread_self())) {\n                /* If we are not in the main thread, we defer event loop processing to the main thread\n                 * after the main thread enters acquiring GIL state in order to protect the event\n                 * loop (ae.c) and avoid potential race conditions. */\n\n                int acquiring;\n                atomicGet(server.module_gil_acquring, acquiring);\n                if (!acquiring) {\n                    /* If the main thread has not yet entered the acquiring GIL state,\n                     * we attempt to wake it up and exit without waiting for it to\n                     * acquire the GIL. This avoids blocking the caller, allowing them to\n                     * continue with unfinished tasks before the next yield.\n                     * We assume the caller keeps the GIL locked. */\n                    if (write(server.module_pipe[1],\"A\",1) != 1) {\n                        /* Ignore the error, this is best-effort. 
*/\n                    }\n                } else {\n                    /* Release the GIL, yielding CPU to give the main thread an opportunity to start\n                     * event processing, and then acquire the GIL again until the main thread releases it. */\n                    moduleReleaseGIL();\n                    usleep(0);\n                    moduleAcquireGIL();\n                }\n            } else {\n                /* If we are in the main thread, we can safely process events. */\n                processEventsWhileBlocked();\n            }\n\n            server.busy_module_yield_reply = prev_busy_module_yield_reply;\n            /* Possibly restore the previous flags in case of two nested contexts\n             * that use this API with different flags, but keep the first bit\n             * (PROCESS_EVENTS) set, so we know to call blockingOperationEnds on time. */\n            server.busy_module_yield_flags &= ~BUSY_MODULE_YIELD_CLIENTS;\n        }\n\n        /* decide when the next event should fire. 
*/\n        ctx->next_yield_time = now + 1000000 / server.hz;\n    }\n    yield_nesting--;\n}\n\n/* Set the module options: a bit mask of flags defining capabilities or behavior.\n *\n * REDISMODULE_OPTIONS_HANDLE_IO_ERRORS:\n * Generally, modules don't need to bother with this, as the process will just\n * terminate if a read error happens. However, setting this flag allows\n * repl-diskless-load to work if enabled.\n * The module should use RedisModule_IsIOError after reads, before using the\n * data that was read, and in case of error, propagate it upwards, and also be\n * able to release the partially populated value and all its allocations.\n *\n * REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED:\n * See RM_SignalModifiedKey().\n *\n * REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD:\n * Setting this flag indicates module awareness of diskless async replication (repl-diskless-load=swapdb)\n * and that redis could be serving reads during replication instead of blocking with LOADING status.\n *\n * REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS:\n * Declare that the module wants to get nested key-space notifications.\n * By default, Redis will not fire key-space notifications that happened inside\n * a key-space notification callback. This flag allows changing this behavior\n * and firing nested key-space notifications. Notice: if enabled, the module\n * should protect itself from infinite recursion. */\nvoid RM_SetModuleOptions(RedisModuleCtx *ctx, int options) {\n    ctx->module->options = options;\n}\n\n/* Signals that the key is modified from the user's perspective (i.e. 
invalidate WATCH\n * and client-side caching).\n *\n * This is done automatically when a key opened for writing is closed, unless\n * the option REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED has been set using\n * RM_SetModuleOptions().\n*/\nint RM_SignalModifiedKey(RedisModuleCtx *ctx, RedisModuleString *keyname) {\n    kvobj *kv = lookupKeyReadWithFlags(ctx->client->db, keyname, LOOKUP_NOEFFECTS);\n    keyModified(ctx->client,ctx->client->db,keyname,kv,1);\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## Automatic memory management for modules\n * -------------------------------------------------------------------------- */\n\n/* Enable automatic memory management.\n *\n * The function must be called as the first function of a command implementation\n * that wants to use automatic memory.\n *\n * When enabled, automatic memory management tracks and automatically frees\n * keys, call replies and Redis string objects once the command returns. In most\n * cases this eliminates the need to call the following functions:\n *\n * 1. RedisModule_CloseKey()\n * 2. RedisModule_FreeCallReply()\n * 3. RedisModule_FreeString()\n *\n * These functions can still be used with automatic memory management enabled,\n * for example to optimize loops that make numerous allocations. */\nvoid RM_AutoMemory(RedisModuleCtx *ctx) {\n    ctx->flags |= REDISMODULE_CTX_AUTO_MEMORY;\n}\n\n/* Add a new object to release automatically when the callback returns. 
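 *\n * For example, a command implementation using automatic memory does not need\n * to free the call replies it creates (a sketch; MyCmd is hypothetical and\n * argument validation is omitted):\n *\n *     int MyCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n *         REDISMODULE_NOT_USED(argc);\n *         RedisModule_AutoMemory(ctx);\n *         RedisModuleCallReply *reply =\n *             RedisModule_Call(ctx, \"GET\", \"s\", argv[1]);\n *         // No RedisModule_FreeCallReply() needed: the reply is queued\n *         // and released automatically when the command returns.\n *         RedisModule_ReplyWithCallReply(ctx, reply);\n *         return REDISMODULE_OK;\n *     }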
*/\nvoid autoMemoryAdd(RedisModuleCtx *ctx, int type, void *ptr) {\n    if (!(ctx->flags & REDISMODULE_CTX_AUTO_MEMORY)) return;\n    if (ctx->amqueue_used == ctx->amqueue_len) {\n        ctx->amqueue_len *= 2;\n        if (ctx->amqueue_len < 16) ctx->amqueue_len = 16;\n        ctx->amqueue = zrealloc(ctx->amqueue,sizeof(struct AutoMemEntry)*ctx->amqueue_len);\n    }\n    ctx->amqueue[ctx->amqueue_used].type = type;\n    ctx->amqueue[ctx->amqueue_used].ptr = ptr;\n    ctx->amqueue_used++;\n}\n\n/* Mark an object as freed in the auto release queue, so that users can still\n * free things manually if they want.\n *\n * The function returns 1 if the object was actually found in the auto memory\n * pool, otherwise 0 is returned. */\nint autoMemoryFreed(RedisModuleCtx *ctx, int type, void *ptr) {\n    if (!(ctx->flags & REDISMODULE_CTX_AUTO_MEMORY)) return 0;\n\n    int count = (ctx->amqueue_used+1)/2;\n    for (int j = 0; j < count; j++) {\n        for (int side = 0; side < 2; side++) {\n            /* For side = 0 check right side of the array, for\n             * side = 1 check the left side instead (zig-zag scanning). */\n            int i = (side == 0) ? (ctx->amqueue_used - 1 - j) : j;\n            if (ctx->amqueue[i].type == type &&\n                ctx->amqueue[i].ptr == ptr)\n            {\n                ctx->amqueue[i].type = REDISMODULE_AM_FREED;\n\n                /* Switch the freed element and the last element, to avoid growing\n                 * the queue unnecessarily if we allocate/free in a loop */\n                if (i != ctx->amqueue_used-1) {\n                    ctx->amqueue[i] = ctx->amqueue[ctx->amqueue_used-1];\n                }\n\n                /* Reduce the size of the queue because we either moved the top\n                 * element elsewhere or freed it */\n                ctx->amqueue_used--;\n                return 1;\n            }\n        }\n    }\n    return 0;\n}\n\n/* Release all the objects in queue. 
*/\nvoid autoMemoryCollect(RedisModuleCtx *ctx) {\n    if (!(ctx->flags & REDISMODULE_CTX_AUTO_MEMORY)) return;\n    /* Clear the AUTO_MEMORY flag from the context, otherwise the functions\n     * we call to free the resources, will try to scan the auto release\n     * queue to mark the entries as freed. */\n    ctx->flags &= ~REDISMODULE_CTX_AUTO_MEMORY;\n    int j;\n    for (j = 0; j < ctx->amqueue_used; j++) {\n        void *ptr = ctx->amqueue[j].ptr;\n        switch(ctx->amqueue[j].type) {\n        case REDISMODULE_AM_STRING: decrRefCount(ptr); break;\n        case REDISMODULE_AM_REPLY: RM_FreeCallReply(ptr); break;\n        case REDISMODULE_AM_KEY: RM_CloseKey(ptr); break;\n        case REDISMODULE_AM_DICT: RM_FreeDict(NULL,ptr); break;\n        case REDISMODULE_AM_INFO: RM_FreeServerInfo(NULL,ptr); break;\n        case REDISMODULE_AM_CONFIG: RM_ConfigIteratorRelease(NULL, ptr); break;\n        case REDISMODULE_AM_SLOTRANGEARRAY: RM_ClusterFreeSlotRanges(NULL, ptr); break;\n        }\n    }\n    ctx->flags |= REDISMODULE_CTX_AUTO_MEMORY;\n    zfree(ctx->amqueue);\n    ctx->amqueue = NULL;\n    ctx->amqueue_len = 0;\n    ctx->amqueue_used = 0;\n}\n\n/* --------------------------------------------------------------------------\n * ## String objects APIs\n * -------------------------------------------------------------------------- */\n\n/* Create a new module string object. The returned string must be freed\n * with RedisModule_FreeString(), unless automatic memory is enabled.\n *\n * The string is created by copying the `len` bytes starting\n * at `ptr`. No reference is retained to the passed buffer.\n *\n * The module context 'ctx' is optional and may be NULL if you want to create\n * a string out of the context scope. However in that case, the automatic\n * memory management will not be available, and the string memory must be\n * managed manually. 
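 *\n * For example (a sketch), creating and manually freeing a string when\n * automatic memory management is not enabled:\n *\n *     RedisModuleString *s = RedisModule_CreateString(ctx, \"hello\", 5);\n *     // ... use the string ...\n *     RedisModule_FreeString(ctx, s);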
*/\nRedisModuleString *RM_CreateString(RedisModuleCtx *ctx, const char *ptr, size_t len) {\n    RedisModuleString *o = createStringObject(ptr,len);\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);\n    return o;\n}\n\n/* Create a new module string object from a printf format and arguments.\n * The returned string must be freed with RedisModule_FreeString(), unless\n * automatic memory is enabled.\n *\n * The string is created using the sds formatter function sdscatvprintf().\n *\n * The passed context 'ctx' may be NULL if necessary, see the\n * RedisModule_CreateString() documentation for more info. */\nRedisModuleString *RM_CreateStringPrintf(RedisModuleCtx *ctx, const char *fmt, ...) {\n    sds s = sdsempty();\n\n    va_list ap;\n    va_start(ap, fmt);\n    s = sdscatvprintf(s, fmt, ap);\n    va_end(ap);\n\n    RedisModuleString *o = createObject(OBJ_STRING, s);\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);\n\n    return o;\n}\n\n\n/* Like RedisModule_CreateString(), but creates a string starting from a `long long`\n * integer instead of taking a buffer and its length.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management.\n *\n * The passed context 'ctx' may be NULL if necessary, see the\n * RedisModule_CreateString() documentation for more info. */\nRedisModuleString *RM_CreateStringFromLongLong(RedisModuleCtx *ctx, long long ll) {\n    char buf[LONG_STR_SIZE];\n    size_t len = ll2string(buf,sizeof(buf),ll);\n    return RM_CreateString(ctx,buf,len);\n}\n\n/* Like RedisModule_CreateString(), but creates a string starting from an `unsigned long long`\n * integer instead of taking a buffer and its length.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management.\n *\n * The passed context 'ctx' may be NULL if necessary, see the\n * RedisModule_CreateString() documentation for more info. 
*/\nRedisModuleString *RM_CreateStringFromULongLong(RedisModuleCtx *ctx, unsigned long long ull) {\n    char buf[LONG_STR_SIZE];\n    size_t len = ull2string(buf,sizeof(buf),ull);\n    return RM_CreateString(ctx,buf,len);\n}\n\n/* Like RedisModule_CreateString(), but creates a string starting from a double\n * instead of taking a buffer and its length.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management. */\nRedisModuleString *RM_CreateStringFromDouble(RedisModuleCtx *ctx, double d) {\n    char buf[MAX_D2STRING_CHARS];\n    size_t len = d2string(buf,sizeof(buf),d);\n    return RM_CreateString(ctx,buf,len);\n}\n\n/* Like RedisModule_CreateString(), but creates a string starting from a long\n * double.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management.\n *\n * The passed context 'ctx' may be NULL if necessary, see the\n * RedisModule_CreateString() documentation for more info. */\nRedisModuleString *RM_CreateStringFromLongDouble(RedisModuleCtx *ctx, long double ld, int humanfriendly) {\n    char buf[MAX_LONG_DOUBLE_CHARS];\n    size_t len = ld2string(buf,sizeof(buf),ld,\n        (humanfriendly ? LD_STR_HUMAN : LD_STR_AUTO));\n    return RM_CreateString(ctx,buf,len);\n}\n\n/* Like RedisModule_CreateString(), but creates a string starting from another\n * RedisModuleString.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management.\n *\n * The passed context 'ctx' may be NULL if necessary, see the\n * RedisModule_CreateString() documentation for more info. */\nRedisModuleString *RM_CreateStringFromString(RedisModuleCtx *ctx, const RedisModuleString *str) {\n    RedisModuleString *o = dupStringObject(str);\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);\n    return o;\n}\n\n/* Creates a string from a stream ID. 
The returned string must be released with\n * RedisModule_FreeString(), unless automatic memory is enabled.\n *\n * The passed context `ctx` may be NULL if necessary. See the\n * RedisModule_CreateString() documentation for more info. */\nRedisModuleString *RM_CreateStringFromStreamID(RedisModuleCtx *ctx, const RedisModuleStreamID *id) {\n    streamID streamid = {id->ms, id->seq};\n    RedisModuleString *o = createObjectFromStreamID(&streamid);\n    if (ctx != NULL) autoMemoryAdd(ctx, REDISMODULE_AM_STRING, o);\n    return o;\n}\n\n/* Free a module string object obtained with one of the Redis modules API calls\n * that return new string objects.\n *\n * It is possible to call this function even when automatic memory management\n * is enabled. In that case the string will be released ASAP and removed\n * from the pool of strings to release at the end.\n *\n * If the string was created with a NULL context 'ctx', it is also possible to\n * pass ctx as NULL when releasing the string (but passing a context will not\n * create any issue). Strings created with a context should also be freed\n * passing the context, so if you want to free a string out of context later,\n * make sure to create it using a NULL context.\n *\n * This API is not thread safe; access to these retained strings (if they originated\n * from client command arguments) must be done with the GIL locked. */\nvoid RM_FreeString(RedisModuleCtx *ctx, RedisModuleString *str) {\n    decrRefCount(str);\n    if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str);\n}\n\n/* Every call to this function will make the string 'str' require\n * an additional call to RedisModule_FreeString() in order to really\n * free the string. 
Note that the automatic freeing of the string obtained by\n * enabling modules' automatic memory management counts for one\n * RedisModule_FreeString() call (it is just executed automatically).\n *\n * Normally you want to call this function when, at the same time,\n * the following conditions are true:\n *\n * 1. You have automatic memory management enabled.\n * 2. You want to create string objects.\n * 3. Those string objects you create need to live *after* the callback\n *    function (for example a command implementation) creating them returns.\n *\n * Usually you want this in order to store the created string object\n * into your own data structure, for example when implementing a new data\n * type.\n *\n * Note that when memory management is turned off, you don't need\n * any call to RetainString() since creating a string will always result\n * in a string that lives after the callback function returns, if\n * no FreeString() call is performed.\n *\n * It is possible to call this function with a NULL context.\n *\n * When strings are going to be retained for an extended duration, it is good\n * practice to also call RedisModule_TrimStringAllocation() in order to\n * optimize memory usage.\n *\n * Threaded modules that reference retained strings from other threads *must*\n * explicitly trim the allocation as soon as the string is retained. Not doing\n * so may result in automatic trimming, which is not thread safe.\n *\n * This API is not thread safe; access to these retained strings (if they originated\n * from client command arguments) must be done with the GIL locked. */\nvoid RM_RetainString(RedisModuleCtx *ctx, RedisModuleString *str) {\n    if (ctx == NULL || !autoMemoryFreed(ctx,REDISMODULE_AM_STRING,str)) {\n        /* Increment the string reference count only if we can't\n         * just remove the object from the list of objects that should\n         * be reclaimed. 
Why do we do that, instead of just incrementing\n         * the refcount in any case and letting the automatic FreeString()\n         * call at the end bring the refcount back to the desired\n         * value? Because this way we ensure that the object refcount\n         * value is 1 (instead of going to 2 to be dropped later to 1)\n         * after the call to this function. This is needed for functions\n         * like RedisModule_StringAppendBuffer() to work. */\n        incrRefCount(str);\n    }\n}\n\n/* This function can be used instead of RedisModule_RetainString().\n * The main difference between the two is that this function will always\n * succeed, whereas RedisModule_RetainString() may fail because of an\n * assertion.\n *\n * The function returns a pointer to RedisModuleString, which is owned\n * by the caller. It requires a call to RedisModule_FreeString() to free\n * the string when automatic memory management is disabled for the context.\n * When automatic memory management is enabled, you can either call\n * RedisModule_FreeString() or let the automation free it.\n *\n * This function is more efficient than RedisModule_CreateStringFromString()\n * because whenever possible, it avoids copying the underlying\n * RedisModuleString. The disadvantage of using this function is that it\n * might not be possible to use RedisModule_StringAppendBuffer() on the\n * returned RedisModuleString.\n *\n * It is possible to call this function with a NULL context.\n *\n * When strings are going to be held for an extended duration, it is good\n * practice to also call RedisModule_TrimStringAllocation() in order to\n * optimize memory usage.\n *\n * Threaded modules that reference held strings from other threads *must*\n * explicitly trim the allocation as soon as the string is held. 
Not doing\n * so may result in automatic trimming, which is not thread safe.\n *\n * This API is not thread safe; access to these retained strings (if they originated\n * from client command arguments) must be done with the GIL locked. */\nRedisModuleString* RM_HoldString(RedisModuleCtx *ctx, RedisModuleString *str) {\n    if (str->refcount == OBJ_STATIC_REFCOUNT) {\n        return RM_CreateStringFromString(ctx, str);\n    }\n\n    incrRefCount(str);\n    if (ctx != NULL) {\n        /*\n         * Put the str in the auto memory management of the ctx.\n         * It might already be there; in this case the ref count will\n         * be 2 and we will decrease the ref count twice and free the\n         * object in the auto memory free function.\n         *\n         * Why can't we do the same trick of just removing the object\n         * from the auto memory (like in RM_RetainString)?\n         * This code shows the issue:\n         *\n         * RM_AutoMemory(ctx);\n         * str1 = RM_CreateString(ctx, \"test\", 4);\n         * str2 = RM_HoldString(ctx, str1);\n         * RM_FreeString(ctx, str1);\n         * RM_FreeString(ctx, str2);\n         *\n         * If, after the RM_HoldString, we just removed the string from\n         * the auto memory, this example would access freed memory\n         * on 'RM_FreeString(ctx, str2);' because the string would already\n         * have been freed by 'RM_FreeString(ctx, str1);'.\n         *\n         * So it's safer to just increase the ref count\n         * and add the string to the auto memory again.\n         *\n         * The limitation is that it is not possible to use RedisModule_StringAppendBuffer\n         * on the string.\n         */\n        autoMemoryAdd(ctx,REDISMODULE_AM_STRING,str);\n    }\n    return str;\n}\n\n/* Given a module string object, this function returns the string pointer\n * and length of the string. The returned pointer and length should only\n * be used for read only accesses and never modified. 
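\n *\n * For example (an illustrative sketch, not part of the reference text):\n *\n *     size_t len;\n *     const char *p = RedisModule_StringPtrLen(str,&len);\n *     // 'p' points to 'len' bytes that must be treated as read only.\n 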
*/\nconst char *RM_StringPtrLen(const RedisModuleString *str, size_t *len) {\n    if (str == NULL) {\n        const char *errmsg = \"(NULL string reply referenced in module)\";\n        if (len) *len = strlen(errmsg);\n        return errmsg;\n    }\n    if (len) *len = sdslen(str->ptr);\n    return str->ptr;\n}\n\n/* --------------------------------------------------------------------------\n * Higher level string operations\n * ------------------------------------------------------------------------- */\n\n/* Convert the string into a `long long` integer, storing it at `*ll`.\n * Returns REDISMODULE_OK on success. If the string can't be parsed\n * as a valid, strict `long long` (no spaces before/after), REDISMODULE_ERR\n * is returned. */\nint RM_StringToLongLong(const RedisModuleString *str, long long *ll) {\n    return string2ll(str->ptr,sdslen(str->ptr),ll) ? REDISMODULE_OK :\n                                                     REDISMODULE_ERR;\n}\n\n/* Convert the string into an `unsigned long long` integer, storing it at `*ull`.\n * Returns REDISMODULE_OK on success. If the string can't be parsed\n * as a valid, strict `unsigned long long` (no spaces before/after), REDISMODULE_ERR\n * is returned. */\nint RM_StringToULongLong(const RedisModuleString *str, unsigned long long *ull) {\n    return string2ull(str->ptr,ull) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Convert the string into a double, storing it at `*d`.\n * Returns REDISMODULE_OK on success or REDISMODULE_ERR if the string is\n * not a valid string representation of a double value. */\nint RM_StringToDouble(const RedisModuleString *str, double *d) {\n    int retval = getDoubleFromObject(str,d);\n    return (retval == C_OK) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Convert the string into a long double, storing it at `*ld`.\n * Returns REDISMODULE_OK on success or REDISMODULE_ERR if the string is\n * not a valid string representation of a long double value. 
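\n *\n * For example (an illustrative sketch, not part of the reference text; the\n * argument index and error message are arbitrary):\n *\n *     long double ld;\n *     if (RedisModule_StringToLongDouble(argv[1],&ld) != REDISMODULE_OK)\n *         return RedisModule_ReplyWithError(ctx,\"ERR invalid long double\");\n 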
*/\nint RM_StringToLongDouble(const RedisModuleString *str, long double *ld) {\n    int retval = string2ld(str->ptr,sdslen(str->ptr),ld);\n    return retval ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Convert the string into a stream ID, storing it at `*id`.\n * Returns REDISMODULE_OK on success and returns REDISMODULE_ERR if the string\n * is not a valid string representation of a stream ID. The special IDs \"+\" and\n * \"-\" are allowed.\n */\nint RM_StringToStreamID(const RedisModuleString *str, RedisModuleStreamID *id) {\n    streamID streamid;\n    if (streamParseID(str, &streamid) == C_OK) {\n        id->ms = streamid.ms;\n        id->seq = streamid.seq;\n        return REDISMODULE_OK;\n    } else {\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Compare two string objects, returning -1, 0 or 1 respectively if\n * a < b, a == b, a > b. Strings are compared byte by byte as two\n * binary blobs without any encoding care / collation attempt. */\nint RM_StringCompare(const RedisModuleString *a, const RedisModuleString *b) {\n    return compareStringObjects(a,b);\n}\n\n/* Return the (possibly modified in encoding) input 'str' object if\n * the string is unshared, otherwise NULL is returned. */\nRedisModuleString *moduleAssertUnsharedString(RedisModuleString *str) {\n    if (str->refcount != 1) {\n        serverLog(LL_WARNING,\n            \"Module attempted to use an in-place string modify operation \"\n            \"with a string referenced multiple times. Please check the code \"\n            \"for API usage correctness.\");\n        return NULL;\n    }\n    if (str->encoding == OBJ_ENCODING_EMBSTR) {\n        /* Note: here we \"leak\" the additional allocation that was\n         * used in order to store the embedded string in the object. */\n        str->ptr = sdsnewlen(str->ptr,sdslen(str->ptr));\n        str->encoding = OBJ_ENCODING_RAW;\n    } else if (str->encoding == OBJ_ENCODING_INT) {\n        /* Convert the string from integer to raw encoding. 
*/\n        str->ptr = sdsfromlonglong((long)str->ptr);\n        str->encoding = OBJ_ENCODING_RAW;\n    }\n    return str;\n}\n\n/* Append the specified buffer to the string 'str'. The string must be a\n * string created by the user that is referenced only a single time, otherwise\n * REDISMODULE_ERR is returned and the operation is not performed. */\nint RM_StringAppendBuffer(RedisModuleCtx *ctx, RedisModuleString *str, const char *buf, size_t len) {\n    UNUSED(ctx);\n    str = moduleAssertUnsharedString(str);\n    if (str == NULL) return REDISMODULE_ERR;\n    str->ptr = sdscatlen(str->ptr,buf,len);\n    return REDISMODULE_OK;\n}\n\n/* Trim possible excess memory allocated for a RedisModuleString.\n *\n * Sometimes a RedisModuleString may have more memory allocated for\n * it than required, typically for argv arguments that were constructed\n * from network buffers. This function optimizes such strings by reallocating\n * their memory, which is useful for strings that are not short lived but\n * retained for an extended duration.\n *\n * This operation is *not thread safe* and should only be called when\n * no concurrent access to the string is guaranteed. Using it for an argv\n * string in a module command before the string is potentially available\n * to other threads is generally safe.\n *\n * Currently, Redis may also automatically trim retained strings when a\n * module command returns. However, doing this explicitly should still be\n * a preferred option:\n *\n * 1. Future versions of Redis may abandon auto-trimming.\n * 2. 
Auto-trimming as currently implemented is *not thread safe*.\n *    A background thread manipulating a recently retained string may end up\n *    in a race condition with the auto-trim, which could result in\n *    data corruption.\n */\nvoid RM_TrimStringAllocation(RedisModuleString *str) {\n    if (!str) return;\n    trimStringObjectIfNeeded(str, 1);\n}\n\n/* --------------------------------------------------------------------------\n * ## Reply APIs\n *\n * These functions are used for sending replies to the client.\n *\n * Most functions always return REDISMODULE_OK, so you can use them with\n * 'return' in order to return from the command implementation with:\n *\n *     if (... some condition ...)\n *         return RedisModule_ReplyWithLongLong(ctx,mycount);\n *\n * ### Reply with collection functions\n *\n * After starting a collection reply, the module must make calls to other\n * `ReplyWith*` style functions in order to emit the elements of the collection.\n * Collection types include: Array, Map, Set and Attribute.\n *\n * When producing collections with a number of elements that is not known\n * beforehand, the function can be called with the special flag\n * REDISMODULE_POSTPONED_LEN (REDISMODULE_POSTPONED_ARRAY_LEN in the past),\n * and the actual number of elements can later be set with an RM_ReplySet*Length()\n * call (which will set the latest \"open\" count if there are multiple ones).\n * -------------------------------------------------------------------------- */\n\n/* Send an error about the number of arguments given to the command,\n * citing the command name in the error message. 
Returns REDISMODULE_OK.\n *\n * Example:\n *\n *     if (argc != 3) return RedisModule_WrongArity(ctx);\n */\nint RM_WrongArity(RedisModuleCtx *ctx) {\n    addReplyErrorArity(ctx->client);\n    return REDISMODULE_OK;\n}\n\n/* Return the client object the `RM_Reply*` functions should target.\n * Normally this is just `ctx->client`, that is the client that called\n * the module command, however in the case of thread safe contexts there\n * is no directly associated client (since it would not be safe to access\n * the client from a thread), so instead the blocked client object referenced\n * in the thread safe context, has a fake client that we just use to accumulate\n * the replies. Later, when the client is unblocked, the accumulated replies\n * are appended to the actual client.\n *\n * The function returns the client pointer depending on the context, or\n * NULL if there is no potential client. This happens when we are in the\n * context of a thread safe context that was not initialized with a blocked\n * client object. Other contexts without associated clients are the ones\n * initialized to run the timers callbacks. */\nclient *moduleGetReplyClient(RedisModuleCtx *ctx) {\n    if (ctx->flags & REDISMODULE_CTX_THREAD_SAFE) {\n        if (ctx->blocked_client)\n            return ctx->blocked_client->reply_client;\n        else\n            return NULL;\n    } else {\n        /* If this is a non thread safe context, just return the client\n         * that is running the command if any. This may be NULL as well\n         * in the case of contexts that are not executed with associated\n         * clients, like timer contexts. */\n        return ctx->client;\n    }\n}\n\n/* Send an integer reply to the client, with the specified `long long` value.\n * The function always returns REDISMODULE_OK. 
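\n *\n * For example (an illustrative sketch, not part of the reference text;\n * 'mycount' is an arbitrary value computed by the command):\n *\n *     return RedisModule_ReplyWithLongLong(ctx,mycount);\n 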
*/\nint RM_ReplyWithLongLong(RedisModuleCtx *ctx, long long ll) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyLongLong(c,ll);\n    return REDISMODULE_OK;\n}\n\n/* Reply with the error 'err'.\n *\n * Note that 'err' must contain the entire error message, including\n * the initial error code. The function only provides the initial \"-\", so\n * the usage is, for example:\n *\n *     RedisModule_ReplyWithError(ctx,\"ERR Wrong Type\");\n *\n * and not just:\n *\n *     RedisModule_ReplyWithError(ctx,\"Wrong Type\");\n *\n * The function always returns REDISMODULE_OK.\n */\nint RM_ReplyWithError(RedisModuleCtx *ctx, const char *err) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyErrorFormat(c,\"-%s\",err);\n    return REDISMODULE_OK;\n}\n\n/* Reply with an error created from a printf format and arguments.\n *\n * Note that 'fmt' must contain the entire error message, including\n * the initial error code. The function only provides the initial \"-\", so\n * the usage is, for example:\n *\n *     RedisModule_ReplyWithErrorFormat(ctx,\"ERR Wrong Type: %s\",type);\n *\n * and not just:\n *\n *     RedisModule_ReplyWithErrorFormat(ctx,\"Wrong Type: %s\",type);\n *\n * The function always returns REDISMODULE_OK.\n */\nint RM_ReplyWithErrorFormat(RedisModuleCtx *ctx, const char *fmt, ...) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n\n    int len = strlen(fmt) + 2; /* 1 for the \\0 and 1 for the hyphen */\n    char *hyphenfmt = zmalloc(len);\n    snprintf(hyphenfmt, len, \"-%s\", fmt);\n\n    va_list ap;\n    va_start(ap, fmt);\n    addReplyErrorFormatInternal(c, 0, hyphenfmt, ap);\n    va_end(ap);\n\n    zfree(hyphenfmt);\n\n    return REDISMODULE_OK;\n}\n\n/* Reply with a simple string (`+... \\r\\n` in RESP protocol). 
These replies\n * are suitable only when sending a small non-binary string with small\n * overhead, like \"OK\" or similar replies.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithSimpleString(RedisModuleCtx *ctx, const char *msg) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyProto(c,\"+\",1);\n    addReplyProto(c,msg,strlen(msg));\n    addReplyProto(c,\"\\r\\n\",2);\n    return REDISMODULE_OK;\n}\n\n#define COLLECTION_REPLY_ARRAY      1\n#define COLLECTION_REPLY_MAP        2\n#define COLLECTION_REPLY_SET        3\n#define COLLECTION_REPLY_ATTRIBUTE  4\n\nint moduleReplyWithCollection(RedisModuleCtx *ctx, long len, int type) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    if (len == REDISMODULE_POSTPONED_LEN) {\n        ctx->postponed_arrays = zrealloc(ctx->postponed_arrays,sizeof(void*)*\n                (ctx->postponed_arrays_count+1));\n        ctx->postponed_arrays[ctx->postponed_arrays_count] =\n            addReplyDeferredLen(c);\n        ctx->postponed_arrays_count++;\n    } else if (len == 0) {\n        switch (type) {\n        case COLLECTION_REPLY_ARRAY:\n            addReply(c, shared.emptyarray);\n            break;\n        case COLLECTION_REPLY_MAP:\n            addReply(c, shared.emptymap[c->resp]);\n            break;\n        case COLLECTION_REPLY_SET:\n            addReply(c, shared.emptyset[c->resp]);\n            break;\n        case COLLECTION_REPLY_ATTRIBUTE:\n            addReplyAttributeLen(c,len);\n            break;\n        default:\n            serverPanic(\"Invalid module empty reply type %d\", type);\n        }\n    } else {\n        switch (type) {\n        case COLLECTION_REPLY_ARRAY:\n            addReplyArrayLen(c,len);\n            break;\n        case COLLECTION_REPLY_MAP:\n            addReplyMapLen(c,len);\n            break;\n        case COLLECTION_REPLY_SET:\n            addReplySetLen(c,len);\n          
  break;\n        case COLLECTION_REPLY_ATTRIBUTE:\n            addReplyAttributeLen(c,len);\n            break;\n        default:\n            serverPanic(\"Invalid module reply type %d\", type);\n        }\n    }\n    return REDISMODULE_OK;\n}\n\n/* Reply with an array type of 'len' elements.\n *\n * After starting an array reply, the module must make `len` calls to other\n * `ReplyWith*` style functions in order to emit the elements of the array.\n * See Reply APIs section for more details.\n *\n * Use RM_ReplySetArrayLength() to set deferred length.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithArray(RedisModuleCtx *ctx, long len) {\n    return moduleReplyWithCollection(ctx, len, COLLECTION_REPLY_ARRAY);\n}\n\n/* Reply with a RESP3 Map type of 'len' pairs.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * After starting a map reply, the module must make `len*2` calls to other\n * `ReplyWith*` style functions in order to emit the elements of the map.\n * See Reply APIs section for more details.\n *\n * If the connected client is using RESP2, the reply will be converted to a flat\n * array.\n * \n * Use RM_ReplySetMapLength() to set deferred length.\n * \n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithMap(RedisModuleCtx *ctx, long len) {\n    return moduleReplyWithCollection(ctx, len, COLLECTION_REPLY_MAP);\n}\n\n/* Reply with a RESP3 Set type of 'len' elements.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * After starting a set reply, the module must make `len` calls to other\n * `ReplyWith*` style functions in order to emit the elements of the set.\n * See Reply APIs section for more details.\n *\n * If the connected client is using RESP2, the reply will be converted to an\n * array type.\n *\n * Use RM_ReplySetSetLength() to set deferred length.\n * \n * The function always returns REDISMODULE_OK. 
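\n *\n * For example, to reply with a set of two elements (an illustrative sketch,\n * not part of the reference text):\n *\n *     RedisModule_ReplyWithSet(ctx,2);\n *     RedisModule_ReplyWithCString(ctx,\"a\");\n *     RedisModule_ReplyWithCString(ctx,\"b\");\n 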
*/\nint RM_ReplyWithSet(RedisModuleCtx *ctx, long len) {\n    return moduleReplyWithCollection(ctx, len, COLLECTION_REPLY_SET);\n}\n\n\n/* Add attributes (metadata) to the reply. Should be done before adding the\n * actual reply. see https://github.com/antirez/RESP3/blob/master/spec.md#attribute-type\n *\n * After starting an attribute's reply, the module must make `len*2` calls to other\n * `ReplyWith*` style functions in order to emit the elements of the attribute map.\n * See Reply APIs section for more details.\n *\n * Use RM_ReplySetAttributeLength() to set deferred length.\n * \n * Not supported by RESP2 and will return REDISMODULE_ERR, otherwise\n * the function always returns REDISMODULE_OK. */\nint RM_ReplyWithAttribute(RedisModuleCtx *ctx, long len) {\n    if (ctx->client->resp == 2) return REDISMODULE_ERR;\n \n    return moduleReplyWithCollection(ctx, len, COLLECTION_REPLY_ATTRIBUTE);\n}\n\n/* Reply to the client with a null array, simply null in RESP3,\n * null array in RESP2.\n *\n * Note: In RESP3 there's no difference between Null reply and\n * NullArray reply, so to prevent ambiguity it's better to avoid\n * using this API and use RedisModule_ReplyWithNull instead.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithNullArray(RedisModuleCtx *ctx) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyNullArray(c);\n    return REDISMODULE_OK;\n}\n\n/* Reply to the client with an empty array.\n *\n * The function always returns REDISMODULE_OK. 
*/\nint RM_ReplyWithEmptyArray(RedisModuleCtx *ctx) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReply(c,shared.emptyarray);\n    return REDISMODULE_OK;\n}\n\nvoid moduleReplySetCollectionLength(RedisModuleCtx *ctx, long len, int type) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return;\n    if (ctx->postponed_arrays_count == 0) {\n        serverLog(LL_WARNING,\n            \"API misuse detected in module %s: \"\n            \"RedisModule_ReplySet*Length() called without previous \"\n            \"RedisModule_ReplyWith*(ctx,REDISMODULE_POSTPONED_LEN) \"\n            \"call.\", ctx->module->name);\n            return;\n    }\n    ctx->postponed_arrays_count--;\n    switch(type) {\n    case COLLECTION_REPLY_ARRAY:\n        setDeferredArrayLen(c,ctx->postponed_arrays[ctx->postponed_arrays_count],len);\n        break;\n    case COLLECTION_REPLY_MAP:\n        setDeferredMapLen(c,ctx->postponed_arrays[ctx->postponed_arrays_count],len);\n        break;\n    case COLLECTION_REPLY_SET:\n        setDeferredSetLen(c,ctx->postponed_arrays[ctx->postponed_arrays_count],len);\n        break;\n    case COLLECTION_REPLY_ATTRIBUTE:\n        setDeferredAttributeLen(c,ctx->postponed_arrays[ctx->postponed_arrays_count],len);\n        break;\n    default:\n        serverPanic(\"Invalid module reply type %d\", type);\n    }\n    if (ctx->postponed_arrays_count == 0) {\n        zfree(ctx->postponed_arrays);\n        ctx->postponed_arrays = NULL;\n    }\n}\n\n/* When RedisModule_ReplyWithArray() is used with the argument\n * REDISMODULE_POSTPONED_LEN, because we don't know beforehand the number\n * of items we are going to output as elements of the array, this function\n * will take care to set the array length.\n *\n * Since it is possible to have multiple array replies pending with unknown\n * length, this function guarantees to always set the latest array length\n * that was created in a postponed way.\n *\n * For 
example, in order to output an array like [1,[10,20,30]] we\n * could write:\n *\n *      RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n *      RedisModule_ReplyWithLongLong(ctx,1);\n *      RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n *      RedisModule_ReplyWithLongLong(ctx,10);\n *      RedisModule_ReplyWithLongLong(ctx,20);\n *      RedisModule_ReplyWithLongLong(ctx,30);\n *      RedisModule_ReplySetArrayLength(ctx,3); // Set len of 10,20,30 array.\n *      RedisModule_ReplySetArrayLength(ctx,2); // Set len of top array\n *\n * Note that in the above example there is no reason to postpone the array\n * length, since we produce a fixed number of elements, but in practice\n * the code may use an iterator or other ways of creating the output, so\n * that it is not easy to calculate the number of elements in advance.\n */\nvoid RM_ReplySetArrayLength(RedisModuleCtx *ctx, long len) {\n    moduleReplySetCollectionLength(ctx, len, COLLECTION_REPLY_ARRAY);\n}\n\n/* Very similar to RedisModule_ReplySetArrayLength, except `len` should be\n * exactly half of the number of `ReplyWith*` functions called in the\n * context of the map.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3. */\nvoid RM_ReplySetMapLength(RedisModuleCtx *ctx, long len) {\n    moduleReplySetCollectionLength(ctx, len, COLLECTION_REPLY_MAP);\n}\n\n/* Very similar to RedisModule_ReplySetArrayLength.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3. */\nvoid RM_ReplySetSetLength(RedisModuleCtx *ctx, long len) {\n    moduleReplySetCollectionLength(ctx, len, COLLECTION_REPLY_SET);\n}\n\n/* Very similar to RedisModule_ReplySetMapLength.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * Must not be called if RM_ReplyWithAttribute returned an error. 
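\n *\n * For example (an illustrative sketch, not part of the reference text; the\n * attribute key and value are arbitrary):\n *\n *     if (RedisModule_ReplyWithAttribute(ctx,REDISMODULE_POSTPONED_LEN) != REDISMODULE_OK)\n *         return REDISMODULE_ERR;\n *     RedisModule_ReplyWithCString(ctx,\"key-popularity\");\n *     RedisModule_ReplyWithLongLong(ctx,90);\n *     RedisModule_ReplySetAttributeLength(ctx,1);\n 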
*/\nvoid RM_ReplySetAttributeLength(RedisModuleCtx *ctx, long len) {\n    if (ctx->client->resp == 2) return;\n    moduleReplySetCollectionLength(ctx, len, COLLECTION_REPLY_ATTRIBUTE);\n}\n\n/* Reply with a bulk string, taking in input a C buffer pointer and length.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithStringBuffer(RedisModuleCtx *ctx, const char *buf, size_t len) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyBulkCBuffer(c,(char*)buf,len);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a bulk string, taking in input a C buffer pointer that is\n * assumed to be null-terminated.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithCString(RedisModuleCtx *ctx, const char *buf) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyBulkCString(c,(char*)buf);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a bulk string, taking in input a RedisModuleString object.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithString(RedisModuleCtx *ctx, RedisModuleString *str) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    /* Disable reply copy avoidance: the module string may belong to a datatype\n     * that could be lazy-freed by BIO thread, causing UAF if we hold a reference\n     * to it in the client reply list. */\n    addReplyBulkWithFlag(c,str,0);\n    return REDISMODULE_OK;\n}\n\n/* Reply with an empty string.\n *\n * The function always returns REDISMODULE_OK. 
*/\nint RM_ReplyWithEmptyString(RedisModuleCtx *ctx) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReply(c,shared.emptybulk);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a binary safe string, which should not be escaped or filtered,\n * taking in input a C buffer pointer, length and a 3 character type/extension.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithVerbatimStringType(RedisModuleCtx *ctx, const char *buf, size_t len, const char *ext) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyVerbatim(c, buf, len, ext);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a binary safe string, which should not be escaped or filtered,\n * taking in input a C buffer pointer and length.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithVerbatimString(RedisModuleCtx *ctx, const char *buf, size_t len) {\n    return RM_ReplyWithVerbatimStringType(ctx, buf, len, \"txt\");\n}\n\n/* Reply to the client with a NULL.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithNull(RedisModuleCtx *ctx) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyNull(c);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a RESP3 Boolean type.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * In RESP3, this is a boolean type.\n * In RESP2, it's a string response of \"1\" and \"0\" for true and false respectively.\n *\n * The function always returns REDISMODULE_OK. 
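\n *\n * For example (an illustrative sketch, not part of the reference text;\n * 'found' is an arbitrary flag computed by the command):\n *\n *     return RedisModule_ReplyWithBool(ctx,found ? 1 : 0);\n 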
*/\nint RM_ReplyWithBool(RedisModuleCtx *ctx, int b) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyBool(c,b);\n    return REDISMODULE_OK;\n}\n\n/* Reply exactly what a Redis command returned us with RedisModule_Call().\n * This function is useful when we use RedisModule_Call() in order to\n * execute some command, as we want to reply to the client exactly the\n * same reply we obtained from the command.\n *\n * Return:\n * - REDISMODULE_OK on success.\n * - REDISMODULE_ERR if the given reply is in RESP3 format but the client expects RESP2.\n *   In case of an error, it is the module writer's responsibility to translate the reply\n *   to RESP2 (or handle it differently by returning an error). Notice that for\n *   the module writer's convenience, it is possible to pass `0` as a parameter to the fmt\n *   argument of `RM_Call` so that the RedisModuleCallReply will return in the same\n *   protocol (RESP2 or RESP3) as set in the current client's context. */\nint RM_ReplyWithCallReply(RedisModuleCtx *ctx, RedisModuleCallReply *reply) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    if (c->resp == 2 && callReplyIsResp3(reply)) {\n        /* The reply is in RESP3 format and the client is RESP2,\n         * so it isn't possible to send this reply to the client. */\n        return REDISMODULE_ERR;\n    }\n    size_t proto_len;\n    const char *proto = callReplyGetProto(reply, &proto_len);\n    addReplyProto(c, proto, proto_len);\n    /* Propagate the error list from that reply to the other client, to do some\n     * post error reply handling, like statistics.\n     * Note that if the original reply had an array with errors, and the module\n     * replied with just a portion of the original reply, and not the entire\n     * reply, the errors are currently not propagated and the errors stats\n     * will not get propagated. 
*/\n    list *errors = callReplyDeferredErrorList(reply);\n    if (errors)\n        deferredAfterErrorReply(c, errors);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a RESP3 Double type.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * Send a string reply obtained by converting the double 'd' into a bulk string.\n * This function is basically equivalent to converting a double into\n * a string in a C buffer, and then calling the function\n * RedisModule_ReplyWithStringBuffer() with the buffer and length.\n *\n * In RESP3 the string is tagged as a double, while in RESP2 it's just a plain string\n * that the user will have to parse.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithDouble(RedisModuleCtx *ctx, double d) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyDouble(c,d);\n    return REDISMODULE_OK;\n}\n\n/* Reply with a RESP3 BigNumber type.\n * Visit https://github.com/antirez/RESP3/blob/master/spec.md for more info about RESP3.\n *\n * In RESP3, this is a string of length `len` that is tagged as a BigNumber,\n * however, it's up to the caller to ensure that it's a valid BigNumber.\n * In RESP2, this is just a plain bulk string response.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithBigNumber(RedisModuleCtx *ctx, const char *bignum, size_t len) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyBigNum(c, bignum, len);\n    return REDISMODULE_OK;\n}\n\n/* Send a string reply obtained by converting the long double 'ld' into a bulk\n * string. 
This function is basically equivalent to converting a long double\n * into a string in a C buffer, and then calling the function\n * RedisModule_ReplyWithStringBuffer() with the buffer and length.\n * The double string uses human readable formatting (see\n * `addReplyHumanLongDouble` in networking.c).\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplyWithLongDouble(RedisModuleCtx *ctx, long double ld) {\n    client *c = moduleGetReplyClient(ctx);\n    if (c == NULL) return REDISMODULE_OK;\n    addReplyHumanLongDouble(c, ld);\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## Commands replication API\n * -------------------------------------------------------------------------- */\n\n/* Replicate the specified command and arguments to slaves and AOF, as an effect\n * of the execution of the calling command implementation.\n *\n * The replicated commands are always wrapped into the MULTI/EXEC that\n * contains all the commands replicated in a given module command\n * execution, in the order they were executed.\n *\n * Modules should try to use one interface or the other.\n *\n * This command follows exactly the same interface of RedisModule_Call(),\n * so a set of format specifiers must be passed, followed by arguments\n * matching the provided format specifiers.\n *\n * Please refer to RedisModule_Call() for more information.\n *\n * Using the special \"A\" and \"R\" modifiers, the caller can exclude either\n * the AOF or the replicas from the propagation of the specified command.\n * Otherwise, by default, the command will be propagated in both channels.\n *\n * #### Note about calling this function from a thread safe context:\n *\n * Normally when you call this function from the callback implementing a\n * module command, or any other callback provided by the Redis Module API,\n * Redis will accumulate all the calls to this function in the context of\n * the callback, and will 
propagate all the commands wrapped in a MULTI/EXEC\n * transaction. However, when calling this function from a thread safe context\n * that can live for an undefined amount of time and can be locked/unlocked\n * at will, it is important to note that this API is not thread-safe and\n * must be executed while holding the GIL.\n *\n * #### Return value\n *\n * The command returns REDISMODULE_ERR if the format specifiers are invalid\n * or the command name does not belong to a known command. */\nint RM_Replicate(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) {\n    struct redisCommand *cmd;\n    robj **argv = NULL;\n    int argc = 0, flags = 0, j;\n    va_list ap;\n\n    cmd = lookupCommandByCString((char*)cmdname);\n    if (!cmd) return REDISMODULE_ERR;\n\n    /* Create the client and dispatch the command. */\n    va_start(ap, fmt);\n    argv = moduleCreateArgvFromUserFormat(cmdname,fmt,&argc,&flags,ap);\n    va_end(ap);\n    if (argv == NULL) return REDISMODULE_ERR;\n\n    /* Select the propagation target. Usually it is AOF + replicas, however\n     * the caller can exclude one or the other using the \"A\" or \"R\"\n     * modifiers. */\n    int target = 0;\n    if (!(flags & REDISMODULE_ARGV_NO_AOF)) target |= PROPAGATE_AOF;\n    if (!(flags & REDISMODULE_ARGV_NO_REPLICAS)) target |= PROPAGATE_REPL;\n\n    alsoPropagate(ctx->client->db->id,argv,argc,target);\n\n    /* Release the argv. */\n    for (j = 0; j < argc; j++) decrRefCount(argv[j]);\n    zfree(argv);\n    server.dirty++;\n    return REDISMODULE_OK;\n}\n\n/* This function will replicate the command exactly as it was invoked\n * by the client. 
Note that the replicated commands are always wrapped\n * into the MULTI/EXEC that contains all the commands replicated in a\n * given module command execution, in the order they were executed.\n *\n * Basically this form of replication is useful when you want to propagate\n * the command to the slaves and AOF file exactly as it was called, since\n * the command can just be re-executed to deterministically re-create the\n * new state starting from the old one.\n *\n * It is important to note that this API is not thread-safe and\n * must be executed while holding the GIL.\n *\n * The function always returns REDISMODULE_OK. */\nint RM_ReplicateVerbatim(RedisModuleCtx *ctx) {\n    alsoPropagate(ctx->client->db->id,\n        ctx->client->argv,ctx->client->argc,\n        PROPAGATE_AOF|PROPAGATE_REPL);\n    server.dirty++;\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## DB and Key APIs -- Generic API\n * -------------------------------------------------------------------------- */\n\n/* Return the ID of the current client calling the currently active module\n * command. The returned ID has a few guarantees:\n *\n * 1. The ID is different for each different client, so if the same client\n *    executes a module command multiple times, it can be recognized as\n *    having the same ID, otherwise the ID will be different.\n * 2. The ID increases monotonically. Clients connecting to the server later\n *    are guaranteed to get IDs greater than any past ID previously seen.\n *\n * Valid IDs are from 1 to 2^64 - 1. 
If 0 is returned it means there is no way\n * to fetch the ID in the context in which the function was called.\n *\n * After obtaining the ID, it is possible to check if the command execution\n * is actually happening in the context of AOF loading, using this macro:\n *\n *      if (RedisModule_IsAOFClient(RedisModule_GetClientId(ctx))) {\n *          // Handle it differently.\n *      }\n */\nunsigned long long RM_GetClientId(RedisModuleCtx *ctx) {\n    if (ctx->client == NULL) return 0;\n    return ctx->client->id;\n}\n\n/* Return the ACL user name used by the client with the specified client ID.\n * Client ID can be obtained with RM_GetClientId() API. If the client does not\n * exist, NULL is returned and errno is set to ENOENT. If the client isn't\n * using an ACL user, NULL is returned and errno is set to ENOTSUP. */\nRedisModuleString *RM_GetClientUserNameById(RedisModuleCtx *ctx, uint64_t id) {\n    client *client = lookupClientByID(id);\n    if (client == NULL) {\n        errno = ENOENT;\n        return NULL;\n    }\n\n    if (client->user == NULL) {\n        errno = ENOTSUP;\n        return NULL;\n    }\n\n    sds name = sdsnew(client->user->name);\n    robj *str = createObject(OBJ_STRING, name);\n    autoMemoryAdd(ctx, REDISMODULE_AM_STRING, str);\n    return str;\n}\n\n/* This is a helper for RM_GetClientInfoById() and other functions: given\n * a client, it populates the client info structure with the appropriate\n * fields depending on the version provided. If the version is not valid\n * then REDISMODULE_ERR is returned. Otherwise the function returns\n * REDISMODULE_OK and the structure pointed by 'ci' gets populated. 
*/\n\nint modulePopulateClientInfoStructure(void *ci, client *client, int structver) {\n    if (structver != 1) return REDISMODULE_ERR;\n\n    RedisModuleClientInfoV1 *ci1 = ci;\n    memset(ci1,0,sizeof(*ci1));\n    ci1->version = structver;\n    if (client->flags & CLIENT_MULTI)\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_MULTI;\n    if (client->flags & CLIENT_PUBSUB)\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_PUBSUB;\n    if (client->flags & CLIENT_UNIX_SOCKET)\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET;\n    if (client->flags & CLIENT_TRACKING)\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_TRACKING;\n    if (client->flags & CLIENT_BLOCKED)\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_BLOCKED;\n    if (client->conn->type == connectionTypeTls())\n        ci1->flags |= REDISMODULE_CLIENTINFO_FLAG_SSL;\n\n    int port;\n    connAddrPeerName(client->conn,ci1->addr,sizeof(ci1->addr),&port);\n    ci1->port = port;\n    ci1->db = client->db->id;\n    ci1->id = client->id;\n    return REDISMODULE_OK;\n}\n\n/* This is a helper for moduleFireServerEvent() and other functions:\n * It populates the replication info structure with the appropriate\n * fields depending on the version provided. If the version is not valid\n * then REDISMODULE_ERR is returned. Otherwise the function returns\n * REDISMODULE_OK and the structure pointed by 'ri' gets populated. */\nint modulePopulateReplicationInfoStructure(void *ri, int structver) {\n    if (structver != 1) return REDISMODULE_ERR;\n\n    RedisModuleReplicationInfoV1 *ri1 = ri;\n    memset(ri1,0,sizeof(*ri1));\n    ri1->version = structver;\n    ri1->master = server.masterhost==NULL;\n    ri1->masterhost = server.masterhost? 
server.masterhost: \"\";\n    ri1->masterport = server.masterport;\n    ri1->replid1 = server.replid;\n    ri1->replid2 = server.replid2;\n    ri1->repl1_offset = server.master_repl_offset;\n    ri1->repl2_offset = server.second_replid_offset;\n    return REDISMODULE_OK;\n}\n\n/* Return information about the client with the specified ID (that was\n * previously obtained via the RedisModule_GetClientId() API). If the\n * client exists, REDISMODULE_OK is returned, otherwise REDISMODULE_ERR\n * is returned.\n *\n * When the client exists and the `ci` pointer is not NULL, but points to\n * a structure of type RedisModuleClientInfoV1, previously initialized with\n * the correct REDISMODULE_CLIENTINFO_INITIALIZER_V1, the structure is populated\n * with the following fields:\n *\n *      uint64_t flags;         // REDISMODULE_CLIENTINFO_FLAG_*\n *      uint64_t id;            // Client ID\n *      char addr[46];          // IPv4 or IPv6 address.\n *      uint16_t port;          // TCP port.\n *      uint16_t db;            // Selected DB.\n *\n * Note: the client ID is useless in the context of this call, since we\n *       already know it; however, the same structure could be used in other\n *       contexts where we don't know the client ID, yet the same structure\n *       is returned.\n *\n * With flags having the following meaning:\n *\n *     REDISMODULE_CLIENTINFO_FLAG_SSL          Client using SSL connection.\n *     REDISMODULE_CLIENTINFO_FLAG_PUBSUB       Client in Pub/Sub mode.\n *     REDISMODULE_CLIENTINFO_FLAG_BLOCKED      Client blocked in command.\n *     REDISMODULE_CLIENTINFO_FLAG_TRACKING     Client with keys tracking on.\n *     REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET   Client using unix domain socket.\n *     REDISMODULE_CLIENTINFO_FLAG_MULTI        Client in MULTI state.\n *\n * However passing NULL is a way to just check if the client exists in case\n * we are not interested in any additional information.\n *\n * This is the correct usage when we want the 
client info structure\n * returned:\n *\n *      RedisModuleClientInfo ci = REDISMODULE_CLIENTINFO_INITIALIZER;\n *      int retval = RedisModule_GetClientInfoById(&ci,client_id);\n *      if (retval == REDISMODULE_OK) {\n *          printf(\"Address: %s\\n\", ci.addr);\n *      }\n */\nint RM_GetClientInfoById(void *ci, uint64_t id) {\n    client *client = lookupClientByID(id);\n    if (client == NULL) return REDISMODULE_ERR;\n    if (ci == NULL) return REDISMODULE_OK;\n\n    /* Fill the info structure if passed. */\n    uint64_t structver = ((uint64_t*)ci)[0];\n    return modulePopulateClientInfoStructure(ci,client,structver);\n}\n\n/* Returns the name of the client connection with the given ID.\n *\n * If the client ID does not exist or if the client has no name associated with\n * it, NULL is returned. */\nRedisModuleString *RM_GetClientNameById(RedisModuleCtx *ctx, uint64_t id) {\n    client *client = lookupClientByID(id);\n    if (client == NULL || client->name == NULL) return NULL;\n    robj *name = client->name;\n    incrRefCount(name);\n    autoMemoryAdd(ctx, REDISMODULE_AM_STRING, name);\n    return name;\n}\n\n/* Sets the name of the client with the given ID. This is equivalent to the client calling\n * `CLIENT SETNAME name`.\n *\n * Returns REDISMODULE_OK on success. On failure, REDISMODULE_ERR is returned\n * and errno is set as follows:\n *\n * - ENOENT if the client does not exist\n * - EINVAL if the name contains invalid characters */\nint RM_SetClientNameById(uint64_t id, RedisModuleString *name) {\n    client *client = lookupClientByID(id);\n    if (client == NULL) {\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n    if (clientSetName(client, name, NULL) == C_ERR) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Publish a message to subscribers (see PUBLISH command). 
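\n *\n * For example, a module command could notify subscribers about an event\n * (a hypothetical sketch; the channel name and payload are illustrative):\n *\n *     RedisModuleString *chan = RedisModule_CreateString(ctx, \"events\", 6);\n *     RedisModuleString *msg = RedisModule_CreateString(ctx, \"updated\", 7);\n *     RedisModule_PublishMessage(ctx, chan, msg);\n *     RedisModule_FreeString(ctx, chan);\n *     RedisModule_FreeString(ctx, msg);\n 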
*/\nint RM_PublishMessage(RedisModuleCtx *ctx, RedisModuleString *channel, RedisModuleString *message) {\n    UNUSED(ctx);\n    return pubsubPublishMessageAndPropagateToCluster(channel, message, 0);\n}\n\n/* Publish a message to shard-subscribers (see SPUBLISH command). */\nint RM_PublishMessageShard(RedisModuleCtx *ctx, RedisModuleString *channel, RedisModuleString *message) {\n    UNUSED(ctx);\n    return pubsubPublishMessageAndPropagateToCluster(channel, message, 1);\n}\n\n/* Return the currently selected DB. */\nint RM_GetSelectedDb(RedisModuleCtx *ctx) {\n    return ctx->client->db->id;\n}\n\n\n/* Return the current context's flags. The flags provide information on the\n * current request context (whether the client is a Lua script or in a MULTI),\n * and about the Redis instance in general, i.e. replication and persistence.\n *\n * It is possible to call this function even with a NULL context, however\n * in this case the following flags will not be reported:\n *\n *  * LUA, MULTI, REPLICATED, DIRTY (see below for more info).\n *\n * Available flags and their meaning:\n *\n *  * REDISMODULE_CTX_FLAGS_LUA: The command is running in a Lua script\n *\n *  * REDISMODULE_CTX_FLAGS_MULTI: The command is running inside a transaction\n *\n *  * REDISMODULE_CTX_FLAGS_REPLICATED: The command was sent over the replication\n *    link by the MASTER\n *\n *  * REDISMODULE_CTX_FLAGS_MASTER: The Redis instance is a master\n *\n *  * REDISMODULE_CTX_FLAGS_SLAVE: The Redis instance is a slave\n *\n *  * REDISMODULE_CTX_FLAGS_READONLY: The Redis instance is read-only\n *\n *  * REDISMODULE_CTX_FLAGS_CLUSTER: The Redis instance is in cluster mode\n *\n *  * REDISMODULE_CTX_FLAGS_AOF: The Redis instance has AOF enabled\n *\n *  * REDISMODULE_CTX_FLAGS_RDB: The instance has RDB enabled\n *\n *  * REDISMODULE_CTX_FLAGS_MAXMEMORY:  The instance has Maxmemory set\n *\n *  * REDISMODULE_CTX_FLAGS_EVICT:  Maxmemory is set and has an eviction\n *    policy that may delete keys\n *\n *  
* REDISMODULE_CTX_FLAGS_OOM: Redis is out of memory according to the\n *    maxmemory setting.\n *\n *  * REDISMODULE_CTX_FLAGS_OOM_WARNING: Less than 25% of memory remains before\n *                                       reaching the maxmemory level.\n *\n *  * REDISMODULE_CTX_FLAGS_LOADING: Server is loading RDB/AOF\n *\n *  * REDISMODULE_CTX_FLAGS_REPLICA_IS_STALE: No active link with the master.\n *\n *  * REDISMODULE_CTX_FLAGS_REPLICA_IS_CONNECTING: The replica is trying to\n *                                                 connect with the master.\n *\n *  * REDISMODULE_CTX_FLAGS_REPLICA_IS_TRANSFERRING: Master -> Replica RDB\n *                                                   transfer is in progress.\n *\n *  * REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE: The replica has an active link\n *                                             with its master. This is the\n *                                             contrary of STALE state.\n *\n *  * REDISMODULE_CTX_FLAGS_ACTIVE_CHILD: There is currently some background\n *                                        process active (RDB, AUX or module).\n *\n *  * REDISMODULE_CTX_FLAGS_MULTI_DIRTY: The next EXEC will fail due to dirty\n *                                       CAS (touched keys).\n *\n *  * REDISMODULE_CTX_FLAGS_IS_CHILD: Redis is currently running inside a\n *                                    background child process.\n *\n *  * REDISMODULE_CTX_FLAGS_RESP3: Indicates that the client attached to this\n *                                 context is using RESP3.\n *\n *  * REDISMODULE_CTX_FLAGS_SERVER_STARTUP: The Redis instance is starting\n *\n *  * REDISMODULE_CTX_FLAGS_DEBUG_ENABLED: Debug commands are enabled for this\n *                                         context.\n *\n *  * REDISMODULE_CTX_FLAGS_TRIM_IN_PROGRESS: Trim is in progress due to slot\n *                                            migration.\n */\nint RM_GetContextFlags(RedisModuleCtx *ctx) {\n    int flags = 0;\n\n    /* Client specific 
flags */\n    if (ctx) {\n        if (ctx->client) {\n            if (ctx->client->flags & CLIENT_DENY_BLOCKING)\n                flags |= REDISMODULE_CTX_FLAGS_DENY_BLOCKING;\n            /* Module command received from MASTER, is replicated. */\n            if (ctx->client->flags & CLIENT_MASTER)\n                flags |= REDISMODULE_CTX_FLAGS_REPLICATED;\n            if (ctx->client->resp == 3) {\n                flags |= REDISMODULE_CTX_FLAGS_RESP3;\n            }\n        }\n\n        /* For DIRTY flags, we need the blocked client if used */\n        client *c = ctx->blocked_client ? ctx->blocked_client->client : ctx->client;\n        if (c && (c->flags & (CLIENT_DIRTY_CAS|CLIENT_DIRTY_EXEC))) {\n            flags |= REDISMODULE_CTX_FLAGS_MULTI_DIRTY;\n        }\n        if (c && allowProtectedAction(server.enable_debug_cmd, c)) {\n            flags |= REDISMODULE_CTX_FLAGS_DEBUG_ENABLED;\n        }\n    }\n\n    if (scriptIsRunning())\n        flags |= REDISMODULE_CTX_FLAGS_LUA;\n\n    if (server.in_exec)\n        flags |= REDISMODULE_CTX_FLAGS_MULTI;\n\n    if (server.cluster_enabled)\n        flags |= REDISMODULE_CTX_FLAGS_CLUSTER;\n\n    if (server.async_loading)\n        flags |= REDISMODULE_CTX_FLAGS_ASYNC_LOADING;\n    else if (server.loading)\n        flags |= REDISMODULE_CTX_FLAGS_LOADING;\n\n    /* Maxmemory and eviction policy */\n    if (server.maxmemory > 0 && (!server.masterhost || !server.repl_slave_ignore_maxmemory)) {\n        flags |= REDISMODULE_CTX_FLAGS_MAXMEMORY;\n\n        if (server.maxmemory_policy != MAXMEMORY_NO_EVICTION)\n            flags |= REDISMODULE_CTX_FLAGS_EVICT;\n    }\n\n    /* Persistence flags */\n    if (server.aof_state != AOF_OFF)\n        flags |= REDISMODULE_CTX_FLAGS_AOF;\n    if (server.saveparamslen > 0)\n        flags |= REDISMODULE_CTX_FLAGS_RDB;\n\n    /* Replication flags */\n    if (server.masterhost == NULL) {\n        flags |= REDISMODULE_CTX_FLAGS_MASTER;\n    } else {\n        flags |= 
REDISMODULE_CTX_FLAGS_SLAVE;\n        if (server.repl_slave_ro)\n            flags |= REDISMODULE_CTX_FLAGS_READONLY;\n\n        /* Replica state flags. */\n        if (server.repl_state == REPL_STATE_CONNECT ||\n            server.repl_state == REPL_STATE_CONNECTING)\n        {\n            flags |= REDISMODULE_CTX_FLAGS_REPLICA_IS_CONNECTING;\n        } else if (server.repl_state == REPL_STATE_TRANSFER) {\n            flags |= REDISMODULE_CTX_FLAGS_REPLICA_IS_TRANSFERRING;\n        } else if (server.repl_state == REPL_STATE_CONNECTED) {\n            flags |= REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE;\n        }\n\n        if (server.repl_state != REPL_STATE_CONNECTED)\n            flags |= REDISMODULE_CTX_FLAGS_REPLICA_IS_STALE;\n    }\n\n    /* OOM flag. */\n    float level;\n    int retval = getMaxmemoryState(NULL,NULL,NULL,&level);\n    if (retval == C_ERR) flags |= REDISMODULE_CTX_FLAGS_OOM;\n    if (level > 0.75) flags |= REDISMODULE_CTX_FLAGS_OOM_WARNING;\n\n    /* Presence of children processes. */\n    if (hasActiveChildProcess()) flags |= REDISMODULE_CTX_FLAGS_ACTIVE_CHILD;\n    if (server.in_fork_child) flags |= REDISMODULE_CTX_FLAGS_IS_CHILD;\n\n    /* Non-empty server.loadmodule_queue means that Redis is starting. 
*/\n    if (listLength(server.loadmodule_queue) > 0)\n        flags |= REDISMODULE_CTX_FLAGS_SERVER_STARTUP;\n\n    /* If debug commands are completely enabled */\n    if (server.enable_debug_cmd == PROTECTED_ACTION_ALLOWED_YES) {\n        flags |= REDISMODULE_CTX_FLAGS_DEBUG_ENABLED;\n    }\n\n    if (asmIsTrimInProgress())\n        flags |= REDISMODULE_CTX_FLAGS_TRIM_IN_PROGRESS;\n\n    return flags;\n}\n\n/* Returns true if a client sent the CLIENT PAUSE command to the server or\n * if Redis Cluster does a manual failover, pausing the clients.\n * This is needed when we have a master with replicas, and want to wait,\n * without adding further data to the replication channel, until the replicas'\n * replication offset matches the one of the master. When this happens, it is\n * safe to failover the master without data loss.\n *\n * However modules may generate traffic by calling RedisModule_Call() with\n * the \"!\" flag, or by calling RedisModule_Replicate(), in a context outside\n * commands execution, for instance in timeout callbacks, thread safe\n * contexts, and so forth. When modules generate too much traffic, it\n * will be hard for the master and replica offsets to match, because there\n * is more data to send in the replication channel.\n *\n * So modules may want to try to avoid very heavy background work that has\n * the effect of adding data to the replication channel, when this function\n * returns true. This is mostly useful for modules that have background\n * garbage collection tasks, or that do writes and replicate such writes\n * periodically in timer callbacks or other periodic callbacks.\n */\nint RM_AvoidReplicaTraffic(void) {\n    return !!(isPausedActionsWithUpdate(PAUSE_ACTION_REPLICA));\n}\n\n/* Change the currently selected DB. 
Returns an error if the id\n * is out of range.\n *\n * Note that the client will retain the currently selected DB even after\n * the Redis command implemented by the module calling this function\n * returns.\n *\n * If the module command wishes to change something in a different DB and\n * then return to the original one, it should call RedisModule_GetSelectedDb()\n * first, in order to restore the old DB number before returning. */\nint RM_SelectDb(RedisModuleCtx *ctx, int newid) {\n    int retval = selectDb(ctx->client,newid);\n    return (retval == C_OK) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Check if a key exists, without affecting its last access time.\n *\n * This is equivalent to calling RM_OpenKey with the mode REDISMODULE_READ |\n * REDISMODULE_OPEN_KEY_NOTOUCH, then checking if NULL was returned and, if not,\n * calling RM_CloseKey on the opened key.\n */\nint RM_KeyExists(RedisModuleCtx *ctx, robj *keyname) {\n    kvobj *kv = lookupKeyReadWithFlags(ctx->client->db, keyname, LOOKUP_NOTOUCH);\n    return (kv != NULL);\n}\n\n/* Initialize a RedisModuleKey struct */\nstatic void moduleInitKey(RedisModuleKey *kp, RedisModuleCtx *ctx, robj *keyname, kvobj *kv, int mode){\n    kp->ctx = ctx;\n    kp->db = ctx->client->db;\n    kp->key = keyname;\n    incrRefCount(keyname);\n    kp->kv = kv;\n    kp->iter = NULL;\n    kp->mode = mode;\n    if (kp->kv) moduleInitKeyTypeSpecific(kp);\n}\n\n/* Initialize the type-specific part of the key. Only when the key has a value. 
*/\nstatic void moduleInitKeyTypeSpecific(RedisModuleKey *key) {\n    switch (key->kv->type) {\n    case OBJ_ZSET: zsetKeyReset(key); break;\n    case OBJ_STREAM: key->u.stream.signalready = 0; break;\n    }\n}\n\n/* Return a handle representing a Redis key, so that it is possible\n * to call other APIs with the key handle as argument to perform\n * operations on the key.\n *\n * The return value is the handle representing the key, that must be\n * closed with RM_CloseKey().\n *\n * If the key does not exist and REDISMODULE_WRITE mode is requested, the handle\n * is still returned, since it is possible to perform operations on\n * a not yet existing key (that will be created, for example, after\n * a list push operation). If the mode is just REDISMODULE_READ instead, and the\n * key does not exist, NULL is returned. However it is still safe to\n * call RedisModule_CloseKey() and RedisModule_KeyType() on a NULL\n * value.\n *\n * Extra flags that can be passed to the API under the mode argument:\n * * REDISMODULE_OPEN_KEY_NOTOUCH - Avoid touching the LRU/LFU of the key when opened.\n * * REDISMODULE_OPEN_KEY_NONOTIFY - Don't trigger keyspace event on key misses.\n * * REDISMODULE_OPEN_KEY_NOSTATS - Don't update keyspace hits/misses counters.\n * * REDISMODULE_OPEN_KEY_NOEXPIRE - Avoid deleting lazy expired keys.\n * * REDISMODULE_OPEN_KEY_NOEFFECTS - Avoid any effects from fetching the key.\n * * REDISMODULE_OPEN_KEY_ACCESS_EXPIRED - Access expired keys that have not yet been deleted */\nRedisModuleKey *RM_OpenKey(RedisModuleCtx *ctx, robj *keyname, int mode) {\n    RedisModuleKey *kp;\n    kvobj *kv;\n    int flags = 0;\n    flags |= (mode & REDISMODULE_OPEN_KEY_NOTOUCH? LOOKUP_NOTOUCH: 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_NONOTIFY? LOOKUP_NONOTIFY: 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_NOSTATS? LOOKUP_NOSTATS: 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_NOEXPIRE? LOOKUP_NOEXPIRE: 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_NOEFFECTS? 
LOOKUP_NOEFFECTS: 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_ACCESS_EXPIRED ? (LOOKUP_ACCESS_EXPIRED) : 0);\n    flags |= (mode & REDISMODULE_OPEN_KEY_ACCESS_TRIMMED ? (LOOKUP_ACCESS_TRIMMED) : 0);\n\n    if (mode & REDISMODULE_WRITE) {\n        kv = lookupKeyWriteWithFlags(ctx->client->db,keyname, flags);\n    } else {\n        kv = lookupKeyReadWithFlags(ctx->client->db,keyname, flags);\n        if (kv == NULL) {\n            return NULL;\n        }\n    }\n\n    /* Setup the key handle. */\n    kp = zmalloc(sizeof(*kp));\n    moduleInitKey(kp, ctx, keyname, kv, mode);\n    autoMemoryAdd(ctx,REDISMODULE_AM_KEY,kp);\n    return kp;\n}\n\n/**\n * Returns the full OpenKey modes mask. Using the return value,\n * the module can check if a certain set of OpenKey modes are supported\n * by the Redis server version in use.\n * Example:\n *\n *        int supportedMode = RM_GetOpenKeyModesAll();\n *        if (supportedMode & REDISMODULE_OPEN_KEY_NOTOUCH) {\n *              // REDISMODULE_OPEN_KEY_NOTOUCH is supported\n *        } else {\n *              // REDISMODULE_OPEN_KEY_NOTOUCH is not supported\n *        }\n */\nint RM_GetOpenKeyModesAll(void) {\n    return _REDISMODULE_OPEN_KEY_ALL;\n}\n\n/* Destroy a RedisModuleKey struct (freeing is the responsibility of the caller). */\nstatic void moduleCloseKey(RedisModuleKey *key) {\n    int signal = SHOULD_SIGNAL_MODIFIED_KEYS(key->ctx);\n    if ((key->mode & REDISMODULE_WRITE) && signal)\n        keyModified(key->ctx->client,key->db,key->key,key->kv,1);\n    if (key->kv) {\n        if (key->iter) moduleFreeKeyIterator(key);\n        switch (key->kv->type) {\n        case OBJ_ZSET:\n            RM_ZsetRangeStop(key);\n            break;\n        case OBJ_STREAM:\n            if (key->u.stream.signalready)\n                /* One or more RM_StreamAdd() have been done. 
*/\n                signalKeyAsReady(key->db, key->key, OBJ_STREAM);\n            break;\n        }\n    }\n    serverAssert(key->iter == NULL);\n    decrRefCount(key->key);\n}\n\n/* Close a key handle. */\nvoid RM_CloseKey(RedisModuleKey *key) {\n    if (key == NULL) return;\n    moduleCloseKey(key);\n    autoMemoryFreed(key->ctx,REDISMODULE_AM_KEY,key);\n    zfree(key);\n}\n\n/* Return the type of the key. If the key pointer is NULL then\n * REDISMODULE_KEYTYPE_EMPTY is returned. */\nint RM_KeyType(RedisModuleKey *key) {\n    if (key == NULL || key->kv == NULL) return REDISMODULE_KEYTYPE_EMPTY;\n    /* We map between defines so that we are free to change the internal\n     * defines as desired. */\n    switch(key->kv->type) {\n    case OBJ_STRING: return REDISMODULE_KEYTYPE_STRING;\n    case OBJ_LIST: return REDISMODULE_KEYTYPE_LIST;\n    case OBJ_SET: return REDISMODULE_KEYTYPE_SET;\n    case OBJ_ZSET: return REDISMODULE_KEYTYPE_ZSET;\n    case OBJ_HASH: return REDISMODULE_KEYTYPE_HASH;\n    case OBJ_MODULE: return REDISMODULE_KEYTYPE_MODULE;\n    case OBJ_STREAM: return REDISMODULE_KEYTYPE_STREAM;\n    case OBJ_GCRA: return REDISMODULE_KEYTYPE_GCRA;\n    default: return REDISMODULE_KEYTYPE_EMPTY;\n    }\n}\n\n/* Return the length of the value associated with the key.\n * For strings this is the length of the string. For all the other types\n * it is the number of elements (just counting keys for hashes).\n *\n * If the key pointer is NULL or the key is empty, zero is returned. */\nsize_t RM_ValueLength(RedisModuleKey *key) {\n    if (key == NULL || key->kv == NULL) return 0;\n    return getObjectLength(key->kv);\n}\n\n/* If the key is open for writing, remove it, and set up the key to\n * accept new writes as an empty key (that will be created on demand).\n * On success REDISMODULE_OK is returned. If the key is not open for\n * writing REDISMODULE_ERR is returned. 
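\n *\n * For example (a hypothetical sketch; `keyname` is assumed to be a\n * RedisModuleString holding the key name):\n *\n *     RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_WRITE);\n *     RedisModule_DeleteKey(key);   // Key removed; the handle still accepts writes.\n *     RedisModule_CloseKey(key);\n 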
*/\nint RM_DeleteKey(RedisModuleKey *key) {\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv) {\n        dbDelete(key->db,key->key);\n        key->kv = NULL;\n    }\n    return REDISMODULE_OK;\n}\n\n/* If the key is open for writing, unlink it (that is delete it in a\n * non-blocking way, not reclaiming memory immediately) and set up the key to\n * accept new writes as an empty key (that will be created on demand).\n * On success REDISMODULE_OK is returned. If the key is not open for\n * writing REDISMODULE_ERR is returned. */\nint RM_UnlinkKey(RedisModuleKey *key) {\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv) {\n        dbAsyncDelete(key->db,key->key);\n        key->kv = NULL;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Return the key expire value, as milliseconds of remaining TTL.\n * If no TTL is associated with the key or if the key is empty,\n * REDISMODULE_NO_EXPIRE is returned. */\nmstime_t RM_GetExpire(RedisModuleKey *key) {\n    if (key->kv == NULL) return REDISMODULE_NO_EXPIRE;\n    mstime_t expire = kvobjGetExpire(key->kv);\n    if (expire == -1)\n        return REDISMODULE_NO_EXPIRE;\n    expire -= commandTimeSnapshot();\n    return expire >= 0 ? expire : 0;\n}\n\n/* Set a new expire for the key. If the special expire\n * REDISMODULE_NO_EXPIRE is set, the expire is cancelled if there was\n * one (the same as the PERSIST command).\n *\n * Note that the expire must be provided as a positive integer representing\n * the number of milliseconds of TTL the key should have.\n *\n * The function returns REDISMODULE_OK on success or REDISMODULE_ERR if\n * the key was not open for writing or is an empty key. 
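 *\n * Example (illustrative fragment; assumes `key` was opened with\n * REDISMODULE_WRITE and holds a value):\n *\n *     RedisModule_SetExpire(key, 10*1000);               // TTL of 10 seconds\n *     RedisModule_SetExpire(key, REDISMODULE_NO_EXPIRE); // remove the TTL again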
*/\nint RM_SetExpire(RedisModuleKey *key, mstime_t expire) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->kv == NULL || (expire < 0 && expire != REDISMODULE_NO_EXPIRE))\n        return REDISMODULE_ERR;\n    if (expire != REDISMODULE_NO_EXPIRE) {\n        expire += commandTimeSnapshot();\n        /* setExpire() might realloc kvobj */\n        key->kv = setExpire(key->ctx->client,key->db,key->key,expire);\n    } else {\n        removeExpire(key->db,key->key);\n    }\n    return REDISMODULE_OK;\n}\n\n/* Return the key expire value, as absolute Unix timestamp.\n * If no TTL is associated with the key or if the key is empty,\n * REDISMODULE_NO_EXPIRE is returned. */\nmstime_t RM_GetAbsExpire(RedisModuleKey *key) {\n    if (key->kv == NULL) return REDISMODULE_NO_EXPIRE;\n    mstime_t expire = kvobjGetExpire(key->kv);\n    if (expire == -1)\n        return REDISMODULE_NO_EXPIRE;\n    return expire;\n}\n\n/* Set a new expire for the key. If the special expire\n * REDISMODULE_NO_EXPIRE is set, the expire is cancelled if there was\n * one (the same as the PERSIST command).\n *\n * Note that the expire must be provided as a positive integer representing\n * the absolute Unix timestamp the key should have.\n *\n * The function returns REDISMODULE_OK on success or REDISMODULE_ERR if\n * the key was not open for writing or is an empty key. */\nint RM_SetAbsExpire(RedisModuleKey *key, mstime_t expire) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->kv == NULL || (expire < 0 && expire != REDISMODULE_NO_EXPIRE))\n        return REDISMODULE_ERR;\n    if (expire != REDISMODULE_NO_EXPIRE) {\n        key->kv = setExpire(key->ctx->client,key->db,key->key,expire);\n    } else {\n        removeExpire(key->db,key->key);\n    }\n    return REDISMODULE_OK;\n}\n\n/* Register a new key metadata class exported by the module.\n *\n * Key metadata allows modules to attach up to 8 bytes of metadata to any Redis key,\n * regardless of the key's type. 
This metadata persists across key operations like\n * COPY, RENAME, MOVE, and can be saved/loaded from RDB files.\n *\n * The parameters are the following:\n *\n * * **metaname**: A 9-character metadata class name that MUST be unique in the Redis\n *   Modules ecosystem. Use the charset A-Z a-z 0-9, plus the two \"-_\" characters.\n *   A good idea is to use, for example `<metaname>-<vendor>`. For example\n *   \"idx-RediSearch\" may mean \"Index metadata by RediSearch module\". Using both\n *   lower case and upper case letters helps to prevent collisions.\n *\n * * **metaver**: Encoding version, which is the version of the serialization\n *   that a module used in order to persist metadata. As long as the \"metaname\"\n *   matches, the RDB loading will be dispatched to the metadata class callbacks\n *   whatever 'metaver' is used, however the module can detect whether\n *   the encoding it must load was produced by an older version of the module.\n *   For example the module \"idx-RediSearch\" initially used metaver=0. Later\n *   after an upgrade, it started to serialize metadata in a different format\n *   and to register the class with metaver=1. 
However this module may\n *   still load old data produced by an older version if the rdb_load\n *   callback is able to check the metaver value and act accordingly.\n *   The metaver must be a value between 0 and 1023.\n *\n * * **confPtr** is a pointer to a RedisModuleKeyMetaClassConfig structure\n *   that should be populated with the configuration and callbacks, like in\n *   the following example:\n *\n *         RedisModuleKeyMetaClassConfig config = {\n *             .version = REDISMODULE_KEY_META_VERSION,\n *             .flags = 1 << REDISMODULE_META_ALLOW_IGNORE,\n *             .reset_value = 0,\n *             .copy = myMeta_CopyCallback,\n *             .rename = myMeta_RenameCallback,\n *             .move = myMeta_MoveCallback,\n *             .unlink = myMeta_UnlinkCallback,\n *             .free = myMeta_FreeCallback,\n *             .rdb_load = myMeta_RDBLoadCallback,\n *             .rdb_save = myMeta_RDBSaveCallback,\n *             .aof_rewrite = myMeta_AOFRewriteCallback,\n *             .defrag = myMeta_DefragCallback,\n *             .mem_usage = myMeta_MemUsageCallback,\n *             .free_effort = myMeta_FreeEffortCallback\n *         };\n *\n *   Redis does NOT take ownership of the config structure itself. The `confPtr` \n *   parameter only needs to remain valid during the RM_CreateKeyMetaClass() call \n *   and can be freed immediately after.\n *\n * * **version**: Module must set it to REDISMODULE_KEY_META_VERSION. This field is\n *   bumped when new fields are added; Redis keeps backward compatibility in\n *   RM_CreateKeyMetaClass().\n *\n * * **flags**: Currently supports REDISMODULE_META_ALLOW_IGNORE (value 0).\n *   When set, metadata will be silently ignored during RDB load if the module\n *   is not available or if rdb_load callback is NULL. 
Otherwise, RDB loading\n *   will fail if metadata is encountered but cannot be loaded.\n *\n * * **reset_value**: The value to which metadata should be reset when it is being\n *   \"removed\" from a key. Typically 0, but can be any 8-byte value. This is\n *   especially relevant when metadata is a pointer/handler to external resources.\n *\n *   IMPORTANT GUARANTEE: Redis only invokes callbacks when meta != reset_value.\n *\n * * **copy**: A callback function pointer for COPY command (optional).\n *   - Return 1 to attach `meta` to the new key, or 0 to skip attaching metadata.\n *   - If NULL, metadata is ignored during copy.\n *   - The `meta` value may be modified in-place to produce a different value\n *     for the new key.\n *\n * * **rename**: A callback function pointer for RENAME command (optional).\n *   - If NULL, then metadata is kept during rename.\n *   - The `meta` value may be modified in-place to produce a different value\n *     for the new key.\n *\n * * **move**: A callback function pointer for MOVE command (optional).\n *   - Return 1 to keep metadata, 0 to drop.\n *   - If NULL, then metadata is kept during move.\n *   - The `meta` value may be modified in-place to produce a different value\n *     for the new key.\n *\n * * **unlink**: A callback function pointer for unlink operations (optional).\n *   - If not provided, then metadata is ignored during unlink.\n *   - Indication that key may soon be freed by background thread.\n *   - Pointer to meta is provided for modification. 
If the metadata holds a pointer\n *     or handle to resources and you free them here, you should set `*meta=reset_value`\n *     to prevent the free callback from being invoked (Redis skips callbacks when\n *     meta == reset_value, see reset_value documentation above).\n *\n * * **free**: A callback function pointer for cleanup (optional).\n *   Invoked when a key with this metadata is deleted/overwritten/expired,\n *   or when Redis needs to release per-key metadata during lifecycle operations.\n *   The module should free any external allocation referenced by `meta`\n *   if it uses the 8 bytes as a handle/pointer.\n *   This callback may run in a background thread and is not protected by GIL.\n *   It also might be called during RDB loading if the load fails after some\n *   metadata has been successfully loaded. In this case, keyname will be NULL\n *   since the key hasn't been created yet.\n *\n * * **rdb_load**: A callback function pointer for RDB loading (optional).\n *   - Called during RDB loading when metadata for this class is encountered.\n *   - Behavior when NULL:\n *     > If rdb_load is NULL AND REDISMODULE_META_ALLOW_IGNORE flag is set,\n *       the metadata will be silently ignored during RDB load.\n *     > If rdb_load is NULL AND the flag is NOT set, RDB loading will fail\n *       if metadata for this class is encountered.\n *   - Behavior when class is not registered:\n *     > If the class was saved with REDISMODULE_META_ALLOW_IGNORE flag but\n *       is not registered at load time, the metadata will be silently ignored.\n *     > Otherwise, RDB loading will fail.\n *   - Callback responsibilities:\n *     > Read custom serialized data from `rdb` using RedisModule_Load*() APIs\n *     > Deserialize and reconstruct the 8-byte metadata value\n *     > Write the final 8-byte value into `*meta`\n *     > Return appropriate status code (see below)\n *     > Database ID can be derived from `rdb` if needed. 
The associated key\n *       will be loaded immediately after this callback returns.\n *   - Parameters:\n *     > rdb: RDB I/O context (use RedisModule_Load*() functions to read data)\n *     > meta: Pointer to 8-byte metadata slot (write your deserialized value here)\n *     > encver: Encoding version (the metadata class version at save time)\n *   - Return values:\n *     > 1: Attach value `*meta` to the key (success)\n *     > 0: Ignore/skip metadata (don't attach, but continue loading - not an error)\n *     > -1: Error - abort RDB load (e.g., invalid data, version incompatibility)\n *            Module MUST clean up any allocated metadata before returning -1.\n *\n * * **rdb_save**: A callback function pointer for RDB saving (optional).\n *   - If set to NULL, Redis will not save metadata to RDB.\n *   - Callback should write data using RDB assisting functions: RedisModule_Save*().\n *\n * * **aof_rewrite**: A callback function pointer for AOF rewrite (optional).\n *   Called during AOF rewrite to emit commands that reconstruct the metadata.\n *   IMPORTANT: For AOF/RDB persistence to work correctly, metadata classes must be\n *   registered in RedisModule_OnLoad() so they are available when loading persisted\n *   data on server startup.\n *\n * * **defrag**: A callback function pointer for active defragmentation (optional).\n *   If the metadata contains pointers, this callback should defragment them.\n *\n * * **mem_usage**: A callback function pointer for MEMORY USAGE command (optional).\n *   Should return the memory used by the metadata in bytes.\n *\n * * **free_effort**: A callback function pointer for lazy free (optional).\n *   Should return the complexity of freeing the metadata to determine if\n *   lazy free should be used.\n *\n * Note: the metadata class name \"AAAAAAAAA\" is reserved and produces an error.\n *\n * If RM_CreateKeyMetaClass() is called outside of the RedisModule_OnLoad() function\n * and outside of server startup, if there is already a 
metadata class registered\n * with the same name, or if the metadata class name or metaver is invalid,\n * a negative value is returned.\n * Otherwise the new metadata class is registered into Redis, and a reference of\n * type RedisModuleKeyMetaClassId is returned: the caller of the function should store\n * this reference into a global variable to make future use of it in the\n * modules metadata API, since a single module may register multiple metadata classes.\n * Example code fragment:\n *\n *      static RedisModuleKeyMetaClassId IndexMetaClass;\n *\n *      int RedisModule_OnLoad(RedisModuleCtx *ctx) {\n *          // some code here ...\n *          IndexMetaClass = RM_CreateKeyMetaClass(...);\n *      }\n */\nRedisModuleKeyMetaClassId RM_CreateKeyMetaClass(RedisModuleCtx *ctx,\n                                                const char *metaname,\n                                                int metaver,\n                                                void *confPtr)\n{\n    RedisModuleKeyMetaClassId id;\n    \n    /* Allow registration during OnLoad, server startup, or when debug flag is set */\n    int ctx_flags = RM_GetContextFlags(ctx);\n    if (!ctx->module->onload &&\n        !(ctx_flags & REDISMODULE_CTX_FLAGS_SERVER_STARTUP) &&\n        !server.allow_keymeta_registration)\n        return -1;\n\n    if (!confPtr)\n        return -2;\n    \n    /* This structure is supposed to evolve over time and defines the superset of all\n     * module type methods supported across different Redis module API versions */\n    struct KeyMetaConfAllVersions {\n        uint64_t version;\n        uint64_t flags;\n        uint64_t reset_value;\n        KeyMetaCopyFunc copy;\n        KeyMetaRenameFunc rename;\n        KeyMetaMoveFunc move;\n        KeyMetaUnlinkFunc unlink;\n        KeyMetaFreeFunc free;\n        /********** TBD: **********/\n        KeyMetaLoadFunc rdb_load;\n        KeyMetaSaveFunc rdb_save;\n        KeyMetaAOFRewriteFunc aof_rewrite;\n        
KeyMetaDefragFunc defrag;\n        KeyMetaMemUsageFunc mem_usage;\n        KeyMetaFreeEffortFunc free_effort;\n    } *legacy = (struct KeyMetaConfAllVersions *)confPtr;\n    \n    if (legacy->version == 0 || legacy->version > REDISMODULE_KEY_META_VERSION)\n        return -3;\n\n    KeyMetaClassConf conf = {\n            .flags = legacy->flags,\n            .reset_value = legacy->reset_value,\n\n            .copy = legacy->copy,\n            .rename = legacy->rename,\n            .move = legacy->move,\n            .unlink = legacy->unlink,\n            .free = legacy->free,\n\n            .rdb_load = legacy->rdb_load,\n            .rdb_save = legacy->rdb_save,\n            .aof_rewrite = legacy->aof_rewrite,\n            .defrag = legacy->defrag,\n            .mem_usage = legacy->mem_usage,\n            .free_effort = legacy->free_effort\n    };\n\n    id = keyMetaClassCreate(ctx->module, metaname, metaver, &conf);\n    if (id == 0) return -4;\n    \n    return id;\n}\n\n/* Release a class by its ID. Returns REDISMODULE_OK on success,\n * REDISMODULE_ERR on failure. */\nint RM_ReleaseKeyMetaClass(RedisModuleKeyMetaClassId id) {\n    return (keyMetaClassRelease(id)) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Set metadata of class id on an opened key. If metadata is already attached,\n * it will be overwritten. The caller is responsible for retrieving and freeing\n * any existing pointer-based metadata before setting a new value. */\nint RM_SetKeyMeta(RedisModuleKeyMetaClassId id, RedisModuleKey *key, uint64_t metadata) {\n    if ((!key) || !(key->mode & REDISMODULE_WRITE) || (key->kv == NULL))\n        return REDISMODULE_ERR;\n\n    kvobj *new_kv = keyMetaSetMetadata(key->db, key->kv, id, metadata);\n    if (new_kv == NULL)\n        return REDISMODULE_ERR;\n\n    /* Update the key->kv pointer in case it was reallocated */\n    key->kv = new_kv;\n\n    return REDISMODULE_OK;\n}\n\n/* Get metadata of class id from an opened key. 
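 *\n * Example (illustrative fragment; `IndexMetaClass` is a previously\n * registered class id, as in the RM_CreateKeyMetaClass() example above):\n *\n *     uint64_t meta;\n *     if (RedisModule_GetKeyMeta(IndexMetaClass, key, &meta) == REDISMODULE_OK) {\n *         // metadata of this class is attached to the key; use `meta` ...\n *     }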
*/\nint RM_GetKeyMeta(RedisModuleKeyMetaClassId id, RedisModuleKey *key, uint64_t *metadata) {\n    if ((!key) || (key->kv == NULL) || (!metadata))\n        return REDISMODULE_ERR;\n    \n    if (keyMetaGetMetadata(id, key->kv, metadata) == 0)\n        return REDISMODULE_ERR;\n    \n    return REDISMODULE_OK;\n}\n\n/* Performs similar operation to FLUSHALL, and optionally start a new AOF file (if enabled)\n * If restart_aof is true, you must make sure the command that triggered this call is not\n * propagated to the AOF file.\n * When async is set to true, db contents will be freed by a background thread. */\nvoid RM_ResetDataset(int restart_aof, int async) {\n    if (restart_aof && server.aof_state != AOF_OFF) stopAppendOnly();\n    flushAllDataAndResetRDB((async? EMPTYDB_ASYNC: EMPTYDB_NO_FLAGS) | EMPTYDB_NOFUNCTIONS);\n    if (server.aof_enabled && restart_aof) startAppendOnlyWithRetry();\n}\n\n/* Returns the number of keys in the current db. */\nunsigned long long RM_DbSize(RedisModuleCtx *ctx) {\n    return dbSize(ctx->client->db);\n}\n\n/* Returns a name of a random key, or NULL if current db is empty. */\nRedisModuleString *RM_RandomKey(RedisModuleCtx *ctx) {\n    robj *key = dbRandomKey(ctx->client->db);\n    autoMemoryAdd(ctx,REDISMODULE_AM_STRING,key);\n    return key;\n}\n\n/* Returns the name of the key currently being processed. */\nconst RedisModuleString *RM_GetKeyNameFromOptCtx(RedisModuleKeyOptCtx *ctx) {\n    return ctx->from_key;\n}\n\n/* Returns the name of the target key currently being processed. */\nconst RedisModuleString *RM_GetToKeyNameFromOptCtx(RedisModuleKeyOptCtx *ctx) {\n    return ctx->to_key;\n}\n\n/* Returns the dbid currently being processed. */\nint RM_GetDbIdFromOptCtx(RedisModuleKeyOptCtx *ctx) {\n    return ctx->from_dbid;\n}\n\n/* Returns the target dbid currently being processed. 
*/\nint RM_GetToDbIdFromOptCtx(RedisModuleKeyOptCtx *ctx) {\n    return ctx->to_dbid;\n}\n/* --------------------------------------------------------------------------\n * ## Key API for String type\n *\n * See also RM_ValueLength(), which returns the length of a string.\n * -------------------------------------------------------------------------- */\n\n/* If the key is open for writing, set the specified string 'str' as the\n * value of the key, deleting the old value if any.\n * On success REDISMODULE_OK is returned. If the key is not open for\n * writing or there is an active iterator, REDISMODULE_ERR is returned. */\nint RM_StringSet(RedisModuleKey *key, RedisModuleString *str) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->iter) return REDISMODULE_ERR;\n    RM_DeleteKey(key);\n    /* Retain str so setKey copies it to db rather than reallocating it. */\n    incrRefCount(str);\n    setKey(key->ctx->client,key->db,key->key,&str,SETKEY_NO_SIGNAL);\n    key->kv = str;\n    return REDISMODULE_OK;\n}\n\n/* Prepare the key associated string value for DMA access, and returns\n * a pointer and size (by reference), that the user can use to read or\n * modify the string in-place accessing it directly via pointer.\n *\n * The 'mode' is composed by bitwise OR-ing the following flags:\n *\n *     REDISMODULE_READ -- Read access\n *     REDISMODULE_WRITE -- Write access\n *\n * If the DMA is not requested for writing, the pointer returned should\n * only be accessed in a read-only fashion.\n *\n * On error (wrong type) NULL is returned.\n *\n * DMA access rules:\n *\n * 1. No other key writing function should be called since the moment\n * the pointer is obtained, for all the time we want to use DMA access\n * to read or modify the string.\n *\n * 2. Each time RM_StringTruncate() is called, to continue with the DMA\n * access, RM_StringDMA() should be called again to re-obtain\n * a new pointer and length.\n *\n * 3. 
If the returned pointer is not NULL, but the length is zero, no\n * byte can be touched (the string is empty, or the key itself is empty)\n * so a RM_StringTruncate() call should be used if there is a need to enlarge\n * the string, and RM_StringDMA() should later be called again to get the pointer.\n */\nchar *RM_StringDMA(RedisModuleKey *key, size_t *len, int mode) {\n    /* We need to return *some* pointer for empty keys, we just return\n     * a string literal pointer, which has the advantage of being mapped into\n     * a read only memory page, so the module will segfault if a write\n     * attempt is performed. */\n    char *emptystring = \"<dma-empty-string>\";\n    if (key->kv == NULL) {\n        *len = 0;\n        return emptystring;\n    }\n\n    if (key->kv->type != OBJ_STRING) return NULL;\n\n    /* For write access, and even for read access if the object is encoded,\n     * we unshare the string (that has the side effect of decoding it). */\n    if ((mode & REDISMODULE_WRITE) || key->kv->encoding != OBJ_ENCODING_RAW)\n        key->kv = dbUnshareStringValue(key->db, key->key, key->kv);\n\n    *len = sdslen(key->kv->ptr);\n    return key->kv->ptr;\n}\n\n/* If the key is open for writing and is of string type, resize it, padding\n * with zero bytes if the new length is greater than the old one.\n *\n * After this call, RM_StringDMA() must be called again to continue\n * DMA access with the new pointer.\n *\n * The function returns REDISMODULE_OK on success, and REDISMODULE_ERR on\n * error, that is, the key is not open for writing, is not a string\n * or resizing for more than 512 MB is requested.\n *\n * If the key is empty, a string key is created with the new string value\n * unless the new length value requested is zero. 
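 *\n * Example (illustrative fragment): grow an empty string key to 16 bytes\n * and obtain a writable pointer to it via DMA:\n *\n *     size_t len;\n *     RedisModule_StringTruncate(key, 16);   // key now holds 16 zero bytes\n *     char *p = RedisModule_StringDMA(key, &len, REDISMODULE_WRITE);\n *     // len is now 16 and p can be written in place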
*/\nint RM_StringTruncate(RedisModuleKey *key, size_t newlen) {\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv && key->kv->type != OBJ_STRING) return REDISMODULE_ERR;\n    if (newlen > 512*1024*1024) return REDISMODULE_ERR;\n\n    /* Empty key and new len set to 0. Just return REDISMODULE_OK without\n     * doing anything. */\n    if (key->kv == NULL && newlen == 0) return REDISMODULE_OK;\n\n    if (key->kv == NULL) {\n        /* Empty key: create it with the new size. */\n        robj *o = createObject(OBJ_STRING,sdsnewlen(NULL, newlen));\n        setKey(key->ctx->client, key->db, key->key, &o, SETKEY_NO_SIGNAL);\n        key->kv = o;\n    } else {\n        /* Unshare and resize. */\n        key->kv = dbUnshareStringValue(key->db, key->key, key->kv);\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(key->kv);\n        size_t curlen = sdslen(key->kv->ptr);\n        if (newlen > curlen) {\n            key->kv->ptr = sdsgrowzero(key->kv->ptr,newlen);\n        } else if (newlen < curlen) {\n            sdssubstr(key->kv->ptr,0,newlen);\n            /* If the string is too wasteful, reallocate it. */\n            if (sdslen(key->kv->ptr) < sdsavail(key->kv->ptr))\n                key->kv->ptr = sdsRemoveFreeSpace(key->kv->ptr, 0);\n        }\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        if (curlen != newlen)\n            updateKeysizesHist(key->db, OBJ_STRING, curlen, newlen);\n    }\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## Key API for List type\n *\n * Many of the list functions access elements by index. Since a list is in\n * essence a doubly-linked list, accessing elements by index is generally an\n * O(N) operation. 
However, if elements are accessed sequentially or with\n * indices close together, the functions are optimized to seek the index from\n * the previous index, rather than seeking from the ends of the list.\n *\n * This enables iteration to be done efficiently using a simple for loop:\n *\n *     long n = RM_ValueLength(key);\n *     for (long i = 0; i < n; i++) {\n *         RedisModuleString *elem = RedisModule_ListGet(key, i);\n *         // Do stuff...\n *     }\n *\n * Note that after modifying a list using RM_ListPop, RM_ListSet or\n * RM_ListInsert, the internal iterator is invalidated so the next operation\n * will require a linear seek.\n *\n * Modifying a list in any other way, for example using RM_Call(), while a key\n * is open will confuse the internal iterator and may cause trouble if the key\n * is used after such modifications. The key must be reopened in this case.\n *\n * See also RM_ValueLength(), which returns the length of a list.\n * -------------------------------------------------------------------------- */\n\n/* Seeks the key's internal list iterator to the given index. On success, 1 is\n * returned and key->iter, key->u.list.entry and key->u.list.index are set. On\n * failure, 0 is returned and errno is set as required by the list API\n * functions. */\nint moduleListIteratorSeek(RedisModuleKey *key, long index, int mode) {\n    if (!key) {\n        errno = EINVAL;\n        return 0;\n    } else if (!key->kv || key->kv->type != OBJ_LIST) {\n        errno = ENOTSUP;\n        return 0;\n    } else if (!(key->mode & mode)) {\n        errno = EBADF;\n        return 0;\n    }\n\n    long length = listTypeLength(key->kv);\n    if (index < -length || index >= length) {\n        errno = EDOM; /* Invalid index */\n        return 0;\n    }\n\n    if (key->iter == NULL) {\n        /* No existing iterator. Create one. 
*/\n        key->iter = zmalloc(sizeof(listTypeIterator));\n        listTypeInitIterator(key->iter, key->kv, index, LIST_TAIL);\n        serverAssert(listTypeNext(key->iter, &key->u.list.entry));\n        key->u.list.index = index;\n        return 1;\n    }\n\n    /* There's an existing iterator. Make sure the requested index has the same\n     * sign as the iterator's index. */\n    if      (index < 0 && key->u.list.index >= 0) index += length;\n    else if (index >= 0 && key->u.list.index < 0) index -= length;\n\n    if (index == key->u.list.index) return 1; /* We're done. */\n\n    /* Seek the iterator to the requested index. */\n    unsigned char dir = key->u.list.index < index ? LIST_TAIL : LIST_HEAD;\n    listTypeSetIteratorDirection(key->iter, &key->u.list.entry, dir);\n    while (key->u.list.index != index) {\n        serverAssert(listTypeNext(key->iter, &key->u.list.entry));\n        key->u.list.index += dir == LIST_HEAD ? -1 : 1;\n    }\n    return 1;\n}\n\n/* Push an element into a list, on head or tail depending on 'where' argument\n * (REDISMODULE_LIST_HEAD or REDISMODULE_LIST_TAIL). If the key refers to an\n * empty key opened for writing, the key is created. On success, REDISMODULE_OK\n * is returned. On failure, REDISMODULE_ERR is returned and `errno` is set as\n * follows:\n *\n * - EINVAL if key or ele is NULL.\n * - ENOTSUP if the key is of another type than list.\n * - EBADF if the key is not opened for writing.\n *\n * Note: Before Redis 7.0, `errno` was not set by this function. 
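 *\n * Example (illustrative fragment; `ctx` is the command context and `key`\n * was opened with REDISMODULE_WRITE). The pushed string is still owned by\n * the caller, so it is freed after the push:\n *\n *     RedisModuleString *ele = RedisModule_CreateString(ctx, "hello", 5);\n *     RedisModule_ListPush(key, REDISMODULE_LIST_TAIL, ele);\n *     RedisModule_FreeString(ctx, ele);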
*/\nint RM_ListPush(RedisModuleKey *key, int where, RedisModuleString *ele) {\n    size_t oldsize = 0;\n    if (!key || !ele) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (key->kv != NULL && key->kv->type != OBJ_LIST) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF;\n        return REDISMODULE_ERR;\n    }\n\n    if (key->iter) moduleFreeKeyIterator(key);\n    if (key->kv == NULL) moduleCreateEmptyKey(key,REDISMODULE_KEYTYPE_LIST);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    listTypeTryConversionAppend(key->kv, &ele, 0, 0, moduleFreeListIterator, key);\n    listTypePush(key->kv, ele,\n        (where == REDISMODULE_LIST_HEAD) ? LIST_HEAD : LIST_TAIL);\n    int64_t l = listTypeLength(key->kv);\n    updateKeysizesHist(key->db, OBJ_LIST, l-1, l);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    return REDISMODULE_OK;\n}\n\n/* Pop an element from the list, and return it as a module string object\n * that the user should free with RM_FreeString() or by enabling\n * automatic memory. The `where` argument specifies if the element should be\n * popped from the beginning or the end of the list (REDISMODULE_LIST_HEAD or\n * REDISMODULE_LIST_TAIL). On failure, the command returns NULL and sets\n * `errno` as follows:\n *\n * - EINVAL if key is NULL.\n * - ENOTSUP if the key is empty or of another type than list.\n * - EBADF if the key is not opened for writing.\n *\n * Note: Before Redis 7.0, `errno` was not set by this function. 
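 *\n * Example (illustrative fragment): drain the list from the head. This\n * assumes RedisModule_AutoMemory(ctx) was enabled, so the popped strings\n * do not need to be freed explicitly:\n *\n *     RedisModuleString *ele;\n *     while ((ele = RedisModule_ListPop(key, REDISMODULE_LIST_HEAD)) != NULL) {\n *         // process ele ...\n *     }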
*/\nRedisModuleString *RM_ListPop(RedisModuleKey *key, int where) {\n    size_t oldsize = 0;\n    if (!key) {\n        errno = EINVAL;\n        return NULL;\n    } else if (key->kv == NULL || key->kv->type != OBJ_LIST) {\n        errno = ENOTSUP;\n        return NULL;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF;\n        return NULL;\n    }\n    if (key->iter) moduleFreeKeyIterator(key);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    robj *ele = listTypePop(key->kv,\n        (where == REDISMODULE_LIST_HEAD) ? LIST_HEAD : LIST_TAIL);\n    robj *decoded = getDecodedObject(ele);\n    decrRefCount(ele);\n    int64_t l = (int64_t) listTypeLength(key->kv);\n    updateKeysizesHist(key->db, OBJ_LIST, l+1, l);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    if (!moduleDelKeyIfEmpty(key)) {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(key->kv);\n        listTypeTryConversion(key->kv, LIST_CONV_SHRINKING, moduleFreeListIterator, key);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    }\n    autoMemoryAdd(key->ctx,REDISMODULE_AM_STRING,decoded);\n    return decoded;\n}\n\n/* Returns the element at index `index` in the list stored at `key`, like the\n * LINDEX command. The element should be free'd using RM_FreeString() or using\n * automatic memory management.\n *\n * The index is zero-based, so 0 means the first element, 1 the second element\n * and so on. Negative indices can be used to designate elements starting at the\n * tail of the list. 
Here, -1 means the last element, -2 means the penultimate\n * and so forth.\n *\n * When no value is found at the given key and index, NULL is returned and\n * `errno` is set as follows:\n *\n * - EINVAL if key is NULL.\n * - ENOTSUP if the key is not a list.\n * - EBADF if the key is not opened for reading.\n * - EDOM if the index is not a valid index in the list.\n */\nRedisModuleString *RM_ListGet(RedisModuleKey *key, long index) {\n    if (moduleListIteratorSeek(key, index, REDISMODULE_READ)) {\n        robj *elem = listTypeGet(&key->u.list.entry);\n        robj *decoded = getDecodedObject(elem);\n        decrRefCount(elem);\n        autoMemoryAdd(key->ctx, REDISMODULE_AM_STRING, decoded);\n        return decoded;\n    } else {\n        return NULL;\n    }\n}\n\n/* Replaces the element at index `index` in the list stored at `key`.\n *\n * The index is zero-based, so 0 means the first element, 1 the second element\n * and so on. Negative indices can be used to designate elements starting at the\n * tail of the list. Here, -1 means the last element, -2 means the penultimate\n * and so forth.\n *\n * On success, REDISMODULE_OK is returned. 
On failure, REDISMODULE_ERR is\n * returned and `errno` is set as follows:\n *\n * - EINVAL if key or value is NULL.\n * - ENOTSUP if the key is not a list.\n * - EBADF if the key is not opened for writing.\n * - EDOM if the index is not a valid index in the list.\n */\nint RM_ListSet(RedisModuleKey *key, long index, RedisModuleString *value) {\n    size_t oldsize = 0;\n    if (!key || !value) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n    if (!key->kv || key->kv->type != OBJ_LIST) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    }\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    listTypeTryConversionAppend(key->kv, &value, 0, 0, moduleFreeListIterator, key);\n    if (moduleListIteratorSeek(key, index, REDISMODULE_WRITE)) {\n        listTypeReplace(&key->u.list.entry, value);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        /* A note in quicklist.c forbids use of iterator after insert, so\n         * probably also after replace. */\n        moduleFreeKeyIterator(key);\n        return REDISMODULE_OK;\n    } else {\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Inserts an element at the given index.\n *\n * The index is zero-based, so 0 means the first element, 1 the second element\n * and so on. Negative indices can be used to designate elements starting at the\n * tail of the list. Here, -1 means the last element, -2 means the penultimate\n * and so forth. The index is the element's index after inserting it.\n *\n * On success, REDISMODULE_OK is returned. 
On failure, REDISMODULE_ERR is\n * returned and `errno` is set as follows:\n *\n * - EINVAL if key or value is NULL.\n * - ENOTSUP if the key is of a type other than list.\n * - EBADF if the key is not opened for writing.\n * - EDOM if the index is not a valid index in the list.\n */\nint RM_ListInsert(RedisModuleKey *key, long index, RedisModuleString *value) {\n    size_t oldsize = 0;\n    if (!value) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (key != NULL && key->kv == NULL &&\n               (index == 0 || index == -1)) {\n        /* Insert in empty key => push. */\n        return RM_ListPush(key, REDISMODULE_LIST_TAIL, value);\n    } else if (key != NULL && key->kv != NULL &&\n               key->kv->type == OBJ_LIST &&\n               (index == (long)listTypeLength(key->kv) || index == -1)) {\n        /* Insert after the last element => push tail. */\n        return RM_ListPush(key, REDISMODULE_LIST_TAIL, value);\n    } else if (key != NULL && key->kv != NULL &&\n               key->kv->type == OBJ_LIST &&\n               (index == 0 || index == -(long)listTypeLength(key->kv) - 1)) {\n        /* Insert before the first element => push head. */\n        return RM_ListPush(key, REDISMODULE_LIST_HEAD, value);\n    }\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    listTypeTryConversionAppend(key->kv, &value, 0, 0, moduleFreeListIterator, key);\n    if (moduleListIteratorSeek(key, index, REDISMODULE_WRITE)) {\n        int where = index < 0 ? LIST_TAIL : LIST_HEAD;\n        listTypeInsert(&key->u.list.entry, value, where);\n        int64_t l = (int64_t) listTypeLength(key->kv);\n        updateKeysizesHist(key->db, OBJ_LIST, l-1, l);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        /* A note in quicklist.c forbids use of iterator after insert. 
*/\n        moduleFreeKeyIterator(key);\n        return REDISMODULE_OK;\n    } else {\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Removes an element at the given index. The index is 0-based. A negative index\n * can also be used, counting from the end of the list.\n *\n * On success, REDISMODULE_OK is returned. On failure, REDISMODULE_ERR is\n * returned and `errno` is set as follows:\n *\n * - EINVAL if key or value is NULL.\n * - ENOTSUP if the key is not a list.\n * - EBADF if the key is not opened for writing.\n * - EDOM if the index is not a valid index in the list.\n */\nint RM_ListDelete(RedisModuleKey *key, long index) {\n    if (moduleListIteratorSeek(key, index, REDISMODULE_WRITE)) {\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(key->kv);\n        listTypeDelete(key->iter, &key->u.list.entry);\n        int64_t l = (int64_t) listTypeLength(key->kv);\n        updateKeysizesHist(key->db, OBJ_LIST, l+1, l);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        if (moduleDelKeyIfEmpty(key)) return REDISMODULE_OK;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(key->kv);\n        listTypeTryConversion(key->kv, LIST_CONV_SHRINKING, moduleFreeListIterator, key);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        if (!key->iter) return REDISMODULE_OK; /* Return ASAP if iterator has been freed */\n        if (listTypeNext(key->iter, &key->u.list.entry)) {\n            /* After delete entry at position 'index', we need to update\n             * 'key->u.list.index' according to the 
following cases:\n             * 1) [1, 2, 3] => dir: forward, index: 0  => [2, 3] => index: still 0\n             * 2) [1, 2, 3] => dir: forward, index: -3 => [2, 3] => index: -2\n             * 3) [1, 2, 3] => dir: reverse, index: 2  => [1, 2] => index: 1\n             * 4) [1, 2, 3] => dir: reverse, index: -1 => [1, 2] => index: still -1 */\n            listTypeIterator *li = key->iter;\n            int reverse = li->direction == LIST_HEAD;\n            if (key->u.list.index < 0)\n                key->u.list.index += reverse ? 0 : 1;\n            else\n                key->u.list.index += reverse ? -1 : 0;\n        } else {\n            /* Reset list iterator if the next entry doesn't exist. */\n            moduleFreeKeyIterator(key);\n        }\n        return REDISMODULE_OK;\n    } else {\n        return REDISMODULE_ERR;\n    }\n}\n\n/* --------------------------------------------------------------------------\n * ## Key API for Sorted Set type\n *\n * See also RM_ValueLength(), which returns the length of a sorted set.\n * -------------------------------------------------------------------------- */\n\n/* Conversion from/to public flags of the Modules API and our private flags,\n * so that we have everything decoupled. */\nint moduleZsetAddFlagsToCoreFlags(int flags) {\n    int retflags = 0;\n    if (flags & REDISMODULE_ZADD_XX) retflags |= ZADD_IN_XX;\n    if (flags & REDISMODULE_ZADD_NX) retflags |= ZADD_IN_NX;\n    if (flags & REDISMODULE_ZADD_GT) retflags |= ZADD_IN_GT;\n    if (flags & REDISMODULE_ZADD_LT) retflags |= ZADD_IN_LT;\n    return retflags;\n}\n\n/* See previous function comment. 
*/\nint moduleZsetAddFlagsFromCoreFlags(int flags) {\n    int retflags = 0;\n    if (flags & ZADD_OUT_ADDED) retflags |= REDISMODULE_ZADD_ADDED;\n    if (flags & ZADD_OUT_UPDATED) retflags |= REDISMODULE_ZADD_UPDATED;\n    if (flags & ZADD_OUT_NOP) retflags |= REDISMODULE_ZADD_NOP;\n    return retflags;\n}\n\n/* Add a new element into a sorted set, with the specified 'score'.\n * If the element already exists, the score is updated.\n *\n * A new sorted set is created as the value if the key is an empty key\n * open for writing.\n *\n * Additional flags can be passed to the function via a pointer; the flags\n * are used both to receive input and to communicate state when the function\n * returns. 'flagsptr' can be NULL if no special flags are used.\n *\n * The input flags are:\n *\n *     REDISMODULE_ZADD_XX: Element must already exist. Do nothing otherwise.\n *     REDISMODULE_ZADD_NX: Element must not exist. Do nothing otherwise.\n *     REDISMODULE_ZADD_GT: If element exists, new score must be greater than the current score.\n *                          Do nothing otherwise. Can optionally be combined with XX.\n *     REDISMODULE_ZADD_LT: If element exists, new score must be less than the current score.\n *                          Do nothing otherwise. Can optionally be combined with XX.\n *\n * The output flags are:\n *\n *     REDISMODULE_ZADD_ADDED: The new element was added to the sorted set.\n *     REDISMODULE_ZADD_UPDATED: The score of the element was updated.\n *     REDISMODULE_ZADD_NOP: No operation was performed because of the XX or NX flags.\n *\n * On success the function returns REDISMODULE_OK. 
On the following errors,\n * REDISMODULE_ERR is returned:\n *\n * * The key was not opened for writing.\n * * The key is of the wrong type.\n * * 'score' double value is not a number (NaN).\n */\nint RM_ZsetAdd(RedisModuleKey *key, double score, RedisModuleString *ele, int *flagsptr) {\n    int in_flags = 0, out_flags = 0;\n    size_t oldsize = 0;\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv && key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n    if (key->kv == NULL) moduleCreateEmptyKey(key,REDISMODULE_KEYTYPE_ZSET);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    if (flagsptr) in_flags = moduleZsetAddFlagsToCoreFlags(*flagsptr);\n    if (zsetAdd(key->kv,score,ele->ptr,in_flags,&out_flags,NULL) == 0) {\n        if (flagsptr) *flagsptr = 0;\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        moduleDelKeyIfEmpty(key);\n        return REDISMODULE_ERR;\n    }\n    if (flagsptr) *flagsptr = moduleZsetAddFlagsFromCoreFlags(out_flags);\n    int64_t l = (int64_t) zsetLength(key->kv);\n    updateKeysizesHist(key->db, OBJ_ZSET, l-1, l);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    return REDISMODULE_OK;\n}\n\n/* This function works exactly like RM_ZsetAdd(), but instead of setting\n * a new score, the score of the existing element is incremented, or if the\n * element does not already exist, it is added assuming the old score was\n * zero.\n *\n * The input and output flags, and the return value, have the same exact\n * meaning, with the only difference that this function will return\n * REDISMODULE_ERR even when 'score' is a valid double number, but adding it\n * to the existing score results in a NaN (not a number) condition.\n *\n * This function has an additional 
argument, 'newscore': if not NULL, it is filled\n * with the new score of the element after the increment, if no error\n * is returned. */\nint RM_ZsetIncrby(RedisModuleKey *key, double score, RedisModuleString *ele, int *flagsptr, double *newscore) {\n    int in_flags = 0, out_flags = 0;\n    size_t oldsize = 0;\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv && key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n    if (key->kv == NULL) moduleCreateEmptyKey(key,REDISMODULE_KEYTYPE_ZSET);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    if (flagsptr) in_flags = moduleZsetAddFlagsToCoreFlags(*flagsptr);\n    in_flags |= ZADD_IN_INCR;\n    if (zsetAdd(key->kv,score,ele->ptr,in_flags,&out_flags,newscore) == 0) {\n        if (flagsptr) *flagsptr = 0;\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        moduleDelKeyIfEmpty(key);\n        return REDISMODULE_ERR;\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    if (out_flags & ZADD_OUT_ADDED) {\n        int64_t l = (int64_t) zsetLength(key->kv);\n        updateKeysizesHist(key->db, OBJ_ZSET, l-1, l);\n    }\n    if (flagsptr) *flagsptr = moduleZsetAddFlagsFromCoreFlags(out_flags);\n    return REDISMODULE_OK;\n}\n\n/* Remove the specified element from the sorted set.\n * The function returns REDISMODULE_OK on success, and REDISMODULE_ERR\n * on one of the following conditions:\n *\n * * The key was not opened for writing.\n * * The key is of the wrong type.\n *\n * The return value does NOT indicate whether the element was really\n * removed (that is, whether it existed), only whether the function was\n * executed successfully.\n *\n * In order to know if the element was removed, the additional argument\n * 'deleted' must be passed, that 
populates the integer by reference,\n * setting it to 1 or 0 depending on the outcome of the operation.\n * The 'deleted' argument can be NULL if the caller is not interested\n * in knowing whether the element was really removed.\n *\n * Empty keys will be handled correctly by doing nothing. */\nint RM_ZsetRem(RedisModuleKey *key, RedisModuleString *ele, int *deleted) {\n    size_t oldsize = 0;\n    if (!(key->mode & REDISMODULE_WRITE)) return REDISMODULE_ERR;\n    if (key->kv == NULL) {\n        if (deleted) *deleted = 0;\n        return REDISMODULE_OK;\n    }\n    if (key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(key->kv);\n    if (zsetDel(key->kv,ele->ptr)) {\n        if (deleted) *deleted = 1;\n        int64_t l = (int64_t) zsetLength(key->kv);\n        updateKeysizesHist(key->db, OBJ_ZSET, l+1, l);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        moduleDelKeyIfEmpty(key);\n    } else {\n        if (deleted) *deleted = 0;\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    }\n    return REDISMODULE_OK;\n}\n\n/* On success, retrieves the double score associated with the sorted set element\n * 'ele' and returns REDISMODULE_OK. 
Otherwise REDISMODULE_ERR is returned\n * to signal one of the following conditions:\n *\n * * There is no such element 'ele' in the sorted set.\n * * The key is not a sorted set.\n * * The key is an open empty key.\n */\nint RM_ZsetScore(RedisModuleKey *key, RedisModuleString *ele, double *score) {\n    if (key->kv == NULL) return REDISMODULE_ERR;\n    if (key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n    if (zsetScore(key->kv,ele->ptr,score) == C_ERR) return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## Key API for Sorted Set iterator\n * -------------------------------------------------------------------------- */\n\nvoid zsetKeyReset(RedisModuleKey *key) {\n    key->u.zset.type = REDISMODULE_ZSET_RANGE_NONE;\n    key->u.zset.current = NULL;\n    key->u.zset.er = 1;\n}\n\n/* Stop a sorted set iteration. */\nvoid RM_ZsetRangeStop(RedisModuleKey *key) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return;\n    /* Free resources if needed. */\n    if (key->u.zset.type == REDISMODULE_ZSET_RANGE_LEX)\n        zslFreeLexRange(&key->u.zset.lrs);\n    /* Set up sensible values so that misused iteration API calls when an\n     * iterator is not active will result in something more sensible\n     * than crashing. */\n    zsetKeyReset(key);\n}\n\n/* Return the \"End of range\" flag value to signal the end of the iteration. */\nint RM_ZsetRangeEndReached(RedisModuleKey *key) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return 1;\n    return key->u.zset.er;\n}\n\n/* Helper function for RM_ZsetFirstInScoreRange() and RM_ZsetLastInScoreRange().\n * Setup the sorted set iteration according to the specified score range\n * (see the functions calling it for more info). If 'first' is true the\n * first element in the range is used as a starting point for the iterator\n * otherwise the last. Return REDISMODULE_OK on success otherwise\n * REDISMODULE_ERR. 
*/\nint zsetInitScoreRange(RedisModuleKey *key, double min, double max, int minex, int maxex, int first) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n\n    RM_ZsetRangeStop(key);\n    key->u.zset.type = REDISMODULE_ZSET_RANGE_SCORE;\n    key->u.zset.er = 0;\n\n    /* Setup the range structure used by the sorted set core implementation\n     * in order to seek at the specified element. */\n    zrangespec *zrs = &key->u.zset.rs;\n    zrs->min = min;\n    zrs->max = max;\n    zrs->minex = minex;\n    zrs->maxex = maxex;\n\n    if (key->kv->encoding == OBJ_ENCODING_LISTPACK) {\n        key->u.zset.current = first ? zzlFirstInRange(key->kv->ptr,zrs) :\n                                      zzlLastInRange(key->kv->ptr,zrs);\n    } else if (key->kv->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = key->kv->ptr;\n        zskiplist *zsl = zs->zsl;\n        key->u.zset.current = first ? zslNthInRange(zsl, zrs, 0, NULL) :\n                                      zslNthInRange(zsl, zrs, -1, NULL);\n    } else {\n        serverPanic(\"Unsupported zset encoding\");\n    }\n    if (key->u.zset.current == NULL) key->u.zset.er = 1;\n    return REDISMODULE_OK;\n}\n\n/* Setup a sorted set iterator seeking the first element in the specified\n * range. Returns REDISMODULE_OK if the iterator was correctly initialized\n * otherwise REDISMODULE_ERR is returned in the following conditions:\n *\n * 1. The value stored at key is not a sorted set or the key is empty.\n *\n * The range is specified according to the two double values 'min' and 'max'.\n * Both can be infinite using the following two macros:\n *\n * * REDISMODULE_POSITIVE_INFINITE for positive infinite value\n * * REDISMODULE_NEGATIVE_INFINITE for negative infinite value\n *\n * 'minex' and 'maxex' parameters, if true, respectively setup a range\n * where the min and max value are exclusive (not included) instead of\n * inclusive. 
*/\nint RM_ZsetFirstInScoreRange(RedisModuleKey *key, double min, double max, int minex, int maxex) {\n    return zsetInitScoreRange(key,min,max,minex,maxex,1);\n}\n\n/* Exactly like RedisModule_ZsetFirstInScoreRange() but the last element of\n * the range is selected for the start of the iteration instead. */\nint RM_ZsetLastInScoreRange(RedisModuleKey *key, double min, double max, int minex, int maxex) {\n    return zsetInitScoreRange(key,min,max,minex,maxex,0);\n}\n\n/* Helper function for RM_ZsetFirstInLexRange() and RM_ZsetLastInLexRange().\n * Setup the sorted set iteration according to the specified lexicographical\n * range (see the functions calling it for more info). If 'first' is true the\n * first element in the range is used as a starting point for the iterator\n * otherwise the last. Return REDISMODULE_OK on success otherwise\n * REDISMODULE_ERR.\n *\n * Note that this function takes 'min' and 'max' in the same form of the\n * Redis ZRANGEBYLEX command. */\nint zsetInitLexRange(RedisModuleKey *key, RedisModuleString *min, RedisModuleString *max, int first) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return REDISMODULE_ERR;\n\n    RM_ZsetRangeStop(key);\n    key->u.zset.er = 0;\n\n    /* Setup the range structure used by the sorted set core implementation\n     * in order to seek at the specified element. */\n    zlexrangespec *zlrs = &key->u.zset.lrs;\n    if (zslParseLexRange(min, max, zlrs) == C_ERR) return REDISMODULE_ERR;\n\n    /* Set the range type to lex only after successfully parsing the range,\n     * otherwise we don't want the zlexrangespec to be freed. */\n    key->u.zset.type = REDISMODULE_ZSET_RANGE_LEX;\n\n    if (key->kv->encoding == OBJ_ENCODING_LISTPACK) {\n        key->u.zset.current = first ? 
zzlFirstInLexRange(key->kv->ptr,zlrs) :\n                                      zzlLastInLexRange(key->kv->ptr,zlrs);\n    } else if (key->kv->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = key->kv->ptr;\n        zskiplist *zsl = zs->zsl;\n        key->u.zset.current = first ? zslNthInLexRange(zsl,zlrs,0,NULL) :\n                                      zslNthInLexRange(zsl,zlrs,-1,NULL);\n    } else {\n        serverPanic(\"Unsupported zset encoding\");\n    }\n    if (key->u.zset.current == NULL) key->u.zset.er = 1;\n\n    return REDISMODULE_OK;\n}\n\n/* Setup a sorted set iterator seeking the first element in the specified\n * lexicographical range. Returns REDISMODULE_OK if the iterator was correctly\n * initialized otherwise REDISMODULE_ERR is returned in the\n * following conditions:\n *\n * 1. The value stored at key is not a sorted set or the key is empty.\n * 2. The lexicographical range 'min' and 'max' format is invalid.\n *\n * 'min' and 'max' should be provided as two RedisModuleString objects\n * in the same format as the parameters passed to the ZRANGEBYLEX command.\n * The function does not take ownership of the objects, so they can be released\n * ASAP after the iterator is setup. */\nint RM_ZsetFirstInLexRange(RedisModuleKey *key, RedisModuleString *min, RedisModuleString *max) {\n    return zsetInitLexRange(key,min,max,1);\n}\n\n/* Exactly like RedisModule_ZsetFirstInLexRange() but the last element of\n * the range is selected for the start of the iteration instead. */\nint RM_ZsetLastInLexRange(RedisModuleKey *key, RedisModuleString *min, RedisModuleString *max) {\n    return zsetInitLexRange(key,min,max,0);\n}\n\n/* Return the current sorted set element of an active sorted set iterator\n * or NULL if the range specified in the iterator does not include any\n * element. 
*/\nRedisModuleString *RM_ZsetRangeCurrentElement(RedisModuleKey *key, double *score) {\n    RedisModuleString *str;\n\n    if (!key->kv || key->kv->type != OBJ_ZSET) return NULL;\n    if (key->u.zset.current == NULL) return NULL;\n    if (key->kv->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *eptr, *sptr;\n        eptr = key->u.zset.current;\n        sds ele = lpGetObject(eptr);\n        if (score) {\n            sptr = lpNext(key->kv->ptr,eptr);\n            *score = zzlGetScore(sptr);\n        }\n        str = createObject(OBJ_STRING,ele);\n    } else if (key->kv->encoding == OBJ_ENCODING_SKIPLIST) {\n        zskiplistNode *ln = key->u.zset.current;\n        if (score) *score = ln->score;\n        sds ele = zslGetNodeElement(ln);\n        str = createStringObject(ele,sdslen(ele));\n    } else {\n        serverPanic(\"Unsupported zset encoding\");\n    }\n    autoMemoryAdd(key->ctx,REDISMODULE_AM_STRING,str);\n    return str;\n}\n\n/* Go to the next element of the sorted set iterator. Returns 1 if there was\n * a next element, 0 if we are already at the latest element or the range\n * does not include any item at all. */\nint RM_ZsetRangeNext(RedisModuleKey *key) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return 0;\n    if (!key->u.zset.type || !key->u.zset.current) return 0; /* No active iterator. */\n\n    if (key->kv->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = key->kv->ptr;\n        unsigned char *eptr = key->u.zset.current;\n        unsigned char *next;\n        next = lpNext(zl,eptr); /* Skip element. */\n        if (next) next = lpNext(zl,next); /* Skip score. */\n        if (next == NULL) {\n            key->u.zset.er = 1;\n            return 0;\n        } else {\n            /* Are we still within the range? */\n            if (key->u.zset.type == REDISMODULE_ZSET_RANGE_SCORE) {\n                /* Fetch the next element score for the\n                 * range check. 
*/\n                unsigned char *saved_next = next;\n                next = lpNext(zl,next); /* Skip next element. */\n                double score = zzlGetScore(next); /* Obtain the next score. */\n                if (!zslValueLteMax(score,&key->u.zset.rs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n                next = saved_next;\n            } else if (key->u.zset.type == REDISMODULE_ZSET_RANGE_LEX) {\n                if (!zzlLexValueLteMax(next,&key->u.zset.lrs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n            }\n            key->u.zset.current = next;\n            return 1;\n        }\n    } else if (key->kv->encoding == OBJ_ENCODING_SKIPLIST) {\n        zskiplistNode *ln = key->u.zset.current, *next = ln->level[0].forward;\n        if (next == NULL) {\n            key->u.zset.er = 1;\n            return 0;\n        } else {\n            /* Are we still within the range? */\n            if (key->u.zset.type == REDISMODULE_ZSET_RANGE_SCORE &&\n                !zslValueLteMax(next->score,&key->u.zset.rs))\n            {\n                key->u.zset.er = 1;\n                return 0;\n            } else if (key->u.zset.type == REDISMODULE_ZSET_RANGE_LEX) {\n                if (!zslLexValueLteMax(zslGetNodeElement(next),&key->u.zset.lrs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n            }\n            key->u.zset.current = next;\n            return 1;\n        }\n    } else {\n        serverPanic(\"Unsupported zset encoding\");\n    }\n}\n\n/* Go to the previous element of the sorted set iterator. Returns 1 if there was\n * a previous element, 0 if we are already at the first element or the range\n * does not include any item at all. 
*/\nint RM_ZsetRangePrev(RedisModuleKey *key) {\n    if (!key->kv || key->kv->type != OBJ_ZSET) return 0;\n    if (!key->u.zset.type || !key->u.zset.current) return 0; /* No active iterator. */\n\n    if (key->kv->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = key->kv->ptr;\n        unsigned char *eptr = key->u.zset.current;\n        unsigned char *prev;\n        prev = lpPrev(zl,eptr); /* Go back to previous score. */\n        if (prev) prev = lpPrev(zl,prev); /* Back to previous ele. */\n        if (prev == NULL) {\n            key->u.zset.er = 1;\n            return 0;\n        } else {\n            /* Are we still within the range? */\n            if (key->u.zset.type == REDISMODULE_ZSET_RANGE_SCORE) {\n                /* Fetch the previous element score for the\n                 * range check. */\n                unsigned char *saved_prev = prev;\n                prev = lpNext(zl,prev); /* Skip element to get the score.*/\n                double score = zzlGetScore(prev); /* Obtain the prev score. */\n                if (!zslValueGteMin(score,&key->u.zset.rs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n                prev = saved_prev;\n            } else if (key->u.zset.type == REDISMODULE_ZSET_RANGE_LEX) {\n                if (!zzlLexValueGteMin(prev,&key->u.zset.lrs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n            }\n            key->u.zset.current = prev;\n            return 1;\n        }\n    } else if (key->kv->encoding == OBJ_ENCODING_SKIPLIST) {\n        zskiplistNode *ln = key->u.zset.current, *prev = ln->backward;\n        if (prev == NULL) {\n            key->u.zset.er = 1;\n            return 0;\n        } else {\n            /* Are we still within the range? 
*/\n            if (key->u.zset.type == REDISMODULE_ZSET_RANGE_SCORE &&\n                !zslValueGteMin(prev->score,&key->u.zset.rs))\n            {\n                key->u.zset.er = 1;\n                return 0;\n            } else if (key->u.zset.type == REDISMODULE_ZSET_RANGE_LEX) {\n                if (!zslLexValueGteMin(zslGetNodeElement(prev),&key->u.zset.lrs)) {\n                    key->u.zset.er = 1;\n                    return 0;\n                }\n            }\n            key->u.zset.current = prev;\n            return 1;\n        }\n    } else {\n        serverPanic(\"Unsupported zset encoding\");\n    }\n}\n\n/* --------------------------------------------------------------------------\n * ## Key API for Hash type\n *\n * See also RM_ValueLength(), which returns the number of fields in a hash.\n * -------------------------------------------------------------------------- */\n\n/* Set the field of the specified hash field to the specified value.\n * If the key is an empty key open for writing, it is created with an empty\n * hash value, in order to set the specified field.\n *\n * The function is variadic and the user must specify pairs of field\n * names and values, both as RedisModuleString pointers (unless the\n * CFIELD option is set, see later). 
At the end of the field/value-ptr pairs,\n * NULL must be specified as last argument to signal the end of the arguments\n * in the variadic function.\n *\n * Example to set the hash argv[1] to the value argv[2]:\n *\n *      RedisModule_HashSet(key,REDISMODULE_HASH_NONE,argv[1],argv[2],NULL);\n *\n * The function can also be used in order to delete fields (if they exist)\n * by setting them to the specified value of REDISMODULE_HASH_DELETE:\n *\n *      RedisModule_HashSet(key,REDISMODULE_HASH_NONE,argv[1],\n *                          REDISMODULE_HASH_DELETE,NULL);\n *\n * The behavior of the command changes with the specified flags, which can be\n * set to REDISMODULE_HASH_NONE if no special behavior is needed.\n *\n *     REDISMODULE_HASH_NX: The operation is performed only if the field does\n *                          not already exist in the hash.\n *     REDISMODULE_HASH_XX: The operation is performed only if the field\n *                          already exists, so that a new value can be\n *                          associated with an existing field, but no new fields\n *                          are created.\n *     REDISMODULE_HASH_CFIELDS: The field names passed are null terminated C\n *                               strings instead of RedisModuleString objects.\n *     REDISMODULE_HASH_COUNT_ALL: Include the number of inserted fields in the\n *                                 returned number, in addition to the number of\n *                                 updated and deleted fields. 
(Added in Redis\n *                                 6.2.)\n *\n * Unless NX is specified, the command overwrites the old field value with\n * the new one.\n *\n * When using REDISMODULE_HASH_CFIELDS, field names are reported using\n * normal C strings, so for example to delete the field \"foo\" the following\n * code can be used:\n *\n *      RedisModule_HashSet(key,REDISMODULE_HASH_CFIELDS,\"foo\",\n *                          REDISMODULE_HASH_DELETE,NULL);\n *\n * Return value:\n *\n * The number of fields existing in the hash prior to the call, which have been\n * updated (their old value replaced by a new value) or deleted. If the\n * flag REDISMODULE_HASH_COUNT_ALL is set, inserted fields not previously\n * existing in the hash are also counted.\n *\n * If the return value is zero, `errno` is set (since Redis 6.2) as follows:\n *\n * - EINVAL if any unknown flags are set or if key is NULL.\n * - ENOTSUP if the key is associated with a non-hash value.\n * - EBADF if the key was not opened for writing.\n * - ENOENT if no fields were counted as described under Return value above.\n *   This is not actually an error. The return value can be zero if all fields\n *   were just created and the COUNT_ALL flag was unset, or if changes were held\n *   back due to the NX and XX flags.\n *\n * NOTICE: The return value semantics of this function are very different\n * between Redis 6.2 and older versions. Modules that use it should determine\n * the Redis version and handle it accordingly.\n */\nint RM_HashSet(RedisModuleKey *key, int flags, ...) 
{\n    va_list ap;\n    size_t oldsize = 0;\n    if (!key || (flags & ~(REDISMODULE_HASH_NX |\n                           REDISMODULE_HASH_XX |\n                           REDISMODULE_HASH_CFIELDS |\n                           REDISMODULE_HASH_COUNT_ALL))) {\n        errno = EINVAL;\n        return 0;\n    } else if (key->kv && key->kv->type != OBJ_HASH) {\n        errno = ENOTSUP;\n        return 0;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF;\n        return 0;\n    }\n    if (key->kv == NULL) moduleCreateEmptyKey(key,REDISMODULE_KEYTYPE_HASH);\n\n    int64_t oldlen = (int64_t) getObjectLength(key->kv);\n\n    int count = 0;\n    va_start(ap, flags);\n    while(1) {\n        RedisModuleString *field, *value;\n        /* Get the field and value objects. */\n        if (flags & REDISMODULE_HASH_CFIELDS) {\n            char *cfield = va_arg(ap,char*);\n            if (cfield == NULL) break;\n            field = createRawStringObject(cfield,strlen(cfield));\n        } else {\n            field = va_arg(ap,RedisModuleString*);\n            if (field == NULL) break;\n        }\n        value = va_arg(ap,RedisModuleString*);\n\n        /* Handle XX and NX */\n        if (flags & (REDISMODULE_HASH_XX|REDISMODULE_HASH_NX)) {\n            int hfeFlags = HFE_LAZY_AVOID_HASH_DEL | HFE_LAZY_NO_UPDATE_KEYSIZES;\n\n            /*\n             * The hash might contain expired fields. If we lazily delete an expired\n             * field and the command was sent with the XX flag, the operation could\n             * fail and leave the hash empty, which the caller might not expect.\n             * To prevent unexpected behavior, we avoid lazy deletion in this case\n             * yet let the operation fail. 
Note that moduleDelKeyIfEmpty()\n             * below won't delete the hash if it is left with a single expired\n             * field, because the hash length blindly counts expired fields as well.\n             */\n            if (flags & REDISMODULE_HASH_XX)\n                hfeFlags |= HFE_LAZY_AVOID_FIELD_DEL;\n\n            int exists = hashTypeExists(key->db, key->kv, field->ptr, hfeFlags, NULL);\n            if (((flags & REDISMODULE_HASH_XX) && !exists) ||\n                ((flags & REDISMODULE_HASH_NX) && exists))\n            {\n                if (flags & REDISMODULE_HASH_CFIELDS) decrRefCount(field);\n                continue;\n            }\n        }\n\n        /* Handle deletion if value is REDISMODULE_HASH_DELETE. */\n        if (value == REDISMODULE_HASH_DELETE) {\n            if (server.memory_tracking_enabled)\n                oldsize = kvobjAllocSize(key->kv);\n            count += hashTypeDelete(key->kv, field->ptr);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n            if (flags & REDISMODULE_HASH_CFIELDS) decrRefCount(field);\n            continue;\n        }\n\n        int low_flags = HASH_SET_COPY;\n        /* If CFIELDS is active, we can pass the ownership of the\n         * SDS object to the low level function that sets the field\n         * to avoid a useless copy. 
*/\n        if (flags & REDISMODULE_HASH_CFIELDS)\n            low_flags |= HASH_SET_TAKE_FIELD;\n\n        robj *argv[2] = {field,value};\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(key->kv);\n        hashTypeTryConversion(key->db,key->kv,argv,0,1);\n        int updated = hashTypeSet(key->db, key->kv, field->ptr, value->ptr, low_flags);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        count += (flags & REDISMODULE_HASH_COUNT_ALL) ? 1 : updated;\n\n        /* If CFIELDS is active, ownership of the SDS string now belongs to\n         * hashTypeSet(); however, we still have to release the 'field' object shell. */\n        if (flags & REDISMODULE_HASH_CFIELDS) {\n           field->ptr = NULL; /* Prevent the SDS string from being freed. */\n           decrRefCount(field);\n        }\n    }\n    va_end(ap);\n    updateKeysizesHist(key->db, OBJ_HASH, oldlen,\n                       (int64_t) hashTypeLength(key->kv, 0));\n\n    moduleDelKeyIfEmpty(key);\n    if (count == 0) errno = ENOENT;\n    return count;\n}\n\n/* Get fields from a hash value. 
This function is called using a variable\n * number of arguments, alternating a field name (as a RedisModuleString\n * pointer) with a pointer to a RedisModuleString pointer, that is set to the\n * value of the field if the field exists, or NULL if the field does not exist.\n * At the end of the field/value-ptr pairs, NULL must be specified as the last\n * argument to signal the end of the arguments in the variadic function.\n *\n * This is an example usage:\n *\n *      RedisModuleString *first, *second;\n *      RedisModule_HashGet(mykey,REDISMODULE_HASH_NONE,argv[1],&first,\n *                          argv[2],&second,NULL);\n *\n * As with RedisModule_HashSet(), the behavior of the command can be specified\n * by passing flags other than REDISMODULE_HASH_NONE:\n *\n * REDISMODULE_HASH_CFIELDS: field names as null-terminated C strings.\n *\n * REDISMODULE_HASH_EXISTS: instead of setting the value of the field\n * expecting a RedisModuleString pointer to pointer, the function just\n * reports if the field exists or not and expects an integer pointer\n * as the second element of each pair.\n *\n * REDISMODULE_HASH_EXPIRE_TIME: retrieves the expiration time of a field in the hash.\n * The function expects a `mstime_t` pointer as the second element of each pair.\n * If the field does not exist or has no expiration, the value is set to\n * `REDISMODULE_NO_EXPIRE`. 
This flag must not be used with `REDISMODULE_HASH_EXISTS`.\n *\n * Example of REDISMODULE_HASH_CFIELDS:\n *\n *      RedisModuleString *username, *hashedpass;\n *      RedisModule_HashGet(mykey,REDISMODULE_HASH_CFIELDS,\"username\",&username,\"hp\",&hashedpass, NULL);\n *\n * Example of REDISMODULE_HASH_EXISTS:\n *\n *      int exists;\n *      RedisModule_HashGet(mykey,REDISMODULE_HASH_EXISTS,\"username\",&exists,NULL);\n *\n * Example of REDISMODULE_HASH_EXPIRE_TIME:\n *\n *      mstime_t hpExpireTime;\n *      RedisModule_HashGet(mykey,REDISMODULE_HASH_EXPIRE_TIME,\"hp\",&hpExpireTime,NULL);\n *\n * The function returns REDISMODULE_OK on success and REDISMODULE_ERR if\n * the key is not a hash value.\n *\n * Memory management:\n *\n * The returned RedisModuleString objects should be released with\n * RedisModule_FreeString(), or by enabling automatic memory management.\n */\nint RM_HashGet(RedisModuleKey *key, int flags, ...) {\n    int hfeFlags = HFE_LAZY_AVOID_FIELD_DEL | HFE_LAZY_AVOID_HASH_DEL | HFE_LAZY_NO_UPDATE_KEYSIZES;\n    va_list ap;\n    if (key->kv && key->kv->type != OBJ_HASH) return REDISMODULE_ERR;\n\n    if (key->mode & REDISMODULE_OPEN_KEY_ACCESS_EXPIRED)\n        hfeFlags = HFE_LAZY_ACCESS_EXPIRED; /* also allow reading expired fields */\n\n    /* Verify flag HASH_EXISTS is not set together with HASH_EXPIRE_TIME */\n    if ((flags & REDISMODULE_HASH_EXISTS) && (flags & REDISMODULE_HASH_EXPIRE_TIME))\n        return REDISMODULE_ERR;\n\n    va_start(ap, flags);\n    while(1) {\n        RedisModuleString *field, **valueptr;\n        int *existsptr;\n        /* Get the field object and the value pointer to pointer. 
*/\n        if (flags & REDISMODULE_HASH_CFIELDS) {\n            char *cfield = va_arg(ap,char*);\n            if (cfield == NULL) break;\n            field = createRawStringObject(cfield,strlen(cfield));\n        } else {\n            field = va_arg(ap,RedisModuleString*);\n            if (field == NULL) break;\n        }\n\n        /* Query the hash for existence or value object. */\n        if (flags & REDISMODULE_HASH_EXISTS) {\n            existsptr = va_arg(ap,int*);\n            if (key->kv) {\n                *existsptr = hashTypeExists(key->db, key->kv, field->ptr, hfeFlags, NULL);\n            } else {\n                *existsptr = 0;\n            }\n        } else if (flags & REDISMODULE_HASH_EXPIRE_TIME) {\n            mstime_t *expireptr = va_arg(ap,mstime_t*);\n            *expireptr = REDISMODULE_NO_EXPIRE;\n            if (key->kv) {\n                uint64_t expireTime = 0;\n                /* As an optimization, avoid fetching the value, only the expire time */\n                int res = hashTypeGetValueObject(key->db, key->kv, field->ptr,\n                                                 hfeFlags, NULL, &expireTime, NULL);\n                /* If field has expiration time */\n                if (res && expireTime != 0) *expireptr = expireTime;\n            }\n        } else {\n            valueptr = va_arg(ap,RedisModuleString**);\n            if (key->kv) {\n                hashTypeGetValueObject(key->db, key->kv, field->ptr,\n                                       hfeFlags, valueptr, NULL, NULL);\n\n                if (*valueptr) {\n                    robj *decoded = getDecodedObject(*valueptr);\n                    decrRefCount(*valueptr);\n                    *valueptr = decoded;\n                }\n                if (*valueptr)\n                    autoMemoryAdd(key->ctx,REDISMODULE_AM_STRING,*valueptr);\n            } else {\n                *valueptr = NULL;\n            }\n        }\n\n        /* Cleanup */\n        if (flags & 
REDISMODULE_HASH_CFIELDS) decrRefCount(field);\n    }\n    va_end(ap);\n    return REDISMODULE_OK;\n}\n\n/**\n * Retrieves the minimum expiration time of fields in a hash.\n * \n * Return:\n *   - The minimum expiration time (in milliseconds) of the hash fields if at\n *     least one field has an expiration set.\n *   - REDISMODULE_NO_EXPIRE if no fields have an expiration set or if the key\n *     is not a hash.\n */\nmstime_t RM_HashFieldMinExpire(RedisModuleKey *key) {\n    if ((!key->kv) || (key->kv->type != OBJ_HASH))\n        return REDISMODULE_NO_EXPIRE;\n    \n    mstime_t min = hashTypeGetMinExpire(key->kv, 1);\n    return (min == EB_EXPIRE_TIME_INVALID) ? REDISMODULE_NO_EXPIRE : min;\n}\n\n/* --------------------------------------------------------------------------\n * ## Key API for Stream type\n *\n * For an introduction to streams, see https://redis.io/docs/latest/develop/data-types/streams/.\n *\n * The type RedisModuleStreamID, which is used in stream functions, is a struct\n * with two 64-bit fields and is defined as\n *\n *     typedef struct RedisModuleStreamID {\n *         uint64_t ms;\n *         uint64_t seq;\n *     } RedisModuleStreamID;\n *\n * See also RM_ValueLength(), which returns the length of a stream, and the\n * conversion functions RM_StringToStreamID() and RM_CreateStringFromStreamID().\n * -------------------------------------------------------------------------- */\n\n/* Adds an entry to a stream. Like XADD without trimming.\n *\n * - `key`: The key where the stream is (or will be) stored\n * - `flags`: A bit field of\n *   - `REDISMODULE_STREAM_ADD_AUTOID`: Assign a stream ID automatically, like\n *     `*` in the XADD command.\n * - `id`: If the `AUTOID` flag is set, this is where the assigned ID is\n *   returned. Can be NULL if `AUTOID` is set, if you don't care to receive the\n *   ID. 
If `AUTOID` is not set, this is the requested ID.\n * - `argv`: A pointer to an array of size `numfields * 2` containing the\n *   fields and values.\n * - `numfields`: The number of field-value pairs in `argv`.\n *\n * Returns REDISMODULE_OK if an entry has been added. On failure,\n * REDISMODULE_ERR is returned and `errno` is set as follows:\n *\n * - EINVAL if called with invalid arguments\n * - ENOTSUP if the key refers to a value of a type other than stream\n * - EBADF if the key was not opened for writing\n * - EDOM if the given ID was 0-0 or not greater than all other IDs in the\n *   stream (only if the AUTOID flag is unset)\n * - EFBIG if the stream has reached the last possible ID\n * - ERANGE if the elements are too large to be stored.\n */\nint RM_StreamAdd(RedisModuleKey *key, int flags, RedisModuleStreamID *id, RedisModuleString **argv, long numfields) {\n    /* Validate args */\n    if (!key || (numfields != 0 && !argv) || /* invalid key or argv */\n        (flags & ~(REDISMODULE_STREAM_ADD_AUTOID)) || /* invalid flags */\n        (!(flags & REDISMODULE_STREAM_ADD_AUTOID) && !id)) { /* id required */\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (key->kv && key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP; /* wrong type */\n        return REDISMODULE_ERR;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF; /* key not open for writing */\n        return REDISMODULE_ERR;\n    } else if (!(flags & REDISMODULE_STREAM_ADD_AUTOID) &&\n               id->ms == 0 && id->seq == 0) {\n        errno = EDOM; /* ID out of range */\n        return REDISMODULE_ERR;\n    }\n\n    /* Create key if necessary */\n    int created = 0;\n    if (key->kv == NULL) {\n        moduleCreateEmptyKey(key, REDISMODULE_KEYTYPE_STREAM);\n        created = 1;\n    }\n\n    stream *s = key->kv->ptr;\n    if (s->last_id.ms == UINT64_MAX && s->last_id.seq == UINT64_MAX) {\n        /* The stream has reached the last possible ID 
*/\n        errno = EFBIG;\n        return REDISMODULE_ERR;\n    }\n\n    streamID added_id;\n    streamID use_id;\n    streamID *use_id_ptr = NULL;\n    if (!(flags & REDISMODULE_STREAM_ADD_AUTOID)) {\n        use_id.ms = id->ms;\n        use_id.seq = id->seq;\n        use_id_ptr = &use_id;\n    }\n\n    size_t oldsize = server.memory_tracking_enabled ? kvobjAllocSize(key->kv) : 0;\n    if (streamAppendItem(s,argv,numfields,&added_id,use_id_ptr,1) == C_ERR) {\n        /* Either the ID is not greater than all existing IDs in the stream, or\n         * the elements are too large to be stored. Either way, errno is already\n         * set by streamAppendItem. */\n        if (created) moduleDelKeyIfEmpty(key);\n        return REDISMODULE_ERR;\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    /* Postponed signalKeyAsReady(). Done implicitly by moduleCreateEmptyKey()\n     * so not needed if the stream has just been created. */\n    if (!created) key->u.stream.signalready = 1;\n\n    if (id != NULL) {\n        id->ms = added_id.ms;\n        id->seq = added_id.seq;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Deletes an entry from a stream.\n *\n * - `key`: A key opened for writing, with no stream iterator started.\n * - `id`: The stream ID of the entry to delete.\n *\n * Returns REDISMODULE_OK on success. 
On failure, REDISMODULE_ERR is returned\n * and `errno` is set as follows:\n *\n * - EINVAL if called with invalid arguments\n * - ENOTSUP if the key refers to a value of a type other than stream or if the\n *   key is empty\n * - EBADF if the key was not opened for writing or if a stream iterator is\n *   associated with the key\n * - ENOENT if no entry with the given stream ID exists\n *\n * See also RM_StreamIteratorDelete() for deleting the current entry while\n * iterating using a stream iterator.\n */\nint RM_StreamDelete(RedisModuleKey *key, RedisModuleStreamID *id) {\n    if (!key || !id) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP; /* wrong type */\n        return REDISMODULE_ERR;\n    } else if (!(key->mode & REDISMODULE_WRITE) ||\n               key->iter != NULL) {\n        errno = EBADF; /* key not opened for writing or iterator started */\n        return REDISMODULE_ERR;\n    }\n    stream *s = key->kv->ptr;\n    size_t oldsize = server.memory_tracking_enabled ? kvobjAllocSize(key->kv) : 0;\n    streamID streamid = {id->ms, id->seq};\n    if (streamDeleteItem(s, &streamid)) {\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n        return REDISMODULE_OK;\n    } else {\n        errno = ENOENT; /* no entry with this id */\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Sets up a stream iterator.\n *\n * - `key`: The stream key opened for reading using RedisModule_OpenKey().\n * - `flags`:\n *   - `REDISMODULE_STREAM_ITERATOR_EXCLUSIVE`: Don't include `start` and `end`\n *     in the iterated range.\n *   - `REDISMODULE_STREAM_ITERATOR_REVERSE`: Iterate in reverse order, starting\n *     from the `end` of the range.\n * - `start`: The lower bound of the range. 
Use NULL for the beginning of the\n *   stream.\n * - `end`: The upper bound of the range. Use NULL for the end of the stream.\n *\n * Returns REDISMODULE_OK on success. On failure, REDISMODULE_ERR is returned\n * and `errno` is set as follows:\n *\n * - EINVAL if called with invalid arguments\n * - ENOTSUP if the key refers to a value of a type other than stream or if the\n *   key is empty\n * - EBADF if the key was not opened for writing or if a stream iterator is\n *   already associated with the key\n * - EDOM if `start` or `end` is outside the valid range\n *\n * The stream IDs are retrieved using RedisModule_StreamIteratorNextID() and\n * for each stream ID, the fields and values are retrieved using\n * RedisModule_StreamIteratorNextField(). The iterator is freed by calling\n * RedisModule_StreamIteratorStop().\n *\n * Example (error handling omitted):\n *\n *     RedisModule_StreamIteratorStart(key, 0, startid_ptr, endid_ptr);\n *     RedisModuleStreamID id;\n *     long numfields;\n *     while (RedisModule_StreamIteratorNextID(key, &id, &numfields) ==\n *            REDISMODULE_OK) {\n *         RedisModuleString *field, *value;\n *         while (RedisModule_StreamIteratorNextField(key, &field, &value) ==\n *                REDISMODULE_OK) {\n *             //\n *             // ... 
Do stuff ...\n *             //\n *             RedisModule_FreeString(ctx, field);\n *             RedisModule_FreeString(ctx, value);\n *         }\n *     }\n *     RedisModule_StreamIteratorStop(key);\n */\nint RM_StreamIteratorStart(RedisModuleKey *key, int flags, RedisModuleStreamID *start, RedisModuleStreamID *end) {\n    /* check args */\n    if (!key ||\n        (flags & ~(REDISMODULE_STREAM_ITERATOR_EXCLUSIVE |\n                   REDISMODULE_STREAM_ITERATOR_REVERSE))) {\n        errno = EINVAL; /* key missing or invalid flags */\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR; /* not a stream */\n    } else if (key->iter) {\n        errno = EBADF; /* iterator already started */\n        return REDISMODULE_ERR;\n    }\n\n    /* define range for streamIteratorStart() */\n    streamID lower, upper;\n    if (start) lower = (streamID){start->ms, start->seq};\n    if (end)   upper = (streamID){end->ms,   end->seq};\n    if (flags & REDISMODULE_STREAM_ITERATOR_EXCLUSIVE) {\n        if ((start && streamIncrID(&lower) != C_OK) ||\n            (end   && streamDecrID(&upper) != C_OK)) {\n            errno = EDOM; /* end is 0-0 or start is MAX-MAX? */\n            return REDISMODULE_ERR;\n        }\n    }\n\n    /* create iterator */\n    stream *s = key->kv->ptr;\n    int rev = flags & REDISMODULE_STREAM_ITERATOR_REVERSE;\n    streamIterator *si = zmalloc(sizeof(*si));\n    streamIteratorStart(si, s, start ? &lower : NULL, end ? &upper : NULL, rev);\n    key->iter = si;\n    key->u.stream.currentid.ms = 0; /* for RM_StreamIteratorDelete() */\n    key->u.stream.currentid.seq = 0;\n    key->u.stream.numfieldsleft = 0; /* for RM_StreamIteratorNextField() */\n    return REDISMODULE_OK;\n}\n\n/* Stops a stream iterator created using RedisModule_StreamIteratorStart() and\n * reclaims its memory.\n *\n * Returns REDISMODULE_OK on success. 
On failure, REDISMODULE_ERR is returned\n * and `errno` is set as follows:\n *\n * - EINVAL if called with a NULL key\n * - ENOTSUP if the key refers to a value of a type other than stream or if the\n *   key is empty\n * - EBADF if the key was not opened for writing or if no stream iterator is\n *   associated with the key\n */\nint RM_StreamIteratorStop(RedisModuleKey *key) {\n    if (!key) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    } else if (!key->iter) {\n        errno = EBADF;\n        return REDISMODULE_ERR;\n    }\n    streamIteratorStop(key->iter);\n    zfree(key->iter);\n    key->iter = NULL;\n    return REDISMODULE_OK;\n}\n\n/* Finds the next stream entry and returns its stream ID and the number of\n * fields.\n *\n * - `key`: Key for which a stream iterator has been started using\n *   RedisModule_StreamIteratorStart().\n * - `id`: The stream ID returned. NULL if you don't care.\n * - `numfields`: The number of fields in the found stream entry. 
NULL if you\n *   don't care.\n *\n * Returns REDISMODULE_OK and sets `*id` and `*numfields` if an entry was found.\n * On failure, REDISMODULE_ERR is returned and `errno` is set as follows:\n *\n * - EINVAL if called with a NULL key\n * - ENOTSUP if the key refers to a value of a type other than stream or if the\n *   key is empty\n * - EBADF if no stream iterator is associated with the key\n * - ENOENT if there are no more entries in the range of the iterator\n *\n * In practice, if RM_StreamIteratorNextID() is called after a successful call\n * to RM_StreamIteratorStart() and with the same key, it is safe to assume that\n * a REDISMODULE_ERR return value means that there are no more entries.\n *\n * Use RedisModule_StreamIteratorNextField() to retrieve the fields and values.\n * See the example at RedisModule_StreamIteratorStart().\n */\nint RM_StreamIteratorNextID(RedisModuleKey *key, RedisModuleStreamID *id, long *numfields) {\n    if (!key) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    } else if (!key->iter) {\n        errno = EBADF;\n        return REDISMODULE_ERR;\n    }\n    streamIterator *si = key->iter;\n    int64_t *num_ptr = &key->u.stream.numfieldsleft;\n    streamID *streamid_ptr = &key->u.stream.currentid;\n    if (streamIteratorGetID(si, streamid_ptr, num_ptr)) {\n        if (id) {\n            id->ms = streamid_ptr->ms;\n            id->seq = streamid_ptr->seq;\n        }\n        if (numfields) *numfields = *num_ptr;\n        return REDISMODULE_OK;\n    } else {\n        /* No entry found. 
*/\n        key->u.stream.currentid.ms = 0; /* for RM_StreamIteratorDelete() */\n        key->u.stream.currentid.seq = 0;\n        key->u.stream.numfieldsleft = 0; /* for RM_StreamIteratorNextField() */\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n}\n\n/* Retrieves the next field of the current stream ID and its corresponding value\n * in a stream iteration. This function should be called repeatedly after calling\n * RedisModule_StreamIteratorNextID() to fetch each field-value pair.\n *\n * - `key`: Key where a stream iterator has been started.\n * - `field_ptr`: This is where the field is returned.\n * - `value_ptr`: This is where the value is returned.\n *\n * Returns REDISMODULE_OK and points `*field_ptr` and `*value_ptr` to freshly\n * allocated RedisModuleString objects. The string objects are freed\n * automatically when the callback finishes if automatic memory is enabled. On\n * failure, REDISMODULE_ERR is returned and `errno` is set as follows:\n *\n * - EINVAL if called with a NULL key\n * - ENOTSUP if the key refers to a value of a type other than stream or if the\n *   key is empty\n * - EBADF if no stream iterator is associated with the key\n * - ENOENT if there are no more fields in the current stream entry\n *\n * In practice, if RM_StreamIteratorNextField() is called after a successful\n * call to RM_StreamIteratorNextID() and with the same key, it is safe to assume\n * that a REDISMODULE_ERR return value means that there are no more fields.\n *\n * See the example at RedisModule_StreamIteratorStart().\n */\nint RM_StreamIteratorNextField(RedisModuleKey *key, RedisModuleString **field_ptr, RedisModuleString **value_ptr) {\n    if (!key) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    } else if (!key->iter) {\n        errno = EBADF;\n        return REDISMODULE_ERR;\n    } else if 
(key->u.stream.numfieldsleft <= 0) {\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n    streamIterator *si = key->iter;\n    unsigned char *field, *value;\n    int64_t field_len, value_len;\n    streamIteratorGetField(si, &field, &value, &field_len, &value_len);\n    if (field_ptr) {\n        *field_ptr = createRawStringObject((char *)field, field_len);\n        autoMemoryAdd(key->ctx, REDISMODULE_AM_STRING, *field_ptr);\n    }\n    if (value_ptr) {\n        *value_ptr = createRawStringObject((char *)value, value_len);\n        autoMemoryAdd(key->ctx, REDISMODULE_AM_STRING, *value_ptr);\n    }\n    key->u.stream.numfieldsleft--;\n    return REDISMODULE_OK;\n}\n\n/* Deletes the current stream entry while iterating.\n *\n * This function can be called after RM_StreamIteratorNextID() or after any\n * calls to RM_StreamIteratorNextField().\n *\n * Returns REDISMODULE_OK on success. On failure, REDISMODULE_ERR is returned\n * and `errno` is set as follows:\n *\n * - EINVAL if key is NULL\n * - ENOTSUP if the key is empty or of a type other than stream\n * - EBADF if the key is not opened for writing or if no iterator has been started\n * - ENOENT if the iterator has no current stream entry\n */\nint RM_StreamIteratorDelete(RedisModuleKey *key) {\n    if (!key) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    } else if (!(key->mode & REDISMODULE_WRITE) || !key->iter) {\n        errno = EBADF;\n        return REDISMODULE_ERR;\n    } else if (key->u.stream.currentid.ms == 0 &&\n               key->u.stream.currentid.seq == 0) {\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n    streamIterator *si = key->iter;\n    streamIteratorRemoveEntry(si, &key->u.stream.currentid);\n    key->u.stream.currentid.ms = 0; /* Make sure repeated Delete() fails */\n    key->u.stream.currentid.seq = 0;\n    
key->u.stream.numfieldsleft = 0; /* Make sure NextField() fails */\n    return REDISMODULE_OK;\n}\n\n/* Trim a stream by length, similar to XTRIM with MAXLEN.\n *\n * - `key`: Key opened for writing.\n * - `flags`: A bitfield of\n *   - `REDISMODULE_STREAM_TRIM_APPROX`: Trim less if it improves performance,\n *     like XTRIM with `~`.\n * - `length`: The number of stream entries to keep after trimming.\n *\n * Returns the number of entries deleted. On failure, a negative value is\n * returned and `errno` is set as follows:\n *\n * - EINVAL if called with invalid arguments\n * - ENOTSUP if the key is empty or of a type other than stream\n * - EBADF if the key is not opened for writing\n */\nlong long RM_StreamTrimByLength(RedisModuleKey *key, int flags, long long length) {\n    if (!key || (flags & ~(REDISMODULE_STREAM_TRIM_APPROX)) || length < 0) {\n        errno = EINVAL;\n        return -1;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return -1;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF;\n        return -1;\n    }\n    int approx = flags & REDISMODULE_STREAM_TRIM_APPROX ? 1 : 0;\n    stream *s = key->kv->ptr;\n    size_t oldsize = server.memory_tracking_enabled ? kvobjAllocSize(key->kv) : 0;\n    long long retval = streamTrimByLength(s, length, approx);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    return retval;\n}\n\n/* Trim a stream by ID, similar to XTRIM with MINID.\n *\n * - `key`: Key opened for writing.\n * - `flags`: A bitfield of\n *   - `REDISMODULE_STREAM_TRIM_APPROX`: Trim less if it improves performance,\n *     like XTRIM with `~`.\n * - `id`: The smallest stream ID to keep after trimming.\n *\n * Returns the number of entries deleted. 
On failure, a negative value is\n * returned and `errno` is set as follows:\n *\n * - EINVAL if called with invalid arguments\n * - ENOTSUP if the key is empty or of a type other than stream\n * - EBADF if the key is not opened for writing\n */\nlong long RM_StreamTrimByID(RedisModuleKey *key, int flags, RedisModuleStreamID *id) {\n    if (!key || (flags & ~(REDISMODULE_STREAM_TRIM_APPROX)) || !id) {\n        errno = EINVAL;\n        return -1;\n    } else if (!key->kv || key->kv->type != OBJ_STREAM) {\n        errno = ENOTSUP;\n        return -1;\n    } else if (!(key->mode & REDISMODULE_WRITE)) {\n        errno = EBADF;\n        return -1;\n    }\n    int approx = flags & REDISMODULE_STREAM_TRIM_APPROX ? 1 : 0;\n    streamID minid = (streamID){id->ms, id->seq};\n    stream *s = key->kv->ptr;\n    size_t oldsize = server.memory_tracking_enabled ? kvobjAllocSize(key->kv) : 0;\n    long long retval = streamTrimByID(s, minid, approx);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(key->db, getKeySlot(key->key->ptr), key->kv, oldsize, kvobjAllocSize(key->kv));\n    return retval;\n}\n\n/* --------------------------------------------------------------------------\n * ## Calling Redis commands from modules\n *\n * RM_Call() sends a command to Redis. The remaining functions handle the reply.\n * -------------------------------------------------------------------------- */\n\n\nvoid moduleParseCallReply_Int(RedisModuleCallReply *reply);\nvoid moduleParseCallReply_BulkString(RedisModuleCallReply *reply);\nvoid moduleParseCallReply_SimpleString(RedisModuleCallReply *reply);\nvoid moduleParseCallReply_Array(RedisModuleCallReply *reply);\n\n\n\n\n/* Free a Call reply and all the nested replies it contains if it's an\n * array. */\nvoid RM_FreeCallReply(RedisModuleCallReply *reply) {\n    /* This is a wrapper for the recursive free reply function. 
This is needed\n     * in order to have the first-level function return on nested replies,\n     * but only if called by the module API. */\n\n    RedisModuleCtx *ctx = NULL;\n    if(callReplyType(reply) == REDISMODULE_REPLY_PROMISE) {\n        RedisModuleAsyncRMCallPromise *promise = callReplyGetPrivateData(reply);\n        ctx = promise->ctx;\n        freeRedisModuleAsyncRMCallPromise(promise);\n    } else {\n        ctx = callReplyGetPrivateData(reply);\n    }\n\n    freeCallReply(reply);\n    if (ctx) {\n        autoMemoryFreed(ctx,REDISMODULE_AM_REPLY,reply);\n    }\n}\n\n/* Return the reply type as one of the following:\n *\n * - REDISMODULE_REPLY_UNKNOWN\n * - REDISMODULE_REPLY_STRING\n * - REDISMODULE_REPLY_ERROR\n * - REDISMODULE_REPLY_INTEGER\n * - REDISMODULE_REPLY_ARRAY\n * - REDISMODULE_REPLY_NULL\n * - REDISMODULE_REPLY_MAP\n * - REDISMODULE_REPLY_SET\n * - REDISMODULE_REPLY_BOOL\n * - REDISMODULE_REPLY_DOUBLE\n * - REDISMODULE_REPLY_BIG_NUMBER\n * - REDISMODULE_REPLY_VERBATIM_STRING\n * - REDISMODULE_REPLY_ATTRIBUTE\n * - REDISMODULE_REPLY_PROMISE */\nint RM_CallReplyType(RedisModuleCallReply *reply) {\n    return callReplyType(reply);\n}\n\n/* Return the reply type length, where applicable. */\nsize_t RM_CallReplyLength(RedisModuleCallReply *reply) {\n    return callReplyGetLen(reply);\n}\n\n/* Return the 'idx'-th nested call reply element of an array reply, or NULL\n * if the reply type is wrong or the index is out of range. */\nRedisModuleCallReply *RM_CallReplyArrayElement(RedisModuleCallReply *reply, size_t idx) {\n    return callReplyGetArrayElement(reply, idx);\n}\n\n/* Return the `long long` of an integer reply. */\nlong long RM_CallReplyInteger(RedisModuleCallReply *reply) {\n    return callReplyGetLongLong(reply);\n}\n\n/* Return the double value of a double reply. */\ndouble RM_CallReplyDouble(RedisModuleCallReply *reply) {\n    return callReplyGetDouble(reply);\n}\n\n/* Return the big number value of a big number reply. 
*/\nconst char *RM_CallReplyBigNumber(RedisModuleCallReply *reply, size_t *len) {\n    return callReplyGetBigNumber(reply, len);\n}\n\n/* Return the value of a verbatim string reply.\n * An optional output argument can be given to get the verbatim reply format. */\nconst char *RM_CallReplyVerbatim(RedisModuleCallReply *reply, size_t *len, const char **format) {\n    return callReplyGetVerbatim(reply, len, format);\n}\n\n/* Return the Boolean value of a Boolean reply. */\nint RM_CallReplyBool(RedisModuleCallReply *reply) {\n    return callReplyGetBool(reply);\n}\n\n/* Return the 'idx'-th nested call reply element of a set reply, or NULL\n * if the reply type is wrong or the index is out of range. */\nRedisModuleCallReply *RM_CallReplySetElement(RedisModuleCallReply *reply, size_t idx) {\n    return callReplyGetSetElement(reply, idx);\n}\n\n/* Retrieve the 'idx'-th key and value of a map reply.\n *\n * Returns:\n * - REDISMODULE_OK on success.\n * - REDISMODULE_ERR if idx out of range or if the reply type is wrong.\n *\n * The `key` and `value` arguments are used to return by reference, and may be\n * NULL if not required. */\nint RM_CallReplyMapElement(RedisModuleCallReply *reply, size_t idx, RedisModuleCallReply **key, RedisModuleCallReply **val) {\n    if (callReplyGetMapElement(reply, idx, key, val) == C_OK){\n        return REDISMODULE_OK;\n    }\n    return REDISMODULE_ERR;\n}\n\n/* Return the attribute of the given reply, or NULL if no attribute exists. */\nRedisModuleCallReply *RM_CallReplyAttribute(RedisModuleCallReply *reply) {\n    return callReplyGetAttribute(reply);\n}\n\n/* Retrieve the 'idx'-th key and value of an attribute reply.\n *\n * Returns:\n * - REDISMODULE_OK on success.\n * - REDISMODULE_ERR if idx out of range or if the reply type is wrong.\n *\n * The `key` and `value` arguments are used to return by reference, and may be\n * NULL if not required. 
*/\nint RM_CallReplyAttributeElement(RedisModuleCallReply *reply, size_t idx, RedisModuleCallReply **key, RedisModuleCallReply **val) {\n    if (callReplyGetAttributeElement(reply, idx, key, val) == C_OK){\n        return REDISMODULE_OK;\n    }\n    return REDISMODULE_ERR;\n}\n\n/* Set unblock handler (callback and private data) on the given promise RedisModuleCallReply.\n * The given reply must be of promise type (REDISMODULE_REPLY_PROMISE). */\nvoid RM_CallReplyPromiseSetUnblockHandler(RedisModuleCallReply *reply, RedisModuleOnUnblocked on_unblock, void *private_data) {\n    RedisModuleAsyncRMCallPromise *promise = callReplyGetPrivateData(reply);\n    promise->on_unblocked = on_unblock;\n    promise->private_data = private_data;\n}\n\n/* Abort the execution of a given promise RedisModuleCallReply.\n * Returns REDISMODULE_OK if the abort was successful and REDISMODULE_ERR\n * if it is not possible to abort the execution (the execution has already finished).\n * If the execution was aborted (REDISMODULE_OK was returned), the private_data out parameter\n * is set to the private data that was given to 'RM_CallReplyPromiseSetUnblockHandler',\n * so the caller is able to release the private data.\n *\n * If the execution was aborted successfully, it is guaranteed that the unblock handler will not be called.\n * That said, it is possible for the abort operation to succeed while the underlying operation still\n * continues. This can happen if, for example, a module implements some blocking command and does not\n * respect the disconnect callback. For pure Redis commands this cannot happen. */\nint RM_CallReplyPromiseAbort(RedisModuleCallReply *reply, void **private_data) {\n    RedisModuleAsyncRMCallPromise *promise = callReplyGetPrivateData(reply);\n    if (!promise->c) return REDISMODULE_ERR; /* Promise cannot be aborted, either already aborted or already finished. 
*/\n    if (!(promise->c->flags & CLIENT_BLOCKED)) return REDISMODULE_ERR; /* Client is not blocked anymore, can not abort it. */\n\n    /* Client is still blocked, remove it from any blocking state and release it. */\n    if (private_data) *private_data = promise->private_data;\n    promise->private_data = NULL;\n    promise->on_unblocked = NULL;\n    unblockClient(promise->c, 0);\n    moduleReleaseTempClient(promise->c);\n    return REDISMODULE_OK;\n}\n\n/* Return the pointer and length of a string or error reply. */\nconst char *RM_CallReplyStringPtr(RedisModuleCallReply *reply, size_t *len) {\n    size_t private_len;\n    if (!len) len = &private_len;\n    return callReplyGetString(reply, len);\n}\n\n/* Return a new string object from a call reply of type string, error or\n * integer. Otherwise (wrong reply type) return NULL. */\nRedisModuleString *RM_CreateStringFromCallReply(RedisModuleCallReply *reply) {\n    RedisModuleCtx* ctx = callReplyGetPrivateData(reply);\n    size_t len;\n    const char *str;\n    switch(callReplyType(reply)) {\n        case REDISMODULE_REPLY_STRING:\n        case REDISMODULE_REPLY_ERROR:\n            str = callReplyGetString(reply, &len);\n            return RM_CreateString(ctx, str, len);\n        case REDISMODULE_REPLY_INTEGER: {\n            char buf[64];\n            int len = ll2string(buf,sizeof(buf),callReplyGetLongLong(reply));\n            return RM_CreateString(ctx ,buf,len);\n            }\n        default:\n            return NULL;\n    }\n}\n\n/* Modifies the user that RM_Call will use (e.g. for ACL checks) */\nvoid RM_SetContextUser(RedisModuleCtx *ctx, const RedisModuleUser *user) {\n    ctx->user = user;\n}\n\n/* Returns the user associated with the context via RM_SetContextUser.\n * Returns NULL if no user was set on the context.\n * The returned pointer is borrowed from the context — do NOT free it. 
*/\nconst RedisModuleUser *RM_GetContextUser(RedisModuleCtx *ctx) {\n    return ctx->user;\n}\n\n/* Returns an array of robj pointers by parsing the format specifier \"fmt\" as described for\n * the RM_Call(), RM_Replicate() and other module APIs. Populates *argcp with the number of\n * items (which equals the length of the allocated argv).\n *\n * The integer pointed to by 'flags' is populated with flags according\n * to special modifiers in \"fmt\".\n *\n *     \"!\" -> REDISMODULE_ARGV_REPLICATE\n *     \"A\" -> REDISMODULE_ARGV_NO_AOF\n *     \"R\" -> REDISMODULE_ARGV_NO_REPLICAS\n *     \"3\" -> REDISMODULE_ARGV_RESP_3\n *     \"0\" -> REDISMODULE_ARGV_RESP_AUTO\n *     \"C\" -> REDISMODULE_ARGV_RUN_AS_USER\n *     \"M\" -> REDISMODULE_ARGV_RESPECT_DENY_OOM\n *     \"K\" -> REDISMODULE_ARGV_ALLOW_BLOCK\n *\n * On error (format specifier error) NULL is returned and nothing is\n * allocated. On success the argument vector is returned. */\nrobj **moduleCreateArgvFromUserFormat(const char *cmdname, const char *fmt, int *argcp, int *flags, va_list ap) {\n    int argc = 0, argv_size, j;\n    robj **argv = NULL;\n\n    /* As a first guess to avoid useless reallocations, size argv to\n     * hold one argument for each char specifier in 'fmt'. */\n    argv_size = strlen(fmt)+1; /* +1 because of the command name. */\n    argv = zrealloc(argv,sizeof(robj*)*argv_size);\n\n    /* Build the arguments vector based on the format specifier. */\n    argv[0] = createStringObject(cmdname,strlen(cmdname));\n    argc++;\n\n    /* Parse the format specifier and append the rest of the arguments. 
*/\n    const char *p = fmt;\n    while(*p) {\n        if (*p == 'c') {\n            char *cstr = va_arg(ap,char*);\n            argv[argc++] = createStringObject(cstr,strlen(cstr));\n        } else if (*p == 's') {\n            robj *obj = va_arg(ap,void*);\n            if (obj->refcount == OBJ_STATIC_REFCOUNT)\n                obj = createStringObject(obj->ptr,sdslen(obj->ptr));\n            else\n                incrRefCount(obj);\n            argv[argc++] = obj;\n        } else if (*p == 'b') {\n            char *buf = va_arg(ap,char*);\n            size_t len = va_arg(ap,size_t);\n            argv[argc++] = createStringObject(buf,len);\n        } else if (*p == 'l') {\n            long long ll = va_arg(ap,long long);\n            argv[argc++] = createStringObjectFromLongLongWithSds(ll);\n        } else if (*p == 'v') {\n             /* A vector of strings */\n             robj **v = va_arg(ap, void*);\n             size_t vlen = va_arg(ap, size_t);\n\n             /* We need to grow argv to hold the vector's elements.\n              * We resize by vlen-1 elements, because we already reserved\n              * one element in argv for the vector itself. */\n             argv_size += vlen-1;\n             argv = zrealloc(argv,sizeof(robj*)*argv_size);\n\n             size_t i = 0;\n             for (i = 0; i < vlen; i++) {\n                 incrRefCount(v[i]);\n                 argv[argc++] = v[i];\n             }\n        } else if (*p == '!') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_REPLICATE;\n        } else if (*p == 'A') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_NO_AOF;\n        } else if (*p == 'R') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_NO_REPLICAS;\n        } else if (*p == '3') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_RESP_3;\n        } else if (*p == '0') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_RESP_AUTO;\n        } else if (*p == 'C') {\n            if (flags) (*flags) |= 
REDISMODULE_ARGV_RUN_AS_USER;\n        } else if (*p == 'S') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_SCRIPT_MODE;\n        } else if (*p == 'W') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_NO_WRITES;\n        } else if (*p == 'M') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_RESPECT_DENY_OOM;\n        } else if (*p == 'E') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_CALL_REPLIES_AS_ERRORS;\n        } else if (*p == 'D') {\n            if (flags) (*flags) |= (REDISMODULE_ARGV_DRY_RUN | REDISMODULE_ARGV_CALL_REPLIES_AS_ERRORS);\n        } else if (*p == 'K') {\n            if (flags) (*flags) |= REDISMODULE_ARGV_ALLOW_BLOCK;\n        } else {\n            goto fmterr;\n        }\n        p++;\n    }\n    if (argcp) *argcp = argc;\n    return argv;\n\nfmterr:\n    for (j = 0; j < argc; j++)\n        decrRefCount(argv[j]);\n    zfree(argv);\n    return NULL;\n}\n\n/* Exported API to call any Redis command from modules.\n *\n * * **cmdname**: The Redis command to call.\n * * **fmt**: A format specifier string for the command's arguments. Each\n *   of the arguments should be specified by a valid type specification. The\n *   format specifier can also contain the modifiers `!`, `A`, `3` and `R` which\n *   don't have a corresponding argument.\n *\n *     * `b` -- The argument is a buffer and is immediately followed by another\n *              argument that is the buffer's length.\n *     * `c` -- The argument is a pointer to a plain C string (null-terminated).\n *     * `l` -- The argument is a `long long` integer.\n *     * `s` -- The argument is a RedisModuleString.\n *     * `v` -- The argument(s) is a vector of RedisModuleString.\n *     * `!` -- Sends the Redis command and its arguments to replicas and AOF.\n *     * `A` -- Suppress AOF propagation, send only to replicas (requires `!`).\n *     * `R` -- Suppress replicas propagation, send only to AOF (requires `!`).\n *     * `3` -- Return a RESP3 reply. 
This will change the command reply.\n *              e.g., HGETALL returns a map instead of a flat array.\n *     * `0` -- Return the reply in auto mode, i.e. the reply format will be the\n *              same as the client attached to the given RedisModuleCtx. This is\n *              probably what you want when you pass the reply directly to the client.\n *     * `C` -- Run a command as the user attached to the context.\n *              The user is either attached automatically via the client that directly\n *              issued the command and created the context, or via RM_SetContextUser.\n *              If the context was not directly created by an issued command (such as a\n *              background context) and no user was set on it via RM_SetContextUser,\n *              RM_Call will fail.\n *              Checks if the command can be executed according to ACL rules and causes\n *              the command to run as the determined user, so that any future\n *              user-dependent activity, such as ACL checks within scripts, will proceed\n *              as expected.\n *              Otherwise, the command will run as the Redis unrestricted user.\n *              Upon sending a command from an internal connection, this flag is\n *              ignored and the command will run as the Redis unrestricted user.\n *     * `S` -- Run the command in script mode. This means that it will raise\n *              an error if a command that is not allowed inside a script\n *              (flagged with the `deny-script` flag) is invoked (like SHUTDOWN).\n *              In addition, in script mode, write commands are not allowed if there are\n *              not enough good replicas (as configured with `min-replicas-to-write`)\n *              or when the server is unable to persist to the disk.\n *     * `W` -- Do not allow running any write command (flagged with the `write` flag).\n *     * `M` -- Do not allow `deny-oom` flagged commands when over the memory 
limit.\n *     * `E` -- Return errors as RedisModuleCallReply objects. If there is an error before\n *              invoking the command, the error is returned using the errno mechanism.\n *              This flag additionally allows getting the error as an error CallReply with a\n *              relevant error message.\n *     * 'D' -- A \"Dry Run\" mode. Return before executing the underlying call().\n *              If everything succeeded, it will return NULL, otherwise it will\n *              return a CallReply object denoting the error, as if it was called with\n *              the 'E' code.\n *     * 'K' -- Allow running blocking commands. If enabled and the command gets blocked, a\n *              special REDISMODULE_REPLY_PROMISE will be returned. This reply type\n *              indicates that the command was blocked and the reply will be given asynchronously.\n *              The module can use this reply object to set a handler which will be called when\n *              the command gets unblocked using RedisModule_CallReplyPromiseSetUnblockHandler.\n *              The handler must be set immediately after the command invocation (without releasing\n *              the Redis lock in between). If the handler is not set, the blocking command will\n *              still continue its execution but the reply will be ignored (fire and forget);\n *              notice that this is dangerous in case of a role change, as explained below.\n *              The module can use RedisModule_CallReplyPromiseAbort to abort the command invocation\n *              if it has not yet finished (see the RedisModule_CallReplyPromiseAbort documentation for more\n *              details). It is also the module's responsibility to abort the execution on role change, either by using a\n *              server event (to get notified when the instance becomes a replica) or by relying on the disconnect\n *              callback of the original client. 
Failing to do so can result in a write operation on a replica.\n *              Unlike other call replies, a promise call reply **must** be freed while the Redis GIL is locked.\n *              Notice that on unblocking, the only promise is that the unblock handler will be called.\n *              If the blocking RM_Call caused the module to also block some real client (using RM_BlockClient),\n *              it is the module's responsibility to unblock this client in the unblock handler.\n *              In the unblock handler it is only allowed to perform the following:\n *              * Calling additional Redis commands using RM_Call\n *              * Opening keys using RM_OpenKey\n *              * Replicating data to the replicas or AOF\n *\n *              Specifically, it is not allowed to call any client-related Redis module APIs, such as:\n *              * RM_Reply* APIs\n *              * RM_BlockClient\n *              * RM_GetCurrentUserName\n *\n * * **...**: The actual arguments to the Redis command.\n *\n * On success a RedisModuleCallReply object is returned, otherwise\n * NULL is returned and errno is set to one of the following values:\n *\n * * EBADF: wrong format specifier.\n * * EINVAL: wrong command arity.\n * * ENOENT: command does not exist.\n * * EPERM: operation in a Cluster instance with a key in a non-local slot.\n * * EROFS: operation in a Cluster instance when a write command is sent\n *          in a readonly state.\n * * ENETDOWN: operation in a Cluster instance when the cluster is down.\n * * ENOTSUP: no ACL user for the specified module context.\n * * EACCES: command cannot be executed, according to ACL rules.\n * * ENOSPC: write or deny-oom command is not allowed.\n * * ESPIPE: command not allowed in script mode.\n *\n * Example code fragment:\n *\n *      reply = RedisModule_Call(ctx,\"INCRBY\",\"sc\",argv[1],\"10\");\n *      if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) {\n *        long long myval = 
RedisModule_CallReplyInteger(reply);\n *        // Do something with myval.\n *      }\n *\n * This API is documented here: https://redis.io/docs/latest/develop/reference/modules/\n */\nRedisModuleCallReply *RM_Call(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) {\n    client *c = NULL;\n    robj **argv = NULL;\n    int argc = 0, flags = 0;\n    va_list ap;\n    RedisModuleCallReply *reply = NULL;\n    int replicate = 0; /* Replicate this command? */\n    int error_as_call_replies = 0; /* Return errors as RedisModuleCallReply objects. */\n    uint64_t cmd_flags;\n\n    /* Handle arguments. */\n    va_start(ap, fmt);\n    argv = moduleCreateArgvFromUserFormat(cmdname,fmt,&argc,&flags,ap);\n    replicate = flags & REDISMODULE_ARGV_REPLICATE;\n    error_as_call_replies = flags & REDISMODULE_ARGV_CALL_REPLIES_AS_ERRORS;\n    va_end(ap);\n\n    c = moduleAllocTempClient();\n\n    if (!(flags & REDISMODULE_ARGV_ALLOW_BLOCK)) {\n        /* We do not want to allow blocking; the module does not expect it. */\n        c->flags |= CLIENT_DENY_BLOCKING;\n    }\n    c->db = ctx->client->db;\n    c->argv = argv;\n    /* We have to assign argv_len, which is equal to argc in this case (RM_Call),\n     * because we may be calling a command that uses rewriteClientCommandArgument. */\n    c->argc = c->argv_len = argc;\n    c->resp = 2;\n    if (flags & REDISMODULE_ARGV_RESP_3) {\n        c->resp = 3;\n    } else if (flags & REDISMODULE_ARGV_RESP_AUTO) {\n        /* Auto mode means to take the same protocol as the ctx client. */\n        c->resp = ctx->client->resp;\n    }\n    if (ctx->module) ctx->module->in_call++;\n\n    /* Attach the user of the context or client.\n     * Internal connections always run with the unrestricted user. */\n    user *user = NULL;\n    if ((flags & REDISMODULE_ARGV_RUN_AS_USER) &&\n        !(ctx->client->flags & CLIENT_INTERNAL))\n    {\n        user = ctx->user ? 
ctx->user->user : ctx->client->user;\n        if (!user) {\n            errno = ENOTSUP;\n            if (error_as_call_replies) {\n                sds msg = sdsnew(\"cannot run as user, no user directly attached to context or context's client\");\n                reply = callReplyCreateError(msg, ctx);\n            }\n            goto cleanup;\n        }\n        c->user = user;\n    }\n\n    /* We handle the above format error only when the client is set up so that\n     * we can free it normally. */\n    if (argv == NULL) {\n        /* We do not return a call reply here: this is an error that should only\n         * be caught by the module, indicating a wrong fmt was given. The module should\n         * handle this error and decide how to continue. It is not an error that\n         * should be propagated to the user. */\n        errno = EBADF;\n        goto cleanup;\n    }\n\n    /* Call command filters. */\n    moduleCallCommandFilters(c);\n\n    /* Lookup the command now, after filters had a chance to make modifications\n     * if necessary.\n     */\n    c->cmd = c->lastcmd = c->realcmd = lookupCommand(c->argv,c->argc);\n\n    /* We nullify the command if it is not supposed to be seen by the client,\n     * such that it will be rejected like an unknown command. */\n    if (c->cmd &&\n        (c->cmd->flags & CMD_INTERNAL) &&\n        (flags & REDISMODULE_ARGV_RUN_AS_USER) &&\n        !((ctx->client->flags & CLIENT_INTERNAL) || mustObeyClient(ctx->client)))\n    {\n        c->cmd = c->lastcmd = c->realcmd = NULL;\n    }\n\n    sds err;\n    if (!commandCheckExistence(c, error_as_call_replies? &err : NULL)) {\n        errno = ENOENT;\n        if (error_as_call_replies)\n            reply = callReplyCreateError(err, ctx);\n        goto cleanup;\n    }\n    if (!commandCheckArity(c->cmd, c->argc, error_as_call_replies? 
&err : NULL)) {\n        errno = EINVAL;\n        if (error_as_call_replies)\n            reply = callReplyCreateError(err, ctx);\n        goto cleanup;\n    }\n\n    cmd_flags = getCommandFlags(c);\n\n    if (flags & REDISMODULE_ARGV_SCRIPT_MODE) {\n        /* Basically, in script mode we want to only allow commands that can\n         * be executed in scripts (CMD_NOSCRIPT is not set on the command flags). */\n        if (cmd_flags & CMD_NOSCRIPT) {\n            errno = ESPIPE;\n            if (error_as_call_replies) {\n                sds msg = sdscatfmt(sdsempty(), \"command '%S' is not allowed on script mode\", c->cmd->fullname);\n                reply = callReplyCreateError(msg, ctx);\n            }\n            goto cleanup;\n        }\n    }\n\n    if (flags & REDISMODULE_ARGV_RESPECT_DENY_OOM && server.maxmemory) {\n        if (cmd_flags & CMD_DENYOOM) {\n            int oom_state;\n            if (ctx->flags & REDISMODULE_CTX_THREAD_SAFE) {\n                /* On a background thread we cannot count on server.pre_command_oom_state,\n                 * because it is only set on the main thread; in such a case we will check\n                 * the actual memory usage. 
*/\n                oom_state = (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_ERR);\n            } else {\n                oom_state = server.pre_command_oom_state;\n            }\n            if (oom_state) {\n                errno = ENOSPC;\n                if (error_as_call_replies) {\n                    sds msg = sdsdup(shared.oomerr->ptr);\n                    reply = callReplyCreateError(msg, ctx);\n                }\n                goto cleanup;\n            }\n        }\n    } else {\n        /* If we aren't OOM checking in RM_Call, we want further executions from\n         * this client to also not fail on OOM. */\n        c->flags |= CLIENT_ALLOW_OOM;\n    }\n\n    if (flags & REDISMODULE_ARGV_NO_WRITES) {\n        if (cmd_flags & CMD_WRITE) {\n            errno = ENOSPC;\n            if (error_as_call_replies) {\n                sds msg = sdscatfmt(sdsempty(), \"Write command '%S' was \"\n                                                \"called while write is not allowed.\", c->cmd->fullname);\n                reply = callReplyCreateError(msg, ctx);\n            }\n            goto cleanup;\n        }\n    }\n\n    /* Script mode tests */\n    if (flags & REDISMODULE_ARGV_SCRIPT_MODE) {\n        if (cmd_flags & CMD_WRITE) {\n            /* In script mode, if a command is a write command,\n             * we will not run it if we encounter a disk error\n             * or we do not have enough replicas. */\n\n            if (!checkGoodReplicasStatus()) {\n                errno = ESPIPE;\n                if (error_as_call_replies) {\n                    sds msg = sdsdup(shared.noreplicaserr->ptr);\n                    reply = callReplyCreateError(msg, ctx);\n                }\n                goto cleanup;\n            }\n\n            int deny_write_type = writeCommandsDeniedByDiskError();\n            int obey_client = (server.current_client && mustObeyClient(server.current_client));\n\n            if (deny_write_type != DISK_ERROR_TYPE_NONE && !obey_client) {\n          
      errno = ESPIPE;\n                if (error_as_call_replies) {\n                    sds msg = writeCommandsGetDiskErrorMessage(deny_write_type);\n                    reply = callReplyCreateError(msg, ctx);\n                }\n                goto cleanup;\n            }\n\n            if (server.masterhost && server.repl_slave_ro && !obey_client) {\n                errno = ESPIPE;\n                if (error_as_call_replies) {\n                    sds msg = sdsdup(shared.roslaveerr->ptr);\n                    reply = callReplyCreateError(msg, ctx);\n                }\n                goto cleanup;\n            }\n        }\n\n        if (server.masterhost && server.repl_state != REPL_STATE_CONNECTED &&\n            server.repl_serve_stale_data == 0 && !(cmd_flags & CMD_STALE)) {\n            errno = ESPIPE;\n            if (error_as_call_replies) {\n                sds msg = sdsdup(shared.masterdownerr->ptr);\n                reply = callReplyCreateError(msg, ctx);\n            }\n            goto cleanup;\n        }\n    }\n\n    /* Check if the user can run this command according to the current\n     * ACLs.\n     *\n     * If RM_SetContextUser has set a user, that user is used, otherwise\n     * use the attached client's user. If there is no attached client user and no manually\n     * set user, an error will be returned.\n     * An internal command should only succeed for an internal connection, AOF,\n     * and master commands. */\n    if (flags & REDISMODULE_ARGV_RUN_AS_USER) {\n        int acl_errpos;\n        int acl_retval;\n\n        acl_retval = ACLCheckAllUserCommandPerm(user,c->cmd,c->argv,c->argc,NULL,&acl_errpos);\n        if (acl_retval != ACL_OK) {\n            sds object = (acl_retval == ACL_DENIED_CMD) ? 
sdsdup(c->cmd->fullname) : sdsdup(c->argv[acl_errpos]->ptr);\n            addACLLogEntry(ctx->client, acl_retval, ACL_LOG_CTX_MODULE, -1, c->user->name, object);\n            if (error_as_call_replies) {\n                /* verbosity should be same as processCommand() in server.c */\n                sds acl_msg = getAclErrorMessage(acl_retval, c->user, c->cmd, c->argv[acl_errpos]->ptr, 0);\n                sds msg = sdscatfmt(sdsempty(), \"-NOPERM %S\\r\\n\", acl_msg);\n                sdsfree(acl_msg);\n                reply = callReplyCreateError(msg, ctx);\n            }\n            errno = EACCES;\n            goto cleanup;\n        }\n    }\n\n    /* If this is a Redis Cluster node, we need to make sure the module is not\n     * trying to access non-local keys, with the exception of commands\n     * received from our master. */\n    if (server.cluster_enabled && !mustObeyClient(ctx->client)) {\n        int error_code;\n        /* Duplicate relevant flags in the module client. */\n        c->flags &= ~(CLIENT_READONLY|CLIENT_ASKING);\n        c->flags |= ctx->client->flags & (CLIENT_READONLY|CLIENT_ASKING);\n        const uint64_t cmd_flags = getCommandFlags(c);\n        if (getNodeByQuery(c,c->cmd,c->argv,c->argc,NULL,NULL,0,cmd_flags,&error_code) !=\n                           getMyClusterNode())\n        {\n            sds msg = NULL;\n            if (error_code == CLUSTER_REDIR_DOWN_RO_STATE) {\n                if (error_as_call_replies) {\n                    msg = sdscatfmt(sdsempty(), \"Can not execute a write command '%S' while the cluster is down and readonly\", c->cmd->fullname);\n                }\n                errno = EROFS;\n            } else if (error_code == CLUSTER_REDIR_DOWN_STATE) {\n                if (error_as_call_replies) {\n                    msg = sdscatfmt(sdsempty(), \"Can not execute a command '%S' while the cluster is down\", c->cmd->fullname);\n                }\n                errno = ENETDOWN;\n            } else {\n        
        if (error_as_call_replies) {\n                    msg = sdsnew(\"Attempted to access a non local key in a cluster node\");\n                }\n                errno = EPERM;\n            }\n            if (msg) {\n                reply = callReplyCreateError(msg, ctx);\n            }\n            goto cleanup;\n        }\n    }\n\n    if (flags & REDISMODULE_ARGV_DRY_RUN) {\n        goto cleanup;\n    }\n\n    /* We need to use a global replication_allowed flag in order to prevent\n     * replication of nested RM_Calls. Example:\n     * 1. module1.foo does RM_Call of module2.bar without replication (i.e. no '!')\n     * 2. module2.bar internally calls RM_Call of INCR with '!'\n     * 3. at the end of module1.foo we call RM_ReplicateVerbatim\n     * We want the replica/AOF to see only module1.foo and not the INCR from module2.bar */\n    int prev_replication_allowed = server.replication_allowed;\n    server.replication_allowed = replicate && server.replication_allowed;\n\n    /* Run the command */\n    int call_flags = CMD_CALL_FROM_MODULE;\n    if (replicate) {\n        if (!(flags & REDISMODULE_ARGV_NO_AOF))\n            call_flags |= CMD_CALL_PROPAGATE_AOF;\n        if (!(flags & REDISMODULE_ARGV_NO_REPLICAS))\n            call_flags |= CMD_CALL_PROPAGATE_REPL;\n    }\n    call(c,call_flags);\n    server.replication_allowed = prev_replication_allowed;\n\n    if (c->flags & CLIENT_BLOCKED) {\n        serverAssert(flags & REDISMODULE_ARGV_ALLOW_BLOCK);\n        serverAssert(ctx->module);\n        RedisModuleAsyncRMCallPromise *promise = zmalloc(sizeof(RedisModuleAsyncRMCallPromise));\n        *promise = (RedisModuleAsyncRMCallPromise) {\n                /* We start with ref_count value of 2 because this object is held\n                 * by the promise CallReply and the fake client that was used to execute the command. 
*/\n                .ref_count = 2,\n                .module = ctx->module,\n                .on_unblocked = NULL,\n                .private_data = NULL,\n                .c = c,\n                .ctx = (ctx->flags & REDISMODULE_CTX_AUTO_MEMORY) ? ctx : NULL,\n        };\n        reply = callReplyCreatePromise(promise);\n        c->bstate.async_rm_call_handle = promise;\n        if (!(call_flags & CMD_CALL_PROPAGATE_AOF)) {\n            /* No need for AOF propagation, set the relevant flags of the client */\n            c->flags |= CLIENT_MODULE_PREVENT_AOF_PROP;\n        }\n        if (!(call_flags & CMD_CALL_PROPAGATE_REPL)) {\n            /* No need for replication propagation, set the relevant flags of the client */\n            c->flags |= CLIENT_MODULE_PREVENT_REPL_PROP;\n        }\n        c = NULL; /* Make sure not to free the client */\n    } else {\n        reply = moduleParseReply(c, (ctx->flags & REDISMODULE_CTX_AUTO_MEMORY) ? ctx : NULL);\n    }\n\ncleanup:\n    if (reply) autoMemoryAdd(ctx,REDISMODULE_AM_REPLY,reply);\n    if (ctx->module) ctx->module->in_call--;\n    if (c) moduleReleaseTempClient(c);\n    return reply;\n}\n\n/* Return a pointer, and a length, to the protocol returned by the command\n * that returned the reply object. */\nconst char *RM_CallReplyProto(RedisModuleCallReply *reply, size_t *len) {\n    return callReplyGetProto(reply, len);\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules data types\n *\n * When String DMA or using existing data structures is not enough, it is\n * possible to create new data types from scratch and export them to\n * Redis. The module must provide a set of callbacks for handling the\n * new values exported (for example in order to provide RDB saving/loading,\n * AOF rewrite, and so forth). 
In this section we define this API.\n * -------------------------------------------------------------------------- */\n\n/* Turn a 9 character name in the specified charset and a 10 bit encver into\n * a single 64 bit unsigned integer that represents this exact module name\n * and version. This final number is called a \"type ID\" and is used when\n * writing module exported values to RDB files, in order to re-associate the\n * value with the right module to load them during RDB loading.\n *\n * If the string is not of the right length or the charset is wrong, or\n * if encver is outside the unsigned 10 bit integer range, 0 is returned,\n * otherwise the function returns the right type ID.\n *\n * The resulting 64 bit integer is composed as follows:\n *\n *     (high order bits) 6|6|6|6|6|6|6|6|6|10 (low order bits)\n *\n * The first 6 bits value is the first character, name[0], while the last\n * 6 bits value, immediately before the 10 bits integer, is name[8].\n * The last 10 bits are the encoding version.\n *\n * Note that a name and encver combo of \"AAAAAAAAA\" and 0 will produce\n * zero as the return value, which is the same value we use to signal errors;\n * thus this combination is invalid, and also useless, since type names should\n * be varied to avoid collisions. */\n\nconst char *ModuleTypeNameCharSet =\n             \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n             \"abcdefghijklmnopqrstuvwxyz\"\n             \"0123456789-_\";\n\nuint64_t moduleTypeEncodeId(const char *name, int encver) {\n    /* We use 64 symbols so that we can map each character into 6 bits\n     * of the final output. 
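To make the bit layout described above concrete, here is a minimal standalone sketch of the packing (`encode_id` is a local re-derivation for illustration, not the server's function):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Same 64-symbol charset as ModuleTypeNameCharSet: each character maps
// to its index in this string, i.e. a 6 bit value.
static const char *charset =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789-_";

// Pack nine 6-bit symbols plus a 10-bit encver, high-order first,
// returning 0 on any invalid input (the error value).
static uint64_t encode_id(const char *name, int encver) {
    if (strlen(name) != 9 || encver < 0 || encver > 1023) return 0;
    uint64_t id = 0;
    for (int j = 0; j < 9; j++) {
        const char *p = strchr(charset, name[j]);
        if (!p) return 0;          // character outside the charset
        id = (id << 6) | (uint64_t)(p - charset);
    }
    return (id << 10) | (uint64_t)encver;
}
```

Note how "AAAAAAAAA" with encver 0 packs to 0, the error value, matching the caveat above.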
*/\n    const char *cset = ModuleTypeNameCharSet;\n    if (strlen(name) != 9) return 0;\n    if (encver < 0 || encver > 1023) return 0;\n\n    uint64_t id = 0;\n    for (int j = 0; j < 9; j++) {\n        char *p = strchr(cset,name[j]);\n        if (!p) return 0;\n        unsigned long pos = p-cset;\n        id = (id << 6) | pos;\n    }\n    id = (id << 10) | encver;\n    return id;\n}\n\n/* Search, in the list of exported data types of all the modules registered,\n * a type with the same name as the one given. Returns the moduleType\n * structure pointer if such a module is found, or NULL otherwise. */\nmoduleType *moduleTypeLookupModuleByNameInternal(const char *name, int ignore_case) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        listIter li;\n        listNode *ln;\n\n        listRewind(module->types,&li);\n        while((ln = listNext(&li))) {\n            moduleType *mt = ln->value;\n            if ((!ignore_case && memcmp(name,mt->entity.name,sizeof(mt->entity.name)) == 0)\n                || (ignore_case && !strcasecmp(name, mt->entity.name)))\n            {\n                dictResetIterator(&di);\n                return mt;\n            }\n        }\n    }\n    dictResetIterator(&di);\n    return NULL;\n}\n/* Search all registered modules by name, and name is case sensitive */\nmoduleType *moduleTypeLookupModuleByName(const char *name) {\n    return moduleTypeLookupModuleByNameInternal(name, 0);\n}\n\n/* Search all registered modules by name, but case insensitive */\nmoduleType *moduleTypeLookupModuleByNameIgnoreCase(const char *name) {\n    return moduleTypeLookupModuleByNameInternal(name, 1);\n}\n\n/* Lookup a module by ID, with caching. This function is used during RDB\n * loading. Modules exporting data types should never be able to unload, so\n * our cache does not need to expire. 
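The cache strategy can be sketched standalone as follows (`slow_lookup` and `cached_lookup` are hypothetical stand-ins for the moduleType scan; the real function additionally avoids caching failed lookups):

```c
#include <assert.h>
#include <stdint.h>

#define CACHE_SIZE 3  // mirrors MODULE_LOOKUP_CACHE_SIZE

static int slow_calls = 0;             // counts cache misses
static int slow_lookup(uint64_t id) {  // hypothetical expensive scan
    slow_calls++;
    return (int)(id * 2);
}

// Fill-once cache: slots are filled in order and never expire, which is
// safe only because the cached objects can never go away (just like
// module types, which cannot be unloaded).
static int cached_lookup(uint64_t id) {
    static struct { uint64_t id; int val; int used; } cache[CACHE_SIZE];
    int j;
    for (j = 0; j < CACHE_SIZE && cache[j].used; j++)
        if (cache[j].id == id) return cache[j].val;
    int val = slow_lookup(id);
    if (j < CACHE_SIZE) {
        cache[j].id = id;
        cache[j].val = val;
        cache[j].used = 1;
    }
    return val;
}
```

A repeated lookup for the same id is served from the cache without touching the slow path again.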
*/\n#define MODULE_LOOKUP_CACHE_SIZE 3\n\nmoduleType *moduleTypeLookupModuleByID(uint64_t id) {\n    static struct {\n        uint64_t id;\n        moduleType *mt;\n    } cache[MODULE_LOOKUP_CACHE_SIZE];\n\n    /* Search in cache to start. */\n    int j;\n    for (j = 0; j < MODULE_LOOKUP_CACHE_SIZE && cache[j].mt != NULL; j++)\n        if (cache[j].id == id) return cache[j].mt;\n\n    /* Slow module by module lookup. */\n    moduleType *mt = NULL;\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL && mt == NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        listIter li;\n        listNode *ln;\n\n        listRewind(module->types,&li);\n        while((ln = listNext(&li))) {\n            moduleType *this_mt = ln->value;\n            /* Compare only the 54 bit module identifier and not the\n             * encoding version. */\n            if (this_mt->entity.id >> 10 == id >> 10) {\n                mt = this_mt;\n                break;\n            }\n        }\n    }\n    dictResetIterator(&di);\n\n    /* Add to cache if possible. */\n    if (mt && j < MODULE_LOOKUP_CACHE_SIZE) {\n        cache[j].id = id;\n        cache[j].mt = mt;\n    }\n    return mt;\n}\n\n/* Turn an (unresolved) module ID into a type name, to show the user an\n * error when RDB files contain module data we can't load.\n * The buffer pointed by 'name' must be 10 bytes at least. The function will\n * fill it with a null terminated module name. */\nvoid moduleTypeNameByID(char *name, uint64_t moduleid) {\n    const char *cset = ModuleTypeNameCharSet;\n\n    name[9] = '\\0';\n    char *p = name+8;\n    moduleid >>= 10;\n    for (int j = 0; j < 9; j++) {\n        *p-- = cset[moduleid & 63];\n        moduleid >>= 6;\n    }\n}\n\n/* Return the name of the module that owns the specified moduleType. 
*/\nconst char *moduleTypeModuleName(moduleType *mt) {\n    if (!mt || !mt->entity.module) return NULL;\n    return mt->entity.module->name;\n}\n\n/* Return the module name from a module command */\nconst char *moduleNameFromCommand(struct redisCommand *cmd) {\n    serverAssert(cmd->proc == RedisModuleCommandDispatcher);\n\n    RedisModuleCommand *cp = cmd->module_cmd;\n    return cp->module->name;\n}\n\n/* Create a copy of a module type value using the copy callback. If this fails\n * or is not supported, produce an error reply and return NULL.\n */\nrobj *moduleTypeDupOrReply(client *c, robj *fromkey, robj *tokey, int todb, robj *value) {\n    moduleValue *mv = value->ptr;\n    moduleType *mt = mv->type;\n    if (!mt->copy && !mt->copy2) {\n        addReplyError(c, \"not supported for this module key\");\n        return NULL;\n    }\n    void *newval = NULL;\n    if (mt->copy2 != NULL) {\n        RedisModuleKeyOptCtx ctx = {fromkey, tokey, c->db->id, todb};\n        newval = mt->copy2(&ctx, mv->value);\n    } else {\n        newval = mt->copy(fromkey, tokey, mv->value);\n    }\n\n    if (!newval) {\n        addReplyError(c, \"module key failed to copy\");\n        return NULL;\n    }\n    return createModuleObject(mt, newval);\n}\n\n/* Register a new data type exported by the module. The parameters are the\n * following. For in-depth documentation check the modules API\n * documentation, especially https://redis.io/docs/latest/develop/reference/modules/modules-native-types/.\n *\n * * **name**: A 9 character data type name that MUST be unique in the Redis\n *   Modules ecosystem. Be creative... and there will be no collisions. Use\n *   the charset A-Z a-z 0-9, plus the two \"-_\" characters. A good\n *   idea is to use, for example `<typename>-<vendor>`. For example\n *   \"tree-AntZ\" may mean \"Tree data structure by @antirez\". 
Using both\n *   lower case and upper case letters helps prevent collisions.\n * * **encver**: Encoding version, that is, the version of the serialization\n *   that a module uses in order to persist data. As long as the \"name\"\n *   matches, the RDB loading will be dispatched to the type callbacks\n *   whatever 'encver' is used, however the module can check whether\n *   the encoding it must load is from an older version of the module.\n *   For example the module \"tree-AntZ\" initially used encver=0. Later,\n *   after an upgrade, it started to serialize data in a different format\n *   and to register the type with encver=1. However this module may\n *   still load old data produced by an older version if the rdb_load\n *   callback is able to check the encver value and act accordingly.\n *   The encver must be a value between 0 and 1023.\n *\n * * **typemethods_ptr** is a pointer to a RedisModuleTypeMethods structure\n *   that should be populated with the methods callbacks and structure\n *   version, like in the following example:\n *\n *         RedisModuleTypeMethods tm = {\n *             .version = REDISMODULE_TYPE_METHOD_VERSION,\n *             .rdb_load = myType_RDBLoadCallBack,\n *             .rdb_save = myType_RDBSaveCallBack,\n *             .aof_rewrite = myType_AOFRewriteCallBack,\n *             .free = myType_FreeCallBack,\n *\n *             // Optional fields\n *             .digest = myType_DigestCallBack,\n *             .mem_usage = myType_MemUsageCallBack,\n *             .aux_load = myType_AuxRDBLoadCallBack,\n *             .aux_save = myType_AuxRDBSaveCallBack,\n *             .free_effort = myType_FreeEffortCallBack,\n *             .unlink = myType_UnlinkCallBack,\n *             .copy = myType_CopyCallback,\n *             .defrag = myType_DefragCallback,\n *\n *             // Enhanced optional fields\n *             .mem_usage2 = myType_MemUsageCallBack2,\n *             .free_effort2 = 
myType_FreeEffortCallBack2,\n *             .unlink2 = myType_UnlinkCallBack2,\n *             .copy2 = myType_CopyCallback2,\n *         }\n *\n * * **rdb_load**: A callback function pointer that loads data from RDB files.\n * * **rdb_save**: A callback function pointer that saves data to RDB files.\n * * **aof_rewrite**: A callback function pointer that rewrites data as commands.\n * * **digest**: A callback function pointer that is used for `DEBUG DIGEST`.\n * * **free**: A callback function pointer that can free a type value.\n * * **aux_save**: A callback function pointer that saves out of keyspace data to RDB files.\n *   'when' argument is either REDISMODULE_AUX_BEFORE_RDB or REDISMODULE_AUX_AFTER_RDB.\n * * **aux_load**: A callback function pointer that loads out of keyspace data from RDB files.\n *   Similar to aux_save, returns REDISMODULE_OK on success, and ERR otherwise.\n * * **free_effort**: A callback function pointer that is used to determine whether the module's\n *   memory needs to be lazily reclaimed. The module should return the complexity involved in\n *   freeing the value, for example: how many pointers will be freed. Note that if it\n *   returns 0, we'll always do an async free.\n * * **unlink**: A callback function pointer that is used to notify the module that the key has\n *   been removed from the DB by redis, and may soon be freed by a background thread. 
Note that\n *   it won't be called on FLUSHALL/FLUSHDB (both sync and async), and the module can use the\n *   RedisModuleEvent_FlushDB to hook into that.\n * * **copy**: A callback function pointer that is used to make a copy of the specified key.\n *   The module is expected to perform a deep copy of the specified value and return it.\n *   In addition, hints about the names of the source and destination keys are provided.\n *   A NULL return value is considered an error and the copy operation fails.\n *   Note: if the target key exists and is being overwritten, the copy callback will be\n *   called first, followed by a free callback to the value that is being replaced.\n *\n * * **defrag**: A callback function pointer that is used to request the module to defrag\n *   a key. The module should then iterate pointers and call the relevant RM_Defrag*()\n *   functions to defragment pointers or complex types. The module should continue\n *   iterating as long as RM_DefragShouldStop() returns a zero value, and return a\n *   zero value if finished or a non-zero value if more work is left to be done. If more work\n *   needs to be done, RM_DefragCursorSet() and RM_DefragCursorGet() can be used to track\n *   this work across different calls.\n *   Normally, the defrag mechanism invokes the callback without a time limit, so\n *   RM_DefragShouldStop() always returns zero. The \"late defrag\" mechanism which has\n *   a time limit and provides cursor support is used only for keys that are determined\n *   to have significant internal complexity. 
To determine this, the defrag mechanism\n *   uses the free_effort callback and the 'active-defrag-max-scan-fields' config directive.\n *   NOTE: The value is passed as a `void**` and the function is expected to update the\n *   pointer if the top-level value pointer is defragmented and consequently changes.\n *\n * * **mem_usage2**: Similar to `mem_usage`, but provides the `RedisModuleKeyOptCtx` parameter\n *   so that meta information such as key name and db id can be obtained, and\n *   the `sample_size` for size estimation (see MEMORY USAGE command).\n * * **free_effort2**: Similar to `free_effort`, but provides the `RedisModuleKeyOptCtx` parameter\n *   so that meta information such as key name and db id can be obtained.\n * * **unlink2**: Similar to `unlink`, but provides the `RedisModuleKeyOptCtx` parameter\n *   so that meta information such as key name and db id can be obtained.\n * * **copy2**: Similar to `copy`, but provides the `RedisModuleKeyOptCtx` parameter\n *   so that meta information such as key names and db ids can be obtained.\n * * **aux_save2**: Similar to `aux_save`, but with a small semantic change: if the module\n *   saves nothing in this callback then no data about this aux field will be written to the\n *   RDB and it will be possible to load the RDB even if the module is not loaded.\n *\n * Note: the module name \"AAAAAAAAA\" is reserved and produces an error, it\n * happens to be pretty lame as well.\n *\n * If RedisModule_CreateDataType() is called outside of the RedisModule_OnLoad() function,\n * if there is already a module registering a type with the same name,\n * or if the module name or encver is invalid, NULL is returned.\n * Otherwise the new type is registered into Redis, and a reference of\n * type RedisModuleType is returned: the caller of the function should store\n * this reference into a global variable to make future use of it in the\n * modules type API, since a single module may register multiple types.\n * Example code 
fragment:\n *\n *      static RedisModuleType *BalancedTreeType;\n *\n *      int RedisModule_OnLoad(RedisModuleCtx *ctx) {\n *          // some code here ...\n *          BalancedTreeType = RM_CreateDataType(...);\n *      }\n */\nmoduleType *RM_CreateDataType(RedisModuleCtx *ctx, const char *name, int encver, void *typemethods_ptr) {\n    if (!ctx->module->onload)\n        return NULL;\n    uint64_t id = moduleTypeEncodeId(name,encver);\n    if (id == 0) return NULL;\n    if (moduleTypeLookupModuleByName(name) != NULL) return NULL;\n\n    long typemethods_version = ((long*)typemethods_ptr)[0];\n    if (typemethods_version == 0) return NULL;\n\n    struct typemethods {\n        uint64_t version;\n        moduleTypeLoadFunc rdb_load;\n        moduleTypeSaveFunc rdb_save;\n        moduleTypeRewriteFunc aof_rewrite;\n        moduleTypeMemUsageFunc mem_usage;\n        moduleTypeDigestFunc digest;\n        moduleTypeFreeFunc free;\n        struct {\n            moduleTypeAuxLoadFunc aux_load;\n            moduleTypeAuxSaveFunc aux_save;\n            int aux_save_triggers;\n        } v2;\n        struct {\n            moduleTypeFreeEffortFunc free_effort;\n            moduleTypeUnlinkFunc unlink;\n            moduleTypeCopyFunc copy;\n            moduleTypeDefragFunc defrag;\n        } v3;\n        struct {\n            moduleTypeMemUsageFunc2 mem_usage2;\n            moduleTypeFreeEffortFunc2 free_effort2;\n            moduleTypeUnlinkFunc2 unlink2;\n            moduleTypeCopyFunc2 copy2;\n        } v4;\n        struct {\n            moduleTypeAuxSaveFunc aux_save2;\n        } v5;\n    } *tms = (struct typemethods*) typemethods_ptr;\n\n    moduleType *mt = zcalloc(sizeof(*mt));\n    mt->entity.id = id;\n    mt->entity.module = ctx->module;\n    mt->rdb_load = tms->rdb_load;\n    mt->rdb_save = tms->rdb_save;\n    mt->aof_rewrite = tms->aof_rewrite;\n    mt->mem_usage = tms->mem_usage;\n    mt->digest = tms->digest;\n    mt->free = tms->free;\n    if (tms->version >= 2) 
{\n        mt->aux_load = tms->v2.aux_load;\n        mt->aux_save = tms->v2.aux_save;\n        mt->aux_save_triggers = tms->v2.aux_save_triggers;\n    }\n    if (tms->version >= 3) {\n        mt->free_effort = tms->v3.free_effort;\n        mt->unlink = tms->v3.unlink;\n        mt->copy = tms->v3.copy;\n        mt->defrag = tms->v3.defrag;\n    }\n    if (tms->version >= 4) {\n        mt->mem_usage2 = tms->v4.mem_usage2;\n        mt->unlink2 = tms->v4.unlink2;\n        mt->free_effort2 = tms->v4.free_effort2;\n        mt->copy2 = tms->v4.copy2;\n    }\n    if (tms->version >= 5) {\n        mt->aux_save2 = tms->v5.aux_save2;\n    }\n    memcpy(mt->entity.name,name,sizeof(mt->entity.name));\n    listAddNodeTail(ctx->module->types,mt);\n    return mt;\n}\n\n/* If the key is open for writing, set the specified module type object\n * as the value of the key, deleting the old value if any.\n * On success REDISMODULE_OK is returned. If the key is not open for\n * writing or there is an active iterator, REDISMODULE_ERR is returned. */\nint RM_ModuleTypeSetValue(RedisModuleKey *key, moduleType *mt, void *value) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->iter) return REDISMODULE_ERR;\n    RM_DeleteKey(key);\n    robj *o = createModuleObject(mt,value);\n    setKey(key->ctx->client,key->db,key->key, &o,SETKEY_NO_SIGNAL);\n    key->kv = o;\n    return REDISMODULE_OK;\n}\n\n/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on\n * the key, returns the module type pointer of the value stored at key.\n *\n * If the key is NULL, is not associated with a module type, or is empty,\n * then NULL is returned instead. 
*/\nmoduleType *RM_ModuleTypeGetType(RedisModuleKey *key) {\n    if (key == NULL ||\n        key->kv == NULL ||\n        RM_KeyType(key) != REDISMODULE_KEYTYPE_MODULE) return NULL;\n    moduleValue *mv = key->kv->ptr;\n    return mv->type;\n}\n\n/* Assuming RedisModule_KeyType() returned REDISMODULE_KEYTYPE_MODULE on\n * the key, returns the module type low-level value stored at key, as\n * it was set by the user via RedisModule_ModuleTypeSetValue().\n *\n * If the key is NULL, is not associated with a module type, or is empty,\n * then NULL is returned instead. */\nvoid *RM_ModuleTypeGetValue(RedisModuleKey *key) {\n    if (key == NULL ||\n        key->kv == NULL ||\n        RM_KeyType(key) != REDISMODULE_KEYTYPE_MODULE) return NULL;\n    moduleValue *mv = key->kv->ptr;\n    return mv->value;\n}\n\n/* --------------------------------------------------------------------------\n * ## RDB loading and saving functions\n * -------------------------------------------------------------------------- */\n\n/* Called when there is a load error in the context of a module. On some\n * modules this cannot be recovered, but if the module declared capability\n * to handle errors, we'll raise a flag rather than exiting. */\nvoid moduleRDBLoadError(RedisModuleIO *io) {\n    if (io->entity->module->options & REDISMODULE_OPTIONS_HANDLE_IO_ERRORS) {\n        io->error = 1;\n        return;\n    }\n    serverPanic(\n        \"Error loading data from RDB (short read or EOF). \"\n        \"Read performed by module '%s' about type '%s' \"\n        \"after reading '%llu' bytes of a value \"\n        \"for key named: '%s'.\",\n        io->entity->module->name,\n        io->entity->name,\n        (unsigned long long)io->bytes,\n        io->key? (char*)io->key->ptr: \"(null)\");\n}\n\n/* Returns 0 if there's at least one registered data type that did not declare\n * REDISMODULE_OPTIONS_HANDLE_IO_ERRORS, in which case diskless loading should\n * be avoided since it could cause data loss. 
*/\nint moduleAllDatatypesHandleErrors(void) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        if (listLength(module->types) &&\n            !(module->options & REDISMODULE_OPTIONS_HANDLE_IO_ERRORS))\n        {\n            dictResetIterator(&di);\n            return 0;\n        }\n    }\n    dictResetIterator(&di);\n    return 1;\n}\n\n/* Returns 0 if some module did not declare REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD, in which case\n * diskless async loading should be avoided because the module doesn't know there can be traffic during\n * database full resynchronization. */\nint moduleAllModulesHandleReplAsyncLoad(void) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        if (!(module->options & REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD)) {\n            dictResetIterator(&di);\n            return 0;\n        }\n    }\n    dictResetIterator(&di);\n    return 1;\n}\n\n/* Returns true if any previous IO API failed.\n * For `Load*` APIs the REDISMODULE_OPTIONS_HANDLE_IO_ERRORS flag must be set with\n * RedisModule_SetModuleOptions first. */\nint RM_IsIOError(RedisModuleIO *io) {\n    return io->error;\n}\n\nstatic int flushRedisModuleIOBuffer(RedisModuleIO *io) {\n    if (!io->pre_flush_buffer) return 0;\n\n    /* We have data that must be flushed before saving the current data.\n     * Let's flush it. */\n    sds pre_flush_buffer = io->pre_flush_buffer;\n    io->pre_flush_buffer = NULL;\n    ssize_t retval = rdbWriteRaw(io->rio, pre_flush_buffer, sdslen(pre_flush_buffer));\n    sdsfree(pre_flush_buffer);\n    if (retval >= 0) io->bytes += retval;\n    return retval;\n}\n\n/* Save an unsigned 64 bit value into the RDB file. 
This function should only\n * be called in the context of the rdb_save method of modules implementing new\n * data types. */\nvoid RM_SaveUnsigned(RedisModuleIO *io, uint64_t value) {\n    if (io->error) return;\n    if (flushRedisModuleIOBuffer(io) == -1) goto saveerr;\n    /* Save opcode. */\n    int retval = rdbSaveLen(io->rio, RDB_MODULE_OPCODE_UINT);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    /* Save value. */\n    retval = rdbSaveLen(io->rio, value);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    return;\n\nsaveerr:\n    io->error = 1;\n}\n\n/* Load an unsigned 64 bit value from the RDB file. This function should only\n * be called in the context of the `rdb_load` method of modules implementing\n * new data types. */\nuint64_t RM_LoadUnsigned(RedisModuleIO *io) {\n    if (io->error) return 0;\n    uint64_t opcode = rdbLoadLen(io->rio,NULL);\n    if (opcode != RDB_MODULE_OPCODE_UINT) goto loaderr;\n    uint64_t value;\n    int retval = rdbLoadLenByRef(io->rio, NULL, &value);\n    if (retval == -1) goto loaderr;\n    return value;\n\nloaderr:\n    moduleRDBLoadError(io);\n    return 0;\n}\n\n/* Like RedisModule_SaveUnsigned() but for signed 64 bit values. */\nvoid RM_SaveSigned(RedisModuleIO *io, int64_t value) {\n    union {uint64_t u; int64_t i;} conv;\n    conv.i = value;\n    RM_SaveUnsigned(io,conv.u);\n}\n\n/* Like RedisModule_LoadUnsigned() but for signed 64 bit values. */\nint64_t RM_LoadSigned(RedisModuleIO *io) {\n    union {uint64_t u; int64_t i;} conv;\n    conv.u = RM_LoadUnsigned(io);\n    return conv.i;\n}\n\n/* In the context of the rdb_save method of a module type, saves a\n * string into the RDB file taking as input a RedisModuleString.\n *\n * The string can be later loaded with RedisModule_LoadString() or\n * other Load family functions expecting a serialized string inside\n * the RDB file. 
*/\nvoid RM_SaveString(RedisModuleIO *io, RedisModuleString *s) {\n    if (io->error) return;\n    if (flushRedisModuleIOBuffer(io) == -1) goto saveerr;\n    /* Save opcode. */\n    ssize_t retval = rdbSaveLen(io->rio, RDB_MODULE_OPCODE_STRING);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    /* Save value. */\n    retval = rdbSaveStringObject(io->rio, s);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    return;\n\nsaveerr:\n    io->error = 1;\n}\n\n/* Like RedisModule_SaveString() but takes a raw C pointer and length\n * as input. */\nvoid RM_SaveStringBuffer(RedisModuleIO *io, const char *str, size_t len) {\n    if (io->error) return;\n    if (flushRedisModuleIOBuffer(io) == -1) goto saveerr;\n    /* Save opcode. */\n    ssize_t retval = rdbSaveLen(io->rio, RDB_MODULE_OPCODE_STRING);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    /* Save value. */\n    retval = rdbSaveRawString(io->rio, (unsigned char*)str,len);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    return;\n\nsaveerr:\n    io->error = 1;\n}\n\n/* Implements RM_LoadString() and RM_LoadStringBuffer() */\nvoid *moduleLoadString(RedisModuleIO *io, int plain, size_t *lenptr) {\n    if (io->error) return NULL;\n    uint64_t opcode = rdbLoadLen(io->rio,NULL);\n    if (opcode != RDB_MODULE_OPCODE_STRING) goto loaderr;\n    void *s = rdbGenericLoadStringObject(io->rio,\n              plain ? 
RDB_LOAD_PLAIN : RDB_LOAD_NONE, lenptr);\n    if (s == NULL) goto loaderr;\n    return s;\n\nloaderr:\n    moduleRDBLoadError(io);\n    return NULL;\n}\n\n/* In the context of the rdb_load method of a module data type, loads a string\n * from the RDB file, that was previously saved with RedisModule_SaveString()\n * functions family.\n *\n * The returned string is a newly allocated RedisModuleString object, and\n * the user should at some point free it with a call to RedisModule_FreeString().\n *\n * If the data structure does not store strings as RedisModuleString objects,\n * the similar function RedisModule_LoadStringBuffer() could be used instead. */\nRedisModuleString *RM_LoadString(RedisModuleIO *io) {\n    return moduleLoadString(io,0,NULL);\n}\n\n/* Like RedisModule_LoadString() but returns a heap allocated string that\n * was allocated with RedisModule_Alloc(), and can be resized or freed with\n * RedisModule_Realloc() or RedisModule_Free().\n *\n * The size of the string is stored at '*lenptr' if not NULL.\n * The returned string is not automatically NULL terminated, it is loaded\n * exactly as it was stored inside the RDB file. */\nchar *RM_LoadStringBuffer(RedisModuleIO *io, size_t *lenptr) {\n    return moduleLoadString(io,1,lenptr);\n}\n\n/* In the context of the rdb_save method of a module data type, saves a double\n * value to the RDB file. The double can be a valid number, a NaN or infinity.\n * It is possible to load back the value with RedisModule_LoadDouble(). */\nvoid RM_SaveDouble(RedisModuleIO *io, double value) {\n    if (io->error) return;\n    if (flushRedisModuleIOBuffer(io) == -1) goto saveerr;\n    /* Save opcode. */\n    int retval = rdbSaveLen(io->rio, RDB_MODULE_OPCODE_DOUBLE);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    /* Save value. 
*/\n    retval = rdbSaveBinaryDoubleValue(io->rio, value);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    return;\n\nsaveerr:\n    io->error = 1;\n}\n\n/* In the context of the rdb_load method of a module data type, loads back the\n * double value saved by RedisModule_SaveDouble(). */\ndouble RM_LoadDouble(RedisModuleIO *io) {\n    if (io->error) return 0;\n    uint64_t opcode = rdbLoadLen(io->rio,NULL);\n    if (opcode != RDB_MODULE_OPCODE_DOUBLE) goto loaderr;\n    double value;\n    int retval = rdbLoadBinaryDoubleValue(io->rio, &value);\n    if (retval == -1) goto loaderr;\n    return value;\n\nloaderr:\n    moduleRDBLoadError(io);\n    return 0;\n}\n\n/* In the context of the rdb_save method of a module data type, saves a float\n * value to the RDB file. The float can be a valid number, a NaN or infinity.\n * It is possible to load back the value with RedisModule_LoadFloat(). */\nvoid RM_SaveFloat(RedisModuleIO *io, float value) {\n    if (io->error) return;\n    if (flushRedisModuleIOBuffer(io) == -1) goto saveerr;\n    /* Save opcode. */\n    int retval = rdbSaveLen(io->rio, RDB_MODULE_OPCODE_FLOAT);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    /* Save value. */\n    retval = rdbSaveBinaryFloatValue(io->rio, value);\n    if (retval == -1) goto saveerr;\n    io->bytes += retval;\n    return;\n\nsaveerr:\n    io->error = 1;\n}\n\n/* In the context of the rdb_load method of a module data type, loads back the\n * float value saved by RedisModule_SaveFloat(). 
*/\nfloat RM_LoadFloat(RedisModuleIO *io) {\n    if (io->error) return 0;\n    uint64_t opcode = rdbLoadLen(io->rio,NULL);\n    if (opcode != RDB_MODULE_OPCODE_FLOAT) goto loaderr;\n    float value;\n    int retval = rdbLoadBinaryFloatValue(io->rio, &value);\n    if (retval == -1) goto loaderr;\n    return value;\n\nloaderr:\n    moduleRDBLoadError(io);\n    return 0;\n}\n\n/* In the context of the rdb_save method of a module data type, saves a long double\n * value to the RDB file. The long double can be a valid number, a NaN or infinity.\n * It is possible to load back the value with RedisModule_LoadLongDouble(). */\nvoid RM_SaveLongDouble(RedisModuleIO *io, long double value) {\n    if (io->error) return;\n    char buf[MAX_LONG_DOUBLE_CHARS];\n    /* Long double has a different number of bits on different platforms, so we\n     * save it as a string type. */\n    size_t len = ld2string(buf,sizeof(buf),value,LD_STR_HEX);\n    RM_SaveStringBuffer(io,buf,len);\n}\n\n/* In the context of the rdb_load method of a module data type, loads back the\n * long double value saved by RedisModule_SaveLongDouble(). */\nlong double RM_LoadLongDouble(RedisModuleIO *io) {\n    if (io->error) return 0;\n    long double value;\n    size_t len;\n    char* str = RM_LoadStringBuffer(io,&len);\n    if (!str) return 0;\n    string2ld(str,len,&value);\n    RM_Free(str);\n    return value;\n}\n\n/* Iterate over modules, and trigger RDB aux saving for the module types\n * that asked for it. 
*/\nssize_t rdbSaveModulesAux(rio *rdb, int when) {\n    size_t total_written = 0;\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        listIter li;\n        listNode *ln;\n\n        listRewind(module->types,&li);\n        while((ln = listNext(&li))) {\n            moduleType *mt = ln->value;\n            if ((!mt->aux_save && !mt->aux_save2) || !(mt->aux_save_triggers & when))\n                continue;\n            ssize_t ret = rdbSaveSingleModuleAux(rdb, when, mt);\n            if (ret==-1) {\n                dictResetIterator(&di);\n                return -1;\n            }\n            total_written += ret;\n        }\n    }\n\n    dictResetIterator(&di);\n    return total_written;\n}\n\n/* --------------------------------------------------------------------------\n * ## Key digest API (DEBUG DIGEST interface for modules types)\n * -------------------------------------------------------------------------- */\n\n/* Add a new element to the digest. This function can be called multiple times,\n * one element after the other, for all the elements that constitute a given\n * data structure. The function call must eventually be followed by a call to\n * `RedisModule_DigestEndSequence`, once all the elements that are\n * always in a given order have been added. See the Redis Modules data types\n * documentation for more info. However, here is a quick example that uses Redis\n * data types.\n *\n * To add a sequence of unordered elements (for example in the case of a Redis\n * Set), the pattern to use is:\n *\n *     foreach element {\n *         AddElement(element);\n *         EndSequence();\n *     }\n *\n * Because Sets are not ordered, every element added has a position that\n * does not depend on the others. 
However if instead our elements are\n * ordered in pairs, like the field-value pairs of a Hash, then one should\n * use:\n *\n *     foreach key,value {\n *         AddElement(key);\n *         AddElement(value);\n *         EndSequence();\n *     }\n *\n * Because the key and value will always be in the above order, while\n * the individual key-value pairs can appear in any position in a Redis hash.\n *\n * A list of ordered elements would be implemented with:\n *\n *     foreach element {\n *         AddElement(element);\n *     }\n *     EndSequence();\n *\n */\nvoid RM_DigestAddStringBuffer(RedisModuleDigest *md, const char *ele, size_t len) {\n    mixDigest(md->o,ele,len);\n}\n\n/* Like `RedisModule_DigestAddStringBuffer()` but takes a `long long` as input\n * that gets converted into a string before adding it to the digest. */\nvoid RM_DigestAddLongLong(RedisModuleDigest *md, long long ll) {\n    char buf[LONG_STR_SIZE];\n    size_t len = ll2string(buf,sizeof(buf),ll);\n    mixDigest(md->o,buf,len);\n}\n\n/* See the documentation for `RedisModule_DigestAddElement()`. 
*/\nvoid RM_DigestEndSequence(RedisModuleDigest *md) {\n    xorDigest(md->x,md->o,sizeof(md->o));\n    memset(md->o,0,sizeof(md->o));\n}\n\n/* Decode a serialized representation of a module data type 'mt', in a specific encoding version 'encver'\n * from string 'str' and return a newly allocated value, or NULL if decoding failed.\n *\n * This call basically reuses the 'rdb_load' callback which module data types\n * implement in order to allow a module to arbitrarily serialize/de-serialize\n * keys, similar to how the Redis 'DUMP' and 'RESTORE' commands are implemented.\n *\n * Modules should generally use the REDISMODULE_OPTIONS_HANDLE_IO_ERRORS flag and\n * make sure the de-serialization code properly checks and handles IO errors\n * (freeing allocated buffers and returning a NULL).\n *\n * If this is NOT done, Redis will handle corrupted (or just truncated) serialized\n * data by producing an error message and terminating the process.\n */\nvoid *RM_LoadDataTypeFromStringEncver(const RedisModuleString *str, const moduleType *mt, int encver) {\n    rio payload;\n    RedisModuleIO io;\n    void *ret;\n\n    rioInitWithBuffer(&payload, str->ptr);\n    moduleType *mt_non_const = (moduleType *)mt; /*cast const away*/    \n    moduleInitIOContext(&io, &mt_non_const->entity, &payload, NULL, -1);\n\n    /* All RM_Save*() calls always write a version 2 compatible format, so we\n     * need to make sure we read the same.\n     */\n    ret = mt->rdb_load(&io,encver);\n    if (io.ctx) {\n        moduleFreeContext(io.ctx);\n        zfree(io.ctx);\n    }\n    return ret;\n}\n\n/* Similar to RM_LoadDataTypeFromStringEncver, original version of the API, kept\n * for backward compatibility. 
\n */\nvoid *RM_LoadDataTypeFromString(const RedisModuleString *str, const moduleType *mt) {\n    return RM_LoadDataTypeFromStringEncver(str, mt, 0);\n}\n\n/* Encode a module data type 'mt' value 'data' into serialized form, and return it\n * as a newly allocated RedisModuleString.\n *\n * This call basically reuses the 'rdb_save' callback which module data types\n * implement in order to allow a module to arbitrarily serialize/de-serialize\n * keys, similar to how the Redis 'DUMP' and 'RESTORE' commands are implemented.\n */\nRedisModuleString *RM_SaveDataTypeToString(RedisModuleCtx *ctx, void *data, const moduleType *mt) {\n    rio payload;\n    RedisModuleIO io;\n\n    rioInitWithBuffer(&payload,sdsempty());\n    moduleType *mt_non_const = (moduleType *)mt; /*cast const away*/\n    moduleInitIOContext(&io, &mt_non_const->entity, &payload, NULL, -1);\n    mt->rdb_save(&io,data);\n    if (io.ctx) {\n        moduleFreeContext(io.ctx);\n        zfree(io.ctx);\n    }\n    if (io.error) {\n        sdsfree(payload.io.buffer.ptr);\n        return NULL;\n    } else {\n        robj *str = createObject(OBJ_STRING,payload.io.buffer.ptr);\n        if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,str);\n        return str;\n    }\n}\n\n/* Returns the name of the key currently being processed. */\nconst RedisModuleString *RM_GetKeyNameFromDigest(RedisModuleDigest *dig) {\n    return dig->key;\n}\n\n/* Returns the database id of the key currently being processed. */\nint RM_GetDbIdFromDigest(RedisModuleDigest *dig) {\n    return dig->dbid;\n}\n/* --------------------------------------------------------------------------\n * ## AOF API for modules data types\n * -------------------------------------------------------------------------- */\n\n/* Emits a command into the AOF during the AOF rewriting process. This function\n * is only called in the context of the aof_rewrite method of data types exported\n * by a module. 
The command works exactly like RedisModule_Call() in the way\n * the parameters are passed, but it does not return anything as the error\n * handling is performed by Redis itself. */\nvoid RM_EmitAOF(RedisModuleIO *io, const char *cmdname, const char *fmt, ...) {\n    if (io->error) return;\n    struct redisCommand *cmd;\n    robj **argv = NULL;\n    int argc = 0, flags = 0, j;\n    va_list ap;\n\n    cmd = lookupCommandByCString((char*)cmdname);\n    if (!cmd) {\n        serverLog(LL_WARNING,\n            \"Fatal: AOF method for module data type '%s' tried to \"\n            \"emit unknown command '%s'\",\n            io->entity->name, cmdname);\n        io->error = 1;\n        errno = EINVAL;\n        return;\n    }\n\n    /* Emit the arguments into the AOF in Redis protocol format. */\n    va_start(ap, fmt);\n    argv = moduleCreateArgvFromUserFormat(cmdname,fmt,&argc,&flags,ap);\n    va_end(ap);\n    if (argv == NULL) {\n        serverLog(LL_WARNING,\n            \"Fatal: AOF method for module data type '%s' tried to \"\n            \"call RedisModule_EmitAOF() with wrong format specifiers '%s'\",\n            io->entity->name, fmt);\n        io->error = 1;\n        errno = EINVAL;\n        return;\n    }\n\n    /* Bulk count. */\n    if (!io->error && rioWriteBulkCount(io->rio,'*',argc) == 0)\n        io->error = 1;\n\n    /* Arguments. */\n    for (j = 0; j < argc; j++) {\n        if (!io->error && rioWriteBulkObject(io->rio,argv[j]) == 0)\n            io->error = 1;\n        decrRefCount(argv[j]);\n    }\n    zfree(argv);\n    return;\n}\n\n/* --------------------------------------------------------------------------\n * ## IO context handling\n * -------------------------------------------------------------------------- */\n\nRedisModuleCtx *RM_GetContextFromIO(RedisModuleIO *io) {\n    if (io->ctx) return io->ctx; /* Can't have more than one... 
*/\n    io->ctx = zmalloc(sizeof(RedisModuleCtx));\n    moduleCreateContext(io->ctx, io->entity->module, REDISMODULE_CTX_NONE);\n    return io->ctx;\n}\n\n/* Returns the name of the key currently being processed.\n * There is no guarantee that the key name is always available, so this may return NULL.\n */\nconst RedisModuleString *RM_GetKeyNameFromIO(RedisModuleIO *io) {\n    return io->key;\n}\n\n/* Returns a RedisModuleString with the name of the key from RedisModuleKey. */\nconst RedisModuleString *RM_GetKeyNameFromModuleKey(RedisModuleKey *key) {\n    return key ? key->key : NULL;\n}\n\n/* Returns a database id of the key from RedisModuleKey. */\nint RM_GetDbIdFromModuleKey(RedisModuleKey *key) {\n    return key ? key->db->id : -1;\n}\n\n/* Returns the database id of the key currently being processed.\n * There is no guarantee that this info is always available, so this may return -1.\n */\nint RM_GetDbIdFromIO(RedisModuleIO *io) {\n    return io->dbid;\n}\n\n/* --------------------------------------------------------------------------\n * ## Logging\n * -------------------------------------------------------------------------- */\n\n/* This is the low level function implementing both:\n *\n *      RM_Log()\n *      RM_LogIOError()\n *\n */\nvoid moduleLogRaw(RedisModule *module, const char *levelstr, const char *fmt, va_list ap) {\n    char msg[LOG_MAX_LEN];\n    size_t name_len;\n    int level;\n\n    if (!strcasecmp(levelstr,\"debug\")) level = LL_DEBUG;\n    else if (!strcasecmp(levelstr,\"verbose\")) level = LL_VERBOSE;\n    else if (!strcasecmp(levelstr,\"notice\")) level = LL_NOTICE;\n    else if (!strcasecmp(levelstr,\"warning\")) level = LL_WARNING;\n    else level = LL_VERBOSE; /* Default. */\n\n    if (level < server.verbosity) return;\n\n    name_len = snprintf(msg, sizeof(msg),\"<%s> \", module? 
module->name: \"module\");\n    vsnprintf(msg + name_len, sizeof(msg) - name_len, fmt, ap);\n    serverLogRaw(level,msg);\n}\n\n/* Produces a log message to the standard Redis log. The format accepts\n * printf-like specifiers, while level is a string describing the log\n * level to use when emitting the log, and must be one of the following:\n *\n * * \"debug\" (`REDISMODULE_LOGLEVEL_DEBUG`)\n * * \"verbose\" (`REDISMODULE_LOGLEVEL_VERBOSE`)\n * * \"notice\" (`REDISMODULE_LOGLEVEL_NOTICE`)\n * * \"warning\" (`REDISMODULE_LOGLEVEL_WARNING`)\n *\n * If the specified log level is invalid, verbose is used by default.\n * There is a fixed limit to the length of the log line this function is able\n * to emit; this limit is not specified but is guaranteed to be more than\n * a few lines of text.\n *\n * The ctx argument may be NULL if it cannot be provided in the context of the\n * caller, for instance in threads or callbacks, in which case a generic \"module\"\n * will be used instead of the module name.\n */\nvoid RM_Log(RedisModuleCtx *ctx, const char *levelstr, const char *fmt, ...) {\n    va_list ap;\n    va_start(ap, fmt);\n    moduleLogRaw(ctx? ctx->module: NULL,levelstr,fmt,ap);\n    va_end(ap);\n}\n\n/* Log errors from RDB / AOF serialization callbacks.\n *\n * This function should be used when a callback is returning a critical\n * error to the caller, since it cannot load or save the data for some\n * critical reason. */\nvoid RM_LogIOError(RedisModuleIO *io, const char *levelstr, const char *fmt, ...) 
{\n    va_list ap;\n    va_start(ap, fmt);\n    moduleLogRaw(io->entity->module, levelstr, fmt, ap);\n    va_end(ap);\n}\n\n/* Redis-like assert function.\n *\n * The macro `RedisModule_Assert(expression)` is recommended, rather than\n * calling this function directly.\n *\n * A failed assertion will shut down the server and produce logging information\n * that looks identical to information generated by Redis itself.\n */\nvoid RM__Assert(const char *estr, const char *file, int line) {\n    _serverAssert(estr, file, line);\n}\n\n/* Allows adding an event to the latency monitor to be observed by the LATENCY\n * command. The call is skipped if the latency is smaller than the configured\n * latency-monitor-threshold. */\nvoid RM_LatencyAddSample(const char *event, mstime_t latency) {\n    if (latency >= server.latency_monitor_threshold)\n        latencyAddSample(event, latency);\n}\n\n/* --------------------------------------------------------------------------\n * ## Blocking clients from modules\n *\n * For a guide about blocking commands in modules, see\n * https://redis.io/docs/latest/develop/reference/modules/modules-blocking-ops/.\n * -------------------------------------------------------------------------- */\n\n/* Returns 1 if the client is already in the moduleUnblocked list, 0 otherwise. */\nint isModuleClientUnblocked(client *c) {\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n\n    return bc->unblocked == 1;\n}\n\n/* This is called from blocked.c in order to unblock a client: it may be called\n * while the client is in the middle of being blocked because the client is\n * terminated, but it is also called for cleanup when a client is unblocked\n * in a clean way after replying.\n *\n * What we do here is just to set the client to NULL in the redis module\n * blocked client handle. 
This way if the client is terminated while there\n * is a pending threaded operation involving the blocked client, we'll know\n * that the client no longer exists and no reply callback should be called.\n *\n * The structure RedisModuleBlockedClient will always be deallocated when\n * running the list of clients blocked by a module that need to be unblocked. */\nvoid unblockClientFromModule(client *c) {\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n\n    /* Call the disconnection callback if any. Note that\n     * bc->disconnect_callback is set to NULL if the client gets disconnected\n     * by the module itself or because of a timeout, so the callback will NOT\n     * get called if this is not an actual disconnection event. */\n    if (bc->disconnect_callback) {\n        RedisModuleCtx ctx;\n        moduleCreateContext(&ctx, bc->module, REDISMODULE_CTX_NONE);\n        ctx.blocked_privdata = bc->privdata;\n        ctx.client = bc->client;\n        bc->disconnect_callback(&ctx,bc);\n        moduleFreeContext(&ctx);\n    }\n\n    /* If we made it here and the client is still blocked it means that the command\n     * timed out, or the client was killed or disconnected, and disconnect_callback was\n     * not implemented (or it was, but RM_UnblockClient was not called from\n     * within it, as it should).\n     * We must call moduleUnblockClient in order to free privdata and\n     * RedisModuleBlockedClient.\n     *\n     * Note that we only do that for clients that are blocked on keys, for which\n     * the contract is that the module should not call RM_UnblockClient under\n     * normal circumstances.\n     * Clients implementing threads and working with private data should be\n     * aware that calling RM_UnblockClient for every blocked client is their\n     * responsibility, and if they fail to do so memory may leak. 
Ideally they\n     * should implement the disconnect and timeout callbacks and call\n     * RM_UnblockClient, but any other way is also acceptable. */\n    if (bc->blocked_on_keys && !bc->unblocked)\n        moduleUnblockClient(c);\n\n    bc->client = NULL;\n}\n\n/* Block a client in the context of a module: this function implements both\n * RM_BlockClient() and RM_BlockClientOnKeys() depending on whether\n * keys are passed or not.\n *\n * When not blocking for keys, the keys, numkeys, and privdata parameters are\n * not needed. The privdata in that case must be NULL, since it is a later\n * RM_UnblockClient() call that will provide some private data that the reply\n * callback will receive.\n *\n * Instead, when blocking for keys, normally RM_UnblockClient() will not be\n * called (because the client will unblock when the key is modified), so\n * 'privdata' should be provided in that case, so that once the client is\n * unblocked and the reply callback is called, it will receive its associated\n * private data.\n *\n * However, even when blocking on keys, RM_UnblockClient() can still be called,\n * but in that case the privdata argument is disregarded, because we pass the\n * reply callback the privdata that is set here while blocking.\n *\n */\nRedisModuleBlockedClient *moduleBlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback,\n                                            RedisModuleAuthCallback auth_reply_callback,\n                                            RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*),\n                                            long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata,\n                                            int flags) {\n    client *c = ctx->client;\n    int islua = scriptIsRunning();\n    int ismulti = server.in_exec;\n\n    c->bstate.module_blocked_handle = zcalloc(sizeof(RedisModuleBlockedClient));\n    RedisModuleBlockedClient *bc = 
c->bstate.module_blocked_handle;\n    ctx->module->blocked_clients++;\n\n    /* We need to handle the invalid operation of calling modules blocking\n     * commands from Lua or MULTI. We actually create an already aborted\n     * (client set to NULL) blocked client handle, and actually reply with\n     * an error. */\n    bc->client = (islua || ismulti) ? NULL : c;\n    bc->module = ctx->module;\n    bc->reply_callback = reply_callback;\n    bc->auth_reply_cb = auth_reply_callback;\n    bc->timeout_callback = timeout_callback;\n    bc->disconnect_callback = NULL; /* Set by RM_SetDisconnectCallback() */\n    bc->free_privdata = free_privdata;\n    bc->privdata = privdata;\n    bc->reply_client = moduleAllocTempClient();\n    bc->thread_safe_ctx_client = moduleAllocTempClient();\n    if (bc->client)\n        bc->reply_client->resp = bc->client->resp;\n    bc->dbid = c->db->id;\n    bc->blocked_on_keys = keys != NULL;\n    bc->unblocked = 0;\n    bc->background_timer = 0;\n    bc->background_duration = 0;\n\n    mstime_t timeout = 0;\n    if (timeout_ms) {\n        mstime_t now = mstime();\n        if (timeout_ms > LLONG_MAX - now) {\n            c->bstate.module_blocked_handle = NULL;\n            addReplyError(c, \"timeout is out of range\"); /* 'timeout_ms+now' would overflow */\n            return bc;\n        }\n        timeout = timeout_ms + now;\n    }\n\n    if (islua || ismulti) {\n        c->bstate.module_blocked_handle = NULL;\n        addReplyError(c, islua ?\n            \"Blocking module command called from Lua script\" :\n            \"Blocking module command called from transaction\");\n    } else if (ctx->flags & REDISMODULE_CTX_BLOCKED_REPLY) {\n        c->bstate.module_blocked_handle = NULL;\n        addReplyError(c, \"Blocking module command called from a Reply callback context\");\n    } else if (!auth_reply_callback && clientHasModuleAuthInProgress(c)) {\n        c->bstate.module_blocked_handle = NULL;\n        addReplyError(c, \"Clients 
undergoing module based authentication can only be blocked on auth\");\n    } else {\n        if (keys) {\n            blockForKeys(c,BLOCKED_MODULE,keys,numkeys,timeout,flags&REDISMODULE_BLOCK_UNBLOCK_DELETED);\n        } else {\n            c->bstate.timeout = timeout;\n            blockClient(c,BLOCKED_MODULE);\n        }\n    }\n    return bc;\n}\n\n/* This API registers a callback to execute in addition to normal password based authentication.\n * Multiple callbacks can be registered across different modules. When a Module is unloaded, all the\n * auth callbacks registered by it are unregistered.\n * The callbacks are attempted (in the order of most recently registered first) when the AUTH/HELLO\n * (with AUTH field provided) commands are called.\n * The callbacks will be called with a module context along with a username and a password, and are\n * expected to take one of the following actions:\n * (1) Authenticate - Use the RM_AuthenticateClient* API and return REDISMODULE_AUTH_HANDLED.\n * This will immediately end the auth chain as successful and add the OK reply.\n * (2) Deny Authentication - Return REDISMODULE_AUTH_HANDLED without authenticating or blocking the\n * client. Optionally, `err` can be set to a custom error message and `err` will be automatically\n * freed by the server.\n * This will immediately end the auth chain as unsuccessful and add the ERR reply.\n * (3) Block a client on authentication - Use the RM_BlockClientOnAuth API and return\n * REDISMODULE_AUTH_HANDLED. Here, the client will be blocked until the RM_UnblockClient API is used\n * which will trigger the auth reply callback (provided through the RM_BlockClientOnAuth).\n * In this reply callback, the Module should authenticate, deny or skip handling authentication.\n * (4) Skip handling Authentication - Return REDISMODULE_AUTH_NOT_HANDLED without blocking the\n * client. 
This will allow the engine to attempt the next module auth callback.\n * If none of the callbacks authenticate or deny auth, then password based auth is attempted and\n * will authenticate or add failure logs and reply to the clients accordingly.\n *\n * Note: If a client is disconnected while it was in the middle of blocking module auth, that\n * occurrence of the AUTH or HELLO command will not be tracked in the INFO command stats.\n *\n * The following is an example of how non-blocking module based authentication can be used:\n *\n *      int auth_cb(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {\n *          const char *user = RedisModule_StringPtrLen(username, NULL);\n *          const char *pwd = RedisModule_StringPtrLen(password, NULL);\n *          if (!strcmp(user,\"foo\") && !strcmp(pwd,\"valid_password\")) {\n *              RedisModule_AuthenticateClientWithACLUser(ctx, \"foo\", 3, NULL, NULL, NULL);\n *              return REDISMODULE_AUTH_HANDLED;\n *          }\n *\n *          else if (!strcmp(user,\"foo\") && !strcmp(pwd,\"wrong_password\")) {\n *              RedisModuleString *log = RedisModule_CreateString(ctx, \"Module Auth\", 11);\n *              RedisModule_ACLAddLogEntryByUserName(ctx, username, log, REDISMODULE_ACL_LOG_AUTH);\n *              RedisModule_FreeString(ctx, log);\n *              const char *err_msg = \"Auth denied by Misc Module.\";\n *              *err = RedisModule_CreateString(ctx, err_msg, strlen(err_msg));\n *              return REDISMODULE_AUTH_HANDLED;\n *          }\n *          return REDISMODULE_AUTH_NOT_HANDLED;\n *       }\n *\n *      int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n *          if (RedisModule_Init(ctx,\"authmodule\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n *              return REDISMODULE_ERR;\n *          RedisModule_RegisterAuthCallback(ctx, auth_cb);\n *          return REDISMODULE_OK;\n *      
}\n */\nvoid RM_RegisterAuthCallback(RedisModuleCtx *ctx, RedisModuleAuthCallback cb) {\n    RedisModuleAuthCtx *auth_ctx = zmalloc(sizeof(RedisModuleAuthCtx));\n    auth_ctx->module = ctx->module;\n    auth_ctx->auth_cb = cb;\n    listAddNodeHead(moduleAuthCallbacks, auth_ctx);\n}\n\n/* Helper function to invoke the free private data callback of a Module blocked client. */\nvoid moduleInvokeFreePrivDataCallback(client *c, RedisModuleBlockedClient *bc) {\n    if (bc->privdata && bc->free_privdata) {\n        RedisModuleCtx ctx;\n        int ctx_flags = c == NULL ? REDISMODULE_CTX_BLOCKED_DISCONNECTED : REDISMODULE_CTX_NONE;\n        moduleCreateContext(&ctx, bc->module, ctx_flags);\n        ctx.blocked_privdata = bc->privdata;\n        ctx.client = bc->client;\n        bc->free_privdata(&ctx,bc->privdata);\n        moduleFreeContext(&ctx);\n    }\n}\n\n/* Unregisters all the module auth callbacks that have been registered by this Module. */\nvoid moduleUnregisterAuthCBs(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    listRewind(moduleAuthCallbacks, &li);\n    while ((ln = listNext(&li))) {\n        RedisModuleAuthCtx *ctx = listNodeValue(ln);\n        if (ctx->module == module) {\n            listDelNode(moduleAuthCallbacks, ln);\n            zfree(ctx);\n        }\n    }\n}\n\n/* Search for & attempt next module auth callback after skipping the ones already attempted.\n * Returns the result of the module auth callback. */\nint attemptNextAuthCb(client *c, robj *username, robj *password, robj **err) {\n    int handle_next_callback = c->module_auth_ctx == NULL;\n    RedisModuleAuthCtx *cur_auth_ctx = NULL;\n    listNode *ln;\n    listIter li;\n    listRewind(moduleAuthCallbacks, &li);\n    int result = REDISMODULE_AUTH_NOT_HANDLED;\n    while((ln = listNext(&li))) {\n        cur_auth_ctx = listNodeValue(ln);\n        /* Skip over the previously attempted auth contexts. 
*/\n        if (!handle_next_callback) {\n            handle_next_callback = cur_auth_ctx == c->module_auth_ctx;\n            continue;\n        }\n        /* Remove the module auth complete flag before we attempt the next cb. */\n        c->flags &= ~CLIENT_MODULE_AUTH_HAS_RESULT;\n        RedisModuleCtx ctx;\n        moduleCreateContext(&ctx, cur_auth_ctx->module, REDISMODULE_CTX_NONE);\n        ctx.client = c;\n        *err = NULL;\n        c->module_auth_ctx = cur_auth_ctx;\n        result = cur_auth_ctx->auth_cb(&ctx, username, password, err);\n        moduleFreeContext(&ctx);\n        if (result == REDISMODULE_AUTH_HANDLED) break;\n        /* If Auth was not handled (allowed/denied/blocked) by the Module, try the next auth cb. */\n    }\n    return result;\n}\n\n/* Helper function to handle a reprocessed unblocked auth client.\n * Returns REDISMODULE_AUTH_NOT_HANDLED if the client was not reprocessed after a blocking module\n * auth operation.\n * Otherwise, we attempt the auth reply callback & the free priv data callback, update fields and\n * return the result of the reply callback. 
*/\nint attemptBlockedAuthReplyCallback(client *c, robj *username, robj *password, robj **err) {\n    int result = REDISMODULE_AUTH_NOT_HANDLED;\n    if (!c->module_blocked_client) return result;\n    RedisModuleBlockedClient *bc = (RedisModuleBlockedClient *) c->module_blocked_client;\n    bc->client = c;\n    if (bc->auth_reply_cb) {\n        RedisModuleCtx ctx;\n        moduleCreateContext(&ctx, bc->module, REDISMODULE_CTX_BLOCKED_REPLY);\n        ctx.blocked_privdata = bc->privdata;\n        ctx.blocked_ready_key = NULL;\n        ctx.client = bc->client;\n        ctx.blocked_client = bc;\n        result = bc->auth_reply_cb(&ctx, username, password, err);\n        moduleFreeContext(&ctx);\n    }\n    moduleInvokeFreePrivDataCallback(c, bc);\n    c->module_blocked_client = NULL;\n    c->lastcmd->microseconds += bc->background_duration;\n    bc->module->blocked_clients--;\n    zfree(bc);\n    return result;\n}\n\n/* Helper function to attempt Module based authentication through module auth callbacks.\n * Here, the Module is expected to authenticate the client using the RedisModule APIs and to add ACL\n * logs in case of errors.\n * Returns one of the following codes:\n * AUTH_OK - Indicates that a module handled and authenticated the client.\n * AUTH_ERR - Indicates that a module handled and denied authentication for this client.\n * AUTH_NOT_HANDLED - Indicates that authentication was not handled by any Module and that\n * normal password based authentication can be attempted next.\n * AUTH_BLOCKED - Indicates module authentication is in progress through a blocking implementation.\n * In this case, authentication is handled here again after the client is unblocked / reprocessed. 
*/\nint checkModuleAuthentication(client *c, robj *username, robj *password, robj **err) {\n    if (!listLength(moduleAuthCallbacks)) return AUTH_NOT_HANDLED;\n    int result = attemptBlockedAuthReplyCallback(c, username, password, err);\n    if (result == REDISMODULE_AUTH_NOT_HANDLED) {\n        result = attemptNextAuthCb(c, username, password, err);\n    }\n    if (c->flags & CLIENT_BLOCKED) {\n        /* Modules are expected to return REDISMODULE_AUTH_HANDLED when blocking clients. */\n        serverAssert(result == REDISMODULE_AUTH_HANDLED);\n        return AUTH_BLOCKED;\n    }\n    c->module_auth_ctx = NULL;\n    if (result == REDISMODULE_AUTH_NOT_HANDLED) {\n        c->flags &= ~CLIENT_MODULE_AUTH_HAS_RESULT;\n        return AUTH_NOT_HANDLED;\n    }\n    if (c->flags & CLIENT_MODULE_AUTH_HAS_RESULT) {\n        c->flags &= ~CLIENT_MODULE_AUTH_HAS_RESULT;\n        if (c->authenticated) return AUTH_OK;\n    }\n    return AUTH_ERR;\n}\n\n/* This function is called from module.c in order to check if a module\n * blocked for BLOCKED_MODULE and subtype 'on keys' (bc->blocked_on_keys true)\n * can really be unblocked, since the module was able to serve the client.\n * If the callback returns REDISMODULE_OK, then the client can be unblocked,\n * otherwise the client remains blocked and we'll retry again when one of\n * the keys it blocked for becomes \"ready\" again.\n * This function returns 1 if client was served (and should be unblocked) */\nint moduleTryServeClientBlockedOnKey(client *c, robj *key) {\n    int served = 0;\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n\n    /* Protect against re-processing: don't serve clients that are already\n     * in the unblocking list for any reason (including RM_UnblockClient()\n     * explicit call). See #6798. 
*/\n    if (bc->unblocked) return 0;\n\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, bc->module, REDISMODULE_CTX_BLOCKED_REPLY);\n    ctx.blocked_ready_key = key;\n    ctx.blocked_privdata = bc->privdata;\n    ctx.client = bc->client;\n    ctx.blocked_client = bc;\n    if (bc->reply_callback(&ctx,(void**)c->argv,c->argc) == REDISMODULE_OK)\n        served = 1;\n    moduleFreeContext(&ctx);\n    return served;\n}\n\n/* Block a client in the context of a blocking command, returning a handle\n * which will be used, later, in order to unblock the client with a call to\n * RedisModule_UnblockClient(). The arguments specify callback functions\n * and a timeout after which the client is unblocked.\n *\n * The callbacks are called in the following contexts:\n *\n *     reply_callback:   called after a successful RedisModule_UnblockClient()\n *                       call in order to reply to the client and unblock it.\n *\n *     timeout_callback: called when the timeout is reached or if `CLIENT UNBLOCK`\n *                       is invoked, in order to send an error to the client.\n *\n *     free_privdata:    called in order to free the private data that is passed\n *                       by RedisModule_UnblockClient() call.\n *\n * Note: RedisModule_UnblockClient should be called for every blocked client,\n *       even if client was killed, timed-out or disconnected. Failing to do so\n *       will result in memory leaks.\n *\n * There are some cases where RedisModule_BlockClient() cannot be used:\n *\n * 1. If the client is a Lua script.\n * 2. 
If the client is executing a MULTI block.\n *\n * In these cases, a call to RedisModule_BlockClient() will **not** block the\n * client, but instead produce a specific error reply.\n *\n * A client blocked by a module that registers a timeout_callback function can\n * also be unblocked using the `CLIENT UNBLOCK` command, which will trigger the\n * timeout callback. If a callback function is not registered, then the blocked client will be\n * treated as if it is not in a blocked state and `CLIENT UNBLOCK` will return\n * a zero value.\n *\n * Measuring background time: By default the time spent in the blocked command\n * is not accounted for in the total command duration. To include such time you should\n * use RM_BlockedClientMeasureTimeStart() and RM_BlockedClientMeasureTimeEnd() one\n * or multiple times within the blocking command background work.\n */\nRedisModuleBlockedClient *RM_BlockClient(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback,\n                                         RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*),\n                                         long long timeout_ms) {\n    return moduleBlockClient(ctx,reply_callback,NULL,timeout_callback,free_privdata,timeout_ms, NULL,0,NULL,0);\n}\n\n/* Block the current client for module authentication in the background. If module auth is not in\n * progress on the client, the API returns NULL. Otherwise, the client is blocked and a\n * RedisModuleBlockedClient handle is returned, similar to the RM_BlockClient API.\n * Note: Only use this API from the context of a module auth callback. 
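For instance, a blocking flavor of module authentication could look like this sketch (all function names are invented; error handling and the external auth service are elided):

```c
#include <pthread.h>
#include <string.h>
#include "redismodule.h"

// Reply callback, invoked after RedisModule_UnblockClient() is called.
int AuthBlockReply(RedisModuleCtx *ctx, RedisModuleString *username,
                   RedisModuleString *password, RedisModuleString **err) {
    REDISMODULE_NOT_USED(password);
    REDISMODULE_NOT_USED(err);
    const char *user = RedisModule_StringPtrLen(username, NULL);
    if (!strcmp(user, "foo"))
        RedisModule_AuthenticateClientWithACLUser(ctx, "foo", 3, NULL, NULL, NULL);
    return REDISMODULE_AUTH_HANDLED;
}

// Background worker: would normally contact an external service here.
void *AuthWorker(void *arg) {
    RedisModule_UnblockClient((RedisModuleBlockedClient *)arg, NULL);
    return NULL;
}

// Auth callback registered with RedisModule_RegisterAuthCallback().
int BlockingAuthCallback(RedisModuleCtx *ctx, RedisModuleString *username,
                         RedisModuleString *password, RedisModuleString **err) {
    REDISMODULE_NOT_USED(username);
    REDISMODULE_NOT_USED(password);
    REDISMODULE_NOT_USED(err);
    RedisModuleBlockedClient *bc =
        RedisModule_BlockClientOnAuth(ctx, AuthBlockReply, NULL);
    pthread_t tid;
    if (bc && pthread_create(&tid, NULL, AuthWorker, bc) != 0)
        RedisModule_AbortBlock(bc);
    return REDISMODULE_AUTH_HANDLED;
}
```
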
 */\nRedisModuleBlockedClient *RM_BlockClientOnAuth(RedisModuleCtx *ctx, RedisModuleAuthCallback reply_callback,\n                                               void (*free_privdata)(RedisModuleCtx*,void*)) {\n    if (!clientHasModuleAuthInProgress(ctx->client)) {\n        addReplyError(ctx->client, \"Module blocking client on auth when not currently undergoing module authentication\");\n        return NULL;\n    }\n    RedisModuleBlockedClient *bc = moduleBlockClient(ctx,NULL,reply_callback,NULL,free_privdata,0, NULL,0,NULL,0);\n    if (ctx->client->flags & CLIENT_BLOCKED) {\n        ctx->client->flags |= CLIENT_PENDING_COMMAND;\n    }\n    return bc;\n}\n\n/* Get the private data that was previously set on a blocked client. */\nvoid *RM_BlockClientGetPrivateData(RedisModuleBlockedClient *blocked_client) {\n    return blocked_client->privdata;\n}\n\n/* Set private data on a blocked client. */\nvoid RM_BlockClientSetPrivateData(RedisModuleBlockedClient *blocked_client, void *private_data) {\n    blocked_client->privdata = private_data;\n}\n\n/* This call is similar to RedisModule_BlockClient(), however in this case we\n * don't just block the client, but also ask Redis to unblock it automatically\n * once certain keys become \"ready\", that is, contain more data.\n *\n * Basically this is similar to what a typical Redis command usually does,\n * like BLPOP or BZPOPMAX: the client blocks if it cannot be served ASAP,\n * and later when the key receives new data (a list push for instance), the\n * client is unblocked and served.\n *\n * However, in the case of this module API, when is the client unblocked?\n *\n * 1. If you block on a key of a type that has blocking operations associated,\n *    like a list, a sorted set, a stream, and so forth, the client may be\n *    unblocked once the relevant key is targeted by an operation that normally\n *    unblocks the native blocking operations for that type. 
So if we block\n *    on a list key, an RPUSH command may unblock our client and so forth.\n * 2. If you are implementing your native data type, or if you want to add new\n *    unblocking conditions in addition to \"1\", you can call the module API\n *    RedisModule_SignalKeyAsReady().\n *\n * Anyway we can't be sure if the client should be unblocked just because the\n * key is signaled as ready: for instance a successive operation may change the\n * key, or a client in queue before this one can be served, modifying the key\n * as well and making it empty again. So when a client is blocked with\n * RedisModule_BlockClientOnKeys() the reply callback is not called after\n * RM_UnblockClient() is called, but every time a key is signaled as ready:\n * if the reply callback can serve the client, it returns REDISMODULE_OK\n * and the client is unblocked, otherwise it will return REDISMODULE_ERR\n * and we'll try again later.\n *\n * The reply callback can access the key that was signaled as ready by\n * calling the API RedisModule_GetBlockedClientReadyKey(), which returns\n * just the string name of the key as a RedisModuleString object.\n *\n * Thanks to this system we can set up complex blocking scenarios, like\n * unblocking a client only if a list contains at least 5 items or other\n * fancier logic.\n *\n * Note that another difference from RedisModule_BlockClient() is that here\n * we pass the private data directly when blocking the client: it will\n * be accessible later in the reply callback. Normally when blocking with\n * RedisModule_BlockClient() the private data to reply to the client is\n * passed when calling RedisModule_UnblockClient(), but here the unblocking\n * is performed by Redis itself, so we need to have some private data\n * beforehand. The private data is used to store any information about the specific\n * unblocking operation that you are implementing. 
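As a sketch of the "at least 5 items" scenario mentioned above (the function name is invented; here the threshold travels as the private data, packed into the pointer itself for brevity):

```c
#include <stdint.h>
#include "redismodule.h"

// Reply callback for RedisModule_BlockClientOnKeys(): serve the client
// only once the ready key holds at least `threshold` items, otherwise
// return REDISMODULE_ERR so Redis keeps the client blocked.
int ListLenReply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    long long threshold =
        (long long)(intptr_t)RedisModule_GetBlockedClientPrivateData(ctx);
    RedisModuleString *keyname = RedisModule_GetBlockedClientReadyKey(ctx);
    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);
    long long len = (long long)RedisModule_ValueLength(key);
    RedisModule_CloseKey(key);
    if (len < threshold) return REDISMODULE_ERR; // not ready yet, stay blocked
    RedisModule_ReplyWithLongLong(ctx, len);
    return REDISMODULE_OK;
}
```

A real module would usually allocate a struct as private data rather than abusing the pointer value.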
Such information will be\n * freed using the free_privdata callback provided by the user.\n *\n * However the reply callback will be able to access the argument vector of\n * the command, so the private data is often not needed.\n *\n * Note: Under normal circumstances RedisModule_UnblockClient should not be\n *       called for clients that are blocked on keys (Either the key will\n *       become ready or a timeout will occur). If for some reason you do want\n *       to call RedisModule_UnblockClient it is possible, but it must NOT be\n *       called from module threads and the client will be handled as if it\n *       timed out (You must implement the timeout callback in that case).\n */\nRedisModuleBlockedClient *RM_BlockClientOnKeys(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback,\n                                               RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*),\n                                               long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata) {\n    return moduleBlockClient(ctx,reply_callback,NULL,timeout_callback,free_privdata,timeout_ms, keys,numkeys,privdata,0);\n}\n\n/* Same as RedisModule_BlockClientOnKeys, but can take REDISMODULE_BLOCK_* flags.\n * The flags can be REDISMODULE_BLOCK_UNBLOCK_DEFAULT, which means the default behavior (the same\n * as calling RedisModule_BlockClientOnKeys), or a bit mask of these:\n *\n * - `REDISMODULE_BLOCK_UNBLOCK_DELETED`: The clients should be awakened in case any of `keys` are deleted.\n *                                        Mostly useful for commands that require the key to exist (like XREADGROUP).\n */\nRedisModuleBlockedClient *RM_BlockClientOnKeysWithFlags(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback,\n                                                        RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*),\n                                                        
long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata,\n                                                        int flags) {\n    return moduleBlockClient(ctx,reply_callback,NULL,timeout_callback,free_privdata,timeout_ms, keys,numkeys,privdata,flags);\n}\n\n/* This function is used in order to potentially unblock a client blocked\n * on keys with RedisModule_BlockClientOnKeys(). When this function is called,\n * all the clients blocked for this key will get their reply_callback called. */\nvoid RM_SignalKeyAsReady(RedisModuleCtx *ctx, RedisModuleString *key) {\n    signalKeyAsReady(ctx->client->db, key, OBJ_MODULE);\n}\n\n/* Implements RM_UnblockClient() and moduleUnblockClient(). */\nint moduleUnblockClientByHandle(RedisModuleBlockedClient *bc, void *privdata) {\n    pthread_mutex_lock(&moduleUnblockedClientsMutex);\n    if (!bc->blocked_on_keys) bc->privdata = privdata;\n    bc->unblocked = 1;\n    if (listLength(moduleUnblockedClients) == 0) {\n        if (write(server.module_pipe[1],\"A\",1) != 1) {\n            /* Ignore the error, this is best-effort. */\n        }\n    }\n    listAddNodeTail(moduleUnblockedClients,bc);\n    pthread_mutex_unlock(&moduleUnblockedClientsMutex);\n    return REDISMODULE_OK;\n}\n\n/* This API is used by the Redis core to unblock a client that was blocked\n * by a module. */\nvoid moduleUnblockClient(client *c) {\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n    moduleUnblockClientByHandle(bc,NULL);\n}\n\n/* Return true if the client 'c' was blocked by a module using\n * RM_BlockClientOnKeys(). */\nint moduleClientIsBlockedOnKeys(client *c) {\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n    return bc->blocked_on_keys;\n}\n\n/* Unblock a client blocked by `RedisModule_BlockClient()`. 
This will trigger\n * the reply callback to be called in order to reply to the client.\n * The 'privdata' argument will be accessible by the reply callback, so\n * the caller of this function can pass any value that is needed in order to\n * actually reply to the client.\n *\n * A common usage for 'privdata' is a thread that computes something that\n * needs to be passed to the client, including but not limited to a slow-to-compute\n * reply or a reply obtained via networking.\n *\n * Note 1: this function can be called from threads spawned by the module when\n * the client was blocked using RedisModule_BlockClient().\n *\n * Note 2: when we unblock a client that is blocked for keys using the API\n * RedisModule_BlockClientOnKeys(), the privdata argument here is not used.\n * Unblocking a client that was blocked for keys using this API will still\n * require the client to get some reply, so the function will use the\n * \"timeout\" handler in order to do so (The privdata provided in\n * RedisModule_BlockClientOnKeys() is accessible from the timeout\n * callback via RM_GetBlockedClientPrivateData). */\nint RM_UnblockClient(RedisModuleBlockedClient *bc, void *privdata) {\n    if (bc->blocked_on_keys) {\n        /* In theory the user should always pass the timeout handler as an\n         * argument, but better to be safe than sorry. */\n        if (bc->timeout_callback == NULL) return REDISMODULE_ERR;\n        if (bc->unblocked) return REDISMODULE_OK;\n        if (bc->client) bc->blocked_on_keys_explicit_unblock = 1;\n    }\n    moduleUnblockClientByHandle(bc,privdata);\n    return REDISMODULE_OK;\n}\n\n/* Abort a blocked client blocking operation: the client will be unblocked\n * without firing any callback. 
*/\nint RM_AbortBlock(RedisModuleBlockedClient *bc) {\n    bc->reply_callback = NULL;\n    bc->disconnect_callback = NULL;\n    bc->auth_reply_cb = NULL;\n    return RM_UnblockClient(bc,NULL);\n}\n\n/* Set a callback that will be called if a blocked client disconnects\n * before the module has a chance to call RedisModule_UnblockClient().\n *\n * Usually what you want to do there is to clean up your module state\n * so that you can call RedisModule_UnblockClient() safely, otherwise\n * the client will remain blocked forever if the timeout is large.\n *\n * Notes:\n *\n * 1. It is not safe to call Reply* family functions here; it is also\n *    useless since the client is gone.\n *\n * 2. This callback is not called if the client disconnects because of\n *    a timeout. In such a case, the client is unblocked automatically\n *    and the timeout callback is called.\n */\nvoid RM_SetDisconnectCallback(RedisModuleBlockedClient *bc, RedisModuleDisconnectFunc callback) {\n    bc->disconnect_callback = callback;\n}\n\n/* This function will check the moduleUnblockedClients queue in order to\n * call the reply callback and really unblock the client.\n *\n * Clients end up in this list because of calls to RM_UnblockClient(),\n * however it is possible that while the module was doing work for the\n * blocked client, it was terminated by Redis (for timeout or other reasons).\n * When this happens the RedisModuleBlockedClient structure in the queue\n * will have the 'client' field set to NULL. 
*/\nvoid moduleHandleBlockedClients(void) {\n    listNode *ln;\n    RedisModuleBlockedClient *bc;\n\n    pthread_mutex_lock(&moduleUnblockedClientsMutex);\n    while (listLength(moduleUnblockedClients)) {\n        ln = listFirst(moduleUnblockedClients);\n        bc = ln->value;\n        client *c = bc->client;\n        listDelNode(moduleUnblockedClients,ln);\n        pthread_mutex_unlock(&moduleUnblockedClientsMutex);\n\n        /* Release the lock during the loop, as long as we don't\n         * touch the shared list. */\n\n        /* Call the reply callback if the client is valid and we have\n         * any callback. However the callback is not called if the client\n         * was blocked on keys (RM_BlockClientOnKeys()), because we already\n         * called such callback in moduleTryServeClientBlockedOnKey() when\n         * the key was signaled as ready. */\n        long long prev_error_replies = server.stat_total_error_replies;\n        uint64_t reply_us = 0;\n        if (c && !bc->blocked_on_keys && bc->reply_callback) {\n            RedisModuleCtx ctx;\n            moduleCreateContext(&ctx, bc->module, REDISMODULE_CTX_BLOCKED_REPLY);\n            ctx.blocked_privdata = bc->privdata;\n            ctx.blocked_ready_key = NULL;\n            ctx.client = bc->client;\n            ctx.blocked_client = bc;\n            monotime replyTimer;\n            elapsedStart(&replyTimer);\n            bc->reply_callback(&ctx,(void**)c->argv,c->argc);\n            reply_us = elapsedUs(replyTimer);\n            moduleFreeContext(&ctx);\n        }\n        if (c && bc->blocked_on_keys_explicit_unblock) {\n            serverAssert(bc->blocked_on_keys);\n            moduleBlockedClientTimedOut(c);\n        }\n        /* Hold onto the blocked client if module auth is in progress. The reply callback is invoked\n         * when the client is reprocessed. 
*/\n        if (c && clientHasModuleAuthInProgress(c)) {\n            c->module_blocked_client = bc;\n        } else {\n            /* Free privdata if any. */\n            moduleInvokeFreePrivDataCallback(c, bc);\n        }\n\n        /* It is possible that this blocked client object accumulated\n         * replies to send to the client in a thread safe context.\n         * We need to glue such replies to the client output buffer and\n         * free the temporary client we just used for the replies. */\n        if (c) AddReplyFromClient(c, bc->reply_client);\n        moduleReleaseTempClient(bc->reply_client);\n        moduleReleaseTempClient(bc->thread_safe_ctx_client);\n\n        /* Update stats now that we've finished the blocking operation.\n         * This needs to be outside the reply callback above given that a\n         * module might not define any callback and still do blocking ops.\n         *\n         * If the module is blocked on keys updateStatsOnUnblock will be\n         * called from moduleUnblockClientOnKey.\n         */\n        if (c && !clientHasModuleAuthInProgress(c) && !bc->blocked_on_keys) {\n            updateStatsOnUnblock(c, bc->background_duration, reply_us, server.stat_total_error_replies != prev_error_replies);\n        }\n\n        if (c != NULL) {\n            /* Before unblocking the client, set the disconnect callback\n             * to NULL, because if we reached this point, the client was\n             * properly unblocked by the module. */\n            bc->disconnect_callback = NULL;\n            unblockClient(c, 1);\n\n            /* Update the wait offset, we don't know if this blocked client propagated anything,\n             * currently we'd rather not add an API for that, so we just assume it did. */\n            c->woff = server.master_repl_offset;\n\n            /* Put the client in the list of clients that need to write\n             * if there are pending replies here. 
This is needed since\n             * during a non-blocking command the client may receive output. */\n            if (!clientHasModuleAuthInProgress(c) && clientHasPendingReplies(c) &&\n                !(c->flags & CLIENT_PENDING_WRITE) && c->conn)\n            {\n                c->flags |= CLIENT_PENDING_WRITE;\n                listLinkNodeHead(server.clients_pending_write, &c->clients_pending_write_node);\n            }\n        }\n\n        /* Free 'bc' only after unblocking the client, since it is\n         * referenced in the client blocking context, and must be valid\n         * when calling unblockClient(). */\n        if (!(c && clientHasModuleAuthInProgress(c))) {\n            bc->module->blocked_clients--;\n            zfree(bc);\n        }\n\n        /* Lock again before iterating the loop. */\n        pthread_mutex_lock(&moduleUnblockedClientsMutex);\n    }\n    pthread_mutex_unlock(&moduleUnblockedClientsMutex);\n}\n\n/* Check if the specified client can be safely timed out using\n * moduleBlockedClientTimedOut().\n */\nint moduleBlockedClientMayTimeout(client *c) {\n    if (c->bstate.btype != BLOCKED_MODULE)\n        return 1;\n\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n    return (bc && bc->timeout_callback != NULL);\n}\n\n/* Called when our client timed out. After this function unblockClient()\n * is called, and it will invalidate the blocked client. So this function\n * does not need to do any cleanup. Eventually the module will call the\n * API to unblock the client and the memory will be released.\n *\n * This function should only be called from the main thread, since we must handle the unblocking\n * of the client synchronously. This ensures that we can reply to the client before\n * resetClient() is called. 
*/\nvoid moduleBlockedClientTimedOut(client *c) {\n    RedisModuleBlockedClient *bc = c->bstate.module_blocked_handle;\n\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, bc->module, REDISMODULE_CTX_BLOCKED_TIMEOUT);\n    ctx.client = bc->client;\n    ctx.blocked_client = bc;\n    ctx.blocked_privdata = bc->privdata;\n\n    long long prev_error_replies = server.stat_total_error_replies;\n\n    if (bc->timeout_callback) {\n        /* In theory, the user should always pass the timeout handler as an\n         * argument, but better to be safe than sorry. */\n        bc->timeout_callback(&ctx,(void**)c->argv,c->argc);\n    }\n\n    moduleFreeContext(&ctx);\n\n    updateStatsOnUnblock(c, bc->background_duration, 0, server.stat_total_error_replies != prev_error_replies);\n\n    /* For timeout events, we do not want to call the disconnect callback,\n     * because the blocked client will be automatically disconnected in\n     * this case, and the user can still hook using the timeout callback. */\n    bc->disconnect_callback = NULL;\n}\n\n/* Return non-zero if a module command was called in order to fill the\n * reply for a blocked client. */\nint RM_IsBlockedReplyRequest(RedisModuleCtx *ctx) {\n    return (ctx->flags & REDISMODULE_CTX_BLOCKED_REPLY) != 0;\n}\n\n/* Return non-zero if a module command was called in order to fill the\n * reply for a blocked client that timed out. */\nint RM_IsBlockedTimeoutRequest(RedisModuleCtx *ctx) {\n    return (ctx->flags & REDISMODULE_CTX_BLOCKED_TIMEOUT) != 0;\n}\n\n/* Get the private data set by RedisModule_UnblockClient() */\nvoid *RM_GetBlockedClientPrivateData(RedisModuleCtx *ctx) {\n    return ctx->blocked_privdata;\n}\n\n/* Get the key that is ready when the reply callback is called in the context\n * of a client blocked by RedisModule_BlockClientOnKeys(). 
*/\nRedisModuleString *RM_GetBlockedClientReadyKey(RedisModuleCtx *ctx) {\n    return ctx->blocked_ready_key;\n}\n\n/* Get the blocked client associated with a given context.\n * This is useful in the reply and timeout callbacks of blocked clients,\n * because sometimes the module keeps the blocked client handle\n * around, and wants to clean it up. */\nRedisModuleBlockedClient *RM_GetBlockedClientHandle(RedisModuleCtx *ctx) {\n    return ctx->blocked_client;\n}\n\n/* Return true if, when the free callback of a blocked client is called,\n * the reason for the client to be unblocked is that it disconnected\n * while it was blocked. */\nint RM_BlockedClientDisconnected(RedisModuleCtx *ctx) {\n    return (ctx->flags & REDISMODULE_CTX_BLOCKED_DISCONNECTED) != 0;\n}\n\n/* --------------------------------------------------------------------------\n * ## Thread Safe Contexts\n * -------------------------------------------------------------------------- */\n\n/* Return a context which can be used inside threads to make Redis context\n * calls with certain module APIs. If 'bc' is not NULL then the context will\n * be bound to a blocked client, and it will be possible to use the\n * `RedisModule_Reply*` family of functions to accumulate a reply for when the\n * client will be unblocked. Otherwise the thread safe context will be\n * detached from any specific client.\n *\n * To call non-reply APIs, the thread safe context must be prepared with:\n *\n *     RedisModule_ThreadSafeContextLock(ctx);\n *     ... 
make your call here ...\n *     RedisModule_ThreadSafeContextUnlock(ctx);\n *\n * This is not needed when using `RedisModule_Reply*` functions, assuming\n * that a blocked client was used when the context was created, otherwise\n * no RedisModule_Reply* call should be made at all.\n *\n * NOTE: If you're creating a detached thread safe context (bc is NULL),\n * consider using `RM_GetDetachedThreadSafeContext` which will also retain\n * the module ID and thus be more useful for logging. */\nRedisModuleCtx *RM_GetThreadSafeContext(RedisModuleBlockedClient *bc) {\n    RedisModuleCtx *ctx = zmalloc(sizeof(*ctx));\n    RedisModule *module = bc ? bc->module : NULL;\n    int flags = REDISMODULE_CTX_THREAD_SAFE;\n\n    /* Creating a new client object is costly. To avoid that, we have an\n     * internal pool of client objects. In blockClient(), a client object is\n     * assigned to bc->thread_safe_ctx_client to be used for the thread safe\n     * context.\n     * For detached thread safe contexts, we create a new client object.\n     * Otherwise, as this function can be called from different threads, we\n     * would need to synchronize access to internal pool of client objects.\n     * Assuming creating detached context is rare and not that performance\n     * critical, we avoid synchronizing access to the client pool by creating\n     * a new client */\n    if (!bc) flags |= REDISMODULE_CTX_NEW_CLIENT;\n    moduleCreateContext(ctx, module, flags);\n    /* Even when the context is associated with a blocked client, we can't\n     * access it safely from another thread, so we use a fake client here\n     * in order to keep things like the currently selected database and similar\n     * things. 
*/\n    if (bc) {\n        ctx->blocked_client = bc;\n        ctx->client = bc->thread_safe_ctx_client;\n        selectDb(ctx->client,bc->dbid);\n        if (bc->client) {\n            ctx->client->id = bc->client->id;\n            ctx->client->resp = bc->client->resp;\n        }\n    }\n    return ctx;\n}\n\n/* Return a detached thread safe context that is not associated with any\n * specific blocked client, but is associated with the module's context.\n *\n * This is useful for modules that wish to hold a global context over\n * a long term, for purposes such as logging. */\nRedisModuleCtx *RM_GetDetachedThreadSafeContext(RedisModuleCtx *ctx) {\n    RedisModuleCtx *new_ctx = zmalloc(sizeof(*new_ctx));\n    /* We create a new client object for the detached context.\n     * See RM_GetThreadSafeContext() for more information */\n    moduleCreateContext(new_ctx, ctx->module,\n                        REDISMODULE_CTX_THREAD_SAFE|REDISMODULE_CTX_NEW_CLIENT);\n    return new_ctx;\n}\n\n/* Release a thread safe context. */\nvoid RM_FreeThreadSafeContext(RedisModuleCtx *ctx) {\n    moduleFreeContext(ctx);\n    zfree(ctx);\n}\n\nvoid moduleGILAfterLock(void) {\n    /* We should never get here if we are already inside a module\n     * code block which already opened a context. */\n    serverAssert(server.execution_nesting == 0);\n    /* Bump up the nesting level to prevent immediate propagation\n     * of a possible RM_Call from the thread. */\n    enterExecutionUnit(1, 0);\n}\n\n/* Acquire the server lock before executing a thread safe API call.\n * This is not needed for `RedisModule_Reply*` calls when there is\n * a blocked client connected to the thread safe context. 
*/\nvoid RM_ThreadSafeContextLock(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n    moduleAcquireGIL();\n    moduleGILAfterLock();\n}\n\n/* Similar to RM_ThreadSafeContextLock but this function\n * will not block if the server lock is already acquired.\n *\n * If successful (lock acquired) REDISMODULE_OK is returned,\n * otherwise REDISMODULE_ERR is returned and errno is set\n * accordingly. */\nint RM_ThreadSafeContextTryLock(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n\n    int res = moduleTryAcquireGIL();\n    if(res != 0) {\n        errno = res;\n        return REDISMODULE_ERR;\n    }\n    moduleGILAfterLock();\n    return REDISMODULE_OK;\n}\n\nvoid moduleGILBeforeUnlock(void) {\n    /* We should never get here if we are already inside a module\n     * code block which already opened a context, except for\n     * the bump-up from moduleGILAfterLock. */\n    serverAssert(server.execution_nesting == 1);\n    /* Restore nesting level and propagate pending commands\n     * (because it's unclear when thread safe contexts are\n     * released, we have to propagate here). */\n    exitExecutionUnit();\n    postExecutionUnitOperations();\n}\n\n/* Release the server lock after a thread safe API call was executed. */\nvoid RM_ThreadSafeContextUnlock(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n    moduleGILBeforeUnlock();\n    moduleReleaseGIL();\n}\n\nvoid moduleAcquireGIL(void) {\n    pthread_mutex_lock(&moduleGIL);\n}\n\nint moduleTryAcquireGIL(void) {\n    return pthread_mutex_trylock(&moduleGIL);\n}\n\nvoid moduleReleaseGIL(void) {\n    pthread_mutex_unlock(&moduleGIL);\n}\n\n\n/* --------------------------------------------------------------------------\n * ## Module Keyspace Notifications API\n * -------------------------------------------------------------------------- */\n\n/* Subscribe to keyspace notifications. This is a low-level version of the\n * keyspace-notifications API. 
A module can register callbacks to be notified\n * when keyspace events occur.\n *\n * Notification events are filtered by their type (string events, set events,\n * etc), and the subscriber callback receives only events that match a specific\n * mask of event types.\n *\n * When subscribing to notifications with RedisModule_SubscribeToKeyspaceEvents,\n * the module must provide an event type-mask, denoting the events the subscriber\n * is interested in. This can be an ORed mask of any of the following flags:\n *\n *  - REDISMODULE_NOTIFY_GENERIC: Generic commands like DEL, EXPIRE, RENAME\n *  - REDISMODULE_NOTIFY_STRING: String events\n *  - REDISMODULE_NOTIFY_LIST: List events\n *  - REDISMODULE_NOTIFY_SET: Set events\n *  - REDISMODULE_NOTIFY_HASH: Hash events\n *  - REDISMODULE_NOTIFY_ZSET: Sorted Set events\n *  - REDISMODULE_NOTIFY_EXPIRED: Expiration events\n *  - REDISMODULE_NOTIFY_EVICTED: Eviction events\n *  - REDISMODULE_NOTIFY_STREAM: Stream events\n *  - REDISMODULE_NOTIFY_MODULE: Module type events\n *  - REDISMODULE_NOTIFY_KEYMISS: Key-miss events\n *                                Notice, the key-miss event is the only type\n *                                of event that is fired from within a read command.\n *                                Performing RM_Call with a write command from within\n *                                this notification is wrong and discouraged. 
It will\n *                                cause the read command that triggered the event to be\n *                                replicated to the AOF/Replica.\n *\n *  - REDISMODULE_NOTIFY_NEW: New key notification\n *  - REDISMODULE_NOTIFY_OVERWRITTEN: Overwritten events\n *  - REDISMODULE_NOTIFY_TYPE_CHANGED: Type-changed events\n *  - REDISMODULE_NOTIFY_KEY_TRIMMED: Key trimmed events after a slot migration operation\n *  - REDISMODULE_NOTIFY_RATE_LIMIT: Rate limit event\n *  - REDISMODULE_NOTIFY_ALL: All events (Excluding REDISMODULE_NOTIFY_KEYMISS,\n *                            REDISMODULE_NOTIFY_NEW, REDISMODULE_NOTIFY_OVERWRITTEN,\n *                            REDISMODULE_NOTIFY_TYPE_CHANGED, REDISMODULE_NOTIFY_KEY_TRIMMED\n *                            and REDISMODULE_NOTIFY_RATE_LIMIT)\n *  - REDISMODULE_NOTIFY_LOADED: A special notification available only for modules,\n *                               indicates that the key was loaded from persistence.\n *                               Notice, when this event fires, the given key\n *                               cannot be retained; use RM_CreateStringFromString\n *                               instead.\n *\n * We do not distinguish between key events and keyspace events, and it is up\n * to the module to filter the actions taken based on the key.\n *\n * The subscriber signature is:\n *\n *     int (*RedisModuleNotificationFunc) (RedisModuleCtx *ctx, int type,\n *                                         const char *event,\n *                                         RedisModuleString *key);\n *\n * `type` is the event type bit that must match the mask given at registration\n * time. 
The event string is the actual command being executed, and key is the\n * relevant Redis key.\n *\n * The notification callback gets executed with a Redis context that cannot be\n * used to send anything to the client, and has the db number where the event\n * occurred as its selected db number.\n *\n * Notice that it is not necessary to enable notifications in redis.conf for\n * module notifications to work.\n *\n * Warning: the notification callbacks are performed in a synchronous manner,\n * so notification callbacks must be fast, or they will slow Redis down.\n * If you need to take long actions, use threads to offload them.\n *\n * Moreover, the fact that the notification is executed synchronously means\n * that the notification code will be executed in the middle of Redis logic\n * (commands logic, eviction, expire). Changing the key space while the logic\n * runs is dangerous and discouraged. In order to react to key space events with\n * write actions, please refer to `RM_AddPostNotificationJob`.\n *\n * See https://redis.io/docs/latest/develop/use/keyspace-notifications/ for more information.\n */\nint RM_SubscribeToKeyspaceEvents(RedisModuleCtx *ctx, int types, RedisModuleNotificationFunc callback) {\n    RedisModuleKeyspaceSubscriber *sub = zmalloc(sizeof(*sub));\n    sub->module = ctx->module;\n    sub->event_mask = types;\n    sub->flags = REDISMODULE_NOTIFY_FLAG_NONE;\n    sub->notify_callback = callback;\n    sub->notify_callback_with_subkeys = NULL;\n    sub->active = 0;\n\n    listAddNodeTail(moduleKeyspaceSubscribers, sub);\n    moduleUpdateKeyspaceSubscribersTypes();\n    return REDISMODULE_OK;\n}\n\n/*\n * RM_UnsubscribeFromKeyspaceEvents - Unregister a module's callback from keyspace notifications for specific event types.\n *\n * This function removes a previously registered subscription identified by both the event mask and the callback function.\n * It is useful to reduce performance overhead when the module no longer requires notifications 
for certain events.\n *\n * Parameters:\n *  - ctx: The RedisModuleCtx associated with the calling module.\n *  - types: The event mask representing the keyspace notification types to unsubscribe from.\n *  - callback: The callback function pointer that was originally registered for these events.\n *\n * Returns:\n *  - REDISMODULE_OK on successful removal of the subscription.\n *  - REDISMODULE_ERR if no matching subscription was found or if invalid parameters were provided.\n */\nint RM_UnsubscribeFromKeyspaceEvents(RedisModuleCtx *ctx, int types, RedisModuleNotificationFunc callback) {\n    if (!ctx || !callback) return REDISMODULE_ERR;\n    int removed = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(moduleKeyspaceSubscribers,&li);\n    while ((ln = listNext(&li))) {\n        RedisModuleKeyspaceSubscriber *sub = ln->value;\n        if (sub->event_mask == types && sub->notify_callback == callback && sub->module == ctx->module) {\n            zfree(sub);\n            listDelNode(moduleKeyspaceSubscribers, ln);\n            removed++;\n        }\n    }\n    if (removed > 0) moduleUpdateKeyspaceSubscribersTypes();\n    return removed > 0 ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Subscribe to keyspace notifications with subkey information.\n *\n * This is the extended version of RM_SubscribeToKeyspaceEvents. When subkeys\n * are available, the `subkeys` array and `count` are passed to the callback.\n * `subkeys` contains only the names of affected subkeys (values are not included),\n * and `count` is the number of elements. The array may contain duplicates when\n * the same subkey appears more than once in a command (e.g. HSET key f1 v1 f1 v2\n * produces subkeys=[\"f1\",\"f1\"], count=2). When no subkeys are present, `subkeys`\n * will be NULL and `count` will be 0. 
Whether events without subkeys are delivered\n * depends on the `flags` parameter (see below).\n *\n * `types` is a bit mask of event types the module is interested in\n * (using the same REDISMODULE_NOTIFY_* flags as RM_SubscribeToKeyspaceEvents).\n *\n * `flags` controls delivery filtering:\n *  - REDISMODULE_NOTIFY_FLAG_NONE: The callback is invoked for all matching\n *    events regardless of whether subkeys are present, so a separate\n *    RM_SubscribeToKeyspaceEvents registration can be omitted.\n *  - REDISMODULE_NOTIFY_FLAG_SUBKEYS_REQUIRED: The callback is only invoked\n *    when subkeys are not empty. Events without subkey information (e.g. SET,\n *    EXPIRE, DEL) are skipped.\n *\n * The callback signature is:\n *   void callback(RedisModuleCtx *ctx, int type, const char *event,\n *                 RedisModuleString *key, RedisModuleString **subkeys, int count);\n *\n * The subkeys array and its contents are only valid during the callback.\n * The underlying objects may be stack-allocated or temporary, so\n * RM_RetainString must NOT be used on them. To keep a subkey beyond\n * the callback (e.g. 
in a RM_AddPostNotificationJob callback), use\n * RM_HoldString (which handles static objects by copying) or\n * RM_CreateStringFromString to make a deep copy before returning.\n */\nint RM_SubscribeToKeyspaceEventsWithSubkeys(RedisModuleCtx *ctx, int types, int flags, RedisModuleNotificationWithSubkeysFunc callback) {\n    RedisModuleKeyspaceSubscriber *sub = zmalloc(sizeof(*sub));\n    sub->module = ctx->module;\n    sub->event_mask = types;\n    sub->flags = flags;\n    sub->notify_callback = NULL;\n    sub->notify_callback_with_subkeys = callback;\n    sub->active = 0;\n\n    listAddNodeTail(moduleKeyspaceSubscribers, sub);\n    moduleUpdateKeyspaceSubscribersTypes();\n    return REDISMODULE_OK;\n}\n\n/* Unregister a module's callback from keyspace notifications with subkeys\n * for specific event types.\n *\n * This function removes a previously registered subscription identified by\n * the event mask, delivery flags, and the callback function.\n *\n * Parameters:\n *  - ctx: The RedisModuleCtx associated with the calling module.\n *  - types: The event mask representing the notification types to unsubscribe from.\n *  - flags: The delivery flags that were used during registration.\n *  - callback: The callback function pointer that was originally registered.\n *\n * Returns:\n *  - REDISMODULE_OK on successful removal of the subscription.\n *  - REDISMODULE_ERR if no matching subscription was found. 
*/\nint RM_UnsubscribeFromKeyspaceEventsWithSubkeys(RedisModuleCtx *ctx, int types, int flags, RedisModuleNotificationWithSubkeysFunc callback) {\n    if (!ctx || !callback) return REDISMODULE_ERR;\n    int removed = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(moduleKeyspaceSubscribers,&li);\n    while ((ln = listNext(&li))) {\n        RedisModuleKeyspaceSubscriber *sub = ln->value;\n        if (sub->event_mask == types && sub->flags == flags &&\n            sub->notify_callback_with_subkeys == callback &&\n            sub->module == ctx->module)\n        {\n            zfree(sub);\n            listDelNode(moduleKeyspaceSubscribers, ln);\n            removed++;\n        }\n    }\n    if (removed > 0) moduleUpdateKeyspaceSubscribersTypes();\n    return removed > 0 ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Check any subscriber for event. */\nint moduleHasSubscribersForKeyspaceEvent(int type) {\n    return (moduleKeyspaceSubscribersTypes & type) != 0;\n}\n\n/* Check any subscriber for event with subkeys. 
*/\nint moduleHasSubscribersForKeyspaceEventWithSubkeys(int type) {\n    return (moduleKeyspaceSubscribersWithSubkeysTypes & type) != 0;\n}\n\nvoid firePostExecutionUnitJobs(void) {\n    /* Avoid propagation of commands.\n     * In that way, postExecutionUnitOperations will prevent\n     * recursive calls to firePostExecutionUnitJobs.\n     * This is a special case where we need to increase 'execution_nesting'\n     * but we do not want to update the cached time */\n    enterExecutionUnit(0, 0);\n    while (listLength(modulePostExecUnitJobs) > 0) {\n        listNode *ln = listFirst(modulePostExecUnitJobs);\n        RedisModulePostExecUnitJob *job = listNodeValue(ln);\n        listDelNode(modulePostExecUnitJobs, ln);\n\n        RedisModuleCtx ctx;\n        moduleCreateContext(&ctx, job->module, REDISMODULE_CTX_TEMP_CLIENT);\n        selectDb(ctx.client, job->dbid);\n\n        job->callback(&ctx, job->pd);\n        if (job->free_pd) job->free_pd(job->pd);\n\n        moduleFreeContext(&ctx);\n        zfree(job);\n    }\n    exitExecutionUnit();\n}\n\n/* When running inside a key space notification callback, it is dangerous and highly discouraged to perform any write\n * operation (See `RM_SubscribeToKeyspaceEvents`). In order to still perform write actions in this scenario,\n * Redis provides the `RM_AddPostNotificationJob` API. The API allows registering a job callback which Redis will call\n * when the following conditions are guaranteed to be fulfilled:\n * 1. It is safe to perform any write operation.\n * 2. 
The job will be called atomically alongside the key space notification.\n *\n * Notice, one job might trigger key space notifications that will trigger more jobs.\n * This raises a concern of entering an infinite loop. We consider an infinite loop\n * a logical bug that needs to be fixed in the module; an attempt to protect against\n * infinite loops by halting the execution could result in a violation of the feature's correctness,\n * so Redis makes no attempt to protect the module from infinite loops.\n *\n * 'free_pd' can be NULL, and in that case it will not be used.\n *\n * Return REDISMODULE_OK on success and REDISMODULE_ERR if it was called while loading data from disk (AOF or RDB) or\n * if the instance is a read-only replica. */\nint RM_AddPostNotificationJob(RedisModuleCtx *ctx, RedisModulePostNotificationJobFunc callback, void *privdata, void (*free_privdata)(void*)) {\n    if (server.loading || (server.masterhost && server.repl_slave_ro)) {\n        return REDISMODULE_ERR;\n    }\n    RedisModulePostExecUnitJob *job = zmalloc(sizeof(*job));\n    job->module = ctx->module;\n    job->callback = callback;\n    job->pd = privdata;\n    job->free_pd = free_privdata;\n    job->dbid = ctx->client->db->id;\n\n    listAddNodeTail(modulePostExecUnitJobs, job);\n    return REDISMODULE_OK;\n}\n\n/* Get the configured bitmap of notify-keyspace-events (Could be used\n * for additional filtering in RedisModuleNotificationFunc) */\nint RM_GetNotifyKeyspaceEvents(void) {\n    return server.notify_keyspace_events;\n}\n\n/* Expose notifyKeyspaceEvent to modules */\nint RM_NotifyKeyspaceEvent(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    if (!ctx || !ctx->client)\n        return REDISMODULE_ERR;\n    notifyKeyspaceEvent(type, (char *)event, key, ctx->client->db->id);\n    return REDISMODULE_OK;\n}\n\n/* Like RM_NotifyKeyspaceEvent, but also triggers subkey-level notifications\n * when subkeys are provided. 
Both key-level (keyspace/keyevent) and\n * subkey-level (subkeyspace/subkeyevent/subkeyspaceitem/subkeyspaceevent)\n * channels are published to, depending on the server configuration.\n *\n * This is the extended version of RM_NotifyKeyspaceEvent and can actually\n * replace it. When called with subkeys=NULL and count=0, it behaves\n * identically to RM_NotifyKeyspaceEvent. */\nint RM_NotifyKeyspaceEventWithSubkeys(RedisModuleCtx *ctx, int type, const char *event,\n                                      RedisModuleString *key, RedisModuleString **subkeys, int count) {\n    if (!ctx || !ctx->client)\n        return REDISMODULE_ERR;\n    notifyKeyspaceEventWithSubkeys(type, (char *)event, key, ctx->client->db->id, subkeys, count);\n    return REDISMODULE_OK;\n}\n\n/* Dispatcher for keyspace notifications to module subscriber functions.\n * This gets called only if at least one module requested to be notified on\n * keyspace notifications. For each subscriber, if notify_callback is set it\n * is called; otherwise if notify_callback_with_subkeys is set it is called\n * for all events (subkeys may be NULL/0 when not applicable). 
*/\nvoid moduleNotifyKeyspaceEvent(int type, const char *event, robj *key, int dbid,\n                               robj **subkeys, int count) {\n    /* Don't do anything if there aren't any subscribers */\n    if (listLength(moduleKeyspaceSubscribers) == 0) return;\n\n    /* Ugly hack to handle modules which use write commands from within\n     * notify_callback, which they should NOT do!\n     * Modules should use RedisModules_AddPostNotificationJob instead.\n     *\n     * Anyway, we want any propagated commands from within notify_callback\n     * to be propagated inside a MULTI/EXEC together with the original\n     * command that caused the KSN.\n     * Note that it's only relevant for KSNs which are not generated from within\n     * call(), for example active-expiry and eviction (because anyway\n     * execution_nesting is incremented from within call())\n     *\n     * In order to do that we increment the execution_nesting counter, thus\n     * preventing postExecutionUnitOperations (from within moduleFreeContext)\n     * from propagating commands from CB.\n     *\n     * This is a special case where we need to increase 'execution_nesting'\n     * but we do not want to update the cached time */\n    enterExecutionUnit(0, 0);\n\n    listIter li;\n    listNode *ln;\n    listRewind(moduleKeyspaceSubscribers,&li);\n\n    /* Remove irrelevant flags from the type mask */\n    type &= ~(NOTIFY_KEYEVENT | NOTIFY_KEYSPACE |\n              NOTIFY_SUBKEYSPACE | NOTIFY_SUBKEYEVENT |\n              NOTIFY_SUBKEYSPACEITEM | NOTIFY_SUBKEYSPACEEVENT);\n\n    while((ln = listNext(&li))) {\n        RedisModuleKeyspaceSubscriber *sub = ln->value;\n        /* Only notify subscribers on events matching the registration,\n         * and avoid subscribers triggering themselves */\n        if ((sub->event_mask & type) &&\n            (sub->active == 0 || (sub->module->options & REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS))) {\n\n            /* If SUBKEYS_REQUIRED is 
set, skip events without subkeys. */\n            if (sub->notify_callback_with_subkeys &&\n                (sub->flags & REDISMODULE_NOTIFY_FLAG_SUBKEYS_REQUIRED) &&\n                (subkeys == NULL || count == 0))\n            {\n                continue;\n            }\n\n            RedisModuleCtx ctx;\n            moduleCreateContext(&ctx, sub->module, REDISMODULE_CTX_TEMP_CLIENT);\n            selectDb(ctx.client, dbid);\n\n            /* mark the handler as active to avoid reentrant loops.\n             * If the subscriber performs an action triggering itself,\n             * it will not be notified about it. */\n            int prev_active = sub->active;\n            sub->active = 1;\n            server.allow_access_expired++;\n            server.allow_access_trimmed++;\n            if (sub->notify_callback) {\n                sub->notify_callback(&ctx, type, event, key);\n            } else if (sub->notify_callback_with_subkeys) {\n                sub->notify_callback_with_subkeys(&ctx, type, event, key, subkeys, count);\n            }\n            server.allow_access_expired--;\n            server.allow_access_trimmed--;\n            sub->active = prev_active;\n            moduleFreeContext(&ctx);\n        }\n    }\n\n    exitExecutionUnit();\n}\n\n/* Unsubscribe any notification subscribers this module has upon unloading */\nvoid moduleUnsubscribeNotifications(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    listRewind(moduleKeyspaceSubscribers,&li);\n    while((ln = listNext(&li))) {\n        RedisModuleKeyspaceSubscriber *sub = ln->value;\n        if (sub->module == module) {\n            listDelNode(moduleKeyspaceSubscribers, ln);\n            zfree(sub);\n        }\n    }\n    moduleUpdateKeyspaceSubscribersTypes();\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules Cluster API\n * -------------------------------------------------------------------------- */\n\n/* The Cluster message 
callback function pointer type. */\ntypedef void (*RedisModuleClusterMessageReceiver)(RedisModuleCtx *ctx, const char *sender_id, uint8_t type, const unsigned char *payload, uint32_t len);\n\n/* This structure identifies a registered receiver: it must match a given module\n * ID, for a given message type. The callback function is just the function\n * that was registered as receiver. */\ntypedef struct moduleClusterReceiver {\n    uint64_t module_id;\n    RedisModuleClusterMessageReceiver callback;\n    struct RedisModule *module;\n    struct moduleClusterReceiver *next;\n} moduleClusterReceiver;\n\ntypedef struct moduleClusterNodeInfo {\n    int flags;\n    char ip[NET_IP_STR_LEN];\n    int port;\n    char master_id[40]; /* Only if flags & REDISMODULE_NODE_SLAVE is true. */\n} moduleClusterNodeInfo;\n\n/* We have an array of message types: each bucket is a linked list of\n * configured receivers. */\nstatic moduleClusterReceiver *clusterReceivers[UINT8_MAX];\n\n/* Dispatch the message to the right module receiver. */\nvoid moduleCallClusterReceivers(const char *sender_id, uint64_t module_id, uint8_t type, const unsigned char *payload, uint32_t len) {\n    moduleClusterReceiver *r = clusterReceivers[type];\n    while(r) {\n        if (r->module_id == module_id) {\n            RedisModuleCtx ctx;\n            moduleCreateContext(&ctx, r->module, REDISMODULE_CTX_TEMP_CLIENT);\n            r->callback(&ctx,sender_id,type,payload,len);\n            moduleFreeContext(&ctx);\n            return;\n        }\n        r = r->next;\n    }\n}\n\n/* Register a callback receiver for cluster messages of type 'type'. If there\n * was already a registered callback, this will replace the callback function\n * with the one provided. Otherwise, if 'callback' is NULL and a receiver is\n * already registered for this module and message type, it is unregistered\n * (so this API call is also used in order to delete the receiver). 
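\n *\n * For illustration, a hypothetical receiver for message type 1 (the handler\n * name and the type value are only examples):\n *\n *     static void PingReceiver(RedisModuleCtx *ctx, const char *sender_id,\n *                              uint8_t type, const unsigned char *payload,\n *                              uint32_t len) {\n *         RedisModule_Log(ctx,\"notice\",\"Got %u bytes from %.*s\",\n *             (unsigned int)len, REDISMODULE_NODE_ID_LEN, sender_id);\n *     }\n *\n *     RedisModule_RegisterClusterMessageReceiver(ctx,1,PingReceiver);\n *\n * And to delete the receiver later:\n *\n *     RedisModule_RegisterClusterMessageReceiver(ctx,1,NULL);\n *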
*/\nvoid RM_RegisterClusterMessageReceiver(RedisModuleCtx *ctx, uint8_t type, RedisModuleClusterMessageReceiver callback) {\n    if (!server.cluster_enabled) return;\n\n    uint64_t module_id = moduleTypeEncodeId(ctx->module->name,0);\n    moduleClusterReceiver *r = clusterReceivers[type], *prev = NULL;\n    while(r) {\n        if (r->module_id == module_id) {\n            /* Found! Set or delete. */\n            if (callback) {\n                r->callback = callback;\n            } else {\n                /* Delete the receiver entry if the user is setting\n                 * it to NULL. Just unlink the receiver node from the\n                 * linked list. */\n                if (prev)\n                    prev->next = r->next;\n                else\n                    clusterReceivers[type] = r->next; /* Update the head */\n                zfree(r);\n            }\n            return;\n        }\n        prev = r;\n        r = r->next;\n    }\n\n    /* Not found, let's add it. */\n    if (callback) {\n        r = zmalloc(sizeof(*r));\n        r->module_id = module_id;\n        r->module = ctx->module;\n        r->callback = callback;\n        r->next = clusterReceivers[type];\n        clusterReceivers[type] = r;\n    }\n}\n\n/* Send a message to all the nodes in the cluster if `target` is NULL, otherwise\n * at the specified target, which is a REDISMODULE_NODE_ID_LEN bytes node ID, as\n * returned by the receiver callback or by the nodes iteration functions.\n *\n * The function returns REDISMODULE_OK if the message was successfully sent,\n * otherwise if the node is not connected or such node ID does not map to any\n * known cluster node, REDISMODULE_ERR is returned. 
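\n *\n * For example, a module could broadcast a short payload to every node (the\n * message type value 0 here is arbitrary and chosen by the module):\n *\n *     RedisModule_SendClusterMessage(ctx,NULL,0,\"HELLO\",5);\n *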
*/\nint RM_SendClusterMessage(RedisModuleCtx *ctx, const char *target_id, uint8_t type, const char *msg, uint32_t len) {\n    if (!server.cluster_enabled) return REDISMODULE_ERR;\n    uint64_t module_id = moduleTypeEncodeId(ctx->module->name,0);\n    if (clusterSendModuleMessageToTarget(target_id,module_id,type,msg,len) == C_OK)\n        return REDISMODULE_OK;\n    else\n        return REDISMODULE_ERR;\n}\n\n/* Return an array of string pointers, where each string pointer points to a cluster\n * node ID of exactly REDISMODULE_NODE_ID_LEN bytes (without any null term).\n * The number of returned node IDs is stored into `*numnodes`.\n * However, if this function is called by a module not running on a Redis\n * instance with Redis Cluster enabled, NULL is returned instead.\n *\n * The IDs returned can be used with RedisModule_GetClusterNodeInfo() in order\n * to get more information about a single node.\n *\n * The array returned by this function must be freed using the function\n * RedisModule_FreeClusterNodesList().\n *\n * Example:\n *\n *     size_t count, j;\n *     char **ids = RedisModule_GetClusterNodesList(ctx,&count);\n *     for (j = 0; j < count; j++) {\n *         RedisModule_Log(ctx,\"notice\",\"Node %.*s\",\n *             REDISMODULE_NODE_ID_LEN,ids[j]);\n *     }\n *     RedisModule_FreeClusterNodesList(ids);\n */\nchar **RM_GetClusterNodesList(RedisModuleCtx *ctx, size_t *numnodes) {\n    UNUSED(ctx);\n\n    if (!server.cluster_enabled) return NULL;\n    return getClusterNodesList(numnodes);\n}\n\n/* Free the node list obtained with RedisModule_GetClusterNodesList. */\nvoid RM_FreeClusterNodesList(char **ids) {\n    if (ids == NULL) return;\n    for (int j = 0; ids[j]; j++) zfree(ids[j]);\n    zfree(ids);\n}\n\n/* Return this node ID (REDISMODULE_CLUSTER_ID_LEN bytes) or NULL if the cluster\n * is disabled. 
*/\nconst char *RM_GetMyClusterID(void) {\n    if (!server.cluster_enabled) return NULL;\n    return getMyClusterId();\n}\n\n/* Return the number of nodes in the cluster, regardless of their state\n * (handshake, noaddress, ...) so that the number of active nodes may actually\n * be smaller, but not greater than this number. If the instance is not in\n * cluster mode, zero is returned. */\nsize_t RM_GetClusterSize(void) {\n    if (!server.cluster_enabled) return 0;\n    return getClusterSize();\n}\n\n/* Populate the specified info for the node having as ID the specified 'id',\n * then returns REDISMODULE_OK. Otherwise if the format of node ID is invalid\n * or the node ID does not exist from the POV of this local node, REDISMODULE_ERR\n * is returned.\n *\n * The arguments `ip`, `master_id`, `port` and `flags` can be NULL in case we don't\n * need to populate back certain info. If an `ip` and `master_id` (only populated\n * if the instance is a slave) are specified, they point to buffers holding\n * at least REDISMODULE_NODE_ID_LEN bytes. 
The strings written back as `ip`\n * and `master_id` are not null terminated.\n *\n * The list of flags reported is the following:\n *\n * * REDISMODULE_NODE_MYSELF:       This node\n * * REDISMODULE_NODE_MASTER:       The node is a master\n * * REDISMODULE_NODE_SLAVE:        The node is a replica\n * * REDISMODULE_NODE_PFAIL:        We see the node as failing\n * * REDISMODULE_NODE_FAIL:         The cluster agrees the node is failing\n * * REDISMODULE_NODE_NOFAILOVER:   The slave is configured to never failover\n */\nint RM_GetClusterNodeInfo(RedisModuleCtx *ctx, const char *id, char *ip, char *master_id, int *port, int *flags) {\n    UNUSED(ctx);\n\n    clusterNode *node = clusterLookupNode(id, CLUSTER_NAMELEN);\n    if (node == NULL || clusterNodePending(node))\n    {\n        return REDISMODULE_ERR;\n    }\n\n    if (ip) redis_strlcpy(ip, clusterNodeIp(node),NET_IP_STR_LEN);\n\n    if (master_id) {\n        /* If the information is not available, the function will set the\n         * field to zero bytes, so that the function remains predictable\n         * when the field can't be populated. */\n        if (clusterNodeIsSlave(node) && clusterNodeGetSlaveof(node))\n            memcpy(master_id, clusterNodeGetName(clusterNodeGetSlaveof(node)) ,REDISMODULE_NODE_ID_LEN);\n        else\n            memset(master_id,0,REDISMODULE_NODE_ID_LEN);\n    }\n    if (port) *port = getNodeDefaultClientPort(node);\n\n    /* As usual, we have to remap flags for modules, in order to ensure\n     * we can provide binary compatibility. 
*/\n    if (flags) {\n        *flags = 0;\n        if (clusterNodeIsMyself(node)) *flags |= REDISMODULE_NODE_MYSELF;\n        if (clusterNodeIsMaster(node)) *flags |= REDISMODULE_NODE_MASTER;\n        if (clusterNodeIsSlave(node)) *flags |= REDISMODULE_NODE_SLAVE;\n        if (clusterNodeTimedOut(node)) *flags |= REDISMODULE_NODE_PFAIL;\n        if (clusterNodeIsFailing(node)) *flags |= REDISMODULE_NODE_FAIL;\n        if (clusterNodeIsNoFailover(node)) *flags |= REDISMODULE_NODE_NOFAILOVER;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Set Redis Cluster flags in order to change the normal behavior of\n * Redis Cluster, especially with the goal of disabling certain functions.\n * This is useful for modules that use the Cluster API in order to create\n * a different distributed system, but still want to use the Redis Cluster\n * message bus. Flags that can be set:\n *\n * * CLUSTER_MODULE_FLAG_NO_FAILOVER\n * * CLUSTER_MODULE_FLAG_NO_REDIRECTION\n *\n * With the following effects:\n *\n * * NO_FAILOVER: prevent Redis Cluster slaves from failing over a dead master.\n *                Also disables the replica migration feature.\n *\n * * NO_REDIRECTION: Every node will accept any key, without trying to perform\n *                   partitioning according to the Redis Cluster algorithm.\n *                   Slots information will still be propagated across the\n *                   cluster, but without effect. */\nvoid RM_SetClusterFlags(RedisModuleCtx *ctx, uint64_t flags) {\n    UNUSED(ctx);\n    server.cluster_module_flags = CLUSTER_MODULE_FLAG_NONE;\n    if (flags & REDISMODULE_CLUSTER_FLAG_NO_FAILOVER)\n        server.cluster_module_flags |= CLUSTER_MODULE_FLAG_NO_FAILOVER;\n    if (flags & REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION)\n        server.cluster_module_flags |= CLUSTER_MODULE_FLAG_NO_REDIRECTION;\n}\n\n/* RM_ClusterDisableTrim allows a module to temporarily prevent slot trimming\n * after a slot migration. 
This is useful when the module has asynchronous\n * operations that rely on keys in migrating slots, which would be trimmed.\n *\n * The module must call RM_ClusterEnableTrim once it has completed those\n * operations to re-enable trimming.\n *\n * Trimming uses a reference counter: every call to RM_ClusterDisableTrim\n * increments the counter, and every RM_ClusterEnableTrim call decrements it.\n * Trimming remains disabled as long as the counter is greater than zero.\n *\n * Disable automatic slot trimming. */\nint RM_ClusterDisableTrim(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n    if (server.cluster_module_trim_disablers < INT_MAX) {\n        server.cluster_module_trim_disablers++;\n        return REDISMODULE_OK;\n    }\n    return REDISMODULE_ERR;\n}\n\n/* Enable automatic slot trimming. See also comments on RM_ClusterDisableTrim. */\nint RM_ClusterEnableTrim(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n    if (server.cluster_module_trim_disablers > 0) {\n        server.cluster_module_trim_disablers--;\n        return REDISMODULE_OK;\n    }\n    return REDISMODULE_ERR;\n}\n\n/* Returns the cluster slot of a key, similar to the `CLUSTER KEYSLOT` command.\n * This function works even if cluster mode is not enabled. */\nunsigned int RM_ClusterKeySlot(RedisModuleString *key) {\n    return keyHashSlot(key->ptr, sdslen(key->ptr));\n}\n\n/* Like `RM_ClusterKeySlot`, but gets a char pointer and a length.\n * Returns the cluster slot of a key, similar to the `CLUSTER KEYSLOT` command.\n * This function works even if cluster mode is not enabled. */\nunsigned int RM_ClusterKeySlotC(const char *keystr, size_t keylen) {\n    return keyHashSlot(keystr, keylen);\n}\n\n/* Returns a short string that can be used as a key or as a hash tag in a key,\n * such that the key maps to the given cluster slot. Returns NULL if slot is not\n * a valid slot. */\nconst char *RM_ClusterCanonicalKeyNameInSlot(unsigned int slot) {\n    return (slot < CLUSTER_SLOTS) ? 
crc16_slot_table[slot] : NULL;\n}\n\n/* Returns 1 if keys in the specified slot can be accessed by this node, 0 otherwise.\n *\n * This function returns 1 in the following cases:\n * - The slot is owned by this node or by its master if this node is a replica\n * - The slot is being imported under the old slot migration approach (CLUSTER SETSLOT <slot> IMPORTING ..)\n * - Not in cluster mode (all slots are accessible)\n *\n * Returns 0 for:\n * - Invalid slot numbers (< 0 or >= 16384)\n * - Slots owned by other nodes\n */\nint RM_ClusterCanAccessKeysInSlot(int slot) {\n    if (slot < 0 || slot >= CLUSTER_SLOTS) return 0;\n    return clusterCanAccessKeysInSlot(slot);\n}\n\n/* Propagate commands along with slot migration.\n *\n * This function allows modules to add commands that will be sent to the\n * destination node before the actual slot migration begins. It should only be\n * called during the REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE event.\n *\n * This function can be called multiple times within the same event to\n * replicate multiple commands. All commands will be sent before the\n * actual slot data migration begins.\n *\n * Note: This function is only available in the fork child process just before\n *       slot snapshot delivery begins.\n *\n * On success REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to the following values:\n *\n * * EINVAL: function arguments or format specifiers are invalid.\n * * EBADF: not called in the correct context, e.g. not called in the REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE event.\n * * ENOENT: command does not exist.\n * * ENOTSUP: command is cross-slot.\n * * ERANGE: command contains keys that are not within the migrating slot range.\n */\nint RM_ClusterPropagateForSlotMigration(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) 
{\n    int argc = 0, flags = 0;\n    robj **argv = NULL;\n    struct redisCommand *cmd;\n    va_list ap;\n\n    if (ctx == NULL || cmdname == NULL || fmt == NULL) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    errno = 0;\n    cmd = lookupCommandByCString((char*)cmdname);\n    if (!cmd) {\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n\n    va_start(ap, fmt);\n    argv = moduleCreateArgvFromUserFormat(cmdname, fmt, &argc, &flags, ap);\n    va_end(ap);\n    if (argv == NULL) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    int ret = asmModulePropagateBeforeSlotSnapshot(cmd, argv, argc);\n    int saved_errno = errno;\n\n    /* Release the argv. */\n    for (int i = 0; i < argc; i++) decrRefCount(argv[i]);\n    zfree(argv);\n    errno = saved_errno;\n    return ret == C_OK ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Returns the locally owned slot ranges for the node.\n *\n * An optional `ctx` can be provided to enable auto-memory management.\n * If cluster mode is disabled, the array will include all slots (0–16383).\n * If the node is a replica, the slot ranges of its master are returned.\n *\n * The returned array must be freed with RM_ClusterFreeSlotRanges().\n */\nRedisModuleSlotRangeArray *RM_ClusterGetLocalSlotRanges(RedisModuleCtx *ctx) {\n    slotRangeArray *slots = clusterGetLocalSlotRanges();\n    if (ctx) autoMemoryAdd(ctx, REDISMODULE_AM_SLOTRANGEARRAY, slots);\n    return (RedisModuleSlotRangeArray *)slots;\n}\n\n/* Frees a slot range array returned by RM_ClusterGetLocalSlotRanges().\n * Pass the `ctx` pointer only if the array was created with a context. 
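\n *\n * Typical usage, pairing the two calls without auto-memory (a sketch showing\n * only the allocation contract, not the array's contents):\n *\n *     RedisModuleSlotRangeArray *ranges =\n *         RedisModule_ClusterGetLocalSlotRanges(NULL);\n *     if (ranges != NULL) {\n *         ... inspect the ranges ...\n *         RedisModule_ClusterFreeSlotRanges(NULL, ranges);\n *     }\n *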
*/\nvoid RM_ClusterFreeSlotRanges(RedisModuleCtx *ctx, RedisModuleSlotRangeArray *slots) {\n    if (ctx) autoMemoryFreed(ctx, REDISMODULE_AM_SLOTRANGEARRAY, slots);\n    slotRangeArrayFree((slotRangeArray *)slots);\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules Timers API\n *\n * Module timers are a high precision \"green timers\" abstraction where\n * every module can register even millions of timers without problems, even if\n * the actual event loop will just have a single timer that is used to awake the\n * module timers subsystem in order to process the next event.\n *\n * All the timers are stored into a radix tree, ordered by expire time, when\n * the main Redis event loop timer callback is called, we try to process all\n * the timers already expired one after the other. Then we re-enter the event\n * loop registering a timer that will expire when the next to process module\n * timer will expire.\n *\n * Every time the list of active timers drops to zero, we unregister the\n * main event loop timer, so that there is no overhead when such feature is\n * not used.\n * -------------------------------------------------------------------------- */\n\nstatic rax *Timers;     /* The radix tree of all the timers sorted by expire. */\nlong long aeTimer = -1; /* Main event loop (ae.c) timer identifier. */\n\ntypedef void (*RedisModuleTimerProc)(RedisModuleCtx *ctx, void *data);\n\n/* The timer descriptor, stored as value in the radix tree. */\ntypedef struct RedisModuleTimer {\n    RedisModule *module;                /* Module reference. */\n    RedisModuleTimerProc callback;      /* The callback to invoke on expire. */\n    void *data;                         /* Private data for the callback. */\n    int dbid;                           /* Database number selected by the original client. */\n} RedisModuleTimer;\n\n/* This is the timer handler that is called by the main event loop. 
We schedule\n * this timer to be called when the nearest of our module timers will expire. */\nint moduleTimerHandler(struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    UNUSED(eventLoop);\n    UNUSED(id);\n    UNUSED(clientData);\n\n    /* To start, let's try to fire all the timers already expired. */\n    raxIterator ri;\n    raxStart(&ri,Timers);\n    uint64_t now = ustime();\n    long long next_period = 0;\n    while(1) {\n        raxSeek(&ri,\"^\",NULL,0);\n        if (!raxNext(&ri)) break;\n        uint64_t expiretime;\n        memcpy(&expiretime,ri.key,sizeof(expiretime));\n        expiretime = ntohu64(expiretime);\n        if (now >= expiretime) {\n            RedisModuleTimer *timer = ri.data;\n            RedisModuleCtx ctx;\n            moduleCreateContext(&ctx,timer->module,REDISMODULE_CTX_TEMP_CLIENT);\n            selectDb(ctx.client, timer->dbid);\n            timer->callback(&ctx,timer->data);\n            moduleFreeContext(&ctx);\n            raxRemove(Timers,(unsigned char*)ri.key,ri.key_len,NULL);\n            zfree(timer);\n        } else {\n            /* We call ustime() again instead of using the cached 'now' so that\n             * 'next_period' isn't affected by the time it took to execute\n             * previous calls to 'callback'.\n             * We need to cast 'expiretime' so that the compiler will not treat\n             * the difference as unsigned (causing next_period to be huge) in\n             * case expiretime < ustime(). */\n            next_period = ((long long)expiretime-ustime())/1000; /* Scale to milliseconds. */\n            break;\n        }\n    }\n    raxStop(&ri);\n\n    /* Reschedule the next timer or cancel it. 
*/\n    if (next_period <= 0) next_period = 1;\n    if (raxSize(Timers) > 0) {\n        return next_period;\n    } else {\n        aeTimer = -1;\n        return AE_NOMORE;\n    }\n}\n\n/* Create a new timer that will fire after `period` milliseconds, and will call\n * the specified function using `data` as argument. The returned timer ID can be\n * used to get information from the timer or to stop it before it fires.\n * Note that for the common use case of a repeating timer (re-registration\n * of the timer inside the RedisModuleTimerProc callback) it matters when\n * this API is called:\n * If it is called at the beginning of 'callback', the event will be\n * triggered every 'period' milliseconds.\n * If it is called at the end of 'callback', there will be 'period'\n * millisecond gaps between events.\n * (If the time it takes to execute 'callback' is negligible, the two\n * statements above mean the same.) */\nRedisModuleTimerID RM_CreateTimer(RedisModuleCtx *ctx, mstime_t period, RedisModuleTimerProc callback, void *data) {\n    RedisModuleTimer *timer = zmalloc(sizeof(*timer));\n    timer->module = ctx->module;\n    timer->callback = callback;\n    timer->data = data;\n    timer->dbid = ctx->client ? ctx->client->db->id : 0;\n    uint64_t expiretime = ustime()+period*1000;\n    uint64_t key;\n\n    while(1) {\n        key = htonu64(expiretime);\n        if (!raxFind(Timers, (unsigned char*)&key,sizeof(key),NULL)) {\n            raxInsert(Timers,(unsigned char*)&key,sizeof(key),timer,NULL);\n            break;\n        } else {\n            expiretime++;\n        }\n    }\n\n    /* We need to install the main event loop timer if it's not already\n     * installed, or we may need to refresh its period if we just installed\n     * a timer that will expire sooner than any other (i.e. the timer\n     * we just installed is the first timer in the Timers rax). 
*/\n    if (aeTimer != -1) {\n        raxIterator ri;\n        raxStart(&ri,Timers);\n        raxSeek(&ri,\"^\",NULL,0);\n        raxNext(&ri);\n        if (memcmp(ri.key,&key,sizeof(key)) == 0) {\n            /* This is the first key, we need to re-install the timer according\n             * to the just added event. */\n            aeDeleteTimeEvent(server.el,aeTimer);\n            aeTimer = -1;\n        }\n        raxStop(&ri);\n    }\n\n    /* If we have no main timer (the old one was invalidated, or this is the\n     * first module timer we have), install one. */\n    if (aeTimer == -1)\n        aeTimer = aeCreateTimeEvent(server.el,period,moduleTimerHandler,NULL,NULL);\n\n    return key;\n}\n\n/* Stop a timer, returns REDISMODULE_OK if the timer was found, belonged to the\n * calling module, and was stopped, otherwise REDISMODULE_ERR is returned.\n * If not NULL, the data pointer is set to the value of the data argument when\n * the timer was created. */\nint RM_StopTimer(RedisModuleCtx *ctx, RedisModuleTimerID id, void **data) {\n    void *result;\n    if (!raxFind(Timers,(unsigned char*)&id,sizeof(id),&result))\n        return REDISMODULE_ERR;\n    RedisModuleTimer *timer = result;\n    if (timer->module != ctx->module)\n        return REDISMODULE_ERR;\n    if (data) *data = timer->data;\n    raxRemove(Timers,(unsigned char*)&id,sizeof(id),NULL);\n    zfree(timer);\n    return REDISMODULE_OK;\n}\n\n/* Obtain information about a timer: its remaining time before firing\n * (in milliseconds), and the private data pointer associated with the timer.\n * If the timer specified does not exist or belongs to a different module\n * no information is returned and the function returns REDISMODULE_ERR, otherwise\n * REDISMODULE_OK is returned. The arguments remaining or data can be NULL if\n * the caller does not need certain information. 
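\n *\n * For example ('TimerHandler' is a hypothetical module callback):\n *\n *     RedisModuleTimerID id =\n *         RedisModule_CreateTimer(ctx,1000,TimerHandler,NULL);\n *     uint64_t remaining;\n *     if (RedisModule_GetTimerInfo(ctx,id,&remaining,NULL) == REDISMODULE_OK) {\n *         RedisModule_Log(ctx,\"notice\",\"%llu ms left\",\n *             (unsigned long long)remaining);\n *     }\n *     RedisModule_StopTimer(ctx,id,NULL);\n *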
*/\nint RM_GetTimerInfo(RedisModuleCtx *ctx, RedisModuleTimerID id, uint64_t *remaining, void **data) {\n    void *result;\n    if (!raxFind(Timers,(unsigned char*)&id,sizeof(id),&result))\n        return REDISMODULE_ERR;\n    RedisModuleTimer *timer = result;\n    if (timer->module != ctx->module)\n        return REDISMODULE_ERR;\n    if (remaining) {\n        int64_t rem = ntohu64(id)-ustime();\n        if (rem < 0) rem = 0;\n        *remaining = rem/1000; /* Scale to milliseconds. */\n    }\n    if (data) *data = timer->data;\n    return REDISMODULE_OK;\n}\n\n/* Query timers to see if any timer belongs to the module.\n * Returns 1 if any timer was found, otherwise 0. */\nint moduleHoldsTimer(struct RedisModule *module) {\n    raxIterator iter;\n    int found = 0;\n    raxStart(&iter,Timers);\n    raxSeek(&iter,\"^\",NULL,0);\n    while (raxNext(&iter)) {\n        RedisModuleTimer *timer = iter.data;\n        if (timer->module == module) {\n            found = 1;\n            break;\n        }\n    }\n    raxStop(&iter);\n    return found;\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules EventLoop API\n * --------------------------------------------------------------------------*/\n\ntypedef struct EventLoopData {\n    RedisModuleEventLoopFunc rFunc;\n    RedisModuleEventLoopFunc wFunc;\n    void *user_data;\n} EventLoopData;\n\ntypedef struct EventLoopOneShot {\n    RedisModuleEventLoopOneShotFunc func;\n    void *user_data;\n} EventLoopOneShot;\n\nlist *moduleEventLoopOneShots;\nstatic pthread_mutex_t moduleEventLoopMutex = PTHREAD_MUTEX_INITIALIZER;\n\nstatic int eventLoopToAeMask(int mask) {\n    int aeMask = 0;\n    if (mask & REDISMODULE_EVENTLOOP_READABLE)\n        aeMask |= AE_READABLE;\n    if (mask & REDISMODULE_EVENTLOOP_WRITABLE)\n        aeMask |= AE_WRITABLE;\n    return aeMask;\n}\n\nstatic int eventLoopFromAeMask(int ae_mask) {\n    int mask = 0;\n    if (ae_mask & AE_READABLE)\n   
     mask |= REDISMODULE_EVENTLOOP_READABLE;\n    if (ae_mask & AE_WRITABLE)\n        mask |= REDISMODULE_EVENTLOOP_WRITABLE;\n    return mask;\n}\n\nstatic void eventLoopCbReadable(struct aeEventLoop *ae, int fd, void *user_data, int ae_mask) {\n    UNUSED(ae);\n    EventLoopData *data = user_data;\n    data->rFunc(fd, data->user_data, eventLoopFromAeMask(ae_mask));\n}\n\nstatic void eventLoopCbWritable(struct aeEventLoop *ae, int fd, void *user_data, int ae_mask) {\n    UNUSED(ae);\n    EventLoopData *data = user_data;\n    data->wFunc(fd, data->user_data, eventLoopFromAeMask(ae_mask));\n}\n\n/* Add a pipe / socket event to the event loop.\n *\n * * `mask` must be one of the following values:\n *\n *     * `REDISMODULE_EVENTLOOP_READABLE`\n *     * `REDISMODULE_EVENTLOOP_WRITABLE`\n *     * `REDISMODULE_EVENTLOOP_READABLE | REDISMODULE_EVENTLOOP_WRITABLE`\n *\n * On success REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to the following values:\n *\n * * ERANGE: `fd` is negative or higher than `maxclients` Redis config.\n * * EINVAL: `callback` is NULL or `mask` value is invalid.\n *\n * `errno` might take other values in case of an internal error.\n *\n * Example:\n *\n *     void onReadable(int fd, void *user_data, int mask) {\n *         char buf[32];\n *         int bytes = read(fd,buf,sizeof(buf));\n *         printf(\"Read %d bytes \\n\", bytes);\n *     }\n *     RM_EventLoopAdd(fd, REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL);\n */\nint RM_EventLoopAdd(int fd, int mask, RedisModuleEventLoopFunc func, void *user_data) {\n    if (fd < 0 || fd >= aeGetSetSize(server.el)) {\n        errno = ERANGE;\n        return REDISMODULE_ERR;\n    }\n\n    if (!func || mask & ~(REDISMODULE_EVENTLOOP_READABLE |\n                          REDISMODULE_EVENTLOOP_WRITABLE)) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    /* We are going to register stub callbacks to 'ae' for two reasons:\n     *\n     * - 
The \"ae\" callback signature is different from RedisModuleEventLoopFunc;\n     *   this is handled in our stub callbacks.\n     * - We need to remap the 'mask' value to provide binary compatibility.\n     *\n     * For the stub callbacks, we save the user's 'callback' and 'user_data' in an\n     * EventLoopData object and pass it to ae; later, we extract\n     * 'callback' and 'user_data' from it.\n     */\n    EventLoopData *data = aeGetFileClientData(server.el, fd);\n    if (!data)\n        data = zcalloc(sizeof(*data));\n\n    aeFileProc *aeProc;\n    if (mask & REDISMODULE_EVENTLOOP_READABLE)\n        aeProc = eventLoopCbReadable;\n    else\n        aeProc = eventLoopCbWritable;\n\n    int aeMask = eventLoopToAeMask(mask);\n\n    if (aeCreateFileEvent(server.el, fd, aeMask, aeProc, data) != AE_OK) {\n        if (aeGetFileEvents(server.el, fd) == AE_NONE)\n            zfree(data);\n        return REDISMODULE_ERR;\n    }\n\n    data->user_data = user_data;\n    if (mask & REDISMODULE_EVENTLOOP_READABLE)\n        data->rFunc = func;\n    if (mask & REDISMODULE_EVENTLOOP_WRITABLE)\n        data->wFunc = func;\n\n    errno = 0;\n    return REDISMODULE_OK;\n}\n\n/* Delete a pipe / socket event from the event loop.\n *\n * * `mask` must be one of the following values:\n *\n *     * `REDISMODULE_EVENTLOOP_READABLE`\n *     * `REDISMODULE_EVENTLOOP_WRITABLE`\n *     * `REDISMODULE_EVENTLOOP_READABLE | REDISMODULE_EVENTLOOP_WRITABLE`\n *\n * On success REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to the following values:\n *\n * * ERANGE: `fd` is negative or higher than `maxclients` Redis config.\n * * EINVAL: `mask` value is invalid.\n */\nint RM_EventLoopDel(int fd, int mask) {\n    if (fd < 0 || fd >= aeGetSetSize(server.el)) {\n        errno = ERANGE;\n        return REDISMODULE_ERR;\n    }\n\n    if (mask & ~(REDISMODULE_EVENTLOOP_READABLE |\n                 REDISMODULE_EVENTLOOP_WRITABLE)) {\n        errno = 
EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    /* After deleting the event, if fd does not have any registered event\n     * anymore, we can free the EventLoopData object. */\n    EventLoopData *data = aeGetFileClientData(server.el, fd);\n    aeDeleteFileEvent(server.el, fd, eventLoopToAeMask(mask));\n    if (aeGetFileEvents(server.el, fd) == AE_NONE)\n        zfree(data);\n\n    errno = 0;\n    return REDISMODULE_OK;\n}\n\n/* This function can be called from other threads to trigger callback on Redis\n * main thread. On success REDISMODULE_OK is returned. If `func` is NULL\n * REDISMODULE_ERR is returned and errno is set to EINVAL.\n */\nint RM_EventLoopAddOneShot(RedisModuleEventLoopOneShotFunc func, void *user_data) {\n    if (!func) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    EventLoopOneShot *oneshot = zmalloc(sizeof(*oneshot));\n    oneshot->func = func;\n    oneshot->user_data = user_data;\n\n    pthread_mutex_lock(&moduleEventLoopMutex);\n    if (!moduleEventLoopOneShots) moduleEventLoopOneShots = listCreate();\n    listAddNodeTail(moduleEventLoopOneShots, oneshot);\n    pthread_mutex_unlock(&moduleEventLoopMutex);\n\n    if (write(server.module_pipe[1],\"A\",1) != 1) {\n        /* Pipe is non-blocking, write() may fail if it's full. */\n    }\n\n    errno = 0;\n    return REDISMODULE_OK;\n}\n\n/* This function will check the moduleEventLoopOneShots queue in order to\n * call the callback for the registered oneshot events. */\nstatic void eventLoopHandleOneShotEvents(void) {\n    pthread_mutex_lock(&moduleEventLoopMutex);\n    if (moduleEventLoopOneShots) {\n        while (listLength(moduleEventLoopOneShots)) {\n            listNode *ln = listFirst(moduleEventLoopOneShots);\n            EventLoopOneShot *oneshot = ln->value;\n            listDelNode(moduleEventLoopOneShots, ln);\n            /* Unlock mutex before the callback. 
Another oneshot event can be\n             * added in the callback, and it will need to lock the mutex. */\n            pthread_mutex_unlock(&moduleEventLoopMutex);\n            oneshot->func(oneshot->user_data);\n            zfree(oneshot);\n            /* Lock again for the next iteration */\n            pthread_mutex_lock(&moduleEventLoopMutex);\n        }\n    }\n    pthread_mutex_unlock(&moduleEventLoopMutex);\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules ACL API\n *\n * Implements a hook into the authentication and authorization within Redis.\n * --------------------------------------------------------------------------*/\n\n/* This function is called when a client's user has changed and invokes the\n * client's user changed callback if it was set. This callback should\n * clean up any state the module was tracking about this client.\n *\n * A client's user can be changed through the AUTH command, module\n * authentication, and when a client is freed. */\nvoid moduleNotifyUserChanged(client *c) {\n    if (c->auth_callback) {\n        c->auth_callback(c->id, c->auth_callback_privdata);\n\n        /* The callback will fire exactly once, even if the user remains\n         * the same. It is expected to completely clean up the state\n         * so all references are cleared here. */\n        c->auth_callback = NULL;\n        c->auth_callback_privdata = NULL;\n        c->auth_module = NULL;\n    }\n}\n\nvoid revokeClientAuthentication(client *c) {\n    /* Freeing the client would result in moduleNotifyUserChanged() being\n     * called later; however, since we also use revokeClientAuthentication()\n     * in moduleFreeAuthenticatedClients() to implement module unloading, we\n     * do this action ASAP: this way if the module is unloaded, when the client\n     * is eventually freed we don't rely on the module to still exist. 
*/\n    moduleNotifyUserChanged(c);\n\n    deauthenticateAndCloseClient(c);\n}\n\n/* Clean up all clients that have been authenticated with this module. This\n * is called from onUnload() to give the module a chance to clean up any\n * resources associated with clients it has authenticated. */\nstatic void moduleFreeAuthenticatedClients(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    listRewind(server.clients,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n        if (!c->auth_module) continue;\n\n        RedisModule *auth_module = (RedisModule *) c->auth_module;\n        if (auth_module == module) {\n            revokeClientAuthentication(c);\n        }\n    }\n}\n\n/* Creates a Redis ACL user that the module can use to authenticate a client.\n * After obtaining the user, the module should set what such a user can do\n * using the RM_SetModuleUserACL() function. Once configured, the user\n * can be used in order to authenticate a connection, with the specified\n * ACL rules, using the RedisModule_AuthenticateClientWithUser() function.\n *\n * Note that:\n *\n * * Users created here are not listed by the ACL command.\n * * Users created here are not checked for duplicated names, so it's up to\n *   the module calling this function to take care of not creating users\n *   with the same name.\n * * The created user can be used to authenticate multiple Redis connections.\n *\n * The caller can later free the user using the function\n * RM_FreeModuleUser(). When this function is called, if there are\n * still clients authenticated with this user, they are disconnected.\n * The function to free the user should only be used when the caller really\n * wants to invalidate the user to define a new one with different\n * capabilities. 
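The lifecycle described above (a module-local user created once, granted rules incrementally, and freed when the module goes away) might be sketched like this; the module name, user name, and ACL rules are hypothetical, and error handling is abbreviated:

```c
#include "redismodule.h"

static RedisModuleUser *metrics_user = NULL;

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "aclsketch", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    /* The user starts with no permissions; grant rules one at a time.
     * Duplicate names are not checked, so pick a module-prefixed name. */
    metrics_user = RedisModule_CreateModuleUser("aclsketch-metrics");
    if (RedisModule_SetModuleUserACL(metrics_user, "on") == REDISMODULE_ERR ||
        RedisModule_SetModuleUserACL(metrics_user, "~metrics:*") == REDISMODULE_ERR ||
        RedisModule_SetModuleUserACL(metrics_user, "+get") == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}

int RedisModule_OnUnload(RedisModuleCtx *ctx) {
    REDISMODULE_NOT_USED(ctx);
    /* Disconnects any clients still authenticated as this user. */
    if (metrics_user) RedisModule_FreeModuleUser(metrics_user);
    return REDISMODULE_OK;
}
```

This only compiles as part of a module build against `redismodule.h`; it is a usage sketch, not something runnable on its own.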
*/\nRedisModuleUser *RM_CreateModuleUser(const char *name) {\n    RedisModuleUser *new_user = zmalloc(sizeof(RedisModuleUser));\n    new_user->user = ACLCreateUnlinkedUser();\n    new_user->free_user = 1;\n\n    /* Free the previous temporarily assigned name to assign the new one */\n    sdsfree(new_user->user->name);\n    new_user->user->name = sdsnew(name);\n    return new_user;\n}\n\n/* Frees a given user and disconnects all of the clients that have been\n * authenticated with it. See RM_CreateModuleUser for detailed usage.*/\nint RM_FreeModuleUser(RedisModuleUser *user) {\n    if (user->free_user)\n        ACLFreeUserAndKillClients(user->user);\n    zfree(user);\n    return REDISMODULE_OK;\n}\n\n/* Return the username of the given RedisModuleUser as a RedisModuleString.\n * Returns NULL if user is NULL or the user has no name.\n * The returned string must be freed by the caller with RedisModule_FreeString()\n * or by enabling automatic memory management on a context. */\n RedisModuleString *RM_GetUserUsername(RedisModuleCtx *ctx, const RedisModuleUser *user) {\n    if(user == NULL || user->user == NULL || user->user->name == NULL) \n        return NULL;\n    \n    return RM_CreateString(ctx, user->user->name, sdslen(user->user->name));\n}\n\n/* Sets the permissions of a user created through the redis module\n * interface. The syntax is the same as ACL SETUSER, so refer to the\n * documentation in acl.c for more information. See RM_CreateModuleUser\n * for detailed usage.\n *\n * Returns REDISMODULE_OK on success and REDISMODULE_ERR on failure\n * and will set an errno describing why the operation failed. */\nint RM_SetModuleUserACL(RedisModuleUser *user, const char* acl) {\n    return ACLSetUser(user->user, acl, -1);\n}\n\n/* Sets the permission of a user with a complete ACL string, such as one\n * would use on the redis ACL SETUSER command line API. 
This differs from\n * RM_SetModuleUserACL, which only takes single ACL operations at a time.\n *\n * Returns REDISMODULE_OK on success and REDISMODULE_ERR on failure.\n * On failure, if a RedisModuleString pointer is provided via `error`, it\n * will be set to a string describing the error.\n */\nint RM_SetModuleUserACLString(RedisModuleCtx *ctx, RedisModuleUser *user, const char *acl, RedisModuleString **error) {\n    serverAssert(user != NULL);\n\n    int argc;\n    sds *argv = sdssplitargs(acl, &argc);\n\n    sds err = ACLStringSetUser(user->user, NULL, argv, argc);\n\n    sdsfreesplitres(argv, argc);\n\n    if (err) {\n        if (error) {\n            *error = createObject(OBJ_STRING, err);\n            if (ctx != NULL) autoMemoryAdd(ctx, REDISMODULE_AM_STRING, *error);\n        } else {\n            sdsfree(err);\n        }\n\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Get the ACL string for a given user.\n * Returns a RedisModuleString.\n */\nRedisModuleString *RM_GetModuleUserACLString(RedisModuleUser *user) {\n    serverAssert(user != NULL);\n\n    return ACLDescribeUser(user->user);\n}\n\n/* Retrieve the user name of the client connection behind the current context.\n * The user name can be used later, in order to get a RedisModuleUser.\n * See more information in RM_GetModuleUserFromUserName.\n *\n * The returned string must be released with RedisModule_FreeString() or by\n * enabling automatic memory management. */\nRedisModuleString *RM_GetCurrentUserName(RedisModuleCtx *ctx) {\n    /* Sometimes, the user isn't passed along the call stack or isn't\n     * even set, so we need to check for the members to avoid crashes. 
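The recommended flow for checking a general ACL user outside the original call context is: store the user name, then re-resolve it when needed. A hedged sketch (the command callbacks and `saved_name` retention scheme are mine, not part of the API):

```c
#include "redismodule.h"

static RedisModuleString *saved_name = NULL;

/* In a command: remember who the caller is. RetainString keeps the name
 * alive past this callback. */
int saveCallerName(RedisModuleCtx *ctx) {
    RedisModuleString *name = RedisModule_GetCurrentUserName(ctx);
    if (name == NULL)
        return RedisModule_ReplyWithError(ctx, "ERR no current user");
    RedisModule_RetainString(ctx, name);
    saved_name = name;
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

/* Later, in any other context: re-resolve the user by name, since a
 * general ACL user may have been deleted or changed in the meantime. */
int checkSavedUserCanRun(RedisModuleString **argv, int argc) {
    RedisModuleUser *u = RedisModule_GetModuleUserFromUserName(saved_name);
    if (u == NULL) return REDISMODULE_ERR; /* disabled or deleted */
    int rc = RedisModule_ACLCheckCommandPermissions(u, argv, argc);
    RedisModule_FreeModuleUser(u); /* free_user == 0, so only the wrapper */
    return rc; /* on error, errno is ENOENT / EINVAL / EACCES */
}
```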
*/\n    if (ctx->client == NULL || ctx->client->user == NULL || ctx->client->user->name == NULL) {\n        return NULL;\n    }\n\n    return RM_CreateString(ctx,ctx->client->user->name,sdslen(ctx->client->user->name));\n}\n\n/* A RedisModuleUser can be used to check if a command, key or channel can be executed or\n * accessed according to the ACL rules associated with that user.\n * When a Module wants to do ACL checks on a general ACL user (not created by RM_CreateModuleUser),\n * it can get the RedisModuleUser from this API, based on the user name retrieved by RM_GetCurrentUserName.\n *\n * Since a general ACL user can be deleted at any time, this RedisModuleUser should be used only in the context\n * where this function was called. In order to do ACL checks out of that context, the Module can store the user name,\n * and call this API in any other context.\n *\n * Returns NULL if the user is disabled or the user does not exist.\n * The caller should later free the user using the function RM_FreeModuleUser().*/\nRedisModuleUser *RM_GetModuleUserFromUserName(RedisModuleString *name) {\n    /* First, verify that the user exists */\n    user *acl_user = ACLGetUserByName(name->ptr, sdslen(name->ptr));\n    if (acl_user == NULL) {\n        return NULL;\n    }\n\n    RedisModuleUser *new_user = zmalloc(sizeof(RedisModuleUser));\n    new_user->user = acl_user;\n    new_user->free_user = 0;\n    return new_user;\n}\n\n/* Checks if the command can be executed by the user, according to the ACLs associated with it.\n *\n * On success REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to one of the following values:\n *\n * * ENOENT: Specified command does not exist.\n * * EINVAL: Invalid number of arguments for the specified command.\n * * EACCES: Command cannot be executed, according to ACL rules.\n */\nint RM_ACLCheckCommandPermissions(RedisModuleUser *user, RedisModuleString **argv, int argc) {\n    int keyidxptr;\n    struct redisCommand 
*cmd;\n\n    /* Find command */\n    if ((cmd = lookupCommand(argv, argc)) == NULL) {\n        errno = ENOENT;\n        return REDISMODULE_ERR;\n    }\n\n    if (!commandCheckArity(cmd, argc, NULL)) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    if (ACLCheckAllUserCommandPerm(user->user, cmd, argv, argc, NULL, &keyidxptr) != ACL_OK) {\n        errno = EACCES;\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Check if the key can be accessed by the user according to the ACLs attached to the user\n * and the flags representing the key access. The flags are the same that are used in the\n * keyspec for logical operations. These flags are documented in RedisModule_SetCommandInfo as\n * the REDISMODULE_CMD_KEY_ACCESS, REDISMODULE_CMD_KEY_UPDATE, REDISMODULE_CMD_KEY_INSERT,\n * and REDISMODULE_CMD_KEY_DELETE flags.\n *\n * If no flags are supplied, the user is still required to have some access to the key for\n * this command to return successfully.\n *\n * If the user is able to access the key then REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to one of the following values:\n *\n * * EINVAL: The provided flags are invalid.\n * * EACCES: The user does not have permission to access the key.\n */\nint RM_ACLCheckKeyPermissions(RedisModuleUser *user, RedisModuleString *key, int flags) {\n    const int allow_mask = (REDISMODULE_CMD_KEY_ACCESS\n        | REDISMODULE_CMD_KEY_INSERT\n        | REDISMODULE_CMD_KEY_DELETE\n        | REDISMODULE_CMD_KEY_UPDATE);\n\n    if ((flags & allow_mask) != flags) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    int keyspec_flags = moduleConvertKeySpecsFlags(flags, 0);\n    if (ACLUserCheckKeyPerm(user->user, key->ptr, sdslen(key->ptr), keyspec_flags) != ACL_OK) {\n        errno = EACCES;\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Check if the user can access keys matching the given 
key prefix according to the ACLs \n * attached to the user and the flags representing key access. The flags are the same that \n * are used in the keyspec for logical operations. These flags are documented in \n * RedisModule_SetCommandInfo as the REDISMODULE_CMD_KEY_ACCESS, \n * REDISMODULE_CMD_KEY_UPDATE, REDISMODULE_CMD_KEY_INSERT, and REDISMODULE_CMD_KEY_DELETE flags.\n * \n * If no flags are supplied, the user is still required to have some access to keys matching \n * the prefix for this command to return successfully.\n *\n * If the user is able to access keys matching the prefix, then REDISMODULE_OK is returned.\n * Otherwise, REDISMODULE_ERR is returned and errno is set to one of the following values:\n * \n * * EINVAL: The provided flags are invalid.\n * * EACCES: The user does not have permission to access keys matching the prefix.\n */\nint RM_ACLCheckKeyPrefixPermissions(RedisModuleUser *user, RedisModuleString *prefix, int flags) {\n    const int allow_mask = (REDISMODULE_CMD_KEY_ACCESS\n                            | REDISMODULE_CMD_KEY_INSERT\n                            | REDISMODULE_CMD_KEY_DELETE\n                            | REDISMODULE_CMD_KEY_UPDATE);\n\n    if ((flags & allow_mask) != flags) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    int keyspec_flags = moduleConvertKeySpecsFlags(flags, 0);\n\n    /* Add the prefix flag to the keyspec flags */\n    keyspec_flags |= CMD_KEY_PREFIX;\n\n    if (ACLUserCheckKeyPerm(user->user, prefix->ptr, sdslen(prefix->ptr), keyspec_flags) != ACL_OK) {\n        errno = EACCES;\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Check if the pubsub channel can be accessed by the user based off of the given\n * access flags. 
See RM_ChannelAtPosWithFlags for more information about the\n * possible flags that can be passed in.\n *\n * If the user is able to access the pubsub channel then REDISMODULE_OK is returned, otherwise\n * REDISMODULE_ERR is returned and errno is set to one of the following values:\n *\n * * EINVAL: The provided flags are invalid.\n * * EACCES: The user does not have permission to access the pubsub channel.\n */\nint RM_ACLCheckChannelPermissions(RedisModuleUser *user, RedisModuleString *ch, int flags) {\n    const int allow_mask = (REDISMODULE_CMD_CHANNEL_PUBLISH\n        | REDISMODULE_CMD_CHANNEL_SUBSCRIBE\n        | REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE\n        | REDISMODULE_CMD_CHANNEL_PATTERN);\n\n    if ((flags & allow_mask) != flags) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    /* Unsubscribe permissions are currently always allowed. */\n    if (flags & REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE){\n        return REDISMODULE_OK;\n    }\n\n    int is_pattern = flags & REDISMODULE_CMD_CHANNEL_PATTERN;\n    if (ACLUserCheckChannelPerm(user->user, ch->ptr, is_pattern) != ACL_OK) {\n        /* Set errno as documented above. */\n        errno = EACCES;\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Helper function to map a RedisModuleACLLogEntryReason to an ACL Log entry reason. 
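Taken together, the key and channel checks above let a module gate an operation on behalf of a user before performing it. A hedged sketch (the helper name and the choice of flags are illustrative; `u` would come from RM_GetModuleUserFromUserName):

```c
#include "redismodule.h"

/* Return 1 if 'u' may read and update 'key' and publish notifications on
 * 'chan', 0 otherwise. On 0, errno is EINVAL (bad flags) or EACCES. */
int userMayProcess(RedisModuleUser *u, RedisModuleString *key,
                   RedisModuleString *chan) {
    /* Read/write access to the key... */
    if (RedisModule_ACLCheckKeyPermissions(u, key,
            REDISMODULE_CMD_KEY_ACCESS | REDISMODULE_CMD_KEY_UPDATE)
            == REDISMODULE_ERR)
        return 0;

    /* ...and permission to publish on the channel. */
    if (RedisModule_ACLCheckChannelPermissions(u, chan,
            REDISMODULE_CMD_CHANNEL_PUBLISH) == REDISMODULE_ERR)
        return 0;

    return 1;
}
```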
*/\nint moduleGetACLLogEntryReason(RedisModuleACLLogEntryReason reason) {\n    int acl_reason = 0;\n    switch (reason) {\n        case REDISMODULE_ACL_LOG_AUTH: acl_reason = ACL_DENIED_AUTH; break;\n        case REDISMODULE_ACL_LOG_KEY: acl_reason = ACL_DENIED_KEY; break;\n        case REDISMODULE_ACL_LOG_CHANNEL: acl_reason = ACL_DENIED_CHANNEL; break;\n        case REDISMODULE_ACL_LOG_CMD: acl_reason = ACL_DENIED_CMD; break;\n        default: break;\n    }\n    return acl_reason;\n}\n\n/* Adds a new entry in the ACL log.\n * Returns REDISMODULE_OK on success and REDISMODULE_ERR on error.\n *\n * For more information about ACL log, please refer to https://redis.io/commands/acl-log */\nint RM_ACLAddLogEntry(RedisModuleCtx *ctx, RedisModuleUser *user, RedisModuleString *object, RedisModuleACLLogEntryReason reason) {\n    int acl_reason = moduleGetACLLogEntryReason(reason);\n    if (!acl_reason) return REDISMODULE_ERR;\n    addACLLogEntry(ctx->client, acl_reason, ACL_LOG_CTX_MODULE, -1, user->user->name, sdsdup(object->ptr));\n    return REDISMODULE_OK;\n}\n\n/* Adds a new entry in the ACL log with the `username` RedisModuleString provided.\n * Returns REDISMODULE_OK on success and REDISMODULE_ERR on error.\n *\n * For more information about ACL log, please refer to https://redis.io/commands/acl-log */\nint RM_ACLAddLogEntryByUserName(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *object, RedisModuleACLLogEntryReason reason) {\n    int acl_reason = moduleGetACLLogEntryReason(reason);\n    if (!acl_reason) return REDISMODULE_ERR;\n    addACLLogEntry(ctx->client, acl_reason, ACL_LOG_CTX_MODULE, -1, username->ptr, sdsdup(object->ptr));\n    return REDISMODULE_OK;\n}\n\n/* Authenticate the client associated with the context with\n * the provided user. Returns REDISMODULE_OK on success and\n * REDISMODULE_ERR on error.\n *\n * This authentication can be tracked with the optional callback and private\n * data fields. 
The callback will be called whenever the user of the client\n * changes. This callback should be used to cleanup any state that is being\n * kept in the module related to the client authentication. It will only be\n * called once, even when the user hasn't changed, in order to allow for a\n * new callback to be specified. If this authentication does not need to be\n * tracked, pass in NULL for the callback and privdata.\n *\n * If client_id is not NULL, it will be filled with the id of the client\n * that was authenticated. This can be used with the\n * RM_DeauthenticateAndCloseClient() API in order to deauthenticate a\n * previously authenticated client if the authentication is no longer valid.\n *\n * For expensive authentication operations, it is recommended to block the\n * client and do the authentication in the background and then attach the user\n * to the client in a threadsafe context. */\nstatic int authenticateClientWithUser(RedisModuleCtx *ctx, user *user, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) {\n    if (user->flags & USER_FLAG_DISABLED) {\n        return REDISMODULE_ERR;\n    }\n\n    /* Avoid settings which are meaningless and will be lost */\n    if (!ctx->client || (ctx->client->flags & CLIENT_MODULE)) {\n        return REDISMODULE_ERR;\n    }\n\n    moduleNotifyUserChanged(ctx->client);\n\n    ctx->client->user = user;\n    ctx->client->authenticated = 1;\n\n    if (clientHasModuleAuthInProgress(ctx->client)) {\n        ctx->client->flags |= CLIENT_MODULE_AUTH_HAS_RESULT;\n    }\n\n    if (callback) {\n        ctx->client->auth_callback = callback;\n        ctx->client->auth_callback_privdata = privdata;\n        ctx->client->auth_module = ctx->module;\n    }\n\n    if (client_id) {\n        *client_id = ctx->client->id;\n    }\n\n    return REDISMODULE_OK;\n}\n\n\n/* Authenticate the current context's user with the provided redis acl user.\n * Returns REDISMODULE_ERR if the user is disabled.\n *\n * See 
authenticateClientWithUser for information about callback, client_id,\n * and general usage for authentication. */\nint RM_AuthenticateClientWithUser(RedisModuleCtx *ctx, RedisModuleUser *module_user, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) {\n    return authenticateClientWithUser(ctx, module_user->user, callback, privdata, client_id);\n}\n\n/* Authenticate the current context's user with the provided redis acl user.\n * Returns REDISMODULE_ERR if the user is disabled or the user does not exist.\n *\n * See authenticateClientWithUser for information about callback, client_id,\n * and general usage for authentication. */\nint RM_AuthenticateClientWithACLUser(RedisModuleCtx *ctx, const char *name, size_t len, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) {\n    user *acl_user = ACLGetUserByName(name, len);\n\n    if (!acl_user) {\n        return REDISMODULE_ERR;\n    }\n    return authenticateClientWithUser(ctx, acl_user, callback, privdata, client_id);\n}\n\n/* Deauthenticate and close the client. The client resources will not be\n * immediately freed, but will be cleaned up in a background job. This is\n * the recommended way to deauthenticate a client since most clients can't\n * handle users becoming deauthenticated. Returns REDISMODULE_ERR when the\n * client doesn't exist and REDISMODULE_OK when the operation was successful.\n *\n * The client ID is returned from the RM_AuthenticateClientWithUser and\n * RM_AuthenticateClientWithACLUser APIs, but can be obtained through\n * the CLIENT api or through server events.\n *\n * This function is not thread safe, and must be executed within the context\n * of a command or thread safe context. 
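A sketch of the flow these two APIs enable: authenticate the calling client as an existing ACL user, remember the client id, and revoke later. The user name "reporter", the command callbacks, and the cleanup callback body are hypothetical:

```c
#include "redismodule.h"

static uint64_t authed_client_id = 0;

/* Fires once whenever the authenticated client's user changes (or the
 * client is freed); clean up any per-client state the module tracks. */
static void onUserChanged(uint64_t client_id, void *privdata) {
    (void)client_id; (void)privdata;
}

int authAsReporter(RedisModuleCtx *ctx) {
    if (RedisModule_AuthenticateClientWithACLUser(ctx, "reporter", 8,
            onUserChanged, NULL, &authed_client_id) == REDISMODULE_ERR)
        return RedisModule_ReplyWithError(ctx, "ERR auth failed");
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

int revokeReporter(RedisModuleCtx *ctx) {
    /* Marks the client to be closed ASAP; its resources are freed in a
     * background job. Must run on the main thread or in a thread safe
     * context. */
    return RedisModule_DeauthenticateAndCloseClient(ctx, authed_client_id);
}
```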
*/\nint RM_DeauthenticateAndCloseClient(RedisModuleCtx *ctx, uint64_t client_id) {\n    UNUSED(ctx);\n    client *c = lookupClientByID(client_id);\n    if (c == NULL) return REDISMODULE_ERR;\n\n    /* Revoke also marks client to be closed ASAP */\n    revokeClientAuthentication(c);\n    return REDISMODULE_OK;\n}\n\n/* Redact the client command argument specified at the given position. Redacted arguments \n * are obfuscated in user facing commands such as SLOWLOG or MONITOR, as well as\n * never being written to server logs. This command may be called multiple times on the\n * same position.\n * \n * Note that the command name, position 0, can not be redacted. \n * \n * Returns REDISMODULE_OK if the argument was redacted and REDISMODULE_ERR if there \n * was an invalid parameter passed in or the position is outside the client \n * argument range. */\nint RM_RedactClientCommandArgument(RedisModuleCtx *ctx, int pos) {\n    if (!ctx || !ctx->client || pos <= 0 || ctx->client->argc <= pos) {\n        return REDISMODULE_ERR;\n    }\n    redactClientCommandArgument(ctx->client, pos);\n    return REDISMODULE_OK;\n}\n\n/* Return the X.509 client-side certificate used by the client to authenticate\n * this connection.\n *\n * The return value is an allocated RedisModuleString that is a X.509 certificate\n * encoded in PEM (Base64) format. 
It should be freed (or auto-freed) by the caller.\n *\n * A NULL value is returned in the following conditions:\n *\n * - Connection ID does not exist\n * - Connection is not a TLS connection\n * - Connection is a TLS connection but no client certificate was used\n */\nRedisModuleString *RM_GetClientCertificate(RedisModuleCtx *ctx, uint64_t client_id) {\n    client *c = lookupClientByID(client_id);\n    if (c == NULL) return NULL;\n\n    sds cert = connGetPeerCert(c->conn);\n    if (!cert) return NULL;\n\n    RedisModuleString *s = createObject(OBJ_STRING, cert);\n    if (ctx != NULL) autoMemoryAdd(ctx, REDISMODULE_AM_STRING, s);\n\n    return s;\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules Dictionary API\n *\n * Implements a sorted dictionary (actually backed by a radix tree) with\n * the usual get / set / del / num-items API, together with an iterator\n * capable of going back and forth.\n * -------------------------------------------------------------------------- */\n\n/* Create a new dictionary. The 'ctx' pointer can be the current module context\n * or NULL, depending on what you want. Please follow the following rules:\n *\n * 1. Use a NULL context if you plan to retain a reference to this dictionary\n *    that will survive the time of the module callback where you created it.\n * 2. Use a NULL context if no context is available at the time you are creating\n *    the dictionary (of course...).\n * 3. However use the current callback context as 'ctx' argument if the\n *    dictionary time to live is just limited to the callback scope. 
In this\n *    case, if enabled, you can enjoy the automatic memory management that will\n *    reclaim the dictionary memory, as well as the strings returned by the\n *    Next / Prev dictionary iterator calls.\n */\nRedisModuleDict *RM_CreateDict(RedisModuleCtx *ctx) {\n    size_t usable;\n    RedisModuleDict *d = zmalloc_usable(sizeof(*d), &usable);\n    d->alloc_size = usable;\n    d->rax = raxNewWithMetadata(0, &d->alloc_size);\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_DICT,d);\n    return d;\n}\n\n/* Free a dictionary created with RM_CreateDict(). You need to pass the\n * context pointer 'ctx' only if the dictionary was created using the\n * context instead of passing NULL. */\nvoid RM_FreeDict(RedisModuleCtx *ctx, RedisModuleDict *d) {\n    if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_DICT,d);\n    raxFree(d->rax);\n    zfree(d);\n}\n\n/* Return the size of the dictionary (number of keys). */\nuint64_t RM_DictSize(RedisModuleDict *d) {\n    return raxSize(d->rax);\n}\n\n/* Store the specified key into the dictionary, setting its value to the\n * pointer 'ptr'. If the key was added with success, since it did not\n * already exist, REDISMODULE_OK is returned. Otherwise if the key already\n * exists the function returns REDISMODULE_ERR. */\nint RM_DictSetC(RedisModuleDict *d, void *key, size_t keylen, void *ptr) {\n    int retval = raxTryInsert(d->rax,key,keylen,ptr,NULL);\n    return (retval == 1) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Like RedisModule_DictSetC() but will replace the key with the new\n * value if the key already exists. */\nint RM_DictReplaceC(RedisModuleDict *d, void *key, size_t keylen, void *ptr) {\n    int retval = raxInsert(d->rax,key,keylen,ptr,NULL);\n    return (retval == 1) ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Like RedisModule_DictSetC() but takes the key as a RedisModuleString. 
*/\nint RM_DictSet(RedisModuleDict *d, RedisModuleString *key, void *ptr) {\n    return RM_DictSetC(d,key->ptr,sdslen(key->ptr),ptr);\n}\n\n/* Like RedisModule_DictReplaceC() but takes the key as a RedisModuleString. */\nint RM_DictReplace(RedisModuleDict *d, RedisModuleString *key, void *ptr) {\n    return RM_DictReplaceC(d,key->ptr,sdslen(key->ptr),ptr);\n}\n\n/* Return the value stored at the specified key. The function returns NULL\n * both in the case the key does not exist, or if you actually stored\n * NULL at key. So, optionally, if the 'nokey' pointer is not NULL, it will\n * be set by reference to 1 if the key does not exist, or to 0 if the key\n * exists. */\nvoid *RM_DictGetC(RedisModuleDict *d, void *key, size_t keylen, int *nokey) {\n    void *res = NULL;\n    int found = raxFind(d->rax,key,keylen,&res);\n    if (nokey) *nokey = !found;\n    return res;\n}\n\n/* Like RedisModule_DictGetC() but takes the key as a RedisModuleString. */\nvoid *RM_DictGet(RedisModuleDict *d, RedisModuleString *key, int *nokey) {\n    return RM_DictGetC(d,key->ptr,sdslen(key->ptr),nokey);\n}\n\n/* Remove the specified key from the dictionary, returning REDISMODULE_OK if\n * the key was found and deleted, or REDISMODULE_ERR if instead there was\n * no such key in the dictionary. When the operation is successful, if\n * 'oldval' is not NULL, then '*oldval' is set to the value stored at the\n * key before it was deleted. Using this feature it is possible to get\n * a pointer to the value (for instance in order to release it), without\n * having to call RedisModule_DictGet() before deleting the key. */\nint RM_DictDelC(RedisModuleDict *d, void *key, size_t keylen, void *oldval) {\n    int retval = raxRemove(d->rax,key,keylen,oldval);\n    return retval ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Like RedisModule_DictDelC() but gets the key as a RedisModuleString. 
*/\nint RM_DictDel(RedisModuleDict *d, RedisModuleString *key, void *oldval) {\n    return RM_DictDelC(d,key->ptr,sdslen(key->ptr),oldval);\n}\n\n/* Return an iterator, setup in order to start iterating from the specified\n * key by applying the operator 'op', which is just a string specifying the\n * comparison operator to use in order to seek the first element. The\n * operators available are:\n *\n * * `^`   -- Seek the first (lexicographically smaller) key.\n * * `$`   -- Seek the last  (lexicographically bigger) key.\n * * `>`   -- Seek the first element greater than the specified key.\n * * `>=`  -- Seek the first element greater or equal than the specified key.\n * * `<`   -- Seek the first element smaller than the specified key.\n * * `<=`  -- Seek the first element smaller or equal than the specified key.\n * * `==`  -- Seek the first element matching exactly the specified key.\n *\n * Note that for `^` and `$` the passed key is not used, and the user may\n * just pass NULL with a length of 0.\n *\n * If the element to start the iteration cannot be seeked based on the\n * key and operator passed, RedisModule_DictNext() / Prev() will just return\n * REDISMODULE_ERR at the first call, otherwise they'll produce elements.\n */\nRedisModuleDictIter *RM_DictIteratorStartC(RedisModuleDict *d, const char *op, void *key, size_t keylen) {\n    RedisModuleDictIter *di = zmalloc(sizeof(*di));\n    di->dict = d;\n    raxStart(&di->ri,d->rax);\n    raxSeek(&di->ri,op,key,keylen);\n    return di;\n}\n\n/* Exactly like RedisModule_DictIteratorStartC, but the key is passed as a\n * RedisModuleString. */\nRedisModuleDictIter *RM_DictIteratorStart(RedisModuleDict *d, const char *op, RedisModuleString *key) {\n    return RM_DictIteratorStartC(d,op,key->ptr,sdslen(key->ptr));\n}\n\n/* Release the iterator created with RedisModule_DictIteratorStart(). This call\n * is mandatory otherwise a memory leak is introduced in the module. 
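The dictionary calls above combine into a simple set / get / iterate pattern. A sketch (keys and values here are static C strings purely for illustration; a real module would manage value lifetimes itself):

```c
#include "redismodule.h"

void dictSketch(void) {
    /* NULL ctx: we free the dict manually rather than via auto memory. */
    RedisModuleDict *d = RedisModule_CreateDict(NULL);
    RedisModule_DictSetC(d, "apple", 5, "red");
    RedisModule_DictSetC(d, "banana", 6, "yellow");

    int nokey;
    char *color = RedisModule_DictGetC(d, "apple", 5, &nokey);
    (void)color; /* nokey is 0 and color points to "red" */

    /* Iterate all keys in lexicographic order: '^' seeks before the
     * first key, so the key/keylen arguments are unused. */
    RedisModuleDictIter *it = RedisModule_DictIteratorStartC(d, "^", NULL, 0);
    size_t keylen;
    void *data;
    char *key;
    while ((key = RedisModule_DictNextC(it, &keylen, &data)) != NULL) {
        /* key/keylen is only valid until the next Next/Prev call. */
    }
    RedisModule_DictIteratorStop(it); /* mandatory, or the iter leaks */
    RedisModule_FreeDict(NULL, d);
}
```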
*/\nvoid RM_DictIteratorStop(RedisModuleDictIter *di) {\n    raxStop(&di->ri);\n    zfree(di);\n}\n\n/* After its creation with RedisModule_DictIteratorStart(), it is possible to\n * change the currently selected element of the iterator by using this\n * API call. The result based on the operator and key is exactly like\n * the function RedisModule_DictIteratorStart(), however in this case the\n * return value is just REDISMODULE_OK in case the seeked element was found,\n * or REDISMODULE_ERR in case it was not possible to seek the specified\n * element. It is possible to reseek an iterator as many times as you want. */\nint RM_DictIteratorReseekC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {\n    return raxSeek(&di->ri,op,key,keylen);\n}\n\n/* Like RedisModule_DictIteratorReseekC() but takes the key as a\n * RedisModuleString. */\nint RM_DictIteratorReseek(RedisModuleDictIter *di, const char *op, RedisModuleString *key) {\n    return RM_DictIteratorReseekC(di,op,key->ptr,sdslen(key->ptr));\n}\n\n/* Return the current item of the dictionary iterator `di` and step to the\n * next element. If the iterator has already yielded the last element and there\n * are no other elements to return, NULL is returned, otherwise a pointer\n * to a string representing the key is provided, and the `*keylen` length\n * is set by reference (if keylen is not NULL). The `*dataptr`, if not NULL,\n * is set to the value of the pointer stored at the returned key as auxiliary\n * data (as set by the RedisModule_DictSet API).\n *\n * Usage example:\n *\n *      ... 
create the iterator here ...\n *      char *key;\n *      size_t keylen;\n *      void *data;\n *      while((key = RedisModule_DictNextC(iter,&keylen,&data)) != NULL) {\n *          printf(\"%.*s %p\\n\", (int)keylen, key, data);\n *      }\n *\n * The returned pointer is of type void because sometimes it makes sense\n * to cast it to a `char*`, sometimes to an `unsigned char*`, depending on\n * whether or not it contains binary data, so this API ends up being more\n * comfortable to use.\n *\n * The returned pointer is valid only until the next call to the\n * next/prev iterator step. The pointer is also no longer valid once the\n * iterator is released. */\nvoid *RM_DictNextC(RedisModuleDictIter *di, size_t *keylen, void **dataptr) {\n    if (!raxNext(&di->ri)) return NULL;\n    if (keylen) *keylen = di->ri.key_len;\n    if (dataptr) *dataptr = di->ri.data;\n    return di->ri.key;\n}\n\n/* This function is exactly like RedisModule_DictNextC() but after returning\n * the currently selected element in the iterator, it selects the previous\n * element (lexicographically smaller) instead of the next one. */\nvoid *RM_DictPrevC(RedisModuleDictIter *di, size_t *keylen, void **dataptr) {\n    if (!raxPrev(&di->ri)) return NULL;\n    if (keylen) *keylen = di->ri.key_len;\n    if (dataptr) *dataptr = di->ri.data;\n    return di->ri.key;\n}\n\n/* Like RedisModule_DictNextC(), but instead of returning an internally\n * allocated buffer and key length, it directly returns a module string object\n * allocated in the specified context 'ctx' (which may be NULL, exactly as for\n * the main API RedisModule_CreateString).\n *\n * The returned string object should be deallocated after use, either manually\n * or by using a context that has automatic memory management active. 
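 *\n * For example, a sketch (assuming `iter` is a dictionary iterator created\n * earlier and `ctx` is a module context):\n *\n *      RedisModuleString *key;\n *      void *data;\n *      while ((key = RedisModule_DictNext(ctx,iter,&data)) != NULL) {\n *          // ... use key and data ...\n *          RedisModule_FreeString(ctx,key);\n *      }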
*/\nRedisModuleString *RM_DictNext(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) {\n    size_t keylen;\n    void *key = RM_DictNextC(di,&keylen,dataptr);\n    if (key == NULL) return NULL;\n    return RM_CreateString(ctx,key,keylen);\n}\n\n/* Like RedisModule_DictNext() but after returning the currently selected\n * element in the iterator, it selects the previous element (lexicographically\n * smaller) instead of the next one. */\nRedisModuleString *RM_DictPrev(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) {\n    size_t keylen;\n    void *key = RM_DictPrevC(di,&keylen,dataptr);\n    if (key == NULL) return NULL;\n    return RM_CreateString(ctx,key,keylen);\n}\n\n/* Compare the element currently pointed to by the iterator to the specified\n * element given by key/keylen, according to the operator 'op' (the set of\n * valid operators is the same as for RedisModule_DictIteratorStart).\n * If the comparison is successful the function returns REDISMODULE_OK,\n * otherwise REDISMODULE_ERR is returned.\n *\n * This is useful when we want to emit just a lexicographical range: in\n * the loop, as we iterate elements, we can also check if we are still\n * in range.\n *\n * The function also returns REDISMODULE_ERR if the iterator has reached\n * the end of the elements. */\nint RM_DictCompareC(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) {\n    if (raxEOF(&di->ri)) return REDISMODULE_ERR;\n    int res = raxCompare(&di->ri,op,key,keylen);\n    return res ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Like RedisModule_DictCompareC but gets the key to compare with the current\n * iterator key as a RedisModuleString. */\nint RM_DictCompare(RedisModuleDictIter *di, const char *op, RedisModuleString *key) {\n    if (raxEOF(&di->ri)) return REDISMODULE_ERR;\n    int res = raxCompare(&di->ri,op,key->ptr,sdslen(key->ptr));\n    return res ? 
REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n\n\n\n/* --------------------------------------------------------------------------\n * ## Modules Info fields\n * -------------------------------------------------------------------------- */\n\nint RM_InfoEndDictField(RedisModuleInfoCtx *ctx);\n\n/* Used to start a new section, before adding any fields. The section name will\n * be prefixed by `<modulename>_` and must only include A-Z,a-z,0-9.\n * NULL or an empty string indicates the default section (just `<modulename>`) is used.\n * When the return value is REDISMODULE_ERR, the section was not requested and\n * should be skipped: it will not be emitted. */\nint RM_InfoAddSection(RedisModuleInfoCtx *ctx, const char *name) {\n    sds full_name = sdsdup(ctx->module->name);\n    if (name != NULL && strlen(name) > 0)\n        full_name = sdscatfmt(full_name, \"_%s\", name);\n\n    /* Implicitly end dicts, instead of returning an error which is likely unchecked. */\n    if (ctx->in_dict_field)\n        RM_InfoEndDictField(ctx);\n\n    /* proceed only if:\n     * 1) no section was requested (emit all)\n     * 2) the module name was requested (emit all)\n     * 3) this specific section was requested. */\n    if (ctx->requested_sections) {\n        if ((!full_name || !dictFind(ctx->requested_sections, full_name)) &&\n            (!dictFind(ctx->requested_sections, ctx->module->name)))\n        {\n            sdsfree(full_name);\n            ctx->in_section = 0;\n            return REDISMODULE_ERR;\n        }\n    }\n    if (ctx->sections++) ctx->info = sdscat(ctx->info,\"\\r\\n\");\n    ctx->info = sdscatfmt(ctx->info, \"# %S\\r\\n\", full_name);\n    ctx->in_section = 1;\n    sdsfree(full_name);\n    return REDISMODULE_OK;\n}\n\n/* Starts a dict field, similar to the ones in INFO KEYSPACE. Use the normal\n * RedisModule_InfoAddField* functions to add the items to this field, and\n * terminate with RedisModule_InfoEndDictField. 
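 *\n * For example, a sketch (assuming the module is named `mymodule`; the\n * section and field names here are hypothetical):\n *\n *      RedisModule_InfoAddSection(ctx, \"stats\");\n *      RedisModule_InfoBeginDictField(ctx, \"dict1\");\n *      RedisModule_InfoAddFieldLongLong(ctx, \"key1\", 10);\n *      RedisModule_InfoAddFieldLongLong(ctx, \"key2\", 20);\n *      RedisModule_InfoEndDictField(ctx);\n *\n * This would produce an INFO line like:\n *\n *      mymodule_dict1:key1=10,key2=20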
*/\nint RM_InfoBeginDictField(RedisModuleInfoCtx *ctx, const char *name) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    /* Implicitly end dicts, instead of returning an error which is likely unchecked. */\n    if (ctx->in_dict_field)\n        RM_InfoEndDictField(ctx);\n    char *tmpmodname, *tmpname;\n    ctx->info = sdscatfmt(ctx->info,\n        \"%s_%s:\",\n        getSafeInfoString(ctx->module->name, strlen(ctx->module->name), &tmpmodname),\n        getSafeInfoString(name, strlen(name), &tmpname));\n    if (tmpmodname != NULL) zfree(tmpmodname);\n    if (tmpname != NULL) zfree(tmpname);\n    ctx->in_dict_field = 1;\n    return REDISMODULE_OK;\n}\n\n/* Ends a dict field, see RedisModule_InfoBeginDictField */\nint RM_InfoEndDictField(RedisModuleInfoCtx *ctx) {\n    if (!ctx->in_dict_field)\n        return REDISMODULE_ERR;\n    /* trim the last ',' if found. */\n    if (ctx->info[sdslen(ctx->info)-1]==',')\n        sdsIncrLen(ctx->info, -1);\n    ctx->info = sdscat(ctx->info, \"\\r\\n\");\n    ctx->in_dict_field = 0;\n    return REDISMODULE_OK;\n}\n\n/* Used by RedisModuleInfoFunc to add info fields.\n * Each field will be automatically prefixed by `<modulename>_`.\n * Field names or values must not include `\\r\\n` or `:`. */\nint RM_InfoAddFieldString(RedisModuleInfoCtx *ctx, const char *field, RedisModuleString *value) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    if (ctx->in_dict_field) {\n        ctx->info = sdscatfmt(ctx->info,\n            \"%s=%S,\",\n            field,\n            (sds)value->ptr);\n        return REDISMODULE_OK;\n    }\n    ctx->info = sdscatfmt(ctx->info,\n        \"%s_%s:%S\\r\\n\",\n        ctx->module->name,\n        field,\n        (sds)value->ptr);\n    return REDISMODULE_OK;\n}\n\n/* See RedisModule_InfoAddFieldString(). 
*/\nint RM_InfoAddFieldCString(RedisModuleInfoCtx *ctx, const char *field, const char *value) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    if (ctx->in_dict_field) {\n        ctx->info = sdscatfmt(ctx->info,\n            \"%s=%s,\",\n            field,\n            value);\n        return REDISMODULE_OK;\n    }\n    ctx->info = sdscatfmt(ctx->info,\n        \"%s_%s:%s\\r\\n\",\n        ctx->module->name,\n        field,\n        value);\n    return REDISMODULE_OK;\n}\n\n/* See RedisModule_InfoAddFieldString(). */\nint RM_InfoAddFieldDouble(RedisModuleInfoCtx *ctx, const char *field, double value) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    if (ctx->in_dict_field) {\n        ctx->info = sdscatprintf(ctx->info,\n            \"%s=%.17g,\",\n            field,\n            value);\n        return REDISMODULE_OK;\n    }\n    ctx->info = sdscatprintf(ctx->info,\n        \"%s_%s:%.17g\\r\\n\",\n        ctx->module->name,\n        field,\n        value);\n    return REDISMODULE_OK;\n}\n\n/* See RedisModule_InfoAddFieldString(). */\nint RM_InfoAddFieldLongLong(RedisModuleInfoCtx *ctx, const char *field, long long value) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    if (ctx->in_dict_field) {\n        ctx->info = sdscatfmt(ctx->info,\n            \"%s=%I,\",\n            field,\n            value);\n        return REDISMODULE_OK;\n    }\n    ctx->info = sdscatfmt(ctx->info,\n        \"%s_%s:%I\\r\\n\",\n        ctx->module->name,\n        field,\n        value);\n    return REDISMODULE_OK;\n}\n\n/* See RedisModule_InfoAddFieldString(). 
*/\nint RM_InfoAddFieldULongLong(RedisModuleInfoCtx *ctx, const char *field, unsigned long long value) {\n    if (!ctx->in_section)\n        return REDISMODULE_ERR;\n    if (ctx->in_dict_field) {\n        ctx->info = sdscatfmt(ctx->info,\n            \"%s=%U,\",\n            field,\n            value);\n        return REDISMODULE_OK;\n    }\n    ctx->info = sdscatfmt(ctx->info,\n        \"%s_%s:%U\\r\\n\",\n        ctx->module->name,\n        field,\n        value);\n    return REDISMODULE_OK;\n}\n\n/* Registers callback for the INFO command. The callback should add INFO fields\n * by calling the `RedisModule_InfoAddField*()` functions. */\nint RM_RegisterInfoFunc(RedisModuleCtx *ctx, RedisModuleInfoFunc cb) {\n    ctx->module->info_cb = cb;\n    return REDISMODULE_OK;\n}\n\nsds modulesCollectInfo(sds info, dict *sections_dict, int for_crash_report, int sections) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        struct RedisModule *module = dictGetVal(de);\n        if (!module->info_cb)\n            continue;\n        RedisModuleInfoCtx info_ctx = {module, sections_dict, info, sections, 0, 0};\n        module->info_cb(&info_ctx, for_crash_report);\n        /* Implicitly end dicts (no way to handle errors, and we must add the newline). */\n        if (info_ctx.in_dict_field)\n            RM_InfoEndDictField(&info_ctx);\n        info = info_ctx.info;\n        sections = info_ctx.sections;\n    }\n    dictResetIterator(&di);\n    return info;\n}\n\n/* Get information about the server similar to the one that returns from the\n * INFO command. This function takes an optional 'section' argument that may\n * be NULL. The return value holds the output and can be used with\n * RedisModule_ServerInfoGetField and alike to get the individual fields.\n * When done, it needs to be freed with RedisModule_FreeServerInfo or with the\n * automatic memory management mechanism if enabled. 
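 *\n * Usage sketch (\"redis_version\" is a field of the standard INFO \"server\"\n * section):\n *\n *      RedisModuleServerInfoData *info =\n *          RedisModule_GetServerInfo(ctx, \"server\");\n *      RedisModuleString *ver =\n *          RedisModule_ServerInfoGetField(ctx, info, \"redis_version\");\n *      // ... use ver (and free it, unless automatic memory is active) ...\n *      RedisModule_FreeServerInfo(ctx, info);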
*/\nRedisModuleServerInfoData *RM_GetServerInfo(RedisModuleCtx *ctx, const char *section) {\n    struct RedisModuleServerInfoData *d = zmalloc(sizeof(*d));\n    d->rax = raxNew();\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_INFO,d);\n    int all = 0, everything = 0;\n    robj *argv[1];\n    argv[0] = section ? createStringObject(section, strlen(section)) : NULL;\n    dict *section_dict = genInfoSectionDict(argv, section ? 1 : 0, NULL, &all, &everything);\n    sds info = genRedisInfoString(section_dict, all, everything);\n    int totlines, i;\n    sds *lines = sdssplitlen(info, sdslen(info), \"\\r\\n\", 2, &totlines);\n    for(i=0; i<totlines; i++) {\n        sds line = lines[i];\n        if (line[0]=='#') continue;\n        char *sep = strchr(line, ':');\n        if (!sep) continue;\n        unsigned char *key = (unsigned char*)line;\n        size_t keylen = (intptr_t)sep-(intptr_t)line;\n        sds val = sdsnewlen(sep+1,sdslen(line)-((intptr_t)sep-(intptr_t)line)-1);\n        if (!raxTryInsert(d->rax,key,keylen,val,NULL))\n            sdsfree(val);\n    }\n    sdsfree(info);\n    sdsfreesplitres(lines,totlines);\n    releaseInfoSectionDict(section_dict);\n    if(argv[0]) decrRefCount(argv[0]);\n    return d;\n}\n\n/* Free data created with RM_GetServerInfo(). You need to pass the\n * context pointer 'ctx' only if the dictionary was created using the\n * context instead of passing NULL. */\nvoid RM_FreeServerInfo(RedisModuleCtx *ctx, RedisModuleServerInfoData *data) {\n    if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_INFO,data);\n    raxFreeWithCallback(data->rax, sdsfreegeneric);\n    zfree(data);\n}\n\n/* Get the value of a field from data collected with RM_GetServerInfo(). You\n * need to pass the context pointer 'ctx' only if you want to use auto memory\n * mechanism to release the returned string. Return value will be NULL if the\n * field was not found. 
*/\nRedisModuleString *RM_ServerInfoGetField(RedisModuleCtx *ctx, RedisModuleServerInfoData *data, const char* field) {\n    void *result;\n    if (!raxFind(data->rax, (unsigned char *)field, strlen(field), &result))\n        return NULL;\n    sds val = result;\n    RedisModuleString *o = createStringObject(val,sdslen(val));\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_STRING,o);\n    return o;\n}\n\n/* Similar to RM_ServerInfoGetField, but returns a char* which should not be freed by the caller. */\nconst char *RM_ServerInfoGetFieldC(RedisModuleServerInfoData *data, const char* field) {\n    void *result = NULL;\n    raxFind(data->rax, (unsigned char *)field, strlen(field), &result);\n    return result;\n}\n\n/* Get the value of a field from data collected with RM_GetServerInfo(). If the\n * field is not found, or is not numerical or out of range, the return value\n * will be 0, and the optional out_err argument will be set to REDISMODULE_ERR. */\nlong long RM_ServerInfoGetFieldSigned(RedisModuleServerInfoData *data, const char* field, int *out_err) {\n    long long ll;\n    void *result;\n    if (!raxFind(data->rax, (unsigned char *)field, strlen(field), &result)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    sds val = result;\n    if (!string2ll(val,sdslen(val),&ll)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    if (out_err) *out_err = REDISMODULE_OK;\n    return ll;\n}\n\n/* Get the value of a field from data collected with RM_GetServerInfo(). If the\n * field is not found, or is not numerical or out of range, the return value\n * will be 0, and the optional out_err argument will be set to REDISMODULE_ERR. 
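 *\n * For example, a sketch (\"maxmemory\" is a field of the standard INFO\n * \"memory\" section):\n *\n *      int err;\n *      unsigned long long maxmemory =\n *          RedisModule_ServerInfoGetFieldUnsigned(data, \"maxmemory\", &err);\n *      if (err == REDISMODULE_ERR) {\n *          // field missing, or not a valid unsigned integer\n *      }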
*/\nunsigned long long RM_ServerInfoGetFieldUnsigned(RedisModuleServerInfoData *data, const char* field, int *out_err) {\n    unsigned long long ll;\n    void *result;\n    if (!raxFind(data->rax, (unsigned char *)field, strlen(field), &result)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    sds val = result;\n    if (!string2ull(val,&ll)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    if (out_err) *out_err = REDISMODULE_OK;\n    return ll;\n}\n\n/* Get the value of a field from data collected with RM_GetServerInfo(). If the\n * field is not found, or is not a double, the return value will be 0, and the\n * optional out_err argument will be set to REDISMODULE_ERR. */\ndouble RM_ServerInfoGetFieldDouble(RedisModuleServerInfoData *data, const char* field, int *out_err) {\n    double dbl;\n    void *result;\n    if (!raxFind(data->rax, (unsigned char *)field, strlen(field), &result)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    sds val = result;\n    if (!string2d(val,sdslen(val),&dbl)) {\n        if (out_err) *out_err = REDISMODULE_ERR;\n        return 0;\n    }\n    if (out_err) *out_err = REDISMODULE_OK;\n    return dbl;\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules utility APIs\n * -------------------------------------------------------------------------- */\n\n/* Return random bytes using SHA1 in counter mode with a /dev/urandom\n * initialized seed. This function is fast, so it can be used to generate\n * many bytes without any effect on the operating system entropy pool.\n * Currently this function is not thread safe. */\nvoid RM_GetRandomBytes(unsigned char *dst, size_t len) {\n    getRandomBytes(dst,len);\n}\n\n/* Like RedisModule_GetRandomBytes() but instead of setting the string to\n * random bytes the string is set to random characters in the\n * hex charset [0-9a-f]. 
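 *\n * For example, a sketch (note that the output is not NUL-terminated, so the\n * caller must add the terminator if a C string is needed):\n *\n *      char buf[17];\n *      RedisModule_GetRandomHexChars(buf,sizeof(buf)-1);\n *      buf[16] = '\\0';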
*/\nvoid RM_GetRandomHexChars(char *dst, size_t len) {\n    getRandomHexChars(dst,len);\n}\n\n/* --------------------------------------------------------------------------\n * ## Modules API exporting / importing\n * -------------------------------------------------------------------------- */\n\n/* This function is called by a module in order to export some API with a\n * given name. Other modules will be able to use this API by calling the\n * symmetrical function RM_GetSharedAPI() and casting the return value to\n * the right function pointer.\n *\n * The function will return REDISMODULE_OK if the name is not already taken,\n * otherwise REDISMODULE_ERR will be returned and no operation will be\n * performed.\n *\n * IMPORTANT: the apiname argument should be a string literal with static\n * lifetime. The API relies on the fact that it will always be valid in\n * the future. */\nint RM_ExportSharedAPI(RedisModuleCtx *ctx, const char *apiname, void *func) {\n    RedisModuleSharedAPI *sapi = zmalloc(sizeof(*sapi));\n    sapi->module = ctx->module;\n    sapi->func = func;\n    if (dictAdd(server.sharedapi, (char*)apiname, sapi) != DICT_OK) {\n        zfree(sapi);\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Request an exported API pointer. The return value is just a void pointer\n * that the caller of this function will be required to cast to the right\n * function pointer, so this is a private contract between modules.\n *\n * If the requested API is not available then NULL is returned. Because\n * modules can be loaded at different times and in different order, calls to\n * this function should be put inside some generic module API registering\n * step, that is called every time a module attempts to execute a\n * command that requires external APIs: if some API cannot be resolved, the\n * command should return an error.\n *\n * Here is an example:\n *\n *     int ... 
myCommandImplementation(void) {\n *        if (getExternalAPIs() == 0) {\n *             reply with an error here if we cannot have the APIs\n *        }\n *        // Use the API:\n *        myFunctionPointer(foo);\n *     }\n *\n * And the function getExternalAPIs() is:\n *\n *     int getExternalAPIs(void) {\n *         static int api_loaded = 0;\n *         if (api_loaded != 0) return 1; // APIs already resolved.\n *\n *         myFunctionPointer = RedisModule_GetSharedAPI(\"...\");\n *         if (myFunctionPointer == NULL) return 0;\n *\n *         api_loaded = 1;\n *         return 1;\n *     }\n */\nvoid *RM_GetSharedAPI(RedisModuleCtx *ctx, const char *apiname) {\n    dictEntry *de = dictFind(server.sharedapi, apiname);\n    if (de == NULL) return NULL;\n    RedisModuleSharedAPI *sapi = dictGetVal(de);\n    if (listSearchKey(sapi->module->usedby,ctx->module) == NULL) {\n        listAddNodeTail(sapi->module->usedby,ctx->module);\n        listAddNodeTail(ctx->module->using,sapi->module);\n    }\n    return sapi->func;\n}\n\n/* Remove all the APIs registered by the specified module. Usually you\n * want this when the module is going to be unloaded. This function\n * assumes it is the caller's responsibility to make sure the APIs are not\n * used by other modules.\n *\n * The number of unregistered APIs is returned. 
*/\nint moduleUnregisterSharedAPI(RedisModule *module) {\n    int count = 0;\n    dictIterator di;\n    dictEntry *de;\n    dictInitSafeIterator(&di, server.sharedapi);\n    while ((de = dictNext(&di)) != NULL) {\n        const char *apiname = dictGetKey(de);\n        RedisModuleSharedAPI *sapi = dictGetVal(de);\n        if (sapi->module == module) {\n            dictDelete(server.sharedapi,apiname);\n            zfree(sapi);\n            count++;\n        }\n    }\n    dictResetIterator(&di);\n    return count;\n}\n\n/* Remove the specified module as a user of the APIs of every other module.\n * This is usually called when a module is unloaded.\n *\n * Returns the number of modules this module was using APIs from. */\nint moduleUnregisterUsedAPI(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    int count = 0;\n\n    listRewind(module->using,&li);\n    while((ln = listNext(&li))) {\n        RedisModule *used = ln->value;\n        listNode *ln = listSearchKey(used->usedby,module);\n        if (ln) {\n            listDelNode(used->usedby,ln);\n            count++;\n        }\n    }\n    return count;\n}\n\n/* Unregister all filters registered by a module.\n * This is called when a module is being unloaded.\n *\n * Returns the number of filters unregistered. 
*/\nint moduleUnregisterFilters(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    int count = 0;\n\n    listRewind(module->filters,&li);\n    while((ln = listNext(&li))) {\n        RedisModuleCommandFilter *filter = ln->value;\n        listNode *ln = listSearchKey(moduleCommandFilters,filter);\n        if (ln) {\n            listDelNode(moduleCommandFilters,ln);\n            count++;\n        }\n        zfree(filter);\n    }\n    return count;\n}\n\n/* --------------------------------------------------------------------------\n * ## Module Command Filter API\n * -------------------------------------------------------------------------- */\n\n/* Register a new command filter function.\n *\n * Command filtering makes it possible for modules to extend Redis by plugging\n * into the execution flow of all commands.\n *\n * A registered filter gets called before Redis executes *any* command.  This\n * includes both core Redis commands and commands registered by any module.  The\n * filter applies in all execution paths including:\n *\n * 1. Invocation by a client.\n * 2. Invocation through `RedisModule_Call()` by any module.\n * 3. Invocation through Lua `redis.call()`.\n * 4. Replication of a command from a master.\n *\n * The filter executes in a special filter context, which is different and more\n * limited than a RedisModuleCtx.  Because the filter affects any command, it\n * must be implemented in a very efficient way to reduce the performance impact\n * on Redis.  All Redis Module API calls that require a valid context (such as\n * `RedisModule_Call()`, `RedisModule_OpenKey()`, etc.) are not supported in a\n * filter context.\n *\n * The `RedisModuleCommandFilterCtx` can be used to inspect or modify the\n * executed command and its arguments.  As the filter executes before Redis\n * begins processing the command, any change will affect the way the command is\n * processed.  For example, a module can override Redis commands this way:\n *\n * 1. 
Register a `MODULE.SET` command which implements an extended version of\n *    the Redis `SET` command.\n * 2. Register a command filter which detects invocation of `SET` on a specific\n *    pattern of keys.  Once detected, the filter will replace the first\n *    argument from `SET` to `MODULE.SET`.\n * 3. When filter execution is complete, Redis considers the new command name\n *    and therefore executes the module's own command.\n *\n * Note that in the above use case, if `MODULE.SET` itself uses\n * `RedisModule_Call()` the filter will be applied on that call as well.  If\n * that is not desired, the `REDISMODULE_CMDFILTER_NOSELF` flag can be set when\n * registering the filter.\n *\n * The `REDISMODULE_CMDFILTER_NOSELF` flag prevents execution flows that\n * originate from the module's own `RM_Call()` from reaching the filter.  This\n * flag is effective for all execution flows, including nested ones, as long as\n * the execution begins from the module's command context or a thread-safe\n * context that is associated with a blocking command.\n *\n * Detached thread-safe contexts are *not* associated with the module and cannot\n * be protected by this flag.\n *\n * If multiple filters are registered (by the same or different modules), they\n * are executed in the order of registration.\n */\nRedisModuleCommandFilter *RM_RegisterCommandFilter(RedisModuleCtx *ctx, RedisModuleCommandFilterFunc callback, int flags) {\n    RedisModuleCommandFilter *filter = zmalloc(sizeof(*filter));\n    filter->module = ctx->module;\n    filter->callback = callback;\n    filter->flags = flags;\n\n    listAddNodeTail(moduleCommandFilters, filter);\n    listAddNodeTail(ctx->module->filters, filter);\n    return filter;\n}\n\n/* Unregister a command filter.\n */\nint RM_UnregisterCommandFilter(RedisModuleCtx *ctx, RedisModuleCommandFilter *filter) {\n    listNode *ln;\n\n    /* A module can only remove its own filters */\n    if (filter->module != ctx->module) return 
REDISMODULE_ERR;\n\n    ln = listSearchKey(moduleCommandFilters,filter);\n    if (!ln) return REDISMODULE_ERR;\n    listDelNode(moduleCommandFilters,ln);\n\n    ln = listSearchKey(ctx->module->filters,filter);\n    if (!ln) return REDISMODULE_ERR;    /* Shouldn't happen */\n    listDelNode(ctx->module->filters,ln);\n\n    zfree(filter);\n\n    return REDISMODULE_OK;\n}\n\nvoid moduleCallCommandFilters(client *c) {\n    if (listLength(moduleCommandFilters) == 0) return;\n\n    listIter li;\n    listNode *ln;\n    listRewind(moduleCommandFilters,&li);\n\n    RedisModuleCommandFilterCtx filter = {\n        .argv = c->argv,\n        .argv_len = c->argv_len,\n        .argc = c->argc,\n        .c = c\n    };\n\n    while((ln = listNext(&li))) {\n        RedisModuleCommandFilter *f = ln->value;\n\n        /* Skip filter if REDISMODULE_CMDFILTER_NOSELF is set and module is\n         * currently processing a command.\n         */\n        if ((f->flags & REDISMODULE_CMDFILTER_NOSELF) && f->module->in_call) continue;\n\n        /* Call filter */\n        f->callback(&filter);\n    }\n\n    /* If the filter sets a new command, including command or subcommand,\n     * the command looked up will be invalid. */\n    c->lookedcmd = NULL;\n\n    c->argv = filter.argv;\n    c->argv_len = filter.argv_len;\n    c->argc = filter.argc;\n\n    /* Update pending command if it exists. */\n    pendingCommand *pcmd = c->current_pending_cmd;\n    if (pcmd) {\n        pcmd->argv = filter.argv;\n        pcmd->argc = filter.argc;\n        pcmd->argv_len = filter.argv_len;\n        pcmd->cmd = NULL;\n        pcmd->slot = INVALID_CLUSTER_SLOT;\n        pcmd->flags = 0;\n\n        /* Reset keys result */\n        getKeysFreeResult(&pcmd->keys_result);\n        pcmd->keys_result = (getKeysResult)GETKEYS_RESULT_INIT;\n    }\n}\n\n/* Return the number of arguments a filtered command has.  
The number of\n * arguments includes the command itself.\n */\nint RM_CommandFilterArgsCount(RedisModuleCommandFilterCtx *fctx)\n{\n    return fctx->argc;\n}\n\n/* Return the specified command argument.  The first argument (position 0) is\n * the command itself, and the rest are user-provided args.\n */\nRedisModuleString *RM_CommandFilterArgGet(RedisModuleCommandFilterCtx *fctx, int pos)\n{\n    if (pos < 0 || pos >= fctx->argc) return NULL;\n    return fctx->argv[pos];\n}\n\n/* Modify the filtered command by inserting a new argument at the specified\n * position.  The specified RedisModuleString argument may be used by Redis\n * after the filter context is destroyed, so it must not be auto-memory\n * allocated, freed or used elsewhere.\n */\nint RM_CommandFilterArgInsert(RedisModuleCommandFilterCtx *fctx, int pos, RedisModuleString *arg)\n{\n    int i;\n\n    if (pos < 0 || pos > fctx->argc) return REDISMODULE_ERR;\n\n    if (fctx->argv_len < fctx->argc+1) {\n        fctx->argv_len = fctx->argc+1;\n        fctx->argv = zrealloc(fctx->argv, fctx->argv_len*sizeof(RedisModuleString *));\n    }\n    for (i = fctx->argc; i > pos; i--) {\n        fctx->argv[i] = fctx->argv[i-1];\n    }\n    fctx->argv[pos] = arg;\n    fctx->argc++;\n\n    return REDISMODULE_OK;\n}\n\n/* Modify the filtered command by replacing an existing argument with a new one.\n * The specified RedisModuleString argument may be used by Redis after the\n * filter context is destroyed, so it must not be auto-memory allocated, freed\n * or used elsewhere.\n */\nint RM_CommandFilterArgReplace(RedisModuleCommandFilterCtx *fctx, int pos, RedisModuleString *arg)\n{\n    if (pos < 0 || pos >= fctx->argc) return REDISMODULE_ERR;\n\n    decrRefCount(fctx->argv[pos]);\n    fctx->argv[pos] = arg;\n\n    return REDISMODULE_OK;\n}\n\n/* Modify the filtered command by deleting an argument at the specified\n * position.\n */\nint RM_CommandFilterArgDelete(RedisModuleCommandFilterCtx *fctx, int pos)\n{\n    int i;\n  
  if (pos < 0 || pos >= fctx->argc) return REDISMODULE_ERR;\n\n    decrRefCount(fctx->argv[pos]);\n    for (i = pos; i < fctx->argc-1; i++) {\n        fctx->argv[i] = fctx->argv[i+1];\n    }\n    fctx->argc--;\n\n    return REDISMODULE_OK;\n}\n\n/* Get Client ID for client that issued the command we are filtering */\nunsigned long long RM_CommandFilterGetClientId(RedisModuleCommandFilterCtx *fctx) {\n    return fctx->c->id;\n}\n\n/* For a given pointer allocated via RedisModule_Alloc() or\n * RedisModule_Realloc(), return the amount of memory allocated for it.\n * Note that this may be different (larger) than the memory we allocated\n * with the allocation calls, since sometimes the underlying allocator\n * will allocate more memory.\n */\nsize_t RM_MallocSize(void* ptr) {\n    return zmalloc_size(ptr);\n}\n\n/* Similar to RM_MallocSize, the difference is that RM_MallocUsableSize\n * returns the usable size of memory by the module. */\nsize_t RM_MallocUsableSize(void *ptr) {\n    /* It is safe to use 'zmalloc_usable_size()' to manipulate additional\n     * memory space, as we guarantee that the compiler can recognize this\n     * after 'RM_Alloc', 'RM_TryAlloc', 'RM_Realloc', or 'RM_Calloc'. 
*/\n    return zmalloc_usable_size(ptr);\n}\n\n/* Same as RM_MallocSize, except it works on RedisModuleString pointers.\n */\nsize_t RM_MallocSizeString(RedisModuleString* str) {\n    serverAssert(str->type == OBJ_STRING);\n    return sizeof(*str) + getStringObjectSdsUsedMemory(str);\n}\n\n/* Same as RM_MallocSize, except it works on RedisModuleDict pointers.\n * Note that the returned value is only the overhead of the underlying structures,\n * it does not include the allocation size of the keys and values.\n */\nsize_t RM_MallocSizeDict(RedisModuleDict* dict) {\n    return dict->alloc_size;\n}\n\n/* Return a number between 0 and 1 indicating the amount of memory\n * currently used, relative to the Redis \"maxmemory\" configuration.\n *\n * * 0 - No memory limit configured.\n * * Between 0 and 1 - The percentage of the memory used normalized in 0-1 range.\n * * Exactly 1 - Memory limit reached.\n * * Greater than 1 - More memory used than the configured limit.\n */\nfloat RM_GetUsedMemoryRatio(void){\n    float level;\n    getMaxmemoryState(NULL, NULL, NULL, &level);\n    return level;\n}\n\n/* --------------------------------------------------------------------------\n * ## Scanning keyspace and hashes\n * -------------------------------------------------------------------------- */\n\ntypedef void (*RedisModuleScanCB)(RedisModuleCtx *ctx, RedisModuleString *keyname, RedisModuleKey *key, void *privdata);\ntypedef struct {\n    RedisModuleCtx *ctx;\n    void* user_data;\n    RedisModuleScanCB fn;\n} ScanCBData;\n\ntypedef struct RedisModuleScanCursor{\n    unsigned long long cursor;\n    int done;\n}RedisModuleScanCursor;\n\nstatic void moduleScanCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    ScanCBData *data = privdata;\n    kvobj *keyvalObj = dictGetKey(de);\n    sds key = kvobjGetKey(keyvalObj);\n    RedisModuleString *keyname = createObject(OBJ_STRING,sdsdup(key));\n\n    /* Set up the key handle. 
*/\n    RedisModuleKey kp = {0};\n    moduleInitKey(&kp, data->ctx, keyname, keyvalObj, REDISMODULE_READ);\n\n    data->fn(data->ctx, keyname, &kp, data->user_data);\n\n    moduleCloseKey(&kp);\n    decrRefCount(keyname);\n}\n\n/* Create a new cursor to be used with RedisModule_Scan */\nRedisModuleScanCursor *RM_ScanCursorCreate(void) {\n    RedisModuleScanCursor* cursor = zmalloc(sizeof(*cursor));\n    cursor->cursor = 0;\n    cursor->done = 0;\n    return cursor;\n}\n\n/* Restart an existing cursor. The keys will be rescanned. */\nvoid RM_ScanCursorRestart(RedisModuleScanCursor *cursor) {\n    cursor->cursor = 0;\n    cursor->done = 0;\n}\n\n/* Destroy the cursor struct. */\nvoid RM_ScanCursorDestroy(RedisModuleScanCursor *cursor) {\n    zfree(cursor);\n}\n\n/* Scan API that allows a module to scan all the keys and values in\n * the selected db.\n *\n * Callback for scan implementation.\n *\n *     void scan_callback(RedisModuleCtx *ctx, RedisModuleString *keyname,\n *                        RedisModuleKey *key, void *privdata);\n *\n * - `ctx`: the Redis module context provided for the scan.\n * - `keyname`: owned by the caller and needs to be retained if used after this\n *   function.\n * - `key`: holds info on the key and value. It is provided as best effort; in\n *   some cases it might be NULL, in which case the user should (can) use\n *   RedisModule_OpenKey() (and CloseKey too).\n *   When it is provided, it is owned by the caller and will be freed when the\n *   callback returns.\n * - `privdata`: the user data provided to RedisModule_Scan().\n *\n * The way it should be used:\n *\n *      RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();\n *      while(RedisModule_Scan(ctx, c, callback, privateData));\n *      RedisModule_ScanCursorDestroy(c);\n *\n * It is also possible to use this API from another thread while the lock\n * is acquired during the actual call to RM_Scan:\n *\n *      RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();\n 
*      RedisModule_ThreadSafeContextLock(ctx);\n *      while(RedisModule_Scan(ctx, c, callback, privateData)){\n *          RedisModule_ThreadSafeContextUnlock(ctx);\n *          // do some background job\n *          RedisModule_ThreadSafeContextLock(ctx);\n *      }\n *      RedisModule_ScanCursorDestroy(c);\n *\n * The function will return 1 if there are more elements to scan and\n * 0 otherwise, possibly setting errno if the call failed.\n *\n * It is also possible to restart an existing cursor using RM_ScanCursorRestart.\n *\n * IMPORTANT: This API is very similar to the Redis SCAN command from the\n * point of view of the guarantees it provides. This means that the API\n * may report duplicated keys, but guarantees to report at least one time\n * every key that was there from the start to the end of the scanning process.\n *\n * NOTE: If you do database changes within the callback, you should be aware\n * that the internal state of the database may change. For instance it is safe\n * to delete or modify the current key, but may not be safe to delete any\n * other key.\n * Moreover playing with the Redis keyspace while iterating may have the\n * effect of returning more duplicates. A safe pattern is to store the keys\n * names you want to modify elsewhere, and perform the actions on the keys\n * later when the iteration is complete. However this can cost a lot of\n * memory, so it may make sense to just operate on the current key when\n * possible during the iteration, given that this is safe. 
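The cursor contract described above (return 1 while more elements remain, 0 once the scan completed, cursor restartable from zero) can be modeled with a small self-contained toy, independent of the Redis internals. `ToyCursor`, `toy_scan` and friends are hypothetical names for illustration only:

```c
#include <assert.h>
#include <stddef.h>

// Toy cursor mirroring the RM_Scan contract: each call visits one element
// and returns 1 while more remain, 0 once the scan is complete.
typedef struct { size_t pos; int done; } ToyCursor;

typedef void (*toy_cb)(const char *key, void *privdata);

static const char *toy_keys[] = {"k1", "k2", "k3"};

int toy_scan(ToyCursor *c, toy_cb cb, void *privdata) {
    if (c->done) return 0;  // mirrors the ENOENT early return
    cb(toy_keys[c->pos++], privdata);
    if (c->pos == sizeof(toy_keys)/sizeof(toy_keys[0])) {
        c->done = 1;
        return 0;
    }
    return 1;
}

static void count_cb(const char *key, void *privdata) {
    (void)key;
    (*(int *)privdata)++;  // count visited keys
}

// Same loop shape as the documented RM_Scan usage.
int toy_scan_all(void) {
    ToyCursor c = {0, 0};
    int seen = 0;
    while (toy_scan(&c, count_cb, &seen));
    return seen;
}
```

Resetting `pos` and `done` to zero restarts the toy cursor, just as RM_ScanCursorRestart does for the real one.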
*/\nint RM_Scan(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) {\n    if (cursor->done) {\n        errno = ENOENT;\n        return 0;\n    }\n    int ret = 1;\n    ScanCBData data = { ctx, privdata, fn };\n    cursor->cursor = dbScan(ctx->client->db, cursor->cursor, moduleScanCallback, &data);\n    if (cursor->cursor == 0) {\n        cursor->done = 1;\n        ret = 0;\n    }\n    errno = 0;\n    return ret;\n}\n\ntypedef void (*RedisModuleScanKeyCB)(RedisModuleKey *key, RedisModuleString *field, RedisModuleString *value, void *privdata);\ntypedef struct {\n    RedisModuleKey *key;\n    void* user_data;\n    RedisModuleScanKeyCB fn;\n} ScanKeyCBData;\n\nstatic void moduleScanKeyCallback(void *privdata, const dictEntry *de, dictEntryLink plink) {\n    UNUSED(plink);\n    ScanKeyCBData *data = privdata;\n    sds key = dictGetKey(de);\n    kvobj *kv = data->key->kv;\n    robj *field = NULL;\n    robj *value = NULL;\n    if (kv->type == OBJ_SET) {\n        field = createStringObject(key, sdslen(key));\n        value = NULL;\n    } else if (kv->type == OBJ_HASH) {\n        Entry *e = (Entry *) key;\n\n        /* If field is expired and not indicated to access expired, then ignore */\n        if ((!(data->key->mode & REDISMODULE_OPEN_KEY_ACCESS_EXPIRED)) &&\n            (entryIsExpired(e)))\n            return;\n\n        /* For hash, the value is stored in the entry (field), not in the dict entry */\n        sds fieldStr = entryGetField(e);\n        sds val = entryGetValue(e);\n\n        field = createStringObject(fieldStr, sdslen(fieldStr));\n        value = createStringObject(val, sdslen(val));\n    } else if (kv->type == OBJ_ZSET) {\n        zskiplistNode *znode = (zskiplistNode *) key;\n        sds fieldStr = zslGetNodeElement(znode);\n        field = createStringObject(fieldStr, sdslen(fieldStr));\n        value = createStringObjectFromLongDouble(znode->score, 0);\n    }\n    \n    serverAssert(field != NULL);\n    
data->fn(data->key, field, value, data->user_data);\n    decrRefCount(field);\n    if (value) decrRefCount(value);\n}\n\n/* Scan API that allows a module to scan the elements in a hash, set or sorted set key.\n *\n * Callback for scan implementation.\n *\n *     void scan_callback(RedisModuleKey *key, RedisModuleString* field, RedisModuleString* value, void *privdata);\n *\n * - key - the Redis key context provided for the scan.\n * - field - field name, owned by the caller and needs to be retained if used\n *   after this function.\n * - value - value string or NULL for set type, owned by the caller and needs to\n *   be retained if used after this function.\n * - privdata - the user data provided to RedisModule_ScanKey.\n *\n * The way it should be used:\n *\n *      RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();\n *      RedisModuleKey *key = RedisModule_OpenKey(...);\n *      while(RedisModule_ScanKey(key, c, callback, privateData));\n *      RedisModule_CloseKey(key);\n *      RedisModule_ScanCursorDestroy(c);\n *\n * It is also possible to use this API from another thread while the lock is acquired during\n * the actual call to RM_ScanKey, and re-opening the key each time:\n *\n *      RedisModuleScanCursor *c = RedisModule_ScanCursorCreate();\n *      RedisModule_ThreadSafeContextLock(ctx);\n *      RedisModuleKey *key = RedisModule_OpenKey(...);\n *      while(RedisModule_ScanKey(key, c, callback, privateData)){\n *          RedisModule_CloseKey(key);\n *          RedisModule_ThreadSafeContextUnlock(ctx);\n *          // do some background job\n *          RedisModule_ThreadSafeContextLock(ctx);\n *          key = RedisModule_OpenKey(...);\n *      }\n *      RedisModule_CloseKey(key);\n *      RedisModule_ScanCursorDestroy(c);\n *\n * The function will return 1 if there are more elements to scan and 0 otherwise,\n * possibly setting errno if the call failed.\n * It is also possible to restart an existing cursor using RM_ScanCursorRestart.\n *\n 
* NOTE: Certain operations are unsafe while iterating the object. For instance\n * while the API guarantees to return at least one time all the elements that\n * are present in the data structure consistently from the start to the end\n * of the iteration (see HSCAN and similar commands documentation), the more\n * you play with the elements, the more duplicates you may get. In general\n * deleting the current element of the data structure is safe, while removing\n * the key you are iterating is not safe. */\nint RM_ScanKey(RedisModuleKey *key, RedisModuleScanCursor *cursor, RedisModuleScanKeyCB fn, void *privdata) {\n    if (key == NULL || key->kv == NULL) {\n        errno = EINVAL;\n        return 0;\n    }\n    dict *ht = NULL;\n    kvobj *kv = key->kv;\n    if (kv->type == OBJ_SET) {\n        if (kv->encoding == OBJ_ENCODING_HT)\n            ht = kv->ptr;\n    } else if (kv->type == OBJ_HASH) {\n        if (kv->encoding == OBJ_ENCODING_HT)\n            ht = kv->ptr;\n    } else if (kv->type == OBJ_ZSET) {\n        if (kv->encoding == OBJ_ENCODING_SKIPLIST)\n            ht = ((zset *)kv->ptr)->dict;\n    } else {\n        errno = EINVAL;\n        return 0;\n    }\n    if (cursor->done) {\n        errno = ENOENT;\n        return 0;\n    }\n    int ret = 1;\n    if (ht) {\n        ScanKeyCBData data = { key, privdata, fn };\n        cursor->cursor = dictScan(ht, cursor->cursor, moduleScanKeyCallback, &data);\n        if (cursor->cursor == 0) {\n            cursor->done = 1;\n            ret = 0;\n        }\n    } else if (kv->type == OBJ_SET) {\n        setTypeIterator si;\n        sds sdsele;\n        setTypeInitIterator(&si, kv);\n        while ((sdsele = setTypeNextObject(&si)) != NULL) {\n            robj *field = createObject(OBJ_STRING, sdsele);\n            fn(key, field, NULL, privdata);\n            decrRefCount(field);\n        }\n        setTypeResetIterator(&si);\n        cursor->cursor = 1;\n        cursor->done = 1;\n        ret = 0;\n    } else if 
(kv->type == OBJ_ZSET || kv->type == OBJ_HASH) {\n        unsigned char *lp, *p;\n        /* is hash with expiry on fields, then lp tuples are [field][value][expire] */\n        int hfe = kv->type == OBJ_HASH && kv->encoding == OBJ_ENCODING_LISTPACK_EX;\n\n        if (kv->type == OBJ_HASH)\n            lp = hashTypeListpackGetLp(kv);\n        else\n            lp = kv->ptr;\n\n        p = lpSeek(lp,0);\n        while(p) {\n            long long vllField, vllValue, vllExpire;\n            unsigned int lenField, lenValue;\n            unsigned char *pField, *pValue;\n\n            pField = lpGetValue(p,&lenField,&vllField);\n            p = lpNext(lp,p);\n            pValue = lpGetValue(p,&lenValue,&vllValue);\n            p = lpNext(lp,p);\n\n            if (hfe) {\n                serverAssert(lpGetIntegerValue(p, &vllExpire));\n                p = lpNext(lp, p);\n\n                /* Skip expired fields */\n                if ((!(key->mode & REDISMODULE_OPEN_KEY_ACCESS_EXPIRED)) &&\n                    (hashTypeIsExpired(kv, vllExpire)))\n                    continue;\n            }\n\n            robj *value = (pValue != NULL) ?\n                          createStringObject((char*)pValue,lenValue) :\n                          createStringObjectFromLongLongWithSds(vllValue);\n\n            robj *field = (pField != NULL) ?\n                          createStringObject((char*)pField,lenField) :\n                          createStringObjectFromLongLongWithSds(vllField);\n            fn(key, field, value, privdata);\n\n            decrRefCount(field);\n            decrRefCount(value);\n        }\n        cursor->cursor = 1;\n        cursor->done = 1;\n        ret = 0;\n    }\n    errno = 0;\n    return ret;\n}\n\n/* --------------------------------------------------------------------------\n * ## Module fork API\n * -------------------------------------------------------------------------- */\n\n/* Create a background child process with the current frozen snapshot of 
the\n * main process where you can do some processing in the background without\n * affecting or freezing the traffic, with no need for threads and GIL locking.\n * Note that Redis allows for only one concurrent fork.\n * When the child wants to exit, it should call RedisModule_ExitFromChild.\n * If the parent wants to kill the child it should call RedisModule_KillForkChild.\n * The done handler callback will be executed on the parent process when the\n * child exited (but not when killed).\n * Return: -1 on failure, on success the parent process will get a positive PID\n * of the child, and the child process will get 0.\n */\nint RM_Fork(RedisModuleForkDoneHandler cb, void *user_data) {\n    pid_t childpid;\n\n    if ((childpid = redisFork(CHILD_TYPE_MODULE)) == 0) {\n        /* Child */\n        redisSetProcTitle(\"redis-module-fork\");\n    } else if (childpid == -1) {\n        serverLog(LL_WARNING,\"Can't fork for module: %s\", strerror(errno));\n    } else {\n        /* Parent */\n        moduleForkInfo.done_handler = cb;\n        moduleForkInfo.done_handler_user_data = user_data;\n        serverLog(LL_VERBOSE, \"Module fork started pid: %ld \", (long) childpid);\n    }\n    return childpid;\n}\n\n/* The module is advised to call this function from the fork child once in a while,\n * so that it can report progress and COW memory to the parent which will be\n * reported in INFO.\n * The `progress` argument should be between 0 and 1, or -1 when not available. 
*/\nvoid RM_SendChildHeartbeat(double progress) {\n    sendChildInfoGeneric(CHILD_INFO_TYPE_CURRENT_INFO, 0, progress, \"Module fork\");\n}\n\n/* Call from the child process when you want to terminate it.\n * retcode will be provided to the done handler executed on the parent process.\n */\nint RM_ExitFromChild(int retcode) {\n    sendChildCowInfo(CHILD_INFO_TYPE_MODULE_COW_SIZE, \"Module fork\");\n    exitFromChild(retcode, 0);\n    return REDISMODULE_OK;\n}\n\n/* Kill the active module forked child, if there is one active and the\n * pid matches, and returns C_OK. Otherwise if there is no active module\n * child or the pid does not match, return C_ERR without doing anything. */\nint TerminateModuleForkChild(int child_pid, int wait) {\n    /* Module child should be active and pid should match. */\n    if (server.child_type != CHILD_TYPE_MODULE ||\n        server.child_pid != child_pid) return C_ERR;\n\n    int statloc;\n    serverLog(LL_VERBOSE,\"Killing running module fork child: %ld\",\n        (long) server.child_pid);\n    if (kill(server.child_pid,SIGUSR1) != -1 && wait) {\n        while(waitpid(server.child_pid, &statloc, 0) !=\n              server.child_pid);\n    }\n    /* Reset the buffer accumulating changes while the child saves. */\n    resetChildState();\n    moduleForkInfo.done_handler = NULL;\n    moduleForkInfo.done_handler_user_data = NULL;\n    return C_OK;\n}\n\n/* Can be used to kill the forked child process from the parent process.\n * child_pid would be the return value of RedisModule_Fork. */\nint RM_KillForkChild(int child_pid) {\n    /* Kill module child, wait for child exit. 
*/\n    if (TerminateModuleForkChild(child_pid,1) == C_OK)\n        return REDISMODULE_OK;\n    else\n        return REDISMODULE_ERR;\n}\n\nvoid ModuleForkDoneHandler(int exitcode, int bysignal) {\n    serverLog(LL_NOTICE,\n        \"Module fork exited pid: %ld, retcode: %d, bysignal: %d\",\n        (long) server.child_pid, exitcode, bysignal);\n    if (moduleForkInfo.done_handler) {\n        moduleForkInfo.done_handler(exitcode, bysignal,\n            moduleForkInfo.done_handler_user_data);\n    }\n\n    moduleForkInfo.done_handler = NULL;\n    moduleForkInfo.done_handler_user_data = NULL;\n}\n\n/* --------------------------------------------------------------------------\n * ## Server hooks implementation\n * -------------------------------------------------------------------------- */\n\n/* This must be synced with REDISMODULE_EVENT_*\n * We use -1 (MAX_UINT64) to denote that this event doesn't have\n * a data structure associated with it. We use MAX_UINT64 on purpose,\n * in order to pass the check in RedisModule_SubscribeToServerEvent. 
*/\nstatic uint64_t moduleEventVersions[] = {\n    REDISMODULE_REPLICATIONINFO_VERSION, /* REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED */\n    -1, /* REDISMODULE_EVENT_PERSISTENCE */\n    REDISMODULE_FLUSHINFO_VERSION, /* REDISMODULE_EVENT_FLUSHDB */\n    -1, /* REDISMODULE_EVENT_LOADING */\n    REDISMODULE_CLIENTINFO_VERSION, /* REDISMODULE_EVENT_CLIENT_CHANGE */\n    -1, /* REDISMODULE_EVENT_SHUTDOWN */\n    -1, /* REDISMODULE_EVENT_REPLICA_CHANGE */\n    -1, /* REDISMODULE_EVENT_MASTER_LINK_CHANGE */\n    REDISMODULE_CRON_LOOP_VERSION, /* REDISMODULE_EVENT_CRON_LOOP */\n    REDISMODULE_MODULE_CHANGE_VERSION, /* REDISMODULE_EVENT_MODULE_CHANGE */\n    REDISMODULE_LOADING_PROGRESS_VERSION, /* REDISMODULE_EVENT_LOADING_PROGRESS */\n    REDISMODULE_SWAPDBINFO_VERSION, /* REDISMODULE_EVENT_SWAPDB */\n    -1, /* REDISMODULE_EVENT_REPL_BACKUP */\n    -1, /* REDISMODULE_EVENT_FORK_CHILD */\n    -1, /* REDISMODULE_EVENT_REPL_ASYNC_LOAD */\n    -1, /* REDISMODULE_EVENT_EVENTLOOP */\n    -1, /* REDISMODULE_EVENT_CONFIG */\n    REDISMODULE_KEYINFO_VERSION, /* REDISMODULE_EVENT_KEY */\n    REDISMODULE_CLUSTER_SLOT_MIGRATION_INFO_VERSION, /* REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION */\n    REDISMODULE_CLUSTER_SLOT_MIGRATION_TRIMINFO_VERSION, /* REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM */\n};\n\n/* Register to be notified, via a callback, when the specified server event\n * happens. The callback is called with the event as argument, and an additional\n * argument which is a void pointer and should be cast to a specific type\n * that is event-specific (but many events will just use NULL since they do not\n * have additional information to pass to the callback).\n *\n * If the callback is NULL and there was a previous subscription, the module\n * will be unsubscribed. 
If there was a previous subscription and the callback\n * is not null, the old callback will be replaced with the new one.\n *\n * The callback must be of this type:\n *\n *     int (*RedisModuleEventCallback)(RedisModuleCtx *ctx,\n *                                     RedisModuleEvent eid,\n *                                     uint64_t subevent,\n *                                     void *data);\n *\n * The 'ctx' is a normal Redis module context that the callback can use in\n * order to call other modules APIs. The 'eid' is the event itself, this\n * is only useful in the case the module subscribed to multiple events: using\n * the 'id' field of this structure it is possible to check if the event\n * is one of the events we registered with this callback. The 'subevent' field\n * depends on the event that fired.\n *\n * Finally the 'data' pointer may be populated, only for certain events, with\n * more relevant data.\n *\n * Here is a list of events you can use as 'eid' and related sub events:\n *\n * * RedisModuleEvent_ReplicationRoleChanged:\n *\n *     This event is called when the instance switches from master\n *     to replica or the other way around, however the event is\n *     also called when the replica remains a replica but starts to\n *     replicate with a different master.\n *\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_MASTER`\n *     * `REDISMODULE_SUBEVENT_REPLROLECHANGED_NOW_REPLICA`\n *\n *     The 'data' field can be casted by the callback to a\n *     `RedisModuleReplicationInfo` structure with the following fields:\n *\n *         int master; // true if master, false if replica\n *         char *masterhost; // master instance hostname for NOW_REPLICA\n *         int masterport; // master instance port for NOW_REPLICA\n *         char *replid1; // Main replication ID\n *         char *replid2; // Secondary replication ID\n *         uint64_t repl1_offset; // Main replication 
offset\n *         uint64_t repl2_offset; // Offset of replid2 validity\n *\n * * RedisModuleEvent_Persistence\n *\n *     This event is called when RDB saving or AOF rewriting starts\n *     and ends. The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START`\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START`\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START`\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START`\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_ENDED`\n *     * `REDISMODULE_SUBEVENT_PERSISTENCE_FAILED`\n *\n *     The above events are triggered not just when the user calls the\n *     relevant commands like BGSAVE, but also when a saving operation\n *     or AOF rewriting occurs because of internal server triggers.\n *     The SYNC_RDB_START sub events are happening in the foreground due to\n *     SAVE command, FLUSHALL, or server shutdown, and the other RDB and\n *     AOF sub events are executed in a background fork child, so any\n *     action the module takes can only affect the generated AOF or RDB,\n *     but will not be reflected in the parent process and affect connected\n *     clients and commands. Also note that the AOF_START sub event may end\n *     up saving RDB content in case of an AOF with rdb-preamble.\n *\n * * RedisModuleEvent_FlushDB\n *\n *     The FLUSHALL, FLUSHDB or an internal flush (for instance\n *     because of replication, after the replica synchronization)\n *     happened. 
The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_FLUSHDB_START`\n *     * `REDISMODULE_SUBEVENT_FLUSHDB_END`\n *\n *     The data pointer can be casted to a RedisModuleFlushInfo\n *     structure with the following fields:\n *\n *         int32_t async;  // True if the flush is done in a thread.\n *                         // See for instance FLUSHALL ASYNC.\n *                         // In this case the END callback is invoked\n *                         // immediately after the database is put\n *                         // in the free list of the thread.\n *         int32_t dbnum;  // Flushed database number, -1 for all the DBs\n *                         // in the case of the FLUSHALL operation.\n *\n *     The start event is called *before* the operation is initiated, thus\n *     allowing the callback to call DBSIZE or other operation on the\n *     yet-to-free keyspace.\n *\n * * RedisModuleEvent_Loading\n *\n *     Called on loading operations: at startup when the server is\n *     started, but also after a first synchronization when the\n *     replica is loading the RDB file from the master.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_LOADING_RDB_START`\n *     * `REDISMODULE_SUBEVENT_LOADING_AOF_START`\n *     * `REDISMODULE_SUBEVENT_LOADING_REPL_START`\n *     * `REDISMODULE_SUBEVENT_LOADING_ENDED`\n *     * `REDISMODULE_SUBEVENT_LOADING_FAILED`\n *\n *     Note that AOF loading may start with an RDB data in case of\n *     rdb-preamble, in which case you'll only receive an AOF_START event.\n *\n * * RedisModuleEvent_ClientChange\n *\n *     Called when a client connects or disconnects.\n *     The data pointer can be casted to a RedisModuleClientInfo\n *     structure, documented in RedisModule_GetClientInfoById().\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED`\n *     * `REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED`\n 
*\n * * RedisModuleEvent_Shutdown\n *\n *     The server is shutting down. No subevents are available.\n *\n * * RedisModuleEvent_ReplicaChange\n *\n *     This event is called when the instance (that can be either a\n *     master or a replica) gets a new online replica, or loses a\n *     replica since it gets disconnected.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE`\n *     * `REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE`\n *\n *     No additional information is available so far: future versions\n *     of Redis will have an API in order to enumerate the replicas\n *     connected and their state.\n *\n * * RedisModuleEvent_CronLoop\n *\n *     This event is called every time Redis calls the serverCron()\n *     function in order to do certain bookkeeping. Modules that are\n *     required to do operations from time to time may use this callback.\n *     Normally Redis calls this function 10 times per second, but\n *     this changes depending on the \"hz\" configuration.\n *     No sub events are available.\n *\n *     The data pointer can be casted to a RedisModuleCronLoop\n *     structure with the following fields:\n *\n *         int32_t hz;  // Approximate number of events per second.\n *\n * * RedisModuleEvent_MasterLinkChange\n *\n *     This is called for replicas in order to notify when the\n *     replication link becomes functional (up) with our master,\n *     or when it goes down. 
Note that the link is not considered\n *     up when we just connected to the master, but only if the\n *     replication is happening correctly.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_MASTER_LINK_UP`\n *     * `REDISMODULE_SUBEVENT_MASTER_LINK_DOWN`\n *\n * * RedisModuleEvent_ModuleChange\n *\n *     This event is called when a new module is loaded or one is unloaded.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_MODULE_LOADED`\n *     * `REDISMODULE_SUBEVENT_MODULE_UNLOADED`\n *\n *     The data pointer can be casted to a RedisModuleModuleChange\n *     structure with the following fields:\n *\n *         const char* module_name;  // Name of module loaded or unloaded.\n *         int32_t module_version;  // Module version.\n *\n * * RedisModuleEvent_LoadingProgress\n *\n *     This event is called repeatedly while an RDB or AOF file\n *     is being loaded.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB`\n *     * `REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF`\n *\n *     The data pointer can be casted to a RedisModuleLoadingProgress\n *     structure with the following fields:\n *\n *         int32_t hz;  // Approximate number of events per second.\n *         int32_t progress;  // Approximate progress between 0 and 1024,\n *                            // or -1 if unknown.\n *\n * * RedisModuleEvent_SwapDB\n *\n *     This event is called when a SWAPDB command has been successfully\n *     executed.\n *     Currently there are no subevents available for this event.\n *\n *     The data pointer can be casted to a RedisModuleSwapDbInfo\n *     structure with the following fields:\n *\n *         int32_t dbnum_first;    // Swap Db first dbnum\n *         int32_t dbnum_second;   // Swap Db second dbnum\n *\n * * RedisModuleEvent_ReplBackup\n * \n *     WARNING: Replication Backup events are deprecated since Redis 7.0 
and are never fired.\n *     See RedisModuleEvent_ReplAsyncLoad for understanding how Async Replication Loading events\n *     are now triggered when repl-diskless-load is set to swapdb.\n *\n *     Called when the repl-diskless-load config is set to swapdb,\n *     and Redis needs to back up the current database so it can\n *     possibly be restored later. A module with global data and\n *     maybe with aux_load and aux_save callbacks may need to use this\n *     notification to backup / restore / discard its globals.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_REPL_BACKUP_CREATE`\n *     * `REDISMODULE_SUBEVENT_REPL_BACKUP_RESTORE`\n *     * `REDISMODULE_SUBEVENT_REPL_BACKUP_DISCARD`\n * \n * * RedisModuleEvent_ReplAsyncLoad\n *\n *     Called when the repl-diskless-load config is set to swapdb and a replication with a master of the same\n *     data set history (matching replication ID) occurs.\n *     In that case Redis serves the current data set while loading the new database in memory from the socket.\n *     Modules must have declared they support this mechanism in order to activate it, through the\n *     REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD flag.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED`\n *     * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED`\n *     * `REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED`\n *\n * * RedisModuleEvent_ForkChild\n *\n *     Called when a fork child (AOFRW, RDBSAVE, module fork...) 
is born or dies.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_FORK_CHILD_BORN`\n *     * `REDISMODULE_SUBEVENT_FORK_CHILD_DIED`\n *\n * * RedisModuleEvent_EventLoop\n *\n *     Called on each event loop iteration, once just before the event loop goes\n *     to sleep or just after it wakes up.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP`\n *     * `REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP`\n *\n * * RedisModule_Event_Config\n *\n *     Called when a configuration event happens.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_CONFIG_CHANGE`\n *\n *     The data pointer can be casted to a RedisModuleConfigChange\n *     structure with the following fields:\n *\n *         const char **config_names; // An array of C string pointers containing the\n *                                    // name of each modified configuration item \n *         uint32_t num_changes;      // The number of elements in the config_names array\n *\n * * RedisModule_Event_Key\n *\n *     Called when a key is removed from the keyspace. We can't modify any key in\n *     the event.\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_KEY_DELETED`\n *     * `REDISMODULE_SUBEVENT_KEY_EXPIRED`\n *     * `REDISMODULE_SUBEVENT_KEY_EVICTED`\n *     * `REDISMODULE_SUBEVENT_KEY_OVERWRITTEN`\n *\n *     The data pointer can be casted to a RedisModuleKeyInfo\n *     structure with the following fields:\n *\n *         RedisModuleKey *key;    // Key name\n *\n * * RedisModuleEvent_ClusterSlotMigration\n *\n *     Called when an atomic slot migration (ASM) event happens.\n *     IMPORT events are triggered on the destination side of a slot migration\n *     operation. 
These notifications let modules prepare for the upcoming\n *     ownership change, observe successful completion once the cluster config\n *     reflects the new owner, or detect a failure in which case slot ownership\n *     remains with the source.\n *\n *     Similarly, MIGRATE events are triggered on the source side of a slot\n *     migration operation to let modules prepare for the ownership change and\n *     observe the completion of the slot migration. The MIGRATE_MODULE_PROPAGATE\n *     event is triggered in the fork just before snapshot delivery; modules may\n *     use it to enqueue commands that will be delivered first. See\n *     RedisModule_ClusterPropagateForSlotMigration() for details.\n *\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_FAILED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_COMPLETED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_STARTED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_FAILED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE`\n *\n *     The data pointer can be casted to a RedisModuleClusterSlotMigrationInfo\n *     structure with the following fields:\n *\n *         char source_node_id[REDISMODULE_NODE_ID_LEN + 1];\n *         char destination_node_id[REDISMODULE_NODE_ID_LEN + 1];\n *         const char *task_id;               // Task ID\n *         RedisModuleSlotRangeArray *slots;  // Slot ranges\n *\n * * RedisModuleEvent_ClusterSlotMigrationTrim\n *\n *     Called when trimming keys after a slot migration. Fires on the source\n *     after a successful migration to clean up migrated keys, or on the\n *     destination after a failed import to discard partial imports. Two methods\n *     are supported. 
In the first method, keys are deleted in a background\n *     thread; this is reported via the TRIM_BACKGROUND event. In the second\n *     method, Redis performs incremental deletions on the main thread via the\n *     cron loop to avoid stalls; this is reported via the TRIM_STARTED and\n *     TRIM_COMPLETED events. Each deletion emits REDISMODULE_NOTIFY_KEY_TRIMMED\n *     so modules can react to individual key deletions. Redis selects the\n *     method automatically: background by default; switches to main thread\n *     trimming when a module subscribes to REDISMODULE_NOTIFY_KEY_TRIMMED.\n *\n *     The following sub events are available:\n *\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_COMPLETED`\n *     * `REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_BACKGROUND`\n *\n *     The data pointer can be casted to a RedisModuleClusterSlotMigrationTrimInfo\n *     structure with the following fields:\n *\n *         RedisModuleSlotRangeArray *slots;  // Slot ranges\n *\n * The function returns REDISMODULE_OK if the module was successfully subscribed\n * for the specified event. If the API is called from a wrong context or unsupported event\n * is given then REDISMODULE_ERR is returned. */\nint RM_SubscribeToServerEvent(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback) {\n    RedisModuleEventListener *el;\n\n    /* Protect in case of calls from contexts without a module reference. */\n    if (ctx->module == NULL) return REDISMODULE_ERR;\n    if (event.id >= _REDISMODULE_EVENT_NEXT) return REDISMODULE_ERR;\n    if (event.dataver > moduleEventVersions[event.id]) return REDISMODULE_ERR; /* Module compiled with a newer redismodule.h than we support */\n\n    /* Search an event matching this module and event ID. 
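The subscribe semantics implemented below (a NULL callback unsubscribes, a non-NULL callback adds or replaces the listener for that module/event pair) can be modeled with a small self-contained toy table. All names here (`toy_subscribe`, `TOY_EVENTS`, etc.) are illustrative, not the module API:

```c
#include <assert.h>
#include <stddef.h>

// Toy model of RM_SubscribeToServerEvent semantics: one listener slot per
// event ID; NULL removes, non-NULL adds or replaces the previous callback.
#define TOY_EVENTS 4

typedef void (*toy_event_cb)(void *data);

static toy_event_cb toy_listeners[TOY_EVENTS];

int toy_subscribe(int event_id, toy_event_cb cb) {
    if (event_id < 0 || event_id >= TOY_EVENTS) return -1; // unsupported event
    toy_listeners[event_id] = cb;  // NULL unsubscribes, non-NULL (re)subscribes
    return 0;
}

// Returns 1 if the event currently has a listener, 0 otherwise.
int toy_subscribed(int event_id) {
    return toy_listeners[event_id] != NULL;
}

static void toy_cb_a(void *data) { (void)data; }
```

The real implementation keeps one listener per (module, event) pair in a list rather than a flat array, but the replace/remove behavior is the same.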
*/\n    listIter li;\n    listNode *ln;\n    listRewind(RedisModule_EventListeners,&li);\n    while((ln = listNext(&li))) {\n        el = ln->value;\n        if (el->module == ctx->module && el->event.id == event.id)\n            break; /* Matching event found. */\n    }\n\n    /* Modify or remove the event listener if we already had one. */\n    if (ln) {\n        if (callback == NULL) {\n            listDelNode(RedisModule_EventListeners,ln);\n            zfree(el);\n        } else {\n            el->callback = callback; /* Update the callback with the new one. */\n        }\n        return REDISMODULE_OK;\n    }\n\n    /* No event found, we need to add a new one. */\n    el = zmalloc(sizeof(*el));\n    el->module = ctx->module;\n    el->event = event;\n    el->callback = callback;\n    listAddNodeTail(RedisModule_EventListeners,el);\n    return REDISMODULE_OK;\n}\n\n/**\n * For a given server event and subevent, return zero if the\n * subevent is not supported and non-zero otherwise.\n */\nint RM_IsSubEventSupported(RedisModuleEvent event, int64_t subevent) {\n    switch (event.id) {\n    case REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED:\n        return subevent < _REDISMODULE_EVENT_REPLROLECHANGED_NEXT;\n    case REDISMODULE_EVENT_PERSISTENCE:\n        return subevent < _REDISMODULE_SUBEVENT_PERSISTENCE_NEXT;\n    case REDISMODULE_EVENT_FLUSHDB:\n        return subevent < _REDISMODULE_SUBEVENT_FLUSHDB_NEXT;\n    case REDISMODULE_EVENT_LOADING:\n        return subevent < _REDISMODULE_SUBEVENT_LOADING_NEXT;\n    case REDISMODULE_EVENT_CLIENT_CHANGE:\n        return subevent < _REDISMODULE_SUBEVENT_CLIENT_CHANGE_NEXT;\n    case REDISMODULE_EVENT_SHUTDOWN:\n        return subevent < _REDISMODULE_SUBEVENT_SHUTDOWN_NEXT;\n    case REDISMODULE_EVENT_REPLICA_CHANGE:\n        return subevent < _REDISMODULE_EVENT_REPLROLECHANGED_NEXT;\n    case REDISMODULE_EVENT_MASTER_LINK_CHANGE:\n        return subevent < _REDISMODULE_SUBEVENT_MASTER_NEXT;\n    case 
REDISMODULE_EVENT_CRON_LOOP:\n        return subevent < _REDISMODULE_SUBEVENT_CRON_LOOP_NEXT;\n    case REDISMODULE_EVENT_MODULE_CHANGE:\n        return subevent < _REDISMODULE_SUBEVENT_MODULE_NEXT;\n    case REDISMODULE_EVENT_LOADING_PROGRESS:\n        return subevent < _REDISMODULE_SUBEVENT_LOADING_PROGRESS_NEXT;\n    case REDISMODULE_EVENT_SWAPDB:\n        return subevent < _REDISMODULE_SUBEVENT_SWAPDB_NEXT;\n    case REDISMODULE_EVENT_REPL_ASYNC_LOAD:\n        return subevent < _REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_NEXT;\n    case REDISMODULE_EVENT_FORK_CHILD:\n        return subevent < _REDISMODULE_SUBEVENT_FORK_CHILD_NEXT;\n    case REDISMODULE_EVENT_EVENTLOOP:\n        return subevent < _REDISMODULE_SUBEVENT_EVENTLOOP_NEXT;\n    case REDISMODULE_EVENT_CONFIG:\n        return subevent < _REDISMODULE_SUBEVENT_CONFIG_NEXT; \n    case REDISMODULE_EVENT_KEY:\n        return subevent < _REDISMODULE_SUBEVENT_KEY_NEXT;\n    case REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION:\n        return subevent < _REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_NEXT;\n    case REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM:\n        return subevent < _REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_NEXT;\n    default:\n        break;\n    }\n    return 0;\n}\n\ntypedef struct KeyInfo {\n    int32_t dbnum;\n    RedisModuleString *key;\n    kvobj *kv;  /* key-value object */\n    int mode;\n} KeyInfo;\n\n/* This is called by the Redis internals every time we want to fire an\n * event that can be intercepted by some module. The pointer 'data' is useful\n * in order to populate the event-specific structure when needed, in order\n * to return the structure with more information to the callback.\n *\n * 'eid' and 'subid' are just the main event ID and the sub event associated\n * with the event, depending on what exactly happened. 
*/\nvoid moduleFireServerEvent(uint64_t eid, int subid, void *data) {\n    /* Fast path to return ASAP if there is nothing to do, avoiding to\n     * setup the iterator and so forth: we want this call to be extremely\n     * cheap if there are no registered modules. */\n    if (listLength(RedisModule_EventListeners) == 0) return;\n\n    listIter li;\n    listNode *ln;\n    listRewind(RedisModule_EventListeners,&li);\n    while((ln = listNext(&li))) {\n        RedisModuleEventListener *el = ln->value;\n        if (el->event.id == eid) {\n            RedisModuleCtx ctx;\n            if (eid == REDISMODULE_EVENT_CLIENT_CHANGE) {\n                /* In the case of client changes, we're pushing the real client\n                 * so the event handler can mutate it if needed. For example,\n                 * to change its authentication state in a way that does not\n                 * depend on specific commands executed later.\n                 */\n                moduleCreateContext(&ctx,el->module,REDISMODULE_CTX_NONE);\n                ctx.client = (client *) data;\n            } else {\n                moduleCreateContext(&ctx,el->module,REDISMODULE_CTX_TEMP_CLIENT);\n            }\n\n            void *moduledata = NULL;\n            RedisModuleClientInfoV1 civ1;\n            RedisModuleReplicationInfoV1 riv1;\n            RedisModuleModuleChangeV1 mcv1;\n            RedisModuleKey key;\n            RedisModuleKeyInfoV1 ki = {REDISMODULE_KEYINFO_VERSION, &key};\n\n            /* Event specific context and data pointer setup. 
*/\n            if (eid == REDISMODULE_EVENT_CLIENT_CHANGE) {\n                serverAssert(modulePopulateClientInfoStructure(&civ1,data, el->event.dataver) == REDISMODULE_OK);\n                moduledata = &civ1;\n            } else if (eid == REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED) {\n                serverAssert(modulePopulateReplicationInfoStructure(&riv1,el->event.dataver) == REDISMODULE_OK);\n                moduledata = &riv1;\n            } else if (eid == REDISMODULE_EVENT_FLUSHDB) {\n                moduledata = data;\n                RedisModuleFlushInfoV1 *fi = data;\n                if (fi->dbnum != -1)\n                    selectDb(ctx.client, fi->dbnum);\n            } else if (eid == REDISMODULE_EVENT_MODULE_CHANGE) {\n                RedisModule *m = data;\n                if (m == el->module) {\n                    moduleFreeContext(&ctx);\n                    continue;\n                }\n                mcv1.version = REDISMODULE_MODULE_CHANGE_VERSION;\n                mcv1.module_name = m->name;\n                mcv1.module_version = m->ver;\n                moduledata = &mcv1;\n            } else if (eid == REDISMODULE_EVENT_LOADING_PROGRESS) {\n                moduledata = data;\n            } else if (eid == REDISMODULE_EVENT_CRON_LOOP) {\n                moduledata = data;\n            } else if (eid == REDISMODULE_EVENT_SWAPDB) {\n                moduledata = data;\n            } else if (eid == REDISMODULE_EVENT_CONFIG) {\n                moduledata = data;\n            } else if (eid == REDISMODULE_EVENT_KEY) {\n                KeyInfo *info = data;\n                selectDb(ctx.client, info->dbnum);\n                moduleInitKey(&key, &ctx, info->key, info->kv, info->mode);\n                moduledata = &ki;\n            } else if (eid == REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION) {\n                moduledata = data;\n            } else if (eid == REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM) {\n                moduledata = 
data;\n            }\n\n            el->module->in_hook++;\n            el->callback(&ctx,el->event,subid,moduledata);\n            el->module->in_hook--;\n\n            if (eid == REDISMODULE_EVENT_KEY) {\n                moduleCloseKey(&key);\n            }\n\n            moduleFreeContext(&ctx);\n        }\n    }\n}\n\n/* Remove all the listeners for this module: this is used before unloading\n * a module. */\nvoid moduleUnsubscribeAllServerEvents(RedisModule *module) {\n    RedisModuleEventListener *el;\n    listIter li;\n    listNode *ln;\n    listRewind(RedisModule_EventListeners,&li);\n\n    while((ln = listNext(&li))) {\n        el = ln->value;\n        if (el->module == module) {\n            listDelNode(RedisModule_EventListeners,ln);\n            zfree(el);\n        }\n    }\n}\n\nvoid processModuleLoadingProgressEvent(int is_aof) {\n    long long now = server.ustime;\n    static long long next_event = 0;\n    if (now >= next_event) {\n        /* Fire the loading progress modules end event. */\n        int progress = -1;\n        if (server.loading_total_bytes)\n            progress = (server.loading_loaded_bytes<<10) / server.loading_total_bytes;\n        RedisModuleLoadingProgressV1 fi = {REDISMODULE_LOADING_PROGRESS_VERSION,\n                                     server.hz,\n                                     progress};\n        moduleFireServerEvent(REDISMODULE_EVENT_LOADING_PROGRESS,\n                              is_aof?\n                                REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF:\n                                REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB,\n                              &fi);\n        /* decide when the next event should fire. */\n        next_event = now + 1000000 / server.hz;\n    }\n}\n\n/* When a key is deleted (in dbAsyncDelete/dbSyncDelete/setKey), it\n*  will be called to tell the module which key is about to be released. 
*/\nvoid moduleNotifyKeyUnlink(robj *key, kvobj *kv, int dbid, int flags) {\n    server.allow_access_expired++;\n    server.allow_access_trimmed++;\n    int subevent = REDISMODULE_SUBEVENT_KEY_DELETED;\n    if (flags & DB_FLAG_KEY_EXPIRED) {\n        subevent = REDISMODULE_SUBEVENT_KEY_EXPIRED;\n    } else if (flags & DB_FLAG_KEY_EVICTED) {\n        subevent = REDISMODULE_SUBEVENT_KEY_EVICTED;\n    } else if (flags & DB_FLAG_KEY_OVERWRITE) {\n        subevent = REDISMODULE_SUBEVENT_KEY_OVERWRITTEN;\n    }\n    KeyInfo info = {dbid, key, kv, REDISMODULE_READ};\n    moduleFireServerEvent(REDISMODULE_EVENT_KEY, subevent, &info);\n\n    if (kv->type == OBJ_MODULE) {\n        moduleValue *mv = kv->ptr;\n        moduleType *mt = mv->type;\n        /* We prefer to use the enhanced version. */\n        if (mt->unlink2 != NULL) {\n            RedisModuleKeyOptCtx ctx = {key, NULL, dbid, -1};\n            mt->unlink2(&ctx,mv->value);\n        } else if (mt->unlink != NULL) {\n            mt->unlink(key,mv->value);\n        }\n    }\n    server.allow_access_expired--;\n    server.allow_access_trimmed--;\n}\n\n/* Return the free_effort of the module; it will automatically choose to call\n * `free_effort` or `free_effort2`, and the default return value is 1.\n * A value of 0 means very high effort (always asynchronous freeing). */\nsize_t moduleGetFreeEffort(robj *key, robj *val, int dbid) {\n    moduleValue *mv = val->ptr;\n    moduleType *mt = mv->type;\n    size_t effort = 1;\n    /* We prefer to use the enhanced version. */\n    if (mt->free_effort2 != NULL) {\n        RedisModuleKeyOptCtx ctx = {key, NULL, dbid, -1};\n        effort = mt->free_effort2(&ctx,mv->value);\n    } else if (mt->free_effort != NULL) {\n        effort = mt->free_effort(key,mv->value);\n    }\n\n    return effort;\n}\n\n/* Return the memory usage of the module; it will automatically choose to call\n * `mem_usage` or `mem_usage2`, and the default return value is 0. 
*/\nsize_t moduleGetMemUsage(robj *key, robj *val, size_t sample_size, int dbid) {\n    moduleValue *mv = val->ptr;\n    moduleType *mt = mv->type;\n    size_t size = 0;\n    /* We prefer to use the enhanced version. */\n    if (mt->mem_usage2 != NULL) {\n        RedisModuleKeyOptCtx ctx = {key, NULL, dbid, -1};\n        size = mt->mem_usage2(&ctx, mv->value, sample_size);\n    } else if (mt->mem_usage != NULL) {\n        size = mt->mem_usage(mv->value);\n    }\n\n    return size;\n}\n\n/* --------------------------------------------------------------------------\n * Modules API internals\n * -------------------------------------------------------------------------- */\n\n/* server.moduleapi dictionary type. Only uses plain C strings since\n * this gets queried from modules. */\n\nuint64_t dictCStringKeyHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\nint dictCStringKeyCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    UNUSED(cache);\n    return strcmp(key1,key2) == 0;\n}\n\ndictType moduleAPIDictType = {\n    dictCStringKeyHash,        /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictCStringKeyCompare,     /* key compare */\n    NULL,                      /* key destructor */\n    NULL,                      /* val destructor */\n    NULL                       /* allow to expand */\n};\n\nint moduleRegisterApi(const char *funcname, void *funcptr) {\n    return dictAdd(server.moduleapi, (char*)funcname, funcptr);\n}\n\n#define REGISTER_API(name) \\\n    moduleRegisterApi(\"RedisModule_\" #name, (void *)(unsigned long)RM_ ## name)\n\n/* Global initialization at Redis startup. 
*/\nvoid moduleRegisterCoreAPI(void);\n\n/* Currently, this function is just a placeholder for the module system\n * initialization steps that need to be run after server initialization.\n * A previous issue was that selectDb() in createClient() requires that\n * server.db has been initialized; see #7323. */\nvoid moduleInitModulesSystemLast(void) {\n}\n\n\ndictType sdsKeyValueHashDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    dictSdsDestructor,          /* val destructor */\n    NULL                        /* allow to expand */\n};\n\nvoid moduleInitModulesSystem(void) {\n    moduleUnblockedClients = listCreate();\n    server.loadmodule_queue = listCreate();\n    server.module_configs_queue = dictCreate(&sdsKeyValueHashDictType);\n    server.module_gil_acquring = 0;\n    modules = dictCreate(&modulesDictType);\n    moduleAuthCallbacks = listCreate();\n\n    /* Set up the keyspace notification subscriber list and static client */\n    moduleKeyspaceSubscribers = listCreate();\n\n    modulePostExecUnitJobs = listCreate();\n\n    /* Set up filter list */\n    moduleCommandFilters = listCreate();\n\n    moduleRegisterCoreAPI();\n\n    /* Create a pipe for module threads to be able to wake up the redis main thread.\n     * Make the pipe non-blocking. This is just a best-effort mechanism and we\n     * do not want to block in either the read or the write half.\n     * Enable the close-on-exec flag on the pipes in case of fork-exec system calls\n     * in sentinels or redis servers. */\n    if (anetPipe(server.module_pipe, O_CLOEXEC|O_NONBLOCK, O_CLOEXEC|O_NONBLOCK) == -1) {\n        serverLog(LL_WARNING,\n            \"Can't create the pipe for module threads: %s\", strerror(errno));\n        exit(1);\n    }\n\n    /* Create the timers radix tree. 
*/\n    Timers = raxNew();\n\n    /* Setup the event listeners data structures. */\n    RedisModule_EventListeners = listCreate();\n\n    /* Making sure moduleEventVersions is synced with the number of events. */\n    serverAssert(sizeof(moduleEventVersions)/sizeof(moduleEventVersions[0]) == _REDISMODULE_EVENT_NEXT);\n\n    /* Our thread-safe contexts GIL must start already locked:\n     * it is only unlocked when it's safe. */\n    pthread_mutex_lock(&moduleGIL);\n}\n\nvoid modulesCron(void) {\n    /* Check the number of temporary clients in the pool and free the unused ones\n     * since the last cron. moduleTempClientMinCount tracks the minimum count of\n     * clients in the pool since the last cron. This is the number of clients\n     * that we didn't use for the last cron period. */\n\n    /* Limit the max client count to be freed at once to avoid latency spikes.*/\n    int iteration = 50;\n    /* We free clients if we have more than 8 unused clients. Keeping a small\n     * number of clients avoids client allocation costs if temporary\n     * clients are required after some idle period. 
*/\n    const unsigned int min_client = 8;\n    while (iteration > 0 && moduleTempClientCount > 0 && moduleTempClientMinCount > min_client) {\n        client *c = moduleTempClients[--moduleTempClientCount];\n        freeClient(c);\n        iteration--;\n        moduleTempClientMinCount--;\n    }\n    moduleTempClientMinCount = moduleTempClientCount;\n\n    /* Shrink moduleTempClients array itself if it is wasting some space */\n    if (moduleTempClientCap > 32 && moduleTempClientCap > moduleTempClientCount * 4) {\n        moduleTempClientCap /= 4;\n        moduleTempClients = zrealloc(moduleTempClients,sizeof(client*)*moduleTempClientCap);\n    }\n}\n\nvoid moduleLoadQueueEntryFree(struct moduleLoadQueueEntry *loadmod) {\n    if (!loadmod) return;\n    sdsfree(loadmod->path);\n    for (int i = 0; i < loadmod->argc; i++) {\n        decrRefCount(loadmod->argv[i]);\n    }\n    zfree(loadmod->argv);\n    zfree(loadmod);\n}\n\n/* Remove Module Configs from standardConfig array in config.c */\nvoid moduleRemoveConfigs(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    listRewind(module->module_configs, &li);\n    while ((ln = listNext(&li))) {\n        ModuleConfig *config = listNodeValue(ln);\n        removeConfig(config->name);\n        if (config->alias)\n            removeConfig(config->alias);\n    }\n}\n\n/* Remove ACL categories added by the module when it fails to load. 
*/\nvoid moduleRemoveCateogires(RedisModule *module) {\n    if (module->num_acl_categories_added) {\n        ACLCleanupCategoriesOnFailure(module->num_acl_categories_added);\n    }\n}\n\nint VectorSets_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);\n/* Load internal data types that are bundled as modules */\nvoid moduleLoadInternalModules(void) {\n#ifdef INCLUDE_VEC_SETS\n    int retval = moduleOnLoad((int (*)(void *, void **, int)) VectorSets_OnLoad, NULL, NULL, NULL, 0, 0);\n    serverAssert(retval == C_OK);\n#endif\n}\n\n/* Load all the modules in the server.loadmodule_queue list, which is\n * populated by `loadmodule` directives in the configuration file.\n * We can't load modules directly when processing the configuration file\n * because the server must be fully initialized before loading modules.\n *\n * The function aborts the server on errors, since starting with missing\n * modules is not considered sane: clients may rely on the existence of\n * given commands, loading the AOF also may need some modules to exist, and\n * if this instance is a replica, it must understand commands from the master. 
*/\nvoid moduleLoadFromQueue(void) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(server.loadmodule_queue,&li);\n    while((ln = listNext(&li))) {\n        struct moduleLoadQueueEntry *loadmod = ln->value;\n        if (moduleLoad(loadmod->path,(void **)loadmod->argv,loadmod->argc, 0)\n            == C_ERR)\n        {\n            serverLog(LL_WARNING,\n                \"Can't load module from %s: server aborting\",\n                loadmod->path);\n            exit(1);\n        }\n        moduleLoadQueueEntryFree(loadmod);\n        listDelNode(server.loadmodule_queue, ln);\n    }\n    if (dictSize(server.module_configs_queue)) {\n        serverLog(LL_WARNING, \"Unresolved Configuration(s) Detected:\");\n        dictIterator di;\n        dictEntry *de;\n        dictInitIterator(&di, server.module_configs_queue);\n        while ((de = dictNext(&di)) != NULL) {\n            serverLog(LL_WARNING, \">>> '%s %s'\", redactLogCstr((char *)dictGetKey(de)), redactLogCstr((char *)dictGetVal(de)));\n        }\n        dictResetIterator(&di);\n        serverLog(LL_WARNING, \"Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting\");\n        exit(1);\n    }\n}\n\nvoid moduleFreeModuleStructure(struct RedisModule *module) {\n    listRelease(module->types);\n    listRelease(module->filters);\n    listRelease(module->usedby);\n    listRelease(module->using);\n    listRelease(module->module_configs);\n    sdsfree(module->name);\n    moduleLoadQueueEntryFree(module->loadmod);\n    zfree(module);\n}\n\nvoid moduleFreeArgs(struct redisCommandArg *args, int num_args) {\n    for (int j = 0; j < num_args; j++) {\n        zfree((char *)args[j].name);\n        zfree((char *)args[j].token);\n        zfree((char *)args[j].summary);\n        zfree((char *)args[j].since);\n        zfree((char *)args[j].deprecated_since);\n        zfree((char *)args[j].display_text);\n\n        if (args[j].subargs) {\n            moduleFreeArgs(args[j].subargs, 
args[j].num_args);\n        }\n    }\n    zfree(args);\n}\n\n/* Free the command registered with the specified module.\n * On success C_OK is returned, otherwise C_ERR is returned.\n *\n * Note that caller needs to handle the deletion of the command table dict,\n * and after that needs to free the command->fullname and the command itself.\n */\nint moduleFreeCommand(struct RedisModule *module, struct redisCommand *cmd) {\n    if (cmd->proc != RedisModuleCommandDispatcher)\n        return C_ERR;\n\n    RedisModuleCommand *cp = cmd->module_cmd;\n    if (cp->module != module)\n        return C_ERR;\n\n    /* Free everything except cmd->fullname and cmd itself. */\n    for (int j = 0; j < cmd->key_specs_num; j++) {\n        if (cmd->key_specs[j].notes)\n            zfree((char *)cmd->key_specs[j].notes);\n        if (cmd->key_specs[j].begin_search_type == KSPEC_BS_KEYWORD)\n            zfree((char *)cmd->key_specs[j].bs.keyword.keyword);\n    }\n    zfree(cmd->key_specs);\n    for (int j = 0; cmd->tips && cmd->tips[j]; j++)\n        zfree((char *)cmd->tips[j]);\n    zfree(cmd->tips);\n    for (int j = 0; cmd->history && cmd->history[j].since; j++) {\n        zfree((char *)cmd->history[j].since);\n        zfree((char *)cmd->history[j].changes);\n    }\n    zfree(cmd->history);\n    zfree((char *)cmd->summary);\n    zfree((char *)cmd->since);\n    zfree((char *)cmd->deprecated_since);\n    zfree((char *)cmd->complexity);\n    if (cmd->latency_histogram) {\n        hdr_close(cmd->latency_histogram);\n        cmd->latency_histogram = NULL;\n    }\n    moduleFreeArgs(cmd->args, cmd->num_args);\n    zfree(cp);\n\n    if (cmd->subcommands_dict) {\n        dictEntry *de;\n        dictIterator di;\n        dictInitSafeIterator(&di, cmd->subcommands_dict);\n        while ((de = dictNext(&di)) != NULL) {\n            struct redisCommand *sub = dictGetVal(de);\n            if (moduleFreeCommand(module, sub) != C_OK) continue;\n\n            
serverAssert(dictDelete(cmd->subcommands_dict, sub->declared_name) == DICT_OK);\n            sdsfree((sds)sub->declared_name);\n            sdsfree(sub->fullname);\n            zfree(sub);\n        }\n        dictResetIterator(&di);\n        dictRelease(cmd->subcommands_dict);\n    }\n\n    return C_OK;\n}\n\nvoid moduleUnregisterCommands(struct RedisModule *module) {\n    pauseAllIOThreads();\n    /* Unregister all the commands registered by this module. */\n    dictIterator di;\n    dictEntry *de;\n    dictInitSafeIterator(&di, server.commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (moduleFreeCommand(module, cmd) != C_OK) continue;\n\n        serverAssert(dictDelete(server.commands, cmd->fullname) == DICT_OK);\n        serverAssert(dictDelete(server.orig_commands, cmd->fullname) == DICT_OK);\n        sdsfree((sds)cmd->declared_name);\n        sdsfree(cmd->fullname);\n        zfree(cmd);\n    }\n    dictResetIterator(&di);\n    resumeAllIOThreads();\n}\n\n/* We parse argv to add sds \"NAME VALUE\" pairs to the server.module_configs_queue list of configs.\n * We also increment the module_argv pointer to just after ARGS if there are args, otherwise\n * we set it to NULL */\nint parseLoadexArguments(RedisModuleString ***module_argv, int *module_argc) {\n    int args_specified = 0;\n    RedisModuleString **argv = *module_argv;\n    int argc = *module_argc;\n    for (int i = 0; i < argc; i++) {\n        char *arg_val = argv[i]->ptr;\n        if (!strcasecmp(arg_val, \"CONFIG\")) {\n            if (i + 2 >= argc) {\n                serverLog(LL_NOTICE, \"CONFIG specified without name value pair\");\n                return REDISMODULE_ERR;\n            }\n            sds name = sdsdup(argv[i + 1]->ptr);\n            sds value = sdsdup(argv[i + 2]->ptr);\n            if (!dictReplace(server.module_configs_queue, name, value)) sdsfree(name);\n            i += 2;\n        } else if (!strcasecmp(arg_val, 
\"ARGS\")) {\n            args_specified = 1;\n            i++;\n            if (i >= argc) {\n                *module_argv = NULL;\n                *module_argc = 0;\n            } else {\n                *module_argv = argv + i;\n                *module_argc = argc - i;\n            }\n            break;\n        } else {\n            serverLog(LL_NOTICE, \"Syntax Error from arguments to loadex around %s.\", redactLogCstr(arg_val));\n            return REDISMODULE_ERR;\n        }\n    }\n    if (!args_specified) {\n        *module_argv = NULL;\n        *module_argc = 0;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Unregister module-related things, called when moduleLoad fails or moduleUnload. */\nvoid moduleUnregisterCleanup(RedisModule *module) {\n    moduleFreeAuthenticatedClients(module);\n    moduleUnregisterCommands(module);\n    moduleUnsubscribeNotifications(module);\n    moduleUnregisterSharedAPI(module);\n    moduleUnregisterUsedAPI(module);\n    moduleUnregisterFilters(module);\n    moduleUnsubscribeAllServerEvents(module);\n    moduleRemoveConfigs(module);\n    moduleUnregisterAuthCBs(module);\n}\n\n/* Load a module by path and initialize it. On success C_OK is returned, otherwise\n * C_ERR is returned. 
*/\nint moduleLoad(const char *path, void **module_argv, int module_argc, int is_loadex) {\n    int (*onload)(void *, void **, int);\n    void *handle;\n\n    struct stat st;\n    if (stat(path, &st) == 0) {\n        /* This check is best effort */\n        if (!(st.st_mode & (S_IXUSR  | S_IXGRP | S_IXOTH))) {\n            serverLog(LL_WARNING, \"Module %s failed to load: It does not have execute permissions.\", path);\n            return C_ERR;\n        }\n    }\n\n    handle = dlopen(path,RTLD_NOW|RTLD_LOCAL);\n    if (handle == NULL) {\n        serverLog(LL_WARNING, \"Module %s failed to load: %s\", path, dlerror());\n        return C_ERR;\n    }\n    onload = (int (*)(void *, void **, int))(unsigned long) dlsym(handle,\"RedisModule_OnLoad\");\n    if (onload == NULL) {\n        dlclose(handle);\n        serverLog(LL_WARNING,\n            \"Module %s does not export RedisModule_OnLoad() \"\n            \"symbol. Module not loaded.\",path);\n        return C_ERR;\n    }\n\n    return moduleOnLoad(onload, path, handle, module_argv, module_argc, is_loadex);\n}\n\n/* Load a module by its 'onload' callback and initialize it. On success C_OK is returned, otherwise\n * C_ERR is returned. */\nint moduleOnLoad(int (*onload)(void *, void **, int), const char *path, void *handle, void **module_argv, int module_argc, int is_loadex) {\n    RedisModuleCtx ctx;\n    moduleCreateContext(&ctx, NULL, REDISMODULE_CTX_TEMP_CLIENT); /* We pass NULL since we don't have a module yet. */\n    if (onload((void*)&ctx,module_argv,module_argc) == REDISMODULE_ERR) {\n        serverLog(LL_WARNING,\n            \"Module %s initialization failed. Module not loaded\",path);\n        if (ctx.module) {\n            moduleUnregisterCleanup(ctx.module);\n            moduleRemoveCateogires(ctx.module);\n            moduleFreeModuleStructure(ctx.module);\n        }\n        moduleFreeContext(&ctx);\n        if (handle) dlclose(handle);\n        return C_ERR;\n    }\n\n    /* Redis module loaded! 
Register it. */\n    dictAdd(modules,ctx.module->name,ctx.module);\n    ctx.module->blocked_clients = 0;\n    ctx.module->handle = handle;\n    ctx.module->loadmod = zmalloc(sizeof(struct moduleLoadQueueEntry));\n    ctx.module->loadmod->path = sdsnew(path);\n    ctx.module->loadmod->argv = module_argc ? zmalloc(sizeof(robj*)*module_argc) : NULL;\n    ctx.module->loadmod->argc = module_argc;\n    for (int i = 0; i < module_argc; i++) {\n        ctx.module->loadmod->argv[i] = module_argv[i];\n        incrRefCount(ctx.module->loadmod->argv[i]);\n    }\n\n    /* If module commands have ACL categories, recompute command bits\n     * for all existing users once the module has been registered. */\n    if (ctx.module->num_commands_with_acl_categories) {\n        ACLRecomputeCommandBitsFromCommandRulesAllUsers();\n    }\n    if (path) serverLog(LL_NOTICE,\"Module '%s' loaded from %s\",ctx.module->name,path);\n    ctx.module->onload = 0;\n\n    int post_load_err = 0;\n    if (listLength(ctx.module->module_configs) && !(ctx.module->configs_initialized & MODULE_CONFIGS_USER_VALS)) {\n        serverLogRaw(LL_WARNING, \"Module Configurations were not set, missing LoadConfigs call. Unloading the module.\");\n        post_load_err = 1;\n    }\n\n    if (is_loadex && dictSize(server.module_configs_queue)) {\n        serverLogRaw(LL_WARNING, \"Loadex configurations were not applied, likely due to invalid arguments. Unloading the module.\");\n        post_load_err = 1;\n    }\n\n    if (post_load_err) {\n        serverAssert(moduleUnload(ctx.module->name, NULL, 1) == C_OK);\n        moduleFreeContext(&ctx);\n        return C_ERR;\n    }\n\n    /* Fire the loaded modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_MODULE_CHANGE,\n                          REDISMODULE_SUBEVENT_MODULE_LOADED,\n                          ctx.module);\n\n    moduleFreeContext(&ctx);\n    return C_OK;\n}\n\n/* Unload the module registered with the specified name. 
On success\n * C_OK is returned, otherwise C_ERR is returned and errmsg is set\n * with an appropriate message.\n * Only forcefully unload this module, passing forced_unload != 0,\n * if it is certain that it has not yet been in use (e.g., immediate\n * unload on failed load). */\nint moduleUnload(sds name, const char **errmsg, int forced_unload) {\n    struct RedisModule *module = dictFetchValue(modules,name);\n\n    if (module == NULL) {\n        *errmsg = \"no such module with that name\";\n        return C_ERR;\n    } else if (sdslen(module->loadmod->path) == 0) {\n        *errmsg = \"the module can't be unloaded\";\n        return C_ERR;\n    } else if (listLength(module->types) && !forced_unload) {\n        *errmsg = \"the module exports one or more module-side data \"\n                  \"types, can't unload\";\n        return C_ERR;\n    } else if (listLength(module->usedby)) {\n        *errmsg = \"the module exports APIs used by other modules. \"\n                  \"Please unload them first and try again\";\n        return C_ERR;\n    } else if (module->blocked_clients) {\n        *errmsg = \"the module has blocked clients. \"\n                  \"Please wait for them to be unblocked and try again\";\n        return C_ERR;\n    } else if (moduleHoldsTimer(module)) {\n        *errmsg = \"the module holds a timer that has not fired. \"\n                  \"Please stop the timer or wait until it fires.\";\n        return C_ERR;\n    }\n\n    /* Give the module a chance to clean up. */\n    int (*onunload)(void *);\n    onunload = (int (*)(void *))(unsigned long) dlsym(module->handle, \"RedisModule_OnUnload\");\n    if (onunload) {\n        RedisModuleCtx ctx;\n        moduleCreateContext(&ctx, module, REDISMODULE_CTX_TEMP_CLIENT);\n        int unload_status = onunload((void*)&ctx);\n        moduleFreeContext(&ctx);\n\n        if (unload_status == REDISMODULE_ERR) {\n            serverLog(LL_WARNING, \"Module %s OnUnload failed. 
Unload canceled.\", name);\n            errno = ECANCELED;\n            return C_ERR;\n        }\n    }\n\n    moduleUnregisterCleanup(module);\n\n    /* Unload the dynamic library. */\n    if (dlclose(module->handle) == -1) {\n        char *error = dlerror();\n        if (error == NULL) error = \"Unknown error\";\n        serverLog(LL_WARNING,\"Error when trying to close the %s module: %s\",\n            module->name, error);\n    }\n\n    /* Fire the unloaded modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_MODULE_CHANGE,\n                          REDISMODULE_SUBEVENT_MODULE_UNLOADED,\n                          module);\n\n    /* Remove from list of modules. */\n    serverLog(LL_NOTICE,\"Module %s unloaded\",module->name);\n    dictDelete(modules,module->name);\n    module->name = NULL; /* The name was already freed by dictDelete(). */\n    moduleFreeModuleStructure(module);\n\n    /* Recompute command bits for all users once the module has been completely unloaded. */\n    ACLRecomputeCommandBitsFromCommandRulesAllUsers();\n    return C_OK;\n}\n\nvoid modulePipeReadable(aeEventLoop *el, int fd, void *privdata, int mask) {\n    UNUSED(el);\n    UNUSED(fd);\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    char buf[128];\n    while (read(fd, buf, sizeof(buf)) == sizeof(buf));\n\n    /* Handle event loop events if pipe was written from event loop API */\n    eventLoopHandleOneShotEvents();\n}\n\n/* Helper function for the MODULE and HELLO command: send the list of the\n * loaded modules to the client. 
*/\nvoid addReplyLoadedModules(client *c) {\n    const long ln = dictSize(modules);\n    /* In case no module is loaded we avoid creating the iterator */\n    addReplyArrayLen(c,ln);\n    if (ln == 0) {\n        return;\n    }\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        sds name = dictGetKey(de);\n        struct RedisModule *module = dictGetVal(de);\n        sds path = module->loadmod->path;\n        addReplyMapLen(c,4);\n        addReplyBulkCString(c,\"name\");\n        addReplyBulkCBuffer(c,name,sdslen(name));\n        addReplyBulkCString(c,\"ver\");\n        addReplyLongLong(c,module->ver);\n        addReplyBulkCString(c,\"path\");\n        addReplyBulkCBuffer(c,path,sdslen(path));\n        addReplyBulkCString(c,\"args\");\n        addReplyArrayLen(c,module->loadmod->argc);\n        for (int i = 0; i < module->loadmod->argc; i++) {\n            addReplyBulk(c,module->loadmod->argv[i]);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* Helper for genModulesInfoString(): given a list of modules, return\n * an SDS string in the form \"[modulename|modulename2|...]\" */\nsds genModulesInfoStringRenderModulesList(list *l) {\n    listIter li;\n    listNode *ln;\n    listRewind(l,&li);\n    sds output = sdsnew(\"[\");\n    while((ln = listNext(&li))) {\n        RedisModule *module = ln->value;\n        output = sdscat(output,module->name);\n        if (ln != listLast(l))\n            output = sdscat(output,\"|\");\n    }\n    output = sdscat(output,\"]\");\n    return output;\n}\n\n/* Helper for genModulesInfoString(): render module options as an SDS string. 
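For example, a module\n * registered with the HANDLE_IO_ERRORS and HANDLE_REPL_ASYNC_LOAD options\n * renders as \"[handle-io-errors|handle-repl-async-load]\", and a module with\n * no options renders as \"[]\".\n 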
*/\nsds genModulesInfoStringRenderModuleOptions(struct RedisModule *module) {\n    sds output = sdsnew(\"[\");\n    if (module->options & REDISMODULE_OPTIONS_HANDLE_IO_ERRORS)\n        output = sdscat(output,\"handle-io-errors|\");\n    if (module->options & REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD)\n        output = sdscat(output,\"handle-repl-async-load|\");\n    if (module->options & REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED)\n        output = sdscat(output,\"no-implicit-signal-modified|\");\n    output = sdstrim(output,\"|\");\n    output = sdscat(output,\"]\");\n    return output;\n}\n\n\n/* Helper function for the INFO command: adds the loaded modules to the info\n * output.\n *\n * After the call, the passed sds info string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nsds genModulesInfoString(sds info) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, modules);\n    while ((de = dictNext(&di)) != NULL) {\n        sds name = dictGetKey(de);\n        struct RedisModule *module = dictGetVal(de);\n\n        sds usedby = genModulesInfoStringRenderModulesList(module->usedby);\n        sds using = genModulesInfoStringRenderModulesList(module->using);\n        sds options = genModulesInfoStringRenderModuleOptions(module);\n        info = sdscatfmt(info,\n            \"module:name=%S,ver=%i,api=%i,filters=%i,\"\n            \"usedby=%S,using=%S,options=%S\\r\\n\",\n                name, module->ver, module->apiver,\n                (int)listLength(module->filters), usedby, using, options);\n        sdsfree(usedby);\n        sdsfree(using);\n        sdsfree(options);\n    }\n    dictResetIterator(&di);\n    return info;\n}\n\n/* --------------------------------------------------------------------------\n * Module Configurations API internals\n * -------------------------------------------------------------------------- */\n\n/* Check if the configuration name is already 
registered */\nint isModuleConfigNameRegistered(RedisModule *module, const char *name) {\n    listNode *match = listSearchKey(module->module_configs, (void *) name);\n    return match != NULL;\n}\n\n/* Assert that the flags passed into the RM_RegisterConfig Suite are valid */\nint moduleVerifyConfigFlags(unsigned int flags, configType type) {\n    if ((flags & ~(REDISMODULE_CONFIG_DEFAULT\n                    | REDISMODULE_CONFIG_IMMUTABLE\n                    | REDISMODULE_CONFIG_SENSITIVE\n                    | REDISMODULE_CONFIG_HIDDEN\n                    | REDISMODULE_CONFIG_PROTECTED\n                    | REDISMODULE_CONFIG_DENY_LOADING\n                    | REDISMODULE_CONFIG_BITFLAGS\n                    | REDISMODULE_CONFIG_MEMORY\n                    | REDISMODULE_CONFIG_UNPREFIXED))) {\n        serverLogRaw(LL_WARNING, \"Invalid flag(s) for configuration\");\n        return REDISMODULE_ERR;\n    }\n    if (type != NUMERIC_CONFIG && flags & REDISMODULE_CONFIG_MEMORY) {\n        serverLogRaw(LL_WARNING, \"Numeric flag provided for non-numeric configuration.\");\n        return REDISMODULE_ERR;\n    }\n    if (type != ENUM_CONFIG && flags & REDISMODULE_CONFIG_BITFLAGS) {\n        serverLogRaw(LL_WARNING, \"Enum flag provided for non-enum configuration.\");\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Verify a module resource or name has only alphanumeric characters, underscores\n * or dashes. 
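For example, \"graph_1-meta\" is accepted, while\n * \"graph.meta\" is rejected because '.' is not an allowed character.\n 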
*/\nint moduleVerifyResourceName(const char *name) {\n    if (name[0] == '\\0') {\n        return REDISMODULE_ERR;\n    }\n\n    for (size_t i = 0; name[i] != '\\0'; i++) {\n        char curr_char = name[i];\n        if ((curr_char >= 'a' && curr_char <= 'z') ||\n            (curr_char >= 'A' && curr_char <= 'Z') ||\n            (curr_char >= '0' && curr_char <= '9') ||\n            (curr_char == '_') || (curr_char == '-'))\n        {\n            continue;\n        }\n        serverLog(LL_WARNING, \"Invalid character %c in Module resource name %s.\", curr_char, name);\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\n/* Verify that an unprefixed config name is either a single \"<name>\" or of the\n * form \"<name>|<alias>\". Unlike moduleVerifyResourceName(), an unprefixed\n * config name allows a single dot in the name or alias.\n *\n * delim - Updated to point to \"|\" if it exists, NULL otherwise.\n */\nint moduleVerifyUnprefixedName(const char *nameAlias, const char **delim) {\n    if (nameAlias[0] == '\\0')\n        return REDISMODULE_ERR;\n\n    *delim = NULL;\n    int dot_count = 0, lname = 0;\n\n    for (size_t i = 0; nameAlias[i] != '\\0'; i++) {\n        char ch = nameAlias[i];\n        \n        if (((*delim) == NULL) && (ch == '|')) {\n            /* Handle single separator between name and alias */\n            if (!lname) {\n                serverLog(LL_WARNING, \"Module configuration name is empty: %s\", nameAlias);\n                return REDISMODULE_ERR;\n            }\n            *delim = &nameAlias[i];\n            dot_count = lname = 0;\n        } else if ( (ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z') ||\n                    (ch >= '0' && ch <= '9') || (ch == '_') || (ch == '-') )\n        {\n            ++lname;\n        } else if (ch == '.') {\n            /* Allow only one dot per section (name or alias) */\n            if (++dot_count > 1) { \n                serverLog(LL_WARNING, \"Invalid character sequence in 
Module configuration name or alias: %s\", nameAlias);\n                return REDISMODULE_ERR;\n            }\n        } else {\n            serverLog(LL_WARNING, \"Invalid character %c in Module configuration name or alias %s.\", ch, nameAlias);\n            return REDISMODULE_ERR;\n        }\n    }\n    \n    if (!lname) {\n        serverLog(LL_WARNING, \"Module configuration name or alias is empty : %s\", nameAlias);\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* This is a series of set functions for each type that act as dispatchers for \n * config.c to call module set callbacks. */\n#define CONFIG_ERR_SIZE 256\nstatic char configerr[CONFIG_ERR_SIZE];\nstatic void propagateErrorString(RedisModuleString *err_in, const char **err) {\n    if (err_in) {\n        redis_strlcpy(configerr, err_in->ptr, CONFIG_ERR_SIZE);\n        decrRefCount(err_in);\n        *err = configerr;\n    }\n}\n\n/* If configuration was originally registered with indication to prefix the name, \n * return the name without the prefix by skipping prefix \"<MODULE-NAME>.\". \n * Otherwise, return the stored name as is. */\nstatic char *getRegisteredConfigName(ModuleConfig *config) {\n    if (config->unprefixedFlag)\n        return config->name;\n\n    /* For prefixed configuration, find the '.' indicating the end of the prefix */\n    char *endOfPrefix = strchr(config->name, '.');\n    serverAssert(endOfPrefix != NULL);    \n    return endOfPrefix + 1;\n}\n\nint setModuleBoolConfig(ModuleConfig *config, int val, const char **err) {\n    RedisModuleString *error = NULL;\n\n    char *rname = getRegisteredConfigName(config);\n    int return_code = config->set_fn.set_bool(rname, val, config->privdata, &error);\n    propagateErrorString(error, err);\n    return return_code == REDISMODULE_OK ? 
1 : 0;\n}\n\nint setModuleStringConfig(ModuleConfig *config, sds strval, const char **err) {\n    RedisModuleString *error = NULL;\n    RedisModuleString *new = createStringObject(strval, sdslen(strval));\n    \n    char *rname = getRegisteredConfigName(config);\n    int return_code = config->set_fn.set_string(rname, new, config->privdata, &error);\n    propagateErrorString(error, err);\n    decrRefCount(new);\n    return return_code == REDISMODULE_OK ? 1 : 0;\n}\n\nint setModuleEnumConfig(ModuleConfig *config, int val, const char **err) {\n    RedisModuleString *error = NULL;\n    char *rname = getRegisteredConfigName(config);\n    int return_code = config->set_fn.set_enum(rname, val, config->privdata, &error);\n    propagateErrorString(error, err);\n    return return_code == REDISMODULE_OK ? 1 : 0;\n}\n\nint setModuleNumericConfig(ModuleConfig *config, long long val, const char **err) {\n    RedisModuleString *error = NULL;\n    char *rname = getRegisteredConfigName(config);\n    int return_code = config->set_fn.set_numeric(rname, val, config->privdata, &error);\n    propagateErrorString(error, err);\n    return return_code == REDISMODULE_OK ? 1 : 0;\n}\n\n/* This is a series of get functions for each type that act as dispatchers for \n * config.c to call module set callbacks. */\nint getModuleBoolConfig(ModuleConfig *module_config) {\n    char *rname = getRegisteredConfigName(module_config);\n    return module_config->get_fn.get_bool(rname, module_config->privdata);\n}\n\nsds getModuleStringConfig(ModuleConfig *module_config) {\n    char *rname = getRegisteredConfigName(module_config);\n    RedisModuleString *val = module_config->get_fn.get_string(rname, module_config->privdata);\n    return val ? 
sdsdup(val->ptr) : NULL;\n}\n\nint getModuleEnumConfig(ModuleConfig *module_config) {\n    char *rname = getRegisteredConfigName(module_config);\n    return module_config->get_fn.get_enum(rname, module_config->privdata);\n}\n\nlong long getModuleNumericConfig(ModuleConfig *module_config) {\n    char *rname = getRegisteredConfigName(module_config);\n    return module_config->get_fn.get_numeric(rname, module_config->privdata);\n}\n\nint loadModuleDefaultConfigs(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    const char *err = NULL;\n    listRewind(module->module_configs, &li);\n    while ((ln = listNext(&li))) {\n        ModuleConfig *module_config = listNodeValue(ln);\n        if (!performModuleConfigSetDefaultFromName(module_config->name, &err)) {\n            serverLog(LL_WARNING, \"Issue attempting to set default value of configuration %s : %s\", module_config->name, err);\n            return REDISMODULE_ERR;\n        }\n    }\n    module->configs_initialized |= MODULE_CONFIGS_DEFAULTS;\n    return REDISMODULE_OK;\n}\n\n/* Set each of the module's registered configs from the pending sds NAME VALUE\n * pairs queued in server.module_configs_queue, falling back to the default\n * value when a config is not found in the queue. */\nint loadModuleConfigs(RedisModule *module) {\n    listIter li;\n    listNode *ln;\n    const char *err = NULL;\n    listRewind(module->module_configs, &li);\n    const int set_default_if_missing = !(module->configs_initialized & MODULE_CONFIGS_DEFAULTS);\n    while ((ln = listNext(&li))) {\n        ModuleConfig *module_config = listNodeValue(ln);\n        dictEntry *de = dictUnlink(server.module_configs_queue, module_config->name);\n        if ((!de) && (module_config->alias))\n            de = dictUnlink(server.module_configs_queue, module_config->alias);\n\n        /* If found in the queue, set the value. Otherwise, set the default value. 
*/\n        if (de) {\n            if (!performModuleConfigSetFromName(dictGetKey(de), dictGetVal(de), &err)) {\n                serverLog(LL_WARNING, \"Issue during loading of configuration %s : %s\", redactLogCstr((char *)dictGetKey(de)), err);\n                dictFreeUnlinkedEntry(server.module_configs_queue, de);\n                dictEmpty(server.module_configs_queue, NULL);\n                return REDISMODULE_ERR;\n            }\n            dictFreeUnlinkedEntry(server.module_configs_queue, de);\n        } else if (set_default_if_missing) {\n            if (!performModuleConfigSetDefaultFromName(module_config->name, &err)) {\n                serverLog(LL_WARNING, \"Issue attempting to set default value of configuration %s : %s\", module_config->name, err);\n                dictEmpty(server.module_configs_queue, NULL);\n                return REDISMODULE_ERR;\n            }\n        }\n    }\n    module->configs_initialized = MODULE_CONFIGS_ALL_APPLIED;\n    return REDISMODULE_OK;\n}\n\n/* Add module_config to the list if the apply and privdata do not match one already in it. */\nvoid addModuleConfigApply(list *module_configs, ModuleConfig *module_config) {\n    if (!module_config->apply_fn) return;\n    listIter li;\n    listNode *ln;\n    ModuleConfig *pending_apply;\n    listRewind(module_configs, &li);\n    while ((ln = listNext(&li))) {\n        pending_apply = listNodeValue(ln);\n        if (pending_apply->apply_fn == module_config->apply_fn && pending_apply->privdata == module_config->privdata) {\n            return;\n        }\n    }\n    listAddNodeTail(module_configs, module_config);\n}\n\n/* Call apply on a module config. Assumes module_config->apply_fn != NULL! 
*/\nint moduleConfigApplyInternal(ModuleConfig *module_config, const char **err) {\n    RedisModuleCtx ctx;\n    RedisModuleString *error = NULL;\n\n    moduleCreateContext(&ctx, module_config->module, REDISMODULE_CTX_NONE);\n    if (module_config->apply_fn(&ctx, module_config->privdata, &error)) {\n        propagateErrorString(error, err);\n        moduleFreeContext(&ctx);\n        return 0;\n    }\n    moduleFreeContext(&ctx);\n    return 1;\n}\n\n/* Call apply on a single module config. */\nint moduleConfigApply(ModuleConfig *module_config, const char **err) {\n    if (module_config->apply_fn == NULL) return 1;\n    return moduleConfigApplyInternal(module_config, err);\n}\n\nint moduleConfigNeedsApply(ModuleConfig *config) {\n    return config->apply_fn != NULL;\n}\n\n/* Call apply on all module configs specified in set, if an apply function was specified at registration time. */\nint moduleConfigApplyConfig(list *module_configs, const char **err, const char **err_arg_name) {\n    if (!listLength(module_configs)) return 1;\n    listIter li;\n    listNode *ln;\n    ModuleConfig *module_config;\n\n    listRewind(module_configs, &li);\n    while ((ln = listNext(&li))) {\n        module_config = listNodeValue(ln);\n        /* We know apply_fn is not NULL so skip the check */\n        if (!moduleConfigApplyInternal(module_config, err)) {\n            if (err_arg_name) *err_arg_name = module_config->name;\n            return 0;\n        }\n    }\n    return 1;\n}\n\n/* --------------------------------------------------------------------------\n * ## Module Configurations API\n * -------------------------------------------------------------------------- */\n\n/* Resolve config name and create a module config object */\nModuleConfig *createModuleConfig(const char *name, RedisModuleConfigApplyFunc apply_fn, \n                                 void *privdata, RedisModule *module, unsigned int flags) \n{\n    sds cname, alias = NULL;\n\n    /* Determine the configuration 
name:\n     * - If the unprefixed flag is set, the \"<MODULE-NAME>.\" prefix is omitted.\n     * - An optional alias can be specified using \"<NAME>|<ALIAS>\".\n     * \n     * Examples:\n     *   - Unprefixed: \"bf.initial_size\" or \"bf-initial-size|bf.initial_size\".\n     *   - Prefixed:   \"initial_size\" becomes \"<MODULE-NAME>.initial_size\".\n     */    \n    if (flags & REDISMODULE_CONFIG_UNPREFIXED) {\n        const char *delim = strchr(name, '|');\n        cname = sdsnew(name);\n        if (delim) { /* Handle \"<NAME>|<ALIAS>\" format */\n            sdssubstr(cname, 0, delim - name);\n            alias = sdsnew(delim + 1);\n        }\n    } else {\n        /* Add the module name prefix */\n        cname = sdscatfmt(sdsempty(), \"%s.%s\", module->name, name);\n    }\n    \n    ModuleConfig *new_config = zmalloc(sizeof(ModuleConfig));\n    new_config->unprefixedFlag = flags & REDISMODULE_CONFIG_UNPREFIXED;\n    new_config->name = cname;\n    new_config->alias = alias;\n    new_config->apply_fn = apply_fn;\n    new_config->privdata = privdata;\n    new_config->module = module;\n    return new_config;\n}\n\n/* Verify the configuration name and check for duplicates.\n * \n * - If the configuration is flagged as unprefixed, it checks for duplicate\n *   names and optional aliases in the format <NAME>|<ALIAS>.\n * - If the configuration is prefixed, it ensures the name is unique with\n *   the module name prepended (<MODULE_NAME>.<NAME>).\n */\nint moduleConfigValidityCheck(RedisModule *module, const char *name, unsigned int flags, configType type) {\n    if (!module->onload) {\n        errno = EBUSY;\n        return REDISMODULE_ERR;\n    }\n    if (moduleVerifyConfigFlags(flags, type)) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n    \n    int isdup = 0;    \n    if (flags & REDISMODULE_CONFIG_UNPREFIXED) {\n        const char *delim = NULL; /* Pointer to the '|' delimiter in <NAME>|<ALIAS> */\n        if 
(moduleVerifyUnprefixedName(name, &delim)){\n            errno = EINVAL;\n            return REDISMODULE_ERR;\n        }\n        \n        if (delim) { \n            /* Temporary split the \"<NAME>|<ALIAS>\" for the check */\n            int count;\n            sds *ar = sdssplitlen(name, strlen(name), \"|\", 1, &count);\n            serverAssert(count == 2); /* Already validated */\n            isdup = configExists(ar[0]) || \n                    configExists(ar[1]) || \n                    (sdscmp(ar[0], ar[1]) == 0);\n            sdsfreesplitres(ar, count);\n        } else {\n            sds _name = sdsnew(name);\n            isdup = configExists(_name);\n            sdsfree(_name);\n        }\n    } else {\n        if (moduleVerifyResourceName(name)) {\n            errno = EINVAL;\n            return REDISMODULE_ERR;\n        }\n\n        sds fullname = sdscatfmt(sdsempty(), \"%s.%s\", module->name, name);\n        isdup = configExists(fullname);\n        sdsfree(fullname);\n    }\n    \n    if (isdup) {\n        serverLog(LL_WARNING, \"Configuration by the name: %s already registered\", name);\n        errno = EALREADY;\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\nunsigned int maskModuleConfigFlags(unsigned int flags) {\n    unsigned int new_flags = 0;\n    if (flags & REDISMODULE_CONFIG_DEFAULT) new_flags |= MODIFIABLE_CONFIG;\n    if (flags & REDISMODULE_CONFIG_IMMUTABLE) new_flags |= IMMUTABLE_CONFIG;\n    if (flags & REDISMODULE_CONFIG_HIDDEN) new_flags |= HIDDEN_CONFIG;\n    if (flags & REDISMODULE_CONFIG_PROTECTED) new_flags |= PROTECTED_CONFIG;\n    if (flags & REDISMODULE_CONFIG_DENY_LOADING) new_flags |= DENY_LOADING_CONFIG;\n    return new_flags;\n}\n\nunsigned int maskModuleNumericConfigFlags(unsigned int flags) {\n    unsigned int new_flags = 0;\n    if (flags & REDISMODULE_CONFIG_MEMORY) new_flags |= MEMORY_CONFIG;\n    return new_flags;\n}\n\nunsigned int maskModuleEnumConfigFlags(unsigned int flags) {\n    unsigned 
int new_flags = 0;\n    if (flags & REDISMODULE_CONFIG_BITFLAGS) new_flags |= MULTI_ARG_CONFIG;\n    return new_flags;\n}\n\n/* Create a string config that Redis users can interact with via the Redis config file,\n * `CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands.\n *\n * The actual config value is owned by the module, and the `getfn`, `setfn` and optional\n * `applyfn` callbacks that are provided to Redis in order to access or manipulate the\n * value. The `getfn` callback retrieves the value from the module, while the `setfn`\n * callback provides a value to be stored into the module config.\n * The optional `applyfn` callback is called after a `CONFIG SET` command modified one or\n * more configs using the `setfn` callback and can be used to atomically apply a config\n * after several configs were changed together.\n * If there are multiple configs with `applyfn` callbacks set by a single `CONFIG SET`\n * command, they will be deduplicated if their `applyfn` function and `privdata` pointers\n * are identical, and the callback will only be run once.\n * Both the `setfn` and `applyfn` can return an error if the provided value is invalid or\n * cannot be used.\n * The config also declares a type for the value that is validated by Redis and\n * provided to the module. The config system provides the following types:\n *\n * * Redis String: Binary safe string data.\n * * Enum: One of a finite number of string tokens, provided during registration.\n * * Numeric: 64 bit signed integer, which also supports min and max values.\n * * Bool: Yes or no value.\n *\n * The `setfn` callback is expected to return REDISMODULE_OK when the value is successfully\n * applied. 
It can also return REDISMODULE_ERR if the value can't be applied, and the\n * *err pointer can be set with a RedisModuleString error message to provide to the client.\n * This RedisModuleString will be freed by redis after returning from the set callback.\n *\n * All configs are registered with a name, a type, a default value, private data that is made\n * available in the callbacks, as well as several flags that modify the behavior of the config.\n * The name must only contain alphanumeric characters, underscores or dashes. The supported flags are:\n *\n * * REDISMODULE_CONFIG_DEFAULT: The default flags for a config. This creates a config that can be modified after startup.\n * * REDISMODULE_CONFIG_IMMUTABLE: This config can only be provided at load time.\n * * REDISMODULE_CONFIG_SENSITIVE: The value stored in this config is redacted from all logging.\n * * REDISMODULE_CONFIG_HIDDEN: The name is hidden from `CONFIG GET` with pattern matching.\n * * REDISMODULE_CONFIG_PROTECTED: This config will only be modifiable based on the value of enable-protected-configs.\n * * REDISMODULE_CONFIG_DENY_LOADING: This config is not modifiable while the server is loading data.\n * * REDISMODULE_CONFIG_MEMORY: For numeric configs, this config will convert data unit notations into their byte equivalent.\n * * REDISMODULE_CONFIG_BITFLAGS: For enum configs, this config will allow multiple entries to be combined as bit flags.\n *\n * Default values are used on startup to set the value if it is not provided via the config file\n * or command line. Default values are also used to compare to on a config rewrite.\n *\n * Notes:\n *\n *  1. On string config sets, the string passed to the set callback will be freed after execution and the module must retain it.\n *  2. 
On string config gets the string will not be consumed and will be valid after execution.\n *\n * Example implementation:\n *\n *     RedisModuleString *strval;\n *     int adjustable = 1;\n *     RedisModuleString *getStringConfigCommand(const char *name, void *privdata) {\n *         return strval;\n *     }\n *\n *     int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {\n *        if (adjustable) {\n *            RedisModule_Free(strval);\n *            RedisModule_RetainString(NULL, new);\n *            strval = new;\n *            return REDISMODULE_OK;\n *        }\n *        *err = RedisModule_CreateString(NULL, \"Not adjustable.\", 15);\n *        return REDISMODULE_ERR;\n *     }\n *     ...\n *     RedisModule_RegisterStringConfig(ctx, \"string\", NULL, REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL);\n *\n * If the registration fails, REDISMODULE_ERR is returned and one of the following\n * errno is set:\n * * EBUSY: Registering the Config outside of RedisModule_OnLoad.\n * * EINVAL: The provided flags are invalid for the registration or the name of the config contains invalid characters.\n * * EALREADY: The provided configuration name is already used. 
*/\nint RM_RegisterStringConfig(RedisModuleCtx *ctx, const char *name, const char *default_val, unsigned int flags, RedisModuleConfigGetStringFunc getfn, RedisModuleConfigSetStringFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) {\n    RedisModule *module = ctx->module;\n    if (moduleConfigValidityCheck(module, name, flags, STRING_CONFIG)) {\n        return REDISMODULE_ERR;\n    }\n    \n    ModuleConfig *mc = createModuleConfig(name, applyfn, privdata, module, flags);\n    mc->get_fn.get_string = getfn;\n    mc->set_fn.set_string = setfn;\n    listAddNodeTail(module->module_configs, mc);\n    unsigned int cflags = maskModuleConfigFlags(flags);\n    addModuleStringConfig(sdsdup(mc->name), (mc->alias) ? sdsdup(mc->alias) : NULL, \n                          cflags, mc, default_val ? sdsnew(default_val) : NULL);\n    return REDISMODULE_OK;\n}\n\n/* Create a bool config that server clients can interact with via the \n * `CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. See \n * RedisModule_RegisterStringConfig for detailed information about configs. */\nint RM_RegisterBoolConfig(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, RedisModuleConfigGetBoolFunc getfn, RedisModuleConfigSetBoolFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) {\n    RedisModule *module = ctx->module;\n    if (moduleConfigValidityCheck(module, name, flags, BOOL_CONFIG)) {\n        return REDISMODULE_ERR;\n    }\n    ModuleConfig *mc = createModuleConfig(name, applyfn, privdata, module, flags);\n    mc->get_fn.get_bool = getfn;\n    mc->set_fn.set_bool = setfn;\n    listAddNodeTail(module->module_configs, mc);\n    unsigned int cflags = maskModuleConfigFlags(flags);\n    addModuleBoolConfig(sdsdup(mc->name), (mc->alias) ? 
sdsdup(mc->alias) : NULL, \n                        cflags, mc, default_val);\n    return REDISMODULE_OK;\n}\n\n/* \n * Create an enum config that server clients can interact with via the \n * `CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. \n * Enum configs map a set of string tokens to corresponding integer values, where \n * the string value is exposed to Redis clients but the value passed between Redis\n * and the module is the integer value. These values are defined in enum_values, an\n * array of null-terminated c strings, and int_vals, an array of integer values\n * whose indexes pair with the entries of enum_values.\n * Example Implementation:\n *      const char *enum_vals[3] = {\"first\", \"second\", \"third\"};\n *      const int int_vals[3] = {0, 2, 4};\n *      int enum_val = 0;\n *\n *      int getEnumConfigCommand(const char *name, void *privdata) {\n *          return enum_val;\n *      }\n *       \n *      int setEnumConfigCommand(const char *name, int val, void *privdata, RedisModuleString **err) {\n *          enum_val = val;\n *          return REDISMODULE_OK;\n *      }\n *      ...\n *      RedisModule_RegisterEnumConfig(ctx, \"enum\", 0, REDISMODULE_CONFIG_DEFAULT, enum_vals, int_vals, 3, getEnumConfigCommand, setEnumConfigCommand, NULL, NULL);\n *\n * Note that you can use REDISMODULE_CONFIG_BITFLAGS so that multiple enum strings\n * can be combined into one integer as bit flags, in which case you may want to\n * sort your enums so that the preferred combinations are present first.\n *\n * See RedisModule_RegisterStringConfig for detailed general information about configs. 
*/\nint RM_RegisterEnumConfig(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, const char **enum_values, const int *int_values, int num_enum_vals, RedisModuleConfigGetEnumFunc getfn, RedisModuleConfigSetEnumFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) {\n    RedisModule *module = ctx->module;\n    if (moduleConfigValidityCheck(module, name, flags, ENUM_CONFIG)) {\n        return REDISMODULE_ERR;\n    }\n    ModuleConfig *mc = createModuleConfig(name, applyfn, privdata, module, flags);\n    mc->get_fn.get_enum = getfn;\n    mc->set_fn.set_enum = setfn;\n    configEnum *enum_vals = zmalloc((num_enum_vals + 1) * sizeof(configEnum));\n    for (int i = 0; i < num_enum_vals; i++) {\n        enum_vals[i].name = zstrdup(enum_values[i]);\n        enum_vals[i].val = int_values[i];\n    }\n    enum_vals[num_enum_vals].name = NULL;\n    enum_vals[num_enum_vals].val = 0;\n    listAddNodeTail(module->module_configs, mc);\n\n    unsigned int cflags = maskModuleConfigFlags(flags) | maskModuleEnumConfigFlags(flags);\n    addModuleEnumConfig(sdsdup(mc->name), (mc->alias) ? sdsdup(mc->alias) : NULL, \n                        cflags, mc, default_val, enum_vals, num_enum_vals);\n    return REDISMODULE_OK;\n}\n\n/*\n * Create an integer config that server clients can interact with via the \n * `CONFIG SET`, `CONFIG GET`, and `CONFIG REWRITE` commands. See \n * RedisModule_RegisterStringConfig for detailed information about configs. 
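A minimal example\n * implementation (the names used here are illustrative):\n *\n *     long long numval = 1024;\n *\n *     long long getNumericConfigCommand(const char *name, void *privdata) {\n *         return numval;\n *     }\n *\n *     int setNumericConfigCommand(const char *name, long long val, void *privdata, RedisModuleString **err) {\n *         numval = val;\n *         return REDISMODULE_OK;\n *     }\n *     ...\n *     RedisModule_RegisterNumericConfig(ctx, \"numeric\", 1024, REDISMODULE_CONFIG_DEFAULT, 0, LLONG_MAX, getNumericConfigCommand, setNumericConfigCommand, NULL, NULL);\n 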
*/\nint RM_RegisterNumericConfig(RedisModuleCtx *ctx, const char *name, long long default_val, unsigned int flags, long long min, long long max, RedisModuleConfigGetNumericFunc getfn, RedisModuleConfigSetNumericFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) {\n    RedisModule *module = ctx->module;\n    if (moduleConfigValidityCheck(module, name, flags, NUMERIC_CONFIG)) {\n        return REDISMODULE_ERR;\n    }\n    ModuleConfig *mc = createModuleConfig(name, applyfn, privdata, module, flags);\n    mc->get_fn.get_numeric = getfn;\n    mc->set_fn.set_numeric = setfn;\n    listAddNodeTail(module->module_configs, mc);\n    unsigned int numeric_flags = maskModuleNumericConfigFlags(flags);\n\n    unsigned int cflags = maskModuleConfigFlags(flags);\n    addModuleNumericConfig(sdsdup(mc->name), (mc->alias) ? sdsdup(mc->alias) : NULL, \n                           cflags, mc, default_val, numeric_flags, min, max);\n    return REDISMODULE_OK;\n}\n\n/* Applies all default configurations for the parameters the module registered.\n * Only call this function if the module would like to make changes to the\n * configuration values before the actual values are applied by RedisModule_LoadConfigs.\n * Otherwise it's sufficient to call RedisModule_LoadConfigs, it should already set the default values if needed.\n * This makes it possible to distinguish between default values and user provided values and apply other changes between setting the defaults and the user values.\n * This will return REDISMODULE_ERR if it is called:\n * 1. outside RedisModule_OnLoad\n * 2. more than once\n * 3. 
after the RedisModule_LoadConfigs call */\nint RM_LoadDefaultConfigs(RedisModuleCtx *ctx) {\n    if (!ctx || !ctx->module || !ctx->module->onload || ctx->module->configs_initialized) {\n        return REDISMODULE_ERR;\n    }\n    RedisModule *module = ctx->module;\n    /* Load default configs of the module */\n    return loadModuleDefaultConfigs(module);\n}\n\n/* Applies all pending configurations on the module load. This should be called\n * after all of the configurations have been registered for the module inside of RedisModule_OnLoad.\n * This will return REDISMODULE_ERR if it is called outside RedisModule_OnLoad.\n * This API needs to be called when configurations are provided in either `MODULE LOADEX`\n * or provided as startup arguments. */\nint RM_LoadConfigs(RedisModuleCtx *ctx) {\n    if (!ctx || !ctx->module || !ctx->module->onload) {\n        return REDISMODULE_ERR;\n    }\n    RedisModule *module = ctx->module;\n    /* Load configs from conf file or arguments from loadex */\n    return loadModuleConfigs(module);\n}\n\n/* --------------------------------------------------------------------------\n * ## RDB load/save API\n * -------------------------------------------------------------------------- */\n\n#define REDISMODULE_RDB_STREAM_FILE 1\n\ntypedef struct RedisModuleRdbStream {\n    int type;\n\n    union {\n        char *filename;\n    } data;\n} RedisModuleRdbStream;\n\n/* Create a stream object to save/load RDB to/from a file.\n *\n * This function returns a pointer to RedisModuleRdbStream which is owned\n * by the caller. It requires a call to RM_RdbStreamFree() to free\n * the object. */\nRedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename) {\n    RedisModuleRdbStream *stream = zmalloc(sizeof(*stream));\n    stream->type = REDISMODULE_RDB_STREAM_FILE;\n    stream->data.filename = zstrdup(filename);\n    return stream;\n}\n\n/* Release an RDB stream object. 
*/\nvoid RM_RdbStreamFree(RedisModuleRdbStream *stream) {\n    switch (stream->type) {\n    case REDISMODULE_RDB_STREAM_FILE:\n        zfree(stream->data.filename);\n        break;\n    default:\n        serverAssert(0);\n        break;\n    }\n    zfree(stream);\n}\n\n/* Load RDB file from the `stream`. Dataset will be cleared first and then RDB\n * file will be loaded.\n *\n * `flags` must be zero. This parameter is for future use.\n *\n * On success REDISMODULE_OK is returned, otherwise REDISMODULE_ERR is returned\n * and errno is set accordingly.\n *\n * Example:\n *\n *     RedisModuleRdbStream *s = RedisModule_RdbStreamCreateFromFile(\"exp.rdb\");\n *     RedisModule_RdbLoad(ctx, s, 0);\n *     RedisModule_RdbStreamFree(s);\n */\nint RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags) {\n    UNUSED(ctx);\n\n    if (!stream || flags != 0) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    /* Not allowed on replicas. */\n    if (server.masterhost != NULL) {\n        errno = ENOTSUP;\n        return REDISMODULE_ERR;\n    }\n\n    /* Drop replicas if exist. */\n    disconnectSlaves();\n    freeReplicationBacklog();\n\n    if (server.aof_state != AOF_OFF) stopAppendOnly();\n\n    /* Kill existing RDB fork as it is saving outdated data. Also killing it\n     * will prevent COW memory issue. */\n    if (server.child_type == CHILD_TYPE_RDB) killRDBChild();\n\n    emptyData(-1,EMPTYDB_NO_FLAGS,NULL);\n\n    /* rdbLoad() can go back to the networking and process network events. If\n     * RM_RdbLoad() is called inside a command callback, we don't want to\n     * process the current client. Otherwise, we may free the client or try to\n     * process next message while we are already in the command callback. 
*/\n    if (server.current_client) protectClient(server.current_client);\n\n    serverAssert(stream->type == REDISMODULE_RDB_STREAM_FILE);\n    int ret = rdbLoad(stream->data.filename,NULL,RDBFLAGS_NONE);\n\n    if (server.current_client) unprotectClient(server.current_client);\n    if (server.aof_enabled) startAppendOnlyWithRetry();\n\n    if (ret != RDB_OK) {\n        errno = (ret == RDB_NOT_EXIST) ? ENOENT : EIO;\n        return REDISMODULE_ERR;\n    }\n\n    errno = 0;\n    return REDISMODULE_OK;\n}\n\n/* Save dataset to the RDB stream.\n *\n * `flags` must be zero. This parameter is for future use.\n *\n * On success REDISMODULE_OK is returned, otherwise REDISMODULE_ERR is returned\n * and errno is set accordingly.\n *\n * Example:\n *\n *     RedisModuleRdbStream *s = RedisModule_RdbStreamCreateFromFile(\"exp.rdb\");\n *     RedisModule_RdbSave(ctx, s, 0);\n *     RedisModule_RdbStreamFree(s);\n */\nint RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags) {\n    UNUSED(ctx);\n\n    if (!stream || flags != 0) {\n        errno = EINVAL;\n        return REDISMODULE_ERR;\n    }\n\n    serverAssert(stream->type == REDISMODULE_RDB_STREAM_FILE);\n\n    if (rdbSaveToFile(stream->data.filename) != C_OK) {\n        return REDISMODULE_ERR;\n    }\n\n    errno = 0;\n    return REDISMODULE_OK;\n}\n\n/* Returns the internal secret of the cluster.\n * Should be used to authenticate as an internal connection to a node in the\n * cluster, and by that gain the permissions to execute internal commands.\n */\nconst char* RM_GetInternalSecret(RedisModuleCtx *ctx, size_t *len) {\n    UNUSED(ctx);\n    serverAssert(len != NULL);\n    const char *secret = clusterGetSecret(len);\n    return secret;\n}\n\n\n/* --------------------------------------------------------------------------\n * ## Config access API\n * -------------------------------------------------------------------------- */\n\n/* Get an iterator to all configs.\n * Optional `ctx` can be provided if 
use of auto-memory is desired.\n * Optional `pattern` can be provided to filter configs by name. If `pattern` is\n * NULL all configs will be returned.\n *\n * The returned iterator can be used to iterate over all configs using\n * RedisModule_ConfigIteratorNext().\n *\n * Example usage:\n * ```\n * // Below is the same as RedisModule_ConfigIteratorCreate(ctx, NULL)\n * RedisModuleConfigIterator *iter = RedisModule_ConfigIteratorCreate(ctx, \"*\");\n * const char *config_name = NULL;\n * while ((config_name = RedisModule_ConfigIteratorNext(iter)) != NULL) {\n *     RedisModuleString *value = NULL;\n *     if (RedisModule_ConfigGet(ctx, config_name, &value) == REDISMODULE_OK) {\n *         // Do something with `value`...\n *         RedisModule_FreeString(ctx, value);\n *     }\n * }\n * RedisModule_ConfigIteratorRelease(ctx, iter);\n *\n * // Or optionally one can check the type to get the config value directly\n * // via the appropriate API in case performance is a consideration\n * iter = RedisModule_ConfigIteratorCreate(ctx, \"*\");\n * while ((config_name = RedisModule_ConfigIteratorNext(iter)) != NULL) {\n *     RedisModuleConfigType type;\n *     RedisModule_ConfigGetType(config_name, &type);\n *     if (type == REDISMODULE_CONFIG_TYPE_STRING) {\n *         RedisModuleString *value;\n *         RedisModule_ConfigGet(ctx, config_name, &value);\n *         // Do something with `value`...\n *         RedisModule_FreeString(ctx, value);\n *     } else if (type == REDISMODULE_CONFIG_TYPE_NUMERIC) {\n *         long long value;\n *         RedisModule_ConfigGetNumeric(ctx, config_name, &value);\n *         // Do something with `value`...\n *     } else if (type == REDISMODULE_CONFIG_TYPE_BOOL) {\n *         int value;\n *         RedisModule_ConfigGetBool(ctx, config_name, &value);\n *         // Do something with `value`...\n *     } else if (type == REDISMODULE_CONFIG_TYPE_ENUM) {\n *         RedisModuleString *value;\n *         RedisModule_ConfigGetEnum(ctx, config_name, &value);\n 
*         // Do something with `value`...\n *         RedisModule_FreeString(ctx, value);\n *     }\n * }\n * RedisModule_ConfigIteratorRelease(ctx, iter);\n * ```\n *\n * Returns a pointer to RedisModuleConfigIterator. Unless auto-memory is enabled\n * the caller is responsible for freeing the iterator using\n * RedisModule_ConfigIteratorRelease(). */\nRedisModuleConfigIterator *RM_ConfigIteratorCreate(RedisModuleCtx *ctx, const char *pattern) {\n    RedisModuleConfigIterator *iter = RM_Alloc(sizeof(*iter));\n\n    iter->di = moduleGetConfigIterator();\n    if (pattern != NULL) {\n        iter->pattern = sdsnew(pattern);\n        iter->is_glob = (strpbrk(pattern, \"*?[\") != NULL);\n    } else {\n        iter->pattern = NULL;\n        iter->is_glob = false;\n    }\n\n    if (ctx != NULL) autoMemoryAdd(ctx,REDISMODULE_AM_CONFIG, iter);\n    return iter;\n}\n\n/* Release the iterator returned by RedisModule_ConfigIteratorCreate(). If auto-memory\n * is enabled and manual release is needed one must pass the same RedisModuleCtx\n * that was used to create the iterator. */\nvoid RM_ConfigIteratorRelease(RedisModuleCtx *ctx, RedisModuleConfigIterator *iter) {\n    if (ctx != NULL) autoMemoryFreed(ctx,REDISMODULE_AM_CONFIG,iter);\n    if (iter->di) dictReleaseIterator(iter->di);\n    sdsfree(iter->pattern);\n    RM_Free(iter);\n}\n\nstatic RedisModuleConfigType convertToRedisModuleConfigType(configType type) {\n    switch (type) {\n    case BOOL_CONFIG:\n        return REDISMODULE_CONFIG_TYPE_BOOL;\n    case NUMERIC_CONFIG:\n        return REDISMODULE_CONFIG_TYPE_NUMERIC;\n    case STRING_CONFIG:\n    case SDS_CONFIG:\n    case SPECIAL_CONFIG:\n        return REDISMODULE_CONFIG_TYPE_STRING;\n    case ENUM_CONFIG:\n        return REDISMODULE_CONFIG_TYPE_ENUM;\n    default:\n        serverAssert(0);\n        break;\n    }\n}\n\n/* Get the type of a config as RedisModuleConfigType. 
One may use this in order\n * to get or set the values of the config with the appropriate function if the\n * generic RedisModule_ConfigGet and RedisModule_ConfigSet APIs are performing\n * poorly.\n *\n * Intended usage of this function is when iteration over the configs is\n * performed. See RedisModule_ConfigIteratorNext() for example usage. If setting\n * or getting individual configs one can check the config type by hand in\n * redis.conf (or via other sources if config is added by a module) and use the\n * appropriate function without the need to call this function.\n *\n * Explanation of config types:\n *  - REDISMODULE_CONFIG_TYPE_BOOL: Config is a boolean. One can use RedisModule_Config(Get/Set)Bool\n *  - REDISMODULE_CONFIG_TYPE_NUMERIC: Config is a numeric value. One can use RedisModule_Config(Get/Set)Numeric\n *  - REDISMODULE_CONFIG_TYPE_STRING: Config is a string. One can use the generic RedisModule_Config(Get/Set)\n *  - REDISMODULE_CONFIG_TYPE_ENUM: Config is an enum. One can use RedisModule_Config(Get/Set)Enum\n *\n * If a config with the given name exists, `res` is populated with its type and\n * REDISMODULE_OK is returned, else REDISMODULE_ERR is returned. */\nint RM_ConfigGetType(const char *name, RedisModuleConfigType *res) {\n    sds config_name = sdsnew(name);\n    configType type;\n    int ret = moduleGetConfigType(config_name, &type);\n    sdsfree(config_name);\n\n    if (!ret)\n        return REDISMODULE_ERR;\n\n    *res = convertToRedisModuleConfigType(type);\n    return REDISMODULE_OK;\n}\n\n/* Go to the next element of the config iterator.\n *\n * Returns the name of the next config, or NULL if there are no more configs.\n * Returned string is non-owning and thus should not be freed.\n * If a pattern was provided when creating the iterator, only configs matching\n * the pattern will be returned.\n *\n * See RedisModule_ConfigIteratorCreate() for example usage. 
*/\nconst char *RM_ConfigIteratorNext(RedisModuleConfigIterator *iter) {\n    return moduleConfigIteratorNext(&iter->di, iter->pattern, iter->is_glob, NULL);\n}\n\n/* Get the value of a config as a string. This function can be used to get the\n * value of any config, regardless of its type.\n *\n * The string is allocated by the module and must be freed by the caller unless\n * auto memory is enabled.\n *\n * If the config does not exist, REDISMODULE_ERR is returned, else REDISMODULE_OK\n * is returned and `res` is populated with the value. */\nint RM_ConfigGet(RedisModuleCtx *ctx, const char *name, RedisModuleString **res) {\n    sds config_name = sdsnew(name);\n    sds res_sds = NULL;\n    int ret = moduleGetStringConfig(config_name, &res_sds);\n    sdsfree(config_name);\n    if (ret)\n        *res = RM_CreateString(ctx, res_sds, sdslen(res_sds));\n    sdsfree(res_sds);\n    return ret ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Get the value of a bool config.\n *\n * If the config does not exist or is not a bool config, REDISMODULE_ERR is\n * returned, else REDISMODULE_OK is returned and `res` is populated with the\n * value. */\nint RM_ConfigGetBool(RedisModuleCtx *ctx, const char *name, int *res) {\n    UNUSED(ctx);\n    sds config_name = sdsnew(name);\n    int ret = moduleGetBoolConfig(config_name, res);\n    sdsfree(config_name);\n    return ret ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Get the value of an enum config.\n *\n * If the config does not exist or is not an enum config, REDISMODULE_ERR is\n * returned, else REDISMODULE_OK is returned and `res` is populated with the value.\n * If the config has multiple arguments they are returned as a space-separated\n * string. 
*/\nint RM_ConfigGetEnum(RedisModuleCtx *ctx, const char *name, RedisModuleString **res) {\n    sds config_name = sdsnew(name);\n    sds res_sds = NULL;\n    int ret = moduleGetEnumConfig(config_name, &res_sds);\n    sdsfree(config_name);\n    if (ret)\n        *res = RM_CreateString(ctx, res_sds, sdslen(res_sds));\n    sdsfree(res_sds);\n    return ret ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Get the value of a numeric config.\n *\n * If the config does not exist or is not a numeric config, REDISMODULE_ERR is\n * returned, else REDISMODULE_OK is returned and `res` is populated with the\n * value. */\nint RM_ConfigGetNumeric(RedisModuleCtx *ctx, const char *name, long long *res) {\n    UNUSED(ctx);\n    sds config_name = sdsnew(name);\n    int ret = moduleGetNumericConfig(config_name, res);\n    sdsfree(config_name);\n    return ret ? REDISMODULE_OK : REDISMODULE_ERR;\n}\n\n/* Set the value of a config.\n *\n * This function can be used to set the value of any config, regardless of its\n * type. If the config is multi-argument, the value must be a space-separated\n * string.\n *\n * If the value failed to be set REDISMODULE_ERR will be returned and if `err`\n * is not NULL, it will be populated with an error message. */\nint RM_ConfigSet(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err) {\n    sds config_name = sdsnew(name);\n    const char *cerr = NULL;\n    const char *val = RM_StringPtrLen(value, NULL);\n    int res = moduleSetStringConfig(ctx->client, config_name, val, &cerr);\n    sdsfree(config_name);\n    if (err && cerr)\n        *err = RM_CreateString(ctx, cerr, strlen(cerr));\n    return (res == 0 ? REDISMODULE_ERR : REDISMODULE_OK);\n}\n\n/* Set the value of a bool config.\n *\n * See RedisModule_ConfigSet for return value. 
*/\nint RM_ConfigSetBool(RedisModuleCtx *ctx, const char *name, int value, RedisModuleString **err) {\n    const char *cerr = NULL;\n    sds config_name = sdsnew(name);\n    int res = moduleSetBoolConfig(ctx->client, config_name, value, &cerr);\n    sdsfree(config_name);\n    if (err && cerr)\n        *err = RM_CreateString(ctx, cerr, strlen(cerr));\n    return (res == 0 ? REDISMODULE_ERR : REDISMODULE_OK);\n}\n\n/* Set the value of an enum config.\n *\n * If the config is multi-argument the value parameter must be a space-separated\n * string.\n *\n * See RedisModule_ConfigSet for return value. */\nint RM_ConfigSetEnum(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err) {\n    sds config_name = sdsnew(name);\n    const char *cerr = NULL;\n\n    size_t len;\n    const char *val = RM_StringPtrLen(value, &len);\n    sds sds_val = sdsnewlen(val, len);\n\n    int vals_cnt = 0;\n    sds *vals = sdssplitlen(val, sdslen(sds_val), \" \", 1, &vals_cnt);\n\n    int res = moduleSetEnumConfig(ctx->client, config_name, vals, vals_cnt, &cerr);\n\n    sdsfreesplitres(vals, vals_cnt);\n    sdsfree(sds_val);\n    sdsfree(config_name);\n    if (err && cerr)\n        *err = RM_CreateString(ctx, cerr, strlen(cerr));\n    return (res == 0 ? REDISMODULE_ERR : REDISMODULE_OK);\n}\n\n/* Set the value of a numeric config.\n * If the value passed is meant to be a percentage, it should be passed as a\n * negative value.\n * For unsigned configs pass the value and cast to (long long) - internal type\n * checks will handle it.\n *\n * See RedisModule_ConfigSet for return value. 
*/\nint RM_ConfigSetNumeric(RedisModuleCtx *ctx, const char *name, long long value, RedisModuleString **err) {\n    sds config_name = sdsnew(name);\n    const char *cerr = NULL;\n    int res = moduleSetNumericConfig(ctx->client, config_name, value, &cerr);\n    sdsfree(config_name);\n    if (err && cerr)\n        *err = RM_CreateString(ctx, cerr, strlen(cerr));\n    return (res == 0 ? REDISMODULE_ERR : REDISMODULE_OK);\n}\n\n/* Redis MODULE command.\n *\n * MODULE LIST\n * MODULE LOAD <path> [args...]\n * MODULE LOADEX <path> [[CONFIG NAME VALUE] [CONFIG NAME VALUE]] [ARGS ...]\n * MODULE UNLOAD <name>\n */\nvoid moduleCommand(client *c) {\n    char *subcmd = c->argv[1]->ptr;\n\n    if (c->argc == 2 && !strcasecmp(subcmd,\"help\")) {\n        const char *help[] = {\n\"LIST\",\n\"    Return a list of loaded modules.\",\n\"LOAD <path> [<arg> ...]\",\n\"    Load a module library from <path>, passing to it any optional arguments.\",\n\"LOADEX <path> [[CONFIG NAME VALUE] [CONFIG NAME VALUE]] [ARGS ...]\",\n\"    Load a module library from <path>, while passing it module configurations and optional arguments.\",\n\"UNLOAD <name>\",\n\"    Unload a module.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(subcmd,\"load\") && c->argc >= 3) {\n        robj **argv = NULL;\n        int argc = 0;\n\n        if (c->argc > 3) {\n            argc = c->argc - 3;\n            argv = &c->argv[3];\n        }\n\n        if (moduleLoad(c->argv[2]->ptr,(void **)argv,argc, 0) == C_OK)\n            addReply(c,shared.ok);\n        else\n            addReplyError(c,\n                \"Error loading the extension. 
Please check the server logs.\");\n    } else if (!strcasecmp(subcmd,\"loadex\") && c->argc >= 3) {\n        robj **argv = NULL;\n        int argc = 0;\n\n        if (c->argc > 3) {\n            argc = c->argc - 3;\n            argv = &c->argv[3];\n        }\n        /* If this is a loadex command we want to populate server.module_configs_queue with \n         * sds NAME VALUE pairs. We also want to increment argv to just after ARGS, if supplied. */\n        if (parseLoadexArguments((RedisModuleString ***) &argv, &argc) == REDISMODULE_OK &&\n            moduleLoad(c->argv[2]->ptr, (void **)argv, argc, 1) == C_OK)\n            addReply(c,shared.ok);\n        else {\n            dictEmpty(server.module_configs_queue, NULL);\n            addReplyError(c,\n                \"Error loading the extension. Please check the server logs.\");\n        }\n\n    } else if (!strcasecmp(subcmd,\"unload\") && c->argc == 3) {\n        const char *errmsg = NULL;\n        if (moduleUnload(c->argv[2]->ptr, &errmsg, 0) == C_OK)\n            addReply(c,shared.ok);\n        else {\n            if (errmsg == NULL) errmsg = \"operation not possible.\";\n            addReplyErrorFormat(c, \"Error unloading module: %s\", errmsg);\n            serverLog(LL_WARNING, \"Error unloading module %s: %s\", (sds) c->argv[2]->ptr, errmsg);\n        }\n    } else if (!strcasecmp(subcmd,\"list\") && c->argc == 2) {\n        addReplyLoadedModules(c);\n    } else {\n        addReplySubcommandSyntaxError(c);\n        return;\n    }\n}\n\n/* Return the number of registered modules. */\nsize_t moduleCount(void) {\n    return dictSize(modules);\n}\n\n/* --------------------------------------------------------------------------\n * ## Key eviction API\n * -------------------------------------------------------------------------- */\n\n/* Set the key last access time for LRU based eviction. Not relevant if the\n * server's maxmemory policy is LFU based. 
Value is idle time in milliseconds.\n * Returns REDISMODULE_OK if the LRU was updated, REDISMODULE_ERR otherwise. */\nint RM_SetLRU(RedisModuleKey *key, mstime_t lru_idle) {\n    if (!key->kv)\n        return REDISMODULE_ERR;\n    if (objectSetLRUOrLFU(key->kv, -1, lru_idle, lru_idle>=0 ? LRU_CLOCK() : 0, 1))\n        return REDISMODULE_OK;\n    return REDISMODULE_ERR;\n}\n\n/* Gets the key last access time.\n * Value is idle time in milliseconds or -1 if the server's eviction policy is\n * LFU based.\n * Returns REDISMODULE_OK when the key is valid. */\nint RM_GetLRU(RedisModuleKey *key, mstime_t *lru_idle) {\n    *lru_idle = -1;\n    if (!key->kv)\n        return REDISMODULE_ERR;\n    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU)\n        return REDISMODULE_OK;\n    *lru_idle = estimateObjectIdleTime(key->kv);\n    return REDISMODULE_OK;\n}\n\n/* Set the key access frequency. Only relevant if the server's maxmemory policy\n * is LFU based.\n * The frequency is a logarithmic counter that provides an indication of\n * the access frequency only (must be <= 255).\n * Returns REDISMODULE_OK if the LFU was updated, REDISMODULE_ERR otherwise. */\nint RM_SetLFU(RedisModuleKey *key, long long lfu_freq) {\n    if (!key->kv)\n        return REDISMODULE_ERR;\n    if (objectSetLRUOrLFU(key->kv, lfu_freq, -1, 0, 1))\n        return REDISMODULE_OK;\n    return REDISMODULE_ERR;\n}\n\n/* Gets the key access frequency or -1 if the server's eviction policy is not\n * LFU based.\n * Returns REDISMODULE_OK when the key is valid. 
*/\nint RM_GetLFU(RedisModuleKey *key, long long *lfu_freq) {\n    *lfu_freq = -1;\n    if (!key->kv)\n        return REDISMODULE_ERR;\n    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU)\n        *lfu_freq = LFUDecrAndReturn(key->kv);\n    return REDISMODULE_OK;\n}\n\n/* --------------------------------------------------------------------------\n * ## Miscellaneous APIs\n * -------------------------------------------------------------------------- */\n\n/**\n * Returns the full module options flags mask, using the return value\n * the module can check if a certain set of module options are supported\n * by the redis server version in use.\n * Example:\n *\n *        int supportedFlags = RM_GetModuleOptionsAll();\n *        if (supportedFlags & REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS) {\n *              // REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS is supported\n *        } else{\n *              // REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS is not supported\n *        }\n */\nint RM_GetModuleOptionsAll(void) {\n    return _REDISMODULE_OPTIONS_FLAGS_NEXT - 1;\n}\n\n/**\n * Returns the full ContextFlags mask, using the return value\n * the module can check if a certain set of flags are supported\n * by the redis server version in use.\n * Example:\n *\n *        int supportedFlags = RM_GetContextFlagsAll();\n *        if (supportedFlags & REDISMODULE_CTX_FLAGS_MULTI) {\n *              // REDISMODULE_CTX_FLAGS_MULTI is supported\n *        } else{\n *              // REDISMODULE_CTX_FLAGS_MULTI is not supported\n *        }\n */\nint RM_GetContextFlagsAll(void) {\n    return _REDISMODULE_CTX_FLAGS_NEXT - 1;\n}\n\n/**\n * Returns the full KeyspaceNotification mask, using the return value\n * the module can check if a certain set of flags are supported\n * by the redis server version in use.\n * Example:\n *\n *        int supportedFlags = RM_GetKeyspaceNotificationFlagsAll();\n *        if (supportedFlags & 
REDISMODULE_NOTIFY_LOADED) {\n *              // REDISMODULE_NOTIFY_LOADED is supported\n *        } else{\n *              // REDISMODULE_NOTIFY_LOADED is not supported\n *        }\n */\nint RM_GetKeyspaceNotificationFlagsAll(void) {\n    return _REDISMODULE_NOTIFY_NEXT - 1;\n}\n\n/**\n * Return the Redis version in the format 0x00MMmmpp.\n * For example, for 6.0.7 the return value will be 0x00060007.\n */\nint RM_GetServerVersion(void) {\n    return REDIS_VERSION_NUM;\n}\n\n/**\n * Return the current redis-server runtime value of REDISMODULE_TYPE_METHOD_VERSION.\n * You can use that when calling RM_CreateDataType to know which fields of\n * RedisModuleTypeMethods are going to be supported and which will be ignored.\n */\nint RM_GetTypeMethodVersion(void) {\n    return REDISMODULE_TYPE_METHOD_VERSION;\n}\n\n/* Replace the value assigned to a module type.\n *\n * The key must be open for writing, have an existing value, and have a moduleType\n * that matches the one specified by the caller.\n *\n * Unlike RM_ModuleTypeSetValue() which will free the old value, this function\n * simply swaps the old value with the new value.\n *\n * The function returns REDISMODULE_OK on success, REDISMODULE_ERR on errors\n * such as:\n *\n * 1. Key is not opened for writing.\n * 2. Key is not a module data type key.\n * 3. 
Key is a module datatype other than 'mt'.\n *\n * If old_value is non-NULL, the old value is returned by reference.\n */\nint RM_ModuleTypeReplaceValue(RedisModuleKey *key, moduleType *mt, void *new_value, void **old_value) {\n    if (!(key->mode & REDISMODULE_WRITE) || key->iter)\n        return REDISMODULE_ERR;\n    if (!key->kv || key->kv->type != OBJ_MODULE)\n        return REDISMODULE_ERR;\n\n    moduleValue *mv = key->kv->ptr;\n    if (mv->type != mt)\n        return REDISMODULE_ERR;\n\n    if (old_value)\n        *old_value = mv->value;\n    mv->value = new_value;\n\n    return REDISMODULE_OK;\n}\n\n/* For a specified command, parse its arguments and return an array that\n * contains the indexes of all key name arguments. This function is\n * essentially a more efficient way to do `COMMAND GETKEYS`.\n *\n * The out_flags argument is optional, and can be set to NULL.\n * When provided it is filled with REDISMODULE_CMD_KEY_ flags in matching\n * indexes with the key indexes of the returned array.\n *\n * A NULL return value indicates the specified command has no keys, or\n * an error condition. Error conditions are indicated by setting errno\n * as follows:\n *\n * * ENOENT: Specified command does not exist.\n * * EINVAL: Invalid command arity specified.\n *\n * NOTE: The returned array is not a Redis Module object so it does not\n * get automatically freed even when auto-memory is used. 
The caller\n * must explicitly call RM_Free() to free it, same as the out_flags pointer if\n * used.\n */\nint *RM_GetCommandKeysWithFlags(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys, int **out_flags) {\n    UNUSED(ctx);\n    struct redisCommand *cmd;\n    int *res = NULL;\n\n    /* Find command */\n    if ((cmd = lookupCommand(argv,argc)) == NULL) {\n        errno = ENOENT;\n        return NULL;\n    }\n\n    /* Bail out if command has no keys */\n    if (!doesCommandHaveKeys(cmd)) {\n        errno = 0;\n        return NULL;\n    }\n\n    if ((cmd->arity > 0 && cmd->arity != argc) || (argc < -cmd->arity)) {\n        errno = EINVAL;\n        return NULL;\n    }\n\n    getKeysResult result = GETKEYS_RESULT_INIT;\n    getKeysFromCommand(cmd, argv, argc, &result);\n\n    *num_keys = result.numkeys;\n    if (!result.numkeys) {\n        errno = 0;\n        getKeysFreeResult(&result);\n        return NULL;\n    }\n\n    /* The return value here expects an array of key positions */\n    unsigned long int size = sizeof(int) * result.numkeys;\n    res = zmalloc(size);\n    if (out_flags)\n        *out_flags = zmalloc(size);\n    for (int i = 0; i < result.numkeys; i++) {\n        res[i] = result.keys[i].pos;\n        if (out_flags)\n            (*out_flags)[i] = moduleConvertKeySpecsFlags(result.keys[i].flags, 0);\n    }\n\n    getKeysFreeResult(&result);\n    return res;\n}\n\n/* Identical to RM_GetCommandKeysWithFlags when flags are not needed. 
*/\nint *RM_GetCommandKeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys) {\n    return RM_GetCommandKeysWithFlags(ctx, argv, argc, num_keys, NULL);\n}\n\n/* Return the name of the command currently running */\nconst char *RM_GetCurrentCommandName(RedisModuleCtx *ctx) {\n    if (!ctx || !ctx->client || !ctx->client->cmd)\n        return NULL;\n\n    return (const char*)ctx->client->cmd->fullname;\n}\n\n/* --------------------------------------------------------------------------\n * ## Defrag API\n * -------------------------------------------------------------------------- */\n\n/* Register a defrag callback for global data, i.e. anything that the module\n * may allocate that is not tied to a specific data type.\n */\nint RM_RegisterDefragFunc(RedisModuleCtx *ctx, RedisModuleDefragFunc cb) {\n    ctx->module->defrag_cb = cb;\n    return REDISMODULE_OK;\n}\n\n/* Register a defrag callback for global data, i.e. anything that the module\n * may allocate that is not tied to a specific data type.\n * This is a more advanced version of RM_RegisterDefragFunc, in that it takes\n * a callback that has a return value, and can use RM_DefragShouldStop\n * to indicate whether it should be called again later (non-zero return), or is\n * done (returned 0).\n */\nint RM_RegisterDefragFunc2(RedisModuleCtx *ctx, RedisModuleDefragFunc2 cb) {\n    ctx->module->defrag_cb_2 = cb;\n    return REDISMODULE_OK;\n}\n\n/* Register defrag callbacks that will be called when the defrag operation starts and ends.\n *\n * The callbacks are the same as `RM_RegisterDefragFunc` but the user\n * can also assume the callbacks are called when the defrag operation starts and ends. 
*/\nint RM_RegisterDefragCallbacks(RedisModuleCtx *ctx, RedisModuleDefragFunc start, RedisModuleDefragFunc end) {\n    ctx->module->defrag_start_cb = start;\n    ctx->module->defrag_end_cb = end;\n    return REDISMODULE_OK;\n}\n\n/* When the data type defrag callback iterates complex structures, this\n * function should be called periodically. A zero (false) return\n * indicates the callback may continue its work. A non-zero value (true)\n * indicates it should stop.\n *\n * When stopped, the callback may use RM_DefragCursorSet() to store its\n * position so it can later use RM_DefragCursorGet() to resume defragging.\n *\n * When stopped and more work is left to be done, the callback should\n * return 1. Otherwise, it should return 0.\n */\nint RM_DefragShouldStop(RedisModuleDefragCtx *ctx) {\n    if (ctx->stopping) /* Return immediately if already stopping */\n        return 1;\n    if (!ctx->endtime) /* Return if no time limit set */\n        return 0;\n\n    /* We use certain thresholds to avoid excessive system calls.\n     * Time checks are only performed when any threshold is reached,\n     * which means we might slightly exceed the expected end time. */\n    if (server.stat_active_defrag_hits - ctx->last_stop_check_hits >= 512 ||\n        server.stat_active_defrag_misses - ctx->last_stop_check_misses >= 1024)\n    {\n        if (ctx->endtime <= getMonotonicUs()) {\n            ctx->stopping = 1;\n            return 1;\n        }\n        ctx->last_stop_check_hits = server.stat_active_defrag_hits;\n        ctx->last_stop_check_misses = server.stat_active_defrag_misses;\n    }\n    return 0;\n}\n\n/* Store an arbitrary cursor value for future re-use.\n *\n * This should only be called if RM_DefragShouldStop() has returned a non-zero\n * value and the defrag callback is about to exit without fully iterating its\n * data type.\n *\n * This behavior is reserved to cases where late defrag is performed. 
Late\n * defrag is selected for keys that implement the `free_effort` callback and\n * return a `free_effort` value that is larger than the defrag\n * 'active-defrag-max-scan-fields' configuration directive.\n *\n * Smaller keys, keys that do not implement `free_effort` or the global\n * defrag callback are not called in late-defrag mode. In those cases, a\n * call to this function will return REDISMODULE_ERR.\n *\n * The cursor may be used by the module to represent some progress into the\n * module's data type. Modules may also store additional cursor-related\n * information locally and use the cursor as a flag that indicates when\n * traversal of a new key begins. This is possible because the API makes\n * a guarantee that concurrent defragmentation of multiple keys will\n * not be performed.\n */\nint RM_DefragCursorSet(RedisModuleDefragCtx *ctx, unsigned long cursor) {\n    if (!ctx->cursor)\n        return REDISMODULE_ERR;\n\n    *ctx->cursor = cursor;\n    return REDISMODULE_OK;\n}\n\n/* Fetch a cursor value that has been previously stored using RM_DefragCursorSet().\n *\n * If not called for a late defrag operation, REDISMODULE_ERR will be returned and\n * the cursor should be ignored. 
See RM_DefragCursorSet() for more details on\n * defrag cursors.\n */\nint RM_DefragCursorGet(RedisModuleDefragCtx *ctx, unsigned long *cursor) {\n    if (!ctx->cursor)\n        return REDISMODULE_ERR;\n\n    *cursor = *ctx->cursor;\n    return REDISMODULE_OK;\n}\n\n/* Defrag a memory allocation previously allocated by RM_Alloc, RM_Calloc, etc.\n * The defragmentation process involves allocating a new memory block and copying\n * the contents to it, like realloc().\n *\n * If defragmentation was not necessary, NULL is returned and the operation has\n * no other effect.\n *\n * If a non-NULL value is returned, the caller should use the new pointer instead\n * of the old one and update any reference to the old pointer, which must not\n * be used again.\n */\nvoid *RM_DefragAlloc(RedisModuleDefragCtx *ctx, void *ptr) {\n    UNUSED(ctx);\n    return activeDefragAlloc(ptr);\n}\n\n/* Allocate memory for defrag purposes\n *\n * In the common case the user simply wants to reallocate a pointer with a single\n * owner. For such a use case RM_DefragAlloc is enough. But in some use cases the\n * user might want to replace a pointer that has multiple owners in different keys.\n * In such a case, an in-place replacement cannot work because the other keys still\n * keep a pointer to the old value.\n *\n * RM_DefragAllocRaw and RM_DefragFreeRaw allow controlling when the memory\n * for defrag purposes is allocated and when it is freed,\n * allowing support for more complex defrag use cases. */\nvoid *RM_DefragAllocRaw(RedisModuleDefragCtx *ctx, size_t size) {\n    UNUSED(ctx);\n    return activeDefragAllocRaw(size);\n}\n\n/* Free memory for defrag purposes\n * \n * See RM_DefragAllocRaw for more information. 
*/\nvoid RM_DefragFreeRaw(RedisModuleDefragCtx *ctx, void *ptr) {\n    UNUSED(ctx);\n    activeDefragFreeRaw(ptr);\n}\n\n/* Defrag a RedisModuleString previously allocated by RM_Alloc, RM_Calloc, etc.\n * See RM_DefragAlloc() for more information on how the defragmentation process\n * works.\n *\n * NOTE: It is only possible to defrag strings that have a single reference.\n * Typically this means strings retained with RM_RetainString or RM_HoldString\n * may not be defragmentable. One exception is command argv strings which, if\n * retained by the module, will end up with a single reference (because the\n * reference on the Redis side is dropped as soon as the command callback\n * returns).\n */\nRedisModuleString *RM_DefragRedisModuleString(RedisModuleDefragCtx *ctx, RedisModuleString *str) {\n    UNUSED(ctx);\n    return activeDefragStringOb(str);\n}\n\n/* Defrag callback for the radix tree iterator, called for each node\n * in order to defrag the node allocations. */\nint moduleDefragRaxNode(raxNode **noderef, void *privdata) {\n    UNUSED(privdata);\n    raxNode *newnode = activeDefragAlloc(*noderef);\n    if (newnode) {\n        *noderef = newnode;\n        return 1;\n    }\n    return 0;\n}\n\n/* Defragment a Redis Module Dictionary by scanning its contents and calling a value\n * callback for each value.\n *\n * The callback gets the current value in the dict and should update newptr to\n * the new pointer if the value was re-allocated to a different address. 
The callback also receives the key name, for reference only.\n * The callback returns 0 when defrag is complete for this node, 1 when the node needs more work.\n *\n * The API can work incrementally by accepting a seek position to continue from, and\n * returning the next position to seek to on the next call (or setting *seekTo to\n * NULL when the iteration is completed).\n *\n * This API returns a new dict if it was re-allocated to a new address (which is only\n * attempted when *seekTo is NULL on entry).\n */\nRedisModuleDict *RM_DefragRedisModuleDict(RedisModuleDefragCtx *ctx, RedisModuleDict *dict, RedisModuleDefragDictValueCallback valueCB, RedisModuleString **seekTo) {\n    RedisModuleDict *newdict = NULL;\n    raxIterator ri;\n\n    if (*seekTo == NULL) {\n        /* If the last seek is NULL, we start a new iteration. */\n        rax* newrax = NULL;\n        if ((newdict = activeDefragAlloc(dict)))\n            dict = newdict;\n        /* Update rax back-pointer to dict */\n        dict->rax->alloc_size = &dict->alloc_size;\n        if ((newrax = activeDefragAlloc(dict->rax)))\n            dict->rax = newrax;\n    }\n\n    raxStart(&ri,dict->rax);\n    if (*seekTo == NULL) {\n        /* If the last seek is NULL, we start a new iteration. */\n        moduleDefragRaxNode(&dict->rax->head, NULL);\n        /* assign the iterator node callback before the seek, so that the\n         * initial nodes that are processed till the first item are covered */\n        ri.node_cb = moduleDefragRaxNode;\n        raxSeek(&ri,\"^\",NULL,0);\n    } else {\n        /* Seek to the saved 'seekTo' position. 
*/\n        if (!raxSeek(&ri,\">=\", (*seekTo)->ptr, sdslen((*seekTo)->ptr))) {\n            goto cleanup;\n        }\n        /* assign the iterator node callback after the seek, so that the\n         * initial nodes that are processed till now aren't covered */\n        ri.node_cb = moduleDefragRaxNode;\n    }\n\n    while (raxNext(&ri)) {\n        int ret = 0;\n        void *newdata = NULL;\n\n        if (valueCB) {\n            ret = valueCB(ctx, ri.data, ri.key, ri.key_len, &newdata);\n            if (newdata)\n                raxSetData(ri.node, ri.data=newdata);\n        }\n\n        /* Check if we need to interrupt defragmentation.\n         * - For explicit interruption, use current position\n         * - For timeout interruption, try to advance to next node if possible */\n        if (ret == 1 || RM_DefragShouldStop(ctx)) {\n            if (ret == 0 && !raxNext(&ri)) goto cleanup; /* Last node and no more work needed. */\n            if (*seekTo) RM_FreeString(NULL, *seekTo);\n            *seekTo = RM_CreateString(NULL, (const char *)ri.key, ri.key_len);\n            raxStop(&ri);\n            return newdict;\n        }\n    }\ncleanup:\n    if (*seekTo) RM_FreeString(NULL, *seekTo);\n    *seekTo = NULL;\n    raxStop(&ri);\n    return newdict;\n}\n\n/* Perform a late defrag of a module datatype key.\n *\n * Returns a zero value (and initializes the cursor) if no more needs to be done,\n * or a non-zero value otherwise.\n */\nint moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, monotime endtime, int dbid) {\n    moduleValue *mv = value->ptr;\n    moduleType *mt = mv->type;\n\n    /* Interval shouldn't exceed 1 hour. */\n    serverAssert(!endtime || llabs((long long)endtime - (long long)getMonotonicUs()) < 60*60*1000*1000LL);\n\n    RedisModuleDefragCtx defrag_ctx = INIT_MODULE_DEFRAG_CTX(endtime, cursor, key, dbid);\n\n    /* Invoke callback. 
Note that the callback may be missing if the key has been\n     * replaced with a different type since our last visit.\n     */\n    int ret = 0;\n    if (mt->defrag)\n        ret = mt->defrag(&defrag_ctx, key, &mv->value);\n\n    if (!ret) {\n        *cursor = 0;    /* No more work to do */\n        return 0;\n    }\n\n    return 1;\n}\n\n/* Attempt to defrag a module data type value. Depending on complexity,\n * the operation may happen immediately or be scheduled for later.\n *\n * Returns 1 if the operation has been completed or 0 if it needs to\n * be scheduled for late defrag.\n */\nint moduleDefragValue(robj *key, robj *value, int dbid) {\n    moduleValue *mv = value->ptr;\n    moduleType *mt = mv->type;\n\n    /* Try to defrag moduleValue itself regardless of whether or not\n     * defrag callbacks are provided.\n     */\n    moduleValue *newmv = activeDefragAlloc(mv);\n    if (newmv) {\n        value->ptr = mv = newmv;\n    }\n\n    if (!mt->defrag)\n        return 1;\n\n    /* Use free_effort to determine complexity of module value, and if\n     * necessary schedule it for defragLater instead of quick immediate\n     * defrag.\n     */\n    size_t effort = moduleGetFreeEffort(key, value, dbid);\n    if (!effort)\n        effort = SIZE_MAX;\n    if (effort > server.active_defrag_max_scan_fields) {\n        return 0;  /* Defrag later */\n    }\n\n    RedisModuleDefragCtx defrag_ctx = INIT_MODULE_DEFRAG_CTX(0, NULL, key, dbid);\n    mt->defrag(&defrag_ctx, key, &mv->value);\n    return 1;\n}\n\n/* Call registered module API defrag start functions */\nvoid moduleDefragStart(void) {\n    dictForEach(modules, struct RedisModule, module, \n        if (module->defrag_start_cb) {\n            RedisModuleDefragCtx defrag_ctx = INIT_MODULE_DEFRAG_CTX(0, NULL, NULL, -1);\n            module->defrag_start_cb(&defrag_ctx);\n        }\n    );\n}\n\n/* Call registered module API defrag end functions */\nvoid moduleDefragEnd(void) {\n    dictForEach(modules, struct 
RedisModule, module, \n        if (module->defrag_end_cb) {\n            RedisModuleDefragCtx defrag_ctx = INIT_MODULE_DEFRAG_CTX(0, NULL, NULL, -1);\n            module->defrag_end_cb(&defrag_ctx);\n        }\n    );\n}\n\n/* Returns the name of the key currently being processed.\n * There is no guarantee that the key name is always available, so this may return NULL.\n */\nconst RedisModuleString *RM_GetKeyNameFromDefragCtx(RedisModuleDefragCtx *ctx) {\n    return ctx->key;\n}\n\n/* Returns the database id of the key currently being processed.\n * There is no guarantee that this info is always available, so this may return -1.\n */\nint RM_GetDbIdFromDefragCtx(RedisModuleDefragCtx *ctx) {\n    return ctx->dbid;\n}\n\n/* Register all the APIs we export. Keep this function at the end of the\n * file so that it's easy to find when adding new entries. */\nvoid moduleRegisterCoreAPI(void) {\n    server.moduleapi = dictCreate(&moduleAPIDictType);\n    server.sharedapi = dictCreate(&moduleAPIDictType);\n    REGISTER_API(Alloc);\n    REGISTER_API(TryAlloc);\n    REGISTER_API(Calloc);\n    REGISTER_API(TryCalloc);\n    REGISTER_API(Realloc);\n    REGISTER_API(TryRealloc);\n    REGISTER_API(Free);\n    REGISTER_API(Strdup);\n    REGISTER_API(CreateCommand);\n    REGISTER_API(GetCommand);\n    REGISTER_API(CreateSubcommand);\n    REGISTER_API(SetCommandInfo);\n    REGISTER_API(SetCommandACLCategories);\n    REGISTER_API(AddACLCategory);\n    REGISTER_API(SetModuleAttribs);\n    REGISTER_API(IsModuleNameBusy);\n    REGISTER_API(WrongArity);\n    REGISTER_API(ReplyWithLongLong);\n    REGISTER_API(ReplyWithError);\n    REGISTER_API(ReplyWithErrorFormat);\n    REGISTER_API(ReplyWithSimpleString);\n    REGISTER_API(ReplyWithArray);\n    REGISTER_API(ReplyWithMap);\n    REGISTER_API(ReplyWithSet);\n    REGISTER_API(ReplyWithAttribute);\n    REGISTER_API(ReplyWithNullArray);\n    REGISTER_API(ReplyWithEmptyArray);\n    REGISTER_API(ReplySetArrayLength);\n    
REGISTER_API(ReplySetMapLength);\n    REGISTER_API(ReplySetSetLength);\n    REGISTER_API(ReleaseKeyMetaClass);\n    REGISTER_API(ReplySetAttributeLength);\n    REGISTER_API(ReplyWithString);\n    REGISTER_API(ReplyWithEmptyString);\n    REGISTER_API(ReplyWithVerbatimString);\n    REGISTER_API(ReplyWithVerbatimStringType);\n    REGISTER_API(ReplyWithStringBuffer);\n    REGISTER_API(CreateKeyMetaClass);\n    REGISTER_API(SetKeyMeta);\n    REGISTER_API(GetKeyMeta);\n    REGISTER_API(ReplyWithCString);\n    REGISTER_API(ReplyWithNull);\n    REGISTER_API(ReplyWithBool);\n    REGISTER_API(ReplyWithCallReply);\n    REGISTER_API(ReplyWithDouble);\n    REGISTER_API(ReplyWithBigNumber);\n    REGISTER_API(ReplyWithLongDouble);\n    REGISTER_API(GetSelectedDb);\n    REGISTER_API(SelectDb);\n    REGISTER_API(KeyExists);\n    REGISTER_API(OpenKey);\n    REGISTER_API(GetOpenKeyModesAll);\n    REGISTER_API(CloseKey);\n    REGISTER_API(KeyType);\n    REGISTER_API(ValueLength);\n    REGISTER_API(ListPush);\n    REGISTER_API(ListPop);\n    REGISTER_API(ListGet);\n    REGISTER_API(ListSet);\n    REGISTER_API(ListInsert);\n    REGISTER_API(ListDelete);\n    REGISTER_API(StringToLongLong);\n    REGISTER_API(StringToULongLong);\n    REGISTER_API(StringToDouble);\n    REGISTER_API(StringToLongDouble);\n    REGISTER_API(StringToStreamID);\n    REGISTER_API(Call);\n    REGISTER_API(CallReplyProto);\n    REGISTER_API(FreeCallReply);\n    REGISTER_API(CallReplyInteger);\n    REGISTER_API(CallReplyDouble);\n    REGISTER_API(CallReplyBigNumber);\n    REGISTER_API(CallReplyVerbatim);\n    REGISTER_API(CallReplyBool);\n    REGISTER_API(CallReplySetElement);\n    REGISTER_API(CallReplyMapElement);\n    REGISTER_API(CallReplyAttributeElement);\n    REGISTER_API(CallReplyPromiseSetUnblockHandler);\n    REGISTER_API(CallReplyPromiseAbort);\n    REGISTER_API(CallReplyAttribute);\n    REGISTER_API(CallReplyType);\n    REGISTER_API(CallReplyLength);\n    REGISTER_API(CallReplyArrayElement);\n    
REGISTER_API(CallReplyStringPtr);\n    REGISTER_API(CreateStringFromCallReply);\n    REGISTER_API(CreateString);\n    REGISTER_API(CreateStringFromLongLong);\n    REGISTER_API(CreateStringFromULongLong);\n    REGISTER_API(CreateStringFromDouble);\n    REGISTER_API(CreateStringFromLongDouble);\n    REGISTER_API(CreateStringFromString);\n    REGISTER_API(CreateStringFromStreamID);\n    REGISTER_API(CreateStringPrintf);\n    REGISTER_API(FreeString);\n    REGISTER_API(StringPtrLen);\n    REGISTER_API(AutoMemory);\n    REGISTER_API(Replicate);\n    REGISTER_API(ReplicateVerbatim);\n    REGISTER_API(DeleteKey);\n    REGISTER_API(UnlinkKey);\n    REGISTER_API(StringSet);\n    REGISTER_API(StringDMA);\n    REGISTER_API(StringTruncate);\n    REGISTER_API(SetExpire);\n    REGISTER_API(GetExpire);\n    REGISTER_API(SetAbsExpire);\n    REGISTER_API(GetAbsExpire);\n    REGISTER_API(ResetDataset);\n    REGISTER_API(DbSize);\n    REGISTER_API(RandomKey);\n    REGISTER_API(ZsetAdd);\n    REGISTER_API(ZsetIncrby);\n    REGISTER_API(ZsetScore);\n    REGISTER_API(ZsetRem);\n    REGISTER_API(ZsetRangeStop);\n    REGISTER_API(ZsetFirstInScoreRange);\n    REGISTER_API(ZsetLastInScoreRange);\n    REGISTER_API(ZsetFirstInLexRange);\n    REGISTER_API(ZsetLastInLexRange);\n    REGISTER_API(ZsetRangeCurrentElement);\n    REGISTER_API(ZsetRangeNext);\n    REGISTER_API(ZsetRangePrev);\n    REGISTER_API(ZsetRangeEndReached);\n    REGISTER_API(HashSet);\n    REGISTER_API(HashGet);\n    REGISTER_API(HashFieldMinExpire);\n    REGISTER_API(StreamAdd);\n    REGISTER_API(StreamDelete);\n    REGISTER_API(StreamIteratorStart);\n    REGISTER_API(StreamIteratorStop);\n    REGISTER_API(StreamIteratorNextID);\n    REGISTER_API(StreamIteratorNextField);\n    REGISTER_API(StreamIteratorDelete);\n    REGISTER_API(StreamTrimByLength);\n    REGISTER_API(StreamTrimByID);\n    REGISTER_API(IsKeysPositionRequest);\n    REGISTER_API(KeyAtPos);\n    REGISTER_API(KeyAtPosWithFlags);\n    
REGISTER_API(IsChannelsPositionRequest);\n    REGISTER_API(ChannelAtPosWithFlags);\n    REGISTER_API(GetClientId);\n    REGISTER_API(GetClientUserNameById);\n    REGISTER_API(GetContextFlags);\n    REGISTER_API(AvoidReplicaTraffic);\n    REGISTER_API(PoolAlloc);\n    REGISTER_API(CreateDataType);\n    REGISTER_API(ModuleTypeSetValue);\n    REGISTER_API(ModuleTypeReplaceValue);\n    REGISTER_API(ModuleTypeGetType);\n    REGISTER_API(ModuleTypeGetValue);\n    REGISTER_API(IsIOError);\n    REGISTER_API(SetModuleOptions);\n    REGISTER_API(SignalModifiedKey);\n    REGISTER_API(SaveUnsigned);\n    REGISTER_API(LoadUnsigned);\n    REGISTER_API(SaveSigned);\n    REGISTER_API(LoadSigned);\n    REGISTER_API(SaveString);\n    REGISTER_API(SaveStringBuffer);\n    REGISTER_API(LoadString);\n    REGISTER_API(LoadStringBuffer);\n    REGISTER_API(SaveDouble);\n    REGISTER_API(LoadDouble);\n    REGISTER_API(SaveFloat);\n    REGISTER_API(LoadFloat);\n    REGISTER_API(SaveLongDouble);\n    REGISTER_API(LoadLongDouble);\n    REGISTER_API(SaveDataTypeToString);\n    REGISTER_API(LoadDataTypeFromString);\n    REGISTER_API(LoadDataTypeFromStringEncver);\n    REGISTER_API(EmitAOF);\n    REGISTER_API(Log);\n    REGISTER_API(LogIOError);\n    REGISTER_API(_Assert);\n    REGISTER_API(LatencyAddSample);\n    REGISTER_API(StringAppendBuffer);\n    REGISTER_API(TrimStringAllocation);\n    REGISTER_API(RetainString);\n    REGISTER_API(HoldString);\n    REGISTER_API(StringCompare);\n    REGISTER_API(GetContextFromIO);\n    REGISTER_API(GetKeyNameFromIO);\n    REGISTER_API(GetKeyNameFromModuleKey);\n    REGISTER_API(GetDbIdFromModuleKey);\n    REGISTER_API(GetDbIdFromIO);\n    REGISTER_API(GetKeyNameFromOptCtx);\n    REGISTER_API(GetToKeyNameFromOptCtx);\n    REGISTER_API(GetDbIdFromOptCtx);\n    REGISTER_API(GetToDbIdFromOptCtx);\n    REGISTER_API(GetKeyNameFromDefragCtx);\n    REGISTER_API(GetDbIdFromDefragCtx);\n    REGISTER_API(GetKeyNameFromDigest);\n    REGISTER_API(GetDbIdFromDigest);\n   
 REGISTER_API(BlockClient);\n    REGISTER_API(BlockClientGetPrivateData);\n    REGISTER_API(BlockClientSetPrivateData);\n    REGISTER_API(BlockClientOnAuth);\n    REGISTER_API(UnblockClient);\n    REGISTER_API(IsBlockedReplyRequest);\n    REGISTER_API(IsBlockedTimeoutRequest);\n    REGISTER_API(GetBlockedClientPrivateData);\n    REGISTER_API(AbortBlock);\n    REGISTER_API(Milliseconds);\n    REGISTER_API(MonotonicMicroseconds);\n    REGISTER_API(Microseconds);\n    REGISTER_API(CachedMicroseconds);\n    REGISTER_API(BlockedClientMeasureTimeStart);\n    REGISTER_API(BlockedClientMeasureTimeEnd);\n    REGISTER_API(GetThreadSafeContext);\n    REGISTER_API(GetDetachedThreadSafeContext);\n    REGISTER_API(FreeThreadSafeContext);\n    REGISTER_API(ThreadSafeContextLock);\n    REGISTER_API(ThreadSafeContextTryLock);\n    REGISTER_API(ThreadSafeContextUnlock);\n    REGISTER_API(DigestAddStringBuffer);\n    REGISTER_API(DigestAddLongLong);\n    REGISTER_API(DigestEndSequence);\n    REGISTER_API(NotifyKeyspaceEvent);\n    REGISTER_API(NotifyKeyspaceEventWithSubkeys);\n    REGISTER_API(GetNotifyKeyspaceEvents);\n    REGISTER_API(SubscribeToKeyspaceEvents);\n    REGISTER_API(UnsubscribeFromKeyspaceEvents);\n    REGISTER_API(SubscribeToKeyspaceEventsWithSubkeys);\n    REGISTER_API(UnsubscribeFromKeyspaceEventsWithSubkeys);\n    REGISTER_API(AddPostNotificationJob);\n    REGISTER_API(RegisterClusterMessageReceiver);\n    REGISTER_API(SendClusterMessage);\n    REGISTER_API(GetClusterNodeInfo);\n    REGISTER_API(GetClusterNodesList);\n    REGISTER_API(FreeClusterNodesList);\n    REGISTER_API(CreateTimer);\n    REGISTER_API(StopTimer);\n    REGISTER_API(GetTimerInfo);\n    REGISTER_API(GetMyClusterID);\n    REGISTER_API(GetClusterSize);\n    REGISTER_API(GetRandomBytes);\n    REGISTER_API(GetRandomHexChars);\n    REGISTER_API(BlockedClientDisconnected);\n    REGISTER_API(SetDisconnectCallback);\n    REGISTER_API(GetBlockedClientHandle);\n    REGISTER_API(SetClusterFlags);\n    
REGISTER_API(ClusterDisableTrim);\n    REGISTER_API(ClusterEnableTrim);\n    REGISTER_API(ClusterKeySlot);\n    REGISTER_API(ClusterKeySlotC);\n    REGISTER_API(ClusterCanonicalKeyNameInSlot);\n    REGISTER_API(ClusterCanAccessKeysInSlot);\n    REGISTER_API(ClusterPropagateForSlotMigration);\n    REGISTER_API(ClusterGetLocalSlotRanges);\n    REGISTER_API(ClusterFreeSlotRanges);\n    REGISTER_API(CreateDict);\n    REGISTER_API(FreeDict);\n    REGISTER_API(DictSize);\n    REGISTER_API(DictSetC);\n    REGISTER_API(DictReplaceC);\n    REGISTER_API(DictSet);\n    REGISTER_API(DictReplace);\n    REGISTER_API(DictGetC);\n    REGISTER_API(DictGet);\n    REGISTER_API(DictDelC);\n    REGISTER_API(DictDel);\n    REGISTER_API(DictIteratorStartC);\n    REGISTER_API(DictIteratorStart);\n    REGISTER_API(DictIteratorStop);\n    REGISTER_API(DictIteratorReseekC);\n    REGISTER_API(DictIteratorReseek);\n    REGISTER_API(DictNextC);\n    REGISTER_API(DictPrevC);\n    REGISTER_API(DictNext);\n    REGISTER_API(DictPrev);\n    REGISTER_API(DictCompareC);\n    REGISTER_API(DictCompare);\n    REGISTER_API(ExportSharedAPI);\n    REGISTER_API(GetSharedAPI);\n    REGISTER_API(RegisterCommandFilter);\n    REGISTER_API(UnregisterCommandFilter);\n    REGISTER_API(CommandFilterArgsCount);\n    REGISTER_API(CommandFilterArgGet);\n    REGISTER_API(CommandFilterArgInsert);\n    REGISTER_API(CommandFilterArgReplace);\n    REGISTER_API(CommandFilterArgDelete);\n    REGISTER_API(CommandFilterGetClientId);\n    REGISTER_API(Fork);\n    REGISTER_API(SendChildHeartbeat);\n    REGISTER_API(ExitFromChild);\n    REGISTER_API(KillForkChild);\n    REGISTER_API(RegisterInfoFunc);\n    REGISTER_API(InfoAddSection);\n    REGISTER_API(InfoBeginDictField);\n    REGISTER_API(InfoEndDictField);\n    REGISTER_API(InfoAddFieldString);\n    REGISTER_API(InfoAddFieldCString);\n    REGISTER_API(InfoAddFieldDouble);\n    REGISTER_API(InfoAddFieldLongLong);\n    REGISTER_API(InfoAddFieldULongLong);\n    
REGISTER_API(GetServerInfo);\n    REGISTER_API(FreeServerInfo);\n    REGISTER_API(ServerInfoGetField);\n    REGISTER_API(ServerInfoGetFieldC);\n    REGISTER_API(ServerInfoGetFieldSigned);\n    REGISTER_API(ServerInfoGetFieldUnsigned);\n    REGISTER_API(ServerInfoGetFieldDouble);\n    REGISTER_API(GetClientInfoById);\n    REGISTER_API(GetClientNameById);\n    REGISTER_API(SetClientNameById);\n    REGISTER_API(PublishMessage);\n    REGISTER_API(PublishMessageShard);\n    REGISTER_API(SubscribeToServerEvent);\n    REGISTER_API(SetLRU);\n    REGISTER_API(GetLRU);\n    REGISTER_API(SetLFU);\n    REGISTER_API(GetLFU);\n    REGISTER_API(BlockClientOnKeys);\n    REGISTER_API(BlockClientOnKeysWithFlags);\n    REGISTER_API(SignalKeyAsReady);\n    REGISTER_API(GetBlockedClientReadyKey);\n    REGISTER_API(GetUsedMemoryRatio);\n    REGISTER_API(MallocSize);\n    REGISTER_API(MallocUsableSize);\n    REGISTER_API(MallocSizeString);\n    REGISTER_API(MallocSizeDict);\n    REGISTER_API(ScanCursorCreate);\n    REGISTER_API(ScanCursorDestroy);\n    REGISTER_API(ScanCursorRestart);\n    REGISTER_API(Scan);\n    REGISTER_API(ScanKey);\n    REGISTER_API(CreateModuleUser);\n    REGISTER_API(SetContextUser);\n    REGISTER_API(GetContextUser);\n    REGISTER_API(GetUserUsername);\n    REGISTER_API(SetModuleUserACL);\n    REGISTER_API(SetModuleUserACLString);\n    REGISTER_API(GetModuleUserACLString);\n    REGISTER_API(GetCurrentUserName);\n    REGISTER_API(GetModuleUserFromUserName);\n    REGISTER_API(ACLCheckCommandPermissions);\n    REGISTER_API(ACLCheckKeyPermissions);\n    REGISTER_API(ACLCheckKeyPrefixPermissions);\n    REGISTER_API(ACLCheckChannelPermissions);\n    REGISTER_API(ACLAddLogEntry);\n    REGISTER_API(ACLAddLogEntryByUserName);\n    REGISTER_API(FreeModuleUser);\n    REGISTER_API(DeauthenticateAndCloseClient);\n    REGISTER_API(AuthenticateClientWithACLUser);\n    REGISTER_API(AuthenticateClientWithUser);\n    REGISTER_API(GetContextFlagsAll);\n    
REGISTER_API(GetModuleOptionsAll);\n    REGISTER_API(GetKeyspaceNotificationFlagsAll);\n    REGISTER_API(IsSubEventSupported);\n    REGISTER_API(GetServerVersion);\n    REGISTER_API(GetClientCertificate);\n    REGISTER_API(RedactClientCommandArgument);\n    REGISTER_API(GetCommandKeys);\n    REGISTER_API(GetCommandKeysWithFlags);\n    REGISTER_API(GetCurrentCommandName);\n    REGISTER_API(GetTypeMethodVersion);\n    REGISTER_API(RegisterDefragFunc);\n    REGISTER_API(RegisterDefragFunc2);\n    REGISTER_API(RegisterDefragCallbacks);\n    REGISTER_API(DefragAlloc);\n    REGISTER_API(DefragAllocRaw);\n    REGISTER_API(DefragFreeRaw);\n    REGISTER_API(DefragRedisModuleString);\n    REGISTER_API(DefragRedisModuleDict);\n    REGISTER_API(DefragShouldStop);\n    REGISTER_API(DefragCursorSet);\n    REGISTER_API(DefragCursorGet);\n    REGISTER_API(EventLoopAdd);\n    REGISTER_API(EventLoopDel);\n    REGISTER_API(EventLoopAddOneShot);\n    REGISTER_API(Yield);\n    REGISTER_API(RegisterBoolConfig);\n    REGISTER_API(RegisterNumericConfig);\n    REGISTER_API(RegisterStringConfig);\n    REGISTER_API(RegisterEnumConfig);\n    REGISTER_API(LoadDefaultConfigs);\n    REGISTER_API(LoadConfigs);\n    REGISTER_API(RegisterAuthCallback);\n    REGISTER_API(RdbStreamCreateFromFile);\n    REGISTER_API(RdbStreamFree);\n    REGISTER_API(RdbLoad);\n    REGISTER_API(RdbSave);\n    REGISTER_API(GetInternalSecret);\n    REGISTER_API(ConfigIteratorCreate);\n    REGISTER_API(ConfigIteratorRelease);\n    REGISTER_API(ConfigIteratorNext);\n    REGISTER_API(ConfigGetType);\n    REGISTER_API(ConfigGet);\n    REGISTER_API(ConfigGetBool);\n    REGISTER_API(ConfigGetEnum);\n    REGISTER_API(ConfigGetNumeric);\n    REGISTER_API(ConfigSet);\n    REGISTER_API(ConfigSetBool);\n    REGISTER_API(ConfigSetEnum);\n    REGISTER_API(ConfigSetNumeric);\n}\n"
  },
  {
    "path": "src/modules/.gitignore",
    "content": "*.so\n*.xo\n"
  },
  {
    "path": "src/modules/Makefile",
    "content": "\n# find the OS\nuname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')\n\n# Compile flags for linux / osx\nifeq ($(uname_S),Linux)\n\tSHOBJ_CFLAGS ?= -W -Wall -fno-common -g -ggdb -std=c99 -O2\n\tSHOBJ_LDFLAGS ?= -shared\nelse\n\tSHOBJ_CFLAGS ?= -W -Wall -dynamic -fno-common -g -ggdb -std=c99 -O2\n\tSHOBJ_LDFLAGS ?= -bundle -undefined dynamic_lookup\nendif\n\n# OS X 11.x doesn't have /usr/lib/libSystem.dylib and needs an explicit setting.\nifeq ($(uname_S),Darwin)\nifeq (\"$(wildcard /usr/lib/libSystem.dylib)\",\"\")\nLIBS = -L /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -lsystem\nendif\nendif\n\n.SUFFIXES: .c .so .xo .o\n\nall: helloworld.so hellotype.so helloblock.so hellocluster.so hellotimer.so hellodict.so hellohook.so helloacl.so\n\n.c.xo:\n\t$(CC) -I. $(CFLAGS) $(SHOBJ_CFLAGS) -fPIC -c $< -o $@\n\nhelloworld.xo: ../redismodule.h\n\nhelloworld.so: helloworld.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhellotype.xo: ../redismodule.h\n\nhellotype.so: hellotype.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhelloblock.xo: ../redismodule.h\n\nhelloblock.so: helloblock.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lpthread -lc\n\nhellocluster.xo: ../redismodule.h\n\nhellocluster.so: hellocluster.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhellotimer.xo: ../redismodule.h\n\nhellotimer.so: hellotimer.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhellodict.xo: ../redismodule.h\n\nhellodict.so: hellodict.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhellohook.xo: ../redismodule.h\n\nhellohook.so: hellohook.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nhelloacl.xo: ../redismodule.h\n\nhelloacl.so: helloacl.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LIBS) -lc\n\nclean:\n\trm -rf *.xo *.so\n"
  },
  {
    "path": "src/modules/helloacl.c",
    "content": "/* ACL API example - An example for performing custom synchronous and\n * asynchronous password authentication.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright 2019 Amazon.com, Inc. or its affiliates.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"../redismodule.h\"\n#include <pthread.h>\n#include <unistd.h>\n\n// A simple global user\nstatic RedisModuleUser *global;\nstatic uint64_t global_auth_client_id = 0;\n\n/* HELLOACL.REVOKE \n * Synchronously revoke access from a user. */\nint RevokeCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (global_auth_client_id) {\n        RedisModule_DeauthenticateAndCloseClient(ctx, global_auth_client_id);\n        return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        return RedisModule_ReplyWithError(ctx, \"Global user currently not used\");    \n    }\n}\n\n/* HELLOACL.RESET \n * Synchronously delete and re-create a module user. 
*/\nint ResetCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_FreeModuleUser(global);\n    global = RedisModule_CreateModuleUser(\"global\");\n    RedisModule_SetModuleUserACL(global, \"allcommands\");\n    RedisModule_SetModuleUserACL(global, \"allkeys\");\n    RedisModule_SetModuleUserACL(global, \"on\");\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* Callback handler for user changes, use this to notify a module of \n * changes to users authenticated by the module */\nvoid HelloACL_UserChanged(uint64_t client_id, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(client_id);\n    global_auth_client_id = 0;\n}\n\n/* HELLOACL.AUTHGLOBAL \n * Synchronously assigns a module user to the current context. */\nint AuthGlobalCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (global_auth_client_id) {\n        return RedisModule_ReplyWithError(ctx, \"Global user currently used\");    \n    }\n\n    RedisModule_AuthenticateClientWithUser(ctx, global, HelloACL_UserChanged, NULL, &global_auth_client_id);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n#define TIMEOUT_TIME 1000\n\n/* Reply callback for auth command HELLOACL.AUTHASYNC */\nint HelloACL_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    size_t length;\n\n    RedisModuleString *user_string = RedisModule_GetBlockedClientPrivateData(ctx);\n    const char *name = RedisModule_StringPtrLen(user_string, &length);\n\n    if (RedisModule_AuthenticateClientWithACLUser(ctx, name, length, NULL, NULL, NULL) == \n            REDISMODULE_ERR) {\n        return RedisModule_ReplyWithError(ctx, \"Invalid Username or password\");    \n    }\n    return 
RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* Timeout callback for auth command HELLOACL.AUTHASYNC */\nint HelloACL_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx, \"Request timedout\");\n}\n\n/* Private data freeing callback for the HELLOACL.AUTHASYNC command. */\nvoid HelloACL_FreeData(RedisModuleCtx *ctx, void *privdata) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModule_FreeString(NULL, privdata);\n}\n\n/* Background authentication can happen here. */\nvoid *HelloACL_ThreadMain(void *args) {\n    void **targs = args;\n    RedisModuleBlockedClient *bc = targs[0];\n    RedisModuleString *user = targs[1];\n    RedisModule_Free(targs);\n\n    RedisModule_UnblockClient(bc,user);\n    return NULL;\n}\n\n/* HELLOACL.AUTHASYNC \n * Asynchronously assigns an ACL user to the current context. */\nint AuthAsyncCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, HelloACL_Reply, HelloACL_Timeout, HelloACL_FreeData, TIMEOUT_TIME);\n\n    void **targs = RedisModule_Alloc(sizeof(void*)*2);\n    targs[0] = bc;\n    targs[1] = RedisModule_CreateStringFromString(NULL, argv[1]);\n\n    if (pthread_create(&tid, NULL, HelloACL_ThreadMain, targs) != 0) {\n        RedisModule_FreeString(NULL, targs[1]);\n        RedisModule_Free(targs);\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx, \"-ERR Can't start thread\");\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* This function must be present in each Redis module. It is used to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"helloacl\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"helloacl.reset\",\n        ResetCommand_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"helloacl.revoke\",\n        RevokeCommand_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"helloacl.authglobal\",\n        AuthGlobalCommand_RedisCommand,\"no-auth\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"helloacl.authasync\",\n        AuthAsyncCommand_RedisCommand,\"no-auth\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    global = RedisModule_CreateModuleUser(\"global\");\n    RedisModule_SetModuleUserACL(global, \"allcommands\");\n    RedisModule_SetModuleUserACL(global, \"allkeys\");\n    RedisModule_SetModuleUserACL(global, \"on\");\n\n    global_auth_client_id = 0;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/helloblock.c",
    "content": "/* Helloblock module -- An example of blocking command implementation\n * with threads.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n#include <unistd.h>\n\n/* Reply callback for blocking command HELLO.BLOCK */\nint HelloBlock_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    int *myint = RedisModule_GetBlockedClientPrivateData(ctx);\n    return RedisModule_ReplyWithLongLong(ctx,*myint);\n}\n\n/* Timeout callback for blocking command HELLO.BLOCK */\nint HelloBlock_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx,\"Request timed out\");\n}\n\n/* Private data freeing callback for HELLO.BLOCK command. */\nvoid HelloBlock_FreeData(RedisModuleCtx *ctx, void *privdata) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModule_Free(privdata);\n}\n\n/* The thread entry point that actually executes the blocking part\n * of the command HELLO.BLOCK. 
*/\nvoid *HelloBlock_ThreadMain(void *arg) {\n    void **targ = arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    long long delay = (unsigned long)targ[1];\n    RedisModule_Free(targ);\n\n    sleep(delay);\n    int *r = RedisModule_Alloc(sizeof(int));\n    *r = rand();\n    RedisModule_UnblockClient(bc,r);\n    return NULL;\n}\n\n/* An example blocked client disconnection callback.\n *\n * Note that in the case of the HELLO.BLOCK command, the blocked client is now\n * owned by the thread calling sleep(). In this specific case, there is not\n * much we can do, however normally we could instead implement a way to\n * signal the thread that the client disconnected, and sleep the specified\n * number of seconds with a while loop calling sleep(1), so that once we\n * detect the client disconnection, we can terminate the thread ASAP. */\nvoid HelloBlock_Disconnected(RedisModuleCtx *ctx, RedisModuleBlockedClient *bc) {\n    RedisModule_Log(ctx,\"warning\",\"Blocked client %p disconnected!\",\n        (void*)bc);\n\n    /* Here you should clean up your state / threads, and if possible\n     * call RedisModule_UnblockClient(), or notify the thread that will\n     * call the function ASAP. */\n}\n\n/* HELLO.BLOCK <delay> <timeout> -- Block for <delay> seconds, then reply with\n * a random number. Timeout is the command timeout, so that you can test\n * what happens when the delay is greater than the timeout. 
*/\nint HelloBlock_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    long long delay;\n    long long timeout;\n\n    if (RedisModule_StringToLongLong(argv[1],&delay) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid delay\");\n    }\n\n    if (RedisModule_StringToLongLong(argv[2],&timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid timeout\");\n    }\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,HelloBlock_Reply,HelloBlock_Timeout,HelloBlock_FreeData,timeout);\n\n    /* Here we set a disconnection handler, however since this module will\n     * block in sleep() in a thread, there is not much we can do in the\n     * callback, so this is just to show you the API. */\n    RedisModule_SetDisconnectCallback(bc,HelloBlock_Disconnected);\n\n    /* Now that we set up a blocking client, we need to pass control\n     * to the thread. However we need to pass arguments to the thread:\n     * the delay and a reference to the blocked client handle. */\n    void **targ = RedisModule_Alloc(sizeof(void*)*2);\n    targ[0] = bc;\n    targ[1] = (void*)(unsigned long) delay;\n\n    if (pthread_create(&tid,NULL,HelloBlock_ThreadMain,targ) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    return REDISMODULE_OK;\n}\n\n/* The thread entry point that actually executes the blocking part\n * of the command HELLO.KEYS.\n *\n * Note: this implementation is very simple on purpose, so no duplicated\n * keys (returned by SCAN) are filtered. However adding such functionality\n * would be trivial using any data structure implementing a dictionary\n * in order to filter out the duplicated items. 
*/\nvoid *HelloKeys_ThreadMain(void *arg) {\n    RedisModuleBlockedClient *bc = arg;\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);\n    long long cursor = 0;\n    size_t replylen = 0;\n\n    RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n    do {\n        RedisModule_ThreadSafeContextLock(ctx);\n        RedisModuleCallReply *reply = RedisModule_Call(ctx,\n            \"SCAN\",\"l\",(long long)cursor);\n        RedisModule_ThreadSafeContextUnlock(ctx);\n\n        RedisModuleCallReply *cr_cursor =\n            RedisModule_CallReplyArrayElement(reply,0);\n        RedisModuleCallReply *cr_keys =\n            RedisModule_CallReplyArrayElement(reply,1);\n\n        RedisModuleString *s = RedisModule_CreateStringFromCallReply(cr_cursor);\n        RedisModule_StringToLongLong(s,&cursor);\n        RedisModule_FreeString(ctx,s);\n\n        size_t items = RedisModule_CallReplyLength(cr_keys);\n        for (size_t j = 0; j < items; j++) {\n            RedisModuleCallReply *ele =\n                RedisModule_CallReplyArrayElement(cr_keys,j);\n            RedisModule_ReplyWithCallReply(ctx,ele);\n            replylen++;\n        }\n        RedisModule_FreeCallReply(reply);\n    } while (cursor != 0);\n    RedisModule_ReplySetArrayLength(ctx,replylen);\n\n    RedisModule_FreeThreadSafeContext(ctx);\n    RedisModule_UnblockClient(bc,NULL);\n    return NULL;\n}\n\n/* HELLO.KEYS -- Return all the keys in the current database without blocking\n * the server. The keys do not represent a point-in-time state so only the keys\n * that were in the database from the start to the end are guaranteed to be\n * there. 
*/\nint HelloKeys_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n\n    pthread_t tid;\n\n    /* Note that when blocking the client we do not set any callback: no\n     * timeout is possible since we passed '0', nor do we need a reply callback\n     * because we'll use the thread safe context to accumulate a reply. */\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,NULL,NULL,NULL,0);\n\n    /* Now that we set up a blocking client, we need to pass control\n     * to the thread. The only argument we need to pass is the reference\n     * to the blocked client handle. */\n    if (pthread_create(&tid,NULL,HelloKeys_ThreadMain,bc) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"helloblock\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.block\",\n        HelloBlock_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"hello.keys\",\n        HelloKeys_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/hellocluster.c",
    "content": "/* Helloworld cluster -- A ping/pong cluster API example.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n\n#define MSGTYPE_PING 1\n#define MSGTYPE_PONG 2\n\n/* HELLOCLUSTER.PINGALL */\nint PingallCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_SendClusterMessage(ctx,NULL,MSGTYPE_PING,\"Hey\",3);\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* HELLOCLUSTER.LIST */\nint ListCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    size_t numnodes;\n    char **ids = RedisModule_GetClusterNodesList(ctx,&numnodes);\n    if (ids == NULL) {\n        return RedisModule_ReplyWithError(ctx,\"Cluster not enabled\");\n    }\n\n    RedisModule_ReplyWithArray(ctx,numnodes);\n    for (size_t j = 0; j < numnodes; j++) {\n        int port;\n        RedisModule_GetClusterNodeInfo(ctx,ids[j],NULL,NULL,&port,NULL);\n        RedisModule_ReplyWithArray(ctx,2);\n        RedisModule_ReplyWithStringBuffer(ctx,ids[j],REDISMODULE_NODE_ID_LEN);\n        RedisModule_ReplyWithLongLong(ctx,port);\n    }\n    RedisModule_FreeClusterNodesList(ids);\n    return REDISMODULE_OK;\n}\n\n/* Callback for message MSGTYPE_PING */\nvoid PingReceiver(RedisModuleCtx *ctx, const char *sender_id, uint8_t type, const unsigned char *payload, uint32_t len) {\n    RedisModule_Log(ctx,\"notice\",\"PING (type %d) RECEIVED from %.*s: 
'%.*s'\",\n        type,REDISMODULE_NODE_ID_LEN,sender_id,(int)len, payload);\n    RedisModule_SendClusterMessage(ctx,NULL,MSGTYPE_PONG,\"Ohi!\",4);\n    RedisModuleCallReply *reply = RedisModule_Call(ctx, \"INCR\", \"c\", \"pings_received\");\n    RedisModule_FreeCallReply(reply);\n}\n\n/* Callback for message MSGTYPE_PONG. */\nvoid PongReceiver(RedisModuleCtx *ctx, const char *sender_id, uint8_t type, const unsigned char *payload, uint32_t len) {\n    RedisModule_Log(ctx,\"notice\",\"PONG (type %d) RECEIVED from %.*s: '%.*s'\",\n        type,REDISMODULE_NODE_ID_LEN,sender_id,(int)len, payload);\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"hellocluster\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellocluster.pingall\",\n        PingallCommand_RedisCommand,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellocluster.list\",\n        ListCommand_RedisCommand,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Disable Redis Cluster sharding and redirections. This way every node\n     * will be able to access every possible key, regardless of the hash slot,\n     * so the PING message handler can increment the 'pings_received' key on\n     * any node. Normally you do this so that the distributed system you\n     * implement as a module has total freedom in keyspace manipulation. */\n    RedisModule_SetClusterFlags(ctx,REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION);\n\n    /* Register our handlers for different message types. 
*/\n    RedisModule_RegisterClusterMessageReceiver(ctx,MSGTYPE_PING,PingReceiver);\n    RedisModule_RegisterClusterMessageReceiver(ctx,MSGTYPE_PONG,PongReceiver);\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/hellodict.c",
    "content": "/* Hellodict -- An example of modules dictionary API\n *\n * This module implements a volatile key-value store on top of the\n * dictionary exported by the Redis modules API.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n\nstatic RedisModuleDict *Keyspace;\n\n/* HELLODICT.SET <key> <value>\n *\n * Set the specified key to the specified value. */\nint cmd_SET(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    RedisModule_DictSet(Keyspace,argv[1],argv[2]);\n    /* We need to keep a reference to the value stored at the key, otherwise\n     * it would be freed when this callback returns. */\n    RedisModule_RetainString(NULL,argv[2]);\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* HELLODICT.GET <key>\n *\n * Return the value of the specified key, or a null reply if the key\n * is not defined. */\nint cmd_GET(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    RedisModuleString *val = RedisModule_DictGet(Keyspace,argv[1],NULL);\n    if (val == NULL) {\n        return RedisModule_ReplyWithNull(ctx);\n    } else {\n        return RedisModule_ReplyWithString(ctx, val);\n    }\n}\n\n/* HELLODICT.KEYRANGE <startkey> <endkey> <count>\n *\n * Return a list of matching keys, lexicographically between startkey\n * and endkey. No more than 'count' items are emitted. 
*/\nint cmd_KEYRANGE(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    /* Parse the count argument. */\n    long long count;\n    if (RedisModule_StringToLongLong(argv[3],&count) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    /* Seek the iterator. */\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStart(\n        Keyspace, \">=\", argv[1]);\n\n    /* Reply with the matching items. */\n    char *key;\n    size_t keylen;\n    long long replylen = 0; /* Keep track of the emitted array len. */\n    RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n    while((key = RedisModule_DictNextC(iter,&keylen,NULL)) != NULL) {\n        if (replylen >= count) break;\n        if (RedisModule_DictCompare(iter,\"<=\",argv[2]) == REDISMODULE_ERR)\n            break;\n        RedisModule_ReplyWithStringBuffer(ctx,key,keylen);\n        replylen++;\n    }\n    RedisModule_ReplySetArrayLength(ctx,replylen);\n\n    /* Cleanup. */\n    RedisModule_DictIteratorStop(iter);\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"hellodict\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellodict.set\",\n        cmd_SET,\"write deny-oom\",1,1,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellodict.get\",\n        cmd_GET,\"readonly\",1,1,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellodict.keyrange\",\n        cmd_KEYRANGE,\"readonly\",1,1,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Create our global dictionary. Here we'll set our keys and values. */\n    Keyspace = RedisModule_CreateDict(NULL);\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/hellohook.c",
    "content": "/* Server hooks API example\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n\n/* Client state change callback. */\nvoid clientChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleClientInfo *ci = data;\n    printf(\"Client %s event for client #%llu %s:%d\\n\",\n        (sub == REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED) ?\n            \"connection\" : \"disconnection\",\n        (unsigned long long)ci->id,ci->addr,ci->port);\n}\n\nvoid flushdbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleFlushInfo *fi = data;\n    if (sub == REDISMODULE_SUBEVENT_FLUSHDB_START) {\n        if (fi->dbnum != -1) {\n            RedisModuleCallReply *reply;\n            reply = RedisModule_Call(ctx,\"DBSIZE\",\"\");\n            long long numkeys = RedisModule_CallReplyInteger(reply);\n            printf(\"FLUSHDB event of database %d started (%lld keys in DB)\\n\",\n                fi->dbnum, numkeys);\n            RedisModule_FreeCallReply(reply);\n        } else {\n            printf(\"FLUSHALL event started\\n\");\n        }\n    } else {\n        if (fi->dbnum != -1) {\n            printf(\"FLUSHDB event of database %d ended\\n\",fi->dbnum);\n        } else {\n            printf(\"FLUSHALL event ended\\n\");\n        }\n    }\n}\n\n/* This function must be present on each Redis module. 
It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"hellohook\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ClientChange, clientChangeCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_FlushDB, flushdbCallback);\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/hellotimer.c",
    "content": "/* Timer API example -- Register and handle timer events\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2018-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n\n/* Timer callback. */\nvoid timerHandler(RedisModuleCtx *ctx, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    printf(\"Fired %s!\\n\", (char *)data);\n    RedisModule_Free(data);\n}\n\n/* HELLOTIMER.TIMER*/\nint TimerCommand_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    for (int j = 0; j < 10; j++) {\n        int delay = rand() % 5000;\n        char *buf = RedisModule_Alloc(256);\n        snprintf(buf,256,\"After %d\", delay);\n        RedisModuleTimerID tid = RedisModule_CreateTimer(ctx,delay,timerHandler,buf);\n        REDISMODULE_NOT_USED(tid);\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"hellotimer\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellotimer.timer\",\n        TimerCommand_RedisCommand,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/hellotype.c",
    "content": "/* This file implements a new module native data type called \"HELLOTYPE\".\n * The data structure implemented is a very simple ordered linked list of\n * 64 bit integers, in order to have something that is real world enough, but\n * at the same time, extremely simple to understand, to show how the API\n * works, how a new data type is created, and how to write basic methods\n * for RDB loading, saving and AOF rewriting.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n#include <stdint.h>\n\nstatic RedisModuleType *HelloType;\n\n/* ========================== Internal data structure  =======================\n * This is just a linked list of 64 bit integers where elements are inserted\n * in-place, so it's ordered. There is no pop/push operation but just insert\n * because it is enough to show the implementation of new data types without\n * making things complex. */\n\nstruct HelloTypeNode {\n    int64_t value;\n    struct HelloTypeNode *next;\n};\n\nstruct HelloTypeObject {\n    struct HelloTypeNode *head;\n    size_t len; /* Number of elements added. 
*/\n};\n\nstruct HelloTypeObject *createHelloTypeObject(void) {\n    struct HelloTypeObject *o;\n    o = RedisModule_Alloc(sizeof(*o));\n    o->head = NULL;\n    o->len = 0;\n    return o;\n}\n\nvoid HelloTypeInsert(struct HelloTypeObject *o, int64_t ele) {\n    struct HelloTypeNode *next = o->head, *newnode, *prev = NULL;\n\n    while(next && next->value < ele) {\n        prev = next;\n        next = next->next;\n    }\n    newnode = RedisModule_Alloc(sizeof(*newnode));\n    newnode->value = ele;\n    newnode->next = next;\n    if (prev) {\n        prev->next = newnode;\n    } else {\n        o->head = newnode;\n    }\n    o->len++;\n}\n\nvoid HelloTypeReleaseObject(struct HelloTypeObject *o) {\n    struct HelloTypeNode *cur, *next;\n    cur = o->head;\n    while(cur) {\n        next = cur->next;\n        RedisModule_Free(cur);\n        cur = next;\n    }\n    RedisModule_Free(o);\n}\n\n/* ========================= \"hellotype\" type commands ======================= */\n\n/* HELLOTYPE.INSERT key value */\nint HelloTypeInsert_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != HelloType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    long long value;\n    if ((RedisModule_StringToLongLong(argv[2],&value) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid value: must be a signed 64 bit integer\");\n    }\n\n    /* Create an empty value object if the key is currently empty. 
*/\n    struct HelloTypeObject *hto;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        hto = createHelloTypeObject();\n        RedisModule_ModuleTypeSetValue(key,HelloType,hto);\n    } else {\n        hto = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    /* Insert the new element. */\n    HelloTypeInsert(hto,value);\n    RedisModule_SignalKeyAsReady(ctx,argv[1]);\n\n    RedisModule_ReplyWithLongLong(ctx,hto->len);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* HELLOTYPE.RANGE key first count */\nint HelloTypeRange_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != HelloType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    long long first, count;\n    if (RedisModule_StringToLongLong(argv[2],&first) != REDISMODULE_OK ||\n        RedisModule_StringToLongLong(argv[3],&count) != REDISMODULE_OK ||\n        first < 0 || count < 0)\n    {\n        return RedisModule_ReplyWithError(ctx,\n            \"ERR invalid first or count parameters\");\n    }\n\n    struct HelloTypeObject *hto = RedisModule_ModuleTypeGetValue(key);\n    struct HelloTypeNode *node = hto ? 
hto->head : NULL;\n    /* Skip the leading 'first' elements before emitting the range. */\n    while(node && first--) node = node->next;\n    RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n    long long arraylen = 0;\n    while(node && count--) {\n        RedisModule_ReplyWithLongLong(ctx,node->value);\n        arraylen++;\n        node = node->next;\n    }\n    RedisModule_ReplySetArrayLength(ctx,arraylen);\n    return REDISMODULE_OK;\n}\n\n/* HELLOTYPE.LEN key */\nint HelloTypeLen_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != HelloType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    struct HelloTypeObject *hto = RedisModule_ModuleTypeGetValue(key);\n    RedisModule_ReplyWithLongLong(ctx,hto ? hto->len : 0);\n    return REDISMODULE_OK;\n}\n\n/* ====================== Example of a blocking command ==================== */\n\n/* Reply callback for blocking command HELLOTYPE.BRANGE. This will get\n * called when the key we blocked for is ready: we need to check if we\n * can really serve the client, and reply OK or ERR accordingly. 
*/\nint HelloBlock_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleString *keyname = RedisModule_GetBlockedClientReadyKey(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,keyname,REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_MODULE ||\n        RedisModule_ModuleTypeGetType(key) != HelloType)\n    {\n        RedisModule_CloseKey(key);\n        return REDISMODULE_ERR;\n    }\n\n    /* In case the key is able to serve our blocked client, let's directly\n     * use our original command implementation to make this example simpler. */\n    RedisModule_CloseKey(key);\n    return HelloTypeRange_RedisCommand(ctx,argv,argc-1);\n}\n\n/* Timeout callback for blocking command HELLOTYPE.BRANGE */\nint HelloBlock_Timeout(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx,\"Request timed out\");\n}\n\n/* Private data freeing callback for HELLOTYPE.BRANGE command. */\nvoid HelloBlock_FreeData(RedisModuleCtx *ctx, void *privdata) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModule_Free(privdata);\n}\n\n/* HELLOTYPE.BRANGE key first count timeout -- This is a blocking version of\n * the RANGE operation, in order to show how to use the API\n * RedisModule_BlockClientOnKeys(). */\nint HelloTypeBRange_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 5) return RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != HelloType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Parse the timeout before even trying to serve the client synchronously,\n     * so that we always fail ASAP on syntax errors. */\n    long long timeout;\n    if (RedisModule_StringToLongLong(argv[4],&timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\n            \"ERR invalid timeout parameter\");\n    }\n\n    /* Can we serve the reply synchronously? */\n    if (type != REDISMODULE_KEYTYPE_EMPTY) {\n        return HelloTypeRange_RedisCommand(ctx,argv,argc-1);\n    }\n\n    /* Otherwise let's block on the key. */\n    void *privdata = RedisModule_Alloc(100);\n    RedisModule_BlockClientOnKeys(ctx,HelloBlock_Reply,HelloBlock_Timeout,HelloBlock_FreeData,timeout,argv+1,1,privdata);\n    return REDISMODULE_OK;\n}\n\n/* ========================== \"hellotype\" type methods ======================= */\n\nvoid *HelloTypeRdbLoad(RedisModuleIO *rdb, int encver) {\n    if (encver != 0) {\n        /* RedisModule_Log(\"warning\",\"Can't load data with version %d\", encver);*/\n        return NULL;\n    }\n    uint64_t elements = RedisModule_LoadUnsigned(rdb);\n    struct HelloTypeObject *hto = createHelloTypeObject();\n    while(elements--) {\n        int64_t ele = RedisModule_LoadSigned(rdb);\n        HelloTypeInsert(hto,ele);\n    }\n    return hto;\n}\n\nvoid HelloTypeRdbSave(RedisModuleIO *rdb, void *value) {\n    struct HelloTypeObject *hto = value;\n    struct HelloTypeNode *node = hto->head;\n    RedisModule_SaveUnsigned(rdb,hto->len);\n    while(node) {\n        RedisModule_SaveSigned(rdb,node->value);\n        node = node->next;\n    }\n}\n\nvoid HelloTypeAofRewrite(RedisModuleIO *aof, 
RedisModuleString *key, void *value) {\n    struct HelloTypeObject *hto = value;\n    struct HelloTypeNode *node = hto->head;\n    while(node) {\n        RedisModule_EmitAOF(aof,\"HELLOTYPE.INSERT\",\"sl\",key,node->value);\n        node = node->next;\n    }\n}\n\n/* The goal of this function is to return the amount of memory used by\n * the HelloType value. */\nsize_t HelloTypeMemUsage(const void *value) {\n    const struct HelloTypeObject *hto = value;\n    struct HelloTypeNode *node = hto->head;\n    return sizeof(*hto) + sizeof(*node)*hto->len;\n}\n\nvoid HelloTypeFree(void *value) {\n    HelloTypeReleaseObject(value);\n}\n\nvoid HelloTypeDigest(RedisModuleDigest *md, void *value) {\n    struct HelloTypeObject *hto = value;\n    struct HelloTypeNode *node = hto->head;\n    while(node) {\n        RedisModule_DigestAddLongLong(md,node->value);\n        node = node->next;\n    }\n    RedisModule_DigestEndSequence(md);\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"hellotype\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = HelloTypeRdbLoad,\n        .rdb_save = HelloTypeRdbSave,\n        .aof_rewrite = HelloTypeAofRewrite,\n        .mem_usage = HelloTypeMemUsage,\n        .free = HelloTypeFree,\n        .digest = HelloTypeDigest\n    };\n\n    HelloType = RedisModule_CreateDataType(ctx,\"hellotype\",0,&tm);\n    if (HelloType == NULL) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellotype.insert\",\n        HelloTypeInsert_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellotype.range\",\n        HelloTypeRange_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellotype.len\",\n        HelloTypeLen_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hellotype.brange\",\n        HelloTypeBRange_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/modules/helloworld.c",
    "content": "/* Helloworld module -- A few examples of the Redis Modules API in the form\n * of commands showing how to accomplish common tasks.\n *\n * This module does not do anything useful apart from a few commands. The\n * examples are designed to show the API.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"../redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n\n/* HELLO.SIMPLE is among the simplest commands you can implement.\n * It just returns the currently selected DB id, a functionality which is\n * missing in Redis. The command uses two important API calls: one to\n * fetch the currently selected DB, the other to send the client\n * an integer reply as response. */\nint HelloSimple_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithLongLong(ctx,RedisModule_GetSelectedDb(ctx));\n    return REDISMODULE_OK;\n}\n\n/* HELLO.PUSH.NATIVE re-implements RPUSH, and shows the low level modules API\n * where you can \"open\" keys, perform low level operations, create new keys by\n * pushing elements into non-existing keys, and so forth.\n *\n * You'll find this command to be roughly as fast as the actual RPUSH\n * command. 
*/\nint HelloPushNative_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n\n    RedisModule_ListPush(key,REDISMODULE_LIST_TAIL,argv[2]);\n    size_t newlen = RedisModule_ValueLength(key);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithLongLong(ctx,newlen);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.PUSH.CALL implements RPUSH using a higher level approach, calling\n * a Redis command instead of working with the key in a low level way. This\n * approach is useful when you need to call Redis commands that are not\n * available as low level APIs, or when you don't need the maximum speed\n * possible but instead prefer implementation simplicity. */\nint HelloPushCall_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"RPUSH\",\"ss\",argv[1],argv[2]);\n    long long len = RedisModule_CallReplyInteger(reply);\n    RedisModule_FreeCallReply(reply);\n    RedisModule_ReplyWithLongLong(ctx,len);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.PUSH.CALL2\n * This is exactly like HELLO.PUSH.CALL, but shows how we can reply to the\n * client directly using a reply object that Call() returned. 
*/\nint HelloPushCall2_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"RPUSH\",\"ss\",argv[1],argv[2]);\n    RedisModule_ReplyWithCallReply(ctx,reply);\n    RedisModule_FreeCallReply(reply);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.LIST.SUM.LEN returns the total length of all the items inside\n * a Redis list, by using the high level Call() API.\n * This command is an example of array reply access. */\nint HelloListSumLen_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"LRANGE\",\"sll\",argv[1],(long long)0,(long long)-1);\n    size_t strlen = 0;\n    size_t items = RedisModule_CallReplyLength(reply);\n    size_t j;\n    for (j = 0; j < items; j++) {\n        RedisModuleCallReply *ele = RedisModule_CallReplyArrayElement(reply,j);\n        strlen += RedisModule_CallReplyLength(ele);\n    }\n    RedisModule_FreeCallReply(reply);\n    RedisModule_ReplyWithLongLong(ctx,strlen);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.LIST.SPLICE srclist dstlist count\n * Moves 'count' elements from the tail of 'srclist' to the head of\n * 'dstlist'. If fewer than count elements are available, it moves as many\n * elements as possible. */\nint HelloListSplice_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *srckey = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    RedisModuleKey *dstkey = RedisModule_OpenKey(ctx,argv[2],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n\n    /* Src and dst key must be empty or lists. 
*/\n    if ((RedisModule_KeyType(srckey) != REDISMODULE_KEYTYPE_LIST &&\n         RedisModule_KeyType(srckey) != REDISMODULE_KEYTYPE_EMPTY) ||\n        (RedisModule_KeyType(dstkey) != REDISMODULE_KEYTYPE_LIST &&\n         RedisModule_KeyType(dstkey) != REDISMODULE_KEYTYPE_EMPTY))\n    {\n        RedisModule_CloseKey(srckey);\n        RedisModule_CloseKey(dstkey);\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    long long count;\n    if ((RedisModule_StringToLongLong(argv[3],&count) != REDISMODULE_OK) ||\n        (count < 0)) {\n        RedisModule_CloseKey(srckey);\n        RedisModule_CloseKey(dstkey);\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    while(count-- > 0) {\n        RedisModuleString *ele;\n\n        ele = RedisModule_ListPop(srckey,REDISMODULE_LIST_TAIL);\n        if (ele == NULL) break;\n        RedisModule_ListPush(dstkey,REDISMODULE_LIST_HEAD,ele);\n        RedisModule_FreeString(ctx,ele);\n    }\n\n    size_t len = RedisModule_ValueLength(srckey);\n    RedisModule_CloseKey(srckey);\n    RedisModule_CloseKey(dstkey);\n    RedisModule_ReplyWithLongLong(ctx,len);\n    return REDISMODULE_OK;\n}\n\n/* Like the HELLO.LIST.SPLICE above, but uses automatic memory management\n * in order to avoid freeing stuff. */\nint HelloListSpliceAuto_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    RedisModule_AutoMemory(ctx);\n\n    RedisModuleKey *srckey = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    RedisModuleKey *dstkey = RedisModule_OpenKey(ctx,argv[2],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n\n    /* Src and dst key must be empty or lists. 
*/\n    if ((RedisModule_KeyType(srckey) != REDISMODULE_KEYTYPE_LIST &&\n         RedisModule_KeyType(srckey) != REDISMODULE_KEYTYPE_EMPTY) ||\n        (RedisModule_KeyType(dstkey) != REDISMODULE_KEYTYPE_LIST &&\n         RedisModule_KeyType(dstkey) != REDISMODULE_KEYTYPE_EMPTY))\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    long long count;\n    if ((RedisModule_StringToLongLong(argv[3],&count) != REDISMODULE_OK) ||\n        (count < 0))\n    {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    while(count-- > 0) {\n        RedisModuleString *ele;\n\n        ele = RedisModule_ListPop(srckey,REDISMODULE_LIST_TAIL);\n        if (ele == NULL) break;\n        RedisModule_ListPush(dstkey,REDISMODULE_LIST_HEAD,ele);\n    }\n\n    size_t len = RedisModule_ValueLength(srckey);\n    RedisModule_ReplyWithLongLong(ctx,len);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.RAND.ARRAY <count>\n * Shows how to generate arrays as commands replies.\n * It just outputs <count> random numbers. */\nint HelloRandArray_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    long long count;\n    if (RedisModule_StringToLongLong(argv[1],&count) != REDISMODULE_OK ||\n        count < 0)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n\n    /* To reply with an array, we call RedisModule_ReplyWithArray() followed\n     * by other \"count\" calls to other reply functions in order to generate\n     * the elements of the array. */\n    RedisModule_ReplyWithArray(ctx,count);\n    while(count--) RedisModule_ReplyWithLongLong(ctx,rand());\n    return REDISMODULE_OK;\n}\n\n/* This is a simple command to test replication. 
Because of the \"!\" modifier\n * in the RedisModule_Call() call, the two INCRs get replicated.\n * Also note how the ECHO is replicated in an unexpected position (check the\n * comments in the function implementation). */\nint HelloRepl1_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_AutoMemory(ctx);\n\n    /* This will be replicated *after* the two INCR statements, since\n     * the Call() replication has precedence, so the actual replication\n     * stream will be:\n     *\n     * MULTI\n     * INCR foo\n     * INCR bar\n     * ECHO foo\n     * EXEC\n     */\n    RedisModule_Replicate(ctx,\"ECHO\",\"c\",\"foo\");\n\n    /* Using the \"!\" modifier we replicate the command if it\n     * modified the dataset in some way. */\n    RedisModule_Call(ctx,\"INCR\",\"c!\",\"foo\");\n    RedisModule_Call(ctx,\"INCR\",\"c!\",\"bar\");\n\n    RedisModule_ReplyWithLongLong(ctx,0);\n\n    return REDISMODULE_OK;\n}\n\n/* Another command to show replication. In this case, we call\n * RedisModule_ReplicateVerbatim() to mean we want just the command to be\n * propagated to slaves / AOF exactly as it was called by the user.\n *\n * This command also shows how to work with string objects.\n * It takes a list, and increments all the elements (that must have\n * a numerical value) by 1, returning the sum of all the elements\n * as reply.\n *\n * Usage: HELLO.REPL2 <list-key> */\nint HelloRepl2_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_LIST)\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n\n    size_t listlen = RedisModule_ValueLength(key);\n    long long sum = 0;\n\n    /* Rotate and increment. */\n    while(listlen--) {\n        RedisModuleString *ele = RedisModule_ListPop(key,REDISMODULE_LIST_TAIL);\n        long long val;\n        if (RedisModule_StringToLongLong(ele,&val) != REDISMODULE_OK) val = 0;\n        val++;\n        sum += val;\n        RedisModuleString *newele = RedisModule_CreateStringFromLongLong(ctx,val);\n        RedisModule_ListPush(key,REDISMODULE_LIST_HEAD,newele);\n    }\n    RedisModule_ReplyWithLongLong(ctx,sum);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* This is an example of strings DMA access. Given a key containing a string\n * it toggles the case of each character from lower to upper case or the\n * other way around.\n *\n * No automatic memory management is used in this example (for the sake\n * of variety).\n *\n * HELLO.TOGGLE.CASE key */\nint HelloToggleCase_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n\n    int keytype = RedisModule_KeyType(key);\n    if (keytype != REDISMODULE_KEYTYPE_STRING &&\n        keytype != REDISMODULE_KEYTYPE_EMPTY)\n    {\n        RedisModule_CloseKey(key);\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    if (keytype == REDISMODULE_KEYTYPE_STRING) {\n        size_t len, j;\n        char *s = RedisModule_StringDMA(key,&len,REDISMODULE_WRITE);\n        for (j = 0; j < len; j++) {\n            if (isupper(s[j])) {\n                s[j] = tolower(s[j]);\n            } else 
{\n                s[j] = toupper(s[j]);\n            }\n        }\n    }\n\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.MORE.EXPIRE key milliseconds.\n *\n * If the key already has an associated TTL, extends it by \"milliseconds\"\n * milliseconds. Otherwise no operation is performed. */\nint HelloMoreExpire_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    mstime_t addms, expire;\n\n    if (RedisModule_StringToLongLong(argv[2],&addms) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid expire time\");\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    expire = RedisModule_GetExpire(key);\n    if (expire != REDISMODULE_NO_EXPIRE) {\n        expire += addms;\n        RedisModule_SetExpire(key,expire);\n    }\n    return RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n}\n\n/* HELLO.ZSUMRANGE key startscore endscore\n * Return the sum of the scores of all the elements between startscore and endscore.\n *\n * The computation is performed two times, once from start to end and\n * once backward. 
The two scores, returned as a two element array,\n * should match.*/\nint HelloZsumRange_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    double score_start, score_end;\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    if (RedisModule_StringToDouble(argv[2],&score_start) != REDISMODULE_OK ||\n        RedisModule_StringToDouble(argv[3],&score_end) != REDISMODULE_OK)\n    {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid range\");\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_ZSET) {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    double scoresum_a = 0;\n    double scoresum_b = 0;\n\n    RedisModule_ZsetFirstInScoreRange(key,score_start,score_end,0,0);\n    while(!RedisModule_ZsetRangeEndReached(key)) {\n        double score;\n        RedisModuleString *ele = RedisModule_ZsetRangeCurrentElement(key,&score);\n        RedisModule_FreeString(ctx,ele);\n        scoresum_a += score;\n        RedisModule_ZsetRangeNext(key);\n    }\n    RedisModule_ZsetRangeStop(key);\n\n    RedisModule_ZsetLastInScoreRange(key,score_start,score_end,0,0);\n    while(!RedisModule_ZsetRangeEndReached(key)) {\n        double score;\n        RedisModuleString *ele = RedisModule_ZsetRangeCurrentElement(key,&score);\n        RedisModule_FreeString(ctx,ele);\n        scoresum_b += score;\n        RedisModule_ZsetRangePrev(key);\n    }\n\n    RedisModule_ZsetRangeStop(key);\n\n    RedisModule_CloseKey(key);\n\n    RedisModule_ReplyWithArray(ctx,2);\n    RedisModule_ReplyWithDouble(ctx,scoresum_a);\n    RedisModule_ReplyWithDouble(ctx,scoresum_b);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.LEXRANGE key min_lex max_lex min_age max_age\n * This command expects a sorted set stored at key in the following form:\n * - All the elements have score 0.\n * - Elements are pairs of 
\"<name>:<age>\", for example \"Anna:52\".\n * The command will return all the sorted set items that are lexicographically\n * between the specified range (using the same format as ZRANGEBYLEX)\n * and having an age between min_age and max_age. */\nint HelloLexRange_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc != 6) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_ZSET) {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    if (RedisModule_ZsetFirstInLexRange(key,argv[2],argv[3]) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"invalid range\");\n    }\n\n    int arraylen = 0;\n    RedisModule_ReplyWithArray(ctx,REDISMODULE_POSTPONED_LEN);\n    while(!RedisModule_ZsetRangeEndReached(key)) {\n        double score;\n        RedisModuleString *ele = RedisModule_ZsetRangeCurrentElement(key,&score);\n        RedisModule_ReplyWithString(ctx,ele);\n        RedisModule_FreeString(ctx,ele);\n        RedisModule_ZsetRangeNext(key);\n        arraylen++;\n    }\n    RedisModule_ZsetRangeStop(key);\n    RedisModule_ReplySetArrayLength(ctx,arraylen);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.HCOPY key srcfield dstfield\n * This is just an example command that sets the hash field dstfield to the\n * same value of srcfield. If srcfield does not exist no operation is\n * performed.\n *\n * The command returns 1 if the copy is performed (srcfield exists) otherwise\n * 0 is returned. */\nint HelloHCopy_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_HASH &&\n        type != REDISMODULE_KEYTYPE_EMPTY)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* Get the old field value. */\n    RedisModuleString *oldval;\n    RedisModule_HashGet(key,REDISMODULE_HASH_NONE,argv[2],&oldval,NULL);\n    if (oldval) {\n        RedisModule_HashSet(key,REDISMODULE_HASH_NONE,argv[3],oldval,NULL);\n    }\n    RedisModule_ReplyWithLongLong(ctx,oldval != NULL);\n    return REDISMODULE_OK;\n}\n\n/* HELLO.LEFTPAD str len ch\n * This is an implementation of the infamous LEFTPAD function, that\n * was at the center of an issue with the npm modules system in March 2016.\n *\n * LEFTPAD is a good example of using a Redis Modules API called\n * \"pool allocator\", that was a famous way to allocate memory in yet another\n * open source project, the Apache web server.\n *\n * The concept is very simple: there is memory that is useful to allocate\n * only in the context of serving a request, and must be freed anyway when\n * the callback implementing the command returns. So in that case the module\n * does not need to retain a reference to these allocations, it is just\n * required to free the memory before returning. When this is the case the\n * module can call RedisModule_PoolAlloc() instead, that works like malloc()\n * but will automatically free the memory when the module callback returns.\n *\n * Note that PoolAlloc() does not necessarily require AutoMemory to be\n * active. */\nint HelloLeftPad_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n    long long padlen;\n\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n\n    if ((RedisModule_StringToLongLong(argv[2],&padlen) != REDISMODULE_OK) ||\n        (padlen < 0)) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid padding length\");\n    }\n    size_t strlen, chlen;\n    const char *str = RedisModule_StringPtrLen(argv[1], &strlen);\n    const char *ch = RedisModule_StringPtrLen(argv[3], &chlen);\n\n    /* If the string is already larger than the target len, just return\n     * the string itself. */\n    if (strlen >= (size_t)padlen)\n        return RedisModule_ReplyWithString(ctx,argv[1]);\n\n    /* Padding must be a single character in this simple implementation. */\n    if (chlen != 1)\n        return RedisModule_ReplyWithError(ctx,\n            \"ERR padding must be a single char\");\n\n    /* Here we use our pool allocator, for our throw-away allocation. */\n    padlen -= strlen;\n    char *buf = RedisModule_PoolAlloc(ctx,padlen+strlen);\n    for (long long j = 0; j < padlen; j++) buf[j] = *ch;\n    memcpy(buf+padlen,str,strlen);\n\n    RedisModule_ReplyWithStringBuffer(ctx,buf,padlen+strlen);\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (RedisModule_Init(ctx,\"helloworld\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Log the list of parameters passed when loading the module. 
*/\n    for (int j = 0; j < argc; j++) {\n        const char *s = RedisModule_StringPtrLen(argv[j],NULL);\n        printf(\"Module loaded with ARGV[%d] = %s\\n\", j, s);\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"hello.simple\",\n        HelloSimple_RedisCommand,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.push.native\",\n        HelloPushNative_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.push.call\",\n        HelloPushCall_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.push.call2\",\n        HelloPushCall2_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.list.sum.len\",\n        HelloListSumLen_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.list.splice\",\n        HelloListSplice_RedisCommand,\"write deny-oom\",1,2,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.list.splice.auto\",\n        HelloListSpliceAuto_RedisCommand,\n        \"write deny-oom\",1,2,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.rand.array\",\n        HelloRandArray_RedisCommand,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.repl1\",\n        HelloRepl1_RedisCommand,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.repl2\",\n        HelloRepl2_RedisCommand,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.toggle.case\",\n   
     HelloToggleCase_RedisCommand,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.more.expire\",\n        HelloMoreExpire_RedisCommand,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.zsumrange\",\n        HelloZsumRange_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.lexrange\",\n        HelloLexRange_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.hcopy\",\n        HelloHCopy_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"hello.leftpad\",\n        HelloLeftPad_RedisCommand,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "src/monotonic.c",
    "content": "#include \"monotonic.h\"\n#include <stddef.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <time.h>\n#include \"redisassert.h\"\n#include <string.h>\n\n/* The function pointer for clock retrieval.  */\nmonotime (*getMonotonicUs)(void) = NULL;\n\nstatic char monotonic_info_string[32];\n\n\n/* Using the processor clock (aka TSC on x86) can provide improved performance\n * throughout Redis wherever the monotonic clock is used.  The processor clock\n * is significantly faster than calling 'clock_gettime' (POSIX).  While this is\n * generally safe on modern systems, this link provides additional information\n * about use of the x86 TSC: http://oliveryang.net/2015/09/pitfalls-of-TSC-usage\n *\n * On ARM aarch64 systems, the hardware clock is enabled by default because the\n * ARM Generic Timer is architecturally guaranteed to be available and monotonic\n * on all ARMv8-A processors (see the “The Generic Timer in AArch64 state”\n * section of the Arm Architecture Reference Manual for Armv8-A).\n *\n * To use the processor clock on other architectures, either uncomment this line,\n * or build with\n *   CFLAGS=\"-DUSE_PROCESSOR_CLOCK\"\n#define USE_PROCESSOR_CLOCK\n */\n\n\n#if defined(USE_PROCESSOR_CLOCK) && defined(__x86_64__) && defined(__linux__)\n#include <regex.h>\n#include <x86intrin.h>\n\nstatic long mono_ticksPerMicrosecond = 0;\n\nstatic monotime getMonotonicUs_x86(void) {\n    return __rdtsc() / mono_ticksPerMicrosecond;\n}\n\nstatic void monotonicInit_x86linux(void) {\n    const int bufflen = 256;\n    char buf[bufflen];\n    regex_t cpuGhzRegex, constTscRegex;\n    const size_t nmatch = 2;\n    regmatch_t pmatch[nmatch];\n    int constantTsc = 0;\n    int rc;\n\n    /* Determine the number of TSC ticks in a micro-second.  This is\n     * a constant value matching the standard speed of the processor.\n     * On modern processors, this speed remains constant even though\n     * the actual clock speed varies dynamically for each core.  
*/\n    rc = regcomp(&cpuGhzRegex, \"^model name\\\\s+:.*@ ([0-9.]+)GHz\", REG_EXTENDED);\n    assert(rc == 0);\n\n    /* Also check that the constant_tsc flag is present.  (It should be,\n     * unless this is a really old CPU.)  */\n    rc = regcomp(&constTscRegex, \"^flags\\\\s+:.* constant_tsc\", REG_EXTENDED);\n    assert(rc == 0);\n\n    FILE *cpuinfo = fopen(\"/proc/cpuinfo\", \"r\");\n    if (cpuinfo != NULL) {\n        while (fgets(buf, bufflen, cpuinfo) != NULL) {\n            if (regexec(&cpuGhzRegex, buf, nmatch, pmatch, 0) == 0) {\n                buf[pmatch[1].rm_eo] = '\\0';\n                double ghz = atof(&buf[pmatch[1].rm_so]);\n                mono_ticksPerMicrosecond = (long)(ghz * 1000);\n                break;\n            }\n        }\n        while (fgets(buf, bufflen, cpuinfo) != NULL) {\n            if (regexec(&constTscRegex, buf, nmatch, pmatch, 0) == 0) {\n                constantTsc = 1;\n                break;\n            }\n        }\n\n        fclose(cpuinfo);\n    }\n    regfree(&cpuGhzRegex);\n    regfree(&constTscRegex);\n\n    if (mono_ticksPerMicrosecond == 0) {\n        fprintf(stderr, \"monotonic: x86 linux, unable to determine clock rate\\n\");\n        return;\n    }\n    if (!constantTsc) {\n        fprintf(stderr, \"monotonic: x86 linux, 'constant_tsc' flag not present\\n\");\n        return;\n    }\n\n    snprintf(monotonic_info_string, sizeof(monotonic_info_string),\n            \"X86 TSC @ %ld ticks/us\", mono_ticksPerMicrosecond);\n    getMonotonicUs = getMonotonicUs_x86;\n}\n#endif\n\n#if defined(__aarch64__)\nstatic long mono_ticksPerMicrosecond = 0;\n\n/* Read the clock value.\n * CNTVCT_EL0 is a system counter register, that provides the monotonic\n * timestamp as a 64-bit count value. 
*/\nstatic inline uint64_t __cntvct(void) {\n    uint64_t virtual_timer_value;\n    __asm__ volatile(\"mrs %0, cntvct_el0\" : \"=r\"(virtual_timer_value));\n    return virtual_timer_value;\n}\n\n/* Read the Count-timer Frequency.\n * CNTFRQ_EL0 is a system counter register that provides the frequency (in Hz)\n * needed to convert ticks to microseconds. Together with CNTVCT_EL0, this enables\n * high-performance monotonic time measurement without system calls. */\nstatic inline uint32_t cntfrq_hz(void) {\n    uint64_t virtual_freq_value;\n    __asm__ volatile(\"mrs %0, cntfrq_el0\" : \"=r\"(virtual_freq_value));\n    return (uint32_t)virtual_freq_value;    /* top 32 bits are reserved */\n}\n\nstatic monotime getMonotonicUs_aarch64(void) {\n    return __cntvct() / mono_ticksPerMicrosecond;\n}\n\nstatic void monotonicInit_aarch64(void) {\n    mono_ticksPerMicrosecond = (long)cntfrq_hz() / 1000L / 1000L;\n    if (mono_ticksPerMicrosecond == 0) {\n        fprintf(stderr, \"monotonic: aarch64, unable to determine clock rate\\n\");\n        return;\n    }\n\n    snprintf(monotonic_info_string, sizeof(monotonic_info_string),\n            \"ARM CNTVCT @ %ld ticks/us\", mono_ticksPerMicrosecond);\n    getMonotonicUs = getMonotonicUs_aarch64;\n}\n#endif\n\n\n#if defined(USE_PROCESSOR_CLOCK) && defined(__riscv) && defined(__linux__)\nstatic long mono_ticksPerMicrosecond = 0;\n\nstatic inline uint64_t read_mtime(void) {\n    uint64_t val;\n    asm volatile(\"csrr %0, time\" : \"=r\"(val));\n    return val;\n}\n\n/* Read RISC-V timebase-frequency, which may be stored as either a 64-bit\n * or 32-bit big-endian integer in the device tree.  
*/\nstatic uint64_t get_timebase_frequency(void) {\n    uint64_t freq = 0;\n    FILE *fp = fopen(\"/proc/device-tree/cpus/timebase-frequency\", \"rb\");\n    if (!fp)\n        return 0;\n\n    uint8_t buf[8] = {0};\n    size_t cnt = fread(buf, 1, sizeof(buf), fp);\n    fclose(fp);\n\n    if (cnt == 8) {\n        uint64_t be64 = 0;\n        memcpy(&be64, buf, sizeof(be64));\n        /* Convert be64 from big-endian to little-endian.  */\n        freq = __builtin_bswap64(be64);\n    } else if (cnt == 4) {\n        uint32_t be32 = 0;\n        memcpy(&be32, buf, sizeof(be32));\n        /* Convert be32 from big-endian to little-endian.  */\n        freq = __builtin_bswap32(be32);\n    } else {\n        /* Unable to read timebase-frequency.  */\n        return 0;\n    }\n\n    return freq;\n}\n\nstatic monotime getMonotonicUs_riscv(void) {\n    return read_mtime() / mono_ticksPerMicrosecond;\n}\n\nstatic void monotonicInit_riscv(void) {\n    mono_ticksPerMicrosecond = (long)get_timebase_frequency() / 1000L / 1000L;\n    if (mono_ticksPerMicrosecond == 0) {\n        fprintf(stderr, \"monotonic: riscv, unable to determine clock rate\\n\");\n        return;\n    }\n    snprintf(monotonic_info_string, sizeof(monotonic_info_string),\n            \"RISC-V mtime @ %ld ticks/us\", mono_ticksPerMicrosecond);\n    getMonotonicUs = getMonotonicUs_riscv;\n}\n#endif\n\nstatic monotime getMonotonicUs_posix(void) {\n    /* clock_gettime() is specified in POSIX.1b (1993).  Even so, some systems\n     * did not support this until much later.  CLOCK_MONOTONIC is technically\n     * optional and may not be supported - but it appears to be universal.\n     * If this is not supported, provide a system-specific alternate version.  */\n    struct timespec ts;\n    clock_gettime(CLOCK_MONOTONIC, &ts);\n    return ((uint64_t)ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;\n}\n\nstatic void monotonicInit_posix(void) {\n    /* Ensure that CLOCK_MONOTONIC is supported.  
This should be supported\n     * on any reasonably current OS.  If the assertion below fails, provide\n     * an appropriate alternate implementation.  */\n    struct timespec ts;\n    int rc = clock_gettime(CLOCK_MONOTONIC, &ts);\n    assert(rc == 0);\n\n    snprintf(monotonic_info_string, sizeof(monotonic_info_string),\n            \"POSIX clock_gettime\");\n    getMonotonicUs = getMonotonicUs_posix;\n}\n\n\n\nconst char * monotonicInit(void) {\n    #if defined(USE_PROCESSOR_CLOCK) && defined(__x86_64__) && defined(__linux__)\n    if (getMonotonicUs == NULL) monotonicInit_x86linux();\n    #endif\n\n    #if defined(__aarch64__)\n    if (getMonotonicUs == NULL) monotonicInit_aarch64();\n    #endif\n\n    #if defined(USE_PROCESSOR_CLOCK) && defined(__riscv) && defined(__linux__)\n    if (getMonotonicUs == NULL) monotonicInit_riscv();\n    #endif\n\n    if (getMonotonicUs == NULL) monotonicInit_posix();\n\n    return monotonic_info_string;\n}\n\nconst char *monotonicInfoString(void) {\n    return monotonic_info_string;\n}\n\nmonotonic_clock_type monotonicGetType(void) {\n    if (getMonotonicUs == getMonotonicUs_posix)\n        return MONOTONIC_CLOCK_POSIX;\n    return MONOTONIC_CLOCK_HW;\n}\n"
  },
  {
    "path": "src/monotonic.h",
    "content": "#ifndef __MONOTONIC_H\n#define __MONOTONIC_H\n/* The monotonic clock is an always increasing clock source.  It is unrelated to\n * the actual time of day and should only be used for relative timings.  The\n * monotonic clock is also not guaranteed to be chronologically precise; there\n * may be slight skew/shift from a precise clock.\n *\n * Depending on system architecture, the monotonic time may be able to be\n * retrieved much faster than a normal clock source by using an instruction\n * counter on the CPU.  On x86 architectures (for example), the RDTSC\n * instruction is a very fast clock source for this purpose.\n */\n\n#include \"fmacros.h\"\n#include <stdint.h>\n#include <unistd.h>\n\n/* A counter in micro-seconds.  The 'monotime' type is provided for variables\n * holding a monotonic time.  This will help distinguish & document that the\n * variable is associated with the monotonic clock and should not be confused\n * with other types of time.*/\ntypedef uint64_t monotime;\n\n/* Retrieve counter of micro-seconds relative to an arbitrary point in time.  */\nextern monotime (*getMonotonicUs)(void);\n\ntypedef enum monotonic_clock_type {\n    MONOTONIC_CLOCK_POSIX,\n    MONOTONIC_CLOCK_HW,\n} monotonic_clock_type;\n\n/* Call once at startup to initialize the monotonic clock.  Though this only\n * needs to be called once, it may be called additional times without impact.\n * Returns a printable string indicating the type of clock initialized.\n * (The returned string is static and doesn't need to be freed.)  */\nconst char *monotonicInit(void);\n\n/* Return a string indicating the type of monotonic clock being used. */\nconst char *monotonicInfoString(void);\n\n/* Return the type of monotonic clock being used. */\nmonotonic_clock_type monotonicGetType(void);\n\n/* Functions to measure elapsed time.  
Example:\n *     monotime myTimer;\n *     elapsedStart(&myTimer);\n *     while (elapsedMs(myTimer) < 10) {} // loops for 10ms\n */\nstatic inline void elapsedStart(monotime *start_time) {\n    *start_time = getMonotonicUs();\n}\n\nstatic inline uint64_t elapsedUs(monotime start_time) {\n    return getMonotonicUs() - start_time;\n}\n\nstatic inline uint64_t elapsedMs(monotime start_time) {\n    return elapsedUs(start_time) / 1000;\n}\n\n#endif\n"
  },
  {
    "path": "src/mstr.c",
    "content": "/*\n * Copyright Redis Ltd. 2024 - present\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <string.h>\n#include <assert.h>\n#include \"sdsalloc.h\"\n#include \"mstr.h\"\n#include \"stdio.h\"\n\n#define NULL_SIZE 1\n\nstatic inline char mstrReqType(size_t string_size);\nstatic inline int mstrHdrSize(char type);\nstatic inline int mstrSumMetaLen(mstrKind *k, mstrFlags flags);\nstatic inline size_t mstrAllocLen(const mstr s, struct mstrKind *kind);\n\n/*** mstr API ***/\n\n/* Create mstr without any metadata attached, based on string 'initStr'.\n * - If initStr equals NULL, then only allocation will be made.\n * - string of mstr is always null-terminated.\n */\nmstr mstrNew(const char *initStr, size_t lenStr, int trymalloc, size_t *usable) {\n    unsigned char *pInfo; /* pointer to mstr info field */\n    void *sh;\n    mstr s;\n    char type = mstrReqType(lenStr);\n    int mstrHdr = mstrHdrSize(type);\n\n    assert(lenStr + mstrHdr + 1 > lenStr); /* Catch size_t overflow */\n\n    size_t len = mstrHdr + lenStr + NULL_SIZE;\n    sh = trymalloc? 
s_trymalloc_usable(len, usable) : s_malloc_usable(len, usable);\n\n    if (sh == NULL) return NULL;\n\n    s = (char*)sh + mstrHdr;\n    pInfo = ((unsigned char*)s) - 1;\n\n    switch(type) {\n        case MSTR_TYPE_5: {\n            *pInfo = CREATE_MSTR_INFO(lenStr, 0 /*ismeta*/, type);\n            break;\n        }\n        case MSTR_TYPE_8: {\n            MSTR_HDR_VAR(8,s);\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 0 /*ismeta*/, type);\n            sh->len = lenStr;\n            break;\n        }\n        case MSTR_TYPE_16: {\n            MSTR_HDR_VAR(16,s);\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 0 /*ismeta*/, type);\n            sh->len = lenStr;\n            break;\n        }\n        case MSTR_TYPE_64: {\n            MSTR_HDR_VAR(64,s);\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 0 /*ismeta*/, type);\n            sh->len = lenStr;\n            break;\n        }\n    }\n\n    if (initStr && lenStr)\n        memcpy(s, initStr, lenStr);\n\n    s[lenStr] = '\\0';\n    return s;\n}\n\n/* Creates mstr with given string. Reserve space for metadata.\n *\n * Note: mstrNew(s,l,u) and mstrNewWithMeta(s,l,0,u) are not the same. The first allocates\n * just string. The second allocates a string with flags (yet without any metadata\n * structures allocated).\n */\nmstr mstrNewWithMeta(struct mstrKind *kind, const char *initStr, size_t lenStr, mstrFlags metaFlags, int trymalloc, size_t *usable) {\n    unsigned char *pInfo; /* pointer to mstr info field */\n    char *allocMstr;\n    mstr mstrPtr;\n    char type = mstrReqType(lenStr);\n    int mstrHdr = mstrHdrSize(type);\n    int sumMetaLen = mstrSumMetaLen(kind, metaFlags);\n\n\n    /* mstrSumMetaLen() + sizeof(mstrFlags) + sizeof(mstrhdrX) + lenStr  */\n\n    size_t allocLen = sumMetaLen + sizeof(mstrFlags) + mstrHdr + lenStr + NULL_SIZE;\n    allocMstr = trymalloc? 
s_trymalloc_usable(allocLen, usable) : s_malloc_usable(allocLen, usable);\n\n    if (allocMstr == NULL) return NULL;\n\n    /* metadata is located at the beginning of the allocation, then meta-flags and lastly the string */\n    mstrFlags *pMetaFlags = (mstrFlags *) (allocMstr + sumMetaLen) ;\n    mstrPtr = ((char*) pMetaFlags) + sizeof(mstrFlags) + mstrHdr;\n    pInfo = ((unsigned char*)mstrPtr) - 1;\n\n    switch(type) {\n        case MSTR_TYPE_5: {\n            *pInfo = CREATE_MSTR_INFO(lenStr, 1 /*ismeta*/, type);\n            break;\n        }\n        case MSTR_TYPE_8: {\n            MSTR_HDR_VAR(8, mstrPtr);\n            sh->len = lenStr;\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 1 /*ismeta*/, type);\n            break;\n        }\n        case MSTR_TYPE_16: {\n            MSTR_HDR_VAR(16, mstrPtr);\n            sh->len = lenStr;\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 1 /*ismeta*/, type);\n            break;\n        }\n        case MSTR_TYPE_64: {\n            MSTR_HDR_VAR(64, mstrPtr);\n            sh->len = lenStr;\n            *pInfo = CREATE_MSTR_INFO(0 /*unused*/, 1 /*ismeta*/, type);\n            break;\n        }\n    }\n    *pMetaFlags = metaFlags;\n    if (initStr != NULL) memcpy(mstrPtr, initStr, lenStr);\n    mstrPtr[lenStr] = '\\0';\n\n    return mstrPtr;\n}\n\n/* Create copy of mstr. Flags can be modified. For each metadata flag, if\n * same flag is set on both, then copy its metadata. 
*/\nmstr mstrNewCopy(struct mstrKind *kind, mstr src, mstrFlags newFlags, size_t *usable) {\n    mstr dst;\n\n    /* if no flags are set, then just copy the string */\n    if (newFlags == 0) return mstrNew(src, mstrlen(src), 0, usable);\n\n    dst = mstrNewWithMeta(kind, src, mstrlen(src), newFlags, 0, usable);\n    memcpy(dst, src, mstrlen(src) + 1);\n\n    /* if metadata is attached to src, then selectively copy metadata */\n    if (mstrIsMetaAttached(src)) {\n        mstrFlags *pFlags1 = mstrFlagsRef(src),\n                *pFlags2 = mstrFlagsRef(dst);\n\n        mstrFlags flags1Shift = *pFlags1,\n                flags2Shift = *pFlags2;\n\n        unsigned char *at1 = ((unsigned char *) pFlags1),\n                *at2 = ((unsigned char *) pFlags2);\n\n        /* if the flag is set on both, then copy the metadata */\n        for (int i = 0; flags1Shift != 0; ++i) {\n            int isFlag1Set = flags1Shift & 0x1;\n            int isFlag2Set = flags2Shift & 0x1;\n\n            if (isFlag1Set) at1 -= kind->metaSize[i];\n            if (isFlag2Set) at2 -= kind->metaSize[i];\n\n            if (isFlag1Set && isFlag2Set)\n                memcpy(at2, at1, kind->metaSize[i]);\n            flags1Shift >>= 1;\n            flags2Shift >>= 1;\n        }\n    }\n    return dst;\n}\n\n/* Free mstring. Note, mstrKind is required to eval sizeof metadata and find start\n * of allocation but if mstrIsMetaAttached(s) is false, you can pass NULL as well.\n */\nvoid mstrFree(struct mstrKind *kind, mstr s, size_t *usable) {\n    size_t oldsize = 0;\n    if (s != NULL)\n        s_free_usable(mstrGetAllocPtr(kind, s), &oldsize);\n    if (usable != NULL) *usable = oldsize;\n}\n\n/* return ref to metadata flags. 
Useful to modify directly flags which doesn't\n * include metadata payload */\nmstrFlags *mstrFlagsRef(mstr s) {\n    switch(s[-1]&MSTR_TYPE_MASK) {\n        case MSTR_TYPE_5:\n            return ((mstrFlags *) (s - sizeof(struct mstrhdr5))) - 1;\n        case MSTR_TYPE_8:\n            return ((mstrFlags *) (s - sizeof(struct mstrhdr8))) - 1;\n        case MSTR_TYPE_16:\n            return ((mstrFlags *) (s - sizeof(struct mstrhdr16))) - 1;\n        default: /* MSTR_TYPE_64: */\n            return ((mstrFlags *) (s - sizeof(struct mstrhdr64))) - 1;\n    }\n}\n\n/* Return a reference to corresponding metadata of the specified metadata flag\n * index (flagIdx). If the metadata doesn't exist, it still returns a reference\n * to the starting location where it would have been written among other metadatas.\n * To verify if `flagIdx` of some metadata is attached, use `mstrGetFlag(s, flagIdx)`.\n */\nvoid *mstrMetaRef(mstr s, struct mstrKind *kind, int flagIdx) {\n    int metaOffset = 0;\n    /* start iterating from flags backward */\n    mstrFlags *pFlags = mstrFlagsRef(s);\n    mstrFlags tmp = *pFlags;\n\n    for (int i = 0 ; i <= flagIdx ; ++i) {\n        if (tmp & 0x1) metaOffset += kind->metaSize[i];\n        tmp >>= 1;\n    }\n    return ((char *)pFlags) - metaOffset;\n}\n\n/* mstr layout: [meta-data#N]...[meta-data#0][mstrFlags][mstrhdr][string][null] */\nvoid *mstrGetAllocPtr(struct mstrKind *kind, mstr str) {\n    if (!mstrIsMetaAttached(str))\n        return (char*)str - mstrHdrSize(str[-1]);\n\n    int totalMetaLen = mstrSumMetaLen(kind, *mstrFlagsRef(str));\n    return (char*)str - mstrHdrSize(str[-1]) - sizeof(mstrFlags) - totalMetaLen;\n}\n\n/* Prints in the following fashion:\n *   [0x7f8bd8816017] my_mstr: foo (strLen=3, mstrLen=11, isMeta=1, metaFlags=0x1)\n *   [0x7f8bd8816010] >> meta[0]: 0x78 0x56 0x34 0x12 (metaLen=4)\n */\nvoid mstrPrint(mstr s, struct mstrKind *kind, int verbose) {\n    mstrFlags mflags, tmp;\n    int isMeta = 
mstrIsMetaAttached(s);\n\n    tmp = mflags = (isMeta) ? *mstrFlagsRef(s) : 0;\n\n    if (!isMeta) {\n        printf(\"[%p] %s: %s (strLen=%zu, mstrLen=%zu, isMeta=0)\\n\",\n               (void *)s, kind->name, s, mstrlen(s), mstrAllocLen(s, kind));\n        return;\n    }\n\n    printf(\"[%p] %s: %s (strLen=%zu, mstrLen=%zu, isMeta=1, metaFlags=0x%x)\\n\",\n           (void *)s, kind->name, s, mstrlen(s), mstrAllocLen(s, kind),  mflags);\n\n    if (verbose) {\n        for (unsigned int i = 0 ; i < NUM_MSTR_FLAGS ; ++i) {\n            if (tmp & 0x1) {\n                int mSize = kind->metaSize[i];\n                void *mRef = mstrMetaRef(s, kind, i);\n                printf(\"[%p] >> meta[%d]:\", mRef, i);\n                for (int j = 0 ; j < mSize ; ++j) {\n                    printf(\" 0x%02x\", ((unsigned char *) mRef)[j]);\n                }\n                printf(\" (metaLen=%d)\\n\", mSize);\n            }\n            tmp >>= 1;\n        }\n    }\n}\n\n/* return length of the string (ignoring metadata attached) */\nsize_t mstrlen(const mstr s) {\n    unsigned char info = s[-1];\n    switch(info & MSTR_TYPE_MASK) {\n        case MSTR_TYPE_5:\n            return MSTR_TYPE_5_LEN(info);\n        case MSTR_TYPE_8:\n            return MSTR_HDR(8,s)->len;\n        case MSTR_TYPE_16:\n            return MSTR_HDR(16,s)->len;\n        default: /* MSTR_TYPE_64: */\n            return MSTR_HDR(64,s)->len;\n    }\n}\n\n/*** mstr internals ***/\n\nstatic inline int mstrSumMetaLen(mstrKind *k, mstrFlags flags) {\n    int total = 0;\n    int i = 0 ;\n    while (flags) {\n        total += (flags & 0x1) ? 
k->metaSize[i] : 0;\n        flags >>= 1;\n        ++i;\n    }\n    return total;\n}\n\n/* mstrSumMetaLen() + sizeof(mstrFlags) + sizeof(mstrhdrX) + strlen + '\\0' */\nstatic inline size_t mstrAllocLen(const mstr s, struct mstrKind *kind) {\n    int hdrlen;\n    mstrFlags *pMetaFlags;\n    size_t strlen = 0;\n\n    int isMeta = mstrIsMetaAttached(s);\n    unsigned char info = s[-1];\n\n    switch(info & MSTR_TYPE_MASK) {\n        case MSTR_TYPE_5:\n            strlen = MSTR_TYPE_5_LEN(info);\n            hdrlen = sizeof(struct mstrhdr5);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(5, s)) - 1;\n            break;\n        case MSTR_TYPE_8:\n            strlen = MSTR_HDR(8,s)->len;\n            hdrlen = sizeof(struct mstrhdr8);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(8, s)) - 1;\n            break;\n        case MSTR_TYPE_16:\n            strlen = MSTR_HDR(16,s)->len;\n            hdrlen = sizeof(struct mstrhdr16);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(16, s)) - 1;\n            break;\n        default: /* MSTR_TYPE_64: */\n            strlen = MSTR_HDR(64,s)->len;\n            hdrlen = sizeof(struct mstrhdr64);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(64, s)) - 1;\n            break;\n    }\n    return hdrlen + strlen + NULL_SIZE + ((isMeta) ? 
(mstrSumMetaLen(kind, *pMetaFlags) + sizeof(mstrFlags)) : 0);\n}\n\n/* returns pointer to the beginning of malloc() of mstr */\nvoid *mstrGetStartAlloc(mstr s, struct mstrKind *kind) {\n    int hdrlen;\n    mstrFlags *pMetaFlags;\n\n    int isMeta = mstrIsMetaAttached(s);\n\n    switch(s[-1]&MSTR_TYPE_MASK) {\n        case MSTR_TYPE_5:\n            hdrlen = sizeof(struct mstrhdr5);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(5, s)) - 1;\n            break;\n        case MSTR_TYPE_8:\n            hdrlen = sizeof(struct mstrhdr8);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(8, s)) - 1;\n            break;\n        case MSTR_TYPE_16:\n            hdrlen = sizeof(struct mstrhdr16);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(16, s)) - 1;\n            break;\n        default: /* MSTR_TYPE_64: */\n            hdrlen = sizeof(struct mstrhdr64);\n            pMetaFlags = ((mstrFlags *) MSTR_HDR(64, s)) - 1;\n            break;\n    }\n    return (char *) s - hdrlen -  ((isMeta) ? 
(mstrSumMetaLen(kind, *pMetaFlags) + sizeof(mstrFlags)) : 0);\n}\n\nstatic inline int mstrHdrSize(char type) {\n    switch(type&MSTR_TYPE_MASK) {\n        case MSTR_TYPE_5:\n            return sizeof(struct mstrhdr5);\n        case MSTR_TYPE_8:\n            return sizeof(struct mstrhdr8);\n        case MSTR_TYPE_16:\n            return sizeof(struct mstrhdr16);\n        case MSTR_TYPE_64:\n            return sizeof(struct mstrhdr64);\n    }\n    return 0;\n}\n\nstatic inline char mstrReqType(size_t string_size) {\n    if (string_size < 1<<5)\n        return MSTR_TYPE_5;\n    if (string_size < 1<<8)\n        return MSTR_TYPE_8;\n    if (string_size < 1<<16)\n        return MSTR_TYPE_16;\n    return MSTR_TYPE_64;\n}\n\n#ifdef REDIS_TEST\n#include <stdlib.h>\n#include <assert.h>\n#include \"testhelp.h\"\n#include \"limits.h\"\n\n#ifndef UNUSED\n#define UNUSED(x) (void)(x)\n#endif\n\n/* Exercise mstr with metadata layouts interesting enough to cover the cases of hfield, hkey and more */\n#define B(idx)  (1<<(idx))\n\n#define META_IDX_MYMSTR_TTL4             0\n#define META_IDX_MYMSTR_TTL8             1\n#define META_IDX_MYMSTR_TYPE_ENC_LRU     2       // 4bit type, 4bit encoding, 24bits lru\n#define META_IDX_MYMSTR_VALUE_PTR        3\n#define META_IDX_MYMSTR_FLAG_NO_META     4\n\n#define TEST_CONTEXT(context) printf(\"\\nContext: %s \\n\", context);\n\nint mstrTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    struct mstrKind kind_mymstr = {\n            .name = \"my_mstr\",\n            .metaSize[META_IDX_MYMSTR_TTL4]           = 4,\n            .metaSize[META_IDX_MYMSTR_TTL8]           = 8,\n            .metaSize[META_IDX_MYMSTR_TYPE_ENC_LRU]   = 4,\n            .metaSize[META_IDX_MYMSTR_VALUE_PTR]      = 8,\n            .metaSize[META_IDX_MYMSTR_FLAG_NO_META]   = 0,\n    };\n\n    TEST_CONTEXT(\"Create simple short mstr\")\n    {\n        char *str = \"foo\";\n        mstr s = mstrNew(str, strlen(str), 0, 
NULL);\n        size_t expStrLen = strlen(str);\n\n        test_cond(\"Verify str length and alloc length\",\n                  mstrAllocLen(s, NULL) == (1 + expStrLen + 1) &&   /* mstrhdr5 + str + null */\n                  mstrlen(s) == expStrLen &&                             /* expected strlen(str) */\n                  memcmp(s, str, expStrLen + 1) == 0);\n        mstrFree(&kind_mymstr, s, NULL);\n    }\n\n    TEST_CONTEXT(\"Create simple 40 bytes mstr\")\n    {\n        char *str = \"0123456789012345678901234567890123456789\"; // 40 bytes\n        mstr s = mstrNew(str, strlen(str), 0, NULL);\n\n        test_cond(\"Verify str length and alloc length\",\n                  mstrAllocLen(s, NULL) == (3 + 40 + 1) &&   /* mstrhdr8 + str + null */\n                  mstrlen(s) == 40 &&\n                  memcmp(s,str,40) == 0);\n        mstrFree(&kind_mymstr, s, NULL);\n    }\n\n    TEST_CONTEXT(\"Create mstr with random characters\")\n    {\n        long unsigned int i;\n        char str[66000];\n        for (i = 0 ; i < sizeof(str) ; ++i) str[i] = rand() % 256;\n\n        size_t len[] = { 31, 32, 33, 255, 256, 257, 65535, 65536, 65537, 66000};\n        for (i = 0 ; i < sizeof(len) / sizeof(len[0]) ; ++i) {\n            char title[100];\n            mstr s = mstrNew(str, len[i], 0, NULL);\n            size_t mstrhdrSize = (len[i] < 1<<5) ? sizeof(struct mstrhdr5) :\n                            (len[i] < 1<<8) ? sizeof(struct mstrhdr8) :\n                            (len[i] < 1<<16) ? 
sizeof(struct mstrhdr16) :\n                            sizeof(struct mstrhdr64);\n\n            snprintf(title, sizeof(title), \"Verify string of length %zu\", len[i]);\n            test_cond(title,\n                      mstrAllocLen(s, NULL) == (mstrhdrSize + len[i] + 1) &&   /* mstrhdrX + str + null */\n                      mstrlen(s) == len[i] &&\n                      memcmp(s,str,len[i]) == 0);\n            mstrFree(&kind_mymstr, s, NULL);\n        }\n    }\n\n    TEST_CONTEXT(\"Create short mstr with TTL4\")\n    {\n        uint32_t *ttl;\n        mstr s = mstrNewWithMeta(&kind_mymstr,\n                                 \"foo\",\n                                 strlen(\"foo\"),\n                                 B(META_IDX_MYMSTR_TTL4), /* allocate with TTL4 metadata */\n                                 0,\n                                 NULL);\n\n        ttl = mstrMetaRef(s, &kind_mymstr, META_IDX_MYMSTR_TTL4);\n        *ttl = 0x12345678;\n\n        test_cond(\"Verify memory-allocation and string lengths\",\n                  mstrAllocLen(s, &kind_mymstr) == (1 + 3 + 2 + 1 + 4) && /* mstrhdr5 + str + null + mstrFlags + TTL */\n                  mstrlen(s) == 3);\n\n        unsigned char expMem[] = {0xFF, 0xFF, 0xFF, 0xFF, 0x01, 0x00, 0x1c, 'f', 'o', 'o', '\\0' };\n        uint32_t value = 0x12345678;\n        memcpy(expMem, &value, sizeof(uint32_t));\n        test_cond(\"Verify string and TTL4 payload\", memcmp(\n                mstrMetaRef(s, &kind_mymstr, 0) , expMem, sizeof(expMem)) == 0);\n\n        test_cond(\"Verify mstrIsMetaAttached() function works\", mstrIsMetaAttached(s) != 0);\n\n        mstrFree(&kind_mymstr, s, NULL);\n    }\n\n    TEST_CONTEXT(\"Create short mstr with TTL4 and value ptr \")\n    {\n        mstr s = mstrNewWithMeta(&kind_mymstr, \"foo\", strlen(\"foo\"),\n                                 B(META_IDX_MYMSTR_TTL4) | B(META_IDX_MYMSTR_VALUE_PTR), 0, NULL);\n        *((uint32_t *) (mstrMetaRef(s, &kind_mymstr,\n                  
                  META_IDX_MYMSTR_TTL4))) = 0x12345678;\n\n        test_cond(\"Verify length and alloc length\",\n                  mstrAllocLen(s, &kind_mymstr) == (1 + 3 + 1 + 2 + 4 + 8) && /* mstrhdr5 + str + null + mstrFlags + TTL + PTR */\n                  mstrlen(s) == 3);\n        mstrFree(&kind_mymstr, s, NULL);\n    }\n\n    TEST_CONTEXT(\"Copy mstr and add TTL4 to it\")\n    {\n        mstr s1 = mstrNew(\"foo\", strlen(\"foo\"), 0, NULL);\n        mstr s2 = mstrNewCopy(&kind_mymstr, s1, B(META_IDX_MYMSTR_TTL4), NULL);\n        *((uint32_t *) (mstrMetaRef(s2, &kind_mymstr, META_IDX_MYMSTR_TTL4))) = 0x12345678;\n\n        test_cond(\"Verify new mstr includes TTL4\",\n                  mstrAllocLen(s2, &kind_mymstr) == (1 + 3 + 1 + 2 + 4) &&   /* mstrhdr5 + str + null + mstrFlags + TTL4 */\n                  mstrlen(s2) == 3 &&                   /* 'foo' = 3bytes */\n                  memcmp(s2, \"foo\\0\", 4) == 0);\n\n        mstr s3 = mstrNewCopy(&kind_mymstr, s2, B(META_IDX_MYMSTR_TTL4), NULL);\n        unsigned char expMem[] = { 0xFF, 0xFF, 0xFF, 0xFF, 0x1, 0x0, 0x1c, 'f', 'o', 'o', '\\0' };\n        uint32_t value = 0x12345678;\n        memcpy(expMem, &value, sizeof(uint32_t));\n\n        char *ppp = mstrGetStartAlloc(s3, &kind_mymstr);\n        test_cond(\"Verify string and TTL4 payload\",\n                  memcmp(ppp, expMem, sizeof(expMem)) == 0);\n\n        mstrPrint(s3, &kind_mymstr, 1);\n        mstrFree(&kind_mymstr, s1, NULL);\n        mstrFree(&kind_mymstr, s2, NULL);\n        mstrFree(&kind_mymstr, s3, NULL);\n    }\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/mstr.h",
    "content": "/*\n * Copyright Redis Ltd. 2024 - present\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n *\n * WHAT IS MSTR (M-STRING)?\n * ------------------------\n * mstr stands for immutable string with optional metadata attached.\n *\n * The sds string is widely used across the system and serves as a general-purpose\n * container to hold data. The need to optimize memory and aggregate strings\n * together with their metadata into a single allocation inside Redis data\n * structures keeps recurring. One thought might be: why not extend sds to support\n * metadata? The answer is that sds is a mutable string by nature, with a wide API\n * (split, join, etc.). Pushing metadata logic into sds would make it fragile and\n * complex to maintain.\n *\n * Another idea involved using a simple struct with flags and a dynamic buf[] at the\n * end. While this could be viable, it introduces considerable complexity and would\n * need maintenance across different contexts.\n *\n * As an alternative, we introduce a new implementation of immutable strings,\n * with a limited API, and with the option to attach metadata. The representation\n * of the string, without any metadata, in its basic form, resembles SDS but\n * without the API to manipulate the string, only the ability to attach metadata. 
The\n * following diagram shows the memory layout of mstring (mstrhdr8) when no\n * metadata is attached:\n *\n *     +----------------------------------------------+\n *     | mstrhdr8                       | c-string |  |\n *     +--------------------------------+-------------+\n *     |8b   |2b     |1b      |5b       |?bytes    |8b|\n *     | Len | Type  |m-bit=0 | Unused  | String   |\\0|\n *     +----------------------------------------------+\n *                                      ^\n *                                      |\n *  mstrNew() returns pointer to here --+\n *\n * If  metadata-flag is set, depicted in diagram above as m-bit in the diagram,\n * then the header will be preceded with additional 16 bits of metadata flags such\n * that if i'th bit is set, then the i'th metadata structure is attached to the\n * mstring. The metadata layout and their sizes are defined by mstrKind structure\n * (More below).\n *\n * The following diagram shows the memory layout of mstr (mstrhdr8) when 3 bits in mFlags\n * are set to indicate that 3 fields of metadata are attached to the mstring at the\n * beginning.\n *\n *   +-------------------------------------------------------------------------------+\n *   | METADATA FIELDS       | mflags | mstrhdr8                       | c-string |  |\n *   +-----------------------+--------+--------------------------------+-------------+\n *   |?bytes |?bytes |?bytes |16b     |8b   |2b     |1b      |5b       |?bytes    |8b|\n *   | Meta3 | Meta2 | Meta0 | 0x1101 | Len | Type  |m-bit=1 | Unused  | String   |\\0|\n *   +-------------------------------------------------------------------------------+\n *                                                                     ^\n *                                                                     |\n *                         mstrNewWithMeta() returns pointer to here --+\n *\n * mstr allows to define different kinds (groups) of mstrings, each with its\n * own unique metadata layout. 
For example, in case of hash-fields, all instances of\n * it can optionally have TTL metadata attached to it. This is achieved by first\n * prototyping a single mstrKind structure that defines the metadata layout and sizes\n * of this specific kind. Now each hash-field instance has still the freedom to\n * attach or not attach the metadata to it, and metadata flags (mFlags) of the\n * instance will reflect this decision.\n *\n * In the future, the keys of Redis keyspace can be another kind of mstring that\n * has TTL, LRU or even dictEntry metadata embedded into. Unlike vptr in c++, this\n * struct won't be attached to mstring but will be passed as yet another argument\n * to API, to save memory. In addition, each instance of a given mstrkind can hold\n * any subset of metadata and the 8 bits of metadata-flags will reflect it.\n *\n * The following example shows how to define mstrKind for possible future keyspace\n * that aggregates several keyspace related metadata into one compact, singly\n * allocated, mstring.\n *\n *      typedef enum HkeyMetaFlags {\n *          HKEY_META_VAL_REF_COUNT    = 0,  // refcount\n *          HKEY_META_VAL_REF          = 1,  // Val referenced\n *          HKEY_META_EXPIRE           = 2,  // TTL and more\n *          HKEY_META_TYPE_ENC_LRU     = 3,  // TYPE + LRU + ENC\n *          HKEY_META_DICT_ENT_NEXT    = 4,  // Next dict entry\n *          // Following two must be together and in this order\n *          HKEY_META_VAL_EMBED8       = 5,  // Val embedded, max 7 bytes\n *          HKEY_META_VAL_EMBED16      = 6,  // Val embedded, max 15 bytes (23 with EMBED8)\n *      } HkeyMetaFlags;\n *\n *      mstrKind hkeyKind = {\n *          .name = \"hkey\",\n *          .metaSize[HKEY_META_VAL_REF_COUNT] = 4,\n *          .metaSize[HKEY_META_VAL_REF]       = 8,\n *          .metaSize[HKEY_META_EXPIRE]        = sizeof(ExpireMeta),\n *          .metaSize[HKEY_META_TYPE_ENC_LRU]  = 8,\n *          .metaSize[HKEY_META_DICT_ENT_NEXT] = 8,\n *   
       .metaSize[HKEY_META_VAL_EMBED8]    = 8,\n *          .metaSize[HKEY_META_VAL_EMBED16]   = 16,\n *      };\n *\n * MSTR-ALIGNMENT\n * --------------\n * There are two types of alignment to take into consideration:\n * 1. Alignment of the metadata.\n * 2. Alignment of the returned mstr pointer.\n *\n * 1) As the metadata fields are laid out in reverse order of their enumeration,\n *    it is recommended to put metadata with \"better\" alignment first in the memory\n *    layout (enumerated last), while those with weaker or no alignment requirements\n *    come last in the memory layout (enumerated first). This is similar to the\n *    considerations applied when defining a new struct in C. Note also that each\n *    metadata field may or may not be attached to a given mstr, which complicates\n *    the design phase of a new mstrKind a little.\n *\n *    In the example above, HKEY_META_VAL_REF_COUNT, with the worst alignment of 4\n *    bytes, is enumerated first and will therefore be last in the memory layout.\n *\n * 2) A few optimizations in Redis rely on the fact that an sds address is always\n *    odd. We can achieve the same with a little effort. Care was already taken\n *    that all headers of type mstrhdrX have an odd size. With that in mind, if\n *    a new kind of mstr is required to be limited to odd addresses, then we must\n *    make sure that the sizes of all related metadatas defined in mstrKind\n *    are even.\n */\n\n#ifndef __MSTR_H\n#define __MSTR_H\n\n#include <sys/types.h>\n#include <stdarg.h>\n#include <stdint.h>\n\n/* Selective copy of ifndef from server.h instead of including it */\n#ifndef static_assert\n#define static_assert(expr, lit) extern char __static_assert_failure[(expr) ? 
1:-1]\n#endif\n\n#define MSTR_TYPE_5         0\n#define MSTR_TYPE_8         1\n#define MSTR_TYPE_16        2\n#define MSTR_TYPE_64        3\n#define MSTR_TYPE_MASK      3\n#define MSTR_TYPE_BITS      2\n\n#define MSTR_META_MASK      4\n\n#define MSTR_HDR(T,s) ((struct mstrhdr##T *)((s)-(sizeof(struct mstrhdr##T))))\n#define MSTR_HDR_VAR(T,s) struct mstrhdr##T *sh = (void*)((s)-(sizeof(struct mstrhdr##T)));\n\n#define MSTR_META_BITS  1  /* is metadata attached? */\n#define MSTR_TYPE_5_LEN(f) ((f) >> (MSTR_TYPE_BITS + MSTR_META_BITS))\n#define CREATE_MSTR_INFO(len, ismeta, type) ( (((len<<MSTR_META_BITS) + ismeta) << (MSTR_TYPE_BITS)) | type )\n\n/* mimic plain c-string */\ntypedef char *mstr;\n\n/* Flags that can be set on an mstring to indicate which metadata is attached. */\ntypedef uint16_t mstrFlags;\n\nstruct __attribute__ ((__packed__)) mstrhdr5 {\n    unsigned char info; /* 2 lsb of type, 1 metadata, and 5 msb of string length */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) mstrhdr8 {\n    uint8_t unused;  /* To achieve odd size header (See comment above) */\n    uint8_t len;\n    unsigned char info; /* 2 lsb of type, 6 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) mstrhdr16 {\n    uint16_t len;\n    unsigned char info; /* 2 lsb of type, 6 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) mstrhdr64 {\n    uint64_t len;\n    unsigned char info; /* 2 lsb of type, 6 unused bits */\n    char buf[];\n};\n\n#define NUM_MSTR_FLAGS (sizeof(mstrFlags)*8)\n\n/* mstrKind is used to define a kind (a group) of mstring with its own metadata layout */\ntypedef struct mstrKind {\n    const char *name;\n    int metaSize[NUM_MSTR_FLAGS];\n} mstrKind;\n\nmstr mstrNew(const char *initStr, size_t lenStr, int trymalloc, size_t *usable);\n\nmstr mstrNewWithMeta(struct mstrKind *kind, const char *initStr, size_t lenStr, mstrFlags flags, int trymalloc, size_t *usable);\n\nmstr mstrNewCopy(struct mstrKind *kind, mstr 
src, mstrFlags newFlags, size_t *usable);\n\nvoid *mstrGetAllocPtr(struct mstrKind *kind, mstr str);\n\nvoid mstrFree(struct mstrKind *kind, mstr s, size_t *usable);\n\nmstrFlags *mstrFlagsRef(mstr s);\n\nvoid *mstrMetaRef(mstr s, struct mstrKind *kind, int flagIdx);\n\nsize_t mstrlen(const mstr s);\n\n/* return non-zero if metadata is attached to mstring */\nstatic inline int mstrIsMetaAttached(mstr s) { return s[-1] & MSTR_META_MASK; }\n\n/* return whether if a specific flag-index is set */\nstatic inline int mstrGetFlag(mstr s, int flagIdx) { return *mstrFlagsRef(s) & (1 << flagIdx); }\n\n/* DEBUG */\nvoid mstrPrint(mstr s, struct mstrKind *kind, int verbose);\n\n/* See comment above about MSTR-ALIGNMENT(2) */\nstatic_assert(sizeof(struct mstrhdr5 ) % 2 == 1, \"must be odd\");\nstatic_assert(sizeof(struct mstrhdr8 ) % 2 == 1, \"must be odd\");\nstatic_assert(sizeof(struct mstrhdr16 ) % 2 == 1, \"must be odd\");\nstatic_assert(sizeof(struct mstrhdr64 ) % 2 == 1, \"must be odd\");\nstatic_assert(sizeof(mstrFlags ) % 2 == 0, \"must be even to keep mstr pointer odd\");\n\n#ifdef REDIS_TEST\nint mstrTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/mt19937-64.c",
    "content": "/*\n   A C-program for MT19937-64 (2004/9/29 version).\n   Coded by Takuji Nishimura and Makoto Matsumoto.\n\n   This is a 64-bit version of Mersenne Twister pseudorandom number\n   generator.\n\n   Before using, initialize the state by using init_genrand64(seed)\n   or init_by_array64(init_key, key_length).\n\n   Copyright (C) 2004, Makoto Matsumoto and Takuji Nishimura,\n   All rights reserved.\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n     1. Redistributions of source code must retain the above copyright\n        notice, this list of conditions and the following disclaimer.\n\n     2. Redistributions in binary form must reproduce the above copyright\n        notice, this list of conditions and the following disclaimer in the\n        documentation and/or other materials provided with the distribution.\n\n     3. The names of its contributors may not be used to endorse or promote\n        products derived from this software without specific prior written\n        permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n   References:\n   T. 
Nishimura, ``Tables of 64-bit Mersenne Twisters''\n     ACM Transactions on Modeling and\n     Computer Simulation 10. (2000) 348--357.\n   M. Matsumoto and T. Nishimura,\n     ``Mersenne Twister: a 623-dimensionally equidistributed\n       uniform pseudorandom number generator''\n     ACM Transactions on Modeling and\n     Computer Simulation 8. (Jan. 1998) 3--30.\n\n   Any feedback is very welcome.\n   http://www.math.hiroshima-u.ac.jp/~m-mat/MT/emt.html\n   email: m-mat @ math.sci.hiroshima-u.ac.jp (remove spaces)\n*/\n\n\n#include \"mt19937-64.h\"\n#include <stdio.h>\n\n#define NN 312\n#define MM 156\n#define MATRIX_A 0xB5026F5AA96619E9ULL\n#define UM 0xFFFFFFFF80000000ULL /* Most significant 33 bits */\n#define LM 0x7FFFFFFFULL /* Least significant 31 bits */\n\n\n/* The array for the state vector */\nstatic unsigned long long mt[NN];\n/* mti==NN+1 means mt[NN] is not initialized */\nstatic int mti=NN+1;\n\n/* initializes mt[NN] with a seed */\nvoid init_genrand64(unsigned long long seed)\n{\n    mt[0] = seed;\n    for (mti=1; mti<NN; mti++)\n        mt[mti] =  (6364136223846793005ULL * (mt[mti-1] ^ (mt[mti-1] >> 62)) + mti);\n}\n\n/* initialize by an array with array-length */\n/* init_key is the array for initializing keys */\n/* key_length is its length */\nvoid init_by_array64(unsigned long long init_key[],\n                     unsigned long long key_length)\n{\n    unsigned long long i, j, k;\n    init_genrand64(19650218ULL);\n    i=1; j=0;\n    k = (NN>key_length ? 
NN : key_length);\n    for (; k; k--) {\n        mt[i] = (mt[i] ^ ((mt[i-1] ^ (mt[i-1] >> 62)) * 3935559000370003845ULL))\n          + init_key[j] + j; /* non linear */\n        i++; j++;\n        if (i>=NN) { mt[0] = mt[NN-1]; i=1; }\n        if (j>=key_length) j=0;\n    }\n    for (k=NN-1; k; k--) {\n        mt[i] = (mt[i] ^ ((mt[i-1] ^ (mt[i-1] >> 62)) * 2862933555777941757ULL))\n          - i; /* non linear */\n        i++;\n        if (i>=NN) { mt[0] = mt[NN-1]; i=1; }\n    }\n\n    mt[0] = 1ULL << 63; /* MSB is 1; assuring non-zero initial array */\n}\n\n/* generates a random number on [0, 2^64-1]-interval */\nunsigned long long genrand64_int64(void)\n{\n    int i;\n    unsigned long long x;\n    static unsigned long long mag01[2]={0ULL, MATRIX_A};\n\n    if (mti >= NN) { /* generate NN words at one time */\n\n        /* if init_genrand64() has not been called, */\n        /* a default initial seed is used     */\n        if (mti == NN+1)\n            init_genrand64(5489ULL);\n\n        for (i=0;i<NN-MM;i++) {\n            x = (mt[i]&UM)|(mt[i+1]&LM);\n            mt[i] = mt[i+MM] ^ (x>>1) ^ mag01[(int)(x&1ULL)];\n        }\n        for (;i<NN-1;i++) {\n            x = (mt[i]&UM)|(mt[i+1]&LM);\n            mt[i] = mt[i+(MM-NN)] ^ (x>>1) ^ mag01[(int)(x&1ULL)];\n        }\n        x = (mt[NN-1]&UM)|(mt[0]&LM);\n        mt[NN-1] = mt[MM-1] ^ (x>>1) ^ mag01[(int)(x&1ULL)];\n\n        mti = 0;\n    }\n\n    x = mt[mti++];\n\n    x ^= (x >> 29) & 0x5555555555555555ULL;\n    x ^= (x << 17) & 0x71D67FFFEDA60000ULL;\n    x ^= (x << 37) & 0xFFF7EEE000000000ULL;\n    x ^= (x >> 43);\n\n    return x;\n}\n\n/* generates a random number on [0, 2^63-1]-interval */\nlong long genrand64_int63(void)\n{\n    return (long long)(genrand64_int64() >> 1);\n}\n\n/* generates a random number on [0,1]-real-interval */\ndouble genrand64_real1(void)\n{\n    return (genrand64_int64() >> 11) * (1.0/9007199254740991.0);\n}\n\n/* generates a random number on [0,1)-real-interval */\ndouble 
genrand64_real2(void)\n{\n    return (genrand64_int64() >> 11) * (1.0/9007199254740992.0);\n}\n\n/* generates a random number on (0,1)-real-interval */\ndouble genrand64_real3(void)\n{\n    return ((genrand64_int64() >> 12) + 0.5) * (1.0/4503599627370496.0);\n}\n\n#ifdef MT19937_64_MAIN\nint main(void)\n{\n    int i;\n    unsigned long long init[4]={0x12345ULL, 0x23456ULL, 0x34567ULL, 0x45678ULL}, length=4;\n    init_by_array64(init, length);\n    printf(\"1000 outputs of genrand64_int64()\\n\");\n    for (i=0; i<1000; i++) {\n      printf(\"%20llu \", genrand64_int64());\n      if (i%5==4) printf(\"\\n\");\n    }\n    printf(\"\\n1000 outputs of genrand64_real2()\\n\");\n    for (i=0; i<1000; i++) {\n      printf(\"%10.8f \", genrand64_real2());\n      if (i%5==4) printf(\"\\n\");\n    }\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/mt19937-64.h",
    "content": "/*\n   A C-program for MT19937-64 (2004/9/29 version).\n   Coded by Takuji Nishimura and Makoto Matsumoto.\n\n   This is a 64-bit version of Mersenne Twister pseudorandom number\n   generator.\n\n   Before using, initialize the state by using init_genrand64(seed)\n   or init_by_array64(init_key, key_length).\n\n   Copyright (C) 2004, Makoto Matsumoto and Takuji Nishimura,\n   All rights reserved.\n\n   Redistribution and use in source and binary forms, with or without\n   modification, are permitted provided that the following conditions\n   are met:\n\n     1. Redistributions of source code must retain the above copyright\n        notice, this list of conditions and the following disclaimer.\n\n     2. Redistributions in binary form must reproduce the above copyright\n        notice, this list of conditions and the following disclaimer in the\n        documentation and/or other materials provided with the distribution.\n\n     3. The names of its contributors may not be used to endorse or promote\n        products derived from this software without specific prior written\n        permission.\n\n   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n   A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR\n   CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,\n   EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,\n   PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR\n   PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n   LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n   NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n   SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n   References:\n   T. 
Nishimura, ``Tables of 64-bit Mersenne Twisters''\n     ACM Transactions on Modeling and\n     Computer Simulation 10. (2000) 348--357.\n   M. Matsumoto and T. Nishimura,\n     ``Mersenne Twister: a 623-dimensionally equidistributed\n       uniform pseudorandom number generator''\n     ACM Transactions on Modeling and\n     Computer Simulation 8. (Jan. 1998) 3--30.\n\n   Any feedback is very welcome.\n   http://www.math.hiroshima-u.ac.jp/~m-mat/MT/emt.html\n   email: m-mat @ math.sci.hiroshima-u.ac.jp (remove spaces)\n*/\n\n#ifndef __MT19937_64_H\n#define __MT19937_64_H\n\n/* initializes mt[NN] with a seed */\nvoid init_genrand64(unsigned long long seed);\n\n/* initialize by an array with array-length */\n/* init_key is the array for initializing keys */\n/* key_length is its length */\nvoid init_by_array64(unsigned long long init_key[],\n                     unsigned long long key_length);\n\n/* generates a random number on [0, 2^64-1]-interval */\nunsigned long long genrand64_int64(void);\n\n\n/* generates a random number on [0, 2^63-1]-interval */\nlong long genrand64_int63(void);\n\n/* generates a random number on [0,1]-real-interval */\ndouble genrand64_real1(void);\n\n/* generates a random number on [0,1)-real-interval */\ndouble genrand64_real2(void);\n\n/* generates a random number on (0,1)-real-interval */\ndouble genrand64_real3(void);\n\n/* generates a random number on (0,1]-real-interval */\ndouble genrand64_real4(void);\n\n#endif\n"
  },
  {
    "path": "src/multi.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n\n/* ================================ MULTI/EXEC ============================== */\n\n/* Client state initialization for MULTI/EXEC */\nvoid initClientMultiState(client *c) {\n    c->mstate.commands = NULL;\n    c->mstate.count = 0;\n    c->mstate.cmd_flags = 0;\n    c->mstate.cmd_inv_flags = 0;\n    c->mstate.argv_len_sums = 0;\n    c->mstate.alloc_count = 0;\n    c->mstate.executing_cmd = -1;\n}\n\n/* Release all the resources associated with MULTI/EXEC state */\nvoid freeClientMultiState(client *c) {\n    for (int i = 0; i < c->mstate.count; i++) {\n        freePendingCommand(c, c->mstate.commands[i]);\n    }\n    zfree(c->mstate.commands);\n}\n\n/* Add a new command into the MULTI commands queue */\nvoid queueMultiCommand(client *c, uint64_t cmd_flags) {\n    /* No sense to waste memory if the transaction is already aborted.\n     * this is useful in case client sends these in a pipeline, or doesn't\n     * bother to read previous responses and didn't notice the multi was already\n     * aborted. */\n    if (c->flags & (CLIENT_DIRTY_CAS|CLIENT_DIRTY_EXEC))\n        return;\n    if (c->mstate.count == 0) {\n        /* If a client is using multi/exec, assuming it is used to execute at least\n         * two commands. Hence, creating by default size of 2. */\n        c->mstate.commands = zmalloc(sizeof(pendingCommand*)*2);\n        c->mstate.alloc_count = 2;\n    }\n    if (c->mstate.count == c->mstate.alloc_count) {\n        c->mstate.alloc_count = c->mstate.alloc_count < INT_MAX/2 ? 
c->mstate.alloc_count*2 : INT_MAX;\n        c->mstate.commands = zrealloc(c->mstate.commands, sizeof(pendingCommand*)*(c->mstate.alloc_count));\n    }\n\n    /* Move the pending command into the multi-state.\n     * We leave the empty list node in 'pending_cmds' for freeClientPendingCommands to clean up\n     * later, but set the value to NULL to indicate it has been moved out and should not be freed. */\n    pendingCommand *pcmd = popPendingCommandFromHead(&c->pending_cmds);\n    c->current_pending_cmd = NULL;\n    pendingCommand **mc = c->mstate.commands + c->mstate.count;\n    *mc = pcmd;\n\n    c->mstate.count++;\n    c->mstate.cmd_flags |= cmd_flags;\n    c->mstate.cmd_inv_flags |= ~cmd_flags;\n    c->mstate.argv_len_sums += (*mc)->argv_len_sum;\n    c->all_argv_len_sum -= (*mc)->argv_len_sum;\n\n    (*mc)->argv_len_sum = 0; /* This is no longer tracked through all_argv_len_sum, so we don't want */\n                             /* to subtract it from there later. */\n\n    /* Reset the client's args since we moved them into the mstate and shouldn't\n     * reference them from 'c' anymore. */\n    c->argv = NULL;\n    c->argc = 0;\n    c->argv_len = 0;\n}\n\nvoid discardTransaction(client *c) {\n    freeClientMultiState(c);\n    initClientMultiState(c);\n    c->flags &= ~(CLIENT_MULTI|CLIENT_DIRTY_CAS|CLIENT_DIRTY_EXEC);\n    unwatchAllKeys(c);\n}\n\n/* Flag the transaction as DIRTY_EXEC so that EXEC will fail.\n * Should be called every time there is an error while queueing a command. 
*/\nvoid flagTransaction(client *c) {\n    if (c->flags & CLIENT_MULTI)\n        c->flags |= CLIENT_DIRTY_EXEC;\n}\n\nvoid multiCommand(client *c) {\n    if (c->flags & CLIENT_MULTI) {\n        addReplyError(c,\"MULTI calls can not be nested\");\n        return;\n    }\n    c->flags |= CLIENT_MULTI;\n\n    addReply(c,shared.ok);\n}\n\nvoid discardCommand(client *c) {\n    if (!(c->flags & CLIENT_MULTI)) {\n        addReplyError(c,\"DISCARD without MULTI\");\n        return;\n    }\n    discardTransaction(c);\n    addReply(c,shared.ok);\n}\n\n/* Aborts a transaction, with a specific error message.\n * The transaction is always aborted with -EXECABORT so that the client knows\n * the server exited the multi state, but the actual reason for the abort is\n * included too.\n * Note: 'error' may or may not end with \\r\\n. see addReplyErrorFormat. */\nvoid execCommandAbort(client *c, sds error) {\n    discardTransaction(c);\n\n    if (error[0] == '-') error++;\n    addReplyErrorFormat(c, \"-EXECABORT Transaction discarded because of: %s\", error);\n\n    /* Send EXEC to clients waiting data from MONITOR. We did send a MULTI\n     * already, and didn't send any of the queued commands, now we'll just send\n     * EXEC so it is clear that the transaction is over. 
*/\n    replicationFeedMonitors(c,server.monitors,c->db->id,c->argv,c->argc);\n}\n\nvoid execCommand(client *c) {\n    int j;\n    robj **orig_argv;\n    int orig_argc, orig_argv_len;\n    size_t orig_all_argv_len_sum;\n    struct redisCommand *orig_cmd;\n\n    if (!(c->flags & CLIENT_MULTI)) {\n        addReplyError(c,\"EXEC without MULTI\");\n        return;\n    }\n\n    /* EXEC with expired watched key is disallowed*/\n    if (isWatchedKeyExpired(c)) {\n        c->flags |= (CLIENT_DIRTY_CAS);\n    }\n\n    /* Check if we need to abort the EXEC because:\n     * 1) Some WATCHed key was touched.\n     * 2) There was a previous error while queueing commands.\n     * A failed EXEC in the first case returns a multi bulk nil object\n     * (technically it is not an error but a special behavior), while\n     * in the second an EXECABORT error is returned. */\n    if (c->flags & (CLIENT_DIRTY_CAS | CLIENT_DIRTY_EXEC)) {\n        if (c->flags & CLIENT_DIRTY_EXEC) {\n            addReplyErrorObject(c, shared.execaborterr);\n        } else {\n            addReply(c, shared.nullarray[c->resp]);\n        }\n\n        discardTransaction(c);\n        return;\n    }\n\n    uint64_t old_flags = c->flags;\n\n    /* we do not want to allow blocking commands inside multi */\n    c->flags |= CLIENT_DENY_BLOCKING;\n\n    /* Exec all the queued commands */\n    unwatchAllKeys(c); /* Unwatch ASAP otherwise we'll waste CPU cycles */\n\n    server.in_exec = 1;\n\n    orig_argv = c->argv;\n    orig_argv_len = c->argv_len;\n    orig_argc = c->argc;\n    orig_cmd = c->cmd;\n\n    /* Multi-state commands aren't tracked through all_argv_len_sum, so we don't want anything done while executing them to affect that field.\n     * Otherwise, we get inconsistencies and all_argv_len_sum doesn't go back to exactly 0 when the client is finished */\n    orig_all_argv_len_sum = c->all_argv_len_sum;\n\n    c->all_argv_len_sum = c->mstate.argv_len_sums;\n\n    /* Skip ACL check for the AOF client while 
server loading. */\n    int skip_acl_check = server.loading && c->id == CLIENT_ID_AOF;\n\n    addReplyArrayLen(c,c->mstate.count);\n    for (j = 0; j < c->mstate.count; j++) {\n        c->argc = c->mstate.commands[j]->argc;\n        c->argv = c->mstate.commands[j]->argv;\n        c->argv_len = c->mstate.commands[j]->argv_len;\n        c->cmd = c->realcmd = c->mstate.commands[j]->cmd;\n\n        /* ACL permissions are also checked at the time of execution in case\n         * they were changed after the commands were queued. */\n        int acl_errpos;\n        int acl_retval = ACL_OK;\n        if (!skip_acl_check) {\n            acl_retval = ACLCheckAllPerm(c,&acl_errpos);\n        }\n        if (acl_retval != ACL_OK) {\n            char *reason;\n            switch (acl_retval) {\n            case ACL_DENIED_CMD:\n                reason = \"no permission to execute the command or subcommand\";\n                break;\n            case ACL_DENIED_KEY:\n                reason = \"no permission to touch the specified keys\";\n                break;\n            case ACL_DENIED_CHANNEL:\n                reason = \"no permission to access one of the channels used \"\n                         \"as arguments\";\n                break;\n            default:\n                reason = \"no permission\";\n                break;\n            }\n            addACLLogEntry(c,acl_retval,ACL_LOG_CTX_MULTI,acl_errpos,NULL,NULL);\n            addReplyErrorFormat(c,\n                \"-NOPERM ACLs rules changed between the moment the \"\n                \"transaction was accumulated and the EXEC call. 
\"\n                \"This command is no longer allowed for the \"\n                \"following reason: %s\", reason);\n        } else {\n            c->mstate.executing_cmd = j;\n            if (c->id == CLIENT_ID_AOF)\n                call(c,CMD_CALL_NONE);\n            else\n                call(c,CMD_CALL_FULL);\n\n            serverAssert((c->flags & CLIENT_BLOCKED) == 0);\n        }\n\n        /* Commands may alter argc/argv, restore mstate. */\n        c->mstate.commands[j]->argc = c->argc;\n        c->mstate.commands[j]->argv = c->argv;\n        c->mstate.commands[j]->argv_len = c->argv_len;\n        c->mstate.commands[j]->cmd = c->cmd;\n    }\n\n    // restore old DENY_BLOCKING value\n    if (!(old_flags & CLIENT_DENY_BLOCKING))\n        c->flags &= ~CLIENT_DENY_BLOCKING;\n\n    c->argv = orig_argv;\n    c->argv_len = orig_argv_len;\n    c->argc = orig_argc;\n    c->cmd = c->realcmd = orig_cmd;\n    c->all_argv_len_sum = orig_all_argv_len_sum;\n    discardTransaction(c);\n\n    server.in_exec = 0;\n}\n\n/* ===================== WATCH (CAS alike for MULTI/EXEC) ===================\n *\n * The implementation uses a per-DB hash table mapping keys to list of clients\n * WATCHing those keys, so that given a key that is going to be modified\n * we can mark all the associated clients as dirty.\n *\n * Also every client contains a list of WATCHed keys so that's possible to\n * un-watch such keys when the client is freed or when UNWATCH is called. */\n\n/* The watchedKey struct is included in two lists: the client->watched_keys list,\n * and db->watched_keys dict (each value in that dict is a list of watchedKey structs).\n * The list in the client struct is a plain list, where each node's value is a pointer to a watchedKey.\n * The list in the db db->watched_keys is different, the listnode member that's embedded in this struct\n * is the node in the dict. 
And the value inside that listnode is a pointer to the that list, and we can use\n * struct member offset math to get from the listnode to the watchedKey struct.\n * This is done to avoid the need for listSearchKey and dictFind when we remove from the list. */\ntypedef struct watchedKey {\n    listNode node;\n    robj *key;\n    redisDb *db;\n    client *client;\n    unsigned expired:1; /* Flag that we're watching an already expired key. */\n} watchedKey;\n\n/* Attach a watchedKey to the list of clients watching that key. */\nstatic inline void watchedKeyLinkToClients(list *clients, watchedKey *wk) {\n    wk->node.value = clients; /* Point the value back to the list */\n    listLinkNodeTail(clients, &wk->node); /* Link the embedded node */\n}\n\n/* Get the list of clients watching that key. */\nstatic inline list *watchedKeyGetClients(watchedKey *wk) {\n    return listNodeValue(&wk->node); /* embedded node->value points back to the list */\n}\n\n/* Get the node with wk->client in the list of clients watching that key. Actually it\n * is just the embedded node. */\nstatic inline listNode *watchedKeyGetClientNode(watchedKey *wk) {\n    return &wk->node;\n}\n\n/* Watch for the specified key */\nvoid watchForKey(client *c, robj *key) {\n    list *clients = NULL;\n    listIter li;\n    listNode *ln;\n    watchedKey *wk;\n\n    if (listLength(c->watched_keys) == 0) server.watching_clients++;\n\n    /* Check if we are already watching for this key */\n    listRewind(c->watched_keys,&li);\n    while((ln = listNext(&li))) {\n        wk = listNodeValue(ln);\n        if (wk->db == c->db && equalStringObjects(key,wk->key))\n            return; /* Key already watched */\n    }\n    /* This key is not already watched in this DB. 
Let's add it */\n    clients = dictFetchValue(c->db->watched_keys,key);\n    if (!clients) {\n        clients = listCreate();\n        dictAdd(c->db->watched_keys,key,clients);\n        incrRefCount(key);\n    }\n    /* Add the new key to the list of keys watched by this client */\n    wk = zmalloc(sizeof(*wk));\n    wk->key = key;\n    wk->client = c;\n    wk->db = c->db;\n    wk->expired = keyIsExpired(c->db, key->ptr, NULL);\n    incrRefCount(key);\n    listAddNodeTail(c->watched_keys, wk);\n    watchedKeyLinkToClients(clients, wk);\n}\n\n/* Unwatch all the keys watched by this client. To clean the EXEC dirty\n * flag is up to the caller. */\nvoid unwatchAllKeys(client *c) {\n    listIter li;\n    listNode *ln;\n\n    if (listLength(c->watched_keys) == 0) return;\n    listRewind(c->watched_keys,&li);\n    while((ln = listNext(&li))) {\n        list *clients;\n        watchedKey *wk;\n\n        /* Remove the client's wk from the list of clients watching the key. */\n        wk = listNodeValue(ln);\n        clients = watchedKeyGetClients(wk);\n        serverAssertWithInfo(c,NULL,clients != NULL);\n        listUnlinkNode(clients, watchedKeyGetClientNode(wk));\n        /* Kill the entry at all if this was the only client */\n        if (listLength(clients) == 0)\n            dictDelete(wk->db->watched_keys, wk->key);\n        /* Remove this watched key from the client->watched list */\n        listDelNode(c->watched_keys,ln);\n        decrRefCount(wk->key);\n        zfree(wk);\n    }\n    server.watching_clients--;\n}\n\n/* Iterates over the watched_keys list and looks for an expired key. Keys which\n * were expired already when WATCH was called are ignored. 
*/\nint isWatchedKeyExpired(client *c) {\n    listIter li;\n    listNode *ln;\n    watchedKey *wk;\n    if (listLength(c->watched_keys) == 0) return 0;\n    listRewind(c->watched_keys,&li);\n    while ((ln = listNext(&li))) {\n        wk = listNodeValue(ln);\n        if (wk->expired) continue; /* was expired when WATCH was called */\n        if (keyIsExpired(wk->db, wk->key->ptr, NULL)) return 1;\n    }\n\n    return 0;\n}\n\n/* \"Touch\" a key, so that if this key is being WATCHed by some client the\n * next EXEC will fail.\n *\n * Sanitizer suppression: IO threads also read c->flags, but never modify\n * it or read the CLIENT_DIRTY_CAS bit, main thread just only modifies\n * this bit, so there is actually no real data race. */\nREDIS_NO_SANITIZE(\"thread\")\nvoid touchWatchedKey(redisDb *db, robj *key) {\n    list *clients;\n    listIter li;\n    listNode *ln;\n\n    if (dictSize(db->watched_keys) == 0) return;\n    clients = dictFetchValue(db->watched_keys, key);\n    if (!clients) return;\n\n    /* Mark all the clients watching this key as CLIENT_DIRTY_CAS */\n    /* Check if we are already watching for this key */\n    listRewind(clients,&li);\n    while((ln = listNext(&li))) {\n        watchedKey *wk = redis_member2struct(watchedKey, node, ln);\n        client *c = wk->client;\n\n        if (wk->expired) {\n            /* The key was already expired when WATCH was called. */\n            if (db == wk->db &&\n                equalStringObjects(key, wk->key) &&\n                dbFind(db, key->ptr) == NULL)\n            {\n                /* Already expired key is deleted, so logically no change. Clear\n                 * the flag. Deleted keys are not flagged as expired. 
*/\n                wk->expired = 0;\n                goto skip_client;\n            }\n            break;\n        }\n\n        c->flags |= CLIENT_DIRTY_CAS;\n        /* As the client is marked as dirty, there is no point in getting here\n         * again in case that key (or others) are modified again (or keep the\n         * memory overhead till EXEC). */\n        unwatchAllKeys(c);\n\n    skip_client:\n        continue;\n    }\n}\n\n/* Set CLIENT_DIRTY_CAS to all clients of DB when DB is dirty.\n * It may happen in the following situations:\n * - FLUSHDB, FLUSHALL, SWAPDB, end of successful diskless replication.\n * - Atomic slot migration trimming phase. In this case, 'slots' is set and only\n *   keys in the specified slots are touched.\n *\n * replaced_with: for SWAPDB, the WATCH should be invalidated if\n * the key exists in either of them, and skipped only if it\n * doesn't exist in both. */\nREDIS_NO_SANITIZE(\"thread\")\nvoid touchAllWatchedKeysInDb(redisDb *emptied, redisDb *replaced_with, struct slotRangeArray *slots) {\n    listIter li;\n    listNode *ln;\n    dictEntry *de;\n\n    if (dictSize(emptied->watched_keys) == 0) return;\n\n    dictIterator di;\n    dictInitSafeIterator(&di, emptied->watched_keys);\n    while((de = dictNext(&di)) != NULL) {\n        robj *key = dictGetKey(de);\n        if (slots && !slotRangeArrayContains(slots, keyHashSlot(key->ptr, sdslen(key->ptr))))\n            continue;\n        int exists_in_emptied = dbFind(emptied, key->ptr) != NULL;\n        if (exists_in_emptied ||\n            (replaced_with && dbFind(replaced_with, key->ptr) != NULL))\n        {\n            list *clients = dictGetVal(de);\n            if (!clients) continue;\n            listRewind(clients,&li);\n            while((ln = listNext(&li))) {\n                watchedKey *wk = redis_member2struct(watchedKey, node, ln);\n                if (wk->expired) {\n                    if (!replaced_with || !dbFind(replaced_with, key->ptr)) {\n                  
      /* Expired key now deleted. No logical change. Clear the\n                         * flag. Deleted keys are not flagged as expired. */\n                        wk->expired = 0;\n                        continue;\n                    } else if (keyIsExpired(replaced_with, key->ptr, NULL)) {\n                        /* Expired key remains expired. */\n                        continue;\n                    }\n                } else if (!exists_in_emptied && keyIsExpired(replaced_with, key->ptr, NULL)) {\n                    /* Non-existing key is replaced with an expired key. */\n                    wk->expired = 1;\n                    continue;\n                }\n                client *c = wk->client;\n                c->flags |= CLIENT_DIRTY_CAS;\n                /* Note - we could potentially call unwatchAllKeys for this specific client in order to reduce\n                 * the total number of iterations. BUT this could also free the current next entry pointer\n                 * held by the iterator and can lead to use-after-free. */\n            }\n        }\n    }\n    dictResetIterator(&di);\n}\n\nvoid watchCommand(client *c) {\n    int j;\n\n    if (c->flags & CLIENT_MULTI) {\n        addReplyError(c,\"WATCH inside MULTI is not allowed\");\n        return;\n    }\n    /* No point in watching if the client is already dirty. */\n    if (c->flags & CLIENT_DIRTY_CAS) {\n        addReply(c,shared.ok);\n        return;\n    }\n    for (j = 1; j < c->argc; j++)\n        watchForKey(c,c->argv[j]);\n    addReply(c,shared.ok);\n}\n\nvoid unwatchCommand(client *c) {\n    unwatchAllKeys(c);\n    c->flags &= (~CLIENT_DIRTY_CAS);\n    addReply(c,shared.ok);\n}\n\nsize_t multiStateMemOverhead(client *c) {\n    size_t mem = c->mstate.argv_len_sums;\n    /* Add watched keys overhead, Note: this doesn't take into account the watched keys themselves, because they aren't managed per-client. 
*/\n    mem += listLength(c->watched_keys) * (sizeof(listNode) + sizeof(watchedKey));\n    /* Reserved memory for queued multi commands. */\n    mem += c->mstate.alloc_count * sizeof(pendingCommand);\n    return mem;\n}\n"
  },
  {
    "path": "src/networking.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"atomicvar.h\"\n#include \"cluster.h\"\n#include \"cluster_slot_stats.h\"\n#include \"script.h\"\n#include \"fpconv_dtoa.h\"\n#include \"fmtargs.h\"\n#include \"cluster_asm.h\"\n#include \"memory_prefetch.h\"\n#include \"connection.h\"\n#include <sys/socket.h>\n#include <sys/uio.h>\n#include <math.h>\n#include <ctype.h>\n\nstatic void setProtocolError(const char *errstr, client *c);\nstatic void pauseClientsByClient(mstime_t end, int isPauseClientAll);\nchar *getClientSockname(client *c);\nstatic inline int clientTypeIsSlave(client *c);\nstatic inline int _clientHasPendingRepliesSlave(client *c);\nstatic inline int _clientHasPendingRepliesNonSlave(client *c);\nstatic inline int _writeToClientNonSlave(client *c, ssize_t *nwritten);\nstatic inline int _writeToClientSlave(client *c, ssize_t *nwritten);\nstatic pendingCommand *acquirePendingCommand(void);\nstatic void reclaimPendingCommand(client *c, pendingCommand *pcmd);\n\nint ProcessingEventsWhileBlocked = 0; /* See processEventsWhileBlocked(). */\n__thread sds thread_reusable_qb = NULL;\n__thread int thread_reusable_qb_used = 0; /* Avoid multiple clients using reusable query\n                                         * buffer due to nested command execution. */\n\n/* Return the size consumed from the allocator, for the specified SDS string,\n * including internal fragmentation. This function is used in order to compute\n * the client output buffer size. 
*/\nsize_t sdsZmallocSize(sds s) {\n    void *sh = sdsAllocPtr(s);\n    return zmalloc_size(sh);\n}\n\n/* Return the amount of memory used by the sds string at object->ptr\n * for a string object. This includes internal fragmentation. */\nsize_t getStringObjectSdsUsedMemory(robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n    switch(o->encoding) {\n    case OBJ_ENCODING_RAW: return sdsZmallocSize(o->ptr);\n    case OBJ_ENCODING_EMBSTR: return zmalloc_size(o)-sizeof(robj);\n    default: return 0; /* Just integer encoding for now. */\n    }\n}\n\n/* Return the length of a string object.\n * This does NOT include internal fragmentation or sds unused space. */\nsize_t getStringObjectLen(robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n    switch(o->encoding) {\n    case OBJ_ENCODING_RAW: return sdslen(o->ptr);\n    case OBJ_ENCODING_EMBSTR: return sdslen(o->ptr);\n    default: return 0; /* Just integer encoding for now. */\n    }\n}\n\n/* Client.reply list dup and free methods. */\nvoid *dupClientReplyValue(void *o) {\n    clientReplyBlock *old = o;\n    clientReplyBlock *buf = zmalloc(sizeof(clientReplyBlock) + old->size);\n    memcpy(buf, o, sizeof(clientReplyBlock) + old->size);\n    return buf;\n}\n\nvoid freeClientReplyValue(void *o) {\n    zfree(o);\n}\n\n/* This function links the client to the global linked list of clients.\n * unlinkClient() does the opposite, among other things. */\nvoid linkClient(client *c) {\n    listAddNodeTail(server.clients,c);\n    /* Note that we remember the linked list node where the client is stored,\n     * this way removing the client in unlinkClient() will not require\n     * a linear scan, but just a constant time operation. 
*/\n    c->client_list_node = listLast(server.clients);\n    uint64_t id = htonu64(c->id);\n    raxInsert(server.clients_index,(unsigned char*)&id,sizeof(id),c,NULL);\n}\n\n/* Initialize client authentication state.\n */\nstatic void clientSetDefaultAuth(client *c) {\n    /* If the default user does not require authentication, the user is\n     * directly authenticated. */\n    c->user = DefaultUser;\n    c->authenticated = (c->user->flags & USER_FLAG_NOPASS) &&\n                       !(c->user->flags & USER_FLAG_DISABLED);\n}\n\nint authRequired(client *c) {\n    /* Check if the user is authenticated. This check is skipped in case\n     * the default user is flagged as \"nopass\" and is active. */\n    uint32_t default_flags;\n    atomicGet(DefaultUser->flags, default_flags);\n    int auth_required = (!(default_flags & USER_FLAG_NOPASS) ||\n                          (default_flags & USER_FLAG_DISABLED)) &&\n                        !c->authenticated;\n    return auth_required;\n}\n\nclient *createClient(connection *conn) {\n    client *c = zmalloc(sizeof(client));\n\n    /* By passing NULL as conn it is possible to create a non connected client.\n     * This is useful since all the commands need to be executed\n     * in the context of a client. When commands are executed in other\n     * contexts (for instance a Lua script) we need a non connected client. 
*/\n    if (conn) {\n        connEnableTcpNoDelay(conn);\n        if (server.tcpkeepalive)\n            connKeepAlive(conn,server.tcpkeepalive);\n        connSetReadHandler(conn, readQueryFromClient);\n        connSetPrivateData(conn, c);\n    }\n    c->buf = zmalloc_usable(PROTO_REPLY_CHUNK_BYTES, &c->buf_usable_size);\n    selectDb(c,0);\n    uint64_t client_id;\n    atomicGetIncr(server.next_client_id, client_id, 1);\n    c->id = client_id;\n    c->tid = IOTHREAD_MAIN_THREAD_ID;\n    c->running_tid = IOTHREAD_MAIN_THREAD_ID;\n    if (conn) server.io_threads_clients_num[c->tid]++;\n#ifdef LOG_REQ_RES\n    reqresReset(c, 0);\n    c->resp = server.client_default_resp;\n#else\n    c->resp = 2;\n#endif\n    c->conn = conn;\n    c->name = NULL;\n    c->lib_name = NULL;\n    c->lib_ver = NULL;\n    c->bufpos = 0;\n    c->buf_peak = c->buf_usable_size;\n    c->buf_peak_last_reset_time = server.unixtime;\n    c->buf_encoded = 0;\n    c->last_header = NULL;\n    c->ref_repl_buf_node = NULL;\n    c->ref_block_pos = 0;\n    c->io_curr_repl_node = NULL;\n    c->io_curr_block_pos = 0;\n    c->io_bound_repl_node = NULL;\n    c->io_bound_block_pos = 0;\n    c->qb_pos = 0;\n    c->querybuf = NULL;\n    c->querybuf_peak = 0;\n    c->reqtype = 0;\n    c->argc = 0;\n    c->argv = NULL;\n    c->argv_len = 0;\n    c->all_argv_len_sum = 0;\n    c->pending_cmds.head = c->pending_cmds.tail = NULL;\n    c->pending_cmds.len = c->pending_cmds.ready_len = 0;\n    c->current_pending_cmd = NULL;\n    c->original_argc = 0;\n    c->original_argv = NULL;\n    c->deferred_objects = NULL;\n    c->deferred_objects_num = 0;\n    c->io_deferred_objects = NULL;\n    c->io_deferred_objects_num = 0;\n    c->io_deferred_objects_size = 0;\n    c->cmd = c->lastcmd = c->realcmd = c->lookedcmd = NULL;\n    c->cur_script = NULL;\n    c->multibulklen = 0;\n    c->bulklen = -1;\n    c->sentlen = 0;\n    c->flags = 0;\n    c->io_flags = CLIENT_IO_READ_ENABLED | CLIENT_IO_WRITE_ENABLED;\n    c->read_error = 0;\n  
  c->slot = -1;\n    c->cluster_compatibility_check_slot = -2;\n    c->ctime = c->lastinteraction = server.unixtime;\n    c->io_lastinteraction = 0;\n    c->duration = 0;\n    clientSetDefaultAuth(c);\n    c->replstate = REPL_STATE_NONE;\n    c->repl_start_cmd_stream_on_ack = 0;\n    c->reploff = 0;\n    c->reploff_next = 0;\n    c->read_reploff = 0;\n    c->io_read_reploff = 0;\n    c->repl_applied = 0;\n    c->repl_ack_off = 0;\n    c->repl_ack_time = 0;\n    c->io_repl_ack_time = 0;\n    c->repl_aof_off = 0;\n    c->repl_last_partial_write = 0;\n    c->slave_listening_port = 0;\n    c->slave_addr = NULL;\n    c->slave_capa = SLAVE_CAPA_NONE;\n    c->slave_req = SLAVE_REQ_NONE;\n    c->main_ch_client_id = 0;\n    c->reply = listCreate();\n    c->deferred_reply_errors = NULL;\n    c->reply_bytes = 0;\n    c->obuf_soft_limit_reached_time = 0;\n    listSetFreeMethod(c->reply,freeClientReplyValue);\n    listSetDupMethod(c->reply,dupClientReplyValue);\n    initClientBlockingState(c);\n    c->woff = 0;\n    c->watched_keys = listCreate();\n    c->pubsub_channels = dictCreate(&objectKeyPointerValueDictType);\n    c->pubsub_patterns = dictCreate(&objectKeyPointerValueDictType);\n    c->pubsubshard_channels = dictCreate(&objectKeyPointerValueDictType);\n    c->peerid = NULL;\n    c->sockname = NULL;\n    c->client_list_node = NULL;\n    c->io_thread_client_list_node = NULL;\n    c->postponed_list_node = NULL;\n    c->client_tracking_redirection = 0;\n    c->client_tracking_prefixes = NULL;\n    c->io_last_client_cron = 0;\n    c->io_last_repl_cron = 0;\n    c->last_memory_usage = 0;\n    c->last_memory_type = CLIENT_TYPE_NORMAL;\n    c->module_blocked_client = NULL;\n    c->module_auth_ctx = NULL;\n    c->auth_callback = NULL;\n    c->auth_callback_privdata = NULL;\n    c->auth_module = NULL;\n    listInitNode(&c->clients_pending_write_node, c);\n    listInitNode(&c->pending_ref_reply_node, c);\n    c->mem_usage_bucket = NULL;\n    c->mem_usage_bucket_node = NULL;\n    
c->net_input_bytes_curr_cmd = 0;\n    c->net_output_bytes_curr_cmd = 0;\n    if (conn) linkClient(c);\n    initClientMultiState(c);\n    c->net_input_bytes = 0;\n    c->net_output_bytes = 0;\n    c->commands_processed = 0;\n    c->last_ts_when_counted_as_active = 0;\n    c->stat_total_read_events = 0;\n    c->stat_avg_pipeline_length_sum = 0;\n    c->stat_avg_pipeline_length_cnt = 0;\n    c->task = NULL;\n    c->node_id = NULL;\n    atomicSet(c->pending_read, 0);\n    return c;\n}\n\nvoid installClientWriteHandler(client *c) {\n    int ae_barrier = 0;\n    /* For the fsync=always policy, we want that a given FD is never\n     * served for reading and writing in the same event loop iteration,\n     * so that in the middle of receiving the query, and serving it\n     * to the client, we'll call beforeSleep() that will do the\n     * actual fsync of AOF to disk. the write barrier ensures that. */\n    if (server.aof_state == AOF_ON &&\n        server.aof_fsync == AOF_FSYNC_ALWAYS)\n    {\n        ae_barrier = 1;\n    }\n    if (connSetWriteHandlerWithBarrier(c->conn, sendReplyToClient, ae_barrier) == C_ERR) {\n        freeClientAsync(c);\n    }\n}\n\n/* This function puts the client in the queue of clients that should write\n * their output buffers to the socket. Note that it does not *yet* install\n * the write handler, to start clients are put in a queue of clients that need\n * to write, so we try to do that before returning in the event loop (see the\n * handleClientsWithPendingWrites() function).\n * If we fail and there is more data to write, compared to what the socket\n * buffers can hold, then we'll really install the handler. */\nvoid putClientInPendingWriteQueue(client *c) {\n    /* Schedule the client to write the output buffers to the socket only\n     * if not already done and, for slaves, if the slave can actually receive\n     * writes at this stage. 
*/\n    if (!(c->flags & CLIENT_PENDING_WRITE) &&\n        (c->replstate == REPL_STATE_NONE ||\n         c->replstate == SLAVE_STATE_SEND_BULK_AND_STREAM ||\n         (c->replstate == SLAVE_STATE_ONLINE && !c->repl_start_cmd_stream_on_ack)))\n    {\n        /* Here instead of installing the write handler, we just flag the\n         * client and put it into a list of clients that have something\n         * to write to the socket. This way before re-entering the event\n         * loop, we can try to directly write to the client sockets avoiding\n         * a system call. We'll only really install the write handler if\n         * we'll not be able to write the whole reply at once. */\n        c->flags |= CLIENT_PENDING_WRITE;\n        listLinkNodeHead(server.clients_pending_write, &c->clients_pending_write_node);\n    }\n}\n\nstatic inline int _prepareClientToWrite(client *c) {\n    const uint64_t _flags = c->flags;\n    /* If it's the Lua client we always return ok without installing any\n     * handler since there is no socket at all. */\n    if (unlikely(_flags & (CLIENT_SCRIPT|CLIENT_MODULE))) return C_OK;\n\n    /* If CLIENT_CLOSE_ASAP flag is set, we need not write anything. */\n    if (unlikely(_flags & CLIENT_CLOSE_ASAP)) return C_ERR;\n\n    /* CLIENT REPLY OFF / SKIP handling: don't send replies.\n     * CLIENT_PUSHING handling: disables the reply silencing flags. */\n    if (unlikely((_flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) &&\n        !(_flags & CLIENT_PUSHING))) return C_ERR;\n\n    /* Masters don't receive replies, unless CLIENT_MASTER_FORCE_REPLY flag\n     * is set. */\n    if (unlikely((_flags & CLIENT_MASTER) &&\n        !(_flags & CLIENT_MASTER_FORCE_REPLY))) return C_ERR;\n\n    if (unlikely(!c->conn)) return C_ERR; /* Fake client for AOF loading. 
*/\n\n    /* Schedule the client to write the output buffers to the socket, unless\n     * it should already be set up to do so (it already has pending data).\n     *\n     * If the client runs in an IO thread, we should not put the client in the\n     * pending write queue. Instead, we will install the write handler to the\n     * corresponding IO thread’s event loop and let it handle the reply. */\n    if (likely(c->running_tid == IOTHREAD_MAIN_THREAD_ID) && !clientHasPendingReplies(c))\n        putClientInPendingWriteQueue(c);\n\n    /* Authorize the caller to queue in the output buffer of this client. */\n    return C_OK;\n}\n\n/* This function is called every time we are going to transmit new data\n * to the client. The behavior is the following:\n *\n * If the client should receive new data (normal clients will) the function\n * returns C_OK, and makes sure to install the write handler in our event\n * loop so that when the socket is writable new data gets written.\n *\n * If the client should not receive new data, because it is a fake client\n * (used to load AOF in memory), a master or because the setup of the write\n * handler failed, the function returns C_ERR.\n *\n * The function may return C_OK without actually installing the write\n * event handler in the following cases:\n *\n * 1) The event handler should already be installed since the output buffer\n *    already contains something.\n * 2) The client is a slave but not yet online, so we want to just accumulate\n *    writes in the buffer but not actually send them yet.\n *\n * Typically gets called every time a reply is built, before adding more\n * data to the client's output buffers. If the function returns C_ERR no\n * data should be appended to the output buffers. 
*/\nint prepareClientToWrite(client *c) {\n    return _prepareClientToWrite(c);\n}\n\n/* -----------------------------------------------------------------------------\n * Low level functions to add more data to output buffers.\n * -------------------------------------------------------------------------- */\n\nstatic int tryAddPayload(char *buf, size_t *used, size_t size, uint8_t type, const void *payload, size_t len) {\n    if (*used + sizeof(payloadHeader) + len > size) return 0;\n\n    /* Start a new payload chunk */\n    payloadHeader *header = (payloadHeader *)(buf + *used);\n    header->payload_type = type;\n    header->payload_len = len;\n    memcpy((char *)header + sizeof(payloadHeader), payload, len);\n    *used += sizeof(payloadHeader) + len;\n    return 1;\n}\n\n/* Adds the payload to the reply linked list.\n * Note: some edits to this function need to be relayed to AddReplyFromClient. */\nstatic void _addReplyPayloadToList(client *c, list *reply_list, const char *payload, size_t len, uint8_t payload_type) {\n    listNode *ln = listLast(reply_list);\n    clientReplyBlock *tail = ln ? listNodeValue(ln) : NULL;\n    /* Determine if encoded buffer is required */\n    int encoded = payload_type == BULK_STR_REF;\n\n    /* Note that 'tail' may be NULL even if we have a tail node, because when\n     * addReplyDeferredLen() is used, it sets a dummy node to NULL just\n     * to fill it later, when the size of the bulk length is set. */\n\n    /* Append to tail node when possible. */\n    if (tail) {\n        if (unlikely(tail->buf_encoded)) {\n            /* Try to add to encoded buffer */\n            if (tryAddPayload(tail->buf, &tail->used, tail->size, payload_type, (void *)payload, len)) {\n                len = 0;\n            }\n        } else if (!encoded) {\n            /* Both tail and new payload are non-encoded, can append directly */\n            size_t avail = tail->size - tail->used;\n            size_t copy = avail >= len ? 
len : avail;\n            if (copy > 0) {\n                memcpy(tail->buf + tail->used, payload, copy);\n                tail->used += copy;\n                payload += copy;\n                len -= copy;\n            }\n        }\n        /* else: tail is non-encoded but new payload needs encoding, can't append */\n    }\n\n    if (len) {\n        /* Create a new node, make sure it is allocated to at\n         * least PROTO_REPLY_CHUNK_BYTES */\n        size_t usable_size;\n        size_t required_size = encoded ? len + sizeof(payloadHeader) : len;\n        size_t size = required_size < PROTO_REPLY_CHUNK_BYTES ? PROTO_REPLY_CHUNK_BYTES : required_size;\n        tail = zmalloc_usable(size + sizeof(clientReplyBlock), &usable_size);\n        /* take over the allocation's internal fragmentation */\n        tail->size = usable_size - sizeof(clientReplyBlock);\n        tail->used = 0;\n        tail->buf_encoded = encoded;\n        if (tail->buf_encoded) {\n            serverAssert(tryAddPayload(tail->buf, &tail->used, tail->size, payload_type, (void *)payload, len));\n        } else {\n            tail->used = len;\n            memcpy(tail->buf, payload, len);\n        }\n        listAddNodeTail(reply_list, tail);\n        c->reply_bytes += tail->size;\n\n        closeClientOnOutputBufferLimitReached(c, 1);\n    }\n}\n\n/* The subscribe / unsubscribe command family has a push as a reply,\n * or in other words, it responds with a push (or several of them\n * depending on how many arguments it got), and has no reply. 
*/\nint cmdHasPushAsReply(struct redisCommand *cmd) {\n    if (!cmd) return 0;\n    return cmd->proc == subscribeCommand  || cmd->proc == unsubscribeCommand ||\n           cmd->proc == psubscribeCommand || cmd->proc == punsubscribeCommand ||\n           cmd->proc == ssubscribeCommand || cmd->proc == sunsubscribeCommand;\n}\n\n/* Attempts to add the reply to the static buffer in the client struct.\n * Returns the length of data that is added to the reply buffer. */\nstatic size_t _addReplyPayloadToBuffer(client *c, const void *payload, size_t len, uint8_t payload_type) {\n    /* If there already are entries in the reply list, we cannot\n     * add anything more to the static buffer. */\n    if (listLength(c->reply) > 0) return 0;\n\n    size_t available = c->buf_usable_size - c->bufpos;\n    size_t reply_len = min(available, len);\n    if (c->buf_encoded) {\n        if (!tryAddPayload(c->buf, &c->bufpos, c->buf_usable_size, payload_type, payload, len))\n            return 0;\n        reply_len = len;\n    } else {\n        memcpy(c->buf + c->bufpos, payload, reply_len);\n        c->bufpos+=reply_len;\n    }\n\n    /* We update the buffer peak after appending the reply to the buffer */\n    if (c->buf_peak < (size_t)c->bufpos) c->buf_peak = (size_t)c->bufpos;\n    return reply_len;\n}\n\n/* Adds bulk string reference (i.e. 
pointer to object and pointer to string itself) to static buffer\n * Returns non-zero value if succeeded to add */\nstatic size_t _addBulkStrRefToBuffer(client *c, const void *payload, size_t len) {\n    if (!c->buf_encoded) {\n        /* If buffer is plain and not empty then can't add bulk string reference to it */\n        if (c->bufpos) return 0;\n        c->buf_encoded = 1; /* Set c->buf to encoded mode to allow bulk string reference to be stored in it */\n        size_t result = _addReplyPayloadToBuffer(c, payload, len, BULK_STR_REF);\n        if (!result) {\n            /* Failed to add bulk string reference to buffer, need to revert to plain mode. */\n            c->buf_encoded = 0;\n        }\n        return result;\n    }\n    return _addReplyPayloadToBuffer(c, payload, len, BULK_STR_REF);\n}\n\nvoid _addReplyToBufferOrList(client *c, const char *s, size_t len) {\n    if (c->flags & CLIENT_CLOSE_AFTER_REPLY) return;\n\n    /* Replicas should normally not cause any writes to the reply buffer. In case a rogue replica sent a command on the\n     * replication link that caused a reply to be generated we'll simply disconnect it.\n     * Note this is the simplest way to check a command added a response. Replication links are used to write data but\n     * not for responses, so we should normally never get here on a replica client. */\n    if (unlikely(clientTypeIsSlave(c))) {\n        sds cmdname = c->lastcmd ? c->lastcmd->fullname : NULL;\n        logInvalidUseAndFreeClientAsync(c, \"Replica generated a reply to command '%s'\",\n                                        cmdname ? cmdname : \"<unknown>\");\n        return;\n    }\n\n    c->net_output_bytes_curr_cmd += len;\n    /* We call it here because this function may affect the reply\n     * buffer offset (see function comment) */\n    reqresSaveClientReplyOffset(c);\n\n    /* If we're processing a push message into the current client (i.e. 
executing PUBLISH\n     * to a channel which we are subscribed to), then we want to postpone that message to be added\n     * after the command's reply (specifically important during multi-exec). The exception is\n     * the SUBSCRIBE command family, which (currently) has a push message instead of a proper reply.\n     * The check for executing_client also avoids affecting push messages that are part of eviction.\n     * Check CLIENT_PUSHING first to avoid race conditions, as it's absent in module's fake client. */\n    if ((c->flags & CLIENT_PUSHING) && c == server.current_client &&\n        server.executing_client && !cmdHasPushAsReply(server.executing_client->cmd))\n    {\n        _addReplyPayloadToList(c,server.pending_push_messages,s,len,PLAIN_REPLY);\n        return;\n    }\n\n    size_t reply_len = _addReplyPayloadToBuffer(c, s, len, PLAIN_REPLY);\n    if (len > reply_len)\n        _addReplyPayloadToList(c, c->reply, s + reply_len, len - reply_len, PLAIN_REPLY);\n}\n\n/* Check if the client's pending_ref_reply_node is currently linked in the list.\n * A node is considered linked if it has neighbors (prev/next), or if it's the\n * only node in the list (head points to it). 
*/\nstatic inline int clientIsInPendingRefReplyList(client *c) {\n    return listNextNode(&c->pending_ref_reply_node) != NULL ||\n           listPrevNode(&c->pending_ref_reply_node) != NULL ||\n           listFirst(server.clients_with_pending_ref_reply) == &c->pending_ref_reply_node;\n}\n\n/* Increment reference to object and add pointer to object and\n * pointer to string itself to current reply buffer */\nstatic void _addBulkStrRefToBufferOrList(client *c, robj *obj, size_t len) {\n    if (c->flags & CLIENT_CLOSE_AFTER_REPLY) return;\n\n    bulkStrRef str_ref;\n    str_ref.obj = obj;\n    incrRefCount(obj); /* Refcount will be decremented in write handler */\n\n    /* Fill prefix with bulk string length: \"$<len>\\r\\n\" */\n    str_ref.prefix[0] = '$';\n    size_t num_len = ll2string(str_ref.prefix + 1, sizeof(str_ref.prefix) - 3, len);\n    str_ref.prefix[num_len + 1] = '\\r';\n    str_ref.prefix[num_len + 2] = '\\n';\n    str_ref.prefix_cnt = num_len + 3;\n    str_ref.crlf[0] = '\\r';\n    str_ref.crlf[1] = '\\n';\n\n    /* Track output bytes: bulk string prefix + content + trailing CRLF */\n    c->net_output_bytes_curr_cmd += str_ref.prefix_cnt + len + 2;\n\n    /* We call it here because this function may affect the reply\n     * buffer offset (see function comment) */\n    reqresSaveClientReplyOffset(c);\n\n    if (!_addBulkStrRefToBuffer(c, (void *)&str_ref, sizeof(str_ref))) {\n        _addReplyPayloadToList(c, c->reply, (void *)&str_ref, sizeof(str_ref), BULK_STR_REF);\n    }\n\n    /* Track clients with pending referenced reply objects for async flushdb protection. 
*/\n    if (!clientIsInPendingRefReplyList(c)) {\n        listLinkNodeTail(server.clients_with_pending_ref_reply, &c->pending_ref_reply_node);\n    }\n}\n\n/* -----------------------------------------------------------------------------\n * Higher level functions to queue data on the client output buffer.\n * The following functions are the ones that command implementations will call.\n * -------------------------------------------------------------------------- */\n\n/* Add the object 'obj' string representation to the client output buffer. */\nvoid addReply(client *c, robj *obj) {\n    if (_prepareClientToWrite(c) != C_OK) return;\n\n    if (sdsEncodedObject(obj)) {\n        _addReplyToBufferOrList(c,obj->ptr,sdslen(obj->ptr));\n    } else if (obj->encoding == OBJ_ENCODING_INT) {\n        /* For integer encoded strings we just convert it into a string\n         * using our optimized function, and attach the resulting string\n         * to the output buffer. */\n        char buf[32];\n        size_t len = ll2string(buf,sizeof(buf),(long)obj->ptr);\n        _addReplyToBufferOrList(c,buf,len);\n    } else {\n        serverPanic(\"Wrong obj->encoding in addReply()\");\n    }\n}\n\n/* Add the SDS 's' string to the client output buffer, as a side effect\n * the SDS string is freed. */\nvoid addReplySds(client *c, sds s) {\n    if (_prepareClientToWrite(c) != C_OK) {\n        /* The caller expects the sds to be free'd. */\n        sdsfree(s);\n        return;\n    }\n    _addReplyToBufferOrList(c,s,sdslen(s));\n    sdsfree(s);\n}\n\n/* This low level function just adds whatever protocol you send it to the\n * client buffer, trying the static buffer initially, and using the string\n * of objects if not possible.\n *\n * It is efficient because it does not create an SDS object nor a Redis object\n * if not needed. The object will only be created by calling\n * _addReplyProtoToList() if we fail to extend the existing tail object\n * in the list of objects. 
*/\nvoid addReplyProto(client *c, const char *s, size_t len) {\n    if (_prepareClientToWrite(c) != C_OK) return;\n    _addReplyToBufferOrList(c,s,len);\n}\n\n/* Low level function called by the addReplyError...() functions.\n * It emits the protocol for a Redis error, in the form:\n *\n * -ERRORCODE Error Message<CR><LF>\n *\n * If the error code is already passed in the string 's', the error\n * code provided is used, otherwise the string \"-ERR \" for the generic\n * error code is automatically added.\n * Note that 's' must NOT end with \\r\\n. */\nvoid addReplyErrorLength(client *c, const char *s, size_t len) {\n    /* If the string already starts with \"-...\" then the error code\n     * is provided by the caller. Otherwise we use \"-ERR\". */\n    if (!len || s[0] != '-') addReplyProto(c,\"-ERR \",5);\n    addReplyProto(c,s,len);\n    addReplyProto(c,\"\\r\\n\",2);\n}\n\n/* Do some actions after an error reply was sent (Log if needed, updates stats, etc.)\n * Possible flags:\n * * ERR_REPLY_FLAG_NO_STATS_UPDATE - indicate not to update any error stats. */\nvoid afterErrorReply(client *c, const char *s, size_t len, int flags) {\n    /* Module clients fall into two categories:\n     * Calls to RM_Call, in which case the error isn't being returned to a client, so should not be counted.\n     * Module thread safe context calls to RM_ReplyWithError, which will be added to a real client by the main thread later. 
*/\n    if (c->flags & CLIENT_MODULE) {\n        if (!c->deferred_reply_errors) {\n            c->deferred_reply_errors = listCreate();\n            listSetFreeMethod(c->deferred_reply_errors, sdsfreegeneric);\n        }\n        listAddNodeTail(c->deferred_reply_errors, sdsnewlen(s, len));\n        return;\n    }\n\n    if (!(flags & ERR_REPLY_FLAG_NO_STATS_UPDATE)) {\n        /* Increment the global error counter */\n        server.stat_total_error_replies++;\n        /* Increment the error stats\n         * If the string already starts with \"-...\" then the error prefix\n         * is provided by the caller ( we limit the search to 32 chars). Otherwise we use \"-ERR\". */\n        if (s[0] != '-') {\n            incrementErrorCount(\"ERR\", 3);\n        } else {\n            char *spaceloc = memchr(s, ' ', len < 32 ? len : 32);\n            if (spaceloc) {\n                const size_t errEndPos = (size_t)(spaceloc - s);\n                incrementErrorCount(s+1, errEndPos-1);\n            } else {\n                /* Fallback to ERR if we can't retrieve the error prefix */\n                incrementErrorCount(\"ERR\", 3);\n            }\n        }\n    } else {\n        /* stat_total_error_replies will not be updated, which means that\n         * the cmd stats will not be updated as well, we still want this command\n         * to be counted as failed so we update it here. We update c->realcmd in\n         * case c->cmd was changed (like in GEOADD). */\n        c->realcmd->failed_calls++;\n    }\n\n    /* Sometimes it could be normal that a slave replies to a master with\n     * an error and this function gets called. Actually the error will never\n     * be sent because addReply*() against master clients has no effect...\n     * A notable example is:\n     *\n     *    EVAL 'redis.call(\"incr\",KEYS[1]); redis.call(\"nonexisting\")' 1 x\n     *\n     * Where the master must propagate the first change even if the second\n     * will produce an error. 
However it is useful to log such events since\n     * they are rare and may hint at errors in a script or a bug in Redis. */\n    int ctype = getClientType(c);\n    if (ctype == CLIENT_TYPE_MASTER || ctype == CLIENT_TYPE_SLAVE || c->id == CLIENT_ID_AOF) {\n        char *to, *from;\n\n        if (c->id == CLIENT_ID_AOF) {\n            to = \"AOF-loading-client\";\n            from = \"server\";\n        } else if (ctype == CLIENT_TYPE_MASTER) {\n            if (c->flags & CLIENT_ASM_IMPORTING) {\n                to = \"source\";\n                from = \"destination\";\n            } else {\n                to = \"master\";\n                from = \"replica\";\n            }\n        } else {\n            to = \"replica\";\n            from = \"master\";\n        }\n\n        if (len > 4096) len = 4096;\n        sds cmdname = c->lastcmd ? c->lastcmd->fullname : NULL;\n        serverLog(LL_WARNING,\"== CRITICAL == This %s is sending an error \"\n                             \"to its %s: '%.*s' after processing the command \"\n                             \"'%s'\", from, to, (int)len, s, cmdname ? cmdname : \"<unknown>\");\n        if (ctype == CLIENT_TYPE_MASTER && server.repl_backlog &&\n            !(c->flags & CLIENT_ASM_IMPORTING) && server.repl_backlog->histlen > 0)\n        {\n            showLatestBacklog();\n        }\n        server.stat_unexpected_error_replies++;\n\n        /* Based off the propagation error behavior, check if we need to panic here. There\n         * are currently two checked cases:\n         * * If this command was from our master and we are not a writable replica.\n         * * We are reading from an AOF file. 
*/\n        int panic_in_replicas = (ctype == CLIENT_TYPE_MASTER && server.repl_slave_ro)\n            && (server.propagation_error_behavior == PROPAGATION_ERR_BEHAVIOR_PANIC ||\n            server.propagation_error_behavior == PROPAGATION_ERR_BEHAVIOR_PANIC_ON_REPLICAS);\n        int panic_in_aof = c->id == CLIENT_ID_AOF \n            && server.propagation_error_behavior == PROPAGATION_ERR_BEHAVIOR_PANIC;\n        if (panic_in_replicas || panic_in_aof) {\n            serverPanic(\"This %s panicked sending an error to its %s\"\n                \" after processing the command '%s'\",\n                from, to, cmdname ? cmdname : \"<unknown>\");\n        }\n    }\n}\n\n/* The 'err' object is expected to start with -ERRORCODE and end with \\r\\n.\n * Unlike addReplyErrorSds and others alike which rely on addReplyErrorLength. */\nvoid addReplyErrorObject(client *c, robj *err) {\n    addReply(c, err);\n    afterErrorReply(c, err->ptr, sdslen(err->ptr)-2, 0); /* Ignore trailing \\r\\n */\n}\n\n/* Sends either a reply or an error reply by checking the first char.\n * If the first char is '-' the reply is considered an error.\n * In any case the given reply is sent; if the reply is also recognized\n * as an error we also perform some post reply operations such as\n * logging and stats update. */\nvoid addReplyOrErrorObject(client *c, robj *reply) {\n    serverAssert(sdsEncodedObject(reply));\n    sds rep = reply->ptr;\n    if (sdslen(rep) > 1 && rep[0] == '-') {\n        addReplyErrorObject(c, reply);\n    } else {\n        addReply(c, reply);\n    }\n}\n\n/* See addReplyErrorLength for expectations from the input string. 
*/\nvoid addReplyError(client *c, const char *err) {\n    addReplyErrorLength(c,err,strlen(err));\n    afterErrorReply(c,err,strlen(err),0);\n}\n\n/* Add error reply to the given client.\n * Supported flags:\n * * ERR_REPLY_FLAG_NO_STATS_UPDATE - indicate not to perform any error stats updates */\nvoid addReplyErrorSdsEx(client *c, sds err, int flags) {\n    addReplyErrorLength(c,err,sdslen(err));\n    afterErrorReply(c,err,sdslen(err),flags);\n    sdsfree(err);\n}\n\n/* See addReplyErrorLength for expectations from the input string. */\n/* As a side effect the SDS string is freed. */\nvoid addReplyErrorSds(client *c, sds err) {\n    addReplyErrorSdsEx(c, err, 0);\n}\n\n/* See addReplyErrorLength for expectations from the input string. */\n/* As a side effect the SDS string is freed. */\nvoid addReplyErrorSdsSafe(client *c, sds err) {\n    err = sdsmapchars(err, \"\\r\\n\", \"  \",  2);\n    addReplyErrorSdsEx(c, err, 0);\n}\n\n/* Internal function used by addReplyErrorFormat, addReplyErrorFormatEx and RM_ReplyWithErrorFormat.\n * Refer to afterErrorReply for more information about the flags. */\nvoid addReplyErrorFormatInternal(client *c, int flags, const char *fmt, va_list ap) {\n    va_list cpy;\n    va_copy(cpy,ap);\n    sds s = sdscatvprintf(sdsempty(),fmt,cpy);\n    va_end(cpy);\n    /* Trim any newlines at the end (ones will be added by addReplyErrorLength) */\n    s = sdstrim(s, \"\\r\\n\");\n    /* Make sure there are no newlines in the middle of the string, otherwise\n     * invalid protocol is emitted. */\n    s = sdsmapchars(s, \"\\r\\n\", \"  \",  2);\n    addReplyErrorLength(c,s,sdslen(s));\n    afterErrorReply(c,s,sdslen(s),flags);\n    sdsfree(s);\n}\n\nvoid addReplyErrorFormatEx(client *c, int flags, const char *fmt, ...) 
{\n    va_list ap;\n    va_start(ap,fmt);\n    addReplyErrorFormatInternal(c, flags, fmt, ap);\n    va_end(ap);\n}\n\n/* See addReplyErrorLength for expectations from the formatted string.\n * The formatted string is safe to contain \r and \n anywhere. */\nvoid addReplyErrorFormat(client *c, const char *fmt, ...) {\n    va_list ap;\n    va_start(ap,fmt);\n    addReplyErrorFormatInternal(c, 0, fmt, ap);\n    va_end(ap);\n}\n\nvoid addReplyErrorArity(client *c) {\n    addReplyErrorFormat(c, \"wrong number of arguments for '%s' command\",\n                        c->cmd->fullname);\n}\n\nvoid addReplyErrorExpireTime(client *c) {\n    addReplyErrorFormat(c, \"invalid expire time in '%s' command\",\n                        c->cmd->fullname);\n}\n\nvoid addReplyStatusLength(client *c, const char *s, size_t len) {\n    addReplyProto(c,\"+\",1);\n    addReplyProto(c,s,len);\n    addReplyProto(c,\"\r\n\",2);\n}\n\nvoid addReplyStatus(client *c, const char *status) {\n    addReplyStatusLength(c,status,strlen(status));\n}\n\nvoid addReplyStatusFormat(client *c, const char *fmt, ...) {\n    va_list ap;\n    va_start(ap,fmt);\n    sds s = sdscatvprintf(sdsempty(),fmt,ap);\n    va_end(ap);\n    addReplyStatusLength(c,s,sdslen(s));\n    sdsfree(s);\n}\n\n/* Sometimes we are forced to create a new reply node, and we can't append to\n * the previous one. When that happens, we try to trim the unused space at\n * the end of the last reply node, which we won't use anymore. */\nvoid trimReplyUnusedTailSpace(client *c) {\n    listNode *ln = listLast(c->reply);\n    clientReplyBlock *tail = ln? 
listNodeValue(ln): NULL;\n\n    /* Note that 'tail' may be NULL even if we have a tail node, because the\n     * node value is left NULL when addReplyDeferredLen() is used. */\n    if (!tail) return;\n\n    /* We only try to trim when the unused space is relatively high (more than a 1/4 of the\n     * allocation), otherwise there's a high chance realloc will NOP.\n     * Also, to avoid large memmove which happens as part of realloc, we only do\n     * that if the used part is small.  */\n    if (tail->size - tail->used > tail->size / 4 &&\n        tail->used < PROTO_REPLY_CHUNK_BYTES && !tail->buf_encoded)\n    {\n        size_t usable_size;\n        size_t old_size = tail->size;\n        tail = zrealloc_usable(tail, tail->used + sizeof(clientReplyBlock), &usable_size, NULL);\n        /* take over the allocation's internal fragmentation (at least for\n         * memory usage tracking) */\n        tail->size = usable_size - sizeof(clientReplyBlock);\n        c->reply_bytes = c->reply_bytes + tail->size - old_size;\n        listNodeValue(ln) = tail;\n    }\n}\n\n/* Adds an empty object to the reply list that will contain the multi bulk\n * length, which is not known when this function is called. */\nvoid *addReplyDeferredLen(client *c) {\n    /* Note that we install the write event here even if the object is not\n     * ready to be sent, since we are sure that before returning to the\n     * event loop setDeferredAggregateLen() will be called. */\n    if (_prepareClientToWrite(c) != C_OK) return NULL;\n\n    /* Replicas should normally not cause any writes to the reply buffer. In case a rogue replica sent a command on the\n     * replication link that caused a reply to be generated we'll simply disconnect it.\n     * Note this is the simplest way to check that a command added a response. Replication links are used to write data but\n     * not for responses, so we should normally never get here on a replica client. */\n    if (unlikely(clientTypeIsSlave(c))) {\n        sds cmdname = c->lastcmd ? 
c->lastcmd->fullname : NULL;\n        logInvalidUseAndFreeClientAsync(c, \"Replica generated a reply to command '%s'\",\n                                        cmdname ? cmdname : \"<unknown>\");\n        return NULL;\n    }\n\n    /* We call it here because this function conceptually affects the reply\n     * buffer offset (see function comment) */\n    reqresSaveClientReplyOffset(c);\n\n    trimReplyUnusedTailSpace(c);\n    listAddNodeTail(c->reply,NULL); /* NULL is our placeholder. */\n    return listLast(c->reply);\n}\n\nvoid setDeferredReply(client *c, void *node, const char *s, size_t length) {\n    listNode *ln = (listNode*)node;\n    clientReplyBlock *next, *prev;\n\n    /* Abort when *node is NULL: when the client should not accept writes\n     * we return NULL in addReplyDeferredLen() */\n    if (node == NULL) return;\n    serverAssert(!listNodeValue(ln));\n\n    /* Normally we fill this dummy NULL node, added by addReplyDeferredLen(),\n     * with a new buffer structure containing the protocol needed to specify\n     * the length of the array following. However sometimes there might be room\n     * in the previous/next node so we can instead remove this NULL node, and\n     * suffix/prefix our data in the node immediately before/after it, in order\n     * to save a write(2) syscall later. 
Conditions needed to do it:\n     *\n     * - The prev node is non-NULL and has free space in it, or\n     * - The next node is non-NULL and:\n     *   - it already has enough room allocated, and\n     *   - it is not too large (to avoid a large memmove). */\n    if (ln->prev != NULL && (prev = listNodeValue(ln->prev)) &&\n        prev->used < prev->size && !prev->buf_encoded)\n    {\n        size_t len_to_copy = prev->size - prev->used;\n        if (len_to_copy > length)\n            len_to_copy = length;\n        memcpy(prev->buf + prev->used, s, len_to_copy);\n        c->net_output_bytes_curr_cmd += len_to_copy;\n        prev->used += len_to_copy;\n        length -= len_to_copy;\n        if (length == 0) {\n            listDelNode(c->reply, ln);\n            return;\n        }\n        s += len_to_copy;\n    }\n\n    if (ln->next != NULL && (next = listNodeValue(ln->next)) &&\n        next->size - next->used >= length &&\n        next->used < PROTO_REPLY_CHUNK_BYTES * 4 &&\n        !next->buf_encoded)\n    {\n        memmove(next->buf + length, next->buf, next->used);\n        memcpy(next->buf, s, length);\n        c->net_output_bytes_curr_cmd += length;\n        next->used += length;\n        listDelNode(c->reply,ln);\n    } else {\n        /* Create a new node */\n        size_t usable_size;\n        clientReplyBlock *buf = zmalloc_usable(length + sizeof(clientReplyBlock), &usable_size);\n        /* Take over the allocation's internal fragmentation */\n        buf->size = usable_size - sizeof(clientReplyBlock);\n        buf->used = length;\n        buf->buf_encoded = 0;\n        memcpy(buf->buf, s, length);\n        c->net_output_bytes_curr_cmd += length;\n        listNodeValue(ln) = buf;\n        c->reply_bytes += buf->size;\n\n        closeClientOnOutputBufferLimitReached(c, 1);\n    }\n}\n\n/* Populate the length object and try gluing it to the next chunk. 
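\n *\n * For example (illustrative sketch), a command that doesn't know its reply count\n * upfront can use the deferred-length API like this:\n *\n *     void *replylen = addReplyDeferredLen(c);\n *     long count = 0;\n *     ... emit one reply per matching element, incrementing count ...\n *     setDeferredArrayLen(c, replylen, count);\n 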
*/\nvoid setDeferredAggregateLen(client *c, void *node, long length, char prefix) {\n    serverAssert(length >= 0);\n\n    /* Abort when *node is NULL: when the client should not accept writes\n     * we return NULL in addReplyDeferredLen() */\n    if (node == NULL) return;\n\n    /* Things like *2\r\n, %3\r\n or ~4\r\n are emitted very often by the protocol\n     * so we have a few shared objects to use if the integer is small\n     * as it is most of the time. */\n    const size_t hdr_len = OBJ_SHARED_HDR_STRLEN(length);\n    const int opt_hdr = length < OBJ_SHARED_BULKHDR_LEN;\n    if (prefix == '*' && opt_hdr) {\n        setDeferredReply(c, node, shared.mbulkhdr[length]->ptr, hdr_len);\n        return;\n    }\n    if (prefix == '%' && opt_hdr) {\n        setDeferredReply(c, node, shared.maphdr[length]->ptr, hdr_len);\n        return;\n    }\n    if (prefix == '~' && opt_hdr) {\n        setDeferredReply(c, node, shared.sethdr[length]->ptr, hdr_len);\n        return;\n    }\n\n    char lenstr[128];\n    size_t lenstr_len = snprintf(lenstr, sizeof(lenstr), \"%c%ld\r\n\", prefix, length);\n    setDeferredReply(c, node, lenstr, lenstr_len);\n}\n\nvoid setDeferredArrayLen(client *c, void *node, long length) {\n    setDeferredAggregateLen(c,node,length,'*');\n}\n\nvoid setDeferredMapLen(client *c, void *node, long length) {\n    int prefix = c->resp == 2 ? '*' : '%';\n    if (c->resp == 2) length *= 2;\n    setDeferredAggregateLen(c,node,length,prefix);\n}\n\nvoid setDeferredSetLen(client *c, void *node, long length) {\n    int prefix = c->resp == 2 ? 
'*' : '~';\n    setDeferredAggregateLen(c,node,length,prefix);\n}\n\nvoid setDeferredAttributeLen(client *c, void *node, long length) {\n    serverAssert(c->resp >= 3);\n    setDeferredAggregateLen(c,node,length,'|');\n}\n\nvoid setDeferredPushLen(client *c, void *node, long length) {\n    serverAssert(c->resp >= 3);\n    setDeferredAggregateLen(c,node,length,'>');\n}\n\n/* Add a double as a bulk reply */\nvoid addReplyDouble(client *c, double d) {\n    if (c->resp == 3) {\n        char dbuf[MAX_D2STRING_CHARS+3];\n        dbuf[0] = ',';\n        const int dlen = d2string(dbuf+1,sizeof(dbuf)-1,d);\n        dbuf[dlen+1] = '\r';\n        dbuf[dlen+2] = '\n';\n        dbuf[dlen+3] = '\0';\n        addReplyProto(c,dbuf,dlen+3);\n    } else {\n        char dbuf[MAX_LONG_DOUBLE_CHARS+32];\n        /* In order to prepend the string length before the formatted number,\n         * but still avoid an extra memcpy of the whole number, we reserve space\n         * for maximum header `$0000\r\n`, print double, add the resp header in\n         * front of it, and then send the buffer with the right `start` offset. */\n        const int dlen = d2string(dbuf+7,sizeof(dbuf)-7,d);\n        int digits = digits10(dlen);\n        int start = 4 - digits;\n        serverAssert(start >= 0);\n        dbuf[start] = '$';\n\n        /* Convert `dlen` to string, putting its digits after '$' and before the\n         * formatted double string. 
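\n         *\n         * For example (illustrative): if the formatted value is \"3.14\" (dlen = 4),\n         * the bytes sent are \"$4\r\n3.14\r\n\", starting at the '$' offset.\n         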
*/\n        for(int i = digits, val = dlen; val && i > 0 ; --i, val /= 10) {\n            dbuf[start + i] = \"0123456789\"[val % 10];\n        }\n        dbuf[5] = '\\r';\n        dbuf[6] = '\\n';\n        dbuf[dlen+7] = '\\r';\n        dbuf[dlen+8] = '\\n';\n        dbuf[dlen+9] = '\\0';\n        addReplyProto(c,dbuf+start,dlen+9-start);\n    }\n}\n\nvoid addReplyBigNum(client *c, const char* num, size_t len) {\n    if (c->resp == 2) {\n        addReplyBulkCBuffer(c, num, len);\n    } else {\n        addReplyProto(c,\"(\",1);\n        addReplyProto(c,num,len);\n        addReplyProto(c,\"\\r\\n\",2);\n    }\n}\n\n/* Add a long double as a bulk reply, but uses a human readable formatting\n * of the double instead of exposing the crude behavior of doubles to the\n * dear user. */\nvoid addReplyHumanLongDouble(client *c, long double d) {\n    if (c->resp == 2) {\n        robj *o = createStringObjectFromLongDouble(d,1);\n        addReplyBulk(c,o);\n        decrRefCount(o);\n    } else {\n        char buf[MAX_LONG_DOUBLE_CHARS];\n        int len = ld2string(buf,sizeof(buf),d,LD_STR_HUMAN);\n        addReplyProto(c,\",\",1);\n        addReplyProto(c,buf,len);\n        addReplyProto(c,\"\\r\\n\",2);\n    }\n}\n\nstatic inline void _addReplyLongLongSharedHdr(client *c, long long ll, char prefix,\n                                              robj *shared_hdr[OBJ_SHARED_BULKHDR_LEN])\n{\n    char buf[128];\n    int len;\n    const int opt_hdr = ll < OBJ_SHARED_BULKHDR_LEN && ll >= 0;\n\n    if (opt_hdr) {\n        _addReplyToBufferOrList(c, shared_hdr[ll]->ptr, OBJ_SHARED_HDR_STRLEN(ll));\n        return;\n    }\n\n    buf[0] = prefix;\n    len = ll2string(buf + 1, sizeof(buf) - 1, ll);\n    buf[len + 1] = '\\r';\n    buf[len + 2] = '\\n';\n    _addReplyToBufferOrList(c, buf, len + 3);\n}\n\nstatic inline void _addReplyLongLongBulk(client *c, long long ll) {\n    _addReplyLongLongSharedHdr(c, ll, '$', shared.bulkhdr);\n}\n\nstatic inline void _addReplyLongLongMBulk(client 
*c, long long ll) {\n    _addReplyLongLongSharedHdr(c, ll, '*', shared.mbulkhdr);\n}\n\n/* Add a long long as integer reply or bulk len / multi bulk count.\n * Basically this is used to output <prefix><long long><crlf>. */\nstatic void _addReplyLongLongWithPrefix(client *c, long long ll, char prefix) {\n    char buf[128];\n    int len;\n\n    /* Things like $3\r\n or *2\r\n are emitted very often by the protocol\n     * so we have a few shared objects to use if the integer is small\n     * as it is most of the time. */\n    const int opt_hdr = ll < OBJ_SHARED_BULKHDR_LEN && ll >= 0;\n    const size_t hdr_len = OBJ_SHARED_HDR_STRLEN(ll);\n    if (prefix == '*' && opt_hdr) {\n        _addReplyToBufferOrList(c, shared.mbulkhdr[ll]->ptr, hdr_len);\n        return;\n    } else if (prefix == '$' && opt_hdr) {\n        _addReplyToBufferOrList(c, shared.bulkhdr[ll]->ptr, hdr_len);\n        return;\n    } else if (prefix == '%' && opt_hdr) {\n        _addReplyToBufferOrList(c, shared.maphdr[ll]->ptr, hdr_len);\n        return;\n    } else if (prefix == '~' && opt_hdr) {\n        _addReplyToBufferOrList(c, shared.sethdr[ll]->ptr, hdr_len);\n        return;\n    }\n\n    buf[0] = prefix;\n    len = ll2string(buf + 1, sizeof(buf) - 1, ll);\n    buf[len + 1] = '\r';\n    buf[len + 2] = '\n';\n    _addReplyToBufferOrList(c, buf, len + 3);\n}\n\nvoid addReplyLongLong(client *c, long long ll) {\n    if (ll == 0)\n        addReply(c,shared.czero);\n    else if (ll == 1)\n        addReply(c, shared.cone);\n    else {\n        if (_prepareClientToWrite(c) != C_OK) return;\n        _addReplyLongLongWithPrefix(c, ll, ':');\n    }\n}\n\nvoid addReplyLongLongFromStr(client *c, robj *str) {\n    addReplyProto(c,\":\",1);\n    addReply(c,str);\n    addReplyProto(c,\"\r\n\",2);\n}\n\nvoid addReplyAggregateLen(client *c, long length, int prefix) {\n    serverAssert(length >= 0);\n    if (_prepareClientToWrite(c) != C_OK) return;\n    _addReplyLongLongWithPrefix(c, length, 
prefix);\n}\n\nvoid addReplyArrayLen(client *c, long length) {\n    serverAssert(length >= 0);\n    if (_prepareClientToWrite(c) != C_OK) return;\n    _addReplyLongLongMBulk(c, length);\n}\n\nvoid addReplyMapLen(client *c, long length) {\n    int prefix = c->resp == 2 ? '*' : '%';\n    if (c->resp == 2) length *= 2;\n    addReplyAggregateLen(c,length,prefix);\n}\n\nvoid addReplySetLen(client *c, long length) {\n    int prefix = c->resp == 2 ? '*' : '~';\n    addReplyAggregateLen(c,length,prefix);\n}\n\nvoid addReplyAttributeLen(client *c, long length) {\n    serverAssert(c->resp >= 3);\n    addReplyAggregateLen(c,length,'|');\n}\n\nvoid addReplyPushLen(client *c, long length) {\n    serverAssert(c->resp >= 3);\n    serverAssertWithInfo(c, NULL, c->flags & CLIENT_PUSHING);\n    addReplyAggregateLen(c,length,'>');\n}\n\nvoid addReplyNull(client *c) {\n    if (c->resp == 2) {\n        addReplyProto(c,\"$-1\r\n\",5);\n    } else {\n        addReplyProto(c,\"_\r\n\",3);\n    }\n}\n\nvoid addReplyBool(client *c, int b) {\n    if (c->resp == 2) {\n        addReply(c, b ? shared.cone : shared.czero);\n    } else {\n        addReplyProto(c, b ? \"#t\r\n\" : \"#f\r\n\",4);\n    }\n}\n\n/* A null array is a concept that no longer exists in RESP3. However\n * RESP2 had it, so API-wise we have this call that will emit the correct\n * RESP2 protocol; for RESP3 the reply will always be just the\n * Null type \"_\r\n\". 
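\n *\n * For example (illustrative):\n *     RESP2 client: addReplyNullArray(c) emits \"*-1\r\n\"\n *     RESP3 client: addReplyNullArray(c) emits \"_\r\n\"\n 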
*/\nvoid addReplyNullArray(client *c) {\n    if (c->resp == 2) {\n        addReplyProto(c,\"*-1\r\n\",5);\n    } else {\n        addReplyProto(c,\"_\r\n\",3);\n    }\n}\n\n/* Create the length prefix of a bulk reply, example: $2234 */\nvoid addReplyBulkLen(client *c, robj *obj) {\n    size_t len = stringObjectLen(obj);\n    if (_prepareClientToWrite(c) != C_OK) return;\n    _addReplyLongLongBulk(c, len);\n}\n\n/* Check if copy avoidance is preferred for this client and object.\n * Copy avoidance allows I/O threads to directly reference obj->ptr\n * instead of copying data to reply buffers. */\nstatic int isCopyAvoidPreferred(client *c, robj *obj, size_t len) {\n    /* Don't use copy avoidance for fake clients. */\n    if (!c->conn || !server.reply_copy_avoidance_enabled) return 0;\n\n    int type = getClientType(c);\n    if (type != CLIENT_TYPE_NORMAL) return 0;\n\n    /* Don't use copy avoidance for push messages. Push messages need to be deferred\n     * to server.pending_push_messages when CLIENT_PUSHING is set. */\n    if (c->flags & CLIENT_PUSHING) return 0;\n\n    if (obj->encoding != OBJ_ENCODING_RAW || obj->refcount >= OBJ_FIRST_SPECIAL_REFCOUNT) return 0;\n\n    /* Copy avoidance is preferred for any string size starting from a certain number of I/O threads */\n    if (server.io_threads_num >= COPY_AVOID_MIN_IO_THREADS) return 1;\n\n    /* Main thread only. 
No I/O threads */\n    if (server.io_threads_num == 1) {\n        /* Copy avoidance is preferred starting from a certain string size */\n        return len >= COPY_AVOID_MIN_STRING_SIZE;\n    }\n\n    /* Main thread + I/O threads */\n    return len >= COPY_AVOID_MIN_STRING_SIZE_THREADED;\n}\n\n/* Try to avoid copying the whole bulk string into a reply buffer.\n * If copy avoidance is allowed, only a reference to the object and its string is added to the buffer */\nstatic int tryAvoidBulkStrCopyToReply(client *c, robj *obj, size_t len) {\n    if (!isCopyAvoidPreferred(c, obj, len)) return C_ERR;\n    _addBulkStrRefToBufferOrList(c, obj, len);\n    return C_OK;\n}\n\n/* Add a Redis Object as a bulk reply.\n * If avoid_copy is non-zero, attempt to use copy avoidance optimization. */\nvoid addReplyBulkWithFlag(client *c, robj *obj, int avoid_copy) {\n    if (_prepareClientToWrite(c) != C_OK) return;\n\n    if (sdsEncodedObject(obj)) {\n        const size_t len = sdslen(obj->ptr);\n        if (avoid_copy && tryAvoidBulkStrCopyToReply(c, obj, len) == C_OK)\n            return;\n        _addReplyLongLongBulk(c, len);\n        _addReplyToBufferOrList(c,obj->ptr,len);\n        _addReplyToBufferOrList(c,\"\r\n\",2);\n    } else if (obj->encoding == OBJ_ENCODING_INT) {\n        /* For integer encoded strings we just convert the value into a string\n         * using our optimized function, and attach the resulting string\n         * to the output buffer. 
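\n         *\n         * For example (illustrative): an object whose ptr encodes the integer 12345\n         * is sent as the bulk string \"$5\r\n12345\r\n\".\n         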
*/\n        char buf[34];\n        size_t len = ll2string(buf,sizeof(buf),(long)obj->ptr);\n        buf[len] = '\\r';\n        buf[len+1] = '\\n';\n        _addReplyLongLongBulk(c, len);\n        _addReplyToBufferOrList(c,buf,len+2);\n    } else {\n        serverPanic(\"Wrong obj->encoding in addReply()\");\n    }\n}\n\n/* Add a Redis Object as a bulk reply */\nvoid addReplyBulk(client *c, robj *obj) {\n    addReplyBulkWithFlag(c, obj, 1);\n}\n\n/* Add a C buffer as bulk reply */\nvoid addReplyBulkCBuffer(client *c, const void *p, size_t len) {\n    if (_prepareClientToWrite(c) != C_OK) return;\n    _addReplyLongLongBulk(c, len);\n    _addReplyToBufferOrList(c, p, len);\n    _addReplyToBufferOrList(c, \"\\r\\n\", 2);\n}\n\n/* Add sds to reply (takes ownership of sds and frees it) */\nvoid addReplyBulkSds(client *c, sds s) {\n    if (_prepareClientToWrite(c) != C_OK) {\n        sdsfree(s);\n        return;\n    }\n    _addReplyLongLongWithPrefix(c, sdslen(s), '$');\n    _addReplyToBufferOrList(c, s, sdslen(s));\n    sdsfree(s);\n    _addReplyToBufferOrList(c, \"\\r\\n\", 2);\n}\n\n/* Set sds to a deferred reply (for symmetry with addReplyBulkSds it also frees the sds) */\nvoid setDeferredReplyBulkSds(client *c, void *node, sds s) {\n    sds reply = sdscatprintf(sdsempty(), \"$%d\\r\\n%s\\r\\n\", (unsigned)sdslen(s), s);\n    setDeferredReply(c, node, reply, sdslen(reply));\n    sdsfree(reply);\n    sdsfree(s);\n}\n\n/* Add a C null term string as bulk reply */\nvoid addReplyBulkCString(client *c, const char *s) {\n    if (s == NULL) {\n        addReplyNull(c);\n    } else {\n        addReplyBulkCBuffer(c,s,strlen(s));\n    }\n}\n\n/* Add a long long as a bulk reply */\nvoid addReplyBulkLongLong(client *c, long long ll) {\n    char buf[64];\n    int len;\n\n    len = ll2string(buf,64,ll);\n    addReplyBulkCBuffer(c,buf,len);\n}\n\n/* Reply with a verbatim type having the specified extension.\n *\n * The 'ext' is the \"extension\" of the file, actually just a three\n 
* character type that describes the format of the verbatim string.\n * For instance \"txt\" means it should be interpreted as a text only\n * file by the receiver, \"md \" as markdown, and so forth. Only the\n * first three characters of the extension are used, and if the\n * provided one is shorter than that, the remainder is filled with\n * spaces. */\nvoid addReplyVerbatim(client *c, const char *s, size_t len, const char *ext) {\n    if (c->resp == 2) {\n        addReplyBulkCBuffer(c,s,len);\n    } else {\n        char buf[32];\n        size_t preflen = snprintf(buf,sizeof(buf),\"=%zu\r\nxxx:\",len+4);\n        char *p = buf+preflen-4;\n        for (int i = 0; i < 3; i++) {\n            if (*ext == '\0') {\n                p[i] = ' ';\n            } else {\n                p[i] = *ext++;\n            }\n        }\n        addReplyProto(c,buf,preflen);\n        addReplyProto(c,s,len);\n        addReplyProto(c,\"\r\n\",2);\n    }\n}\n\n/* This function is similar to the addReplyHelp function but adds the\n * ability to pass in two arrays of strings. Some commands have\n * some additional subcommands based on the specific feature implementation\n * Redis is compiled with (currently just clustering). This function allows\n * passing the common subcommands in `help` and any implementation\n * specific subcommands in `extended_help`.\n */\nvoid addExtendedReplyHelp(client *c, const char **help, const char **extended_help) {\n    sds cmd = sdsnew((char*) c->argv[0]->ptr);\n    void *blenp = addReplyDeferredLen(c);\n    int blen = 0;\n    int idx = 0;\n\n    sdstoupper(cmd);\n    addReplyStatusFormat(c,\n        \"%s <subcommand> [<arg> [value] [opt] ...]. 
Subcommands are:\",cmd);\n    sdsfree(cmd);\n\n    while (help[blen]) addReplyStatus(c,help[blen++]);\n    if (extended_help) {\n        while (extended_help[idx]) addReplyStatus(c,extended_help[idx++]);\n    }\n    blen += idx;\n\n    addReplyStatus(c,\"HELP\");\n    addReplyStatus(c,\"    Print this help.\");\n\n    blen += 1;  /* Account for the header. */\n    blen += 2;  /* Account for the footer. */\n    setDeferredArrayLen(c,blenp,blen);\n}\n\n/* Add an array of C strings as status replies with a heading.\n * This function is typically invoked by commands that support\n * subcommands in response to the 'help' subcommand. The help array\n * is terminated by NULL sentinel. */\nvoid addReplyHelp(client *c, const char **help) {\n    addExtendedReplyHelp(c, help, NULL);\n}\n\n/* Add a suggestive error reply.\n * This function is typically invoked by from commands that support\n * subcommands in response to an unknown subcommand or argument error. */\nvoid addReplySubcommandSyntaxError(client *c) {\n    sds cmd = sdsnew((char*) c->argv[0]->ptr);\n    sdstoupper(cmd);\n    addReplyErrorFormat(c,\n        \"unknown subcommand or wrong number of arguments for '%.128s'. Try %s HELP.\",\n        (char*)c->argv[1]->ptr,cmd);\n    sdsfree(cmd);\n}\n\n/* Append 'src' client output buffers into 'dst' client output buffers.\n * This function clears the output buffers of 'src' */\nvoid AddReplyFromClient(client *dst, client *src) {\n    /* If the source client contains a partial response due to client output\n     * buffer limits, propagate that to the dest rather than copy a partial\n     * reply. 
We don't wanna run the risk of copying a partial response in case\n     * for some reason the output limits don't reach the same decision (maybe\n     * they changed) */\n    if (src->flags & CLIENT_CLOSE_ASAP) {\n        sds client = catClientInfoString(sdsempty(),dst);\n        freeClientAsync(dst);\n        serverLog(LL_WARNING,\"Client %s scheduled to be closed ASAP for overcoming of output buffer limits.\", client);\n        sdsfree(client);\n        return;\n    }\n\n    /* First add the static buffer (either into the static buffer or reply list) */\n    addReplyProto(dst,src->buf, src->bufpos);\n\n    /* We need to check with _prepareClientToWrite again (after addReplyProto)\n     * since addReplyProto may have changed something (like CLIENT_CLOSE_ASAP) */\n    if (_prepareClientToWrite(dst) != C_OK)\n        return;\n\n    /* We're bypassing _addReplyProtoToList, so we need to add the pre/post\n     * checks in it. */\n    if (dst->flags & CLIENT_CLOSE_AFTER_REPLY) return;\n\n    /* Concatenate the reply list into the dest */\n    if (listLength(src->reply))\n        listJoin(dst->reply,src->reply);\n    dst->reply_bytes += src->reply_bytes;\n    src->reply_bytes = 0;\n    src->bufpos = 0;\n\n    if (src->deferred_reply_errors) {\n        deferredAfterErrorReply(dst, src->deferred_reply_errors);\n        listRelease(src->deferred_reply_errors);\n        src->deferred_reply_errors = NULL;\n    }\n\n    /* Check output buffer limits */\n    closeClientOnOutputBufferLimitReached(dst, 1);\n}\n\n/* Append the listed errors to the server error statistics. The input\n * list is not modified and remains the responsibility of the caller. 
*/\nvoid deferredAfterErrorReply(client *c, list *errors) {\n    listIter li;\n    listNode *ln;\n    listRewind(errors,&li);\n    while((ln = listNext(&li))) {\n        sds err = ln->value;\n        afterErrorReply(c, err, sdslen(err), 0);\n    }\n}\n\n/* Logically copy 'src' replica client buffers info to 'dst' replica.\n * Basically increase referenced buffer block node reference count. */\nvoid copyReplicaOutputBuffer(client *dst, client *src) {\n    serverAssert(src->bufpos == 0 && listLength(src->reply) == 0); \n    serverAssert(src->running_tid == IOTHREAD_MAIN_THREAD_ID &&\n                 dst->running_tid == IOTHREAD_MAIN_THREAD_ID);\n    if (src->ref_repl_buf_node == NULL) return;\n    dst->ref_repl_buf_node = src->ref_repl_buf_node;\n    dst->ref_block_pos = src->ref_block_pos;\n    ((replBufBlock *)listNodeValue(dst->ref_repl_buf_node))->refcount++;\n}\n\nstatic inline int _clientHasPendingRepliesNonSlave(client *c) {\n    return c->bufpos || listLength(c->reply);\n}\n\nstatic inline int _clientHasPendingRepliesSlave(client *c) {\n    /* Replicas use global shared replication buffer instead of\n     * private output buffer. */\n    serverAssert(c->bufpos == 0 && listLength(c->reply) == 0);\n    if (c->ref_repl_buf_node == NULL) return 0;\n\n    /* If the last replication buffer block content is totally sent,\n     * we have nothing to send. */\n    if (c->running_tid == IOTHREAD_MAIN_THREAD_ID) {\n        listNode *ln = listLast(server.repl_buffer_blocks);\n        replBufBlock *tail = listNodeValue(ln);\n        if (ln == c->ref_repl_buf_node &&\n            c->ref_block_pos == tail->used) return 0;\n    } else {\n        if (c->io_bound_repl_node == c->io_curr_repl_node &&\n            c->io_bound_block_pos == c->io_curr_block_pos) return 0;\n    }\n    return 1;\n}\n\n/* Return true if the specified client has pending reply buffers to write to\n * the socket. 
*/\nint clientHasPendingReplies(client *c) {\n    if (unlikely(clientTypeIsSlave(c))) {\n        return _clientHasPendingRepliesSlave(c);\n    }\n    return _clientHasPendingRepliesNonSlave(c);\n}\n\nvoid clientAcceptHandler(connection *conn) {\n    client *c = connGetPrivateData(conn);\n\n    if (connGetState(conn) != CONN_STATE_CONNECTED) {\n        serverLog(LL_WARNING,\n                  \"Error accepting a client connection: %s (addr=%s laddr=%s)\",\n                  connGetLastError(conn), getClientPeerId(c), getClientSockname(c));\n        freeClientAsync(c);\n        return;\n    }\n\n    /* If the server is running in protected mode (the default) and there\n     * is no password set, nor a specific interface bound, we don't accept\n     * requests from non-loopback interfaces. Instead we try to explain to the\n     * user what to do to fix it if needed. */\n    if (server.protected_mode &&\n        DefaultUser->flags & USER_FLAG_NOPASS)\n    {\n        if (connIsLocal(conn) != 1) {\n            char *err =\n                \"-DENIED Redis is running in protected mode because protected \"\n                \"mode is enabled and no password is set for the default user. \"\n                \"In this mode connections are only accepted from the loopback interface. \"\n                \"If you want to connect from external computers to Redis you \"\n                \"may adopt one of the following solutions: \"\n                \"1) Just disable protected mode sending the command \"\n                \"'CONFIG SET protected-mode no' from the loopback interface \"\n                \"by connecting to Redis from the same host the server is \"\n                \"running, however MAKE SURE Redis is not publicly accessible \"\n                \"from internet if you do so. Use CONFIG REWRITE to make this \"\n                \"change permanent. 
\"\n                \"2) Alternatively you can just disable the protected mode by \"\n                \"editing the Redis configuration file, and setting the protected \"\n                \"mode option to 'no', and then restarting the server. \"\n                \"3) If you started the server manually just for testing, restart \"\n                \"it with the '--protected-mode no' option. \"\n                \"4) Set up an authentication password for the default user. \"\n                \"NOTE: You only need to do one of the above things in order for \"\n                \"the server to start accepting connections from the outside.\\r\\n\";\n            if (connWrite(c->conn,err,strlen(err)) == -1) {\n                /* Nothing to do, Just to avoid the warning... */\n            }\n            server.stat_rejected_conn++;\n            freeClientAsync(c);\n            return;\n        }\n    }\n\n    /* Auto-authenticate from cert_user field if set */\n    sds username = connGetPeerUsername(conn);\n    if (username != NULL) {\n        user *u = ACLGetUserByName(username, sdslen(username));\n        if (u && !(u->flags & USER_FLAG_DISABLED)) {\n            c->user = u;\n            c->authenticated = 1;\n            moduleNotifyUserChanged(c);\n            serverLog(LL_VERBOSE, \"TLS: Auto-authenticated client as %s\",\n                      server.hide_user_data_from_log ? 
\"*redacted*\" : u->name);\n        } else {\n            addACLLogEntry(c, ACL_INVALID_TLS_CERT_AUTH, ACL_LOG_CTX_TOPLEVEL, 0, username, NULL);\n        }\n        sdsfree(username);\n    }\n\n    server.stat_numconnections++;\n    moduleFireServerEvent(REDISMODULE_EVENT_CLIENT_CHANGE,\n                          REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED,\n                          c);\n\n    /* Assign the client to an IO thread */\n    if (server.io_threads_num > 1) assignClientToIOThread(c);\n}\n\nvoid acceptCommonHandler(connection *conn, int flags, char *ip) {\n    client *c;\n    UNUSED(ip);\n\n    if (connGetState(conn) != CONN_STATE_ACCEPTING) {\n        char addr[NET_ADDR_STR_LEN] = {0};\n        char laddr[NET_ADDR_STR_LEN] = {0};\n        connFormatAddr(conn, addr, sizeof(addr), 1);\n        connFormatAddr(conn, laddr, sizeof(laddr), 0);\n        serverLog(LL_VERBOSE,\n                  \"Accepted client connection in error state: %s (addr=%s laddr=%s)\",\n                  connGetLastError(conn), addr, laddr);\n        connClose(conn);\n        return;\n    }\n\n    /* Limit the number of connections we take at the same time.\n     *\n     * Admission control will happen before a client is created and connAccept()\n     * called, because we don't want to even start transport-level negotiation\n     * if rejected. */\n    if (listLength(server.clients) + getClusterConnectionsCount()\n        >= server.maxclients)\n    {\n        char *err;\n        if (server.cluster_enabled)\n            err = \"-ERR max number of clients + cluster \"\n                  \"connections reached\\r\\n\";\n        else\n            err = \"-ERR max number of clients reached\\r\\n\";\n\n        /* That's a best effort error message, don't check write errors.\n         * Note that for TLS connections, no handshake was done yet so nothing\n         * is written and the connection will just drop. 
*/\n        if (connWrite(conn,err,strlen(err)) == -1) {\n            /* Nothing to do, Just to avoid the warning... */\n        }\n        server.stat_rejected_conn++;\n        connClose(conn);\n        return;\n    }\n\n    /* Create connection and client */\n    if ((c = createClient(conn)) == NULL) {\n        char addr[NET_ADDR_STR_LEN] = {0};\n        char laddr[NET_ADDR_STR_LEN] = {0};\n        connFormatAddr(conn, addr, sizeof(addr), 1);\n        connFormatAddr(conn, laddr, sizeof(laddr), 0);\n        serverLog(LL_WARNING,\n                  \"Error registering fd event for the new client connection: %s (addr=%s laddr=%s)\",\n                  connGetLastError(conn), addr, laddr);\n        connClose(conn); /* May be already closed, just ignore errors */\n        return;\n    }\n\n    /* Last chance to keep flags */\n    c->flags |= flags;\n\n    /* Initiate accept.\n     *\n     * Note that connAccept() is free to do two things here:\n     * 1. Call clientAcceptHandler() immediately;\n     * 2. Schedule a future call to clientAcceptHandler().\n     *\n     * Because of that, we must do nothing else afterwards.\n     */\n    if (connAccept(conn, clientAcceptHandler) == C_ERR) {\n        if (connGetState(conn) == CONN_STATE_ERROR)\n            serverLog(LL_WARNING,\n                      \"Error accepting a client connection: %s (addr=%s laddr=%s)\",\n                      connGetLastError(conn), getClientPeerId(c), getClientSockname(c));\n        freeClient(connGetPrivateData(conn));\n        return;\n    }\n}\n\nstatic void freeDeferredObject(client *c, int type, void *ptr) {\n    if (type == DEFERRED_OBJECT_TYPE_PENDING_COMMAND) {\n        freePendingCommand(c, ptr);\n    } else if (type == DEFERRED_OBJECT_TYPE_ROBJ) {\n        decrRefCount(ptr);\n    } else {\n        serverPanic(\"Unknown deferred object type: %d\", type);\n    }\n}\n\n/* Attempt to defer freeing the object to the IO thread. 
We usually call this when\n * we know the object was allocated in the IO thread, to avoid memory arena contention\n * and to reduce the load on the main thread. */\nvoid tryDeferFreeClientObject(client *c, int type, void *ptr) {\n    if (!c || c->tid == IOTHREAD_MAIN_THREAD_ID) {\n        freeDeferredObject(c, type, ptr);\n        return;\n    }\n\n    /* Put the object in the deferred objects array. */\n    if (c->deferred_objects && c->deferred_objects_num < CLIENT_MAX_DEFERRED_OBJECTS) {\n        c->deferred_objects[c->deferred_objects_num].type = type;\n        c->deferred_objects[c->deferred_objects_num].ptr = ptr;\n        c->deferred_objects_num++;\n    } else {\n        freeDeferredObject(c, type, ptr);\n    }\n}\n\n/* Free the objects in the deferred_objects array. If free_array is true\n * then free the array itself as well. */\nvoid freeClientDeferredObjects(client *c, int free_array) {\n    for (int j = 0; j < c->deferred_objects_num; j++) {\n        deferredObject *obj = &c->deferred_objects[j];\n        freeDeferredObject(c, obj->type, obj->ptr);\n    }\n    c->deferred_objects_num = 0;\n\n    if (free_array) {\n        zfree(c->deferred_objects);\n        c->deferred_objects = NULL;\n    }\n}\n\n/* Queue an robj to be freed by the main thread when the client returns from the\n * IO thread. This is used in the IO thread write path to avoid refcount race conditions. 
*/\n#define IO_DEFERRED_OBJECTS_INIT_SIZE 8\nvoid ioDeferFreeRobj(client *c, robj *obj) {\n    if (c->io_deferred_objects_num >= c->io_deferred_objects_size) {\n        int new_size = !c->io_deferred_objects_size ?\n            IO_DEFERRED_OBJECTS_INIT_SIZE : c->io_deferred_objects_size * 2;\n        c->io_deferred_objects = zrealloc(c->io_deferred_objects, new_size * sizeof(robj *));\n        c->io_deferred_objects_size = new_size;\n    }\n    c->io_deferred_objects[c->io_deferred_objects_num++] = obj;\n}\n\n/* Free all objects queued by IO thread for deferred freeing.\n * Called by main thread when client returns from IO thread.\n * If free_array is true then free the array itself as well. */\nvoid freeClientIODeferredObjects(client *c, int free_array) {\n    if (!c->conn) return;\n\n    for (int i = 0; i < c->io_deferred_objects_num; i++) {\n        robj *obj = c->io_deferred_objects[i];\n        decrRefCount(obj);\n    }\n\n    if (!free_array) {\n        /* If the utilization rate is less than 1/4, reduce the size to 1/2 to avoid thrashing */\n        if (c->io_deferred_objects_size > IO_DEFERRED_OBJECTS_INIT_SIZE &&\n            c->io_deferred_objects_num * 4 < c->io_deferred_objects_size)\n        {\n            int new_size = c->io_deferred_objects_size / 2;\n            c->io_deferred_objects = zrealloc(c->io_deferred_objects, new_size * sizeof(robj *));\n            c->io_deferred_objects_size = new_size;\n        }\n        c->io_deferred_objects_num = 0;\n    } else {\n        zfree(c->io_deferred_objects);\n        c->io_deferred_objects = NULL;\n        c->io_deferred_objects_num = 0;\n        c->io_deferred_objects_size = 0;\n    }\n}\n\nvoid freeClientOriginalArgv(client *c) {\n    /* We didn't rewrite this client */\n    if (!c->original_argv) return;\n\n    for (int j = 0; j < c->original_argc; j++)\n        decrRefCount(c->original_argv[j]);\n    zfree(c->original_argv);\n    c->original_argv = NULL;\n    c->original_argc = 0;\n}\n\nstatic inline 
void freeClientArgvInternal(client *c, int free_argv) {\n    int j;\n    for (j = 0; j < c->argc; j++)\n        decrRefCount(c->argv[j]);\n    c->argc = 0;\n    c->cmd = NULL;\n    c->lookedcmd = NULL;\n    if (free_argv) {\n        c->argv_len = 0;\n        zfree(c->argv);\n        c->argv = NULL;\n    }\n}\n\nvoid freeClientArgv(client *c) {\n    freeClientArgvInternal(c, 1);\n}\n\nvoid freeClientPendingCommands(client *c, int num_pcmds_to_free) {\n    /* (-1) means free all pending commands */\n    if (num_pcmds_to_free == -1)\n        num_pcmds_to_free = c->pending_cmds.len;\n\n    while (num_pcmds_to_free--) {\n        pendingCommand *pcmd = popPendingCommandFromHead(&c->pending_cmds);\n        serverAssert(pcmd);\n        reclaimPendingCommand(c, pcmd);\n    }\n}\n\n/* Close all the slave connections. This is useful in chained replication\n * when we resync with our own master and want to force all our slaves to\n * resync with us as well. */\nvoid disconnectSlaves(void) {\n    listIter li;\n    listNode *ln;\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        freeClient((client*)ln->value);\n    }\n}\n\n/* Check if any other slave, except 'except_me', is still waiting for the RDB\n * dump to finish. This is useful to judge whether the current RDB dump can be\n * used for a full synchronization or not. */\nint anyOtherSlaveWaitRdb(client *except_me) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(server.slaves, &li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        if (slave != except_me &&\n            slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END)\n        {\n            return 1;\n        }\n    }\n    return 0;\n}\n\n/* Remove the specified client from global lists where the client could\n * be referenced, not including the Pub/Sub channels.\n * This is used by freeClient() and replicationCacheMaster(). 
*/\nvoid unlinkClient(client *c) {\n    listNode *ln;\n\n    /* If this is marked as current client unset it. */\n    if (c->conn && server.current_client == c) server.current_client = NULL;\n\n    /* Certain operations must be done only if the client has an active connection.\n     * If the client was already unlinked or if it's a \"fake client\" the\n     * conn is already set to NULL. */\n    if (c->conn) {\n        /* Remove from the list of active clients. */\n        if (c->client_list_node) {\n            uint64_t id = htonu64(c->id);\n            raxRemove(server.clients_index,(unsigned char*)&id,sizeof(id),NULL);\n            listDelNode(server.clients,c->client_list_node);\n            c->client_list_node = NULL;\n        }\n\n        /* Check if this is a replica waiting for diskless replication (rdb pipe),\n         * in which case it needs to be cleaned from that list */\n        if (c->flags & CLIENT_SLAVE &&\n            c->replstate == SLAVE_STATE_WAIT_BGSAVE_END &&\n            server.rdb_pipe_conns)\n        {\n            int i;\n            for (i=0; i < server.rdb_pipe_numconns; i++) {\n                if (server.rdb_pipe_conns[i] == c->conn) {\n                    rdbPipeWriteHandlerConnRemoved(c->conn);\n                    server.rdb_pipe_conns[i] = NULL;\n                    break;\n                }\n            }\n        }\n        /* Only use shutdown when the fork is active and we are the parent. */\n        if (server.child_type) {\n            /* connShutdown() may access TLS state. If this is a rdbchannel\n             * client, bgsave fork is writing to the connection and TLS state in\n             * the main process is stale. 
SSL_shutdown() involves a handshake,\n             * and it may block the caller when used with stale TLS state.*/\n            if (c->flags & CLIENT_REPL_RDB_CHANNEL)\n                shutdown(c->conn->fd, SHUT_RDWR);\n            else\n                connShutdown(c->conn);\n        }\n        connClose(c->conn);\n        c->conn = NULL;\n    }\n\n    /* Remove from the list of pending writes if needed. */\n    if (c->flags & CLIENT_PENDING_WRITE) {\n        serverAssert(&c->clients_pending_write_node.next != NULL || \n                     &c->clients_pending_write_node.prev != NULL);\n        listUnlinkNode(server.clients_pending_write, &c->clients_pending_write_node);\n        c->flags &= ~CLIENT_PENDING_WRITE;\n    }\n\n    /* When client was just unblocked because of a blocking operation,\n     * remove it from the list of unblocked clients. */\n    if (c->flags & CLIENT_UNBLOCKED) {\n        ln = listSearchKey(server.unblocked_clients,c);\n        serverAssert(ln != NULL);\n        listDelNode(server.unblocked_clients,ln);\n        c->flags &= ~CLIENT_UNBLOCKED;\n    }\n\n    freeClientPendingCommands(c, -1);\n    c->argv_len = 0;\n    c->argv = NULL;\n    c->argc = 0;\n    c->cmd = NULL;\n\n    /* Clear the tracking status. */\n    if (c->flags & CLIENT_TRACKING) disableTracking(c);\n}\n\n/* Remove client from the list of clients with pending referenced replies.\n * This is called when the client has finished sending all pending replies,\n * or when the client is being freed.\n *\n * If 'force' is true, the client is removed unconditionally.\n * This should only be used when we are certain that the replies no longer\n * contain any referenced robj. 
*/\nvoid tryUnlinkClientFromPendingRefReply(client *c, int force) {\n    if (clientIsInPendingRefReplyList(c) && (force || !clientHasPendingReplies(c))) {\n        listUnlinkNode(server.clients_with_pending_ref_reply, &c->pending_ref_reply_node);\n    }\n}\n\n/* Clear the client state to resemble a newly connected client. */\nvoid clearClientConnectionState(client *c) {\n    listNode *ln;\n\n    /* MONITOR clients are also marked with CLIENT_SLAVE, we need to\n     * distinguish between the two.\n     */\n    if (c->flags & CLIENT_MONITOR) {\n        ln = listSearchKey(server.monitors,c);\n        serverAssert(ln != NULL);\n        listDelNode(server.monitors,ln);\n\n        c->flags &= ~(CLIENT_MONITOR|CLIENT_SLAVE);\n    }\n\n    serverAssert(!(c->flags &(CLIENT_SLAVE|CLIENT_MASTER)));\n\n    if (c->flags & CLIENT_TRACKING) disableTracking(c);\n    selectDb(c,0);\n#ifdef LOG_REQ_RES\n    c->resp = server.client_default_resp;\n#else\n    c->resp = 2;\n#endif\n\n    clientSetDefaultAuth(c);\n    moduleNotifyUserChanged(c);\n    discardTransaction(c);\n\n    pubsubUnsubscribeAllChannels(c,0);\n    pubsubUnsubscribeShardAllChannels(c, 0);\n    pubsubUnsubscribeAllPatterns(c,0);\n    unmarkClientAsPubSub(c);\n\n    if (c->name) {\n        decrRefCount(c->name);\n        c->name = NULL;\n    }\n\n    /* Note: lib_name and lib_ver are not reset since they still\n     * represent the client library behind the connection. */\n    \n    /* Selectively clear state flags not covered above */\n    c->flags &= ~(CLIENT_ASKING|CLIENT_READONLY|CLIENT_REPLY_OFF|\n                  CLIENT_REPLY_SKIP_NEXT|CLIENT_NO_TOUCH|CLIENT_NO_EVICT);\n}\n\nvoid deauthenticateAndCloseClient(client *c) {\n    c->user = DefaultUser;\n    c->authenticated = 0;\n    /* We will write replies to this client later, so we can't\n     * close it directly even if async. 
*/\n    if (c == server.current_client) {\n        c->flags |= CLIENT_CLOSE_AFTER_COMMAND;\n    } else {\n        freeClientAsync(c);\n    }\n}\n\n/* Resets the reusable query buffer used by the given client.\n * If any data remained in the buffer, the client will take ownership of the buffer\n * and a new empty buffer will be allocated for the reusable buffer. */\nstatic void resetReusableQueryBuf(client *c) {\n    serverAssert(c->io_flags & CLIENT_IO_REUSABLE_QUERYBUFFER);\n    if (c->querybuf != thread_reusable_qb || sdslen(c->querybuf) > c->qb_pos) {\n        /* If querybuf has been reallocated or there is still data left,\n         * let the client take ownership of the reusable buffer. */\n        thread_reusable_qb = NULL;\n    } else {\n        /* It is safe to dereference and reuse the reusable query buffer. */\n        c->querybuf = NULL;\n        c->qb_pos = 0;\n        sdsclear(thread_reusable_qb);\n    } \n\n    /* Mark that the client is no longer using the reusable query buffer\n     * and indicate that it is no longer used by any client. */\n    c->io_flags &= ~CLIENT_IO_REUSABLE_QUERYBUFFER;\n    thread_reusable_qb_used = 0;\n}\n\n/* Release references to string objects inside an encoded buffer.\n * If running in IO thread, defer the free to main thread via io_deferred_objects. */\nstatic void releaseBufReferences(client *c, char *buf, size_t bufpos) {\n    int in_io_thread = (c && c->running_tid != IOTHREAD_MAIN_THREAD_ID);\n    char *ptr = buf;\n    while (ptr < buf + bufpos) {\n        payloadHeader *header = (payloadHeader *)ptr;\n        ptr += sizeof(payloadHeader);\n\n        if (header->payload_type == BULK_STR_REF) {\n            bulkStrRef *str_ref = (bulkStrRef *)ptr;\n            /* Only release if not already released. 
*/\n            if (str_ref->obj != NULL) {\n                if (in_io_thread)\n                    ioDeferFreeRobj(c, str_ref->obj);\n                else\n                    decrRefCount(str_ref->obj);\n                str_ref->obj = NULL;\n            }\n        } else {\n            serverAssert(header->payload_type == PLAIN_REPLY);\n        }\n\n        ptr += header->payload_len;\n    }\n}\n\n/* Release all references to string objects in all encoded buffers */\nstatic void releaseAllBufReferences(client *c) {\n    if (c->buf_encoded) {\n        releaseBufReferences(c, c->buf, c->bufpos);\n    }\n\n    listIter iter;\n    listNode *next;\n    listRewind(c->reply, &iter);\n    while ((next = listNext(&iter))) {\n        clientReplyBlock *o = (clientReplyBlock *)listNodeValue(next);\n        if (o && o->buf_encoded) {\n            releaseBufReferences(c, o->buf, o->used);\n        }\n    }\n}\n\nvoid freeClient(client *c) {\n    listNode *ln;\n\n    /* If a client is protected, yet we need to free it right now, make sure\n     * to at least use asynchronous freeing. */\n    if (c->flags & CLIENT_PROTECTED) {\n        freeClientAsync(c);\n        return;\n    }\n\n    /* If the client is running in io thread, we can't free it directly. */\n    if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n        fetchClientFromIOThread(c);\n    }\n\n    /* We need to unbind connection of client from io thread event loop first. */\n    if (c->tid != IOTHREAD_MAIN_THREAD_ID) {\n        keepClientInMainThread(c);\n    }\n\n    /* Update the number of clients in the IO thread. */\n    if (c->conn) server.io_threads_clients_num[c->tid]--;\n\n    /* For connected clients, call the disconnection event of modules hooks. 
*/\n    if (c->conn) {\n        moduleFireServerEvent(REDISMODULE_EVENT_CLIENT_CHANGE,\n                              REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED,\n                              c);\n    }\n\n    asmCallbackOnFreeClient(c);\n\n    /* Notify module system that this client auth status changed. */\n    moduleNotifyUserChanged(c);\n\n    /* Free the RedisModuleBlockedClient held onto for reprocessing if not already freed. */\n    zfree(c->module_blocked_client);\n\n    /* If this client was scheduled for async freeing we need to remove it\n     * from the queue. Note that we need to do this here, because later\n     * we may call replicationCacheMaster() and the client should already\n     * be removed from the list of clients to free. */\n    if (c->flags & CLIENT_CLOSE_ASAP) {\n        ln = listSearchKey(server.clients_to_close,c);\n        serverAssert(ln != NULL);\n        listDelNode(server.clients_to_close,ln);\n    }\n\n    /* If it is our master that's being disconnected we should make sure\n     * to cache the state to try a partial resynchronization later.\n     *\n     * Note that before doing this we make sure that the client is not in\n     * some unexpected state, by checking its flags. */\n    if (server.master && c->flags & CLIENT_MASTER) {\n        serverLog(LL_NOTICE,\"Connection with master lost.\");\n        if (!(c->flags & (CLIENT_PROTOCOL_ERROR|CLIENT_BLOCKED))) {\n            c->flags &= ~(CLIENT_CLOSE_ASAP|CLIENT_CLOSE_AFTER_REPLY);\n            c->io_flags &= ~CLIENT_IO_CLOSE_ASAP;\n            replicationCacheMaster(c);\n            return;\n        }\n    }\n\n    /* Log link disconnection with slave */\n    if (clientTypeIsSlave(c)) {\n        const char *type = c->flags & CLIENT_REPL_RDB_CHANNEL ? 
\" (rdbchannel)\" : \"\";\n        serverLog(LL_NOTICE,\"Connection with replica%s %s lost.\", type,\n            replicationGetSlaveName(c));\n    }\n\n    /* Free the query buffer */\n    if (c->io_flags & CLIENT_IO_REUSABLE_QUERYBUFFER)\n        resetReusableQueryBuf(c);\n    sdsfree(c->querybuf);\n    c->querybuf = NULL;\n\n    /* Deallocate structures used to block on blocking ops. */\n    /* If there is any in-flight command, we don't record their duration. */\n    c->duration = 0;\n    if (c->flags & CLIENT_BLOCKED) unblockClient(c, 1);\n    dictRelease(c->bstate.keys);\n\n    /* UNWATCH all the keys */\n    unwatchAllKeys(c);\n    listRelease(c->watched_keys);\n\n    /* Unsubscribe from all the pubsub channels */\n    pubsubUnsubscribeAllChannels(c,0);\n    pubsubUnsubscribeShardAllChannels(c, 0);\n    pubsubUnsubscribeAllPatterns(c,0);\n    unmarkClientAsPubSub(c);\n    dictRelease(c->pubsub_channels);\n    dictRelease(c->pubsub_patterns);\n    dictRelease(c->pubsubshard_channels);\n\n    /* Free data structures. */\n    releaseAllBufReferences(c); /* Release all references to string objects in encoded buffers before freeing */\n    listRelease(c->reply);\n    zfree(c->buf);\n    freeReplicaReferencedReplBuffer(c);\n    freeClientOriginalArgv(c);\n    freeClientDeferredObjects(c, 1);\n    freeClientIODeferredObjects(c, 1);\n    tryUnlinkClientFromPendingRefReply(c, 1);\n    if (c->deferred_reply_errors)\n        listRelease(c->deferred_reply_errors);\n#ifdef LOG_REQ_RES\n    reqresReset(c, 1);\n#endif\n\n    /* Remove the contribution that this client gave to our\n     * incrementally computed memory usage. 
*/\n    if (c->conn)\n        server.stat_clients_type_memory[c->last_memory_type] -=\n            c->last_memory_usage;\n\n    /* Unlink the client: this will close the socket, remove the I/O\n     * handlers, and remove references of the client from different\n     * places where active clients may be referenced.\n     * This will also clean all remaining pending commands in the client,\n     * as they are no longer valid.\n     */\n    unlinkClient(c);\n\n    freeClientMultiState(c);\n    serverAssert(c->pending_cmds.len == 0);\n\n    /* Master/slave cleanup Case 1:\n     * we lost the connection with a slave. */\n    if (c->flags & CLIENT_SLAVE) {\n        /* If no other slave is waiting for the RDB dump to finish, the current\n         * child process doesn't need to keep dumping the RDB, so we kill it.\n         * This way the child process won't use more memory, and we can fork a\n         * new child asap to dump the RDB for the next full synchronization or\n         * bgsave. But we also need to check whether the user enabled RDB\n         * 'save' points: if so, we should not kill the child directly, since\n         * the RDB is then important for keeping the data safe, and the\n         * configured 'save' may have been delayed for the full sync. */\n        if (server.saveparamslen == 0 &&\n            c->replstate == SLAVE_STATE_WAIT_BGSAVE_END &&\n            server.child_type == CHILD_TYPE_RDB &&\n            server.rdb_child_type == RDB_CHILD_TYPE_DISK &&\n            anyOtherSlaveWaitRdb(c) == 0)\n        {\n            killRDBChild();\n        }\n        if (c->replstate == SLAVE_STATE_SEND_BULK) {\n            if (c->repldbfd != -1) close(c->repldbfd);\n            if (c->replpreamble) sdsfree(c->replpreamble);\n        }\n        list *l = (c->flags & CLIENT_MONITOR) ? 
server.monitors : server.slaves;\n        ln = listSearchKey(l,c);\n        serverAssert(ln != NULL);\n        listDelNode(l,ln);\n        /* We need to remember the time when we started to have zero\n         * attached slaves, as after some time we'll free the replication\n         * backlog. */\n        if (clientTypeIsSlave(c) && listLength(server.slaves) == 0)\n            server.repl_no_slaves_since = server.unixtime;\n        refreshGoodSlavesCount();\n        /* Fire the replica change modules event. */\n        if (c->replstate == SLAVE_STATE_ONLINE)\n            moduleFireServerEvent(REDISMODULE_EVENT_REPLICA_CHANGE,\n                                  REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE,\n                                  NULL);\n    }\n\n    /* Master/slave cleanup Case 2:\n     * we lost the connection with the master. */\n    if (c->flags & CLIENT_MASTER) replicationHandleMasterDisconnection();\n\n    /* Remove client from memory usage buckets */\n    if (c->mem_usage_bucket) {\n        c->mem_usage_bucket->mem_usage_sum -= c->last_memory_usage;\n        listDelNode(c->mem_usage_bucket->clients, c->mem_usage_bucket_node);\n    }\n\n    /* Release other dynamically allocated client structure fields,\n     * and finally release the client structure itself. */\n    if (c->name) decrRefCount(c->name);\n    if (c->lib_name) decrRefCount(c->lib_name);\n    if (c->lib_ver) decrRefCount(c->lib_ver);\n    serverAssert(c->all_argv_len_sum == 0);\n    sdsfree(c->peerid);\n    sdsfree(c->sockname);\n    sdsfree(c->slave_addr);\n    sdsfree(c->node_id);\n    zfree(c);\n}\n\n/* Schedule a client to free it at a safe time in the beforeSleep() function.\n * This function is useful when we need to terminate a client but we are in\n * a context where calling freeClient() is not possible, because the client\n * should be valid for the continuation of the flow of the program. 
*/\nvoid freeClientAsync(client *c) {\n    if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n        int main_thread = pthread_equal(pthread_self(), server.main_thread_id);\n        /* Make sure the main thread can access IO thread data safely. */\n        if (main_thread) pauseIOThread(c->tid);\n        if (!(c->io_flags & CLIENT_IO_CLOSE_ASAP)) {\n            c->io_flags |= CLIENT_IO_CLOSE_ASAP;\n            enqueuePendingClientsToMainThread(c, 1);\n        }\n        if (main_thread) resumeIOThread(c->tid);\n        return;\n    }\n\n    if (c->flags & CLIENT_CLOSE_ASAP || c->flags & CLIENT_SCRIPT) return;\n    c->flags |= CLIENT_CLOSE_ASAP;\n    /* Replicas that were marked as CLIENT_CLOSE_ASAP should not keep the\n     * replication backlog from being trimmed. */\n    if (c->flags & CLIENT_SLAVE) freeReplicaReferencedReplBuffer(c);\n    listAddNodeTail(server.clients_to_close,c);\n}\n\n/* Log errors for invalid use and free the client in an async way.\n * We will add additional information about the client to the message. */\nvoid logInvalidUseAndFreeClientAsync(client *c, const char *fmt, ...) {\n    va_list ap;\n    va_start(ap, fmt);\n    sds info = sdscatvprintf(sdsempty(), fmt, ap);\n    va_end(ap);\n\n    sds client = catClientInfoString(sdsempty(), c);\n    serverLog(LL_WARNING, \"%s, disconnecting it: %s\", info, client);\n\n    sdsfree(info);\n    sdsfree(client);\n    freeClientAsync(c);\n}\n\n/* Perform processing of the client before moving on to processing the next\n * client. This is useful for performing operations that affect the global\n * state but can't wait until we're done with all clients, i.e. can't wait\n * until beforeSleep().\n * Returns C_ERR in case the client is no longer valid after the call.\n * The input client argument 'c' may be NULL in case the previous client was\n * freed before the call. 
*/\nint beforeNextClient(client *c) {\n    /* Notice: this code is also called from 'processUnblockedClients'.\n     * But in case of a module blocked client (see RM_Call 'K' flag) we do not reach this code path.\n     * So whenever we change the code here we need to consider if we need this change for module\n     * blocked clients as well. */\n\n    /* Skip the client processing if we're in an IO thread; in that case we'll\n     * perform this operation later (this function is called again) in the\n     * fan-in stage of the threading mechanism. */\n    if (c && c->running_tid != IOTHREAD_MAIN_THREAD_ID)\n        return C_OK;\n    /* Handle async frees */\n    /* Note: this doesn't make the server.clients_to_close list redundant because of\n     * cases where we want an async free of a client other than myself. For example\n     * in ACL modifications we disconnect clients authenticated to non-existent\n     * users (see ACL LOAD). */\n    if (c && (c->flags & CLIENT_CLOSE_ASAP)) {\n        freeClient(c);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Free the clients marked as CLOSE_ASAP and return the number of clients\n * freed. */\nint freeClientsInAsyncFreeQueue(void) {\n    int freed = 0;\n    listIter li;\n    listNode *ln;\n\n    listRewind(server.clients_to_close,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n\n        if (c->flags & CLIENT_PROTECTED) continue;\n\n        c->flags &= ~CLIENT_CLOSE_ASAP;\n        freeClient(c);\n        listDelNode(server.clients_to_close,ln);\n        freed++;\n    }\n    return freed;\n}\n\n/* Return a client by ID, or NULL if the client ID is not in the set\n * of registered clients. Note that \"fake clients\", created with -1 as FD,\n * are not registered clients. 
*/\nclient *lookupClientByID(uint64_t id) {\n    id = htonu64(id);\n    void *c = NULL;\n    raxFind(server.clients_index,(unsigned char*)&id,sizeof(id),&c);\n    return c;\n}\n\n/* This struct is used by writevToClient to prepare iovec array for submitting to connWritev */\ntypedef struct ReplyIOV {\n    struct iovec *iov;      /* Array of iovec structures for writev() */\n    int iovmax;             /* Maximum number of iovec entries allocated */\n    int iovcnt;             /* Current number of iovec entries in use */\n    size_t iov_bytes_len;   /* Total bytes across all iovec entries */\n} ReplyIOV;\n\n/* Check if the reply IOV has reached its limit yet. */\nstatic int replyIOVReachLimit(ReplyIOV *reply_iov) {\n    return reply_iov->iovcnt >= reply_iov->iovmax || reply_iov->iov_bytes_len >= NET_MAX_WRITES_PER_EVENT;\n}\n\n/* Helper function to process encoded buffer and build iov array. */\nstatic void processEncodedBufferForWrite(ReplyIOV *reply_iov, char *start_ptr, char *end_ptr, size_t offset) {\n    char *ptr = start_ptr;\n    while (ptr < end_ptr && !replyIOVReachLimit(reply_iov)) {\n        payloadHeader *head = (payloadHeader *)ptr;\n\n        if (head->payload_type == PLAIN_REPLY) {\n            /* Plain data - add directly */\n            reply_iov->iov[reply_iov->iovcnt].iov_base = ptr + sizeof(payloadHeader) + offset;\n            reply_iov->iov[reply_iov->iovcnt].iov_len = head->payload_len - offset;\n            reply_iov->iov_bytes_len += reply_iov->iov[reply_iov->iovcnt++].iov_len;\n        } else {\n            /* BULK_STR_REF - expand to prefix + string + crlf */\n            bulkStrRef *str_ref = (bulkStrRef *)(ptr + sizeof(payloadHeader));\n            size_t prefix_len = str_ref->prefix_cnt;\n            size_t str_len = sdslen(str_ref->obj->ptr);\n\n            /* Add prefix */\n            if (offset < prefix_len) {\n                if (replyIOVReachLimit(reply_iov)) return;\n                reply_iov->iov[reply_iov->iovcnt].iov_base = 
str_ref->prefix + offset;\n                reply_iov->iov[reply_iov->iovcnt].iov_len = prefix_len - offset;\n                reply_iov->iov_bytes_len += reply_iov->iov[reply_iov->iovcnt++].iov_len;\n                offset = 0;\n            } else {\n                offset -= prefix_len;\n            }\n\n            /* Add string data */\n            if (offset < str_len) {\n                if (replyIOVReachLimit(reply_iov)) return;\n                reply_iov->iov[reply_iov->iovcnt].iov_base = (char *)str_ref->obj->ptr + offset;\n                reply_iov->iov[reply_iov->iovcnt].iov_len = str_len - offset;\n                reply_iov->iov_bytes_len += reply_iov->iov[(reply_iov->iovcnt)++].iov_len;\n                offset = 0;\n            } else {\n                offset -= str_len;\n            }\n\n            /* Add crlf */\n            if (offset < 2) {\n                if (replyIOVReachLimit(reply_iov)) return;\n                reply_iov->iov[reply_iov->iovcnt].iov_base = str_ref->crlf + offset;\n                reply_iov->iov[reply_iov->iovcnt].iov_len = 2 - offset;\n                reply_iov->iov_bytes_len += reply_iov->iov[reply_iov->iovcnt++].iov_len;\n            }\n        }\n\n        offset = 0;\n        ptr += sizeof(payloadHeader) + head->payload_len;\n    }\n}\n\n/* Process sent data in the encoded buffer.\n * Returns pointer to the current payload header being processed, or NULL if all data is processed.\n * If running in IO thread, defer the free to main thread via io_deferred_objects. 
*/\nstatic payloadHeader *processSentDataInEncodedBuffer(client *c, char *start_ptr, char *end_ptr,\n                                                     size_t *sentlen, ssize_t *remaining)\n{\n    int in_io_thread = (c && c->running_tid != IOTHREAD_MAIN_THREAD_ID);\n    char *ptr = start_ptr;\n    while (ptr < end_ptr && *remaining > 0) {\n        payloadHeader *head = (payloadHeader *)ptr;\n\n        if (head->payload_type == PLAIN_REPLY) {\n            if (*remaining < (ssize_t)(head->payload_len - *sentlen)) {\n                *sentlen += *remaining;\n                *remaining = 0;\n                return head;\n            }\n            *remaining -= (head->payload_len - *sentlen);\n            *sentlen = 0;\n        } else {\n            /* BULK_STR_REF - release object references */\n            bulkStrRef *str_ref = (bulkStrRef *)(ptr + sizeof(payloadHeader));\n\n            size_t written_len = str_ref->prefix_cnt + sdslen(str_ref->obj->ptr) + 2;\n            if (*remaining < (ssize_t)(written_len - *sentlen)) {\n                *sentlen += *remaining;\n                *remaining = 0;\n                return head;\n            }\n            *remaining -= (written_len - *sentlen);\n            if (in_io_thread) {\n                ioDeferFreeRobj(c, str_ref->obj);\n            } else {\n                decrRefCount(str_ref->obj);\n            }\n            str_ref->obj = NULL; /* Mark as released to prevent double free */\n            *sentlen = 0;\n        }\n\n        ptr += sizeof(payloadHeader) + head->payload_len;\n    }\n\n    return (ptr == end_ptr) ? NULL : (payloadHeader *)ptr;\n}\n\n/* This function should be called from _writeToClient when the reply list is not empty;\n * it gathers the scattered buffers from the reply list and sends them away with connWritev().\n * If we write successfully it returns C_OK, otherwise C_ERR is returned, and\n * 'nwritten' is an output parameter reporting how many bytes the server wrote\n * to the client. 
*/\nstatic int _writevToClient(client *c, ssize_t *nwritten) {\n    int iovmax = min(IOV_MAX, c->conn->iovcnt);\n    struct iovec iov[iovmax];\n    ReplyIOV reply_iov = {iov, iovmax};\n\n    /* Add c->buf to iov array */\n    if (c->bufpos > 0) {\n        if (likely(!c->buf_encoded)) {\n            /* Non-encoded buffer - add directly */\n            iov[reply_iov.iovcnt].iov_base = c->buf + c->sentlen;\n            iov[reply_iov.iovcnt].iov_len = c->bufpos - c->sentlen;\n            reply_iov.iov_bytes_len += iov[reply_iov.iovcnt++].iov_len;\n        } else {\n            /* Encoded buffer */\n            char *start_ptr = c->last_header ? (char *)c->last_header : c->buf;\n            serverAssert(start_ptr >= c->buf && start_ptr < (c->buf + c->bufpos));\n            processEncodedBufferForWrite(&reply_iov, start_ptr, c->buf + c->bufpos, c->sentlen);\n        }\n    }\n\n    /* Add c->reply list nodes to iov array */\n    if (!replyIOVReachLimit(&reply_iov)) {\n        /* The first node of reply list might be incomplete from the last call,\n         * thus it needs to be calibrated to get the actual data address and length. */\n        size_t offset = c->bufpos > 0 ? 0 : c->sentlen;\n        payloadHeader *last_header = c->bufpos > 0 ? NULL : c->last_header;\n        listIter iter;\n        listNode *next;\n        listRewind(c->reply, &iter);\n        while ((next = listNext(&iter)) && !replyIOVReachLimit(&reply_iov)) {\n            clientReplyBlock *o = listNodeValue(next);\n            if (o->used == 0) { /* empty node, just release it and skip. 
*/\n                c->reply_bytes -= o->size;\n                listDelNode(c->reply, next);\n                offset = 0;\n                last_header = NULL;\n                continue;\n            }\n\n            if (!o->buf_encoded) {\n                serverAssert(!last_header);\n                /* Non-encoded reply block - add directly */\n                iov[reply_iov.iovcnt].iov_base = o->buf + offset;\n                iov[reply_iov.iovcnt].iov_len = o->used - offset;\n                reply_iov.iov_bytes_len += iov[reply_iov.iovcnt++].iov_len;\n                offset = 0;\n            } else {\n                /* Encoded reply block */\n                char *start_ptr = last_header ? (char *)last_header : o->buf;\n                processEncodedBufferForWrite(&reply_iov, start_ptr, o->buf + o->used, offset);\n                offset = 0;\n                last_header = NULL;\n            }\n        }\n    }\n\n    if (reply_iov.iovcnt == 0) return C_OK;\n    *nwritten = connWritev(c->conn, iov, reply_iov.iovcnt);\n    if (*nwritten <= 0) return C_ERR;\n\n    /* Locate the new node which has leftover data and\n     * release all nodes in front of it. */\n    ssize_t remaining = *nwritten;\n    if (c->bufpos > 0) {\n        if (likely(!c->buf_encoded)) {\n            int buf_len = c->bufpos - c->sentlen;\n            c->sentlen += remaining;\n            /* If the buffer was sent, set bufpos to zero to continue with\n            * the remainder of the reply. */\n            if (remaining >= buf_len) {\n                c->bufpos = 0;\n                c->sentlen = 0;\n            }\n            remaining -= buf_len;\n        } else {\n            /* For encoded buffers */\n            char *start_ptr = c->last_header ? 
(char *)c->last_header : c->buf;\n            c->last_header = processSentDataInEncodedBuffer(c, start_ptr, c->buf + c->bufpos, &c->sentlen, &remaining);\n            if (!c->last_header) { /* reach end */\n                c->bufpos = 0;\n                c->buf_encoded = 0;\n                c->sentlen = 0;\n            }\n        }\n    }\n\n    /* Process c->reply list nodes */\n    listIter iter;\n    listNode *next;\n    listRewind(c->reply, &iter);\n    while (remaining > 0) {\n        next = listNext(&iter);\n        clientReplyBlock *o = listNodeValue(next);\n\n        if (!o->buf_encoded) {\n            if (remaining < (ssize_t)(o->used - c->sentlen)) {\n                c->sentlen += remaining;\n                break;\n            }\n            remaining -= (ssize_t)(o->used - c->sentlen);\n            c->reply_bytes -= o->size;\n            listDelNode(c->reply, next);\n            c->sentlen = 0;\n        } else {\n            /* Encoded reply block */\n            char *start_ptr = c->last_header ? (char *)c->last_header : o->buf;\n            c->last_header = processSentDataInEncodedBuffer(c, start_ptr, o->buf + o->used, &c->sentlen, &remaining);\n            if (!c->last_header) { /* reach end */\n                /* Block fully consumed, remove it */\n                c->reply_bytes -= o->size;\n                listDelNode(c->reply, next);\n                c->sentlen = 0;\n            } else {\n                /* Partial write, c->sentlen and c->last_header already updated, stop processing */\n                break;\n            }\n        }\n    }\n\n    return C_OK;\n}\n\n/* This function does the actual writing of output buffers for non-slave client types;\n * it is called by writeToClient.\n * If we write successfully it returns C_OK; otherwise C_ERR is returned,\n * and 'nwritten' is an output parameter holding how many bytes the server wrote\n * to the client. 
*/\nstatic inline int _writeToClientNonSlave(client *c, ssize_t *nwritten) {\n    *nwritten = 0;\n    /* When the reply list is not empty, it's better to use writev to save us some\n     * system calls and TCP packets. */\n    if (listLength(c->reply) > 0) {\n        int ret = _writevToClient(c, nwritten);\n        if (ret != C_OK) return ret;\n\n        /* If there are no more objects in the list, we expect\n         * the count of reply bytes to be exactly zero. */\n        if (listLength(c->reply) == 0)\n            serverAssert(c->reply_bytes == 0);\n    } else if (c->bufpos > 0) {\n        /* For encoded buffers, we need to use writev to handle bulk string references */\n        if (c->buf_encoded) {\n            int ret = _writevToClient(c, nwritten);\n            return ret;\n        }\n\n        *nwritten = connWrite(c->conn, c->buf + c->sentlen, c->bufpos - c->sentlen);\n        if (*nwritten <= 0) return C_ERR;\n        c->sentlen += *nwritten;\n\n        /* If the buffer was sent, set bufpos to zero to continue with\n         * the remainder of the reply. */\n        if (c->sentlen == c->bufpos) {\n            c->bufpos = 0;\n            c->sentlen = 0;\n        }\n    }\n    return C_OK;\n}\n\n/* This function does the actual writing of output buffers for slave client types;\n * it is called by writeToClient.\n * If we write successfully it returns C_OK; otherwise C_ERR is returned,\n * and 'nwritten' is an output parameter holding how many bytes the server wrote\n * to the client. */\nstatic inline int _writeToClientSlave(client *c, ssize_t *nwritten) {\n    *nwritten = 0;\n    serverAssert(c->bufpos == 0 && listLength(c->reply) == 0);\n\n    if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n        replBufBlock *o = listNodeValue(c->io_curr_repl_node);\n        /* The IO thread must not send data beyond the bound position. 
*/\n        size_t pos = c->io_curr_repl_node == c->io_bound_repl_node ?\n                     c->io_bound_block_pos : o->used;\n        if (pos > c->io_curr_block_pos) {\n            *nwritten = connWrite(c->conn, o->buf+c->io_curr_block_pos,\n                                  pos-c->io_curr_block_pos);\n            if (*nwritten <= 0) return C_ERR;\n            c->io_curr_block_pos += *nwritten;\n        }\n        /* If we fully sent the object and there are more nodes to send, go to the next one. */\n        if (c->io_curr_block_pos == pos && c->io_curr_repl_node != c->io_bound_repl_node) {\n            c->io_curr_repl_node = listNextNode(c->io_curr_repl_node);\n            c->io_curr_block_pos = 0;\n        }\n        return C_OK;\n    }\n\n    replBufBlock *o = listNodeValue(c->ref_repl_buf_node);\n    serverAssert(o->used >= c->ref_block_pos);\n    /* Send current block if it is not fully sent. */\n    if (o->used > c->ref_block_pos) {\n        *nwritten = connWrite(c->conn, o->buf+c->ref_block_pos,\n                                o->used-c->ref_block_pos);\n        if (*nwritten <= 0) return C_ERR;\n        c->ref_block_pos += *nwritten;\n    }\n    /* If we fully sent the object on head, go to the next one. */\n    listNode *next = listNextNode(c->ref_repl_buf_node);\n    if (next && c->ref_block_pos == o->used) {\n        o->refcount--;\n        ((replBufBlock *)(listNodeValue(next)))->refcount++;\n        c->ref_repl_buf_node = next;\n        c->ref_block_pos = 0;\n        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n    }\n    return C_OK;\n}\n\n/* Write data in output buffers to client. Return C_OK if the client\n * is still valid after the call, C_ERR if it was freed because of some\n * error.  If handler_installed is set, it will attempt to clear the\n * write event.\n *\n * This function is called by threads, but always with handler_installed\n * set to 0. 
So when handler_installed is set to 0 the function must be\n * thread safe. */\nint writeToClient(client *c, int handler_installed) {\n    if (!(c->io_flags & CLIENT_IO_WRITE_ENABLED)) return C_OK;\n    /* Update the number of writes of IO threads on the server */\n    atomicIncr(server.stat_io_writes_processed[c->running_tid], 1);\n\n    ssize_t nwritten = 0, totwritten = 0;\n    const int is_slave = clientTypeIsSlave(c);\n\n    if (unlikely(is_slave)) {\n        /* We send as much as possible if the client is\n         * a slave (otherwise, on high-speed traffic, the\n         * replication buffer will grow indefinitely) */\n        while(_clientHasPendingRepliesSlave(c)) {\n            int ret = _writeToClientSlave(c, &nwritten);\n            if (ret == C_ERR) break;\n            totwritten += nwritten;\n        }\n        atomicIncr(server.stat_net_repl_output_bytes, totwritten);\n    } else {\n        /* If we reach this block and the client is marked with the CLIENT_SLAVE flag\n         * it's because it's a MONITOR/slot-migration client, which is marked\n         * as a replica, but exposed as a normal client */\n        const int is_normal_client = !(c->flags & CLIENT_SLAVE);\n        while (_clientHasPendingRepliesNonSlave(c)) {\n            int ret = _writeToClientNonSlave(c, &nwritten);\n            if (ret == C_ERR) break;\n            totwritten += nwritten;\n            /* Note that we avoid sending more than NET_MAX_WRITES_PER_EVENT\n             * bytes; in a single-threaded server it's a good idea to serve\n             * other clients as well, even if a very large request comes from a\n             * super-fast link that is always able to accept data (in a real-world\n             * scenario think about 'KEYS *' against the loopback interface).\n             *\n             * However if we are over the maxmemory limit we ignore that and\n             * just deliver as much data as it is possible to deliver.\n             *\n             * Moreover, we also send as 
much as possible if the client is\n             * a slave (covered above) or a monitor (covered here).\n             * (otherwise, on high-speed traffic, the\n             * output buffer will grow indefinitely) */\n            if (totwritten > NET_MAX_WRITES_PER_EVENT &&\n                (server.maxmemory == 0 ||\n                zmalloc_used_memory() < server.maxmemory) &&\n                is_normal_client) break;\n        }\n        atomicIncr(server.stat_net_output_bytes, totwritten);\n    }\n    c->net_output_bytes += totwritten;\n\n    if (nwritten == -1) {\n        if (connGetState(c->conn) != CONN_STATE_CONNECTED) {\n            serverLog(LL_VERBOSE,\n                \"Error writing to client: %s\", connGetLastError(c->conn));\n            freeClientAsync(c);\n            return C_ERR;\n        }\n    }\n    if (totwritten > 0) {\n        /* For clients representing masters we don't count sending data\n         * as an interaction, since we always send REPLCONF ACK commands\n         * that take some time to just fill the socket output buffer.\n         * We just rely on data / pings received for timeout detection. */\n        if (!(c->flags & CLIENT_MASTER)) c->lastinteraction = server.unixtime;\n    }\n    if (!clientHasPendingReplies(c)) {\n        c->sentlen = 0;\n        /* Note that writeToClient() is called in a threaded way, but\n         * aeDeleteFileEvent() is not thread safe: however writeToClient()\n         * is always called with handler_installed set to 0 from threads\n         * so we are fine. */\n        if (handler_installed) {\n            /* IO Thread also can do that now. */\n            connSetWriteHandler(c->conn, NULL);\n        }\n\n        /* Close connection after entire reply has been sent. */\n        if (c->flags & CLIENT_CLOSE_AFTER_REPLY) {\n            freeClientAsync(c);\n            return C_ERR;\n        }\n\n        /* Remove client from pending referenced reply clients list. 
*/\n        if (c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n            tryUnlinkClientFromPendingRefReply(c, 1);\n\n        /* If a replica client has sent all the replication data it knows about\n         * we send it to the main thread so it can pick up new repl data ASAP.\n         * Note that we keep it in the IO thread in case we have a pending ACK read. */\n        if (c->flags & CLIENT_SLAVE && c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n            if (!replicaFromIOThreadHasPendingRead(c))\n                enqueuePendingClientsToMainThread(c, 0);\n        }\n    }\n    /* Update client's memory usage after writing.\n     * Since this isn't thread safe we do this conditionally. */\n    if (c->running_tid == IOTHREAD_MAIN_THREAD_ID) {\n        updateClientMemUsageAndBucket(c);\n    }\n    return C_OK;\n}\n\n/* Write event handler. Just send data to the client. */\nvoid sendReplyToClient(connection *conn) {\n    client *c = connGetPrivateData(conn);\n    writeToClient(c,1);\n}\n\n/* This function is called just before entering the event loop, in the hope\n * we can just write the replies to the client output buffer without any\n * need to use a syscall in order to install the writable event handler,\n * get it called, and so forth. */\nint handleClientsWithPendingWrites(void) {\n    listIter li;\n    listNode *ln;\n    int processed = listLength(server.clients_pending_write);\n\n    listRewind(server.clients_pending_write,&li);\n    while((ln = listNext(&li))) {\n        client *c = listNodeValue(ln);\n\n        /* We handle IO thread replicas in putReplicasInPendingClientsToIOThreads */\n        if (c->flags & CLIENT_SLAVE && c->tid != IOTHREAD_MAIN_THREAD_ID)\n            continue;\n\n        c->flags &= ~CLIENT_PENDING_WRITE;\n        listUnlinkNode(server.clients_pending_write,ln);\n\n        /* If a client is protected, don't do anything\n         * that may trigger a write error or recreate the handler. 
*/\n        if (c->flags & CLIENT_PROTECTED) continue;\n\n        /* Don't write to clients that are going to be closed anyway. */\n        if (c->flags & CLIENT_CLOSE_ASAP) continue;\n\n        /* Let IO thread handle the client if possible. */\n        if (server.io_threads_num > 1 &&\n            !(c->flags & CLIENT_CLOSE_AFTER_REPLY) &&\n            c->tid == IOTHREAD_MAIN_THREAD_ID &&\n            !isClientMustHandledByMainThread(c))\n        {\n            assignClientToIOThread(c);\n            continue;\n        }\n\n        /* Try to write buffers to the client socket. */\n        if (writeToClient(c,0) == C_ERR) continue;\n\n        /* If after the synchronous writes above we still have data to\n         * output to the client, we need to install the writable handler. */\n        if (clientHasPendingReplies(c)) {\n            installClientWriteHandler(c);\n        }\n    }\n    return processed;\n}\n\n/* Prepare the client for the parsing of the next command. */\nvoid resetClientQbufState(client *c) {\n    c->reqtype = 0;\n    c->multibulklen = 0;\n    c->bulklen = -1;\n}\n\nstatic inline void resetClientInternal(client *c, int num_pcmds_to_free) {\n    redisCommandProc *prevcmd = c->cmd ? c->cmd->proc : NULL;\n\n    /* We may get here with no pending commands but with an argv that needs freeing.\n     * An example is in the case of modules (RM_Call) */\n    if (c->current_pending_cmd) {\n        freeClientPendingCommands(c, num_pcmds_to_free);\n        if (c->pending_cmds.len == 0)\n            serverAssert(c->all_argv_len_sum == 0);\n        c->current_pending_cmd = NULL;\n    } else if (c->argv) {\n        freeClientArgvInternal(c, 1 /* free_argv */);\n        /* If we're dealing with a client that doesn't create pendingCommand structs (e.g.: a Lua client),\n         * clear the all_argv_len_sum counter so we don't get to freeing the client with it non-zero. 
*/\n        c->all_argv_len_sum = 0;\n    }\n\n    c->argc = 0;\n    c->cmd = NULL;\n    c->argv_len = 0;\n    c->argv = NULL;\n    c->cur_script = NULL;\n    c->slot = -1;\n    c->cluster_compatibility_check_slot = -2;\n    if (c->flags & CLIENT_EXECUTING_COMMAND)\n        c->flags &= ~CLIENT_EXECUTING_COMMAND;\n\n    /* Make sure the duration has been recorded to some command. */\n    serverAssert(c->duration == 0);\n#ifdef LOG_REQ_RES\n    reqresReset(c, 1);\n#endif\n\n    if (c->deferred_reply_errors)\n        listRelease(c->deferred_reply_errors);\n    c->deferred_reply_errors = NULL;\n\n    /* We clear the ASKING flag as well if we are not inside a MULTI, and\n     * if what we just executed is not the ASKING command itself. */\n    if (c->flags & CLIENT_ASKING && !(c->flags & CLIENT_MULTI) &&\n        prevcmd != askingCommand)\n    {\n        c->flags &= ~CLIENT_ASKING;\n    }\n\n    /* We do the same for the CACHING command as well. It also affects\n     * the next command or transaction executed, in a way very similar\n     * to ASKING. */\n    if (c->flags & CLIENT_TRACKING_CACHING && !(c->flags & CLIENT_MULTI) &&\n        prevcmd != clientCommand)\n    {\n        c->flags &= ~CLIENT_TRACKING_CACHING;\n    }\n\n    /* Remove the CLIENT_REPLY_SKIP flag if any so that the reply\n     * to the next command will be sent, but set the flag if the command\n     * we just processed was \"CLIENT REPLY SKIP\". 
*/\n    if (c->flags & CLIENT_REPLY_SKIP)\n        c->flags &= ~CLIENT_REPLY_SKIP;\n\n    if (c->flags & CLIENT_REPLY_SKIP_NEXT) {\n        c->flags |= CLIENT_REPLY_SKIP;\n        c->flags &= ~CLIENT_REPLY_SKIP_NEXT;\n    }\n\n    c->net_input_bytes_curr_cmd = 0;\n    c->net_output_bytes_curr_cmd = 0;\n}\n\n/* resetClient prepares the client to process the next command */\nvoid resetClient(client *c, int num_pcmds_to_free) {\n    resetClientInternal(c, num_pcmds_to_free);\n}\n\n/* This function is used when we want to re-enter the event loop but there\n * is the risk that the client we are dealing with will be freed in some\n * way. This happens for instance in:\n *\n * * DEBUG RELOAD and similar.\n * * When a Lua script is in -BUSY state.\n *\n * So the function will protect the client by doing two things:\n *\n * 1) It removes the file events. This way it is not possible that an\n *    error is signaled on the socket, freeing the client.\n * 2) Moreover it makes sure that if the client is freed in a different code\n *    path, it is not really released, but only marked for later release. */\nvoid protectClient(client *c) {\n    c->flags |= CLIENT_PROTECTED;\n    if (c->conn && c->tid == IOTHREAD_MAIN_THREAD_ID) {\n        connSetReadHandler(c->conn,NULL);\n        connSetWriteHandler(c->conn,NULL);\n    }\n}\n\n/* This will undo the client protection done by protectClient() */\nvoid unprotectClient(client *c) {\n    if (c->flags & CLIENT_PROTECTED) {\n        c->flags &= ~CLIENT_PROTECTED;\n        if (c->conn) {\n            if (c->tid == IOTHREAD_MAIN_THREAD_ID)\n                connSetReadHandler(c->conn,readQueryFromClient);\n            if (clientHasPendingReplies(c)) putClientInPendingWriteQueue(c);\n        }\n    }\n}\n\n/* Like processMultibulkBuffer(), but for the inline protocol instead of RESP,\n * this function consumes the client query buffer and creates a command ready\n * to be executed inside the client structure. 
Returns C_OK if the command\n * is ready to be executed, or C_ERR if there is still protocol to read to\n * have a well-formed command. The function also returns C_ERR when there is\n * a protocol error: in such a case the client structure is set up to reply\n * with the error and close the connection. */\nint processInlineBuffer(client *c, pendingCommand *pcmd) {\n    char *newline;\n    int argc, j, linefeed_chars = 1;\n    sds *argv, aux;\n    size_t querylen;\n\n    /* Search for end of line */\n    newline = strchr(c->querybuf+c->qb_pos,'\\n');\n\n    /* Nothing to do without a \\r\\n */\n    if (newline == NULL) {\n        if (sdslen(c->querybuf)-c->qb_pos > PROTO_INLINE_MAX_SIZE) {\n            pcmd->read_error = CLIENT_READ_TOO_BIG_INLINE_REQUEST;\n        }\n        return C_ERR;\n    }\n\n    /* Handle the \\r\\n case. */\n    if (newline != c->querybuf+c->qb_pos && *(newline-1) == '\\r')\n        newline--, linefeed_chars++;\n\n    /* Split the input buffer up to the \\r\\n */\n    querylen = newline-(c->querybuf+c->qb_pos);\n    aux = sdsnewlen(c->querybuf+c->qb_pos,querylen);\n    argv = sdssplitargs(aux,&argc);\n    sdsfree(aux);\n    if (argv == NULL) {\n        pcmd->read_error = CLIENT_READ_UNBALANCED_QUOTES;\n        return C_ERR;\n    }\n\n    /* Newline from slaves can be used to refresh the last ACK time.\n     * This is useful for a slave to ping back while loading a big\n     * RDB file. */\n    if (querylen == 0 && clientTypeIsSlave(c)) {\n        if (c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n            c->repl_ack_time = server.unixtime;\n        else\n            /* If this is a replica client running in an IO thread we cache the\n             * last ack time in a different member variable in order to avoid\n             * contention with the main thread. 
f.e see refreshGoodSlavesCount()\n             * Note c->repl_ack_time will still be updated in\n             * updateClientDataFromIOThread with the value of c->io_repl_ack_time\n             * when the client moves from IO to main thread. */\n            c->io_repl_ack_time = server.unixtime;\n    }\n\n    /* Masters should never send us inline protocol to run actual\n     * commands. If this happens, it is likely due to a bug in Redis where\n     * we got some desynchronization in the protocol, for example\n     * because of a PSYNC gone bad.\n     *\n     * However there is an exception: masters may send us just a newline\n     * to keep the connection active. */\n    if (querylen != 0 && c->flags & CLIENT_MASTER) {\n        sdsfreesplitres(argv,argc);\n        pcmd->read_error = CLIENT_READ_MASTER_USING_INLINE_PROTOCAL;\n        return C_ERR;\n    }\n\n    /* Move querybuffer position to the next query in the buffer. */\n    c->qb_pos += querylen+linefeed_chars;\n\n    /* Setup argv array on client structure */\n    if (argc) {\n        /* Create new argv if space is insufficient. */\n        if (argc > pcmd->argv_len) {\n            zfree(pcmd->argv);\n            pcmd->argv = zmalloc(sizeof(robj*)*argc);\n            pcmd->argv_len = argc;\n            pcmd->argv_len_sum = 0;\n        }\n    }\n\n    /* Create redis objects for all arguments. 
*/\n    for (pcmd->argc = 0, j = 0; j < argc; j++) {\n        pcmd->argv[pcmd->argc] = createObject(OBJ_STRING,argv[j]);\n        pcmd->argc++;\n        pcmd->argv_len_sum += sdslen(argv[j]);\n        c->all_argv_len_sum += sdslen(argv[j]);\n    }\n    zfree(argv);\n\n    /* Per-slot network bytes-in calculation.\n     *\n     * We calculate and store the current command's ingress bytes under\n     * c->net_input_bytes_curr_cmd, for which its per-slot aggregation is deferred\n     * until c->slot is parsed later within processCommand().\n     *\n     * Calculation: For inline buffer, every whitespace is of length 1,\n     * with the exception of the trailing '\\r\\n' being length 2.\n     *\n     * For example;\n     * Command) SET key value\n     * Inline) SET key value\\r\\n\n     */\n    pcmd->input_bytes = (pcmd->argv_len_sum + (pcmd->argc - 1) + 2);\n\n    return C_OK;\n}\n\n/* Helper function. Record protocol error details in the server log,\n * and set the client as CLIENT_CLOSE_AFTER_REPLY and\n * CLIENT_PROTOCOL_ERROR. */\n#define PROTO_DUMP_LEN 128\nstatic void setProtocolError(const char *errstr, client *c) {\n    if (server.verbosity <= LL_VERBOSE || c->flags & CLIENT_MASTER) {\n        sds client = catClientInfoString(sdsempty(),c);\n\n        /* Sample some protocol to give an idea about what was inside. */\n        char buf[256];\n        if (server.hide_user_data_from_log) {\n            snprintf(buf,sizeof(buf),\"Query buffer during protocol error: '*redacted*'\");\n        } else if (sdslen(c->querybuf)-c->qb_pos < PROTO_DUMP_LEN) {\n            snprintf(buf,sizeof(buf),\"Query buffer during protocol error: '%s'\", c->querybuf+c->qb_pos);\n        } else {\n            snprintf(buf,sizeof(buf),\"Query buffer during protocol error: '%.*s' (... more %zu bytes ...) 
'%.*s'\", PROTO_DUMP_LEN/2, c->querybuf+c->qb_pos, sdslen(c->querybuf)-c->qb_pos-PROTO_DUMP_LEN, PROTO_DUMP_LEN/2, c->querybuf+sdslen(c->querybuf)-PROTO_DUMP_LEN/2);  \n        }\n\n        /* Remove non printable chars. */  \n        if (!server.hide_user_data_from_log) {\n            char *p = buf;\n            while (*p != '\\0') {\n                if (!isprint(*p)) *p = '.';\n                p++;\n            }\n        }\n\n        /* Log all the client and protocol info. */\n        int loglevel = (c->flags & CLIENT_MASTER) ? LL_WARNING :\n                                                    LL_VERBOSE;\n        serverLog(loglevel,\n            \"Protocol error (%s) from client: %s. %s\", errstr, client, buf);\n        sdsfree(client);\n    }\n    c->flags |= (CLIENT_CLOSE_AFTER_REPLY|CLIENT_PROTOCOL_ERROR);\n}\n\n/* Process the query buffer for client 'c', setting up the client argument\n * vector for command execution. Returns C_OK if after running the function\n * the client has a well-formed ready to be processed command, otherwise\n * C_ERR if there is still to read more buffer to get the full command.\n * The function also returns C_ERR when there is a protocol error: in such a\n * case the client structure is setup to reply with the error and close\n * the connection.\n *\n * This function is called if processInputBuffer() detects that the next\n * command is in RESP format, so the first byte in the command is found\n * to be '*'. Otherwise for inline commands processInlineBuffer() is called. 
*/\nstatic int processMultibulkBuffer(client *c, pendingCommand *pcmd) {\n    char *newline = NULL;\n    int ok;\n    long long ll;\n    size_t querybuf_len = sdslen(c->querybuf); /* Cache sdslen */\n\n    if (c->multibulklen == 0) {\n        /* The pending command should have been reset */\n        serverAssertWithInfo(c,NULL,pcmd->argc == 0);\n\n        /* Multi bulk length cannot be read without a \\r\\n */\n        newline = strchr(c->querybuf+c->qb_pos,'\\r');\n        if (newline == NULL) {\n            if (querybuf_len-c->qb_pos > PROTO_INLINE_MAX_SIZE) {\n                pcmd->read_error = CLIENT_READ_TOO_BIG_MBULK_COUNT_STRING;\n            }\n            return C_ERR;\n        }\n\n        /* Buffer should also contain \\n */\n        if (newline-(c->querybuf+c->qb_pos) > (ssize_t)(querybuf_len-c->qb_pos-2))\n            return C_ERR;\n\n        /* We know for sure there is a whole line since newline != NULL,\n         * so go ahead and find out the multi bulk length. */\n        serverAssertWithInfo(c,NULL,c->querybuf[c->qb_pos] == '*');\n        size_t multibulklen_slen = newline - (c->querybuf + 1 + c->qb_pos);\n        ok = string2ll(c->querybuf+1+c->qb_pos,newline-(c->querybuf+1+c->qb_pos),&ll);\n        if (!ok || ll > INT_MAX) {\n            pcmd->read_error = CLIENT_READ_INVALID_MULTIBUCK_LENGTH;\n            return C_ERR;\n        } else if (ll > 10 && authRequired(c)) {\n            pcmd->read_error = CLIENT_READ_UNAUTH_MBUCK_COUNT;\n            return C_ERR;\n        }\n\n        c->qb_pos = (newline-c->querybuf)+2;\n\n        if (ll <= 0) return C_OK;\n\n        c->multibulklen = ll;\n        c->bulklen = -1;\n\n        /* Setup argv array on pending command structure.\n         * Reallocate argv array when the requested size is greater than current size. 
*/\n        if (c->multibulklen > pcmd->argv_len) {\n            zfree(pcmd->argv);\n            pcmd->argv_len = min(c->multibulklen, 1024);\n            pcmd->argv = zmalloc(sizeof(robj*)*(pcmd->argv_len));\n            pcmd->argv_len_sum = 0;\n        }\n\n        /* Per-slot network bytes-in calculation.\n         *\n         * We calculate and store the current command's ingress bytes under\n         * c->net_input_bytes_curr_cmd, for which its per-slot aggregation is deferred\n         * until c->slot is parsed later within processCommand().\n         *\n         * Calculation: For multi bulk buffer, we accumulate four factors, namely;\n         *\n         * 1) multibulklen_slen + 3\n         *    Cumulative string length (and not the value of) of multibulklen,\n         *    including the first \"*\" byte and last \"\\r\\n\" 2 bytes from RESP.\n         * 2) bulklen_slen + 3\n         *    Cumulative string length (and not the value of) of bulklen,\n         *    including +3 from RESP first \"$\" byte and last \"\\r\\n\" 2 bytes per argument count.\n         * 3) c->argv_len_sum\n         *    Cumulative string length of all argument vectors.\n         * 4) c->argc * 2\n         *    Cumulative string length of the arguments' white-spaces, for which there exists a total of\n         *    \"\\r\\n\" 2 bytes per argument.\n         *\n         * For example;\n         * Command) SET key value\n         * RESP) *3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n\n         *\n         * 1) String length of \"*3\\r\\n\" is 4, obtained from (multibulklen_slen + 3).\n         * 2) String length of \"$3\\r\\n\" \"$3\\r\\n\" \"$5\\r\\n\" is 12, obtained from (bulklen_slen + 3).\n         * 3) String length of \"SET\" \"key\" \"value\" is 11, obtained from (c->argv_len_sum).\n         * 4) String length of the 3 arguments' white-spaces \"\\r\\n\" is 6, obtained from (c->argc * 2).\n         *\n         * The 1st component is calculated within the below 
line.\n         * */\n        pcmd->input_bytes += (multibulklen_slen + 3);\n    }\n\n    serverAssertWithInfo(c,NULL,c->multibulklen > 0);\n    while(c->multibulklen) {\n        /* Read bulk length if unknown */\n        if (c->bulklen == -1) {\n            newline = memchr(c->querybuf+c->qb_pos,'\\r',sdslen(c->querybuf) - c->qb_pos);\n            if (newline == NULL) {\n                if (querybuf_len-c->qb_pos > PROTO_INLINE_MAX_SIZE) {\n                    pcmd->read_error = CLIENT_READ_TOO_BIG_BUCK_COUNT_STRING;\n                    return C_ERR;\n                }\n                break;\n            }\n\n            /* Buffer should also contain \\n */\n            if (newline-(c->querybuf+c->qb_pos) > (ssize_t)(querybuf_len-c->qb_pos-2))\n                break;\n\n            if (c->querybuf[c->qb_pos] != '$') {\n                pcmd->read_error = CLIENT_READ_EXPECTED_DOLLAR;\n                return C_ERR;\n            }\n\n            size_t bulklen_slen = newline - (c->querybuf + c->qb_pos + 1);\n            ok = string2ll(c->querybuf+c->qb_pos+1,newline-(c->querybuf+c->qb_pos+1),&ll);\n            if (!ok || ll < 0 ||\n                (!(c->flags & CLIENT_MASTER) && ll > server.proto_max_bulk_len)) {\n                pcmd->read_error = CLIENT_READ_INVALID_BUCK_LENGTH;\n                return C_ERR;\n            } else if (ll > 16384 && authRequired(c)) {\n                pcmd->read_error = CLIENT_READ_UNAUTH_BUCK_LENGTH;\n                return C_ERR;\n            }\n\n            c->qb_pos = newline-c->querybuf+2;\n            if (!(c->flags & CLIENT_MASTER) && ll >= PROTO_MBULK_BIG_ARG) {\n                /* When the client is not a master client (because master\n                 * client's querybuf can only be trimmed after data applied\n                 * and sent to replicas).\n                 *\n                 * If we are going to read a large object from network\n                 * try to make it likely that it will start at c->querybuf\n      
           * boundary so that we can optimize object creation\n                 * avoiding a large copy of data.\n                 *\n                 * But only when the data we have not parsed is less than\n                 * or equal to ll+2. If the data length is greater than\n                 * ll+2, trimming querybuf is just a waste of time, because\n                 * at this time the querybuf contains not only our bulk. */\n                if (querybuf_len-c->qb_pos <= (size_t)ll+2) {\n                    sdsrange(c->querybuf,c->qb_pos,-1);\n                    querybuf_len = sdslen(c->querybuf);\n                    c->qb_pos = 0;\n                    /* Hint the sds library about the amount of bytes this string is\n                     * going to contain. */\n                    c->querybuf = sdsMakeRoomForNonGreedy(c->querybuf,ll+2-querybuf_len);\n                    /* We later set the peak to the used portion of the buffer, but here we over\n                     * allocated because we know what we need, make sure it'll not be shrunk before used. */\n                    if (c->querybuf_peak < (size_t)ll + 2) c->querybuf_peak = ll + 2;\n                    querybuf_len = sdslen(c->querybuf); /* Update cached length */\n                }\n            }\n            c->bulklen = ll;\n            /* Per-slot network bytes-in calculation, 2nd component. */\n            pcmd->input_bytes += (bulklen_slen + 3);\n        } else {\n            serverAssert(pcmd->flags & PENDING_CMD_FLAG_INCOMPLETE);\n        }\n\n        /* Read bulk argument */\n        if (querybuf_len-c->qb_pos < (size_t)(c->bulklen+2)) {\n            break;\n        } else {\n            /* Check if we have space in argv, grow if needed */\n            if (pcmd->argc >= pcmd->argv_len) {\n                pcmd->argv_len = min(pcmd->argv_len < INT_MAX/2 ? 
(pcmd->argv_len)*2 : INT_MAX, pcmd->argc+c->multibulklen);\n                pcmd->argv = zrealloc(pcmd->argv, sizeof(robj*)*(pcmd->argv_len));\n            }\n\n            /* Optimization: if a non-master client's buffer contains JUST our bulk element\n             * instead of creating a new object by *copying* the sds we\n             * just use the current sds string. */\n            if (!(c->flags & CLIENT_MASTER) &&\n                c->qb_pos == 0 &&\n                c->bulklen >= PROTO_MBULK_BIG_ARG &&\n                querybuf_len == (size_t)(c->bulklen+2))\n            {\n                (pcmd->argv)[(pcmd->argc)++] = createObject(OBJ_STRING,c->querybuf);\n                pcmd->argv_len_sum += c->bulklen;\n                c->all_argv_len_sum += c->bulklen;\n                sdsIncrLen(c->querybuf,-2); /* remove CRLF */\n                /* Assume that if we saw a fat argument we'll see another one likely...\n                 * But only if that fat argument is not too big compared to the memory limit. */\n                if (!server.maxmemory || (size_t)c->bulklen < server.maxmemory / 32) {\n                    c->querybuf = sdsnewlen(SDS_NOINIT,c->bulklen+2);\n                } else {\n                    c->querybuf = sdsnewlen(SDS_NOINIT, PROTO_IOBUF_LEN);\n                }\n                sdsclear(c->querybuf);\n                querybuf_len = sdslen(c->querybuf); /* Update cached length */\n            } else {\n                (pcmd->argv)[(pcmd->argc)++] =\n                    createStringObject(c->querybuf+c->qb_pos,c->bulklen);\n                pcmd->argv_len_sum += c->bulklen;\n                c->all_argv_len_sum += c->bulklen;\n                c->qb_pos += c->bulklen+2;\n            }\n            c->bulklen = -1;\n            c->multibulklen--;\n        }\n    }\n\n    /* We're done when c->multibulk == 0 */\n    if (c->multibulklen == 0) {\n        /* Per-slot network bytes-in calculation, 3rd and 4th components. 
*/\n        pcmd->input_bytes += (pcmd->argv_len_sum + (pcmd->argc * 2));\n        pcmd->flags &= ~PENDING_CMD_FLAG_INCOMPLETE;\n        return C_OK;\n    }\n\n    /* Still not ready to process the command */\n    pcmd->flags |= PENDING_CMD_FLAG_INCOMPLETE;\n    return C_OK;\n}\n\n/* Prepare the client for executing the next command:\n *\n * 1. Append the response, if necessary.\n * 2. Reset the client.\n * 3. Update the all_argv_len_sum counter and advance the pending_cmd cyclic buffer.\n * 4. Update the cluster slot stats, if necessary.\n */\nvoid prepareForNextCommand(client *c, int update_slot_stats) {\n    reqresAppendResponse(c);\n    if (update_slot_stats) {\n        /* We should do this before resetting the client. */\n        clusterSlotStatsAddNetworkBytesInForUserClient(c);\n    }\n    resetClientInternal(c, 1);\n}\n\n/* Perform necessary tasks after a command was executed:\n *\n * 1. The client is reset unless there are reasons to avoid doing it.\n * 2. In the case of master clients, the replication offset is updated.\n * 3. Propagate commands we got from our master to replicas down the line. */\nvoid commandProcessed(client *c) {\n    /* If the client is blocked (including paused), just return, avoiding the\n     * reset and replication steps.\n     *\n     * 1. Don't reset the client structure for blocked clients, so that the reply\n     *    callback will still be able to access the client argv and argc fields.\n     *    The client will be reset in unblockClient().\n     * 2. Don't update the replication offset or propagate commands to replicas,\n     *    since we have not applied the command. */\n    if (c->flags & CLIENT_BLOCKED) return;\n\n    prepareForNextCommand(c, 1);\n\n    long long prev_offset = c->reploff;\n    if (c->flags & CLIENT_MASTER && !(c->flags & CLIENT_MULTI)) {\n        /* Update the applied replication offset of our master. 
*/\n        serverAssert(c->reploff_next > 0);\n        c->reploff = c->reploff_next;\n    }\n\n    /* If the client is a master we need to compute the difference\n     * between the applied offset before and after processing the buffer,\n     * to understand how much of the replication stream was actually\n     * applied to the master state: this quantity, and its corresponding\n     * part of the replication stream, will be propagated to the\n     * sub-replicas and to the replication backlog. */\n    if (c->flags & CLIENT_MASTER) {\n        long long applied = c->reploff - prev_offset;\n        if (applied) {\n            replicationFeedStreamFromMasterStream(c->querybuf+c->repl_applied,applied);\n            c->repl_applied += applied;\n\n            /* Update the atomic slot migration task's applied bytes. */\n            if (c->flags & CLIENT_ASM_IMPORTING)\n                asmImportIncrAppliedBytes(c->task, applied);\n        }\n    }\n}\n\n/* This function calls processCommand(), but also performs a few sub tasks\n * for the client that are useful in that context:\n *\n * 1. It sets the current client to the client 'c'.\n * 2. calls commandProcessed() if the command was handled.\n *\n * The function returns C_ERR in case the client was freed as a side effect\n * of processing the command, otherwise C_OK is returned. */\nint processCommandAndResetClient(client *c) {\n    int deadclient = 0;\n    client *old_client = server.current_client;\n    server.current_client = c;\n    if (processCommand(c) == C_OK) {\n        commandProcessed(c);\n        /* Update the client's memory to include output buffer growth following the\n         * processed command. 
*/\n        if (c->conn) updateClientMemUsageAndBucket(c);\n    }\n\n    if (server.current_client == NULL) deadclient = 1;\n    /*\n     * Restore the old client. This is needed because when a script\n     * times out, we reach this code from processEventsWhileBlocked,\n     * which sets server.current_client. If it is not restored, we would\n     * return 1 to our caller, falsely indicating that the client is dead\n     * and stopping reads from its buffer.\n     */\n    server.current_client = old_client;\n    /* performEvictions may flush slave output buffers. This may\n     * result in a slave, which may be the active client, being\n     * freed. */\n    return deadclient ? C_ERR : C_OK;\n}\n\n\n/* This function will execute any fully parsed commands pending on\n * the client. Returns C_ERR if the client is no longer valid after executing\n * the command, and C_OK for all other cases. */\nint processPendingCommandAndInputBuffer(client *c) {\n    /* Note: this code is also called from 'processUnblockedClients',\n     * but in the case of a module blocked client (see the RM_Call 'K' flag) we do not reach this code path.\n     * So whenever we change the code here we need to consider whether the same change is needed\n     * for module blocked clients as well. */\n    if (c->flags & CLIENT_PENDING_COMMAND) {\n        c->flags &= ~CLIENT_PENDING_COMMAND;\n        if (processCommandAndResetClient(c) == C_ERR) {\n            return C_ERR;\n        }\n    }\n\n    /* Now process the client if it has more data in its buffer.\n     *\n     * Note: when a master client steps into this function,\n     * it can always satisfy this condition, because its querybuf\n     * contains data not yet applied. 
*/\n    if ((c->querybuf && sdslen(c->querybuf) > 0) || c->pending_cmds.ready_len > 0) {\n        return processInputBuffer(c);\n    }\n    return C_OK;\n}\n\nvoid handleClientReadError(client *c) {\n    switch (c->read_error) {\n        case CLIENT_READ_TOO_BIG_INLINE_REQUEST:\n            addReplyError(c,\"Protocol error: too big inline request\");\n            setProtocolError(\"too big inline request\",c);\n            break;\n        case CLIENT_READ_UNBALANCED_QUOTES:\n            addReplyError(c,\"Protocol error: unbalanced quotes in request\");\n            setProtocolError(\"unbalanced quotes in request\",c);\n            break;\n        case CLIENT_READ_MASTER_USING_INLINE_PROTOCAL:\n            serverLog(LL_WARNING,\"WARNING: Receiving inline protocol from master, master stream corruption? Closing the master connection and discarding the cached master.\");\n            setProtocolError(\"Master using the inline protocol. Desync?\",c);\n            break;\n        case CLIENT_READ_TOO_BIG_MBULK_COUNT_STRING:\n            addReplyError(c,\"Protocol error: too big mbulk count string\");\n            setProtocolError(\"too big mbulk count string\",c);\n            break;\n        case CLIENT_READ_TOO_BIG_BUCK_COUNT_STRING:\n            addReplyError(c, \"Protocol error: too big bulk count string\");\n            setProtocolError(\"too big bulk count string\",c);\n            break;\n        case CLIENT_READ_EXPECTED_DOLLAR:\n            addReplyErrorFormat(c,\n                \"Protocol error: expected '$', got '%c'\",\n                c->querybuf[c->qb_pos]);\n            setProtocolError(\"expected $ but got something else\",c);\n            break;\n        case CLIENT_READ_INVALID_BUCK_LENGTH:\n            addReplyError(c,\"Protocol error: invalid bulk length\");\n            setProtocolError(\"invalid bulk length\",c);\n            break;\n        case CLIENT_READ_UNAUTH_BUCK_LENGTH:\n            addReplyError(c, \"Protocol error: unauthenticated bulk 
length\");\n            setProtocolError(\"unauth bulk length\", c);\n            break;\n        case CLIENT_READ_INVALID_MULTIBUCK_LENGTH:\n            addReplyError(c,\"Protocol error: invalid multibulk length\");\n            setProtocolError(\"invalid mbulk count\",c);\n            break;\n        case CLIENT_READ_UNAUTH_MBUCK_COUNT:\n            addReplyError(c, \"Protocol error: unauthenticated multibulk length\");\n            setProtocolError(\"unauth mbulk count\", c);\n            break;\n        case CLIENT_READ_CONN_DISCONNECTED:\n            serverLog(LL_VERBOSE, \"Reading from client: %s\",connGetLastError(c->conn));\n            break;\n        case CLIENT_READ_CONN_CLOSED:\n            if (server.verbosity <= LL_VERBOSE) {\n                sds info = catClientInfoString(sdsempty(), c);\n                serverLog(LL_VERBOSE, \"Client closed connection %s\", info);\n                sdsfree(info);\n            }\n            break;\n        case CLIENT_READ_REACHED_MAX_QUERYBUF: {\n            sds ci = catClientInfoString(sdsempty(),c), bytes = sdsempty();\n            bytes = sdscatrepr(bytes,c->querybuf,64);\n            serverLog(LL_WARNING,\"Closing client that reached max query buffer length: %s (qbuf initial bytes: %s)\", ci, bytes);\n            sdsfree(ci);\n            sdsfree(bytes);\n            break;\n        }\n        default:\n            serverPanic(\"Unknown client read error: %d\", c->read_error);\n            break;\n    }\n}\n\n\n/* Helper function to check if a read error is fatal (should stop processing) */\nint isClientReadErrorFatal(client *c) {\n    return c->read_error != 0 &&\n           c->read_error != CLIENT_READ_COMMAND_NOT_FOUND &&\n           c->read_error != CLIENT_READ_BAD_ARITY &&\n           c->read_error != CLIENT_READ_CROSS_SLOT;\n}\n\n/* This function is called every time, in the client structure 'c', there is\n * more query buffer to process, because we read more data from the socket\n * or because a client 
was blocked and later reactivated, so there could be\n * pending query buffer, already representing a full command, to process.\n * return C_ERR in case the client was freed during the processing */\nint processInputBuffer(client *c) {\n    atomicIncr(server.stat_total_client_process_input_buff_events, 1);\n\n    /* Keep active-client window updates on main-thread paths only (here and\n     * in IO-thread handoff processing) to avoid races with serverCron()\n     * maintenance of the circular slots. */\n    if (c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n        statsUpdateActiveClients(c);\n\n    /* We limit the lookahead for unauthenticated connections to 1.\n     * This is both to reduce memory overhead, and to prevent errors: AUTH can\n     * affect the handling of succeeding commands. Parsing of \"large\"\n     * unauthenticated multibulk commands is rejected, which would cause those\n     * commands to incorrectly return an error to the client. */\n    const int lookahead = authRequired(c) ? 1 : server.lookahead;\n\n    /* Keep processing while there is something in the input buffer */\n    while ((c->querybuf && c->qb_pos < sdslen(c->querybuf)) ||\n           c->pending_cmds.ready_len > 0)\n    {\n        /* Immediately abort if the client is in the middle of something. */\n        if (c->flags & CLIENT_BLOCKED || c->flags & CLIENT_UNBLOCKED) break;\n\n        /* Don't process more buffers from clients that have already pending\n         * commands to execute in c->argv. */\n        if (c->flags & CLIENT_PENDING_COMMAND) break;\n\n        /* Don't process input from the master while there is a busy script\n         * condition on the slave. We want just to accumulate the replication\n         * stream (instead of replying -BUSY like we do with other clients) and\n         * later resume the processing. 
*/\n        if (c->flags & CLIENT_MASTER && isInsideYieldingLongCommand()) break;\n\n        /* CLIENT_CLOSE_AFTER_REPLY closes the connection once the reply is\n         * written to the client. Make sure to not let the reply grow after\n         * this flag has been set (i.e. don't process more commands).\n         *\n         * The same applies for clients we want to terminate ASAP. */\n        if (c->flags & (CLIENT_CLOSE_AFTER_REPLY|CLIENT_CLOSE_ASAP)) break;\n\n        /* Determine if we need to parse more commands from the query buffer.\n         * Only parse when there are no ready commands waiting to be processed. */\n        const int parse_more = !c->pending_cmds.ready_len;\n        int pending_cmd_before_reading = c->pending_cmds.ready_len;\n\n        /* Parse up to lookahead commands only if we don't have enough ready commands */\n        while (parse_more && c->pending_cmds.ready_len < lookahead &&\n               c->querybuf && c->qb_pos < sdslen(c->querybuf))\n        {\n            /* Determine request type when unknown. */\n            if (!c->reqtype) {\n                if (c->querybuf[c->qb_pos] == '*') {\n                    c->reqtype = PROTO_REQ_MULTIBULK;\n                } else {\n                    c->reqtype = PROTO_REQ_INLINE;\n                }\n            }\n\n            pendingCommand *pcmd = NULL;\n            if (c->reqtype == PROTO_REQ_INLINE) {\n                pcmd = acquirePendingCommand();\n                if (processInlineBuffer(c, pcmd) == C_ERR && !pcmd->read_error) {\n                    /* If it fails but there are no errors, it means that it might just be\n                     * that the desired content cannot be parsed. At this point, we exit and wait for the next time. 
*/\n                    freePendingCommand(c, pcmd);\n                    break;\n                }\n            } else if (c->reqtype == PROTO_REQ_MULTIBULK) {\n                int incomplete = (c->pending_cmds.len != c->pending_cmds.ready_len);\n                if (unlikely(incomplete)) {\n                    pcmd = popPendingCommandFromTail(&c->pending_cmds);\n                } else {\n                    pcmd = acquirePendingCommand();\n                }\n\n                if (processMultibulkBuffer(c, pcmd) == C_ERR && !pcmd->read_error) {\n                    /* If it fails but there are no errors, it means that it might just be\n                     * that the desired content cannot be parsed. At this point, we exit and wait for the next time. */\n                    freePendingCommand(c, pcmd);\n                    break;\n                }\n            } else {\n                serverPanic(\"Unknown request type\");\n            }\n\n            addPendingCommand(&c->pending_cmds, pcmd);\n            if (unlikely(pcmd->read_error || (pcmd->flags & PENDING_CMD_FLAG_INCOMPLETE)))\n                break;\n\n            if (c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n                pcmd->reploff = c->read_reploff - sdslen(c->querybuf) + c->qb_pos;\n            else\n                pcmd->reploff = c->io_read_reploff - sdslen(c->querybuf) + c->qb_pos;\n\n            preprocessCommand(c, pcmd);\n            pcmd->flags |= PENDING_CMD_FLAG_PREPROCESSED;\n            resetClientQbufState(c);\n        }\n\n        if (c->pending_cmds.ready_len != pending_cmd_before_reading) {\n            int newly_parsed_cmds = c->pending_cmds.ready_len - pending_cmd_before_reading;\n            atomicIncr(server.stat_avg_pipeline_length_sum, newly_parsed_cmds);\n            atomicIncr(server.stat_avg_pipeline_length_cnt, 1);\n\n            c->stat_avg_pipeline_length_sum += newly_parsed_cmds;\n            c->stat_avg_pipeline_length_cnt++;\n        }\n\n        /* Try to consume 
the next ready command from the pending command list. */\n        if (!c->pending_cmds.ready_len)\n            break;\n        pendingCommand *curcmd = c->pending_cmds.head;\n\n        /* We populate the old client fields so we don't have to modify all existing logic to work with pendingCommands */\n        c->argc = curcmd->argc;\n        c->argv = curcmd->argv;\n        c->argv_len = curcmd->argv_len;\n        c->net_input_bytes_curr_cmd += curcmd->input_bytes;\n        c->reploff_next = curcmd->reploff;\n        c->slot = curcmd->slot;\n        c->lookedcmd = curcmd->cmd;\n        c->read_error = curcmd->read_error;\n        c->current_pending_cmd = curcmd;\n\n        /* Prefetch the command only when more commands have been parsed and we\n         * are in the main thread. If running in an IO thread, prefetch will be\n         * deferred until the client is processed by the main thread. Skip prefetch\n         * if there are too few commands to avoid meaningless prefetching. */\n        if (parse_more && c->running_tid == IOTHREAD_MAIN_THREAD_ID &&\n            c->pending_cmds.ready_len > 1)\n        {\n            /* Prefetch the commands. */\n            resetCommandsBatch();\n            addCommandToBatch(c);\n            prefetchCommands();\n        }\n\n        /* Check if the client has a fatal read error that requires stopping processing. */\n        if (isClientReadErrorFatal(c)) {\n            if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n                enqueuePendingClientsToMainThread(c, 0);\n            }\n            break;\n        }\n\n        /* Multibulk processing could see a <= 0 length. */\n        if (!c->argc) {\n            /* A naked newline can be sent from masters as a keep-alive, or from slaves to refresh\n             * the last ACK time. In that case there's no command to actually execute. 
*/\n            prepareForNextCommand(c, 0);\n        } else {\n            /* If we are in the context of an I/O thread, we can't really\n             * execute the command here. All we can do is flag the client\n             * as one that needs to process the command. */\n            if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n                c->io_flags |= CLIENT_IO_PENDING_COMMAND;\n                enqueuePendingClientsToMainThread(c, 0);\n                break;\n            }\n\n            /* We are finally ready to execute the command. */\n            if (processCommandAndResetClient(c) == C_ERR) {\n                /* If the client is no longer valid, we avoid exiting this\n                 * loop and trimming the client buffer later. So we return\n                 * ASAP in that case. */\n                return C_ERR;\n            }\n        }\n    }\n\n    if (c->flags & CLIENT_MASTER) {\n        /* If the client is a master, trim the querybuf to repl_applied,\n         * since the master client is special: its querybuf is not only\n         * used to parse commands, but also to proxy data to sub-replicas.\n         *\n         * Here are some scenarios in which we cannot trim to qb_pos:\n         * 1. we haven't received a complete command from the master\n         * 2. the master client is blocked because of a client pause\n         * 3. IO threads performed the read, and the master client is flagged with CLIENT_PENDING_COMMAND\n         *\n         * In these scenarios, qb_pos points into the current command\n         * or to the beginning of the next command, and the current command is not\n         * applied yet, so repl_applied is not equal to qb_pos. 
*/\n        if (c->repl_applied) {\n            sdsrange(c->querybuf,c->repl_applied,-1);\n            serverAssert(c->qb_pos >= (size_t)c->repl_applied);\n            c->qb_pos -= c->repl_applied;\n            c->repl_applied = 0;\n        }\n    } else if (c->qb_pos) {\n        /* Trim to pos */\n        sdsrange(c->querybuf,c->qb_pos,-1);\n        c->qb_pos = 0;\n    }\n\n    /* Update client memory usage after processing the query buffer, this is\n     * important in case the query buffer is big and wasn't drained during\n     * the above loop (because of partially sent big commands). */\n    if (c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n        updateClientMemUsageAndBucket(c);\n\n    return C_OK;\n}\n\nvoid readQueryFromClient(connection *conn) {\n    client *c = connGetPrivateData(conn);\n    int nread, big_arg = 0;\n    size_t qblen, readlen;\n\n    if (!(c->io_flags & CLIENT_IO_READ_ENABLED)) {\n        atomicSetWithSync(c->pending_read, 1);\n        return;\n    } else if (server.io_threads_num > 1) {\n        atomicSetWithSync(c->pending_read, 0);\n    }\n\n    c->read_error = 0;\n\n    c->stat_total_read_events++;\n\n    /* Update the number of reads of io threads on server */\n    atomicIncr(server.stat_io_reads_processed[c->running_tid], 1);\n\n    readlen = PROTO_IOBUF_LEN;\n    /* If this is a multi bulk request, and we are processing a bulk reply\n     * that is large enough, try to maximize the probability that the query\n     * buffer contains exactly the SDS string representing the object, even\n     * at the risk of requiring more read(2) calls. This way the function\n     * processMultiBulkBuffer() can avoid copying buffers to create the\n     * Redis Object representing the argument. 
*/\n    if (c->reqtype == PROTO_REQ_MULTIBULK && c->multibulklen && c->bulklen != -1\n        && c->bulklen >= PROTO_MBULK_BIG_ARG)\n    {\n        /* For big argv, the client always uses its private query buffer.\n         * Using the reusable query buffer would eventually expand it beyond 32k,\n         * causing the client to take ownership of the reusable query buffer. */\n        if (!c->querybuf) c->querybuf = sdsempty();\n\n        ssize_t remaining = (size_t)(c->bulklen+2)-(sdslen(c->querybuf)-c->qb_pos);\n        big_arg = 1;\n\n        /* Note that the 'remaining' variable may be zero in some edge cases,\n         * for example once we resume a blocked client after CLIENT PAUSE. */\n        if (remaining > 0) readlen = remaining;\n\n        /* A master client needs to expand readlen when it meets a BIG_ARG (see #9100),\n         * but it doesn't need to align to the next arg, so we can read more data. */\n        if (c->flags & CLIENT_MASTER && readlen < PROTO_IOBUF_LEN)\n            readlen = PROTO_IOBUF_LEN;\n    } else if (c->querybuf == NULL) {\n        if (unlikely(thread_reusable_qb_used)) {\n            /* The reusable query buffer is already used by another client,\n             * switch to using the client's private query buffer. This only\n             * occurs when commands are executed nested via processEventsWhileBlocked(). */\n            c->querybuf = sdsnewlen(NULL, PROTO_IOBUF_LEN);\n            sdsclear(c->querybuf);\n        } else {\n            /* Create the reusable query buffer if it doesn't exist. */\n            if (!thread_reusable_qb) {\n                thread_reusable_qb = sdsnewlen(NULL, PROTO_IOBUF_LEN);\n                sdsclear(thread_reusable_qb);\n            }\n\n            /* Assign the reusable query buffer to the client and mark it as in use. 
*/\n            serverAssert(sdslen(thread_reusable_qb) == 0);\n            c->querybuf = thread_reusable_qb;\n            c->io_flags |= CLIENT_IO_REUSABLE_QUERYBUFFER;\n            thread_reusable_qb_used = 1;\n        }\n    }\n\n    qblen = sdslen(c->querybuf);\n    if (!(c->flags & CLIENT_MASTER) && // master client's querybuf can grow greedy.\n        (big_arg || sdsalloc(c->querybuf) < PROTO_IOBUF_LEN)) {\n        /* When reading a BIG_ARG we won't be reading more than that one arg\n         * into the query buffer, so we don't need to pre-allocate more than we\n         * need, hence the non-greedy growing. For the initial allocation of\n         * the query buffer, we also don't want to use greedy growth, in order\n         * to avoid colliding with the RESIZE_THRESHOLD mechanism. */\n        c->querybuf = sdsMakeRoomForNonGreedy(c->querybuf, readlen);\n        /* We later set the peak to the used portion of the buffer, but here we over\n         * allocated because we know what we need, make sure it'll not be shrunk before used. */\n        if (c->querybuf_peak < qblen + readlen) c->querybuf_peak = qblen + readlen;\n    } else {\n        c->querybuf = sdsMakeRoomFor(c->querybuf, readlen);\n\n        /* Read as much as possible from the socket to save read(2) system calls. 
*/\n        readlen = sdsavail(c->querybuf);\n    }\n    nread = connRead(c->conn, c->querybuf+qblen, readlen);\n    if (nread == -1) {\n        if (connGetState(conn) == CONN_STATE_CONNECTED) {\n            goto done;\n        } else {\n            c->read_error = CLIENT_READ_CONN_DISCONNECTED;\n            freeClientAsync(c);\n            goto done;\n        }\n    } else if (nread == 0) {\n        c->read_error = CLIENT_READ_CONN_CLOSED;\n        freeClientAsync(c);\n        goto done;\n    }\n\n    sdsIncrLen(c->querybuf,nread);\n    qblen = sdslen(c->querybuf);\n    if (c->querybuf_peak < qblen) c->querybuf_peak = qblen;\n\n    if (!(c->flags & CLIENT_MASTER) || c->running_tid == IOTHREAD_MAIN_THREAD_ID)\n        c->lastinteraction = server.unixtime;\n    else\n        /* Avoid contention with genRedisInfoString as it can access master\n         * client's data. If this is a master running in IO thread the value of\n         * c->lastinteraction will be updated during processClientsFromIOThread */\n        c->io_lastinteraction = server.unixtime;\n\n    if (c->flags & CLIENT_MASTER) {\n        if (c->running_tid == IOTHREAD_MAIN_THREAD_ID) {\n            c->read_reploff += nread;\n        } else {\n            /* Same comment as for c->io_lastinteraction */\n            c->io_read_reploff += nread;\n        }\n        atomicIncr(server.stat_net_repl_input_bytes, nread);\n    } else {\n        atomicIncr(server.stat_net_input_bytes, nread);\n    }\n    c->net_input_bytes += nread;\n\n    if (!(c->flags & CLIENT_MASTER) &&\n        /* The commands cached in the MULTI/EXEC queue have not been executed yet,\n         * so they are also considered a part of the query buffer in a broader sense.\n         *\n         * For unauthenticated clients, the query buffer cannot exceed 1MB at most. 
*/\n        (c->mstate.argv_len_sums + sdslen(c->querybuf) > server.client_max_querybuf_len ||\n         (c->mstate.argv_len_sums + sdslen(c->querybuf) > 1024*1024 && authRequired(c))))\n    {\n        c->read_error = CLIENT_READ_REACHED_MAX_QUERYBUF;\n        freeClientAsync(c);\n        atomicIncr(server.stat_client_qbuf_limit_disconnections, 1);\n        goto done;\n    }\n\n    /* There is more data in the client input buffer, continue parsing it\n     * and check if there is a full command to execute. */\n    if (processInputBuffer(c) == C_ERR)\n         c = NULL;\n\ndone:\n    if (c && isClientReadErrorFatal(c)) {\n        if (c->running_tid == IOTHREAD_MAIN_THREAD_ID) {\n            handleClientReadError(c);\n        }\n    }\n\n    if (c && (c->io_flags & CLIENT_IO_REUSABLE_QUERYBUFFER)) {\n        serverAssert(c->qb_pos == 0); /* Ensure the client's query buffer is trimmed in processInputBuffer */\n        resetReusableQueryBuf(c);\n    }\n    beforeNextClient(c);\n}\n\n/* A Redis \"Address String\" is a colon separated ip:port pair.\n * For IPv4 it's in the form x.y.z.k:port, example: \"127.0.0.1:1234\".\n * For IPv6 addresses we use [] around the IP part, like in \"[::1]:1234\".\n * For Unix sockets we use path:0, like in \"/tmp/redis:0\".\n *\n * An Address String always fits inside a buffer of NET_ADDR_STR_LEN bytes,\n * including the null term.\n *\n * On failure the function still populates 'addr' with the \"?:0\" string in case\n * you want to relax error checking or need to display something anyway (see\n * anetFdToString implementation for more info). */\nvoid genClientAddrString(client *client, char *addr,\n                         size_t addr_len, int remote) {\n    if (client->flags & CLIENT_UNIX_SOCKET) {\n        /* Unix socket client. */\n        snprintf(addr,addr_len,\"%s:0\",server.unixsocket);\n    } else {\n        /* TCP client. 
*/\n        connFormatAddr(client->conn,addr,addr_len,remote);\n    }\n}\n\n/* This function returns the client peer id, by creating and caching it\n * if client->peerid is NULL, otherwise returning the cached value.\n * The Peer ID never changes during the life of the client, however it\n * is expensive to compute. */\nchar *getClientPeerId(client *c) {\n    char peerid[NET_ADDR_STR_LEN] = {0};\n\n    if (c->peerid == NULL) {\n        genClientAddrString(c,peerid,sizeof(peerid),1);\n        c->peerid = sdsnew(peerid);\n    }\n    return c->peerid;\n}\n\n/* This function returns the client bound socket name, by creating and caching\n * it if client->sockname is NULL, otherwise returning the cached value.\n * The Socket Name never changes during the life of the client, however it\n * is expensive to compute. */\nchar *getClientSockname(client *c) {\n    char sockname[NET_ADDR_STR_LEN] = {0};\n\n    if (c->sockname == NULL) {\n        genClientAddrString(c,sockname,sizeof(sockname),0);\n        c->sockname = sdsnew(sockname);\n    }\n    return c->sockname;\n}\n\nstatic inline int isCrashing(void) {\n    int crashing;\n    atomicGet(server.crashing, crashing);\n    return crashing;\n}\n\n/* Concatenate a string representing the state of a client in a human\n * readable format, into the sds string 's'. */\nsds catClientInfoString(sds s, client *client) {\n    char flags[17], events[3], conninfo[CONN_INFO_LEN], *p;\n\n    /* Pause IO thread to access data of the client safely. 
*/\n    int paused = 0;\n    if (client->running_tid != IOTHREAD_MAIN_THREAD_ID &&\n        pthread_equal(server.main_thread_id, pthread_self()) &&\n        !isCrashing())\n    {\n        paused = 1;\n        pauseIOThread(client->running_tid);\n    }\n\n    p = flags;\n    if (client->flags & CLIENT_SLAVE) {\n        if (client->flags & CLIENT_MONITOR)\n            *p++ = 'O';\n        else if (client->flags & CLIENT_ASM_MIGRATING)\n            *p++ = 'g';\n        else\n            *p++ = 'S';\n    }\n    if (client->flags & CLIENT_MASTER) {\n        if (client->flags & CLIENT_ASM_IMPORTING)\n            *p++ = 'o';\n        else\n            *p++ = 'M';\n    }\n    if (client->flags & CLIENT_PUBSUB) *p++ = 'P';\n    if (client->flags & CLIENT_MULTI) *p++ = 'x';\n    if (client->flags & CLIENT_BLOCKED) *p++ = 'b';\n    if (client->flags & CLIENT_TRACKING) *p++ = 't';\n    if (client->flags & CLIENT_TRACKING_BROKEN_REDIR) *p++ = 'R';\n    if (client->flags & CLIENT_TRACKING_BCAST) *p++ = 'B';\n    if (client->flags & CLIENT_DIRTY_CAS) *p++ = 'd';\n    if (client->flags & CLIENT_CLOSE_AFTER_REPLY) *p++ = 'c';\n    if (client->flags & CLIENT_UNBLOCKED) *p++ = 'u';\n    if (client->flags & CLIENT_CLOSE_ASAP) *p++ = 'A';\n    if (client->flags & CLIENT_UNIX_SOCKET) *p++ = 'U';\n    if (client->flags & CLIENT_READONLY) *p++ = 'r';\n    if (client->flags & CLIENT_NO_EVICT) *p++ = 'e';\n    if (client->flags & CLIENT_NO_TOUCH) *p++ = 'T';\n    if (client->flags & CLIENT_REPL_RDB_CHANNEL) *p++ = 'C';\n    if (client->flags & CLIENT_INTERNAL) *p++ = 'I';\n    if (p == flags) *p++ = 'N';\n    *p++ = '\\0';\n\n    p = events;\n    if (client->conn) {\n        if (connHasReadHandler(client->conn)) *p++ = 'r';\n        if (connHasWriteHandler(client->conn)) *p++ = 'w';\n    }\n    *p = '\\0';\n\n    /* Compute the total memory consumed by this client. 
*/\n    size_t obufmem, total_mem = getClientMemoryUsage(client, &obufmem);\n\n    size_t used_blocks_of_repl_buf = 0;\n    if (client->ref_repl_buf_node) {\n        replBufBlock *last = listNodeValue(listLast(server.repl_buffer_blocks));\n        replBufBlock *cur = listNodeValue(client->ref_repl_buf_node);\n        used_blocks_of_repl_buf = last->id - cur->id + 1;\n    }\n\n    sds ret = sdscatfmt(s, FMTARGS(\n        \"id=%U\", (unsigned long long) client->id,\n        \" addr=%s\", getClientPeerId(client),\n        \" laddr=%s\", getClientSockname(client),\n        \" %s\", connGetInfo(client->conn, conninfo, sizeof(conninfo)),\n        \" name=%s\", client->name ? (char*)client->name->ptr : \"\",\n        \" age=%I\", (long long)(commandTimeSnapshot() / 1000 - client->ctime),\n        \" idle=%I\", (long long)(server.unixtime - client->lastinteraction),\n        \" flags=%s\", flags,\n        \" db=%i\", client->db->id,\n        \" sub=%i\", (int) dictSize(client->pubsub_channels),\n        \" psub=%i\", (int) dictSize(client->pubsub_patterns),\n        \" ssub=%i\", (int) dictSize(client->pubsubshard_channels),\n        \" multi=%i\", (client->flags & CLIENT_MULTI) ? client->mstate.count : -1,\n        \" watch=%i\", (int) listLength(client->watched_keys),\n        \" qbuf=%U\", client->querybuf ? (unsigned long long) sdslen(client->querybuf) : 0,\n        \" qbuf-free=%U\", client->querybuf ? 
(unsigned long long) sdsavail(client->querybuf) : 0,\n        \" argv-mem=%U\", (unsigned long long) client->all_argv_len_sum,\n        \" multi-mem=%U\", (unsigned long long) client->mstate.argv_len_sums,\n        \" rbs=%U\", (unsigned long long) client->buf_usable_size,\n        \" rbp=%U\", (unsigned long long) client->buf_peak,\n        \" obl=%U\", (unsigned long long) client->bufpos,\n        \" oll=%U\", (unsigned long long) listLength(client->reply) + used_blocks_of_repl_buf,\n        \" omem=%U\", (unsigned long long) obufmem, /* should not include client->buf since we want to see 0 for static clients. */\n        \" tot-mem=%U\", (unsigned long long) total_mem,\n        \" events=%s\", events,\n        \" cmd=%s\", client->lastcmd ? client->lastcmd->fullname : \"NULL\",\n        \" user=%s\", client->user ? client->user->name : \"(superuser)\",\n        \" redir=%I\", (client->flags & CLIENT_TRACKING) ? (long long) client->client_tracking_redirection : -1,\n        \" resp=%i\", client->resp,\n        \" lib-name=%s\", client->lib_name ? (char*)client->lib_name->ptr : \"\",\n        \" lib-ver=%s\", client->lib_ver ? 
(char*)client->lib_ver->ptr : \"\",\n        \" io-thread=%i\", client->tid,\n        \" tot-net-in=%U\", client->net_input_bytes,\n        \" tot-net-out=%U\", client->net_output_bytes,\n        \" tot-cmds=%U\", client->commands_processed,\n        \" read-events=%U\", (unsigned long long)client->stat_total_read_events,\n        \" avg-pipeline-len-sum=%U\", (unsigned long long)client->stat_avg_pipeline_length_sum,\n        \" avg-pipeline-len-cnt=%U\", (unsigned long long)client->stat_avg_pipeline_length_cnt));\n\n    if (paused) resumeIOThread(client->running_tid);\n    return ret;\n}\n\nsds getAllClientsInfoString(int type) {\n    listNode *ln;\n    listIter li;\n    client *client;\n    sds o = sdsnewlen(SDS_NOINIT,200*listLength(server.clients));\n    sdsclear(o);\n\n    /* Pause all IO threads to access data of clients safely, so that the\n     * per-thread pause in catClientInfoString does not execute repeatedly. */\n    int allpaused = 0;\n    if (server.io_threads_num > 1 && !isCrashing() &&\n        pthread_equal(server.main_thread_id, pthread_self()))\n    {\n        allpaused = 1;\n        pauseAllIOThreads();\n    }\n\n    listRewind(server.clients,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        client = listNodeValue(ln);\n        if (type != -1 && getClientType(client) != type) continue;\n        o = catClientInfoString(o,client);\n        o = sdscatlen(o,\"\\n\",1);\n    }\n\n    if (allpaused) resumeAllIOThreads();\n    return o;\n}\n\n/* Check validity of an attribute that's going to be shown in CLIENT LIST. */\nint validateClientAttr(const char *val) {\n    /* Check if the charset is ok. We need to do this, otherwise the\n     * CLIENT LIST format will break. You should always be able to\n     * split by space to get the different fields. */\n    while (*val) {\n        if (*val < '!' || *val > '~') { /* ASCII is assumed. 
*/\n            return C_ERR;\n        }\n        val++;\n    }\n    return C_OK;\n}\n\n/* Returns C_OK if the name is valid. Returns C_ERR & sets `err` (when provided) otherwise. */\nint validateClientName(robj *name, const char **err) {\n    const char *err_msg = \"Client names cannot contain spaces, newlines or special characters.\";\n    int len = (name != NULL) ? sdslen(name->ptr) : 0;\n    /* We allow setting the client name to an empty string. */\n    if (len == 0)\n        return C_OK;\n    if (validateClientAttr(name->ptr) == C_ERR) {\n        if (err) *err = err_msg;\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Returns C_OK if the name has been set or C_ERR if the name is invalid. */\nint clientSetName(client *c, robj *name, const char **err) {\n    if (validateClientName(name, err) == C_ERR) {\n        return C_ERR;\n    }\n    int len = (name != NULL) ? sdslen(name->ptr) : 0;\n    /* Setting the client name to an empty string actually removes\n     * the current name. */\n    if (len == 0) {\n        if (c->name) decrRefCount(c->name);\n        c->name = NULL;\n        return C_OK;\n    }\n    if (c->name) decrRefCount(c->name);\n    c->name = name;\n    incrRefCount(name);\n    return C_OK;\n}\n\n/* This function implements CLIENT SETNAME, including replying to the\n * user with an error if the charset is wrong (in that case C_ERR is\n * returned). If the function succeeded C_OK is returned, and it's up\n * to the caller to send a reply if needed.\n *\n * Setting an empty string as name has the effect of unsetting the\n * currently set name: the client will remain unnamed.\n *\n * This function is also used to implement the HELLO SETNAME option. 
*/\nint clientSetNameOrReply(client *c, robj *name) {\n    const char *err = NULL;\n    int result = clientSetName(c, name, &err);\n    if (result == C_ERR) {\n        addReplyError(c, err);\n    }\n    return result;\n}\n\n/* Set client or connection-related info. */\nvoid clientSetinfoCommand(client *c) {\n    sds attr = c->argv[2]->ptr;\n    robj *valob = c->argv[3];\n    sds val = valob->ptr;\n    robj **destvar = NULL;\n    if (!strcasecmp(attr,\"lib-name\")) {\n        destvar = &c->lib_name;\n    } else if (!strcasecmp(attr,\"lib-ver\")) {\n        destvar = &c->lib_ver;\n    } else {\n        addReplyErrorFormat(c,\"Unrecognized option '%s'\", attr);\n        return;\n    }\n\n    if (validateClientAttr(val)==C_ERR) {\n        addReplyErrorFormat(c,\n            \"%s cannot contain spaces, newlines or special characters.\", attr);\n        return;\n    }\n    if (*destvar) decrRefCount(*destvar);\n    if (sdslen(val)) {\n        *destvar = valob;\n        incrRefCount(valob);\n    } else\n        *destvar = NULL;\n    addReply(c,shared.ok);\n}\n\n/* Reset the client state to resemble a newly connected client.\n */\nvoid resetCommand(client *c) {\n    /* MONITOR clients are also marked with CLIENT_SLAVE; we need to\n     * distinguish between the two.\n     */\n    uint64_t flags = c->flags;\n    if (flags & CLIENT_MONITOR) flags &= ~(CLIENT_MONITOR|CLIENT_SLAVE);\n\n    if (flags & (CLIENT_SLAVE|CLIENT_MASTER|CLIENT_MODULE)) {\n        addReplyError(c,\"can only reset normal client connections\");\n        return;\n    }\n\n    clearClientConnectionState(c);\n    addReplyStatus(c,\"RESET\");\n}\n\n/* Disconnect the current client. */\nvoid quitCommand(client *c) {\n    addReply(c,shared.ok);\n    c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n}\n\nvoid clientCommand(client *c) {\n    listNode *ln;\n    listIter li;\n\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"CACHING (YES|NO)\",\n\"    Enable/disable tracking 
of the keys for the next command in OPTIN/OPTOUT modes.\",\n\"GETREDIR\",\n\"    Return the client ID we are redirecting to when tracking is enabled.\",\n\"GETNAME\",\n\"    Return the name of the current connection.\",\n\"ID\",\n\"    Return the ID of the current connection.\",\n\"INFO\",\n\"    Return information about the current client connection.\",\n\"KILL <ip:port>\",\n\"    Kill connection made from <ip:port>.\",\n\"KILL <option> <value> [<option> <value> [...]]\",\n\"    Kill connections. Options are:\",\n\"    * ADDR (<ip:port>|<unixsocket>:0)\",\n\"      Kill connections made from the specified address.\",\n\"    * LADDR (<ip:port>|<unixsocket>:0)\",\n\"      Kill connections made to the specified local address.\",\n\"    * TYPE (NORMAL|MASTER|REPLICA|PUBSUB)\",\n\"      Kill connections by type.\",\n\"    * USER <username>\",\n\"      Kill connections authenticated by <username>.\",\n\"    * SKIPME (YES|NO)\",\n\"      Skip killing the current connection (default: yes).\",\n\"    * ID <client-id>\",\n\"      Kill connections by client ID.\",\n\"    * MAXAGE <maxage>\",\n\"      Kill connections older than the specified age, in seconds.\",\n\"LIST [options ...]\",\n\"    Return information about client connections. Options:\",\n\"    * TYPE (NORMAL|MASTER|REPLICA|PUBSUB)\",\n\"      Return clients of the specified type.\",\n\"UNPAUSE\",\n\"    Stop the current client pause, resuming traffic.\",\n\"PAUSE <timeout> [WRITE|ALL]\",\n\"    Suspend all, or just write, clients for <timeout> milliseconds.\",\n\"REPLY (ON|OFF|SKIP)\",\n\"    Control the replies sent to the current connection.\",\n\"SETNAME <name>\",\n\"    Assign the name <name> to the current connection.\",\n\"SETINFO <option> <value>\",\n\"    Set client metadata attributes. 
Options are:\",\n\"    * LIB-NAME: the client lib name.\",\n\"    * LIB-VER: the client lib version.\",\n\"UNBLOCK <clientid> [TIMEOUT|ERROR]\",\n\"    Unblock the specified blocked client.\",\n\"TRACKING (ON|OFF) [REDIRECT <id>] [BCAST] [PREFIX <prefix> [...]]\",\n\"         [OPTIN] [OPTOUT] [NOLOOP]\",\n\"    Control server assisted client side caching.\",\n\"TRACKINGINFO\",\n\"    Report tracking status for the current connection.\",\n\"NO-EVICT (ON|OFF)\",\n\"    Protect current client connection from eviction.\",\n\"NO-TOUCH (ON|OFF)\",\n\"    Will not touch LRU/LFU stats when this mode is on.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"id\") && c->argc == 2) {\n        /* CLIENT ID */\n        addReplyLongLong(c,c->id);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"info\") && c->argc == 2) {\n        /* CLIENT INFO */\n        sds o = catClientInfoString(sdsempty(), c);\n        o = sdscatlen(o,\"\\n\",1);\n        addReplyVerbatim(c,o,sdslen(o),\"txt\");\n        sdsfree(o);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"list\")) {\n        /* CLIENT LIST */\n        int type = -1;\n        sds o = NULL;\n        if (c->argc == 4 && !strcasecmp(c->argv[2]->ptr,\"type\")) {\n            type = getClientTypeByName(c->argv[3]->ptr);\n            if (type == -1) {\n                addReplyErrorFormat(c,\"Unknown client type '%s'\",\n                    (char*) c->argv[3]->ptr);\n                return;\n            }\n        } else if (c->argc > 3 && !strcasecmp(c->argv[2]->ptr,\"id\")) {\n            int j;\n            o = sdsempty();\n            for (j = 3; j < c->argc; j++) {\n                long long cid;\n                if (getLongLongFromObjectOrReply(c, c->argv[j], &cid,\n                            \"Invalid client ID\")) {\n                    sdsfree(o);\n                    return;\n                }\n                client *cl = lookupClientByID(cid);\n                if (cl) {\n              
      o = catClientInfoString(o, cl);\n                    o = sdscatlen(o, \"\\n\", 1);\n                }\n            }\n        } else if (c->argc != 2) {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n\n        if (!o)\n            o = getAllClientsInfoString(type);\n        addReplyVerbatim(c,o,sdslen(o),\"txt\");\n        sdsfree(o);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reply\") && c->argc == 3) {\n        /* CLIENT REPLY ON|OFF|SKIP */\n        if (!strcasecmp(c->argv[2]->ptr,\"on\")) {\n            c->flags &= ~(CLIENT_REPLY_SKIP|CLIENT_REPLY_OFF);\n            addReply(c,shared.ok);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"off\")) {\n            c->flags |= CLIENT_REPLY_OFF;\n        } else if (!strcasecmp(c->argv[2]->ptr,\"skip\")) {\n            if (!(c->flags & CLIENT_REPLY_OFF))\n                c->flags |= CLIENT_REPLY_SKIP_NEXT;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"no-evict\") && c->argc == 3) {\n        /* CLIENT NO-EVICT ON|OFF */\n        if (!strcasecmp(c->argv[2]->ptr,\"on\")) {\n            c->flags |= CLIENT_NO_EVICT;\n            removeClientFromMemUsageBucket(c, 0);\n            addReply(c,shared.ok);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"off\")) {\n            c->flags &= ~CLIENT_NO_EVICT;\n            updateClientMemUsageAndBucket(c);\n            addReply(c,shared.ok);\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"kill\")) {\n        /* CLIENT KILL <ip:port>\n         * CLIENT KILL <option> [value] ... 
<option> [value] */\n        char *addr = NULL;\n        char *laddr = NULL;\n        user *user = NULL;\n        int type = -1;\n        uint64_t id = 0;\n        long long max_age = 0;\n        int skipme = 1;\n        int killed = 0, close_this_client = 0;\n\n        if (c->argc == 3) {\n            /* Old style syntax: CLIENT KILL <addr> */\n            addr = c->argv[2]->ptr;\n            skipme = 0; /* With the old form, you can kill yourself. */\n        } else if (c->argc > 3) {\n            int i = 2; /* Next option index. */\n\n            /* New style syntax: parse options. */\n            while(i < c->argc) {\n                int moreargs = c->argc > i+1;\n\n                if (!strcasecmp(c->argv[i]->ptr,\"id\") && moreargs) {\n                    long tmp;\n\n                    if (getRangeLongFromObjectOrReply(c, c->argv[i+1], 1, LONG_MAX, &tmp,\n                                                      \"client-id should be greater than 0\") != C_OK)\n                        return;\n                    id = tmp;\n                } else if (!strcasecmp(c->argv[i]->ptr,\"maxage\") && moreargs) {\n                    long long tmp;\n\n                    if (getLongLongFromObjectOrReply(c, c->argv[i+1], &tmp,\n                                                     \"maxage is not an integer or out of range\") != C_OK)\n                        return;\n                    if (tmp <= 0) {\n                        addReplyError(c, \"maxage should be greater than 0\");\n                        return;\n                    }\n\n                    max_age = tmp;\n                } else if (!strcasecmp(c->argv[i]->ptr,\"type\") && moreargs) {\n                    type = getClientTypeByName(c->argv[i+1]->ptr);\n                    if (type == -1) {\n                        addReplyErrorFormat(c,\"Unknown client type '%s'\",\n                            (char*) c->argv[i+1]->ptr);\n                        return;\n                    }\n                } else if 
(!strcasecmp(c->argv[i]->ptr,\"addr\") && moreargs) {\n                    addr = c->argv[i+1]->ptr;\n                } else if (!strcasecmp(c->argv[i]->ptr,\"laddr\") && moreargs) {\n                    laddr = c->argv[i+1]->ptr;\n                } else if (!strcasecmp(c->argv[i]->ptr,\"user\") && moreargs) {\n                    user = ACLGetUserByName(c->argv[i+1]->ptr,\n                                            sdslen(c->argv[i+1]->ptr));\n                    if (user == NULL) {\n                        addReplyErrorFormat(c,\"No such user '%s'\",\n                            (char*) c->argv[i+1]->ptr);\n                        return;\n                    }\n                } else if (!strcasecmp(c->argv[i]->ptr,\"skipme\") && moreargs) {\n                    if (!strcasecmp(c->argv[i+1]->ptr,\"yes\")) {\n                        skipme = 1;\n                    } else if (!strcasecmp(c->argv[i+1]->ptr,\"no\")) {\n                        skipme = 0;\n                    } else {\n                        addReplyErrorObject(c,shared.syntaxerr);\n                        return;\n                    }\n                } else {\n                    addReplyErrorObject(c,shared.syntaxerr);\n                    return;\n                }\n                i += 2;\n            }\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n\n        /* Iterate clients killing all the matching clients. 
*/\n        listRewind(server.clients,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            client *client = listNodeValue(ln);\n            if (addr && strcmp(getClientPeerId(client),addr) != 0) continue;\n            if (laddr && strcmp(getClientSockname(client),laddr) != 0) continue;\n            if (type != -1 && getClientType(client) != type) continue;\n            if (id != 0 && client->id != id) continue;\n            if (user && client->user != user) continue;\n            if (c == client && skipme) continue;\n            if (max_age != 0 && (long long)(commandTimeSnapshot() / 1000 - client->ctime) < max_age) continue;\n\n            /* Kill it. */\n            if (c == client) {\n                close_this_client = 1;\n            } else {\n                freeClient(client);\n            }\n            killed++;\n        }\n\n        /* Reply according to old/new format. */\n        if (c->argc == 3) {\n            if (killed == 0)\n                addReplyError(c,\"No such client\");\n            else\n                addReply(c,shared.ok);\n        } else {\n            addReplyLongLong(c,killed);\n        }\n\n        /* If this client has to be closed, flag it as CLOSE_AFTER_REPLY\n         * only after we queued the reply to its output buffers. 
*/\n        if (close_this_client) c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n    } else if (!strcasecmp(c->argv[1]->ptr,\"unblock\") && (c->argc == 3 ||\n                                                          c->argc == 4))\n    {\n        /* CLIENT UNBLOCK <id> [timeout|error] */\n        long long id;\n        int unblock_error = 0;\n\n        if (c->argc == 4) {\n            if (!strcasecmp(c->argv[3]->ptr,\"timeout\")) {\n                unblock_error = 0;\n            } else if (!strcasecmp(c->argv[3]->ptr,\"error\")) {\n                unblock_error = 1;\n            } else {\n                addReplyError(c,\n                    \"CLIENT UNBLOCK reason should be TIMEOUT or ERROR\");\n                return;\n            }\n        }\n        if (getLongLongFromObjectOrReply(c,c->argv[2],&id,NULL)\n            != C_OK) return;\n        struct client *target = lookupClientByID(id);\n        /* Note that we never try to unblock a client blocked on a module command,\n         * or a client blocked by CLIENT PAUSE or some other blocking type which\n         * doesn't have a timeout callback (even in the case of UNBLOCK ERROR).\n         * The reason is that we assume that if a command doesn't expect to be timed out,\n         * it also doesn't expect to be unblocked by CLIENT UNBLOCK. */\n        if (target && target->flags & CLIENT_BLOCKED && blockedClientMayTimeout(target)) {\n            if (unblock_error)\n                unblockClientOnError(target,\n                    \"-UNBLOCKED client unblocked via CLIENT UNBLOCK\");\n            else\n                unblockClientOnTimeout(target);\n\n            addReply(c,shared.cone);\n        } else {\n            addReply(c,shared.czero);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"setname\") && c->argc == 3) {\n        /* CLIENT SETNAME */\n        if (clientSetNameOrReply(c,c->argv[2]) == C_OK)\n            addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"getname\") && c->argc == 
2) {\n        /* CLIENT GETNAME */\n        if (c->name)\n            addReplyBulk(c,c->name);\n        else\n            addReplyNull(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"unpause\") && c->argc == 2) {\n        /* CLIENT UNPAUSE */\n        unpauseActions(PAUSE_BY_CLIENT_COMMAND);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"pause\") && (c->argc == 3 ||\n                                                        c->argc == 4))\n    {\n        /* CLIENT PAUSE TIMEOUT [WRITE|ALL] */\n        mstime_t end;\n        int isPauseClientAll = 1;\n        if (c->argc == 4) {\n            if (!strcasecmp(c->argv[3]->ptr,\"write\")) {\n                isPauseClientAll = 0;\n            } else if (strcasecmp(c->argv[3]->ptr,\"all\")) {\n                addReplyError(c,\n                    \"CLIENT PAUSE mode must be WRITE or ALL\");\n                return;\n            }\n        }\n\n        if (getTimeoutFromObjectOrReply(c,c->argv[2],&end,\n            UNIT_MILLISECONDS) != C_OK) return;\n        pauseClientsByClient(end, isPauseClientAll);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"tracking\") && c->argc >= 3) {\n        /* CLIENT TRACKING (on|off) [REDIRECT <id>] [BCAST] [PREFIX first]\n         *                          [PREFIX second] [OPTIN] [OPTOUT] [NOLOOP]... */\n        long long redir = 0;\n        uint64_t options = 0;\n        robj **prefix = NULL;\n        size_t numprefix = 0;\n\n        /* Parse the options. 
*/\n        for (int j = 3; j < c->argc; j++) {\n            int moreargs = (c->argc-1) - j;\n\n            if (!strcasecmp(c->argv[j]->ptr,\"redirect\") && moreargs) {\n                j++;\n                if (redir != 0) {\n                    addReplyError(c,\"A client can only redirect to a single \"\n                                    \"other client\");\n                    zfree(prefix);\n                    return;\n                }\n\n                if (getLongLongFromObjectOrReply(c,c->argv[j],&redir,NULL) !=\n                    C_OK)\n                {\n                    zfree(prefix);\n                    return;\n                }\n                /* We will require the client with the specified ID to exist\n                 * right now, even if it is possible that it gets disconnected\n                 * later. Still a valid sanity check. */\n                if (lookupClientByID(redir) == NULL) {\n                    addReplyError(c,\"The client ID you want to redirect to \"\n                                    \"does not exist\");\n                    zfree(prefix);\n                    return;\n                }\n            } else if (!strcasecmp(c->argv[j]->ptr,\"bcast\")) {\n                options |= CLIENT_TRACKING_BCAST;\n            } else if (!strcasecmp(c->argv[j]->ptr,\"optin\")) {\n                options |= CLIENT_TRACKING_OPTIN;\n            } else if (!strcasecmp(c->argv[j]->ptr,\"optout\")) {\n                options |= CLIENT_TRACKING_OPTOUT;\n            } else if (!strcasecmp(c->argv[j]->ptr,\"noloop\")) {\n                options |= CLIENT_TRACKING_NOLOOP;\n            } else if (!strcasecmp(c->argv[j]->ptr,\"prefix\") && moreargs) {\n                j++;\n                prefix = zrealloc(prefix,sizeof(robj*)*(numprefix+1));\n                prefix[numprefix++] = c->argv[j];\n            } else {\n                zfree(prefix);\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n  
      }\n\n        /* Options are ok: enable or disable the tracking for this client. */\n        if (!strcasecmp(c->argv[2]->ptr,\"on\")) {\n            /* Before enabling tracking, make sure options are compatible\n             * among each other and with the current state of the client. */\n            if (!(options & CLIENT_TRACKING_BCAST) && numprefix) {\n                addReplyError(c,\n                    \"PREFIX option requires BCAST mode to be enabled\");\n                zfree(prefix);\n                return;\n            }\n\n            if (c->flags & CLIENT_TRACKING) {\n                int oldbcast = !!(c->flags & CLIENT_TRACKING_BCAST);\n                int newbcast = !!(options & CLIENT_TRACKING_BCAST);\n                if (oldbcast != newbcast) {\n                    addReplyError(c,\n                    \"You can't switch BCAST mode on/off without first disabling \"\n                    \"tracking for this client, and then re-enabling it with \"\n                    \"a different mode.\");\n                    zfree(prefix);\n                    return;\n                }\n            }\n\n            if (options & CLIENT_TRACKING_BCAST &&\n                options & (CLIENT_TRACKING_OPTIN|CLIENT_TRACKING_OPTOUT))\n            {\n                addReplyError(c,\n                \"OPTIN and OPTOUT are not compatible with BCAST\");\n                zfree(prefix);\n                return;\n            }\n\n            if (options & CLIENT_TRACKING_OPTIN && options & CLIENT_TRACKING_OPTOUT)\n            {\n                addReplyError(c,\n                \"You can't specify both OPTIN mode and OPTOUT mode\");\n                zfree(prefix);\n                return;\n            }\n\n            if ((options & CLIENT_TRACKING_OPTIN && c->flags & CLIENT_TRACKING_OPTOUT) ||\n                (options & CLIENT_TRACKING_OPTOUT && c->flags & CLIENT_TRACKING_OPTIN))\n            {\n                addReplyError(c,\n                \"You can't switch OPTIN/OPTOUT 
mode without first disabling \"\n                \"tracking for this client, and then re-enabling it with \"\n                \"a different mode.\");\n                zfree(prefix);\n                return;\n            }\n\n            if (options & CLIENT_TRACKING_BCAST) {\n                if (!checkPrefixCollisionsOrReply(c,prefix,numprefix)) {\n                    zfree(prefix);\n                    return;\n                }\n            }\n\n            enableTracking(c,redir,options,prefix,numprefix);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"off\")) {\n            disableTracking(c);\n        } else {\n            zfree(prefix);\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n        zfree(prefix);\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"caching\") && c->argc >= 3) {\n        if (!(c->flags & CLIENT_TRACKING)) {\n            addReplyError(c,\"CLIENT CACHING can be called only when the \"\n                            \"client is in tracking mode with OPTIN or \"\n                            \"OPTOUT mode enabled\");\n            return;\n        }\n\n        char *opt = c->argv[2]->ptr;\n        if (!strcasecmp(opt,\"yes\")) {\n            if (c->flags & CLIENT_TRACKING_OPTIN) {\n                c->flags |= CLIENT_TRACKING_CACHING;\n            } else {\n                addReplyError(c,\"CLIENT CACHING YES is only valid when tracking is enabled in OPTIN mode.\");\n                return;\n            }\n        } else if (!strcasecmp(opt,\"no\")) {\n            if (c->flags & CLIENT_TRACKING_OPTOUT) {\n                c->flags |= CLIENT_TRACKING_CACHING;\n            } else {\n                addReplyError(c,\"CLIENT CACHING NO is only valid when tracking is enabled in OPTOUT mode.\");\n                return;\n            }\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n\n        /* Common reply for when we succeeded. 
*/\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"getredir\") && c->argc == 2) {\n        /* CLIENT GETREDIR */\n        if (c->flags & CLIENT_TRACKING) {\n            addReplyLongLong(c,c->client_tracking_redirection);\n        } else {\n            addReplyLongLong(c,-1);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"trackinginfo\") && c->argc == 2) {\n        addReplyMapLen(c,3);\n\n        /* Flags */\n        addReplyBulkCString(c,\"flags\");\n        void *arraylen_ptr = addReplyDeferredLen(c);\n        int numflags = 0;\n        addReplyBulkCString(c,c->flags & CLIENT_TRACKING ? \"on\" : \"off\");\n        numflags++;\n        if (c->flags & CLIENT_TRACKING_BCAST) {\n            addReplyBulkCString(c,\"bcast\");\n            numflags++;\n        }\n        if (c->flags & CLIENT_TRACKING_OPTIN) {\n            addReplyBulkCString(c,\"optin\");\n            numflags++;\n            if (c->flags & CLIENT_TRACKING_CACHING) {\n                addReplyBulkCString(c,\"caching-yes\");\n                numflags++;\n            }\n        }\n        if (c->flags & CLIENT_TRACKING_OPTOUT) {\n            addReplyBulkCString(c,\"optout\");\n            numflags++;\n            if (c->flags & CLIENT_TRACKING_CACHING) {\n                addReplyBulkCString(c,\"caching-no\");\n                numflags++;\n            }\n        }\n        if (c->flags & CLIENT_TRACKING_NOLOOP) {\n            addReplyBulkCString(c,\"noloop\");\n            numflags++;\n        }\n        if (c->flags & CLIENT_TRACKING_BROKEN_REDIR) {\n            addReplyBulkCString(c,\"broken_redirect\");\n            numflags++;\n        }\n        setDeferredSetLen(c,arraylen_ptr,numflags);\n\n        /* Redirect */\n        addReplyBulkCString(c,\"redirect\");\n        if (c->flags & CLIENT_TRACKING) {\n            addReplyLongLong(c,c->client_tracking_redirection);\n        } else {\n            addReplyLongLong(c,-1);\n        }\n\n        /* 
Prefixes */\n        addReplyBulkCString(c,\"prefixes\");\n        if (c->client_tracking_prefixes) {\n            addReplyArrayLen(c,raxSize(c->client_tracking_prefixes));\n            raxIterator ri;\n            raxStart(&ri,c->client_tracking_prefixes);\n            raxSeek(&ri,\"^\",NULL,0);\n            while(raxNext(&ri)) {\n                addReplyBulkCBuffer(c,ri.key,ri.key_len);\n            }\n            raxStop(&ri);\n        } else {\n            addReplyArrayLen(c,0);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr, \"no-touch\")) {\n        /* CLIENT NO-TOUCH ON|OFF */\n        if (!strcasecmp(c->argv[2]->ptr,\"on\")) {\n            c->flags |= CLIENT_NO_TOUCH;\n            addReply(c,shared.ok);\n        } else if (!strcasecmp(c->argv[2]->ptr,\"off\")) {\n            c->flags &= ~CLIENT_NO_TOUCH;\n            addReply(c,shared.ok);\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n        }\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\n/* HELLO [<protocol-version> [AUTH <user> <password>] [SETNAME <name>] ] */\nvoid helloCommand(client *c) {\n    long long ver = 0;\n    int next_arg = 1;\n\n    if (c->argc >= 2) {\n        if (getLongLongFromObjectOrReply(c, c->argv[next_arg++], &ver,\n            \"Protocol version is not an integer or out of range\") != C_OK) {\n            return;\n        }\n\n        if (ver < 2 || ver > 3) {\n            addReplyError(c,\"-NOPROTO unsupported protocol version\");\n            return;\n        }\n    }\n\n    robj *username = NULL;\n    robj *password = NULL;\n    robj *clientname = NULL;\n    for (int j = next_arg; j < c->argc; j++) {\n        int moreargs = (c->argc-1) - j;\n        const char *opt = c->argv[j]->ptr;\n        if (!strcasecmp(opt,\"AUTH\") && moreargs >= 2) {\n            redactClientCommandArgument(c, j+1);\n            redactClientCommandArgument(c, j+2);\n            username = c->argv[j+1];\n            password = c->argv[j+2];\n       
     j += 2;\n        } else if (!strcasecmp(opt,\"SETNAME\") && moreargs) {\n            clientname = c->argv[j+1];\n            const char *err = NULL;\n            if (validateClientName(clientname, &err) == C_ERR) {\n                addReplyError(c, err);\n                return;\n            }\n            j++;\n        } else {\n            addReplyErrorFormat(c,\"Syntax error in HELLO option '%s'\",opt);\n            return;\n        }\n    }\n\n    if (username && password) {\n        robj *err = NULL;\n        int auth_result = ACLAuthenticateUser(c, username, password, &err);\n        if (auth_result == AUTH_ERR) {\n            addAuthErrReply(c, err);\n        }\n        if (err) decrRefCount(err);\n        /* In case of auth errors, return early since we already replied with an ERR.\n         * In case of blocking module auth, we reply to the client/setname later upon unblocking. */\n        if (auth_result == AUTH_ERR || auth_result == AUTH_BLOCKED) {\n            return;\n        }\n    }\n\n    /* At this point we need to be authenticated to continue. */\n    if (!c->authenticated) {\n        addReplyError(c,\"-NOAUTH HELLO must be called with the client already \"\n                        \"authenticated, otherwise the HELLO <proto> AUTH <user> <pass> \"\n                        \"option can be used to authenticate the client and \"\n                        \"select the RESP protocol version at the same time\");\n        return;\n    }\n\n    /* Now that we're authenticated, set the client name. */\n    if (clientname) clientSetName(c, clientname, NULL);\n\n    /* Let's switch to the specified RESP mode. 
*/\n    if (ver) c->resp = ver;\n    addReplyMapLen(c,6 + !server.sentinel_mode);\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"server\");\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"redis\");\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"version\");\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,REDIS_VERSION);\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"proto\");\n\n    addReplyLongLong(c,c->resp);\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"id\");\n    addReplyLongLong(c,c->id);\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"mode\");\n    if (server.sentinel_mode) addReplyBulkCString(c,\"sentinel\");\n    else if (server.cluster_enabled) addReplyBulkCString(c,\"cluster\");\n    else addReplyBulkCString(c,\"standalone\");\n\n    if (!server.sentinel_mode) {\n        ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"role\");\n        addReplyBulkCString(c,server.masterhost ? \"replica\" : \"master\");\n    }\n\n    ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c,\"modules\");\n    addReplyLoadedModules(c);\n}\n\n/* This callback is bound to POST and \"Host:\" command names. Those are not\n * really commands, but are used in security attacks in order to talk to\n * Redis instances via HTTP, with a technique called \"cross protocol scripting\"\n * which exploits the fact that services like Redis will discard invalid\n * HTTP headers and will process what follows.\n *\n * As a protection against this attack, Redis will terminate the connection\n * when a POST or \"Host:\" header is seen, and will log the event from\n * time to time (to avoid creating a DOS as a result of too many logs). */\nvoid securityWarningCommand(client *c) {\n    static time_t logged_time = 0;\n    time_t now = time(NULL);\n\n    if (llabs(now-logged_time) > 60) {\n        char ip[NET_IP_STR_LEN];\n        int port;\n        if (connAddrPeerName(c->conn, ip, sizeof(ip), &port) == -1) {\n            serverLog(LL_WARNING,\"Possible SECURITY ATTACK detected. 
It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection aborted.\");\n        } else {\n            serverLog(LL_WARNING,\"Possible SECURITY ATTACK detected. It looks like somebody is sending POST or Host: commands to Redis. This is likely due to an attacker attempting to use Cross Protocol Scripting to compromise your Redis instance. Connection from %s:%d aborted.\", ip, port);\n        }\n        logged_time = now;\n    }\n    freeClientAsync(c);\n}\n\n/* Keep track of the original command arguments so that we can generate\n * an accurate slowlog entry after the command has been executed. */\nstatic void retainOriginalCommandVector(client *c) {\n    /* We already rewrote this command, so don't rewrite it again */\n    if (c->original_argv) return;\n    c->original_argc = c->argc;\n    c->original_argv = zmalloc(sizeof(robj*)*(c->argc));\n    for (int j = 0; j < c->argc; j++) {\n        c->original_argv[j] = c->argv[j];\n        incrRefCount(c->argv[j]);\n    }\n}\n\n/* Redact a given argument to prevent it from being shown\n * in the slowlog. This information is stored in the\n * original_argv array. */\nvoid redactClientCommandArgument(client *c, int argc) {\n    retainOriginalCommandVector(c);\n    if (c->original_argv[argc] == shared.redacted) {\n        /* This argument has already been redacted */\n        return;\n    }\n    decrRefCount(c->original_argv[argc]);\n    c->original_argv[argc] = shared.redacted;\n}\n\n/* Rewrite the command vector of the client. All the new objects ref count\n * is incremented. The old command vector is freed, and the old objects\n * ref count is decremented. */\nvoid rewriteClientCommandVector(client *c, int argc, ...) 
{\n    va_list ap;\n    int j;\n    robj **argv; /* The new argument vector */\n\n    argv = zmalloc(sizeof(robj*)*argc);\n    va_start(ap,argc);\n    for (j = 0; j < argc; j++) {\n        robj *a;\n\n        a = va_arg(ap, robj*);\n        argv[j] = a;\n        incrRefCount(a);\n    }\n    replaceClientCommandVector(c, argc, argv);\n    va_end(ap);\n}\n\n/* Completely replace the client command vector with the provided one. */\nvoid replaceClientCommandVector(client *c, int argc, robj **argv) {\n    int j;\n    retainOriginalCommandVector(c);\n\n    /* We don't need to just fix the client argv, we also need to fix the pending command (same argv),\n     * But sometimes we reach here not from a real client, but from a Lua 'scriptRunCtx'. This flow bypasses the\n     * pending-command system entirely and uses c->argv directly. In this case there's no pending commands\n     * to update, so we skip that code. */\n    pendingCommand *pcmd = NULL;\n    int is_mstate = 0;\n    if (c->mstate.executing_cmd < 0) {\n        is_mstate = 0;\n        if (c->pending_cmds.ready_len > 0) {\n            pcmd = c->pending_cmds.head;\n            serverAssert(!(pcmd->flags & PENDING_CMD_FLAG_INCOMPLETE));\n        }\n    } else {\n        is_mstate = 1;\n        serverAssert(c->mstate.executing_cmd < c->mstate.count);\n        pcmd = c->mstate.commands[c->mstate.executing_cmd];\n    }\n\n    if (pcmd) {\n        serverAssert(pcmd->argv == c->argv);\n        pcmd->argv = argv;\n        pcmd->argc = argc;\n        pcmd->argv_len = argc;\n    }\n    freeClientArgv(c);\n    c->argv = argv;\n    c->argc = c->argv_len = argc;\n\n    if (!is_mstate) {  /* multi-state does not track all_argv_len_sum, see code in queueMultiCommand */\n        size_t new_argv_len_sum = 0;\n        for (j = 0; j < c->argc; j++)\n            if (c->argv[j])\n                new_argv_len_sum += getStringObjectLen(c->argv[j]);\n\n        if (!pcmd) {\n            c->all_argv_len_sum = new_argv_len_sum;\n        } 
else {\n            c->all_argv_len_sum -= pcmd->argv_len_sum;\n            pcmd->argv_len_sum = new_argv_len_sum;\n            c->all_argv_len_sum += pcmd->argv_len_sum;\n        }\n    }\n    c->cmd = lookupCommandOrOriginal(c->argv,c->argc);\n    if (pcmd)\n        pcmd->cmd = c->cmd;\n    serverAssertWithInfo(c,NULL,c->cmd != NULL);\n}\n\n/* Rewrite a single item in the command vector.\n * The new val ref count is incremented, and the old one decremented.\n *\n * It is possible to specify an argument over the current size of the\n * argument vector: in this case the array of objects gets reallocated\n * and c->argc set to the max value. However it's up to the caller to:\n *\n * 1. Make sure there are no \"holes\" and all the arguments are set.\n * 2. If the original argument vector was longer than the one we\n *    want to end with, set c->argc and free the no longer used\n *    objects on c->argv.\n * 3. To remove the argument at the i'th index, pass NULL as the new\n *    value.\n */\nvoid rewriteClientCommandArgument(client *c, int i, robj *newval) {\n    robj *oldval;\n    retainOriginalCommandVector(c);\n\n    /* We don't just need to fix the client argv, we also need to fix the pending command (same argv).\n     * But sometimes we reach here not from a real client, but from a Lua 'scriptRunCtx'. This flow bypasses the\n     * pending-command system entirely and uses c->argv directly. In this case there are no pending commands\n     * to update, so we skip that code. */\n    pendingCommand *pcmd = c->pending_cmds.head ? 
c->pending_cmds.head: NULL;\n    int update_pcmd = pcmd && pcmd->argv == c->argv;\n\n    /* We need to handle both extending beyond argc (just update it and\n     * initialize the new element) and extending beyond argv_len (a realloc\n     * is needed). */\n    if (i >= c->argc) {\n        if (i >= c->argv_len) {\n            c->argv = zrealloc(c->argv,sizeof(robj*)*(i+1));\n            c->argv_len = i+1;\n        }\n        c->argc = i+1;\n        c->argv[i] = NULL;\n    }\n    oldval = c->argv[i];\n    if (oldval) c->all_argv_len_sum -= getStringObjectLen(oldval);\n\n    if (newval) {\n        c->argv[i] = newval;\n        incrRefCount(newval);\n        c->all_argv_len_sum += getStringObjectLen(newval);\n    } else {\n        /* Move the remaining arguments one step left. */\n        for (int j = i+1; j < c->argc; j++) {\n            c->argv[j-1] = c->argv[j];\n        }\n        c->argv[--c->argc] = NULL;\n    }\n    if (oldval) decrRefCount(oldval);\n\n    if (update_pcmd) {\n        pcmd->argv = c->argv;\n        pcmd->argc = c->argc;\n        pcmd->argv_len = c->argv_len;\n        if (oldval) pcmd->argv_len_sum -= getStringObjectLen(oldval);\n        if (newval) pcmd->argv_len_sum += getStringObjectLen(newval);\n    }\n\n    /* If this is the command name, make sure to fix c->cmd. */\n    if (i == 0) {\n        c->cmd = lookupCommandOrOriginal(c->argv,c->argc);\n        serverAssertWithInfo(c,NULL,c->cmd != NULL);\n        if (update_pcmd)\n            pcmd->cmd = c->cmd;\n    }\n}\n\n/* This function returns the number of bytes that Redis is\n * using to store the reply still not read by the client.\n *\n * Note: this function is very fast, so it can be called as many times as\n * the caller wishes. The main usage of this function currently is\n * enforcing the client output length limits. 
*/\nsize_t getClientOutputBufferMemoryUsage(client *c) {\n    if (unlikely(clientTypeIsSlave(c))) {\n        size_t repl_buf_size = 0;\n        size_t repl_node_num = 0;\n        size_t repl_node_size = sizeof(listNode) + sizeof(replBufBlock);\n        if (c->ref_repl_buf_node) {\n            replBufBlock *last = listNodeValue(listLast(server.repl_buffer_blocks));\n            replBufBlock *cur = listNodeValue(c->ref_repl_buf_node);\n            repl_buf_size = last->repl_offset + last->size - cur->repl_offset;\n            repl_node_num = last->id - cur->id + 1;\n        }\n        return repl_buf_size + (repl_node_size*repl_node_num);\n    } else {\n        size_t list_item_size = sizeof(listNode) + sizeof(clientReplyBlock);\n        return c->reply_bytes + (list_item_size*listLength(c->reply));\n    }\n}\n\nsize_t getNormalClientPendingReplyBytes(client *c) {\n    serverAssert(!clientTypeIsSlave(c));\n    if (listLength(c->reply) == 0) return c->bufpos;\n\n    clientReplyBlock *block = listNodeValue(listLast(c->reply));\n    return (c->reply_bytes - block->size + block->used) + c->bufpos;\n}\n\n/* Returns the client's total memory usage.\n * Optionally, if output_buffer_mem_usage is not NULL, it fills it with\n * the client output buffer memory usage portion of the total. */\nsize_t getClientMemoryUsage(client *c, size_t *output_buffer_mem_usage) {\n    size_t mem = getClientOutputBufferMemoryUsage(c);\n\n    if (output_buffer_mem_usage != NULL)\n        *output_buffer_mem_usage = mem;\n    mem += c->querybuf ? sdsZmallocSize(c->querybuf) : 0;\n    mem += zmalloc_size(c);\n    mem += c->buf_usable_size;\n    /* For efficiency (less work keeping track of the argv memory), this doesn't include the memory\n     * actually used, i.e. unused sds space and internal fragmentation, just the string length. But this is\n     * enough to spot problematic clients. 
*/\n    mem += c->all_argv_len_sum + sizeof(robj*)*c->argc;\n    mem += multiStateMemOverhead(c);\n\n    /* Add memory overhead of pubsub channels and patterns. Note: this is just the overhead of the robj pointers\n     * to the strings themselves because they aren't stored per client. */\n    mem += pubsubMemOverhead(c);\n\n    /* Add memory overhead of the tracking prefixes. This is an underestimation so we don't need to traverse the entire rax. */\n    if (c->client_tracking_prefixes)\n        mem += c->client_tracking_prefixes->numnodes * (sizeof(raxNode) + sizeof(raxNode*));\n\n    return mem;\n}\n\n/* Get the class of a client, used in order to enforce limits to different\n * classes of clients.\n *\n * The function will return one of the following:\n * CLIENT_TYPE_NORMAL -> Normal client, including MONITOR\n * CLIENT_TYPE_SLAVE  -> Slave\n * CLIENT_TYPE_PUBSUB -> Client subscribed to Pub/Sub channels\n * CLIENT_TYPE_MASTER -> The client representing our replication master.\n */\nint getClientType(client *c) {\n    if (c->flags & CLIENT_MASTER) return CLIENT_TYPE_MASTER;\n    /* Even though MONITOR clients are marked as replicas, we\n     * want to expose them as normal clients. */\n    if ((c->flags & CLIENT_SLAVE) && !(c->flags & CLIENT_MONITOR))\n        return CLIENT_TYPE_SLAVE;\n    if (c->flags & CLIENT_PUBSUB) return CLIENT_TYPE_PUBSUB;\n    return CLIENT_TYPE_NORMAL;\n}\n\nstatic inline int clientTypeIsSlave(client *c) {\n    /* Even though MONITOR clients and ASM destination RDB/main channels are\n     * marked as replicas, we want to expose them as normal clients. 
*/\n    if (unlikely((c->flags & CLIENT_SLAVE) &&\n        !(c->flags & (CLIENT_MONITOR | CLIENT_ASM_MIGRATING))))\n    {\n        return 1;\n    }\n    return 0;\n}\n\nint getClientTypeByName(char *name) {\n    if (!strcasecmp(name,\"normal\")) return CLIENT_TYPE_NORMAL;\n    else if (!strcasecmp(name,\"slave\")) return CLIENT_TYPE_SLAVE;\n    else if (!strcasecmp(name,\"replica\")) return CLIENT_TYPE_SLAVE;\n    else if (!strcasecmp(name,\"pubsub\")) return CLIENT_TYPE_PUBSUB;\n    else if (!strcasecmp(name,\"master\")) return CLIENT_TYPE_MASTER;\n    else return -1;\n}\n\nchar *getClientTypeName(int class) {\n    switch(class) {\n    case CLIENT_TYPE_NORMAL: return \"normal\";\n    case CLIENT_TYPE_SLAVE:  return \"slave\";\n    case CLIENT_TYPE_PUBSUB: return \"pubsub\";\n    case CLIENT_TYPE_MASTER: return \"master\";\n    default:                 return NULL;\n    }\n}\n\n/* The function checks whether the client reached the output buffer soft or\n * hard limit, and as a side effect also updates the state needed to check\n * the soft limit.\n *\n * Return value: non-zero if the client reached the soft or the hard limit.\n *               Otherwise zero is returned. */\nint checkClientOutputBufferLimits(client *c) {\n    int soft = 0, hard = 0, class;\n    unsigned long used_mem = getClientOutputBufferMemoryUsage(c);\n\n    /* For unauthenticated clients the output buffer is limited to prevent\n     * them from abusing it by not reading the replies. */\n    if (used_mem > 1024 && authRequired(c))\n        return 1;\n\n    class = getClientType(c);\n    /* For the purpose of output buffer limiting, masters are handled\n     * like normal clients. 
*/\n    if (class == CLIENT_TYPE_MASTER) class = CLIENT_TYPE_NORMAL;\n\n    /* Note that it doesn't make sense to set the replica clients' output buffer\n     * limit lower than the repl-backlog-size config (partial sync would succeed\n     * and then the replica would get disconnected).\n     * Such a configuration is ignored (the size of repl-backlog-size will be used).\n     * This doesn't have memory consumption implications since the replica client\n     * will share the backlog buffers memory. */\n    size_t hard_limit_bytes = server.client_obuf_limits[class].hard_limit_bytes;\n    if (class == CLIENT_TYPE_SLAVE && hard_limit_bytes &&\n        (long long)hard_limit_bytes < server.repl_backlog_size)\n        hard_limit_bytes = server.repl_backlog_size;\n    if (server.client_obuf_limits[class].hard_limit_bytes &&\n        used_mem >= hard_limit_bytes)\n        hard = 1;\n    if (server.client_obuf_limits[class].soft_limit_bytes &&\n        used_mem >= server.client_obuf_limits[class].soft_limit_bytes)\n        soft = 1;\n\n    /* We need to check if the soft limit is reached continuously for the\n     * specified number of seconds. */\n    if (soft) {\n        if (c->obuf_soft_limit_reached_time == 0) {\n            c->obuf_soft_limit_reached_time = server.unixtime;\n            soft = 0; /* First time we see the soft limit reached */\n        } else {\n            time_t elapsed = server.unixtime - c->obuf_soft_limit_reached_time;\n\n            if (elapsed <=\n                server.client_obuf_limits[class].soft_limit_seconds) {\n                soft = 0; /* The client has still not reached the number of\n                             seconds required for the soft limit to be\n                             considered reached. */\n            }\n        }\n    } else {\n        c->obuf_soft_limit_reached_time = 0;\n    }\n    return soft || hard;\n}\n\n/* Asynchronously close a client if soft or hard limit is reached on the\n * output buffer size. 
The caller can check if the client will be closed\n * checking if the client CLIENT_CLOSE_ASAP flag is set.\n *\n * Note: we need to close the client asynchronously because this function is\n * called from contexts where the client can't be freed safely, i.e. from the\n * lower level functions pushing data inside the client output buffers.\n * When `async` is set to 0, we close the client immediately, this is\n * useful when called from cron.\n *\n * Returns 1 if client was (flagged) closed. */\nint closeClientOnOutputBufferLimitReached(client *c, int async) {\n    if (!c->conn) return 0; /* It is unsafe to free fake clients. */\n    serverAssert(c->reply_bytes < SIZE_MAX-(1024*64));\n    /* Note that c->reply_bytes is irrelevant for replica clients\n     * (they use the global repl buffers). */\n    if ((c->reply_bytes == 0 && !clientTypeIsSlave(c)) ||\n        c->flags & CLIENT_CLOSE_ASAP) return 0;\n    if (checkClientOutputBufferLimits(c)) {\n        sds client = catClientInfoString(sdsempty(),c);\n\n        if (async) {\n            freeClientAsync(c);\n            serverLog(LL_WARNING,\n                      \"Client %s scheduled to be closed ASAP for overcoming of output buffer limits.\",\n                      client);\n        } else {\n            freeClient(c);\n            serverLog(LL_WARNING,\n                      \"Client %s closed for overcoming of output buffer limits.\",\n                      client);\n        }\n        sdsfree(client);\n        server.stat_client_outbuf_limit_disconnections++;\n        return  1;\n    }\n    return 0;\n}\n\n/* Helper function used by performEvictions() in order to flush slaves\n * output buffers without returning control to the event loop.\n * This is also called by SHUTDOWN for a best-effort attempt to send\n * slaves the latest writes. 
*/\nvoid flushSlavesOutputBuffers(void) {\n    listIter li;\n    listNode *ln;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = listNodeValue(ln);\n\n        /* Fetch the replica clients that are currently running in an IO thread. */\n        if (slave->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n            fetchClientFromIOThread(slave);\n            /* If the slave doesn't have any pending replies there is\n             * nothing to do anyway. */\n            if (!clientHasPendingReplies(slave)) continue;\n            putClientInPendingWriteQueue(slave);\n        }\n\n        int can_receive_writes = connHasWriteHandler(slave->conn) ||\n                                 (slave->flags & CLIENT_PENDING_WRITE);\n\n        /* We don't want to send the pending data to the replica in a few\n         * cases:\n         *\n         * 1. For some reason neither the write handler is installed\n         *    nor the client is flagged as having pending writes: for some\n         *    reason this replica may not be set to receive data. This is\n         *    just for the sake of defensive programming.\n         *\n         * 2. The put_online_on_ack flag is true. To know why we don't want\n         *    to send data to the replica in this case, please grep for\n         *    this flag.\n         *\n         * 3. Obviously if the slave is not ONLINE.\n         */\n        if ((slave->replstate == SLAVE_STATE_ONLINE || slave->replstate == SLAVE_STATE_SEND_BULK_AND_STREAM) &&\n            !(slave->flags & CLIENT_CLOSE_ASAP) &&\n            can_receive_writes &&\n            !slave->repl_start_cmd_stream_on_ack &&\n            clientHasPendingReplies(slave))\n        {\n            writeToClient(slave,0);\n        }\n    }\n}\n\n/* Compute the currently paused actions and their end time, aggregated\n * over all pause purposes. 
*/\nvoid updatePausedActions(void) {\n    uint32_t prev_paused_actions = server.paused_actions;\n    server.paused_actions = 0;\n\n    for (int i = 0; i < NUM_PAUSE_PURPOSES; i++) {\n        pause_event *p = &(server.client_pause_per_purpose[i]);\n        if (p->end > server.mstime)\n            server.paused_actions |= p->paused_actions;\n        else {\n            p->paused_actions = 0;\n            p->end = 0;\n        }\n    }\n\n    /* If the pause type is less restrictive than before, we unblock all clients\n     * so they are reprocessed (may get re-paused). */\n    uint32_t mask_cli = (PAUSE_ACTION_CLIENT_WRITE|PAUSE_ACTION_CLIENT_ALL);\n    if ((server.paused_actions & mask_cli) < (prev_paused_actions & mask_cli)) {\n        unblockPostponedClients();\n    }\n}\n\n/* Unblock all postponed clients (i.e. ones that were blocked with\n * BLOCKED_POSTPONE, possibly in processCommand). This means they'll get\n * re-processed in beforeSleep, and may get paused again if needed. */\nvoid unblockPostponedClients(void) {\n    listNode *ln;\n    listIter li;\n    listRewind(server.postponed_clients, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *c = listNodeValue(ln);\n        unblockClient(c, 1);\n    }\n}\n\n/* Set pause-client end-time and restricted action. If already paused, then:\n * 1. Keep the higher end-time value between the configured one and the new one\n * 2. 
Keep the most restrictive action between the configured one and the new one */\nstatic void pauseClientsByClient(mstime_t endTime, int isPauseClientAll) {\n    uint32_t actions;\n    pause_event *p = &server.client_pause_per_purpose[PAUSE_BY_CLIENT_COMMAND];\n\n    if (isPauseClientAll)\n        actions = PAUSE_ACTIONS_CLIENT_ALL_SET;\n    else {\n        actions = PAUSE_ACTIONS_CLIENT_WRITE_SET;\n        /* If the most restrictive client pause is currently configured, keep it. */\n        if (p->paused_actions & PAUSE_ACTION_CLIENT_ALL)\n            actions = PAUSE_ACTIONS_CLIENT_ALL_SET;\n    }\n\n    /* Cancel all ASM tasks when starting a client pause. */\n    clusterAsmCancel(NULL, \"client pause requested\");\n\n    pauseActions(PAUSE_BY_CLIENT_COMMAND, endTime, actions);\n}\n\n/* Pause actions up to the specified unixtime (in ms) for a given type of\n * commands.\n *\n * A main use case of this function is to allow pausing replication traffic\n * so that a failover without data loss can occur. Replicas will continue to\n * receive traffic to facilitate this functionality.\n * \n * This function is also internally used by Redis Cluster for the manual\n * failover procedure implemented by CLUSTER FAILOVER.\n *\n * The function always succeeds, even if there is already a pause in progress.\n * The new paused_actions of a given 'purpose' will override the old ones, and\n * the end time will be updated if the new end time is bigger than the currently\n * configured one. */\nvoid pauseActions(pause_purpose purpose, mstime_t end, uint32_t actions) {\n    /* Manage pause type and end time per pause purpose. */\n    server.client_pause_per_purpose[purpose].paused_actions = actions;\n\n    /* If the currently configured end time is bigger than the new one, keep it. */\n    if (server.client_pause_per_purpose[purpose].end < end)\n        server.client_pause_per_purpose[purpose].end = end;\n\n    updatePausedActions();\n\n    /* We allow write commands that were queued\n     * up before and after to execute. 
We need\n     * to track this state so that we don't assert\n     * in propagateNow(). */\n    if (server.in_exec) {\n        server.client_pause_in_transaction = 1;\n    }\n\n    /* Assert that there is no import task in progress when we are pausing,\n     * otherwise we break the promise that no writes are performed, possibly\n     * causing data loss during a failover. */\n    if (isPausedActions(PAUSE_ACTION_CLIENT_ALL) ||\n        isPausedActions(PAUSE_ACTION_CLIENT_WRITE))\n        serverAssert(!asmImportInProgress());\n}\n\n/* Unpause actions and queue them for reprocessing. */\nvoid unpauseActions(pause_purpose purpose) {\n    server.client_pause_per_purpose[purpose].end = 0;\n    server.client_pause_per_purpose[purpose].paused_actions = 0;\n    updatePausedActions();\n}\n\n/* Returns bitmask of paused actions */\nuint32_t isPausedActions(uint32_t actions_bitmask) {\n    return (server.paused_actions & actions_bitmask);\n}\n\n/* Returns bitmask of paused actions, after refreshing the pause state */\nuint32_t isPausedActionsWithUpdate(uint32_t actions_bitmask) {\n    if (!(server.paused_actions & actions_bitmask)) return 0;\n    updatePausedActions();\n    return (server.paused_actions & actions_bitmask);\n}\n\n/* This function is called by Redis in order to process a few events from\n * time to time while blocked in some non-interruptible operation.\n * This allows replying to clients with the -LOADING error while loading the\n * data set at startup or after a full resynchronization with the master,\n * and so forth.\n *\n * It calls the event loop in order to process a few events. Specifically we\n * try to call the event loop 4 times as long as we receive an acknowledgment\n * that some event was processed, in order to go forward with the accept,\n * read, write, close sequence needed to serve a client. */\nvoid processEventsWhileBlocked(void) {\n    int iterations = 4; /* See the function top-comment. 
*/\n\n    /* Update our cached time since it is used to create and update the last\n     * interaction time with clients and for other important things. */\n    updateCachedTime(0);\n\n    /* For the few commands that are allowed during busy scripts, we rather\n     * provide a fresher time than the one from when the script started (they\n     * still won't get it from the call due to execution_nesting). For commands\n     * during loading this doesn't matter. */\n    mstime_t prev_cmd_time_snapshot = server.cmd_time_snapshot;\n    server.cmd_time_snapshot = server.mstime;\n\n    /* Note: when we are processing events while blocked (for instance during\n     * busy Lua scripts), we set a global flag. When this flag is set, we\n     * avoid handling the read part of clients using threaded I/O.\n     * See https://github.com/redis/redis/issues/6988 for more info.\n     * Note that there could be cases of nested calls to this function,\n     * specifically on a busy script during async_loading rdb, and scripts\n     * that came from AOF. */\n    ProcessingEventsWhileBlocked++;\n    while (iterations--) {\n        long long startval = server.events_processed_while_blocked;\n        long long ae_events = aeProcessEvents(server.el,\n            AE_FILE_EVENTS|AE_DONT_WAIT|\n            AE_CALL_BEFORE_SLEEP|AE_CALL_AFTER_SLEEP);\n        /* Note that server.events_processed_while_blocked will also get\n         * incremented by callbacks called by the event loop handlers. */\n        server.events_processed_while_blocked += ae_events;\n        long long events = server.events_processed_while_blocked - startval;\n        if (!events) break;\n    }\n\n    whileBlockedCron();\n\n    ProcessingEventsWhileBlocked--;\n    serverAssert(ProcessingEventsWhileBlocked >= 0);\n\n    server.cmd_time_snapshot = prev_cmd_time_snapshot;\n}\n\n/* Returns the actual client eviction limit based on current configuration or\n * 0 if no limit. 
*/\nsize_t getClientEvictionLimit(void) {\n    size_t maxmemory_clients_actual = SIZE_MAX;\n\n    /* Handle percentage of maxmemory*/\n    if (server.maxmemory_clients < 0 && server.maxmemory > 0) {\n        unsigned long long maxmemory_clients_bytes = (unsigned long long)((double)server.maxmemory * -(double) server.maxmemory_clients / 100);\n        if (maxmemory_clients_bytes <= SIZE_MAX)\n            maxmemory_clients_actual = maxmemory_clients_bytes;\n    }\n    else if (server.maxmemory_clients > 0)\n        maxmemory_clients_actual = server.maxmemory_clients;\n    else\n        return 0;\n\n    /* Don't allow a too small maxmemory-clients to avoid cases where we can't communicate\n     * at all with the server because of bad configuration */\n    if (maxmemory_clients_actual < 1024*128)\n        maxmemory_clients_actual = 1024*128;\n\n    return maxmemory_clients_actual;\n}\n\nvoid evictClients(void) {\n    if (!server.client_mem_usage_buckets)\n        return;\n    /* Start eviction from topmost bucket (largest clients) */\n    int curr_bucket = CLIENT_MEM_USAGE_BUCKETS-1;\n    listIter bucket_iter;\n    listRewind(server.client_mem_usage_buckets[curr_bucket].clients, &bucket_iter);\n    size_t client_eviction_limit = getClientEvictionLimit();\n    if (client_eviction_limit == 0)\n        return;\n    while (server.stat_clients_type_memory[CLIENT_TYPE_NORMAL] +\n           server.stat_clients_type_memory[CLIENT_TYPE_PUBSUB] >= client_eviction_limit) {\n        listNode *ln = listNext(&bucket_iter);\n        if (ln) {\n            client *c = ln->value;\n            size_t last_memory = c->last_memory_usage;\n            int tid = c->running_tid;\n            if (tid != IOTHREAD_MAIN_THREAD_ID) {\n                pauseIOThread(tid);\n                /* We need to update the client memory usage and bucket if the client\n                 * is running in IO thread. 
This is because the client memory usage\n                 * and bucket are updated only in the main thread (e.g. while\n                 * processing a command or in clientsCron), so they may be stale.\n                 * To avoid incorrectly evicting clients, we update them again\n                 * before evicting: if the memory used by the client did not\n                 * decrease or its memory usage bucket did not change, we evict\n                 * it, otherwise we don't. */\n                updateClientMemUsageAndBucket(c);\n            }\n            if (c->last_memory_usage >= last_memory ||\n                c->mem_usage_bucket == &server.client_mem_usage_buckets[curr_bucket])\n            {\n                sds ci = catClientInfoString(sdsempty(),c);\n                serverLog(LL_NOTICE, \"Evicting client: %s\", ci);\n                freeClient(c);\n                sdsfree(ci);\n                server.stat_evictedclients++;\n            }\n            if (tid != IOTHREAD_MAIN_THREAD_ID) {\n                resumeIOThread(tid);\n                /* The 'next' of 'bucket_iter' may have changed after updating the\n                 * client memory usage and freeing the client, so let's reset\n                 * 'bucket_iter'. */\n                listRewind(server.client_mem_usage_buckets[curr_bucket].clients, &bucket_iter);\n            }\n        } else {\n            curr_bucket--;\n            if (curr_bucket < 0) {\n                serverLog(LL_WARNING, \"Over client maxmemory after evicting all evictable clients\");\n                break;\n            }\n            listRewind(server.client_mem_usage_buckets[curr_bucket].clients, &bucket_iter);\n        }\n    }\n}\n\n/* Acquire a pending command from the shared pool or allocate a new one.\n * Uses the shared pool when available (only when IO threads are inactive),\n * otherwise allocates a new pending command structure. 
*/\nstatic pendingCommand *acquirePendingCommand(void) {\n    /* Ensure pool is empty when IO threads are active to avoid race conditions */\n    serverAssert(server.io_threads_active == 0 || server.cmd_pool.size == 0);\n\n    pendingCommand *pcmd = NULL;\n    if (server.cmd_pool.size > 0) {\n        /* Shared pool is available. */\n        pcmd = server.cmd_pool.pool[--server.cmd_pool.size];\n        server.cmd_pool.pool[server.cmd_pool.size] = NULL;\n\n        /* Track minimum pool size for utilization calculation */\n        if (server.cmd_pool.size < server.cmd_pool.min_size)\n            server.cmd_pool.min_size = server.cmd_pool.size;\n    } else {\n        /* Shared pool is empty, allocate new pending command. */\n        pcmd = zmalloc(sizeof(pendingCommand));\n        initPendingCommand(pcmd);\n    }\n    return pcmd;\n}\n\n/* Try to expand the pending command pool capacity.\n * Returns 1 if expansion succeeded or wasn't needed, 0 if expansion failed. */\nstatic int tryExpandPendingCommandPool(void) {\n    /* Check if expansion is needed */\n    if (server.cmd_pool.size < server.cmd_pool.capacity) {\n        return 1; /* No expansion needed */\n    }\n    \n    /* Check if we can expand further */\n    if (server.cmd_pool.capacity >= PENDING_COMMAND_POOL_MAX_SIZE) {\n        return 0; /* Already at maximum capacity */\n    }\n    \n    /* Expand the pending command pool capacity by doubling it, up to the maximum size */\n    int new_capacity = server.cmd_pool.capacity * 2;\n    if (new_capacity > PENDING_COMMAND_POOL_MAX_SIZE)\n        new_capacity = PENDING_COMMAND_POOL_MAX_SIZE;\n\n    server.cmd_pool.pool = zrealloc(server.cmd_pool.pool, sizeof(pendingCommand*) * new_capacity);\n    server.cmd_pool.capacity = new_capacity;\n    return 1; /* Expansion succeeded */\n}\n\n/* Reclaim a pending command by adding it to the shared pool for reuse or freeing it.\n * The shared pool is only used when IO threads are inactive to avoid race conditions\n * between 
multiple clients. Additionally, pool reuse provides minimal benefit in\n * multi-threaded scenarios, so we only use it in single-threaded mode. */\nstatic void reclaimPendingCommand(client *c, pendingCommand *pcmd) {\n    if (!server.io_threads_active) {\n        /* Try to add to shared pool for reuse if argv isn't too large */\n        if (likely(pcmd->argv_len < 64)) {\n            /* Check if pool needs expansion before attempting to add */\n            if (!tryExpandPendingCommandPool()) {\n                /* Pool is at maximum capacity, can't expand further */\n                goto free_command;\n            }\n\n            /* Clean up command resources before adding to pool */\n            for (int j = 0; j < pcmd->argc; j++)\n                decrRefCount(pcmd->argv[j]);\n\n            getKeysFreeResult(&pcmd->keys_result);\n\n            if (c) {\n                serverAssert(c->all_argv_len_sum >= pcmd->argv_len_sum); /* assert this doesn't try to go negative */\n                c->all_argv_len_sum -= pcmd->argv_len_sum;\n                pcmd->argv_len_sum = 0;\n            }\n\n            /* Reset the pending command while preserving the argv array for shared pool reuse */\n            robj **argv = pcmd->argv;\n            int argv_len = pcmd->argv_len;\n            memset(pcmd, 0, sizeof(pendingCommand));\n            pcmd->argv = argv;\n            pcmd->argv_len = argv_len;\n            pcmd->slot = INVALID_CLUSTER_SLOT;\n\n            server.cmd_pool.pool[server.cmd_pool.size++] = pcmd;\n            return; /* Successfully added to shared pool for reuse */\n        }\n    } else {\n        /* IO threads are active, handle thread-specific cleanup */\n        if (c && c->tid != IOTHREAD_MAIN_THREAD_ID) {\n            /* Partial cleanup for IO thread commands to avoid race issues.\n             * For an robj that may already be referenced elsewhere, we just\n             * decrement its reference count to release our own reference to it. 
*/\n            for (int j = 0; j < pcmd->argc; j++) {\n                robj *o = pcmd->argv[j];\n                if (o && o->refcount > 1) {\n                    decrRefCount(o);\n                    pcmd->argv[j] = NULL;\n                }\n            }\n\n            serverAssert(c->all_argv_len_sum >= pcmd->argv_len_sum); /* assert this doesn't try to go negative */\n            c->all_argv_len_sum -= pcmd->argv_len_sum;\n            pcmd->argv_len_sum = 0;\n\n            tryDeferFreeClientObject(c, DEFERRED_OBJECT_TYPE_PENDING_COMMAND, pcmd);\n            return;\n        }\n    }\n\nfree_command:\n    /* Shared pool is full or command argv is too large, free this pending command */\n    freePendingCommand(c, pcmd);\n}\n\nvoid initPendingCommand(pendingCommand *pcmd) {\n    memset(pcmd, 0, sizeof(pendingCommand));\n    pcmd->slot = INVALID_CLUSTER_SLOT;\n}\n\nvoid freePendingCommand(client *c, pendingCommand *pcmd) {\n    if (!pcmd)\n        return;\n\n    getKeysFreeResult(&pcmd->keys_result);\n\n    if (pcmd->argv) {\n        for (int j = 0; j < pcmd->argc; j++) {\n            robj *o = pcmd->argv[j];\n            if (!o) continue; /* argv[j] may be NULL when called from reclaimPendingCommand */\n            decrRefCount(o);\n        }\n\n        zfree(pcmd->argv);\n\n        /* c may be NULL when called from reclaimPendingCommand */\n        if (c) {\n            serverAssert(c->all_argv_len_sum >= pcmd->argv_len_sum); /* assert this doesn't try to go negative */\n            c->all_argv_len_sum -= pcmd->argv_len_sum;\n        }\n    }\n\n    zfree(pcmd);\n}\n\n/* Add a command to the tail of the pending command list. 
*/\nvoid addPendingCommand(pendingCommandList *queue, pendingCommand *cmd) {\n    cmd->next = NULL;\n    cmd->prev = queue->tail;\n\n    if (queue->tail) {\n        queue->tail->next = cmd;\n    } else {\n        /* Queue was empty */\n        queue->head = cmd;\n    }\n\n    queue->tail = cmd;\n    queue->len++;\n    if (!(cmd->flags & PENDING_CMD_FLAG_INCOMPLETE)) queue->ready_len++;\n}\n\n/* Remove and return the first pending command from the list.\n * Returns NULL if the list is empty. */\npendingCommand *popPendingCommandFromHead(pendingCommandList *list) {\n    pendingCommand *cmd = list->head;\n    if (!cmd) return NULL;  /* List is empty */\n\n    list->head = cmd->next;\n    if (list->head) {\n        list->head->prev = NULL;\n    } else {\n        /* Queue became empty */\n        list->tail = NULL;\n    }\n\n    cmd->next = cmd->prev = NULL;\n    list->len--;\n    if (!(cmd->flags & PENDING_CMD_FLAG_INCOMPLETE)) list->ready_len--;\n    return cmd;\n}\n\n/* Remove and return the last pending command from the list.\n * Returns NULL if the list is empty. 
*/\npendingCommand *popPendingCommandFromTail(pendingCommandList *list) {\n    pendingCommand *cmd = list->tail;\n    if (!cmd) return NULL;  /* List is empty */\n\n    list->tail = cmd->prev;\n    if (list->tail) {\n        list->tail->next = NULL;\n    } else {\n        /* Queue became empty */\n        list->head = NULL;\n    }\n\n    cmd->next = cmd->prev = NULL;\n    list->len--;\n    if (!(cmd->flags & PENDING_CMD_FLAG_INCOMPLETE)) list->ready_len--;\n    return cmd;\n}\n\n/* Get cached key result for current pending command */\ngetKeysResult *getClientCachedKeyResult(client *c) {\n    pendingCommand *pcmd = c->current_pending_cmd;\n    if (pcmd) {\n        /* Preprocess the command if needed */\n        if (!(pcmd->flags & PENDING_CMD_FLAG_PREPROCESSED)) {\n            preprocessCommand(c, pcmd);\n            pcmd->flags |= PENDING_CMD_FLAG_PREPROCESSED;\n        }\n\n        /* Return cached result if available */\n        if (pcmd->flags & PENDING_CMD_KEYS_RESULT_VALID)\n            return &c->current_pending_cmd->keys_result;\n    }\n    return NULL;\n}\n\nvoid shrinkPendingCommandPool(void) {\n    /* Don't shrink if pool is too small. */\n    if (server.cmd_pool.capacity <= PENDING_COMMAND_POOL_SIZE) return;\n\n    /* Free commands until we have half the current size, but not below minimum. */\n    int target_size = max(server.cmd_pool.size / 2, PENDING_COMMAND_POOL_SIZE);\n\n    while (server.cmd_pool.size > target_size) {\n        pendingCommand *cmd = server.cmd_pool.pool[--server.cmd_pool.size];\n        if (cmd) {\n            freePendingCommand(NULL, cmd);\n            server.cmd_pool.pool[server.cmd_pool.size] = NULL;\n        }\n    }\n\n    int old_capacity = server.cmd_pool.capacity;\n    server.cmd_pool.capacity = target_size;\n    server.cmd_pool.pool = zrealloc(server.cmd_pool.pool, sizeof(pendingCommand*) * target_size);\n    serverLog(LL_DEBUG, \"Shrunk pending command pool: capacity %d->%d\", old_capacity, server.cmd_pool.capacity);\n}\n"
  },
  {
    "path": "src/notify.c",
    "content": "/*\n * Copyright (c) 2013-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n\n/* This file implements keyspace event notifications via Pub/Sub, as\n * described at https://redis.io/docs/latest/develop/use/keyspace-notifications/. */\n\n/* Turn a string representing notification classes into an integer\n * representing notification classes flags xored.\n *\n * The function returns -1 if the input contains characters not mapping to\n * any class. */\nint keyspaceEventsStringToFlags(char *classes) {\n    char *p = classes;\n    int c, flags = 0;\n\n    while((c = *p++) != '\\0') {\n        switch(c) {\n        case 'A': flags |= NOTIFY_ALL; break;\n        case 'g': flags |= NOTIFY_GENERIC; break;\n        case '$': flags |= NOTIFY_STRING; break;\n        case 'l': flags |= NOTIFY_LIST; break;\n        case 's': flags |= NOTIFY_SET; break;\n        case 'h': flags |= NOTIFY_HASH; break;\n        case 'z': flags |= NOTIFY_ZSET; break;\n        case 'x': flags |= NOTIFY_EXPIRED; break;\n        case 'e': flags |= NOTIFY_EVICTED; break;\n        case 'K': flags |= NOTIFY_KEYSPACE; break;\n        case 'E': flags |= NOTIFY_KEYEVENT; break;\n        case 't': flags |= NOTIFY_STREAM; break;\n        case 'm': flags |= NOTIFY_KEY_MISS; break;\n        case 'd': flags |= NOTIFY_MODULE; break;\n        case 'n': flags |= NOTIFY_NEW; break;\n        case 'o': flags |= NOTIFY_OVERWRITTEN; break;\n        case 'c': flags |= NOTIFY_TYPE_CHANGED; break;\n        case 'r': flags |= NOTIFY_RATE_LIMIT; break;\n        case 'S': flags |= NOTIFY_SUBKEYSPACE; break;\n        case 'T': flags |= NOTIFY_SUBKEYEVENT; break;\n        case 'I': flags |= NOTIFY_SUBKEYSPACEITEM; break;\n        case 'V': flags |= NOTIFY_SUBKEYSPACEEVENT; break;\n     
   default: return -1;\n        }\n    }\n    return flags;\n}\n\n/* This function does exactly the reverse of the function above: it gets\n * as input an integer with the xored flags and returns a string representing\n * the selected classes. The string returned is an sds string that needs to\n * be released with sdsfree(). */\nsds keyspaceEventsFlagsToString(int flags) {\n    sds res;\n\n    res = sdsempty();\n    if ((flags & NOTIFY_ALL) == NOTIFY_ALL) {\n        res = sdscatlen(res,\"A\",1);\n    } else {\n        if (flags & NOTIFY_GENERIC) res = sdscatlen(res,\"g\",1);\n        if (flags & NOTIFY_STRING) res = sdscatlen(res,\"$\",1);\n        if (flags & NOTIFY_LIST) res = sdscatlen(res,\"l\",1);\n        if (flags & NOTIFY_SET) res = sdscatlen(res,\"s\",1);\n        if (flags & NOTIFY_HASH) res = sdscatlen(res,\"h\",1);\n        if (flags & NOTIFY_ZSET) res = sdscatlen(res,\"z\",1);\n        if (flags & NOTIFY_EXPIRED) res = sdscatlen(res,\"x\",1);\n        if (flags & NOTIFY_EVICTED) res = sdscatlen(res,\"e\",1);\n        if (flags & NOTIFY_STREAM) res = sdscatlen(res,\"t\",1);\n        if (flags & NOTIFY_MODULE) res = sdscatlen(res,\"d\",1);\n        if (flags & NOTIFY_NEW) res = sdscatlen(res,\"n\",1);\n        if (flags & NOTIFY_OVERWRITTEN) res = sdscatlen(res,\"o\",1);\n        if (flags & NOTIFY_TYPE_CHANGED) res = sdscatlen(res,\"c\",1);\n        if (flags & NOTIFY_RATE_LIMIT) res = sdscatlen(res,\"r\",1);\n    }\n    if (flags & NOTIFY_KEYSPACE) res = sdscatlen(res,\"K\",1);\n    if (flags & NOTIFY_KEYEVENT) res = sdscatlen(res,\"E\",1);\n    if (flags & NOTIFY_KEY_MISS) res = sdscatlen(res,\"m\",1);\n    if (flags & NOTIFY_SUBKEYSPACE) res = sdscatlen(res,\"S\",1);\n    if (flags & NOTIFY_SUBKEYEVENT) res = sdscatlen(res,\"T\",1);\n    if (flags & NOTIFY_SUBKEYSPACEITEM) res = sdscatlen(res,\"I\",1);\n    if (flags & NOTIFY_SUBKEYSPACEEVENT) res = sdscatlen(res,\"V\",1);\n    return res;\n}\n\n/* Append subkeys in length-prefixed format to 'dst'.\n 
* If 'dst' is NULL, a new sds is created.\n * Format: <len>:<subkey>[,<len>:<subkey>...]\n * Example: 3:abc,2:xx,5:hello */\nstatic sds catSubkeysPayload(sds dst, robj **subkeys, int count) {\n    if (dst == NULL) dst = sdsempty();\n    char lenbuf[32];\n\n    for (int i = 0; i < count; i++) {\n        serverAssert(sdsEncodedObject(subkeys[i]));\n        if (i > 0) dst = sdscatlen(dst, \",\", 1);\n        size_t subkeylen = sdslen(subkeys[i]->ptr);\n        int lenlen = ll2string(lenbuf, sizeof(lenbuf), subkeylen);\n        dst = sdscatlen(dst, lenbuf, lenlen);\n        dst = sdscatlen(dst, \":\", 1);\n        dst = sdscatsds(dst, subkeys[i]->ptr);\n    }\n    return dst;\n}\n\n/* Internal implementation for keyspace event notifications.\n *\n * The API provided to the rest of the Redis core is:\n *\n * notifyKeyspaceEvent(int type, char *event, robj *key, int dbid);\n * notifyKeyspaceEventWithSubkeys(int type, char *event, robj *key, int dbid,\n *                                robj **subkeys, int count);\n *\n * 'type' is the notification class we define in `server.h`.\n * 'event' is a C string representing the event name.\n * 'key' is a Redis object representing the key name.\n * 'dbid' is the database ID where the key lives.\n * 'subkeys' is an array of Redis objects representing the subkey names (can be NULL).\n * 'count' is the number of subkeys in the array.\n *\n * For subkey notifications (4 channel types):\n * - __subkeyspace@<db>__:<key>                  payload: <event>|<subkeys>\n * - __subkeyevent@<db>__:<event>                payload: <key_len>:<key>|<subkeys>\n * - __subkeyspaceitem@<db>__:<key>\\n<subkey>    payload: <event>\n * - __subkeyspaceevent@<db>__:<event>|<key>     payload: <subkeys>\n *\n * Where <subkeys> is in length-prefixed format: <len>:<subkey>[,<len>:<subkey>...]\n * Example: 3:foo,5:hello\n *\n * NOTE: This function may invoke module notification callbacks, which may\n * cause the key's kvobj to be reallocated. 
*/\nstatic void notifyKeyspaceEventImpl(int type, const char *event, robj *key, int dbid,\n                                    robj **subkeys, int count)\n{\n    sds chan;\n    robj *chanobj, *eventobj;\n    char buf[24];\n    serverAssert(sdsEncodedObject(key));\n\n    /* If any modules are interested in events, notify the module system now.\n     * This bypasses the notifications configuration, but the module engine\n     * will only call event subscribers if the event type matches the types\n     * they are interested in. Subkeys are passed through so that subscribers\n     * with a subkey callback receive them. */\n    moduleNotifyKeyspaceEvent(type, event, key, dbid, subkeys, count);\n\n    /* If notifications for this class of events are off, return ASAP. */\n    if (!(server.notify_keyspace_events & type)) return;\n\n    /* If there are no Pub/Sub subscribers (neither pattern nor channel),\n     * skip the remaining notification work since nobody would receive it. */\n    if (dictSize(server.pubsub_patterns) == 0 && kvstoreSize(server.pubsub_channels) == 0)\n        return;\n\n    eventobj = createStringObject(event,strlen(event));\n    int len = ll2string(buf,sizeof(buf),dbid);\n\n    /* __keyspace@<db>__:<key> <event> notifications. */\n    if (server.notify_keyspace_events & NOTIFY_KEYSPACE) {\n        chan = sdsnewlen(\"__keyspace@\",11);\n        chan = sdscatlen(chan, buf, len);\n        chan = sdscatlen(chan, \"__:\", 3);\n        chan = sdscatsds(chan, key->ptr);\n        chanobj = createObject(OBJ_STRING, chan);\n        pubsubPublishMessage(chanobj, eventobj, 0);\n        decrRefCount(chanobj);\n    }\n\n    /* __keyevent@<db>__:<event> <key> notifications. 
*/\n    if (server.notify_keyspace_events & NOTIFY_KEYEVENT) {\n        chan = sdsnewlen(\"__keyevent@\",11);\n        chan = sdscatlen(chan, buf, len);\n        chan = sdscatlen(chan, \"__:\", 3);\n        chan = sdscatsds(chan, eventobj->ptr);\n        chanobj = createObject(OBJ_STRING, chan);\n        pubsubPublishMessage(chanobj, key, 0);\n        decrRefCount(chanobj);\n    }\n\n    /* Subkey-level notifications (only when subkeys are provided). */\n    if (subkeys != NULL && count > 0) {\n        /* __subkeyspace@<db>__:<key> <event>|<len>:<subkey>[,...] notifications.\n         * Skip if the event contains '|' to avoid parsing ambiguity since '|'\n         * is used as a separator between event and subkeys in the payload. */\n        if (server.notify_keyspace_events & NOTIFY_SUBKEYSPACE && !strchr(event, '|')) {\n            chan = sdsnewlen(\"__subkeyspace@\", 14);\n            chan = sdscatlen(chan, buf, len);\n            chan = sdscatlen(chan, \"__:\", 3);\n            chan = sdscatsds(chan, key->ptr);\n            chanobj = createObject(OBJ_STRING, chan);\n\n            /* Build payload: <event>|<subkeys_payload> */\n            sds payload = sdsdup(eventobj->ptr);\n            payload = sdscatlen(payload, \"|\", 1);\n            payload = catSubkeysPayload(payload, subkeys, count);\n            robj *payloadobj = createObject(OBJ_STRING, payload);\n            pubsubPublishMessage(chanobj, payloadobj, 0);\n            decrRefCount(chanobj);\n            decrRefCount(payloadobj);\n        }\n\n        /* __subkeyevent@<db>__:<event> <key_len>:<key>|<len>:<subkey>[,...] notifications. 
*/\n        if (server.notify_keyspace_events & NOTIFY_SUBKEYEVENT) {\n            chan = sdsnewlen(\"__subkeyevent@\", 14);\n            chan = sdscatlen(chan, buf, len);\n            chan = sdscatlen(chan, \"__:\", 3);\n            chan = sdscatsds(chan, eventobj->ptr);\n            chanobj = createObject(OBJ_STRING, chan);\n\n            /* Build payload: <key_len>:<key>|<subkeys_payload> */\n            size_t keylen = sdslen(key->ptr);\n            char keylenbuf[32];\n            int keylenlen = ll2string(keylenbuf, sizeof(keylenbuf), keylen);\n            sds payload = sdsnewlen(keylenbuf, keylenlen);\n            payload = sdscatlen(payload, \":\", 1);\n            payload = sdscatsds(payload, key->ptr);\n            payload = sdscatlen(payload, \"|\", 1);\n            payload = catSubkeysPayload(payload, subkeys, count);\n            robj *payloadobj = createObject(OBJ_STRING, payload);\n            pubsubPublishMessage(chanobj, payloadobj, 0);\n            decrRefCount(chanobj);\n            decrRefCount(payloadobj);\n        }\n\n        /* __subkeyspaceitem@<db>__:<key>\\n<subkey> <event> notifications (per subkey).\n         * Skip if the key contains '\\n' to avoid parsing ambiguity in the channel name. 
*/\n        if (server.notify_keyspace_events & NOTIFY_SUBKEYSPACEITEM &&\n            memchr(key->ptr, '\\n', sdslen(key->ptr)) == NULL)\n        {\n            for (int i = 0; i < count; i++) {\n                serverAssert(sdsEncodedObject(subkeys[i]));\n                chan = sdsnewlen(\"__subkeyspaceitem@\", 18);\n                chan = sdscatlen(chan, buf, len);\n                chan = sdscatlen(chan, \"__:\", 3);\n                chan = sdscatsds(chan, key->ptr);\n                chan = sdscatlen(chan, \"\\n\", 1);\n                chan = sdscatsds(chan, subkeys[i]->ptr);\n                chanobj = createObject(OBJ_STRING, chan);\n                pubsubPublishMessage(chanobj, eventobj, 0);\n                decrRefCount(chanobj);\n            }\n        }\n\n        /* __subkeyspaceevent@<db>__:<event>|<key> <subkeys> notifications.\n         * Skip if the event contains '|' to avoid parsing ambiguity since '|'\n         * is used as a separator between event and key in the channel name. */\n        if (server.notify_keyspace_events & NOTIFY_SUBKEYSPACEEVENT && !strchr(event, '|')) {\n            chan = sdsnewlen(\"__subkeyspaceevent@\", 19);\n            chan = sdscatlen(chan, buf, len);\n            chan = sdscatlen(chan, \"__:\", 3);\n            chan = sdscatsds(chan, eventobj->ptr);\n            chan = sdscatlen(chan, \"|\", 1);\n            chan = sdscatsds(chan, key->ptr);\n            chanobj = createObject(OBJ_STRING, chan);\n            robj *payloadobj = createObject(OBJ_STRING, catSubkeysPayload(NULL, subkeys, count));\n            pubsubPublishMessage(chanobj, payloadobj, 0);\n            decrRefCount(chanobj);\n            decrRefCount(payloadobj);\n        }\n    }\n\n    decrRefCount(eventobj);\n}\n\n/* Public API for key-level notifications (backward compatible). 
*/\nvoid notifyKeyspaceEvent(int type, const char *event, robj *key, int dbid) {\n    notifyKeyspaceEventImpl(type, event, key, dbid, NULL, 0);\n}\n\n/* Public API for notifications with subkeys (key-level + subkey-level). */\nvoid notifyKeyspaceEventWithSubkeys(int type, const char *event, robj *key, int dbid,\n                                    robj **subkeys, int count) {\n    notifyKeyspaceEventImpl(type, event, key, dbid, subkeys, count);\n}\n\n/* Check if subkey information should be collected for the given event type.\n * Returns true if any module subscribed to this event with subkeys, or if\n * there are Pub/Sub subscribers and any subkey-level notification channel is\n * enabled for this event type. */\nint isSubkeyNotifyEnabled(int type) {\n    if (moduleHasSubscribersForKeyspaceEventWithSubkeys(type)) return 1;\n    if (dictSize(server.pubsub_patterns) == 0 && kvstoreSize(server.pubsub_channels) == 0)\n        return 0;\n    return (server.notify_keyspace_events & type) &&\n           (server.notify_keyspace_events & (NOTIFY_SUBKEYSPACE | NOTIFY_SUBKEYEVENT |\n                                             NOTIFY_SUBKEYSPACEITEM | NOTIFY_SUBKEYSPACEEVENT));\n}\n"
  },
  {
    "path": "src/object.c",
    "content": "/* Redis Object implementation.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n * \n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"functions.h\"\n#include \"intset.h\"  /* Compact integer set structure */\n#include \"cluster_asm.h\"\n#include <math.h>\n#include <ctype.h>\n\n#ifdef __CYGWIN__\n#define strtold(a,b) ((long double)strtod((a),(b)))\n#endif\n\n/* Map a metadata ID (bit index) to its compacted slot number among set bits,\n * then return a pointer to that slot. Caller must ensure the ID bit is set. */\nuint64_t *kvobjMetaRef(kvobj *kv, int metaId) {\n    uint32_t bits = kv->metabits;\n\n    /* Expiry is always the first metadata */\n    if (likely(metaId == 0)) return ((uint64_t *)kv) - 1;\n    \n    uint32_t maskId = 1u << metaId;\n    serverAssert(bits & maskId);\n    \n    /* Count set bits with lower IDs to get the compacted slot index. */\n    uint32_t lowerMask = maskId - 1u;\n    int metaSlot = __builtin_popcount(bits & lowerMask);\n    return ((uint64_t *)kv) - metaSlot - 1;\n}\n\n/* For objects with large embedded keys, we reserve space for an expire field,\n * so if expire is set later, we don't need to reallocate the object. 
*/\n#define KEY_SIZE_TO_INCLUDE_EXPIRE_THRESHOLD 128\n\n/* ===================== Creation and parsing of objects ==================== */\n\n/* Creates kvobj (with embedded key and optional metadata) \n * \n * keyMetaBits - bitmask of active metadata classes to allocate space for (bit 0 is\n *               reserved for expiration).\n * \n * Example of \"mykey\" with expiration and metadata :\n * \n *    +------------+------------+-----------+------------------+------------------------+\n *    | m.meta (8) | expiry (8) | robj (16) | key-hdr-size (1) | sdshdr5 \"mykey\" \\0 (7) | \n *    +------------+------------+-----------+------------------+------------------------+\n *                              ^\n *                              |\n *                              kvobjCreate() returns pointer to here\n */\nkvobj *kvobjCreate(int type, const sds key, void *ptr, uint32_t keyMetaBits) {\n    /* Determine embedded key and expiration flags */\n    serverAssert(key != NULL);\n\n    /* If key is large and expire is not set, add space for it. */\n    size_t key_sds_len = sdslen(key);\n    if (key_sds_len >= KEY_SIZE_TO_INCLUDE_EXPIRE_THRESHOLD)\n        keyMetaBits |= KEY_META_MASK_EXPIRE;\n\n    /* Now that keyMetaBits is finalized, compute metadata size. 
*/\n    uint32_t sizeMetas = getNumMeta(keyMetaBits) * sizeof(uint64_t);\n\n    /* Calculate embedded key size */\n    char key_sds_type = sdsReqType(key_sds_len);\n    size_t key_sds_size = sdsReqSize(key_sds_len, key_sds_type);\n\n    /* Compute the base object size */\n    size_t min_size = sizeof(robj);\n    min_size += sizeMetas;\n    min_size += 1 + key_sds_size; /* 1 byte for SDS header size */\n\n    /* Allocate object memory */\n    char *alloc = zmalloc(min_size);\n    kvobj *kv = (kvobj *) (alloc + sizeMetas);\n    kv->type = type;\n    kv->encoding = OBJ_ENCODING_RAW;\n    kv->ptr = ptr;\n    kv->refcount = 1;\n    kv->lru = 0;\n    kv->iskvobj = 1;\n    kv->metabits = keyMetaBits;\n\n    /* The memory after the struct where we embedded data. */\n    char *data = (void *)(kv + 1);\n\n    /* Store embedded key. */\n    *data++ = sdsHdrSize(key_sds_type);\n    sdsnewplacement(data, key_sds_size, key_sds_type, key, key_sds_len);\n\n    /* Reset each allocated metadata to its reset_value (such as Expiry=-1, etc) */\n    keyMetaResetValues(kv);\n\n    return kv;\n}\n\nrobj *createObject(int type, void *ptr) {\n    robj *o = zmalloc(sizeof(*o));\n    o->type = type;\n    o->encoding = OBJ_ENCODING_RAW;\n    o->ptr = ptr;\n    o->refcount = 1;\n    o->lru = 0;\n    o->iskvobj = 0;\n    o->metabits = 0;\n    return o;\n}\n\nvoid initObjectLRUOrLFU(robj *o) {\n    if (o->refcount == OBJ_SHARED_REFCOUNT)\n        return;\n    /* Set the LRU to the current lruclock (seconds resolution), or\n     * alternatively the LFU counter. */\n    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {\n        o->lru = (LFUGetTimeInMinutes() << 8) | LFU_INIT_VAL;\n    } else {\n        o->lru = LRU_CLOCK();\n    }\n    return;\n}\n\n/* Set a special refcount in the object to make it \"shared\":\n * incrRefCount and decrRefCount() will test for this special refcount\n * and will not touch the object. 
This way it is free to access shared\n * objects such as small integers from different threads without any\n * mutex.\n *\n * A common pattern to create shared objects:\n *\n * robj *myobject = makeObjectShared(createObject(...));\n *\n */\nrobj *makeObjectShared(robj *o) {\n    serverAssert(o->refcount == 1);\n    o->refcount = OBJ_SHARED_REFCOUNT;\n    return o;\n}\n\n/* Create a string object with encoding OBJ_ENCODING_RAW, that is a plain\n * string object where o->ptr points to a proper sds string. */\nrobj *createRawStringObject(const char *ptr, size_t len) {\n    return createObject(OBJ_STRING, sdsnewlen(ptr,len));\n}\n\n/* Creates a new embedded string object and copies the content of key, val and\n * expire to the new object. LRU is set to 0. \n * \n * Example of kvobj \"mykey\" with embedded \"myvalue\" (16+1+7+11 = 35bytes):\n *    +-----------+------------------+------------------------+----------------------------+\n *    | robj (16) | key-hdr-size (1) | sdshdr5 \"mykey\" \\0 (7) | sdshdr8 \"myvalue\" \\0  (11) | \n *    +-----------+------------------+------------------------+----------------------------+\n */\nstatic kvobj *kvobjCreateEmbedString(const char *val_ptr, size_t val_len,\n                                     const sds key, uint32_t keyMetaBits)\n{\n    kvobj *o;\n    debugServerAssert(key != NULL);\n    uint32_t sizeMetas = getNumMeta(keyMetaBits) * sizeof(uint64_t);\n\n    /* Calculate sizes for embedded key */\n    size_t key_sds_len = sdslen(key);\n    char key_sds_type = sdsReqType(key_sds_len);\n    size_t key_sds_size = sdsReqSize(key_sds_len, key_sds_type);\n\n    /* Calculate size for embedded value (always SDS_TYPE_8) */\n    size_t val_sds_size = sdsReqSize(val_len, SDS_TYPE_8);\n\n    /* Compute base object size */\n    size_t min_size = sizeof(robj) + val_sds_size;\n    min_size += sizeMetas;\n    min_size += 1 + key_sds_size; /* 1 byte for SDS header size */\n\n    /* Allocate object memory */\n    size_t bufsize = 0;\n    
char *alloc = zmalloc_usable(min_size, &bufsize);\n    o = (kvobj *) (alloc + sizeMetas);\n\n    o->type = OBJ_STRING;\n    o->encoding = OBJ_ENCODING_EMBSTR;\n    o->refcount = 1;\n    o->lru = 0;\n    o->metabits = keyMetaBits;\n    o->iskvobj = 1;\n\n    /* The memory after the struct where we embedded data. */\n    char *data = (char *)(o + 1);\n    /* Store embedded key */\n    *data++ = sdsHdrSize(key_sds_type);\n    sdsnewplacement(data, key_sds_size, key_sds_type, key, key_sds_len);\n    data += key_sds_size;\n\n    /* Copy embedded value (EMBSTR) always as SDS TYPE 8. Account for unused\n     * memory in the SDS alloc field. */\n    size_t remaining_size = bufsize - (data - alloc);\n    o->ptr = sdsnewplacement(data, remaining_size, SDS_TYPE_8, val_ptr, val_len);\n    \n    keyMetaResetValues(o); /* modules + expire */\n    \n    return o;\n}\n\n/* Create a string object with encoding OBJ_ENCODING_EMBSTR, that is\n * an object where the sds string is actually an unmodifiable string\n * allocated in the same chunk as the object itself.\n * \n * Example of robj with embedded \"myvalue\" (16+1+11 = 28 bytes):\n *    +-----------+------------------+----------------------------+\n *    | robj (16) | key-hdr-size (1) | sdshdr8 \"myvalue\" \\0  (11) | \n *    +-----------+------------------+----------------------------+\n */\nrobj *createEmbeddedStringObject(const char *val_ptr, size_t val_len) {\n    /* Calculate size for embedded value (always SDS_TYPE_8) */\n    size_t val_sds_size = sdsReqSize(val_len, SDS_TYPE_8);\n    \n    /* Allocate object memory */\n    size_t bufsize = 0;\n    robj *o = zmalloc_usable(sizeof(robj) + val_sds_size, &bufsize);\n    o->type = OBJ_STRING;\n    o->encoding = OBJ_ENCODING_EMBSTR;\n    o->refcount = 1;\n    o->lru = 0;\n    o->metabits = 0;\n    o->iskvobj = 0;\n\n    /* The memory after the struct where we embedded data. */\n    char *data = (char *)(o + 1);\n    \n    /* Copy embedded value (EMBSTR) always as SDS TYPE 8. 
Account for unused\n     * memory in the SDS alloc field. */\n    size_t remaining_size = bufsize - (data - (char *)(void *)o);\n    o->ptr = sdsnewplacement(data, remaining_size, SDS_TYPE_8, val_ptr, val_len);\n    return o;\n}\n\nsds kvobjGetKey(const kvobj *kv) {\n    unsigned char *data = (void *)(kv + 1);\n    debugServerAssert(kv->iskvobj);\n    uint8_t hdr_size = *(uint8_t *)data;\n    data += 1 + hdr_size;\n    return (sds)data;\n}\n\nlong long kvobjGetExpire(const kvobj *kv) {\n    if (kv->metabits & KEY_META_MASK_EXPIRE) {\n        return (long long) (*kvobjMetaRef((kvobj *)kv, KEY_META_ID_EXPIRE));\n    } else {\n        return -1;\n    }\n}\n\n/* This function may reallocate the value. The new allocation is returned and\n * the old object's reference counter is decremented and possibly freed. Use the\n * returned object instead of 'kv' after calling this function. */\nkvobj *kvobjSetExpire(kvobj *kv, long long expire) {\n    /* If kv is not expirable, we need to realloc to add expire metadata */\n    if (!(kv->metabits & KEY_META_MASK_EXPIRE)) {\n        /* Nothing to do if kv is not expirable and expire is -1 */\n        if (expire == -1)\n            return kv;\n        \n        kv = kvobjSet(kvobjGetKey(kv), kv, kv->metabits | KEY_META_MASK_EXPIRE);\n    }\n\n    /* kv is expirable. Update expire field. */\n    *kvobjMetaRef(kv, KEY_META_ID_EXPIRE) = expire;\n    return kv;\n}\n\n/* This function may reallocate the value. The new allocation is returned and\n * the old object's reference counter is decremented and possibly freed. Use the\n * returned object instead of 'val' after calling this function. 
*/\nkvobj *kvobjSet(sds key, robj *val, uint32_t keyMetaBits) {\n    kvobj *kv;\n    if (val->type == OBJ_STRING && val->encoding == OBJ_ENCODING_EMBSTR) {\n        size_t len = sdslen(val->ptr);\n\n        /* Embed when the total fits within a cache line (metadata is not\n         * counted: it is placed before the object and this estimate does not\n         * have to be exact) */\n        size_t size = sizeof(kvobj);\n        size += (key != NULL) * (sdslen(key) + 3); /* hdr size (1) + hdr (1) + nullterm (1) */\n        size += 4 + len; /* embstr header (3) + nullterm (1) */\n        if (size <= CACHE_LINE_SIZE) {\n            kv = kvobjCreateEmbedString(val->ptr, len, key, keyMetaBits);\n        } else {\n            kv = kvobjCreate(OBJ_STRING, key, sdsnewlen(val->ptr, len), keyMetaBits);\n        }\n    } else {\n        /* Create a new object with embedded key. Reuse ptr if possible. */\n        void *valptr;\n        if (val->refcount == 1) {\n            /* Reuse the ptr. There are no other references to val. */\n            valptr = val->ptr;\n            val->ptr = NULL;\n        } else if (val->type == OBJ_STRING &&\n                   val->encoding == OBJ_ENCODING_INT) {\n            /* The pointer is not allocated memory. We can just copy the pointer. */\n            valptr = val->ptr;\n        } else if (val->type == OBJ_STRING &&\n                   val->encoding == OBJ_ENCODING_RAW) {\n            /* Dup the string. */\n            valptr = sdsdup(val->ptr);\n        } else {\n            /* There are multiple references to this non-string object. Most types\n             * can be duplicated, but for a module type it is not always possible. */\n            serverPanic(\"Not implemented\");\n        }\n        kv = kvobjCreate(val->type, key, valptr, keyMetaBits);\n        kv->encoding = val->encoding;\n    }\n\n    kv->lru = val->lru;\n\n    /* Transfer module metadata from `val` to the new `kv` (if `val` is a kvobj with metadata). 
*/\n    if (val->metabits & KEY_META_MASK_MODULES)\n        keyMetaTransition((kvobj *) val, kv);\n\n    decrRefCount(val);\n    return kv;\n}\n\n/* Create a string object with EMBSTR encoding if it is smaller than\n * OBJ_ENCODING_EMBSTR_SIZE_LIMIT, otherwise the RAW encoding is\n * used.\n *\n * The current limit of 44 is chosen so that the biggest string object\n * we allocate as EMBSTR will still fit into the 64 byte arena of jemalloc. */\n#define OBJ_ENCODING_EMBSTR_SIZE_LIMIT 44\nrobj *createStringObject(const char *ptr, size_t len) {\n    if (len <= OBJ_ENCODING_EMBSTR_SIZE_LIMIT)\n        return createEmbeddedStringObject(ptr,len);\n    else\n        return createRawStringObject(ptr,len);\n}\n\n/* Same as createRawStringObject, can return NULL if allocation fails */\nrobj *tryCreateRawStringObject(const char *ptr, size_t len) {\n    sds str = sdstrynewlen(ptr,len);\n    if (!str) return NULL;\n    return createObject(OBJ_STRING, str);\n}\n\n/* Same as createStringObject, can return NULL if allocation fails */\nrobj *tryCreateStringObject(const char *ptr, size_t len) {\n    if (len <= OBJ_ENCODING_EMBSTR_SIZE_LIMIT)\n        return createEmbeddedStringObject(ptr,len);\n    else\n        return tryCreateRawStringObject(ptr,len);\n}\n\n/* Create a string object from a long long value according to the specified flag. */\n#define LL2STROBJ_AUTO 0       /* automatically create the optimal string object */\n#define LL2STROBJ_NO_SHARED 1  /* disallow shared objects */\n#define LL2STROBJ_NO_INT_ENC 2 /* disallow integer encoded objects. 
*/\nrobj *createStringObjectFromLongLongWithOptions(long long value, int flag) {\n    robj *o;\n\n    if (value >= 0 && value < OBJ_SHARED_INTEGERS && flag == LL2STROBJ_AUTO) {\n        o = shared.integers[value];\n    } else {\n        if ((value >= LONG_MIN && value <= LONG_MAX) && flag != LL2STROBJ_NO_INT_ENC) {\n            o = createObject(OBJ_STRING, NULL);\n            o->encoding = OBJ_ENCODING_INT;\n            o->ptr = (void*)((long)value);\n        } else {\n            char buf[LONG_STR_SIZE];\n            int len = ll2string(buf, sizeof(buf), value);\n            o = createStringObject(buf, len);\n        }\n    }\n    return o;\n}\n\n/* Wrapper for createStringObjectFromLongLongWithOptions() always trying\n * to create a shared object if possible. */\nrobj *createStringObjectFromLongLong(long long value) {\n    return createStringObjectFromLongLongWithOptions(value, LL2STROBJ_AUTO);\n}\n\n/* The function avoids returning a shared integer when LFU/LRU info\n * are needed, that is, when the object is used as a value in the key\n * space (for instance when the INCR command is used), and Redis is\n * configured to evict based on LFU/LRU, so we want LFU/LRU values\n * specific for each key. */\nrobj *createStringObjectFromLongLongForValue(long long value) {\n    if (server.maxmemory == 0 || !(server.maxmemory_policy & MAXMEMORY_FLAG_NO_SHARED_INTEGERS)) {\n        /* If the maxmemory policy permits, we can still return shared integers */\n        return createStringObjectFromLongLongWithOptions(value, LL2STROBJ_AUTO);\n    } else {\n        return createStringObjectFromLongLongWithOptions(value, LL2STROBJ_NO_SHARED);\n    }\n}\n\n/* Create a string object that contains an sds inside it. That means it can't be\n * integer encoded (OBJ_ENCODING_INT), and it'll always be an EMBSTR type (a\n * long long is at most 20 digits, well below the EMBSTR size limit). 
*/\nrobj *createStringObjectFromLongLongWithSds(long long value) {\n    return createStringObjectFromLongLongWithOptions(value, LL2STROBJ_NO_INT_ENC);\n}\n\n/* Create a string object from a long double. If humanfriendly is non-zero\n * it does not use exponential format and trims trailing zeroes at the end,\n * however this results in loss of precision. Otherwise exp format is used\n * and the output of snprintf() is not modified.\n *\n * The 'humanfriendly' option is used for INCRBYFLOAT and HINCRBYFLOAT. */\nrobj *createStringObjectFromLongDouble(long double value, int humanfriendly) {\n    char buf[MAX_LONG_DOUBLE_CHARS];\n    int len = ld2string(buf,sizeof(buf),value,humanfriendly? LD_STR_HUMAN: LD_STR_AUTO);\n    return createStringObject(buf,len);\n}\n\n/* Duplicate a string object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * This function also guarantees that duplicating a small integer object\n * (or a string object that contains a representation of a small integer)\n * will always result in a fresh object that is unshared (refcount == 1).\n *\n * The resulting object always has refcount set to 1. 
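\n *\n * For example, duplicating an OBJ_ENCODING_INT object (even one coming from\n * shared.integers) returns a fresh object with refcount 1: the integer stored\n * in ptr is copied and no separate allocation is duplicated. 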
*/\nrobj *dupStringObject(const robj *o) {\n    robj *d;\n\n    serverAssert(o->type == OBJ_STRING);\n\n    switch(o->encoding) {\n    case OBJ_ENCODING_RAW:\n        return createRawStringObject(o->ptr,sdslen(o->ptr));\n    case OBJ_ENCODING_EMBSTR:\n        return createEmbeddedStringObject(o->ptr,sdslen(o->ptr));\n    case OBJ_ENCODING_INT:\n        d = createObject(OBJ_STRING, NULL);\n        d->encoding = OBJ_ENCODING_INT;\n        d->ptr = o->ptr;\n        return d;\n    default:\n        serverPanic(\"Wrong encoding.\");\n        break;\n    }\n}\n\nrobj *createQuicklistObject(int fill, int compress) {\n    quicklist *l = quicklistNew(fill, compress);\n    robj *o = createObject(OBJ_LIST,l);\n    o->encoding = OBJ_ENCODING_QUICKLIST;\n    return o;\n}\n\nrobj *createListListpackObject(void) {\n    unsigned char *lp = lpNew(0);\n    robj *o = createObject(OBJ_LIST,lp);\n    o->encoding = OBJ_ENCODING_LISTPACK;\n    return o;\n}\n\nrobj *createSetObject(void) {\n    dict *d = dictCreate(&setDictType);\n    robj *o = createObject(OBJ_SET,d);\n    o->encoding = OBJ_ENCODING_HT;\n    return o;\n}\n\nrobj *createIntsetObject(void) {\n    intset *is = intsetNew();\n    robj *o = createObject(OBJ_SET,is);\n    o->encoding = OBJ_ENCODING_INTSET;\n    return o;\n}\n\nrobj *createSetListpackObject(void) {\n    unsigned char *lp = lpNew(0);\n    robj *o = createObject(OBJ_SET, lp);\n    o->encoding = OBJ_ENCODING_LISTPACK;\n    return o;\n}\n\nrobj *createHashObject(void) {\n    unsigned char *zl = lpNew(0);\n    robj *o = createObject(OBJ_HASH, zl);\n    o->encoding = OBJ_ENCODING_LISTPACK;\n    return o;\n}\n\nrobj *createZsetObject(void) {\n    zset *zs = zmalloc(sizeof(*zs));\n    robj *o;\n\n    zs->dict = dictCreate(&zsetDictType);\n    zs->zsl = zslCreate();\n    o = createObject(OBJ_ZSET,zs);\n    o->encoding = OBJ_ENCODING_SKIPLIST;\n    return o;\n}\n\nrobj *createZsetListpackObject(void) {\n    unsigned char *lp = lpNew(0);\n    robj *o = 
createObject(OBJ_ZSET,lp);\n    o->encoding = OBJ_ENCODING_LISTPACK;\n    return o;\n}\n\nrobj *createStreamObject(void) {\n    stream *s = streamNew();\n    robj *o = createObject(OBJ_STREAM,s);\n    o->encoding = OBJ_ENCODING_STREAM;\n    return o;\n}\n\nrobj *createGCRAObject(long long value) {\n    /* NOTE: for 32-bit systems we can't use integer encoding (as OBJ_STRING does)\n     * as the GCRA object is a unixtime value in microseconds, which as of the\n     * time of writing is already much more than 32-bit's LONG_MAX. */\n#if UINTPTR_MAX == 0xffffffff\n    long long *v = zmalloc(sizeof(long long));\n    *v = value;\n    robj *o = createObject(OBJ_GCRA,v);\n#else\n    robj *o = createObject(OBJ_GCRA,NULL);\n    o->ptr = (void*)value;\n#endif\n\n    o->encoding = OBJ_ENCODING_INT;\n    return o;\n}\n\nrobj *createModuleObject(moduleType *mt, void *value) {\n    moduleValue *mv = zmalloc(sizeof(*mv));\n    mv->type = mt;\n    mv->value = value;\n    return createObject(OBJ_MODULE,mv);\n}\n\nvoid freeStringObject(robj *o) {\n    if (o->encoding == OBJ_ENCODING_RAW) {\n        sdsfree(o->ptr);\n    }\n}\n\nvoid freeListObject(robj *o) {\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklistRelease(o->ptr);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        lpFree(o->ptr);\n    } else {\n        serverPanic(\"Unknown list encoding type\");\n    }\n}\n\nvoid freeSetObject(robj *o) {\n    switch (o->encoding) {\n    case OBJ_ENCODING_HT:\n#ifdef DEBUG_ASSERTIONS\n        dictEmpty(o->ptr, NULL);\n        debugServerAssert(*htGetMetadataSize(o->ptr) == 0);\n#endif\n        dictRelease((dict*) o->ptr);\n        break;\n    case OBJ_ENCODING_INTSET:\n    case OBJ_ENCODING_LISTPACK:\n        zfree(o->ptr);\n        break;\n    default:\n        serverPanic(\"Unknown set encoding type\");\n    }\n}\n\nvoid freeZsetObject(robj *o) {\n    zset *zs;\n    switch (o->encoding) {\n    case OBJ_ENCODING_SKIPLIST:\n        zs = o->ptr;\n        
dictRelease(zs->dict);\n        zslFree(zs->zsl);\n        zfree(zs);\n        break;\n    case OBJ_ENCODING_LISTPACK:\n        zfree(o->ptr);\n        break;\n    default:\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n}\n\nvoid freeHashObject(robj *o) {\n    hashTypeFree(o);\n}\n\nvoid freeModuleObject(robj *o) {\n    moduleValue *mv = o->ptr;\n    mv->type->free(mv->value);\n    zfree(mv);\n}\n\nvoid freeStreamObject(robj *o) {\n    freeStream(o->ptr);\n}\n\nvoid freeGCRAObject(robj *o) {\n#if UINTPTR_MAX == 0xffffffff\n    zfree(o->ptr);\n#else\n    (void)o;\n#endif\n}\n\nvoid incrRefCount(robj *o) {\n    if (o->refcount < OBJ_FIRST_SPECIAL_REFCOUNT - 1) {\n        o->refcount++;\n    } else {\n        if (o->refcount == OBJ_SHARED_REFCOUNT) {\n            /* Nothing to do: this refcount is immutable. */\n        } else if (o->refcount == OBJ_STATIC_REFCOUNT) {\n            serverPanic(\"You tried to retain an object allocated in the stack\");\n        } else {\n            serverPanic(\"You tried to retain an object with maximum refcount\");\n        }\n    }\n}\n\nvoid decrRefCount(robj *o) {\n    if (o->refcount == OBJ_SHARED_REFCOUNT)\n        return; /* Nothing to do: this refcount is immutable. */\n\n    if (unlikely(o->refcount <= 0)) {\n        serverPanic(\"illegal decrRefCount for object with: type %u, encoding %u, refcount %d\",\n            o->type, o->encoding, o->refcount);\n    }\n\n    if (--(o->refcount) == 0) {\n        void *alloc = o;\n        \n        if (o->iskvobj) {\n            /* eval real allocation pointer */\n            alloc = kvobjGetAllocPtr(o);\n            /* if kvobj has metadata attached. 
*/\n            if (getModuleMetaBits(o->metabits))\n                keyMetaOnFree((kvobj *)o);\n        }\n        \n        if (o->ptr != NULL) {\n            switch(o->type) {\n            case OBJ_STRING: freeStringObject(o); break;\n            case OBJ_LIST: freeListObject(o); break;\n            case OBJ_SET: freeSetObject(o); break;\n            case OBJ_ZSET: freeZsetObject(o); break;\n            case OBJ_HASH: freeHashObject(o); break;\n            case OBJ_MODULE: freeModuleObject(o); break;\n            case OBJ_STREAM: freeStreamObject(o); break;\n            case OBJ_GCRA: freeGCRAObject(o); break;\n            default: serverPanic(\"Unknown object type\"); break;\n            }\n        }\n        zfree(alloc);\n    }\n}\n\n/* See dismissObject() */\nvoid dismissSds(sds s) {\n    dismissMemory(sdsAllocPtr(s), sdsAllocSize(s));\n}\n\n/* See dismissObject() */\nvoid dismissStringObject(robj *o) {\n    if (o->encoding == OBJ_ENCODING_RAW) {\n        dismissSds(o->ptr);\n    }\n}\n\n/* See dismissObject() */\nvoid dismissListObject(robj *o, size_t size_hint) {\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklist *ql = o->ptr;\n        serverAssert(ql->len != 0);\n        /* We iterate all nodes only when average node size is bigger than a\n         * page size, and there's a high chance we'll actually dismiss something. 
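\n         * For example, with a 4 KiB page size a 100-node quicklist is only\n         * iterated when size_hint is at least ~400 KiB, i.e. when the average\n         * node holds a page or more of data. 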
*/\n        if (size_hint / ql->len >= server.page_size) {\n            quicklistNode *node = ql->head;\n            while (node) {\n                if (quicklistNodeIsCompressed(node)) {\n                    dismissMemory(node->entry, ((quicklistLZF*)node->entry)->sz);\n                } else {\n                    dismissMemory(node->entry, node->sz);\n                }\n                node = node->next;\n            }\n        }\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        dismissMemory(o->ptr, lpBytes((unsigned char*)o->ptr));\n    } else {\n        serverPanic(\"Unknown list encoding type\");\n    }\n}\n\n/* See dismissObject() */\nvoid dismissSetObject(robj *o, size_t size_hint) {\n    if (o->encoding == OBJ_ENCODING_HT) {\n        dict *set = o->ptr;\n        serverAssert(dictSize(set) != 0);\n        /* We iterate all nodes only when average member size is bigger than a\n         * page size, and there's a high chance we'll actually dismiss something. */\n        if (size_hint / dictSize(set) >= server.page_size) {\n            dictEntry *de;\n            dictIterator di;\n            dictInitIterator(&di, set);\n            while ((de = dictNext(&di)) != NULL) {\n                dismissSds(dictGetKey(de));\n            }\n            dictResetIterator(&di);\n        }\n\n        /* Dismiss hash table memory. 
*/\n        dismissDictBucketsMemory(set);\n    } else if (o->encoding == OBJ_ENCODING_INTSET) {\n        dismissMemory(o->ptr, intsetBlobLen((intset*)o->ptr));\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        dismissMemory(o->ptr, lpBytes((unsigned char *)o->ptr));\n    } else {\n        serverPanic(\"Unknown set encoding type\");\n    }\n}\n\n/* See dismissObject() */\nvoid dismissZsetObject(robj *o, size_t size_hint) {\n    if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = o->ptr;\n        zskiplist *zsl = zs->zsl;\n        serverAssert(zsl->length != 0);\n        /* We iterate all nodes only when average member size is bigger than a\n         * page size, and there's a high chance we'll actually dismiss something. */\n        if (size_hint / zsl->length >= server.page_size) {\n            zskiplistNode *zn = zsl->header->level[0].forward;\n            while (zn != NULL) {\n                zskiplistNode *next = zn->level[0].forward;\n                dismissMemory(zn, 0);\n                zn = next;\n            }\n        }\n\n        /* Dismiss hash table memory. */\n        dismissDictBucketsMemory(zs->dict);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        dismissMemory(o->ptr, lpBytes((unsigned char*)o->ptr));\n    } else {\n        serverPanic(\"Unknown zset encoding type\");\n    }\n}\n\n/* See dismissObject() */\nvoid dismissHashObject(robj *o, size_t size_hint) {\n    if (o->encoding == OBJ_ENCODING_HT) {\n        dict *d = o->ptr;\n        serverAssert(dictSize(d) != 0);\n        /* We iterate all fields only when average field/value size is bigger than\n         * a page size, and there's a high chance we'll actually dismiss something. 
*/\n        if (size_hint / dictSize(d) >= server.page_size) {\n            dictEntry *de;\n            dictIterator di;\n            dictInitIterator(&di, d);\n            while ((de = dictNext(&di)) != NULL) {\n                entryDismissMemory((Entry *) dictGetKey(de));\n            }\n            dictResetIterator(&di);\n        }\n\n        /* Dismiss hash table memory. */\n        dismissDictBucketsMemory(d);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        dismissMemory(o->ptr, lpBytes((unsigned char*)o->ptr));\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = o->ptr;\n        dismissMemory(lpt->lp, lpBytes((unsigned char*)lpt->lp));\n    } else {\n        serverPanic(\"Unknown hash encoding type\");\n    }\n}\n\n/* See dismissObject() */\nvoid dismissStreamObject(robj *o, size_t size_hint) {\n    stream *s = o->ptr;\n    rax *rax = s->rax;\n    if (raxSize(rax) == 0) return;\n\n    /* Iterate only over stream entries. size_hint may include serialized\n     * consumer group info, but usually stream entries take up most of\n     * the space. */\n    if (size_hint / raxSize(rax) >= server.page_size) {\n        raxIterator ri;\n        raxStart(&ri,rax);\n        raxSeek(&ri,\"^\",NULL,0);\n        while (raxNext(&ri)) {\n            dismissMemory(ri.data, lpBytes(ri.data));\n        }\n        raxStop(&ri);\n    }\n}\n\nvoid dismissGCRAObject(robj *o, size_t size_hint) {\n    /* GCRA is a single allocation of a long long thus way smaller than a\n     * page-size. The dismiss mechanism is not needed for it - hence NOOP. */\n    (void)o;\n    (void)size_hint;\n}\n\n/* When creating a snapshot in a fork child process, the main process and child\n * process share the same physical memory pages, and if / when the parent\n * modifies any keys due to write traffic, it'll cause CoW which consumes\n * physical memory. 
In the child process, after serializing the key and value,\n * the data is definitely not accessed again, so to avoid unnecessary CoW, we\n * try to release their memory back to the OS. See dismissMemory().\n *\n * Because of the cost of iterating all node/field/member/entry of complex data\n * types, we iterate and dismiss them only when we estimate that the average\n * size of an individual allocation is more than the OS page size.\n * 'size_hint' is the size of the serialized value. This method is not accurate,\n * but it can reduce unnecessary iteration for complex data types that are\n * probably not going to release any memory. */\nvoid dismissObject(robj *o, size_t size_hint) {\n    /* madvise(MADV_DONTNEED) may not work if Transparent Huge Pages is enabled. */\n    if (server.thp_enabled) return;\n\n    /* Currently we use zmadvise_dontneed only when we use jemalloc with Linux,\n     * so we avoid these pointless loops when they're not going to do anything. */\n#if defined(USE_JEMALLOC) && defined(__linux__)\n    if (o->refcount != 1) return;\n    switch(o->type) {\n        case OBJ_STRING: dismissStringObject(o); break;\n        case OBJ_LIST: dismissListObject(o, size_hint); break;\n        case OBJ_SET: dismissSetObject(o, size_hint); break;\n        case OBJ_ZSET: dismissZsetObject(o, size_hint); break;\n        case OBJ_HASH: dismissHashObject(o, size_hint); break;\n        case OBJ_STREAM: dismissStreamObject(o, size_hint); break;\n        case OBJ_GCRA: dismissGCRAObject(o, size_hint); break;\n        default: break;\n    }\n#else\n    UNUSED(o); UNUSED(size_hint);\n#endif\n}\n\nint checkType(client *c, robj *o, int type) {\n    /* A NULL is considered an empty key */\n    if (o && o->type != type) {\n        addReplyErrorObject(c,shared.wrongtypeerr);\n        return 1;\n    }\n    return 0;\n}\n\nint isSdsRepresentableAsLongLong(sds s, long long *llval) {\n    return string2ll(s,sdslen(s),llval) ? 
C_OK : C_ERR;\n}\n\nint isObjectRepresentableAsLongLong(robj *o, long long *llval) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n    if (o->encoding == OBJ_ENCODING_INT) {\n        if (llval) *llval = (long) o->ptr;\n        return C_OK;\n    } else {\n        return isSdsRepresentableAsLongLong(o->ptr,llval);\n    }\n}\n\n/* Optimize the SDS string inside the string object to require little space,\n * in case there is more than 10% of free space at the end of the SDS. */\nvoid trimStringObjectIfNeeded(robj *o, int trim_small_values) {\n    if (o->encoding != OBJ_ENCODING_RAW) return;\n    /* A string may have free space in the following cases:\n     * 1. When an arg len is greater than PROTO_MBULK_BIG_ARG the query buffer may be used directly as the SDS string.\n     * 2. When utilizing the argument caching mechanism in Lua.\n     * 3. When calling from RM_TrimStringAllocation (trim_small_values is true). */\n    size_t len = sdslen(o->ptr);\n    if (len >= PROTO_MBULK_BIG_ARG ||\n        trim_small_values ||\n        (server.executing_client && server.executing_client->flags & CLIENT_SCRIPT && len < LUA_CMD_OBJCACHE_MAX_LEN)) {\n        if (sdsavail(o->ptr) > len/10) {\n            o->ptr = sdsRemoveFreeSpace(o->ptr, 0);\n        }\n    }\n}\n\n/* Try to encode a string object in order to save space */\nrobj *tryObjectEncodingEx(robj *o, int try_trim) {\n    long value;\n    sds s = o->ptr;\n    size_t len;\n\n    /* Make sure this is a string object, the only type we encode\n     * in this function. Other types use encoded memory efficient\n     * representations but are handled by the commands implementing\n     * the type. */\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n\n    /* We try some specialized encoding only for objects that are\n     * RAW or EMBSTR encoded, in other words objects that are still\n     * represented by an actual array of chars. 
*/\n    if (!sdsEncodedObject(o)) return o;\n\n    /* It's not safe to encode shared objects: shared objects can be shared\n     * everywhere in the \"object space\" of Redis and may end up in places where\n     * they are not handled. We handle them only as values in the keyspace. */\n    if (o->refcount > 1) return o;\n\n    /* Check if we can represent this string as a long integer.\n     * Note that we are sure that a string larger than 20 chars is not\n     * representable as a 32 nor 64 bit integer. */\n    len = sdslen(s);\n    if (len <= 20 && string2l(s,len,&value)) {\n        /* This object is encodable as a long. */\n        if (o->encoding == OBJ_ENCODING_RAW) {\n            sdsfree(o->ptr);\n            o->encoding = OBJ_ENCODING_INT;\n            o->ptr = (void*) value;\n            return o;\n        } else if (o->encoding == OBJ_ENCODING_EMBSTR) {\n            decrRefCount(o);\n            return createStringObjectFromLongLongForValue(value);\n        }\n    }\n\n    /* If the string is small and is still RAW encoded,\n     * try the EMBSTR encoding which is more efficient.\n     * In this representation the object and the SDS string are allocated\n     * in the same chunk of memory to save space and cache misses. */\n    if (len <= OBJ_ENCODING_EMBSTR_SIZE_LIMIT) {\n        robj *emb;\n\n        if (o->encoding == OBJ_ENCODING_EMBSTR) return o;\n        emb = createEmbeddedStringObject(s,sdslen(s));\n        decrRefCount(o);\n        return emb;\n    }\n\n    /* We can't encode the object...\n     * Do a last try, and at least optimize the SDS string inside */\n    if (try_trim)\n        trimStringObjectIfNeeded(o, 0);\n\n    /* Return the original object. 
*/\n    return o;\n}\n\nrobj *tryObjectEncoding(robj *o) {\n    return tryObjectEncodingEx(o, 1);\n}\n\nsize_t getObjectLength(robj *o) {\n    switch(o->type) {\n        case OBJ_STRING: return stringObjectLen(o);\n        case OBJ_LIST: return listTypeLength(o);\n        case OBJ_SET: return setTypeSize(o);\n        case OBJ_ZSET: return zsetLength(o);\n        case OBJ_HASH: return hashTypeLength(o, 0);\n        case OBJ_STREAM: return streamLength(o);\n        case OBJ_GCRA: return gcraObjectLength(o);\n        default: return 0;\n    }\n}\n\n/* Get a decoded version of an encoded object (returned as a new object).\n * If the object is already raw-encoded just increment the ref count. */\nrobj *getDecodedObject(robj *o) {\n    robj *dec;\n\n    if (sdsEncodedObject(o)) {\n        incrRefCount(o);\n        return o;\n    }\n    if (o->type == OBJ_STRING && o->encoding == OBJ_ENCODING_INT) {\n        char buf[32];\n\n        ll2string(buf,32,(long)o->ptr);\n        dec = createStringObject(buf,strlen(buf));\n        return dec;\n    } else {\n        serverPanic(\"Unknown encoding type\");\n    }\n}\n\n/* Compare two string objects via strcmp() or strcoll() depending on flags.\n * Note that the objects may be integer-encoded. In such a case we\n * use ll2string() to get a string representation of the numbers on the stack\n * and compare the strings, it's much faster than calling getDecodedObject().\n *\n * Important note: when REDIS_COMPARE_BINARY is used a binary-safe comparison\n * is used. 
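\n *\n * For example, under REDIS_COMPARE_BINARY \"abc\" sorts before \"abd\" by\n * memcmp() order, while under REDIS_COMPARE_COLL the result follows strcoll()\n * and thus the current locale. 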
*/\n\n#define REDIS_COMPARE_BINARY (1<<0)\n#define REDIS_COMPARE_COLL (1<<1)\n\nint compareStringObjectsWithFlags(const robj *a, const robj *b, int flags) {\n    serverAssertWithInfo(NULL,a,a->type == OBJ_STRING && b->type == OBJ_STRING);\n    char bufa[128], bufb[128], *astr, *bstr;\n    size_t alen, blen, minlen;\n\n    if (a == b) return 0;\n    if (sdsEncodedObject(a)) {\n        astr = a->ptr;\n        alen = sdslen(astr);\n    } else {\n        alen = ll2string(bufa,sizeof(bufa),(long) a->ptr);\n        astr = bufa;\n    }\n    if (sdsEncodedObject(b)) {\n        bstr = b->ptr;\n        blen = sdslen(bstr);\n    } else {\n        blen = ll2string(bufb,sizeof(bufb),(long) b->ptr);\n        bstr = bufb;\n    }\n    if (flags & REDIS_COMPARE_COLL) {\n        return strcoll(astr,bstr);\n    } else {\n        int cmp;\n\n        minlen = (alen < blen) ? alen : blen;\n        cmp = memcmp(astr,bstr,minlen);\n        if (cmp == 0) return alen-blen;\n        return cmp;\n    }\n}\n\n/* Wrapper for compareStringObjectsWithFlags() using binary comparison. */\nint compareStringObjects(const robj *a, const robj *b) {\n    return compareStringObjectsWithFlags(a,b,REDIS_COMPARE_BINARY);\n}\n\n/* Wrapper for compareStringObjectsWithFlags() using collation. */\nint collateStringObjects(const robj *a, const robj *b) {\n    return compareStringObjectsWithFlags(a,b,REDIS_COMPARE_COLL);\n}\n\n/* Equal string objects return 1 if the two objects are the same from the\n * point of view of a string comparison, otherwise 0 is returned. Note that\n * this function is faster than checking for (compareStringObject(a,b) == 0)\n * because it can perform some more optimization. */\nint equalStringObjects(robj *a, robj *b) {\n    if (a->encoding == OBJ_ENCODING_INT &&\n        b->encoding == OBJ_ENCODING_INT){\n        /* If both strings are integer encoded just check if the stored\n         * long is the same. 
*/\n        return a->ptr == b->ptr;\n    } else {\n        if (sdsEncodedObject(a) && sdsEncodedObject(b)\n            && sdslen(a->ptr) != sdslen(b->ptr))\n        {\n            return 0;\n        }\n        return compareStringObjects(a,b) == 0;\n    }\n}\n\nsize_t stringObjectLen(robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n    if (sdsEncodedObject(o)) {\n        return sdslen(o->ptr);\n    } else {\n        return sdigits10((long)o->ptr);\n    }\n}\n\nsize_t stringObjectAllocSize(const robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n    if(o->encoding == OBJ_ENCODING_INT) {\n        /* Value already counted (reuse the \"ptr\" in header to store int) */\n        return 0;\n    } else if(o->encoding == OBJ_ENCODING_RAW) {\n        return sdsAllocSize(o->ptr);\n    } else if(o->encoding == OBJ_ENCODING_EMBSTR) {\n        /* Value already counted (Value embedded in the object as well) */\n        return 0;\n    } else {\n        serverPanic(\"Unknown string encoding\");\n    }\n}\n\nint getDoubleFromObject(const robj *o, double *target) {\n    double value;\n\n    if (o == NULL) {\n        value = 0;\n    } else {\n        serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n        if (sdsEncodedObject(o)) {\n            if (!string2d(o->ptr, sdslen(o->ptr), &value))\n                return C_ERR;\n        } else if (o->encoding == OBJ_ENCODING_INT) {\n            value = (long)o->ptr;\n        } else {\n            serverPanic(\"Unknown string encoding\");\n        }\n    }\n    *target = value;\n    return C_OK;\n}\n\nint getDoubleFromObjectOrReply(client *c, robj *o, double *target, const char *msg) {\n    double value;\n    if (getDoubleFromObject(o, &value) != C_OK) {\n        if (msg != NULL) {\n            addReplyError(c,(char*)msg);\n        } else {\n            addReplyError(c,\"value is not a valid float\");\n        }\n        return C_ERR;\n    }\n    *target = value;\n    return C_OK;\n}\n\nint 
getLongDoubleFromObject(robj *o, long double *target) {\n    long double value;\n\n    if (o == NULL) {\n        value = 0;\n    } else {\n        serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n        if (sdsEncodedObject(o)) {\n            if (!string2ld(o->ptr, sdslen(o->ptr), &value))\n                return C_ERR;\n        } else if (o->encoding == OBJ_ENCODING_INT) {\n            value = (long)o->ptr;\n        } else {\n            serverPanic(\"Unknown string encoding\");\n        }\n    }\n    *target = value;\n    return C_OK;\n}\n\nint getLongDoubleFromObjectOrReply(client *c, robj *o, long double *target, const char *msg) {\n    long double value;\n    if (getLongDoubleFromObject(o, &value) != C_OK) {\n        if (msg != NULL) {\n            addReplyError(c,(char*)msg);\n        } else {\n            addReplyError(c,\"value is not a valid float\");\n        }\n        return C_ERR;\n    }\n    *target = value;\n    return C_OK;\n}\n\nint getLongLongFromObject(robj *o, long long *target) {\n    long long value;\n\n    if (o == NULL) {\n        value = 0;\n    } else {\n        serverAssertWithInfo(NULL,o,o->type == OBJ_STRING);\n        if (sdsEncodedObject(o)) {\n            if (string2ll(o->ptr,sdslen(o->ptr),&value) == 0) return C_ERR;\n        } else if (o->encoding == OBJ_ENCODING_INT) {\n            value = (long)o->ptr;\n        } else {\n            serverPanic(\"Unknown string encoding\");\n        }\n    }\n    if (target) *target = value;\n    return C_OK;\n}\n\nint getLongLongFromGCRAObject(robj *o, long long *target) {\n    long long res;\n    serverAssertWithInfo(NULL, o, o->type == OBJ_GCRA);\n    serverAssert(o->encoding == OBJ_ENCODING_INT);\n#if UINTPTR_MAX == 0xffffffff\n    res = *((long long*)o->ptr);\n#else\n    res = (long long)o->ptr;\n#endif\n    if (unlikely(res < 0)) {\n        serverPanic(\"Invalid negative GCRA value\");\n    }\n    *target = res;\n    return C_OK;\n}\n\nint getLongLongFromObjectOrReply(client *c, robj 
*o, long long *target, const char *msg) {\n    long long value;\n    if (getLongLongFromObject(o, &value) != C_OK) {\n        if (msg != NULL) {\n            addReplyError(c,(char*)msg);\n        } else {\n            addReplyError(c,\"value is not an integer or out of range\");\n        }\n        return C_ERR;\n    }\n    *target = value;\n    return C_OK;\n}\n\nint getLongFromObjectOrReply(client *c, robj *o, long *target, const char *msg) {\n    long long value;\n\n    if (getLongLongFromObjectOrReply(c, o, &value, msg) != C_OK) return C_ERR;\n    if (value < LONG_MIN || value > LONG_MAX) {\n        if (msg != NULL) {\n            addReplyError(c,(char*)msg);\n        } else {\n            addReplyError(c,\"value is out of range\");\n        }\n        return C_ERR;\n    }\n    *target = value;\n    return C_OK;\n}\n\nint getRangeLongFromObjectOrReply(client *c, robj *o, long min, long max, long *target, const char *msg) {\n    if (getLongFromObjectOrReply(c, o, target, msg) != C_OK) return C_ERR;\n    if (*target < min || *target > max) {\n        if (msg != NULL) {\n            addReplyError(c,(char*)msg);\n        } else {\n            addReplyErrorFormat(c,\"value is out of range, value must be between %ld and %ld\", min, max);\n        }\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nint getPositiveLongFromObjectOrReply(client *c, robj *o, long *target, const char *msg) {\n    if (msg) {\n        return getRangeLongFromObjectOrReply(c, o, 0, LONG_MAX, target, msg);\n    } else {\n        return getRangeLongFromObjectOrReply(c, o, 0, LONG_MAX, target, \"value is out of range, must be positive\");\n    }\n}\n\nint getIntFromObjectOrReply(client *c, robj *o, int *target, const char *msg) {\n    long value;\n\n    if (getRangeLongFromObjectOrReply(c, o, INT_MIN, INT_MAX, &value, msg) != C_OK)\n        return C_ERR;\n\n    *target = value;\n    return C_OK;\n}\n\nchar *strEncoding(int encoding) {\n    switch(encoding) {\n    case OBJ_ENCODING_RAW: return 
\"raw\";\n    case OBJ_ENCODING_INT: return \"int\";\n    case OBJ_ENCODING_HT: return \"hashtable\";\n    case OBJ_ENCODING_QUICKLIST: return \"quicklist\";\n    case OBJ_ENCODING_LISTPACK: return \"listpack\";\n    case OBJ_ENCODING_LISTPACK_EX: return \"listpackex\";\n    case OBJ_ENCODING_INTSET: return \"intset\";\n    case OBJ_ENCODING_SKIPLIST: return \"skiplist\";\n    case OBJ_ENCODING_EMBSTR: return \"embstr\";\n    case OBJ_ENCODING_STREAM: return \"stream\";\n    default: return \"unknown\";\n    }\n}\n\n/* =========================== Memory introspection ========================= */\n\n/* Returns the size in bytes consumed by the object header, key and value in RAM.\n * Note that the returned value is just an approximation, especially in the\n * case of aggregated data types where only \"sample_size\" elements\n * are checked and averaged to estimate the total size. */\n#define OBJ_COMPUTE_SIZE_DEF_SAMPLES 5 /* Default sample size. */\nsize_t kvobjComputeSize(robj *key, kvobj *o, size_t sample_size, int dbid) {\n    if (o->type == OBJ_STRING ||\n        o->type == OBJ_LIST ||\n        o->type == OBJ_SET ||\n        o->type == OBJ_ZSET ||\n        o->type == OBJ_HASH ||\n        o->type == OBJ_STREAM ||\n        o->type == OBJ_GCRA)\n    {\n        return kvobjAllocSize(o);\n    } else if (o->type == OBJ_MODULE) {\n        return zmalloc_size(o) + moduleGetMemUsage(key, o, sample_size, dbid);\n    }\n    serverPanic(\"Unknown object type\");\n}\n\nsize_t kvobjAllocSize(kvobj *o) {\n    /* All kv-objects have at least a kvobj header and an embedded key */\n    size_t asize = zmalloc_size(kvobjGetAllocPtr(o));\n\n    if (o->type == OBJ_STRING) {\n        asize += stringObjectAllocSize(o);\n    } else if (o->type == OBJ_LIST) {\n        asize += listTypeAllocSize(o);\n    } else if (o->type == OBJ_SET) {\n        asize += setTypeAllocSize(o);\n    } else if (o->type == OBJ_ZSET) {\n        asize += zsetAllocSize(o);\n    } else if (o->type == OBJ_HASH) {\n       
 asize += hashTypeAllocSize(o);\n    } else if (o->type == OBJ_STREAM) {\n        stream *s = o->ptr;\n        asize += s->alloc_size;\n    } else if (o->type == OBJ_GCRA) {\n        asize += gcraTypeAllocSize(o);\n    } else if (o->type == OBJ_MODULE) {\n        /* TODO: Provide moduleGetAllocSize() module API for O(1) allocation size retrieval */\n    }\n    return asize;\n}\n\nsize_t gcraTypeAllocSize(robj *o) {\n    (void)o;\n#if UINTPTR_MAX == 0xffffffff\n    return sizeof(long long);\n#else\n    /* As with int-encoded strings, there is no allocation: the value is\n     * cast to void* and stored in o->ptr */\n    return 0;\n#endif\n}\n\n/* The gcra object is a single long long value */\nsize_t gcraObjectLength(robj *o) {\n    (void)o;\n    return 1;\n}\n\n/* Release data obtained with getMemoryOverheadData(). */\nvoid freeMemoryOverheadData(struct redisMemOverhead *mh) {\n    zfree(mh->db);\n    zfree(mh);\n}\n\n/* Return a struct redisMemOverhead filled with memory overhead\n * information used for the MEMORY OVERHEAD and INFO command. The returned\n * structure pointer should be freed by calling freeMemoryOverheadData(). */\nstruct redisMemOverhead *getMemoryOverheadData(void) {\n    int j;\n    size_t mem_total = 0;\n    size_t mem = 0;\n    size_t zmalloc_used = zmalloc_used_memory();\n    struct redisMemOverhead *mh = zcalloc(sizeof(*mh));\n\n    mh->total_allocated = zmalloc_used;\n    mh->startup_allocated = server.initial_memory_usage;\n    mh->peak_allocated = server.stat_peak_memory;\n    mh->total_frag =\n        (float)server.cron_malloc_stats.process_rss / server.cron_malloc_stats.zmalloc_used;\n    mh->total_frag_bytes =\n        server.cron_malloc_stats.process_rss - server.cron_malloc_stats.zmalloc_used;\n    /* Starting with redis 7.4, the lua memory is part of the total memory usage\n     * of redis, and that includes RSS and all other memory metrics. We only want\n     * to deduct it from active defrag. 
*/\n    size_t frag_smallbins_bytes =\n        server.cron_malloc_stats.allocator_frag_smallbins_bytes - server.cron_malloc_stats.lua_allocator_frag_smallbins_bytes;\n    size_t allocated =\n        server.cron_malloc_stats.allocator_allocated - server.cron_malloc_stats.lua_allocator_allocated;\n    mh->allocator_frag = (float)frag_smallbins_bytes / allocated + 1;\n    mh->allocator_frag_bytes = frag_smallbins_bytes;\n    mh->allocator_rss =\n        (float)server.cron_malloc_stats.allocator_resident / server.cron_malloc_stats.allocator_active;\n    mh->allocator_rss_bytes =\n        server.cron_malloc_stats.allocator_resident - server.cron_malloc_stats.allocator_active;\n    mh->rss_extra =\n        (float)server.cron_malloc_stats.process_rss / server.cron_malloc_stats.allocator_resident;\n    mh->rss_extra_bytes =\n        server.cron_malloc_stats.process_rss - server.cron_malloc_stats.allocator_resident;\n\n    mem_total += server.initial_memory_usage;\n\n    /* Replication backlog and replicas share one global replication buffer,\n     * only if replication buffer memory is more than the repl backlog setting,\n     * we consider the excess as replicas' memory. Otherwise, replication buffer\n     * memory is the consumption of repl backlog. */\n    if (listLength(server.slaves) &&\n        (long long)server.repl_buffer_mem > server.repl_backlog_size)\n    {\n        mh->clients_slaves = server.repl_buffer_mem - server.repl_backlog_size;\n        mh->repl_backlog = server.repl_backlog_size;\n    } else {\n        mh->clients_slaves = 0;\n        mh->repl_backlog = server.repl_buffer_mem;\n    }\n    if (server.repl_backlog) {\n        /* The approximate memory of rax tree for indexed blocks. 
*/\n        mh->repl_backlog +=\n            server.repl_backlog->blocks_index->numnodes * sizeof(raxNode) +\n            raxSize(server.repl_backlog->blocks_index) * sizeof(void*);\n    }\n\n    mh->replica_fullsync_buffer = server.repl_full_sync_buffer.mem_used;\n    mem_total += mh->replica_fullsync_buffer;\n    mem_total += mh->repl_backlog;\n    mem_total += mh->clients_slaves;\n\n    /* Computing the memory used by the clients would be O(N) if done\n     * here online. We use our values computed incrementally by\n     * updateClientMemoryUsage(). */\n    mh->clients_normal = server.stat_clients_type_memory[CLIENT_TYPE_MASTER]+\n                         server.stat_clients_type_memory[CLIENT_TYPE_PUBSUB]+\n                         server.stat_clients_type_memory[CLIENT_TYPE_NORMAL];\n    mem_total += mh->clients_normal;\n\n    mh->cluster_links = server.stat_cluster_links_memory;\n    mem_total += mh->cluster_links;\n\n    mem = 0;\n    if (server.aof_state != AOF_OFF) {\n        mem += sdsZmallocSize(server.aof_buf);\n    }\n    mh->aof_buffer = mem;\n    mem_total+=mem;\n\n    mem = evalScriptsMemoryEngine();\n    mh->eval_caches = mem;\n    mem_total+=mem;\n    mh->functions_caches = functionsMemoryEngine();\n    mem_total+=mh->functions_caches;\n\n    mh->script_vm = evalScriptsMemoryVM();\n    mh->script_vm += functionsMemoryVM();\n    mem_total+=mh->script_vm;\n\n    /* Cluster atomic slot migration buffers. 
*/\n    mh->asm_import_input_buffer = asmGetImportInputBufferSize();\n    mh->asm_migrate_output_buffer = asmGetMigrateOutputBufferSize();\n    mem_total += mh->asm_import_input_buffer;\n    mem_total += mh->asm_migrate_output_buffer;\n\n    for (j = 0; j < server.dbnum; j++) {\n        redisDb *db = server.db+j;\n        if (!kvstoreNumAllocatedDicts(db->keys)) continue;\n\n        unsigned long long keyscount = kvstoreSize(db->keys);\n\n        mh->total_keys += keyscount;\n        mh->db = zrealloc(mh->db,sizeof(mh->db[0])*(mh->num_dbs+1));\n        mh->db[mh->num_dbs].dbid = j;\n\n        mem = kvstoreMemUsage(db->keys) +\n              keyscount * sizeof(robj);\n        mh->db[mh->num_dbs].overhead_ht_main = mem;\n        mem_total+=mem;\n\n        mem = kvstoreMemUsage(db->expires);\n        mh->db[mh->num_dbs].overhead_ht_expires = mem;\n        mem_total+=mem;\n\n        mh->num_dbs++;\n\n        mh->overhead_db_hashtable_lut += kvstoreOverheadHashtableLut(db->keys);\n        mh->overhead_db_hashtable_lut += kvstoreOverheadHashtableLut(db->expires);\n        mh->overhead_db_hashtable_rehashing += kvstoreOverheadHashtableRehashing(db->keys);\n        mh->overhead_db_hashtable_rehashing += kvstoreOverheadHashtableRehashing(db->expires);\n        mh->db_dict_rehashing_count += kvstoreDictRehashingCount(db->keys);\n        mh->db_dict_rehashing_count += kvstoreDictRehashingCount(db->expires);\n    }\n\n    /* Hotkeys memory overhead */\n    mem_total += hotkeysGetMemoryUsage(server.hotkeys);\n\n    mh->overhead_total = mem_total;\n    mh->dataset = (zmalloc_used > mem_total) ? (zmalloc_used - mem_total) : 0;\n    mh->peak_perc = (float)zmalloc_used*100/mh->peak_allocated;\n\n    /* Metrics computed after subtracting the startup memory from\n     * the total memory. 
*/\n    size_t net_usage = 1;\n    if (zmalloc_used > mh->startup_allocated)\n        net_usage = zmalloc_used - mh->startup_allocated;\n    mh->dataset_perc = (float)mh->dataset*100/net_usage;\n    mh->bytes_per_key = mh->total_keys ? (mh->dataset / mh->total_keys) : 0;\n\n    return mh;\n}\n\n/* Helper for \"MEMORY allocator-stats\", used as a callback for the jemalloc\n * stats output. */\nvoid inputCatSds(void *result, const char *str) {\n    /* result is actually a (sds *), so re-cast it here */\n    sds *info = (sds *)result;\n    *info = sdscat(*info, str);\n}\n\n/* This implements MEMORY DOCTOR. A human-readable analysis of the Redis\n * memory condition. */\nsds getMemoryDoctorReport(void) {\n    int empty = 0;          /* Instance is empty or almost empty. */\n    int big_peak = 0;       /* Memory peak is much larger than used mem. */\n    int high_frag = 0;      /* High fragmentation. */\n    int high_alloc_frag = 0;/* High allocator fragmentation. */\n    int high_proc_rss = 0;  /* High process rss overhead. */\n    int high_alloc_rss = 0; /* High rss overhead. */\n    int big_slave_buf = 0;  /* Slave buffers are too big. */\n    int big_client_buf = 0; /* Client buffers are too big. */\n    int many_scripts = 0;   /* Script cache has too many scripts. */\n    int num_reports = 0;\n    struct redisMemOverhead *mh = getMemoryOverheadData();\n\n    if (mh->total_allocated < (1024*1024*5)) {\n        empty = 1;\n        num_reports++;\n    } else {\n        /* Peak is > 150% of current used memory? */\n        if (((float)mh->peak_allocated / mh->total_allocated) > 1.5) {\n            big_peak = 1;\n            num_reports++;\n        }\n\n        /* Fragmentation is higher than 1.4 and 10MB? */\n        if (mh->total_frag > 1.4 && mh->total_frag_bytes > 10<<20) {\n            high_frag = 1;\n            num_reports++;\n        }\n\n        /* External fragmentation is higher than 1.1 and 10MB? 
*/\n        if (mh->allocator_frag > 1.1 && mh->allocator_frag_bytes > 10<<20) {\n            high_alloc_frag = 1;\n            num_reports++;\n        }\n\n        /* Allocator rss is higher than 1.1 and 10MB? */\n        if (mh->allocator_rss > 1.1 && mh->allocator_rss_bytes > 10<<20) {\n            high_alloc_rss = 1;\n            num_reports++;\n        }\n\n        /* Non-Allocator rss is higher than 1.1 and 10MB? */\n        if (mh->rss_extra > 1.1 && mh->rss_extra_bytes > 10<<20) {\n            high_proc_rss = 1;\n            num_reports++;\n        }\n\n        /* Clients using more than 200k each on average? */\n        long numslaves = listLength(server.slaves);\n        long numclients = listLength(server.clients)-numslaves;\n        if (numclients > 0 && mh->clients_normal / numclients > (1024*200)) {\n            big_client_buf = 1;\n            num_reports++;\n        }\n\n        /* Slaves using more than 10 MB each? */\n        if (numslaves > 0 && mh->clients_slaves > (1024*1024*10)) {\n            big_slave_buf = 1;\n            num_reports++;\n        }\n\n        /* Too many scripts are cached? */\n        if (dictSize(evalScriptsDict()) > 1000) {\n            many_scripts = 1;\n            num_reports++;\n        }\n    }\n\n    sds s;\n    if (num_reports == 0) {\n        s = sdsnew(\n        \"Hi Sam, I can't find any memory issue in your instance. \"\n        \"I can only account for what occurs on this base.\\n\");\n    } else if (empty == 1) {\n        s = sdsnew(\n        \"Hi Sam, this instance is empty or is using very little memory, \"\n        \"my issues detector can't be used in these conditions. \"\n        \"Please, leave for your mission on Earth and fill it with some data. 
\"\n        \"The new Sam and I will be back to our programming as soon as I \"\n        \"have finished rebooting.\\n\");\n    } else {\n        s = sdsnew(\"Sam, I detected a few issues in this Redis instance memory implants:\\n\\n\");\n        if (big_peak) {\n            s = sdscat(s,\" * Peak memory: In the past this instance used more than 150% of the memory it is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shut down and restart the instance.\\n\\n\");\n        }\n        if (high_frag) {\n            s = sdscatprintf(s,\" * High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is \\\"%s\\\".\\n\\n\", ZMALLOC_LIB);\n        }\n        if (high_alloc_frag) {\n            s = sdscatprintf(s,\" * High allocator fragmentation: This instance has an allocator external fragmentation greater than 1.1. 
This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. You can try enabling the 'activedefrag' config option.\\n\\n\");\n        }\n        if (high_alloc_rss) {\n            s = sdscatprintf(s,\" * High allocator RSS overhead: This instance has an RSS memory overhead greater than 1.1 (this means that the Resident Set Size of the allocator is much larger than the sum of what the allocator actually holds). This problem is usually due to a large peak memory (check if there is a peak memory entry above in the report); you can try the MEMORY PURGE command to reclaim it.\\n\\n\");\n        }\n        if (high_proc_rss) {\n            s = sdscatprintf(s,\" * High process RSS overhead: This instance has a non-allocator RSS memory overhead greater than 1.1 (this means that the Resident Set Size of the Redis process is much larger than the RSS the allocator holds). This problem may be due to Lua scripts or Modules.\\n\\n\");\n        }\n        if (big_slave_buf) {\n            s = sdscat(s,\" * Big replica buffers: The replica output buffers in this instance are greater than 10MB for each replica (on average). This likely means that there is some replica instance that is struggling to receive data, either because it is too slow or because of networking issues. As a result, data piles up on the master output buffers. Please try to identify which replica is not receiving data correctly and why. You can use the INFO output in order to check the replicas' delays and the CLIENT LIST command to check the output buffers of each replica.\\n\\n\");\n        }\n        if (big_client_buf) {\n            s = sdscat(s,\" * Big client buffers: The client output buffers in this instance are greater than 200K per client (on average). 
This may result from different causes, like Pub/Sub clients subscribed to channels but not receiving data fast enough, so that data piles up on the Redis instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.\\n\\n\");\n        }\n        if (many_scripts) {\n            s = sdscat(s,\" * Many scripts: There seem to be many cached scripts in this instance (more than 1000). This may be because scripts are generated and `EVAL`ed, instead of being parameterized (with KEYS and ARGV), `SCRIPT LOAD`ed and `EVALSHA`ed. Unless `SCRIPT FLUSH` is called periodically, the scripts' caches may end up consuming most of your memory.\\n\\n\");\n        }\n        s = sdscat(s,\"I'm here to keep you safe, Sam. I want to help you.\\n\");\n    }\n    freeMemoryOverheadData(mh);\n    return s;\n}\n\n/* Set the object LRU/LFU depending on server.maxmemory_policy.\n * The lfu_freq arg is only relevant if policy is MAXMEMORY_FLAG_LFU.\n * The lru_idle and lru_clock args are only relevant if policy\n * is MAXMEMORY_FLAG_LRU.\n * Either or both of them may be <0, in that case, nothing is set. */\nint objectSetLRUOrLFU(robj *val, long long lfu_freq, long long lru_idle,\n                       long long lru_clock, int lru_multiplier) {\n    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {\n        if (lfu_freq >= 0) {\n            serverAssert(lfu_freq <= 255);\n            val->lru = (LFUGetTimeInMinutes()<<8) | lfu_freq;\n            return 1;\n        }\n    } else if (lru_idle >= 0) {\n        /* Provided LRU idle time is in seconds. Scale\n         * according to the LRU clock resolution this Redis\n         * instance was compiled with (normally 1000 ms, so the\n         * below statement will expand to lru_idle*1000/1000). 
*/\n        lru_idle = lru_idle*lru_multiplier/LRU_CLOCK_RESOLUTION;\n        long lru_abs = lru_clock - lru_idle; /* Absolute access time. */\n        /* If the LRU field underflows (since lru_clock is a wrapping clock),\n         * we need to make it positive again. This will be handled by the unwrapping\n         * code in estimateObjectIdleTime. I.e. imagine a day when lru_clock\n         * wraps around (happens once in some 6 months) and becomes a low\n         * value, like 10, an lru_idle of 1000 should be near LRU_CLOCK_MAX. */\n        if (lru_abs < 0)\n            lru_abs += LRU_CLOCK_MAX;\n        val->lru = lru_abs;\n        return 1;\n    }\n    return 0;\n}\n\n/* ======================= The OBJECT and MEMORY commands =================== */\n\n/* This is a helper function for the OBJECT command. We need to look up keys\n * without any modification of LRU or other parameters. */\nkvobj *kvobjCommandLookup(client *c, robj *key) {\n    return lookupKeyReadWithFlags(c->db,key,LOOKUP_NOTOUCH|LOOKUP_NONOTIFY);\n}\n\nkvobj *kvobjCommandLookupOrReply(client *c, robj *key, robj *reply) {\n    kvobj *kv = kvobjCommandLookup(c,key);\n    if (!kv) addReplyOrErrorObject(c, reply);\n    return kv;\n}\n\n/* The OBJECT command allows inspecting the internals of a Redis object.\n * Usage: OBJECT <refcount|encoding|idletime|freq> <key> */\nvoid objectCommand(client *c) {\n    kvobj *kv;\n\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"ENCODING <key>\",\n\"    Return the kind of internal representation used in order to store the value\",\n\"    associated with a <key>.\",\n\"FREQ <key>\",\n\"    Return the access frequency index of the <key>. 
The returned integer is\",\n\"    proportional to the logarithm of the recent access frequency of the key.\",\n\"IDLETIME <key>\",\n\"    Return the idle time of the <key>, that is the approximated number of\",\n\"    seconds elapsed since the last access to the key.\",\n\"REFCOUNT <key>\",\n\"    Return the number of references of the value associated with the specified\",\n\"    <key>.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"refcount\") && c->argc == 3) {\n        if ((kv = kvobjCommandLookupOrReply(c, c->argv[2], shared.null[c->resp]))\n                == NULL) return;\n        addReplyLongLong(c, kv->refcount);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"encoding\") && c->argc == 3) {\n        if ((kv = kvobjCommandLookupOrReply(c, c->argv[2], shared.null[c->resp]))\n                == NULL) return;\n        addReplyBulkCString(c,strEncoding(kv->encoding));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"idletime\") && c->argc == 3) {\n        if ((kv = kvobjCommandLookupOrReply(c, c->argv[2], shared.null[c->resp]))\n                == NULL) return;\n        if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {\n            addReplyError(c,\"An LFU maxmemory policy is selected, idle time not tracked. Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.\");\n            return;\n        }\n        addReplyLongLong(c, estimateObjectIdleTime(kv) / 1000);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"freq\") && c->argc == 3) {\n        if ((kv = kvobjCommandLookupOrReply(c, c->argv[2], shared.null[c->resp]))\n                == NULL) return;\n        if (!(server.maxmemory_policy & MAXMEMORY_FLAG_LFU)) {\n            addReplyError(c,\"An LFU maxmemory policy is not selected, access frequency not tracked. 
Please note that when switching between policies at runtime LRU and LFU data will take some time to adjust.\");\n            return;\n        }\n        /* LFUDecrAndReturn should be called\n         * in case the key has not been accessed for a long time,\n         * because we update the access time only\n         * when the key is read or overwritten. */\n        addReplyLongLong(c,LFUDecrAndReturn(kv));\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\n/* The memory command will eventually be a complete interface for the\n * memory introspection capabilities of Redis.\n *\n * Usage: MEMORY usage <key> */\nvoid memoryCommand(client *c) {\n    if (!strcasecmp(c->argv[1]->ptr,\"help\") && c->argc == 2) {\n        const char *help[] = {\n\"DOCTOR\",\n\"    Return memory problem reports.\",\n\"MALLOC-STATS\",\n\"    Return internal statistics report from the memory allocator.\",\n\"PURGE\",\n\"    Attempt to purge dirty pages for reclamation by the allocator.\",\n\"STATS\",\n\"    Return information about the memory usage of the server.\",\n\"USAGE <key> [SAMPLES <count>]\",\n\"    Return memory in bytes used by <key> and its value. Nested values are\",\n\"    sampled up to <count> times (default: 5, 0 means sample all).\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"usage\") && c->argc >= 3) {\n        kvobj *kv;\n        long long samples = OBJ_COMPUTE_SIZE_DEF_SAMPLES;\n        for (int j = 3; j < c->argc; j++) {\n            if (!strcasecmp(c->argv[j]->ptr,\"samples\") &&\n                j+1 < c->argc)\n            {\n                if (getLongLongFromObjectOrReply(c,c->argv[j+1],&samples,NULL)\n                     == C_ERR) return;\n                if (samples < 0) {\n                    addReplyErrorObject(c,shared.syntaxerr);\n                    return;\n                }\n                if (samples == 0) samples = LLONG_MAX;\n                j++; /* skip option argument. 
*/\n            } else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n        if ((kv = dbFind(c->db, c->argv[2]->ptr)) == NULL) {\n            addReplyNull(c);\n            return;\n        }\n        size_t usage = kvobjComputeSize(c->argv[2], kv, samples, c->db->id);\n        addReplyLongLong(c,usage);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"stats\") && c->argc == 2) {\n        struct redisMemOverhead *mh = getMemoryOverheadData();\n\n        addReplyMapLen(c,33+mh->num_dbs);\n\n        addReplyBulkCString(c,\"peak.allocated\");\n        addReplyLongLong(c,mh->peak_allocated);\n\n        addReplyBulkCString(c,\"total.allocated\");\n        addReplyLongLong(c,mh->total_allocated);\n\n        addReplyBulkCString(c,\"startup.allocated\");\n        addReplyLongLong(c,mh->startup_allocated);\n\n        addReplyBulkCString(c,\"replication.backlog\");\n        addReplyLongLong(c,mh->repl_backlog);\n\n        addReplyBulkCString(c,\"replica.fullsync.buffer\");\n        addReplyLongLong(c,mh->replica_fullsync_buffer);\n\n        addReplyBulkCString(c,\"clients.slaves\");\n        addReplyLongLong(c,mh->clients_slaves);\n\n        addReplyBulkCString(c,\"clients.normal\");\n        addReplyLongLong(c,mh->clients_normal);\n\n        addReplyBulkCString(c,\"cluster.links\");\n        addReplyLongLong(c,mh->cluster_links);\n\n        addReplyBulkCString(c,\"aof.buffer\");\n        addReplyLongLong(c,mh->aof_buffer);\n\n        addReplyBulkCString(c,\"lua.caches\");\n        addReplyLongLong(c,mh->eval_caches);\n\n        addReplyBulkCString(c,\"functions.caches\");\n        addReplyLongLong(c,mh->functions_caches);\n\n        addReplyBulkCString(c,\"script.VMs\");\n        addReplyLongLong(c,mh->script_vm);\n\n        for (size_t j = 0; j < mh->num_dbs; j++) {\n            char dbname[32];\n            snprintf(dbname,sizeof(dbname),\"db.%zd\",mh->db[j].dbid);\n            addReplyBulkCString(c,dbname);\n 
           addReplyMapLen(c,2);\n\n            addReplyBulkCString(c,\"overhead.hashtable.main\");\n            addReplyLongLong(c,mh->db[j].overhead_ht_main);\n\n            addReplyBulkCString(c,\"overhead.hashtable.expires\");\n            addReplyLongLong(c,mh->db[j].overhead_ht_expires);\n        }\n\n        addReplyBulkCString(c,\"overhead.db.hashtable.lut\");\n        addReplyLongLong(c, mh->overhead_db_hashtable_lut);\n\n        addReplyBulkCString(c,\"overhead.db.hashtable.rehashing\");\n        addReplyLongLong(c, mh->overhead_db_hashtable_rehashing);\n\n        addReplyBulkCString(c,\"overhead.total\");\n        addReplyLongLong(c,mh->overhead_total);\n\n        addReplyBulkCString(c,\"db.dict.rehashing.count\");\n        addReplyLongLong(c, mh->db_dict_rehashing_count);\n\n        addReplyBulkCString(c,\"keys.count\");\n        addReplyLongLong(c,mh->total_keys);\n\n        addReplyBulkCString(c,\"keys.bytes-per-key\");\n        addReplyLongLong(c,mh->bytes_per_key);\n\n        addReplyBulkCString(c,\"dataset.bytes\");\n        addReplyLongLong(c,mh->dataset);\n\n        addReplyBulkCString(c,\"dataset.percentage\");\n        addReplyDouble(c,mh->dataset_perc);\n\n        addReplyBulkCString(c,\"peak.percentage\");\n        addReplyDouble(c,mh->peak_perc);\n\n        addReplyBulkCString(c,\"allocator.allocated\");\n        addReplyLongLong(c,server.cron_malloc_stats.allocator_allocated);\n\n        addReplyBulkCString(c,\"allocator.active\");\n        addReplyLongLong(c,server.cron_malloc_stats.allocator_active);\n\n        addReplyBulkCString(c,\"allocator.resident\");\n        addReplyLongLong(c,server.cron_malloc_stats.allocator_resident);\n\n        addReplyBulkCString(c,\"allocator.muzzy\");\n        addReplyLongLong(c,server.cron_malloc_stats.allocator_muzzy);\n\n        addReplyBulkCString(c,\"allocator-fragmentation.ratio\");\n        addReplyDouble(c,mh->allocator_frag);\n\n        addReplyBulkCString(c,\"allocator-fragmentation.bytes\");\n    
    addReplyLongLong(c,mh->allocator_frag_bytes);\n\n        addReplyBulkCString(c,\"allocator-rss.ratio\");\n        addReplyDouble(c,mh->allocator_rss);\n\n        addReplyBulkCString(c,\"allocator-rss.bytes\");\n        addReplyLongLong(c,mh->allocator_rss_bytes);\n\n        addReplyBulkCString(c,\"rss-overhead.ratio\");\n        addReplyDouble(c,mh->rss_extra);\n\n        addReplyBulkCString(c,\"rss-overhead.bytes\");\n        addReplyLongLong(c,mh->rss_extra_bytes);\n\n        addReplyBulkCString(c,\"fragmentation\"); /* this is the total RSS overhead, including fragmentation */\n        addReplyDouble(c,mh->total_frag); /* it is kept here for backwards compatibility */\n\n        addReplyBulkCString(c,\"fragmentation.bytes\");\n        addReplyLongLong(c,mh->total_frag_bytes);\n\n        freeMemoryOverheadData(mh);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"malloc-stats\") && c->argc == 2) {\n#if defined(USE_JEMALLOC)\n        sds info = sdsempty();\n        je_malloc_stats_print(inputCatSds, &info, NULL);\n        addReplyVerbatim(c,info,sdslen(info),\"txt\");\n        sdsfree(info);\n#else\n        addReplyBulkCString(c,\"Stats not supported for the current allocator\");\n#endif\n    } else if (!strcasecmp(c->argv[1]->ptr,\"doctor\") && c->argc == 2) {\n        sds report = getMemoryDoctorReport();\n        addReplyVerbatim(c,report,sdslen(report),\"txt\");\n        sdsfree(report);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"purge\") && c->argc == 2) {\n        if (jemalloc_purge() == 0)\n            addReply(c, shared.ok);\n        else\n            addReplyError(c, \"Error purging dirty pages\");\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n"
  },
  {
    "path": "src/object.h",
    "content": "/*\n * Redis objects overview\n *\n * robj (struct redisObject) is the fundamental in-memory container that can hold\n * values of different logical types (strings, lists, sets, hashes, sorted sets,\n * streams, modules, ...). It contains:\n *   - type: one of OBJ_STRING, OBJ_LIST, OBJ_SET, OBJ_ZSET, OBJ_HASH, OBJ_STREAM,\n *           OBJ_GCRA, OBJ_MODULE, ...\n *   - encoding: an implementation detail of how the value is represented in\n *           memory for the given type (see OBJ_ENCODING_* below). For example,\n *           strings may be RAW/EMBSTR/INT, sets may be INTSET or HT, etc.\n *   - lru: either LRU information (relative to the global LRU clock) or LFU data\n *           when LFU is enabled.\n *   - refcount: reference counting for object lifetime management.\n *   - ptr: a pointer to the underlying value payload (SDS, dict, quicklist, ...).\n *\n * Object encodings\n * -----------------\n * Some kinds of objects (strings, hashes, sets, zsets, lists) have multiple\n * possible in-memory encodings optimized for memory footprint and speed. The\n * 'encoding' field indicates which representation is currently used (see\n * OBJ_ENCODING_* defines below).\n *\n * kvobj (key-value object)\n * ------------------------\n * kvobj is a specific use of robj that additionally embeds the key (and optional\n * metadata) alongside the value. It is identified by the iskvobj flag in robj.\n * The distinction is mostly declarative for clarity and may be enforced later with\n * explicit casting in places. 
Conceptually, robj and kvobj are in a relation \n * similar to that of a parent-child class hierarchy.\n * \n * When iskvobj is set, it also contains:\n *   - metabits: bitmap of additional metadata attached to the object.\n *   - lru: LRU time (relative to global lru_clock) or LFU data (see robj above).\n *   - embedded key: the key string is stored inline after the struct.\n *   - embedded value: for small strings, the value is stored inline after the key.\n *\n * Example layout with key and embedded value \"myvalue\":\n *    +--------------+--------------+--------------------+----------------------+\n *    | serverObject | key-hdr-size | sdshdr5 \"mykey\" \\0 | sdshdr8 \"myvalue\" \\0 |\n *    | 16 bytes     | 1 byte       | 1      +   5   + 1 | 3    +      7    + 1 |\n *    +--------------+--------------+--------------------+----------------------+\n * \n * kvobj with metadata (+expiration)\n * ---------------------------------\n * Up to 8 metadata classes are supported, each storing one 8-byte field.\n * Class 0 is reserved for expiration time. 
Metadata blocks are placed before\n * the kvobj itself, in reverse class order.\n * \n * Example of a key with expiration time (metabits=0b00000001):\n *     +--------------+--------------+--------------+--------------------+\n *     | Expiry Time  | serverObject | key-hdr-size | sdshdr5 \"mykey\" \\0 |\n *     | 8 byte       | 16 bytes     | 1 byte       | 1      +   5   + 1 |\n *     +--------------+--------------+--------------+--------------------+\n *                    ^\n *                    +---- kvobjCreate() returns pointer here\n * \n * Example with metadata of class1 and class3 attached (metabits=0b00001010):\n * +--------------+--------------+--------------+--------------+--------------------+\n * | meta (class3)| meta (class1)| serverObject | key-hdr-size | sdshdr5 \"mykey\" \\0 |\n * | 8 byte       | 8 byte       | 16 bytes     | 1 byte       | 1      +   5   + 1 |\n * +--------------+--------------+--------------+--------------+--------------------+\n *                               ^\n *                               +---- kvobjCreate() returns pointer here\n * \n */\n#ifndef __OBJECT_H\n#define __OBJECT_H\n\n/* forward declarations */\nstruct client;\nstruct RedisModuleType;\n\n/* Object encodings (see header comment below for details). */\n#define OBJ_ENCODING_RAW 0     /* Raw representation */\n#define OBJ_ENCODING_INT 1     /* Encoded as integer */\n#define OBJ_ENCODING_HT 2      /* Encoded as hash table */\n#define OBJ_ENCODING_ZIPMAP 3  /* No longer used: old hash encoding. */\n#define OBJ_ENCODING_LINKEDLIST 4 /* No longer used: old list encoding. */\n#define OBJ_ENCODING_ZIPLIST 5 /* No longer used: old list/hash/zset encoding. 
*/\n#define OBJ_ENCODING_INTSET 6  /* Encoded as intset */\n#define OBJ_ENCODING_SKIPLIST 7  /* Encoded as skiplist */\n#define OBJ_ENCODING_EMBSTR 8  /* Embedded sds string encoding */\n#define OBJ_ENCODING_QUICKLIST 9 /* Encoded as linked list of listpacks */\n#define OBJ_ENCODING_STREAM 10 /* Encoded as a radix tree of listpacks */\n#define OBJ_ENCODING_LISTPACK 11 /* Encoded as a listpack */\n#define OBJ_ENCODING_LISTPACK_EX 12 /* Encoded as listpack, extended with metadata */\n\n#define LRU_BITS 24\n#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1) /* Max value of obj->lru */\n#define LRU_CLOCK_RESOLUTION 1000 /* LRU clock resolution in ms */\n\n#define OBJ_NUM_KVMETA_BITS 8\n#define OBJ_REFCOUNT_BITS 23\n#define OBJ_SHARED_REFCOUNT ((1 << OBJ_REFCOUNT_BITS) - 1) /* Global object never destroyed. */\n#define OBJ_STATIC_REFCOUNT ((1 << OBJ_REFCOUNT_BITS) - 2) /* Object allocated in the stack. */\n#define OBJ_FIRST_SPECIAL_REFCOUNT OBJ_STATIC_REFCOUNT\n\nstruct redisObject {\n    unsigned type:4;\n    unsigned encoding:4;\n    unsigned refcount : OBJ_REFCOUNT_BITS;\n    unsigned iskvobj : 1;   /* 1 if this struct serves as a kvobj base */\n\n    /* metabits and lru are relevant only when iskvobj is set: */\n    unsigned metabits :8;  /* Bitmap of metadata (+expiry) attached to this kvobj */\n    unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or\n                            * LFU data (least significant 8 bits frequency\n                            * and most significant 16 bits access time). */\n    void *ptr;\n};\n\n/* robj - General purpose redis object */\ntypedef struct redisObject robj;\n\n/* kvobj: see header comment above for definition and memory layout. 
*/\ntypedef struct redisObject kvobj;\n\nkvobj *kvobjCreate(int type, const sds key, void *ptr, uint32_t keyMetaBits);\nkvobj *kvobjSet(sds key, robj *val, uint32_t keyMetaBits);\nkvobj *kvobjSetExpire(kvobj *kv, long long expire);\nsds kvobjGetKey(const kvobj *kv);\nlong long kvobjGetExpire(const kvobj *val);\nuint64_t *kvobjMetaRef(kvobj *kv, int metaId);\n\n/* Redis object implementation */\nvoid decrRefCount(robj *o);\nvoid incrRefCount(robj *o);\nrobj *makeObjectShared(robj *o);\nvoid freeStringObject(robj *o);\nvoid freeListObject(robj *o);\nvoid freeSetObject(robj *o);\nvoid freeZsetObject(robj *o);\nvoid freeHashObject(robj *o);\nvoid dismissObject(robj *o, size_t dump_size);\nrobj *createObject(int type, void *ptr);\nvoid initObjectLRUOrLFU(robj *o);\nrobj *createStringObject(const char *ptr, size_t len);\nrobj *createRawStringObject(const char *ptr, size_t len);\nrobj *tryCreateRawStringObject(const char *ptr, size_t len);\nrobj *tryCreateStringObject(const char *ptr, size_t len);\nrobj *dupStringObject(const robj *o);\nint isSdsRepresentableAsLongLong(sds s, long long *llval);\nint isObjectRepresentableAsLongLong(robj *o, long long *llongval);\nrobj *tryObjectEncoding(robj *o);\nrobj *tryObjectEncodingEx(robj *o, int try_trim);\nsize_t getObjectLength(robj *o);\nrobj *getDecodedObject(robj *o);\nsize_t stringObjectLen(robj *o);\nsize_t stringObjectAllocSize(const robj *o);\nrobj *createStringObjectFromLongLong(long long value);\nrobj *createStringObjectFromLongLongForValue(long long value);\nrobj *createStringObjectFromLongLongWithSds(long long value);\nrobj *createStringObjectFromLongDouble(long double value, int humanfriendly);\nrobj *createQuicklistObject(int fill, int compress);\nrobj *createListListpackObject(void);\nrobj *createSetObject(void);\nrobj *createIntsetObject(void);\nrobj *createSetListpackObject(void);\nrobj *createHashObject(void);\nrobj *createZsetObject(void);\nrobj *createZsetListpackObject(void);\nrobj 
*createStreamObject(void);\nrobj *createGCRAObject(long long value);\nrobj *createModuleObject(struct RedisModuleType *mt, void *value);\nint getLongFromObjectOrReply(struct client *c, robj *o, long *target, const char *msg);\nint getPositiveLongFromObjectOrReply(struct client *c, robj *o, long *target, const char *msg);\nint getRangeLongFromObjectOrReply(struct client *c, robj *o, long min, long max, long *target, const char *msg);\nint checkType(struct client *c, robj *o, int type);\nint getLongLongFromObjectOrReply(struct client *c, robj *o, long long *target, const char *msg);\nint getDoubleFromObjectOrReply(struct client *c, robj *o, double *target, const char *msg);\nint getDoubleFromObject(const robj *o, double *target);\nint getLongLongFromObject(robj *o, long long *target);\nint getLongLongFromGCRAObject(robj *o, long long *target);\nint getLongDoubleFromObject(robj *o, long double *target);\nint getLongDoubleFromObjectOrReply(struct client *c, robj *o, long double *target, const char *msg);\nint getIntFromObjectOrReply(struct client *c, robj *o, int *target, const char *msg);\nchar *strEncoding(int encoding);\nint compareStringObjects(const robj *a, const robj *b);\nint collateStringObjects(const robj *a, const robj *b);\nint equalStringObjects(robj *a, robj *b);\nvoid trimStringObjectIfNeeded(robj *o, int trim_small_values);\nsize_t kvobjAllocSize(kvobj *o);\nsize_t gcraTypeAllocSize(robj *o);\nsize_t gcraObjectLength(robj *o);\n\nint objectSetLRUOrLFU(robj *val, long long lfu_freq, long long lru_idle,\n                      long long lru_clock, int lru_multiplier);\nvoid objectCommand(struct client *c);\nvoid memoryCommand(struct client *c);\n\nstatic inline void *kvobjGetAllocPtr(const kvobj *kv) {\n    /* Return the base allocation pointer (start of the metadata prefix). */\n    uint32_t numMetaBytes = __builtin_popcount(kv->metabits) * sizeof(uint64_t);\n    return (char *)kv - numMetaBytes;\n}\n\n#endif /* __OBJECT_H */\n"
  },
  {
    "path": "src/pqsort.c",
    "content": "/* The following is the NetBSD libc qsort implementation modified in order to\n * support partial sorting of ranges for Redis.\n *\n * Copyright (c) 2009-Present, Redis Ltd. All rights reserved.\n *\n * The original copyright notice follows. */\n\n\n/*\t$NetBSD: qsort.c,v 1.19 2009/01/30 23:38:44 lukem Exp $\t*/\n\n/*-\n * Copyright (c) 1992, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n\n#include <sys/types.h>\n#include <stdint.h>\n#include <errno.h>\n#include <stdlib.h>\n\nstatic inline char\t*med3 (char *, char *, char *,\n    int (*)(const void *, const void *));\nstatic inline void\t swapfunc (char *, char *, size_t, int);\n\n#define min(a, b)\t(a) < (b) ? a : b\n\n/*\n * Qsort routine from Bentley & McIlroy's \"Engineering a Sort Function\".\n */\n#define swapcode(TYPE, parmi, parmj, n) { \t\t\\\n\tsize_t i = (n) / sizeof (TYPE); \t\t\\\n\tTYPE *pi = (TYPE *)(void *)(parmi); \t\t\\\n\tTYPE *pj = (TYPE *)(void *)(parmj); \t\t\\\n\tdo { \t\t\t\t\t\t\\\n\t\tTYPE\tt = *pi;\t\t\t\\\n\t\t*pi++ = *pj;\t\t\t\t\\\n\t\t*pj++ = t;\t\t\t\t\\\n        } while (--i > 0);\t\t\t\t\\\n}\n\n#define SWAPINIT(a, es) swaptype = (uintptr_t)a % sizeof(long) || \\\n\tes % sizeof(long) ? 2 : es == sizeof(long)? 
0 : 1;\n\nstatic inline void\nswapfunc(char *a, char *b, size_t n, int swaptype)\n{\n\n\tif (swaptype <= 1)\n\t\tswapcode(long, a, b, n)\n\telse\n\t\tswapcode(char, a, b, n)\n}\n\n#define swap(a, b)\t\t\t\t\t\t\\\n\tif (swaptype == 0) {\t\t\t\t\t\\\n\t\tlong t = *(long *)(void *)(a);\t\t\t\\\n\t\t*(long *)(void *)(a) = *(long *)(void *)(b);\t\\\n\t\t*(long *)(void *)(b) = t;\t\t\t\\\n\t} else\t\t\t\t\t\t\t\\\n\t\tswapfunc(a, b, es, swaptype)\n\n#define vecswap(a, b, n) if ((n) > 0) swapfunc((a), (b), (size_t)(n), swaptype)\n\nstatic inline char *\nmed3(char *a, char *b, char *c,\n    int (*cmp) (const void *, const void *))\n{\n\n\treturn cmp(a, b) < 0 ?\n\t       (cmp(b, c) < 0 ? b : (cmp(a, c) < 0 ? c : a ))\n              :(cmp(b, c) > 0 ? b : (cmp(a, c) < 0 ? a : c ));\n}\n\nstatic void\n_pqsort(void *a, size_t n, size_t es,\n    int (*cmp) (const void *, const void *), void *lrange, void *rrange)\n{\n\tchar *pa, *pb, *pc, *pd, *pl, *pm, *pn;\n\tsize_t d, r;\n\tint swaptype, cmp_result;\n\nloop:\tSWAPINIT(a, es);\n\tif (n < 7) {\n\t\tfor (pm = (char *) a + es; pm < (char *) a + n * es; pm += es)\n\t\t\tfor (pl = pm; pl > (char *) a && cmp(pl - es, pl) > 0;\n\t\t\t     pl -= es)\n\t\t\t\tswap(pl, pl - es);\n\t\treturn;\n\t}\n\tpm = (char *) a + (n / 2) * es;\n\tif (n > 7) {\n\t\tpl = (char *) a;\n\t\tpn = (char *) a + (n - 1) * es;\n\t\tif (n > 40) {\n\t\t\td = (n / 8) * es;\n\t\t\tpl = med3(pl, pl + d, pl + 2 * d, cmp);\n\t\t\tpm = med3(pm - d, pm, pm + d, cmp);\n\t\t\tpn = med3(pn - 2 * d, pn - d, pn, cmp);\n\t\t}\n\t\tpm = med3(pl, pm, pn, cmp);\n\t}\n\tswap(a, pm);\n\tpa = pb = (char *) a + es;\n\n\tpc = pd = (char *) a + (n - 1) * es;\n\tfor (;;) {\n\t\twhile (pb <= pc && (cmp_result = cmp(pb, a)) <= 0) {\n\t\t\tif (cmp_result == 0) {\n\t\t\t\tswap(pa, pb);\n\t\t\t\tpa += es;\n\t\t\t}\n\t\t\tpb += es;\n\t\t}\n\t\twhile (pb <= pc && (cmp_result = cmp(pc, a)) >= 0) {\n\t\t\tif (cmp_result == 0) {\n\t\t\t\tswap(pc, pd);\n\t\t\t\tpd -= es;\n\t\t\t}\n\t\t\tpc -= 
es;\n\t\t}\n\t\tif (pb > pc)\n\t\t\tbreak;\n\t\tswap(pb, pc);\n\t\tpb += es;\n\t\tpc -= es;\n\t}\n\n\tpn = (char *) a + n * es;\n\tr = min(pa - (char *) a, pb - pa);\n\tvecswap(a, pb - r, r);\n\tr = min((size_t)(pd - pc), pn - pd - es);\n\tvecswap(pb, pn - r, r);\n\tif ((r = pb - pa) > es) {\n                void *_l = a, *_r = ((unsigned char*)a)+r-1;\n                if (!((lrange < _l && rrange < _l) ||\n                    (lrange > _r && rrange > _r)))\n\t\t    _pqsort(a, r / es, es, cmp, lrange, rrange);\n        }\n\tif ((r = pd - pc) > es) {\n                void *_l, *_r;\n\n\t\t/* Iterate rather than recurse to save stack space */\n\t\ta = pn - r;\n\t\tn = r / es;\n\n                _l = a;\n                _r = ((unsigned char*)a)+r-1;\n                if (!((lrange < _l && rrange < _l) ||\n                    (lrange > _r && rrange > _r)))\n\t\t    goto loop;\n\t}\n/*\t\tqsort(pn - r, r / es, es, cmp);*/\n}\n\nvoid\npqsort(void *a, size_t n, size_t es,\n    int (*cmp) (const void *, const void *), size_t lrange, size_t rrange)\n{\n    _pqsort(a,n,es,cmp,((unsigned char*)a)+(lrange*es),\n                       ((unsigned char*)a)+((rrange+1)*es)-1);\n}\n"
  },
  {
    "path": "src/pqsort.h",
    "content": "/* The following is the NetBSD libc qsort implementation modified in order to\n * support partial sorting of ranges for Redis.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * See the pqsort.c file for the original copyright notice. */\n\n#ifndef __PQSORT_H\n#define __PQSORT_H\n\nvoid\npqsort(void *a, size_t n, size_t es,\n    int (*cmp) (const void *, const void *), size_t lrange, size_t rrange);\n\n#endif\n"
  },
  {
    "path": "src/pubsub.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"cluster_slot_stats.h\"\n\n/* Structure to hold the pubsub related metadata. Currently used\n * for pubsub and pubsubshard feature. */\ntypedef struct pubsubtype {\n    int shard;\n    dict *(*clientPubSubChannels)(client*);\n    int (*subscriptionCount)(client*);\n    kvstore **serverPubSubChannels;\n    robj **subscribeMsg;\n    robj **unsubscribeMsg;\n    robj **messageBulk;\n}pubsubtype;\n\n/*\n * Get client's global Pub/Sub channels subscription count.\n */\nint clientSubscriptionsCount(client *c);\n\n/*\n * Get client's shard level Pub/Sub channels subscription count.\n */\nint clientShardSubscriptionsCount(client *c);\n\n/*\n * Get client's global Pub/Sub channels dict.\n */\ndict* getClientPubSubChannels(client *c);\n\n/*\n * Get client's shard level Pub/Sub channels dict.\n */\ndict* getClientPubSubShardChannels(client *c);\n\n/*\n * Get list of channels client is subscribed to.\n * If a pattern is provided, the subset of channels is returned\n * matching the pattern.\n */\nvoid channelList(client *c, sds pat, kvstore *pubsub_channels);\n\n/*\n * Pub/Sub type for global channels.\n */\npubsubtype pubSubType = {\n    .shard = 0,\n    .clientPubSubChannels = getClientPubSubChannels,\n    .subscriptionCount = clientSubscriptionsCount,\n    .serverPubSubChannels = &server.pubsub_channels,\n    .subscribeMsg = &shared.subscribebulk,\n    .unsubscribeMsg = &shared.unsubscribebulk,\n    .messageBulk = 
&shared.messagebulk,\n};\n\n/*\n * Pub/Sub type for shard level channels bounded to a slot.\n */\npubsubtype pubSubShardType = {\n    .shard = 1,\n    .clientPubSubChannels = getClientPubSubShardChannels,\n    .subscriptionCount = clientShardSubscriptionsCount,\n    .serverPubSubChannels = &server.pubsubshard_channels,\n    .subscribeMsg = &shared.ssubscribebulk,\n    .unsubscribeMsg = &shared.sunsubscribebulk,\n    .messageBulk = &shared.smessagebulk,\n};\n\n/*-----------------------------------------------------------------------------\n * Pubsub client replies API\n *----------------------------------------------------------------------------*/\n\n/* Send a pubsub message of type \"message\" to the client.\n * Normally 'msg' is a Redis object containing the string to send as\n * message. However if the caller sets 'msg' as NULL, it will be able\n * to send a special message (for instance an Array type) by using the\n * addReply*() API family. */\nvoid addReplyPubsubMessage(client *c, robj *channel, robj *msg, robj *message_bulk) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[3]);\n    else\n        addReplyPushLen(c,3);\n    addReply(c,message_bulk);\n    addReplyBulk(c,channel);\n    if (msg) addReplyBulk(c,msg);\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/* Send a pubsub message of type \"pmessage\" to the client. The difference\n * with the \"message\" type delivered by addReplyPubsubMessage() is that\n * this message format also includes the pattern that matched the message. 
*/\nvoid addReplyPubsubPatMessage(client *c, robj *pat, robj *channel, robj *msg) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[4]);\n    else\n        addReplyPushLen(c,4);\n    addReply(c,shared.pmessagebulk);\n    addReplyBulk(c,pat);\n    addReplyBulk(c,channel);\n    addReplyBulk(c,msg);\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/* Send the pubsub subscription notification to the client. */\nvoid addReplyPubsubSubscribed(client *c, robj *channel, pubsubtype type) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[3]);\n    else\n        addReplyPushLen(c,3);\n    addReply(c,*type.subscribeMsg);\n    addReplyBulk(c,channel);\n    addReplyLongLong(c,type.subscriptionCount(c));\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/* Send the pubsub unsubscription notification to the client.\n * Channel can be NULL: this is useful when the client sends a mass\n * unsubscribe command but there are no channels to unsubscribe from: we\n * still send a notification. */\nvoid addReplyPubsubUnsubscribed(client *c, robj *channel, pubsubtype type) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[3]);\n    else\n        addReplyPushLen(c,3);\n    addReply(c, *type.unsubscribeMsg);\n    if (channel)\n        addReplyBulk(c,channel);\n    else\n        addReplyNull(c);\n    addReplyLongLong(c,type.subscriptionCount(c));\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/* Send the pubsub pattern subscription notification to the client. 
*/\nvoid addReplyPubsubPatSubscribed(client *c, robj *pattern) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[3]);\n    else\n        addReplyPushLen(c,3);\n    addReply(c,shared.psubscribebulk);\n    addReplyBulk(c,pattern);\n    addReplyLongLong(c,clientSubscriptionsCount(c));\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/* Send the pubsub pattern unsubscription notification to the client.\n * Pattern can be NULL: this is useful when the client sends a mass\n * punsubscribe command but there are no patterns to unsubscribe from: we\n * still send a notification. */\nvoid addReplyPubsubPatUnsubscribed(client *c, robj *pattern) {\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n    if (c->resp == 2)\n        addReply(c,shared.mbulkhdr[3]);\n    else\n        addReplyPushLen(c,3);\n    addReply(c,shared.punsubscribebulk);\n    if (pattern)\n        addReplyBulk(c,pattern);\n    else\n        addReplyNull(c);\n    addReplyLongLong(c,clientSubscriptionsCount(c));\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n}\n\n/*-----------------------------------------------------------------------------\n * Pubsub low level API\n *----------------------------------------------------------------------------*/\n\n/* Return the number of pubsub channels + patterns the server handles. */\nint serverPubsubSubscriptionCount(void) {\n    return kvstoreSize(server.pubsub_channels) + dictSize(server.pubsub_patterns);\n}\n\n/* Return the number of pubsub shard level channels the server handles. */\nint serverPubsubShardSubscriptionCount(void) {\n    return kvstoreSize(server.pubsubshard_channels);\n}\n\n/* Return the number of channels + patterns a client is subscribed to. 
*/\nint clientSubscriptionsCount(client *c) {\n    return dictSize(c->pubsub_channels) + dictSize(c->pubsub_patterns);\n}\n\n/* Return the number of shard level channels a client is subscribed to. */\nint clientShardSubscriptionsCount(client *c) {\n    return dictSize(c->pubsubshard_channels);\n}\n\ndict* getClientPubSubChannels(client *c) {\n    return c->pubsub_channels;\n}\n\ndict* getClientPubSubShardChannels(client *c) {\n    return c->pubsubshard_channels;\n}\n\n/* Return the number of pubsub + pubsub shard level channels\n * a client is subscribed to. */\nint clientTotalPubSubSubscriptionCount(client *c) {\n    return clientSubscriptionsCount(c) + clientShardSubscriptionsCount(c);\n}\n\nvoid markClientAsPubSub(client *c) {\n    if (!(c->flags & CLIENT_PUBSUB)) {\n        c->flags |= CLIENT_PUBSUB;\n        server.pubsub_clients++;\n    }\n}\n\nvoid unmarkClientAsPubSub(client *c) {\n    if (c->flags & CLIENT_PUBSUB) {\n        c->flags &= ~CLIENT_PUBSUB;\n        server.pubsub_clients--;\n    }\n}\n\n/* Subscribe a client to a channel. Returns 1 if the operation succeeded, or\n * 0 if the client was already subscribed to that channel. 
*/\nint pubsubSubscribeChannel(client *c, robj *channel, pubsubtype type) {\n    dictEntry *de, *existing;\n    dict *clients = NULL;\n    int retval = 0;\n    unsigned int slot = 0;\n\n    /* Add the channel to the client -> channels hash table */\n    dictEntryLink bucket;\n    dictEntryLink link = dictFindLink(type.clientPubSubChannels(c),channel,&bucket);\n    if (link == NULL) { /* Not yet subscribed to this channel */\n        retval = 1;\n        /* Add the client to the channel -> list of clients hash table */\n        if (server.cluster_enabled && type.shard) {\n            slot = getKeySlot(channel->ptr);\n        }\n\n        de = kvstoreDictAddRaw(*type.serverPubSubChannels, slot, channel, &existing);\n\n        if (existing) {\n            clients = dictGetVal(existing);\n            channel = dictGetKey(existing);\n        } else {\n            clients = dictCreate(&clientDictType);\n            kvstoreDictSetVal(*type.serverPubSubChannels, slot, de, clients);\n            incrRefCount(channel);\n        }\n\n        serverAssert(dictAdd(clients, c, NULL) != DICT_ERR);\n        dictSetKeyAtLink(type.clientPubSubChannels(c), channel, &bucket, 1);\n        incrRefCount(channel);\n    }\n    /* Notify the client */\n    addReplyPubsubSubscribed(c,channel,type);\n    return retval;\n}\n\n/* Unsubscribe a client from a channel. Returns 1 if the operation succeeded, or\n * 0 if the client was not subscribed to the specified channel. */\nint pubsubUnsubscribeChannel(client *c, robj *channel, int notify, pubsubtype type) {\n    dictEntry *de;\n    dict *clients;\n    int retval = 0;\n    int slot = 0;\n\n    /* Remove the channel from the client -> channels hash table */\n    incrRefCount(channel); /* channel may be just a pointer to the same object\n                            we have in the hash tables. Protect it... 
*/\n    if (dictDelete(type.clientPubSubChannels(c),channel) == DICT_OK) {\n        retval = 1;\n        /* Remove the client from the channel -> clients list hash table */\n        if (server.cluster_enabled && type.shard) {\n            /* Compute the slot from the channel directly instead of using getKeySlot(),\n             * because the unsubscribe may be triggered by a different client, and\n             * getKeySlot() would return the cached slot of that client. */\n            slot = keyHashSlot(channel->ptr, sdslen(channel->ptr));\n        }\n        de = kvstoreDictFind(*type.serverPubSubChannels, slot, channel);\n        serverAssertWithInfo(c,NULL,de != NULL);\n        clients = dictGetVal(de);\n        serverAssertWithInfo(c, NULL, dictDelete(clients, c) == DICT_OK);\n        if (dictSize(clients) == 0) {\n            /* Free the dict and the associated hash entry if this was the\n             * last client, so that abusing Redis PUBSUB by creating\n             * millions of channels does not waste memory. */\n            kvstoreDictDelete(*type.serverPubSubChannels, slot, channel);\n        }\n    }\n    /* Notify the client */\n    if (notify) {\n        addReplyPubsubUnsubscribed(c,channel,type);\n    }\n    decrRefCount(channel); /* it is finally safe to release it */\n    return retval;\n}\n\n/* Unsubscribe all shard channels in a slot. */\nvoid pubsubShardUnsubscribeAllChannelsInSlot(unsigned int slot) {\n    if (!kvstoreDictSize(server.pubsubshard_channels, slot))\n        return;\n\n    dictEntry *de;\n    kvstoreDictIterator kvs_di;\n    kvstoreInitDictSafeIterator(&kvs_di, server.pubsubshard_channels, slot);\n    while ((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n        robj *channel = dictGetKey(de);\n        dict *clients = dictGetVal(de);\n        /* For each client subscribed to the channel, unsubscribe it. 
*/\n        dictIterator iter;\n        dictEntry *entry;\n\n        dictInitIterator(&iter, clients);\n        while ((entry = dictNext(&iter)) != NULL) {\n            client *c = dictGetKey(entry);\n            int retval = dictDelete(c->pubsubshard_channels, channel);\n            serverAssertWithInfo(c,channel,retval == DICT_OK);\n            addReplyPubsubUnsubscribed(c, channel, pubSubShardType);\n            /* If the client has no other pubsub subscription,\n             * move out of pubsub mode. */\n            if (clientTotalPubSubSubscriptionCount(c) == 0) {\n                unmarkClientAsPubSub(c);\n            }\n        }\n        dictResetIterator(&iter);\n        kvstoreDictDelete(server.pubsubshard_channels, slot, channel);\n    }\n    kvstoreResetDictIterator(&kvs_di);\n}\n\n/* Subscribe a client to a pattern. Returns 1 if the operation succeeded, or\n * 0 if the client was already subscribed to that pattern. */\nint pubsubSubscribePattern(client *c, robj *pattern) {\n    dictEntry *de;\n    dict *clients;\n    int retval = 0;\n\n    if (dictAdd(c->pubsub_patterns, pattern, NULL) == DICT_OK) {\n        retval = 1;\n        incrRefCount(pattern);\n        /* Add the client to the pattern -> list of clients hash table */\n        de = dictFind(server.pubsub_patterns,pattern);\n        if (de == NULL) {\n            clients = dictCreate(&clientDictType);\n            dictAdd(server.pubsub_patterns,pattern,clients);\n            incrRefCount(pattern);\n        } else {\n            clients = dictGetVal(de);\n        }\n        serverAssert(dictAdd(clients, c, NULL) != DICT_ERR);\n    }\n    /* Notify the client */\n    addReplyPubsubPatSubscribed(c,pattern);\n    return retval;\n}\n\n/* Unsubscribe a client from a pattern. Returns 1 if the operation succeeded, or\n * 0 if the client was not subscribed to the specified pattern. 
*/\nint pubsubUnsubscribePattern(client *c, robj *pattern, int notify) {\n    dictEntry *de;\n    dict *clients;\n    int retval = 0;\n\n    incrRefCount(pattern); /* Protect the object. May be the same one we remove */\n    if (dictDelete(c->pubsub_patterns, pattern) == DICT_OK) {\n        retval = 1;\n        /* Remove the client from the pattern -> clients list hash table */\n        de = dictFind(server.pubsub_patterns,pattern);\n        serverAssertWithInfo(c,NULL,de != NULL);\n        clients = dictGetVal(de);\n        serverAssertWithInfo(c, NULL, dictDelete(clients, c) == DICT_OK);\n        if (dictSize(clients) == 0) {\n            /* Free the dict and the associated hash entry if this was\n             * the last client. */\n            dictDelete(server.pubsub_patterns,pattern);\n        }\n    }\n    /* Notify the client */\n    if (notify) addReplyPubsubPatUnsubscribed(c,pattern);\n    decrRefCount(pattern);\n    return retval;\n}\n\n/* Unsubscribe from all the channels. Return the number of channels the\n * client was subscribed to. */\nint pubsubUnsubscribeAllChannelsInternal(client *c, int notify, pubsubtype type) {\n    int count = 0;\n    if (dictSize(type.clientPubSubChannels(c)) > 0) {\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitSafeIterator(&di, type.clientPubSubChannels(c));\n        while((de = dictNext(&di)) != NULL) {\n            robj *channel = dictGetKey(de);\n\n            count += pubsubUnsubscribeChannel(c,channel,notify,type);\n        }\n        dictResetIterator(&di);\n    }\n    /* We were subscribed to nothing? Still reply to the client. 
*/\n    if (notify && count == 0) {\n        addReplyPubsubUnsubscribed(c,NULL,type);\n    }\n    return count;\n}\n\n/*\n * Unsubscribe a client from all global channels.\n */\nint pubsubUnsubscribeAllChannels(client *c, int notify) {\n    int count = pubsubUnsubscribeAllChannelsInternal(c,notify,pubSubType);\n    return count;\n}\n\n/*\n * Unsubscribe a client from all subscribed shard channels.\n */\nint pubsubUnsubscribeShardAllChannels(client *c, int notify) {\n    int count = pubsubUnsubscribeAllChannelsInternal(c, notify, pubSubShardType);\n    return count;\n}\n\n/* Unsubscribe from all the patterns. Return the number of patterns the\n * client was subscribed to. */\nint pubsubUnsubscribeAllPatterns(client *c, int notify) {\n    int count = 0;\n\n    if (dictSize(c->pubsub_patterns) > 0) {\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitSafeIterator(&di, c->pubsub_patterns);\n        while ((de = dictNext(&di)) != NULL) {\n            robj *pattern = dictGetKey(de);\n            count += pubsubUnsubscribePattern(c, pattern, notify);\n        }\n        dictResetIterator(&di);\n    }\n\n    /* We were subscribed to nothing? Still reply to the client. 
*/\n    if (notify && count == 0) addReplyPubsubPatUnsubscribed(c,NULL);\n    return count;\n}\n\n/*\n * Publish a message to all the subscribers.\n */\nint pubsubPublishMessageInternal(robj *channel, robj *message, pubsubtype type) {\n    int receivers = 0;\n    dictEntry *de;\n    dictIterator di;\n    unsigned int slot = 0;\n\n    /* Send to clients listening for that channel */\n    if (server.cluster_enabled && type.shard) {\n        slot = keyHashSlot(channel->ptr, sdslen(channel->ptr));\n    }\n    de = kvstoreDictFind(*type.serverPubSubChannels, slot, channel);\n    if (de) {\n        dict *clients = dictGetVal(de);\n        dictEntry *entry;\n        dictIterator iter;\n\n        dictInitIterator(&iter, clients);\n        while ((entry = dictNext(&iter)) != NULL) {\n            client *c = dictGetKey(entry);\n            addReplyPubsubMessage(c,channel,message,*type.messageBulk);\n            if (clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_NET))\n                clusterSlotStatsAddNetworkBytesOutForShardedPubSubInternalPropagation(c, slot);\n            updateClientMemUsageAndBucket(c);\n            receivers++;\n        }\n        dictResetIterator(&iter);\n    }\n\n    if (type.shard) {\n        /* Shard pubsub ignores patterns. 
*/\n        return receivers;\n    }\n\n    /* Send to clients listening to matching channels */\n    if (dictSize(server.pubsub_patterns) > 0) {\n        channel = getDecodedObject(channel);\n        dictInitIterator(&di, server.pubsub_patterns);\n        while((de = dictNext(&di)) != NULL) {\n            robj *pattern = dictGetKey(de);\n            dict *clients = dictGetVal(de);\n            if (!stringmatchlen((char*)pattern->ptr,\n                                sdslen(pattern->ptr),\n                                (char*)channel->ptr,\n                                sdslen(channel->ptr),0)) continue;\n\n            dictEntry *entry;\n            dictIterator iter;\n\n            dictInitIterator(&iter, clients);\n            while ((entry = dictNext(&iter)) != NULL) {\n                client *c = dictGetKey(entry);\n                addReplyPubsubPatMessage(c,pattern,channel,message);\n                updateClientMemUsageAndBucket(c);\n                receivers++;\n            }\n            dictResetIterator(&iter);\n        }\n        decrRefCount(channel);\n        dictResetIterator(&di);\n    }\n    return receivers;\n}\n\n/* Publish a message to all the subscribers. */\nint pubsubPublishMessage(robj *channel, robj *message, int sharded) {\n    return pubsubPublishMessageInternal(channel, message, sharded? pubSubShardType : pubSubType);\n}\n\n/*-----------------------------------------------------------------------------\n * Pubsub commands implementation\n *----------------------------------------------------------------------------*/\n\n/* SUBSCRIBE channel [channel ...] 
*/\nvoid subscribeCommand(client *c) {\n    int j;\n    if ((c->flags & CLIENT_DENY_BLOCKING) && !(c->flags & CLIENT_MULTI)) {\n        /**\n         * A client that has the CLIENT_DENY_BLOCKING flag on\n         * expects a reply per command and so cannot execute SUBSCRIBE.\n         *\n         * Notice that we have a special treatment for multi because of\n         * backward compatibility\n         */\n        addReplyError(c, \"SUBSCRIBE isn't allowed for a DENY BLOCKING client\");\n        return;\n    }\n    for (j = 1; j < c->argc; j++)\n        pubsubSubscribeChannel(c,c->argv[j],pubSubType);\n    markClientAsPubSub(c);\n}\n\n/* UNSUBSCRIBE [channel ...] */\nvoid unsubscribeCommand(client *c) {\n    if (c->argc == 1) {\n        pubsubUnsubscribeAllChannels(c,1);\n    } else {\n        int j;\n\n        for (j = 1; j < c->argc; j++)\n            pubsubUnsubscribeChannel(c,c->argv[j],1,pubSubType);\n    }\n    if (clientTotalPubSubSubscriptionCount(c) == 0) {\n        unmarkClientAsPubSub(c);\n    }\n}\n\n/* PSUBSCRIBE pattern [pattern ...] 
*/\nvoid psubscribeCommand(client *c) {\n    int j;\n    if ((c->flags & CLIENT_DENY_BLOCKING) && !(c->flags & CLIENT_MULTI)) {\n        /**\n         * A client that has the CLIENT_DENY_BLOCKING flag on\n         * expects a reply per command and so cannot execute PSUBSCRIBE.\n         *\n         * Notice that we have a special treatment for multi because of\n         * backward compatibility\n         */\n        addReplyError(c, \"PSUBSCRIBE isn't allowed for a DENY BLOCKING client\");\n        return;\n    }\n\n    for (j = 1; j < c->argc; j++)\n        pubsubSubscribePattern(c,c->argv[j]);\n    markClientAsPubSub(c);\n}\n\n/* PUNSUBSCRIBE [pattern [pattern ...]] */\nvoid punsubscribeCommand(client *c) {\n    if (c->argc == 1) {\n        pubsubUnsubscribeAllPatterns(c,1);\n    } else {\n        int j;\n\n        for (j = 1; j < c->argc; j++)\n            pubsubUnsubscribePattern(c,c->argv[j],1);\n    }\n    if (clientTotalPubSubSubscriptionCount(c) == 0) {\n        unmarkClientAsPubSub(c);\n    }\n}\n\n/* This function wraps pubsubPublishMessage and also propagates the message to cluster.\n * Used by the commands PUBLISH/SPUBLISH and their respective module APIs. */\nint pubsubPublishMessageAndPropagateToCluster(robj *channel, robj *message, int sharded) {\n    int receivers = pubsubPublishMessage(channel, message, sharded);\n    if (server.cluster_enabled)\n        clusterPropagatePublish(channel, message, sharded);\n    return receivers;\n}\n\n/* PUBLISH <channel> <message> */\nvoid publishCommand(client *c) {\n    if (server.sentinel_mode) {\n        sentinelPublishCommand(c);\n        return;\n    }\n\n    int receivers = pubsubPublishMessageAndPropagateToCluster(c->argv[1],c->argv[2],0);\n    if (!server.cluster_enabled)\n        forceCommandPropagation(c,PROPAGATE_REPL);\n    addReplyLongLong(c,receivers);\n}\n\n/* PUBSUB command for Pub/Sub introspection. 
*/\nvoid pubsubCommand(client *c) {\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"CHANNELS [<pattern>]\",\n\"    Return the currently active channels matching a <pattern> (default: '*').\",\n\"NUMPAT\",\n\"    Return number of subscriptions to patterns.\",\n\"NUMSUB [<channel> ...]\",\n\"    Return the number of subscribers for the specified channels, excluding\",\n\"    pattern subscriptions (default: no channels).\",\n\"SHARDCHANNELS [<pattern>]\",\n\"    Return the currently active shard level channels matching a <pattern> (default: '*').\",\n\"SHARDNUMSUB [<shardchannel> ...]\",\n\"    Return the number of subscribers for the specified shard level channel(s)\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"channels\") &&\n        (c->argc == 2 || c->argc == 3))\n    {\n        /* PUBSUB CHANNELS [<pattern>] */\n        sds pat = (c->argc == 2) ? NULL : c->argv[2]->ptr;\n        channelList(c, pat, server.pubsub_channels);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"numsub\") && c->argc >= 2) {\n        /* PUBSUB NUMSUB [Channel_1 ... Channel_N] */\n        int j;\n\n        addReplyArrayLen(c,(c->argc-2)*2);\n        for (j = 2; j < c->argc; j++) {\n            dict *d = kvstoreDictFetchValue(server.pubsub_channels, 0, c->argv[j]);\n\n            addReplyBulk(c,c->argv[j]);\n            addReplyLongLong(c, d ? dictSize(d) : 0);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"numpat\") && c->argc == 2) {\n        /* PUBSUB NUMPAT */\n        addReplyLongLong(c,dictSize(server.pubsub_patterns));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"shardchannels\") &&\n        (c->argc == 2 || c->argc == 3))\n    {\n        /* PUBSUB SHARDCHANNELS [<pattern>] */\n        sds pat = (c->argc == 2) ? 
NULL : c->argv[2]->ptr;\n        channelList(c,pat,server.pubsubshard_channels);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"shardnumsub\") && c->argc >= 2) {\n        /* PUBSUB SHARDNUMSUB [ShardChannel_1 ... ShardChannel_N] */\n        int j;\n        addReplyArrayLen(c, (c->argc-2)*2);\n        for (j = 2; j < c->argc; j++) {\n            unsigned int slot = calculateKeySlot(c->argv[j]->ptr);\n            dict *clients = kvstoreDictFetchValue(server.pubsubshard_channels, slot, c->argv[j]);\n\n            addReplyBulk(c,c->argv[j]);\n            addReplyLongLong(c, clients ? dictSize(clients) : 0);\n        }\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\nvoid channelList(client *c, sds pat, kvstore *pubsub_channels) {\n    long mblen = 0;\n    void *replylen;\n    unsigned int slot_cnt = kvstoreNumDicts(pubsub_channels);\n\n    replylen = addReplyDeferredLen(c);\n    for (unsigned int i = 0; i < slot_cnt; i++) {\n        if (!kvstoreDictSize(pubsub_channels, i))\n            continue;\n        dictEntry *de;\n        kvstoreDictIterator kvs_di;\n        kvstoreInitDictIterator(&kvs_di, pubsub_channels, i);\n        while((de = kvstoreDictIteratorNext(&kvs_di)) != NULL) {\n            robj *cobj = dictGetKey(de);\n            sds channel = cobj->ptr;\n\n            if (!pat || stringmatchlen(pat, sdslen(pat),\n                                    channel, sdslen(channel),0))\n            {\n                addReplyBulk(c,cobj);\n                mblen++;\n            }\n        }\n        kvstoreResetDictIterator(&kvs_di);\n    }\n    setDeferredArrayLen(c,replylen,mblen);\n}\n\n/* SPUBLISH <shardchannel> <message> */\nvoid spublishCommand(client *c) {\n    int receivers = pubsubPublishMessageAndPropagateToCluster(c->argv[1],c->argv[2],1);\n    if (!server.cluster_enabled)\n        forceCommandPropagation(c,PROPAGATE_REPL);\n    addReplyLongLong(c,receivers);\n}\n\n/* SSUBSCRIBE shardchannel [shardchannel ...] 
*/\nvoid ssubscribeCommand(client *c) {\n    if (c->flags & CLIENT_DENY_BLOCKING) {\n        /* A client that has the CLIENT_DENY_BLOCKING flag on\n         * expects a reply per command and so cannot execute SSUBSCRIBE. */\n        addReplyError(c, \"SSUBSCRIBE isn't allowed for a DENY BLOCKING client\");\n        return;\n    }\n\n    for (int j = 1; j < c->argc; j++) {\n        pubsubSubscribeChannel(c, c->argv[j], pubSubShardType);\n    }\n    markClientAsPubSub(c);\n}\n\n/* SUNSUBSCRIBE [shardchannel [shardchannel ...]] */\nvoid sunsubscribeCommand(client *c) {\n    if (c->argc == 1) {\n        pubsubUnsubscribeShardAllChannels(c, 1);\n    } else {\n        for (int j = 1; j < c->argc; j++) {\n            pubsubUnsubscribeChannel(c, c->argv[j], 1, pubSubShardType);\n        }\n    }\n    if (clientTotalPubSubSubscriptionCount(c) == 0) {\n        unmarkClientAsPubSub(c);\n    }\n}\n\nsize_t pubsubMemOverhead(client *c) {\n    /* PubSub patterns */\n    size_t mem = dictMemUsage(c->pubsub_patterns);\n    /* Global PubSub channels */\n    mem += dictMemUsage(c->pubsub_channels);\n    /* Sharded PubSub channels */\n    mem += dictMemUsage(c->pubsubshard_channels);\n    return mem;\n}\n\nint pubsubTotalSubscriptions(void) {\n    return dictSize(server.pubsub_patterns) +\n           kvstoreSize(server.pubsub_channels) +\n           kvstoreSize(server.pubsubshard_channels);\n}\n"
  },
  {
    "path": "src/quicklist.c",
    "content": "/* quicklist.c - A doubly linked list of listpacks\n *\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this quicklist of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this quicklist of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdio.h>\n#include <string.h> /* for memcpy */\n#include <limits.h>\n#include \"quicklist.h\"\n#include \"zmalloc.h\"\n#include \"config.h\"\n#include \"listpack.h\"\n#include \"util.h\" /* for ll2string */\n#include \"lzf.h\"\n#include \"redisassert.h\"\n\n#ifndef REDIS_STATIC\n#define REDIS_STATIC static\n#endif\n\n/* Optimization levels for size-based filling.\n * Note that the largest possible limit is 64k, so even if each record takes\n * just one byte, it still won't overflow the 16 bit count field. */\nstatic const size_t optimization_level[] = {4096, 8192, 16384, 32768, 65536};\n\n/* This is for test suite development purposes only, 0 means disabled. */\nstatic size_t packed_threshold = 0;\n\n/* set threshold for PLAIN nodes for test suite, the real limit is based on `fill` */\nint quicklistSetPackedThreshold(size_t sz) {\n    /* Don't allow threshold to be set above or even slightly below 4GB */\n    if (sz > (1ull<<32) - (1<<20)) {\n        return 0;\n    }\n    packed_threshold = sz;\n    return 1;\n}\n\n/* Maximum size in bytes of any multi-element listpack.\n * Larger values will live in their own isolated listpacks.\n * This is used only if we're limited by record count. 
When we're limited by\n * size, the maximum limit is bigger, but still safe.\n * 8k is a recommended / default size limit */\n#define SIZE_SAFETY_LIMIT 8192\n\n/* Maximum estimate of the listpack entry overhead.\n * Although in the worst case (sz < 64) we will waste 6 bytes in one\n * quicklistNode, we can avoid memory waste due to internal fragmentation\n * when the listpack exceeds the size limit by a few bytes (e.g. being 16388). */\n#define SIZE_ESTIMATE_OVERHEAD 8\n\n/* Minimum listpack size in bytes for attempting compression. */\n#define MIN_COMPRESS_BYTES 48\n\n/* Minimum size reduction in bytes to store compressed quicklistNode data.\n * This also prevents us from storing compression if the compression\n * resulted in a larger size than the original data. */\n#define MIN_COMPRESS_IMPROVE 8\n\n/* If not verbose testing, remove all debug printing. */\n#ifndef REDIS_TEST_VERBOSE\n#define D(...)\n#else\n#define D(...)                                                                 \\\n    do {                                                                       \\\n        printf(\"%s:%s:%d:\\t\", __FILE__, __func__, __LINE__);                   \\\n        printf(__VA_ARGS__);                                                   \\\n        printf(\"\\n\");                                                          \\\n    } while (0)\n#endif\n\n/* Bookmarks forward declarations */\n#define QL_MAX_BM ((1 << QL_BM_BITS)-1)\nquicklistBookmark *_quicklistBookmarkFindByName(quicklist *ql, const char *name);\nquicklistBookmark *_quicklistBookmarkFindByNode(quicklist *ql, quicklistNode *node);\nvoid _quicklistBookmarkDelete(quicklist *ql, quicklistBookmark *bm);\n\nREDIS_STATIC quicklistNode *_quicklistSplitNode(quicklist *quicklist, quicklistNode *node,\n                                                int offset, int after);\nREDIS_STATIC quicklistNode *_quicklistMergeNodes(quicklist *quicklist, quicklistNode *center);\n\n/* Simple way to give quicklistEntry structs 
default values with one call. */\n#define initEntry(e)                                                           \\\n    do {                                                                       \\\n        (e)->zi = (e)->value = NULL;                                           \\\n        (e)->longval = -123456789;                                             \\\n        (e)->quicklist = NULL;                                                 \\\n        (e)->node = NULL;                                                      \\\n        (e)->offset = 123456789;                                               \\\n        (e)->sz = 0;                                                           \\\n    } while (0)\n\n/* Reset the quicklistIter to prevent it from being used again after\n * insert, replace, or any other operation against the quicklist. */\n#define resetIterator(iter)                                                    \\\n    do {                                                                       \\\n        (iter)->current = NULL;                                                \\\n        (iter)->zi = NULL;                                                     \\\n    } while (0)\n\n#define quicklistUpdateAllocSize(ql, new, old)                                 \\\n    do {                                                                       \\\n        (ql)->alloc_size += (new);                                             \\\n        (ql)->alloc_size -= (old);                                             \\\n    } while (0)\n\n/* Create a new quicklist.\n * Free with quicklistRelease(). 
*/\nquicklist *quicklistCreate(void) {\n    struct quicklist *quicklist;\n    size_t quicklist_sz;\n\n    quicklist = zmalloc_usable(sizeof(*quicklist), &quicklist_sz);\n    quicklist->head = quicklist->tail = NULL;\n    quicklist->len = 0;\n    quicklist->count = 0;\n    quicklist->alloc_size = quicklist_sz;\n    quicklist->compress = 0;\n    quicklist->fill = -2;\n    quicklist->bookmark_count = 0;\n    return quicklist;\n}\n\n#define COMPRESS_MAX ((1 << QL_COMP_BITS)-1)\nvoid quicklistSetCompressDepth(quicklist *quicklist, int compress) {\n    if (compress > COMPRESS_MAX) {\n        compress = COMPRESS_MAX;\n    } else if (compress < 0) {\n        compress = 0;\n    }\n    quicklist->compress = compress;\n}\n\n#define FILL_MAX ((1 << (QL_FILL_BITS-1))-1)\nvoid quicklistSetFill(quicklist *quicklist, int fill) {\n    if (fill > FILL_MAX) {\n        fill = FILL_MAX;\n    } else if (fill < -5) {\n        fill = -5;\n    }\n    quicklist->fill = fill;\n}\n\nvoid quicklistSetOptions(quicklist *quicklist, int fill, int compress) {\n    quicklistSetFill(quicklist, fill);\n    quicklistSetCompressDepth(quicklist, compress);\n}\n\n/* Create a new quicklist with some default parameters. 
*/\nquicklist *quicklistNew(int fill, int compress) {\n    quicklist *quicklist = quicklistCreate();\n    quicklistSetOptions(quicklist, fill, compress);\n    return quicklist;\n}\n\nREDIS_STATIC quicklistNode *quicklistCreateNode(quicklist *quicklist) {\n    size_t node_usable;\n    quicklistNode *node;\n    node = zmalloc_usable(sizeof(*node), &node_usable);\n    quicklist->alloc_size += node_usable;\n    node->entry = NULL;\n    node->count = 0;\n    node->sz = 0;\n    node->next = node->prev = NULL;\n    node->encoding = QUICKLIST_NODE_ENCODING_RAW;\n    node->container = QUICKLIST_NODE_CONTAINER_PACKED;\n    node->recompress = 0;\n    node->dont_compress = 0;\n    return node;\n}\n\n/* Return cached quicklist count */\nunsigned long quicklistCount(const quicklist *ql) { return ql->count; }\n\n/* Return cached quicklist total memory used (in bytes) */\nsize_t quicklistAllocSize(const quicklist *ql) { return ql->alloc_size; }\n\n/* Free entire quicklist. */\nvoid quicklistRelease(quicklist *quicklist) {\n    unsigned long len;\n    quicklistNode *current, *next;\n    size_t usable;\n\n    current = quicklist->head;\n    len = quicklist->len;\n    while (len--) {\n        next = current->next;\n\n        if (current->entry) {\n            if (current->encoding == QUICKLIST_NODE_ENCODING_LZF) {\n                quicklistLZF *lzf = (quicklistLZF *)current->entry;\n                quicklist->alloc_size -= sizeof(*lzf) + lzf->sz;\n            } else {\n                quicklist->alloc_size -= current->sz;\n            }\n            zfree(current->entry);\n        }\n        zfree_usable(current, &usable);\n        quicklist->alloc_size -= usable;\n\n        current = next;\n    }\n    quicklistBookmarksClear(quicklist);\n    debugAssert(quicklist->alloc_size == zmalloc_usable_size(quicklist));\n    zfree(quicklist);\n}\n\n/* Compress the listpack in 'node' and update encoding details.\n * Returns 1 if listpack compressed successfully.\n * Returns 0 if compression 
failed or if listpack too small to compress. */\nREDIS_STATIC int __quicklistCompressNode(quicklist *quicklist, quicklistNode *node) {\n#ifdef REDIS_TEST\n    node->attempted_compress = 1;\n#endif\n    if (node->dont_compress) return 0;\n\n    /* validate that the node is neither\n     * tail nor head (it has prev and next)*/\n    assert(node->prev && node->next);\n\n    node->recompress = 0;\n    /* Don't bother compressing small values */\n    if (node->sz < MIN_COMPRESS_BYTES)\n        return 0;\n\n    quicklistLZF *lzf = zmalloc(sizeof(*lzf) + node->sz);\n\n    /* Cancel if compression fails or doesn't compress small enough */\n    if (((lzf->sz = lzf_compress(node->entry, node->sz, lzf->compressed,\n                                 node->sz)) == 0) ||\n        lzf->sz + MIN_COMPRESS_IMPROVE >= node->sz) {\n        /* lzf_compress aborts/rejects compression if value not compressible. */\n        zfree(lzf);\n        return 0;\n    }\n    lzf = zrealloc(lzf, sizeof(*lzf) + lzf->sz);\n    zfree(node->entry);\n    node->entry = (unsigned char *)lzf;\n    node->encoding = QUICKLIST_NODE_ENCODING_LZF;\n    quicklistUpdateAllocSize(quicklist, sizeof(*lzf) + lzf->sz, node->sz);\n    return 1;\n}\n\n/* Compress only uncompressed nodes. */\n#define quicklistCompressNode(_ql, _node)                                      \\\n    do {                                                                       \\\n        if ((_node) && (_node)->encoding == QUICKLIST_NODE_ENCODING_RAW) {     \\\n            __quicklistCompressNode((_ql), (_node));                           \\\n        }                                                                      \\\n    } while (0)\n\n/* Uncompress the listpack in 'node' and update encoding details.\n * Returns 1 on successful decode, 0 on failure to decode. 
*/\nREDIS_STATIC int __quicklistDecompressNode(quicklist *quicklist, quicklistNode *node) {\n#ifdef REDIS_TEST\n    node->attempted_compress = 0;\n#endif\n    node->recompress = 0;\n\n    void *decompressed = zmalloc(node->sz);\n    quicklistLZF *lzf = (quicklistLZF *)node->entry;\n    if (lzf_decompress(lzf->compressed, lzf->sz, decompressed, node->sz) == 0) {\n        /* Someone requested decompress, but we can't decompress.  Not good. */\n        zfree(decompressed);\n        return 0;\n    }\n    size_t oldsize = sizeof(*lzf) + lzf->sz;\n    zfree(lzf);\n    quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n    node->entry = decompressed;\n    node->encoding = QUICKLIST_NODE_ENCODING_RAW;\n    return 1;\n}\n\n/* Decompress only compressed nodes. */\n#define quicklistDecompressNode(_ql, _node)                                    \\\n    do {                                                                       \\\n        if ((_node) && (_node)->encoding == QUICKLIST_NODE_ENCODING_LZF) {     \\\n            __quicklistDecompressNode((_ql), (_node));                         \\\n        }                                                                      \\\n    } while (0)\n\n/* Force node to not be immediately re-compressible */\n#define quicklistDecompressNodeForUse(_ql, _node)                              \\\n    do {                                                                       \\\n        if ((_node) && (_node)->encoding == QUICKLIST_NODE_ENCODING_LZF) {     \\\n            __quicklistDecompressNode((_ql), (_node));                         \\\n            (_node)->recompress = 1;                                           \\\n        }                                                                      \\\n    } while (0)\n\n/* Extract the raw LZF data from this quicklistNode.\n * Pointer to LZF data is assigned to '*data'.\n * Return value is the length of compressed LZF data. 
*/\nsize_t quicklistGetLzf(const quicklistNode *node, void **data) {\n    quicklistLZF *lzf = (quicklistLZF *)node->entry;\n    *data = lzf->compressed;\n    return lzf->sz;\n}\n\n#define quicklistAllowsCompression(_ql) ((_ql)->compress != 0)\n\n/* Force 'quicklist' to meet compression guidelines set by compress depth.\n * The only way to guarantee interior nodes get compressed is to iterate\n * to our \"interior\" compress depth then compress the next node we find.\n * If compress depth is larger than the entire list, we return immediately. */\nREDIS_STATIC void __quicklistCompress(quicklist *quicklist,\n                                      quicklistNode *node) {\n    if (quicklist->len == 0) return;\n\n    /* The head and tail should never be compressed (we should not attempt to recompress them) */\n    assert(quicklist->head->recompress == 0 && quicklist->tail->recompress == 0);\n\n    /* If length is less than our compress depth (from both sides),\n     * we can't compress anything. 
*/\n    if (!quicklistAllowsCompression(quicklist) ||\n        quicklist->len < (unsigned int)(quicklist->compress * 2))\n        return;\n\n#if 0\n    /* Optimized cases for small depth counts */\n    if (quicklist->compress == 1) {\n        quicklistNode *h = quicklist->head, *t = quicklist->tail;\n        quicklistDecompressNode(quicklist, h);\n        quicklistDecompressNode(quicklist, t);\n        if (h != node && t != node)\n            quicklistCompressNode(quicklist, node);\n        return;\n    } else if (quicklist->compress == 2) {\n        quicklistNode *h = quicklist->head, *hn = h->next, *hnn = hn->next;\n        quicklistNode *t = quicklist->tail, *tp = t->prev, *tpp = tp->prev;\n        quicklistDecompressNode(quicklist, h);\n        quicklistDecompressNode(quicklist, hn);\n        quicklistDecompressNode(quicklist, t);\n        quicklistDecompressNode(quicklist, tp);\n        if (h != node && hn != node && t != node && tp != node) {\n            quicklistCompressNode(quicklist, node);\n        }\n        if (hnn != t) {\n            quicklistCompressNode(quicklist, hnn);\n        }\n        if (tpp != h) {\n            quicklistCompressNode(quicklist, tpp);\n        }\n        return;\n    }\n#endif\n\n    /* Iterate until we reach compress depth for both sides of the list.\n     * Note: because we do length checks at the *top* of this function,\n     *       we can skip explicit null checks below. Everything exists. */\n    quicklistNode *forward = quicklist->head;\n    quicklistNode *reverse = quicklist->tail;\n    int depth = 0;\n    int in_depth = 0;\n    while (depth++ < quicklist->compress) {\n        quicklistDecompressNode(quicklist, forward);\n        quicklistDecompressNode(quicklist, reverse);\n\n        if (forward == node || reverse == node)\n            in_depth = 1;\n\n        /* We passed into compress depth of opposite side of the quicklist\n         * so there's no need to compress anything and we can exit. 
*/\n        if (forward == reverse || forward->next == reverse)\n            return;\n\n        forward = forward->next;\n        reverse = reverse->prev;\n    }\n\n    if (!in_depth)\n        quicklistCompressNode(quicklist, node);\n\n    /* At this point, forward and reverse are one node beyond depth */\n    quicklistCompressNode(quicklist, forward);\n    quicklistCompressNode(quicklist, reverse);\n}\n\n/* This macro is used to compress a node.\n *\n * If the 'recompress' flag of the node is true, we compress it directly without\n * checking whether it is within the range of compress depth.\n * However, it's important to ensure that the 'recompress' flag of head and tail\n * is always false, as we always assume that head and tail are not compressed.\n * \n * If the 'recompress' flag of the node is false, we check whether the node is\n * within the range of compress depth before compressing it. */\n#define quicklistCompress(_ql, _node)                                          \\\n    do {                                                                       \\\n        if ((_node)->recompress)                                               \\\n            quicklistCompressNode((_ql), (_node));                             \\\n        else                                                                   \\\n            __quicklistCompress((_ql), (_node));                               \\\n    } while (0)\n\n/* If we previously used quicklistDecompressNodeForUse(), just recompress. 
*/\n#define quicklistRecompressOnly(_ql, _node)                                    \\\n    do {                                                                       \\\n        if ((_node)->recompress)                                               \\\n            quicklistCompressNode((_ql), (_node));                             \\\n    } while (0)\n\n/* Insert 'new_node' after 'old_node' if 'after' is 1.\n * Insert 'new_node' before 'old_node' if 'after' is 0.\n * Note: 'new_node' is *always* uncompressed, so if we assign it to\n *       head or tail, we do not need to uncompress it. */\nREDIS_STATIC void __quicklistInsertNode(quicklist *quicklist,\n                                        quicklistNode *old_node,\n                                        quicklistNode *new_node, int after) {\n    if (after) {\n        new_node->prev = old_node;\n        if (old_node) {\n            new_node->next = old_node->next;\n            if (old_node->next)\n                old_node->next->prev = new_node;\n            old_node->next = new_node;\n        }\n        if (quicklist->tail == old_node)\n            quicklist->tail = new_node;\n    } else {\n        new_node->next = old_node;\n        if (old_node) {\n            new_node->prev = old_node->prev;\n            if (old_node->prev)\n                old_node->prev->next = new_node;\n            old_node->prev = new_node;\n        }\n        if (quicklist->head == old_node)\n            quicklist->head = new_node;\n    }\n    /* If this insert creates the only element so far, initialize head/tail. */\n    if (quicklist->len == 0) {\n        quicklist->head = quicklist->tail = new_node;\n    }\n\n    /* Update len first, so in __quicklistCompress we know exactly len */\n    quicklist->len++;\n\n    if (old_node)\n        quicklistCompress(quicklist, old_node);\n\n    quicklistCompress(quicklist, new_node);\n}\n\n/* Wrappers for node inserting around existing node. 
*/\nREDIS_STATIC void _quicklistInsertNodeBefore(quicklist *quicklist,\n                                             quicklistNode *old_node,\n                                             quicklistNode *new_node) {\n    __quicklistInsertNode(quicklist, old_node, new_node, 0);\n}\n\nREDIS_STATIC void _quicklistInsertNodeAfter(quicklist *quicklist,\n                                            quicklistNode *old_node,\n                                            quicklistNode *new_node) {\n    __quicklistInsertNode(quicklist, old_node, new_node, 1);\n}\n\n#define sizeMeetsSafetyLimit(sz) ((sz) <= SIZE_SAFETY_LIMIT)\n\n/* Calculate the size limit of the quicklist node based on negative 'fill'. */\nstatic size_t quicklistNodeNegFillLimit(int fill) {\n    assert(fill < 0);\n    size_t offset = (-fill) - 1;\n    size_t max_level = sizeof(optimization_level) / sizeof(*optimization_level);\n    if (offset >= max_level) offset = max_level - 1;\n    return optimization_level[offset];\n}\n\n/* Calculate the size limit or length limit of the quicklist node\n * based on 'fill'; this is also used to limit the list's listpack. */\nvoid quicklistNodeLimit(int fill, size_t *size, unsigned int *count) {\n    *size = SIZE_MAX;\n    *count = UINT_MAX;\n\n    if (fill >= 0) {\n        /* Ensure that one node has at least one entry */\n        *count = (fill == 0) ? 1 : fill;\n    } else {\n        *size = quicklistNodeNegFillLimit(fill);\n    }\n}\n\n/* Check if the limit of the quicklist node has been reached to determine if\n * insertions, merges or other operations that would increase the size of\n * the node can be performed.\n * Return 1 if it exceeds the limit, otherwise 0. 
*/\nint quicklistNodeExceedsLimit(int fill, size_t new_sz, unsigned int new_count) {\n    size_t sz_limit;\n    unsigned int count_limit;\n    quicklistNodeLimit(fill, &sz_limit, &count_limit);\n\n    if (likely(sz_limit != SIZE_MAX)) {\n        return new_sz > sz_limit;\n    } else if (count_limit != UINT_MAX) {\n        /* when we reach here we know that the limit is a size limit (which is\n         * safe, see comments next to optimization_level and SIZE_SAFETY_LIMIT) */\n        if (!sizeMeetsSafetyLimit(new_sz)) return 1;\n        return new_count > count_limit;\n    }\n\n    redis_unreachable();\n}\n\n/* Determines whether a given size qualifies as a large element based on a threshold\n * determined by the 'fill'. If the size is considered large, it will be stored in\n * a plain node. */\nstatic int isLargeElement(size_t sz, int fill) {\n    if (unlikely(packed_threshold != 0)) return sz >= packed_threshold;\n    if (fill >= 0)\n        return !sizeMeetsSafetyLimit(sz);\n    else\n        return sz > quicklistNodeNegFillLimit(fill);\n}\n\nREDIS_STATIC int _quicklistNodeAllowInsert(const quicklistNode *node,\n                                           const int fill, const size_t sz) {\n    if (unlikely(!node))\n        return 0;\n\n    if (unlikely(QL_NODE_IS_PLAIN(node) || isLargeElement(sz, fill)))\n        return 0;\n\n    /* Estimate how many bytes will be added to the listpack by this one entry.\n     * We prefer an overestimation, which would at worse lead to a few bytes\n     * below the lowest limit of 4k (see optimization_level).\n     * Note: No need to check for overflow below since both `node->sz` and\n     * `sz` are to be less than 1GB after the plain/large element check above. 
*/\n    size_t new_sz = node->sz + sz + SIZE_ESTIMATE_OVERHEAD;\n    if (unlikely(quicklistNodeExceedsLimit(fill, new_sz, node->count + 1)))\n        return 0;\n    return 1;\n}\n\nREDIS_STATIC int _quicklistNodeAllowMerge(const quicklistNode *a,\n                                          const quicklistNode *b,\n                                          const int fill) {\n    if (!a || !b)\n        return 0;\n\n    if (unlikely(QL_NODE_IS_PLAIN(a) || QL_NODE_IS_PLAIN(b)))\n        return 0;\n\n    /* approximate merged listpack size (- 7 to remove one listpack\n     * header/trailer, see LP_HDR_SIZE and LP_EOF) */\n    unsigned int merge_sz = a->sz + b->sz - 7;\n    if (unlikely(quicklistNodeExceedsLimit(fill, merge_sz, a->count + b->count)))\n        return 0;\n    return 1;\n}\n\n#define quicklistNodeUpdateSz(node)                                            \\\n    do {                                                                       \\\n        (node)->sz = lpBytes((node)->entry);                                   \\\n    } while (0)\n\nstatic quicklistNode* __quicklistCreateNode(quicklist *quicklist, int container, void *value, size_t sz) {\n    quicklistNode *new_node = quicklistCreateNode(quicklist);\n    new_node->container = container;\n    if (container == QUICKLIST_NODE_CONTAINER_PLAIN) {\n        new_node->entry = zmalloc(sz);\n        memcpy(new_node->entry, value, sz);\n        new_node->sz = sz;\n    } else {\n        new_node->entry = lpPrepend(lpNew(0), value, sz);\n        quicklistNodeUpdateSz(new_node);\n    }\n    quicklist->alloc_size += new_node->sz;\n    new_node->count++;\n    return new_node;\n}\n\nstatic void __quicklistInsertPlainNode(quicklist *quicklist, quicklistNode *old_node,\n                                       void *value, size_t sz, int after)\n{\n    quicklistNode *new_node = __quicklistCreateNode(quicklist, QUICKLIST_NODE_CONTAINER_PLAIN, value, sz);\n    __quicklistInsertNode(quicklist, old_node, new_node, after);\n    
quicklist->count++;\n}\n\n/* Add new entry to head node of quicklist.\n *\n * Returns 0 if used existing head.\n * Returns 1 if new head created. */\nint quicklistPushHead(quicklist *quicklist, void *value, size_t sz) {\n    quicklistNode *orig_head = quicklist->head;\n\n    if (unlikely(isLargeElement(sz, quicklist->fill))) {\n        __quicklistInsertPlainNode(quicklist, quicklist->head, value, sz, 0);\n        return 1;\n    }\n\n    if (likely(\n            _quicklistNodeAllowInsert(quicklist->head, quicklist->fill, sz))) {\n        size_t oldsize = quicklist->head->sz;\n        quicklist->head->entry = lpPrepend(quicklist->head->entry, value, sz);\n        quicklistNodeUpdateSz(quicklist->head);\n        quicklistUpdateAllocSize(quicklist, quicklist->head->sz, oldsize);\n    } else {\n        quicklistNode *node = quicklistCreateNode(quicklist);\n        node->entry = lpPrepend(lpNew(0), value, sz);\n        quicklistNodeUpdateSz(node);\n        quicklistUpdateAllocSize(quicklist, node->sz, 0);\n        _quicklistInsertNodeBefore(quicklist, quicklist->head, node);\n    }\n    quicklist->count++;\n    quicklist->head->count++;\n    return (orig_head != quicklist->head);\n}\n\n/* Add new entry to tail node of quicklist.\n *\n * Returns 0 if used existing tail.\n * Returns 1 if new tail created. 
*/\nint quicklistPushTail(quicklist *quicklist, void *value, size_t sz) {\n    quicklistNode *orig_tail = quicklist->tail;\n    if (unlikely(isLargeElement(sz, quicklist->fill))) {\n        __quicklistInsertPlainNode(quicklist, quicklist->tail, value, sz, 1);\n        return 1;\n    }\n\n    if (likely(\n            _quicklistNodeAllowInsert(quicklist->tail, quicklist->fill, sz))) {\n        size_t oldsize = quicklist->tail->sz;\n        quicklist->tail->entry = lpAppend(quicklist->tail->entry, value, sz);\n        quicklistNodeUpdateSz(quicklist->tail);\n        quicklistUpdateAllocSize(quicklist, quicklist->tail->sz, oldsize);\n    } else {\n        quicklistNode *node = quicklistCreateNode(quicklist);\n        node->entry = lpAppend(lpNew(0), value, sz);\n        quicklistNodeUpdateSz(node);\n        quicklistUpdateAllocSize(quicklist, node->sz, 0);\n        _quicklistInsertNodeAfter(quicklist, quicklist->tail, node);\n    }\n    quicklist->count++;\n    quicklist->tail->count++;\n    return (orig_tail != quicklist->tail);\n}\n\n/* Create new node consisting of a pre-formed listpack.\n * Used for loading RDBs where entire listpacks have been stored\n * to be retrieved later. 
*/\nvoid quicklistAppendListpack(quicklist *quicklist, unsigned char *zl) {\n    quicklistNode *node = quicklistCreateNode(quicklist);\n\n    node->entry = zl;\n    node->count = lpLength(node->entry);\n    node->sz = lpBytes(zl);\n\n    quicklist->alloc_size += node->sz;\n    _quicklistInsertNodeAfter(quicklist, quicklist->tail, node);\n    quicklist->count += node->count;\n}\n\n/* Create new node consisting of a pre-formed plain node.\n * Used for loading RDBs where an entire plain node has been stored\n * to be retrieved later.\n * data - the data to add (pointer becomes the responsibility of quicklist) */\nvoid quicklistAppendPlainNode(quicklist *quicklist, unsigned char *data, size_t sz) {\n    quicklistNode *node = quicklistCreateNode(quicklist);\n\n    node->entry = data;\n    node->count = 1;\n    node->sz = sz;\n    node->container = QUICKLIST_NODE_CONTAINER_PLAIN;\n\n    quicklist->alloc_size += sz;\n    _quicklistInsertNodeAfter(quicklist, quicklist->tail, node);\n    quicklist->count += node->count;\n}\n\n#define quicklistDeleteIfEmpty(ql, n)                                          \\\n    do {                                                                       \\\n        if ((n)->count == 0) {                                                 \\\n            __quicklistDelNode((ql), (n));                                     \\\n            (n) = NULL;                                                        \\\n        }                                                                      \\\n    } while (0)\n\nREDIS_STATIC void __quicklistDelNode(quicklist *quicklist,\n                                     quicklistNode *node) {\n    /* Update the bookmark if any */\n    quicklistBookmark *bm = _quicklistBookmarkFindByNode(quicklist, node);\n    if (bm) {\n        bm->node = node->next;\n        /* if the bookmark pointed to the last node, delete it. 
*/\n        if (!bm->node)\n            _quicklistBookmarkDelete(quicklist, bm);\n    }\n\n    if (node->next)\n        node->next->prev = node->prev;\n    if (node->prev)\n        node->prev->next = node->next;\n\n    if (node == quicklist->tail) {\n        quicklist->tail = node->prev;\n    }\n\n    if (node == quicklist->head) {\n        quicklist->head = node->next;\n    }\n\n    /* Update len first, so that __quicklistCompress sees the correct len */\n    quicklist->len--;\n    quicklist->count -= node->count;\n\n    /* If we deleted a node within our compress depth, we\n     * now have compressed nodes needing to be decompressed. */\n    __quicklistCompress(quicklist, NULL);\n\n    if (node->encoding == QUICKLIST_NODE_ENCODING_LZF) {\n        quicklistLZF *lzf = (quicklistLZF *)node->entry;\n        size_t lzf_sz = sizeof(*lzf) + lzf->sz;\n        quicklist->alloc_size -= lzf_sz;\n    } else {\n        quicklist->alloc_size -= node->sz;\n    }\n    zfree(node->entry);\n    size_t usable;\n    zfree_usable(node, &usable);\n    quicklist->alloc_size -= usable;\n}\n\n/* Delete one entry from list given the node for the entry and a pointer\n * to the entry in the node.\n *\n * Note: quicklistDelIndex() *requires* uncompressed nodes because you\n *       already had to get *p from an uncompressed node somewhere.\n *\n * Returns 1 if the entire node was deleted, 0 if node still exists.\n * Also updates in/out param 'p' with the next offset in the listpack. 
*/\nREDIS_STATIC int quicklistDelIndex(quicklist *quicklist, quicklistNode *node,\n                                   unsigned char **p) {\n    int gone = 0;\n\n    if (unlikely(QL_NODE_IS_PLAIN(node))) {\n        __quicklistDelNode(quicklist, node);\n        return 1;\n    }\n    size_t oldsize = node->sz;\n    node->entry = lpDelete(node->entry, *p, p);\n    quicklistNodeUpdateSz(node);\n    quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n    node->count--;\n    if (node->count == 0) {\n        gone = 1;\n        __quicklistDelNode(quicklist, node);\n    }\n    quicklist->count--;\n    /* If we deleted the node, the original node is no longer valid */\n    return gone ? 1 : 0;\n}\n\n/* Delete one element represented by 'entry'\n *\n * 'entry' stores enough metadata to delete the proper position in\n * the correct listpack in the correct quicklist node. */\nvoid quicklistDelEntry(quicklistIter *iter, quicklistEntry *entry) {\n    quicklistNode *prev = entry->node->prev;\n    quicklistNode *next = entry->node->next;\n    int deleted_node = quicklistDelIndex((quicklist *)entry->quicklist,\n                                         entry->node, &entry->zi);\n\n    /* after delete, the zi is now invalid for any future usage. */\n    iter->zi = NULL;\n\n    /* If current node is deleted, we must update iterator node and offset. 
*/\n    if (deleted_node) {\n        if (iter->direction == AL_START_HEAD) {\n            iter->current = next;\n            iter->offset = 0;\n        } else if (iter->direction == AL_START_TAIL) {\n            iter->current = prev;\n            iter->offset = -1;\n        }\n    }\n    /* else if (!deleted_node), no changes needed.\n     * we already reset iter->zi above, and the existing iter->offset\n     * doesn't move again because:\n     *   - [1, 2, 3] => delete offset 1 => [1, 3]: next element still offset 1\n     *   - [1, 2, 3] => delete offset 0 => [2, 3]: next element still offset 0\n     *  if we deleted the last element at offset N and now\n     *  length of this listpack is N-1, the next call into\n     *  quicklistNext() will jump to the next node. */\n}\n\n/* Replace quicklist entry by 'data' with length 'sz'. */\nvoid quicklistReplaceEntry(quicklistIter *iter, quicklistEntry *entry,\n                           void *data, size_t sz)\n{\n    quicklist* quicklist = iter->quicklist;\n    quicklistNode *node = entry->node;\n    unsigned char *newentry;\n\n    if (likely(!QL_NODE_IS_PLAIN(entry->node) && !isLargeElement(sz, quicklist->fill) &&\n        (newentry = lpReplace(entry->node->entry, &entry->zi, data, sz)) != NULL))\n    {\n        entry->node->entry = newentry;\n        size_t oldsize = entry->node->sz;\n        quicklistNodeUpdateSz(entry->node);\n        quicklistUpdateAllocSize(quicklist, entry->node->sz, oldsize);\n        /* quicklistNext() and quicklistInitIteratorEntryAtIdx() provide an uncompressed node */\n        quicklistCompress(quicklist, entry->node);\n    } else if (QL_NODE_IS_PLAIN(entry->node)) {\n        if (isLargeElement(sz, quicklist->fill)) {\n            zfree(entry->node->entry);\n            entry->node->entry = zmalloc(sz);\n            size_t oldsize = entry->node->sz;\n            quicklistUpdateAllocSize(quicklist, sz, oldsize);\n            entry->node->sz = sz;\n            memcpy(entry->node->entry, data, 
sz);\n            quicklistCompress(quicklist, entry->node);\n        } else {\n            quicklistInsertAfter(iter, entry, data, sz);\n            __quicklistDelNode(quicklist, entry->node);\n        }\n    } else { /* The node is full or data is a large element */\n        quicklistNode *split_node = NULL, *new_node;\n        node->dont_compress = 1; /* Prevent compression in __quicklistInsertNode() */\n\n        /* If the entry is not at the tail, split the node at the entry's offset. */\n        if (entry->offset != node->count - 1 && entry->offset != -1)\n            split_node = _quicklistSplitNode(quicklist, node, entry->offset, 1);\n\n        /* Create a new node and insert it after the original node.\n         * If the original node was split, insert the split node after the new node. */\n        new_node = __quicklistCreateNode(quicklist, isLargeElement(sz, quicklist->fill) ?\n            QUICKLIST_NODE_CONTAINER_PLAIN : QUICKLIST_NODE_CONTAINER_PACKED, data, sz);\n        __quicklistInsertNode(quicklist, node, new_node, 1);\n        if (split_node) __quicklistInsertNode(quicklist, new_node, split_node, 1);\n        quicklist->count++;\n\n        /* Delete the replaced element. */\n        if (entry->node->count == 1) {\n            __quicklistDelNode(quicklist, entry->node);\n        } else {\n            unsigned char *p = lpSeek(entry->node->entry, -1);\n            quicklistDelIndex(quicklist, entry->node, &p);\n            entry->node->dont_compress = 0; /* Re-enable compression */\n            new_node = _quicklistMergeNodes(quicklist, new_node);\n            /* We can't know if the current node and its sibling nodes are correctly compressed,\n             * and we don't know if they are within the range of compress depth, so we need to\n             * use quicklistCompress() for compression, which checks if node is within compress\n             * depth before compressing. 
*/\n            quicklistCompress(quicklist, new_node);\n            quicklistCompress(quicklist, new_node->prev);\n            if (new_node->next) quicklistCompress(quicklist, new_node->next);\n        }\n    }\n\n    /* In any case, we reset iterator to forbid use of iterator after insert.\n     * Notice: iter->current has been compressed above. */\n    resetIterator(iter);\n}\n\n/* Replace quicklist entry at offset 'index' by 'data' with length 'sz'.\n *\n * Returns 1 if replace happened.\n * Returns 0 if replace failed and no changes happened. */\nint quicklistReplaceAtIndex(quicklist *quicklist, long index, void *data,\n                            size_t sz) {\n    quicklistEntry entry;\n    quicklistIter iter;\n    if (likely(quicklistInitIteratorEntryAtIdx(&iter, quicklist, index, &entry))) {\n        quicklistReplaceEntry(&iter, &entry, data, sz);\n        quicklistResetIterator(&iter);\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Given two nodes, try to merge their listpacks.\n *\n * This helps us not have a quicklist with 3 element listpacks if\n * our fill factor can handle much higher levels.\n *\n * Note: 'a' must be to the LEFT of 'b'.\n *\n * After calling this function, both 'a' and 'b' should be considered\n * unusable.  The return value from this function must be used\n * instead of re-using any of the quicklistNode input arguments.\n *\n * Returns the input node picked to merge against or NULL if\n * merging was not possible. */\nREDIS_STATIC quicklistNode *_quicklistListpackMerge(quicklist *quicklist,\n                                                    quicklistNode *a,\n                                                    quicklistNode *b) {\n    D(\"Requested merge (a,b) (%u, %u)\", a->count, b->count);\n\n    quicklistDecompressNode(quicklist, a);\n    quicklistDecompressNode(quicklist, b);\n    if ((lpMerge(&a->entry, &b->entry))) {\n        /* We merged listpacks! Now remove the unused quicklistNode. 
*/\n        quicklistNode *keep = NULL, *nokeep = NULL;\n        if (!a->entry) {\n            nokeep = a;\n            keep = b;\n        } else if (!b->entry) {\n            nokeep = b;\n            keep = a;\n        }\n        keep->count = lpLength(keep->entry);\n        size_t oldsize = keep->sz;\n        quicklistNodeUpdateSz(keep);\n        quicklistUpdateAllocSize(quicklist, keep->sz, oldsize);\n        keep->recompress = 0; /* Prevent 'keep' from being recompressed if\n                               * it becomes head or tail after merging. */\n\n        nokeep->count = 0;\n        __quicklistDelNode(quicklist, nokeep);\n        quicklistCompress(quicklist, keep);\n        return keep;\n    } else {\n        /* else, the merge returned NULL and nothing changed. */\n        return NULL;\n    }\n}\n\n/* Attempt to merge listpacks within two nodes on either side of 'center'.\n *\n * We attempt to merge:\n *   - (center->prev->prev, center->prev)\n *   - (center->next, center->next->next)\n *   - (center->prev, center)\n *   - (center, center->next)\n * \n * Returns the new 'center' after merging.\n */\nREDIS_STATIC quicklistNode *_quicklistMergeNodes(quicklist *quicklist, quicklistNode *center) {\n    int fill = quicklist->fill;\n    quicklistNode *prev, *prev_prev, *next, *next_next, *target;\n    prev = prev_prev = next = next_next = target = NULL;\n\n    if (center->prev) {\n        prev = center->prev;\n        if (center->prev->prev)\n            prev_prev = center->prev->prev;\n    }\n\n    if (center->next) {\n        next = center->next;\n        if (center->next->next)\n            next_next = center->next->next;\n    }\n\n    /* Try to merge prev_prev and prev */\n    if (_quicklistNodeAllowMerge(prev, prev_prev, fill)) {\n        _quicklistListpackMerge(quicklist, prev_prev, prev);\n        prev_prev = prev = NULL; /* they could have moved, invalidate them. 
*/\n    }\n\n    /* Try to merge next and next_next */\n    if (_quicklistNodeAllowMerge(next, next_next, fill)) {\n        _quicklistListpackMerge(quicklist, next, next_next);\n        next = next_next = NULL; /* they could have moved, invalidate them. */\n    }\n\n    /* Try to merge center node and previous node */\n    if (_quicklistNodeAllowMerge(center, center->prev, fill)) {\n        target = _quicklistListpackMerge(quicklist, center->prev, center);\n        center = NULL; /* center could have been deleted, invalidate it. */\n    } else {\n        /* else, we didn't merge here, but target needs to be valid below. */\n        target = center;\n    }\n\n    /* Use result of center merge (or original) to merge with next node. */\n    if (_quicklistNodeAllowMerge(target, target->next, fill)) {\n        target = _quicklistListpackMerge(quicklist, target, target->next);\n    }\n    return target;\n}\n\n/* Split 'node' into two parts, parameterized by 'offset' and 'after'.\n *\n * The 'after' argument controls which quicklistNode gets returned.\n * If 'after'==1, the returned node has the elements after 'offset':\n *                the returned node has elements [OFFSET+1, END] and\n *                the input node keeps elements [0, OFFSET].\n * If 'after'==0, the returned node has the elements before 'offset':\n *                the returned node has elements [0, OFFSET-1] and\n *                the input node keeps elements [OFFSET, END].\n *\n * The input node keeps all elements not taken by the returned node.\n *\n * Returns newly created node or NULL if split not possible. 
*/\nREDIS_STATIC quicklistNode *_quicklistSplitNode(quicklist *quicklist, quicklistNode *node,\n                                                int offset, int after) {\n    size_t zl_sz = node->sz;\n\n    /* New node is detached on return but all callers add it back to quicklist\n     * so we account its allocation here and below directly on quicklist. */\n    quicklistNode *new_node = quicklistCreateNode(quicklist);\n    new_node->entry = zmalloc(zl_sz);\n\n    /* Copy original listpack so we can split it */\n    memcpy(new_node->entry, node->entry, zl_sz);\n\n    /* Need positive offset for calculating extent below. */\n    if (offset < 0) offset = node->count + offset;\n\n    /* Ranges to be trimmed: -1 here means \"continue deleting until the list ends\" */\n    int orig_start = after ? offset + 1 : 0;\n    int orig_extent = after ? -1 : offset;\n    int new_start = after ? 0 : offset;\n    int new_extent = after ? offset + 1 : -1;\n\n    D(\"After %d (%d); ranges: [%d, %d], [%d, %d]\", after, offset, orig_start,\n      orig_extent, new_start, new_extent);\n\n    size_t oldsize = node->sz;\n    node->entry = lpDeleteRange(node->entry, orig_start, orig_extent);\n    node->count = lpLength(node->entry);\n    quicklistNodeUpdateSz(node);\n    quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n\n    new_node->entry = lpDeleteRange(new_node->entry, new_start, new_extent);\n    new_node->count = lpLength(new_node->entry);\n    quicklistNodeUpdateSz(new_node);\n    quicklistUpdateAllocSize(quicklist, new_node->sz, 0);\n\n    D(\"After split lengths: orig (%d), new (%d)\", node->count, new_node->count);\n    return new_node;\n}\n\n/* Insert a new entry before or after existing entry 'entry'.\n *\n * If after==1, the new value is inserted after 'entry', otherwise\n * the new value is inserted before 'entry'. 
*/\nREDIS_STATIC void _quicklistInsert(quicklistIter *iter, quicklistEntry *entry,\n                                   void *value, const size_t sz, int after)\n{\n    quicklist *quicklist = iter->quicklist;\n    int full = 0, at_tail = 0, at_head = 0, avail_next = 0, avail_prev = 0;\n    int fill = quicklist->fill;\n    quicklistNode *node = entry->node;\n    quicklistNode *new_node = NULL;\n\n    if (!node) {\n        /* we have no reference node, so let's create the only node in the list */\n        D(\"No node given!\");\n        if (unlikely(isLargeElement(sz, quicklist->fill))) {\n            __quicklistInsertPlainNode(quicklist, quicklist->tail, value, sz, after);\n            return;\n        }\n        new_node = quicklistCreateNode(quicklist);\n        new_node->entry = lpPrepend(lpNew(0), value, sz);\n        quicklistNodeUpdateSz(new_node);\n        quicklistUpdateAllocSize(quicklist, new_node->sz, 0);\n        __quicklistInsertNode(quicklist, NULL, new_node, after);\n        new_node->count++;\n        quicklist->count++;\n        return;\n    }\n\n    /* Populate accounting flags for easier boolean checks later */\n    if (!_quicklistNodeAllowInsert(node, fill, sz)) {\n        D(\"Current node is full with count %d with requested fill %d\",\n          node->count, fill);\n        full = 1;\n    }\n\n    if (after && (entry->offset == node->count - 1 || entry->offset == -1)) {\n        D(\"At Tail of current listpack\");\n        at_tail = 1;\n        if (_quicklistNodeAllowInsert(node->next, fill, sz)) {\n            D(\"Next node is available.\");\n            avail_next = 1;\n        }\n    }\n\n    if (!after && (entry->offset == 0 || entry->offset == -(node->count))) {\n        D(\"At Head\");\n        at_head = 1;\n        if (_quicklistNodeAllowInsert(node->prev, fill, sz)) {\n            D(\"Prev node is available.\");\n            avail_prev = 1;\n        }\n    }\n\n    if (unlikely(isLargeElement(sz, quicklist->fill))) {\n        if 
(QL_NODE_IS_PLAIN(node) || (at_tail && after) || (at_head && !after)) {\n            __quicklistInsertPlainNode(quicklist, node, value, sz, after);\n        } else {\n            quicklistDecompressNodeForUse(quicklist, node);\n            new_node = _quicklistSplitNode(quicklist, node, entry->offset, after);\n            quicklistNode *entry_node = __quicklistCreateNode(quicklist, QUICKLIST_NODE_CONTAINER_PLAIN, value, sz);\n            __quicklistInsertNode(quicklist, node, entry_node, after);\n            __quicklistInsertNode(quicklist, entry_node, new_node, after);\n            quicklist->count++;\n        }\n        return;\n    }\n\n    /* Now determine where and how to insert the new element */\n    if (!full && after) {\n        D(\"Not full, inserting after current position.\");\n        quicklistDecompressNodeForUse(quicklist, node);\n        node->entry = lpInsertString(node->entry, value, sz, entry->zi, LP_AFTER, NULL);\n        size_t oldsize = node->sz;\n        quicklistNodeUpdateSz(node);\n        quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n        node->count++;\n        quicklistRecompressOnly(quicklist, node);\n    } else if (!full && !after) {\n        D(\"Not full, inserting before current position.\");\n        quicklistDecompressNodeForUse(quicklist, node);\n        node->entry = lpInsertString(node->entry, value, sz, entry->zi, LP_BEFORE, NULL);\n        size_t oldsize = node->sz;\n        quicklistNodeUpdateSz(node);\n        quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n        node->count++;\n        quicklistRecompressOnly(quicklist, node);\n    } else if (full && at_tail && avail_next && after) {\n        /* If we are: at tail, next has free space, and inserting after:\n         *   - insert entry at head of next node. 
*/\n        D(\"Full and tail, but next isn't full; inserting next node head\");\n        new_node = node->next;\n        quicklistDecompressNodeForUse(quicklist, new_node);\n        new_node->entry = lpPrepend(new_node->entry, value, sz);\n        size_t oldsize = new_node->sz;\n        quicklistNodeUpdateSz(new_node);\n        quicklistUpdateAllocSize(quicklist, new_node->sz, oldsize);\n        new_node->count++;\n        quicklistRecompressOnly(quicklist, new_node);\n        quicklistRecompressOnly(quicklist, node);\n    } else if (full && at_head && avail_prev && !after) {\n        /* If we are: at head, previous has free space, and inserting before:\n         *   - insert entry at tail of previous node. */\n        D(\"Full and head, but prev isn't full, inserting prev node tail\");\n        new_node = node->prev;\n        quicklistDecompressNodeForUse(quicklist, new_node);\n        new_node->entry = lpAppend(new_node->entry, value, sz);\n        size_t oldsize = new_node->sz;\n        quicklistNodeUpdateSz(new_node);\n        quicklistUpdateAllocSize(quicklist, new_node->sz, oldsize);\n        new_node->count++;\n        quicklistRecompressOnly(quicklist, new_node);\n        quicklistRecompressOnly(quicklist, node);\n    } else if (full && ((at_tail && !avail_next && after) ||\n                        (at_head && !avail_prev && !after))) {\n        /* If we are: full, and our prev/next has no available space, then:\n         *   - create new node and attach to quicklist */\n        D(\"\\tprovisioning new node...\");\n        new_node = quicklistCreateNode(quicklist);\n        new_node->entry = lpPrepend(lpNew(0), value, sz);\n        quicklistNodeUpdateSz(new_node);\n        quicklistUpdateAllocSize(quicklist, new_node->sz, 0);\n        new_node->count++;\n        __quicklistInsertNode(quicklist, node, new_node, after);\n    } else if (full) {\n        /* else, node is full we need to split it. 
*/\n        /* covers both after and !after cases */\n        D(\"\\tsplitting node...\");\n        quicklistDecompressNodeForUse(quicklist, node);\n        new_node = _quicklistSplitNode(quicklist, node, entry->offset, after);\n        if (after)\n            new_node->entry = lpPrepend(new_node->entry, value, sz);\n        else\n            new_node->entry = lpAppend(new_node->entry, value, sz);\n        size_t oldsize = new_node->sz;\n        quicklistNodeUpdateSz(new_node);\n        quicklistUpdateAllocSize(quicklist, new_node->sz, oldsize);\n        new_node->count++;\n        __quicklistInsertNode(quicklist, node, new_node, after);\n        _quicklistMergeNodes(quicklist, node);\n    }\n\n    quicklist->count++;\n\n    /* In any case, we reset iterator to forbid use of iterator after insert.\n     * Notice: iter->current has been compressed in _quicklistInsert(). */\n    resetIterator(iter); \n}\n\nvoid quicklistInsertBefore(quicklistIter *iter, quicklistEntry *entry,\n                           void *value, const size_t sz)\n{\n    _quicklistInsert(iter, entry, value, sz, 0);\n}\n\nvoid quicklistInsertAfter(quicklistIter *iter, quicklistEntry *entry,\n                          void *value, const size_t sz)\n{\n    _quicklistInsert(iter, entry, value, sz, 1);\n}\n\n/* Delete a range of elements from the quicklist.\n *\n * elements may span across multiple quicklistNodes, so we\n * have to be careful about tracking where we start and end.\n *\n * Returns 1 if entries were deleted, 0 if nothing was deleted. */\nint quicklistDelRange(quicklist *quicklist, const long start,\n                      const long count) {\n    if (count <= 0)\n        return 0;\n\n    unsigned long extent = count; /* range is inclusive of start position */\n\n    if (start >= 0 && extent > (quicklist->count - start)) {\n        /* if requesting delete more elements than exist, limit to list size. 
*/\n        extent = quicklist->count - start;\n    } else if (start < 0 && extent > (unsigned long)(-start)) {\n        /* else, if at negative offset, limit max size to rest of list. */\n        extent = -start; /* c.f. LREM -29 29; just delete until end. */\n    }\n\n    quicklistIter iter;\n    if (!quicklistInitIteratorAtIdx(&iter, quicklist, AL_START_TAIL, start))\n        return 0;\n\n    D(\"Quicklist delete request for start %ld, count %ld, extent: %ld\", start,\n      count, extent);\n    quicklistNode *node = iter.current;\n    long offset = iter.offset;\n    quicklistResetIterator(&iter);\n\n    /* iterate over next nodes until everything is deleted. */\n    while (extent) {\n        quicklistNode *next = node->next;\n\n        unsigned long del;\n        int delete_entire_node = 0;\n        if (offset == 0 && extent >= node->count) {\n            /* If we are deleting more than the count of this node, we\n             * can just delete the entire node without listpack math. */\n            delete_entire_node = 1;\n            del = node->count;\n        } else if (offset >= 0 && extent + offset >= node->count) {\n            /* If the deletion continues into later nodes, delete everything\n             * from 'offset' to the end of this node. */\n            del = node->count - offset;\n        } else if (offset < 0) {\n            /* If offset is negative, we are in the first run of this loop\n             * and we are deleting the entire range\n             * from this start offset to end of list.  Since the negative\n             * offset is the number of elements until the tail of the list,\n             * just use it directly as the deletion count. 
*/\n            del = -offset;\n\n            /* If the positive offset is greater than the remaining extent,\n             * we only delete the remaining extent, not the entire offset.\n             */\n            if (del > extent)\n                del = extent;\n        } else {\n            /* else, we are deleting less than the extent of this node, so\n             * use extent directly. */\n            del = extent;\n        }\n\n        D(\"[%ld]: asking to del: %ld because offset: %ld; (ENTIRE NODE: %d), \"\n          \"node count: %u\",\n          extent, del, offset, delete_entire_node, node->count);\n\n        if (delete_entire_node || QL_NODE_IS_PLAIN(node)) {\n            __quicklistDelNode(quicklist, node);\n        } else {\n            quicklistDecompressNodeForUse(quicklist, node);\n            node->entry = lpDeleteRange(node->entry, offset, del);\n            size_t oldsize = node->sz;\n            quicklistNodeUpdateSz(node);\n            quicklistUpdateAllocSize(quicklist, node->sz, oldsize);\n            node->count -= del;\n            quicklist->count -= del;\n            quicklistDeleteIfEmpty(quicklist, node);\n            if (node)\n                quicklistRecompressOnly(quicklist, node);\n        }\n\n        extent -= del;\n\n        node = next;\n\n        offset = 0;\n    }\n    return 1;\n}\n\n/* Compare a quicklistEntry with a raw value.\n *\n * If the entry stores a string (entry->value != NULL), perform a binary-safe\n * comparison against p2.\n *\n * If the entry stores an integer (entry->value == NULL), lazily convert p2 to\n * a long long using string2ll() once and cache the result using cached_longval\n * and cached_valid.\n *\n * This optimization avoids repeatedly calling string2ll() in tight loops.\n * - If cached_valid is NULL: caching is skipped and p2 is converted on\n *   every call.\n * - If *cached_valid == 0: no conversion has been attempted yet; it is\n *   performed now and its outcome is cached.\n * - If *cached_valid is 1 (valid integer) or -1 (not an integer): the\n *   cached outcome is reused without converting again.\n *\n * Returns 1 if equal, 0 otherwise.\n */\nint quicklistCompare(quicklistEntry 
*entry, unsigned char *p2, const size_t p2_len,\n                     long long *cached_longval, int *cached_valid) {\n    if (entry->value) {\n        return ((entry->sz == p2_len) && (memcmp(entry->value, p2, p2_len) == 0));\n    } else {\n        /* We use string2ll() to get an integer representation of the\n         * string 'p2' and compare it to 'entry->longval'; it's much\n         * faster than converting the integer to a string and comparing. */\n        if (cached_valid != NULL) {\n            /* Use caching */\n            if (*cached_valid == 0) {\n                if (string2ll((const char *)p2, p2_len, cached_longval)) {\n                    *cached_valid = 1;\n                } else {\n                    *cached_valid = -1;\n                }\n            }\n            return (*cached_valid == 1 && entry->longval == *cached_longval);\n        } else {\n            /* No caching - direct conversion */\n            long long sval;\n            if (string2ll((const char *)p2, p2_len, &sval))\n                return entry->longval == sval;\n        }\n    }\n    return 0;\n}\n\n/* Initialize a quicklist iterator 'iter'. After the initialization every\n * call to quicklistNext() will return the next element of the quicklist. */\nvoid quicklistInitIterator(quicklistIter *iter, quicklist *quicklist, int direction) {\n    if (direction == AL_START_HEAD) {\n        iter->current = quicklist->head;\n        iter->offset = 0;\n    } else if (direction == AL_START_TAIL) {\n        iter->current = quicklist->tail;\n        iter->offset = -1;\n    }\n\n    iter->direction = direction;\n    iter->quicklist = quicklist;\n\n    iter->zi = NULL;\n}\n\n/* Initialize an iterator at a specific offset 'idx' and make the iterator\n * return nodes in 'direction' direction. Returns 1 on success, 0 if index out of range. 
*/\nint quicklistInitIteratorAtIdx(quicklistIter *iter, quicklist *quicklist,\n                               const int direction, const long long idx)\n{\n    quicklistNode *n;\n    unsigned long long accum = 0;\n    unsigned long long index;\n    int forward = idx < 0 ? 0 : 1; /* < 0 -> reverse, 0+ -> forward */\n\n    quicklistInitIterator(iter, quicklist, direction);\n\n    index = forward ? idx : (-idx) - 1;\n    if (index >= quicklist->count) {\n        iter->current = NULL;\n        return 0;\n    }\n\n    /* Seek in the other direction if that way is shorter. */\n    int seek_forward = forward;\n    unsigned long long seek_index = index;\n    if (index > (quicklist->count - 1) / 2) {\n        seek_forward = !forward;\n        seek_index = quicklist->count - 1 - index;\n    }\n\n    n = seek_forward ? quicklist->head : quicklist->tail;\n    while (likely(n)) {\n        if ((accum + n->count) > seek_index) {\n            break;\n        } else {\n            D(\"Skipping over (%p) %u at accum %llu\", (void *)n, n->count,\n              accum);\n            accum += n->count;\n            n = seek_forward ? n->next : n->prev;\n        }\n    }\n\n    if (!n) {\n        iter->current = NULL;\n        return 0;\n    }\n\n    /* Fix accum so it looks like we sought in the other direction. */\n    if (seek_forward != forward) accum = quicklist->count - n->count - accum;\n\n    D(\"Found node: %p at accum %llu, idx %llu, sub+ %llu, sub- %llu\", (void *)n,\n      accum, index, index - accum, (-index) - 1 + accum);\n\n    iter->current = n;\n    if (forward) {\n        /* forward = normal head-to-tail offset. */\n        iter->offset = index - accum;\n    } else {\n        /* reverse = need negative offset for tail-to-head, so undo\n         * the result of the original index = (-idx) - 1 above. */\n        iter->offset = (-index) - 1 + accum;\n    }\n\n    return 1;\n}\n\n/* Reset iterator.\n * If we still have a valid current node, then re-encode current node. 
*/\nvoid quicklistResetIterator(quicklistIter *iter) {\n    if (iter->current)\n        quicklistCompress(iter->quicklist, iter->current);\n}\n\n/* Get next element in iterator.\n *\n * Note: You must NOT insert into the list while iterating over it.\n * You *may* delete from the list while iterating using the\n * quicklistDelEntry() function.\n * If you insert into the quicklist while iterating, you should\n * re-create the iterator after your addition.\n *\n * quicklistIter iter;\n * quicklistEntry entry;\n * quicklistInitIterator(&iter, quicklist, <direction>);\n * while (quicklistNext(&iter, &entry)) {\n *     if (entry.value)\n *          [[ use entry.value with entry.sz ]]\n *     else\n *          [[ use entry.longval ]]\n * }\n *\n * Populates 'entry' with values for this iteration.\n * Returns 0 when iteration is complete or if iteration not possible.\n * If return value is 0, the contents of 'entry' are not valid.\n */\nint quicklistNext(quicklistIter *iter, quicklistEntry *entry) {\n    initEntry(entry);\n    entry->quicklist = iter->quicklist;\n    entry->node = iter->current;\n\n    if (!iter->current) {\n        D(\"Returning because current node is NULL\");\n        return 0;\n    }\n\n    unsigned char *(*nextFn)(unsigned char *, unsigned char *) = NULL;\n    int offset_update = 0;\n\n    int plain = QL_NODE_IS_PLAIN(iter->current);\n    if (!iter->zi) {\n        /* If !zi, use current index. */\n        quicklistDecompressNodeForUse(iter->quicklist, iter->current);\n        if (unlikely(plain))\n            iter->zi = iter->current->entry;\n        else\n            iter->zi = lpSeek(iter->current->entry, iter->offset);\n    } else if (unlikely(plain)) {\n        iter->zi = NULL;\n    } else {\n        /* else, use existing iterator offset and get prev/next as necessary. 
*/\n        if (iter->direction == AL_START_HEAD) {\n            nextFn = lpNext;\n            offset_update = 1;\n        } else if (iter->direction == AL_START_TAIL) {\n            nextFn = lpPrev;\n            offset_update = -1;\n        }\n        iter->zi = nextFn(iter->current->entry, iter->zi);\n        iter->offset += offset_update;\n    }\n\n    entry->zi = iter->zi;\n    entry->offset = iter->offset;\n\n    if (iter->zi) {\n        if (unlikely(plain)) {\n            entry->value = entry->node->entry;\n            entry->sz = entry->node->sz;\n            return 1;\n        }\n        /* Populate value from existing listpack position */\n        unsigned int sz = 0;\n        entry->value = lpGetValue(entry->zi, &sz, &entry->longval);\n        entry->sz = sz;\n        return 1;\n    } else {\n        /* We ran out of listpack entries.\n         * Pick next node, update offset, then re-run retrieval. */\n        quicklistCompress(iter->quicklist, iter->current);\n        if (iter->direction == AL_START_HEAD) {\n            /* Forward traversal */\n            D(\"Jumping to start of next node\");\n            iter->current = iter->current->next;\n            iter->offset = 0;\n        } else if (iter->direction == AL_START_TAIL) {\n            /* Reverse traversal */\n            D(\"Jumping to end of previous node\");\n            iter->current = iter->current->prev;\n            iter->offset = -1;\n        }\n        iter->zi = NULL;\n        return quicklistNext(iter, entry);\n    }\n}\n\n/* Sets the direction of a quicklist iterator. */\nvoid quicklistSetDirection(quicklistIter *iter, int direction) {\n    iter->direction = direction;\n}\n\n/* Duplicate the quicklist.\n * On success a copy of the original quicklist is returned.\n *\n * The original quicklist is never modified, whether the call succeeds\n * or fails.\n *\n * Returns the newly allocated quicklist. 
*/\nquicklist *quicklistDup(quicklist *orig) {\n    quicklist *copy;\n\n    copy = quicklistNew(orig->fill, orig->compress);\n\n    for (quicklistNode *current = orig->head; current;\n         current = current->next) {\n        quicklistNode *node = quicklistCreateNode(copy);\n\n        if (current->encoding == QUICKLIST_NODE_ENCODING_LZF) {\n            quicklistLZF *lzf = (quicklistLZF *)current->entry;\n            size_t lzf_sz = sizeof(*lzf) + lzf->sz;\n            node->entry = zmalloc(lzf_sz);\n            memcpy(node->entry, current->entry, lzf_sz);\n            copy->alloc_size += lzf_sz;\n        } else if (current->encoding == QUICKLIST_NODE_ENCODING_RAW) {\n            node->entry = zmalloc(current->sz);\n            memcpy(node->entry, current->entry, current->sz);\n            copy->alloc_size += current->sz;\n        }\n\n        node->count = current->count;\n        node->sz = current->sz;\n        node->encoding = current->encoding;\n        node->container = current->container;\n        copy->count += node->count;\n\n        _quicklistInsertNodeAfter(copy, copy->tail, node);\n    }\n\n    /* copy->count must equal orig->count here */\n    return copy;\n}\n\n/* Populate 'entry' with the element at the specified zero-based index\n * where 0 is the head, 1 is the element next to head\n * and so on. Negative integers are used in order to count\n * from the tail, -1 is the last element, -2 the penultimate\n * and so on. 
If the index is out of range, 0 is returned.\n *\n * Returns 1 if iterator initialized at specific offset 'idx' and element found\n * Returns 0 if element not found */\nint quicklistInitIteratorEntryAtIdx(quicklistIter *iter, quicklist *quicklist,\n                                    const long long idx, quicklistEntry *entry)\n{\n    if (!quicklistInitIteratorAtIdx(iter, quicklist, AL_START_TAIL, idx))\n        return 0;\n    assert(quicklistNext(iter, entry));\n    return 1;\n}\n\nstatic void quicklistRotatePlain(quicklist *quicklist) {\n    quicklistNode *new_head = quicklist->tail;\n    quicklistNode *new_tail = quicklist->tail->prev;\n    quicklist->head->prev = new_head;\n    new_tail->next = NULL;\n    new_head->next = quicklist->head;\n    new_head->prev = NULL;\n    quicklist->head = new_head;\n    quicklist->tail = new_tail;\n}\n\n/* Rotate quicklist by moving the tail element to the head. */\nvoid quicklistRotate(quicklist *quicklist) {\n    if (quicklist->count <= 1)\n        return;\n\n    if (unlikely(QL_NODE_IS_PLAIN(quicklist->tail))) {\n        quicklistRotatePlain(quicklist);\n        return;\n    }\n\n    /* First, get the tail entry */\n    unsigned char *p = lpSeek(quicklist->tail->entry, -1);\n    unsigned char *value, *tmp;\n    long long longval;\n    unsigned int sz;\n    char longstr[32] = {0};\n    tmp = lpGetValue(p, &sz, &longval);\n\n    /* If the value returned is NULL, then lpGetValue() populated longval instead */\n    if (!tmp) {\n        /* Write the longval as a string so we can re-add it */\n        sz = ll2string(longstr, sizeof(longstr), longval);\n        value = (unsigned char *)longstr;\n    } else if (quicklist->len == 1) {\n        /* Copy the buffer since the memory could overlap when moving the\n         * entry from tail to head within the same listpack. */\n        value = zmalloc(sz);\n        memcpy(value, tmp, sz);\n    } else {\n        value = tmp;\n    }\n\n    /* Add tail entry to head (must happen before tail is deleted). 
*/\n    quicklistPushHead(quicklist, value, sz);\n\n    /* If quicklist has only one node, the head listpack is also the\n     * tail listpack and PushHead() could have reallocated our single listpack,\n     * which would make our pre-existing 'p' unusable. */\n    if (quicklist->len == 1) {\n        p = lpSeek(quicklist->tail->entry, -1);\n    }\n\n    /* Remove tail entry. */\n    quicklistDelIndex(quicklist, quicklist->tail, &p);\n    if (value != (unsigned char*)longstr && value != tmp)\n        zfree(value);\n}\n\n/* Pop from the quicklist and return the result in the 'data' pointer.\n * When the popped element is NOT a number, 'data' is set to the return\n * value of the 'saver' function pointer.\n *\n * If the quicklist element is a long long, then the value is returned in\n * 'sval' instead.\n *\n * Return value of 0 means no elements available.\n * Return value of 1 means check 'data' and 'sval' for values.\n * If 'data' is set, use 'data' and 'sz'.  Otherwise, use 'sval'. */\nint quicklistPopCustom(quicklist *quicklist, int where, unsigned char **data,\n                       size_t *sz, long long *sval,\n                       void *(*saver)(unsigned char *data, size_t sz)) {\n    unsigned char *p;\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vlong;\n    int pos = (where == QUICKLIST_HEAD) ? 
0 : -1;\n\n    if (quicklist->count == 0)\n        return 0;\n\n    if (data)\n        *data = NULL;\n    if (sz)\n        *sz = 0;\n    if (sval)\n        *sval = -123456789;\n\n    quicklistNode *node;\n    if (where == QUICKLIST_HEAD && quicklist->head) {\n        node = quicklist->head;\n    } else if (where == QUICKLIST_TAIL && quicklist->tail) {\n        node = quicklist->tail;\n    } else {\n        return 0;\n    }\n\n    /* The head and tail should never be compressed */\n    assert(node->encoding != QUICKLIST_NODE_ENCODING_LZF);\n\n    if (unlikely(QL_NODE_IS_PLAIN(node))) {\n        if (data)\n            *data = saver(node->entry, node->sz);\n        if (sz)\n            *sz = node->sz;\n        quicklistDelIndex(quicklist, node, NULL);\n        return 1;\n    }\n\n    p = lpSeek(node->entry, pos);\n    vstr = lpGetValue(p, &vlen, &vlong);\n    if (vstr) {\n        if (data)\n            *data = saver(vstr, vlen);\n        if (sz)\n            *sz = vlen;\n    } else {\n        if (data)\n            *data = NULL;\n        if (sval)\n            *sval = vlong;\n    }\n    quicklistDelIndex(quicklist, node, &p);\n    return 1;\n}\n\n/* Return a malloc'd copy of data passed in */\nREDIS_STATIC void *_quicklistSaver(unsigned char *data, size_t sz) {\n    unsigned char *vstr;\n    if (data) {\n        vstr = zmalloc(sz);\n        memcpy(vstr, data, sz);\n        return vstr;\n    }\n    return NULL;\n}\n\n/* Default pop function\n *\n * Returns malloc'd value from quicklist */\nint quicklistPop(quicklist *quicklist, int where, unsigned char **data,\n                 size_t *sz, long long *slong) {\n    unsigned char *vstr = NULL;\n    size_t vlen = 0;\n    long long vlong = 0;\n    if (quicklist->count == 0)\n        return 0;\n    int ret = quicklistPopCustom(quicklist, where, &vstr, &vlen, &vlong,\n                                 _quicklistSaver);\n    if (data)\n        *data = vstr;\n    if (slong)\n        *slong = vlong;\n    if (sz)\n        *sz = 
vlen;\n    return ret;\n}\n\n/* Wrapper to allow argument-based switching between HEAD/TAIL pop */\nvoid quicklistPush(quicklist *quicklist, void *value, const size_t sz,\n                   int where) {\n    /* The head and tail should never be compressed (we don't attempt to decompress them) */\n    if (quicklist->head)\n        assert(quicklist->head->encoding != QUICKLIST_NODE_ENCODING_LZF);\n    if (quicklist->tail)\n        assert(quicklist->tail->encoding != QUICKLIST_NODE_ENCODING_LZF);\n\n    if (where == QUICKLIST_HEAD) {\n        quicklistPushHead(quicklist, value, sz);\n    } else if (where == QUICKLIST_TAIL) {\n        quicklistPushTail(quicklist, value, sz);\n    }\n}\n\n/* Print info of the quicklist; used by debugCommand. */\nvoid quicklistRepr(unsigned char *ql, int full) {\n    int i = 0;\n    quicklist *quicklist = (struct quicklist*) ql;\n    printf(\"{count : %lu}\\n\", quicklist->count);\n    printf(\"{len : %lu}\\n\", quicklist->len);\n    printf(\"{fill : %d}\\n\", quicklist->fill);\n    printf(\"{compress : %d}\\n\", quicklist->compress);\n    printf(\"{bookmark_count : %d}\\n\", quicklist->bookmark_count);\n    quicklistNode* node = quicklist->head;\n\n    while(node != NULL) {\n        printf(\"{quicklist node(%d)\\n\", i++);\n        printf(\"{container : %s, encoding: %s, size: %zu, count: %d, recompress: %d, attempted_compress: %d}\\n\",\n               QL_NODE_IS_PLAIN(node) ? \"PLAIN\": \"PACKED\",\n               (node->encoding == QUICKLIST_NODE_ENCODING_RAW) ? 
\"RAW\": \"LZF\",\n               node->sz,\n               node->count,\n               node->recompress,\n               node->attempted_compress);\n\n        if (full) {\n            quicklistDecompressNode(quicklist, node);\n            if (node->container == QUICKLIST_NODE_CONTAINER_PACKED) {\n                printf(\"{ listpack:\\n\");\n                lpRepr(node->entry);\n                printf(\"}\\n\");\n\n            } else if (QL_NODE_IS_PLAIN(node)) {\n                printf(\"{ entry : %s }\\n\", node->entry);\n            }\n            printf(\"}\\n\");\n            quicklistRecompressOnly(quicklist, node);\n        }\n        node = node->next;\n    }\n}\n\n/* Create or update a bookmark in the list which will be updated to the next node\n * automatically when the one referenced gets deleted.\n * Returns 1 on success (creation of new bookmark or override of an existing one).\n * Returns 0 on failure (reached the maximum supported number of bookmarks).\n * NOTE: use short simple names, so that string compare on find is quick.\n * NOTE: bookmark creation may re-allocate the quicklist, so the input pointer\n         may change and it is the caller's responsibility to update the reference.\n */\nint quicklistBookmarkCreate(quicklist **ql_ref, const char *name, quicklistNode *node) {\n    quicklist *ql = *ql_ref;\n    if (ql->bookmark_count >= QL_MAX_BM)\n        return 0;\n    quicklistBookmark *bm = _quicklistBookmarkFindByName(ql, name);\n    if (bm) {\n        bm->node = node;\n        return 1;\n    }\n    size_t new_size, old_size;\n    ql = zrealloc_usable(ql, sizeof(quicklist) + (ql->bookmark_count+1) * sizeof(quicklistBookmark),\n                          &new_size, &old_size);\n    *ql_ref = ql;\n    ql->bookmarks[ql->bookmark_count].node = node;\n    size_t name_sz;\n    ql->bookmarks[ql->bookmark_count].name = zstrdup_usable(name, &name_sz);\n    ql->bookmark_count++;\n    quicklistUpdateAllocSize(ql, new_size + name_sz, old_size);\n    return 
1;\n}\n\n/* Find the quicklist node referenced by a named bookmark.\n * When the bookmarked node is deleted the bookmark is updated to the next node,\n * and if that's the last node, the bookmark is deleted (so find returns NULL). */\nquicklistNode *quicklistBookmarkFind(quicklist *ql, const char *name) {\n    quicklistBookmark *bm = _quicklistBookmarkFindByName(ql, name);\n    if (!bm) return NULL;\n    return bm->node;\n}\n\n/* Delete a named bookmark.\n * Returns 0 if the bookmark was not found, and 1 if it was deleted.\n * Note that the bookmark memory is not freed yet, and is kept for future use. */\nint quicklistBookmarkDelete(quicklist *ql, const char *name) {\n    quicklistBookmark *bm = _quicklistBookmarkFindByName(ql, name);\n    if (!bm)\n        return 0;\n    _quicklistBookmarkDelete(ql, bm);\n    return 1;\n}\n\nquicklistBookmark *_quicklistBookmarkFindByName(quicklist *ql, const char *name) {\n    unsigned i;\n    for (i=0; i<ql->bookmark_count; i++) {\n        if (!strcmp(ql->bookmarks[i].name, name)) {\n            return &ql->bookmarks[i];\n        }\n    }\n    return NULL;\n}\n\nquicklistBookmark *_quicklistBookmarkFindByNode(quicklist *ql, quicklistNode *node) {\n    unsigned i;\n    for (i=0; i<ql->bookmark_count; i++) {\n        if (ql->bookmarks[i].node == node) {\n            return &ql->bookmarks[i];\n        }\n    }\n    return NULL;\n}\n\nvoid _quicklistBookmarkDelete(quicklist *ql, quicklistBookmark *bm) {\n    int index = bm - ql->bookmarks;\n    size_t name_sz;\n    zfree_usable(bm->name, &name_sz);\n    ql->bookmark_count--;\n    ql->alloc_size -= name_sz;\n    memmove(bm, bm+1, (ql->bookmark_count - index) * sizeof(*bm));\n    /* NOTE: We do not shrink (realloc) the quicklist yet, to avoid thrashing;\n     * the slot may be re-used later (a call to realloc may be a no-op). 
*/\n}\n\nvoid quicklistBookmarksClear(quicklist *ql) {\n    size_t name_sz;\n    while (ql->bookmark_count) {\n        zfree_usable(ql->bookmarks[--ql->bookmark_count].name, &name_sz);\n        ql->alloc_size -= name_sz;\n    }\n    /* NOTE: We do not shrink (realloc) the quick list. main use case for this\n     * function is just before releasing the allocation. */\n}\n\n/* The rest of this file is test cases and test helpers. */\n#ifdef REDIS_TEST\n#include <stdint.h>\n#include <sys/time.h>\n#include \"testhelp.h\"\n#include <stdlib.h>\n\n#define yell(str, ...) printf(\"ERROR! \" str \"\\n\\n\", __VA_ARGS__)\n\n#define ERROR                                                                  \\\n    do {                                                                       \\\n        printf(\"\\tERROR!\\n\");                                                  \\\n        err++;                                                                 \\\n    } while (0)\n\n#define ERR(x, ...)                                                            \\\n    do {                                                                       \\\n        printf(\"%s:%s:%d:\\t\", __FILE__, __func__, __LINE__);                   \\\n        printf(\"ERROR! \" x \"\\n\", __VA_ARGS__);                                 \\\n        err++;                                                                 \\\n    } while (0)\n\n#define TEST(name) printf(\"test — %s\\n\", name);\n#define TEST_DESC(name, ...) 
printf(\"test — \" name \"\\n\", __VA_ARGS__);\n\n#define QL_TEST_VERBOSE 0\n\n#define UNUSED(x) (void)(x)\nstatic void ql_info(quicklist *ql) {\n#if QL_TEST_VERBOSE\n    printf(\"Container length: %lu\\n\", ql->len);\n    printf(\"Container size: %lu\\n\", ql->count);\n    if (ql->head)\n        printf(\"\\t(zsize head: %lu)\\n\", lpLength(ql->head->entry));\n    if (ql->tail)\n        printf(\"\\t(zsize tail: %lu)\\n\", lpLength(ql->tail->entry));\n    printf(\"\\n\");\n#else\n    UNUSED(ql);\n#endif\n}\n\n/* Return the UNIX time in microseconds */\nstatic long long ustime(void) {\n    struct timeval tv;\n    long long ust;\n\n    gettimeofday(&tv, NULL);\n    ust = ((long long)tv.tv_sec) * 1000000;\n    ust += tv.tv_usec;\n    return ust;\n}\n\n/* Return the UNIX time in milliseconds */\nstatic long long mstime(void) { return ustime() / 1000; }\n\n/* Iterate over an entire quicklist.\n * Print the list if 'print' == 1.\n *\n * Returns physical count of elements found by iterating over the list. */\nstatic int _itrprintr(quicklist *ql, int print, int forward) {\n    quicklistIter iter;\n    quicklistEntry entry;\n    quicklistInitIterator(&iter, ql, forward ? AL_START_HEAD : AL_START_TAIL);\n    int i = 0;\n    int p = 0;\n    quicklistNode *prev = NULL;\n    while (quicklistNext(&iter, &entry)) {\n        if (entry.node != prev) {\n            /* Count the number of list nodes too */\n            p++;\n            prev = entry.node;\n        }\n        if (print) {\n            int size = (entry.sz > (1<<20)) ? 
1<<20 : entry.sz;\n            printf(\"[%3d (%2d)]: [%.*s] (%lld)\\n\", i, p, size,\n                   (char *)entry.value, entry.longval);\n        }\n        i++;\n    }\n    quicklistResetIterator(&iter);\n    return i;\n}\nstatic int itrprintr(quicklist *ql, int print) {\n    return _itrprintr(ql, print, 1);\n}\n\nstatic int itrprintr_rev(quicklist *ql, int print) {\n    return _itrprintr(ql, print, 0);\n}\n\n#define ql_verify(a, b, c, d, e)                                               \\\n    do {                                                                       \\\n        err += _ql_verify((a), (b), (c), (d), (e));                            \\\n    } while (0)\n\nstatic int _ql_verify_compress(quicklist *ql) {\n    int errors = 0;\n    if (quicklistAllowsCompression(ql)) {\n        quicklistNode *node = ql->head;\n        unsigned int low_raw = ql->compress;\n        unsigned int high_raw = ql->len - ql->compress;\n\n        for (unsigned int at = 0; at < ql->len; at++, node = node->next) {\n            if (node && (at < low_raw || at >= high_raw)) {\n                if (node->encoding != QUICKLIST_NODE_ENCODING_RAW) {\n                    yell(\"Incorrect compression: node %d is \"\n                         \"compressed at depth %d ((%u, %u); total \"\n                         \"nodes: %lu; size: %zu; recompress: %d)\",\n                         at, ql->compress, low_raw, high_raw, ql->len, node->sz,\n                         node->recompress);\n                    errors++;\n                }\n            } else {\n                if (node->encoding != QUICKLIST_NODE_ENCODING_LZF &&\n                    !node->attempted_compress) {\n                    yell(\"Incorrect non-compression: node %d is NOT \"\n                         \"compressed at depth %d ((%u, %u); total \"\n                         \"nodes: %lu; size: %zu; recompress: %d; attempted: %d)\",\n                         at, ql->compress, low_raw, high_raw, ql->len, node->sz,\n           
              node->recompress, node->attempted_compress);\n                    errors++;\n                }\n            }\n        }\n    }\n    return errors;\n}\n\nstatic int _ql_verify_alloc_size(quicklist *ql) {\n    int errors = 0;\n    size_t alloc_size = zmalloc_usable_size(ql);\n\n    quicklistNode* node = ql->head;\n    while (node != NULL) {\n        alloc_size += zmalloc_usable_size(node);\n        if (node->encoding == QUICKLIST_NODE_ENCODING_LZF) {\n            quicklistLZF *lzf = (quicklistLZF *)node->entry;\n            alloc_size += sizeof(*lzf) + lzf->sz;\n        } else {\n            alloc_size += node->sz;\n        }\n        node = node->next;\n    }\n\n    for (unsigned i = 0; i < ql->bookmark_count; i++) {\n        alloc_size += zmalloc_usable_size(ql->bookmarks[i].name);\n    }\n\n    if (ql->alloc_size != alloc_size) {\n        yell(\"quicklist alloc_size wrong: expected %zu, got %zu\", alloc_size, ql->alloc_size);\n        errors++;\n    }\n\n    return errors;\n}\n\n/* Verify list metadata matches physical list contents. */\nstatic int _ql_verify(quicklist *ql, uint32_t len, uint32_t count,\n                      uint32_t head_count, uint32_t tail_count) {\n    int errors = 0;\n\n    ql_info(ql);\n    if (len != ql->len) {\n        yell(\"quicklist length wrong: expected %d, got %lu\", len, ql->len);\n        errors++;\n    }\n\n    if (count != ql->count) {\n        yell(\"quicklist count wrong: expected %d, got %lu\", count, ql->count);\n        errors++;\n    }\n\n    int loopr = itrprintr(ql, 0);\n    if (loopr != (int)ql->count) {\n        yell(\"quicklist cached count does not match actual count: expected %lu, got \"\n             \"%d\",\n             ql->count, loopr);\n        errors++;\n    }\n\n    int rloopr = itrprintr_rev(ql, 0);\n    if (loopr != rloopr) {\n        yell(\"quicklist has different forward count than reverse count!  
\"\n             \"Forward count is %d, reverse count is %d.\",\n             loopr, rloopr);\n        errors++;\n    }\n\n    if (ql->len == 0 && !errors) {\n        return errors;\n    }\n\n    if (ql->head && head_count != ql->head->count &&\n        head_count != lpLength(ql->head->entry)) {\n        yell(\"quicklist head count wrong: expected %d, \"\n             \"got cached %d vs. actual %lu\",\n             head_count, ql->head->count, lpLength(ql->head->entry));\n        errors++;\n    }\n\n    if (ql->tail && tail_count != ql->tail->count &&\n        tail_count != lpLength(ql->tail->entry)) {\n        yell(\"quicklist tail count wrong: expected %d, \"\n             \"got cached %u vs. actual %lu\",\n             tail_count, ql->tail->count, lpLength(ql->tail->entry));\n        errors++;\n    }\n\n    errors += _ql_verify_alloc_size(ql);\n    errors += _ql_verify_compress(ql);\n    return errors;\n}\n\n/* Reset iterator and verify compress correctly. */\nstatic void ql_reset_iterator(quicklistIter *iter) {\n    quicklist *ql = NULL;\n    if (iter) ql = iter->quicklist;\n    quicklistResetIterator(iter);\n    if (ql) assert(!_ql_verify_compress(ql));\n}\n\n/* Generate new string concatenating integer i against string 'prefix' */\nstatic char *genstr(char *prefix, int i) {\n    static char result[64] = {0};\n    snprintf(result, sizeof(result), \"%s%d\", prefix, i);\n    return result;\n}\n\nstatic void randstring(unsigned char *target, size_t sz) {\n    size_t p = 0;\n    int minval, maxval;\n    switch(rand() % 3) {\n    case 0:\n        minval = 'a';\n        maxval = 'z';\n    break;\n    case 1:\n        minval = '0';\n        maxval = '9';\n    break;\n    case 2:\n        minval = 'A';\n        maxval = 'Z';\n    break;\n    default:\n        assert(NULL);\n    }\n\n    while(p < sz)\n        target[p++] = minval+rand()%(maxval-minval+1);\n}\n\n/* main test, but callable from other files */\nint quicklistTest(int argc, char *argv[], int flags) {\n    
UNUSED(argc);\n    UNUSED(argv);\n\n    int accurate = (flags & REDIS_TEST_ACCURATE);\n    unsigned int err = 0;\n    int optimize_start =\n        -(int)(sizeof(optimization_level) / sizeof(*optimization_level));\n\n    printf(\"Starting optimization offset at: %d\\n\", optimize_start);\n\n    int options[] = {0, 1, 2, 3, 4, 5, 6, 10};\n    int fills[] = {-5, -4, -3, -2, -1, 0,\n                   1, 2, 32, 66, 128, 999};\n    size_t option_count = sizeof(options) / sizeof(*options);\n    int fill_count = (int)(sizeof(fills) / sizeof(*fills));\n    long long runtime[option_count];\n\n    for (int _i = 0; _i < (int)option_count; _i++) {\n        printf(\"Testing Compression option %d\\n\", options[_i]);\n        long long start = mstime();\n        quicklistIter iter;\n\n        TEST(\"create list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"add to tail of empty list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushTail(ql, \"hello\", 6);\n            /* 1 for head and 1 for tail because 1 node = head = tail */\n            ql_verify(ql, 1, 1, 1, 1);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"add to head of empty list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushHead(ql, \"hello\", 6);\n            /* 1 for head and 1 for tail because 1 node = head = tail */\n            ql_verify(ql, 1, 1, 1, 1);\n            quicklistRelease(ql);\n        }\n\n        TEST_DESC(\"add to tail 5x at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 5; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i), 32);\n                if (ql->count != 5)\n                    ERROR;\n                if 
(fills[f] == 32)\n                    ql_verify(ql, 1, 5, 5, 5);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"add to head 5x at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 5; i++)\n                    quicklistPushHead(ql, genstr(\"hello\", i), 32);\n                if (ql->count != 5)\n                    ERROR;\n                if (fills[f] == 32)\n                    ql_verify(ql, 1, 5, 5, 5);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"add to tail 500x at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i), 64);\n                if (ql->count != 500)\n                    ERROR;\n                if (fills[f] == 32)\n                    ql_verify(ql, 16, 500, 32, 20);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"add to head 500x at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushHead(ql, genstr(\"hello\", i), 32);\n                if (ql->count != 500)\n                    ERROR;\n                if (fills[f] == 32)\n                    ql_verify(ql, 16, 500, 20, 32);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST(\"rotate empty\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistRotate(ql);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"Compression Plain node\") {\n        for 
(int f = 0; f < fill_count; f++) {\n            size_t large_limit = (fills[f] < 0) ? quicklistNodeNegFillLimit(fills[f]) + 1 : SIZE_SAFETY_LIMIT + 1;\n\n            char buf[large_limit];\n            quicklist *ql = quicklistNew(fills[f], 1);\n            for (int i = 0; i < 500; i++) {\n                /* The payload must be large enough to be eligible for\n                 * compression: anything under 48 bytes (MIN_COMPRESS_BYTES)\n                 * is never compressed, so the test would pass trivially. */\n                snprintf(buf, sizeof(buf), \"hello%d\", i);\n                quicklistPushHead(ql, buf, large_limit);\n            }\n\n            quicklistIter iter;\n            quicklistEntry entry;\n            quicklistInitIterator(&iter, ql, AL_START_TAIL);\n            int i = 0;\n            while (quicklistNext(&iter, &entry)) {\n                assert(QL_NODE_IS_PLAIN(entry.node));\n                snprintf(buf, sizeof(buf), \"hello%d\", i);\n                if (strcmp((char *)entry.value, buf))\n                    ERR(\"value [%s] didn't match [%s] at position %d\",\n                        entry.value, buf, i);\n                i++;\n            }\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n        }\n\n        TEST(\"NEXT plain node\") {\n        for (int f = 0; f < fill_count; f++) {\n            size_t large_limit = (fills[f] < 0) ? 
quicklistNodeNegFillLimit(fills[f]) + 1 : SIZE_SAFETY_LIMIT + 1;\n            quicklist *ql = quicklistNew(fills[f], options[_i]);\n\n            char buf[large_limit];\n            memcpy(buf, \"plain\", 5);\n            quicklistPushHead(ql, buf, large_limit);\n            quicklistPushHead(ql, buf, large_limit);\n            quicklistPushHead(ql, \"packed3\", 7);\n            quicklistPushHead(ql, \"packed4\", 7);\n            quicklistPushHead(ql, buf, large_limit);\n\n            quicklistEntry entry;\n            quicklistIter iter;\n            quicklistInitIterator(&iter, ql, AL_START_TAIL);\n\n            while(quicklistNext(&iter, &entry) != 0) {\n                if (QL_NODE_IS_PLAIN(entry.node))\n                    assert(!memcmp(entry.value, \"plain\", 5));\n                else\n                    assert(!memcmp(entry.value, \"packed\", 6));\n            }\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n        }\n\n        TEST(\"rotate plain node \") {\n        for (int f = 0; f < fill_count; f++) {\n            size_t large_limit = (fills[f] < 0) ? 
quicklistNodeNegFillLimit(fills[f]) + 1 : SIZE_SAFETY_LIMIT + 1;\n\n            unsigned char *data = NULL;\n            size_t sz;\n            long long lv;\n            int i = 0;\n            quicklist *ql = quicklistNew(fills[f], options[_i]);\n            char buf[large_limit];\n            memcpy(buf, \"hello1\", 6);\n            quicklistPushHead(ql, buf, large_limit);\n            memcpy(buf, \"hello4\", 6);\n            quicklistPushHead(ql, buf, large_limit);\n            memcpy(buf, \"hello3\", 6);\n            quicklistPushHead(ql, buf, large_limit);\n            memcpy(buf, \"hello2\", 6);\n            quicklistPushHead(ql, buf, large_limit);\n            quicklistRotate(ql);\n\n            for (i = 1; i < 5; i++) {\n                assert(QL_NODE_IS_PLAIN(ql->tail));\n                quicklistPop(ql, QUICKLIST_HEAD, &data, &sz, &lv);\n                int temp_char = data[5];\n                zfree(data);\n                assert(temp_char == ('0' + i));\n            }\n\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n        }\n\n        TEST(\"rotate one val once\") {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                quicklistPushHead(ql, \"hello\", 6);\n                quicklistRotate(ql);\n                /* Ignore compression verify because listpack is\n                 * too small to compress. 
*/\n                ql_verify(ql, 1, 1, 1, 1);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"rotate 500 val 5000 times at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                quicklistPushHead(ql, \"900\", 3);\n                quicklistPushHead(ql, \"7000\", 4);\n                quicklistPushHead(ql, \"-1200\", 5);\n                quicklistPushHead(ql, \"42\", 2);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushHead(ql, genstr(\"hello\", i), 64);\n                ql_info(ql);\n                for (int i = 0; i < 5000; i++) {\n                    ql_info(ql);\n                    quicklistRotate(ql);\n                }\n                if (fills[f] == 1)\n                    ql_verify(ql, 504, 504, 1, 1);\n                else if (fills[f] == 2)\n                    ql_verify(ql, 252, 504, 2, 2);\n                else if (fills[f] == 32)\n                    ql_verify(ql, 16, 504, 32, 24);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST(\"pop empty\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPop(ql, QUICKLIST_HEAD, NULL, NULL, NULL);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"pop 1 string from 1\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            char *populate = genstr(\"hello\", 331);\n            quicklistPushHead(ql, populate, 32);\n            unsigned char *data;\n            size_t sz;\n            long long lv;\n            ql_info(ql);\n            assert(quicklistPop(ql, QUICKLIST_HEAD, &data, &sz, &lv));\n            assert(data != NULL);\n            assert(sz == 32);\n            if (strcmp(populate, (char *)data)) {\n                int size = sz;\n                ERR(\"Pop'd value (%.*s) didn't equal 
original value (%s)\", size,\n                    data, populate);\n            }\n            zfree(data);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"pop head 1 number from 1\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushHead(ql, \"55513\", 5);\n            unsigned char *data;\n            size_t sz;\n            long long lv;\n            ql_info(ql);\n            assert(quicklistPop(ql, QUICKLIST_HEAD, &data, &sz, &lv));\n            assert(data == NULL);\n            assert(lv == 55513);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"pop head 500 from 500\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            for (int i = 0; i < 500; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            ql_info(ql);\n            for (int i = 0; i < 500; i++) {\n                unsigned char *data;\n                size_t sz;\n                long long lv;\n                int ret = quicklistPop(ql, QUICKLIST_HEAD, &data, &sz, &lv);\n                assert(ret == 1);\n                assert(data != NULL);\n                assert(sz == 32);\n                if (strcmp(genstr(\"hello\", 499 - i), (char *)data)) {\n                    int size = sz;\n                    ERR(\"Pop'd value (%.*s) didn't equal original value (%s)\",\n                        size, data, genstr(\"hello\", 499 - i));\n                }\n                zfree(data);\n            }\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"pop head 5000 from 500\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            for (int i = 0; i < 500; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            for (int i = 0; i < 5000; i++) {\n                unsigned char *data;\n                size_t sz;\n      
          long long lv;\n                int ret = quicklistPop(ql, QUICKLIST_HEAD, &data, &sz, &lv);\n                if (i < 500) {\n                    assert(ret == 1);\n                    assert(data != NULL);\n                    assert(sz == 32);\n                    if (strcmp(genstr(\"hello\", 499 - i), (char *)data)) {\n                        int size = sz;\n                        ERR(\"Pop'd value (%.*s) didn't equal original value \"\n                            \"(%s)\",\n                            size, data, genstr(\"hello\", 499 - i));\n                    }\n                    zfree(data);\n                } else {\n                    assert(ret == 0);\n                }\n            }\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"iterate forward over 500 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            quicklistIter iter_local;\n            quicklistEntry entry;\n            quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n            int i = 499, count = 0;\n            while (quicklistNext(&iter_local, &entry)) {\n                char *h = genstr(\"hello\", i);\n                if (strcmp((char *)entry.value, h))\n                    ERR(\"value [%s] didn't match [%s] at position %d\",\n                        entry.value, h, i);\n                i--;\n                count++;\n            }\n            if (count != 500)\n                ERR(\"Didn't iterate over exactly 500 elements (%d)\", count);\n            ql_verify(ql, 16, 500, 20, 32);\n            ql_reset_iterator(&iter_local);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"iterate reverse over 500 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for 
(int i = 0; i < 500; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            quicklistIter iter_local;\n            quicklistEntry entry;\n            quicklistInitIterator(&iter_local, ql, AL_START_TAIL);\n            int i = 0;\n            while (quicklistNext(&iter_local, &entry)) {\n                char *h = genstr(\"hello\", i);\n                if (strcmp((char *)entry.value, h))\n                    ERR(\"value [%s] didn't match [%s] at position %d\",\n                        entry.value, h, i);\n                i++;\n            }\n            if (i != 500)\n                ERR(\"Didn't iterate over exactly 500 elements (%d)\", i);\n            ql_verify(ql, 16, 500, 20, 32);\n            ql_reset_iterator(&iter_local);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"insert after 1 element\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushHead(ql, \"hello\", 6);\n            quicklistEntry entry;\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n            quicklistInsertAfter(&iter, &entry, \"abc\", 4);\n            ql_reset_iterator(&iter);\n            ql_verify(ql, 1, 2, 2, 2);\n\n            /* verify results */\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n            int sz = entry.sz;\n            if (strncmp((char *)entry.value, \"hello\", 5)) {\n                ERR(\"Value 0 didn't match, instead got: %.*s\", sz,\n                    entry.value);\n            }\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 1, &entry);\n            sz = entry.sz;\n            if (strncmp((char *)entry.value, \"abc\", 3)) {\n                ERR(\"Value 1 didn't match, instead got: %.*s\", sz,\n                    entry.value);\n            }\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"insert before 1 element\") {\n            
quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushHead(ql, \"hello\", 6);\n            quicklistEntry entry;\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n            quicklistInsertBefore(&iter, &entry, \"abc\", 4);\n            ql_reset_iterator(&iter);\n            ql_verify(ql, 1, 2, 2, 2);\n\n            /* verify results */\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n            int sz = entry.sz;\n            if (strncmp((char *)entry.value, \"abc\", 3)) {\n                ERR(\"Value 0 didn't match, instead got: %.*s\", sz,\n                    entry.value);\n            }\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 1, &entry);\n            sz = entry.sz;\n            if (strncmp((char *)entry.value, \"hello\", 5)) {\n                ERR(\"Value 1 didn't match, instead got: %.*s\", sz,\n                    entry.value);\n            }\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"insert head while head node is full\") {\n            quicklist *ql = quicklistNew(4, options[_i]);\n            for (int i = 0; i < 10; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i), 6);\n            quicklistSetFill(ql, -1);\n            quicklistEntry entry;\n            quicklistInitIteratorEntryAtIdx(&iter, ql, -10, &entry);\n            char buf[4096] = {0};\n            quicklistInsertBefore(&iter, &entry, buf, 4096);\n            ql_reset_iterator(&iter);\n            ql_verify(ql, 4, 11, 1, 2);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"insert tail while tail node is full\") {\n            quicklist *ql = quicklistNew(4, options[_i]);\n            for (int i = 0; i < 10; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 6);\n            quicklistSetFill(ql, -1);\n            quicklistEntry entry;\n            
quicklistInitIteratorEntryAtIdx(&iter, ql, -1, &entry);\n            char buf[4096] = {0};\n            quicklistInsertAfter(&iter, &entry, buf, 4096);\n            ql_reset_iterator(&iter);\n            ql_verify(ql, 4, 11, 2, 1);\n            quicklistRelease(ql);\n        }\n\n        TEST_DESC(\"insert once in elements while iterating at compress %d\",\n                  options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                quicklistPushTail(ql, \"abc\", 3);\n                quicklistSetFill(ql, 1);\n                quicklistPushTail(ql, \"def\", 3); /* force to unique node */\n                quicklistSetFill(ql, f);\n                quicklistPushTail(ql, \"bob\", 3); /* force to reset for +3 */\n                quicklistPushTail(ql, \"foo\", 3);\n                quicklistPushTail(ql, \"zoo\", 3);\n\n                itrprintr(ql, 0);\n                /* insert \"bar\" before \"bob\" while iterating over list. */\n                quicklistIter iter_local;\n                quicklistEntry entry;\n                quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (!strncmp((char *)entry.value, \"bob\", 3)) {\n                        /* Insert as fill = 1 so it spills into new node. */\n                        quicklistInsertBefore(&iter_local, &entry, \"bar\", 3);\n                        break; /* didn't we fix insert-while-iterating? 
*/\n                    }\n                }\n                ql_reset_iterator(&iter_local);\n                itrprintr(ql, 0);\n\n                /* verify results */\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n                int sz = entry.sz;\n\n                if (strncmp((char *)entry.value, \"abc\", 3))\n                    ERR(\"Value 0 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 1, &entry);\n                if (strncmp((char *)entry.value, \"def\", 3))\n                    ERR(\"Value 1 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 2, &entry);\n                if (strncmp((char *)entry.value, \"bar\", 3))\n                    ERR(\"Value 2 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 3, &entry);\n                if (strncmp((char *)entry.value, \"bob\", 3))\n                    ERR(\"Value 3 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 4, &entry);\n                if (strncmp((char *)entry.value, \"foo\", 3))\n                    ERR(\"Value 4 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 5, &entry);\n                if (strncmp((char *)entry.value, \"zoo\", 3))\n                    ERR(\"Value 5 didn't match, instead got: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n                
quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"insert [before] 250 new in middle of 500 elements at compress %d\",\n                  options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i), 32);\n                for (int i = 0; i < 250; i++) {\n                    quicklistEntry entry;\n                    quicklistInitIteratorEntryAtIdx(&iter, ql, 250, &entry);\n                    quicklistInsertBefore(&iter, &entry, genstr(\"abc\", i), 32);\n                    ql_reset_iterator(&iter);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 25, 750, 32, 20);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"insert [after] 250 new in middle of 500 elements at compress %d\",\n                  options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushHead(ql, genstr(\"hello\", i), 32);\n                for (int i = 0; i < 250; i++) {\n                    quicklistEntry entry;\n                    quicklistInitIteratorEntryAtIdx(&iter, ql, 250, &entry);\n                    quicklistInsertAfter(&iter, &entry, genstr(\"abc\", i), 32);\n                    ql_reset_iterator(&iter);\n                }\n\n                if (ql->count != 750)\n                    ERR(\"List size not 750, but rather %ld\", ql->count);\n\n                if (fills[f] == 32)\n                    ql_verify(ql, 26, 750, 20, 32);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST(\"duplicate empty list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            ql_verify(ql, 0, 0, 0, 0);\n            
quicklist *copy = quicklistDup(ql);\n            ql_verify(copy, 0, 0, 0, 0);\n            quicklistRelease(ql);\n            quicklistRelease(copy);\n        }\n\n        TEST(\"duplicate list of 1 element\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushHead(ql, genstr(\"hello\", 3), 32);\n            ql_verify(ql, 1, 1, 1, 1);\n            quicklist *copy = quicklistDup(ql);\n            ql_verify(copy, 1, 1, 1, 1);\n            quicklistRelease(ql);\n            quicklistRelease(copy);\n        }\n\n        TEST(\"duplicate list of 500\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            ql_verify(ql, 16, 500, 20, 32);\n\n            quicklist *copy = quicklistDup(ql);\n            ql_verify(copy, 16, 500, 20, 32);\n            quicklistRelease(ql);\n            quicklistRelease(copy);\n        }\n\n        for (int f = 0; f < fill_count; f++) {\n            TEST_DESC(\"index 1,200 from 500 list at fill %d at compress %d\",\n                      fills[f], options[_i]) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n                quicklistEntry entry;\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 1, &entry);\n                if (strcmp((char *)entry.value, \"hello2\") != 0)\n                    ERR(\"Value: %s\", entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 200, &entry);\n                if (strcmp((char *)entry.value, \"hello201\") != 0)\n                    ERR(\"Value: %s\", entry.value);\n                ql_reset_iterator(&iter);\n                quicklistRelease(ql);\n            }\n\n            TEST_DESC(\"index -1,-2 
from 500 list at fill %d at compress %d\",\n                      fills[f], options[_i]) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n                quicklistEntry entry;\n                quicklistInitIteratorEntryAtIdx(&iter, ql, -1, &entry);\n                if (strcmp((char *)entry.value, \"hello500\") != 0)\n                    ERR(\"Value: %s\", entry.value);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, -2, &entry);\n                if (strcmp((char *)entry.value, \"hello499\") != 0)\n                    ERR(\"Value: %s\", entry.value);\n                ql_reset_iterator(&iter);\n                quicklistRelease(ql);\n            }\n\n            TEST_DESC(\"index -100 from 500 list at fill %d at compress %d\",\n                      fills[f], options[_i]) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 500; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n                quicklistEntry entry;\n                quicklistInitIteratorEntryAtIdx(&iter, ql, -100, &entry);\n                if (strcmp((char *)entry.value, \"hello401\") != 0)\n                    ERR(\"Value: %s\", entry.value);\n                ql_reset_iterator(&iter);\n                quicklistRelease(ql);\n            }\n\n            TEST_DESC(\"index too big +1 from 50 list at fill %d at compress %d\",\n                      fills[f], options[_i]) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                for (int i = 0; i < 50; i++)\n                    quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n                quicklistEntry entry;\n                int sz = 0; /* entry is only populated on a successful lookup */\n                if (quicklistInitIteratorEntryAtIdx(&iter, ql, 50, &entry))\n              
      ERR(\"Index found at 50 with 50 list: %.*s\", sz,\n                        entry.value);\n                ql_reset_iterator(&iter);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST(\"delete range empty list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistDelRange(ql, 5, 20);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete range of entire node in list of one node\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            for (int i = 0; i < 32; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            ql_verify(ql, 1, 32, 32, 32);\n            quicklistDelRange(ql, 0, 32);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete range of entire node with overflow counts\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            for (int i = 0; i < 32; i++)\n                quicklistPushHead(ql, genstr(\"hello\", i), 32);\n            ql_verify(ql, 1, 32, 32, 32);\n            quicklistDelRange(ql, 0, 128);\n            ql_verify(ql, 0, 0, 0, 0);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete middle 100 of 500 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            ql_verify(ql, 16, 500, 32, 20);\n            quicklistDelRange(ql, 200, 100);\n            ql_verify(ql, 14, 400, 32, 20);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete less than fill but across nodes\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            
ql_verify(ql, 16, 500, 32, 20);\n            quicklistDelRange(ql, 60, 10);\n            ql_verify(ql, 16, 490, 32, 20);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete negative 1 from 500 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            ql_verify(ql, 16, 500, 32, 20);\n            quicklistDelRange(ql, -1, 1);\n            ql_verify(ql, 16, 499, 32, 19);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete negative 1 from 500 list with overflow counts\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            ql_verify(ql, 16, 500, 32, 20);\n            quicklistDelRange(ql, -1, 128);\n            ql_verify(ql, 16, 499, 32, 19);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete negative 100 from 500 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 500; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            quicklistDelRange(ql, -100, 100);\n            ql_verify(ql, 13, 400, 32, 16);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"delete -10 count 5 from 50 list\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            for (int i = 0; i < 50; i++)\n                quicklistPushTail(ql, genstr(\"hello\", i + 1), 32);\n            ql_verify(ql, 2, 50, 32, 18);\n            quicklistDelRange(ql, -10, 5);\n            ql_verify(ql, 2, 45, 32, 13);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"numbers only list read\") {\n            quicklist 
*ql = quicklistNew(-2, options[_i]);\n            quicklistPushTail(ql, \"1111\", 4);\n            quicklistPushTail(ql, \"2222\", 4);\n            quicklistPushTail(ql, \"3333\", 4);\n            quicklistPushTail(ql, \"4444\", 4);\n            ql_verify(ql, 1, 4, 4, 4);\n            quicklistEntry entry;\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n            if (entry.longval != 1111)\n                ERR(\"Not 1111, %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 1, &entry);\n            if (entry.longval != 2222)\n                ERR(\"Not 2222, %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 2, &entry);\n            if (entry.longval != 3333)\n                ERR(\"Not 3333, %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 3, &entry);\n            if (entry.longval != 4444)\n                ERR(\"Not 4444, %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            if (quicklistInitIteratorEntryAtIdx(&iter, ql, 4, &entry))\n                ERR(\"Index past elements: %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, -1, &entry);\n            if (entry.longval != 4444)\n                ERR(\"Not 4444 (reverse), %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, -2, &entry);\n            if (entry.longval != 3333)\n                ERR(\"Not 3333 (reverse), %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            quicklistInitIteratorEntryAtIdx(&iter, ql, -3, &entry);\n            if (entry.longval != 2222)\n                ERR(\"Not 2222 (reverse), %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            
quicklistInitIteratorEntryAtIdx(&iter, ql, -4, &entry);\n            if (entry.longval != 1111)\n                ERR(\"Not 1111 (reverse), %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n\n            if (quicklistInitIteratorEntryAtIdx(&iter, ql, -5, &entry))\n                ERR(\"Index past elements (reverse), %lld\", entry.longval);\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"numbers larger list read\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistSetFill(ql, 32);\n            char num[32];\n            long long nums[5000];\n            for (int i = 0; i < 5000; i++) {\n                nums[i] = -5157318210846258176 + i;\n                int sz = ll2string(num, sizeof(num), nums[i]);\n                quicklistPushTail(ql, num, sz);\n            }\n            quicklistPushTail(ql, \"xxxxxxxxxxxxxxxxxxxx\", 20);\n            quicklistEntry entry;\n            for (int i = 0; i < 5000; i++) {\n                quicklistInitIteratorEntryAtIdx(&iter, ql, i, &entry);\n                if (entry.longval != nums[i])\n                    ERR(\"[%d] Not longval %lld but rather %lld\", i, nums[i],\n                        entry.longval);\n                entry.longval = 0xdeadbeef;\n                ql_reset_iterator(&iter);\n            }\n            quicklistInitIteratorEntryAtIdx(&iter, ql, 5000, &entry);\n            if (strncmp((char *)entry.value, \"xxxxxxxxxxxxxxxxxxxx\", 20))\n                ERR(\"String val not match: %s\", entry.value);\n            ql_verify(ql, 157, 5001, 32, 9);\n            ql_reset_iterator(&iter);\n            quicklistRelease(ql);\n        }\n\n        TEST(\"numbers larger list read B\") {\n            quicklist *ql = quicklistNew(-2, options[_i]);\n            quicklistPushTail(ql, \"99\", 2);\n            quicklistPushTail(ql, \"98\", 2);\n            quicklistPushTail(ql, \"xxxxxxxxxxxxxxxxxxxx\", 20);\n            
quicklistPushTail(ql, \"96\", 2);\n            quicklistPushTail(ql, \"95\", 2);\n            quicklistReplaceAtIndex(ql, 1, \"foo\", 3);\n            quicklistReplaceAtIndex(ql, -1, \"bar\", 3);\n            quicklistRelease(ql);\n        }\n\n        TEST_DESC(\"lrem test at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                char *words[] = {\"abc\", \"foo\", \"bar\",  \"foobar\", \"foobared\",\n                                 \"zap\", \"bar\", \"test\", \"foo\"};\n                char *result[] = {\"abc\", \"foo\",  \"foobar\", \"foobared\",\n                                  \"zap\", \"test\", \"foo\"};\n                char *resultB[] = {\"abc\",      \"foo\", \"foobar\",\n                                   \"foobared\", \"zap\", \"test\"};\n                for (int i = 0; i < 9; i++)\n                    quicklistPushTail(ql, words[i], strlen(words[i]));\n\n                /* lrem 0 bar */\n                quicklistIter iter_local;\n                quicklistEntry entry;\n                quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n                int i = 0;\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (quicklistCompare(&entry, (unsigned char *)\"bar\", 3,\n                                         NULL, NULL)) {\n                        quicklistDelEntry(&iter_local, &entry);\n                    }\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n\n                /* check result of lrem 0 bar */\n                quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n                i = 0;\n                while (quicklistNext(&iter_local, &entry)) {\n                    /* Result must be: abc, foo, foobar, foobared, zap, test,\n                     * foo */\n                    int sz = entry.sz;\n                    if (strncmp((char 
*)entry.value, result[i], entry.sz)) {\n                        ERR(\"No match at position %d, got %.*s instead of %s\",\n                            i, sz, entry.value, result[i]);\n                    }\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n\n                quicklistPushTail(ql, \"foo\", 3);\n\n                /* lrem -2 foo */\n                quicklistInitIterator(&iter_local, ql, AL_START_TAIL);\n                i = 0;\n                int del = 2;\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (quicklistCompare(&entry, (unsigned char *)\"foo\", 3,\n                                         NULL, NULL)) {\n                        quicklistDelEntry(&iter_local, &entry);\n                        del--;\n                    }\n                    if (!del)\n                        break;\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n\n                /* check result of lrem -2 foo */\n                /* (the '2' limit doesn't matter here: we delete all foo\n                 * entries anyway because there are only two of them) */\n                quicklistInitIterator(&iter_local, ql, AL_START_TAIL);\n                i = 0;\n                size_t resB = sizeof(resultB) / sizeof(*resultB);\n                while (quicklistNext(&iter_local, &entry)) {\n                    /* Result must be: abc, foo, foobar, foobared, zap,\n                     * test */\n                    int sz = entry.sz;\n                    if (strncmp((char *)entry.value, resultB[resB - 1 - i],\n                                sz)) {\n                        ERR(\"No match at position %d, got %.*s instead of %s\",\n                            i, sz, entry.value, resultB[resB - 1 - i]);\n                    }\n                    i++;\n                }\n\n                ql_reset_iterator(&iter_local);\n                quicklistRelease(ql);\n 
           }\n        }\n\n        TEST_DESC(\"iterate reverse + delete at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                quicklistPushTail(ql, \"abc\", 3);\n                quicklistPushTail(ql, \"def\", 3);\n                quicklistPushTail(ql, \"hij\", 3);\n                quicklistPushTail(ql, \"jkl\", 3);\n                quicklistPushTail(ql, \"oop\", 3);\n\n                quicklistEntry entry;\n                quicklistIter iter_local;\n                quicklistInitIterator(&iter_local, ql, AL_START_TAIL);\n                int i = 0;\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (quicklistCompare(&entry, (unsigned char *)\"hij\", 3,\n                                         NULL, NULL)) {\n                        quicklistDelEntry(&iter_local, &entry);\n                    }\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n\n                if (i != 5)\n                    ERR(\"Didn't iterate 5 times, iterated %d times.\", i);\n\n                /* Check results after deletion of \"hij\" */\n                quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n                i = 0;\n                char *vals[] = {\"abc\", \"def\", \"jkl\", \"oop\"};\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (!quicklistCompare(&entry, (unsigned char *)vals[i],\n                                          3, NULL, NULL)) {\n                        ERR(\"Value at %d didn't match %s\\n\", i, vals[i]);\n                    }\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"iterator at index test at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                
quicklist *ql = quicklistNew(fills[f], options[_i]);\n                char num[32];\n                long long nums[5000];\n                for (int i = 0; i < 760; i++) {\n                    nums[i] = -5157318210846258176 + i;\n                    int sz = ll2string(num, sizeof(num), nums[i]);\n                    quicklistPushTail(ql, num, sz);\n                }\n\n                quicklistEntry entry;\n                quicklistIter iter_local;\n                quicklistInitIteratorAtIdx(&iter_local, ql, AL_START_HEAD, 437);\n                int i = 437;\n                while (quicklistNext(&iter_local, &entry)) {\n                    if (entry.longval != nums[i])\n                        ERR(\"Expected %lld, but got %lld\", nums[i],\n                            entry.longval);\n                    i++;\n                }\n                ql_reset_iterator(&iter_local);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"ltrim test A at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                char num[32];\n                long long nums[5000];\n                for (int i = 0; i < 32; i++) {\n                    nums[i] = -5157318210846258176 + i;\n                    int sz = ll2string(num, sizeof(num), nums[i]);\n                    quicklistPushTail(ql, num, sz);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 1, 32, 32, 32);\n                /* ltrim 25 53 (keep [25,31] inclusive = 7 remaining) */\n                quicklistDelRange(ql, 0, 25);\n                quicklistDelRange(ql, 0, 0);\n                quicklistEntry entry;\n                for (int i = 0; i < 7; i++) {\n                    quicklistInitIteratorEntryAtIdx(&iter, ql, i, &entry);\n                    if (entry.longval != nums[25 + i])\n                        ERR(\"Deleted invalid range!  
Expected %lld but got \"\n                            \"%lld\",\n                            nums[25 + i], entry.longval);\n                    ql_reset_iterator(&iter);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 1, 7, 7, 7);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"ltrim test B at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                /* Force-disable compression because our 33 sequential\n                 * integers don't compress and the check always fails. */\n                quicklist *ql = quicklistNew(fills[f], QUICKLIST_NOCOMPRESS);\n                char num[32];\n                long long nums[5000];\n                for (int i = 0; i < 33; i++) {\n                    nums[i] = i;\n                    int sz = ll2string(num, sizeof(num), nums[i]);\n                    quicklistPushTail(ql, num, sz);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 2, 33, 32, 1);\n                /* ltrim 5 16 (keep [5,16] inclusive = 12 remaining) */\n                quicklistDelRange(ql, 0, 5);\n                quicklistDelRange(ql, -16, 16);\n                if (fills[f] == 32)\n                    ql_verify(ql, 1, 12, 12, 12);\n                quicklistEntry entry;\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n                if (entry.longval != 5)\n                    ERR(\"A: longval not 5, but %lld\", entry.longval);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, -1, &entry);\n                if (entry.longval != 16)\n                    ERR(\"B! 
got instead: %lld\", entry.longval);\n                quicklistPushTail(ql, \"bobobob\", 7);\n                ql_reset_iterator(&iter);\n\n                quicklistInitIteratorEntryAtIdx(&iter, ql, -1, &entry);\n                int sz = entry.sz;\n                if (strncmp((char *)entry.value, \"bobobob\", 7))\n                    ERR(\"Tail doesn't match bobobob, it's %.*s instead\",\n                        sz, entry.value);\n                ql_reset_iterator(&iter);\n\n                for (int i = 0; i < 12; i++) {\n                    quicklistInitIteratorEntryAtIdx(&iter, ql, i, &entry);\n                    if (entry.longval != nums[5 + i])\n                        ERR(\"Deleted invalid range!  Expected %lld but got \"\n                            \"%lld\",\n                            entry.longval, nums[5 + i]);\n                    ql_reset_iterator(&iter);\n                }\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"ltrim test C at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                char num[32];\n                long long nums[5000];\n                for (int i = 0; i < 33; i++) {\n                    nums[i] = -5157318210846258176 + i;\n                    int sz = ll2string(num, sizeof(num), nums[i]);\n                    quicklistPushTail(ql, num, sz);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 2, 33, 32, 1);\n                /* ltrim 3 3 (keep [3,3] inclusive = 1 remaining) */\n                quicklistDelRange(ql, 0, 3);\n                quicklistDelRange(ql, -29,\n                                  4000); /* make sure not loop forever */\n                if (fills[f] == 32)\n                    ql_verify(ql, 1, 1, 1, 1);\n                quicklistEntry entry;\n                quicklistInitIteratorEntryAtIdx(&iter, ql, 0, &entry);\n          
      if (entry.longval != -5157318210846258173)\n                    ERROR;\n                ql_reset_iterator(&iter);\n                quicklistRelease(ql);\n            }\n        }\n\n        TEST_DESC(\"ltrim test D at compress %d\", options[_i]) {\n            for (int f = 0; f < fill_count; f++) {\n                quicklist *ql = quicklistNew(fills[f], options[_i]);\n                char num[32];\n                long long nums[5000];\n                for (int i = 0; i < 33; i++) {\n                    nums[i] = -5157318210846258176 + i;\n                    int sz = ll2string(num, sizeof(num), nums[i]);\n                    quicklistPushTail(ql, num, sz);\n                }\n                if (fills[f] == 32)\n                    ql_verify(ql, 2, 33, 32, 1);\n                quicklistDelRange(ql, -12, 3);\n                if (ql->count != 30)\n                    ERR(\"Didn't delete exactly three elements!  Count is: %lu\",\n                        ql->count);\n                quicklistRelease(ql);\n            }\n        }\n\n        long long stop = mstime();\n        runtime[_i] = stop - start;\n    }\n\n    /* Run a longer test of compression depth outside of primary test loop. */\n    int list_sizes[] = {250, 251, 500, 999, 1000};\n    long long start = mstime();\n    int list_count = accurate ? 
(int)(sizeof(list_sizes) / sizeof(*list_sizes)) : 1;\n    for (int list = 0; list < list_count; list++) {\n        TEST_DESC(\"verify specific compression of interior nodes with %d list \",\n                  list_sizes[list]) {\n            for (int f = 0; f < fill_count; f++) {\n                for (int depth = 1; depth < 40; depth++) {\n                    /* skip over many redundant test cases */\n                    quicklist *ql = quicklistNew(fills[f], depth);\n                    for (int i = 0; i < list_sizes[list]; i++) {\n                        quicklistPushTail(ql, genstr(\"hello TAIL\", i + 1), 64);\n                        quicklistPushHead(ql, genstr(\"hello HEAD\", i + 1), 64);\n                    }\n\n                    for (int step = 0; step < 2; step++) {\n                        /* test remove node */\n                        if (step == 1) {\n                            for (int i = 0; i < list_sizes[list] / 2; i++) {\n                                unsigned char *data;\n                                assert(quicklistPop(ql, QUICKLIST_HEAD, &data,\n                                                    NULL, NULL));\n                                zfree(data);\n                                assert(quicklistPop(ql, QUICKLIST_TAIL, &data,\n                                                    NULL, NULL));\n                                zfree(data);\n                            }\n                        }\n                        quicklistNode *node = ql->head;\n                        unsigned int low_raw = ql->compress;\n                        unsigned int high_raw = ql->len - ql->compress;\n\n                        for (unsigned int at = 0; at < ql->len;\n                            at++, node = node->next) {\n                            if (at < low_raw || at >= high_raw) {\n                                if (node->encoding != QUICKLIST_NODE_ENCODING_RAW) {\n                                    ERR(\"Incorrect compression: node %d is 
\"\n                                        \"compressed at depth %d ((%u, %u); total \"\n                                        \"nodes: %lu; size: %zu)\",\n                                        at, depth, low_raw, high_raw, ql->len,\n                                        node->sz);\n                                }\n                            } else {\n                                if (node->encoding != QUICKLIST_NODE_ENCODING_LZF) {\n                                    ERR(\"Incorrect non-compression: node %d is NOT \"\n                                        \"compressed at depth %d ((%u, %u); total \"\n                                        \"nodes: %lu; size: %zu; attempted: %d)\",\n                                        at, depth, low_raw, high_raw, ql->len,\n                                        node->sz, node->attempted_compress);\n                                }\n                            }\n                        }\n                    }\n\n                    quicklistRelease(ql);\n                }\n            }\n        }\n    }\n    long long stop = mstime();\n\n    printf(\"\\n\");\n    for (size_t i = 0; i < option_count; i++)\n        printf(\"Test Loop %02d: %0.2f seconds.\\n\", options[i],\n               (float)runtime[i] / 1000);\n    printf(\"Compressions: %0.2f seconds.\\n\", (float)(stop - start) / 1000);\n    printf(\"\\n\");\n\n    TEST(\"bookmark get updated to next item\") {\n        quicklist *ql = quicklistNew(1, 0);\n        quicklistPushTail(ql, \"1\", 1);\n        quicklistPushTail(ql, \"2\", 1);\n        quicklistPushTail(ql, \"3\", 1);\n        quicklistPushTail(ql, \"4\", 1);\n        quicklistPushTail(ql, \"5\", 1);\n        assert(ql->len==5);\n        /* add two bookmarks, one pointing to the node before the last. 
*/\n        assert(quicklistBookmarkCreate(&ql, \"_dummy\", ql->head->next));\n        assert(quicklistBookmarkCreate(&ql, \"_test\", ql->tail->prev));\n        /* test that the bookmark returns the right node, delete it and see that the bookmark points to the last node */\n        assert(quicklistBookmarkFind(ql, \"_test\") == ql->tail->prev);\n        assert(quicklistDelRange(ql, -2, 1));\n        assert(quicklistBookmarkFind(ql, \"_test\") == ql->tail);\n        /* delete the last node, and see that the bookmark was deleted. */\n        assert(quicklistDelRange(ql, -1, 1));\n        assert(quicklistBookmarkFind(ql, \"_test\") == NULL);\n        /* test that other bookmarks aren't affected */\n        assert(quicklistBookmarkFind(ql, \"_dummy\") == ql->head->next);\n        assert(quicklistBookmarkFind(ql, \"_missing\") == NULL);\n        assert(ql->len==3);\n        quicklistBookmarksClear(ql); /* for coverage */\n        assert(quicklistBookmarkFind(ql, \"_dummy\") == NULL);\n        quicklistRelease(ql);\n    }\n\n    TEST(\"bookmark limit\") {\n        int i;\n        quicklist *ql = quicklistNew(1, 0);\n        quicklistPushHead(ql, \"1\", 1);\n        for (i=0; i<QL_MAX_BM; i++)\n            assert(quicklistBookmarkCreate(&ql, genstr(\"\",i), ql->head));\n        /* when all bookmarks are used, creation fails */\n        assert(!quicklistBookmarkCreate(&ql, \"_test\", ql->head));\n        /* delete one and see that we can now create another */\n        assert(quicklistBookmarkDelete(ql, \"0\"));\n        assert(quicklistBookmarkCreate(&ql, \"_test\", ql->head));\n        /* delete one and see that the rest survive */\n        assert(quicklistBookmarkDelete(ql, \"_test\"));\n        for (i=1; i<QL_MAX_BM; i++)\n            assert(quicklistBookmarkFind(ql, genstr(\"\",i)) == ql->head);\n        /* make sure the deleted ones are indeed gone */\n        assert(!quicklistBookmarkFind(ql, \"0\"));\n        assert(!quicklistBookmarkFind(ql, \"_test\"));\n        
quicklistRelease(ql);\n    }\n\n    TEST(\"quicklistCompare cached string2ll optimization\") {\n        quicklist *ql = quicklistNew(-2, 0);\n\n        /* Create a list with mixed integer and string entries */\n        quicklistPushTail(ql, \"123\", 3);    /* integer as string */\n        quicklistPushTail(ql, \"456\", 3);    /* integer as string */\n        quicklistPushTail(ql, \"hello\", 5);  /* non-numeric string */\n        quicklistPushTail(ql, \"789\", 3);    /* integer as string */\n        quicklistPushTail(ql, \"world\", 5);  /* non-numeric string */\n\n        quicklistEntry entry;\n        quicklistIter iter_local;\n\n        /* Test 1: NULL parameters should work without crashing */\n        quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n        assert(quicklistNext(&iter_local, &entry));\n        assert(quicklistCompare(&entry, (unsigned char *)\"123\", 3, NULL, NULL) == 1);\n        assert(quicklistCompare(&entry, (unsigned char *)\"456\", 3, NULL, NULL) == 0);\n        ql_reset_iterator(&iter_local);\n\n        /* Test 2: Caching with numeric strings */\n        long long cached_val = 0;\n        int cached_valid = 0;\n\n        /* First comparison should cache the value */\n        quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"123\" */\n        assert(quicklistCompare(&entry, (unsigned char *)\"123\", 3, &cached_val, &cached_valid) == 1);\n        assert(cached_valid == 1);  /* Should be cached as valid */\n        assert(cached_val == 123);  /* Should have cached value */\n\n        /* Second comparison with same search string should use cache */\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"456\" */\n        assert(quicklistCompare(&entry, (unsigned char *)\"123\", 3, &cached_val, &cached_valid) == 0);\n        assert(cached_valid == 1);  /* Cache should still be valid */\n        assert(cached_val == 123);  /* Cache value should be unchanged 
*/\n\n        /* Third comparison with same search string should use cache */\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"hello\" (string) */\n        assert(quicklistCompare(&entry, (unsigned char *)\"123\", 3, &cached_val, &cached_valid) == 0);\n        assert(cached_valid == 1);  /* Cache should still be valid */\n        ql_reset_iterator(&iter_local);\n\n        /* Test 3: Caching with non-numeric strings */\n        cached_val = 0;\n        cached_valid = 0;\n\n        quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"123\" */\n        assert(quicklistCompare(&entry, (unsigned char *)\"abc\", 3, &cached_val, &cached_valid) == 0);\n        assert(cached_valid == -1); /* Should be cached as invalid */\n\n        /* Second comparison with same non-numeric string should use cache */\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"456\" */\n        assert(quicklistCompare(&entry, (unsigned char *)\"abc\", 3, &cached_val, &cached_valid) == 0);\n        assert(cached_valid == -1); /* Cache should still be invalid */\n        ql_reset_iterator(&iter_local);\n\n        /* Test 4: String entries should work correctly with both NULL and caching */\n        quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n        quicklistNext(&iter_local, &entry); /* skip \"123\" */\n        quicklistNext(&iter_local, &entry); /* skip \"456\" */\n        assert(quicklistNext(&iter_local, &entry)); /* entry = \"hello\" */\n\n        /* String comparison with NULL parameters */\n        assert(quicklistCompare(&entry, (unsigned char *)\"hello\", 5, NULL, NULL) == 1);\n        assert(quicklistCompare(&entry, (unsigned char *)\"world\", 5, NULL, NULL) == 0);\n\n        /* String comparison with caching parameters (cache not used for strings) */\n        cached_val = 0;\n        cached_valid = 0;\n        assert(quicklistCompare(&entry, (unsigned char *)\"hello\", 5, 
&cached_val, &cached_valid) == 1);\n        assert(cached_valid == 0); /* Cache should not be used for string entries */\n        ql_reset_iterator(&iter_local);\n\n        /* Test 5: Performance verification - cache should reduce conversions */\n        /* This test demonstrates the optimization by showing cache reuse */\n        cached_val = 0;\n        cached_valid = 0;\n        int comparisons = 0;\n\n        /* Search for \"456\" across all integer entries */\n        quicklistInitIterator(&iter_local, ql, AL_START_HEAD);\n        while (quicklistNext(&iter_local, &entry)) {\n            if (entry.value == NULL) { /* Only test integer entries */\n                quicklistCompare(&entry, (unsigned char *)\"456\", 3, &cached_val, &cached_valid);\n                comparisons++;\n            }\n        }\n        ql_reset_iterator(&iter_local);\n\n        /* After first comparison, cache should be valid and reused for subsequent ones */\n        assert(cached_valid == 1);\n        assert(cached_val == 456);\n        assert(comparisons >= 2); /* Should have compared against multiple integer entries */\n\n        quicklistRelease(ql);\n    }\n\n    /* Benchmarks for quicklistCompare caching optimization */\n    {\n        printf(\"\\n=== quicklistCompare Caching Benchmarks ===\\n\");\n\n        /* Create a quicklist with 10K integer elements */\n        quicklist *ql = quicklistNew(-2, 0);\n        char buf[16];\n        for (int i = 1; i <= 10000; i++) {\n            snprintf(buf, sizeof(buf), \"%d\", i);\n            quicklistPushTail(ql, buf, strlen(buf));\n        }\n        printf(\"Created quicklist with %lu integer elements\\n\", ql->count);\n\n        /* Search string that exists in the middle */\n        unsigned char *search_str = (unsigned char *)\"5000\";\n        size_t search_len = 4;\n        int iterations = accurate ? 
50000 : 10000;\n\n        /* Benchmark 1: quicklistCompare WITHOUT caching (NULL parameters) */\n        TEST(\"Benchmark quicklistCompare without caching\") {\n            long long start = ustime();\n            int matches = 0;\n\n            for (int iter = 0; iter < iterations; iter++) {\n                quicklistIter iter_ptr;\n                quicklistEntry entry;\n                quicklistInitIterator(&iter_ptr, ql, AL_START_HEAD);\n\n                while (quicklistNext(&iter_ptr, &entry)) {\n                    if (entry.value == NULL) { /* Only test integer entries */\n                        if (quicklistCompare(&entry, search_str, search_len, NULL, NULL)) {\n                            matches++;\n                        }\n                    }\n                }\n                ql_reset_iterator(&iter_ptr);\n            }\n\n            long long elapsed = ustime() - start;\n            printf(\"Found %d matches in %d iterations\\n\", matches, iterations);\n            printf(\"Without caching: %lld usec (%.2f usec per iteration)\\n\",\n                   elapsed, (double)elapsed / iterations);\n        }\n\n        /* Benchmark 2: quicklistCompare WITH caching */\n        TEST(\"Benchmark quicklistCompare with caching\") {\n            long long start = ustime();\n            int matches = 0;\n\n            for (int iter = 0; iter < iterations; iter++) {\n                /* Reset cache for each iteration to simulate real usage */\n                long long cached_val = 0;\n                int cached_valid = 0;\n\n                quicklistIter iter_ptr;\n                quicklistEntry entry;\n                quicklistInitIterator(&iter_ptr, ql, AL_START_HEAD);\n\n                while (quicklistNext(&iter_ptr, &entry)) {\n                    if (entry.value == NULL) { /* Only test integer entries */\n                        if (quicklistCompare(&entry, search_str, search_len, &cached_val, &cached_valid)) {\n                            matches++;\n   
                     }\n                    }\n                }\n                ql_reset_iterator(&iter_ptr);\n            }\n\n            long long elapsed = ustime() - start;\n            printf(\"Found %d matches in %d iterations\\n\", matches, iterations);\n            printf(\"With caching: %lld usec (%.2f usec per iteration)\\n\",\n                   elapsed, (double)elapsed / iterations);\n        }\n\n        quicklistRelease(ql);\n        printf(\"=== End quicklistCompare Benchmarks ===\\n\\n\");\n    }\n\n    if (flags & REDIS_TEST_LARGE_MEMORY) {\n        TEST(\"compress and decompress quicklist listpack node\") {\n            quicklist *ql = quicklistNew(1, 0);\n            quicklistNode *node = quicklistCreateNode(ql);\n            node->entry = lpNew(0);\n\n            /* Give the node fake prev and next neighbors so it doesn't look\n             * like the quicklist head or tail, which would trigger the\n             * assertion in __quicklistCompressNode(). */\n            node->prev = quicklistCreateNode(ql);\n            node->next = quicklistCreateNode(ql);\n\n            /* Create a random string */\n            size_t sz = (1 << 25); /* 32MB per one entry */\n            unsigned char *s = zmalloc(sz);\n            randstring(s, sz);\n\n            /* Keep filling the node, until it reaches 1GB */\n            for (int i = 0; i < 32; i++) {\n                node->entry = lpAppend(node->entry, s, sz);\n                size_t oldsize = node->sz;\n                quicklistNodeUpdateSz(node);\n                quicklistUpdateAllocSize(ql, node->sz, oldsize);\n\n                long long start = mstime();\n                assert(__quicklistCompressNode(ql, node));\n                assert(__quicklistDecompressNode(ql, node));\n                printf(\"Compress and decompress: %zu MB in %.2f seconds.\\n\",\n                       node->sz/1024/1024, (float)(mstime() - start) / 1000);\n            }\n\n            zfree(s);\n            zfree(node->prev);\n            zfree(node->next);\n  
          zfree(node->entry);\n            zfree(node);\n            quicklistRelease(ql);\n        }\n\n#if ULONG_MAX >= 0xffffffffffffffff\n        TEST(\"compress and decompress quicklist plain node larger than UINT32_MAX\") {\n            quicklist *ql = quicklistNew(1, 0);\n            size_t sz = (1ull << 32);\n            unsigned char *s = zmalloc(sz);\n            randstring(s, sz);\n            memcpy(s, \"helloworld\", 10);\n            memcpy(s + sz - 10, \"1234567890\", 10);\n\n            quicklistNode *node = __quicklistCreateNode(ql, QUICKLIST_NODE_CONTAINER_PLAIN, s, sz);\n\n            /* Give the node fake prev and next neighbors so it doesn't look\n             * like the quicklist head or tail, which would trigger the\n             * assertion in __quicklistCompressNode(). */\n            node->prev = quicklistCreateNode(ql);\n            node->next = quicklistCreateNode(ql);\n\n            long long start = mstime();\n            assert(__quicklistCompressNode(ql, node));\n            assert(__quicklistDecompressNode(ql, node));\n            printf(\"Compress and decompress: %zu MB in %.2f seconds.\\n\",\n                   node->sz/1024/1024, (float)(mstime() - start) / 1000);\n\n            assert(memcmp(node->entry, \"helloworld\", 10) == 0);\n            assert(memcmp(node->entry + sz - 10, \"1234567890\", 10) == 0);\n            zfree(node->prev);\n            zfree(node->next);\n            zfree(node->entry);\n            zfree(node);\n            quicklistRelease(ql);\n        }\n#endif\n    }\n\n    if (!err)\n        printf(\"ALL TESTS PASSED!\\n\");\n    else\n        ERR(\"Sorry, not all tests passed!  In fact, %d tests failed.\", err);\n\n    return err;\n}\n#endif\n
  },
  {
    "path": "src/quicklist.h",
    "content": "/* quicklist.h - A generic doubly linked quicklist implementation\n *\n * Copyright (c) 2014, Matt Stancliff <matt@genges.com>\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this quicklist of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this quicklist of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdint.h> // for UINTPTR_MAX\n\n#ifndef __QUICKLIST_H__\n#define __QUICKLIST_H__\n\n/* Node, quicklist, and Iterator are the only data structures used currently. 
*/\n\n/* quicklistNode is a 32 byte struct describing a listpack for a quicklist.\n * We use bit fields to keep the quicklistNode at 32 bytes.\n * count: 16 bits, max 65535 (max lp bytes is 65k, so max count actually < 32k).\n * encoding: 2 bits, RAW=1, LZF=2.\n * container: 2 bits, PLAIN=1 (a single item as char array), PACKED=2 (listpack with multiple items).\n * recompress: 1 bit, bool, true if node is temporarily decompressed for usage.\n * attempted_compress: 1 bit, boolean, used for verifying during testing.\n * dont_compress: 1 bit, boolean, used for preventing compression of entry.\n * extra: 9 bits, free for future use; pads out the remainder of 32 bits */\ntypedef struct quicklistNode {\n    struct quicklistNode *prev;\n    struct quicklistNode *next;\n    unsigned char *entry;\n    size_t sz;             /* entry size in bytes */\n    unsigned int count : 16;     /* count of items in listpack */\n    unsigned int encoding : 2;   /* RAW==1 or LZF==2 */\n    unsigned int container : 2;  /* PLAIN==1 or PACKED==2 */\n    unsigned int recompress : 1; /* was this node previously compressed? 
*/\n    unsigned int attempted_compress : 1; /* node can't compress; too small */\n    unsigned int dont_compress : 1; /* prevent compression of entry that will be used later */\n    unsigned int extra : 9; /* more bits to steal for future usage */\n} quicklistNode;\n\n/* quicklistLZF is an 8+N byte struct holding 'sz' followed by 'compressed'.\n * 'sz' is the byte length of the 'compressed' field.\n * 'compressed' is LZF data with total (compressed) length 'sz'.\n * NOTE: uncompressed length is stored in quicklistNode->sz.\n * When quicklistNode->entry is compressed, node->entry points to a quicklistLZF */\ntypedef struct quicklistLZF {\n    size_t sz; /* LZF size in bytes */\n    char compressed[];\n} quicklistLZF;\n\n/* Bookmarks are appended with realloc at the end of the quicklist struct.\n * They should only be used for very big lists (thousands of nodes), where the\n * excess memory usage is negligible and there's a real need to iterate over\n * them in portions.\n * When not used, they don't add any memory overhead, but when used and then\n * deleted, some overhead remains (to avoid resonance).\n * The number of bookmarks used should be kept to a minimum since it also adds\n * overhead on node deletion (searching for a bookmark to update). */\ntypedef struct quicklistBookmark {\n    quicklistNode *node;\n    char *name;\n} quicklistBookmark;\n\n#if UINTPTR_MAX == 0xffffffff\n/* 32-bit */\n#   define QL_FILL_BITS 14\n#   define QL_COMP_BITS 14\n#   define QL_BM_BITS 4\n#elif UINTPTR_MAX == 0xffffffffffffffff\n/* 64-bit */\n#   define QL_FILL_BITS 16\n#   define QL_COMP_BITS 16\n#   define QL_BM_BITS 4 /* we can encode more, but we'd rather limit the user\n                           since they cause performance degradation. 
*/\n#else\n#   error unknown arch bits count\n#endif\n\n/* quicklist is a 40 byte struct (on 64-bit systems) describing a quicklist.\n * 'count' is the number of total entries.\n * 'len' is the number of quicklist nodes.\n * 'compress' is: 0 if compression disabled, otherwise it's the number\n *                of quicklistNodes to leave uncompressed at ends of quicklist.\n * 'fill' is the user-requested (or default) fill factor.\n * 'bookmarks' is an optional feature, appended to this struct via realloc,\n *      so that it doesn't consume memory when not used. */\ntypedef struct quicklist {\n    quicklistNode *head;\n    quicklistNode *tail;\n    unsigned long count;        /* total count of all entries in all listpacks */\n    unsigned long len;          /* number of quicklistNodes */\n    size_t alloc_size;          /* total allocated memory (in bytes) */\n    signed int fill : QL_FILL_BITS;       /* fill factor for individual nodes */\n    unsigned int compress : QL_COMP_BITS; /* depth of end nodes not to compress; 0=off */\n    unsigned int bookmark_count: QL_BM_BITS;\n    quicklistBookmark bookmarks[];\n} quicklist;\n\ntypedef struct quicklistIter {\n    quicklist *quicklist;\n    quicklistNode *current;\n    unsigned char *zi; /* points to the current element */\n    long offset; /* offset in current listpack */\n    int direction;\n} quicklistIter;\n\ntypedef struct quicklistEntry {\n    const quicklist *quicklist;\n    quicklistNode *node;\n    unsigned char *zi;\n    unsigned char *value;\n    long long longval;\n    size_t sz;\n    int offset;\n} quicklistEntry;\n\n#define QUICKLIST_HEAD 0\n#define QUICKLIST_TAIL -1\n\n/* quicklist node encodings */\n#define QUICKLIST_NODE_ENCODING_RAW 1\n#define QUICKLIST_NODE_ENCODING_LZF 2\n\n/* quicklist compression disable */\n#define QUICKLIST_NOCOMPRESS 0\n\n/* quicklist node container formats */\n#define QUICKLIST_NODE_CONTAINER_PLAIN 1\n#define QUICKLIST_NODE_CONTAINER_PACKED 2\n\n#define QL_NODE_IS_PLAIN(node) 
((node)->container == QUICKLIST_NODE_CONTAINER_PLAIN)\n\n#define quicklistNodeIsCompressed(node)                                        \\\n    ((node)->encoding == QUICKLIST_NODE_ENCODING_LZF)\n\n/* Prototypes */\nquicklist *quicklistCreate(void);\nquicklist *quicklistNew(int fill, int compress);\nvoid quicklistSetCompressDepth(quicklist *quicklist, int compress);\nvoid quicklistSetFill(quicklist *quicklist, int fill);\nvoid quicklistSetOptions(quicklist *quicklist, int fill, int compress);\nvoid quicklistRelease(quicklist *quicklist);\nint quicklistPushHead(quicklist *quicklist, void *value, const size_t sz);\nint quicklistPushTail(quicklist *quicklist, void *value, const size_t sz);\nvoid quicklistPush(quicklist *quicklist, void *value, const size_t sz,\n                   int where);\nvoid quicklistAppendListpack(quicklist *quicklist, unsigned char *zl);\nvoid quicklistAppendPlainNode(quicklist *quicklist, unsigned char *data, size_t sz);\nvoid quicklistInsertAfter(quicklistIter *iter, quicklistEntry *entry,\n                          void *value, const size_t sz);\nvoid quicklistInsertBefore(quicklistIter *iter, quicklistEntry *entry,\n                           void *value, const size_t sz);\nvoid quicklistDelEntry(quicklistIter *iter, quicklistEntry *entry);\nvoid quicklistReplaceEntry(quicklistIter *iter, quicklistEntry *entry,\n                           void *data, size_t sz);\nint quicklistReplaceAtIndex(quicklist *quicklist, long index, void *data,\n                            const size_t sz);\nint quicklistDelRange(quicklist *quicklist, const long start, const long count);\nvoid quicklistInitIterator(quicklistIter *iter, quicklist *quicklist, int direction);\nint quicklistInitIteratorAtIdx(quicklistIter *iter, quicklist *quicklist,\n                               int direction, const long long idx);\nint quicklistInitIteratorEntryAtIdx(quicklistIter *iter, quicklist *quicklist,\n                                    const long long index, quicklistEntry 
*entry);\nint quicklistNext(quicklistIter *iter, quicklistEntry *entry);\nvoid quicklistSetDirection(quicklistIter *iter, int direction);\nvoid quicklistResetIterator(quicklistIter *iter);\nquicklist *quicklistDup(quicklist *orig);\nvoid quicklistRotate(quicklist *quicklist);\nint quicklistPopCustom(quicklist *quicklist, int where, unsigned char **data,\n                       size_t *sz, long long *sval,\n                       void *(*saver)(unsigned char *data, size_t sz));\nint quicklistPop(quicklist *quicklist, int where, unsigned char **data,\n                 size_t *sz, long long *slong);\nunsigned long quicklistCount(const quicklist *ql);\nsize_t quicklistAllocSize(const quicklist *ql);\nint quicklistCompare(quicklistEntry *entry, unsigned char *p2, const size_t p2_len,\n                     long long *cached_longval, int *cached_valid);\nsize_t quicklistGetLzf(const quicklistNode *node, void **data);\nvoid quicklistNodeLimit(int fill, size_t *size, unsigned int *count);\nint quicklistNodeExceedsLimit(int fill, size_t new_sz, unsigned int new_count);\nvoid quicklistRepr(unsigned char *ql, int full);\n\n/* bookmarks */\nint quicklistBookmarkCreate(quicklist **ql_ref, const char *name, quicklistNode *node);\nint quicklistBookmarkDelete(quicklist *ql, const char *name);\nquicklistNode *quicklistBookmarkFind(quicklist *ql, const char *name);\nvoid quicklistBookmarksClear(quicklist *ql);\nint quicklistSetPackedThreshold(size_t sz);\n\n#ifdef REDIS_TEST\nint quicklistTest(int argc, char *argv[], int flags);\n#endif\n\n/* Directions for iterators */\n#define AL_START_HEAD 0\n#define AL_START_TAIL 1\n\n#endif /* __QUICKLIST_H__ */\n"
  },
  {
    "path": "src/rand.c",
    "content": "/* Pseudo random number generation functions derived from the drand48()\n * function obtained from pysam source code.\n *\n * These functions are used in order to replace the default math.random()\n * Lua implementation with something having exactly the same behavior\n * across different systems (by default Lua uses libc's rand() that is not\n * required to implement a specific PRNG generating the same sequence\n * in different systems if seeded with the same integer).\n *\n * The original code appears to be under the public domain.\n * I modified it removing the unneeded functions and all the\n * 1960-style C coding stuff...\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2010-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdint.h>\n\n#define N\t16\n#define MASK\t((1 << (N - 1)) + (1 << (N - 1)) - 1)\n#define LOW(x)\t((unsigned)(x) & MASK)\n#define HIGH(x)\tLOW((x) >> N)\n#define MUL(x, y, z)\t{ int32_t l = (long)(x) * (long)(y); \\\n\t\t(z)[0] = LOW(l); (z)[1] = HIGH(l); }\n#define CARRY(x, y)\t((int32_t)(x) + (long)(y) > MASK)\n#define ADDEQU(x, y, z)\t(z = CARRY(x, (y)), x = LOW(x + (y)))\n#define X0\t0x330E\n#define X1\t0xABCD\n#define X2\t0x1234\n#define A0\t0xE66D\n#define A1\t0xDEEC\n#define A2\t0x5\n#define C\t0xB\n#define SET3(x, x0, x1, x2)\t((x)[0] = (x0), (x)[1] = (x1), (x)[2] = (x2))\n#define SETLOW(x, y, n) SET3(x, LOW((y)[n]), LOW((y)[(n)+1]), LOW((y)[(n)+2]))\n#define SEED(x0, x1, x2) (SET3(x, x0, x1, x2), SET3(a, A0, A1, A2), c = C)\n#define REST(v)\tfor (i = 0; i < 3; i++) { xsubi[i] = x[i]; x[i] = temp[i]; } \\\n\t\treturn (v);\n#define HI_BIT\t(1L << (2 * N - 1))\n\nstatic uint32_t x[3] = { X0, X1, X2 }, a[3] = { A0, A1, A2 }, c = C;\nstatic void next(void);\n\nint32_t redisLrand48(void) {\n    next();\n    return (((int32_t)x[2] << (N - 1)) + (x[1] >> 1));\n}\n\nvoid redisSrand48(int32_t seedval) {\n    SEED(X0, LOW(seedval), HIGH(seedval));\n}\n\nstatic void next(void) {\n    uint32_t p[2], q[2], r[2], carry0, carry1;\n\n    MUL(a[0], x[0], p);\n    ADDEQU(p[0], c, carry0);\n    ADDEQU(p[1], carry0, carry1);\n    MUL(a[0], x[1], q);\n    ADDEQU(p[1], q[0], carry0);\n    MUL(a[1], x[0], r);\n    x[2] = LOW(carry0 
+ carry1 + CARRY(p[1], r[0]) + q[1] + r[1] +\n            a[0] * x[2] + a[1] * x[1] + a[2] * x[0]);\n    x[1] = LOW(p[1] + r[0]);\n    x[0] = LOW(p[0]);\n}\n"
  },
  {
    "path": "src/rand.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef REDIS_RANDOM_H\n#define REDIS_RANDOM_H\n\nint32_t redisLrand48(void);\nvoid redisSrand48(int32_t seedval);\n\n#define REDIS_LRAND48_MAX INT32_MAX\n\n#endif\n"
  },
  {
    "path": "src/rax.c",
    "content": "/* Rax -- A radix tree implementation.\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdlib.h>\n#include <string.h>\n#include <stdio.h>\n#include <errno.h>\n#include <math.h>\n#include \"rax.h\"\n#include \"redisassert.h\"\n\n#ifndef RAX_MALLOC_INCLUDE\n#define RAX_MALLOC_INCLUDE \"rax_malloc.h\"\n#endif\n\n#include RAX_MALLOC_INCLUDE\n\n/* -------------------------------- Debugging ------------------------------ */\n\nvoid raxDebugShowNode(const char *msg, raxNode *n);\n\n/* Turn debugging messages on/off by compiling with RAX_DEBUG_MSG macro on.\n * When RAX_DEBUG_MSG is defined by default Rax operations will emit a lot\n * of debugging info to the standard output, however you can still turn\n * debugging on/off in order to enable it only when you suspect there is an\n * operation causing a bug using the function raxSetDebugMsg(). */\n#ifdef RAX_DEBUG_MSG\n#define debugf(...)                                                            \\\n    if (raxDebugMsg) {                                                         \\\n        printf(\"%s:%s:%d:\\t\", __FILE__, __func__, __LINE__);                   \\\n        printf(__VA_ARGS__);                                                   \\\n        fflush(stdout);                                                        \\\n    }\n\n#define debugnode(msg,n) raxDebugShowNode(msg,n)\n#else\n#define debugf(...)\n#define debugnode(msg,n)\n#endif\n\n/* By default log debug info if RAX_DEBUG_MSG is defined. */\nstatic int raxDebugMsg = 1;\n\n/* When debug messages are enabled, turn them on/off dynamically. By\n * default they are enabled. Set the state to 0 to disable, and 1 to\n * re-enable. 
*/\nvoid raxSetDebugMsg(int onoff) {\n    raxDebugMsg = onoff;\n}\n\n/* ------------------------- raxStack functions --------------------------\n * The raxStack is a simple stack of pointers that is capable of switching\n * from using a stack-allocated array to dynamic heap once a given number of\n * items are reached. It is used in order to retain the list of parent nodes\n * while walking the radix tree in order to implement certain operations that\n * need to navigate the tree upward.\n * ------------------------------------------------------------------------- */\n\n/* Initialize the stack. */\nstatic inline void raxStackInit(raxStack *ts) {\n    ts->stack = ts->static_items;\n    ts->items = 0;\n    ts->maxitems = RAX_STACK_STATIC_ITEMS;\n    ts->oom = 0;\n}\n\n/* Push an item into the stack, returns 1 on success, 0 on out of memory. */\nstatic inline int raxStackPush(raxStack *ts, void *ptr) {\n    if (ts->items == ts->maxitems) {\n        if (ts->stack == ts->static_items) {\n            ts->stack = rax_malloc(sizeof(void*)*ts->maxitems*2);\n            if (ts->stack == NULL) {\n                ts->stack = ts->static_items;\n                ts->oom = 1;\n                errno = ENOMEM;\n                return 0;\n            }\n            memcpy(ts->stack,ts->static_items,sizeof(void*)*ts->maxitems);\n        } else {\n            void **newalloc = rax_realloc(ts->stack,sizeof(void*)*ts->maxitems*2);\n            if (newalloc == NULL) {\n                ts->oom = 1;\n                errno = ENOMEM;\n                return 0;\n            }\n            ts->stack = newalloc;\n        }\n        ts->maxitems *= 2;\n    }\n    ts->stack[ts->items] = ptr;\n    ts->items++;\n    return 1;\n}\n\n/* Pop an item from the stack, the function returns NULL if there are no\n * items to pop. 
*/\nstatic inline void *raxStackPop(raxStack *ts) {\n    if (ts->items == 0) return NULL;\n    ts->items--;\n    return ts->stack[ts->items];\n}\n\n/* Return the stack item at the top of the stack without actually consuming\n * it. */\nstatic inline void *raxStackPeek(raxStack *ts) {\n    if (ts->items == 0) return NULL;\n    return ts->stack[ts->items-1];\n}\n\n/* Free the stack in case we used heap allocation. */\nstatic inline void raxStackFree(raxStack *ts) {\n    if (ts->stack != ts->static_items) rax_free(ts->stack);\n}\n\n/* ----------------------------------------------------------------------------\n * Radix tree implementation\n * --------------------------------------------------------------------------*/\n\n/* Return the padding needed in the characters section of a node having size\n * 'nodesize'. The padding is needed to store the child pointers to aligned\n * addresses. Note that we add 4 to the node size because the node has a four\n * bytes header. */\n#define raxPadding(nodesize) ((sizeof(void*)-(((nodesize)+4) % sizeof(void*))) & (sizeof(void*)-1))\n\n/* Return the pointer to the last child pointer in a node. For the compressed\n * nodes this is the only child pointer. */\n#define raxNodeLastChildPtr(n) ((raxNode**) ( \\\n    ((char*)(n)) + \\\n    raxNodeCurrentLength(n) - \\\n    sizeof(raxNode*) - \\\n    (((n)->iskey && !(n)->isnull) ? sizeof(void*) : 0) \\\n))\n\n/* Return the pointer to the first child pointer. */\n#define raxNodeFirstChildPtr(n) ((raxNode**) ( \\\n    (n)->data + \\\n    (n)->size + \\\n    raxPadding((n)->size)))\n\n/* Return the current total size of the node. Note that the second line\n * computes the padding after the string of characters, needed in order to\n * save pointers to aligned addresses. */\n#define raxNodeCurrentLength(n) ( \\\n    sizeof(raxNode)+(n)->size+ \\\n    raxPadding((n)->size)+ \\\n    ((n)->iscompr ? 
sizeof(raxNode*) : sizeof(raxNode*)*(n)->size)+ \\\n    (((n)->iskey && !(n)->isnull)*sizeof(void*)) \\\n)\n\n/* Allocate a new non compressed node with the specified number of children.\n * If datafield is true, the allocation is made large enough to hold the\n * associated data pointer.\n * Returns the new node pointer. On out of memory NULL is returned. */\nraxNode *raxNewNode(rax *rax, size_t children, int datafield) {\n    size_t nodesize = sizeof(raxNode)+children+raxPadding(children)+\n                      sizeof(raxNode*)*children;\n    if (datafield) nodesize += sizeof(void*);\n    size_t usable;\n    raxNode *node = rax_malloc_usable(nodesize,&usable);\n    if (node == NULL) return NULL;\n    node->iskey = 0;\n    node->isnull = 0;\n    node->iscompr = 0;\n    node->size = children;\n    if (rax->alloc_size) *rax->alloc_size += usable;\n    return node;\n}\n\n/* Deallocate node */\nvoid raxFreeNode(rax *rax, raxNode *n) {\n    size_t usable;\n    rax_free_usable(n, &usable);\n    if (rax->alloc_size) *rax->alloc_size -= usable;\n}\n\n/* Allocate a new rax and return its pointer. On out of memory the function\n * returns NULL. */\nrax *raxNew(void) {\n    return raxNewWithMetadata(0, NULL);\n}\n\n/* Allocate a new rax with metadata. On out of memory the function\n * returns NULL.\n * If passed `alloc_size` is non-NULL, rax will account for its used\n * memory at this location. */\nrax *raxNewWithMetadata(int metaSize, size_t *alloc_size) {\n    size_t usable;\n    rax *rax = rax_malloc_usable(sizeof(*rax) + metaSize, &usable);\n    if (rax == NULL) return NULL;\n    rax->numele = 0;\n    rax->numnodes = 1;\n    rax->alloc_size = alloc_size;\n    if (rax->alloc_size) *rax->alloc_size += usable;\n    rax->head = raxNewNode(rax, 0, 0);\n    if (rax->head == NULL) {\n        if (rax->alloc_size) *rax->alloc_size -= usable;\n        rax_free(rax);\n        return NULL;\n    } else {\n        return rax;\n    }\n}\n\n/* realloc the node to have 'newsize'. 
On out of memory NULL is returned. */\nraxNode *raxNodeRealloc(rax *rax, raxNode *n, size_t newsize) {\n    size_t usable, old_usable;\n    raxNode *newn = rax_realloc_usable(n,newsize,&usable,&old_usable);\n    if (newn == NULL) return NULL;\n    if (rax->alloc_size) {\n        *rax->alloc_size -= old_usable;\n        *rax->alloc_size += usable;\n    }\n    return newn;\n}\n\n/* realloc the node to make room for auxiliary data in order\n * to store an item in that node. On out of memory NULL is returned. */\nraxNode *raxReallocForData(rax *rax, raxNode *n, void *data) {\n    if (data == NULL) return n; /* No reallocation needed, setting isnull=1 */\n    size_t curlen = raxNodeCurrentLength(n);\n    return raxNodeRealloc(rax,n,curlen+sizeof(void*));\n}\n\n/* Set the node auxiliary data to the specified pointer. */\nvoid raxSetData(raxNode *n, void *data) {\n    n->iskey = 1;\n    if (data != NULL) {\n        n->isnull = 0;\n        void **ndata = (void**)\n            ((char*)n+raxNodeCurrentLength(n)-sizeof(void*));\n        memcpy(ndata,&data,sizeof(data));\n    } else {\n        n->isnull = 1;\n    }\n}\n\n/* Get the node auxiliary data. */\nvoid *raxGetData(raxNode *n) {\n    if (n->isnull) return NULL;\n    void **ndata =(void**)((char*)n+raxNodeCurrentLength(n)-sizeof(void*));\n    void *data;\n    memcpy(&data,ndata,sizeof(data));\n    return data;\n}\n\n/* Add a new child to the node 'n' representing the character 'c' and return\n * its new pointer, as well as the child pointer by reference. Additionally\n * '***parentlink' is populated with the raxNode pointer-to-pointer of where\n * the new child was stored, which is useful for the caller to replace the\n * child pointer if it gets reallocated.\n *\n * On success the new parent node pointer is returned (it may change because\n * of the realloc, so the caller should discard 'n' and use the new value).\n * On out of memory NULL is returned, and the old node is still valid. 
*/\nraxNode *raxAddChild(rax *rax, raxNode *n, unsigned char c, raxNode **childptr, raxNode ***parentlink) {\n    assert(n->iscompr == 0);\n\n    size_t curlen = raxNodeCurrentLength(n);\n    n->size++;\n    size_t newlen = raxNodeCurrentLength(n);\n    n->size--; /* For now restore the original size. We'll update it only on\n                  success at the end. */\n\n    /* Alloc the new child we will link to 'n'. */\n    raxNode *child = raxNewNode(rax,0,0);\n    if (child == NULL) return NULL;\n\n    /* Make space in the original node. If the current allocation already\n     * has enough usable bytes (common with jemalloc size-class rounding),\n     * skip the realloc entirely. */\n    if (rax_malloc_usable_size(n) < newlen) {\n        raxNode *newn = raxNodeRealloc(rax,n,newlen);\n        if (newn == NULL) {\n            raxFreeNode(rax,child);\n            return NULL;\n        }\n        n = newn;\n    }\n\n    /* After the reallocation, we have up to 8/16 (depending on the system\n     * pointer size, and the required node padding) bytes at the end, that is,\n     * the additional char in the 'data' section, plus one pointer to the new\n     * child, plus the padding needed in order to store addresses into aligned\n     * locations.\n     *\n     * So if we start with the following node, having \"abde\" edges.\n     *\n     * Note:\n     * - We assume 4 bytes pointer for simplicity.\n     * - Each space below corresponds to one byte\n     *\n     * [HDR*][abde][Aptr][Bptr][Dptr][Eptr]|AUXP|\n     *\n     * After the reallocation we need: 1 byte for the new edge character\n     * plus 4 bytes for a new child pointer (assuming 32 bit machine).\n     * However after adding 1 byte to the edge char, the header + the edge\n     * characters are no longer aligned, so we also need 3 bytes of padding.\n     * In total the reallocation will add 1+4+3 bytes = 8 bytes:\n     *\n     * (Blank bytes are represented by \".\")\n     *\n     * 
[HDR*][abde][Aptr][Bptr][Dptr][Eptr]|AUXP|[....][....]\n     *\n     * Let's find where to insert the new child in order to make sure\n     * it is inserted in-place lexicographically. Assuming we are adding\n     * a child \"c\" in our case pos will be = 2 after the end of the following\n     * loop. */\n    int pos;\n    if (n->size > 0 && c > n->data[n->size - 1]) {\n        pos = n->size;\n    } else {\n        for (pos = 0; pos < n->size; pos++) {\n            if (n->data[pos] > c) break;\n        }\n    }\n\n    /* Now, if present, move auxiliary data pointer at the end\n     * so that we can mess with the other data without overwriting it.\n     * We will obtain something like that:\n     *\n     * [HDR*][abde][Aptr][Bptr][Dptr][Eptr][....][....]|AUXP|\n     */\n    unsigned char *src, *dst;\n    if (n->iskey && !n->isnull) {\n        src = ((unsigned char*)n+curlen-sizeof(void*));\n        dst = ((unsigned char*)n+newlen-sizeof(void*));\n        memmove(dst,src,sizeof(void*));\n    }\n\n    /* Compute the \"shift\", that is, how many bytes we need to move the\n     * pointers section forward because of the addition of the new child\n     * byte in the string section. Note that if we had no padding, that\n     * would be always \"1\", since we are adding a single byte in the string\n     * section of the node (where now there is \"abde\" basically).\n     *\n     * However we have padding, so it could be zero, or up to 8.\n     *\n     * Another way to think at the shift is, how many bytes we need to\n     * move child pointers forward *other than* the obvious sizeof(void*)\n     * needed for the additional pointer itself. */\n    size_t shift = newlen - curlen - sizeof(void*);\n\n    /* We said we are adding a node with edge 'c'. 
The insertion\n     * point is between 'b' and 'd', so the 'pos' variable value is\n     * the index of the first child pointer that we need to move forward\n     * to make space for our new pointer.\n     *\n     * To start, move all the child pointers after the insertion point\n     * of shift+sizeof(pointer) bytes on the right, to obtain:\n     *\n     * [HDR*][abde][Aptr][Bptr][....][....][Dptr][Eptr]|AUXP|\n     */\n    src = n->data+n->size+\n          raxPadding(n->size)+\n          sizeof(raxNode*)*pos;\n    memmove(src+shift+sizeof(raxNode*),src,sizeof(raxNode*)*(n->size-pos));\n\n    /* Move the pointers to the left of the insertion position as well. Often\n     * we don't need to do anything if there was already some padding to use. In\n     * that case the final destination of the pointers will be the same, however\n     * in our example there was no pre-existing padding, so we added one byte\n     * plus three bytes of padding. After the next memmove() things will look\n     * like that:\n     *\n     * [HDR*][abde][....][Aptr][Bptr][....][Dptr][Eptr]|AUXP|\n     */\n    if (shift) {\n        src = (unsigned char*) raxNodeFirstChildPtr(n);\n        memmove(src+shift,src,sizeof(raxNode*)*pos);\n    }\n\n    /* Now make the space for the additional char in the data section,\n     * but also move the pointers before the insertion point to the right\n     * by shift bytes, in order to obtain the following:\n     *\n     * [HDR*][ab.d][e...][Aptr][Bptr][....][Dptr][Eptr]|AUXP|\n     */\n    src = n->data+pos;\n    memmove(src+1,src,n->size-pos);\n\n    /* We can now set the character and its child node pointer to get:\n     *\n     * [HDR*][abcd][e...][Aptr][Bptr][....][Dptr][Eptr]|AUXP|\n     * [HDR*][abcd][e...][Aptr][Bptr][Cptr][Dptr][Eptr]|AUXP|\n     */\n    n->data[pos] = c;\n    n->size++;\n    src = (unsigned char*) raxNodeFirstChildPtr(n);\n    raxNode **childfield = (raxNode**)(src+sizeof(raxNode*)*pos);\n    
memcpy(childfield,&child,sizeof(child));\n    *childptr = child;\n    *parentlink = childfield;\n    return n;\n}\n\n/* Turn the node 'n', that must be a node without any children, into a\n * compressed node representing a set of nodes linked one after the other\n * and having exactly one child each. The node can be a key or not: this\n * property and the associated value if any will be preserved.\n *\n * The function also returns a child node, since the last node of the\n * compressed chain cannot be part of the chain: it has zero children while\n * we can only compress inner nodes with exactly one child each. */\nraxNode *raxCompressNode(rax *rax, raxNode *n, unsigned char *s, size_t len, raxNode **child) {\n    assert(n->size == 0 && n->iscompr == 0);\n    void *data = NULL; /* Initialized only to avoid warnings. */\n    size_t newsize;\n\n    debugf(\"Compress node: %.*s\\n\", (int)len,s);\n\n    /* Allocate the child to link to this node. */\n    *child = raxNewNode(rax,0,0);\n    if (*child == NULL) return NULL;\n\n    /* Make space in the parent node. */\n    newsize = sizeof(raxNode)+len+raxPadding(len)+sizeof(raxNode*);\n    if (n->iskey) {\n        data = raxGetData(n); /* To restore it later. */\n        if (!n->isnull) newsize += sizeof(void*);\n    }\n    raxNode *newn = raxNodeRealloc(rax,n,newsize);\n    if (newn == NULL) {\n        raxFreeNode(rax, *child);\n        return NULL;\n    }\n    n = newn;\n\n    n->iscompr = 1;\n    n->size = len;\n    memcpy(n->data,s,len);\n    if (n->iskey) raxSetData(n,data);\n    raxNode **childfield = raxNodeLastChildPtr(n);\n    memcpy(childfield,child,sizeof(*child));\n    return n;\n}\n\n/* Low level function that walks the tree looking for the string\n * 's' of 'len' bytes. 
The function returns the number of characters\n * of the key that was possible to process: if the returned integer\n * is the same as 'len', then it means that the node corresponding to the\n * string was found (however it may not be a key in case the node->iskey is\n * zero or if simply we stopped in the middle of a compressed node, so that\n * 'splitpos' is non zero).\n *\n * Otherwise if the returned integer is not the same as 'len', there was an\n * early stop during the tree walk because of a character mismatch.\n *\n * The node where the search ended (because the full string was processed\n * or because there was an early stop) is returned by reference as\n * '*stopnode' if the passed pointer is not NULL. This node link in the\n * parent's node is returned as '*plink' if not NULL. Finally, if the\n * search stopped in a compressed node, '*splitpos' returns the index\n * inside the compressed node where the search ended. This is useful to\n * know where to split the node for insertion.\n *\n * Note that when we stop in the middle of a compressed node with\n * a perfect match, this function will return a length equal to the\n * 'len' argument (all the key matched), and will return a *splitpos which is\n * always positive (that will represent the index of the character immediately\n * *after* the last match in the current compressed node).\n *\n * When instead we stop at a compressed node and *splitpos is zero, it\n * means that the current node represents the key (that is, none of the\n * compressed node characters are needed to represent the key, just all\n * its parents nodes). */\nstatic inline size_t raxLowWalk(rax *rax, unsigned char *s, size_t len, raxNode **stopnode, raxNode ***plink, int *splitpos, raxStack *ts) {\n    raxNode *h = rax->head;\n    raxNode **parentlink = &rax->head;\n\n    size_t i = 0; /* Position in the string. 
*/\n    size_t j = 0; /* Position in the node children (or bytes if compressed).*/\n    while(h->size && i < len) {\n        debugnode(\"Lookup current node\",h);\n        unsigned char *v = h->data;\n\n        if (h->iscompr) {\n            for (j = 0; j < h->size && i < len; j++, i++) {\n                if (v[j] != s[i]) break;\n            }\n            if (j != h->size) break;\n        } else {\n            /* Children are sorted. Check the last child first: for\n             * sequential inserts the match is almost always at the end,\n             * and for random keys the extra compare is negligible vs\n             * the O(n) scan that follows on miss. */\n            if (v[h->size - 1] == s[i]) {\n                j = h->size - 1;\n            } else if (s[i] > v[h->size - 1]) {\n                j = h->size;\n                break;\n            } else {\n                /* Even when h->size is large, a linear scan provides good\n                 * performance compared to other approaches that are in theory\n                 * more sound, like performing a binary search. */\n                for (j = 0; j < h->size; j++) {\n                    if (v[j] == s[i]) break;\n                }\n                if (j == h->size) break;\n            }\n            i++;\n        }\n\n        if (ts) raxStackPush(ts,h); /* Save stack of parent nodes. */\n        raxNode **children = raxNodeFirstChildPtr(h);\n        if (h->iscompr) j = 0; /* Compressed node's only child is at index 0. */\n        memcpy(&h,children+j,sizeof(h));\n        parentlink = children+j;\n        j = 0; /* If the new node is non compressed and we do not\n                  iterate again (since i == len) set the split\n                  position to 0 to signal this node represents\n                  the searched key. 
*/\n    }\n    debugnode(\"Lookup stop node is\",h);\n    if (stopnode) *stopnode = h;\n    if (plink) *plink = parentlink;\n    if (splitpos && h->iscompr) *splitpos = j;\n    return i;\n}\n\n/* Insert the element 's' of size 'len', setting as auxiliary data\n * the pointer 'data'. If the element is already present, the associated\n * data is updated (only if 'overwrite' is set to 1), and 0 is returned,\n * otherwise the element is inserted and 1 is returned. On out of memory the\n * function returns 0 as well but sets errno to ENOMEM, otherwise errno will\n * be set to 0.\n */\nint raxGenericInsert(rax *rax, unsigned char *s, size_t len, void *data, void **old, int overwrite) {\n    size_t i, usable;\n    int j = 0; /* Split position. If raxLowWalk() stops in a compressed\n                  node, the index 'j' represents the char we stopped within the\n                  compressed node, that is, the position where to split the\n                  node for insertion. */\n    raxNode *h, **parentlink;\n    size_t dummy, *alloc_size = &dummy;\n\n    if (rax->alloc_size) alloc_size = rax->alloc_size;\n    debugf(\"### Insert %.*s with value %p\\n\", (int)len, s, data);\n    i = raxLowWalk(rax,s,len,&h,&parentlink,&j,NULL);\n\n    /* If i == len we walked following the whole string. If we are not\n     * in the middle of a compressed node, the string is either already\n     * inserted or this middle node is currently not a key, but can represent\n     * our key. We have just to reallocate the node and make space for the\n     * data pointer. */\n    if (i == len && (!h->iscompr || j == 0 /* not in the middle if j is 0 */)) {\n        debugf(\"### Insert: node representing key exists\\n\");\n        /* Make space for the value pointer if needed. 
*/\n        if (!h->iskey || (h->isnull && overwrite)) {\n            h = raxReallocForData(rax,h,data);\n            if (h) memcpy(parentlink,&h,sizeof(h));\n        }\n        if (h == NULL) {\n            errno = ENOMEM;\n            return 0;\n        }\n\n        /* Update the existing key if there is already one. */\n        if (h->iskey) {\n            if (old) *old = raxGetData(h);\n            if (overwrite) raxSetData(h,data);\n            errno = 0;\n            return 0; /* Element already exists. */\n        }\n\n        /* Otherwise set the node as a key. Note that raxSetData()\n         * will set h->iskey. */\n        raxSetData(h,data);\n        rax->numele++;\n        return 1; /* Element inserted. */\n    }\n\n    /* If the node we stopped at is a compressed node, we need to\n     * split it before continuing.\n     *\n     * Splitting a compressed node has a few possible cases.\n     * Imagine that the node 'h' we are currently at is a compressed\n     * node containing the string \"ANNIBALE\" (it means that it represents\n     * nodes A -> N -> N -> I -> B -> A -> L -> E with the only child\n     * pointer of this node pointing at the 'E' node, because remember that\n     * we have characters at the edges of the graph, not inside the nodes\n     * themselves).\n     *\n     * In order to show a real case imagine that our node also points to\n     * another compressed node, that finally points at the node without\n     * children, representing 'O':\n     *\n     *     \"ANNIBALE\" -> \"SCO\" -> []\n     *\n     * When inserting we may face the following cases. Note that all the cases\n     * require the insertion of a non compressed node with exactly two\n     * children, except for the last case which just requires splitting a\n     * compressed node.\n     *\n     * 1) Inserting \"ANNIENTARE\"\n     *\n     *               |B| -> \"ALE\" -> \"SCO\" -> []\n     *     \"ANNI\" -> |-|\n     *               |E| -> (... continue algo ...) 
\"NTARE\" -> []\n     *\n     * 2) Inserting \"ANNIBALI\"\n     *\n     *                  |E| -> \"SCO\" -> []\n     *     \"ANNIBAL\" -> |-|\n     *                  |I| -> (... continue algo ...) []\n     *\n     * 3) Inserting \"AGO\" (Like case 1, but set iscompr = 0 into original node)\n     *\n     *            |N| -> \"NIBALE\" -> \"SCO\" -> []\n     *     |A| -> |-|\n     *            |G| -> (... continue algo ...) |O| -> []\n     *\n     * 4) Inserting \"CIAO\"\n     *\n     *     |A| -> \"NNIBALE\" -> \"SCO\" -> []\n     *     |-|\n     *     |C| -> (... continue algo ...) \"IAO\" -> []\n     *\n     * 5) Inserting \"ANNI\"\n     *\n     *     \"ANNI\" -> \"BALE\" -> \"SCO\" -> []\n     *\n     * The final algorithm for insertion covering all the above cases is as\n     * follows.\n     *\n     * ============================= ALGO 1 =============================\n     *\n     * For the above cases 1 to 4, that is, all cases where we stopped in\n     * the middle of a compressed node for a character mismatch, do:\n     *\n     * Let $SPLITPOS be the zero-based index at which, in the\n     * compressed node array of characters, we found the mismatching\n     * character. For example if the node contains \"ANNIBALE\" and we add\n     * \"ANNIENTARE\" the $SPLITPOS is 4, that is, the index at which the\n     * mismatching character is found.\n     *\n     * 1. Save the current compressed node $NEXT pointer (the pointer to the\n     *    child element, that is always present in compressed nodes).\n     *\n     * 2. Create \"split node\" having as child the non common letter\n     *    at the compressed node. The other non common letter (at the key)\n     *    will be added later as we continue the normal insertion algorithm\n     *    at step \"6\".\n     *\n     * 3a. IF $SPLITPOS == 0:\n     *     Replace the old node with the split node, by copying the auxiliary\n     *     data if any. Fix parent's reference. 
Free the old node later\n     *     (we still need its data for the next steps of the algorithm).\n     *\n     * 3b. IF $SPLITPOS != 0:\n     *     Trim the compressed node (reallocating it as well) in order to\n     *     contain $SPLITPOS characters. Change child pointer in order to link\n     *     to the split node. If new compressed node len is just 1, set\n     *     iscompr to 0 (layout is the same). Fix parent's reference.\n     *\n     * 4a. IF the postfix len (the length of the remaining string of the\n     *     original compressed node after the split character) is non zero,\n     *     create a \"postfix node\". If the postfix node has just one character\n     *     set iscompr to 0, otherwise iscompr to 1. Set the postfix node\n     *     child pointer to $NEXT.\n     *\n     * 4b. IF the postfix len is zero, just use $NEXT as postfix pointer.\n     *\n     * 5. Set child[0] of split node to postfix node.\n     *\n     * 6. Set the split node as the current node, set current index at child[1]\n     *    and continue the insertion algorithm as usual.\n     *\n     * ============================= ALGO 2 =============================\n     *\n     * For case 5, that is, if we stopped in the middle of a compressed\n     * node but no mismatch was found, do:\n     *\n     * Let $SPLITPOS be the zero-based index at which, in the\n     * compressed node array of characters, we stopped iterating because\n     * there were no more key characters to match. So in the example of\n     * the node \"ANNIBALE\", adding the string \"ANNI\", the $SPLITPOS is 4.\n     *\n     * 1. Save the current compressed node $NEXT pointer (the pointer to the\n     *    child element, that is always present in compressed nodes).\n     *\n     * 2. Create a \"postfix node\" containing all the characters from $SPLITPOS\n     *    to the end. 
Use $NEXT as the postfix node child pointer.\n     *    If the postfix node length is 1, set iscompr to 0.\n     *    Set the node as a key with the associated value of the newly\n     *    inserted key.\n     *\n     * 3. Trim the current node to contain the first $SPLITPOS characters.\n     *    As usual, if the new node length is just 1, set iscompr to 0.\n     *    Take the iskey / associated value as it was in the original node.\n     *    Fix the parent's reference.\n     *\n     * 4. Set the postfix node as the only child pointer of the trimmed\n     *    node created at step 3.\n     */\n\n    /* ------------------------- ALGORITHM 1 --------------------------- */\n    if (h->iscompr && i != len) {\n        debugf(\"ALGO 1: Stopped at compressed node %.*s (%p)\\n\",\n            h->size, h->data, (void*)h);\n        debugf(\"Still to insert: %.*s\\n\", (int)(len-i), s+i);\n        debugf(\"Splitting at %d: '%c'\\n\", j, ((char*)h->data)[j]);\n        debugf(\"Other (key) letter is '%c'\\n\", s[i]);\n\n        /* 1: Save next pointer. */\n        raxNode **childfield = raxNodeLastChildPtr(h);\n        raxNode *next;\n        memcpy(&next,childfield,sizeof(next));\n        debugf(\"Next is %p\\n\", (void*)next);\n        debugf(\"iskey %d\\n\", h->iskey);\n        if (h->iskey) {\n            debugf(\"key value is %p\\n\", raxGetData(h));\n        }\n\n        /* Set the length of the additional nodes we will need. */\n        size_t trimmedlen = j;\n        size_t postfixlen = h->size - j - 1;\n        int split_node_is_key = !trimmedlen && h->iskey && !h->isnull;\n        size_t nodesize;\n\n        /* 2: Create the split node. Also allocate the other nodes we'll need\n         *    ASAP, so that it will be simpler to handle OOM. 
*/\n        raxNode *splitnode = raxNewNode(rax, 1, split_node_is_key);\n        raxNode *trimmed = NULL;\n        raxNode *postfix = NULL;\n\n        if (trimmedlen) {\n            nodesize = sizeof(raxNode)+trimmedlen+raxPadding(trimmedlen)+\n                       sizeof(raxNode*);\n            if (h->iskey && !h->isnull) nodesize += sizeof(void*);\n            trimmed = rax_malloc_usable(nodesize, &usable);\n            *alloc_size += usable;\n        }\n\n        if (postfixlen) {\n            nodesize = sizeof(raxNode)+postfixlen+raxPadding(postfixlen)+\n                       sizeof(raxNode*);\n            postfix = rax_malloc_usable(nodesize, &usable);\n            *alloc_size += usable;\n        }\n\n        /* OOM? Abort now that the tree is untouched. */\n        if (splitnode == NULL ||\n            (trimmedlen && trimmed == NULL) ||\n            (postfixlen && postfix == NULL))\n        {\n            raxFreeNode(rax,splitnode);\n            raxFreeNode(rax,trimmed);\n            raxFreeNode(rax,postfix);\n            errno = ENOMEM;\n            return 0;\n        }\n        splitnode->data[0] = h->data[j];\n\n        if (j == 0) {\n            /* 3a: Replace the old node with the split node. */\n            if (h->iskey) {\n                void *ndata = raxGetData(h);\n                raxSetData(splitnode,ndata);\n            }\n            memcpy(parentlink,&splitnode,sizeof(splitnode));\n        } else {\n            /* 3b: Trim the compressed node. */\n            trimmed->size = j;\n            memcpy(trimmed->data,h->data,j);\n            trimmed->iscompr = j > 1 ? 
1 : 0;\n            trimmed->iskey = h->iskey;\n            trimmed->isnull = h->isnull;\n            if (h->iskey && !h->isnull) {\n                void *ndata = raxGetData(h);\n                raxSetData(trimmed,ndata);\n            }\n            raxNode **cp = raxNodeLastChildPtr(trimmed);\n            memcpy(cp,&splitnode,sizeof(splitnode));\n            memcpy(parentlink,&trimmed,sizeof(trimmed));\n            parentlink = cp; /* Set parentlink to splitnode parent. */\n            rax->numnodes++;\n        }\n\n        /* 4: Create the postfix node: what remains of the original\n         * compressed node after the split. */\n        if (postfixlen) {\n            /* 4a: create a postfix node. */\n            postfix->iskey = 0;\n            postfix->isnull = 0;\n            postfix->size = postfixlen;\n            postfix->iscompr = postfixlen > 1;\n            memcpy(postfix->data,h->data+j+1,postfixlen);\n            raxNode **cp = raxNodeLastChildPtr(postfix);\n            memcpy(cp,&next,sizeof(next));\n            rax->numnodes++;\n        } else {\n            /* 4b: just use next as postfix node. */\n            postfix = next;\n        }\n\n        /* 5: Set splitnode first child as the postfix node. */\n        raxNode **splitchild = raxNodeLastChildPtr(splitnode);\n        memcpy(splitchild,&postfix,sizeof(postfix));\n\n        /* 6. Continue insertion: this will cause the splitnode to\n         * get a new child (the non common character at the currently\n         * inserted key). */\n        raxFreeNode(rax,h);\n        h = splitnode;\n    } else if (h->iscompr && i == len) {\n    /* ------------------------- ALGORITHM 2 --------------------------- */\n        debugf(\"ALGO 2: Stopped at compressed node %.*s (%p) j = %d\\n\",\n            h->size, h->data, (void*)h, j);\n\n        /* Allocate postfix & trimmed nodes ASAP to fail for OOM gracefully. 
*/\n        size_t postfixlen = h->size - j;\n        size_t nodesize = sizeof(raxNode)+postfixlen+raxPadding(postfixlen)+\n                          sizeof(raxNode*);\n        if (data != NULL) nodesize += sizeof(void*);\n        raxNode *postfix = rax_malloc_usable(nodesize, &usable);\n        *alloc_size += usable;\n\n        nodesize = sizeof(raxNode)+j+raxPadding(j)+sizeof(raxNode*);\n        if (h->iskey && !h->isnull) nodesize += sizeof(void*);\n        raxNode *trimmed = rax_malloc_usable(nodesize, &usable);\n        *alloc_size += usable;\n\n        if (postfix == NULL || trimmed == NULL) {\n            raxFreeNode(rax,postfix);\n            raxFreeNode(rax,trimmed);\n            errno = ENOMEM;\n            return 0;\n        }\n\n\n        /* 1: Save next pointer. */\n        raxNode **childfield = raxNodeLastChildPtr(h);\n        raxNode *next;\n        memcpy(&next,childfield,sizeof(next));\n\n        /* 2: Create the postfix node. */\n        postfix->size = postfixlen;\n        postfix->iscompr = postfixlen > 1;\n        postfix->iskey = 1;\n        postfix->isnull = 0;\n        memcpy(postfix->data,h->data+j,postfixlen);\n        raxSetData(postfix,data);\n        raxNode **cp = raxNodeLastChildPtr(postfix);\n        memcpy(cp,&next,sizeof(next));\n        rax->numnodes++;\n\n        /* 3: Trim the compressed node. */\n        trimmed->size = j;\n        trimmed->iscompr = j > 1;\n        trimmed->iskey = 0;\n        trimmed->isnull = 0;\n        memcpy(trimmed->data,h->data,j);\n        memcpy(parentlink,&trimmed,sizeof(trimmed));\n        if (h->iskey) {\n            void *aux = raxGetData(h);\n            raxSetData(trimmed,aux);\n        }\n\n        /* Fix the trimmed node child pointer to point to\n         * the postfix node. */\n        cp = raxNodeLastChildPtr(trimmed);\n        memcpy(cp,&postfix,sizeof(postfix));\n\n        /* Finish! We don't need to continue with the insertion\n         * algorithm for ALGO 2. 
The key is already inserted. */\n        rax->numele++;\n        raxFreeNode(rax,h);\n        return 1; /* Key inserted. */\n    }\n\n    /* We walked the radix tree as far as we could, but there are still chars\n     * left in our string. We need to insert the missing nodes. */\n    while(i < len) {\n        raxNode *child;\n\n        /* If this node is going to have a single child, and there\n         * are other characters, which would result in a chain\n         * of single-childed nodes, turn it into a compressed node. */\n        if (h->size == 0 && len-i > 1) {\n            debugf(\"Inserting compressed node\\n\");\n            size_t comprsize = len-i;\n            if (comprsize > RAX_NODE_MAX_SIZE)\n                comprsize = RAX_NODE_MAX_SIZE;\n            raxNode *newh = raxCompressNode(rax,h,s+i,comprsize,&child);\n            if (newh == NULL) goto oom;\n            h = newh;\n            memcpy(parentlink,&h,sizeof(h));\n            parentlink = raxNodeLastChildPtr(h);\n            i += comprsize;\n        } else {\n            debugf(\"Inserting normal node\\n\");\n            raxNode **new_parentlink;\n            raxNode *newh = raxAddChild(rax,h,s[i],&child,&new_parentlink);\n            if (newh == NULL) goto oom;\n            h = newh;\n            memcpy(parentlink,&h,sizeof(h));\n            parentlink = new_parentlink;\n            i++;\n        }\n        rax->numnodes++;\n        h = child;\n    }\n    raxNode *newh = raxReallocForData(rax,h,data);\n    if (newh == NULL) goto oom;\n    h = newh;\n    if (!h->iskey) rax->numele++;\n    raxSetData(h,data);\n    memcpy(parentlink,&h,sizeof(h));\n    return 1; /* Element inserted. */\n\noom:\n    /* This code path handles out of memory after part of the sub-tree was\n     * already modified. Set the node as a key, and then remove it. 
However we\n     * do that only if the node is a terminal node, otherwise if the OOM\n     * happened reallocating a node in the middle, we don't need to free\n     * anything. */\n    if (h->size == 0) {\n        h->isnull = 1;\n        h->iskey = 1;\n        rax->numele++; /* Compensate the next remove. */\n        assert(raxRemove(rax,s,i,NULL) != 0);\n    }\n    errno = ENOMEM;\n    return 0;\n}\n\n/* Overwriting insert. Just a wrapper for raxGenericInsert() that will\n * update the element if there is already one for the same key. */\nint raxInsert(rax *rax, unsigned char *s, size_t len, void *data, void **old) {\n    return raxGenericInsert(rax,s,len,data,old,1);\n}\n\n/* Non overwriting insert function: if an element with the same key\n * exists, the value is not updated and the function returns 0.\n * This is just a wrapper for raxGenericInsert(). */\nint raxTryInsert(rax *rax, unsigned char *s, size_t len, void *data, void **old) {\n    return raxGenericInsert(rax,s,len,data,old,0);\n}\n\n/* Find a key in the rax: return 1 if the item is found, 0 otherwise.\n * If there is an item and 'value' is passed in a non-NULL pointer,\n * the value associated with the item is set at that address. */\nint raxFind(rax *rax, unsigned char *s, size_t len, void **value) {\n    raxNode *h;\n\n    debugf(\"### Lookup: %.*s\\n\", (int)len, s);\n    int splitpos = 0;\n    size_t i = raxLowWalk(rax,s,len,&h,NULL,&splitpos,NULL);\n    if (i != len || (h->iscompr && splitpos != 0) || !h->iskey)\n        return 0;\n    if (value != NULL) *value = raxGetData(h);\n    return 1;\n}\n\n/* Return the memory address where the 'parent' node stores the specified\n * 'child' pointer, so that the caller can update the pointer with another\n * one if needed. The function assumes it will find a match, otherwise the\n * operation is an undefined behavior (it will continue scanning the\n * memory without any bound checking). 
*/\nraxNode **raxFindParentLink(raxNode *parent, raxNode *child) {\n    raxNode **cp = raxNodeFirstChildPtr(parent);\n    raxNode *c;\n    while(1) {\n        memcpy(&c,cp,sizeof(c));\n        if (c == child) break;\n        cp++;\n    }\n    return cp;\n}\n\n/* Low level child removal from node. The new node pointer (after the child\n * removal) is returned. Note that this function does not fix the pointer\n * of the parent node in its parent, so this task is up to the caller.\n * The function never fails due to out of memory. */\nraxNode *raxRemoveChild(rax *rax, raxNode *parent, raxNode *child) {\n    debugnode(\"raxRemoveChild before\", parent);\n    /* If parent is a compressed node (having a single child, by definition\n     * of the data structure), the removal of the child consists of turning\n     * it into a normal node without children. */\n    if (parent->iscompr) {\n        void *data = NULL;\n        if (parent->iskey) data = raxGetData(parent);\n        parent->isnull = 0;\n        parent->iscompr = 0;\n        parent->size = 0;\n        if (parent->iskey) raxSetData(parent,data);\n        debugnode(\"raxRemoveChild after\", parent);\n        return parent;\n    }\n\n    /* Otherwise we need to scan for the child pointer and memmove()\n     * accordingly.\n     *\n     * 1. To start we seek the first element in both the children\n     *    pointers and edge bytes in the node. */\n    raxNode **cp = raxNodeFirstChildPtr(parent);\n    raxNode **c = cp;\n    unsigned char *e = parent->data;\n\n    /* 2. Search the child pointer to remove inside the array of children\n     *    pointers. */\n    while(1) {\n        raxNode *aux;\n        memcpy(&aux,c,sizeof(aux));\n        if (aux == child) break;\n        c++;\n        e++;\n    }\n\n    /* 3. Remove the edge and the pointer by memmoving the remaining children\n     *    pointer and edge bytes one position before. 
*/\n    int taillen = parent->size - (e - parent->data) - 1;\n    debugf(\"raxRemoveChild tail len: %d\\n\", taillen);\n    memmove(e,e+1,taillen);\n\n    /* Compute the shift, that is the amount of bytes we should move our\n     * child pointers to the left, since the removal of one edge character\n     * and the corresponding padding change, may change the layout.\n     * We just check if in the old version of the node there was at the\n     * end just a single byte and all padding: in that case removing one char\n     * will remove a whole sizeof(void*) word. */\n    size_t shift = ((parent->size+4) % sizeof(void*)) == 1 ? sizeof(void*) : 0;\n\n    /* Move the children pointers before the deletion point. */\n    if (shift)\n        memmove(((char*)cp)-shift,cp,(parent->size-taillen-1)*sizeof(raxNode**));\n\n    /* Move the remaining \"tail\" pointers at the right position as well. */\n    size_t valuelen = (parent->iskey && !parent->isnull) ? sizeof(void*) : 0;\n    memmove(((char*)c)-shift,c+1,taillen*sizeof(raxNode**)+valuelen);\n\n    /* 4. Update size. */\n    parent->size--;\n\n    /* realloc the node according to the theoretical memory usage, to free\n     * data if we are over-allocating right now. */\n    raxNode *newnode = raxNodeRealloc(rax,parent,raxNodeCurrentLength(parent));\n    if (newnode) {\n        debugnode(\"raxRemoveChild after\", newnode);\n    }\n    /* Note: if raxNodeRealloc() fails we just return the old address, which\n     * is valid. */\n    return newnode ? newnode : parent;\n}\n\n/* Remove the specified item. Returns 1 if the item was found and\n * deleted, 0 otherwise. 
*/\nint raxRemove(rax *rax, unsigned char *s, size_t len, void **old) {\n    raxNode *h;\n    raxStack ts;\n\n    debugf(\"### Delete: %.*s\\n\", (int)len, s);\n    raxStackInit(&ts);\n    int splitpos = 0;\n    size_t i = raxLowWalk(rax,s,len,&h,NULL,&splitpos,&ts);\n    if (i != len || (h->iscompr && splitpos != 0) || !h->iskey) {\n        raxStackFree(&ts);\n        return 0;\n    }\n    if (old) *old = raxGetData(h);\n    h->iskey = 0;\n    rax->numele--;\n\n    /* If this node has no children, the deletion needs to reclaim the\n     * no longer used nodes. This is an iterative process that needs to\n     * walk the tree upward, deleting all the nodes with just one child\n     * that are not keys, until the head of the rax is reached or the first\n     * node with more than one child is found. */\n\n    int trycompress = 0; /* Will be set to 1 if we should try to optimize the\n                            tree resulting from the deletion. */\n\n    if (h->size == 0) {\n        debugf(\"Key deleted in node without children. Cleanup needed.\\n\");\n        raxNode *child = NULL;\n        while(h != rax->head) {\n            child = h;\n            debugf(\"Freeing child %p [%.*s] key:%d\\n\", (void*)child,\n                (int)child->size, (char*)child->data, child->iskey);\n            raxFreeNode(rax,child);\n            rax->numnodes--;\n            h = raxStackPop(&ts);\n            /* If this node has more than one child, or actually holds\n             * a key, stop here. 
*/\n            if (h->iskey || (!h->iscompr && h->size != 1)) break;\n        }\n        if (child) {\n            debugf(\"Unlinking child %p from parent %p\\n\",\n                (void*)child, (void*)h);\n            raxNode *new = raxRemoveChild(rax,h,child);\n            if (new != h) {\n                raxNode *parent = raxStackPeek(&ts);\n                raxNode **parentlink;\n                if (parent == NULL) {\n                    parentlink = &rax->head;\n                } else {\n                    parentlink = raxFindParentLink(parent,h);\n                }\n                memcpy(parentlink,&new,sizeof(new));\n            }\n\n            /* If after the removal the node has just a single child\n             * and is not a key, we need to try to compress it. */\n            if (new->size == 1 && new->iskey == 0) {\n                trycompress = 1;\n                h = new;\n            }\n        }\n    } else if (h->size == 1) {\n        /* If the node had just one child, after the removal of the key\n         * further compression with adjacent nodes is potentially possible. */\n        trycompress = 1;\n    }\n\n    /* Don't try node compression if our node pointers stack is not\n     * complete because of OOM while executing raxLowWalk(). */\n    if (trycompress && ts.oom) trycompress = 0;\n\n    /* Recompression: if trycompress is true, 'h' points to a radix tree node\n     * that changed in a way that could allow compressing nodes in this\n     * sub-branch. 
Compressed nodes represent chains of nodes that are not\n     * keys and have a single child, so there are two deletion events that\n     * may alter the tree so that further compression is needed:\n     *\n     * 1) A node with a single child was a key and now no longer is a key.\n     * 2) A node with two children now has just one child.\n     *\n     * We try to navigate upward till there are other nodes that can be\n     * compressed; when we reach the upper node, which is not a key and has\n     * a single child, we scan the chain of children to collect the\n     * compressible part of the tree, and replace the current node with the\n     * new one, fixing the child pointer to reference the first non\n     * compressible node.\n     *\n     * Example of case \"1\". A tree stores the keys \"FOO\" = 1 and\n     * \"FOOBAR\" = 2:\n     *\n     *\n     * \"FOO\" -> \"BAR\" -> [] (2)\n     *           (1)\n     *\n     * After the removal of \"FOO\" the tree can be compressed as:\n     *\n     * \"FOOBAR\" -> [] (2)\n     *\n     *\n     * Example of case \"2\". A tree stores the keys \"FOOBAR\" = 1 and\n     * \"FOOTER\" = 2:\n     *\n     *          |B| -> \"AR\" -> [] (1)\n     * \"FOO\" -> |-|\n     *          |T| -> \"ER\" -> [] (2)\n     *\n     * After the removal of \"FOOTER\" the resulting tree is:\n     *\n     * \"FOO\" -> |B| -> \"AR\" -> [] (1)\n     *\n     * That can be compressed into:\n     *\n     * \"FOOBAR\" -> [] (1)\n     */\n    if (trycompress) {\n        debugf(\"After removing %.*s:\\n\", (int)len, s);\n        debugnode(\"Compression may be needed\",h);\n        debugf(\"Seek start node\\n\");\n\n        /* Try to reach the upper node that is compressible.\n         * At the end of the loop 'h' will point to the first node we\n         * can try to compress and 'parent' to its parent. 
*/\n        raxNode *parent;\n        while(1) {\n            parent = raxStackPop(&ts);\n            if (!parent || parent->iskey ||\n                (!parent->iscompr && parent->size != 1)) break;\n            h = parent;\n            debugnode(\"Going up to\",h);\n        }\n        raxNode *start = h; /* Compression starting node. */\n\n        /* Scan chain of nodes we can compress. */\n        size_t comprsize = h->size;\n        int nodes = 1;\n        while(h->size != 0) {\n            raxNode **cp = raxNodeLastChildPtr(h);\n            memcpy(&h,cp,sizeof(h));\n            if (h->iskey || (!h->iscompr && h->size != 1)) break;\n            /* Stop here if going to the next node would result in\n             * a compressed node larger than h->size can hold. */\n            if (comprsize + h->size > RAX_NODE_MAX_SIZE) break;\n            nodes++;\n            comprsize += h->size;\n        }\n        if (nodes > 1) {\n            /* If we can compress, create the new node and populate it. */\n            size_t nodesize =\n                sizeof(raxNode)+comprsize+raxPadding(comprsize)+sizeof(raxNode*);\n            size_t usable;\n            raxNode *new = rax_malloc_usable(nodesize, &usable);\n            /* An out of memory here just means we cannot optimize this\n             * node, but the tree is left in a consistent state. */\n            if (new == NULL) {\n                raxStackFree(&ts);\n                return 1;\n            }\n            if (rax->alloc_size) *rax->alloc_size += usable;\n            new->iskey = 0;\n            new->isnull = 0;\n            new->iscompr = 1;\n            new->size = comprsize;\n            rax->numnodes++;\n\n            /* Scan again, this time to populate the new node content and\n             * to fix the new node child pointer. At the same time we free\n             * all the nodes that we'll no longer use. 
*/\n            comprsize = 0;\n            h = start;\n            while(h->size != 0) {\n                memcpy(new->data+comprsize,h->data,h->size);\n                comprsize += h->size;\n                raxNode **cp = raxNodeLastChildPtr(h);\n                raxNode *tofree = h;\n                memcpy(&h,cp,sizeof(h));\n                raxFreeNode(rax,tofree);\n                rax->numnodes--;\n                if (h->iskey || (!h->iscompr && h->size != 1)) break;\n            }\n            debugnode(\"New node\",new);\n\n            /* Now 'h' points to the first node that we still need to use,\n             * so our new node child pointer will point to it. */\n            raxNode **cp = raxNodeLastChildPtr(new);\n            memcpy(cp,&h,sizeof(h));\n\n            /* Fix parent link. */\n            if (parent) {\n                raxNode **parentlink = raxFindParentLink(parent,start);\n                memcpy(parentlink,&new,sizeof(new));\n            } else {\n                rax->head = new;\n            }\n\n            debugf(\"Compressed %d nodes, %d total bytes\\n\",\n                nodes, (int)comprsize);\n        }\n    }\n    raxStackFree(&ts);\n    return 1;\n}\n\n/* This is the core of raxFree(): performs an iterative depth-first scan\n * of the tree and frees all the nodes found. Uses an explicit heap stack\n * to avoid stack overflow on deep trees. The caller passes exactly one\n * callback variant and the non-NULL one is invoked. 
*/\nstatic void raxFreeNodesWithCallback(rax *rax, raxNode *n,\n                                     void (*free_callback)(void *item),\n                                     void (*free_callback_withctx)(void *item, void *ctx),\n                                     void *ctx)\n{\n    raxStack stack;\n    raxStackInit(&stack);\n    raxStackPush(&stack, n);\n\n    while (stack.items > 0) {\n        raxNode *curr = raxStackPop(&stack);\n        debugnode(\"free traversing\",curr);\n        int numchildren = curr->iscompr ? 1 : curr->size;\n        raxNode **cp = raxNodeFirstChildPtr(curr);\n        for (int i = 0; i < numchildren; i++) {\n            raxNode *child;\n            memcpy(&child, cp + i, sizeof(child));\n            raxStackPush(&stack, child);\n        }\n        debugnode(\"free depth-first\",curr);\n        if (curr->iskey && !curr->isnull) {\n            void *data = raxGetData(curr);\n            if (free_callback_withctx)\n                free_callback_withctx(data, ctx);\n            else if (free_callback)\n                free_callback(data);\n        }\n        raxFreeNode(rax, curr);\n        rax->numnodes--;\n    }\n\n    raxStackFree(&stack);\n}\n\n/* Free a whole radix tree, calling the specified callback in order to\n * free the auxiliary data. */\nvoid raxFreeWithCallback(rax *rax, void (*free_callback)(void*)) {\n    raxFreeNodesWithCallback(rax, rax->head, free_callback, NULL, NULL);\n    assert(rax->numnodes == 0);\n    size_t *alloc_size = rax->alloc_size;\n    size_t usable;\n    rax_free_usable(rax, &usable);\n    if (alloc_size) *alloc_size -= usable;\n}\n\n/* Free a whole radix tree, calling the specified callback in order to\n * free the auxiliary data. 
*/\nvoid raxFreeWithCbAndContext(rax *rax,\n                             void (*free_callback)(void *item, void *ctx), void *ctx) {\n    raxFreeNodesWithCallback(rax, rax->head, NULL, free_callback, ctx);\n    assert(rax->numnodes == 0);\n    size_t *alloc_size = rax->alloc_size;\n    size_t usable;\n    rax_free_usable(rax, &usable);\n    if (alloc_size) *alloc_size -= usable;\n}\n\n/* Free a whole radix tree. */\nvoid raxFree(rax *rax) {\n    raxFreeWithCallback(rax,NULL);\n}\n\n/* ------------------------------- Iterator --------------------------------- */\n\n/* Initialize a Rax iterator. This call should be performed a single time\n * to initialize the iterator, and must be followed by a raxSeek() call,\n * otherwise the raxPrev()/raxNext() functions will just return EOF. */\nvoid raxStart(raxIterator *it, rax *rt) {\n    it->flags = RAX_ITER_EOF; /* No crash if the iterator is not seeked. */\n    it->rt = rt;\n    it->key_len = 0;\n    it->key = it->key_static_string;\n    it->key_max = RAX_ITER_STATIC_LEN;\n    it->data = NULL;\n    it->node_cb = NULL;\n    it->privdata = NULL;\n    raxStackInit(&it->stack);\n}\n\n/* Append characters at the current key string of the iterator 'it'. This\n * is a low level function used to implement the iterator, not callable by\n * the user. Returns 0 on out of memory, otherwise 1 is returned. */\nint raxIteratorAddChars(raxIterator *it, unsigned char *s, size_t len) {\n    if (len == 0) return 1;\n    if (it->key_max < it->key_len+len) {\n        unsigned char *old = (it->key == it->key_static_string) ? NULL :\n                                                                  it->key;\n        size_t new_max = (it->key_len+len)*2;\n        it->key = rax_realloc(old,new_max);\n        if (it->key == NULL) {\n            it->key = (!old) ? 
it->key_static_string : old;\n            errno = ENOMEM;\n            return 0;\n        }\n        if (old == NULL) memcpy(it->key,it->key_static_string,it->key_len);\n        it->key_max = new_max;\n    }\n    /* Use memmove since there could be an overlap between 's' and\n     * it->key when we use the current key in order to re-seek. */\n    memmove(it->key+it->key_len,s,len);\n    it->key_len += len;\n    return 1;\n}\n\n/* Remove the specified number of chars from the right of the current\n * iterator key. */\nvoid raxIteratorDelChars(raxIterator *it, size_t count) {\n    it->key_len -= count;\n}\n\n/* Do an iteration step towards the next element. At the end of the step the\n * iterator key will represent the (new) current key. If it is not possible\n * to step in the specified direction since there are no longer elements, the\n * iterator is flagged with RAX_ITER_EOF.\n *\n * If 'noup' is true the function starts directly scanning for the next\n * lexicographically smaller children, and the current node is already assumed\n * to be the parent of the last key node, so the first operation to go back to\n * the parent will be skipped. This option is used by raxSeek() when\n * implementing seeking a non existing element with the \">\" or \"<\" options:\n * the starting node is not a key in that particular case, so we start the scan\n * from a node that does not represent the key set.\n *\n * The function returns 1 on success or 0 on out of memory. */\nint raxIteratorNextStep(raxIterator *it, int noup) {\n    if (it->flags & RAX_ITER_EOF) {\n        return 1;\n    } else if (it->flags & RAX_ITER_JUST_SEEKED) {\n        it->flags &= ~RAX_ITER_JUST_SEEKED;\n        return 1;\n    }\n\n    /* Save key len, stack items and the node where we are currently\n     * so that on iterator EOF we can restore the current key and state. 
*/\n    size_t orig_key_len = it->key_len;\n    size_t orig_stack_items = it->stack.items;\n    raxNode *orig_node = it->node;\n\n    while(1) {\n        int children = it->node->iscompr ? 1 : it->node->size;\n        if (!noup && children) {\n            debugf(\"GO DEEPER\\n\");\n            /* Seek the lexicographically smaller key in this subtree, which\n             * is the first one found always going towards the first child\n             * of every successive node. */\n            if (!raxStackPush(&it->stack,it->node)) return 0;\n            raxNode **cp = raxNodeFirstChildPtr(it->node);\n            if (!raxIteratorAddChars(it,it->node->data,\n                it->node->iscompr ? it->node->size : 1)) return 0;\n            memcpy(&it->node,cp,sizeof(it->node));\n            /* Call the node callback if any, and replace the node pointer\n             * if the callback returns true. */\n            if (it->node_cb && it->node_cb(&it->node, it->privdata))\n                memcpy(cp,&it->node,sizeof(it->node));\n            /* For \"next\" step, stop every time we find a key along the\n             * way, since the key is lexicographically smaller compared to\n             * what follows in the sub-children. */\n            if (it->node->iskey) {\n                it->data = raxGetData(it->node);\n                return 1;\n            }\n        } else {\n            /* If we finished exploring the previous sub-tree, switch to the\n             * new one: go upper until a node is found where there are\n             * children representing keys lexicographically greater than the\n             * current key. */\n            while(1) {\n                int old_noup = noup;\n\n                /* Already on head? Can't go up, iteration finished. 
*/\n                if (!noup && it->node == it->rt->head) {\n                    it->flags |= RAX_ITER_EOF;\n                    it->stack.items = orig_stack_items;\n                    it->key_len = orig_key_len;\n                    it->node = orig_node;\n                    return 1;\n                }\n                /* If there are no children at the current node, try parent's\n                 * next child. */\n                unsigned char prevchild = it->key[it->key_len-1];\n                if (!noup) {\n                    it->node = raxStackPop(&it->stack);\n                } else {\n                    noup = 0;\n                }\n                /* Adjust the current key to represent the node we are\n                 * at. */\n                int todel = it->node->iscompr ? it->node->size : 1;\n                raxIteratorDelChars(it,todel);\n\n                /* Try visiting the next child if there was at least one\n                 * additional child. */\n                if (!it->node->iscompr && it->node->size > (old_noup ? 0 : 1)) {\n                    raxNode **cp = raxNodeFirstChildPtr(it->node);\n                    int i = 0;\n                    while (i < it->node->size) {\n                        debugf(\"SCAN NEXT %c\\n\", it->node->data[i]);\n                        if (it->node->data[i] > prevchild) break;\n                        i++;\n                        cp++;\n                    }\n                    if (i != it->node->size) {\n                        debugf(\"SCAN found a new node\\n\");\n                        raxIteratorAddChars(it,it->node->data+i,1);\n                        if (!raxStackPush(&it->stack,it->node)) return 0;\n                        memcpy(&it->node,cp,sizeof(it->node));\n                        /* Call the node callback if any, and replace the node\n                         * pointer if the callback returns true. 
*/\n                        if (it->node_cb && it->node_cb(&it->node, it->privdata))\n                            memcpy(cp,&it->node,sizeof(it->node));\n                        if (it->node->iskey) {\n                            it->data = raxGetData(it->node);\n                            return 1;\n                        }\n                        break;\n                    }\n                }\n            }\n        }\n    }\n}\n\n/* Seek the greatest key in the subtree at the current node. Return 0 on\n * out of memory, otherwise 1. This is a helper function for different\n * iteration functions below. */\nint raxSeekGreatest(raxIterator *it) {\n    while(it->node->size) {\n        if (it->node->iscompr) {\n            if (!raxIteratorAddChars(it,it->node->data,\n                it->node->size)) return 0;\n        } else {\n            if (!raxIteratorAddChars(it,it->node->data+it->node->size-1,1))\n                return 0;\n        }\n        raxNode **cp = raxNodeLastChildPtr(it->node);\n        if (!raxStackPush(&it->stack,it->node)) return 0;\n        memcpy(&it->node,cp,sizeof(it->node));\n    }\n    return 1;\n}\n\n/* Like raxIteratorNextStep() but implements an iteration step moving\n * to the lexicographically previous element. The 'noup' option has a similar\n * effect to the one of raxIteratorNextStep(). */\nint raxIteratorPrevStep(raxIterator *it, int noup) {\n    if (it->flags & RAX_ITER_EOF) {\n        return 1;\n    } else if (it->flags & RAX_ITER_JUST_SEEKED) {\n        it->flags &= ~RAX_ITER_JUST_SEEKED;\n        return 1;\n    }\n\n    /* Save key len, stack items and the node where we are currently\n     * so that on iterator EOF we can restore the current key and state. */\n    size_t orig_key_len = it->key_len;\n    size_t orig_stack_items = it->stack.items;\n    raxNode *orig_node = it->node;\n\n    while(1) {\n        int old_noup = noup;\n\n        /* Already on head? Can't go up, iteration finished. 
*/\n        if (!noup && it->node == it->rt->head) {\n            it->flags |= RAX_ITER_EOF;\n            it->stack.items = orig_stack_items;\n            it->key_len = orig_key_len;\n            it->node = orig_node;\n            return 1;\n        }\n\n        unsigned char prevchild = it->key[it->key_len-1];\n        if (!noup) {\n            it->node = raxStackPop(&it->stack);\n        } else {\n            noup = 0;\n        }\n\n        /* Adjust the current key to represent the node we are\n         * at. */\n        int todel = it->node->iscompr ? it->node->size : 1;\n        raxIteratorDelChars(it,todel);\n\n        /* Try visiting the prev child if there is at least one\n         * child. */\n        if (!it->node->iscompr && it->node->size > (old_noup ? 0 : 1)) {\n            raxNode **cp = raxNodeLastChildPtr(it->node);\n            int i = it->node->size-1;\n            while (i >= 0) {\n                debugf(\"SCAN PREV %c\\n\", it->node->data[i]);\n                if (it->node->data[i] < prevchild) break;\n                i--;\n                cp--;\n            }\n            /* If we found a new subtree to explore in this node,\n             * go deeper following all the last children in order to\n             * find the key lexicographically greater. */\n            if (i != -1) {\n                debugf(\"SCAN found a new node\\n\");\n                /* Enter the node we just found. */\n                if (!raxIteratorAddChars(it,it->node->data+i,1)) return 0;\n                if (!raxStackPush(&it->stack,it->node)) return 0;\n                memcpy(&it->node,cp,sizeof(it->node));\n                /* Seek sub-tree max. */\n                if (!raxSeekGreatest(it)) return 0;\n            }\n        }\n\n        /* Return the key: this could be the key we found scanning a new\n         * subtree, or if we did not find a new subtree to explore here,\n         * before giving up with this node, check if it's a key itself. 
*/\n        if (it->node->iskey) {\n            it->data = raxGetData(it->node);\n            return 1;\n        }\n    }\n}\n\n/* Seek an iterator at the specified element.\n * Return 0 if the seek failed for syntax error or out of memory. Otherwise\n * 1 is returned. When 0 is returned for out of memory, errno is set to\n * the ENOMEM value. */\nint raxSeek(raxIterator *it, const char *op, unsigned char *ele, size_t len) {\n    int eq = 0, lt = 0, gt = 0, first = 0, last = 0;\n\n    it->stack.items = 0; /* Just resetting. Initialized by raxStart(). */\n    it->flags |= RAX_ITER_JUST_SEEKED;\n    it->flags &= ~RAX_ITER_EOF;\n    it->key_len = 0;\n    it->node = NULL;\n\n    /* Set flags according to the operator used to perform the seek. */\n    if (op[0] == '>') {\n        gt = 1;\n        if (op[1] == '=') eq = 1;\n    } else if (op[0] == '<') {\n        lt = 1;\n        if (op[1] == '=') eq = 1;\n    } else if (op[0] == '=') {\n        eq = 1;\n    } else if (op[0] == '^') {\n        first = 1;\n    } else if (op[0] == '$') {\n        last = 1;\n    } else {\n        errno = 0;\n        return 0; /* Error. */\n    }\n\n    /* If there are no elements, set the EOF condition immediately and\n     * return. */\n    if (it->rt->numele == 0) {\n        it->flags |= RAX_ITER_EOF;\n        return 1;\n    }\n\n    if (first) {\n        /* Seeking the first key greater or equal to the empty string\n         * is equivalent to seeking the smaller key available. */\n        return raxSeek(it,\">=\",NULL,0);\n    }\n\n    if (last) {\n        /* Find the greatest key taking always the last child till a\n         * final node is found. */\n        it->node = it->rt->head;\n        if (!raxSeekGreatest(it)) return 0;\n        assert(it->node->iskey);\n        it->data = raxGetData(it->node);\n        return 1;\n    }\n\n    /* We need to seek the specified key. 
What we do here is to actually\n     * perform a lookup, and later invoke the prev/next key code that\n     * we already use for iteration. */\n    int splitpos = 0;\n    size_t i = raxLowWalk(it->rt,ele,len,&it->node,NULL,&splitpos,&it->stack);\n\n    /* Return OOM on incomplete stack info. */\n    if (it->stack.oom) return 0;\n\n    if (eq && i == len && (!it->node->iscompr || splitpos == 0) &&\n        it->node->iskey)\n    {\n        /* We found our node, since the key matches and we have an\n         * \"equal\" condition. */\n        if (!raxIteratorAddChars(it,ele,len)) return 0; /* OOM. */\n        it->data = raxGetData(it->node);\n    } else if (lt || gt) {\n        /* Exact key not found or eq flag not set. We have to set as current\n         * key the one represented by the node we stopped at, and perform\n         * a next/prev operation to seek. */\n        raxIteratorAddChars(it, ele, i-splitpos);\n\n        /* We need to set the iterator in the correct state to call next/prev\n         * step in order to seek the desired element. */\n        debugf(\"After initial seek: i=%d len=%d key=%.*s\\n\",\n            (int)i, (int)len, (int)it->key_len, it->key);\n        if (i != len && !it->node->iscompr) {\n            /* If we stopped in the middle of a normal node because of a\n             * mismatch, add the mismatching character to the current key\n             * and call the iterator with the 'noup' flag so that it will try\n             * to seek the next/prev child in the current node directly based\n             * on the mismatching character. 
*/\n            if (!raxIteratorAddChars(it,ele+i,1)) return 0;\n            debugf(\"Seek normal node on mismatch: %.*s\\n\",\n                (int)it->key_len, (char*)it->key);\n\n            it->flags &= ~RAX_ITER_JUST_SEEKED;\n            if (lt && !raxIteratorPrevStep(it,1)) return 0;\n            if (gt && !raxIteratorNextStep(it,1)) return 0;\n            it->flags |= RAX_ITER_JUST_SEEKED; /* Ignore next call. */\n        } else if (i != len && it->node->iscompr) {\n            debugf(\"Compressed mismatch: %.*s\\n\",\n                (int)it->key_len, (char*)it->key);\n            /* In case of a mismatch within a compressed node. */\n            int nodechar = it->node->data[splitpos];\n            int keychar = ele[i];\n            it->flags &= ~RAX_ITER_JUST_SEEKED;\n            if (gt) {\n                /* If the key the compressed node represents is greater\n                 * than our seek element, continue forward, otherwise set the\n                 * state in order to go back to the next sub-tree. */\n                if (nodechar > keychar) {\n                    if (!raxIteratorNextStep(it,0)) return 0;\n                } else {\n                    if (!raxIteratorAddChars(it,it->node->data,it->node->size))\n                        return 0;\n                    if (!raxIteratorNextStep(it,1)) return 0;\n                }\n            }\n            if (lt) {\n                /* If the key the compressed node represents is smaller\n                 * than our seek element, seek the greater key in this\n                 * subtree, otherwise set the state in order to go back to\n                 * the previous sub-tree. 
*/\n                if (nodechar < keychar) {\n                    if (!raxSeekGreatest(it)) return 0;\n                    it->data = raxGetData(it->node);\n                } else {\n                    if (!raxIteratorAddChars(it,it->node->data,it->node->size))\n                        return 0;\n                    if (!raxIteratorPrevStep(it,1)) return 0;\n                }\n            }\n            it->flags |= RAX_ITER_JUST_SEEKED; /* Ignore next call. */\n        } else {\n            debugf(\"No mismatch: %.*s\\n\",\n                (int)it->key_len, (char*)it->key);\n            /* If there was no mismatch we are into a node representing the\n             * key, (but which is not a key or the seek operator does not\n             * include 'eq'), or we stopped in the middle of a compressed node\n             * after processing all the key. Continue iterating as this was\n             * a legitimate key we stopped at. */\n            it->flags &= ~RAX_ITER_JUST_SEEKED;\n            if (it->node->iscompr && it->node->iskey && splitpos && lt) {\n                /* If we stopped in the middle of a compressed node with\n                 * perfect match, and the condition is to seek a key \"<\" than\n                 * the specified one, then if this node is a key it already\n                 * represents our match. For instance we may have nodes:\n                 *\n                 * \"f\" -> \"oobar\" = 1 -> \"\" = 2\n                 *\n                 * Representing keys \"f\" = 1, \"foobar\" = 2. A seek for\n                 * the key < \"foo\" will stop in the middle of the \"oobar\"\n                 * node, but will be our match, representing the key \"f\".\n                 *\n                 * So in that case, we don't seek backward. 
*/\n                it->data = raxGetData(it->node);\n            } else {\n                if (gt && !raxIteratorNextStep(it,0)) return 0;\n                if (lt && !raxIteratorPrevStep(it,0)) return 0;\n            }\n            it->flags |= RAX_ITER_JUST_SEEKED; /* Ignore next call. */\n        }\n    } else {\n        /* If we are here just eq was set but no match was found. */\n        it->flags |= RAX_ITER_EOF;\n        return 1;\n    }\n    return 1;\n}\n\n/* Go to the next element in the scope of the iterator 'it'.\n * If EOF (or out of memory) is reached, 0 is returned, otherwise 1 is\n * returned. In case 0 is returned because of OOM, errno is set to ENOMEM. */\nint raxNext(raxIterator *it) {\n    if (!raxIteratorNextStep(it,0)) {\n        errno = ENOMEM;\n        return 0;\n    }\n    if (it->flags & RAX_ITER_EOF) {\n        errno = 0;\n        return 0;\n    }\n    return 1;\n}\n\n/* Go to the previous element in the scope of the iterator 'it'.\n * If EOF (or out of memory) is reached, 0 is returned, otherwise 1 is\n * returned. In case 0 is returned because of OOM, errno is set to ENOMEM. */\nint raxPrev(raxIterator *it) {\n    if (!raxIteratorPrevStep(it,0)) {\n        errno = ENOMEM;\n        return 0;\n    }\n    if (it->flags & RAX_ITER_EOF) {\n        errno = 0;\n        return 0;\n    }\n    return 1;\n}\n\n/* Perform a random walk starting in the current position of the iterator.\n * Return 0 if the tree is empty or on out of memory. Otherwise 1 is returned\n * and the iterator is set to the node reached after doing a random walk\n * of 'steps' steps. If the 'steps' argument is 0, the random walk is performed\n * using a random number of steps between 1 and two times the logarithm of\n * the number of elements.\n *\n * NOTE: if you use this function to generate random elements from the radix\n * tree, expect a disappointing distribution. 
A random walk produces good\n * random elements if the tree is not sparse, however in the case of a radix\n * tree certain keys will be reported much more often than others. At least\n * this function should be able to explore every possible element eventually. */\nint raxRandomWalk(raxIterator *it, size_t steps) {\n    if (it->rt->numele == 0) {\n        it->flags |= RAX_ITER_EOF;\n        return 0;\n    }\n\n    if (steps == 0) {\n        size_t fle = 1+floor(log(it->rt->numele));\n        fle *= 2;\n        steps = 1 + rand() % fle;\n    }\n\n    raxNode *n = it->node;\n    while(steps > 0 || !n->iskey) {\n        int numchildren = n->iscompr ? 1 : n->size;\n        int r = rand() % (numchildren+(n != it->rt->head));\n\n        if (r == numchildren) {\n            /* Go up to parent. */\n            n = raxStackPop(&it->stack);\n            int todel = n->iscompr ? n->size : 1;\n            raxIteratorDelChars(it,todel);\n        } else {\n            /* Select a random child. */\n            if (n->iscompr) {\n                if (!raxIteratorAddChars(it,n->data,n->size)) return 0;\n            } else {\n                if (!raxIteratorAddChars(it,n->data+r,1)) return 0;\n            }\n            raxNode **cp = raxNodeFirstChildPtr(n)+r;\n            if (!raxStackPush(&it->stack,n)) return 0;\n            memcpy(&n,cp,sizeof(n));\n        }\n        if (n->iskey) steps--;\n    }\n    it->node = n;\n    it->data = raxGetData(it->node);\n    return 1;\n}\n\n/* Compare the key currently pointed by the iterator to the specified\n * key according to the specified operator. Returns 1 if the comparison is\n * true, otherwise 0 is returned. */\nint raxCompare(raxIterator *iter, const char *op, unsigned char *key, size_t key_len) {\n    int eq = 0, lt = 0, gt = 0;\n\n    if (op[0] == '=' || op[1] == '=') eq = 1;\n    if (op[0] == '>') gt = 1;\n    else if (op[0] == '<') lt = 1;\n    else if (op[1] != '=') return 0; /* Syntax error. 
*/\n\n    size_t minlen = key_len < iter->key_len ? key_len : iter->key_len;\n    int cmp = memcmp(iter->key,key,minlen);\n\n    /* Handle == */\n    if (lt == 0 && gt == 0) return cmp == 0 && key_len == iter->key_len;\n\n    /* Handle >, >=, <, <= */\n    if (cmp == 0) {\n        /* Same prefix: longer wins. */\n        if (eq && key_len == iter->key_len) return 1;\n        else if (lt) return iter->key_len < key_len;\n        else if (gt) return iter->key_len > key_len;\n        else return 0; /* Avoid warning, just 'eq' is handled before. */\n    } else if (cmp > 0) {\n        return gt ? 1 : 0;\n    } else /* (cmp < 0) */ {\n        return lt ? 1 : 0;\n    }\n}\n\n/* Free the iterator. */\nvoid raxStop(raxIterator *it) {\n    if (it->key != it->key_static_string) rax_free(it->key);\n    raxStackFree(&it->stack);\n}\n\n/* Return if the iterator is in an EOF state. This happens when raxSeek()\n * failed to seek an appropriate element, so that raxNext() or raxPrev()\n * will return zero, or when an EOF condition was reached while iterating\n * with raxNext() and raxPrev(). */\nint raxEOF(raxIterator *it) {\n    return it->flags & RAX_ITER_EOF;\n}\n\n/* Return the number of elements inside the radix tree. 
*/\nuint64_t raxSize(rax *rax) {\n    return rax->numele;\n}\n\n/* ----------------------------- Introspection ------------------------------ */\n\n/* This function is mostly used for debugging and learning purposes.\n * It shows an ASCII representation of a tree on standard output, outline\n * all the nodes and the contained keys.\n *\n * The representation is as follow:\n *\n *  \"foobar\" (compressed node)\n *  [abc] (normal node with three children)\n *  [abc]=0x12345678 (node is a key, pointing to value 0x12345678)\n *  [] (a normal empty node)\n *\n *  Children are represented in new indented lines, each children prefixed by\n *  the \"`-(x)\" string, where \"x\" is the edge byte.\n *\n *  [abc]\n *   `-(a) \"ladin\"\n *   `-(b) [kj]\n *   `-(c) []\n *\n *  However when a node has a single child the following representation\n *  is used instead:\n *\n *  [abc] -> \"ladin\" -> []\n */\n\n/* The actual implementation of raxShow(). */\nvoid raxRecursiveShow(int level, int lpad, raxNode *n) {\n    char s = n->iscompr ? '\"' : '[';\n    char e = n->iscompr ? '\"' : ']';\n\n    int numchars = printf(\"%c%.*s%c\", s, n->size, n->data, e);\n    if (n->iskey) {\n        numchars += printf(\"=%p\",raxGetData(n));\n    }\n\n    int numchildren = n->iscompr ? 1 : n->size;\n    /* Note that 7 and 4 magic constants are the string length\n     * of \" `-(x) \" and \" -> \" respectively. */\n    if (level) {\n        lpad += (numchildren > 1) ? 
7 : 4;\n        if (numchildren == 1) lpad += numchars;\n    }\n    raxNode **cp = raxNodeFirstChildPtr(n);\n    for (int i = 0; i < numchildren; i++) {\n        char *branch = \" `-(%c) \";\n        if (numchildren > 1) {\n            printf(\"\\n\");\n            for (int j = 0; j < lpad; j++) putchar(' ');\n            printf(branch,n->data[i]);\n        } else {\n            printf(\" -> \");\n        }\n        raxNode *child;\n        memcpy(&child,cp,sizeof(child));\n        raxRecursiveShow(level+1,lpad,child);\n        cp++;\n    }\n}\n\n/* Show a tree, as outlined in the comment above. */\nvoid raxShow(rax *rax) {\n    raxRecursiveShow(0,0,rax->head);\n    putchar('\\n');\n}\n\n/* Used by debugnode() macro to show info about a given node. */\nvoid raxDebugShowNode(const char *msg, raxNode *n) {\n    if (raxDebugMsg == 0) return;\n    printf(\"%s: %p [%.*s] key:%u size:%u children:\",\n        msg, (void*)n, (int)n->size, (char*)n->data, n->iskey, n->size);\n    int numcld = n->iscompr ? 1 : n->size;\n    raxNode **cldptr = raxNodeLastChildPtr(n) - (numcld-1);\n    while(numcld--) {\n        raxNode *child;\n        memcpy(&child,cldptr,sizeof(child));\n        cldptr++;\n        printf(\"%p \", (void*)child);\n    }\n    printf(\"\\n\");\n    fflush(stdout);\n}\n\n/* Touch all the nodes of a tree returning a check sum. This is useful\n * in order to make Valgrind detect if there is something wrong while\n * reading the data structure.\n *\n * This function was used in order to identify Rax bugs after a big refactoring\n * using this technique:\n *\n * 1. The rax-test is executed using Valgrind, adding a printf() so that for\n *    the fuzz tester we see what iteration in the loop we are in.\n * 2. After every modification of the radix tree made by the fuzz tester\n *    in rax-test.c, we add a call to raxTouch().\n * 3. Now as soon as an operation will corrupt the tree, raxTouch() will\n *    detect it (via Valgrind) immediately. 
We can add more calls to narrow\n *    the state.\n * 4. At this point a good idea is to enable Rax debugging messages immediately\n *    before the moment the tree is corrupted, to see what happens.\n */\nunsigned long raxTouch(raxNode *n) {\n    debugf(\"Touching %p\\n\", (void*)n);\n    unsigned long sum = 0;\n    if (n->iskey) {\n        sum += (unsigned long)raxGetData(n);\n    }\n\n    int numchildren = n->iscompr ? 1 : n->size;\n    raxNode **cp = raxNodeFirstChildPtr(n);\n    int count = 0;\n    for (int i = 0; i < numchildren; i++) {\n        if (numchildren > 1) {\n            sum += (long)n->data[i];\n        }\n        raxNode *child;\n        memcpy(&child,cp,sizeof(child));\n        if (child == (void*)0x65d1760) count++;\n        if (count > 1) exit(1);\n        sum += raxTouch(child);\n        cp++;\n    }\n    return sum;\n}\n\n/* The rest of this file is test cases and test helpers. */\n#ifdef REDIS_TEST\n#include \"testhelp.h\"\n#include <stdlib.h>\n\n#define UNUSED(x) (void)(x)\n\n#define yell(str, ...) printf(\"ERROR! \" str \"\\n\\n\", __VA_ARGS__)\n\n#define ERR(x, ...)                                                            \\\n    do {                                                                       \\\n        printf(\"%s:%s:%d:\\t\", __FILE__, __func__, __LINE__);                   \\\n        printf(\"ERROR! 
\" x \"\\n\", __VA_ARGS__);                                 \\\n        err++;                                                                 \\\n    } while (0)\n\n#define TEST(name) printf(\"test — %s\\n\", name);\n\n/* Verify rax memory accounting by calculating actual memory usage */\nstatic int _rax_verify_alloc_size(rax *rax, size_t have) {\n    int errors = 0;\n    size_t want = rax_malloc_usable_size(rax);\n    raxNode *node;\n    raxStack stack;\n\n    raxStackInit(&stack);\n    raxStackPush(&stack, rax->head);\n    while ((node = raxStackPop(&stack))) {\n        want += rax_malloc_usable_size(node);\n        if (!node->iscompr) {\n            /* Non-compressed node: add all children */\n            for (int i = 0; i < node->size; i++) {\n                raxNode **child = raxNodeLastChildPtr(node) - i;\n                if (*child) raxStackPush(&stack, *child);\n            }\n        } else {\n            /* Compressed node: add single child */\n            raxNode **child = raxNodeLastChildPtr(node);\n            if (*child) raxStackPush(&stack, *child);\n        }\n    }\n    raxStackFree(&stack);\n\n    if (want != have) {\n        yell(\"rax alloc_size is wrong: want: %zu, have: %zu\\n\", want, have);\n        errors++;\n    }\n\n    return errors;\n}\n\nstatic void *createTestValue(size_t size) {\n    size_t usable;\n    void *val = rax_malloc_usable(size, &usable);\n    memset(val, 'A', usable);\n    return val;\n}\n\nint raxTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    int err = 0;\n\n    TEST(\"verify raxAllocSize() after raxInsert()/raxRemove()\") {\n        size_t alloc_size = 0;\n        rax *r = raxNewWithMetadata(0, &alloc_size);\n\n        /* Insert values and verify accounting */\n        void *val1 = createTestValue(100);\n        assert(raxInsert(r, (unsigned char*)\"key1\", 4, val1, NULL) == 1);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        void *val2 = 
createTestValue(200);\n        assert(raxInsert(r, (unsigned char*)\"key2\", 4, val2, NULL) == 1);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        void *val3 = createTestValue(10);\n        assert(raxInsert(r, (unsigned char*)\"3yek\", 4, val3, NULL) == 1);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        /* Remove a value and verify */\n        void *removed;\n        assert(raxRemove(r, (unsigned char*)\"key1\", 4, &removed) == 1);\n        rax_free(removed);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        raxFreeWithCallback(r, rax_free);\n    }\n\n    TEST(\"verify raxAllocSize() when replacing existing key\") {\n        size_t alloc_size = 0;\n        rax *r = raxNewWithMetadata(0, &alloc_size);\n\n        void *val1 = createTestValue(100);\n        assert(raxInsert(r, (unsigned char*)\"key\", 3, val1, NULL) == 1);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        /* Update with different sized value */\n        void *val2 = createTestValue(300);\n        void *old;\n        assert(raxInsert(r, (unsigned char*)\"key\", 3, val2, &old) == 0);\n        rax_free(old);\n        err += _rax_verify_alloc_size(r, alloc_size);\n\n        raxFreeWithCallback(r, rax_free);\n    }\n\n    if (!err)\n        printf(\"ALL TESTS PASSED!\\n\");\n    else\n        ERR(\"Sorry, not all tests passed!  In fact, %d tests failed.\", err);\n\n    return err;\n}\n\n#endif\n"
  },
  {
    "path": "src/rax.h",
    "content": "/* Rax -- A radix tree implementation.\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef RAX_H\n#define RAX_H\n\n#include <stdint.h>\n\n/* Representation of a radix tree as implemented in this file, that contains\n * the strings \"foo\", \"foobar\" and \"footer\" after the insertion of each\n * word. When the node represents a key inside the radix tree, we write it\n * between [], otherwise it is written between ().\n *\n * This is the vanilla representation:\n *\n *              (f) \"\"\n *                \\\n *                (o) \"f\"\n *                  \\\n *                  (o) \"fo\"\n *                    \\\n *                  [t   b] \"foo\"\n *                  /     \\\n *         \"foot\" (e)     (a) \"foob\"\n *                /         \\\n *      \"foote\" (r)         (r) \"fooba\"\n *              /             \\\n *    \"footer\" []             [] \"foobar\"\n *\n * However, this implementation implements a very common optimization where\n * successive nodes having a single child are \"compressed\" into the node\n * itself as a string of characters, each representing a next-level child,\n * and only the link to the node representing the last character node is\n * provided inside the representation. 
So the above representation is turned\n * into:\n *\n *                  [\"foo\"] \"\"\n *                     |\n *                  [t   b] \"foo\"\n *                  /     \\\n *        \"foot\" (\"er\")    (\"ar\") \"foob\"\n *                 /          \\\n *       \"footer\" []          [] \"foobar\"\n *\n * However this optimization makes the implementation a bit more complex.\n * For instance if a key \"first\" is added in the above radix tree, a\n * \"node splitting\" operation is needed, since the \"foo\" prefix is no longer\n * composed of nodes having a single child one after the other. This is the\n * above tree and the resulting node splitting after this event happens:\n *\n *\n *                    (f) \"\"\n *                    /\n *                 (i o) \"f\"\n *                 /   \\\n *    \"firs\"  (\"rst\")  (o) \"fo\"\n *              /        \\\n *    \"first\" []       [t   b] \"foo\"\n *                     /     \\\n *           \"foot\" (\"er\")    (\"ar\") \"foob\"\n *                    /          \\\n *          \"footer\" []          [] \"foobar\"\n *\n * Similarly after deletion, if a new chain of nodes having a single child\n * is created (the chain must also not include nodes that represent keys),\n * it must be compressed back into a single node.\n *\n */\n\n#define RAX_NODE_MAX_SIZE ((1<<29)-1)\ntypedef struct raxNode {\n    uint32_t iskey:1;     /* Does this node contain a key? */\n    uint32_t isnull:1;    /* Associated value is NULL (don't store it). */\n    uint32_t iscompr:1;   /* Node is compressed. */\n    uint32_t size:29;     /* Number of children, or compressed string len. 
*/\n    /* Data layout is as follows:\n     *\n     * If the node is not compressed we have 'size' bytes, one for each child\n     * character, and 'size' raxNode pointers, pointing to each child node.\n     * Note how the character is not stored in the children but in the\n     * edge of the parents:\n     *\n     * [header iscompr=0][abc][a-ptr][b-ptr][c-ptr](value-ptr?)\n     *\n     * If the node is compressed (iscompr bit is 1) the node has 1 child.\n     * In that case the 'size' bytes of the string stored immediately at\n     * the start of the data section represent a sequence of successive\n     * nodes linked one after the other, of which only the last one in\n     * the sequence is actually represented as a node, and pointed to by\n     * the current compressed node.\n     *\n     * [header iscompr=1][xyz][z-ptr](value-ptr?)\n     *\n     * Both compressed and not compressed nodes can represent a key\n     * with associated data in the radix tree at any level (not just terminal\n     * nodes).\n     *\n     * If the node has an associated key (iskey=1) and is not NULL\n     * (isnull=0), then after the raxNode pointers pointing to the\n     * children, an additional value pointer is present (shown as the\n     * \"value-ptr\" field in the representation above).\n     */\n    unsigned char data[];\n} raxNode;\n\ntypedef struct rax {\n    raxNode *head;\n    uint64_t numele;\n    uint64_t numnodes;\n    size_t *alloc_size;\n    void *metadata[];\n} rax;\n\n/* Stack data structure used by raxLowWalk() in order to, optionally, return\n * a list of parent nodes to the caller. The nodes do not have a \"parent\"\n * field for space concerns, so we use the auxiliary stack when needed. */\n#define RAX_STACK_STATIC_ITEMS 32\ntypedef struct raxStack {\n    void **stack; /* Points to static_items or a heap-allocated array. */\n    size_t items, maxitems; /* Number of items contained and total space. 
*/\n    /* Up to RAX_STACK_STATIC_ITEMS items we avoid allocating on the heap\n     * and use this static array of pointers instead. */\n    void *static_items[RAX_STACK_STATIC_ITEMS];\n    int oom; /* True if pushing into this stack failed for OOM at some point. */\n} raxStack;\n\n/* Optional callback used by iterators to be notified on each rax node,\n * including nodes not representing keys. If the callback returns true,\n * it changed the node pointer in the iterator structure, and the\n * iterator implementation will have to replace the pointer in the radix tree\n * internals. This allows the callback to reallocate the node to perform\n * very special operations, normally not needed by normal applications.\n *\n * This callback is used to perform very low level analysis of the radix tree\n * structure, scanning each possible node (except the root node), or in order to\n * reallocate the nodes to reduce the allocation fragmentation (this is the\n * Redis application for this callback).\n *\n * This is currently only supported in forward iterations (raxNext). */\ntypedef int (*raxNodeCallback)(raxNode **noderef, void *privdata);\n\n/* Radix tree iterator state is encapsulated into this data structure. */\n#define RAX_ITER_STATIC_LEN 128\n#define RAX_ITER_JUST_SEEKED (1<<0) /* Iterator was just seeked. Return current\n                                       element for the first iteration and\n                                       clear the flag. */\n#define RAX_ITER_EOF (1<<1)    /* End of iteration reached. */\n#define RAX_ITER_SAFE (1<<2)   /* Safe iterator, allows operations while\n                                  iterating. But it is slower. */\ntypedef struct raxIterator {\n    int flags;\n    rax *rt;                /* Radix tree we are iterating. */\n    unsigned char *key;     /* The current string. */\n    void *data;             /* Data associated to this key. */\n    size_t key_len;         /* Current key length. 
*/\n    size_t key_max;         /* Max key len the current key buffer can hold. */\n    unsigned char key_static_string[RAX_ITER_STATIC_LEN];\n    raxNode *node;          /* Current node. Only for unsafe iteration. */\n    raxStack stack;         /* Stack used for unsafe iteration. */\n    raxNodeCallback node_cb; /* Optional node callback. Normally set to NULL. */\n    void *privdata;         /* Optional private data for node callback. */\n} raxIterator;\n\n/* Exported API. */\nrax *raxNew(void);\nrax *raxNewWithMetadata(int metaSize, size_t *alloc_size);\nint raxInsert(rax *rax, unsigned char *s, size_t len, void *data, void **old);\nint raxTryInsert(rax *rax, unsigned char *s, size_t len, void *data, void **old);\nint raxRemove(rax *rax, unsigned char *s, size_t len, void **old);\nint raxFind(rax *rax, unsigned char *s, size_t len, void **value);\nvoid raxFree(rax *rax);\nvoid raxFreeWithCallback(rax *rax, void (*free_callback)(void*));\nvoid raxFreeWithCbAndContext(rax *rax,\n                             void (*free_callback)(void *item, void *ctx),\n                             void *ctx);\nvoid raxStart(raxIterator *it, rax *rt);\nint raxSeek(raxIterator *it, const char *op, unsigned char *ele, size_t len);\nint raxNext(raxIterator *it);\nint raxPrev(raxIterator *it);\nint raxRandomWalk(raxIterator *it, size_t steps);\nint raxCompare(raxIterator *iter, const char *op, unsigned char *key, size_t key_len);\nvoid raxStop(raxIterator *it);\nint raxEOF(raxIterator *it);\nvoid raxShow(rax *rax);\nuint64_t raxSize(rax *rax);\nunsigned long raxTouch(raxNode *n);\nvoid raxSetDebugMsg(int onoff);\n\n/* Internal API. May be used by the node callback in order to access rax nodes\n * in a low level way, so this function is exported as well. */\nvoid raxSetData(raxNode *n, void *data);\n\n#ifdef REDIS_TEST\nint raxTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/rax_malloc.h",
    "content": "/* Rax -- A radix tree implementation.\n *\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* Allocator selection.\n *\n * This file is used in order to change the Rax allocator at compile time.\n * Just define the following defines to what you want to use. Also add\n * the include of your alternate allocator if needed (not needed in order\n * to use the default libc allocator). */\n\n#ifndef RAX_ALLOC_H\n#define RAX_ALLOC_H\n#include \"zmalloc.h\"\n#define rax_malloc zmalloc\n#define rax_realloc zrealloc\n#define rax_free zfree\n#define rax_malloc_usable zmalloc_usable\n#define rax_realloc_usable zrealloc_usable\n#define rax_free_usable zfree_usable\n#define rax_malloc_usable_size zmalloc_usable_size\n#endif\n"
  },
  {
    "path": "src/rdb.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"lzf.h\"    /* LZF compression library */\n#include \"zipmap.h\"\n#include \"endianconv.h\"\n#include \"fpconv_dtoa.h\"\n#include \"stream.h\"\n#include \"functions.h\"\n#include \"intset.h\"  /* Compact integer set structure */\n#include \"bio.h\"\n#include \"cluster_asm.h\"\n#include \"keymeta.h\"\n\n#include <math.h>\n#include <fcntl.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <sys/wait.h>\n#include <arpa/inet.h>\n#include <sys/stat.h>\n#include <sys/param.h>\n\n/* This macro is called when the internal RDB structure is corrupt */\n#define rdbReportCorruptRDB(...) rdbReportError(1, __LINE__,__VA_ARGS__)\n/* This macro is called when RDB read failed (possibly a short read) */\n#define rdbReportReadError(...) rdbReportError(0, __LINE__,__VA_ARGS__)\n\n/* This macro tells if we are in the context of a RESTORE command, and not loading an RDB or AOF. */\n#define isRestoreContext() \\\n    ((server.current_client == NULL || server.current_client->id == CLIENT_ID_AOF) ? 0 : 1)\n\nchar* rdbFileBeingLoaded = NULL; /* used for rdb checking on read error */\nextern int rdbCheckMode;\nvoid rdbCheckError(const char *fmt, ...);\nvoid rdbCheckSetError(const char *fmt, ...);\n\n#ifdef __GNUC__\nvoid rdbReportError(int corruption_error, int linenum, char *reason, ...) __attribute__ ((format (printf, 3, 4)));\n#endif\nvoid rdbReportError(int corruption_error, int linenum, char *reason, ...) 
{\n    va_list ap;\n    char msg[1024];\n    int len;\n\n    len = snprintf(msg,sizeof(msg),\n        \"Internal error in RDB reading offset %llu, function at rdb.c:%d -> \",\n        (unsigned long long)server.loading_loaded_bytes, linenum);\n    va_start(ap,reason);\n    vsnprintf(msg+len,sizeof(msg)-len,reason,ap);\n    va_end(ap);\n\n    if (isRestoreContext()) {\n        /* If we're in the context of a RESTORE command, just propagate the error. */\n        /* log in VERBOSE, and return (don't exit). */\n        serverLog(LL_VERBOSE, \"%s\", msg);\n        return;\n    } else if (rdbCheckMode) {\n        /* If we're inside the rdb checker, let it handle the error. */\n        rdbCheckError(\"%s\",msg);\n    } else if (rdbFileBeingLoaded) {\n        /* If we're loading an rdb file from disk, run rdb check (and exit) */\n        serverLog(LL_WARNING, \"%s\", msg);\n        char *argv[2] = {\"\",rdbFileBeingLoaded};\n        if (anetIsFifo(argv[1])) {\n            /* Cannot check RDB FIFO because we cannot reopen the FIFO and check already streamed data. */\n            rdbCheckError(\"Cannot check RDB that is a FIFO: %s\", argv[1]);\n            return;\n        }\n        redis_check_rdb_main(2,argv,NULL);\n    } else if (corruption_error) {\n        /* In diskless loading, in case of corrupt file, log and exit. */\n        serverLog(LL_WARNING, \"%s. Failure loading rdb format\", msg);\n    } else {\n        /* In diskless loading, in case of a short read (not a corrupt\n         * file), log and proceed (don't exit). */\n        serverLog(LL_WARNING, \"%s. 
Failure loading rdb format from socket, assuming connection error, resuming operation.\", msg);\n        return;\n    }\n    serverLog(LL_WARNING, \"Terminating server after rdb file reading failure.\");\n    exit(1);\n}\n\nssize_t rdbWriteRaw(rio *rdb, void *p, size_t len) {\n    if (rdb && rioWrite(rdb,p,len) == 0)\n        return -1;\n    return len;\n}\n\nint rdbSaveType(rio *rdb, unsigned char type) {\n    return rdbWriteRaw(rdb,&type,1);\n}\n\n/* Load a \"type\" in RDB format, that is a one byte unsigned integer.\n * This function is not only used to load object types, but also special\n * \"types\" like the end-of-file type, the EXPIRE type, and so forth. */\nint rdbLoadType(rio *rdb) {\n    unsigned char type;\n    if (rioRead(rdb,&type,1) == 0) return -1;\n    return type;\n}\n\n/* This is only used to load old databases stored with the RDB_OPCODE_EXPIRETIME\n * opcode. New versions of Redis store using the RDB_OPCODE_EXPIRETIME_MS\n * opcode. On error -1 is returned, however this could be a valid time, so\n * to check for loading errors the caller should call rioGetReadError() after\n * calling this function. */\ntime_t rdbLoadTime(rio *rdb) {\n    int32_t t32;\n    if (rioRead(rdb,&t32,4) == 0) return -1;\n    return (time_t)t32;\n}\n\nssize_t rdbSaveMillisecondTime(rio *rdb, long long t) {\n    int64_t t64 = (int64_t) t;\n    memrev64ifbe(&t64); /* Store in little endian. */\n    return rdbWriteRaw(rdb,&t64,8);\n}\n\n/* This function loads a time from the RDB file. It gets the version of the\n * RDB because, unfortunately, before Redis 5 (RDB version 9), the function\n * failed to convert data to/from little endian, so RDB files with keys having\n * expires could not be shared between big endian and little endian systems\n * (because the expire time will be totally wrong). 
The fix for this is just\n * to call memrev64ifbe(), however if we fix this for all the RDB versions,\n * this call will introduce an incompatibility for big endian systems:\n * after upgrading to Redis version 5 they will no longer be able to load their\n * own old RDB files. Because of that, we instead fix the function only for new\n * RDB versions, and load older RDB versions as we used to do in the past,\n * allowing big endian systems to load their own old RDB files.\n *\n * On I/O error the function returns LLONG_MAX, however if this is also a\n * valid stored value, the caller should use rioGetReadError() to check for\n * errors after calling this function. */\nlong long rdbLoadMillisecondTime(rio *rdb, int rdbver) {\n    int64_t t64;\n    if (rioRead(rdb,&t64,8) == 0) return LLONG_MAX;\n    if (rdbver >= 9) /* Check the top comment of this function. */\n        memrev64ifbe(&t64); /* Convert in big endian if the system is BE. */\n    return (long long)t64;\n}\n\n/* Saves an encoded length. The first two bits in the first byte are used to\n * hold the encoding type. See the RDB_* definitions for more information\n * on the types of encoding. 
*/\nint rdbSaveLen(rio *rdb, uint64_t len) {\n    unsigned char buf[2];\n    size_t nwritten;\n\n    if (len < (1<<6)) {\n        /* Save a 6 bit len */\n        buf[0] = (len&0xFF)|(RDB_6BITLEN<<6);\n        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;\n        nwritten = 1;\n    } else if (len < (1<<14)) {\n        /* Save a 14 bit len */\n        buf[0] = ((len>>8)&0xFF)|(RDB_14BITLEN<<6);\n        buf[1] = len&0xFF;\n        if (rdbWriteRaw(rdb,buf,2) == -1) return -1;\n        nwritten = 2;\n    } else if (len <= UINT32_MAX) {\n        /* Save a 32 bit len */\n        buf[0] = RDB_32BITLEN;\n        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;\n        uint32_t len32 = htonl(len);\n        if (rdbWriteRaw(rdb,&len32,4) == -1) return -1;\n        nwritten = 1+4;\n    } else {\n        /* Save a 64 bit len */\n        buf[0] = RDB_64BITLEN;\n        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;\n        len = htonu64(len);\n        if (rdbWriteRaw(rdb,&len,8) == -1) return -1;\n        nwritten = 1+8;\n    }\n    return nwritten;\n}\n\n\n/* Load an encoded length. If the loaded length is a normal length as stored\n * with rdbSaveLen(), the read length is set to '*lenptr'. If instead the\n * loaded length describes a special encoding that follows, then '*isencoded'\n * is set to 1 and the encoding format is stored at '*lenptr'.\n *\n * See the RDB_ENC_* definitions in rdb.h for more information on special\n * encodings.\n *\n * The function returns -1 on error, 0 on success. */\nint rdbLoadLenByRef(rio *rdb, int *isencoded, uint64_t *lenptr) {\n    unsigned char buf[2];\n    int type;\n\n    if (isencoded) *isencoded = 0;\n    if (rioRead(rdb,buf,1) == 0) return -1;\n    type = (buf[0]&0xC0)>>6;\n    if (type == RDB_ENCVAL) {\n        /* Read a 6 bit encoding type. */\n        if (isencoded) *isencoded = 1;\n        *lenptr = buf[0]&0x3F;\n    } else if (type == RDB_6BITLEN) {\n        /* Read a 6 bit len. 
*/\n        *lenptr = buf[0]&0x3F;\n    } else if (type == RDB_14BITLEN) {\n        /* Read a 14 bit len. */\n        if (rioRead(rdb,buf+1,1) == 0) return -1;\n        *lenptr = ((buf[0]&0x3F)<<8)|buf[1];\n    } else if (buf[0] == RDB_32BITLEN) {\n        /* Read a 32 bit len. */\n        uint32_t len;\n        if (rioRead(rdb,&len,4) == 0) return -1;\n        *lenptr = ntohl(len);\n    } else if (buf[0] == RDB_64BITLEN) {\n        /* Read a 64 bit len. */\n        uint64_t len;\n        if (rioRead(rdb,&len,8) == 0) return -1;\n        *lenptr = ntohu64(len);\n    } else {\n        rdbReportCorruptRDB(\n            \"Unknown length encoding %d in rdbLoadLen()\",type);\n        return -1; /* Never reached. */\n    }\n    return 0;\n}\n\n/* This is like rdbLoadLenByRef() but directly returns the value read\n * from the RDB stream, signaling an error by returning RDB_LENERR\n * (since it is too large a count to be applicable to any Redis data\n * structure). */\nuint64_t rdbLoadLen(rio *rdb, int *isencoded) {\n    uint64_t len;\n\n    if (rdbLoadLenByRef(rdb,isencoded,&len) == -1) return RDB_LENERR;\n    return len;\n}\n\n/* Encodes the \"value\" argument as an integer when it fits in the supported\n * ranges for encoded types. If the function successfully encodes the integer,\n * the representation is stored in the buffer pointed to by \"enc\" and the\n * string length is returned. Otherwise 0 is returned. 
*/\nint rdbEncodeInteger(long long value, unsigned char *enc) {\n    if (value >= -(1<<7) && value <= (1<<7)-1) {\n        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT8;\n        enc[1] = value&0xFF;\n        return 2;\n    } else if (value >= -(1<<15) && value <= (1<<15)-1) {\n        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT16;\n        enc[1] = value&0xFF;\n        enc[2] = (value>>8)&0xFF;\n        return 3;\n    } else if (value >= -((long long)1<<31) && value <= ((long long)1<<31)-1) {\n        enc[0] = (RDB_ENCVAL<<6)|RDB_ENC_INT32;\n        enc[1] = value&0xFF;\n        enc[2] = (value>>8)&0xFF;\n        enc[3] = (value>>16)&0xFF;\n        enc[4] = (value>>24)&0xFF;\n        return 5;\n    } else {\n        return 0;\n    }\n}\n\n/* Loads an integer-encoded object with the specified encoding type \"enctype\".\n * The returned value changes according to the flags, see\n * rdbGenericLoadStringObject() for more info. */\nvoid *rdbLoadIntegerObject(rio *rdb, int enctype, int flags, size_t *lenptr, size_t *usable) {\n    int plainFlag = flags & RDB_LOAD_PLAIN;\n    int sdsFlag = flags & RDB_LOAD_SDS;\n    int encode = flags & RDB_LOAD_ENC;\n    unsigned char enc[4];\n    long long val;\n\n    if (enctype == RDB_ENC_INT8) {\n        if (rioRead(rdb,enc,1) == 0) return NULL;\n        val = (signed char)enc[0];\n    } else if (enctype == RDB_ENC_INT16) {\n        uint16_t v;\n        if (rioRead(rdb,enc,2) == 0) return NULL;\n        v = ((uint32_t)enc[0])|\n            ((uint32_t)enc[1]<<8);\n        val = (int16_t)v;\n    } else if (enctype == RDB_ENC_INT32) {\n        uint32_t v;\n        if (rioRead(rdb,enc,4) == 0) return NULL;\n        v = ((uint32_t)enc[0])|\n            ((uint32_t)enc[1]<<8)|\n            ((uint32_t)enc[2]<<16)|\n            ((uint32_t)enc[3]<<24);\n        val = (int32_t)v;\n    } else {\n        rdbReportCorruptRDB(\"Unknown RDB integer encoding type %d\",enctype);\n        return NULL; /* Never reached. 
*/\n    }\n    if (plainFlag || sdsFlag) {\n        char buf[LONG_STR_SIZE], *p;\n        int len = ll2string(buf,sizeof(buf),val);\n        if (lenptr) *lenptr = len;\n        if (plainFlag) {\n            p = zmalloc_usable(len, usable);\n        } else {\n            debugServerAssert(sdsFlag);\n            p = sdsnewlen(SDS_NOINIT,len);\n            if (usable) *usable = sdsAllocSize(p);\n        } \n        \n        memcpy(p,buf,len);\n        return p;\n    } else if (encode) {\n        return createStringObjectFromLongLongForValue(val);\n    } else {\n        return createStringObjectFromLongLongWithSds(val);\n    }\n}\n\n/* String objects in the form \"2391\" \"-100\" without any space and with a\n * range of values that can fit in an 8, 16 or 32 bit signed value can be\n * encoded as integers to save space */\nint rdbTryIntegerEncoding(char *s, size_t len, unsigned char *enc) {\n    long long value;\n    if (string2ll(s, len, &value)) {\n        return rdbEncodeInteger(value, enc);\n    } else {\n        return 0;\n    }\n}\n\nssize_t rdbSaveLzfBlob(rio *rdb, void *data, size_t compress_len,\n                       size_t original_len) {\n    unsigned char byte;\n    ssize_t n, nwritten = 0;\n\n    /* Data compressed! 
Let's save it on disk */\n    byte = (RDB_ENCVAL<<6)|RDB_ENC_LZF;\n    if ((n = rdbWriteRaw(rdb,&byte,1)) == -1) goto writeerr;\n    nwritten += n;\n\n    if ((n = rdbSaveLen(rdb,compress_len)) == -1) goto writeerr;\n    nwritten += n;\n\n    if ((n = rdbSaveLen(rdb,original_len)) == -1) goto writeerr;\n    nwritten += n;\n\n    if ((n = rdbWriteRaw(rdb,data,compress_len)) == -1) goto writeerr;\n    nwritten += n;\n\n    return nwritten;\n\nwriteerr:\n    return -1;\n}\n\nssize_t rdbSaveLzfStringObject(rio *rdb, unsigned char *s, size_t len) {\n    size_t comprlen, outlen;\n    void *out;\n\n    /* We require at least four bytes compression for this to be worth it */\n    if (len <= 4) return 0;\n    outlen = len-4;\n    if ((out = zmalloc(outlen+1)) == NULL) return 0;\n    comprlen = lzf_compress(s, len, out, outlen);\n    if (comprlen == 0) {\n        zfree(out);\n        return 0;\n    }\n    ssize_t nwritten = rdbSaveLzfBlob(rdb, out, comprlen, len);\n    zfree(out);\n    return nwritten;\n}\n\n/* Load an LZF compressed string in RDB format. The returned value\n * changes according to 'flags'. For more info check the\n * rdbGenericLoadStringObject() function. */\nvoid *rdbLoadLzfStringObject(rio *rdb, int flags, size_t *lenptr, size_t *usable) {\n    int plainFlag = flags & RDB_LOAD_PLAIN;\n    int sdsFlag = flags & RDB_LOAD_SDS;\n    int robjFlag = !(plainFlag || sdsFlag); /* not plain/sds */\n\n    uint64_t len, clen;\n    unsigned char *c = NULL;\n    char *val = NULL;\n\n    if ((clen = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n    if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n    if ((c = ztrymalloc(clen)) == NULL) {\n        serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, \"rdbLoadLzfStringObject failed allocating %llu bytes\", (unsigned long long)clen);\n        goto err;\n    }\n\n    /* Allocate our target according to the uncompressed size. 
*/\n    if (plainFlag) {\n        val = ztrymalloc_usable(len, usable);\n    } else {\n        debugServerAssert(sdsFlag || robjFlag);\n        val = sdstrynewlen(SDS_NOINIT,len);\n        if (usable) *usable = sdsAllocSize(val);\n    }\n\n    if (!val) {\n        serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, \"rdbLoadLzfStringObject failed allocating %llu bytes\", (unsigned long long)len);\n        goto err;\n    }\n\n    if (lenptr) *lenptr = len;\n\n    /* Load the compressed representation and uncompress it to target. */\n    if (rioRead(rdb,c,clen) == 0) goto err;\n    if (lzf_decompress(c,clen,val,len) != len) {\n        rdbReportCorruptRDB(\"Invalid LZF compressed string\");\n        goto err;\n    }\n    zfree(c);\n\n    return (robjFlag) ? createObject(OBJ_STRING,val) : (void *) val;\n\nerr:\n    zfree(c);\n    if (plainFlag) {\n        zfree(val);\n    } else {\n        debugServerAssert(sdsFlag || robjFlag);\n        sdsfree(val);\n    }\n    return NULL;\n}\n\n/* Save a string object as [len][data] on disk. 
If the object is a string\n * representation of an integer value we try to save it in a special form */\nssize_t rdbSaveRawString(rio *rdb, unsigned char *s, size_t len) {\n    int enclen;\n    ssize_t n, nwritten = 0;\n\n    /* Try integer encoding */\n    if (len <= 11) {\n        unsigned char buf[5];\n        if ((enclen = rdbTryIntegerEncoding((char*)s,len,buf)) > 0) {\n            if (rdbWriteRaw(rdb,buf,enclen) == -1) return -1;\n            return enclen;\n        }\n    }\n\n    /* Try LZF compression - under 20 bytes it's unable to compress even\n     * aaaaaaaaaaaaaaaaaa so skip it */\n    if (server.rdb_compression && len > 20) {\n        n = rdbSaveLzfStringObject(rdb,s,len);\n        if (n == -1) return -1;\n        if (n > 0) return n;\n        /* Return value of 0 means data can't be compressed, save the old way */\n    }\n\n    /* Store verbatim */\n    if ((n = rdbSaveLen(rdb,len)) == -1) return -1;\n    nwritten += n;\n    if (len > 0) {\n        if (rdbWriteRaw(rdb,s,len) == -1) return -1;\n        nwritten += len;\n    }\n    return nwritten;\n}\n\n/* Save a long long value as either an encoded string or a string. */\nssize_t rdbSaveLongLongAsStringObject(rio *rdb, long long value) {\n    unsigned char buf[32];\n    ssize_t n, nwritten = 0;\n    int enclen = rdbEncodeInteger(value,buf);\n    if (enclen > 0) {\n        return rdbWriteRaw(rdb,buf,enclen);\n    } else {\n        /* Encode as string */\n        enclen = ll2string((char*)buf,32,value);\n        serverAssert(enclen < 32);\n        if ((n = rdbSaveLen(rdb,enclen)) == -1) return -1;\n        nwritten += n;\n        if ((n = rdbWriteRaw(rdb,buf,enclen)) == -1) return -1;\n        nwritten += n;\n    }\n    return nwritten;\n}\n\n/* Like rdbSaveRawString() but gets a Redis object instead. */\nssize_t rdbSaveStringObject(rio *rdb, robj *obj) {\n    /* Avoid decoding the object and then encoding it again, if the\n     * object is already integer encoded. 
*/\n    if (obj->encoding == OBJ_ENCODING_INT) {\n        return rdbSaveLongLongAsStringObject(rdb,(long)obj->ptr);\n    } else {\n        serverAssertWithInfo(NULL,obj,sdsEncodedObject(obj));\n        return rdbSaveRawString(rdb,obj->ptr,sdslen(obj->ptr));\n    }\n}\n\n/* Load a string object from an RDB file according to flags:\n *\n * RDB_LOAD_NONE (no flags): load an RDB object, unencoded.\n * RDB_LOAD_ENC: If the returned type is a Redis object, try to\n *               encode it in a special way to be more memory\n *               efficient. When this flag is passed the function\n *               no longer guarantees that obj->ptr is an SDS string.\n * RDB_LOAD_PLAIN: Return a plain string allocated with zmalloc()\n *                 instead of a Redis object with an sds in it.\n * RDB_LOAD_SDS: Return an SDS string instead of a Redis object.\n *\n * On I/O error NULL is returned.\n */\nvoid *rdbGenericLoadStringObjectUsable(rio *rdb, int flags, size_t *lenptr, size_t *usable) {\n    void *buf;\n    int plainFlag = flags & RDB_LOAD_PLAIN;\n    int sdsFlag = flags & RDB_LOAD_SDS;\n    int robjFlag = !(plainFlag || sdsFlag); /* not plain/sds */\n\n    int isencoded;\n    unsigned long long len;\n\n    len = rdbLoadLen(rdb,&isencoded);\n    if (len == RDB_LENERR) return NULL;\n\n    if (isencoded) {\n        switch(len) {\n        case RDB_ENC_INT8:\n        case RDB_ENC_INT16:\n        case RDB_ENC_INT32:\n            return rdbLoadIntegerObject(rdb,len,flags,lenptr,usable);\n        case RDB_ENC_LZF:\n            return rdbLoadLzfStringObject(rdb,flags,lenptr,usable);\n        default:\n            rdbReportCorruptRDB(\"Unknown RDB string encoding type %llu\",len);\n            return NULL;\n        }\n    }\n\n    /* return robj */\n    if (robjFlag) {\n        if (usable) *usable = 0;\n        robj *o = tryCreateStringObject(SDS_NOINIT,len);\n        if (!o) {\n            serverLog(isRestoreContext()? 
LL_VERBOSE: LL_WARNING, \"rdbGenericLoadStringObject failed allocating %llu bytes\", len);\n            return NULL;\n        }\n        if (len && rioRead(rdb,o->ptr,len) == 0) {\n            decrRefCount(o);\n            return NULL;\n        }\n        return o;\n    }\n\n    /* plain/sds */\n    if (plainFlag) {\n        buf = ztrymalloc_usable(len, usable);\n    } else {\n        debugServerAssert(sdsFlag);\n        buf = sdstrynewlen(SDS_NOINIT,len);\n        if (usable) *usable = sdsAllocSize(buf);\n    }  \n    \n    if (!buf) {\n        serverLog(isRestoreContext()? LL_VERBOSE: LL_WARNING, \"rdbGenericLoadStringObject failed allocating %llu bytes\", len);\n        return NULL;\n    }\n\n    if (lenptr) *lenptr = len;\n    if (len && rioRead(rdb,buf,len) == 0) {\n        if (plainFlag)\n            zfree(buf);\n        else\n            sdsfree(buf);\n        return NULL;\n    }\n    return buf;\n}\n\nvoid *rdbGenericLoadStringObject(rio *rdb, int flags, size_t *lenptr) {\n    return rdbGenericLoadStringObjectUsable(rdb,flags,lenptr,NULL);\n}\n\nrobj *rdbLoadStringObject(rio *rdb) {\n    return rdbGenericLoadStringObject(rdb,RDB_LOAD_NONE,NULL);\n}\n\nrobj *rdbLoadEncodedStringObject(rio *rdb) {\n    return rdbGenericLoadStringObject(rdb,RDB_LOAD_ENC,NULL);\n}\n\n/* Save a double value. Doubles are saved as strings prefixed by an unsigned\n * 8 bit integer specifying the length of the representation.\n * This 8 bit integer has special values in order to specify the following\n * conditions:\n * 253: not a number\n * 254: + inf\n * 255: - inf\n */\nssize_t rdbSaveDoubleValue(rio *rdb, double val) {\n    unsigned char buf[128];\n    int len;\n\n    if (isnan(val)) {\n        buf[0] = 253;\n        len = 1;\n    } else if (!isfinite(val)) {\n        len = 1;\n        buf[0] = (val < 0) ? 255 : 254;\n    } else {\n        long long lvalue;\n        /* Integer printing function is much faster, check if we can safely use it. 
*/\n        if (double2ll(val, &lvalue))\n            ll2string((char*)buf+1,sizeof(buf)-1,lvalue);\n        else {\n            const int dlen = fpconv_dtoa(val, (char*)buf+1);\n            buf[dlen+1] = '\\0';\n        }\n        buf[0] = strlen((char*)buf+1);\n        len = buf[0]+1;\n    }\n    return rdbWriteRaw(rdb,buf,len);\n}\n\n/* For information about double serialization check rdbSaveDoubleValue() */\nint rdbLoadDoubleValue(rio *rdb, double *val) {\n    char buf[256];\n    unsigned char len;\n\n    if (rioRead(rdb,&len,1) == 0) return -1;\n    switch(len) {\n    case 255: *val = R_NegInf; return 0;\n    case 254: *val = R_PosInf; return 0;\n    case 253: *val = R_Nan; return 0;\n    default:\n        if (rioRead(rdb,buf,len) == 0) return -1;\n        buf[len] = '\\0';\n        if (sscanf(buf, \"%lg\", val)!=1) return -1;\n        return 0;\n    }\n}\n\n/* Saves a double for RDB 8 or greater, where the IEEE 754 binary64 format is\n * assumed. We just make sure the value is always stored in little endian,\n * otherwise it is copied verbatim from memory to disk.\n *\n * Return -1 on error, the size of the serialized value on success. */\nint rdbSaveBinaryDoubleValue(rio *rdb, double val) {\n    memrev64ifbe(&val);\n    return rdbWriteRaw(rdb,&val,sizeof(val));\n}\n\n/* Loads a double from RDB 8 or greater. See rdbSaveBinaryDoubleValue() for\n * more info. On error -1 is returned, otherwise 0. */\nint rdbLoadBinaryDoubleValue(rio *rdb, double *val) {\n    if (rioRead(rdb,val,sizeof(*val)) == 0) return -1;\n    memrev64ifbe(val);\n    return 0;\n}\n\n/* Like rdbSaveBinaryDoubleValue() but single precision. */\nint rdbSaveBinaryFloatValue(rio *rdb, float val) {\n    memrev32ifbe(&val);\n    return rdbWriteRaw(rdb,&val,sizeof(val));\n}\n\n/* Like rdbLoadBinaryDoubleValue() but single precision. 
*/\nint rdbLoadBinaryFloatValue(rio *rdb, float *val) {\n    if (rioRead(rdb,val,sizeof(*val)) == 0) return -1;\n    memrev32ifbe(val);\n    return 0;\n}\n\n/* Save the object type of object \"o\". */\nint rdbSaveObjectType(rio *rdb, robj *o) {\n    switch (o->type) {\n    case OBJ_STRING:\n        return rdbSaveType(rdb,RDB_TYPE_STRING);\n    case OBJ_LIST:\n        if (o->encoding == OBJ_ENCODING_QUICKLIST || o->encoding == OBJ_ENCODING_LISTPACK)\n            return rdbSaveType(rdb, RDB_TYPE_LIST_QUICKLIST_2);\n        else\n            serverPanic(\"Unknown list encoding\");\n    case OBJ_SET:\n        if (o->encoding == OBJ_ENCODING_INTSET)\n            return rdbSaveType(rdb,RDB_TYPE_SET_INTSET);\n        else if (o->encoding == OBJ_ENCODING_HT)\n            return rdbSaveType(rdb,RDB_TYPE_SET);\n        else if (o->encoding == OBJ_ENCODING_LISTPACK)\n            return rdbSaveType(rdb,RDB_TYPE_SET_LISTPACK);\n        else\n            serverPanic(\"Unknown set encoding\");\n    case OBJ_ZSET:\n        if (o->encoding == OBJ_ENCODING_LISTPACK)\n            return rdbSaveType(rdb,RDB_TYPE_ZSET_LISTPACK);\n        else if (o->encoding == OBJ_ENCODING_SKIPLIST)\n            return rdbSaveType(rdb,RDB_TYPE_ZSET_2);\n        else\n            serverPanic(\"Unknown sorted set encoding\");\n    case OBJ_HASH:\n        if (o->encoding == OBJ_ENCODING_LISTPACK)\n            return rdbSaveType(rdb,RDB_TYPE_HASH_LISTPACK);\n        else if (o->encoding == OBJ_ENCODING_LISTPACK_EX)\n            return rdbSaveType(rdb,RDB_TYPE_HASH_LISTPACK_EX);\n        else if (o->encoding == OBJ_ENCODING_HT) {\n            if (hashTypeGetMinExpire(o, /*accurate*/ 1) == EB_EXPIRE_TIME_INVALID)\n                return rdbSaveType(rdb,RDB_TYPE_HASH);\n            else\n                return rdbSaveType(rdb,RDB_TYPE_HASH_METADATA);\n        } else\n            serverPanic(\"Unknown hash encoding\");\n    case OBJ_STREAM:\n        return rdbSaveType(rdb,RDB_TYPE_STREAM_LISTPACKS_5);\n    
case OBJ_GCRA:\n        return rdbSaveType(rdb,RDB_TYPE_GCRA);\n    case OBJ_MODULE:\n        return rdbSaveType(rdb,RDB_TYPE_MODULE_2);\n    default:\n        serverPanic(\"Unknown object type\");\n    }\n    return -1; /* avoid warning */\n}\n\n/* Use rdbLoadType() to load a TYPE in RDB format, but return -1 if the\n * type is not a valid Object Type. */\nint rdbLoadObjectType(rio *rdb) {\n    int type;\n    if ((type = rdbLoadType(rdb)) == -1) return -1;\n    if (!rdbIsObjectType(type)) return -1;\n    return type;\n}\n\n/* This helper function serializes a consumer group Pending Entries List (PEL)\n * into the RDB file. The 'nacks' argument tells the function whether to also\n * persist the information about the not acknowledged messages, or to persist\n * just the IDs: this is useful because for the global consumer group PEL\n * we serialize the NACKs as well, but when serializing the local consumer\n * PELs we just add the ID, which will be resolved inside the global PEL to\n * put a reference to the same structure. */\nssize_t rdbSaveStreamPEL(rio *rdb, rax *pel, int nacks) {\n    ssize_t n, nwritten = 0;\n\n    /* Number of entries in the PEL. */\n    if ((n = rdbSaveLen(rdb,raxSize(pel))) == -1) return -1;\n    nwritten += n;\n\n    /* Save each entry. */\n    raxIterator ri;\n    raxStart(&ri,pel);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        /* We store IDs in raw form as 128 bit big endian numbers, like\n         * they are inside the radix tree key. 
*/\n        if ((n = rdbWriteRaw(rdb,ri.key,sizeof(streamID))) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        if (nacks) {\n            streamNACK *nack = ri.data;\n            if ((n = rdbSaveMillisecondTime(rdb,nack->delivery_time)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n            if ((n = rdbSaveLen(rdb,nack->delivery_count)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n            /* We don't save the consumer name: we'll save the pending IDs\n             * for each consumer in the consumer PEL, and resolve the consumer\n             * at loading time. */\n        }\n    }\n    raxStop(&ri);\n    return nwritten;\n}\n\n/* Serialize the IDMP entries for a stream into the RDB file.\n * This saves all the idempotent producer tracking entries (IID -> stream ID mappings).\n * Expired entries are filtered out. Producers whose entries all expired are still\n * written with count=0; the load side skips them.\n * Format: num_producers, then for each producer: pid, num_entries, entries... */\nssize_t rdbSaveStreamIdmpEntries(rio *rdb, stream *s) {\n    ssize_t n, nwritten = 0;\n\n    /* Save the number of producers. */\n    size_t num_producers = s->idmp_producers ? raxSize(s->idmp_producers) : 0;\n    if ((n = rdbSaveLen(rdb,num_producers)) == -1) return -1;\n    nwritten += n;\n\n    if (num_producers == 0) return nwritten;\n\n    uint64_t expire_time = server.mstime - (s->idmp_duration * 1000);\n\n    /* Iterate through all producers. */\n    raxIterator ri;\n    raxStart(&ri, s->idmp_producers);\n    raxSeek(&ri, \"^\", NULL, 0);\n    while (raxNext(&ri)) {\n        idmpProducer *producer = ri.data;\n\n        /* Save the producer ID (pid). 
*/\n        if ((n = rdbSaveRawString(rdb, ri.key, ri.key_len)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        /* Find the first non-expired entry. The linked list is ordered by\n         * timestamp, so all entries after the first valid one are also valid. */\n        idmpEntry *first_valid = producer->idmp_head;\n        size_t expired = 0;\n        while (first_valid && first_valid->id.ms <= expire_time) {\n            first_valid = first_valid->next;\n            expired++;\n        }\n\n        /* Save the number of entries for this producer. */\n        size_t count = dictSize(producer->idmp_dict) - expired;\n        if ((n = rdbSaveLen(rdb, count)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        /* Save each non-expired entry in insertion order. */\n        idmpEntry *entry = first_valid;\n        while (entry != NULL) {\n            /* Save the IID string (length + data). */\n            if ((n = rdbSaveRawString(rdb,(unsigned char *)entry->iid,entry->iid_len)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n\n            /* Save the associated stream ID. */\n            if ((n = rdbSaveLen(rdb,entry->id.ms)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n            if ((n = rdbSaveLen(rdb,entry->id.seq)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n\n            entry = entry->next;\n        }\n    }\n    raxStop(&ri);\n    return nwritten;\n}\n\n/* Load IDMP entries for a stream from the RDB file.\n * This loads all the idempotent producer tracking entries (IID -> stream ID mappings)\n * and inserts them into the stream's idmp_producers rax tree.\n * Format: num_producers, then for each producer: pid, num_entries, entries... 
*/\nint rdbLoadStreamIdmpEntries(rio *rdb, stream *s) {\n    /* Load the number of producers. */\n    uint64_t num_producers = rdbLoadLen(rdb, NULL);\n    if (num_producers == RDB_LENERR) {\n        return -1;\n    }\n\n    if (num_producers == 0) return 0;\n\n    uint64_t expire_time = server.mstime - (s->idmp_duration * 1000);\n\n    /* Create the producers rax tree. */\n    s->idmp_producers = raxNewWithMetadata(0, &s->alloc_size);\n    if (s->idmp_producers == NULL) {\n        return -1;\n    }\n\n    /* Track pid across error paths so cleanup can free it. */\n    char *pid = NULL;\n    size_t pid_len = 0;\n\n    /* Load each producer. */\n    for (uint64_t p = 0; p < num_producers; p++) {\n        /* Load the producer ID (pid). */\n        pid = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, &pid_len);\n        if (pid == NULL) goto cleanup;\n\n        /* Load the number of entries for this producer. */\n        uint64_t count = rdbLoadLen(rdb, NULL);\n        if (count == RDB_LENERR) goto cleanup;\n\n        /* Skip producers with 0 entries (written by save when all entries\n         * were expired). Just consume the RDB data and move on. */\n        if (count == 0) {\n            sdsfree(pid);\n            pid = NULL;\n            continue;\n        }\n\n        /* Create the producer. */\n        idmpProducer *producer = idmpProducerCreate(&s->alloc_size);\n\n        /* Insert producer into rax tree. */\n        int inserted = raxTryInsert(s->idmp_producers, (unsigned char *)pid, pid_len, producer, NULL);\n        if (!inserted) {\n            idmpProducerFree(producer, &s->alloc_size);\n            goto cleanup;\n        }\n\n        /* Load each entry for this producer. */\n        for (uint64_t i = 0; i < count; i++) {\n            /* Load the IID string. 
*/\n            size_t iid_len;\n            char *iid = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, &iid_len);\n            if (iid == NULL) goto cleanup;\n\n            /* Load the associated stream ID. */\n            streamID id;\n            id.ms = rdbLoadLen(rdb, NULL);\n            id.seq = rdbLoadLen(rdb, NULL);\n            if (rioGetReadError(rdb)) {\n                sdsfree(iid);\n                goto cleanup;\n            }\n\n            /* Skip entries that have already expired. */\n            if (id.ms <= expire_time) {\n                sdsfree(iid);\n                continue;\n            }\n\n            /* Create the idmpEntry. */\n            idmpEntry *entry = idmpEntryCreate(iid, iid_len, &s->alloc_size);\n            sdsfree(iid); /* idmpEntryCreate makes a copy */\n\n            /* Set the stream ID. */\n            entry->id = id;\n            entry->next = NULL;\n\n            /* Insert into dict. If insertion fails (e.g., duplicate), skip. */\n            int ret = dictAdd(producer->idmp_dict, entry, NULL);\n            if (ret != DICT_OK) {\n                /* Insertion failed (duplicate). For RDB loading, we'll just skip\n                 * duplicates rather than failing the entire load. */\n                idmpEntryFree(entry, &s->alloc_size);\n            } else {\n                /* Add to linked list tail. */\n                if (producer->idmp_tail == NULL) {\n                    producer->idmp_head = producer->idmp_tail = entry;\n                } else {\n                    producer->idmp_tail->next = entry;\n                    producer->idmp_tail = entry;\n                }\n            }\n        }\n\n        /* If all entries were expired, remove the empty producer. 
*/\n        if (producer->idmp_head == NULL) {\n            raxRemove(s->idmp_producers, (unsigned char *)pid, pid_len, NULL);\n            idmpProducerFree(producer, &s->alloc_size);\n        }\n        sdsfree(pid);\n        pid = NULL;\n    }\n\n    /* If no producers remain after filtering, free the rax tree. */\n    if (raxSize(s->idmp_producers) == 0) {\n        raxFree(s->idmp_producers);\n        s->idmp_producers = NULL;\n    }\n\n    return 0;\n\ncleanup:\n    if (pid) sdsfree(pid);\n    /* Clean up partially constructed producers tree on error.\n     * This prevents use-after-free when the stream is later freed. */\n    if (s->idmp_producers) {\n        raxFreeWithCbAndContext(s->idmp_producers, streamFreeIdmpProducerGeneric, s);\n        s->idmp_producers = NULL;\n    }\n    return -1;\n}\n\n/* Serialize the consumers of a stream consumer group into the RDB. Helper\n * function for the stream data type serialization. What we do here is to\n * persist the consumer metadata, and its PEL, for each consumer. */\nsize_t rdbSaveStreamConsumers(rio *rdb, streamCG *cg) {\n    ssize_t n, nwritten = 0;\n\n    /* Number of consumers in this consumer group. */\n    if ((n = rdbSaveLen(rdb,raxSize(cg->consumers))) == -1) return -1;\n    nwritten += n;\n\n    /* Save each consumer. */\n    raxIterator ri;\n    raxStart(&ri,cg->consumers);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        streamConsumer *consumer = ri.data;\n\n        /* Consumer name. */\n        if ((n = rdbSaveRawString(rdb,ri.key,ri.key_len)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        /* Seen time. */\n        if ((n = rdbSaveMillisecondTime(rdb,consumer->seen_time)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        /* Active time. 
*/\n        if ((n = rdbSaveMillisecondTime(rdb,consumer->active_time)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n\n        /* Consumer PEL, without the NACK metadata (hence the last argument\n         * of rdbSaveStreamPEL() set to 0): at loading time we'll look up\n         * each ID in the consumer group global PEL and will put a reference\n         * in the consumer local PEL. */\n        if ((n = rdbSaveStreamPEL(rdb,consumer->pel,0)) == -1) {\n            raxStop(&ri);\n            return -1;\n        }\n        nwritten += n;\n    }\n    raxStop(&ri);\n    return nwritten;\n}\n\n/* Save a Redis object.\n * Returns -1 on error, number of bytes written on success. */\nssize_t rdbSaveObject(rio *rdb, robj *o, robj *key, int dbid) {\n    ssize_t n = 0, nwritten = 0;\n\n    if (o->type == OBJ_STRING) {\n        /* Save a string value */\n        if ((n = rdbSaveStringObject(rdb,o)) == -1) return -1;\n        nwritten += n;\n    } else if (o->type == OBJ_LIST) {\n        /* Save a list value */\n        if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n            quicklist *ql = o->ptr;\n            quicklistNode *node = ql->head;\n\n            if ((n = rdbSaveLen(rdb,ql->len)) == -1) return -1;\n            nwritten += n;\n\n            while(node) {\n                if ((n = rdbSaveLen(rdb,node->container)) == -1) return -1;\n                nwritten += n;\n\n                if (quicklistNodeIsCompressed(node)) {\n                    void *data;\n                    size_t compress_len = quicklistGetLzf(node, &data);\n                    if ((n = rdbSaveLzfBlob(rdb,data,compress_len,node->sz)) == -1) return -1;\n                    nwritten += n;\n                } else {\n                    if ((n = rdbSaveRawString(rdb,node->entry,node->sz)) == -1) return -1;\n                    nwritten += n;\n                }\n                node = node->next;\n            }\n        } else if (o->encoding == 
OBJ_ENCODING_LISTPACK) {\n            unsigned char *lp = o->ptr;\n\n            /* Save list listpack as a fake quicklist that only has a single node. */\n            if ((n = rdbSaveLen(rdb,1)) == -1) return -1;\n            nwritten += n;\n            if ((n = rdbSaveLen(rdb,QUICKLIST_NODE_CONTAINER_PACKED)) == -1) return -1;\n            nwritten += n;\n            if ((n = rdbSaveRawString(rdb,lp,lpBytes(lp))) == -1) return -1;\n            nwritten += n;\n        } else {\n            serverPanic(\"Unknown list encoding\");\n        }\n    } else if (o->type == OBJ_SET) {\n        /* Save a set value */\n        if (o->encoding == OBJ_ENCODING_HT) {\n            dict *set = o->ptr;\n            dictIterator di;\n            dictEntry *de;\n\n            if ((n = rdbSaveLen(rdb,dictSize(set))) == -1) {\n                return -1;\n            }\n            nwritten += n;\n\n            dictInitIterator(&di, set);\n            while((de = dictNext(&di)) != NULL) {\n                sds ele = dictGetKey(de);\n                if ((n = rdbSaveRawString(rdb,(unsigned char*)ele,sdslen(ele)))\n                    == -1)\n                {\n                    dictResetIterator(&di);\n                    return -1;\n                }\n                nwritten += n;\n            }\n            dictResetIterator(&di);\n        } else if (o->encoding == OBJ_ENCODING_INTSET) {\n            size_t l = intsetBlobLen((intset*)o->ptr);\n\n            if ((n = rdbSaveRawString(rdb,o->ptr,l)) == -1) return -1;\n            nwritten += n;\n        } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n            size_t l = lpBytes((unsigned char *)o->ptr);\n            if ((n = rdbSaveRawString(rdb, o->ptr, l)) == -1) return -1;\n            nwritten += n;\n        } else {\n            serverPanic(\"Unknown set encoding\");\n        }\n    } else if (o->type == OBJ_ZSET) {\n        /* Save a sorted set value */\n        if (o->encoding == OBJ_ENCODING_LISTPACK) {\n            
size_t l = lpBytes((unsigned char*)o->ptr);\n\n            if ((n = rdbSaveRawString(rdb,o->ptr,l)) == -1) return -1;\n            nwritten += n;\n        } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = o->ptr;\n            zskiplist *zsl = zs->zsl;\n\n            if ((n = rdbSaveLen(rdb,zsl->length)) == -1) return -1;\n            nwritten += n;\n\n            /* We save the skiplist elements from the greatest to the smallest\n             * (that's trivial since the elements are already ordered in the\n             * skiplist): this improves the load process, since the next loaded\n             * element will always be the smaller, so adding to the skiplist\n             * will always immediately stop at the head, making the insertion\n             * O(1) instead of O(log(N)). */\n            zskiplistNode *zn = zsl->tail;\n            while (zn != NULL) {\n                sds ele = zslGetNodeElement(zn);\n                if ((n = rdbSaveRawString(rdb,\n                    (unsigned char*)ele,sdslen(ele))) == -1)\n                {\n                    return -1;\n                }\n                nwritten += n;\n                if ((n = rdbSaveBinaryDoubleValue(rdb,zn->score)) == -1)\n                    return -1;\n                nwritten += n;\n                zn = zn->backward;\n            }\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else if (o->type == OBJ_HASH) {\n        /* Save a hash value */\n        if ((o->encoding == OBJ_ENCODING_LISTPACK) ||\n            (o->encoding == OBJ_ENCODING_LISTPACK_EX))\n        {\n            /* Save min/next HFE expiration time if needed */\n            if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n                uint64_t minExpire = hashTypeGetMinExpire(o, 0);\n                /* if invalid time then save 0 */\n                if (minExpire == EB_EXPIRE_TIME_INVALID)\n                    minExpire = 0;\n                if 
(rdbSaveMillisecondTime(rdb, minExpire) == -1)\n                    return -1;\n            }\n            unsigned char *lp_ptr = hashTypeListpackGetLp(o);\n            size_t l = lpBytes(lp_ptr);\n\n            if ((n = rdbSaveRawString(rdb,lp_ptr,l)) == -1) return -1;\n            nwritten += n;\n        } else if (o->encoding == OBJ_ENCODING_HT) {\n            int hashWithMeta = 0;  /* RDB_TYPE_HASH_METADATA */\n            dictIterator di;\n            dictEntry *de;\n            /* Determine the hash layout to use based on the presence of at least\n             * one field with a valid TTL. If such a field exists, employ the\n             * RDB_TYPE_HASH_METADATA layout, including tuples of [ttl][field][value].\n             * Otherwise, use the standard RDB_TYPE_HASH layout containing only\n             * the tuples [field][value]. */\n            uint64_t minExpire = hashTypeGetMinExpire(o, 1);\n\n            /* if RDB_TYPE_HASH_METADATA (Can have TTLs on fields) */\n            if (minExpire != EB_EXPIRE_TIME_INVALID) {\n                hashWithMeta = 1;\n                /* Save next field expire time of hash */\n                if (rdbSaveMillisecondTime(rdb, minExpire) == -1) {\n                    return -1;\n                }\n            }\n\n            /* save number of fields in hash */\n            if ((n = rdbSaveLen(rdb,dictSize((dict*)o->ptr))) == -1) {\n                return -1;\n            }\n            nwritten += n;\n\n            /* save all hash fields */\n            dictInitIterator(&di, o->ptr);\n            while((de = dictNext(&di)) != NULL) {\n                Entry *entry = dictGetKey(de);\n                sds field = entryGetField(entry);\n                sds value = entryGetValue(entry);\n\n                /* save the TTL */\n                if (hashWithMeta) {\n                    uint64_t ttl, expiryTime= entryGetExpiry(entry);\n\n                    /* Saved TTL value:\n                     *  - 0: Indicates no TTL. 
This is the common case, so we keep it small.\n                     *  - Otherwise: TTL is relative to minExpire (with +1 to avoid 0, which is already taken)\n                     */\n                    ttl = (expiryTime == EB_EXPIRE_TIME_INVALID) ? 0 : expiryTime - minExpire + 1;\n                    if ((n = rdbSaveLen(rdb, ttl)) == -1) {\n                        dictResetIterator(&di);\n                        return -1;\n                    }\n                    nwritten += n;\n                }\n\n                /* save the field */\n                if ((n = rdbSaveRawString(rdb,(unsigned char*)field,\n                        sdslen(field))) == -1)\n                {\n                    dictResetIterator(&di);\n                    return -1;\n                }\n                nwritten += n;\n\n                /* save the value */\n                if ((n = rdbSaveRawString(rdb,(unsigned char*)value,\n                        sdslen(value))) == -1)\n                {\n                    dictResetIterator(&di);\n                    return -1;\n                }\n                nwritten += n;\n            }\n            dictResetIterator(&di);\n        } else {\n            serverPanic(\"Unknown hash encoding\");\n        }\n    } else if (o->type == OBJ_STREAM) {\n        /* Store how many listpacks we have inside the radix tree. */\n        stream *s = o->ptr;\n        rax *rax = s->rax;\n        if ((n = rdbSaveLen(rdb,raxSize(rax))) == -1) return -1;\n        nwritten += n;\n\n        /* Serialize all the listpacks inside the radix tree as they are;\n         * when loading back, we'll use the first entry of each listpack\n         * to insert it back into the radix tree. 
*/\n        raxIterator ri;\n        raxStart(&ri,rax);\n        raxSeek(&ri,\"^\",NULL,0);\n        while (raxNext(&ri)) {\n            unsigned char *lp = ri.data;\n            size_t lp_bytes = lpBytes(lp);\n            if ((n = rdbSaveRawString(rdb,ri.key,ri.key_len)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n            if ((n = rdbSaveRawString(rdb,lp,lp_bytes)) == -1) {\n                raxStop(&ri);\n                return -1;\n            }\n            nwritten += n;\n        }\n        raxStop(&ri);\n\n        /* Save the number of elements inside the stream. We cannot obtain\n         * this easily later, since our macro nodes should be checked for\n         * number of items: not a great CPU / space tradeoff. */\n        if ((n = rdbSaveLen(rdb,s->length)) == -1) return -1;\n        nwritten += n;\n        /* Save the last entry ID. */\n        if ((n = rdbSaveLen(rdb,s->last_id.ms)) == -1) return -1;\n        nwritten += n;\n        if ((n = rdbSaveLen(rdb,s->last_id.seq)) == -1) return -1;\n        nwritten += n;\n        /* Save the first entry ID. */\n        if ((n = rdbSaveLen(rdb,s->first_id.ms)) == -1) return -1;\n        nwritten += n;\n        if ((n = rdbSaveLen(rdb,s->first_id.seq)) == -1) return -1;\n        nwritten += n;\n        /* Save the maximal tombstone ID. */\n        if ((n = rdbSaveLen(rdb,s->max_deleted_entry_id.ms)) == -1) return -1;\n        nwritten += n;\n        if ((n = rdbSaveLen(rdb,s->max_deleted_entry_id.seq)) == -1) return -1;\n        nwritten += n;\n        /* Save the offset. */\n        if ((n = rdbSaveLen(rdb,s->entries_added)) == -1) return -1;\n        nwritten += n;\n\n        /* The consumer groups and their clients are part of the stream\n         * type, so serialize every consumer group. */\n\n        /* Save the number of groups. */\n        size_t num_cgroups = s->cgroups ? 
raxSize(s->cgroups) : 0;\n        if ((n = rdbSaveLen(rdb,num_cgroups)) == -1) return -1;\n        nwritten += n;\n\n        if (num_cgroups) {\n            /* Serialize each consumer group. */\n            raxStart(&ri,s->cgroups);\n            raxSeek(&ri,\"^\",NULL,0);\n            while(raxNext(&ri)) {\n                streamCG *cg = ri.data;\n\n                /* Save the group name. */\n                if ((n = rdbSaveRawString(rdb,ri.key,ri.key_len)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n\n                /* Last ID. */\n                if ((n = rdbSaveLen(rdb,cg->last_id.ms)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n                if ((n = rdbSaveLen(rdb,cg->last_id.seq)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n                \n                /* Save the group's logical reads counter. */\n                if ((n = rdbSaveLen(rdb,cg->entries_read)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n\n                /* Save the global PEL. */\n                if ((n = rdbSaveStreamPEL(rdb,cg->pel,1)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n\n                /* Save the consumers of this group. */\n                if ((n = rdbSaveStreamConsumers(rdb,cg)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n\n                /* Save NACK zone: count followed by the IDs of NACKed entries. 
*/\n                uint64_t nacked_count = pelListNackedCount(cg);\n                if ((n = rdbSaveLen(rdb, nacked_count)) == -1) {\n                    raxStop(&ri);\n                    return -1;\n                }\n                nwritten += n;\n\n                if (cg->pel_nack_tail) {\n                    streamNACK *nack = cg->pel_time_head;\n                    while (nack) {\n                        unsigned char buf[sizeof(streamID)];\n                        streamEncodeID(buf, &nack->id);\n                        if ((n = rdbWriteRaw(rdb, buf, sizeof(buf))) == -1) {\n                            raxStop(&ri);\n                            return -1;\n                        }\n                        nwritten += n;\n                        if (nack == cg->pel_nack_tail) break;\n                        nack = nack->pel_next;\n                    }\n                }\n            }\n            raxStop(&ri);\n        }\n\n        /* Save IDMP (Idempotent Message Producer) configuration and entries. */\n        \n        /* Save IDMP duration (in seconds). */\n        if ((n = rdbSaveLen(rdb,s->idmp_duration)) == -1) return -1;\n        nwritten += n;\n        \n        /* Save IDMP max entries. */\n        if ((n = rdbSaveLen(rdb,s->idmp_max_entries)) == -1) return -1;\n        nwritten += n;\n        \n        /* Save all IDMP entries. */\n        if ((n = rdbSaveStreamIdmpEntries(rdb,s)) == -1) return -1;\n        nwritten += n;\n\n        /* Save the all-time count of IIDs added. */\n        if ((n = rdbSaveLen(rdb,s->iids_added)) == -1) return -1;\n        nwritten += n;\n\n        /* Save the all-time count of duplicate IIDs detected. 
*/\n        if ((n = rdbSaveLen(rdb,s->iids_duplicates)) == -1) return -1;\n        nwritten += n;\n    } else if (o->type == OBJ_GCRA) {\n        long long t;\n        getLongLongFromGCRAObject(o, &t);\n        if ((n = rdbSaveLen(rdb,t)) == -1) return -1;\n        nwritten += n;\n    } else if (o->type == OBJ_MODULE) {\n        /* Save a module-specific value. */\n        RedisModuleIO io;\n        moduleValue *mv = o->ptr;\n        moduleType *mt = mv->type;\n\n        /* Write the \"module\" identifier as prefix, so that we'll be able\n         * to call the right module during loading. */\n        int retval = rdbSaveLen(rdb,mt->entity.id);\n        if (retval == -1) return -1;\n        moduleInitIOContext(&io, &mt->entity, rdb, key, dbid);\n        io.bytes += retval;\n\n        /* Then write the module-specific representation + EOF marker. */\n        mt->rdb_save(&io,mv->value);\n        retval = rdbSaveLen(rdb,RDB_MODULE_OPCODE_EOF);\n        if (retval == -1)\n            io.error = 1;\n        else\n            io.bytes += retval;\n\n        if (io.ctx) {\n            moduleFreeContext(io.ctx);\n            zfree(io.ctx);\n        }\n        return io.error ? -1 : (ssize_t)io.bytes;\n    } else {\n        serverPanic(\"Unknown object type\");\n    }\n    return nwritten;\n}\n\n/* Return the length the object will have on disk if saved with\n * the rdbSaveObject() function. Currently we use a trick to get\n * this length with very little changes to the code. In the future\n * we could switch to a faster solution. */\nsize_t rdbSavedObjectLen(robj *o, robj *key, int dbid) {\n    ssize_t len = rdbSaveObject(NULL,o,key,dbid);\n    serverAssertWithInfo(NULL,o,len != -1);\n    return len;\n}\n\n/* Save a key-value pair, with expire time, type, key, value.\n * On error -1 is returned.\n * On success if the key was actually saved 1 is returned. 
*/\nint rdbSaveKeyValuePair(rio *rdb, robj *key, robj *val, long long expiretime, int dbid) {\n    int savelru = server.maxmemory_policy & MAXMEMORY_FLAG_LRU;\n    int savelfu = server.maxmemory_policy & MAXMEMORY_FLAG_LFU;\n\n    /* Save the expire time */\n    if (expiretime != -1) {\n        if (rdbSaveType(rdb,RDB_OPCODE_EXPIRETIME_MS) == -1) return -1;\n        if (rdbSaveMillisecondTime(rdb,expiretime) == -1) return -1;\n    }\n\n    /* Save the LRU info. */\n    if (savelru) {\n        uint64_t idletime = estimateObjectIdleTime(val);\n        idletime /= 1000; /* Using seconds is enough and requires less space.*/\n        if (rdbSaveType(rdb,RDB_OPCODE_IDLE) == -1) return -1;\n        if (rdbSaveLen(rdb,idletime) == -1) return -1;\n    }\n\n    /* Save the LFU info. */\n    if (savelfu) {\n        uint8_t buf[1];\n        buf[0] = LFUDecrAndReturn(val);\n        /* We can encode this in exactly two bytes: the opcode and an 8\n         * bit counter, since the frequency is logarithmic with a 0-255 range.\n         * Note that we do not store the halving time because to reset it\n         * a single time when loading does not affect the frequency much. */\n        if (rdbSaveType(rdb,RDB_OPCODE_FREQ) == -1) return -1;\n        if (rdbWriteRaw(rdb,buf,1) == -1) return -1;\n    }\n\n    /* if needed save key metadata  */\n    if (getModuleMetaBits(val->metabits)) {\n        if (rdbSaveKeyMetadata(rdb, key, val, dbid) == -1)\n            return -1;\n    }\n\n    /* Save type, key, value */\n    if (rdbSaveObjectType(rdb,val) == -1) return -1;\n    if (rdbSaveStringObject(rdb,key) == -1) return -1;\n    if (rdbSaveObject(rdb,val,key,dbid) == -1) return -1;\n\n    /* Delay return if required (for testing) */\n    if (server.rdb_key_save_delay)\n        debugDelay(server.rdb_key_save_delay);\n\n    return 1;\n}\n\n/* Save an AUX field. 
*/\nssize_t rdbSaveAuxField(rio *rdb, void *key, size_t keylen, void *val, size_t vallen) {\n    ssize_t ret, len = 0;\n    if ((ret = rdbSaveType(rdb,RDB_OPCODE_AUX)) == -1) return -1;\n    len += ret;\n    if ((ret = rdbSaveRawString(rdb,key,keylen)) == -1) return -1;\n    len += ret;\n    if ((ret = rdbSaveRawString(rdb,val,vallen)) == -1) return -1;\n    len += ret;\n    return len;\n}\n\n/* Wrapper for rdbSaveAuxField() used when key/val length can be obtained\n * with strlen(). */\nssize_t rdbSaveAuxFieldStrStr(rio *rdb, char *key, char *val) {\n    return rdbSaveAuxField(rdb,key,strlen(key),val,strlen(val));\n}\n\n/* Wrapper for strlen(key) + integer type (up to long long range). */\nssize_t rdbSaveAuxFieldStrInt(rio *rdb, char *key, long long val) {\n    char buf[LONG_STR_SIZE];\n    int vlen = ll2string(buf,sizeof(buf),val);\n    return rdbSaveAuxField(rdb,key,strlen(key),buf,vlen);\n}\n\n/* Save a few default AUX fields with information about the RDB generated. */\nint rdbSaveInfoAuxFields(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {\n    int redis_bits = (sizeof(void*) == 8) ? 64 : 32;\n    int aof_base = (rdbflags & RDBFLAGS_AOF_PREAMBLE) != 0;\n\n    /* Add a few fields about the state when the RDB was created. */\n    if (rdbSaveAuxFieldStrStr(rdb,\"redis-ver\",REDIS_VERSION) == -1) return -1;\n    if (rdbSaveAuxFieldStrInt(rdb,\"redis-bits\",redis_bits) == -1) return -1;\n    if (rdbSaveAuxFieldStrInt(rdb,\"ctime\",time(NULL)) == -1) return -1;\n    if (rdbSaveAuxFieldStrInt(rdb,\"used-mem\",zmalloc_used_memory()) == -1) return -1;\n\n    /* Handle saving options that generate aux fields. 
*/\n    if (rsi) {\n        if (rdbSaveAuxFieldStrInt(rdb,\"repl-stream-db\",rsi->repl_stream_db)\n            == -1) return -1;\n        if (rdbSaveAuxFieldStrStr(rdb,\"repl-id\",server.replid)\n            == -1) return -1;\n        if (rdbSaveAuxFieldStrInt(rdb,\"repl-offset\",server.master_repl_offset)\n            == -1) return -1;\n    }\n    if (rdbSaveAuxFieldStrInt(rdb, \"aof-base\", aof_base) == -1) return -1;\n\n    /* Save the active import ASM task if cluster is enabled. */\n    if (server.cluster_enabled) {\n        sds task_info = asmDumpActiveImportTask();\n        int ret = rdbSaveAuxFieldStrStr(rdb, \"cluster-asm-task\",\n                                        task_info ? task_info : \"\");\n        if (task_info) sdsfree(task_info);\n        if (ret == -1) return -1;\n    }\n\n    return 1;\n}\n\nssize_t rdbSaveSingleModuleAux(rio *rdb, int when, moduleType *mt) {\n    /* Save a module-specific aux value. */\n    RedisModuleIO io;\n    int retval = 0;\n    moduleInitIOContext(&io, &mt->entity, rdb, NULL, -1);\n\n    /* We save the AUX field header in a temporary buffer so we can support the aux_save2 API.\n     * If aux_save2 is used, the buffer will be flushed the first time the module performs\n     * a write operation to the RDB, and will be discarded in case there were no writes. */\n    rio aux_save_headers_rio;\n    rioInitWithBuffer(&aux_save_headers_rio, sdsempty());\n\n    if (rdbSaveType(&aux_save_headers_rio, RDB_OPCODE_MODULE_AUX) == -1) goto error;\n\n    /* Write the \"module\" identifier as prefix, so that we'll be able\n     * to call the right module during loading. */\n    if (rdbSaveLen(&aux_save_headers_rio,mt->entity.id) == -1) goto error;\n\n    /* Write the 'when' so that we can provide it on loading. Add a UINT opcode\n     * for backwards compatibility: everything after the MT needs to be prefixed\n     * by an opcode. 
*/\n    if (rdbSaveLen(&aux_save_headers_rio,RDB_MODULE_OPCODE_UINT) == -1) goto error;\n    if (rdbSaveLen(&aux_save_headers_rio,when) == -1) goto error;\n\n    /* Then write the module-specific representation + EOF marker. */\n    if (mt->aux_save2) {\n        io.pre_flush_buffer = aux_save_headers_rio.io.buffer.ptr;\n        mt->aux_save2(&io,when);\n        if (io.pre_flush_buffer) {\n            /* aux_save2 did not save any data to the RDB.\n             * We will avoid saving any data related to this aux type\n             * to allow loading this RDB if the module is not present. */\n            sdsfree(io.pre_flush_buffer);\n            io.pre_flush_buffer = NULL;\n            return 0;\n        }\n    } else {\n        /* Write headers now, aux_save does not do lazy saving of the headers. */\n        retval = rdbWriteRaw(rdb, aux_save_headers_rio.io.buffer.ptr, sdslen(aux_save_headers_rio.io.buffer.ptr));\n        if (retval == -1) goto error;\n        io.bytes += retval;\n        sdsfree(aux_save_headers_rio.io.buffer.ptr);\n        mt->aux_save(&io,when);\n    }\n    retval = rdbSaveLen(rdb,RDB_MODULE_OPCODE_EOF);\n    serverAssert(!io.pre_flush_buffer);\n    if (retval == -1)\n        io.error = 1;\n    else\n        io.bytes += retval;\n\n    if (io.ctx) {\n        moduleFreeContext(io.ctx);\n        zfree(io.ctx);\n    }\n    if (io.error)\n        return -1;\n    return io.bytes;\nerror:\n    sdsfree(aux_save_headers_rio.io.buffer.ptr);\n    return -1;\n}\n\nssize_t rdbSaveFunctions(rio *rdb) {\n    dict *functions = functionsLibGet();\n    dictIterator iter;\n    dictEntry *entry = NULL;\n    ssize_t written = 0;\n    ssize_t ret;\n\n    dictInitIterator(&iter, functions);\n    while ((entry = dictNext(&iter))) {\n        if ((ret = rdbSaveType(rdb, RDB_OPCODE_FUNCTION2)) < 0) goto werr;\n        written += ret;\n        functionLibInfo *li = dictGetVal(entry);\n        if ((ret = rdbSaveRawString(rdb, (unsigned char *) li->code, sdslen(li->code))) < 
0) goto werr;\n        written += ret;\n    }\n    dictResetIterator(&iter);\n    return written;\n\nwerr:\n    dictResetIterator(&iter);\n    return -1;\n}\n\nssize_t rdbSaveDb(rio *rdb, int dbid, int rdbflags, long *key_counter, unsigned long long *skipped) {\n    dictEntry *de;\n    ssize_t written = 0;\n    ssize_t res;\n    size_t oldsize = 0;\n    kvstoreIterator kvs_it;\n    static long long info_updated_time = 0;\n    char *pname = (rdbflags & RDBFLAGS_AOF_PREAMBLE) ? \"AOF rewrite\" :  \"RDB\";\n\n    redisDb *db = server.db + dbid;\n    unsigned long long int db_size = kvstoreSize(db->keys);\n    if (db_size == 0) return 0;\n\n    /* Write the SELECT DB opcode */\n    if ((res = rdbSaveType(rdb,RDB_OPCODE_SELECTDB)) < 0) goto werr;\n    written += res;\n    if ((res = rdbSaveLen(rdb, dbid)) < 0) goto werr;\n    written += res;\n\n    /* Write the RESIZE DB opcode. */\n    unsigned long long expires_size = kvstoreSize(db->expires);\n    if ((res = rdbSaveType(rdb,RDB_OPCODE_RESIZEDB)) < 0) goto werr;\n    written += res;\n    if ((res = rdbSaveLen(rdb,db_size)) < 0) goto werr;\n    written += res;\n    if ((res = rdbSaveLen(rdb,expires_size)) < 0) goto werr;\n    written += res;\n\n    kvstoreIteratorInit(&kvs_it, db->keys);\n    int last_slot = -1;\n    /* Iterate this DB writing every entry */\n    while ((de = kvstoreIteratorNext(&kvs_it)) != NULL) {\n        int curr_slot = kvstoreIteratorGetCurrentDictIndex(&kvs_it);\n        /* Save slot info. 
*/\n        if (server.cluster_enabled && curr_slot != last_slot) {\n            if ((res = rdbSaveType(rdb, RDB_OPCODE_SLOT_INFO)) < 0) goto werr2;\n            written += res;\n            if ((res = rdbSaveLen(rdb, curr_slot)) < 0) goto werr2;\n            written += res;\n            if ((res = rdbSaveLen(rdb, kvstoreDictSize(db->keys, curr_slot))) < 0) goto werr2;\n            written += res;\n            if ((res = rdbSaveLen(rdb, kvstoreDictSize(db->expires, curr_slot))) < 0) goto werr2;\n            written += res;\n            /* Dismiss bucket arrays of the previous slot to reduce CoW.\n             * The final slot is not dismissed since the child exits shortly after. */\n            if (server.in_fork_child && last_slot != -1)\n                dismissDictBucketsMemory(kvstoreGetDict(db->keys, last_slot));\n            last_slot = curr_slot;\n        }\n        kvobj *kv = dictGetKV(de);\n        robj key;\n        long long expire;\n        size_t rdb_bytes_before_key = rdb->processed_bytes;\n\n        /* Skip keys that are being trimmed */\n        if (server.cluster_enabled && isSlotInTrimJob(curr_slot)) {\n            (*skipped)++;\n            continue;\n        }\n\n        initStaticStringObject(key,kvobjGetKey(kv));\n        expire = kvobjGetExpire(kv);\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(kv);\n        res = rdbSaveKeyValuePair(rdb, &key, kv, expire, dbid);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(db, curr_slot, kv, oldsize, kvobjAllocSize(kv));\n        if (res < 0) goto werr2;\n        written += res;\n\n        /* In fork child process, we can try to release memory back to the\n         * OS and possibly avoid or decrease COW. We give the dismiss\n         * mechanism a hint about an estimated size of the object we stored. 
*/\n        size_t dump_size = rdb->processed_bytes - rdb_bytes_before_key;\n        if (server.in_fork_child && dump_size > server.page_size/2)\n            dismissObject(kv, dump_size);\n\n        /* Update child info every 1 second (approximately).\n         * In order to avoid calling mstime() on each iteration, we\n         * check the diff every 1024 keys. */\n        if (((*key_counter)++ & 1023) == 0) {\n            long long now = mstime();\n            if (now - info_updated_time >= 1000) {\n                sendChildInfo(CHILD_INFO_TYPE_CURRENT_INFO, *key_counter, pname);\n                info_updated_time = now;\n            }\n        }\n    }\n    kvstoreIteratorReset(&kvs_it);\n    return written;\n\nwerr2:\n    kvstoreIteratorReset(&kvs_it);\nwerr:\n    return -1;\n}\n\n/* Produces a dump of the database in RDB format sending it to the specified\n * Redis I/O channel. On success C_OK is returned, otherwise C_ERR\n * is returned and part of the output, or all the output, can be\n * missing because of I/O errors.\n *\n * When the function returns C_ERR and if 'error' is not NULL, the\n * integer pointed by 'error' is set to the value of errno just after the I/O\n * error. 
*/\nint rdbSaveRio(int req, rio *rdb, int *error, int rdbflags, rdbSaveInfo *rsi) {\n    char magic[10];\n    uint64_t cksum;\n    long key_counter = 0;\n    unsigned long long skipped = 0;\n    int j;\n\n    if (server.rdb_checksum)\n        rdb->update_cksum = rioGenericUpdateChecksum;\n    snprintf(magic,sizeof(magic),\"REDIS%04d\",RDB_VERSION);\n    if (rdbWriteRaw(rdb,magic,9) == -1) goto werr;\n    if (rdbSaveInfoAuxFields(rdb,rdbflags,rsi) == -1) goto werr;\n    if (!(req & SLAVE_REQ_RDB_EXCLUDE_DATA) && rdbSaveModulesAux(rdb, REDISMODULE_AUX_BEFORE_RDB) == -1) goto werr;\n\n    /* save functions */\n    if (!(req & SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS) && rdbSaveFunctions(rdb) == -1) goto werr;\n\n    /* save all databases, skip this if we're in functions-only mode */\n    if (!(req & SLAVE_REQ_RDB_EXCLUDE_DATA)) {\n        for (j = 0; j < server.dbnum; j++) {\n            if (rdbSaveDb(rdb, j, rdbflags, &key_counter, &skipped) == -1) goto werr;\n            /* In standalone mode, dismiss bucket arrays of the saved DB's\n             * kvstore to reduce CoW. In cluster mode this is done per-slot. */\n            if (server.in_fork_child && !server.cluster_enabled)\n                dismissKvstoreBucketsMemory(server.db[j].keys);\n        }\n    }\n\n    if (!(req & SLAVE_REQ_RDB_EXCLUDE_DATA) && rdbSaveModulesAux(rdb, REDISMODULE_AUX_AFTER_RDB) == -1) goto werr;\n\n    /* EOF opcode */\n    if (rdbSaveType(rdb,RDB_OPCODE_EOF) == -1) goto werr;\n\n    /* CRC64 checksum. It will be zero if checksum computation is disabled, the\n     * loading code skips the check in this case. 
*/\n    cksum = rdb->cksum;\n    memrev64ifbe(&cksum);\n    if (rioWrite(rdb,&cksum,8) == 0) goto werr;\n    serverLog(LL_NOTICE, \"BGSAVE done, %ld keys saved, %llu keys skipped, %zu bytes written.\", key_counter, skipped, rdb->processed_bytes);\n    return C_OK;\n\nwerr:\n    if (error) *error = errno;\n    return C_ERR;\n}\n\n/* This helper function is only used for diskless replication.\n * This is just a wrapper to rdbSaveRio() that additionally adds a prefix\n * and a suffix to the generated RDB dump. The prefix is:\n *\n * $EOF:<40 bytes unguessable hex string>\\r\\n\n *\n * While the suffix is the 40 bytes hex string we announced in the prefix.\n * This way processes receiving the payload can understand when it ends\n * without doing any processing of the content. */\nint rdbSaveRioWithEOFMark(int req, rio *rdb, int *error, rdbSaveInfo *rsi) {\n    char eofmark[RDB_EOF_MARK_SIZE];\n\n    startSaving(RDBFLAGS_REPLICATION);\n    getRandomHexChars(eofmark,RDB_EOF_MARK_SIZE);\n    if (error) *error = 0;\n    if (rioWrite(rdb,\"$EOF:\",5) == 0) goto werr;\n    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;\n    if (rioWrite(rdb,\"\\r\\n\",2) == 0) goto werr;\n    if (rdbSaveRio(req,rdb,error,RDBFLAGS_REPLICATION,rsi) == C_ERR) goto werr;\n    if (rioWrite(rdb,eofmark,RDB_EOF_MARK_SIZE) == 0) goto werr;\n    stopSaving(1);\n    return C_OK;\n\nwerr: /* Write error. */\n    /* Set 'error' only if not already set by rdbSaveRio() call. */\n    if (error && *error == 0) *error = errno;\n    stopSaving(0);\n    return C_ERR;\n}\n\nstatic int rdbSaveInternal(int req, const char *filename, rdbSaveInfo *rsi, int rdbflags) {\n    char cwd[MAXPATHLEN]; /* Current working dir path for error messages. 
*/\n    rio rdb;\n    int error = 0;\n    int saved_errno;\n    char *err_op;    /* For a detailed log */\n\n    FILE *fp = fopen(filename,\"w\");\n    if (!fp) {\n        saved_errno = errno;\n        char *str_err = strerror(errno);\n        char *cwdp = getcwd(cwd,MAXPATHLEN);\n        serverLog(LL_WARNING,\n            \"Failed opening the temp RDB file %s (in server root dir %s) \"\n            \"for saving: %s\",\n            filename,\n            cwdp ? cwdp : \"unknown\",\n            str_err);\n        errno = saved_errno;\n        return C_ERR;\n    }\n\n    rioInitWithFile(&rdb,fp);\n\n    if (server.rdb_save_incremental_fsync) {\n        rioSetAutoSync(&rdb,REDIS_AUTOSYNC_BYTES);\n        if (!(rdbflags & RDBFLAGS_KEEP_CACHE)) rioSetReclaimCache(&rdb,1);\n    }\n\n    if (rdbSaveRio(req,&rdb,&error,rdbflags,rsi) == C_ERR) {\n        errno = error;\n        err_op = \"rdbSaveRio\";\n        goto werr;\n    }\n\n    /* Make sure data will not remain on the OS's output buffers */\n    if (fflush(fp)) { err_op = \"fflush\"; goto werr; }\n    if (fsync(fileno(fp))) { err_op = \"fsync\"; goto werr; }\n    if (!(rdbflags & RDBFLAGS_KEEP_CACHE) && reclaimFilePageCache(fileno(fp), 0, 0) == -1) {\n        serverLog(LL_NOTICE,\"Unable to reclaim cache after saving RDB: %s\", strerror(errno));\n    }\n    if (fclose(fp)) { fp = NULL; err_op = \"fclose\"; goto werr; }\n\n    return C_OK;\n\nwerr:\n    saved_errno = errno;\n    serverLog(LL_WARNING,\"Write error while saving DB to the disk (%s): %s\", err_op, strerror(errno));\n    if (fp) fclose(fp);\n    unlink(filename);\n    errno = saved_errno;\n    return C_ERR;\n}\n\n/* Save DB to the file. Similar to rdbSave() but this function won't use a\n * temporary file and won't update the metrics. 
*/\nint rdbSaveToFile(const char *filename) {\n    startSaving(RDBFLAGS_NONE);\n\n    if (rdbSaveInternal(SLAVE_REQ_NONE,filename,NULL,RDBFLAGS_NONE) != C_OK) {\n        int saved_errno = errno;\n        stopSaving(0);\n        errno = saved_errno;\n        return C_ERR;\n    }\n\n    stopSaving(1);\n    return C_OK;\n}\n\n/* Save the DB on disk. Return C_ERR on error, C_OK on success. */\nint rdbSave(int req, char *filename, rdbSaveInfo *rsi, int rdbflags) {\n    char tmpfile[256];\n    char cwd[MAXPATHLEN]; /* Current working dir path for error messages. */\n\n    startSaving(rdbflags);\n    snprintf(tmpfile,256,\"temp-%d.rdb\", (int) getpid());\n\n    if (rdbSaveInternal(req,tmpfile,rsi,rdbflags) != C_OK) {\n        stopSaving(0);\n        return C_ERR;\n    }\n\n    /* Use RENAME to make sure the DB file is changed atomically only\n     * if the generated DB file is ok. */\n    if (rename(tmpfile,filename) == -1) {\n        char *str_err = strerror(errno);\n        char *cwdp = getcwd(cwd,MAXPATHLEN);\n        serverLog(LL_WARNING,\n            \"Error moving temp DB file %s on the final \"\n            \"destination %s (in server root dir %s): %s\",\n            tmpfile,\n            filename,\n            cwdp ? 
cwdp : \"unknown\",\n            str_err);\n        unlink(tmpfile);\n        stopSaving(0);\n        return C_ERR;\n    }\n    if (fsyncFileDir(filename) != 0) {\n        serverLog(LL_WARNING,\n            \"Failed to fsync directory while saving DB: %s\", strerror(errno));\n        stopSaving(0);\n        return C_ERR;\n    }\n\n    serverLog(LL_NOTICE,\"DB saved on disk\");\n    server.dirty = 0;\n    server.lastsave = time(NULL);\n    server.lastbgsave_status = C_OK;\n    stopSaving(1);\n    return C_OK;\n}\n\nint rdbSaveBackground(int req, char *filename, rdbSaveInfo *rsi, int rdbflags) {\n    pid_t childpid;\n\n    if (hasActiveChildProcess()) return C_ERR;\n    server.stat_rdb_saves++;\n\n    server.dirty_before_bgsave = server.dirty;\n    server.lastbgsave_try = time(NULL);\n\n    if ((childpid = redisFork(CHILD_TYPE_RDB)) == 0) {\n        int retval;\n\n        /* Child */\n        redisSetProcTitle(\"redis-rdb-bgsave\");\n        redisSetCpuAffinity(server.bgsave_cpulist);\n        retval = rdbSave(req, filename,rsi,rdbflags);\n        if (retval == C_OK) {\n            sendChildCowInfo(CHILD_INFO_TYPE_RDB_COW_SIZE, \"RDB\");\n        }\n        exitFromChild((retval == C_OK) ? 0 : 1, 0);\n    } else {\n        /* Parent */\n        if (childpid == -1) {\n            server.lastbgsave_status = C_ERR;\n            serverLog(LL_WARNING,\"Can't save in background: fork: %s\",\n                strerror(errno));\n            return C_ERR;\n        }\n        serverLog(LL_NOTICE,\"Background saving started by pid %ld\",(long) childpid);\n        server.rdb_save_time_start = time(NULL);\n        server.rdb_child_type = RDB_CHILD_TYPE_DISK;\n        return C_OK;\n    }\n    return C_OK; /* unreached */\n}\n\n/* Note that we may call this function from the signal handler 'sigShutdownHandler',\n * so we need to guarantee that all functions we call are async-signal-safe.\n * If we call this function from a signal handler, we won't call bg_unlink, which\n * is not async-signal-safe. 
*/\nvoid rdbRemoveTempFile(pid_t childpid, int from_signal) {\n    char tmpfile[256];\n    char pid[32];\n\n    /* Generate the temp rdb file name using async-signal-safe functions. */\n    ll2string(pid, sizeof(pid), childpid);\n    redis_strlcpy(tmpfile, \"temp-\", sizeof(tmpfile));\n    redis_strlcat(tmpfile, pid, sizeof(tmpfile));\n    redis_strlcat(tmpfile, \".rdb\", sizeof(tmpfile));\n\n    if (from_signal) {\n        /* bg_unlink is not async-signal-safe, but in this case we don't really\n         * need to close the fd, it'll be released when the process exits. */\n        int fd = open(tmpfile, O_RDONLY|O_NONBLOCK);\n        UNUSED(fd);\n        unlink(tmpfile);\n    } else {\n        bg_unlink(tmpfile);\n    }\n}\n\n/* This function is called by rdbLoadObject() when the code is in RDB-check\n * mode and we find a module value of type 2 that can be parsed without\n * the need of the actual module. The value is parsed for errors.\n * If null_on_error is true, NULL is returned when data corruption is detected;\n * otherwise a dummy redis object is always returned regardless of success or\n * failure. 
*/\nrobj *rdbLoadCheckModuleValue(rio *rdb, char *modulename, int null_on_error) {\n    uint64_t opcode;\n    while((opcode = rdbLoadLen(rdb,NULL)) != RDB_MODULE_OPCODE_EOF) {\n        if (opcode == RDB_LENERR) {\n            rdbReportCorruptRDB(\"Error reading module opcode length from module %s value\", modulename);\n            goto error;\n        }\n\n        if (opcode == RDB_MODULE_OPCODE_SINT ||\n            opcode == RDB_MODULE_OPCODE_UINT)\n        {\n            uint64_t len;\n            if (rdbLoadLenByRef(rdb,NULL,&len) == -1) {\n                rdbReportCorruptRDB(\n                    \"Error reading integer from module %s value\", modulename);\n                goto error;\n            }\n        } else if (opcode == RDB_MODULE_OPCODE_STRING) {\n            robj *o = rdbGenericLoadStringObject(rdb,RDB_LOAD_NONE,NULL);\n            if (o == NULL) {\n                rdbReportCorruptRDB(\n                    \"Error reading string from module %s value\", modulename);\n                goto error;\n            }\n            decrRefCount(o);\n        } else if (opcode == RDB_MODULE_OPCODE_FLOAT) {\n            float val;\n            if (rdbLoadBinaryFloatValue(rdb,&val) == -1) {\n                rdbReportCorruptRDB(\n                    \"Error reading float from module %s value\", modulename);\n                goto error;\n            }\n        } else if (opcode == RDB_MODULE_OPCODE_DOUBLE) {\n            double val;\n            if (rdbLoadBinaryDoubleValue(rdb,&val) == -1) {\n                rdbReportCorruptRDB(\n                    \"Error reading double from module %s value\", modulename);\n                goto error;\n            }\n        } else {\n            rdbReportCorruptRDB(\n                \"Unknown module opcode %llu reading module %s value\", (unsigned long long)opcode, modulename);\n            goto error;\n        }\n    }\n    return createStringObject(\"module-dummy-value\",18);\nerror:\n    return null_on_error ? 
NULL : createStringObject(\"module-dummy-value\",18);\n}\n\n/* Load object type and optional key metadata (into `keymeta`) from RDB stream.\n * This function handles the RDB_OPCODE_KEY_META opcode that may appear before\n * the actual object type in RDB streams (both regular RDB files and DUMP payloads).\n * The `type` parameter is updated with the actual object type.\n * \n * Returns: 0 on success, -1 on error\n */\nint rdbResolveKeyType(rio *rdb, int *type, int dbid, KeyMetaSpec *keymeta) {\n    if (*type == RDB_OPCODE_KEY_META) {\n        /* Load key metadata from RDB */\n        uint64_t numClasses;\n        if ((numClasses = rdbLoadLen(rdb, NULL)) == RDB_LENERR) {\n            return -1;\n        }\n        if (rdbLoadKeyMetadata(rdb, dbid, numClasses, keymeta) == -1) {\n            return -1;\n        }\n        /* Read the actual object type after metadata */\n        *type = rdbLoadObjectType(rdb);\n        if (*type == -1) {\n            keyMetaSpecCleanup(keymeta);\n            return -1;\n        }\n    } else if (!rdbIsObjectType(*type)) {\n        /* Not metadata and not a valid object type */\n        return -1;\n    }\n\n    return 0;\n}\n\n/* callback for hashZiplistConvertAndValidateIntegrity.\n * Check that the ziplist doesn't have duplicate hash field names.\n * The ziplist element pointed by 'p' will be converted and stored into listpack. 
*/\nstatic int _ziplistPairsEntryConvertAndValidate(unsigned char *p, unsigned int head_count, void *userdata) {\n    unsigned char *str;\n    unsigned int slen;\n    long long vll;\n\n    struct {\n        long count;\n        dict *fields;\n        unsigned char **lp;\n    } *data = userdata;\n\n    if (data->fields == NULL) {\n        data->fields = dictCreate(&hashDictType);\n        dictExpand(data->fields, head_count/2);\n    }\n\n    if (!ziplistGet(p, &str, &slen, &vll))\n        return 0;\n\n    /* Even records are field names, add to dict and check it's not a dup */\n    if (((data->count) & 1) == 0) {\n        sds field = str? sdsnewlen(str, slen): sdsfromlonglong(vll);\n        if (dictAdd(data->fields, field, NULL) != DICT_OK) {\n            /* Duplicate, return an error */\n            sdsfree(field);\n            return 0;\n        }\n    }\n\n    if (str) {\n        *(data->lp) = lpAppend(*(data->lp), (unsigned char*)str, slen);\n    } else {\n        *(data->lp) = lpAppendInteger(*(data->lp), vll);\n    }\n\n    (data->count)++;\n    return 1;\n}\n\n/* Validate the integrity of the data structure while converting it to\n * listpack and storing it at 'lp'.\n * The function is safe to call on non-validated ziplists; it returns 0\n * when it encounters an integrity validation issue. */\nint ziplistPairsConvertAndValidateIntegrity(unsigned char *zl, size_t size, unsigned char **lp) {\n    /* Keep track of the field names to locate duplicate ones */\n    struct {\n        long count;\n        dict *fields; /* Initialisation at the first callback. */\n        unsigned char **lp;\n    } data = {0, NULL, lp};\n\n    int ret = ziplistValidateIntegrity(zl, size, 1, _ziplistPairsEntryConvertAndValidate, &data);\n\n    /* make sure we have an even number of records. 
*/\n    if (data.count & 1)\n        ret = 0;\n\n    if (data.fields) dictRelease(data.fields);\n    return ret;\n}\n\n/* callback for ziplistValidateIntegrity.\n * The ziplist element pointed by 'p' will be converted and stored into listpack. */\nstatic int _ziplistEntryConvertAndValidate(unsigned char *p, unsigned int head_count, void *userdata) {\n    UNUSED(head_count);\n    unsigned char *str;\n    unsigned int slen;\n    long long vll;\n    unsigned char **lp = (unsigned char**)userdata;\n\n    if (!ziplistGet(p, &str, &slen, &vll)) return 0;\n\n    if (str)\n        *lp = lpAppend(*lp, (unsigned char*)str, slen);\n    else\n        *lp = lpAppendInteger(*lp, vll);\n\n    return 1;\n}\n\n/* callback for ziplistValidateIntegrity.\n * The ziplist element pointed by 'p' will be converted and stored into quicklist. */\nstatic int _listZiplistEntryConvertAndValidate(unsigned char *p, unsigned int head_count, void *userdata) {\n    UNUSED(head_count);\n    unsigned char *str;\n    unsigned int slen;\n    long long vll;\n    char longstr[32] = {0};\n    quicklist *ql = (quicklist*)userdata;\n\n    if (!ziplistGet(p, &str, &slen, &vll)) return 0;\n    if (!str) {\n        /* Write the longval as a string so we can re-add it */\n        slen = ll2string(longstr, sizeof(longstr), vll);\n        str = (unsigned char *)longstr;\n    }\n    quicklistPushTail(ql, str, slen);\n    return 1;\n}\n\n/* callback for lpValidateIntegrity to check that the listpack doesn't have duplicate records */\nstatic int _lpEntryValidation(unsigned char *p, unsigned int head_count, void *userdata) {\n    struct {\n        int tuple_len;\n        long count;\n        dict *fields;\n        long long last_expireat;\n    } *data = userdata;\n\n    if (data->fields == NULL) {\n        data->fields = dictCreate(&hashDictType);\n        dictExpand(data->fields, head_count/data->tuple_len);\n    }\n\n    /* If we're checking pairs, then even records are field names. Otherwise\n     * we're checking all elements. 
Add to dict and check it's not a dup */\n    if (data->count % data->tuple_len == 0) {\n        unsigned char *str;\n        int64_t slen;\n        unsigned char buf[LP_INTBUF_SIZE];\n\n        str = lpGet(p, &slen, buf);\n        sds field = sdsnewlen(str, slen);\n        if (dictAdd(data->fields, field, NULL) != DICT_OK) {\n            /* Duplicate, return an error */\n            sdsfree(field);\n            return 0;\n        }\n    }\n\n    /* Validate TTL field, only for listpackex. */\n    if (data->count % data->tuple_len == 2) {\n        long long expire_at;\n        /* Must be an integer. */\n        if (!lpGetIntegerValue(p, &expire_at)) return 0;\n        /* Must be less than EB_EXPIRE_TIME_MAX. */\n        if (expire_at < 0 || (unsigned long long)expire_at > EB_EXPIRE_TIME_MAX) return 0;\n        /* TTL fields are ordered. If the current field has TTL, the previous field must\n         * also have one, and the current TTL must be greater than the previous one. */\n        if (expire_at != 0 && (data->last_expireat == 0 || expire_at < data->last_expireat)) return 0;\n        data->last_expireat = expire_at;\n    }\n\n    (data->count)++;\n    return 1;\n}\n\n/* Validate the integrity of the listpack structure.\n * When `deep` is 0, only the integrity of the header is validated.\n * When `deep` is 1, we scan all the entries one by one.\n * tuple_len indicates the size of a logical entry tuple.\n * Whether the tuple is of size 1 (set), 2 (field-value) or 3 (field-value[-ttl]),\n * the first element in the tuple must be unique. */\nint lpValidateIntegrityAndDups(unsigned char *lp, size_t size, int deep, int tuple_len) {\n    if (!deep)\n        return lpValidateIntegrity(lp, size, 0, NULL, NULL);\n\n    /* Keep track of the field names to locate duplicate ones */\n    struct {\n        int tuple_len;\n        long count;\n        dict *fields; /* Initialisation at the first callback. 
*/\n        long long last_expireat; /* Last field's expiry time to ensure order in TTL fields. */\n    } data = {tuple_len, 0, NULL, -1};\n\n    int ret = lpValidateIntegrity(lp, size, 1, _lpEntryValidation, &data);\n\n    /* the number of records should be a multiple of the tuple length */\n    if (data.count % tuple_len != 0)\n        ret = 0;\n\n    if (data.fields) dictRelease(data.fields);\n    return ret;\n}\n\n/* Load a Redis object of the specified type from the specified file.\n * On success a newly allocated object is returned, otherwise NULL.\n *\n * error - When the function returns NULL and if 'error' is not NULL, the\n *   integer pointed by 'error' is set to the type of error that occurred\n * minExpiredField - If loading a hash with expiration on fields, then this value\n *   will be set to the minimum expire time found in the hash fields. If there are\n *   no fields with expiration or it is not a hash, then it will be set to\n *   EB_EXPIRE_TIME_INVALID.\n */\nrobj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error)\n{\n    robj *o = NULL, *ele, *dec;\n    uint64_t len;\n    unsigned int i;\n\n    /* Set default error of load object, it will be set to 0 on success. */\n    if (error) *error = RDB_LOAD_ERR_OTHER;\n\n    int deep_integrity_validation = server.sanitize_dump_payload == SANITIZE_DUMP_YES;\n    if (server.sanitize_dump_payload == SANITIZE_DUMP_CLIENTS) {\n        /* Skip sanitization when loading (an RDB), or getting a RESTORE command\n         * from either the master or a client using an ACL user with the skip-sanitize-payload flag. 
*/\n        int skip = server.loading ||\n            (server.current_client && (server.current_client->flags & CLIENT_MASTER));\n        if (!skip && server.current_client && server.current_client->user)\n            skip = !!(server.current_client->user->flags & USER_FLAG_SANITIZE_PAYLOAD_SKIP);\n        deep_integrity_validation = !skip;\n    }\n\n    if (rdbtype == RDB_TYPE_STRING) {\n        /* Read string value */\n        if ((o = rdbLoadEncodedStringObject(rdb)) == NULL) return NULL;\n        o = tryObjectEncodingEx(o, 0);\n    } else if (rdbtype == RDB_TYPE_LIST) {\n        /* Read list value */\n        if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n        if (len == 0) goto emptykey;\n\n        o = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);\n\n        /* Load every single element of the list */\n        while(len--) {\n            if ((ele = rdbLoadEncodedStringObject(rdb)) == NULL) {\n                decrRefCount(o);\n                return NULL;\n            }\n            dec = getDecodedObject(ele);\n            size_t len = sdslen(dec->ptr);\n            quicklistPushTail(o->ptr, dec->ptr, len);\n            decrRefCount(dec);\n            decrRefCount(ele);\n        }\n\n        listTypeTryConversion(o, LIST_CONV_AUTO, NULL, NULL);\n    } else if (rdbtype == RDB_TYPE_SET) {\n        /* Read Set value */\n        if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n        if (len == 0) goto emptykey;\n\n        /* Use a regular set when there are too many entries. 
*/\n        size_t max_entries = server.set_max_intset_entries;\n        if (max_entries >= 1<<30) max_entries = 1<<30;\n        if (len > max_entries) {\n            o = createSetObject();\n            /* It's faster to expand the dict to the right size asap in order\n             * to avoid rehashing */\n            if (len > DICT_HT_INITIAL_SIZE && dictTryExpand(o->ptr, len) != DICT_OK) {\n                rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\", (unsigned long long)len);\n                decrRefCount(o);\n                return NULL;\n            }\n        } else {\n            o = createIntsetObject();\n        }\n\n        /* Load every single element of the set */\n        size_t maxelelen = 0, sumelelen = 0;\n        for (i = 0; i < len; i++) {\n            long long llval;\n            sds sdsele;\n\n            if ((sdsele = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                decrRefCount(o);\n                return NULL;\n            }\n            size_t elelen = sdslen(sdsele);\n            sumelelen += elelen;\n            if (elelen > maxelelen) maxelelen = elelen;\n\n            if (o->encoding == OBJ_ENCODING_INTSET) {\n                /* Fetch integer value from element. 
*/\n                if (isSdsRepresentableAsLongLong(sdsele,&llval) == C_OK) {\n                    uint8_t success;\n                    o->ptr = intsetAdd(o->ptr, llval, &success);\n                    if (!success) {\n                        rdbReportCorruptRDB(\"Duplicate set members detected\");\n                        decrRefCount(o);\n                        sdsfree(sdsele);\n                        return NULL;\n                    }\n                } else if (setTypeSize(o) < server.set_max_listpack_entries &&\n                           maxelelen <= server.set_max_listpack_value &&\n                           lpSafeToAdd(NULL, sumelelen))\n                {\n                    /* We checked if it's safe to add one large element instead\n                     * of many small ones. It's OK since lpSafeToAdd doesn't\n                     * care about individual elements, only the total size. */\n                    setTypeConvert(o, OBJ_ENCODING_LISTPACK);\n                } else if (setTypeConvertAndExpand(o, OBJ_ENCODING_HT, len, 0) != C_OK) {\n                    rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\", (unsigned long long)len);\n                    sdsfree(sdsele);\n                    decrRefCount(o);\n                    return NULL;\n                }\n            }\n\n            /* This will also be called when the set was just converted\n             * to a listpack encoded set. 
*/\n            if (o->encoding == OBJ_ENCODING_LISTPACK) {\n                if (setTypeSize(o) < server.set_max_listpack_entries &&\n                    elelen <= server.set_max_listpack_value &&\n                    lpSafeToAdd(o->ptr, elelen))\n                {\n                    unsigned char *p = lpFirst(o->ptr);\n                    if (p && lpFind(o->ptr, p, (unsigned char*)sdsele, elelen, 0)) {\n                        rdbReportCorruptRDB(\"Duplicate set members detected\");\n                        decrRefCount(o);\n                        sdsfree(sdsele);\n                        return NULL;\n                    }\n                    o->ptr = lpAppend(o->ptr, (unsigned char *)sdsele, elelen);\n                } else if (setTypeConvertAndExpand(o, OBJ_ENCODING_HT, len, 0) != C_OK) {\n                    rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\",\n                                        (unsigned long long)len);\n                    sdsfree(sdsele);\n                    decrRefCount(o);\n                    return NULL;\n                }\n            }\n\n            /* This will also be called when the set was just converted\n             * to a regular hash table encoded set. */\n            if (o->encoding == OBJ_ENCODING_HT) {\n                if (dictAdd((dict*)o->ptr, sdsele, NULL) != DICT_OK) {\n                    rdbReportCorruptRDB(\"Duplicate set members detected\");\n                    decrRefCount(o);\n                    sdsfree(sdsele);\n                    return NULL;\n                }\n                *htGetMetadataSize(o->ptr) += sdsAllocSize(sdsele);\n            } else {\n                sdsfree(sdsele);\n            }\n        }\n    } else if (rdbtype == RDB_TYPE_ZSET_2 || rdbtype == RDB_TYPE_ZSET) {\n        /* Read sorted set value. 
*/\n        uint64_t zsetlen;\n        size_t maxelelen = 0, totelelen = 0;\n        zset *zs;\n\n        if ((zsetlen = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n        if (zsetlen == 0) goto emptykey;\n\n        o = createZsetObject();\n        zs = o->ptr;\n\n        if (zsetlen > DICT_HT_INITIAL_SIZE && dictTryExpand(zs->dict,zsetlen) != DICT_OK) {\n            rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\", (unsigned long long)zsetlen);\n            decrRefCount(o);\n            return NULL;\n        }\n\n        /* Load every single element of the sorted set. */\n        while(zsetlen--) {\n            sds sdsele;\n            double score;\n            zskiplistNode *znode;\n\n            if ((sdsele = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                decrRefCount(o);\n                return NULL;\n            }\n\n            if (rdbtype == RDB_TYPE_ZSET_2) {\n                if (rdbLoadBinaryDoubleValue(rdb,&score) == -1) {\n                    decrRefCount(o);\n                    sdsfree(sdsele);\n                    return NULL;\n                }\n            } else {\n                if (rdbLoadDoubleValue(rdb,&score) == -1) {\n                    decrRefCount(o);\n                    sdsfree(sdsele);\n                    return NULL;\n                }\n            }\n\n            if (isnan(score)) {\n                rdbReportCorruptRDB(\"Zset with NAN score detected\");\n                decrRefCount(o);\n                sdsfree(sdsele);\n                return NULL;\n            }\n\n            /* Don't care about integer-encoded strings. 
*/\n            if (sdslen(sdsele) > maxelelen) maxelelen = sdslen(sdsele);\n            totelelen += sdslen(sdsele);\n\n            znode = zslInsert(zs->zsl,score,sdsele);\n            if (dictAdd(zs->dict, znode, NULL) != DICT_OK) {\n                rdbReportCorruptRDB(\"Duplicate zset fields detected\");\n                decrRefCount(o);\n                sdsfree(sdsele); /* zslInsert copies the sds, so we need to free the original */\n                return NULL;\n            }\n            sdsfree(sdsele); /* zslInsert copies the sds into the node, so free the original */\n        }\n\n        /* Convert *after* loading, since sorted sets are not stored ordered. */\n        if (zsetLength(o) <= server.zset_max_listpack_entries &&\n            maxelelen <= server.zset_max_listpack_value &&\n            lpSafeToAdd(NULL, totelelen))\n        {\n            zsetConvert(o, OBJ_ENCODING_LISTPACK);\n        }\n    } else if (rdbtype == RDB_TYPE_HASH) {\n        uint64_t len, original_len;\n        int ret;\n        sds value;\n        Entry *entry;\n        dict *dupSearchDict = NULL;\n\n        len = rdbLoadLen(rdb, NULL);\n        if (len == RDB_LENERR) return NULL;\n        original_len = len;\n        if (len == 0) goto emptykey;\n\n        o = createHashObject();\n\n        /* Too many entries? Use a hash table right from the start. */\n        if (len > server.hash_max_listpack_entries)\n            hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n        else if (deep_integrity_validation) {\n            /* In this mode, we need to guarantee that the server won't crash\n             * later when the listpack is converted to a dict.\n             * Create a set (dict with no values) for a dup search.\n             * We can dismiss it as soon as we convert the listpack to a hash. 
*/\n            dupSearchDict = dictCreate(&hashDictType);\n        }\n\n        /* Load every field and value into the listpack */\n        while (o->encoding == OBJ_ENCODING_LISTPACK && len > 0) {\n            len--;\n            /* Load raw strings - load field as SDS first */\n            size_t usable;\n            sds fieldSds;\n            if ((fieldSds = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                decrRefCount(o);\n                if (dupSearchDict) dictRelease(dupSearchDict);\n                return NULL;\n            }\n            if ((value = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                sdsfree(fieldSds);\n                decrRefCount(o);\n                if (dupSearchDict) dictRelease(dupSearchDict);\n                return NULL;\n            }\n\n            /* Create entry with field and value - take ownership of value */\n            entry = entryCreate(fieldSds, value, ENTRY_TAKE_VALUE, &usable);\n            sdsfree(fieldSds);  /* entryCreate() doesn't take ownership of field */\n\n            if (dupSearchDict) {\n                sds field_dup = sdsdup(entryGetField(entry));\n\n                if (dictAdd(dupSearchDict, field_dup, NULL) != DICT_OK) {\n                    rdbReportCorruptRDB(\"Hash with dup elements\");\n                    dictRelease(dupSearchDict);\n                    decrRefCount(o);\n                    sdsfree(field_dup);\n                    entryFree(entry, NULL);\n                    return NULL;\n                }\n            }\n\n            /* Convert to hash table if size threshold is exceeded */\n            if (entryFieldLen(entry) > server.hash_max_listpack_value ||\n                sdslen(entryGetValue(entry)) > server.hash_max_listpack_value ||\n                !lpSafeToAdd(o->ptr, entryFieldLen(entry) + sdslen(entryGetValue(entry))))\n            {\n                hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n                ret = 
dictAdd((dict*)o->ptr, entry, NULL);  /* no_value=1 */\n                if (ret == DICT_ERR) {\n                    rdbReportCorruptRDB(\"Duplicate hash fields detected\");\n                    if (dupSearchDict) dictRelease(dupSearchDict);\n                    entryFree(entry, NULL);\n                    decrRefCount(o);\n                    return NULL;\n                }\n                *htGetMetadataSize(o->ptr) += usable;\n                break;\n            }\n\n            /* Add pair to listpack */\n            o->ptr = lpAppend(o->ptr, (unsigned char*)entryGetField(entry), entryFieldLen(entry));\n            o->ptr = lpAppend(o->ptr, (unsigned char*)entryGetValue(entry), sdslen(entryGetValue(entry)));\n\n            entryFree(entry, NULL);\n        }\n\n        if (dupSearchDict) {\n            /* We no longer need this, from now on the entries are added\n             * to a dict so the check is performed implicitly. */\n            dictRelease(dupSearchDict);\n            dupSearchDict = NULL;\n        }\n\n        if (o->encoding == OBJ_ENCODING_HT && original_len > DICT_HT_INITIAL_SIZE) {\n            if (dictTryExpand(o->ptr, original_len) != DICT_OK) {\n                rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\", (unsigned long long)original_len);\n                decrRefCount(o);\n                return NULL;\n            }\n        }\n\n        /* Load remaining fields and values into the hash table */\n        while (o->encoding == OBJ_ENCODING_HT && len > 0) {\n            len--;\n            /* Load encoded strings - load field as SDS first */\n            size_t usable;\n            sds fieldSds;\n            if ((fieldSds = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                decrRefCount(o);\n                return NULL;\n            }\n            if ((value = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                sdsfree(fieldSds);\n                decrRefCount(o);\n                
return NULL;\n            }\n\n            /* Create entry with field and value - take ownership of value */\n            entry = entryCreate(fieldSds, value, ENTRY_TAKE_VALUE, &usable);\n            sdsfree(fieldSds);  /* entryCreate() doesn't take ownership of field */\n\n            /* Add entry to hash table */\n            dict *d = o->ptr;\n            ret = dictAdd(d, entry, NULL);  /* no_value=1 */\n            if (ret == DICT_ERR) {\n                rdbReportCorruptRDB(\"Duplicate hash fields detected\");\n                entryFree(entry, NULL);\n                decrRefCount(o);\n                return NULL;\n            }\n            *htGetMetadataSize(o->ptr) += usable;\n        }\n\n        /* All pairs should be read by now */\n        serverAssert(len == 0);\n    } else if (rdbtype == RDB_TYPE_HASH_METADATA || rdbtype == RDB_TYPE_HASH_METADATA_PRE_GA) {\n        sds value;\n        Entry *entry;\n        uint64_t ttl, expireAt, minExpire = EB_EXPIRE_TIME_INVALID;\n        uint64_t original_len;\n        dict *dupSearchDict = NULL;\n\n        /* If hash with TTLs, load next/min expiration time\n         *\n         * - This value is serialized for the future use-case of streaming the object\n         *   directly to FLASH (while keeping in memory its next expiration time).\n         * - It is also used to keep only relative TTLs for fields in the RDB file.\n         */\n        if (rdbtype == RDB_TYPE_HASH_METADATA) {\n            minExpire = rdbLoadMillisecondTime(rdb, RDB_VERSION);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Hash failed loading minExpire\");\n                return NULL;\n            }\n            if (minExpire > EB_EXPIRE_TIME_INVALID) {\n                rdbReportCorruptRDB(\"Hash read invalid minExpire value\");\n                return NULL;\n            }\n        }\n\n        len = rdbLoadLen(rdb, NULL);\n        if (len == RDB_LENERR) return NULL;\n        original_len = len;\n        if (len == 0) goto emptykey;\n        /* 
TODO: create listpackEx or HT directly */\n        o = createHashObject();\n        /* Too many entries? Use a hash table right from the start. */\n        if (len > server.hash_max_listpack_entries) {\n            hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n            dictTypeAddMeta((dict**)&o->ptr, &entryHashDictTypeWithHFE);\n            initDictExpireMetadata(o);\n        } else {\n            hashTypeConvert(NULL, o, OBJ_ENCODING_LISTPACK_EX);\n            if (deep_integrity_validation) {\n                /* In this mode, we need to guarantee that the server won't crash\n                 * later when the listpack is converted to a dict.\n                 * Create a set (dict with no values) for dup search.\n                 * We can dismiss it as soon as we convert the listpack to a hash. */\n                dupSearchDict = dictCreate(&hashDictType);\n            }\n        }\n\n        while (len > 0) {\n            len--;\n\n            /* read the TTL */\n            if (rdbLoadLenByRef(rdb, NULL, &ttl) == -1) {\n                serverLog(LL_WARNING, \"failed reading hash TTL\");\n                decrRefCount(o);\n                if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n                return NULL;\n            }\n\n            if (rdbtype == RDB_TYPE_HASH_METADATA) {\n                /* Loaded TTL value:\n                 *  - 0: Indicates no TTL. This is the common case so we keep it small.\n                 *  - Otherwise: TTL is relative to minExpire (with +1 to avoid 0, which is already taken)\n                 */\n                expireAt = (ttl != 0) ? 
(ttl + minExpire - 1) : 0;\n            } else { /* RDB_TYPE_HASH_METADATA_PRE_GA */\n                expireAt = ttl; /* Value is absolute */\n            }\n\n            if (expireAt > EB_EXPIRE_TIME_MAX) {\n                rdbReportCorruptRDB(\"invalid expireAt time: %llu\",\n                                    (unsigned long long) expireAt);\n                decrRefCount(o);\n                if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n                return NULL;\n            }\n\n            /* Load field and value as SDS first */\n            size_t usable;\n            sds fieldSds = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, NULL);\n\n            if (fieldSds == NULL) {\n                serverLog(LL_WARNING, \"failed reading hash field\");\n                decrRefCount(o);\n                if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n                return NULL;\n            }\n\n            /* read the value */\n            if ((value = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n                serverLog(LL_WARNING, \"failed reading hash value\");\n                decrRefCount(o);\n                if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n                sdsfree(fieldSds);\n                return NULL;\n            }\n\n            /* Create entry with field, value, and optional expiration */\n            uint32_t entryFlags = ENTRY_TAKE_VALUE | ((expireAt != 0) ? 
ENTRY_HAS_EXPIRY : 0);\n            entry = entryCreate(fieldSds, value, entryFlags, &usable);\n            sdsfree(fieldSds);  /* entryCreate() doesn't take ownership of field */\n\n            sds field = entryGetField(entry);\n            size_t flen = sdslen(field);\n            sds value = entryGetValue(entry);\n            size_t vlen = sdslen(value);            \n\n            /* store the values read - either to listpack or dict */\n            if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n                /* integrity - check for key duplication (if required) */\n                if (dupSearchDict) {\n                    sds field_dup = sdsdup(field);\n\n                    if (dictAdd(dupSearchDict, field_dup, NULL) != DICT_OK) {\n                        rdbReportCorruptRDB(\"Hash with dup elements\");\n                        dictRelease(dupSearchDict);\n                        decrRefCount(o);\n                        sdsfree(field_dup);\n                        entryFree(entry, NULL);\n                        return NULL;\n                    }\n                }\n\n                /* check if the values can be saved to listpack (or should convert to dict encoding) */\n                if (flen > server.hash_max_listpack_value || \n                    vlen > server.hash_max_listpack_value ||\n                    !lpSafeToAdd(((listpackEx*)o->ptr)->lp, flen + vlen + lpEntrySizeInteger(expireAt)))\n                {\n                    /* convert to hash */\n                    hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n\n                    if (original_len > DICT_HT_INITIAL_SIZE) {\n                        if (dictTryExpand(o->ptr, original_len) != DICT_OK) {\n                            rdbReportCorruptRDB(\"OOM in dictTryExpand %llu\", (unsigned long long)original_len);\n                            decrRefCount(o);\n                            if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n                            entryFree(entry, 
NULL);\n                            return NULL;\n                        }\n                    }\n\n                    /* Don't add the entry to the new hash here: the\n                     * OBJ_ENCODING_HT branch below will pick it up and add it. */\n                } else {\n                    listpackExAddNew(o, field, flen, value, vlen, expireAt);\n                    entryFree(entry, NULL);\n                }\n            }\n\n            if (o->encoding == OBJ_ENCODING_HT) {\n                /* Add entry to hash table */\n                dict *d = o->ptr;\n                int ret = dictAdd(d, entry, NULL);  /* no_value=1 */\n\n                /* Attach expiry to the hash field and register it in the hash's private HFE DS */\n                if ((ret != DICT_ERR) && expireAt) {\n                    htMetadataEx *m = htGetMetadataEx(d);\n                    ret = ebAdd(&m->hfe, &hashFieldExpireBucketsType, entry, expireAt);\n                }\n\n                if (ret == DICT_ERR) {\n                    rdbReportCorruptRDB(\"Duplicate hash fields detected\");\n                    entryFree(entry, NULL);\n                    decrRefCount(o);\n                    return NULL;\n                }\n                *htGetMetadataSize(d) += usable;\n            }\n        }\n\n        if (dupSearchDict != NULL) dictRelease(dupSearchDict);\n\n    } else if (rdbtype == RDB_TYPE_LIST_QUICKLIST || rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {\n        if ((len = rdbLoadLen(rdb,NULL)) == RDB_LENERR) return NULL;\n        if (len == 0) goto emptykey;\n\n        o = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);\n        uint64_t container = QUICKLIST_NODE_CONTAINER_PACKED;\n        while (len--) {\n            unsigned char *lp;\n            size_t encoded_len;\n\n            if (rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {\n                if ((container = rdbLoadLen(rdb,NULL)) == RDB_LENERR) {\n                    decrRefCount(o);\n                    return NULL;\n                
}\n\n                if (container != QUICKLIST_NODE_CONTAINER_PACKED && container != QUICKLIST_NODE_CONTAINER_PLAIN) {\n                    rdbReportCorruptRDB(\"Quicklist integrity check failed.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n            }\n\n            unsigned char *data =\n                rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&encoded_len);\n            if (data == NULL || (encoded_len == 0)) {\n                zfree(data);\n                decrRefCount(o);\n                return NULL;\n            }\n\n            if (container == QUICKLIST_NODE_CONTAINER_PLAIN) {\n                quicklistAppendPlainNode(o->ptr, data, encoded_len);\n                continue;\n            }\n\n            if (rdbtype == RDB_TYPE_LIST_QUICKLIST_2) {\n                lp = data;\n                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n                if (!lpValidateIntegrity(lp, encoded_len, deep_integrity_validation, NULL, NULL)) {\n                    rdbReportCorruptRDB(\"Listpack integrity check failed.\");\n                    decrRefCount(o);\n                    zfree(lp);\n                    return NULL;\n                }\n            } else {\n                lp = lpNew(encoded_len);\n                if (!ziplistValidateIntegrity(data, encoded_len, 1,\n                        _ziplistEntryConvertAndValidate, &lp))\n                {\n                    rdbReportCorruptRDB(\"Ziplist integrity check failed.\");\n                    decrRefCount(o);\n                    zfree(data);\n                    zfree(lp);\n                    return NULL;\n                }\n                zfree(data);\n                lp = lpShrinkToFit(lp);\n            }\n\n            /* Silently skip empty listpacks; if we end up with an empty quicklist we'll fail later. 
*/\n            if (lpLength(lp) == 0) {\n                zfree(lp);\n                continue;\n            } else {\n                quicklistAppendListpack(o->ptr, lp);\n            }\n        }\n\n        if (quicklistCount(o->ptr) == 0) {\n            decrRefCount(o);\n            goto emptykey;\n        }\n\n        listTypeTryConversion(o, LIST_CONV_AUTO, NULL, NULL);\n    } else if (rdbtype == RDB_TYPE_HASH_ZIPMAP  ||\n               rdbtype == RDB_TYPE_LIST_ZIPLIST ||\n               rdbtype == RDB_TYPE_SET_INTSET   ||\n               rdbtype == RDB_TYPE_SET_LISTPACK ||\n               rdbtype == RDB_TYPE_ZSET_ZIPLIST ||\n               rdbtype == RDB_TYPE_ZSET_LISTPACK ||\n               rdbtype == RDB_TYPE_HASH_ZIPLIST ||\n               rdbtype == RDB_TYPE_HASH_LISTPACK ||\n               rdbtype == RDB_TYPE_HASH_LISTPACK_EX_PRE_GA ||\n               rdbtype == RDB_TYPE_HASH_LISTPACK_EX)\n    {\n        size_t encoded_len;\n\n        /* If Hash TTLs, Load next/min expiration time before the `encoded` */\n        if (rdbtype == RDB_TYPE_HASH_LISTPACK_EX) {\n            uint64_t minExpire = rdbLoadMillisecondTime(rdb, RDB_VERSION);\n            /* This value was serialized for future use-case of streaming the object\n             * directly to FLASH (while keeping in mem its next expiration time) */\n            UNUSED(minExpire);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError( \"Short read of listpackex min expiration time.\");\n                return NULL;\n            }\n        }\n\n        unsigned char *encoded =\n            rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&encoded_len);\n        if (encoded == NULL) return NULL;\n\n        o = createObject(OBJ_STRING, encoded); /* Obj type fixed below. 
*/\n\n        /* Fix the object encoding, and make sure to convert the encoded\n         * data type into the base type if, according to the current\n         * configuration, there are too many elements in the encoded data\n         * type. Note that we only check the length and not the max element\n         * size, as that would be an O(N) scan. Eventually everything will get\n         * converted. */\n        switch(rdbtype) {\n            case RDB_TYPE_HASH_ZIPMAP:\n                /* Since we don't keep zipmaps anymore, the rdb loading for these\n                 * is O(n) anyway, use `deep` validation. */\n                if (!zipmapValidateIntegrity(encoded, encoded_len, 1)) {\n                    rdbReportCorruptRDB(\"Zipmap integrity check failed.\");\n                    zfree(encoded);\n                    o->ptr = NULL;\n                    decrRefCount(o);\n                    return NULL;\n                }\n                /* Convert to a listpack-encoded hash. This can be removed\n                 * once support for loading Redis 2.4 dumps is dropped. 
*/\n                {\n                    unsigned char *lp = lpNew(0);\n                    unsigned char *zi = zipmapRewind(o->ptr);\n                    unsigned char *fstr, *vstr;\n                    unsigned int flen, vlen;\n                    unsigned int maxlen = 0;\n                    dict *dupSearchDict = dictCreate(&hashDictType);\n\n                    while ((zi = zipmapNext(zi, &fstr, &flen, &vstr, &vlen)) != NULL) {\n                        if (flen > maxlen) maxlen = flen;\n                        if (vlen > maxlen) maxlen = vlen;\n\n                        /* search for duplicate records */\n                        sds field = sdstrynewlen(fstr, flen);\n                        int field_added = (field != NULL && dictAdd(dupSearchDict, field, NULL) == DICT_OK);\n                        if (!field_added || !lpSafeToAdd(lp, (size_t)flen + vlen)) {\n                            rdbReportCorruptRDB(\"Hash zipmap with dup elements, or big length (%u)\", flen);\n                            /* If field was not added to dict, we still own it.\n                             * If it was added, dict owns it and dictRelease will free it. 
*/\n                            if (!field_added) sdsfree(field);\n                            dictRelease(dupSearchDict);\n                            lpFree(lp);\n                            zfree(encoded);\n                            o->ptr = NULL;\n                            decrRefCount(o);\n                            return NULL;\n                        }\n\n                        lp = lpAppend(lp, fstr, flen);\n                        lp = lpAppend(lp, vstr, vlen);\n                    }\n\n                    dictRelease(dupSearchDict);\n                    zfree(o->ptr);\n                    o->ptr = lp;\n                    o->type = OBJ_HASH;\n                    o->encoding = OBJ_ENCODING_LISTPACK;\n\n                    if (hashTypeLength(o, 0) > server.hash_max_listpack_entries ||\n                        maxlen > server.hash_max_listpack_value)\n                    {\n                        hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n                    }\n                }\n                break;\n            case RDB_TYPE_LIST_ZIPLIST:\n                {\n                    quicklist *ql = quicklistNew(server.list_max_listpack_size,\n                                                 server.list_compress_depth);\n\n                    if (!ziplistValidateIntegrity(encoded, encoded_len, 1,\n                            _listZiplistEntryConvertAndValidate, ql))\n                    {\n                        rdbReportCorruptRDB(\"List ziplist integrity check failed.\");\n                        zfree(encoded);\n                        o->ptr = NULL;\n                        decrRefCount(o);\n                        quicklistRelease(ql);\n                        return NULL;\n                    }\n\n                    if (ql->len == 0) {\n                        zfree(encoded);\n                        o->ptr = NULL;\n                        decrRefCount(o);\n                        quicklistRelease(ql);\n                        goto emptykey;\n 
                   }\n\n                    zfree(encoded);\n                    o->type = OBJ_LIST;\n                    o->ptr = ql;\n                    o->encoding = OBJ_ENCODING_QUICKLIST;\n                    break;\n                }\n            case RDB_TYPE_SET_INTSET:\n                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n                if (!intsetValidateIntegrity(encoded, encoded_len, deep_integrity_validation)) {\n                    rdbReportCorruptRDB(\"Intset integrity check failed.\");\n                    zfree(encoded);\n                    o->ptr = NULL;\n                    decrRefCount(o);\n                    return NULL;\n                }\n                o->type = OBJ_SET;\n                o->encoding = OBJ_ENCODING_INTSET;\n                if (intsetLen(o->ptr) > server.set_max_intset_entries)\n                    setTypeConvert(o, OBJ_ENCODING_HT);\n                break;\n            case RDB_TYPE_SET_LISTPACK:\n                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n                if (!lpValidateIntegrityAndDups(encoded, encoded_len, deep_integrity_validation, 1)) {\n                    rdbReportCorruptRDB(\"Set listpack integrity check failed.\");\n                    zfree(encoded);\n                    o->ptr = NULL;\n                    decrRefCount(o);\n                    return NULL;\n                }\n                o->type = OBJ_SET;\n                o->encoding = OBJ_ENCODING_LISTPACK;\n\n                if (setTypeSize(o) == 0) {\n                    zfree(encoded);\n                    o->ptr = NULL;\n                    decrRefCount(o);\n                    goto emptykey;\n                }\n                if (setTypeSize(o) > server.set_max_listpack_entries)\n                    setTypeConvert(o, OBJ_ENCODING_HT);\n                break;\n            case RDB_TYPE_ZSET_ZIPLIST:\n                {\n                    unsigned char *lp = 
lpNew(encoded_len);\n                    if (!ziplistPairsConvertAndValidateIntegrity(encoded, encoded_len, &lp)) {\n                        rdbReportCorruptRDB(\"Zset ziplist integrity check failed.\");\n                        zfree(lp);\n                        zfree(encoded);\n                        o->ptr = NULL;\n                        decrRefCount(o);\n                        return NULL;\n                    }\n\n                    zfree(o->ptr);\n                    o->type = OBJ_ZSET;\n                    o->ptr = lp;\n                    o->encoding = OBJ_ENCODING_LISTPACK;\n                    if (zsetLength(o) == 0) {\n                        decrRefCount(o);\n                        goto emptykey;\n                    }\n\n                    if (zsetLength(o) > server.zset_max_listpack_entries)\n                        zsetConvert(o, OBJ_ENCODING_SKIPLIST);\n                    else\n                        o->ptr = lpShrinkToFit(o->ptr);\n                    break;\n                }\n            case RDB_TYPE_ZSET_LISTPACK:\n                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n                if (!lpValidateIntegrityAndDups(encoded, encoded_len, deep_integrity_validation, 2)) {\n                    rdbReportCorruptRDB(\"Zset listpack integrity check failed.\");\n                    zfree(encoded);\n                    o->ptr = NULL;\n                    decrRefCount(o);\n                    return NULL;\n                }\n                o->type = OBJ_ZSET;\n                o->encoding = OBJ_ENCODING_LISTPACK;\n                if (zsetLength(o) == 0) {\n                    decrRefCount(o);\n                    goto emptykey;\n                }\n\n                if (zsetLength(o) > server.zset_max_listpack_entries)\n                    zsetConvert(o, OBJ_ENCODING_SKIPLIST);\n                break;\n            case RDB_TYPE_HASH_ZIPLIST:\n                {\n                    unsigned char *lp = 
lpNew(encoded_len);\n                    if (!ziplistPairsConvertAndValidateIntegrity(encoded, encoded_len, &lp)) {\n                        rdbReportCorruptRDB(\"Hash ziplist integrity check failed.\");\n                        zfree(lp);\n                        zfree(encoded);\n                        o->ptr = NULL;\n                        decrRefCount(o);\n                        return NULL;\n                    }\n\n                    zfree(o->ptr);\n                    o->ptr = lp;\n                    o->type = OBJ_HASH;\n                    o->encoding = OBJ_ENCODING_LISTPACK;\n                    if (hashTypeLength(o, 0) == 0) {\n                        decrRefCount(o);\n                        goto emptykey;\n                    }\n\n                    if (hashTypeLength(o, 0) > server.hash_max_listpack_entries)\n                        hashTypeConvert(NULL, o, OBJ_ENCODING_HT);\n                    else\n                        o->ptr = lpShrinkToFit(o->ptr);\n                    break;\n                }\n            case RDB_TYPE_HASH_LISTPACK:\n            case RDB_TYPE_HASH_LISTPACK_EX_PRE_GA:\n            case RDB_TYPE_HASH_LISTPACK_EX:\n                /* listpack-encoded hash with TTL requires its own struct\n                 * pointed to by o->ptr */\n                o->type = OBJ_HASH;\n                if ( (rdbtype == RDB_TYPE_HASH_LISTPACK_EX) ||\n                     (rdbtype == RDB_TYPE_HASH_LISTPACK_EX_PRE_GA) ) {\n                    listpackEx *lpt = listpackExCreate();\n                    lpt->lp = encoded;\n                    o->ptr = lpt;\n                    o->encoding = OBJ_ENCODING_LISTPACK_EX;\n                } else\n                    o->encoding = OBJ_ENCODING_LISTPACK;\n\n                /* tuple_len is the number of elements for each key:\n                 * key + value for a simple hash, key + value + ttl for a hash with TTL */\n                int tuple_len = (rdbtype == RDB_TYPE_HASH_LISTPACK ? 
2 : 3);\n                /* validate read data */\n                if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n                if (!lpValidateIntegrityAndDups(encoded, encoded_len,\n                                                deep_integrity_validation, tuple_len)) {\n                    rdbReportCorruptRDB(\"Hash listpack integrity check failed.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n\n                /* if listpack is empty, delete it */\n                if (hashTypeLength(o, 0) == 0) {\n                    decrRefCount(o);\n                    goto emptykey;\n                }\n\n                /* Convert listpack to hash table without registering in global HFE DS,\n                 * if has HFEs, since the listpack is not connected yet to the DB */\n                if (hashTypeLength(o, 0) > server.hash_max_listpack_entries)\n                    hashTypeConvert(NULL /*db*/, o, OBJ_ENCODING_HT);\n\n                break;\n            default:\n                /* totally unreachable */\n                rdbReportCorruptRDB(\"Unknown RDB encoding type %d\",rdbtype);\n                break;\n        }\n    } else if (rdbtype == RDB_TYPE_STREAM_LISTPACKS ||\n               rdbtype == RDB_TYPE_STREAM_LISTPACKS_2 ||\n               rdbtype == RDB_TYPE_STREAM_LISTPACKS_3 ||\n               rdbtype == RDB_TYPE_STREAM_LISTPACKS_4 ||\n               rdbtype == RDB_TYPE_STREAM_LISTPACKS_5)\n    {\n        o = createStreamObject();\n        stream *s = o->ptr;\n        uint64_t listpacks = rdbLoadLen(rdb,NULL);\n        if (listpacks == RDB_LENERR) {\n            rdbReportReadError(\"Stream listpacks len loading failed.\");\n            decrRefCount(o);\n            return NULL;\n        }\n\n        while(listpacks--) {\n            /* Get the master ID, the one we'll use as key of the radix tree\n             * node: the entries inside the listpack itself are delta-encoded\n          
   * relative to this ID. */\n            sds nodekey = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);\n            if (nodekey == NULL) {\n                rdbReportReadError(\"Stream master ID loading failed: invalid encoding or I/O error.\");\n                decrRefCount(o);\n                return NULL;\n            }\n            if (sdslen(nodekey) != sizeof(streamID)) {\n                rdbReportCorruptRDB(\"Stream node key entry is not the \"\n                                        \"size of a stream ID\");\n                sdsfree(nodekey);\n                decrRefCount(o);\n                return NULL;\n            }\n\n            /* Load the listpack. */\n            size_t lp_size;\n            unsigned char *lp =\n                rdbGenericLoadStringObject(rdb,RDB_LOAD_PLAIN,&lp_size);\n            if (lp == NULL) {\n                rdbReportReadError(\"Stream listpacks loading failed.\");\n                sdsfree(nodekey);\n                decrRefCount(o);\n                return NULL;\n            }\n            if (deep_integrity_validation) server.stat_dump_payload_sanitizations++;\n            if (!streamValidateListpackIntegrity(lp, lp_size, deep_integrity_validation)) {\n                rdbReportCorruptRDB(\"Stream listpack integrity check failed.\");\n                sdsfree(nodekey);\n                decrRefCount(o);\n                zfree(lp);\n                return NULL;\n            }\n\n            unsigned char *first = lpFirst(lp);\n            if (first == NULL) {\n                /* Serialized listpacks should never be empty, since on\n                 * deletion we should remove the radix tree key if the\n                 * resulting listpack is empty. */\n                rdbReportCorruptRDB(\"Empty listpack inside stream\");\n                sdsfree(nodekey);\n                decrRefCount(o);\n                zfree(lp);\n                return NULL;\n            }\n\n            /* Insert the key in the radix tree. 
*/\n            int retval = raxTryInsert(s->rax,\n                (unsigned char*)nodekey,sizeof(streamID),lp,NULL);\n            sdsfree(nodekey);\n            if (!retval) {\n                rdbReportCorruptRDB(\"Listpack re-added with existing key\");\n                decrRefCount(o);\n                zfree(lp);\n                return NULL;\n            }\n            s->alloc_size += lpBytes(lp);\n        }\n        /* Load total number of items inside the stream. */\n        s->length = rdbLoadLen(rdb,NULL);\n\n        /* Load the last entry ID. */\n        s->last_id.ms = rdbLoadLen(rdb,NULL);\n        s->last_id.seq = rdbLoadLen(rdb,NULL);\n\n        if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_2) {\n            /* Load the first entry ID. */\n            s->first_id.ms = rdbLoadLen(rdb,NULL);\n            s->first_id.seq = rdbLoadLen(rdb,NULL);\n\n            /* Load the maximal deleted entry ID. */\n            s->max_deleted_entry_id.ms = rdbLoadLen(rdb,NULL);\n            s->max_deleted_entry_id.seq = rdbLoadLen(rdb,NULL);\n\n            /* Load the offset. */\n            s->entries_added = rdbLoadLen(rdb,NULL);\n        } else {\n            /* During migration the offset can be initialized to the stream's\n             * length. At this point, we also don't care about tombstones\n             * because CG offsets will be later initialized as well. */\n            s->max_deleted_entry_id.ms = 0;\n            s->max_deleted_entry_id.seq = 0;\n            s->entries_added = s->length;\n\n            /* Since the rax is already loaded, we can find the first entry's\n             * ID. 
*/\n            streamGetEdgeID(s,1,1,&s->first_id);\n        }\n\n        if (rioGetReadError(rdb)) {\n            rdbReportReadError(\"Stream object metadata loading failed.\");\n            decrRefCount(o);\n            return NULL;\n        }\n\n        if (s->length && !raxSize(s->rax)) {\n            rdbReportCorruptRDB(\"Stream length inconsistent with rax entries\");\n            decrRefCount(o);\n            return NULL;\n        }\n\n        /* Consumer groups loading */\n        uint64_t cgroups_count = rdbLoadLen(rdb,NULL);\n        if (cgroups_count == RDB_LENERR) {\n            rdbReportReadError(\"Stream cgroup count loading failed.\");\n            decrRefCount(o);\n            return NULL;\n        }\n        while(cgroups_count--) {\n            /* Get the consumer group name and ID. We can then create the\n             * consumer group ASAP and populate its structure as\n             * we read more data. */\n            streamID cg_id;\n            sds cgname = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);\n            if (cgname == NULL) {\n                rdbReportReadError(\n                    \"Error reading the consumer group name from Stream\");\n                decrRefCount(o);\n                return NULL;\n            }\n\n            cg_id.ms = rdbLoadLen(rdb,NULL);\n            cg_id.seq = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Stream cgroup ID loading failed.\");\n                sdsfree(cgname);\n                decrRefCount(o);\n                return NULL;\n            }\n            \n            /* Load group offset. 
*/\n            uint64_t cg_offset;\n            if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_2) {\n                cg_offset = rdbLoadLen(rdb,NULL);\n                if (rioGetReadError(rdb)) {\n                    rdbReportReadError(\"Stream cgroup offset loading failed.\");\n                    sdsfree(cgname);\n                    decrRefCount(o);\n                    return NULL;\n                }\n            } else {\n                cg_offset = streamEstimateDistanceFromFirstEverEntry(s,&cg_id);\n            }\n\n            streamCG *cgroup = streamCreateCG(s,cgname,sdslen(cgname),&cg_id,cg_offset);\n            if (cgroup == NULL) {\n                rdbReportCorruptRDB(\"Duplicated consumer group name %s\",\n                                         cgname);\n                decrRefCount(o);\n                sdsfree(cgname);\n                return NULL;\n            }\n            sdsfree(cgname);\n\n            /* Load the global PEL for this consumer group; however, we'll\n             * not yet populate the NACK structures with the message\n             * owner, since consumers for this group and their messages will\n             * be read as a next step. So for now leave them unresolved\n             * and populate them later. 
*/\n            uint64_t pel_size = rdbLoadLen(rdb,NULL);\n            if (pel_size == RDB_LENERR) {\n                rdbReportReadError(\"Stream PEL size loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n            while(pel_size--) {\n                unsigned char rawid[sizeof(streamID)];\n                if (rioRead(rdb,rawid,sizeof(rawid)) == 0) {\n                    rdbReportReadError(\"Stream PEL ID loading failed.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n                streamID nack_id;\n                streamDecodeID(rawid, &nack_id);\n                streamNACK *nack = streamCreateNACK(s, NULL, &nack_id);\n                nack->delivery_time = rdbLoadMillisecondTime(rdb,RDB_VERSION);\n                nack->delivery_count = rdbLoadLen(rdb,NULL);\n                nack->cgroup_ref_node = streamLinkCGroupToEntry(s, cgroup, rawid);\n                if (rioGetReadError(rdb)) {\n                    rdbReportReadError(\"Stream PEL NACK loading failed.\");\n                    streamFreeNACK(s, nack);\n                    decrRefCount(o);\n                    return NULL;\n                }\n                if (!raxTryInsert(cgroup->pel,rawid,sizeof(rawid),nack,NULL)) {\n                    rdbReportCorruptRDB(\"Duplicated global PEL entry \"\n                                            \"loading stream consumer group\");\n                    streamFreeNACK(s, nack);\n                    decrRefCount(o);\n                    return NULL;\n                }\n\n                /* Insert in sorted order since RDB entries may not be time-ordered */\n                pelListInsertSorted(cgroup, nack);\n            }\n\n            /* Now that we loaded our global PEL, we need to load the\n             * consumers and their local PELs. 
*/\n            uint64_t consumers_num = rdbLoadLen(rdb,NULL);\n            if (consumers_num == RDB_LENERR) {\n                rdbReportReadError(\"Stream consumers num loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n            while(consumers_num--) {\n                sds cname = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL);\n                if (cname == NULL) {\n                    rdbReportReadError(\n                        \"Error reading the consumer name from Stream group.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n                streamConsumer *consumer = streamCreateConsumer(s,cgroup,cname,NULL,0,\n                                                        SCC_NO_NOTIFY|SCC_NO_DIRTIFY);\n                sdsfree(cname);\n                if (!consumer) {\n                    rdbReportCorruptRDB(\"Duplicate stream consumer detected.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n\n                consumer->seen_time = rdbLoadMillisecondTime(rdb,RDB_VERSION);\n                if (rioGetReadError(rdb)) {\n                    rdbReportReadError(\"Stream short read reading seen time.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n\n                if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_3) {\n                    consumer->active_time = rdbLoadMillisecondTime(rdb,RDB_VERSION);\n                    if (rioGetReadError(rdb)) {\n                        rdbReportReadError(\"Stream short read reading active time.\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                } else {\n                    /* That's the best estimate we got */\n                    consumer->active_time = consumer->seen_time;\n                }\n\n                /* Load the PEL about entries owned by this specific\n        
         * consumer. */\n                pel_size = rdbLoadLen(rdb,NULL);\n                if (pel_size == RDB_LENERR) {\n                    rdbReportReadError(\n                        \"Stream consumer PEL num loading failed.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n                while(pel_size--) {\n                    unsigned char rawid[sizeof(streamID)];\n                    if (rioRead(rdb,rawid,sizeof(rawid)) == 0) {\n                        rdbReportReadError(\n                            \"Stream short read reading PEL streamID.\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    void *result;\n                    if (!raxFind(cgroup->pel,rawid,sizeof(rawid),&result)) {\n                        rdbReportCorruptRDB(\"Consumer entry not found in \"\n                                                \"group global PEL\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    streamNACK *nack = result;\n\n                    /* If the NACK already has a consumer assigned, the\n                     * payload is corrupt — each global PEL entry must be\n                     * claimed by exactly one consumer. */\n                    if (nack->consumer != NULL) {\n                        rdbReportCorruptRDB(\"Stream consumer PEL entry already has a consumer assigned\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    /* Set the NACK consumer, that was left to NULL when\n                     * loading the global PEL. Then set the same shared\n                     * NACK structure also in the consumer-specific PEL. 
*/\n                    nack->consumer = consumer;\n                    if (!raxTryInsert(consumer->pel,rawid,sizeof(rawid),nack,NULL)) {\n                        rdbReportCorruptRDB(\"Duplicated consumer PEL entry \"\n                                                \"loading a stream consumer \"\n                                                \"group\");\n                        streamFreeNACK(s, nack);\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                }\n            }\n\n            /* For RDB_TYPE_STREAM_LISTPACKS_5 and above, load the NACK\n             * zone stream IDs and reconstruct the NACK zone. Entries with\n             * delivery_time == 0 may exist for both nacked and owned PEL\n             * entries, so we cannot rely on a simple walk — we use the\n             * stored IDs to unlink each nacked entry from its sorted\n             * position and re-insert it into the NACK zone. */\n            if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_5) {\n                uint64_t nacked_count = rdbLoadLen(rdb, NULL);\n                if (nacked_count == RDB_LENERR) {\n                    rdbReportReadError(\"Stream NACK zone count loading failed.\");\n                    decrRefCount(o);\n                    return NULL;\n                }\n\n                /* Load each NACKed entry's stream ID, look it up in the\n                 * group PEL, unlink from its current time-list position,\n                 * and re-insert into the NACK zone. 
*/\n                for (uint64_t i = 0; i < nacked_count; i++) {\n                    unsigned char rawid[sizeof(streamID)];\n                    if (rioRead(rdb, rawid, sizeof(rawid)) == 0) {\n                        rdbReportReadError(\"Stream NACK zone entry ID loading failed.\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n\n                    void *result;\n                    if (!raxFind(cgroup->pel, rawid, sizeof(rawid), &result)) {\n                        rdbReportCorruptRDB(\"Stream NACK zone entry not found \"\n                                            \"in group global PEL\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    streamNACK *nack = result;\n                    if (nack->consumer != NULL) {\n                        rdbReportCorruptRDB(\"Stream NACK zone entry has a \"\n                                            \"consumer assigned\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    pelListUnlink(cgroup, nack);\n                    pelListInsertNacked(cgroup, nack);\n                }\n\n            }\n\n            /* Verify entries outside the NACK zone all have a consumer\n             * assigned. For old RDB types pel_nack_tail is NULL, so\n             * this walks the entire PEL — equivalent to checking all. 
*/\n            if (deep_integrity_validation) {\n                streamNACK *cur = cgroup->pel_nack_tail ?\n                                 cgroup->pel_nack_tail->pel_next :\n                                 cgroup->pel_time_head;\n                while (cur) {\n                    if (!cur->consumer) {\n                        rdbReportCorruptRDB(\"Stream CG PEL entry without \"\n                                            \"consumer outside NACK zone\");\n                        decrRefCount(o);\n                        return NULL;\n                    }\n                    cur = cur->pel_next;\n                }\n            }\n        }\n\n        /* Load IDMP (Idempotent Message Producer) configuration and entries\n         * for RDB_TYPE_STREAM_LISTPACKS_4 and above. */\n        if (rdbtype >= RDB_TYPE_STREAM_LISTPACKS_4) {\n            /* Load IDMP duration. */\n            s->idmp_duration = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Stream IDMP duration loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n            if (s->idmp_duration < CONFIG_STREAM_IDMP_MIN_DURATION || \n                s->idmp_duration > CONFIG_STREAM_IDMP_MAX_DURATION) {\n                rdbReportCorruptRDB(\"Stream IDMP duration out of range\");\n                decrRefCount(o);\n                return NULL;\n            }\n\n            /* Load IDMP max entries. 
*/\n            s->idmp_max_entries = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Stream IDMP max entries loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n            if (s->idmp_max_entries < CONFIG_STREAM_IDMP_MIN_MAXSIZE || \n                s->idmp_max_entries > CONFIG_STREAM_IDMP_MAX_MAXSIZE) {\n                rdbReportCorruptRDB(\"Stream IDMP max entries out of range\");\n                decrRefCount(o);\n                return NULL;\n            }\n\n            /* Load all IDMP entries. */\n            if (rdbLoadStreamIdmpEntries(rdb,s) == -1) {\n                rdbReportReadError(\"Stream IDMP entries loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n\n            /* Load all-time count of IIDs added. */\n            s->iids_added = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Stream iids_added loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n\n            /* Load all-time count of duplicate IIDs detected. 
*/\n            s->iids_duplicates = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) {\n                rdbReportReadError(\"Stream iids_duplicates loading failed.\");\n                decrRefCount(o);\n                return NULL;\n            }\n        }\n    } else if (rdbtype == RDB_TYPE_MODULE_PRE_GA) {\n        rdbReportCorruptRDB(\"Pre-release module format not supported\");\n        return NULL;\n    } else if (rdbtype == RDB_TYPE_MODULE_2) {\n        uint64_t moduleid = rdbLoadLen(rdb,NULL);\n        if (rioGetReadError(rdb)) {\n            rdbReportReadError(\"Short read module id\");\n            return NULL;\n        }\n        moduleType *mt = moduleTypeLookupModuleByID(moduleid);\n\n        if (rdbCheckMode) {\n            char name[10];\n            moduleTypeNameByID(name,moduleid);\n            return rdbLoadCheckModuleValue(rdb, name, 0);\n        }\n\n        if (mt == NULL) {\n            char name[10];\n            moduleTypeNameByID(name,moduleid);\n            rdbReportCorruptRDB(\"The RDB file contains module data I can't load: no matching module type '%s'\", name);\n            return NULL;\n        }\n        RedisModuleIO io;\n        robj keyobj;\n        initStaticStringObject(keyobj,key);\n        moduleInitIOContext(&io, &mt->entity, rdb, &keyobj, dbid);\n        /* Call the rdb_load method of the module providing the 10-bit\n         * encoding version in the lower 10 bits of the module ID. */\n        void *ptr = mt->rdb_load(&io,moduleid&1023);\n        if (io.ctx) {\n            moduleFreeContext(io.ctx);\n            zfree(io.ctx);\n        }\n\n        /* Module v2 serialization has an EOF mark at the end. 
*/\n        uint64_t eof = rdbLoadLen(rdb,NULL);\n        if (eof == RDB_LENERR) {\n            if (ptr) {\n                o = createModuleObject(mt, ptr); /* creating just in order to easily destroy */\n                decrRefCount(o);\n            }\n            return NULL;\n        }\n        if (eof != RDB_MODULE_OPCODE_EOF) {\n            rdbReportCorruptRDB(\"The RDB file contains module data for the module '%s' that is not terminated by \"\n                                \"the proper module value EOF marker\", moduleTypeModuleName(mt));\n            if (ptr) {\n                o = createModuleObject(mt, ptr); /* creating just in order to easily destroy */\n                decrRefCount(o);\n            }\n            return NULL;\n        }\n\n        if (ptr == NULL) {\n            rdbReportCorruptRDB(\"The RDB file contains module data for the module type '%s' that the responsible \"\n                                \"module is not able to load. Check the module logs above for additional clues.\",\n                                moduleTypeModuleName(mt));\n            return NULL;\n        }\n        o = createModuleObject(mt, ptr);\n    } else if (rdbtype == RDB_TYPE_GCRA) {\n        uint64_t time = rdbLoadLen(rdb, NULL);\n        if (time == RDB_LENERR || time > LLONG_MAX) {\n            rdbReportReadError(\"Failed loading GCRA TaT value\");\n            return NULL;\n        }\n        o = createGCRAObject((long long)time);\n    } else {\n        rdbReportReadError(\"Unknown RDB encoding type %d\",rdbtype);\n        return NULL;\n    }\n\n    if (error) *error = 0;\n    return o;\n\nemptykey:\n    if (error) *error = RDB_LOAD_ERR_EMPTY_KEY;\n    return NULL;\n}\n\n/* Mark that we are loading in the global state and set up the fields\n * needed to provide loading stats. 
*/\nvoid startLoading(size_t size, int rdbflags, int async) {\n    loadingSetFlags(NULL, size, async);\n    loadingFireEvent(rdbflags);\n}\n\n/* Initialize stats, set loading flags and filename if provided. */\nvoid loadingSetFlags(char *filename, size_t size, int async) {\n    rdbFileBeingLoaded = filename;\n    server.loading = 1;\n    if (async == 1) server.async_loading = 1;\n    server.loading_start_time = time(NULL);\n    server.loading_loaded_bytes = 0;\n    server.loading_total_bytes = size;\n    server.loading_rdb_used_mem = 0;\n    server.rdb_last_load_keys_expired = 0;\n    server.rdb_last_load_keys_loaded = 0;\n    blockingOperationStarts();\n}\n\nvoid loadingFireEvent(int rdbflags) {\n    /* Fire the loading modules start event. */\n    int subevent;\n    if (rdbflags & RDBFLAGS_AOF_PREAMBLE)\n        subevent = REDISMODULE_SUBEVENT_LOADING_AOF_START;\n    else if (rdbflags & RDBFLAGS_REPLICATION)\n        subevent = REDISMODULE_SUBEVENT_LOADING_REPL_START;\n    else\n        subevent = REDISMODULE_SUBEVENT_LOADING_RDB_START;\n    moduleFireServerEvent(REDISMODULE_EVENT_LOADING,subevent,NULL);\n}\n\n/* Mark that we are loading in the global state and set up the fields\n * needed to provide loading stats.\n * 'filename' is optional and used for rdb-check on error */\nvoid startLoadingFile(size_t size, char* filename, int rdbflags) {\n    loadingSetFlags(filename, size, 0);\n    loadingFireEvent(rdbflags);\n}\n\n/* Refresh the absolute loading progress info */\nvoid loadingAbsProgress(off_t pos) {\n    server.loading_loaded_bytes = pos;\n    updatePeakMemory();\n}\n\n/* Refresh the incremental loading progress info */\nvoid loadingIncrProgress(off_t size) {\n    server.loading_loaded_bytes += size;\n    updatePeakMemory();\n}\n\n/* Update the file name currently being loaded */\nvoid updateLoadingFileName(char* filename) {\n    rdbFileBeingLoaded = filename;\n}\n\n/* Loading finished */\nvoid stopLoading(int success) {\n    server.loading = 0;\n    
server.async_loading = 0;\n    server.loading_skip_checksum = 0;\n    blockingOperationEnds();\n    rdbFileBeingLoaded = NULL;\n\n    /* Fire the loading modules end event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_LOADING,\n                          success?\n                            REDISMODULE_SUBEVENT_LOADING_ENDED:\n                            REDISMODULE_SUBEVENT_LOADING_FAILED,\n                           NULL);\n}\n\nvoid startSaving(int rdbflags) {\n    /* Fire the persistence modules start event. */\n    int subevent;\n    if (rdbflags & RDBFLAGS_AOF_PREAMBLE && getpid() != server.pid)\n        subevent = REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START;\n    else if (rdbflags & RDBFLAGS_AOF_PREAMBLE)\n        subevent = REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START;\n    else if (getpid()!=server.pid)\n        subevent = REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START;\n    else\n        subevent = REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START;\n    moduleFireServerEvent(REDISMODULE_EVENT_PERSISTENCE,subevent,NULL);\n}\n\nvoid stopSaving(int success) {\n    /* Fire the persistence modules end event. 
*/\n    moduleFireServerEvent(REDISMODULE_EVENT_PERSISTENCE,\n                          success?\n                            REDISMODULE_SUBEVENT_PERSISTENCE_ENDED:\n                            REDISMODULE_SUBEVENT_PERSISTENCE_FAILED,\n                          NULL);\n}\n\n/* Track loading progress in order to serve clients from time to time\n   and, if needed, calculate the RDB checksum. */\nvoid rdbLoadProgressCallback(rio *r, const void *buf, size_t len) {\n    if (server.rdb_checksum && !server.loading_skip_checksum)\n        rioGenericUpdateChecksum(r, buf, len);\n    if (server.loading_process_events_interval_bytes &&\n        (r->processed_bytes + len)/server.loading_process_events_interval_bytes > r->processed_bytes/server.loading_process_events_interval_bytes)\n    {\n        if (server.masterhost && server.repl_state == REPL_STATE_TRANSFER)\n            replicationSendNewlineToMaster();\n        loadingAbsProgress(r->processed_bytes);\n        processEventsWhileBlocked();\n        processModuleLoadingProgressEvent(0);\n    }\n    if (server.repl_state == REPL_STATE_TRANSFER && rioCheckType(r) == RIO_TYPE_CONN) {\n        atomicIncr(server.stat_net_repl_input_bytes, len);\n    }\n}\n\n/* Save the given functions_ctx to the rdb.\n * The err output parameter is optional and will be set with a relevant error\n * message on failure; it is the caller's responsibility to free the error\n * message on failure.\n *\n * The lib_ctx argument is also optional. If NULL is given, only verify the rdb\n * structure without performing the actual functions loading. 
*/\nint rdbFunctionLoad(rio *rdb, int ver, functionsLibCtx* lib_ctx, int rdbflags, sds *err) {\n    UNUSED(ver);\n    sds error = NULL;\n    sds final_payload = NULL;\n    int res = C_ERR;\n    if (!(final_payload = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, NULL))) {\n        error = sdsnew(\"Failed loading library payload\");\n        goto done;\n    }\n\n    if (lib_ctx) {\n        sds library_name = NULL;\n        if (!(library_name = functionsCreateWithLibraryCtx(final_payload, rdbflags & RDBFLAGS_ALLOW_DUP, &error, lib_ctx, 0))) {\n            if (!error) {\n                error = sdsnew(\"Failed creating the library\");\n            }\n            goto done;\n        }\n        sdsfree(library_name);\n    }\n\n    res = C_OK;\n\ndone:\n    if (final_payload) sdsfree(final_payload);\n    if (error) {\n        if (err) {\n            *err = error;\n        } else {\n            serverLog(LL_WARNING, \"Failed creating function, %s\", error);\n            sdsfree(error);\n        }\n    }\n    return res;\n}\n\n/* Load an RDB file from the rio stream 'rdb'. On success C_OK is returned,\n * otherwise C_ERR is returned and 'errno' is set accordingly. */\nint rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi) {\n    functionsLibCtx* functions_lib_ctx = functionsLibCtxGetCurrent();\n    rdbLoadingCtx loading_ctx = { .dbarray = server.db, .functions_lib_ctx = functions_lib_ctx };\n    int retval = rdbLoadRioWithLoadingCtx(rdb,rdbflags,rsi,&loading_ctx);\n    return retval;\n}\n\n/* Load an RDB file from the rio stream 'rdb'. On success C_OK is returned,\n * otherwise C_ERR is returned.\n * The rdb_loading_ctx argument holds the objects into which the rdb will be loaded;\n * currently it only allows setting the db object and the functionsLibCtx to which\n * the data will be loaded (in the future it might contain more such objects). 
*/\nint rdbLoadRioWithLoadingCtx(rio *rdb, int rdbflags, rdbSaveInfo *rsi, rdbLoadingCtx *rdb_loading_ctx) {\n    uint64_t dbid = 0;\n    int type, rdbver;\n    uint64_t db_size = 0, expires_size = 0;\n    int should_expand_db = 0;\n    redisDb *db = rdb_loading_ctx->dbarray+0;\n    char buf[1024];\n    int error;\n    long long empty_keys_skipped = 0;\n\n    rdb->update_cksum = rdbLoadProgressCallback;\n    rdb->max_processing_chunk = server.loading_process_events_interval_bytes;\n    if (rioRead(rdb,buf,9) == 0) goto eoferr;\n    buf[9] = '\\0';\n    if (memcmp(buf,\"REDIS\",5) != 0) {\n        serverLog(LL_WARNING,\"Wrong signature trying to load DB from file\");\n        return C_ERR;\n    }\n    rdbver = atoi(buf+5);\n    if (rdbver < 1 || rdbver > RDB_VERSION) {\n        serverLog(LL_WARNING,\"Can't handle RDB format version %d\",rdbver);\n        return C_ERR;\n    }\n\n    /* Key-specific attributes, set by opcodes before the key type. */\n    long long lru_idle = -1, lfu_freq = -1, expiretime = -1, now = mstime();\n    long long lru_clock = LRU_CLOCK();\n    KeyMetaSpec keyMeta; /* Updated by OPCODE_KEY_META and OPCODE_EXPIRETIME */\n    keyMetaSpecInit(&keyMeta);\n\n    while(1) {\n        sds key;\n        robj *val;\n\n        /* Read type. */\n        if ((type = rdbLoadType(rdb)) == -1) goto eoferr;\n\n        /* Handle special types. */\n        if (type == RDB_OPCODE_EXPIRETIME) {\n            /* EXPIRETIME: load an expire associated with the next key\n             * to load. Note that after loading an expire we need to\n             * load the actual type, and continue. */\n            expiretime = rdbLoadTime(rdb);\n            expiretime *= 1000;\n            keyMetaSpecAdd(&keyMeta, KEY_META_ID_EXPIRE, expiretime);\n            if (rioGetReadError(rdb)) goto eoferr;\n            continue; /* Read next opcode. 
*/\n        } else if (type == RDB_OPCODE_EXPIRETIME_MS) {\n            /* EXPIRETIME_MS: millisecond precision expire times introduced\n             * with RDB v3. Like EXPIRETIME but with more precision. */\n            expiretime = rdbLoadMillisecondTime(rdb,rdbver);\n            keyMetaSpecAdd(&keyMeta, KEY_META_ID_EXPIRE, expiretime);\n            if (rioGetReadError(rdb)) goto eoferr;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_FREQ) {\n            /* FREQ: LFU frequency. */\n            uint8_t byte;\n            if (rioRead(rdb,&byte,1) == 0) goto eoferr;\n            lfu_freq = byte;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_IDLE) {\n            /* IDLE: LRU idle time. */\n            uint64_t qword;\n            if ((qword = rdbLoadLen(rdb,NULL)) == RDB_LENERR) goto eoferr;\n            lru_idle = qword;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_EOF) {\n            /* EOF: End of file, exit the main loop. */\n            break;\n        } else if (type == RDB_OPCODE_SELECTDB) {\n            /* SELECTDB: Select the specified database. */\n            if ((dbid = rdbLoadLen(rdb,NULL)) == RDB_LENERR) goto eoferr;\n            if (dbid >= (unsigned)server.dbnum) {\n                serverLog(LL_WARNING,\n                    \"FATAL: Data file was created with a Redis \"\n                    \"server configured to handle more than %d \"\n                    \"databases. Exiting\\n\", server.dbnum);\n                exit(1);\n            }\n            db = rdb_loading_ctx->dbarray+dbid;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_RESIZEDB) {\n            /* RESIZEDB: Hint about the size of the keys in the currently\n             * selected database, in order to avoid useless rehashing. 
*/\n            if ((db_size = rdbLoadLen(rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((expires_size = rdbLoadLen(rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            should_expand_db = 1;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_SLOT_INFO) {\n            uint64_t slot_id, slot_size, expires_slot_size;\n            if ((slot_id = rdbLoadLen(rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((slot_size = rdbLoadLen(rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((expires_slot_size = rdbLoadLen(rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if (!server.cluster_enabled) {\n                continue; /* Ignore gracefully. */\n            }\n            /* In cluster mode we resize individual slot-specific dictionaries\n             * based on the number of keys that slot holds. */\n            kvstoreDictExpand(db->keys, slot_id, slot_size);\n            kvstoreDictExpand(db->expires, slot_id, expires_slot_size);\n            should_expand_db = 0;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_AUX) {\n            /* AUX: generic string-string fields. Used to add state to the RDB\n             * that is backward compatible. Implementations of RDB loading\n             * are required to skip AUX fields they don't understand.\n             *\n             * An AUX field is composed of two strings: key and value. 
*/\n            robj *auxkey, *auxval;\n            if ((auxkey = rdbLoadStringObject(rdb)) == NULL) goto eoferr;\n            if ((auxval = rdbLoadStringObject(rdb)) == NULL) {\n                decrRefCount(auxkey);\n                goto eoferr;\n            }\n\n            if (((char*)auxkey->ptr)[0] == '%') {\n                /* All the fields with a name staring with '%' are considered\n                 * information fields and are logged at startup with a log\n                 * level of NOTICE. */\n                serverLog(LL_NOTICE,\"RDB '%s': %s\",\n                    (char*)auxkey->ptr,\n                    (char*)auxval->ptr);\n            } else if (!strcasecmp(auxkey->ptr,\"repl-stream-db\")) {\n                if (rsi) rsi->repl_stream_db = atoi(auxval->ptr);\n            } else if (!strcasecmp(auxkey->ptr,\"repl-id\")) {\n                if (rsi && sdslen(auxval->ptr) == CONFIG_RUN_ID_SIZE) {\n                    memcpy(rsi->repl_id,auxval->ptr,CONFIG_RUN_ID_SIZE+1);\n                    rsi->repl_id_is_set = 1;\n                }\n            } else if (!strcasecmp(auxkey->ptr,\"repl-offset\")) {\n                if (rsi) rsi->repl_offset = strtoll(auxval->ptr,NULL,10);\n            } else if (!strcasecmp(auxkey->ptr,\"lua\")) {\n                /* Won't load the script back in memory anymore. 
*/\n            } else if (!strcasecmp(auxkey->ptr,\"redis-ver\")) {\n                serverLog(LL_NOTICE,\"Loading RDB produced by version %s\",\n                    (char*)auxval->ptr);\n            } else if (!strcasecmp(auxkey->ptr,\"ctime\")) {\n                time_t age = time(NULL)-strtol(auxval->ptr,NULL,10);\n                if (age < 0) age = 0;\n                serverLog(LL_NOTICE,\"RDB age %ld seconds\",\n                    (long) age);\n            } else if (!strcasecmp(auxkey->ptr,\"used-mem\")) {\n                long long usedmem = strtoll(auxval->ptr,NULL,10);\n                serverLog(LL_NOTICE,\"RDB memory usage when created %.2f MB\",\n                    (double) usedmem / (1024*1024));\n                server.loading_rdb_used_mem = usedmem;\n            } else if (!strcasecmp(auxkey->ptr,\"aof-preamble\")) {\n                long long haspreamble = strtoll(auxval->ptr,NULL,10);\n                if (haspreamble) serverLog(LL_NOTICE,\"RDB has an AOF tail\");\n            } else if (!strcasecmp(auxkey->ptr, \"aof-base\")) {\n                long long isbase = strtoll(auxval->ptr, NULL, 10);\n                if (isbase) serverLog(LL_NOTICE, \"RDB is base AOF\");\n            } else if (!strcasecmp(auxkey->ptr,\"cluster-asm-task\")) {\n                asmReplicaHandleMasterTask(auxval->ptr);\n            } else if (!strcasecmp(auxkey->ptr,\"redis-bits\")) {\n                /* Just ignored. */\n            } else {\n                /* We ignore fields we don't understand, as per the AUX field\n                 * contract. */\n                serverLog(LL_DEBUG,\"Unrecognized RDB AUX field: '%s'\",\n                    (char*)auxkey->ptr);\n            }\n\n            decrRefCount(auxkey);\n            decrRefCount(auxval);\n            continue; /* Read type again. 
*/\n        } else if (type == RDB_OPCODE_MODULE_AUX) {\n            /* Load module data that is not related to the Redis key space.\n             * Such data can potentially be stored both before and after the\n             * RDB keys-values section. */\n            uint64_t moduleid = rdbLoadLen(rdb,NULL);\n            int when_opcode = rdbLoadLen(rdb,NULL);\n            int when = rdbLoadLen(rdb,NULL);\n            if (rioGetReadError(rdb)) goto eoferr;\n            if (when_opcode != RDB_MODULE_OPCODE_UINT) {\n                rdbReportReadError(\"bad when_opcode\");\n                goto eoferr;\n            }\n            moduleType *mt = moduleTypeLookupModuleByID(moduleid);\n            char name[10];\n            moduleTypeNameByID(name,moduleid);\n\n            if (!rdbCheckMode && mt == NULL) {\n                /* Unknown module. */\n                serverLog(LL_WARNING,\"The RDB file contains AUX module data I can't load: no matching module '%s'\", name);\n                exit(1);\n            } else if (!rdbCheckMode && mt != NULL) {\n                if (!mt->aux_load) {\n                    /* Module doesn't support AUX. */\n                    serverLog(LL_WARNING,\"The RDB file contains module AUX data, but the module '%s' doesn't seem to support it.\", name);\n                    exit(1);\n                }\n\n                RedisModuleIO io;\n                moduleInitIOContext(&io, &mt->entity, rdb, NULL, -1);\n                /* Call the rdb_load method of the module providing the 10 bit\n                 * encoding version in the lower 10 bits of the module ID. 
*/\n                int rc = mt->aux_load(&io,moduleid&1023, when);\n                if (io.ctx) {\n                    moduleFreeContext(io.ctx);\n                    zfree(io.ctx);\n                }\n                if (rc != REDISMODULE_OK || io.error) {\n                    moduleTypeNameByID(name,moduleid);\n                    serverLog(LL_WARNING,\"The RDB file contains module AUX data for the module type '%s' that the responsible module is not able to load. Check the module logs above for additional clues.\", name);\n                    goto eoferr;\n                }\n                uint64_t eof = rdbLoadLen(rdb,NULL);\n                if (eof != RDB_MODULE_OPCODE_EOF) {\n                    serverLog(LL_WARNING,\"The RDB file contains module AUX data for the module '%s' that is not terminated by the proper module value EOF marker\", name);\n                    goto eoferr;\n                }\n                continue;\n            } else {\n                /* RDB check mode. */\n                robj *aux = rdbLoadCheckModuleValue(rdb, name, 0);\n                decrRefCount(aux);\n                continue; /* Read next opcode. */\n            }\n        } else if (type == RDB_OPCODE_FUNCTION_PRE_GA) {\n            rdbReportCorruptRDB(\"Pre-release function format not supported.\");\n            exit(1);\n        } else if (type == RDB_OPCODE_FUNCTION2) {\n            sds err = NULL;\n            if (rdbFunctionLoad(rdb, rdbver, rdb_loading_ctx->functions_lib_ctx, rdbflags, &err) != C_OK) {\n                serverLog(LL_WARNING,\"Failed loading library, %s\", err);\n                sdsfree(err);\n                goto eoferr;\n            }\n            continue;\n        }\n\n        /* If there is no slot info, it means that it's either not cluster mode or we are trying to load a legacy RDB file.\n         * In this case we want to estimate the number of keys per slot and resize accordingly. 
*/\n        if (should_expand_db) {\n            dbExpand(db, db_size, 0);\n            dbExpandExpires(db, expires_size, 0);\n            should_expand_db = 0;\n            serverLog(LL_VERBOSE, \"DB %d resized: %lu key buckets, %lu expire buckets\",\n                        db->id, kvstoreBuckets(db->keys), kvstoreBuckets(db->expires));\n        }\n\n        /* With metadata, type = RDB_OPCODE_KEY_META. Layout: [<META>,]<TYPE>,<KEY>,<VALUE> */\n        if (rdbResolveKeyType(rdb, &type, dbid, &keyMeta) == -1)\n            goto eoferr;\n\n        /* Read key */\n        if ((key = rdbGenericLoadStringObject(rdb,RDB_LOAD_SDS,NULL)) == NULL) {\n            keyMetaSpecCleanup(&keyMeta);\n            goto eoferr;\n        }\n        /* Read value */\n        val = rdbLoadObject(type,rdb,key,db->id,&error);\n\n        /* Check if the key already expired. This function is used when loading\n         * an RDB file from disk, either at startup, or when an RDB was\n         * received from the master. In the latter case, the master is\n         * responsible for key expiry. If we expired keys here, the\n         * snapshot taken by the master may not be reflected on the slave.\n         * Similarly, if the base AOF is RDB format, we want to load all\n         * the keys as they are, since the log of operations in the incr AOF\n         * is assumed to work in the exact keyspace state. */\n        if (val == NULL) {\n            keyMetaSpecCleanup(&keyMeta);\n            /* Since we used to have a bug that could lead to empty keys\n             * (See #8453), we'd rather not fail when an empty key is encountered\n             * in an RDB file, instead we will silently discard it and\n             * continue loading. 
*/\n            if (error == RDB_LOAD_ERR_EMPTY_KEY) {\n                if(empty_keys_skipped++ < 10)\n                    serverLog(LL_NOTICE, \"rdbLoadObject skipping empty key: %s\", redactLogCstr(key));\n                sdsfree(key);\n            } else {\n                sdsfree(key);\n                goto eoferr;\n            }\n        } else if (iAmMaster() &&\n            !(rdbflags&RDBFLAGS_AOF_PREAMBLE) &&\n            expiretime != -1 && expiretime < now)\n        {\n            if (rdbflags & RDBFLAGS_FEED_REPL) {\n                /* Caller should have created replication backlog,\n                 * and now this path only works when rebooting,\n                 * so we don't have replicas yet. */\n                serverAssert(server.repl_backlog != NULL && listLength(server.slaves) == 0);\n                robj keyobj;\n                initStaticStringObject(keyobj,key);\n                robj *argv[2];\n                argv[0] = server.lazyfree_lazy_expire ? shared.unlink : shared.del;\n                argv[1] = &keyobj;\n                replicationFeedSlaves(server.slaves,dbid,argv,2);\n            }\n            sdsfree(key);\n            decrRefCount(val);\n            keyMetaSpecCleanup(&keyMeta);\n            server.rdb_last_load_keys_expired++;\n        } else {\n            robj keyobj;\n            initStaticStringObject(keyobj,key);\n\n            /* Add the new object in the hash table */\n            kvobj *kv = dbAddRDBLoad(db, key, &val, &keyMeta);\n            server.rdb_last_load_keys_loaded++;\n            if (!kv) {\n                if (rdbflags & RDBFLAGS_ALLOW_DUP) {\n                    /* This flag is useful for DEBUG RELOAD special modes.\n                     * When it's set we allow new keys to replace the current\n                     * keys with the same name. 
*/\n                    dbSyncDelete(db,&keyobj);\n                    kv = dbAddRDBLoad(db, key, &val, &keyMeta);\n                    serverAssert(kv != NULL);\n                } else {\n                    serverLog(LL_WARNING,\n                        \"RDB has duplicated key '%s' in DB %d\",key,db->id);\n                    serverPanic(\"Duplicated key found in RDB file\");\n                }\n            }\n\n            /* If minExpiredField was set, then the object is a hash with expiration\n             * on fields and needs to be registered in the global HFE DS */\n            if (kv->type == OBJ_HASH) {\n                uint64_t minExpiredField = hashTypeGetMinExpire(kv, 1);\n                if (minExpiredField != EB_EXPIRE_TIME_INVALID)\n                    estoreAdd(db->subexpires, getKeySlot(key), kv, minExpiredField);\n            }\n\n            /* Register streams with IDMP producers for cron-based expiration. */\n            if (kv->type == OBJ_STREAM)\n                streamKeyLoaded(db, &keyobj, kv);\n\n            /* Set usage information (for eviction). */\n            objectSetLRUOrLFU(val,lfu_freq,lru_idle,lru_clock,1000);\n\n            /* call key space notification on key loaded for modules only */\n            moduleNotifyKeyspaceEvent(NOTIFY_LOADED, \"loaded\", &keyobj, db->id, NULL, 0);\n\n            /* Release key (sds), dictEntry stores a copy of it in embedded data */\n            sdsfree(key);\n        }\n\n        /* Loading the database more slowly is useful in order to test\n         * certain edge cases. */\n        if (server.key_load_delay)\n            debugDelay(server.key_load_delay);\n\n        /* Reset the state that is key-specific and is populated by\n         * opcodes before the key, so that we start from scratch again. 
*/\n        expiretime = -1;\n        lfu_freq = -1;\n        lru_idle = -1;\n        keyMetaSpecInit(&keyMeta);\n    }\n    /* Verify the checksum if RDB version is >= 5 */\n    if (rdbver >= 5) {\n        uint64_t cksum, expected = rdb->cksum;\n\n        if (rioRead(rdb,&cksum,8) == 0) goto eoferr;\n        if (server.rdb_checksum && !server.loading_skip_checksum && !server.skip_checksum_validation) {\n            memrev64ifbe(&cksum);\n            if (cksum == 0) {\n                serverLog(LL_NOTICE,\"RDB file was saved with checksum disabled: no check performed.\");\n            } else if (cksum != expected) {\n                serverLog(LL_WARNING,\"Wrong RDB checksum expected: (%llx) but \"\n                    \"got (%llx). Aborting now.\",\n                        (unsigned long long)expected,\n                        (unsigned long long)cksum);\n                rdbReportCorruptRDB(\"RDB CRC error\");\n                return C_ERR;\n            }\n        }\n    }\n\n    if (empty_keys_skipped) {\n        serverLog(LL_NOTICE,\n            \"Done loading RDB, keys loaded: %lld, keys expired: %lld, empty keys skipped: %lld.\",\n                server.rdb_last_load_keys_loaded, server.rdb_last_load_keys_expired, empty_keys_skipped);\n    } else {\n        serverLog(LL_NOTICE,\n            \"Done loading RDB, keys loaded: %lld, keys expired: %lld.\",\n                server.rdb_last_load_keys_loaded, server.rdb_last_load_keys_expired);\n    }\n    return C_OK;\n\n    /* Unexpected end of file is handled here calling rdbReportReadError():\n     * this will in turn either abort Redis in most cases, or if we are loading\n     * the RDB file from a socket during initial SYNC (diskless replica mode),\n     * we'll report the error to the caller, so that we can retry. */\neoferr:\n    serverLog(LL_WARNING,\n        \"Short read or OOM loading DB. 
Unrecoverable error, aborting now.\");\n    rdbReportReadError(\"Unexpected EOF reading RDB file\");\n    return C_ERR;\n}\n\nint rdbLoad(char *filename, rdbSaveInfo *rsi, int rdbflags) {\n    return rdbLoadWithEmptyFunc(filename, rsi, rdbflags, NULL);\n}\n\nint slotSnapshotSaveRio(int req, rio *rdb, int *error);\n\n/* Like rdbLoadRio() but takes a filename instead of a rio stream. The\n * filename is open for reading and a rio stream object created in order\n * to do the actual loading. Moreover the ETA displayed in the INFO\n * output is initialized and finalized.\n *\n * If you pass an 'rsi' structure initialized with RDB_SAVE_INFO_INIT, the\n * loading code will fill the information fields in the structure.\n *\n * If emptyDbFunc is not NULL, it will be called to flush old db or to\n * discard partial db on error. */\nint rdbLoadWithEmptyFunc(char *filename, rdbSaveInfo *rsi, int rdbflags, void (*emptyDbFunc)(void)) {\n    FILE *fp;\n    rio rdb;\n    int retval;\n    struct stat sb;\n    int rdb_fd;\n\n    fp = fopen(filename, \"r\");\n    if (fp == NULL) {\n        if (errno == ENOENT) return RDB_NOT_EXIST;\n\n        serverLog(LL_WARNING,\"Fatal error: can't open the RDB file %s for reading: %s\", filename, strerror(errno));\n        return RDB_FAILED;\n    }\n\n    if (fstat(fileno(fp), &sb) == -1)\n        sb.st_size = 0;\n\n    loadingSetFlags(filename, sb.st_size, 0);\n    /* Note that inside loadingSetFlags(), server.loading is set.\n     * emptyDbCallback() may yield back to event-loop to reply -LOADING. */\n    if (emptyDbFunc)\n        emptyDbFunc(); /* Flush existing db. */\n    loadingFireEvent(rdbflags);\n    rioInitWithFile(&rdb,fp);\n\n    retval = rdbLoadRio(&rdb,rdbflags,rsi);\n\n    fclose(fp);\n    if (retval != C_OK && emptyDbFunc)\n        emptyDbFunc(); /* Clean up partial db. 
*/\n\n    stopLoading(retval==C_OK);\n    /* Reclaim the cache backed by rdb */\n    if (retval == C_OK && !(rdbflags & RDBFLAGS_KEEP_CACHE)) {\n        /* TODO: maybe we could combine the fopen and open into one in the future */\n        rdb_fd = open(filename, O_RDONLY);\n        if (rdb_fd >= 0) bioCreateCloseJob(rdb_fd, 0, 1);\n    }\n    return (retval==C_OK) ? RDB_OK : RDB_FAILED;\n}\n\n/* A background saving child (BGSAVE) terminated its work. Handle this.\n * This function covers the case of actual BGSAVEs. */\nstatic void backgroundSaveDoneHandlerDisk(int exitcode, int bysignal, time_t save_end) {\n    if (!bysignal && exitcode == 0) {\n        serverLog(LL_NOTICE,\n            \"Background saving terminated with success\");\n        server.dirty = server.dirty - server.dirty_before_bgsave;\n        server.lastsave = save_end;\n        server.lastbgsave_status = C_OK;\n        server.stat_rdb_consecutive_failures = 0;\n    } else if (!bysignal && exitcode != 0) {\n        serverLog(LL_WARNING, \"Background saving error\");\n        server.lastbgsave_status = C_ERR;\n        server.stat_rdb_consecutive_failures++;\n    } else {\n        mstime_t latency;\n\n        serverLog(LL_WARNING,\n            \"Background saving terminated by signal %d\", bysignal);\n        latencyStartMonitor(latency);\n        rdbRemoveTempFile(server.child_pid, 0);\n        latencyEndMonitor(latency);\n        latencyAddSampleIfNeeded(\"rdb-unlink-temp-file\",latency);\n        /* SIGUSR1 is whitelisted, so we have a way to kill a child without\n         * triggering an error condition. */\n        if (bysignal != SIGUSR1) {\n            server.lastbgsave_status = C_ERR;\n            server.stat_rdb_consecutive_failures++;\n        }\n    }\n}\n\n/* A background saving child (BGSAVE) terminated its work. Handle this.\n * This function covers the case of RDB -> Slaves socket transfers for\n * diskless replication. 
*/\nstatic void backgroundSaveDoneHandlerSocket(int exitcode, int bysignal) {\n    if (!bysignal && exitcode == 0) {\n        serverLog(LL_NOTICE,\n            \"Background RDB transfer terminated with success\");\n    } else if (!bysignal && exitcode != 0) {\n        serverLog(LL_WARNING, \"Background transfer error\");\n    } else {\n        serverLog(LL_WARNING,\n            \"Background transfer terminated by signal %d\", bysignal);\n    }\n    if (server.rdb_child_exit_pipe!=-1)\n        close(server.rdb_child_exit_pipe);\n    if (server.rdb_pipe_read != -1) {\n        aeDeleteFileEvent(server.el, server.rdb_pipe_read, AE_READABLE);\n        close(server.rdb_pipe_read);\n    }\n    server.rdb_child_exit_pipe = -1;\n    server.rdb_pipe_read = -1;\n    zfree(server.rdb_pipe_conns);\n    server.rdb_pipe_conns = NULL;\n    server.rdb_pipe_numconns = 0;\n    server.rdb_pipe_numconns_writing = 0;\n    zfree(server.rdb_pipe_buff);\n    server.rdb_pipe_buff = NULL;\n    server.rdb_pipe_bufflen = 0;\n}\n\n/* When a background RDB saving/transfer terminates, call the right handler. 
*/\nvoid backgroundSaveDoneHandler(int exitcode, int bysignal) {\n    int type = server.rdb_child_type;\n    time_t save_end = time(NULL);\n    if (server.bgsave_aborted)\n        bysignal = SIGUSR1;\n    switch(server.rdb_child_type) {\n    case RDB_CHILD_TYPE_DISK:\n        backgroundSaveDoneHandlerDisk(exitcode,bysignal,save_end);\n        break;\n    case RDB_CHILD_TYPE_SOCKET:\n        backgroundSaveDoneHandlerSocket(exitcode,bysignal);\n        break;\n    default:\n        serverPanic(\"Unknown RDB child type.\");\n        break;\n    }\n\n    server.rdb_child_type = RDB_CHILD_TYPE_NONE;\n    server.rdb_save_time_last = save_end-server.rdb_save_time_start;\n    server.rdb_save_time_start = -1;\n    server.bgsave_aborted = 0;\n    /* Possibly there are slaves waiting for a BGSAVE in order to be served\n     * (the first stage of SYNC is a bulk transfer of dump.rdb) */\n    updateSlavesWaitingBgsave((!bysignal && exitcode == 0) ? C_OK : C_ERR, type);\n}\n\n/* Kill the RDB saving child using SIGUSR1 (so that the parent will know\n * the child did not exit due to an error, but because we wanted it to), and\n * perform the needed cleanup. */\nvoid killRDBChild(void) {\n    kill(server.child_pid, SIGUSR1);\n    /* Because we are not using waitpid here (like we have in killAppendOnlyChild\n     * and TerminateModuleForkChild), all the cleanup operations are done by\n     * checkChildrenDone, which will later find that the process was killed.\n     * This includes:\n     * - resetChildState\n     * - rdbRemoveTempFile */\n\n    /* However, there's a chance the child already exited (or is about to exit), and will\n     * not receive the signal, in that case it could result in success and the done\n     * handler will override some server metrics (e.g. the dirty counter) which it\n     * shouldn't (e.g. in case of FLUSHALL), or the synchronously created RDB file. 
*/\n    server.bgsave_aborted = 1;\n}\n\n/* Spawn an RDB child that writes the RDB to the sockets of the slaves\n * that are currently in SLAVE_STATE_WAIT_BGSAVE_START state. */\nint rdbSaveToSlavesSockets(int req, rdbSaveInfo *rsi) {\n    listNode *ln;\n    listIter li;\n    pid_t childpid;\n    int pipefds[2], rdb_pipe_write = 0, safe_to_exit_pipe = 0;\n    int rdb_channel = server.repl_rdb_channel && (req & SLAVE_REQ_RDB_CHANNEL);\n    int slots_req = req & SLAVE_REQ_SLOTS_SNAPSHOT;\n\n    if (hasActiveChildProcess()) return C_ERR;\n\n    /* Even if the previous fork child exited, don't start a new one until we\n     * have drained the pipe. */\n    if (server.rdb_pipe_conns) return C_ERR;\n\n    if (!rdb_channel) {\n        /* Before forking, create a pipe that is used to transfer the rdb bytes to\n         * the parent; we can't let it write directly to the sockets, since in case\n         * of TLS we must let the parent handle a continuous TLS state when the\n         * child terminates and the parent takes over. */\n        if (anetPipe(pipefds, O_NONBLOCK, 0) == -1) return C_ERR;\n        server.rdb_pipe_read = pipefds[0]; /* read end */\n        rdb_pipe_write = pipefds[1]; /* write end */\n\n        /* create another pipe that is used by the parent to signal to the child\n         * that it can exit. */\n        if (anetPipe(pipefds, 0, 0) == -1) {\n            close(rdb_pipe_write);\n            close(server.rdb_pipe_read);\n            return C_ERR;\n        }\n        safe_to_exit_pipe = pipefds[0]; /* read end */\n        server.rdb_child_exit_pipe = pipefds[1]; /* write end */\n    }\n\n    /* Collect the connections of the replicas we want to transfer\n     * the RDB to, which are in WAIT_BGSAVE_START state. 
*/\n    int numconns = 0;\n    connection **conns = zmalloc(sizeof(*conns) * listLength(server.slaves));\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {\n            /* Check slave has the exact requirements */\n            if (slave->slave_req != req)\n                continue;\n            replicationSetupSlaveForFullResync(slave, getPsyncInitialOffset());\n            conns[numconns++] = slave->conn;\n            if (rdb_channel) {\n                /* Put the socket in blocking mode to simplify RDB transfer. */\n                connSendTimeout(slave->conn, server.repl_timeout * 1000);\n                connBlock(slave->conn);\n            }\n        }\n    }\n\n    if (!rdb_channel) {\n        server.rdb_pipe_conns = conns;\n        server.rdb_pipe_numconns = numconns;\n        server.rdb_pipe_numconns_writing = 0;\n    }\n\n    /* Create the child process. */\n    if ((childpid = redisFork(CHILD_TYPE_RDB)) == 0) {\n        /* Child */\n        int retval, dummy;\n        rio rdb;\n\n        if (rdb_channel) {\n            rioInitWithConnset(&rdb, conns, numconns);\n        } else {\n            rioInitWithFd(&rdb,rdb_pipe_write);\n            /* Close the reading part, so that if the parent crashes, the child\n             * will get a write error and exit. */\n            close(server.rdb_pipe_read);\n        }\n\n        redisSetProcTitle(\"redis-rdb-to-slaves\");\n        redisSetCpuAffinity(server.bgsave_cpulist);\n\n        /* Disable RDB compression and checksum in the fork child if requested.\n         * The parent's configuration is not affected. 
*/\n        if (req & SLAVE_REQ_RDB_NO_COMPRESS)\n            server.rdb_compression = 0;\n        if (req & SLAVE_REQ_RDB_NO_CHECKSUM)\n            server.rdb_checksum = 0;\n\n        if (req & SLAVE_REQ_SLOTS_SNAPSHOT) {\n            /* Slots snapshot is required */\n            retval = slotSnapshotSaveRio(req, &rdb, NULL);\n        } else {\n            retval = rdbSaveRioWithEOFMark(req,&rdb,NULL,rsi);\n        }\n\n        if (retval == C_OK && rioFlush(&rdb) == 0)\n            retval = C_ERR;\n\n        if (retval == C_OK) {\n            sendChildCowInfo(CHILD_INFO_TYPE_RDB_COW_SIZE, \"RDB\");\n        }\n\n        if (rdb_channel) {\n            rioFreeConnset(&rdb);\n        } else {\n            rioFreeFd(&rdb);\n            /* wake up the reader, tell it we're done. */\n            close(rdb_pipe_write);\n            close(server.rdb_child_exit_pipe); /* close write end so that we can detect the close on the parent. */\n            /* hold exit until the parent tells us it's safe. we're not expecting\n             * to read anything, just get the error when the pipe is closed. */\n            dummy = read(safe_to_exit_pipe, pipefds, 1);\n            UNUSED(dummy);\n        }\n        zfree(conns);\n        exitFromChild((retval == C_OK) ? 0 : 1, 0);\n    } else {\n        /* Parent */\n        if (childpid == -1) {\n            serverLog(LL_WARNING,\"Can't save in background: fork: %s\",\n                strerror(errno));\n\n            /* Undo the state change. 
The caller will perform cleanup on\n             * all the slaves in BGSAVE_START state, but an early call to\n             * replicationSetupSlaveForFullResync() turned it into BGSAVE_END */\n            listRewind(server.slaves,&li);\n            while((ln = listNext(&li))) {\n                client *slave = ln->value;\n                if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END) {\n                    slave->replstate = SLAVE_STATE_WAIT_BGSAVE_START;\n                }\n            }\n\n            if (!rdb_channel)  {\n                close(rdb_pipe_write);\n                close(server.rdb_pipe_read);\n                close(server.rdb_child_exit_pipe);\n                zfree(server.rdb_pipe_conns);\n                server.rdb_pipe_conns = NULL;\n                server.rdb_pipe_numconns = 0;\n                server.rdb_pipe_numconns_writing = 0;\n            }\n        } else {\n            serverLog(LL_NOTICE, \"Background RDB transfer started by pid %ld to %s\", (long)childpid,\n                rdb_channel ? (slots_req ? \"slot migration destination socket\" : \"replica socket\") :\n                              \"parent process pipe\");\n            server.rdb_save_time_start = time(NULL);\n            server.rdb_child_type = RDB_CHILD_TYPE_SOCKET;\n            if (!rdb_channel) {\n                close(rdb_pipe_write); /* close write in parent so that it can detect the close on the child. */\n                if (aeCreateFileEvent(server.el, server.rdb_pipe_read, AE_READABLE, rdbPipeReadHandler,NULL) == AE_ERR) {\n                    serverPanic(\"Unrecoverable error creating server.rdb_pipe_read file event.\");\n                }\n            }\n        }\n        if (rdb_channel)\n            zfree(conns);\n        else\n            close(safe_to_exit_pipe);\n\n        return (childpid == -1) ? C_ERR : C_OK;\n    }\n    return C_OK; /* Unreached. 
*/\n}\n\nvoid saveCommand(client *c) {\n    if (server.child_type == CHILD_TYPE_RDB) {\n        addReplyError(c,\"Background save already in progress\");\n        return;\n    }\n\n    server.stat_rdb_saves++;\n\n    rdbSaveInfo rsi, *rsiptr;\n    rsiptr = rdbPopulateSaveInfo(&rsi);\n    if (rdbSave(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE) == C_OK) {\n        addReply(c,shared.ok);\n    } else {\n        addReplyErrorObject(c,shared.err);\n    }\n}\n\n/* BGSAVE [SCHEDULE] */\nvoid bgsaveCommand(client *c) {\n    int schedule = 0;\n\n    /* The SCHEDULE option changes the behavior of BGSAVE when an AOF rewrite\n     * is in progress. Instead of returning an error a BGSAVE gets scheduled. */\n    if (c->argc > 1) {\n        if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"schedule\")) {\n            schedule = 1;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    rdbSaveInfo rsi, *rsiptr;\n    rsiptr = rdbPopulateSaveInfo(&rsi);\n\n    if (server.child_type == CHILD_TYPE_RDB) {\n        addReplyError(c,\"Background save already in progress\");\n    } else if (hasActiveChildProcess() || server.in_exec) {\n        if (schedule || server.in_exec) {\n            server.rdb_bgsave_scheduled = 1;\n            addReplyStatus(c,\"Background saving scheduled\");\n        } else {\n            addReplyError(c,\n            \"Another child process is active (AOF?): can't BGSAVE right now. \"\n            \"Use BGSAVE SCHEDULE in order to schedule a BGSAVE whenever \"\n            \"possible.\");\n        }\n    } else if (rdbSaveBackground(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE) == C_OK) {\n        addReplyStatus(c,\"Background saving started\");\n    } else {\n        addReplyErrorObject(c,shared.err);\n    }\n}\n\n/* Populate the rdbSaveInfo structure used to persist the replication\n * information inside the RDB file. 
Currently the structure explicitly\n * contains just the currently selected DB from the master stream; however,\n * if the rdbSave*() family of functions receives a NULL rsi structure,\n * the Replication ID/offset is not saved either. The function populates 'rsi',\n * which is normally stack-allocated in the caller, and returns the populated\n * pointer if the instance has a valid master client; otherwise NULL\n * is returned, and the RDB saving will not persist any replication related\n * information. */\nrdbSaveInfo *rdbPopulateSaveInfo(rdbSaveInfo *rsi) {\n    rdbSaveInfo rsi_init = RDB_SAVE_INFO_INIT;\n    *rsi = rsi_init;\n\n    /* If the instance is a master, we can populate the replication info\n     * only when repl_backlog is not NULL. If the repl_backlog is NULL,\n     * it means that the instance isn't in any replication chains. In this\n     * scenario the replication info is useless, because when a slave\n     * connects to us, the NULL repl_backlog will trigger a full\n     * synchronization, at the same time we will use a new replid and clear\n     * replid2. */\n    if (!server.masterhost && server.repl_backlog) {\n        /* Note that when server.slaveseldb is -1, it means that this master\n         * didn't apply any write commands after a full synchronization.\n         * So we can let repl_stream_db be 0, this allows a restarted slave\n         * to reload replication ID/offset, it's safe because the next write\n         * command must generate a SELECT statement. */\n        rsi->repl_stream_db = server.slaveseldb == -1 ? 0 : server.slaveseldb;\n        return rsi;\n    }\n\n    /* If the instance is a slave we need a connected master\n     * in order to fetch the currently selected DB. 
*/\n    if (server.master) {\n        rsi->repl_stream_db = server.master->db->id;\n        return rsi;\n    }\n\n    /* If we have a cached master we can use it in order to populate the\n     * replication selected DB info inside the RDB file: the slave can\n     * increment the master_repl_offset only from data arriving from the\n     * master, so if we are disconnected the offset in the cached master\n     * is valid. */\n    if (server.cached_master) {\n        rsi->repl_stream_db = server.cached_master->db->id;\n        return rsi;\n    }\n    return NULL;\n}\n"
  },
  {
    "path": "src/rdb.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __RDB_H\n#define __RDB_H\n\n#include <stdio.h>\n#include \"rio.h\"\n\n/* TBD: include only necessary headers. */\n#include \"server.h\"\n\n/* The current RDB version. When the format changes in a way that is no longer\n * backward compatible this number gets incremented. */\n#define RDB_VERSION 14\n\n/* Defines related to the dump file format. To store 32 bit lengths for short\n * keys requires a lot of space, so we check the most significant 2 bits of\n * the first byte to interpret the length:\n *\n * 00|XXXXXX => if the two MSB are 00 the len is the 6 bits of this byte\n * 01|XXXXXX XXXXXXXX =>  01, the len is 14 bits, 6 bits + 8 bits of next byte\n * 10|000000 [32 bit integer] => A full 32 bit len in net byte order will follow\n * 10|000001 [64 bit integer] => A full 64 bit len in net byte order will follow\n * 11|OBKIND this means: specially encoded object will follow. The six-bit\n *           number specifies the kind of object that follows.\n *           See the RDB_ENC_* defines.\n *\n * Lengths up to 63 are stored using a single byte; most DB keys, and many\n * values, will fit inside. 
*/\n#define RDB_6BITLEN 0\n#define RDB_14BITLEN 1\n#define RDB_32BITLEN 0x80\n#define RDB_64BITLEN 0x81\n#define RDB_ENCVAL 3\n#define RDB_LENERR UINT64_MAX\n\n/* When a length of a string object stored on disk has the first two bits\n * set, the remaining six bits specify a special encoding for the object\n * accordingly to the following defines: */\n#define RDB_ENC_INT8 0        /* 8 bit signed integer */\n#define RDB_ENC_INT16 1       /* 16 bit signed integer */\n#define RDB_ENC_INT32 2       /* 32 bit signed integer */\n#define RDB_ENC_LZF 3         /* string compressed with FASTLZ */\n\n/* Map object types to RDB object types. Macros starting with OBJ_ are for\n * memory storage and may change. Instead RDB types must be fixed because\n * we store them on disk. */\n#define RDB_TYPE_STRING 0\n#define RDB_TYPE_LIST   1\n#define RDB_TYPE_SET    2\n#define RDB_TYPE_ZSET   3\n#define RDB_TYPE_HASH   4\n#define RDB_TYPE_ZSET_2 5 /* ZSET version 2 with doubles stored in binary. */\n#define RDB_TYPE_MODULE_PRE_GA 6 /* Used in 4.0 release candidates */\n#define RDB_TYPE_MODULE_2 7 /* Module value with annotations for parsing without\n                               the generating module being loaded. */\n#define RDB_TYPE_HASH_ZIPMAP    9\n#define RDB_TYPE_LIST_ZIPLIST  10\n#define RDB_TYPE_SET_INTSET    11\n#define RDB_TYPE_ZSET_ZIPLIST  12\n#define RDB_TYPE_HASH_ZIPLIST  13\n#define RDB_TYPE_LIST_QUICKLIST 14\n#define RDB_TYPE_STREAM_LISTPACKS 15\n#define RDB_TYPE_HASH_LISTPACK 16\n#define RDB_TYPE_ZSET_LISTPACK 17\n#define RDB_TYPE_LIST_QUICKLIST_2   18\n#define RDB_TYPE_STREAM_LISTPACKS_2 19\n#define RDB_TYPE_SET_LISTPACK  20\n#define RDB_TYPE_STREAM_LISTPACKS_3 21\n#define RDB_TYPE_HASH_METADATA_PRE_GA 22      /* Hash with HFEs. Doesn't attach min TTL at start (7.4 RC) */\n#define RDB_TYPE_HASH_LISTPACK_EX_PRE_GA 23   /* Hash LP with HFEs. Doesn't attach min TTL at start (7.4 RC) */\n#define RDB_TYPE_HASH_METADATA 24             /* Hash with HFEs. 
Attach min TTL at start */\n#define RDB_TYPE_HASH_LISTPACK_EX 25          /* Hash LP with HFEs. Attach min TTL at start */\n#define RDB_TYPE_STREAM_LISTPACKS_4 26        /* Stream with IDMP support */\n#define RDB_TYPE_STREAM_LISTPACKS_5 27        /* Stream with XNACK support (NACKed entries) */\n#define RDB_TYPE_GCRA 28                      /* GCRA object */\n/* NOTE: WHEN ADDING NEW RDB TYPE, UPDATE rdbIsObjectType(), and rdb_type_string[] */\n\n/* Test if a type is an object type. */\n#define rdbIsObjectType(t) (((t) >= 0 && (t) <= 7) || ((t) >= 9 && (t) <= 28))\n\n/* Special RDB opcodes (saved/loaded with rdbSaveType/rdbLoadType). */\n#define RDB_OPCODE_KEY_META   243   /* Key metadata (module metadata classes). */\n#define RDB_OPCODE_SLOT_INFO  244   /* Individual slot info, such as slot id and size (cluster mode only). */\n#define RDB_OPCODE_FUNCTION2  245   /* function library data */\n#define RDB_OPCODE_FUNCTION_PRE_GA   246   /* old function library data for 7.0 rc1 and rc2 */\n#define RDB_OPCODE_MODULE_AUX 247   /* Module auxiliary data. */\n#define RDB_OPCODE_IDLE       248   /* LRU idle time. */\n#define RDB_OPCODE_FREQ       249   /* LFU frequency. */\n#define RDB_OPCODE_AUX        250   /* RDB aux field. */\n#define RDB_OPCODE_RESIZEDB   251   /* Hash table resize hint. */\n#define RDB_OPCODE_EXPIRETIME_MS 252    /* Expire time in milliseconds. */\n#define RDB_OPCODE_EXPIRETIME 253       /* Old expire time in seconds. */\n#define RDB_OPCODE_SELECTDB   254   /* DB number of the following keys. */\n#define RDB_OPCODE_EOF        255   /* End of the RDB file. */\n\n/* Module serialized values sub opcodes */\n#define RDB_MODULE_OPCODE_EOF   0   /* End of module value. */\n#define RDB_MODULE_OPCODE_SINT  1   /* Signed integer. */\n#define RDB_MODULE_OPCODE_UINT  2   /* Unsigned integer. */\n#define RDB_MODULE_OPCODE_FLOAT 3   /* Float. */\n#define RDB_MODULE_OPCODE_DOUBLE 4  /* Double. */\n#define RDB_MODULE_OPCODE_STRING 5  /* String. 
*/\n\n/* rdbLoad...() functions flags. */\n#define RDB_LOAD_NONE     0\n#define RDB_LOAD_ENC      (1<<0)\n#define RDB_LOAD_PLAIN    (1<<1)\n#define RDB_LOAD_SDS      (1<<2)\n\n/* flags on the purpose of rdb save or load */\n#define RDBFLAGS_NONE 0                 /* No special RDB loading or saving. */\n#define RDBFLAGS_AOF_PREAMBLE (1<<0)    /* Load/save the RDB as AOF preamble. */\n#define RDBFLAGS_REPLICATION (1<<1)     /* Load/save for SYNC. */\n#define RDBFLAGS_ALLOW_DUP (1<<2)       /* Allow duplicated keys when loading.*/\n#define RDBFLAGS_FEED_REPL (1<<3)       /* Feed replication stream when loading.*/\n#define RDBFLAGS_KEEP_CACHE (1<<4)      /* Don't reclaim cache after rdb file is generated */\n\n/* When rdbLoadObject() returns NULL, the err flag is\n * set to hold the type of error that occurred */\n#define RDB_LOAD_ERR_EMPTY_KEY       1   /* Error of empty key */\n#define RDB_LOAD_ERR_OTHER           2   /* Any other errors */\n\nssize_t rdbWriteRaw(rio *rdb, void *p, size_t len);\nint rdbSaveType(rio *rdb, unsigned char type);\nint rdbLoadType(rio *rdb);\ntime_t rdbLoadTime(rio *rdb);\nint rdbSaveLen(rio *rdb, uint64_t len);\nssize_t rdbSaveMillisecondTime(rio *rdb, long long t);\nlong long rdbLoadMillisecondTime(rio *rdb, int rdbver);\nuint64_t rdbLoadLen(rio *rdb, int *isencoded);\nint rdbLoadLenByRef(rio *rdb, int *isencoded, uint64_t *lenptr);\nint rdbSaveObjectType(rio *rdb, robj *o);\nint rdbLoadObjectType(rio *rdb);\nint rdbLoadWithEmptyFunc(char *filename, rdbSaveInfo *rsi, int rdbflags, void (*emptyDbFunc)(void));\nint rdbLoad(char *filename, rdbSaveInfo *rsi, int rdbflags);\nint rdbSaveBackground(int req, char *filename, rdbSaveInfo *rsi, int rdbflags);\nint rdbSaveToSlavesSockets(int req, rdbSaveInfo *rsi);\nvoid rdbRemoveTempFile(pid_t childpid, int from_signal);\nint rdbSaveToFile(const char *filename);\nint rdbSave(int req, char *filename, rdbSaveInfo *rsi, int rdbflags);\nssize_t rdbSaveObject(rio *rdb, robj *o, robj *key, int 
dbid);\nsize_t rdbSavedObjectLen(robj *o, robj *key, int dbid);\nrobj *rdbLoadObject(int rdbtype, rio *rdb, sds key, int dbid, int *error);\nvoid backgroundSaveDoneHandler(int exitcode, int bysignal);\nint rdbSaveKeyValuePair(rio *rdb, robj *key, robj *val, long long expiretime,int dbid);\nssize_t rdbSaveSingleModuleAux(rio *rdb, int when, moduleType *mt);\nrobj *rdbLoadCheckModuleValue(rio *rdb, char *modulename, int null_on_error);\nint rdbResolveKeyType(rio *rdb, int *type, int dbid, KeyMetaSpec *keymeta);\nrobj *rdbLoadStringObject(rio *rdb);\nssize_t rdbSaveStringObject(rio *rdb, robj *obj);\nssize_t rdbSaveRawString(rio *rdb, unsigned char *s, size_t len);\nvoid *rdbGenericLoadStringObject(rio *rdb, int flags, size_t *lenptr);\nint rdbSaveBinaryDoubleValue(rio *rdb, double val);\nint rdbLoadBinaryDoubleValue(rio *rdb, double *val);\nint rdbSaveBinaryFloatValue(rio *rdb, float val);\nint rdbLoadBinaryFloatValue(rio *rdb, float *val);\nint rdbLoadRio(rio *rdb, int rdbflags, rdbSaveInfo *rsi);\nint rdbLoadRioWithLoadingCtx(rio *rdb, int rdbflags, rdbSaveInfo *rsi, rdbLoadingCtx *rdb_loading_ctx);\nint rdbFunctionLoad(rio *rdb, int ver, functionsLibCtx* lib_ctx, int rdbflags, sds *err);\nint rdbSaveRio(int req, rio *rdb, int *error, int rdbflags, rdbSaveInfo *rsi);\nssize_t rdbSaveFunctions(rio *rdb);\nrdbSaveInfo *rdbPopulateSaveInfo(rdbSaveInfo *rsi);\n\n#endif\n"
  },
  {
    "path": "src/redis-benchmark.c",
    "content": "/* Redis benchmark utility.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <errno.h>\n#include <time.h>\n#include <sys/time.h>\n#include <signal.h>\n#include <assert.h>\n#include <math.h>\n#include <pthread.h>\n\n#include <sdscompat.h> /* Use hiredis' sds compat header that maps sds calls to their hi_ variants */\n#include <sds.h> /* Use hiredis sds. */\n#include \"ae.h\"\n#include <hiredis.h>\n#ifdef USE_OPENSSL\n#include <openssl/ssl.h>\n#include <openssl/err.h>\n#include <hiredis_ssl.h>\n#endif\n#include \"adlist.h\"\n#include \"dict.h\"\n#include \"zmalloc.h\"\n#include \"atomicvar.h\"\n#include \"crc16_slottable.h\"\n#include \"hdr_histogram.h\"\n#include \"cli_common.h\"\n#include \"mt19937-64.h\"\n\n#define UNUSED(V) ((void) V)\n#define RANDPTR_INITIAL_SIZE 8\n#define DEFAULT_LATENCY_PRECISION 3\n#define MAX_LATENCY_PRECISION 4\n#define MAX_THREADS 500\n#define CLUSTER_SLOTS 16384\n#define CONFIG_LATENCY_HISTOGRAM_MIN_VALUE 10L          /* >= 10 usecs */\n#define CONFIG_LATENCY_HISTOGRAM_MAX_VALUE 3000000L          /* <= 3 secs(us precision) */\n#define CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE 3000000L   /* <= 3 secs(us precision) */\n#define SHOW_THROUGHPUT_INTERVAL 250  /* 250ms */\n\n#define CLIENT_GET_EVENTLOOP(c) \\\n    (c->thread_id >= 0 ? 
config.threads[c->thread_id]->el : config.el)\n\nstruct benchmarkThread;\nstruct clusterNode;\nstruct redisConfig;\n\nstatic struct config {\n    aeEventLoop *el;\n    cliConnInfo conn_info;\n    const char *hostsocket;\n    int tls;\n    struct cliSSLconfig sslconfig;\n    int numclients;\n    redisAtomic int liveclients;\n    int requests;\n    redisAtomic int requests_issued;\n    redisAtomic int requests_finished;\n    redisAtomic int previous_requests_finished;\n    int last_printed_bytes;\n    long long previous_tick;\n    int keysize;\n    int datasize;\n    int randomkeys;\n    int randomkeys_keyspacelen;\n    int keepalive;\n    int pipeline;\n    long long start;\n    long long totlatency;\n    const char *title;\n    list *clients;\n    int quiet;\n    int csv;\n    int loop;\n    int idlemode;\n    sds input_dbnumstr;\n    char *tests;\n    int stdinarg; /* get last arg from stdin. (-x option) */\n    int precision;\n    int num_threads;\n    struct benchmarkThread **threads;\n    int cluster_mode;\n    int cluster_node_count;\n    struct clusterNode **cluster_nodes;\n    struct redisConfig *redis_config;\n    struct hdr_histogram* latency_histogram;\n    struct hdr_histogram* current_sec_latency_histogram;\n    redisAtomic int is_fetching_slots;\n    redisAtomic int is_updating_slots;\n    redisAtomic int slots_last_update;\n    int enable_tracking;\n    pthread_mutex_t liveclients_mutex;\n    pthread_mutex_t is_updating_slots_mutex;\n    int resp3; /* use RESP3 */\n} config;\n\ntypedef struct _client {\n    redisContext *context;\n    sds obuf;\n    char **randptr;         /* Pointers to :rand: strings inside the command buf */\n    size_t randlen;         /* Number of pointers in client->randptr */\n    size_t randfree;        /* Number of unused pointers in client->randptr */\n    char **stagptr;         /* Pointers to slot hashtags (cluster mode only) */\n    size_t staglen;         /* Number of pointers in client->stagptr */\n    size_t stagfree;  
      /* Number of unused pointers in client->stagptr */\n    size_t written;         /* Bytes of 'obuf' already written */\n    long long start;        /* Start time of a request */\n    long long latency;      /* Request latency */\n    int pending;            /* Number of pending requests (replies to consume) */\n    int prefix_pending;     /* If non-zero, number of pending prefix commands. Commands\n                               such as auth and select are prefixed to the pipeline of\n                               benchmark commands and discarded after the first send. */\n    int prefixlen;          /* Size in bytes of the pending prefix commands */\n    int thread_id;\n    struct clusterNode *cluster_node;\n    int slots_last_update;\n} *client;\n\n/* Threads. */\n\ntypedef struct benchmarkThread {\n    int index;\n    pthread_t thread;\n    aeEventLoop *el;\n} benchmarkThread;\n\n/* Cluster. */\ntypedef struct clusterNode {\n    char *ip;\n    int port;\n    sds name;\n    int flags;\n    sds replicate;  /* Master ID if node is a slave */\n    int *slots;\n    int slots_count;\n    int *updated_slots;         /* Used by updateClusterSlotsConfiguration */\n    int updated_slots_count;    /* Used by updateClusterSlotsConfiguration */\n    int replicas_count;\n    sds *migrating; /* An array of sds where even strings are slots and odd\n                     * strings are the destination node IDs. */\n    sds *importing; /* An array of sds where even strings are slots and odd\n                     * strings are the source node IDs. 
*/\n    int migrating_count; /* Length of the migrating array (migrating slots*2) */\n    int importing_count; /* Length of the importing array (importing slots*2) */\n    struct redisConfig *redis_config;\n} clusterNode;\n\ntypedef struct redisConfig {\n    sds save;\n    sds appendonly;\n} redisConfig;\n\n/* Prototypes */\nstatic void writeHandler(aeEventLoop *el, int fd, void *privdata, int mask);\nstatic void createMissingClients(client c);\nstatic benchmarkThread *createBenchmarkThread(int index);\nstatic void freeBenchmarkThread(benchmarkThread *thread);\nstatic void freeBenchmarkThreads(void);\nstatic void *execBenchmarkThread(void *ptr);\nstatic clusterNode *createClusterNode(char *ip, int port);\nstatic redisConfig *getRedisConfig(const char *ip, int port,\n                                   const char *hostsocket);\nstatic redisContext *getRedisContext(const char *ip, int port,\n                                     const char *hostsocket);\nstatic void freeRedisConfig(redisConfig *cfg);\nstatic int fetchClusterSlotsConfiguration(client c);\nstatic void updateClusterSlotsConfiguration(void);\nint showThroughput(struct aeEventLoop *eventLoop, long long id,\n                   void *clientData);\n\n/* Dict callbacks */\nstatic uint64_t dictSdsHash(const void *key);\nstatic int dictSdsKeyCompare(dictCmpCache *cache, const void *key1, const void *key2);\n\n/* Implementation */\nstatic long long ustime(void) {\n    struct timeval tv;\n    long long ust;\n\n    gettimeofday(&tv, NULL);\n    ust = ((long long)tv.tv_sec)*1000000;\n    ust += tv.tv_usec;\n    return ust;\n}\n\nstatic long long mstime(void) {\n    return ustime()/1000;\n}\n\nstatic uint64_t dictSdsHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, sdslen((char*)key));\n}\n\nstatic int dictSdsKeyCompare(dictCmpCache *cache, const void *key1, const void *key2)\n{\n    int l1,l2;\n    UNUSED(cache);\n\n    l1 = sdslen((sds)key1);\n    l2 = sdslen((sds)key2);\n    if (l1 != l2) 
return 0;\n    return memcmp(key1, key2, l1) == 0;\n}\n\nstatic redisContext *getRedisContext(const char *ip, int port,\n                                     const char *hostsocket)\n{\n    redisContext *ctx = NULL;\n    redisReply *reply =  NULL;\n    if (hostsocket == NULL)\n        ctx = redisConnect(ip, port);\n    else\n        ctx = redisConnectUnix(hostsocket);\n    if (ctx == NULL || ctx->err) {\n        fprintf(stderr,\"Could not connect to Redis at \");\n        char *err = (ctx != NULL ? ctx->errstr : \"\");\n        if (hostsocket == NULL)\n            fprintf(stderr,\"%s:%d: %s\\n\",ip,port,err);\n        else\n            fprintf(stderr,\"%s: %s\\n\",hostsocket,err);\n        goto cleanup;\n    }\n    if (config.tls==1) {\n        const char *err = NULL;\n        if (cliSecureConnection(ctx, config.sslconfig, &err) == REDIS_ERR && err) {\n            fprintf(stderr, \"Could not negotiate a TLS connection: %s\\n\", err);\n            goto cleanup;\n        }\n    }\n    if (config.conn_info.auth == NULL)\n        return ctx;\n    if (config.conn_info.user == NULL)\n        reply = redisCommand(ctx,\"AUTH %s\", config.conn_info.auth);\n    else\n        reply = redisCommand(ctx,\"AUTH %s %s\", config.conn_info.user, config.conn_info.auth);\n    if (reply != NULL) {\n        if (reply->type == REDIS_REPLY_ERROR) {\n            if (hostsocket == NULL)\n                fprintf(stderr, \"Node %s:%d replied with error:\\n%s\\n\", ip, port, reply->str);\n            else\n                fprintf(stderr, \"Node %s replied with error:\\n%s\\n\", hostsocket, reply->str);\n            freeReplyObject(reply);\n            redisFree(ctx);\n            exit(1);\n        }\n        freeReplyObject(reply);\n        return ctx;\n    }\n    fprintf(stderr, \"ERROR: failed to fetch reply from \");\n    if (hostsocket == NULL)\n        fprintf(stderr, \"%s:%d\\n\", ip, port);\n    else\n        fprintf(stderr, \"%s\\n\", hostsocket);\ncleanup:\n    
freeReplyObject(reply);\n    redisFree(ctx);\n    return NULL;\n}\n\n\n\nstatic redisConfig *getRedisConfig(const char *ip, int port,\n                                   const char *hostsocket)\n{\n    redisConfig *cfg = zcalloc(sizeof(*cfg));\n    if (!cfg) return NULL;\n    redisContext *c = NULL;\n    redisReply *reply = NULL, *sub_reply = NULL;\n    c = getRedisContext(ip, port, hostsocket);\n    if (c == NULL) {\n        freeRedisConfig(cfg);\n        exit(1);\n    }\n    redisAppendCommand(c, \"CONFIG GET %s\", \"save\");\n    redisAppendCommand(c, \"CONFIG GET %s\", \"appendonly\");\n    int abort_test = 0;\n    int i = 0;\n    void *r = NULL;\n    for (; i < 2; i++) {\n        int res = redisGetReply(c, &r);\n        if (reply) freeReplyObject(reply);\n        reply = res == REDIS_OK ? ((redisReply *) r) : NULL;\n        if (res != REDIS_OK || !r) goto fail;\n        if (reply->type == REDIS_REPLY_ERROR) {\n            goto fail;\n        }\n        if (reply->type != REDIS_REPLY_ARRAY || reply->elements < 2) goto fail;\n        sub_reply = reply->element[1];\n        char *value = sub_reply->str;\n        if (!value) value = \"\";\n        switch (i) {\n        case 0: cfg->save = sdsnew(value); break;\n        case 1: cfg->appendonly = sdsnew(value); break;\n        }\n    }\n    freeReplyObject(reply);\n    redisFree(c);\n    return cfg;\nfail:\n    if (reply && reply->type == REDIS_REPLY_ERROR &&\n        !strncmp(reply->str,\"NOAUTH\",6)) {\n        if (hostsocket == NULL)\n            fprintf(stderr, \"Node %s:%d replied with error:\\n%s\\n\", ip, port, reply->str);\n        else\n            fprintf(stderr, \"Node %s replied with error:\\n%s\\n\", hostsocket, reply->str);\n        abort_test = 1;\n    }\n    freeReplyObject(reply);\n    redisFree(c);\n    freeRedisConfig(cfg);\n    if (abort_test) exit(1);\n    return NULL;\n}\nstatic void freeRedisConfig(redisConfig *cfg) {\n    if (cfg->save) sdsfree(cfg->save);\n    if (cfg->appendonly) 
sdsfree(cfg->appendonly);\n    zfree(cfg);\n}\n\nstatic void freeClient(client c) {\n    aeEventLoop *el = CLIENT_GET_EVENTLOOP(c);\n    listNode *ln;\n    aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);\n    aeDeleteFileEvent(el,c->context->fd,AE_READABLE);\n    if (c->thread_id >= 0) {\n        int requests_finished = 0;\n        atomicGet(config.requests_finished, requests_finished);\n        if (requests_finished >= config.requests) {\n            aeStop(el);\n        }\n    }\n    redisFree(c->context);\n    sdsfree(c->obuf);\n    zfree(c->randptr);\n    zfree(c->stagptr);\n    zfree(c);\n    if (config.num_threads) pthread_mutex_lock(&(config.liveclients_mutex));\n    config.liveclients--;\n    ln = listSearchKey(config.clients,c);\n    assert(ln != NULL);\n    listDelNode(config.clients,ln);\n    if (config.num_threads) pthread_mutex_unlock(&(config.liveclients_mutex));\n}\n\nstatic void freeAllClients(void) {\n    listNode *ln = config.clients->head, *next;\n\n    while(ln) {\n        next = ln->next;\n        freeClient(ln->value);\n        ln = next;\n    }\n}\n\nstatic void resetClient(client c) {\n    aeEventLoop *el = CLIENT_GET_EVENTLOOP(c);\n    aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);\n    aeDeleteFileEvent(el,c->context->fd,AE_READABLE);\n    aeCreateFileEvent(el,c->context->fd,AE_WRITABLE,writeHandler,c);\n    c->written = 0;\n    c->pending = config.pipeline;\n}\n\nstatic void randomizeClientKey(client c) {\n    size_t i;\n\n    for (i = 0; i < c->randlen; i++) {\n        char *p = c->randptr[i]+11;\n        size_t r = 0;\n        if (config.randomkeys_keyspacelen != 0)\n            r = random() % config.randomkeys_keyspacelen;\n        size_t j;\n\n        for (j = 0; j < 12; j++) {\n            *p = '0'+r%10;\n            r/=10;\n            p--;\n        }\n    }\n}\n\nstatic void setClusterKeyHashTag(client c) {\n    assert(c->thread_id >= 0);\n    clusterNode *node = c->cluster_node;\n    assert(node);\n    int is_updating_slots = 
0;\n    atomicGet(config.is_updating_slots, is_updating_slots);\n    /* If updateClusterSlotsConfiguration is updating the slots array,\n     * call updateClusterSlotsConfiguration in order to block the thread\n     * since the mutex is locked. Once the slots have been updated by the\n     * thread that's actually performing the update, the execution of\n     * updateClusterSlotsConfiguration won't actually do anything, since\n     * the updated_slots_count array will already be NULL. */\n    if (is_updating_slots) updateClusterSlotsConfiguration();\n    int slot = node->slots[rand() % node->slots_count];\n    const char *tag = crc16_slot_table[slot];\n    int taglen = strlen(tag);\n    size_t i;\n    for (i = 0; i < c->staglen; i++) {\n        char *p = c->stagptr[i] + 1;\n        p[0] = tag[0];\n        p[1] = (taglen >= 2 ? tag[1] : '}');\n        p[2] = (taglen == 3 ? tag[2] : '}');\n    }\n}\n\nstatic void clientDone(client c) {\n    int requests_finished = 0;\n    atomicGet(config.requests_finished, requests_finished);\n    if (requests_finished >= config.requests) {\n        freeClient(c);\n        if (!config.num_threads && config.el) aeStop(config.el);\n        return;\n    }\n    if (config.keepalive) {\n        resetClient(c);\n    } else {\n        if (config.num_threads) pthread_mutex_lock(&(config.liveclients_mutex));\n        config.liveclients--;\n        createMissingClients(c);\n        config.liveclients++;\n        if (config.num_threads)\n            pthread_mutex_unlock(&(config.liveclients_mutex));\n        freeClient(c);\n    }\n}\n\nREDIS_NO_SANITIZE_MSAN(\"memory\")\nstatic void readHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    client c = privdata;\n    void *reply = NULL;\n    UNUSED(el);\n    UNUSED(fd);\n    UNUSED(mask);\n\n    /* Calculate latency only for the first read event. This means that the\n     * server already sent the reply and we need to parse it. 
Parsing overhead\n     * is not part of the latency, so calculate it only once, here. */\n    if (c->latency < 0) c->latency = ustime()-(c->start);\n\n    if (redisBufferRead(c->context) != REDIS_OK) {\n        fprintf(stderr,\"Error: %s\\n\",c->context->errstr);\n        exit(1);\n    } else {\n        while(c->pending) {\n            if (redisGetReply(c->context,&reply) != REDIS_OK) {\n                fprintf(stderr,\"Error: %s\\n\",c->context->errstr);\n                exit(1);\n            }\n            if (reply != NULL) {\n                if (reply == (void*)REDIS_REPLY_ERROR) {\n                    fprintf(stderr,\"Unexpected error reply, exiting...\\n\");\n                    exit(1);\n                }\n                redisReply *r = reply;\n                if (r->type == REDIS_REPLY_ERROR) {\n                    /* Try to update slots configuration if reply error is\n                    * MOVED/ASK/CLUSTERDOWN and the key(s) used by the command\n                    * contain(s) the slot hash tag.\n                    * If the error is not topology-update related then we\n                    * immediately exit to avoid false results. */\n                    if (c->cluster_node && c->staglen) {\n                        int fetch_slots = 0, do_wait = 0;\n                        if (!strncmp(r->str,\"MOVED\",5) || !strncmp(r->str,\"ASK\",3))\n                            fetch_slots = 1;\n                        else if (!strncmp(r->str,\"CLUSTERDOWN\",11)) {\n                            /* Usually the cluster is able to recover itself after\n                            * a CLUSTERDOWN error, so try to sleep one second\n                            * before requesting the new configuration. 
*/\n                            fetch_slots = 1;\n                            do_wait = 1;\n                            fprintf(stderr, \"Error from server %s:%d: %s.\\n\",\n                                    c->cluster_node->ip,\n                                    c->cluster_node->port,\n                                    r->str);\n                        }\n                        if (do_wait) sleep(1);\n                        if (fetch_slots && !fetchClusterSlotsConfiguration(c))\n                            exit(1);\n                    } else {\n                        if (c->cluster_node) {\n                            fprintf(stderr, \"Error from server %s:%d: %s\\n\",\n                                 c->cluster_node->ip,\n                                 c->cluster_node->port,\n                                 r->str);\n                        } else fprintf(stderr, \"Error from server: %s\\n\", r->str);\n                        exit(1);\n                    }\n                }\n\n                freeReplyObject(reply);\n                /* This is an OK for prefix commands such as auth and select.*/\n                if (c->prefix_pending > 0) {\n                    c->prefix_pending--;\n                    c->pending--;\n                    /* Discard prefix commands on first response.*/\n                    if (c->prefixlen > 0) {\n                        size_t j;\n                        sdsrange(c->obuf, c->prefixlen, -1);\n                        /* We also need to fix the pointers to the strings\n                        * we need to randomize. 
*/\n                        for (j = 0; j < c->randlen; j++)\n                            c->randptr[j] -= c->prefixlen;\n                        /* Fix the pointers to the slot hash tags */\n                        for (j = 0; j < c->staglen; j++)\n                            c->stagptr[j] -= c->prefixlen;\n                        c->prefixlen = 0;\n                    }\n                    continue;\n                }\n                int requests_finished = 0;\n                atomicGetIncr(config.requests_finished, requests_finished, 1);\n                if (requests_finished < config.requests){\n                        if (config.num_threads == 0) {\n                            hdr_record_value(\n                            config.latency_histogram,  // Histogram to record to\n                            (long)c->latency<=CONFIG_LATENCY_HISTOGRAM_MAX_VALUE ? (long)c->latency : CONFIG_LATENCY_HISTOGRAM_MAX_VALUE);  // Value to record\n                            hdr_record_value(\n                            config.current_sec_latency_histogram,  // Histogram to record to\n                            (long)c->latency<=CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE ? (long)c->latency : CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE);  // Value to record\n                        } else {\n                            hdr_record_value_atomic(\n                            config.latency_histogram,  // Histogram to record to\n                            (long)c->latency<=CONFIG_LATENCY_HISTOGRAM_MAX_VALUE ? (long)c->latency : CONFIG_LATENCY_HISTOGRAM_MAX_VALUE);  // Value to record\n                            hdr_record_value_atomic(\n                            config.current_sec_latency_histogram,  // Histogram to record to\n                            (long)c->latency<=CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE ? 
(long)c->latency : CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE);  // Value to record\n                        }\n                }\n                c->pending--;\n                if (c->pending == 0) {\n                    clientDone(c);\n                    break;\n                }\n            } else {\n                break;\n            }\n        }\n    }\n}\n\nstatic void writeHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    client c = privdata;\n    UNUSED(el);\n    UNUSED(fd);\n    UNUSED(mask);\n\n    /* Initialize request when nothing was written. */\n    if (c->written == 0) {\n        /* Enforce upper bound to number of requests. */\n        int requests_issued = 0;\n        atomicGetIncr(config.requests_issued, requests_issued, config.pipeline);\n        if (requests_issued >= config.requests) {\n            return;\n        }\n\n        /* Really initialize: randomize keys and set start time. */\n        if (config.randomkeys) randomizeClientKey(c);\n        if (config.cluster_mode && c->staglen > 0) setClusterKeyHashTag(c);\n        atomicGet(config.slots_last_update, c->slots_last_update);\n        c->start = ustime();\n        c->latency = -1;\n    }\n    const ssize_t buflen = sdslen(c->obuf);\n    const ssize_t writeLen = buflen-c->written;\n    if (writeLen > 0) {\n        void *ptr = c->obuf+c->written;\n        while(1) {\n            /* Optimistically try to write before checking if the file descriptor\n             * is actually writable. At worst we get EAGAIN. 
*/\n            const ssize_t nwritten = cliWriteConn(c->context,ptr,writeLen);\n            if (nwritten != writeLen) {\n                if (nwritten == -1 && errno != EAGAIN) {\n                    if (errno != EPIPE)\n                        fprintf(stderr, \"Error writing to the server: %s\\n\", strerror(errno));\n                    freeClient(c);\n                    return;\n                } else if (nwritten > 0) {\n                    c->written += nwritten;\n                    return;\n                }\n            } else {\n                aeDeleteFileEvent(el,c->context->fd,AE_WRITABLE);\n                aeCreateFileEvent(el,c->context->fd,AE_READABLE,readHandler,c);\n                return;\n            }\n        }\n    }\n}\n\n/* Create a benchmark client, configured to send the command passed as 'cmd' of\n * 'len' bytes.\n *\n * The command is copied N times in the client output buffer (that is reused\n * again and again to send the request to the server) according to the configured\n * pipeline size.\n *\n * Also an initial SELECT command is prepended in order to make sure the right\n * database is selected, if needed. The initial SELECT will be discarded as soon\n * as the first reply is received.\n *\n * To create a client from scratch, the 'from' pointer is set to NULL. If instead\n * we want to create a client using another client as reference, the 'from' pointer\n * points to the client to use as reference. 
In such a case the following\n * information is taken from the 'from' client:\n *\n * 1) The command line to use.\n * 2) The offsets of the __rand_int__ elements inside the command line, used\n *    for arguments randomization.\n *\n * Even when cloning another client, prefix commands are applied if needed.*/\nstatic client createClient(char *cmd, size_t len, client from, int thread_id) {\n    int j;\n    int is_cluster_client = (config.cluster_mode && thread_id >= 0);\n    client c = zmalloc(sizeof(struct _client));\n\n    const char *ip = NULL;\n    int port = 0;\n    c->cluster_node = NULL;\n    if (config.hostsocket == NULL || is_cluster_client) {\n        if (!is_cluster_client) {\n            ip = config.conn_info.hostip;\n            port = config.conn_info.hostport;\n        } else {\n            int node_idx = 0;\n            if (config.num_threads < config.cluster_node_count)\n                node_idx = config.liveclients % config.cluster_node_count;\n            else\n                node_idx = thread_id % config.cluster_node_count;\n            clusterNode *node = config.cluster_nodes[node_idx];\n            assert(node != NULL);\n            ip = (const char *) node->ip;\n            port = node->port;\n            c->cluster_node = node;\n        }\n        c->context = redisConnectNonBlock(ip,port);\n    } else {\n        c->context = redisConnectUnixNonBlock(config.hostsocket);\n    }\n    if (c->context->err) {\n        fprintf(stderr,\"Could not connect to Redis at \");\n        if (config.hostsocket == NULL || is_cluster_client)\n            fprintf(stderr,\"%s:%d: %s\\n\",ip,port,c->context->errstr);\n        else\n            fprintf(stderr,\"%s: %s\\n\",config.hostsocket,c->context->errstr);\n        exit(1);\n    }\n    if (config.tls==1) {\n        const char *err = NULL;\n        if (cliSecureConnection(c->context, config.sslconfig, &err) == REDIS_ERR && err) {\n            fprintf(stderr, \"Could not negotiate a TLS connection: %s\\n\", 
err);\n            exit(1);\n        }\n    }\n    c->thread_id = thread_id;\n    /* Suppress hiredis cleanup of unused buffers for max speed. */\n    c->context->reader->maxbuf = 0;\n\n    /* Build the request buffer:\n     * Queue N requests according to the pipeline size, or simply clone\n     * the example client buffer. */\n    c->obuf = sdsempty();\n    /* Prefix the request buffer with AUTH and/or SELECT commands, if applicable.\n     * These commands are discarded after the first response, so if the client is\n     * reused the commands will not be used again. */\n    c->prefix_pending = 0;\n    if (config.conn_info.auth) {\n        char *buf = NULL;\n        int len;\n        if (config.conn_info.user == NULL)\n            len = redisFormatCommand(&buf, \"AUTH %s\", config.conn_info.auth);\n        else\n            len = redisFormatCommand(&buf, \"AUTH %s %s\",\n                                     config.conn_info.user, config.conn_info.auth);\n        c->obuf = sdscatlen(c->obuf, buf, len);\n        free(buf);\n        c->prefix_pending++;\n    }\n\n    if (config.enable_tracking) {\n        char *buf = NULL;\n        int len = redisFormatCommand(&buf, \"CLIENT TRACKING on\");\n        c->obuf = sdscatlen(c->obuf, buf, len);\n        free(buf);\n        c->prefix_pending++;\n    }\n\n    /* If a DB number other than zero is selected, prefix our request\n     * buffer with the SELECT command, which will be discarded the first\n     * time the replies are received, so if the client is reused the\n     * SELECT command will not be used again. 
*/\n    if (config.conn_info.input_dbnum != 0 && !is_cluster_client) {\n        c->obuf = sdscatprintf(c->obuf,\"*2\\r\\n$6\\r\\nSELECT\\r\\n$%d\\r\\n%s\\r\\n\",\n            (int)sdslen(config.input_dbnumstr),config.input_dbnumstr);\n        c->prefix_pending++;\n    }\n\n    if (config.resp3) {\n        char *buf = NULL;\n        int len = redisFormatCommand(&buf, \"HELLO 3\");\n        c->obuf = sdscatlen(c->obuf, buf, len);\n        free(buf);\n        c->prefix_pending++;\n    }\n\n    c->prefixlen = sdslen(c->obuf);\n    /* Append the request itself. */\n    if (from) {\n        c->obuf = sdscatlen(c->obuf,\n            from->obuf+from->prefixlen,\n            sdslen(from->obuf)-from->prefixlen);\n    } else {\n        for (j = 0; j < config.pipeline; j++)\n            c->obuf = sdscatlen(c->obuf,cmd,len);\n    }\n\n    c->written = 0;\n    c->pending = config.pipeline+c->prefix_pending;\n    c->randptr = NULL;\n    c->randlen = 0;\n    c->stagptr = NULL;\n    c->staglen = 0;\n\n    /* Find substrings in the output buffer that need to be randomized. */\n    if (config.randomkeys) {\n        if (from) {\n            c->randlen = from->randlen;\n            c->randfree = 0;\n            c->randptr = zmalloc(sizeof(char*)*c->randlen);\n            /* copy the offsets. */\n            for (j = 0; j < (int)c->randlen; j++) {\n                c->randptr[j] = c->obuf + (from->randptr[j]-from->obuf);\n                /* Adjust for the different select prefix length. 
*/\n                c->randptr[j] += c->prefixlen - from->prefixlen;\n            }\n        } else {\n            char *p = c->obuf;\n\n            c->randlen = 0;\n            c->randfree = RANDPTR_INITIAL_SIZE;\n            c->randptr = zmalloc(sizeof(char*)*c->randfree);\n            while ((p = strstr(p,\"__rand_int__\")) != NULL) {\n                if (c->randfree == 0) {\n                    c->randptr = zrealloc(c->randptr,sizeof(char*)*c->randlen*2);\n                    c->randfree += c->randlen;\n                }\n                c->randptr[c->randlen++] = p;\n                c->randfree--;\n                p += 12; /* 12 is strlen(\"__rand_int__\"). */\n            }\n        }\n    }\n    /* If cluster mode is enabled, set slot hashtag pointers. */\n    if (config.cluster_mode) {\n        if (from) {\n            c->staglen = from->staglen;\n            c->stagfree = 0;\n            c->stagptr = zmalloc(sizeof(char*)*c->staglen);\n            /* copy the offsets. */\n            for (j = 0; j < (int)c->staglen; j++) {\n                c->stagptr[j] = c->obuf + (from->stagptr[j]-from->obuf);\n                /* Adjust for the different select prefix length. */\n                c->stagptr[j] += c->prefixlen - from->prefixlen;\n            }\n        } else {\n            char *p = c->obuf;\n\n            c->staglen = 0;\n            c->stagfree = RANDPTR_INITIAL_SIZE;\n            c->stagptr = zmalloc(sizeof(char*)*c->stagfree);\n            while ((p = strstr(p,\"{tag}\")) != NULL) {\n                if (c->stagfree == 0) {\n                    c->stagptr = zrealloc(c->stagptr,\n                                          sizeof(char*) * c->staglen*2);\n                    c->stagfree += c->staglen;\n                }\n                c->stagptr[c->staglen++] = p;\n                c->stagfree--;\n                p += 5; /* 5 is strlen(\"{tag}\"). 
*/\n            }\n        }\n    }\n    aeEventLoop *el = NULL;\n    if (thread_id < 0) el = config.el;\n    else {\n        benchmarkThread *thread = config.threads[thread_id];\n        el = thread->el;\n    }\n    if (config.idlemode == 0)\n        aeCreateFileEvent(el,c->context->fd,AE_WRITABLE,writeHandler,c);\n    else\n        /* In idle mode, clients still need to register readHandler for catching errors */\n        aeCreateFileEvent(el,c->context->fd,AE_READABLE,readHandler,c);\n\n    listAddNodeTail(config.clients,c);\n    atomicIncr(config.liveclients, 1);\n    atomicGet(config.slots_last_update, c->slots_last_update);\n    return c;\n}\n\nstatic void createMissingClients(client c) {\n    int n = 0;\n    while(config.liveclients < config.numclients) {\n        int thread_id = -1;\n        if (config.num_threads)\n            thread_id = config.liveclients % config.num_threads;\n        createClient(NULL,0,c,thread_id);\n\n        /* Listen backlog is quite limited on most systems */\n        if (++n > 64) {\n            usleep(50000);\n            n = 0;\n        }\n    }\n}\n\nstatic void showLatencyReport(void) {\n    if (config.latency_histogram->total_count == 0) {\n        if (config.csv) {\n            printf(\"\\\"%s\\\",\\\"0.00\\\",\\\"0.000\\\",\\\"0.000\\\",\\\"0.000\\\",\\\"0.000\\\",\\\"0.000\\\",\\\"0.000\\\"\\n\", config.title);\n        } else if (config.quiet) {\n            printf(\"%*s\\r\", config.last_printed_bytes, \" \"); // ensure there is a clean line\n            printf(\"%s: 0.00 requests per second, p50=0.000 msec\\n\", config.title);\n        } else {\n            printf(\"%*s\\r\", config.last_printed_bytes, \" \"); // ensure there is a clean line\n            printf(\"====== %s ======\\n\", config.title);\n            printf(\"No latency samples collected\\n\");\n        }\n        return;\n    }\n\n    const float reqpersec = config.totlatency > 0 ? 
(float)config.requests_finished/((float)config.totlatency/1000.0f) : 0.0f;\n    const float p0 = ((float) hdr_min(config.latency_histogram))/1000.0f;\n    const float p50 = hdr_value_at_percentile(config.latency_histogram, 50.0 )/1000.0f;\n    const float p95 = hdr_value_at_percentile(config.latency_histogram, 95.0 )/1000.0f;\n    const float p99 = hdr_value_at_percentile(config.latency_histogram, 99.0 )/1000.0f;\n    const float p100 = ((float) hdr_max(config.latency_histogram))/1000.0f;\n    const float avg = hdr_mean(config.latency_histogram)/1000.0f;\n\n    if (!config.quiet && !config.csv) {\n        printf(\"%*s\\r\", config.last_printed_bytes, \" \"); // ensure there is a clean line\n        printf(\"====== %s ======\\n\", config.title);\n        printf(\"  %d requests completed in %.2f seconds\\n\", config.requests_finished,\n            (float)config.totlatency/1000);\n        printf(\"  %d parallel clients\\n\", config.numclients);\n        printf(\"  %d bytes payload\\n\", config.datasize);\n        printf(\"  keep alive: %d\\n\", config.keepalive);\n        if (config.cluster_mode) {\n            printf(\"  cluster mode: yes (%d masters)\\n\",\n                   config.cluster_node_count);\n            int m ;\n            for (m = 0; m < config.cluster_node_count; m++) {\n                clusterNode *node =  config.cluster_nodes[m];\n                redisConfig *cfg = node->redis_config;\n                if (cfg == NULL) continue;\n                printf(\"  node [%d] configuration:\\n\",m );\n                printf(\"    save: %s\\n\",\n                    sdslen(cfg->save) ? 
cfg->save : \"NONE\");\n                printf(\"    appendonly: %s\\n\", cfg->appendonly);\n            }\n        } else {\n            if (config.redis_config) {\n                printf(\"  host configuration \\\"save\\\": %s\\n\",\n                       config.redis_config->save);\n                printf(\"  host configuration \\\"appendonly\\\": %s\\n\",\n                       config.redis_config->appendonly);\n            }\n        }\n        printf(\"  multi-thread: %s\\n\", (config.num_threads ? \"yes\" : \"no\"));\n        if (config.num_threads)\n            printf(\"  threads: %d\\n\", config.num_threads);\n\n        printf(\"\\n\");\n        printf(\"Latency by percentile distribution:\\n\");\n        struct hdr_iter iter;\n        long long previous_cumulative_count = -1;\n        const long long total_count = config.latency_histogram->total_count;\n        hdr_iter_percentile_init(&iter, config.latency_histogram, 1);\n        struct hdr_iter_percentiles *percentiles = &iter.specifics.percentiles;\n        while (hdr_iter_next(&iter))\n        {\n            const double value = iter.highest_equivalent_value / 1000.0f;\n            const double percentile = percentiles->percentile;\n            const long long cumulative_count = iter.cumulative_count;\n            if( previous_cumulative_count != cumulative_count || cumulative_count == total_count ){\n                printf(\"%3.3f%% <= %.3f milliseconds (cumulative count %lld)\\n\", percentile, value, cumulative_count);\n            }\n            previous_cumulative_count = cumulative_count;\n        }\n        printf(\"\\n\");\n        printf(\"Cumulative distribution of latencies:\\n\");\n        previous_cumulative_count = -1;\n        hdr_iter_linear_init(&iter, config.latency_histogram, 100);\n        while (hdr_iter_next(&iter))\n        {\n            const double value = iter.highest_equivalent_value / 1000.0f;\n            const long long cumulative_count = iter.cumulative_count;\n        
    const double percentile = ((double)cumulative_count/(double)total_count)*100.0;\n            if( previous_cumulative_count != cumulative_count || cumulative_count == total_count ){\n                printf(\"%3.3f%% <= %.3f milliseconds (cumulative count %lld)\\n\", percentile, value, cumulative_count);\n            }\n            /* Above 2 milliseconds of latency, splitting percentages by\n             * decimals would just add a lot of noise to the output. */\n            if(iter.highest_equivalent_value > 2000){\n                hdr_iter_linear_set_value_units_per_bucket(&iter,1000);\n            }\n            previous_cumulative_count = cumulative_count;\n        }\n        printf(\"\\n\");\n        printf(\"Summary:\\n\");\n        printf(\"  throughput summary: %.2f requests per second\\n\", reqpersec);\n        printf(\"  latency summary (msec):\\n\");\n        printf(\"    %9s %9s %9s %9s %9s %9s\\n\", \"avg\", \"min\", \"p50\", \"p95\", \"p99\", \"max\");\n        printf(\"    %9.3f %9.3f %9.3f %9.3f %9.3f %9.3f\\n\", avg, p0, p50, p95, p99, p100);\n    } else if (config.csv) {\n        printf(\"\\\"%s\\\",\\\"%.2f\\\",\\\"%.3f\\\",\\\"%.3f\\\",\\\"%.3f\\\",\\\"%.3f\\\",\\\"%.3f\\\",\\\"%.3f\\\"\\n\", config.title, reqpersec, avg, p0, p50, p95, p99, p100);\n    } else {\n        printf(\"%*s\\r\", config.last_printed_bytes, \" \"); // ensure there is a clean line\n        printf(\"%s: %.2f requests per second, p50=%.3f msec\\n\", config.title, reqpersec, p50);\n    }\n}\n\nstatic void initBenchmarkThreads(void) {\n    int i;\n    if (config.threads) freeBenchmarkThreads();\n    config.threads = zmalloc(config.num_threads * sizeof(benchmarkThread*));\n    for (i = 0; i < config.num_threads; i++) {\n        benchmarkThread *thread = createBenchmarkThread(i);\n        config.threads[i] = thread;\n    }\n}\n\nstatic void startBenchmarkThreads(void) {\n    int i;\n    for (i = 0; i < config.num_threads; i++) {\n        benchmarkThread *t = 
config.threads[i];\n        if (pthread_create(&(t->thread), NULL, execBenchmarkThread, t)){\n            fprintf(stderr, \"FATAL: Failed to start thread %d.\\n\", i);\n            exit(1);\n        }\n    }\n    for (i = 0; i < config.num_threads; i++)\n        pthread_join(config.threads[i]->thread, NULL);\n}\n\nstatic void benchmark(const char *title, char *cmd, int len) {\n    client c;\n\n    config.title = title;\n    config.requests_issued = 0;\n    config.requests_finished = 0;\n    config.previous_requests_finished = 0;\n    config.last_printed_bytes = 0;\n    hdr_init(\n        CONFIG_LATENCY_HISTOGRAM_MIN_VALUE,  // Minimum value\n        CONFIG_LATENCY_HISTOGRAM_MAX_VALUE,  // Maximum value\n        config.precision,  // Number of significant figures\n        &config.latency_histogram);  // Pointer to initialise\n    hdr_init(\n        CONFIG_LATENCY_HISTOGRAM_MIN_VALUE,  // Minimum value\n        CONFIG_LATENCY_HISTOGRAM_INSTANT_MAX_VALUE,  // Maximum value\n        config.precision,  // Number of significant figures\n        &config.current_sec_latency_histogram);  // Pointer to initialise\n\n    if (config.num_threads) initBenchmarkThreads();\n\n    int thread_id = config.num_threads > 0 ? 0 : -1;\n    c = createClient(cmd,len,NULL,thread_id);\n    createMissingClients(c);\n\n    config.start = mstime();\n    if (!config.num_threads) aeMain(config.el);\n    else startBenchmarkThreads();\n    config.totlatency = mstime()-config.start;\n\n    showLatencyReport();\n    freeAllClients();\n    if (config.threads) freeBenchmarkThreads();\n    if (config.current_sec_latency_histogram) hdr_close(config.current_sec_latency_histogram);\n    if (config.latency_histogram) hdr_close(config.latency_histogram);\n\n}\n\n/* Thread functions. 
*/\n\nstatic benchmarkThread *createBenchmarkThread(int index) {\n    benchmarkThread *thread = zmalloc(sizeof(*thread));\n    if (thread == NULL) return NULL;\n    thread->index = index;\n    thread->el = aeCreateEventLoop(1024*10);\n    aeCreateTimeEvent(thread->el,1,showThroughput,(void *)thread,NULL);\n    return thread;\n}\n\nstatic void freeBenchmarkThread(benchmarkThread *thread) {\n    if (thread->el) aeDeleteEventLoop(thread->el);\n    zfree(thread);\n}\n\nstatic void freeBenchmarkThreads(void) {\n    int i = 0;\n    for (; i < config.num_threads; i++) {\n        benchmarkThread *thread = config.threads[i];\n        if (thread) freeBenchmarkThread(thread);\n    }\n    zfree(config.threads);\n    config.threads = NULL;\n}\n\nstatic void *execBenchmarkThread(void *ptr) {\n    benchmarkThread *thread = (benchmarkThread *) ptr;\n    aeMain(thread->el);\n    return NULL;\n}\n\n/* Cluster helper functions. */\n\nstatic clusterNode *createClusterNode(char *ip, int port) {\n    clusterNode *node = zmalloc(sizeof(*node));\n    if (!node) return NULL;\n    node->ip = ip;\n    node->port = port;\n    node->name = NULL;\n    node->flags = 0;\n    node->replicate = NULL;\n    node->replicas_count = 0;\n    node->slots = zmalloc(CLUSTER_SLOTS * sizeof(int));\n    node->slots_count = 0;\n    node->updated_slots = NULL;\n    node->updated_slots_count = 0;\n    node->migrating = NULL;\n    node->importing = NULL;\n    node->migrating_count = 0;\n    node->importing_count = 0;\n    node->redis_config = NULL;\n    return node;\n}\n\nstatic void freeClusterNode(clusterNode *node) {\n    int i;\n    if (node->name) sdsfree(node->name);\n    if (node->replicate) sdsfree(node->replicate);\n    if (node->migrating != NULL) {\n        for (i = 0; i < node->migrating_count; i++) sdsfree(node->migrating[i]);\n        zfree(node->migrating);\n    }\n    if (node->importing != NULL) {\n        for (i = 0; i < node->importing_count; i++) sdsfree(node->importing[i]);\n        
zfree(node->importing);\n    }\n    /* If the node is not the reference node (which uses the address from\n     * config.conn_info.hostip and config.conn_info.hostport), then the node ip\n     * has been allocated by fetchClusterConfiguration, so it must be freed. */\n    if (node->ip && strcmp(node->ip, config.conn_info.hostip) != 0) sdsfree(node->ip);\n    if (node->redis_config != NULL) freeRedisConfig(node->redis_config);\n    zfree(node->slots);\n    zfree(node);\n}\n\nstatic void freeClusterNodes(void) {\n    int i = 0;\n    for (; i < config.cluster_node_count; i++) {\n        clusterNode *n = config.cluster_nodes[i];\n        if (n) freeClusterNode(n);\n    }\n    zfree(config.cluster_nodes);\n    config.cluster_nodes = NULL;\n}\n\nstatic clusterNode **addClusterNode(clusterNode *node) {\n    int count = config.cluster_node_count + 1;\n    config.cluster_nodes = zrealloc(config.cluster_nodes,\n                                    count * sizeof(*node));\n    if (!config.cluster_nodes) return NULL;\n    config.cluster_nodes[config.cluster_node_count++] = node;\n    return config.cluster_nodes;\n}\n\n/* TODO: This should be refactored to use CLUSTER SLOTS, since the\n * migrating/importing information is not used anyway.\n */\nstatic int fetchClusterConfiguration(void) {\n    int success = 1;\n    redisContext *ctx = NULL;\n    redisReply *reply = NULL;\n    ctx = getRedisContext(config.conn_info.hostip, config.conn_info.hostport, config.hostsocket);\n    if (ctx == NULL) {\n        exit(1);\n    }\n    clusterNode *firstNode = createClusterNode((char *) config.conn_info.hostip,\n                                               config.conn_info.hostport);\n    if (!firstNode) {success = 0; goto cleanup;}\n    reply = redisCommand(ctx, \"CLUSTER NODES\");\n    success = (reply != NULL);\n    if (!success) goto cleanup;\n    success = (reply->type != REDIS_REPLY_ERROR);\n    if (!success) {\n        if (config.hostsocket == NULL) {\n            fprintf(stderr, \"Cluster 
node %s:%d replied with error:\\n%s\\n\",\n                    config.conn_info.hostip, config.conn_info.hostport, reply->str);\n        } else {\n            fprintf(stderr, \"Cluster node %s replied with error:\\n%s\\n\",\n                    config.hostsocket, reply->str);\n        }\n        goto cleanup;\n    }\n    char *lines = reply->str, *p, *line;\n    while ((p = strstr(lines, \"\\n\")) != NULL) {\n        *p = '\\0';\n        line = lines;\n        lines = p + 1;\n        char *name = NULL, *addr = NULL, *flags = NULL, *master_id = NULL;\n        int i = 0;\n        while ((p = strchr(line, ' ')) != NULL) {\n            *p = '\\0';\n            char *token = line;\n            line = p + 1;\n            switch(i++){\n            case 0: name = token; break;\n            case 1: addr = token; break;\n            case 2: flags = token; break;\n            case 3: master_id = token; break;\n            }\n            if (i == 8) break; // Slots\n        }\n        if (!flags) {\n            fprintf(stderr, \"Invalid CLUSTER NODES reply: missing flags.\\n\");\n            success = 0;\n            goto cleanup;\n        }\n        int myself = (strstr(flags, \"myself\") != NULL);\n        int is_replica = (strstr(flags, \"slave\") != NULL ||\n                         (master_id != NULL && master_id[0] != '-'));\n        if (is_replica) continue;\n        if (addr == NULL) {\n            fprintf(stderr, \"Invalid CLUSTER NODES reply: missing addr.\\n\");\n            success = 0;\n            goto cleanup;\n        }\n        clusterNode *node = NULL;\n        char *ip = NULL;\n        int port = 0;\n        char *paddr = strrchr(addr, ':');\n        if (paddr != NULL) {\n            *paddr = '\\0';\n            ip = addr;\n            addr = paddr + 1;\n            /* If internal bus is specified, then just drop it. 
*/\n            if ((paddr = strchr(addr, '@')) != NULL) *paddr = '\\0';\n            port = atoi(addr);\n        }\n        if (myself) {\n            node = firstNode;\n            if (ip != NULL && strcmp(node->ip, ip) != 0) {\n                node->ip = sdsnew(ip);\n                node->port = port;\n            }\n        } else {\n            node = createClusterNode(sdsnew(ip), port);\n        }\n        if (node == NULL) {\n            success = 0;\n            goto cleanup;\n        }\n        if (name != NULL) node->name = sdsnew(name);\n        if (i == 8) {\n            int remaining = strlen(line);\n            while (remaining > 0) {\n                p = strchr(line, ' ');\n                if (p == NULL) p = line + remaining;\n                remaining -= (p - line);\n\n                char *slotsdef = line;\n                *p = '\\0';\n                if (remaining) {\n                    line = p + 1;\n                    remaining--;\n                } else line = p;\n                char *dash = NULL;\n                if (slotsdef[0] == '[') {\n                    slotsdef++;\n                    if ((p = strstr(slotsdef, \"->-\"))) { // Migrating\n                        *p = '\\0';\n                        p += 3;\n                        char *closing_bracket = strchr(p, ']');\n                        if (closing_bracket) *closing_bracket = '\\0';\n                        sds slot = sdsnew(slotsdef);\n                        sds dst = sdsnew(p);\n                        node->migrating_count += 2;\n                        node->migrating =\n                            zrealloc(node->migrating,\n                                (node->migrating_count * sizeof(sds)));\n                        node->migrating[node->migrating_count - 2] =\n                            slot;\n                        node->migrating[node->migrating_count - 1] =\n                            dst;\n                    }  else if ((p = strstr(slotsdef, \"-<-\"))) 
{//Importing\n                        *p = '\\0';\n                        p += 3;\n                        char *closing_bracket = strchr(p, ']');\n                        if (closing_bracket) *closing_bracket = '\\0';\n                        sds slot = sdsnew(slotsdef);\n                        sds src = sdsnew(p);\n                        node->importing_count += 2;\n                        node->importing = zrealloc(node->importing,\n                            (node->importing_count * sizeof(sds)));\n                        node->importing[node->importing_count - 2] =\n                            slot;\n                        node->importing[node->importing_count - 1] =\n                            src;\n                    }\n                } else if ((dash = strchr(slotsdef, '-')) != NULL) {\n                    p = dash;\n                    int start, stop;\n                    *p = '\\0';\n                    start = atoi(slotsdef);\n                    stop = atoi(p + 1);\n                    while (start <= stop) {\n                        int slot = start++;\n                        node->slots[node->slots_count++] = slot;\n                    }\n                } else if (p > slotsdef) {\n                    int slot = atoi(slotsdef);\n                    node->slots[node->slots_count++] = slot;\n                }\n            }\n        }\n        if (node->slots_count == 0) {\n            fprintf(stderr,\n                    \"WARNING: Master node %s:%d has no slots, skipping...\\n\",\n                    node->ip, node->port);\n            continue;\n        }\n        if (!addClusterNode(node)) {\n            success = 0;\n            goto cleanup;\n        }\n    }\ncleanup:\n    if (ctx) redisFree(ctx);\n    if (!success) {\n        if (config.cluster_nodes) freeClusterNodes();\n    }\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\n/* Request the current cluster slots configuration by calling CLUSTER SLOTS\n * and 
atomically update the slots after a successful reply. */\nstatic int fetchClusterSlotsConfiguration(client c) {\n    UNUSED(c);\n    int success = 1, is_fetching_slots = 0, last_update = 0;\n    size_t i;\n    atomicGet(config.slots_last_update, last_update);\n    if (c->slots_last_update < last_update) {\n        c->slots_last_update = last_update;\n        return -1;\n    }\n    redisReply *reply = NULL;\n    atomicGetIncr(config.is_fetching_slots, is_fetching_slots, 1);\n    if (is_fetching_slots) return -1; //TODO: use other codes || errno ?\n    atomicSet(config.is_fetching_slots, 1);\n    fprintf(stderr,\n            \"WARNING: Cluster slots configuration changed, fetching new one...\\n\");\n    const char *errmsg = \"Failed to update cluster slots configuration\";\n    static dictType dtype = {\n        dictSdsHash,               /* hash function */\n        NULL,                      /* key dup */\n        NULL,                      /* val dup */\n        dictSdsKeyCompare,         /* key compare */\n        NULL,                      /* key destructor */\n        NULL,                      /* val destructor */\n        NULL                       /* allow to expand */\n    };\n    /* printf(\"[%d] fetchClusterSlotsConfiguration\\n\", c->thread_id); */\n    dict *masters = dictCreate(&dtype);\n    redisContext *ctx = NULL;\n    for (i = 0; i < (size_t) config.cluster_node_count; i++) {\n        clusterNode *node = config.cluster_nodes[i];\n        assert(node->ip != NULL);\n        assert(node->name != NULL);\n        assert(node->port);\n        /* Use first node as entry point to connect to. 
*/\n        if (ctx == NULL) {\n            ctx = getRedisContext(node->ip, node->port, NULL);\n            if (!ctx) {\n                success = 0;\n                goto cleanup;\n            }\n        }\n        if (node->updated_slots != NULL)\n            zfree(node->updated_slots);\n        node->updated_slots = NULL;\n        node->updated_slots_count = 0;\n        dictReplace(masters, node->name, node) ;\n    }\n    reply = redisCommand(ctx, \"CLUSTER SLOTS\");\n    if (reply == NULL || reply->type == REDIS_REPLY_ERROR) {\n        success = 0;\n        if (reply)\n            fprintf(stderr,\"%s\\nCLUSTER SLOTS ERROR: %s\\n\",errmsg,reply->str);\n        goto cleanup;\n    }\n    assert(reply->type == REDIS_REPLY_ARRAY);\n    for (i = 0; i < reply->elements; i++) {\n        redisReply *r = reply->element[i];\n        assert(r->type == REDIS_REPLY_ARRAY);\n        assert(r->elements >= 3);\n        int from, to, slot;\n        from = r->element[0]->integer;\n        to = r->element[1]->integer;\n        redisReply *nr =  r->element[2];\n        assert(nr->type == REDIS_REPLY_ARRAY && nr->elements >= 3);\n        assert(nr->element[2]->str != NULL);\n        sds name =  sdsnew(nr->element[2]->str);\n        dictEntry *entry = dictFind(masters, name);\n        if (entry == NULL) {\n            success = 0;\n            fprintf(stderr, \"%s: could not find node with ID %s in current \"\n                            \"configuration.\\n\", errmsg, name);\n            if (name) sdsfree(name);\n            goto cleanup;\n        }\n        sdsfree(name);\n        clusterNode *node = dictGetVal(entry);\n        if (node->updated_slots == NULL)\n            node->updated_slots = zcalloc(CLUSTER_SLOTS * sizeof(int));\n        for (slot = from; slot <= to; slot++)\n            node->updated_slots[node->updated_slots_count++] = slot;\n    }\n    updateClusterSlotsConfiguration();\ncleanup:\n    freeReplyObject(reply);\n    redisFree(ctx);\n    dictRelease(masters);\n    
atomicSet(config.is_fetching_slots, 0);\n    return success;\n}\n\n/* Atomically update the new slots configuration. */\nstatic void updateClusterSlotsConfiguration(void) {\n    pthread_mutex_lock(&config.is_updating_slots_mutex);\n    atomicSet(config.is_updating_slots, 1);\n    int i;\n    for (i = 0; i < config.cluster_node_count; i++) {\n        clusterNode *node = config.cluster_nodes[i];\n        if (node->updated_slots != NULL) {\n            int *oldslots = node->slots;\n            node->slots = node->updated_slots;\n            node->slots_count = node->updated_slots_count;\n            node->updated_slots = NULL;\n            node->updated_slots_count = 0;\n            zfree(oldslots);\n        }\n    }\n    atomicSet(config.is_updating_slots, 0);\n    atomicIncr(config.slots_last_update, 1);\n    pthread_mutex_unlock(&config.is_updating_slots_mutex);\n}\n\n/* Generate random data for redis benchmark. See #7196. */\nstatic void genBenchmarkRandomData(char *data, int count) {\n    static uint32_t state = 1234;\n    int i = 0;\n\n    while (count--) {\n        state = (state*1103515245+12345);\n        data[i++] = '0'+((state>>16)&63);\n    }\n}\n\n/* Returns number of consumed options. 
*/\nint parseOptions(int argc, char **argv) {\n    int i;\n    int lastarg;\n    int exit_status = 1;\n    char *tls_usage;\n\n    for (i = 1; i < argc; i++) {\n        lastarg = (i == (argc-1));\n\n        if (!strcmp(argv[i],\"-c\")) {\n            if (lastarg) goto invalid;\n            config.numclients = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"-v\") || !strcmp(argv[i], \"--version\")) {\n            sds version = cliVersion();\n            printf(\"redis-benchmark %s\\n\", version);\n            sdsfree(version);\n            exit(0);\n        } else if (!strcmp(argv[i],\"-n\")) {\n            if (lastarg) goto invalid;\n            config.requests = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"-k\")) {\n            if (lastarg) goto invalid;\n            config.keepalive = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"-h\")) {\n            if (lastarg) goto invalid;\n            sdsfree(config.conn_info.hostip);\n            config.conn_info.hostip = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-p\")) {\n            if (lastarg) goto invalid;\n            config.conn_info.hostport = atoi(argv[++i]);\n            if (config.conn_info.hostport < 0 || config.conn_info.hostport > 65535) {\n                fprintf(stderr, \"Invalid server port.\\n\");\n                exit(1);\n            }\n        } else if (!strcmp(argv[i],\"-s\")) {\n            if (lastarg) goto invalid;\n            config.hostsocket = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"-x\")) {\n            config.stdinarg = 1;\n        } else if (!strcmp(argv[i],\"-a\") ) {\n            if (lastarg) goto invalid;\n            config.conn_info.auth = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"--user\")) {\n            if (lastarg) goto invalid;\n            config.conn_info.user = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-u\") && !lastarg) {\n            
parseRedisUri(argv[++i],\"redis-benchmark\",&config.conn_info,&config.tls);\n            if (config.conn_info.hostport < 0 || config.conn_info.hostport > 65535) {\n                fprintf(stderr, \"Invalid server port.\\n\");\n                exit(1);\n            }\n            config.input_dbnumstr = sdsfromlonglong(config.conn_info.input_dbnum);\n        } else if (!strcmp(argv[i],\"-3\")) {\n            config.resp3 = 1;\n        } else if (!strcmp(argv[i],\"-d\")) {\n            if (lastarg) goto invalid;\n            config.datasize = atoi(argv[++i]);\n            if (config.datasize < 1) config.datasize=1;\n            if (config.datasize > 1024*1024*1024) config.datasize = 1024*1024*1024;\n        } else if (!strcmp(argv[i],\"-P\")) {\n            if (lastarg) goto invalid;\n            config.pipeline = atoi(argv[++i]);\n            if (config.pipeline <= 0) config.pipeline=1;\n        } else if (!strcmp(argv[i],\"-r\")) {\n            if (lastarg) goto invalid;\n            const char *next = argv[++i], *p = next;\n            if (*p == '-') {\n                p++;\n                if (*p < '0' || *p > '9') goto invalid;\n            }\n            config.randomkeys = 1;\n            config.randomkeys_keyspacelen = atoi(next);\n            if (config.randomkeys_keyspacelen < 0)\n                config.randomkeys_keyspacelen = 0;\n        } else if (!strcmp(argv[i],\"-q\")) {\n            config.quiet = 1;\n        } else if (!strcmp(argv[i],\"--csv\")) {\n            config.csv = 1;\n        } else if (!strcmp(argv[i],\"-l\")) {\n            config.loop = 1;\n        } else if (!strcmp(argv[i],\"-I\")) {\n            config.idlemode = 1;\n        } else if (!strcmp(argv[i],\"-e\")) {\n            fprintf(stderr,\n                    \"WARNING: -e option has no effect. 
\"\n                    \"We now immediately exit on error to avoid false results.\\n\");\n        } else if (!strcmp(argv[i],\"--seed\")) {\n            if (lastarg) goto invalid;\n            int rand_seed = atoi(argv[++i]);\n            srandom(rand_seed);\n            init_genrand64(rand_seed);\n        } else if (!strcmp(argv[i],\"-t\")) {\n            if (lastarg) goto invalid;\n            /* We get the list of tests to run as a string in the form\n             * get,set,lrange,...,test_N. Then we add a comma before and\n             * after the string in order to make sure that searching\n             * for \",testname,\" will always get a match if the test is\n             * enabled. */\n            config.tests = sdsnew(\",\");\n            config.tests = sdscat(config.tests,(char*)argv[++i]);\n            config.tests = sdscat(config.tests,\",\");\n            sdstolower(config.tests);\n        } else if (!strcmp(argv[i],\"--dbnum\")) {\n            if (lastarg) goto invalid;\n            config.conn_info.input_dbnum = atoi(argv[++i]);\n            config.input_dbnumstr = sdsfromlonglong(config.conn_info.input_dbnum);\n        } else if (!strcmp(argv[i],\"--precision\")) {\n            if (lastarg) goto invalid;\n            config.precision = atoi(argv[++i]);\n            if (config.precision < 0) config.precision = DEFAULT_LATENCY_PRECISION;\n            if (config.precision > MAX_LATENCY_PRECISION) config.precision = MAX_LATENCY_PRECISION;\n        } else if (!strcmp(argv[i],\"--threads\")) {\n             if (lastarg) goto invalid;\n             config.num_threads = atoi(argv[++i]);\n             if (config.num_threads > MAX_THREADS) {\n                 fprintf(stderr,\n                         \"WARNING: Too many threads, limiting threads to %d.\\n\",\n                         MAX_THREADS);\n                config.num_threads = MAX_THREADS;\n             } else if (config.num_threads < 0) config.num_threads = 0;\n        } else if 
(!strcmp(argv[i],\"--cluster\")) {\n            config.cluster_mode = 1;\n        } else if (!strcmp(argv[i],\"--enable-tracking\")) {\n            config.enable_tracking = 1;\n        } else if (!strcmp(argv[i],\"--help\")) {\n            exit_status = 0;\n            goto usage;\n        #ifdef USE_OPENSSL\n        } else if (!strcmp(argv[i],\"--tls\")) {\n            config.tls = 1;\n        } else if (!strcmp(argv[i],\"--sni\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.sni = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cacertdir\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.cacertdir = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cacert\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.cacert = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"--insecure\")) {\n            config.sslconfig.skip_cert_verify = 1;\n        } else if (!strcmp(argv[i],\"--cert\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.cert = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"--key\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.key = strdup(argv[++i]);\n        } else if (!strcmp(argv[i],\"--tls-ciphers\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.ciphers = strdup(argv[++i]);\n        #ifdef TLS1_3_VERSION\n        } else if (!strcmp(argv[i],\"--tls-ciphersuites\")) {\n            if (lastarg) goto invalid;\n            config.sslconfig.ciphersuites = strdup(argv[++i]);\n        #endif\n        #endif\n        } else {\n            /* Assume the user meant to provide an option when the arg starts\n             * with a dash. We're done otherwise and should use the remainder\n             * as the command and arguments for running the benchmark. 
*/\n            if (argv[i][0] == '-') goto invalid;\n            return i;\n        }\n    }\n\n    return i;\n\ninvalid:\n    printf(\"Invalid option \\\"%s\\\" or option argument missing\\n\\n\",argv[i]);\n\nusage:\n    tls_usage =\n#ifdef USE_OPENSSL\n\" --tls              Establish a secure TLS connection.\\n\"\n\" --sni <host>       Server name indication for TLS.\\n\"\n\" --cacert <file>    CA Certificate file to verify with.\\n\"\n\" --cacertdir <dir>  Directory where trusted CA certificates are stored.\\n\"\n\"                    If neither cacert nor cacertdir are specified, the default\\n\"\n\"                    system-wide trusted root certs configuration will apply.\\n\"\n\" --insecure         Allow insecure TLS connection by skipping cert validation.\\n\"\n\" --cert <file>      Client certificate to authenticate with.\\n\"\n\" --key <file>       Private key file to authenticate with.\\n\"\n\" --tls-ciphers <list> Sets the list of preferred ciphers (TLSv1.2 and below)\\n\"\n\"                    in order of preference from highest to lowest separated by colon (\\\":\\\").\\n\"\n\"                    See the ciphers(1ssl) manpage for more information about the syntax of this string.\\n\"\n#ifdef TLS1_3_VERSION\n\" --tls-ciphersuites <list> Sets the list of preferred ciphersuites (TLSv1.3)\\n\"\n\"                    in order of preference from highest to lowest separated by colon (\\\":\\\").\\n\"\n\"                    See the ciphers(1ssl) manpage for more information about the syntax of this string,\\n\"\n\"                    and specifically for TLSv1.3 ciphersuites.\\n\"\n#endif\n#endif\n\"\";\n\n    printf(\n\"%s%s%s\", /* Split to avoid strings longer than 4095 (-Woverlength-strings). 
*/\n\"Usage: redis-benchmark [OPTIONS] [COMMAND ARGS...]\\n\\n\"\n\"Options:\\n\"\n\" -h <hostname>      Server hostname (default 127.0.0.1)\\n\"\n\" -p <port>          Server port (default 6379)\\n\"\n\" -s <socket>        Server socket (overrides host and port)\\n\"\n\" -a <password>      Password for Redis Auth\\n\"\n\" --user <username>  Used to send ACL style 'AUTH username pass'. Needs -a.\\n\"\n\" -u <uri>           Server URI on format redis://user:password@host:port/dbnum\\n\"\n\"                    User, password and dbnum are optional. For authentication\\n\"\n\"                    without a username, use username 'default'. For TLS, use\\n\"\n\"                    the scheme 'rediss'.\\n\"\n\" -c <clients>       Number of parallel connections (default 50).\\n\"\n\"                    Note: If --cluster is used then number of clients has to be\\n\"\n\"                    the same or higher than the number of nodes.\\n\"\n\" -n <requests>      Total number of requests (default 100000)\\n\"\n\" -d <size>          Data size of SET/GET value in bytes (default 3)\\n\"\n\" --dbnum <db>       SELECT the specified db number (default 0)\\n\"\n\" -3                 Start session in RESP3 protocol mode.\\n\"\n\" --threads <num>    Enable multi-thread mode.\\n\"\n\" --cluster          Enable cluster mode.\\n\"\n\"                    If the command is supplied on the command line in cluster\\n\"\n\"                    mode, the key must contain \\\"{tag}\\\". 
Otherwise, the\\n\"\n\"                    command will not be sent to the right cluster node.\\n\"\n\" --enable-tracking  Send CLIENT TRACKING on before starting benchmark.\\n\"\n\" -k <boolean>       1=keep alive 0=reconnect (default 1)\\n\"\n\" -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD,\\n\"\n\"                    random members and scores for ZADD.\\n\"\n\"                    Using this option the benchmark will expand the string\\n\"\n\"                    __rand_int__ inside an argument with a 12 digits number in\\n\"\n\"                    the specified range from 0 to keyspacelen-1. The\\n\"\n\"                    substitution changes every time a command is executed.\\n\"\n\"                    Default tests use this to hit random keys in the specified\\n\"\n\"                    range.\\n\"\n\"                    Note: If -r is omitted, all commands in a benchmark will\\n\"\n\"                    use the same key.\\n\"\n\" -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).\\n\"\n\" -q                 Quiet. Just show query/sec values\\n\"\n\" --precision        Number of decimal places to display in latency output (default 0)\\n\"\n\" --csv              Output in CSV format\\n\"\n\" -l                 Loop. Run the tests forever\\n\"\n\" -t <tests>         Only run the comma separated list of tests. The test\\n\"\n\"                    names are the same as the ones produced as output.\\n\"\n\"                    The -t option is ignored if a specific command is supplied\\n\"\n\"                    on the command line.\\n\"\n\" -I                 Idle mode. Just open N idle connections and wait.\\n\"\n\" -x                 Read last argument from STDIN.\\n\"\n\" --seed <num>       Set the seed for random number generator. 
Default seed is based on time.\\n\",\ntls_usage,\n\" --help             Output this help and exit.\\n\"\n\" --version          Output version and exit.\\n\\n\"\n\"Examples:\\n\\n\"\n\" Run the benchmark with the default configuration against 127.0.0.1:6379:\\n\"\n\"   $ redis-benchmark\\n\\n\"\n\" Use 20 parallel clients, for a total of 100k requests, against 192.168.1.1:\\n\"\n\"   $ redis-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20\\n\\n\"\n\" Fill 127.0.0.1:6379 with about 1 million keys only using the SET test:\\n\"\n\"   $ redis-benchmark -t set -n 1000000 -r 100000000\\n\\n\"\n\" Benchmark 127.0.0.1:6379 for a few commands producing CSV output:\\n\"\n\"   $ redis-benchmark -t ping,set,get -n 100000 --csv\\n\\n\"\n\" Benchmark a specific command line:\\n\"\n\"   $ redis-benchmark -r 10000 -n 10000 eval 'return redis.call(\\\"ping\\\")' 0\\n\\n\"\n\" Fill a list with 10000 random elements:\\n\"\n\"   $ redis-benchmark -r 10000 -n 10000 lpush mylist __rand_int__\\n\\n\"\n\" On user specified command lines __rand_int__ is replaced with a random integer\\n\"\n\" with a range of values selected by the -r option.\\n\"\n    );\n    exit(exit_status);\n}\n\nint showThroughput(struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    UNUSED(eventLoop);\n    UNUSED(id);\n    benchmarkThread *thread = (benchmarkThread *)clientData;\n    int liveclients = 0;\n    int requests_finished = 0;\n    int previous_requests_finished = 0;\n    long long current_tick = mstime();\n    atomicGet(config.liveclients, liveclients);\n    atomicGet(config.requests_finished, requests_finished);\n    atomicGet(config.previous_requests_finished, previous_requests_finished);\n\n    if (liveclients == 0 && requests_finished != config.requests) {\n        fprintf(stderr,\"All clients disconnected... 
aborting.\\n\");\n        exit(1);\n    }\n    if (config.num_threads && requests_finished >= config.requests) {\n        aeStop(eventLoop);\n        return AE_NOMORE;\n    }\n    if (config.csv) return SHOW_THROUGHPUT_INTERVAL;\n    /* only first thread output throughput */\n    if (thread != NULL && thread->index != 0) {\n        return SHOW_THROUGHPUT_INTERVAL;\n    }\n    if (config.idlemode == 1) {\n        printf(\"clients: %d\\r\", config.liveclients);\n        fflush(stdout);\n        return SHOW_THROUGHPUT_INTERVAL;\n    }\n    const float dt = (float)(current_tick-config.start)/1000.0;\n    const float rps = dt > 0 ? (float)requests_finished/dt : 0.0f;\n    const float instantaneous_dt = (float)(current_tick-config.previous_tick)/1000.0;\n    const float instantaneous_rps = instantaneous_dt > 0 ? (float)(requests_finished-previous_requests_finished)/instantaneous_dt : 0.0f;\n    config.previous_tick = current_tick;\n    atomicSet(config.previous_requests_finished,requests_finished);\n    printf(\"%*s\\r\", config.last_printed_bytes, \" \"); /* ensure there is a clean line */\n    double avg_mean = config.current_sec_latency_histogram->total_count > 0 ? hdr_mean(config.current_sec_latency_histogram)/1000.0f : 0.0;\n    double overall_mean = config.latency_histogram->total_count > 0 ? hdr_mean(config.latency_histogram)/1000.0f : 0.0;\n    int printed_bytes = printf(\"%s: rps=%.1f (overall: %.1f) avg_msec=%.3f (overall: %.3f)\\r\", config.title, instantaneous_rps, rps, avg_mean, overall_mean);\n    config.last_printed_bytes = printed_bytes;\n    hdr_reset(config.current_sec_latency_histogram);\n    fflush(stdout);\n    return SHOW_THROUGHPUT_INTERVAL;\n}\n\n/* Return true if the named test was selected using the -t command line\n * switch, or if all the tests are selected (no -t passed by user). 
*/\nint test_is_selected(const char *name) {\n    char buf[256];\n    int l = strlen(name);\n\n    if (config.tests == NULL) return 1;\n    buf[0] = ',';\n    memcpy(buf+1,name,l);\n    buf[l+1] = ',';\n    buf[l+2] = '\\0';\n    return strstr(config.tests,buf) != NULL;\n}\n\nint main(int argc, char **argv) {\n    int i;\n    char *data, *cmd, *tag;\n    int len;\n\n    client c;\n\n    srandom(time(NULL) ^ getpid());\n    init_genrand64(ustime() ^ getpid());\n    signal(SIGHUP, SIG_IGN);\n    signal(SIGPIPE, SIG_IGN);\n\n    memset(&config.sslconfig, 0, sizeof(config.sslconfig));\n    config.numclients = 50;\n    config.requests = 100000;\n    config.liveclients = 0;\n    config.el = aeCreateEventLoop(1024*10);\n    aeCreateTimeEvent(config.el,1,showThroughput,NULL,NULL);\n    config.keepalive = 1;\n    config.datasize = 3;\n    config.pipeline = 1;\n    config.randomkeys = 0;\n    config.randomkeys_keyspacelen = 0;\n    config.quiet = 0;\n    config.csv = 0;\n    config.loop = 0;\n    config.idlemode = 0;\n    config.clients = listCreate();\n    config.conn_info.hostip = sdsnew(\"127.0.0.1\");\n    config.conn_info.hostport = 6379;\n    config.hostsocket = NULL;\n    config.tests = NULL;\n    config.conn_info.input_dbnum = 0;\n    config.stdinarg = 0;\n    config.conn_info.auth = NULL;\n    config.precision = DEFAULT_LATENCY_PRECISION;\n    config.num_threads = 0;\n    config.threads = NULL;\n    config.cluster_mode = 0;\n    config.cluster_node_count = 0;\n    config.cluster_nodes = NULL;\n    config.redis_config = NULL;\n    config.is_fetching_slots = 0;\n    config.is_updating_slots = 0;\n    config.slots_last_update = 0;\n    config.enable_tracking = 0;\n    config.resp3 = 0;\n\n    i = parseOptions(argc,argv);\n    argc -= i;\n    argv += i;\n\n    tag = \"\";\n\n#ifdef USE_OPENSSL\n    if (config.tls) {\n        cliSecureInit();\n    }\n#endif\n\n    if (config.cluster_mode) {\n        // We only include the slot placeholder {tag} if cluster mode is 
enabled\n        tag = \":{tag}\";\n\n        /* Fetch cluster configuration. */\n        if (!fetchClusterConfiguration() || !config.cluster_nodes) {\n            if (!config.hostsocket) {\n                fprintf(stderr, \"Failed to fetch cluster configuration from \"\n                                \"%s:%d\\n\", config.conn_info.hostip, config.conn_info.hostport);\n            } else {\n                fprintf(stderr, \"Failed to fetch cluster configuration from \"\n                                \"%s\\n\", config.hostsocket);\n            }\n            exit(1);\n        }\n        if (config.cluster_node_count <= 1) {\n            fprintf(stderr, \"Invalid cluster: %d node(s).\\n\",\n                    config.cluster_node_count);\n            exit(1);\n        }\n        printf(\"Cluster has %d master nodes:\\n\\n\", config.cluster_node_count);\n        int i = 0;\n        for (; i < config.cluster_node_count; i++) {\n            clusterNode *node = config.cluster_nodes[i];\n            if (!node) {\n                fprintf(stderr, \"Invalid cluster node #%d\\n\", i);\n                exit(1);\n            }\n            printf(\"Master %d: \", i);\n            if (node->name) printf(\"%s \", node->name);\n            printf(\"%s:%d\\n\", node->ip, node->port);\n            node->redis_config = getRedisConfig(node->ip, node->port, NULL);\n            if (node->redis_config == NULL) {\n                fprintf(stderr, \"WARNING: Could not fetch node CONFIG %s:%d\\n\",\n                        node->ip, node->port);\n            }\n        }\n        printf(\"\\n\");\n        /* Automatically set thread number to node count if not specified\n         * by the user. 
*/\n        if (config.num_threads == 0)\n            config.num_threads = config.cluster_node_count;\n    } else {\n        config.redis_config =\n            getRedisConfig(config.conn_info.hostip, config.conn_info.hostport, config.hostsocket);\n        if (config.redis_config == NULL) {\n            fprintf(stderr, \"WARNING: Could not fetch server CONFIG\\n\");\n        }\n    }\n    if (config.num_threads > 0) {\n        pthread_mutex_init(&(config.liveclients_mutex), NULL);\n        pthread_mutex_init(&(config.is_updating_slots_mutex), NULL);\n    }\n\n    if (config.keepalive == 0) {\n        fprintf(stderr,\n                \"WARNING: Keepalive disabled. You probably need \"\n                \"'echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse' for Linux and \"\n                \"'sudo sysctl -w net.inet.tcp.msl=1000' for Mac OS X in order \"\n                \"to use a lot of clients/requests\\n\");\n    }\n    if (argc > 0 && config.tests != NULL) {\n        fprintf(stderr, \"WARNING: Option -t is ignored.\\n\");\n    }\n\n    if (config.idlemode) {\n        printf(\"Creating %d idle connections and waiting forever (Ctrl+C when done)\\n\", config.numclients);\n        int thread_id = -1, use_threads = (config.num_threads > 0);\n        if (use_threads) {\n            thread_id = 0;\n            initBenchmarkThreads();\n        }\n        c = createClient(\"\",0,NULL,thread_id); /* will never receive a reply */\n        createMissingClients(c);\n        if (use_threads) startBenchmarkThreads();\n        else aeMain(config.el);\n        /* and will wait forever */\n    }\n    if(config.csv){\n        printf(\"\\\"test\\\",\\\"rps\\\",\\\"avg_latency_ms\\\",\\\"min_latency_ms\\\",\\\"p50_latency_ms\\\",\\\"p95_latency_ms\\\",\\\"p99_latency_ms\\\",\\\"max_latency_ms\\\"\\n\");\n    }\n    /* Run benchmark with command in the remainder of the arguments. 
*/\n    if (argc) {\n        sds title = sdsnew(argv[0]);\n        for (i = 1; i < argc; i++) {\n            title = sdscatlen(title, \" \", 1);\n            title = sdscatlen(title, (char*)argv[i], strlen(argv[i]));\n        }\n        sds *sds_args = getSdsArrayFromArgv(argc, argv, 0);\n        if (!sds_args) {\n            fprintf(stderr, \"Invalid quoted string\\n\");\n            return 1;\n        }\n        if (config.stdinarg) {\n            sds_args = sds_realloc(sds_args,(argc + 1) * sizeof(sds));\n            sds_args[argc] = readArgFromStdin();\n            argc++;\n        }\n        /* Setup argument length */\n        size_t *argvlen = zmalloc(argc*sizeof(size_t));\n        for (i = 0; i < argc; i++)\n            argvlen[i] = sdslen(sds_args[i]);\n        do {\n            len = redisFormatCommandArgv(&cmd,argc,(const char**)sds_args,argvlen);\n            // adjust the datasize to the parsed command\n            config.datasize = len;\n            benchmark(title,cmd,len);\n            free(cmd);\n        } while(config.loop);\n        sdsfreesplitres(sds_args, argc);\n\n        sdsfree(title);\n        if (config.redis_config != NULL) freeRedisConfig(config.redis_config);\n        zfree(argvlen);\n        return 0;\n    }\n\n    /* Run default benchmark suite. 
*/\n    data = zmalloc(config.datasize+1);\n    do {\n        genBenchmarkRandomData(data, config.datasize);\n        data[config.datasize] = '\\0';\n\n        if (test_is_selected(\"ping_inline\") || test_is_selected(\"ping\"))\n            benchmark(\"PING_INLINE\",\"PING\\r\\n\",6);\n\n        if (test_is_selected(\"ping_mbulk\") || test_is_selected(\"ping\")) {\n            len = redisFormatCommand(&cmd,\"PING\");\n            benchmark(\"PING_MBULK\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"set\")) {\n            len = redisFormatCommand(&cmd,\"SET key%s:__rand_int__ %s\",tag,data);\n            benchmark(\"SET\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"get\")) {\n            len = redisFormatCommand(&cmd,\"GET key%s:__rand_int__\",tag);\n            benchmark(\"GET\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"incr\")) {\n            len = redisFormatCommand(&cmd,\"INCR counter%s:__rand_int__\",tag);\n            benchmark(\"INCR\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lpush\")) {\n            len = redisFormatCommand(&cmd,\"LPUSH mylist%s %s\",tag,data);\n            benchmark(\"LPUSH\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"rpush\")) {\n            len = redisFormatCommand(&cmd,\"RPUSH mylist%s %s\",tag,data);\n            benchmark(\"RPUSH\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lpop\")) {\n            len = redisFormatCommand(&cmd,\"LPOP mylist%s\",tag);\n            benchmark(\"LPOP\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"rpop\")) {\n            len = redisFormatCommand(&cmd,\"RPOP mylist%s\",tag);\n            benchmark(\"RPOP\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"sadd\")) {\n            len = redisFormatCommand(&cmd,\n                \"SADD 
myset%s element:__rand_int__\",tag);\n            benchmark(\"SADD\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"hset\")) {\n            len = redisFormatCommand(&cmd,\n                \"HSET myhash%s element:__rand_int__ %s\",tag,data);\n            benchmark(\"HSET\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"spop\")) {\n            len = redisFormatCommand(&cmd,\"SPOP myset%s\",tag);\n            benchmark(\"SPOP\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"zadd\")) {\n            char *score = \"0\";\n            if (config.randomkeys) score = \"__rand_int__\";\n            len = redisFormatCommand(&cmd,\n                \"ZADD myzset%s %s element:__rand_int__\",tag,score);\n            benchmark(\"ZADD\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"zpopmin\")) {\n            len = redisFormatCommand(&cmd,\"ZPOPMIN myzset%s\",tag);\n            benchmark(\"ZPOPMIN\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lrange\") ||\n            test_is_selected(\"lrange_100\") ||\n            test_is_selected(\"lrange_300\") ||\n            test_is_selected(\"lrange_500\") ||\n            test_is_selected(\"lrange_600\"))\n        {\n            len = redisFormatCommand(&cmd,\"LPUSH mylist%s %s\",tag,data);\n            benchmark(\"LPUSH (needed to benchmark LRANGE)\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lrange\") || test_is_selected(\"lrange_100\")) {\n            len = redisFormatCommand(&cmd,\"LRANGE mylist%s 0 99\",tag);\n            benchmark(\"LRANGE_100 (first 100 elements)\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lrange\") || test_is_selected(\"lrange_300\")) {\n            len = redisFormatCommand(&cmd,\"LRANGE mylist%s 0 299\",tag);\n            benchmark(\"LRANGE_300 (first 300 elements)\",cmd,len);\n       
     free(cmd);\n        }\n\n        if (test_is_selected(\"lrange\") || test_is_selected(\"lrange_500\")) {\n            len = redisFormatCommand(&cmd,\"LRANGE mylist%s 0 499\",tag);\n            benchmark(\"LRANGE_500 (first 500 elements)\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"lrange\") || test_is_selected(\"lrange_600\")) {\n            len = redisFormatCommand(&cmd,\"LRANGE mylist%s 0 599\",tag);\n            benchmark(\"LRANGE_600 (first 600 elements)\",cmd,len);\n            free(cmd);\n        }\n\n        if (test_is_selected(\"mset\")) {\n            const char *cmd_argv[21];\n            cmd_argv[0] = \"MSET\";\n            sds key_placeholder = sdscatprintf(sdsnew(\"\"),\"key%s:__rand_int__\",tag);\n            for (i = 1; i < 21; i += 2) {\n                cmd_argv[i] = key_placeholder;\n                cmd_argv[i+1] = data;\n            }\n            len = redisFormatCommandArgv(&cmd,21,cmd_argv,NULL);\n            benchmark(\"MSET (10 keys)\",cmd,len);\n            free(cmd);\n            sdsfree(key_placeholder);\n        }\n\n        if (test_is_selected(\"xadd\")) {\n            len = redisFormatCommand(&cmd,\"XADD mystream%s * myfield %s\", tag, data);\n            benchmark(\"XADD\",cmd,len);\n            free(cmd); \n        }        \n\n        if (!config.csv) printf(\"\\n\");\n    } while(config.loop);\n\n    zfree(data);\n    freeCliConnInfo(config.conn_info);\n    if (config.redis_config != NULL) freeRedisConfig(config.redis_config);\n\n    return 0;\n}\n"
  },
  {
    "path": "src/redis-check-aof.c",
    "content": "/*\n * Copyright (c) 2009-2012, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2009-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"server.h\"\n\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <regex.h>\n#include <libgen.h>\n\n#define AOF_CHECK_OK 0\n#define AOF_CHECK_EMPTY 1\n#define AOF_CHECK_TRUNCATED 2\n#define AOF_CHECK_TIMESTAMP_TRUNCATED 3\n\ntypedef enum {\n    AOF_RESP,\n    AOF_RDB_PREAMBLE,\n    AOF_MULTI_PART,\n} input_file_type;\n\naofManifest *aofManifestCreate(void);\nvoid aofManifestFree(aofManifest *am);\naofManifest *aofLoadManifestFromFile(sds am_filepath);\n\n#define ERROR(...) 
{ \\\n    char __buf[1024]; \\\n    snprintf(__buf, sizeof(__buf), __VA_ARGS__); \\\n    snprintf(error, sizeof(error), \"0x%16llx: %s\", (long long)epos, __buf); \\\n}\n\nstatic char error[1044];\nstatic off_t epos;\nstatic long long line = 1;\nstatic time_t to_timestamp = 0;\n\nint consumeNewline(char *buf) {\n    if (strncmp(buf,\"\\r\\n\",2) != 0) {\n        ERROR(\"Expected \\\\r\\\\n, got: %02x%02x\",buf[0],buf[1]);\n        return 0;\n    }\n    line += 1;\n    return 1;\n}\n\nint readLong(FILE *fp, char prefix, long *target) {\n    char buf[128], *eptr;\n    epos = ftello(fp);\n    if (fgets(buf,sizeof(buf),fp) == NULL) {\n        return 0;\n    }\n    if (buf[0] != prefix) {\n        ERROR(\"Expected prefix '%c', got: '%c'\",prefix,buf[0]);\n        return 0;\n    }\n    *target = strtol(buf+1,&eptr,10);\n    return consumeNewline(eptr);\n}\n\nint readBytes(FILE *fp, char *target, long length) {\n    long real;\n    epos = ftello(fp);\n    real = fread(target,1,length,fp);\n    if (real != length) {\n        ERROR(\"Expected to read %ld bytes, got %ld bytes\",length,real);\n        return 0;\n    }\n    return 1;\n}\n\nint readString(FILE *fp, char** target) {\n    long len;\n    *target = NULL;\n    if (!readLong(fp,'$',&len)) {\n        return 0;\n    }\n\n    if (len < 0 || len > LONG_MAX - 2) {\n        ERROR(\"Expected to read string of %ld bytes, which is not in the suitable range\",len);\n        return 0;\n    }\n\n    /* Increase length to also consume \\r\\n */\n    len += 2;\n    *target = (char*)zmalloc(len);\n    if (!readBytes(fp,*target,len)) {\n        zfree(*target);\n        *target = NULL;\n        return 0;\n    }\n    if (!consumeNewline(*target+len-2)) {\n        zfree(*target);\n        *target = NULL;\n        return 0;\n    }\n    (*target)[len-2] = '\\0';\n    return 1;\n}\n\nint readArgc(FILE *fp, long *target) {\n    return readLong(fp,'*',target);\n}\n\n/* Used to decode a RESP record in the AOF file to obtain the original \n * 
redis command, and also check whether the command is MULTI/EXEC. If the \n * command is MULTI, the parameter out_multi will be incremented by one, and \n * if the command is EXEC, the parameter out_multi will be decremented \n * by one. The parameter out_multi will be used by the upper caller to determine \n * whether the AOF file contains unclosed transactions.\n **/\nint processRESP(FILE *fp, char *filename, int *out_multi) {\n    long argc;\n    char *str;\n\n    if (!readArgc(fp, &argc)) return 0;\n\n    for (int i = 0; i < argc; i++) {\n        if (!readString(fp, &str)) return 0;\n        if (i == 0) {\n            if (strcasecmp(str, \"multi\") == 0) {\n                if ((*out_multi)++) {\n                    ERROR(\"Unexpected MULTI in AOF %s\", filename);\n                    zfree(str);\n                    return 0;\n                }\n            } else if (strcasecmp(str, \"exec\") == 0) {\n                if (--(*out_multi)) {\n                    ERROR(\"Unexpected EXEC in AOF %s\", filename);\n                    zfree(str);\n                    return 0;\n                }\n            }\n        }\n        zfree(str);\n    }\n\n    return 1;\n}\n\n/* Used to parse an annotation in the AOF file, the annotation starts with '#' \n * in AOF. Currently AOF only contains timestamp annotations, but this function \n * can easily be extended to handle other annotations. \n * \n * The processing rule of time annotation is that once the timestamp is found to\n * be greater than 'to_timestamp', the AOF after the annotation is truncated. 
\n * Note that in Multi Part AOF, this truncation is only allowed when the last_file \n * parameter is 1.\n **/\nint processAnnotations(FILE *fp, char *filename, int last_file) {\n    char buf[AOF_ANNOTATION_LINE_MAX_LEN];\n\n    epos = ftello(fp);\n    if (fgets(buf, sizeof(buf), fp) == NULL) {\n        printf(\"Failed to read annotations from AOF %s, aborting...\\n\", filename);\n        exit(1);\n    }\n\n    if (to_timestamp && strncmp(buf, \"#TS:\", 4) == 0) {\n        char *endptr;\n        errno = 0;\n        time_t ts = strtol(buf+4, &endptr, 10);\n        if (errno != 0 || *endptr != '\\r') {\n            printf(\"Invalid timestamp annotation\\n\");\n            exit(1);\n        }\n        if (ts <= to_timestamp) return 1;\n        if (epos == 0) {\n            printf(\"AOF %s has nothing before timestamp %ld, \"\n                    \"aborting...\\n\", filename, to_timestamp);\n            exit(1);\n        }\n        if (!last_file) {\n            printf(\"Failed to truncate AOF %s to timestamp %ld to offset %ld because it is not the last file.\\n\",\n                filename, to_timestamp, (long int)epos);\n            printf(\"If you insist, please delete all files after this file according to the manifest \"\n                \"file and delete the corresponding records in manifest file manually. Then re-run redis-check-aof.\\n\");\n            exit(1);\n        }\n        /* Truncate remaining AOF if exceeding 'to_timestamp' */\n        if (ftruncate(fileno(fp), epos) == -1) {\n            printf(\"Failed to truncate AOF %s to timestamp %ld\\n\",\n                    filename, to_timestamp);\n            exit(1);\n        } else {\n            return 0;\n        }\n    }\n    return 1;\n}\n\n/* Used to check the validity of a single AOF file. The AOF file can be:\n * 1. Old-style AOF\n * 2. Old-style RDB-preamble AOF\n * 3. 
BASE or INCR in Multi Part AOF \n * */\nint checkSingleAof(char *aof_filename, char *aof_filepath, int last_file, int fix, int preamble) {\n    off_t pos = 0, diff;\n    int multi = 0;\n    char buf[2];\n\n    FILE *fp = fopen(aof_filepath, \"r+\");\n    if (fp == NULL) {\n        printf(\"Cannot open file %s: %s, aborting...\\n\", aof_filepath, strerror(errno));\n        exit(1);\n    }\n\n    struct redis_stat sb;\n    if (redis_fstat(fileno(fp),&sb) == -1) {\n        printf(\"Cannot stat file: %s, aborting...\\n\", aof_filename);\n        fclose(fp);\n        exit(1);\n    }\n\n    off_t size = sb.st_size;\n    if (size == 0) {\n        fclose(fp);\n        return AOF_CHECK_EMPTY;\n    }\n\n    if (preamble) {\n        char *argv[2] = {NULL, aof_filepath};\n        if (redis_check_rdb_main(2, argv, fp) == C_ERR) {\n            printf(\"RDB preamble of AOF file is not sane, aborting.\\n\");\n            exit(1);\n        } else {\n            printf(\"RDB preamble is OK, proceeding with AOF tail...\\n\");\n        }\n    }\n\n    while(1) {\n        if (!multi) pos = ftello(fp);\n        if (fgets(buf, sizeof(buf), fp) == NULL) {\n            if (feof(fp)) {\n                break;\n            }\n            printf(\"Failed to read from AOF %s, aborting...\\n\", aof_filename);\n            exit(1);\n        }\n\n        if (fseek(fp, -1, SEEK_CUR) == -1) {\n            printf(\"Failed to fseek in AOF %s: %s\", aof_filename, strerror(errno));\n            exit(1);\n        }\n    \n        if (buf[0] == '#') {\n            if (!processAnnotations(fp, aof_filepath, last_file)) {\n                fclose(fp);\n                return AOF_CHECK_TIMESTAMP_TRUNCATED;\n            }\n        } else if (buf[0] == '*'){\n            if (!processRESP(fp, aof_filepath, &multi)) break;\n        } else {\n            printf(\"AOF %s format error\\n\", aof_filename);\n            break;\n        }\n    }\n\n    if (feof(fp) && multi && strlen(error) == 0) {\n        
ERROR(\"Reached EOF before reading EXEC for MULTI\");\n    }\n\n    if (strlen(error) > 0) {\n        printf(\"%s\\n\", error);\n    }\n\n    diff = size-pos;\n\n    /* In truncate-to-timestamp mode, just exit if there is nothing to truncate. */\n    if (diff == 0 && to_timestamp) {\n        printf(\"Truncate nothing in AOF %s to timestamp %ld\\n\", aof_filename, to_timestamp);\n        fclose(fp);\n        return AOF_CHECK_OK;\n    }\n\n    printf(\"AOF analyzed: filename=%s, size=%lld, ok_up_to=%lld, ok_up_to_line=%lld, diff=%lld\\n\",\n        aof_filename, (long long) size, (long long) pos, line, (long long) diff);\n    if (diff > 0) {\n        if (fix) {\n            if (!last_file) {\n                printf(\"Failed to truncate AOF %s because it is not the last file\\n\", aof_filename);\n                exit(1);\n            }\n\n            char buf[2];\n            printf(\"This will shrink the AOF %s from %lld bytes, by %lld bytes, to %lld bytes\\n\",\n                aof_filename, (long long)size, (long long)diff, (long long)pos);\n            printf(\"Continue? [y/N]: \");\n            if (fgets(buf, sizeof(buf), stdin) == NULL || strncasecmp(buf, \"y\", 1) != 0) {\n                printf(\"Aborting...\\n\");\n                exit(1);\n            }\n            if (ftruncate(fileno(fp), pos) == -1) {\n                printf(\"Failed to truncate AOF %s\\n\", aof_filename);\n                exit(1);\n            } else {\n                fclose(fp);\n                return AOF_CHECK_TRUNCATED;\n            }\n        } else {\n            printf(\"AOF %s is not valid. Use the --fix option to try fixing it.\\n\", aof_filename);\n            exit(1);\n        }\n    }\n    fclose(fp);\n    return AOF_CHECK_OK;\n}\n\n/* Used to determine whether the file is an RDB file. There are two possibilities:\n * 1. The file is an old style RDB-preamble AOF\n * 2. 
The file is a BASE AOF in Multi Part AOF\n * */\nint fileIsRDB(char *filepath) {\n    FILE *fp = fopen(filepath, \"r\");\n    if (fp == NULL) {\n        printf(\"Cannot open file %s: %s\\n\", filepath, strerror(errno));\n        exit(1);\n    }\n\n    struct redis_stat sb;\n    if (redis_fstat(fileno(fp), &sb) == -1) {\n        printf(\"Cannot stat file: %s\\n\", filepath);\n        fclose(fp);\n        exit(1);\n    }\n\n    off_t size = sb.st_size;\n    if (size == 0) {\n        fclose(fp);\n        return 0;\n    }\n\n    if (size >= 8) {    /* There must be at least room for the RDB header. */\n        char sig[5];\n        int rdb_file = fread(sig, sizeof(sig), 1, fp) == 1 &&\n                            memcmp(sig, \"REDIS\", sizeof(sig)) == 0;\n        if (rdb_file) {\n            fclose(fp);\n            return 1;\n        } \n    }\n\n    fclose(fp);\n    return 0;\n}\n\n/* Used to determine whether the file is a manifest file. */\n#define MANIFEST_MAX_LINE 1024\nint fileIsManifest(char *filepath) {\n    int is_manifest = 0;\n    FILE *fp = fopen(filepath, \"r\");\n    if (fp == NULL) {\n        printf(\"Cannot open file %s: %s\\n\", filepath, strerror(errno));\n        exit(1);\n    }\n\n    struct redis_stat sb;\n    if (redis_fstat(fileno(fp), &sb) == -1) {\n        printf(\"Cannot stat file: %s\\n\", filepath);\n        fclose(fp);\n        exit(1);\n    }\n\n    off_t size = sb.st_size;\n    if (size == 0) {\n        fclose(fp);\n        return 0;\n    }\n\n    char buf[MANIFEST_MAX_LINE+1];\n    while (1) {\n        if (fgets(buf, MANIFEST_MAX_LINE+1, fp) == NULL) {\n            if (feof(fp)) {\n                break;\n            } else {\n                printf(\"Cannot read file: %s\\n\", filepath);\n                fclose(fp);\n                exit(1);\n            }\n        }\n\n        /* We will skip comment lines.\n         * At present, the manifest format is fixed, see aofInfoFormat.\n         * We will stop parsing as soon as it 
encounters other items. */\n        if (buf[0] == '#') {\n            continue;\n        } else if (!memcmp(buf, \"file\", strlen(\"file\"))) {\n            is_manifest = 1;\n        } else {\n            break;\n        }\n    }\n\n    fclose(fp);\n    return is_manifest;\n}\n\n/* Get the format of the file to be checked. It can be:\n * AOF_RESP: Old-style AOF\n * AOF_RDB_PREAMBLE: Old-style RDB-preamble AOF\n * AOF_MULTI_PART: manifest in Multi Part AOF \n * \n * redis-check-aof tool will automatically perform different \n * verification logic according to different file formats.\n * */\ninput_file_type getInputFileType(char *filepath) {\n    if (fileIsManifest(filepath)) {\n        return AOF_MULTI_PART;\n    } else if (fileIsRDB(filepath)) {\n        return AOF_RDB_PREAMBLE;\n    } else {\n        return AOF_RESP;\n    }\n}\n\nvoid printAofStyle(int ret, char *aofFileName, char *aofType) {\n    if (ret == AOF_CHECK_OK) {\n        printf(\"%s %s is valid\\n\", aofType, aofFileName);\n    } else if (ret == AOF_CHECK_EMPTY) {\n        printf(\"%s %s is empty\\n\", aofType, aofFileName);\n    } else if (ret == AOF_CHECK_TIMESTAMP_TRUNCATED) {\n        printf(\"Successfully truncated AOF %s to timestamp %ld\\n\",\n            aofFileName, to_timestamp);\n    } else if (ret == AOF_CHECK_TRUNCATED) {\n        printf(\"Successfully truncated AOF %s\\n\", aofFileName);\n    }\n}\n\n/* Check if Multi Part AOF is valid. It will check the BASE file and INCR files \n * at once according to the manifest instructions (this is somewhat similar to \n * redis' AOF loading).\n * \n * When the verification is successful, we can guarantee:\n * 1. The manifest file format is valid\n * 2. Both BASE AOF and INCR AOFs format are valid\n * 3. 
No BASE or INCR AOF files are missing\n * \n * Note that in Multi Part AOF, we only allow truncation for the last AOF file.\n * */\nvoid checkMultiPartAof(char *dirpath, char *manifest_filepath, int fix) {\n    int total_num = 0, aof_num = 0, last_file;\n    int ret;\n\n    printf(\"Start checking Multi Part AOF\\n\");\n    aofManifest *am = aofLoadManifestFromFile(manifest_filepath);\n\n    if (am->base_aof_info) total_num++;\n    if (am->incr_aof_list) total_num += listLength(am->incr_aof_list);\n\n    if (am->base_aof_info) {\n        sds aof_filename = am->base_aof_info->file_name;\n        sds aof_filepath = makePath(dirpath, aof_filename);\n        last_file = ++aof_num == total_num;\n        int aof_preamble = fileIsRDB(aof_filepath);\n\n        printf(\"Start to check BASE AOF (%s format).\\n\", aof_preamble ? \"RDB\":\"RESP\");\n        ret = checkSingleAof(aof_filename, aof_filepath, last_file, fix, aof_preamble);\n        printAofStyle(ret, aof_filename, (char *)\"BASE AOF\");\n        sdsfree(aof_filepath);\n    }\n\n    if (listLength(am->incr_aof_list)) {\n        listNode *ln;\n        listIter li;\n\n        printf(\"Start to check INCR files.\\n\");\n        listRewind(am->incr_aof_list, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            aofInfo *ai = (aofInfo*)ln->value;\n            sds aof_filename = (char*)ai->file_name;\n            sds aof_filepath = makePath(dirpath, aof_filename);\n            last_file = ++aof_num == total_num;\n            ret = checkSingleAof(aof_filename, aof_filepath, last_file, fix, 0);\n            printAofStyle(ret, aof_filename, (char *)\"INCR AOF\");\n            sdsfree(aof_filepath);\n        }\n    }\n\n    aofManifestFree(am);\n    printf(\"All AOF files and manifest are valid\\n\");\n}\n\n/* Check if old style AOF is valid. Internally, it will identify whether \n * the AOF is in RDB-preamble format, and will eventually call `checkSingleAof`\n * to do the check. 
*/\nvoid checkOldStyleAof(char *filepath, int fix, int preamble) {\n    printf(\"Start checking Old-Style AOF\\n\");\n    int ret = checkSingleAof(filepath, filepath, 1, fix, preamble);\n    printAofStyle(ret, filepath, (char *)\"AOF\");\n}\n\nint redis_check_aof_main(int argc, char **argv) {\n    char *filepath;\n    char temp_filepath[PATH_MAX + 1];\n    char *dirpath;\n    int fix = 0;\n\n    if (argc < 2) {\n        goto invalid_args;\n    } else if (argc == 2) {\n        if (!strcmp(argv[1], \"-v\") || !strcmp(argv[1], \"--version\")) {\n            sds version = getVersion();\n            printf(\"redis-check-aof %s\\n\", version);\n            sdsfree(version);\n            exit(0);\n        }\n\n        filepath = argv[1];\n    } else if (argc == 3) {\n        if (!strcmp(argv[1], \"--fix\")) {\n            filepath = argv[2];\n            fix = 1;\n        } else {\n            goto invalid_args;\n        }\n    } else if (argc == 4) {\n        if (!strcmp(argv[1], \"--truncate-to-timestamp\")) {\n            char *endptr;\n            errno = 0;\n            to_timestamp = strtol(argv[2], &endptr, 10);\n            if (errno != 0 || *endptr != '\\0') {\n                printf(\"Invalid timestamp, aborting...\\n\");\n                exit(1);\n            }\n            filepath = argv[3];\n        } else {\n            goto invalid_args;\n        }\n    } else {\n        goto invalid_args;\n    }\n\n    /* Check if filepath is longer than PATH_MAX */\n    if (strlen(filepath) > PATH_MAX) {\n        printf(\"Error: filepath is too long (exceeds PATH_MAX)\\n\");\n        goto invalid_args;\n    }\n\n    /* In the glibc implementation dirname may modify its argument. */\n    memcpy(temp_filepath, filepath, strlen(filepath) + 1);\n    dirpath = dirname(temp_filepath);\n\n    /* Select the corresponding verification method according to the input file type. 
*/\n    input_file_type type = getInputFileType(filepath);\n    switch (type) {\n    case AOF_MULTI_PART:\n        checkMultiPartAof(dirpath, filepath, fix);\n        break;\n    case AOF_RESP:\n        checkOldStyleAof(filepath, fix, 0);\n        break;\n    case AOF_RDB_PREAMBLE:\n        checkOldStyleAof(filepath, fix, 1);\n        break;\n    }\n\n    exit(0);\n\ninvalid_args:\n    printf(\"Usage: %s [--fix|--truncate-to-timestamp $timestamp] <file.manifest|file.aof>\\n\",\n        argv[0]);\n    exit(1);\n}\n"
  },
  {
    "path": "src/redis-check-rdb.c",
    "content": "/*\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"mt19937-64.h\"\n#include \"server.h\"\n#include \"rdb.h\"\n\n#include <stdarg.h>\n#include <sys/time.h>\n#include <unistd.h>\n#include <sys/stat.h>\n\nvoid createSharedObjects(void);\nvoid rdbLoadProgressCallback(rio *r, const void *buf, size_t len);\nint rdbCheckMode = 0;\n\nstruct {\n    rio *rio;\n    robj *key;                      /* Current key we are reading. */\n    int key_type;                   /* Current key type if != -1. */\n    unsigned long keys;             /* Number of keys processed. */\n    unsigned long expires;          /* Number of keys with an expire. */\n    unsigned long already_expired;  /* Number of keys already expired. */\n    unsigned long subexpires;        /* Number of keys with subexpires */\n    int doing;                      /* The state while reading the RDB. */\n    int error_set;                  /* True if error is populated. */\n    char error[1024];\n} rdbstate;\n\n/* At every loading step try to remember what we were about to do, so that\n * we can log this information when an error is encountered. 
*/\n#define RDB_CHECK_DOING_START 0\n#define RDB_CHECK_DOING_READ_TYPE 1\n#define RDB_CHECK_DOING_READ_EXPIRE 2\n#define RDB_CHECK_DOING_READ_KEY 3\n#define RDB_CHECK_DOING_READ_OBJECT_VALUE 4\n#define RDB_CHECK_DOING_CHECK_SUM 5\n#define RDB_CHECK_DOING_READ_LEN 6\n#define RDB_CHECK_DOING_READ_AUX 7\n#define RDB_CHECK_DOING_READ_MODULE_AUX 8\n#define RDB_CHECK_DOING_READ_FUNCTIONS 9\n\nchar *rdb_check_doing_string[] = {\n    \"start\",\n    \"read-type\",\n    \"read-expire\",\n    \"read-key\",\n    \"read-object-value\",\n    \"check-sum\",\n    \"read-len\",\n    \"read-aux\",\n    \"read-module-aux\",\n    \"read-functions\"\n};\n\nchar *rdb_type_string[] = {\n    \"string\",\n    \"list-linked\",\n    \"set-hashtable\",\n    \"zset-v1\",\n    \"hash-hashtable\",\n    \"zset-v2\",\n    \"module-pre-release\",\n    \"module-value\",\n    \"\",\n    \"hash-zipmap\",\n    \"list-ziplist\",\n    \"set-intset\",\n    \"zset-ziplist\",\n    \"hash-ziplist\",\n    \"quicklist\",\n    \"stream\",\n    \"hash-listpack\",\n    \"zset-listpack\",\n    \"quicklist-v2\",\n    \"stream-v2\",\n    \"set-listpack\",\n    \"stream-v3\",\n    \"hash-hashtable-md-pre-release\",\n    \"hash-listpack-md-pre-release\",\n    \"hash-hashtable-md\",\n    \"hash-listpack-md\",\n    \"stream-v4\",\n    \"stream-v5\",\n    \"gcra\",\n};\n\n/* Show a few stats collected into 'rdbstate' */\nvoid rdbShowGenericInfo(void) {\n    printf(\"[info] %lu keys read\\n\", rdbstate.keys);\n    printf(\"[info] %lu expires\\n\", rdbstate.expires);\n    printf(\"[info] %lu already expired\\n\", rdbstate.already_expired);\n    printf(\"[info] %lu subexpires\\n\", rdbstate.subexpires);\n}\n\n/* Called on RDB errors. Provides details about the RDB and the offset\n * we were when the error was detected. */\nvoid rdbCheckError(const char *fmt, ...) 
{\n    char msg[1024];\n    va_list ap;\n\n    va_start(ap, fmt);\n    vsnprintf(msg, sizeof(msg), fmt, ap);\n    va_end(ap);\n\n    printf(\"--- RDB ERROR DETECTED ---\\n\");\n    printf(\"[offset %llu] %s\\n\",\n        (unsigned long long) (rdbstate.rio ?\n            rdbstate.rio->processed_bytes : 0), msg);\n    printf(\"[additional info] While doing: %s\\n\",\n        rdb_check_doing_string[rdbstate.doing]);\n    if (rdbstate.key)\n        printf(\"[additional info] Reading key '%s'\\n\",\n            (char*)rdbstate.key->ptr);\n    if (rdbstate.key_type != -1)\n        printf(\"[additional info] Reading type %d (%s)\\n\",\n            rdbstate.key_type,\n            ((unsigned)rdbstate.key_type <\n             sizeof(rdb_type_string)/sizeof(char*)) ?\n                rdb_type_string[rdbstate.key_type] : \"unknown\");\n    rdbShowGenericInfo();\n}\n\n/* Print information during RDB checking. */\nvoid rdbCheckInfo(const char *fmt, ...) {\n    char msg[1024];\n    va_list ap;\n\n    va_start(ap, fmt);\n    vsnprintf(msg, sizeof(msg), fmt, ap);\n    va_end(ap);\n\n    printf(\"[offset %llu] %s\\n\",\n        (unsigned long long) (rdbstate.rio ?\n            rdbstate.rio->processed_bytes : 0), msg);\n}\n\n/* Used inside rdb.c in order to log specific errors happening inside\n * the RDB loading internals. */\nvoid rdbCheckSetError(const char *fmt, ...) {\n    va_list ap;\n\n    va_start(ap, fmt);\n    vsnprintf(rdbstate.error, sizeof(rdbstate.error), fmt, ap);\n    va_end(ap);\n    rdbstate.error_set = 1;\n}\n\n/* During RDB check we setup a special signal handler for memory violations\n * and similar conditions, so that we can log the offending part of the RDB\n * if the crash is due to broken content. 
*/\nvoid rdbCheckHandleCrash(int sig, siginfo_t *info, void *secret) {\n    UNUSED(sig);\n    UNUSED(info);\n    UNUSED(secret);\n\n    rdbCheckError(\"Server crash checking the specified RDB file!\");\n    exit(1);\n}\n\nvoid rdbCheckSetupSignals(void) {\n    struct sigaction act;\n\n    sigemptyset(&act.sa_mask);\n    act.sa_flags = SA_NODEFER | SA_RESETHAND | SA_SIGINFO;\n    act.sa_sigaction = rdbCheckHandleCrash;\n    sigaction(SIGSEGV, &act, NULL);\n    sigaction(SIGBUS, &act, NULL);\n    sigaction(SIGFPE, &act, NULL);\n    sigaction(SIGILL, &act, NULL);\n    sigaction(SIGABRT, &act, NULL);\n}\n\n/* Check the specified RDB file. Return 0 if the RDB looks sane, otherwise\n * 1 is returned.\n * The file is specified as a filename in 'rdbfilename' if 'fp' is NULL,\n * otherwise the already open file 'fp' is checked. */\nint redis_check_rdb(char *rdbfilename, FILE *fp) {\n    uint64_t dbid;\n    int selected_dbid = -1;\n    int type, rdbver;\n    char buf[1024];\n    long long expiretime, now = mstime();\n    static rio rdb; /* Pointed by global struct riostate. */\n    struct stat sb;\n\n    int closefile = (fp == NULL);\n    if (fp == NULL && (fp = fopen(rdbfilename,\"r\")) == NULL) return 1;\n\n    if (fstat(fileno(fp), &sb) == -1)\n        sb.st_size = 0;\n\n    startLoadingFile(sb.st_size, rdbfilename, RDBFLAGS_NONE);\n    rioInitWithFile(&rdb,fp);\n    rdbstate.rio = &rdb;\n    rdb.update_cksum = rdbLoadProgressCallback;\n    if (rioRead(&rdb,buf,9) == 0) goto eoferr;\n    buf[9] = '\\0';\n    if (memcmp(buf,\"REDIS\",5) != 0) {\n        rdbCheckError(\"Wrong signature trying to load DB from file\");\n        goto err;\n    }\n    rdbver = atoi(buf+5);\n    if (rdbver < 1 || rdbver > RDB_VERSION) {\n        rdbCheckError(\"Can't handle RDB format version %d\",rdbver);\n        goto err;\n    }\n\n    expiretime = -1;\n    while(1) {\n        robj *key, *val;\n\n        /* Read type. 
*/\n        rdbstate.doing = RDB_CHECK_DOING_READ_TYPE;\n        if ((type = rdbLoadType(&rdb)) == -1) goto eoferr;\n\n        /* Handle special types. */\n        if (type == RDB_OPCODE_EXPIRETIME) {\n            rdbstate.doing = RDB_CHECK_DOING_READ_EXPIRE;\n            /* EXPIRETIME: load an expire associated with the next key\n             * to load. Note that after loading an expire we need to\n             * load the actual type, and continue. */\n            expiretime = rdbLoadTime(&rdb);\n            expiretime *= 1000;\n            if (rioGetReadError(&rdb)) goto eoferr;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_EXPIRETIME_MS) {\n            /* EXPIRETIME_MS: milliseconds precision expire times introduced\n             * with RDB v3. Like EXPIRETIME but with more precision. */\n            rdbstate.doing = RDB_CHECK_DOING_READ_EXPIRE;\n            expiretime = rdbLoadMillisecondTime(&rdb, rdbver);\n            if (rioGetReadError(&rdb)) goto eoferr;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_FREQ) {\n            /* FREQ: LFU frequency. */\n            uint8_t byte;\n            if (rioRead(&rdb,&byte,1) == 0) goto eoferr;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_IDLE) {\n            /* IDLE: LRU idle time. */\n            if (rdbLoadLen(&rdb,NULL) == RDB_LENERR) goto eoferr;\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_KEY_META) {\n            /* KEY_META: Module metadata for the next key. 
*/\n            uint64_t numClasses;\n            if ((numClasses = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;\n            /* Skip metadata by reading and discarding each class's data */\n            for (uint64_t i = 0; i < numClasses; i++) {\n                /* Read 4-byte CLASS_SPEC */\n                uint32_t classSpec;\n                if (rioRead(&rdb, &classSpec, 4) == 0) goto eoferr;\n                /* Skip module value using rdbLoadCheckModuleValue */\n                robj *o = rdbLoadCheckModuleValue(&rdb, \"metadata\", 1);\n                if (o == NULL) goto eoferr;\n                decrRefCount(o);\n            }\n            continue; /* Read next opcode. */\n        } else if (type == RDB_OPCODE_EOF) {\n            /* EOF: End of file, exit the main loop. */\n            break;\n        } else if (type == RDB_OPCODE_SELECTDB) {\n            /* SELECTDB: Select the specified database. */\n            rdbstate.doing = RDB_CHECK_DOING_READ_LEN;\n            if ((dbid = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            rdbCheckInfo(\"Selecting DB ID %llu\", (unsigned long long)dbid);\n            selected_dbid = dbid;\n            continue; /* Read type again. */\n        } else if (type == RDB_OPCODE_RESIZEDB) {\n            /* RESIZEDB: Hint about the size of the keys in the currently\n             * selected data base, in order to avoid useless rehashing. */\n            uint64_t db_size, expires_size;\n            rdbstate.doing = RDB_CHECK_DOING_READ_LEN;\n            if ((db_size = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((expires_size = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            continue; /* Read type again. 
*/\n        } else if (type == RDB_OPCODE_SLOT_INFO) {\n            uint64_t slot_id, slot_size, expires_slot_size;\n            if ((slot_id = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((slot_size = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            if ((expires_slot_size = rdbLoadLen(&rdb,NULL)) == RDB_LENERR)\n                goto eoferr;\n            continue; /* Read type again. */\n        } else if (type == RDB_OPCODE_AUX) {\n            /* AUX: generic string-string fields. Used to add state to RDB\n             * which is backward compatible. Implementations of RDB loading\n             * are required to skip AUX fields they don't understand.\n             *\n             * An AUX field is composed of two strings: key and value. */\n            robj *auxkey, *auxval;\n            rdbstate.doing = RDB_CHECK_DOING_READ_AUX;\n            if ((auxkey = rdbLoadStringObject(&rdb)) == NULL) goto eoferr;\n            if ((auxval = rdbLoadStringObject(&rdb)) == NULL) {\n                decrRefCount(auxkey);\n                goto eoferr;\n            }\n\n            rdbCheckInfo(\"AUX FIELD %s = '%s'\",\n                (char*)auxkey->ptr, (char*)auxval->ptr);\n            decrRefCount(auxkey);\n            decrRefCount(auxval);\n            continue; /* Read type again. */\n        } else if (type == RDB_OPCODE_MODULE_AUX) {\n            /* AUX: Auxiliary data for modules. 
*/\n            uint64_t moduleid, when_opcode, when;\n            rdbstate.doing = RDB_CHECK_DOING_READ_MODULE_AUX;\n            if ((moduleid = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;\n            if ((when_opcode = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;\n            if ((when = rdbLoadLen(&rdb,NULL)) == RDB_LENERR) goto eoferr;\n            if (when_opcode != RDB_MODULE_OPCODE_UINT) {\n                rdbCheckError(\"bad when_opcode\");\n                goto err;\n            }\n\n            char name[10];\n            moduleTypeNameByID(name,moduleid);\n            rdbCheckInfo(\"MODULE AUX for: %s\", name);\n\n            robj *o = rdbLoadCheckModuleValue(&rdb, name, 0);\n            decrRefCount(o);\n            continue; /* Read type again. */\n        } else if (type == RDB_OPCODE_FUNCTION_PRE_GA) {\n            rdbCheckError(\"Pre-release function format not supported %d\",rdbver);\n            goto err;\n        } else if (type == RDB_OPCODE_FUNCTION2) {\n            sds err = NULL;\n            rdbstate.doing = RDB_CHECK_DOING_READ_FUNCTIONS;\n            if (rdbFunctionLoad(&rdb, rdbver, NULL, 0, &err) != C_OK) {\n                rdbCheckError(\"Failed loading library, %s\", err);\n                sdsfree(err);\n                goto err;\n            }\n            continue;\n        } else {\n            if (!rdbIsObjectType(type)) {\n                rdbCheckError(\"Invalid object type: %d\", type);\n                goto err;\n            }\n            rdbstate.key_type = type;\n        }\n\n        /* Read key */\n        rdbstate.doing = RDB_CHECK_DOING_READ_KEY;\n        if ((key = rdbLoadStringObject(&rdb)) == NULL) goto eoferr;\n        rdbstate.key = key;\n        rdbstate.keys++;\n        /* Read value */\n        rdbstate.doing = RDB_CHECK_DOING_READ_OBJECT_VALUE;\n        if ((val = rdbLoadObject(type,&rdb,key->ptr,selected_dbid,NULL)) == NULL)\n            goto eoferr;\n        /* Check if the key already expired. 
*/\n        if (expiretime != -1 && expiretime < now)\n            rdbstate.already_expired++;\n        if (expiretime != -1) rdbstate.expires++;\n        /* If the hash contains fields with expiration (HFEs), we need to count it */\n        if ((val->type == OBJ_HASH) && (hashTypeGetMinExpire(val, 1) != EB_EXPIRE_TIME_INVALID))\n            rdbstate.subexpires++;\n\n        rdbstate.key = NULL;\n        decrRefCount(key);\n        decrRefCount(val);\n        rdbstate.key_type = -1;\n        expiretime = -1;\n    }\n    /* Verify the checksum if RDB version is >= 5 */\n    if (rdbver >= 5 && server.rdb_checksum) {\n        uint64_t cksum, expected = rdb.cksum;\n\n        rdbstate.doing = RDB_CHECK_DOING_CHECK_SUM;\n        if (rioRead(&rdb,&cksum,8) == 0) goto eoferr;\n        memrev64ifbe(&cksum);\n        if (cksum == 0) {\n            rdbCheckInfo(\"RDB file was saved with checksum disabled: no check performed.\");\n        } else if (cksum != expected) {\n            rdbCheckError(\"RDB CRC error\");\n            goto err;\n        } else {\n            rdbCheckInfo(\"Checksum OK\");\n        }\n    }\n\n    if (closefile) fclose(fp);\n    stopLoading(1);\n    return 0;\n\neoferr: /* unexpected end of file is handled here with a fatal exit */\n    if (rdbstate.error_set) {\n        rdbCheckError(rdbstate.error);\n    } else {\n        rdbCheckError(\"Unexpected EOF reading RDB file\");\n    }\nerr:\n    if (closefile) fclose(fp);\n    stopLoading(0);\n    return 1;\n}\n\n/* RDB check main: called from server.c when Redis is executed with the\n * redis-check-rdb alias, or during RDB loading errors.\n *\n * The function works in two ways: can be called with argc/argv as a\n * standalone executable, or called with a non NULL 'fp' argument if we\n * already have an open file to check. 
This happens when the function\n * is used to check an RDB preamble inside an AOF file.\n *\n * When called with fp = NULL, the function never returns, but exits with the\n * status code according to success (RDB is sane) or error (RDB is corrupted).\n * Otherwise if called with a non NULL fp, the function returns C_OK or\n * C_ERR depending on the success or failure. */\nint redis_check_rdb_main(int argc, char **argv, FILE *fp) {\n    struct timeval tv;\n\n    if (argc != 2 && fp == NULL) {\n        fprintf(stderr, \"Usage: %s <rdb-file-name>\\n\", argv[0]);\n        exit(1);\n    } else if (!strcmp(argv[1],\"-v\") || !strcmp(argv[1], \"--version\")) {\n        sds version = getVersion();\n        printf(\"redis-check-rdb %s\\n\", version);\n        sdsfree(version);\n        exit(0);\n    }\n\n    gettimeofday(&tv, NULL);\n    init_genrand64(((long long) tv.tv_sec * 1000000 + tv.tv_usec) ^ getpid());\n\n    /* In order to call the loading functions we need to create the shared\n     * integer objects, however since this function may be called from\n     * an already initialized Redis instance, check if we really need to. */\n    if (shared.integers[0] == NULL)\n        createSharedObjects();\n    server.loading_process_events_interval_bytes = 0;\n    server.sanitize_dump_payload = SANITIZE_DUMP_YES;\n    rdbCheckMode = 1;\n    rdbCheckInfo(\"Checking RDB file %s\", argv[1]);\n    rdbCheckSetupSignals();\n    int retval = redis_check_rdb(argv[1],fp);\n    if (retval == 0) {\n        rdbCheckInfo(\"\\\\o/ RDB looks OK! \\\\o/\");\n        rdbShowGenericInfo();\n    }\n    if (fp) return (retval == 0) ? C_OK : C_ERR;\n    exit(retval);\n}\n"
  },
  {
    "path": "src/redis-cli.c",
    "content": "/* Redis CLI (command line interface)\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n\n#include <stdarg.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <unistd.h>\n#include <time.h>\n#include <ctype.h>\n#include <errno.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <assert.h>\n#include <fcntl.h>\n#include <limits.h>\n#include <math.h>\n#include <termios.h>\n\n#include <hiredis.h>\n#ifdef USE_OPENSSL\n#include <openssl/ssl.h>\n#include <openssl/err.h>\n#include <hiredis_ssl.h>\n#endif\n#include <sdscompat.h> /* Use hiredis' sds compat header that maps sds calls to their hi_ variants */\n#include <sds.h> /* use sds.h from hiredis, so that only one set of sds functions will be present in the binary */\n#include \"dict.h\"\n#include \"adlist.h\"\n#include \"zmalloc.h\"\n#include \"linenoise.h\"\n#include \"anet.h\"\n#include \"ae.h\"\n#include \"connection.h\"\n#include \"cli_common.h\"\n#include \"mt19937-64.h\"\n#include \"cli_commands.h\"\n#include \"hdr_histogram.h\"\n\n#define UNUSED(V) ((void) V)\n\n#define OUTPUT_STANDARD 0\n#define OUTPUT_RAW 1\n#define OUTPUT_CSV 2\n#define OUTPUT_JSON 3\n#define OUTPUT_QUOTED_JSON 4\n#define REDIS_CLI_KEEPALIVE_INTERVAL 15 /* seconds */\n#define REDIS_CLI_DEFAULT_PIPE_TIMEOUT 30 /* seconds */\n#define REDIS_CLI_HISTFILE_ENV \"REDISCLI_HISTFILE\"\n#define REDIS_CLI_HISTFILE_DEFAULT \".rediscli_history\"\n#define REDIS_CLI_RCFILE_ENV \"REDISCLI_RCFILE\"\n#define REDIS_CLI_RCFILE_DEFAULT \".redisclirc\"\n#define REDIS_CLI_AUTH_ENV \"REDISCLI_AUTH\"\n#define REDIS_CLI_CLUSTER_YES_ENV \"REDISCLI_CLUSTER_YES\"\n\n#define CLUSTER_MANAGER_SLOTS               16384\n#define CLUSTER_MANAGER_PORT_INCR  
         10000 /* same as CLUSTER_PORT_INCR */\n#define CLUSTER_MANAGER_MIGRATE_TIMEOUT     60000\n#define CLUSTER_MANAGER_MIGRATE_PIPELINE    10\n#define CLUSTER_MANAGER_REBALANCE_THRESHOLD 2\n\n#define CLUSTER_MANAGER_INVALID_HOST_ARG \\\n    \"[ERR] Invalid arguments: you need to pass either a valid \" \\\n    \"address (i.e. 127.0.0.1:7000) or space separated IP \" \\\n    \"and port (i.e. 127.0.0.1 7000)\\n\"\n#define CLUSTER_MANAGER_MODE() (config.cluster_manager_command.name != NULL)\n#define CLUSTER_MANAGER_MASTERS_COUNT(nodes, replicas) ((nodes)/((replicas) + 1))\n#define CLUSTER_MANAGER_COMMAND(n,...) \\\n        (redisCommand((n)->context, __VA_ARGS__))\n\n#define CLUSTER_MANAGER_NODE_ARRAY_FREE(array) zfree((array)->alloc)\n\n#define CLUSTER_MANAGER_PRINT_REPLY_ERROR(n, err) \\\n    clusterManagerLogErr(\"Node %s:%d replied with error:\\n%s\\n\", \\\n                         (n)->ip, (n)->port, (err));\n\n#define clusterManagerLogInfo(...) \\\n    clusterManagerLog(CLUSTER_MANAGER_LOG_LVL_INFO,__VA_ARGS__)\n\n#define clusterManagerLogErr(...) \\\n    clusterManagerLog(CLUSTER_MANAGER_LOG_LVL_ERR,__VA_ARGS__)\n\n#define clusterManagerLogWarn(...) \\\n    clusterManagerLog(CLUSTER_MANAGER_LOG_LVL_WARN,__VA_ARGS__)\n\n#define clusterManagerLogOk(...) 
\\\n    clusterManagerLog(CLUSTER_MANAGER_LOG_LVL_SUCCESS,__VA_ARGS__)\n\n#define CLUSTER_MANAGER_FLAG_MYSELF     1 << 0\n#define CLUSTER_MANAGER_FLAG_SLAVE      1 << 1\n#define CLUSTER_MANAGER_FLAG_FRIEND     1 << 2\n#define CLUSTER_MANAGER_FLAG_NOADDR     1 << 3\n#define CLUSTER_MANAGER_FLAG_DISCONNECT 1 << 4\n#define CLUSTER_MANAGER_FLAG_FAIL       1 << 5\n\n#define CLUSTER_MANAGER_CMD_FLAG_FIX            1 << 0\n#define CLUSTER_MANAGER_CMD_FLAG_SLAVE          1 << 1\n#define CLUSTER_MANAGER_CMD_FLAG_YES            1 << 2\n#define CLUSTER_MANAGER_CMD_FLAG_AUTOWEIGHTS    1 << 3\n#define CLUSTER_MANAGER_CMD_FLAG_EMPTYMASTER    1 << 4\n#define CLUSTER_MANAGER_CMD_FLAG_SIMULATE       1 << 5\n#define CLUSTER_MANAGER_CMD_FLAG_REPLACE        1 << 6\n#define CLUSTER_MANAGER_CMD_FLAG_COPY           1 << 7\n#define CLUSTER_MANAGER_CMD_FLAG_COLOR          1 << 8\n#define CLUSTER_MANAGER_CMD_FLAG_CHECK_OWNERS   1 << 9\n#define CLUSTER_MANAGER_CMD_FLAG_FIX_WITH_UNREACHABLE_MASTERS 1 << 10\n#define CLUSTER_MANAGER_CMD_FLAG_MASTERS_ONLY   1 << 11\n#define CLUSTER_MANAGER_CMD_FLAG_SLAVES_ONLY    1 << 12\n\n#define CLUSTER_MANAGER_OPT_GETFRIENDS  1 << 0\n#define CLUSTER_MANAGER_OPT_COLD        1 << 1\n#define CLUSTER_MANAGER_OPT_UPDATE      1 << 2\n#define CLUSTER_MANAGER_OPT_QUIET       1 << 6\n#define CLUSTER_MANAGER_OPT_VERBOSE     1 << 7\n\n#define CLUSTER_MANAGER_LOG_LVL_INFO    1\n#define CLUSTER_MANAGER_LOG_LVL_WARN    2\n#define CLUSTER_MANAGER_LOG_LVL_ERR     3\n#define CLUSTER_MANAGER_LOG_LVL_SUCCESS 4\n\n#define CLUSTER_JOIN_CHECK_AFTER        20\n\n#define LOG_COLOR_BOLD      \"29;1m\"\n#define LOG_COLOR_RED       \"31;1m\"\n#define LOG_COLOR_GREEN     \"32;1m\"\n#define LOG_COLOR_YELLOW    \"33;1m\"\n#define LOG_COLOR_RESET     \"0m\"\n\n/* cliConnect() flags. */\n#define CC_FORCE (1<<0)         /* Re-connect if already connected. */\n#define CC_QUIET (1<<1)         /* Don't log connecting errors. 
*/\n\n/* DNS lookup */\n#define NET_IP_STR_LEN 46       /* INET6_ADDRSTRLEN is 46 */\n\n#define REFRESH_INTERVAL 300 /* milliseconds */\n\n#define IS_TTY_OR_FAKETTY() (isatty(STDOUT_FILENO) || getenv(\"FAKETTY\"))\n\n/* --latency-dist palettes. */\nint spectrum_palette_color_size = 19;\nint spectrum_palette_color[] = {0,233,234,235,237,239,241,243,245,247,144,143,142,184,226,214,208,202,196};\n\nint spectrum_palette_mono_size = 13;\nint spectrum_palette_mono[] = {0,233,234,235,237,239,241,243,245,247,249,251,253};\n\n/* The actual palette in use. */\nint *spectrum_palette;\nint spectrum_palette_size;\n\nstatic int orig_termios_saved = 0;\nstatic struct termios orig_termios; /* To restore terminal at exit.*/\n\n/* Dict Helpers */\nstatic uint64_t dictSdsHash(const void *key);\nstatic int dictSdsKeyCompare(dictCmpCache *cache, const void *key1,\n    const void *key2);\nstatic void dictSdsDestructor(dict *d, void *val);\nstatic void dictListDestructor(dict *d, void *val);\n\n/* Cluster Manager Command Info */\ntypedef struct clusterManagerCommand {\n    char *name;\n    int argc;\n    char **argv;\n    sds stdin_arg; /* arg from stdin. 
(-X option) */\n    int flags;\n    int replicas;\n    char *from;\n    char *to;\n    char **weight;\n    int weight_argc;\n    char *master_id;\n    int slots;\n    int timeout;\n    int pipeline;\n    float threshold;\n    char *backup_dir;\n    char *from_user;\n    char *from_pass;\n    int from_askpass;\n} clusterManagerCommand;\n\nstatic int createClusterManagerCommand(char *cmdname, int argc, char **argv);\n\n\nstatic redisContext *context;\nstatic struct config {\n    cliConnInfo conn_info;\n    struct timeval connect_timeout;\n    char *hostsocket;\n    int tls;\n    cliSSLconfig sslconfig;\n    long repeat;\n    long interval;\n    int dbnum; /* db num currently selected */\n    int interactive;\n    int shutdown;\n    int monitor_mode;\n    int pubsub_mode;\n    int blocking_state_aborted; /* used to abort monitor_mode and pubsub_mode. */\n    int vset_recall_mode;\n    sds vset_recall_key;\n    int vset_recall_ele_count;\n    int vset_recall_vsim_count;\n    int vset_recall_vsim_ef;\n    int latency_mode;\n    int latency_dist_mode;\n    int latency_history;\n    int lru_test_mode;\n    long long lru_test_sample_size;\n    int cluster_mode;\n    int cluster_reissue_command;\n    int cluster_send_asking;\n    int slave_mode;\n    int pipe_mode;\n    int pipe_timeout;\n    int getrdb_mode;\n    int get_functions_rdb_mode;\n    int stat_mode;\n    int scan_mode;\n    int count;\n    int intrinsic_latency_mode;\n    int intrinsic_latency_duration;\n    sds pattern;\n    char *rdb_filename;\n    int bigkeys;\n    int memkeys;\n    long long memkeys_samples;\n    int hotkeys;\n    int keystats;\n    unsigned long long cursor;\n    unsigned long top_sizes_limit;\n    int stdin_lastarg; /* get last arg from stdin. (-x option) */\n    int stdin_tag_arg; /* get <tag> arg from stdin. (-X option) */\n    char *stdin_tag_name; /* Placeholder(tag name) for user input. 
*/\n    int askpass;\n    int quoted_input;   /* Force input args to be treated as quoted strings */\n    int output; /* output mode, see OUTPUT_* defines */\n    int push_output; /* Should we display spontaneous PUSH replies */\n    sds mb_delim;\n    sds cmd_delim;\n    char prompt[128];\n    char *eval;\n    int eval_ldb;\n    int eval_ldb_sync;  /* Ask for synchronous mode of the Lua debugger. */\n    int eval_ldb_end;   /* Lua debugging session ended. */\n    int enable_ldb_on_eval; /* Handle manual SCRIPT DEBUG + EVAL commands. */\n    int last_cmd_type;\n    redisReply *last_reply;\n    int verbose;\n    int set_errcode;\n    clusterManagerCommand cluster_manager_command;\n    int no_auth_warning;\n    int resp2; /* value of 1: specified explicitly with option -2 */\n    int resp3; /* value of 1: specified explicitly, value of 2: implicit like --json option */\n    int current_resp3; /* 1 if we have RESP3 right now in the current connection. */\n    int in_multi;\n    int pre_multi_dbnum;\n    char *server_version;\n    char *test_hint;\n    char *test_hint_file;\n    char *client_name;\n    int prefer_ipv4; /* Prefer IPv4 over IPv6 on DNS lookup. */\n    int prefer_ipv6; /* Prefer IPv6 over IPv4 on DNS lookup. */\n} config;\n\n/* User preferences. 
*/\nstatic struct pref {\n    int hints;\n} pref;\n\nstatic volatile sig_atomic_t force_cancel_loop = 0;\nstatic void usage(int err);\nstatic void slaveMode(int send_sync);\nstatic int cliConnect(int flags);\n\nstatic char *getInfoField(char *info, char *field);\nstatic long getLongInfoField(char *info, char *field);\n\n/*------------------------------------------------------------------------------\n * Utility functions\n *--------------------------------------------------------------------------- */\nsize_t redis_strlcpy(char *dst, const char *src, size_t dsize);\n\nstatic void cliPushHandler(void *, void *);\n\nuint16_t crc16(const char *buf, int len);\n\nstatic long long ustime(void) {\n    struct timeval tv;\n    long long ust;\n\n    gettimeofday(&tv, NULL);\n    ust = ((long long)tv.tv_sec)*1000000;\n    ust += tv.tv_usec;\n    return ust;\n}\n\nstatic long long mstime(void) {\n    return ustime()/1000;\n}\n\nstatic void cliRefreshPrompt(void) {\n    if (config.eval_ldb) return;\n\n    sds prompt = sdsempty();\n    if (config.hostsocket != NULL) {\n        prompt = sdscatfmt(prompt,\"redis %s\",config.hostsocket);\n    } else {\n        char addr[256];\n        formatAddr(addr, sizeof(addr), config.conn_info.hostip, config.conn_info.hostport);\n        prompt = sdscatlen(prompt,addr,strlen(addr));\n    }\n\n    /* Add [dbnum] if needed */\n    if (config.dbnum != 0)\n        prompt = sdscatfmt(prompt,\"[%i]\",config.dbnum);\n\n    /* Add TX if in transaction state*/\n    if (config.in_multi)  \n        prompt = sdscatlen(prompt,\"(TX)\",4);\n\n    if (config.pubsub_mode)\n        prompt = sdscatfmt(prompt,\"(subscribed mode)\");\n\n    /* Copy the prompt in the static buffer. 
*/\n    prompt = sdscatlen(prompt,\"> \",2);\n    snprintf(config.prompt,sizeof(config.prompt),\"%s\",prompt);\n    sdsfree(prompt);\n}\n\n/* Return the name of the dotfile for the specified 'dotfilename'.\n * Normally it just concatenates the user's $HOME with the file specified\n * in 'dotfilename'. However, if the environment variable 'envoverride'\n * is set, its value is taken as the path.\n *\n * The function returns NULL (if the file is /dev/null or cannot be\n * obtained because of some error), or an SDS string that must be freed by\n * the user. */\nstatic sds getDotfilePath(char *envoverride, char *dotfilename) {\n    char *path = NULL;\n    sds dotPath = NULL;\n\n    /* Check the env for a dotfile override. */\n    path = getenv(envoverride);\n    if (path != NULL && *path != '\\0') {\n        if (!strcmp(\"/dev/null\", path)) {\n            return NULL;\n        }\n\n        /* If the env is set, return it. */\n        dotPath = sdsnew(path);\n    } else {\n        char *home = getenv(\"HOME\");\n        if (home != NULL && *home != '\\0') {\n            /* If no override is set use $HOME/<dotfilename>. */\n            dotPath = sdscatprintf(sdsempty(), \"%s/%s\", home, dotfilename);\n        }\n    }\n    return dotPath;\n}\n\nstatic uint64_t dictSdsHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, sdslen((char*)key));\n}\n\nstatic int dictSdsKeyCompare(dictCmpCache *cache, const void *key1, const void *key2)\n{\n    int l1,l2;\n    UNUSED(cache);\n\n    l1 = sdslen((sds)key1);\n    l2 = sdslen((sds)key2);\n    if (l1 != l2) return 0;\n    return memcmp(key1, key2, l1) == 0;\n}\n\nstatic void dictSdsDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    sdsfree(val);\n}\n\nvoid dictListDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    listRelease((list*)val);\n}\n\n/* Erases the current line before printing, and returns the number of lines printed. */\nint cleanPrintfln(char *fmt, ...) 
{\n    va_list args;\n    char buf[1024]; /* limitation */\n    int char_count, line_count = 0;\n\n    /* Clear the line if in TTY */\n    if (IS_TTY_OR_FAKETTY()) {\n        printf(\"\\033[2K\\r\");\n    }\n\n    va_start(args, fmt);\n    char_count = vsnprintf(buf, sizeof(buf), fmt, args);\n    va_end(args);\n\n    if (char_count >= (int)sizeof(buf)) {\n        fprintf(stderr, \"Warning: String was trimmed in cleanPrintfln\\n\");\n    }\n\n    char *position, *string = buf;\n    while ((position = strchr(string, '\\n')) != NULL) {\n        int line_length = (int)(position - string);\n        printf(\"%.*s\\n\", line_length, string);\n        string = position + 1;\n        line_count++;\n    }\n\n    printf(\"%s\\n\", string);\n    return line_count + 1;\n}\n\n/*------------------------------------------------------------------------------\n * Help functions\n *--------------------------------------------------------------------------- */\n\n#define CLI_HELP_COMMAND 1\n#define CLI_HELP_GROUP 2\n\ntypedef struct {\n    int type;\n    int argc;\n    sds *argv;\n    sds full;\n\n    /* Only used for help on commands */\n    struct commandDocs docs;\n} helpEntry;\n\nstatic helpEntry *helpEntries = NULL;\nstatic int helpEntriesLen = 0;\n\n/* For backwards compatibility with pre-7.0 servers.\n * cliLegacyInitHelp() sets up the helpEntries array with the command and group\n * names from the commands.c file. However, the Redis instance we are connecting\n * to may support more commands, so this function integrates the previous\n * entries with additional entries obtained using the COMMAND command\n * available in recent versions of Redis. 
*/\nstatic void cliLegacyIntegrateHelp(void) {\n    if (cliConnect(CC_QUIET) == REDIS_ERR) return;\n\n    redisReply *reply = redisCommand(context, \"COMMAND\");\n    if (reply == NULL) return;\n    if (reply->type != REDIS_REPLY_ARRAY) {\n        freeReplyObject(reply);\n        return;\n    }\n\n    /* Scan the array reported by COMMAND and fill only the entries that\n     * don't already match what we have. */\n    for (size_t j = 0; j < reply->elements; j++) {\n        redisReply *entry = reply->element[j];\n        if (entry->type != REDIS_REPLY_ARRAY || entry->elements < 4 ||\n            entry->element[0]->type != REDIS_REPLY_STRING ||\n            entry->element[1]->type != REDIS_REPLY_INTEGER ||\n            entry->element[3]->type != REDIS_REPLY_INTEGER) return;\n        char *cmdname = entry->element[0]->str;\n        int i;\n\n        for (i = 0; i < helpEntriesLen; i++) {\n            helpEntry *he = helpEntries+i;\n            if (!strcasecmp(he->argv[0],cmdname))\n                break;\n        }\n        if (i != helpEntriesLen) continue;\n\n        helpEntriesLen++;\n        helpEntries = zrealloc(helpEntries,sizeof(helpEntry)*helpEntriesLen);\n        helpEntry *new = helpEntries+(helpEntriesLen-1);\n\n        new->argc = 1;\n        new->argv = zmalloc(sizeof(sds));\n        new->argv[0] = sdsnew(cmdname);\n        new->full = new->argv[0];\n        new->type = CLI_HELP_COMMAND;\n        sdstoupper(new->argv[0]);\n\n        new->docs.name = new->argv[0];\n        new->docs.args = NULL;\n        new->docs.numargs = 0;\n        new->docs.params = sdsempty();\n        int args = llabs(entry->element[1]->integer);\n        args--; /* Remove the command name itself. 
*/\n        if (entry->element[3]->integer == 1) {\n            new->docs.params = sdscat(new->docs.params,\"key \");\n            args--;\n        }\n        while(args-- > 0) new->docs.params = sdscat(new->docs.params,\"arg \");\n        if (entry->element[1]->integer < 0)\n            new->docs.params = sdscat(new->docs.params,\"...options...\");\n        new->docs.summary = \"Help not available\";\n        new->docs.since = \"Not known\";\n        new->docs.group = \"generic\";\n    }\n    freeReplyObject(reply);\n}\n\n/* Concatenate a string to an sds string, but if it's empty substitute double quote marks. */\nstatic sds sdscat_orempty(sds params, const char *value) {\n    if (value[0] == '\\0') {\n        return sdscat(params, \"\\\"\\\"\");\n    }\n    return sdscat(params, value);\n}\n\nstatic sds makeHint(char **inputargv, int inputargc, int cmdlen, struct commandDocs docs);\n\nstatic void cliAddCommandDocArg(cliCommandArg *cmdArg, redisReply *argMap);\n\nstatic void cliMakeCommandDocArgs(redisReply *arguments, cliCommandArg *result) {\n    for (size_t j = 0; j < arguments->elements; j++) {\n        cliAddCommandDocArg(&result[j], arguments->element[j]);\n    }\n}\n\nstatic void cliAddCommandDocArg(cliCommandArg *cmdArg, redisReply *argMap) {\n    if (argMap->type != REDIS_REPLY_MAP && argMap->type != REDIS_REPLY_ARRAY) {\n        return;\n    }\n\n    for (size_t i = 0; i < argMap->elements; i += 2) {\n        assert(argMap->element[i]->type == REDIS_REPLY_STRING);\n        char *key = argMap->element[i]->str;\n        if (!strcmp(key, \"name\")) {\n            assert(argMap->element[i + 1]->type == REDIS_REPLY_STRING);\n            cmdArg->name = sdsnew(argMap->element[i + 1]->str);\n        } else if (!strcmp(key, \"display_text\")) {\n            assert(argMap->element[i + 1]->type == REDIS_REPLY_STRING);\n            cmdArg->display_text = sdsnew(argMap->element[i + 1]->str);\n        } else if (!strcmp(key, \"token\")) {\n            
assert(argMap->element[i + 1]->type == REDIS_REPLY_STRING);\n            cmdArg->token = sdsnew(argMap->element[i + 1]->str);\n        } else if (!strcmp(key, \"type\")) {\n            assert(argMap->element[i + 1]->type == REDIS_REPLY_STRING);\n            char *type = argMap->element[i + 1]->str;\n            if (!strcmp(type, \"string\")) {\n                cmdArg->type = ARG_TYPE_STRING;\n            } else if (!strcmp(type, \"integer\")) {\n                cmdArg->type = ARG_TYPE_INTEGER;\n            } else if (!strcmp(type, \"double\")) {\n                cmdArg->type = ARG_TYPE_DOUBLE;\n            } else if (!strcmp(type, \"key\")) {\n                cmdArg->type = ARG_TYPE_KEY;\n            } else if (!strcmp(type, \"pattern\")) {\n                cmdArg->type = ARG_TYPE_PATTERN;\n            } else if (!strcmp(type, \"unix-time\")) {\n                cmdArg->type = ARG_TYPE_UNIX_TIME;\n            } else if (!strcmp(type, \"pure-token\")) {\n                cmdArg->type = ARG_TYPE_PURE_TOKEN;\n            } else if (!strcmp(type, \"oneof\")) {\n                cmdArg->type = ARG_TYPE_ONEOF;\n            } else if (!strcmp(type, \"block\")) {\n                cmdArg->type = ARG_TYPE_BLOCK;\n            }\n        } else if (!strcmp(key, \"arguments\")) {\n            redisReply *arguments = argMap->element[i + 1];\n            cmdArg->subargs = zcalloc(arguments->elements * sizeof(cliCommandArg));\n            cmdArg->numsubargs = arguments->elements;\n            cliMakeCommandDocArgs(arguments, cmdArg->subargs);\n        } else if (!strcmp(key, \"flags\")) {\n            redisReply *flags = argMap->element[i + 1];\n            assert(flags->type == REDIS_REPLY_SET || flags->type == REDIS_REPLY_ARRAY);\n            for (size_t j = 0; j < flags->elements; j++) {\n                assert(flags->element[j]->type == REDIS_REPLY_STATUS);\n                char *flag = flags->element[j]->str;\n                if (!strcmp(flag, \"optional\")) {\n                  
  cmdArg->flags |= CMD_ARG_OPTIONAL;\n                } else if (!strcmp(flag, \"multiple\")) {\n                    cmdArg->flags |= CMD_ARG_MULTIPLE;\n                } else if (!strcmp(flag, \"multiple_token\")) {\n                    cmdArg->flags |= CMD_ARG_MULTIPLE_TOKEN;\n                }\n            }\n        }\n    }\n}\n\n/* Fill in the fields of a help entry for the command/subcommand name. */\nstatic void cliFillInCommandHelpEntry(helpEntry *help, char *cmdname, char *subcommandname) {\n    help->argc = subcommandname ? 2 : 1;\n    help->argv = zmalloc(sizeof(sds) * help->argc);\n    help->argv[0] = sdsnew(cmdname);\n    sdstoupper(help->argv[0]);\n    if (subcommandname) {\n        /* Subcommand name may be two words separated by a pipe character. */\n        char *pipe = strchr(subcommandname, '|');\n        if (pipe != NULL) {\n            help->argv[1] = sdsnew(pipe + 1);\n        } else {\n            help->argv[1] = sdsnew(subcommandname);\n        }\n        sdstoupper(help->argv[1]);\n    }\n    sds fullname = sdsnew(help->argv[0]);\n    if (subcommandname) {\n        fullname = sdscat(fullname, \" \");\n        fullname = sdscat(fullname, help->argv[1]);\n    }\n    help->full = fullname;\n    help->type = CLI_HELP_COMMAND;\n\n    help->docs.name = help->full;\n    help->docs.params = NULL;\n    help->docs.args = NULL;\n    help->docs.numargs = 0;\n    help->docs.since = NULL;\n}\n\n/* Initialize a command help entry for the command/subcommand described in 'specs'.\n * 'next' points to the next help entry to be filled in.\n * 'groups' is a set of command group names to be filled in.\n * Returns a pointer to the next available position in the help entries table.\n * If the command has subcommands, this is called recursively for the subcommands.\n */\nstatic helpEntry *cliInitCommandHelpEntry(char *cmdname, char *subcommandname,\n                                          helpEntry *next, redisReply *specs,\n                                     
     dict *groups) {\n    helpEntry *help = next++;\n    cliFillInCommandHelpEntry(help, cmdname, subcommandname);\n\n    assert(specs->type == REDIS_REPLY_MAP || specs->type == REDIS_REPLY_ARRAY);\n    for (size_t j = 0; j < specs->elements; j += 2) {\n        assert(specs->element[j]->type == REDIS_REPLY_STRING);\n        char *key = specs->element[j]->str;\n        if (!strcmp(key, \"summary\")) {\n            redisReply *reply = specs->element[j + 1];\n            assert(reply->type == REDIS_REPLY_STRING);\n            help->docs.summary = sdsnew(reply->str);\n        } else if (!strcmp(key, \"since\")) {\n            redisReply *reply = specs->element[j + 1];\n            assert(reply->type == REDIS_REPLY_STRING);\n            help->docs.since = sdsnew(reply->str);\n        } else if (!strcmp(key, \"group\")) {\n            redisReply *reply = specs->element[j + 1];\n            assert(reply->type == REDIS_REPLY_STRING);\n            help->docs.group = sdsnew(reply->str);\n            sds group = sdsdup(help->docs.group);\n            if (dictAdd(groups, group, NULL) != DICT_OK) {\n                sdsfree(group);\n            }\n        } else if (!strcmp(key, \"arguments\")) {\n            redisReply *arguments = specs->element[j + 1];\n            assert(arguments->type == REDIS_REPLY_ARRAY);\n            help->docs.args = zcalloc(arguments->elements * sizeof(cliCommandArg));\n            help->docs.numargs = arguments->elements;\n            cliMakeCommandDocArgs(arguments, help->docs.args);\n            help->docs.params = makeHint(NULL, 0, 0, help->docs);\n        } else if (!strcmp(key, \"subcommands\")) {\n            redisReply *subcommands = specs->element[j + 1];\n            assert(subcommands->type == REDIS_REPLY_MAP || subcommands->type == REDIS_REPLY_ARRAY);\n            for (size_t i = 0; i < subcommands->elements; i += 2) {\n                assert(subcommands->element[i]->type == REDIS_REPLY_STRING);\n                char *subcommandname = 
subcommands->element[i]->str;\n                redisReply *subcommand = subcommands->element[i + 1];\n                assert(subcommand->type == REDIS_REPLY_MAP || subcommand->type == REDIS_REPLY_ARRAY);\n                next = cliInitCommandHelpEntry(cmdname, subcommandname, next, subcommand, groups);\n            }\n        }\n    }\n    return next;\n}\n\n/* Returns the total number of commands and subcommands in the command docs table. */\nstatic size_t cliCountCommands(redisReply* commandTable) {\n    size_t numCommands = commandTable->elements / 2;\n\n    /* The command docs table maps command names to a map of their specs. */    \n    for (size_t i = 0; i < commandTable->elements; i += 2) {\n        assert(commandTable->element[i]->type == REDIS_REPLY_STRING);  /* Command name. */\n        assert(commandTable->element[i + 1]->type == REDIS_REPLY_MAP ||\n               commandTable->element[i + 1]->type == REDIS_REPLY_ARRAY);\n        redisReply *map = commandTable->element[i + 1];\n        for (size_t j = 0; j < map->elements; j += 2) {\n            assert(map->element[j]->type == REDIS_REPLY_STRING);\n            char *key = map->element[j]->str;\n            if (!strcmp(key, \"subcommands\")) {\n                redisReply *subcommands = map->element[j + 1];\n                assert(subcommands->type == REDIS_REPLY_MAP || subcommands->type == REDIS_REPLY_ARRAY);\n                numCommands += subcommands->elements / 2;\n            }\n        }\n    }\n    return numCommands;\n}\n\n/* Comparator for sorting help table entries. 
*/\nint helpEntryCompare(const void *entry1, const void *entry2) {\n    helpEntry *i1 = (helpEntry *)entry1;\n    helpEntry *i2 = (helpEntry *)entry2;\n    return strcmp(i1->full, i2->full);\n}\n\n/* Initializes command help entries for command groups.\n * Called after the command help entries have already been filled in.\n * Extends the help table with new entries for the command groups.\n */\nvoid cliInitGroupHelpEntries(dict *groups) {\n    dictIterator iter;\n    dictEntry *entry;\n    helpEntry tmp;\n\n    int numGroups = dictSize(groups);\n    int pos = helpEntriesLen;\n    helpEntriesLen += numGroups;\n    helpEntries = zrealloc(helpEntries, sizeof(helpEntry)*helpEntriesLen);\n\n    dictInitIterator(&iter, groups);\n    for (entry = dictNext(&iter); entry != NULL; entry = dictNext(&iter)) {\n        tmp.argc = 1;\n        tmp.argv = zmalloc(sizeof(sds));\n        tmp.argv[0] = sdscatprintf(sdsempty(),\"@%s\",(char *)dictGetKey(entry));\n        tmp.full = tmp.argv[0];\n        tmp.type = CLI_HELP_GROUP;\n        tmp.docs.name = NULL;\n        tmp.docs.params = NULL;\n        tmp.docs.args = NULL;\n        tmp.docs.numargs = 0;\n        tmp.docs.summary = NULL;\n        tmp.docs.since = NULL;\n        tmp.docs.group = NULL;\n        helpEntries[pos++] = tmp;\n    }\n    dictResetIterator(&iter);\n}\n\n/* Initializes help entries for all commands in the COMMAND DOCS reply. 
*/\nvoid cliInitCommandHelpEntries(redisReply *commandTable, dict *groups) {\n    helpEntry *next = helpEntries;\n    for (size_t i = 0; i < commandTable->elements; i += 2) {\n        assert(commandTable->element[i]->type == REDIS_REPLY_STRING);\n        char *cmdname = commandTable->element[i]->str;\n\n        assert(commandTable->element[i + 1]->type == REDIS_REPLY_MAP ||\n               commandTable->element[i + 1]->type == REDIS_REPLY_ARRAY);\n        redisReply *cmdspecs = commandTable->element[i + 1];\n        next = cliInitCommandHelpEntry(cmdname, NULL, next, cmdspecs, groups);\n    }\n}\n\n/* Does the server version support a command/argument only available \"since\" some version?\n * Returns 1 when supported, or 0 when the \"since\" version is newer than \"version\". */\nstatic int versionIsSupported(sds version, sds since) {\n    int i;\n    char *versionPos = version;\n    char *sincePos = since;\n    if (!since) {\n        return 1;\n    }\n\n    for (i = 0; i != 3; i++) {\n        int versionPart = atoi(versionPos);\n        int sincePart = atoi(sincePos);\n        if (versionPart > sincePart) {\n            return 1;\n        } else if (sincePart > versionPart) {\n            return 0;\n        }\n        versionPos = strchr(versionPos, '.');\n        sincePos = strchr(sincePos, '.');\n\n        /* If we have finished parsing both `version` and `since`, they are equal */\n        if (!versionPos && !sincePos) return 1;\n\n        /* A different number of version components is considered not supported */\n        if (!versionPos || !sincePos) return 0;\n\n        versionPos++;\n        sincePos++;\n    }\n    return 0;\n}\n\nstatic void removeUnsupportedArgs(struct cliCommandArg *args, int *numargs, sds version) {\n    int i = 0, j;\n    while (i != *numargs) {\n        if (versionIsSupported(version, args[i].since)) {\n            if (args[i].subargs) {\n                removeUnsupportedArgs(args[i].subargs, &args[i].numsubargs, version);\n            }\n     
       i++;\n            continue;\n        }\n        for (j = i; j != *numargs - 1; j++) {\n            args[j] = args[j + 1];\n        }\n        (*numargs)--;\n    }\n}\n\nstatic helpEntry *cliLegacyInitCommandHelpEntry(char *cmdname, char *subcommandname,\n                                                helpEntry *next, struct commandDocs *command,\n                                                dict *groups, sds version) {\n    helpEntry *help = next++;\n    cliFillInCommandHelpEntry(help, cmdname, subcommandname);\n    \n    help->docs.summary = sdsnew(command->summary);\n    help->docs.since = sdsnew(command->since);\n    help->docs.group = sdsnew(command->group);\n    sds group = sdsdup(help->docs.group);\n    if (dictAdd(groups, group, NULL) != DICT_OK) {\n        sdsfree(group);\n    }\n\n    if (command->args != NULL) {\n        help->docs.args = command->args;\n        help->docs.numargs = command->numargs;\n        if (version)\n            removeUnsupportedArgs(help->docs.args, &help->docs.numargs, version);\n        help->docs.params = makeHint(NULL, 0, 0, help->docs);\n    }\n\n    if (command->subcommands != NULL) {\n        for (size_t i = 0; command->subcommands[i].name != NULL; i++) {\n            if (!version || versionIsSupported(version, command->subcommands[i].since)) {\n                char *subcommandname = command->subcommands[i].name;\n                next = cliLegacyInitCommandHelpEntry(\n                    cmdname, subcommandname, next, &command->subcommands[i], groups, version);\n            }\n        }\n    }\n    return next;\n}\n\nint cliLegacyInitCommandHelpEntries(struct commandDocs *commands, dict *groups, sds version) {\n    helpEntry *next = helpEntries;\n    for (size_t i = 0; commands[i].name != NULL; i++) {\n        if (!version || versionIsSupported(version, commands[i].since)) {\n            next = cliLegacyInitCommandHelpEntry(commands[i].name, NULL, next, &commands[i], groups, version);\n        }\n    }\n    return 
next - helpEntries;\n}\n\n/* Returns the total number of commands and subcommands in the command docs table,\n * filtered by server version (if provided).\n */\nstatic size_t cliLegacyCountCommands(struct commandDocs *commands, sds version) {\n    int numCommands = 0;\n    for (size_t i = 0; commands[i].name != NULL; i++) {\n        if (version && !versionIsSupported(version, commands[i].since)) {\n            continue;\n        }\n        numCommands++;\n        if (commands[i].subcommands != NULL) {\n            numCommands += cliLegacyCountCommands(commands[i].subcommands, version);\n        }\n    }\n    return numCommands;\n}\n\n/* Gets the server version string by calling INFO SERVER.\n * Stores the result in config.server_version.\n * When not connected, or not possible, returns NULL. */\nstatic sds cliGetServerVersion(void) {\n    static const char *key = \"\\nredis_version:\";\n    redisReply *serverInfo = NULL;\n    char *pos;\n\n    if (config.server_version != NULL) {\n        return config.server_version;\n    }\n\n    if (!context) return NULL;\n    serverInfo = redisCommand(context, \"INFO SERVER\");\n    if (serverInfo == NULL || serverInfo->type == REDIS_REPLY_ERROR) {\n        freeReplyObject(serverInfo);\n        return sdsempty();\n    }\n\n    assert(serverInfo->type == REDIS_REPLY_STRING || serverInfo->type == REDIS_REPLY_VERB);\n    sds info = serverInfo->str;\n\n    /* Finds the first appearance of \"redis_version\" in the INFO SERVER reply. 
*/\n    pos = strstr(info, key);\n    if (pos) {\n        pos += strlen(key);\n        char *end = strchr(pos, '\\r');\n        if (end) {\n            sds version = sdsnewlen(pos, end - pos);\n            freeReplyObject(serverInfo);\n            config.server_version = version;\n            return version;\n        }\n    }\n    freeReplyObject(serverInfo);\n    return NULL;\n}\n\nstatic void cliLegacyInitHelp(dict *groups) {\n    sds serverVersion = cliGetServerVersion();\n    \n    /* Scan the commandDocs array and fill in the entries */\n    helpEntriesLen = cliLegacyCountCommands(redisCommandTable, serverVersion);\n    helpEntries = zmalloc(sizeof(helpEntry)*helpEntriesLen);\n\n    helpEntriesLen = cliLegacyInitCommandHelpEntries(redisCommandTable, groups, serverVersion);\n    cliInitGroupHelpEntries(groups);\n\n    qsort(helpEntries, helpEntriesLen, sizeof(helpEntry), helpEntryCompare);\n    dictRelease(groups);\n}\n\n/* cliInitHelp() sets up the helpEntries array with the command and group\n * names and command descriptions obtained using the COMMAND DOCS command.\n */\nstatic void cliInitHelp(void) {\n    /* Dict type for a set of strings, used to collect names of command groups. */\n    dictType groupsdt = {\n        dictSdsHash,                /* hash function */\n        NULL,                       /* key dup */\n        NULL,                       /* val dup */\n        dictSdsKeyCompare,          /* key compare */\n        dictSdsDestructor,          /* key destructor */\n        NULL,                       /* val destructor */\n        NULL                        /* allow to expand */\n    };\n    redisReply *commandTable;\n    dict *groups;\n\n    if (cliConnect(CC_QUIET) == REDIS_ERR) {\n        /* Cannot connect to the server, but we still want to provide\n         * help; generate it only from the static cli_commands.c data instead. 
*/\n        groups = dictCreate(&groupsdt);\n        cliLegacyInitHelp(groups);\n        return;\n    }\n    commandTable = redisCommand(context, \"COMMAND DOCS\");\n    if (commandTable == NULL || commandTable->type == REDIS_REPLY_ERROR) {\n        /* New COMMAND DOCS subcommand not supported - generate help from\n         * static cli_commands.c data instead. */\n        freeReplyObject(commandTable);\n\n        groups = dictCreate(&groupsdt);\n        cliLegacyInitHelp(groups);\n        cliLegacyIntegrateHelp();\n        return;\n    };\n    if (commandTable->type != REDIS_REPLY_MAP && commandTable->type != REDIS_REPLY_ARRAY) {\n        freeReplyObject(commandTable);\n        return;\n    }\n\n    /* Scan the array reported by COMMAND DOCS and fill in the entries */\n    helpEntriesLen = cliCountCommands(commandTable);\n    helpEntries = zmalloc(sizeof(helpEntry)*helpEntriesLen);\n\n    groups = dictCreate(&groupsdt);\n    cliInitCommandHelpEntries(commandTable, groups);\n    cliInitGroupHelpEntries(groups);\n\n    qsort(helpEntries, helpEntriesLen, sizeof(helpEntry), helpEntryCompare);\n    freeReplyObject(commandTable);\n    dictRelease(groups);\n}\n\n/* Output command help to stdout. */\nstatic void cliOutputCommandHelp(struct commandDocs *help, int group) {\n    printf(\"\\r\\n  \\x1b[1m%s\\x1b[0m \\x1b[90m%s\\x1b[0m\\r\\n\", help->name, help->params);\n    printf(\"  \\x1b[33msummary:\\x1b[0m %s\\r\\n\", help->summary);\n    if (help->since != NULL) {\n        printf(\"  \\x1b[33msince:\\x1b[0m %s\\r\\n\", help->since);\n    }\n    if (group) {\n        printf(\"  \\x1b[33mgroup:\\x1b[0m %s\\r\\n\", help->group);\n    }\n}\n\n/* Print generic help. 
*/\nstatic void cliOutputGenericHelp(void) {\n    sds version = cliVersion();\n    printf(\n        \"redis-cli %s\\n\"\n        \"To get help about Redis commands type:\\n\"\n        \"      \\\"help @<group>\\\" to get a list of commands in <group>\\n\"\n        \"      \\\"help <command>\\\" for help on <command>\\n\"\n        \"      \\\"help <tab>\\\" to get a list of possible help topics\\n\"\n        \"      \\\"quit\\\" to exit\\n\"\n        \"\\n\"\n        \"To set redis-cli preferences:\\n\"\n        \"      \\\":set hints\\\" enable online hints\\n\"\n        \"      \\\":set nohints\\\" disable online hints\\n\"\n        \"Set your preferences in ~/.redisclirc\\n\",\n        version\n    );\n    sdsfree(version);\n}\n\n/* Output all command help, filtering by group or command name. */\nstatic void cliOutputHelp(int argc, char **argv) {\n    int i, j;\n    char *group = NULL;\n    helpEntry *entry;\n    struct commandDocs *help;\n\n    if (argc == 0) {\n        cliOutputGenericHelp();\n        return;\n    } else if (argc > 0 && argv[0][0] == '@') {\n        group = argv[0]+1;\n    }\n\n    if (helpEntries == NULL) {\n        /* Initialize the help using the results of the COMMAND command.\n         * In case we are using redis-cli help XXX, we need to init it. 
*/\n        cliInitHelp();\n    }\n\n    assert(argc > 0);\n    for (i = 0; i < helpEntriesLen; i++) {\n        entry = &helpEntries[i];\n        if (entry->type != CLI_HELP_COMMAND) continue;\n\n        help = &entry->docs;\n        if (group == NULL) {\n            /* Compare all arguments */\n            if (argc <= entry->argc) {\n                for (j = 0; j < argc; j++) {\n                    if (strcasecmp(argv[j],entry->argv[j]) != 0) break;\n                }\n                if (j == argc) {\n                    cliOutputCommandHelp(help,1);\n                }\n            }\n        } else if (strcasecmp(group, help->group) == 0) {\n            cliOutputCommandHelp(help,0);\n        }\n    }\n    printf(\"\\r\\n\");\n}\n\n/* Linenoise completion callback. */\nstatic void completionCallback(const char *buf, linenoiseCompletions *lc) {\n    size_t startpos = 0;\n    int mask;\n    int i;\n    size_t matchlen;\n    sds tmp;\n\n    if (strncasecmp(buf,\"help \",5) == 0) {\n        startpos = 5;\n        while (isspace(buf[startpos])) startpos++;\n        mask = CLI_HELP_COMMAND | CLI_HELP_GROUP;\n    } else {\n        mask = CLI_HELP_COMMAND;\n    }\n\n    for (i = 0; i < helpEntriesLen; i++) {\n        if (!(helpEntries[i].type & mask)) continue;\n\n        matchlen = strlen(buf+startpos);\n        if (strncasecmp(buf+startpos,helpEntries[i].full,matchlen) == 0) {\n            tmp = sdsnewlen(buf,startpos);\n            tmp = sdscat(tmp,helpEntries[i].full);\n            linenoiseAddCompletion(lc,tmp);\n            sdsfree(tmp);\n        }\n    }\n}\n\nstatic sds addHintForArgument(sds hint, cliCommandArg *arg);\n\n/* Adds a separator character between words of a string under construction.\n * A separator is added if the string length is greater than its previously-recorded\n * length (*len), which is then updated, and it's not the last word to be added.\n */\nstatic sds addSeparator(sds str, size_t *len, char *separator, int is_last) {\n    if 
(sdslen(str) > *len && !is_last) {\n        str = sdscat(str, separator);\n        *len = sdslen(str);\n    }\n    return str;\n}\n\n/* Recursively zeros the matched* fields of all arguments. */\nstatic void clearMatchedArgs(cliCommandArg *args, int numargs) {\n    for (int i = 0; i != numargs; ++i) {\n        args[i].matched = 0;\n        args[i].matched_token = 0;\n        args[i].matched_name = 0;\n        args[i].matched_all = 0;\n        if (args[i].subargs) {\n            clearMatchedArgs(args[i].subargs, args[i].numsubargs);\n        }\n    }\n}\n\n/* Builds a completion hint string describing the arguments, skipping parts already matched.\n * Hints for all arguments are added to the input 'hint' parameter, separated by 'separator'.\n */\nstatic sds addHintForArguments(sds hint, cliCommandArg *args, int numargs, char *separator) {\n    int i, j, incomplete;\n    size_t len = sdslen(hint);\n    for (i = 0; i < numargs; i++) {\n        if (!(args[i].flags & CMD_ARG_OPTIONAL)) {\n            hint = addHintForArgument(hint, &args[i]);\n            hint = addSeparator(hint, &len, separator, i == numargs-1);\n            continue;\n        }\n\n        /* The rule is that successive \"optional\" arguments can appear in any order.\n         * But if they are followed by a required argument, no more of those optional arguments\n         * can appear after that.\n         *\n         * This code handles all successive optional args together. This lets us show the\n         * completion of the currently-incomplete optional arg first, if there is one.\n         */\n        for (j = i, incomplete = -1; j < numargs; j++) {\n            if (!(args[j].flags & CMD_ARG_OPTIONAL)) break;\n            if (args[j].matched != 0 && args[j].matched_all == 0) {\n                /* User has started typing this arg; show its completion first. 
*/\n                hint = addHintForArgument(hint, &args[j]);\n                hint = addSeparator(hint, &len, separator, i == numargs-1);\n                incomplete = j;\n            }\n        }\n\n        /* If the following non-optional arg has not been matched, add hints for\n         * any remaining optional args in this group.\n         */\n        if (j == numargs || args[j].matched == 0) {\n            for (; i < j; i++) {\n                if (incomplete != i) {\n                    hint = addHintForArgument(hint, &args[i]);\n                    hint = addSeparator(hint, &len, separator, i == numargs-1);\n                }\n            }\n        }\n\n        i = j - 1;\n    }\n    return hint;\n}\n\n/* Adds the \"repeating\" section of the hint string for a multiple-typed argument: [ABC def ...]\n * The repeating part is a fixed unit; we don't filter matched elements from it.\n */\nstatic sds addHintForRepeatedArgument(sds hint, cliCommandArg *arg) {\n    if (!(arg->flags & CMD_ARG_MULTIPLE)) {\n        return hint;\n    }\n\n    /* The repeating part is always shown at the end of the argument's hint,\n     * so we can safely clear its matched flags before printing it.\n     */\n    clearMatchedArgs(arg, 1);\n\n    if (hint[0] != '\\0') {\n        hint = sdscat(hint, \" \");\n    }\n    hint = sdscat(hint, \"[\");\n\n    if (arg->flags & CMD_ARG_MULTIPLE_TOKEN) {\n        hint = sdscat_orempty(hint, arg->token);\n        if (arg->type != ARG_TYPE_PURE_TOKEN) {\n            hint = sdscat(hint, \" \");\n        }\n    }\n\n    switch (arg->type) {\n    case ARG_TYPE_ONEOF:\n        hint = addHintForArguments(hint, arg->subargs, arg->numsubargs, \"|\");\n        break;\n\n    case ARG_TYPE_BLOCK:\n        hint = addHintForArguments(hint, arg->subargs, arg->numsubargs, \" \");\n        break;\n\n    case ARG_TYPE_PURE_TOKEN:\n        break;\n\n    default:\n        hint = sdscat_orempty(hint, arg->display_text ? 
arg->display_text : arg->name);\n        break;\n    }\n\n    hint = sdscat(hint, \" ...]\");\n    return hint;\n}\n\n/* Adds hint string for one argument, if not already matched. */\nstatic sds addHintForArgument(sds hint, cliCommandArg *arg) {\n    if (arg->matched_all) {\n        return hint;\n    }\n\n    /* Surround an optional arg with brackets, unless it's partially matched. */\n    if ((arg->flags & CMD_ARG_OPTIONAL) && !arg->matched) {\n        hint = sdscat(hint, \"[\");\n    }\n\n    /* Start with the token, if present and not matched. */\n    if (arg->token != NULL && !arg->matched_token) {\n        hint = sdscat_orempty(hint, arg->token);\n        if (arg->type != ARG_TYPE_PURE_TOKEN) {\n            hint = sdscat(hint, \" \");\n        }\n    }\n\n    /* Add the body of the syntax string. */\n    switch (arg->type) {\n    case ARG_TYPE_ONEOF:\n        if (arg->matched == 0) {\n            hint = addHintForArguments(hint, arg->subargs, arg->numsubargs, \"|\");\n        } else {\n            int i;\n            for (i = 0; i < arg->numsubargs; i++) {\n                if (arg->subargs[i].matched != 0) {\n                    hint = addHintForArgument(hint, &arg->subargs[i]);\n                }\n            }\n        }\n        break;\n\n    case ARG_TYPE_BLOCK:\n        hint = addHintForArguments(hint, arg->subargs, arg->numsubargs, \" \");\n        break;\n\n    case ARG_TYPE_PURE_TOKEN:\n        break;\n\n    default:\n        if (!arg->matched_name) {\n            hint = sdscat_orempty(hint, arg->display_text ? 
arg->display_text : arg->name);\n        }\n        break;\n    }\n\n    hint = addHintForRepeatedArgument(hint, arg);\n\n    if ((arg->flags & CMD_ARG_OPTIONAL) && !arg->matched) {\n        hint = sdscat(hint, \"]\");\n    }\n\n    return hint;\n}\n\nstatic int matchArg(char **nextword, int numwords, cliCommandArg *arg);\nstatic int matchArgs(char **words, int numwords, cliCommandArg *args, int numargs);\n\n/* Tries to match the next words of the input against an argument. */\nstatic int matchNoTokenArg(char **nextword, int numwords, cliCommandArg *arg) {\n    int i;\n    switch (arg->type) {\n    case ARG_TYPE_BLOCK: {\n        arg->matched += matchArgs(nextword, numwords, arg->subargs, arg->numsubargs);\n\n        /* All the subargs must be matched for the block to match. */\n        arg->matched_all = 1;\n        for (i = 0; i < arg->numsubargs; i++) {\n            if (arg->subargs[i].matched_all == 0) {\n                arg->matched_all = 0;\n            }\n        }\n        break;\n    }\n    case ARG_TYPE_ONEOF: {\n        for (i = 0; i < arg->numsubargs; i++) {\n            if (matchArg(nextword, numwords, &arg->subargs[i])) {\n                arg->matched += arg->subargs[i].matched;\n                arg->matched_all = arg->subargs[i].matched_all;\n                break;\n            }\n        }\n        break;\n    }\n\n    case ARG_TYPE_INTEGER:\n    case ARG_TYPE_UNIX_TIME: {\n        long long value;\n        if (sscanf(*nextword, \"%lld\", &value) == 1) {\n            arg->matched += 1;\n            arg->matched_name = 1;\n            arg->matched_all = 1;\n        } else {\n            /* Matching failed due to incorrect arg type. 
*/\n            arg->matched = 0;\n            arg->matched_name = 0;\n        }\n        break;\n    }\n\n    case ARG_TYPE_DOUBLE: {\n        double value;\n        if (sscanf(*nextword, \"%lf\", &value) == 1) {\n            arg->matched += 1;\n            arg->matched_name = 1;\n            arg->matched_all = 1;\n        } else {\n            /* Matching failed due to incorrect arg type. */\n            arg->matched = 0;\n            arg->matched_name = 0;\n        }\n        break;\n    }\n\n    default:\n        arg->matched += 1;\n        arg->matched_name = 1;\n        arg->matched_all = 1;\n        break;\n    }\n    return arg->matched;\n}\n\n/* Tries to match the next word of the input against a token literal. */\nstatic int matchToken(char **nextword, cliCommandArg *arg) {\n    if (strcasecmp(arg->token, nextword[0]) != 0) {\n        return 0;\n    }\n    arg->matched_token = 1;\n    arg->matched = 1;\n    return 1;\n}\n\n/* Tries to match the next words of the input against the next argument.\n * If the arg is repeated (\"multiple\"), it will be matched only once.\n * If the next input word(s) can't be matched, returns 0 for failure.\n */\nstatic int matchArgOnce(char **nextword, int numwords, cliCommandArg *arg) {\n    /* First match the token, if present. */\n    if (arg->token != NULL) {\n        if (!matchToken(nextword, arg)) {\n            return 0;\n        }\n        if (arg->type == ARG_TYPE_PURE_TOKEN) {\n            arg->matched_all = 1;\n            return 1;\n        }\n        if (numwords == 1) {\n            return 1;\n        }\n        nextword++;\n        numwords--;\n    }\n\n    /* Then match the rest of the argument. 
*/\n    if (!matchNoTokenArg(nextword, numwords, arg)) {\n        return 0;\n    }\n    return arg->matched;\n}\n\n/* Tries to match the next words of the input against the next argument.\n * If the arg is repeated (\"multiple\"), it will be matched as many times as possible.\n */\nstatic int matchArg(char **nextword, int numwords, cliCommandArg *arg) {\n    int matchedWords = 0;\n    int matchedOnce = matchArgOnce(nextword, numwords, arg);\n    if (!(arg->flags & CMD_ARG_MULTIPLE)) {\n        return matchedOnce;\n    }\n\n    /* Found one match; now match a \"multiple\" argument as many times as possible. */\n    matchedWords += matchedOnce;\n    while (arg->matched_all && matchedWords < numwords) {\n        clearMatchedArgs(arg, 1);\n        if (arg->token != NULL && !(arg->flags & CMD_ARG_MULTIPLE_TOKEN)) {\n            /* The token only appears the first time; the rest of the times,\n             * pretend we saw it so we don't hint it.\n             */\n            matchedOnce = matchNoTokenArg(nextword + matchedWords, numwords - matchedWords, arg);\n            if (arg->matched) {\n                arg->matched_token = 1;\n            }\n        } else {\n            matchedOnce = matchArgOnce(nextword + matchedWords, numwords - matchedWords, arg);\n        }\n        matchedWords += matchedOnce;\n    }\n    arg->matched_all = 0;  /* Because more repetitions are still possible. */\n    return matchedWords;\n}\n\n/* Tries to match the next words of the input against\n * any one of a consecutive set of optional arguments.\n */\nstatic int matchOneOptionalArg(char **words, int numwords, cliCommandArg *args, int numargs, int *matchedarg) {\n    for (int nextword = 0, nextarg = 0; nextword != numwords && nextarg != numargs; ++nextarg) {\n        if (args[nextarg].matched) {\n            /* Already matched this arg. 
*/\n            continue;\n        }\n\n        int matchedWords = matchArg(&words[nextword], numwords - nextword, &args[nextarg]);\n        if (matchedWords != 0) {\n            *matchedarg = nextarg;\n            return matchedWords;\n        }\n    }\n    return 0;\n}\n\n/* Matches as many input words as possible against a set of consecutive optional arguments. */\nstatic int matchOptionalArgs(char **words, int numwords, cliCommandArg *args, int numargs) {\n    int nextword = 0;\n    int matchedarg = -1, lastmatchedarg = -1;\n    while (nextword != numwords) {\n        int matchedWords = matchOneOptionalArg(&words[nextword], numwords - nextword, args, numargs, &matchedarg);\n        if (matchedWords == 0) {\n            break;\n        }\n        /* Successfully matched an optional arg; mark any previous match as completed\n         * so it won't be partially hinted.\n         */\n        if (lastmatchedarg != -1) {\n            args[lastmatchedarg].matched_all = 1;\n        }\n        lastmatchedarg = matchedarg;\n        nextword += matchedWords;\n    }\n    return nextword;\n}\n\n/* Matches as many input words as possible against command arguments. */\nstatic int matchArgs(char **words, int numwords, cliCommandArg *args, int numargs) {\n    int nextword, nextarg, matchedWords;\n    for (nextword = 0, nextarg = 0; nextword != numwords && nextarg != numargs; ++nextarg) {\n        /* Optional args can occur in any order. 
Collect a range of consecutive optional args\n         * and try to match them as a group against the next input words.\n         */\n        if (args[nextarg].flags & CMD_ARG_OPTIONAL) {\n            int lastoptional;\n            for (lastoptional = nextarg; lastoptional < numargs; lastoptional++) {\n                if (!(args[lastoptional].flags & CMD_ARG_OPTIONAL)) break;\n            }\n            matchedWords = matchOptionalArgs(&words[nextword], numwords - nextword, &args[nextarg], lastoptional - nextarg);\n            nextarg = lastoptional - 1;\n        } else {\n            matchedWords = matchArg(&words[nextword], numwords - nextword, &args[nextarg]);\n            if (matchedWords == 0) {\n                /* Couldn't match a required word - matching fails! */\n                return 0;\n            }\n        }\n\n        nextword += matchedWords;\n    }\n    return nextword;\n}\n\n/* Compute the linenoise hint for the input prefix in inputargv/inputargc.\n * cmdlen is the number of words from the start of the input that make up the command.\n * If docs.args exists, dynamically creates a hint string by matching the arg specs\n * against the input words.\n */\nstatic sds makeHint(char **inputargv, int inputargc, int cmdlen, struct commandDocs docs) {\n    sds hint;\n\n    if (docs.args) {\n        /* Remove arguments from the returned hint to show only the\n         * ones the user did not yet type. */\n        clearMatchedArgs(docs.args, docs.numargs);\n        hint = sdsempty();\n        int matchedWords = 0;\n        if (inputargv && inputargc)\n            matchedWords = matchArgs(inputargv + cmdlen, inputargc - cmdlen, docs.args, docs.numargs);\n        if (matchedWords == inputargc - cmdlen) {\n            hint = addHintForArguments(hint, docs.args, docs.numargs, \" \");\n        }\n        return hint;\n    }\n\n    /* If arg specs are not available, show the hint string until the user types something. 
*/\n    if (inputargc <= cmdlen) {\n        hint = sdsnew(docs.params);\n    } else {\n        hint = sdsempty();\n    }\n    return hint;\n}\n\n/* Search for a command matching the longest possible prefix of input words. */\nstatic helpEntry* findHelpEntry(int argc, char **argv) {\n    helpEntry *entry = NULL;\n    int i, rawargc, matchlen = 0;\n    sds *rawargv;\n\n    for (i = 0; i < helpEntriesLen; i++) {\n        if (!(helpEntries[i].type & CLI_HELP_COMMAND)) continue;\n\n        rawargv = helpEntries[i].argv;\n        rawargc = helpEntries[i].argc;\n        if (rawargc <= argc) {\n            int j;\n            for (j = 0; j < rawargc; j++) {\n                if (strcasecmp(rawargv[j],argv[j])) {\n                    break;\n                }\n            }\n            if (j == rawargc && rawargc > matchlen) {\n                matchlen = rawargc;\n                entry = &helpEntries[i];\n            }\n        }\n    }\n    return entry;\n}\n\n/* Returns the command-line hint string for a given partial input. */\nstatic sds getHintForInput(const char *charinput) {\n    sds hint = NULL;\n    int inputargc, inputlen = strlen(charinput);\n    sds *inputargv = sdssplitargs(charinput, &inputargc);\n    int endspace = inputlen && isspace(charinput[inputlen-1]);\n\n    /* Don't match the last word until the user has typed a space after it. */\n    int matchargc = endspace ? inputargc : inputargc - 1;\n\n    helpEntry *entry = findHelpEntry(matchargc, inputargv);\n    if (entry) {\n        hint = makeHint(inputargv, matchargc, entry->argc, entry->docs);\n    }\n    sdsfreesplitres(inputargv, inputargc);\n    return hint;\n}\n\n/* Linenoise hints callback. */\nstatic char *hintsCallback(const char *buf, int *color, int *bold) {\n    if (!pref.hints) return NULL;\n\n    sds hint = getHintForInput(buf);\n    if (hint == NULL) {\n        return NULL;\n    }\n\n    *color = 90;\n    *bold = 0;\n\n    /* Add an initial space if needed. 
*/\n    int len = strlen(buf);\n    int endspace = len && isspace(buf[len-1]);\n    if (!endspace) {\n        sds newhint = sdsnewlen(\" \",1);\n        newhint = sdscatsds(newhint,hint);\n        sdsfree(hint);\n        hint = newhint;\n    }\n\n    return hint;\n}\n\nstatic void freeHintsCallback(void *ptr) {\n    sdsfree(ptr);\n}\n\n/*------------------------------------------------------------------------------\n * TTY manipulation\n *--------------------------------------------------------------------------- */\n\n/* Restore terminal if we've changed it. */\nvoid cliRestoreTTY(void) {\n    if (orig_termios_saved)\n        tcsetattr(STDIN_FILENO, TCSANOW, &orig_termios);\n}\n\n/* Put the terminal in \"press any key\" mode */\nstatic void cliPressAnyKeyTTY(void) {\n    if (!isatty(STDIN_FILENO)) return;\n    if (!orig_termios_saved) {\n        if (tcgetattr(STDIN_FILENO, &orig_termios) == -1) return;\n        atexit(cliRestoreTTY);\n        orig_termios_saved = 1;\n    }\n    struct termios mode = orig_termios;\n    mode.c_lflag &= ~(ECHO | ICANON); /* echoing off, canonical off */\n    tcsetattr(STDIN_FILENO, TCSANOW, &mode);\n}\n\n/*------------------------------------------------------------------------------\n * Networking / parsing\n *--------------------------------------------------------------------------- */\n\n/* Send AUTH command to the server */\nstatic int cliAuth(redisContext *ctx, char *user, char *auth) {\n    redisReply *reply;\n    if (auth == NULL) return REDIS_OK;\n\n    if (user == NULL)\n        reply = redisCommand(ctx,\"AUTH %s\",auth);\n    else\n        reply = redisCommand(ctx,\"AUTH %s %s\",user,auth);\n\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        return REDIS_ERR;\n    }\n\n    int result = REDIS_OK;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        result = REDIS_ERR;\n        fprintf(stderr, \"AUTH failed: %s\\n\", reply->str);\n    }\n    freeReplyObject(reply);\n    return 
result;\n}\n\n/* Send SELECT input_dbnum to the server */\nstatic int cliSelect(void) {\n    redisReply *reply;\n    if (config.conn_info.input_dbnum == config.dbnum) return REDIS_OK;\n\n    reply = redisCommand(context,\"SELECT %d\",config.conn_info.input_dbnum);\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        return REDIS_ERR;\n    }\n\n    int result = REDIS_OK;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        result = REDIS_ERR;\n        fprintf(stderr,\"SELECT %d failed: %s\\n\",config.conn_info.input_dbnum,reply->str);\n    } else {\n        config.dbnum = config.conn_info.input_dbnum;\n        cliRefreshPrompt();\n    }\n    freeReplyObject(reply);\n    return result;\n}\n\n/* Select RESP3 mode if redis-cli was started with the -3 option.  */\nstatic int cliSwitchProto(void) {\n    redisReply *reply;\n    if (!config.resp3 || config.resp2) return REDIS_OK;\n\n    reply = redisCommand(context,\"HELLO 3\");\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        return REDIS_ERR;\n    }\n\n    int result = REDIS_OK;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        fprintf(stderr,\"HELLO 3 failed: %s\\n\",reply->str);\n        if (config.resp3 == 1) {\n            result = REDIS_ERR;\n        } else if (config.resp3 == 2) {\n            result = REDIS_OK;\n        }\n    }\n\n    /* Retrieve server version string for later use. */\n    for (size_t i = 0; i < reply->elements; i += 2) {\n        assert(reply->element[i]->type == REDIS_REPLY_STRING);\n        char *key = reply->element[i]->str;\n        if (!strcmp(key, \"version\")) {\n            assert(reply->element[i + 1]->type == REDIS_REPLY_STRING);\n            config.server_version = sdsnew(reply->element[i + 1]->str);\n        }\n    }\n    freeReplyObject(reply);\n    config.current_resp3 = 1;\n    return result;\n}\n\n/* Set the client name if configured. 
*/\nstatic int cliSetName(void) {\n    if (config.client_name == NULL) return REDIS_OK;\n\n    redisReply *reply = redisCommand(context,\"CLIENT SETNAME %s\", config.client_name);\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        return REDIS_ERR;\n    }\n    int result = REDIS_OK;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        fprintf(stderr,\"CLIENT SETNAME failed: %s\\n\", reply->str);\n        result = REDIS_ERR;\n    }\n    freeReplyObject(reply);\n    return result;\n}\n\n/* Connect to the server. It is possible to pass certain flags to the function:\n *      CC_FORCE: The connection is performed even if there is already\n *                a connected socket.\n *      CC_QUIET: Don't print errors if connection fails. */\nstatic int cliConnect(int flags) {\n    if (context == NULL || flags & CC_FORCE) {\n        if (context != NULL) {\n            redisFree(context);\n            config.dbnum = 0;\n            config.in_multi = 0;\n            config.pubsub_mode = 0;\n            cliRefreshPrompt();\n        }\n\n        /* Do not use hostsocket when we got redirected in cluster mode */\n        if (config.hostsocket == NULL ||\n            (config.cluster_mode && config.cluster_reissue_command)) {\n            context = redisConnectWrapper(config.conn_info.hostip, config.conn_info.hostport,\n                                          config.connect_timeout);\n        } else {\n            context = redisConnectUnixWrapper(config.hostsocket, config.connect_timeout);\n        }\n\n        if (!context->err && config.tls) {\n            const char *err = NULL;\n            if (cliSecureConnection(context, config.sslconfig, &err) == REDIS_ERR && err) {\n                fprintf(stderr, \"Could not negotiate a TLS connection: %s\\n\", err);\n                redisFree(context);\n                context = NULL;\n                return REDIS_ERR;\n            }\n        }\n\n        if (context->err) {\n            if (!(flags & 
CC_QUIET)) {\n                fprintf(stderr,\"Could not connect to Redis at \");\n                if (config.hostsocket == NULL ||\n                    (config.cluster_mode && config.cluster_reissue_command))\n                {\n                    fprintf(stderr, \"%s:%d: %s\\n\",\n                        config.conn_info.hostip,config.conn_info.hostport,context->errstr);\n                } else {\n                    fprintf(stderr,\"%s: %s\\n\",\n                        config.hostsocket,context->errstr);\n                }\n            }\n            redisFree(context);\n            context = NULL;\n            return REDIS_ERR;\n        }\n\n\n        /* Set aggressive KEEP_ALIVE socket option in the Redis context socket\n         * in order to prevent timeouts caused by the execution of long\n         * commands. At the same time this improves the detection of real\n         * errors. */\n        anetKeepAlive(NULL, context->fd, REDIS_CLI_KEEPALIVE_INTERVAL);\n\n        /* State of the current connection. */\n        config.current_resp3 = 0;\n\n        /* Do AUTH, select the right DB, switch to RESP3 if needed. */\n        if (cliAuth(context, config.conn_info.user, config.conn_info.auth) != REDIS_OK)\n            return REDIS_ERR;\n        if (cliSelect() != REDIS_OK)\n            return REDIS_ERR;\n        if (cliSwitchProto() != REDIS_OK)\n            return REDIS_ERR;\n        if (cliSetName() != REDIS_OK)\n            return REDIS_ERR;\n    }\n\n    /* Set a PUSH handler if configured to do so. */\n    if (config.push_output) {\n        redisSetPushCallback(context, cliPushHandler);\n    }\n\n    return REDIS_OK;\n}\n\n/* In cluster, if server replies ASK, we will redirect to a different node.\n * Before sending the real command, we need to send ASKING command first. 
*/\nstatic int cliSendAsking(void) {\n    redisReply *reply;\n\n    config.cluster_send_asking = 0;\n    if (context == NULL) {\n        return REDIS_ERR;\n    }\n    reply = redisCommand(context,\"ASKING\");\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        return REDIS_ERR;\n    }\n    int result = REDIS_OK;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        result = REDIS_ERR;\n        fprintf(stderr,\"ASKING failed: %s\\n\",reply->str);\n    }\n    freeReplyObject(reply);\n    return result;\n}\n\nstatic void cliPrintContextError(void) {\n    if (context == NULL) return;\n    fprintf(stderr,\"Error: %s\\n\",context->errstr);\n}\n\nstatic int isInvalidateReply(redisReply *reply) {\n    return reply->type == REDIS_REPLY_PUSH && reply->elements == 2 &&\n        reply->element[0]->type == REDIS_REPLY_STRING &&\n        !strncmp(reply->element[0]->str, \"invalidate\", 10) &&\n        reply->element[1]->type == REDIS_REPLY_ARRAY;\n}\n\n/* Special display handler for RESP3 'invalidate' messages.\n * This function does not validate the reply, so the caller must\n * have already confirmed that it is well formed. */\nstatic sds cliFormatInvalidateTTY(redisReply *r) {\n    sds out = sdsnew(\"-> invalidate: \");\n\n    for (size_t i = 0; i < r->element[1]->elements; i++) {\n        redisReply *key = r->element[1]->element[i];\n        assert(key->type == REDIS_REPLY_STRING);\n\n        out = sdscatfmt(out, \"'%s'\", key->str);\n        if (i < r->element[1]->elements - 1)\n            out = sdscatlen(out, \", \", 2);\n    }\n\n    return sdscatlen(out, \"\\n\", 1);\n}\n\n/* Returns non-zero if cliFormatReplyTTY renders the reply in multiple lines. 
*/\nstatic int cliIsMultilineValueTTY(redisReply *r) {\n    switch (r->type) {\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n        if (r->elements == 0) return 0;\n        if (r->elements > 1) return 1;\n        return cliIsMultilineValueTTY(r->element[0]);\n    case REDIS_REPLY_MAP:\n        if (r->elements == 0) return 0;\n        if (r->elements > 2) return 1;\n        return cliIsMultilineValueTTY(r->element[1]);\n    default:\n        return 0;\n    }\n}\n\nstatic sds cliFormatReplyTTY(redisReply *r, char *prefix) {\n    sds out = sdsempty();\n    switch (r->type) {\n    case REDIS_REPLY_ERROR:\n        out = sdscatprintf(out,\"(error) %s\\n\", r->str);\n    break;\n    case REDIS_REPLY_STATUS:\n        out = sdscat(out,r->str);\n        out = sdscat(out,\"\\n\");\n    break;\n    case REDIS_REPLY_INTEGER:\n        out = sdscatprintf(out,\"(integer) %lld\\n\",r->integer);\n    break;\n    case REDIS_REPLY_DOUBLE:\n        out = sdscatprintf(out,\"(double) %s\\n\",r->str);\n    break;\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_VERB:\n        /* If you are producing output for the standard output we want\n        * a more interesting output with quoted characters and so forth,\n        * unless it's a verbatim string type. */\n        if (r->type == REDIS_REPLY_STRING) {\n            out = sdscatrepr(out,r->str,r->len);\n            out = sdscat(out,\"\\n\");\n        } else {\n            out = sdscatlen(out,r->str,r->len);\n            out = sdscat(out,\"\\n\");\n        }\n    break;\n    case REDIS_REPLY_NIL:\n        out = sdscat(out,\"(nil)\\n\");\n    break;\n    case REDIS_REPLY_BOOL:\n        out = sdscat(out,r->integer ? 
\"(true)\\n\" : \"(false)\\n\");\n    break;\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_MAP:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n        if (r->elements == 0) {\n            if (r->type == REDIS_REPLY_ARRAY)\n                out = sdscat(out,\"(empty array)\\n\");\n            else if (r->type == REDIS_REPLY_MAP)\n                out = sdscat(out,\"(empty hash)\\n\");\n            else if (r->type == REDIS_REPLY_SET)\n                out = sdscat(out,\"(empty set)\\n\");\n            else if (r->type == REDIS_REPLY_PUSH)\n                out = sdscat(out,\"(empty push)\\n\");\n            else\n                out = sdscat(out,\"(empty aggregate type)\\n\");\n        } else {\n            unsigned int i, idxlen = 0;\n            char _prefixlen[16];\n            char _prefixfmt[16];\n            sds _prefix;\n            sds tmp;\n\n            /* Calculate chars needed to represent the largest index */\n            i = r->elements;\n            if (r->type == REDIS_REPLY_MAP) i /= 2;\n            do {\n                idxlen++;\n                i /= 10;\n            } while(i);\n\n            /* Prefix for nested multi bulks should grow with idxlen+2 spaces */\n            memset(_prefixlen,' ',idxlen+2);\n            _prefixlen[idxlen+2] = '\\0';\n            _prefix = sdscat(sdsnew(prefix),_prefixlen);\n\n            /* Setup prefix format for every entry */\n            char numsep;\n            if (r->type == REDIS_REPLY_SET) numsep = '~';\n            else if (r->type == REDIS_REPLY_MAP) numsep = '#';\n            /* TODO: this would be a breaking change for scripts, do that in a major version. 
*/\n            /* else if (r->type == REDIS_REPLY_PUSH) numsep = '>'; */\n            else numsep = ')';\n            snprintf(_prefixfmt,sizeof(_prefixfmt),\"%%s%%%ud%c \",idxlen,numsep);\n\n            for (i = 0; i < r->elements; i++) {\n                unsigned int human_idx = (r->type == REDIS_REPLY_MAP) ?\n                                         i/2 : i;\n                human_idx++; /* Make it 1-based. */\n\n                /* Don't use the prefix for the first element, as the parent\n                 * caller already prepended the index number. */\n                out = sdscatprintf(out,_prefixfmt,i == 0 ? \"\" : prefix,human_idx);\n\n                /* Format the multi bulk entry */\n                tmp = cliFormatReplyTTY(r->element[i],_prefix);\n                out = sdscatlen(out,tmp,sdslen(tmp));\n                sdsfree(tmp);\n\n                /* For maps, format the value as well. */\n                if (r->type == REDIS_REPLY_MAP) {\n                    i++;\n                    sdsrange(out,0,-2);\n                    out = sdscat(out,\" => \");\n                    if (cliIsMultilineValueTTY(r->element[i])) {\n                        /* linebreak before multiline value to fix alignment */\n                        out = sdscat(out, \"\\n\");\n                        out = sdscat(out, _prefix);\n                    }\n                    tmp = cliFormatReplyTTY(r->element[i],_prefix);\n                    out = sdscatlen(out,tmp,sdslen(tmp));\n                    sdsfree(tmp);\n                }\n            }\n            sdsfree(_prefix);\n        }\n    break;\n    default:\n        fprintf(stderr,\"Unknown reply type: %d\\n\", r->type);\n        exit(1);\n    }\n    return out;\n}\n\n/* Returns 1 if the reply is a pubsub pushed reply. */\nint isPubsubPush(redisReply *r) {\n    if (r == NULL ||\n        r->type != (config.current_resp3 ? 
REDIS_REPLY_PUSH : REDIS_REPLY_ARRAY) ||\n        r->elements < 3 ||\n        r->element[0]->type != REDIS_REPLY_STRING)\n    {\n        return 0;\n    }\n    char *str = r->element[0]->str;\n    size_t len = r->element[0]->len;\n    /* Check if it is [p|s][un]subscribe or [p|s]message, but even simpler, we\n     * just check that it ends with \"message\" or \"subscribe\". */\n    return ((len >= strlen(\"message\") &&\n             !strcmp(str + len - strlen(\"message\"), \"message\")) ||\n            (len >= strlen(\"subscribe\") &&\n             !strcmp(str + len - strlen(\"subscribe\"), \"subscribe\")));\n}\n\nint isColorTerm(void) {\n    char *t = getenv(\"TERM\");\n    return t != NULL && strstr(t,\"xterm\") != NULL;\n}\n\n/* Helper function for sdsCatColorizedLdbReply() appending colorized strings\n * to an SDS string. */\nsds sdscatcolor(sds o, char *s, size_t len, char *color) {\n    if (!isColorTerm()) return sdscatlen(o,s,len);\n\n    int bold = strstr(color,\"bold\") != NULL;\n    int ccode = 37; /* Defaults to white. */\n    if (strstr(color,\"red\")) ccode = 31;\n    else if (strstr(color,\"green\")) ccode = 32;\n    else if (strstr(color,\"yellow\")) ccode = 33;\n    else if (strstr(color,\"blue\")) ccode = 34;\n    else if (strstr(color,\"magenta\")) ccode = 35;\n    else if (strstr(color,\"cyan\")) ccode = 36;\n    else if (strstr(color,\"white\")) ccode = 37;\n\n    o = sdscatfmt(o,\"\\033[%i;%i;49m\",bold,ccode);\n    o = sdscatlen(o,s,len);\n    o = sdscat(o,\"\\033[0m\");\n    return o;\n}\n\n/* Colorize Lua debugger status replies according to the prefix they\n * have. 
*/\nsds sdsCatColorizedLdbReply(sds o, char *s, size_t len) {\n    char *color = \"white\";\n\n    if (strstr(s,\"<debug>\")) color = \"bold\";\n    if (strstr(s,\"<redis>\")) color = \"green\";\n    if (strstr(s,\"<reply>\")) color = \"cyan\";\n    if (strstr(s,\"<error>\")) color = \"red\";\n    if (strstr(s,\"<hint>\")) color = \"bold\";\n    if (strstr(s,\"<value>\") || strstr(s,\"<retval>\")) color = \"magenta\";\n    if (len > 4 && isdigit(s[3])) {\n        if (s[1] == '>') color = \"yellow\"; /* Current line. */\n        else if (s[2] == '#') color = \"bold\"; /* Break point. */\n    }\n    return sdscatcolor(o,s,len,color);\n}\n\nstatic sds cliFormatReplyRaw(redisReply *r) {\n    sds out = sdsempty(), tmp;\n    size_t i;\n\n    switch (r->type) {\n    case REDIS_REPLY_NIL:\n        /* Nothing... */\n        break;\n    case REDIS_REPLY_ERROR:\n        out = sdscatlen(out,r->str,r->len);\n        out = sdscatlen(out,\"\\n\",1);\n        break;\n    case REDIS_REPLY_STATUS:\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_VERB:\n        if (r->type == REDIS_REPLY_STATUS && config.eval_ldb) {\n            /* The Lua debugger replies with arrays of simple (status)\n             * strings. We colorize the output for more fun if this\n             * is a debugging session. */\n\n            /* Detect the end of a debugging session. */\n            if (strstr(r->str,\"<endsession>\") == r->str) {\n                config.enable_ldb_on_eval = 0;\n                config.eval_ldb = 0;\n                config.eval_ldb_end = 1; /* Signal the caller session ended. */\n                config.output = OUTPUT_STANDARD;\n                cliRefreshPrompt();\n            } else {\n                out = sdsCatColorizedLdbReply(out,r->str,r->len);\n            }\n        } else {\n            out = sdscatlen(out,r->str,r->len);\n        }\n        break;\n    case REDIS_REPLY_BOOL:\n        out = sdscat(out,r->integer ? 
\"(true)\" : \"(false)\");\n    break;\n    case REDIS_REPLY_INTEGER:\n        out = sdscatprintf(out,\"%lld\",r->integer);\n        break;\n    case REDIS_REPLY_DOUBLE:\n        out = sdscatprintf(out,\"%s\",r->str);\n        break;\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_PUSH:\n        for (i = 0; i < r->elements; i++) {\n            if (i > 0) out = sdscat(out,config.mb_delim);\n            tmp = cliFormatReplyRaw(r->element[i]);\n            out = sdscatlen(out,tmp,sdslen(tmp));\n            sdsfree(tmp);\n        }\n        break;\n    case REDIS_REPLY_MAP:\n        for (i = 0; i < r->elements; i += 2) {\n            if (i > 0) out = sdscat(out,config.mb_delim);\n            tmp = cliFormatReplyRaw(r->element[i]);\n            out = sdscatlen(out,tmp,sdslen(tmp));\n            sdsfree(tmp);\n\n            out = sdscatlen(out,\" \",1);\n            tmp = cliFormatReplyRaw(r->element[i+1]);\n            out = sdscatlen(out,tmp,sdslen(tmp));\n            sdsfree(tmp);\n        }\n        break;\n    default:\n        fprintf(stderr,\"Unknown reply type: %d\\n\", r->type);\n        exit(1);\n    }\n    return out;\n}\n\nstatic sds cliFormatReplyCSV(redisReply *r) {\n    unsigned int i;\n\n    sds out = sdsempty();\n    switch (r->type) {\n    case REDIS_REPLY_ERROR:\n        out = sdscat(out,\"ERROR,\");\n        out = sdscatrepr(out,r->str,strlen(r->str));\n    break;\n    case REDIS_REPLY_STATUS:\n        out = sdscatrepr(out,r->str,r->len);\n    break;\n    case REDIS_REPLY_INTEGER:\n        out = sdscatprintf(out,\"%lld\",r->integer);\n    break;\n    case REDIS_REPLY_DOUBLE:\n        out = sdscatprintf(out,\"%s\",r->str);\n        break;\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_VERB:\n        out = sdscatrepr(out,r->str,r->len);\n    break;\n    case REDIS_REPLY_NIL:\n        out = sdscat(out,\"NULL\");\n    break;\n    case REDIS_REPLY_BOOL:\n        out = sdscat(out,r->integer ? 
\"true\" : \"false\");\n    break;\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n    case REDIS_REPLY_MAP: /* CSV has no map type, just output flat list. */\n        for (i = 0; i < r->elements; i++) {\n            sds tmp = cliFormatReplyCSV(r->element[i]);\n            out = sdscatlen(out,tmp,sdslen(tmp));\n            if (i != r->elements-1) out = sdscat(out,\",\");\n            sdsfree(tmp);\n        }\n    break;\n    default:\n        fprintf(stderr,\"Unknown reply type: %d\\n\", r->type);\n        exit(1);\n    }\n    return out;\n}\n\n/* Append specified buffer to out and return it, using required JSON output\n * mode. */\nstatic sds jsonStringOutput(sds out, const char *p, int len, int mode) {\n    if (mode == OUTPUT_JSON) {\n        return escapeJsonString(out, p, len);\n    } else if (mode == OUTPUT_QUOTED_JSON) {\n        /* Need to double-quote backslashes */\n        sds tmp = sdscatrepr(sdsempty(), p, len);\n        int tmplen = sdslen(tmp);\n        char *n = tmp;\n        while (tmplen--) {\n            if (*n == '\\\\') out = sdscatlen(out, \"\\\\\\\\\", 2);\n            else out = sdscatlen(out, n, 1);\n            n++;\n        }\n\n        sdsfree(tmp);\n        return out;\n    } else {\n        assert(0);\n    }\n}\n\nstatic sds cliFormatReplyJson(sds out, redisReply *r, int mode) {\n    unsigned int i;\n\n    switch (r->type) {\n    case REDIS_REPLY_ERROR:\n        out = sdscat(out,\"error:\");\n        out = jsonStringOutput(out,r->str,strlen(r->str),mode);\n        break;\n    case REDIS_REPLY_STATUS:\n        out = jsonStringOutput(out,r->str,r->len,mode);\n        break;\n    case REDIS_REPLY_INTEGER:\n        out = sdscatprintf(out,\"%lld\",r->integer);\n        break;\n    case REDIS_REPLY_DOUBLE:\n        out = sdscatprintf(out,\"%s\",r->str);\n        break;\n    case REDIS_REPLY_STRING:\n    case REDIS_REPLY_VERB:\n        out = jsonStringOutput(out,r->str,r->len,mode);\n        break;\n    
case REDIS_REPLY_NIL:\n        out = sdscat(out,\"null\");\n        break;\n    case REDIS_REPLY_BOOL:\n        out = sdscat(out,r->integer ? \"true\" : \"false\");\n        break;\n    case REDIS_REPLY_ARRAY:\n    case REDIS_REPLY_SET:\n    case REDIS_REPLY_PUSH:\n        out = sdscat(out,\"[\");\n        for (i = 0; i < r->elements; i++ ) {\n            out = cliFormatReplyJson(out,r->element[i],mode);\n            if (i != r->elements-1) out = sdscat(out,\",\");\n        }\n        out = sdscat(out,\"]\");\n        break;\n    case REDIS_REPLY_MAP:\n        out = sdscat(out,\"{\");\n        for (i = 0; i < r->elements; i += 2) {\n            redisReply *key = r->element[i];\n            if (key->type == REDIS_REPLY_ERROR ||\n                key->type == REDIS_REPLY_STATUS ||\n                key->type == REDIS_REPLY_STRING ||\n                key->type == REDIS_REPLY_VERB)\n            {\n                out = cliFormatReplyJson(out,key,mode);\n            } else {\n                /* According to JSON spec, JSON map keys must be strings,\n                 * and in RESP3, they can be other types. 
\n                 * The first call (cliFormatReplyJson) converts the non-string type to a string,\n                 * and the second (escapeJsonString) escapes the converted string. */\n                sds keystr = cliFormatReplyJson(sdsempty(),key,mode);\n                if (keystr[0] == '\"') out = sdscatsds(out,keystr);\n                else out = sdscatfmt(out,\"\\\"%S\\\"\",keystr);\n                sdsfree(keystr);\n            }\n            out = sdscat(out,\":\");\n\n            out = cliFormatReplyJson(out,r->element[i+1],mode);\n            if (i != r->elements-2) out = sdscat(out,\",\");\n        }\n        out = sdscat(out,\"}\");\n        break;\n    default:\n        fprintf(stderr,\"Unknown reply type: %d\\n\", r->type);\n        exit(1);\n    }\n    return out;\n}\n\n/* Generate reply strings in various output modes */\nstatic sds cliFormatReply(redisReply *reply, int mode, int verbatim) {\n    sds out;\n\n    if (verbatim) {\n        out = cliFormatReplyRaw(reply);\n    } else if (mode == OUTPUT_STANDARD) {\n        out = cliFormatReplyTTY(reply, \"\");\n    } else if (mode == OUTPUT_RAW) {\n        out = cliFormatReplyRaw(reply);\n        out = sdscatsds(out, config.cmd_delim);\n    } else if (mode == OUTPUT_CSV) {\n        out = cliFormatReplyCSV(reply);\n        out = sdscatlen(out, \"\\n\", 1);\n    } else if (mode == OUTPUT_JSON || mode == OUTPUT_QUOTED_JSON) {\n        out = cliFormatReplyJson(sdsempty(), reply, mode);\n        out = sdscatlen(out, \"\\n\", 1);\n    } else {\n        fprintf(stderr, \"Error: Unknown output encoding %d\\n\", mode);\n        exit(1);\n    }\n\n    return out;\n}\n\n/* Output any spontaneous PUSH reply we receive */\nstatic void cliPushHandler(void *privdata, void *reply) {\n    UNUSED(privdata);\n    sds out;\n\n    if (config.output == OUTPUT_STANDARD && isInvalidateReply(reply)) {\n        out = cliFormatInvalidateTTY(reply);\n    } else {\n        out = cliFormatReply(reply, config.output, 0);\n    }\n\n  
  fwrite(out, sdslen(out), 1, stdout);\n\n    freeReplyObject(reply);\n    sdsfree(out);\n}\n\nstatic int cliReadReply(int output_raw_strings) {\n    void *_reply;\n    redisReply *reply;\n    sds out = NULL;\n    int output = 1;\n\n    if (config.last_reply) {\n        freeReplyObject(config.last_reply);\n        config.last_reply = NULL;\n    }\n\n    if (redisGetReply(context,&_reply) != REDIS_OK) {\n        if (config.blocking_state_aborted) {\n            config.blocking_state_aborted = 0;\n            config.monitor_mode = 0;\n            config.pubsub_mode = 0;\n            return cliConnect(CC_FORCE);\n        }\n\n        if (config.shutdown) {\n            redisFree(context);\n            context = NULL;\n            return REDIS_OK;\n        }\n        if (config.interactive) {\n            /* Filter cases where we should reconnect */\n            if (context->err == REDIS_ERR_IO &&\n                (errno == ECONNRESET || errno == EPIPE))\n                return REDIS_ERR;\n            if (context->err == REDIS_ERR_EOF)\n                return REDIS_ERR;\n        }\n        cliPrintContextError();\n        exit(1);\n        return REDIS_ERR; /* avoid compiler warning */\n    }\n\n    config.last_reply = reply = (redisReply*)_reply;\n\n    config.last_cmd_type = reply->type;\n\n    /* Check if we need to connect to a different node and reissue the\n     * request. 
*/\n    if (config.cluster_mode && reply->type == REDIS_REPLY_ERROR &&\n        (!strncmp(reply->str,\"MOVED \",6) || !strncmp(reply->str,\"ASK \",4)))\n    {\n        char *p = reply->str, *s;\n        int slot;\n\n        output = 0;\n        /* Comments show the position of the pointer as:\n         *\n         * [S] for pointer 's'\n         * [P] for pointer 'p'\n         */\n        s = strchr(p,' ');      /* MOVED[S]3999 127.0.0.1:6381 */\n        p = strchr(s+1,' ');    /* MOVED[S]3999[P]127.0.0.1:6381 */\n        *p = '\\0';\n        slot = atoi(s+1);\n        s = strrchr(p+1,':');    /* MOVED 3999[P]127.0.0.1[S]6381 */\n        *s = '\\0';\n        if (p+1 != s) {\n            /* Host might be empty, like 'MOVED 3999 :6381', if endpoint type is unknown. Only update the\n             * host if it's non-empty. */\n            sdsfree(config.conn_info.hostip);\n            config.conn_info.hostip = sdsnew(p+1);\n        }\n        config.conn_info.hostport = atoi(s+1);\n        if (config.interactive)\n            printf(\"-> Redirected to slot [%d] located at %s:%d\\n\",\n                slot, config.conn_info.hostip, config.conn_info.hostport);\n        config.cluster_reissue_command = 1;\n        if (!strncmp(reply->str,\"ASK \",4)) {\n            config.cluster_send_asking = 1;\n        }\n        cliRefreshPrompt();\n    } else if (!config.interactive && config.set_errcode && \n        reply->type == REDIS_REPLY_ERROR) \n    {\n        fprintf(stderr,\"%s\\n\",reply->str);\n        exit(1);\n        return REDIS_ERR; /* avoid compiler warning */\n    }\n\n    if (output) {\n        out = cliFormatReply(reply, config.output, output_raw_strings);\n        fwrite(out,sdslen(out),1,stdout);\n        fflush(stdout);\n        sdsfree(out);\n    }\n    return REDIS_OK;\n}\n\n/* Simultaneously wait for pubsub messages from redis and input on stdin. 
*/\nstatic void cliWaitForMessagesOrStdin(void) {\n    int show_info = config.output != OUTPUT_RAW && (isatty(STDOUT_FILENO) ||\n                                                    getenv(\"FAKETTY\"));\n    int use_color = show_info && isColorTerm();\n    cliPressAnyKeyTTY();\n    while (config.pubsub_mode) {\n        /* First check if there are any buffered replies. */\n        redisReply *reply;\n        do {\n            if (redisGetReplyFromReader(context, (void **)&reply) != REDIS_OK) {\n                cliPrintContextError();\n                exit(1);\n            }\n            if (reply) {\n                sds out = cliFormatReply(reply, config.output, 0);\n                fwrite(out,sdslen(out),1,stdout);\n                fflush(stdout);\n                sdsfree(out);\n            }\n        } while(reply);\n\n        /* Wait for input, either on the Redis socket or on stdin. */\n        struct timeval tv;\n        fd_set readfds;\n        FD_ZERO(&readfds);\n        FD_SET(context->fd, &readfds);\n        FD_SET(STDIN_FILENO, &readfds);\n        tv.tv_sec = 5;\n        tv.tv_usec = 0;\n        if (show_info) {\n            if (use_color) printf(\"\\033[1;90m\"); /* Bold, bright color. */\n            printf(\"Reading messages... (press Ctrl-C to quit or any key to type command)\\r\");\n            if (use_color) printf(\"\\033[0m\"); /* Reset color. 
*/\n            fflush(stdout);\n        }\n        select(context->fd + 1, &readfds, NULL, NULL, &tv);\n        if (show_info) {\n            printf(\"\\033[K\"); /* Erase current line */\n            fflush(stdout);\n        }\n        if (config.blocking_state_aborted) {\n            /* Ctrl-C pressed */\n            config.blocking_state_aborted = 0;\n            config.pubsub_mode = 0;\n            if (cliConnect(CC_FORCE) != REDIS_OK) {\n                cliPrintContextError();\n                exit(1);\n            }\n            break;\n        } else if (FD_ISSET(context->fd, &readfds)) {\n            /* Message from Redis */\n            if (cliReadReply(0) != REDIS_OK) {\n                cliPrintContextError();\n                exit(1);\n            }\n            fflush(stdout);\n        } else if (FD_ISSET(STDIN_FILENO, &readfds)) {\n            /* Any key pressed */\n            break;\n        }\n    }\n    cliRestoreTTY();\n}\n\nstatic int cliSendCommand(int argc, char **argv, long repeat) {\n    char *command = argv[0];\n    size_t *argvlen;\n    int j, output_raw;\n\n    if (context == NULL) return REDIS_ERR;\n\n    output_raw = 0;\n    if (!strcasecmp(command,\"info\") ||\n        !strcasecmp(command,\"lolwut\") ||\n        (argc >= 2 && !strcasecmp(command,\"debug\") &&\n                       !strcasecmp(argv[1],\"htstats\")) ||\n        (argc >= 2 && !strcasecmp(command,\"debug\") &&\n                       !strcasecmp(argv[1],\"htstats-key\")) ||\n        (argc >= 2 && !strcasecmp(command,\"debug\") &&\n                       !strcasecmp(argv[1],\"client-eviction\")) ||\n        (argc >= 2 && !strcasecmp(command,\"memory\") &&\n                      (!strcasecmp(argv[1],\"malloc-stats\") ||\n                       !strcasecmp(argv[1],\"doctor\"))) ||\n        (argc == 2 && !strcasecmp(command,\"cluster\") &&\n                      (!strcasecmp(argv[1],\"nodes\") ||\n                       !strcasecmp(argv[1],\"info\"))) ||\n        (argc >= 2 
&& !strcasecmp(command,\"client\") &&\n                       (!strcasecmp(argv[1],\"list\") ||\n                        !strcasecmp(argv[1],\"info\"))) ||\n        (argc == 3 && !strcasecmp(command,\"latency\") &&\n                       !strcasecmp(argv[1],\"graph\")) ||\n        (argc == 2 && !strcasecmp(command,\"latency\") &&\n                       !strcasecmp(argv[1],\"doctor\")) ||\n        /* Format PROXY INFO command for Redis Cluster Proxy:\n         * https://github.com/artix75/redis-cluster-proxy */\n        (argc >= 2 && !strcasecmp(command,\"proxy\") &&\n                       !strcasecmp(argv[1],\"info\")))\n    {\n        output_raw = 1;\n    }\n\n    if (!strcasecmp(command,\"shutdown\")) config.shutdown = 1;\n    if (!strcasecmp(command,\"monitor\")) config.monitor_mode = 1;\n    int is_subscribe = (!strcasecmp(command, \"subscribe\") ||\n                        !strcasecmp(command, \"psubscribe\") ||\n                        !strcasecmp(command, \"ssubscribe\"));\n    int is_unsubscribe = (!strcasecmp(command, \"unsubscribe\") ||\n                          !strcasecmp(command, \"punsubscribe\") ||\n                          !strcasecmp(command, \"sunsubscribe\"));\n    if (!strcasecmp(command,\"sync\") ||\n        !strcasecmp(command,\"psync\")) config.slave_mode = 1;\n\n    /* When the user manually calls SCRIPT DEBUG, setup the activation of\n     * debugging mode on the next eval if needed. */\n    if (argc == 3 && !strcasecmp(argv[0],\"script\") &&\n                     !strcasecmp(argv[1],\"debug\"))\n    {\n        if (!strcasecmp(argv[2],\"yes\") || !strcasecmp(argv[2],\"sync\")) {\n            config.enable_ldb_on_eval = 1;\n        } else {\n            config.enable_ldb_on_eval = 0;\n        }\n    }\n\n    /* Actually activate LDB on EVAL if needed. 
*/\n    if (!strcasecmp(command,\"eval\") && config.enable_ldb_on_eval) {\n        config.eval_ldb = 1;\n        config.output = OUTPUT_RAW;\n    }\n\n    /* Setup argument length */\n    argvlen = zmalloc(argc*sizeof(size_t));\n    for (j = 0; j < argc; j++)\n        argvlen[j] = sdslen(argv[j]);\n\n    /* Negative repeat is allowed and causes infinite loop,\n       works well with the interval option. */\n    while(repeat < 0 || repeat-- > 0) {\n        redisAppendCommandArgv(context,argc,(const char**)argv,argvlen);\n\n        if (config.monitor_mode) {\n            do {\n                if (cliReadReply(output_raw) != REDIS_OK) {\n                    cliPrintContextError();\n                    exit(1);\n                }\n                fflush(stdout);\n\n                /* This happens when the MONITOR command returns an error. */\n                if (config.last_cmd_type == REDIS_REPLY_ERROR)\n                    config.monitor_mode = 0;\n            } while(config.monitor_mode);\n            zfree(argvlen);\n            return REDIS_OK;\n        }\n\n        int num_expected_pubsub_push = 0;\n        if (is_subscribe || is_unsubscribe) {\n            /* When a push callback is set, redisGetReply (hiredis) loops until\n             * an in-band message is received, but these commands are confirmed\n             * using push replies only. There is one push reply per channel if\n             * channels are specified, otherwise at least one. */\n            num_expected_pubsub_push = argc > 1 ? argc - 1 : 1;\n            /* Unset our default PUSH handler so this works in RESP2/RESP3 */\n            redisSetPushCallback(context, NULL);\n        }\n\n        if (config.slave_mode) {\n            printf(\"Entering replica output mode...  
(press Ctrl-C to quit)\\n\");\n            slaveMode(0);\n            config.slave_mode = 0;\n            zfree(argvlen);\n            return REDIS_ERR;  /* Error = slaveMode lost connection to master */\n        }\n\n        /* Read response, possibly skipping pubsub/push messages. */\n        while (1) {\n            if (cliReadReply(output_raw) != REDIS_OK) {\n                zfree(argvlen);\n                return REDIS_ERR;\n            }\n            fflush(stdout);\n            if (config.pubsub_mode || num_expected_pubsub_push > 0) {\n                if (isPubsubPush(config.last_reply)) {\n                    if (num_expected_pubsub_push > 0 &&\n                        !strcasecmp(config.last_reply->element[0]->str, command))\n                    {\n                        /* This pushed message confirms the\n                         * [p|s][un]subscribe command. */\n                        if (is_subscribe && !config.pubsub_mode) {\n                            config.pubsub_mode = 1;\n                            cliRefreshPrompt();\n                        }\n                        if (--num_expected_pubsub_push > 0) {\n                            continue; /* We need more of these. */\n                        }\n                    } else {\n                        continue; /* Skip this pubsub message. */\n                    }\n                } else if (config.last_reply->type == REDIS_REPLY_PUSH) {\n                    continue; /* Skip other push message. */\n                }\n            }\n\n            /* Store database number when SELECT was successfully executed. 
*/\n            if (!strcasecmp(command,\"select\") && argc == 2 && \n                config.last_cmd_type != REDIS_REPLY_ERROR) \n            {\n                config.conn_info.input_dbnum = config.dbnum = atoi(argv[1]);\n                cliRefreshPrompt();\n            } else if (!strcasecmp(command,\"auth\") && (argc == 2 || argc == 3)) {\n                cliSelect();\n            } else if (!strcasecmp(command,\"multi\") && argc == 1 &&\n                config.last_cmd_type != REDIS_REPLY_ERROR) \n            {\n                config.in_multi = 1;\n                config.pre_multi_dbnum = config.dbnum;\n                cliRefreshPrompt();\n            } else if (!strcasecmp(command,\"exec\") && argc == 1 && config.in_multi) {\n                config.in_multi = 0;\n                if (config.last_cmd_type == REDIS_REPLY_ERROR ||\n                    config.last_cmd_type == REDIS_REPLY_NIL)\n                {\n                    config.conn_info.input_dbnum = config.dbnum = config.pre_multi_dbnum;\n                }\n                cliRefreshPrompt();\n            } else if (!strcasecmp(command,\"discard\") && argc == 1 && \n                config.last_cmd_type != REDIS_REPLY_ERROR) \n            {\n                config.in_multi = 0;\n                config.conn_info.input_dbnum = config.dbnum = config.pre_multi_dbnum;\n                cliRefreshPrompt();\n            } else if (!strcasecmp(command,\"reset\") && argc == 1 &&\n                                     config.last_cmd_type != REDIS_REPLY_ERROR) {\n                config.in_multi = 0;\n                config.dbnum = 0;\n                config.conn_info.input_dbnum = 0;\n                config.current_resp3 = 0;\n                if (config.pubsub_mode && config.push_output) {\n                    redisSetPushCallback(context, cliPushHandler);\n                }\n                config.pubsub_mode = 0;\n                cliRefreshPrompt();\n            } else if (!strcasecmp(command,\"hello\")) {\n    
            if (config.last_cmd_type == REDIS_REPLY_MAP) {\n                    config.current_resp3 = 1;\n                } else if (config.last_cmd_type == REDIS_REPLY_ARRAY) {\n                    config.current_resp3 = 0;\n                }\n            } else if ((is_subscribe || is_unsubscribe) && !config.pubsub_mode) {\n                /* We didn't enter pubsub mode. Restore push callback. */\n                if (config.push_output)\n                    redisSetPushCallback(context, cliPushHandler);\n            }\n\n            break;\n        }\n        if (config.cluster_reissue_command){\n            /* If we need to reissue the command, break to prevent a\n               further 'repeat' number of dud interactions */\n            break;\n        }\n        if (config.interval) usleep(config.interval);\n        fflush(stdout); /* Make it grep friendly */\n    }\n\n    zfree(argvlen);\n    return REDIS_OK;\n}\n\n/* Send a command reconnecting the link if needed. */\nstatic redisReply *reconnectingRedisCommand(redisContext *c, const char *fmt, ...) {\n    redisReply *reply = NULL;\n    int tries = 0;\n    va_list ap;\n\n    assert(!c->err);\n    while(reply == NULL) {\n        while (c->err & (REDIS_ERR_IO | REDIS_ERR_EOF)) {\n            printf(\"\\r\\x1b[0K\"); /* Cursor to left edge + clear line. */\n            printf(\"Reconnecting... 
%d\\r\", ++tries);\n            fflush(stdout);\n\n            redisFree(c);\n            c = redisConnectWrapper(config.conn_info.hostip, config.conn_info.hostport,\n                                    config.connect_timeout);\n            if (!c->err && config.tls) {\n                const char *err = NULL;\n                if (cliSecureConnection(c, config.sslconfig, &err) == REDIS_ERR && err) {\n                    fprintf(stderr, \"TLS Error: %s\\n\", err);\n                    exit(1);\n                }\n            }\n            usleep(1000000);\n        }\n\n        va_start(ap,fmt);\n        reply = redisvCommand(c,fmt,ap);\n        va_end(ap);\n\n        if (c->err && !(c->err & (REDIS_ERR_IO | REDIS_ERR_EOF))) {\n            fprintf(stderr, \"Error: %s\\n\", c->errstr);\n            exit(1);\n        } else if (tries > 0) {\n            printf(\"\\r\\x1b[0K\"); /* Cursor to left edge + clear line. */\n        }\n    }\n\n    context = c;\n    return reply;\n}\n\n/*------------------------------------------------------------------------------\n * User interface\n *--------------------------------------------------------------------------- */\n\nstatic int parseOptions(int argc, char **argv) {\n    int i;\n\n    for (i = 1; i < argc; i++) {\n        int lastarg = i==argc-1;\n\n        if (!strcmp(argv[i],\"-h\") && !lastarg) {\n            sdsfree(config.conn_info.hostip);\n            config.conn_info.hostip = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-h\") && lastarg) {\n            usage(0);\n        } else if (!strcmp(argv[i],\"--help\")) {\n            usage(0);\n        } else if (!strcmp(argv[i],\"-x\")) {\n            config.stdin_lastarg = 1;\n        } else if (!strcmp(argv[i], \"-X\") && !lastarg) {\n            config.stdin_tag_arg = 1;\n            config.stdin_tag_name = argv[++i];\n        } else if (!strcmp(argv[i],\"-p\") && !lastarg) {\n            config.conn_info.hostport = atoi(argv[++i]);\n            if 
(config.conn_info.hostport < 0 || config.conn_info.hostport > 65535) {\n                fprintf(stderr, \"Invalid server port.\\n\");\n                exit(1);\n            }\n        } else if (!strcmp(argv[i],\"-t\") && !lastarg) {\n            char *eptr;\n            double seconds = strtod(argv[++i], &eptr);\n            if (eptr[0] != '\\0' || isnan(seconds) || seconds < 0.0) {\n                fprintf(stderr, \"Invalid connection timeout for -t.\\n\");\n                exit(1);\n            }\n            config.connect_timeout.tv_sec = (long long)seconds;\n            config.connect_timeout.tv_usec = ((long long)(seconds * 1000000)) % 1000000;\n        } else if (!strcmp(argv[i],\"-s\") && !lastarg) {\n            config.hostsocket = argv[++i];\n        } else if (!strcmp(argv[i],\"-r\") && !lastarg) {\n            config.repeat = strtoll(argv[++i],NULL,10);\n        } else if (!strcmp(argv[i],\"-i\") && !lastarg) {\n            double seconds = atof(argv[++i]);\n            config.interval = seconds*1000000;\n        } else if (!strcmp(argv[i],\"-n\") && !lastarg) {\n            config.conn_info.input_dbnum = atoi(argv[++i]);\n        } else if (!strcmp(argv[i], \"--no-auth-warning\")) {\n            config.no_auth_warning = 1;\n        } else if (!strcmp(argv[i], \"--askpass\")) {\n            config.askpass = 1;\n        } else if ((!strcmp(argv[i],\"-a\") || !strcmp(argv[i],\"--pass\"))\n                   && !lastarg)\n        {\n            config.conn_info.auth = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"--user\") && !lastarg) {\n            config.conn_info.user = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-u\") && !lastarg) {\n            parseRedisUri(argv[++i],\"redis-cli\",&config.conn_info,&config.tls);\n            if (config.conn_info.hostport < 0 || config.conn_info.hostport > 65535) {\n                fprintf(stderr, \"Invalid server port.\\n\");\n                exit(1);\n            }\n        } else if 
(!strcmp(argv[i],\"--raw\")) {\n            config.output = OUTPUT_RAW;\n        } else if (!strcmp(argv[i],\"--no-raw\")) {\n            config.output = OUTPUT_STANDARD;\n        } else if (!strcmp(argv[i],\"--quoted-input\")) {\n            config.quoted_input = 1;\n        } else if (!strcmp(argv[i],\"--csv\")) {\n            config.output = OUTPUT_CSV;\n        } else if (!strcmp(argv[i],\"--json\")) {\n            /* Don't overwrite an explicit value set by -3. */\n            if (config.resp3 == 0) {\n                config.resp3 = 2;\n            }\n            config.output = OUTPUT_JSON;\n        } else if (!strcmp(argv[i],\"--quoted-json\")) {\n            /* Don't overwrite an explicit value set by -3. */\n            if (config.resp3 == 0) {\n                config.resp3 = 2;\n            }\n            config.output = OUTPUT_QUOTED_JSON;\n        } else if (!strcmp(argv[i],\"--latency\")) {\n            config.latency_mode = 1;\n        } else if (!strcmp(argv[i],\"--latency-dist\")) {\n            config.latency_dist_mode = 1;\n        } else if (!strcmp(argv[i],\"--mono\")) {\n            spectrum_palette = spectrum_palette_mono;\n            spectrum_palette_size = spectrum_palette_mono_size;\n        } else if (!strcmp(argv[i],\"--latency-history\")) {\n            config.latency_mode = 1;\n            config.latency_history = 1;\n        } else if (!strcmp(argv[i],\"--vset-recall\") && !lastarg) {\n            config.vset_recall_mode = 1;\n            config.vset_recall_key = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"--vset-recall-ele\") && !lastarg) {\n            config.vset_recall_ele_count = strtoll(argv[++i],NULL,10);\n            if (config.vset_recall_ele_count <= 0)\n                config.vset_recall_ele_count = 1;\n        } else if (!strcmp(argv[i],\"--vset-recall-count\") && !lastarg) {\n            config.vset_recall_vsim_count = strtoll(argv[++i],NULL,10);\n            if (config.vset_recall_vsim_count <= 0)\n                
config.vset_recall_vsim_count = 1;\n        } else if (!strcmp(argv[i],\"--vset-recall-ef\") && !lastarg) {\n            config.vset_recall_vsim_ef = strtoll(argv[++i],NULL,10);\n            if (config.vset_recall_vsim_ef <= 0)\n                config.vset_recall_vsim_ef = 1;\n        } else if (!strcmp(argv[i],\"--lru-test\") && !lastarg) {\n            config.lru_test_mode = 1;\n            config.lru_test_sample_size = strtoll(argv[++i],NULL,10);\n        } else if (!strcmp(argv[i],\"--slave\")) {\n            config.slave_mode = 1;\n        } else if (!strcmp(argv[i],\"--replica\")) {\n            config.slave_mode = 1;\n        } else if (!strcmp(argv[i],\"--stat\")) {\n            config.stat_mode = 1;\n        } else if (!strcmp(argv[i],\"--scan\")) {\n            config.scan_mode = 1;\n        } else if (!strcmp(argv[i],\"--pattern\") && !lastarg) {\n            sdsfree(config.pattern);\n            config.pattern = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"--count\") && !lastarg) {\n            config.count = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--quoted-pattern\") && !lastarg) {\n            sdsfree(config.pattern);\n            config.pattern = unquoteCString(argv[++i]);\n            if (!config.pattern) {\n                fprintf(stderr,\"Invalid quoted string specified for --quoted-pattern.\\n\");\n                exit(1);\n            }\n        } else if (!strcmp(argv[i],\"--intrinsic-latency\") && !lastarg) {\n            config.intrinsic_latency_mode = 1;\n            config.intrinsic_latency_duration = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--rdb\") && !lastarg) {\n            config.getrdb_mode = 1;\n            config.rdb_filename = argv[++i];\n        } else if (!strcmp(argv[i],\"--functions-rdb\") && !lastarg) {\n            config.get_functions_rdb_mode = 1;\n            config.rdb_filename = argv[++i];\n        } else if (!strcmp(argv[i],\"--pipe\")) {\n            config.pipe_mode = 1;\n      
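      /* Pipe mode transfers the raw Redis protocol read from stdin straight\n             * to the server; this is what powers mass insertion. Illustrative\n             * usage (the file name is hypothetical):\n             *   cat commands.resp | redis-cli --pipe */\n      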
  } else if (!strcmp(argv[i],\"--pipe-timeout\") && !lastarg) {\n            config.pipe_timeout = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--bigkeys\")) {\n            config.bigkeys = 1;\n        } else if (!strcmp(argv[i],\"--memkeys\")) {\n            config.memkeys = 1;\n            config.memkeys_samples = -1; /* use redis default */\n        } else if (!strcmp(argv[i],\"--memkeys-samples\") && !lastarg) {\n            char *endptr;\n            config.memkeys = 1;\n            config.keystats = 1;\n            config.memkeys_samples = strtoll(argv[++i], &endptr, 10);\n            if (*endptr) {\n                fprintf(stderr, \"--memkeys-samples conversion error.\\n\");\n                exit(1);\n            }\n            if (config.memkeys_samples < 0) {\n               fprintf(stderr, \"--memkeys-samples value should be positive.\\n\");\n               exit(1);\n            }\n        } else if (!strcmp(argv[i],\"--hotkeys\")) {\n            config.hotkeys = 1;\n        } else if (!strcmp(argv[i], \"--keystats\")) {\n            config.keystats = 1;\n            config.memkeys_samples = -1; /* use redis default */\n        } else if (!strcmp(argv[i],\"--keystats-samples\") && !lastarg) {\n            char *endptr;\n            config.keystats = 1;\n            config.memkeys_samples = strtoll(argv[++i], &endptr, 10);\n            if (*endptr) {\n                fprintf(stderr, \"--keystats-samples conversion error.\\n\");\n                exit(1);\n            }\n            if (config.memkeys_samples < 0) {\n               fprintf(stderr, \"--keystats-samples value should be positive.\\n\");\n               exit(1);\n            }\n        } else if (!strcmp(argv[i],\"--cursor\") && !lastarg) {\n            i++;\n            char sign = *argv[i];\n            char *endptr;\n            config.cursor = strtoull(argv[i], &endptr, 10);\n            if (*endptr) {\n               fprintf(stderr, \"--cursor conversion error.\\n\");\n            
   exit(1);\n            }\n            if (sign == '-' && config.cursor != 0) {\n                fprintf(stderr, \"--cursor should be followed by a positive integer.\\n\");\n                exit(1);\n            }\n        } else if (!strcmp(argv[i],\"--top\") && !lastarg) {\n            i++;\n            char sign = *argv[i];\n            char *endptr;\n            config.top_sizes_limit = strtoull(argv[i], &endptr, 10);\n            if (*endptr) {\n               fprintf(stderr, \"--top conversion error.\\n\");\n               exit(1);\n            }\n            if (sign == '-' && config.top_sizes_limit != 0) {\n                fprintf(stderr, \"--top should be followed by a positive integer.\\n\");\n                exit(1);\n            }\n        } else if (!strcmp(argv[i],\"--eval\") && !lastarg) {\n            config.eval = argv[++i];\n        } else if (!strcmp(argv[i],\"--ldb\")) {\n            config.eval_ldb = 1;\n            config.output = OUTPUT_RAW;\n        } else if (!strcmp(argv[i],\"--ldb-sync-mode\")) {\n            config.eval_ldb = 1;\n            config.eval_ldb_sync = 1;\n            config.output = OUTPUT_RAW;\n        } else if (!strcmp(argv[i],\"-c\")) {\n            config.cluster_mode = 1;\n        } else if (!strcmp(argv[i],\"-d\") && !lastarg) {\n            sdsfree(config.mb_delim);\n            config.mb_delim = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-D\") && !lastarg) {\n            sdsfree(config.cmd_delim);\n            config.cmd_delim = sdsnew(argv[++i]);\n        } else if (!strcmp(argv[i],\"-e\")) {\n            config.set_errcode = 1;\n        } else if (!strcmp(argv[i],\"--verbose\")) {\n            config.verbose = 1;\n        } else if (!strcmp(argv[i],\"-4\")) {\n            config.prefer_ipv4 = 1;\n        } else if (!strcmp(argv[i],\"-6\")) {\n            config.prefer_ipv6 = 1;\n        } else if (!strcmp(argv[i],\"--cluster\") && !lastarg) {\n            if (CLUSTER_MANAGER_MODE()) usage(1);\n      
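      /* Everything from here up to the next token that starts with '-' is\n             * collected below as the argument vector of the cluster manager\n             * command <cmd>. */\n      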
      char *cmd = argv[++i];\n            int j = i;\n            while (j < argc && argv[j][0] != '-') j++;\n            if (j > i) j--;\n            int err = createClusterManagerCommand(cmd, j - i, argv + i + 1);\n            if (err) exit(err);\n            i = j;\n        } else if (!strcmp(argv[i],\"--cluster\") && lastarg) {\n            usage(1);\n        } else if ((!strcmp(argv[i],\"--cluster-only-masters\"))) {\n            config.cluster_manager_command.flags |=\n                    CLUSTER_MANAGER_CMD_FLAG_MASTERS_ONLY;\n        } else if ((!strcmp(argv[i],\"--cluster-only-replicas\"))) {\n            config.cluster_manager_command.flags |=\n                    CLUSTER_MANAGER_CMD_FLAG_SLAVES_ONLY;\n        } else if (!strcmp(argv[i],\"--cluster-replicas\") && !lastarg) {\n            config.cluster_manager_command.replicas = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cluster-master-id\") && !lastarg) {\n            config.cluster_manager_command.master_id = argv[++i];\n        } else if (!strcmp(argv[i],\"--cluster-from\") && !lastarg) {\n            config.cluster_manager_command.from = argv[++i];\n        } else if (!strcmp(argv[i],\"--cluster-to\") && !lastarg) {\n            config.cluster_manager_command.to = argv[++i];\n        } else if (!strcmp(argv[i],\"--cluster-from-user\") && !lastarg) {\n            config.cluster_manager_command.from_user = argv[++i];\n        } else if (!strcmp(argv[i],\"--cluster-from-pass\") && !lastarg) {\n            config.cluster_manager_command.from_pass = argv[++i];\n        } else if (!strcmp(argv[i], \"--cluster-from-askpass\")) {\n            config.cluster_manager_command.from_askpass = 1;\n        } else if (!strcmp(argv[i],\"--cluster-weight\") && !lastarg) {\n            if (config.cluster_manager_command.weight != NULL) {\n                fprintf(stderr, \"WARNING: you cannot use --cluster-weight \"\n                                \"more than once.\\n\"\n                                
\"You can set more weights by adding them \"\n                                \"as a space-separated list, ie:\\n\"\n                                \"--cluster-weight n1=w n2=w\\n\");\n                exit(1);\n            }\n            int widx = i + 1;\n            char **weight = argv + widx;\n            int wargc = 0;\n            for (; widx < argc; widx++) {\n                if (strstr(argv[widx], \"--\") == argv[widx]) break;\n                if (strchr(argv[widx], '=') == NULL) break;\n                wargc++;\n            }\n            if (wargc > 0) {\n                config.cluster_manager_command.weight = weight;\n                config.cluster_manager_command.weight_argc = wargc;\n                i += wargc;\n            }\n        } else if (!strcmp(argv[i],\"--cluster-slots\") && !lastarg) {\n            config.cluster_manager_command.slots = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cluster-timeout\") && !lastarg) {\n            config.cluster_manager_command.timeout = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cluster-pipeline\") && !lastarg) {\n            config.cluster_manager_command.pipeline = atoi(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cluster-threshold\") && !lastarg) {\n            config.cluster_manager_command.threshold = atof(argv[++i]);\n        } else if (!strcmp(argv[i],\"--cluster-yes\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_YES;\n        } else if (!strcmp(argv[i],\"--cluster-simulate\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_SIMULATE;\n        } else if (!strcmp(argv[i],\"--cluster-replace\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_REPLACE;\n        } else if (!strcmp(argv[i],\"--cluster-copy\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_COPY;\n        } else 
if (!strcmp(argv[i],\"--cluster-slave\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_SLAVE;\n        } else if (!strcmp(argv[i],\"--cluster-use-empty-masters\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_EMPTYMASTER;\n        } else if (!strcmp(argv[i],\"--cluster-search-multiple-owners\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_CHECK_OWNERS;\n        } else if (!strcmp(argv[i],\"--cluster-fix-with-unreachable-masters\")) {\n            config.cluster_manager_command.flags |=\n                CLUSTER_MANAGER_CMD_FLAG_FIX_WITH_UNREACHABLE_MASTERS;\n        } else if (!strcmp(argv[i],\"--test_hint\") && !lastarg) {\n            config.test_hint = argv[++i];\n        } else if (!strcmp(argv[i],\"--test_hint_file\") && !lastarg) {\n            config.test_hint_file = argv[++i];\n        } else if (!strcmp(argv[i], \"--name\") && !lastarg) {\n            config.client_name = argv[++i];\n#ifdef USE_OPENSSL\n        } else if (!strcmp(argv[i],\"--tls\")) {\n            config.tls = 1;\n        } else if (!strcmp(argv[i],\"--sni\") && !lastarg) {\n            config.sslconfig.sni = argv[++i];\n        } else if (!strcmp(argv[i],\"--cacertdir\") && !lastarg) {\n            config.sslconfig.cacertdir = argv[++i];\n        } else if (!strcmp(argv[i],\"--cacert\") && !lastarg) {\n            config.sslconfig.cacert = argv[++i];\n        } else if (!strcmp(argv[i],\"--cert\") && !lastarg) {\n            config.sslconfig.cert = argv[++i];\n        } else if (!strcmp(argv[i],\"--key\") && !lastarg) {\n            config.sslconfig.key = argv[++i];\n        } else if (!strcmp(argv[i],\"--tls-ciphers\") && !lastarg) {\n            config.sslconfig.ciphers = argv[++i];\n        } else if (!strcmp(argv[i],\"--insecure\")) {\n            config.sslconfig.skip_cert_verify = 1;\n        #ifdef TLS1_3_VERSION\n        } else 
if (!strcmp(argv[i],\"--tls-ciphersuites\") && !lastarg) {\n            config.sslconfig.ciphersuites = argv[++i];\n        #endif\n#endif\n        } else if (!strcmp(argv[i],\"-v\") || !strcmp(argv[i], \"--version\")) {\n            sds version = cliVersion();\n            printf(\"redis-cli %s\\n\", version);\n            sdsfree(version);\n            exit(0);\n        } else if (!strcmp(argv[i],\"-2\")) {\n            config.resp2 = 1;\n        } else if (!strcmp(argv[i],\"-3\")) {\n            config.resp3 = 1;\n        } else if (!strcmp(argv[i],\"--show-pushes\") && !lastarg) {\n            char *argval = argv[++i];\n            if (!strncasecmp(argval, \"n\", 1)) {\n                config.push_output = 0;\n            } else if (!strncasecmp(argval, \"y\", 1)) {\n                config.push_output = 1;\n            } else {\n                fprintf(stderr, \"Unknown --show-pushes value '%s' \"\n                        \"(valid: '[y]es', '[n]o')\\n\", argval);\n            }\n        } else if (CLUSTER_MANAGER_MODE() && argv[i][0] != '-') {\n            if (config.cluster_manager_command.argc == 0) {\n                int j = i + 1;\n                while (j < argc && argv[j][0] != '-') j++;\n                int cmd_argc = j - i;\n                config.cluster_manager_command.argc = cmd_argc;\n                config.cluster_manager_command.argv = argv + i;\n                if (cmd_argc > 1) i = j - 1;\n            }\n        } else {\n            if (argv[i][0] == '-') {\n                fprintf(stderr,\n                    \"Unrecognized option or bad number of args for: '%s'\\n\",\n                    argv[i]);\n                exit(1);\n            } else {\n                /* Likely the command name, stop here. 
*/\n                break;\n            }\n        }\n    }\n\n    if (config.hostsocket && config.cluster_mode) {\n        fprintf(stderr,\"Options -c and -s are mutually exclusive.\\n\");\n        exit(1);\n    }\n\n    if (config.resp2 && config.resp3 == 1) {\n        fprintf(stderr,\"Options -2 and -3 are mutually exclusive.\\n\");\n        exit(1);\n    }\n\n    /* --ldb requires --eval. */\n    if (config.eval_ldb && config.eval == NULL) {\n        fprintf(stderr,\"Options --ldb and --ldb-sync-mode require --eval.\\n\");\n        fprintf(stderr,\"Try %s --help for more information.\\n\", argv[0]);\n        exit(1);\n    }\n\n    if (!config.no_auth_warning && config.conn_info.auth != NULL) {\n        fputs(\"Warning: Using a password with '-a' or '-u' option on the command\"\n              \" line interface may not be safe.\\n\", stderr);\n    }\n\n    if (config.get_functions_rdb_mode && config.getrdb_mode) {\n        fprintf(stderr,\"Options --functions-rdb and --rdb are mutually exclusive.\\n\");\n        exit(1);\n    }\n\n    if (config.stdin_lastarg && config.stdin_tag_arg) {\n        fprintf(stderr, \"Options -x and -X are mutually exclusive.\\n\");\n        exit(1);\n    }\n\n    if (config.prefer_ipv4 && config.prefer_ipv6) {\n        fprintf(stderr, \"Options -4 and -6 are mutually exclusive.\\n\");\n        exit(1);\n    }\n\n    return i;\n}\n\nstatic void parseEnv(void) {\n    /* Set auth from env, but do not overwrite CLI arguments if passed */\n    char *auth = getenv(REDIS_CLI_AUTH_ENV);\n    if (auth != NULL && config.conn_info.auth == NULL) {\n        config.conn_info.auth = auth;\n    }\n\n    char *cluster_yes = getenv(REDIS_CLI_CLUSTER_YES_ENV);\n    if (cluster_yes != NULL && !strcmp(cluster_yes, \"1\")) {\n        config.cluster_manager_command.flags |= CLUSTER_MANAGER_CMD_FLAG_YES;\n    }\n}\n\nstatic void usage(int err) {\n    sds version = cliVersion();\n    FILE *target = err ? 
stderr: stdout;\n    const char *tls_usage =\n#ifdef USE_OPENSSL\n\"  --tls              Establish a secure TLS connection.\\n\"\n\"  --sni <host>       Server name indication for TLS.\\n\"\n\"  --cacert <file>    CA Certificate file to verify with.\\n\"\n\"  --cacertdir <dir>  Directory where trusted CA certificates are stored.\\n\"\n\"                     If neither cacert nor cacertdir are specified, the default\\n\"\n\"                     system-wide trusted root certs configuration will apply.\\n\"\n\"  --insecure         Allow insecure TLS connection by skipping cert validation.\\n\"\n\"  --cert <file>      Client certificate to authenticate with.\\n\"\n\"  --key <file>       Private key file to authenticate with.\\n\"\n\"  --tls-ciphers <list> Sets the list of preferred ciphers (TLSv1.2 and below)\\n\"\n\"                     in order of preference from highest to lowest separated by colon (\\\":\\\").\\n\"\n\"                     See the ciphers(1ssl) manpage for more information about the syntax of this string.\\n\"\n#ifdef TLS1_3_VERSION\n\"  --tls-ciphersuites <list> Sets the list of preferred ciphersuites (TLSv1.3)\\n\"\n\"                     in order of preference from highest to lowest separated by colon (\\\":\\\").\\n\"\n\"                     See the ciphers(1ssl) manpage for more information about the syntax of this string,\\n\"\n\"                     and specifically for TLSv1.3 ciphersuites.\\n\"\n#endif\n#endif\n\"\";\n\n    fprintf(target,\n\"redis-cli %s\\n\"\n\"\\n\"\n\"Usage: redis-cli [OPTIONS] [cmd [arg [arg ...]]]\\n\"\n\"  -h <hostname>      Server hostname (default: 127.0.0.1).\\n\"\n\"  -p <port>          Server port (default: 6379).\\n\"\n\"  -t <timeout>       Server connection timeout in seconds (decimals allowed).\\n\"\n\"                     Default timeout is 0, meaning no limit, depending on the OS.\\n\"\n\"  -s <socket>        Server socket (overrides hostname and port).\\n\"\n\"  -a <password>      Password to use when 
connecting to the server.\\n\"\n\"                     You can also use the \" REDIS_CLI_AUTH_ENV \" environment\\n\"\n\"                     variable to pass this password more safely\\n\"\n\"                     (if both are used, this argument takes precedence).\\n\"\n\"  --user <username>  Used to send ACL style 'AUTH username pass'. Needs -a.\\n\"\n\"  --pass <password>  Alias of -a for consistency with the new --user option.\\n\"\n\"  --askpass          Force user to input password with mask from STDIN.\\n\"\n\"                     If this argument is used, '-a' and \" REDIS_CLI_AUTH_ENV \"\\n\"\n\"                     environment variable will be ignored.\\n\"\n\"  -u <uri>           Server URI in the format redis://user:password@host:port/dbnum\\n\"\n\"                     User, password and dbnum are optional. For authentication\\n\"\n\"                     without a username, use username 'default'. For TLS, use\\n\"\n\"                     the scheme 'rediss'.\\n\"\n\"  -r <repeat>        Execute specified command N times.\\n\"\n\"  -i <interval>      When -r is used, waits <interval> seconds per command.\\n\"\n\"                     It is possible to specify sub-second times like -i 0.1.\\n\"\n\"                     This interval is also used in --scan and --stat per cycle,\\n\"\n\"                     and in --bigkeys, --memkeys, --keystats, and --hotkeys per 100 cycles.\\n\"\n\"  -n <db>            Database number.\\n\"\n\"  --name <name>      Set the client name.\\n\"\n\"  -2                 Start session in RESP2 protocol mode.\\n\"\n\"  -3                 Start session in RESP3 protocol mode.\\n\"\n\"  -x                 Read last argument from STDIN (see example below).\\n\"\n\"  -X                 Read <tag> argument from STDIN (see example below).\\n\"\n\"  -d <delimiter>     Delimiter between response bulks for raw formatting (default: \\\\n).\\n\"\n\"  -D <delimiter>     Delimiter between responses for raw formatting (default: \\\\n).\\n\"\n\"  -c 
                Enable cluster mode (follow -ASK and -MOVED redirections).\\n\"\n\"  -e                 Return exit error code when command execution fails.\\n\"\n\"  -4                 Prefer IPv4 over IPv6 on DNS lookup.\\n\"\n\"  -6                 Prefer IPv6 over IPv4 on DNS lookup.\\n\"\n\"%s\"\n\"  --raw              Use raw formatting for replies (default when STDOUT is\\n\"\n\"                     not a tty).\\n\"\n\"  --no-raw           Force formatted output even when STDOUT is not a tty.\\n\"\n\"  --quoted-input     Force input to be handled as quoted strings.\\n\"\n\"  --csv              Output in CSV format.\\n\"\n\"  --json             Output in JSON format (default RESP3, use -2 if you want to use with RESP2).\\n\"\n\"  --quoted-json      Same as --json, but produce ASCII-safe quoted strings, not Unicode.\\n\"\n\"  --show-pushes <yn> Whether to print RESP3 PUSH messages.  Enabled by default when\\n\"\n\"                     STDOUT is a tty but can be overridden with --show-pushes no.\\n\"\n\"  --stat             Print rolling stats about server: mem, clients, ...\\n\",\nversion,tls_usage);\n\n    fprintf(target,\n\"  --latency          Enter a special mode continuously sampling latency.\\n\"\n\"                     If you use this mode in an interactive session it runs\\n\"\n\"                     forever displaying real-time stats. Otherwise if --raw or\\n\"\n\"                     --csv is specified, or if you redirect the output to a non\\n\"\n\"                     TTY, it samples the latency for 1 second (you can use\\n\"\n\"                     -i to change the interval), then produces a single output\\n\"\n\"                     and exits.\\n\"\n\"  --latency-history  Like --latency but tracking latency changes over time.\\n\"\n\"                     Default time interval is 15 sec. 
Change it using -i.\\n\"\n\"  --latency-dist     Shows latency as a spectrum, requires xterm 256 colors.\\n\"\n\"                     Default time interval is 1 sec. Change it using -i.\\n\"\n\"  --vset-recall <key> Enable VSIM recall test mode for the specified key\\n\"\n\"                     (that must be a vector set). Random vectors are created\\n\"\n\"                     mixing components from other elements. A VSIM is then\\n\"\n\"                     executed and checked against ground truth.\\n\"\n\"  --vset-recall-count <count> How many top elements to fetch per query.\\n\"\n\"  --vset-recall-ef <ef> HNSW EF (search effort) to use. Default 500.\\n\"\n\"  --vset-recall-ele <count> Number of elements used to compose query vectors.\\n\"\n\"                            Default 1.\\n\"\n\"  --lru-test <keys>  Simulate a cache workload with an 80-20 distribution.\\n\"\n\"  --replica          Simulate a replica showing commands received from the master.\\n\"\n\"  --rdb <filename>   Transfer an RDB dump from remote server to local file.\\n\"\n\"                     Use filename of \\\"-\\\" to write to stdout.\\n\"\n\"  --functions-rdb <filename> Like --rdb but only gets the functions (not the keys)\\n\"\n\"                     when getting the RDB dump file.\\n\"\n\"  --pipe             Transfer raw Redis protocol from stdin to server.\\n\"\n\"  --pipe-timeout <n> In --pipe mode, abort with error if after sending all data,\\n\"\n\"                     no reply is received within <n> seconds.\\n\"\n\"                     Default timeout: %d. 
Use 0 to wait forever.\\n\",\n    REDIS_CLI_DEFAULT_PIPE_TIMEOUT);\n    fprintf(target,\n\"  --bigkeys          Sample Redis keys looking for keys with many elements (complexity).\\n\"\n\"  --memkeys          Sample Redis keys looking for keys consuming a lot of memory.\\n\"\n\"  --memkeys-samples <n> Sample Redis keys looking for keys consuming a lot of memory,\\n\"\n\"                     and define the number of key elements to sample.\\n\"\n\"  --keystats         Sample Redis keys looking for keys memory size and length (combines bigkeys and memkeys).\\n\"\n\"  --keystats-samples <n> Sample Redis keys looking for keys memory size and length,\\n\"\n\"                     and define the number of key elements to sample (only for memory usage).\\n\"\n\"  --cursor <n>       Start the scan at the cursor <n> (usually after a Ctrl-C).\\n\"\n\"                     Optionally used with --keystats and --keystats-samples.\\n\"\n\"  --top <n>          To display <n> top key sizes (default: 10).\\n\"\n\"                     Optionally used with --keystats and --keystats-samples.\\n\"\n\"  --hotkeys          Sample Redis keys looking for hot keys.\\n\"\n\"                     Only works when maxmemory-policy is *lfu.\\n\"\n\"  --scan             List all keys using the SCAN command.\\n\"\n\"  --pattern <pat>    Keys pattern when using the --scan, --bigkeys, --memkeys,\\n\"\n\"                     --keystats or --hotkeys options (default: *).\\n\"\n\"  --count <count>    Count option when using the --scan, --bigkeys, --memkeys,\\n\"\n\"                     --keystats or --hotkeys (default: 10).\\n\"\n\"  --quoted-pattern <pat> Same as --pattern, but the specified string can be\\n\"\n\"                         quoted, in order to pass an otherwise non binary-safe string.\\n\"\n\"  --intrinsic-latency <sec> Run a test to measure intrinsic system latency.\\n\"\n\"                     The test will run for the specified amount of seconds.\\n\"\n\"  --eval <file>      Send an EVAL command 
using the Lua script at <file>.\\n\"\n\"  --ldb              Used with --eval, enables the Redis Lua debugger.\\n\"\n\"  --ldb-sync-mode    Like --ldb but uses the synchronous Lua debugger; in\\n\"\n\"                     this mode the server is blocked and script changes are\\n\"\n\"                     not rolled back from the server memory.\\n\"\n\"  --cluster <command> [args...] [opts...]\\n\"\n\"                     Cluster Manager command and arguments (see below).\\n\"\n\"  --verbose          Verbose mode.\\n\"\n\"  --no-auth-warning  Don't show warning message when using password on command\\n\"\n\"                     line interface.\\n\"\n\"  --help             Output this help and exit.\\n\"\n\"  --version          Output version and exit.\\n\"\n\"\\n\");\n    /* Using another fprintf call to avoid -Woverlength-strings compile warning */\n    fprintf(target,\n\"Cluster Manager Commands:\\n\"\n\"  Use --cluster help to list all available cluster manager commands.\\n\"\n\"\\n\"\n\"Examples:\\n\"\n\"  redis-cli -u redis://default:PASSWORD@localhost:6379/0\\n\"\n\"  cat /etc/passwd | redis-cli -x set mypasswd\\n\"\n\"  redis-cli -D \\\"\\\" --raw dump key > key.dump && redis-cli -X dump_tag restore key2 0 dump_tag replace < key.dump\\n\"\n\"  redis-cli -r 100 lpush mylist x\\n\"\n\"  redis-cli -r 100 -i 1 info | grep used_memory_human:\\n\"\n\"  redis-cli --quoted-input set '\\\"null-\\\\x00-separated\\\"' value\\n\"\n\"  redis-cli --eval myscript.lua key1 key2 , arg1 arg2 arg3\\n\"\n\"  redis-cli --scan --pattern '*:12345*'\\n\"\n\"  redis-cli --scan --pattern '*:12345*' --count 100\\n\"\n\"\\n\"\n\"  (Note: when using --eval the comma separates KEYS[] from ARGV[] items)\\n\"\n\"\\n\"\n\"When no command is given, redis-cli starts in interactive mode.\\n\"\n\"Type \\\"help\\\" in interactive mode for information on available commands\\n\"\n\"and settings.\\n\"\n\"\\n\");\n    sdsfree(version);\n    exit(err);\n}\n\nstatic int confirmWithYes(char *msg, int 
ignore_force) {\n    /* if --cluster-yes option is set and ignore_force is false,\n     * do not prompt for an answer */\n    if (!ignore_force &&\n        (config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_YES)) {\n        return 1;\n    }\n\n    printf(\"%s (type 'yes' to accept): \", msg);\n    fflush(stdout);\n    char buf[4];\n    int nread = read(fileno(stdin),buf,4);\n    buf[3] = '\\0';\n    return (nread != 0 && !strcmp(\"yes\", buf));\n}\n\nstatic int issueCommandRepeat(int argc, char **argv, long repeat) {\n    /* In Lua debugging mode, we want to pass the \"help\" to Redis to get\n     * its own HELP message, rather than handling it in the CLI; see ldbRepl.\n     *\n     * For the normal Redis HELP, we can process it without a connection. */\n    if (!config.eval_ldb &&\n        (!strcasecmp(argv[0],\"help\") || !strcasecmp(argv[0],\"?\")))\n    {\n        cliOutputHelp(--argc, ++argv);\n        return REDIS_OK;\n    }\n\n    while (1) {\n        if (config.cluster_reissue_command || context == NULL ||\n            context->err == REDIS_ERR_IO || context->err == REDIS_ERR_EOF)\n        {\n            if (cliConnect(CC_FORCE) != REDIS_OK) {\n                cliPrintContextError();\n                config.cluster_reissue_command = 0;\n                return REDIS_ERR;\n            }\n            /* Reset dbnum after reconnecting so we can re-select the previous db in cliSelect(). 
*/\n            config.dbnum = 0;\n            cliSelect();\n        }\n        config.cluster_reissue_command = 0;\n        if (config.cluster_send_asking) {\n            if (cliSendAsking() != REDIS_OK) {\n                cliPrintContextError();\n                return REDIS_ERR;\n            }\n        }\n        if (cliSendCommand(argc,argv,repeat) != REDIS_OK) {\n            cliPrintContextError();\n            redisFree(context);\n            context = NULL;\n            return REDIS_ERR;\n        }\n\n        /* Issue the command again if we got redirected in cluster mode */\n        if (config.cluster_mode && config.cluster_reissue_command) {\n            continue;\n        }\n        break;\n    }\n    return REDIS_OK;\n}\n\nstatic int issueCommand(int argc, char **argv) {\n    return issueCommandRepeat(argc, argv, config.repeat);\n}\n\n/* Split the user provided command into multiple SDS arguments.\n * This function normally uses sdssplitargs() from sds.c which is able\n * to understand \"quoted strings\", escapes and so forth. However when\n * we are in Lua debugging mode and the \"eval\" command is used, we want\n * the remaining Lua script (after \"e \" or \"eval \") to be passed verbatim\n * as a single big argument. */\nstatic sds *cliSplitArgs(char *line, int *argc) {\n    if (config.eval_ldb && (strstr(line,\"eval \") == line ||\n                            strstr(line,\"e \") == line))\n    {\n        sds *argv = sds_malloc(sizeof(sds)*2);\n        *argc = 2;\n        int len = strlen(line);\n        int elen = line[1] == ' ' ? 2 : 5; /* \"e \" or \"eval \"? */\n        argv[0] = sdsnewlen(line,elen-1);\n        argv[1] = sdsnewlen(line+elen,len-elen);\n        return argv;\n    } else {\n        return sdssplitargs(line,argc);\n    }\n}\n\n/* Set the CLI preferences. This function is invoked when an interactive\n * \":command\" is called, or when reading ~/.redisclirc file, in order to\n * set user preferences. 
*/\nvoid cliSetPreferences(char **argv, int argc, int interactive) {\n    if (!strcasecmp(argv[0],\":set\") && argc >= 2) {\n        if (!strcasecmp(argv[1],\"hints\")) pref.hints = 1;\n        else if (!strcasecmp(argv[1],\"nohints\")) pref.hints = 0;\n        else {\n            printf(\"%sunknown redis-cli preference '%s'\\n\",\n                interactive ? \"\" : \".redisclirc: \",\n                argv[1]);\n        }\n    } else {\n        printf(\"%sunknown redis-cli internal command '%s'\\n\",\n            interactive ? \"\" : \".redisclirc: \",\n            argv[0]);\n    }\n}\n\n/* Load the ~/.redisclirc file if any. */\nvoid cliLoadPreferences(void) {\n    sds rcfile = getDotfilePath(REDIS_CLI_RCFILE_ENV,REDIS_CLI_RCFILE_DEFAULT);\n    if (rcfile == NULL) return;\n    FILE *fp = fopen(rcfile,\"r\");\n    char buf[1024];\n\n    if (fp) {\n        while(fgets(buf,sizeof(buf),fp) != NULL) {\n            sds *argv;\n            int argc;\n\n            argv = sdssplitargs(buf,&argc);\n            if (argc > 0) cliSetPreferences(argv,argc,0);\n            sdsfreesplitres(argv,argc);\n        }\n        fclose(fp);\n    }\n    sdsfree(rcfile);\n}\n\n/* Some commands can include sensitive information and shouldn't be put in the\n * history file. 
Currently these commands include:\n * - AUTH\n * - ACL DELUSER, ACL SETUSER, ACL GETUSER\n * - CONFIG SET masterauth/masteruser/tls-key-file-pass/tls-client-key-file-pass/requirepass\n * - HELLO with [AUTH username password]\n * - MIGRATE with [AUTH password] or [AUTH2 username password]\n * - SENTINEL CONFIG SET sentinel-pass password, SENTINEL CONFIG SET sentinel-user username\n * - SENTINEL SET <mastername> auth-pass password, SENTINEL SET <mastername> auth-user username */\nstatic int isSensitiveCommand(int argc, char **argv) {\n    if (!strcasecmp(argv[0],\"auth\")) {\n        return 1;\n    } else if (argc > 1 &&\n        !strcasecmp(argv[0],\"acl\") && (\n            !strcasecmp(argv[1],\"deluser\") ||\n            !strcasecmp(argv[1],\"setuser\") ||\n            !strcasecmp(argv[1],\"getuser\")))\n    {\n        return 1;\n    } else if (argc > 2 &&\n        !strcasecmp(argv[0],\"config\") &&\n        !strcasecmp(argv[1],\"set\")) {\n            for (int j = 2; j < argc; j = j+2) {\n                if (!strcasecmp(argv[j],\"masterauth\") ||\n                    !strcasecmp(argv[j],\"masteruser\") ||\n                    !strcasecmp(argv[j],\"tls-key-file-pass\") ||\n                    !strcasecmp(argv[j],\"tls-client-key-file-pass\") ||\n                    !strcasecmp(argv[j],\"requirepass\")) {\n                    return 1;\n                }\n            }\n            return 0;\n    /* HELLO [protover [AUTH username password] [SETNAME clientname]] */\n    } else if (argc > 4 && !strcasecmp(argv[0],\"hello\")) {\n        for (int j = 2; j < argc; j++) {\n            int moreargs = argc - 1 - j;\n            if (!strcasecmp(argv[j],\"AUTH\") && moreargs >= 2) {\n                return 1;\n            } else if (!strcasecmp(argv[j],\"SETNAME\") && moreargs) {\n                j++;\n            } else {\n                return 0;\n            }\n        }\n    /* MIGRATE host port key|\"\" destination-db timeout [COPY] [REPLACE]\n     * [AUTH 
password] [AUTH2 username password] [KEYS key [key ...]] */\n    } else if (argc > 7 && !strcasecmp(argv[0], \"migrate\")) {\n        for (int j = 6; j < argc; j++) {\n            int moreargs = argc - 1 - j;\n            if (!strcasecmp(argv[j],\"auth\") && moreargs) {\n                return 1;\n            } else if (!strcasecmp(argv[j],\"auth2\") && moreargs >= 2) {\n                return 1;\n            } else if (!strcasecmp(argv[j],\"keys\") && moreargs) {\n                return 0;\n            }\n        }\n    } else if (argc > 4 && !strcasecmp(argv[0], \"sentinel\")) {\n        /* SENTINEL CONFIG SET sentinel-pass password\n         * SENTINEL CONFIG SET sentinel-user username */\n        if (!strcasecmp(argv[1], \"config\") &&\n            !strcasecmp(argv[2], \"set\") &&\n            (!strcasecmp(argv[3], \"sentinel-pass\") ||\n             !strcasecmp(argv[3], \"sentinel-user\")))\n        {\n            return 1;\n        }\n        /* SENTINEL SET <mastername> auth-pass password\n         * SENTINEL SET <mastername> auth-user username */\n        if (!strcasecmp(argv[1], \"set\") &&\n            (!strcasecmp(argv[3], \"auth-pass\") ||\n             !strcasecmp(argv[3], \"auth-user\")))\n        {\n            return 1;\n        }\n    }\n    return 0;\n}\n\nstatic void repl(void) {\n    sds historyfile = NULL;\n    int history = 0;\n    char *line;\n    int argc;\n    sds *argv;\n\n    /* There is no need to initialize redis HELP when we are in lua debugger mode.\n     * It has its own HELP and commands (COMMAND or COMMAND DOCS will fail and return nothing).\n     * We will initialize the redis HELP after the Lua debugging session ends. */\n    if ((!config.eval_ldb) && isatty(fileno(stdin))) {\n        /* Initialize the help using the results of the COMMAND command. 
*/\n        cliInitHelp();\n    }\n\n    config.interactive = 1;\n    linenoiseSetMultiLine(1);\n    linenoiseSetCompletionCallback(completionCallback);\n    linenoiseSetHintsCallback(hintsCallback);\n    linenoiseSetFreeHintsCallback(freeHintsCallback);\n\n    /* Only use history and load the rc file when stdin is a tty. */\n    if (getenv(\"FAKETTY_WITH_PROMPT\") != NULL || isatty(fileno(stdin))) {\n        historyfile = getDotfilePath(REDIS_CLI_HISTFILE_ENV,REDIS_CLI_HISTFILE_DEFAULT);\n        /* Always keep the in-memory history, regardless of whether the\n         * history file could be determined. */\n        history = 1;\n        if (historyfile != NULL) {\n            linenoiseHistoryLoad(historyfile);\n        }\n        cliLoadPreferences();\n    }\n\n    cliRefreshPrompt();\n    while(1) {\n        line = linenoise(context ? config.prompt : \"not connected> \");\n        if (line == NULL) {\n            /* ^C, ^D or similar. */\n            if (config.pubsub_mode) {\n                config.pubsub_mode = 0;\n                if (cliConnect(CC_FORCE) == REDIS_OK)\n                    continue;\n            }\n            break;\n        } else if (line[0] != '\\0') {\n            long repeat = 1;\n            int skipargs = 0;\n            char *endptr = NULL;\n\n            argv = cliSplitArgs(line,&argc);\n            if (argv == NULL) {\n                printf(\"Invalid argument(s)\\n\");\n                fflush(stdout);\n                if (history) linenoiseHistoryAdd(line, 0);\n                if (historyfile) linenoiseHistorySave(historyfile);\n                linenoiseFree(line);\n                continue;\n            } else if (argc == 0) {\n                sdsfreesplitres(argv,argc);\n                linenoiseFree(line);\n                continue;\n            }\n\n            /* Check if we have a repeat command option and\n             * need to skip the first arg. */\n            errno = 0;\n            repeat = strtol(argv[0], &endptr, 10);\n            if (argc > 1 && *endptr 
== '\\0') {\n                if (errno == ERANGE || errno == EINVAL || repeat <= 0) {\n                    fputs(\"Invalid redis-cli repeat command option value.\\n\", stdout);\n                    sdsfreesplitres(argv, argc);\n                    linenoiseFree(line);\n                    continue;\n                }\n                skipargs = 1;\n            } else {\n                repeat = 1;\n            }\n\n            /* Always keep in-memory history. But for commands with sensitive information,\n             * avoid writing them to the history file. */\n            int is_sensitive = isSensitiveCommand(argc - skipargs, argv + skipargs);\n            if (history) linenoiseHistoryAdd(line, is_sensitive);\n            if (!is_sensitive && historyfile) linenoiseHistorySave(historyfile);\n\n            if (strcasecmp(argv[0],\"quit\") == 0 ||\n                strcasecmp(argv[0],\"exit\") == 0)\n            {\n                exit(0);\n            } else if (argv[0][0] == ':') {\n                cliSetPreferences(argv,argc,1);\n                sdsfreesplitres(argv,argc);\n                linenoiseFree(line);\n                continue;\n            } else if (strcasecmp(argv[0],\"restart\") == 0) {\n                if (config.eval) {\n                    config.eval_ldb = 1;\n                    config.output = OUTPUT_RAW;\n                    sdsfreesplitres(argv,argc);\n                    linenoiseFree(line);\n                    return; /* Return to evalMode to restart the session. 
*/\n                } else {\n                    printf(\"Use 'restart' only in Lua debugging mode.\\n\");\n                    fflush(stdout);\n                }\n            } else if (argc == 3 && !strcasecmp(argv[0],\"connect\")) {\n                sdsfree(config.conn_info.hostip);\n                config.conn_info.hostip = sdsnew(argv[1]);\n                config.conn_info.hostport = atoi(argv[2]);\n                cliRefreshPrompt();\n                cliConnect(CC_FORCE);\n            } else if (argc == 1 && !strcasecmp(argv[0],\"clear\")) {\n                linenoiseClearScreen();\n            } else {\n                long long start_time = mstime(), elapsed;\n\n                issueCommandRepeat(argc-skipargs, argv+skipargs, repeat);\n\n                /* If our debugging session ended, show the EVAL final\n                    * reply. */\n                if (config.eval_ldb_end) {\n                    config.eval_ldb_end = 0;\n                    cliReadReply(0);\n                    printf(\"\\n(Lua debugging session ended%s)\\n\\n\",\n                        config.eval_ldb_sync ? 
\"\" :\n                        \" -- dataset changes rolled back\");\n                    cliInitHelp();\n                }\n\n                elapsed = mstime()-start_time;\n                if (elapsed >= 500 &&\n                    config.output == OUTPUT_STANDARD)\n                {\n                    printf(\"(%.2fs)\\n\",(double)elapsed/1000);\n                }\n            }\n            /* Free the argument vector */\n            sdsfreesplitres(argv,argc);\n        }\n\n        if (config.pubsub_mode) {\n            cliWaitForMessagesOrStdin();\n        }\n\n        /* linenoise() returns malloc-ed lines like readline() */\n        linenoiseFree(line);\n    }\n    exit(0);\n}\n\nstatic int noninteractive(int argc, char **argv) {\n    int retval = 0;\n    sds *sds_args = getSdsArrayFromArgv(argc, argv, config.quoted_input);\n\n    if (!sds_args) {\n        printf(\"Invalid quoted string\\n\");\n        return 1;\n    }\n\n    if (config.stdin_lastarg) {\n        sds_args = sds_realloc(sds_args, (argc + 1) * sizeof(sds));\n        sds_args[argc] = readArgFromStdin();\n        argc++;\n    } else if (config.stdin_tag_arg) {\n        int i = 0, tag_match = 0;\n\n        for (; i < argc; i++) {\n            if (strcmp(config.stdin_tag_name, sds_args[i]) != 0) continue;\n\n            tag_match = 1;\n            sdsfree(sds_args[i]);\n            sds_args[i] = readArgFromStdin();\n            break;\n        }\n\n        if (!tag_match) {\n            sdsfreesplitres(sds_args, argc);\n            fprintf(stderr, \"Using the -X option, but the stdin tag did not match any argument.\\n\");\n            return 1;\n        }\n    }\n\n    retval = issueCommand(argc, sds_args);\n    sdsfreesplitres(sds_args, argc);\n    while (config.pubsub_mode) {\n        if (cliReadReply(0) != REDIS_OK) {\n            cliPrintContextError();\n            exit(1);\n        }\n        fflush(stdout);\n    }\n    return retval == REDIS_OK ? 
0 : 1;\n}\n\nstatic void longStatLoopModeStop(int s) {\n    UNUSED(s);\n    force_cancel_loop = 1;\n}\n\n/*------------------------------------------------------------------------------\n * Eval mode\n *--------------------------------------------------------------------------- */\n\nstatic int evalMode(int argc, char **argv) {\n    sds script = NULL;\n    FILE *fp;\n    char buf[1024];\n    size_t nread;\n    char **argv2;\n    int j, got_comma, keys;\n    int retval = REDIS_OK;\n\n    while(1) {\n        if (config.eval_ldb) {\n            printf(\n            \"Lua debugging session started, please use:\\n\"\n            \"quit    -- End the session.\\n\"\n            \"restart -- Restart the script in debug mode again.\\n\"\n            \"help    -- Show Lua script debugging commands.\\n\\n\"\n            );\n        }\n\n        sdsfree(script);\n        script = sdsempty();\n        got_comma = 0;\n        keys = 0;\n\n        /* Load the script from the file, as an sds string. */\n        fp = fopen(config.eval,\"r\");\n        if (!fp) {\n            fprintf(stderr,\n                \"Can't open file '%s': %s\\n\", config.eval, strerror(errno));\n            exit(1);\n        }\n        while((nread = fread(buf,1,sizeof(buf),fp)) != 0) {\n            script = sdscatlen(script,buf,nread);\n        }\n        fclose(fp);\n\n        /* If we are debugging a script, enable the Lua debugger. 
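The two invocations below differ in\n         * durability (summary consistent with the \"dataset changes rolled\n         * back\" message printed when a non-sync session ends):\n         *\n         *   SCRIPT DEBUG yes    -- asynchronous session, changes rolled back\n         *   SCRIPT DEBUG sync   -- blocking session, changes are retained. 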
*/\n        if (config.eval_ldb) {\n            redisReply *reply = redisCommand(context,\n                    config.eval_ldb_sync ?\n                    \"SCRIPT DEBUG sync\": \"SCRIPT DEBUG yes\");\n            if (reply) freeReplyObject(reply);\n        }\n\n        /* Create our argument vector */\n        argv2 = zmalloc(sizeof(sds)*(argc+3));\n        argv2[0] = sdsnew(\"EVAL\");\n        argv2[1] = script;\n        for (j = 0; j < argc; j++) {\n            if (!got_comma && argv[j][0] == ',' && argv[j][1] == 0) {\n                got_comma = 1;\n                continue;\n            }\n            argv2[j+3-got_comma] = sdsnew(argv[j]);\n            if (!got_comma) keys++;\n        }\n        argv2[2] = sdscatprintf(sdsempty(),\"%d\",keys);\n\n        /* Call it */\n        int eval_ldb = config.eval_ldb; /* Save it, may be reverted. */\n        retval = issueCommand(argc+3-got_comma, argv2);\n        for (j = 0; j < argc+3-got_comma; j++) sdsfree(argv2[j]);\n        zfree(argv2);\n        if (eval_ldb) {\n            if (!config.eval_ldb) {\n                /* If the debugging session ended immediately, there was an\n                 * error compiling the script. Show it and don't enter\n                 * the REPL at all. */\n                printf(\"Eval debugging session can't start:\\n\");\n                cliReadReply(0);\n                break; /* Return to the caller. */\n            } else {\n                strncpy(config.prompt,\"lua debugger> \",sizeof(config.prompt));\n                repl();\n                /* Restart the session if repl() returned. */\n                cliConnect(CC_FORCE);\n                printf(\"\\n\");\n            }\n        } else {\n            break; /* Return to the caller. */\n        }\n    }\n    return retval == REDIS_OK ? 
0 : 1;\n}\n\n/*------------------------------------------------------------------------------\n * Cluster Manager\n *--------------------------------------------------------------------------- */\n\n/* The Cluster Manager global structure */\nstatic struct clusterManager {\n    list *nodes;    /* List of nodes in the configuration. */\n    list *errors;\n    int unreachable_masters;    /* Masters we are not able to reach. */\n} cluster_manager;\n\n/* Used by clusterManagerFixSlotsCoverage */\ndict *clusterManagerUncoveredSlots = NULL;\n\ntypedef struct clusterManagerNode {\n    redisContext *context;\n    sds name;\n    char *ip;\n    int port;\n    int bus_port; /* cluster-port */\n    uint64_t current_epoch;\n    time_t ping_sent;\n    time_t ping_recv;\n    int flags;\n    list *flags_str; /* Flags string representations */\n    sds replicate;  /* Master ID if node is a slave */\n    int dirty;      /* Node has changes that can be flushed */\n    uint8_t slots[CLUSTER_MANAGER_SLOTS];\n    int slots_count;\n    int replicas_count;\n    list *friends;\n    sds *migrating; /* An array of sds where even strings are slots and odd\n                     * strings are the destination node IDs. */\n    sds *importing; /* An array of sds where even strings are slots and odd\n                     * strings are the source node IDs. */\n    int migrating_count; /* Length of the migrating array (migrating slots*2) */\n    int importing_count; /* Length of the importing array (importing slots*2) */\n    float weight;   /* Weight used by rebalance */\n    int balance;    /* Used by rebalance */\n} clusterManagerNode;\n\n/* Data structure used to represent a sequence of cluster nodes. 
*/\ntypedef struct clusterManagerNodeArray {\n    clusterManagerNode **nodes; /* Actual nodes array */\n    clusterManagerNode **alloc; /* Pointer to the allocated memory */\n    int len;                    /* Actual length of the array */\n    int count;                  /* Non-NULL nodes count */\n} clusterManagerNodeArray;\n\n/* Used for the reshard table. */\ntypedef struct clusterManagerReshardTableItem {\n    clusterManagerNode *source;\n    int slot;\n} clusterManagerReshardTableItem;\n\n/* Info about a cluster internal link. */\n\ntypedef struct clusterManagerLink {\n    sds node_name;\n    sds node_addr;\n    int connected;\n    int handshaking;\n} clusterManagerLink;\n\nstatic dictType clusterManagerDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    NULL,                      /* key destructor */\n    dictSdsDestructor,         /* val destructor */\n    NULL                       /* allow to expand */\n};\n\nstatic dictType clusterManagerLinkDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    dictSdsDestructor,         /* key destructor */\n    dictListDestructor,        /* val destructor */\n    NULL                       /* allow to expand */\n};\n\ntypedef int clusterManagerCommandProc(int argc, char **argv);\ntypedef int (*clusterManagerOnReplyError)(redisReply *reply,\n    clusterManagerNode *n, int bulk_idx);\n\n/* Cluster Manager helper functions */\n\nstatic clusterManagerNode *clusterManagerNewNode(char *ip, int port, int bus_port);\nstatic clusterManagerNode *clusterManagerNodeByName(const char *name);\nstatic clusterManagerNode *clusterManagerNodeByAbbreviatedName(const char *n);\nstatic void 
clusterManagerNodeResetSlots(clusterManagerNode *node);\nstatic int clusterManagerNodeIsCluster(clusterManagerNode *node, char **err);\nstatic void clusterManagerPrintNotClusterNodeError(clusterManagerNode *node,\n                                                   char *err);\nstatic int clusterManagerNodeLoadInfo(clusterManagerNode *node, int opts,\n                                      char **err);\nstatic int clusterManagerLoadInfoFromNode(clusterManagerNode *node);\nstatic int clusterManagerNodeIsEmpty(clusterManagerNode *node, char **err);\nstatic int clusterManagerGetAntiAffinityScore(clusterManagerNodeArray *ipnodes,\n    int ip_count, clusterManagerNode ***offending, int *offending_len);\nstatic void clusterManagerOptimizeAntiAffinity(clusterManagerNodeArray *ipnodes,\n    int ip_count);\nstatic sds clusterManagerNodeInfo(clusterManagerNode *node, int indent);\nstatic void clusterManagerShowNodes(void);\nstatic void clusterManagerShowClusterInfo(void);\nstatic int clusterManagerFlushNodeConfig(clusterManagerNode *node, char **err);\nstatic void clusterManagerWaitForClusterJoin(void);\nstatic int clusterManagerCheckCluster(int quiet);\nstatic void clusterManagerLog(int level, const char* fmt, ...);\nstatic int clusterManagerIsConfigConsistent(void);\nstatic dict *clusterManagerGetLinkStatus(void);\nstatic void clusterManagerOnError(sds err);\nstatic void clusterManagerNodeArrayInit(clusterManagerNodeArray *array,\n                                        int len);\nstatic void clusterManagerNodeArrayReset(clusterManagerNodeArray *array);\nstatic void clusterManagerNodeArrayShift(clusterManagerNodeArray *array,\n                                         clusterManagerNode **nodeptr);\nstatic void clusterManagerNodeArrayAdd(clusterManagerNodeArray *array,\n                                       clusterManagerNode *node);\n\n/* Cluster Manager commands. 
*/\n\nstatic int clusterManagerCommandCreate(int argc, char **argv);\nstatic int clusterManagerCommandAddNode(int argc, char **argv);\nstatic int clusterManagerCommandDeleteNode(int argc, char **argv);\nstatic int clusterManagerCommandInfo(int argc, char **argv);\nstatic int clusterManagerCommandCheck(int argc, char **argv);\nstatic int clusterManagerCommandFix(int argc, char **argv);\nstatic int clusterManagerCommandReshard(int argc, char **argv);\nstatic int clusterManagerCommandRebalance(int argc, char **argv);\nstatic int clusterManagerCommandSetTimeout(int argc, char **argv);\nstatic int clusterManagerCommandImport(int argc, char **argv);\nstatic int clusterManagerCommandCall(int argc, char **argv);\nstatic int clusterManagerCommandHelp(int argc, char **argv);\nstatic int clusterManagerCommandBackup(int argc, char **argv);\n\ntypedef struct clusterManagerCommandDef {\n    char *name;\n    clusterManagerCommandProc *proc;\n    int arity;\n    char *args;\n    char *options;\n} clusterManagerCommandDef;\n\nclusterManagerCommandDef clusterManagerCommands[] = {\n    {\"create\", clusterManagerCommandCreate, -1, \"host1:port1 ... 
hostN:portN\",\n     \"replicas <arg>\"},\n    {\"check\", clusterManagerCommandCheck, -1, \"<host:port> or <host> <port> - separated by either colon or space\",\n     \"search-multiple-owners\"},\n    {\"info\", clusterManagerCommandInfo, -1, \"<host:port> or <host> <port> - separated by either colon or space\", NULL},\n    {\"fix\", clusterManagerCommandFix, -1, \"<host:port> or <host> <port> - separated by either colon or space\",\n     \"search-multiple-owners,fix-with-unreachable-masters\"},\n    {\"reshard\", clusterManagerCommandReshard, -1, \"<host:port> or <host> <port> - separated by either colon or space\",\n     \"from <arg>,to <arg>,slots <arg>,yes,timeout <arg>,pipeline <arg>,\"\n     \"replace\"},\n    {\"rebalance\", clusterManagerCommandRebalance, -1, \"<host:port> or <host> <port> - separated by either colon or space\",\n     \"weight <node1=w1...nodeN=wN>,use-empty-masters,\"\n     \"timeout <arg>,simulate,pipeline <arg>,threshold <arg>,replace\"},\n    {\"add-node\", clusterManagerCommandAddNode, 2,\n     \"new_host:new_port existing_host:existing_port\", \"slave,master-id <arg>\"},\n    {\"del-node\", clusterManagerCommandDeleteNode, 2, \"host:port node_id\",NULL},\n    {\"call\", clusterManagerCommandCall, -2,\n        \"host:port command arg arg .. 
arg\", \"only-masters,only-replicas\"},\n    {\"set-timeout\", clusterManagerCommandSetTimeout, 2,\n     \"host:port milliseconds\", NULL},\n    {\"import\", clusterManagerCommandImport, 1, \"host:port\",\n     \"from <arg>,from-user <arg>,from-pass <arg>,from-askpass,copy,replace\"},\n    {\"backup\", clusterManagerCommandBackup, 2,  \"host:port backup_directory\",\n     NULL},\n    {\"help\", clusterManagerCommandHelp, 0, NULL, NULL}\n};\n\ntypedef struct clusterManagerOptionDef {\n    char *name;\n    char *desc;\n} clusterManagerOptionDef;\n\nclusterManagerOptionDef clusterManagerOptions[] = {\n    {\"--cluster-yes\", \"Automatic yes to cluster commands prompts\"}\n};\n\nstatic void getRDB(clusterManagerNode *node);\n\nstatic int createClusterManagerCommand(char *cmdname, int argc, char **argv) {\n    clusterManagerCommand *cmd = &config.cluster_manager_command;\n    cmd->name = cmdname;\n    cmd->argc = argc;\n    cmd->argv = argc ? argv : NULL;\n    if (isColorTerm()) cmd->flags |= CLUSTER_MANAGER_CMD_FLAG_COLOR;\n\n    if (config.stdin_lastarg) {\n        char **new_argv = zmalloc(sizeof(char*) * (cmd->argc+1));\n        memcpy(new_argv, cmd->argv, sizeof(char*) * cmd->argc);\n\n        cmd->stdin_arg = readArgFromStdin();\n        new_argv[cmd->argc++] = cmd->stdin_arg;\n        cmd->argv = new_argv;\n    } else if (config.stdin_tag_arg) {\n        int i = 0, tag_match = 0;\n        cmd->stdin_arg = readArgFromStdin();\n\n        for (; i < argc; i++) {\n            if (strcmp(argv[i], config.stdin_tag_name) != 0) continue;\n\n            tag_match = 1;\n            cmd->argv[i] = (char *)cmd->stdin_arg;\n            break;\n        }\n\n        if (!tag_match) {\n            sdsfree(cmd->stdin_arg);\n            fprintf(stderr, \"Using the -X option, but the stdin tag did not match any argument.\\n\");\n            return 1;\n        }\n    }\n\n    return 0;\n}\n\nstatic clusterManagerCommandProc *validateClusterManagerCommand(void) {\n    int i, commands_count = 
sizeof(clusterManagerCommands) /\n                            sizeof(clusterManagerCommandDef);\n    clusterManagerCommandProc *proc = NULL;\n    char *cmdname = config.cluster_manager_command.name;\n    int argc = config.cluster_manager_command.argc;\n    for (i = 0; i < commands_count; i++) {\n        clusterManagerCommandDef cmddef = clusterManagerCommands[i];\n        if (!strcmp(cmddef.name, cmdname)) {\n            if ((cmddef.arity > 0 && argc != cmddef.arity) ||\n                (cmddef.arity < 0 && argc < (cmddef.arity * -1))) {\n                fprintf(stderr, \"[ERR] Wrong number of arguments for \"\n                                \"specified --cluster sub command\\n\");\n                return NULL;\n            }\n            proc = cmddef.proc;\n        }\n    }\n    if (!proc) fprintf(stderr, \"Unknown --cluster subcommand\\n\");\n    return proc;\n}\n\nstatic int parseClusterNodeAddress(char *addr, char **ip_ptr, int *port_ptr,\n                                   int *bus_port_ptr)\n{\n    /* ip:port[@bus_port] */\n    char *c = strrchr(addr, '@');\n    if (c != NULL) {\n        *c = '\\0';\n        if (bus_port_ptr != NULL)\n            *bus_port_ptr = atoi(c + 1);\n    }\n    c = strrchr(addr, ':');\n    if (c != NULL) {\n        *c = '\\0';\n        *ip_ptr = addr;\n        *port_ptr = atoi(++c);\n    } else return 0;\n    return 1;\n}\n\n/* Get host ip and port from command arguments. If only one argument has\n * been provided it must be in the form of 'ip:port', otherwise\n * the first argument must be the ip and the second one the port.\n * If host and port can be detected, it returns 1 and stores host and\n * port into the variables referenced by the 'ip_ptr' and 'port_ptr' pointers,\n * otherwise it returns 0. 
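For example, both of the\n * following (example address) resolve to ip \"127.0.0.1\" and port 7000:\n *\n *   redis-cli --cluster info 127.0.0.1:7000\n *   redis-cli --cluster info 127.0.0.1 7000 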
*/\nstatic int getClusterHostFromCmdArgs(int argc, char **argv,\n                                     char **ip_ptr, int *port_ptr) {\n    int port = 0;\n    char *ip = NULL;\n    if (argc == 1) {\n        char *addr = argv[0];\n        if (!parseClusterNodeAddress(addr, &ip, &port, NULL)) return 0;\n    } else {\n        ip = argv[0];\n        port = atoi(argv[1]);\n    }\n    if (!ip || !port) return 0;\n    else {\n        *ip_ptr = ip;\n        *port_ptr = port;\n    }\n    return 1;\n}\n\nstatic void freeClusterManagerNodeFlags(list *flags) {\n    listIter li;\n    listNode *ln;\n    listRewind(flags, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        sds flag = ln->value;\n        sdsfree(flag);\n    }\n    listRelease(flags);\n}\n\nstatic void freeClusterManagerNode(clusterManagerNode *node) {\n    if (node->context != NULL) redisFree(node->context);\n    if (node->friends != NULL) {\n        listIter li;\n        listNode *ln;\n        listRewind(node->friends,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *fn = ln->value;\n            freeClusterManagerNode(fn);\n        }\n        listRelease(node->friends);\n        node->friends = NULL;\n    }\n    if (node->name != NULL) sdsfree(node->name);\n    if (node->replicate != NULL) sdsfree(node->replicate);\n    if ((node->flags & CLUSTER_MANAGER_FLAG_FRIEND) && node->ip)\n        sdsfree(node->ip);\n    int i;\n    if (node->migrating != NULL) {\n        for (i = 0; i < node->migrating_count; i++) sdsfree(node->migrating[i]);\n        zfree(node->migrating);\n    }\n    if (node->importing != NULL) {\n        for (i = 0; i < node->importing_count; i++) sdsfree(node->importing[i]);\n        zfree(node->importing);\n    }\n    if (node->flags_str != NULL) {\n        freeClusterManagerNodeFlags(node->flags_str);\n        node->flags_str = NULL;\n    }\n    zfree(node);\n}\n\nstatic void freeClusterManager(void) {\n    listIter li;\n    listNode *ln;\n    if 
(cluster_manager.nodes != NULL) {\n        listRewind(cluster_manager.nodes,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            freeClusterManagerNode(n);\n        }\n        listRelease(cluster_manager.nodes);\n        cluster_manager.nodes = NULL;\n    }\n    if (cluster_manager.errors != NULL) {\n        listRewind(cluster_manager.errors,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            sds err = ln->value;\n            sdsfree(err);\n        }\n        listRelease(cluster_manager.errors);\n        cluster_manager.errors = NULL;\n    }\n    if (clusterManagerUncoveredSlots != NULL)\n        dictRelease(clusterManagerUncoveredSlots);\n}\n\nstatic clusterManagerNode *clusterManagerNewNode(char *ip, int port, int bus_port) {\n    clusterManagerNode *node = zmalloc(sizeof(*node));\n    node->context = NULL;\n    node->name = NULL;\n    node->ip = ip;\n    node->port = port;\n    /* We don't need to know the bus_port, at this point this value may be wrong.\n     * If it is used, it will be corrected in clusterManagerLoadInfoFromNode. */\n    node->bus_port = bus_port ? 
bus_port : port + CLUSTER_MANAGER_PORT_INCR;\n    node->current_epoch = 0;\n    node->ping_sent = 0;\n    node->ping_recv = 0;\n    node->flags = 0;\n    node->flags_str = NULL;\n    node->replicate = NULL;\n    node->dirty = 0;\n    node->friends = NULL;\n    node->migrating = NULL;\n    node->importing = NULL;\n    node->migrating_count = 0;\n    node->importing_count = 0;\n    node->replicas_count = 0;\n    node->weight = 1.0f;\n    node->balance = 0;\n    clusterManagerNodeResetSlots(node);\n    return node;\n}\n\nstatic sds clusterManagerGetNodeRDBFilename(clusterManagerNode *node) {\n    assert(config.cluster_manager_command.backup_dir);\n    sds filename = sdsnew(config.cluster_manager_command.backup_dir);\n    if (filename[sdslen(filename) - 1] != '/')\n        filename = sdscat(filename, \"/\");\n    filename = sdscatprintf(filename, \"redis-node-%s-%d-%s.rdb\", node->ip,\n                            node->port, node->name);\n    return filename;\n}\n\n/* Check whether reply is NULL or its type is REDIS_REPLY_ERROR. In the\n * latter case, if the 'err' arg is not NULL, it gets allocated with a copy\n * of the reply error (it's up to the caller to free it), otherwise\n * the error is printed directly. */\nstatic int clusterManagerCheckRedisReply(clusterManagerNode *n,\n                                         redisReply *r, char **err)\n{\n    int is_err = 0;\n    if (!r || (is_err = (r->type == REDIS_REPLY_ERROR))) {\n        if (is_err) {\n            if (err != NULL) {\n                *err = zmalloc((r->len + 1) * sizeof(char));\n                redis_strlcpy(*err, r->str,(r->len + 1));\n            } else CLUSTER_MANAGER_PRINT_REPLY_ERROR(n, r->str);\n        }\n        return 0;\n    }\n    return 1;\n}\n\n/* Call MULTI command on a cluster node. 
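A typical (sketched) usage\n * pattern pairs it with clusterManagerExecTransaction() below:\n *\n *   if (clusterManagerStartTransaction(node)) {\n *       // ... queue commands with CLUSTER_MANAGER_COMMAND(node, ...) ...\n *       clusterManagerExecTransaction(node, NULL);\n *   } 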
*/\nstatic int clusterManagerStartTransaction(clusterManagerNode *node) {\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"MULTI\");\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\n/* Call EXEC command on a cluster node. */\nstatic int clusterManagerExecTransaction(clusterManagerNode *node,\n                                         clusterManagerOnReplyError onerror)\n{\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"EXEC\");\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (success) {\n        if (reply->type != REDIS_REPLY_ARRAY) {\n            success = 0;\n            goto cleanup;\n        }\n        size_t i;\n        for (i = 0; i < reply->elements; i++) {\n            redisReply *r = reply->element[i];\n            char *err = NULL;\n            success = clusterManagerCheckRedisReply(node, r, &err);\n            if (!success && onerror) success = onerror(r, node, i);\n            if (err) {\n                if (!success)\n                    CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n                zfree(err);\n            }\n            if (!success) break;\n        }\n    }\ncleanup:\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\nstatic int clusterManagerNodeConnect(clusterManagerNode *node) {\n    if (node->context) redisFree(node->context);\n    node->context = redisConnectWrapper(node->ip, node->port, config.connect_timeout);\n    if (!node->context->err && config.tls) {\n        const char *err = NULL;\n        if (cliSecureConnection(node->context, config.sslconfig, &err) == REDIS_ERR && err) {\n            fprintf(stderr,\"TLS Error: %s\\n\", err);\n            redisFree(node->context);\n            node->context = NULL;\n            return 0;\n        }\n    }\n    if (node->context->err) {\n        fprintf(stderr,\"Could not connect to Redis at \");\n        fprintf(stderr,\"%s:%d: %s\\n\", 
node->ip, node->port,\n                node->context->errstr);\n        redisFree(node->context);\n        node->context = NULL;\n        return 0;\n    }\n    /* Set aggressive KEEP_ALIVE socket option in the Redis context socket\n     * in order to prevent timeouts caused by the execution of long\n     * commands. At the same time this improves the detection of real\n     * errors. */\n    anetKeepAlive(NULL, node->context->fd, REDIS_CLI_KEEPALIVE_INTERVAL);\n    if (config.conn_info.auth) {\n        redisReply *reply;\n        if (config.conn_info.user == NULL)\n            reply = redisCommand(node->context,\"AUTH %s\", config.conn_info.auth);\n        else\n            reply = redisCommand(node->context,\"AUTH %s %s\",\n                                 config.conn_info.user,config.conn_info.auth);\n        int ok = clusterManagerCheckRedisReply(node, reply, NULL);\n        if (reply != NULL) freeReplyObject(reply);\n        if (!ok) return 0;\n    }\n    return 1;\n}\n\nstatic void clusterManagerRemoveNodeFromList(list *nodelist,\n                                             clusterManagerNode *node) {\n    listIter li;\n    listNode *ln;\n    listRewind(nodelist, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        if (node == ln->value) {\n            listDelNode(nodelist, ln);\n            break;\n        }\n    }\n}\n\n/* Return the node with the specified name (ID) or NULL. 
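The supplied name is\n * lowercased before comparison, so a node ID (the 40-character hex string\n * shown by CLUSTER NODES) may be passed in any case; e.g. the hypothetical\n *\n *   clusterManagerNodeByName(\"07C37DFEB235213A872192D90877D0CD55635B91\")\n *\n * still matches the lowercase stored name. 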
*/\nstatic clusterManagerNode *clusterManagerNodeByName(const char *name) {\n    if (cluster_manager.nodes == NULL) return NULL;\n    clusterManagerNode *found = NULL;\n    sds lcname = sdsempty();\n    lcname = sdscpy(lcname, name);\n    sdstolower(lcname);\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->name && !sdscmp(n->name, lcname)) {\n            found = n;\n            break;\n        }\n    }\n    sdsfree(lcname);\n    return found;\n}\n\n/* Like clusterManagerNodeByName but the specified name can be just the first\n * part of the node ID, as long as the prefix is unique across the\n * cluster.\n */\nstatic clusterManagerNode *clusterManagerNodeByAbbreviatedName(const char *name)\n{\n    if (cluster_manager.nodes == NULL) return NULL;\n    clusterManagerNode *found = NULL;\n    sds lcname = sdsempty();\n    lcname = sdscpy(lcname, name);\n    sdstolower(lcname);\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->name &&\n            strstr(n->name, lcname) == n->name) {\n            found = n;\n            break;\n        }\n    }\n    sdsfree(lcname);\n    return found;\n}\n\nstatic void clusterManagerNodeResetSlots(clusterManagerNode *node) {\n    memset(node->slots, 0, sizeof(node->slots));\n    node->slots_count = 0;\n}\n\n/* Call the \"INFO\" redis command on the specified node and return the reply. 
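Callers typically extract\n * single fields from it with getLongInfoField(), e.g.\n *\n *   getLongInfoField(info->str, \"cluster_enabled\")\n *\n * as done in clusterManagerNodeIsCluster() below. 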
*/\nstatic redisReply *clusterManagerGetNodeRedisInfo(clusterManagerNode *node,\n                                                  char **err)\n{\n    redisReply *info = CLUSTER_MANAGER_COMMAND(node, \"INFO\");\n    if (err != NULL) *err = NULL;\n    if (info == NULL) return NULL;\n    if (info->type == REDIS_REPLY_ERROR) {\n        if (err != NULL) {\n            *err = zmalloc((info->len + 1) * sizeof(char));\n            redis_strlcpy(*err, info->str,(info->len + 1));\n        }\n        freeReplyObject(info);\n        return  NULL;\n    }\n    return info;\n}\n\nstatic int clusterManagerNodeIsCluster(clusterManagerNode *node, char **err) {\n    redisReply *info = clusterManagerGetNodeRedisInfo(node, err);\n    if (info == NULL) return 0;\n    int is_cluster = (int) getLongInfoField(info->str, \"cluster_enabled\");\n    freeReplyObject(info);\n    return is_cluster;\n}\n\n/* Checks whether the node is empty. Node is considered not-empty if it has\n * some key or if it already knows other nodes */\nstatic int clusterManagerNodeIsEmpty(clusterManagerNode *node, char **err) {\n    redisReply *info = clusterManagerGetNodeRedisInfo(node, err);\n    int is_empty = 1;\n    if (info == NULL) return 0;\n    if (strstr(info->str, \"db0:\") != NULL) {\n        is_empty = 0;\n        goto result;\n    }\n    freeReplyObject(info);\n    info = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER INFO\");\n    if (err != NULL) *err = NULL;\n    if (!clusterManagerCheckRedisReply(node, info, err)) {\n        is_empty = 0;\n        goto result;\n    }\n    long known_nodes = getLongInfoField(info->str, \"cluster_known_nodes\");\n    is_empty = (known_nodes == 1);\nresult:\n    freeReplyObject(info);\n    return is_empty;\n}\n\n/* Return the anti-affinity score, which is a measure of the amount of\n * violations of anti-affinity in the current cluster layout, that is, how\n * badly the masters and slaves are distributed in the different IP\n * addresses so that slaves of the same master are 
not in the same host as their\n * master and are also in different hosts from each other.\n *\n * The score is calculated as follows:\n *\n * SAME_AS_MASTER = 10000 * each slave in the same IP as its master.\n * SAME_AS_SLAVE  = 1 * each slave having the same IP as another slave\n *                      of the same master.\n * FINAL_SCORE = SAME_AS_MASTER + SAME_AS_SLAVE\n *\n * So a greater score means a worse anti-affinity level, while zero\n * means perfect anti-affinity.\n *\n * The anti-affinity optimization will try to get a score as low as\n * possible. Since we do not want to sacrifice the fact that slaves should\n * not be in the same host as the master, we assign 10000 times the score\n * to this violation, so that we'll optimize for the second factor only\n * if it does not impact the first one.\n *\n * The ipnodes argument is an array of clusterManagerNodeArray, one for\n * each IP, while ip_count is the total number of IPs in the configuration.\n *\n * The function returns the above score, and the list of\n * offending slaves can be stored into the 'offending' argument,\n * so that the optimizer can try changing the configuration of the\n * slaves violating the anti-affinity goals. 
*/\nstatic int clusterManagerGetAntiAffinityScore(clusterManagerNodeArray *ipnodes,\n    int ip_count, clusterManagerNode ***offending, int *offending_len)\n{\n    int score = 0, i, j;\n    int node_len = cluster_manager.nodes->len;\n    clusterManagerNode **offending_p = NULL;\n    if (offending != NULL) {\n        *offending = zcalloc(node_len * sizeof(clusterManagerNode*));\n        offending_p = *offending;\n    }\n    /* For each set of nodes in the same host, split by\n     * related nodes (masters and slaves which are involved in\n     * replication of each other) */\n    for (i = 0; i < ip_count; i++) {\n        clusterManagerNodeArray *node_array = &(ipnodes[i]);\n        dict *related = dictCreate(&clusterManagerDictType);\n        char *ip = NULL;\n        for (j = 0; j < node_array->len; j++) {\n            clusterManagerNode *node = node_array->nodes[j];\n            if (node == NULL) continue;\n            if (!ip) ip = node->ip;\n            sds types;\n            /* We always use the Master ID as key. */\n            sds key = (!node->replicate ? node->name : node->replicate);\n            assert(key != NULL);\n            dictEntry *entry = dictFind(related, key);\n            if (entry) types = sdsdup((sds) dictGetVal(entry));\n            else types = sdsempty();\n            /* Master type 'm' is always set as the first character of the\n             * types string. */\n            if (node->replicate) types = sdscat(types, \"s\");\n            else {\n                sds s = sdscatsds(sdsnew(\"m\"), types);\n                sdsfree(types);\n                types = s;\n            }\n            dictReplace(related, key, types);\n        }\n        /* Now it's trivial to check, for each related group having the\n         * same host, what is their local score. 
*/\n        dictIterator iter;\n        dictEntry *entry;\n\n        dictInitIterator(&iter, related);\n        while ((entry = dictNext(&iter)) != NULL) {\n            sds types = (sds) dictGetVal(entry);\n            sds name = (sds) dictGetKey(entry);\n            int typeslen = sdslen(types);\n            if (typeslen < 2) continue;\n            if (types[0] == 'm') score += (10000 * (typeslen - 1));\n            else score += (1 * typeslen);\n            if (offending == NULL) continue;\n            /* Populate the list of offending nodes. */\n            listIter li;\n            listNode *ln;\n            listRewind(cluster_manager.nodes, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                clusterManagerNode *n = ln->value;\n                if (n->replicate == NULL) continue;\n                if (!strcmp(n->replicate, name) && !strcmp(n->ip, ip)) {\n                    *(offending_p++) = n;\n                    if (offending_len != NULL) (*offending_len)++;\n                    break;\n                }\n            }\n        }\n        //if (offending_len != NULL) *offending_len = offending_p - *offending;\n        dictResetIterator(&iter);\n        dictRelease(related);\n    }\n    return score;\n}\n\nstatic void clusterManagerOptimizeAntiAffinity(clusterManagerNodeArray *ipnodes,\n    int ip_count)\n{\n    clusterManagerNode **offenders = NULL;\n    int score = clusterManagerGetAntiAffinityScore(ipnodes, ip_count,\n                                                   NULL, NULL);\n    if (score == 0) goto cleanup;\n    clusterManagerLogInfo(\">>> Trying to optimize slaves allocation \"\n                          \"for anti-affinity\\n\");\n    int node_len = cluster_manager.nodes->len;\n    int maxiter = 500 * node_len; // Effort is proportional to cluster size...\n    srand(time(NULL));\n    while (maxiter > 0) {\n        int offending_len = 0;\n        if (offenders != NULL) {\n            zfree(offenders);\n            offenders = 
NULL;\n        }\n        score = clusterManagerGetAntiAffinityScore(ipnodes,\n                                                   ip_count,\n                                                   &offenders,\n                                                   &offending_len);\n        if (score == 0 || offending_len == 0) break; // Optimal anti-affinity reached\n        /* We'll try to randomly swap the assigned master of a slave causing\n         * an affinity problem with the master of another random slave, to\n         * see if we can improve the affinity. */\n        int rand_idx = rand() % offending_len;\n        clusterManagerNode *first = offenders[rand_idx],\n                           *second = NULL;\n        clusterManagerNode **other_replicas = zcalloc((node_len - 1) *\n                                                      sizeof(*other_replicas));\n        int other_replicas_count = 0;\n        listIter li;\n        listNode *ln;\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n != first && n->replicate != NULL)\n                other_replicas[other_replicas_count++] = n;\n        }\n        if (other_replicas_count == 0) {\n            zfree(other_replicas);\n            break;\n        }\n        rand_idx = rand() % other_replicas_count;\n        second = other_replicas[rand_idx];\n        char *first_master = first->replicate,\n             *second_master = second->replicate;\n        first->replicate = second_master, first->dirty = 1;\n        second->replicate = first_master, second->dirty = 1;\n        int new_score = clusterManagerGetAntiAffinityScore(ipnodes,\n                                                           ip_count,\n                                                           NULL, NULL);\n        /* If the change actually makes things worse, revert. 
Otherwise\n         * leave as it is because the best solution may need a few\n         * combined swaps. */\n        if (new_score > score) {\n            first->replicate = first_master;\n            second->replicate = second_master;\n        }\n        zfree(other_replicas);\n        maxiter--;\n    }\n    score = clusterManagerGetAntiAffinityScore(ipnodes, ip_count, NULL, NULL);\n    char *msg;\n    int perfect = (score == 0);\n    int log_level = (perfect ? CLUSTER_MANAGER_LOG_LVL_SUCCESS :\n                               CLUSTER_MANAGER_LOG_LVL_WARN);\n    if (perfect) msg = \"[OK] Perfect anti-affinity obtained!\";\n    else if (score >= 10000)\n        msg = \"[WARNING] Some slaves are in the same host as their master\";\n    else\n        msg = \"[WARNING] Some slaves of the same master are in the same host\";\n    clusterManagerLog(log_level, \"%s\\n\", msg);\ncleanup:\n    zfree(offenders);\n}\n\n/* Return a printable string representation of the node's flags */\nstatic sds clusterManagerNodeFlagString(clusterManagerNode *node) {\n    sds flags = sdsempty();\n    if (!node->flags_str) return flags;\n    int empty = 1;\n    listIter li;\n    listNode *ln;\n    listRewind(node->flags_str, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        sds flag = ln->value;\n        if (strcmp(flag, \"myself\") == 0) continue;\n        if (!empty) flags = sdscat(flags, \",\");\n        flags = sdscatfmt(flags, \"%S\", flag);\n        empty = 0;\n    }\n    return flags;\n}\n\n/* Return a printable string representation of the node's slots */\nstatic sds clusterManagerNodeSlotsString(clusterManagerNode *node) {\n    sds slots = sdsempty();\n    int first_range_idx = -1, last_slot_idx = -1, i;\n    for (i = 0; i < CLUSTER_MANAGER_SLOTS; i++) {\n        int has_slot = node->slots[i];\n        if (has_slot) {\n            if (first_range_idx == -1) {\n                if (sdslen(slots)) slots = sdscat(slots, \",\");\n                first_range_idx = i;\n                slots = 
sdscatfmt(slots, \"[%u\", i);\n            }\n            last_slot_idx = i;\n        } else {\n            if (last_slot_idx >= 0) {\n                if (first_range_idx == last_slot_idx)\n                    slots = sdscat(slots, \"]\");\n                else slots = sdscatfmt(slots, \"-%u]\", last_slot_idx);\n            }\n            last_slot_idx = -1;\n            first_range_idx = -1;\n        }\n    }\n    if (last_slot_idx >= 0) {\n        if (first_range_idx == last_slot_idx) slots = sdscat(slots, \"]\");\n        else slots = sdscatfmt(slots, \"-%u]\", last_slot_idx);\n    }\n    return slots;\n}\n\nstatic sds clusterManagerNodeGetJSON(clusterManagerNode *node,\n                                     unsigned long error_count)\n{\n    sds json = sdsempty();\n    sds replicate = sdsempty();\n    if (node->replicate)\n        replicate = sdscatprintf(replicate, \"\\\"%s\\\"\", node->replicate);\n    else\n        replicate = sdscat(replicate, \"null\");\n    sds slots = clusterManagerNodeSlotsString(node);\n    sds flags = clusterManagerNodeFlagString(node);\n    char *p = slots;\n    while ((p = strchr(p, '-')) != NULL)\n        *(p++) = ',';\n    json = sdscatprintf(json,\n        \"  {\\n\"\n        \"    \\\"name\\\": \\\"%s\\\",\\n\"\n        \"    \\\"host\\\": \\\"%s\\\",\\n\"\n        \"    \\\"port\\\": %d,\\n\"\n        \"    \\\"replicate\\\": %s,\\n\"\n        \"    \\\"slots\\\": [%s],\\n\"\n        \"    \\\"slots_count\\\": %d,\\n\"\n        \"    \\\"flags\\\": \\\"%s\\\",\\n\"\n        \"    \\\"current_epoch\\\": %llu\",\n        node->name,\n        node->ip,\n        node->port,\n        replicate,\n        slots,\n        node->slots_count,\n        flags,\n        (unsigned long long)node->current_epoch\n    );\n    if (error_count > 0) {\n        json = sdscatprintf(json, \",\\n    \\\"cluster_errors\\\": %lu\",\n                            error_count);\n    }\n    if (node->migrating_count > 0 && node->migrating != NULL) {\n        
int i = 0;\n        sds migrating = sdsempty();\n        for (; i < node->migrating_count; i += 2) {\n            sds slot = node->migrating[i];\n            sds dest = node->migrating[i + 1];\n            if (slot && dest) {\n                if (sdslen(migrating) > 0) migrating = sdscat(migrating, \",\");\n                migrating = sdscatfmt(migrating, \"\\\"%S\\\": \\\"%S\\\"\", slot, dest);\n            }\n        }\n        if (sdslen(migrating) > 0)\n            json = sdscatfmt(json, \",\\n    \\\"migrating\\\": {%S}\", migrating);\n        sdsfree(migrating);\n    }\n    if (node->importing_count > 0 && node->importing != NULL) {\n        int i = 0;\n        sds importing = sdsempty();\n        for (; i < node->importing_count; i += 2) {\n            sds slot = node->importing[i];\n            sds from = node->importing[i + 1];\n            if (slot && from) {\n                if (sdslen(importing) > 0) importing = sdscat(importing, \",\");\n                importing = sdscatfmt(importing, \"\\\"%S\\\": \\\"%S\\\"\", slot, from);\n            }\n        }\n        if (sdslen(importing) > 0)\n            json = sdscatfmt(json, \",\\n    \\\"importing\\\": {%S}\", importing);\n        sdsfree(importing);\n    }\n    json = sdscat(json, \"\\n  }\");\n    sdsfree(replicate);\n    sdsfree(slots);\n    sdsfree(flags);\n    return json;\n}\n\n\n/* -----------------------------------------------------------------------------\n * Key space handling\n * -------------------------------------------------------------------------- */\n\n/* We have 16384 hash slots. The hash slot of a given key is obtained\n * as the least significant 14 bits of the crc16 of the key.\n *\n * However if the key contains the {...} pattern, only the part between\n * { and } is hashed. This may be useful in the future to force certain\n * keys to be in the same node (assuming no resharding is in progress). 
*/\nstatic unsigned int clusterManagerKeyHashSlot(char *key, int keylen) {\n    int s, e; /* start-end indexes of { and } */\n\n    for (s = 0; s < keylen; s++)\n        if (key[s] == '{') break;\n\n    /* No '{' ? Hash the whole key. This is the base case. */\n    if (s == keylen) return crc16(key,keylen) & 0x3FFF;\n\n    /* '{' found? Check if we have the corresponding '}'. */\n    for (e = s+1; e < keylen; e++)\n        if (key[e] == '}') break;\n\n    /* No '}' or nothing between {} ? Hash the whole key. */\n    if (e == keylen || e == s+1) return crc16(key,keylen) & 0x3FFF;\n\n    /* If we are here there is both a { and a } on its right. Hash\n     * what is in the middle between { and }. */\n    return crc16(key+s+1,e-s-1) & 0x3FFF;\n}\n\n/* Return a string representation of the cluster node. */\nstatic sds clusterManagerNodeInfo(clusterManagerNode *node, int indent) {\n    sds info = sdsempty();\n    sds spaces = sdsempty();\n    int i;\n    for (i = 0; i < indent; i++) spaces = sdscat(spaces, \" \");\n    if (indent) info = sdscat(info, spaces);\n    int is_master = !(node->flags & CLUSTER_MANAGER_FLAG_SLAVE);\n    char *role = (is_master ? 
\"M\" : \"S\");\n    sds slots = NULL;\n    if (node->dirty && node->replicate != NULL)\n        info = sdscatfmt(info, \"S: %S %s:%u\", node->name, node->ip, node->port);\n    else {\n        slots = clusterManagerNodeSlotsString(node);\n        sds flags = clusterManagerNodeFlagString(node);\n        info = sdscatfmt(info, \"%s: %S %s:%u\\n\"\n                               \"%s   slots:%S (%u slots) \"\n                               \"%S\",\n                               role, node->name, node->ip, node->port, spaces,\n                               slots, node->slots_count, flags);\n        sdsfree(slots);\n        sdsfree(flags);\n    }\n    if (node->replicate != NULL)\n        info = sdscatfmt(info, \"\\n%s   replicates %S\", spaces, node->replicate);\n    else if (node->replicas_count)\n        info = sdscatfmt(info, \"\\n%s   %U additional replica(s)\",\n                         spaces, node->replicas_count);\n    sdsfree(spaces);\n    return info;\n}\n\nstatic void clusterManagerShowNodes(void) {\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        sds info = clusterManagerNodeInfo(node, 0);\n        printf(\"%s\\n\", (char *) info);\n        sdsfree(info);\n    }\n}\n\nstatic void clusterManagerShowClusterInfo(void) {\n    int masters = 0;\n    long long keys = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        if (!(node->flags & CLUSTER_MANAGER_FLAG_SLAVE)) {\n            if (!node->name) continue;\n            int replicas = 0;\n            long long dbsize = -1;\n            char name[9];\n            memcpy(name, node->name, 8);\n            name[8] = '\\0';\n            listIter ri;\n            listNode *rn;\n            listRewind(cluster_manager.nodes, &ri);\n            while ((rn = 
listNext(&ri)) != NULL) {\n                clusterManagerNode *n = rn->value;\n                if (n == node || !(n->flags & CLUSTER_MANAGER_FLAG_SLAVE))\n                    continue;\n                if (n->replicate && !strcmp(n->replicate, node->name))\n                    replicas++;\n            }\n            redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"DBSIZE\");\n            if (reply != NULL && reply->type == REDIS_REPLY_INTEGER)\n                dbsize = reply->integer;\n            if (dbsize < 0) {\n                char *err = \"\";\n                if (reply != NULL && reply->type == REDIS_REPLY_ERROR)\n                    err = reply->str;\n                CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n                if (reply != NULL) freeReplyObject(reply);\n                return;\n            }\n            if (reply != NULL) freeReplyObject(reply);\n            printf(\"%s:%d (%s...) -> %lld keys | %d slots | %d slaves.\\n\",\n                   node->ip, node->port, name, dbsize,\n                   node->slots_count, replicas);\n            masters++;\n            keys += dbsize;\n        }\n    }\n    clusterManagerLogOk(\"[OK] %lld keys in %d masters.\\n\", keys, masters);\n    float keys_per_slot = keys / (float) CLUSTER_MANAGER_SLOTS;\n    printf(\"%.2f keys per slot on average.\\n\", keys_per_slot);\n}\n\n/* Flush dirty slots configuration of the node by calling CLUSTER ADDSLOTS */\nstatic int clusterManagerAddSlots(clusterManagerNode *node, char **err)\n{\n    redisReply *reply = NULL;\n    void *_reply = NULL;\n    int success = 1;\n    /* First two args are used for the command itself. 
*/\n    int argc = node->slots_count + 2;\n    sds *argv = zmalloc(argc * sizeof(*argv));\n    size_t *argvlen = zmalloc(argc * sizeof(*argvlen));\n    argv[0] = \"CLUSTER\";\n    argv[1] = \"ADDSLOTS\";\n    argvlen[0] = 7;\n    argvlen[1] = 8;\n    *err = NULL;\n    int i, argv_idx = 2;\n    for (i = 0; i < CLUSTER_MANAGER_SLOTS; i++) {\n        if (argv_idx >= argc) break;\n        if (node->slots[i]) {\n            argv[argv_idx] = sdsfromlonglong((long long) i);\n            argvlen[argv_idx] = sdslen(argv[argv_idx]);\n            argv_idx++;\n        }\n    }\n    if (argv_idx == 2) {\n        success = 0;\n        goto cleanup;\n    }\n    redisAppendCommandArgv(node->context,argc,(const char**)argv,argvlen);\n    if (redisGetReply(node->context, &_reply) != REDIS_OK) {\n        success = 0;\n        goto cleanup;\n    }\n    reply = (redisReply*) _reply;\n    success = clusterManagerCheckRedisReply(node, reply, err);\ncleanup:\n    zfree(argvlen);\n    if (argv != NULL) {\n        for (i = 2; i < argc; i++) sdsfree(argv[i]);\n        zfree(argv);\n    }\n    if (reply != NULL) freeReplyObject(reply);\n    return success;\n}\n\n/* Get the node the slot is assigned to from the point of view of node *n.\n * If the slot is unassigned or if the reply is an error, return NULL.\n * Use the **err argument in order to check whether the slot is unassigned\n * or the reply resulted in an error. 
*/\nstatic clusterManagerNode *clusterManagerGetSlotOwner(clusterManagerNode *n,\n                                                      int slot, char **err)\n{\n    assert(slot >= 0 && slot < CLUSTER_MANAGER_SLOTS);\n    clusterManagerNode *owner = NULL;\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(n, \"CLUSTER SLOTS\");\n    if (clusterManagerCheckRedisReply(n, reply, err)) {\n        assert(reply->type == REDIS_REPLY_ARRAY);\n        size_t i;\n        for (i = 0; i < reply->elements; i++) {\n            redisReply *r = reply->element[i];\n            assert(r->type == REDIS_REPLY_ARRAY && r->elements >= 3);\n            int from, to;\n            from = r->element[0]->integer;\n            to = r->element[1]->integer;\n            if (slot < from || slot > to) continue;\n            redisReply *nr =  r->element[2];\n            assert(nr->type == REDIS_REPLY_ARRAY && nr->elements >= 2);\n            char *name = NULL;\n            if (nr->elements >= 3)\n                name =  nr->element[2]->str;\n            if (name != NULL)\n                owner = clusterManagerNodeByName(name);\n            else {\n                char *ip = nr->element[0]->str;\n                assert(ip != NULL);\n                int port = (int) nr->element[1]->integer;\n                listIter li;\n                listNode *ln;\n                listRewind(cluster_manager.nodes, &li);\n                while ((ln = listNext(&li)) != NULL) {\n                    clusterManagerNode *nd = ln->value;\n                    if (strcmp(nd->ip, ip) == 0 && port == nd->port) {\n                        owner = nd;\n                        break;\n                    }\n                }\n            }\n            if (owner) break;\n        }\n    }\n    if (reply) freeReplyObject(reply);\n    return owner;\n}\n\n/* Set slot status to \"importing\" or \"migrating\" */\nstatic int clusterManagerSetSlot(clusterManagerNode *node1,\n                                 clusterManagerNode *node2,\n   
                              int slot, const char *status, char **err) {\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node1, \"CLUSTER \"\n                                                \"SETSLOT %d %s %s\",\n                                                slot, status,\n                                                (char *) node2->name);\n    if (err != NULL) *err = NULL;\n    if (!reply) {\n        if (err) *err = zstrdup(\"CLUSTER SETSLOT failed to run\");\n        return 0;\n    }\n    int success = 1;\n    if (reply->type == REDIS_REPLY_ERROR) {\n        success = 0;\n        if (err != NULL) {\n            *err = zmalloc((reply->len + 1) * sizeof(char));\n            redis_strlcpy(*err, reply->str,(reply->len + 1));\n        } else CLUSTER_MANAGER_PRINT_REPLY_ERROR(node1, reply->str);\n        goto cleanup;\n    }\ncleanup:\n    freeReplyObject(reply);\n    return success;\n}\n\nstatic int clusterManagerClearSlotStatus(clusterManagerNode *node, int slot) {\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node,\n        \"CLUSTER SETSLOT %d %s\", slot, \"STABLE\");\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\nstatic int clusterManagerDelSlot(clusterManagerNode *node, int slot,\n                                 int ignore_unassigned_err)\n{\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node,\n        \"CLUSTER DELSLOTS %d\", slot);\n    char *err = NULL;\n    int success = clusterManagerCheckRedisReply(node, reply, &err);\n    if (!success && reply && reply->type == REDIS_REPLY_ERROR &&\n        ignore_unassigned_err)\n    {\n        char *get_owner_err = NULL;\n        clusterManagerNode *assigned_to =\n            clusterManagerGetSlotOwner(node, slot, &get_owner_err);\n        if (!assigned_to) {\n            if (get_owner_err == NULL) success = 1;\n            else {\n                CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, get_owner_err);\n                
zfree(get_owner_err);\n            }\n        }\n    }\n    if (!success && err != NULL) {\n        CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n        zfree(err);\n    }\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\nstatic int clusterManagerAddSlot(clusterManagerNode *node, int slot) {\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node,\n        \"CLUSTER ADDSLOTS %d\", slot);\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\nstatic signed int clusterManagerCountKeysInSlot(clusterManagerNode *node,\n                                                int slot)\n{\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node,\n        \"CLUSTER COUNTKEYSINSLOT %d\", slot);\n    int count = -1;\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (success && reply->type == REDIS_REPLY_INTEGER) count = reply->integer;\n    if (reply) freeReplyObject(reply);\n    return count;\n}\n\nstatic int clusterManagerBumpEpoch(clusterManagerNode *node) {\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER BUMPEPOCH\");\n    int success = clusterManagerCheckRedisReply(node, reply, NULL);\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\n/* Callback used by clusterManagerSetSlotOwner transaction. It should ignore\n * errors except for ADDSLOTS errors.\n * Return 1 if the error should be ignored. */\nstatic int clusterManagerOnSetOwnerErr(redisReply *reply,\n    clusterManagerNode *n, int bulk_idx)\n{\n    UNUSED(reply);\n    UNUSED(n);\n    /* Only raise error when ADDSLOTS fail (bulk_idx == 1). 
*/\n    return (bulk_idx != 1);\n}\n\nstatic int clusterManagerSetSlotOwner(clusterManagerNode *owner,\n                                      int slot,\n                                      int do_clear)\n{\n    int success = clusterManagerStartTransaction(owner);\n    if (!success) return 0;\n    /* Ensure the slot is not already assigned. */\n    clusterManagerDelSlot(owner, slot, 1);\n    /* Add the slot and bump epoch. */\n    clusterManagerAddSlot(owner, slot);\n    if (do_clear) clusterManagerClearSlotStatus(owner, slot);\n    clusterManagerBumpEpoch(owner);\n    success = clusterManagerExecTransaction(owner, clusterManagerOnSetOwnerErr);\n    return success;\n}\n\n/* Get the hash for the values of the specified keys in *keys_reply for the\n * specified nodes *n1 and *n2, by calling DEBUG DIGEST-VALUE redis command\n * on both nodes. Every key with same name on both nodes but having different\n * values will be added to the *diffs list. Return 0 in case of reply\n * error. */\nstatic int clusterManagerCompareKeysValues(clusterManagerNode *n1,\n                                          clusterManagerNode *n2,\n                                          redisReply *keys_reply,\n                                          list *diffs)\n{\n    size_t i, argc = keys_reply->elements + 2;\n    static const char *hash_zero = \"0000000000000000000000000000000000000000\";\n    char **argv = zcalloc(argc * sizeof(char *));\n    size_t  *argv_len = zcalloc(argc * sizeof(size_t));\n    argv[0] = \"DEBUG\";\n    argv_len[0] = 5;\n    argv[1] = \"DIGEST-VALUE\";\n    argv_len[1] = 12;\n    for (i = 0; i < keys_reply->elements; i++) {\n        redisReply *entry = keys_reply->element[i];\n        int idx = i + 2;\n        argv[idx] = entry->str;\n        argv_len[idx] = entry->len;\n    }\n    int success = 0;\n    void *_reply1 = NULL, *_reply2 = NULL;\n    redisReply *r1 = NULL, *r2 = NULL;\n    redisAppendCommandArgv(n1->context,argc, (const char**)argv,argv_len);\n    
success = (redisGetReply(n1->context, &_reply1) == REDIS_OK);\n    if (!success) goto cleanup;\n    r1 = (redisReply *) _reply1;\n    redisAppendCommandArgv(n2->context,argc, (const char**)argv,argv_len);\n    success = (redisGetReply(n2->context, &_reply2) == REDIS_OK);\n    if (!success) goto cleanup;\n    r2 = (redisReply *) _reply2;\n    success = (r1->type != REDIS_REPLY_ERROR && r2->type != REDIS_REPLY_ERROR);\n    if (r1->type == REDIS_REPLY_ERROR) {\n        CLUSTER_MANAGER_PRINT_REPLY_ERROR(n1, r1->str);\n        success = 0;\n    }\n    if (r2->type == REDIS_REPLY_ERROR) {\n        CLUSTER_MANAGER_PRINT_REPLY_ERROR(n2, r2->str);\n        success = 0;\n    }\n    if (!success) goto cleanup;\n    assert(keys_reply->elements == r1->elements &&\n           keys_reply->elements == r2->elements);\n    for (i = 0; i < keys_reply->elements; i++) {\n        char *key = keys_reply->element[i]->str;\n        char *hash1 = r1->element[i]->str;\n        char *hash2 = r2->element[i]->str;\n        /* Ignore keys that don't exist in both nodes. */\n        if (strcmp(hash1, hash_zero) == 0 || strcmp(hash2, hash_zero) == 0)\n            continue;\n        if (strcmp(hash1, hash2) != 0) listAddNodeTail(diffs, key);\n    }\ncleanup:\n    if (r1) freeReplyObject(r1);\n    if (r2) freeReplyObject(r2);\n    zfree(argv);\n    zfree(argv_len);\n    return success;\n}\n\n/* Migrate keys taken from reply->elements. It returns the reply from the\n * MIGRATE command, or NULL if something goes wrong. If the argument 'dots'\n * is not NULL, a dot will be printed for every migrated key. 
*/\nstatic redisReply *clusterManagerMigrateKeysInReply(clusterManagerNode *source,\n                                                    clusterManagerNode *target,\n                                                    redisReply *reply,\n                                                    int replace, int timeout,\n                                                    char *dots)\n{\n    redisReply *migrate_reply = NULL;\n    char **argv = NULL;\n    size_t *argv_len = NULL;\n    int c = (replace ? 8 : 7);\n    if (config.conn_info.auth) c += 2;\n    if (config.conn_info.user) c += 1;\n    size_t argc = c + reply->elements;\n    size_t i, offset = 6; // Keys Offset\n    argv = zcalloc(argc * sizeof(char *));\n    argv_len = zcalloc(argc * sizeof(size_t));\n    char portstr[255];\n    char timeoutstr[255];\n    snprintf(portstr, 10, \"%d\", target->port);\n    snprintf(timeoutstr, 10, \"%d\", timeout);\n    argv[0] = \"MIGRATE\";\n    argv_len[0] = 7;\n    argv[1] = target->ip;\n    argv_len[1] = strlen(target->ip);\n    argv[2] = portstr;\n    argv_len[2] = strlen(portstr);\n    argv[3] = \"\";\n    argv_len[3] = 0;\n    argv[4] = \"0\";\n    argv_len[4] = 1;\n    argv[5] = timeoutstr;\n    argv_len[5] = strlen(timeoutstr);\n    if (replace) {\n        argv[offset] = \"REPLACE\";\n        argv_len[offset] = 7;\n        offset++;\n    }\n    if (config.conn_info.auth) {\n        if (config.conn_info.user) {\n            argv[offset] = \"AUTH2\";\n            argv_len[offset] = 5;\n            offset++;\n            argv[offset] = config.conn_info.user;\n            argv_len[offset] = strlen(config.conn_info.user);\n            offset++;\n            argv[offset] = config.conn_info.auth;\n            argv_len[offset] = strlen(config.conn_info.auth);\n            offset++;\n        } else {\n            argv[offset] = \"AUTH\";\n            argv_len[offset] = 4;\n            offset++;\n            argv[offset] = config.conn_info.auth;\n            argv_len[offset] = 
strlen(config.conn_info.auth);\n            offset++;\n        }\n    }\n    argv[offset] = \"KEYS\";\n    argv_len[offset] = 4;\n    offset++;\n    for (i = 0; i < reply->elements; i++) {\n        redisReply *entry = reply->element[i];\n        size_t idx = i + offset;\n        assert(entry->type == REDIS_REPLY_STRING);\n        argv[idx] = (char *) sdsnewlen(entry->str, entry->len);\n        argv_len[idx] = entry->len;\n        if (dots) dots[i] = '.';\n    }\n    if (dots) dots[reply->elements] = '\\0';\n    void *_reply = NULL;\n    redisAppendCommandArgv(source->context,argc,\n                           (const char**)argv,argv_len);\n    int success = (redisGetReply(source->context, &_reply) == REDIS_OK);\n    for (i = 0; i < reply->elements; i++) sdsfree(argv[i + offset]);\n    if (!success) goto cleanup;\n    migrate_reply = (redisReply *) _reply;\ncleanup:\n    zfree(argv);\n    zfree(argv_len);\n    return migrate_reply;\n}\n\n/* Migrate all keys in the given slot from source to target.*/\nstatic int clusterManagerMigrateKeysInSlot(clusterManagerNode *source,\n                                           clusterManagerNode *target,\n                                           int slot, int timeout,\n                                           int pipeline, int verbose,\n                                           char **err)\n{\n    int success = 1;\n    int do_fix = config.cluster_manager_command.flags &\n                 CLUSTER_MANAGER_CMD_FLAG_FIX;\n    int do_replace = config.cluster_manager_command.flags &\n                     CLUSTER_MANAGER_CMD_FLAG_REPLACE;\n    while (1) {\n        char *dots = NULL;\n        redisReply *reply = NULL, *migrate_reply = NULL;\n        reply = CLUSTER_MANAGER_COMMAND(source, \"CLUSTER \"\n                                        \"GETKEYSINSLOT %d %d\", slot,\n                                        pipeline);\n        success = (reply != NULL);\n        if (!success) return 0;\n        if (reply->type == 
REDIS_REPLY_ERROR) {\n            success = 0;\n            if (err != NULL) {\n                *err = zmalloc((reply->len + 1) * sizeof(char));\n                redis_strlcpy(*err, reply->str,(reply->len + 1));\n                CLUSTER_MANAGER_PRINT_REPLY_ERROR(source, *err);\n            }\n            goto next;\n        }\n        assert(reply->type == REDIS_REPLY_ARRAY);\n        size_t count = reply->elements;\n        if (count == 0) {\n            freeReplyObject(reply);\n            break;\n        }\n        if (verbose) dots = zmalloc((count+1) * sizeof(char));\n        /* Calling MIGRATE command. */\n        migrate_reply = clusterManagerMigrateKeysInReply(source, target,\n                                                         reply, 0, timeout,\n                                                         dots);\n        if (migrate_reply == NULL) goto next;\n        if (migrate_reply->type == REDIS_REPLY_ERROR) {\n            int is_busy = strstr(migrate_reply->str, \"BUSYKEY\") != NULL;\n            int not_served = 0;\n            if (!is_busy) {\n                /* Check if the slot is unassigned (not served) in the\n                 * source node's configuration. */\n                char *get_owner_err = NULL;\n                clusterManagerNode *served_by =\n                    clusterManagerGetSlotOwner(source, slot, &get_owner_err);\n                if (!served_by) {\n                    if (get_owner_err == NULL) not_served = 1;\n                    else {\n                        CLUSTER_MANAGER_PRINT_REPLY_ERROR(source,\n                                                          get_owner_err);\n                        zfree(get_owner_err);\n                    }\n                }\n            }\n            /* Try to handle errors. */\n            if (is_busy || not_served) {\n                /* If the key's slot is not served, try to assign slot\n                 * to the target node. 
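Only two MIGRATE failures are treated as recoverable here: a BUSYKEY error (retried with REPLACE after optional value checks) and an unserved slot (assigned to the target when fixing). A hypothetical standalone classifier sketching that priority (names invented for illustration, not part of redis-cli):

```c
#include <assert.h>
#include <string.h>

// Hypothetical classifier for the two recoverable MIGRATE failures handled
// below: BUSYKEY can be retried with REPLACE, an unserved slot can first be
// assigned to the target node; anything else is a hard failure.
enum migrate_action { ACTION_FAIL, ACTION_ASSIGN_SLOT, ACTION_RETRY_REPLACE };

static enum migrate_action classify_migrate_error(const char *err,
                                                  int slot_served) {
    if (strstr(err, "BUSYKEY") != NULL) return ACTION_RETRY_REPLACE;
    if (!slot_served) return ACTION_ASSIGN_SLOT;
    return ACTION_FAIL;
}
```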
*/\n                if (do_fix && not_served) {\n                    clusterManagerLogWarn(\"*** Slot was not served, setting \"\n                                          \"owner to node %s:%d.\\n\",\n                                          target->ip, target->port);\n                    clusterManagerSetSlot(source, target, slot, \"node\", NULL);\n                }\n                /* If the key already exists in the target node (BUSYKEY),\n                 * check whether its value is the same in both nodes.\n                 * In case of equal values, retry migration with the\n                 * REPLACE option.\n                 * In case of different values:\n                 *  - If the migration is requested by the fix command, stop\n                 *    and warn the user.\n                 *  - In other cases (i.e. reshard), proceed only if the user\n                 *    launched the command with the --cluster-replace option. */\n                if (is_busy) {\n                    clusterManagerLogWarn(\"\\n*** Target key exists\\n\");\n                    if (!do_replace) {\n                        clusterManagerLogWarn(\"*** Checking key values on \"\n                                              \"both nodes...\\n\");\n                        list *diffs = listCreate();\n                        success = clusterManagerCompareKeysValues(source,\n                            target, reply, diffs);\n                        if (!success) {\n                            clusterManagerLogErr(\"*** Value check failed!\\n\");\n                            listRelease(diffs);\n                            goto next;\n                        }\n                        if (listLength(diffs) > 0) {\n                            success = 0;\n                            clusterManagerLogErr(\n                                \"*** Found %d key(s) in both source node and \"\n                                \"target node having different values.\\n\"\n                      
          \"    Source node: %s:%d\\n\"\n                                \"    Target node: %s:%d\\n\"\n                                \"    Keys(s):\\n\",\n                                listLength(diffs),\n                                source->ip, source->port,\n                                target->ip, target->port);\n                            listIter dli;\n                            listNode *dln;\n                            listRewind(diffs, &dli);\n                            while((dln = listNext(&dli)) != NULL) {\n                                char *k = dln->value;\n                                clusterManagerLogErr(\"    - %s\\n\", k);\n                            }\n                            clusterManagerLogErr(\"Please fix the above key(s) \"\n                                                 \"manually and try again \"\n                                                 \"or relaunch the command \\n\"\n                                                 \"with --cluster-replace \"\n                                                 \"option to force key \"\n                                                 \"overriding.\\n\");\n                            listRelease(diffs);\n                            goto next;\n                        }\n                        listRelease(diffs);\n                    }\n                    clusterManagerLogWarn(\"*** Replacing target keys...\\n\");\n                }\n                freeReplyObject(migrate_reply);\n                migrate_reply = clusterManagerMigrateKeysInReply(source,\n                                                                 target,\n                                                                 reply,\n                                                                 is_busy,\n                                                                 timeout,\n                                                                 NULL);\n                success = (migrate_reply != NULL &&\n   
                        migrate_reply->type != REDIS_REPLY_ERROR);\n            } else success = 0;\n            if (!success) {\n                if (migrate_reply != NULL) {\n                    if (err) {\n                        *err = zmalloc((migrate_reply->len + 1) * sizeof(char));\n                        redis_strlcpy(*err, migrate_reply->str, (migrate_reply->len + 1));\n                    }\n                    printf(\"\\n\");\n                    CLUSTER_MANAGER_PRINT_REPLY_ERROR(source,\n                                                      migrate_reply->str);\n                }\n                goto next;\n            }\n        }\n        if (verbose) {\n            printf(\"%s\", dots);\n            fflush(stdout);\n        }\nnext:\n        if (reply != NULL) freeReplyObject(reply);\n        if (migrate_reply != NULL) freeReplyObject(migrate_reply);\n        if (dots) zfree(dots);\n        if (!success) break;\n    }\n    return success;\n}\n\n/* Move slots between source and target nodes using MIGRATE.\n *\n * Options:\n * CLUSTER_MANAGER_OPT_VERBOSE -- Print a dot for every moved key.\n * CLUSTER_MANAGER_OPT_COLD    -- Move keys without opening slots /\n *                                reconfiguring the nodes.\n * CLUSTER_MANAGER_OPT_UPDATE  -- Update node->slots for source/target nodes.\n * CLUSTER_MANAGER_OPT_QUIET   -- Don't print info messages.\n*/\nstatic int clusterManagerMoveSlot(clusterManagerNode *source,\n                                  clusterManagerNode *target,\n                                  int slot, int opts,  char**err)\n{\n    if (!(opts & CLUSTER_MANAGER_OPT_QUIET)) {\n        printf(\"Moving slot %d from %s:%d to %s:%d: \", slot, source->ip,\n               source->port, target->ip, target->port);\n        fflush(stdout);\n    }\n    if (err != NULL) *err = NULL;\n    int pipeline = config.cluster_manager_command.pipeline,\n        timeout = config.cluster_manager_command.timeout,\n        print_dots = (opts & 
CLUSTER_MANAGER_OPT_VERBOSE),\n        option_cold = (opts & CLUSTER_MANAGER_OPT_COLD),\n        success = 1;\n    if (!option_cold) {\n        success = clusterManagerSetSlot(target, source, slot,\n                                        \"importing\", err);\n        if (!success) return 0;\n        success = clusterManagerSetSlot(source, target, slot,\n                                        \"migrating\", err);\n        if (!success) return 0;\n    }\n    success = clusterManagerMigrateKeysInSlot(source, target, slot, timeout,\n                                              pipeline, print_dots, err);\n    if (!(opts & CLUSTER_MANAGER_OPT_QUIET)) printf(\"\\n\");\n    if (!success) return 0;\n    if (!option_cold) {\n        /* Set the new node as the owner of the slot in all the known nodes.\n         *\n         * We inform the target node first. It will propagate the information to\n         * the rest of the cluster.\n         *\n         * If we inform any other node first, it can happen that the target node\n         * crashes before it is set as the new owner and then the slot is left\n         * without an owner which results in redirect loops. See issue #7116. */\n        success = clusterManagerSetSlot(target, target, slot, \"node\", err);\n        if (!success) return 0;\n\n        /* Inform the source node. If the source node has just lost its last\n         * slot and the target node has already informed the source node, the\n         * source node has turned itself into a replica. This is not an error in\n         * this scenario so we ignore it. See issue #9223. 
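The code below tolerates exactly one SETSLOT error from the source node. A hypothetical standalone predicate (name invented, not part of redis-cli) capturing that check:

```c
#include <assert.h>
#include <string.h>

// Hypothetical predicate for the tolerated error described here: the source
// may have already demoted itself to a replica, in which case SETSLOT fails
// with a masters-only error that is ignored rather than treated as fatal.
static int setslot_error_is_benign(const char *err) {
    const char *acceptable = "ERR Please use SETSLOT only with masters.";
    return strncmp(err, acceptable, strlen(acceptable)) == 0;
}
```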
*/\n        success = clusterManagerSetSlot(source, target, slot, \"node\", err);\n        const char *acceptable = \"ERR Please use SETSLOT only with masters.\";\n        if (!success && err && !strncmp(*err, acceptable, strlen(acceptable))) {\n            zfree(*err);\n            *err = NULL;\n        } else if (!success && err) {\n            return 0;\n        }\n\n        /* We also inform the other nodes to avoid redirects in case the target\n         * node is slow to propagate the change to the entire cluster. */\n        listIter li;\n        listNode *ln;\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n == target || n == source) continue; /* already done */\n            if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n            success = clusterManagerSetSlot(n, target, slot, \"node\", err);\n            if (!success) return 0;\n        }\n    }\n    /* Update the node logical config */\n    if (opts & CLUSTER_MANAGER_OPT_UPDATE) {\n        source->slots[slot] = 0;\n        target->slots[slot] = 1;\n    }\n    return 1;\n}\n\n/* Flush the dirty node configuration by calling replicate for slaves or\n * adding the slots defined in the masters. 
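A tiny hypothetical sketch of this dispatch (helper name and return strings invented for illustration; the real code issues CLUSTER REPLICATE for replicas and adds slots for masters):

```c
#include <assert.h>
#include <string.h>

// Hypothetical sketch: a dirty node with a replicate target is flushed via
// CLUSTER REPLICATE; a dirty master is flushed by re-adding its slots.
static const char *flush_strategy(const char *replicate_id) {
    return replicate_id != NULL ? "CLUSTER REPLICATE" : "CLUSTER ADDSLOTS";
}
```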
*/\nstatic int clusterManagerFlushNodeConfig(clusterManagerNode *node, char **err) {\n    if (!node->dirty) return 0;\n    redisReply *reply = NULL;\n    int is_err = 0, success = 1;\n    if (err != NULL) *err = NULL;\n    if (node->replicate != NULL) {\n        reply = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER REPLICATE %s\",\n                                        node->replicate);\n        if (reply == NULL || (is_err = (reply->type == REDIS_REPLY_ERROR))) {\n            if (is_err && err != NULL) {\n                *err = zmalloc((reply->len + 1) * sizeof(char));\n                redis_strlcpy(*err, reply->str, (reply->len + 1));\n            }\n            success = 0;\n            /* If the cluster has not already joined, it is possible that\n             * the slave does not know the master node yet. So on errors\n             * we return ASAP, leaving the dirty flag set, to flush the\n             * config later. */\n            goto cleanup;\n        }\n    } else {\n        int added = clusterManagerAddSlots(node, err);\n        if (!added || *err != NULL) success = 0;\n    }\n    node->dirty = 0;\ncleanup:\n    if (reply != NULL) freeReplyObject(reply);\n    return success;\n}\n\n/* Wait until the cluster configuration is consistent. 
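The loop below polls once per second and prints link diagnostics only after a threshold that grows with cluster size. A hypothetical sketch of that threshold arithmetic (the base constant CLUSTER_JOIN_CHECK_AFTER is passed in as a plain parameter here):

```c
#include <assert.h>

// Hypothetical sketch of the check_after value computed below: a fixed base
// number of 1-second polls, extended by 15% of the number of known nodes,
// before unreachable-link diagnostics are printed and the counter resets.
static int join_check_after(int base, unsigned long node_count) {
    return base + (int)(node_count * 0.15f);
}
```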
*/\nstatic void clusterManagerWaitForClusterJoin(void) {\n    printf(\"Waiting for the cluster to join\\n\");\n    int counter = 0,\n        check_after = CLUSTER_JOIN_CHECK_AFTER +\n                      (int)(listLength(cluster_manager.nodes) * 0.15f);\n    while(!clusterManagerIsConfigConsistent()) {\n        printf(\".\");\n        fflush(stdout);\n        sleep(1);\n        if (++counter > check_after) {\n            dict *status = clusterManagerGetLinkStatus();\n            if (status != NULL && dictSize(status) > 0) {\n                printf(\"\\n\");\n                clusterManagerLogErr(\"Warning: %d node(s) may \"\n                                     \"be unreachable\\n\", dictSize(status));\n                dictIterator iter;\n                dictEntry *entry;\n                dictInitIterator(&iter, status);\n                while ((entry = dictNext(&iter)) != NULL) {\n                    sds nodeaddr = (sds) dictGetKey(entry);\n                    char *node_ip = NULL;\n                    int node_port = 0, node_bus_port = 0;\n                    list *from = (list *) dictGetVal(entry);\n                    if (parseClusterNodeAddress(nodeaddr, &node_ip,\n                        &node_port, &node_bus_port) && node_bus_port) {\n                        clusterManagerLogErr(\" - The port %d of node %s may \"\n                                             \"be unreachable from:\\n\",\n                                             node_bus_port, node_ip);\n                    } else {\n                        clusterManagerLogErr(\" - Node %s may be unreachable \"\n                                             \"from:\\n\", nodeaddr);\n                    }\n                    listIter li;\n                    listNode *ln;\n                    listRewind(from, &li);\n                    while ((ln = listNext(&li)) != NULL) {\n                        sds from_addr = ln->value;\n                        clusterManagerLogErr(\"   %s\\n\", from_addr);\n         
               sdsfree(from_addr);\n                    }\n                    clusterManagerLogErr(\"Cluster bus ports must be reachable \"\n                                         \"by every node.\\nRemember that \"\n                                         \"cluster bus ports are different \"\n                                         \"from standard instance ports.\\n\");\n                    listEmpty(from);\n                }\n                dictResetIterator(&iter);\n            }\n            if (status != NULL) dictRelease(status);\n            counter = 0;\n        }\n    }\n    printf(\"\\n\");\n}\n\n/* Load the node's cluster configuration by calling the \"CLUSTER NODES\"\n * command. The node's configuration (name, replicate, slots, ...) is then\n * updated. If the CLUSTER_MANAGER_OPT_GETFRIENDS flag is set in the 'opts'\n * argument and the node already knows other nodes, the node's friends list\n * is populated with the other nodes' info. */\nstatic int clusterManagerNodeLoadInfo(clusterManagerNode *node, int opts,\n                                      char **err)\n{\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER NODES\");\n    int success = 1;\n    *err = NULL;\n    if (!clusterManagerCheckRedisReply(node, reply, err)) {\n        success = 0;\n        goto cleanup;\n    }\n    int getfriends = (opts & CLUSTER_MANAGER_OPT_GETFRIENDS);\n    char *lines = reply->str, *p, *line;\n    while ((p = strstr(lines, \"\\n\")) != NULL) {\n        *p = '\\0';\n        line = lines;\n        lines = p + 1;\n        char *name = NULL, *addr = NULL, *flags = NULL, *master_id = NULL,\n             *ping_sent = NULL, *ping_recv = NULL, *config_epoch = NULL,\n             *link_status = NULL;\n        UNUSED(link_status);\n        int i = 0;\n        while ((p = strchr(line, ' ')) != NULL) {\n            *p = '\\0';\n            char *token = line;\n            line = p + 1;\n            switch(i++){\n            case 0: name = token; break;\n            case 1: addr = 
token; break;\n            case 2: flags = token; break;\n            case 3: master_id = token; break;\n            case 4: ping_sent = token; break;\n            case 5: ping_recv = token; break;\n            case 6: config_epoch = token; break;\n            case 7: link_status = token; break;\n            }\n            if (i == 8) break; // Slots\n        }\n        if (!flags) {\n            success = 0;\n            goto cleanup;\n        }\n\n        char *ip = NULL;\n        int port = 0, bus_port = 0;\n        if (addr == NULL || !parseClusterNodeAddress(addr, &ip, &port, &bus_port)) {\n            fprintf(stderr, \"Error: invalid CLUSTER NODES reply\\n\");\n            success = 0;\n            goto cleanup;\n        }\n\n        int myself = (strstr(flags, \"myself\") != NULL);\n        clusterManagerNode *currentNode = NULL;\n        if (myself) {\n            /* bus-port could be wrong, correct it here, see clusterManagerNewNode. */\n            node->bus_port = bus_port;\n            node->flags |= CLUSTER_MANAGER_FLAG_MYSELF;\n            currentNode = node;\n            clusterManagerNodeResetSlots(node);\n            if (i == 8) {\n                int remaining = strlen(line);\n                while (remaining > 0) {\n                    p = strchr(line, ' ');\n                    if (p == NULL) p = line + remaining;\n                    remaining -= (p - line);\n\n                    char *slotsdef = line;\n                    *p = '\\0';\n                    if (remaining) {\n                        line = p + 1;\n                        remaining--;\n                    } else line = p;\n                    char *dash = NULL;\n                    if (slotsdef[0] == '[') {\n                        slotsdef++;\n                        if ((p = strstr(slotsdef, \"->-\"))) { // Migrating\n                            *p = '\\0';\n                            p += 3;\n                            char *closing_bracket = strchr(p, ']');\n                 
           if (closing_bracket) *closing_bracket = '\\0';\n                            sds slot = sdsnew(slotsdef);\n                            sds dst = sdsnew(p);\n                            node->migrating_count += 2;\n                            node->migrating = zrealloc(node->migrating,\n                                (node->migrating_count * sizeof(sds)));\n                            node->migrating[node->migrating_count - 2] =\n                                slot;\n                            node->migrating[node->migrating_count - 1] =\n                                dst;\n                        }  else if ((p = strstr(slotsdef, \"-<-\"))) {//Importing\n                            *p = '\\0';\n                            p += 3;\n                            char *closing_bracket = strchr(p, ']');\n                            if (closing_bracket) *closing_bracket = '\\0';\n                            sds slot = sdsnew(slotsdef);\n                            sds src = sdsnew(p);\n                            node->importing_count += 2;\n                            node->importing = zrealloc(node->importing,\n                                (node->importing_count * sizeof(sds)));\n                            node->importing[node->importing_count - 2] =\n                                slot;\n                            node->importing[node->importing_count - 1] =\n                                src;\n                        }\n                    } else if ((dash = strchr(slotsdef, '-')) != NULL) {\n                        p = dash;\n                        int start, stop;\n                        *p = '\\0';\n                        start = atoi(slotsdef);\n                        stop = atoi(p + 1);\n                        node->slots_count += (stop - (start - 1));\n                        while (start <= stop) node->slots[start++] = 1;\n                    } else if (p > slotsdef) {\n                        node->slots[atoi(slotsdef)] = 1;\n       
                 node->slots_count++;\n                    }\n                }\n            }\n            node->dirty = 0;\n        } else if (!getfriends) {\n            if (!(node->flags & CLUSTER_MANAGER_FLAG_MYSELF)) continue;\n            else break;\n        } else {\n            currentNode = clusterManagerNewNode(sdsnew(ip), port, bus_port);\n            currentNode->flags |= CLUSTER_MANAGER_FLAG_FRIEND;\n            if (node->friends == NULL) node->friends = listCreate();\n            listAddNodeTail(node->friends, currentNode);\n        }\n        if (name != NULL) {\n            if (currentNode->name) sdsfree(currentNode->name);\n            currentNode->name = sdsnew(name);\n        }\n        if (currentNode->flags_str != NULL)\n            freeClusterManagerNodeFlags(currentNode->flags_str);\n        currentNode->flags_str = listCreate();\n        int flag_len;\n        while ((flag_len = strlen(flags)) > 0) {\n            sds flag = NULL;\n            char *fp = strchr(flags, ',');\n            if (fp) {\n                *fp = '\\0';\n                flag = sdsnew(flags);\n                flags = fp + 1;\n            } else {\n                flag = sdsnew(flags);\n                flags += flag_len;\n            }\n            if (strcmp(flag, \"noaddr\") == 0)\n                currentNode->flags |= CLUSTER_MANAGER_FLAG_NOADDR;\n            else if (strcmp(flag, \"disconnected\") == 0)\n                currentNode->flags |= CLUSTER_MANAGER_FLAG_DISCONNECT;\n            else if (strcmp(flag, \"fail\") == 0)\n                currentNode->flags |= CLUSTER_MANAGER_FLAG_FAIL;\n            else if (strcmp(flag, \"slave\") == 0) {\n                currentNode->flags |= CLUSTER_MANAGER_FLAG_SLAVE;\n                if (master_id != NULL) {\n                    if (currentNode->replicate) sdsfree(currentNode->replicate);\n                    currentNode->replicate = sdsnew(master_id);\n                }\n            }\n            
listAddNodeTail(currentNode->flags_str, flag);\n        }\n        if (config_epoch != NULL)\n            currentNode->current_epoch = atoll(config_epoch);\n        if (ping_sent != NULL) currentNode->ping_sent = atoll(ping_sent);\n        if (ping_recv != NULL) currentNode->ping_recv = atoll(ping_recv);\n        if (!getfriends && myself) break;\n    }\ncleanup:\n    if (reply) freeReplyObject(reply);\n    return success;\n}\n\n/* Retrieves info about the cluster using argument 'node' as the starting\n * point. All nodes will be loaded inside the cluster_manager.nodes list.\n * Warning: if something goes wrong, it will free the starting node before\n * returning 0. */\nstatic int clusterManagerLoadInfoFromNode(clusterManagerNode *node) {\n    if (node->context == NULL && !clusterManagerNodeConnect(node)) {\n        freeClusterManagerNode(node);\n        return 0;\n    }\n    char *e = NULL;\n    if (!clusterManagerNodeIsCluster(node, &e)) {\n        clusterManagerPrintNotClusterNodeError(node, e);\n        if (e) zfree(e);\n        freeClusterManagerNode(node);\n        return 0;\n    }\n    e = NULL;\n    if (!clusterManagerNodeLoadInfo(node, CLUSTER_MANAGER_OPT_GETFRIENDS, &e)) {\n        if (e) {\n            CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, e);\n            zfree(e);\n        }\n        freeClusterManagerNode(node);\n        return 0;\n    }\n    listIter li;\n    listNode *ln;\n    if (cluster_manager.nodes != NULL) {\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL)\n            freeClusterManagerNode((clusterManagerNode *) ln->value);\n        listRelease(cluster_manager.nodes);\n    }\n    cluster_manager.nodes = listCreate();\n    listAddNodeTail(cluster_manager.nodes, node);\n    if (node->friends != NULL) {\n        listRewind(node->friends, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *friend = ln->value;\n            if (!friend->ip || !friend->port) goto 
invalid_friend;\n            if (!friend->context && !clusterManagerNodeConnect(friend))\n                goto invalid_friend;\n            e = NULL;\n            if (clusterManagerNodeLoadInfo(friend, 0, &e)) {\n                if (friend->flags & (CLUSTER_MANAGER_FLAG_NOADDR |\n                                     CLUSTER_MANAGER_FLAG_DISCONNECT |\n                                     CLUSTER_MANAGER_FLAG_FAIL))\n                {\n                    goto invalid_friend;\n                }\n                listAddNodeTail(cluster_manager.nodes, friend);\n            } else {\n                clusterManagerLogErr(\"[ERR] Unable to load info for \"\n                                     \"node %s:%d\\n\",\n                                     friend->ip, friend->port);\n                goto invalid_friend;\n            }\n            continue;\ninvalid_friend:\n            if (!(friend->flags & CLUSTER_MANAGER_FLAG_SLAVE))\n                cluster_manager.unreachable_masters++;\n            freeClusterManagerNode(friend);\n        }\n        listRelease(node->friends);\n        node->friends = NULL;\n    }\n    // Count replicas for each node\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->replicate != NULL) {\n            clusterManagerNode *master = clusterManagerNodeByName(n->replicate);\n            if (master == NULL) {\n                clusterManagerLogWarn(\"*** WARNING: %s:%d claims to be \"\n                                      \"slave of unknown node ID %s.\\n\",\n                                      n->ip, n->port, n->replicate);\n            } else master->replicas_count++;\n        }\n    }\n    return 1;\n}\n\n/* Compare functions used by various sorting operations. 
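clusterManagerSlotCountCompareDesc below orders node pointers by descending slots_count. A self-contained usage sketch of that qsort comparator pattern, with a hypothetical minimal struct standing in for clusterManagerNode:

```c
#include <assert.h>
#include <stdlib.h>

// Minimal sketch (hypothetical struct, not part of redis-cli) of the qsort
// comparator pattern used below: elements are pointers to nodes, ordered by
// descending slots_count via the n2 - n1 subtraction.
struct mini_node { int slots_count; };

static int mini_slot_count_desc(const void *a, const void *b) {
    const struct mini_node *n1 = *(const struct mini_node *const *)a;
    const struct mini_node *n2 = *(const struct mini_node *const *)b;
    return n2->slots_count - n1->slots_count;
}
```

The subtraction is safe here because slot counts are small non-negative ints, so no overflow can occur.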
*/\nint clusterManagerSlotCompare(const void *slot1, const void *slot2) {\n    const char **i1 = (const char **)slot1;\n    const char **i2 = (const char **)slot2;\n    return strcmp(*i1, *i2);\n}\n\nint clusterManagerSlotCountCompareDesc(const void *n1, const void *n2) {\n    clusterManagerNode *node1 = *((clusterManagerNode **) n1);\n    clusterManagerNode *node2 = *((clusterManagerNode **) n2);\n    return node2->slots_count - node1->slots_count;\n}\n\nint clusterManagerCompareNodeBalance(const void *n1, const void *n2) {\n    clusterManagerNode *node1 = *((clusterManagerNode **) n1);\n    clusterManagerNode *node2 = *((clusterManagerNode **) n2);\n    return node1->balance - node2->balance;\n}\n\nstatic sds clusterManagerGetConfigSignature(clusterManagerNode *node) {\n    sds signature = NULL;\n    int node_count = 0, i = 0, name_len = 0;\n    char **node_configs = NULL;\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER NODES\");\n    if (reply == NULL || reply->type == REDIS_REPLY_ERROR)\n        goto cleanup;\n    char *lines = reply->str, *p, *line;\n    while ((p = strstr(lines, \"\\n\")) != NULL) {\n        i = 0;\n        *p = '\\0';\n        line = lines;\n        lines = p + 1;\n        char *nodename = NULL;\n        int tot_size = 0;\n        while ((p = strchr(line, ' ')) != NULL) {\n            *p = '\\0';\n            char *token = line;\n            line = p + 1;\n            if (i == 0) {\n                nodename = token;\n                tot_size = (p - token);\n                name_len = tot_size++; // Make room for ':' in tot_size\n            }\n            if (++i == 8) break;\n        }\n        if (i != 8) continue;\n        if (nodename == NULL) continue;\n        int remaining = strlen(line);\n        if (remaining == 0) continue;\n        char **slots = NULL;\n        int c = 0;\n        while (remaining > 0) {\n            p = strchr(line, ' ');\n            if (p == NULL) p = line + remaining;\n            int size = 
(p - line);\n            remaining -= size;\n            tot_size += size;\n            char *slotsdef = line;\n            *p = '\\0';\n            if (remaining) {\n                line = p + 1;\n                remaining--;\n            } else line = p;\n            if (slotsdef[0] != '[') {\n                c++;\n                slots = zrealloc(slots, (c * sizeof(char *)));\n                slots[c - 1] = slotsdef;\n            }\n        }\n        if (c > 0) {\n            if (c > 1)\n                qsort(slots, c, sizeof(char *), clusterManagerSlotCompare);\n            node_count++;\n            node_configs =\n                zrealloc(node_configs, (node_count * sizeof(char *)));\n            /* Make room for '|' separators. */\n            tot_size += (sizeof(char) * (c - 1));\n            char *cfg = zmalloc((sizeof(char) * tot_size) + 1);\n            memcpy(cfg, nodename, name_len);\n            char *sp = cfg + name_len;\n            *(sp++) = ':';\n            for (i = 0; i < c; i++) {\n                if (i > 0) *(sp++) = ',';\n                int slen = strlen(slots[i]);\n                memcpy(sp, slots[i], slen);\n                sp += slen;\n            }\n            *(sp++) = '\\0';\n            node_configs[node_count - 1] = cfg;\n        }\n        zfree(slots);\n    }\n    if (node_count > 0) {\n        if (node_count > 1) {\n            qsort(node_configs, node_count, sizeof(char *),\n                  clusterManagerSlotCompare);\n        }\n        signature = sdsempty();\n        for (i = 0; i < node_count; i++) {\n            if (i > 0) signature = sdscatprintf(signature, \"%c\", '|');\n            signature = sdscatfmt(signature, \"%s\", node_configs[i]);\n        }\n    }\ncleanup:\n    if (reply != NULL) freeReplyObject(reply);\n    if (node_configs != NULL) {\n        for (i = 0; i < node_count; i++) zfree(node_configs[i]);\n        zfree(node_configs);\n    }\n    return signature;\n}\n\nstatic int 
clusterManagerIsConfigConsistent(void) {\n    if (cluster_manager.nodes == NULL) return 0;\n    int consistent = (listLength(cluster_manager.nodes) <= 1);\n    // If the Cluster has only one node, it's always consistent\n    if (consistent) return 1;\n    sds first_cfg = NULL;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        sds cfg = clusterManagerGetConfigSignature(node);\n        if (cfg == NULL) {\n            consistent = 0;\n            break;\n        }\n        if (first_cfg == NULL) first_cfg = cfg;\n        else {\n            consistent = !sdscmp(first_cfg, cfg);\n            sdsfree(cfg);\n            if (!consistent) break;\n        }\n    }\n    if (first_cfg != NULL) sdsfree(first_cfg);\n    return consistent;\n}\n\nstatic list *clusterManagerGetDisconnectedLinks(clusterManagerNode *node) {\n    list *links = NULL;\n    redisReply *reply = CLUSTER_MANAGER_COMMAND(node, \"CLUSTER NODES\");\n    if (!clusterManagerCheckRedisReply(node, reply, NULL)) goto cleanup;\n    links = listCreate();\n    char *lines = reply->str, *p, *line;\n    while ((p = strstr(lines, \"\\n\")) != NULL) {\n        int i = 0;\n        *p = '\\0';\n        line = lines;\n        lines = p + 1;\n        char *nodename = NULL, *addr = NULL, *flags = NULL, *link_status = NULL;\n        while ((p = strchr(line, ' ')) != NULL) {\n            *p = '\\0';\n            char *token = line;\n            line = p + 1;\n            if (i == 0) nodename = token;\n            else if (i == 1) addr = token;\n            else if (i == 2) flags = token;\n            else if (i == 7) link_status = token;\n            else if (i == 8) break;\n            i++;\n        }\n        if (i == 7) link_status = line;\n        if (nodename == NULL || addr == NULL || flags == NULL ||\n            link_status == NULL) continue;\n        if (strstr(flags, \"myself\") != NULL) 
continue;\n        int disconnected = ((strstr(flags, \"disconnected\") != NULL) ||\n                            (strstr(link_status, \"disconnected\")));\n        int handshaking = (strstr(flags, \"handshake\") != NULL);\n        if (disconnected || handshaking) {\n            clusterManagerLink *link = zmalloc(sizeof(*link));\n            link->node_name = sdsnew(nodename);\n            link->node_addr = sdsnew(addr);\n            link->connected = 0;\n            link->handshaking = handshaking;\n            listAddNodeTail(links, link);\n        }\n    }\ncleanup:\n    if (reply != NULL) freeReplyObject(reply);\n    return links;\n}\n\n/* Check for disconnected cluster links. It returns a dict whose keys\n * are the unreachable node addresses and the values are lists of\n * node addresses that cannot reach the unreachable node. */\nstatic dict *clusterManagerGetLinkStatus(void) {\n    if (cluster_manager.nodes == NULL) return NULL;\n    dict *status = dictCreate(&clusterManagerLinkDictType);\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        list *links = clusterManagerGetDisconnectedLinks(node);\n        if (links) {\n            listIter lli;\n            listNode *lln;\n            listRewind(links, &lli);\n            while ((lln = listNext(&lli)) != NULL) {\n                clusterManagerLink *link = lln->value;\n                list *from = NULL;\n                dictEntry *entry = dictFind(status, link->node_addr);\n                if (entry) from = dictGetVal(entry);\n                else {\n                    from = listCreate();\n                    dictAdd(status, sdsdup(link->node_addr), from);\n                }\n                sds myaddr = sdsempty();\n                myaddr = sdscatfmt(myaddr, \"%s:%u\", node->ip, node->port);\n                listAddNodeTail(from, myaddr);\n                
sdsfree(link->node_name);\n                sdsfree(link->node_addr);\n                zfree(link);\n            }\n            listRelease(links);\n        }\n    }\n    return status;\n}\n\n/* Add the error string to cluster_manager.errors and print it. */\nstatic void clusterManagerOnError(sds err) {\n    if (cluster_manager.errors == NULL)\n        cluster_manager.errors = listCreate();\n    listAddNodeTail(cluster_manager.errors, err);\n    clusterManagerLogErr(\"%s\\n\", (char *) err);\n}\n\n/* Check the slots coverage of the cluster. The 'all_slots' argument must be\n * an array of 16384 bytes. Every covered slot will be set to 1 in the\n * 'all_slots' array. The function returns the total number of covered slots.*/\nstatic int clusterManagerGetCoveredSlots(char *all_slots) {\n    if (cluster_manager.nodes == NULL) return 0;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    int totslots = 0, i;\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        for (i = 0; i < CLUSTER_MANAGER_SLOTS; i++) {\n            if (node->slots[i] && !all_slots[i]) {\n                all_slots[i] = 1;\n                totslots++;\n            }\n        }\n    }\n    return totslots;\n}\n\nstatic void clusterManagerPrintSlotsList(list *slots) {\n    clusterManagerNode n = {0};\n    listIter li;\n    listNode *ln;\n    listRewind(slots, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        int slot = atoi(ln->value);\n        if (slot >= 0 && slot < CLUSTER_MANAGER_SLOTS)\n            n.slots[slot] = 1;\n    }\n    sds nodeslist = clusterManagerNodeSlotsString(&n);\n    printf(\"%s\\n\", nodeslist);\n    sdsfree(nodeslist);\n}\n\n/* Return the node, among 'nodes', with the greatest number of keys\n * in the specified slot. 
*/\nstatic clusterManagerNode * clusterManagerGetNodeWithMostKeysInSlot(list *nodes,\n                                                                    int slot,\n                                                                    char **err)\n{\n    clusterManagerNode *node = NULL;\n    int numkeys = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(nodes, &li);\n    if (err) *err = NULL;\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE || n->replicate)\n            continue;\n        redisReply *r =\n            CLUSTER_MANAGER_COMMAND(n, \"CLUSTER COUNTKEYSINSLOT %d\", slot);\n        int success = clusterManagerCheckRedisReply(n, r, err);\n        if (success) {\n            if (r->integer > numkeys || node == NULL) {\n                numkeys = r->integer;\n                node = n;\n            }\n        }\n        if (r != NULL) freeReplyObject(r);\n        /* If the reply contains errors */\n        if (!success) {\n            if (err != NULL && *err != NULL)\n                CLUSTER_MANAGER_PRINT_REPLY_ERROR(n, err);\n            node = NULL;\n            break;\n        }\n    }\n    return node;\n}\n\n/* This function returns the master that has the least number of replicas\n * in the cluster. If there are multiple masters with the same smallest\n * number of replicas, the first one found is returned. 
*/\n\nstatic clusterManagerNode *clusterManagerNodeWithLeastReplicas(void) {\n    clusterManagerNode *node = NULL;\n    int lowest_count = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        if (node == NULL || n->replicas_count < lowest_count) {\n            node = n;\n            lowest_count = n->replicas_count;\n        }\n    }\n    return node;\n}\n\n/* This function returns a random master node, return NULL if none */\n\nstatic clusterManagerNode *clusterManagerNodeMasterRandom(void) {\n    int master_count = 0;\n    int idx;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        master_count++;\n    }\n\n    assert(master_count > 0);\n    srand(time(NULL));\n    idx = rand() % master_count;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        if (!idx--) {\n            return n;\n        }\n    }\n    /* Can not be reached */\n    assert(0);\n    return NULL;\n}\n\nstatic int clusterManagerFixSlotsCoverage(char *all_slots) {\n    int force_fix = config.cluster_manager_command.flags &\n                    CLUSTER_MANAGER_CMD_FLAG_FIX_WITH_UNREACHABLE_MASTERS;\n\n    if (cluster_manager.unreachable_masters > 0 && !force_fix) {\n        clusterManagerLogWarn(\"*** Fixing slots coverage with %d unreachable masters is dangerous: redis-cli will assume that slots about masters that are not reachable are not covered, and will try to reassign them to the reachable nodes. This can cause data loss and is rarely what you want to do. 
If you really want to proceed use the --cluster-fix-with-unreachable-masters option.\\n\", cluster_manager.unreachable_masters);\n        exit(1);\n    }\n\n    int i, fixed = 0;\n    list *none = NULL, *single = NULL, *multi = NULL;\n    clusterManagerLogInfo(\">>> Fixing slots coverage...\\n\");\n    for (i = 0; i < CLUSTER_MANAGER_SLOTS; i++) {\n        int covered = all_slots[i];\n        if (!covered) {\n            sds slot = sdsfromlonglong((long long) i);\n            list *slot_nodes = listCreate();\n            sds slot_nodes_str = sdsempty();\n            listIter li;\n            listNode *ln;\n            listRewind(cluster_manager.nodes, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                clusterManagerNode *n = ln->value;\n                if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE || n->replicate)\n                    continue;\n                redisReply *reply = CLUSTER_MANAGER_COMMAND(n,\n                    \"CLUSTER GETKEYSINSLOT %d %d\", i, 1);\n                if (!clusterManagerCheckRedisReply(n, reply, NULL)) {\n                    fixed = -1;\n                    if (reply) freeReplyObject(reply);\n                    if (slot_nodes) listRelease(slot_nodes);\n                    sdsfree(slot_nodes_str);\n                    sdsfree(slot);\n                    goto cleanup;\n                }\n                assert(reply->type == REDIS_REPLY_ARRAY);\n                if (reply->elements > 0) {\n                    listAddNodeTail(slot_nodes, n);\n                    if (listLength(slot_nodes) > 1)\n                        slot_nodes_str = sdscat(slot_nodes_str, \", \");\n                    slot_nodes_str = sdscatfmt(slot_nodes_str,\n                                               \"%s:%u\", n->ip, n->port);\n                }\n                freeReplyObject(reply);\n            }\n            sdsfree(slot_nodes_str);\n            dictAdd(clusterManagerUncoveredSlots, slot, slot_nodes);\n        }\n    }\n\n    /* For 
every slot, take action depending on the actual condition:\n     * 1) No node has keys for this slot.\n     * 2) A single node has keys for this slot.\n     * 3) Multiple nodes have keys for this slot. */\n    none = listCreate();\n    single = listCreate();\n    multi = listCreate();\n    dictIterator iter;\n    dictEntry *entry;\n\n    dictInitIterator(&iter, clusterManagerUncoveredSlots);\n    while ((entry = dictNext(&iter)) != NULL) {\n        sds slot = (sds) dictGetKey(entry);\n        list *nodes = (list *) dictGetVal(entry);\n        switch (listLength(nodes)){\n        case 0: listAddNodeTail(none, slot); break;\n        case 1: listAddNodeTail(single, slot); break;\n        default: listAddNodeTail(multi, slot); break;\n        }\n    }\n    dictResetIterator(&iter);\n\n    /* we want explicit manual confirmation from users for all the fix cases */\n    int ignore_force = 1;\n\n    /*  Handle case \"1\": keys in no node. */\n    if (listLength(none) > 0) {\n        printf(\"The following uncovered slots have no keys \"\n               \"across the cluster:\\n\");\n        clusterManagerPrintSlotsList(none);\n        if (confirmWithYes(\"Fix these slots by covering with a random node?\",\n                           ignore_force)) {\n            listIter li;\n            listNode *ln;\n            listRewind(none, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                sds slot = ln->value;\n                int s = atoi(slot);\n                clusterManagerNode *n = clusterManagerNodeMasterRandom();\n                clusterManagerLogInfo(\">>> Covering slot %s with %s:%d\\n\",\n                                      slot, n->ip, n->port);\n                if (!clusterManagerSetSlotOwner(n, s, 0)) {\n                    fixed = -1;\n                    goto cleanup;\n                }\n                /* Since CLUSTER ADDSLOTS succeeded, we also update the slot\n                 * info into the node struct, in order to keep it synced */\n 
               n->slots[s] = 1;\n                fixed++;\n            }\n        }\n    }\n\n    /*  Handle case \"2\": keys only in one node. */\n    if (listLength(single) > 0) {\n        printf(\"The following uncovered slots have keys in just one node:\\n\");\n        clusterManagerPrintSlotsList(single);\n        if (confirmWithYes(\"Fix these slots by covering with those nodes?\",\n                           ignore_force)) {\n            listIter li;\n            listNode *ln;\n            listRewind(single, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                sds slot = ln->value;\n                int s = atoi(slot);\n                dictEntry *entry = dictFind(clusterManagerUncoveredSlots, slot);\n                assert(entry != NULL);\n                list *nodes = (list *) dictGetVal(entry);\n                listNode *fn = listFirst(nodes);\n                assert(fn != NULL);\n                clusterManagerNode *n = fn->value;\n                clusterManagerLogInfo(\">>> Covering slot %s with %s:%d\\n\",\n                                      slot, n->ip, n->port);\n                if (!clusterManagerSetSlotOwner(n, s, 0)) {\n                    fixed = -1;\n                    goto cleanup;\n                }\n                /* Since CLUSTER ADDSLOTS succeeded, we also update the slot\n                 * info into the node struct, in order to keep it synced */\n                n->slots[atoi(slot)] = 1;\n                fixed++;\n            }\n        }\n    }\n\n    /* Handle case \"3\": keys in multiple nodes. 
*/\n    if (listLength(multi) > 0) {\n        printf(\"The following uncovered slots have keys in multiple nodes:\\n\");\n        clusterManagerPrintSlotsList(multi);\n        if (confirmWithYes(\"Fix these slots by moving keys \"\n                           \"into a single node?\", ignore_force)) {\n            listIter li;\n            listNode *ln;\n            listRewind(multi, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                sds slot = ln->value;\n                dictEntry *entry = dictFind(clusterManagerUncoveredSlots, slot);\n                assert(entry != NULL);\n                list *nodes = (list *) dictGetVal(entry);\n                int s = atoi(slot);\n                clusterManagerNode *target =\n                    clusterManagerGetNodeWithMostKeysInSlot(nodes, s, NULL);\n                if (target == NULL) {\n                    fixed = -1;\n                    goto cleanup;\n                }\n                clusterManagerLogInfo(\">>> Covering slot %s moving keys \"\n                                      \"to %s:%d\\n\", slot,\n                                      target->ip, target->port);\n                if (!clusterManagerSetSlotOwner(target, s, 1)) {\n                    fixed = -1;\n                    goto cleanup;\n                }\n                /* Since CLUSTER ADDSLOTS succeeded, we also update the slot\n                 * info into the node struct, in order to keep it synced */\n                target->slots[atoi(slot)] = 1;\n                listIter nli;\n                listNode *nln;\n                listRewind(nodes, &nli);\n                while ((nln = listNext(&nli)) != NULL) {\n                    clusterManagerNode *src = nln->value;\n                    if (src == target) continue;\n                    /* Assign the slot to target node in the source node. 
*/\n                    if (!clusterManagerSetSlot(src, target, s, \"NODE\", NULL))\n                        fixed = -1;\n                    if (fixed < 0) goto cleanup;\n                    /* Set the source node in 'importing' state\n                     * (even if we will actually migrate keys away)\n                     * in order to avoid receiving redirections\n                     * for MIGRATE. */\n                    if (!clusterManagerSetSlot(src, target, s,\n                                               \"IMPORTING\", NULL)) fixed = -1;\n                    if (fixed < 0) goto cleanup;\n                    int opts = CLUSTER_MANAGER_OPT_VERBOSE |\n                               CLUSTER_MANAGER_OPT_COLD;\n                    if (!clusterManagerMoveSlot(src, target, s, opts, NULL)) {\n                        fixed = -1;\n                        goto cleanup;\n                    }\n                    if (!clusterManagerClearSlotStatus(src, s))\n                        fixed = -1;\n                    if (fixed < 0) goto cleanup;\n                }\n                fixed++;\n            }\n        }\n    }\ncleanup:\n    if (none) listRelease(none);\n    if (single) listRelease(single);\n    if (multi) listRelease(multi);\n    return fixed;\n}\n\n/* Slot 'slot' was found to be in importing or migrating state in one or\n * more nodes. This function fixes this condition by migrating keys where\n * it seems more sensible. */\nstatic int clusterManagerFixOpenSlot(int slot) {\n    int force_fix = config.cluster_manager_command.flags &\n                    CLUSTER_MANAGER_CMD_FLAG_FIX_WITH_UNREACHABLE_MASTERS;\n\n    if (cluster_manager.unreachable_masters > 0 && !force_fix) {\n        clusterManagerLogWarn(\"*** Fixing open slots with %d unreachable masters is dangerous: redis-cli will assume that slots about masters that are not reachable are not covered, and will try to reassign them to the reachable nodes. 
This can cause data loss and is rarely what you want to do. If you really want to proceed use the --cluster-fix-with-unreachable-masters option.\\n\", cluster_manager.unreachable_masters);\n        exit(1);\n    }\n\n    clusterManagerLogInfo(\">>> Fixing open slot %d\\n\", slot);\n    /* Try to obtain the current slot owner, according to the current\n     * nodes' configuration. */\n    int success = 1;\n    list *owners = listCreate();    /* List of nodes claiming some ownership:\n                                       a node may state in its configuration\n                                       that it owns the slot, or it may just\n                                       hold keys for the slot. */\n    list *migrating = listCreate();\n    list *importing = listCreate();\n    sds migrating_str = sdsempty();\n    sds importing_str = sdsempty();\n    clusterManagerNode *owner = NULL; /* The obvious slot owner if any. */\n\n    /* Iterate all the nodes, looking for potential owners of this slot. 
*/\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        if (n->slots[slot]) {\n            listAddNodeTail(owners, n);\n        } else {\n            redisReply *r = CLUSTER_MANAGER_COMMAND(n,\n                \"CLUSTER COUNTKEYSINSLOT %d\", slot);\n            success = clusterManagerCheckRedisReply(n, r, NULL);\n            if (success && r->integer > 0) {\n                clusterManagerLogWarn(\"*** Found keys about slot %d \"\n                                      \"in non-owner node %s:%d!\\n\", slot,\n                                      n->ip, n->port);\n                listAddNodeTail(owners, n);\n            }\n            if (r) freeReplyObject(r);\n            if (!success) goto cleanup;\n        }\n    }\n\n    /* If we have only a single potential owner for this slot,\n     * set it as \"owner\". */\n    if (listLength(owners) == 1) owner = listFirst(owners)->value;\n\n    /* Scan the list of nodes again, in order to populate the\n     * list of nodes in importing or migrating state for\n     * this slot. */\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        int is_migrating = 0, is_importing = 0;\n        if (n->migrating) {\n            for (int i = 0; i < n->migrating_count; i += 2) {\n                sds migrating_slot = n->migrating[i];\n                if (atoi(migrating_slot) == slot) {\n                    char *sep = (listLength(migrating) == 0 ? 
\"\" : \",\");\n                    migrating_str = sdscatfmt(migrating_str, \"%s%s:%u\",\n                                              sep, n->ip, n->port);\n                    listAddNodeTail(migrating, n);\n                    is_migrating = 1;\n                    break;\n                }\n            }\n        }\n        if (!is_migrating && n->importing) {\n            for (int i = 0; i < n->importing_count; i += 2) {\n                sds importing_slot = n->importing[i];\n                if (atoi(importing_slot) == slot) {\n                    char *sep = (listLength(importing) == 0 ? \"\" : \",\");\n                    importing_str = sdscatfmt(importing_str, \"%s%s:%u\",\n                                              sep, n->ip, n->port);\n                    listAddNodeTail(importing, n);\n                    is_importing = 1;\n                    break;\n                }\n            }\n        }\n\n        /* If the node is neither migrating nor importing and it's not\n         * the owner, it is added to the importing list in case\n         * it has keys in the slot. */\n        if (!is_migrating && !is_importing && n != owner) {\n            redisReply *r = CLUSTER_MANAGER_COMMAND(n,\n                \"CLUSTER COUNTKEYSINSLOT %d\", slot);\n            success = clusterManagerCheckRedisReply(n, r, NULL);\n            if (success && r->integer > 0) {\n                clusterManagerLogWarn(\"*** Found keys about slot %d \"\n                                      \"in node %s:%d!\\n\", slot, n->ip,\n                                      n->port);\n                char *sep = (listLength(importing) == 0 ? 
\"\" : \",\");\n                importing_str = sdscatfmt(importing_str, \"%s%s:%u\",\n                                          sep, n->ip, n->port);\n                listAddNodeTail(importing, n);\n            }\n            if (r) freeReplyObject(r);\n            if (!success) goto cleanup;\n        }\n    }\n    if (sdslen(migrating_str) > 0)\n        printf(\"Set as migrating in: %s\\n\", migrating_str);\n    if (sdslen(importing_str) > 0)\n        printf(\"Set as importing in: %s\\n\", importing_str);\n\n    /* If there is no slot owner, set as owner the node with the biggest\n     * number of keys, among the set of migrating / importing nodes. */\n    if (owner == NULL) {\n        clusterManagerLogInfo(\">>> No single clear owner for the slot, \"\n                              \"selecting an owner by # of keys...\\n\");\n        owner = clusterManagerGetNodeWithMostKeysInSlot(cluster_manager.nodes,\n                                                        slot, NULL);\n        // If we still don't have an owner, we can't fix it.\n        if (owner == NULL) {\n            clusterManagerLogErr(\"[ERR] Can't select a slot owner. \"\n                                 \"Impossible to fix.\\n\");\n            success = 0;\n            goto cleanup;\n        }\n\n        // Use ADDSLOTS to assign the slot.\n        clusterManagerLogWarn(\"*** Configuring %s:%d as the slot owner\\n\",\n                              owner->ip, owner->port);\n        success = clusterManagerClearSlotStatus(owner, slot);\n        if (!success) goto cleanup;\n        success = clusterManagerSetSlotOwner(owner, slot, 0);\n        if (!success) goto cleanup;\n        /* Since CLUSTER ADDSLOTS succeeded, we also update the slot\n         * info into the node struct, in order to keep it synced */\n        owner->slots[slot] = 1;\n        /* Remove the owner from the list of migrating/importing\n         * nodes. 
*/\n        clusterManagerRemoveNodeFromList(migrating, owner);\n        clusterManagerRemoveNodeFromList(importing, owner);\n    }\n\n    /* If there are multiple owners of the slot, we need to fix it\n     * so that a single node is the owner and all the other nodes\n     * are in importing state. Later the fix can be handled by one\n     * of the base cases below.\n     *\n     * Note that this case also covers multiple nodes having the slot\n     * in migrating state, since migrating is a valid state only for\n     * slot owners. */\n    if (listLength(owners) > 1) {\n        /* Owner cannot be NULL at this point: since there are multiple\n         * owners, it has been set in the previous (owner == NULL) branch. */\n        assert(owner != NULL);\n        listRewind(owners, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n == owner) continue;\n            success = clusterManagerDelSlot(n, slot, 1);\n            if (!success) goto cleanup;\n            n->slots[slot] = 0;\n            /* Assign the slot to the owner in the node 'n' configuration. */\n            success = clusterManagerSetSlot(n, owner, slot, \"node\", NULL);\n            if (!success) goto cleanup;\n            success = clusterManagerSetSlot(n, owner, slot, \"importing\", NULL);\n            if (!success) goto cleanup;\n            /* Avoid duplicates. */\n            clusterManagerRemoveNodeFromList(importing, n);\n            listAddNodeTail(importing, n);\n            /* Ensure that the node is not in the migrating list. */\n            clusterManagerRemoveNodeFromList(migrating, n);\n        }\n    }\n    int move_opts = CLUSTER_MANAGER_OPT_VERBOSE;\n\n    /* Case 1: The slot is in migrating state in one node, and in\n     *         importing state in one node. That's trivial to address. 
*/\n    if (listLength(migrating) == 1 && listLength(importing) == 1) {\n        clusterManagerNode *src = listFirst(migrating)->value;\n        clusterManagerNode *dst = listFirst(importing)->value;\n        clusterManagerLogInfo(\">>> Case 1: Moving slot %d from \"\n                              \"%s:%d to %s:%d\\n\", slot,\n                              src->ip, src->port, dst->ip, dst->port);\n        move_opts |= CLUSTER_MANAGER_OPT_UPDATE;\n        success = clusterManagerMoveSlot(src, dst, slot, move_opts, NULL);\n    }\n\n    /* Case 2: There are multiple nodes that claim the slot as importing;\n     * they probably got keys for the slot after a restart and so opened\n     * the slot. In this case we just move all the keys to the owner\n     * according to the configuration. */\n    else if (listLength(migrating) == 0 && listLength(importing) > 0) {\n        clusterManagerLogInfo(\">>> Case 2: Moving all the %d slot keys to its \"\n                              \"owner %s:%d\\n\", slot, owner->ip, owner->port);\n        move_opts |= CLUSTER_MANAGER_OPT_COLD;\n        listRewind(importing, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n == owner) continue;\n            success = clusterManagerMoveSlot(n, owner, slot, move_opts, NULL);\n            if (!success) goto cleanup;\n            clusterManagerLogInfo(\">>> Setting %d as STABLE in \"\n                                  \"%s:%d\\n\", slot, n->ip, n->port);\n            success = clusterManagerClearSlotStatus(n, slot);\n            if (!success) goto cleanup;\n        }\n        /* Since the slot has been moved in \"cold\" mode, ensure that all the\n         * other nodes update their own configuration about the slot itself. 
*/\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n == owner) continue;\n            if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n            success = clusterManagerSetSlot(n, owner, slot, \"NODE\", NULL);\n            if (!success) goto cleanup;\n        }\n    }\n\n    /* Case 3: The slot is in migrating state in one node but multiple\n     * other nodes claim to be in importing state and don't have any key in\n     * the slot. We search for the importing node having the same ID as\n     * the destination node of the migrating node.\n     * In that case we move the slot from the migrating node to this node and\n     * we close the importing states on all the other importing nodes.\n     * If no importing node has the same ID as the destination node of the\n     * migrating node, the slot's state is closed on both the migrating node\n     * and the importing nodes. 
*/\n    else if (listLength(migrating) == 1 && listLength(importing) > 1) {\n        int try_to_fix = 1;\n        clusterManagerNode *src = listFirst(migrating)->value;\n        clusterManagerNode *dst = NULL;\n        sds target_id = NULL;\n        for (int i = 0; i < src->migrating_count; i += 2) {\n            sds migrating_slot = src->migrating[i];\n            if (atoi(migrating_slot) == slot) {\n                target_id = src->migrating[i + 1];\n                break;\n            }\n        }\n        assert(target_id != NULL);\n        listIter li;\n        listNode *ln;\n        listRewind(importing, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            int count = clusterManagerCountKeysInSlot(n, slot);\n            if (count > 0) {\n                try_to_fix = 0;\n                break;\n            }\n            if (strcmp(n->name, target_id) == 0) dst = n;\n        }\n        if (!try_to_fix) goto unhandled_case;\n        if (dst != NULL) {\n            clusterManagerLogInfo(\">>> Case 3: Moving slot %d from %s:%d to \"\n                                  \"%s:%d and closing it on all the other \"\n                                  \"importing nodes.\\n\",\n                                  slot, src->ip, src->port,\n                                  dst->ip, dst->port);\n            /* Move the slot to the destination node. */\n            success = clusterManagerMoveSlot(src, dst, slot, move_opts, NULL);\n            if (!success) goto cleanup;\n            /* Close slot on all the other importing nodes. 
*/\n            listRewind(importing, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                clusterManagerNode *n = ln->value;\n                if (dst == n) continue;\n                success = clusterManagerClearSlotStatus(n, slot);\n                if (!success) goto cleanup;\n            }\n        } else {\n            clusterManagerLogInfo(\">>> Case 3: Closing slot %d on both \"\n                                  \"migrating and importing nodes.\\n\", slot);\n            /* Close the slot on both the migrating node and the importing\n             * nodes. */\n            success = clusterManagerClearSlotStatus(src, slot);\n            if (!success) goto cleanup;\n            listRewind(importing, &li);\n            while ((ln = listNext(&li)) != NULL) {\n                clusterManagerNode *n = ln->value;\n                success = clusterManagerClearSlotStatus(n, slot);\n                if (!success) goto cleanup;\n            }\n        }\n    } else {\n        int try_to_close_slot = (listLength(importing) == 0 &&\n                                 listLength(migrating) == 1);\n        if (try_to_close_slot) {\n            clusterManagerNode *n = listFirst(migrating)->value;\n            if (!owner || owner != n) {\n                redisReply *r = CLUSTER_MANAGER_COMMAND(n,\n                    \"CLUSTER GETKEYSINSLOT %d %d\", slot, 10);\n                success = clusterManagerCheckRedisReply(n, r, NULL);\n                if (r) {\n                    if (success) try_to_close_slot = (r->elements == 0);\n                    freeReplyObject(r);\n                }\n                if (!success) goto cleanup;\n            }\n        }\n        /* Case 4: There are no nodes claiming to be in importing state, but\n         * there is a migrating node that actually doesn't have any keys or is\n         * the slot owner. We can just close the slot, probably a reshard\n         * interrupted in the middle. 
*/\n        if (try_to_close_slot) {\n            clusterManagerNode *n = listFirst(migrating)->value;\n            clusterManagerLogInfo(\">>> Case 4: Closing slot %d on %s:%d\\n\",\n                                  slot, n->ip, n->port);\n            redisReply *r = CLUSTER_MANAGER_COMMAND(n, \"CLUSTER SETSLOT %d %s\",\n                                                    slot, \"STABLE\");\n            success = clusterManagerCheckRedisReply(n, r, NULL);\n            if (r) freeReplyObject(r);\n            if (!success) goto cleanup;\n        } else {\nunhandled_case:\n            success = 0;\n            clusterManagerLogErr(\"[ERR] Sorry, redis-cli can't fix this slot \"\n                                 \"yet (work in progress). Slot is set as \"\n                                 \"migrating in %s, as importing in %s, \"\n                                 \"owner is %s:%d\\n\", migrating_str,\n                                 importing_str, owner->ip, owner->port);\n        }\n    }\ncleanup:\n    listRelease(owners);\n    listRelease(migrating);\n    listRelease(importing);\n    sdsfree(migrating_str);\n    sdsfree(importing_str);\n    return success;\n}\n\nstatic int clusterManagerFixMultipleSlotOwners(int slot, list *owners) {\n    clusterManagerLogInfo(\">>> Fixing multiple owners for slot %d...\\n\", slot);\n    int success = 0;\n    assert(listLength(owners) > 1);\n    clusterManagerNode *owner = clusterManagerGetNodeWithMostKeysInSlot(owners,\n                                                                        slot,\n                                                                        NULL);\n    if (!owner) owner = listFirst(owners)->value;\n    clusterManagerLogInfo(\">>> Setting slot %d owner: %s:%d\\n\",\n                          slot, owner->ip, owner->port);\n    /* Set the slot owner. 
*/\n    if (!clusterManagerSetSlotOwner(owner, slot, 0)) return 0;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    /* Update configuration in all the other master nodes by assigning the slot\n     * itself to the new owner, and by migrating keys if the node\n     * has keys for the slot. */\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n == owner) continue;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n        int count = clusterManagerCountKeysInSlot(n, slot);\n        success = (count >= 0);\n        if (!success) break;\n        clusterManagerDelSlot(n, slot, 1);\n        if (!clusterManagerSetSlot(n, owner, slot, \"node\", NULL)) return 0;\n        if (count > 0) {\n            int opts = CLUSTER_MANAGER_OPT_VERBOSE |\n                       CLUSTER_MANAGER_OPT_COLD;\n            success = clusterManagerMoveSlot(n, owner, slot, opts, NULL);\n            if (!success) break;\n        }\n    }\n    return success;\n}\n\nstatic int clusterManagerCheckCluster(int quiet) {\n    listNode *ln = listFirst(cluster_manager.nodes);\n    if (!ln) return 0;\n    clusterManagerNode *node = ln->value;\n    clusterManagerLogInfo(\">>> Performing Cluster Check (using node %s:%d)\\n\",\n                          node->ip, node->port);\n    int result = 1, consistent = 0;\n    int do_fix = config.cluster_manager_command.flags &\n                 CLUSTER_MANAGER_CMD_FLAG_FIX;\n    if (!quiet) clusterManagerShowNodes();\n    consistent = clusterManagerIsConfigConsistent();\n    if (!consistent) {\n        sds err = sdsnew(\"[ERR] Nodes don't agree about configuration!\");\n        clusterManagerOnError(err);\n        result = 0;\n    } else {\n        clusterManagerLogOk(\"[OK] All nodes agree about slots \"\n                            \"configuration.\\n\");\n    }\n    /* Check open slots */\n    clusterManagerLogInfo(\">>> Check for open slots...\\n\");\n    
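/* A slot is "open" when some node still reports it as migrating or\n     * importing (the transient states set by CLUSTER SETSLOT <slot>\n     * MIGRATING|IMPORTING <node-id>), usually the leftover of a resharding\n     * interrupted in the middle. */\n    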
listIter li;\n    listRewind(cluster_manager.nodes, &li);\n    int i;\n    dict *open_slots = NULL;\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->migrating != NULL) {\n            if (open_slots == NULL)\n                open_slots = dictCreate(&clusterManagerDictType);\n            sds errstr = sdsempty();\n            errstr = sdscatprintf(errstr,\n                                \"[WARNING] Node %s:%d has slots in \"\n                                \"migrating state \",\n                                n->ip,\n                                n->port);\n            for (i = 0; i < n->migrating_count; i += 2) {\n                sds slot = n->migrating[i];\n                dictReplace(open_slots, slot, sdsdup(n->migrating[i + 1]));\n                char *fmt = (i > 0 ? \",%S\" : \"%S\");\n                errstr = sdscatfmt(errstr, fmt, slot);\n            }\n            errstr = sdscat(errstr, \".\");\n            clusterManagerOnError(errstr);\n        }\n        if (n->importing != NULL) {\n            if (open_slots == NULL)\n                open_slots = dictCreate(&clusterManagerDictType);\n            sds errstr = sdsempty();\n            errstr = sdscatprintf(errstr,\n                                \"[WARNING] Node %s:%d has slots in \"\n                                \"importing state \",\n                                n->ip,\n                                n->port);\n            for (i = 0; i < n->importing_count; i += 2) {\n                sds slot = n->importing[i];\n                dictReplace(open_slots, slot, sdsdup(n->importing[i + 1]));\n                char *fmt = (i > 0 ? 
\",%S\" : \"%S\");\n                errstr = sdscatfmt(errstr, fmt, slot);\n            }\n            errstr = sdscat(errstr, \".\");\n            clusterManagerOnError(errstr);\n        }\n    }\n    if (open_slots != NULL) {\n        result = 0;\n        dictIterator iter;\n        dictEntry *entry;\n        sds errstr = sdsnew(\"[WARNING] The following slots are open: \");\n        i = 0;\n\n        dictInitIterator(&iter, open_slots);\n        while ((entry = dictNext(&iter)) != NULL) {\n            sds slot = (sds) dictGetKey(entry);\n            char *fmt = (i++ > 0 ? \",%S\" : \"%S\");\n            errstr = sdscatfmt(errstr, fmt, slot);\n        }\n        dictResetIterator(&iter);\n        clusterManagerLogErr(\"%s.\\n\", (char *) errstr);\n        sdsfree(errstr);\n        if (do_fix) {\n            /* Fix open slots. */\n            dictInitIterator(&iter, open_slots);\n            while ((entry = dictNext(&iter)) != NULL) {\n                sds slot = (sds) dictGetKey(entry);\n                result = clusterManagerFixOpenSlot(atoi(slot));\n                if (!result) break;\n            }\n            dictResetIterator(&iter);\n        }\n        dictRelease(open_slots);\n    }\n    clusterManagerLogInfo(\">>> Check slots coverage...\\n\");\n    char slots[CLUSTER_MANAGER_SLOTS];\n    memset(slots, 0, CLUSTER_MANAGER_SLOTS);\n    int coverage = clusterManagerGetCoveredSlots(slots);\n    if (coverage == CLUSTER_MANAGER_SLOTS) {\n        clusterManagerLogOk(\"[OK] All %d slots covered.\\n\",\n                            CLUSTER_MANAGER_SLOTS);\n    } else {\n        sds err = sdsempty();\n        err = sdscatprintf(err, \"[ERR] Not all %d slots are \"\n                                \"covered by nodes.\\n\",\n                                CLUSTER_MANAGER_SLOTS);\n        clusterManagerOnError(err);\n        result = 0;\n        if (do_fix/* && result*/) {\n            dictType dtype = clusterManagerDictType;\n            dtype.keyDestructor = 
dictSdsDestructor;\n            dtype.valDestructor = dictListDestructor;\n            clusterManagerUncoveredSlots = dictCreate(&dtype);\n            int fixed = clusterManagerFixSlotsCoverage(slots);\n            if (fixed > 0) result = 1;\n        }\n    }\n    int search_multiple_owners = config.cluster_manager_command.flags &\n                                 CLUSTER_MANAGER_CMD_FLAG_CHECK_OWNERS;\n    if (search_multiple_owners) {\n        /* Check whether there are multiple owners, even when slots are\n         * fully covered and there are no open slots. */\n        clusterManagerLogInfo(\">>> Check for multiple slot owners...\\n\");\n        int slot = 0, slots_with_multiple_owners = 0;\n        for (; slot < CLUSTER_MANAGER_SLOTS; slot++) {\n            listIter li;\n            listNode *ln;\n            listRewind(cluster_manager.nodes, &li);\n            list *owners = listCreate();\n            while ((ln = listNext(&li)) != NULL) {\n                clusterManagerNode *n = ln->value;\n                if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n                if (n->slots[slot]) listAddNodeTail(owners, n);\n                else {\n                    /* Nodes having keys for the slot will be considered\n                     * owners too. 
*/\n                    int count = clusterManagerCountKeysInSlot(n, slot);\n                    if (count > 0) listAddNodeTail(owners, n);\n                }\n            }\n            if (listLength(owners) > 1) {\n                result = 0;\n                clusterManagerLogErr(\"[WARNING] Slot %d has %d owners:\\n\",\n                                     slot, listLength(owners));\n                listRewind(owners, &li);\n                while ((ln = listNext(&li)) != NULL) {\n                    clusterManagerNode *n = ln->value;\n                    clusterManagerLogErr(\"    %s:%d\\n\", n->ip, n->port);\n                }\n                slots_with_multiple_owners++;\n                if (do_fix) {\n                    result = clusterManagerFixMultipleSlotOwners(slot, owners);\n                    if (!result) {\n                        clusterManagerLogErr(\"Failed to fix multiple owners \"\n                                             \"for slot %d\\n\", slot);\n                        listRelease(owners);\n                        break;\n                    } else slots_with_multiple_owners--;\n                }\n            }\n            listRelease(owners);\n        }\n        if (slots_with_multiple_owners == 0)\n            clusterManagerLogOk(\"[OK] No multiple owners found.\\n\");\n    }\n    return result;\n}\n\nstatic clusterManagerNode *clusterNodeForResharding(char *id,\n                                                    clusterManagerNode *target,\n                                                    int *raise_err)\n{\n    clusterManagerNode *node = NULL;\n    const char *invalid_node_msg = \"*** The specified node (%s) is not known \"\n                                   \"or not a master, please retry.\\n\";\n    node = clusterManagerNodeByName(id);\n    *raise_err = 0;\n    if (!node || node->flags & CLUSTER_MANAGER_FLAG_SLAVE) {\n        clusterManagerLogErr(invalid_node_msg, id);\n        *raise_err = 1;\n        return NULL;\n    } 
else if (target != NULL) {\n        if (!strcmp(node->name, target->name)) {\n            clusterManagerLogErr( \"*** It is not possible to use \"\n                                  \"the target node as \"\n                                  \"source node.\\n\");\n            return NULL;\n        }\n    }\n    return node;\n}\n\nstatic list *clusterManagerComputeReshardTable(list *sources, int numslots) {\n    list *moved = listCreate();\n    int src_count = listLength(sources), i = 0, tot_slots = 0, j;\n    clusterManagerNode **sorted = zmalloc(src_count * sizeof(*sorted));\n    listIter li;\n    listNode *ln;\n    listRewind(sources, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *node = ln->value;\n        tot_slots += node->slots_count;\n        sorted[i++] = node;\n    }\n    qsort(sorted, src_count, sizeof(clusterManagerNode *),\n          clusterManagerSlotCountCompareDesc);\n    for (i = 0; i < src_count; i++) {\n        clusterManagerNode *node = sorted[i];\n        float n = ((float) numslots / tot_slots * node->slots_count);\n        if (i == 0) n = ceil(n);\n        else n = floor(n);\n        int max = (int) n, count = 0;\n        for (j = 0; j < CLUSTER_MANAGER_SLOTS; j++) {\n            int slot = node->slots[j];\n            if (!slot) continue;\n            if (count >= max || (int)listLength(moved) >= numslots) break;\n            clusterManagerReshardTableItem *item = zmalloc(sizeof(*item));\n            item->source = node;\n            item->slot = j;\n            listAddNodeTail(moved, item);\n            count++;\n        }\n    }\n    zfree(sorted);\n    return moved;\n}\n\nstatic void clusterManagerShowReshardTable(list *table) {\n    listIter li;\n    listNode *ln;\n    listRewind(table, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerReshardTableItem *item = ln->value;\n        clusterManagerNode *n = item->source;\n        printf(\"    Moving slot %d from %s\\n\", item->slot, (char 
*) n->name);\n    }\n}\n\nstatic void clusterManagerReleaseReshardTable(list *table) {\n    if (table != NULL) {\n        listIter li;\n        listNode *ln;\n        listRewind(table, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerReshardTableItem *item = ln->value;\n            zfree(item);\n        }\n        listRelease(table);\n    }\n}\n\nstatic void clusterManagerLog(int level, const char* fmt, ...) {\n    int use_colors =\n        (config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_COLOR);\n    if (use_colors) {\n        printf(\"\\033[\");\n        switch (level) {\n        case CLUSTER_MANAGER_LOG_LVL_INFO: printf(LOG_COLOR_BOLD); break;\n        case CLUSTER_MANAGER_LOG_LVL_WARN: printf(LOG_COLOR_YELLOW); break;\n        case CLUSTER_MANAGER_LOG_LVL_ERR: printf(LOG_COLOR_RED); break;\n        case CLUSTER_MANAGER_LOG_LVL_SUCCESS: printf(LOG_COLOR_GREEN); break;\n        default: printf(LOG_COLOR_RESET); break;\n        }\n    }\n    va_list ap;\n    va_start(ap, fmt);\n    vprintf(fmt, ap);\n    va_end(ap);\n    if (use_colors) printf(\"\\033[\" LOG_COLOR_RESET);\n}\n\nstatic void clusterManagerNodeArrayInit(clusterManagerNodeArray *array,\n                                        int alloc_len)\n{\n    array->nodes = zcalloc(alloc_len * sizeof(clusterManagerNode*));\n    array->alloc = array->nodes;\n    array->len = alloc_len;\n    array->count = 0;\n}\n\n/* Reset array->nodes to the original array allocation and re-count non-NULL\n * nodes. */\nstatic void clusterManagerNodeArrayReset(clusterManagerNodeArray *array) {\n    if (array->nodes > array->alloc) {\n        array->len = array->nodes - array->alloc;\n        array->nodes = array->alloc;\n        array->count = 0;\n        int i = 0;\n        for(; i < array->len; i++) {\n            if (array->nodes[i] != NULL) array->count++;\n        }\n    }\n}\n\n/* Shift array->nodes and store the shifted node into 'nodeptr'. 
*/\nstatic void clusterManagerNodeArrayShift(clusterManagerNodeArray *array,\n                                         clusterManagerNode **nodeptr)\n{\n    assert(array->len > 0);\n    /* If the first node to be shifted is not NULL, decrement count. */\n    if (*array->nodes != NULL) array->count--;\n    /* Store the first node to be shifted into 'nodeptr'. */\n    *nodeptr = *array->nodes;\n    /* Shift the nodes array and decrement length. */\n    array->nodes++;\n    array->len--;\n}\n\nstatic void clusterManagerNodeArrayAdd(clusterManagerNodeArray *array,\n                                       clusterManagerNode *node)\n{\n    assert(array->len > 0);\n    assert(node != NULL);\n    assert(array->count < array->len);\n    array->nodes[array->count++] = node;\n}\n\nstatic void clusterManagerPrintNotEmptyNodeError(clusterManagerNode *node,\n                                                 char *err)\n{\n    char *msg;\n    if (err) msg = err;\n    else {\n        msg = \"is not empty. Either the node already knows other \"\n              \"nodes (check with CLUSTER NODES) or contains some \"\n              \"key in database 0.\";\n    }\n    clusterManagerLogErr(\"[ERR] Node %s:%d %s\\n\", node->ip, node->port, msg);\n}\n\nstatic void clusterManagerPrintNotClusterNodeError(clusterManagerNode *node,\n                                                   char *err)\n{\n    char *msg = (err ? err : \"is not configured as a cluster node.\");\n    clusterManagerLogErr(\"[ERR] Node %s:%d %s\\n\", node->ip, node->port, msg);\n}\n\n/* Execute redis-cli in Cluster Manager mode */\nstatic void clusterManagerMode(clusterManagerCommandProc *proc) {\n    int argc = config.cluster_manager_command.argc;\n    char **argv = config.cluster_manager_command.argv;\n    cluster_manager.nodes = NULL;\n    int success = proc(argc, argv);\n\n    /* Initialized in createClusterManagerCommand. 
*/\n    if (config.stdin_lastarg) {\n        zfree(config.cluster_manager_command.argv);\n        sdsfree(config.cluster_manager_command.stdin_arg);\n    } else if (config.stdin_tag_arg) {\n        sdsfree(config.cluster_manager_command.stdin_arg);\n    }\n    freeClusterManager();\n\n    exit(success ? 0 : 1);\n}\n\n/* Cluster Manager Commands */\n\nstatic int clusterManagerCommandCreate(int argc, char **argv) {\n    int i, j, success = 1;\n    cluster_manager.nodes = listCreate();\n    for (i = 0; i < argc; i++) {\n        char *addr = argv[i];\n        char *ip = NULL;\n        int port = 0;\n        if (!parseClusterNodeAddress(addr, &ip, &port, NULL)) {\n            fprintf(stderr, \"Invalid address format: %s\\n\", addr);\n            return 0;\n        }\n\n        clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n        if (!clusterManagerNodeConnect(node)) {\n            freeClusterManagerNode(node);\n            return 0;\n        }\n        char *err = NULL;\n        if (!clusterManagerNodeIsCluster(node, &err)) {\n            clusterManagerPrintNotClusterNodeError(node, err);\n            if (err) zfree(err);\n            freeClusterManagerNode(node);\n            return 0;\n        }\n        err = NULL;\n        if (!clusterManagerNodeLoadInfo(node, 0, &err)) {\n            if (err) {\n                CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n                zfree(err);\n            }\n            freeClusterManagerNode(node);\n            return 0;\n        }\n        err = NULL;\n        if (!clusterManagerNodeIsEmpty(node, &err)) {\n            clusterManagerPrintNotEmptyNodeError(node, err);\n            if (err) zfree(err);\n            freeClusterManagerNode(node);\n            return 0;\n        }\n        listAddNodeTail(cluster_manager.nodes, node);\n    }\n    int node_len = cluster_manager.nodes->len;\n    int replicas = config.cluster_manager_command.replicas;\n    int masters_count = 
CLUSTER_MANAGER_MASTERS_COUNT(node_len, replicas);\n    if (masters_count < 3) {\n        clusterManagerLogErr(\n            \"*** ERROR: Invalid configuration for cluster creation.\\n\"\n            \"*** Redis Cluster requires at least 3 master nodes.\\n\"\n            \"*** This is not possible with %d nodes and %d replicas per node.\",\n            node_len, replicas);\n        clusterManagerLogErr(\"\\n*** At least %d nodes are required.\\n\",\n                             3 * (replicas + 1));\n        return 0;\n    }\n    clusterManagerLogInfo(\">>> Performing hash slots allocation \"\n                          \"on %d nodes...\\n\", node_len);\n    int interleaved_len = 0, ip_count = 0;\n    clusterManagerNode **interleaved = zcalloc(node_len*sizeof(**interleaved));\n    char **ips = zcalloc(node_len * sizeof(char*));\n    clusterManagerNodeArray *ip_nodes = zcalloc(node_len * sizeof(*ip_nodes));\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        int found = 0;\n        for (i = 0; i < ip_count; i++) {\n            char *ip = ips[i];\n            if (!strcmp(ip, n->ip)) {\n                found = 1;\n                break;\n            }\n        }\n        if (!found) {\n            ips[ip_count++] = n->ip;\n        }\n        clusterManagerNodeArray *node_array = &(ip_nodes[i]);\n        if (node_array->nodes == NULL)\n            clusterManagerNodeArrayInit(node_array, node_len);\n        clusterManagerNodeArrayAdd(node_array, n);\n    }\n    while (interleaved_len < node_len) {\n        for (i = 0; i < ip_count; i++) {\n            clusterManagerNodeArray *node_array = &(ip_nodes[i]);\n            if (node_array->count > 0) {\n                clusterManagerNode *n = NULL;\n                clusterManagerNodeArrayShift(node_array, &n);\n                interleaved[interleaved_len++] = n;\n            }\n        }\n    }\n    
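/* Illustrative math (assuming the usual 16384-slot space): with 6 nodes\n     * and 1 replica each, masters_count = 6 / (1+1) = 3 and slots_per_node =\n     * 16384 / 3 = 5461.33, so the rounding cursor below yields the ranges\n     * 0-5460, 5461-10922 and 10923-16383. */\n    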
clusterManagerNode **masters = interleaved;\n    interleaved += masters_count;\n    interleaved_len -= masters_count;\n    float slots_per_node = CLUSTER_MANAGER_SLOTS / (float) masters_count;\n    long first = 0;\n    float cursor = 0.0f;\n    for (i = 0; i < masters_count; i++) {\n        clusterManagerNode *master = masters[i];\n        long last = lround(cursor + slots_per_node - 1);\n        if (last > CLUSTER_MANAGER_SLOTS || i == (masters_count - 1))\n            last = CLUSTER_MANAGER_SLOTS - 1;\n        if (last < first) last = first;\n        printf(\"Master[%d] -> Slots %ld - %ld\\n\", i, first, last);\n        master->slots_count = 0;\n        for (j = first; j <= last; j++) {\n            master->slots[j] = 1;\n            master->slots_count++;\n        }\n        master->dirty = 1;\n        first = last + 1;\n        cursor += slots_per_node;\n    }\n\n    /* Rotating the list sometimes helps to get better initial\n     * anti-affinity before the optimizer runs. */\n    clusterManagerNode *first_node = interleaved[0];\n    for (i = 0; i < (interleaved_len - 1); i++)\n        interleaved[i] = interleaved[i + 1];\n    interleaved[interleaved_len - 1] = first_node;\n    int assign_unused = 0, available_count = interleaved_len;\nassign_replicas:\n    for (i = 0; i < masters_count; i++) {\n        clusterManagerNode *master = masters[i];\n        int assigned_replicas = 0;\n        while (assigned_replicas < replicas) {\n            if (available_count == 0) break;\n            clusterManagerNode *found = NULL, *slave = NULL;\n            int firstNodeIdx = -1;\n            for (j = 0; j < interleaved_len; j++) {\n                clusterManagerNode *n = interleaved[j];\n                if (n == NULL) continue;\n                if (strcmp(n->ip, master->ip)) {\n                    found = n;\n                    interleaved[j] = NULL;\n                    break;\n                }\n                if (firstNodeIdx < 0) firstNodeIdx = j;\n            }\n    
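            /* Prefer a candidate on a different IP from the master\n             * (anti-affinity); if none is available, fall back to the first\n             * remaining node. */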
        if (found) slave = found;\n            else if (firstNodeIdx >= 0) {\n                slave = interleaved[firstNodeIdx];\n                interleaved_len -= (firstNodeIdx + 1);\n                interleaved += (firstNodeIdx + 1);\n            }\n            if (slave != NULL) {\n                assigned_replicas++;\n                available_count--;\n                if (slave->replicate) sdsfree(slave->replicate);\n                slave->replicate = sdsnew(master->name);\n                slave->dirty = 1;\n            } else break;\n            printf(\"Adding replica %s:%d to %s:%d\\n\", slave->ip, slave->port,\n                   master->ip, master->port);\n            if (assign_unused) break;\n        }\n    }\n    if (!assign_unused && available_count > 0) {\n        assign_unused = 1;\n        printf(\"Adding extra replicas...\\n\");\n        goto assign_replicas;\n    }\n    for (i = 0; i < ip_count; i++) {\n        clusterManagerNodeArray *node_array = ip_nodes + i;\n        clusterManagerNodeArrayReset(node_array);\n    }\n    clusterManagerOptimizeAntiAffinity(ip_nodes, ip_count);\n    clusterManagerShowNodes();\n    int ignore_force = 0;\n    if (confirmWithYes(\"Can I set the above configuration?\", ignore_force)) {\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *node = ln->value;\n            char *err = NULL;\n            int flushed = clusterManagerFlushNodeConfig(node, &err);\n            if (!flushed && node->dirty && !node->replicate) {\n                if (err != NULL) {\n                    CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n                    zfree(err);\n                }\n                success = 0;\n                goto cleanup;\n            } else if (err != NULL) zfree(err);\n        }\n        clusterManagerLogInfo(\">>> Nodes configuration updated\\n\");\n        clusterManagerLogInfo(\">>> Assign a different config epoch to \"\n       
                       \"each node\\n\");\n        int config_epoch = 1;\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *node = ln->value;\n            redisReply *reply = NULL;\n            reply = CLUSTER_MANAGER_COMMAND(node,\n                                            \"cluster set-config-epoch %d\",\n                                            config_epoch++);\n            if (reply != NULL) freeReplyObject(reply);\n        }\n        clusterManagerLogInfo(\">>> Sending CLUSTER MEET messages to join \"\n                              \"the cluster\\n\");\n        clusterManagerNode *first = NULL;\n        char first_ip[NET_IP_STR_LEN]; /* first->ip may be a hostname */\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *node = ln->value;\n            if (first == NULL) {\n                first = node;\n                /* Although hiredis supports connecting to a hostname, CLUSTER\n                 * MEET requires an IP address, so we do a DNS lookup here. 
*/\n                int anet_flags = ANET_NONE;\n                if (config.prefer_ipv4) anet_flags |= ANET_PREFER_IPV4;\n                if (config.prefer_ipv6) anet_flags |= ANET_PREFER_IPV6;\n                if (anetResolve(NULL, first->ip, first_ip, sizeof(first_ip), anet_flags)\n                    == ANET_ERR)\n                {\n                    fprintf(stderr, \"Invalid IP address or hostname specified: %s\\n\", first->ip);\n                    success = 0;\n                    goto cleanup;\n                }\n                continue;\n            }\n            redisReply *reply = NULL;\n            if (first->bus_port == 0 || (first->bus_port == first->port + CLUSTER_MANAGER_PORT_INCR)) {\n                /* CLUSTER MEET bus-port parameter was added in 4.0.\n                 * So if (bus_port == 0) or (bus_port == port + CLUSTER_MANAGER_PORT_INCR),\n                 * we just call CLUSTER MEET with 2 arguments, using the old form. */\n                reply = CLUSTER_MANAGER_COMMAND(node, \"cluster meet %s %d\",\n                                                first_ip, first->port);\n            } else {\n                reply = CLUSTER_MANAGER_COMMAND(node, \"cluster meet %s %d %d\",\n                                                first_ip, first->port, first->bus_port);\n            }\n            int is_err = 0;\n            if (reply != NULL) {\n                if ((is_err = reply->type == REDIS_REPLY_ERROR))\n                    CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, reply->str);\n                freeReplyObject(reply);\n            } else {\n                is_err = 1;\n                fprintf(stderr, \"Failed to send CLUSTER MEET command.\\n\");\n            }\n            if (is_err) {\n                success = 0;\n                goto cleanup;\n            }\n        }\n        /* Give one second for the join to start, in order to avoid that\n         * waiting for cluster join will find all the nodes agree about\n         * the config as 
they are still empty with unassigned slots. */\n        sleep(1);\n        clusterManagerWaitForClusterJoin();\n        /* Useful for the replicas */\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *node = ln->value;\n            if (!node->dirty) continue;\n            char *err = NULL;\n            int flushed = clusterManagerFlushNodeConfig(node, &err);\n            if (!flushed && !node->replicate) {\n                if (err != NULL) {\n                    CLUSTER_MANAGER_PRINT_REPLY_ERROR(node, err);\n                    zfree(err);\n                }\n                success = 0;\n                goto cleanup;\n            } else if (err != NULL) {\n                zfree(err);\n            }\n        }\n        // Reset Nodes\n        listRewind(cluster_manager.nodes, &li);\n        clusterManagerNode *first_node = NULL;\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *node = ln->value;\n            if (!first_node) first_node = node;\n            else freeClusterManagerNode(node);\n        }\n        listEmpty(cluster_manager.nodes);\n        if (!clusterManagerLoadInfoFromNode(first_node)) {\n            success = 0;\n            goto cleanup;\n        }\n        clusterManagerCheckCluster(0);\n    }\ncleanup:\n    /* Free everything */\n    zfree(masters);\n    zfree(ips);\n    for (i = 0; i < node_len; i++) {\n        clusterManagerNodeArray *node_array = ip_nodes + i;\n        CLUSTER_MANAGER_NODE_ARRAY_FREE(node_array);\n    }\n    zfree(ip_nodes);\n    return success;\n}\n\nstatic int clusterManagerCommandAddNode(int argc, char **argv) {\n    int success = 1;\n    redisReply *reply = NULL;\n    redisReply *function_restore_reply = NULL;\n    redisReply *function_list_reply = NULL;\n    char *ref_ip = NULL, *ip = NULL;\n    int ref_port = 0, port = 0;\n    if (!getClusterHostFromCmdArgs(argc - 1, argv + 1, &ref_ip, &ref_port))\n        goto 
invalid_args;\n    if (!getClusterHostFromCmdArgs(1, argv, &ip, &port))\n        goto invalid_args;\n    clusterManagerLogInfo(\">>> Adding node %s:%d to cluster %s:%d\\n\", ip, port,\n                          ref_ip, ref_port);\n    // Check the existing cluster\n    clusterManagerNode *refnode = clusterManagerNewNode(ref_ip, ref_port, 0);\n    if (!clusterManagerLoadInfoFromNode(refnode)) return 0;\n    if (!clusterManagerCheckCluster(0)) return 0;\n\n    /* If --cluster-master-id was specified, try to resolve it now so that we\n     * abort before starting with the node configuration. */\n    clusterManagerNode *master_node = NULL;\n    if (config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_SLAVE) {\n        char *master_id = config.cluster_manager_command.master_id;\n        if (master_id != NULL) {\n            master_node = clusterManagerNodeByName(master_id);\n            if (master_node == NULL) {\n                clusterManagerLogErr(\"[ERR] No such master ID %s\\n\", master_id);\n                return 0;\n            }\n        } else {\n            master_node = clusterManagerNodeWithLeastReplicas();\n            assert(master_node != NULL);\n            printf(\"Automatically selected master %s:%d\\n\", master_node->ip,\n                   master_node->port);\n        }\n    }\n\n    // Add the new node\n    clusterManagerNode *new_node = clusterManagerNewNode(ip, port, 0);\n    int added = 0;\n    if (!clusterManagerNodeConnect(new_node)) {\n        clusterManagerLogErr(\"[ERR] Sorry, can't connect to node %s:%d\\n\",\n                             ip, port);\n        success = 0;\n        goto cleanup;\n    }\n    char *err = NULL;\n    if (!(success = clusterManagerNodeIsCluster(new_node, &err))) {\n        clusterManagerPrintNotClusterNodeError(new_node, err);\n        if (err) zfree(err);\n        goto cleanup;\n    }\n    if (!clusterManagerNodeLoadInfo(new_node, 0, &err)) {\n        if (err) {\n            
CLUSTER_MANAGER_PRINT_REPLY_ERROR(new_node, err);\n            zfree(err);\n        }\n        success = 0;\n        goto cleanup;\n    }\n    if (!(success = clusterManagerNodeIsEmpty(new_node, &err))) {\n        clusterManagerPrintNotEmptyNodeError(new_node, err);\n        if (err) zfree(err);\n        goto cleanup;\n    }\n    clusterManagerNode *first = listFirst(cluster_manager.nodes)->value;\n    listAddNodeTail(cluster_manager.nodes, new_node);\n    added = 1;\n\n    if (!master_node) {\n        /* Send functions to the new node; if the new node is a replica it will get the functions from its primary. */\n        clusterManagerLogInfo(\">>> Getting functions from cluster\\n\");\n        reply = CLUSTER_MANAGER_COMMAND(refnode, \"FUNCTION DUMP\");\n        if (!clusterManagerCheckRedisReply(refnode, reply, &err)) {\n            clusterManagerLogInfo(\">>> Failed retrieving functions from the cluster; \"\n                    \"skipping this step since the Redis version does not support the FUNCTION command (error = '%s')\\n\", err? err : \"NULL reply\");\n            if (err) zfree(err);\n        } else {\n            assert(reply->type == REDIS_REPLY_STRING);\n            clusterManagerLogInfo(\">>> Send FUNCTION LIST to %s:%d to verify there are no functions in it\\n\", ip, port);\n            function_list_reply = CLUSTER_MANAGER_COMMAND(new_node, \"FUNCTION LIST\");\n            if (!clusterManagerCheckRedisReply(new_node, function_list_reply, &err)) {\n                clusterManagerLogErr(\">>> Failed on FUNCTION LIST (error = '%s')\\r\\n\", err? err : \"NULL reply\");\n                if (err) zfree(err);\n                success = 0;\n                goto cleanup;\n            }\n            assert(function_list_reply->type == REDIS_REPLY_ARRAY);\n            if (function_list_reply->elements > 0) {\n                clusterManagerLogErr(\">>> New node already contains functions and cannot be added to the cluster. 
Use FUNCTION FLUSH and try again.\\r\\n\");\n                success = 0;\n                goto cleanup;\n            }\n            clusterManagerLogInfo(\">>> Send FUNCTION RESTORE to %s:%d\\n\", ip, port);\n            function_restore_reply = CLUSTER_MANAGER_COMMAND(new_node, \"FUNCTION RESTORE %b\", reply->str, reply->len);\n            if (!clusterManagerCheckRedisReply(new_node, function_restore_reply, &err)) {\n                clusterManagerLogErr(\">>> Failed loading functions to the new node (error = '%s')\\r\\n\", err? err : \"NULL reply\");\n                if (err) zfree(err);\n                success = 0;\n                goto cleanup;\n            }\n        }\n    }\n\n    if (reply) freeReplyObject(reply);\n\n    // Send CLUSTER MEET command to the new node\n    clusterManagerLogInfo(\">>> Send CLUSTER MEET to node %s:%d to make it \"\n                          \"join the cluster.\\n\", ip, port);\n    /* CLUSTER MEET requires an IP address, so we do a DNS lookup here. */\n    char first_ip[NET_IP_STR_LEN];\n    int anet_flags = ANET_NONE;\n    if (config.prefer_ipv4) anet_flags |= ANET_PREFER_IPV4;\n    if (config.prefer_ipv6) anet_flags |= ANET_PREFER_IPV6;\n    if (anetResolve(NULL, first->ip, first_ip, sizeof(first_ip), anet_flags) == ANET_ERR) {\n        fprintf(stderr, \"Invalid IP address or hostname specified: %s\\n\", first->ip);\n        success = 0;\n        goto cleanup;\n    }\n\n    if (first->bus_port == 0 || (first->bus_port == first->port + CLUSTER_MANAGER_PORT_INCR)) {\n        /* CLUSTER MEET bus-port parameter was added in 4.0.\n         * So if (bus_port == 0) or (bus_port == port + CLUSTER_MANAGER_PORT_INCR),\n         * we just call CLUSTER MEET with 2 arguments, using the old form. 
*/\n        reply = CLUSTER_MANAGER_COMMAND(new_node, \"CLUSTER MEET %s %d\",\n                                        first_ip, first->port);\n    } else {\n        reply = CLUSTER_MANAGER_COMMAND(new_node, \"CLUSTER MEET %s %d %d\",\n                                        first_ip, first->port, first->bus_port);\n    }\n\n    if (!(success = clusterManagerCheckRedisReply(new_node, reply, NULL)))\n        goto cleanup;\n\n    /* Additional configuration is needed if the node is added as a slave. */\n    if (master_node) {\n        sleep(1);\n        clusterManagerWaitForClusterJoin();\n        clusterManagerLogInfo(\">>> Configure node as replica of %s:%d.\\n\",\n                              master_node->ip, master_node->port);\n        freeReplyObject(reply);\n        reply = CLUSTER_MANAGER_COMMAND(new_node, \"CLUSTER REPLICATE %s\",\n                                        master_node->name);\n        if (!(success = clusterManagerCheckRedisReply(new_node, reply, NULL)))\n            goto cleanup;\n    }\n    clusterManagerLogOk(\"[OK] New node added correctly.\\n\");\ncleanup:\n    if (!added && new_node) freeClusterManagerNode(new_node);\n    if (reply) freeReplyObject(reply);\n    if (function_restore_reply) freeReplyObject(function_restore_reply);\n    if (function_list_reply) freeReplyObject(function_list_reply);\n    return success;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandDeleteNode(int argc, char **argv) {\n    UNUSED(argc);\n    int success = 1;\n    int port = 0;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(1, argv, &ip, &port)) goto invalid_args;\n    char *node_id = argv[1];\n    clusterManagerLogInfo(\">>> Removing node %s from cluster %s:%d\\n\",\n                          node_id, ip, port);\n    clusterManagerNode *ref_node = clusterManagerNewNode(ip, port, 0);\n    clusterManagerNode *node = NULL;\n\n    // Load cluster information\n    if 
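The bus-port branch above can be sketched in isolation. This is a minimal illustration, not redis-cli code: `meet_needs_bus_port` is a hypothetical helper name, and 10000 matches the default `CLUSTER_MANAGER_PORT_INCR` offset.

```c
#include <assert.h>

#define CLUSTER_MANAGER_PORT_INCR 10000 /* default bus-port offset, as in redis-cli */

/* Hypothetical helper mirroring the branch above: the three-argument
 * CLUSTER MEET form (added in Redis 4.0) is only needed when a
 * non-default cluster bus port was configured. */
static int meet_needs_bus_port(int port, int bus_port) {
    if (bus_port == 0 || bus_port == port + CLUSTER_MANAGER_PORT_INCR)
        return 0; /* old two-argument form: CLUSTER MEET ip port */
    return 1;     /* new form: CLUSTER MEET ip port bus_port */
}
```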
(!clusterManagerLoadInfoFromNode(ref_node)) return 0;\n\n    // Check if the node exists and is not empty\n    node = clusterManagerNodeByName(node_id);\n    if (node == NULL) {\n        clusterManagerLogErr(\"[ERR] No such node ID %s\\n\", node_id);\n        return 0;\n    }\n    if (node->slots_count != 0) {\n        clusterManagerLogErr(\"[ERR] Node %s:%d is not empty! Reshard data \"\n                             \"away and try again.\\n\", node->ip, node->port);\n        return 0;\n    }\n\n    // Send CLUSTER FORGET to all the nodes but the node to remove\n    clusterManagerLogInfo(\">>> Sending CLUSTER FORGET messages to the \"\n                          \"cluster...\\n\");\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n == node) continue;\n        if (n->replicate && !strcasecmp(n->replicate, node_id)) {\n            // Reconfigure the slave to replicate with some other node\n            clusterManagerNode *master = clusterManagerNodeWithLeastReplicas();\n            assert(master != NULL);\n            clusterManagerLogInfo(\">>> %s:%d as replica of %s:%d\\n\",\n                                  n->ip, n->port, master->ip, master->port);\n            redisReply *r = CLUSTER_MANAGER_COMMAND(n, \"CLUSTER REPLICATE %s\",\n                                                    master->name);\n            success = clusterManagerCheckRedisReply(n, r, NULL);\n            if (r) freeReplyObject(r);\n            if (!success) return 0;\n        }\n        redisReply *r = CLUSTER_MANAGER_COMMAND(n, \"CLUSTER FORGET %s\",\n                                                node_id);\n        success = clusterManagerCheckRedisReply(n, r, NULL);\n        if (r) freeReplyObject(r);\n        if (!success) return 0;\n    }\n\n    /* Finally send CLUSTER RESET to the node. 
*/\n    clusterManagerLogInfo(\">>> Sending CLUSTER RESET SOFT to the \"\n                          \"deleted node.\\n\");\n    redisReply *r = redisCommand(node->context, \"CLUSTER RESET %s\", \"SOFT\");\n    success = clusterManagerCheckRedisReply(node, r, NULL);\n    if (r) freeReplyObject(r);\n    return success;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandInfo(int argc, char **argv) {\n    int port = 0;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(argc, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(node)) return 0;\n    clusterManagerShowClusterInfo();\n    return 1;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandCheck(int argc, char **argv) {\n    int port = 0;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(argc, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(node)) return 0;\n    clusterManagerShowClusterInfo();\n    return clusterManagerCheckCluster(0);\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandFix(int argc, char **argv) {\n    config.cluster_manager_command.flags |= CLUSTER_MANAGER_CMD_FLAG_FIX;\n    return clusterManagerCommandCheck(argc, argv);\n}\n\nstatic int clusterManagerCommandReshard(int argc, char **argv) {\n    int port = 0;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(argc, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(node)) return 0;\n    clusterManagerCheckCluster(0);\n    if (cluster_manager.errors && listLength(cluster_manager.errors) > 0) {\n        fflush(stdout);\n       
 fprintf(stderr,\n                \"*** Please fix your cluster problems before resharding\\n\");\n        return 0;\n    }\n    int slots = config.cluster_manager_command.slots;\n    if (!slots) {\n        while (slots <= 0 || slots > CLUSTER_MANAGER_SLOTS) {\n            printf(\"How many slots do you want to move (from 1 to %d)? \",\n                   CLUSTER_MANAGER_SLOTS);\n            fflush(stdout);\n            char buf[6];\n            int nread = read(fileno(stdin),buf,6);\n            if (nread <= 0) continue;\n            int last_idx = nread - 1;\n            if (buf[last_idx] != '\\n') {\n                int ch;\n                while ((ch = getchar()) != '\\n' && ch != EOF) {}\n            }\n            buf[last_idx] = '\\0';\n            slots = atoi(buf);\n        }\n    }\n    char buf[255];\n    char *to = config.cluster_manager_command.to,\n         *from = config.cluster_manager_command.from;\n    while (to == NULL) {\n        printf(\"What is the receiving node ID? 
\");\n        fflush(stdout);\n        int nread = read(fileno(stdin),buf,255);\n        if (nread <= 0) continue;\n        int last_idx = nread - 1;\n        if (buf[last_idx] != '\\n') {\n            int ch;\n            while ((ch = getchar()) != '\\n' && ch != EOF) {}\n        }\n        buf[last_idx] = '\\0';\n        if (strlen(buf) > 0) to = buf;\n    }\n    int raise_err = 0;\n    clusterManagerNode *target = clusterNodeForResharding(to, NULL, &raise_err);\n    if (target == NULL) return 0;\n    list *sources = listCreate();\n    list *table = NULL;\n    int all = 0, result = 1;\n    if (from == NULL) {\n        printf(\"Please enter all the source node IDs.\\n\");\n        printf(\"  Type 'all' to use all the nodes as source nodes for \"\n               \"the hash slots.\\n\");\n        printf(\"  Type 'done' once you entered all the source nodes IDs.\\n\");\n        while (1) {\n            printf(\"Source node #%lu: \", listLength(sources) + 1);\n            fflush(stdout);\n            int nread = read(fileno(stdin),buf,255);\n            if (nread <= 0) continue;\n            int last_idx = nread - 1;\n            if (buf[last_idx] != '\\n') {\n                int ch;\n                while ((ch = getchar()) != '\\n' && ch != EOF) {}\n            }\n            buf[last_idx] = '\\0';\n            if (!strcmp(buf, \"done\")) break;\n            else if (!strcmp(buf, \"all\")) {\n                all = 1;\n                break;\n            } else {\n                clusterManagerNode *src =\n                    clusterNodeForResharding(buf, target, &raise_err);\n                if (src != NULL) listAddNodeTail(sources, src);\n                else if (raise_err) {\n                    result = 0;\n                    goto cleanup;\n                }\n            }\n        }\n    } else {\n        char *p;\n        while((p = strchr(from, ',')) != NULL) {\n            *p = '\\0';\n            if (!strcmp(from, \"all\")) {\n                all = 1;\n      
          break;\n            } else {\n                clusterManagerNode *src =\n                    clusterNodeForResharding(from, target, &raise_err);\n                if (src != NULL) listAddNodeTail(sources, src);\n                else if (raise_err) {\n                    result = 0;\n                    goto cleanup;\n                }\n            }\n            from = p + 1;\n        }\n        /* Check if there's still another source to process. */\n        if (!all && strlen(from) > 0) {\n            if (!strcmp(from, \"all\")) all = 1;\n            if (!all) {\n                clusterManagerNode *src =\n                    clusterNodeForResharding(from, target, &raise_err);\n                if (src != NULL) listAddNodeTail(sources, src);\n                else if (raise_err) {\n                    result = 0;\n                    goto cleanup;\n                }\n            }\n        }\n    }\n    listIter li;\n    listNode *ln;\n    if (all) {\n        listEmpty(sources);\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE || n->replicate)\n                continue;\n            if (!sdscmp(n->name, target->name)) continue;\n            listAddNodeTail(sources, n);\n        }\n    }\n    if (listLength(sources) == 0) {\n        fprintf(stderr, \"*** No source nodes given, operation aborted.\\n\");\n        result = 0;\n        goto cleanup;\n    }\n    printf(\"\\nReady to move %d slots.\\n\", slots);\n    printf(\"  Source nodes:\\n\");\n    listRewind(sources, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *src = ln->value;\n        sds info = clusterManagerNodeInfo(src, 4);\n        printf(\"%s\\n\", info);\n        sdsfree(info);\n    }\n    printf(\"  Destination node:\\n\");\n    sds info = clusterManagerNodeInfo(target, 4);\n    printf(\"%s\\n\", info);\n    
sdsfree(info);\n    table = clusterManagerComputeReshardTable(sources, slots);\n    printf(\"  Resharding plan:\\n\");\n    clusterManagerShowReshardTable(table);\n    if (!(config.cluster_manager_command.flags &\n          CLUSTER_MANAGER_CMD_FLAG_YES))\n    {\n        printf(\"Do you want to proceed with the proposed \"\n               \"reshard plan (yes/no)? \");\n        fflush(stdout);\n        char buf[4];\n        int nread = read(fileno(stdin),buf,4);\n        buf[3] = '\\0';\n        if (nread <= 0 || strcmp(\"yes\", buf) != 0) {\n            result = 0;\n            goto cleanup;\n        }\n    }\n    int opts = CLUSTER_MANAGER_OPT_VERBOSE;\n    listRewind(table, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerReshardTableItem *item = ln->value;\n        char *err = NULL;\n        result = clusterManagerMoveSlot(item->source, target, item->slot,\n                                        opts, &err);\n        if (!result) {\n            if (err != NULL) {\n                clusterManagerLogErr(\"clusterManagerMoveSlot failed: %s\\n\", err);\n                zfree(err);\n            }\n            goto cleanup;\n        }\n    }\ncleanup:\n    listRelease(sources);\n    clusterManagerReleaseReshardTable(table);\n    return result;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandRebalance(int argc, char **argv) {\n    int port = 0;\n    char *ip = NULL;\n    clusterManagerNode **weightedNodes = NULL;\n    list *involved = NULL;\n    if (!getClusterHostFromCmdArgs(argc, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(node)) return 0;\n    int result = 1, i;\n    if (config.cluster_manager_command.weight != NULL) {\n        for (i = 0; i < config.cluster_manager_command.weight_argc; i++) {\n            char *name = config.cluster_manager_command.weight[i];\n       
     char *p = strchr(name, '=');\n            if (p == NULL) {\n                clusterManagerLogErr(\"*** invalid input %s\\n\", name);\n                result = 0;\n                goto cleanup;\n            }\n            *p = '\\0';\n            float w = atof(++p);\n            clusterManagerNode *n = clusterManagerNodeByAbbreviatedName(name);\n            if (n == NULL) {\n                clusterManagerLogErr(\"*** No such master node %s\\n\", name);\n                result = 0;\n                goto cleanup;\n            }\n            n->weight = w;\n        }\n    }\n    float total_weight = 0;\n    int nodes_involved = 0;\n    int use_empty = config.cluster_manager_command.flags &\n                    CLUSTER_MANAGER_CMD_FLAG_EMPTYMASTER;\n    involved = listCreate();\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    /* Compute the total cluster weight. */\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE || n->replicate)\n            continue;\n        if (!use_empty && n->slots_count == 0) {\n            n->weight = 0;\n            continue;\n        }\n        total_weight += n->weight;\n        nodes_involved++;\n        listAddNodeTail(involved, n);\n    }\n    weightedNodes = zmalloc(nodes_involved * sizeof(clusterManagerNode *));\n    if (weightedNodes == NULL) goto cleanup;\n    /* Check cluster, only proceed if it looks sane. */\n    clusterManagerCheckCluster(1);\n    if (cluster_manager.errors && listLength(cluster_manager.errors) > 0) {\n        clusterManagerLogErr(\"*** Please fix your cluster problems \"\n                             \"before rebalancing\\n\");\n        result = 0;\n        goto cleanup;\n    }\n    /* Calculate the slots balance for each node. It's the number of\n     * slots the node should lose (if positive) or gain (if negative)\n     * in order to be balanced. 
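The `--cluster-weight name=N` parsing above (split on `'='`, convert with `atof()`) can be shown standalone. A minimal sketch with a hypothetical `parse_weight` helper; the buffer is modified in place, as in the original:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch of the weight-spec parsing above: "name=N" is split at the
 * first '=', the name is null-terminated in place, and the weight is
 * converted with atof(). Returns 0 on malformed input. */
static int parse_weight(char *spec, char **name, float *w) {
    char *p = strchr(spec, '=');
    if (p == NULL) return 0; /* no '=' separator: invalid input */
    *p = '\0';               /* terminate the node name */
    *w = atof(p + 1);
    *name = spec;
    return 1;
}
```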
*/\n    int threshold_reached = 0, total_balance = 0;\n    float threshold = config.cluster_manager_command.threshold;\n    i = 0;\n    listRewind(involved, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        weightedNodes[i++] = n;\n        int expected = (int) (((float)CLUSTER_MANAGER_SLOTS / total_weight) *\n                        n->weight);\n        n->balance = n->slots_count - expected;\n        total_balance += n->balance;\n        /* Compute the percentage of difference between the\n         * expected number of slots and the real one, to see\n         * if it's over the threshold specified by the user. */\n        int over_threshold = 0;\n        if (threshold > 0) {\n            if (n->slots_count > 0) {\n                float err_perc = fabs((100-(100.0*expected/n->slots_count)));\n                if (err_perc > threshold) over_threshold = 1;\n            } else if (expected > 1) {\n                over_threshold = 1;\n            }\n        }\n        if (over_threshold) threshold_reached = 1;\n    }\n    if (!threshold_reached) {\n        clusterManagerLogWarn(\"*** No rebalancing needed! \"\n                             \"All nodes are within the %.2f%% threshold.\\n\",\n                             config.cluster_manager_command.threshold);\n        goto cleanup;\n    }\n    /* Because of rounding, it is possible that the balance of all nodes\n     * summed does not give 0. Make sure that nodes that have to provide\n     * slots are always matched by nodes receiving slots. */\n    while (total_balance > 0) {\n        listRewind(involved, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n->balance <= 0 && total_balance > 0) {\n                n->balance--;\n                total_balance--;\n            }\n        }\n    }\n    /* Sort nodes by their slots balance. 
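The balance arithmetic above can be checked with small numbers. A sketch of the two formulas, using hypothetical helper names; with three equal-weight masters each node's expected share is `(int)(16384/3) = 5461` slots, and a node holding 6000 slots deviates by about 9%, which exceeds the default 2% threshold:

```c
#include <math.h>

#define CLUSTER_MANAGER_SLOTS 16384

/* Expected slot count: the node's share of 16384 by weight, as
 * computed in the loop above. The balance is then
 * slots_count - expected (positive = must shed slots). */
static int expected_slots(float weight, float total_weight) {
    return (int)(((float)CLUSTER_MANAGER_SLOTS / total_weight) * weight);
}

/* Percentage deviation between expected and actual slot counts,
 * used for the threshold check above. */
static float deviation_percent(int expected, int slots_count) {
    return fabs(100 - (100.0 * expected / slots_count));
}
```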
*/\n    qsort(weightedNodes, nodes_involved, sizeof(clusterManagerNode *),\n          clusterManagerCompareNodeBalance);\n    clusterManagerLogInfo(\">>> Rebalancing across %d nodes. \"\n                          \"Total weight = %.2f\\n\",\n                          nodes_involved, total_weight);\n    if (config.verbose) {\n        for (i = 0; i < nodes_involved; i++) {\n            clusterManagerNode *n = weightedNodes[i];\n            printf(\"%s:%d balance is %d slots\\n\", n->ip, n->port, n->balance);\n        }\n    }\n    /* Now the start of the 'weightedNodes' array holds nodes that should\n     * receive slots, and the end holds nodes that must give up slots.\n     * We take two indexes, one at the start and one at the end,\n     * incrementing or decrementing the indexes accordingly until we\n     * find nodes that need to get/provide slots. */\n    int dst_idx = 0;\n    int src_idx = nodes_involved - 1;\n    int simulate = config.cluster_manager_command.flags &\n                   CLUSTER_MANAGER_CMD_FLAG_SIMULATE;\n    while (dst_idx < src_idx) {\n        clusterManagerNode *dst = weightedNodes[dst_idx];\n        clusterManagerNode *src = weightedNodes[src_idx];\n        int db = abs(dst->balance);\n        int sb = abs(src->balance);\n        int numslots = (db < sb ? db : sb);\n        if (numslots > 0) {\n            printf(\"Moving %d slots from %s:%d to %s:%d\\n\", numslots,\n                                                            src->ip,\n                                                            src->port,\n                                                            dst->ip,\n                                                            dst->port);\n            /* Actually move the slots. 
*/\n            list *lsrc = listCreate(), *table = NULL;\n            listAddNodeTail(lsrc, src);\n            table = clusterManagerComputeReshardTable(lsrc, numslots);\n            listRelease(lsrc);\n            int table_len = table ? (int) listLength(table) : 0;\n            if (!table || table_len != numslots) {\n                clusterManagerLogErr(\"*** Assertion failed: Reshard table \"\n                                     \"!= number of slots\\n\");\n                result = 0;\n                goto end_move;\n            }\n            if (simulate) {\n                for (i = 0; i < table_len; i++) printf(\"#\");\n            } else {\n                int opts = CLUSTER_MANAGER_OPT_QUIET |\n                           CLUSTER_MANAGER_OPT_UPDATE;\n                listRewind(table, &li);\n                while ((ln = listNext(&li)) != NULL) {\n                    clusterManagerReshardTableItem *item = ln->value;\n                    char *err = NULL;\n                    result = clusterManagerMoveSlot(item->source,\n                                                    dst,\n                                                    item->slot,\n                                                    opts, &err);\n                    if (!result) {\n                        clusterManagerLogErr(\"*** clusterManagerMoveSlot: %s\\n\",\n                                             err ? err : \"NULL reply\");\n                        if (err) zfree(err);\n                        goto end_move;\n                    }\n                    printf(\"#\");\n                    fflush(stdout);\n                }\n            }\n            printf(\"\\n\");\nend_move:\n            clusterManagerReleaseReshardTable(table);\n            if (!result) goto cleanup;\n        }\n        /* Update nodes balance. 
*/\n        dst->balance += numslots;\n        src->balance -= numslots;\n        if (dst->balance == 0) dst_idx++;\n        if (src->balance == 0) src_idx --;\n    }\ncleanup:\n    if (involved != NULL) listRelease(involved);\n    if (weightedNodes != NULL) zfree(weightedNodes);\n    return result;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandSetTimeout(int argc, char **argv) {\n    UNUSED(argc);\n    int port = 0;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(1, argv, &ip, &port)) goto invalid_args;\n    int timeout = atoi(argv[1]);\n    if (timeout < 100) {\n        fprintf(stderr, \"Setting a node timeout of less than 100 \"\n                \"milliseconds is a bad idea.\\n\");\n        return 0;\n    }\n    // Load cluster information\n    clusterManagerNode *node = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(node)) return 0;\n    int ok_count = 0, err_count = 0;\n\n    clusterManagerLogInfo(\">>> Reconfiguring node timeout in every \"\n                          \"cluster node...\\n\");\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        char *err = NULL;\n        redisReply *reply = CLUSTER_MANAGER_COMMAND(n, \"CONFIG %s %s %d\",\n                                                    \"SET\",\n                                                    \"cluster-node-timeout\",\n                                                    timeout);\n        if (reply == NULL) goto reply_err;\n        int ok = clusterManagerCheckRedisReply(n, reply, &err);\n        freeReplyObject(reply);\n        if (!ok) goto reply_err;\n        reply = CLUSTER_MANAGER_COMMAND(n, \"CONFIG %s\", \"REWRITE\");\n        if (reply == NULL) goto reply_err;\n        ok = clusterManagerCheckRedisReply(n, reply, &err);\n        freeReplyObject(reply);\n   
     if (!ok) goto reply_err;\n        clusterManagerLogWarn(\"*** New timeout set for %s:%d\\n\", n->ip,\n                              n->port);\n        ok_count++;\n        continue;\nreply_err:;\n        int need_free = 0;\n        if (err == NULL) err = \"\";\n        else need_free = 1;\n        clusterManagerLogErr(\"ERR setting node-timeout for %s:%d: %s\\n\", n->ip,\n                             n->port, err);\n        if (need_free) zfree(err);\n        err_count++;\n    }\n    clusterManagerLogInfo(\">>> New node timeout set. %d OK, %d ERR.\\n\",\n                          ok_count, err_count);\n    return 1;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandImport(int argc, char **argv) {\n    int success = 1;\n    int port = 0, src_port = 0;\n    char *ip = NULL, *src_ip = NULL;\n    char *invalid_args_msg = NULL;\n    sds cmdfmt = NULL;\n    if (!getClusterHostFromCmdArgs(argc, argv, &ip, &port)) {\n        invalid_args_msg = CLUSTER_MANAGER_INVALID_HOST_ARG;\n        goto invalid_args;\n    }\n    if (config.cluster_manager_command.from == NULL) {\n        invalid_args_msg = \"[ERR] Option '--cluster-from' is required for \"\n                           \"subcommand 'import'.\\n\";\n        goto invalid_args;\n    }\n    char *src_host[] = {config.cluster_manager_command.from};\n    if (!getClusterHostFromCmdArgs(1, src_host, &src_ip, &src_port)) {\n        invalid_args_msg = \"[ERR] Invalid --cluster-from host. You need to \"\n                           \"pass a valid address (i.e. 
127.0.0.1:7000).\\n\";\n        goto invalid_args;\n    }\n    clusterManagerLogInfo(\">>> Importing data from %s:%d to cluster %s:%d\\n\",\n                          src_ip, src_port, ip, port);\n\n    clusterManagerNode *refnode = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(refnode)) return 0;\n    if (!clusterManagerCheckCluster(0)) return 0;\n    char *reply_err = NULL;\n    redisReply *src_reply = NULL;\n    // Connect to the source node.\n    redisContext *src_ctx = redisConnectWrapper(src_ip, src_port, config.connect_timeout);\n    if (src_ctx->err) {\n        success = 0;\n        fprintf(stderr,\"Could not connect to Redis at %s:%d: %s.\\n\", src_ip,\n                src_port, src_ctx->errstr);\n        goto cleanup;\n    }\n    // Auth for the source node.\n    char *from_user = config.cluster_manager_command.from_user;\n    char *from_pass = config.cluster_manager_command.from_pass;\n    if (cliAuth(src_ctx, from_user, from_pass) == REDIS_ERR) {\n        success = 0;\n        goto cleanup;\n    }\n\n    src_reply = reconnectingRedisCommand(src_ctx, \"INFO\");\n    if (!src_reply || src_reply->type == REDIS_REPLY_ERROR) {\n        if (src_reply && src_reply->str) reply_err = src_reply->str;\n        success = 0;\n        goto cleanup;\n    }\n    if (getLongInfoField(src_reply->str, \"cluster_enabled\")) {\n        clusterManagerLogErr(\"[ERR] The source node should not be a \"\n                             \"cluster node.\\n\");\n        success = 0;\n        goto cleanup;\n    }\n    freeReplyObject(src_reply);\n    src_reply = reconnectingRedisCommand(src_ctx, \"DBSIZE\");\n    if (!src_reply || src_reply->type == REDIS_REPLY_ERROR) {\n        if (src_reply && src_reply->str) reply_err = src_reply->str;\n        success = 0;\n        goto cleanup;\n    }\n    int size = src_reply->integer, i;\n    clusterManagerLogWarn(\"*** Importing %d keys from DB 0\\n\", size);\n\n    // Build a slot -> node map\n    
clusterManagerNode  *slots_map[CLUSTER_MANAGER_SLOTS];\n    memset(slots_map, 0, sizeof(slots_map));\n    listIter li;\n    listNode *ln;\n    for (i = 0; i < CLUSTER_MANAGER_SLOTS; i++) {\n        listRewind(cluster_manager.nodes, &li);\n        while ((ln = listNext(&li)) != NULL) {\n            clusterManagerNode *n = ln->value;\n            if (n->flags & CLUSTER_MANAGER_FLAG_SLAVE) continue;\n            if (n->slots_count == 0) continue;\n            if (n->slots[i]) {\n                slots_map[i] = n;\n                break;\n            }\n        }\n    }\n    cmdfmt = sdsnew(\"MIGRATE %s %d %s %d %d\");\n    if (config.conn_info.auth) {\n        if (config.conn_info.user) {\n            cmdfmt = sdscatfmt(cmdfmt,\" AUTH2 %s %s\", config.conn_info.user, config.conn_info.auth); \n        } else {\n            cmdfmt = sdscatfmt(cmdfmt,\" AUTH %s\", config.conn_info.auth);\n        }\n    }\n\n    if (config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_COPY)\n        cmdfmt = sdscat(cmdfmt,\" COPY\");\n    if (config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_REPLACE)\n        cmdfmt = sdscat(cmdfmt,\" REPLACE\");\n\n    /* Use SCAN to iterate over the keys, migrating to the\n     * right node as needed. 
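The slot lookup above ultimately depends on `clusterManagerKeyHashSlot()`, which follows the Redis Cluster hash-tag rule: if a key contains `{...}` with a non-empty tag, only the tag is hashed (CRC16 mod 16384), so keys like `{user}.a` and `{user}.b` land on the same slot. A sketch of just the tag-extraction part (the CRC16 itself is omitted); `hash_tag` is a hypothetical name:

```c
#include <string.h>

/* Return the substring of key that is actually hashed by the cluster:
 * the text between the first '{' and the next '}', if that tag is
 * non-empty; otherwise the whole key. taglen receives its length. */
static const char *hash_tag(const char *key, int keylen, int *taglen) {
    int s, e;
    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;
    if (s == keylen) goto whole;               /* no '{': hash whole key */
    for (e = s + 1; e < keylen; e++)
        if (key[e] == '}') break;
    if (e == keylen || e == s + 1) goto whole; /* no '}' or empty "{}" */
    *taglen = e - s - 1;
    return key + s + 1;
whole:
    *taglen = keylen;
    return key;
}
```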
*/\n    int cursor = -999, timeout = config.cluster_manager_command.timeout;\n    while (cursor != 0) {\n        if (cursor < 0) cursor = 0;\n        freeReplyObject(src_reply);\n        src_reply = reconnectingRedisCommand(src_ctx, \"SCAN %d COUNT %d\",\n                                             cursor, 1000);\n        if (!src_reply || src_reply->type == REDIS_REPLY_ERROR) {\n            if (src_reply && src_reply->str) reply_err = src_reply->str;\n            success = 0;\n            goto cleanup;\n        }\n        assert(src_reply->type == REDIS_REPLY_ARRAY);\n        assert(src_reply->elements >= 2);\n        assert(src_reply->element[1]->type == REDIS_REPLY_ARRAY);\n        if (src_reply->element[0]->type == REDIS_REPLY_STRING)\n            cursor = atoi(src_reply->element[0]->str);\n        else if (src_reply->element[0]->type == REDIS_REPLY_INTEGER)\n            cursor = src_reply->element[0]->integer;\n        int keycount = src_reply->element[1]->elements;\n        for (i = 0; i < keycount; i++) {\n            redisReply *kr = src_reply->element[1]->element[i];\n            assert(kr->type == REDIS_REPLY_STRING);\n            char *key = kr->str;\n            uint16_t slot = clusterManagerKeyHashSlot(key, kr->len);\n            clusterManagerNode *target = slots_map[slot];\n            printf(\"Migrating %s to %s:%d: \", key, target->ip, target->port);\n            redisReply *r = reconnectingRedisCommand(src_ctx, cmdfmt,\n                                                     target->ip, target->port,\n                                                     key, 0, timeout);\n            if (!r || r->type == REDIS_REPLY_ERROR) {\n                if (r && r->str) {\n                    clusterManagerLogErr(\"Source %s:%d replied with \"\n                                         \"error:\\n%s\\n\", src_ip, src_port,\n                                         r->str);\n                }\n                success = 0;\n            }\n            
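The SCAN driver above uses a `-999` sentinel so the `while (cursor != 0)` loop runs at least once, with the first real call starting at cursor 0. A self-contained sketch of that control flow against a fake, sequential cursor (real SCAN cursors are opaque and not sequential):

```c
/* Fake scan over a 10-key space, 4 keys per page; returns the next
 * cursor, or 0 when iteration is complete (as SCAN does). */
static int fake_scan(int cursor, int *nkeys) {
    if (cursor >= 10) { *nkeys = 0; return 0; }
    int next = cursor + 4;
    *nkeys = (next > 10 ? 10 - cursor : 4);
    return next >= 10 ? 0 : next;
}

/* Mirrors the loop above: the -999 sentinel forces entry, the first
 * iteration clamps it to 0, and the loop stops on cursor 0. */
static int scan_all(void) {
    int total = 0, nkeys;
    int cursor = -999;              /* sentinel: always enter the loop */
    while (cursor != 0) {
        if (cursor < 0) cursor = 0; /* first call starts at cursor 0 */
        cursor = fake_scan(cursor, &nkeys);
        total += nkeys;
    }
    return total;
}
```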
freeReplyObject(r);\n            if (!success) goto cleanup;\n            clusterManagerLogOk(\"OK\\n\");\n        }\n    }\ncleanup:\n    if (reply_err)\n        clusterManagerLogErr(\"Source %s:%d replied with error:\\n%s\\n\",\n                             src_ip, src_port, reply_err);\n    if (src_ctx) redisFree(src_ctx);\n    if (src_reply) freeReplyObject(src_reply);\n    if (cmdfmt) sdsfree(cmdfmt);\n    return success;\ninvalid_args:\n    fprintf(stderr, \"%s\", invalid_args_msg);\n    return 0;\n}\n\nstatic int clusterManagerCommandCall(int argc, char **argv) {\n    int port = 0, i;\n    char *ip = NULL;\n    if (!getClusterHostFromCmdArgs(1, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *refnode = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(refnode)) return 0;\n    argc--;\n    argv++;\n    size_t *argvlen = zmalloc(argc*sizeof(size_t));\n    clusterManagerLogInfo(\">>> Calling\");\n    for (i = 0; i < argc; i++) {\n        argvlen[i] = strlen(argv[i]);\n        printf(\" %s\", argv[i]);\n    }\n    printf(\"\\n\");\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        clusterManagerNode *n = ln->value;\n        if ((config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_MASTERS_ONLY)\n              && (n->replicate != NULL)) continue;  // continue if node is slave\n        if ((config.cluster_manager_command.flags & CLUSTER_MANAGER_CMD_FLAG_SLAVES_ONLY)\n              && (n->replicate == NULL)) continue;   // continue if node is master\n        if (!n->context && !clusterManagerNodeConnect(n)) continue;\n        redisReply *reply = NULL;\n        redisAppendCommandArgv(n->context, argc, (const char **) argv, argvlen);\n        int status = redisGetReply(n->context, (void **)(&reply));\n        if (status != REDIS_OK || reply == NULL )\n            printf(\"%s:%d: Failed!\\n\", n->ip, n->port);\n        else {\n        
    sds formatted_reply = cliFormatReplyRaw(reply);\n            printf(\"%s:%d: %s\\n\", n->ip, n->port, (char *) formatted_reply);\n            sdsfree(formatted_reply);\n        }\n        if (reply != NULL) freeReplyObject(reply);\n    }\n    zfree(argvlen);\n    return 1;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandBackup(int argc, char **argv) {\n    UNUSED(argc);\n    int success = 1, port = 0;\n    char *ip = NULL;\n    sds json = NULL;\n    sds jsonpath = NULL;\n\n    if (!getClusterHostFromCmdArgs(1, argv, &ip, &port)) goto invalid_args;\n    clusterManagerNode *refnode = clusterManagerNewNode(ip, port, 0);\n    if (!clusterManagerLoadInfoFromNode(refnode)) return 0;\n    int no_issues = clusterManagerCheckCluster(0);\n    int cluster_errors_count = (no_issues ? 0 :\n                                listLength(cluster_manager.errors));\n    config.cluster_manager_command.backup_dir = argv[1];\n\n    struct stat st = {0};\n    char *backup_dir = config.cluster_manager_command.backup_dir;\n\n    if (stat(backup_dir, &st) == -1) {\n        if (errno == ENOENT) {\n            clusterManagerLogErr(\"[ERR] The specified backup directory '%s' does not exist.\\n\", backup_dir);\n        } else {\n            clusterManagerLogErr(\"[ERR] Cannot stat backup directory %s: %s\\n\",\n                                 backup_dir, strerror(errno));\n        }\n        success = 0;\n        goto cleanup;\n    } else if (!S_ISDIR(st.st_mode)) {\n        clusterManagerLogErr(\"[ERR] The specified backup path '%s' exists but is not a directory.\\n\", backup_dir);\n        success = 0;\n        goto cleanup;\n    }\n\n    json = sdsnew(\"[\\n\");\n    int first_node = 0;\n    listIter li;\n    listNode *ln;\n    listRewind(cluster_manager.nodes, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        if (!first_node) first_node = 1;\n        else json = sdscat(json, \",\\n\");\n        
clusterManagerNode *node = ln->value;\n        sds node_json = clusterManagerNodeGetJSON(node, cluster_errors_count);\n        json = sdscat(json, node_json);\n        sdsfree(node_json);\n        if (node->replicate)\n            continue;\n        clusterManagerLogInfo(\">>> Node %s:%d -> Saving RDB...\\n\",\n                              node->ip, node->port);\n        fflush(stdout);\n        getRDB(node);\n    }\n    json = sdscat(json, \"\\n]\");\n    jsonpath = sdsnew(config.cluster_manager_command.backup_dir);\n    if (jsonpath[sdslen(jsonpath) - 1] != '/')\n        jsonpath = sdscat(jsonpath, \"/\");\n    jsonpath = sdscat(jsonpath, \"nodes.json\");\n    fflush(stdout);\n    clusterManagerLogInfo(\"Saving cluster configuration to: %s\\n\", jsonpath);\n    FILE *out = fopen(jsonpath, \"w+\");\n    if (!out) {\n        clusterManagerLogErr(\"Could not save nodes to: %s\\n\", jsonpath);\n        success = 0;\n        goto cleanup;\n    }\n    fputs(json, out);\n    fclose(out);\ncleanup:\n    sdsfree(json);\n    sdsfree(jsonpath);\n    if (success) {\n        if (!no_issues) {\n            clusterManagerLogWarn(\"*** Cluster seems to have some problems, \"\n                                  \"please be aware of this if you're going \"\n                                  \"to restore this backup.\\n\");\n        }\n        clusterManagerLogOk(\"[OK] Backup created into: %s\\n\",\n                            config.cluster_manager_command.backup_dir);\n    } else clusterManagerLogErr(\"[ERR] Failed to back up cluster!\\n\");\n    return success;\ninvalid_args:\n    fprintf(stderr, CLUSTER_MANAGER_INVALID_HOST_ARG);\n    return 0;\n}\n\nstatic int clusterManagerCommandHelp(int argc, char **argv) {\n    UNUSED(argc);\n    UNUSED(argv);\n    int commands_count = sizeof(clusterManagerCommands) /\n                         sizeof(clusterManagerCommandDef);\n    int i = 0, j;\n    fprintf(stdout, \"Cluster Manager Commands:\\n\");\n    int padding = 15;\n    for (; i < 
commands_count; i++) {\n        clusterManagerCommandDef *def = &(clusterManagerCommands[i]);\n        int namelen = strlen(def->name), padlen = padding - namelen;\n        fprintf(stdout, \"  %s\", def->name);\n        for (j = 0; j < padlen; j++) fprintf(stdout, \" \");\n        fprintf(stdout, \"%s\\n\", (def->args ? def->args : \"\"));\n        if (def->options != NULL) {\n            int optslen = strlen(def->options);\n            char *p = def->options, *eos = p + optslen;\n            char *comma = NULL;\n            while ((comma = strchr(p, ',')) != NULL) {\n                int deflen = (int)(comma - p);\n                char buf[255];\n                memcpy(buf, p, deflen);\n                buf[deflen] = '\\0';\n                for (j = 0; j < padding; j++) fprintf(stdout, \" \");\n                fprintf(stdout, \"  --cluster-%s\\n\", buf);\n                p = comma + 1;\n                if (p >= eos) break;\n            }\n            if (p < eos) {\n                for (j = 0; j < padding; j++) fprintf(stdout, \" \");\n                fprintf(stdout, \"  --cluster-%s\\n\", p);\n            }\n        }\n    }\n    fprintf(stdout, \"\\nFor check, fix, reshard, del-node, set-timeout, \"\n                    \"info, rebalance, call, import, backup you \"\n                    \"can specify the host and port of any working node in \"\n                    \"the cluster.\\n\");\n\n    int options_count = sizeof(clusterManagerOptions) /\n                        sizeof(clusterManagerOptionDef);\n    i = 0;\n    fprintf(stdout, \"\\nCluster Manager Options:\\n\");\n    for (; i < options_count; i++) {\n        clusterManagerOptionDef *def = &(clusterManagerOptions[i]);\n        int namelen = strlen(def->name), padlen = padding - namelen;\n        fprintf(stdout, \"  %s\", def->name);\n        for (j = 0; j < padlen; j++) fprintf(stdout, \" \");\n        fprintf(stdout, \"%s\\n\", def->desc);\n    }\n\n    fprintf(stdout, \"\\n\");\n    return 
0;\n}\n\n/*------------------------------------------------------------------------------\n * Latency and latency history modes\n *--------------------------------------------------------------------------- */\n\nstatic void latencyModePrint(long long min, long long max, double avg, long long count) {\n    if (config.output == OUTPUT_STANDARD) {\n        printf(\"min: %lld, max: %lld, avg: %.2f (%lld samples)\",\n                min, max, avg, count);\n        fflush(stdout);\n    } else if (config.output == OUTPUT_CSV) {\n        printf(\"%lld,%lld,%.2f,%lld\\n\", min, max, avg, count);\n    } else if (config.output == OUTPUT_RAW) {\n        printf(\"%lld %lld %.2f %lld\\n\", min, max, avg, count);\n    } else if (config.output == OUTPUT_JSON) {\n        printf(\"{\\\"min\\\": %lld, \\\"max\\\": %lld, \\\"avg\\\": %.2f, \\\"count\\\": %lld}\\n\", min, max, avg, count);\n    }\n}\n\n#define LATENCY_SAMPLE_RATE 10 /* milliseconds. */\n#define LATENCY_HISTORY_DEFAULT_INTERVAL 15000 /* milliseconds. */\nstatic void latencyMode(void) {\n    redisReply *reply;\n    long long start, latency, min = 0, max = 0, tot = 0, count = 0;\n    long long history_interval =\n        config.interval ? config.interval/1000 :\n                          LATENCY_HISTORY_DEFAULT_INTERVAL;\n    double avg;\n    long long history_start = mstime();\n\n    /* Set a default for the interval in case of --latency option\n     * with --raw, --csv or when it is redirected to non tty. */\n    if (config.interval == 0) {\n        config.interval = 1000;\n    } else {\n        config.interval /= 1000; /* We need to convert to milliseconds. 
*/\n    }\n\n    if (!context) exit(1);\n    while(1) {\n        start = mstime();\n        reply = reconnectingRedisCommand(context,\"PING\");\n        if (reply == NULL) {\n            fprintf(stderr,\"\\nI/O error\\n\");\n            exit(1);\n        }\n        latency = mstime()-start;\n        freeReplyObject(reply);\n        count++;\n        if (count == 1) {\n            min = max = tot = latency;\n            avg = (double) latency;\n        } else {\n            if (latency < min) min = latency;\n            if (latency > max) max = latency;\n            tot += latency;\n            avg = (double) tot/count;\n        }\n\n        if (config.output == OUTPUT_STANDARD) {\n            printf(\"\\x1b[0G\\x1b[2K\"); /* Clear the line. */\n            latencyModePrint(min,max,avg,count);\n        } else {\n            if (config.latency_history) {\n                latencyModePrint(min,max,avg,count);\n            } else if (mstime()-history_start > config.interval) {\n                latencyModePrint(min,max,avg,count);\n                exit(0);\n            }\n        }\n\n        if (config.latency_history && mstime()-history_start > history_interval)\n        {\n            printf(\" -- %.2f seconds range\\n\", (float)(mstime()-history_start)/1000);\n            history_start = mstime();\n            min = max = tot = count = 0;\n        }\n        usleep(LATENCY_SAMPLE_RATE * 1000);\n    }\n}\n\n/*------------------------------------------------------------------------------\n * Latency distribution mode -- requires 256 colors xterm\n *--------------------------------------------------------------------------- */\n\n#define LATENCY_DIST_DEFAULT_INTERVAL 1000 /* milliseconds. */\n\n/* Structure to store samples distribution. */\nstruct distsamples {\n    long long max;   /* Max latency to fit into this interval (usec). */\n    long long count; /* Number of samples in this interval. */\n    int character;   /* Associated character in visualization. 
*/\n};\n\n/* Helper function for latencyDistMode(). Performs the spectrum visualization\n * of the collected samples targeting an xterm 256 terminal.\n *\n * Takes an array of distsamples structures, ordered from smallest to biggest\n * 'max' value. The last sample's max must be 0: it holds all the samples\n * greater than the previous one, and also acts as the stop sentinel.\n *\n * 'tot' is the total number of samples in the different buckets, so it\n * is the SUM(samples[i].count) for i from 0 up to the last sample.\n *\n * As a side effect the function resets all the bucket counts to 0. */\nvoid showLatencyDistSamples(struct distsamples *samples, long long tot) {\n    int j;\n\n    /* We convert samples into an index inside the palette\n     * proportional to the percentage a given bucket represents.\n     * This way the intensity of the different parts of the spectrum\n     * does not change with the number of requests, which avoids\n     * polluting the visualization with non-latency related info. */\n    printf(\"\\033[38;5;0m\"); /* Set foreground color to black. */\n    for (j = 0; ; j++) {\n        int coloridx =\n            ceil((double) samples[j].count / tot * (spectrum_palette_size-1));\n        int color = spectrum_palette[coloridx];\n        printf(\"\\033[48;5;%dm%c\", (int)color, samples[j].character);\n        samples[j].count = 0;\n        if (samples[j].max == 0) break; /* Last sample. */\n    }\n    printf(\"\\033[0m\\n\");\n    fflush(stdout);\n}\n\n/* Show the legend: the meaning of the different bucket values and colors,\n * so that the spectrum is more easily readable. */\nvoid showLatencyDistLegend(void) {\n    int j;\n\n    printf(\"---------------------------------------------\\n\");\n    printf(\". 
- * #          .01 .125 .25 .5 milliseconds\\n\");\n    printf(\"1,2,3,...,9      from 1 to 9     milliseconds\\n\");\n    printf(\"A,B,C,D,E        10,20,30,40,50  milliseconds\\n\");\n    printf(\"F,G,H,I,J        .1,.2,.3,.4,.5       seconds\\n\");\n    printf(\"K,L,M,N,O,P,Q,?  1,2,4,8,16,30,60,>60 seconds\\n\");\n    printf(\"From 0 to 100%%: \");\n    for (j = 0; j < spectrum_palette_size; j++) {\n        printf(\"\\033[48;5;%dm \", spectrum_palette[j]);\n    }\n    printf(\"\\033[0m\\n\");\n    printf(\"---------------------------------------------\\n\");\n}\n\nstatic void latencyDistMode(void) {\n    redisReply *reply;\n    long long start, latency, count = 0;\n    long long history_interval =\n        config.interval ? config.interval/1000 :\n                          LATENCY_DIST_DEFAULT_INTERVAL;\n    long long history_start = ustime();\n    int j, outputs = 0;\n\n    struct distsamples samples[] = {\n        /* We use a mostly logarithmic scale, with certain linear intervals\n         * which are more interesting than others, like 1-10 milliseconds\n         * range. 
*/\n        {10,0,'.'},         /* 0.01 ms */\n        {125,0,'-'},        /* 0.125 ms */\n        {250,0,'*'},        /* 0.25 ms */\n        {500,0,'#'},        /* 0.5 ms */\n        {1000,0,'1'},       /* 1 ms */\n        {2000,0,'2'},       /* 2 ms */\n        {3000,0,'3'},       /* 3 ms */\n        {4000,0,'4'},       /* 4 ms */\n        {5000,0,'5'},       /* 5 ms */\n        {6000,0,'6'},       /* 6 ms */\n        {7000,0,'7'},       /* 7 ms */\n        {8000,0,'8'},       /* 8 ms */\n        {9000,0,'9'},       /* 9 ms */\n        {10000,0,'A'},      /* 10 ms */\n        {20000,0,'B'},      /* 20 ms */\n        {30000,0,'C'},      /* 30 ms */\n        {40000,0,'D'},      /* 40 ms */\n        {50000,0,'E'},      /* 50 ms */\n        {100000,0,'F'},     /* 0.1 s */\n        {200000,0,'G'},     /* 0.2 s */\n        {300000,0,'H'},     /* 0.3 s */\n        {400000,0,'I'},     /* 0.4 s */\n        {500000,0,'J'},     /* 0.5 s */\n        {1000000,0,'K'},    /* 1 s */\n        {2000000,0,'L'},    /* 2 s */\n        {4000000,0,'M'},    /* 4 s */\n        {8000000,0,'N'},    /* 8 s */\n        {16000000,0,'O'},   /* 16 s */\n        {30000000,0,'P'},   /* 30 s */\n        {60000000,0,'Q'},   /* 1 minute */\n        {0,0,'?'},          /* > 1 minute */\n    };\n\n    if (!context) exit(1);\n    while(1) {\n        start = ustime();\n        reply = reconnectingRedisCommand(context,\"PING\");\n        if (reply == NULL) {\n            fprintf(stderr,\"\\nI/O error\\n\");\n            exit(1);\n        }\n        latency = ustime()-start;\n        freeReplyObject(reply);\n        count++;\n\n        /* Populate the relevant bucket. */\n        for (j = 0; ; j++) {\n            if (samples[j].max == 0 || latency <= samples[j].max) {\n                samples[j].count++;\n                break;\n            }\n        }\n\n        /* From time to time show the spectrum. 
*/\n        if (count && (ustime()-history_start)/1000 > history_interval) {\n            if ((outputs++ % 20) == 0)\n                showLatencyDistLegend();\n            showLatencyDistSamples(samples,count);\n            history_start = ustime();\n            count = 0;\n        }\n        usleep(LATENCY_SAMPLE_RATE * 1000);\n    }\n}\n\n/*------------------------------------------------------------------------------\n * Vset recall mode.\n *\n * This mode targets a specific vector set key, performing queries with\n * vectors composed by mixing components from existing vectors (each component\n * of the query vector is taken from a random source vector), then uses\n * VSIM and VSIM TRUTH to measure the recall percentage.\n *--------------------------------------------------------------------------- */\nstatic void vsetRecallMode(void) {\n    redisReply *reply, *vsim_reply, *truth_reply;\n    int ele_count = config.vset_recall_ele_count;\n    int vsim_count = config.vset_recall_vsim_count;\n    int vsim_ef = config.vset_recall_vsim_ef;\n    unsigned long long queries = 0, total_overlap = 0;\n    long long refresh_time = mstime();\n    struct hdr_histogram *recall_histogram;\n\n    if (!context) exit(1);\n\n    /* HDR histogram requires minimum value >= 1 for some reason.\n     * We store recall percentages as:\n     * (recall% * 100) + 1, giving us range 1 to 10001.\n     * This maps: 0.00% -> 1, 50.00% -> 5001, 100.00% -> 10001\n     * Precision: 2 significant figures = 0.01% accuracy. */\n    if (hdr_init(1, 10001, 2, &recall_histogram)) {\n        fprintf(stderr, \"Failed to initialize recall histogram\\n\");\n        exit(1);\n    }\n\n    /* Get vector dimension. 
*/\n    reply = reconnectingRedisCommand(context, \"VDIM %s\",\n        config.vset_recall_key);\n    if (reply == NULL || reply->type != REDIS_REPLY_INTEGER) {\n        fprintf(stderr, \"Error: Cannot get dimension for key %s\\n\",\n                config.vset_recall_key);\n        exit(1);\n    }\n    unsigned int dim = reply->integer;\n    freeReplyObject(reply);\n\n    printf(\"\\n# Testing recall for vector set: %s (dimension: %d)\\n\",\n           config.vset_recall_key, dim);\n    printf(\"# Mixing %d random element vectors, top %d results, EF=%d\\n\\n\",\n           ele_count, vsim_count, vsim_ef);\n\n    signal(SIGINT, longStatLoopModeStop);\n\n    /* Do the same recall test again and again. */\n    while (force_cancel_loop == 0) {\n        /* Get random members. */\n        reply = reconnectingRedisCommand(context, \"VRANDMEMBER %s %d\",\n                                        config.vset_recall_key, ele_count);\n        if (reply == NULL || reply->type != REDIS_REPLY_ARRAY ||\n            reply->elements == 0) {\n            fprintf(stderr, \"Error fetching random members\\n\");\n            exit(1);\n        }\n\n        /* Fetch and store vectors. 
*/\n        double **vectors = zmalloc(reply->elements * sizeof(double*));\n        int valid_vectors = 0;\n\n        for (size_t i = 0; i < reply->elements; i++) {\n            redisReply *vemb = reconnectingRedisCommand(context, \"VEMB %s %s\",\n                                                       config.vset_recall_key,\n                                                       reply->element[i]->str);\n            if (vemb && vemb->type == REDIS_REPLY_ARRAY &&\n                vemb->elements == dim) {\n                vectors[valid_vectors] = zmalloc(dim * sizeof(double));\n                for (unsigned int j = 0; j < dim; j++) {\n                    vectors[valid_vectors][j] = atof(vemb->element[j]->str);\n                }\n                valid_vectors++;\n            }\n            if (vemb) freeReplyObject(vemb);\n        }\n        freeReplyObject(reply);\n\n        if (valid_vectors == 0) {\n            fprintf(stderr, \"No valid vectors retrieved\\n\");\n            zfree(vectors);\n            continue;\n        }\n\n        /* Create mixed query vector by randomly selecting components. */\n        float *query = zmalloc(sizeof(float) * dim);\n        for (unsigned int i = 0; i < dim; i++) {\n            int src = rand() % valid_vectors;\n            query[i] = vectors[src][i];\n        }\n\n        for (int i = 0; i < valid_vectors; i++) zfree(vectors[i]);\n        zfree(vectors);\n\n        /* Execute VSIM query with HNSW. 
*/\n        vsim_reply = reconnectingRedisCommand(context,\n                                     \"VSIM %s FP32 %b COUNT %d EF %d\",\n                                     config.vset_recall_key, query,\n                                     sizeof(float)*dim, vsim_count, vsim_ef);\n        if (vsim_reply == NULL || vsim_reply->type != REDIS_REPLY_ARRAY) {\n            zfree(query);\n            if (vsim_reply) freeReplyObject(vsim_reply);\n            continue;\n        }\n\n        /* Execute ground truth query (brute force using TRUTH). */\n        truth_reply = reconnectingRedisCommand(context,\n                                      \"VSIM %s FP32 %b COUNT %d TRUTH\",\n                                      config.vset_recall_key, query,\n                                      sizeof(float)*dim, vsim_count);\n        zfree(query);\n\n        if (truth_reply == NULL || truth_reply->type != REDIS_REPLY_ARRAY) {\n            freeReplyObject(vsim_reply);\n            if (truth_reply) freeReplyObject(truth_reply);\n            continue;\n        }\n\n        /* Build dictionary of ground truth results for fast lookup. */\n        dictType dtype = {\n            dictSdsHash, NULL, NULL, dictSdsKeyCompare,\n            dictSdsDestructor, NULL, NULL\n        };\n        dict *truth_set = dictCreate(&dtype);\n\n        for (size_t i = 0; i < truth_reply->elements; i++) {\n            sds key = sdsnew(truth_reply->element[i]->str);\n            dictAdd(truth_set, key, NULL);\n        }\n\n        /* Count overlap between HNSW results and ground truth. 
*/\n        int overlap = 0;\n        for (size_t i = 0; i < vsim_reply->elements; i++) {\n            sds vsim_key = sdsnew(vsim_reply->element[i]->str);\n            if (dictFind(truth_set, vsim_key) != NULL) {\n                overlap++;\n            }\n            sdsfree(vsim_key);\n        }\n\n        dictRelease(truth_set);\n        freeReplyObject(vsim_reply);\n        freeReplyObject(truth_reply);\n\n        /* Calculate recall percentage (overlap / expected * 100). */\n        double recall = (double)overlap / vsim_count * 100.0;\n\n        /* Cap at 100% to guard against rounding errors. */\n        if (recall > 100.0) recall = 100.0;\n\n        queries++;\n        total_overlap += overlap;\n\n        /* Store in histogram: convert to integer by multiplying by 100,\n         * then add 1 to shift into the valid range [1, 10001]. */\n        int64_t recall_value = (int64_t)(recall * 100.0) + 1;\n        hdr_record_value(recall_histogram, recall_value);\n\n        /* Display progress. */\n        if (mstime() > refresh_time + REFRESH_INTERVAL || !IS_TTY_OR_FAKETTY())\n        {\n            refresh_time = mstime();\n            double avg_recall = (double)total_overlap / (queries * vsim_count)\n                                    * 100.0;\n\n            if (IS_TTY_OR_FAKETTY()) printf(\"\\x1b[0G\\x1b[2K\");\n            printf(\"Queries: %llu | Avg recall: %.2f%%\", queries, avg_recall);\n            if (!IS_TTY_OR_FAKETTY()) printf(\"\\n\");\n            fflush(stdout);\n        }\n        if (config.interval) usleep(config.interval);\n    }\n\n    /* Final statistics. 
*/\n    printf(\"\\n\\n\");\n    printf(\"====================================\\n\");\n    printf(\"       Recall Test Results\\n\");\n    printf(\"====================================\\n\\n\");\n    printf(\"Total queries:   %llu\\n\", queries);\n    printf(\"Average recall:  %.2f%%\\n\",\n           (double)total_overlap / (queries * vsim_count) * 100.0);\n\n    /* Convert histogram statistics back to percentages. */\n    printf(\"Mean recall:     %.2f%%\\n\", (hdr_mean(recall_histogram)-1)/100.0);\n    printf(\"Median recall:   %.2f%%\\n\",\n           (hdr_value_at_percentile(recall_histogram, 50.0)-1)/100.0);\n    printf(\"StdDev:          %.2f%%\\n\", hdr_stddev(recall_histogram)/100.0);\n    printf(\"Min recall:      %.2f%%\\n\", (hdr_min(recall_histogram)-1)/100.0);\n    printf(\"Max recall:      %.2f%%\\n\", (hdr_max(recall_histogram)-1)/100.0);\n\n    /* Display recall threshold distribution. */\n    printf(\"\\n--- Recall Thresholds ---\\n\");\n    printf(\"At least    %% of queries\\n\");\n    printf(\"--------    ------------\\n\");\n\n    double recall_thresholds[] = {0, 50, 60, 70, 80, 85, 90, 95, 99, 100};\n    int num_thresholds = sizeof(recall_thresholds) / sizeof(recall_thresholds[0]);\n    for (int i = 0; i < num_thresholds; i++) {\n        double target_recall = recall_thresholds[i];\n        /* Convert target recall to histogram value. */\n        int64_t target_value = (int64_t)(target_recall * 100.0) + 1;\n\n        /* Find what percentile this value is at. 
*/\n        double percentile = 0.0;\n        for (double p = 0.0; p <= 100.0; p += 0.1) {\n            int64_t value_at_p = hdr_value_at_percentile(recall_histogram, p);\n            if (value_at_p >= target_value) {\n                percentile = p;\n                break;\n            }\n        }\n\n        /* Percentage achieving AT LEAST this recall is (100 - percentile) */\n        double pct_achieving = 100.0 - percentile;\n        printf(\"%6.1f%%     %10.2f%%\\n\", target_recall, pct_achieving);\n    }\n\n    hdr_close(recall_histogram);\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Slave mode\n *--------------------------------------------------------------------------- */\n\n#define RDB_EOF_MARK_SIZE 40\n\nint sendReplconf(const char* arg1, const char* arg2) {\n    int res = 1;\n    fprintf(stderr, \"sending REPLCONF %s %s\\n\", arg1, arg2);\n    redisReply *reply = redisCommand(context, \"REPLCONF %s %s\", arg1, arg2);\n\n    /* Handle any error conditions */\n    if(reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        exit(1);\n    } else if(reply->type == REDIS_REPLY_ERROR) {\n        /* non fatal, old versions may not support it */\n        fprintf(stderr, \"REPLCONF %s error: %s\\n\", arg1, reply->str);\n        res = 0;\n    }\n    freeReplyObject(reply);\n    return res;\n}\n\nvoid sendCapa(void) {\n    sendReplconf(\"capa\", \"eof\");\n}\n\nvoid sendRdbOnly(void) {\n    sendReplconf(\"rdb-only\", \"1\");\n}\n\n/* Read raw bytes through a redisContext. The read operation is not greedy\n * and may not fill the buffer entirely.\n */\nstatic ssize_t readConn(redisContext *c, char *buf, size_t len)\n{\n    return c->funcs->read(c, buf, len);\n}\n\n/* Sends SYNC and reads the number of bytes in the payload. Used both by\n * slaveMode() and getRDB().\n *\n * send_sync if 1 means we will explicitly send SYNC command. 
If 0, we do not\n * send SYNC but instead flush the command already written into c->obuf.\n *\n * Returns the size of the RDB payload to read; returns 0 when an EOF marker\n * is used and the size is unknown, and also when a PSYNC +CONTINUE was\n * found (no RDB payload).\n *\n * On return, *out_full_mode is set to 1 for a full sync and to 0 for a\n * partial sync. */\nunsigned long long sendSync(redisContext *c, int send_sync, char *out_eof, int *out_full_mode) {\n    /* To start we need to send the SYNC command and return the payload.\n     * The hiredis client lib does not understand this part of the protocol\n     * and we don't want to mess with its buffers, so everything is performed\n     * using direct low-level I/O. */\n    char buf[4096], *p;\n    ssize_t nread;\n\n    if (out_full_mode) *out_full_mode = 1;\n\n    if (send_sync) {\n        /* Send the SYNC command. */\n        if (cliWriteConn(c, \"SYNC\\r\\n\", 6) != 6) {\n            fprintf(stderr,\"Error writing to master\\n\");\n            exit(1);\n        }\n    } else {\n        /* We have written the command into c->obuf before. */\n        if (cliWriteConn(c, \"\", 0) != 0) {\n            fprintf(stderr,\"Error writing to master\\n\");\n            exit(1);\n        }\n    }\n\n    /* Read $<payload>\\r\\n, making sure to read just up to \"\\n\" */\n    p = buf;\n    while(1) {\n        nread = readConn(c,p,1);\n        if (nread <= 0) {\n            fprintf(stderr,\"Error reading bulk length while SYNCing\\n\");\n            exit(1);\n        }\n        if (*p == '\\n' && p != buf) break;\n        if (*p != '\\n') p++;\n        if (p >= buf + sizeof(buf) - 1) break; /* Go back one more char for null-term. 
*/\n    }\n    *p = '\\0';\n    if (buf[0] == '-') {\n        fprintf(stderr, \"SYNC with master failed: %s\\n\", buf);\n        exit(1);\n    }\n\n    /* Handling PSYNC responses.\n     * Read +FULLRESYNC <replid> <offset>\\r\\n, after that is the $<payload> or the $EOF:<40 bytes delimiter>\n     * Read +CONTINUE <replid>\\r\\n or +CONTINUE\\r\\n, after that is the command stream */\n    if (!strncmp(buf, \"+FULLRESYNC\", 11) ||\n        !strncmp(buf, \"+CONTINUE\", 9))\n    {\n        int sync_partial = !strncmp(buf, \"+CONTINUE\", 9);\n        fprintf(stderr, \"PSYNC replied %s\\n\", buf);\n        p = buf;\n        while(1) {\n            nread = readConn(c,p,1);\n            if (nread <= 0) {\n                fprintf(stderr,\"Error reading bulk length while PSYNCing\\n\");\n                exit(1);\n            }\n            if (*p == '\\n' && p != buf) break;\n            if (*p != '\\n') p++;\n            if (p >= buf + sizeof(buf) - 1) break; /* Go back one more char for null-term. */\n        }\n        *p = '\\0';\n\n        if (sync_partial) {\n            if (out_full_mode) *out_full_mode = 0;\n            return 0;\n        }\n    }\n\n    if (strncmp(buf+1,\"EOF:\",4) == 0 && strlen(buf+5) >= RDB_EOF_MARK_SIZE) {\n        memcpy(out_eof, buf+5, RDB_EOF_MARK_SIZE);\n        return 0;\n    }\n    return strtoull(buf+1,NULL,10);\n}\n\nstatic void slaveMode(int send_sync) {\n    static char eofmark[RDB_EOF_MARK_SIZE];\n    static char lastbytes[RDB_EOF_MARK_SIZE];\n    static int usemark = 0;\n    static int out_full_mode;\n    unsigned long long payload = sendSync(context, send_sync, eofmark, &out_full_mode);\n    char buf[1024];\n    int original_output = config.output;\n    char *info = out_full_mode ? \"Full resync\" : \"Partial resync\";\n\n    if (out_full_mode == 1 && payload == 0) {\n        /* SYNC with EOF marker or PSYNC +FULLRESYNC with EOF marker. 
*/\n        payload = ULLONG_MAX;\n        memset(lastbytes,0,RDB_EOF_MARK_SIZE);\n        usemark = 1;\n        fprintf(stderr, \"%s with master, discarding \"\n                        \"bytes of bulk transfer until EOF marker...\\n\", info);\n    } else if (out_full_mode == 1 && payload != 0) {\n        /* SYNC without EOF marker or PSYNC +FULLRESYNC. */\n        fprintf(stderr, \"%s with master, discarding %llu \"\n                        \"bytes of bulk transfer...\\n\", info, payload);\n    } else if (out_full_mode == 0 && payload == 0) {\n        /* PSYNC +CONTINUE (no RDB payload). */\n        fprintf(stderr, \"%s with master...\\n\", info);\n    }\n\n    /* Discard the payload. */\n    while(payload) {\n        ssize_t nread;\n\n        nread = readConn(context,buf,(payload > sizeof(buf)) ? sizeof(buf) : payload);\n        if (nread <= 0) {\n            fprintf(stderr,\"Error reading RDB payload while %sing\\n\", info);\n            exit(1);\n        }\n        payload -= nread;\n\n        if (usemark) {\n            /* Update the last bytes array, and check if it matches our delimiter.*/\n            if (nread >= RDB_EOF_MARK_SIZE) {\n                memcpy(lastbytes,buf+nread-RDB_EOF_MARK_SIZE,RDB_EOF_MARK_SIZE);\n            } else {\n                int rem = RDB_EOF_MARK_SIZE-nread;\n                memmove(lastbytes,lastbytes+nread,rem);\n                memcpy(lastbytes+rem,buf,nread);\n            }\n            if (memcmp(lastbytes,eofmark,RDB_EOF_MARK_SIZE) == 0)\n                break;\n        }\n    }\n\n    if (usemark) {\n        unsigned long long offset = ULLONG_MAX - payload;\n        fprintf(stderr,\"%s done after %llu bytes. Logging commands from master.\\n\", info, offset);\n        /* put the slave online */\n        sleep(1);\n        sendReplconf(\"ACK\", \"0\");\n    } else\n        fprintf(stderr,\"%s done. Logging commands from master.\\n\", info);\n\n    /* Now we can use hiredis to read the incoming protocol. 
*/\n    config.output = OUTPUT_CSV;\n    while (cliReadReply(0) == REDIS_OK);\n    config.output = original_output;\n}\n\n/*------------------------------------------------------------------------------\n * RDB transfer mode\n *--------------------------------------------------------------------------- */\n\n/* This function implements --rdb, so it uses the replication protocol in order\n * to fetch the RDB file from a remote server. */\nstatic void getRDB(clusterManagerNode *node) {\n    int fd;\n    redisContext *s;\n    char *filename;\n    if (node != NULL) {\n        assert(node->context);\n        s = node->context;\n        filename = clusterManagerGetNodeRDBFilename(node);\n    } else {\n        s = context;\n        filename = config.rdb_filename;\n    }\n    static char eofmark[RDB_EOF_MARK_SIZE];\n    static char lastbytes[RDB_EOF_MARK_SIZE];\n    static int usemark = 0;\n    unsigned long long payload = sendSync(s, 1, eofmark, NULL);\n    char buf[4096];\n\n    if (payload == 0) {\n        payload = ULLONG_MAX;\n        memset(lastbytes,0,RDB_EOF_MARK_SIZE);\n        usemark = 1;\n        fprintf(stderr,\"SYNC sent to master, writing bytes of bulk transfer \"\n                \"until EOF marker to '%s'\\n\", filename);\n    } else {\n        fprintf(stderr,\"SYNC sent to master, writing %llu bytes to '%s'\\n\",\n            payload, filename);\n    }\n\n    int write_to_stdout = !strcmp(filename,\"-\");\n    /* Write to file. */\n    if (write_to_stdout) {\n        fd = STDOUT_FILENO;\n    } else {\n        fd = open(filename, O_CREAT|O_WRONLY, 0644);\n        if (fd == -1) {\n            fprintf(stderr, \"Error opening '%s': %s\\n\", filename,\n                strerror(errno));\n            exit(1);\n        }\n    }\n\n    while(payload) {\n        ssize_t nread, nwritten;\n\n        nread = readConn(s,buf,(payload > sizeof(buf)) ? 
sizeof(buf) : payload);\n        if (nread <= 0) {\n            fprintf(stderr,\"I/O Error reading RDB payload from socket\\n\");\n            exit(1);\n        }\n        nwritten = write(fd, buf, nread);\n        if (nwritten != nread) {\n            fprintf(stderr,\"Error writing data to file: %s\\n\",\n                (nwritten == -1) ? strerror(errno) : \"short write\");\n            exit(1);\n        }\n        payload -= nread;\n\n        if (usemark) {\n            /* Update the last bytes array, and check if it matches our delimiter.*/\n            if (nread >= RDB_EOF_MARK_SIZE) {\n                memcpy(lastbytes,buf+nread-RDB_EOF_MARK_SIZE,RDB_EOF_MARK_SIZE);\n            } else {\n                int rem = RDB_EOF_MARK_SIZE-nread;\n                memmove(lastbytes,lastbytes+nread,rem);\n                memcpy(lastbytes+rem,buf,nread);\n            }\n            if (memcmp(lastbytes,eofmark,RDB_EOF_MARK_SIZE) == 0)\n                break;\n        }\n    }\n    if (usemark) {\n        payload = ULLONG_MAX - payload - RDB_EOF_MARK_SIZE;\n        if (!write_to_stdout && ftruncate(fd, payload) == -1)\n            fprintf(stderr,\"ftruncate failed: %s.\\n\", strerror(errno));\n        fprintf(stderr,\"Transfer finished with success after %llu bytes\\n\", payload);\n    } else {\n        fprintf(stderr,\"Transfer finished with success.\\n\");\n    }\n    redisFree(s); /* Close the connection ASAP as fsync() may take time. 
*/\n    if (node)\n        node->context = NULL;\n    if (!write_to_stdout && fsync(fd) == -1) {\n        fprintf(stderr,\"Failed to fsync '%s': %s\\n\", filename, strerror(errno));\n        exit(1);\n    }\n    close(fd);\n    if (node) {\n        sdsfree(filename);\n        return;\n    }\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Bulk import (pipe) mode\n *--------------------------------------------------------------------------- */\n\n#define PIPEMODE_WRITE_LOOP_MAX_BYTES (128*1024)\nstatic void pipeMode(void) {\n    long long errors = 0, replies = 0, obuf_len = 0, obuf_pos = 0;\n    char obuf[1024*16]; /* Output buffer */\n    char aneterr[ANET_ERR_LEN];\n    redisReply *reply;\n    int eof = 0; /* True once we consumed all the standard input. */\n    int done = 0;\n    char magic[20]; /* Special reply we recognize. */\n    time_t last_read_time = time(NULL);\n\n    srand(time(NULL));\n\n    /* Use non-blocking I/O. */\n    if (anetNonBlock(aneterr,context->fd) == ANET_ERR) {\n        fprintf(stderr, \"Can't set the socket in non-blocking mode: %s\\n\",\n            aneterr);\n        exit(1);\n    }\n\n    context->flags &= ~REDIS_BLOCK;\n\n    /* Transfer raw protocol and read replies from the server at the same\n     * time. */\n    while(!done) {\n        int mask = AE_READABLE;\n\n        if (!eof || obuf_len != 0) mask |= AE_WRITABLE;\n        mask = aeWait(context->fd,mask,1000);\n\n        /* Handle the readable state: we can read replies from the server. 
*/\n        if (mask & AE_READABLE) {\n            int read_error = 0;\n\n            do {\n                if (!read_error && redisBufferRead(context) == REDIS_ERR) {\n                    read_error = 1;\n                }\n\n                reply = NULL;\n                if (redisGetReply(context, (void **) &reply) == REDIS_ERR) {\n                    fprintf(stderr, \"Error reading replies from server\\n\");\n                    exit(1);\n                }\n                if (reply) {\n                    last_read_time = time(NULL);\n                    if (reply->type == REDIS_REPLY_ERROR) {\n                        fprintf(stderr,\"%s\\n\", reply->str);\n                        errors++;\n                    } else if (eof && reply->type == REDIS_REPLY_STRING &&\n                                      reply->len == 20) {\n                        /* Check if this is the reply to our final ECHO\n                         * command. If so everything was received\n                         * from the server. */\n                        if (memcmp(reply->str,magic,20) == 0) {\n                            printf(\"Last reply received from server.\\n\");\n                            done = 1;\n                            replies--;\n                        }\n                    }\n                    replies++;\n                    freeReplyObject(reply);\n                }\n            } while(reply);\n\n            /* Abort on read errors. We abort here because it is important\n             * to consume replies even after a read error: this way we can\n             * show a potential problem to the user. */\n            if (read_error) exit(1);\n        }\n\n        /* Handle the writable state: we can send protocol to the server. */\n        if (mask & AE_WRITABLE) {\n            ssize_t loop_nwritten = 0;\n\n            while(1) {\n                /* Transfer current buffer to server. 
*/\n                if (obuf_len != 0) {\n                    ssize_t nwritten = cliWriteConn(context,obuf+obuf_pos,obuf_len);\n\n                    if (nwritten == -1) {\n                        if (errno != EAGAIN && errno != EINTR) {\n                            fprintf(stderr, \"Error writing to the server: %s\\n\",\n                                strerror(errno));\n                            exit(1);\n                        } else {\n                            nwritten = 0;\n                        }\n                    }\n                    obuf_len -= nwritten;\n                    obuf_pos += nwritten;\n                    loop_nwritten += nwritten;\n                    if (obuf_len != 0) break; /* Can't accept more data. */\n                }\n                if (context->err) {\n                    fprintf(stderr, \"Server I/O Error: %s\\n\", context->errstr);\n                    exit(1);\n                }\n                /* If buffer is empty, load from stdin. */\n                if (obuf_len == 0 && !eof) {\n                    ssize_t nread = read(STDIN_FILENO,obuf,sizeof(obuf));\n\n                    if (nread == 0) {\n                        /* The ECHO sequence starts with a \"\\r\\n\" so that if there\n                         * is garbage in the protocol we read from stdin, the ECHO\n                         * will likely still be properly formatted.\n                         * CRLF is ignored by Redis, so it has no effects. */\n                        char echo[] =\n                        \"\\r\\n*2\\r\\n$4\\r\\nECHO\\r\\n$20\\r\\n01234567890123456789\\r\\n\";\n                        int j;\n\n                        eof = 1;\n                        /* Everything transferred, so we queue a special\n                         * ECHO command that we can match in the replies\n                         * to make sure everything was read from the server. 
*/\n                        for (j = 0; j < 20; j++)\n                            magic[j] = rand() & 0xff;\n                        memcpy(echo+21,magic,20);\n                        memcpy(obuf,echo,sizeof(echo)-1);\n                        obuf_len = sizeof(echo)-1;\n                        obuf_pos = 0;\n                        printf(\"All data transferred. Waiting for the last reply...\\n\");\n                    } else if (nread == -1) {\n                        fprintf(stderr, \"Error reading from stdin: %s\\n\",\n                            strerror(errno));\n                        exit(1);\n                    } else {\n                        obuf_len = nread;\n                        obuf_pos = 0;\n                    }\n                }\n                if ((obuf_len == 0 && eof) ||\n                    loop_nwritten > PIPEMODE_WRITE_LOOP_MAX_BYTES) break;\n            }\n        }\n\n        /* Handle timeout, that is, we reached EOF, and we are not getting\n         * replies from the server for a few seconds, nor the final ECHO is\n         * received. 
*/\n        if (eof && config.pipe_timeout > 0 &&\n            time(NULL)-last_read_time > config.pipe_timeout)\n        {\n            fprintf(stderr,\"No replies for %d seconds: exiting.\\n\",\n                config.pipe_timeout);\n            errors++;\n            break;\n        }\n    }\n    printf(\"errors: %lld, replies: %lld\\n\", errors, replies);\n    if (errors)\n        exit(1);\n    else\n        exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Find big keys\n *--------------------------------------------------------------------------- */\n\nstatic redisReply *sendScan(unsigned long long *it) {\n    redisReply *reply;\n\n    if (config.pattern)\n        reply = redisCommand(context, \"SCAN %llu MATCH %b COUNT %d\",\n            *it, config.pattern, sdslen(config.pattern), config.count);\n    else\n        reply = redisCommand(context, \"SCAN %llu COUNT %d\",\n            *it, config.count);\n\n    /* Handle any error conditions */\n    if(reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        exit(1);\n    } else if(reply->type == REDIS_REPLY_ERROR) {\n        fprintf(stderr, \"SCAN error: %s\\n\", reply->str);\n        exit(1);\n    } else if(reply->type != REDIS_REPLY_ARRAY) {\n        fprintf(stderr, \"Non ARRAY response from SCAN!\\n\");\n        exit(1);\n    } else if(reply->elements != 2) {\n        fprintf(stderr, \"Invalid element count from SCAN!\\n\");\n        exit(1);\n    }\n\n    /* Validate our types are correct */\n    assert(reply->element[0]->type == REDIS_REPLY_STRING);\n    assert(reply->element[1]->type == REDIS_REPLY_ARRAY);\n\n    /* Update iterator */\n    *it = strtoull(reply->element[0]->str, NULL, 10);\n\n    return reply;\n}\n\nstatic int getDbSize(void) {\n    redisReply *reply;\n    int size;\n\n    reply = redisCommand(context, \"DBSIZE\");\n\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        exit(1);\n    } else if 
(reply->type == REDIS_REPLY_ERROR) {\n        fprintf(stderr, \"Couldn't determine DBSIZE: %s\\n\", reply->str);\n        exit(1);\n    } else if (reply->type != REDIS_REPLY_INTEGER) {\n        fprintf(stderr, \"Non INTEGER response from DBSIZE!\\n\");\n        exit(1);\n    }\n\n    /* Grab the number of keys and free our reply */\n    size = reply->integer;\n    freeReplyObject(reply);\n\n    return size;\n}\n\nstatic int getDatabases(void) {\n    redisReply *reply;\n    int dbnum;\n\n    reply = redisCommand(context, \"CONFIG GET databases\");\n\n    if (reply == NULL) {\n        fprintf(stderr, \"\\nI/O error\\n\");\n        exit(1);\n    } else if (reply->type == REDIS_REPLY_ERROR) {\n        dbnum = 16;\n        fprintf(stderr, \"CONFIG GET databases failed: %s, using default value 16 instead\\n\", reply->str);\n    } else {\n        assert(reply->type == (config.current_resp3 ? REDIS_REPLY_MAP : REDIS_REPLY_ARRAY));\n        assert(reply->elements == 2);\n        dbnum = atoi(reply->element[1]->str);\n    }\n\n    freeReplyObject(reply);\n    return dbnum;\n}\n\ntypedef struct {\n    char *name;\n    char *sizecmd;\n    char *sizeunit;\n    unsigned long long biggest;\n    unsigned long long count;\n    unsigned long long totalsize;\n    sds biggest_key;\n} typeinfo;\n\ntypeinfo type_string = { \"string\", \"STRLEN\", \"bytes\" };\ntypeinfo type_list = { \"list\", \"LLEN\", \"items\" };\ntypeinfo type_set = { \"set\", \"SCARD\", \"members\" };\ntypeinfo type_hash = { \"hash\", \"HLEN\", \"fields\" };\ntypeinfo type_zset = { \"zset\", \"ZCARD\", \"members\" };\ntypeinfo type_stream = { \"stream\", \"XLEN\", \"entries\" };\ntypeinfo type_other = { \"other\", NULL, \"?\" };\n\nstatic typeinfo* typeinfo_add(dict *types, char* name, typeinfo* type_template) {\n    typeinfo *info = zmalloc(sizeof(typeinfo));\n    *info = *type_template;\n    info->name = sdsnew(name);\n    dictAdd(types, info->name, info);\n    return info;\n}\n\nvoid type_free(dict *d, void* val) 
{\n    typeinfo *info = val;\n    UNUSED(d);\n    if (info->biggest_key)\n        sdsfree(info->biggest_key);\n    sdsfree(info->name);\n    zfree(info);\n}\n\nstatic dictType typeinfoDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    NULL,                      /* key destructor (owned by the value)*/\n    type_free,                 /* val destructor */\n    NULL                       /* allow to expand */\n};\n\nstatic void getKeyTypes(dict *types_dict, redisReply *keys, typeinfo **types) {\n    redisReply *reply;\n    unsigned int i;\n\n    /* Pipeline TYPE commands */\n    for(i=0;i<keys->elements;i++) {\n        const char* argv[] = {\"TYPE\", keys->element[i]->str};\n        size_t lens[] = {4, keys->element[i]->len};\n        redisAppendCommandArgv(context, 2, argv, lens);\n    }\n\n    /* Retrieve types */\n    for(i=0;i<keys->elements;i++) {\n        if(redisGetReply(context, (void**)&reply)!=REDIS_OK) {\n            fprintf(stderr, \"Error getting type for key '%s' (%d: %s)\\n\",\n                keys->element[i]->str, context->err, context->errstr);\n            exit(1);\n        } else if(reply->type != REDIS_REPLY_STATUS) {\n            if(reply->type == REDIS_REPLY_ERROR) {\n                fprintf(stderr, \"TYPE returned an error: %s\\n\", reply->str);\n            } else {\n                fprintf(stderr,\n                    \"Invalid reply type (%d) for TYPE on key '%s'!\\n\",\n                    reply->type, keys->element[i]->str);\n            }\n            exit(1);\n        }\n\n        sds typereply = sdsnew(reply->str);\n        dictEntry *de = dictFind(types_dict, typereply);\n        sdsfree(typereply);\n        typeinfo *type = NULL;\n        if (de)\n            type = dictGetVal(de);\n        else if (strcmp(reply->str, \"none\")) /* create new types for modules, (but not for 
deleted keys) */\n            type = typeinfo_add(types_dict, reply->str, &type_other);\n        types[i] = type;\n        freeReplyObject(reply);\n    }\n}\n\nstatic void getKeySizes(redisReply *keys, typeinfo **types,\n                        unsigned long long *sizes, int memkeys,\n                        long long memkeys_samples)\n{\n    redisReply *reply;\n    unsigned int i;\n\n    /* Pipeline size commands */\n    for(i=0;i<keys->elements;i++) {\n        /* Skip keys that disappeared between SCAN and TYPE (or unknown types when not in memkeys mode) */\n        if(!types[i] || (!types[i]->sizecmd && !memkeys))\n            continue;\n\n        if (!memkeys) {\n            const char* argv[] = {types[i]->sizecmd, keys->element[i]->str};\n            size_t lens[] = {strlen(types[i]->sizecmd), keys->element[i]->len};\n            redisAppendCommandArgv(context, 2, argv, lens);\n        } else if (memkeys_samples == -1) {\n            const char* argv[] = {\"MEMORY\", \"USAGE\", keys->element[i]->str};\n            size_t lens[] = {6, 5, keys->element[i]->len};\n            redisAppendCommandArgv(context, 3, argv, lens);\n        } else {\n            sds samplesstr = sdsfromlonglong(memkeys_samples);\n            const char* argv[] = {\"MEMORY\", \"USAGE\", keys->element[i]->str, \"SAMPLES\", samplesstr};\n            size_t lens[] = {6, 5, keys->element[i]->len, 7, sdslen(samplesstr)};\n            redisAppendCommandArgv(context, 5, argv, lens);\n            sdsfree(samplesstr);\n        }\n    }\n\n    /* Retrieve sizes */\n    for(i=0;i<keys->elements;i++) {\n        /* Skip keys that disappeared between SCAN and TYPE (or unknown types when not in memkeys mode) */\n        if(!types[i] || (!types[i]->sizecmd && !memkeys)) {\n            sizes[i] = 0;\n            continue;\n        }\n\n        /* Retrieve size */\n        if(redisGetReply(context, (void**)&reply)!=REDIS_OK) {\n            fprintf(stderr, \"Error getting size for key '%s' (%d: %s)\\n\",\n   
             keys->element[i]->str, context->err, context->errstr);\n            exit(1);\n        } else if(reply->type != REDIS_REPLY_INTEGER) {\n            /* Theoretically the key could have been removed and\n             * added as a different type between TYPE and SIZE */\n            fprintf(stderr,\n                \"Warning:  %s on '%s' failed (may have changed type)\\n\",\n                !memkeys? types[i]->sizecmd: \"MEMORY USAGE\",\n                keys->element[i]->str);\n            sizes[i] = 0;\n        } else {\n            sizes[i] = reply->integer;\n        }\n\n        freeReplyObject(reply);\n    }\n}\n\n/* In cluster mode we may need to send the READONLY command.\n   Ignore the error in case the server isn't using cluster mode. */\nstatic void sendReadOnly(void) {\n    redisReply *read_reply;\n    read_reply = redisCommand(context, \"READONLY\");\n    if (read_reply == NULL){\n        fprintf(stderr, \"\\nI/O error\\n\");\n        exit(1);\n    } else if (read_reply->type == REDIS_REPLY_ERROR && \n               strcmp(read_reply->str, \"ERR This instance has cluster support disabled\") != 0 &&\n               strncmp(read_reply->str, \"ERR unknown command\", 19) != 0) {\n        fprintf(stderr, \"Error: %s\\n\", read_reply->str);\n        exit(1);\n    }\n    freeReplyObject(read_reply);\n}\n\nstatic int displayKeyStatsProgressbar(unsigned long long sampled,\n                                      unsigned long long total_keys);\n\nstatic void findBigKeys(int memkeys, long long memkeys_samples) {\n    unsigned long long sampled = 0, total_keys, totlen=0, *sizes=NULL, it=0, scan_loops = 0;\n    redisReply *reply, *keys;\n    unsigned int arrsize=0, i;\n    dictIterator di;\n    dictEntry *de;\n    typeinfo **types = NULL;\n    double pct;\n    long long refresh_time = mstime();\n\n    dict *types_dict = dictCreate(&typeinfoDictType);\n    typeinfo_add(types_dict, \"string\", &type_string);\n    typeinfo_add(types_dict, \"list\", 
&type_list);\n    typeinfo_add(types_dict, \"set\", &type_set);\n    typeinfo_add(types_dict, \"hash\", &type_hash);\n    typeinfo_add(types_dict, \"zset\", &type_zset);\n    typeinfo_add(types_dict, \"stream\", &type_stream);\n\n    signal(SIGINT, longStatLoopModeStop);\n    /* Total keys pre scanning */\n    total_keys = getDbSize();\n\n    /* Status message */\n    printf(\"\\n# Scanning the entire keyspace to find biggest keys as well as\\n\");\n    printf(\"# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec\\n\");\n    printf(\"# per 100 SCAN commands (not usually needed).\\n\\n\");\n    \n    /* Use readonly in cluster */\n    sendReadOnly();\n\n    /* SCAN loop */\n    do {\n        /* Calculate approximate percentage completion */\n        pct = 100 * (double)sampled/total_keys;\n\n        /* Grab some keys and point to the keys array */\n        reply = sendScan(&it);\n        scan_loops++;\n        keys  = reply->element[1];\n\n        /* Reallocate our type and size array if we need to */\n        if(keys->elements > arrsize) {\n            types = zrealloc(types, sizeof(typeinfo*)*keys->elements);\n            sizes = zrealloc(sizes, sizeof(unsigned long long)*keys->elements);\n\n            if(!types || !sizes) {\n                fprintf(stderr, \"Failed to allocate storage for keys!\\n\");\n                exit(1);\n            }\n\n            arrsize = keys->elements;\n        }\n\n        /* Retrieve types and then sizes */\n        getKeyTypes(types_dict, keys, types);\n        getKeySizes(keys, types, sizes, memkeys, memkeys_samples);\n\n        /* Now update our stats */\n        for(i=0;i<keys->elements;i++) {\n            typeinfo *type = types[i];\n            /* Skip keys that disappeared between SCAN and TYPE */\n            if(!type)\n                continue;\n\n            type->totalsize += sizes[i];\n            type->count++;\n            totlen += keys->element[i]->len;\n            sampled++;\n\n            
if(type->biggest<sizes[i]) {\n                /* Keep track of biggest key name for this type */\n                if (type->biggest_key)\n                    sdsfree(type->biggest_key);\n                type->biggest_key = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len);\n                if(!type->biggest_key) {\n                    fprintf(stderr, \"Failed to allocate memory for key!\\n\");\n                    exit(1);\n                }\n\n                /* We only show the original progress output when writing to a file */\n                if (!IS_TTY_OR_FAKETTY()) {\n                    printf(\"[%05.2f%%] Biggest %-6s found so far %s with %llu %s\\n\",\n                        pct, type->name, type->biggest_key, sizes[i],\n                        !memkeys? type->sizeunit: \"bytes\");\n                }\n\n                /* Keep track of the biggest size for this type */\n                type->biggest = sizes[i];\n            }\n\n            /* Update overall progress\n             * We only show the original progress output when writing to a file */\n            if (sampled % 1000000 == 0 && !IS_TTY_OR_FAKETTY()) {\n                printf(\"[%05.2f%%] Sampled %llu keys so far\\n\", pct, sampled);\n            }\n\n            /* Show the progress bar in TTY */\n            if (mstime() > refresh_time + REFRESH_INTERVAL && IS_TTY_OR_FAKETTY()) {\n                int line_count = 0;\n                refresh_time = mstime();\n\n                line_count = displayKeyStatsProgressbar(sampled, total_keys);\n                line_count += cleanPrintfln(\"\");\n\n                dictInitIterator(&di, types_dict);\n                while ((de = dictNext(&di))) {\n                    typeinfo *current_type = dictGetVal(de);\n                    if (current_type->biggest > 0) {\n                        line_count += cleanPrintfln(\"Biggest %-9s found so far %s with %llu %s\",\n                            current_type->name, 
current_type->biggest_key, current_type->biggest,\n                            !memkeys? current_type->sizeunit: \"bytes\");\n                    }\n                }\n                dictResetIterator(&di);\n\n                printf(\"\\033[%dA\\r\", line_count);\n            }\n        }\n\n        /* Sleep if we've been directed to do so */\n        if (config.interval && (scan_loops % 100) == 0) {\n            usleep(config.interval);\n        }\n\n        freeReplyObject(reply);\n    } while(force_cancel_loop == 0 && it != 0);\n\n    /* Final progress bar if TTY */\n    if (IS_TTY_OR_FAKETTY()) {\n        displayKeyStatsProgressbar(sampled, total_keys);\n\n        /* Clean the types info shown during the progress bar */\n        int line_count = 0;\n        dictInitIterator(&di, types_dict);\n        while ((de = dictNext(&di)))\n            line_count += cleanPrintfln(\"\");\n        dictResetIterator(&di);\n        printf(\"\\033[%dA\\r\", line_count);\n    }\n\n    if(types) zfree(types);\n    if(sizes) zfree(sizes);\n\n    /* We're done */\n    printf(\"\\n-------- summary -------\\n\\n\");\n\n    /* Show percentage and sampled output when writing to a file */\n    if (!IS_TTY_OR_FAKETTY()) {\n        if (force_cancel_loop) printf(\"[%05.2f%%] \", pct);\n        printf(\"Sampled %llu keys in the keyspace!\\n\", sampled);\n    }\n\n    printf(\"Total key length in bytes is %llu (avg len %.2f)\\n\\n\",\n       totlen, totlen ? (double)totlen/sampled : 0);\n\n    /* Output the biggest keys we found, for types we did find */\n    dictInitIterator(&di, types_dict);\n    while ((de = dictNext(&di))) {\n        typeinfo *type = dictGetVal(de);\n        if(type->biggest_key) {\n            printf(\"Biggest %6s found %s has %llu %s\\n\", type->name, type->biggest_key,\n               type->biggest, !memkeys? 
type->sizeunit: \"bytes\");\n        }\n    }\n    dictResetIterator(&di);\n\n    printf(\"\\n\");\n\n    dictInitIterator(&di, types_dict);\n    while ((de = dictNext(&di))) {\n        typeinfo *type = dictGetVal(de);\n        printf(\"%llu %ss with %llu %s (%05.2f%% of keys, avg size %.2f)\\n\",\n           type->count, type->name, type->totalsize, !memkeys? type->sizeunit: \"bytes\",\n           sampled ? 100 * (double)type->count/sampled : 0,\n           type->count ? (double)type->totalsize/type->count : 0);\n    }\n    dictResetIterator(&di);\n\n    dictRelease(types_dict);\n\n    /* Success! */\n    exit(0);\n}\n\nstatic void getKeyFreqs(redisReply *keys, unsigned long long *freqs) {\n    redisReply *reply;\n    unsigned int i;\n\n    /* Pipeline OBJECT freq commands */\n    for(i=0;i<keys->elements;i++) {\n        const char* argv[] = {\"OBJECT\", \"FREQ\", keys->element[i]->str};\n        size_t lens[] = {6, 4, keys->element[i]->len};\n        redisAppendCommandArgv(context, 3, argv, lens);\n    }\n\n    /* Retrieve freqs */\n    for(i=0;i<keys->elements;i++) {\n        if(redisGetReply(context, (void**)&reply)!=REDIS_OK) {\n            sds keyname = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len);\n            fprintf(stderr, \"Error getting freq for key '%s' (%d: %s)\\n\",\n                keyname, context->err, context->errstr);\n            sdsfree(keyname);\n            exit(1);\n        } else if(reply->type != REDIS_REPLY_INTEGER) {\n            if(reply->type == REDIS_REPLY_ERROR) {\n                fprintf(stderr, \"Error: %s\\n\", reply->str);\n                exit(1);\n            } else {\n                sds keyname = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len);\n                fprintf(stderr, \"Warning: OBJECT freq on '%s' failed (may have been deleted)\\n\", keyname);\n                sdsfree(keyname);\n                freqs[i] = 0;\n            }\n        } else {\n            freqs[i] = 
reply->integer;\n        }\n        freeReplyObject(reply);\n    }\n}\n\n#define HOTKEYS_SAMPLE 16\nstatic void findHotKeys(void) {\n    redisReply *keys, *reply;\n    unsigned long long counters[HOTKEYS_SAMPLE] = {0};\n    sds hotkeys[HOTKEYS_SAMPLE] = {NULL};\n    unsigned long long sampled = 0, total_keys, *freqs = NULL, it = 0, scan_loops = 0;\n    unsigned int arrsize = 0, i;\n    int k;\n    double pct;\n    long long refresh_time = mstime();\n\n    signal(SIGINT, longStatLoopModeStop);\n    /* Total keys pre scanning */\n    total_keys = getDbSize();\n\n    /* Status message */\n    printf(\"\\n# Scanning the entire keyspace to find hot keys (as reported by\\n\");\n    printf(\"# OBJECT FREQ).  You can use -i 0.1 to sleep 0.1 sec\\n\");\n    printf(\"# per 100 SCAN commands (not usually needed).\\n\\n\");\n\n    /* Use readonly in cluster */\n    sendReadOnly();\n\n    /* SCAN loop */\n    do {\n        /* Calculate approximate percentage completion */\n        pct = 100 * (double)sampled/total_keys;\n\n        /* Grab some keys and point to the keys array */\n        reply = sendScan(&it);\n        scan_loops++;\n        keys  = reply->element[1];\n\n        /* Reallocate our freqs array if we need to */\n        if(keys->elements > arrsize) {\n            freqs = zrealloc(freqs, sizeof(unsigned long long)*keys->elements);\n\n            if(!freqs) {\n                fprintf(stderr, \"Failed to allocate storage for keys!\\n\");\n                exit(1);\n            }\n\n            arrsize = keys->elements;\n        }\n\n        getKeyFreqs(keys, freqs);\n\n        /* Now update our stats */\n        for(i=0;i<keys->elements;i++) {\n            sampled++;\n\n            /* Update overall progress.\n             * Only show the original progress output when writing to a file */\n            if (sampled % 1000000 == 0 && !IS_TTY_OR_FAKETTY()) {\n                printf(\"[%05.2f%%] Sampled %llu keys so far\\n\", pct, sampled);\n            }\n\n  
          /* Use eviction pool here */\n            k = 0;\n            while (k < HOTKEYS_SAMPLE && freqs[i] > counters[k]) k++;\n            if (k == 0) continue;\n            k--;\n            if (k == 0 || counters[k] == 0) {\n                sdsfree(hotkeys[k]);\n            } else {\n                sdsfree(hotkeys[0]);\n                memmove(counters,counters+1,sizeof(counters[0])*k);\n                memmove(hotkeys,hotkeys+1,sizeof(hotkeys[0])*k);\n            }\n            counters[k] = freqs[i];\n            hotkeys[k] = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len);\n\n            /* Only show the original progress output when writing to a file */\n            if (!IS_TTY_OR_FAKETTY()) {\n                printf(\"[%05.2f%%] Hot key %s found so far with counter %llu\\n\",\n                    pct, hotkeys[k], freqs[i]);\n            }\n        }\n\n        /* Show the progress bar in TTY */\n        if (mstime() > refresh_time + REFRESH_INTERVAL && IS_TTY_OR_FAKETTY()) {\n            int line_count = 0;\n            refresh_time = mstime();\n\n            line_count = displayKeyStatsProgressbar(sampled, total_keys);\n            line_count += cleanPrintfln(\"\");\n\n            for (k = HOTKEYS_SAMPLE - 1; k >= 0; k--) {\n                if (counters[k] > 0) {\n                    line_count += cleanPrintfln(\"hot key found with counter: %llu\\tkeyname: %s\", \n                        counters[k], hotkeys[k]);\n                }\n            }\n\n            printf(\"\\033[%dA\\r\", line_count);\n        }\n\n        /* Sleep if we've been directed to do so */\n        if (config.interval && (scan_loops % 100) == 0) {\n            usleep(config.interval);\n        }\n\n        freeReplyObject(reply);\n    } while(force_cancel_loop ==0 && it != 0);\n\n    /* Final progress bar in TTY */\n    if (IS_TTY_OR_FAKETTY()) {\n        displayKeyStatsProgressbar(sampled, total_keys);\n\n        /* clean the types info shown during the 
progress bar */\n        int line_count = 0;\n        for (k = 0; k <= HOTKEYS_SAMPLE; k++)\n            line_count += cleanPrintfln(\"\");\n        printf(\"\\033[%dA\\r\", line_count);\n    }\n\n    if (freqs) zfree(freqs);\n\n    /* We're done */\n    printf(\"\\n-------- summary -------\\n\\n\");\n\n    /* Show the original output when writing to a file */\n    if (!IS_TTY_OR_FAKETTY()) {\n        if(force_cancel_loop) printf(\"[%05.2f%%] \",pct);\n        printf(\"Sampled %llu keys in the keyspace!\\n\", sampled);\n    }\n\n    for (k = HOTKEYS_SAMPLE - 1; k >= 0; k--) {\n        if (counters[k] > 0) {\n            printf(\"hot key found with counter: %llu\\tkeyname: %s\\n\", counters[k], hotkeys[k]);\n            sdsfree(hotkeys[k]);\n        }\n    }\n\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Stats mode\n *--------------------------------------------------------------------------- */\n\n/* Return the specified INFO field from the INFO command output \"info\".\n * A new buffer is allocated for the result, that needs to be free'd.\n * If the field is not found NULL is returned. */\nstatic char *getInfoField(char *info, char *field) {\n    char *p = strstr(info,field);\n    char *n1, *n2;\n    char *result;\n\n    if (!p) return NULL;\n    p += strlen(field)+1;\n    n1 = strchr(p,'\\r');\n    n2 = strchr(p,',');\n    if (n2 && n2 < n1) n1 = n2;\n    result = zmalloc(sizeof(char)*(n1-p)+1);\n    memcpy(result,p,(n1-p));\n    result[n1-p] = '\\0';\n    return result;\n}\n\n/* Like the above function but automatically convert the result into\n * a long. On error (missing field) LONG_MIN is returned. 
*/\nstatic long getLongInfoField(char *info, char *field) {\n    char *value = getInfoField(info,field);\n    long l;\n\n    if (!value) return LONG_MIN;\n    l = strtol(value,NULL,10);\n    zfree(value);\n    return l;\n}\n\n/* Convert number of bytes into a human readable string of the form:\n * 1003B, 4.03K, 100.00M, 2.32G, 3.01T\n * Returns the parameter `s` containing the converted number. */\nchar *bytesToHuman(char *s, size_t size, long long n) {\n    double d;\n    char *r = s;\n\n    if (n < 0) {\n        *s = '-';\n        s++;\n        n = -n;\n    }\n    if (n < 1024) {\n        /* Bytes */\n        snprintf(s,size,\"%lldB\",n);\n    } else if (n < (1024*1024)) {\n        d = (double)n/(1024);\n        snprintf(s,size,\"%.2fK\",d);\n    } else if (n < (1024LL*1024*1024)) {\n        d = (double)n/(1024*1024);\n        snprintf(s,size,\"%.2fM\",d);\n    } else if (n < (1024LL*1024*1024*1024)) {\n        d = (double)n/(1024LL*1024*1024);\n        snprintf(s,size,\"%.2fG\",d);\n    } else if (n < (1024LL*1024*1024*1024*1024)) {\n        d = (double)n/(1024LL*1024*1024*1024);\n        snprintf(s,size,\"%.2fT\",d);\n    } else {\n        /* Fallback for n >= 1024^5 so the output buffer is always written. */\n        d = (double)n/(1024LL*1024*1024*1024*1024);\n        snprintf(s,size,\"%.2fP\",d);\n    }\n\n    return r;\n}\n\nstatic void statMode(void) {\n    redisReply *reply;\n    long aux, requests = 0;\n    int dbnum = getDatabases();\n    int i = 0;\n\n    while(1) {\n        char buf[64];\n        int j;\n\n        reply = reconnectingRedisCommand(context,\"INFO\");\n        if (reply == NULL) {\n            fprintf(stderr, \"\\nI/O error\\n\");\n            exit(1);\n        } else if (reply->type == REDIS_REPLY_ERROR) {\n            fprintf(stderr, \"ERROR: %s\\n\", reply->str);\n            exit(1);\n        }\n\n        if ((i++ % 20) == 0) {\n            printf(\n\"------- data ------ --------------------- load -------------------- - child -\\n\"\n\"keys       mem      clients blocked requests            connections          \\n\");\n        }\n\n        /* Keys */\n        aux = 0;\n        for (j = 0; j < dbnum; j++) {\n       
     long k;\n\n            snprintf(buf,sizeof(buf),\"db%d:keys\",j);\n            k = getLongInfoField(reply->str,buf);\n            if (k == LONG_MIN) continue;\n            aux += k;\n        }\n        snprintf(buf,sizeof(buf),\"%ld\",aux);\n        printf(\"%-11s\",buf);\n\n        /* Used memory */\n        aux = getLongInfoField(reply->str,\"used_memory\");\n        bytesToHuman(buf,sizeof(buf),aux);\n        printf(\"%-8s\",buf);\n\n        /* Clients */\n        aux = getLongInfoField(reply->str,\"connected_clients\");\n        snprintf(buf,sizeof(buf),\"%ld\",aux);\n        printf(\" %-8s\",buf);\n\n        /* Blocked (BLPOPPING) Clients */\n        aux = getLongInfoField(reply->str,\"blocked_clients\");\n        snprintf(buf,sizeof(buf),\"%ld\",aux);\n        printf(\"%-8s\",buf);\n\n        /* Requests */\n        aux = getLongInfoField(reply->str,\"total_commands_processed\");\n        snprintf(buf,sizeof(buf),\"%ld (+%ld)\",aux,requests == 0 ? 0 : aux-requests);\n        printf(\"%-19s\",buf);\n        requests = aux;\n\n        /* Connections */\n        aux = getLongInfoField(reply->str,\"total_connections_received\");\n        snprintf(buf,sizeof(buf),\"%ld\",aux);\n        printf(\" %-12s\",buf);\n\n        /* Children */\n        aux = getLongInfoField(reply->str,\"bgsave_in_progress\");\n        aux |= getLongInfoField(reply->str,\"aof_rewrite_in_progress\") << 1;\n        aux |= getLongInfoField(reply->str,\"loading\") << 2;\n        switch(aux) {\n        case 0: break;\n        case 1:\n            printf(\"SAVE\");\n            break;\n        case 2:\n            printf(\"AOF\");\n            break;\n        case 3:\n            printf(\"SAVE+AOF\");\n            break;\n        case 4:\n            printf(\"LOAD\");\n            break;\n        }\n\n        printf(\"\\n\");\n        freeReplyObject(reply);\n        usleep(config.interval);\n    }\n}\n\n/*------------------------------------------------------------------------------\n * 
Scan mode\n *--------------------------------------------------------------------------- */\n\nstatic void scanMode(void) {\n    redisReply *reply;\n    unsigned long long cur = 0;\n    signal(SIGINT, longStatLoopModeStop);\n    do {\n        reply = sendScan(&cur);\n        for (unsigned int j = 0; j < reply->element[1]->elements; j++) {\n            if (config.output == OUTPUT_STANDARD) {\n                sds out = sdscatrepr(sdsempty(), reply->element[1]->element[j]->str,\n                                     reply->element[1]->element[j]->len);\n                printf(\"%s\\n\", out);\n                sdsfree(out);\n            } else {\n                printf(\"%s\\n\", reply->element[1]->element[j]->str);\n            }\n        }\n        freeReplyObject(reply);\n        if (config.interval) usleep(config.interval);\n    } while(force_cancel_loop == 0 && cur != 0);\n\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * LRU test mode\n *--------------------------------------------------------------------------- */\n\n/* Return an integer from min to max (both inclusive) using a power-law\n * distribution, depending on the value of alpha: the greater the alpha\n * the more bias towards lower values.\n *\n * With alpha = 6.2 the output follows the 80-20 rule where 20% of\n * the returned numbers will account for 80% of the frequency. */\nlong long powerLawRand(long long min, long long max, double alpha) {\n    double pl, r;\n\n    max += 1;\n    r = ((double)rand()) / RAND_MAX;\n    pl = pow(\n        ((pow(max,alpha+1) - pow(min,alpha+1))*r + pow(min,alpha+1)),\n        (1.0/(alpha+1)));\n    return (max-1-(long long)pl)+min;\n}\n\n/* Generates a key name among a set of lru_test_sample_size keys, using\n * an 80-20 distribution. 
*/\nvoid LRUTestGenKey(char *buf, size_t buflen) {\n    snprintf(buf, buflen, \"lru:%lld\",\n        powerLawRand(1, config.lru_test_sample_size, 6.2));\n}\n\n#define LRU_CYCLE_PERIOD 1000 /* 1000 milliseconds. */\n#define LRU_CYCLE_PIPELINE_SIZE 250\nstatic void LRUTestMode(void) {\n    redisReply *reply;\n    char key[128];\n    long long start_cycle;\n    int j;\n\n    srand(time(NULL)^getpid());\n    while(1) {\n        /* Perform cycles of 1 second with 50% writes and 50% reads.\n         * We use pipelining batching writes / reads N times per cycle in order\n         * to fill the target instance easily. */\n        start_cycle = mstime();\n        long long hits = 0, misses = 0;\n        while(mstime() - start_cycle < LRU_CYCLE_PERIOD) {\n            /* Write cycle. */\n            for (j = 0; j < LRU_CYCLE_PIPELINE_SIZE; j++) {\n                char val[6];\n                val[5] = '\\0';\n                for (int i = 0; i < 5; i++) val[i] = 'A'+rand()%('z'-'A');\n                LRUTestGenKey(key,sizeof(key));\n                redisAppendCommand(context, \"SET %s %s\",key,val);\n            }\n            for (j = 0; j < LRU_CYCLE_PIPELINE_SIZE; j++)\n                redisGetReply(context, (void**)&reply);\n\n            /* Read cycle. 
*/\n            for (j = 0; j < LRU_CYCLE_PIPELINE_SIZE; j++) {\n                LRUTestGenKey(key,sizeof(key));\n                redisAppendCommand(context, \"GET %s\",key);\n            }\n            for (j = 0; j < LRU_CYCLE_PIPELINE_SIZE; j++) {\n                if (redisGetReply(context, (void**)&reply) == REDIS_OK) {\n                    switch(reply->type) {\n                        case REDIS_REPLY_ERROR:\n                            fprintf(stderr, \"%s\\n\", reply->str);\n                            break;\n                        case REDIS_REPLY_NIL:\n                            misses++;\n                            break;\n                        default:\n                            hits++;\n                            break;\n                    }\n                }\n            }\n\n            if (context->err) {\n                fprintf(stderr,\"I/O error during LRU test\\n\");\n                exit(1);\n            }\n        }\n        /* Print stats. */\n        long long total_gets = hits + misses;\n        printf(\n            \"%lld Gets/sec | Hits: %lld (%.2f%%) | Misses: %lld (%.2f%%)\\n\",\n            hits+misses,\n            hits, total_gets > 0 ? (double)hits/total_gets*100 : 0.0,\n            misses, total_gets > 0 ? (double)misses/total_gets*100 : 0.0);\n    }\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Intrinsic latency mode.\n *\n * Measure max latency of a running process that does not result from\n * syscalls. Basically this software should provide a hint about how much\n * time the kernel leaves the process without a chance to run.\n *--------------------------------------------------------------------------- */\n\n/* This is just some computation the compiler can't optimize out.\n * Should run in less than 100-200 microseconds even using very\n * slow hardware. Runs in less than 10 microseconds in modern HW. 
*/\nunsigned long compute_something_fast(void) {\n    unsigned char s[256], i, j, t;\n    int count = 1000, k;\n    unsigned long output = 0;\n\n    for (k = 0; k < 256; k++) s[k] = k;\n\n    i = 0;\n    j = 0;\n    while(count--) {\n        i++;\n        j = j + s[i];\n        t = s[i];\n        s[i] = s[j];\n        s[j] = t;\n        output += s[(s[i]+s[j])&255];\n    }\n    return output;\n}\n\nstatic void sigIntHandler(int s) {\n    UNUSED(s);\n\n    if (config.monitor_mode || config.pubsub_mode) {\n        close(context->fd);\n        context->fd = REDIS_INVALID_FD;\n        config.blocking_state_aborted = 1;\n    } else {\n        exit(1);\n    }\n}\n\nstatic void intrinsicLatencyMode(void) {\n    long long test_end, run_time, max_latency = 0, runs = 0;\n\n    run_time = (long long)config.intrinsic_latency_duration * 1000000;\n    test_end = ustime() + run_time;\n    signal(SIGINT, longStatLoopModeStop);\n\n    while(1) {\n        long long start, end, latency;\n\n        start = ustime();\n        compute_something_fast();\n        end = ustime();\n        latency = end-start;\n        runs++;\n        if (latency <= 0) continue;\n\n        /* Reporting */\n        if (latency > max_latency) {\n            max_latency = latency;\n            printf(\"Max latency so far: %lld microseconds.\\n\", max_latency);\n        }\n\n        double avg_us = (double)run_time/runs;\n        double avg_ns = avg_us * 1e3;\n        if (force_cancel_loop || end > test_end) {\n            printf(\"\\n%lld total runs \"\n                \"(avg latency: \"\n                \"%.4f microseconds / %.2f nanoseconds per run).\\n\",\n                runs, avg_us, avg_ns);\n            printf(\"Worst run took %.0fx longer than the average latency.\\n\",\n                max_latency / avg_us);\n            exit(0);\n        }\n    }\n}\n\nstatic sds askPassword(const char *msg) {\n    linenoiseMaskModeEnable();\n    sds auth = linenoise(msg);\n    linenoiseMaskModeDisable();\n    
return auth;\n}\n\n/* Prints out the hint completion string for a given input prefix string. */\nvoid testHint(const char *input) {\n    cliInitHelp();\n\n    sds hint = getHintForInput(input);\n    printf(\"%s\\n\", hint);\n    exit(0);\n}\n\nsds readHintSuiteLine(char buf[], size_t size, FILE *fp) {\n    while (fgets(buf, size, fp) != NULL) {\n        if (buf[0] != '#') {\n            sds input = sdsnew(buf);\n\n            /* Strip newline. */\n            input = sdstrim(input, \"\\n\");\n            return input;\n        }\n    }\n    return NULL;\n}\n\n/* Runs a suite of hint completion tests contained in a file. */\nvoid testHintSuite(char *filename) {\n    FILE *fp;\n    char buf[256];\n    sds line, input, expected, hint;\n    int pass=0, fail=0;\n    int argc;\n    char **argv;\n\n    fp = fopen(filename, \"r\");\n    if (!fp) {\n        fprintf(stderr,\n            \"Can't open file '%s': %s\\n\", filename, strerror(errno));\n        exit(-1);\n    }\n\n    cliInitHelp();\n\n    while (1) {\n        line = readHintSuiteLine(buf, sizeof(buf), fp);\n        if (line == NULL) break;\n        argv = sdssplitargs(line, &argc);\n        sdsfree(line);\n        if (argc == 0) {\n            sdsfreesplitres(argv, argc);\n            continue;\n        }\n\n        if (argc == 1) {\n            fprintf(stderr,\n                \"Missing expected hint for input '%s'\\n\", argv[0]);\n            exit(-1);\n        }\n        input = argv[0];\n        expected = argv[1];\n        hint = getHintForInput(input);\n        if (config.verbose) {\n            printf(\"Input: '%s', Expected: '%s', Hint: '%s'\\n\", input, expected, hint);\n        }\n\n        /* Strip trailing spaces from hint - they don't matter. 
*/\n        while (hint != NULL && sdslen(hint) > 0 && hint[sdslen(hint) - 1] == ' ') {\n            sdssetlen(hint, sdslen(hint) - 1);\n            hint[sdslen(hint)] = '\\0';\n        }\n\n        if (hint == NULL || strcmp(hint, expected) != 0) {\n            fprintf(stderr, \"Test case '%s' FAILED: expected '%s', got '%s'\\n\", input, expected, hint);\n            ++fail;\n        }\n        else {\n            ++pass;\n        }\n        sdsfreesplitres(argv, argc);\n        sdsfree(hint);\n    }\n    fclose(fp);\n\n    printf(\"%s: %d/%d passed\\n\", fail == 0 ? \"SUCCESS\" : \"FAILURE\", pass, pass + fail);\n    exit(fail);\n}\n\n/*------------------------------------------------------------------------------\n * Keystats\n *--------------------------------------------------------------------------- */\n\n/* Key name length distribution. */\n\ntypedef struct size_dist_entry {\n    unsigned long long size;        /* Key name size in bytes. */\n    unsigned long long count;       /* Number of key names less than or equal to this size. */\n} size_dist_entry;\n\ntypedef struct size_dist {\n    unsigned long long total_count; /* Total number of key names in the distribution. */\n    unsigned long long total_size;  /* Sum of all the key name sizes in bytes. */\n    unsigned long long max_size;    /* Highest key name size in bytes. */\n    size_dist_entry *size_dist;     /* Array of sizes and key name counts per size. 
*/\n} size_dist;\n\n/* distribution is an array initialized with last element {0, 0}\n * for instance: size_dist_entry distribution[] = { {32, 0}, {256, 0}, {0, 0} }; */\nstatic void sizeDistInit(size_dist *dist, size_dist_entry *distribution) {\n    dist->max_size = 0;\n    dist->total_count = 0;\n    dist->total_size = 0;\n    dist->size_dist = distribution;\n}\n\nstatic void addSizeDist(size_dist *dist, unsigned long long size) {\n    dist->total_count++;\n    dist->total_size += size;\n\n    if (size > dist->max_size)\n        dist->max_size = size;\n\n    int j;\n    for (j=0; dist->size_dist[j].size && size > dist->size_dist[j].size; j++);\n    dist->size_dist[j].count++;\n}\n\nstatic int displayKeyStatsLengthDist(size_dist *dist) {\n    int line_count = 0;\n    unsigned long long total_keys = 0, size;\n    char buf[2][256];\n\n    line_count += cleanPrintfln(\"Key name length Percentile Total keys\");\n    line_count += cleanPrintfln(\"--------------- ---------- -----------\");\n\n    for (int i=0; dist->size_dist[i].size; i++) {\n        if (dist->size_dist[i].count) {\n            if (dist->max_size < dist->size_dist[i].size) {\n                size = dist->max_size;\n            } else {\n                size = dist->size_dist[i].size;\n            }\n            total_keys += dist->size_dist[i].count;\n            line_count += cleanPrintfln(\"%15s %9.4f%% %11llu\",\n                bytesToHuman(buf[1], sizeof(buf[1]), size),\n                (double)100 * total_keys / dist->total_count,\n                total_keys);\n        }\n    }\n\n    if (total_keys < dist->total_count) {\n        line_count += cleanPrintfln(\"           inf %9.4f%% %11llu\", 100.0, dist->total_count);\n    }\n\n    line_count += cleanPrintfln(\"Total key length is %s (%s avg)\",\n        bytesToHuman(buf[0], sizeof(buf[0]), dist->total_size),\n        dist->total_count ? 
bytesToHuman(buf[1], sizeof(buf[1]), dist->total_size/dist->total_count) : \"0\");\n\n    return line_count;\n}\n\n#define PROGRESSBAR_WIDTH 60\nstatic int displayKeyStatsProgressbar(unsigned long long sampled,\n                                      unsigned long long total_keys)\n{\n    int line_count = 0;\n    char progressbar[512];\n    char buf[2][128];\n\n    /* We can go over 100% if keys are added in the middle of the scans.\n     * Cap at 100% or the progressbar memset will overflow. */\n    double completion_pct = total_keys ? sampled < total_keys ? (double) sampled/total_keys : 1 : 0;\n\n    /* If we are not redirecting to a file, build the progress bar */\n    if (IS_TTY_OR_FAKETTY()) {\n        int completed_width = (int)round(PROGRESSBAR_WIDTH * completion_pct);\n        memset(buf[0], '|', completed_width);\n        buf[0][completed_width]= '\\0';\n\n        int uncompleted_width = PROGRESSBAR_WIDTH - completed_width;\n        memset(buf[1], '-', uncompleted_width);\n        buf[1][uncompleted_width]= '\\0';\n\n        char red[] = \"\\033[31m\";\n        char green[] = \"\\033[32m\";\n        char default_color[] = \"\\033[39m\";\n        snprintf(progressbar, sizeof(progressbar), \"%s%s%s%s%s\",\n            green, buf[0], red, buf[1], default_color);\n    } else {\n        snprintf(progressbar, sizeof(progressbar), \"%s\", \"keys scanned\");\n    }\n\n    line_count += cleanPrintfln(\"%6.2f%% %s\", completion_pct * 100, progressbar);\n    line_count += cleanPrintfln(\"Keys sampled: %llu\", sampled);\n\n    return line_count;\n}\n\nstatic int displayKeyStatsSizeType(dict *memkeys_types_dict) {\n    dictIterator di;\n    dictEntry *de;\n    int line_count = 0;\n    char buf[256];\n\n    line_count += cleanPrintfln(\"--- Top size per type ---\");\n    dictInitIterator(&di, memkeys_types_dict);\n    while ((de = dictNext(&di))) {\n        typeinfo *type = dictGetVal(de);\n        if (type->biggest_key) {\n            line_count += cleanPrintfln(\"%-10s 
%s is %s\",\n                type->name, type->biggest_key,\n                bytesToHuman(buf, sizeof(buf),type->biggest));\n        }\n    }\n    dictResetIterator(&di);\n\n    return line_count;\n}\n\nstatic int displayKeyStatsLengthType(dict *bigkeys_types_dict) {\n    dictIterator di;\n    dictEntry *de;\n    int line_count = 0;\n    char buf[256];\n\n    line_count += cleanPrintfln(\"--- Top length and cardinality per type ---\");\n    dictInitIterator(&di, bigkeys_types_dict);\n    while ((de = dictNext(&di))) {\n        typeinfo *type = dictGetVal(de);\n        if (type->biggest_key) {\n            if (!strcmp(type->sizeunit, \"bytes\")) {\n                bytesToHuman(buf, sizeof(buf), type->biggest);\n            } else {\n                snprintf(buf, sizeof(buf), \"%llu %s\", type->biggest, type->sizeunit);\n            }\n            line_count += cleanPrintfln(\"%-10s %s has %s\", type->name, type->biggest_key, buf);\n        }\n    }\n    dictResetIterator(&di);\n\n    return line_count;\n}\n\nstatic int displayKeyStatsSizeDist(struct hdr_histogram *keysize_histogram) {\n    int line_count = 0;\n    double percentile;\n    char size[32], mean[32], stddev[32];\n    struct hdr_iter iter;\n    int64_t last_displayed_cumulative_count = 0;\n\n    if (keysize_histogram->total_count == 0) {\n        line_count += cleanPrintfln(\"No key size samples collected\");\n        return line_count;\n    }\n\n    hdr_iter_percentile_init(&iter, keysize_histogram, 1);\n\n    line_count += cleanPrintfln(\"Key size Percentile Total keys\");\n    line_count += cleanPrintfln(\"-------- ---------- -----------\");\n\n    while (hdr_iter_next(&iter)) {\n        /* Skip repeat in hdr_histogram cumulative_count. 
For instance:\n         * 140.68K    99.9969%        50013\n         * 140.68K    99.9977%        50013\n         *   2.04G   100.0000%        50014\n         * Will display:\n         * 140.68K    99.9969%        50013\n         *   2.04G   100.0000%        50014                                   */\n\n        if (iter.cumulative_count != last_displayed_cumulative_count) {\n            percentile = (100.0 * (double) iter.cumulative_count) / iter.h->total_count;\n\n            line_count += cleanPrintfln(\"%8s %9.4f%% %11lld\",\n                bytesToHuman(size, sizeof(size), iter.highest_equivalent_value),\n                percentile,\n                iter.cumulative_count);\n\n            last_displayed_cumulative_count = iter.cumulative_count;\n        }\n    }\n\n    bytesToHuman(mean, sizeof(mean),hdr_mean(keysize_histogram));\n    bytesToHuman(stddev, sizeof(stddev),hdr_stddev(keysize_histogram));\n    line_count += cleanPrintfln(\"Note: 0.01%% size precision, Mean: %s, StdDeviation: %s\", mean, stddev);\n\n    return line_count;\n}\n\nstatic int displayKeyStatsType(unsigned long long sampled,\n                               dict *memkeys_types_dict,\n                               dict *bigkeys_types_dict)\n{\n    dictIterator di;\n    dictEntry *de;\n    int line_count = 0;\n    char total_size[64], size_avg[64], total_length[64], length_avg[64];\n\n    line_count += cleanPrintfln(\"Type        Total keys  Keys %% Tot size Avg size  Total length/card Avg ln/card\");\n    line_count += cleanPrintfln(\"--------- ------------ ------- -------- -------- ------------------ -----------\");\n\n    dictInitIterator(&di, memkeys_types_dict);\n    while ((de = dictNext(&di))) {\n        typeinfo *memkey_type = dictGetVal(de);\n        if (memkey_type->count) {\n            /* Key count, percentage, memkeys info */\n            bytesToHuman(total_size, sizeof(total_size), memkey_type->totalsize);\n            bytesToHuman(size_avg, sizeof(size_avg), 
memkey_type->totalsize/memkey_type->count);\n\n            strncpy(total_length, \" - \", sizeof(total_length));\n            strncpy(length_avg, \" - \", sizeof(length_avg));\n\n            /* bigkeys info */\n            dictEntry *bk_de = dictFind(bigkeys_types_dict, memkey_type->name);\n            if (bk_de) { /* If we have it in memkeys it should be in bigkeys */\n                typeinfo *bigkey_type = dictGetVal(bk_de);\n                if (bigkey_type->sizecmd && bigkey_type->count) {\n                    double avg = (double)bigkey_type->totalsize/bigkey_type->count;\n                    if (!strcmp(bigkey_type->sizeunit, \"bytes\")) {\n                        bytesToHuman(total_length, sizeof(total_length), bigkey_type->totalsize);\n                        bytesToHuman(length_avg, sizeof(length_avg), (long long)round(avg)); /* better than truncating */\n                    } else {\n                        snprintf(total_length, sizeof(total_length), \"%llu %s\", bigkey_type->totalsize, bigkey_type->sizeunit);\n                        snprintf(length_avg, sizeof(length_avg), \"%.2f\", avg);\n                    }\n                }\n            }\n            /* Print the line for the given Redis type */\n            line_count += cleanPrintfln(\"%-10s %11llu %6.2f%% %8s %8s %18s %11s\",\n                memkey_type->name, memkey_type->count,\n                sampled ? 
100 * (double)memkey_type->count/sampled : 0,\n                total_size, size_avg, total_length, length_avg);\n        }\n    }\n    dictResetIterator(&di);\n\n    return line_count;\n}\n\ntypedef struct key_info {\n    unsigned long long size;\n    char type_name[10]; /* Key type name seems to be 9 char max + \\0 */\n    sds key_name;\n} key_info;\n\nstatic int displayKeyStatsTopSizes(list *top_key_sizes, unsigned long top_sizes_limit) {\n    int line_count = 0, i = 0;\n\n    line_count += cleanPrintfln(\"--- Top %llu key sizes ---\", top_sizes_limit);\n    char buffer[32];\n    listIter iter;\n    listNode *node;\n    listRewind(top_key_sizes, &iter);\n    while ((node = listNext(&iter)) != NULL) {\n        key_info *key = (key_info*) listNodeValue(node);\n        line_count += cleanPrintfln(\"%3d %8s %-10s %s\", ++i, bytesToHuman(buffer, sizeof(buffer), key->size),\n                                    key->type_name, key->key_name);\n    }\n\n    return line_count;\n}\n\nstatic key_info *createKeySizeInfo(char *key_name, size_t key_name_len, char *key_type, unsigned long long size) {\n    key_info *key = zmalloc(sizeof(key_info));\n    key->size = size;\n    snprintf(key->type_name, sizeof(key->type_name), \"%s\", key_type);\n    key->key_name = sdscatrepr(sdsempty(), key_name, key_name_len);\n    if (!key->key_name) {\n        fprintf(stderr, \"Failed to allocate memory for key name.\\n\");\n        exit(1);\n    }\n    return key;\n}\n\n/* Insert key info in topkeys sorted by size (from high to low size).\n * Keep a maximum of config.top_sizes_limit items in topkeys list.\n * key_name and type_name are copied.\n * Return: 0 size was not added (too small), 1 size was inserted.  
*/\nstatic int updateTopSizes(char *key_name, size_t key_name_len, unsigned long long key_size,\n                          char *type_name, list *topkeys, unsigned long top_sizes_limit)\n{\n    listNode *node;\n    listIter iter;\n    key_info *new_node;\n\n    /* Check if we do not need to add to the list */\n    if (top_sizes_limit != 0 &&\n        topkeys->len == top_sizes_limit &&\n        key_size <= ((key_info*)topkeys->tail->value)->size){\n        return 0;\n    }\n\n    /* Find where to insert the new key size */\n    listRewind(topkeys, &iter);\n    do {\n        node = listNext(&iter);\n    } while (node != NULL && key_size <= ((key_info*)node->value)->size);\n\n    new_node = createKeySizeInfo(key_name, key_name_len, type_name, key_size);\n    if (node) {\n        /* Insert before the node */\n        listInsertNode(topkeys, node, new_node, 0);\n    } else {\n        /* Insert as the last node */\n        listAddNodeTail(topkeys, new_node);\n    }\n\n    /* Trim to stay within the limit */\n    if (topkeys->len == top_sizes_limit + 1) {\n        sdsfree(((key_info*)topkeys->tail->value)->key_name);\n        listDelNode(topkeys, topkeys->tail); /* list->free is set */\n    }\n\n    return 1;\n}\n\nstatic void displayKeyStats(unsigned long long sampled, unsigned long long total_keys,\n                            unsigned long long total_size, dict *memkeys_types_dict,\n                            dict *bigkeys_types_dict, list *top_key_sizes,\n                            unsigned long top_sizes_limit, int move_cursor_up)\n{\n    int line_count = 0;\n    char buf[256];\n\n    line_count += displayKeyStatsProgressbar(sampled, total_keys);\n    line_count += cleanPrintfln(\"Keys size:    %s\", bytesToHuman(buf, sizeof(buf), total_size));\n    line_count += cleanPrintfln(\"\");\n    line_count += displayKeyStatsTopSizes(top_key_sizes, top_sizes_limit);\n    line_count += cleanPrintfln(\"\");\n    line_count += displayKeyStatsSizeType(memkeys_types_dict);\n    
line_count += cleanPrintfln(\"\");\n    line_count += displayKeyStatsLengthType(bigkeys_types_dict);\n\n    /* If we need to refresh the stats */\n    if (move_cursor_up) {\n        printf(\"\\033[%dA\\r\", line_count);\n    }\n\n    fflush(stdout);\n}\n\nstatic void updateKeyType(redisReply *element, unsigned long long size, typeinfo *type) {\n    type->totalsize += size;\n    type->count++;\n\n    if (type->biggest<size) {\n        /* Keep track of biggest key name for this type */\n        if (type->biggest_key)\n            sdsfree(type->biggest_key);\n        type->biggest_key = sdscatrepr(sdsempty(), element->str, element->len);\n        if (!type->biggest_key) {\n            fprintf(stderr, \"Failed to allocate memory for key!\\n\");\n            exit(1);\n        }\n        /* Keep track of the biggest size for this type */\n        type->biggest = size;\n    }\n}\n\nstatic void keyStats(long long memkeys_samples, unsigned long long cursor, unsigned long top_sizes_limit) {\n    unsigned long long sampled = 0, total_keys, total_size = 0, it = 0, scan_loops = 0;\n    unsigned long long *memkeys_sizes = NULL, *bigkeys_sizes = NULL;\n    redisReply *reply, *keys;\n    unsigned int array_size = 0, i;\n    typeinfo **memkeys_types = NULL, **bigkeys_types = NULL;\n    list *top_sizes;\n    long long refresh_time = mstime();\n\n    if (cursor != 0) {\n        it = cursor;\n    }\n\n    if ((top_sizes = listCreate()) == NULL) {\n        fprintf(stderr, \"top_sizes list creation failed.\\n\");\n        exit(1);\n    }\n    top_sizes->free = zfree;\n\n    dict *memkeys_types_dict = dictCreate(&typeinfoDictType);\n    typeinfo_add(memkeys_types_dict, \"string\", &type_string);\n    typeinfo_add(memkeys_types_dict, \"list\", &type_list);\n    typeinfo_add(memkeys_types_dict, \"set\", &type_set);\n    typeinfo_add(memkeys_types_dict, \"hash\", &type_hash);\n    typeinfo_add(memkeys_types_dict, \"zset\", &type_zset);\n    typeinfo_add(memkeys_types_dict, \"stream\", 
&type_stream);\n\n    /* We could use only one typeinfo dictionary if we add new fields to save\n     * both memkey and bigkey info. Not sure it would make sense in findBigKeys(). */\n    dict *bigkeys_types_dict = dictCreate(&typeinfoDictType);\n    typeinfo_add(bigkeys_types_dict, \"string\", &type_string);\n    typeinfo_add(bigkeys_types_dict, \"list\", &type_list);\n    typeinfo_add(bigkeys_types_dict, \"set\", &type_set);\n    typeinfo_add(bigkeys_types_dict, \"hash\", &type_hash);\n    typeinfo_add(bigkeys_types_dict, \"zset\", &type_zset);\n    typeinfo_add(bigkeys_types_dict, \"stream\", &type_stream);\n\n    size_dist key_length_dist;\n    size_dist_entry distribution[] = {\n        {1<<5, 0},                 /*  32 B  (sds)                                            */\n        {1<<8, 0},                 /* 256 B  (sds)                                            */\n        {1<<16, 0},                /*  64 KB (sds and Redis Enterprise key name max length)   */\n        {1024*1024, 0},            /*   1 MB                                                  */\n        {16*1024*1024, 0},         /*  16 MB                                                  */\n        {128*1024*1024, 0},        /* 128 MB                                                  */\n        {512*1024*1024, 0},        /* 512 MB (max String size)                                */\n        {0, 0},                    /* Sizes above the last entry                              */\n    };\n    sizeDistInit(&key_length_dist, distribution);\n\n    struct hdr_histogram *keysize_histogram;\n    /* Record max of 1TB for a key size should cover all keys.\n     * significant_figures == 4 (0.01% precision on key size)  */\n    if (hdr_init(1, 1ULL*1024*1024*1024*1024, 4, &keysize_histogram)) {\n        fprintf(stderr, \"Keystats hdr init error\\n\");\n        exit(1);\n    }\n\n    signal(SIGINT, longStatLoopModeStop);\n\n    /* Total keys pre scanning */\n    total_keys = getDbSize();\n\n    /* Status 
message */\n    printf(\"\\n# Scanning the entire keyspace to find the biggest keys and distribution information.\\n\");\n    printf(\"# Use -i 0.1 to sleep 0.1 sec per 100 SCAN commands (not usually needed).\\n\");\n    printf(\"# Use --cursor <n> to start the scan at the cursor <n> (usually after a Ctrl-C).\\n\");\n    printf(\"# Use --top <n> to display <n> top key sizes (default is 10).\\n\");\n    printf(\"# Ctrl-C to stop the scan.\\n\\n\");\n\n    /* Use readonly in cluster */\n    sendReadOnly();\n\n    /* SCAN loop */\n    do {\n        /* Grab some keys and point to the keys array */\n        reply = sendScan(&it);\n        scan_loops++;\n        keys = reply->element[1];\n\n        /* Reallocate our type and size array if we need to */\n        if (keys->elements > array_size) {\n            memkeys_types = zrealloc(memkeys_types, sizeof(typeinfo*)*keys->elements);\n            memkeys_sizes = zrealloc(memkeys_sizes, sizeof(unsigned long long)*keys->elements);\n\n            bigkeys_types = zrealloc(bigkeys_types, sizeof(typeinfo*)*keys->elements);\n            bigkeys_sizes = zrealloc(bigkeys_sizes, sizeof(unsigned long long)*keys->elements);\n\n            if (!memkeys_types || !memkeys_sizes || !bigkeys_types || !bigkeys_sizes) {\n                fprintf(stderr, \"Failed to allocate storage for keys!\\n\");\n                exit(1);\n            }\n\n            array_size = keys->elements;\n        }\n\n        /* Retrieve types and sizes for memkeys */\n        getKeyTypes(memkeys_types_dict, keys, memkeys_types);\n        getKeySizes(keys, memkeys_types, memkeys_sizes, 1, memkeys_samples);\n\n        /* Retrieve types and sizes for bigkeys */\n        getKeyTypes(bigkeys_types_dict, keys, bigkeys_types);\n        getKeySizes(keys, bigkeys_types, bigkeys_sizes, 0, memkeys_samples);\n\n        for (i=0; i<keys->elements; i++) {\n            /* Skip keys that disappeared between SCAN and TYPE */\n            if (!memkeys_types[i] || !bigkeys_types[i]) 
{\n                continue;\n            }\n\n            total_size += memkeys_sizes[i];\n            sampled++;\n\n            updateTopSizes(keys->element[i]->str, keys->element[i]->len, memkeys_sizes[i],\n                           memkeys_types[i]->name, top_sizes, top_sizes_limit);\n            updateKeyType(keys->element[i], memkeys_sizes[i], memkeys_types[i]);\n            updateKeyType(keys->element[i], bigkeys_sizes[i], bigkeys_types[i]);\n\n            /* Key size distribution */\n            if (hdr_record_value(keysize_histogram, memkeys_sizes[i]) == 0) {\n                fprintf(stderr, \"Value %llu was not added to the hdr histogram.\\n\", memkeys_sizes[i]);\n            }\n\n            /* Key length distribution */\n            addSizeDist(&key_length_dist, keys->element[i]->len);\n        }\n\n        /* Refresh the keystats display on a regular basis */\n        if (mstime() > refresh_time + REFRESH_INTERVAL && IS_TTY_OR_FAKETTY()) {\n            displayKeyStats(sampled, total_keys, total_size, memkeys_types_dict, bigkeys_types_dict,\n                top_sizes, top_sizes_limit, 1);\n            refresh_time = mstime();\n        }\n\n        /* Sleep if we've been directed to do so */\n        if (config.interval && (scan_loops % 100) == 0) {\n            usleep(config.interval);\n        }\n\n        freeReplyObject(reply);\n    } while(force_cancel_loop == 0 && it != 0);\n\n    displayKeyStats(sampled, total_keys, total_size, memkeys_types_dict, bigkeys_types_dict, top_sizes,\n                    top_sizes_limit, 0);\n\n    /* Additional data at the end of the SCAN loop.\n     * Using cleanPrintfln in case we want to print during the SCAN loop. 
*/\n    cleanPrintfln(\"\");\n    displayKeyStatsSizeDist(keysize_histogram);\n    cleanPrintfln(\"\");\n    displayKeyStatsLengthDist(&key_length_dist);\n    cleanPrintfln(\"\");\n    displayKeyStatsType(sampled, memkeys_types_dict, bigkeys_types_dict);\n\n    if (it != 0) {\n        printf(\"\\n\");\n        printf(\"Scan interrupted:\\n\");\n        printf(\"Use 'redis-cli --keystats --cursor %llu' to restart from the last cursor.\\n\", it);\n    }\n\n    if (memkeys_types) zfree(memkeys_types);\n    if (bigkeys_types) zfree(bigkeys_types);\n    if (memkeys_sizes) zfree(memkeys_sizes);\n    if (bigkeys_sizes) zfree(bigkeys_sizes);\n    dictRelease(memkeys_types_dict);\n    dictRelease(bigkeys_types_dict);\n    hdr_close(keysize_histogram);\n\n    /* sdsfree before listRelease */\n    listIter iter;\n    listNode *node;\n    listRewind(top_sizes, &iter);\n    while ((node = listNext(&iter)) != NULL) {\n        key_info *key = (key_info*) listNodeValue(node);\n        sdsfree(key->key_name);\n    }\n    listRelease(top_sizes); /* list->free is set */\n\n    exit(0);\n}\n\n/*------------------------------------------------------------------------------\n * Program main()\n *--------------------------------------------------------------------------- */\n\nint main(int argc, char **argv) {\n    int firstarg;\n    struct timeval tv;\n\n    memset(&config.sslconfig, 0, sizeof(config.sslconfig));\n    config.conn_info.hostip = sdsnew(\"127.0.0.1\");\n    config.conn_info.hostport = 6379;\n    config.connect_timeout.tv_sec = 0;\n    config.connect_timeout.tv_usec = 0;\n    config.hostsocket = NULL;\n    config.repeat = 1;\n    config.interval = 0;\n    config.dbnum = 0;\n    config.conn_info.input_dbnum = 0;\n    config.interactive = 0;\n    config.shutdown = 0;\n    config.monitor_mode = 0;\n    config.pubsub_mode = 0;\n    config.blocking_state_aborted = 0;\n    config.vset_recall_mode = 0;\n    config.vset_recall_key = NULL;\n    config.vset_recall_ele_count = 1;\n    
config.vset_recall_vsim_count = 100;\n    config.vset_recall_vsim_ef = 500;\n    config.latency_mode = 0;\n    config.latency_dist_mode = 0;\n    config.latency_history = 0;\n    config.lru_test_mode = 0;\n    config.lru_test_sample_size = 0;\n    config.cluster_mode = 0;\n    config.cluster_send_asking = 0;\n    config.slave_mode = 0;\n    config.getrdb_mode = 0;\n    config.get_functions_rdb_mode = 0;\n    config.stat_mode = 0;\n    config.scan_mode = 0;\n    config.count = 10;\n    config.intrinsic_latency_mode = 0;\n    config.pattern = NULL;\n    config.rdb_filename = NULL;\n    config.pipe_mode = 0;\n    config.pipe_timeout = REDIS_CLI_DEFAULT_PIPE_TIMEOUT;\n    config.bigkeys = 0;\n    config.memkeys = 0;\n    config.keystats = 0;\n    config.cursor = 0;\n    config.top_sizes_limit = 10;\n    config.hotkeys = 0;\n    config.stdin_lastarg = 0;\n    config.stdin_tag_arg = 0;\n    config.stdin_tag_name = NULL;\n    config.conn_info.auth = NULL;\n    config.askpass = 0;\n    config.conn_info.user = NULL;\n    config.eval = NULL;\n    config.eval_ldb = 0;\n    config.eval_ldb_end = 0;\n    config.eval_ldb_sync = 0;\n    config.enable_ldb_on_eval = 0;\n    config.last_cmd_type = -1;\n    config.last_reply = NULL;\n    config.verbose = 0;\n    config.set_errcode = 0;\n    config.no_auth_warning = 0;\n    config.in_multi = 0;\n    config.server_version = NULL;\n    config.prefer_ipv4 = 0;\n    config.prefer_ipv6 = 0;\n    config.cluster_manager_command.name = NULL;\n    config.cluster_manager_command.argc = 0;\n    config.cluster_manager_command.argv = NULL;\n    config.cluster_manager_command.stdin_arg = NULL;\n    config.cluster_manager_command.flags = 0;\n    config.cluster_manager_command.replicas = 0;\n    config.cluster_manager_command.from = NULL;\n    config.cluster_manager_command.to = NULL;\n    config.cluster_manager_command.from_user = NULL;\n    config.cluster_manager_command.from_pass = NULL;\n    config.cluster_manager_command.from_askpass = 0;\n    
config.cluster_manager_command.weight = NULL;\n    config.cluster_manager_command.weight_argc = 0;\n    config.cluster_manager_command.slots = 0;\n    config.cluster_manager_command.timeout = CLUSTER_MANAGER_MIGRATE_TIMEOUT;\n    config.cluster_manager_command.pipeline = CLUSTER_MANAGER_MIGRATE_PIPELINE;\n    config.cluster_manager_command.threshold =\n        CLUSTER_MANAGER_REBALANCE_THRESHOLD;\n    config.cluster_manager_command.backup_dir = NULL;\n    pref.hints = 1;\n\n    spectrum_palette = spectrum_palette_color;\n    spectrum_palette_size = spectrum_palette_color_size;\n\n    if (!isatty(fileno(stdout)) && (getenv(\"FAKETTY\") == NULL)) {\n        config.output = OUTPUT_RAW;\n        config.push_output = 0;\n    } else {\n        config.output = OUTPUT_STANDARD;\n        config.push_output = 1;\n    }\n    config.mb_delim = sdsnew(\"\\n\");\n    config.cmd_delim = sdsnew(\"\\n\");\n\n    firstarg = parseOptions(argc,argv);\n    argc -= firstarg;\n    argv += firstarg;\n\n    parseEnv();\n\n    if (config.askpass) {\n        config.conn_info.auth = askPassword(\"Please input password: \");\n    }\n\n    if (config.cluster_manager_command.from_askpass) {\n        config.cluster_manager_command.from_pass = askPassword(\n            \"Please input import source node password: \");\n    }\n\n#ifdef USE_OPENSSL\n    if (config.tls) {\n        cliSecureInit();\n    }\n#endif\n\n    gettimeofday(&tv, NULL);\n    init_genrand64(((long long) tv.tv_sec * 1000000 + tv.tv_usec) ^ getpid());\n\n    /* Cluster Manager mode */\n    if (CLUSTER_MANAGER_MODE()) {\n        clusterManagerCommandProc *proc = validateClusterManagerCommand();\n        if (!proc) {\n            exit(1);\n        }\n        clusterManagerMode(proc);\n    }\n\n    /* Latency mode */\n    if (config.latency_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        latencyMode();\n    }\n\n    /* Latency distribution mode */\n    if (config.latency_dist_mode) {\n        if (cliConnect(0) == 
REDIS_ERR) exit(1);\n        latencyDistMode();\n    }\n\n    /* VSET recall mode */\n    if (config.vset_recall_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        vsetRecallMode();\n    }\n\n    /* Slave mode */\n    if (config.slave_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        sendCapa();\n        sendReplconf(\"rdb-filter-only\", \"\");\n        slaveMode(1);\n    }\n\n    /* Get RDB/functions mode. */\n    if (config.getrdb_mode || config.get_functions_rdb_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        sendCapa();\n        sendRdbOnly();\n        if (config.get_functions_rdb_mode && !sendReplconf(\"rdb-filter-only\", \"functions\")) {\n            fprintf(stderr, \"Failed requesting functions only RDB from server, aborting\\n\");\n            exit(1);\n        }\n        getRDB(NULL);\n    }\n\n    /* Pipe mode */\n    if (config.pipe_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        pipeMode();\n    }\n\n    /* Find big keys */\n    if (config.bigkeys) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        findBigKeys(0, 0);\n    }\n\n    /* Find large keys */\n    if (config.memkeys) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        findBigKeys(1, config.memkeys_samples);\n    }\n\n    /* Find big and large keys */\n    if (config.keystats) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        keyStats(config.memkeys_samples, config.cursor, config.top_sizes_limit);\n    }\n\n    /* Find hot keys */\n    if (config.hotkeys) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        findHotKeys();\n    }\n\n    /* Stat mode */\n    if (config.stat_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        if (config.interval == 0) config.interval = 1000000;\n        statMode();\n    }\n\n    /* Scan mode */\n    if (config.scan_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        scanMode();\n    }\n\n    /* LRU test mode */\n    if 
(config.lru_test_mode) {\n        if (cliConnect(0) == REDIS_ERR) exit(1);\n        LRUTestMode();\n    }\n\n    /* Intrinsic latency mode */\n    if (config.intrinsic_latency_mode) intrinsicLatencyMode();\n\n    /* Print command-line hint for an input prefix string */\n    if (config.test_hint) {\n        testHint(config.test_hint);\n    }\n    /* Run test suite for command-line hints */\n    if (config.test_hint_file) {\n        testHintSuite(config.test_hint_file);\n    }\n\n    /* Start interactive mode when no command is provided */\n    if (argc == 0 && !config.eval) {\n        /* Ignore SIGPIPE in interactive mode to force a reconnect */\n        signal(SIGPIPE, SIG_IGN);\n        signal(SIGINT, sigIntHandler);\n\n        /* Note that in repl mode we don't abort on connection error.\n         * A new attempt will be performed for every command send. */\n        cliConnect(0);\n        repl();\n    }\n\n    /* Otherwise, we have some arguments to execute */\n    if (config.eval) {\n        if (cliConnect(0) != REDIS_OK) exit(1);\n        return evalMode(argc,argv);\n    } else {\n        cliConnect(CC_QUIET);\n        return noninteractive(argc,argv);\n    }\n}\n"
  },
  {
    "path": "src/redis-trib.rb",
    "content": "#!/usr/bin/env ruby\n\ndef colorized(str, color)\n    return str if !(ENV['TERM'] || '')[\"xterm\"]\n    color_code = {\n        white: 29,\n        bold: '29;1',\n        black: 30,\n        red: 31,\n        green: 32,\n        yellow: 33,\n        blue: 34,\n        magenta: 35,\n        cyan: 36,\n        gray: 37\n    }[color]\n    return str if !color_code\n    \"\\033[#{color_code}m#{str}\\033[0m\"\nend\n\nclass String\n\n    %w(white bold black red green yellow blue magenta cyan gray).each{|color|\n        color = :\"#{color}\"\n        define_method(color){\n            colorized(self, color)\n        }\n    }\n\nend\n\nCOMMANDS = %w(create check info fix reshard rebalance add-node \n              del-node set-timeout call import help)\n\nALLOWED_OPTIONS={\n    \"create\" => {\"replicas\" => true},\n    \"add-node\" => {\"slave\" => false, \"master-id\" => true},\n    \"import\" => {\"from\" => :required, \"copy\" => false, \"replace\" => false},\n    \"reshard\" => {\"from\" => true, \"to\" => true, \"slots\" => true, \"yes\" => false, \"timeout\" => true, \"pipeline\" => true},\n    \"rebalance\" => {\"weight\" => [], \"auto-weights\" => false, \"use-empty-masters\" => false, \"timeout\" => true, \"simulate\" => false, \"pipeline\" => true, \"threshold\" => true},\n    \"fix\" => {\"timeout\" => 0},\n}\n\ndef parse_options(cmd)\n    cmd = cmd.downcase\n    idx = 0\n    options = {}\n    args = []\n    while (arg = ARGV.shift)\n        if arg[0..1] == \"--\"\n            option = arg[2..-1]\n\n            # --verbose is a global option (the leading \"--\" has already\n            # been stripped from `option` at this point).\n            if option == \"verbose\"\n                options['verbose'] = true\n                next\n            end\n            if ALLOWED_OPTIONS[cmd] == nil || \n               ALLOWED_OPTIONS[cmd][option] == nil\n                next\n            end\n            if ALLOWED_OPTIONS[cmd][option] != false\n                value = ARGV.shift\n                next if !value\n            else\n          
      value = true\n            end\n\n            # If the option is set to [], it's a multiple arguments\n            # option. We just queue every new value into an array.\n            if ALLOWED_OPTIONS[cmd][option] == []\n                options[option] = [] if !options[option]\n                options[option] << value\n            else\n                options[option] = value\n            end\n        else\n            next if arg[0,1] == '-'\n            args << arg\n        end\n    end\n\n    return options,args\nend\n\ndef command_example(cmd, args, opts)\n    cmd = \"redis-cli --cluster #{cmd}\"\n    args.each{|a| \n        a = a.to_s\n        a = a.inspect if a[' ']\n        cmd << \" #{a}\"\n    }\n    opts.each{|opt, val|\n        opt = \" --cluster-#{opt.downcase}\"\n        if val != true\n            val = val.join(' ') if val.is_a? Array\n            opt << \" #{val}\"\n        end\n        cmd << opt\n    }\n    cmd\nend\n\n$command = ARGV.shift\n$opts, $args = parse_options($command) if $command\n\nputs \"WARNING: redis-trib.rb is no longer available!\".yellow\nputs \"You should use #{'redis-cli'.bold} instead.\"\nputs ''\nputs \"All commands and features belonging to redis-trib.rb \"+\n     \"have been moved\\nto redis-cli.\"\nputs \"In order to use them you should call redis-cli with the #{'--cluster'.bold}\"\nputs \"option followed by the subcommand name, arguments and options.\"\nputs ''\nputs \"Use the following syntax:\"\nputs \"redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]\".bold\nputs ''\nputs \"Example:\"\nif $command\n    example = command_example $command, $args, $opts\nelse\n    example = \"redis-cli --cluster info 127.0.0.1:7000\"\nend\nputs example.bold\nputs ''\nputs \"To get help about all subcommands, type:\"\nputs \"redis-cli --cluster help\".bold\nputs ''\nexit 1\n"
  },
  {
    "path": "src/redisassert.c",
    "content": "/* redisassert.c -- Implement the default _serverAssert and _serverPanic which \n * simply print stack trace to standard error stream.\n * \n * This file is shared by those modules that try to print some logs about stack trace \n * but don't have their own implementations of functions in redisassert.h.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2021, Andy Pan <panjf2000@gmail.com> and Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n\n#include <stdarg.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <signal.h>\n\nvoid _serverAssert(const char *estr, const char *file, int line) {\n    fprintf(stderr, \"=== ASSERTION FAILED ===\\n\");\n    fprintf(stderr, \"==> %s:%d '%s' is not true\\n\",file,line,estr);\n    raise(SIGSEGV);\n}\n\nvoid _serverPanic(const char *file, int line, const char *msg, ...) {\n    va_list ap;\n    char fmtmsg[256];\n\n    va_start(ap,msg);\n    vsnprintf(fmtmsg,sizeof(fmtmsg),msg,ap);\n    va_end(ap);\n\n    fprintf(stderr, \"------------------------------------------------\\n\");\n    fprintf(stderr, \"!!! Software Failure. Press left mouse button to continue\\n\");\n    fprintf(stderr, \"Guru Meditation: %s #%s:%d\\n\",fmtmsg,file,line);\n    abort();\n}\n"
  },
  {
    "path": "src/redisassert.h",
    "content": "/* redisassert.h -- Drop-in replacement for assert.h that prints the stack\n *                  trace in the Redis logs.\n *\n * This file should be included instead of \"assert.h\" inside libraries used by\n * Redis that are using assertions, so instead of Redis disappearing with\n * SIGABRT, we get the details and stack trace inside the log file.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __REDIS_ASSERT_H__\n#define __REDIS_ASSERT_H__\n\n#include \"config.h\"\n\n#define assert(_e) (likely((_e))?(void)0 : (_serverAssert(#_e,__FILE__,__LINE__),redis_unreachable()))\n#define panic(...) _serverPanic(__FILE__,__LINE__,__VA_ARGS__),redis_unreachable()\n\nvoid _serverAssert(const char *estr, const char *file, int line);\nvoid _serverPanic(const char *file, int line, const char *msg, ...);\n\n#ifdef DEBUG_ASSERTIONS\n#define debugAssert(_e) assert(_e)\n#else\n#define debugAssert(_e) ((void)0)\n#endif\n\n#endif\n"
  },
  {
    "path": "src/redismodule.h",
    "content": "#ifndef REDISMODULE_H\n#define REDISMODULE_H\n\n#include <sys/types.h>\n#include <stdint.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n\ntypedef struct RedisModuleString RedisModuleString;\ntypedef struct RedisModuleKey RedisModuleKey;\ntypedef int RedisModuleKeyMetaClassId;\n\n/* -------------- Defines NOT common between core and modules ------------- */\n\n#if defined REDISMODULE_CORE\n/* Things only defined for the modules core (server), not exported to modules\n * that include this file. */\n\n#define RedisModuleString robj\n\n#endif /* defined REDISMODULE_CORE */\n\n#if !defined REDISMODULE_CORE && !defined REDISMODULE_CORE_MODULE\n/* Things defined for modules, but not for core-modules. */\n\ntypedef long long mstime_t;\ntypedef long long ustime_t;\n\n#endif /* !defined REDISMODULE_CORE && !defined REDISMODULE_CORE_MODULE */\n\n/* ---------------- Defines common between core and modules --------------- */\n\n/* Error status return values. */\n#define REDISMODULE_OK 0\n#define REDISMODULE_ERR 1\n\n/* Module Based Authentication status return values. */\n#define REDISMODULE_AUTH_HANDLED 0\n#define REDISMODULE_AUTH_NOT_HANDLED 1\n\n/* API versions. */\n#define REDISMODULE_APIVER_1 1\n\n/* Version of the RedisModuleTypeMethods structure. Once the RedisModuleTypeMethods\n * structure is changed, this version number needs to be changed in sync. */\n#define REDISMODULE_TYPE_METHOD_VERSION 5\n\n/* Version of the RedisModuleKeyMetaClassConfig structure. Once the RedisModuleKeyMetaClassConfig\n * structure is changed, this version number needs to be changed in sync. */\n#define REDISMODULE_KEY_META_VERSION    1\n\n/* API flags and constants */\n#define REDISMODULE_READ (1<<0)\n#define REDISMODULE_WRITE (1<<1)\n\n/* RedisModule_OpenKey extra flags for the 'mode' argument.\n * Avoid touching the LRU/LFU of the key when opened. */\n#define REDISMODULE_OPEN_KEY_NOTOUCH (1<<16)\n/* Don't trigger keyspace event on key misses. 
*/\n#define REDISMODULE_OPEN_KEY_NONOTIFY (1<<17)\n/* Don't update keyspace hits/misses counters. */\n#define REDISMODULE_OPEN_KEY_NOSTATS (1<<18)\n/* Avoid deleting lazy expired keys. */\n#define REDISMODULE_OPEN_KEY_NOEXPIRE (1<<19)\n/* Avoid any effects from fetching the key */\n#define REDISMODULE_OPEN_KEY_NOEFFECTS (1<<20)\n/* Allow access to expired keys that haven't been deleted yet */\n#define REDISMODULE_OPEN_KEY_ACCESS_EXPIRED (1<<21)\n/* Allow access to trimmed keys that haven't been deleted yet */\n#define REDISMODULE_OPEN_KEY_ACCESS_TRIMMED (1<<22)\n\n\n/* Mask of all REDISMODULE_OPEN_KEY_* values. Any new mode should be added to this list.\n * Should not be used directly by the module, use RM_GetOpenKeyModesAll instead.\n * Located here so that when we add new modes we will not forget to update it.\n * Parenthesized so the mask expands safely inside larger expressions. */\n#define _REDISMODULE_OPEN_KEY_ALL (REDISMODULE_READ | REDISMODULE_WRITE | REDISMODULE_OPEN_KEY_NOTOUCH | REDISMODULE_OPEN_KEY_NONOTIFY | REDISMODULE_OPEN_KEY_NOSTATS | REDISMODULE_OPEN_KEY_NOEXPIRE | REDISMODULE_OPEN_KEY_NOEFFECTS | REDISMODULE_OPEN_KEY_ACCESS_EXPIRED | REDISMODULE_OPEN_KEY_ACCESS_TRIMMED)\n\n/* List push and pop */\n#define REDISMODULE_LIST_HEAD 0\n#define REDISMODULE_LIST_TAIL 1\n\n/* Key types. */\n#define REDISMODULE_KEYTYPE_EMPTY 0\n#define REDISMODULE_KEYTYPE_STRING 1\n#define REDISMODULE_KEYTYPE_LIST 2\n#define REDISMODULE_KEYTYPE_HASH 3\n#define REDISMODULE_KEYTYPE_SET 4\n#define REDISMODULE_KEYTYPE_ZSET 5\n#define REDISMODULE_KEYTYPE_MODULE 6\n#define REDISMODULE_KEYTYPE_STREAM 7\n#define REDISMODULE_KEYTYPE_GCRA 8\n\n/* Reply types. 
*/\n#define REDISMODULE_REPLY_UNKNOWN -1\n#define REDISMODULE_REPLY_STRING 0\n#define REDISMODULE_REPLY_ERROR 1\n#define REDISMODULE_REPLY_INTEGER 2\n#define REDISMODULE_REPLY_ARRAY 3\n#define REDISMODULE_REPLY_NULL 4\n#define REDISMODULE_REPLY_MAP 5\n#define REDISMODULE_REPLY_SET 6\n#define REDISMODULE_REPLY_BOOL 7\n#define REDISMODULE_REPLY_DOUBLE 8\n#define REDISMODULE_REPLY_BIG_NUMBER 9\n#define REDISMODULE_REPLY_VERBATIM_STRING 10\n#define REDISMODULE_REPLY_ATTRIBUTE 11\n#define REDISMODULE_REPLY_PROMISE 12\n\n/* Postponed array length. */\n#define REDISMODULE_POSTPONED_ARRAY_LEN -1  /* Deprecated, please use REDISMODULE_POSTPONED_LEN */\n#define REDISMODULE_POSTPONED_LEN -1\n\n/* Expire */\n#define REDISMODULE_NO_EXPIRE -1\n\n/* Sorted set API flags. */\n#define REDISMODULE_ZADD_XX      (1<<0)\n#define REDISMODULE_ZADD_NX      (1<<1)\n#define REDISMODULE_ZADD_ADDED   (1<<2)\n#define REDISMODULE_ZADD_UPDATED (1<<3)\n#define REDISMODULE_ZADD_NOP     (1<<4)\n#define REDISMODULE_ZADD_GT      (1<<5)\n#define REDISMODULE_ZADD_LT      (1<<6)\n\n/* Hash API flags. */\n#define REDISMODULE_HASH_NONE        0\n#define REDISMODULE_HASH_NX          (1<<0)\n#define REDISMODULE_HASH_XX          (1<<1)\n#define REDISMODULE_HASH_CFIELDS     (1<<2)\n#define REDISMODULE_HASH_EXISTS      (1<<3)\n#define REDISMODULE_HASH_COUNT_ALL   (1<<4)\n#define REDISMODULE_HASH_EXPIRE_TIME (1<<5) \n\n#define REDISMODULE_CONFIG_DEFAULT 0 /* This is the default for a module config. */\n#define REDISMODULE_CONFIG_IMMUTABLE (1ULL<<0) /* Can this value only be set at startup? */\n#define REDISMODULE_CONFIG_SENSITIVE (1ULL<<1) /* Does this value contain sensitive information */\n#define REDISMODULE_CONFIG_HIDDEN (1ULL<<4) /* This config is hidden in `config get <pattern>` (used for tests/debugging) */\n#define REDISMODULE_CONFIG_PROTECTED (1ULL<<5) /* Becomes immutable if enable-protected-configs is enabled. 
*/\n#define REDISMODULE_CONFIG_DENY_LOADING (1ULL<<6) /* This config is forbidden during loading. */\n\n#define REDISMODULE_CONFIG_MEMORY (1ULL<<7) /* Indicates if this value can be set as a memory value */\n#define REDISMODULE_CONFIG_BITFLAGS (1ULL<<8) /* Indicates if this value can be set as a multiple enum values */\n#define REDISMODULE_CONFIG_UNPREFIXED (1ULL<<9) /* Provided configuration name won't be prefixed with the module name */\n\n/* StreamID type. */\ntypedef struct RedisModuleStreamID {\n    uint64_t ms;\n    uint64_t seq;\n} RedisModuleStreamID;\n\n/* StreamAdd() flags. */\n#define REDISMODULE_STREAM_ADD_AUTOID (1<<0)\n/* StreamIteratorStart() flags. */\n#define REDISMODULE_STREAM_ITERATOR_EXCLUSIVE (1<<0)\n#define REDISMODULE_STREAM_ITERATOR_REVERSE (1<<1)\n/* StreamIteratorTrim*() flags. */\n#define REDISMODULE_STREAM_TRIM_APPROX (1<<0)\n\n/* Context Flags: Info about the current context returned by\n * RM_GetContextFlags(). */\n\n/* The command is running in the context of a Lua script */\n#define REDISMODULE_CTX_FLAGS_LUA (1<<0)\n/* The command is running inside a Redis transaction */\n#define REDISMODULE_CTX_FLAGS_MULTI (1<<1)\n/* The instance is a master */\n#define REDISMODULE_CTX_FLAGS_MASTER (1<<2)\n/* The instance is a slave */\n#define REDISMODULE_CTX_FLAGS_SLAVE (1<<3)\n/* The instance is read-only (usually meaning it's a slave as well) */\n#define REDISMODULE_CTX_FLAGS_READONLY (1<<4)\n/* The instance is running in cluster mode */\n#define REDISMODULE_CTX_FLAGS_CLUSTER (1<<5)\n/* The instance has AOF enabled */\n#define REDISMODULE_CTX_FLAGS_AOF (1<<6)\n/* The instance has RDB enabled */\n#define REDISMODULE_CTX_FLAGS_RDB (1<<7)\n/* The instance has Maxmemory set */\n#define REDISMODULE_CTX_FLAGS_MAXMEMORY (1<<8)\n/* Maxmemory is set and has an eviction policy that may delete keys */\n#define REDISMODULE_CTX_FLAGS_EVICT (1<<9)\n/* Redis is out of memory according to the maxmemory flag. 
*/\n#define REDISMODULE_CTX_FLAGS_OOM (1<<10)\n/* Less than 25% of memory available according to maxmemory. */\n#define REDISMODULE_CTX_FLAGS_OOM_WARNING (1<<11)\n/* The command was sent over the replication link. */\n#define REDISMODULE_CTX_FLAGS_REPLICATED (1<<12)\n/* Redis is currently loading either from AOF or RDB. */\n#define REDISMODULE_CTX_FLAGS_LOADING (1<<13)\n/* The replica has no link with its master, note that\n * there is the inverse flag as well:\n *\n *  REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE\n *\n * The two flags are exclusive, one or the other can be set. */\n#define REDISMODULE_CTX_FLAGS_REPLICA_IS_STALE (1<<14)\n/* The replica is trying to connect with the master.\n * (REPL_STATE_CONNECT and REPL_STATE_CONNECTING states) */\n#define REDISMODULE_CTX_FLAGS_REPLICA_IS_CONNECTING (1<<15)\n/* The replica is receiving an RDB file from its master. */\n#define REDISMODULE_CTX_FLAGS_REPLICA_IS_TRANSFERRING (1<<16)\n/* The replica is online, receiving updates from its master. */\n#define REDISMODULE_CTX_FLAGS_REPLICA_IS_ONLINE (1<<17)\n/* There is currently some background process active. */\n#define REDISMODULE_CTX_FLAGS_ACTIVE_CHILD (1<<18)\n/* The next EXEC will fail due to dirty CAS (touched keys). */\n#define REDISMODULE_CTX_FLAGS_MULTI_DIRTY (1<<19)\n/* Redis is currently running inside background child process. */\n#define REDISMODULE_CTX_FLAGS_IS_CHILD (1<<20)\n/* The current client does not allow blocking, either called from\n * within multi, lua, or from another module using RM_Call */\n#define REDISMODULE_CTX_FLAGS_DENY_BLOCKING (1<<21)\n/* The current client uses RESP3 protocol */\n#define REDISMODULE_CTX_FLAGS_RESP3 (1<<22)\n/* Redis is currently async loading database for diskless replication. */\n#define REDISMODULE_CTX_FLAGS_ASYNC_LOADING (1<<23)\n/* Redis is starting. */\n#define REDISMODULE_CTX_FLAGS_SERVER_STARTUP (1<<24)\n/* This context can execute debug commands. 
*/\n#define REDISMODULE_CTX_FLAGS_DEBUG_ENABLED (1<<25)\n/* Trim is in progress due to slot migration. */\n#define REDISMODULE_CTX_FLAGS_TRIM_IN_PROGRESS (1<<26)\n\n/* Next context flag, must be updated when adding new flags above!\n * This flag should not be used directly by the module.\n * Use RedisModule_GetContextFlagsAll instead. */\n#define _REDISMODULE_CTX_FLAGS_NEXT (1<<27)\n\n/* Keyspace changes notification classes. Every class is associated with a\n * character for configuration purposes.\n * NOTE: These have to be in sync with NOTIFY_* in server.h */\n#define REDISMODULE_NOTIFY_KEYSPACE (1<<0)    /* K */\n#define REDISMODULE_NOTIFY_KEYEVENT (1<<1)    /* E */\n#define REDISMODULE_NOTIFY_GENERIC (1<<2)     /* g */\n#define REDISMODULE_NOTIFY_STRING (1<<3)      /* $ */\n#define REDISMODULE_NOTIFY_LIST (1<<4)        /* l */\n#define REDISMODULE_NOTIFY_SET (1<<5)         /* s */\n#define REDISMODULE_NOTIFY_HASH (1<<6)        /* h */\n#define REDISMODULE_NOTIFY_ZSET (1<<7)        /* z */\n#define REDISMODULE_NOTIFY_EXPIRED (1<<8)     /* x */\n#define REDISMODULE_NOTIFY_EVICTED (1<<9)     /* e */\n#define REDISMODULE_NOTIFY_STREAM (1<<10)     /* t */\n#define REDISMODULE_NOTIFY_KEY_MISS (1<<11)   /* m (Note: This one is excluded from REDISMODULE_NOTIFY_ALL on purpose) */\n#define REDISMODULE_NOTIFY_LOADED (1<<12)     /* module only key space notification, indicates a key loaded from rdb */\n#define REDISMODULE_NOTIFY_MODULE (1<<13)     /* d, module key space notification */\n#define REDISMODULE_NOTIFY_NEW (1<<14)        /* n, new key notification */\n#define REDISMODULE_NOTIFY_OVERWRITTEN (1<<15)   /* o, key overwrite notification */\n#define REDISMODULE_NOTIFY_TYPE_CHANGED (1<<16) /* c, key type changed notification */\n#define REDISMODULE_NOTIFY_KEY_TRIMMED (1<<17) /* module only key space notification, indicates a key trimmed during slot migration */\n#define REDISMODULE_NOTIFY_RATE_LIMIT (1<<18) /* r, rate limit event */\n\n#define 
REDISMODULE_NOTIFY_SUBKEYSPACE (1<<19)      /* S */\n#define REDISMODULE_NOTIFY_SUBKEYEVENT (1<<20)      /* T */\n#define REDISMODULE_NOTIFY_SUBKEYSPACEITEM (1<<21)  /* I */\n#define REDISMODULE_NOTIFY_SUBKEYSPACEEVENT (1<<22) /* V */\n\n/* Next notification flag, must be updated when adding new flags above!\n * This flag should not be used directly by the module.\n * Use RedisModule_GetKeyspaceNotificationFlagsAll instead. */\n#define _REDISMODULE_NOTIFY_NEXT (1<<23)\n\n/* Delivery flags for RM_SubscribeToKeyspaceEventsWithSubkeys.\n * These are passed in the 'flags' parameter, not in 'types'. */\n#define REDISMODULE_NOTIFY_FLAG_NONE 0                  /* Invoke callback for all matching events */\n#define REDISMODULE_NOTIFY_FLAG_SUBKEYS_REQUIRED (1<<0) /* Only invoke callback when subkeys are present */\n\n#define REDISMODULE_NOTIFY_ALL (REDISMODULE_NOTIFY_GENERIC | REDISMODULE_NOTIFY_STRING | REDISMODULE_NOTIFY_LIST | REDISMODULE_NOTIFY_SET | REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_ZSET | REDISMODULE_NOTIFY_EXPIRED | REDISMODULE_NOTIFY_EVICTED | REDISMODULE_NOTIFY_STREAM | REDISMODULE_NOTIFY_MODULE)      /* A */\n\n/* A special pointer that we can use between the core and the module to signal\n * field deletion, and that is impossible to be a valid pointer. */\n#define REDISMODULE_HASH_DELETE ((RedisModuleString*)(long)1)\n\n/* Error messages. */\n#define REDISMODULE_ERRORMSG_WRONGTYPE \"WRONGTYPE Operation against a key holding the wrong kind of value\"\n\n#define REDISMODULE_POSITIVE_INFINITE (1.0/0.0)\n#define REDISMODULE_NEGATIVE_INFINITE (-1.0/0.0)\n\n/* Cluster API defines. 
*/\n#define REDISMODULE_NODE_ID_LEN 40\n#define REDISMODULE_NODE_MYSELF     (1<<0)\n#define REDISMODULE_NODE_MASTER     (1<<1)\n#define REDISMODULE_NODE_SLAVE      (1<<2)\n#define REDISMODULE_NODE_PFAIL      (1<<3)\n#define REDISMODULE_NODE_FAIL       (1<<4)\n#define REDISMODULE_NODE_NOFAILOVER (1<<5)\n\n#define REDISMODULE_CLUSTER_FLAG_NONE 0\n#define REDISMODULE_CLUSTER_FLAG_NO_FAILOVER (1<<1)\n#define REDISMODULE_CLUSTER_FLAG_NO_REDIRECTION (1<<2)\n\n#define REDISMODULE_NOT_USED(V) ((void) V)\n\n/* Logging level strings */\n#define REDISMODULE_LOGLEVEL_DEBUG \"debug\"\n#define REDISMODULE_LOGLEVEL_VERBOSE \"verbose\"\n#define REDISMODULE_LOGLEVEL_NOTICE \"notice\"\n#define REDISMODULE_LOGLEVEL_WARNING \"warning\"\n\n/* Bit flags for aux_save_triggers and the aux_load and aux_save callbacks */\n#define REDISMODULE_AUX_BEFORE_RDB (1<<0)\n#define REDISMODULE_AUX_AFTER_RDB (1<<1)\n\n/* RM_Yield flags */\n#define REDISMODULE_YIELD_FLAG_NONE (1<<0)\n#define REDISMODULE_YIELD_FLAG_CLIENTS (1<<1)\n\n/* RM_BlockClientOnKeysWithFlags flags */\n#define REDISMODULE_BLOCK_UNBLOCK_DEFAULT (0)\n#define REDISMODULE_BLOCK_UNBLOCK_DELETED (1<<0)\n\n/* This type represents a timer handle, and is returned when a timer is\n * registered and used in order to invalidate a timer. It's just a 64 bit\n * number, because this is how each timer is represented inside the radix tree\n * of timers that are going to expire, sorted by expire time. */\ntypedef uint64_t RedisModuleTimerID;\n\n/* CommandFilter Flags */\n\n/* Do filter RedisModule_Call() commands initiated by module itself. */\n#define REDISMODULE_CMDFILTER_NOSELF    (1<<0)\n\n/* Declare that the module can handle errors with RedisModule_SetModuleOptions. 
*/\n#define REDISMODULE_OPTIONS_HANDLE_IO_ERRORS    (1<<0)\n\n/* When set, Redis will not call RedisModule_SignalModifiedKey() implicitly in\n * RedisModule_CloseKey, and the module needs to do that manually when keys\n * are modified from the user's perspective, to invalidate WATCH. */\n#define REDISMODULE_OPTION_NO_IMPLICIT_SIGNAL_MODIFIED (1<<1)\n\n/* Declare that the module can handle diskless async replication with RedisModule_SetModuleOptions. */\n#define REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD    (1<<2)\n\n/* Declare that the module wants to get nested key space notifications.\n * If enabled, the module is responsible for breaking endless loops. */\n#define REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS    (1<<3)\n\n/* Next option flag, must be updated when adding new module flags above!\n * This flag should not be used directly by the module.\n * Use RedisModule_GetModuleOptionsAll instead. */\n#define _REDISMODULE_OPTIONS_FLAGS_NEXT (1<<4)\n\n/* Definitions for RedisModule_SetCommandInfo. 
*/\n\ntypedef enum {\n    REDISMODULE_ARG_TYPE_STRING,\n    REDISMODULE_ARG_TYPE_INTEGER,\n    REDISMODULE_ARG_TYPE_DOUBLE,\n    REDISMODULE_ARG_TYPE_KEY, /* A string, but represents a keyname */\n    REDISMODULE_ARG_TYPE_PATTERN,\n    REDISMODULE_ARG_TYPE_UNIX_TIME,\n    REDISMODULE_ARG_TYPE_PURE_TOKEN,\n    REDISMODULE_ARG_TYPE_ONEOF, /* Must have sub-arguments */\n    REDISMODULE_ARG_TYPE_BLOCK /* Must have sub-arguments */\n} RedisModuleCommandArgType;\n\n#define REDISMODULE_CMD_ARG_NONE            (0)\n#define REDISMODULE_CMD_ARG_OPTIONAL        (1<<0) /* The argument is optional (like GET in SET command) */\n#define REDISMODULE_CMD_ARG_MULTIPLE        (1<<1) /* The argument may repeat itself (like key in DEL) */\n#define REDISMODULE_CMD_ARG_MULTIPLE_TOKEN  (1<<2) /* The argument may repeat itself, and so does its token (like `GET pattern` in SORT) */\n#define _REDISMODULE_CMD_ARG_NEXT           (1<<3)\n\ntypedef enum {\n    REDISMODULE_KSPEC_BS_INVALID = 0, /* Must be zero. An implicit value of\n                                       * zero is provided when the field is\n                                       * absent in a struct literal. */\n    REDISMODULE_KSPEC_BS_UNKNOWN,\n    REDISMODULE_KSPEC_BS_INDEX,\n    REDISMODULE_KSPEC_BS_KEYWORD\n} RedisModuleKeySpecBeginSearchType;\n\ntypedef enum {\n    REDISMODULE_KSPEC_FK_OMITTED = 0, /* Used when the field is absent in a\n                                       * struct literal. Don't use this value\n                                       * explicitly. */\n    REDISMODULE_KSPEC_FK_UNKNOWN,\n    REDISMODULE_KSPEC_FK_RANGE,\n    REDISMODULE_KSPEC_FK_KEYNUM\n} RedisModuleKeySpecFindKeysType;\n\n/* Key-spec flags. For details, see the documentation of\n * RedisModule_SetCommandInfo and the key-spec flags in server.h. 
*/\n#define REDISMODULE_CMD_KEY_RO (1ULL<<0)\n#define REDISMODULE_CMD_KEY_RW (1ULL<<1)\n#define REDISMODULE_CMD_KEY_OW (1ULL<<2)\n#define REDISMODULE_CMD_KEY_RM (1ULL<<3)\n#define REDISMODULE_CMD_KEY_ACCESS (1ULL<<4)\n#define REDISMODULE_CMD_KEY_UPDATE (1ULL<<5)\n#define REDISMODULE_CMD_KEY_INSERT (1ULL<<6)\n#define REDISMODULE_CMD_KEY_DELETE (1ULL<<7)\n#define REDISMODULE_CMD_KEY_NOT_KEY (1ULL<<8)\n#define REDISMODULE_CMD_KEY_INCOMPLETE (1ULL<<9)\n#define REDISMODULE_CMD_KEY_VARIABLE_FLAGS (1ULL<<10)\n\n/* Channel flags, for details see the documentation of\n * RedisModule_ChannelAtPosWithFlags. */\n#define REDISMODULE_CMD_CHANNEL_PATTERN (1ULL<<0)\n#define REDISMODULE_CMD_CHANNEL_PUBLISH (1ULL<<1)\n#define REDISMODULE_CMD_CHANNEL_SUBSCRIBE (1ULL<<2)\n#define REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE (1ULL<<3)\n\ntypedef struct RedisModuleCommandArg {\n    const char *name;\n    RedisModuleCommandArgType type;\n    int key_spec_index;       /* If type is KEY, this is a zero-based index of\n                               * the key_spec in the command. For other types,\n                               * you may specify -1. */\n    const char *token;        /* If type is PURE_TOKEN, this is the token. */\n    const char *summary;\n    const char *since;\n    int flags;                /* The REDISMODULE_CMD_ARG_* macros. */\n    const char *deprecated_since;\n    struct RedisModuleCommandArg *subargs;\n    const char *display_text;\n} RedisModuleCommandArg;\n\ntypedef struct {\n    const char *since;\n    const char *changes;\n} RedisModuleCommandHistoryEntry;\n\ntypedef struct {\n    const char *notes;\n    uint64_t flags; /* REDISMODULE_CMD_KEY_* macros. 
*/\n    RedisModuleKeySpecBeginSearchType begin_search_type;\n    union {\n        struct {\n            /* The index from which we start the search for keys */\n            int pos;\n        } index;\n        struct {\n            /* The keyword that indicates the beginning of key args */\n            const char *keyword;\n            /* An index in argv from which to start searching.\n             * Can be negative, which means start search from the end, in reverse\n             * (Example: -2 means to start in reverse from the penultimate arg) */\n            int startfrom;\n        } keyword;\n    } bs;\n    RedisModuleKeySpecFindKeysType find_keys_type;\n    union {\n        struct {\n            /* Index of the last key relative to the result of the begin search\n             * step. Can be negative, in which case it's not relative: -1\n             * indicates the last argument, -2 the one before the last, and so\n             * on. */\n            int lastkey;\n            /* How many args should we skip after finding a key, in order to\n             * find the next one. */\n            int keystep;\n            /* If lastkey is -1, we use limit to stop the search by a factor. 0\n             * and 1 mean no limit. 2 means 1/2 of the remaining args, 3 means\n             * 1/3, and so on. */\n            int limit;\n        } range;\n        struct {\n            /* Index of the argument containing the number of keys to come\n             * relative to the result of the begin search step */\n            int keynumidx;\n            /* Index of the first key. (Usually it's just after keynumidx, in\n             * which case it should be set to keynumidx + 1.) */\n            int firstkey;\n            /* How many args should we skip after finding a key, in order to\n             * find the next one, relative to the result of the begin search\n             * step. 
*/\n            int keystep;\n        } keynum;\n    } fk;\n} RedisModuleCommandKeySpec;\n\ntypedef struct {\n    int version;\n    size_t sizeof_historyentry;\n    size_t sizeof_keyspec;\n    size_t sizeof_arg;\n} RedisModuleCommandInfoVersion;\n\nstatic const RedisModuleCommandInfoVersion RedisModule_CurrentCommandInfoVersion = {\n    .version = 1,\n    .sizeof_historyentry = sizeof(RedisModuleCommandHistoryEntry),\n    .sizeof_keyspec = sizeof(RedisModuleCommandKeySpec),\n    .sizeof_arg = sizeof(RedisModuleCommandArg)\n};\n\n#define REDISMODULE_COMMAND_INFO_VERSION (&RedisModule_CurrentCommandInfoVersion)\n\ntypedef struct {\n    /* Always set version to REDISMODULE_COMMAND_INFO_VERSION */\n    const RedisModuleCommandInfoVersion *version;\n    /* Version 1 fields (added in Redis 7.0.0) */\n    const char *summary;          /* Summary of the command */\n    const char *complexity;       /* Complexity description */\n    const char *since;            /* Debut module version of the command */\n    RedisModuleCommandHistoryEntry *history; /* History */\n    /* A string of space-separated tips meant for clients/proxies regarding this\n     * command */\n    const char *tips;\n    /* Number of arguments, it is possible to use -N to say >= N */\n    int arity;\n    RedisModuleCommandKeySpec *key_specs;\n    RedisModuleCommandArg *args;\n} RedisModuleCommandInfo;\n\n/* Eventloop definitions. 
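 *\n * For example (an illustrative sketch; `fd`, `on_readable` and `my_data` are\n * hypothetical names owned by the module), a handler can be registered for\n * readability on a file descriptor, and later removed again with\n * RedisModule_EventLoopDel:\n *\n *     void on_readable(int fd, void *user_data, int mask) {\n *         ... read from fd, using user_data as needed ...\n *     }\n *\n *     RedisModule_EventLoopAdd(fd, REDISMODULE_EVENTLOOP_READABLE,\n *                              on_readable, my_data);\n *     ...\n *     RedisModule_EventLoopDel(fd, REDISMODULE_EVENTLOOP_READABLE);\n *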
*/\n#define REDISMODULE_EVENTLOOP_READABLE 1\n#define REDISMODULE_EVENTLOOP_WRITABLE 2\ntypedef void (*RedisModuleEventLoopFunc)(int fd, void *user_data, int mask);\ntypedef void (*RedisModuleEventLoopOneShotFunc)(void *user_data);\n\n/* Server events definitions.\n * Those flags should not be used directly by the module; instead\n * the module should use RedisModuleEvent_* variables.\n * Note: This must be synced with moduleEventVersions */\n#define REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED 0\n#define REDISMODULE_EVENT_PERSISTENCE 1\n#define REDISMODULE_EVENT_FLUSHDB 2\n#define REDISMODULE_EVENT_LOADING 3\n#define REDISMODULE_EVENT_CLIENT_CHANGE 4\n#define REDISMODULE_EVENT_SHUTDOWN 5\n#define REDISMODULE_EVENT_REPLICA_CHANGE 6\n#define REDISMODULE_EVENT_MASTER_LINK_CHANGE 7\n#define REDISMODULE_EVENT_CRON_LOOP 8\n#define REDISMODULE_EVENT_MODULE_CHANGE 9\n#define REDISMODULE_EVENT_LOADING_PROGRESS 10\n#define REDISMODULE_EVENT_SWAPDB 11\n#define REDISMODULE_EVENT_REPL_BACKUP 12 /* Deprecated since Redis 7.0, not used anymore. */\n#define REDISMODULE_EVENT_FORK_CHILD 13\n#define REDISMODULE_EVENT_REPL_ASYNC_LOAD 14\n#define REDISMODULE_EVENT_EVENTLOOP 15\n#define REDISMODULE_EVENT_CONFIG 16\n#define REDISMODULE_EVENT_KEY 17\n#define REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION 18\n#define REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM 19\n#define _REDISMODULE_EVENT_NEXT 20 /* Next event flag, should be updated if a new event is added. */\n\ntypedef struct RedisModuleEvent {\n    uint64_t id;        /* REDISMODULE_EVENT_... defines. */\n    uint64_t dataver;   /* Version of the structure we pass as 'data'. 
*/\n} RedisModuleEvent;\n\nstruct RedisModuleCtx;\nstruct RedisModuleDefragCtx;\ntypedef void (*RedisModuleEventCallback)(struct RedisModuleCtx *ctx, RedisModuleEvent eid, uint64_t subevent, void *data);\n\n/* IMPORTANT: When adding a new version of one of the below structures that\n * contain event data (RedisModuleFlushInfoV1 for example) we have to avoid\n * renaming the old RedisModuleEvent structure.\n * For example, if we want to add RedisModuleFlushInfoV2, the RedisModuleEvent\n * structures should be:\n *      RedisModuleEvent_FlushDB = {\n *          REDISMODULE_EVENT_FLUSHDB,\n *          1\n *      },\n *      RedisModuleEvent_FlushDBV2 = {\n *          REDISMODULE_EVENT_FLUSHDB,\n *          2\n *      }\n * and NOT:\n *      RedisModuleEvent_FlushDBV1 = {\n *          REDISMODULE_EVENT_FLUSHDB,\n *          1\n *      },\n *      RedisModuleEvent_FlushDB = {\n *          REDISMODULE_EVENT_FLUSHDB,\n *          2\n *      }\n * The reason for that is forward-compatibility: we want a module that was\n * compiled with a new redismodule.h to be able to work with an old server,\n * unless the author explicitly decided to use the newer event type.\n */\nstatic const RedisModuleEvent\n    RedisModuleEvent_ReplicationRoleChanged = {\n        REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED,\n        1\n    },\n    RedisModuleEvent_Persistence = {\n        REDISMODULE_EVENT_PERSISTENCE,\n        1\n    },\n    RedisModuleEvent_FlushDB = {\n        REDISMODULE_EVENT_FLUSHDB,\n        1\n    },\n    RedisModuleEvent_Loading = {\n        REDISMODULE_EVENT_LOADING,\n        1\n    },\n    RedisModuleEvent_ClientChange = {\n        REDISMODULE_EVENT_CLIENT_CHANGE,\n        1\n    },\n    RedisModuleEvent_Shutdown = {\n        REDISMODULE_EVENT_SHUTDOWN,\n        1\n    },\n    RedisModuleEvent_ReplicaChange = {\n        REDISMODULE_EVENT_REPLICA_CHANGE,\n        1\n    },\n    RedisModuleEvent_CronLoop = {\n        REDISMODULE_EVENT_CRON_LOOP,\n        1\n    },\n    
RedisModuleEvent_MasterLinkChange = {\n        REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n        1\n    },\n    RedisModuleEvent_ModuleChange = {\n        REDISMODULE_EVENT_MODULE_CHANGE,\n        1\n    },\n    RedisModuleEvent_LoadingProgress = {\n        REDISMODULE_EVENT_LOADING_PROGRESS,\n        1\n    },\n    RedisModuleEvent_SwapDB = {\n        REDISMODULE_EVENT_SWAPDB,\n        1\n    },\n    /* Deprecated since Redis 7.0, not used anymore. */\n    __attribute__ ((deprecated))\n    RedisModuleEvent_ReplBackup = {\n        REDISMODULE_EVENT_REPL_BACKUP, \n        1\n    },\n    RedisModuleEvent_ReplAsyncLoad = {\n        REDISMODULE_EVENT_REPL_ASYNC_LOAD,\n        1\n    },\n    RedisModuleEvent_ForkChild = {\n        REDISMODULE_EVENT_FORK_CHILD,\n        1\n    },\n    RedisModuleEvent_EventLoop = {\n        REDISMODULE_EVENT_EVENTLOOP,\n        1\n    },\n    RedisModuleEvent_Config = {\n        REDISMODULE_EVENT_CONFIG,\n        1\n    },\n    RedisModuleEvent_Key = {\n        REDISMODULE_EVENT_KEY,\n        1\n    },\n    RedisModuleEvent_ClusterSlotMigration = {\n        REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION,\n        1\n    },\n    RedisModuleEvent_ClusterSlotMigrationTrim = {\n        REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM,\n        1\n    };\n\n/* Those are values that are used for the 'subevent' callback argument. 
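 *\n * For example (an illustrative sketch), a module can subscribe to flush\n * events with RedisModule_SubscribeToServerEvent and dispatch on the\n * subevent in its callback:\n *\n *     void flush_cb(RedisModuleCtx *ctx, RedisModuleEvent e,\n *                   uint64_t subevent, void *data)\n *     {\n *         RedisModuleFlushInfo *fi = data;\n *         if (subevent == REDISMODULE_SUBEVENT_FLUSHDB_START)\n *             RedisModule_Log(ctx, "notice", "flushing db %d",\n *                             (int)fi->dbnum);\n *     }\n *     ...\n *     RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_FlushDB,\n *                                        flush_cb);\n *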
*/\n#define REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START 0\n#define REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START 1\n#define REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START 2\n#define REDISMODULE_SUBEVENT_PERSISTENCE_ENDED 3\n#define REDISMODULE_SUBEVENT_PERSISTENCE_FAILED 4\n#define REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START 5\n#define _REDISMODULE_SUBEVENT_PERSISTENCE_NEXT 6\n\n#define REDISMODULE_SUBEVENT_LOADING_RDB_START 0\n#define REDISMODULE_SUBEVENT_LOADING_AOF_START 1\n#define REDISMODULE_SUBEVENT_LOADING_REPL_START 2\n#define REDISMODULE_SUBEVENT_LOADING_ENDED 3\n#define REDISMODULE_SUBEVENT_LOADING_FAILED 4\n#define _REDISMODULE_SUBEVENT_LOADING_NEXT 5\n\n#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED 0\n#define REDISMODULE_SUBEVENT_CLIENT_CHANGE_DISCONNECTED 1\n#define _REDISMODULE_SUBEVENT_CLIENT_CHANGE_NEXT 2\n\n#define REDISMODULE_SUBEVENT_MASTER_LINK_UP 0\n#define REDISMODULE_SUBEVENT_MASTER_LINK_DOWN 1\n#define _REDISMODULE_SUBEVENT_MASTER_NEXT 2\n\n#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE 0\n#define REDISMODULE_SUBEVENT_REPLICA_CHANGE_OFFLINE 1\n#define _REDISMODULE_SUBEVENT_REPLICA_CHANGE_NEXT 2\n\n#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER 0\n#define REDISMODULE_EVENT_REPLROLECHANGED_NOW_REPLICA 1\n#define _REDISMODULE_EVENT_REPLROLECHANGED_NEXT 2\n\n#define REDISMODULE_SUBEVENT_FLUSHDB_START 0\n#define REDISMODULE_SUBEVENT_FLUSHDB_END 1\n#define _REDISMODULE_SUBEVENT_FLUSHDB_NEXT 2\n\n#define REDISMODULE_SUBEVENT_MODULE_LOADED 0\n#define REDISMODULE_SUBEVENT_MODULE_UNLOADED 1\n#define _REDISMODULE_SUBEVENT_MODULE_NEXT 2\n\n#define REDISMODULE_SUBEVENT_CONFIG_CHANGE 0\n#define _REDISMODULE_SUBEVENT_CONFIG_NEXT 1\n\n#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB 0\n#define REDISMODULE_SUBEVENT_LOADING_PROGRESS_AOF 1\n#define _REDISMODULE_SUBEVENT_LOADING_PROGRESS_NEXT 2\n\n/* Replication Backup events are deprecated since Redis 7.0 and are never fired. 
*/\n#define REDISMODULE_SUBEVENT_REPL_BACKUP_CREATE 0\n#define REDISMODULE_SUBEVENT_REPL_BACKUP_RESTORE 1\n#define REDISMODULE_SUBEVENT_REPL_BACKUP_DISCARD 2\n#define _REDISMODULE_SUBEVENT_REPL_BACKUP_NEXT 3\n\n#define REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED 0\n#define REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED 1\n#define REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED 2\n#define _REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_NEXT 3\n\n#define REDISMODULE_SUBEVENT_FORK_CHILD_BORN 0\n#define REDISMODULE_SUBEVENT_FORK_CHILD_DIED 1\n#define _REDISMODULE_SUBEVENT_FORK_CHILD_NEXT 2\n\n#define REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP 0\n#define REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP 1\n#define _REDISMODULE_SUBEVENT_EVENTLOOP_NEXT 2\n\n#define REDISMODULE_SUBEVENT_KEY_DELETED 0\n#define REDISMODULE_SUBEVENT_KEY_EXPIRED 1\n#define REDISMODULE_SUBEVENT_KEY_EVICTED 2\n#define REDISMODULE_SUBEVENT_KEY_OVERWRITTEN 3\n#define _REDISMODULE_SUBEVENT_KEY_NEXT 4\n\n#define _REDISMODULE_SUBEVENT_SHUTDOWN_NEXT 0\n#define _REDISMODULE_SUBEVENT_CRON_LOOP_NEXT 0\n#define _REDISMODULE_SUBEVENT_SWAPDB_NEXT 0\n\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED 0\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_FAILED 1\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_COMPLETED 2\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_STARTED 3\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_FAILED 4\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED 5\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE 6\n#define _REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_NEXT 7\n\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED 0\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_COMPLETED 1\n#define REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_BACKGROUND 2\n#define _REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_NEXT 3\n\n/* RedisModuleClientInfo flags. 
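 *\n * These flags are reported in the 'flags' field of RedisModuleClientInfo,\n * which is typically obtained with RedisModule_GetClientInfoById. For\n * example (an illustrative sketch; `client_id` is a hypothetical client ID\n * held by the module):\n *\n *     RedisModuleClientInfo ci = REDISMODULE_CLIENTINFO_INITIALIZER_V1;\n *     if (RedisModule_GetClientInfoById(&ci, client_id) == REDISMODULE_OK &&\n *         (ci.flags & REDISMODULE_CLIENTINFO_FLAG_SSL))\n *     {\n *         ... the client is connected over TLS ...\n *     }\n *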
*/\n#define REDISMODULE_CLIENTINFO_FLAG_SSL (1<<0)\n#define REDISMODULE_CLIENTINFO_FLAG_PUBSUB (1<<1)\n#define REDISMODULE_CLIENTINFO_FLAG_BLOCKED (1<<2)\n#define REDISMODULE_CLIENTINFO_FLAG_TRACKING (1<<3)\n#define REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET (1<<4)\n#define REDISMODULE_CLIENTINFO_FLAG_MULTI (1<<5)\n\n/* Here we take all the structures that the module passes to the core\n * and the other way around. Notably the list here contains the structures\n * used by the hooks API RedisModule_SubscribeToServerEvent().\n *\n * The structures always start with a 'version' field. This is useful\n * when we want to pass a reference to the structure to the core APIs,\n * for the APIs to fill the structure. In that case, the structure 'version'\n * field is initialized before passing it to the core, so that the core is\n * able to cast the pointer to the appropriate structure version. In this\n * way we obtain ABI compatibility.\n *\n * Here we'll list all the structure versions in case they evolve over time,\n * however, using a define, we'll make sure to use the last version as the\n * public name for the module to use. */\n\n#define REDISMODULE_CLIENTINFO_VERSION 1\ntypedef struct RedisModuleClientInfo {\n    uint64_t version;       /* Version of this structure for ABI compat. */\n    uint64_t flags;         /* REDISMODULE_CLIENTINFO_FLAG_* */\n    uint64_t id;            /* Client ID. */\n    char addr[46];          /* IPv4 or IPv6 address. */\n    uint16_t port;          /* TCP port. */\n    uint16_t db;            /* Selected DB. */\n} RedisModuleClientInfoV1;\n\n#define RedisModuleClientInfo RedisModuleClientInfoV1\n\n#define REDISMODULE_CLIENTINFO_INITIALIZER_V1 { .version = 1 }\n\n#define REDISMODULE_REPLICATIONINFO_VERSION 1\ntypedef struct RedisModuleReplicationInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. 
Here\n                               for future compatibility. */\n    int master;             /* true if master, false if replica */\n    char *masterhost;       /* master instance hostname for NOW_REPLICA */\n    int masterport;         /* master instance port for NOW_REPLICA */\n    char *replid1;          /* Main replication ID */\n    char *replid2;          /* Secondary replication ID */\n    uint64_t repl1_offset;  /* Main replication offset */\n    uint64_t repl2_offset;  /* Offset of replid2 validity */\n} RedisModuleReplicationInfoV1;\n\n#define RedisModuleReplicationInfo RedisModuleReplicationInfoV1\n\n#define REDISMODULE_FLUSHINFO_VERSION 1\ntypedef struct RedisModuleFlushInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    int32_t sync;           /* Synchronous or threaded flush? */\n    int32_t dbnum;          /* Flushed database number, -1 for ALL. */\n} RedisModuleFlushInfoV1;\n\n#define RedisModuleFlushInfo RedisModuleFlushInfoV1\n\n#define REDISMODULE_MODULE_CHANGE_VERSION 1\ntypedef struct RedisModuleModuleChange {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    const char* module_name;/* Name of module loaded or unloaded. */\n    int32_t module_version; /* Module version. */\n} RedisModuleModuleChangeV1;\n\n#define RedisModuleModuleChange RedisModuleModuleChangeV1\n\n#define REDISMODULE_CONFIGCHANGE_VERSION 1\ntypedef struct RedisModuleConfigChange {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. 
*/\n    uint32_t num_changes;   /* how many redis config options were changed */\n    const char **config_names; /* the config names that were changed */\n} RedisModuleConfigChangeV1;\n\n#define RedisModuleConfigChange RedisModuleConfigChangeV1\n\n#define REDISMODULE_CRON_LOOP_VERSION 1\ntypedef struct RedisModuleCronLoopInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    int32_t hz;             /* Approximate number of events per second. */\n} RedisModuleCronLoopV1;\n\n#define RedisModuleCronLoop RedisModuleCronLoopV1\n\n#define REDISMODULE_LOADING_PROGRESS_VERSION 1\ntypedef struct RedisModuleLoadingProgressInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    int32_t hz;             /* Approximate number of events per second. */\n    int32_t progress;       /* Approximate progress between 0 and 1024, or -1\n                             * if unknown. */\n} RedisModuleLoadingProgressV1;\n\n#define RedisModuleLoadingProgress RedisModuleLoadingProgressV1\n\n#define REDISMODULE_SWAPDBINFO_VERSION 1\ntypedef struct RedisModuleSwapDbInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. 
*/\n    int32_t dbnum_first;    /* Swap Db first dbnum */\n    int32_t dbnum_second;   /* Swap Db second dbnum */\n} RedisModuleSwapDbInfoV1;\n\n#define RedisModuleSwapDbInfo RedisModuleSwapDbInfoV1\n\n#define REDISMODULE_KEYINFO_VERSION 1\ntypedef struct RedisModuleKeyInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    RedisModuleKey *key;    /* Opened key. */\n} RedisModuleKeyInfoV1;\n\n#define RedisModuleKeyInfo RedisModuleKeyInfoV1\n\ntypedef struct RedisModuleSlotRange {\n    uint16_t start;\n    uint16_t end;\n} RedisModuleSlotRange;\n\ntypedef struct RedisModuleSlotRangeArray {\n    int32_t num_ranges;\n    RedisModuleSlotRange ranges[];\n} RedisModuleSlotRangeArray;\n\n#define REDISMODULE_CLUSTER_SLOT_MIGRATION_INFO_VERSION 1\n\ntypedef struct RedisModuleClusterSlotMigrationInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. */\n    char source_node_id[REDISMODULE_NODE_ID_LEN + 1];\n    char destination_node_id[REDISMODULE_NODE_ID_LEN + 1];\n    const char *task_id;\n    RedisModuleSlotRangeArray *slots;\n} RedisModuleClusterSlotMigrationInfoV1;\n\n#define RedisModuleClusterSlotMigrationInfo RedisModuleClusterSlotMigrationInfoV1\n\n#define REDISMODULE_CLUSTER_SLOT_MIGRATION_TRIMINFO_VERSION 1\n\ntypedef struct RedisModuleClusterSlotMigrationTrimInfo {\n    uint64_t version;       /* Not used since this structure is never passed\n                               from the module to the core right now. Here\n                               for future compatibility. 
*/\n    RedisModuleSlotRangeArray *slots;\n} RedisModuleClusterSlotMigrationTrimInfoV1;\n\n#define RedisModuleClusterSlotMigrationTrimInfo RedisModuleClusterSlotMigrationTrimInfoV1\n\ntypedef enum {\n    REDISMODULE_ACL_LOG_AUTH = 0, /* Authentication failure */\n    REDISMODULE_ACL_LOG_CMD, /* Command authorization failure */\n    REDISMODULE_ACL_LOG_KEY, /* Key authorization failure */\n    REDISMODULE_ACL_LOG_CHANNEL /* Channel authorization failure */\n} RedisModuleACLLogEntryReason;\n\ntypedef enum {\n    REDISMODULE_CONFIG_TYPE_STRING = 0,\n    REDISMODULE_CONFIG_TYPE_ENUM,\n    REDISMODULE_CONFIG_TYPE_NUMERIC,\n    REDISMODULE_CONFIG_TYPE_BOOL,\n} RedisModuleConfigType;\n\n/* Incomplete structures needed by both the core and modules. */\ntypedef struct RedisModuleIO RedisModuleIO;\ntypedef struct RedisModuleDigest RedisModuleDigest;\ntypedef struct RedisModuleInfoCtx RedisModuleInfoCtx;\ntypedef struct RedisModuleDefragCtx RedisModuleDefragCtx;\n\n/* Function pointers needed by both the core and modules; these need to be\n * exposed since you can't cast a function pointer to (void *). */\ntypedef void (*RedisModuleInfoFunc)(RedisModuleInfoCtx *ctx, int for_crash_report);\ntypedef void (*RedisModuleDefragFunc)(RedisModuleDefragCtx *ctx);\ntypedef int (*RedisModuleDefragFunc2)(RedisModuleDefragCtx *ctx);\ntypedef void (*RedisModuleUserChangedFunc) (uint64_t client_id, void *privdata);\ntypedef int (*RedisModuleDefragDictValueCallback)(RedisModuleDefragCtx *ctx, void *data, unsigned char *key, size_t keylen, void **newptr);\n\n/* ------------------------- End of common defines ------------------------ */\n\n/* ----------- The rest of the defines are only for modules ----------------- */\n#if !defined REDISMODULE_CORE || defined REDISMODULE_CORE_MODULE\n/* Things defined for modules and core-modules. 
*/\n\n/* Macro definitions specific to individual compilers */\n#ifndef REDISMODULE_ATTR_UNUSED\n#    ifdef __GNUC__\n#        define REDISMODULE_ATTR_UNUSED __attribute__((unused))\n#    else\n#        define REDISMODULE_ATTR_UNUSED\n#    endif\n#endif\n\n#ifndef REDISMODULE_ATTR_PRINTF\n#    ifdef __GNUC__\n#        define REDISMODULE_ATTR_PRINTF(idx,cnt) __attribute__((format(printf,idx,cnt)))\n#    else\n#        define REDISMODULE_ATTR_PRINTF(idx,cnt)\n#    endif\n#endif\n\n#ifndef REDISMODULE_ATTR_COMMON\n#    if defined(__GNUC__) && !(defined(__clang__) && defined(__cplusplus))\n#        define REDISMODULE_ATTR_COMMON __attribute__((__common__))\n#    else\n#        define REDISMODULE_ATTR_COMMON\n#    endif\n#endif\n\n/* Incomplete structures for compiler checks but opaque access. */\ntypedef struct RedisModuleCtx RedisModuleCtx;\ntypedef struct RedisModuleCommand RedisModuleCommand;\ntypedef struct RedisModuleCallReply RedisModuleCallReply;\ntypedef struct RedisModuleType RedisModuleType;\ntypedef struct RedisModuleBlockedClient RedisModuleBlockedClient;\ntypedef struct RedisModuleClusterInfo RedisModuleClusterInfo;\ntypedef struct RedisModuleDict RedisModuleDict;\ntypedef struct RedisModuleDictIter RedisModuleDictIter;\ntypedef struct RedisModuleCommandFilterCtx RedisModuleCommandFilterCtx;\ntypedef struct RedisModuleCommandFilter RedisModuleCommandFilter;\ntypedef struct RedisModuleServerInfoData RedisModuleServerInfoData;\ntypedef struct RedisModuleScanCursor RedisModuleScanCursor;\ntypedef struct RedisModuleUser RedisModuleUser;\ntypedef struct RedisModuleKeyOptCtx RedisModuleKeyOptCtx;\ntypedef struct RedisModuleRdbStream RedisModuleRdbStream;\ntypedef struct RedisModuleConfigIterator RedisModuleConfigIterator;\n\ntypedef int (*RedisModuleCmdFunc)(RedisModuleCtx *ctx, RedisModuleString **argv, int argc);\ntypedef void (*RedisModuleDisconnectFunc)(RedisModuleCtx *ctx, RedisModuleBlockedClient *bc);\ntypedef int 
(*RedisModuleNotificationFunc)(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key);\ntypedef void (*RedisModuleNotificationWithSubkeysFunc)(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key, RedisModuleString **subkeys, int count);\ntypedef void (*RedisModulePostNotificationJobFunc) (RedisModuleCtx *ctx, void *pd);\ntypedef void *(*RedisModuleTypeLoadFunc)(RedisModuleIO *rdb, int encver);\ntypedef void (*RedisModuleTypeSaveFunc)(RedisModuleIO *rdb, void *value);\ntypedef int (*RedisModuleTypeAuxLoadFunc)(RedisModuleIO *rdb, int encver, int when);\ntypedef void (*RedisModuleTypeAuxSaveFunc)(RedisModuleIO *rdb, int when);\ntypedef void (*RedisModuleTypeRewriteFunc)(RedisModuleIO *aof, RedisModuleString *key, void *value);\ntypedef size_t (*RedisModuleTypeMemUsageFunc)(const void *value);\ntypedef size_t (*RedisModuleTypeMemUsageFunc2)(RedisModuleKeyOptCtx *ctx, const void *value, size_t sample_size);\ntypedef void (*RedisModuleTypeDigestFunc)(RedisModuleDigest *digest, void *value);\ntypedef void (*RedisModuleTypeFreeFunc)(void *value);\ntypedef size_t (*RedisModuleTypeFreeEffortFunc)(RedisModuleString *key, const void *value);\ntypedef size_t (*RedisModuleTypeFreeEffortFunc2)(RedisModuleKeyOptCtx *ctx, const void *value);\ntypedef void (*RedisModuleTypeUnlinkFunc)(RedisModuleString *key, const void *value);\ntypedef void (*RedisModuleTypeUnlinkFunc2)(RedisModuleKeyOptCtx *ctx, const void *value);\ntypedef void *(*RedisModuleTypeCopyFunc)(RedisModuleString *fromkey, RedisModuleString *tokey, const void *value);\ntypedef void *(*RedisModuleTypeCopyFunc2)(RedisModuleKeyOptCtx *ctx, const void *value);\ntypedef int (*RedisModuleTypeDefragFunc)(RedisModuleDefragCtx *ctx, RedisModuleString *key, void **value);\ntypedef void (*RedisModuleClusterMessageReceiver)(RedisModuleCtx *ctx, const char *sender_id, uint8_t type, const unsigned char *payload, uint32_t len);\ntypedef void (*RedisModuleTimerProc)(RedisModuleCtx *ctx, void 
*data);\ntypedef void (*RedisModuleCommandFilterFunc) (RedisModuleCommandFilterCtx *filter);\ntypedef void (*RedisModuleForkDoneHandler) (int exitcode, int bysignal, void *user_data);\ntypedef void (*RedisModuleScanCB)(RedisModuleCtx *ctx, RedisModuleString *keyname, RedisModuleKey *key, void *privdata);\ntypedef void (*RedisModuleScanKeyCB)(RedisModuleKey *key, RedisModuleString *field, RedisModuleString *value, void *privdata);\ntypedef RedisModuleString * (*RedisModuleConfigGetStringFunc)(const char *name, void *privdata);\ntypedef long long (*RedisModuleConfigGetNumericFunc)(const char *name, void *privdata);\ntypedef int (*RedisModuleConfigGetBoolFunc)(const char *name, void *privdata);\ntypedef int (*RedisModuleConfigGetEnumFunc)(const char *name, void *privdata);\ntypedef int (*RedisModuleConfigSetStringFunc)(const char *name, RedisModuleString *val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetNumericFunc)(const char *name, long long val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetBoolFunc)(const char *name, int val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigSetEnumFunc)(const char *name, int val, void *privdata, RedisModuleString **err);\ntypedef int (*RedisModuleConfigApplyFunc)(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err);\ntypedef void (*RedisModuleOnUnblocked)(RedisModuleCtx *ctx, RedisModuleCallReply *reply, void *private_data);\ntypedef int (*RedisModuleAuthCallback)(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err);\n\ntypedef int (*RedisModuleKeyMetaLoadFunc)(RedisModuleIO *rdb, uint64_t *meta, int encver);\ntypedef void (*RedisModuleKeyMetaSaveFunc)(RedisModuleIO *rdb, void *reserved, uint64_t *meta);\ntypedef void (*RedisModuleKeyMetaAOFRewriteFunc)(RedisModuleIO *aof, void *reserved, uint64_t meta);\ntypedef void (*RedisModuleKeyMetaFreeFunc)(const char *keyname, uint64_t 
meta);\ntypedef int (*RedisModuleKeyMetaCopyFunc)(RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*RedisModuleKeyMetaRenameFunc)(RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*RedisModuleKeyMetaDefragFunc)(RedisModuleDefragCtx *ctx, RedisModuleString *keyname, uint64_t meta);\ntypedef size_t (*RedisModuleKeyMetaMemUsageFunc)(RedisModuleKeyOptCtx *ctx, size_t sample_size, uint64_t meta);\ntypedef size_t (*RedisModuleKeyMetaFreeEffortFunc)(RedisModuleKeyOptCtx *ctx, uint64_t meta);\ntypedef void (*RedisModuleKeyMetaUnlinkFunc)(RedisModuleKeyOptCtx *ctx, uint64_t *meta);\ntypedef int (*RedisModuleKeyMetaMoveFunc)(RedisModuleKeyOptCtx *ctx, uint64_t *meta);\n\n\ntypedef struct RedisModuleTypeMethods {\n    uint64_t version;\n    RedisModuleTypeLoadFunc rdb_load;\n    RedisModuleTypeSaveFunc rdb_save;\n    RedisModuleTypeRewriteFunc aof_rewrite;\n    RedisModuleTypeMemUsageFunc mem_usage;\n    RedisModuleTypeDigestFunc digest;\n    RedisModuleTypeFreeFunc free;\n    RedisModuleTypeAuxLoadFunc aux_load;\n    RedisModuleTypeAuxSaveFunc aux_save;\n    int aux_save_triggers;\n    RedisModuleTypeFreeEffortFunc free_effort;\n    RedisModuleTypeUnlinkFunc unlink;\n    RedisModuleTypeCopyFunc copy;\n    RedisModuleTypeDefragFunc defrag;\n    RedisModuleTypeMemUsageFunc2 mem_usage2;\n    RedisModuleTypeFreeEffortFunc2 free_effort2;\n    RedisModuleTypeUnlinkFunc2 unlink2;\n    RedisModuleTypeCopyFunc2 copy2;\n    RedisModuleTypeAuxSaveFunc aux_save2;\n} RedisModuleTypeMethods;\n\n/* Key metadata class configuration structure.\n * Must be aligned with KeyMetaConfAllVersions in module.c.\n * See RM_CreateKeyMetaClass() documentation in module.c for detailed information. 
*/\ntypedef struct RedisModuleKeyMetaClassConfig {\n    uint64_t version;\n#define REDISMODULE_META_ALLOW_IGNORE 0\n    uint64_t flags;\n    uint64_t reset_value;\n    RedisModuleKeyMetaCopyFunc copy;\n    RedisModuleKeyMetaRenameFunc rename;\n    RedisModuleKeyMetaMoveFunc move;\n    RedisModuleKeyMetaUnlinkFunc unlink;\n    RedisModuleKeyMetaFreeFunc free;\n    RedisModuleKeyMetaLoadFunc rdb_load;\n    RedisModuleKeyMetaSaveFunc rdb_save;\n    RedisModuleKeyMetaAOFRewriteFunc aof_rewrite;\n    RedisModuleKeyMetaDefragFunc defrag;\n    RedisModuleKeyMetaMemUsageFunc mem_usage;\n    RedisModuleKeyMetaFreeEffortFunc free_effort;\n} RedisModuleKeyMetaClassConfig;\n\n#define REDISMODULE_GET_API(name) \\\n    RedisModule_GetApi(\"RedisModule_\" #name, ((void **)&RedisModule_ ## name))\n\n/* Default API declaration prefix (not 'extern' for backwards compatibility) */\n#ifndef REDISMODULE_API\n#define REDISMODULE_API\n#endif\n\n/* Default API declaration suffix (compiler attributes) */\n#ifndef REDISMODULE_ATTR\n#define REDISMODULE_ATTR REDISMODULE_ATTR_COMMON\n#endif\n\nREDISMODULE_API void * (*RedisModule_Alloc)(size_t bytes) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_TryAlloc)(size_t bytes) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_Realloc)(void *ptr, size_t bytes) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_TryRealloc)(void *ptr, size_t bytes) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_Free)(void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_Calloc)(size_t nmemb, size_t size) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_TryCalloc)(size_t nmemb, size_t size) REDISMODULE_ATTR;\nREDISMODULE_API char * (*RedisModule_Strdup)(const char *str) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetApi)(const char *, void *) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CreateCommand)(RedisModuleCtx *ctx, const char *name, RedisModuleCmdFunc cmdfunc, const char *strflags, int firstkey, int 
lastkey, int keystep) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCommand *(*RedisModule_GetCommand)(RedisModuleCtx *ctx, const char *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CreateSubcommand)(RedisModuleCommand *parent, const char *name, RedisModuleCmdFunc cmdfunc, const char *strflags, int firstkey, int lastkey, int keystep) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetCommandInfo)(RedisModuleCommand *command, const RedisModuleCommandInfo *info) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetCommandACLCategories)(RedisModuleCommand *command, const char *ctgrsflags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_AddACLCategory)(RedisModuleCtx *ctx, const char *name) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SetModuleAttribs)(RedisModuleCtx *ctx, const char *name, int ver, int apiver) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsModuleNameBusy)(const char *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_WrongArity)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithLongLong)(RedisModuleCtx *ctx, long long ll) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetSelectedDb)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SelectDb)(RedisModuleCtx *ctx, int newid) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_KeyExists)(RedisModuleCtx *ctx, RedisModuleString *keyname) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleKey * (*RedisModule_OpenKey)(RedisModuleCtx *ctx, RedisModuleString *keyname, int mode) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetOpenKeyModesAll)(void) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_CloseKey)(RedisModuleKey *kp) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_KeyType)(RedisModuleKey *kp) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_ValueLength)(RedisModuleKey *kp) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ListPush)(RedisModuleKey *kp, int where, 
RedisModuleString *ele) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_ListPop)(RedisModuleKey *key, int where) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_ListGet)(RedisModuleKey *key, long index) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ListSet)(RedisModuleKey *key, long index, RedisModuleString *value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ListInsert)(RedisModuleKey *key, long index, RedisModuleString *value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ListDelete)(RedisModuleKey *key, long index) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCallReply * (*RedisModule_Call)(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_CallReplyProto)(RedisModuleCallReply *reply, size_t *len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeCallReply)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CallReplyType)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API long long (*RedisModule_CallReplyInteger)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API double (*RedisModule_CallReplyDouble)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CallReplyBool)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API const char* (*RedisModule_CallReplyBigNumber)(RedisModuleCallReply *reply, size_t *len) REDISMODULE_ATTR;\nREDISMODULE_API const char* (*RedisModule_CallReplyVerbatim)(RedisModuleCallReply *reply, size_t *len, const char **format) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCallReply * (*RedisModule_CallReplySetElement)(RedisModuleCallReply *reply, size_t idx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CallReplyMapElement)(RedisModuleCallReply *reply, size_t idx, RedisModuleCallReply **key, RedisModuleCallReply **val) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_CallReplyAttributeElement)(RedisModuleCallReply *reply, size_t idx, RedisModuleCallReply **key, RedisModuleCallReply **val) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_CallReplyPromiseSetUnblockHandler)(RedisModuleCallReply *reply, RedisModuleOnUnblocked on_unblock, void *private_data) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CallReplyPromiseAbort)(RedisModuleCallReply *reply, void **private_data) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCallReply * (*RedisModule_CallReplyAttribute)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_CallReplyLength)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCallReply * (*RedisModule_CallReplyArrayElement)(RedisModuleCallReply *reply, size_t idx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateString)(RedisModuleCtx *ctx, const char *ptr, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromLongLong)(RedisModuleCtx *ctx, long long ll) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromULongLong)(RedisModuleCtx *ctx, unsigned long long ull) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromDouble)(RedisModuleCtx *ctx, double d) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromLongDouble)(RedisModuleCtx *ctx, long double ld, int humanfriendly) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromString)(RedisModuleCtx *ctx, const RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromStreamID)(RedisModuleCtx *ctx, const RedisModuleStreamID *id) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringPrintf)(RedisModuleCtx *ctx, const char *fmt, ...) 
REDISMODULE_ATTR_PRINTF(2,3) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeString)(RedisModuleCtx *ctx, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_StringPtrLen)(const RedisModuleString *str, size_t *len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithError)(RedisModuleCtx *ctx, const char *err) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithErrorFormat)(RedisModuleCtx *ctx, const char *fmt, ...) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithSimpleString)(RedisModuleCtx *ctx, const char *msg) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithArray)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithMap)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithSet)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithAttribute)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithNullArray)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithEmptyArray)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ReplySetArrayLength)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ReplySetMapLength)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ReplySetSetLength)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ReplySetAttributeLength)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ReplySetPushLength)(RedisModuleCtx *ctx, long len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithStringBuffer)(RedisModuleCtx *ctx, const char *buf, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithCString)(RedisModuleCtx *ctx, const char *buf) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_ReplyWithString)(RedisModuleCtx *ctx, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithEmptyString)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithVerbatimString)(RedisModuleCtx *ctx, const char *buf, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithVerbatimStringType)(RedisModuleCtx *ctx, const char *buf, size_t len, const char *ext) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithNull)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithBool)(RedisModuleCtx *ctx, int b) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithLongDouble)(RedisModuleCtx *ctx, long double d) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithDouble)(RedisModuleCtx *ctx, double d) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithBigNumber)(RedisModuleCtx *ctx, const char *bignum, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplyWithCallReply)(RedisModuleCtx *ctx, RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringToLongLong)(const RedisModuleString *str, long long *ll) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringToULongLong)(const RedisModuleString *str, unsigned long long *ull) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringToDouble)(const RedisModuleString *str, double *d) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringToLongDouble)(const RedisModuleString *str, long double *d) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringToStreamID)(const RedisModuleString *str, RedisModuleStreamID *id) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_AutoMemory)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_Replicate)(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) 
REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReplicateVerbatim)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_CallReplyStringPtr)(RedisModuleCallReply *reply, size_t *len) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CreateStringFromCallReply)(RedisModuleCallReply *reply) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DeleteKey)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_UnlinkKey)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringSet)(RedisModuleKey *key, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API char * (*RedisModule_StringDMA)(RedisModuleKey *key, size_t *len, int mode) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringTruncate)(RedisModuleKey *key, size_t newlen) REDISMODULE_ATTR;\nREDISMODULE_API mstime_t (*RedisModule_GetExpire)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetExpire)(RedisModuleKey *key, mstime_t expire) REDISMODULE_ATTR;\nREDISMODULE_API mstime_t (*RedisModule_GetAbsExpire)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetAbsExpire)(RedisModuleKey *key, mstime_t expire) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ResetDataset)(int restart_aof, int async) REDISMODULE_ATTR;\nREDISMODULE_API unsigned long long (*RedisModule_DbSize)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_RandomKey)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetAdd)(RedisModuleKey *key, double score, RedisModuleString *ele, int *flagsptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetIncrby)(RedisModuleKey *key, double score, RedisModuleString *ele, int *flagsptr, double *newscore) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetScore)(RedisModuleKey *key, RedisModuleString *ele, double *score) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_ZsetRem)(RedisModuleKey *key, RedisModuleString *ele, int *deleted) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ZsetRangeStop)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetFirstInScoreRange)(RedisModuleKey *key, double min, double max, int minex, int maxex) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetLastInScoreRange)(RedisModuleKey *key, double min, double max, int minex, int maxex) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetFirstInLexRange)(RedisModuleKey *key, RedisModuleString *min, RedisModuleString *max) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetLastInLexRange)(RedisModuleKey *key, RedisModuleString *min, RedisModuleString *max) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_ZsetRangeCurrentElement)(RedisModuleKey *key, double *score) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetRangeNext)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetRangePrev)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ZsetRangeEndReached)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_HashSet)(RedisModuleKey *key, int flags, ...) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_HashGet)(RedisModuleKey *key, int flags, ...) 
REDISMODULE_ATTR;\nREDISMODULE_API mstime_t (*RedisModule_HashFieldMinExpire)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamAdd)(RedisModuleKey *key, int flags, RedisModuleStreamID *id, RedisModuleString **argv, int64_t numfields) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamDelete)(RedisModuleKey *key, RedisModuleStreamID *id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamIteratorStart)(RedisModuleKey *key, int flags, RedisModuleStreamID *startid, RedisModuleStreamID *endid) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamIteratorStop)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamIteratorNextID)(RedisModuleKey *key, RedisModuleStreamID *id, long *numfields) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamIteratorNextField)(RedisModuleKey *key, RedisModuleString **field_ptr, RedisModuleString **value_ptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StreamIteratorDelete)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API long long (*RedisModule_StreamTrimByLength)(RedisModuleKey *key, int flags, long long length) REDISMODULE_ATTR;\nREDISMODULE_API long long (*RedisModule_StreamTrimByID)(RedisModuleKey *key, int flags, RedisModuleStreamID *id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsKeysPositionRequest)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_KeyAtPos)(RedisModuleCtx *ctx, int pos) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_KeyAtPosWithFlags)(RedisModuleCtx *ctx, int pos, int flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsChannelsPositionRequest)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ChannelAtPosWithFlags)(RedisModuleCtx *ctx, int pos, int flags) REDISMODULE_ATTR;\nREDISMODULE_API unsigned long long (*RedisModule_GetClientId)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * 
(*RedisModule_GetClientUserNameById)(RedisModuleCtx *ctx, uint64_t id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetClientInfoById)(void *ci, uint64_t id) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_GetClientNameById)(RedisModuleCtx *ctx, uint64_t id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetClientNameById)(uint64_t id, RedisModuleString *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_PublishMessage)(RedisModuleCtx *ctx, RedisModuleString *channel, RedisModuleString *message) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_PublishMessageShard)(RedisModuleCtx *ctx, RedisModuleString *channel, RedisModuleString *message) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetContextFlags)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_AvoidReplicaTraffic)(void) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_PoolAlloc)(RedisModuleCtx *ctx, size_t bytes) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleType * (*RedisModule_CreateDataType)(RedisModuleCtx *ctx, const char *name, int encver, RedisModuleTypeMethods *typemethods) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ModuleTypeSetValue)(RedisModuleKey *key, RedisModuleType *mt, void *value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ModuleTypeReplaceValue)(RedisModuleKey *key, RedisModuleType *mt, void *new_value, void **old_value) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleType * (*RedisModule_ModuleTypeGetType)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_ModuleTypeGetValue)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsIOError)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SetModuleOptions)(RedisModuleCtx *ctx, int options) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SignalModifiedKey)(RedisModuleCtx *ctx, RedisModuleString *keyname) REDISMODULE_ATTR;\nREDISMODULE_API void 
(*RedisModule_SaveUnsigned)(RedisModuleIO *io, uint64_t value) REDISMODULE_ATTR;\nREDISMODULE_API uint64_t (*RedisModule_LoadUnsigned)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveSigned)(RedisModuleIO *io, int64_t value) REDISMODULE_ATTR;\nREDISMODULE_API int64_t (*RedisModule_LoadSigned)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_EmitAOF)(RedisModuleIO *io, const char *cmdname, const char *fmt, ...) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveString)(RedisModuleIO *io, RedisModuleString *s) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveStringBuffer)(RedisModuleIO *io, const char *str, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_LoadString)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API char * (*RedisModule_LoadStringBuffer)(RedisModuleIO *io, size_t *lenptr) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveDouble)(RedisModuleIO *io, double value) REDISMODULE_ATTR;\nREDISMODULE_API double (*RedisModule_LoadDouble)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveFloat)(RedisModuleIO *io, float value) REDISMODULE_ATTR;\nREDISMODULE_API float (*RedisModule_LoadFloat)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SaveLongDouble)(RedisModuleIO *io, long double value) REDISMODULE_ATTR;\nREDISMODULE_API long double (*RedisModule_LoadLongDouble)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_LoadDataTypeFromString)(const RedisModuleString *str, const RedisModuleType *mt) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_LoadDataTypeFromStringEncver)(const RedisModuleString *str, const RedisModuleType *mt, int encver) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_SaveDataTypeToString)(RedisModuleCtx *ctx, void *data, const RedisModuleType *mt) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_Log)(RedisModuleCtx *ctx, 
const char *level, const char *fmt, ...) REDISMODULE_ATTR REDISMODULE_ATTR_PRINTF(3,4);\nREDISMODULE_API void (*RedisModule_LogIOError)(RedisModuleIO *io, const char *levelstr, const char *fmt, ...) REDISMODULE_ATTR REDISMODULE_ATTR_PRINTF(3,4);\nREDISMODULE_API void (*RedisModule__Assert)(const char *estr, const char *file, int line) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_LatencyAddSample)(const char *event, mstime_t latency) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringAppendBuffer)(RedisModuleCtx *ctx, RedisModuleString *str, const char *buf, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_TrimStringAllocation)(RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_RetainString)(RedisModuleCtx *ctx, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_HoldString)(RedisModuleCtx *ctx, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StringCompare)(const RedisModuleString *a, const RedisModuleString *b) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCtx * (*RedisModule_GetContextFromIO)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * (*RedisModule_GetKeyNameFromIO)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * (*RedisModule_GetKeyNameFromModuleKey)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetDbIdFromModuleKey)(RedisModuleKey *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetDbIdFromIO)(RedisModuleIO *io) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetDbIdFromOptCtx)(RedisModuleKeyOptCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetToDbIdFromOptCtx)(RedisModuleKeyOptCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * (*RedisModule_GetKeyNameFromOptCtx)(RedisModuleKeyOptCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * 
(*RedisModule_GetToKeyNameFromOptCtx)(RedisModuleKeyOptCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API mstime_t (*RedisModule_Milliseconds)(void) REDISMODULE_ATTR;\nREDISMODULE_API uint64_t (*RedisModule_MonotonicMicroseconds)(void) REDISMODULE_ATTR;\nREDISMODULE_API ustime_t (*RedisModule_Microseconds)(void) REDISMODULE_ATTR;\nREDISMODULE_API ustime_t (*RedisModule_CachedMicroseconds)(void) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_DigestAddStringBuffer)(RedisModuleDigest *md, const char *ele, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_DigestAddLongLong)(RedisModuleDigest *md, long long ele) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_DigestEndSequence)(RedisModuleDigest *md) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetDbIdFromDigest)(RedisModuleDigest *dig) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * (*RedisModule_GetKeyNameFromDigest)(RedisModuleDigest *dig) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleDict * (*RedisModule_CreateDict)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeDict)(RedisModuleCtx *ctx, RedisModuleDict *d) REDISMODULE_ATTR;\nREDISMODULE_API uint64_t (*RedisModule_DictSize)(RedisModuleDict *d) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictSetC)(RedisModuleDict *d, void *key, size_t keylen, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictReplaceC)(RedisModuleDict *d, void *key, size_t keylen, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictSet)(RedisModuleDict *d, RedisModuleString *key, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictReplace)(RedisModuleDict *d, RedisModuleString *key, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_DictGetC)(RedisModuleDict *d, void *key, size_t keylen, int *nokey) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_DictGet)(RedisModuleDict *d, RedisModuleString *key, int *nokey) REDISMODULE_ATTR;\nREDISMODULE_API 
int (*RedisModule_DictDelC)(RedisModuleDict *d, void *key, size_t keylen, void *oldval) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictDel)(RedisModuleDict *d, RedisModuleString *key, void *oldval) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleDictIter * (*RedisModule_DictIteratorStartC)(RedisModuleDict *d, const char *op, void *key, size_t keylen) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleDictIter * (*RedisModule_DictIteratorStart)(RedisModuleDict *d, const char *op, RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_DictIteratorStop)(RedisModuleDictIter *di) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictIteratorReseekC)(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictIteratorReseek)(RedisModuleDictIter *di, const char *op, RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_DictNextC)(RedisModuleDictIter *di, size_t *keylen, void **dataptr) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_DictPrevC)(RedisModuleDictIter *di, size_t *keylen, void **dataptr) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_DictNext)(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_DictPrev)(RedisModuleCtx *ctx, RedisModuleDictIter *di, void **dataptr) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictCompareC)(RedisModuleDictIter *di, const char *op, void *key, size_t keylen) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DictCompare)(RedisModuleDictIter *di, const char *op, RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterInfoFunc)(RedisModuleCtx *ctx, RedisModuleInfoFunc cb) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_RegisterAuthCallback)(RedisModuleCtx *ctx, RedisModuleAuthCallback cb) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_InfoAddSection)(RedisModuleInfoCtx *ctx, const char *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoBeginDictField)(RedisModuleInfoCtx *ctx, const char *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoEndDictField)(RedisModuleInfoCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoAddFieldString)(RedisModuleInfoCtx *ctx, const char *field, RedisModuleString *value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoAddFieldCString)(RedisModuleInfoCtx *ctx, const char *field, const char *value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoAddFieldDouble)(RedisModuleInfoCtx *ctx, const char *field, double value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoAddFieldLongLong)(RedisModuleInfoCtx *ctx, const char *field, long long value) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_InfoAddFieldULongLong)(RedisModuleInfoCtx *ctx, const char *field, unsigned long long value) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleServerInfoData * (*RedisModule_GetServerInfo)(RedisModuleCtx *ctx, const char *section) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeServerInfo)(RedisModuleCtx *ctx, RedisModuleServerInfoData *data) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_ServerInfoGetField)(RedisModuleCtx *ctx, RedisModuleServerInfoData *data, const char *field) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_ServerInfoGetFieldC)(RedisModuleServerInfoData *data, const char *field) REDISMODULE_ATTR;\nREDISMODULE_API long long (*RedisModule_ServerInfoGetFieldSigned)(RedisModuleServerInfoData *data, const char *field, int *out_err) REDISMODULE_ATTR;\nREDISMODULE_API unsigned long long (*RedisModule_ServerInfoGetFieldUnsigned)(RedisModuleServerInfoData *data, const char *field, int *out_err) REDISMODULE_ATTR;\nREDISMODULE_API double (*RedisModule_ServerInfoGetFieldDouble)(RedisModuleServerInfoData *data, const char *field, int *out_err) 
REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SubscribeToServerEvent)(RedisModuleCtx *ctx, RedisModuleEvent event, RedisModuleEventCallback callback) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetLRU)(RedisModuleKey *key, mstime_t lru_idle) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetLRU)(RedisModuleKey *key, mstime_t *lru_idle) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetLFU)(RedisModuleKey *key, long long lfu_freq) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetLFU)(RedisModuleKey *key, long long *lfu_freq) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleBlockedClient * (*RedisModule_BlockClientOnKeys)(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleBlockedClient * (*RedisModule_BlockClientOnKeysWithFlags)(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms, RedisModuleString **keys, int numkeys, void *privdata, int flags) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SignalKeyAsReady)(RedisModuleCtx *ctx, RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_GetBlockedClientReadyKey)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleScanCursor * (*RedisModule_ScanCursorCreate)(void) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ScanCursorRestart)(RedisModuleScanCursor *cursor) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ScanCursorDestroy)(RedisModuleScanCursor *cursor) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_Scan)(RedisModuleCtx *ctx, RedisModuleScanCursor *cursor, RedisModuleScanCB fn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ScanKey)(RedisModuleKey *key, RedisModuleScanCursor 
*cursor, RedisModuleScanKeyCB fn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetContextFlagsAll)(void) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetModuleOptionsAll)(void) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetKeyspaceNotificationFlagsAll)(void) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsSubEventSupported)(RedisModuleEvent event, uint64_t subevent) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetServerVersion)(void) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetTypeMethodVersion)(void) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_Yield)(RedisModuleCtx *ctx, int flags, const char *busy_reply) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleBlockedClient * (*RedisModule_BlockClient)(RedisModuleCtx *ctx, RedisModuleCmdFunc reply_callback, RedisModuleCmdFunc timeout_callback, void (*free_privdata)(RedisModuleCtx*,void*), long long timeout_ms) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_BlockClientGetPrivateData)(RedisModuleBlockedClient *blocked_client) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_BlockClientSetPrivateData)(RedisModuleBlockedClient *blocked_client, void *private_data) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleBlockedClient * (*RedisModule_BlockClientOnAuth)(RedisModuleCtx *ctx, RedisModuleAuthCallback reply_callback, void (*free_privdata)(RedisModuleCtx*,void*)) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_UnblockClient)(RedisModuleBlockedClient *bc, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsBlockedReplyRequest)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_IsBlockedTimeoutRequest)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_GetBlockedClientPrivateData)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleBlockedClient * (*RedisModule_GetBlockedClientHandle)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_AbortBlock)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_BlockedClientMeasureTimeStart)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_BlockedClientMeasureTimeEnd)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCtx * (*RedisModule_GetThreadSafeContext)(RedisModuleBlockedClient *bc) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCtx * (*RedisModule_GetDetachedThreadSafeContext)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeThreadSafeContext)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ThreadSafeContextLock)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ThreadSafeContextTryLock)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ThreadSafeContextUnlock)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SubscribeToKeyspaceEvents)(RedisModuleCtx *ctx, int types, RedisModuleNotificationFunc cb) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_UnsubscribeFromKeyspaceEvents)(RedisModuleCtx *ctx, int types, RedisModuleNotificationFunc cb) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SubscribeToKeyspaceEventsWithSubkeys)(RedisModuleCtx *ctx, int types, int flags, RedisModuleNotificationWithSubkeysFunc cb) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_UnsubscribeFromKeyspaceEventsWithSubkeys)(RedisModuleCtx *ctx, int types, int flags, RedisModuleNotificationWithSubkeysFunc cb) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_AddPostNotificationJob)(RedisModuleCtx *ctx, RedisModulePostNotificationJobFunc callback, void *pd, void (*free_pd)(void*)) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_NotifyKeyspaceEvent)(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_NotifyKeyspaceEventWithSubkeys)(RedisModuleCtx *ctx, int 
type, const char *event, RedisModuleString *key, RedisModuleString **subkeys, int count) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetNotifyKeyspaceEvents)(void) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_BlockedClientDisconnected)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_RegisterClusterMessageReceiver)(RedisModuleCtx *ctx, uint8_t type, RedisModuleClusterMessageReceiver callback) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SendClusterMessage)(RedisModuleCtx *ctx, const char *target_id, uint8_t type, const char *msg, uint32_t len) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetClusterNodeInfo)(RedisModuleCtx *ctx, const char *id, char *ip, char *master_id, int *port, int *flags) REDISMODULE_ATTR;\nREDISMODULE_API char ** (*RedisModule_GetClusterNodesList)(RedisModuleCtx *ctx, size_t *numnodes) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeClusterNodesList)(char **ids) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleTimerID (*RedisModule_CreateTimer)(RedisModuleCtx *ctx, mstime_t period, RedisModuleTimerProc callback, void *data) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_StopTimer)(RedisModuleCtx *ctx, RedisModuleTimerID id, void **data) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetTimerInfo)(RedisModuleCtx *ctx, RedisModuleTimerID id, uint64_t *remaining, void **data) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_GetMyClusterID)(void) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_GetClusterSize)(void) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_GetRandomBytes)(unsigned char *dst, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_GetRandomHexChars)(char *dst, size_t len) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SetDisconnectCallback)(RedisModuleBlockedClient *bc, RedisModuleDisconnectFunc callback) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SetClusterFlags)(RedisModuleCtx *ctx, uint64_t 
flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ClusterDisableTrim)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ClusterEnableTrim)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API unsigned int (*RedisModule_ClusterKeySlot)(RedisModuleString *key) REDISMODULE_ATTR;\nREDISMODULE_API unsigned int (*RedisModule_ClusterKeySlotC)(const char *keystr, size_t keylen) REDISMODULE_ATTR;\nREDISMODULE_API const char *(*RedisModule_ClusterCanonicalKeyNameInSlot)(unsigned int slot) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ClusterCanAccessKeysInSlot)(int slot) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ClusterPropagateForSlotMigration)(RedisModuleCtx *ctx, const char *cmdname, const char *fmt, ...) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleSlotRangeArray *(*RedisModule_ClusterGetLocalSlotRanges)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ClusterFreeSlotRanges)(RedisModuleCtx *ctx, RedisModuleSlotRangeArray *slots) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ExportSharedAPI)(RedisModuleCtx *ctx, const char *apiname, void *func) REDISMODULE_ATTR;\nREDISMODULE_API void * (*RedisModule_GetSharedAPI)(RedisModuleCtx *ctx, const char *apiname) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleCommandFilter * (*RedisModule_RegisterCommandFilter)(RedisModuleCtx *ctx, RedisModuleCommandFilterFunc cb, int flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_UnregisterCommandFilter)(RedisModuleCtx *ctx, RedisModuleCommandFilter *filter) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CommandFilterArgsCount)(RedisModuleCommandFilterCtx *fctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_CommandFilterArgGet)(RedisModuleCommandFilterCtx *fctx, int pos) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CommandFilterArgInsert)(RedisModuleCommandFilterCtx *fctx, int pos, RedisModuleString *arg) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_CommandFilterArgReplace)(RedisModuleCommandFilterCtx *fctx, int pos, RedisModuleString *arg) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_CommandFilterArgDelete)(RedisModuleCommandFilterCtx *fctx, int pos) REDISMODULE_ATTR;\nREDISMODULE_API unsigned long long (*RedisModule_CommandFilterGetClientId)(RedisModuleCommandFilterCtx *fctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_Fork)(RedisModuleForkDoneHandler cb, void *user_data) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SendChildHeartbeat)(double progress) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ExitFromChild)(int retcode) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_KillForkChild)(int child_pid) REDISMODULE_ATTR;\nREDISMODULE_API float (*RedisModule_GetUsedMemoryRatio)(void) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_MallocSize)(void* ptr) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_MallocUsableSize)(void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_MallocSizeString)(RedisModuleString* str) REDISMODULE_ATTR;\nREDISMODULE_API size_t (*RedisModule_MallocSizeDict)(RedisModuleDict* dict) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleUser * (*RedisModule_CreateModuleUser)(const char *name) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_FreeModuleUser)(RedisModuleUser *user) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_SetContextUser)(RedisModuleCtx *ctx, const RedisModuleUser *user) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleUser *(*RedisModule_GetContextUser)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString *(*RedisModule_GetUserUsername)(RedisModuleCtx *ctx, const RedisModuleUser *user) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetModuleUserACL)(RedisModuleUser *user, const char* acl) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetModuleUserACLString)(RedisModuleCtx * ctx, RedisModuleUser *user, const char* acl, RedisModuleString **error) 
REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_GetModuleUserACLString)(RedisModuleUser *user) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * (*RedisModule_GetCurrentUserName)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleUser * (*RedisModule_GetModuleUserFromUserName)(RedisModuleString *name) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ACLCheckCommandPermissions)(RedisModuleUser *user, RedisModuleString **argv, int argc) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ACLCheckKeyPermissions)(RedisModuleUser *user, RedisModuleString *key, int flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ACLCheckKeyPrefixPermissions)(RedisModuleUser *user, RedisModuleString *prefix, int flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ACLCheckChannelPermissions)(RedisModuleUser *user, RedisModuleString *ch, int literal) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ACLAddLogEntry)(RedisModuleCtx *ctx, RedisModuleUser *user, RedisModuleString *object, RedisModuleACLLogEntryReason reason) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ACLAddLogEntryByUserName)(RedisModuleCtx *ctx, RedisModuleString *user, RedisModuleString *object, RedisModuleACLLogEntryReason reason) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_AuthenticateClientWithACLUser)(RedisModuleCtx *ctx, const char *name, size_t len, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_AuthenticateClientWithUser)(RedisModuleCtx *ctx, RedisModuleUser *user, RedisModuleUserChangedFunc callback, void *privdata, uint64_t *client_id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DeauthenticateAndCloseClient)(RedisModuleCtx *ctx, uint64_t client_id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RedactClientCommandArgument)(RedisModuleCtx *ctx, int pos) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString * 
(*RedisModule_GetClientCertificate)(RedisModuleCtx *ctx, uint64_t id) REDISMODULE_ATTR;\nREDISMODULE_API int *(*RedisModule_GetCommandKeys)(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys) REDISMODULE_ATTR;\nREDISMODULE_API int *(*RedisModule_GetCommandKeysWithFlags)(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, int *num_keys, int **out_flags) REDISMODULE_ATTR;\nREDISMODULE_API const char *(*RedisModule_GetCurrentCommandName)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterDefragFunc)(RedisModuleCtx *ctx, RedisModuleDefragFunc func) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterDefragFunc2)(RedisModuleCtx *ctx, RedisModuleDefragFunc2 func) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterDefragCallbacks)(RedisModuleCtx *ctx, RedisModuleDefragFunc start, RedisModuleDefragFunc end) REDISMODULE_ATTR;\nREDISMODULE_API void *(*RedisModule_DefragAlloc)(RedisModuleDefragCtx *ctx, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API void *(*RedisModule_DefragAllocRaw)(RedisModuleDefragCtx *ctx, size_t size) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_DefragFreeRaw)(RedisModuleDefragCtx *ctx, void *ptr) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleString *(*RedisModule_DefragRedisModuleString)(RedisModuleDefragCtx *ctx, RedisModuleString *str) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleDict *(*RedisModule_DefragRedisModuleDict)(RedisModuleDefragCtx *ctx, RedisModuleDict *dict, RedisModuleDefragDictValueCallback valueCB, RedisModuleString **seekTo) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DefragShouldStop)(RedisModuleDefragCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DefragCursorSet)(RedisModuleDefragCtx *ctx, unsigned long cursor) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_DefragCursorGet)(RedisModuleDefragCtx *ctx, unsigned long *cursor) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetDbIdFromDefragCtx)(RedisModuleDefragCtx 
*ctx) REDISMODULE_ATTR;\nREDISMODULE_API const RedisModuleString * (*RedisModule_GetKeyNameFromDefragCtx)(RedisModuleDefragCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_EventLoopAdd)(int fd, int mask, RedisModuleEventLoopFunc func, void *user_data) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_EventLoopDel)(int fd, int mask) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_EventLoopAddOneShot)(RedisModuleEventLoopOneShotFunc func, void *user_data) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterBoolConfig)(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, RedisModuleConfigGetBoolFunc getfn, RedisModuleConfigSetBoolFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterNumericConfig)(RedisModuleCtx *ctx, const char *name, long long default_val, unsigned int flags, long long min, long long max, RedisModuleConfigGetNumericFunc getfn, RedisModuleConfigSetNumericFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterStringConfig)(RedisModuleCtx *ctx, const char *name, const char *default_val, unsigned int flags, RedisModuleConfigGetStringFunc getfn, RedisModuleConfigSetStringFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RegisterEnumConfig)(RedisModuleCtx *ctx, const char *name, int default_val, unsigned int flags, const char **enum_values, const int *int_values, int num_enum_vals, RedisModuleConfigGetEnumFunc getfn, RedisModuleConfigSetEnumFunc setfn, RedisModuleConfigApplyFunc applyfn, void *privdata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_LoadConfigs)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_LoadDefaultConfigs)(RedisModuleCtx *ctx) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleRdbStream *(*RedisModule_RdbStreamCreateFromFile)(const char *filename) 
REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_RdbStreamFree)(RedisModuleRdbStream *stream) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RdbLoad)(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_RdbSave)(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_GetInternalSecret)(RedisModuleCtx *ctx, size_t *len) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleConfigIterator * (*RedisModule_ConfigIteratorCreate)(RedisModuleCtx *ctx, const char *pattern) REDISMODULE_ATTR;\nREDISMODULE_API void (*RedisModule_ConfigIteratorRelease)(RedisModuleCtx *ctx, RedisModuleConfigIterator *iter) REDISMODULE_ATTR;\nREDISMODULE_API const char * (*RedisModule_ConfigIteratorNext)(RedisModuleConfigIterator *iter) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigGetType)(const char *name, RedisModuleConfigType *res) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigGet)(RedisModuleCtx *ctx, const char *name, RedisModuleString **res) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigGetBool)(RedisModuleCtx *ctx, const char *name, int *res) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigGetEnum)(RedisModuleCtx *ctx, const char *name, RedisModuleString **res) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigGetNumeric)(RedisModuleCtx *ctx, const char *name, long long *res) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigSet)(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigSetBool)(RedisModuleCtx *ctx, const char *name, int value, RedisModuleString **err) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ConfigSetEnum)(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err) REDISMODULE_ATTR;\nREDISMODULE_API int 
(*RedisModule_ConfigSetNumeric)(RedisModuleCtx *ctx, const char *name, long long value, RedisModuleString **err) REDISMODULE_ATTR;\nREDISMODULE_API RedisModuleKeyMetaClassId (*RedisModule_CreateKeyMetaClass)(RedisModuleCtx *ctx, const char *metaname, int metaver, RedisModuleKeyMetaClassConfig *conf) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_ReleaseKeyMetaClass)(RedisModuleKeyMetaClassId id) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_SetKeyMeta)(RedisModuleKeyMetaClassId id, RedisModuleKey *key, uint64_t metadata) REDISMODULE_ATTR;\nREDISMODULE_API int (*RedisModule_GetKeyMeta)(RedisModuleKeyMetaClassId id, RedisModuleKey *key, uint64_t *metadata) REDISMODULE_ATTR;\n\n#define RedisModule_IsAOFClient(id) ((id) == UINT64_MAX)\n\n/* This is included inline inside each Redis module. */\nstatic int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int apiver) REDISMODULE_ATTR_UNUSED;\nstatic int RedisModule_Init(RedisModuleCtx *ctx, const char *name, int ver, int apiver) {\n    void *getapifuncptr = ((void**)ctx)[0];\n    RedisModule_GetApi = (int (*)(const char *, void *)) (unsigned long)getapifuncptr;\n    REDISMODULE_GET_API(Alloc);\n    REDISMODULE_GET_API(TryAlloc);\n    REDISMODULE_GET_API(Calloc);\n    REDISMODULE_GET_API(TryCalloc);\n    REDISMODULE_GET_API(Free);\n    REDISMODULE_GET_API(Realloc);\n    REDISMODULE_GET_API(TryRealloc);\n    REDISMODULE_GET_API(Strdup);\n    REDISMODULE_GET_API(CreateCommand);\n    REDISMODULE_GET_API(GetCommand);\n    REDISMODULE_GET_API(CreateSubcommand);\n    REDISMODULE_GET_API(SetCommandInfo);\n    REDISMODULE_GET_API(SetCommandACLCategories);\n    REDISMODULE_GET_API(AddACLCategory);\n    REDISMODULE_GET_API(SetModuleAttribs);\n    REDISMODULE_GET_API(IsModuleNameBusy);\n    REDISMODULE_GET_API(WrongArity);\n    REDISMODULE_GET_API(ReplyWithLongLong);\n    REDISMODULE_GET_API(ReplyWithError);\n    REDISMODULE_GET_API(ReplyWithErrorFormat);\n    REDISMODULE_GET_API(ReplyWithSimpleString);\n 
   REDISMODULE_GET_API(ReplyWithArray);\n    REDISMODULE_GET_API(ReplyWithMap);\n    REDISMODULE_GET_API(ReplyWithSet);\n    REDISMODULE_GET_API(ReplyWithAttribute);\n    REDISMODULE_GET_API(ReplyWithNullArray);\n    REDISMODULE_GET_API(ReplyWithEmptyArray);\n    REDISMODULE_GET_API(ReplySetArrayLength);\n    REDISMODULE_GET_API(ReplySetMapLength);\n    REDISMODULE_GET_API(ReplySetSetLength);\n    REDISMODULE_GET_API(ReplySetAttributeLength);\n    REDISMODULE_GET_API(ReplySetPushLength);\n    REDISMODULE_GET_API(ReplyWithStringBuffer);\n    REDISMODULE_GET_API(ReplyWithCString);\n    REDISMODULE_GET_API(ReplyWithString);\n    REDISMODULE_GET_API(ReplyWithEmptyString);\n    REDISMODULE_GET_API(ReplyWithVerbatimString);\n    REDISMODULE_GET_API(ReplyWithVerbatimStringType);\n    REDISMODULE_GET_API(ReplyWithNull);\n    REDISMODULE_GET_API(ReplyWithBool);\n    REDISMODULE_GET_API(ReplyWithCallReply);\n    REDISMODULE_GET_API(ReplyWithDouble);\n    REDISMODULE_GET_API(ReplyWithBigNumber);\n    REDISMODULE_GET_API(ReplyWithLongDouble);\n    REDISMODULE_GET_API(GetSelectedDb);\n    REDISMODULE_GET_API(SelectDb);\n    REDISMODULE_GET_API(KeyExists);\n    REDISMODULE_GET_API(OpenKey);\n    REDISMODULE_GET_API(GetOpenKeyModesAll);\n    REDISMODULE_GET_API(CloseKey);\n    REDISMODULE_GET_API(KeyType);\n    REDISMODULE_GET_API(ValueLength);\n    REDISMODULE_GET_API(ListPush);\n    REDISMODULE_GET_API(ListPop);\n    REDISMODULE_GET_API(ListGet);\n    REDISMODULE_GET_API(ListSet);\n    REDISMODULE_GET_API(ListInsert);\n    REDISMODULE_GET_API(ListDelete);\n    REDISMODULE_GET_API(StringToLongLong);\n    REDISMODULE_GET_API(StringToULongLong);\n    REDISMODULE_GET_API(StringToDouble);\n    REDISMODULE_GET_API(StringToLongDouble);\n    REDISMODULE_GET_API(StringToStreamID);\n    REDISMODULE_GET_API(Call);\n    REDISMODULE_GET_API(CallReplyProto);\n    REDISMODULE_GET_API(FreeCallReply);\n    REDISMODULE_GET_API(CallReplyInteger);\n    REDISMODULE_GET_API(CallReplyDouble);\n    
REDISMODULE_GET_API(CallReplyBool);\n    REDISMODULE_GET_API(CallReplyBigNumber);\n    REDISMODULE_GET_API(CallReplyVerbatim);\n    REDISMODULE_GET_API(CallReplySetElement);\n    REDISMODULE_GET_API(CallReplyMapElement);\n    REDISMODULE_GET_API(CallReplyAttributeElement);\n    REDISMODULE_GET_API(CallReplyPromiseSetUnblockHandler);\n    REDISMODULE_GET_API(CallReplyPromiseAbort);\n    REDISMODULE_GET_API(CallReplyAttribute);\n    REDISMODULE_GET_API(CallReplyType);\n    REDISMODULE_GET_API(CallReplyLength);\n    REDISMODULE_GET_API(CallReplyArrayElement);\n    REDISMODULE_GET_API(CallReplyStringPtr);\n    REDISMODULE_GET_API(CreateStringFromCallReply);\n    REDISMODULE_GET_API(CreateString);\n    REDISMODULE_GET_API(CreateStringFromLongLong);\n    REDISMODULE_GET_API(CreateStringFromULongLong);\n    REDISMODULE_GET_API(CreateStringFromDouble);\n    REDISMODULE_GET_API(CreateStringFromLongDouble);\n    REDISMODULE_GET_API(CreateStringFromString);\n    REDISMODULE_GET_API(CreateStringFromStreamID);\n    REDISMODULE_GET_API(CreateStringPrintf);\n    REDISMODULE_GET_API(FreeString);\n    REDISMODULE_GET_API(StringPtrLen);\n    REDISMODULE_GET_API(AutoMemory);\n    REDISMODULE_GET_API(Replicate);\n    REDISMODULE_GET_API(ReplicateVerbatim);\n    REDISMODULE_GET_API(DeleteKey);\n    REDISMODULE_GET_API(UnlinkKey);\n    REDISMODULE_GET_API(StringSet);\n    REDISMODULE_GET_API(StringDMA);\n    REDISMODULE_GET_API(StringTruncate);\n    REDISMODULE_GET_API(GetExpire);\n    REDISMODULE_GET_API(SetExpire);\n    REDISMODULE_GET_API(GetAbsExpire);\n    REDISMODULE_GET_API(SetAbsExpire);\n    REDISMODULE_GET_API(ResetDataset);\n    REDISMODULE_GET_API(DbSize);\n    REDISMODULE_GET_API(RandomKey);\n    REDISMODULE_GET_API(ZsetAdd);\n    REDISMODULE_GET_API(ZsetIncrby);\n    REDISMODULE_GET_API(ZsetScore);\n    REDISMODULE_GET_API(ZsetRem);\n    REDISMODULE_GET_API(ZsetRangeStop);\n    REDISMODULE_GET_API(ZsetFirstInScoreRange);\n    REDISMODULE_GET_API(ZsetLastInScoreRange);\n    
REDISMODULE_GET_API(ZsetFirstInLexRange);\n    REDISMODULE_GET_API(ZsetLastInLexRange);\n    REDISMODULE_GET_API(ZsetRangeCurrentElement);\n    REDISMODULE_GET_API(ZsetRangeNext);\n    REDISMODULE_GET_API(ZsetRangePrev);\n    REDISMODULE_GET_API(ZsetRangeEndReached);\n    REDISMODULE_GET_API(HashSet);\n    REDISMODULE_GET_API(HashGet);\n    REDISMODULE_GET_API(HashFieldMinExpire);\n    REDISMODULE_GET_API(StreamAdd);\n    REDISMODULE_GET_API(StreamDelete);\n    REDISMODULE_GET_API(StreamIteratorStart);\n    REDISMODULE_GET_API(StreamIteratorStop);\n    REDISMODULE_GET_API(StreamIteratorNextID);\n    REDISMODULE_GET_API(StreamIteratorNextField);\n    REDISMODULE_GET_API(StreamIteratorDelete);\n    REDISMODULE_GET_API(StreamTrimByLength);\n    REDISMODULE_GET_API(StreamTrimByID);\n    REDISMODULE_GET_API(IsKeysPositionRequest);\n    REDISMODULE_GET_API(KeyAtPos);\n    REDISMODULE_GET_API(KeyAtPosWithFlags);\n    REDISMODULE_GET_API(IsChannelsPositionRequest);\n    REDISMODULE_GET_API(ChannelAtPosWithFlags);\n    REDISMODULE_GET_API(GetClientId);\n    REDISMODULE_GET_API(GetClientUserNameById);\n    REDISMODULE_GET_API(GetContextFlags);\n    REDISMODULE_GET_API(AvoidReplicaTraffic);\n    REDISMODULE_GET_API(PoolAlloc);\n    REDISMODULE_GET_API(CreateDataType);\n    REDISMODULE_GET_API(ModuleTypeSetValue);\n    REDISMODULE_GET_API(ModuleTypeReplaceValue);\n    REDISMODULE_GET_API(ModuleTypeGetType);\n    REDISMODULE_GET_API(ModuleTypeGetValue);\n    REDISMODULE_GET_API(IsIOError);\n    REDISMODULE_GET_API(SetModuleOptions);\n    REDISMODULE_GET_API(SignalModifiedKey);\n    REDISMODULE_GET_API(SaveUnsigned);\n    REDISMODULE_GET_API(LoadUnsigned);\n    REDISMODULE_GET_API(SaveSigned);\n    REDISMODULE_GET_API(LoadSigned);\n    REDISMODULE_GET_API(SaveString);\n    REDISMODULE_GET_API(SaveStringBuffer);\n    REDISMODULE_GET_API(LoadString);\n    REDISMODULE_GET_API(LoadStringBuffer);\n    REDISMODULE_GET_API(SaveDouble);\n    REDISMODULE_GET_API(LoadDouble);\n    
REDISMODULE_GET_API(SaveFloat);\n    REDISMODULE_GET_API(LoadFloat);\n    REDISMODULE_GET_API(SaveLongDouble);\n    REDISMODULE_GET_API(LoadLongDouble);\n    REDISMODULE_GET_API(SaveDataTypeToString);\n    REDISMODULE_GET_API(LoadDataTypeFromString);\n    REDISMODULE_GET_API(LoadDataTypeFromStringEncver);\n    REDISMODULE_GET_API(EmitAOF);\n    REDISMODULE_GET_API(Log);\n    REDISMODULE_GET_API(LogIOError);\n    REDISMODULE_GET_API(_Assert);\n    REDISMODULE_GET_API(LatencyAddSample);\n    REDISMODULE_GET_API(StringAppendBuffer);\n    REDISMODULE_GET_API(TrimStringAllocation);\n    REDISMODULE_GET_API(RetainString);\n    REDISMODULE_GET_API(HoldString);\n    REDISMODULE_GET_API(StringCompare);\n    REDISMODULE_GET_API(GetContextFromIO);\n    REDISMODULE_GET_API(GetKeyNameFromIO);\n    REDISMODULE_GET_API(GetKeyNameFromModuleKey);\n    REDISMODULE_GET_API(GetDbIdFromModuleKey);\n    REDISMODULE_GET_API(GetDbIdFromIO);\n    REDISMODULE_GET_API(GetKeyNameFromOptCtx);\n    REDISMODULE_GET_API(GetToKeyNameFromOptCtx);\n    REDISMODULE_GET_API(GetDbIdFromOptCtx);\n    REDISMODULE_GET_API(GetToDbIdFromOptCtx);\n    REDISMODULE_GET_API(Milliseconds);\n    REDISMODULE_GET_API(MonotonicMicroseconds);\n    REDISMODULE_GET_API(Microseconds);\n    REDISMODULE_GET_API(CachedMicroseconds);\n    REDISMODULE_GET_API(DigestAddStringBuffer);\n    REDISMODULE_GET_API(DigestAddLongLong);\n    REDISMODULE_GET_API(DigestEndSequence);\n    REDISMODULE_GET_API(GetKeyNameFromDigest);\n    REDISMODULE_GET_API(GetDbIdFromDigest);\n    REDISMODULE_GET_API(CreateDict);\n    REDISMODULE_GET_API(FreeDict);\n    REDISMODULE_GET_API(DictSize);\n    REDISMODULE_GET_API(DictSetC);\n    REDISMODULE_GET_API(DictReplaceC);\n    REDISMODULE_GET_API(DictSet);\n    REDISMODULE_GET_API(DictReplace);\n    REDISMODULE_GET_API(DictGetC);\n    REDISMODULE_GET_API(DictGet);\n    REDISMODULE_GET_API(DictDelC);\n    REDISMODULE_GET_API(DictDel);\n    REDISMODULE_GET_API(DictIteratorStartC);\n    
REDISMODULE_GET_API(DictIteratorStart);\n    REDISMODULE_GET_API(DictIteratorStop);\n    REDISMODULE_GET_API(DictIteratorReseekC);\n    REDISMODULE_GET_API(DictIteratorReseek);\n    REDISMODULE_GET_API(DictNextC);\n    REDISMODULE_GET_API(DictPrevC);\n    REDISMODULE_GET_API(DictNext);\n    REDISMODULE_GET_API(DictPrev);\n    REDISMODULE_GET_API(DictCompare);\n    REDISMODULE_GET_API(DictCompareC);\n    REDISMODULE_GET_API(RegisterInfoFunc);\n    REDISMODULE_GET_API(RegisterAuthCallback);\n    REDISMODULE_GET_API(InfoAddSection);\n    REDISMODULE_GET_API(InfoBeginDictField);\n    REDISMODULE_GET_API(InfoEndDictField);\n    REDISMODULE_GET_API(InfoAddFieldString);\n    REDISMODULE_GET_API(InfoAddFieldCString);\n    REDISMODULE_GET_API(InfoAddFieldDouble);\n    REDISMODULE_GET_API(InfoAddFieldLongLong);\n    REDISMODULE_GET_API(InfoAddFieldULongLong);\n    REDISMODULE_GET_API(GetServerInfo);\n    REDISMODULE_GET_API(FreeServerInfo);\n    REDISMODULE_GET_API(ServerInfoGetField);\n    REDISMODULE_GET_API(ServerInfoGetFieldC);\n    REDISMODULE_GET_API(ServerInfoGetFieldSigned);\n    REDISMODULE_GET_API(ServerInfoGetFieldUnsigned);\n    REDISMODULE_GET_API(ServerInfoGetFieldDouble);\n    REDISMODULE_GET_API(GetClientInfoById);\n    REDISMODULE_GET_API(GetClientNameById);\n    REDISMODULE_GET_API(SetClientNameById);\n    REDISMODULE_GET_API(PublishMessage);\n    REDISMODULE_GET_API(PublishMessageShard);\n    REDISMODULE_GET_API(SubscribeToServerEvent);\n    REDISMODULE_GET_API(SetLRU);\n    REDISMODULE_GET_API(GetLRU);\n    REDISMODULE_GET_API(SetLFU);\n    REDISMODULE_GET_API(GetLFU);\n    REDISMODULE_GET_API(BlockClientOnKeys);\n    REDISMODULE_GET_API(BlockClientOnKeysWithFlags);\n    REDISMODULE_GET_API(SignalKeyAsReady);\n    REDISMODULE_GET_API(GetBlockedClientReadyKey);\n    REDISMODULE_GET_API(ScanCursorCreate);\n    REDISMODULE_GET_API(ScanCursorRestart);\n    REDISMODULE_GET_API(ScanCursorDestroy);\n    REDISMODULE_GET_API(Scan);\n    
REDISMODULE_GET_API(ScanKey);\n    REDISMODULE_GET_API(GetContextFlagsAll);\n    REDISMODULE_GET_API(GetModuleOptionsAll);\n    REDISMODULE_GET_API(GetKeyspaceNotificationFlagsAll);\n    REDISMODULE_GET_API(IsSubEventSupported);\n    REDISMODULE_GET_API(GetServerVersion);\n    REDISMODULE_GET_API(GetTypeMethodVersion);\n    REDISMODULE_GET_API(Yield);\n    REDISMODULE_GET_API(GetThreadSafeContext);\n    REDISMODULE_GET_API(GetDetachedThreadSafeContext);\n    REDISMODULE_GET_API(FreeThreadSafeContext);\n    REDISMODULE_GET_API(ThreadSafeContextLock);\n    REDISMODULE_GET_API(ThreadSafeContextTryLock);\n    REDISMODULE_GET_API(ThreadSafeContextUnlock);\n    REDISMODULE_GET_API(BlockClient);\n    REDISMODULE_GET_API(BlockClientGetPrivateData);\n    REDISMODULE_GET_API(BlockClientSetPrivateData);\n    REDISMODULE_GET_API(BlockClientOnAuth);\n    REDISMODULE_GET_API(UnblockClient);\n    REDISMODULE_GET_API(IsBlockedReplyRequest);\n    REDISMODULE_GET_API(IsBlockedTimeoutRequest);\n    REDISMODULE_GET_API(GetBlockedClientPrivateData);\n    REDISMODULE_GET_API(GetBlockedClientHandle);\n    REDISMODULE_GET_API(AbortBlock);\n    REDISMODULE_GET_API(BlockedClientMeasureTimeStart);\n    REDISMODULE_GET_API(BlockedClientMeasureTimeEnd);\n    REDISMODULE_GET_API(SetDisconnectCallback);\n    REDISMODULE_GET_API(SubscribeToKeyspaceEvents);\n    REDISMODULE_GET_API(UnsubscribeFromKeyspaceEvents);\n    REDISMODULE_GET_API(SubscribeToKeyspaceEventsWithSubkeys);\n    REDISMODULE_GET_API(UnsubscribeFromKeyspaceEventsWithSubkeys);\n    REDISMODULE_GET_API(AddPostNotificationJob);\n    REDISMODULE_GET_API(NotifyKeyspaceEvent);\n    REDISMODULE_GET_API(NotifyKeyspaceEventWithSubkeys);\n    REDISMODULE_GET_API(GetNotifyKeyspaceEvents);\n    REDISMODULE_GET_API(BlockedClientDisconnected);\n    REDISMODULE_GET_API(RegisterClusterMessageReceiver);\n    REDISMODULE_GET_API(SendClusterMessage);\n    REDISMODULE_GET_API(GetClusterNodeInfo);\n    REDISMODULE_GET_API(GetClusterNodesList);\n    
REDISMODULE_GET_API(FreeClusterNodesList);\n    REDISMODULE_GET_API(CreateTimer);\n    REDISMODULE_GET_API(StopTimer);\n    REDISMODULE_GET_API(GetTimerInfo);\n    REDISMODULE_GET_API(GetMyClusterID);\n    REDISMODULE_GET_API(GetClusterSize);\n    REDISMODULE_GET_API(GetRandomBytes);\n    REDISMODULE_GET_API(GetRandomHexChars);\n    REDISMODULE_GET_API(SetClusterFlags);\n    REDISMODULE_GET_API(ClusterDisableTrim);\n    REDISMODULE_GET_API(ClusterEnableTrim);\n    REDISMODULE_GET_API(ClusterKeySlot);\n    REDISMODULE_GET_API(ClusterKeySlotC);\n    REDISMODULE_GET_API(ClusterCanonicalKeyNameInSlot);\n    REDISMODULE_GET_API(ClusterCanAccessKeysInSlot);\n    REDISMODULE_GET_API(ClusterPropagateForSlotMigration);\n    REDISMODULE_GET_API(ClusterGetLocalSlotRanges);\n    REDISMODULE_GET_API(ClusterFreeSlotRanges);\n    REDISMODULE_GET_API(ExportSharedAPI);\n    REDISMODULE_GET_API(GetSharedAPI);\n    REDISMODULE_GET_API(RegisterCommandFilter);\n    REDISMODULE_GET_API(UnregisterCommandFilter);\n    REDISMODULE_GET_API(CommandFilterArgsCount);\n    REDISMODULE_GET_API(CommandFilterArgGet);\n    REDISMODULE_GET_API(CommandFilterArgInsert);\n    REDISMODULE_GET_API(CommandFilterArgReplace);\n    REDISMODULE_GET_API(CommandFilterArgDelete);\n    REDISMODULE_GET_API(CommandFilterGetClientId);\n    REDISMODULE_GET_API(Fork);\n    REDISMODULE_GET_API(SendChildHeartbeat);\n    REDISMODULE_GET_API(ExitFromChild);\n    REDISMODULE_GET_API(KillForkChild);\n    REDISMODULE_GET_API(GetUsedMemoryRatio);\n    REDISMODULE_GET_API(MallocSize);\n    REDISMODULE_GET_API(MallocUsableSize);\n    REDISMODULE_GET_API(MallocSizeString);\n    REDISMODULE_GET_API(MallocSizeDict);\n    REDISMODULE_GET_API(CreateModuleUser);\n    REDISMODULE_GET_API(FreeModuleUser);\n    REDISMODULE_GET_API(SetContextUser);\n    REDISMODULE_GET_API(GetContextUser);\n    REDISMODULE_GET_API(GetUserUsername);\n    REDISMODULE_GET_API(SetModuleUserACL);\n    REDISMODULE_GET_API(SetModuleUserACLString);\n    
REDISMODULE_GET_API(GetModuleUserACLString);\n    REDISMODULE_GET_API(GetCurrentUserName);\n    REDISMODULE_GET_API(GetModuleUserFromUserName);\n    REDISMODULE_GET_API(ACLCheckCommandPermissions);\n    REDISMODULE_GET_API(ACLCheckKeyPermissions);\n    REDISMODULE_GET_API(ACLCheckKeyPrefixPermissions);\n    REDISMODULE_GET_API(ACLCheckChannelPermissions);\n    REDISMODULE_GET_API(ACLAddLogEntry);\n    REDISMODULE_GET_API(ACLAddLogEntryByUserName);\n    REDISMODULE_GET_API(DeauthenticateAndCloseClient);\n    REDISMODULE_GET_API(AuthenticateClientWithACLUser);\n    REDISMODULE_GET_API(AuthenticateClientWithUser);\n    REDISMODULE_GET_API(RedactClientCommandArgument);\n    REDISMODULE_GET_API(GetClientCertificate);\n    REDISMODULE_GET_API(GetCommandKeys);\n    REDISMODULE_GET_API(GetCommandKeysWithFlags);\n    REDISMODULE_GET_API(GetCurrentCommandName);\n    REDISMODULE_GET_API(RegisterDefragFunc);\n    REDISMODULE_GET_API(RegisterDefragFunc2);\n    REDISMODULE_GET_API(RegisterDefragCallbacks);\n    REDISMODULE_GET_API(DefragAlloc);\n    REDISMODULE_GET_API(DefragAllocRaw);\n    REDISMODULE_GET_API(DefragFreeRaw);\n    REDISMODULE_GET_API(DefragRedisModuleString);\n    REDISMODULE_GET_API(DefragRedisModuleDict);\n    REDISMODULE_GET_API(DefragShouldStop);\n    REDISMODULE_GET_API(DefragCursorSet);\n    REDISMODULE_GET_API(DefragCursorGet);\n    REDISMODULE_GET_API(GetKeyNameFromDefragCtx);\n    REDISMODULE_GET_API(GetDbIdFromDefragCtx);\n    REDISMODULE_GET_API(EventLoopAdd);\n    REDISMODULE_GET_API(EventLoopDel);\n    REDISMODULE_GET_API(EventLoopAddOneShot);\n    REDISMODULE_GET_API(RegisterBoolConfig);\n    REDISMODULE_GET_API(RegisterNumericConfig);\n    REDISMODULE_GET_API(RegisterStringConfig);\n    REDISMODULE_GET_API(RegisterEnumConfig);\n    REDISMODULE_GET_API(LoadConfigs);\n    REDISMODULE_GET_API(LoadDefaultConfigs);\n    REDISMODULE_GET_API(RdbStreamCreateFromFile);\n    REDISMODULE_GET_API(RdbStreamFree);\n    REDISMODULE_GET_API(RdbLoad);\n    
REDISMODULE_GET_API(RdbSave);\n    REDISMODULE_GET_API(GetInternalSecret);\n    REDISMODULE_GET_API(ConfigIteratorCreate);\n    REDISMODULE_GET_API(ConfigIteratorRelease);\n    REDISMODULE_GET_API(ConfigIteratorNext);\n    REDISMODULE_GET_API(ConfigGetType);\n    REDISMODULE_GET_API(ConfigGet);\n    REDISMODULE_GET_API(ConfigGetBool);\n    REDISMODULE_GET_API(ConfigGetEnum);\n    REDISMODULE_GET_API(ConfigGetNumeric);\n    REDISMODULE_GET_API(ConfigSet);\n    REDISMODULE_GET_API(ConfigSetBool);\n    REDISMODULE_GET_API(ConfigSetEnum);\n    REDISMODULE_GET_API(ConfigSetNumeric);\n    REDISMODULE_GET_API(CreateKeyMetaClass);\n    REDISMODULE_GET_API(ReleaseKeyMetaClass);\n\n    REDISMODULE_GET_API(SetKeyMeta);\n    REDISMODULE_GET_API(GetKeyMeta);\n\n    if (RedisModule_IsModuleNameBusy && RedisModule_IsModuleNameBusy(name)) return REDISMODULE_ERR;\n    RedisModule_SetModuleAttribs(ctx,name,ver,apiver);\n    return REDISMODULE_OK;\n}\n\n#define RedisModule_Assert(_e) ((_e)?(void)0 : (RedisModule__Assert(#_e,__FILE__,__LINE__),exit(1)))\n\n#define RMAPI_FUNC_SUPPORTED(func) (func != NULL)\n\n#endif /* REDISMODULE_CORE */\n#endif /* REDISMODULE_H */\n"
  },
  {
    "path": "src/release.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* Every time the Redis Git SHA1 or Dirty status changes only this small\n * file is recompiled, as we access this information in all the other\n * files using this functions. */\n\n#include <string.h>\n#include <stdio.h>\n\n#include \"release.h\"\n#include \"crc64.h\"\n\nchar *redisGitSHA1(void) {\n    return REDIS_GIT_SHA1;\n}\n\nchar *redisGitDirty(void) {\n    return REDIS_GIT_DIRTY;\n}\n\nconst char *redisBuildIdRaw(void) {\n    return REDIS_BUILD_ID_RAW;\n}\n\nuint64_t redisBuildId(void) {\n    char *buildid = REDIS_BUILD_ID_RAW;\n\n    return crc64(0,(unsigned char*)buildid,strlen(buildid));\n}\n\n/* Return a cached value of the build string in order to avoid recomputing\n * and converting it in hex every time: this string is shown in the INFO\n * output that should be fast. */\nchar *redisBuildIdString(void) {\n    static char buf[32];\n    static int cached = 0;\n    if (!cached) {\n        snprintf(buf,sizeof(buf),\"%llx\",(unsigned long long) redisBuildId());\n        cached = 1;\n    }\n    return buf;\n}\n"
  },
  {
    "path": "src/replication.c",
    "content": "/* Asynchronous replication implementation.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/*\n * replication.c - Replication Management\n *\n * This file contains the implementation of Redis's replication logic, which\n * enables data synchronization between master and replica instances.\n * It handles:\n * - Master-to-replica synchronization\n * - Full and partial resynchronizations\n * - Replication backlog management\n * - State machines for replica operations\n * - RDB Channel for Full Sync  (lookup \"rdb channel for full sync\")\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n#include \"cluster_slot_stats.h\"\n#include \"bio.h\"\n#include \"functions.h\"\n#include \"connection.h\"\n#include \"cluster_asm.h\"\n\n#include <memory.h>\n#include <sys/time.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n\nvoid replicationDiscardCachedMaster(void);\nvoid replicationResurrectCachedMaster(connection *conn);\nvoid replicationSendAck(void);\nint replicaPutOnline(client *slave);\nvoid replicaStartCommandStream(client *slave);\nint cancelReplicationHandshake(int reconnect);\nstatic void rdbChannelFullSyncWithMaster(connection *conn);\nstatic int rdbChannelAbort(void);\nstatic void rdbChannelBufferReplData(connection *conn);\nstatic void rdbChannelReplDataBufInit(void);\nstatic void rdbChannelStreamReplDataToDb(void);\nstatic void rdbChannelCleanup(void);\n\n/* We take a global flag to remember if this instance generated an RDB\n * because of replication, so that we can remove the RDB 
file in case\n * the instance is configured to have no persistence. */\nint RDBGeneratedByReplication = 0;\n\n\n/* A reference to the diskless loading rio, to abort it asynchronously. It's\n * needed for rdbchannel replication. While loading from the rdbchannel\n * connection, we may yield back to the eventloop. If the main channel\n * connection detects a network problem, we want to abort loading. It calls\n * rioAbort() in this case, so the next rioRead() from the rdbchannel\n * connection will return an error to cancel loading safely. */\nstatic rio *disklessLoadingRio = NULL;\n\n/* --------------------------- Utility functions ---------------------------- */\n\n/* Returns 1 if the replica is an rdbchannel connection and there is an\n * associated main channel slave for it. */\nint replicationCheckHasMainChannel(client *replica) {\n    if (!(replica->flags & CLIENT_REPL_RDB_CHANNEL) ||\n        !replica->main_ch_client_id ||\n        lookupClientByID(replica->main_ch_client_id) == NULL)\n    {\n        return 0;\n    }\n    return 1;\n}\n\n/* During rdb channel replication, the replica opens two connections. From the\n * master's POV, these connections are distinct replicas in server.slaves. This\n * function counts associated replicas as one and returns the logical replica\n * count. */\nunsigned long replicationLogicalReplicaCount(void) {\n    unsigned long count = 0;\n    listNode *ln;\n    listIter li;\n\n    listRewind(server.slaves,&li);\n    while ((ln = listNext(&li))) {\n        client *replica = listNodeValue(ln);\n        if (!replicationCheckHasMainChannel(replica))\n            count++;\n    }\n    return count;\n}\n\nint replicaFromIOThreadHasPendingRead(client *c) {\n    serverAssert(c->tid != IOTHREAD_MAIN_THREAD_ID);\n\n    int pending_read;\n    atomicGetWithSync(c->pending_read, pending_read);\n    return pending_read;\n}\n\n/* Send each replica to its respective IO thread if it has pending reads or\n * writes. 
Otherwise it remains in main thread so it can check for new data in\n * the replication buffer ASAP. */\nvoid putReplicasInPendingClientsToIOThreads(void) {\n    if (server.io_threads_num <= 1) return;\n\n    serverAssert(pthread_equal(pthread_self(), server.main_thread_id));\n\n    listIter li;\n    listNode *ln;\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *replica = listNodeValue(ln);\n\n        /* We only care about replicas that need to run on IO thread but are\n         * currently in main */\n        if (replica->tid == IOTHREAD_MAIN_THREAD_ID ||\n            replica->running_tid != IOTHREAD_MAIN_THREAD_ID)\n        {\n            continue;\n        }\n\n        /* Skip the replica if it's scheduled for close */\n        if (replica->flags & CLIENT_CLOSE_ASAP) continue;\n\n        /* The call to clientHasPendingReplies may seem redundant but in the\n         * case of replica being in IO thread we can have the following case:\n         * replica gets back to main thread after sending the repl buffer it\n         * knows about. In the mean time main thread has accumulated new repl\n         * data. In that case the replica's client wouldn't have been put in\n         * the pending write queue but will still have new repl data it needs to\n         * send, so we make sure to check for that and send it back to IO thread\n         * if so. On the other hand if replica gets back to main thread before\n         * any new repl data has accumulated then after a new cmd is propagated\n         * the replica will be put in the pending write queue as usual so we\n         * need to check for that also.\n         * In addition, if the replica client has pending read events, we should\n         * also send them to the IO thread. 
*/\n        if (replica->flags & CLIENT_PENDING_WRITE ||\n            clientHasPendingReplies(replica) ||\n            replicaFromIOThreadHasPendingRead(replica))\n        {\n            enqueuePendingClienstToIOThreads(replica);\n        }\n    }\n}\n\n/* Run some cron tasks for a connected master client. Return 1 when the client\n * is freed, 0 otherwise. */\nint replicationCronRunMasterClient(void) {\n    if (!server.masterhost || !server.master) return 0;\n\n    if (server.master->running_tid != IOTHREAD_MAIN_THREAD_ID) return 0;\n\n    /* Timed out master when we are an already connected slave? */\n    if (server.repl_state == REPL_STATE_CONNECTED &&\n        (time(NULL)-server.master->lastinteraction) > server.repl_timeout)\n    {\n        serverLog(LL_WARNING,\"MASTER timeout: no data nor PING received...\");\n        freeClient(server.master);\n        return 1;\n    }\n\n    /* Send ACK to master from time to time.\n     * Note that we do not send periodic acks to masters that don't\n     * support PSYNC and replication offsets. */\n    if (!(server.master->flags & CLIENT_PRE_PSYNC))\n        replicationSendAck();\n\n    return 0;\n}\n\nConnectionType *connTypeOfReplication(void) {\n    if (server.tls_replication) {\n        return connectionTypeTls();\n    }\n\n    return connectionTypeTcp();\n}\n\n/* Return the pointer to a string representing the slave ip:listening_port\n * pair. Mostly useful for logging, since we want to log a slave using its\n * IP address and its listening port which is more clear for the user, for\n * example: \"Closing connection with replica 10.1.2.3:6380\". */\nchar *replicationGetSlaveName(client *c) {\n    static char buf[NET_HOST_PORT_STR_LEN];\n    char ip[NET_IP_STR_LEN];\n\n    ip[0] = '\\0';\n    buf[0] = '\\0';\n    if (c->slave_addr ||\n        connAddrPeerName(c->conn,ip,sizeof(ip),NULL) != -1)\n    {\n        char *addr = c->slave_addr ? 
c->slave_addr : ip;\n        if (c->slave_listening_port)\n            formatAddr(buf,sizeof(buf),addr,c->slave_listening_port);\n        else\n            snprintf(buf,sizeof(buf),\"%s:<unknown-replica-port>\",addr);\n    } else {\n        snprintf(buf,sizeof(buf),\"client id #%llu\",\n            (unsigned long long) c->id);\n    }\n    return buf;\n}\n\n/* Plain unlink() can block for quite some time in order to actually apply\n * the file deletion to the filesystem. This call removes the file in a\n * background thread instead. We actually just do close() in the thread,\n * by using the fact that if there is another instance of the same file open,\n * the foreground unlink() will only remove the fs name, and deleting the\n * file's storage space will only happen once the last reference is lost. */\nint bg_unlink(const char *filename) {\n    int fd = open(filename,O_RDONLY|O_NONBLOCK);\n    if (fd == -1) {\n        /* Can't open the file? Fall back to unlinking in the main thread. */\n        return unlink(filename);\n    } else {\n        /* The following unlink() removes the name but doesn't free the\n         * file contents because a process still has it open. */\n        int retval = unlink(filename);\n        if (retval == -1) {\n            /* If we got an unlink error, we just return it, closing the\n             * new reference we have to the file. */\n            int old_errno = errno;\n            close(fd);  /* This would overwrite our errno. So we saved it. */\n            errno = old_errno;\n            return -1;\n        }\n        bioCreateCloseJob(fd, 0, 0);\n        return 0; /* Success. 
*/\n    }\n}\n\n/* ---------------------------------- MASTER -------------------------------- */\n\nvoid createReplicationBacklog(void) {\n    serverAssert(server.repl_backlog == NULL);\n    server.repl_backlog = zmalloc(sizeof(replBacklog));\n    server.repl_backlog->ref_repl_buf_node = NULL;\n    server.repl_backlog->unindexed_count = 0;\n    server.repl_backlog->blocks_index = raxNew();\n    server.repl_backlog->histlen = 0;\n    /* We don't have any data inside our buffer, but virtually the first\n     * byte we have is the next byte that will be generated for the\n     * replication stream. */\n    server.repl_backlog->offset = server.master_repl_offset+1;\n}\n\n/* This function is called when the user modifies the replication backlog\n * size at runtime. It is up to the function to resize the buffer and set it up\n * so that it contains the same data as the previous one (possibly less data,\n * but the most recent bytes, or the same data and more free space in case the\n * buffer is enlarged). */\nvoid resizeReplicationBacklog(void) {\n    if (server.repl_backlog_size < CONFIG_REPL_BACKLOG_MIN_SIZE)\n        server.repl_backlog_size = CONFIG_REPL_BACKLOG_MIN_SIZE;\n    if (server.repl_backlog)\n        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n}\n\nvoid freeReplicationBacklog(void) {\n    serverAssert(listLength(server.slaves) == 0);\n    if (server.repl_backlog == NULL) return;\n\n    /* Decrease the start buffer node reference count. */\n    if (server.repl_backlog->ref_repl_buf_node) {\n        replBufBlock *o = listNodeValue(\n            server.repl_backlog->ref_repl_buf_node);\n        serverAssert(o->refcount == 1); /* Last reference. */\n        o->refcount--;\n    }\n\n    /* Replication buffer blocks are completely released when we free the\n     * backlog, since the backlog is released only when there are no replicas\n     * and the backlog keeps the last reference of all blocks. 
*/\n    freeReplicationBacklogRefMemAsync(server.repl_buffer_blocks,\n                            server.repl_backlog->blocks_index);\n    resetReplicationBuffer();\n    zfree(server.repl_backlog);\n    server.repl_backlog = NULL;\n}\n\n/* To make searching for an offset in the replication buffer blocks fast\n * when replicas ask for a partial resynchronization, we create one index\n * block every REPL_BACKLOG_INDEX_PER_BLOCKS blocks. */\nvoid createReplicationBacklogIndex(listNode *ln) {\n    server.repl_backlog->unindexed_count++;\n    if (server.repl_backlog->unindexed_count >= REPL_BACKLOG_INDEX_PER_BLOCKS) {\n        replBufBlock *o = listNodeValue(ln);\n        uint64_t encoded_offset = htonu64(o->repl_offset);\n        raxInsert(server.repl_backlog->blocks_index,\n                  (unsigned char*)&encoded_offset, sizeof(uint64_t),\n                  ln, NULL);\n        server.repl_backlog->unindexed_count = 0;\n    }\n}\n\n/* Rebase the replication buffer blocks' offsets, since the initially\n * set offset starts from 0 when the master restarts. */\nvoid rebaseReplicationBuffer(long long base_repl_offset) {\n    raxFree(server.repl_backlog->blocks_index);\n    server.repl_backlog->blocks_index = raxNew();\n    server.repl_backlog->unindexed_count = 0;\n\n    listIter li;\n    listNode *ln;\n    listRewind(server.repl_buffer_blocks, &li);\n    while ((ln = listNext(&li))) {\n        replBufBlock *o = listNodeValue(ln);\n        o->repl_offset += base_repl_offset;\n        createReplicationBacklogIndex(ln);\n    }\n}\n\nvoid resetReplicationBuffer(void) {\n    server.repl_buffer_mem = 0;\n    server.repl_buffer_blocks = listCreate();\n    listSetFreeMethod(server.repl_buffer_blocks, zfree);\n}\n\nint canFeedReplicaReplBuffer(client *replica) {\n    /* Don't feed replicas that only want the RDB or main channels of migration\n     * destinations which need a filtered stream for migrating slot ranges. 
*/\n    if (replica->flags & CLIENT_REPL_RDBONLY ||\n        replica->flags & CLIENT_ASM_MIGRATING) return 0;\n\n    /* Don't feed replicas that are still waiting for BGSAVE to start. */\n    if (replica->replstate == SLAVE_STATE_WAIT_BGSAVE_START ||\n        replica->replstate == SLAVE_STATE_WAIT_RDB_CHANNEL) return 0;\n\n    /* Don't feed replicas that are going to be closed ASAP. */\n    if (replica->flags & CLIENT_CLOSE_ASAP) return 0;\n\n    return 1;\n}\n\n/* Create the replication backlog if needed. */\nvoid createReplicationBacklogIfNeeded(void) {\n    if (listLength(server.slaves) == 1 && server.repl_backlog == NULL) {\n        /* When we create the backlog from scratch, we always use a new\n         * replication ID and clear the ID2, since there is no valid\n         * past history. */\n        changeReplicationId();\n        clearReplicationId2();\n        createReplicationBacklog();\n        serverLog(LL_NOTICE,\"Replication backlog created, my new \"\n                            \"replication IDs are '%s' and '%s'\",\n                            server.replid, server.replid2);\n    }\n}\n\n/* Similar to 'prepareClientToWrite', note that we must call this function\n * before feeding the replication stream into the global replication buffer,\n * since clientHasPendingReplies in prepareClientToWrite will access the global\n * replication buffer to make judgements. */\nint prepareReplicasToWrite(void) {\n    listIter li;\n    listNode *ln;\n    int prepared = 0;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        if (!canFeedReplicaReplBuffer(slave)) continue;\n        if (prepareClientToWrite(slave) == C_ERR) continue;\n        prepared++;\n    }\n\n    return prepared;\n}\n\n/* Generally, we only have one replication buffer block to trim when the\n * replication backlog size exceeds our setting and no replica references it. 
But if replica\n * clients disconnect, we need to free many replication buffer blocks that are\n * referenced. It would cost a lot of time if there are many blocks to free,\n * which would freeze the server, so we trim the replication backlog\n * incrementally. */\nvoid incrementalTrimReplicationBacklog(size_t max_blocks) {\n    serverAssert(server.repl_backlog != NULL);\n\n    size_t trimmed_blocks = 0;\n    while (server.repl_backlog->histlen > server.repl_backlog_size &&\n           trimmed_blocks < max_blocks)\n    {\n        /* We never trim the backlog to less than one block. */\n        if (listLength(server.repl_buffer_blocks) <= 1) break;\n\n        /* Replicas increment the refcount of the first replication buffer block\n         * they refer to; in that case, we don't trim the backlog even if\n         * backlog_histlen exceeds backlog_size. This implicitly makes the backlog\n         * bigger than our setting, but makes the master accept partial resync as\n         * much as possible. So the backlog must be the last reference of\n         * replication buffer blocks. */\n        listNode *first = listFirst(server.repl_buffer_blocks);\n        serverAssert(first == server.repl_backlog->ref_repl_buf_node);\n        replBufBlock *fo = listNodeValue(first);\n        if (fo->refcount != 1) break;\n\n        /* We don't try to trim the backlog if its valid size would become less\n         * than the configured backlog size once we release the first repl\n         * buffer block. */\n        if (server.repl_backlog->histlen - (long long)fo->size <=\n            server.repl_backlog_size) break;\n\n        /* Decr refcount and release the first block later. */\n        fo->refcount--;\n        trimmed_blocks++;\n        server.repl_backlog->histlen -= fo->size;\n\n        /* Move on to the next replication buffer block node. 
*/\n        listNode *next = listNextNode(first);\n        server.repl_backlog->ref_repl_buf_node = next;\n        serverAssert(server.repl_backlog->ref_repl_buf_node != NULL);\n        /* Incr reference count to keep the new head node. */\n        ((replBufBlock *)listNodeValue(next))->refcount++;\n\n        /* Remove the node in recorded blocks. */\n        uint64_t encoded_offset = htonu64(fo->repl_offset);\n        raxRemove(server.repl_backlog->blocks_index,\n            (unsigned char*)&encoded_offset, sizeof(uint64_t), NULL);\n\n        /* Delete the first node from global replication buffer. */\n        serverAssert(fo->refcount == 0 && fo->used == fo->size);\n        server.repl_buffer_mem -= (fo->size +\n            sizeof(listNode) + sizeof(replBufBlock));\n        listDelNode(server.repl_buffer_blocks, first);\n    }\n\n    /* Set the offset of the first byte we have in the backlog. */\n    server.repl_backlog->offset = server.master_repl_offset -\n                              server.repl_backlog->histlen + 1;\n}\n\n/* Free replication buffer blocks that are referenced by this client. */\nvoid freeReplicaReferencedReplBuffer(client *replica) {\n    serverAssert(replica->running_tid == IOTHREAD_MAIN_THREAD_ID);\n\n    if (replica->ref_repl_buf_node != NULL) {\n        /* Decrease the start buffer node reference count. */\n        replBufBlock *o = listNodeValue(replica->ref_repl_buf_node);\n        serverAssert(o->refcount > 0);\n        o->refcount--;\n        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n    }\n    replica->ref_repl_buf_node = NULL;\n    replica->ref_block_pos = 0;\n}\n\n/* Batched write API for the global replication backlog, optimized for minimal\n * overhead per append: data writes are just memcpys into the tail block.\n * All bookkeeping is deferred to replBufWriterEnd(). */\ntypedef struct replBufWriter {\n    listNode *start_node;  /* First repl buffer block written to. 
*/\n    size_t start_pos;      /* Byte offset within start_node where writing began. */\n    size_t total_len;      /* Total bytes written across all writes. */\n    int new_blocks;        /* Number of new blocks allocated during this stream. */\n    replBufBlock *tail;    /* Current tail block. */\n} replBufWriter;\n\n/* Initialize the writer, cache the current tail position. */\nstatic void replBufWriterBegin(replBufWriter *wr) {\n    listNode *ln = listLast(server.repl_buffer_blocks);\n    replBufBlock *tail = ln ? listNodeValue(ln) : NULL;\n\n    if (tail && tail->used < tail->size) {\n        wr->start_node = ln;\n        wr->start_pos = tail->used;\n    } else {\n        wr->start_node = NULL;\n        wr->start_pos = 0;\n    }\n\n    wr->total_len = 0;\n    wr->new_blocks = 0;\n    wr->tail = tail;\n}\n\n/* Allocate a new replication backlog block. Called when the current block is full. */\nstatic void replBufWriterAllocBlock(replBufWriter *wr, size_t hint) {\n    static long long repl_block_id = 0;\n    size_t usable_size;\n    /* Avoid creating nodes smaller than PROTO_REPLY_CHUNK_BYTES, so that we can append more data into them,\n     * and also avoid creating nodes bigger than repl_backlog_size / 16, so that we won't have huge nodes that can't be\n     * trimmed when we still only need to hold a small portion of them. 
*/\n    size_t limit = max((size_t)server.repl_backlog_size / 16, (size_t)PROTO_REPLY_CHUNK_BYTES);\n    size_t bsize = min(max(hint, (size_t)PROTO_REPLY_CHUNK_BYTES), limit);\n    replBufBlock *tail = zmalloc_usable(bsize + sizeof(replBufBlock), &usable_size);\n    /* Take over the allocation's internal fragmentation */\n    tail->size = usable_size - sizeof(replBufBlock);\n    tail->used = 0;\n    tail->refcount = 0;\n    tail->repl_offset = server.master_repl_offset + wr->total_len + 1;\n    tail->id = repl_block_id++;\n    listAddNodeTail(server.repl_buffer_blocks, tail);\n    server.repl_buffer_mem += (usable_size + sizeof(listNode));\n    createReplicationBacklogIndex(listLast(server.repl_buffer_blocks));\n\n    /* Update stream state. */\n    wr->tail = tail;\n    wr->new_blocks++;\n    if (wr->start_node == NULL) {\n        wr->start_node = listLast(server.repl_buffer_blocks);\n        wr->start_pos = 0;\n    }\n}\n\n/* Slow path: fill remainder of current block + allocate as needed. */\nstatic void replBufWriterAppendSlow(replBufWriter *wr, const char *buf, size_t len) {\n    while (len > 0) {\n        size_t avail = wr->tail ? wr->tail->size - wr->tail->used : 0;\n        if (avail > 0) {\n            size_t copy = (avail >= len) ? len : avail;\n            memcpy(wr->tail->buf + wr->tail->used, buf, copy);\n            wr->tail->used += copy;\n            wr->total_len += copy;\n            buf += copy;\n            len -= copy;\n        }\n\n        if (len > 0)\n            replBufWriterAllocBlock(wr, len);\n    }\n}\n\n/* Write data into the replication buffer. The slow path is split out to give \n * the compiler a chance to inline the common case where the write fits entirely\n * in the current block. */\nstatic inline void replBufWriterAppend(replBufWriter *wr, const char *buf, size_t len) {\n    size_t avail = wr->tail ? 
wr->tail->size - wr->tail->used : 0;\n    if (len > 0 && avail >= len) {\n        memcpy(wr->tail->buf + wr->tail->used, buf, len);\n        wr->tail->used += len;\n        wr->total_len += len;\n        return;\n    }\n    replBufWriterAppendSlow(wr, buf, len);\n}\n\n/* Write a RESP header prefix<value>\\r\\n (e.g. \"$12\\r\\n\" or \"*3\\r\\n\").\n * Uses pre-built shared objects for small values, formats manually otherwise. */\nstatic inline void replBufWriterAppendBulkLen(replBufWriter *wr, char prefix, long long value) {\n    serverAssert(prefix == '$' || prefix == '*');\n    if (value >= 0 && value < OBJ_SHARED_BULKHDR_LEN) {\n        robj **tbl = (prefix == '$') ? shared.bulkhdr : shared.mbulkhdr;\n        replBufWriterAppend(wr, tbl[value]->ptr, OBJ_SHARED_HDR_STRLEN(value));\n        return;\n    }\n    char buf[LONG_STR_SIZE+3];\n    buf[0] = prefix;\n    int len = ll2string(buf+1, sizeof(buf)-1, value);\n    buf[len+1] = '\\r';\n    buf[len+2] = '\\n';\n    replBufWriterAppend(wr, buf, len+3);\n}\n\n\n/* Finalize the replication buffer write: update global offsets, set up replica\n * references for new data, check output buffer limits, and trim the\n * backlog if new blocks were allocated. */\nstatic void replBufWriterEnd(replBufWriter *wr) {\n    if (wr->total_len == 0) return;\n\n    serverAssert(wr->start_node != NULL);\n    clusterSlotStatsIncrNetworkBytesOutForReplication(wr->total_len);\n\n    /* Update the current cmd's keys with the command's replication bytes. */\n    hotkeyMetrics metrics = {0, wr->total_len};\n    hotkeyStatsUpdateCurrentCmd(server.hotkeys, metrics);\n\n    server.master_repl_offset += wr->total_len;\n    server.repl_backlog->histlen += wr->total_len;\n\n    /* For output buffer of replicas. 
*/\n    listIter li;\n    listNode *ln;\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        if (!canFeedReplicaReplBuffer(slave)) continue;\n\n        /* Update shared replication buffer start position. */\n        if (slave->ref_repl_buf_node == NULL) {\n            slave->ref_repl_buf_node = wr->start_node;\n            slave->ref_block_pos = wr->start_pos;\n            /* Only increase the start block reference count. */\n            ((replBufBlock *)listNodeValue(wr->start_node))->refcount++;\n        }\n\n        /* Check output buffer limit only when new blocks were added. */\n        if (wr->new_blocks) closeClientOnOutputBufferLimitReached(slave, 1);\n    }\n\n    /* For replication backlog */\n    if (server.repl_backlog->ref_repl_buf_node == NULL) {\n        server.repl_backlog->ref_repl_buf_node = wr->start_node;\n        /* Only increase the start block reference count. */\n        ((replBufBlock *)listNodeValue(wr->start_node))->refcount++;\n\n        /* Replication buffer must be empty before adding replication stream\n         * into replication backlog. */\n        serverAssert(wr->new_blocks > 0 && wr->start_pos == 0);\n    }\n    if (wr->new_blocks) {\n        /* It is important to trim after adding replication data to keep the backlog size close to\n         * repl_backlog_size in the common case. We wait until we add a new block to avoid repeated\n         * unnecessary trimming attempts when small amounts of data are added. See comments in\n         * freeMemoryGetNotCountedMemory() for details on replication backlog memory tracking. */\n        incrementalTrimReplicationBacklog(REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n    }\n}\n\n/* Append bytes into the global replication buffer. 
*/\nstatic void feedReplicationBuffer(const char *buf, size_t len) {\n    replBufWriter wr;\n    replBufWriterBegin(&wr);\n    replBufWriterAppend(&wr, buf, len);\n    replBufWriterEnd(&wr);\n}\n\n/* Propagate write commands to the replication stream.\n *\n * This function is used if the instance is a master: we use the commands\n * received by our clients in order to create the replication stream.\n * Instead, if the instance is a replica and has sub-replicas attached, we use\n * replicationFeedStreamFromMasterStream() */\nvoid replicationFeedSlaves(list *slaves, int dictid, robj **argv, int argc) {\n    int j, len;\n    char llstr[LONG_STR_SIZE];\n\n    /* In case we propagate a command that doesn't touch keys (PING, REPLCONF) we\n     * pass dictid=-1 to indicate there is no need to replicate a `select` command. */\n    serverAssert(dictid == -1 || (dictid >= 0 && dictid < server.dbnum));\n\n    /* If the instance is not a top level master, return ASAP: we'll just proxy\n     * the stream of data we receive from our master instead, in order to\n     * propagate an *identical* replication stream. In this way this slave can\n     * advertise the same replication ID as the master (since it shares the\n     * master replication history and has the same backlog and offsets). */\n    if (server.masterhost != NULL) return;\n\n    /* If the current client is marked as master, we will proxy the command\n     * stream to our slaves instead of replicating it; this also happens during\n     * atomic slot migration. */\n    if (server.current_client && server.current_client->flags & CLIENT_MASTER) return;\n\n    /* If there are no slaves, and there is no backlog buffer to populate,\n     * we can return ASAP. */\n    if (server.repl_backlog == NULL && listLength(slaves) == 0) {\n        /* We increment the repl_offset anyway, since we use that for tracking AOF fsyncs\n         * even when there's no replication active. 
This code will not be reached if AOF\n         * is also disabled. */\n        server.master_repl_offset += 1;\n        return;\n    }\n\n    /* We can't have slaves attached and no backlog. */\n    serverAssert(!(listLength(slaves) != 0 && server.repl_backlog == NULL));\n\n    /* Update the time of sending replication stream to replicas. */\n    server.repl_stream_lastio = server.unixtime;\n\n    /* Must install write handler for all replicas first before feeding\n     * replication stream. */\n    prepareReplicasToWrite();\n\n    /* Send SELECT command to every slave if needed. */\n    if (dictid != -1 && server.slaveseldb != dictid) {\n        robj *selectcmd;\n\n        /* For a few DBs we have a pre-computed SELECT command. */\n        if (dictid >= 0 && dictid < PROTO_SHARED_SELECT_CMDS) {\n            selectcmd = shared.select[dictid];\n        } else {\n            int dictid_len;\n\n            dictid_len = ll2string(llstr,sizeof(llstr),dictid);\n            selectcmd = createObject(OBJ_STRING,\n                sdscatprintf(sdsempty(),\n                \"*2\\r\\n$6\\r\\nSELECT\\r\\n$%d\\r\\n%s\\r\\n\",\n                dictid_len, llstr));\n        }\n\n        feedReplicationBuffer(selectcmd->ptr, sdslen(selectcmd->ptr));\n\n        /* Although the SELECT command is not associated with any slot,\n         * the call above still accumulates its bytes in the per-slot\n         * network-bytes-out stats. The adjustment below cancels out this\n         * accumulation. */\n        clusterSlotStatsDecrNetworkBytesOutForReplication(sdslen(selectcmd->ptr));\n\n        if (dictid < 0 || dictid >= PROTO_SHARED_SELECT_CMDS)\n            decrRefCount(selectcmd);\n\n        server.slaveseldb = dictid;\n    }\n\n    /* Write the command to the replication buffer if any. 
*/\n    char aux[LONG_STR_SIZE+3];\n    replBufWriter wr;\n    replBufWriterBegin(&wr);\n\n    /* Write the multi bulk count */\n    replBufWriterAppendBulkLen(&wr, '*', argc);\n\n    for (j = 0; j < argc; j++) {\n        /* Write the bulk count */\n        long objlen = stringObjectLen(argv[j]);\n        replBufWriterAppendBulkLen(&wr, '$', objlen);\n\n        /* Write the bulk data */\n        if (argv[j]->encoding == OBJ_ENCODING_INT) {\n            len = ll2string(aux, sizeof(aux), (long)argv[j]->ptr);\n            replBufWriterAppend(&wr, aux, len);\n        } else {\n            replBufWriterAppend(&wr, argv[j]->ptr, objlen);\n        }\n        replBufWriterAppend(&wr, \"\\r\\n\", 2);\n    }\n\n    replBufWriterEnd(&wr);\n}\n\n/* This is a debugging function that gets called when we detect something\n * wrong with the replication protocol: the goal is to peek into the\n * replication backlog and show a few final bytes to make it simpler to\n * guess what kind of bug it could be. */\nvoid showLatestBacklog(void) {\n    if (server.repl_backlog == NULL) return;\n    if (listLength(server.repl_buffer_blocks) == 0) return;\n    if (server.hide_user_data_from_log) {\n        serverLog(LL_NOTICE,\"hide-user-data-from-log is on, skip logging backlog content to avoid spilling PII.\");\n        return;\n    }\n\n    size_t dumplen = 256;\n    if (server.repl_backlog->histlen < (long long)dumplen)\n        dumplen = server.repl_backlog->histlen;\n\n    sds dump = sdsempty();\n    listNode *node = listLast(server.repl_buffer_blocks);\n    while(dumplen) {\n        if (node == NULL) break;\n        replBufBlock *o = listNodeValue(node);\n        size_t thislen = o->used >= dumplen ? 
dumplen : o->used;\n        sds head = sdscatrepr(sdsempty(), o->buf+o->used-thislen, thislen);\n        sds tmp = sdscatsds(head, dump);\n        sdsfree(dump);\n        dump = tmp;\n        dumplen -= thislen;\n        node = listPrevNode(node);\n    }\n\n    /* Finally log such bytes: this is vital debugging info to\n     * understand what happened. */\n    serverLog(LL_NOTICE,\"Latest backlog is: '%s'\", dump);\n    sdsfree(dump);\n}\n\n/* This function is used in order to proxy what we receive from our master\n * to our sub-slaves. Besides, we also proxy the replication stream from\n * the source node during atomic slot migration. */\nvoid replicationFeedStreamFromMasterStream(char *buf, size_t buflen) {\n    /* There must be a replication backlog if there are attached slaves. */\n    if (listLength(server.slaves)) serverAssert(server.repl_backlog != NULL);\n    if (server.repl_backlog) {\n        /* Must install write handler for all replicas first before feeding\n         * replication stream. */\n        prepareReplicasToWrite();\n        feedReplicationBuffer(buf,buflen);\n    } else if (server.masterhost == NULL && server.aof_enabled) {\n        /* We increment the repl_offset anyway, since we use that for tracking\n         * AOF fsyncs even when there's no replication active. This code will\n         * not be reached if AOF is also disabled.\n         *\n         * Since we skip feeding the replication buffer during atomic slot\n         * migration, we need to update the replication offset manually here. */\n        server.master_repl_offset += 1;\n    }\n}\n\nvoid replicationFeedMonitors(client *c, list *monitors, int dictid, robj **argv, int argc) {\n    /* Fast path to return if the monitors list is empty or the server is loading. 
*/\n    if (monitors == NULL || listLength(monitors) == 0 || server.loading) return;\n    listNode *ln;\n    listIter li;\n    int j;\n    sds cmdrepr = sdsnew(\"+\");\n    robj *cmdobj;\n    struct timeval tv;\n\n    gettimeofday(&tv,NULL);\n    cmdrepr = sdscatprintf(cmdrepr,\"%ld.%06ld \",(long)tv.tv_sec,(long)tv.tv_usec);\n    if (c->flags & CLIENT_SCRIPT) {\n        cmdrepr = sdscatprintf(cmdrepr,\"[%d lua] \",dictid);\n    } else if (c->flags & CLIENT_UNIX_SOCKET) {\n        cmdrepr = sdscatprintf(cmdrepr,\"[%d unix:%s] \",dictid,server.unixsocket);\n    } else {\n        cmdrepr = sdscatprintf(cmdrepr,\"[%d %s] \",dictid,getClientPeerId(c));\n    }\n\n    for (j = 0; j < argc; j++) {\n        if (argv[j]->encoding == OBJ_ENCODING_INT) {\n            cmdrepr = sdscatprintf(cmdrepr, \"\\\"%ld\\\"\", (long)argv[j]->ptr);\n        } else {\n            cmdrepr = sdscatrepr(cmdrepr,(char*)argv[j]->ptr,\n                        sdslen(argv[j]->ptr));\n        }\n        if (j != argc-1)\n            cmdrepr = sdscatlen(cmdrepr,\" \",1);\n    }\n    cmdrepr = sdscatlen(cmdrepr,\"\\r\\n\",2);\n    cmdobj = createObject(OBJ_STRING,cmdrepr);\n\n    listRewind(monitors,&li);\n    while((ln = listNext(&li))) {\n        client *monitor = ln->value;\n        /* Do not show internal commands to non-internal clients. */\n        if (c->realcmd && (c->realcmd->flags & CMD_INTERNAL) && !(monitor->flags & CLIENT_INTERNAL)) {\n            continue;\n        }\n        addReply(monitor,cmdobj);\n        updateClientMemUsageAndBucket(monitor);\n    }\n    decrRefCount(cmdobj);\n}\n\n/* Feed the slave 'c' with the replication backlog starting from the\n * specified 'offset' up to the end of the backlog. 
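For example (an illustrative sketch,\n * the numbers are hypothetical): given the block list\n *\n *   [repl_offset=1000, used=100] -> [repl_offset=1100, used=100]\n *\n * a request for offset 1150 selects the second block and the replica\n * starts reading at ref_block_pos = 1150 - 1100 = 50. 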
*/\nlong long addReplyReplicationBacklog(client *c, long long offset) {\n    serverAssert(c->running_tid == IOTHREAD_MAIN_THREAD_ID);\n\n    long long skip;\n\n    serverLog(LL_DEBUG, \"[PSYNC] Replica request offset: %lld\", offset);\n\n    if (server.repl_backlog->histlen == 0) {\n        serverLog(LL_DEBUG, \"[PSYNC] Backlog history len is zero\");\n        return 0;\n    }\n\n    serverLog(LL_DEBUG, \"[PSYNC] Backlog size: %lld\",\n             server.repl_backlog_size);\n    serverLog(LL_DEBUG, \"[PSYNC] First byte: %lld\",\n             server.repl_backlog->offset);\n    serverLog(LL_DEBUG, \"[PSYNC] History len: %lld\",\n             server.repl_backlog->histlen);\n\n    /* Compute the number of bytes we need to discard. */\n    skip = offset - server.repl_backlog->offset;\n    serverLog(LL_DEBUG, \"[PSYNC] Skipping: %lld\", skip);\n\n    /* Iterate recorded blocks, quickly search the approximate node. */\n    listNode *node = NULL;\n    if (raxSize(server.repl_backlog->blocks_index) > 0) {\n        uint64_t encoded_offset = htonu64(offset);\n        raxIterator ri;\n        raxStart(&ri, server.repl_backlog->blocks_index);\n        raxSeek(&ri, \">\", (unsigned char*)&encoded_offset, sizeof(uint64_t));\n        if (raxEOF(&ri)) {\n            /* Not found, so search from the last recorded node. */\n            raxSeek(&ri, \"$\", NULL, 0);\n            raxPrev(&ri);\n            node = (listNode *)ri.data;\n        } else {\n            raxPrev(&ri); /* Skip the sought node. */\n            /* We should search from the prev node since the offset of the\n             * current sought node exceeds the searched offset. */\n            if (raxPrev(&ri))\n                node = (listNode *)ri.data;\n            else\n                node = server.repl_backlog->ref_repl_buf_node;\n        }\n        raxStop(&ri);\n    } else {\n        /* No recorded blocks, just search from the start node. 
*/\n        node = server.repl_backlog->ref_repl_buf_node;\n    }\n\n    /* Search the exact node. */\n    while (node != NULL) {\n        replBufBlock *o = listNodeValue(node);\n        if (o->repl_offset + (long long)o->used >= offset) break;\n        node = listNextNode(node);\n    }\n    serverAssert(node != NULL);\n\n    /* Install a write handler first. */\n    prepareClientToWrite(c);\n    /* Set the output buffer of the replica. */\n    replBufBlock *o = listNodeValue(node);\n    o->refcount++;\n    c->ref_repl_buf_node = node;\n    c->ref_block_pos = offset - o->repl_offset;\n\n    return server.repl_backlog->histlen - skip;\n}\n\n/* Return the offset to provide as reply to the PSYNC command received\n * from the slave. The returned value is only valid immediately after\n * the BGSAVE process started and before executing any other command\n * from clients. */\nlong long getPsyncInitialOffset(void) {\n    return server.master_repl_offset;\n}\n\n/* Send a FULLRESYNC reply in the specific case of a full resynchronization,\n * and as a side effect set up the slave for a full sync in different ways:\n *\n * 1) Remember, into the slave client structure, the replication offset\n *    we sent here, so that if new slaves later attach to the same\n *    background RDB saving process (by duplicating this client output\n *    buffer), we can get the right offset from this slave.\n * 2) Set the replication state of the slave to WAIT_BGSAVE_END so that\n *    we start accumulating differences from this point.\n * 3) Force the replication stream to re-emit a SELECT statement so\n *    the new slave incremental differences will start selecting the\n *    right database number.\n *\n * Normally this function should be called immediately after a successful\n * BGSAVE for replication was started, or when there is one already in\n * progress that we attached our slave to. 
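Example of the reply\n * as sent on the wire (the replid and offset shown are purely\n * illustrative):\n *\n *   +FULLRESYNC 0123456789abcdef0123456789abcdef01234567 1234\n *\n * after which the RDB payload and then the incremental stream follow. 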
*/\nint replicationSetupSlaveForFullResync(client *slave, long long offset) {\n    char buf[128];\n    int buflen;\n\n    slave->psync_initial_offset = offset;\n    slave->replstate = SLAVE_STATE_WAIT_BGSAVE_END;\n    /* We are going to accumulate the incremental changes for this\n     * slave as well. Set slaveseldb to -1 in order to force to re-emit\n     * a SELECT statement in the replication stream. */\n    server.slaveseldb = -1;\n\n    /* Slots snapshot. */\n    if (slave->flags & CLIENT_REPL_RDB_CHANNEL &&\n        slave->slave_req & SLAVE_REQ_SLOTS_SNAPSHOT)\n    {\n        /* Start to deliver the commands stream on migrating slots. */\n        asmSlotSnapshotAndStreamStart(slave->task);\n\n        buflen = snprintf(buf, sizeof(buf), \"+SLOTSSNAPSHOT\\r\\n\");\n        if (connWrite(slave->conn, buf, buflen) != buflen) {\n            freeClientAsync(slave);\n            return C_ERR;\n        }\n        return C_OK;\n    }\n\n    /* Don't send this reply to slaves that approached us with\n     * the old SYNC command. */\n    if (!(slave->flags & CLIENT_PRE_PSYNC)) {\n        if (slave->flags & CLIENT_REPL_RDB_CHANNEL) {\n            /* This slave is rdbchannel. Find its associated main channel and\n             * change its state so we can deliver replication stream from now\n             * on, in parallel to rdb. 
*/\n            uint64_t id = slave->main_ch_client_id;\n            client *c = lookupClientByID(id);\n            if (c && c->replstate == SLAVE_STATE_WAIT_RDB_CHANNEL) {\n                c->replstate = SLAVE_STATE_SEND_BULK_AND_STREAM;\n                serverLog(LL_NOTICE, \"Starting to deliver RDB and replication stream to replica: %s\",\n                          replicationGetSlaveName(c));\n            } else {\n                serverLog(LL_WARNING, \"Starting to deliver RDB to replica %s\"\n                                      \" but it has no associated main channel\",\n                                      replicationGetSlaveName(slave));\n            }\n        }\n        buflen = snprintf(buf,sizeof(buf),\"+FULLRESYNC %s %lld\\r\\n\",\n                          server.replid,offset);\n        if (connWrite(slave->conn,buf,buflen) != buflen) {\n            freeClientAsync(slave);\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\n/* This function handles the PSYNC command from the point of view of a\n * master receiving a request for partial resynchronization.\n *\n * On success return C_OK, otherwise C_ERR is returned and we proceed\n * with the usual full resync. */\nint masterTryPartialResynchronization(client *c, long long psync_offset) {\n    long long psync_len;\n    char *master_replid = c->argv[1]->ptr;\n    char buf[128];\n    int buflen;\n\n    /* Is the replication ID of this master the same advertised by the wannabe\n     * slave via PSYNC? If the replication ID changed this master has a\n     * different replication history, and there is no way to continue.\n     *\n     * Note that there are two potentially valid replication IDs: the ID1\n     * and the ID2. The ID2 however is only valid up to a specific offset. 
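For example\n     * (illustrative): after a failover the promoted master keeps its old\n     * ID as the secondary ID, so a replica that replicated with the old\n     * master up to second_replid_offset can still partially resync. 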
*/\n    if (strcasecmp(master_replid, server.replid) &&\n        (strcasecmp(master_replid, server.replid2) ||\n         psync_offset > server.second_replid_offset))\n    {\n        /* Replid \"?\" is used by slaves that want to force a full resync. */\n        if (master_replid[0] != '?') {\n            if (strcasecmp(master_replid, server.replid) &&\n                strcasecmp(master_replid, server.replid2))\n            {\n                serverLog(LL_NOTICE,\"Partial resynchronization not accepted: \"\n                    \"Replication ID mismatch (Replica asked for '%s', my \"\n                    \"replication IDs are '%s' and '%s')\",\n                    master_replid, server.replid, server.replid2);\n            } else {\n                serverLog(LL_NOTICE,\"Partial resynchronization not accepted: \"\n                    \"Requested offset for second ID was %lld, but I can reply \"\n                    \"up to %lld\", psync_offset, server.second_replid_offset);\n            }\n        } else {\n            serverLog(LL_NOTICE,\"Full resync requested by replica %s %s\",\n                replicationGetSlaveName(c),\n                c->flags & CLIENT_REPL_RDB_CHANNEL ? \"(rdb-channel)\" : \"\");\n        }\n        goto need_full_resync;\n    }\n\n    /* We still have the data our slave is asking for? 
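To illustrate with\n     * hypothetical numbers: if backlog->offset == 1000 and histlen == 500,\n     * any psync_offset in the range [1000, 1500] can be served from the\n     * backlog; anything outside it falls through to a full resync. 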
*/\n    if (!server.repl_backlog ||\n        psync_offset < server.repl_backlog->offset ||\n        psync_offset > (server.repl_backlog->offset + server.repl_backlog->histlen))\n    {\n        serverLog(LL_NOTICE,\n            \"Unable to partial resync with replica %s for lack of backlog (Replica request was: %lld).\", replicationGetSlaveName(c), psync_offset);\n        if (psync_offset > server.master_repl_offset) {\n            serverLog(LL_WARNING,\n                \"Warning: replica %s tried to PSYNC with an offset that is greater than the master replication offset.\", replicationGetSlaveName(c));\n        }\n        goto need_full_resync;\n    }\n\n    /* If we reached this point, we are able to perform a partial resync:\n     * 1) Set client state to make it a slave.\n     * 2) Inform the client we can continue with +CONTINUE\n     * 3) Send the backlog data (from the offset to the end) to the slave. */\n    c->flags |= CLIENT_SLAVE;\n    c->replstate = SLAVE_STATE_ONLINE;\n    c->repl_ack_time = server.unixtime;\n    c->repl_start_cmd_stream_on_ack = 0;\n    listAddNodeTail(server.slaves,c);\n    /* We can't use the connection buffers since they are used to accumulate\n     * new commands at this stage. But we are sure the socket send buffer is\n     * empty so this write will never fail actually. */\n    if (c->slave_capa & SLAVE_CAPA_PSYNC2) {\n        buflen = snprintf(buf,sizeof(buf),\"+CONTINUE %s\\r\\n\", server.replid);\n    } else {\n        buflen = snprintf(buf,sizeof(buf),\"+CONTINUE\\r\\n\");\n    }\n    if (connWrite(c->conn,buf,buflen) != buflen) {\n        freeClientAsync(c);\n        return C_OK;\n    }\n    psync_len = addReplyReplicationBacklog(c,psync_offset);\n    serverLog(LL_NOTICE,\n        \"Partial resynchronization request from %s accepted. 
Sending %lld bytes of backlog starting from offset %lld.\",\n            replicationGetSlaveName(c),\n            psync_len, psync_offset);\n    /* Note that we don't need to set the selected DB at server.slaveseldb\n     * to -1 to force the master to emit SELECT, since the slave already\n     * has this state from the previous connection with the master. */\n\n    refreshGoodSlavesCount();\n\n    /* Fire the replica change modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_REPLICA_CHANGE,\n                          REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE,\n                          NULL);\n\n    return C_OK; /* The caller can return, no full resync needed. */\n\nneed_full_resync:\n    /* We need a full resync for some reason... Note that we can't\n     * reply to PSYNC right now if a full SYNC is needed. The reply\n     * must include the master offset at the time the RDB file we transfer\n     * is generated, so we need to delay the reply to that moment. */\n    return C_ERR;\n}\n\n/* Start a BGSAVE for replication goals, that is, selecting the disk or\n * socket target depending on the configuration, and making sure that\n * the script cache is flushed before starting.\n *\n * The mincapa argument is the bitwise AND of the capabilities of all the\n * slaves waiting for this BGSAVE, so it represents the capabilities that\n * all of them support. Can be tested via SLAVE_CAPA_* macros.\n *\n * Side effects, other than starting a BGSAVE:\n *\n * 1) Handle the slaves in WAIT_START state, by preparing them for a full\n *    sync if the BGSAVE was successfully started, or sending them an error\n *    and dropping them from the list of slaves.\n *\n * 2) Flush the Lua scripting script cache if the BGSAVE was actually\n *    started.\n *\n * Returns C_OK on success or C_ERR otherwise. 
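For example (illustrative):\n * if one waiting replica advertised capa eof|psync2 and another only\n * psync2, mincapa is their bitwise AND (psync2 only), so a diskless\n * EOF-based transfer can't be used for this BGSAVE. 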
*/\nint startBgsaveForReplication(int mincapa, int req) {\n    int retval;\n    int socket_target = 0;\n    listIter li;\n    listNode *ln;\n\n    /* We use a socket target if slave can handle the EOF marker and we're configured to do diskless syncs.\n     * Note that in case we're creating a \"filtered\" RDB (functions-only, for example) we also force socket replication\n     * to avoid overwriting the snapshot RDB file with filtered data. */\n    socket_target = (server.repl_diskless_sync || req & SLAVE_REQ_RDB_MASK) && (mincapa & SLAVE_CAPA_EOF);\n    /* `SYNC` should have failed with error if we don't support socket and require a filter, assert this here */\n    serverAssert(socket_target || !(req & SLAVE_REQ_RDB_MASK));\n\n    int slots_req = req & SLAVE_REQ_SLOTS_SNAPSHOT;\n    serverLog(LL_NOTICE,\"Starting BGSAVE for SYNC with target: %s%s\",\n        socket_target ? (slots_req ? \"slot migration destination socket\" : \"replicas sockets\") : \"disk\",\n        (req & SLAVE_REQ_RDB_CHANNEL) ? \" (rdb-channel)\" : \"\");\n\n    rdbSaveInfo rsi, *rsiptr;\n    rsiptr = rdbPopulateSaveInfo(&rsi);\n    /* Only do rdbSave* when rsiptr is not NULL,\n     * otherwise slave will miss repl-stream-db. */\n    if (rsiptr) {\n        if (socket_target)\n            retval = rdbSaveToSlavesSockets(req,rsiptr);\n        else {\n            /* Keep the page cache since it'll get used soon */\n            retval = rdbSaveBackground(req, server.rdb_filename, rsiptr, RDBFLAGS_REPLICATION | RDBFLAGS_KEEP_CACHE);\n        }\n        if (server.repl_debug_pause & REPL_DEBUG_AFTER_FORK)\n            debugPauseProcess();\n    } else {\n        serverLog(LL_WARNING,\"BGSAVE for replication: replication information not available, can't generate the RDB file right now. Try later.\");\n        retval = C_ERR;\n    }\n\n    /* If we succeeded to start a BGSAVE with disk target, let's remember\n     * this fact, so that we can later delete the file if needed. 
Note\n     * that we don't set the flag to 1 if the feature is disabled, otherwise\n     * it would never be cleared: the file is not deleted. This way if\n     * the user enables it later with CONFIG SET, we are fine. */\n    if (retval == C_OK && !socket_target && server.rdb_del_sync_files)\n        RDBGeneratedByReplication = 1;\n\n    /* If we failed to BGSAVE, remove the slaves waiting for a full\n     * resynchronization from the list of slaves, inform them with\n     * an error about what happened, close the connection ASAP. */\n    if (retval == C_ERR) {\n        serverLog(LL_WARNING,\"BGSAVE for replication failed\");\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            client *slave = ln->value;\n\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {\n                slave->replstate = REPL_STATE_NONE;\n                slave->flags &= ~CLIENT_SLAVE;\n                listDelNode(server.slaves,ln);\n                addReplyError(slave,\n                    \"BGSAVE failed, replication can't continue\");\n                slave->flags |= CLIENT_CLOSE_AFTER_REPLY;\n            }\n        }\n        return retval;\n    }\n\n    /* If the target is socket, rdbSaveToSlavesSockets() already setup\n     * the slaves for a full resync. Otherwise for disk target do it now.*/\n    if (!socket_target) {\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            client *slave = ln->value;\n\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {\n                /* Check slave has the exact requirements */\n                if (slave->slave_req != req)\n                    continue;\n                replicationSetupSlaveForFullResync(slave, getPsyncInitialOffset());\n            }\n        }\n    }\n\n    return retval;\n}\n\n/* SYNC and PSYNC command implementation. 
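A typical exchange\n * looks like this (the replid and offsets are illustrative):\n *\n *   Replica: PSYNC 0123456789abcdef0123456789abcdef01234567 1001\n *   Master:  +CONTINUE                     (partial resync accepted)\n *\n * or, when the request can't be served from the backlog:\n *\n *   Master:  +FULLRESYNC <replid> <offset>\n *\n * A replica with no cached master sends 'PSYNC ? -1' to force a full\n * resync. 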
*/\nvoid syncCommand(client *c) {\n    /* ignore SYNC if already slave or in monitor mode */\n    if (c->flags & CLIENT_SLAVE) return;\n\n    /* Check if this is a failover request to a replica with the same replid and\n     * become a master if so. */\n    if (c->argc > 3 && !strcasecmp(c->argv[0]->ptr,\"psync\") && \n        !strcasecmp(c->argv[3]->ptr,\"failover\"))\n    {\n        serverLog(LL_NOTICE, \"Failover request received for replid %s.\",\n            (unsigned char *)c->argv[1]->ptr);\n        if (!server.masterhost) {\n            addReplyError(c, \"PSYNC FAILOVER can't be sent to a master.\");\n            return;\n        }\n\n        if (!strcasecmp(c->argv[1]->ptr,server.replid)) {\n            if (server.cluster_enabled) {\n                clusterPromoteSelfToMaster();\n            } else {\n                replicationUnsetMaster();\n            }\n            sds client = catClientInfoString(sdsempty(),c);\n            serverLog(LL_NOTICE,\n                \"MASTER MODE enabled (failover request from '%s')\",client);\n            sdsfree(client);\n        } else {\n            addReplyError(c, \"PSYNC FAILOVER replid must match my replid.\");\n            return;            \n        }\n    }\n\n    /* Don't let replicas sync with us while we're failing over */\n    if (server.failover_state != NO_FAILOVER) {\n        addReplyError(c,\"-NOMASTERLINK Can't SYNC while failing over\");\n        return;\n    }\n\n    /* Refuse SYNC requests if we are a slave but the link with our master\n     * is not ok... */\n    if (server.masterhost && server.repl_state != REPL_STATE_CONNECTED) {\n        addReplyError(c,\"-NOMASTERLINK Can't SYNC while not connected with my master\");\n        return;\n    }\n\n    /* SYNC can't be issued when the server has pending data to send to\n     * the client about already issued commands. 
We need a fresh reply\n     * buffer registering the differences between the BGSAVE and the current\n     * dataset, so that we can copy to other slaves if needed. */\n    if (clientHasPendingReplies(c)) {\n        addReplyError(c,\"SYNC and PSYNC are invalid with pending output\");\n        return;\n    }\n\n    /* Fail sync if slave doesn't support EOF capability but wants a filtered RDB. This is because we force filtered\n     * RDBs to be generated over a socket and not through a file to avoid conflicts with the snapshot files. Forcing\n     * use of a socket is handled, if needed, in `startBgsaveForReplication`. */\n    if (c->slave_req & SLAVE_REQ_RDB_MASK && !(c->slave_capa & SLAVE_CAPA_EOF)) {\n        addReplyError(c,\"Filtered replica requires EOF capability\");\n        return;\n    }\n\n    serverLog(LL_NOTICE,\"Replica %s asks for synchronization\",\n        replicationGetSlaveName(c));\n\n    /* Try a partial resynchronization if this is a PSYNC command.\n     * If it fails, we continue with the usual full resynchronization, however\n     * when this happens replicationSetupSlaveForFullResync will reply\n     * with:\n     *\n     * +FULLRESYNC <replid> <offset>\n     *\n     * So the slave knows the new replid and offset to try a PSYNC later\n     * if the connection with the master is lost. */\n    if (!strcasecmp(c->argv[0]->ptr,\"psync\")) {\n        long long psync_offset;\n        if (getLongLongFromObjectOrReply(c, c->argv[2], &psync_offset, NULL) != C_OK) {\n            serverLog(LL_WARNING, \"Replica %s asks for synchronization but with a wrong offset\",\n                      replicationGetSlaveName(c));\n            return;\n        }\n\n        if (masterTryPartialResynchronization(c, psync_offset) == C_OK) {\n            server.stat_sync_partial_ok++;\n            return; /* No full resync needed, return. 
*/\n        } else {\n            char *master_replid = c->argv[1]->ptr;\n\n            /* Increment stats for failed PSYNCs, but only if the\n             * replid is not \"?\", as this is used by slaves to force a full\n             * resync on purpose when they are not able to partially\n             * resync. */\n            if (master_replid[0] != '?') server.stat_sync_partial_err++;\n            if (c->slave_capa & SLAVE_CAPA_RDB_CHANNEL_REPL) {\n                int len;\n                char buf[128];\n                /* Replica is capable of rdbchannel replication. This is\n                 * replica's main channel. Let replica know full sync is needed.\n                 * Replica will open another connection (rdbchannel). Once rdb\n                 * delivery starts, we'll stream repl data to the main channel. */\n                c->flags |= CLIENT_SLAVE;\n                c->replstate = SLAVE_STATE_WAIT_RDB_CHANNEL;\n                c->repl_ack_time = server.unixtime;\n                listAddNodeTail(server.slaves, c);\n                createReplicationBacklogIfNeeded();\n\n                serverLog(LL_NOTICE,\n                          \"Replica %s is capable of rdb channel synchronization, and partial sync isn't possible. \"\n                          \"Full sync will continue with dedicated rdb channel.\",\n                          replicationGetSlaveName(c));\n\n                /* Send +RDBCHANNELSYNC with client id so we can associate replica connections on master. */\n                len = snprintf(buf, sizeof(buf), \"+RDBCHANNELSYNC %llu\\r\\n\",\n                               (unsigned long long) c->id);\n                if (connWrite(c->conn, buf, len) != len)\n                    freeClientAsync(c);\n\n                return;\n            }\n        }\n    } else {\n        /* If a slave uses SYNC, we are dealing with an old implementation\n         * of the replication protocol (like redis-cli --slave). 
Flag the client\n         * so that we don't expect to receive REPLCONF ACK feedbacks. */\n        c->flags |= CLIENT_PRE_PSYNC;\n    }\n\n    /* Full resynchronization. */\n    server.stat_sync_full++;\n\n    /* Setup the slave as one waiting for BGSAVE to start. The following code\n     * paths will change the state if we handle the slave differently. */\n    c->replstate = SLAVE_STATE_WAIT_BGSAVE_START;\n    if (server.repl_disable_tcp_nodelay)\n        connDisableTcpNoDelay(c->conn); /* Non critical if it fails. */\n    c->repldbfd = -1;\n    c->flags |= CLIENT_SLAVE;\n    listAddNodeTail(server.slaves,c);\n\n    /* Create the replication backlog if needed. */\n    createReplicationBacklogIfNeeded();\n\n    /* Keep the client in the main thread to avoid data races between the\n     * connWrite call in startBgsaveForReplication and the client's event\n     * handler in IO threads. */\n    if (c->tid != IOTHREAD_MAIN_THREAD_ID) keepClientInMainThread(c);\n\n    /* CASE 1: BGSAVE is in progress, with disk target. */\n    if (server.child_type == CHILD_TYPE_RDB &&\n        server.rdb_child_type == RDB_CHILD_TYPE_DISK)\n    {\n        /* Ok a background save is in progress. Let's check if it is a good\n         * one for replication, i.e. if there is another slave that is\n         * registering differences since the server forked to save. */\n        client *slave;\n        listNode *ln;\n        listIter li;\n\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            slave = ln->value;\n            /* If the client needs a buffer of commands, we can't use\n             * a replica without replication buffer. 
*/\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END &&\n                (!(slave->flags & CLIENT_REPL_RDBONLY) ||\n                 (c->flags & CLIENT_REPL_RDBONLY)))\n                break;\n        }\n        /* To attach this slave, we check that it has at least all the\n         * capabilities of the slave that triggered the current BGSAVE\n         * and its exact requirements. */\n        if (ln && ((c->slave_capa & slave->slave_capa) == slave->slave_capa) &&\n            c->slave_req == slave->slave_req) {\n            /* Perfect, the server is already registering differences for\n             * another slave. Set the right state, and copy the buffer.\n             * We don't copy the buffer if the client doesn't want it. */\n            if (!(c->flags & CLIENT_REPL_RDBONLY))\n                copyReplicaOutputBuffer(c,slave);\n            replicationSetupSlaveForFullResync(c,slave->psync_initial_offset);\n            serverLog(LL_NOTICE,\"Waiting for end of BGSAVE for SYNC\");\n        } else {\n            /* No way, we need to wait for the next BGSAVE in order to\n             * register differences. */\n            serverLog(LL_NOTICE,\"Can't attach the replica to the current BGSAVE. Waiting for next BGSAVE for SYNC\");\n        }\n\n    /* CASE 2: BGSAVE is in progress, with socket target. */\n    } else if (server.child_type == CHILD_TYPE_RDB &&\n               server.rdb_child_type == RDB_CHILD_TYPE_SOCKET)\n    {\n        /* There is an RDB child process but it is writing directly to\n         * children sockets. We need to wait for the next BGSAVE\n         * in order to synchronize. */\n        serverLog(LL_NOTICE,\"Current BGSAVE has socket target. Waiting for next BGSAVE for SYNC\");\n\n    /* CASE 3: No BGSAVE is in progress. 
*/\n    } else {\n        if (server.repl_diskless_sync && (c->slave_capa & SLAVE_CAPA_EOF) &&\n            server.repl_diskless_sync_delay)\n        {\n            /* Diskless replication RDB child is created inside\n             * replicationCron() since we want to delay its start a\n             * few seconds to wait for more slaves to arrive. */\n            serverLog(LL_NOTICE,\"Delay next BGSAVE for diskless SYNC\");\n        } else {\n            /* We don't have a BGSAVE in progress, let's start one. Diskless\n             * or disk-based mode is determined by the replica's capabilities. */\n            if (!hasActiveChildProcess()) {\n                startBgsaveForReplication(c->slave_capa, c->slave_req);\n            } else {\n                serverLog(LL_NOTICE,\n                    \"No BGSAVE in progress, but another BG operation is active. \"\n                    \"BGSAVE for replication delayed\");\n            }\n        }\n    }\n    return;\n}\n\n/* REPLCONF <option> <value> <option> <value> ...\n * This command is used by a replica in order to configure the replication\n * process before starting it with the SYNC command.\n * This command is also used by a master in order to get the replication\n * offset from a replica.\n *\n * Currently we support these options:\n *\n * - listening-port <port>\n * - ip-address <ip>\n * What are the listening ip and port of the Replica redis instance, so that\n * the master can accurately list replicas and their listening ports in the\n * INFO output.\n *\n * - capa <eof|psync2|rdb-channel-repl>\n * What are the capabilities of this instance.\n * eof: supports EOF-style RDB transfer for diskless replication.\n * psync2: supports PSYNC v2, so understands +CONTINUE <new repl ID>.\n *\n * - ack <offset> [fack <aofofs>]\n * Replica informs the master the amount of replication stream that it\n * processed so far, and optionally the replication offset fsynced to the AOF file.\n * This special pattern doesn't reply to the 
caller.\n *\n * - getack <dummy>\n * Unlike other subcommands, this is used by master to get the replication\n * offset from a replica.\n *\n * - rdb-only <0|1>\n * Only wants RDB snapshot without replication buffer.\n *\n * - rdb-filter-only <include-filters>\n * Define \"include\" filters for the RDB snapshot. Currently we only support\n * a single include filter: \"functions\". Passing an empty string \"\" will\n * result in an empty RDB.\n *\n * - main-ch-client-id <client-id>\n * Replica's main channel informs master that this is the main channel of the\n * rdb channel identified by the client-id. */\nvoid replconfCommand(client *c) {\n    int j;\n\n    if ((c->argc % 2) == 0) {\n        /* Number of arguments must be odd to make sure that every\n         * option has a corresponding value. */\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Process every option-value pair. */\n    for (j = 1; j < c->argc; j+=2) {\n        if (!strcasecmp(c->argv[j]->ptr,\"listening-port\")) {\n            long port;\n\n            if ((getLongFromObjectOrReply(c,c->argv[j+1],\n                    &port,NULL) != C_OK))\n                return;\n            c->slave_listening_port = port;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"ip-address\")) {\n            sds addr = c->argv[j+1]->ptr;\n            if (sdslen(addr) < NET_HOST_STR_LEN) {\n                if (c->slave_addr) sdsfree(c->slave_addr);\n                c->slave_addr = sdsdup(addr);\n            } else {\n                addReplyErrorFormat(c,\"REPLCONF ip-address provided by \"\n                    \"replica instance is too long: %zd bytes\", sdslen(addr));\n                return;\n            }\n        } else if (!strcasecmp(c->argv[j]->ptr,\"capa\")) {\n            /* Ignore capabilities not understood by this master. 
*/\n            if (!strcasecmp(c->argv[j+1]->ptr,\"eof\"))\n                c->slave_capa |= SLAVE_CAPA_EOF;\n            else if (!strcasecmp(c->argv[j+1]->ptr,\"psync2\"))\n                c->slave_capa |= SLAVE_CAPA_PSYNC2;\n            else if (!strcasecmp(c->argv[j+1]->ptr,\"rdb-channel-repl\") && server.repl_rdb_channel &&\n                     server.repl_diskless_sync) {\n                c->slave_capa |= SLAVE_CAPA_RDB_CHANNEL_REPL;\n            }\n        } else if (!strcasecmp(c->argv[j]->ptr,\"ack\")) {\n            /* REPLCONF ACK is used by the slave to inform the master of the\n             * amount of replication stream that it processed so far. It is an\n             * internal-only command that normal clients should never use. */\n            long long offset;\n\n            if (!(c->flags & CLIENT_SLAVE)) return;\n            if ((getLongLongFromObject(c->argv[j+1], &offset) != C_OK))\n                return;\n            if (offset > c->repl_ack_off)\n                c->repl_ack_off = offset;\n            if (c->argc > j+3 && !strcasecmp(c->argv[j+2]->ptr,\"fack\")) {\n                if ((getLongLongFromObject(c->argv[j+3], &offset) != C_OK))\n                    return;\n                if (offset > c->repl_aof_off)\n                    c->repl_aof_off = offset;\n            }\n            c->repl_ack_time = server.unixtime;\n            /* If this was a diskless replication, we need to really put\n             * the slave online when the first ACK is received (which\n             * confirms the slave is online and ready to get more data). This\n             * allows for simpler and less CPU intensive EOF detection\n             * when streaming RDB files.\n             * There's a chance the ACK got to us before we detected that the\n             * bgsave is done (since that depends on cron ticks), so run a\n             * quick check first (instead of waiting for the next ACK). 
*/\n            if (server.child_type == CHILD_TYPE_RDB && c->replstate == SLAVE_STATE_WAIT_BGSAVE_END)\n                checkChildrenDone();\n            if (c->repl_start_cmd_stream_on_ack && c->replstate == SLAVE_STATE_ONLINE)\n                replicaStartCommandStream(c);\n            /* If state is send_bulk_and_stream, it means this is the main\n             * channel of the slave in rdbchannel replication. Normally, the\n             * slave will be put online after the rdb fork completes. There is\n             * a chance that the 'ack' is received before we detect the bgsave\n             * is done. */\n            if (c->replstate == SLAVE_STATE_SEND_BULK_AND_STREAM)\n                replicaPutOnline(c);\n            /* Note: this command does not reply anything! */\n            return;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"getack\")) {\n            /* REPLCONF GETACK is used by the master in order to request an\n             * ACK ASAP from the replica. */\n            if (server.masterhost && server.master) replicationSendAck();\n            return;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"rdb-only\")) {\n            /* REPLCONF RDB-ONLY is used to indicate that the client only\n             * wants the RDB snapshot, without the replication buffer. */\n            long rdb_only = 0;\n            if (getRangeLongFromObjectOrReply(c,c->argv[j+1],\n                    0,1,&rdb_only,NULL) != C_OK)\n                return;\n            if (rdb_only == 1) {\n                c->flags |= CLIENT_REPL_RDBONLY;\n                /* If the replica asks for RDB only, we can apply the background\n                 * RDB transfer optimization based on the configurations. 
*/\n                if (server.repl_rdb_channel && server.repl_diskless_sync)\n                    c->slave_req |= SLAVE_REQ_RDB_CHANNEL;\n            } else {\n                c->flags &= ~CLIENT_REPL_RDBONLY;\n                c->slave_req &= ~SLAVE_REQ_RDB_CHANNEL;\n            }\n        } else if (!strcasecmp(c->argv[j]->ptr,\"rdb-filter-only\")) {\n            /* REPLCONF RDB-FILTER-ONLY is used to define \"include\" filters\n             * for the RDB snapshot. Currently we only support a single\n             * include filter: \"functions\". In the future we may want to add\n             * other filters like key patterns, key types, non-volatile, module\n             * aux fields, ...\n             * We might want to add the complementing \"RDB-FILTER-EXCLUDE\" to\n             * filter out certain data. */\n            int filter_count, i;\n            sds *filters;\n            if (!(filters = sdssplitargs(c->argv[j+1]->ptr, &filter_count))) {\n                addReplyError(c, \"Missing rdb-filter-only values\");\n                return;\n            }\n            /* By default filter out all parts of the rdb */\n            c->slave_req |= SLAVE_REQ_RDB_EXCLUDE_DATA;\n            c->slave_req |= SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS;\n            for (i = 0; i < filter_count; i++) {\n                if (!strcasecmp(filters[i], \"functions\"))\n                    c->slave_req &= ~SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS;\n                else {\n                    addReplyErrorFormat(c, \"Unsupported rdb-filter-only option: %s\", (char*)filters[i]);\n                    sdsfreesplitres(filters, filter_count);\n                    return;\n                }\n            }\n            sdsfreesplitres(filters, filter_count);\n        } else if (!strcasecmp(c->argv[j]->ptr, \"rdb-channel\")) {\n            /* REPLCONF RDB-CHANNEL <0|1> marks this connection as the rdb\n             * channel of the replica. */\n            long rdb_channel = 0;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j + 1], 0, 1, &rdb_channel, NULL) != C_OK)\n                return;\n            if 
(rdb_channel == 1) {\n                c->flags |= CLIENT_REPL_RDB_CHANNEL;\n            } else {\n                c->flags &= ~CLIENT_REPL_RDB_CHANNEL;\n            }\n        } else if (!strcasecmp(c->argv[j]->ptr, \"main-ch-client-id\")) {\n            /* REPLCONF main-ch-client-id <client-id> is used to associate\n             * the current replica rdb channel with an existing main channel\n             * connection. */\n            long long client_id = 0;\n            client *main_ch;\n            if (getLongLongFromObjectOrReply(c, c->argv[j + 1], &client_id, NULL) != C_OK)\n                return;\n            main_ch = lookupClientByID(client_id);\n            if (!main_ch || main_ch->replstate != SLAVE_STATE_WAIT_RDB_CHANNEL) {\n                addReplyErrorFormat(c, \"Unrecognized RDB client id: %lld\", client_id);\n                return;\n            }\n            c->main_ch_client_id = (uint64_t)client_id;\n            /* Inherit the rdb-no-compress and rdb-no-checksum request from the main channel. 
*/\n            if (main_ch->slave_req & SLAVE_REQ_RDB_NO_COMPRESS)\n                c->slave_req |= SLAVE_REQ_RDB_NO_COMPRESS;\n            if (main_ch->slave_req & SLAVE_REQ_RDB_NO_CHECKSUM)\n                c->slave_req |= SLAVE_REQ_RDB_NO_CHECKSUM;\n        } else if (!strcasecmp(c->argv[j]->ptr, \"rdb-no-compress\")) {\n            /* REPLCONF RDB-NO-COMPRESS <0|1> requests an RDB snapshot without\n             * compression. */\n            long rdb_no_compress = 0;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j + 1], 0, 1, &rdb_no_compress, NULL) != C_OK)\n                return;\n            if (rdb_no_compress == 1) {\n                c->slave_req |= SLAVE_REQ_RDB_NO_COMPRESS;\n            } else {\n                c->slave_req &= ~SLAVE_REQ_RDB_NO_COMPRESS;\n            }\n        } else if (!strcasecmp(c->argv[j]->ptr, \"rdb-no-checksum\")) {\n            /* REPLCONF RDB-NO-CHECKSUM <0|1> requests an RDB snapshot without\n             * a checksum. */\n            long rdb_no_checksum = 0;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j + 1], 0, 1, &rdb_no_checksum, NULL) != C_OK)\n                return;\n            if (rdb_no_checksum == 1) {\n                c->slave_req |= SLAVE_REQ_RDB_NO_CHECKSUM;\n            } else {\n                c->slave_req &= ~SLAVE_REQ_RDB_NO_CHECKSUM;\n            }\n        } else {\n            addReplyErrorFormat(c,\"Unrecognized REPLCONF option: %s\",\n                (char*)c->argv[j]->ptr);\n            return;\n        }\n    }\n    addReply(c,shared.ok);\n}\n\n/* This function puts a replica in the online state, and should be called just\n * after a replica received the RDB file for the initial synchronization.\n *\n * It does a few things:\n * 1) Put the slave in ONLINE state.\n * 2) Update the count of \"good replicas\".\n * 3) Trigger the module event.\n *\n * Returns 0 if the replica should be disconnected, 1 otherwise. */\nint replicaPutOnline(client *slave) {\n    if (slave->flags & CLIENT_REPL_RDBONLY) {\n        slave->replstate = SLAVE_STATE_RDB_TRANSMITTED;\n        /* The client asked for RDB only so we should close it ASAP */\n        serverLog(LL_NOTICE,\n                  \"RDB 
transfer completed, rdb only replica (%s) should be disconnected asap\",\n                  replicationGetSlaveName(slave));\n        return 0;\n    }\n\n    /* Don't put migration destination client online. */\n    if (slave->flags & CLIENT_ASM_MIGRATING) return 0;\n\n    slave->replstate = SLAVE_STATE_ONLINE;\n    slave->repl_ack_time = server.unixtime; /* Prevent false timeout. */\n\n    refreshGoodSlavesCount();\n    /* Fire the replica change modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_REPLICA_CHANGE,\n                          REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE,\n                          NULL);\n    serverLog(LL_NOTICE,\"Synchronization with replica %s succeeded\",\n        replicationGetSlaveName(slave));\n    return 1;\n}\n\n/* This function should be called just after a replica received the RDB file\n * for the initial synchronization, and we are finally ready to send the\n * incremental stream of commands.\n *\n * It does a few things:\n * 1) Assert that the replica is not an RDB-only client, since such clients\n *    never receive the replication command stream and aren't valid replicas.\n * 2) Make sure the writable event is re-installed, since when calling the SYNC\n *    command we had no replies and it was disabled, and then we could\n *    accumulate output buffer data without sending it to the replica so it\n *    won't get mixed with the RDB stream. */\nvoid replicaStartCommandStream(client *slave) {\n    serverAssert(!(slave->flags & CLIENT_REPL_RDBONLY));\n    slave->repl_start_cmd_stream_on_ack = 0;\n\n    putClientInPendingWriteQueue(slave);\n}\n\n/* We call this function periodically to remove an RDB file that was\n * generated because of replication, in an instance that is otherwise\n * without any persistence. We don't want instances without persistence\n * to keep RDB files around, as this violates certain policies in certain\n * environments. 
*/\nvoid removeRDBUsedToSyncReplicas(void) {\n    /* If the feature is disabled, return ASAP but also clear the\n     * RDBGeneratedByReplication flag in case it was set. Otherwise if the\n     * feature was enabled, but gets disabled later with CONFIG SET, the\n     * flag may remain set to one: then next time the feature is re-enabled\n     * via CONFIG SET we have it set even if no RDB was generated\n     * because of replication recently. */\n    if (!server.rdb_del_sync_files) {\n        RDBGeneratedByReplication = 0;\n        return;\n    }\n\n    if (allPersistenceDisabled() && RDBGeneratedByReplication) {\n        client *slave;\n        listNode *ln;\n        listIter li;\n\n        int delrdb = 1;\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            slave = ln->value;\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START ||\n                slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END ||\n                slave->replstate == SLAVE_STATE_SEND_BULK)\n            {\n                delrdb = 0;\n                break; /* No need to check the other replicas. 
*/\n            }\n        }\n        if (delrdb) {\n            struct stat sb;\n            if (lstat(server.rdb_filename,&sb) != -1) {\n                RDBGeneratedByReplication = 0;\n                serverLog(LL_NOTICE,\n                    \"Removing the RDB file used to feed replicas \"\n                    \"in a persistence-less instance\");\n                bg_unlink(server.rdb_filename);\n            }\n        }\n    }\n}\n\n/* Close the repldbfd and reclaim the page cache if the client holds\n * the last reference to the replication DB. */\nvoid closeRepldbfd(client *myself) {\n    listNode *ln;\n    listIter li;\n    int reclaim = 1;\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        if (slave != myself && slave->replstate == SLAVE_STATE_SEND_BULK) {\n            reclaim = 0;\n            break;\n        }\n    }\n\n    if (reclaim) {\n        bioCreateCloseJob(myself->repldbfd, 0, 1);\n    } else {\n        close(myself->repldbfd);\n    }\n    myself->repldbfd = -1;\n}\n\nvoid sendBulkToSlave(connection *conn) {\n    client *slave = connGetPrivateData(conn);\n    char buf[PROTO_IOBUF_LEN];\n    ssize_t nwritten, buflen;\n\n    /* Before sending the RDB file, we send the preamble as configured by the\n     * replication process. Currently the preamble is just the bulk count of\n     * the file in the form \"$<length>\\r\\n\". 
*/\n    if (slave->replpreamble) {\n        nwritten = connWrite(conn,slave->replpreamble,sdslen(slave->replpreamble));\n        if (nwritten == -1) {\n            serverLog(LL_WARNING,\n                \"Write error sending RDB preamble to replica: %s\",\n                connGetLastError(conn));\n            freeClient(slave);\n            return;\n        }\n        atomicIncr(server.stat_net_repl_output_bytes, nwritten);\n        sdsrange(slave->replpreamble,nwritten,-1);\n        if (sdslen(slave->replpreamble) == 0) {\n            sdsfree(slave->replpreamble);\n            slave->replpreamble = NULL;\n            /* fall through sending data. */\n        } else {\n            return;\n        }\n    }\n\n    /* If the preamble was already transferred, send the RDB bulk data. */\n    if (lseek(slave->repldbfd,slave->repldboff,SEEK_SET) == -1) {\n        serverLog(LL_WARNING,\"Failed to lseek the RDB file to offset %lld for replica %s: %s\",\n            (long long)slave->repldboff, replicationGetSlaveName(slave), strerror(errno));\n        freeClient(slave);\n        return;\n    }\n    buflen = read(slave->repldbfd,buf,PROTO_IOBUF_LEN);\n    if (buflen <= 0) {\n        serverLog(LL_WARNING,\"Read error sending DB to replica: %s\",\n            (buflen == 0) ? 
\"premature EOF\" : strerror(errno));\n        freeClient(slave);\n        return;\n    }\n    if ((nwritten = connWrite(conn,buf,buflen)) == -1) {\n        if (connGetState(conn) != CONN_STATE_CONNECTED) {\n            serverLog(LL_WARNING,\"Write error sending DB to replica: %s\",\n                connGetLastError(conn));\n            freeClient(slave);\n        }\n        return;\n    }\n    slave->repldboff += nwritten;\n    atomicIncr(server.stat_net_repl_output_bytes, nwritten);\n    if (slave->repldboff == slave->repldbsize) {\n        closeRepldbfd(slave);\n        connSetWriteHandler(slave->conn,NULL);\n        if (!replicaPutOnline(slave)) {\n            freeClient(slave);\n            return;\n        }\n        replicaStartCommandStream(slave);\n    }\n}\n\n/* Remove one write handler from the list of connections waiting to be writable\n * during rdb pipe transfer. */\nvoid rdbPipeWriteHandlerConnRemoved(struct connection *conn) {\n    if (!connHasWriteHandler(conn))\n        return;\n    connSetWriteHandler(conn, NULL);\n    client *slave = connGetPrivateData(conn);\n    slave->repl_last_partial_write = 0;\n    server.rdb_pipe_numconns_writing--;\n    /* if there are no more writes for now for this conn, or write error: */\n    if (server.rdb_pipe_numconns_writing == 0) {\n        if (aeCreateFileEvent(server.el, server.rdb_pipe_read, AE_READABLE, rdbPipeReadHandler,NULL) == AE_ERR) {\n            serverPanic(\"Unrecoverable error creating server.rdb_pipe_read file event.\");\n        }\n    }\n}\n\n/* Called in diskless master during transfer of data from the rdb pipe, when\n * the replica becomes writable again. 
*/\nvoid rdbPipeWriteHandler(struct connection *conn) {\n    serverAssert(server.rdb_pipe_bufflen>0);\n    client *slave = connGetPrivateData(conn);\n    ssize_t nwritten;\n    if ((nwritten = connWrite(conn, server.rdb_pipe_buff + slave->repldboff,\n                              server.rdb_pipe_bufflen - slave->repldboff)) == -1)\n    {\n        if (connGetState(conn) == CONN_STATE_CONNECTED)\n            return; /* equivalent to EAGAIN */\n        serverLog(LL_WARNING,\"Write error sending DB to replica: %s\",\n            connGetLastError(conn));\n        freeClient(slave);\n        return;\n    } else {\n        slave->repldboff += nwritten;\n        atomicIncr(server.stat_net_repl_output_bytes, nwritten);\n        if (slave->repldboff < server.rdb_pipe_bufflen) {\n            slave->repl_last_partial_write = server.unixtime;\n            return; /* more data to write.. */\n        }\n    }\n    rdbPipeWriteHandlerConnRemoved(conn);\n}\n\n/* Called in diskless master, when there's data to read from the child's rdb pipe */\nvoid rdbPipeReadHandler(struct aeEventLoop *eventLoop, int fd, void *clientData, int mask) {\n    UNUSED(mask);\n    UNUSED(clientData);\n    UNUSED(eventLoop);\n    int i;\n    if (!server.rdb_pipe_buff)\n        server.rdb_pipe_buff = zmalloc(PROTO_IOBUF_LEN);\n    serverAssert(server.rdb_pipe_numconns_writing==0);\n\n    while (1) {\n        server.rdb_pipe_bufflen = read(fd, server.rdb_pipe_buff, PROTO_IOBUF_LEN);\n        if (server.rdb_pipe_bufflen < 0) {\n            if (errno == EAGAIN || errno == EWOULDBLOCK)\n                return;\n            serverLog(LL_WARNING,\"Diskless rdb transfer, read error sending DB to replicas: %s\", strerror(errno));\n            for (i=0; i < server.rdb_pipe_numconns; i++) {\n                connection *conn = server.rdb_pipe_conns[i];\n                if (!conn)\n                    continue;\n                client *slave = connGetPrivateData(conn);\n                freeClient(slave);\n             
   server.rdb_pipe_conns[i] = NULL;\n            }\n            killRDBChild();\n            return;\n        }\n\n        if (server.rdb_pipe_bufflen == 0) {\n            /* EOF - write end was closed. */\n            int stillUp = 0;\n            aeDeleteFileEvent(server.el, server.rdb_pipe_read, AE_READABLE);\n            for (i=0; i < server.rdb_pipe_numconns; i++)\n            {\n                connection *conn = server.rdb_pipe_conns[i];\n                if (!conn)\n                    continue;\n                stillUp++;\n            }\n            serverLog(LL_NOTICE,\"Diskless rdb transfer, done reading from pipe, %d replicas still up.\", stillUp);\n            /* Now that the replicas have finished reading, notify the child that it's safe to exit. \n             * When the server detects the child has exited, it can mark the replica as online, and\n             * start streaming the replication buffers. */\n            close(server.rdb_child_exit_pipe);\n            server.rdb_child_exit_pipe = -1;\n            return;\n        }\n\n        int stillAlive = 0;\n        for (i=0; i < server.rdb_pipe_numconns; i++)\n        {\n            ssize_t nwritten;\n            connection *conn = server.rdb_pipe_conns[i];\n            if (!conn)\n                continue;\n\n            client *slave = connGetPrivateData(conn);\n            if ((nwritten = connWrite(conn, server.rdb_pipe_buff, server.rdb_pipe_bufflen)) == -1) {\n                if (connGetState(conn) != CONN_STATE_CONNECTED) {\n                    serverLog(LL_WARNING,\"Diskless rdb transfer, write error sending DB to replica: %s\",\n                        connGetLastError(conn));\n                    freeClient(slave);\n                    server.rdb_pipe_conns[i] = NULL;\n                    continue;\n                }\n                /* An error and still in connected state, is equivalent to EAGAIN */\n                slave->repldboff = 0;\n            } else {\n                /* Note: when 
using diskless replication, 'repldboff' is the offset\n                 * within 'rdb_pipe_buff' that was sent, rather than the offset\n                 * within the entire RDB. */\n                slave->repldboff = nwritten;\n                atomicIncr(server.stat_net_repl_output_bytes, nwritten);\n            }\n            /* If we were unable to write all the data to one of the replicas,\n             * set up a write handler (and disable the pipe read handler, below). */\n            if (nwritten != server.rdb_pipe_bufflen) {\n                slave->repl_last_partial_write = server.unixtime;\n                server.rdb_pipe_numconns_writing++;\n                connSetWriteHandler(conn, rdbPipeWriteHandler);\n            }\n            stillAlive++;\n        }\n\n        if (stillAlive == 0) {\n            serverLog(LL_WARNING,\"Diskless rdb transfer, last replica dropped, killing fork child.\");\n            /* Avoid deleting events after killRDBChild as it may trigger new bgsaves for other replicas. */\n            aeDeleteFileEvent(server.el, server.rdb_pipe_read, AE_READABLE);\n            killRDBChild();\n            break;\n        }\n        /* Remove the pipe read handler if at least one write handler was set. */\n        else if (server.rdb_pipe_numconns_writing) {\n            aeDeleteFileEvent(server.el, server.rdb_pipe_read, AE_READABLE);\n            break;\n        }\n    }\n}\n\n/* This function is called at the end of every background saving.\n *\n * The argument bgsaveerr is C_OK if the background saving succeeded,\n * otherwise C_ERR is passed to the function.\n * The 'type' argument is the type of the child that terminated\n * (if it had a disk or socket target). */\nvoid updateSlavesWaitingBgsave(int bgsaveerr, int type) {\n    listNode *ln;\n    listIter li;\n\n    /* Note: there's a chance we got here from within the REPLCONF ACK command\n     * so we must avoid using freeClient, otherwise we'll crash on our way up. 
*/\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n\n        /* We can get here via freeClient()->killRDBChild()->checkChildrenDone(). skip disconnected slaves. */\n        if (!slave->conn) continue;\n\n        if (slave->replstate == SLAVE_STATE_SEND_BULK_AND_STREAM) {\n            /* This is the main channel of the slave that received the RDB.\n             * Put it online if RDB delivery is successful. */\n            if (bgsaveerr == C_OK) {\n                /* Notify the task that the snapshot bulk delivery is done */\n                if (slave->flags & CLIENT_ASM_MIGRATING)\n                    asmSlotSnapshotSucceed(slave->task);\n                replicaPutOnline(slave);\n            } else {\n                freeClientAsync(slave);\n            }\n        } else if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END) {\n            struct redis_stat buf;\n\n            if (bgsaveerr != C_OK) {\n                /* Notify the task that the snapshot bulk delivery failed */\n                if (slave->flags & CLIENT_ASM_MIGRATING)\n                    asmSlotSnapshotFailed(slave->task);\n                freeClientAsync(slave);\n                serverLog(LL_WARNING,\"SYNC failed. BGSAVE child returned an error\");\n                continue;\n            }\n\n            /* If this was an RDB on disk save, we have to prepare to send\n             * the RDB from disk to the slave socket. Otherwise if this was\n             * already an RDB -> Slaves socket transfer, used in the case of\n             * diskless replication, our work is trivial, we can just put\n             * the slave online. 
*/\n            if (type == RDB_CHILD_TYPE_SOCKET) {\n                /* Slots snapshot */\n                if (slave->slave_req & SLAVE_REQ_SLOTS_SNAPSHOT) {\n                    serverLog(LL_NOTICE, \"Streamed slots snapshot transfer succeeded\");\n                    freeClientAsync(slave);\n                    continue;\n                }\n\n                serverLog(LL_NOTICE,\n                    \"Streamed RDB transfer with replica %s succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming\",\n                        replicationGetSlaveName(slave));\n                /* Note: we wait for a REPLCONF ACK message from the replica in\n                 * order to really put it online (install the write handler\n                 * so that the accumulated data can be transferred). However\n                 * we change the replication state ASAP, since our slave\n                 * is technically online now.\n                 *\n                 * So things work like this:\n                 *\n                 * 1. We end transferring the RDB file via socket.\n                 * 2. The replica is put ONLINE but the write handler\n                 *    is not installed.\n                 * 3. The replica however goes really online, and pings us\n                 *    back via REPLCONF ACK commands.\n                 * 4. Now we finally install the write handler, and send\n                 *    the buffers accumulated so far to the replica.\n                 *\n                 * But why do we do that? Because the replica, when we stream\n                 * the RDB directly via the socket, must detect the RDB\n                 * EOF (end of file), that is a special random string at the\n                 * end of the RDB (for streamed RDBs we don't know the length\n                 * in advance). Detecting such a final EOF string is much\n                 * simpler and less CPU intensive if no more data is sent\n                 * after such final EOF. 
So we don't want to glue the end of\n                 * the RDB transfer with the start of the other replication\n                 * data. */\n                if (!replicaPutOnline(slave)) {\n                    freeClientAsync(slave);\n                    continue;\n                }\n                slave->repl_start_cmd_stream_on_ack = 1;\n            } else {\n                if ((slave->repldbfd = open(server.rdb_filename,O_RDONLY)) == -1 ||\n                    redis_fstat(slave->repldbfd,&buf) == -1) {\n                    freeClientAsync(slave);\n                    serverLog(LL_WARNING,\"SYNC failed. Can't open/stat DB after BGSAVE: %s\", strerror(errno));\n                    continue;\n                }\n                slave->repldboff = 0;\n                slave->repldbsize = buf.st_size;\n                slave->replstate = SLAVE_STATE_SEND_BULK;\n                slave->replpreamble = sdscatprintf(sdsempty(),\"$%llu\\r\\n\",\n                    (unsigned long long) slave->repldbsize);\n\n                connSetWriteHandler(slave->conn,NULL);\n                if (connSetWriteHandler(slave->conn,sendBulkToSlave) == C_ERR) {\n                    freeClientAsync(slave);\n                    continue;\n                }\n            }\n        }\n    }\n}\n\n/* Change the current instance replication ID with a new, random one.\n * This will prevent successful PSYNCs between this master and other\n * slaves, so the command should be called when something happens that\n * alters the current story of the dataset. */\nvoid changeReplicationId(void) {\n    getRandomHexChars(server.replid,CONFIG_RUN_ID_SIZE);\n    server.replid[CONFIG_RUN_ID_SIZE] = '\\0';\n}\n\n/* Clear (invalidate) the secondary replication ID. This happens, for\n * example, after a full resynchronization, when we start a new replication\n * history. 
*/\nvoid clearReplicationId2(void) {\n    memset(server.replid2,'0',sizeof(server.replid));\n    server.replid2[CONFIG_RUN_ID_SIZE] = '\\0';\n    server.second_replid_offset = -1;\n}\n\n/* Use the current replication ID / offset as secondary replication\n * ID, and change the current one in order to start a new history.\n * This should be used when an instance is switched from slave to master\n * so that it can serve PSYNC requests performed using the master\n * replication ID. */\nvoid shiftReplicationId(void) {\n    memcpy(server.replid2,server.replid,sizeof(server.replid));\n    /* We set the second replid offset to the master offset + 1, since\n     * the slave will ask for the first byte it has not yet received, so\n     * we need to add one to the offset: for example if, as a slave, we are\n     * sure we have the same history as the master for 50 bytes, after we\n     * are turned into a master, we can accept a PSYNC request with offset\n     * 51, since the slave asking has the same history up to the 50th\n     * byte, and is asking for the new bytes starting at offset 51. */\n    server.second_replid_offset = server.master_repl_offset+1;\n    changeReplicationId();\n    serverLog(LL_NOTICE,\"Setting secondary replication ID to %s, valid up to offset: %lld. New replication ID is %s\", server.replid2, server.second_replid_offset, server.replid);\n}\n\n/* ----------------------------------- SLAVE -------------------------------- */\n\n/* Replication: Replica side. */\nvoid slaveGetPortStr(char *buf, size_t size) {\n    long long port;\n    if (server.slave_announce_port) {\n        port = server.slave_announce_port;\n    } else if (server.tls_replication && server.tls_port) {\n        port = server.tls_port;\n    } else {\n        port = server.port;\n    }\n    ll2string(buf, size, port);\n}\n\n/* Returns 1 if the given replication state is a handshake state,\n * 0 otherwise. 
*/\nint slaveIsInHandshakeState(void) {\n    return server.repl_state >= REPL_STATE_RECEIVE_PING_REPLY &&\n           server.repl_state <= REPL_STATE_RECEIVE_PSYNC_REPLY;\n}\n\n/* Prevent the master from detecting that the slave is timing out while\n * loading the RDB file in initial synchronization. We send a single newline\n * character that is valid protocol but is guaranteed to either be sent\n * entirely or not at all, since the byte is indivisible.\n *\n * The function is called in two contexts: while we flush the current\n * data with emptyData(), and while we load the new data received as an\n * RDB file from the master. */\nvoid replicationSendNewlineToMaster(void) {\n    static time_t newline_sent;\n    if (time(NULL) != newline_sent) {\n        newline_sent = time(NULL);\n        /* Pinging back in this stage is best-effort. */\n        if (server.repl_transfer_s) connWrite(server.repl_transfer_s, \"\\n\", 1);\n    }\n}\n\n/* Callback used by emptyData() while flushing away old data to load\n * the new dataset received from the master or to clear the partial db if\n * loading fails. */\nvoid replicationEmptyDbCallback(dict *d) {\n    UNUSED(d);\n    if (server.repl_state == REPL_STATE_TRANSFER)\n        replicationSendNewlineToMaster();\n\n    processEventsWhileBlocked();\n}\n\n/* Function to flush old db or the partial db on error. */\nstatic void rdbLoadEmptyDbFunc(void) {\n    serverAssert(server.loading);\n\n    serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Flushing old data\");\n    int empty_db_flags = server.repl_slave_lazy_flush ? EMPTYDB_ASYNC :\n                                                        EMPTYDB_NO_FLAGS;\n\n    emptyData(-1, empty_db_flags, replicationEmptyDbCallback);\n}\n\n/* Once we have a link with the master and the synchronization was\n * performed, this function materializes the master client we store\n * at server.master, starting from the specified file descriptor. 
*/\nvoid replicationCreateMasterClient(connection *conn, int dbid) {\n    server.master = createClient(conn);\n    if (conn)\n        connSetReadHandler(server.master->conn, readQueryFromClient);\n\n    /**\n     * Important note:\n     * The CLIENT_DENY_BLOCKING flag is not, and should not, be set here.\n     * For commands like BLPOP, it makes no sense to block the master\n     * connection, and such a blocking attempt would probably cause a deadlock\n     * and break replication. We consider such a thing as a bug because\n     * commands like BLPOP should never be sent on the replication link.\n     * A possible use-case for blocking the replication link is if a module wants\n     * to pass the execution to a background thread and unblock after the\n     * execution is done. This is the reason why we allow blocking the replication\n     * connection. */\n    server.master->flags |= CLIENT_MASTER;\n\n    /* Allocate a private query buffer for the master client instead of using the reusable query buffer.\n     * This is done because the master's query buffer data needs to be preserved for our sub-replicas to use. */\n    server.master->querybuf = sdsempty();\n    server.master->authenticated = 1;\n    server.master->reploff = server.master_initial_offset;\n    server.master->read_reploff = server.master->reploff;\n    server.master->user = NULL; /* This client can do everything. */\n    memcpy(server.master->replid, server.master_replid,\n        sizeof(server.master_replid));\n    /* If master offset is set to -1, this master is old and is not\n     * PSYNC capable, so we flag it accordingly. 
*/\n    if (server.master->reploff == -1)\n        server.master->flags |= CLIENT_PRE_PSYNC;\n    if (dbid != -1) selectDb(server.master,dbid);\n}\n\nstatic int useDisklessLoad(void) {\n    /* compute boolean decision to use diskless load */\n    int enabled = server.repl_diskless_load == REPL_DISKLESS_LOAD_ALWAYS || server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB ||\n           (server.repl_diskless_load == REPL_DISKLESS_LOAD_WHEN_DB_EMPTY && dbTotalServerKeyCount()==0);\n\n    if (enabled) {\n        /* Check all modules handle read errors, otherwise it's not safe to use diskless load. */\n        if (server.repl_diskless_load != REPL_DISKLESS_LOAD_ALWAYS && !moduleAllDatatypesHandleErrors()) {\n            serverLog(LL_NOTICE,\n                \"Skipping diskless-load because there are modules that don't handle read errors.\");\n            enabled = 0;\n        }\n        /* Check all modules handle async replication, otherwise it's not safe to use diskless load. */\n        else if (server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB && !moduleAllModulesHandleReplAsyncLoad()) {\n            serverLog(LL_NOTICE,\n                \"Skipping diskless-load because there are modules that are not aware of async replication.\");\n            enabled = 0;\n        }\n    }\n    return enabled;\n}\n\n/* Helper function for readSyncBulkPayload() to initialize tempDb\n * before socket-loading the new db from master. The tempDb may be populated\n * by swapMainDbWithTempDb or freed by disklessLoadDiscardTempDb later. */\nredisDb *disklessLoadInitTempDb(void) {\n    return initTempDb();\n}\n\n/* Helper function for readSyncBulkPayload() to discard our tempDb\n * when the loading succeeded or failed. 
*/\nvoid disklessLoadDiscardTempDb(redisDb *tempDb) {\n    discardTempDb(tempDb);\n}\n\n/* If we know we got an entirely different data set from our master,\n * we have no way to incrementally feed our replicas after that.\n * We want our replicas to resync with us as well, if we have any sub-replicas.\n * This is used in readSyncBulkPayload() in places where we have just finished\n * transferring the db. */\nvoid replicationAttachToNewMaster(void) {\n    /* The replica starts to apply data from the new master, so we must discard\n     * the cached master structure. */\n    serverAssert(server.master == NULL);\n    replicationDiscardCachedMaster();\n\n    disconnectSlaves(); /* Force our replicas to resync with us as well. */\n    freeReplicationBacklog(); /* Don't allow our chained replicas to PSYNC. */\n}\n\n/* Asynchronously read the SYNC payload we receive from a master */\n#define REPL_MAX_WRITTEN_BEFORE_FSYNC (1024*1024*8) /* 8 MB */\nvoid readSyncBulkPayload(connection *conn) {\n    char buf[PROTO_IOBUF_LEN];\n    ssize_t nread, readlen, nwritten;\n    int use_diskless_load = useDisklessLoad();\n    int rdbchannel = (conn == server.repl_rdb_transfer_s);\n    int empty_db_flags = server.repl_slave_lazy_flush ? EMPTYDB_ASYNC :\n                                                        EMPTYDB_NO_FLAGS;\n    off_t left;\n\n    /* Static vars used to hold the EOF mark, and the last bytes received\n     * from the server: when they match, we reached the end of the transfer. */\n    static char eofmark[CONFIG_RUN_ID_SIZE];\n    static char lastbytes[CONFIG_RUN_ID_SIZE];\n    static int usemark = 0;\n\n    /* If repl_transfer_size == -1 we still have to read the bulk length\n     * from the master reply. 
*/\n    if (server.repl_transfer_size == -1) {\n        nread = connSyncReadLine(conn,buf,1024,server.repl_syncio_timeout*1000);\n        if (nread == -1) {\n            serverLog(LL_WARNING,\n                \"I/O error reading bulk count from MASTER: %s\",\n                connGetLastError(conn));\n            goto error;\n        } else {\n            /* nread here is returned by connSyncReadLine(), which calls syncReadLine() and\n             * converts \"\\r\\n\" to '\\0', so 1 byte is lost. */\n            atomicIncr(server.stat_net_repl_input_bytes, nread+1);\n        }\n\n        if (buf[0] == '-') {\n            serverLog(LL_WARNING,\n                \"MASTER aborted replication with an error: %s\",\n                buf+1);\n            goto error;\n        } else if (buf[0] == '\\0') {\n            /* At this stage just a newline works as a PING in order to keep\n             * the connection alive. So we refresh our last interaction\n             * timestamp. */\n            server.repl_transfer_lastio = server.unixtime;\n            return;\n        } else if (buf[0] != '$') {\n            serverLog(LL_WARNING,\"Bad protocol from MASTER, the first byte is not '$' (we received '%s'), are you sure the host and port are right?\", buf);\n            goto error;\n        }\n\n        /* There are two possible forms for the bulk payload. One is the\n         * usual $<count> bulk format. The other is used for diskless transfers\n         * when the master does not know beforehand the size of the file to\n         * transfer. In the latter case, the following format is used:\n         *\n         * $EOF:<40 bytes delimiter>\n         *\n         * At the end of the file the announced delimiter is transmitted. The\n         * delimiter is long and random enough that the probability of a\n         * collision with the actual file content can be ignored. 
*/\n        if (strncmp(buf+1,\"EOF:\",4) == 0 && strlen(buf+5) >= CONFIG_RUN_ID_SIZE) {\n            usemark = 1;\n            memcpy(eofmark,buf+5,CONFIG_RUN_ID_SIZE);\n            memset(lastbytes,0,CONFIG_RUN_ID_SIZE);\n            /* Set any repl_transfer_size to avoid entering this code path\n             * at the next call. */\n            server.repl_transfer_size = 0;\n            serverLog(LL_NOTICE,\n                \"MASTER <-> REPLICA sync: receiving streamed RDB from master with EOF %s\",\n                use_diskless_load? \"to parser\":\"to disk\");\n        } else {\n            usemark = 0;\n            server.repl_transfer_size = strtol(buf+1,NULL,10);\n            serverLog(LL_NOTICE,\n                \"MASTER <-> REPLICA sync: receiving %lld bytes from master %s\",\n                (long long) server.repl_transfer_size,\n                use_diskless_load? \"to parser\":\"to disk\");\n        }\n        return;\n    }\n\n    if (!use_diskless_load) {\n        /* Read the data from the socket, store it to a file and search\n         * for the EOF. */\n        if (usemark) {\n            readlen = sizeof(buf);\n        } else {\n            left = server.repl_transfer_size - server.repl_transfer_read;\n            readlen = (left < (signed)sizeof(buf)) ? left : (signed)sizeof(buf);\n        }\n\n        nread = connRead(conn,buf,readlen);\n        if (nread <= 0) {\n            if (connGetState(conn) == CONN_STATE_CONNECTED) {\n                /* equivalent to EAGAIN */\n                return;\n            }\n            serverLog(LL_WARNING,\"I/O error trying to sync with MASTER: %s\",\n                (nread == -1) ? connGetLastError(conn) : \"connection lost\");\n            cancelReplicationHandshake(1);\n            return;\n        }\n        atomicIncr(server.stat_net_repl_input_bytes, nread);\n\n        /* When a mark is used, we want to detect EOF asap in order to avoid\n         * writing the EOF mark into the file... 
*/\n        int eof_reached = 0;\n\n        if (usemark) {\n            /* Update the last bytes array, and check if it matches our\n             * delimiter. */\n            if (nread >= CONFIG_RUN_ID_SIZE) {\n                memcpy(lastbytes,buf+nread-CONFIG_RUN_ID_SIZE,\n                       CONFIG_RUN_ID_SIZE);\n            } else {\n                int rem = CONFIG_RUN_ID_SIZE-nread;\n                memmove(lastbytes,lastbytes+nread,rem);\n                memcpy(lastbytes+rem,buf,nread);\n            }\n            if (memcmp(lastbytes,eofmark,CONFIG_RUN_ID_SIZE) == 0)\n                eof_reached = 1;\n        }\n\n        /* Update the last I/O time for the replication transfer (used in\n         * order to detect timeouts during replication), and write what we\n         * got from the socket to the dump file on disk. */\n        server.repl_transfer_lastio = server.unixtime;\n        if ((nwritten = write(server.repl_transfer_fd,buf,nread)) != nread) {\n            serverLog(LL_WARNING,\n                \"Write error or short write writing to the DB dump file \"\n                \"needed for MASTER <-> REPLICA synchronization: %s\",\n                (nwritten == -1) ? strerror(errno) : \"short write\");\n            goto error;\n        }\n        server.repl_transfer_read += nread;\n\n        /* Delete the last 40 bytes from the file if we reached EOF. */\n        if (usemark && eof_reached) {\n            if (ftruncate(server.repl_transfer_fd,\n                server.repl_transfer_read - CONFIG_RUN_ID_SIZE) == -1)\n            {\n                serverLog(LL_WARNING,\n                    \"Error truncating the RDB file received from the master \"\n                    \"for SYNC: %s\", strerror(errno));\n                goto error;\n            }\n        }\n\n        /* Sync data on disk from time to time, otherwise at the end of the\n         * transfer we may suffer a big delay as the memory buffers are copied\n         * into the actual disk. 
*/\n        if (server.repl_transfer_read >=\n            server.repl_transfer_last_fsync_off + REPL_MAX_WRITTEN_BEFORE_FSYNC)\n        {\n            off_t sync_size = server.repl_transfer_read -\n                              server.repl_transfer_last_fsync_off;\n            rdb_fsync_range(server.repl_transfer_fd,\n                server.repl_transfer_last_fsync_off, sync_size);\n            server.repl_transfer_last_fsync_off += sync_size;\n        }\n\n        /* Check if the transfer is now complete */\n        if (!usemark) {\n            if (server.repl_transfer_read == server.repl_transfer_size)\n                eof_reached = 1;\n        }\n\n        /* If the transfer is not yet complete, we need to read more, so\n         * return ASAP and wait for the handler to be called again. */\n        if (!eof_reached) return;\n    }\n\n    /* We reach this point in one of the following cases:\n     *\n     * 1. The replica is using diskless replication, that is, it reads data\n     *    directly from the socket into Redis memory, without using\n     *    a temporary RDB file on disk. In that case we just block and\n     *    read everything from the socket.\n     *\n     * 2. Or when we are done reading from the socket into the RDB file, in\n     *    which case we just want to load the RDB file into memory. */\n\n    /* We need to stop any AOF rewriting child before flushing and parsing\n     * the RDB, otherwise we'll create a copy-on-write disaster. */\n    if (server.aof_state != AOF_OFF) stopAppendOnly();\n    /* Also try to stop an RDB save child before flushing and parsing the RDB:\n     * 1. Ensure the background save doesn't overwrite the synced data after it is loaded.\n     * 2. Avoid a copy-on-write disaster. 
*/\n    if (server.child_type == CHILD_TYPE_RDB) {\n        if (!use_diskless_load) {\n            serverLog(LL_NOTICE,\n                \"Replica is about to load the RDB file received from the \"\n                \"master, but there is a pending RDB child running. \"\n                \"Killing process %ld and removing its temp file to avoid \"\n                \"any race\",\n                (long) server.child_pid);\n        }\n        killRDBChild();\n    }\n\n    /* Attach to the new master immediately if we are not using swapdb. */\n    if (!use_diskless_load || server.repl_diskless_load != REPL_DISKLESS_LOAD_SWAPDB)\n        replicationAttachToNewMaster();\n\n    /* Before loading the DB into memory we need to delete the readable\n     * handler, otherwise it will get called recursively since\n     * rdbLoad() will call the event loop to process events from time to\n     * time for non blocking loading. */\n    connSetReadHandler(conn, NULL);\n    \n    serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Loading DB in memory\");\n    rdbSaveInfo rsi = RDB_SAVE_INFO_INIT;\n    if (use_diskless_load) {\n        rio rdb;\n        redisDb *dbarray;\n        functionsLibCtx* functions_lib_ctx;\n        int asyncLoading = 0;\n\n        if (server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {\n            moduleFireServerEvent(REDISMODULE_EVENT_REPL_ASYNC_LOAD,\n                                  REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED,\n                                  NULL);\n            /* Async loading means we continue serving read commands during full resync, and\n             * \"swap\" the new db with the old db only when loading is done.\n             * It is enabled only on SWAPDB diskless replication when master replication ID hasn't changed,\n             * because in that state the old content of the db represents a different point in time of the same\n             * data set we're currently receiving from the master. 
*/\n            if (memcmp(server.replid, server.master_replid, CONFIG_RUN_ID_SIZE) == 0) {\n                asyncLoading = 1;\n            }\n        }\n\n        /* Set disklessLoadingRio before calling emptyData() which may yield\n         * back to networking. */\n        rioInitWithConn(&rdb,conn,server.repl_transfer_size);\n        disklessLoadingRio = &rdb;\n\n        /* Disable checksum verification when diskless on both master and replica.\n         * The RDB checksum is designed to detect disk corruption, but if the data\n         * never touched disk, we can skip verification. */\n        if (usemark) server.loading_skip_checksum = 1;\n\n        /* Empty db */\n        loadingSetFlags(NULL, server.repl_transfer_size, asyncLoading);\n        if (server.repl_diskless_load != REPL_DISKLESS_LOAD_SWAPDB) {\n            serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Flushing old data\");\n            /* Note that inside loadingSetFlags(), server.loading is set.\n             * replicationEmptyDbCallback() may yield back to event-loop to\n             * reply -LOADING. */\n            emptyData(-1, empty_db_flags, replicationEmptyDbCallback);\n        }\n        loadingFireEvent(RDBFLAGS_REPLICATION);\n\n        if (server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {\n            dbarray = disklessLoadInitTempDb();\n            functions_lib_ctx = functionsLibCtxCreate();\n        } else {\n            dbarray = server.db;\n            functions_lib_ctx = functionsLibCtxGetCurrent();\n            functionsLibCtxClear(functions_lib_ctx);\n        }\n\n        /* Put the socket in blocking mode to simplify RDB transfer.\n         * We'll restore it when the RDB is received. 
*/\n        connBlock(conn);\n        connRecvTimeout(conn, server.repl_timeout*1000);\n\n        int loadingFailed = 0;\n        rdbLoadingCtx loadingCtx = { .dbarray = dbarray, .functions_lib_ctx = functions_lib_ctx };\n        if (rdbLoadRioWithLoadingCtx(&rdb,RDBFLAGS_REPLICATION,&rsi,&loadingCtx) != C_OK) {\n            /* RDB loading failed. */\n            serverLog(LL_WARNING,\n                      \"Failed trying to load the MASTER synchronization DB \"\n                      \"from socket, check server logs.\");\n            loadingFailed = 1;\n        } else if (usemark) {\n            /* Verify the end mark is correct. */\n            if (!rioRead(&rdb, buf, CONFIG_RUN_ID_SIZE) ||\n                memcmp(buf, eofmark, CONFIG_RUN_ID_SIZE) != 0)\n            {\n                serverLog(LL_WARNING, \"Replication stream EOF marker is broken\");\n                loadingFailed = 1;\n            }\n        }\n        disklessLoadingRio = NULL;\n\n        if (loadingFailed) {\n            rioFreeConn(&rdb, NULL);\n\n            if (server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {\n                /* Discard potentially partially loaded tempDb. */\n                moduleFireServerEvent(REDISMODULE_EVENT_REPL_ASYNC_LOAD,\n                                      REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED,\n                                      NULL);\n\n                disklessLoadDiscardTempDb(dbarray);\n                functionsLibCtxFree(functions_lib_ctx);\n                serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Discarding temporary DB in background\");\n            } else {\n                /* Remove the half-loaded data in case we started with an empty replica. */\n                emptyData(-1,empty_db_flags,replicationEmptyDbCallback);\n            }\n\n            /* Note that replicationEmptyDbCallback() may yield back to event\n             * loop to reply -LOADING if flushing the db takes a long time. 
So,\n             * stopLoading() must be called after emptyData() above. */\n            stopLoading(0);\n\n            /* This must be called after stopLoading(0) as it checks loading\n             * flag in case of rdbchannel replication. */\n            cancelReplicationHandshake(1);\n\n            /* Note that there's no point in restarting the AOF on SYNC\n             * failure, it'll be restarted when sync succeeds or the replica\n             * gets promoted. */\n            return;\n        }\n\n        /* RDB loading succeeded if we reach this point. */\n        if (server.repl_diskless_load == REPL_DISKLESS_LOAD_SWAPDB) {\n            /* Cancel all ASM trim jobs as we are about to swap the main db. */\n            asmCancelTrimJobs();\n            /* We will soon swap main db with tempDb and replicas will start\n             * to apply data from new master, we must discard the cached\n             * master structure and force resync of sub-replicas. */\n            replicationAttachToNewMaster();\n\n            serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Swapping active DB with loaded DB\");\n            swapMainDbWithTempDb(dbarray);\n\n            /* swap existing functions ctx with the temporary one */\n            functionsLibCtxSwapWithCurrent(functions_lib_ctx);\n\n            moduleFireServerEvent(REDISMODULE_EVENT_REPL_ASYNC_LOAD,\n                        REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED,\n                        NULL);\n\n            /* Delete the old db as it's useless now. */\n            disklessLoadDiscardTempDb(dbarray);\n            serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Discarding old DB in background\");\n        }\n\n        /* Inform about db change, as replication was diskless and didn't cause a save. */\n        server.dirty++;\n\n        stopLoading(1);\n\n        /* Cleanup and restore the socket to the original state to continue\n         * with the normal replication. 
*/\n        rioFreeConn(&rdb, NULL);\n        connNonBlock(conn);\n        connRecvTimeout(conn,0);\n    } else {\n\n        /* Make sure the new file (also used for persistence) is fully synced\n         * (not covered by earlier calls to rdb_fsync_range). */\n        if (fsync(server.repl_transfer_fd) == -1) {\n            serverLog(LL_WARNING,\n                \"Failed trying to sync the temp DB to disk in \"\n                \"MASTER <-> REPLICA synchronization: %s\",\n                strerror(errno));\n            cancelReplicationHandshake(1);\n            return;\n        }\n\n        /* Rename the temp rdb file into place, closing the replaced file\n         * asynchronously, as we do when renaming the rewritten AOF. */\n        int old_rdb_fd = open(server.rdb_filename,O_RDONLY|O_NONBLOCK);\n        if (rename(server.repl_transfer_tmpfile,server.rdb_filename) == -1) {\n            serverLog(LL_WARNING,\n                \"Failed trying to rename the temp DB into %s in \"\n                \"MASTER <-> REPLICA synchronization: %s\",\n                server.rdb_filename, strerror(errno));\n            cancelReplicationHandshake(1);\n            if (old_rdb_fd != -1) close(old_rdb_fd);\n            return;\n        }\n        /* Close the old rdb file asynchronously. 
*/\n        if (old_rdb_fd != -1) bioCreateCloseJob(old_rdb_fd, 0, 0);\n\n        /* Sync the directory to ensure rename is persisted */\n        if (fsyncFileDir(server.rdb_filename) == -1) {\n            serverLog(LL_WARNING,\n                \"Failed trying to sync DB directory %s in \"\n                \"MASTER <-> REPLICA synchronization: %s\",\n                server.rdb_filename, strerror(errno));\n            cancelReplicationHandshake(1);\n            return;\n        }\n\n        if (rdbLoadWithEmptyFunc(server.rdb_filename,&rsi,RDBFLAGS_REPLICATION,rdbLoadEmptyDbFunc) != RDB_OK) {\n            serverLog(LL_WARNING,\n                \"Failed trying to load the MASTER synchronization \"\n                \"DB from disk, check server logs.\");\n            cancelReplicationHandshake(1);\n            if (server.rdb_del_sync_files && allPersistenceDisabled()) {\n                serverLog(LL_NOTICE,\"Removing the RDB file obtained from \"\n                                    \"the master. This replica has persistence \"\n                                    \"disabled\");\n                bg_unlink(server.rdb_filename);\n            }\n\n            /* Note that there's no point in restarting the AOF on sync failure,\n               it'll be restarted when sync succeeds or replica promoted. */\n            return;\n        }\n\n        /* Cleanup. */\n        if (server.rdb_del_sync_files && allPersistenceDisabled()) {\n            serverLog(LL_NOTICE,\"Removing the RDB file obtained from \"\n                                \"the master. 
This replica has persistence \"\n                                \"disabled\");\n            bg_unlink(server.rdb_filename);\n        }\n\n        zfree(server.repl_transfer_tmpfile);\n        close(server.repl_transfer_fd);\n        server.repl_transfer_fd = -1;\n        server.repl_transfer_tmpfile = NULL;\n    }\n\n    /* Final setup of the connected slave <- master link */\n    replicationCreateMasterClient(server.repl_transfer_s,rsi.repl_stream_db);\n    server.repl_state = REPL_STATE_CONNECTED;\n    server.repl_down_since = 0;\n    server.repl_up_since = server.unixtime;\n\n    if (server.repl_disconnect_start_time != 0) {\n        server.repl_total_disconnect_time += server.unixtime - server.repl_disconnect_start_time;\n        server.repl_disconnect_start_time = 0;\n    }\n    /* Fire the master link modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n                          REDISMODULE_SUBEVENT_MASTER_LINK_UP,\n                          NULL);\n\n    /* After a full resynchronization we use the replication ID and\n     * offset of the master. The secondary ID / offset are cleared since\n     * we are starting a new history. */\n    memcpy(server.replid,server.master->replid,sizeof(server.replid));\n    server.master_repl_offset = server.master->reploff;\n    clearReplicationId2();\n\n    /* Let's create the replication backlog if needed. Slaves need to\n     * accumulate the backlog regardless of the fact they have sub-slaves\n     * or not, in order to behave correctly if they are promoted to\n     * masters after a failover. */\n    if (server.repl_backlog == NULL) createReplicationBacklog();\n    serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Finished with success\");\n\n    if (server.supervised_mode == SUPERVISED_SYSTEMD) {\n        redisCommunicateSystemd(\"STATUS=MASTER <-> REPLICA sync: Finished with success. 
Ready to accept connections in read-write mode.\\n\");\n    }\n\n    /* Send the initial ACK immediately to put this replica in online state. */\n    if (usemark) replicationSendAck();\n\n    /* Restart the AOF subsystem now that we finished the sync. This\n     * will trigger an AOF rewrite, and when done will start appending\n     * to the new file. */\n    if (server.aof_enabled) {\n        serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Starting AOF after a successful sync\");\n        startAppendOnlyWithRetry();\n    }\n\n    /* Stream accumulated replication buffer to the db and finalize fullsync */\n    if (rdbchannel) {\n        if (server.repl_rdb_transfer_s) {\n            connClose(server.repl_rdb_transfer_s);\n            server.repl_rdb_transfer_s = NULL;\n        }\n        rdbChannelStreamReplDataToDb();\n    }\n\n    return;\n\nerror:\n    cancelReplicationHandshake(1);\n    return;\n}\n\nchar *receiveSynchronousResponse(connection *conn) {\n    char buf[256];\n    /* Read the reply from the server. */\n    if (connSyncReadLine(conn,buf,sizeof(buf),server.repl_syncio_timeout*1000) == -1)\n    {\n        serverLog(LL_WARNING, \"Failed to read response from the server: %s\", connGetLastError(conn));\n        return NULL;\n    }\n    server.repl_transfer_lastio = server.unixtime;\n    return sdsnew(buf);\n}\n\n/* Send a pre-formatted multi-bulk command to the connection. 
*/\nchar* sendCommandRaw(connection *conn, sds cmd) {\n    if (connSyncWrite(conn,cmd,sdslen(cmd),server.repl_syncio_timeout*1000) == -1) {\n        return sdscatprintf(sdsempty(),\"-Writing to master: %s\",\n                connGetLastError(conn));\n    }\n    return NULL;\n}\n\n/* Compose a multi-bulk command and send it to the connection.\n * Used to send AUTH and REPLCONF commands to the master before starting the\n * replication.\n *\n * Takes a list of char* arguments, terminated by a NULL argument.\n *\n * The command returns an sds string representing the result of the\n * operation. On error the first byte is a \"-\".\n */\nchar *sendCommand(connection *conn, ...) {\n    va_list ap;\n    sds cmd = sdsempty();\n    sds cmdargs = sdsempty();\n    size_t argslen = 0;\n    char *arg;\n\n    /* Create the command to send to the master, we use redis binary\n     * protocol to make sure correct arguments are sent. This function\n     * is not safe for all binary data. */\n    va_start(ap,conn);\n    while(1) {\n        arg = va_arg(ap, char*);\n        if (arg == NULL) break;\n        cmdargs = sdscatprintf(cmdargs,\"$%zu\\r\\n%s\\r\\n\",strlen(arg),arg);\n        argslen++;\n    }\n\n    cmd = sdscatprintf(cmd,\"*%zu\\r\\n\",argslen);\n    cmd = sdscatsds(cmd,cmdargs);\n    sdsfree(cmdargs);\n\n    va_end(ap);\n    char* err = sendCommandRaw(conn, cmd);\n    sdsfree(cmd);\n    if(err)\n        return err;\n    return NULL;\n}\n\n/* Compose a multi-bulk command and send it to the connection. \n * Used to send AUTH and REPLCONF commands to the master before starting the\n * replication.\n *\n * argv_lens is optional, when NULL, strlen is used.\n *\n * The command returns an sds string representing the result of the\n * operation. On error the first byte is a \"-\".\n */\nchar *sendCommandArgv(connection *conn, int argc, char **argv, size_t *argv_lens) {\n    sds cmd = sdsempty();\n    char *arg;\n    int i;\n\n    /* Create the command to send to the master. 
*/\n    cmd = sdscatfmt(cmd,\"*%i\\r\\n\",argc);\n    for (i=0; i<argc; i++) {\n        int len;\n        arg = argv[i];\n        len = argv_lens ? argv_lens[i] : strlen(arg);\n        cmd = sdscatfmt(cmd,\"$%i\\r\\n\",len);\n        cmd = sdscatlen(cmd,arg,len);\n        cmd = sdscatlen(cmd,\"\\r\\n\",2);\n    }\n    char* err = sendCommandRaw(conn, cmd);\n    sdsfree(cmd);\n    if (err)\n        return err;\n    return NULL;\n}\n\n/* Try a partial resynchronization with the master if we are about to reconnect.\n * If there is no cached master structure, at least try to issue a\n * \"PSYNC ? -1\" command in order to trigger a full resync using the PSYNC\n * command in order to obtain the master replid and the master replication\n * global offset.\n *\n * This function is designed to be called from syncWithMaster(), so the\n * following assumptions are made:\n *\n * 1) We pass the function an already connected socket \"fd\".\n * 2) This function does not close the file descriptor \"fd\". However in case\n *    of successful partial resynchronization, the function will reuse\n *    'fd' as file descriptor of the server.master client structure.\n *\n * The function is split in two halves: if read_reply is 0, the function\n * writes the PSYNC command on the socket, and a new function call is\n * needed, with read_reply set to 1, in order to read the reply of the\n * command. This is useful in order to support non blocking operations, so\n * that we write, return into the event loop, and read when there is data.\n *\n * When read_reply is 0 the function returns PSYNC_WRITE_ERROR if there\n * was a write error, or PSYNC_WAIT_REPLY to signal we need another call\n * with read_reply set to 1. However even when read_reply is set to 1\n * the function may return PSYNC_WAIT_REPLY again to signal there was\n * insufficient data to read to complete its work. 
We should re-enter\n * into the event loop and wait in such a case.\n *\n * The function returns:\n *\n * PSYNC_CONTINUE: If the PSYNC command succeeded and we can continue.\n * PSYNC_FULLRESYNC: If PSYNC is supported but a full resync is needed.\n *                   In this case the master replid and global replication\n *                   offset is saved.\n * PSYNC_NOT_SUPPORTED: If the server does not understand PSYNC at all and\n *                      the caller should fall back to SYNC.\n * PSYNC_WRITE_ERROR: There was an error writing the command to the socket.\n * PSYNC_WAIT_REPLY: Call again the function with read_reply set to 1.\n * PSYNC_TRY_LATER: Master is currently in a transient error condition.\n *\n * Notable side effects:\n *\n * 1) As a side effect of the function call the function removes the readable\n *    event handler from \"fd\", unless the return value is PSYNC_WAIT_REPLY.\n * 2) server.master_initial_offset is set to the right value according\n *    to the master reply. This will be used to populate the 'server.master'\n *    structure replication offset.\n */\n\n#define PSYNC_WRITE_ERROR 0\n#define PSYNC_WAIT_REPLY 1\n#define PSYNC_CONTINUE 2\n#define PSYNC_FULLRESYNC 3\n#define PSYNC_NOT_SUPPORTED 4\n#define PSYNC_TRY_LATER 5\n#define PSYNC_FULLRESYNC_RDBCHANNEL 6\nint slaveTryPartialResynchronization(connection *conn, int read_reply) {\n    char *psync_replid;\n    char psync_offset[32];\n    sds reply;\n\n    /* Writing half */\n    if (!read_reply) {\n        /* Initially set master_initial_offset to -1 to mark the current\n         * master replid and offset as not valid. Later if we'll be able to do\n         * a FULL resync using the PSYNC command we'll set the offset at the\n         * right value, so that this information will be propagated to the\n         * client structure representing the master into server.master. 
*/\n        server.master_initial_offset = -1;\n\n        if (server.cached_master) {\n            psync_replid = server.cached_master->replid;\n            snprintf(psync_offset,sizeof(psync_offset),\"%lld\", server.cached_master->reploff+1);\n            serverLog(LL_NOTICE,\"Trying a partial resynchronization (request %s:%s).\", psync_replid, psync_offset);\n        } else {\n            serverLog(LL_NOTICE,\"Partial resynchronization not possible (no cached master)\");\n            psync_replid = \"?\";\n            memcpy(psync_offset,\"-1\",3);\n        }\n\n        /* Issue the PSYNC command, if this is a master with a failover in\n         * progress then send the failover argument to the replica to cause it\n         * to become a master */\n        if (server.failover_state == FAILOVER_IN_PROGRESS) {\n            reply = sendCommand(conn,\"PSYNC\",psync_replid,psync_offset,\"FAILOVER\",NULL);\n        } else {\n            reply = sendCommand(conn,\"PSYNC\",psync_replid,psync_offset,NULL);\n        }\n\n        if (reply != NULL) {\n            serverLog(LL_WARNING,\"Unable to send PSYNC to master: %s\",reply);\n            sdsfree(reply);\n            connSetReadHandler(conn, NULL);\n            return PSYNC_WRITE_ERROR;\n        }\n        return PSYNC_WAIT_REPLY;\n    }\n\n    /* Reading half */\n    reply = receiveSynchronousResponse(conn);\n    /* Master did not reply to PSYNC */\n    if (reply == NULL) {\n        connSetReadHandler(conn, NULL);\n        serverLog(LL_WARNING, \"Master did not reply to PSYNC, will try later\");\n        return PSYNC_TRY_LATER;\n    }\n\n    if (sdslen(reply) == 0) {\n        /* The master may send empty newlines after it receives PSYNC\n         * and before to reply, just to keep the connection alive. 
*/\n        sdsfree(reply);\n        return PSYNC_WAIT_REPLY;\n    }\n\n    connSetReadHandler(conn, NULL);\n\n    if (!strncmp(reply,\"+FULLRESYNC\",11)) {\n        char *replid = NULL, *offset = NULL;\n\n        /* FULL RESYNC, parse the reply in order to extract the replid\n         * and the replication offset. */\n        replid = strchr(reply,' ');\n        if (replid) {\n            replid++;\n            offset = strchr(replid,' ');\n            if (offset) offset++;\n        }\n        if (!replid || !offset || (offset-replid-1) != CONFIG_RUN_ID_SIZE) {\n            serverLog(LL_WARNING,\n                \"Master replied with wrong +FULLRESYNC syntax.\");\n            /* This is an unexpected condition: the +FULLRESYNC reply\n             * means that the master supports PSYNC, but the reply\n             * format seems wrong. To stay safe we blank the master\n             * replid to make sure the next PSYNCs will fail. */\n            memset(server.master_replid,0,CONFIG_RUN_ID_SIZE+1);\n        } else {\n            memcpy(server.master_replid, replid, offset-replid-1);\n            server.master_replid[CONFIG_RUN_ID_SIZE] = '\\0';\n            server.master_initial_offset = strtoll(offset,NULL,10);\n            serverLog(LL_NOTICE,\"Full resync from master: %s:%lld\",\n                server.master_replid,\n                server.master_initial_offset);\n        }\n        sdsfree(reply);\n        return PSYNC_FULLRESYNC;\n    }\n\n    if (!strncmp(reply, \"+RDBCHANNELSYNC\", strlen(\"+RDBCHANNELSYNC\"))) {\n        char *client_id = strchr(reply,' ');\n        if (client_id)\n            client_id++;\n\n        if (!client_id) {\n            serverLog(LL_WARNING,\n                      \"Master replied with wrong +RDBCHANNELSYNC syntax: %s\", reply);\n            sdsfree(reply);\n            return PSYNC_NOT_SUPPORTED;\n        }\n        server.repl_main_ch_client_id = strtoll(client_id, NULL, 10);\n        /* A response of +RDBCHANNELSYNC from 
the master implies that partial\n         * synchronization is not possible and that the master supports full\n         * sync using a dedicated RDB channel. Full sync will continue that way. */\n        serverLog(LL_NOTICE, \"PSYNC is not possible, initialize RDB channel.\");\n        sdsfree(reply);\n        return PSYNC_FULLRESYNC_RDBCHANNEL;\n    }\n\n    if (!strncmp(reply,\"+CONTINUE\",9)) {\n        /* Partial resync was accepted. */\n        serverLog(LL_NOTICE,\n            \"Successful partial resynchronization with master.\");\n\n        /* Check the new replication ID advertised by the master. If it\n         * changed, we need to set the new ID as the primary ID, and set the\n         * secondary ID to the old master ID up to the current offset, so\n         * that our sub-slaves will be able to PSYNC with us after a\n         * disconnection. */\n        char *start = reply+10;\n        char *end = reply+9;\n        while(end[0] != '\\r' && end[0] != '\\n' && end[0] != '\\0') end++;\n        if (end-start == CONFIG_RUN_ID_SIZE) {\n            char new[CONFIG_RUN_ID_SIZE+1];\n            memcpy(new,start,CONFIG_RUN_ID_SIZE);\n            new[CONFIG_RUN_ID_SIZE] = '\\0';\n\n            if (strcmp(new,server.cached_master->replid)) {\n                /* Master ID changed. */\n                serverLog(LL_NOTICE,\"Master replication ID changed to %s\",new);\n\n                /* Set the old ID as our ID2, up to the current offset+1. */\n                memcpy(server.replid2,server.cached_master->replid,\n                    sizeof(server.replid2));\n                server.second_replid_offset = server.master_repl_offset+1;\n\n                /* Update the cached master ID and our own primary ID to the\n                 * new one. */\n                memcpy(server.replid,new,sizeof(server.replid));\n                memcpy(server.cached_master->replid,new,sizeof(server.replid));\n\n                /* Disconnect all the sub-slaves: they need to be notified. 
*/\n                disconnectSlaves();\n            }\n        }\n\n        /* Setup the replication to continue. */\n        sdsfree(reply);\n        replicationResurrectCachedMaster(conn);\n\n        /* If this instance was restarted and we read the metadata to\n         * PSYNC from the persistence file, our replication backlog could\n         * be still not initialized. Create it. */\n        if (server.repl_backlog == NULL) createReplicationBacklog();\n        return PSYNC_CONTINUE;\n    }\n\n    /* If we reach this point we received either an error (since the master does\n     * not understand PSYNC or because it is in a special state and cannot\n     * serve our request), or an unexpected reply from the master.\n     *\n     * Return PSYNC_NOT_SUPPORTED on errors we don't understand, otherwise\n     * return PSYNC_TRY_LATER if we believe this is a transient error. */\n\n    if (!strncmp(reply,\"-NOMASTERLINK\",13) ||\n        !strncmp(reply,\"-LOADING\",8))\n    {\n        serverLog(LL_NOTICE,\n            \"Master is currently unable to PSYNC \"\n            \"but should be in the future: %s\", reply);\n        sdsfree(reply);\n        return PSYNC_TRY_LATER;\n    }\n\n    if (strncmp(reply,\"-ERR\",4)) {\n        /* If it's not an error, log the unexpected event. */\n        serverLog(LL_WARNING,\n            \"Unexpected reply to PSYNC from master: %s\", reply);\n    } else {\n        serverLog(LL_NOTICE,\n            \"Master does not support PSYNC or is in \"\n            \"error state (reply: %s)\", reply);\n    }\n    sdsfree(reply);\n    return PSYNC_NOT_SUPPORTED;\n}\n\n/* This handler fires when the non blocking connect was able to\n * establish a connection with the master. 
*/\nvoid syncWithMaster(connection *conn) {\n    char tmpfile[256], *err = NULL;\n    int dfd = -1, maxtries = 5;\n    int psync_result;\n    static int no_compress_checksum = 0;\n\n    /* If this event fired after the user turned the instance into a master\n     * with SLAVEOF NO ONE we must just return ASAP. */\n    if (server.repl_state == REPL_STATE_NONE) {\n        connClose(conn);\n        return;\n    }\n\n    /* Check for errors in the socket: after a non blocking connect() we\n     * may find that the socket is in error state. */\n    if (connGetState(conn) != CONN_STATE_CONNECTED) {\n        serverLog(LL_WARNING,\"Error condition on socket for SYNC: %s\",\n                connGetLastError(conn));\n        goto error;\n    }\n\n    /* Send a PING to check that the master is able to reply without errors. */\n    if (server.repl_state == REPL_STATE_CONNECTING) {\n        serverLog(LL_NOTICE,\"Non blocking connect for SYNC fired the event.\");\n        /* Delete the writable event so that the readable event remains\n         * registered and we can wait for the PONG reply. */\n        connSetReadHandler(conn, syncWithMaster);\n        connSetWriteHandler(conn, NULL);\n        server.repl_state = REPL_STATE_RECEIVE_PING_REPLY;\n        /* Send the PING; don't check for errors at all, the timeout\n         * will take care of this. */\n        err = sendCommand(conn,\"PING\",NULL);\n        if (err) goto write_error;\n        return;\n    }\n\n    /* Receive the PONG reply. 
*/\n    if (server.repl_state == REPL_STATE_RECEIVE_PING_REPLY) {\n        err = receiveSynchronousResponse(conn);\n\n        /* The master did not reply */\n        if (err == NULL) goto no_response_error;\n\n        /* We accept only two replies as valid, a positive +PONG reply\n         * (we just check for \"+\") or an authentication error.\n         * Note that older versions of Redis replied with \"operation not\n         * permitted\" instead of using a proper error code, so we test\n         * both. */\n        if (err[0] != '+' &&\n            strncmp(err,\"-NOAUTH\",7) != 0 &&\n            strncmp(err,\"-NOPERM\",7) != 0 &&\n            strncmp(err,\"-ERR operation not permitted\",28) != 0)\n        {\n            serverLog(LL_WARNING,\"Error reply to PING from master: '%s'\",err);\n            sdsfree(err);\n            goto error;\n        } else {\n            serverLog(LL_NOTICE,\n                \"Master replied to PING, replication can continue...\");\n        }\n        sdsfree(err);\n        err = NULL;\n        server.repl_state = REPL_STATE_SEND_HANDSHAKE;\n    }\n\n    if (server.repl_state == REPL_STATE_SEND_HANDSHAKE) {\n        /* AUTH with the master if required. */\n        if (server.masterauth) {\n            char *args[3] = {\"AUTH\",NULL,NULL};\n            size_t lens[3] = {4,0,0};\n            int argc = 1;\n            if (server.masteruser) {\n                args[argc] = server.masteruser;\n                lens[argc] = strlen(server.masteruser);\n                argc++;\n            }\n            args[argc] = server.masterauth;\n            lens[argc] = sdslen(server.masterauth);\n            argc++;\n            err = sendCommandArgv(conn, argc, args, lens);\n            if (err) goto write_error;\n        }\n\n        /* Set the slave port, so that Master's INFO command can list the\n         * slave listening port correctly. 
*/\n        {\n            char buf[LONG_STR_SIZE];\n\n            slaveGetPortStr(buf, sizeof(buf));\n            err = sendCommand(conn,\"REPLCONF\",\n                    \"listening-port\",buf, NULL);\n            if (err) goto write_error;\n        }\n\n        /* Set the slave ip, so that Master's INFO command can list the\n         * slave IP address correctly in case of port forwarding or NAT.\n         * Skip REPLCONF ip-address if there is no slave-announce-ip option set. */\n        if (server.slave_announce_ip) {\n            err = sendCommand(conn,\"REPLCONF\",\n                    \"ip-address\",server.slave_announce_ip, NULL);\n            if (err) goto write_error;\n        }\n\n        /* If we are not going to save the RDB to disk, request that RDB\n         * compression and checksum be disabled, which speeds up RDB delivery\n         * and loading. */\n        no_compress_checksum = 0;\n        if (useDisklessLoad()) {\n            no_compress_checksum = 1;\n            err = sendCommand(conn, \"REPLCONF\", \"rdb-no-compress\", \"1\",\n                                    \"rdb-no-checksum\", \"1\", NULL);\n            if (err) goto write_error;\n        }\n\n        /* Inform the master of our (slave) capabilities.\n         *\n         * EOF: supports EOF-style RDB transfer for diskless replication.\n         * PSYNC2: supports PSYNC v2, so understands +CONTINUE <new repl ID>.\n         *\n         * The master will ignore capabilities it does not understand. */\n        err = sendCommand(conn,\"REPLCONF\",\n                          \"capa\",\"eof\",\"capa\",\"psync2\",\n                          server.repl_rdb_channel ? 
\"capa\" : NULL, \"rdb-channel-repl\", NULL);\n\n        if (err) goto write_error;\n\n        server.repl_state = REPL_STATE_RECEIVE_AUTH_REPLY;\n        return;\n    }\n\n    if (server.repl_state == REPL_STATE_RECEIVE_AUTH_REPLY && !server.masterauth)\n        server.repl_state = REPL_STATE_RECEIVE_PORT_REPLY;\n\n    /* Receive AUTH reply. */\n    if (server.repl_state == REPL_STATE_RECEIVE_AUTH_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        if (err == NULL) goto no_response_error;\n        if (err[0] == '-') {\n            serverLog(LL_WARNING,\"Unable to AUTH to MASTER: %s\",err);\n            sdsfree(err);\n            goto error;\n        }\n        sdsfree(err);\n        err = NULL;\n        server.repl_state = REPL_STATE_RECEIVE_PORT_REPLY;\n        return;\n    }\n\n    /* Receive REPLCONF listening-port reply. */\n    if (server.repl_state == REPL_STATE_RECEIVE_PORT_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        if (err == NULL) goto no_response_error;\n        /* Ignore the error if any, not all the Redis versions support\n         * REPLCONF listening-port. */\n        if (err[0] == '-') {\n            serverLog(LL_NOTICE,\"(Non critical) Master does not understand \"\n                                \"REPLCONF listening-port: %s\", err);\n        }\n        sdsfree(err);\n        server.repl_state = REPL_STATE_RECEIVE_IP_REPLY;\n        return;\n    }\n\n    if (server.repl_state == REPL_STATE_RECEIVE_IP_REPLY && !server.slave_announce_ip)\n        server.repl_state = REPL_STATE_RECEIVE_REQ_REPLY;\n\n    /* Receive REPLCONF ip-address reply. */\n    if (server.repl_state == REPL_STATE_RECEIVE_IP_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        if (err == NULL) goto no_response_error;\n        /* Ignore the error if any, not all the Redis versions support\n         * REPLCONF ip-address. 
*/\n        if (err[0] == '-') {\n            serverLog(LL_NOTICE,\"(Non critical) Master does not understand \"\n                                \"REPLCONF ip-address: %s\", err);\n        }\n        sdsfree(err);\n        server.repl_state = REPL_STATE_RECEIVE_REQ_REPLY;\n        return;\n    }\n\n    if (server.repl_state == REPL_STATE_RECEIVE_REQ_REPLY && !no_compress_checksum)\n        server.repl_state = REPL_STATE_RECEIVE_CAPA_REPLY;\n\n    /* Receive REPLCONF REQUEST reply (rdb-no-compress and rdb-no-checksum). */\n    if (server.repl_state == REPL_STATE_RECEIVE_REQ_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        if (err == NULL) goto no_response_error;\n        /* Ignore the error if any, not all the Redis versions support\n         * REPLCONF rdb-no-compress and rdb-no-checksum. */\n        if (err[0] == '-') {\n            serverLog(LL_NOTICE,\"(Non critical) Master does not understand \"\n                                \"REPLCONF rdb-no-compress/checksum: %s\", err);\n        }\n        sdsfree(err);\n        server.repl_state = REPL_STATE_RECEIVE_CAPA_REPLY;\n        return;\n    }\n\n    /* Receive CAPA reply. */\n    if (server.repl_state == REPL_STATE_RECEIVE_CAPA_REPLY) {\n        err = receiveSynchronousResponse(conn);\n        if (err == NULL) goto no_response_error;\n        /* Ignore the error if any, not all the Redis versions support\n         * REPLCONF capa. */\n        if (err[0] == '-') {\n            serverLog(LL_NOTICE,\"(Non critical) Master does not understand \"\n                                  \"REPLCONF capa: %s\", err);\n        }\n        sdsfree(err);\n        err = NULL;\n        server.repl_state = REPL_STATE_SEND_PSYNC;\n    }\n\n    /* Try a partial resynchronization. 
If we don't have a cached master,\n     * slaveTryPartialResynchronization() will at least try to use PSYNC\n     * to start a full resynchronization so that we get the master replid\n     * and the global offset, to try a partial resync at the next\n     * reconnection attempt. */\n    if (server.repl_state == REPL_STATE_SEND_PSYNC) {\n        if (slaveTryPartialResynchronization(conn,0) == PSYNC_WRITE_ERROR) {\n            err = sdsnew(\"Write error sending the PSYNC command.\");\n            abortFailover(\"Write error to failover target\");\n            goto write_error;\n        }\n        server.repl_state = REPL_STATE_RECEIVE_PSYNC_REPLY;\n        return;\n    }\n\n    /* If we reached this point, we should be in REPL_STATE_RECEIVE_PSYNC_REPLY. */\n    if (server.repl_state != REPL_STATE_RECEIVE_PSYNC_REPLY) {\n        serverLog(LL_WARNING,\"syncWithMaster(): state machine error, \"\n                             \"state should be RECEIVE_PSYNC_REPLY but is %d\",\n                             server.repl_state);\n        goto error;\n    }\n\n    psync_result = slaveTryPartialResynchronization(conn,1);\n    if (psync_result == PSYNC_WAIT_REPLY) return; /* Try again later... */\n\n    /* Check the status of the planned failover. We expect PSYNC_CONTINUE,\n     * but there is nothing technically wrong with a full resync, which\n     * could happen in edge cases. */\n    if (server.failover_state == FAILOVER_IN_PROGRESS) {\n        if (psync_result == PSYNC_CONTINUE ||\n            psync_result == PSYNC_FULLRESYNC ||\n            psync_result == PSYNC_FULLRESYNC_RDBCHANNEL)\n        {\n            clearFailoverState();\n        } else {\n            abortFailover(\"Failover target rejected psync request\");\n            return;\n        }\n    }\n\n    /* If the master is in a transient error, we should try to PSYNC\n     * from scratch later, so go to the error path. 
This happens when\n     * the server is loading the dataset or is not connected with its\n     * master and so forth. */\n    if (psync_result == PSYNC_TRY_LATER) goto error;\n\n    /* Note: if PSYNC does not return WAIT_REPLY, it will take care of\n     * uninstalling the read handler from the file descriptor. */\n\n    if (psync_result == PSYNC_CONTINUE) {\n        serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.\");\n        if (server.supervised_mode == SUPERVISED_SYSTEMD) {\n            redisCommunicateSystemd(\"STATUS=MASTER <-> REPLICA sync: Partial Resynchronization accepted. Ready to accept connections in read-write mode.\\n\");\n        }\n        return;\n    }\n\n    /* Fall back to SYNC if needed. Otherwise psync_result == PSYNC_FULLRESYNC\n     * and the server.master_replid and master_initial_offset are\n     * already populated. */\n    if (psync_result == PSYNC_NOT_SUPPORTED) {\n        serverLog(LL_NOTICE,\"Retrying with SYNC...\");\n        if (connSyncWrite(conn,\"SYNC\\r\\n\",6,server.repl_syncio_timeout*1000) == -1) {\n            serverLog(LL_WARNING,\"I/O error writing to MASTER: %s\",\n                connGetLastError(conn));\n            goto error;\n        }\n    }\n\n    /* Prepare a suitable temp file for bulk transfer */\n    if (!useDisklessLoad()) {\n        while(maxtries--) {\n            snprintf(tmpfile,256,\n                \"temp-%d.%ld.rdb\",(int)server.unixtime,(long int)getpid());\n            dfd = open(tmpfile,O_CREAT|O_WRONLY|O_EXCL,0644);\n            if (dfd != -1) break;\n            sleep(1);\n        }\n        if (dfd == -1) {\n            serverLog(LL_WARNING,\"Opening the temp file needed for MASTER <-> REPLICA synchronization: %s\",strerror(errno));\n            goto error;\n        }\n        server.repl_transfer_tmpfile = zstrdup(tmpfile);\n        server.repl_transfer_fd = dfd;\n    }\n\n    server.repl_transfer_size = -1;\n    server.repl_transfer_read = 0;\n    
server.repl_transfer_last_fsync_off = 0;\n    server.repl_transfer_lastio = server.unixtime;\n\n    /* Using rdb channel replication, the master responded +RDBCHANNELSYNC.\n     * We need to initialize the RDB channel. */\n    if (psync_result == PSYNC_FULLRESYNC_RDBCHANNEL) {\n        /* Create RDB connection */\n        server.repl_rdb_transfer_s = connCreate(server.el, connTypeOfReplication());\n        if (connConnect(server.repl_rdb_transfer_s, server.masterhost,\n                        server.masterport, server.bind_source_addr,\n                        rdbChannelFullSyncWithMaster) == C_ERR) {\n            serverLog(LL_WARNING, \"Unable to connect to master: %s\", connGetLastError(server.repl_rdb_transfer_s));\n            goto error;\n        }\n        server.repl_rdb_ch_state = REPL_RDB_CH_SEND_HANDSHAKE;\n        connSetReadHandler(server.repl_transfer_s, NULL);\n        return;\n    }\n\n    /* Setup the non blocking download of the bulk file. */\n    if (connSetReadHandler(conn, readSyncBulkPayload)\n            == C_ERR)\n    {\n        char conninfo[CONN_INFO_LEN];\n        serverLog(LL_WARNING,\n            \"Can't create readable event for SYNC: %s (%s)\",\n            strerror(errno), connGetInfo(conn, conninfo, sizeof(conninfo)));\n        goto error;\n    }\n\n    server.repl_state = REPL_STATE_TRANSFER;\n    return;\n\nno_response_error: /* Handle receiveSynchronousResponse() error when master has no reply */\n    serverLog(LL_WARNING, \"Master did not respond to command during SYNC handshake\");\n    /* Fall through to regular error handling */\n\nerror:\n    if (dfd != -1) close(dfd);\n    connClose(conn);\n    if (server.repl_rdb_transfer_s)\n        connClose(server.repl_rdb_transfer_s);\n    server.repl_rdb_transfer_s = NULL;\n    server.repl_transfer_s = NULL;\n    if (server.repl_transfer_fd != -1)\n        close(server.repl_transfer_fd);\n    if (server.repl_transfer_tmpfile)\n        zfree(server.repl_transfer_tmpfile);\n    
server.repl_transfer_tmpfile = NULL;\n    server.repl_transfer_fd = -1;\n    server.repl_state = REPL_STATE_CONNECT;\n    return;\n\nwrite_error: /* Handle sendCommand() errors. */\n    serverLog(LL_WARNING,\"Sending command to master in replication handshake: %s\", err);\n    sdsfree(err);\n    goto error;\n}\n\nint connectWithMaster(void) {\n    server.repl_current_sync_attempts++;\n    server.repl_total_sync_attempts++;\n    server.repl_transfer_s = connCreate(server.el, connTypeOfReplication());\n    if (connConnect(server.repl_transfer_s, server.masterhost, server.masterport,\n                server.bind_source_addr, syncWithMaster) == C_ERR) {\n        serverLog(LL_WARNING,\"Unable to connect to MASTER: %s\",\n                connGetLastError(server.repl_transfer_s));\n        connClose(server.repl_transfer_s);\n        server.repl_transfer_s = NULL;\n        return C_ERR;\n    }\n\n\n    server.repl_transfer_lastio = server.unixtime;\n    server.repl_state = REPL_STATE_CONNECTING;\n    serverLog(LL_NOTICE,\"MASTER <-> REPLICA sync started\");\n    return C_OK;\n}\n\n/* This function can be called when a non blocking connection is currently\n * in progress to undo it.\n * Never call this function directly, use cancelReplicationHandshake() instead.\n */\nvoid undoConnectWithMaster(void) {\n    connClose(server.repl_transfer_s);\n    server.repl_transfer_s = NULL;\n}\n\n/* Abort the async download of the bulk dataset while SYNC-ing with master.\n * Never call this function directly, use cancelReplicationHandshake() instead.\n */\nvoid replicationAbortSyncTransfer(void) {\n    serverAssert(server.repl_state == REPL_STATE_TRANSFER);\n    undoConnectWithMaster();\n    if (server.repl_disconnect_start_time == 0)\n        server.repl_disconnect_start_time = server.unixtime;\n    if (server.repl_transfer_fd!=-1) {\n        close(server.repl_transfer_fd);\n        bg_unlink(server.repl_transfer_tmpfile);\n        zfree(server.repl_transfer_tmpfile);\n        
server.repl_transfer_tmpfile = NULL;\n        server.repl_transfer_fd = -1;\n    }\n}\n\n/* This function aborts a non blocking replication attempt if there is one\n * in progress, by canceling the non-blocking connect attempt or\n * the initial bulk transfer.\n *\n * If there was a replication handshake in progress 1 is returned and\n * the replication state (server.repl_state) set to REPL_STATE_CONNECT.\n *\n * Otherwise zero is returned and no operation is performed at all. */\nint cancelReplicationHandshake(int reconnect) {\n    if (rdbChannelAbort() != C_OK)\n        return 1;\n\n    if (server.repl_state == REPL_STATE_TRANSFER) {\n        replicationAbortSyncTransfer();\n        server.repl_state = REPL_STATE_CONNECT;\n    } else if (server.repl_state == REPL_STATE_CONNECTING ||\n               slaveIsInHandshakeState())\n    {\n        undoConnectWithMaster();\n        server.repl_state = REPL_STATE_CONNECT;\n    } else {\n        return 0;\n    }\n\n    if (!reconnect)\n        return 1;\n\n    /* try to re-connect without waiting for replicationCron, this is needed\n     * for the \"diskless loading short read\" test. */\n    serverLog(LL_NOTICE,\"Reconnecting to MASTER %s:%d after failure\",\n        server.masterhost, server.masterport);\n    connectWithMaster();\n\n    return 1;\n}\n\n/* Set replication to the specified master address and port. */\nvoid replicationSetMaster(char *ip, int port) {\n    int was_master = server.masterhost == NULL;\n\n    sdsfree(server.masterhost);\n    server.masterhost = NULL;\n    if (server.master) {\n        freeClient(server.master);\n    }\n    disconnectAllBlockedClients(); /* Clients blocked in master, now slave. */\n\n    /* Setting masterhost only after the call to freeClient since it calls\n     * replicationHandleMasterDisconnection which can trigger a re-connect\n     * directly from within that call. 
*/\n    server.masterhost = sdsnew(ip);\n    server.masterport = port;\n\n    /* Update oom_score_adj */\n    setOOMScoreAdj(-1);\n\n    /* Here we don't disconnect our replicas, since they may be able to\n     * partially resync with us. We will disconnect them and force a resync\n     * either when we change the replid on a partial resync with the new\n     * master, or when we finish transferring the RDB and prepare to load the\n     * DB on a full sync with the new master. */\n\n    cancelReplicationHandshake(0);\n    /* Before destroying our master state, create a cached master using\n     * our own parameters, to later PSYNC with the new master. */\n    if (was_master) {\n        replicationDiscardCachedMaster();\n        replicationCacheMasterUsingMyself();\n    }\n\n    /* Fire the role change modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED,\n                          REDISMODULE_EVENT_REPLROLECHANGED_NOW_REPLICA,\n                          NULL);\n\n    /* Fire the master link modules event. */\n    if (server.repl_state == REPL_STATE_CONNECTED)\n        moduleFireServerEvent(REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n                              REDISMODULE_SUBEVENT_MASTER_LINK_DOWN,\n                              NULL);\n\n    server.repl_state = REPL_STATE_CONNECT;\n    server.repl_current_sync_attempts = 0;\n    server.repl_total_sync_attempts = 0;\n    serverLog(LL_NOTICE,\"Connecting to MASTER %s:%d\",\n        server.masterhost, server.masterport);\n    connectWithMaster();\n}\n\n/* Cancel replication, setting the instance as a master itself. */\nvoid replicationUnsetMaster(void) {\n    if (server.masterhost == NULL) return; /* Nothing to do. */\n\n    /* Fire the master link modules event. 
*/\n    if (server.repl_state == REPL_STATE_CONNECTED)\n        moduleFireServerEvent(REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n                              REDISMODULE_SUBEVENT_MASTER_LINK_DOWN,\n                              NULL);\n\n    /* Clear masterhost first, since the freeClient calls\n     * replicationHandleMasterDisconnection which can attempt to re-connect. */\n    sdsfree(server.masterhost);\n    server.masterhost = NULL;\n    if (server.master) freeClient(server.master);\n    replicationDiscardCachedMaster();\n    cancelReplicationHandshake(0);\n    /* When a slave is turned into a master, the current replication ID\n     * (that was inherited from the master at synchronization time) is\n     * used as secondary ID up to the current offset, and a new replication\n     * ID is created to continue with a new replication history. */\n    shiftReplicationId();\n    /* Disconnecting all the slaves is required: we need to inform slaves\n     * of the replication ID change (see shiftReplicationId() call). However\n     * the slaves will be able to partially resync with us, so it will be\n     * a very fast reconnection. */\n    disconnectSlaves();\n    server.repl_state = REPL_STATE_NONE;\n    /* Reset the attempts number. */\n    server.repl_current_sync_attempts = 0;\n    server.repl_total_sync_attempts = 0;\n    /* We need to make sure the new master will start the replication stream\n     * with a SELECT statement. This is forced after a full resync, but\n     * with PSYNC version 2, there is no need for full resync after a\n     * master switch. */\n    server.slaveseldb = -1;\n\n    /* Update oom_score_adj */\n    setOOMScoreAdj(-1);\n\n    /* Once we turn from slave to master, we consider the starting time without\n     * slaves (that is used to count the replication backlog time to live) as\n     * starting from now. Otherwise the backlog will be freed after a\n     * failover if slaves do not connect immediately. 
*/\n    server.repl_no_slaves_since = server.unixtime;\n\n    /* Reset up and down time so they'll be ready for when we turn into a replica again. */\n    server.repl_down_since = 0;\n    server.repl_up_since = 0;\n    /* Fire the role change modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_REPLICATION_ROLE_CHANGED,\n                          REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER,\n                          NULL);\n\n    /* Restart the AOF subsystem in case we shut it down during a sync when\n     * we were still a slave. */\n    if (server.aof_enabled && server.aof_state == AOF_OFF) {\n        serverLog(LL_NOTICE, \"Restarting AOF after becoming master\");\n        startAppendOnlyWithRetry();\n    }\n}\n\n/* This function is called when the slave loses the connection with the\n * master in an unexpected way. */\nvoid replicationHandleMasterDisconnection(void) {\n    /* Fire the master link modules event. */\n    if (server.repl_state == REPL_STATE_CONNECTED)\n        moduleFireServerEvent(REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n                              REDISMODULE_SUBEVENT_MASTER_LINK_DOWN,\n                              NULL);\n\n    server.master = NULL;\n    if (server.repl_state == REPL_STATE_CONNECTED)\n        server.repl_current_sync_attempts = 0;\n    server.repl_state = REPL_STATE_CONNECT;\n    server.repl_down_since = server.unixtime;\n    server.repl_up_since = 0;\n    server.repl_num_master_disconnection++;\n\n    /* If we are in the loop of streaming accumulated buffers, discard the\n     * buffer and clean up the rdbchannel state. The outer loop will abort once\n     * it detects that the master client has been disconnected. 
For details,\n * see rdbChannelStreamReplDataToDb(). */\n    if (server.repl_main_ch_state & REPL_MAIN_CH_STREAMING_BUF)\n        rdbChannelCleanup();\n\n    if (server.repl_disconnect_start_time == 0)\n        server.repl_disconnect_start_time = server.unixtime;\n    /* We lost the connection with our master: don't disconnect the slaves\n     * yet, since we may be able to PSYNC with our master later. We'll\n     * disconnect the slaves only if we have to do a full resync with our\n     * master. */\n\n    /* Try to re-connect immediately rather than wait for replicationCron:\n     * waiting 1 second may risk the backlog being recycled. */\n    if (server.masterhost) {\n        serverLog(LL_NOTICE,\"Reconnecting to MASTER %s:%d\",\n            server.masterhost, server.masterport);\n        connectWithMaster();\n    }\n}\n\n/* Rdb channel for full sync\n *\n * - During a full sync, while the master is delivering the RDB to the replica,\n *   incoming write commands are kept in a replication buffer in order to be\n *   sent to the replica once RDB delivery is completed. If RDB delivery takes\n *   a long time, it might create memory pressure on the master. Also, once a\n *   replica connection accumulates replication data larger than the output\n *   buffer limits, the master will kill the replica connection. This may cause\n *   a replication failure.\n *\n *   The main benefit of rdb channel replication is streaming incoming\n *   commands in parallel to the RDB delivery. This approach shifts replication\n *   stream buffering to the replica and reduces load on the master. We do this\n *   by opening another connection for RDB delivery. The main channel on the\n *   replica receives the replication stream while the rdb channel receives the\n *   RDB.\n *\n *   This feature also helps to reduce the master's main process CPU load. By\n *   opening a dedicated connection for the RDB transfer, the bgsave process\n *   has direct access to the new connection and streams the RDB directly to\n *   the replicas. 
Before this change, due to a TLS connection restriction, the bgsave\n *   process was writing RDB bytes to a pipe and the main process was forwarding\n *   it to the replica. This is no longer necessary; the main process can avoid\n *   these expensive socket read/write syscalls.\n *\n *  Implementation\n *  - When the replica connects to the master, it sends 'rdb-channel-repl' as\n *    part of the capability exchange to let the master know that the replica\n *    supports the rdb channel.\n *  - When the replica lacks sufficient data for PSYNC, the master sends a\n *    +RDBCHANNELSYNC reply with the replica's client id. As the next step, the\n *    replica opens a new connection (rdb-channel) and configures it against\n *    the master with the appropriate capabilities and requirements. It also\n *    sends the given client id back to the master over the rdb channel so that\n *    the master can associate these channels (the initial replica connection\n *    will be referred to as the main channel). Then, the replica requests a\n *    full sync using the RDB channel.\n *  - Prior to forking, the master attaches the replica's main channel to the\n *    replication backlog to deliver the replication stream starting at the\n *    snapshot end offset.\n *  - The master main process sends the replication stream via the main\n *    channel, while the bgsave process sends the RDB directly to the replica\n *    via the rdb-channel. The replica accumulates the replication stream in a\n *    local buffer, while the RDB is being loaded into memory.\n *  - Once the replica completes loading the rdb, it drops the rdb channel and\n *    streams the accumulated replication stream into the db. 
Sync is completed.\n *\n * * Replica state machine *\n *\n * Main channel state\n * ┌───────────────────┐\n * │RECEIVE_PING_REPLY │\n * └────────┬──────────┘\n *          │ +PONG\n * ┌────────▼──────────┐\n * │SEND_HANDSHAKE     │                     RDB channel state\n * └────────┬──────────┘            ┌───────────────────────────────┐\n *          │+OK                ┌───► RDB_CH_SEND_HANDSHAKE         │\n * ┌────────▼──────────┐        │   └──────────────┬────────────────┘\n * │RECEIVE_AUTH_REPLY │        │    REPLCONF main-ch-client-id <clientid>\n * └────────┬──────────┘        │   ┌──────────────▼────────────────┐\n *          │+OK                │   │ RDB_CH_RECEIVE_AUTH_REPLY     │\n * ┌────────▼──────────┐        │   └──────────────┬────────────────┘\n * │RECEIVE_PORT_REPLY │        │                  │ +OK\n * └────────┬──────────┘        │   ┌──────────────▼────────────────┐\n *          │+OK                │   │  RDB_CH_RECEIVE_REPLCONF_REPLY│\n * ┌────────▼──────────┐        │   └──────────────┬────────────────┘\n * │RECEIVE_IP_REPLY   │        │                  │ +OK\n * └────────┬──────────┘        │   ┌──────────────▼────────────────┐\n *          │+OK                │   │ RDB_CH_RECEIVE_FULLRESYNC     │\n * ┌────────▼──────────┐        │   └──────────────┬────────────────┘\n * │RECEIVE_CAPA_REPLY │        │                  │+FULLRESYNC\n * └────────┬──────────┘        │                  │Rdb delivery\n *          │                   │   ┌──────────────▼────────────────┐\n * ┌────────▼──────────┐        │   │ RDB_CH_RDB_LOADING            │\n * │SEND_PSYNC         │        │   └──────────────┬────────────────┘\n * └─┬─────────────────┘        │                  │ Done loading\n *   │PSYNC (use cached-master) │                  │\n * ┌─▼─────────────────┐        │                  │\n * │RECEIVE_PSYNC_REPLY│        │    ┌────────────►│ Replica streams replication\n * └─┬─────────────────┘        │    │             │ buffer into memory\n *   │      
                    │    │             │\n *   │+RDBCHANNELSYNC client-id │    │             │\n *   ├──────┬───────────────────┘    │             │\n *   │      │ Main channel           │             │\n *   │      │ accumulates repl data  │             │\n *   │   ┌──▼────────────────┐       │     ┌───────▼───────────┐\n *   │   │ REPL_TRANSFER     ├───────┘     │    CONNECTED      │\n *   │   └───────────────────┘             └────▲───▲──────────┘\n *   │                                          │   │\n *   │                                          │   │\n *   │  +FULLRESYNC    ┌───────────────────┐    │   │\n *   ├────────────────► REPL_TRANSFER      ├────┘   │\n *   │                 └───────────────────┘        │\n *   │  +CONTINUE                                   │\n *   └──────────────────────────────────────────────┘\n */\n\n/* Replication: Replica side. */\nstatic int rdbChannelSendHandshake(connection *conn, sds *err) {\n    /* AUTH with the master if required. */\n    if (server.masterauth) {\n        char *args[] = {\"AUTH\", NULL, NULL};\n        size_t lens[] = {4, 0, 0};\n        int argc = 1;\n        if (server.masteruser) {\n            args[argc] = server.masteruser;\n            lens[argc] = strlen(server.masteruser);\n            argc++;\n        }\n        args[argc] = server.masterauth;\n        lens[argc] = sdslen(server.masterauth);\n        argc++;\n        *err = sendCommandArgv(conn, argc, args, lens);\n        if (*err) {\n            serverLog(LL_WARNING, \"Error sending AUTH to master in rdb channel replication handshake: %s\", *err);\n            return C_ERR;\n        }\n    }\n\n    char buf[LONG_STR_SIZE];\n    slaveGetPortStr(buf, sizeof(buf));\n\n    char cid[LONG_STR_SIZE];\n    ull2string(cid, sizeof(cid), server.repl_main_ch_client_id);\n\n    *err = sendCommand(conn, \"REPLCONF\", \"capa\", \"eof\", \"rdb-only\", \"1\",\n                       \"rdb-channel\", \"1\", \"main-ch-client-id\", cid,\n                       
\"listening-port\", buf,\n                       server.slave_announce_ip ? \"ip-address\" : NULL,\n                       server.slave_announce_ip ? server.slave_announce_ip : NULL,\n                       NULL);\n    \n    if (*err) {\n        serverLog(LL_WARNING, \"Error sending REPLCONF command to master in rdb channel handshake: %s\", *err);\n        return C_ERR;\n    }\n\n    if (connSetReadHandler(conn, rdbChannelFullSyncWithMaster) == C_ERR) {\n        char conninfo[CONN_INFO_LEN];\n        serverLog(LL_WARNING, \"Can't create readable event for SYNC: %s (%s)\",\n                  strerror(errno), connGetInfo(conn, conninfo, sizeof(conninfo)));\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Replication: Replica side. */\nstatic int rdbChannelHandleAuthReply(connection *conn, sds *err) {\n    *err = receiveSynchronousResponse(conn);\n    if (*err == NULL) {\n        serverLog(LL_WARNING, \"Master did not respond to auth command during rdb channel handshake\");\n        return C_ERR;\n    }\n    if ((*err)[0] == '-') {\n        serverLog(LL_WARNING, \"Unable to AUTH to master: %s\", *err);\n        return C_ERR;\n    }\n    server.repl_rdb_ch_state = REPL_RDB_CH_RECEIVE_REPLCONF_REPLY;\n    return C_OK;\n}\n\n/* Replication: Replica side. 
*/\nstatic int rdbChannelHandleReplconfReply(connection *conn, sds *err) {\n    *err = receiveSynchronousResponse(conn);\n    if (*err == NULL) {\n        serverLog(LL_WARNING, \"Master did not respond to replconf command during rdb channel handshake\");\n        return C_ERR;\n    }\n    if ((*err)[0] == '-') {\n        serverLog(LL_WARNING, \"Master replied error to replconf: %s\", *err);\n        return C_ERR;\n    }\n    sdsfree(*err);\n\n    if (server.repl_debug_pause & REPL_DEBUG_BEFORE_RDB_CHANNEL)\n        debugPauseProcess();\n\n    /* Request rdb from master */\n    *err = sendCommand(conn, \"PSYNC\", \"?\", \"-1\", NULL);\n    if (*err) {\n        serverLog(LL_WARNING, \"I/O error writing to Master: %s\", *err);\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\n/* Replication: Replica side. */\nstatic int rdbChannelHandleFullresyncReply(connection *conn, sds *err) {\n    char *replid = NULL, *offset = NULL;\n\n    *err = receiveSynchronousResponse(conn);\n    if (*err == NULL)\n        return C_ERR;\n\n    if ((*err)[0] == '\\0') {\n        /* Retry later */\n        serverLog(LL_DEBUG, \"Received empty psync reply\");\n        return C_RETRY;\n    }\n\n    /* FULL RESYNC, parse the reply in order to extract the replid\n     * and the replication offset. 
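The reply has the form\n     * \"+FULLRESYNC <replid> <offset>\", where <replid> is a 40 character\n     * replication id and <offset> is the master's replication offset at the\n     * start of the snapshot.\n     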
*/\n    replid = strchr(*err,' ');\n    if (replid) {\n        replid++;\n        offset = strchr(replid, ' ');\n        if (offset) offset++;\n    }\n    if (!replid || !offset || (offset-replid-1) != CONFIG_RUN_ID_SIZE) {\n        serverLog(LL_WARNING, \"Received unexpected psync reply: %s\", *err);\n        return C_ERR;\n    }\n    memcpy(server.master_replid, replid, offset-replid-1);\n    server.master_replid[CONFIG_RUN_ID_SIZE] = '\\0';\n    server.master_initial_offset = strtoll(offset,NULL,10);\n\n    /* Prepare the main and rdb channels for rdb and repl stream delivery.*/\n    server.repl_state = REPL_STATE_TRANSFER;\n    rdbChannelReplDataBufInit();\n\n    serverLog(LL_NOTICE, \"Starting to receive RDB and replication stream in parallel.\");\n\n    /* Setup connection to accumulate repl data.  */\n    server.repl_main_ch_state = REPL_MAIN_CH_ACCUMULATE_BUF;\n    if (connSetReadHandler(server.repl_transfer_s,\n                           rdbChannelBufferReplData) != C_OK)\n    {\n        serverLog(LL_WARNING, \"Can't set read handler for main channel: %s\",\n                  strerror(errno));\n        return C_ERR;\n    }\n\n    /* Prepare RDB channel connection for RDB download. 
*/\n    if (connSetReadHandler(server.repl_rdb_transfer_s,\n                           readSyncBulkPayload) != C_OK)\n    {\n        char inf[CONN_INFO_LEN];\n        serverLog(LL_WARNING,\n                  \"Can't create readable event for rdb channel connection: %s (%s)\",\n                  strerror(errno),\n                  connGetInfo(server.repl_rdb_transfer_s, inf, sizeof(inf)));\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\n/* Replication: Replica side.\n * This connection handler is used to initialize the RDB channel connection.*/\nstatic void rdbChannelFullSyncWithMaster(connection *conn) {\n    int ret = 0;\n    char *err = NULL;\n    serverAssert(conn == server.repl_rdb_transfer_s);\n\n    /* Check for errors in the socket: after a non blocking connect() we\n     * may find that the socket is in error state. */\n    if (connGetState(conn) != CONN_STATE_CONNECTED) {\n        serverLog(LL_WARNING, \"Error condition on socket for rdb channel replication: %s\",\n                  connGetLastError(conn));\n        goto error;\n    }\n    switch (server.repl_rdb_ch_state) {\n        case REPL_RDB_CH_SEND_HANDSHAKE:\n            ret = rdbChannelSendHandshake(conn, &err);\n            if (ret == C_OK)\n                server.repl_rdb_ch_state = REPL_RDB_CH_RECEIVE_AUTH_REPLY;\n            break;\n        case REPL_RDB_CH_RECEIVE_AUTH_REPLY:\n            if (server.masterauth) {\n                ret = rdbChannelHandleAuthReply(conn, &err);\n                if (ret == C_OK)\n                    server.repl_rdb_ch_state = REPL_RDB_CH_RECEIVE_REPLCONF_REPLY;\n                /* Wait for next bulk before trying to read replconf reply. 
*/\n                break;\n            }\n            server.repl_rdb_ch_state = REPL_RDB_CH_RECEIVE_REPLCONF_REPLY;\n            /* fall through */\n        case REPL_RDB_CH_RECEIVE_REPLCONF_REPLY:\n            ret = rdbChannelHandleReplconfReply(conn, &err);\n            if (ret == C_OK)\n                server.repl_rdb_ch_state = REPL_RDB_CH_RECEIVE_FULLRESYNC;\n            break;\n        case REPL_RDB_CH_RECEIVE_FULLRESYNC:\n            ret = rdbChannelHandleFullresyncReply(conn, &err);\n            if (ret == C_OK)\n                server.repl_rdb_ch_state = REPL_RDB_CH_RDB_LOADING;\n            break;\n        default:\n            serverPanic(\"Unknown rdb channel state: %d\", server.repl_rdb_ch_state);\n    }\n\n    if (ret == C_ERR)\n        goto error;\n\n    sdsfree(err);\n    return;\n\nerror:\n    if (err) {\n        serverLog(LL_WARNING, \"rdb channel sync failed with error: %s\", err);\n        sdsfree(err);\n    }\n    if (server.repl_transfer_s) {\n        connClose(server.repl_transfer_s);\n        server.repl_transfer_s = NULL;\n    }\n    server.repl_state = REPL_STATE_CONNECT;\n    rdbChannelAbort();\n}\n\nvoid replDataBufInit(replDataBuf *buf) {\n    serverAssert(buf->blocks == NULL);\n    buf->size = 0;\n    buf->used = 0;\n    buf->last_num_blocks = 0;\n    buf->mem_used = 0;\n    buf->blocks = listCreate();\n    buf->blocks->free = zfree;\n}\n\nvoid replDataBufClear(replDataBuf *buf) {\n    if (buf->blocks) listRelease(buf->blocks);\n    buf->blocks = NULL;\n    buf->size = 0;\n    buf->used = 0;\n    buf->last_num_blocks = 0;\n    buf->mem_used = 0;\n}\n\n/* Replication: Replica side.\n * Initialize replica's local replication buffer to accumulate repl stream\n * during rdb channel sync. 
*/\nstatic void rdbChannelReplDataBufInit(void) {\n    replDataBufInit(&server.repl_full_sync_buffer);\n}\n\n/* Replication: Replica side.\n * Clear replica's local replication buffer */\nstatic void rdbChannelReplDataBufClear(void) {\n    replDataBufClear(&server.repl_full_sync_buffer);\n}\n\n/* Generic function to read data from connection into the last block. */\nstatic int replDataBufReadIntoLastBlock(connection *conn, replDataBuf *buf,\n                                    void (*error_handler)(connection *conn))\n{\n    atomicIncr(server.stat_io_reads_processed[IOTHREAD_MAIN_THREAD_ID], 1);\n\n    replDataBufBlock *block = listNodeValue(listLast(buf->blocks));\n    serverAssert(block && block->size > block->used);\n\n    int nread = connRead(conn, block->buf + block->used, block->size - block->used);\n    if (nread <= 0) {\n        if (nread == 0 || connGetState(conn) != CONN_STATE_CONNECTED) {\n            error_handler(conn);\n        }\n        return -1;\n    }\n\n    block->used += nread;\n    buf->used += nread;\n    atomicIncr(server.stat_net_repl_input_bytes, nread);\n\n    return nread;\n}\n\n/* Generic function to read data from connection into a buffer. */\nvoid replDataBufReadFromConn(connection *conn, replDataBuf *buf, void (*error_handler)(connection *conn)) {\n    const int buflen = 1024 * 1024;\n    const int minread = 16 * 1024;\n    int nread = 0;\n    int needs_read = 1;\n\n    listNode *ln = listLast(buf->blocks);\n    replDataBufBlock *tail = ln ? listNodeValue(ln) : NULL;\n\n    /* Try to append to the last node. */\n    if (tail && tail->size > tail->used) {\n        nread = replDataBufReadIntoLastBlock(conn, buf, error_handler);\n        if (nread <= 0)\n            return;\n\n        /* If the block was filled completely, there might be more data in the\n         * socket buffer. Read again only if the amount just read was small\n         * (less than minread). 
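For example, with 1MB blocks: if the\n         * block had only 4KB free and we filled it (nread < minread), the\n         * socket likely holds more data, so we read again; if we just read\n         * 900KB, we defer further reads to the next event loop iteration.\n         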
*/\n        needs_read = (tail->size == tail->used) && nread < minread;\n    }\n\n    if (needs_read) {\n        unsigned long long limit;\n        size_t usable_size;\n\n        /* For the accumulation limit, if 'replica-full-sync-buffer-limit' is\n         * set, we use it. Otherwise, 'client-output-buffer-limit <replica>' is\n         * the limit. */\n        limit = server.repl_full_sync_buffer_limit;\n        if (limit == 0)\n            limit = server.client_obuf_limits[CLIENT_TYPE_SLAVE].hard_limit_bytes;\n\n        if (limit != 0 && buf->size > limit) {\n            /* Currently this function is only used for replication and slots\n             * sync. Log accordingly; this may be made extensible in the\n             * future. */\n            if (server.masterhost)\n                serverLog(LL_NOTICE, \"Replication buffer limit has been reached (%llu bytes), \"\n                    \"stopped buffering replication stream. Further accumulation may occur on master side.\", limit);\n            else\n                serverLog(LL_NOTICE, \"Slots sync buffer limit has been reached (%llu bytes), \"\n                    \"stopped buffering slots sync stream. 
Further accumulation may occur on source side.\", limit);\n\n            connSetReadHandler(conn, NULL);\n            return;\n        }\n\n        tail = zmalloc_usable(buflen, &usable_size);\n        tail->size = usable_size - sizeof(replDataBufBlock);\n        tail->used = 0;\n\n        listAddNodeTail(buf->blocks, tail);\n        buf->size += tail->size;\n        buf->mem_used += usable_size + sizeof(listNode);\n\n        /* Update buffer's peak */\n        if (buf->peak < buf->size)\n            buf->peak = buf->size;\n\n        replDataBufReadIntoLastBlock(conn, buf, error_handler);\n    }\n}\n\n/* Replication: Replica side.\n * Main channel read error handler */\nstatic void readReplBufferErrorHandler(connection *conn) {\n    serverLog(LL_WARNING, \"Main channel error while reading from master: %s\",\n              connGetLastError(conn));\n    cancelReplicationHandshake(1);\n}\n\n/* Replication: Replica side.\n * Read handler for buffering incoming repl data during RDB download/loading. */\nstatic void rdbChannelBufferReplData(connection *conn) {\n    replDataBuf *buf = &server.repl_full_sync_buffer;\n\n    if (server.repl_main_ch_state & REPL_MAIN_CH_STREAMING_BUF) {\n        /* While streaming accumulated buffers, we continue reading from the\n         * master to prevent accumulation on master side as much as possible.\n         * However, we aim to drain buffer eventually. To ensure we consume more\n         * than we read, we'll read at most one block after two blocks of\n         * buffers are consumed. 
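For example, if last_num_blocks was\n         * recorded as 10, reads are skipped until the list shrinks to 8 blocks\n         * or fewer.\n         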
*/\n        if (listLength(buf->blocks) + 1 >= buf->last_num_blocks)\n            return;\n        buf->last_num_blocks = listLength(buf->blocks);\n    }\n\n    replDataBufReadFromConn(conn, buf, readReplBufferErrorHandler);\n}\n\n/* Generic function to stream replDataBuf data into database\n * Returns C_OK on success, C_ERR on error */\nint replDataBufStreamToDb(replDataBuf *buf, replDataBufToDbCtx *ctx) {\n    listNode *n;\n    int ret = C_OK;\n    client *c = ctx->client;\n\n    blockingOperationStarts();\n    while ((n = listFirst(buf->blocks))) {\n        replDataBufBlock *o = listNodeValue(n);\n        listUnlinkNode(buf->blocks, n);\n        zfree(n);\n\n        size_t processed = 0;\n        while (processed < o->used) {\n            size_t bytes = min(PROTO_IOBUF_LEN, o->used - processed);\n            c->querybuf = sdscatlen(c->querybuf, &o->buf[processed], bytes);\n            c->read_reploff += (long long int) bytes;\n            c->lastinteraction = server.unixtime;\n\n            /* We don't expect error return value but just in case. 
*/\n            ret = processInputBuffer(c);\n            if (ret != C_OK) break;\n\n            processed += bytes;\n            buf->used -= bytes;\n\n            if (server.repl_debug_pause & REPL_DEBUG_ON_STREAMING_REPL_BUF)\n                debugPauseProcess();\n\n            /* Check if we should yield back to the event loop */\n            if (server.loading_process_events_interval_bytes &&\n                ((ctx->applied_offset + bytes) / server.loading_process_events_interval_bytes >\n                  ctx->applied_offset / server.loading_process_events_interval_bytes))\n            {\n                ctx->yield_callback(ctx);\n                processEventsWhileBlocked();\n            }\n            ctx->applied_offset += bytes;\n\n            /* Check if we should continue processing */\n            if (!ctx->should_continue(ctx)) {\n                ret = C_ERR;\n                break;\n            }\n\n            /* Streaming buffer into the database more slowly is useful in order\n             * to test certain edge cases. */\n            if (server.key_load_delay) debugDelay(server.key_load_delay);\n        }\n        size_t size = o->size;\n        zfree(o);\n\n        /* Break the loop if there is an error. 
*/\n        if (ret != C_OK) break;\n\n        /* Update stats */\n        buf->size -= size;\n        buf->mem_used -= (size + sizeof(listNode) + sizeof(replDataBufBlock));\n    }\n    blockingOperationEnds();\n\n    return ret;\n}\n\n/* Replication: Replica side.\n * Yield callback for streaming replDataBuf to database */\nstatic void rdbChannelStreamYieldCallback(void *ctx) {\n    UNUSED(ctx);\n    replicationSendNewlineToMaster();\n}\n\n/* Replication: Replica side.\n * Global variable to track number of master disconnection.\n * Used to detect master disconnection when streaming replDataBuf to database */\nstatic uint64_t ReplNumMasterDisconnection = 0;\n\n/* Replication: Replica side.\n * Check if we should continue streaming replDataBuf to database */\nstatic int rdbChannelStreamShouldContinue(void *ctx) {\n    replDataBufToDbCtx *context = ctx;\n\n    /* Check if master client was freed in processEventsWhileBlocked().\n     * It can happen if we receive 'replicaof' command or 'client kill'\n     * command for the master. */\n    if (ReplNumMasterDisconnection != server.repl_num_master_disconnection ||\n        !server.repl_full_sync_buffer.blocks ||\n        context->client->flags & CLIENT_CLOSE_ASAP)\n    {\n        return 0;\n    }\n    return 1;\n}\n\n/* Replication: Replica side.\n * Streams accumulated replication data into the database. 
*/\nstatic void rdbChannelStreamReplDataToDb(void) {\n    int ret = C_OK, close_asap = 0;\n    client *c = server.master;\n\n    /* Save repl_num_master_disconnection to figure out if the master gets\n     * disconnected when we yield back to processEventsWhileBlocked() */\n    ReplNumMasterDisconnection = server.repl_num_master_disconnection;\n\n    /* Initialize ctx up front: the 'goto out' below would otherwise jump over\n     * its initialization while ctx.applied_offset is still read after the\n     * 'out' label. */\n    replDataBufToDbCtx ctx = {\n        .client = c,\n        .applied_offset = 0,\n        .should_continue = rdbChannelStreamShouldContinue,\n        .yield_callback = rdbChannelStreamYieldCallback,\n    };\n\n    server.repl_main_ch_state |= REPL_MAIN_CH_STREAMING_BUF;\n    serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Starting to stream replication buffer into the db\"\n                         \" (%zu bytes).\", server.repl_full_sync_buffer.used);\n    if (!server.repl_full_sync_buffer.blocks)\n        goto out;\n\n    /* Record the current buffer block count. We'll use it to verify we consume\n     * faster than we read from the master. */\n    server.repl_full_sync_buffer.last_num_blocks = listLength(server.repl_full_sync_buffer.blocks);\n    /* Set read handler to continue accumulating during streaming */\n    connSetReadHandler(c->conn, rdbChannelBufferReplData);\n\n    ret = replDataBufStreamToDb(&server.repl_full_sync_buffer, &ctx);\n\nout:\n    /* If main channel state is CLOSE_ASAP, it means the main channel faced a\n     * problem while the RDB was being loaded or while we were applying the\n     * accumulated buffer. It stopped replication stream buffering. It's okay\n     * though. We streamed whatever we have into the db, now we can free the\n     * master client and the replica can try psync. 
*/\n    close_asap = (server.repl_main_ch_state & REPL_MAIN_CH_CLOSE_ASAP);\n\n    if (ret == C_OK) {\n        serverLog(LL_NOTICE, \"MASTER <-> REPLICA sync: Successfully streamed replication buffer into the db (%zu bytes in total)\",\n                             ctx.applied_offset);\n        /* Revert the read handler */\n        if (!close_asap && connSetReadHandler(c->conn, readQueryFromClient) != C_OK) {\n            serverLog(LL_WARNING,\n                      \"Can't create readable event for master client: %s\",\n                      strerror(errno));\n            close_asap = 1;\n        }\n    } else {\n        serverLog(LL_WARNING, \"Master client was freed while streaming accumulated replication data to db.\");\n        close_asap = 1;\n    }\n\n    /* If master is disconnected, state should have been cleaned up\n     * already. Otherwise, we do it here. */\n    if (ReplNumMasterDisconnection == server.repl_num_master_disconnection) {\n        rdbChannelCleanup();\n        if (server.master && close_asap)\n            freeClient(server.master);\n    }\n}\n\nstatic void rdbChannelCleanup(void) {\n    server.repl_rdb_ch_state = REPL_RDB_CH_STATE_NONE;\n    server.repl_main_ch_state = REPL_MAIN_CH_NONE;\n    rdbChannelReplDataBufClear();\n}\n\n/* Replication: Replica side.\n * On rdb channel failure, close rdb-connection and reset state.\n * Return C_OK if cleanup is done. Otherwise, returns C_ERR which means cleanup\n * will be done asynchronously. */\nstatic int rdbChannelAbort(void) {\n    if (server.repl_rdb_ch_state == REPL_RDB_CH_STATE_NONE)\n        return C_OK;\n\n    /* This function may also be called if a problem is detected on the main\n     * channel. In this case, we handle the situation differently based on\n     * the current state:\n     * - If we started loading the RDB file and the RDB is disk-based, we mark\n     *   the main channel's state as CLOSE_ASAP and defer the failure handling\n     *   until after the RDB has been loaded. 
This way we allow the replica to\n     *   retry psync after the RDB is loaded.\n     * - For diskless loading, we cannot safely free the rdb channel connection\n     *   object. Instead, we mark the RIO object as aborted so the next\n     *   rioRead() will fail safely.\n     * - If the RDB has already been loaded, and we are streaming the\n     *   accumulated buffer to the database, we mark the main connection\n     *   as CLOSE_ASAP and wait until the accumulated buffer is drained.\n     *   Once done, the replica can attempt psync with the offset it has. */\n    int async_cleanup = (server.repl_rdb_transfer_s && server.loading) ||\n                        (server.repl_main_ch_state & REPL_MAIN_CH_STREAMING_BUF);\n    if (async_cleanup) {\n        if (server.repl_rdb_transfer_s && server.loading) {\n            serverLog(LL_NOTICE, \"Aborting rdb channel sync while loading the RDB.\");\n\n            if (disklessLoadingRio)\n                /* Mark rio with abort flag, next rioRead() will return error.*/\n                rioAbort(disklessLoadingRio);\n            else {\n                /* For disk based loading, we can wait until loading is done.\n                 * This way, replica will have a chance for a successful psync\n                 * later.*/\n                serverLog(LL_NOTICE, \"After loading RDB, replica will try psync with master.\");\n            }\n        }\n\n        if (server.repl_transfer_s)\n            connSetReadHandler(server.repl_transfer_s, NULL);\n\n        server.repl_main_ch_state |= REPL_MAIN_CH_CLOSE_ASAP;\n        return C_ERR;\n    }\n\n    serverLog(LL_NOTICE, \"Aborting rdb channel sync\");\n\n    if (server.repl_rdb_transfer_s) {\n        connClose(server.repl_rdb_transfer_s);\n        server.repl_rdb_transfer_s = NULL;\n    }\n    if (server.repl_transfer_fd != -1) {\n        close(server.repl_transfer_fd);\n        server.repl_transfer_fd = -1;\n    }\n    if (server.repl_transfer_tmpfile) {\n        
bg_unlink(server.repl_transfer_tmpfile);\n        zfree(server.repl_transfer_tmpfile);\n        server.repl_transfer_tmpfile = NULL;\n    }\n    rdbChannelCleanup();\n    return C_OK;\n}\n\nvoid replicaofCommand(client *c) {\n    /* SLAVEOF is not allowed in cluster mode as replication is automatically\n     * configured using the current address of the master node. */\n    if (server.cluster_enabled) {\n        addReplyError(c,\"REPLICAOF not allowed in cluster mode.\");\n        return;\n    }\n\n    if (server.failover_state != NO_FAILOVER) {\n        addReplyError(c,\"REPLICAOF not allowed while failing over.\");\n        return;\n    }\n\n    /* The special host/port combination \"NO\" \"ONE\" turns the instance\n     * into a master. Otherwise the new master address is set. */\n    if (!strcasecmp(c->argv[1]->ptr,\"no\") &&\n        !strcasecmp(c->argv[2]->ptr,\"one\")) {\n        if (server.masterhost) {\n            replicationUnsetMaster();\n            sds client = catClientInfoString(sdsempty(),c);\n            serverLog(LL_NOTICE,\"MASTER MODE enabled (user request from '%s')\",\n                client);\n            sdsfree(client);\n        }\n    } else {\n        long port;\n\n        if (c->flags & CLIENT_SLAVE)\n        {\n            /* If a client is already a replica they cannot run this command,\n             * because it involves flushing all replicas (including this\n             * client) */\n            addReplyError(c, \"Command is not valid when client is a replica.\");\n            return;\n        }\n\n        if (getRangeLongFromObjectOrReply(c, c->argv[2], 0, 65535, &port,\n                                          \"Invalid master port\") != C_OK)\n            return;\n\n        /* Check if we are already attached to the specified master */\n        if (server.masterhost && !strcasecmp(server.masterhost,c->argv[1]->ptr)\n            && server.masterport == port) {\n            serverLog(LL_NOTICE,\"REPLICAOF would result in 
synchronization \"\n                                \"with the master we are already connected \"\n                                \"with. No operation performed.\");\n            addReplySds(c,sdsnew(\"+OK Already connected to specified \"\n                                 \"master\\r\\n\"));\n            return;\n        }\n        /* There was no previous master or the user specified a different one,\n         * we can continue. */\n        replicationSetMaster(c->argv[1]->ptr, port);\n        sds client = catClientInfoString(sdsempty(),c);\n        serverLog(LL_NOTICE,\"REPLICAOF %s:%d enabled (user request from '%s')\",\n            server.masterhost, server.masterport, client);\n        sdsfree(client);\n    }\n    addReply(c,shared.ok);\n}\n\n/* ROLE command: provide information about the role of the instance\n * (master or slave) and additional information related to replication\n * in an easy to process format. */\nvoid roleCommand(client *c) {\n    if (server.sentinel_mode) {\n        sentinelRoleCommand(c);\n        return;\n    }\n\n    if (server.masterhost == NULL) {\n        listIter li;\n        listNode *ln;\n        void *mbcount;\n        int slaves = 0;\n\n        addReplyArrayLen(c,3);\n        addReplyBulkCBuffer(c,\"master\",6);\n        addReplyLongLong(c,server.master_repl_offset);\n        mbcount = addReplyDeferredLen(c);\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            client *slave = ln->value;\n            char ip[NET_IP_STR_LEN], *slaveaddr = slave->slave_addr;\n\n            if (!slaveaddr) {\n                if (connAddrPeerName(slave->conn,ip,sizeof(ip),NULL) == -1)\n                    continue;\n                slaveaddr = ip;\n            }\n            if (slave->replstate != SLAVE_STATE_ONLINE) continue;\n            addReplyArrayLen(c,3);\n            addReplyBulkCString(c,slaveaddr);\n            addReplyBulkLongLong(c,slave->slave_listening_port);\n            
addReplyBulkLongLong(c,slave->repl_ack_off);\n            slaves++;\n        }\n        setDeferredArrayLen(c,mbcount,slaves);\n    } else {\n        char *slavestate = NULL;\n\n        addReplyArrayLen(c,5);\n        addReplyBulkCBuffer(c,\"slave\",5);\n        addReplyBulkCString(c,server.masterhost);\n        addReplyLongLong(c,server.masterport);\n        if (slaveIsInHandshakeState()) {\n            slavestate = \"handshake\";\n        } else {\n            switch(server.repl_state) {\n            case REPL_STATE_NONE: slavestate = \"none\"; break;\n            case REPL_STATE_CONNECT: slavestate = \"connect\"; break;\n            case REPL_STATE_CONNECTING: slavestate = \"connecting\"; break;\n            case REPL_STATE_TRANSFER: slavestate = \"sync\"; break;\n            case REPL_STATE_CONNECTED: slavestate = \"connected\"; break;\n            default: slavestate = \"unknown\"; break;\n            }\n        }\n        addReplyBulkCString(c,slavestate);\n        addReplyLongLong(c,server.master ? server.master->reploff : -1);\n    }\n}\n\n/* Send a REPLCONF ACK command to the master to inform it about the current\n * processed offset. If we are not connected with a master, the command has\n * no effects. */\nvoid replicationSendAck(void) {\n    client *c = server.master;\n\n    if (c != NULL) {\n        int send_fack = server.fsynced_reploff != -1;\n        c->flags |= CLIENT_MASTER_FORCE_REPLY;\n        addReplyArrayLen(c,send_fack ? 5 : 3);\n        addReplyBulkCString(c,\"REPLCONF\");\n        addReplyBulkCString(c,\"ACK\");\n        addReplyBulkLongLong(c,c->reploff);\n        if (send_fack) {\n            addReplyBulkCString(c,\"FACK\");\n            addReplyBulkLongLong(c,server.fsynced_reploff);\n        }\n        c->flags &= ~CLIENT_MASTER_FORCE_REPLY;\n        /* Accumulation from above replies must be reset back to 0 manually,\n         * as this subroutine does not invoke resetClient(). 
*/\n        c->net_output_bytes_curr_cmd = 0;\n    }\n}\n\n/* ---------------------- MASTER CACHING FOR PSYNC -------------------------- */\n\n/* In order to implement partial synchronization we need to be able to cache\n * our master's client structure after a transient disconnection.\n * It is cached into server.cached_master and flushed away using the following\n * functions. */\n\n/* This function is called by freeClient() in order to cache the master\n * client structure instead of destroying it. freeClient() will return\n * ASAP after this function returns, so every action needed to avoid problems\n * with a client that is really \"suspended\" has to be done by this function.\n *\n * The other functions that will deal with the cached master are:\n *\n * replicationDiscardCachedMaster() that will make sure to kill the client\n * when for some reason we don't want to use it in the future.\n *\n * replicationResurrectCachedMaster() that is used after a successful PSYNC\n * handshake in order to reactivate the cached master.\n */\nvoid replicationCacheMaster(client *c) {\n    serverAssert(server.master != NULL && server.cached_master == NULL);\n    serverAssert(server.master->tid == IOTHREAD_MAIN_THREAD_ID);\n    serverLog(LL_NOTICE,\"Caching the disconnected master state.\");\n\n    /* Unlink the client from the server structures. */\n    unlinkClient(c);\n\n    /* Reset the master client so that it's ready to accept new commands:\n     * we want to discard the non-processed query buffers and non-processed\n     * offsets, including pending transactions, already populated arguments,\n     * pending outputs to the master. 
*/\n    sdsclear(server.master->querybuf);\n    server.master->qb_pos = 0;\n    server.master->repl_applied = 0;\n    server.master->read_reploff = server.master->reploff;\n    server.master->reploff_next = 0;\n    if (c->flags & CLIENT_MULTI) discardTransaction(c);\n    listEmpty(c->reply);\n    c->sentlen = 0;\n    c->reply_bytes = 0;\n    c->bufpos = 0;\n    resetClient(c, -1);\n    resetClientQbufState(c);\n\n    /* Save the master. Server.master will be set to null later by\n     * replicationHandleMasterDisconnection(). */\n    server.cached_master = server.master;\n\n    /* Invalidate the Peer ID cache. */\n    if (c->peerid) {\n        sdsfree(c->peerid);\n        c->peerid = NULL;\n    }\n    /* Invalidate the Sock Name cache. */\n    if (c->sockname) {\n        sdsfree(c->sockname);\n        c->sockname = NULL;\n    }\n\n    /* Caching the master happens instead of the actual freeClient() call,\n     * so make sure to adjust the replication state. This function will\n     * also set server.master to NULL. */\n    replicationHandleMasterDisconnection();\n}\n\n/* This function is called when a master is turned into a slave, in order to\n * create from scratch a cached master for the new client, that will allow us\n * to PSYNC with the slave that was promoted to be the new master after a\n * failover.\n *\n * Assuming this instance was previously the master instance of the new master,\n * the new master will accept its replication ID, and potentially also the\n * current offset if no data was lost during the failover. So we use our\n * current replication ID and offset in order to synthesize a cached master. 
*/\nvoid replicationCacheMasterUsingMyself(void) {\n    serverLog(LL_NOTICE,\n        \"Before turning into a replica, using my own master parameters \"\n        \"to synthesize a cached master: I may be able to synchronize with \"\n        \"the new master with just a partial transfer.\");\n\n    /* This will be used to populate the field server.master->reploff\n     * by replicationCreateMasterClient(). We'll later set the created\n     * master as server.cached_master, so the replica will use such\n     * offset for PSYNC. */\n    server.master_initial_offset = server.master_repl_offset;\n\n    /* The master client we create can be set to any DBID, because\n     * the new master will start its replication stream with SELECT. */\n    replicationCreateMasterClient(NULL,-1);\n\n    /* Use our own ID / offset. */\n    memcpy(server.master->replid, server.replid, sizeof(server.replid));\n\n    /* Set as cached master. */\n    unlinkClient(server.master);\n    server.cached_master = server.master;\n    server.master = NULL;\n}\n\n/* Free a cached master, called when the conditions for a partial resync on\n * reconnection no longer hold. */\nvoid replicationDiscardCachedMaster(void) {\n    if (server.cached_master == NULL) return;\n\n    serverLog(LL_NOTICE,\"Discarding previously cached master state.\");\n    server.cached_master->flags &= ~CLIENT_MASTER;\n    freeClient(server.cached_master);\n    server.cached_master = NULL;\n}\n\n/* Turn the cached master into the current master, using the file descriptor\n * passed as argument as the socket for the new master.\n *\n * This function is called when a partial resynchronization has been\n * successfully set up, so the stream of data that we'll receive will start\n * from where this master left off. 
*/\nvoid replicationResurrectCachedMaster(connection *conn) {\n    serverAssert(server.cached_master->tid == IOTHREAD_MAIN_THREAD_ID);\n\n    server.master = server.cached_master;\n    server.cached_master = NULL;\n    server.master->conn = conn;\n    connSetPrivateData(server.master->conn, server.master);\n    server.master->flags &= ~(CLIENT_CLOSE_AFTER_REPLY|CLIENT_CLOSE_ASAP);\n    server.master->authenticated = 1;\n    server.master->lastinteraction = server.unixtime;\n    server.repl_state = REPL_STATE_CONNECTED;\n    server.repl_down_since = 0;\n    server.repl_up_since = server.unixtime;\n    if (server.repl_disconnect_start_time != 0) {\n        server.repl_total_disconnect_time += server.unixtime - server.repl_disconnect_start_time;\n        server.repl_disconnect_start_time = 0;\n    }\n    /* Fire the master link modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_MASTER_LINK_CHANGE,\n                          REDISMODULE_SUBEVENT_MASTER_LINK_UP,\n                          NULL);\n\n    /* Re-add to the list of clients. */\n    linkClient(server.master);\n    if (connSetReadHandler(server.master->conn, readQueryFromClient)) {\n        serverLog(LL_WARNING,\"Error resurrecting the cached master, impossible to add the readable handler: %s\", strerror(errno));\n        freeClientAsync(server.master); /* Close ASAP. */\n    }\n\n    /* We may also need to install the write handler if there is\n     * pending data in the write buffers. */\n    if (clientHasPendingReplies(server.master)) {\n        if (connSetWriteHandler(server.master->conn, sendReplyToClient)) {\n            serverLog(LL_WARNING,\"Error resurrecting the cached master, impossible to add the writable handler: %s\", strerror(errno));\n            freeClientAsync(server.master); /* Close ASAP. 
*/\n        }\n    }\n}\n\n/* ------------------------- MIN-SLAVES-TO-WRITE  --------------------------- */\n\n/* This function counts the number of slaves with lag <= min-slaves-max-lag.\n * If the option is active, the server will prevent writes if there are not\n * enough connected slaves with the specified lag (or less). */\nvoid refreshGoodSlavesCount(void) {\n    listIter li;\n    listNode *ln;\n    int good = 0;\n\n    if (!server.repl_min_slaves_to_write ||\n        !server.repl_min_slaves_max_lag) return;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n        time_t lag = server.unixtime - slave->repl_ack_time;\n\n        if (slave->replstate == SLAVE_STATE_ONLINE &&\n            lag <= server.repl_min_slaves_max_lag) good++;\n    }\n    server.repl_good_slaves_count = good;\n}\n\n/* Return true if the status of good replicas is OK, otherwise false. */\nint checkGoodReplicasStatus(void) {\n    return server.masterhost || /* not a primary, status should be OK */\n           !server.repl_min_slaves_max_lag || /* Min slave max lag not configured */\n           !server.repl_min_slaves_to_write || /* Min slave to write not configured */\n           server.repl_good_slaves_count >= server.repl_min_slaves_to_write; /* check if we have enough slaves */\n}\n\n/* ----------------------- SYNCHRONOUS REPLICATION --------------------------\n * Redis synchronous replication design can be summarized in points:\n *\n * - Redis masters have a global replication offset, used by PSYNC.\n * - Masters increment the offset every time new commands are sent to slaves.\n * - Slaves ping back masters with the offset processed so far.\n *\n * So synchronous replication adds a new WAIT command in the form:\n *\n *   WAIT <num_replicas> <milliseconds_timeout>\n *\n * That returns the number of replicas that processed the query when\n * we finally have at least num_replicas, or when the timeout was\n * reached.\n *\n * The command 
is implemented in this way:\n *\n * - Every time a client processes a command, we remember the replication\n *   offset after sending that command to the slaves.\n * - When WAIT is called, we ask slaves to send an acknowledgement ASAP.\n *   The client is blocked at the same time (see blocked.c).\n * - Once we receive enough ACKs for a given offset or when the timeout\n *   is reached, the WAIT command is unblocked and the reply sent to the\n *   client.\n */\n\n/* This just sets a flag so that we broadcast a REPLCONF GETACK command\n * to all the slaves in the beforeSleep() function. Note that this way\n * we \"group\" all the clients that want to wait for synchronous replication\n * in a given event loop iteration, and send a single GETACK for them all. */\nvoid replicationRequestAckFromSlaves(void) {\n    server.get_ack_from_slaves = 1;\n}\n\n/* Return the number of slaves that already acknowledged the specified\n * replication offset. */\nint replicationCountAcksByOffset(long long offset) {\n    listIter li;\n    listNode *ln;\n    int count = 0;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n\n        if (slave->replstate != SLAVE_STATE_ONLINE) continue;\n        if (slave->repl_ack_off >= offset) count++;\n    }\n    return count;\n}\n\n/* Return the number of replicas that already acknowledged the specified\n * replication offset being AOF fsynced. */\nint replicationCountAOFAcksByOffset(long long offset) {\n    listIter li;\n    listNode *ln;\n    int count = 0;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n\n        if (slave->replstate != SLAVE_STATE_ONLINE) continue;\n        if (slave->repl_aof_off >= offset) count++;\n    }\n    return count;\n}\n\n/* WAIT for N replicas to acknowledge the processing of our latest\n * write command (and all the previous commands). 
*/\nvoid waitCommand(client *c) {\n    mstime_t timeout;\n    long numreplicas, ackreplicas;\n    long long offset = c->woff;\n\n    if (server.masterhost) {\n        addReplyError(c,\"WAIT cannot be used with replica instances. Please also note that since Redis 4.0 if a replica is configured to be writable (which is not the default) writes to replicas are just local and are not propagated.\");\n        return;\n    }\n\n    /* Argument parsing. */\n    if (getLongFromObjectOrReply(c,c->argv[1],&numreplicas,NULL) != C_OK)\n        return;\n    if (getTimeoutFromObjectOrReply(c,c->argv[2],&timeout,UNIT_MILLISECONDS)\n        != C_OK) return;\n\n    /* First try without blocking at all. */\n    ackreplicas = replicationCountAcksByOffset(c->woff);\n    if (ackreplicas >= numreplicas || c->flags & CLIENT_DENY_BLOCKING) {\n        addReplyLongLong(c,ackreplicas);\n        return;\n    }\n\n    /* Otherwise block the client and put it into our list of clients\n     * waiting for ack from slaves. */\n    blockForReplication(c,timeout,offset,numreplicas);\n\n    /* Make sure that the server will send an ACK request to all the slaves\n     * before returning to the event loop. */\n    replicationRequestAckFromSlaves();\n}\n\n/* WAIT for N replicas and / or the local master to acknowledge that our\n * latest write command got synced to the disk. */\nvoid waitaofCommand(client *c) {\n    mstime_t timeout;\n    long numreplicas, numlocal, ackreplicas, acklocal;\n\n    /* Argument parsing. */\n    if (getRangeLongFromObjectOrReply(c,c->argv[1],0,1,&numlocal,NULL) != C_OK)\n        return;\n    if (getPositiveLongFromObjectOrReply(c,c->argv[2],&numreplicas,NULL) != C_OK)\n        return;\n    if (getTimeoutFromObjectOrReply(c,c->argv[3],&timeout,UNIT_MILLISECONDS) != C_OK)\n        return;\n\n    if (server.masterhost) {\n        addReplyError(c,\"WAITAOF cannot be used with replica instances. 
Please also note that writes to replicas are just local and are not propagated.\");\n        return;\n    }\n    if (numlocal && !server.aof_enabled) {\n        addReplyError(c, \"WAITAOF cannot be used when numlocal is set but appendonly is disabled.\");\n        return;\n    }\n\n    /* First try without blocking at all. */\n    ackreplicas = replicationCountAOFAcksByOffset(c->woff);\n    acklocal = server.fsynced_reploff >= c->woff;\n    if ((ackreplicas >= numreplicas && acklocal >= numlocal) || c->flags & CLIENT_DENY_BLOCKING) {\n        addReplyArrayLen(c,2);\n        addReplyLongLong(c,acklocal);\n        addReplyLongLong(c,ackreplicas);\n        return;\n    }\n\n    /* Otherwise block the client and put it into our list of clients\n     * waiting for ack from slaves. */\n    blockForAofFsync(c,timeout,c->woff,numlocal,numreplicas);\n\n    /* Make sure that the server will send an ACK request to all the slaves\n     * before returning to the event loop. */\n    replicationRequestAckFromSlaves();\n}\n\n/* This is called by unblockClient() to perform the blocking op type\n * specific cleanup. We just remove the client from the list of clients\n * waiting for replica acks. Never call it directly, call unblockClient()\n * instead. */\nvoid unblockClientWaitingReplicas(client *c) {\n    listNode *ln = listSearchKey(server.clients_waiting_acks,c);\n    serverAssert(ln != NULL);\n    listDelNode(server.clients_waiting_acks,ln);\n    updateStatsOnUnblock(c, 0, 0, 0);\n}\n\n/* Check if there are clients blocked in WAIT or WAITAOF that can be unblocked\n * since we received enough ACKs from slaves. 
*/\nvoid processClientsWaitingReplicas(void) {\n    long long last_offset = 0;\n    long long last_aof_offset = 0;\n    int last_numreplicas = 0;\n    int last_aof_numreplicas = 0;\n\n    listIter li;\n    listNode *ln;\n\n    listRewind(server.clients_waiting_acks,&li);\n    while((ln = listNext(&li))) {\n        int numlocal = 0;\n        int numreplicas = 0;\n\n        client *c = ln->value;\n        int is_wait_aof = c->bstate.btype == BLOCKED_WAITAOF;\n\n        if (is_wait_aof && c->bstate.numlocal && !server.aof_enabled) {\n            addReplyError(c, \"WAITAOF cannot be used when numlocal is set but appendonly is disabled.\");\n            unblockClient(c, 1);\n            continue;\n        }\n\n        /* Every time we find a client that is satisfied for a given\n         * offset and number of replicas, we remember it so the next client\n         * may be unblocked without calling replicationCountAcksByOffset()\n         * or calling replicationCountAOFAcksByOffset()\n         * if the requested offset / replicas were equal or less. */\n        if (!is_wait_aof && last_offset && last_offset >= c->bstate.reploffset &&\n                           last_numreplicas >= c->bstate.numreplicas)\n        {\n            numreplicas = last_numreplicas;\n        } else if (is_wait_aof && last_aof_offset && last_aof_offset >= c->bstate.reploffset &&\n                    last_aof_numreplicas >= c->bstate.numreplicas)\n        {\n            numreplicas = last_aof_numreplicas;\n        } else {\n            numreplicas = is_wait_aof ?\n                replicationCountAOFAcksByOffset(c->bstate.reploffset) :\n                replicationCountAcksByOffset(c->bstate.reploffset);\n\n            /* Check if the number of replicas is satisfied. 
*/\n            if (numreplicas < c->bstate.numreplicas) continue;\n\n            if (is_wait_aof) {\n                last_aof_offset = c->bstate.reploffset;\n                last_aof_numreplicas = numreplicas;\n            } else {\n                last_offset = c->bstate.reploffset;\n                last_numreplicas = numreplicas;\n            }\n        }\n\n        /* Check if the local constraint of WAITAOF is satisfied */\n        if (is_wait_aof) {\n            numlocal = server.fsynced_reploff >= c->bstate.reploffset;\n            if (numlocal < c->bstate.numlocal) continue;\n        }\n\n        /* Reply before unblocking, because unblockClient() calls reqresAppendResponse */\n        if (is_wait_aof) {\n            /* WAITAOF has an array reply */\n            addReplyArrayLen(c, 2);\n            addReplyLongLong(c, numlocal);\n            addReplyLongLong(c, numreplicas);\n        } else {\n            addReplyLongLong(c, numreplicas);\n        }\n\n        unblockClient(c, 1);\n    }\n}\n\n/* Return the slave replication offset for this instance, that is\n * the offset for which we already processed the master replication stream. */\nlong long replicationGetSlaveOffset(void) {\n    long long offset = 0;\n\n    if (server.masterhost != NULL) {\n        if (server.master) {\n            offset = server.master->reploff;\n        } else if (server.cached_master) {\n            offset = server.cached_master->reploff;\n        }\n    }\n    /* offset may be -1 when the master does not support it at all, however\n     * this function is designed to return an offset that can express the\n     * amount of data processed by the master, so we return a non-negative\n     * integer. */\n    if (offset < 0) offset = 0;\n    return offset;\n}\n\n/* --------------------------- REPLICATION CRON  ---------------------------- */\n\n/* Replication cron function, called once per second. 
*/\nvoid replicationCron(void) {\n    /* Check failover status first, to see if we need to start\n     * handling the failover. */\n    updateFailoverStatus();\n\n    /* Non-blocking connection timeout? */\n    if (server.masterhost &&\n        (server.repl_state == REPL_STATE_CONNECTING ||\n         slaveIsInHandshakeState()) &&\n         (time(NULL)-server.repl_transfer_lastio) > server.repl_timeout)\n    {\n        serverLog(LL_WARNING,\"Timeout connecting to the MASTER...\");\n        cancelReplicationHandshake(1);\n    }\n\n    /* Bulk transfer I/O timeout? */\n    if (server.masterhost && server.repl_state == REPL_STATE_TRANSFER &&\n        (time(NULL)-server.repl_transfer_lastio) > server.repl_timeout)\n    {\n        serverLog(LL_WARNING,\"Timeout receiving bulk data from MASTER... If the problem persists try to set the 'repl-timeout' parameter in redis.conf to a larger value.\");\n        cancelReplicationHandshake(1);\n    }\n\n    /* Check if we should connect to a MASTER */\n    if (server.repl_state == REPL_STATE_CONNECT) {\n        serverLog(LL_NOTICE,\"Connecting to MASTER %s:%d\",\n            server.masterhost, server.masterport);\n        connectWithMaster();\n    }\n\n    replicationCronRunMasterClient();\n\n    /* If we have attached slaves, PING them from time to time.\n     * So slaves can implement an explicit timeout to masters, and will\n     * be able to detect a link disconnection even if the TCP connection\n     * will not actually go down. */\n    listIter li;\n    listNode *ln;\n    robj *ping_argv[1];\n\n    /* First, send PING according to ping_slave_period. The reason the master\n     * sends PING is to keep the connection with the replica active, so the\n     * master need not send PING to replicas if it already sent replication\n     * stream data within the past repl_ping_slave_period time. 
*/\n    if (server.masterhost == NULL && listLength(server.slaves) &&\n        server.unixtime >= server.repl_stream_lastio + server.repl_ping_slave_period)\n    {\n        /* Note that we don't send the PING if the clients are paused during\n         * a Redis Cluster manual failover: the PING we send will otherwise\n         * alter the replication offsets of master and slave, and will no longer\n         * match the one stored into 'mf_master_offset' state. */\n        int manual_failover_in_progress =\n            ((server.cluster_enabled &&\n              clusterManualFailoverTimeLimit()) ||\n            server.failover_end_time) &&\n            isPausedActionsWithUpdate(PAUSE_ACTION_REPLICA);\n\n        if (!manual_failover_in_progress) {\n            ping_argv[0] = shared.ping;\n            replicationFeedSlaves(server.slaves, -1,\n                ping_argv, 1);\n        }\n    }\n\n    /* Second, send a newline to all the slaves in pre-synchronization\n     * stage, that is, slaves waiting for the master to create the RDB file.\n     *\n     * Also send a newline to all the chained slaves we have, if we lost\n     * connection from our master, to keep the slaves aware that their\n     * master is online. This is needed since sub-slaves only receive proxied\n     * data from top-level masters, so there is no explicit pinging in order\n     * to avoid altering the replication offsets. These special out-of-band\n     * pings (newlines) can be sent; they will have no effect on the offset.\n     *\n     * The newline will be ignored by the slave but will refresh the\n     * last interaction timer preventing a timeout. In this case we ignore the\n     * ping period and refresh the connection once per second since certain\n     * timeouts are set at a few seconds (example: PSYNC response). 
*/\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        client *slave = ln->value;\n\n        int is_presync =\n            (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START ||\n            (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END &&\n             server.rdb_child_type != RDB_CHILD_TYPE_SOCKET));\n\n        if (is_presync && !(slave->flags & CLIENT_CLOSE_ASAP)) {\n            connWrite(slave->conn, \"\\n\", 1);\n        }\n    }\n\n    /* Disconnect timedout slaves. */\n    if (listLength(server.slaves)) {\n        listIter li;\n        listNode *ln;\n\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            client *slave = ln->value;\n\n            if (slave->replstate == SLAVE_STATE_ONLINE) {\n                if (slave->flags & CLIENT_PRE_PSYNC)\n                    continue;\n                if ((server.unixtime - slave->repl_ack_time) > server.repl_timeout) {\n                    serverLog(LL_WARNING, \"Disconnecting timedout replica (streaming sync): %s\",\n                          replicationGetSlaveName(slave));\n                    freeClient(slave);\n                    continue;\n                }\n            }\n            /* We consider disconnecting only diskless replicas because disk-based replicas aren't fed\n             * by the fork child so if a disk-based replica is stuck it doesn't prevent the fork child\n             * from terminating. 
*/\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_END && server.rdb_child_type == RDB_CHILD_TYPE_SOCKET) {\n                if (slave->repl_last_partial_write != 0 &&\n                    (server.unixtime - slave->repl_last_partial_write) > server.repl_timeout)\n                {\n                    serverLog(LL_WARNING, \"Disconnecting timedout replica (full sync): %s\",\n                          replicationGetSlaveName(slave));\n                    freeClient(slave);\n                    continue;\n                }\n            }\n        }\n    }\n\n    /* If this is a master without attached slaves and there is a replication\n     * backlog active, in order to reclaim memory we can free it after some\n     * (configured) time. Note that this cannot be done for slaves: slaves\n     * without sub-slaves attached should still accumulate data into the\n     * backlog, in order to reply to PSYNC queries if they are turned into\n     * masters after a failover. */\n    if (listLength(server.slaves) == 0 && server.repl_backlog_time_limit &&\n        server.repl_backlog && server.masterhost == NULL)\n    {\n        time_t idle = server.unixtime - server.repl_no_slaves_since;\n\n        if (idle > server.repl_backlog_time_limit) {\n            /* When we free the backlog, we always use a new\n             * replication ID and clear the ID2. This is needed\n             * because when there is no backlog, the master_repl_offset\n             * is not updated, but we would still retain our replication\n             * ID, leading to the following problem:\n             *\n             * 1. We are a master instance.\n             * 2. Our slave is promoted to master. Its repl-id-2 will\n             *    be the same as our repl-id.\n             * 3. We, while still a master, receive some updates that will not\n             *    increment the master_repl_offset.\n             * 4. 
Later we are turned into a slave, connect to the new\n             *    master that will accept our PSYNC request via the second\n             *    replication ID, but there will be data inconsistency\n             *    because we received writes. */\n            changeReplicationId();\n            clearReplicationId2();\n            freeReplicationBacklog();\n            serverLog(LL_NOTICE,\n                \"Replication backlog freed after %d seconds \"\n                \"without connected replicas.\",\n                (int) server.repl_backlog_time_limit);\n        }\n    }\n\n    replicationStartPendingFork();\n\n    /* Remove the RDB file used for replication if Redis is not running\n     * with any persistence. */\n    removeRDBUsedToSyncReplicas();\n\n    /* Sanity check the replication buffer: the first block of replication\n     * buffer blocks must be referenced by someone, since it will be freed\n     * when not referenced (otherwise the server will OOM). Also, its refcount\n     * must not be more than the number of replicas + 1 (the replication\n     * backlog). */\n    if (listLength(server.repl_buffer_blocks) > 0) {\n        replBufBlock *o = listNodeValue(listFirst(server.repl_buffer_blocks));\n        serverAssert(o->refcount > 0 &&\n            o->refcount <= (int)listLength(server.slaves)+1);\n    }\n\n    /* Refresh the number of slaves with lag <= min-slaves-max-lag. */\n    refreshGoodSlavesCount();\n}\n\nint shouldStartChildReplication(int *mincapa_out, int *req_out) {\n    /* We should start a BGSAVE good for replication if we have slaves in\n     * WAIT_BGSAVE_START state.\n     *\n     * In case of diskless replication, we make sure to wait the specified\n     * number of seconds (according to configuration) so that other slaves\n     * have the time to arrive before we start streaming. 
*/\n    if (!hasActiveChildProcess()) {\n        time_t idle, max_idle = 0;\n        int slaves_waiting = 0;\n        int mincapa;\n        int req;\n        int first = 1;\n        listNode *ln;\n        listIter li;\n\n        listRewind(server.slaves,&li);\n        while((ln = listNext(&li))) {\n            client *slave = ln->value;\n            if (slave->replstate == SLAVE_STATE_WAIT_BGSAVE_START) {\n                if (first) {\n                    /* Get first slave's requirements */\n                    req = slave->slave_req;\n                } else if (req != slave->slave_req) {\n                    /* Skip slaves that don't match */\n                    continue;\n                }\n                idle = server.unixtime - slave->lastinteraction;\n                /* If the slave requests a slots snapshot, we should start BGSAVE\n                 * immediately since it can't share the RDB with other slaves. */\n                if (slave->slave_req & SLAVE_REQ_SLOTS_SNAPSHOT)\n                    idle = server.repl_diskless_sync_delay; /* Threshold for BGSAVE */\n                if (idle > max_idle) max_idle = idle;\n                slaves_waiting++;\n                mincapa = first ? slave->slave_capa : (mincapa & slave->slave_capa);\n                first = 0;\n            }\n        }\n\n        if (slaves_waiting &&\n            (!server.repl_diskless_sync ||\n             (server.repl_diskless_sync_max_replicas > 0 &&\n              slaves_waiting >= server.repl_diskless_sync_max_replicas) ||\n             max_idle >= server.repl_diskless_sync_delay))\n        {\n            if (mincapa_out)\n                *mincapa_out = mincapa;\n            if (req_out)\n                *req_out = req;\n            return 1;\n        }\n    }\n\n    return 0;\n}\n\nvoid replicationStartPendingFork(void) {\n    int mincapa = -1;\n    int req = -1;\n\n    if (shouldStartChildReplication(&mincapa, &req)) {\n        /* Start the BGSAVE. 
The called function may start a\n         * BGSAVE with socket target or disk target depending on the\n         * configuration and the slaves' capabilities and requirements. */\n        startBgsaveForReplication(mincapa, req);\n    }\n}\n\n/* Find replica at IP:PORT from replica list */\nstatic client *findReplica(char *host, int port) {\n    listIter li;\n    listNode *ln;\n    client *replica;\n\n    listRewind(server.slaves,&li);\n    while((ln = listNext(&li))) {\n        replica = ln->value;\n        char ip[NET_IP_STR_LEN], *replicaip = replica->slave_addr;\n\n        if (!replicaip) {\n            if (connAddrPeerName(replica->conn, ip, sizeof(ip), NULL) == -1)\n                continue;\n            replicaip = ip;\n        }\n\n        if (!strcasecmp(host, replicaip) &&\n                (port == replica->slave_listening_port))\n            return replica;\n    }\n\n    return NULL;\n}\n\nconst char *getFailoverStateString(void) {\n    switch(server.failover_state) {\n        case NO_FAILOVER: return \"no-failover\";\n        case FAILOVER_IN_PROGRESS: return \"failover-in-progress\";\n        case FAILOVER_WAIT_FOR_SYNC: return \"waiting-for-sync\";\n        default: return \"unknown\";\n    }\n}\n\n/* Resets the internal failover configuration; this needs\n * to be called after a failover either succeeds or fails,\n * as it includes the client unpause. */\nvoid clearFailoverState(void) {\n    server.failover_end_time = 0;\n    server.force_failover = 0;\n    zfree(server.target_replica_host);\n    server.target_replica_host = NULL;\n    server.target_replica_port = 0;\n    server.failover_state = NO_FAILOVER;\n    unpauseActions(PAUSE_DURING_FAILOVER);\n}\n\n/* Abort the ongoing failover, if one is in progress. 
*/\nvoid abortFailover(const char *err) {\n    if (server.failover_state == NO_FAILOVER) return;\n\n    if (server.target_replica_host) {\n        serverLog(LL_NOTICE,\"FAILOVER to %s:%d aborted: %s\",\n            server.target_replica_host,server.target_replica_port,err);\n    } else {\n        serverLog(LL_NOTICE,\"FAILOVER to any replica aborted: %s\",err);\n    }\n    if (server.failover_state == FAILOVER_IN_PROGRESS) {\n        replicationUnsetMaster();\n    }\n    clearFailoverState();\n}\n\n/*\n * FAILOVER [TO <HOST> <PORT> [FORCE]] [ABORT] [TIMEOUT <timeout>]\n *\n * This command will coordinate a failover between the master and one\n * of its replicas. The happy path contains the following steps:\n * 1) The master will initiate a pause of client writes, to stop replication\n * traffic.\n * 2) The master will periodically check if any of its replicas has\n * consumed the entire replication stream through acks.\n * 3) Once any replica has caught up, the master will itself become a replica.\n * 4) The master will send a PSYNC FAILOVER request to the target replica, which\n * if accepted will cause the replica to become the new master and start a sync.\n *\n * FAILOVER ABORT is the only way to abort a failover command, as replicaof\n * will be disabled. This may be needed if the failover is unable to progress.\n *\n * The optional arguments [TO <HOST> <PORT>] allow designating a specific\n * replica to be failed over to.\n *\n * The FORCE flag indicates that the failover should proceed even if the\n * target replica is not caught up. It must be specified together with a\n * timeout and a target HOST and PORT.\n *\n * TIMEOUT <timeout> indicates how long the primary should wait for\n * a replica to sync up before aborting. 
If not specified, the failover\n * will attempt forever and must be manually aborted.\n */\nvoid failoverCommand(client *c) {\n    if (!clusterAllowFailoverCmd(c)) {\n        return;\n    }\n\n    /* Handle special case for abort */\n    if ((c->argc == 2) && !strcasecmp(c->argv[1]->ptr,\"abort\")) {\n        if (server.failover_state == NO_FAILOVER) {\n            addReplyError(c, \"No failover in progress.\");\n            return;\n        }\n\n        abortFailover(\"Failover manually aborted\");\n        addReply(c,shared.ok);\n        return;\n    }\n\n    long timeout_in_ms = 0;\n    int force_flag = 0;\n    long port = 0;\n    char *host = NULL;\n\n    /* Parse the command for syntax and arguments. */\n    for (int j = 1; j < c->argc; j++) {\n        if (!strcasecmp(c->argv[j]->ptr,\"timeout\") && (j + 1 < c->argc) &&\n            timeout_in_ms == 0)\n        {\n            if (getLongFromObjectOrReply(c,c->argv[j + 1],\n                        &timeout_in_ms,NULL) != C_OK) return;\n            if (timeout_in_ms <= 0) {\n                addReplyError(c,\"FAILOVER timeout must be greater than 0\");\n                return;\n            }\n            j++;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"to\") && (j + 2 < c->argc) &&\n            !host) \n        {\n            if (getLongFromObjectOrReply(c,c->argv[j + 2],&port,NULL) != C_OK)\n                return;\n            host = c->argv[j + 1]->ptr;\n            j += 2;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"force\") && !force_flag) {\n            force_flag = 1;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    if (server.failover_state != NO_FAILOVER) {\n        addReplyError(c,\"FAILOVER already in progress.\");\n        return;\n    }\n\n    if (server.masterhost) {\n        addReplyError(c,\"FAILOVER is not valid when server is a replica.\");\n        return;\n    }\n\n    if (listLength(server.slaves) == 0) {\n       
 addReplyError(c,\"FAILOVER requires connected replicas.\");\n        return; \n    }\n\n    if (force_flag && (!timeout_in_ms || !host)) {\n        addReplyError(c,\"FAILOVER with force option requires both a timeout \"\n            \"and target HOST and IP.\");\n        return;     \n    }\n\n    /* If a replica address was provided, validate that it is connected. */\n    if (host) {\n        client *replica = findReplica(host, port);\n\n        if (replica == NULL) {\n            addReplyError(c,\"FAILOVER target HOST and PORT is not \"\n                            \"a replica.\");\n            return;\n        }\n\n        /* Check if requested replica is online */\n        if (replica->replstate != SLAVE_STATE_ONLINE) {\n            addReplyError(c,\"FAILOVER target replica is not online.\");\n            return;\n        }\n\n        server.target_replica_host = zstrdup(host);\n        server.target_replica_port = port;\n        serverLog(LL_NOTICE,\"FAILOVER requested to %s:%ld.\",host,port);\n    } else {\n        serverLog(LL_NOTICE,\"FAILOVER requested to any replica.\");\n    }\n\n    mstime_t now = commandTimeSnapshot();\n    if (timeout_in_ms) {\n        server.failover_end_time = now + timeout_in_ms;\n    }\n    \n    server.force_failover = force_flag;\n    server.failover_state = FAILOVER_WAIT_FOR_SYNC;\n    /* Cancel all ASM tasks when starting failover */\n    clusterAsmCancel(NULL, \"failover requested\");\n    /* Cluster failover will unpause eventually */\n    pauseActions(PAUSE_DURING_FAILOVER,\n                 LLONG_MAX,\n                 PAUSE_ACTIONS_CLIENT_WRITE_SET);\n    addReply(c,shared.ok);\n}\n\n/* Failover cron function, checks coordinated failover state. \n *\n * Implementation note: The current implementation calls replicationSetMaster()\n * to start the failover request, this has some unintended side effects if the\n * failover doesn't work like blocked clients will be unblocked and replicas will\n * be disconnected. 
This could be optimized further.\n */\nvoid updateFailoverStatus(void) {\n    if (server.failover_state != FAILOVER_WAIT_FOR_SYNC) return;\n    mstime_t now = server.mstime;\n\n    /* Check if failover operation has timed out */\n    if (server.failover_end_time && server.failover_end_time <= now) {\n        if (server.force_failover) {\n            serverLog(LL_NOTICE,\n                \"FAILOVER to %s:%d time out exceeded, failing over.\",\n                server.target_replica_host, server.target_replica_port);\n            server.failover_state = FAILOVER_IN_PROGRESS;\n            /* If timeout has expired force a failover if requested. */\n            replicationSetMaster(server.target_replica_host,\n                server.target_replica_port);\n            return;\n        } else {\n            /* Force was not requested, so timeout. */\n            abortFailover(\"Replica never caught up before timeout\");\n            return;\n        }\n    }\n\n    /* Check to see if the replica has caught up so failover can start */\n    client *replica = NULL;\n    if (server.target_replica_host) {\n        replica = findReplica(server.target_replica_host, \n            server.target_replica_port);\n    } else {\n        listIter li;\n        listNode *ln;\n\n        listRewind(server.slaves,&li);\n        /* Find any replica that has matched our repl_offset */\n        while((ln = listNext(&li))) {\n            replica = ln->value;\n            if (replica->repl_ack_off == server.master_repl_offset) {\n                char ip[NET_IP_STR_LEN], *replicaaddr = replica->slave_addr;\n\n                if (!replicaaddr) {\n                    if (connAddrPeerName(replica->conn,ip,sizeof(ip),NULL) == -1)\n                        continue;\n                    replicaaddr = ip;\n                }\n\n                /* We are now failing over to this specific node */\n                server.target_replica_host = zstrdup(replicaaddr);\n                server.target_replica_port 
= replica->slave_listening_port;\n                break;\n            }\n        }\n    }\n\n    /* We've found a replica that is caught up */\n    if (replica && (replica->repl_ack_off == server.master_repl_offset)) {\n        server.failover_state = FAILOVER_IN_PROGRESS;\n        serverLog(LL_NOTICE,\n                \"Failover target %s:%d is synced, failing over.\",\n                server.target_replica_host, server.target_replica_port);\n        /* Designated replica is caught up, failover to it. */\n        replicationSetMaster(server.target_replica_host,\n            server.target_replica_port);\n    }\n}\n"
  },
  {
    "path": "src/resp_parser.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* ----------------------------------------------------------------------------------------\n * A RESP parser for parsing replies returned by RM_Call or Lua's\n * 'redis.call()'.\n *\n * The parser introduces callbacks that need to be set by the user. Each\n * callback represents a different reply type. Each callback gets a p_ctx that\n * was given to the parseReply function. The callbacks also give the protocol\n * (underlying blob) of the current reply and the size.\n *\n * Some callbacks also get the parser object itself:\n * - array_callback\n * - set_callback\n * - map_callback\n * \n * These callbacks need to continue parsing by calling parseReply a number of\n * times, according to the supplied length. Subsequent parseReply calls may use\n * a different p_ctx, which will be used for nested CallReply objects.\n *\n * These callbacks also do not receive a proto_len, which is not known at the\n * time of parsing. Callers may calculate it themselves after parsing the\n * entire collection.\n *\n * NOTE: This parser is designed to only handle replies generated by Redis\n * itself. 
It does not perform many required validations and thus NOT SAFE FOR\n * PARSING USER INPUT.\n * ----------------------------------------------------------------------------------------\n */\n\n#include \"fast_float_strtod.h\"\n#include \"resp_parser.h\"\n#include \"server.h\"\n\nstatic int parseBulk(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long bulklen;\n    parser->curr_location = p + 2; /* for \\r\\n */\n\n    string2ll(proto+1,p-proto-1,&bulklen);\n    if (bulklen == -1) {\n        parser->callbacks.null_bulk_string_callback(p_ctx, proto, parser->curr_location - proto);\n    } else {\n        const char *str = parser->curr_location;\n        parser->curr_location += bulklen;\n        parser->curr_location += 2; /* for \\r\\n */\n        parser->callbacks.bulk_string_callback(p_ctx, str, bulklen, proto, parser->curr_location - proto);\n    }\n\n    return C_OK;\n}\n\nstatic int parseSimpleString(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    parser->callbacks.simple_str_callback(p_ctx, proto+1, p-proto-1, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseError(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; // for \\r\\n\n    parser->callbacks.error_callback(p_ctx, proto+1, p-proto-1, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseLong(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    long long val;\n    string2ll(proto+1,p-proto-1,&val);\n    parser->callbacks.long_callback(p_ctx, val, proto, parser->curr_location - proto);\n    return 
C_OK;\n}\n\nstatic int parseAttributes(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long len;\n    string2ll(proto+1,p-proto-1,&len);\n    p += 2;\n    parser->curr_location = p;\n    parser->callbacks.attribute_callback(parser, p_ctx, len, proto);\n    return C_OK;\n}\n\nstatic int parseVerbatimString(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long bulklen;\n    parser->curr_location = p + 2; /* for \\r\\n */\n    string2ll(proto+1,p-proto-1,&bulklen);\n    const char *format = parser->curr_location;\n    parser->curr_location += bulklen;\n    parser->curr_location += 2; /* for \\r\\n */\n    parser->callbacks.verbatim_string_callback(p_ctx, format, format + 4, bulklen - 4, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseBigNumber(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    parser->callbacks.big_number_callback(p_ctx, proto+1, p-proto-1, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseNull(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    parser->callbacks.null_callback(p_ctx, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseDouble(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    size_t len = p-proto-1;\n    double d;\n    if (len <= MAX_LONG_DOUBLE_CHARS) {\n        d = fast_float_strtod(proto+1,len,NULL); /* We expect a valid representation. 
*/\n    } else {\n        d = 0;\n    }\n    parser->callbacks.double_callback(p_ctx, d, proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseBool(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    parser->curr_location = p + 2; /* for \\r\\n */\n    parser->callbacks.bool_callback(p_ctx, proto[1] == 't', proto, parser->curr_location - proto);\n    return C_OK;\n}\n\nstatic int parseArray(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long len;\n    string2ll(proto+1,p-proto-1,&len);\n    p += 2;\n    parser->curr_location = p;\n    if (len == -1) {\n        parser->callbacks.null_array_callback(p_ctx, proto, parser->curr_location - proto);\n    } else {\n        parser->callbacks.array_callback(parser, p_ctx, len, proto);\n    }\n    return C_OK;\n}\n\nstatic int parseSet(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long len;\n    string2ll(proto+1,p-proto-1,&len);\n    p += 2;\n    parser->curr_location = p;\n    parser->callbacks.set_callback(parser, p_ctx, len, proto);\n    return C_OK;\n}\n\nstatic int parseMap(ReplyParser *parser, void *p_ctx) {\n    const char *proto = parser->curr_location;\n    char *p = strchr(proto+1,'\\r');\n    long long len;\n    string2ll(proto+1,p-proto-1,&len);\n    p += 2;\n    parser->curr_location = p;\n    parser->callbacks.map_callback(parser, p_ctx, len, proto);\n    return C_OK;\n}\n\n/* Parse a reply pointed to by parser->curr_location. 
*/\nint parseReply(ReplyParser *parser, void *p_ctx) {\n    switch (parser->curr_location[0]) {\n        case '$': return parseBulk(parser, p_ctx);\n        case '+': return parseSimpleString(parser, p_ctx);\n        case '-': return parseError(parser, p_ctx);\n        case ':': return parseLong(parser, p_ctx);\n        case '*': return parseArray(parser, p_ctx);\n        case '~': return parseSet(parser, p_ctx);\n        case '%': return parseMap(parser, p_ctx);\n        case '#': return parseBool(parser, p_ctx);\n        case ',': return parseDouble(parser, p_ctx);\n        case '_': return parseNull(parser, p_ctx);\n        case '(': return parseBigNumber(parser, p_ctx);\n        case '=': return parseVerbatimString(parser, p_ctx);\n        case '|': return parseAttributes(parser, p_ctx);\n        default: if (parser->callbacks.error) parser->callbacks.error(p_ctx);\n    }\n    return C_ERR;\n}\n"
  },
  {
    "path": "src/resp_parser.h",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef SRC_RESP_PARSER_H_\n#define SRC_RESP_PARSER_H_\n\n#include <stddef.h>\n\ntypedef struct ReplyParser ReplyParser;\n\ntypedef struct ReplyParserCallbacks {\n    /* Called when the parser reaches an empty mbulk ('*-1') */\n    void (*null_array_callback)(void *ctx, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches an empty bulk ('$-1') (bulk len is -1) */\n    void (*null_bulk_string_callback)(void *ctx, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a bulk ('$'), which is passed as 'str' along with its length 'len' */\n    void (*bulk_string_callback)(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches an error ('-'), which is passed as 'str' along with its length 'len' */\n    void (*error_callback)(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a simple string ('+'), which is passed as 'str' along with its length 'len' */\n    void (*simple_str_callback)(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a long long value (':'), which is passed as an argument 'val' */\n    void (*long_callback)(void *ctx, long long val, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches an array ('*'). The array length is passed as an argument 'len' */\n    void (*array_callback)(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\n\n    /* Called when the parser reaches a set ('~'). 
The set length is passed as an argument 'len' */\n    void (*set_callback)(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\n\n    /* Called when the parser reaches a map ('%'). The map length is passed as an argument 'len' */\n    void (*map_callback)(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\n\n    /* Called when the parser reaches a bool ('#'), which is passed as an argument 'val' */\n    void (*bool_callback)(void *ctx, int val, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a double (','), which is passed as an argument 'val' */\n    void (*double_callback)(void *ctx, double val, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a big number ('('), which is passed as 'str' along with its length 'len' */\n    void (*big_number_callback)(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches a string ('='), which is passed as 'str' along with its 'format' and length 'len' */\n    void (*verbatim_string_callback)(void *ctx, const char *format, const char *str, size_t len, const char *proto, size_t proto_len);\n\n    /* Called when the parser reaches an attribute ('|'). The attribute length is passed as an argument 'len' */\n    void (*attribute_callback)(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\n\n    /* Called when the parser reaches a null ('_') */\n    void (*null_callback)(void *ctx, const char *proto, size_t proto_len);\n\n    void (*error)(void *ctx);\n} ReplyParserCallbacks;\n\nstruct ReplyParser {\n    /* The current location in the reply buffer, needs to be set to the beginning of the reply */\n    const char *curr_location;\n    ReplyParserCallbacks callbacks;\n};\n\nint parseReply(ReplyParser *parser, void *p_ctx);\n\n#endif /* SRC_RESP_PARSER_H_ */\n"
  },
  {
    "path": "src/rio.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/* rio.c is a simple stream-oriented I/O abstraction that provides an interface\n * to write code that can consume/produce data using different concrete input\n * and output devices. For instance the same rdb.c code using the rio\n * abstraction can be used to read and write the RDB format using in-memory\n * buffers or files.\n *\n * A rio object provides the following methods:\n *  read: read from stream.\n *  write: write to stream.\n *  tell: get the current offset.\n *\n * It is also possible to set a 'checksum' method that is used by rio.c in order\n * to compute a checksum of the data written or read, or to query the rio object\n * for the current checksum.\n *\n * ----------------------------------------------------------------------------\n */\n\n\n#include \"fmacros.h\"\n#include \"fpconv_dtoa.h\"\n#include <string.h>\n#include <stdio.h>\n#include <unistd.h>\n#include \"rio.h\"\n#include \"util.h\"\n#include \"crc64.h\"\n#include \"config.h\"\n#include \"server.h\"\n\n/* ------------------------- Buffer I/O implementation ----------------------- */\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioBufferWrite(rio *r, const void *buf, size_t len) {\n    r->io.buffer.ptr = sdscatlen(r->io.buffer.ptr,(char*)buf,len);\n    r->io.buffer.pos += len;\n    return 1;\n}\n\n/* Returns 1 or 0 for success/failure. 
*/\nstatic size_t rioBufferRead(rio *r, void *buf, size_t len) {\n    if (sdslen(r->io.buffer.ptr)-r->io.buffer.pos < len)\n        return 0; /* not enough buffer to return len bytes. */\n    memcpy(buf,r->io.buffer.ptr+r->io.buffer.pos,len);\n    r->io.buffer.pos += len;\n    return 1;\n}\n\n/* Returns read/write position in buffer. */\nstatic off_t rioBufferTell(rio *r) {\n    return r->io.buffer.pos;\n}\n\n/* Flushes any buffer to target device if applicable. Returns 1 on success\n * and 0 on failures. */\nstatic int rioBufferFlush(rio *r) {\n    UNUSED(r);\n    return 1; /* Nothing to do, our write just appends to the buffer. */\n}\n\nstatic const rio rioBufferIO = {\n    rioBufferRead,\n    rioBufferWrite,\n    rioBufferTell,\n    rioBufferFlush,\n    NULL,           /* update_checksum */\n    0,              /* current checksum */\n    0,              /* flags */\n    0,              /* bytes read or written */\n    0,              /* read/write chunk size */\n    { { NULL, 0 } } /* union for io-specific vars */\n};\n\nvoid rioInitWithBuffer(rio *r, sds s) {\n    *r = rioBufferIO;\n    r->io.buffer.ptr = s;\n    r->io.buffer.pos = 0;\n}\n\n/* --------------------- Stdio file pointer implementation ------------------- */\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioFileWrite(rio *r, const void *buf, size_t len) {\n    if (!r->io.file.autosync) return fwrite(buf,len,1,r->io.file.fp);\n\n    size_t nwritten = 0;\n    /* Incrementally write data to the file, avoid a single write larger than\n     * the autosync threshold (so that the kernel's buffer cache never has too\n     * many dirty pages at once). */\n    while (len != nwritten) {\n        serverAssert(r->io.file.autosync > r->io.file.buffered);\n        size_t nalign = (size_t)(r->io.file.autosync - r->io.file.buffered);\n        size_t towrite = nalign > len-nwritten ? 
len-nwritten : nalign;\n\n        if (fwrite((char*)buf+nwritten,towrite,1,r->io.file.fp) == 0) return 0;\n        nwritten += towrite;\n        r->io.file.buffered += towrite;\n\n        if (r->io.file.buffered >= r->io.file.autosync) {\n            fflush(r->io.file.fp);\n\n            size_t processed = r->processed_bytes + nwritten;\n            serverAssert(processed % r->io.file.autosync == 0);\n            serverAssert(r->io.file.buffered == r->io.file.autosync);\n\n#if HAVE_SYNC_FILE_RANGE\n            /* Start writeout asynchronously. */\n            if (sync_file_range(fileno(r->io.file.fp),\n                    processed - r->io.file.autosync, r->io.file.autosync,\n                    SYNC_FILE_RANGE_WRITE) == -1)\n                return 0;\n\n            if (processed >= (size_t)r->io.file.autosync * 2) {\n                /* To keep the promise to 'autosync', we should make sure last\n                 * asynchronous writeout persists into disk. This call may block\n                 * if last writeout is not finished since disk is slow. */\n                if (sync_file_range(fileno(r->io.file.fp),\n                        processed - r->io.file.autosync*2,\n                        r->io.file.autosync, SYNC_FILE_RANGE_WAIT_BEFORE|\n                        SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER) == -1)\n                    return 0;\n            }\n#else\n            if (redis_fsync(fileno(r->io.file.fp)) == -1) return 0;\n#endif\n            if (r->io.file.reclaim_cache) {\n                /* In Linux sync_file_range just issue a writeback request to\n                 * OS, and when posix_fadvise is called, the dirty page may\n                 * still be in flushing, which means it would be ignored by\n                 * posix_fadvise.\n                 * \n                 * So we posix_fadvise the whole file, and the writeback-ed \n                 * pages will have other chances to be reclaimed. 
*/\n                reclaimFilePageCache(fileno(r->io.file.fp), 0, 0);\n            }\n            r->io.file.buffered = 0;\n        }\n    }\n    return 1;\n}\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioFileRead(rio *r, void *buf, size_t len) {\n    return fread(buf,len,1,r->io.file.fp);\n}\n\n/* Returns read/write position in file. */\nstatic off_t rioFileTell(rio *r) {\n    return ftello(r->io.file.fp);\n}\n\n/* Flushes any buffer to target device if applicable. Returns 1 on success\n * and 0 on failures. */\nstatic int rioFileFlush(rio *r) {\n    return (fflush(r->io.file.fp) == 0) ? 1 : 0;\n}\n\nstatic const rio rioFileIO = {\n    rioFileRead,\n    rioFileWrite,\n    rioFileTell,\n    rioFileFlush,\n    NULL,           /* update_checksum */\n    0,              /* current checksum */\n    0,              /* flags */\n    0,              /* bytes read or written */\n    0,              /* read/write chunk size */\n    { { NULL, 0 } } /* union for io-specific vars */\n};\n\nvoid rioInitWithFile(rio *r, FILE *fp) {\n    *r = rioFileIO;\n    r->io.file.fp = fp;\n    r->io.file.buffered = 0;\n    r->io.file.autosync = 0;\n    r->io.file.reclaim_cache = 0;\n}\n\n/* ------------------- Connection implementation -------------------\n * We use this RIO implementation when reading an RDB file directly from\n * the connection to the memory via rdbLoadRio(), thus this implementation\n * only implements reading from a connection that is, normally,\n * just a socket. */\n\nstatic size_t rioConnWrite(rio *r, const void *buf, size_t len) {\n    UNUSED(r);\n    UNUSED(buf);\n    UNUSED(len);\n    return 0; /* Error, this target does not yet support writing. */\n}\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioConnRead(rio *r, void *buf, size_t len) {\n    size_t avail = sdslen(r->io.conn.buf)-r->io.conn.pos;\n\n    /* If the buffer is too small for the entire request: realloc. 
*/\n    if (sdslen(r->io.conn.buf) + sdsavail(r->io.conn.buf) < len)\n        r->io.conn.buf = sdsMakeRoomFor(r->io.conn.buf, len - sdslen(r->io.conn.buf));\n\n    /* If the remaining unused buffer is not large enough: memmove so that we\n     * can read the rest. */\n    if (len > avail && sdsavail(r->io.conn.buf) < len - avail) {\n        sdsrange(r->io.conn.buf, r->io.conn.pos, -1);\n        r->io.conn.pos = 0;\n    }\n\n    /* Make sure the caller didn't request to read past the limit.\n     * If they didn't we'll buffer till the limit, if they did, we'll\n     * return an error. */\n    if (r->io.conn.read_limit != 0 && r->io.conn.read_limit < r->io.conn.read_so_far + len) {\n        errno = EOVERFLOW;\n        return 0;\n    }\n\n    /* If we don't already have all the data in the sds, read more */\n    while (len > sdslen(r->io.conn.buf) - r->io.conn.pos) {\n        size_t buffered = sdslen(r->io.conn.buf) - r->io.conn.pos;\n        size_t needs = len - buffered;\n        /* Read either what's missing, or PROTO_IOBUF_LEN, the bigger of\n         * the two. */\n        size_t toread = needs < PROTO_IOBUF_LEN ? 
PROTO_IOBUF_LEN: needs;\n        if (toread > sdsavail(r->io.conn.buf)) toread = sdsavail(r->io.conn.buf);\n        if (r->io.conn.read_limit != 0 &&\n            r->io.conn.read_so_far + buffered + toread > r->io.conn.read_limit)\n        {\n            toread = r->io.conn.read_limit - r->io.conn.read_so_far - buffered;\n        }\n        int retval = connRead(r->io.conn.conn,\n                          (char*)r->io.conn.buf + sdslen(r->io.conn.buf),\n                          toread);\n        if (retval == 0) {\n            return 0;\n        } else if (retval < 0) {\n            if (connLastErrorRetryable(r->io.conn.conn)) continue;\n            if (errno == EWOULDBLOCK) errno = ETIMEDOUT;\n            return 0;\n        }\n        sdsIncrLen(r->io.conn.buf, retval);\n    }\n\n    memcpy(buf, (char*)r->io.conn.buf + r->io.conn.pos, len);\n    r->io.conn.read_so_far += len;\n    r->io.conn.pos += len;\n    return len;\n}\n\n/* Returns read/write position in file. */\nstatic off_t rioConnTell(rio *r) {\n    return r->io.conn.read_so_far;\n}\n\n/* Flushes any buffer to target device if applicable. Returns 1 on success\n * and 0 on failures. */\nstatic int rioConnFlush(rio *r) {\n    /* Our flush is implemented by the write method, that recognizes a\n     * buffer set to NULL with a count of zero as a flush request. */\n    return rioConnWrite(r,NULL,0);\n}\n\nstatic const rio rioConnIO = {\n    rioConnRead,\n    rioConnWrite,\n    rioConnTell,\n    rioConnFlush,\n    NULL,           /* update_checksum */\n    0,              /* current checksum */\n    0,              /* flags */\n    0,              /* bytes read or written */\n    0,              /* read/write chunk size */\n    { { NULL, 0 } } /* union for io-specific vars */\n};\n\n/* Create an RIO that implements a buffered read from an fd\n * read_limit argument stops buffering when the reaching the limit. 
*/\nvoid rioInitWithConn(rio *r, connection *conn, size_t read_limit) {\n    *r = rioConnIO;\n    r->io.conn.conn = conn;\n    r->io.conn.pos = 0;\n    r->io.conn.read_limit = read_limit;\n    r->io.conn.read_so_far = 0;\n    r->io.conn.buf = sdsnewlen(NULL, PROTO_IOBUF_LEN);\n    sdsclear(r->io.conn.buf);\n}\n\n/* Release the RIO stream. Optionally returns the unread buffered data\n * when the SDS pointer 'remaining' is passed. */\nvoid rioFreeConn(rio *r, sds *remaining) {\n    if (remaining && (size_t)r->io.conn.pos < sdslen(r->io.conn.buf)) {\n        if (r->io.conn.pos > 0) sdsrange(r->io.conn.buf, r->io.conn.pos, -1);\n        *remaining = r->io.conn.buf;\n    } else {\n        sdsfree(r->io.conn.buf);\n        if (remaining) *remaining = NULL;\n    }\n    r->io.conn.buf = NULL;\n}\n\n/* ------------------- File descriptor implementation ------------------\n * This target is used to write the RDB file to pipe, when the master just\n * streams the data to the replicas without creating an RDB on-disk image\n * (diskless replication option).\n * It only implements writes. */\n\n/* Returns 1 or 0 for success/failure.\n *\n * When buf is NULL and len is 0, the function performs a flush operation\n * if there is some pending buffer, so this function is also used in order\n * to implement rioFdFlush(). */\nstatic size_t rioFdWrite(rio *r, const void *buf, size_t len) {\n    ssize_t retval;\n    unsigned char *p = (unsigned char*) buf;\n    int doflush = (buf == NULL && len == 0);\n\n    /* For small writes, we rather keep the data in user-space buffer, and flush\n     * it only when it grows. however for larger writes, we prefer to flush\n     * any pre-existing buffer, and write the new one directly without reallocs\n     * and memory copying. */\n    if (len > PROTO_IOBUF_LEN) {\n        /* First, flush any pre-existing buffered data. 
*/\n        if (sdslen(r->io.fd.buf)) {\n            if (rioFdWrite(r, NULL, 0) == 0)\n                return 0;\n        }\n        /* Write the new data, keeping 'p' and 'len' from the input. */\n    } else {\n        if (len) {\n            r->io.fd.buf = sdscatlen(r->io.fd.buf,buf,len);\n            if (sdslen(r->io.fd.buf) > PROTO_IOBUF_LEN)\n                doflush = 1;\n            if (!doflush)\n                return 1;\n        }\n        /* Flushing the buffered data. set 'p' and 'len' accordingly. */\n        p = (unsigned char*) r->io.fd.buf;\n        len = sdslen(r->io.fd.buf);\n    }\n\n    size_t nwritten = 0;\n    while(nwritten != len) {\n        retval = write(r->io.fd.fd,p+nwritten,len-nwritten);\n        if (retval <= 0) {\n            if (retval == -1 && errno == EINTR) continue;\n            /* With blocking io, which is the sole user of this\n             * rio target, EWOULDBLOCK is returned only because of\n             * the SO_SNDTIMEO socket option, so we translate the error\n             * into one more recognizable by the user. */\n            if (retval == -1 && errno == EWOULDBLOCK) errno = ETIMEDOUT;\n            return 0; /* error. */\n        }\n        nwritten += retval;\n    }\n\n    r->io.fd.pos += len;\n    sdsclear(r->io.fd.buf);\n    return 1;\n}\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioFdRead(rio *r, void *buf, size_t len) {\n    UNUSED(r);\n    UNUSED(buf);\n    UNUSED(len);\n    return 0; /* Error, this target does not support reading. */\n}\n\n/* Returns read/write position in file. */\nstatic off_t rioFdTell(rio *r) {\n    return r->io.fd.pos;\n}\n\n/* Flushes any buffer to target device if applicable. Returns 1 on success\n * and 0 on failures. */\nstatic int rioFdFlush(rio *r) {\n    /* Our flush is implemented by the write method, that recognizes a\n     * buffer set to NULL with a count of zero as a flush request. 
*/\n    return rioFdWrite(r,NULL,0);\n}\n\nstatic const rio rioFdIO = {\n    rioFdRead,\n    rioFdWrite,\n    rioFdTell,\n    rioFdFlush,\n    NULL,           /* update_checksum */\n    0,              /* current checksum */\n    0,              /* flags */\n    0,              /* bytes read or written */\n    0,              /* read/write chunk size */\n    { { NULL, 0 } } /* union for io-specific vars */\n};\n\nvoid rioInitWithFd(rio *r, int fd) {\n    *r = rioFdIO;\n    r->io.fd.fd = fd;\n    r->io.fd.pos = 0;\n    r->io.fd.buf = sdsempty();\n}\n\n/* release the rio stream. */\nvoid rioFreeFd(rio *r) {\n    sdsfree(r->io.fd.buf);\n}\n\n/* ------------------- Connection set implementation ------------------\n * This target is used to write the RDB file to a set of replica connections as\n * part of rdb channel replication. */\n\n/* Returns 1 for success, 0 for failure.\n * The function returns success as long as we are able to correctly write\n * to at least one file descriptor.\n *\n * When buf is NULL or len is 0, the function performs a flush operation if\n * there is some pending buffer, so this function is also used in order to\n * implement rioConnsetFlush(). */\nstatic size_t rioConnsetWrite(rio *r, const void *buf, size_t len) {\n    const size_t pre_flush_size = 256 * 1024;\n    unsigned char *p = (unsigned char*) buf;\n    size_t buflen = len;\n    size_t failed = 0; /* number of connections that write() returned error. */\n\n    /* For small writes, we rather keep the data in user-space buffer, and flush\n     * it only when it grows. however for larger writes, we prefer to flush\n     * any pre-existing buffer, and write the new one directly without reallocs\n     * and memory copying. 
*/\n    if (len > pre_flush_size) {\n        rioConnsetWrite(r, NULL, 0);\n    } else {\n        if (buf && len) {\n            r->io.connset.buf = sdscatlen(r->io.connset.buf, buf, len);\n            if (sdslen(r->io.connset.buf) <= PROTO_IOBUF_LEN)\n                return 1;\n        }\n\n        p = (unsigned char *)r->io.connset.buf;\n        buflen = sdslen(r->io.connset.buf);\n    }\n\n    while (buflen > 0) {\n        /* Write in little chunks so that when there are big writes we\n         * parallelize while the kernel is sending data in background to the\n         * TCP socket. */\n        size_t limit = PROTO_IOBUF_LEN * 2;\n        size_t count = buflen < limit ? buflen : limit;\n\n        for (size_t i = 0; i < r->io.connset.n_dst; i++) {\n            size_t n_written = 0;\n\n            if (r->io.connset.dst[i].failed != 0) {\n                failed++;\n                continue; /* Skip failed connections. */\n            }\n\n            do {\n                ssize_t ret;\n                connection *c = r->io.connset.dst[i].conn;\n\n                ret = connWrite(c, p + n_written, count - n_written);\n                if (ret <= 0) {\n                    if (errno == 0)\n                        errno = EIO;\n                    /* With blocking sockets, which is the sole user of this\n                     * rio target, EWOULDBLOCK is returned only because of\n                     * the SO_SNDTIMEO socket option, so we translate the error\n                     * into one more recognizable by the user. */\n                    if (ret == -1 && errno == EWOULDBLOCK)\n                        errno = ETIMEDOUT;\n\n                    r->io.connset.dst[i].failed = 1;\n                    failed++;\n                    break;\n                }\n                n_written += ret;\n            } while (n_written != count);\n        }\n        if (failed == r->io.connset.n_dst)\n            return 0; /* All the connections have failed. 
*/\n\n        p += count;\n        buflen -= count;\n        r->io.connset.pos += count;\n    }\n\n    sdsclear(r->io.connset.buf);\n    return 1;\n}\n\n/* Returns 1 or 0 for success/failure. */\nstatic size_t rioConnsetRead(rio *r, void *buf, size_t len) {\n    UNUSED(r);\n    UNUSED(buf);\n    UNUSED(len);\n    return 0; /* Error, this target does not support reading. */\n}\n\n/* Returns the number of sent bytes. */\nstatic off_t rioConnsetTell(rio *r) {\n    return r->io.connset.pos;\n}\n\n/* Flushes any buffer to target device if applicable. Returns 1 on success\n * and 0 on failures. */\nstatic int rioConnsetFlush(rio *r) {\n    /* Our flush is implemented by the write method, that recognizes a\n     * buffer set to NULL with a count of zero as a flush request. */\n    return rioConnsetWrite(r, NULL, 0);\n}\n\nstatic const rio rioConnsetIO = {\n        rioConnsetRead,\n        rioConnsetWrite,\n        rioConnsetTell,\n        rioConnsetFlush,\n        NULL,            /* update_checksum */\n        0,               /* current checksum */\n        0,               /* flags */\n        0,               /* bytes read or written */\n        0,               /* read/write chunk size */\n        { { NULL, 0 } }  /* union for io-specific vars */\n};\n\nvoid rioInitWithConnset(rio *r, connection **conns, size_t n_conns) {\n    *r = rioConnsetIO;\n    r->io.connset.dst = zcalloc(sizeof(*r->io.connset.dst) * n_conns);\n    r->io.connset.n_dst = n_conns;\n    r->io.connset.pos = 0;\n    r->io.connset.buf = sdsempty();\n\n    for (size_t i = 0; i < n_conns; i++)\n        r->io.connset.dst[i].conn = conns[i];\n}\n\n/* release the rio stream. */\nvoid rioFreeConnset(rio *r) {\n    zfree(r->io.connset.dst);\n    sdsfree(r->io.connset.buf);\n}\n\n/* ---------------------------- Generic functions ---------------------------- */\n\n/* This function can be installed both in memory and file streams when checksum\n * computation is needed. 
*/\nvoid rioGenericUpdateChecksum(rio *r, const void *buf, size_t len) {\n    r->cksum = crc64(r->cksum,buf,len);\n}\n\n/* Set the file-based rio object to auto-fsync every 'bytes' file written.\n * By default this is set to zero that means no automatic file sync is\n * performed.\n *\n * This feature is useful in a few contexts since when we rely on OS write\n * buffers sometimes the OS buffers way too much, resulting in too many\n * disk I/O concentrated in very little time. When we fsync in an explicit\n * way instead the I/O pressure is more distributed across time. */\nvoid rioSetAutoSync(rio *r, off_t bytes) {\n    if(r->write != rioFileIO.write) return;\n    r->io.file.autosync = bytes;\n}\n\n/* Set the file-based rio object to reclaim cache after every auto-sync.\n * In the Linux implementation POSIX_FADV_DONTNEED skips the dirty\n * pages, so if auto sync is unset this option will have no effect.\n * \n * This feature can reduce the cache footprint backed by the file. */\nvoid rioSetReclaimCache(rio *r, int enabled) {\n    r->io.file.reclaim_cache = enabled;\n}\n\n/* Check the type of rio. */\nuint8_t rioCheckType(rio *r) {\n    if (r->read == rioFileRead) {\n        return RIO_TYPE_FILE;\n    } else if (r->read == rioBufferRead) {\n        return RIO_TYPE_BUFFER;\n    } else if (r->read == rioConnRead) {\n        return RIO_TYPE_CONN;\n    } else {\n        /* r->read == rioFdRead */\n        return RIO_TYPE_FD;\n    }\n}\n\n/* --------------------------- Higher level interface --------------------------\n *\n * The following higher level functions use lower level rio.c functions to help\n * generating the Redis protocol for the Append Only File. */\n\n/* Write multi bulk count in the format: \"*<count>\\r\\n\". 
*/\nsize_t rioWriteBulkCount(rio *r, char prefix, long count) {\n    char cbuf[128];\n    int clen;\n\n    cbuf[0] = prefix;\n    clen = 1+ll2string(cbuf+1,sizeof(cbuf)-1,count);\n    cbuf[clen++] = '\\r';\n    cbuf[clen++] = '\\n';\n    if (rioWrite(r,cbuf,clen) == 0) return 0;\n    return clen;\n}\n\n/* Write binary-safe string in the format: \"$<count>\\r\\n<payload>\\r\\n\". */\nsize_t rioWriteBulkString(rio *r, const char *buf, size_t len) {\n    size_t nwritten;\n\n    if ((nwritten = rioWriteBulkCount(r,'$',len)) == 0) return 0;\n    if (len > 0 && rioWrite(r,buf,len) == 0) return 0;\n    if (rioWrite(r,\"\\r\\n\",2) == 0) return 0;\n    return nwritten+len+2;\n}\n\n/* Write a long long value in format: \"$<count>\\r\\n<payload>\\r\\n\". */\nsize_t rioWriteBulkLongLong(rio *r, long long l) {\n    char lbuf[32];\n    unsigned int llen;\n\n    llen = ll2string(lbuf,sizeof(lbuf),l);\n    return rioWriteBulkString(r,lbuf,llen);\n}\n\n/* Write a double value in the format: \"$<count>\\r\\n<payload>\\r\\n\" */\nsize_t rioWriteBulkDouble(rio *r, double d) {\n    char dbuf[128];\n    unsigned int dlen;\n    dlen = fpconv_dtoa(d, dbuf);\n    dbuf[dlen] = '\\0';\n    return rioWriteBulkString(r,dbuf,dlen);\n}\n"
  },
  {
    "path": "src/rio.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n\n#ifndef __REDIS_RIO_H\n#define __REDIS_RIO_H\n\n#include <stdio.h>\n#include <stdint.h>\n#include \"sds.h\"\n#include \"connection.h\"\n\n#define RIO_FLAG_READ_ERROR (1<<0)\n#define RIO_FLAG_WRITE_ERROR (1<<1)\n\n#define RIO_TYPE_FILE (1<<0)\n#define RIO_TYPE_BUFFER (1<<1)\n#define RIO_TYPE_CONN (1<<2)\n#define RIO_TYPE_FD (1<<3)\n\nstruct _rio {\n    /* Backend functions.\n     * Since this functions do not tolerate short writes or reads the return\n     * value is simplified to: zero on error, non zero on complete success. */\n    size_t (*read)(struct _rio *, void *buf, size_t len);\n    size_t (*write)(struct _rio *, const void *buf, size_t len);\n    off_t (*tell)(struct _rio *);\n    int (*flush)(struct _rio *);\n    /* The update_cksum method if not NULL is used to compute the checksum of\n     * all the data that was read or written so far. The method should be\n     * designed so that can be called with the current checksum, and the buf\n     * and len fields pointing to the new block of data to add to the checksum\n     * computation. */\n    void (*update_cksum)(struct _rio *, const void *buf, size_t len);\n\n    /* The current checksum and flags (see RIO_FLAG_*) */\n    uint64_t cksum, flags;\n\n    /* number of bytes read or written */\n    size_t processed_bytes;\n\n    /* maximum single read or write chunk size */\n    size_t max_processing_chunk;\n\n    /* Backend-specific vars. */\n    union {\n        /* In-memory buffer target. 
*/\n        struct {\n            sds ptr;\n            off_t pos;\n        } buffer;\n        /* Stdio file pointer target. */\n        struct {\n            FILE *fp;\n            off_t buffered; /* Bytes written since last fsync. */\n            off_t autosync; /* fsync after 'autosync' bytes written. */\n            unsigned reclaim_cache:1; /* A flag to indicate reclaim cache after fsync */\n        } file;\n        /* Connection object (used to read from socket) */\n        struct {\n            connection *conn;   /* Connection */\n            off_t pos;    /* pos in buf that was returned */\n            sds buf;      /* buffered data */\n            size_t read_limit;  /* don't allow to buffer/read more than that */\n            size_t read_so_far; /* amount of data read from the rio (not buffered) */\n        } conn;\n        /* FD target (used to write to pipe). */\n        struct {\n            int fd;       /* File descriptor. */\n            off_t pos;\n            sds buf;\n        } fd;\n        /* Multiple connections target (used to write to N sockets). */\n        struct {\n            struct {\n                connection *conn; /* Connection */\n                int failed;       /* If write failed on this connection. */\n            } *dst;\n\n            size_t n_dst;        /* Number of connections */\n            off_t pos;           /* Number of sent bytes */\n            sds buf;\n        } connset;\n    } io;\n};\n\ntypedef struct _rio rio;\n\n/* The following functions are our interface with the stream. They'll call the\n * actual implementation of read / write / tell, and will update the checksum\n * if needed. */\n\nstatic inline size_t rioWrite(rio *r, const void *buf, size_t len) {\n    if (r->flags & (RIO_FLAG_WRITE_ERROR)) return 0;\n    while (len) {\n        size_t bytes_to_write = (r->max_processing_chunk && r->max_processing_chunk < len) ? 
r->max_processing_chunk : len;\n        if (r->update_cksum) r->update_cksum(r,buf,bytes_to_write);\n        if (r->write(r,buf,bytes_to_write) == 0) {\n            r->flags |= RIO_FLAG_WRITE_ERROR;\n            return 0;\n        }\n        buf = (char*)buf + bytes_to_write;\n        len -= bytes_to_write;\n        r->processed_bytes += bytes_to_write;\n    }\n    return 1;\n}\n\nstatic inline size_t rioRead(rio *r, void *buf, size_t len) {\n    if (r->flags & (RIO_FLAG_READ_ERROR)) return 0;\n    while (len) {\n        size_t bytes_to_read = (r->max_processing_chunk && r->max_processing_chunk < len) ? r->max_processing_chunk : len;\n        if (r->read(r,buf,bytes_to_read) == 0) {\n            r->flags |= RIO_FLAG_READ_ERROR;\n            return 0;\n        }\n        if (r->update_cksum) r->update_cksum(r,buf,bytes_to_read);\n        buf = (char*)buf + bytes_to_read;\n        len -= bytes_to_read;\n        r->processed_bytes += bytes_to_read;\n    }\n    return 1;\n}\n\nstatic inline off_t rioTell(rio *r) {\n    return r->tell(r);\n}\n\nstatic inline int rioFlush(rio *r) {\n    return r->flush(r);\n}\n\n/* Abort RIO asynchronously by setting read and write error flags. Subsequent\n * rioRead()/rioWrite() calls will fail, letting the caller terminate safely. */\nstatic inline void rioAbort(rio *r) {\n    r->flags |= (RIO_FLAG_READ_ERROR | RIO_FLAG_WRITE_ERROR);\n}\n\n/* This function allows to know if there was a read error in any past\n * operation, since the rio stream was created or since the last call\n * to rioClearError(). */\nstatic inline int rioGetReadError(rio *r) {\n    return (r->flags & RIO_FLAG_READ_ERROR) != 0;\n}\n\n/* Like rioGetReadError() but for write errors. 
*/\nstatic inline int rioGetWriteError(rio *r) {\n    return (r->flags & RIO_FLAG_WRITE_ERROR) != 0;\n}\n\nstatic inline void rioClearErrors(rio *r) {\n    r->flags &= ~(RIO_FLAG_READ_ERROR|RIO_FLAG_WRITE_ERROR);\n}\n\nvoid rioInitWithFile(rio *r, FILE *fp);\nvoid rioInitWithBuffer(rio *r, sds s);\nvoid rioInitWithConn(rio *r, connection *conn, size_t read_limit);\nvoid rioInitWithFd(rio *r, int fd);\nvoid rioInitWithConnset(rio *r, connection **conns, size_t n_conns);\n\nvoid rioFreeFd(rio *r);\nvoid rioFreeConn(rio *r, sds* out_remainingBufferedData);\nvoid rioFreeConnset(rio *r);\n\nsize_t rioWriteBulkCount(rio *r, char prefix, long count);\nsize_t rioWriteBulkString(rio *r, const char *buf, size_t len);\nsize_t rioWriteBulkLongLong(rio *r, long long l);\nsize_t rioWriteBulkDouble(rio *r, double d);\n\nstruct redisObject;\nint rioWriteBulkObject(rio *r, struct redisObject *obj);\n\nvoid rioGenericUpdateChecksum(rio *r, const void *buf, size_t len);\nvoid rioSetAutoSync(rio *r, off_t bytes);\nvoid rioSetReclaimCache(rio *r, int enabled); \nuint8_t rioCheckType(rio *r);\n#endif\n"
  },
  {
    "path": "src/script.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"script.h\"\n#include \"cluster.h\"\n#include \"cluster_slot_stats.h\"\n\n#include <lua.h>\n#include <lauxlib.h>\n\nscriptFlag scripts_flags_def[] = {\n    {.flag = SCRIPT_FLAG_NO_WRITES, .str = \"no-writes\"},\n    {.flag = SCRIPT_FLAG_ALLOW_OOM, .str = \"allow-oom\"},\n    {.flag = SCRIPT_FLAG_ALLOW_STALE, .str = \"allow-stale\"},\n    {.flag = SCRIPT_FLAG_NO_CLUSTER, .str = \"no-cluster\"},\n    {.flag = SCRIPT_FLAG_ALLOW_CROSS_SLOT, .str = \"allow-cross-slot-keys\"},\n    {.flag = 0, .str = NULL}, /* flags array end */\n};\n\n/* On script invocation, holding the current run context */\nstatic scriptRunCtx *curr_run_ctx = NULL;\n\nstatic void exitScriptTimedoutMode(scriptRunCtx *run_ctx) {\n    serverAssert(run_ctx == curr_run_ctx);\n    serverAssert(scriptIsTimedout());\n    run_ctx->flags &= ~SCRIPT_TIMEDOUT;\n    blockingOperationEnds();\n    /* if we are a replica and we have an active master, set it for continue processing */\n    if (server.masterhost && server.master) {\n        /* Master running in IO thread needs to be sent to main thread so that\n         * it can process any pending commands ASAP without waiting for the next\n         * read.\n         * We don't queue the client for reprocessing in this case as it will\n         * create contention with main thread when it deals with unblocked\n         * clients - see comment above queueClientForReprocessing. 
*/\n        if (server.master->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n            pauseIOThread(server.master->tid);\n            enqueuePendingClientsToMainThread(server.master, 0);\n            resumeIOThread(server.master->tid);\n            return;\n        }\n\n        queueClientForReprocessing(server.master);\n    }\n}\n\nstatic void enterScriptTimedoutMode(scriptRunCtx *run_ctx) {\n    serverAssert(run_ctx == curr_run_ctx);\n    serverAssert(!scriptIsTimedout());\n    /* Mark script as timedout */\n    run_ctx->flags |= SCRIPT_TIMEDOUT;\n    blockingOperationStarts();\n}\n\n#if defined(USE_JEMALLOC)\n/* When lua uses jemalloc, pass in luaAlloc as a parameter of lua_newstate. */\nstatic void *luaAlloc(void *ud, void *ptr, size_t osize, size_t nsize) {\n    UNUSED(osize);\n\n    unsigned int tcache = (unsigned int)(uintptr_t)ud;\n    if (nsize == 0) {\n        zfree_with_flags(ptr, MALLOCX_ARENA(server.lua_arena) | MALLOCX_TCACHE(tcache));\n        return NULL;\n    } else {\n        return zrealloc_with_flags(ptr, nsize, MALLOCX_ARENA(server.lua_arena) | MALLOCX_TCACHE(tcache));\n    }\n}\n\n/* Create a lua interpreter, and use jemalloc as lua memory allocator. */\nlua_State *createLuaState(void) {\n    /* Every time a lua VM is created, a new private tcache is created for use.\n     * This private tcache will be destroyed after the lua VM is closed. */\n    unsigned int tcache;\n    size_t sz = sizeof(unsigned int);\n    int err = je_mallctl(\"tcache.create\", (void *)&tcache, &sz, NULL, 0);\n    if (err) {\n        serverLog(LL_WARNING, \"Failed creating the lua jemalloc tcache (err=%d).\", err);\n        exit(1);\n    }\n\n    /* We pass tcache as ud so that it is not bound to the server. */\n    return lua_newstate(luaAlloc, (void *)(uintptr_t)tcache);\n}\n\n/* Under jemalloc we need to create a new arena for lua to avoid blocking\n * defragger. 
 */\nvoid luaEnvInit(void) {\n    unsigned int arena;\n    size_t sz = sizeof(unsigned int);\n    int err = je_mallctl(\"arenas.create\", (void *)&arena, &sz, NULL, 0);\n    if (err) {\n        serverLog(LL_WARNING, \"Failed creating the lua jemalloc arena (err=%d).\", err);\n        exit(1);\n    }\n    server.lua_arena = arena;\n}\n\n#else\n\n/* Create a lua interpreter and use glibc (default) as lua memory allocator. */\nlua_State *createLuaState(void) {\n    return lua_open();\n}\n\n/* There is nothing to set up under glibc. */\nvoid luaEnvInit(void) {\n    server.lua_arena = UINT_MAX;\n}\n\n#endif\n\nint scriptIsTimedout(void) {\n    return scriptIsRunning() && (curr_run_ctx->flags & SCRIPT_TIMEDOUT);\n}\n\nclient* scriptGetClient(void) {\n    serverAssert(scriptIsRunning());\n    return curr_run_ctx->c;\n}\n\nclient* scriptGetCaller(void) {\n    serverAssert(scriptIsRunning());\n    return curr_run_ctx->original_client;\n}\n\n/* Interrupt function for scripts; should be called\n * from time to time to reply to some special commands (like PING)\n * and also to check if the run should be terminated. */\nint scriptInterrupt(scriptRunCtx *run_ctx) {\n    if (run_ctx->flags & SCRIPT_TIMEDOUT) {\n        /* Script already timed out;\n           we just need to process some events and return */\n        processEventsWhileBlocked();\n        return (run_ctx->flags & SCRIPT_KILLED) ? SCRIPT_KILL : SCRIPT_CONTINUE;\n    }\n\n    long long elapsed = elapsedMs(run_ctx->start_time);\n    if (elapsed < server.busy_reply_threshold) {\n        return SCRIPT_CONTINUE;\n    }\n\n    serverLog(LL_WARNING,\n            \"Slow script detected: still in execution after %lld milliseconds. \"\n                    \"You can try killing the script using the %s command. Script name is: %s.\",\n            elapsed, (run_ctx->flags & SCRIPT_EVAL_MODE) ? 
\"SCRIPT KILL\" : \"FUNCTION KILL\", run_ctx->funcname);\n\n    enterScriptTimedoutMode(run_ctx);\n    /* Once the script times out we reenter the event loop to permit other\n     * clients to execute some commands. For this reason\n     * we need to mask the client executing the script from the event loop.\n     * If we don't do that the client may disconnect and could no longer be\n     * here when the EVAL command returns. */\n    protectClient(run_ctx->original_client);\n\n    processEventsWhileBlocked();\n\n    return (run_ctx->flags & SCRIPT_KILLED) ? SCRIPT_KILL : SCRIPT_CONTINUE;\n}\n\nuint64_t scriptFlagsToCmdFlags(uint64_t cmd_flags, uint64_t script_flags) {\n    /* If the script declared flags, clear the ones from the command and use the ones it declared. */\n    cmd_flags &= ~(CMD_STALE | CMD_DENYOOM | CMD_WRITE);\n\n    /* NO_WRITES implies ALLOW_OOM */\n    if (!(script_flags & (SCRIPT_FLAG_ALLOW_OOM | SCRIPT_FLAG_NO_WRITES)))\n        cmd_flags |= CMD_DENYOOM;\n    if (!(script_flags & SCRIPT_FLAG_NO_WRITES))\n        cmd_flags |= CMD_WRITE;\n    if (script_flags & SCRIPT_FLAG_ALLOW_STALE)\n        cmd_flags |= CMD_STALE;\n\n    /* In addition the MAY_REPLICATE flag is set for these commands, but\n     * if we have flags we know whether it's going to do any writes or not. 
 */\n    cmd_flags &= ~CMD_MAY_REPLICATE;\n\n    return cmd_flags;\n}\n\n/* Prepare the given run ctx for execution */\nint scriptPrepareForRun(scriptRunCtx *run_ctx, client *engine_client, client *caller, const char *funcname, uint64_t script_flags, int ro) {\n    serverAssert(!curr_run_ctx);\n    int client_allow_oom = !!(caller->flags & CLIENT_ALLOW_OOM);\n\n    int running_stale = server.masterhost &&\n            server.repl_state != REPL_STATE_CONNECTED &&\n            server.repl_serve_stale_data == 0;\n    int obey_client = mustObeyClient(caller);\n\n    if (!(script_flags & SCRIPT_FLAG_EVAL_COMPAT_MODE)) {\n        if ((script_flags & SCRIPT_FLAG_NO_CLUSTER) && server.cluster_enabled) {\n            addReplyError(caller, \"Can not run script on cluster, 'no-cluster' flag is set.\");\n            return C_ERR;\n        }\n\n        /* Can't run script with 'no-cluster' flag as above when cluster is enabled. */\n        if (script_flags & SCRIPT_FLAG_NO_CLUSTER) {\n            server.stat_cluster_incompatible_ops++;\n        }\n\n        if (running_stale && !(script_flags & SCRIPT_FLAG_ALLOW_STALE)) {\n            addReplyError(caller, \"-MASTERDOWN Link with MASTER is down, \"\n                             \"replica-serve-stale-data is set to 'no' \"\n                             \"and 'allow-stale' flag is not set on the script.\");\n            return C_ERR;\n        }\n\n        if (!(script_flags & SCRIPT_FLAG_NO_WRITES)) {\n            /* Script may perform writes; we need to verify:\n             * 1. we are not a readonly replica\n             * 2. no disk error detected\n             * 3. command is not `fcall_ro`/`eval[sha]_ro` */\n            if (server.masterhost && server.repl_slave_ro && !obey_client) {\n                addReplyError(caller, \"-READONLY Can not run script with write flag on readonly replica\");\n                return C_ERR;\n            }\n\n            /* Deny writes if we're unable to persist. 
 */\n            int deny_write_type = writeCommandsDeniedByDiskError();\n            if (deny_write_type != DISK_ERROR_TYPE_NONE && !obey_client) {\n                if (deny_write_type == DISK_ERROR_TYPE_RDB)\n                    addReplyError(caller, \"-MISCONF Redis is configured to save RDB snapshots, \"\n                                     \"but it's currently unable to persist to disk. \"\n                                     \"Writable scripts are blocked. Use 'no-writes' flag for read only scripts.\");\n                else\n                    addReplyErrorFormat(caller, \"-MISCONF Redis is configured to persist data to AOF, \"\n                                           \"but it's currently unable to persist to disk. \"\n                                           \"Writable scripts are blocked. Use 'no-writes' flag for read only scripts. \"\n                                           \"AOF error: %s\", strerror(server.aof_last_write_errno));\n                return C_ERR;\n            }\n\n            if (ro) {\n                addReplyError(caller, \"Can not execute a script with write flag using *_ro command.\");\n                return C_ERR;\n            }\n\n            /* Don't accept write commands if there are not enough good slaves and\n             * the user configured the min-slaves-to-write option. */\n            if (!checkGoodReplicasStatus()) {\n                addReplyErrorObject(caller, shared.noreplicaserr);\n                return C_ERR;\n            }\n        }\n\n        /* Check OOM state. The no-writes flag implies allow-oom. We tested it\n         * after the no-write error, so no need to mention it in the error reply. 
*/\n        if (!client_allow_oom && server.pre_command_oom_state && server.maxmemory &&\n            !(script_flags & (SCRIPT_FLAG_ALLOW_OOM|SCRIPT_FLAG_NO_WRITES)))\n        {\n            addReplyError(caller, \"-OOM allow-oom flag is not set on the script, \"\n                                  \"can not run it when used memory > 'maxmemory'\");\n            return C_ERR;\n        }\n\n    } else {\n        /* Special handling for backwards compatibility (no shebang eval[sha]) mode */\n        if (running_stale) {\n            addReplyErrorObject(caller, shared.masterdownerr);\n            return C_ERR;\n        }\n    }\n\n    run_ctx->c = engine_client;\n    run_ctx->original_client = caller;\n    run_ctx->funcname = funcname;\n    run_ctx->slot = caller->slot;\n    run_ctx->cluster_compatibility_check_slot = caller->cluster_compatibility_check_slot;\n\n    client *script_client = run_ctx->c;\n    client *curr_client = run_ctx->original_client;\n\n    /* Select the right DB in the context of the Lua client */\n    selectDb(script_client, curr_client->db->id);\n    script_client->resp = 2; /* Default is RESP2, scripts can change it. */\n\n    /* If we are in MULTI context, flag Lua client as CLIENT_MULTI. */\n    if (curr_client->flags & CLIENT_MULTI) {\n        script_client->flags |= CLIENT_MULTI;\n    }\n\n    run_ctx->start_time = getMonotonicUs();\n\n    run_ctx->flags = 0;\n    run_ctx->repl_flags = PROPAGATE_AOF | PROPAGATE_REPL;\n\n    if (ro || (!(script_flags & SCRIPT_FLAG_EVAL_COMPAT_MODE) && (script_flags & SCRIPT_FLAG_NO_WRITES))) {\n        /* On fcall_ro or on functions that do not have the 'write'\n         * flag, we will not allow write commands. 
 */\n        run_ctx->flags |= SCRIPT_READ_ONLY;\n    }\n    if (client_allow_oom || (!(script_flags & SCRIPT_FLAG_EVAL_COMPAT_MODE) && (script_flags & SCRIPT_FLAG_ALLOW_OOM))) {\n        /* Note: we don't need to test the no-writes flag here and set this run_ctx flag,\n         * since only write commands can be deny-oom. */\n        run_ctx->flags |= SCRIPT_ALLOW_OOM;\n    }\n\n    if ((script_flags & SCRIPT_FLAG_EVAL_COMPAT_MODE) || (script_flags & SCRIPT_FLAG_ALLOW_CROSS_SLOT)) {\n        run_ctx->flags |= SCRIPT_ALLOW_CROSS_SLOT;\n    }\n\n    /* set the curr_run_ctx so we can use it to kill the script if needed */\n    curr_run_ctx = run_ctx;\n\n    return C_OK;\n}\n\n/* Reset the given run ctx after execution */\nvoid scriptResetRun(scriptRunCtx *run_ctx) {\n    serverAssert(curr_run_ctx);\n\n    /* After the script is done, remove the MULTI state. */\n    run_ctx->c->flags &= ~CLIENT_MULTI;\n\n    if (scriptIsTimedout()) {\n        exitScriptTimedoutMode(run_ctx);\n        /* Restore the client that was protected when the script timeout\n         * was detected. 
*/\n        unprotectClient(run_ctx->original_client);\n    }\n\n    run_ctx->slot = -1;\n    run_ctx->cluster_compatibility_check_slot = -2;\n\n    preventCommandPropagation(run_ctx->original_client);\n\n    /*  unset curr_run_ctx so we will know there is no running script */\n    curr_run_ctx = NULL;\n}\n\n/* return true if a script is currently running */\nint scriptIsRunning(void) {\n    return curr_run_ctx != NULL;\n}\n\nconst char* scriptCurrFunction(void) {\n    serverAssert(scriptIsRunning());\n    return curr_run_ctx->funcname;\n}\n\nint scriptIsEval(void) {\n    serverAssert(scriptIsRunning());\n    return curr_run_ctx->flags & SCRIPT_EVAL_MODE;\n}\n\n/* Kill the current running script */\nvoid scriptKill(client *c, int is_eval) {\n    if (!curr_run_ctx) {\n        addReplyError(c, \"-NOTBUSY No scripts in execution right now.\");\n        return;\n    }\n    if (mustObeyClient(curr_run_ctx->original_client)) {\n        addReplyError(c,\n                \"-UNKILLABLE The busy script was sent by a master instance in the context of replication and cannot be killed.\");\n        return;\n    }\n    if (curr_run_ctx->flags & SCRIPT_WRITE_DIRTY) {\n        addReplyError(c,\n                \"-UNKILLABLE Sorry the script already executed write \"\n                        \"commands against the dataset. 
You can either wait the \"\n                        \"script termination or kill the server in a hard way \"\n                        \"using the SHUTDOWN NOSAVE command.\");\n        return;\n    }\n    if (is_eval && !(curr_run_ctx->flags & SCRIPT_EVAL_MODE)) {\n        /* Killing a function with 'SCRIPT KILL' is not allowed */\n        addReplyErrorObject(c, shared.slowscripterr);\n        return;\n    }\n    if (!is_eval && (curr_run_ctx->flags & SCRIPT_EVAL_MODE)) {\n        /* Killing an eval with 'FUNCTION KILL' is not allowed */\n        addReplyErrorObject(c, shared.slowevalerr);\n        return;\n    }\n    curr_run_ctx->flags |= SCRIPT_KILLED;\n    addReply(c, shared.ok);\n}\n\nstatic int scriptVerifyCommandArity(struct redisCommand *cmd, int argc, sds *err) {\n    if (!cmd || ((cmd->arity > 0 && cmd->arity != argc) || (argc < -cmd->arity))) {\n        if (cmd)\n            *err = sdsnew(\"Wrong number of args calling Redis command from script\");\n        else\n            *err = sdsnew(\"Unknown Redis command called from script\");\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nstatic int scriptVerifyACL(client *c, sds *err) {\n    /* Check the ACLs. */\n    int acl_errpos;\n    int acl_retval = ACLCheckAllPerm(c, &acl_errpos);\n    if (acl_retval != ACL_OK) {\n        addACLLogEntry(c,acl_retval,ACL_LOG_CTX_LUA,acl_errpos,NULL,NULL);\n        sds msg = getAclErrorMessage(acl_retval, c->user, c->cmd, c->argv[acl_errpos]->ptr, 0);\n        *err = sdscatsds(sdsnew(\"ACL failure in script: \"), msg);\n        sdsfree(msg);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nstatic int scriptVerifyWriteCommandAllow(scriptRunCtx *run_ctx, char **err) {\n\n    /* A write command, on an RO command or an RO script is rejected ASAP.\n     * Note: For scripts, we consider may-replicate commands as write commands.\n     * This also makes it possible to allow read-only scripts to be run during\n     * CLIENT PAUSE WRITE. 
*/\n    if (run_ctx->flags & SCRIPT_READ_ONLY &&\n        (run_ctx->c->cmd->flags & (CMD_WRITE|CMD_MAY_REPLICATE)))\n    {\n        *err = sdsnew(\"Write commands are not allowed from read-only scripts.\");\n        return C_ERR;\n    }\n\n    /* The other checks below are on the server state and are only relevant for\n     *  write commands, return if this is not a write command. */\n    if (!(run_ctx->c->cmd->flags & CMD_WRITE))\n        return C_OK;\n\n    /* If the script already made a modification to the dataset, we can't\n     * fail it on unpredictable error state. */\n    if ((run_ctx->flags & SCRIPT_WRITE_DIRTY))\n        return C_OK;\n\n    /* Write commands are forbidden against read-only slaves, or if a\n     * command marked as non-deterministic was already called in the context\n     * of this script. */\n    int deny_write_type = writeCommandsDeniedByDiskError();\n\n    if (server.masterhost && server.repl_slave_ro &&\n        !mustObeyClient(run_ctx->original_client))\n    {\n        *err = sdsdup(shared.roslaveerr->ptr);\n        return C_ERR;\n    }\n\n    if (deny_write_type != DISK_ERROR_TYPE_NONE) {\n        *err = writeCommandsGetDiskErrorMessage(deny_write_type);\n        return C_ERR;\n    }\n\n    /* Don't accept write commands if there are not enough good slaves and\n     * user configured the min-slaves-to-write option. 
Note this is only reachable\n     * for Eval scripts that didn't declare flags, see the other check in\n     * scriptPrepareForRun */\n    if (!checkGoodReplicasStatus()) {\n        *err = sdsdup(shared.noreplicaserr->ptr);\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\nstatic int scriptVerifyOOM(scriptRunCtx *run_ctx, char **err) {\n    if (run_ctx->flags & SCRIPT_ALLOW_OOM) {\n        /* Allow running any command even if OOM reached */\n        return C_OK;\n    }\n\n    /* If we reached the memory limit configured via maxmemory, commands that\n     * could enlarge the memory usage are not allowed, but only if this is the\n     * first write in the context of this script, otherwise we can't stop\n     * in the middle. */\n\n    if (server.maxmemory &&                            /* Maxmemory is actually enabled. */\n        !mustObeyClient(run_ctx->original_client) &&   /* Don't care about mem for replicas or AOF. */\n        !(run_ctx->flags & SCRIPT_WRITE_DIRTY) &&      /* Script had no side effects so far. */\n        server.pre_command_oom_state &&                /* Detected OOM when the script started. */\n        (run_ctx->c->cmd->flags & CMD_DENYOOM))\n    {\n        *err = sdsdup(shared.oomerr->ptr);\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\nstatic int scriptVerifyClusterState(scriptRunCtx *run_ctx, client *c, client *original_c, sds *err) {\n    if (!server.cluster_enabled || mustObeyClient(original_c)) {\n        return C_OK;\n    }\n    /* If this is a Redis Cluster node, we need to make sure the script is not\n     * trying to access non-local keys, with the exception of commands\n     * received from our master or when loading the AOF back in memory. */\n    int error_code;\n    /* Duplicate relevant flags in the script client. 
*/\n    c->flags &= ~(CLIENT_READONLY | CLIENT_ASKING);\n    c->flags |= original_c->flags & (CLIENT_READONLY | CLIENT_ASKING);\n    const uint64_t cmd_flags = getCommandFlags(c);\n    int hashslot = -1;\n    if (getNodeByQuery(c, c->cmd, c->argv, c->argc, &hashslot, NULL, 0, cmd_flags, &error_code) != getMyClusterNode()) {\n        if (error_code == CLUSTER_REDIR_DOWN_RO_STATE) {\n            *err = sdsnew(\n                    \"Script attempted to execute a write command while the \"\n                            \"cluster is down and readonly\");\n        } else if (error_code == CLUSTER_REDIR_DOWN_STATE) {\n            *err = sdsnew(\"Script attempted to execute a command while the \"\n                    \"cluster is down\");\n        } else if (error_code == CLUSTER_REDIR_CROSS_SLOT) {\n            *err = sdscatfmt(sdsempty(),\n                             \"Command '%S' in script attempted to access keys that don't hash to the same slot\",\n                             c->cmd->fullname);\n        } else if (error_code == CLUSTER_REDIR_UNSTABLE) {\n            /* The request spans multiple keys in the same slot,\n             * but the slot is not \"stable\" currently as there is\n             * a migration or import in progress. 
*/\n            *err = sdscatfmt(sdsempty(),\n                             \"Unable to execute command '%S' in script \"\n                             \"because undeclared keys were accessed during rehashing of the slot\",\n                             c->cmd->fullname);\n        } else if (error_code == CLUSTER_REDIR_DOWN_UNBOUND) {\n            *err = sdsnew(\"Script attempted to access a slot not served\");\n        } else if (error_code == CLUSTER_REDIR_TRIMMING) {\n            *err = sdsnew(\"Script attempted to access a slot being trimmed\");\n        } else {\n            /* error_code == CLUSTER_REDIR_MOVED || error_code == CLUSTER_REDIR_ASK */\n            *err = sdsnew(\"Script attempted to access a non local key in a \"\n                    \"cluster node\");\n        }\n        return C_ERR;\n    }\n\n    /* If the script declared keys in advance, the cross slot error would have\n     * already been thrown. This is only checking for cross slot keys being accessed\n     * that weren't pre-declared. 
*/\n    if (hashslot != -1 && !(run_ctx->flags & SCRIPT_ALLOW_CROSS_SLOT)) {\n        if (run_ctx->slot == -1) {\n            run_ctx->slot = hashslot;\n        } else if (run_ctx->slot != hashslot) {\n            *err = sdsnew(\"Script attempted to access keys that do not hash to \"\n                    \"the same slot\");\n            return C_ERR;\n        }\n    }\n\n    c->slot = hashslot;\n    original_c->slot = hashslot;\n\n    return C_OK;\n}\n\nstatic void scriptCheckClusterCompatibility(scriptRunCtx *run_ctx, client *c) {\n    int hashslot = -1;\n\n    /* If detection is not needed for this script, or a slot violation was\n     * already detected and reported for this script, exit. */\n    if (run_ctx->cluster_compatibility_check_slot == -2) return;\n\n    if (!areCommandKeysInSameSlot(c, &hashslot)) {\n        server.stat_cluster_incompatible_ops++;\n        /* Already found cross slot usage, skip the check for the rest of the script */\n        run_ctx->cluster_compatibility_check_slot = -2;\n    } else {\n        /* Check whether the declared keys and the accessed keys belong to the same slot.\n         * If the SCRIPT_ALLOW_CROSS_SLOT flag is set, skip this check since cross-slot\n         * access is allowed in cluster mode, though it may fail when the slot doesn't belong to the node. 
*/\n        if (hashslot != -1 && !(run_ctx->flags & SCRIPT_ALLOW_CROSS_SLOT)) {\n            if (run_ctx->cluster_compatibility_check_slot == -1) {\n                run_ctx->cluster_compatibility_check_slot = hashslot;\n            } else if (run_ctx->cluster_compatibility_check_slot != hashslot) {\n                server.stat_cluster_incompatible_ops++;\n                /* Already found cross slot usage, skip the check for the rest of the script */\n                run_ctx->cluster_compatibility_check_slot = -2;\n            }\n        }\n    }\n}\n\n/* Set the RESP version for a given run_ctx. */\nint scriptSetResp(scriptRunCtx *run_ctx, int resp) {\n    if (resp != 2 && resp != 3) {\n        return C_ERR;\n    }\n\n    run_ctx->c->resp = resp;\n    return C_OK;\n}\n\n/* Set the replication flags for a given run_ctx:\n * any combination of PROPAGATE_AOF | PROPAGATE_REPL */\nint scriptSetRepl(scriptRunCtx *run_ctx, int repl) {\n    if ((repl & ~(PROPAGATE_AOF | PROPAGATE_REPL)) != 0) {\n        return C_ERR;\n    }\n    run_ctx->repl_flags = repl;\n    return C_OK;\n}\n\nstatic int scriptVerifyAllowStale(client *c, sds *err) {\n    if (!server.masterhost) {\n        /* Not a replica, stale is irrelevant */\n        return C_OK;\n    }\n\n    if (server.repl_state == REPL_STATE_CONNECTED) {\n        /* Connected to the master, stale is irrelevant */\n        return C_OK;\n    }\n\n    if (server.repl_serve_stale_data == 1) {\n        /* Disconnected from the master, but allowed to serve stale data */\n        return C_OK;\n    }\n\n    if (c->cmd->flags & CMD_STALE) {\n        /* Command is allowed while stale */\n        return C_OK;\n    }\n\n    /* The command can not run on a stale replica */\n    *err = sdsnew(\"Can not execute the command on a stale replica\");\n    return C_ERR;\n}\n\n/* Call a Redis command.\n * The reply is written to the run_ctx client and it is\n * up to the engine to take and parse.\n * The err out variable is set only if an error occurs, and describes the error.\n * If err is set, no reply is written 
to the run_ctx client. */\nvoid scriptCall(scriptRunCtx *run_ctx, sds *err) {\n    client *c = run_ctx->c;\n\n    /* Setup our fake client for command execution */\n    c->user = run_ctx->original_client->user;\n\n    /* Process module hooks */\n    moduleCallCommandFilters(c);\n\n    struct redisCommand *cmd = lookupCommand(c->argv, c->argc);\n    c->cmd = c->lastcmd = c->realcmd = cmd;\n    if (scriptVerifyCommandArity(cmd, c->argc, err) != C_OK) {\n        goto error;\n    }\n\n    /* There are commands that are not allowed inside scripts. */\n    if (!server.script_disable_deny_script && (cmd->flags & CMD_NOSCRIPT)) {\n        *err = sdsnew(\"This Redis command is not allowed from script\");\n        goto error;\n    }\n\n    if (scriptVerifyAllowStale(c, err) != C_OK) {\n        goto error;\n    }\n\n    if (scriptVerifyACL(c, err) != C_OK) {\n        goto error;\n    }\n\n    if (scriptVerifyWriteCommandAllow(run_ctx, err) != C_OK) {\n        goto error;\n    }\n\n    if (scriptVerifyOOM(run_ctx, err) != C_OK) {\n        goto error;\n    }\n\n    if (cmd->flags & CMD_WRITE) {\n        /* signify that we already change the data in this execution */\n        run_ctx->flags |= SCRIPT_WRITE_DIRTY;\n    }\n\n    if (scriptVerifyClusterState(run_ctx, c, run_ctx->original_client, err) != C_OK) {\n        goto error;\n    }\n\n    scriptCheckClusterCompatibility(run_ctx, c);\n\n    int call_flags = CMD_CALL_NONE;\n    if (run_ctx->repl_flags & PROPAGATE_AOF) {\n        call_flags |= CMD_CALL_PROPAGATE_AOF;\n    }\n    if (run_ctx->repl_flags & PROPAGATE_REPL) {\n        call_flags |= CMD_CALL_PROPAGATE_REPL;\n    }\n    call(c, call_flags);\n    serverAssert((c->flags & CLIENT_BLOCKED) == 0);\n    clusterSlotStatsInvalidateSlotIfApplicable(run_ctx);\n    return;\n\nerror:\n    afterErrorReply(c, *err, sdslen(*err), 0);\n    incrCommandStatsOnError(cmd, ERROR_COMMAND_REJECTED);\n}\n\nlong long scriptRunDuration(void) {\n    serverAssert(scriptIsRunning());\n    return 
elapsedMs(curr_run_ctx->start_time);\n}\n"
  },
  {
    "path": "src/script.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SCRIPT_H_\n#define __SCRIPT_H_\n\n/*\n * The script.c unit provides an API for functions and eval\n * to interact with Redis. Interaction mostly means\n * executing commands, but also functionality like calling\n * back into Redis on long-running scripts or checking whether the script was killed.\n *\n * The interaction is done using a scriptRunCtx object that\n * needs to be created by the user and initialized using scriptPrepareForRun.\n *\n * Detailed list of functionality exposed by the unit:\n * 1. Calling commands (including all the validation checks such as\n *    acl, cluster, read only run, ...)\n * 2. Set Resp\n * 3. Set Replication method (AOF/REPLICATION/NONE)\n * 4. Call back into Redis on long-running scripts to allow Redis to reply\n *    to clients and perform script kill\n */\n\n/*\n * The scriptInterrupt function will return one of these values:\n *\n * - SCRIPT_KILL - kill the current running script.\n * - SCRIPT_CONTINUE - keep running the current script.\n */\n#define SCRIPT_KILL 1\n#define SCRIPT_CONTINUE 2\n\n/* runCtx flags */\n#define SCRIPT_WRITE_DIRTY            (1ULL<<0) /* indicate that the current script already performed a write command */\n#define SCRIPT_TIMEDOUT               (1ULL<<3) /* indicate that the current script timed out */\n#define SCRIPT_KILLED                 (1ULL<<4) /* indicate that the current script was marked to be killed */\n#define SCRIPT_READ_ONLY              (1ULL<<5) /* indicate that the current script should only perform read commands */\n#define SCRIPT_ALLOW_OOM              (1ULL<<6) /* indicate to allow any command even if OOM reached */\n#define SCRIPT_EVAL_MODE              (1ULL<<7) /* Indicate that the current script was 
called from legacy Lua */\n#define SCRIPT_ALLOW_CROSS_SLOT       (1ULL<<8) /* Indicate that the current script may access keys from multiple slots */\ntypedef struct scriptRunCtx scriptRunCtx;\n\nstruct scriptRunCtx {\n    const char *funcname;\n    client *c;\n    client *original_client;\n    int flags;\n    int repl_flags;\n    monotime start_time;\n    int slot;\n    int cluster_compatibility_check_slot;\n};\n\n/* Scripts flags */\n#define SCRIPT_FLAG_NO_WRITES        (1ULL<<0)\n#define SCRIPT_FLAG_ALLOW_OOM        (1ULL<<1)\n#define SCRIPT_FLAG_ALLOW_STALE      (1ULL<<2)\n#define SCRIPT_FLAG_NO_CLUSTER       (1ULL<<3)\n#define SCRIPT_FLAG_EVAL_COMPAT_MODE (1ULL<<4) /* EVAL Script backwards compatible behavior, no shebang provided */\n#define SCRIPT_FLAG_ALLOW_CROSS_SLOT (1ULL<<5)\n\n/* Defines a script flags */\ntypedef struct scriptFlag {\n    uint64_t flag;\n    const char *str;\n} scriptFlag;\n\nextern scriptFlag scripts_flags_def[];\n\nvoid luaEnvInit(void);\nlua_State *createLuaState(void);\nuint64_t scriptFlagsToCmdFlags(uint64_t cmd_flags, uint64_t script_flags);\nint scriptPrepareForRun(scriptRunCtx *r_ctx, client *engine_client, client *caller, const char *funcname, uint64_t script_flags, int ro);\nvoid scriptResetRun(scriptRunCtx *r_ctx);\nint scriptSetResp(scriptRunCtx *r_ctx, int resp);\nint scriptSetRepl(scriptRunCtx *r_ctx, int repl);\nvoid scriptCall(scriptRunCtx *r_ctx, sds *err);\nint scriptInterrupt(scriptRunCtx *r_ctx);\nvoid scriptKill(client *c, int is_eval);\nint scriptIsRunning(void);\nconst char* scriptCurrFunction(void);\nint scriptIsEval(void);\nint scriptIsTimedout(void);\nclient* scriptGetClient(void);\nclient* scriptGetCaller(void);\nlong long scriptRunDuration(void);\n\n#endif /* __SCRIPT_H_ */\n"
  },
  {
    "path": "src/script_lua.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"script_lua.h\"\n#include \"fpconv_dtoa.h\"\n\n#include \"server.h\"\n#include \"sha1.h\"\n#include \"rand.h\"\n#include \"cluster.h\"\n#include \"monotonic.h\"\n#include \"resp_parser.h\"\n#include \"version.h\"\n#include <lauxlib.h>\n#include <lualib.h>\n#include <ctype.h>\n#include <math.h>\n\n/* Globals that are added by the Lua libraries */\nstatic char *libraries_allow_list[] = {\n    \"string\",\n    \"cjson\",\n    \"bit\",\n    \"cmsgpack\",\n    \"math\",\n    \"table\",\n    \"struct\",\n    \"os\",\n    NULL,\n};\n\n/* Redis Lua API globals */\nstatic char *redis_api_allow_list[] = {\n    \"redis\",\n    \"__redis__err__handler\", /* error handler for eval, currently located on globals.\n                                Should move to registry. 
*/\n    NULL,\n};\n\n/* Lua builtins */\nstatic char *lua_builtins_allow_list[] = {\n    \"xpcall\",\n    \"tostring\",\n    \"setmetatable\",\n    \"next\",\n    \"assert\",\n    \"tonumber\",\n    \"rawequal\",\n    \"collectgarbage\",\n    \"getmetatable\",\n    \"rawset\",\n    \"pcall\",\n    \"coroutine\",\n    \"type\",\n    \"_G\",\n    \"select\",\n    \"unpack\",\n    \"gcinfo\",\n    \"pairs\",\n    \"rawget\",\n    \"loadstring\",\n    \"ipairs\",\n    \"_VERSION\",\n    \"load\",\n    \"error\",\n    NULL,\n};\n\n/* Lua builtins which are deprecated for sandboxing concerns */\nstatic char *lua_builtins_deprecated[] = {\n    \"newproxy\",\n    \"setfenv\",\n    \"getfenv\",\n    NULL,\n};\n\n/* Lua builtins which are allowed on initialization but will be removed right after */\nstatic char *lua_builtins_removed_after_initialization_allow_list[] = {\n    \"debug\", /* debug will be set to nil after the error handler is created */\n    NULL,\n};\n\n/* These allow lists were created from the globals that were\n * available to the user when the allow lists were first introduced.\n * Because we do not want to break backward compatibility we keep\n * all the globals. The allow lists will prevent us from accidentally\n * creating unwanted globals in the future.\n *\n * Also notice that the allow lists are only checked at startup;\n * after that the global table is locked, so there is no need to check anything. */\nstatic char **allow_lists[] = {\n    libraries_allow_list,\n    redis_api_allow_list,\n    lua_builtins_allow_list,\n    lua_builtins_removed_after_initialization_allow_list,\n    NULL,\n};\n\n/* The deny list contains elements which we know we do not want to add to globals\n * and there is no need to print a warning message for them. We will print a\n * log message only if an element was added to the globals and the element is\n * not on the allow list nor on the deny list. 
*/\nstatic char *deny_list[] = {\n    \"dofile\",\n    \"loadfile\",\n    \"print\",\n    NULL,\n};\n\nstatic int redis_math_random (lua_State *L);\nstatic int redis_math_randomseed (lua_State *L);\nstatic void redisProtocolToLuaType_Int(void *ctx, long long val, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_BulkString(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_NullBulkString(void *ctx, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_NullArray(void *ctx, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Status(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Error(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Array(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\nstatic void redisProtocolToLuaType_Map(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\nstatic void redisProtocolToLuaType_Set(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\nstatic void redisProtocolToLuaType_Null(void *ctx, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Bool(void *ctx, int val, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Double(void *ctx, double d, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_BigNumber(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_VerbatimString(void *ctx, const char *format, const char *str, size_t len, const char *proto, size_t proto_len);\nstatic void redisProtocolToLuaType_Attribute(struct ReplyParser *parser, void *ctx, size_t len, const char *proto);\nstatic void luaReplyToRedisReply(client *c, client* script_client, lua_State *lua);\n\n/*\n * Save the give pointer on 
Lua registry, used to save the Lua context and\n * function context so we can retrieve them from lua_State.\n */\nvoid luaSaveOnRegistry(lua_State* lua, const char* name, void* ptr) {\n    lua_pushstring(lua, name);\n    if (ptr) {\n        lua_pushlightuserdata(lua, ptr);\n    } else {\n        lua_pushnil(lua);\n    }\n    lua_settable(lua, LUA_REGISTRYINDEX);\n}\n\n/*\n * Get a saved pointer from registry\n */\nvoid* luaGetFromRegistry(lua_State* lua, const char* name) {\n    lua_pushstring(lua, name);\n    lua_gettable(lua, LUA_REGISTRYINDEX);\n\n    if (lua_isnil(lua, -1)) {\n        lua_pop(lua, 1); /* pops the value */\n        return NULL;\n    }\n    /* must be light user data */\n    serverAssert(lua_islightuserdata(lua, -1));\n\n    void* ptr = (void*) lua_topointer(lua, -1);\n    serverAssert(ptr);\n\n    /* pops the value */\n    lua_pop(lua, 1);\n\n    return ptr;\n}\n\n/* ---------------------------------------------------------------------------\n * Redis reply to Lua type conversion functions.\n * ------------------------------------------------------------------------- */\n\n/* Take a Redis reply in the Redis protocol format and convert it into a\n * Lua type. Thanks to this function, and the introduction of not connected\n * clients, it is trivial to implement the redis() lua function.\n *\n * Basically we take the arguments, execute the Redis command in the context\n * of a non connected client, then take the generated reply and convert it\n * into a suitable Lua type. With this trick the scripting feature does not\n * need the introduction of a full Redis internals API. The script\n * is like a normal client that bypasses all the slow I/O paths.\n *\n * Note: in this function we do not do any sanity check as the reply is\n * generated by Redis directly. 
This allows us to go faster.\n *\n * Errors are returned as a table with a single 'err' field set to the\n * error string.\n */\n\nstatic const ReplyParserCallbacks DefaultLuaTypeParserCallbacks = {\n    .null_array_callback = redisProtocolToLuaType_NullArray,\n    .bulk_string_callback = redisProtocolToLuaType_BulkString,\n    .null_bulk_string_callback = redisProtocolToLuaType_NullBulkString,\n    .error_callback = redisProtocolToLuaType_Error,\n    .simple_str_callback = redisProtocolToLuaType_Status,\n    .long_callback = redisProtocolToLuaType_Int,\n    .array_callback = redisProtocolToLuaType_Array,\n    .set_callback = redisProtocolToLuaType_Set,\n    .map_callback = redisProtocolToLuaType_Map,\n    .bool_callback = redisProtocolToLuaType_Bool,\n    .double_callback = redisProtocolToLuaType_Double,\n    .null_callback = redisProtocolToLuaType_Null,\n    .big_number_callback = redisProtocolToLuaType_BigNumber,\n    .verbatim_string_callback = redisProtocolToLuaType_VerbatimString,\n    .attribute_callback = redisProtocolToLuaType_Attribute,\n    .error = NULL,\n};\n\nstatic void redisProtocolToLuaType(lua_State *lua, char* reply) {\n    ReplyParser parser = {.curr_location = reply, .callbacks = DefaultLuaTypeParserCallbacks};\n\n    parseReply(&parser, lua);\n}\n\nstatic void redisProtocolToLuaType_Int(void *ctx, long long val, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. 
*/\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushnumber(lua,(lua_Number)val);\n}\n\nstatic void redisProtocolToLuaType_NullBulkString(void *ctx, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushboolean(lua,0);\n}\n\nstatic void redisProtocolToLuaType_NullArray(void *ctx, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushboolean(lua,0);\n}\n\n\nstatic void redisProtocolToLuaType_BulkString(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. 
*/\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushlstring(lua,str,len);\n}\n\nstatic void redisProtocolToLuaType_Status(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 3)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_newtable(lua);\n    lua_pushstring(lua,\"ok\");\n    lua_pushlstring(lua,str,len);\n    lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_Error(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 3)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    sds err_msg = sdscatlen(sdsnew(\"-\"), str, len);\n    luaPushErrorBuff(lua,err_msg);\n    /* push a field indicate to ignore updating the stats on this error\n     * because it was already updated when executing the command. */\n    lua_pushstring(lua,\"ignore_error_stats_update\");\n    lua_pushboolean(lua, 1);\n    lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_Map(struct ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    UNUSED(proto);\n    lua_State *lua = ctx;\n    if (lua) {\n        if (!lua_checkstack(lua, 3)) {\n            /* Increase the Lua stack if needed, to make sure there is enough room\n             * to push elements to the stack. 
On failure, exit with panic. */\n            serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n        }\n        lua_newtable(lua);\n        lua_pushstring(lua, \"map\");\n        lua_newtable(lua);\n    }\n    for (size_t j = 0; j < len; j++) {\n        parseReply(parser,lua);\n        parseReply(parser,lua);\n        if (lua) lua_settable(lua,-3);\n    }\n    if (lua) lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_Set(struct ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    UNUSED(proto);\n\n    lua_State *lua = ctx;\n    if (lua) {\n        if (!lua_checkstack(lua, 3)) {\n            /* Increase the Lua stack if needed, to make sure there is enough room\n             * to push elements to the stack. On failure, exit with panic. */\n            serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n        }\n        lua_newtable(lua);\n        lua_pushstring(lua, \"set\");\n        lua_newtable(lua);\n    }\n    for (size_t j = 0; j < len; j++) {\n        parseReply(parser,lua);\n        if (lua) {\n            if (!lua_checkstack(lua, 1)) {\n                /* Increase the Lua stack if needed, to make sure there is enough room\n                 * to push elements to the stack. 
On failure, exit with panic.\n                 * Notice that here we need to check the stack again because the recursive\n                 * call to redisProtocolToLuaType might have used the room allocated in the stack. */\n                serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n            }\n            lua_pushboolean(lua,1);\n            lua_settable(lua,-3);\n        }\n    }\n    if (lua) lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_Array(struct ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    UNUSED(proto);\n\n    lua_State *lua = ctx;\n    if (lua) {\n        if (!lua_checkstack(lua, 2)) {\n            /* Increase the Lua stack if needed, to make sure there is enough room\n             * to push elements to the stack. On failure, exit with panic. */\n            serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n        }\n        lua_newtable(lua);\n    }\n    for (size_t j = 0; j < len; j++) {\n        if (lua) lua_pushnumber(lua,j+1);\n        parseReply(parser,lua);\n        if (lua) lua_settable(lua,-3);\n    }\n}\n\nstatic void redisProtocolToLuaType_Attribute(struct ReplyParser *parser, void *ctx, size_t len, const char *proto) {\n    UNUSED(proto);\n\n    /* Parse the attribute reply.\n     * Currently, we do not expose the attribute to the Lua script so\n     * we just need to continue parsing and ignore it (the NULL ensures that the\n     * reply will be ignored). */\n    for (size_t j = 0; j < len; j++) {\n        parseReply(parser,NULL);\n        parseReply(parser,NULL);\n    }\n\n    /* Parse the reply itself. 
*/\n    parseReply(parser,ctx);\n}\n\nstatic void redisProtocolToLuaType_VerbatimString(void *ctx, const char *format, const char *str, size_t len, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 5)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_newtable(lua);\n    lua_pushstring(lua,\"verbatim_string\");\n    lua_newtable(lua);\n    lua_pushstring(lua,\"string\");\n    lua_pushlstring(lua,str,len);\n    lua_settable(lua,-3);\n    lua_pushstring(lua,\"format\");\n    lua_pushlstring(lua,format,3);\n    lua_settable(lua,-3);\n    lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_BigNumber(void *ctx, const char *str, size_t len, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 3)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_newtable(lua);\n    lua_pushstring(lua,\"big_number\");\n    lua_pushlstring(lua,str,len);\n    lua_settable(lua,-3);\n}\n\nstatic void redisProtocolToLuaType_Null(void *ctx, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. 
*/\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushnil(lua);\n}\n\nstatic void redisProtocolToLuaType_Bool(void *ctx, int val, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 1)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_pushboolean(lua,val);\n}\n\nstatic void redisProtocolToLuaType_Double(void *ctx, double d, const char *proto, size_t proto_len) {\n    UNUSED(proto);\n    UNUSED(proto_len);\n    if (!ctx) {\n        return;\n    }\n\n    lua_State *lua = ctx;\n    if (!lua_checkstack(lua, 3)) {\n        /* Increase the Lua stack if needed, to make sure there is enough room\n         * to push elements to the stack. On failure, exit with panic. */\n        serverPanic(\"lua stack limit reach when parsing redis.call reply\");\n    }\n    lua_newtable(lua);\n    lua_pushstring(lua,\"double\");\n    lua_pushnumber(lua,d);\n    lua_settable(lua,-3);\n}\n\n/* This function is used in order to push an error on the Lua stack in the\n * format used by redis.pcall to return errors, which is a lua table\n * with an \"err\" field set to the error string including the error code.\n * Note that this table is never a valid reply by proper commands,\n * since the returned tables are otherwise always indexed by integers, never by strings.\n *\n * The function takes ownership on the given err_buffer. */\nvoid luaPushErrorBuff(lua_State *lua, sds err_buffer) {\n    sds msg;\n    sds error_code;\n\n    /* If debugging is active and in step mode, log errors resulting from\n     * Redis commands. 
*/\n    if (ldbIsEnabled()) {\n        ldbLog(sdscatprintf(sdsempty(),\"<error> %s\",err_buffer));\n    }\n\n    /* There are two possible formats for the received `error` string:\n     * 1) \"-CODE msg\": in this case we remove the leading '-' since we don't store it as part of the lua error format.\n     * 2) \"msg\": in this case we prepend a generic 'ERR' code since all error statuses need some error code.\n     * We support format (1) so this function can reuse the error messages used in other places in redis.\n     * We support format (2) so it'll be easy to pass descriptive errors to this function without worrying about format.\n     */\n    if (err_buffer[0] == '-') {\n        /* derive error code from the message */\n        char *err_msg = strstr(err_buffer, \" \");\n        if (!err_msg) {\n            msg = sdsnew(err_buffer+1);\n            error_code = sdsnew(\"ERR\");\n        } else {\n            *err_msg = '\\0';\n            msg = sdsnew(err_msg+1);\n            error_code = sdsnew(err_buffer + 1);\n        }\n        sdsfree(err_buffer);\n    } else {\n        msg = err_buffer;\n        error_code = sdsnew(\"ERR\");\n    }\n    /* Trim newline at end of string. If we reuse the ready-made Redis error objects (case 1 above) then we might\n     * have a newline that needs to be trimmed. In any case the lua Redis error table shouldn't end with a newline. 
*/\n    msg = sdstrim(msg, \"\\r\\n\");\n    sds final_msg = sdscatfmt(error_code, \" %s\", msg);\n\n    lua_newtable(lua);\n    lua_pushstring(lua,\"err\");\n    lua_pushstring(lua, final_msg);\n    lua_settable(lua,-3);\n\n    sdsfree(msg);\n    sdsfree(final_msg);\n}\n\nvoid luaPushError(lua_State *lua, const char *error) {\n    luaPushErrorBuff(lua, sdsnew(error));\n}\n\n/* In case the error set into the Lua stack by luaPushError() was generated\n * by the non-error-trapping version of redis.pcall(), which is redis.call(),\n * this function will raise the Lua error so that the execution of the\n * script will be halted. */\nint luaError(lua_State *lua) {\n    return lua_error(lua);\n}\n\n\n/* ---------------------------------------------------------------------------\n * Lua reply to Redis reply conversion functions.\n * ------------------------------------------------------------------------- */\n\n/* Reply to client 'c' converting the top element in the Lua stack to a\n * Redis reply. As a side effect the element is consumed from the stack.  */\nstatic void luaReplyToRedisReply(client *c, client* script_client, lua_State *lua) {\n    int t = lua_type(lua,-1);\n\n    if (!lua_checkstack(lua, 4)) {\n        /* Increase the Lua stack if needed to make sure there is enough room\n         * to push 4 elements to the stack. On failure, return error.\n         * Notice that we need, in the worst case, 4 elements because returning a map might\n         * require push 4 elements to the Lua stack.*/\n        addReplyError(c, \"reached lua stack limit\");\n        lua_pop(lua,1); /* pop the element from the stack */\n        return;\n    }\n\n    switch(t) {\n    case LUA_TSTRING:\n        addReplyBulkCBuffer(c,(char*)lua_tostring(lua,-1),lua_strlen(lua,-1));\n        break;\n    case LUA_TBOOLEAN:\n        if (script_client->resp == 2)\n            addReply(c,lua_toboolean(lua,-1) ? 
shared.cone :\n                                               shared.null[c->resp]);\n        else\n            addReplyBool(c,lua_toboolean(lua,-1));\n        break;\n    case LUA_TNUMBER:\n        addReplyLongLong(c,(long long)lua_tonumber(lua,-1));\n        break;\n    case LUA_TTABLE:\n        /* We need to check if it is an array, an error, or a status reply.\n         * Errors are returned as a single element table with an 'err' field.\n         * Status replies are returned as a single element table with an 'ok'\n         * field. */\n\n        /* Handle error reply. */\n        /* we took care of the stack size on function start */\n        lua_pushstring(lua,\"err\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TSTRING) {\n            lua_pop(lua, 1); /* pop the error message, we will use luaExtractErrorInformation to get error information */\n            errorInfo err_info = {0};\n            luaExtractErrorInformation(lua, &err_info);\n            addReplyErrorFormatEx(c,\n                                  err_info.ignore_err_stats_update? ERR_REPLY_FLAG_NO_STATS_UPDATE: 0,\n                                  \"-%s\",\n                                  err_info.msg);\n            luaErrorInformationDiscard(&err_info);\n            lua_pop(lua,1); /* pop the result table */\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle status reply. */\n        lua_pushstring(lua,\"ok\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TSTRING) {\n            sds ok = sdsnew(lua_tostring(lua,-1));\n            sdsmapchars(ok,\"\\r\\n\",\"  \",2);\n            addReplyStatusLength(c, ok, sdslen(ok));\n            sdsfree(ok);\n            lua_pop(lua,2);\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle double reply. 
*/\n        lua_pushstring(lua,\"double\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TNUMBER) {\n            addReplyDouble(c,lua_tonumber(lua,-1));\n            lua_pop(lua,2);\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle big number reply. */\n        lua_pushstring(lua,\"big_number\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TSTRING) {\n            sds big_num = sdsnewlen(lua_tostring(lua,-1), lua_strlen(lua,-1));\n            sdsmapchars(big_num,\"\\r\\n\",\"  \",2);\n            addReplyBigNum(c,big_num,sdslen(big_num));\n            sdsfree(big_num);\n            lua_pop(lua,2);\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle verbatim reply. */\n        lua_pushstring(lua,\"verbatim_string\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TTABLE) {\n            lua_pushstring(lua,\"format\");\n            lua_rawget(lua,-2);\n            t = lua_type(lua,-1);\n            if (t == LUA_TSTRING){\n                char* format = (char*)lua_tostring(lua,-1);\n                lua_pushstring(lua,\"string\");\n                lua_rawget(lua,-3);\n                t = lua_type(lua,-1);\n                if (t == LUA_TSTRING){\n                    size_t len;\n                    char* str = (char*)lua_tolstring(lua,-1,&len);\n                    addReplyVerbatim(c, str, len, format);\n                    lua_pop(lua,4);\n                    return;\n                }\n                lua_pop(lua,1);\n            }\n            lua_pop(lua,1);\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle map reply. 
*/\n        lua_pushstring(lua,\"map\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TTABLE) {\n            int maplen = 0;\n            void *replylen = addReplyDeferredLen(c);\n            /* we took care of the stack size on function start */\n            lua_pushnil(lua); /* Use nil to start iteration. */\n            while (lua_next(lua,-2)) {\n                /* Stack now: table, key, value */\n                lua_pushvalue(lua,-2);        /* Dup key before consuming. */\n                luaReplyToRedisReply(c, script_client, lua); /* Return key. */\n                luaReplyToRedisReply(c, script_client, lua); /* Return value. */\n                /* Stack now: table, key. */\n                maplen++;\n            }\n            setDeferredMapLen(c,replylen,maplen);\n            lua_pop(lua,2);\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle set reply. */\n        lua_pushstring(lua,\"set\");\n        lua_rawget(lua,-2);\n        t = lua_type(lua,-1);\n        if (t == LUA_TTABLE) {\n            int setlen = 0;\n            void *replylen = addReplyDeferredLen(c);\n            /* we took care of the stack size on function start */\n            lua_pushnil(lua); /* Use nil to start iteration. */\n            while (lua_next(lua,-2)) {\n                /* Stack now: table, key, true */\n                lua_pop(lua,1);               /* Discard the boolean value. */\n                lua_pushvalue(lua,-1);        /* Dup key before consuming. */\n                luaReplyToRedisReply(c, script_client, lua); /* Return key. */\n                /* Stack now: table, key. */\n                setlen++;\n            }\n            setDeferredSetLen(c,replylen,setlen);\n            lua_pop(lua,2);\n            return;\n        }\n        lua_pop(lua,1); /* Discard field name pushed before. */\n\n        /* Handle the array reply. 
*/\n        void *replylen = addReplyDeferredLen(c);\n        int j = 1, mbulklen = 0;\n        while(1) {\n            /* we took care of the stack size on function start */\n            lua_pushnumber(lua,j++);\n            lua_rawget(lua,-2);\n            t = lua_type(lua,-1);\n            if (t == LUA_TNIL) {\n                lua_pop(lua,1);\n                break;\n            }\n            luaReplyToRedisReply(c, script_client, lua);\n            mbulklen++;\n        }\n        setDeferredArrayLen(c,replylen,mbulklen);\n        break;\n    default:\n        addReplyNull(c);\n    }\n    lua_pop(lua,1);\n}\n\n/* ---------------------------------------------------------------------------\n * Lua redis.* functions implementations.\n * ------------------------------------------------------------------------- */\nvoid freeLuaRedisArgv(robj **argv, int argc, int argv_len);\n\n/* Cached argv array across calls. */\nstatic robj **lua_argv = NULL;\nstatic int lua_argv_size = 0;\n\n/* Cache of recently used small arguments to avoid malloc calls. 
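The caching scheme can be illustrated with a small self-contained sketch: one previously allocated buffer is kept per argument slot, and a new argument reuses it only when the cached allocation is large enough. The `slot_get`/`slot_put` helpers and the plain malloc'd buffers are hypothetical stand-ins for the robj/sds machinery; `SLOT_CACHE_SIZE` mirrors the role of LUA_CMD_OBJCACHE_SIZE.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SLOT_CACHE_SIZE 4       /* stands in for LUA_CMD_OBJCACHE_SIZE */
#define SLOT_CACHE_MAX_CAP 64   /* stands in for LUA_CMD_OBJCACHE_MAX_LEN */

/* One cached allocation per argument position. */
static char  *slot_buf[SLOT_CACHE_SIZE];
static size_t slot_cap[SLOT_CACHE_SIZE];

/* Obtain a NUL-terminated copy of s for argument position j, reusing the
 * slot's cached allocation when it is large enough. *cap is set to the
 * allocation size so it can be handed back to slot_put() later. */
static char *slot_get(int j, const char *s, size_t len, size_t *cap) {
    char *buf;
    if (j < SLOT_CACHE_SIZE && slot_buf[j] && slot_cap[j] >= len + 1) {
        buf = slot_buf[j];          /* cache hit: no malloc */
        *cap = slot_cap[j];
        slot_buf[j] = NULL;
    } else {
        buf = malloc(len + 1);      /* cache miss: fresh allocation */
        *cap = len + 1;
    }
    memcpy(buf, s, len);
    buf[len] = '\0';
    return buf;
}

/* Release a buffer: small ones go back into the slot cache for the next
 * call instead of being freed, analogous to what freeLuaRedisArgv() does
 * with small robjs it exclusively owns. */
static void slot_put(int j, char *buf, size_t cap) {
    if (j < SLOT_CACHE_SIZE && cap <= SLOT_CACHE_MAX_CAP) {
        free(slot_buf[j]);          /* evict whatever was cached before */
        slot_buf[j] = buf;
        slot_cap[j] = cap;
    } else {
        free(buf);
    }
}
```

A call pattern of `slot_get` → use → `slot_put` on the same slot index means consecutive script invocations with similarly sized arguments perform no allocations at all.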
*/\nstatic robj *lua_args_cached_objects[LUA_CMD_OBJCACHE_SIZE];\nstatic size_t lua_args_cached_objects_len[LUA_CMD_OBJCACHE_SIZE];\n\nstatic robj **luaArgsToRedisArgv(lua_State *lua, int *argc, int *argv_len) {\n    int j;\n    /* Require at least one argument */\n    *argc = lua_gettop(lua);\n    if (*argc == 0) {\n        luaPushError(lua, \"Please specify at least one argument for this redis lib call\");\n        return NULL;\n    }\n\n    /* Build the arguments vector (reuse a cached argv from last call) */\n    if (lua_argv_size < *argc) {\n        lua_argv = zrealloc(lua_argv,sizeof(robj*)* *argc);\n        lua_argv_size = *argc;\n    }\n    *argv_len = lua_argv_size;\n\n    for (j = 0; j < *argc; j++) {\n        char *obj_s;\n        size_t obj_len;\n        char dbuf[64];\n\n        if (lua_type(lua,j+1) == LUA_TNUMBER) {\n            /* We can't use lua_tolstring() for number -> string conversion\n             * since Lua uses a format specifier that loses precision. */\n            lua_Number num = lua_tonumber(lua,j+1);\n            /* Integer printing function is much faster, check if we can safely use it.\n             * Since lua_Number is not explicitly an integer or a double, we need to make an effort\n             * to convert it as an integer when that's possible, since the string could later be used\n             * in a context that doesn't support scientific notation (e.g. 1e9 instead of 1000000000). */\n            long long lvalue;\n            if (double2ll((double)num, &lvalue))\n                obj_len = ll2string(dbuf, sizeof(dbuf), lvalue);\n            else {\n                obj_len = fpconv_dtoa((double)num, dbuf);\n                dbuf[obj_len] = '\\0';\n            }\n            obj_s = dbuf;\n        } else {\n            obj_s = (char*)lua_tolstring(lua,j+1,&obj_len);\n            if (obj_s == NULL) break; /* Not a string. */\n        }\n        /* Try to use a cached object. 
*/\n        if (j < LUA_CMD_OBJCACHE_SIZE && lua_args_cached_objects[j] &&\n            lua_args_cached_objects_len[j] >= obj_len)\n        {\n            sds s = lua_args_cached_objects[j]->ptr;\n            lua_argv[j] = lua_args_cached_objects[j];\n            lua_args_cached_objects[j] = NULL;\n            memcpy(s,obj_s,obj_len+1);\n            sdssetlen(s, obj_len);\n        } else {\n            lua_argv[j] = createStringObject(obj_s, obj_len);\n        }\n    }\n\n    /* Pop all arguments from the stack, we do not need them anymore\n     * and this way we guarantee we will have room on the stack for the result. */\n    lua_pop(lua, *argc);\n\n    /* Check if one of the arguments passed by the Lua script\n     * is not a string or an integer (lua_isstring() returns true for\n     * integers as well). */\n    if (j != *argc) {\n        freeLuaRedisArgv(lua_argv, j, lua_argv_size);\n        luaPushError(lua, \"Lua redis lib command arguments must be strings or integers\");\n        return NULL;\n    }\n\n    return lua_argv;\n}\n\nvoid freeLuaRedisArgv(robj **argv, int argc, int argv_len) {\n    int j;\n    for (j = 0; j < argc; j++) {\n        robj *o = argv[j];\n\n        /* Try to cache the object in the lua_args_cached_objects array.\n         * The object must be small, SDS-encoded, and with refcount = 1\n         * (we must be the only owner) for us to cache it. 
*/\n        if (j < LUA_CMD_OBJCACHE_SIZE &&\n            o->refcount == 1 &&\n            (o->encoding == OBJ_ENCODING_RAW ||\n             o->encoding == OBJ_ENCODING_EMBSTR) &&\n            sdslen(o->ptr) <= LUA_CMD_OBJCACHE_MAX_LEN)\n        {\n            sds s = o->ptr;\n            if (lua_args_cached_objects[j]) decrRefCount(lua_args_cached_objects[j]);\n            lua_args_cached_objects[j] = o;\n            lua_args_cached_objects_len[j] = sdsalloc(s);\n        } else {\n            decrRefCount(o);\n        }\n    }\n    if (argv != lua_argv || argv_len != lua_argv_size) {\n        /* The command changed argv, scrap the cache and start over. */\n        zfree(argv);\n        lua_argv = NULL;\n        lua_argv_size = 0;\n    }\n}\n\nstatic int luaRedisGenericCommand(lua_State *lua, int raise_error) {\n    int j;\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n    sds err = NULL;\n    client* c = rctx->c;\n    sds reply;\n\n    c->argv = luaArgsToRedisArgv(lua, &c->argc, &c->argv_len);\n    if (c->argv == NULL) {\n        return raise_error ? luaError(lua) : 1;\n    }\n\n    static int inuse = 0;   /* Recursive calls detection. */\n\n    /* By using Lua debug hooks it is possible to trigger a recursive call\n     * to luaRedisGenericCommand(), which normally should never happen.\n     * To make this function reentrant is futile and makes it slower, but\n     * we should at least detect such a misuse, and abort. */\n    if (inuse) {\n        char *recursion_warning =\n                \"luaRedisGenericCommand() recursive call detected. \"\n                \"Are you doing funny stuff with Lua debug hooks?\";\n        serverLog(LL_WARNING,\"%s\",recursion_warning);\n        luaPushError(lua,recursion_warning);\n        return 1;\n    }\n    inuse++;\n\n    /* Log the command if debugging is active. 
*/\n    if (ldbIsEnabled()) {\n        sds cmdlog = sdsnew(\"<redis>\");\n        for (j = 0; j < c->argc; j++) {\n            if (j == 10) {\n                cmdlog = sdscatprintf(cmdlog,\" ... (%d more)\",\n                    c->argc-j-1);\n                break;\n            } else {\n                cmdlog = sdscatlen(cmdlog,\" \",1);\n                cmdlog = sdscatsds(cmdlog,c->argv[j]->ptr);\n            }\n        }\n        ldbLog(cmdlog);\n    }\n\n    scriptCall(rctx, &err);\n    if (err) {\n        luaPushError(lua, err);\n        sdsfree(err);\n        /* Push a field indicating that the error stats should not be updated for\n         * this error, because they were already updated when executing the command. */\n        lua_pushstring(lua,\"ignore_error_stats_update\");\n        lua_pushboolean(lua, 1);\n        lua_settable(lua,-3);\n        goto cleanup;\n    }\n\n    /* Convert the result of the Redis command into a suitable Lua type.\n     * The first thing we need is to create a single string from the client\n     * output buffers. */\n    if (listLength(c->reply) == 0 && (size_t)c->bufpos < c->buf_usable_size) {\n        /* This is a fast path for the common case of a reply inside the\n         * client static buffer. Don't create an SDS string but just use\n         * the client buffer directly. */\n        c->buf[c->bufpos] = '\\0';\n        reply = c->buf;\n        c->bufpos = 0;\n    } else {\n        reply = sdsnewlen(c->buf,c->bufpos);\n        c->bufpos = 0;\n        while(listLength(c->reply)) {\n            clientReplyBlock *o = listNodeValue(listFirst(c->reply));\n\n            reply = sdscatlen(reply,o->buf,o->used);\n            listDelNode(c->reply,listFirst(c->reply));\n        }\n    }\n    if (raise_error && reply[0] != '-') raise_error = 0;\n    redisProtocolToLuaType(lua,reply);\n\n    /* If the debugger is active, log the reply from Redis. 
*/\n    if (ldbIsEnabled())\n        ldbLogRedisReply(reply);\n\n    if (reply != c->buf) sdsfree(reply);\n    c->reply_bytes = 0;\n\ncleanup:\n    /* Clean up. Command code may have changed argv/argc so we use the\n     * argv/argc of the client instead of the local variables. */\n    freeLuaRedisArgv(c->argv, c->argc, c->argv_len);\n    c->argc = c->argv_len = 0;\n    c->user = NULL;\n    c->argv = NULL;\n    c->all_argv_len_sum = 0;\n    resetClient(c, 1);\n    inuse--;\n\n    if (raise_error) {\n        /* If we are here we should have an error on the stack, in the\n         * form of a table with an \"err\" field. Extract the string to\n         * return the plain error. */\n        return luaError(lua);\n    }\n    return 1;\n}\n\n/* Our implementation of lua pcall.\n * We need this implementation for backward\n * compatibility with older Redis versions.\n *\n * On Redis 7, the error object is a table,\n * compared to older versions where the error\n * object is a string. To keep backward\n * compatibility we catch the table object\n * and just return the error message. 
*/\nstatic int luaRedisPcall(lua_State *lua) {\n    int argc = lua_gettop(lua);\n    lua_pushboolean(lua, 1); /* result placeholder */\n    lua_insert(lua, 1);\n    if (lua_pcall(lua, argc - 1, LUA_MULTRET, 0)) {\n        /* Error */\n        lua_remove(lua, 1); /* remove the result placeholder, now we have room for at least one element */\n        if (lua_istable(lua, -1)) {\n            lua_getfield(lua, -1, \"err\");\n            if (lua_isstring(lua, -1)) {\n                lua_replace(lua, -2); /* replace the table with the error message */\n            }\n        }\n        lua_pushboolean(lua, 0); /* push result */\n        lua_insert(lua, 1);\n    }\n    return lua_gettop(lua);\n}\n\n/* redis.call() */\nstatic int luaRedisCallCommand(lua_State *lua) {\n    return luaRedisGenericCommand(lua,1);\n}\n\n/* redis.pcall() */\nstatic int luaRedisPCallCommand(lua_State *lua) {\n    return luaRedisGenericCommand(lua,0);\n}\n\n/* This adds redis.sha1hex(string) to Lua scripts using the same hashing\n * function used for sha1ing lua scripts. */\nstatic int luaRedisSha1hexCommand(lua_State *lua) {\n    int argc = lua_gettop(lua);\n    char digest[41];\n    size_t len;\n    char *s;\n\n    if (argc != 1) {\n        luaPushError(lua, \"wrong number of arguments\");\n        return luaError(lua);\n    }\n\n    s = (char*)lua_tolstring(lua,1,&len);\n    sha1hex(digest,s,len);\n    lua_pushstring(lua,digest);\n    return 1;\n}\n\n/* Returns a table with a single field 'field' set to the string value\n * passed as argument. 
This helper function is handy when returning\n * a Redis Protocol error or status reply from Lua:\n *\n * return redis.error_reply(\"ERR Some Error\")\n * return redis.status_reply(\"ERR Some Error\")\n */\nstatic int luaRedisReturnSingleFieldTable(lua_State *lua, char *field) {\n    if (lua_gettop(lua) != 1 || lua_type(lua,-1) != LUA_TSTRING) {\n        luaPushError(lua, \"wrong number or type of arguments\");\n        return 1;\n    }\n\n    lua_newtable(lua);\n    lua_pushstring(lua, field);\n    lua_pushvalue(lua, -3);\n    lua_settable(lua, -3);\n    return 1;\n}\n\n/* redis.error_reply() */\nstatic int luaRedisErrorReplyCommand(lua_State *lua) {\n    if (lua_gettop(lua) != 1 || lua_type(lua,-1) != LUA_TSTRING) {\n        luaPushError(lua, \"wrong number or type of arguments\");\n        return 1;\n    }\n\n    /* add '-' if not exists */\n    const char *err = lua_tostring(lua, -1);\n    sds err_buff = NULL;\n    if (err[0] != '-') {\n        err_buff = sdscatfmt(sdsempty(), \"-%s\", err);\n    } else {\n        err_buff = sdsnew(err);\n    }\n    luaPushErrorBuff(lua, err_buff);\n    return 1;\n}\n\n/* redis.status_reply() */\nstatic int luaRedisStatusReplyCommand(lua_State *lua) {\n    return luaRedisReturnSingleFieldTable(lua,\"ok\");\n}\n\n/* redis.set_repl()\n *\n * Set the propagation of write commands executed in the context of the\n * script to on/off for AOF and slaves. */\nstatic int luaRedisSetReplCommand(lua_State *lua) {\n    int flags, argc = lua_gettop(lua);\n\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n\n    if (argc != 1) {\n        luaPushError(lua, \"redis.set_repl() requires one argument.\");\n         return luaError(lua);\n    }\n\n    flags = lua_tonumber(lua,-1);\n    if ((flags & ~(PROPAGATE_AOF|PROPAGATE_REPL)) != 0) {\n        luaPushError(lua, \"Invalid replication flags. 
Use REPL_AOF, REPL_REPLICA, REPL_ALL or REPL_NONE.\");\n        return luaError(lua);\n    }\n\n    scriptSetRepl(rctx, flags);\n    return 0;\n}\n\n/* redis.acl_check_cmd()\n *\n * Checks ACL permissions for given command for the current user. */\nstatic int luaRedisAclCheckCmdPermissionsCommand(lua_State *lua) {\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n    int raise_error = 0;\n\n    int argc, argv_len;\n    robj **argv = luaArgsToRedisArgv(lua, &argc, &argv_len);\n\n    /* Require at least one argument */\n    if (argv == NULL) return luaError(lua);\n\n    /* Find command */\n    struct redisCommand *cmd;\n    if ((cmd = lookupCommand(argv, argc)) == NULL) {\n        luaPushError(lua, \"Invalid command passed to redis.acl_check_cmd()\");\n        raise_error = 1;\n    } else if (!commandCheckArity(cmd, argc, NULL)) {\n        luaPushError(lua, \"Wrong number of args for redis.acl_check_cmd()\");\n        raise_error = 1;\n    } else {\n        int keyidxptr;\n        if (ACLCheckAllUserCommandPerm(rctx->original_client->user, cmd, argv, argc, NULL, &keyidxptr) != ACL_OK) {\n            lua_pushboolean(lua, 0);\n        } else {\n            lua_pushboolean(lua, 1);\n        }\n    }\n\n    freeLuaRedisArgv(argv, argc, argv_len);\n    if (raise_error)\n        return luaError(lua);\n    else\n        return 1;\n}\n\n\n/* redis.log() */\nstatic int luaLogCommand(lua_State *lua) {\n    int j, argc = lua_gettop(lua);\n    int level;\n    sds log;\n\n    if (argc < 2) {\n        luaPushError(lua, \"redis.log() requires two arguments or more.\");\n        return luaError(lua);\n    } else if (!lua_isnumber(lua,-argc)) {\n        luaPushError(lua, \"First argument must be a number (log level).\");\n        return luaError(lua);\n    }\n    level = lua_tonumber(lua,-argc);\n    if (level < LL_DEBUG || level > LL_WARNING) {\n        luaPushError(lua, \"Invalid log 
level.\");\n        return luaError(lua);\n    }\n    if (level < server.verbosity) return 0;\n\n    /* Glue together all the arguments */\n    log = sdsempty();\n    for (j = 1; j < argc; j++) {\n        size_t len;\n        char *s;\n\n        s = (char*)lua_tolstring(lua,(-argc)+j,&len);\n        if (s) {\n            if (j != 1) log = sdscatlen(log,\" \",1);\n            log = sdscatlen(log,s,len);\n        }\n    }\n    serverLogRaw(level,log);\n    sdsfree(log);\n    return 0;\n}\n\n/* redis.setresp() */\nstatic int luaSetResp(lua_State *lua) {\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n    int argc = lua_gettop(lua);\n\n    if (argc != 1) {\n        luaPushError(lua, \"redis.setresp() requires one argument.\");\n        return luaError(lua);\n    }\n\n    int resp = lua_tonumber(lua,-argc);\n    if (resp != 2 && resp != 3) {\n        luaPushError(lua, \"RESP version must be 2 or 3.\");\n        return luaError(lua);\n    }\n    scriptSetResp(rctx, resp);\n    return 0;\n}\n\n/* ---------------------------------------------------------------------------\n * Lua engine initialization and reset.\n * ------------------------------------------------------------------------- */\n\nstatic void luaLoadLib(lua_State *lua, const char *libname, lua_CFunction luafunc) {\n  lua_pushcfunction(lua, luafunc);\n  lua_pushstring(lua, libname);\n  lua_call(lua, 1, 0);\n}\n\nLUALIB_API int (luaopen_cjson) (lua_State *L);\nLUALIB_API int (luaopen_struct) (lua_State *L);\nLUALIB_API int (luaopen_cmsgpack) (lua_State *L);\nLUALIB_API int (luaopen_bit) (lua_State *L);\n\nstatic void luaLoadLibraries(lua_State *lua) {\n    luaLoadLib(lua, \"\", luaopen_base);\n    luaLoadLib(lua, LUA_TABLIBNAME, luaopen_table);\n    luaLoadLib(lua, LUA_STRLIBNAME, luaopen_string);\n    luaLoadLib(lua, LUA_MATHLIBNAME, luaopen_math);\n    luaLoadLib(lua, LUA_DBLIBNAME, luaopen_debug);\n    
luaLoadLib(lua, LUA_OSLIBNAME, luaopen_os);\n    luaLoadLib(lua, \"cjson\", luaopen_cjson);\n    luaLoadLib(lua, \"struct\", luaopen_struct);\n    luaLoadLib(lua, \"cmsgpack\", luaopen_cmsgpack);\n    luaLoadLib(lua, \"bit\", luaopen_bit);\n\n#if 0 /* Stuff that we don't load currently, for sandboxing concerns. */\n    luaLoadLib(lua, LUA_LOADLIBNAME, luaopen_package);\n#endif\n}\n\n/* Return sds of the string value located on the stack at the given index.\n * Return NULL if the value is not a string. */\nsds luaGetStringSds(lua_State *lua, int index) {\n    if (!lua_isstring(lua, index)) {\n        return NULL;\n    }\n\n    size_t len;\n    const char *str = lua_tolstring(lua, index, &len);\n    sds str_sds = sdsnewlen(str, len);\n    return str_sds;\n}\n\nstatic int luaProtectedTableError(lua_State *lua) {\n    int argc = lua_gettop(lua);\n    if (argc != 2) {\n        serverLog(LL_WARNING, \"malicious code trying to call luaProtectedTableError with wrong arguments\");\n        luaL_error(lua, \"Wrong number of arguments to luaProtectedTableError\");\n    }\n    if (!lua_isstring(lua, -1) && !lua_isnumber(lua, -1)) {\n        luaL_error(lua, \"Second argument to luaProtectedTableError must be a string or number\");\n    }\n    const char *variable_name = lua_tostring(lua, -1);\n    luaL_error(lua, \"Script attempted to access nonexistent global variable '%s'\", variable_name);\n    return 0;\n}\n\n/* Set a special metatable on the table on the top of the stack.\n * The metatable will raise an error if the user tries to fetch\n * a nonexistent value.\n *\n * The function assumes the Lua stack has at least enough\n * space to push 2 elements; it's up to the caller to verify\n * this before calling this function. 
*/\nvoid luaSetErrorMetatable(lua_State *lua) {\n    lua_newtable(lua); /* push metatable */\n    lua_pushcfunction(lua, luaProtectedTableError); /* push the __index error handler */\n    lua_setfield(lua, -2, \"__index\");\n    lua_setmetatable(lua, -2);\n}\n\nstatic int luaNewIndexAllowList(lua_State *lua) {\n    int argc = lua_gettop(lua);\n    if (argc != 3) {\n        serverLog(LL_WARNING, \"malicious code trying to call luaNewIndexAllowList with wrong arguments\");\n        luaL_error(lua, \"Wrong number of arguments to luaNewIndexAllowList\");\n    }\n    if (!lua_istable(lua, -3)) {\n        luaL_error(lua, \"First argument to luaNewIndexAllowList must be a table\");\n    }\n    if (!lua_isstring(lua, -2) && !lua_isnumber(lua, -2)) {\n        luaL_error(lua, \"Second argument to luaNewIndexAllowList must be a string or number\");\n    }\n    const char *variable_name = lua_tostring(lua, -2);\n    /* check if the key is in our allow list */\n\n    char ***allow_l = allow_lists;\n    for (; *allow_l ; ++allow_l){\n        char **c = *allow_l;\n        for (; *c ; ++c) {\n            if (strcmp(*c, variable_name) == 0) {\n                break;\n            }\n        }\n        if (*c) {\n            break;\n        }\n    }\n\n    int allowed = (*allow_l != NULL);\n    /* If not explicitly allowed, check if it's a deprecated function. If so,\n     * allow it only if 'lua_enable_deprecated_api' config is enabled. */\n    int deprecated = 0;\n    if (!allowed) {\n        char **c = lua_builtins_deprecated;\n        for (; *c; ++c) {\n            if (strcmp(*c, variable_name) == 0) {\n                deprecated = 1;\n                allowed = server.lua_enable_deprecated_api ? 1 : 0;\n                break;\n            }\n        }\n    }\n    if (!allowed) {\n        /* Search for the value in the deny list; if it's there we know that it was removed\n         * on purpose and there is no need to print a warning. 
*/\n        char **c = deny_list;\n        for ( ; *c ; ++c) {\n            if (strcmp(*c, variable_name) == 0) {\n                break;\n            }\n        }\n        if (!*c && !deprecated) {\n            serverLog(LL_WARNING, \"A key '%s' was added to Lua globals which is not on the globals allow list nor listed on the deny list.\", redactLogCstr(variable_name));\n        }\n    } else {\n        lua_rawset(lua, -3);\n    }\n    return 0;\n}\n\n/* Set a metatable with a '__newindex' function that verifies that\n * the new index appears on our globals allow list.\n *\n * The metatable is set on the table which is located on the top\n * of the stack.\n */\nvoid luaSetAllowListProtection(lua_State *lua) {\n    lua_newtable(lua); /* push metatable */\n    lua_pushcfunction(lua, luaNewIndexAllowList); /* push the __newindex handler */\n    lua_setfield(lua, -2, \"__newindex\");\n    lua_setmetatable(lua, -2);\n}\n\n/* Set the readonly flag on the table located on the top of the stack\n * and recursively call this function on each table located on the original\n * table.  Also, recursively call this function on the metatables.*/\nvoid luaSetTableProtectionRecursively(lua_State *lua) {\n    /* This protects us from loops in case we already visited the table.\n     * For example, globals has a '_G' key which points back to globals. */\n    if (lua_isreadonlytable(lua, -1)) {\n        return;\n    }\n\n    /* protect the current table */\n    lua_enablereadonlytable(lua, -1, 1);\n\n    lua_checkstack(lua, 2);\n    lua_pushnil(lua); /* Use nil to start iteration. 
*/\n    while (lua_next(lua,-2)) {\n        /* Stack now: table, key, value */\n        if (lua_istable(lua, -1)) {\n            luaSetTableProtectionRecursively(lua);\n        }\n        lua_pop(lua, 1);\n    }\n\n    /* protect the metatable if exists */\n    if (lua_getmetatable(lua, -1)) {\n        luaSetTableProtectionRecursively(lua);\n        lua_pop(lua, 1); /* pop the metatable */\n    }\n}\n\n/* Set the readonly flag on the metatable of basic types (string, nil etc.) */\nvoid luaSetTableProtectionForBasicTypes(lua_State *lua) {\n    static const int types[] = {\n        LUA_TSTRING,\n        LUA_TNUMBER,\n        LUA_TBOOLEAN,\n        LUA_TNIL,\n        LUA_TFUNCTION,\n        LUA_TTHREAD,\n        LUA_TLIGHTUSERDATA\n    };\n\n    for (size_t i = 0; i < sizeof(types) / sizeof(types[0]); i++) {\n        /* Push a dummy value of the type to get its metatable */\n        switch (types[i]) {\n            case LUA_TSTRING: lua_pushstring(lua, \"\"); break;\n            case LUA_TNUMBER: lua_pushnumber(lua, 0); break;\n            case LUA_TBOOLEAN: lua_pushboolean(lua, 0); break;\n            case LUA_TNIL: lua_pushnil(lua); break;\n            case LUA_TFUNCTION: lua_pushcfunction(lua, NULL); break;\n            case LUA_TTHREAD: lua_newthread(lua); break;\n            case LUA_TLIGHTUSERDATA: lua_pushlightuserdata(lua, (void*)lua); break;\n        }\n        if (lua_getmetatable(lua, -1)) {\n            luaSetTableProtectionRecursively(lua);\n            lua_pop(lua, 1); /* pop metatable */\n        }\n        lua_pop(lua, 1); /* pop dummy value */\n    }\n}\n\nvoid luaRegisterVersion(lua_State* lua) {\n    lua_pushstring(lua,\"REDIS_VERSION_NUM\");\n    lua_pushnumber(lua,REDIS_VERSION_NUM);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REDIS_VERSION\");\n    lua_pushstring(lua,REDIS_VERSION);\n    lua_settable(lua,-3);\n}\n\nvoid luaRegisterLogFunction(lua_State* lua) {\n    /* redis.log and log levels. 
*/\n    lua_pushstring(lua,\"log\");\n    lua_pushcfunction(lua,luaLogCommand);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"LOG_DEBUG\");\n    lua_pushnumber(lua,LL_DEBUG);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"LOG_VERBOSE\");\n    lua_pushnumber(lua,LL_VERBOSE);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"LOG_NOTICE\");\n    lua_pushnumber(lua,LL_NOTICE);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"LOG_WARNING\");\n    lua_pushnumber(lua,LL_WARNING);\n    lua_settable(lua,-3);\n}\n\nvoid luaRegisterRedisAPI(lua_State* lua) {\n    lua_pushvalue(lua, LUA_GLOBALSINDEX);\n    luaSetAllowListProtection(lua);\n    lua_pop(lua, 1);\n\n    luaLoadLibraries(lua);\n\n    lua_pushcfunction(lua,luaRedisPcall);\n    lua_setglobal(lua, \"pcall\");\n\n    /* Register the redis commands table and fields */\n    lua_newtable(lua);\n\n    /* redis.call */\n    lua_pushstring(lua,\"call\");\n    lua_pushcfunction(lua,luaRedisCallCommand);\n    lua_settable(lua,-3);\n\n    /* redis.pcall */\n    lua_pushstring(lua,\"pcall\");\n    lua_pushcfunction(lua,luaRedisPCallCommand);\n    lua_settable(lua,-3);\n\n    luaRegisterLogFunction(lua);\n\n    luaRegisterVersion(lua);\n\n    /* redis.setresp */\n    lua_pushstring(lua,\"setresp\");\n    lua_pushcfunction(lua,luaSetResp);\n    lua_settable(lua,-3);\n\n    /* redis.sha1hex */\n    lua_pushstring(lua, \"sha1hex\");\n    lua_pushcfunction(lua, luaRedisSha1hexCommand);\n    lua_settable(lua, -3);\n\n    /* redis.error_reply and redis.status_reply */\n    lua_pushstring(lua, \"error_reply\");\n    lua_pushcfunction(lua, luaRedisErrorReplyCommand);\n    lua_settable(lua, -3);\n    lua_pushstring(lua, \"status_reply\");\n    lua_pushcfunction(lua, luaRedisStatusReplyCommand);\n    lua_settable(lua, -3);\n\n    /* redis.set_repl and associated flags. 
*/\n    lua_pushstring(lua,\"set_repl\");\n    lua_pushcfunction(lua,luaRedisSetReplCommand);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REPL_NONE\");\n    lua_pushnumber(lua,PROPAGATE_NONE);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REPL_AOF\");\n    lua_pushnumber(lua,PROPAGATE_AOF);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REPL_SLAVE\");\n    lua_pushnumber(lua,PROPAGATE_REPL);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REPL_REPLICA\");\n    lua_pushnumber(lua,PROPAGATE_REPL);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"REPL_ALL\");\n    lua_pushnumber(lua,PROPAGATE_AOF|PROPAGATE_REPL);\n    lua_settable(lua,-3);\n\n    /* redis.acl_check_cmd */\n    lua_pushstring(lua,\"acl_check_cmd\");\n    lua_pushcfunction(lua,luaRedisAclCheckCmdPermissionsCommand);\n    lua_settable(lua,-3);\n\n    /* Finally set the table as 'redis' global var. */\n    lua_setglobal(lua,REDIS_API_NAME);\n\n    /* Replace math.random and math.randomseed with our implementations. */\n    lua_getglobal(lua,\"math\");\n\n    lua_pushstring(lua,\"random\");\n    lua_pushcfunction(lua,redis_math_random);\n    lua_settable(lua,-3);\n\n    lua_pushstring(lua,\"randomseed\");\n    lua_pushcfunction(lua,redis_math_randomseed);\n    lua_settable(lua,-3);\n\n    lua_setglobal(lua,\"math\");\n}\n\n/* Set an array of Redis String Objects as a Lua array (table) stored into a\n * global variable. 
*/\nstatic void luaCreateArray(lua_State *lua, robj **elev, int elec) {\n    int j;\n\n    lua_newtable(lua);\n    for (j = 0; j < elec; j++) {\n        lua_pushlstring(lua,(char*)elev[j]->ptr,sdslen(elev[j]->ptr));\n        lua_rawseti(lua,-2,j+1);\n    }\n}\n\n/* ---------------------------------------------------------------------------\n * Redis provided math.random\n * ------------------------------------------------------------------------- */\n\n/* We replace math.random() with our implementation that is not affected\n * by specific libc random() implementations and will output the same sequence\n * (for the same seed) in every arch. */\n\n/* The following implementation is the one shipped with Lua itself but with\n * rand() replaced by redisLrand48(). */\nstatic int redis_math_random (lua_State *L) {\n  /* the `%' avoids the (rare) case of r==1, and is needed also because on\n     some systems (SunOS!) `rand()' may return a value larger than RAND_MAX */\n  lua_Number r = (lua_Number)(redisLrand48()%REDIS_LRAND48_MAX) /\n                                (lua_Number)REDIS_LRAND48_MAX;\n  switch (lua_gettop(L)) {  /* check number of arguments */\n    case 0: {  /* no arguments */\n      lua_pushnumber(L, r);  /* Number between 0 and 1 */\n      break;\n    }\n    case 1: {  /* only upper limit */\n      int u = luaL_checkint(L, 1);\n      luaL_argcheck(L, 1<=u, 1, \"interval is empty\");\n      lua_pushnumber(L, floor(r*u)+1);  /* int between 1 and `u' */\n      break;\n    }\n    case 2: {  /* lower and upper limits */\n      int l = luaL_checkint(L, 1);\n      int u = luaL_checkint(L, 2);\n      luaL_argcheck(L, l<=u, 2, \"interval is empty\");\n      lua_pushnumber(L, floor(r*(u-l+1))+l);  /* int between `l' and `u' */\n      break;\n    }\n    default: return luaL_error(L, \"wrong number of arguments\");\n  }\n  return 1;\n}\n\nstatic int redis_math_randomseed (lua_State *L) {\n  redisSrand48(luaL_checkint(L, 1));\n  return 0;\n}\n\n/* This is the Lua 
script \"count\" hook that we use to detect script timeouts. */\nstatic void luaMaskCountHook(lua_State *lua, lua_Debug *ar) {\n    UNUSED(ar);\n    scriptRunCtx* rctx = luaGetFromRegistry(lua, REGISTRY_RUN_CTX_NAME);\n    serverAssert(rctx); /* Only supported inside script invocation */\n    if (scriptInterrupt(rctx) == SCRIPT_KILL) {\n        serverLog(LL_NOTICE,\"Lua script killed by user with SCRIPT KILL.\");\n\n        /*\n         * Set the hook to invoke all the time so the user\n         * will not be able to catch the error with pcall and invoke\n         * pcall again, which would prevent the script from ever being killed\n         */\n        lua_sethook(lua, luaMaskCountHook, LUA_MASKLINE, 0);\n\n        luaPushError(lua,\"Script killed by user with SCRIPT KILL...\");\n        luaError(lua);\n    }\n}\n\nvoid luaErrorInformationDiscard(errorInfo *err_info) {\n    if (err_info->msg) sdsfree(err_info->msg);\n    if (err_info->source) sdsfree(err_info->source);\n    if (err_info->line) sdsfree(err_info->line);\n}\n\nvoid luaExtractErrorInformation(lua_State *lua, errorInfo *err_info) {\n    if (lua_isstring(lua, -1)) {\n        err_info->msg = sdscatfmt(sdsempty(), \"ERR %s\", lua_tostring(lua, -1));\n        err_info->line = NULL;\n        err_info->source = NULL;\n        err_info->ignore_err_stats_update = 0;\n        return;\n    }\n\n    lua_getfield(lua, -1, \"err\");\n    if (lua_isstring(lua, -1)) {\n        err_info->msg = sdsnew(lua_tostring(lua, -1));\n    } else {\n        /* Ensure we never return a NULL msg. 
*/\n        err_info->msg = sdsnew(\"ERR unknown error\");\n    }\n    lua_pop(lua, 1);\n\n    lua_getfield(lua, -1, \"source\");\n    if (lua_isstring(lua, -1)) {\n        err_info->source = sdsnew(lua_tostring(lua, -1));\n    }\n    lua_pop(lua, 1);\n\n    lua_getfield(lua, -1, \"line\");\n    if (lua_isstring(lua, -1)) {\n        err_info->line = sdsnew(lua_tostring(lua, -1));\n    }\n    lua_pop(lua, 1);\n\n    lua_getfield(lua, -1, \"ignore_error_stats_update\");\n    if (lua_isboolean(lua, -1)) {\n        err_info->ignore_err_stats_update = lua_toboolean(lua, -1);\n    }\n    lua_pop(lua, 1);\n}\n\nvoid luaCallFunction(scriptRunCtx* run_ctx, lua_State *lua, robj** keys, size_t nkeys, robj** args, size_t nargs, int debug_enabled) {\n    client* c = run_ctx->original_client;\n    int delhook = 0;\n\n    /* We must set it before we set the Lua hook; theoretically the\n     * Lua hook might be called whenever we run any Lua instruction\n     * such as 'luaSetGlobalArray' and we want the run_ctx to be available\n     * each time the Lua hook is invoked. */\n    luaSaveOnRegistry(lua, REGISTRY_RUN_CTX_NAME, run_ctx);\n\n    if (server.busy_reply_threshold > 0 && !debug_enabled) {\n        lua_sethook(lua,luaMaskCountHook,LUA_MASKCOUNT,100000);\n        delhook = 1;\n    } else if (debug_enabled) {\n        lua_sethook(lua,luaLdbLineHook,LUA_MASKLINE|LUA_MASKCOUNT,100000);\n        delhook = 1;\n    }\n\n    /* Populate the argv and keys tables according to the arguments that\n     * EVAL received. */\n    luaCreateArray(lua,keys,nkeys);\n    /* On eval, keys and arguments are globals. 
*/\n    if (run_ctx->flags & SCRIPT_EVAL_MODE){\n        /* open global protection to set KEYS */\n        lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 0);\n        lua_setglobal(lua,\"KEYS\");\n        lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 1);\n    }\n    luaCreateArray(lua,args,nargs);\n    if (run_ctx->flags & SCRIPT_EVAL_MODE){\n        /* open global protection to set ARGV */\n        lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 0);\n        lua_setglobal(lua,\"ARGV\");\n        lua_enablereadonlytable(lua, LUA_GLOBALSINDEX, 1);\n    }\n\n    /* At this point, whether this script was never seen before or was\n     * already defined, we can call it.\n     * In eval mode, we pass zero arguments and expect a single return value;\n     * in addition the error handler is located at position -2 on the Lua stack.\n     * In function mode, we pass 2 arguments (the keys and args tables),\n     * and the error handler is located at position -4 (stack: error_handler, callback, keys, args) */\n    int err;\n    if (run_ctx->flags & SCRIPT_EVAL_MODE) {\n        err = lua_pcall(lua,0,1,-2);\n    } else {\n        err = lua_pcall(lua,2,1,-4);\n    }\n\n    if (err) {\n        /* Error object is a table of the following format:\n         * {err='<error msg>', source='<source file>', line=<line>}\n         * We can construct the error message from this information */\n        if (!lua_istable(lua, -1)) {\n            const char *msg = \"execution failure\";\n            if (lua_isstring(lua, -1)) {\n                msg = lua_tostring(lua, -1);\n            }\n            addReplyErrorFormat(c,\"Error running script %s, %.100s\\n\", run_ctx->funcname, msg);\n        } else {\n            errorInfo err_info = {0};\n            sds final_msg = sdsempty();\n            luaExtractErrorInformation(lua, &err_info);\n            final_msg = sdscatfmt(final_msg, \"-%s\",\n                                  err_info.msg);\n            if (err_info.line && 
err_info.source) {\n                final_msg = sdscatfmt(final_msg, \" script: %s, on %s:%s.\",\n                                      run_ctx->funcname,\n                                      err_info.source,\n                                      err_info.line);\n            }\n            addReplyErrorSdsEx(c, final_msg, err_info.ignore_err_stats_update? ERR_REPLY_FLAG_NO_STATS_UPDATE : 0);\n            luaErrorInformationDiscard(&err_info);\n        }\n        lua_pop(lua,1); /* Consume the Lua error */\n    } else {\n        /* On success convert the Lua return value into Redis protocol, and\n         * send it to the client. */\n        luaReplyToRedisReply(c, run_ctx->c, lua); /* Convert and consume the reply. */\n    }\n\n    /* Perform some cleanup that we need to do both on error and success. */\n    if (delhook) lua_sethook(lua,NULL,0,0); /* Disable hook */\n\n    /* Remove run_ctx from the registry; it's only applicable to the current script. */\n    luaSaveOnRegistry(lua, REGISTRY_RUN_CTX_NAME, NULL);\n}\n\nunsigned long luaMemory(lua_State *lua) {\n    return lua_gc(lua, LUA_GCCOUNT, 0) * 1024LL;\n}\n\n/* Call the Lua garbage collector from time to time to avoid a\n * full cycle performed by Lua, which adds too much latency.\n *\n * The call is performed every LUA_GC_CYCLE_PERIOD executed commands\n * (and for LUA_GC_CYCLE_PERIOD collection steps) because calling it\n * for every command uses too much CPU.\n * \n * Each script VM / State (Eval and Functions) maintains its own unique `gc_count`\n * to control GC independently. */\n#define LUA_GC_CYCLE_PERIOD 50\nvoid luaGC(lua_State *lua, int *gc_count) {\n    (*gc_count)++;\n    if (*gc_count >= LUA_GC_CYCLE_PERIOD) {\n        lua_gc(lua, LUA_GCSTEP, LUA_GC_CYCLE_PERIOD);\n        *gc_count = 0;\n    }\n}\n"
  },
  {
    "path": "src/script_lua.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SCRIPT_LUA_H_\n#define __SCRIPT_LUA_H_\n\n/*\n * script_lua.c unit provides shared functionality between\n * eval.c and function_lua.c. Functionality provided:\n *\n * * Execute Lua code, assuming that the code is located on\n *   the top of the Lua stack. In addition, parse the execution\n *   result, convert it to RESP and reply to the client.\n *\n * * Run Redis commands from within the Lua code (including\n *   parsing the reply and creating a Lua object out of it).\n *\n * * Register the Redis API to the Lua interpreter. Only shared\n *   APIs are registered (APIs that are only relevant to eval.c,\n *   like debugging, are registered in eval.c).\n *\n * Uses script.c for interaction back with Redis.\n */\n\n#include \"server.h\"\n#include \"script.h\"\n#include <lua.h>\n#include <lauxlib.h>\n#include <lualib.h>\n\n#define REGISTRY_RUN_CTX_NAME \"__RUN_CTX__\"\n#define REGISTRY_SET_GLOBALS_PROTECTION_NAME \"__GLOBAL_PROTECTION__\"\n#define REDIS_API_NAME \"redis\"\n\ntypedef struct errorInfo {\n    sds msg;\n    sds source;\n    sds line;\n    int ignore_err_stats_update;\n}errorInfo;\n\nvoid luaRegisterRedisAPI(lua_State* lua);\nsds luaGetStringSds(lua_State *lua, int index);\nvoid luaRegisterGlobalProtectionFunction(lua_State *lua);\nvoid luaSetErrorMetatable(lua_State *lua);\nvoid luaSetAllowListProtection(lua_State *lua);\nvoid luaSetTableProtectionRecursively(lua_State *lua);\nvoid luaSetTableProtectionForBasicTypes(lua_State *lua);\nvoid luaRegisterLogFunction(lua_State* lua);\nvoid luaRegisterVersion(lua_State* lua);\nvoid luaPushErrorBuff(lua_State *lua, sds err_buff);\nvoid luaPushError(lua_State *lua, const char *error);\nint luaError(lua_State 
*lua);\nvoid luaSaveOnRegistry(lua_State* lua, const char* name, void* ptr);\nvoid* luaGetFromRegistry(lua_State* lua, const char* name);\nvoid luaCallFunction(scriptRunCtx* r_ctx, lua_State *lua, robj** keys, size_t nkeys, robj** args, size_t nargs, int debug_enabled);\nvoid luaExtractErrorInformation(lua_State *lua, errorInfo *err_info);\nvoid luaErrorInformationDiscard(errorInfo *err_info);\nunsigned long luaMemory(lua_State *lua);\nvoid luaGC(lua_State *lua, int *gc_count);\n\n#endif /* __SCRIPT_LUA_H_ */\n"
  },
  {
    "path": "src/sds.c",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n * \n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n * \n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <limits.h>\n#include \"redisassert.h\"\n#include \"sds.h\"\n#include \"sdsalloc.h\"\n#include \"util.h\"\n\nconst char *SDS_NOINIT = \"SDS_NOINIT\";\n\n/* Returns the minimum SDS type required to store a string of the given length.\n * \n * Previously, the SDS type was determined solely by the logical string\n * length, ignoring header size, which could lead to inconsistencies. If the \n * allocated buffer was larger than the maximum size supported by a type, `alloc` \n * would be truncated, creating a mismatch between `alloc` and the actual buffer size.\n */\nchar sdsReqType(size_t string_size) {\n    if (string_size < 1 << 5) return SDS_TYPE_5;\n    if (string_size <= (1 << 8) - sizeof(struct sdshdr8) - 1) return SDS_TYPE_8;\n    if (string_size <= (1 << 16) - sizeof(struct sdshdr16) - 1) return SDS_TYPE_16;\n#if (LONG_MAX == LLONG_MAX)\n    if (string_size <= (1ll << 32) - sizeof(struct sdshdr32) - 1) return SDS_TYPE_32;\n    return SDS_TYPE_64;\n#else\n    return SDS_TYPE_32;\n#endif\n}\n\nstatic inline size_t sdsTypeMaxSize(char type) {\n    if (type == SDS_TYPE_5)\n        return (1<<5) - 1;\n    if (type == SDS_TYPE_8)\n        return (1<<8) - 1;\n    if (type == SDS_TYPE_16)\n        return (1<<16) - 1;\n#if (LONG_MAX == LLONG_MAX)\n    if (type == SDS_TYPE_32)\n        return (1ll<<32) - 1;\n#endif\n    return -1; /* this is equivalent to the max SDS_TYPE_64 or SDS_TYPE_32 */\n}\n\n/* \n * Adjusts the SDS type if the allocated buffer size 
exceeds the maximum size \n * addressable by the current type.\n *\n * The SDS type is initially determined based on the logical length of the string. \n * However, allocators like jemalloc may return a buffer larger than requested, \n * potentially exceeding the maximum size the selected SDS type can handle. This \n * can cause a mismatch between the `alloc` field and the actual buffer size, \n * leading to wasted memory and possible inconsistencies.\n *\n * This function ensures that the SDS type is selected based on the actual buffer \n * size rather than just the logical length. If the buffer size supports a larger \n * SDS type, it updates `type` and `hdrlen` accordingly.\n *\n * Returns 1 if the type was adjusted, 0 otherwise.\n */\nstatic inline int adjustTypeIfNeeded(char *type, int *hdrlen, size_t bufsize) {\n    size_t usable = bufsize - *hdrlen - 1;\n    if (*type != SDS_TYPE_5 && usable > sdsTypeMaxSize(*type)) {\n        *type = sdsReqType(usable);\n        *hdrlen = sdsHdrSize(*type);\n        return 1;\n    }\n    return 0;\n}\n\n/* Create a new sds string with the content specified by the 'init' pointer\n * and 'initlen'.\n * If NULL is used for 'init' the string is initialized with zero bytes.\n * If SDS_NOINIT is used, the buffer is left uninitialized;\n *\n * The string is always null-terminated (all the sds strings are, always) so\n * even if you create an sds string with:\n *\n * mystring = sdsnewlen(\"abc\",3);\n *\n * You can print the string with printf() as there is an implicit \\0 at the\n * end of the string. However the string is binary safe and can contain\n * \\0 characters in the middle, as the length is stored in the sds header. */\nsds _sdsnewlen(const void *init, size_t initlen, int trymalloc) {\n    void *sh;\n\n    char type = sdsReqType(initlen);\n    /* Empty strings are usually created in order to append. Use type 8\n     * since type 5 is not good at this. 
*/\n    if (type == SDS_TYPE_5 && initlen == 0) type = SDS_TYPE_8;\n    int hdrlen = sdsHdrSize(type);\n    size_t bufsize;\n\n    assert(initlen + hdrlen + 1 > initlen); /* Catch size_t overflow */\n    sh = trymalloc?\n        s_trymalloc_usable(hdrlen+initlen+1, &bufsize) :\n        s_malloc_usable(hdrlen+initlen+1, &bufsize);\n    if (sh == NULL) return NULL;\n\n    adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n    return sdsnewplacement(sh, bufsize, type, init, initlen);\n}\n\n/* Initializes an SDS within a pre-allocated buffer, like placement new in C++. \n * \n * Parameters:\n * - `buf`    : A pre-allocated buffer for the SDS.\n * - `bufsize`: Total size of the buffer (>= `sdsReqSize(initlen, type)`). Can use \n *              a larger `bufsize` than required, but usable size won't be greater \n *              than `sdsTypeMaxSize(type)`. \n * - `type`   : The SDS type. `sdsReqType(length)` can assist in computing the type.\n * - `init`   : Initial string to copy, or `SDS_NOINIT` to skip initialization.\n * - `initlen`: Length of the initial string.\n * \n * Returns:\n * - A pointer to the SDS inside `buf`. \n */\nsds sdsnewplacement(char *buf, size_t bufsize, char type, const char *init, size_t initlen) {\n    assert(bufsize >= sdsReqSize(initlen, type));\n    int hdrlen = sdsHdrSize(type);\n    size_t usable = bufsize - hdrlen - 1;\n    sds s = buf + hdrlen;\n    unsigned char *fp = ((unsigned char *)s) - 1; /* flags pointer. 
*/\n\n    switch(type) {\n        case SDS_TYPE_5: {\n            *fp = type | (initlen << SDS_TYPE_BITS);\n            break;\n        }\n        case SDS_TYPE_8: {\n            SDS_HDR_VAR(8,s);\n            sh->len = initlen;\n            debugAssert(usable <= sdsTypeMaxSize(type));\n            sh->alloc = usable;\n            *fp = type;\n            break;\n        }\n        case SDS_TYPE_16: {\n            SDS_HDR_VAR(16,s);\n            sh->len = initlen;\n            debugAssert(usable <= sdsTypeMaxSize(type));\n            sh->alloc = usable;\n            *fp = type;\n            break;\n        }\n        case SDS_TYPE_32: {\n            SDS_HDR_VAR(32,s);\n            sh->len = initlen;\n            debugAssert(usable <= sdsTypeMaxSize(type));\n            sh->alloc = usable;\n            *fp = type;\n            break;\n        }\n        case SDS_TYPE_64: {\n            SDS_HDR_VAR(64,s);\n            sh->len = initlen;\n            debugAssert(usable <= sdsTypeMaxSize(type));\n            sh->alloc = usable;\n            *fp = type;\n            break;\n        }\n    }\n    if (init == SDS_NOINIT)\n        init = NULL;\n    else if (!init)\n        memset(s, 0, initlen);\n    else if (initlen) \n        memcpy(s, init, initlen);\n\n    s[initlen] = '\\0';\n    return s;\n}\n\nsds sdsnewlen(const void *init, size_t initlen) {\n    return _sdsnewlen(init, initlen, 0);\n}\n\nsds sdstrynewlen(const void *init, size_t initlen) {\n    return _sdsnewlen(init, initlen, 1);\n}\n\n/* Create an empty (zero length) sds string. Even in this case the string\n * always has an implicit null term. */\nsds sdsempty(void) {\n    return sdsnewlen(\"\",0);\n}\n\n/* Create a new sds string starting from a null terminated C string. */\nsds sdsnew(const char *init) {\n    size_t initlen = (init == NULL) ? 0 : strlen(init);\n    return sdsnewlen(init, initlen);\n}\n\n/* Duplicate an sds string. 
*/\nsds sdsdup(const sds s) {\n    return sdsnewlen(s, sdslen(s));\n}\n\n/* Free an sds string. No operation is performed if 's' is NULL. */\nvoid sdsfree(sds s) {\n    if (s == NULL) return;\n    s_free((char*)s-sdsHdrSize(s[-1]));\n}\n\nvoid sdsfreeusable(sds s, size_t *usable) {\n    if (s == NULL) return;\n    s_free_usable((char*)s-sdsHdrSize(s[-1]), usable);\n}\n\n/* Generic version of sdsfree. */\nvoid sdsfreegeneric(void *s) {\n    sdsfree((sds)s);\n}\n\n/* Set the sds string length to the length as obtained with strlen(), so\n * considering as content only up to the first null term character.\n *\n * This function is useful when the sds string is hacked manually in some\n * way, like in the following example:\n *\n * s = sdsnew(\"foobar\");\n * s[2] = '\\0';\n * sdsupdatelen(s);\n * printf(\"%d\\n\", sdslen(s));\n *\n * The output will be \"2\", but if we comment out the call to sdsupdatelen()\n * the output will be \"6\" as the string was modified but the logical length\n * remains 6 bytes. */\nvoid sdsupdatelen(sds s) {\n    size_t reallen = strlen(s);\n    sdssetlen(s, reallen);\n}\n\n/* Modify an sds string in-place to make it empty (zero length).\n * However all the existing buffer is not discarded but set as free space\n * so that next append operations will not require allocations up to the\n * number of bytes previously available. 
*/\nvoid sdsclear(sds s) {\n    sdssetlen(s, 0);\n    s[0] = '\\0';\n}\n\n/* Enlarge the free space at the end of the sds string so that the caller\n * is sure that after calling this function can overwrite up to addlen\n * bytes after the end of the string, plus one more byte for nul term.\n * If there's already sufficient free space, this function returns without any\n * action, if there isn't sufficient free space, it'll allocate what's missing,\n * and possibly more:\n * When greedy is 1, enlarge more than needed, to avoid need for future reallocs\n * on incremental growth.\n * When greedy is 0, enlarge just enough so that there's free space for 'addlen'.\n *\n * Note: this does not change the *length* of the sds string as returned\n * by sdslen(), but only the free buffer space we have. */\nsds _sdsMakeRoomFor(sds s, size_t addlen, int greedy) {\n    void *sh, *newsh;\n    size_t avail = sdsavail(s);\n    size_t len, newlen, reqlen;\n    char type, oldtype = sdsType(s);\n    int hdrlen;\n    size_t bufsize, usable;\n    int use_realloc;\n\n    /* Return ASAP if there is enough space left. */\n    if (avail >= addlen) return s;\n\n    len = sdslen(s);\n    sh = (char*)s-sdsHdrSize(oldtype);\n    reqlen = newlen = (len+addlen);\n    assert(newlen > len);   /* Catch size_t overflow */\n    if (greedy == 1) {\n        if (newlen < SDS_MAX_PREALLOC)\n            newlen *= 2;\n        else\n            newlen += SDS_MAX_PREALLOC;\n    }\n\n    type = sdsReqType(newlen);\n\n    /* Don't use type 5: the user is appending to the string and type 5 is\n     * not able to remember empty space, so sdsMakeRoomFor() must be called\n     * at every appending operation. 
*/\n    if (type == SDS_TYPE_5) type = SDS_TYPE_8;\n\n    hdrlen = sdsHdrSize(type);\n    assert(hdrlen + newlen + 1 > reqlen);  /* Catch size_t overflow */\n    use_realloc = (oldtype == type);\n    if (use_realloc) {\n        newsh = s_realloc_usable(sh, hdrlen + newlen + 1, &bufsize, NULL);\n        if (newsh == NULL) return NULL;\n        s = (char*)newsh + hdrlen;\n        if (adjustTypeIfNeeded(&type, &hdrlen, bufsize)) {\n            memmove((char *)newsh + hdrlen, s, len + 1);\n            s = (char *)newsh + hdrlen;\n            s[-1] = type;\n            sdssetlen(s, len);\n        }\n    } else {\n        /* Since the header size changes, need to move the string forward,\n         * and can't use realloc */\n        newsh = s_malloc_usable(hdrlen + newlen + 1, &bufsize);\n        if (newsh == NULL) return NULL;\n        adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n        memcpy((char*)newsh+hdrlen, s, len+1);\n        s_free(sh);\n        s = (char*)newsh+hdrlen;\n        s[-1] = type;\n        sdssetlen(s, len);\n    }\n    usable = bufsize - hdrlen - 1;\n    assert(type == SDS_TYPE_5 || usable <= sdsTypeMaxSize(type));\n    sdssetalloc(s, usable);\n    return s;\n}\n\n/* Enlarge the free space at the end of the sds string more than needed,\n * This is useful to avoid repeated re-allocations when repeatedly appending to the sds. */\nsds sdsMakeRoomFor(sds s, size_t addlen) {\n    return _sdsMakeRoomFor(s, addlen, 1);\n}\n\n/* Unlike sdsMakeRoomFor(), this one just grows to the necessary size. */\nsds sdsMakeRoomForNonGreedy(sds s, size_t addlen) {\n    return _sdsMakeRoomFor(s, addlen, 0);\n}\n\n/* Reallocate the sds string so that it has no free space at the end. The\n * contained string remains not altered, but next concatenation operations\n * will require a reallocation.\n *\n * After the call, the passed sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. 
*/\nsds sdsRemoveFreeSpace(sds s, int would_regrow) {\n    return sdsResize(s, sdslen(s), would_regrow);\n}\n\n/* Resize the allocation, this can make the allocation bigger or smaller,\n * if the size is smaller than currently used len, the data will be truncated.\n *\n * When the would_regrow argument is set to 1, it prevents the use of\n * SDS_TYPE_5, which is desired when the sds is likely to be changed again.\n *\n * The sdsAlloc size will be set to the requested size regardless of the actual\n * allocation size, this is done in order to avoid repeated calls to this\n * function when the caller detects that it has excess space. */\nsds sdsResize(sds s, size_t size, int would_regrow) {\n    void *sh, *newsh = NULL;\n    char type, oldtype = sdsType(s);\n    int hdrlen, oldhdrlen = sdsHdrSize(oldtype);\n    size_t len = sdslen(s);\n    sh = (char*)s-oldhdrlen;\n\n    /* Return ASAP if the size is already good. */\n    if (sdsalloc(s) == size) return s;\n\n    /* Truncate len if needed. */\n    if (size < len) len = size;\n\n    /* Check what would be the minimum SDS header that is just good enough to\n     * fit this string. */\n    type = sdsReqType(size);\n    if (would_regrow) {\n        /* Don't use type 5, it is not good for strings that are expected to grow back. */\n        if (type == SDS_TYPE_5) type = SDS_TYPE_8;\n    }\n    hdrlen = sdsHdrSize(type);\n\n    /* If the type is the same, or can hold the size in it with low overhead\n     * (larger than SDS_TYPE_8), we just realloc(), letting the allocator\n     * to do the copy only if really needed. Otherwise if the change is\n     * huge, we manually reallocate the string to use the different header\n     * type. */\n    int use_realloc = (oldtype==type || (type < oldtype && type > SDS_TYPE_8));\n    size_t newlen = use_realloc ? 
oldhdrlen+size+1 : hdrlen+size+1;\n    size_t bufsize = 0;\n    size_t newsize;\n\n    if (use_realloc) {\n        int alloc_already_optimal = 0;\n        #if defined(USE_JEMALLOC)\n            /* je_nallocx returns the expected allocation size for the newlen.\n             * We aim to avoid calling realloc() when using Jemalloc if there is no\n             * change in the allocation size, as it incurs a cost even if the\n             * allocation size stays the same. */\n        bufsize = sdsAllocSize(s);\n        alloc_already_optimal = (je_nallocx(newlen, 0) == bufsize);\n        #endif\n        if (!alloc_already_optimal) {\n            newsh = s_realloc_usable(sh, newlen, &bufsize, NULL);\n            if (newsh == NULL) return NULL;\n            s = (char *)newsh + oldhdrlen;\n\n            if (adjustTypeIfNeeded(&oldtype, &oldhdrlen, bufsize)) {\n                memmove((char *)newsh + oldhdrlen, s, len + 1);\n                s = (char *)newsh + oldhdrlen;\n                s[-1] = oldtype;\n                sdssetlen(s, len);\n            }\n        }\n        newsize = bufsize - oldhdrlen - 1;\n        debugAssert(oldtype == SDS_TYPE_5 || newsize <= sdsTypeMaxSize(oldtype));\n    } else {\n        newsh = s_malloc_usable(newlen, &bufsize);\n        if (newsh == NULL) return NULL;\n        adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n        memcpy((char *)newsh + hdrlen, s, len + 1);\n        s_free(sh);\n        s = (char *)newsh + hdrlen;\n        s[-1] = type;\n        newsize = bufsize - hdrlen - 1;\n        debugAssert(type == SDS_TYPE_5 || newsize <= sdsTypeMaxSize(type));\n    }\n    s[len] = 0;\n    sdssetlen(s, len);\n    sdssetalloc(s, newsize);\n    return s;\n}\n\n/* Return the pointer of the actual SDS allocation (normally SDS strings\n * are referenced by the start of the string buffer). 
*/\nvoid *sdsAllocPtr(sds s) {\n    return (void*) (s-sdsHdrSize(s[-1]));\n}\n\n/* Increment the sds length and decrements the left free space at the\n * end of the string according to 'incr'. Also set the null term\n * in the new end of the string.\n *\n * This function is used in order to fix the string length after the\n * user calls sdsMakeRoomFor(), writes something after the end of\n * the current string, and finally needs to set the new length.\n *\n * Note: it is possible to use a negative increment in order to\n * right-trim the string.\n *\n * Usage example:\n *\n * Using sdsIncrLen() and sdsMakeRoomFor() it is possible to mount the\n * following schema, to cat bytes coming from the kernel to the end of an\n * sds string without copying into an intermediate buffer:\n *\n * oldlen = sdslen(s);\n * s = sdsMakeRoomFor(s, BUFFER_SIZE);\n * nread = read(fd, s+oldlen, BUFFER_SIZE);\n * ... check for nread <= 0 and handle it ...\n * sdsIncrLen(s, nread);\n */\nvoid sdsIncrLen(sds s, ssize_t incr) {\n    size_t len;\n    switch(sdsType(s)) {\n        case SDS_TYPE_5: {\n            unsigned char *fp = ((unsigned char*)s)-1;\n            unsigned char oldlen = SDS_TYPE_5_LEN(s);\n            assert((incr > 0 && oldlen+incr < 32) || (incr < 0 && oldlen >= (unsigned int)(-incr)));\n            *fp = SDS_TYPE_5 | ((oldlen+incr) << SDS_TYPE_BITS);\n            len = oldlen+incr;\n            break;\n        }\n        case SDS_TYPE_8: {\n            SDS_HDR_VAR(8,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= incr) || (incr < 0 && sh->len >= (unsigned int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case SDS_TYPE_16: {\n            SDS_HDR_VAR(16,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= incr) || (incr < 0 && sh->len >= (unsigned int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case SDS_TYPE_32: {\n            SDS_HDR_VAR(32,s);\n            
assert((incr >= 0 && sh->alloc-sh->len >= (unsigned int)incr) || (incr < 0 && sh->len >= (unsigned int)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        case SDS_TYPE_64: {\n            SDS_HDR_VAR(64,s);\n            assert((incr >= 0 && sh->alloc-sh->len >= (uint64_t)incr) || (incr < 0 && sh->len >= (uint64_t)(-incr)));\n            len = (sh->len += incr);\n            break;\n        }\n        default: len = 0; /* Just to avoid compilation warnings. */\n    }\n    s[len] = '\\0';\n}\n\n/* Grow the sds to have the specified length. Bytes that were not part of\n * the original length of the sds will be set to zero.\n *\n * if the specified length is smaller than the current length, no operation\n * is performed. */\nsds sdsgrowzero(sds s, size_t len) {\n    size_t curlen = sdslen(s);\n\n    if (len <= curlen) return s;\n    s = sdsMakeRoomFor(s,len-curlen);\n    if (s == NULL) return NULL;\n\n    /* Make sure added region doesn't contain garbage */\n    memset(s+curlen,0,(len-curlen+1)); /* also set trailing \\0 byte */\n    sdssetlen(s, len);\n    return s;\n}\n\n/* Append the specified binary-safe string pointed by 't' of 'len' bytes to the\n * end of the specified sds string 's'.\n *\n * After the call, the passed sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nsds sdscatlen(sds s, const void *t, size_t len) {\n    size_t curlen = sdslen(s);\n\n    s = sdsMakeRoomFor(s,len);\n    if (s == NULL) return NULL;\n    memcpy(s+curlen, t, len);\n    sdssetlen(s, curlen+len);\n    s[curlen+len] = '\\0';\n    return s;\n}\n\n/* Append the specified null terminated C string to the sds string 's'.\n *\n * After the call, the passed sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. 
*/\nsds sdscat(sds s, const char *t) {\n    return sdscatlen(s, t, strlen(t));\n}\n\n/* Append the specified sds 't' to the existing sds 's'.\n *\n * After the call, the modified sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nsds sdscatsds(sds s, const sds t) {\n    return sdscatlen(s, t, sdslen(t));\n}\n\n/* Destructively modify the sds string 's' to hold the specified binary\n * safe string pointed by 't' of length 'len' bytes. */\nsds sdscpylen(sds s, const char *t, size_t len) {\n    if (sdsalloc(s) < len) {\n        s = sdsMakeRoomFor(s,len-sdslen(s));\n        if (s == NULL) return NULL;\n    }\n    memcpy(s, t, len);\n    s[len] = '\\0';\n    sdssetlen(s, len);\n    return s;\n}\n\n/* Like sdscpylen() but 't' must be a null-terminated string so that the length\n * of the string is obtained with strlen(). */\nsds sdscpy(sds s, const char *t) {\n    return sdscpylen(s, t, strlen(t));\n}\n\n/* Create an sds string from a long long value. It is much faster than:\n *\n * sdscatprintf(sdsempty(),\"%lld\\n\", value);\n */\nsds sdsfromlonglong(long long value) {\n    char buf[LONG_STR_SIZE];\n    int len = ll2string(buf,sizeof(buf),value);\n\n    return sdsnewlen(buf,len);\n}\n\n/* Like sdscatprintf() but gets va_list instead of being variadic. */\nsds sdscatvprintf(sds s, const char *fmt, va_list ap) {\n    va_list cpy;\n    char staticbuf[1024], *buf = staticbuf, *t;\n    size_t buflen = strlen(fmt)*2;\n    int bufstrlen;\n\n    /* We try to start using a static buffer for speed.\n     * If not possible we revert to heap allocation. */\n    if (buflen > sizeof(staticbuf)) {\n        buf = s_malloc(buflen);\n        if (buf == NULL) return NULL;\n    } else {\n        buflen = sizeof(staticbuf);\n    }\n\n    /* Alloc enough space for buffer and \\0 after failing to\n     * fit the string in the current buffer size. 
*/\n    while(1) {\n        va_copy(cpy,ap);\n        bufstrlen = vsnprintf(buf, buflen, fmt, cpy);\n        va_end(cpy);\n        if (bufstrlen < 0) {\n            if (buf != staticbuf) s_free(buf);\n            return NULL;\n        }\n        if (((size_t)bufstrlen) >= buflen) {\n            if (buf != staticbuf) s_free(buf);\n            buflen = ((size_t)bufstrlen) + 1;\n            buf = s_malloc(buflen);\n            if (buf == NULL) return NULL;\n            continue;\n        }\n        break;\n    }\n\n    /* Finally concat the obtained string to the SDS string and return it. */\n    t = sdscatlen(s, buf, bufstrlen);\n    if (buf != staticbuf) s_free(buf);\n    return t;\n}\n\n/* Append to the sds string 's' a string obtained using printf-alike format\n * specifier.\n *\n * After the call, the modified sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call.\n *\n * Example:\n *\n * s = sdsnew(\"Sum is: \");\n * s = sdscatprintf(s,\"%d+%d = %d\",a,b,a+b).\n *\n * Often you need to create a string from scratch with the printf-alike\n * format. When this is the need, just use sdsempty() as the target string:\n *\n * s = sdscatprintf(sdsempty(), \"... your format ...\", args);\n */\nsds sdscatprintf(sds s, const char *fmt, ...) {\n    va_list ap;\n    char *t;\n    va_start(ap, fmt);\n    t = sdscatvprintf(s,fmt,ap);\n    va_end(ap);\n    return t;\n}\n\n/* This function is similar to sdscatprintf, but much faster as it does\n * not rely on sprintf() family functions implemented by the libc that\n * are often very slow. 
Moreover directly handling the sds string as\n * new data is concatenated provides a performance improvement.\n *\n * However this function only handles an incompatible subset of printf-alike\n * format specifiers:\n *\n * %s - C String\n * %S - SDS string\n * %i - signed int\n * %I - 64 bit signed integer (long long, int64_t)\n * %u - unsigned int\n * %U - 64 bit unsigned integer (unsigned long long, uint64_t)\n * %% - Verbatim \"%\" character.\n */\nsds sdscatfmt(sds s, char const *fmt, ...) {\n    size_t initlen = sdslen(s);\n    const char *f = fmt;\n    long i;\n    va_list ap;\n\n    /* To avoid continuous reallocations, let's start with a buffer that\n     * can hold at least two times the format string itself. It's not the\n     * best heuristic but seems to work in practice. */\n    s = sdsMakeRoomFor(s, strlen(fmt)*2);\n    va_start(ap,fmt);\n    f = fmt;    /* Next format specifier byte to process. */\n    i = initlen; /* Position of the next byte to write to dest str. */\n    while(*f) {\n        char next, *str;\n        size_t l;\n        long long num;\n        unsigned long long unum;\n\n        /* Make sure there is always space for at least 1 char. */\n        if (sdsavail(s)==0) {\n            s = sdsMakeRoomFor(s,1);\n        }\n\n        switch(*f) {\n        case '%':\n            next = *(f+1);\n            if (next == '\\0') break;\n            f++;\n            switch(next) {\n            case 's':\n            case 'S':\n                str = va_arg(ap,char*);\n                l = (next == 's') ? 
strlen(str) : sdslen(str);\n                if (sdsavail(s) < l) {\n                    s = sdsMakeRoomFor(s,l);\n                }\n                memcpy(s+i,str,l);\n                sdsinclen(s,l);\n                i += l;\n                break;\n            case 'i':\n            case 'I':\n                if (next == 'i')\n                    num = va_arg(ap,int);\n                else\n                    num = va_arg(ap,long long);\n                {\n                    char buf[LONG_STR_SIZE];\n                    l = ll2string(buf,sizeof(buf),num);\n                    if (sdsavail(s) < l) {\n                        s = sdsMakeRoomFor(s,l);\n                    }\n                    memcpy(s+i,buf,l);\n                    sdsinclen(s,l);\n                    i += l;\n                }\n                break;\n            case 'u':\n            case 'U':\n                if (next == 'u')\n                    unum = va_arg(ap,unsigned int);\n                else\n                    unum = va_arg(ap,unsigned long long);\n                {\n                    char buf[LONG_STR_SIZE];\n                    l = ull2string(buf,sizeof(buf),unum);\n                    if (sdsavail(s) < l) {\n                        s = sdsMakeRoomFor(s,l);\n                    }\n                    memcpy(s+i,buf,l);\n                    sdsinclen(s,l);\n                    i += l;\n                }\n                break;\n            default: /* Handle %% and generally %<unknown>. 
*/\n                s[i++] = next;\n                sdsinclen(s,1);\n                break;\n            }\n            break;\n        default:\n            s[i++] = *f;\n            sdsinclen(s,1);\n            break;\n        }\n        f++;\n    }\n    va_end(ap);\n\n    /* Add null-term */\n    s[i] = '\\0';\n    return s;\n}\n\n/* Remove the part of the string from left and from right composed just of\n * contiguous characters found in 'cset', that is a null terminated C string.\n *\n * After the call, the modified sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call.\n *\n * Example:\n *\n * s = sdsnew(\"AA...AA.a.aa.aHelloWorld     :::\");\n * s = sdstrim(s,\"Aa. :\");\n * printf(\"%s\\n\", s);\n *\n * Output will be just \"HelloWorld\".\n */\nsds sdstrim(sds s, const char *cset) {\n    char *end, *sp, *ep;\n    size_t len;\n\n    sp = s;\n    ep = end = s+sdslen(s)-1;\n    while(sp <= end && strchr(cset, *sp)) sp++;\n    while(ep > sp && strchr(cset, *ep)) ep--;\n    len = (ep-sp)+1;\n    if (s != sp) memmove(s, sp, len);\n    s[len] = '\\0';\n    sdssetlen(s,len);\n    return s;\n}\n\n/* Changes the input string to be a subset of the original.\n * It does not release the free space in the string, so a call to\n * sdsRemoveFreeSpace may be wise after. 
*/\nvoid sdssubstr(sds s, size_t start, size_t len) {\n    /* Clamp out of range input */\n    size_t oldlen = sdslen(s);\n    if (start >= oldlen) start = len = 0;\n    if (len > oldlen-start) len = oldlen-start;\n\n    /* Move the data */\n    if (len) memmove(s, s+start, len);\n    s[len] = 0;\n    sdssetlen(s,len);\n}\n\n/* Turn the string into a smaller (or equal) string containing only the\n * substring specified by the 'start' and 'end' indexes.\n *\n * start and end can be negative, where -1 means the last character of the\n * string, -2 the penultimate character, and so forth.\n *\n * The interval is inclusive, so the start and end characters will be part\n * of the resulting string.\n *\n * The string is modified in-place.\n *\n * NOTE: this function can be misleading and can have unexpected behaviour,\n * specifically when you want the length of the new string to be 0.\n * Having start==end will result in a string with one character.\n * please consider using sdssubstr instead.\n *\n * Example:\n *\n * s = sdsnew(\"Hello World\");\n * sdsrange(s,1,-1); => \"ello World\"\n */\nvoid sdsrange(sds s, ssize_t start, ssize_t end) {\n    size_t newlen, len = sdslen(s);\n    if (len == 0) return;\n    if (start < 0)\n        start = len + start;\n    if (end < 0)\n        end = len + end;\n    newlen = (start > end) ? 0 : (end-start)+1;\n    sdssubstr(s, start, newlen);\n}\n\n/* Apply tolower() to every character of the sds string 's'. */\nvoid sdstolower(sds s) {\n    size_t len = sdslen(s), j;\n\n    for (j = 0; j < len; j++) s[j] = tolower(s[j]);\n}\n\n/* Apply toupper() to every character of the sds string 's'. 
*/\nvoid sdstoupper(sds s) {\n    size_t len = sdslen(s), j;\n\n    for (j = 0; j < len; j++) s[j] = toupper(s[j]);\n}\n\n/* Compare two sds strings s1 and s2 with memcmp().\n *\n * Return value:\n *\n *     positive if s1 > s2.\n *     negative if s1 < s2.\n *     0 if s1 and s2 are exactly the same binary string.\n *\n * If two strings share exactly the same prefix, but one of the two has\n * additional characters, the longer string is considered to be greater than\n * the smaller one. */\nint sdscmp(const sds s1, const sds s2) {\n    size_t l1, l2, minlen;\n    int cmp;\n\n    l1 = sdslen(s1);\n    l2 = sdslen(s2);\n    minlen = (l1 < l2) ? l1 : l2;\n    cmp = memcmp(s1,s2,minlen);\n    if (cmp == 0) return l1>l2? 1: (l1<l2? -1: 0);\n    return cmp;\n}\n\n/* Split 's' with separator in 'sep'. An array\n * of sds strings is returned. *count will be set\n * by reference to the number of tokens returned.\n *\n * On out of memory, zero length string, zero length\n * separator, NULL is returned.\n *\n * Note that 'sep' is able to split a string using\n * a multi-character separator. For example\n * sdssplit(\"foo_-_bar\",\"_-_\"); will return two\n * elements \"foo\" and \"bar\".\n *\n * This version of the function is binary-safe but\n * requires length arguments. 
sdssplit() is just the\n * same function but for zero-terminated strings.\n */\nsds *sdssplitlen(const char *s, ssize_t len, const char *sep, int seplen, int *count) {\n    int elements = 0, slots = 5;\n    long start = 0, j;\n    sds *tokens;\n\n    if (seplen < 1 || len <= 0) {\n        *count = 0;\n        return NULL;\n    }\n    tokens = s_malloc(sizeof(sds)*slots);\n    if (tokens == NULL) return NULL;\n\n    for (j = 0; j < (len-(seplen-1)); j++) {\n        /* make sure there is room for the next element and the final one */\n        if (slots < elements+2) {\n            sds *newtokens;\n\n            slots *= 2;\n            newtokens = s_realloc(tokens,sizeof(sds)*slots);\n            if (newtokens == NULL) goto cleanup;\n            tokens = newtokens;\n        }\n        /* search the separator */\n        if ((seplen == 1 && *(s+j) == sep[0]) || (memcmp(s+j,sep,seplen) == 0)) {\n            tokens[elements] = sdsnewlen(s+start,j-start);\n            if (tokens[elements] == NULL) goto cleanup;\n            elements++;\n            start = j+seplen;\n            j = j+seplen-1; /* skip the separator */\n        }\n    }\n    /* Add the final element. We are sure there is room in the tokens array. */\n    tokens[elements] = sdsnewlen(s+start,len-start);\n    if (tokens[elements] == NULL) goto cleanup;\n    elements++;\n    *count = elements;\n    return tokens;\n\ncleanup:\n    {\n        int i;\n        for (i = 0; i < elements; i++) sdsfree(tokens[i]);\n        s_free(tokens);\n        *count = 0;\n        return NULL;\n    }\n}\n\n/* Free the result returned by sdssplitlen(), or do nothing if 'tokens' is NULL. 
*/\nvoid sdsfreesplitres(sds *tokens, int count) {\n    if (!tokens) return;\n    while(count--)\n        sdsfree(tokens[count]);\n    s_free(tokens);\n}\n\n/* Append to the sds string \"s\" an escaped string representation where\n * all the non-printable characters (tested with isprint()) are turned into\n * escapes in the form \"\\n\\r\\a....\" or \"\\x<hex-number>\".\n *\n * After the call, the modified sds string is no longer valid and all the\n * references must be substituted with the new pointer returned by the call. */\nsds sdscatrepr(sds s, const char *p, size_t len) {\n    s = sdsMakeRoomFor(s, len + 2);\n    s = sdscatlen(s,\"\\\"\",1);\n    while(len--) {\n        switch(*p) {\n        case '\\\\':\n        case '\"':\n            s = sdscatprintf(s,\"\\\\%c\",*p);\n            break;\n        case '\\n': s = sdscatlen(s,\"\\\\n\",2); break;\n        case '\\r': s = sdscatlen(s,\"\\\\r\",2); break;\n        case '\\t': s = sdscatlen(s,\"\\\\t\",2); break;\n        case '\\a': s = sdscatlen(s,\"\\\\a\",2); break;\n        case '\\b': s = sdscatlen(s,\"\\\\b\",2); break;\n        default:\n            if (isprint(*p))\n                s = sdscatlen(s, p, 1);\n            else\n                s = sdscatprintf(s,\"\\\\x%02x\",(unsigned char)*p);\n            break;\n        }\n        p++;\n    }\n    return sdscatlen(s,\"\\\"\",1);\n}\n\n/* Returns one if the string contains characters to be escaped\n * by sdscatrepr(), zero otherwise.\n *\n * Typically, this should be used to help protect aggregated strings in a way\n * that is compatible with sdssplitargs(). 
For this reason, also spaces will be\n * treated as needing an escape.\n */\nint sdsneedsrepr(const sds s) {\n    size_t len = sdslen(s);\n    const char *p = s;\n\n    while (len--) {\n        if (*p == '\\\\' || *p == '\"' || *p == '\\n' || *p == '\\r' ||\n            *p == '\\t' || *p == '\\a' || *p == '\\b' || !isprint(*p) || isspace(*p)) return 1;\n        p++;\n    }\n\n    return 0;\n}\n\n/* Helper function for sdssplitargs() that returns non zero if 'c'\n * is a valid hex digit. */\nint is_hex_digit(char c) {\n    return (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') ||\n           (c >= 'A' && c <= 'F');\n}\n\n/* Helper function for sdssplitargs() that converts a hex digit into an\n * integer from 0 to 15 */\nint hex_digit_to_int(char c) {\n    switch(c) {\n    case '0': return 0;\n    case '1': return 1;\n    case '2': return 2;\n    case '3': return 3;\n    case '4': return 4;\n    case '5': return 5;\n    case '6': return 6;\n    case '7': return 7;\n    case '8': return 8;\n    case '9': return 9;\n    case 'a': case 'A': return 10;\n    case 'b': case 'B': return 11;\n    case 'c': case 'C': return 12;\n    case 'd': case 'D': return 13;\n    case 'e': case 'E': return 14;\n    case 'f': case 'F': return 15;\n    default: return 0;\n    }\n}\n\n/* Split a line into arguments, where every argument can be in the\n * following programming-language REPL-alike form:\n *\n * foo bar \"newline are supported\\n\" and \"\\xff\\x00otherstuff\"\n *\n * The number of arguments is stored into *argc, and an array\n * of sds is returned.\n *\n * The caller should free the resulting array of sds strings with\n * sdsfreesplitres().\n *\n * Note that sdscatrepr() is able to convert back a string into\n * a quoted string in the same format sdssplitargs() is able to parse.\n *\n * The function returns the allocated tokens on success, even when the\n * input string is empty, or NULL if the input contains unbalanced\n * quotes or closed quotes followed by non space 
characters\n * as in: \"foo\"bar or \"foo'\n */\nsds *sdssplitargs(const char *line, int *argc) {\n    const char *p = line;\n    char *current = NULL;\n    char **vector = NULL;\n\n    *argc = 0;\n    while(1) {\n        /* skip blanks */\n        while(*p && isspace(*p)) p++;\n        if (*p) {\n            /* get a token */\n            int inq=0;  /* set to 1 if we are in \"quotes\" */\n            int insq=0; /* set to 1 if we are in 'single quotes' */\n            int done=0;\n\n            if (current == NULL) current = sdsempty();\n            while(!done) {\n                if (inq) {\n                    if (*p == '\\\\' && *(p+1) == 'x' &&\n                                             is_hex_digit(*(p+2)) &&\n                                             is_hex_digit(*(p+3)))\n                    {\n                        unsigned char byte;\n\n                        byte = (hex_digit_to_int(*(p+2))*16)+\n                                hex_digit_to_int(*(p+3));\n                        current = sdscatlen(current,(char*)&byte,1);\n                        p += 3;\n                    } else if (*p == '\\\\' && *(p+1)) {\n                        char c;\n\n                        p++;\n                        switch(*p) {\n                        case 'n': c = '\\n'; break;\n                        case 'r': c = '\\r'; break;\n                        case 't': c = '\\t'; break;\n                        case 'b': c = '\\b'; break;\n                        case 'a': c = '\\a'; break;\n                        default: c = *p; break;\n                        }\n                        current = sdscatlen(current,&c,1);\n                    } else if (*p == '\"') {\n                        /* closing quote must be followed by a space or\n                         * nothing at all. 
*/\n                        if (*(p+1) && !isspace(*(p+1))) goto err;\n                        done=1;\n                    } else if (!*p) {\n                        /* unterminated quotes */\n                        goto err;\n                    } else {\n                        current = sdscatlen(current,p,1);\n                    }\n                } else if (insq) {\n                    if (*p == '\\\\' && *(p+1) == '\\'') {\n                        p++;\n                        current = sdscatlen(current,\"'\",1);\n                    } else if (*p == '\\'') {\n                        /* closing quote must be followed by a space or\n                         * nothing at all. */\n                        if (*(p+1) && !isspace(*(p+1))) goto err;\n                        done=1;\n                    } else if (!*p) {\n                        /* unterminated quotes */\n                        goto err;\n                    } else {\n                        current = sdscatlen(current,p,1);\n                    }\n                } else {\n                    switch(*p) {\n                    case ' ':\n                    case '\\n':\n                    case '\\r':\n                    case '\\t':\n                    case '\\0':\n                        done=1;\n                        break;\n                    case '\"':\n                        inq=1;\n                        break;\n                    case '\\'':\n                        insq=1;\n                        break;\n                    default:\n                        current = sdscatlen(current,p,1);\n                        break;\n                    }\n                }\n                if (*p) p++;\n            }\n            /* add the token to the vector */\n            vector = s_realloc(vector,((*argc)+1)*sizeof(char*));\n            vector[*argc] = current;\n            (*argc)++;\n            current = NULL;\n        } else {\n            /* Even on empty input string return 
something not NULL. */\n            if (vector == NULL) vector = s_malloc(sizeof(void*));\n            return vector;\n        }\n    }\n\nerr:\n    while((*argc)--)\n        sdsfree(vector[*argc]);\n    s_free(vector);\n    if (current) sdsfree(current);\n    *argc = 0;\n    return NULL;\n}\n\n/* Modify the string substituting all the occurrences of the set of\n * characters specified in the 'from' string to the corresponding character\n * in the 'to' array.\n *\n * For instance: sdsmapchars(mystring, \"ho\", \"01\", 2)\n * will have the effect of turning the string \"hello\" into \"0ell1\".\n *\n * The function returns the sds string pointer, that is always the same\n * as the input pointer since no resize is needed. */\nsds sdsmapchars(sds s, const char *from, const char *to, size_t setlen) {\n    size_t j, i, l = sdslen(s);\n\n    for (j = 0; j < l; j++) {\n        for (i = 0; i < setlen; i++) {\n            if (s[j] == from[i]) {\n                s[j] = to[i];\n                break;\n            }\n        }\n    }\n    return s;\n}\n\n/* Join an array of C strings using the specified separator (also a C string).\n * Returns the result as an sds string. */\nsds sdsjoin(char **argv, int argc, char *sep) {\n    sds join = sdsempty();\n    int j;\n\n    for (j = 0; j < argc; j++) {\n        join = sdscat(join, argv[j]);\n        if (j != argc-1) join = sdscat(join,sep);\n    }\n    return join;\n}\n\n/* Like sdsjoin, but joins an array of SDS strings. */\nsds sdsjoinsds(sds *argv, int argc, const char *sep, size_t seplen) {\n    sds join = sdsempty();\n    int j;\n\n    for (j = 0; j < argc; j++) {\n        join = sdscatsds(join, argv[j]);\n        if (j != argc-1) join = sdscatlen(join,sep,seplen);\n    }\n    return join;\n}\n\n/* Wrappers to the allocators used by SDS. Note that SDS will actually\n * just use the macros defined into sdsalloc.h in order to avoid to pay\n * the overhead of function calls. 
Here we define these wrappers only for\n * the programs SDS is linked to, if they want to touch the SDS internals\n * even if they use a different allocator. */\nvoid *sds_malloc(size_t size) { return s_malloc(size); }\nvoid *sds_realloc(void *ptr, size_t size) { return s_realloc(ptr,size); }\nvoid sds_free(void *ptr) { s_free(ptr); }\n\n/* Perform expansion of a template string and return the result as a newly\n * allocated sds.\n *\n * Template variables are specified using curly brackets, e.g. {variable}.\n * An opening bracket can be quoted by repeating it twice.\n */\nsds sdstemplate(const char *template, sdstemplate_callback_t cb_func, void *cb_arg)\n{\n    sds res = sdsempty();\n    const char *p = template;\n\n    while (*p) {\n        /* Find next variable, copy everything until there */\n        const char *sv = strchr(p, '{');\n        if (!sv) {\n            /* Not found: copy till rest of template and stop */\n            res = sdscat(res, p);\n            break;\n        } else if (sv > p) {\n            /* Found: copy anything up to the beginning of the variable */\n            res = sdscatlen(res, p, sv - p);\n        }\n\n        /* Skip into variable name, handle premature end or quoting */\n        sv++;\n        if (!*sv) goto error;       /* Premature end of template */\n        if (*sv == '{') {\n            /* Quoted '{' */\n            p = sv + 1;\n            res = sdscat(res, \"{\");\n            continue;\n        }\n\n        /* Find end of variable name, handle premature end of template */\n        const char *ev = strchr(sv, '}');\n        if (!ev) goto error;\n\n        /* Pass variable name to callback and obtain value. If callback failed,\n         * abort. 
*/\n        sds varname = sdsnewlen(sv, ev - sv);\n        sds value = cb_func(varname, cb_arg);\n        sdsfree(varname);\n        if (!value) goto error;\n\n        /* Append value to result and continue */\n        res = sdscat(res, value);\n        sdsfree(value);\n        p = ev + 1;\n    }\n\n    return res;\n\nerror:\n    sdsfree(res);\n    return NULL;\n}\n\n#ifdef REDIS_TEST\n#include <stdio.h>\n#include <limits.h>\n#include \"testhelp.h\"\n\n#define UNUSED(x) (void)(x)\n\nstatic sds sdsTestTemplateCallback(sds varname, void *arg) {\n    UNUSED(arg);\n    static const char *_var1 = \"variable1\";\n    static const char *_var2 = \"variable2\";\n\n    if (!strcmp(varname, _var1)) return sdsnew(\"value1\");\n    else if (!strcmp(varname, _var2)) return sdsnew(\"value2\");\n    else return NULL;\n}\n\nint sdsTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    {\n        sds x = sdsnew(\"foo\"), y;\n\n        test_cond(\"Create a string and obtain the length\",\n            sdslen(x) == 3 && memcmp(x,\"foo\\0\",4) == 0);\n\n        sdsfree(x);\n        x = sdsnewlen(\"foo\",2);\n        test_cond(\"Create a string with specified length\",\n            sdslen(x) == 2 && memcmp(x,\"fo\\0\",3) == 0);\n\n        x = sdscat(x,\"bar\");\n        test_cond(\"Strings concatenation\",\n            sdslen(x) == 5 && memcmp(x,\"fobar\\0\",6) == 0);\n\n        x = sdscpy(x,\"a\");\n        test_cond(\"sdscpy() against an originally longer string\",\n            sdslen(x) == 1 && memcmp(x,\"a\\0\",2) == 0);\n\n        x = sdscpy(x,\"xyzxxxxxxxxxxyyyyyyyyyykkkkkkkkkk\");\n        test_cond(\"sdscpy() against an originally shorter string\",\n            sdslen(x) == 33 &&\n            memcmp(x,\"xyzxxxxxxxxxxyyyyyyyyyykkkkkkkkkk\\0\",33) == 0);\n\n        sdsfree(x);\n        x = sdscatprintf(sdsempty(),\"%d\",123);\n        test_cond(\"sdscatprintf() seems working in the base case\",\n            sdslen(x) == 3 && 
memcmp(x,\"123\\0\",4) == 0);\n\n        sdsfree(x);\n        x = sdscatprintf(sdsempty(),\"a%cb\",0);\n        test_cond(\"sdscatprintf() seems working with \\\\0 inside of result\",\n            sdslen(x) == 3 && memcmp(x,\"a\\0\"\"b\\0\",4) == 0);\n\n        {\n            sdsfree(x);\n            char etalon[1024*1024];\n            for (size_t i = 0; i < sizeof(etalon); i++) {\n                etalon[i] = '0';\n            }\n            x = sdscatprintf(sdsempty(),\"%0*d\",(int)sizeof(etalon),0);\n            test_cond(\"sdscatprintf() can print 1MB\",\n                sdslen(x) == sizeof(etalon) && memcmp(x,etalon,sizeof(etalon)) == 0);\n        }\n\n        sdsfree(x);\n        x = sdsnew(\"--\");\n        x = sdscatfmt(x, \"Hello %s World %I,%I--\", \"Hi!\", LLONG_MIN,LLONG_MAX);\n        test_cond(\"sdscatfmt() seems working in the base case\",\n            sdslen(x) == 60 &&\n            memcmp(x,\"--Hello Hi! World -9223372036854775808,\"\n                     \"9223372036854775807--\",60) == 0);\n        printf(\"[%s]\\n\",x);\n\n        sdsfree(x);\n        x = sdsnew(\"--\");\n        x = sdscatfmt(x, \"%u,%U--\", UINT_MAX, ULLONG_MAX);\n        test_cond(\"sdscatfmt() seems working with unsigned numbers\",\n            sdslen(x) == 35 &&\n            memcmp(x,\"--4294967295,18446744073709551615--\",35) == 0);\n\n        sdsfree(x);\n        x = sdsnew(\" x \");\n        sdstrim(x,\" x\");\n        test_cond(\"sdstrim() works when all chars match\",\n            sdslen(x) == 0);\n\n        sdsfree(x);\n        x = sdsnew(\" x \");\n        sdstrim(x,\" \");\n        test_cond(\"sdstrim() works when a single char remains\",\n            sdslen(x) == 1 && x[0] == 'x');\n\n        sdsfree(x);\n        x = sdsnew(\"xxciaoyyy\");\n        sdstrim(x,\"xy\");\n        test_cond(\"sdstrim() correctly trims characters\",\n            sdslen(x) == 4 && memcmp(x,\"ciao\\0\",5) == 0);\n\n        y = sdsdup(x);\n        sdsrange(y,1,1);\n        
test_cond(\"sdsrange(...,1,1)\",\n            sdslen(y) == 1 && memcmp(y,\"i\\0\",2) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,1,-1);\n        test_cond(\"sdsrange(...,1,-1)\",\n            sdslen(y) == 3 && memcmp(y,\"iao\\0\",4) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,-2,-1);\n        test_cond(\"sdsrange(...,-2,-1)\",\n            sdslen(y) == 2 && memcmp(y,\"ao\\0\",3) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,2,1);\n        test_cond(\"sdsrange(...,2,1)\",\n            sdslen(y) == 0 && memcmp(y,\"\\0\",1) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,1,100);\n        test_cond(\"sdsrange(...,1,100)\",\n            sdslen(y) == 3 && memcmp(y,\"iao\\0\",4) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,100,100);\n        test_cond(\"sdsrange(...,100,100)\",\n            sdslen(y) == 0 && memcmp(y,\"\\0\",1) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,4,6);\n        test_cond(\"sdsrange(...,4,6)\",\n            sdslen(y) == 0 && memcmp(y,\"\\0\",1) == 0);\n\n        sdsfree(y);\n        y = sdsdup(x);\n        sdsrange(y,3,6);\n        test_cond(\"sdsrange(...,3,6)\",\n            sdslen(y) == 1 && memcmp(y,\"o\\0\",2) == 0);\n\n        sdsfree(y);\n        sdsfree(x);\n        x = sdsnew(\"foo\");\n        y = sdsnew(\"foa\");\n        test_cond(\"sdscmp(foo,foa)\", sdscmp(x,y) > 0);\n\n        sdsfree(y);\n        sdsfree(x);\n        x = sdsnew(\"bar\");\n        y = sdsnew(\"bar\");\n        test_cond(\"sdscmp(bar,bar)\", sdscmp(x,y) == 0);\n\n        sdsfree(y);\n        sdsfree(x);\n        x = sdsnew(\"aar\");\n        y = sdsnew(\"bar\");\n        test_cond(\"sdscmp(aar,bar)\", sdscmp(x,y) < 0);\n\n        sdsfree(y);\n        sdsfree(x);\n        x = sdsnewlen(\"\\a\\n\\0foo\\r\",7);\n        y = sdscatrepr(sdsempty(),x,sdslen(x));\n        
test_cond(\"sdscatrepr(...data...)\",\n            memcmp(y,\"\\\"\\\\a\\\\n\\\\x00foo\\\\r\\\"\",15) == 0);\n\n        {\n            unsigned int oldfree;\n            char *p;\n            int i;\n            size_t step = 10, j;\n\n            sdsfree(x);\n            sdsfree(y);\n            x = sdsnew(\"0\");\n            test_cond(\"sdsnew() free/len buffers\", sdslen(x) == 1 && sdsavail(x) == 0);\n\n            /* Run the test a few times in order to hit the first two\n             * SDS header types. */\n            for (i = 0; i < 10; i++) {\n                size_t oldlen = sdslen(x);\n                x = sdsMakeRoomFor(x,step);\n                int type = sdsType(x);\n\n                test_cond(\"sdsMakeRoomFor() len\", sdslen(x) == oldlen);\n                if (type != SDS_TYPE_5) {\n                    test_cond(\"sdsMakeRoomFor() free\", sdsavail(x) >= step);\n                    oldfree = sdsavail(x);\n                    UNUSED(oldfree);\n                }\n                p = x+oldlen;\n                for (j = 0; j < step; j++) {\n                    p[j] = 'A'+j;\n                }\n                sdsIncrLen(x,step);\n            }\n            test_cond(\"sdsMakeRoomFor() content\",\n                memcmp(\"0ABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJABCDEFGHIJ\",x,101) == 0);\n            test_cond(\"sdsMakeRoomFor() final length\",sdslen(x)==101);\n\n            sdsfree(x);\n        }\n\n        /* Simple template */\n        x = sdstemplate(\"v1={variable1} v2={variable2}\", sdsTestTemplateCallback, NULL);\n        test_cond(\"sdstemplate() normal flow\",\n                  memcmp(x,\"v1=value1 v2=value2\",19) == 0);\n        sdsfree(x);\n\n        /* Template with callback error */\n        x = sdstemplate(\"v1={variable1} v3={doesnotexist}\", sdsTestTemplateCallback, NULL);\n        test_cond(\"sdstemplate() with callback error\", x == NULL);\n\n        /* Template with empty var name */\n   
     x = sdstemplate(\"v1={\", sdsTestTemplateCallback, NULL);\n        test_cond(\"sdstemplate() with empty var name\", x == NULL);\n\n        /* Template with truncated var name */\n        x = sdstemplate(\"v1={start\", sdsTestTemplateCallback, NULL);\n        test_cond(\"sdstemplate() with truncated var name\", x == NULL);\n\n        /* Template with quoting */\n        x = sdstemplate(\"v1={{{variable1}} {{} v2={variable2}\", sdsTestTemplateCallback, NULL);\n        test_cond(\"sdstemplate() with quoting\",\n                  memcmp(x,\"v1={value1} {} v2=value2\",24) == 0);\n        sdsfree(x);\n\n        /* Test sdsresize - extend */\n        x = sdsnew(\"1234567890123456789012345678901234567890\");\n        x = sdsResize(x, 200, 1);\n        test_cond(\"sdsresize() expand len\", sdslen(x) == 40);\n        test_cond(\"sdsresize() expand strlen\", strlen(x) == 40);\n#if defined(USE_JEMALLOC)\n        /* 224 - hdrlen(3) - 1(\\0) */\n        test_cond(\"sdsresize() expand alloc\", sdsalloc(x) == 220);\n#endif\n        /* Test sdsresize - trim free space */\n        x = sdsResize(x, 80, 1);\n        test_cond(\"sdsresize() shrink len\", sdslen(x) == 40);\n        test_cond(\"sdsresize() shrink strlen\", strlen(x) == 40);\n#if defined(USE_JEMALLOC)\n        /* 96 - hdrlen(3) - 1(\\0) */\n        test_cond(\"sdsresize() shrink alloc\", sdsalloc(x) == 92);\n#endif\n        /* Test sdsresize - crop used space */\n        x = sdsResize(x, 30, 1);\n        test_cond(\"sdsresize() crop len\", sdslen(x) == 30);\n        test_cond(\"sdsresize() crop strlen\", strlen(x) == 30);\n#if defined(USE_JEMALLOC)\n        /* 40 - hdrlen(3) - 1(\\0) */\n        test_cond(\"sdsresize() crop alloc\", sdsalloc(x) == 36);\n#endif\n        /* Test sdsresize - extend to different class */\n        x = sdsResize(x, 400, 1);\n        test_cond(\"sdsresize() expand len\", sdslen(x) == 30);\n        test_cond(\"sdsresize() expand strlen\", strlen(x) == 30);\n#if defined(USE_JEMALLOC)\n        
/* 448 - hdrlen(5) - 1(\\0) */\n        test_cond(\"sdsresize() expand alloc\", sdsalloc(x) == 442);\n#endif\n        /* Test sdsresize - shrink to different class */\n        x = sdsResize(x, 4, 1);\n        test_cond(\"sdsresize() crop len\", sdslen(x) == 4);\n        test_cond(\"sdsresize() crop strlen\", strlen(x) == 4);\n#if defined(USE_JEMALLOC)\n        /* 8 - hdrlen(3) - 1(\\0) */\n        test_cond(\"sdsresize() crop alloc\", sdsalloc(x) == 4);\n#endif\n        sdsfree(x);\n        \n        { /* Test adjustTypeIfNeeded() */\n            /* Test case: Type should be adjusted when buffer size exceeds max size for current type */\n            char type = SDS_TYPE_8;\n            int hdrlen = sdsHdrSize(type);\n            size_t bufsize = (1<<8) + hdrlen + 1; /* Exceeds SDS_TYPE_8 max size */\n\n            int result = adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n            test_cond(\"adjustTypeIfNeeded() returns 1 when type needs adjustment\", result == 1);\n            test_cond(\"adjustTypeIfNeeded() adjusts type correctly\", type == SDS_TYPE_16);\n            test_cond(\"adjustTypeIfNeeded() adjusts header length correctly\", hdrlen == sdsHdrSize(SDS_TYPE_16));\n\n            /* Test case: Type should not be adjusted when buffer size is within max size for current type */\n            type = SDS_TYPE_8;\n            hdrlen = sdsHdrSize(type);\n            bufsize = (1<<8) - 10 + hdrlen + 1; /* Within SDS_TYPE_8 max size */\n            result = adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n            test_cond(\"adjustTypeIfNeeded() returns 0 when type doesn't need adjustment\", result == 0);\n            test_cond(\"adjustTypeIfNeeded() doesn't change type when not needed\", type == SDS_TYPE_8);\n            test_cond(\"adjustTypeIfNeeded() doesn't change header length when not needed\", hdrlen == sdsHdrSize(SDS_TYPE_8));\n\n            /* Test case 3: Type 5 should never be adjusted */\n            type = SDS_TYPE_5;\n            hdrlen = 
sdsHdrSize(type);\n            bufsize = 1000; /* Large buffer size */\n            result = adjustTypeIfNeeded(&type, &hdrlen, bufsize);\n            test_cond(\"adjustTypeIfNeeded() returns 0 for SDS_TYPE_5\", result == 0);\n            test_cond(\"adjustTypeIfNeeded() doesn't change SDS_TYPE_5\", type == SDS_TYPE_5);\n            test_cond(\"adjustTypeIfNeeded() doesn't change header length for SDS_TYPE_5\", hdrlen == sdsHdrSize(SDS_TYPE_5));\n        }\n    }\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/sds.h",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n * \n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved. \n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SDS_H\n#define __SDS_H\n\n#define SDS_MAX_PREALLOC (1024*1024)\nextern const char *SDS_NOINIT;\n\n#include <sys/types.h>\n#include <stdarg.h>\n#include <stdint.h>\n\ntypedef char *sds;\n\n/* Note: sdshdr5 is never used, we just access the flags byte directly.\n * However, it is here to document the layout of type 5 SDS strings. */\nstruct __attribute__ ((__packed__)) sdshdr5 {\n    unsigned char flags; /* 3 lsb of type, and 5 msb of string length */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) sdshdr8 {\n    uint8_t len; /* used */\n    uint8_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) sdshdr16 {\n    uint16_t len; /* used */\n    uint16_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) sdshdr32 {\n    uint32_t len; /* used */\n    uint32_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\nstruct __attribute__ ((__packed__)) sdshdr64 {\n    uint64_t len; /* used */\n    uint64_t alloc; /* excluding the header and null terminator */\n    unsigned char flags; /* 3 lsb of type, 5 unused bits */\n    char buf[];\n};\n\n#define SDS_TYPE_5  0\n#define SDS_TYPE_8  1\n#define SDS_TYPE_16 2\n#define SDS_TYPE_32 3\n#define SDS_TYPE_64 4\n#define SDS_TYPE_MASK 7\n#define SDS_TYPE_BITS 3\n#define 
SDS_HDR_VAR(T,s) struct sdshdr##T *sh = (void*)((s)-(sizeof(struct sdshdr##T)));\n#define SDS_HDR(T,s) ((struct sdshdr##T *)((s)-(sizeof(struct sdshdr##T))))\n#define SDS_TYPE_5_LEN(s) (((unsigned char)(s[-1])) >> SDS_TYPE_BITS)\n\nstatic inline unsigned char sdsType(sds s) {\n    unsigned char flags = s[-1];\n    return flags & SDS_TYPE_MASK;\n}\n\n/* Returns a user data bit stored in the SDS header by sdsSetAuxBit. The bit\n * index is 0-4. Returns 0 or 1. Always returns 0 for SDS_TYPE_5. */\nstatic inline int sdsGetAuxBit(sds s, int bit) {\n    if (sdsType(s) == SDS_TYPE_5) \n        return 0;\n    \n    unsigned char flags = s[-1];\n    return (flags & (1U << (SDS_TYPE_BITS + bit))) != 0U;\n}\n\n/* Stores a bit in an unused area in the SDS header, except for SDS_TYPE_5. The\n * bit index is 0-4. The value is 0 or 1. The aux bits are lost if the SDS is\n * auto-resized. This is only for special uses like immutable SDS embedded in\n * other structures. */\nstatic inline void sdsSetAuxBit(sds s, int bit, int value) {\n    if (sdsType(s) == SDS_TYPE_5) return;\n    unsigned char flags = s[-1];\n    if (value) {\n        flags |= 1U << (SDS_TYPE_BITS + bit);\n    } else {\n        flags &= ~(1U << (SDS_TYPE_BITS + bit));\n    }\n    s[-1] = (char)flags;\n}\n\nstatic inline size_t sdslen(const sds s) {\n    switch (sdsType(s)) {\n        case SDS_TYPE_5: return SDS_TYPE_5_LEN(s);\n        case SDS_TYPE_8:\n            return SDS_HDR(8,s)->len;\n        case SDS_TYPE_16:\n            return SDS_HDR(16,s)->len;\n        case SDS_TYPE_32:\n            return SDS_HDR(32,s)->len;\n        case SDS_TYPE_64:\n            return SDS_HDR(64,s)->len;\n    }\n    return 0;\n}\n\nstatic inline size_t sdsavail(const sds s) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5: {\n            return 0;\n        }\n        case SDS_TYPE_8: {\n            SDS_HDR_VAR(8,s);\n            return sh->alloc - sh->len;\n        }\n        case SDS_TYPE_16: {\n            
SDS_HDR_VAR(16,s);\n            return sh->alloc - sh->len;\n        }\n        case SDS_TYPE_32: {\n            SDS_HDR_VAR(32,s);\n            return sh->alloc - sh->len;\n        }\n        case SDS_TYPE_64: {\n            SDS_HDR_VAR(64,s);\n            return sh->alloc - sh->len;\n        }\n    }\n    return 0;\n}\n\nstatic inline void sdssetlen(sds s, size_t newlen) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5:\n            {\n                unsigned char *fp = ((unsigned char*)s)-1;\n                *fp = SDS_TYPE_5 | (newlen << SDS_TYPE_BITS);\n            }\n            break;\n        case SDS_TYPE_8:\n            SDS_HDR(8,s)->len = newlen;\n            break;\n        case SDS_TYPE_16:\n            SDS_HDR(16,s)->len = newlen;\n            break;\n        case SDS_TYPE_32:\n            SDS_HDR(32,s)->len = newlen;\n            break;\n        case SDS_TYPE_64:\n            SDS_HDR(64,s)->len = newlen;\n            break;\n    }\n}\n\nstatic inline void sdsinclen(sds s, size_t inc) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5:\n            {\n                unsigned char *fp = ((unsigned char*)s)-1;\n                unsigned char newlen = SDS_TYPE_5_LEN(s)+inc;\n                *fp = SDS_TYPE_5 | (newlen << SDS_TYPE_BITS);\n            }\n            break;\n        case SDS_TYPE_8:\n            SDS_HDR(8,s)->len += inc;\n            break;\n        case SDS_TYPE_16:\n            SDS_HDR(16,s)->len += inc;\n            break;\n        case SDS_TYPE_32:\n            SDS_HDR(32,s)->len += inc;\n            break;\n        case SDS_TYPE_64:\n            SDS_HDR(64,s)->len += inc;\n            break;\n    }\n}\n\n/* Return the total size of the allocation of the specified sds string,\n * including:\n * 1) The sds header before the pointer.\n * 2) The string.\n * 3) The free buffer at the end if any.\n * 4) The implicit null term.\n */\nstatic inline size_t sdsAllocSize(sds s) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5:\n          
  return sizeof(struct sdshdr5) + SDS_TYPE_5_LEN(s) + 1;\n        case SDS_TYPE_8:\n            return sizeof(struct sdshdr8) + SDS_HDR(8,s)->alloc + 1;\n        case SDS_TYPE_16:\n            return sizeof(struct sdshdr16) + SDS_HDR(16,s)->alloc + 1;\n        case SDS_TYPE_32:\n            return sizeof(struct sdshdr32) + SDS_HDR(32,s)->alloc + 1;\n        case SDS_TYPE_64:\n            return sizeof(struct sdshdr64) + SDS_HDR(64,s)->alloc + 1;\n    }\n    return 0;\n}\n\n/* sdsalloc() = sdsavail() + sdslen() */\nstatic inline size_t sdsalloc(const sds s) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5:\n            return SDS_TYPE_5_LEN(s);\n        case SDS_TYPE_8:\n            return SDS_HDR(8,s)->alloc;\n        case SDS_TYPE_16:\n            return SDS_HDR(16,s)->alloc;\n        case SDS_TYPE_32:\n            return SDS_HDR(32,s)->alloc;\n        case SDS_TYPE_64:\n            return SDS_HDR(64,s)->alloc;\n    }\n    return 0;\n}\n\nstatic inline void sdssetalloc(sds s, size_t newlen) {\n    switch(sdsType(s)) {\n        case SDS_TYPE_5:\n            /* Nothing to do, this type has no total allocation info. 
*/\n            break;\n        case SDS_TYPE_8:\n            SDS_HDR(8,s)->alloc = newlen;\n            break;\n        case SDS_TYPE_16:\n            SDS_HDR(16,s)->alloc = newlen;\n            break;\n        case SDS_TYPE_32:\n            SDS_HDR(32,s)->alloc = newlen;\n            break;\n        case SDS_TYPE_64:\n            SDS_HDR(64,s)->alloc = newlen;\n            break;\n    }\n}\n\nstatic inline int sdsHdrSize(char type) {\n    switch(type&SDS_TYPE_MASK) {\n        case SDS_TYPE_5:\n            return sizeof(struct sdshdr5);\n        case SDS_TYPE_8:\n            return sizeof(struct sdshdr8);\n        case SDS_TYPE_16:\n            return sizeof(struct sdshdr16);\n        case SDS_TYPE_32:\n            return sizeof(struct sdshdr32);\n        case SDS_TYPE_64:\n            return sizeof(struct sdshdr64);\n    }\n    return 0;\n}\n\nsds sdsnewlen(const void *init, size_t initlen);\nsds sdstrynewlen(const void *init, size_t initlen);\nsds sdsnew(const char *init);\nsds sdsnewplacement(char *buf, size_t bufsize, char type, const char *init, size_t initlen);\n\nsds sdsempty(void);\nsds sdsdup(const sds s);\nvoid sdsfree(sds s);\nvoid sdsfreegeneric(void *s);\nvoid sdsfreeusable(sds s, size_t *usable);\nsds sdsgrowzero(sds s, size_t len);\nsds sdscatlen(sds s, const void *t, size_t len);\nsds sdscat(sds s, const char *t);\nsds sdscatsds(sds s, const sds t);\nsds sdscpylen(sds s, const char *t, size_t len);\nsds sdscpy(sds s, const char *t);\n\nsds sdscatvprintf(sds s, const char *fmt, va_list ap);\n#ifdef __GNUC__\nsds sdscatprintf(sds s, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\n#else\nsds sdscatprintf(sds s, const char *fmt, ...);\n#endif\n\nsds sdscatfmt(sds s, char const *fmt, ...);\nsds sdstrim(sds s, const char *cset);\nvoid sdssubstr(sds s, size_t start, size_t len);\nvoid sdsrange(sds s, ssize_t start, ssize_t end);\nvoid sdsupdatelen(sds s);\nvoid sdsclear(sds s);\nint sdscmp(const sds s1, const sds s2);\nsds 
*sdssplitlen(const char *s, ssize_t len, const char *sep, int seplen, int *count);\nvoid sdsfreesplitres(sds *tokens, int count);\nvoid sdstolower(sds s);\nvoid sdstoupper(sds s);\nsds sdsfromlonglong(long long value);\nsds sdscatrepr(sds s, const char *p, size_t len);\nsds *sdssplitargs(const char *line, int *argc);\nsds sdsmapchars(sds s, const char *from, const char *to, size_t setlen);\nsds sdsjoin(char **argv, int argc, char *sep);\nsds sdsjoinsds(sds *argv, int argc, const char *sep, size_t seplen);\nint sdsneedsrepr(const sds s);\n\n/* Callback for sdstemplate. The function gets called by sdstemplate\n * every time a variable needs to be expanded. The variable name is\n * provided as variable, and the callback is expected to return a\n * substitution value. Returning a NULL indicates an error.\n */\ntypedef sds (*sdstemplate_callback_t)(const sds variable, void *arg);\nsds sdstemplate(const char *template, sdstemplate_callback_t cb_func, void *cb_arg);\n\n/* Low level functions exposed to the user API */\nchar sdsReqType(size_t string_size);\nsds sdsMakeRoomFor(sds s, size_t addlen);\nsds sdsMakeRoomForNonGreedy(sds s, size_t addlen);\nvoid sdsIncrLen(sds s, ssize_t incr);\nsds sdsRemoveFreeSpace(sds s, int would_regrow);\nsds sdsResize(sds s, size_t size, int would_regrow);\nvoid *sdsAllocPtr(sds s);\n\n/* Returns the minimum required size to store an sds string of the given length\n * and type. */\nstatic inline size_t sdsReqSize(size_t len, char type) {\n    return len + sdsHdrSize(type) + 1;\n}\n\n/* Export the allocator used by SDS to the program using SDS.\n * Sometimes the program SDS is linked to may use a different set of\n * allocators, but may want to allocate or free things that SDS will\n * respectively free or allocate. */\nvoid *sds_malloc(size_t size);\nvoid *sds_realloc(void *ptr, size_t size);\nvoid sds_free(void *ptr);\n\n#ifdef REDIS_TEST\nint sdsTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/sdsalloc.h",
    "content": "/* SDSLib 2.0 -- A C dynamic strings library\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* SDS allocator selection.\n *\n * This file is used in order to change the SDS allocator at compile time.\n * Just define the following defines to what you want to use. Also add\n * the include of your alternate allocator if needed (not needed in order\n * to use the default libc allocator). */\n\n#ifndef __SDS_ALLOC_H__\n#define __SDS_ALLOC_H__\n\n#include \"zmalloc.h\"\n#define s_malloc zmalloc\n#define s_realloc zrealloc\n#define s_trymalloc ztrymalloc\n#define s_tryrealloc ztryrealloc\n#define s_free zfree\n#define s_malloc_usable zmalloc_usable\n#define s_realloc_usable zrealloc_usable\n#define s_trymalloc_usable ztrymalloc_usable\n#define s_tryrealloc_usable ztryrealloc_usable\n#define s_free_usable zfree_usable\n\n#endif\n"
  },
  {
    "path": "src/sentinel.c",
    "content": "/* Redis Sentinel implementation\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"hiredis.h\"\n#if USE_OPENSSL == 1 /* BUILD_YES */\n#include \"openssl/ssl.h\"\n#include \"hiredis_ssl.h\"\n#endif\n#include \"async.h\"\n\n#include <ctype.h>\n#include <arpa/inet.h>\n#include <sys/socket.h>\n#include <sys/wait.h>\n#include <fcntl.h>\n\nextern char **environ;\n\n#if USE_OPENSSL == 1 /* BUILD_YES */\nextern SSL_CTX *redis_tls_ctx;\nextern SSL_CTX *redis_tls_client_ctx;\n#endif\n\n#define REDIS_SENTINEL_PORT 26379\n\n/* ======================== Sentinel global state =========================== */\n\n/* Address object, used to describe an ip:port pair. */\ntypedef struct sentinelAddr {\n    char *hostname;         /* Hostname OR address, as specified */\n    char *ip;               /* Always a resolved address */\n    int port;\n} sentinelAddr;\n\n/* A Sentinel Redis Instance object is monitoring. */\n#define SRI_MASTER  (1<<0)\n#define SRI_SLAVE   (1<<1)\n#define SRI_SENTINEL (1<<2)\n#define SRI_S_DOWN (1<<3)   /* Subjectively down (no quorum). */\n#define SRI_O_DOWN (1<<4)   /* Objectively down (confirmed by others). */\n#define SRI_MASTER_DOWN (1<<5) /* A Sentinel with this flag set thinks that\n                                   its master is down. */\n#define SRI_FAILOVER_IN_PROGRESS (1<<6) /* Failover is in progress for\n                                           this master. */\n#define SRI_PROMOTED (1<<7)            /* Slave selected for promotion. */\n#define SRI_RECONF_SENT (1<<8)     /* SLAVEOF <newmaster> sent. */\n#define SRI_RECONF_INPROG (1<<9)   /* Slave synchronization in progress. */\n#define SRI_RECONF_DONE (1<<10)     /* Slave synchronized with new master. 
*/\n#define SRI_FORCE_FAILOVER (1<<11)  /* Force failover with master up. */\n#define SRI_SCRIPT_KILL_SENT (1<<12) /* SCRIPT KILL already sent on -BUSY */\n#define SRI_MASTER_REBOOT  (1<<13)   /* Master was detected as rebooting */\n/* Note: when adding new flags, please check the flags section in addReplySentinelRedisInstance. */\n\n/* Note: times are in milliseconds. */\n#define SENTINEL_PING_PERIOD 1000\n\nstatic mstime_t sentinel_info_period = 10000;\nstatic mstime_t sentinel_ping_period = SENTINEL_PING_PERIOD;\nstatic mstime_t sentinel_ask_period = 1000;\nstatic mstime_t sentinel_publish_period = 2000;\nstatic mstime_t sentinel_default_down_after = 30000;\nstatic mstime_t sentinel_tilt_trigger = 2000;\nstatic mstime_t sentinel_tilt_period = SENTINEL_PING_PERIOD * 30;\nstatic mstime_t sentinel_slave_reconf_timeout = 10000;\nstatic mstime_t sentinel_min_link_reconnect_period = 15000;\nstatic mstime_t sentinel_election_timeout = 10000;\nstatic mstime_t sentinel_script_max_runtime = 60000;  /* 60 seconds max exec time. */\nstatic mstime_t sentinel_script_retry_delay = 30000;  /* 30 seconds between retries. */\nstatic mstime_t sentinel_default_failover_timeout = 60*3*1000;\n\n#define SENTINEL_HELLO_CHANNEL \"__sentinel__:hello\"\n#define SENTINEL_DEFAULT_SLAVE_PRIORITY 100\n#define SENTINEL_DEFAULT_PARALLEL_SYNCS 1\n#define SENTINEL_MAX_PENDING_COMMANDS 100\n\n#define SENTINEL_MAX_DESYNC 1000\n#define SENTINEL_DEFAULT_DENY_SCRIPTS_RECONFIG 1\n#define SENTINEL_DEFAULT_RESOLVE_HOSTNAMES 0\n#define SENTINEL_DEFAULT_ANNOUNCE_HOSTNAMES 0\n\n/* Failover machine different states. */\n#define SENTINEL_FAILOVER_STATE_NONE 0  /* No failover in progress. 
*/\n#define SENTINEL_FAILOVER_STATE_WAIT_START 1  /* Wait for failover_start_time*/\n#define SENTINEL_FAILOVER_STATE_SELECT_SLAVE 2 /* Select slave to promote */\n#define SENTINEL_FAILOVER_STATE_SEND_SLAVEOF_NOONE 3 /* Slave -> Master */\n#define SENTINEL_FAILOVER_STATE_WAIT_PROMOTION 4 /* Wait slave to change role */\n#define SENTINEL_FAILOVER_STATE_RECONF_SLAVES 5 /* SLAVEOF newmaster */\n#define SENTINEL_FAILOVER_STATE_UPDATE_CONFIG 6 /* Monitor promoted slave. */\n\n#define SENTINEL_MASTER_LINK_STATUS_UP 0\n#define SENTINEL_MASTER_LINK_STATUS_DOWN 1\n\n/* Generic flags that can be used with different functions.\n * They use higher bits to avoid colliding with the function specific\n * flags. */\n#define SENTINEL_NO_FLAGS 0\n#define SENTINEL_GENERATE_EVENT (1<<16)\n#define SENTINEL_LEADER (1<<17)\n#define SENTINEL_OBSERVER (1<<18)\n\n/* Script execution flags and limits. */\n#define SENTINEL_SCRIPT_NONE 0\n#define SENTINEL_SCRIPT_RUNNING 1\n#define SENTINEL_SCRIPT_MAX_QUEUE 256\n#define SENTINEL_SCRIPT_MAX_RUNNING 16\n#define SENTINEL_SCRIPT_MAX_RETRY 10\n\n/* SENTINEL SIMULATE-FAILURE command flags. */\n#define SENTINEL_SIMFAILURE_NONE 0\n#define SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION (1<<0)\n#define SENTINEL_SIMFAILURE_CRASH_AFTER_PROMOTION (1<<1)\n\n/* The link to a sentinelRedisInstance. When we have the same set of Sentinels\n * monitoring many masters, we have different instances representing the\n * same Sentinels, one per master, and we need to share the hiredis connections\n * among them. Otherwise if 5 Sentinels are monitoring 100 masters we create\n * 500 outgoing connections instead of 5.\n *\n * So this structure represents a reference counted link in terms of the two\n * hiredis connections for commands and Pub/Sub, and the fields needed for\n * failure detection, since the ping/pong time are now local to the link: if\n * the link is available, the instance is available. 
This way we don't just\n * have 5 connections instead of 500, we also send 5 pings instead of 500.\n *\n * Links are shared only for Sentinels: master and slave instances have\n * a link with refcount = 1, always. */\ntypedef struct instanceLink {\n    int refcount;          /* Number of sentinelRedisInstance owners. */\n    int disconnected;      /* Non-zero if we need to reconnect cc or pc. */\n    int pending_commands;  /* Number of commands sent waiting for a reply. */\n    redisAsyncContext *cc; /* Hiredis context for commands. */\n    redisAsyncContext *pc; /* Hiredis context for Pub / Sub. */\n    mstime_t cc_conn_time; /* cc connection time. */\n    mstime_t pc_conn_time; /* pc connection time. */\n    mstime_t pc_last_activity; /* Last time we received any message. */\n    mstime_t last_avail_time; /* Last time the instance replied to ping with\n                                 a reply we consider valid. */\n    mstime_t act_ping_time;   /* Time at which the last pending ping (no pong\n                                 received after it) was sent. This field is\n                                 set to 0 when a pong is received, and set again\n                                 to the current time if the value is 0 and a new\n                                 ping is sent. */\n    mstime_t last_ping_time;  /* Time at which we sent the last ping. This is\n                                 only used to avoid sending too many pings\n                                 during failure. Idle time is computed using\n                                 the act_ping_time field. */\n    mstime_t last_pong_time;  /* Last time the instance replied to ping,\n                                 whatever the reply was. That's used to check\n                                 if the link is idle and must be reconnected. */\n    mstime_t last_reconn_time;  /* Last reconnection attempt performed when\n                                   the link was down. 
*/\n} instanceLink;\n\ntypedef struct sentinelRedisInstance {\n    int flags;      /* See SRI_... defines */\n    char *name;     /* Master name from the point of view of this sentinel. */\n    char *runid;    /* Run ID of this instance, or unique ID if is a Sentinel.*/\n    uint64_t config_epoch;  /* Configuration epoch. */\n    sentinelAddr *addr; /* Master host. */\n    instanceLink *link; /* Link to the instance, may be shared for Sentinels. */\n    mstime_t last_pub_time;   /* Last time we sent hello via Pub/Sub. */\n    mstime_t last_hello_time; /* Only used if SRI_SENTINEL is set. Last time\n                                 we received a hello from this Sentinel\n                                 via Pub/Sub. */\n    mstime_t last_master_down_reply_time; /* Time of last reply to\n                                             SENTINEL is-master-down command. */\n    mstime_t s_down_since_time; /* Subjectively down since time. */\n    mstime_t o_down_since_time; /* Objectively down since time. */\n    mstime_t down_after_period; /* Consider it down after that period. */\n    mstime_t master_reboot_down_after_period; /* Consider master down after that period. */\n    mstime_t master_reboot_since_time; /* master reboot time since time. */\n    mstime_t info_refresh;  /* Time at which we received INFO output from it. */\n    dict *renamed_commands;     /* Commands renamed in this instance:\n                                   Sentinel will use the alternative commands\n                                   mapped on this table to send things like\n                                   SLAVEOF, CONFIG, INFO, ... */\n\n    /* Role and the first time we observed it.\n     * This is useful in order to delay replacing what the instance reports\n     * with our own configuration. We need to always wait some time in order\n     * to give a chance to the leader to report the new configuration before\n     * we do silly things. 
*/\n    int role_reported;\n    mstime_t role_reported_time;\n    mstime_t slave_conf_change_time; /* Last time slave master addr changed. */\n\n    /* Master specific. */\n    dict *sentinels;    /* Other sentinels monitoring the same master. */\n    dict *slaves;       /* Slaves for this master instance. */\n    unsigned int quorum;/* Number of sentinels that need to agree on failure. */\n    int parallel_syncs; /* How many slaves to reconfigure at same time. */\n    char *auth_pass;    /* Password to use for AUTH against master & replica. */\n    char *auth_user;    /* Username for ACLs AUTH against master & replica. */\n\n    /* Slave specific. */\n    mstime_t master_link_down_time; /* Slave replication link down time. */\n    int slave_priority; /* Slave priority according to its INFO output. */\n    int replica_announced; /* Replica announcing according to its INFO output. */\n    mstime_t slave_reconf_sent_time; /* Time at which we sent SLAVE OF <new> */\n    struct sentinelRedisInstance *master; /* Master instance if it's slave. */\n    char *slave_master_host;    /* Master host as reported by INFO */\n    int slave_master_port;      /* Master port as reported by INFO */\n    int slave_master_link_status; /* Master link status as reported by INFO */\n    unsigned long long slave_repl_offset; /* Slave replication offset. */\n    /* Failover */\n    char *leader;       /* If this is a master instance, this is the runid of\n                           the Sentinel that should perform the failover. If\n                           this is a Sentinel, this is the runid of the Sentinel\n                           that this Sentinel voted as leader. */\n    uint64_t leader_epoch; /* Epoch of the 'leader' field. */\n    uint64_t failover_epoch; /* Epoch of the currently started failover. */\n    int failover_state; /* See SENTINEL_FAILOVER_STATE_* defines. 
*/\n    mstime_t failover_state_change_time;\n    mstime_t failover_start_time;   /* Last failover attempt start time. */\n    mstime_t failover_timeout;      /* Max time to refresh failover state. */\n    mstime_t failover_delay_logged; /* For what failover_start_time value we\n                                       logged the failover delay. */\n    struct sentinelRedisInstance *promoted_slave; /* Promoted slave instance. */\n    /* Scripts executed to notify admin or reconfigure clients: when they\n     * are set to NULL no script is executed. */\n    char *notification_script;\n    char *client_reconfig_script;\n    sds info; /* cached INFO output */\n} sentinelRedisInstance;\n\n/* Main state. */\nstruct sentinelState {\n    char myid[CONFIG_RUN_ID_SIZE+1]; /* This sentinel ID. */\n    uint64_t current_epoch;         /* Current epoch. */\n    dict *masters;      /* Dictionary of master sentinelRedisInstances.\n                           Key is the instance name, value is the\n                           sentinelRedisInstance structure pointer. */\n    int tilt;           /* Are we in TILT mode? */\n    int total_tilt;   /* Number of TILT events. */\n    int running_scripts;    /* Number of scripts in execution right now. */\n    mstime_t tilt_start_time;       /* When TILT started. */\n    mstime_t previous_time;         /* Last time we ran the time handler. */\n    list *scripts_queue;            /* Queue of user scripts to execute. */\n    char *announce_ip;  /* IP addr that is gossiped to other sentinels if\n                           not NULL. */\n    int announce_port;  /* Port that is gossiped to other sentinels if\n                           non zero. */\n    unsigned long simfailure_flags; /* Failures simulation. */\n    int deny_scripts_reconfig; /* Allow SENTINEL SET ... to change script\n                                  paths at runtime? 
*/\n    char *sentinel_auth_pass;    /* Password to use for AUTH against other sentinel */\n    char *sentinel_auth_user;    /* Username for ACLs AUTH against other sentinel. */\n    int resolve_hostnames;       /* Support use of hostnames, assuming DNS is well configured. */\n    int announce_hostnames;      /* Announce hostnames instead of IPs when we have them. */\n} sentinel;\n\n/* A script execution job. */\ntypedef struct sentinelScriptJob {\n    int flags;              /* Script job flags: SENTINEL_SCRIPT_* */\n    int retry_num;          /* Number of times we tried to execute it. */\n    char **argv;            /* Arguments to call the script. */\n    mstime_t start_time;    /* Script execution time if the script is running,\n                               otherwise 0 if we are allowed to retry the\n                               execution at any time. If the script is not\n                               running and it's not 0, it means: do not run\n                               before the specified time. */\n    pid_t pid;              /* Script execution pid. */\n} sentinelScriptJob;\n\n/* ======================= hiredis ae.c adapters =============================\n * Note: this implementation is taken from hiredis/adapters/ae.h, however\n * we have our modified copy for Sentinel in order to use our allocator\n * and to have full control over how the adapter works. 
*/\n\ntypedef struct redisAeEvents {\n    redisAsyncContext *context;\n    aeEventLoop *loop;\n    int fd;\n    int reading, writing;\n} redisAeEvents;\n\nstatic void redisAeReadEvent(aeEventLoop *el, int fd, void *privdata, int mask) {\n    ((void)el); ((void)fd); ((void)mask);\n\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAsyncHandleRead(e->context);\n}\n\nstatic void redisAeWriteEvent(aeEventLoop *el, int fd, void *privdata, int mask) {\n    ((void)el); ((void)fd); ((void)mask);\n\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAsyncHandleWrite(e->context);\n}\n\nstatic void redisAeAddRead(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (!e->reading) {\n        e->reading = 1;\n        aeCreateFileEvent(loop,e->fd,AE_READABLE,redisAeReadEvent,e);\n    }\n}\n\nstatic void redisAeDelRead(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (e->reading) {\n        e->reading = 0;\n        aeDeleteFileEvent(loop,e->fd,AE_READABLE);\n    }\n}\n\nstatic void redisAeAddWrite(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (!e->writing) {\n        e->writing = 1;\n        aeCreateFileEvent(loop,e->fd,AE_WRITABLE,redisAeWriteEvent,e);\n    }\n}\n\nstatic void redisAeDelWrite(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    aeEventLoop *loop = e->loop;\n    if (e->writing) {\n        e->writing = 0;\n        aeDeleteFileEvent(loop,e->fd,AE_WRITABLE);\n    }\n}\n\nstatic void redisAeCleanup(void *privdata) {\n    redisAeEvents *e = (redisAeEvents*)privdata;\n    redisAeDelRead(privdata);\n    redisAeDelWrite(privdata);\n    zfree(e);\n}\n\nstatic int redisAeAttach(aeEventLoop *loop, redisAsyncContext *ac) {\n    redisContext *c = &(ac->c);\n    redisAeEvents *e;\n\n    /* Nothing should be attached when something is already attached */\n    if 
(ac->ev.data != NULL)\n        return C_ERR;\n\n    /* Create container for context and r/w events */\n    e = (redisAeEvents*)zmalloc(sizeof(*e));\n    e->context = ac;\n    e->loop = loop;\n    e->fd = c->fd;\n    e->reading = e->writing = 0;\n\n    /* Register functions to start/stop listening for events */\n    ac->ev.addRead = redisAeAddRead;\n    ac->ev.delRead = redisAeDelRead;\n    ac->ev.addWrite = redisAeAddWrite;\n    ac->ev.delWrite = redisAeDelWrite;\n    ac->ev.cleanup = redisAeCleanup;\n    ac->ev.data = e;\n\n    return C_OK;\n}\n\n/* ============================= Prototypes ================================= */\n\nvoid sentinelLinkEstablishedCallback(const redisAsyncContext *c, int status);\nvoid sentinelDisconnectCallback(const redisAsyncContext *c, int status);\nvoid sentinelReceiveHelloMessages(redisAsyncContext *c, void *reply, void *privdata);\nsentinelRedisInstance *sentinelGetMasterByName(char *name);\nchar *sentinelGetSubjectiveLeader(sentinelRedisInstance *master);\nchar *sentinelGetObjectiveLeader(sentinelRedisInstance *master);\nvoid instanceLinkConnectionError(const redisAsyncContext *c);\nconst char *sentinelRedisInstanceTypeStr(sentinelRedisInstance *ri);\nvoid sentinelAbortFailover(sentinelRedisInstance *ri);\nvoid sentinelEvent(int level, char *type, sentinelRedisInstance *ri, const char *fmt, ...);\nsentinelRedisInstance *sentinelSelectSlave(sentinelRedisInstance *master);\nvoid sentinelScheduleScriptExecution(char *path, ...);\nvoid sentinelStartFailover(sentinelRedisInstance *master);\nvoid sentinelDiscardReplyCallback(redisAsyncContext *c, void *reply, void *privdata);\nint sentinelSendSlaveOf(sentinelRedisInstance *ri, const sentinelAddr *addr);\nchar *sentinelVoteLeader(sentinelRedisInstance *master, uint64_t req_epoch, char *req_runid, uint64_t *leader_epoch);\nint sentinelFlushConfig(void);\nvoid sentinelGenerateInitialMonitorEvents(void);\nint sentinelSendPing(sentinelRedisInstance *ri);\nint 
sentinelForceHelloUpdateForMaster(sentinelRedisInstance *master);\nsentinelRedisInstance *getSentinelRedisInstanceByAddrAndRunID(dict *instances, char *ip, int port, char *runid);\nvoid sentinelSimFailureCrash(void);\n\n/* ========================= Dictionary types =============================== */\n\nvoid releaseSentinelRedisInstance(sentinelRedisInstance *ri);\n\nvoid dictInstancesValDestructor (dict *d, void *obj) {\n    UNUSED(d);\n    releaseSentinelRedisInstance(obj);\n}\n\n/* Instance name (sds) -> instance (sentinelRedisInstance pointer)\n *\n * also used for: sentinelRedisInstance->sentinels dictionary that maps\n * sentinels ip:port to last seen time in Pub/Sub hello message. */\ndictType instancesDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    NULL,                      /* key destructor */\n    dictInstancesValDestructor,/* val destructor */\n    NULL                       /* allow to expand */\n};\n\n/* Instance runid (sds) -> votes (long cast to void*)\n *\n * This is useful in the sentinelGetObjectiveLeader() function in order to\n * count the votes and understand who is the leader. */\ndictType leaderVotesDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    NULL,                      /* key destructor */\n    NULL,                      /* val destructor */\n    NULL                       /* allow to expand */\n};\n\n/* Instance renamed commands table. 
*/\ndictType renamedCommandsDictType = {\n    dictSdsCaseHash,           /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCaseCompare,     /* key compare */\n    dictSdsDestructor,         /* key destructor */\n    dictSdsDestructor,         /* val destructor */\n    NULL                       /* allow to expand */\n};\n\n/* =========================== Initialization =============================== */\n\nvoid sentinelSetCommand(client *c);\nvoid sentinelConfigGetCommand(client *c);\nvoid sentinelConfigSetCommand(client *c);\n\n/* This array is used for sentinel config lookup, which needs to be loaded\n * before the monitored masters' config to avoid dependency issues */\nconst char *preMonitorCfgName[] = { \n    \"announce-ip\",\n    \"announce-port\",\n    \"deny-scripts-reconfig\",\n    \"sentinel-user\",\n    \"sentinel-pass\",\n    \"current-epoch\",\n    \"myid\",\n    \"resolve-hostnames\",\n    \"announce-hostnames\"\n};\n\n/* This function overwrites a few normal Redis config defaults with Sentinel\n * specific defaults. */\nvoid initSentinelConfig(void) {\n    server.port = REDIS_SENTINEL_PORT;\n    server.protected_mode = 0; /* Sentinel must be exposed. */\n}\n\nvoid freeSentinelLoadQueueEntry(void *item);\n\n/* Perform the Sentinel mode initialization. */\nvoid initSentinel(void) {\n    /* Initialize various data structures. 
*/\n    sentinel.current_epoch = 0;\n    sentinel.masters = dictCreate(&instancesDictType);\n    sentinel.tilt = 0;\n    sentinel.tilt_start_time = 0;\n    sentinel.total_tilt = 0;\n    sentinel.previous_time = mstime();\n    sentinel.running_scripts = 0;\n    sentinel.scripts_queue = listCreate();\n    sentinel.announce_ip = NULL;\n    sentinel.announce_port = 0;\n    sentinel.simfailure_flags = SENTINEL_SIMFAILURE_NONE;\n    sentinel.deny_scripts_reconfig = SENTINEL_DEFAULT_DENY_SCRIPTS_RECONFIG;\n    sentinel.sentinel_auth_pass = NULL;\n    sentinel.sentinel_auth_user = NULL;\n    sentinel.resolve_hostnames = SENTINEL_DEFAULT_RESOLVE_HOSTNAMES;\n    sentinel.announce_hostnames = SENTINEL_DEFAULT_ANNOUNCE_HOSTNAMES;\n    memset(sentinel.myid,0,sizeof(sentinel.myid));\n    server.sentinel_config = NULL;\n}\n\n/* This function checks whether a sentinel config file has been set and\n * whether we have write permission to it. */\nvoid sentinelCheckConfigFile(void) {\n    if (server.configfile == NULL) {\n        serverLog(LL_WARNING,\n            \"Sentinel needs config file on disk to save state. Exiting...\");\n        exit(1);\n    } else if (access(server.configfile,W_OK) == -1) {\n        serverLog(LL_WARNING,\n            \"Sentinel config file %s is not writable: %s. Exiting...\",\n            server.configfile,strerror(errno));\n        exit(1);\n    }\n}\n\n/* This function is called when the server, running in Sentinel mode, has\n * started, loaded the configuration, and is ready for normal operations. */\nvoid sentinelIsRunning(void) {\n    int j;\n\n    /* If this Sentinel has no ID set yet in the configuration file, we\n     * pick a random one and persist it in the config on disk. From now on\n     * this will be this Sentinel's ID across restarts. */\n    for (j = 0; j < CONFIG_RUN_ID_SIZE; j++)\n        if (sentinel.myid[j] != 0) break;\n\n    if (j == CONFIG_RUN_ID_SIZE) {\n        /* Pick ID and persist the config. 
*/\n        getRandomHexChars(sentinel.myid,CONFIG_RUN_ID_SIZE);\n        sentinelFlushConfig();\n    }\n\n    /* Log its ID to make debugging of issues simpler. */\n    serverLog(LL_NOTICE,\"Sentinel ID is %s\", sentinel.myid);\n\n    /* We want to generate a +monitor event for every configured master\n     * at startup. */\n    sentinelGenerateInitialMonitorEvents();\n}\n\n/* ============================== sentinelAddr ============================== */\n\n/* Create a sentinelAddr object and return it on success.\n * On error NULL is returned and errno is set to:\n *  ENOENT: Can't resolve the hostname, unless accept_unresolved is non-zero.\n *  EINVAL: Invalid port number.\n */\nsentinelAddr *createSentinelAddr(char *hostname, int port, int is_accept_unresolved) {\n    char ip[NET_IP_STR_LEN];\n    sentinelAddr *sa;\n\n    if (port < 0 || port > 65535) {\n        errno = EINVAL;\n        return NULL;\n    }\n    if (anetResolve(NULL,hostname,ip,sizeof(ip),\n                    sentinel.resolve_hostnames ? ANET_NONE : ANET_IP_ONLY) == ANET_ERR) {\n        serverLog(LL_WARNING, \"Failed to resolve hostname '%s'\", hostname);\n        if (sentinel.resolve_hostnames && is_accept_unresolved) {\n            ip[0] = '\\0';\n        }\n        else {\n            errno = ENOENT;\n            return NULL;\n        }\n    }\n    sa = zmalloc(sizeof(*sa));\n    sa->hostname = sdsnew(hostname);\n    sa->ip = sdsnew(ip);\n    sa->port = port;\n    return sa;\n}\n\n/* Return a duplicate of the source address. */\nsentinelAddr *dupSentinelAddr(sentinelAddr *src) {\n    sentinelAddr *sa;\n\n    sa = zmalloc(sizeof(*sa));\n    sa->hostname = sdsnew(src->hostname);\n    sa->ip = sdsnew(src->ip);\n    sa->port = src->port;\n    return sa;\n}\n\n/* Free a Sentinel address. Can't fail. 
*/\nvoid releaseSentinelAddr(sentinelAddr *sa) {\n    sdsfree(sa->hostname);\n    sdsfree(sa->ip);\n    zfree(sa);\n}\n\n/* Return non-zero if the two addresses are equal, either by address\n * or by hostname if they could not be resolved.\n */\nint sentinelAddrOrHostnameEqual(sentinelAddr *a, sentinelAddr *b) {\n    return a->port == b->port &&\n            (!strcmp(a->ip, b->ip)  ||\n            !strcasecmp(a->hostname, b->hostname));\n}\n\n/* Return non-zero if a hostname matches an address. */\nint sentinelAddrEqualsHostname(sentinelAddr *a, char *hostname) {\n    char ip[NET_IP_STR_LEN];\n\n    /* Try to resolve the hostname and compare it to the address */\n    if (anetResolve(NULL, hostname, ip, sizeof(ip),\n                    sentinel.resolve_hostnames ? ANET_NONE : ANET_IP_ONLY) == ANET_ERR) {\n\n        /* If resolution failed, compare based on hostnames. That is our best\n         * effort while the server is unavailable for some reason. It is fine\n         * since a Redis instance cannot have multiple hostnames for a given setup */\n        return !strcasecmp(sentinel.resolve_hostnames ? a->hostname : a->ip, hostname);\n    }\n    /* Compare based on address */\n    return !strcasecmp(a->ip, ip);\n}\n\nconst char *announceSentinelAddr(const sentinelAddr *a) {\n    return sentinel.announce_hostnames ? a->hostname : a->ip;\n}\n\n/* Return an allocated sds with hostname/address:port. IPv6\n * addresses are bracketed the same way anetFormatAddr() does.\n */\nsds announceSentinelAddrAndPort(const sentinelAddr *a) {\n    const char *addr = announceSentinelAddr(a);\n    if (strchr(addr, ':') != NULL)\n        return sdscatprintf(sdsempty(), \"[%s]:%d\", addr, a->port);\n    else\n        return sdscatprintf(sdsempty(), \"%s:%d\", addr, a->port);\n}\n\n/* =========================== Events notification ========================== */\n\n/* Send an event to the log, Pub/Sub, and the user notification script.\n *\n * 'level' is the log level for logging. 
Only LL_WARNING events will trigger\n * the execution of the user notification script.\n *\n * 'type' is the message type, also used as a pub/sub channel name.\n *\n * 'ri' is the redis instance target of this event if applicable, and is\n * used to obtain the path of the notification script to execute.\n *\n * The remaining arguments are printf-like.\n * If the format specifier starts with the two characters \"%@\" then ri is\n * not NULL, and the message is prefixed with an instance identifier in the\n * following format:\n *\n *  <instance type> <instance name> <ip> <port>\n *\n *  If the instance type is not master, then an additional string is\n *  added to specify the originating master:\n *\n *  @ <master name> <master ip> <master port>\n *\n *  Any other specifier after \"%@\" is processed by printf itself.\n */\nvoid sentinelEvent(int level, char *type, sentinelRedisInstance *ri,\n                   const char *fmt, ...) {\n    va_list ap;\n    char msg[LOG_MAX_LEN];\n    robj *channel, *payload;\n\n    /* Handle %@ */\n    if (fmt[0] == '%' && fmt[1] == '@') {\n        sentinelRedisInstance *master = (ri->flags & SRI_MASTER) ?\n                                         NULL : ri->master;\n\n        if (master) {\n            snprintf(msg, sizeof(msg), \"%s %s %s %d @ %s %s %d\",\n                sentinelRedisInstanceTypeStr(ri),\n                ri->name, announceSentinelAddr(ri->addr), ri->addr->port,\n                master->name, announceSentinelAddr(master->addr), master->addr->port);\n        } else {\n            snprintf(msg, sizeof(msg), \"%s %s %s %d\",\n                sentinelRedisInstanceTypeStr(ri),\n                ri->name, announceSentinelAddr(ri->addr), ri->addr->port);\n        }\n        fmt += 2;\n    } else {\n        msg[0] = '\\0';\n    }\n\n    /* Use vsnprintf for the rest of the formatting if any. 
*/\n    if (fmt[0] != '\\0') {\n        va_start(ap, fmt);\n        vsnprintf(msg+strlen(msg), sizeof(msg)-strlen(msg), fmt, ap);\n        va_end(ap);\n    }\n\n    /* Log the message if the log level allows it to be logged. */\n    if (level >= server.verbosity)\n        serverLog(level,\"%s %s\",type,msg);\n\n    /* Publish the message via Pub/Sub if it's not a debugging one. */\n    if (level != LL_DEBUG) {\n        channel = createStringObject(type,strlen(type));\n        payload = createStringObject(msg,strlen(msg));\n        pubsubPublishMessage(channel,payload,0);\n        decrRefCount(channel);\n        decrRefCount(payload);\n    }\n\n    /* Call the notification script if applicable. */\n    if (level == LL_WARNING && ri != NULL) {\n        sentinelRedisInstance *master = (ri->flags & SRI_MASTER) ?\n                                         ri : ri->master;\n        if (master && master->notification_script) {\n            sentinelScheduleScriptExecution(master->notification_script,\n                type,msg,NULL);\n        }\n    }\n}\n\n/* This function is called only at startup and is used to generate a\n * +monitor event for every configured master. The same events are also\n * generated when a master to monitor is added at runtime via the\n * SENTINEL MONITOR command. */\nvoid sentinelGenerateInitialMonitorEvents(void) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n        sentinelEvent(LL_WARNING,\"+monitor\",ri,\"%@ quorum %d\",ri->quorum);\n    }\n    dictResetIterator(&di);\n}\n\n/* ============================ script execution ============================ */\n\n/* Release a script job structure and all the associated data. 
*/\nvoid sentinelReleaseScriptJob(sentinelScriptJob *sj) {\n    int j = 0;\n\n    while(sj->argv[j]) sdsfree(sj->argv[j++]);\n    zfree(sj->argv);\n    zfree(sj);\n}\n\n#define SENTINEL_SCRIPT_MAX_ARGS 16\nvoid sentinelScheduleScriptExecution(char *path, ...) {\n    va_list ap;\n    char *argv[SENTINEL_SCRIPT_MAX_ARGS+1];\n    int argc = 1;\n    sentinelScriptJob *sj;\n\n    va_start(ap, path);\n    while(argc < SENTINEL_SCRIPT_MAX_ARGS) {\n        argv[argc] = va_arg(ap,char*);\n        if (!argv[argc]) break;\n        argv[argc] = sdsnew(argv[argc]); /* Copy the string. */\n        argc++;\n    }\n    va_end(ap);\n    argv[0] = sdsnew(path);\n\n    sj = zmalloc(sizeof(*sj));\n    sj->flags = SENTINEL_SCRIPT_NONE;\n    sj->retry_num = 0;\n    sj->argv = zmalloc(sizeof(char*)*(argc+1));\n    sj->start_time = 0;\n    sj->pid = 0;\n    memcpy(sj->argv,argv,sizeof(char*)*(argc+1));\n\n    listAddNodeTail(sentinel.scripts_queue,sj);\n\n    /* Remove the oldest non running script if we already hit the limit. */\n    if (listLength(sentinel.scripts_queue) > SENTINEL_SCRIPT_MAX_QUEUE) {\n        listNode *ln;\n        listIter li;\n\n        listRewind(sentinel.scripts_queue,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            sj = ln->value;\n\n            if (sj->flags & SENTINEL_SCRIPT_RUNNING) continue;\n            /* The first node is the oldest as we add on tail. */\n            listDelNode(sentinel.scripts_queue,ln);\n            sentinelReleaseScriptJob(sj);\n            break;\n        }\n        serverAssert(listLength(sentinel.scripts_queue) <=\n                    SENTINEL_SCRIPT_MAX_QUEUE);\n    }\n}\n\n/* Lookup a script in the scripts queue via pid, and returns the list node\n * (so that we can easily remove it from the queue if needed). 
*/\nlistNode *sentinelGetScriptListNodeByPid(pid_t pid) {\n    listNode *ln;\n    listIter li;\n\n    listRewind(sentinel.scripts_queue,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        sentinelScriptJob *sj = ln->value;\n\n        if ((sj->flags & SENTINEL_SCRIPT_RUNNING) && sj->pid == pid)\n            return ln;\n    }\n    return NULL;\n}\n\n/* Run pending scripts if we are not already at max number of running\n * scripts. */\nvoid sentinelRunPendingScripts(void) {\n    listNode *ln;\n    listIter li;\n    mstime_t now = mstime();\n\n    /* Find jobs that are not running and run them, from the top to the\n     * tail of the queue, so we run older jobs first. */\n    listRewind(sentinel.scripts_queue,&li);\n    while (sentinel.running_scripts < SENTINEL_SCRIPT_MAX_RUNNING &&\n           (ln = listNext(&li)) != NULL)\n    {\n        sentinelScriptJob *sj = ln->value;\n        pid_t pid;\n\n        /* Skip if already running. */\n        if (sj->flags & SENTINEL_SCRIPT_RUNNING) continue;\n\n        /* Skip if it's a retry, but not enough time has elapsed. */\n        if (sj->start_time && sj->start_time > now) continue;\n\n        sj->flags |= SENTINEL_SCRIPT_RUNNING;\n        sj->start_time = mstime();\n        sj->retry_num++;\n        pid = fork();\n\n        if (pid == -1) {\n            /* Parent (fork error).\n             * We report fork errors as signal 99, in order to unify the\n             * reporting with other kind of errors. */\n            sentinelEvent(LL_WARNING,\"-script-error\",NULL,\n                          \"%s %d %d\", sj->argv[0], 99, 0);\n            sj->flags &= ~SENTINEL_SCRIPT_RUNNING;\n            sj->pid = 0;\n        } else if (pid == 0) {\n            /* Child */\n            connTypeCleanupAll();\n            execve(sj->argv[0],sj->argv,environ);\n            /* If we are here an error occurred. */\n            _exit(2); /* Don't retry execution. 
*/\n        } else {\n            sentinel.running_scripts++;\n            sj->pid = pid;\n            sentinelEvent(LL_DEBUG,\"+script-child\",NULL,\"%ld\",(long)pid);\n        }\n    }\n}\n\n/* How much to delay the execution of a script that we need to retry after\n * an error?\n *\n * We double the retry delay for every further retry we do. So for instance\n * if RETRY_DELAY is set to 30 seconds and the max number of retries is 10,\n * starting from the second attempt to execute the script the delays are:\n * 30 sec, 60 sec, 2 min, 4 min, 8 min, 16 min, 32 min, 64 min, 128 min. */\nmstime_t sentinelScriptRetryDelay(int retry_num) {\n    mstime_t delay = sentinel_script_retry_delay;\n\n    while (retry_num-- > 1) delay *= 2;\n    return delay;\n}\n\n/* Check for scripts that terminated, and remove them from the queue if the\n * script terminated successfully. If instead the script was terminated by\n * a signal, or returned exit code \"1\", it is scheduled to run again if\n * the max number of retries has not already been reached. */\nvoid sentinelCollectTerminatedScripts(void) {\n    int statloc;\n    pid_t pid;\n\n    while ((pid = waitpid(-1, &statloc, WNOHANG)) > 0) {\n        int exitcode = WEXITSTATUS(statloc);\n        int bysignal = 0;\n        listNode *ln;\n        sentinelScriptJob *sj;\n\n        if (WIFSIGNALED(statloc)) bysignal = WTERMSIG(statloc);\n        sentinelEvent(LL_DEBUG,\"-script-child\",NULL,\"%ld %d %d\",\n            (long)pid, exitcode, bysignal);\n\n        ln = sentinelGetScriptListNodeByPid(pid);\n        if (ln == NULL) {\n            serverLog(LL_WARNING,\"waitpid() returned a pid (%ld) we can't find in our scripts execution queue!\", (long)pid);\n            continue;\n        }\n        sj = ln->value;\n\n        /* If the script was terminated by a signal or returns an\n         * exit code of \"1\" (that means: please retry), we reschedule it\n         * if the max number of retries is not already reached. 
*/\n        if ((bysignal || exitcode == 1) &&\n            sj->retry_num != SENTINEL_SCRIPT_MAX_RETRY)\n        {\n            sj->flags &= ~SENTINEL_SCRIPT_RUNNING;\n            sj->pid = 0;\n            sj->start_time = mstime() +\n                             sentinelScriptRetryDelay(sj->retry_num);\n        } else {\n            /* Otherwise let's remove the script, but log the event if the\n             * execution did not terminate cleanly. */\n            if (bysignal || exitcode != 0) {\n                sentinelEvent(LL_WARNING,\"-script-error\",NULL,\n                              \"%s %d %d\", sj->argv[0], bysignal, exitcode);\n            }\n            listDelNode(sentinel.scripts_queue,ln);\n            sentinelReleaseScriptJob(sj);\n        }\n        sentinel.running_scripts--;\n    }\n}\n\n/* Kill scripts that timed out; they'll be collected by the\n * sentinelCollectTerminatedScripts() function. */\nvoid sentinelKillTimedoutScripts(void) {\n    listNode *ln;\n    listIter li;\n    mstime_t now = mstime();\n\n    listRewind(sentinel.scripts_queue,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        sentinelScriptJob *sj = ln->value;\n\n        if (sj->flags & SENTINEL_SCRIPT_RUNNING &&\n            (now - sj->start_time) > sentinel_script_max_runtime)\n        {\n            sentinelEvent(LL_WARNING,\"-script-timeout\",NULL,\"%s %ld\",\n                sj->argv[0], (long)sj->pid);\n            kill(sj->pid,SIGKILL);\n        }\n    }\n}\n\n/* Implements the SENTINEL PENDING-SCRIPTS command. 
*/\nvoid sentinelPendingScriptsCommand(client *c) {\n    listNode *ln;\n    listIter li;\n\n    addReplyArrayLen(c,listLength(sentinel.scripts_queue));\n    listRewind(sentinel.scripts_queue,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        sentinelScriptJob *sj = ln->value;\n        int j = 0;\n\n        addReplyMapLen(c,5);\n\n        addReplyBulkCString(c,\"argv\");\n        while (sj->argv[j]) j++;\n        addReplyArrayLen(c,j);\n        j = 0;\n        while (sj->argv[j]) addReplyBulkCString(c,sj->argv[j++]);\n\n        addReplyBulkCString(c,\"flags\");\n        addReplyBulkCString(c,\n            (sj->flags & SENTINEL_SCRIPT_RUNNING) ? \"running\" : \"scheduled\");\n\n        addReplyBulkCString(c,\"pid\");\n        addReplyBulkLongLong(c,sj->pid);\n\n        if (sj->flags & SENTINEL_SCRIPT_RUNNING) {\n            addReplyBulkCString(c,\"run-time\");\n            addReplyBulkLongLong(c,mstime() - sj->start_time);\n        } else {\n            mstime_t delay = sj->start_time ? (sj->start_time-mstime()) : 0;\n            if (delay < 0) delay = 0;\n            addReplyBulkCString(c,\"run-delay\");\n            addReplyBulkLongLong(c,delay);\n        }\n\n        addReplyBulkCString(c,\"retry-num\");\n        addReplyBulkLongLong(c,sj->retry_num);\n    }\n}\n\n/* This function calls, if any, the client reconfiguration script with the\n * following parameters:\n *\n * <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>\n *\n * It is called every time a failover is performed.\n *\n * <state> is currently always \"start\".\n * <role> is either \"leader\" or \"observer\".\n *\n * from/to fields are respectively master -> promoted slave addresses for\n * \"start\" and \"end\". 
*/\nvoid sentinelCallClientReconfScript(sentinelRedisInstance *master, int role, char *state, sentinelAddr *from, sentinelAddr *to) {\n    char fromport[32], toport[32];\n\n    if (master->client_reconfig_script == NULL) return;\n    ll2string(fromport,sizeof(fromport),from->port);\n    ll2string(toport,sizeof(toport),to->port);\n    sentinelScheduleScriptExecution(master->client_reconfig_script,\n        master->name,\n        (role == SENTINEL_LEADER) ? \"leader\" : \"observer\",\n        state, announceSentinelAddr(from), fromport,\n        announceSentinelAddr(to), toport, NULL);\n}\n\n/* =============================== instanceLink ============================= */\n\n/* Create a not yet connected link object. */\ninstanceLink *createInstanceLink(void) {\n    instanceLink *link = zmalloc(sizeof(*link));\n\n    link->refcount = 1;\n    link->disconnected = 1;\n    link->pending_commands = 0;\n    link->cc = NULL;\n    link->pc = NULL;\n    link->cc_conn_time = 0;\n    link->pc_conn_time = 0;\n    link->last_reconn_time = 0;\n    link->pc_last_activity = 0;\n    /* We set the act_ping_time to \"now\" even if we don't actually have a\n     * connection with the node yet, nor have we sent a ping.\n     * This is useful to detect a timeout in case we'll not be able to connect\n     * with the node at all. */\n    link->act_ping_time = mstime();\n    link->last_ping_time = 0;\n    link->last_avail_time = mstime();\n    link->last_pong_time = mstime();\n    return link;\n}\n\n/* Disconnect a hiredis connection in the context of an instance link. */\nvoid instanceLinkCloseConnection(instanceLink *link, redisAsyncContext *c) {\n    if (c == NULL) return;\n\n    if (link->cc == c) {\n        link->cc = NULL;\n        link->pending_commands = 0;\n    }\n    if (link->pc == c) link->pc = NULL;\n    c->data = NULL;\n    link->disconnected = 1;\n    redisAsyncFree(c);\n}\n\n/* Decrement the refcount of a link object; if it drops to zero, actually\n * free it and return NULL. 
Otherwise don't do anything and return the pointer\n * to the object.\n *\n * If we are not going to free the link and ri is not NULL, we rebind all the\n * pending requests in link->cc (hiredis connection for commands) to a\n * callback that will just ignore them. This is useful to avoid processing\n * replies for an instance that no longer exists. */\ninstanceLink *releaseInstanceLink(instanceLink *link, sentinelRedisInstance *ri)\n{\n    serverAssert(link->refcount > 0);\n    link->refcount--;\n    if (link->refcount != 0) {\n        if (ri && ri->link->cc) {\n            /* This instance may have pending callbacks in the hiredis async\n             * context, having as 'privdata' the instance that we are going to\n             * free. Let's rewrite the callback list, directly exploiting\n             * hiredis internal data structures, in order to bind them with\n             * a callback that will ignore the reply at all. */\n            redisCallback *cb;\n            redisCallbackList *callbacks = &link->cc->replies;\n\n            cb = callbacks->head;\n            while(cb) {\n                if (cb->privdata == ri) {\n                    cb->fn = sentinelDiscardReplyCallback;\n                    cb->privdata = NULL; /* Not strictly needed. */\n                }\n                cb = cb->next;\n            }\n        }\n        return link; /* Other active users. 
*/\n    }\n\n    instanceLinkCloseConnection(link,link->cc);\n    instanceLinkCloseConnection(link,link->pc);\n    zfree(link);\n    return NULL;\n}\n\n/* This function will attempt to share the instance link we already have\n * for the same Sentinel in the context of a different master, with the\n * instance we are passing as argument.\n *\n * This way multiple Sentinel objects that refer all to the same physical\n * Sentinel instance but in the context of different masters will use\n * a single connection, will send a single PING per second for failure\n * detection and so forth.\n *\n * Return C_OK if a matching Sentinel was found in the context of a\n * different master and sharing was performed. Otherwise C_ERR\n * is returned. */\nint sentinelTryConnectionSharing(sentinelRedisInstance *ri) {\n    serverAssert(ri->flags & SRI_SENTINEL);\n    dictIterator di;\n    dictEntry *de;\n\n    if (ri->runid == NULL) return C_ERR; /* No way to identify it. */\n    if (ri->link->refcount > 1) return C_ERR; /* Already shared. */\n\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *master = dictGetVal(de), *match;\n        /* We want to share with the same physical Sentinel referenced\n         * in other masters, so skip our master. */\n        if (master == ri->master) continue;\n        match = getSentinelRedisInstanceByAddrAndRunID(master->sentinels,\n                                                       NULL,0,ri->runid);\n        if (match == NULL) continue; /* No match. */\n        if (match == ri) continue; /* Should never happen but... safer. */\n\n        /* We identified a matching Sentinel, great! Let's free our link\n         * and use the one of the matching Sentinel. 
*/\n        releaseInstanceLink(ri->link,NULL);\n        ri->link = match->link;\n        match->link->refcount++;\n        dictResetIterator(&di);\n        return C_OK;\n    }\n    dictResetIterator(&di);\n    return C_ERR;\n}\n\n/* Disconnect the relevant master and its replicas. */\nvoid dropInstanceConnections(sentinelRedisInstance *ri) {\n    serverAssert(ri->flags & SRI_MASTER);\n\n    /* Disconnect with the master. */\n    instanceLinkCloseConnection(ri->link, ri->link->cc);\n    instanceLinkCloseConnection(ri->link, ri->link->pc);\n    \n    /* Disconnect with all replicas. */\n    dictIterator di;\n    dictEntry *de;\n    sentinelRedisInstance *repl_ri;\n\n    dictInitIterator(&di, ri->slaves);\n    while ((de = dictNext(&di)) != NULL) {\n        repl_ri = dictGetVal(de);\n        instanceLinkCloseConnection(repl_ri->link, repl_ri->link->cc);\n        instanceLinkCloseConnection(repl_ri->link, repl_ri->link->pc);\n    }\n    dictResetIterator(&di);\n}\n\n/* Drop all connections to other sentinels. 
Returns the number of connections\n * dropped.*/\nint sentinelDropConnections(void) {\n    dictIterator di;\n    dictEntry *de;\n    int dropped = 0;\n\n    dictInitIterator(&di, sentinel.masters);\n    while ((de = dictNext(&di)) != NULL) {\n        dictIterator sdi;\n        dictEntry *sde;\n\n        sentinelRedisInstance *ri = dictGetVal(de);\n        dictInitIterator(&sdi, ri->sentinels);\n        while ((sde = dictNext(&sdi)) != NULL) {\n            sentinelRedisInstance *si = dictGetVal(sde);\n            if (!si->link->disconnected) {\n                instanceLinkCloseConnection(si->link, si->link->pc);\n                instanceLinkCloseConnection(si->link, si->link->cc);\n                dropped++;\n            }\n        }\n        dictResetIterator(&sdi);\n    }\n    dictResetIterator(&di);\n\n    return dropped;\n}\n\n/* When we detect a Sentinel to switch address (reporting a different IP/port\n * pair in Hello messages), let's update all the matching Sentinels in the\n * context of other masters as well and disconnect the links, so that everybody\n * will be updated.\n *\n * Return the number of updated Sentinel addresses. */\nint sentinelUpdateSentinelAddressInAllMasters(sentinelRedisInstance *ri) {\n    serverAssert(ri->flags & SRI_SENTINEL);\n    dictIterator di;\n    dictEntry *de;\n    int reconfigured = 0;\n\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *master = dictGetVal(de), *match;\n        match = getSentinelRedisInstanceByAddrAndRunID(master->sentinels,\n                                                       NULL,0,ri->runid);\n        /* If there is no match, this master does not know about this\n         * Sentinel, try with the next one. */\n        if (match == NULL) continue;\n\n        /* Disconnect the old links if connected. 
*/\n        if (match->link->cc != NULL)\n            instanceLinkCloseConnection(match->link,match->link->cc);\n        if (match->link->pc != NULL)\n            instanceLinkCloseConnection(match->link,match->link->pc);\n\n        if (match == ri) continue; /* Address already updated for it. */\n\n        /* Update the address of the matching Sentinel by copying the address\n         * of the Sentinel object that received the address update. */\n        releaseSentinelAddr(match->addr);\n        match->addr = dupSentinelAddr(ri->addr);\n        reconfigured++;\n    }\n    dictResetIterator(&di);\n    if (reconfigured)\n        sentinelEvent(LL_NOTICE,\"+sentinel-address-update\", ri,\n                    \"%@ %d additional matching instances\", reconfigured);\n    return reconfigured;\n}\n\n/* This function is called when a hiredis connection reported an error.\n * We set it to NULL and mark the link as disconnected so that it will be\n * reconnected again.\n *\n * Note: we don't free the hiredis context as hiredis will do it for us\n * for async connections. */\nvoid instanceLinkConnectionError(const redisAsyncContext *c) {\n    instanceLink *link = c->data;\n    int pubsub;\n\n    if (!link) return;\n\n    pubsub = (link->pc == c);\n    if (pubsub)\n        link->pc = NULL;\n    else\n        link->cc = NULL;\n    link->disconnected = 1;\n}\n\n/* Hiredis connection established / disconnected callbacks. We need them\n * just to cleanup our link state. 
*/\nvoid sentinelLinkEstablishedCallback(const redisAsyncContext *c, int status) {\n    if (status != C_OK) instanceLinkConnectionError(c);\n}\n\nvoid sentinelDisconnectCallback(const redisAsyncContext *c, int status) {\n    UNUSED(status);\n    instanceLinkConnectionError(c);\n}\n\n/* ========================== sentinelRedisInstance ========================= */\n\n/* Create a redis instance; the following fields must be populated by the\n * caller if needed:\n * runid: set to NULL but will be populated once INFO output is received.\n * info_refresh: is set to 0 to mean that we never received INFO so far.\n *\n * If SRI_MASTER is set into initial flags the instance is added to\n * sentinel.masters table.\n *\n * if SRI_SLAVE or SRI_SENTINEL is set then 'master' must not be NULL and the\n * instance is added into master->slaves or master->sentinels table.\n *\n * If the instance is a slave, the name parameter is ignored and is created\n * automatically as ip/hostname:port.\n *\n * The function fails if hostname can't be resolved or port is out of range.\n * When this happens NULL is returned and errno is set according to the\n * createSentinelAddr() function.\n *\n * The function may also fail and return NULL with errno set to EBUSY if\n * a master with the same name, a slave with the same address, or a sentinel\n * with the same ID already exists. */\n\nsentinelRedisInstance *createSentinelRedisInstance(char *name, int flags, char *hostname, int port, int quorum, sentinelRedisInstance *master) {\n    sentinelRedisInstance *ri;\n    sentinelAddr *addr;\n    dict *table = NULL;\n    sds sdsname;\n\n    serverAssert(flags & (SRI_MASTER|SRI_SLAVE|SRI_SENTINEL));\n    serverAssert((flags & SRI_MASTER) || master != NULL);\n\n    /* Check address validity. */\n    addr = createSentinelAddr(hostname,port,1);\n    if (addr == NULL) return NULL;\n\n    /* For slaves use ip/host:port as name. 
*/\n    if (flags & SRI_SLAVE)\n        sdsname = announceSentinelAddrAndPort(addr);\n    else\n        sdsname = sdsnew(name);\n\n    /* Make sure the entry is not duplicated. This may happen when the same\n     * name for a master is used multiple times inside the configuration or\n     * if we try to add a slave or sentinel with the same ip/port to a master\n     * multiple times. */\n    if (flags & SRI_MASTER) table = sentinel.masters;\n    else if (flags & SRI_SLAVE) table = master->slaves;\n    else if (flags & SRI_SENTINEL) table = master->sentinels;\n    if (dictFind(table,sdsname)) {\n        releaseSentinelAddr(addr);\n        sdsfree(sdsname);\n        errno = EBUSY;\n        return NULL;\n    }\n\n    /* Create the instance object. */\n    ri = zmalloc(sizeof(*ri));\n    /* Note that all the instances are started in the disconnected state,\n     * the event loop will take care of connecting them. */\n    ri->flags = flags;\n    ri->name = sdsname;\n    ri->runid = NULL;\n    ri->config_epoch = 0;\n    ri->addr = addr;\n    ri->link = createInstanceLink();\n    ri->last_pub_time = mstime();\n    ri->last_hello_time = mstime();\n    ri->last_master_down_reply_time = mstime();\n    ri->s_down_since_time = 0;\n    ri->o_down_since_time = 0;\n    ri->down_after_period = master ? 
master->down_after_period : sentinel_default_down_after;\n    ri->master_reboot_down_after_period = 0;\n    ri->master_link_down_time = 0;\n    ri->auth_pass = NULL;\n    ri->auth_user = NULL;\n    ri->slave_priority = SENTINEL_DEFAULT_SLAVE_PRIORITY;\n    ri->replica_announced = 1;\n    ri->slave_reconf_sent_time = 0;\n    ri->slave_master_host = NULL;\n    ri->slave_master_port = 0;\n    ri->slave_master_link_status = SENTINEL_MASTER_LINK_STATUS_DOWN;\n    ri->slave_repl_offset = 0;\n    ri->sentinels = dictCreate(&instancesDictType);\n    ri->quorum = quorum;\n    ri->parallel_syncs = SENTINEL_DEFAULT_PARALLEL_SYNCS;\n    ri->master = master;\n    ri->slaves = dictCreate(&instancesDictType);\n    ri->info_refresh = 0;\n    ri->renamed_commands = dictCreate(&renamedCommandsDictType);\n\n    /* Failover state. */\n    ri->leader = NULL;\n    ri->leader_epoch = 0;\n    ri->failover_epoch = 0;\n    ri->failover_state = SENTINEL_FAILOVER_STATE_NONE;\n    ri->failover_state_change_time = 0;\n    ri->failover_start_time = 0;\n    ri->failover_timeout = sentinel_default_failover_timeout;\n    ri->failover_delay_logged = 0;\n    ri->promoted_slave = NULL;\n    ri->notification_script = NULL;\n    ri->client_reconfig_script = NULL;\n    ri->info = NULL;\n\n    /* Role */\n    ri->role_reported = ri->flags & (SRI_MASTER|SRI_SLAVE);\n    ri->role_reported_time = mstime();\n    ri->slave_conf_change_time = mstime();\n\n    /* Add into the right table. */\n    dictAdd(table, ri->name, ri);\n    return ri;\n}\n\n/* Release this instance and all its slaves, sentinels, and hiredis\n * connections. This function does not take care of unlinking the instance\n * from the main masters table (if it is a master) or from its master's\n * sentinels/slaves tables if it is a slave or sentinel. */\nvoid releaseSentinelRedisInstance(sentinelRedisInstance *ri) {\n    /* Release all its slaves or sentinels if any. 
*/\n    dictRelease(ri->sentinels);\n    dictRelease(ri->slaves);\n\n    /* Disconnect the instance. */\n    releaseInstanceLink(ri->link,ri);\n\n    /* Free other resources. */\n    sdsfree(ri->name);\n    sdsfree(ri->runid);\n    sdsfree(ri->notification_script);\n    sdsfree(ri->client_reconfig_script);\n    sdsfree(ri->slave_master_host);\n    sdsfree(ri->leader);\n    sdsfree(ri->auth_pass);\n    sdsfree(ri->auth_user);\n    sdsfree(ri->info);\n    releaseSentinelAddr(ri->addr);\n    dictRelease(ri->renamed_commands);\n\n    /* Clear state in the master if needed. */\n    if ((ri->flags & SRI_SLAVE) && (ri->flags & SRI_PROMOTED) && ri->master)\n        ri->master->promoted_slave = NULL;\n\n    zfree(ri);\n}\n\n/* Lookup a slave in a master Redis instance, by ip and port. */\nsentinelRedisInstance *sentinelRedisInstanceLookupSlave(\n                sentinelRedisInstance *ri, char *slave_addr, int port)\n{\n    sds key;\n    sentinelRedisInstance *slave;\n    sentinelAddr *addr;\n\n    serverAssert(ri->flags & SRI_MASTER);\n\n    /* We need to handle a slave_addr that is potentially a hostname.\n     * If that is the case, depending on configuration we either resolve\n     * it and use the IP address or fail.\n     */\n    addr = createSentinelAddr(slave_addr, port, 0);\n    if (!addr) return NULL;\n    key = announceSentinelAddrAndPort(addr);\n    releaseSentinelAddr(addr);\n\n    slave = dictFetchValue(ri->slaves,key);\n    sdsfree(key);\n    return slave;\n}\n\n/* Return the name of the type of the instance as a string. 
*/\nconst char *sentinelRedisInstanceTypeStr(sentinelRedisInstance *ri) {\n    if (ri->flags & SRI_MASTER) return \"master\";\n    else if (ri->flags & SRI_SLAVE) return \"slave\";\n    else if (ri->flags & SRI_SENTINEL) return \"sentinel\";\n    else return \"unknown\";\n}\n\n/* This function removes the Sentinel with the specified ID from the\n * specified master.\n *\n * If \"runid\" is NULL the function returns ASAP.\n *\n * This function is useful because on a Sentinel address switch, we want to\n * remove our old entry and add a new one for the same ID but with the new\n * address.\n *\n * The function returns 1 if the matching Sentinel was removed, otherwise\n * 0 if there was no Sentinel with this ID. */\nint removeMatchingSentinelFromMaster(sentinelRedisInstance *master, char *runid) {\n    dictIterator di;\n    dictEntry *de;\n    int removed = 0;\n\n    if (runid == NULL) return 0;\n\n    dictInitSafeIterator(&di, master->sentinels);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        if (ri->runid && strcmp(ri->runid,runid) == 0) {\n            dictDelete(master->sentinels,ri->name);\n            removed++;\n        }\n    }\n    dictResetIterator(&di);\n    return removed;\n}\n\n/* Search an instance with the same runid, ip and port in a dictionary\n * of instances. Return NULL if not found, otherwise return the instance\n * pointer.\n *\n * runid or addr can be NULL. In such a case the search is performed only\n * by the non-NULL field. */\nsentinelRedisInstance *getSentinelRedisInstanceByAddrAndRunID(dict *instances, char *addr, int port, char *runid) {\n    dictIterator di;\n    dictEntry *de;\n    sentinelRedisInstance *instance = NULL;\n    sentinelAddr *ri_addr = NULL;\n\n    serverAssert(addr || runid);   /* User must pass at least one search param. */\n    if (addr != NULL) {\n        /* Try to resolve addr. 
If hostnames are used, we're accepting an ri_addr\n         * that contains a hostname only and can still be matched based on that.\n         */\n        ri_addr = createSentinelAddr(addr,port,1);\n        if (!ri_addr) return NULL;\n    }\n    dictInitIterator(&di, instances);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        if (runid && !ri->runid) continue;\n        if ((runid == NULL || strcmp(ri->runid, runid) == 0) &&\n            (addr == NULL || sentinelAddrOrHostnameEqual(ri->addr, ri_addr)))\n        {\n            instance = ri;\n            break;\n        }\n    }\n    dictResetIterator(&di);\n    if (ri_addr != NULL)\n        releaseSentinelAddr(ri_addr);\n\n    return instance;\n}\n\n/* Master lookup by name */\nsentinelRedisInstance *sentinelGetMasterByName(char *name) {\n    sentinelRedisInstance *ri;\n    sds sdsname = sdsnew(name);\n\n    ri = dictFetchValue(sentinel.masters,sdsname);\n    sdsfree(sdsname);\n    return ri;\n}\n\n/* Reset the state of a monitored master:\n * 1) Remove all slaves.\n * 2) Remove all sentinels.\n * 3) Remove most of the flags resulting from runtime operations.\n * 4) Reset timers to their default value. 
For example, after a reset it\n *    will be possible to fail over the same master again ASAP, without\n *    waiting for the failover timeout delay.\n * 5) In the process of doing this, undo the failover if one is in progress.\n * 6) Disconnect the connections with the master (they will reconnect automatically).\n */\n\n#define SENTINEL_RESET_NO_SENTINELS (1<<0)\nvoid sentinelResetMaster(sentinelRedisInstance *ri, int flags) {\n    serverAssert(ri->flags & SRI_MASTER);\n    dictRelease(ri->slaves);\n    ri->slaves = dictCreate(&instancesDictType);\n    if (!(flags & SENTINEL_RESET_NO_SENTINELS)) {\n        dictRelease(ri->sentinels);\n        ri->sentinels = dictCreate(&instancesDictType);\n    }\n    instanceLinkCloseConnection(ri->link,ri->link->cc);\n    instanceLinkCloseConnection(ri->link,ri->link->pc);\n    ri->flags &= SRI_MASTER;\n    if (ri->leader) {\n        sdsfree(ri->leader);\n        ri->leader = NULL;\n    }\n    ri->failover_state = SENTINEL_FAILOVER_STATE_NONE;\n    ri->failover_state_change_time = 0;\n    ri->failover_start_time = 0; /* We can failover again ASAP. */\n    ri->promoted_slave = NULL;\n    sdsfree(ri->runid);\n    sdsfree(ri->slave_master_host);\n    ri->runid = NULL;\n    ri->slave_master_host = NULL;\n    ri->link->act_ping_time = mstime();\n    ri->link->last_ping_time = 0;\n    ri->link->last_avail_time = mstime();\n    ri->link->last_pong_time = mstime();\n    ri->role_reported_time = mstime();\n    ri->role_reported = SRI_MASTER;\n    if (flags & SENTINEL_GENERATE_EVENT)\n        sentinelEvent(LL_WARNING,\"+reset-master\",ri,\"%@\");\n}\n\n/* Call sentinelResetMaster() on every master with a name matching the specified\n * pattern. 
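This is what the SENTINEL RESET <pattern> command relies on: for\n * example \"SENTINEL RESET mymaster*\" resets every monitored master whose\n * name starts with \"mymaster\". 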
*/\nint sentinelResetMastersByPattern(char *pattern, int flags) {\n    dictIterator di;\n    dictEntry *de;\n    int reset = 0;\n\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        if (ri->name) {\n            if (stringmatch(pattern,ri->name,0)) {\n                sentinelResetMaster(ri,flags);\n                reset++;\n            }\n        }\n    }\n    dictResetIterator(&di);\n    return reset;\n}\n\n/* Reset the specified master with sentinelResetMaster(), and also change\n * the ip:port address, but keep the name of the instance unmodified.\n *\n * This is used to handle the +switch-master event.\n *\n * The function returns C_ERR if the address can't be resolved for some\n * reason. Otherwise C_OK is returned. */\nint sentinelResetMasterAndChangeAddress(sentinelRedisInstance *master, char *hostname, int port) {\n    sentinelAddr *oldaddr, *newaddr;\n    sentinelAddr **slaves = NULL;\n    int numslaves = 0, j;\n    dictIterator di;\n    dictEntry *de;\n\n    newaddr = createSentinelAddr(hostname,port,0);\n    if (newaddr == NULL) return C_ERR;\n\n    /* At most one slave can have the new address (it is skipped below),\n     * and the old master may be added back as one more slave, so we\n     * allocate room for dictSize(master->slaves) + 1 addresses. */\n    slaves = zmalloc(sizeof(sentinelAddr*)*(dictSize(master->slaves) + 1));\n\n    /* Don't include the one having the address we are switching to. */\n    dictInitIterator(&di, master->slaves);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *slave = dictGetVal(de);\n\n        if (sentinelAddrOrHostnameEqual(slave->addr,newaddr)) continue;\n        slaves[numslaves++] = dupSentinelAddr(slave->addr);\n    }\n    dictResetIterator(&di);\n\n    /* If we are switching to a different address, include the old address\n     * as a slave as well, so that we'll be able to sense / reconfigure\n     * the old master. 
*/\n    if (!sentinelAddrOrHostnameEqual(newaddr,master->addr)) {\n        slaves[numslaves++] = dupSentinelAddr(master->addr);\n    }\n\n    /* Reset and switch address. */\n    sentinelResetMaster(master,SENTINEL_RESET_NO_SENTINELS);\n    oldaddr = master->addr;\n    master->addr = newaddr;\n    master->o_down_since_time = 0;\n    master->s_down_since_time = 0;\n\n    /* Add slaves back. */\n    for (j = 0; j < numslaves; j++) {\n        sentinelRedisInstance *slave;\n\n        slave = createSentinelRedisInstance(NULL,SRI_SLAVE,slaves[j]->hostname,\n                    slaves[j]->port, master->quorum, master);\n        releaseSentinelAddr(slaves[j]);\n        if (slave) sentinelEvent(LL_NOTICE,\"+slave\",slave,\"%@\");\n    }\n    zfree(slaves);\n\n    /* Release the old address at the end so we are safe even if the function\n     * gets the master->addr->ip and master->addr->port as arguments. */\n    releaseSentinelAddr(oldaddr);\n    sentinelFlushConfig();\n    return C_OK;\n}\n\n/* Return non-zero if there was no SDOWN or ODOWN error associated with this\n * instance in the last 'ms' milliseconds. */\nint sentinelRedisInstanceNoDownFor(sentinelRedisInstance *ri, mstime_t ms) {\n    mstime_t most_recent;\n\n    most_recent = ri->s_down_since_time;\n    if (ri->o_down_since_time > most_recent)\n        most_recent = ri->o_down_since_time;\n    return most_recent == 0 || (mstime() - most_recent) > ms;\n}\n\n/* Return the current master address, that is, its address or the address\n * of the promoted slave if already operational. */\nsentinelAddr *sentinelGetCurrentMasterAddress(sentinelRedisInstance *master) {\n    /* If we are failing over the master, and the state is already\n     * SENTINEL_FAILOVER_STATE_RECONF_SLAVES or greater, it means that we\n     * already have the new configuration epoch in the master, and the\n     * slave acknowledged the configuration switch. Advertise the new\n     * address. 
*/\n    if ((master->flags & SRI_FAILOVER_IN_PROGRESS) &&\n        master->promoted_slave &&\n        master->failover_state >= SENTINEL_FAILOVER_STATE_RECONF_SLAVES)\n    {\n        return master->promoted_slave->addr;\n    } else {\n        return master->addr;\n    }\n}\n\n/* This function copies the down_after_period value of 'master' to all\n * the slave and sentinel instances connected to this master. */\nvoid sentinelPropagateDownAfterPeriod(sentinelRedisInstance *master) {\n    dictIterator di;\n    dictEntry *de;\n    int j;\n    dict *d[] = {master->slaves, master->sentinels, NULL};\n\n    for (j = 0; d[j]; j++) {\n        dictInitIterator(&di, d[j]);\n        while((de = dictNext(&di)) != NULL) {\n            sentinelRedisInstance *ri = dictGetVal(de);\n            ri->down_after_period = master->down_after_period;\n        }\n        dictResetIterator(&di);\n    }\n}\n\n/* This function is used in order to send commands to Redis instances: the\n * commands we send from Sentinel may be renamed; a common case is a master\n * with the CONFIG and SLAVEOF commands renamed for security concerns. In that\n * case we check the ri->renamed_commands table (or if the instance is a slave,\n * the one of its master), and map the command that we should send through\n * the set of renamed commands. However, if the command was not renamed,\n * we just return \"command\" itself. */\nchar *sentinelInstanceMapCommand(sentinelRedisInstance *ri, char *command) {\n    sds sc = sdsnew(command);\n    if (ri->master) ri = ri->master;\n    char *retval = dictFetchValue(ri->renamed_commands, sc);\n    sdsfree(sc);\n    return retval ? retval : command;\n}\n\n/* ============================ Config handling ============================= */\n\n/* Generalize handling of instance creation errors. Use SRI_MASTER, SRI_SLAVE or\n * SRI_SENTINEL as a role value. 
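For example, when\n * createSentinelRedisInstance() fails with errno set to EBUSY, passing\n * SRI_MASTER here yields \"Duplicate master name.\". 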
*/\nconst char *sentinelCheckCreateInstanceErrors(int role) {\n    switch(errno) {\n    case EBUSY:\n        switch (role) {\n        case SRI_MASTER:\n            return \"Duplicate master name.\";\n        case SRI_SLAVE:\n            return \"Duplicate hostname and port for replica.\";\n        case SRI_SENTINEL:\n            return \"Duplicate runid for sentinel.\";\n        default:\n            serverAssert(0);\n            break;\n        }\n        break;\n    case ENOENT:\n        return \"Can't resolve instance hostname.\";\n    case EINVAL:\n        return \"Invalid port number.\";\n    default:\n        return \"Unknown Error for creating instances.\";\n    }\n}\n\n/* init function for server.sentinel_config */\nvoid initializeSentinelConfig(void) {\n    server.sentinel_config = zmalloc(sizeof(struct sentinelConfig));\n    server.sentinel_config->monitor_cfg = listCreate();\n    server.sentinel_config->pre_monitor_cfg = listCreate();\n    server.sentinel_config->post_monitor_cfg = listCreate();\n    listSetFreeMethod(server.sentinel_config->monitor_cfg,freeSentinelLoadQueueEntry);\n    listSetFreeMethod(server.sentinel_config->pre_monitor_cfg,freeSentinelLoadQueueEntry);\n    listSetFreeMethod(server.sentinel_config->post_monitor_cfg,freeSentinelLoadQueueEntry);\n}\n\n/* destroy function for server.sentinel_config */\nvoid freeSentinelConfig(void) {\n    /* release these three config queues since we will not use them anymore */\n    listRelease(server.sentinel_config->pre_monitor_cfg);\n    listRelease(server.sentinel_config->monitor_cfg);\n    listRelease(server.sentinel_config->post_monitor_cfg);\n    zfree(server.sentinel_config);\n    server.sentinel_config = NULL;\n}\n\n/* Search config name in the pre monitor config name array, return 1 if found,\n * 0 if not found. 
*/\nint searchPreMonitorCfgName(const char *name) {\n    for (unsigned int i = 0; i < sizeof(preMonitorCfgName)/sizeof(preMonitorCfgName[0]); i++) {\n        if (!strcasecmp(preMonitorCfgName[i],name)) return 1;\n    }\n    return 0;\n}\n\n/* Free method for sentinelLoadQueueEntry when releasing the list. */\nvoid freeSentinelLoadQueueEntry(void *item) {\n    struct sentinelLoadQueueEntry *entry = item;\n    sdsfreesplitres(entry->argv,entry->argc);\n    sdsfree(entry->line);\n    zfree(entry);\n}\n\n/* This function is used for queuing sentinel configuration; the main\n * purpose of this function is to delay parsing the sentinel config options\n * in order to avoid order-dependent issues in the config. */\nvoid queueSentinelConfig(sds *argv, int argc, int linenum, sds line) {\n    int i;\n    struct sentinelLoadQueueEntry *entry;\n\n    /* initialize sentinel_config for the first call */\n    if (server.sentinel_config == NULL) initializeSentinelConfig();\n\n    entry = zmalloc(sizeof(struct sentinelLoadQueueEntry));\n    entry->argv = zmalloc(sizeof(char*)*argc);\n    entry->argc = argc;\n    entry->linenum = linenum;\n    entry->line = sdsdup(line);\n    for (i = 0; i < argc; i++) {\n        entry->argv[i] = sdsdup(argv[i]);\n    }\n    /*  Separate config lines into pre monitor config, monitor config and\n     *  post monitor config, in order to parse config dependencies\n     *  correctly. 
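For example, a\n     *  \"sentinel down-after-milliseconds mymaster 5000\" line can only be\n     *  applied after the \"sentinel monitor mymaster ...\" line that creates\n     *  the master, whatever their relative order in the config file is. 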
*/\n    if (!strcasecmp(argv[0],\"monitor\")) {\n        listAddNodeTail(server.sentinel_config->monitor_cfg,entry);\n    } else if (searchPreMonitorCfgName(argv[0])) {\n        listAddNodeTail(server.sentinel_config->pre_monitor_cfg,entry);\n    } else {\n        listAddNodeTail(server.sentinel_config->post_monitor_cfg,entry);\n    }\n}\n\n/* This function is used for loading the sentinel configuration from the\n * pre_monitor_cfg, monitor_cfg and post_monitor_cfg lists */\nvoid loadSentinelConfigFromQueue(void) {\n    const char *err = NULL;\n    listIter li;\n    listNode *ln;\n    int linenum = 0;\n    sds line = NULL;\n    unsigned int j;\n\n    /* if there is no sentinel_config entry, we can return immediately */\n    if (server.sentinel_config == NULL) return;\n\n    list *sentinel_configs[3] = {\n        server.sentinel_config->pre_monitor_cfg,\n        server.sentinel_config->monitor_cfg,\n        server.sentinel_config->post_monitor_cfg\n    };\n    /* Load from the pre monitor config queue first to avoid dependency\n     * issues, then from the monitor config queue, and finally from the\n     * post monitor config queue. */\n    for (j = 0; j < sizeof(sentinel_configs) / sizeof(sentinel_configs[0]); j++) {\n        listRewind(sentinel_configs[j],&li);\n        while((ln = listNext(&li))) {\n            struct sentinelLoadQueueEntry *entry = ln->value;\n            err = sentinelHandleConfiguration(entry->argv,entry->argc);\n            if (err) {\n                linenum = entry->linenum;\n                line = entry->line;\n                goto loaderr;\n            }\n        }\n    }\n\n    /* free sentinel_config when config loading is finished */\n    freeSentinelConfig();\n    return;\n\nloaderr:\n    fprintf(stderr, \"\\n*** FATAL CONFIG FILE ERROR (Redis %s) ***\\n\",\n        REDIS_VERSION);\n    fprintf(stderr, \"Reading the configuration file, at line %d\\n\", linenum);\n    fprintf(stderr, \">>> '%s'\\n\", line);\n    fprintf(stderr, \"%s\\n\", err);\n    
exit(1);\n}\n\nconst char *sentinelHandleConfiguration(char **argv, int argc) {\n\n    sentinelRedisInstance *ri;\n\n    if (!strcasecmp(argv[0],\"monitor\") && argc == 5) {\n        /* monitor <name> <host> <port> <quorum> */\n        int quorum = atoi(argv[4]);\n\n        if (quorum <= 0) return \"Quorum must be 1 or greater.\";\n        if (createSentinelRedisInstance(argv[1],SRI_MASTER,argv[2],\n                                        atoi(argv[3]),quorum,NULL) == NULL)\n        {\n            return sentinelCheckCreateInstanceErrors(SRI_MASTER);\n        }\n    } else if (!strcasecmp(argv[0],\"down-after-milliseconds\") && argc == 3) {\n        /* down-after-milliseconds <name> <milliseconds> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->down_after_period = atoi(argv[2]);\n        if (ri->down_after_period <= 0)\n            return \"negative or zero time parameter.\";\n        sentinelPropagateDownAfterPeriod(ri);\n    } else if (!strcasecmp(argv[0],\"failover-timeout\") && argc == 3) {\n        /* failover-timeout <name> <milliseconds> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->failover_timeout = atoi(argv[2]);\n        if (ri->failover_timeout <= 0)\n            return \"negative or zero time parameter.\";\n    } else if (!strcasecmp(argv[0],\"parallel-syncs\") && argc == 3) {\n        /* parallel-syncs <name> <numreplicas> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->parallel_syncs = atoi(argv[2]);\n    } else if (!strcasecmp(argv[0],\"notification-script\") && argc == 3) {\n        /* notification-script <name> <path> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        if (access(argv[2],X_OK) == -1)\n            return 
\"Notification script seems non existing or non executable.\";\n        ri->notification_script = sdsnew(argv[2]);\n    } else if (!strcasecmp(argv[0],\"client-reconfig-script\") && argc == 3) {\n        /* client-reconfig-script <name> <path> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        if (access(argv[2],X_OK) == -1)\n            return \"Client reconfiguration script seems non existing or \"\n                   \"non executable.\";\n        ri->client_reconfig_script = sdsnew(argv[2]);\n    } else if (!strcasecmp(argv[0],\"auth-pass\") && argc == 3) {\n        /* auth-pass <name> <password> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->auth_pass = sdsnew(argv[2]);\n    } else if (!strcasecmp(argv[0],\"auth-user\") && argc == 3) {\n        /* auth-user <name> <username> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->auth_user = sdsnew(argv[2]);\n    } else if (!strcasecmp(argv[0],\"current-epoch\") && argc == 2) {\n        /* current-epoch <epoch> */\n        unsigned long long current_epoch = strtoull(argv[1],NULL,10);\n        if (current_epoch > sentinel.current_epoch)\n            sentinel.current_epoch = current_epoch;\n    } else if (!strcasecmp(argv[0],\"myid\") && argc == 2) {\n        if (strlen(argv[1]) != CONFIG_RUN_ID_SIZE)\n            return \"Malformed Sentinel id in myid option.\";\n        memcpy(sentinel.myid,argv[1],CONFIG_RUN_ID_SIZE);\n    } else if (!strcasecmp(argv[0],\"config-epoch\") && argc == 3) {\n        /* config-epoch <name> <epoch> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->config_epoch = strtoull(argv[2],NULL,10);\n        /* The following update of current_epoch is not really useful as\n         * now 
the current epoch is persisted on the config file, but\n         * we leave this check here for redundancy. */\n        if (ri->config_epoch > sentinel.current_epoch)\n            sentinel.current_epoch = ri->config_epoch;\n    } else if (!strcasecmp(argv[0],\"leader-epoch\") && argc == 3) {\n        /* leader-epoch <name> <epoch> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->leader_epoch = strtoull(argv[2],NULL,10);\n    } else if ((!strcasecmp(argv[0],\"known-slave\") ||\n                !strcasecmp(argv[0],\"known-replica\")) && argc == 4)\n    {\n        sentinelRedisInstance *slave;\n\n        /* known-replica <name> <ip> <port> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        if ((slave = createSentinelRedisInstance(NULL,SRI_SLAVE,argv[2],\n                    atoi(argv[3]), ri->quorum, ri)) == NULL)\n        {\n            return sentinelCheckCreateInstanceErrors(SRI_SLAVE);\n        }\n    } else if (!strcasecmp(argv[0],\"known-sentinel\") &&\n               (argc == 4 || argc == 5)) {\n        sentinelRedisInstance *si;\n\n        if (argc == 5) { /* Ignore the old form without runid. 
*/\n            /* known-sentinel <name> <ip> <port> [runid] */\n            ri = sentinelGetMasterByName(argv[1]);\n            if (!ri) return \"No such master with specified name.\";\n            if ((si = createSentinelRedisInstance(argv[4],SRI_SENTINEL,argv[2],\n                        atoi(argv[3]), ri->quorum, ri)) == NULL)\n            {\n                return sentinelCheckCreateInstanceErrors(SRI_SENTINEL);\n            }\n            si->runid = sdsnew(argv[4]);\n            sentinelTryConnectionSharing(si);\n        }\n    } else if (!strcasecmp(argv[0],\"rename-command\") && argc == 4) {\n        /* rename-command <name> <command> <renamed-command> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        sds oldcmd = sdsnew(argv[2]);\n        sds newcmd = sdsnew(argv[3]);\n        if (dictAdd(ri->renamed_commands,oldcmd,newcmd) != DICT_OK) {\n            sdsfree(oldcmd);\n            sdsfree(newcmd);\n            return \"Same command renamed multiple times with rename-command.\";\n        }\n    } else if (!strcasecmp(argv[0],\"announce-ip\") && argc == 2) {\n        /* announce-ip <ip-address> */\n        if (strlen(argv[1]))\n            sentinel.announce_ip = sdsnew(argv[1]);\n    } else if (!strcasecmp(argv[0],\"announce-port\") && argc == 2) {\n        /* announce-port <port> */\n        sentinel.announce_port = atoi(argv[1]);\n    } else if (!strcasecmp(argv[0],\"deny-scripts-reconfig\") && argc == 2) {\n        /* deny-scripts-reconfig <yes|no> */\n        if ((sentinel.deny_scripts_reconfig = yesnotoi(argv[1])) == -1) {\n            return \"Please specify yes or no for the \"\n                   \"deny-scripts-reconfig options.\";\n        }\n    } else if (!strcasecmp(argv[0],\"sentinel-user\") && argc == 2) {\n        /* sentinel-user <user-name> */\n        if (strlen(argv[1]))\n            sentinel.sentinel_auth_user = sdsnew(argv[1]);\n    } else if 
(!strcasecmp(argv[0],\"sentinel-pass\") && argc == 2) {\n        /* sentinel-pass <password> */\n        if (strlen(argv[1]))\n            sentinel.sentinel_auth_pass = sdsnew(argv[1]);\n    } else if (!strcasecmp(argv[0],\"resolve-hostnames\") && argc == 2) {\n        /* resolve-hostnames <yes|no> */\n        if ((sentinel.resolve_hostnames = yesnotoi(argv[1])) == -1) {\n            return \"Please specify yes or no for the resolve-hostnames option.\";\n        }\n    } else if (!strcasecmp(argv[0],\"announce-hostnames\") && argc == 2) {\n        /* announce-hostnames <yes|no> */\n        if ((sentinel.announce_hostnames = yesnotoi(argv[1])) == -1) {\n            return \"Please specify yes or no for the announce-hostnames option.\";\n        }\n    } else if (!strcasecmp(argv[0],\"master-reboot-down-after-period\") && argc == 3) {\n        /* master-reboot-down-after-period <name> <milliseconds> */\n        ri = sentinelGetMasterByName(argv[1]);\n        if (!ri) return \"No such master with specified name.\";\n        ri->master_reboot_down_after_period = atoi(argv[2]);\n        if (ri->master_reboot_down_after_period < 0)\n            return \"negative time parameter.\";\n    } else {\n        return \"Unrecognized sentinel configuration statement.\";\n    }\n    return NULL;\n}\n\n/* Implements CONFIG REWRITE for \"sentinel\" option.\n * This is used not just to rewrite the configuration given by the user\n * (the configured masters) but also in order to retain the state of\n * Sentinel across restarts: config epoch of masters, associated slaves\n * and sentinel instances, and so forth. */\nvoid rewriteConfigSentinelOption(struct rewriteConfigState *state) {\n    dictIterator di, di2;\n    dictEntry *de;\n    sds line;\n\n    /* sentinel unique ID. */\n    line = sdscatprintf(sdsempty(), \"sentinel myid %s\", sentinel.myid);\n    rewriteConfigRewriteLine(state,\"sentinel myid\",line,1);\n\n    /* sentinel deny-scripts-reconfig. 
*/\n    line = sdscatprintf(sdsempty(), \"sentinel deny-scripts-reconfig %s\",\n        sentinel.deny_scripts_reconfig ? \"yes\" : \"no\");\n    rewriteConfigRewriteLine(state,\"sentinel deny-scripts-reconfig\",line,\n        sentinel.deny_scripts_reconfig != SENTINEL_DEFAULT_DENY_SCRIPTS_RECONFIG);\n\n    /* sentinel resolve-hostnames.\n     * This must be included early in the file so it is already in effect\n     * when reading the file.\n     */\n    line = sdscatprintf(sdsempty(), \"sentinel resolve-hostnames %s\",\n                        sentinel.resolve_hostnames ? \"yes\" : \"no\");\n    rewriteConfigRewriteLine(state,\"sentinel resolve-hostnames\",line,\n                             sentinel.resolve_hostnames != SENTINEL_DEFAULT_RESOLVE_HOSTNAMES);\n\n    /* sentinel announce-hostnames. */\n    line = sdscatprintf(sdsempty(), \"sentinel announce-hostnames %s\",\n                        sentinel.announce_hostnames ? \"yes\" : \"no\");\n    rewriteConfigRewriteLine(state,\"sentinel announce-hostnames\",line,\n                             sentinel.announce_hostnames != SENTINEL_DEFAULT_ANNOUNCE_HOSTNAMES);\n\n    /* For every master emit a \"sentinel monitor\" config entry. 
*/\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *master, *ri;\n        sentinelAddr *master_addr;\n\n        /* sentinel monitor */\n        master = dictGetVal(de);\n        master_addr = sentinelGetCurrentMasterAddress(master);\n        line = sdscatprintf(sdsempty(),\"sentinel monitor %s %s %d %d\",\n            master->name, announceSentinelAddr(master_addr), master_addr->port,\n            master->quorum);\n        rewriteConfigRewriteLine(state,\"sentinel monitor\",line,1);\n        /* rewriteConfigMarkAsProcessed is handled after the loop */\n\n        /* sentinel down-after-milliseconds */\n        if (master->down_after_period != sentinel_default_down_after) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel down-after-milliseconds %s %ld\",\n                master->name, (long) master->down_after_period);\n            rewriteConfigRewriteLine(state,\"sentinel down-after-milliseconds\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        /* sentinel failover-timeout */\n        if (master->failover_timeout != sentinel_default_failover_timeout) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel failover-timeout %s %ld\",\n                master->name, (long) master->failover_timeout);\n            rewriteConfigRewriteLine(state,\"sentinel failover-timeout\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n\n        }\n\n        /* sentinel parallel-syncs */\n        if (master->parallel_syncs != SENTINEL_DEFAULT_PARALLEL_SYNCS) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel parallel-syncs %s %d\",\n                master->name, master->parallel_syncs);\n            rewriteConfigRewriteLine(state,\"sentinel parallel-syncs\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n      
  /* sentinel notification-script */\n        if (master->notification_script) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel notification-script %s %s\",\n                master->name, master->notification_script);\n            rewriteConfigRewriteLine(state,\"sentinel notification-script\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        /* sentinel client-reconfig-script */\n        if (master->client_reconfig_script) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel client-reconfig-script %s %s\",\n                master->name, master->client_reconfig_script);\n            rewriteConfigRewriteLine(state,\"sentinel client-reconfig-script\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        /* sentinel auth-pass & auth-user */\n        if (master->auth_pass) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel auth-pass %s %s\",\n                master->name, master->auth_pass);\n            rewriteConfigRewriteLine(state,\"sentinel auth-pass\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        if (master->auth_user) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel auth-user %s %s\",\n                master->name, master->auth_user);\n            rewriteConfigRewriteLine(state,\"sentinel auth-user\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        /* sentinel master-reboot-down-after-period */\n        if (master->master_reboot_down_after_period != 0) {\n            line = sdscatprintf(sdsempty(),\n                \"sentinel master-reboot-down-after-period %s %ld\",\n                master->name, (long) master->master_reboot_down_after_period);\n            rewriteConfigRewriteLine(state,\"sentinel master-reboot-down-after-period\",line,1);\n       
     /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n\n        /* sentinel config-epoch */\n        line = sdscatprintf(sdsempty(),\n            \"sentinel config-epoch %s %llu\",\n            master->name, (unsigned long long) master->config_epoch);\n        rewriteConfigRewriteLine(state,\"sentinel config-epoch\",line,1);\n        /* rewriteConfigMarkAsProcessed is handled after the loop */\n\n\n        /* sentinel leader-epoch */\n        line = sdscatprintf(sdsempty(),\n            \"sentinel leader-epoch %s %llu\",\n            master->name, (unsigned long long) master->leader_epoch);\n        rewriteConfigRewriteLine(state,\"sentinel leader-epoch\",line,1);\n        /* rewriteConfigMarkAsProcessed is handled after the loop */\n\n        /* sentinel known-slave */\n        dictInitIterator(&di2, master->slaves);\n        while((de = dictNext(&di2)) != NULL) {\n            sentinelAddr *slave_addr;\n\n            ri = dictGetVal(de);\n            slave_addr = ri->addr;\n\n            /* If master_addr (obtained using sentinelGetCurrentMasterAddress()\n             * so it may be the address of the promoted slave) is equal to this\n             * slave's address, a failover is in progress and the slave was\n             * already successfully promoted. So as the address of this slave\n             * we use the old master address instead. 
*/\n            if (sentinelAddrOrHostnameEqual(slave_addr,master_addr))\n                slave_addr = master->addr;\n            line = sdscatprintf(sdsempty(),\n                \"sentinel known-replica %s %s %d\",\n                master->name, announceSentinelAddr(slave_addr), slave_addr->port);\n            /* try to replace any known-slave option first if found */\n            if (rewriteConfigRewriteLine(state, \"sentinel known-slave\", sdsdup(line), 0) == 0) {\n                rewriteConfigRewriteLine(state, \"sentinel known-replica\", line, 1);\n            } else {\n                sdsfree(line);\n            }\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n        dictResetIterator(&di2);\n\n        /* sentinel known-sentinel */\n        dictInitIterator(&di2, master->sentinels);\n        while((de = dictNext(&di2)) != NULL) {\n            ri = dictGetVal(de);\n            if (ri->runid == NULL) continue;\n            line = sdscatprintf(sdsempty(),\n                \"sentinel known-sentinel %s %s %d %s\",\n                master->name, announceSentinelAddr(ri->addr), ri->addr->port, ri->runid);\n            rewriteConfigRewriteLine(state,\"sentinel known-sentinel\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n        dictResetIterator(&di2);\n\n        /* sentinel rename-command */\n        dictInitIterator(&di2, master->renamed_commands);\n        while((de = dictNext(&di2)) != NULL) {\n            sds oldname = dictGetKey(de);\n            sds newname = dictGetVal(de);\n            line = sdscatprintf(sdsempty(),\n                \"sentinel rename-command %s %s %s\",\n                master->name, oldname, newname);\n            rewriteConfigRewriteLine(state,\"sentinel rename-command\",line,1);\n            /* rewriteConfigMarkAsProcessed is handled after the loop */\n        }\n        dictResetIterator(&di2);\n    }\n\n    /* sentinel current-epoch is a global 
state valid for all the masters. */\n    line = sdscatprintf(sdsempty(),\n        \"sentinel current-epoch %llu\", (unsigned long long) sentinel.current_epoch);\n    rewriteConfigRewriteLine(state,\"sentinel current-epoch\",line,1);\n\n    /* sentinel announce-ip. */\n    if (sentinel.announce_ip) {\n        line = sdsnew(\"sentinel announce-ip \");\n        line = sdscatrepr(line, sentinel.announce_ip, sdslen(sentinel.announce_ip));\n        rewriteConfigRewriteLine(state,\"sentinel announce-ip\",line,1);\n    } else {\n        rewriteConfigMarkAsProcessed(state,\"sentinel announce-ip\");\n    }\n\n    /* sentinel announce-port. */\n    if (sentinel.announce_port) {\n        line = sdscatprintf(sdsempty(),\"sentinel announce-port %d\",\n                            sentinel.announce_port);\n        rewriteConfigRewriteLine(state,\"sentinel announce-port\",line,1);\n    } else {\n        rewriteConfigMarkAsProcessed(state,\"sentinel announce-port\");\n    }\n\n    /* sentinel sentinel-user. */\n    if (sentinel.sentinel_auth_user) {\n        line = sdscatprintf(sdsempty(), \"sentinel sentinel-user %s\", sentinel.sentinel_auth_user);\n        rewriteConfigRewriteLine(state,\"sentinel sentinel-user\",line,1);\n    } else {\n        rewriteConfigMarkAsProcessed(state,\"sentinel sentinel-user\");\n    }\n\n    /* sentinel sentinel-pass. 
*/\n    if (sentinel.sentinel_auth_pass) {\n        line = sdscatprintf(sdsempty(), \"sentinel sentinel-pass %s\", sentinel.sentinel_auth_pass);\n        rewriteConfigRewriteLine(state,\"sentinel sentinel-pass\",line,1);\n    } else {\n        rewriteConfigMarkAsProcessed(state,\"sentinel sentinel-pass\");\n    }\n\n    dictResetIterator(&di);\n\n    /* NOTE: a state change may cause the rewrite above to skip some of these\n     * options even though they appeared in the previous config file. Mark them\n     * all as processed here anyway, so that any stale entries are removed from\n     * the rewritten file.\n     */\n    rewriteConfigMarkAsProcessed(state,\"sentinel monitor\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel down-after-milliseconds\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel failover-timeout\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel parallel-syncs\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel notification-script\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel client-reconfig-script\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel auth-pass\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel auth-user\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel config-epoch\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel leader-epoch\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel known-replica\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel known-sentinel\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel rename-command\");\n    rewriteConfigMarkAsProcessed(state,\"sentinel master-reboot-down-after-period\");\n}\n\n/* This function uses the config rewriting Redis engine in order to persist\n * the state of the Sentinel in the current configuration file.\n *\n * On failure the function logs a warning in the Redis log. 
*/\nint sentinelFlushConfig(void) {\n    int saved_hz = server.hz;\n    int rewrite_status;\n\n    server.hz = CONFIG_DEFAULT_HZ;\n    rewrite_status = rewriteConfig(server.configfile, 0);\n    server.hz = saved_hz;\n\n    if (rewrite_status == -1) {\n        serverLog(LL_WARNING,\"WARNING: Sentinel was not able to save the new configuration on disk!!!: %s\", strerror(errno));\n        return C_ERR;\n    } else {\n        serverLog(LL_NOTICE,\"Sentinel new configuration saved on disk\");\n        return C_OK;\n    }\n}\n\n/* Call sentinelFlushConfig() and produce a success/error reply to the\n * calling client.\n */\nstatic void sentinelFlushConfigAndReply(client *c) {\n    if (sentinelFlushConfig() == C_ERR)\n        addReplyError(c, \"Failed to save config file. Check server logs.\");\n    else\n        addReply(c, shared.ok);\n}\n\n/* ====================== hiredis connection handling ======================= */\n\n/* Send the AUTH command with the specified master password if needed.\n * Note that for slaves the password set for the master is used.\n *\n * In case this Sentinel requires a password as well, via the \"requirepass\"\n * configuration directive, we assume we should use the local password in\n * order to authenticate when connecting with the other Sentinels as well.\n * So basically all the Sentinels share the same password and use it to\n * authenticate reciprocally.\n *\n * We don't check at all if the command was successfully transmitted\n * to the instance: if it fails, Sentinel will detect the instance down,\n * disconnect and reconnect the link, and so forth. 
*/\nvoid sentinelSendAuthIfNeeded(sentinelRedisInstance *ri, redisAsyncContext *c) {\n    char *auth_pass = NULL;\n    char *auth_user = NULL;\n\n    if (ri->flags & SRI_MASTER) {\n        auth_pass = ri->auth_pass;\n        auth_user = ri->auth_user;\n    } else if (ri->flags & SRI_SLAVE) {\n        auth_pass = ri->master->auth_pass;\n        auth_user = ri->master->auth_user;\n    } else if (ri->flags & SRI_SENTINEL) {\n        /* If sentinel_auth_user is NULL, AUTH will use the default user\n           with sentinel_auth_pass to authenticate */\n        if (sentinel.sentinel_auth_pass) {\n            auth_pass = sentinel.sentinel_auth_pass;\n            auth_user = sentinel.sentinel_auth_user;\n        } else {\n            /* Compatibility with old configs. requirepass is used\n             * for both incoming and outgoing authentication. */\n            auth_pass = server.requirepass;\n            auth_user = NULL;\n        }\n    }\n\n    if (auth_pass && auth_user == NULL) {\n        if (redisAsyncCommand(c, sentinelDiscardReplyCallback, ri, \"%s %s\",\n            sentinelInstanceMapCommand(ri,\"AUTH\"),\n            auth_pass) == C_OK) ri->link->pending_commands++;\n    } else if (auth_pass && auth_user) {\n        /* If we also have a username, use the ACL-style AUTH command\n         * with two arguments, username and password. */\n        if (redisAsyncCommand(c, sentinelDiscardReplyCallback, ri, \"%s %s %s\",\n            sentinelInstanceMapCommand(ri,\"AUTH\"),\n            auth_user, auth_pass) == C_OK) ri->link->pending_commands++;\n    }\n}\n\n/* Use CLIENT SETNAME to name the connection in the Redis instance as\n * sentinel-<first_8_chars_of_runid>-<connection_type>\n * The connection type is \"cmd\" or \"pubsub\" as specified by 'type'.\n *\n * This makes it possible to list all the sentinel instances connected\n * to a Redis server with CLIENT LIST, grepping for a specific name format. 
*/\nvoid sentinelSetClientName(sentinelRedisInstance *ri, redisAsyncContext *c, char *type) {\n    char name[64];\n\n    snprintf(name,sizeof(name),\"sentinel-%.8s-%s\",sentinel.myid,type);\n    if (redisAsyncCommand(c, sentinelDiscardReplyCallback, ri,\n        \"%s SETNAME %s\",\n        sentinelInstanceMapCommand(ri,\"CLIENT\"),\n        name) == C_OK)\n    {\n        ri->link->pending_commands++;\n    }\n}\n\nstatic int instanceLinkNegotiateTLS(redisAsyncContext *context) {\n#if USE_OPENSSL == 1 /* BUILD_YES */\n    if (!redis_tls_ctx) return C_ERR;\n    SSL *ssl = SSL_new(redis_tls_client_ctx ? redis_tls_client_ctx : redis_tls_ctx);\n    if (!ssl) return C_ERR;\n\n    if (redisInitiateSSL(&context->c, ssl) == REDIS_ERR) {\n        SSL_free(ssl);\n        return C_ERR;\n    }\n#else\n    UNUSED(context);\n#endif\n    return C_OK;\n}\n\n/* Create the async connections for the instance link if the link\n * is disconnected. Note that link->disconnected is true even if just\n * one of the two links (commands and pub/sub) is missing. */\nvoid sentinelReconnectInstance(sentinelRedisInstance *ri) {\n\n    if (ri->link->disconnected == 0) return;\n    if (ri->addr->port == 0) return; /* port == 0 means invalid address. */\n    instanceLink *link = ri->link;\n    mstime_t now = mstime();\n\n    if (now - ri->link->last_reconn_time < sentinel_ping_period) return;\n    ri->link->last_reconn_time = now;\n\n    /* Commands connection. 
*/\n    if (link->cc == NULL) {\n\n        /* It might be that the instance is disconnected because it wasn't available earlier when the instance\n         * was allocated, say during failover, and therefore we failed to resolve its ip.\n         * Another scenario is that the instance restarted with a new ip, and we should resolve the new ip based on\n         * its hostname */\n        if (sentinel.resolve_hostnames) {\n            sentinelAddr *tryResolveAddr = createSentinelAddr(ri->addr->hostname, ri->addr->port, 0);\n            if (tryResolveAddr != NULL) {\n                releaseSentinelAddr(ri->addr);\n                ri->addr = tryResolveAddr;\n            }\n        }\n\n        link->cc = redisAsyncConnectBind(ri->addr->ip,ri->addr->port,server.bind_source_addr);\n\n        if (link->cc && !link->cc->err) anetCloexec(link->cc->c.fd);\n        if (!link->cc) {\n            sentinelEvent(LL_DEBUG,\"-cmd-link-reconnection\",ri,\"%@ #Failed to establish connection\");\n        } else if (!link->cc->err && server.tls_replication &&\n                (instanceLinkNegotiateTLS(link->cc) == C_ERR)) {\n            sentinelEvent(LL_DEBUG,\"-cmd-link-reconnection\",ri,\"%@ #Failed to initialize TLS\");\n            instanceLinkCloseConnection(link,link->cc);\n        } else if (link->cc->err) {\n            sentinelEvent(LL_DEBUG,\"-cmd-link-reconnection\",ri,\"%@ #%s\",\n                link->cc->errstr);\n            instanceLinkCloseConnection(link,link->cc);\n        } else {\n            link->pending_commands = 0;\n            link->cc_conn_time = mstime();\n            link->cc->data = link;\n            redisAeAttach(server.el,link->cc);\n            redisAsyncSetConnectCallback(link->cc,\n                    sentinelLinkEstablishedCallback);\n            redisAsyncSetDisconnectCallback(link->cc,\n                    sentinelDisconnectCallback);\n            sentinelSendAuthIfNeeded(ri,link->cc);\n            sentinelSetClientName(ri,link->cc,\"cmd\");\n\n       
     /* Send a PING ASAP when reconnecting. */\n            sentinelSendPing(ri);\n        }\n    }\n    /* Pub / Sub */\n    if ((ri->flags & (SRI_MASTER|SRI_SLAVE)) && link->pc == NULL) {\n        link->pc = redisAsyncConnectBind(ri->addr->ip,ri->addr->port,server.bind_source_addr);\n        if (link->pc && !link->pc->err) anetCloexec(link->pc->c.fd);\n        if (!link->pc) {\n            sentinelEvent(LL_DEBUG,\"-pubsub-link-reconnection\",ri,\"%@ #Failed to establish connection\");\n        } else if (!link->pc->err && server.tls_replication &&\n                (instanceLinkNegotiateTLS(link->pc) == C_ERR)) {\n            sentinelEvent(LL_DEBUG,\"-pubsub-link-reconnection\",ri,\"%@ #Failed to initialize TLS\");\n        } else if (link->pc->err) {\n            sentinelEvent(LL_DEBUG,\"-pubsub-link-reconnection\",ri,\"%@ #%s\",\n                link->pc->errstr);\n            instanceLinkCloseConnection(link,link->pc);\n        } else {\n            int retval;\n            link->pc_conn_time = mstime();\n            link->pc->data = link;\n            redisAeAttach(server.el,link->pc);\n            redisAsyncSetConnectCallback(link->pc,\n                    sentinelLinkEstablishedCallback);\n            redisAsyncSetDisconnectCallback(link->pc,\n                    sentinelDisconnectCallback);\n            sentinelSendAuthIfNeeded(ri,link->pc);\n            sentinelSetClientName(ri,link->pc,\"pubsub\");\n            /* Now we subscribe to the Sentinels \"Hello\" channel. */\n            retval = redisAsyncCommand(link->pc,\n                sentinelReceiveHelloMessages, ri, \"%s %s\",\n                sentinelInstanceMapCommand(ri,\"SUBSCRIBE\"),\n                SENTINEL_HELLO_CHANNEL);\n            if (retval != C_OK) {\n                /* If we can't subscribe, the Pub/Sub connection is useless\n                 * and we can simply disconnect it and try again. 
*/\n                instanceLinkCloseConnection(link,link->pc);\n                return;\n            }\n        }\n    }\n    /* Clear the disconnected status only if we have both the connections\n     * (or just the commands connection if this is a sentinel instance). */\n    if (link->cc && (ri->flags & SRI_SENTINEL || link->pc))\n        link->disconnected = 0;\n}\n\n/* ======================== Redis instances pinging  ======================== */\n\n/* Return true if master looks \"sane\", that is:\n * 1) It is actually a master in the current configuration.\n * 2) It reports itself as a master.\n * 3) It is not SDOWN or ODOWN.\n * 4) We obtained last INFO no more than two times the INFO period time ago. */\nint sentinelMasterLooksSane(sentinelRedisInstance *master) {\n    return\n        master->flags & SRI_MASTER &&\n        master->role_reported == SRI_MASTER &&\n        (master->flags & (SRI_S_DOWN|SRI_O_DOWN)) == 0 &&\n        (mstime() - master->info_refresh) < sentinel_info_period*2;\n}\n\n/* Process the INFO output from masters. */\nvoid sentinelRefreshInstanceInfo(sentinelRedisInstance *ri, const char *info) {\n    sds *lines;\n    int numlines, j;\n    int role = 0;\n\n    /* cache full INFO output for instance */\n    sdsfree(ri->info);\n    ri->info = sdsnew(info);\n\n    /* The following fields must be reset to a given value in the case they\n     * are not found at all in the INFO output. */\n    ri->master_link_down_time = 0;\n\n    /* Process line by line. 
*/\n    lines = sdssplitlen(info,strlen(info),\"\\r\\n\",2,&numlines);\n    for (j = 0; j < numlines; j++) {\n        sentinelRedisInstance *slave;\n        sds l = lines[j];\n\n        /* run_id:<40 hex chars>*/\n        if (sdslen(l) >= 47 && !memcmp(l,\"run_id:\",7)) {\n            if (ri->runid == NULL) {\n                ri->runid = sdsnewlen(l+7,40);\n            } else {\n                if (strncmp(ri->runid,l+7,40) != 0) {\n                    sentinelEvent(LL_NOTICE,\"+reboot\",ri,\"%@\");\n\n                    if (ri->flags & SRI_MASTER && ri->master_reboot_down_after_period != 0) {\n                        ri->flags |= SRI_MASTER_REBOOT;\n                        ri->master_reboot_since_time = mstime();\n                    }\n\n                    sdsfree(ri->runid);\n                    ri->runid = sdsnewlen(l+7,40);\n                }\n            }\n        }\n\n        /* old versions: slave0:<ip>,<port>,<state>\n         * new versions: slave0:ip=127.0.0.1,port=9999,... */\n        if ((ri->flags & SRI_MASTER) &&\n            sdslen(l) >= 7 &&\n            !memcmp(l,\"slave\",5) && isdigit(l[5]))\n        {\n            char *ip, *port, *end;\n\n            if (strstr(l,\"ip=\") == NULL) {\n                /* Old format. */\n                ip = strchr(l,':'); if (!ip) continue;\n                ip++; /* Now ip points to start of ip address. */\n                port = strchr(ip,','); if (!port) continue;\n                *port = '\\0'; /* nul term for easy access. */\n                port++; /* Now port points to start of port number. */\n                end = strchr(port,','); if (!end) continue;\n                *end = '\\0'; /* nul term for easy access. */\n            } else {\n                /* New format. */\n                ip = strstr(l,\"ip=\"); if (!ip) continue;\n                ip += 3; /* Now ip points to start of ip address. 
*/\n                port = strstr(l,\"port=\"); if (!port) continue;\n                port += 5; /* Now port points to start of port number. */\n                /* Nul term both fields for easy access. */\n                end = strchr(ip,','); if (end) *end = '\\0';\n                end = strchr(port,','); if (end) *end = '\\0';\n            }\n\n            /* Check if we already have this slave into our table,\n             * otherwise add it. */\n            if (sentinelRedisInstanceLookupSlave(ri,ip,atoi(port)) == NULL) {\n                if ((slave = createSentinelRedisInstance(NULL,SRI_SLAVE,ip,\n                            atoi(port), ri->quorum, ri)) != NULL)\n                {\n                    sentinelEvent(LL_NOTICE,\"+slave\",slave,\"%@\");\n                    sentinelFlushConfig();\n                }\n            }\n        }\n\n        /* master_link_down_since_seconds:<seconds> */\n        if (sdslen(l) >= 32 &&\n            !memcmp(l,\"master_link_down_since_seconds\",30))\n        {\n            ri->master_link_down_time = strtoll(l+31,NULL,10)*1000;\n        }\n\n        /* role:<role> */\n        if (sdslen(l) >= 11 && !memcmp(l,\"role:master\",11)) role = SRI_MASTER;\n        else if (sdslen(l) >= 10 && !memcmp(l,\"role:slave\",10)) role = SRI_SLAVE;\n\n        if (role == SRI_SLAVE) {\n            /* master_host:<host> */\n            if (sdslen(l) >= 12 && !memcmp(l,\"master_host:\",12)) {\n                if (ri->slave_master_host == NULL ||\n                    strcasecmp(l+12,ri->slave_master_host))\n                {\n                    sdsfree(ri->slave_master_host);\n                    ri->slave_master_host = sdsnew(l+12);\n                    ri->slave_conf_change_time = mstime();\n                }\n            }\n\n            /* master_port:<port> */\n            if (sdslen(l) >= 12 && !memcmp(l,\"master_port:\",12)) {\n                int slave_master_port = atoi(l+12);\n\n                if (ri->slave_master_port != 
slave_master_port) {\n                    ri->slave_master_port = slave_master_port;\n                    ri->slave_conf_change_time = mstime();\n                }\n            }\n\n            /* master_link_status:<status> */\n            if (sdslen(l) >= 19 && !memcmp(l,\"master_link_status:\",19)) {\n                ri->slave_master_link_status =\n                    (strcasecmp(l+19,\"up\") == 0) ?\n                    SENTINEL_MASTER_LINK_STATUS_UP :\n                    SENTINEL_MASTER_LINK_STATUS_DOWN;\n            }\n\n            /* slave_priority:<priority> */\n            if (sdslen(l) >= 15 && !memcmp(l,\"slave_priority:\",15))\n                ri->slave_priority = atoi(l+15);\n\n            /* slave_repl_offset:<offset> */\n            if (sdslen(l) >= 18 && !memcmp(l,\"slave_repl_offset:\",18))\n                ri->slave_repl_offset = strtoull(l+18,NULL,10);\n\n            /* replica_announced:<announcement> */\n            if (sdslen(l) >= 18 && !memcmp(l,\"replica_announced:\",18))\n                ri->replica_announced = atoi(l+18);\n        }\n    }\n    ri->info_refresh = mstime();\n    sdsfreesplitres(lines,numlines);\n\n    /* ---------------------------- Acting half -----------------------------\n     * Some things will not happen if sentinel.tilt is true, but some will\n     * still be processed. */\n\n    /* Remember when the role changed. */\n    if (role != ri->role_reported) {\n        ri->role_reported_time = mstime();\n        ri->role_reported = role;\n        if (role == SRI_SLAVE) ri->slave_conf_change_time = mstime();\n        /* Log the event with +role-change if the new role is coherent or\n         * with -role-change if there is a mismatch with the current config. */\n        sentinelEvent(LL_VERBOSE,\n            ((ri->flags & (SRI_MASTER|SRI_SLAVE)) == role) ?\n            \"+role-change\" : \"-role-change\",\n            ri, \"%@ new reported role is %s\",\n            role == SRI_MASTER ? 
\"master\" : \"slave\");\n    }\n\n    /* None of the following conditions are processed when in tilt mode, so\n     * return asap. */\n    if (sentinel.tilt) return;\n\n    /* Handle master -> slave role switch. */\n    if ((ri->flags & SRI_MASTER) && role == SRI_SLAVE) {\n        /* Nothing to do, but masters claiming to be slaves are\n         * considered to be unreachable by Sentinel, so eventually\n         * a failover will be triggered. */\n    }\n\n    /* Handle slave -> master role switch. */\n    if ((ri->flags & SRI_SLAVE) && role == SRI_MASTER) {\n        /* If this is a promoted slave we can change state to the\n         * failover state machine. */\n        if ((ri->flags & SRI_PROMOTED) &&\n            (ri->master->flags & SRI_FAILOVER_IN_PROGRESS) &&\n            (ri->master->failover_state ==\n                SENTINEL_FAILOVER_STATE_WAIT_PROMOTION))\n        {\n            /* Now that we are sure the slave was reconfigured as a master\n             * set the master configuration epoch to the epoch we won the\n             * election to perform this failover. This will force the other\n             * Sentinels to update their config (assuming there is not\n             * a newer one already available). 
*/\n            ri->master->config_epoch = ri->master->failover_epoch;\n            ri->master->failover_state = SENTINEL_FAILOVER_STATE_RECONF_SLAVES;\n            ri->master->failover_state_change_time = mstime();\n            sentinelFlushConfig();\n            sentinelEvent(LL_WARNING,\"+promoted-slave\",ri,\"%@\");\n            if (sentinel.simfailure_flags &\n                SENTINEL_SIMFAILURE_CRASH_AFTER_PROMOTION)\n                sentinelSimFailureCrash();\n            sentinelEvent(LL_WARNING,\"+failover-state-reconf-slaves\",\n                ri->master,\"%@\");\n            sentinelCallClientReconfScript(ri->master,SENTINEL_LEADER,\n                \"start\",ri->master->addr,ri->addr);\n            sentinelForceHelloUpdateForMaster(ri->master);\n        } else {\n            /* A slave turned into a master. We want to force our view and\n             * reconfigure as slave. Wait some time after the change before\n             * going forward, to receive new configs if any. */\n            mstime_t wait_time = sentinel_publish_period*4;\n\n            if (!(ri->flags & SRI_PROMOTED) &&\n                 sentinelMasterLooksSane(ri->master) &&\n                 sentinelRedisInstanceNoDownFor(ri,wait_time) &&\n                 mstime() - ri->role_reported_time > wait_time)\n            {\n                int retval = sentinelSendSlaveOf(ri,ri->master->addr);\n                if (retval == C_OK)\n                    sentinelEvent(LL_NOTICE,\"+convert-to-slave\",ri,\"%@\");\n            }\n        }\n    }\n\n    /* Handle slaves replicating to a different master address. */\n    if ((ri->flags & SRI_SLAVE) &&\n        role == SRI_SLAVE &&\n        (ri->slave_master_port != ri->master->addr->port ||\n         !sentinelAddrEqualsHostname(ri->master->addr, ri->slave_master_host)))\n    {\n        mstime_t wait_time = ri->master->failover_timeout;\n\n        /* Make sure the master is sane before reconfiguring this instance\n         * into a slave. 
*/\n        if (sentinelMasterLooksSane(ri->master) &&\n            sentinelRedisInstanceNoDownFor(ri,wait_time) &&\n            mstime() - ri->slave_conf_change_time > wait_time)\n        {\n            int retval = sentinelSendSlaveOf(ri,ri->master->addr);\n            if (retval == C_OK)\n                sentinelEvent(LL_NOTICE,\"+fix-slave-config\",ri,\"%@\");\n        }\n    }\n\n    /* Detect if the slave that is in the process of being reconfigured\n     * changed state. */\n    if ((ri->flags & SRI_SLAVE) && role == SRI_SLAVE &&\n        (ri->flags & (SRI_RECONF_SENT|SRI_RECONF_INPROG)))\n    {\n        /* SRI_RECONF_SENT -> SRI_RECONF_INPROG. */\n        if ((ri->flags & SRI_RECONF_SENT) &&\n            ri->slave_master_host &&\n            sentinelAddrEqualsHostname(ri->master->promoted_slave->addr,\n                ri->slave_master_host) &&\n            ri->slave_master_port == ri->master->promoted_slave->addr->port)\n        {\n            ri->flags &= ~SRI_RECONF_SENT;\n            ri->flags |= SRI_RECONF_INPROG;\n            sentinelEvent(LL_NOTICE,\"+slave-reconf-inprog\",ri,\"%@\");\n        }\n\n        /* SRI_RECONF_INPROG -> SRI_RECONF_DONE */\n        if ((ri->flags & SRI_RECONF_INPROG) &&\n            ri->slave_master_link_status == SENTINEL_MASTER_LINK_STATUS_UP)\n        {\n            ri->flags &= ~SRI_RECONF_INPROG;\n            ri->flags |= SRI_RECONF_DONE;\n            sentinelEvent(LL_NOTICE,\"+slave-reconf-done\",ri,\"%@\");\n        }\n    }\n}\n\nvoid sentinelInfoReplyCallback(redisAsyncContext *c, void *reply, void *privdata) {\n    sentinelRedisInstance *ri = privdata;\n    instanceLink *link = c->data;\n    redisReply *r;\n\n    if (!reply || !link) return;\n    link->pending_commands--;\n    r = reply;\n\n    /* INFO reply type is verbatim in resp3. Normally, sentinel will not use\n     * resp3 but this is required for testing (see logreqres.c). 
*/\n    if (r->type == REDIS_REPLY_STRING || r->type == REDIS_REPLY_VERB)\n        sentinelRefreshInstanceInfo(ri,r->str);\n}\n\n/* Just discard the reply. We use this when we are not monitoring the return\n * value of the command but its effects directly. */\nvoid sentinelDiscardReplyCallback(redisAsyncContext *c, void *reply, void *privdata) {\n    instanceLink *link = c->data;\n    UNUSED(reply);\n    UNUSED(privdata);\n\n    if (link) link->pending_commands--;\n}\n\nvoid sentinelPingReplyCallback(redisAsyncContext *c, void *reply, void *privdata) {\n    sentinelRedisInstance *ri = privdata;\n    instanceLink *link = c->data;\n    redisReply *r;\n\n    if (!reply || !link) return;\n    link->pending_commands--;\n    r = reply;\n\n    if (r->type == REDIS_REPLY_STATUS ||\n        r->type == REDIS_REPLY_ERROR) {\n        /* Update the \"instance available\" field only if this is an\n         * acceptable reply. */\n        if (strncmp(r->str,\"PONG\",4) == 0 ||\n            strncmp(r->str,\"LOADING\",7) == 0 ||\n            strncmp(r->str,\"MASTERDOWN\",10) == 0)\n        {\n            link->last_avail_time = mstime();\n            link->act_ping_time = 0; /* Flag the pong as received. */\n\n            if (ri->flags & SRI_MASTER_REBOOT && strncmp(r->str,\"PONG\",4) == 0)\n                ri->flags &= ~SRI_MASTER_REBOOT;\n\n        } else {\n            /* Send a SCRIPT KILL command if the instance appears to be\n             * down because of a busy script. 
*/\n            if (strncmp(r->str,\"BUSY\",4) == 0 &&\n                (ri->flags & SRI_S_DOWN) &&\n                !(ri->flags & SRI_SCRIPT_KILL_SENT))\n            {\n                if (redisAsyncCommand(ri->link->cc,\n                        sentinelDiscardReplyCallback, ri,\n                        \"%s KILL\",\n                        sentinelInstanceMapCommand(ri,\"SCRIPT\")) == C_OK)\n                {\n                    ri->link->pending_commands++;\n                }\n                ri->flags |= SRI_SCRIPT_KILL_SENT;\n            }\n        }\n    }\n    link->last_pong_time = mstime();\n}\n\n/* This is called when we get the reply about the PUBLISH command we send\n * to the master to advertise this sentinel. */\nvoid sentinelPublishReplyCallback(redisAsyncContext *c, void *reply, void *privdata) {\n    sentinelRedisInstance *ri = privdata;\n    instanceLink *link = c->data;\n    redisReply *r;\n\n    if (!reply || !link) return;\n    link->pending_commands--;\n    r = reply;\n\n    /* Only update pub_time if we actually published our message. Otherwise\n     * we'll retry again in 100 milliseconds. */\n    if (r->type != REDIS_REPLY_ERROR)\n        ri->last_pub_time = mstime();\n}\n\n/* Process a hello message received via Pub/Sub in master or slave instance,\n * or sent directly to this sentinel via the (fake) PUBLISH command of Sentinel.\n *\n * If the master name specified in the message is not known, the message is\n * discarded. */\nvoid sentinelProcessHelloMessage(char *hello, int hello_len) {\n    /* Format is composed of 8 tokens:\n     * 0=ip,1=port,2=runid,3=current_epoch,4=master_name,\n     * 5=master_ip,6=master_port,7=master_config_epoch. 
*/\n    int numtokens, port, removed, master_port;\n    uint64_t current_epoch, master_config_epoch;\n    char **token = sdssplitlen(hello, hello_len, \",\", 1, &numtokens);\n    sentinelRedisInstance *si, *master;\n\n    if (numtokens == 8) {\n        /* Obtain a reference to the master this hello message is about */\n        master = sentinelGetMasterByName(token[4]);\n        if (!master) goto cleanup; /* Unknown master, skip the message. */\n\n        /* First, try to see if we already have this sentinel. */\n        port = atoi(token[1]);\n        master_port = atoi(token[6]);\n        si = getSentinelRedisInstanceByAddrAndRunID(\n                        master->sentinels,token[0],port,token[2]);\n        current_epoch = strtoull(token[3],NULL,10);\n        master_config_epoch = strtoull(token[7],NULL,10);\n\n        if (!si) {\n            /* If not, remove all the sentinels that have the same runid\n             * because there was an address change, and add the same Sentinel\n             * with the new address back. */\n            removed = removeMatchingSentinelFromMaster(master,token[2]);\n            if (removed) {\n                sentinelEvent(LL_NOTICE,\"+sentinel-address-switch\",master,\n                    \"%@ ip %s port %d for %s\", token[0],port,token[2]);\n            } else {\n                /* Check if there is another Sentinel with the same address this\n                 * new one is reporting. What we do if this happens is to set its\n                 * port to 0, to signal the address is invalid. We'll update it\n                 * later if we get a HELLO message. 
*/\n                sentinelRedisInstance *other =\n                    getSentinelRedisInstanceByAddrAndRunID(\n                        master->sentinels, token[0],port,NULL);\n                if (other) {\n                    /* If there is already another sentinel with the same address\n                     * (but a different runid), remove the old one across all masters. */\n                    sentinelEvent(LL_NOTICE,\"+sentinel-invalid-addr\",other,\"%@\");\n                    dictIterator di;\n                    dictEntry *de;\n\n                    /* Keep a copy of the runid: 'other' is about to be deleted\n                     * in the loop below. */\n                    sds runid_obsolete = sdsnew(other->runid);\n\n                    dictInitIterator(&di, sentinel.masters);\n                    while((de = dictNext(&di)) != NULL) {\n                        sentinelRedisInstance *master = dictGetVal(de);\n                        removeMatchingSentinelFromMaster(master, runid_obsolete);\n                    }\n                    dictResetIterator(&di);\n                    sdsfree(runid_obsolete);\n                }\n            }\n\n            /* Add the new sentinel. */\n            si = createSentinelRedisInstance(token[2],SRI_SENTINEL,\n                            token[0],port,master->quorum,master);\n\n            if (si) {\n                if (!removed) sentinelEvent(LL_NOTICE,\"+sentinel\",si,\"%@\");\n                /* The runid is NULL after a new instance creation and\n                 * for Sentinels we don't have a later chance to fill it,\n                 * so do it now. 
*/\n                si->runid = sdsnew(token[2]);\n                sentinelTryConnectionSharing(si);\n                if (removed) sentinelUpdateSentinelAddressInAllMasters(si);\n                sentinelFlushConfig();\n            }\n        }\n\n        /* Update local current_epoch if received current_epoch is greater.*/\n        if (current_epoch > sentinel.current_epoch) {\n            sentinel.current_epoch = current_epoch;\n            sentinelFlushConfig();\n            sentinelEvent(LL_WARNING,\"+new-epoch\",master,\"%llu\",\n                (unsigned long long) sentinel.current_epoch);\n        }\n\n        /* Update master info if received configuration is newer. */\n        if (si && master->config_epoch < master_config_epoch) {\n            master->config_epoch = master_config_epoch;\n            if (master_port != master->addr->port ||\n                !sentinelAddrEqualsHostname(master->addr, token[5]))\n            {\n                sentinelAddr *old_addr;\n\n                sentinelEvent(LL_WARNING,\"+config-update-from\",si,\"%@\");\n                sentinelEvent(LL_WARNING,\"+switch-master\",\n                    master,\"%s %s %d %s %d\",\n                    master->name,\n                    announceSentinelAddr(master->addr), master->addr->port,\n                    token[5], master_port);\n\n                old_addr = dupSentinelAddr(master->addr);\n                sentinelResetMasterAndChangeAddress(master, token[5], master_port);\n                sentinelCallClientReconfScript(master,\n                    SENTINEL_OBSERVER,\"start\",\n                    old_addr,master->addr);\n                releaseSentinelAddr(old_addr);\n            }\n        }\n\n        /* Update the state of the Sentinel. */\n        if (si) si->last_hello_time = mstime();\n    }\n\ncleanup:\n    sdsfreesplitres(token,numtokens);\n}\n\n\n/* This is our Pub/Sub callback for the Hello channel. 
It's useful in order\n * to discover other sentinels attached to the same master. */\nvoid sentinelReceiveHelloMessages(redisAsyncContext *c, void *reply, void *privdata) {\n    sentinelRedisInstance *ri = privdata;\n    redisReply *r;\n    UNUSED(c);\n\n    if (!reply || !ri) return;\n    r = reply;\n\n    /* Update the last activity in the pubsub channel. Note that since we\n     * receive our own messages as well, this timestamp can be used to detect\n     * if the link is probably disconnected even if it seems otherwise. */\n    ri->link->pc_last_activity = mstime();\n\n    /* Sanity check the reply we expect, so that the code that follows\n     * can avoid checking for details.\n     * Note: Reply type is PUSH in resp3. Normally, sentinel will not use\n     * resp3 but this is required for testing (see logreqres.c). */\n    if ((r->type != REDIS_REPLY_ARRAY && r->type != REDIS_REPLY_PUSH) ||\n        r->elements != 3 ||\n        r->element[0]->type != REDIS_REPLY_STRING ||\n        r->element[1]->type != REDIS_REPLY_STRING ||\n        r->element[2]->type != REDIS_REPLY_STRING ||\n        strcmp(r->element[0]->str,\"message\") != 0) return;\n\n    /* We are not interested in meeting ourselves */\n    if (strstr(r->element[2]->str,sentinel.myid) != NULL) return;\n\n    sentinelProcessHelloMessage(r->element[2]->str, r->element[2]->len);\n}\n\n/* Send a \"Hello\" message via Pub/Sub to the specified 'ri' Redis\n * instance in order to broadcast the current configuration for this\n * master, and to advertise the existence of this Sentinel at the same time.\n *\n * The message has the following format:\n *\n * sentinel_ip,sentinel_port,sentinel_runid,current_epoch,\n * master_name,master_ip,master_port,master_config_epoch.\n *\n * Returns C_OK if the PUBLISH was queued correctly, otherwise\n * C_ERR is returned. 
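\n *\n * For example (purely illustrative values, not taken from any real\n * deployment), a Sentinel at 10.0.0.5:26379 announcing master \"mymaster\"\n * at 10.0.0.9:6379 with current epoch 7 and config epoch 7 would publish:\n *\n *   10.0.0.5,26379,<40-hex-char runid>,7,mymaster,10.0.0.9,6379,7\n 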
*/\nint sentinelSendHello(sentinelRedisInstance *ri) {\n    char ip[NET_IP_STR_LEN];\n    char payload[NET_IP_STR_LEN+1024];\n    int retval;\n    char *announce_ip;\n    int announce_port;\n    sentinelRedisInstance *master = (ri->flags & SRI_MASTER) ? ri : ri->master;\n    sentinelAddr *master_addr = sentinelGetCurrentMasterAddress(master);\n\n    if (ri->link->disconnected) return C_ERR;\n\n    /* Use the announce address if one was specified, otherwise try to\n     * obtain our own IP address. */\n    if (sentinel.announce_ip) {\n        announce_ip = sentinel.announce_ip;\n    } else {\n        if (anetFdToString(ri->link->cc->c.fd,ip,sizeof(ip),NULL,0) == -1)\n            return C_ERR;\n        announce_ip = ip;\n    }\n    if (sentinel.announce_port) announce_port = sentinel.announce_port;\n    else if (server.tls_replication && server.tls_port) announce_port = server.tls_port;\n    else announce_port = server.port;\n\n    /* Format and send the Hello message. */\n    snprintf(payload,sizeof(payload),\n        \"%s,%d,%s,%llu,\" /* Info about this sentinel. */\n        \"%s,%s,%d,%llu\", /* Info about current master. */\n        announce_ip, announce_port, sentinel.myid,\n        (unsigned long long) sentinel.current_epoch,\n        /* --- */\n        master->name,announceSentinelAddr(master_addr),master_addr->port,\n        (unsigned long long) master->config_epoch);\n    retval = redisAsyncCommand(ri->link->cc,\n        sentinelPublishReplyCallback, ri, \"%s %s %s\",\n        sentinelInstanceMapCommand(ri,\"PUBLISH\"),\n        SENTINEL_HELLO_CHANNEL,payload);\n    if (retval != C_OK) return C_ERR;\n    ri->link->pending_commands++;\n    return C_OK;\n}\n\n/* Reset last_pub_time in all the instances in the specified dictionary\n * in order to force the delivery of a Hello update ASAP. 
*/\nvoid sentinelForceHelloUpdateDictOfRedisInstances(dict *instances) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitSafeIterator(&di, instances);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n        if (ri->last_pub_time >= (sentinel_publish_period+1))\n            ri->last_pub_time -= (sentinel_publish_period+1);\n    }\n    dictResetIterator(&di);\n}\n\n/* This function forces the delivery of a \"Hello\" message (see\n * sentinelSendHello() top comment for further information) to all the Redis\n * and Sentinel instances related to the specified 'master'.\n *\n * It is technically not needed since we send an update to every instance\n * with a period of SENTINEL_PUBLISH_PERIOD milliseconds, however when a\n * Sentinel upgrades a configuration it is a good idea to deliver an update\n * to the other Sentinels ASAP. */\nint sentinelForceHelloUpdateForMaster(sentinelRedisInstance *master) {\n    if (!(master->flags & SRI_MASTER)) return C_ERR;\n    if (master->last_pub_time >= (sentinel_publish_period+1))\n        master->last_pub_time -= (sentinel_publish_period+1);\n    sentinelForceHelloUpdateDictOfRedisInstances(master->sentinels);\n    sentinelForceHelloUpdateDictOfRedisInstances(master->slaves);\n    return C_OK;\n}\n\n/* Send a PING to the specified instance and refresh the act_ping_time\n * if it is zero (that is, if we received a pong for the previous ping).\n *\n * On error zero is returned, and we can't consider the PING command\n * queued in the connection. 
*/\nint sentinelSendPing(sentinelRedisInstance *ri) {\n    int retval = redisAsyncCommand(ri->link->cc,\n        sentinelPingReplyCallback, ri, \"%s\",\n        sentinelInstanceMapCommand(ri,\"PING\"));\n    if (retval == C_OK) {\n        ri->link->pending_commands++;\n        ri->link->last_ping_time = mstime();\n        /* We update the active ping time only if we received the pong for\n         * the previous ping, otherwise we are technically waiting since the\n         * first ping that did not receive a reply. */\n        if (ri->link->act_ping_time == 0)\n            ri->link->act_ping_time = ri->link->last_ping_time;\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Send the periodic PING and INFO commands, and PUBLISH to the Hello\n * channel, to the specified master or slave instance. */\nvoid sentinelSendPeriodicCommands(sentinelRedisInstance *ri) {\n    mstime_t now = mstime();\n    mstime_t info_period, ping_period;\n    int retval;\n\n    /* Return ASAP if a PING or INFO is already pending, or if the\n     * instance is not properly connected. */\n    if (ri->link->disconnected) return;\n\n    /* For INFO, PING, PUBLISH that are not critical commands to send we\n     * also have a limit of SENTINEL_MAX_PENDING_COMMANDS. We don't\n     * want to use a lot of memory just because a link is not working\n     * properly (note that anyway there is a redundant protection about this,\n     * that is, the link will be disconnected and reconnected if a long\n     * timeout condition is detected). */\n    if (ri->link->pending_commands >=\n        SENTINEL_MAX_PENDING_COMMANDS * ri->link->refcount) return;\n\n    /* If this is a slave of a master in O_DOWN condition we start sending\n     * it INFO every second, instead of the usual SENTINEL_INFO_PERIOD\n     * period. 
In this state we want to closely monitor slaves in case they\n     * are turned into masters by another Sentinel, or by the sysadmin.\n     *\n     * Similarly we monitor the INFO output more often if the slave reports\n     * to be disconnected from the master, so that we can have a fresh\n     * disconnection time figure. */\n    if ((ri->flags & SRI_SLAVE) &&\n        ((ri->master->flags & (SRI_O_DOWN|SRI_FAILOVER_IN_PROGRESS)) ||\n         (ri->master_link_down_time != 0)))\n    {\n        info_period = 1000;\n    } else {\n        info_period = sentinel_info_period;\n    }\n\n    /* We ping instances every time the last received pong is older than\n     * the configured 'down-after-milliseconds' time, but every second\n     * anyway if 'down-after-milliseconds' is greater than 1 second. */\n    ping_period = ri->down_after_period;\n    if (ping_period > sentinel_ping_period) ping_period = sentinel_ping_period;\n\n    /* Send INFO to masters and slaves, not sentinels. */\n    if ((ri->flags & SRI_SENTINEL) == 0 &&\n        (ri->info_refresh == 0 ||\n        (now - ri->info_refresh) > info_period))\n    {\n        retval = redisAsyncCommand(ri->link->cc,\n            sentinelInfoReplyCallback, ri, \"%s\",\n            sentinelInstanceMapCommand(ri,\"INFO\"));\n        if (retval == C_OK) ri->link->pending_commands++;\n    }\n\n    /* Send PING to all the three kinds of instances. */\n    if ((now - ri->link->last_pong_time) > ping_period &&\n               (now - ri->link->last_ping_time) > ping_period/2) {\n        sentinelSendPing(ri);\n    }\n\n    /* PUBLISH hello messages to all the three kinds of instances. 
*/\n    if ((now - ri->last_pub_time) > sentinel_publish_period) {\n        sentinelSendHello(ri);\n    }\n}\n\n/* =========================== SENTINEL command ============================= */\nstatic void populateDict(dict *options_dict, char **options) {\n    for (int i=0; options[i]; i++) {\n        sds option = sdsnew(options[i]);\n        if (dictAdd(options_dict, option, NULL)==DICT_ERR)\n            sdsfree(option);\n    }\n}\n\nconst char* getLogLevel(void) {\n    switch (server.verbosity) {\n    case LL_DEBUG: return \"debug\";\n    case LL_VERBOSE: return \"verbose\";\n    case LL_NOTICE: return \"notice\";\n    case LL_WARNING: return \"warning\";\n    case LL_NOTHING: return \"nothing\";\n    }\n    return \"unknown\";\n}\n\n/* SENTINEL CONFIG SET option value [option value ...] */\nvoid sentinelConfigSetCommand(client *c) {\n    long long numval;\n    int drop_conns = 0;\n    char *option;\n    robj *val;\n    char *options[] = {\n        \"announce-ip\",\n        \"sentinel-user\",\n        \"sentinel-pass\",\n        \"resolve-hostnames\",\n        \"announce-port\",\n        \"announce-hostnames\",\n        \"loglevel\",\n        NULL};\n    static dict *options_dict = NULL;\n    if (!options_dict) {\n        options_dict = dictCreate(&stringSetDictType);\n        populateDict(options_dict, options);\n    }\n    dict *set_configs = dictCreate(&stringSetDictType);\n\n    /* Validate the arguments */\n    for (int i = 3; i < c->argc; i++) {\n        option = c->argv[i]->ptr;\n\n        /* Check that the option is known */\n        if (dictFind(options_dict, option) == NULL) {\n            addReplyErrorFormat(c, \"Invalid argument '%s' to SENTINEL CONFIG SET\", option);\n            goto exit;\n        }\n\n        /* Check duplicates */\n        if (dictFind(set_configs, option) != NULL) {\n            addReplyErrorFormat(c, \"Duplicate argument '%s' to SENTINEL CONFIG SET\", option);\n            goto exit;\n        }\n\n        
serverAssert(dictAdd(set_configs, sdsnew(option), NULL) == DICT_OK);\n\n        /* Check that a value follows */\n        if (i + 1 == c->argc) {\n            addReplyErrorFormat(c, \"Missing argument '%s' value\", option);\n            goto exit;\n        }\n        val = c->argv[++i];\n\n        if (!strcasecmp(option, \"resolve-hostnames\")) {\n            if ((yesnotoi(val->ptr)) == -1) goto badfmt;\n        } else if (!strcasecmp(option, \"announce-hostnames\")) {\n            if ((yesnotoi(val->ptr)) == -1) goto badfmt;\n        } else if (!strcasecmp(option, \"announce-port\")) {\n            if (getLongLongFromObject(val, &numval) == C_ERR ||\n                numval < 0 || numval > 65535) goto badfmt;\n        } else if (!strcasecmp(option, \"loglevel\")) {\n            if (!(!strcasecmp(val->ptr, \"debug\") || !strcasecmp(val->ptr, \"verbose\") ||\n                !strcasecmp(val->ptr, \"notice\") || !strcasecmp(val->ptr, \"warning\") ||\n                !strcasecmp(val->ptr, \"nothing\"))) goto badfmt;\n        }\n    }\n\n    /* Apply changes */\n    for (int i = 3; i < c->argc; i++) {\n        int moreargs = (c->argc-1) - i;\n        option = c->argv[i]->ptr;\n        if (!strcasecmp(option, \"loglevel\") && moreargs > 0) {\n            val = c->argv[++i];\n            if (!strcasecmp(val->ptr, \"debug\"))\n                server.verbosity = LL_DEBUG;\n            else if (!strcasecmp(val->ptr, \"verbose\"))\n                server.verbosity = LL_VERBOSE;\n            else if (!strcasecmp(val->ptr, \"notice\"))\n                server.verbosity = LL_NOTICE;\n            else if (!strcasecmp(val->ptr, \"warning\"))\n                server.verbosity = LL_WARNING;\n            else if (!strcasecmp(val->ptr, \"nothing\"))\n                server.verbosity = LL_NOTHING;\n        } else if (!strcasecmp(option, \"resolve-hostnames\") && moreargs > 0) {\n            val = c->argv[++i];\n            numval = yesnotoi(val->ptr);\n            sentinel.resolve_hostnames = 
numval;\n        } else if (!strcasecmp(option, \"announce-hostnames\") && moreargs > 0) {\n            val = c->argv[++i];\n            numval = yesnotoi(val->ptr);\n            sentinel.announce_hostnames = numval;\n        } else if (!strcasecmp(option, \"announce-ip\") && moreargs > 0) {\n            val = c->argv[++i];\n            if (sentinel.announce_ip) sdsfree(sentinel.announce_ip);\n            sentinel.announce_ip = sdsnew(val->ptr);\n        } else if (!strcasecmp(option, \"announce-port\") && moreargs > 0) {\n            val = c->argv[++i];\n            getLongLongFromObject(val, &numval);\n            sentinel.announce_port = numval;\n        } else if (!strcasecmp(option, \"sentinel-user\") && moreargs > 0) {\n            val = c->argv[++i];\n            sdsfree(sentinel.sentinel_auth_user);\n            sentinel.sentinel_auth_user = sdslen(val->ptr) == 0 ?\n                NULL : sdsdup(val->ptr);\n            drop_conns = 1;\n        } else if (!strcasecmp(option, \"sentinel-pass\") && moreargs > 0) {\n            val = c->argv[++i];\n            sdsfree(sentinel.sentinel_auth_pass);\n            sentinel.sentinel_auth_pass = sdslen(val->ptr) == 0 ?\n                NULL : sdsdup(val->ptr);\n            drop_conns = 1;\n        } else {\n            /* Should never reach here */\n            serverAssert(0);\n        }\n    }\n\n    sentinelFlushConfigAndReply(c);\n\n    /* Drop Sentinel connections to initiate a reconnect if needed. */\n    if (drop_conns)\n        sentinelDropConnections();\n\nexit:\n    dictRelease(set_configs);\n    return;\n\nbadfmt:\n    addReplyErrorFormat(c, \"Invalid value '%s' to SENTINEL CONFIG SET '%s'\",\n                        (char *) val->ptr, option);\n    dictRelease(set_configs);\n}\n\n/* SENTINEL CONFIG GET <option> [<option> ...] 
*/\nvoid sentinelConfigGetCommand(client *c) {\n    char *pattern;\n    void *replylen = addReplyDeferredLen(c);\n    int matches = 0;\n    /* Create a dictionary to track the options already emitted, to avoid\n     * printing duplicates. */\n    dict *d = dictCreate(&externalStringType);\n    for (int i = 3; i < c->argc; i++) {\n        pattern = c->argv[i]->ptr;\n        /* If the string doesn't contain glob patterns and is already in the\n         * dictionary, don't look further, just continue. */\n        if (!strpbrk(pattern, \"[*?\") && dictFind(d, pattern)) continue;\n        /* We want to print every matched option exactly once. */\n        if (stringmatch(pattern,\"resolve-hostnames\",1) && !dictFind(d, \"resolve-hostnames\")) {\n            addReplyBulkCString(c,\"resolve-hostnames\");\n            addReplyBulkCString(c,sentinel.resolve_hostnames ? \"yes\" : \"no\");\n            dictAdd(d, \"resolve-hostnames\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"announce-hostnames\", 1) && !dictFind(d, \"announce-hostnames\")) {\n            addReplyBulkCString(c,\"announce-hostnames\");\n            addReplyBulkCString(c,sentinel.announce_hostnames ? \"yes\" : \"no\");\n            dictAdd(d, \"announce-hostnames\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"announce-ip\", 1) && !dictFind(d, \"announce-ip\")) {\n            addReplyBulkCString(c,\"announce-ip\");\n            addReplyBulkCString(c,sentinel.announce_ip ? 
sentinel.announce_ip : \"\");\n            dictAdd(d, \"announce-ip\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"announce-port\", 1) && !dictFind(d, \"announce-port\")) {\n            addReplyBulkCString(c, \"announce-port\");\n            addReplyBulkLongLong(c, sentinel.announce_port);\n            dictAdd(d, \"announce-port\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"sentinel-user\", 1) && !dictFind(d, \"sentinel-user\")) {\n            addReplyBulkCString(c, \"sentinel-user\");\n            addReplyBulkCString(c, sentinel.sentinel_auth_user ? sentinel.sentinel_auth_user : \"\");\n            dictAdd(d, \"sentinel-user\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"sentinel-pass\", 1) && !dictFind(d, \"sentinel-pass\")) {\n            addReplyBulkCString(c, \"sentinel-pass\");\n            addReplyBulkCString(c, sentinel.sentinel_auth_pass ? sentinel.sentinel_auth_pass : \"\");\n            dictAdd(d, \"sentinel-pass\", NULL);\n            matches++;\n        }\n        if (stringmatch(pattern, \"loglevel\", 1) && !dictFind(d, \"loglevel\")) {\n            addReplyBulkCString(c, \"loglevel\");\n            addReplyBulkCString(c, getLogLevel());\n            dictAdd(d, \"loglevel\", NULL);\n            matches++;\n        }\n    }\n    dictRelease(d);\n    setDeferredMapLen(c, replylen, matches);\n}\n\nconst char *sentinelFailoverStateStr(int state) {\n    switch(state) {\n    case SENTINEL_FAILOVER_STATE_NONE: return \"none\";\n    case SENTINEL_FAILOVER_STATE_WAIT_START: return \"wait_start\";\n    case SENTINEL_FAILOVER_STATE_SELECT_SLAVE: return \"select_slave\";\n    case SENTINEL_FAILOVER_STATE_SEND_SLAVEOF_NOONE: return \"send_slaveof_noone\";\n    case SENTINEL_FAILOVER_STATE_WAIT_PROMOTION: return \"wait_promotion\";\n    case SENTINEL_FAILOVER_STATE_RECONF_SLAVES: return \"reconf_slaves\";\n    case SENTINEL_FAILOVER_STATE_UPDATE_CONFIG: return 
\"update_config\";\n    default: return \"unknown\";\n    }\n}\n\n/* Redis instance to Redis protocol representation. */\nvoid addReplySentinelRedisInstance(client *c, sentinelRedisInstance *ri) {\n    char *flags = sdsempty();\n    void *mbl;\n    int fields = 0;\n\n    mbl = addReplyDeferredLen(c);\n\n    addReplyBulkCString(c,\"name\");\n    addReplyBulkCString(c,ri->name);\n    fields++;\n\n    addReplyBulkCString(c,\"ip\");\n    addReplyBulkCString(c,announceSentinelAddr(ri->addr));\n    fields++;\n\n    addReplyBulkCString(c,\"port\");\n    addReplyBulkLongLong(c,ri->addr->port);\n    fields++;\n\n    addReplyBulkCString(c,\"runid\");\n    addReplyBulkCString(c,ri->runid ? ri->runid : \"\");\n    fields++;\n\n    addReplyBulkCString(c,\"flags\");\n    if (ri->flags & SRI_S_DOWN) flags = sdscat(flags,\"s_down,\");\n    if (ri->flags & SRI_O_DOWN) flags = sdscat(flags,\"o_down,\");\n    if (ri->flags & SRI_MASTER) flags = sdscat(flags,\"master,\");\n    if (ri->flags & SRI_SLAVE) flags = sdscat(flags,\"slave,\");\n    if (ri->flags & SRI_SENTINEL) flags = sdscat(flags,\"sentinel,\");\n    if (ri->link->disconnected) flags = sdscat(flags,\"disconnected,\");\n    if (ri->flags & SRI_MASTER_DOWN) flags = sdscat(flags,\"master_down,\");\n    if (ri->flags & SRI_FAILOVER_IN_PROGRESS)\n        flags = sdscat(flags,\"failover_in_progress,\");\n    if (ri->flags & SRI_PROMOTED) flags = sdscat(flags,\"promoted,\");\n    if (ri->flags & SRI_RECONF_SENT) flags = sdscat(flags,\"reconf_sent,\");\n    if (ri->flags & SRI_RECONF_INPROG) flags = sdscat(flags,\"reconf_inprog,\");\n    if (ri->flags & SRI_RECONF_DONE) flags = sdscat(flags,\"reconf_done,\");\n    if (ri->flags & SRI_FORCE_FAILOVER) flags = sdscat(flags,\"force_failover,\");\n    if (ri->flags & SRI_SCRIPT_KILL_SENT) flags = sdscat(flags,\"script_kill_sent,\");\n    if (ri->flags & SRI_MASTER_REBOOT) flags = sdscat(flags,\"master_reboot,\");\n\n    if (sdslen(flags) != 0) sdsrange(flags,0,-2); /* remove last \",\" 
*/\n    addReplyBulkCString(c,flags);\n    sdsfree(flags);\n    fields++;\n\n    addReplyBulkCString(c,\"link-pending-commands\");\n    addReplyBulkLongLong(c,ri->link->pending_commands);\n    fields++;\n\n    addReplyBulkCString(c,\"link-refcount\");\n    addReplyBulkLongLong(c,ri->link->refcount);\n    fields++;\n\n    if (ri->flags & SRI_FAILOVER_IN_PROGRESS) {\n        addReplyBulkCString(c,\"failover-state\");\n        addReplyBulkCString(c,(char*)sentinelFailoverStateStr(ri->failover_state));\n        fields++;\n    }\n\n    addReplyBulkCString(c,\"last-ping-sent\");\n    addReplyBulkLongLong(c,\n        ri->link->act_ping_time ? (mstime() - ri->link->act_ping_time) : 0);\n    fields++;\n\n    addReplyBulkCString(c,\"last-ok-ping-reply\");\n    addReplyBulkLongLong(c,mstime() - ri->link->last_avail_time);\n    fields++;\n\n    addReplyBulkCString(c,\"last-ping-reply\");\n    addReplyBulkLongLong(c,mstime() - ri->link->last_pong_time);\n    fields++;\n\n    if (ri->flags & SRI_S_DOWN) {\n        addReplyBulkCString(c,\"s-down-time\");\n        addReplyBulkLongLong(c,mstime()-ri->s_down_since_time);\n        fields++;\n    }\n\n    if (ri->flags & SRI_O_DOWN) {\n        addReplyBulkCString(c,\"o-down-time\");\n        addReplyBulkLongLong(c,mstime()-ri->o_down_since_time);\n        fields++;\n    }\n\n    addReplyBulkCString(c,\"down-after-milliseconds\");\n    addReplyBulkLongLong(c,ri->down_after_period);\n    fields++;\n\n    /* Masters and Slaves */\n    if (ri->flags & (SRI_MASTER|SRI_SLAVE)) {\n        addReplyBulkCString(c,\"info-refresh\");\n        addReplyBulkLongLong(c,\n            ri->info_refresh ? (mstime() - ri->info_refresh) : 0);\n        fields++;\n\n        addReplyBulkCString(c,\"role-reported\");\n        addReplyBulkCString(c, (ri->role_reported == SRI_MASTER) ? 
\"master\" :\n                                                                   \"slave\");\n        fields++;\n\n        addReplyBulkCString(c,\"role-reported-time\");\n        addReplyBulkLongLong(c,mstime() - ri->role_reported_time);\n        fields++;\n    }\n\n    /* Only masters */\n    if (ri->flags & SRI_MASTER) {\n        addReplyBulkCString(c,\"config-epoch\");\n        addReplyBulkLongLong(c,ri->config_epoch);\n        fields++;\n\n        addReplyBulkCString(c,\"num-slaves\");\n        addReplyBulkLongLong(c,dictSize(ri->slaves));\n        fields++;\n\n        addReplyBulkCString(c,\"num-other-sentinels\");\n        addReplyBulkLongLong(c,dictSize(ri->sentinels));\n        fields++;\n\n        addReplyBulkCString(c,\"quorum\");\n        addReplyBulkLongLong(c,ri->quorum);\n        fields++;\n\n        addReplyBulkCString(c,\"failover-timeout\");\n        addReplyBulkLongLong(c,ri->failover_timeout);\n        fields++;\n\n        addReplyBulkCString(c,\"parallel-syncs\");\n        addReplyBulkLongLong(c,ri->parallel_syncs);\n        fields++;\n\n        if (ri->notification_script) {\n            addReplyBulkCString(c,\"notification-script\");\n            addReplyBulkCString(c,ri->notification_script);\n            fields++;\n        }\n\n        if (ri->client_reconfig_script) {\n            addReplyBulkCString(c,\"client-reconfig-script\");\n            addReplyBulkCString(c,ri->client_reconfig_script);\n            fields++;\n        }\n    }\n\n    /* Only slaves */\n    if (ri->flags & SRI_SLAVE) {\n        addReplyBulkCString(c,\"master-link-down-time\");\n        addReplyBulkLongLong(c,ri->master_link_down_time);\n        fields++;\n\n        addReplyBulkCString(c,\"master-link-status\");\n        addReplyBulkCString(c,\n            (ri->slave_master_link_status == SENTINEL_MASTER_LINK_STATUS_UP) ?\n            \"ok\" : \"err\");\n        fields++;\n\n        addReplyBulkCString(c,\"master-host\");\n        addReplyBulkCString(c,\n            
ri->slave_master_host ? ri->slave_master_host : \"?\");\n        fields++;\n\n        addReplyBulkCString(c,\"master-port\");\n        addReplyBulkLongLong(c,ri->slave_master_port);\n        fields++;\n\n        addReplyBulkCString(c,\"slave-priority\");\n        addReplyBulkLongLong(c,ri->slave_priority);\n        fields++;\n\n        addReplyBulkCString(c,\"slave-repl-offset\");\n        addReplyBulkLongLong(c,ri->slave_repl_offset);\n        fields++;\n\n        addReplyBulkCString(c,\"replica-announced\");\n        addReplyBulkLongLong(c,ri->replica_announced);\n        fields++;\n    }\n\n    /* Only sentinels */\n    if (ri->flags & SRI_SENTINEL) {\n        addReplyBulkCString(c,\"last-hello-message\");\n        addReplyBulkLongLong(c,mstime() - ri->last_hello_time);\n        fields++;\n\n        addReplyBulkCString(c,\"voted-leader\");\n        addReplyBulkCString(c,ri->leader ? ri->leader : \"?\");\n        fields++;\n\n        addReplyBulkCString(c,\"voted-leader-epoch\");\n        addReplyBulkLongLong(c,ri->leader_epoch);\n        fields++;\n    }\n\n    setDeferredMapLen(c,mbl,fields);\n}\n\nvoid sentinelSetDebugConfigParameters(client *c){\n    int j;\n    int badarg = 0; /* Bad argument position for error reporting. */\n    char *option;\n\n    /* Process option - value pairs. 
*/\n    for (j = 2; j < c->argc; j++) {\n        int moreargs = (c->argc-1) - j;\n        option = c->argv[j]->ptr;\n        long long ll;\n\n        if (!strcasecmp(option,\"info-period\") && moreargs > 0) {\n            /* info-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_info_period = ll;\n\n        } else if (!strcasecmp(option,\"ping-period\") && moreargs > 0) {\n            /* ping-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_ping_period = ll;\n\n        } else if (!strcasecmp(option,\"ask-period\") && moreargs > 0) {\n            /* ask-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_ask_period = ll;\n\n        } else if (!strcasecmp(option,\"publish-period\") && moreargs > 0) {\n            /* publish-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_publish_period = ll;\n\n        } else if (!strcasecmp(option,\"default-down-after\") && moreargs > 0) {\n            /* default-down-after <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_default_down_after = ll;\n\n        } else if (!strcasecmp(option,\"tilt-trigger\") && moreargs > 0) {\n            /* tilt-trigger <milliseconds> 
*/\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_tilt_trigger = ll;\n\n        } else if (!strcasecmp(option,\"tilt-period\") && moreargs > 0) {\n            /* tilt-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_tilt_period = ll;\n\n        } else if (!strcasecmp(option,\"slave-reconf-timeout\") && moreargs > 0) {\n            /* slave-reconf-timeout <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_slave_reconf_timeout = ll;\n\n        } else if (!strcasecmp(option,\"min-link-reconnect-period\") && moreargs > 0) {\n            /* min-link-reconnect-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_min_link_reconnect_period = ll;\n\n        } else if (!strcasecmp(option,\"default-failover-timeout\") && moreargs > 0) {\n            /* default-failover-timeout <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_default_failover_timeout = ll;\n\n        } else if (!strcasecmp(option,\"election-timeout\") && moreargs > 0) {\n            /* election-timeout <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto 
badfmt;\n            }\n            sentinel_election_timeout = ll;\n\n        } else if (!strcasecmp(option,\"script-max-runtime\") && moreargs > 0) {\n            /* script-max-runtime <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_script_max_runtime = ll;\n\n        } else if (!strcasecmp(option,\"script-retry-delay\") && moreargs > 0) {\n            /* script-retry-delay <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            sentinel_script_retry_delay = ll;\n\n        } else {\n            addReplyErrorFormat(c,\"Unknown option or number of arguments for \"\n                                  \"SENTINEL DEBUG '%s'\", option);\n            return;\n        }\n    }\n\n    addReply(c,shared.ok);\n    return;\n\nbadfmt: /* Bad format errors */\n    addReplyErrorFormat(c,\"Invalid argument '%s' for SENTINEL DEBUG '%s'\",\n        (char*)c->argv[badarg]->ptr,option);\n\n    return;\n}\n\nvoid addReplySentinelDebugInfo(client *c) {\n    void *mbl;\n    int fields = 0;\n\n    mbl = addReplyDeferredLen(c);\n\n    addReplyBulkCString(c,\"INFO-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_info_period);\n    fields++;\n\n    addReplyBulkCString(c,\"PING-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_ping_period);\n    fields++;\n\n    addReplyBulkCString(c,\"ASK-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_ask_period);\n    fields++;\n\n    addReplyBulkCString(c,\"PUBLISH-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_publish_period);\n    fields++;\n\n    addReplyBulkCString(c,\"DEFAULT-DOWN-AFTER\");\n    addReplyBulkLongLong(c,sentinel_default_down_after);\n    fields++;\n\n    addReplyBulkCString(c,\"DEFAULT-FAILOVER-TIMEOUT\");\n    
addReplyBulkLongLong(c,sentinel_default_failover_timeout);\n    fields++;\n\n    addReplyBulkCString(c,\"TILT-TRIGGER\");\n    addReplyBulkLongLong(c,sentinel_tilt_trigger);\n    fields++;\n\n    addReplyBulkCString(c,\"TILT-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_tilt_period);\n    fields++;\n\n    addReplyBulkCString(c,\"SLAVE-RECONF-TIMEOUT\");\n    addReplyBulkLongLong(c,sentinel_slave_reconf_timeout);\n    fields++;\n\n    addReplyBulkCString(c,\"MIN-LINK-RECONNECT-PERIOD\");\n    addReplyBulkLongLong(c,sentinel_min_link_reconnect_period);\n    fields++;\n\n    addReplyBulkCString(c,\"ELECTION-TIMEOUT\");\n    addReplyBulkLongLong(c,sentinel_election_timeout);\n    fields++;\n\n    addReplyBulkCString(c,\"SCRIPT-MAX-RUNTIME\");\n    addReplyBulkLongLong(c,sentinel_script_max_runtime);\n    fields++;\n\n    addReplyBulkCString(c,\"SCRIPT-RETRY-DELAY\");\n    addReplyBulkLongLong(c,sentinel_script_retry_delay);\n    fields++;\n\n    setDeferredMapLen(c,mbl,fields);\n}\n\n/* Output the instances contained in a dictionary using the\n * Redis protocol. */\nvoid addReplyDictOfRedisInstances(client *c, dict *instances) {\n    dictIterator di;\n    dictEntry *de;\n    long slaves = 0;\n    void *replylen = addReplyDeferredLen(c);\n\n    dictInitIterator(&di, instances);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        /* don't announce unannounced replicas */\n        if (ri->flags & SRI_SLAVE && !ri->replica_announced) continue;\n        addReplySentinelRedisInstance(c,ri);\n        slaves++;\n    }\n    dictResetIterator(&di);\n    setDeferredArrayLen(c, replylen, slaves);\n}\n\n/* Look up the named master in sentinel.masters.\n * If the master is not found, reply to the client with an error and return\n * NULL. 
*/\nsentinelRedisInstance *sentinelGetMasterByNameOrReplyError(client *c,\n                        robj *name)\n{\n    sentinelRedisInstance *ri;\n\n    ri = dictFetchValue(sentinel.masters,name->ptr);\n    if (!ri) {\n        addReplyError(c,\"No such master with that name\");\n        return NULL;\n    }\n    return ri;\n}\n\n#define SENTINEL_ISQR_OK 0\n#define SENTINEL_ISQR_NOQUORUM (1<<0)\n#define SENTINEL_ISQR_NOAUTH (1<<1)\nint sentinelIsQuorumReachable(sentinelRedisInstance *master, int *usableptr) {\n    dictIterator di;\n    dictEntry *de;\n    int usable = 1; /* Number of usable Sentinels. Init to 1 to count myself. */\n    int result = SENTINEL_ISQR_OK;\n    int voters = dictSize(master->sentinels)+1; /* Known Sentinels + myself. */\n\n    dictInitIterator(&di, master->sentinels);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        if (ri->flags & (SRI_S_DOWN|SRI_O_DOWN)) continue;\n        usable++;\n    }\n    dictResetIterator(&di);\n\n    if (usable < (int)master->quorum) result |= SENTINEL_ISQR_NOQUORUM;\n    if (usable < voters/2+1) result |= SENTINEL_ISQR_NOAUTH;\n    if (usableptr) *usableptr = usable;\n    return result;\n}\n\nvoid sentinelCommand(client *c) {\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"CKQUORUM <master-name>\",\n\"    Check if the current Sentinel configuration is able to reach the quorum\",\n\"    needed to failover a master and the majority needed to authorize the\",\n\"    failover.\",\n\"CONFIG SET param value [param value ...]\",\n\"    Set a global Sentinel configuration parameter.\",\n\"CONFIG GET <param> [param param param ...]\",\n\"    Get global Sentinel configuration parameter.\",\n\"DEBUG [<param> <value> ...]\",\n\"    Show a list of configurable time parameters and their values (milliseconds).\",\n\"    Or update current configurable parameters values (one or more).\",\n\"GET-MASTER-ADDR-BY-NAME 
<master-name>\",\n\"    Return the ip and port number of the master with that name.\",\n\"FAILOVER <master-name>\",\n\"    Manually failover a master node without asking for agreement from other\",\n\"    Sentinels\",\n\"FLUSHCONFIG\",\n\"    Force Sentinel to rewrite its configuration on disk, including the current\",\n\"    Sentinel state.\",\n\"INFO-CACHE <master-name>\",\n\"    Return last cached INFO output from masters and all its replicas.\",\n\"IS-MASTER-DOWN-BY-ADDR <ip> <port> <current-epoch> <runid>\",\n\"    Check if the master specified by ip:port is down from current Sentinel's\",\n\"    point of view.\",\n\"MASTER <master-name>\",\n\"    Show the state and info of the specified master.\",\n\"MASTERS\",\n\"    Show a list of monitored masters and their state.\",\n\"MONITOR <name> <ip> <port> <quorum>\",\n\"    Start monitoring a new master with the specified name, ip, port and quorum.\",\n\"MYID\",\n\"    Return the ID of the Sentinel instance.\",\n\"PENDING-SCRIPTS\",\n\"    Get pending scripts information.\",\n\"REMOVE <master-name>\",\n\"    Remove master from Sentinel's monitor list.\",\n\"REPLICAS <master-name>\",\n\"    Show a list of replicas for this master and their state.\",\n\"RESET <pattern>\",\n\"    Reset masters for specific master name matching this pattern.\",\n\"SENTINELS <master-name>\",\n\"    Show a list of Sentinel instances for this master and their state.\",\n\"SET <master-name> <option> <value> [<option> <value> ...]\",\n\"    Set configuration parameters for certain masters.\",\n\"SIMULATE-FAILURE [CRASH-AFTER-ELECTION] [CRASH-AFTER-PROMOTION] [HELP]\",\n\"    Simulate a Sentinel crash.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"masters\")) {\n        /* SENTINEL MASTERS */\n        if (c->argc != 2) goto numargserr;\n        addReplyDictOfRedisInstances(c,sentinel.masters);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"master\")) {\n        /* SENTINEL MASTER <name> 
*/\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2]))\n            == NULL) return;\n        addReplySentinelRedisInstance(c,ri);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"slaves\") ||\n               !strcasecmp(c->argv[1]->ptr,\"replicas\"))\n    {\n        /* SENTINEL REPLICAS <master-name> */\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2])) == NULL)\n            return;\n        addReplyDictOfRedisInstances(c,ri->slaves);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"sentinels\")) {\n        /* SENTINEL SENTINELS <master-name> */\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2])) == NULL)\n            return;\n        addReplyDictOfRedisInstances(c,ri->sentinels);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"myid\") && c->argc == 2) {\n        /* SENTINEL MYID */\n        addReplyBulkCBuffer(c,sentinel.myid,CONFIG_RUN_ID_SIZE);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"is-master-down-by-addr\")) {\n        /* SENTINEL IS-MASTER-DOWN-BY-ADDR <ip> <port> <current-epoch> <runid>\n         *\n         * Arguments:\n         *\n         * ip and port are the ip and port of the master we want to be\n         * checked by Sentinel. Note that the command will not check by\n         * name but just by master, in theory different Sentinels may monitor\n         * different masters with the same name.\n         *\n         * current-epoch is needed in order to understand if we are allowed\n         * to vote for a failover leader or not. Each Sentinel can vote just\n         * one time per epoch.\n         *\n         * runid is \"*\" if we are not seeking for a vote from the Sentinel\n         * in order to elect the failover leader. 
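(For instance, a hypothetical\n         * non-vote-seeking check might be issued as\n         *\n         *   SENTINEL IS-MASTER-DOWN-BY-ADDR 127.0.0.1 6379 0 *\n         *\n         * with example address and epoch values.)\n         * 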
Otherwise it is set to the\n         * runid we want the Sentinel to vote for, if it has not already voted.\n         */\n        sentinelRedisInstance *ri;\n        long long req_epoch;\n        uint64_t leader_epoch = 0;\n        char *leader = NULL;\n        long port;\n        int isdown = 0;\n\n        if (c->argc != 6) goto numargserr;\n        if (getLongFromObjectOrReply(c,c->argv[3],&port,NULL) != C_OK ||\n            getLongLongFromObjectOrReply(c,c->argv[4],&req_epoch,NULL)\n                                                              != C_OK)\n            return;\n        ri = getSentinelRedisInstanceByAddrAndRunID(sentinel.masters,\n            c->argv[2]->ptr,port,NULL);\n\n        /* It exists? Is actually a master? Is subjectively down? It's down.\n         * Note: if we are in tilt mode we always reply with \"0\". */\n        if (!sentinel.tilt && ri && (ri->flags & SRI_S_DOWN) &&\n                                    (ri->flags & SRI_MASTER))\n            isdown = 1;\n\n        /* Vote for the master (or fetch the previous vote) if the request\n         * includes a runid, otherwise the sender is not seeking a vote. */\n        if (ri && ri->flags & SRI_MASTER && strcasecmp(c->argv[5]->ptr,\"*\")) {\n            leader = sentinelVoteLeader(ri,(uint64_t)req_epoch,\n                                            c->argv[5]->ptr,\n                                            &leader_epoch);\n        }\n\n        /* Reply with a three-element multi-bulk reply:\n         * down state, leader, vote epoch. */\n        addReplyArrayLen(c,3);\n        addReply(c, isdown ? shared.cone : shared.czero);\n        addReplyBulkCString(c, leader ? 
leader : \"*\");\n        addReplyLongLong(c, (long long)leader_epoch);\n        if (leader) sdsfree(leader);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"reset\")) {\n        /* SENTINEL RESET <pattern> */\n        if (c->argc != 3) goto numargserr;\n        addReplyLongLong(c,sentinelResetMastersByPattern(c->argv[2]->ptr,SENTINEL_GENERATE_EVENT));\n    } else if (!strcasecmp(c->argv[1]->ptr,\"get-master-addr-by-name\")) {\n        /* SENTINEL GET-MASTER-ADDR-BY-NAME <master-name> */\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        ri = sentinelGetMasterByName(c->argv[2]->ptr);\n        if (ri == NULL) {\n            addReplyNullArray(c);\n        } else {\n            sentinelAddr *addr = sentinelGetCurrentMasterAddress(ri);\n\n            addReplyArrayLen(c,2);\n            addReplyBulkCString(c,announceSentinelAddr(addr));\n            addReplyBulkLongLong(c,addr->port);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"failover\")) {\n        /* SENTINEL FAILOVER <master-name> */\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2])) == NULL)\n            return;\n        if (ri->flags & SRI_FAILOVER_IN_PROGRESS) {\n            addReplyError(c,\"-INPROG Failover already in progress\");\n            return;\n        }\n        if (sentinelSelectSlave(ri) == NULL) {\n            addReplyError(c,\"-NOGOODSLAVE No suitable replica to promote\");\n            return;\n        }\n        serverLog(LL_NOTICE,\"Executing user requested FAILOVER of '%s'\",\n            ri->name);\n        sentinelStartFailover(ri);\n        ri->flags |= SRI_FORCE_FAILOVER;\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"pending-scripts\")) {\n        /* SENTINEL PENDING-SCRIPTS */\n\n        if (c->argc != 2) goto numargserr;\n        sentinelPendingScriptsCommand(c);\n    } else if 
(!strcasecmp(c->argv[1]->ptr,\"monitor\")) {\n        /* SENTINEL MONITOR <name> <ip> <port> <quorum> */\n        sentinelRedisInstance *ri;\n        long quorum, port;\n        char ip[NET_IP_STR_LEN];\n\n        if (c->argc != 6) goto numargserr;\n        if (getLongFromObjectOrReply(c,c->argv[5],&quorum,\"Invalid quorum\")\n            != C_OK) return;\n        if (getLongFromObjectOrReply(c,c->argv[4],&port,\"Invalid port\")\n            != C_OK) return;\n\n        if (quorum <= 0) {\n            addReplyError(c, \"Quorum must be 1 or greater.\");\n            return;\n        }\n\n        /* If resolve-hostnames is used, actual DNS resolution may take place.\n         * Otherwise just validate address.\n         */\n        if (anetResolve(NULL,c->argv[3]->ptr,ip,sizeof(ip),\n                        sentinel.resolve_hostnames ? ANET_NONE : ANET_IP_ONLY) == ANET_ERR) {\n            addReplyError(c, \"Invalid IP address or hostname specified\");\n            return;\n        }\n\n        /* Parameters are valid. Try to create the master instance. 
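\n         *\n         * For example, a hypothetical monitoring command handled here:\n         *\n         *   SENTINEL MONITOR mymaster 127.0.0.1 6379 2\n         *   OK\n         *\n         * (name, address, port and quorum are example values; +OK is sent\n         * only after the updated config was flushed to disk successfully.)\n         *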
*/\n        ri = createSentinelRedisInstance(c->argv[2]->ptr,SRI_MASTER,\n                c->argv[3]->ptr,port,quorum,NULL);\n        if (ri == NULL) {\n            addReplyError(c,sentinelCheckCreateInstanceErrors(SRI_MASTER));\n        } else {\n            sentinelFlushConfigAndReply(c);\n            sentinelEvent(LL_WARNING,\"+monitor\",ri,\"%@ quorum %d\",ri->quorum);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"flushconfig\")) {\n        if (c->argc != 2) goto numargserr;\n        sentinelFlushConfigAndReply(c);\n        return;\n    } else if (!strcasecmp(c->argv[1]->ptr,\"remove\")) {\n        /* SENTINEL REMOVE <name> */\n        sentinelRedisInstance *ri;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2]))\n            == NULL) return;\n        sentinelEvent(LL_WARNING,\"-monitor\",ri,\"%@\");\n        dictDelete(sentinel.masters,c->argv[2]->ptr);\n        sentinelFlushConfigAndReply(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"ckquorum\")) {\n        /* SENTINEL CKQUORUM <name> */\n        sentinelRedisInstance *ri;\n        int usable;\n\n        if (c->argc != 3) goto numargserr;\n        if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2]))\n            == NULL) return;\n        int result = sentinelIsQuorumReachable(ri,&usable);\n        if (result == SENTINEL_ISQR_OK) {\n            addReplySds(c, sdscatfmt(sdsempty(),\n                \"+OK %i usable Sentinels. Quorum and failover authorization \"\n                \"can be reached\\r\\n\",usable));\n        } else {\n            sds e = sdscatfmt(sdsempty(),\n                \"-NOQUORUM %i usable Sentinels. 
\",usable);\n            if (result & SENTINEL_ISQR_NOQUORUM)\n                e = sdscat(e,\"Not enough available Sentinels to reach the\"\n                             \" specified quorum for this master\");\n            if (result & SENTINEL_ISQR_NOAUTH) {\n                if (result & SENTINEL_ISQR_NOQUORUM) e = sdscat(e,\". \");\n                e = sdscat(e, \"Not enough available Sentinels to reach the\"\n                              \" majority and authorize a failover\");\n            }\n            addReplyErrorSds(c,e);\n        }\n    } else if (!strcasecmp(c->argv[1]->ptr,\"set\")) {\n        sentinelSetCommand(c);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"config\")) {\n        if (c->argc < 4) goto numargserr;\n        if (!strcasecmp(c->argv[2]->ptr,\"set\") && c->argc >= 5)\n            sentinelConfigSetCommand(c);\n        else if (!strcasecmp(c->argv[2]->ptr,\"get\") && c->argc >= 4)\n            sentinelConfigGetCommand(c);\n        else\n            addReplyError(c, \"Only SENTINEL CONFIG GET <param> [<param> <param> ...] / SET <param> <value> [<param> <value> ...] are supported.\");\n    } else if (!strcasecmp(c->argv[1]->ptr,\"info-cache\")) {\n        /* SENTINEL INFO-CACHE <name> */\n        if (c->argc < 2) goto numargserr;\n        mstime_t now = mstime();\n\n        /* Create an ad-hoc dictionary type so that we can iterate\n         * a dictionary composed of just the master groups the user\n         * requested. 
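\n         *\n         * For example, SENTINEL INFO-CACHE mymaster (with a hypothetical\n         * master name) builds a one-entry dictionary and iterates only\n         * that group instead of the whole sentinel.masters table.\n         *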
*/\n        dictType copy_keeper = instancesDictType;\n        copy_keeper.valDestructor = NULL;\n        dict *masters_local = sentinel.masters;\n        if (c->argc > 2) {\n            masters_local = dictCreate(&copy_keeper);\n\n            for (int i = 2; i < c->argc; i++) {\n                sentinelRedisInstance *ri;\n                ri = sentinelGetMasterByName(c->argv[i]->ptr);\n                if (!ri) continue; /* ignore non-existing names */\n                dictAdd(masters_local, ri->name, ri);\n            }\n        }\n\n        /* Reply format:\n         *   1) master name\n         *   2) 1) 1) info cached ms\n         *         2) info from master\n         *      2) 1) info cached ms\n         *         2) info from replica1\n         *      ...\n         *   3) other master name\n         *      ...\n         *   ...\n         */\n        addReplyArrayLen(c,dictSize(masters_local) * 2);\n\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitIterator(&di, masters_local);\n        while ((de = dictNext(&di)) != NULL) {\n            sentinelRedisInstance *ri = dictGetVal(de);\n            addReplyBulkCBuffer(c,ri->name,strlen(ri->name));\n            addReplyArrayLen(c,dictSize(ri->slaves) + 1); /* +1 for self */\n            addReplyArrayLen(c,2);\n            addReplyLongLong(c,\n                ri->info_refresh ? (now - ri->info_refresh) : 0);\n            if (ri->info)\n                addReplyBulkCBuffer(c,ri->info,sdslen(ri->info));\n            else\n                addReplyNull(c);\n\n            dictIterator sdi;\n            dictEntry *sde;\n\n            dictInitIterator(&sdi, ri->slaves);\n            while ((sde = dictNext(&sdi)) != NULL) {\n                sentinelRedisInstance *sri = dictGetVal(sde);\n                addReplyArrayLen(c,2);\n                addReplyLongLong(c,\n                    sri->info_refresh ? 
(now - sri->info_refresh) : 0);\n                if (sri->info)\n                    addReplyBulkCBuffer(c,sri->info,sdslen(sri->info));\n                else\n                    addReplyNull(c);\n            }\n            dictResetIterator(&sdi);\n        }\n        dictResetIterator(&di);\n        if (masters_local != sentinel.masters) dictRelease(masters_local);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"simulate-failure\")) {\n        /* SENTINEL SIMULATE-FAILURE [CRASH-AFTER-ELECTION] [CRASH-AFTER-PROMOTION] [HELP] */\n        int j;\n\n        sentinel.simfailure_flags = SENTINEL_SIMFAILURE_NONE;\n        for (j = 2; j < c->argc; j++) {\n            if (!strcasecmp(c->argv[j]->ptr,\"crash-after-election\")) {\n                sentinel.simfailure_flags |=\n                    SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION;\n                serverLog(LL_WARNING,\"Failure simulation: this Sentinel \"\n                    \"will crash after being successfully elected as failover \"\n                    \"leader\");\n            } else if (!strcasecmp(c->argv[j]->ptr,\"crash-after-promotion\")) {\n                sentinel.simfailure_flags |=\n                    SENTINEL_SIMFAILURE_CRASH_AFTER_PROMOTION;\n                serverLog(LL_WARNING,\"Failure simulation: this Sentinel \"\n                    \"will crash after promoting the selected replica to master\");\n            } else if (!strcasecmp(c->argv[j]->ptr,\"help\")) {\n                addReplyArrayLen(c,2);\n                addReplyBulkCString(c,\"crash-after-election\");\n                addReplyBulkCString(c,\"crash-after-promotion\");\n                return;\n            } else {\n                addReplyError(c,\"Unknown failure simulation specified\");\n                return;\n            }\n        }\n        addReply(c,shared.ok);\n    } else if (!strcasecmp(c->argv[1]->ptr,\"debug\")) {\n        if(c->argc == 2)\n            addReplySentinelDebugInfo(c);\n        else\n            
sentinelSetDebugConfigParameters(c);\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n    return;\n\nnumargserr:\n    addReplyErrorArity(c);\n}\n\nvoid addInfoSectionsToDict(dict *section_dict, char **sections);\n\n/* INFO [<section> [<section> ...]] */\nvoid sentinelInfoCommand(client *c) {\n    char *sentinel_sections[] = {\"server\", \"clients\", \"cpu\", \"stats\", \"sentinel\", NULL};\n    int sec_all = 0, sec_everything = 0;\n    static dict *cached_all_info_sections = NULL;\n\n    /* Get requested section list. */\n    dict *sections_dict = genInfoSectionDict(c->argv+1, c->argc-1, sentinel_sections, &sec_all, &sec_everything);\n\n    /* Purge unsupported sections from the requested ones. */\n    dictEntry *de;\n    dictIterator di;\n\n    dictInitSafeIterator(&di, sections_dict);\n    while((de = dictNext(&di)) != NULL) {\n        int i;\n        sds sec = dictGetKey(de);\n        for (i=0; sentinel_sections[i]; i++)\n            if (!strcasecmp(sentinel_sections[i], sec))\n                break;\n        /* section not found? remove it */\n        if (!sentinel_sections[i])\n            dictDelete(sections_dict, sec);\n    }\n    dictResetIterator(&di);\n\n    /* Insert explicit all sections (don't pass these vars to genRedisInfoString) */\n    if (sec_all || sec_everything) {\n        releaseInfoSectionDict(sections_dict);\n        /* We cache this dict as an optimization. 
*/\n        if (!cached_all_info_sections) {\n            cached_all_info_sections = dictCreate(&stringSetDictType);\n            addInfoSectionsToDict(cached_all_info_sections, sentinel_sections);\n        }\n        sections_dict = cached_all_info_sections;\n    }\n\n    sds info = genRedisInfoString(sections_dict, 0, 0);\n    if (sec_all || (dictFind(sections_dict, \"sentinel\") != NULL)) {\n        dictIterator di;\n        dictEntry *de;\n        int master_id = 0;\n\n        if (sdslen(info) != 0)\n            info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info,\n            \"# Sentinel\\r\\n\"\n            \"sentinel_masters:%lu\\r\\n\"\n            \"sentinel_tilt:%d\\r\\n\"\n            \"sentinel_tilt_since_seconds:%jd\\r\\n\"\n            \"sentinel_total_tilt:%d\\r\\n\"\n            \"sentinel_running_scripts:%d\\r\\n\"\n            \"sentinel_scripts_queue_length:%ld\\r\\n\"\n            \"sentinel_simulate_failure_flags:%lu\\r\\n\",\n            dictSize(sentinel.masters),\n            sentinel.tilt,\n            sentinel.tilt ? 
(intmax_t)((mstime()-sentinel.tilt_start_time)/1000) : -1,\n            sentinel.total_tilt,\n            sentinel.running_scripts,\n            listLength(sentinel.scripts_queue),\n            sentinel.simfailure_flags);\n\n        dictInitIterator(&di, sentinel.masters);\n        while((de = dictNext(&di)) != NULL) {\n            sentinelRedisInstance *ri = dictGetVal(de);\n            char *status = \"ok\";\n\n            if (ri->flags & SRI_O_DOWN) status = \"odown\";\n            else if (ri->flags & SRI_S_DOWN) status = \"sdown\";\n            info = sdscatprintf(info,\n                \"master%d:name=%s,status=%s,address=%s:%d,\"\n                \"slaves=%lu,sentinels=%lu\\r\\n\",\n                master_id++, ri->name, status,\n                announceSentinelAddr(ri->addr), ri->addr->port,\n                dictSize(ri->slaves),\n                dictSize(ri->sentinels)+1);\n        }\n        dictResetIterator(&di);\n    }\n    if (sections_dict != cached_all_info_sections)\n        releaseInfoSectionDict(sections_dict);\n    addReplyBulkSds(c, info);\n}\n\n/* Implements Sentinel version of the ROLE command. The output is\n * \"sentinel\" and the list of currently monitored master names. */\nvoid sentinelRoleCommand(client *c) {\n    dictIterator di;\n    dictEntry *de;\n\n    addReplyArrayLen(c,2);\n    addReplyBulkCBuffer(c,\"sentinel\",8);\n    addReplyArrayLen(c,dictSize(sentinel.masters));\n\n    dictInitIterator(&di, sentinel.masters);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        addReplyBulkCString(c,ri->name);\n    }\n    dictResetIterator(&di);\n}\n\n/* SENTINEL SET <mastername> [<option> <value> ...] */\nvoid sentinelSetCommand(client *c) {\n    sentinelRedisInstance *ri;\n    int j, changes = 0;\n    int badarg = 0; /* Bad argument position for error reporting. 
*/\n    char *option;\n    int redacted;\n\n    if ((ri = sentinelGetMasterByNameOrReplyError(c,c->argv[2]))\n        == NULL) return;\n\n    /* Process option-value pairs. */\n    for (j = 3; j < c->argc; j++) {\n        int moreargs = (c->argc-1) - j;\n        option = c->argv[j]->ptr;\n        long long ll;\n        int old_j = j; /* Used to know what to log as an event. */\n        redacted = 0;\n\n        if (!strcasecmp(option,\"down-after-milliseconds\") && moreargs > 0) {\n            /* down-after-milliseconds <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            ri->down_after_period = ll;\n            sentinelPropagateDownAfterPeriod(ri);\n            changes++;\n        } else if (!strcasecmp(option,\"failover-timeout\") && moreargs > 0) {\n            /* failover-timeout <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            ri->failover_timeout = ll;\n            changes++;\n        } else if (!strcasecmp(option,\"parallel-syncs\") && moreargs > 0) {\n            /* parallel-syncs <numreplicas> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            ri->parallel_syncs = ll;\n            changes++;\n        } else if (!strcasecmp(option,\"notification-script\") && moreargs > 0) {\n            /* notification-script <path> */\n            char *value = c->argv[++j]->ptr;\n            if (sentinel.deny_scripts_reconfig) {\n                addReplyError(c,\n                    \"Reconfiguration of scripts path is denied for \"\n                    \"security reasons. 
Check the deny-scripts-reconfig \"\n                    \"configuration directive in your Sentinel configuration\");\n                goto seterr;\n            }\n\n            if (strlen(value) && access(value,X_OK) == -1) {\n                addReplyError(c,\n                    \"Notification script seems non existing or non executable\");\n                goto seterr;\n            }\n            sdsfree(ri->notification_script);\n            ri->notification_script = strlen(value) ? sdsnew(value) : NULL;\n            changes++;\n        } else if (!strcasecmp(option,\"client-reconfig-script\") && moreargs > 0) {\n            /* client-reconfig-script <path> */\n            char *value = c->argv[++j]->ptr;\n            if (sentinel.deny_scripts_reconfig) {\n                addReplyError(c,\n                    \"Reconfiguration of scripts path is denied for \"\n                    \"security reasons. Check the deny-scripts-reconfig \"\n                    \"configuration directive in your Sentinel configuration\");\n                goto seterr;\n            }\n\n            if (strlen(value) && access(value,X_OK) == -1) {\n                addReplyError(c,\n                    \"Client reconfiguration script seems non existing or \"\n                    \"non executable\");\n                goto seterr;\n            }\n            sdsfree(ri->client_reconfig_script);\n            ri->client_reconfig_script = strlen(value) ? sdsnew(value) : NULL;\n            changes++;\n        } else if (!strcasecmp(option,\"auth-pass\") && moreargs > 0) {\n            /* auth-pass <password> */\n            char *value = c->argv[++j]->ptr;\n            sdsfree(ri->auth_pass);\n            ri->auth_pass = strlen(value) ? 
sdsnew(value) : NULL;\n            dropInstanceConnections(ri);\n            changes++;\n            redacted = 1;\n        } else if (!strcasecmp(option,\"auth-user\") && moreargs > 0) {\n            /* auth-user <username> */\n            char *value = c->argv[++j]->ptr;\n            sdsfree(ri->auth_user);\n            ri->auth_user = strlen(value) ? sdsnew(value) : NULL;\n            dropInstanceConnections(ri);\n            changes++;\n        } else if (!strcasecmp(option,\"quorum\") && moreargs > 0) {\n            /* quorum <count> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll <= 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            ri->quorum = ll;\n            changes++;\n        } else if (!strcasecmp(option,\"rename-command\") && moreargs > 1) {\n            /* rename-command <oldname> <newname> */\n            sds oldname = c->argv[++j]->ptr;\n            sds newname = c->argv[++j]->ptr;\n\n            if ((sdslen(oldname) == 0) || (sdslen(newname) == 0)) {\n                badarg = sdslen(newname) ? j-1 : j;\n                goto badfmt;\n            }\n\n            /* Remove any older renaming for this command. */\n            dictDelete(ri->renamed_commands,oldname);\n\n            /* If the target name is the same as the source name there\n             * is no need to add an entry mapping to itself. 
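\n             *\n             * For example (hypothetical names), after\n             *\n             *   SENTINEL SET mymaster rename-command CONFIG GUESSME\n             *\n             * this Sentinel sends GUESSME to the instances of that master\n             * group whenever it would send CONFIG, while renaming a command\n             * to itself simply removes any previous mapping.\n             *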
*/\n            if (strcasecmp(oldname, newname) != 0) {\n                oldname = sdsdup(oldname);\n                newname = sdsdup(newname);\n                dictAdd(ri->renamed_commands,oldname,newname);\n            }\n            changes++;\n        } else if (!strcasecmp(option,\"master-reboot-down-after-period\") && moreargs > 0) {\n            /* master-reboot-down-after-period <milliseconds> */\n            robj *o = c->argv[++j];\n            if (getLongLongFromObject(o,&ll) == C_ERR || ll < 0) {\n                badarg = j;\n                goto badfmt;\n            }\n            ri->master_reboot_down_after_period = ll;\n            changes++;\n        } else {\n            addReplyErrorFormat(c,\"Unknown option or number of arguments for \"\n                                  \"SENTINEL SET '%s'\", option);\n            goto seterr;\n        }\n\n        /* Log the event. */\n        int numargs = j-old_j+1;\n        switch(numargs) {\n        case 2:\n            sentinelEvent(LL_WARNING,\"+set\",ri,\"%@ %s %s\",(char*)c->argv[old_j]->ptr,\n                                                          redacted ? 
\"******\" : (char*)c->argv[old_j+1]->ptr);\n            break;\n        case 3:\n            sentinelEvent(LL_WARNING,\"+set\",ri,\"%@ %s %s %s\",(char*)c->argv[old_j]->ptr,\n                                                             (char*)c->argv[old_j+1]->ptr,\n                                                             (char*)c->argv[old_j+2]->ptr);\n            break;\n        default:\n            sentinelEvent(LL_WARNING,\"+set\",ri,\"%@ %s\",(char*)c->argv[old_j]->ptr);\n            break;\n        }\n    }\n    if (changes) sentinelFlushConfigAndReply(c);\n    return;\n\nbadfmt: /* Bad format errors */\n    addReplyErrorFormat(c,\"Invalid argument '%s' for SENTINEL SET '%s'\",\n        (char*)c->argv[badarg]->ptr,option);\nseterr:\n    /* TODO: Handle the case of both bad input and save error, possibly handling\n     * SENTINEL SET atomically. */\n    if (changes) sentinelFlushConfig();\n}\n\n/* Our fake PUBLISH command: it is only useful to receive hello messages\n * from the other sentinel instances, and publishing to a channel other than\n * SENTINEL_HELLO_CHANNEL is forbidden.\n *\n * Because we have a Sentinel PUBLISH, the code to send hello messages is the same\n * for all three kinds of instances: masters, slaves, and sentinels. */\nvoid sentinelPublishCommand(client *c) {\n    if (strcmp(c->argv[1]->ptr,SENTINEL_HELLO_CHANNEL)) {\n        addReplyError(c, \"Only HELLO messages are accepted by Sentinel instances.\");\n        return;\n    }\n    sentinelProcessHelloMessage(c->argv[2]->ptr,sdslen(c->argv[2]->ptr));\n    addReplyLongLong(c,1);\n}\n\n/* ===================== SENTINEL availability checks ======================= */\n\n/* Is this instance down from our point of view? 
*/\nvoid sentinelCheckSubjectivelyDown(sentinelRedisInstance *ri) {\n    mstime_t elapsed = 0;\n\n    if (ri->link->act_ping_time)\n        elapsed = mstime() - ri->link->act_ping_time;\n    else if (ri->link->disconnected)\n        elapsed = mstime() - ri->link->last_avail_time;\n\n    /* Check if we need to reconnect one of the links, because we are\n     * detecting low activity.\n     *\n     * 1) Check if the command link seems connected, has been connected for\n     *    at least SENTINEL_MIN_LINK_RECONNECT_PERIOD, but we still have a\n     *    pending ping for more than half the timeout. */\n    if (ri->link->cc &&\n        (mstime() - ri->link->cc_conn_time) >\n        sentinel_min_link_reconnect_period &&\n        ri->link->act_ping_time != 0 && /* There is a pending ping... */\n        /* The pending ping is delayed, and we did not receive\n         * any error replies either. */\n        (mstime() - ri->link->act_ping_time) > (ri->down_after_period/2) &&\n        (mstime() - ri->link->last_pong_time) > (ri->down_after_period/2))\n    {\n        instanceLinkCloseConnection(ri->link,ri->link->cc);\n    }\n\n    /* 2) Check if the pubsub link seems connected, has been connected for\n     *    at least SENTINEL_MIN_LINK_RECONNECT_PERIOD, but we have had no\n     *    activity in the Pub/Sub channel for more than\n     *    SENTINEL_PUBLISH_PERIOD * 3.\n     */\n    if (ri->link->pc &&\n        (mstime() - ri->link->pc_conn_time) >\n         sentinel_min_link_reconnect_period &&\n        (mstime() - ri->link->pc_last_activity) > (sentinel_publish_period*3))\n    {\n        instanceLinkCloseConnection(ri->link,ri->link->pc);\n    }\n\n    /* Update the SDOWN flag. We believe the instance is SDOWN if:\n     *\n     * 1) It is not replying.\n     * 2) We believe it is a master, but it has reported to be a slave for\n     *    long enough to meet the down_after_period, plus enough time to\n     *    receive two INFO reports from the instance. 
*/\n    if (elapsed > ri->down_after_period ||\n        (ri->flags & SRI_MASTER &&\n         ri->role_reported == SRI_SLAVE &&\n         mstime() - ri->role_reported_time >\n          (ri->down_after_period+sentinel_info_period*2)) ||\n          (ri->flags & SRI_MASTER_REBOOT && \n           mstime()-ri->master_reboot_since_time > ri->master_reboot_down_after_period))\n    {\n        /* Is subjectively down */\n        if ((ri->flags & SRI_S_DOWN) == 0) {\n            sentinelEvent(LL_WARNING,\"+sdown\",ri,\"%@\");\n            ri->s_down_since_time = mstime();\n            ri->flags |= SRI_S_DOWN;\n        }\n    } else {\n        /* Is subjectively up */\n        if (ri->flags & SRI_S_DOWN) {\n            sentinelEvent(LL_WARNING,\"-sdown\",ri,\"%@\");\n            ri->flags &= ~(SRI_S_DOWN|SRI_SCRIPT_KILL_SENT);\n        }\n    }\n}\n\n/* Is this instance down according to the configured quorum?\n *\n * Note that ODOWN is a weak quorum: it only means that enough Sentinels\n * reported in a given time range that the instance was not reachable.\n * However, messages can be delayed, so there are no strong guarantees about\n * N instances agreeing at the same time about the down state. */\nvoid sentinelCheckObjectivelyDown(sentinelRedisInstance *master) {\n    dictIterator di;\n    dictEntry *de;\n    unsigned int quorum = 0, odown = 0;\n\n    if (master->flags & SRI_S_DOWN) {\n        /* Is down for enough sentinels? */\n        quorum = 1; /* the current sentinel. */\n        /* Count all the other sentinels. */\n        dictInitIterator(&di, master->sentinels);\n        while((de = dictNext(&di)) != NULL) {\n            sentinelRedisInstance *ri = dictGetVal(de);\n\n            if (ri->flags & SRI_MASTER_DOWN) quorum++;\n        }\n        dictResetIterator(&di);\n        if (quorum >= master->quorum) odown = 1;\n    }\n\n    /* Set the flag according to the outcome. 
*/\n    if (odown) {\n        if ((master->flags & SRI_O_DOWN) == 0) {\n            sentinelEvent(LL_WARNING,\"+odown\",master,\"%@ #quorum %d/%d\",\n                quorum, master->quorum);\n            master->flags |= SRI_O_DOWN;\n            master->o_down_since_time = mstime();\n        }\n    } else {\n        if (master->flags & SRI_O_DOWN) {\n            sentinelEvent(LL_WARNING,\"-odown\",master,\"%@\");\n            master->flags &= ~SRI_O_DOWN;\n        }\n    }\n}\n\n/* Receive the SENTINEL is-master-down-by-addr reply, see the\n * sentinelAskMasterStateToOtherSentinels() function for more information. */\nvoid sentinelReceiveIsMasterDownReply(redisAsyncContext *c, void *reply, void *privdata) {\n    sentinelRedisInstance *ri = privdata;\n    instanceLink *link = c->data;\n    redisReply *r;\n\n    if (!reply || !link) return;\n    link->pending_commands--;\n    r = reply;\n\n    /* Ignore every error or unexpected reply.\n     * Note that if the command returns an error for any reason we'll\n     * end up clearing the SRI_MASTER_DOWN flag by timeout anyway. */\n    if (r->type == REDIS_REPLY_ARRAY && r->elements == 3 &&\n        r->element[0]->type == REDIS_REPLY_INTEGER &&\n        r->element[1]->type == REDIS_REPLY_STRING &&\n        r->element[2]->type == REDIS_REPLY_INTEGER)\n    {\n        ri->last_master_down_reply_time = mstime();\n        if (r->element[0]->integer == 1) {\n            ri->flags |= SRI_MASTER_DOWN;\n        } else {\n            ri->flags &= ~SRI_MASTER_DOWN;\n        }\n        if (strcmp(r->element[1]->str,\"*\")) {\n            /* If the runid in the reply is not \"*\" the Sentinel actually\n             * replied with a vote. 
*/\n            sdsfree(ri->leader);\n            if ((long long)ri->leader_epoch != r->element[2]->integer)\n                serverLog(LL_NOTICE,\n                    \"%s voted for %s %llu\", ri->name,\n                    r->element[1]->str,\n                    (unsigned long long) r->element[2]->integer);\n            ri->leader = sdsnew(r->element[1]->str);\n            ri->leader_epoch = r->element[2]->integer;\n        }\n    }\n}\n\n/* If we think the master is down, we start sending\n * SENTINEL IS-MASTER-DOWN-BY-ADDR requests to other sentinels\n * in order to get the replies that allow us to reach the quorum\n * needed to mark the master in ODOWN state and trigger a failover. */\n#define SENTINEL_ASK_FORCED (1<<0)\nvoid sentinelAskMasterStateToOtherSentinels(sentinelRedisInstance *master, int flags) {\n    dictIterator di;\n    dictEntry *de;\n\n    dictInitIterator(&di, master->sentinels);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n        mstime_t elapsed = mstime() - ri->last_master_down_reply_time;\n        char port[32];\n        int retval;\n\n        /* If the master state from another sentinel is too old, we clear it. */\n        if (elapsed > sentinel_ask_period*5) {\n            ri->flags &= ~SRI_MASTER_DOWN;\n            sdsfree(ri->leader);\n            ri->leader = NULL;\n        }\n\n        /* Only ask other sentinels whether the master is down if:\n         *\n         * 1) We believe it is down, or there is a failover in progress.\n         * 2) Sentinel is connected.\n         * 3) We did not receive the info within SENTINEL_ASK_PERIOD ms. 
*/\n        if ((master->flags & SRI_S_DOWN) == 0) continue;\n        if (ri->link->disconnected) continue;\n        if (!(flags & SENTINEL_ASK_FORCED) &&\n            mstime() - ri->last_master_down_reply_time < sentinel_ask_period)\n            continue;\n\n        /* Ask */\n        ll2string(port,sizeof(port),master->addr->port);\n        retval = redisAsyncCommand(ri->link->cc,\n                    sentinelReceiveIsMasterDownReply, ri,\n                    \"%s is-master-down-by-addr %s %s %llu %s\",\n                    sentinelInstanceMapCommand(ri,\"SENTINEL\"),\n                    announceSentinelAddr(master->addr), port,\n                    sentinel.current_epoch,\n                    (master->failover_state > SENTINEL_FAILOVER_STATE_NONE) ?\n                    sentinel.myid : \"*\");\n        if (retval == C_OK) ri->link->pending_commands++;\n    }\n    dictResetIterator(&di);\n}\n\n/* =============================== FAILOVER ================================= */\n\n/* Crash because of user request via SENTINEL simulate-failure command. */\nvoid sentinelSimFailureCrash(void) {\n    serverLog(LL_WARNING,\n        \"Sentinel CRASH because of SENTINEL simulate-failure\");\n    exit(99);\n}\n\n/* Vote for the sentinel with 'req_runid' or return the old vote if already\n * voted for the specified 'req_epoch' or one greater.\n *\n * If a vote is not available, returns NULL; otherwise returns the Sentinel\n * runid and populates leader_epoch with the epoch of the vote. 
*/\nchar *sentinelVoteLeader(sentinelRedisInstance *master, uint64_t req_epoch, char *req_runid, uint64_t *leader_epoch) {\n    if (req_epoch > sentinel.current_epoch) {\n        sentinel.current_epoch = req_epoch;\n        sentinelFlushConfig();\n        sentinelEvent(LL_WARNING,\"+new-epoch\",master,\"%llu\",\n            (unsigned long long) sentinel.current_epoch);\n    }\n\n    if (master->leader_epoch < req_epoch && sentinel.current_epoch <= req_epoch)\n    {\n        sdsfree(master->leader);\n        master->leader = sdsnew(req_runid);\n        master->leader_epoch = sentinel.current_epoch;\n        sentinelFlushConfig();\n        sentinelEvent(LL_WARNING,\"+vote-for-leader\",master,\"%s %llu\",\n            master->leader, (unsigned long long) master->leader_epoch);\n        /* If we did not vote for ourselves, set the master failover start\n         * time to now, in order to force a delay before we can start a\n         * failover for the same master. */\n        if (strcasecmp(master->leader,sentinel.myid))\n            master->failover_start_time = mstime()+rand()%SENTINEL_MAX_DESYNC;\n    }\n\n    *leader_epoch = master->leader_epoch;\n    return master->leader ? sdsnew(master->leader) : NULL;\n}\n\nstruct sentinelLeader {\n    char *runid;\n    unsigned long votes;\n};\n\n/* Helper function for sentinelGetLeader: increments the counter\n * relative to the specified runid. 
*/\nint sentinelLeaderIncr(dict *counters, char *runid) {\n    dictEntry *existing, *de;\n    uint64_t oldval;\n\n    de = dictAddRaw(counters,runid,&existing);\n    if (existing) {\n        oldval = dictGetUnsignedIntegerVal(existing);\n        dictSetUnsignedIntegerVal(existing,oldval+1);\n        return oldval+1;\n    } else {\n        serverAssert(de != NULL);\n        dictSetUnsignedIntegerVal(de,1);\n        return 1;\n    }\n}\n\n/* Scan all the Sentinels attached to this master to check if there\n * is a leader for the specified epoch.\n *\n * To be a leader for a given epoch, we should have the majority of\n * the Sentinels we know (ever seen since the last SENTINEL RESET) that\n * reported the same instance as leader for the same epoch. */\nchar *sentinelGetLeader(sentinelRedisInstance *master, uint64_t epoch) {\n    dict *counters;\n    dictIterator di;\n    dictEntry *de;\n    unsigned int voters = 0, voters_quorum;\n    char *myvote;\n    char *winner = NULL;\n    uint64_t leader_epoch;\n    uint64_t max_votes = 0;\n\n    serverAssert(master->flags & (SRI_O_DOWN|SRI_FAILOVER_IN_PROGRESS));\n    counters = dictCreate(&leaderVotesDictType);\n\n    voters = dictSize(master->sentinels)+1; /* All the other sentinels and me.*/\n\n    /* Count other sentinels votes */\n    dictInitIterator(&di, master->sentinels);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n        if (ri->leader != NULL && ri->leader_epoch == sentinel.current_epoch)\n            sentinelLeaderIncr(counters,ri->leader);\n    }\n    dictResetIterator(&di);\n\n    /* Check what's the winner. For the winner to win, it needs two conditions:\n     * 1) Absolute majority between voters (50% + 1).\n     * 2) And anyway at least master->quorum votes. 
*/\n    dictInitIterator(&di, counters);\n    while((de = dictNext(&di)) != NULL) {\n        uint64_t votes = dictGetUnsignedIntegerVal(de);\n\n        if (votes > max_votes) {\n            max_votes = votes;\n            winner = dictGetKey(de);\n        }\n    }\n    dictResetIterator(&di);\n\n    /* Count this Sentinel's vote:\n     * if this Sentinel did not vote yet, either vote for the most\n     * voted sentinel, or for itself if no vote exists at all. */\n    if (winner)\n        myvote = sentinelVoteLeader(master,epoch,winner,&leader_epoch);\n    else\n        myvote = sentinelVoteLeader(master,epoch,sentinel.myid,&leader_epoch);\n\n    if (myvote && leader_epoch == epoch) {\n        uint64_t votes = sentinelLeaderIncr(counters,myvote);\n\n        if (votes > max_votes) {\n            max_votes = votes;\n            winner = myvote;\n        }\n    }\n\n    voters_quorum = voters/2+1;\n    if (winner && (max_votes < voters_quorum || max_votes < master->quorum))\n        winner = NULL;\n\n    winner = winner ? sdsnew(winner) : NULL;\n    sdsfree(myvote);\n    dictRelease(counters);\n    return winner;\n}\n\n/* Send SLAVEOF to the specified instance, always followed by a\n * CONFIG REWRITE command in order to store the new configuration on disk\n * when possible (that is, if the Redis instance is recent enough to support\n * config rewriting, and if the server was started with a configuration file).\n *\n * If host is NULL the function sends \"SLAVEOF NO ONE\".\n *\n * The command returns C_OK if the SLAVEOF command was accepted for\n * (later) delivery, otherwise C_ERR. The command replies are just\n * discarded. */\nint sentinelSendSlaveOf(sentinelRedisInstance *ri, const sentinelAddr *addr) {\n    char portstr[32];\n    const char *host;\n    int retval;\n\n    /* If host is NULL we send SLAVEOF NO ONE that will turn the instance\n    * into a master. 
*/\n    if (!addr) {\n        host = \"NO\";\n        memcpy(portstr,\"ONE\",4);\n    } else {\n        host = announceSentinelAddr(addr);\n        ll2string(portstr,sizeof(portstr),addr->port);\n    }\n\n    /* In order to send SLAVEOF in a safe way, we send a transaction performing\n     * the following tasks:\n     * 1) Reconfigure the instance according to the specified host/port params.\n     * 2) Rewrite the configuration.\n     * 3) Disconnect all clients (but this one sending the command) in order\n     *    to trigger the ask-master-on-reconnection protocol for connected\n     *    clients.\n     *\n     * Note that we don't check the replies returned by commands, since we\n     * will instead observe the effects in the next INFO output. */\n    retval = redisAsyncCommand(ri->link->cc,\n        sentinelDiscardReplyCallback, ri, \"%s\",\n        sentinelInstanceMapCommand(ri,\"MULTI\"));\n    if (retval == C_ERR) return retval;\n    ri->link->pending_commands++;\n\n    retval = redisAsyncCommand(ri->link->cc,\n        sentinelDiscardReplyCallback, ri, \"%s %s %s\",\n        sentinelInstanceMapCommand(ri,\"SLAVEOF\"),\n        host, portstr);\n    if (retval == C_ERR) return retval;\n    ri->link->pending_commands++;\n\n    retval = redisAsyncCommand(ri->link->cc,\n        sentinelDiscardReplyCallback, ri, \"%s REWRITE\",\n        sentinelInstanceMapCommand(ri,\"CONFIG\"));\n    if (retval == C_ERR) return retval;\n    ri->link->pending_commands++;\n\n    /* CLIENT KILL TYPE <type> is only supported starting from Redis 2.8.12,\n     * however sending it to an instance that does not understand this command\n     * is not an issue because CLIENT is a variadic command, so Redis will not\n     * recognize it as a syntax error, and the transaction will not fail (only\n     * the unsupported command will fail). 
*/\n    for (int type = 0; type < 2; type++) {\n        retval = redisAsyncCommand(ri->link->cc,\n            sentinelDiscardReplyCallback, ri, \"%s KILL TYPE %s\",\n            sentinelInstanceMapCommand(ri,\"CLIENT\"),\n            type == 0 ? \"normal\" : \"pubsub\");\n        if (retval == C_ERR) return retval;\n        ri->link->pending_commands++;\n    }\n\n    retval = redisAsyncCommand(ri->link->cc,\n        sentinelDiscardReplyCallback, ri, \"%s\",\n        sentinelInstanceMapCommand(ri,\"EXEC\"));\n    if (retval == C_ERR) return retval;\n    ri->link->pending_commands++;\n\n    return C_OK;\n}\n\n/* Setup the master state to start a failover. */\nvoid sentinelStartFailover(sentinelRedisInstance *master) {\n    serverAssert(master->flags & SRI_MASTER);\n\n    master->failover_state = SENTINEL_FAILOVER_STATE_WAIT_START;\n    master->flags |= SRI_FAILOVER_IN_PROGRESS;\n    master->failover_epoch = ++sentinel.current_epoch;\n    sentinelEvent(LL_WARNING,\"+new-epoch\",master,\"%llu\",\n        (unsigned long long) sentinel.current_epoch);\n    sentinelEvent(LL_WARNING,\"+try-failover\",master,\"%@\");\n    master->failover_start_time = mstime()+rand()%SENTINEL_MAX_DESYNC;\n    master->failover_state_change_time = mstime();\n}\n\n/* This function checks if there are the conditions to start the failover,\n * that is:\n *\n * 1) Master must be in ODOWN condition.\n * 2) No failover already in progress.\n * 3) No failover already attempted recently.\n *\n * We still don't know if we'll win the election so it is possible that we\n * start the failover but that we'll not be able to act.\n *\n * Return non-zero if a failover was started. */\nint sentinelStartFailoverIfNeeded(sentinelRedisInstance *master) {\n    /* We can't failover if the master is not in O_DOWN state. */\n    if (!(master->flags & SRI_O_DOWN)) return 0;\n\n    /* Failover already in progress? 
*/\n    if (master->flags & SRI_FAILOVER_IN_PROGRESS) return 0;\n\n    /* Last failover attempt started too little time ago? */\n    if (mstime() - master->failover_start_time <\n        master->failover_timeout*2)\n    {\n        if (master->failover_delay_logged != master->failover_start_time) {\n            time_t clock = (master->failover_start_time +\n                            master->failover_timeout*2) / 1000;\n            char ctimebuf[26];\n\n            ctime_r(&clock,ctimebuf);\n            ctimebuf[24] = '\\0'; /* Remove newline. */\n            master->failover_delay_logged = master->failover_start_time;\n            serverLog(LL_NOTICE,\n                \"Next failover delay: I will not start a failover before %s\",\n                ctimebuf);\n        }\n        return 0;\n    }\n\n    sentinelStartFailover(master);\n    return 1;\n}\n\n/* Select a suitable slave to promote. The current algorithm only uses\n * the following parameters:\n *\n * 1) None of the following conditions: S_DOWN, O_DOWN, DISCONNECTED.\n * 2) Last time the slave replied to ping no more than 5 times the PING period.\n * 3) info_refresh not older than 3 times the INFO refresh period.\n * 4) master_link_down_time no more than:\n *     (now - master->s_down_since_time) + (master->down_after_period * 10).\n *    Basically since the master is down from our POV, the slave reports\n *    to be disconnected no more than 10 times the configured down-after-period.\n *    This is pretty much black magic but the idea is, the master was not\n *    available so the slave may be lagging, but not over a certain time.\n *    Anyway we'll select the best slave according to replication offset.\n * 5) Slave priority can't be zero, otherwise the slave is discarded.\n *\n * Among all the slaves matching the above conditions we select the slave\n * with, in order of sorting key:\n *\n * - lower slave_priority.\n * - bigger processed replication offset.\n * - lexicographically smaller runid.\n *\n * 
Basically if the priority is the same, the slave that processed more\n * commands from the master is selected.\n *\n * The function returns a pointer to the selected slave, or\n * NULL if no suitable slave was found.\n */\n\n/* Helper for sentinelSelectSlave(). This is used by qsort() in order to\n * sort suitable slaves in a \"better first\" order, to take the first of\n * the list. */\nint compareSlavesForPromotion(const void *a, const void *b) {\n    sentinelRedisInstance **sa = (sentinelRedisInstance **)a,\n                          **sb = (sentinelRedisInstance **)b;\n    char *sa_runid, *sb_runid;\n\n    if ((*sa)->slave_priority != (*sb)->slave_priority)\n        return (*sa)->slave_priority - (*sb)->slave_priority;\n\n    /* If priority is the same, select the slave with the greater replication\n     * offset (it processed more data from the master). */\n    if ((*sa)->slave_repl_offset > (*sb)->slave_repl_offset) {\n        return -1; /* a < b */\n    } else if ((*sa)->slave_repl_offset < (*sb)->slave_repl_offset) {\n        return 1; /* a > b */\n    }\n\n    /* If the replication offset is the same, select the slave that has\n     * the lexicographically smaller runid. Note that we try to handle runid\n     * == NULL as there are old Redis versions that don't publish runid in\n     * INFO. A NULL runid is considered bigger than any other runid. 
*/\n    sa_runid = (*sa)->runid;\n    sb_runid = (*sb)->runid;\n    if (sa_runid == NULL && sb_runid == NULL) return 0;\n    else if (sa_runid == NULL) return 1;  /* a > b */\n    else if (sb_runid == NULL) return -1; /* a < b */\n    return strcasecmp(sa_runid, sb_runid);\n}\n\nsentinelRedisInstance *sentinelSelectSlave(sentinelRedisInstance *master) {\n    sentinelRedisInstance **instance =\n        zmalloc(sizeof(instance[0])*dictSize(master->slaves));\n    sentinelRedisInstance *selected = NULL;\n    int instances = 0;\n    dictIterator di;\n    dictEntry *de;\n    mstime_t max_master_down_time = 0;\n\n    if (master->flags & SRI_S_DOWN)\n        max_master_down_time += mstime() - master->s_down_since_time;\n    max_master_down_time += master->down_after_period * 10;\n\n    dictInitIterator(&di, master->slaves);\n\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *slave = dictGetVal(de);\n        mstime_t info_validity_time;\n\n        if (slave->flags & (SRI_S_DOWN|SRI_O_DOWN)) continue;\n        if (slave->link->disconnected) continue;\n        if (mstime() - slave->link->last_avail_time > sentinel_ping_period*5) continue;\n        if (slave->slave_priority == 0) continue;\n\n        /* If the master is in SDOWN state we get INFO for slaves every second.\n         * Otherwise we get it with the usual period so we need to account for\n         * a larger delay. 
*/\n        if (master->flags & SRI_S_DOWN)\n            info_validity_time = sentinel_ping_period*5;\n        else\n            info_validity_time = sentinel_info_period*3;\n        if (mstime() - slave->info_refresh > info_validity_time) continue;\n        if (slave->master_link_down_time > max_master_down_time) continue;\n        instance[instances++] = slave;\n    }\n    dictResetIterator(&di);\n    if (instances) {\n        qsort(instance,instances,sizeof(sentinelRedisInstance*),\n            compareSlavesForPromotion);\n        selected = instance[0];\n    }\n    zfree(instance);\n    return selected;\n}\n\n/* ---------------- Failover state machine implementation ------------------- */\nvoid sentinelFailoverWaitStart(sentinelRedisInstance *ri) {\n    char *leader;\n    int isleader;\n\n    /* Check if we are the leader for the failover epoch. */\n    leader = sentinelGetLeader(ri, ri->failover_epoch);\n    isleader = leader && strcasecmp(leader,sentinel.myid) == 0;\n    sdsfree(leader);\n\n    /* If I'm not the leader, and it is not a forced failover via\n     * SENTINEL FAILOVER, then I can't continue with the failover. */\n    if (!isleader && !(ri->flags & SRI_FORCE_FAILOVER)) {\n        mstime_t election_timeout = sentinel_election_timeout;\n\n        /* The election timeout is the MIN between SENTINEL_ELECTION_TIMEOUT\n         * and the configured failover timeout. */\n        if (election_timeout > ri->failover_timeout)\n            election_timeout = ri->failover_timeout;\n        /* Abort the failover if I'm not the leader after some time. 
*/\n        if (mstime() - ri->failover_start_time > election_timeout) {\n            sentinelEvent(LL_WARNING,\"-failover-abort-not-elected\",ri,\"%@\");\n            sentinelAbortFailover(ri);\n        }\n        return;\n    }\n    sentinelEvent(LL_WARNING,\"+elected-leader\",ri,\"%@\");\n    if (sentinel.simfailure_flags & SENTINEL_SIMFAILURE_CRASH_AFTER_ELECTION)\n        sentinelSimFailureCrash();\n    ri->failover_state = SENTINEL_FAILOVER_STATE_SELECT_SLAVE;\n    ri->failover_state_change_time = mstime();\n    sentinelEvent(LL_WARNING,\"+failover-state-select-slave\",ri,\"%@\");\n}\n\nvoid sentinelFailoverSelectSlave(sentinelRedisInstance *ri) {\n    sentinelRedisInstance *slave = sentinelSelectSlave(ri);\n\n    /* We don't handle the timeout in this state as the function either aborts\n     * the failover or goes forward to the next state. */\n    if (slave == NULL) {\n        sentinelEvent(LL_WARNING,\"-failover-abort-no-good-slave\",ri,\"%@\");\n        sentinelAbortFailover(ri);\n    } else {\n        sentinelEvent(LL_WARNING,\"+selected-slave\",slave,\"%@\");\n        slave->flags |= SRI_PROMOTED;\n        ri->promoted_slave = slave;\n        ri->failover_state = SENTINEL_FAILOVER_STATE_SEND_SLAVEOF_NOONE;\n        ri->failover_state_change_time = mstime();\n        sentinelEvent(LL_NOTICE,\"+failover-state-send-slaveof-noone\",\n            slave, \"%@\");\n    }\n}\n\nvoid sentinelFailoverSendSlaveOfNoOne(sentinelRedisInstance *ri) {\n    int retval;\n\n    /* We can't send the command to the promoted slave if it is now\n     * disconnected. Retry again and again with this state until the timeout\n     * is reached, then abort the failover. 
*/\n    if (ri->promoted_slave->link->disconnected) {\n        if (mstime() - ri->failover_state_change_time > ri->failover_timeout) {\n            sentinelEvent(LL_WARNING,\"-failover-abort-slave-timeout\",ri,\"%@\");\n            sentinelAbortFailover(ri);\n        }\n        return;\n    }\n\n    /* Send SLAVEOF NO ONE command to turn the slave into a master.\n     * We actually register a generic callback for this command as we don't\n     * really care about the reply. We check if it worked indirectly observing\n     * if INFO returns a different role (master instead of slave). */\n    retval = sentinelSendSlaveOf(ri->promoted_slave,NULL);\n    if (retval != C_OK) return;\n    sentinelEvent(LL_NOTICE, \"+failover-state-wait-promotion\",\n        ri->promoted_slave,\"%@\");\n    ri->failover_state = SENTINEL_FAILOVER_STATE_WAIT_PROMOTION;\n    ri->failover_state_change_time = mstime();\n}\n\n/* We actually wait for promotion indirectly checking with INFO when the\n * slave turns into a master. */\nvoid sentinelFailoverWaitPromotion(sentinelRedisInstance *ri) {\n    /* Just handle the timeout. Switching to the next state is handled\n     * by the function parsing the INFO command of the promoted slave. */\n    if (mstime() - ri->failover_state_change_time > ri->failover_timeout) {\n        sentinelEvent(LL_WARNING,\"-failover-abort-slave-timeout\",ri,\"%@\");\n        sentinelAbortFailover(ri);\n    }\n}\n\nvoid sentinelFailoverDetectEnd(sentinelRedisInstance *master) {\n    int not_reconfigured = 0, timeout = 0;\n    dictIterator di;\n    dictEntry *de;\n    mstime_t elapsed = mstime() - master->failover_state_change_time;\n\n    /* We can't consider failover finished if the promoted slave is\n     * not reachable. */\n    if (master->promoted_slave == NULL ||\n        master->promoted_slave->flags & SRI_S_DOWN) return;\n\n    /* The failover terminates once all the reachable slaves are properly\n     * configured. 
*/\n    dictInitIterator(&di, master->slaves);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *slave = dictGetVal(de);\n\n        if (slave->flags & (SRI_PROMOTED|SRI_RECONF_DONE)) continue;\n        if (slave->flags & SRI_S_DOWN) continue;\n        not_reconfigured++;\n    }\n    dictResetIterator(&di);\n\n    /* Force end of failover on timeout. */\n    if (elapsed > master->failover_timeout) {\n        not_reconfigured = 0;\n        timeout = 1;\n        sentinelEvent(LL_WARNING,\"+failover-end-for-timeout\",master,\"%@\");\n    }\n\n    if (not_reconfigured == 0) {\n        sentinelEvent(LL_WARNING,\"+failover-end\",master,\"%@\");\n        master->failover_state = SENTINEL_FAILOVER_STATE_UPDATE_CONFIG;\n        master->failover_state_change_time = mstime();\n    }\n\n    /* If I'm the leader it is a good idea to send a best effort SLAVEOF\n     * command to all the slaves still not reconfigured to replicate with\n     * the new master. */\n    if (timeout) {\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitIterator(&di, master->slaves);\n        while((de = dictNext(&di)) != NULL) {\n            sentinelRedisInstance *slave = dictGetVal(de);\n            int retval;\n\n            if (slave->flags & (SRI_PROMOTED|SRI_RECONF_DONE|SRI_RECONF_SENT)) continue;\n            if (slave->link->disconnected) continue;\n\n            retval = sentinelSendSlaveOf(slave,master->promoted_slave->addr);\n            if (retval == C_OK) {\n                sentinelEvent(LL_NOTICE,\"+slave-reconf-sent-be\",slave,\"%@\");\n                slave->flags |= SRI_RECONF_SENT;\n            }\n        }\n        dictResetIterator(&di);\n    }\n}\n\n/* Send SLAVE OF <new master address> to all the remaining slaves that\n * still don't appear to have the configuration updated. 
*/\nvoid sentinelFailoverReconfNextSlave(sentinelRedisInstance *master) {\n    dictIterator di;\n    dictEntry *de;\n    int in_progress = 0;\n\n    dictInitIterator(&di, master->slaves);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *slave = dictGetVal(de);\n\n        if (slave->flags & (SRI_RECONF_SENT|SRI_RECONF_INPROG))\n            in_progress++;\n    }\n    dictResetIterator(&di);\n\n    dictInitIterator(&di, master->slaves);\n    while(in_progress < master->parallel_syncs &&\n          (de = dictNext(&di)) != NULL)\n    {\n        sentinelRedisInstance *slave = dictGetVal(de);\n        int retval;\n\n        /* Skip the promoted slave, and already configured slaves. */\n        if (slave->flags & (SRI_PROMOTED|SRI_RECONF_DONE)) continue;\n\n        /* If too much time elapsed without the slave moving forward to\n         * the next state, consider it reconfigured even if it is not.\n         * Sentinels will detect the slave as misconfigured and fix its\n         * configuration later. */\n        if ((slave->flags & SRI_RECONF_SENT) &&\n            (mstime() - slave->slave_reconf_sent_time) >\n            sentinel_slave_reconf_timeout)\n        {\n            sentinelEvent(LL_NOTICE,\"-slave-reconf-sent-timeout\",slave,\"%@\");\n            slave->flags &= ~SRI_RECONF_SENT;\n            slave->flags |= SRI_RECONF_DONE;\n        }\n\n        /* Nothing to do for instances that are disconnected or already\n         * in RECONF_SENT state. */\n        if (slave->flags & (SRI_RECONF_SENT|SRI_RECONF_INPROG)) continue;\n        if (slave->link->disconnected) continue;\n\n        /* Send SLAVEOF <new master>. 
*/\n        retval = sentinelSendSlaveOf(slave,master->promoted_slave->addr);\n        if (retval == C_OK) {\n            slave->flags |= SRI_RECONF_SENT;\n            slave->slave_reconf_sent_time = mstime();\n            sentinelEvent(LL_NOTICE,\"+slave-reconf-sent\",slave,\"%@\");\n            in_progress++;\n        }\n    }\n    dictResetIterator(&di);\n\n    /* Check if all the slaves are reconfigured and handle timeout. */\n    sentinelFailoverDetectEnd(master);\n}\n\n/* This function is called when the slave is in\n * SENTINEL_FAILOVER_STATE_UPDATE_CONFIG state. In this state we need\n * to remove it from the master table and add the promoted slave instead. */\nvoid sentinelFailoverSwitchToPromotedSlave(sentinelRedisInstance *master) {\n    sentinelRedisInstance *ref = master->promoted_slave ?\n                                 master->promoted_slave : master;\n\n    sentinelEvent(LL_WARNING,\"+switch-master\",master,\"%s %s %d %s %d\",\n        master->name, announceSentinelAddr(master->addr), master->addr->port,\n        announceSentinelAddr(ref->addr), ref->addr->port);\n\n    sentinelResetMasterAndChangeAddress(master,ref->addr->hostname,ref->addr->port);\n}\n\nvoid sentinelFailoverStateMachine(sentinelRedisInstance *ri) {\n    serverAssert(ri->flags & SRI_MASTER);\n\n    if (!(ri->flags & SRI_FAILOVER_IN_PROGRESS)) return;\n\n    switch(ri->failover_state) {\n        case SENTINEL_FAILOVER_STATE_WAIT_START:\n            sentinelFailoverWaitStart(ri);\n            break;\n        case SENTINEL_FAILOVER_STATE_SELECT_SLAVE:\n            sentinelFailoverSelectSlave(ri);\n            break;\n        case SENTINEL_FAILOVER_STATE_SEND_SLAVEOF_NOONE:\n            sentinelFailoverSendSlaveOfNoOne(ri);\n            break;\n        case SENTINEL_FAILOVER_STATE_WAIT_PROMOTION:\n            sentinelFailoverWaitPromotion(ri);\n            break;\n        case SENTINEL_FAILOVER_STATE_RECONF_SLAVES:\n            sentinelFailoverReconfNextSlave(ri);\n            
break;\n    }\n}\n\n/* Abort a failover in progress:\n *\n * This function can only be called before the promoted slave acknowledges\n * the slave -> master switch. Otherwise the failover can't be aborted and\n * will reach its end (possibly by timeout). */\nvoid sentinelAbortFailover(sentinelRedisInstance *ri) {\n    serverAssert(ri->flags & SRI_FAILOVER_IN_PROGRESS);\n    serverAssert(ri->failover_state <= SENTINEL_FAILOVER_STATE_WAIT_PROMOTION);\n\n    ri->flags &= ~(SRI_FAILOVER_IN_PROGRESS|SRI_FORCE_FAILOVER);\n    ri->failover_state = SENTINEL_FAILOVER_STATE_NONE;\n    ri->failover_state_change_time = mstime();\n    if (ri->promoted_slave) {\n        ri->promoted_slave->flags &= ~SRI_PROMOTED;\n        ri->promoted_slave = NULL;\n    }\n}\n\n/* ======================== SENTINEL timer handler ==========================\n * This is the \"main\" of our Sentinel: Sentinel is completely non-blocking\n * by design.\n * -------------------------------------------------------------------------- */\n\n/* Perform scheduled operations for the specified Redis instance. */\nvoid sentinelHandleRedisInstance(sentinelRedisInstance *ri) {\n    /* ========== MONITORING HALF ============ */\n    /* Every kind of instance */\n    sentinelReconnectInstance(ri);\n    sentinelSendPeriodicCommands(ri);\n\n    /* ============== ACTING HALF ============= */\n    /* We don't proceed with the acting half if we are in TILT mode.\n     * TILT happens when we find something odd with the time, like a\n     * sudden change in the clock. */\n    if (sentinel.tilt) {\n        if (mstime()-sentinel.tilt_start_time < sentinel_tilt_period) return;\n        sentinel.tilt = 0;\n        sentinelEvent(LL_WARNING,\"-tilt\",NULL,\"#tilt mode exited\");\n    }\n\n    /* Every kind of instance */\n    sentinelCheckSubjectivelyDown(ri);\n\n    /* Masters and slaves */\n    if (ri->flags & (SRI_MASTER|SRI_SLAVE)) {\n        /* Nothing so far. 
*/\n    }\n\n    /* Only masters */\n    if (ri->flags & SRI_MASTER) {\n        sentinelCheckObjectivelyDown(ri);\n        if (sentinelStartFailoverIfNeeded(ri))\n            sentinelAskMasterStateToOtherSentinels(ri,SENTINEL_ASK_FORCED);\n        sentinelFailoverStateMachine(ri);\n        sentinelAskMasterStateToOtherSentinels(ri,SENTINEL_NO_FLAGS);\n    }\n}\n\n/* Perform scheduled operations for all the instances in the dictionary.\n * Recursively call the function against dictionaries of slaves. */\nvoid sentinelHandleDictOfRedisInstances(dict *instances) {\n    dictIterator di;\n    dictEntry *de;\n    sentinelRedisInstance *switch_to_promoted = NULL;\n\n    /* There are a number of things we need to perform against every master. */\n    dictInitIterator(&di, instances);\n    while((de = dictNext(&di)) != NULL) {\n        sentinelRedisInstance *ri = dictGetVal(de);\n\n        sentinelHandleRedisInstance(ri);\n        if (ri->flags & SRI_MASTER) {\n            sentinelHandleDictOfRedisInstances(ri->slaves);\n            sentinelHandleDictOfRedisInstances(ri->sentinels);\n            if (ri->failover_state == SENTINEL_FAILOVER_STATE_UPDATE_CONFIG) {\n                switch_to_promoted = ri;\n            }\n        }\n    }\n    if (switch_to_promoted)\n        sentinelFailoverSwitchToPromotedSlave(switch_to_promoted);\n    dictResetIterator(&di);\n}\n\n/* This function checks if we need to enter the TILT mode.\n *\n * The TILT mode is entered if we detect that between two invocations of the\n * timer interrupt, a negative amount of time, or too much time has passed.\n * Note that we expect that more or less just 100 milliseconds will pass\n * if everything is fine. 
However we'll see a negative number or a\n * difference bigger than SENTINEL_TILT_TRIGGER milliseconds if one of the\n * following conditions happens:\n *\n * 1) The Sentinel process is blocked for some time, for any kind of\n * reason: the load is huge, the computer was frozen for some time\n * in I/O or the like, the process was stopped by a signal, and so on.\n * 2) The system clock was altered significantly.\n *\n * Under both these conditions we'll see everything as timed out and failing\n * without a good reason. Instead we enter TILT mode and wait\n * for SENTINEL_TILT_PERIOD to elapse before starting to act again.\n *\n * During TILT time we still collect information, we just do not act. */\nvoid sentinelCheckTiltCondition(void) {\n    mstime_t now = mstime();\n    mstime_t delta = now - sentinel.previous_time;\n\n    if (delta < 0 || delta > sentinel_tilt_trigger) {\n        sentinel.tilt = 1;\n        sentinel.tilt_start_time = mstime();\n        sentinel.total_tilt++;\n        sentinelEvent(LL_WARNING,\"+tilt\",NULL,\"#tilt mode entered\");\n    }\n    sentinel.previous_time = mstime();\n}\n\nvoid sentinelTimer(void) {\n    sentinelCheckTiltCondition();\n    sentinelHandleDictOfRedisInstances(sentinel.masters);\n    sentinelRunPendingScripts();\n    sentinelCollectTerminatedScripts();\n    sentinelKillTimedoutScripts();\n\n    /* We continuously change the frequency of the Redis \"timer interrupt\"\n     * in order to desynchronize every Sentinel from every other.\n     * This non-determinism prevents Sentinels started at exactly the same\n     * time from staying synchronized and asking to be voted at the\n     * same time again and again (likely resulting in nobody winning the\n     * election because of split-brain voting). */\n    server.hz = CONFIG_DEFAULT_HZ + rand() % CONFIG_DEFAULT_HZ;\n}\n"
  },
  {
    "path": "src/server.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"monotonic.h\"\n#include \"cluster.h\"\n#include \"cluster_slot_stats.h\"\n#include \"slowlog.h\"\n#include \"bio.h\"\n#include \"latency.h\"\n#include \"atomicvar.h\"\n#include \"mt19937-64.h\"\n#include \"functions.h\"\n#include \"hdr_histogram.h\"\n#include \"syscheck.h\"\n#include \"threads_mngr.h\"\n#include \"fmtargs.h\"\n#include \"mstr.h\"\n#include \"ebuckets.h\"\n#include \"cluster_asm.h\"\n#include \"fwtree.h\"\n#include \"estore.h\"\n#include \"chk.h\"\n#include \"fast_float_strtod.h\"\n\n#include <time.h>\n#include <signal.h>\n#include <sys/wait.h>\n#include <errno.h>\n#include <ctype.h>\n#include <stdarg.h>\n#include <arpa/inet.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <sys/file.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <sys/uio.h>\n#include <sys/un.h>\n#include <limits.h>\n#include <float.h>\n#include <math.h>\n#include <sys/utsname.h>\n#include <locale.h>\n#include <sys/socket.h>\n\n#ifdef __linux__\n#include <sys/mman.h>\n#endif\n\n#if defined(HAVE_SYSCTL_KIPC_SOMAXCONN) || defined(HAVE_SYSCTL_KERN_SOMAXCONN)\n#include <sys/sysctl.h>\n#endif\n\n#ifdef __GNUC__\n#define GNUC_VERSION_STR STRINGIFY(__GNUC__) \".\" STRINGIFY(__GNUC_MINOR__) \".\" STRINGIFY(__GNUC_PATCHLEVEL__)\n#else\n#define GNUC_VERSION_STR \"0.0.0\"\n#endif\n\n/* Our shared \"common\" objects */\n\nstruct sharedObjectsStruct shared;\n\n/* Global vars that are actually used as constants. 
The following double\n * values are used for double on-disk serialization, and are initialized\n * at runtime to avoid strange compiler optimizations. */\n\ndouble R_Zero, R_PosInf, R_NegInf, R_Nan;\n\n/*================================= Globals ================================= */\n\n/* Global vars */\nstruct redisServer server; /* Server global state */\n\n/* Snapshot of server.stat_total_client_process_input_buff_events used in\n * beforeSleep() to detect event loop cycles where client input buffers\n * were processed. */\nlong long stat_prev_total_client_process_input_buff_events = 0;\n\n/*============================ Internal prototypes ========================== */\n\nstatic inline int isShutdownInitiated(void);\nstatic inline int isCommandReusable(struct redisCommand *cmd, robj *commandArg);\nint isReadyToShutdown(void);\nint finishShutdown(void);\nconst char *replstateToString(int replstate);\n\n/*============================ Utility functions ============================ */\n\n/* Check if a given command can be reused without performing a lookup.\n * A command is reusable if:\n * - It is not NULL.\n * - It does not have subcommands (subcommands_dict == NULL).\n *   This preserves simplicity on the check and accounts for the majority of the use cases.\n * - Its full name matches the provided command argument. */\nstatic inline int isCommandReusable(struct redisCommand *cmd, robj *commandArg) {\n    return cmd != NULL &&\n           cmd->subcommands_dict == NULL &&\n           strcasecmp(cmd->fullname, commandArg->ptr) == 0;\n}\n\n/* This macro tells if we are in the context of loading an AOF. */\n#define isAOFLoadingContext() \\\n    ((server.current_client && server.current_client->id == CLIENT_ID_AOF) ? 1 : 0)\n\n/* We use a private localtime implementation which is fork-safe. The logging\n * function of Redis may be called from other threads. 
*/\nvoid nolocks_localtime(struct tm *tmp, time_t t, time_t tz, int dst);\n\nstatic inline int shouldShutdownAsap(void) {\n    int shutdown_asap;\n    atomicGet(server.shutdown_asap, shutdown_asap);\n    return shutdown_asap;\n}\n\n/* Low level logging. To use only for very big messages, otherwise\n * serverLog() is to prefer. */\nvoid serverLogRaw(int level, const char *msg) {\n    const int syslogLevelMap[] = { LOG_DEBUG, LOG_INFO, LOG_NOTICE, LOG_WARNING };\n    const char *c = \".-*#\";\n    FILE *fp;\n    char buf[64];\n    int rawmode = (level & LL_RAW);\n    int log_to_stdout = server.logfile[0] == '\\0';\n\n    level &= 0xff; /* clear flags */\n    if (level < server.verbosity) return;\n\n    fp = log_to_stdout ? stdout : fopen(server.logfile,\"a\");\n    if (!fp) return;\n\n    if (rawmode) {\n        fprintf(fp,\"%s\",msg);\n    } else {\n        int off;\n        struct timeval tv;\n        int role_char;\n        int daylight_active = 0;\n        pid_t pid = getpid();\n\n        gettimeofday(&tv,NULL);\n        struct tm tm;\n        atomicGet(server.daylight_active, daylight_active);\n        nolocks_localtime(&tm,tv.tv_sec,server.timezone,daylight_active);\n        off = strftime(buf,sizeof(buf),\"%d %b %Y %H:%M:%S.\",&tm);\n        snprintf(buf+off,sizeof(buf)-off,\"%03d\",(int)tv.tv_usec/1000);\n        if (server.sentinel_mode) {\n            role_char = 'X'; /* Sentinel. */\n        } else if (pid != server.pid) {\n            role_char = 'C'; /* RDB / AOF writing child. */\n        } else {\n            role_char = (server.masterhost ? 'S':'M'); /* Slave or Master. */\n        }\n        fprintf(fp,\"%d:%c %s %c %s\\n\",\n            (int)getpid(),role_char, buf,c[level],msg);\n    }\n    fflush(fp);\n\n    if (!log_to_stdout) fclose(fp);\n    if (server.syslog_enabled) syslog(syslogLevelMap[level], \"%s\", msg);\n}\n\n/* Like serverLogRaw() but with printf-alike support. This is the function that\n * is used across the code. 
The raw version is only used in order to dump\n * the INFO output on crash. */\nvoid _serverLog(int level, const char *fmt, ...) {\n    va_list ap;\n    char msg[LOG_MAX_LEN];\n\n    va_start(ap, fmt);\n    vsnprintf(msg, sizeof(msg), fmt, ap);\n    va_end(ap);\n\n    serverLogRaw(level,msg);\n}\n\n/* Low level logging from signal handler. Should be used with pre-formatted strings. \n   See serverLogFromHandler. */\nvoid serverLogRawFromHandler(int level, const char *msg) {\n    int fd;\n    int log_to_stdout = server.logfile[0] == '\\0';\n    char buf[64];\n\n    if ((level&0xff) < server.verbosity || (log_to_stdout && server.daemonize))\n        return;\n    fd = log_to_stdout ? STDOUT_FILENO :\n                         open(server.logfile, O_APPEND|O_CREAT|O_WRONLY, 0644);\n    if (fd == -1) return;\n    if (level & LL_RAW) {\n        if (write(fd,msg,strlen(msg)) == -1) goto err;\n    }\n    else {\n        ll2string(buf,sizeof(buf),getpid());\n        if (write(fd,buf,strlen(buf)) == -1) goto err;\n        if (write(fd,\":signal-handler (\",17) == -1) goto err;\n        ll2string(buf,sizeof(buf),time(NULL));\n        if (write(fd,buf,strlen(buf)) == -1) goto err;\n        if (write(fd,\") \",2) == -1) goto err;\n        if (write(fd,msg,strlen(msg)) == -1) goto err;\n        if (write(fd,\"\\n\",1) == -1) goto err;\n    }\nerr:\n    if (!log_to_stdout) close(fd);\n}\n\n/* An async-signal-safe version of serverLog. if LL_RAW is not included in level flags,\n * The message format is: <pid>:signal-handler (<time>) <msg> \\n\n * with LL_RAW flag only the msg is printed (with no new line at the end)\n *\n * We actually use this only for signals that are not fatal from the point\n * of view of Redis. Signals that are going to kill the server anyway and\n * where we need printf-alike features are served by serverLog(). */\nvoid serverLogFromHandler(int level, const char *fmt, ...) 
{\n    va_list ap;\n    char msg[LOG_MAX_LEN];\n\n    va_start(ap, fmt);\n    vsnprintf_async_signal_safe(msg, sizeof(msg), fmt, ap);\n    va_end(ap);\n\n    serverLogRawFromHandler(level, msg);\n}\n\n/* Return the UNIX time in microseconds */\nlong long ustime(void) {\n    struct timeval tv;\n    long long ust;\n\n    gettimeofday(&tv, NULL);\n    ust = ((long long)tv.tv_sec)*1000000;\n    ust += tv.tv_usec;\n    return ust;\n}\n\n/* Return the UNIX time in milliseconds */\nmstime_t mstime(void) {\n    return ustime()/1000;\n}\n\n/* Return the command time snapshot in milliseconds.\n * The time the command started is the logical time it runs,\n * and all the time readings during the execution time should\n * reflect the same time.\n * More details can be found in the comments below. */\nmstime_t commandTimeSnapshot(void) {\n    /* When we are in the middle of a command execution, we want to use a\n     * reference time that does not change: in that case we just use the\n     * cached time, that we update before each call in the call() function.\n     * This way we avoid that commands such as RPOPLPUSH or similar, that\n     * may re-open the same key multiple times, can invalidate an already\n     * open object in a next call, if the next call will see the key expired,\n     * while the first did not.\n     * This is specifically important in the context of scripts, where we\n     * pretend that time freezes. This way a key can expire only the first time\n     * it is accessed and not in the middle of the script execution, making\n     * propagation to slaves / AOF consistent. See issue #1525 for more info.\n     * Note that we cannot use the cached server.mstime because it can change\n     * in processEventsWhileBlocked etc. */\n    return server.cmd_time_snapshot;\n}\n\n/* After an RDB dump or AOF rewrite we exit from children using _exit() instead of\n * exit(), because the latter may interact with the same file objects used by\n * the parent process. 
However if we are testing the coverage normal exit() is\n * used in order to obtain the right coverage information. \n * There is a caveat for when we exit due to a signal.\n * In this case we want the function to be async signal safe, so we can't use exit()\n */\nvoid exitFromChild(int retcode, int from_signal) {\n#ifdef COVERAGE_TEST\n    if (!from_signal) {\n        exit(retcode);\n    } else {\n        _exit(retcode);\n    }\n#else\n    UNUSED(from_signal);\n    _exit(retcode);\n#endif\n}\n\n/*====================== Hash table type implementation  ==================== */\n\n/* This is a hash table type that uses the SDS dynamic strings library as\n * keys and redis objects as values (objects can hold SDS strings,\n * lists, sets). */\n\nvoid dictVanillaFree(dict *d, void *val)\n{\n    UNUSED(d);\n    zfree(val);\n}\n\nvoid dictListDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    listRelease((list*)val);\n}\n\nvoid dictDictDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    dictRelease((dict*)val);\n}\n\nsize_t dictSdsKeyLen(dict *d, const void *key) {\n    UNUSED(d);\n    return sdslen((sds)key);\n}\n\nstatic const void *kvGetKey(const void *kv) {\n    sds sdsKey = kvobjGetKey((kvobj *) kv);\n    return sdsKey;\n}\n\nint dictSdsCompareKV(dictCmpCache *cache, const void *sdsKey1, const void *sdsKey2)\n{\n    /* is first cmp call of a new lookup */\n    if (cache->useCache == 0) {\n        cache->useCache = 1;\n        cache->data[0].sz = sdslen((sds) sdsKey1);\n    }\n\n    size_t l1 = cache->data[0].sz;\n    size_t l2 = sdslen((sds)sdsKey2);\n    if (l1 != l2) return 0;\n    return memcmp(sdsKey1, sdsKey2, l1) == 0;\n}\n\nstatic void dictDestructorKV(dict *d, void *key) {\n    kvobj *kv = (kvobj *)key;\n    if (kv == NULL) return;\n    if (server.memory_tracking_enabled) {\n        kvstore *kvs = d->type->userdata;\n        kvstoreMetadata *kvstoreMeta = kvstoreGetMetadata(kvs);\n        kvstoreDictMetadata *meta = (kvstoreDictMetadata 
*)dictMetadata(d);\n        size_t alloc_size = kvobjAllocSize(kv);\n        debugServerAssert(alloc_size <= meta->alloc_size);\n        meta->alloc_size -= alloc_size;\n        /* kvstoreMeta may be NULL when freeing kvstore created with kvstoreBaseType\n         * (e.g. in lazy free context). */\n        if (kvstoreMeta && kv->type < OBJ_TYPE_BASIC_MAX) {\n            /* we don't call kvsUpdateHistogram() because it contains debugServerAssert\n             * that may fail in a bg thread, as the kvstore might not be fully initialized */\n            int old_bin = (alloc_size == 0) ? 0 : log2ceil(alloc_size) + 1;\n            debugServerAssert(old_bin < MAX_KEYSIZES_BINS);\n            kvstoreMeta->allocsizes_hist[kv->type][old_bin]--;\n        }\n    }\n    decrRefCount(kv);\n}\n\nint dictSdsKeyCompare(dictCmpCache *cache, const void *key1,\n        const void *key2)\n{\n    int l1,l2;\n    UNUSED(cache);\n\n    l1 = sdslen((sds)key1);\n    l2 = sdslen((sds)key2);\n    if (l1 != l2) return 0;\n    return memcmp(key1, key2, l1) == 0;\n}\n\n/* A case insensitive version used for the command lookup table and other\n * places where case insensitive non binary-safe comparison is needed. */\nint dictSdsKeyCaseCompare(dictCmpCache *cache, const void *key1,\n        const void *key2)\n{\n    UNUSED(cache);\n    return strcasecmp(key1, key2) == 0;\n}\n\nvoid dictObjectDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    if (val == NULL) return; /* Lazy freeing will set value to NULL. 
*/\n    decrRefCount(val);\n}\n\nvoid dictSdsDestructor(dict *d, void *val)\n{\n    UNUSED(d);\n    sdsfree(val);\n}\n\nvoid setSdsDestructor(dict *d, void *val) {\n    *htGetMetadataSize(d) -= sdsAllocSize(val);\n    sdsfree(val);\n}\n\nsize_t setDictMetadataBytes(dict *d) {\n    UNUSED(d);\n    return sizeof(size_t);\n}\n\nvoid *dictSdsDup(dict *d, const void *key) {\n    UNUSED(d);\n    return sdsdup((const sds) key);\n}\n\nint dictObjKeyCompare(dictCmpCache *cache, const void *key1,\n        const void *key2)\n{\n    const robj *o1 = key1, *o2 = key2;\n    return dictSdsKeyCompare(cache, o1->ptr,o2->ptr);\n}\n\nuint64_t dictObjHash(const void *key) {\n    const robj *o = key;\n    return dictGenHashFunction(o->ptr, sdslen((sds)o->ptr));\n}\n\nuint64_t dictPtrHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)&key,sizeof(key));\n}\n\nuint64_t dictSdsHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, sdslen((char*)key));\n}\n\nuint64_t dictSdsCaseHash(const void *key) {\n    return dictGenCaseHashFunction((unsigned char*)key, sdslen((char*)key));\n}\n\n/* Dict hash function for null terminated string */\nuint64_t dictCStrHash(const void *key) {\n    return dictGenHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\n/* Dict hash function for null terminated string */\nuint64_t dictCStrCaseHash(const void *key) {\n    return dictGenCaseHashFunction((unsigned char*)key, strlen((char*)key));\n}\n\n/* Dict hash function for client */\nuint64_t dictClientHash(const void *key) {\n    return ((client *)key)->id;\n}\n\n/* Dict compare function for client */\nint dictClientKeyCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    UNUSED(cache);\n    return ((client *)key1)->id == ((client *)key2)->id;\n}\n\n/* Dict compare function for null terminated string */\nint dictCStrKeyCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    int l1,l2;\n    UNUSED(cache);\n\n    l1 = 
strlen((char*)key1);\n    l2 = strlen((char*)key2);\n    if (l1 != l2) return 0;\n    return memcmp(key1, key2, l1) == 0;\n}\n\n/* Dict case insensitive compare function for null terminated string */\nint dictCStrKeyCaseCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    UNUSED(cache);\n    return strcasecmp(key1, key2) == 0;\n}\n\nint dictEncObjKeyCompare(dictCmpCache *cache, const void *key1, const void *key2)\n{\n    robj *o1 = (robj*) key1, *o2 = (robj*) key2;\n    int cmp;\n\n    if (o1->encoding == OBJ_ENCODING_INT &&\n        o2->encoding == OBJ_ENCODING_INT)\n            return o1->ptr == o2->ptr;\n\n    /* Due to OBJ_STATIC_REFCOUNT, we avoid calling getDecodedObject() without\n     * good reasons, because it would incrRefCount() the object, which\n     * is invalid. So we check to make sure dictFind() works with static\n     * objects as well. */\n    if (o1->refcount != OBJ_STATIC_REFCOUNT) o1 = getDecodedObject(o1);\n    if (o2->refcount != OBJ_STATIC_REFCOUNT) o2 = getDecodedObject(o2);\n    cmp = dictSdsKeyCompare(cache,o1->ptr,o2->ptr);\n    if (o1->refcount != OBJ_STATIC_REFCOUNT) decrRefCount(o1);\n    if (o2->refcount != OBJ_STATIC_REFCOUNT) decrRefCount(o2);\n    return cmp;\n}\n\nuint64_t dictEncObjHash(const void *key) {\n    robj *o = (robj*) key;\n\n    if (sdsEncodedObject(o)) {\n        return dictGenHashFunction(o->ptr, sdslen((sds)o->ptr));\n    } else if (o->encoding == OBJ_ENCODING_INT) {\n        char buf[32];\n        int len;\n\n        len = ll2string(buf,32,(long)o->ptr);\n        return dictGenHashFunction((unsigned char*)buf, len);\n    } else {\n        serverPanic(\"Unknown string encoding\");\n    }\n}\n\nstatic size_t kvstoreMetadataBytes(kvstore *kvs) {\n    UNUSED(kvs);\n    return sizeof(kvstoreMetadata);\n}\n\nstatic size_t kvstoreDictMetaBytes(dict *d) {\n    UNUSED(d);\n    return sizeof(kvstoreDictMetadata);\n}\n\nstatic int kvstoreCanFreeDict(kvstore *kvs, int didx) {\n    kvstoreDictMetadata *meta 
= kvstoreGetDictMeta(kvs, didx, 0);\n    debugServerAssert(meta->alloc_size == 0);\n    /* Free if not in cluster */\n    if (!server.cluster_enabled) return 1;\n\n    /* Don't free if we have stats for this slot and the relevant tracking is enabled. */\n    int has_cpu_stats = (server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_CPU) && meta->cpu_usec;\n    int has_net_stats = (server.cluster_slot_stats_enabled & CLUSTER_SLOT_STATS_NET) &&\n                        (meta->network_bytes_in || meta->network_bytes_out);\n    if ((has_cpu_stats || has_net_stats) && clusterIsMySlot(didx)) {\n        return 0;\n    }\n\n    /* Otherwise, we can free */\n    return 1;\n}\n\nstatic void kvstoreOnEmpty(kvstore *kvs) {\n    kvstoreMetadata *meta = kvstoreGetMetadata(kvs);\n    memset(&meta->keysizes_hist, 0, sizeof(meta->keysizes_hist));\n    memset(&meta->allocsizes_hist, 0, sizeof(meta->allocsizes_hist));\n}\n\nstatic void kvstoreOnDictEmpty(kvstore *kvs, int didx) {\n    kvstoreDictMetadata *meta = kvstoreGetDictMeta(kvs, didx, 0);\n    UNUSED(meta);\n#ifdef DEBUG_ASSERTIONS\n    dictEmpty(kvstoreGetDict(kvs, didx), NULL);\n#endif\n    debugServerAssert(meta->alloc_size == 0);\n}\n\n/* Return 1 if we currently allow the dict to expand. When a dict expands it\n * may allocate a huge amount of memory for the hash buckets, which may make\n * Redis reject user requests or evict some keys. We can therefore provisionally\n * stop the dict from expanding if used memory would exceed maxmemory after the\n * expansion. However, to guarantee Redis performance, we still allow the dict\n * to expand if its load factor exceeds HASHTABLE_MAX_LOAD_FACTOR. */\nint dictResizeAllowed(size_t moreMem, double usedRatio) {\n    /* for debug purposes: dict is not allowed to be resized. 
*/\n    if (!server.dict_resizing) return 0;\n\n    if (usedRatio <= HASHTABLE_MAX_LOAD_FACTOR) {\n        return !overMaxmemoryAfterAlloc(moreMem);\n    } else {\n        return 1;\n    }\n}\n\n/* Generic hash table type where keys are Redis Objects, Values\n * dummy pointers. */\ndictType objectKeyPointerValueDictType = {\n    dictEncObjHash,            /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictEncObjKeyCompare,      /* key compare */\n    dictObjectDestructor,      /* key destructor */\n    NULL,                      /* val destructor */\n    NULL                       /* allow to expand */\n};\n\n/* Dict type with robj pointer keys and no values. */\ndictType objectKeyNoValueDictType = {\n    dictEncObjHash,            /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictEncObjKeyCompare,      /* key compare */\n    dictObjectDestructor,      /* key destructor */\n    NULL,                      /* val destructor */\n    NULL,                      /* allow to expand */\n    .no_value = 1,             /* no values in this dict */\n    .keys_are_odd = 0,         /* robj pointers are not odd */\n};\n\n/* Like objectKeyPointerValueDictType(), but values can be destroyed, if\n * not NULL, calling zfree(). */\ndictType objectKeyHeapPointerValueDictType = {\n    dictEncObjHash,            /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictEncObjKeyCompare,      /* key compare */\n    dictObjectDestructor,      /* key destructor */\n    dictVanillaFree,           /* val destructor */\n    NULL                       /* allow to expand */\n};\n\n/* Set dictionary type. Keys are SDS strings, values are not used. 
*/\ndictType setDictType = {\n    dictSdsHash,               /* hash function */\n    NULL,                      /* key dup */\n    NULL,                      /* val dup */\n    dictSdsKeyCompare,         /* key compare */\n    setSdsDestructor,          /* key destructor */\n    NULL,                      /* val destructor */\n    NULL,                      /* allow to expand */\n    .no_value = 1,             /* no values in this dict */\n    .keys_are_odd = 1,         /* an SDS string is always an odd pointer */\n    .dictMetadataBytes = setDictMetadataBytes,\n};\n\n/* Db->dict, keys are of type kvobj, unification of key and value */\ndictType dbDictType = {\n    dictSdsHash,            /* hash function */\n    NULL,                   /* key dup */\n    NULL,                   /* val dup */\n    dictSdsCompareKV,       /* lookup key compare */\n    dictDestructorKV,       /* key destructor */\n    NULL,                   /* val destructor */\n    dictResizeAllowed,      /* allow to resize */\n    .no_value = 1,          /* keys and values are unified (kvobj) */\n    .keys_are_odd = 0,      /* simple kvobj (robj) struct */\n    .keyFromStoredKey = kvGetKey,    /* get key from stored-key */\n};\n\n/* Db->expires */\ndictType dbExpiresDictType = {\n    dictSdsHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsCompareKV,           /* key compare */\n    NULL,                       /* key destructor */\n    NULL,                       /* val destructor */\n    dictResizeAllowed,          /* allow to resize */\n    .no_value = 1,              /* keys and values are unified (kvobj) */\n    .keys_are_odd = 0,          /* simple kvobj (robj) struct */\n    .keyFromStoredKey = kvGetKey,   /* get key from stored-key */\n};\n\n/* Command table. sds string -> command struct pointer. 
*/\ndictType commandTableDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    NULL,                       /* val destructor */\n    NULL,                       /* allow to expand */\n    .force_full_rehash = 1,     /* force full rehashing */\n};\n\n/* Hash type hash table (note that small hashes are represented with listpacks) */\ndictType hashDictType = {\n    dictSdsHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCompare,          /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    dictSdsDestructor,          /* val destructor */\n    NULL,                       /* allow to expand */\n};\n\n/* Dict type without destructor */\ndictType sdsReplyDictType = {\n    dictSdsHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCompare,          /* key compare */\n    NULL,                       /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Keylist hash table type has unencoded redis objects as keys and\n * lists as values. It's used for blocking operations (BLPOP) and to\n * map swapped keys to a list of clients waiting for these keys to be loaded. 
*/\ndictType keylistDictType = {\n    dictObjHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictObjKeyCompare,          /* key compare */\n    dictObjectDestructor,       /* key destructor */\n    dictListDestructor,         /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* KeyDict hash table type has unencoded redis objects as keys and\n * dicts as values. It's used for PUBSUB command to track clients subscribing the channels. */\ndictType objToDictDictType = {\n    dictObjHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictObjKeyCompare,          /* key compare */\n    dictObjectDestructor,       /* key destructor */\n    dictDictDestructor,         /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Modules system dictionary type. Keys are module name,\n * values are pointer to RedisModule struct. */\ndictType modulesDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Migrate cache dict type. 
*/\ndictType migrateCacheDictType = {\n    dictSdsHash,                /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCompare,          /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Dict for case-insensitive search using null terminated C strings.\n * The keys stored in the dict are sds strings, though. */\ndictType stringSetDictType = {\n    dictCStrCaseHash,           /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictCStrKeyCaseCompare,     /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Dict for case-insensitive search using null terminated C strings.\n * The key and value do not have a destructor. */\ndictType externalStringType = {\n    dictCStrCaseHash,           /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictCStrKeyCaseCompare,     /* key compare */\n    NULL,                       /* key destructor */\n    NULL,                       /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Dict for case-insensitive search using sds objects with a zmalloc\n * allocated object as the value. */\ndictType sdsHashDictType = {\n    dictSdsCaseHash,            /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictSdsKeyCaseCompare,      /* key compare */\n    dictSdsDestructor,          /* key destructor */\n    dictVanillaFree,            /* val destructor */\n    NULL                        /* allow to expand */\n};\n\n/* Client Set dictionary type. 
Keys are clients, values are not used. */\ndictType clientDictType = {\n    dictClientHash,             /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    dictClientKeyCompare,       /* key compare */\n    .no_value = 1,              /* no values in this dict */\n    .keys_are_odd = 0           /* a client pointer is not an odd pointer */\n};\n\nkvstoreType kvstoreBaseType = {\n    NULL, /* kvstore metadata size */\n    NULL, /* dict metadata size */\n    NULL, /* can free dict */\n    NULL, /* on kvstore empty */\n    NULL, /* on dict empty */\n};\n\nkvstoreType kvstoreExType = {\n    kvstoreMetadataBytes, /* kvstore metadata size */\n    kvstoreDictMetaBytes, /* dict metadata size */\n    kvstoreCanFreeDict,   /* can free dict */\n    kvstoreOnEmpty,       /* on kvstore empty */\n    kvstoreOnDictEmpty,   /* on dict empty */\n};\n\n/* This function is called once a background process of some kind terminates,\n * as we want to avoid resizing the hash tables when there is a child in order\n * to play well with copy-on-write (otherwise when a resize happens lots of\n * memory pages are copied). The goal of this function is to update the ability\n * of dict.c to resize or rehash the tables according to whether we have an\n * active fork child running. 
*/\nvoid updateDictResizePolicy(void) {\n    if (server.in_fork_child != CHILD_TYPE_NONE)\n        dictSetResizeEnabled(DICT_RESIZE_FORBID);\n    else if (hasActiveChildProcess())\n        dictSetResizeEnabled(DICT_RESIZE_AVOID);\n    else\n        dictSetResizeEnabled(DICT_RESIZE_ENABLE);\n}\n\nconst char *strChildType(int type) {\n    switch(type) {\n        case CHILD_TYPE_RDB: return \"RDB\";\n        case CHILD_TYPE_AOF: return \"AOF\";\n        case CHILD_TYPE_LDB: return \"LDB\";\n        case CHILD_TYPE_MODULE: return \"MODULE\";\n        default: return \"Unknown\";\n    }\n}\n\n/* Return true if there are active child processes doing RDB saving,\n * AOF rewriting, or some side process spawned by a loaded module. */\nint hasActiveChildProcess(void) {\n    return server.child_pid != -1;\n}\n\nvoid resetChildState(void) {\n    server.child_type = CHILD_TYPE_NONE;\n    server.child_pid = -1;\n    server.stat_current_cow_peak = 0;\n    server.stat_current_cow_bytes = 0;\n    server.stat_current_cow_updated = 0;\n    server.stat_current_save_keys_processed = 0;\n    server.stat_module_progress = 0;\n    server.stat_current_save_keys_total = 0;\n    updateDictResizePolicy();\n    closeChildInfoPipe();\n    moduleFireServerEvent(REDISMODULE_EVENT_FORK_CHILD,\n                          REDISMODULE_SUBEVENT_FORK_CHILD_DIED,\n                          NULL);\n}\n\n/* Return if child type is mutually exclusive with other fork children */\nint isMutuallyExclusiveChildType(int type) {\n    return type == CHILD_TYPE_RDB || type == CHILD_TYPE_AOF || type == CHILD_TYPE_MODULE;\n}\n\n/* Returns true when we're inside a long command that yielded to the event loop. */\nint isInsideYieldingLongCommand(void) {\n    return scriptIsTimedout() || server.busy_module_yield_flags;\n}\n\n/* Return true if this instance has persistence completely turned off:\n * both RDB and AOF are disabled. 
*/\nint allPersistenceDisabled(void) {\n    return server.saveparamslen == 0 && server.aof_state == AOF_OFF;\n}\n\n/* ======================= Cron: called every 100 ms ======================== */\n\n/* Add a sample to the instantaneous metric. This function computes the quotient\n * of the increment of value and base, which is useful to record operation count\n * per second, or the average time consumption of an operation. For example,\n * with the base measured in microseconds and a factor of 1000000, the recorded\n * sample is an operations-per-second rate.\n *\n * current_value - The dividend\n * current_base - The divisor\n */\nvoid trackInstantaneousMetric(int metric, long long current_value, long long current_base, long long factor) {\n    if (server.inst_metric[metric].last_sample_base > 0) {\n        long long base = current_base - server.inst_metric[metric].last_sample_base;\n        long long value = current_value - server.inst_metric[metric].last_sample_value;\n        long long avg = base > 0 ? (value * factor / base) : 0;\n        server.inst_metric[metric].samples[server.inst_metric[metric].idx] = avg;\n        server.inst_metric[metric].idx++;\n        server.inst_metric[metric].idx %= STATS_METRIC_SAMPLES;\n    }\n    server.inst_metric[metric].last_sample_base = current_base;\n    server.inst_metric[metric].last_sample_value = current_value;\n}\n\n/* Return the mean of all the samples. */\nlong long getInstantaneousMetric(int metric) {\n    int j;\n    long long sum = 0;\n\n    for (j = 0; j < STATS_METRIC_SAMPLES; j++)\n        sum += server.inst_metric[metric].samples[j];\n    return sum / STATS_METRIC_SAMPLES;\n}\n\n/* The client query buffer is an sds.c string that can end with a lot of\n * unused free space; this function reclaims that space if needed.\n *\n * The function always returns 0 as it never terminates the client. */\nint clientsCronResizeQueryBuffer(client *c) {\n    /* If the client query buffer is NULL, it is using the reusable query buffer and there is nothing to do. 
*/\n    if (c->querybuf == NULL) return 0;\n    size_t querybuf_size = sdsalloc(c->querybuf);\n    time_t idletime = server.unixtime - c->lastinteraction;\n\n    /* Only resize the query buffer if the buffer is actually wasting at least a\n     * few kbytes */\n    if (sdsavail(c->querybuf) > 1024*4) {\n        /* There are two conditions to resize the query buffer: */\n        if (idletime > 2) {\n            /* 1) Query is idle for a long time. */\n            size_t remaining = sdslen(c->querybuf) - c->qb_pos;\n            if (!(c->flags & CLIENT_MASTER) && !remaining) {\n                /* If the client is not a master and no data is pending,\n                 * the client can safely use the reusable query buffer in the next read - free the client's querybuf. */\n                sdsfree(c->querybuf);\n                /* By setting the querybuf to NULL, the client will use the reusable query buffer in the next read.\n                 * We don't move the client to the reusable query buffer immediately, because if we allocated a private\n                 * query buffer for the client, it's likely that the client will use it again soon. */\n                c->querybuf = NULL;\n            } else {\n                c->querybuf = sdsRemoveFreeSpace(c->querybuf, 1);\n            }\n        } else if (querybuf_size > PROTO_RESIZE_THRESHOLD && querybuf_size/2 > c->querybuf_peak) {\n            /* 2) Query buffer is too big for latest peak and is larger than\n             *    resize threshold. Trim excess space but only up to a limit,\n             *    not below the recent peak and current c->querybuf (which will\n             *    soon be used). If we're in the middle of a bulk then make\n             *    sure not to resize to less than the bulk length. 
*/\n            size_t resize = sdslen(c->querybuf);\n            if (resize < c->querybuf_peak) resize = c->querybuf_peak;\n            if (c->bulklen != -1 && resize < (size_t)c->bulklen + 2) resize = c->bulklen + 2;\n            c->querybuf = sdsResize(c->querybuf, resize, 1);\n        }\n    }\n\n    /* Reset the peak again to capture the peak memory usage in the next\n     * cycle. */\n    c->querybuf_peak = c->querybuf ? sdslen(c->querybuf) : 0;\n    /* We reset to either the currently used size, or the currently processed\n     * bulk size, whichever is bigger. */\n    if (c->bulklen != -1 && (size_t)c->bulklen + 2 > c->querybuf_peak) c->querybuf_peak = c->bulklen + 2;\n    return 0;\n}\n\n/* The client output buffer can be adjusted to better fit the memory requirements.\n *\n * The logic is:\n * - if the last observed peak size of the buffer equals the buffer size, we double the size.\n * - if the last observed peak size of the buffer is less than half the buffer size, we shrink it by half.\n * The buffer peak is reset back to the buffer position every server.reply_buffer_peak_reset_time milliseconds.\n * The function always returns 0 as it never terminates the client. */\nint clientsCronResizeOutputBuffer(client *c, mstime_t now_ms) {\n\n    size_t new_buffer_size = 0;\n    char *oldbuf = NULL;\n    const size_t buffer_target_shrink_size = c->buf_usable_size/2;\n    const size_t buffer_target_expand_size = c->buf_usable_size*2;\n\n    /* in case the resizing is disabled return immediately */\n    if(!server.reply_buffer_resizing_enabled)\n        return 0;\n\n    /* Don't resize encoded buffers. When buf is encoded, we track the last\n     * partially written payloadHeader pointer, so we can't\n     * reallocate the buffer as it would invalidate this pointer. 
*/\n    if (c->buf_encoded) return 0;\n\n    if (buffer_target_shrink_size >= PROTO_REPLY_MIN_BYTES &&\n        c->buf_peak < buffer_target_shrink_size )\n    {\n        new_buffer_size = max(PROTO_REPLY_MIN_BYTES,c->buf_peak+1);\n        server.stat_reply_buffer_shrinks++;\n    } else if (buffer_target_expand_size < PROTO_REPLY_CHUNK_BYTES*2 &&\n        c->buf_peak == c->buf_usable_size)\n    {\n        new_buffer_size = min(PROTO_REPLY_CHUNK_BYTES,buffer_target_expand_size);\n        server.stat_reply_buffer_expands++;\n    }\n\n    serverAssertWithInfo(c, NULL, (!new_buffer_size) || (new_buffer_size >= (size_t)c->bufpos));\n\n    /* Reset the peak value every server.reply_buffer_peak_reset_time milliseconds.\n     * If the client stays idle, the buffer will start to shrink.\n     */\n    if (server.reply_buffer_peak_reset_time >=0 &&\n        now_ms - c->buf_peak_last_reset_time >= server.reply_buffer_peak_reset_time)\n    {\n        c->buf_peak = c->bufpos;\n        c->buf_peak_last_reset_time = now_ms;\n    }\n\n    if (new_buffer_size) {\n        oldbuf = c->buf;\n        c->buf = zmalloc_usable(new_buffer_size, &c->buf_usable_size);\n        memcpy(c->buf,oldbuf,c->bufpos);\n        zfree(oldbuf);\n    }\n    return 0;\n}\n\n/* This function is used in order to track clients using the biggest amount\n * of memory in the latest few seconds. This way we can provide such information\n * in the INFO output (clients section), without having to do an O(N) scan for\n * all the clients.\n *\n * This is how it works. We have an array of CLIENTS_PEAK_MEM_USAGE_SLOTS slots\n * where we track, for each, the biggest client output and input buffers we\n * saw in that slot. Every slot corresponds to one of the latest seconds, since\n * the array is indexed by doing UNIXTIME % CLIENTS_PEAK_MEM_USAGE_SLOTS.\n *\n * When we want to know what was recently the peak memory usage, we simply scan\n * these few slots searching for the maximum value. 
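 *\n * As a hypothetical worked example (the timestamp is assumed purely for\n * illustration): with 8 slots and server.unixtime == 1700000123, the current\n * slot is 1700000123 % 8 == 3, so samples taken now update slot 3 only, and\n * scanning all 8 slots yields the peak observed over roughly the last 8\n * seconds.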
*/\n#define CLIENTS_PEAK_MEM_USAGE_SLOTS 8\nsize_t ClientsPeakMemInput[CLIENTS_PEAK_MEM_USAGE_SLOTS] = {0};\nsize_t ClientsPeakMemOutput[CLIENTS_PEAK_MEM_USAGE_SLOTS] = {0};\nint CurrentPeakMemUsageSlot = 0;\n\nint clientsCronTrackExpansiveClients(client *c) {\n    size_t qb_size = c->querybuf ? sdsZmallocSize(c->querybuf) : 0;\n    size_t argv_size = c->argv ? zmalloc_size(c->argv) : 0;\n    size_t in_usage = qb_size + c->all_argv_len_sum + argv_size;\n    size_t out_usage = getClientOutputBufferMemoryUsage(c);\n\n    /* Track the biggest values observed so far in this slot. */\n    if (in_usage > ClientsPeakMemInput[CurrentPeakMemUsageSlot])\n        ClientsPeakMemInput[CurrentPeakMemUsageSlot] = in_usage;\n    if (out_usage > ClientsPeakMemOutput[CurrentPeakMemUsageSlot])\n        ClientsPeakMemOutput[CurrentPeakMemUsageSlot] = out_usage;\n\n    return 0; /* This function never terminates the client. */\n}\n\n/* All normal clients are placed in one of the \"mem usage buckets\" according\n * to how much memory they currently use. We use this function to find the\n * appropriate bucket based on a given memory usage value. The algorithm simply\n * does a log2(mem) to get the bucket. This means, for example, that if a\n * client's memory usage doubles it's moved up to the next bucket, and if it's\n * halved we move it down a bucket. For instance, a usage of 1500 bytes needs\n * 11 bits, so (before clamping) it falls in the [1024, 2048) bucket.\n * For more details see CLIENT_MEM_USAGE_BUCKETS documentation in server.h. */\nstatic inline clientMemUsageBucket *getMemUsageBucket(size_t mem) {\n    int size_in_bits = 8*(int)sizeof(mem);\n    int clz = mem > 0 ? 
__builtin_clzl(mem) : size_in_bits;\n    int bucket_idx = size_in_bits - clz;\n    if (bucket_idx > CLIENT_MEM_USAGE_BUCKET_MAX_LOG)\n        bucket_idx = CLIENT_MEM_USAGE_BUCKET_MAX_LOG;\n    else if (bucket_idx < CLIENT_MEM_USAGE_BUCKET_MIN_LOG)\n        bucket_idx = CLIENT_MEM_USAGE_BUCKET_MIN_LOG;\n    bucket_idx -= CLIENT_MEM_USAGE_BUCKET_MIN_LOG;\n    return &server.client_mem_usage_buckets[bucket_idx];\n}\n\n/*\n * This method updates the client memory usage and updates the\n * server stats for the client type.\n *\n * This method is called from clientsCron to keep the stats updated\n * for non CLIENT_TYPE_NORMAL/PUBSUB clients, to accurately\n * provide information around clients' memory usage.\n *\n * It is also used in updateClientMemUsageAndBucket to have the latest\n * client memory usage information to place it into the appropriate client memory\n * usage bucket.\n */\nvoid updateClientMemoryUsage(client *c) {\n    serverAssert(c->conn);\n    size_t mem = getClientMemoryUsage(c, NULL);\n    int type = getClientType(c);\n    /* Now that we have the memory used by the client, remove the old\n     * value from the old category, and add it back. */\n    server.stat_clients_type_memory[c->last_memory_type] -= c->last_memory_usage;\n    server.stat_clients_type_memory[type] += mem;\n    /* Remember what we added and where, to remove it next time. */\n    c->last_memory_type = type;\n    c->last_memory_usage = mem;\n}\n\nint clientEvictionAllowed(client *c) {\n    if (server.maxmemory_clients == 0 || c->flags & CLIENT_NO_EVICT || !c->conn) {\n        return 0;\n    }\n    int type = getClientType(c);\n    return (type == CLIENT_TYPE_NORMAL || type == CLIENT_TYPE_PUBSUB);\n}\n\n\n/* This function is used to clean up the client's previously tracked memory usage.\n * This is called during incremental client memory usage tracking, and is also\n * used to remove the client from its bucket when bucket allocation is not\n * required because client eviction is disabled. 
*/\nvoid removeClientFromMemUsageBucket(client *c, int allow_eviction) {\n    if (c->mem_usage_bucket) {\n        c->mem_usage_bucket->mem_usage_sum -= c->last_memory_usage;\n        /* If this client can't be evicted then remove it from the mem usage\n         * buckets */\n        if (!allow_eviction) {\n            listDelNode(c->mem_usage_bucket->clients, c->mem_usage_bucket_node);\n            c->mem_usage_bucket = NULL;\n            c->mem_usage_bucket_node = NULL;\n        }\n    }\n}\n\n/* This is called on clients when something changed their buffers,\n * so we can track clients' memory and enforce clients' maxmemory in real time.\n *\n * This also adds the client to the correct memory usage bucket. Each bucket contains\n * all clients with roughly the same amount of memory. This way we group\n * together clients consuming about the same amount of memory and can quickly\n * free them in case we reach maxmemory-clients (client eviction).\n *\n * Note: This function filters clients of type no-evict, master or replica regardless\n * of whether the eviction is enabled or not, so the memory usage we get from these\n * types of clients via the INFO command may be out of date.\n *\n * returns 1 if client eviction for this client is allowed, 0 otherwise.\n */\nint updateClientMemUsageAndBucket(client *c) {\n    /* The unlikely case this function was called from a thread different\n     * than the main one is a module call from a spawned thread. This is safe\n     * since this call must have been made after calling\n     * RedisModule_ThreadSafeContextLock i.e. the module is holding the GIL. In\n     * that special case we assert that at least the updated client's\n     * running_tid is the main thread. The true main thread is allowed to call\n     * this function on clients handled by IO-threads as it makes sure the\n     * IO-threads are paused, e.g. see clientsCron() and evictClients(). 
*/\n    serverAssert((pthread_equal(pthread_self(), server.main_thread_id) ||\n                  c->running_tid == IOTHREAD_MAIN_THREAD_ID) && c->conn);\n    int allow_eviction = clientEvictionAllowed(c);\n    removeClientFromMemUsageBucket(c, allow_eviction);\n\n    if (!allow_eviction) {\n        return 0;\n    }\n\n    /* Update client memory usage. */\n    updateClientMemoryUsage(c);\n\n    /* Update the client in the mem usage buckets */\n    clientMemUsageBucket *bucket = getMemUsageBucket(c->last_memory_usage);\n    bucket->mem_usage_sum += c->last_memory_usage;\n    if (bucket != c->mem_usage_bucket) {\n        if (c->mem_usage_bucket)\n            listDelNode(c->mem_usage_bucket->clients,\n                        c->mem_usage_bucket_node);\n        c->mem_usage_bucket = bucket;\n        listAddNodeTail(bucket->clients, c);\n        c->mem_usage_bucket_node = listLast(bucket->clients);\n    }\n    return 1;\n}\n\n/* Return the max samples in the memory usage of clients tracked by\n * the function clientsCronTrackExpansiveClients(). */\nvoid getExpansiveClientsInfo(size_t *in_usage, size_t *out_usage) {\n    size_t i = 0, o = 0;\n    for (int j = 0; j < CLIENTS_PEAK_MEM_USAGE_SLOTS; j++) {\n        if (ClientsPeakMemInput[j] > i) i = ClientsPeakMemInput[j];\n        if (ClientsPeakMemOutput[j] > o) o = ClientsPeakMemOutput[j];\n    }\n    *in_usage = i;\n    *out_usage = o;\n}\n\n/* Run cron tasks for a single client. Return 1 if the client should\n * be terminated, 0 otherwise. */\nint clientsCronRunClient(client *c) {\n    mstime_t now = server.mstime;\n    /* The following functions do different service checks on the client.\n     * The protocol is that they return non-zero if the client was\n     * terminated. 
*/\n    if (clientsCronHandleTimeout(c,now)) return 1;\n    if (clientsCronResizeQueryBuffer(c)) return 1;\n    if (clientsCronResizeOutputBuffer(c,now)) return 1;\n\n    if (clientsCronTrackExpansiveClients(c)) return 1;\n\n    /* Iterating all the clients in getMemoryOverheadData() is too slow and\n     * in turn would make the INFO command too slow. So we perform this\n     * computation incrementally and track the (not instantaneous but updated\n     * to the second) total memory used by clients using clientsCron() in\n     * a more incremental way (depending on server.hz).\n     * If client eviction is enabled, update the bucket as well. */\n    if (!updateClientMemUsageAndBucket(c))\n        updateClientMemoryUsage(c);\n\n    if (closeClientOnOutputBufferLimitReached(c, 0)) return 1;\n    return 0;\n}\n\n/* Periodic maintenance for the pending command pool.\n * This function should be called from serverCron to manage pool size based on utilization patterns. */\nvoid pendingCommandPoolCron(void) {\n    /* Only shrink pool when IO threads are not active */\n    if (server.io_threads_active) return;\n\n    /* Calculate utilization rate based on minimum pool size reached */\n    if (server.cmd_pool.capacity > PENDING_COMMAND_POOL_SIZE) {\n        /* If utilization is below threshold, shrink the pool */\n        double utilization_ratio = 1.0 - (double)server.cmd_pool.min_size / server.cmd_pool.capacity;\n        if (utilization_ratio < 0.5)\n            shrinkPendingCommandPool();\n    }\n\n    /* Reset tracking for next interval */\n    server.cmd_pool.min_size = server.cmd_pool.size; /* Reset to current size */\n}\n\n/* This function is called by serverCron() and is used in order to perform\n * operations on clients that are important to perform constantly. 
For instance\n * we use this function in order to disconnect clients after a timeout, including\n * clients blocked in some blocking command with a non-zero timeout.\n *\n * The function makes some effort to process all the clients every second, even\n * if this cannot be strictly guaranteed, since serverCron() may be called with\n * an actual frequency lower than server.hz in case of latency events like slow\n * commands.\n *\n * It is very important for this function, and the functions it calls, to be\n * very fast: sometimes Redis has tens of thousands of connected clients, and the\n * default server.hz value is 10, so sometimes here we need to process thousands\n * of clients per second, turning this function into a source of latency.\n */\nvoid clientsCron(void) {\n    /* Try to process at least numclients/server.hz of clients\n     * per call. Since normally (if there are no big latency events) this\n     * function is called server.hz times per second, in the average case we\n     * process all the clients in 1 second. */\n    int numclients = listLength(server.clients);\n    int iterations = numclients/server.hz;\n\n    /* Process at least a few clients while we are at it, even if we need\n     * to process less than CLIENTS_CRON_MIN_ITERATIONS to meet our contract\n     * of processing each client once per second. 
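     *\n     * As a hypothetical worked example (numbers assumed for illustration):\n     * with 5000 connected clients and server.hz == 10, each call processes\n     * 5000 / 10 == 500 clients, so across 10 calls in a second every client\n     * is visited about once.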
*/\n    if (iterations < CLIENTS_CRON_MIN_ITERATIONS)\n        iterations = (numclients < CLIENTS_CRON_MIN_ITERATIONS) ?\n                     numclients : CLIENTS_CRON_MIN_ITERATIONS;\n\n\n    CurrentPeakMemUsageSlot = server.unixtime % CLIENTS_PEAK_MEM_USAGE_SLOTS;\n    /* Always zero the next sample, so that when we switch to that second, we'll\n     * only register samples that are greater in that second without considering\n     * the history of such slot.\n     *\n     * Note: our index may jump to any random position if serverCron() is not\n     * called for some reason with the normal frequency, for instance because\n     * some slow command is called taking multiple seconds to execute. In that\n     * case our array may end up containing data which is potentially older\n     * than CLIENTS_PEAK_MEM_USAGE_SLOTS seconds: however this is not a problem\n     * since here we just want to track if \"recently\" there were very expansive\n     * clients from the POV of memory usage. */\n    int zeroidx = (CurrentPeakMemUsageSlot+1) % CLIENTS_PEAK_MEM_USAGE_SLOTS;\n    ClientsPeakMemInput[zeroidx] = 0;\n    ClientsPeakMemOutput[zeroidx] = 0;\n\n    while(listLength(server.clients) && iterations--) {\n        client *c;\n        listNode *head;\n\n        /* Take the current head, process, and then rotate the head to tail.\n         * This way we can fairly iterate all clients step by step. */\n        head = listFirst(server.clients);\n        c = listNodeValue(head);\n        listRotateHeadToTail(server.clients);\n\n        /* Clients handled by IO threads will be processed by IOThreadClientsCron. */\n        if (c->tid != IOTHREAD_MAIN_THREAD_ID) continue;\n\n        clientsCronRunClient(c);\n    }\n}\n\n/* This function handles 'background' operations we are required to do\n * incrementally in Redis databases, such as active key expiring, resizing,\n * rehashing. */\nvoid databasesCron(void) {\n    /* Expire keys by random sampling. 
Not required for slaves\n     * as master will synthesize DELs for us. */\n    if (server.active_expire_enabled) {\n        if (iAmMaster()) {\n            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);\n        } else {\n            expireSlaveKeys();\n        }\n    }\n\n    /* Defrag keys gradually. */\n    activeDefragCycle();\n\n    /* Handle active-trim */\n    if (server.cluster_enabled)\n        asmActiveTrimCycle();\n\n    /* Perform hash tables rehashing if needed, but only if there are no\n     * other processes saving the DB on disk. Otherwise rehashing is bad\n     * as it will cause a lot of copy-on-write of memory pages. */\n    if (!hasActiveChildProcess()) {\n        /* We use global counters so if we stop the computation at a given\n         * DB we'll be able to resume from the next one in the next\n         * cron loop iteration. */\n        static unsigned int resize_db = 0;\n        static unsigned int rehash_db = 0;\n        int dbs_per_call = CRON_DBS_PER_CALL;\n        int j;\n\n        /* Don't test more DBs than we have. 
*/\n        if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;\n\n        for (j = 0; j < dbs_per_call; j++) {\n            redisDb *db = &server.db[resize_db % server.dbnum];\n            kvstoreTryResizeDicts(db->keys, CRON_DICTS_PER_DB);\n            kvstoreTryResizeDicts(db->expires, CRON_DICTS_PER_DB);\n            resize_db++;\n        }\n\n        /* Rehash */\n        if (server.activerehashing) {\n            uint64_t elapsed_us = 0;\n            for (j = 0; j < dbs_per_call; j++) {\n                redisDb *db = &server.db[rehash_db % server.dbnum];\n                elapsed_us += kvstoreIncrementallyRehash(db->keys, INCREMENTAL_REHASHING_THRESHOLD_US - elapsed_us);\n                if (elapsed_us >= INCREMENTAL_REHASHING_THRESHOLD_US)\n                    break;\n                elapsed_us += kvstoreIncrementallyRehash(db->expires, INCREMENTAL_REHASHING_THRESHOLD_US - elapsed_us);\n                if (elapsed_us >= INCREMENTAL_REHASHING_THRESHOLD_US)\n                    break;\n                rehash_db++;\n            }\n        }\n    }\n}\n\nstatic inline void updateCachedTimeWithUs(int update_daylight_info, const long long ustime) {\n    server.ustime = ustime;\n    server.mstime = server.ustime / 1000;\n    time_t unixtime = server.mstime / 1000;\n    atomicSet(server.unixtime, unixtime);\n\n    /* To get information about daylight saving time, we need to call\n     * localtime_r and cache the result. However calling localtime_r in this\n     * context is safe since we will never fork() while here, in the main\n     * thread. The logging function will call a thread safe version of\n     * localtime that has no locks. 
*/\n    if (update_daylight_info) {\n        struct tm tm;\n        time_t ut = server.unixtime;\n        localtime_r(&ut,&tm);\n        atomicSet(server.daylight_active, tm.tm_isdst);\n    }\n}\n\n/* We take a cached value of the unix time in the global state because with\n * virtual memory and aging the current time would otherwise have to be stored\n * in objects at every object access, and accuracy is not needed. Accessing a\n * global var is a lot faster than calling time(NULL).\n *\n * This function should be fast because it is called at every command execution\n * in call(), so it is possible to decide whether to update the daylight saving\n * info or not using the 'update_daylight_info' argument. Normally we update\n * such info only when calling this function from serverCron() but not when\n * calling it from call(). */\nvoid updateCachedTime(int update_daylight_info) {\n    const long long us = ustime();\n    updateCachedTimeWithUs(update_daylight_info, us);\n}\n\n/* Perform the required operations in order to enter an execution unit.\n * In general, if we are already inside an execution unit then there is nothing to do,\n * otherwise we need to update cached times so the same cached time will be used all over\n * the execution unit.\n * update_cached_time - if 0, will not update the cached time even if required.\n * us - if not zero, use this time for cached time, otherwise get current time. */\nvoid enterExecutionUnit(int update_cached_time, long long us) {\n    if (server.execution_nesting++ == 0 && update_cached_time) {\n        if (us == 0) {\n            us = ustime();\n        }\n        updateCachedTimeWithUs(0, us);\n        server.cmd_time_snapshot = server.mstime;\n    }\n}\n\nvoid exitExecutionUnit(void) {\n    --server.execution_nesting;\n}\n\nvoid checkChildrenDone(void) {\n    int statloc = 0;\n    pid_t pid;\n\n    if ((pid = waitpid(-1, &statloc, WNOHANG)) != 0) {\n        int exitcode = WIFEXITED(statloc) ? 
WEXITSTATUS(statloc) : -1;\n        int bysignal = 0;\n\n        if (WIFSIGNALED(statloc)) bysignal = WTERMSIG(statloc);\n\n        /* sigKillChildHandler catches the signal and calls exit(), but we\n         * must make sure not to flag lastbgsave_status, etc incorrectly.\n         * We could directly terminate the child process via SIGUSR1\n         * without handling it */\n        if (exitcode == SERVER_CHILD_NOERROR_RETVAL) {\n            bysignal = SIGUSR1;\n            exitcode = 1;\n        }\n\n        if (pid == -1) {\n            serverLog(LL_WARNING,\"waitpid() returned an error: %s. \"\n                \"child_type: %s, child_pid = %d\",\n                strerror(errno),\n                strChildType(server.child_type),\n                (int) server.child_pid);\n        } else if (pid == server.child_pid) {\n            if (server.child_type == CHILD_TYPE_RDB) {\n                backgroundSaveDoneHandler(exitcode, bysignal);\n            } else if (server.child_type == CHILD_TYPE_AOF) {\n                backgroundRewriteDoneHandler(exitcode, bysignal);\n            } else if (server.child_type == CHILD_TYPE_MODULE) {\n                ModuleForkDoneHandler(exitcode, bysignal);\n            } else {\n                serverPanic(\"Unknown child type %d for child pid %d\", server.child_type, server.child_pid);\n                exit(1);\n            }\n            if (!bysignal && exitcode == 0) receiveChildInfo();\n            resetChildState();\n        } else {\n            if (!ldbRemoveChild(pid)) {\n                serverLog(LL_WARNING,\n                          \"Warning, detected child with unmatched pid: %ld\",\n                          (long) pid);\n            }\n        }\n\n        /* start any pending forks immediately. */\n        replicationStartPendingFork();\n    }\n}\n\n/* Record the max memory used since the server was started. 
*/\nvoid updatePeakMemory(void) {\n    size_t zmalloc_used = zmalloc_used_memory();\n    if (zmalloc_used > server.stat_peak_memory) {\n        server.stat_peak_memory = zmalloc_used;\n        server.stat_peak_memory_time = server.unixtime;\n    }\n\n    size_t zmalloc_peak = zmalloc_get_peak_memory();\n    if (zmalloc_peak > server.stat_peak_memory) {\n        server.stat_peak_memory = zmalloc_peak;\n        server.stat_peak_memory_time = zmalloc_get_peak_memory_time();\n    }\n}\n\n/* Called from serverCron to update cached memory metrics. */\nvoid cronUpdateMemoryStats(void) {\n    updatePeakMemory();\n\n    run_with_period(100) {\n        /* Sample the RSS and other metrics here since this is a relatively slow call.\n         * We must sample the zmalloc_used at the same time we take the rss, otherwise\n         * the frag ratio calculation may be off (ratio of two samples at different times) */\n        server.cron_malloc_stats.process_rss = zmalloc_get_rss();\n        server.cron_malloc_stats.zmalloc_used = zmalloc_used_memory();\n        /* Sampling the allocator info can be slow too.\n         * The fragmentation ratio it'll show is potentially more accurate as\n         * it excludes other RSS pages such as: shared libraries, LUA and other non-zmalloc\n         * allocations, and allocator reserved pages that can be purged (all not actual frag) */\n        zmalloc_get_allocator_info(1,\n                                   &server.cron_malloc_stats.allocator_allocated,\n                                   &server.cron_malloc_stats.allocator_active,\n                                   &server.cron_malloc_stats.allocator_resident,\n                                   NULL,\n                                   &server.cron_malloc_stats.allocator_muzzy,\n                                   &server.cron_malloc_stats.allocator_frag_smallbins_bytes);\n        if (server.lua_arena != UINT_MAX) {\n            
zmalloc_get_allocator_info_by_arena(server.lua_arena,\n                                                0,\n                                                &server.cron_malloc_stats.lua_allocator_allocated,\n                                                &server.cron_malloc_stats.lua_allocator_active,\n                                                &server.cron_malloc_stats.lua_allocator_resident,\n                                                &server.cron_malloc_stats.lua_allocator_frag_smallbins_bytes);\n        }\n        /* In case the allocator isn't providing these stats, fake them so that\n         * fragmentation info still shows some (inaccurate) metrics */\n        if (!server.cron_malloc_stats.allocator_resident)\n            server.cron_malloc_stats.allocator_resident = server.cron_malloc_stats.process_rss;\n        if (!server.cron_malloc_stats.allocator_active)\n            server.cron_malloc_stats.allocator_active = server.cron_malloc_stats.allocator_resident;\n        if (!server.cron_malloc_stats.allocator_allocated)\n            server.cron_malloc_stats.allocator_allocated = server.cron_malloc_stats.zmalloc_used;\n    }\n}\n\n/* This is our timer interrupt, called server.hz times per second.\n * Here is where we do a number of things that need to be done asynchronously.\n * For instance:\n *\n * - Active expired keys collection (it is also performed in a lazy way on\n *   lookup).\n * - Software watchdog.\n * - Update some statistics.\n * - Incremental rehashing of the DBs hash tables.\n * - Triggering BGSAVE / AOF rewrite, and handling of terminated children.\n * - Client timeouts of different kinds.\n * - Replication reconnection.\n * - Many more...\n *\n * Everything directly called here will be called server.hz times per second,\n * so in order to throttle execution of things we want to do less frequently\n * a macro is used: run_with_period(milliseconds) { .... 
}\n */\n\nint serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {\n    int j;\n    UNUSED(eventLoop);\n    UNUSED(id);\n    UNUSED(clientData);\n\n    /* Software watchdog: deliver the SIGALRM that will reach the signal\n     * handler if we don't return here fast enough. */\n    if (server.watchdog_period) watchdogScheduleSignal(server.watchdog_period);\n\n    server.hz = server.config_hz;\n    /* Adapt the server.hz value to the number of configured clients. If we have\n     * many clients, we want to call serverCron() with a higher frequency. */\n    if (server.dynamic_hz) {\n        while (listLength(server.clients) / server.hz >\n               MAX_CLIENTS_PER_CLOCK_TICK)\n        {\n            server.hz *= 2;\n            if (server.hz > CONFIG_MAX_HZ) {\n                server.hz = CONFIG_MAX_HZ;\n                break;\n            }\n        }\n    }\n\n    /* For debug purposes: skip actual cron work if pause_cron is on */\n    if (server.pause_cron) return 1000/server.hz;\n\n    monotime cron_start = getMonotonicUs();\n\n    run_with_period(100) {\n        long long stat_net_input_bytes, stat_net_output_bytes;\n        long long stat_net_repl_input_bytes, stat_net_repl_output_bytes;\n        atomicGet(server.stat_net_input_bytes, stat_net_input_bytes);\n        atomicGet(server.stat_net_output_bytes, stat_net_output_bytes);\n        atomicGet(server.stat_net_repl_input_bytes, stat_net_repl_input_bytes);\n        atomicGet(server.stat_net_repl_output_bytes, stat_net_repl_output_bytes);\n        monotime current_time = getMonotonicUs();\n        long long factor = 1000000;  // us\n        trackInstantaneousMetric(STATS_METRIC_COMMAND, server.stat_numcommands, current_time, factor);\n        trackInstantaneousMetric(STATS_METRIC_NET_INPUT, stat_net_input_bytes + stat_net_repl_input_bytes,\n                                 current_time, factor);\n        trackInstantaneousMetric(STATS_METRIC_NET_OUTPUT, stat_net_output_bytes + 
stat_net_repl_output_bytes,\n                                 current_time, factor);\n        trackInstantaneousMetric(STATS_METRIC_NET_INPUT_REPLICATION, stat_net_repl_input_bytes, current_time,\n                                 factor);\n        trackInstantaneousMetric(STATS_METRIC_NET_OUTPUT_REPLICATION, stat_net_repl_output_bytes,\n                                 current_time, factor);\n        trackInstantaneousMetric(STATS_METRIC_EL_CYCLE, server.duration_stats[EL_DURATION_TYPE_EL].cnt,\n                                 current_time, factor);\n        trackInstantaneousMetric(STATS_METRIC_EL_DURATION, server.duration_stats[EL_DURATION_TYPE_EL].sum,\n                                 server.duration_stats[EL_DURATION_TYPE_EL].cnt, 1);\n\n        /* Periodic cleanup of active clients sliding window to clear stale slots\n         * when no client activity occurs for extended periods */\n        statsUpdateActiveClients(NULL);\n    }\n\n    /* We have just LRU_BITS bits per object for LRU information.\n     * So we use an (eventually wrapping) LRU clock.\n     *\n     * Note that even if the counter wraps it's not a big problem,\n     * everything will still work but some objects will appear younger\n     * to Redis. However for this to happen a given object should never be\n     * touched for all the time needed for the counter to wrap, which is\n     * not likely.\n     *\n     * Note that you can change the resolution by altering the\n     * LRU_CLOCK_RESOLUTION define. */\n    server.lruclock = getLRUClock();\n\n    cronUpdateMemoryStats();\n\n    /* We received a SIGTERM or SIGINT, shutting down here in a safe way, as it is\n     * not ok doing so inside the signal handler. 
*/\n    if (shouldShutdownAsap() && !isShutdownInitiated()) {\n        int shutdownFlags = SHUTDOWN_NOFLAGS;\n        int last_sig_received;\n        atomicGet(server.last_sig_received, last_sig_received);\n        if (last_sig_received == SIGINT && server.shutdown_on_sigint)\n            shutdownFlags = server.shutdown_on_sigint;\n        else if (last_sig_received == SIGTERM && server.shutdown_on_sigterm)\n            shutdownFlags = server.shutdown_on_sigterm;\n\n        if (prepareForShutdown(shutdownFlags) == C_OK) exit(0);\n    } else if (isShutdownInitiated()) {\n        if (server.mstime >= server.shutdown_mstime || isReadyToShutdown()) {\n            if (finishShutdown() == C_OK) exit(0);\n            /* Shutdown failed. Continue running. An error has been logged. */\n        }\n    }\n\n    /* Show some info about non-empty databases */\n    if (server.verbosity <= LL_VERBOSE) {\n        run_with_period(5000) {\n            for (j = 0; j < server.dbnum; j++) {\n                long long size, used, vkeys;\n\n                size = kvstoreBuckets(server.db[j].keys);\n                used = kvstoreSize(server.db[j].keys);\n                vkeys = kvstoreSize(server.db[j].expires);\n                if (used || vkeys) {\n                    serverLog(LL_VERBOSE,\"DB %d: %lld keys (%lld volatile) in %lld slots HT.\",j,used,vkeys,size);\n                }\n            }\n        }\n    }\n\n    /* Show information about connected clients */\n    if (!server.sentinel_mode) {\n        run_with_period(5000) {\n            serverLog(LL_DEBUG,\n                \"%lu clients connected (%lu replicas), %zu bytes in use\",\n                listLength(server.clients)-listLength(server.slaves),\n                replicationLogicalReplicaCount(),\n                zmalloc_used_memory());\n        }\n    }\n\n    /* We need to do a few operations on clients asynchronously. */\n    clientsCron();\n\n    /* Handle background operations on Redis databases. 
*/\n    databasesCron();\n\n    /* Start a scheduled AOF rewrite if this was requested by the user while\n     * a BGSAVE was in progress. */\n    if (!hasActiveChildProcess() &&\n        server.aof_rewrite_scheduled &&\n        !aofRewriteLimited())\n    {\n        rewriteAppendOnlyFileBackground();\n    }\n\n    /* Check if a background saving or AOF rewrite in progress terminated. */\n    if (hasActiveChildProcess() || ldbPendingChildren())\n    {\n        run_with_period(1000) receiveChildInfo();\n        checkChildrenDone();\n    } else {\n        /* If there is no background saving/rewrite in progress, check if\n         * we have to save/rewrite now. */\n        for (j = 0; j < server.saveparamslen; j++) {\n            struct saveparam *sp = server.saveparams+j;\n\n            /* Save if we reached the given number of changes within the\n             * given number of seconds, and if the latest bgsave was\n             * successful or if, in case of an error, at least\n             * CONFIG_BGSAVE_RETRY_DELAY seconds already elapsed. */\n            if (server.dirty >= sp->changes &&\n                server.unixtime-server.lastsave > sp->seconds &&\n                (server.unixtime-server.lastbgsave_try >\n                 CONFIG_BGSAVE_RETRY_DELAY ||\n                 server.lastbgsave_status == C_OK))\n            {\n                serverLog(LL_NOTICE,\"%d changes in %d seconds. Saving...\",\n                    sp->changes, (int)sp->seconds);\n                rdbSaveInfo rsi, *rsiptr;\n                rsiptr = rdbPopulateSaveInfo(&rsi);\n                rdbSaveBackground(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE);\n                break;\n            }\n        }\n\n        /* Trigger an AOF rewrite if needed. 
*/\n        if (server.aof_state == AOF_ON &&\n            !hasActiveChildProcess() &&\n            server.aof_rewrite_perc &&\n            server.aof_current_size > server.aof_rewrite_min_size)\n        {\n            long long base = server.aof_rewrite_base_size ?\n                server.aof_rewrite_base_size : 1;\n            long long growth = (server.aof_current_size*100/base) - 100;\n            if (growth >= server.aof_rewrite_perc && !aofRewriteLimited()) {\n                serverLog(LL_NOTICE,\"Starting automatic rewriting of AOF on %lld%% growth\",growth);\n                rewriteAppendOnlyFileBackground();\n            }\n        }\n    }\n    /* Just for the sake of defensive programming, to avoid forgetting to\n     * call this function when needed. */\n    updateDictResizePolicy();\n\n    /* AOF postponed flush: Try at every cron cycle if the slow fsync\n     * completed. */\n    if ((server.aof_state == AOF_ON || server.aof_state == AOF_WAIT_REWRITE) &&\n        server.aof_flush_postponed_start)\n    {\n        flushAppendOnlyFile(0);\n    }\n\n    /* AOF write errors: in this case we have a buffer to flush as well and\n     * clear the AOF error in case of success to make the DB writable again,\n     * however trying once per second is enough even if 'hz' is set to\n     * a higher frequency. */\n    run_with_period(1000) {\n        if ((server.aof_state == AOF_ON || server.aof_state == AOF_WAIT_REWRITE) &&\n            server.aof_last_write_status == C_ERR)\n        {\n            flushAppendOnlyFile(0);\n        }\n    }\n\n    /* Clear the paused actions state if needed. */\n    updatePausedActions();\n\n    /* Replication cron function -- used to reconnect to master,\n     * detect transfer failures, start background RDB transfers and so forth.\n     *\n     * If Redis is trying to failover then run the replication cron faster so\n     * progress on the handshake happens more quickly. 
*/\n    if (server.failover_state != NO_FAILOVER) {\n        run_with_period(100) replicationCron();\n    } else {\n        run_with_period(1000) replicationCron();\n    }\n\n    /* Run the Redis Cluster cron. */\n    run_with_period(100) {\n        if (server.cluster_enabled) {\n            clusterCron();\n            asmCron();\n        }\n    }\n\n    /* Run the Sentinel timer if we are in sentinel mode. */\n    if (server.sentinel_mode) sentinelTimer();\n\n    /* Cleanup expired MIGRATE cached sockets. */\n    run_with_period(1000) {\n        migrateCloseTimedoutSockets();\n    }\n\n    /* Cleanup expired IDMP entries from tracked streams */\n    run_with_period(1000) {\n        handleExpiredIdmpEntries();\n    }\n\n    /* Periodically shrink pending command reuse pool */\n    run_with_period(2000) {\n        pendingCommandPoolCron();\n    }\n\n    /* Resize tracking keys table if needed. This is also done at every\n     * command execution, but we want to be sure that if the last command\n     * executed changes the value via CONFIG SET, the server will perform\n     * the operation even if completely idle. */\n    if (server.tracking_clients) trackingLimitUsedSlots();\n\n    /* Check if hotkey tracking duration has expired and auto-stop if needed */\n    if (server.hotkeys && server.hotkeys->active && server.hotkeys->duration > 0) {\n        mstime_t elapsed = (server.mstime - server.hotkeys->start);\n        if (elapsed >= server.hotkeys->duration) {\n            server.hotkeys->active = 0;\n            server.hotkeys->duration = elapsed;\n        }\n    }\n\n    /* Start a scheduled BGSAVE if the corresponding flag is set. This is\n     * useful when we are forced to postpone a BGSAVE because an AOF\n     * rewrite is in progress.\n     *\n     * Note: this code must be after the replicationCron() call above so\n     * make sure when refactoring this file to keep this order. 
This is useful\n     * because we want to give priority to RDB savings for replication. */\n    if (!hasActiveChildProcess() &&\n        server.rdb_bgsave_scheduled &&\n        (server.unixtime-server.lastbgsave_try > CONFIG_BGSAVE_RETRY_DELAY ||\n         server.lastbgsave_status == C_OK))\n    {\n        rdbSaveInfo rsi, *rsiptr;\n        rsiptr = rdbPopulateSaveInfo(&rsi);\n        if (rdbSaveBackground(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_NONE) == C_OK)\n            server.rdb_bgsave_scheduled = 0;\n    }\n\n    run_with_period(100) {\n        if (moduleCount()) modulesCron();\n    }\n\n    /* Fire the cron loop modules event. */\n    RedisModuleCronLoopV1 ei = {REDISMODULE_CRON_LOOP_VERSION,server.hz};\n    moduleFireServerEvent(REDISMODULE_EVENT_CRON_LOOP,\n                          0,\n                          &ei);\n\n    server.cronloops++;\n\n    server.el_cron_duration = getMonotonicUs() - cron_start;\n\n    return 1000/server.hz;\n}\n\n\nvoid blockingOperationStarts(void) {\n    if(!server.blocking_op_nesting++){\n        updateCachedTime(0);\n        server.blocked_last_cron = server.mstime;\n    }\n}\n\nvoid blockingOperationEnds(void) {\n    if(!(--server.blocking_op_nesting)){\n        server.blocked_last_cron = 0;\n    }\n}\n\n/* This function fills in the role of serverCron during RDB or AOF loading, and\n * also during blocked scripts.\n * It attempts to do its duties at a similar rate as the configured server.hz,\n * and updates cronloops variable so that similarly to serverCron, the\n * run_with_period can be used. */\nvoid whileBlockedCron(void) {\n    /* Here we may want to perform some cron jobs (normally done server.hz times\n     * per second). */\n\n    /* Since this function depends on a call to blockingOperationStarts, let's\n     * make sure it was done. */\n    serverAssert(server.blocked_last_cron);\n\n    /* In case we were called too soon, leave right away. 
This way one-time\n     * jobs after the loop below don't need an if, and we don't bother to start\n     * the latency monitor if this function is called too often. */\n    if (server.blocked_last_cron >= server.mstime)\n        return;\n\n    /* Increment server.cronloops so that run_with_period works. */\n    long hz_ms = 1000 / server.hz;\n    int cronloops = (server.mstime - server.blocked_last_cron + (hz_ms - 1)) / hz_ms; /* rounding up */\n    server.blocked_last_cron += cronloops * hz_ms;\n    server.cronloops += cronloops;\n\n    mstime_t latency;\n    latencyStartMonitor(latency);\n\n    /* Only defragment during AOF loading. */\n    if (isAOFLoadingContext()) defragWhileBlocked();\n\n    /* Update memory stats during loading (excluding blocked scripts) */\n    if (server.loading) cronUpdateMemoryStats();\n\n    latencyEndMonitor(latency);\n    latencyAddSampleIfNeeded(\"while-blocked-cron\",latency);\n\n    /* We received a SIGTERM during loading, shutting down here in a safe way,\n     * as it isn't ok doing so inside the signal handler. */\n    if (shouldShutdownAsap() && server.loading) {\n        if (prepareForShutdown(SHUTDOWN_NOSAVE) == C_OK) exit(0);\n        serverLog(LL_WARNING,\"SIGTERM received but errors trying to shut down the server, check the logs for more information\");\n        atomicSet(server.shutdown_asap, 0);\n        atomicSet(server.last_sig_received, 0);\n    }\n}\n\nstatic void sendGetackToReplicas(void) {\n    robj *argv[3];\n    argv[0] = shared.replconf;\n    argv[1] = shared.getack;\n    argv[2] = shared.special_asterick; /* Unused argument. */\n    replicationFeedSlaves(server.slaves, -1, argv, 3);\n}\n\nextern int ProcessingEventsWhileBlocked;\n\n/* This function gets called every time Redis is entering the\n * main loop of the event driven library, that is, before sleeping\n * for ready file descriptors.\n *\n * Note: This function is (currently) called from two functions:\n * 1. aeMain - The main server loop\n * 2. 
processEventsWhileBlocked - Process clients during RDB/AOF load\n *\n * If it was called from processEventsWhileBlocked we don't want\n * to perform all actions (for example, we don't want to expire\n * keys), but we do need to perform some actions.\n *\n * The most important is freeClientsInAsyncFreeQueue but we also\n * call some other low-risk functions. */\nvoid beforeSleep(struct aeEventLoop *eventLoop) {\n    UNUSED(eventLoop);\n\n    updatePeakMemory();\n\n    /* Just call a subset of vital functions in case we are re-entering\n     * the event loop from processEventsWhileBlocked(). Note that in this\n     * case we keep track of the number of events we are processing, since\n     * processEventsWhileBlocked() wants to stop ASAP if there are no longer\n     * events to handle. */\n    if (ProcessingEventsWhileBlocked) {\n        uint64_t processed = 0;\n        processed += connTypeProcessPendingData(server.el);\n        if (server.aof_state == AOF_ON || server.aof_state == AOF_WAIT_REWRITE)\n            flushAppendOnlyFile(0);\n        processed += handleClientsWithPendingWrites();\n        processed += freeClientsInAsyncFreeQueue();\n\n        /* Let the clients after the blocking call be processed. */\n        processClientsOfAllIOThreads();\n        /* New connections may have been established while blocked, clients from\n         * IO thread may have replies to write, ensure they are promptly sent to\n         * IO threads. */\n        processed += sendPendingClientsToIOThreads();\n\n        server.events_processed_while_blocked += processed;\n        return;\n    }\n\n    /* Handle pending data (typically TLS); must be done before flushAppendOnlyFile. */\n    connTypeProcessPendingData(server.el);\n\n    /* If any connection type (typically TLS) still has pending unread data, don't sleep at all. */\n    int dont_sleep = connTypeHasPendingData(server.el);\n\n    /* Call the Redis Cluster before sleep function. 
Note that this function\n     * may change the state of Redis Cluster (from ok to fail or vice versa),\n     * so it's a good idea to call it before serving the unblocked clients\n     * later in this function; it must be done before blockedBeforeSleep. */\n    if (server.cluster_enabled) {\n        clusterBeforeSleep();\n        asmBeforeSleep();\n    }\n\n    /* Handle blocked clients.\n     * Must be done before flushAppendOnlyFile, in case of appendfsync=always,\n     * since the unblocked clients may write data. */\n    blockedBeforeSleep();\n\n    /* Record cron time in beforeSleep, which is the sum of active-expire, active-defrag and all other\n     * tasks done by cron and beforeSleep, but excluding read, write and AOF, that are counted by other\n     * sets of metrics. */\n    monotime cron_start_time_before_aof = getMonotonicUs();\n\n    /* Run a fast expire cycle (the called function will return\n     * ASAP if a fast cycle is not needed). */\n    if (server.active_expire_enabled && iAmMaster())\n        activeExpireCycle(ACTIVE_EXPIRE_CYCLE_FAST);\n\n    if (moduleCount()) {\n        moduleFireServerEvent(REDISMODULE_EVENT_EVENTLOOP,\n                              REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP,\n                              NULL);\n    }\n\n    /* Send all the slaves an ACK request if at least one client blocked\n     * during the previous event loop iteration. Note that we do this after\n     * processUnblockedClients(), so if there are multiple pipelined WAITs\n     * and the just unblocked WAIT gets blocked again, we don't have to wait\n     * a server cron cycle in absence of other event loop events. See #6623.\n     *\n     * We also don't send the ACKs while clients are paused, since it can\n     * increment the replication backlog; they'll be sent after the pause\n     * if we are still the master. 
*/\n    if (server.get_ack_from_slaves && !isPausedActionsWithUpdate(PAUSE_ACTION_REPLICA)) {\n        sendGetackToReplicas();\n        server.get_ack_from_slaves = 0;\n    }\n\n    /* We may have received updates from clients about their current offset. NOTE:\n     * this can't be done where the ACK is received since failover will disconnect\n     * our clients. */\n    updateFailoverStatus();\n\n    /* Since we rely on current_client to send scheduled invalidation messages\n     * we have to flush them after each command, so when we get here, the list\n     * must be empty. */\n    serverAssert(listLength(server.tracking_pending_keys) == 0);\n    serverAssert(listLength(server.pending_push_messages) == 0);\n\n    /* Send the invalidation messages to clients participating in the\n     * client side caching protocol in broadcasting (BCAST) mode. */\n    trackingBroadcastInvalidationMessages();\n\n    /* Record time consumption of AOF writing. */\n    monotime aof_start_time = getMonotonicUs();\n    /* Record cron time in beforeSleep. This does not include the time consumed by AOF writing and IO writing below. */\n    monotime duration_before_aof = aof_start_time - cron_start_time_before_aof;\n    /* Record the fsync'd offset before flushAppendOnly */\n    long long prev_fsynced_reploff = server.fsynced_reploff;\n\n    /* Write the AOF buffer on disk,\n     * must be done before handleClientsWithPendingWrites and\n     * sendPendingClientsToIOThreads, in case of appendfsync=always. */\n    if (server.aof_state == AOF_ON || server.aof_state == AOF_WAIT_REWRITE)\n        flushAppendOnlyFile(0);\n\n    /* Record time consumption of AOF writing. */\n    durationAddSample(EL_DURATION_TYPE_AOF, getMonotonicUs() - aof_start_time);\n\n    /* Update the fsynced replica offset.\n     * If an initial rewrite is in progress then not all data is guaranteed to have actually been\n     * persisted to disk yet, so we cannot update the field. 
We will wait for the rewrite to complete. */\n    if (server.aof_state == AOF_ON && server.fsynced_reploff != -1) {\n        long long fsynced_reploff_pending;\n        atomicGet(server.fsynced_reploff_pending, fsynced_reploff_pending);\n        server.fsynced_reploff = fsynced_reploff_pending;\n\n        /* If we have blocked [WAIT]AOF clients, and fsynced_reploff changed, we want to try to\n         * wake them up ASAP. */\n        if (listLength(server.clients_waiting_acks) && prev_fsynced_reploff != server.fsynced_reploff)\n            dont_sleep = 1;\n    }\n\n    if (server.io_threads_num > 1) {\n        /* Corresponding to IOThreadBeforeSleep, process the clients from IO threads\n         * without notification. */\n        if (processClientsOfAllIOThreads() > 0) {\n            /* If there are clients that are processed, it means the IO thread is busy\n             * transferring clients to the main thread, so the main thread does not sleep. */\n            dont_sleep = 1;\n        }\n        if (!dont_sleep) {\n            atomicSetWithSync(server.running, 0); /* Not running if going to sleep. */\n            /* Try to process the clients from IO threads again, since before setting running\n             * to 0, some clients may be transferred without notification. */\n            processClientsOfAllIOThreads();\n        }\n    }\n\n    /* Detect cycles with client input processing.\n     * Compare and refresh the snapshot here (not in afterSleep()) so IO-thread updates during aeApiPoll() are not missed.\n     * Run this before dispatching new IO-thread work. 
*/\n    if (!ProcessingEventsWhileBlocked) {\n        long long total_client_process_input_buff_events;\n        atomicGet(server.stat_total_client_process_input_buff_events, total_client_process_input_buff_events);\n        if (stat_prev_total_client_process_input_buff_events != total_client_process_input_buff_events)\n            server.stat_eventloop_cycles_with_clients_input_buff_processing++;\n        stat_prev_total_client_process_input_buff_events = total_client_process_input_buff_events;\n    }\n\n    /* Handle writes with pending output buffers. */\n    handleClientsWithPendingWrites();\n\n    /* Check if IO thread replicas have any pending reads or writes and send them\n     * back to their threads if so. */\n    putReplicasInPendingClientsToIOThreads();\n\n    /* Let IO threads handle their pending clients. */\n    sendPendingClientsToIOThreads();\n\n    /* Record cron time in beforeSleep. This does not include the time consumed by AOF writing and IO writing above. */\n    monotime cron_start_time_after_write = getMonotonicUs();\n\n    /* Close clients that need to be closed asynchronously. */\n    freeClientsInAsyncFreeQueue();\n\n    /* Incrementally trim the replication backlog at 10 times the normal speed\n     * to free the replication backlog as much as possible. */\n    if (server.repl_backlog)\n        incrementalTrimReplicationBacklog(10*REPL_BACKLOG_TRIM_BLOCKS_PER_CALL);\n\n    /* Disconnect some clients if they are consuming too much memory. */\n    evictClients();\n\n    /* Record cron time in beforeSleep. */\n    monotime duration_after_write = getMonotonicUs() - cron_start_time_after_write;\n\n    /* Record eventloop latency. 
*/\n    if (server.el_start > 0) {\n        monotime el_duration = getMonotonicUs() - server.el_start;\n        durationAddSample(EL_DURATION_TYPE_EL, el_duration);\n    }\n    server.el_cron_duration += duration_before_aof + duration_after_write;\n    durationAddSample(EL_DURATION_TYPE_CRON, server.el_cron_duration);\n    server.el_cron_duration = 0;\n    /* Record max command count per cycle. */\n    if (server.stat_numcommands > server.el_cmd_cnt_start) {\n        long long el_command_cnt = server.stat_numcommands - server.el_cmd_cnt_start;\n        if (el_command_cnt > server.el_cmd_cnt_max) {\n            server.el_cmd_cnt_max = el_command_cnt;\n        }\n    }\n\n    /* Don't sleep at all before the next beforeSleep() if needed (e.g. a\n     * connection has pending data) */\n    aeSetDontWait(server.el, dont_sleep);\n\n    /* Before we are going to sleep, let the threads access the dataset by\n     * releasing the GIL. Redis main thread will not touch anything at this\n     * time. */\n    if (moduleCount()) moduleReleaseGIL();\n    /********************* WARNING ********************\n     * Do NOT add anything below moduleReleaseGIL !!! *\n     ***************************** ********************/\n}\n\n/* This function is called immediately after the event loop multiplexing\n * API returned, and the control is going to soon return to Redis by invoking\n * the different events callbacks. */\nvoid afterSleep(struct aeEventLoop *eventLoop) {\n    UNUSED(eventLoop);\n    /********************* WARNING ********************\n     * Do NOT add anything above moduleAcquireGIL !!! *\n     ***************************** ********************/\n    if (!ProcessingEventsWhileBlocked) {\n        /* Acquire the modules GIL so that their threads won't touch anything. 
*/\n        if (moduleCount()) {\n            mstime_t latency;\n            latencyStartMonitor(latency);\n\n            atomicSet(server.module_gil_acquring, 1);\n            moduleAcquireGIL();\n            atomicSet(server.module_gil_acquring, 0);\n            moduleFireServerEvent(REDISMODULE_EVENT_EVENTLOOP,\n                                  REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP,\n                                  NULL);\n            latencyEndMonitor(latency);\n            latencyAddSampleIfNeeded(\"module-acquire-GIL\",latency);\n        }\n        /* Set the eventloop start time. */\n        server.el_start = getMonotonicUs();\n        /* Set the eventloop command count at start. */\n        server.el_cmd_cnt_start = server.stat_numcommands;\n    }\n\n    /* Set running after waking up */\n    if (server.io_threads_num > 1) atomicSetWithSync(server.running, 1);\n\n    /* Update the time cache. */\n    updateCachedTime(1);\n\n    /* Update command time snapshot in case it'll be required without a command\n     * e.g. somehow used by module timers. Don't update it while yielding to a\n     * blocked command, call() will handle that and restore the original time. 
*/\n    if (!ProcessingEventsWhileBlocked) {\n        server.cmd_time_snapshot = server.mstime;\n    }\n}\n\n/* =========================== Server initialization ======================== */\n\nvoid createSharedObjects(void) {\n    int j;\n\n    /* Shared command responses */\n    shared.ok = createObject(OBJ_STRING,sdsnew(\"+OK\\r\\n\"));\n    shared.emptybulk = createObject(OBJ_STRING,sdsnew(\"$0\\r\\n\\r\\n\"));\n    shared.czero = createObject(OBJ_STRING,sdsnew(\":0\\r\\n\"));\n    shared.cone = createObject(OBJ_STRING,sdsnew(\":1\\r\\n\"));\n    shared.emptyarray = createObject(OBJ_STRING,sdsnew(\"*0\\r\\n\"));\n    shared.pong = createObject(OBJ_STRING,sdsnew(\"+PONG\\r\\n\"));\n    shared.queued = createObject(OBJ_STRING,sdsnew(\"+QUEUED\\r\\n\"));\n    shared.emptyscan = createObject(OBJ_STRING,sdsnew(\"*2\\r\\n$1\\r\\n0\\r\\n*0\\r\\n\"));\n    shared.space = createObject(OBJ_STRING,sdsnew(\" \"));\n    shared.plus = createObject(OBJ_STRING,sdsnew(\"+\"));\n\n    /* Shared command error responses */\n    shared.wrongtypeerr = createObject(OBJ_STRING,sdsnew(\n        \"-WRONGTYPE Operation against a key holding the wrong kind of value\\r\\n\"));\n    shared.err = createObject(OBJ_STRING,sdsnew(\"-ERR\\r\\n\"));\n    shared.nokeyerr = createObject(OBJ_STRING,sdsnew(\n        \"-ERR no such key\\r\\n\"));\n    shared.syntaxerr = createObject(OBJ_STRING,sdsnew(\n        \"-ERR syntax error\\r\\n\"));\n    shared.sameobjecterr = createObject(OBJ_STRING,sdsnew(\n        \"-ERR source and destination objects are the same\\r\\n\"));\n    shared.outofrangeerr = createObject(OBJ_STRING,sdsnew(\n        \"-ERR index out of range\\r\\n\"));\n    shared.noscripterr = createObject(OBJ_STRING,sdsnew(\n        \"-NOSCRIPT No matching script. 
Please use EVAL.\\r\\n\"));\n    shared.loadingerr = createObject(OBJ_STRING,sdsnew(\n        \"-LOADING Redis is loading the dataset in memory\\r\\n\"));\n    shared.slowevalerr = createObject(OBJ_STRING,sdsnew(\n        \"-BUSY Redis is busy running a script. You can only call SCRIPT KILL or SHUTDOWN NOSAVE.\\r\\n\"));\n    shared.slowscripterr = createObject(OBJ_STRING,sdsnew(\n        \"-BUSY Redis is busy running a script. You can only call FUNCTION KILL or SHUTDOWN NOSAVE.\\r\\n\"));\n    shared.slowmoduleerr = createObject(OBJ_STRING,sdsnew(\n        \"-BUSY Redis is busy running a module command.\\r\\n\"));\n    shared.masterdownerr = createObject(OBJ_STRING,sdsnew(\n        \"-MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.\\r\\n\"));\n    shared.bgsaveerr = createObject(OBJ_STRING,sdsnew(\n        \"-MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). 
Please check the Redis logs for details about the RDB error.\\r\\n\"));\n    shared.roslaveerr = createObject(OBJ_STRING,sdsnew(\n        \"-READONLY You can't write against a read only replica.\\r\\n\"));\n    shared.noautherr = createObject(OBJ_STRING,sdsnew(\n        \"-NOAUTH Authentication required.\\r\\n\"));\n    shared.oomerr = createObject(OBJ_STRING,sdsnew(\n        \"-OOM command not allowed when used memory > 'maxmemory'.\\r\\n\"));\n    shared.execaborterr = createObject(OBJ_STRING,sdsnew(\n        \"-EXECABORT Transaction discarded because of previous errors.\\r\\n\"));\n    shared.noreplicaserr = createObject(OBJ_STRING,sdsnew(\n        \"-NOREPLICAS Not enough good replicas to write.\\r\\n\"));\n    shared.busykeyerr = createObject(OBJ_STRING,sdsnew(\n        \"-BUSYKEY Target key name already exists.\\r\\n\"));\n\n    /* The shared NULL depends on the protocol version. */\n    shared.null[0] = NULL;\n    shared.null[1] = NULL;\n    shared.null[2] = createObject(OBJ_STRING,sdsnew(\"$-1\\r\\n\"));\n    shared.null[3] = createObject(OBJ_STRING,sdsnew(\"_\\r\\n\"));\n\n    shared.nullarray[0] = NULL;\n    shared.nullarray[1] = NULL;\n    shared.nullarray[2] = createObject(OBJ_STRING,sdsnew(\"*-1\\r\\n\"));\n    shared.nullarray[3] = createObject(OBJ_STRING,sdsnew(\"_\\r\\n\"));\n\n    shared.emptymap[0] = NULL;\n    shared.emptymap[1] = NULL;\n    shared.emptymap[2] = createObject(OBJ_STRING,sdsnew(\"*0\\r\\n\"));\n    shared.emptymap[3] = createObject(OBJ_STRING,sdsnew(\"%0\\r\\n\"));\n\n    shared.emptyset[0] = NULL;\n    shared.emptyset[1] = NULL;\n    shared.emptyset[2] = createObject(OBJ_STRING,sdsnew(\"*0\\r\\n\"));\n    shared.emptyset[3] = createObject(OBJ_STRING,sdsnew(\"~0\\r\\n\"));\n\n    for (j = 0; j < PROTO_SHARED_SELECT_CMDS; j++) {\n        char dictid_str[64];\n        int dictid_len;\n\n        dictid_len = ll2string(dictid_str,sizeof(dictid_str),j);\n        shared.select[j] = createObject(OBJ_STRING,\n            
sdscatprintf(sdsempty(),\n                \"*2\\r\\n$6\\r\\nSELECT\\r\\n$%d\\r\\n%s\\r\\n\",\n                dictid_len, dictid_str));\n    }\n    shared.messagebulk = createStringObject(\"$7\\r\\nmessage\\r\\n\",13);\n    shared.pmessagebulk = createStringObject(\"$8\\r\\npmessage\\r\\n\",14);\n    shared.subscribebulk = createStringObject(\"$9\\r\\nsubscribe\\r\\n\",15);\n    shared.unsubscribebulk = createStringObject(\"$11\\r\\nunsubscribe\\r\\n\",18);\n    shared.ssubscribebulk = createStringObject(\"$10\\r\\nssubscribe\\r\\n\", 17);\n    shared.sunsubscribebulk = createStringObject(\"$12\\r\\nsunsubscribe\\r\\n\", 19);\n    shared.smessagebulk = createStringObject(\"$8\\r\\nsmessage\\r\\n\", 14);\n    shared.psubscribebulk = createStringObject(\"$10\\r\\npsubscribe\\r\\n\",17);\n    shared.punsubscribebulk = createStringObject(\"$12\\r\\npunsubscribe\\r\\n\",19);\n\n    /* Shared command names */\n    shared.del = createStringObject(\"DEL\",3);\n    shared.unlink = createStringObject(\"UNLINK\",6);\n    shared.rpop = createStringObject(\"RPOP\",4);\n    shared.lpop = createStringObject(\"LPOP\",4);\n    shared.lpush = createStringObject(\"LPUSH\",5);\n    shared.rpoplpush = createStringObject(\"RPOPLPUSH\",9);\n    shared.lmove = createStringObject(\"LMOVE\",5);\n    shared.blmove = createStringObject(\"BLMOVE\",6);\n    shared.zpopmin = createStringObject(\"ZPOPMIN\",7);\n    shared.zpopmax = createStringObject(\"ZPOPMAX\",7);\n    shared.multi = createStringObject(\"MULTI\",5);\n    shared.exec = createStringObject(\"EXEC\",4);\n    shared.hset = createStringObject(\"HSET\",4);\n    shared.srem = createStringObject(\"SREM\",4);\n    shared.xgroup = createStringObject(\"XGROUP\",6);\n    shared.xclaim = createStringObject(\"XCLAIM\",6);\n    shared.xack = createStringObject(\"XACK\",4);\n    shared.script = createStringObject(\"SCRIPT\",6);\n    shared.replconf = createStringObject(\"REPLCONF\",8);\n    shared.pexpireat = 
createStringObject(\"PEXPIREAT\",9);\n    shared.pexpire = createStringObject(\"PEXPIRE\",7);\n    shared.persist = createStringObject(\"PERSIST\",7);\n    shared.set = createStringObject(\"SET\",3);\n    shared.eval = createStringObject(\"EVAL\",4);\n    shared.hpexpireat = createStringObject(\"HPEXPIREAT\",10);\n    shared.hpersist = createStringObject(\"HPERSIST\",8);\n    shared.hdel = createStringObject(\"HDEL\",4);\n    shared.hsetex = createStringObject(\"HSETEX\",6);\n\n    /* Shared command argument */\n    shared.left = createStringObject(\"left\",4);\n    shared.right = createStringObject(\"right\",5);\n    shared.pxat = createStringObject(\"PXAT\", 4);\n    shared.time = createStringObject(\"TIME\",4);\n    shared.retrycount = createStringObject(\"RETRYCOUNT\",10);\n    shared.force = createStringObject(\"FORCE\",5);\n    shared.justid = createStringObject(\"JUSTID\",6);\n    shared.entriesread = createStringObject(\"ENTRIESREAD\",11);\n    shared.lastid = createStringObject(\"LASTID\",6);\n    shared.default_username = createStringObject(\"default\",7);\n    shared.ping = createStringObject(\"ping\",4);\n    shared.setid = createStringObject(\"SETID\",5);\n    shared.keepttl = createStringObject(\"KEEPTTL\",7);\n    shared.absttl = createStringObject(\"ABSTTL\",6);\n    shared.load = createStringObject(\"LOAD\",4);\n    shared.createconsumer = createStringObject(\"CREATECONSUMER\",14);\n    shared.getack = createStringObject(\"GETACK\",6);\n    shared.special_asterick = createStringObject(\"*\",1);\n    shared.special_equals = createStringObject(\"=\",1);\n    shared.redacted = makeObjectShared(createStringObject(\"(redacted)\",10));\n    shared.fields = createStringObject(\"FIELDS\",6);\n\n    for (j = 0; j < OBJ_SHARED_INTEGERS; j++) {\n        shared.integers[j] =\n            makeObjectShared(createObject(OBJ_STRING,(void*)(long)j));\n        initObjectLRUOrLFU(shared.integers[j]);\n        shared.integers[j]->encoding = OBJ_ENCODING_INT;\n    }\n  
  for (j = 0; j < OBJ_SHARED_BULKHDR_LEN; j++) {\n        shared.mbulkhdr[j] = createObject(OBJ_STRING,\n            sdscatprintf(sdsempty(),\"*%d\\r\\n\",j));\n        shared.bulkhdr[j] = createObject(OBJ_STRING,\n            sdscatprintf(sdsempty(),\"$%d\\r\\n\",j));\n        shared.maphdr[j] = createObject(OBJ_STRING,\n            sdscatprintf(sdsempty(),\"%%%d\\r\\n\",j));\n        shared.sethdr[j] = createObject(OBJ_STRING,\n            sdscatprintf(sdsempty(),\"~%d\\r\\n\",j));\n    }\n    /* The following two shared objects, minstring and maxstring, are not\n     * actually used for their value but as a special object meaning\n     * respectively the minimum possible string and the maximum possible\n     * string in string comparisons for the ZRANGEBYLEX command. */\n    shared.minstring = sdsnew(\"minstring\");\n    shared.maxstring = sdsnew(\"maxstring\");\n}\n\nvoid initServerClientMemUsageBuckets(void) {\n    if (server.client_mem_usage_buckets)\n        return;\n    server.client_mem_usage_buckets = zmalloc(sizeof(clientMemUsageBucket)*CLIENT_MEM_USAGE_BUCKETS);\n    for (int j = 0; j < CLIENT_MEM_USAGE_BUCKETS; j++) {\n        server.client_mem_usage_buckets[j].mem_usage_sum = 0;\n        server.client_mem_usage_buckets[j].clients = listCreate();\n    }\n}\n\nvoid freeServerClientMemUsageBuckets(void) {\n    if (!server.client_mem_usage_buckets)\n        return;\n    for (int j = 0; j < CLIENT_MEM_USAGE_BUCKETS; j++)\n        listRelease(server.client_mem_usage_buckets[j].clients);\n    zfree(server.client_mem_usage_buckets);\n    server.client_mem_usage_buckets = NULL;\n}\n\nvoid initServerConfig(void) {\n    int j;\n    char *default_bindaddr[CONFIG_DEFAULT_BINDADDR_COUNT] = CONFIG_DEFAULT_BINDADDR;\n\n    initConfigValues();\n    updateCachedTime(1);\n    server.cmd_time_snapshot = server.mstime;\n    getRandomHexChars(server.runid,CONFIG_RUN_ID_SIZE);\n    server.runid[CONFIG_RUN_ID_SIZE] = '\\0';\n    changeReplicationId();\n    
clearReplicationId2();\n    server.hz = CONFIG_DEFAULT_HZ; /* Initialize it ASAP, even if it may get\n                                      updated later after loading the config.\n                                      This value may be used before the server\n                                      is initialized. */\n    server.timezone = getTimeZone(); /* Initialized by tzset(). */\n    server.configfile = NULL;\n    server.executable = NULL;\n    server.arch_bits = (sizeof(long) == 8) ? 64 : 32;\n#if DEBUG_ASSERT_KEYSPACE\n    server.dbg_assert_flags = DBG_ASSERT_KEYSIZES | DBG_ASSERT_ALLOC_SLOT;\n#else\n    server.dbg_assert_flags = 0;\n#endif\n    server.bindaddr_count = CONFIG_DEFAULT_BINDADDR_COUNT;\n    for (j = 0; j < CONFIG_DEFAULT_BINDADDR_COUNT; j++)\n        server.bindaddr[j] = zstrdup(default_bindaddr[j]);\n    memset(server.listeners, 0x00, sizeof(server.listeners));\n    server.active_expire_enabled = 1;\n    server.allow_access_expired = 0;\n    server.allow_access_trimmed = 0;\n    server.skip_checksum_validation = 0;\n    server.allow_keymeta_registration = 0;\n    server.loading = 0;\n    server.async_loading = 0;\n    server.loading_rdb_used_mem = 0;\n    server.aof_state = AOF_OFF;\n    server.aof_rewrite_base_size = 0;\n    server.aof_rewrite_scheduled = 0;\n    server.aof_flush_sleep = 0;\n    server.aof_last_fsync = time(NULL) * 1000;\n    server.aof_cur_timestamp = 0;\n    atomicSet(server.aof_bio_fsync_status,C_OK);\n    server.aof_rewrite_time_last = -1;\n    server.aof_rewrite_time_start = -1;\n    server.aof_lastbgrewrite_status = C_OK;\n    server.aof_delayed_fsync = 0;\n    server.aof_fd = -1;\n    server.aof_selected_db = -1; /* Make sure the first time will not match */\n    server.aof_flush_postponed_start = 0;\n    server.aof_last_incr_size = 0;\n    server.aof_last_incr_fsync_offset = 0;\n    server.active_defrag_running = 0;\n    server.active_defrag_configuration_changed = 0;\n    server.notify_keyspace_events = 0;\n    
server.blocked_clients = 0;\n    memset(server.blocked_clients_by_type,0,\n           sizeof(server.blocked_clients_by_type));\n    server.shutdown_asap = 0;\n    server.crashing = 0;\n    server.shutdown_flags = 0;\n    server.shutdown_mstime = 0;\n    server.cluster_module_flags = CLUSTER_MODULE_FLAG_NONE;\n    server.cluster_module_trim_disablers = 0;\n    server.migrate_cached_sockets = dictCreate(&migrateCacheDictType);\n    server.next_client_id = 1; /* Client IDs, start from 1 .*/\n    server.page_size = sysconf(_SC_PAGESIZE);\n    server.pause_cron = 0;\n    server.dict_resizing = 1;\n\n    server.latency_tracking_info_percentiles_len = 3;\n    server.latency_tracking_info_percentiles = zmalloc(sizeof(double)*(server.latency_tracking_info_percentiles_len));\n    server.latency_tracking_info_percentiles[0] = 50.0;  /* p50 */\n    server.latency_tracking_info_percentiles[1] = 99.0;  /* p99 */\n    server.latency_tracking_info_percentiles[2] = 99.9;  /* p999 */\n\n    server.lruclock = getLRUClock();\n    resetServerSaveParams();\n\n    appendServerSaveParams(60*60,1);  /* save after 1 hour and 1 change */\n    appendServerSaveParams(300,100);  /* save after 5 minutes and 100 changes */\n    appendServerSaveParams(60,10000); /* save after 1 minute and 10000 changes */\n\n    /* Replication related */\n    server.masterhost = NULL;\n    server.masterport = 6379;\n    server.master = NULL;\n    server.cached_master = NULL;\n    server.master_initial_offset = -1;\n    server.repl_state = REPL_STATE_NONE;\n    server.repl_rdb_ch_state = REPL_RDB_CH_STATE_NONE;\n    server.repl_num_master_disconnection = 0;\n    server.repl_full_sync_buffer = (struct replDataBuf) {0};\n    server.repl_transfer_tmpfile = NULL;\n    server.repl_transfer_fd = -1;\n    server.repl_transfer_s = NULL;\n    server.repl_syncio_timeout = CONFIG_REPL_SYNCIO_TIMEOUT;\n    server.repl_down_since = 0; /* Never connected, repl is down since EVER. 
*/\n    server.repl_up_since = 0;\n    server.master_repl_offset = 0;\n    server.fsynced_reploff_pending = 0;\n    server.repl_stream_lastio = server.unixtime;\n    server.repl_total_sync_attempts = 0;\n\n    /* Replication partial resync backlog */\n    server.repl_backlog = NULL;\n    server.repl_no_slaves_since = time(NULL);\n\n    /* Failover related */\n    server.failover_end_time = 0;\n    server.force_failover = 0;\n    server.target_replica_host = NULL;\n    server.target_replica_port = 0;\n    server.failover_state = NO_FAILOVER;\n\n    /* Client output buffer limits */\n    for (j = 0; j < CLIENT_TYPE_OBUF_COUNT; j++)\n        server.client_obuf_limits[j] = clientBufferLimitsDefaults[j];\n\n    /* Linux OOM Score config */\n    for (j = 0; j < CONFIG_OOM_COUNT; j++)\n        server.oom_score_adj_values[j] = configOOMScoreAdjValuesDefaults[j];\n\n    /* Double constants initialization */\n    R_Zero = 0.0;\n    R_PosInf = 1.0/R_Zero;\n    R_NegInf = -1.0/R_Zero;\n    R_Nan = R_Zero/R_Zero;\n\n    /* Command table -- we initialize it here as it is part of the\n     * initial configuration, since command names may be changed via\n     * redis.conf using the rename-command directive. 
*/\n    server.commands = dictCreate(&commandTableDictType);\n    server.orig_commands = dictCreate(&commandTableDictType);\n    populateCommandTable();\n\n    /* Debugging */\n    server.watchdog_period = 0;\n}\n\nextern char **environ;\n\n/* Restart the server, executing the same executable that started this\n * instance, with the same arguments and configuration file.\n *\n * The function is designed to directly call execve() so that the new\n * server instance will retain the PID of the previous one.\n *\n * The following flags, which may be bitwise ORed together, alter the\n * behavior of this function:\n *\n * RESTART_SERVER_NONE              No flags.\n * RESTART_SERVER_GRACEFULLY        Do a proper shutdown before restarting.\n * RESTART_SERVER_CONFIG_REWRITE    Rewrite the config file before restarting.\n *\n * On success the function does not return, because the process turns into\n * a different process. On error C_ERR is returned. */\nint restartServer(int flags, mstime_t delay) {\n    int j;\n\n    /* Check if we still have access to the executable that started this\n     * server instance. */\n    if (access(server.executable,X_OK) == -1) {\n        serverLog(LL_WARNING,\"Can't restart: this process has no \"\n                             \"permissions to execute %s\", server.executable);\n        return C_ERR;\n    }\n\n    /* Config rewriting. */\n    if (flags & RESTART_SERVER_CONFIG_REWRITE &&\n        server.configfile &&\n        rewriteConfig(server.configfile, 0) == -1)\n    {\n        serverLog(LL_WARNING,\"Can't restart: configuration rewrite process \"\n                             \"failed: %s\", strerror(errno));\n        return C_ERR;\n    }\n\n    /* Perform a proper shutdown. We don't wait for lagging replicas though. 
*/\n    if (flags & RESTART_SERVER_GRACEFULLY &&\n        prepareForShutdown(SHUTDOWN_NOW) != C_OK)\n    {\n        serverLog(LL_WARNING,\"Can't restart: error preparing for shutdown\");\n        return C_ERR;\n    }\n\n    /* Close all file descriptors, with the exception of stdin, stdout, stderr\n     * which are useful if we restart a Redis server which is not daemonized. */\n    for (j = 3; j < (int)server.maxclients + 1024; j++) {\n        /* Test the descriptor validity before closing it, otherwise\n         * Valgrind issues a warning on close(). */\n        if (fcntl(j,F_GETFD) != -1) close(j);\n    }\n\n    /* Execute the server with the original command line. */\n    if (delay) usleep(delay*1000);\n    zfree(server.exec_argv[0]);\n    server.exec_argv[0] = zstrdup(server.executable);\n    execve(server.executable,server.exec_argv,environ);\n\n    /* If an error occurred here, there is nothing we can do, but exit. */\n    _exit(1);\n\n    return C_ERR; /* Never reached. */\n}\n\n/* This function will configure the current process's oom_score_adj according\n * to user specified configuration. This is currently implemented on Linux\n * only.\n *\n * A process_class value of -1 implies OOM_CONFIG_MASTER or OOM_CONFIG_REPLICA,\n * depending on current role.\n */\nint setOOMScoreAdj(int process_class) {\n    if (process_class == -1)\n        process_class = (server.masterhost ? 
CONFIG_OOM_REPLICA : CONFIG_OOM_MASTER);\n\n    serverAssert(process_class >= 0 && process_class < CONFIG_OOM_COUNT);\n\n#ifdef HAVE_PROC_OOM_SCORE_ADJ\n    /* The following statics are used to indicate that Redis has changed the process's oom score,\n     * and to save the original score so we can restore it later if needed.\n     * We need this so that when we disable oom-score-adj (also during configuration rollback,\n     * when another configuration parameter was invalid and caused a rollback after\n     * applying a new oom-score) we can return to the oom-score value from before our\n     * adjustments. */\n    static int oom_score_adjusted_by_redis = 0;\n    static int oom_score_adj_base = 0;\n\n    int fd;\n    int val;\n    char buf[64];\n\n    if (server.oom_score_adj != OOM_SCORE_ADJ_NO) {\n        if (!oom_score_adjusted_by_redis) {\n            oom_score_adjusted_by_redis = 1;\n            /* Backup base value before enabling Redis control over oom score */\n            fd = open(\"/proc/self/oom_score_adj\", O_RDONLY);\n            if (fd < 0 || read(fd, buf, sizeof(buf)) < 0) {\n                serverLog(LL_WARNING, \"Unable to read oom_score_adj: %s\", strerror(errno));\n                if (fd != -1) close(fd);\n                return C_ERR;\n            }\n            oom_score_adj_base = atoi(buf);\n            close(fd);\n        }\n\n        val = server.oom_score_adj_values[process_class];\n        if (server.oom_score_adj == OOM_SCORE_RELATIVE)\n            val += oom_score_adj_base;\n        if (val > 1000) val = 1000;\n        if (val < -1000) val = -1000;\n    } else if (oom_score_adjusted_by_redis) {\n        oom_score_adjusted_by_redis = 0;\n        val = oom_score_adj_base;\n    } else {\n        return C_OK;\n    }\n\n    snprintf(buf, sizeof(buf) - 1, \"%d\\n\", val);\n\n    fd = open(\"/proc/self/oom_score_adj\", O_WRONLY);\n    if (fd < 0 || write(fd, buf, strlen(buf)) < 0) {\n        serverLog(LL_WARNING, \"Unable to write oom_score_adj: 
%s\", strerror(errno));\n        if (fd != -1) close(fd);\n        return C_ERR;\n    }\n\n    close(fd);\n    return C_OK;\n#else\n    /* Unsupported */\n    return C_ERR;\n#endif\n}\n\n/* This function will try to raise the max number of open files accordingly to\n * the configured max number of clients. It also reserves a number of file\n * descriptors (CONFIG_MIN_RESERVED_FDS) for extra operations of\n * persistence, listening sockets, log files and so forth.\n *\n * If it will not be possible to set the limit accordingly to the configured\n * max number of clients, the function will do the reverse setting\n * server.maxclients to the value that we can actually handle. */\nvoid adjustOpenFilesLimit(void) {\n    rlim_t maxfiles = server.maxclients+CONFIG_MIN_RESERVED_FDS;\n    struct rlimit limit;\n\n    if (getrlimit(RLIMIT_NOFILE,&limit) == -1) {\n        serverLog(LL_WARNING,\"Unable to obtain the current NOFILE limit (%s), assuming 1024 and setting the max clients configuration accordingly.\",\n            strerror(errno));\n        server.maxclients = 1024-CONFIG_MIN_RESERVED_FDS;\n    } else {\n        rlim_t oldlimit = limit.rlim_cur;\n\n        /* Set the max number of files if the current limit is not enough\n         * for our needs. */\n        if (oldlimit < maxfiles) {\n            rlim_t bestlimit;\n            int setrlimit_error = 0;\n\n            /* Try to set the file limit to match 'maxfiles' or at least\n             * to the higher value supported less than maxfiles. */\n            bestlimit = maxfiles;\n            while(bestlimit > oldlimit) {\n                rlim_t decr_step = 16;\n\n                limit.rlim_cur = bestlimit;\n                limit.rlim_max = bestlimit;\n                if (setrlimit(RLIMIT_NOFILE,&limit) != -1) break;\n                setrlimit_error = errno;\n\n                /* We failed to set file limit to 'bestlimit'. Try with a\n                 * smaller limit decrementing by a few FDs per iteration. 
*/\n                if (bestlimit < decr_step) {\n                    bestlimit = oldlimit;\n                    break;\n                }\n                bestlimit -= decr_step;\n            }\n\n            /* Assume that the limit we get initially is still valid if\n             * our last try was even lower. */\n            if (bestlimit < oldlimit) bestlimit = oldlimit;\n\n            if (bestlimit < maxfiles) {\n                unsigned int old_maxclients = server.maxclients;\n                server.maxclients = bestlimit-CONFIG_MIN_RESERVED_FDS;\n                /* maxclients is unsigned so may overflow: in order\n                 * to check if maxclients is now logically less than 1\n                 * we test indirectly via bestlimit. */\n                if (bestlimit <= CONFIG_MIN_RESERVED_FDS) {\n                    serverLog(LL_WARNING,\"Your current 'ulimit -n' \"\n                        \"of %llu is not enough for the server to start. \"\n                        \"Please increase your open file limit to at least \"\n                        \"%llu. Exiting.\",\n                        (unsigned long long) oldlimit,\n                        (unsigned long long) maxfiles);\n                    exit(1);\n                }\n                serverLog(LL_WARNING,\"You requested maxclients of %d \"\n                    \"requiring at least %llu max file descriptors.\",\n                    old_maxclients,\n                    (unsigned long long) maxfiles);\n                serverLog(LL_WARNING,\"Server can't set maximum open files \"\n                    \"to %llu because of OS error: %s.\",\n                    (unsigned long long) maxfiles, strerror(setrlimit_error));\n                serverLog(LL_WARNING,\"Current maximum open files is %llu. \"\n                    \"maxclients has been reduced to %d to compensate for \"\n                    \"low ulimit. 
\"\n                    \"If you need higher maxclients increase 'ulimit -n'.\",\n                    (unsigned long long) bestlimit, server.maxclients);\n            } else {\n                serverLog(LL_NOTICE,\"Increased maximum number of open files \"\n                    \"to %llu (it was originally set to %llu).\",\n                    (unsigned long long) maxfiles,\n                    (unsigned long long) oldlimit);\n            }\n        }\n    }\n}\n\n/* Check that server.tcp_backlog can be actually enforced in Linux according\n * to the value of /proc/sys/net/core/somaxconn, or warn about it. */\nvoid checkTcpBacklogSettings(void) {\n#if defined(HAVE_PROC_SOMAXCONN)\n    FILE *fp = fopen(\"/proc/sys/net/core/somaxconn\",\"r\");\n    char buf[1024];\n    if (!fp) return;\n    if (fgets(buf,sizeof(buf),fp) != NULL) {\n        int somaxconn = atoi(buf);\n        if (somaxconn > 0 && somaxconn < server.tcp_backlog) {\n            serverLog(LL_WARNING,\"WARNING: The TCP backlog setting of %d cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of %d.\", server.tcp_backlog, somaxconn);\n        }\n    }\n    fclose(fp);\n#elif defined(HAVE_SYSCTL_KIPC_SOMAXCONN)\n    int somaxconn, mib[3];\n    size_t len = sizeof(int);\n\n    mib[0] = CTL_KERN;\n    mib[1] = KERN_IPC;\n    mib[2] = KIPC_SOMAXCONN;\n\n    if (sysctl(mib, 3, &somaxconn, &len, NULL, 0) == 0) {\n        if (somaxconn > 0 && somaxconn < server.tcp_backlog) {\n            serverLog(LL_WARNING,\"WARNING: The TCP backlog setting of %d cannot be enforced because kern.ipc.somaxconn is set to the lower value of %d.\", server.tcp_backlog, somaxconn);\n        }\n    }\n#elif defined(HAVE_SYSCTL_KERN_SOMAXCONN)\n    int somaxconn, mib[2];\n    size_t len = sizeof(int);\n\n    mib[0] = CTL_KERN;\n    mib[1] = KERN_SOMAXCONN;\n\n    if (sysctl(mib, 2, &somaxconn, &len, NULL, 0) == 0) {\n        if (somaxconn > 0 && somaxconn < server.tcp_backlog) {\n            
serverLog(LL_WARNING,\"WARNING: The TCP backlog setting of %d cannot be enforced because kern.somaxconn is set to the lower value of %d.\", server.tcp_backlog, somaxconn);\n        }\n    }\n#elif defined(SOMAXCONN)\n    if (SOMAXCONN < server.tcp_backlog) {\n        serverLog(LL_WARNING,\"WARNING: The TCP backlog setting of %d cannot be enforced because SOMAXCONN is set to the lower value of %d.\", server.tcp_backlog, SOMAXCONN);\n    }\n#endif\n}\n\nvoid closeListener(connListener *sfd) {\n    int j;\n\n    for (j = 0; j < sfd->count; j++) {\n        if (sfd->fd[j] == -1) continue;\n\n        aeDeleteFileEvent(server.el, sfd->fd[j], AE_READABLE);\n        close(sfd->fd[j]);\n    }\n\n    sfd->count = 0;\n}\n\n/* Create an event handler for accepting new connections in TCP or TLS domain sockets.\n * This works atomically for all socket fds */\nint createSocketAcceptHandler(connListener *sfd, aeFileProc *accept_handler) {\n    int j;\n\n    for (j = 0; j < sfd->count; j++) {\n        if (aeCreateFileEvent(server.el, sfd->fd[j], AE_READABLE, accept_handler,sfd) == AE_ERR) {\n            /* Rollback */\n            for (j = j-1; j >= 0; j--) aeDeleteFileEvent(server.el, sfd->fd[j], AE_READABLE);\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\n/* Initialize a set of file descriptors to listen to the specified 'port'\n * binding the addresses specified in the Redis server configuration.\n *\n * The listening file descriptors are stored in the integer array 'fds'\n * and their number is set in '*count'. Actually @sfd should be 'listener',\n * for the historical reasons, let's keep 'sfd' here.\n *\n * The addresses to bind are specified in the global server.bindaddr array\n * and their number is server.bindaddr_count. 
If the server configuration\n * contains no specific addresses to bind, this function will try to\n * bind * (all addresses) for both the IPv4 and IPv6 protocols.\n *\n * On success the function returns C_OK.\n *\n * On error the function returns C_ERR: either at least one of the\n * server.bindaddr addresses was impossible to bind, or no bind addresses\n * were specified in the server configuration and the function was not\n * able to bind * for at least one of the IPv4 or IPv6 protocols. */\nint listenToPort(connListener *sfd) {\n    int j;\n    int port = sfd->port;\n    char **bindaddr = sfd->bindaddr;\n\n    /* If we have no bind address, we don't listen on a TCP socket */\n    if (sfd->bindaddr_count == 0) return C_OK;\n\n    for (j = 0; j < sfd->bindaddr_count; j++) {\n        char* addr = bindaddr[j];\n        int optional = *addr == '-';\n        if (optional) addr++;\n        if (strchr(addr,':')) {\n            /* Bind IPv6 address. */\n            sfd->fd[sfd->count] = anetTcp6Server(server.neterr,port,addr,server.tcp_backlog);\n        } else {\n            /* Bind IPv4 address. 
*/\n            sfd->fd[sfd->count] = anetTcpServer(server.neterr,port,addr,server.tcp_backlog);\n        }\n        if (sfd->fd[sfd->count] == ANET_ERR) {\n            int net_errno = errno;\n            serverLog(LL_WARNING,\n                \"Warning: Could not create server TCP listening socket %s:%d: %s\",\n                addr, port, server.neterr);\n            if (net_errno == EADDRNOTAVAIL && optional)\n                continue;\n            if (net_errno == ENOPROTOOPT     || net_errno == EPROTONOSUPPORT ||\n                net_errno == ESOCKTNOSUPPORT || net_errno == EPFNOSUPPORT ||\n                net_errno == EAFNOSUPPORT)\n                continue;\n\n            /* Rollback successful listens before exiting */\n            closeListener(sfd);\n            return C_ERR;\n        }\n        if (server.socket_mark_id > 0) anetSetSockMarkId(NULL, sfd->fd[sfd->count], server.socket_mark_id);\n        anetNonBlock(NULL,sfd->fd[sfd->count]);\n        anetCloexec(sfd->fd[sfd->count]);\n        sfd->count++;\n    }\n    return C_OK;\n}\n\n/* Resets the stats that we expose via INFO or other means that we want\n * to reset via CONFIG RESETSTAT. The function is also used in order to\n * initialize these fields in initServer() at server startup. 
*/\nvoid resetServerStats(void) {\n    int j;\n\n    server.stat_numcommands = 0;\n    server.stat_numconnections = 0;\n    server.stat_expiredkeys = 0;\n    server.stat_expiredkeys_active = 0;\n    server.stat_expired_subkeys = 0;\n    server.stat_expired_subkeys_active = 0;\n    server.stat_expired_stale_perc = 0;\n    server.stat_expired_time_cap_reached_count = 0;\n    server.stat_expire_cycle_time_used = 0;\n    server.stat_evictedkeys = 0;\n    server.stat_evictedclients = 0;\n    server.stat_evictedscripts = 0;\n    server.stat_total_eviction_exceeded_time = 0;\n    server.stat_last_eviction_exceeded_time = 0;\n    server.stat_keyspace_misses = 0;\n    server.stat_keyspace_hits = 0;\n    server.stat_active_defrag_hits = 0;\n    server.stat_active_defrag_misses = 0;\n    server.stat_active_defrag_key_hits = 0;\n    server.stat_active_defrag_key_misses = 0;\n    server.stat_active_defrag_scanned = 0;\n    server.stat_total_active_defrag_time = 0;\n    server.stat_last_active_defrag_time = 0;\n    server.stat_fork_time = 0;\n    server.stat_fork_rate = 0;\n    server.stat_total_forks = 0;\n    server.stat_rejected_conn = 0;\n    server.stat_sync_full = 0;\n    server.stat_sync_partial_ok = 0;\n    server.stat_sync_partial_err = 0;\n    for (j = 0; j < IO_THREADS_MAX_NUM; j++) {\n        atomicSet(server.stat_io_reads_processed[j], 0);\n        atomicSet(server.stat_io_writes_processed[j], 0);\n    }\n    atomicSet(server.stat_client_qbuf_limit_disconnections, 0);\n    server.stat_client_outbuf_limit_disconnections = 0;\n    for (j = 0; j < STATS_METRIC_COUNT; j++) {\n        server.inst_metric[j].idx = 0;\n        server.inst_metric[j].last_sample_base = 0;\n        server.inst_metric[j].last_sample_value = 0;\n        memset(server.inst_metric[j].samples,0,\n            sizeof(server.inst_metric[j].samples));\n    }\n    server.stat_aof_rewrites = 0;\n    server.stat_rdb_saves = 0;\n    server.stat_aofrw_consecutive_failures = 0;\n    
server.stat_rdb_consecutive_failures = 0;\n    atomicSet(server.stat_net_input_bytes, 0);\n    atomicSet(server.stat_net_output_bytes, 0);\n    atomicSet(server.stat_net_repl_input_bytes, 0);\n    atomicSet(server.stat_net_repl_output_bytes, 0);\n    server.stat_unexpected_error_replies = 0;\n    server.stat_total_error_replies = 0;\n    server.stat_dump_payload_sanitizations = 0;\n    server.aof_delayed_fsync = 0;\n    server.stat_reply_buffer_shrinks = 0;\n    server.stat_reply_buffer_expands = 0;\n    server.stat_cluster_incompatible_ops = 0;\n    server.stat_total_prefetch_batches = 0;\n    server.stat_total_prefetch_entries = 0;\n    atomicSet(server.stat_avg_pipeline_length_sum, 0);\n    atomicSet(server.stat_avg_pipeline_length_cnt, 0);\n    atomicSet(server.stat_total_client_process_input_buff_events, 0);\n    server.stat_eventloop_cycles_with_clients_input_buff_processing = 0;\n    stat_prev_total_client_process_input_buff_events = 0;\n    memset(server.duration_stats, 0, sizeof(durationStats) * EL_DURATION_TYPE_NUM);\n    server.el_cmd_cnt_max = 0;\n    server.stat_slowlog_count = 0;\n    server.stat_slowlog_time_us_sum = 0;\n    server.stat_slowlog_time_us_max = 0;\n    lazyfreeResetStats();\n}\n\n/* Make the thread killable at any time, so that functions that kill\n * threads can work reliably (the default cancelability type is PTHREAD_CANCEL_DEFERRED).\n * Needed for pthread_cancel, used by the fast memory test in the crash report. 
*/\nvoid makeThreadKillable(void) {\n    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL);\n    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, NULL);\n}\n\nvoid initServer(void) {\n    int j;\n\n    signal(SIGHUP, SIG_IGN);\n    signal(SIGPIPE, SIG_IGN);\n    setupSignalHandlers();\n    ThreadsManager_init();\n    makeThreadKillable();\n\n    if (server.syslog_enabled) {\n        openlog(server.syslog_ident, LOG_PID | LOG_NDELAY | LOG_NOWAIT,\n            server.syslog_facility);\n    }\n\n    /* Initialization after setting defaults from the config system. */\n    server.aof_state = server.aof_enabled ? AOF_ON : AOF_OFF;\n    server.fsynced_reploff = server.aof_enabled ? 0 : -1;\n    server.hz = server.config_hz;\n    server.pid = getpid();\n    server.in_fork_child = CHILD_TYPE_NONE;\n    server.rdb_pipe_read = -1;\n    server.rdb_child_exit_pipe = -1;\n    server.main_thread_id = pthread_self();\n    server.current_client = NULL;\n    server.errors = raxNew();\n    server.errors_enabled = 1;\n    server.execution_nesting = 0;\n    server.clients = listCreate();\n    server.clients_index = raxNew();\n    server.clients_to_close = listCreate();\n    server.slaves = listCreate();\n    server.monitors = listCreate();\n    server.clients_pending_write = listCreate();\n    server.clients_pending_read = listCreate();\n    server.clients_with_pending_ref_reply = listCreate();\n    server.clients_timeout_table = raxNew();\n    server.replication_allowed = 1;\n    server.slaveseldb = -1; /* Force to emit the first SELECT command. 
*/\n    server.unblocked_clients = listCreate();\n    server.ready_keys = listCreate();\n    server.tracking_pending_keys = listCreate();\n    server.pending_push_messages = listCreate();\n    server.clients_waiting_acks = listCreate();\n    server.get_ack_from_slaves = 0;\n    server.paused_actions = 0;\n    memset(server.client_pause_per_purpose, 0,\n           sizeof(server.client_pause_per_purpose));\n    server.postponed_clients = listCreate();\n    server.events_processed_while_blocked = 0;\n    server.system_memory_size = zmalloc_get_memory_size();\n    server.blocked_last_cron = 0;\n    server.blocking_op_nesting = 0;\n    server.thp_enabled = 0;\n    server.cluster_drop_packet_filter = -1;\n    server.reply_buffer_peak_reset_time = REPLY_BUFFER_DEFAULT_PEAK_RESET_TIME;\n    server.reply_buffer_resizing_enabled = 1;\n    server.reply_copy_avoidance_enabled = 1;\n    server.client_mem_usage_buckets = NULL;\n    /* Enable memory accounting only if key-memory-histograms or cluster-slot-stats-enabled\n     * includes 'mem' at startup. Memory tracking can be disabled at runtime\n     * but cannot be re-enabled, to avoid situation where we would need to\n     * catch up or iterate over all slots and kvobjs. */\n    server.memory_tracking_enabled = server.key_memory_histograms || clusterSlotStatsEnabled(CLUSTER_SLOT_STATS_MEM);\n    resetReplicationBuffer();\n\n    /* Make sure the locale is set on startup based on the config file. */\n    if (setlocale(LC_COLLATE,server.locale_collate) == NULL) {\n        serverLog(LL_WARNING, \"Failed to configure LOCALE for invalid locale name.\");\n        exit(1);\n    }\n\n    createSharedObjects();\n    adjustOpenFilesLimit();\n    const char *clk_msg = monotonicInit();\n    serverLog(LL_NOTICE, \"monotonic clock: %s\", clk_msg);\n    server.el = aeCreateEventLoop(server.maxclients+CONFIG_FDSET_INCR);\n    if (server.el == NULL) {\n        serverLog(LL_WARNING,\n            \"Failed creating the event loop. 
Error message: '%s'\",\n            strerror(errno));\n        exit(1);\n    }\n    server.db = zmalloc(sizeof(redisDb)*server.dbnum);\n\n    /* Create the Redis databases, and initialize other internal state. */\n    int slot_count_bits = 0;\n    int flags = KVSTORE_ALLOCATE_DICTS_ON_DEMAND;\n    if (server.cluster_enabled) {\n        slot_count_bits = CLUSTER_SLOT_MASK_BITS;\n        flags |= KVSTORE_FREE_EMPTY_DICTS;\n    }\n    for (j = 0; j < server.dbnum; j++) {\n        server.db[j].keys = kvstoreCreate(&kvstoreExType, &dbDictType, slot_count_bits, flags);\n        server.db[j].expires = kvstoreCreate(&kvstoreBaseType, &dbExpiresDictType, slot_count_bits, flags);\n        server.db[j].subexpires = estoreCreate(&subexpiresBucketsType, slot_count_bits);\n        server.db[j].expires_cursor = 0;\n        server.db[j].blocking_keys = dictCreate(&keylistDictType);\n        server.db[j].blocking_keys_unblock_on_nokey = dictCreate(&objectKeyPointerValueDictType);\n        server.db[j].stream_claim_pending_keys = dictCreate(&objectKeyPointerValueDictType);\n        server.db[j].stream_idmp_keys = dictCreate(&objectKeyNoValueDictType);\n        server.db[j].ready_keys = dictCreate(&objectKeyPointerValueDictType);\n        server.db[j].watched_keys = dictCreate(&keylistDictType);\n        server.db[j].id = j;\n        server.db[j].avg_ttl = 0;\n    }\n    evictionPoolAlloc(); /* Initialize the LRU keys pool. 
*/\n    /* Note that server.pubsub_channels was chosen to be a kvstore (with only one dict, which\n     * seems odd) just to make the code cleaner by making it be the same type as server.pubsubshard_channels\n     * (which has to be kvstore), see pubsubtype.serverPubSubChannels */\n    server.pubsub_channels = kvstoreCreate(\n        &kvstoreBaseType, &objToDictDictType,\n        0, KVSTORE_ALLOCATE_DICTS_ON_DEMAND);\n    server.pubsub_patterns = dictCreate(&objToDictDictType);\n    server.pubsubshard_channels = kvstoreCreate(\n        &kvstoreBaseType, &objToDictDictType,\n        slot_count_bits, KVSTORE_ALLOCATE_DICTS_ON_DEMAND | KVSTORE_FREE_EMPTY_DICTS);\n    server.pubsub_clients = 0;\n    server.watching_clients = 0;\n    server.cronloops = 0;\n    server.in_exec = 0;\n    server.busy_module_yield_flags = BUSY_MODULE_YIELD_NONE;\n    server.busy_module_yield_reply = NULL;\n    server.client_pause_in_transaction = 0;\n    server.child_pid = -1;\n    server.child_type = CHILD_TYPE_NONE;\n    server.rdb_child_type = RDB_CHILD_TYPE_NONE;\n    server.rdb_pipe_conns = NULL;\n    server.rdb_pipe_numconns = 0;\n    server.rdb_pipe_numconns_writing = 0;\n    server.rdb_pipe_buff = NULL;\n    server.rdb_pipe_bufflen = 0;\n    server.rdb_bgsave_scheduled = 0;\n    server.child_info_pipe[0] = -1;\n    server.child_info_pipe[1] = -1;\n    server.child_info_nread = 0;\n    server.aof_buf = sdsempty();\n    server.lastsave = time(NULL); /* At startup we consider the DB saved. */\n    server.lastbgsave_try = 0;    /* At startup we never tried to BGSAVE. */\n    server.rdb_save_time_last = -1;\n    server.rdb_save_time_start = -1;\n    server.rdb_last_load_keys_expired = 0;\n    server.rdb_last_load_keys_loaded = 0;\n    server.dirty = 0;\n    resetServerStats();\n    /* A few stats we don't want to reset: server startup time, and peak mem. 
*/\n    server.stat_starttime = time(NULL);\n    server.stat_peak_memory = 0;\n    server.stat_peak_memory_time = server.unixtime;\n    server.stat_current_cow_peak = 0;\n    server.stat_current_cow_bytes = 0;\n    server.stat_current_cow_updated = 0;\n    server.stat_current_save_keys_processed = 0;\n    server.stat_current_save_keys_total = 0;\n    server.stat_rdb_cow_bytes = 0;\n    server.stat_aof_cow_bytes = 0;\n    server.stat_module_cow_bytes = 0;\n    server.stat_module_progress = 0;\n    for (int j = 0; j < CLIENT_TYPE_COUNT; j++)\n        server.stat_clients_type_memory[j] = 0;\n    server.stat_cluster_links_memory = 0;\n    server.cron_malloc_stats.zmalloc_used = 0;\n    server.cron_malloc_stats.process_rss = 0;\n    server.cron_malloc_stats.allocator_allocated = 0;\n    server.cron_malloc_stats.allocator_active = 0;\n    server.cron_malloc_stats.allocator_resident = 0;\n    server.repl_current_sync_attempts = 0;\n    server.lastbgsave_status = C_OK;\n    server.aof_last_write_status = C_OK;\n    server.aof_last_write_errno = 0;\n    server.repl_good_slaves_count = 0;\n    server.last_sig_received = 0;\n    memset(server.io_threads_clients_num, 0, sizeof(server.io_threads_clients_num));\n    atomicSetWithSync(server.running, 0);\n    server.accum_call_count_since_ustime = 0;\n    server.monotonic_us_when_ustime = 0;\n\n    /* Initiate acl info struct */\n    server.acl_info.invalid_cmd_accesses = 0;\n    server.acl_info.invalid_key_accesses  = 0;\n    server.acl_info.user_auth_failures = 0;\n    server.acl_info.invalid_channel_accesses = 0;\n    server.acl_info.acl_access_denied_tls_cert = 0;\n\n    /* Initialize the shared pending command pool. 
*/\n    server.cmd_pool.size = 0;\n    server.cmd_pool.capacity = PENDING_COMMAND_POOL_SIZE;\n    server.cmd_pool.pool = zmalloc(sizeof(pendingCommand*) * PENDING_COMMAND_POOL_SIZE);\n    server.cmd_pool.min_size = 0;\n\n    /* Create the timer callback; this is our way to process many background\n     * operations incrementally, like client timeouts, eviction of unaccessed\n     * expired keys and so forth. */\n    if (aeCreateTimeEvent(server.el, 1, serverCron, NULL, NULL) == AE_ERR) {\n        serverPanic(\"Can't create event loop timers.\");\n        exit(1);\n    }\n\n    /* Register a readable event for the pipe used to awake the event loop\n     * from module threads. */\n    if (aeCreateFileEvent(server.el, server.module_pipe[0], AE_READABLE,\n        modulePipeReadable,NULL) == AE_ERR) {\n            serverPanic(\n                \"Error registering the readable event for the module pipe.\");\n    }\n\n    /* Register before and after sleep handlers (note this needs to be done\n     * before loading persistence since it is used by processEventsWhileBlocked). */\n    aeSetBeforeSleepProc(server.el,beforeSleep);\n    aeSetAfterSleepProc(server.el,afterSleep);\n\n    /* 32 bit instances are limited to 4GB of address space, so if there is\n     * no explicit limit in the user-provided configuration we set a limit\n     * at 3 GB using maxmemory with the 'noeviction' policy. This avoids\n     * useless crashes of the Redis instance due to out of memory. */\n    if (server.arch_bits == 32 && server.maxmemory == 0) {\n        serverLog(LL_WARNING,\"Warning: 32 bit instance detected but no memory limit set. 
Setting 3 GB maxmemory limit with 'noeviction' policy now.\");\n        server.maxmemory = 3072LL*(1024*1024); /* 3 GB */\n        server.maxmemory_policy = MAXMEMORY_NO_EVICTION;\n    }\n\n    luaEnvInit();\n    scriptingInit(1);\n    if (functionsInit() == C_ERR) {\n        serverPanic(\"Functions initialization failed, check the server logs.\");\n        exit(1);\n    }\n    slowlogInit();\n    latencyMonitorInit();\n\n    /* Initialize ACL default password if it exists */\n    ACLUpdateDefaultUserPassword(server.requirepass);\n\n    applyWatchdogPeriod();\n\n    if (server.maxmemory_clients != 0)\n        initServerClientMemUsageBuckets();\n\n    prefetchCommandsBatchInit();\n}\n\nvoid initListeners(void) {\n    /* Setup listeners from server config for TCP/TLS/Unix */\n    int conn_index;\n    connListener *listener;\n    if (server.port != 0) {\n        conn_index = connectionIndexByType(CONN_TYPE_SOCKET);\n        if (conn_index < 0)\n            serverPanic(\"Failed finding connection listener of %s\", CONN_TYPE_SOCKET);\n        listener = &server.listeners[conn_index];\n        listener->bindaddr = server.bindaddr;\n        listener->bindaddr_count = server.bindaddr_count;\n        listener->port = server.port;\n        listener->ct = connectionByType(CONN_TYPE_SOCKET);\n    }\n\n    if (server.tls_port || server.tls_replication || server.tls_cluster) {\n        ConnectionType *ct_tls = connectionTypeTls();\n        if (!ct_tls) {\n            serverLog(LL_WARNING, \"Failed finding TLS support.\");\n            exit(1);\n        }\n        if (connTypeConfigure(ct_tls, &server.tls_ctx_config, 1) == C_ERR) {\n            serverLog(LL_WARNING, \"Failed to configure TLS. 
Check logs for more info.\");\n            exit(1);\n        }\n    }\n\n    if (server.tls_port != 0) {\n        conn_index = connectionIndexByType(CONN_TYPE_TLS);\n        if (conn_index < 0)\n            serverPanic(\"Failed finding connection listener of %s\", CONN_TYPE_TLS);\n        listener = &server.listeners[conn_index];\n        listener->bindaddr = server.bindaddr;\n        listener->bindaddr_count = server.bindaddr_count;\n        listener->port = server.tls_port;\n        listener->ct = connectionByType(CONN_TYPE_TLS);\n    }\n    if (server.unixsocket != NULL) {\n        conn_index = connectionIndexByType(CONN_TYPE_UNIX);\n        if (conn_index < 0)\n            serverPanic(\"Failed finding connection listener of %s\", CONN_TYPE_UNIX);\n        listener = &server.listeners[conn_index];\n        listener->bindaddr = &server.unixsocket;\n        listener->bindaddr_count = 1;\n        listener->ct = connectionByType(CONN_TYPE_UNIX);\n        listener->priv = &server.unixsocketperm; /* Unix socket specified */\n    }\n\n    /* create all the configured listener, and add handler to start to accept */\n    int listen_fds = 0;\n    for (int j = 0; j < CONN_TYPE_MAX; j++) {\n        listener = &server.listeners[j];\n        if (listener->ct == NULL)\n            continue;\n\n        if (connListen(listener) == C_ERR) {\n            serverLog(LL_WARNING, \"Failed listening on port %u (%s), aborting.\", listener->port, listener->ct->get_type(NULL));\n            exit(1);\n        }\n\n        if (createSocketAcceptHandler(listener, connAcceptHandler(listener->ct)) != C_OK)\n            serverPanic(\"Unrecoverable error creating %s listener accept handler.\", listener->ct->get_type(NULL));\n\n       listen_fds += listener->count;\n    }\n\n    if (listen_fds == 0) {\n        serverLog(LL_WARNING, \"Configured to not listen anywhere, exiting.\");\n        exit(1);\n    }\n}\n\n/* Some steps in server initialization need to be done last (after modules\n * are 
loaded).\n * Specifically, creation of threads due to a race bug in ld.so, in which\n * Thread Local Storage initialization collides with the dlopen call.\n * see: https://sourceware.org/bugzilla/show_bug.cgi?id=19329 */\nvoid InitServerLast(void) {\n    bioInit();\n    initThreadedIO();\n    set_jemalloc_bg_thread(server.jemalloc_bg_thread);\n    server.initial_memory_usage = zmalloc_used_memory();\n}\n\n/* The purpose of this function is to try to \"glue\" consecutive range\n * key specs in order to build the legacy (first,last,step) spec\n * used by the COMMAND command.\n * By far the most common case is just one range spec (e.g. SET)\n * but some commands' ranges were split into two or more ranges\n * in order to have different flags for different keys (e.g. SMOVE,\n * first key is \"RW ACCESS DELETE\", second key is \"RW INSERT\").\n *\n * Additionally set the CMD_MOVABLE_KEYS flag for commands that may have key\n * names in their arguments, but the legacy range spec doesn't cover all of them.\n *\n * This function uses very basic heuristics and is \"best effort\":\n * 1. Only commands which have only \"range\" specs are considered.\n * 2. Only range specs with keystep of 1 are considered.\n * 3. The order of the range specs must be ascending (i.e.\n *    lastkey of spec[i] == firstkey-1 of spec[i+1]).\n *\n * This function will succeed on all native Redis commands and may\n * fail on module commands, even if they only have \"range\" specs that\n * could actually be \"glued\", in the following cases:\n * 1. The order of \"range\" specs is not ascending (e.g. the spec for\n *    the key at index 2 was added before the spec of the key at\n *    index 1).\n * 2. The \"range\" specs have keystep > 1.\n *\n * If this function fails it means that the legacy (first,last,step)\n * spec used by COMMAND will show 0,0,0. 
This is not a dire situation\n * because anyway the legacy (first,last,step) spec is to be deprecated\n * and one should use the new key specs scheme.\n */\nvoid populateCommandLegacyRangeSpec(struct redisCommand *c) {\n    memset(&c->legacy_range_key_spec, 0, sizeof(c->legacy_range_key_spec));\n\n    /* Set the movablekeys flag if we have a GETKEYS flag for modules.\n     * Note that for native redis commands, we always have keyspecs,\n     * with enough information to rely on for movablekeys. */\n    if (c->flags & CMD_MODULE_GETKEYS)\n        c->flags |= CMD_MOVABLE_KEYS;\n\n    /* no key-specs, no keys, exit. */\n    if (c->key_specs_num == 0) {\n        return;\n    }\n\n    if (c->key_specs_num == 1 &&\n        c->key_specs[0].begin_search_type == KSPEC_BS_INDEX &&\n        c->key_specs[0].find_keys_type == KSPEC_FK_RANGE)\n    {\n        /* Quick win, exactly one range spec. */\n        c->legacy_range_key_spec = c->key_specs[0];\n        /* If it has the incomplete flag, set the movablekeys flag on the command. */\n        if (c->key_specs[0].flags & CMD_KEY_INCOMPLETE)\n            c->flags |= CMD_MOVABLE_KEYS;\n        return;\n    }\n\n    int firstkey = INT_MAX, lastkey = 0;\n    int prev_lastkey = 0;\n    for (int i = 0; i < c->key_specs_num; i++) {\n        if (c->key_specs[i].begin_search_type != KSPEC_BS_INDEX ||\n            c->key_specs[i].find_keys_type != KSPEC_FK_RANGE)\n        {\n            /* Found an incompatible (non range) spec, skip it, and set the movablekeys flag. */\n            c->flags |= CMD_MOVABLE_KEYS;\n            continue;\n        }\n        if (c->key_specs[i].fk.range.keystep != 1 ||\n            (prev_lastkey && prev_lastkey != c->key_specs[i].bs.index.pos-1))\n        {\n            /* Found a range spec that's not plain (step of 1) or not consecutive to the previous one.\n             * Skip it, and we set the movablekeys flag. 
*/\n            c->flags |= CMD_MOVABLE_KEYS;\n            continue;\n        }\n        if (c->key_specs[i].flags & CMD_KEY_INCOMPLETE) {\n            /* The spec we're using is incomplete, we can use it, but we also have to set the movablekeys flag. */\n            c->flags |= CMD_MOVABLE_KEYS;\n        }\n        firstkey = min(firstkey, c->key_specs[i].bs.index.pos);\n        /* Get the absolute index for lastkey (in the \"range\" spec, lastkey is relative to firstkey) */\n        int lastkey_abs_index = c->key_specs[i].fk.range.lastkey;\n        if (lastkey_abs_index >= 0)\n            lastkey_abs_index += c->key_specs[i].bs.index.pos;\n        /* For lastkey we use unsigned comparison to handle negative values correctly */\n        lastkey = max((unsigned)lastkey, (unsigned)lastkey_abs_index);\n        prev_lastkey = lastkey;\n    }\n\n    if (firstkey == INT_MAX) {\n        /* Couldn't find range specs, the legacy range spec will remain empty, and we set the movablekeys flag. */\n        c->flags |= CMD_MOVABLE_KEYS;\n        return;\n    }\n\n    serverAssert(firstkey != 0);\n    serverAssert(lastkey != 0);\n\n    c->legacy_range_key_spec.begin_search_type = KSPEC_BS_INDEX;\n    c->legacy_range_key_spec.bs.index.pos = firstkey;\n    c->legacy_range_key_spec.find_keys_type = KSPEC_FK_RANGE;\n    c->legacy_range_key_spec.fk.range.lastkey = lastkey < 0 ? 
lastkey : (lastkey-firstkey); /* in the \"range\" spec, lastkey is relative to firstkey */\n    c->legacy_range_key_spec.fk.range.keystep = 1;\n    c->legacy_range_key_spec.fk.range.limit = 0;\n}\n\nsds catSubCommandFullname(const char *parent_name, const char *sub_name) {\n    return sdscatfmt(sdsempty(), \"%s|%s\", parent_name, sub_name);\n}\n\nvoid commandAddSubcommand(struct redisCommand *parent, struct redisCommand *subcommand, const char *declared_name) {\n    if (!parent->subcommands_dict)\n        parent->subcommands_dict = dictCreate(&commandTableDictType);\n\n    subcommand->parent = parent; /* Assign the parent command */\n    subcommand->id = ACLGetCommandID(subcommand->fullname); /* Assign the ID used for ACL. */\n\n    serverAssert(dictAdd(parent->subcommands_dict, sdsnew(declared_name), subcommand) == DICT_OK);\n}\n\n/* Set implicit ACL categories (see comment above the definition of\n * struct redisCommand). */\nvoid setImplicitACLCategories(struct redisCommand *c) {\n    if (c->flags & CMD_WRITE)\n        c->acl_categories |= ACL_CATEGORY_WRITE;\n    /* Exclude scripting commands from the RO category. */\n    if (c->flags & CMD_READONLY && !(c->acl_categories & ACL_CATEGORY_SCRIPTING))\n        c->acl_categories |= ACL_CATEGORY_READ;\n    if (c->flags & CMD_ADMIN)\n        c->acl_categories |= ACL_CATEGORY_ADMIN|ACL_CATEGORY_DANGEROUS;\n    if (c->flags & CMD_PUBSUB)\n        c->acl_categories |= ACL_CATEGORY_PUBSUB;\n    if (c->flags & CMD_FAST)\n        c->acl_categories |= ACL_CATEGORY_FAST;\n    if (c->flags & CMD_BLOCKING)\n        c->acl_categories |= ACL_CATEGORY_BLOCKING;\n\n    /* If it's not @fast, it's @slow in this binary world. */\n    if (!(c->acl_categories & ACL_CATEGORY_FAST))\n        c->acl_categories |= ACL_CATEGORY_SLOW;\n}\n\n/* Recursively populate the command structure.\n *\n * On success, the function returns C_OK. Otherwise C_ERR is returned and we won't\n * add this command to the commands dict. 
*/\nint populateCommandStructure(struct redisCommand *c) {\n    /* If the command is marked with CMD_SENTINEL, it exists in sentinel. */\n    if (!(c->flags & CMD_SENTINEL) && server.sentinel_mode)\n        return C_ERR;\n\n    /* If the command is marked with CMD_ONLY_SENTINEL, it only exists in sentinel. */\n    if (c->flags & CMD_ONLY_SENTINEL && !server.sentinel_mode)\n        return C_ERR;\n\n    /* Translate the command string flags description into an actual\n     * set of flags. */\n    setImplicitACLCategories(c);\n\n    /* We start with an unallocated histogram and only allocate memory when a command\n     * has been issued for the first time. */\n    c->latency_histogram = NULL;\n\n    /* Handle the legacy range spec and the \"movablekeys\" flag (must be done after populating all key specs). */\n    populateCommandLegacyRangeSpec(c);\n\n    /* Assign the ID used for ACL. */\n    c->id = ACLGetCommandID(c->fullname);\n\n    /* Handle subcommands */\n    if (c->subcommands) {\n        for (int j = 0; c->subcommands[j].declared_name; j++) {\n            struct redisCommand *sub = c->subcommands+j;\n\n            sub->fullname = catSubCommandFullname(c->declared_name, sub->declared_name);\n            if (populateCommandStructure(sub) == C_ERR)\n                continue;\n\n            commandAddSubcommand(c, sub, sub->declared_name);\n        }\n    }\n\n    return C_OK;\n}\n\nextern struct redisCommand redisCommandTable[];\n\n/* Populates the Redis Command Table dict from the static table in commands.c\n * which is auto-generated from the JSON files in the commands folder. 
*/\nvoid populateCommandTable(void) {\n    int j;\n    struct redisCommand *c;\n\n    for (j = 0;; j++) {\n        c = redisCommandTable + j;\n        if (c->declared_name == NULL)\n            break;\n\n        int retval1, retval2;\n\n        c->fullname = sdsnew(c->declared_name);\n        if (populateCommandStructure(c) == C_ERR)\n            continue;\n\n        retval1 = dictAdd(server.commands, sdsdup(c->fullname), c);\n        /* Populate an additional dictionary that will be unaffected\n         * by rename-command statements in redis.conf. */\n        retval2 = dictAdd(server.orig_commands, sdsdup(c->fullname), c);\n        serverAssert(retval1 == DICT_OK && retval2 == DICT_OK);\n    }\n}\n\nvoid resetCommandTableStats(dict* commands) {\n    struct redisCommand *c;\n    dictEntry *de;\n    dictIterator di;\n\n    dictInitSafeIterator(&di, commands);\n    while((de = dictNext(&di)) != NULL) {\n        c = (struct redisCommand *) dictGetVal(de);\n        c->microseconds = 0;\n        c->calls = 0;\n        c->rejected_calls = 0;\n        c->failed_calls = 0;\n        c->slowlog_count = 0;\n        c->slowlog_time_us_sum = 0;\n        c->slowlog_time_us_max = 0;\n        if(c->latency_histogram) {\n            hdr_close(c->latency_histogram);\n            c->latency_histogram = NULL;\n        }\n        if (c->subcommands_dict)\n            resetCommandTableStats(c->subcommands_dict);\n    }\n    dictResetIterator(&di);\n}\n\nvoid resetErrorTableStats(void) {\n    freeErrorsRadixTreeAsync(server.errors);\n    server.errors = raxNew();\n    server.errors_enabled = 1;\n}\n\n/* ========================== Redis OP Array API ============================ */\n\nint redisOpArrayAppend(redisOpArray *oa, int dbid, robj **argv, int argc, int target) {\n    redisOp *op;\n    int prev_capacity = oa->capacity;\n\n    if (oa->numops == 0) {\n        oa->capacity = 16;\n    } else if (oa->numops >= oa->capacity) {\n        oa->capacity *= 2;\n    }\n\n    if (prev_capacity 
!= oa->capacity)\n        oa->ops = zrealloc(oa->ops,sizeof(redisOp)*oa->capacity);\n    op = oa->ops+oa->numops;\n    op->dbid = dbid;\n    op->argv = argv;\n    op->argc = argc;\n    op->target = target;\n    oa->numops++;\n    return oa->numops;\n}\n\nvoid redisOpArrayFree(redisOpArray *oa) {\n    while(oa->numops) {\n        int j;\n        redisOp *op;\n\n        oa->numops--;\n        op = oa->ops+oa->numops;\n        for (j = 0; j < op->argc; j++)\n            decrRefCount(op->argv[j]);\n        zfree(op->argv);\n    }\n    /* no need to free the actual op array, we reuse the memory for future commands */\n    serverAssert(!oa->numops);\n}\n\n/* ====================== Commands lookup and execution ===================== */\n\nint isContainerCommandBySds(sds s) {\n    struct redisCommand *base_cmd = dictFetchValue(server.commands, s);\n    int has_subcommands = base_cmd && base_cmd->subcommands_dict;\n    return has_subcommands;\n}\n\nstruct redisCommand *lookupSubcommand(struct redisCommand *container, sds sub_name) {\n    return dictFetchValue(container->subcommands_dict, sub_name);\n}\n\n/* Look up a command by argv and argc\n *\n * If `strict` is not 0 we expect argc to be exact (i.e. argc==2\n * for a subcommand and argc==1 for a top-level command)\n * `strict` should be used every time we want to look up a command\n * name (e.g. in COMMAND INFO) rather than to find the command\n * a user requested to execute (in processCommand).\n */\nstruct redisCommand *lookupCommandLogic(dict *commands, robj **argv, int argc, int strict) {\n    struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr);\n    int has_subcommands = base_cmd && base_cmd->subcommands_dict;\n    if (argc == 1 || !has_subcommands) {\n        if (strict && argc != 1)\n            return NULL;\n        /* Note: It is possible that base_cmd->proc==NULL (e.g. 
CONFIG) */\n        return base_cmd;\n    } else { /* argc > 1 && has_subcommands */\n        if (strict && argc != 2)\n            return NULL;\n        /* Note: Currently we support just one level of subcommands */\n        return lookupSubcommand(base_cmd, argv[1]->ptr);\n    }\n}\n\nstruct redisCommand *lookupCommand(robj **argv, int argc) {\n    return lookupCommandLogic(server.commands,argv,argc,0);\n}\n\nstruct redisCommand *lookupCommandBySdsLogic(dict *commands, sds s) {\n    int argc, j;\n    sds *strings = sdssplitlen(s,sdslen(s),\"|\",1,&argc);\n    if (strings == NULL)\n        return NULL;\n    if (argc < 1 || argc > 2) {\n        /* Currently we support just one level of subcommands */\n        sdsfreesplitres(strings,argc);\n        return NULL;\n    }\n\n    serverAssert(argc > 0); /* Avoid warning `-Wmaybe-uninitialized` in lookupCommandLogic() */\n    robj objects[argc];\n    robj *argv[argc];\n    for (j = 0; j < argc; j++) {\n        initStaticStringObject(objects[j],strings[j]);\n        argv[j] = &objects[j];\n    }\n\n    struct redisCommand *cmd = lookupCommandLogic(commands,argv,argc,1);\n    sdsfreesplitres(strings,argc);\n    return cmd;\n}\n\nstruct redisCommand *lookupCommandBySds(sds s) {\n    return lookupCommandBySdsLogic(server.commands,s);\n}\n\nstruct redisCommand *lookupCommandByCStringLogic(dict *commands, const char *s) {\n    struct redisCommand *cmd;\n    sds name = sdsnew(s);\n\n    cmd = lookupCommandBySdsLogic(commands,name);\n    sdsfree(name);\n    return cmd;\n}\n\nstruct redisCommand *lookupCommandByCString(const char *s) {\n    return lookupCommandByCStringLogic(server.commands,s);\n}\n\n/* Look up the command in the current table; if not found, also check\n * the original table containing the original command names unaffected by\n * redis.conf rename-command statements.\n *\n * This is used by functions rewriting the argument vector such as\n * rewriteClientCommandVector() in order to set client->cmd pointer\n * 
correctly even if the command was renamed. */\nstruct redisCommand *lookupCommandOrOriginal(robj **argv, int argc) {\n    struct redisCommand *cmd = lookupCommandLogic(server.commands, argv, argc, 0);\n\n    if (!cmd) cmd = lookupCommandLogic(server.orig_commands, argv, argc, 0);\n    return cmd;\n}\n\n/* Commands arriving from the master client or the AOF client should never be rejected. */\nint mustObeyClient(client *c) {\n    return c->id == CLIENT_ID_AOF || c->flags & CLIENT_MASTER;\n}\n\nstatic int shouldPropagate(int target) {\n    if (!server.replication_allowed || target == PROPAGATE_NONE || server.loading)\n        return 0;\n\n    if (target & PROPAGATE_AOF) {\n        if (server.aof_state != AOF_OFF)\n            return 1;\n    }\n    if (target & PROPAGATE_REPL) {\n        if (server.masterhost == NULL && (server.repl_backlog || listLength(server.slaves) != 0 || asmMigrateInProgress()))\n            return 1;\n    }\n\n    return 0;\n}\n\n/* Propagate the specified command (in the context of the specified database id)\n * to AOF and Slaves.\n *\n * 'target' is a bitwise OR of:\n * + PROPAGATE_NONE (no propagation of the command at all)\n * + PROPAGATE_AOF (propagate into the AOF file if it is enabled)\n * + PROPAGATE_REPL (propagate into the replication link)\n *\n * This is an internal low-level function and should not be called!\n *\n * The API for propagating commands is alsoPropagate().\n *\n * A dbid value of -1 indicates that the caller does not want\n * to replicate SELECT for this command (used for database-neutral commands).\n */\nstatic void propagateNow(int dbid, robj **argv, int argc, int target) {\n    if (!shouldPropagate(target))\n        return;\n\n    /* This needs to be unreachable since the dataset should be fixed during\n     * replica pause (otherwise data may be lost during a failover) */\n    serverAssert(!(isPausedActions(PAUSE_ACTION_REPLICA) &&\n                   (!server.client_pause_in_transaction)));\n\n    if (server.aof_state 
!= AOF_OFF && target & PROPAGATE_AOF)\n        feedAppendOnlyFile(dbid,argv,argc);\n    if (target & PROPAGATE_REPL) {\n        replicationFeedSlaves(server.slaves,dbid,argv,argc);\n        asmFeedMigrationClient(argv, argc);\n    }\n}\n\n/* Used inside commands to schedule the propagation of additional commands\n * after the current command is propagated to AOF / Replication.\n *\n * dbid is the database ID the command should be propagated into.\n * Arguments of the command to propagate are passed as an array of Redis\n * object pointers of length 'argc', using the 'argv' vector.\n *\n * The function does not take a reference to the passed 'argv' vector,\n * so it is up to the caller to release the passed argv (but it is usually\n * stack allocated).  The function automatically increments the ref count of\n * passed objects, so the caller does not need to. */\nvoid alsoPropagate(int dbid, robj **argv, int argc, int target) {\n    robj **argvcopy;\n    int j;\n\n    if (!shouldPropagate(target))\n        return;\n\n    argvcopy = zmalloc(sizeof(robj*)*argc);\n    for (j = 0; j < argc; j++) {\n        argvcopy[j] = argv[j];\n        incrRefCount(argv[j]);\n    }\n    redisOpArrayAppend(&server.also_propagate,dbid,argvcopy,argc,target);\n}\n\n/* It is possible to call the function forceCommandPropagation() inside a\n * Redis command implementation in order to force the propagation of a\n * specific command execution into AOF / Replication. */\nvoid forceCommandPropagation(client *c, int flags) {\n    serverAssert(c->cmd->flags & (CMD_WRITE | CMD_MAY_REPLICATE));\n    if (flags & PROPAGATE_REPL) c->flags |= CLIENT_FORCE_REPL;\n    if (flags & PROPAGATE_AOF) c->flags |= CLIENT_FORCE_AOF;\n}\n\n/* Prevent the executed command from being propagated at all. This way we\n * are free to just propagate what we want using the alsoPropagate()\n * API. 
*/\nvoid preventCommandPropagation(client *c) {\n    c->flags |= CLIENT_PREVENT_PROP;\n}\n\n/* AOF-specific version of preventCommandPropagation(). */\nvoid preventCommandAOF(client *c) {\n    c->flags |= CLIENT_PREVENT_AOF_PROP;\n}\n\n/* Replication-specific version of preventCommandPropagation(). */\nvoid preventCommandReplication(client *c) {\n    c->flags |= CLIENT_PREVENT_REPL_PROP;\n}\n\n/* Log the last command a client executed into the slowlog. */\nvoid slowlogPushCurrentCommand(client *c, struct redisCommand *cmd, ustime_t duration) {\n    /* Some commands may contain sensitive data that should not be available in the slowlog. */\n    if (cmd->flags & CMD_SKIP_SLOWLOG)\n        return;\n\n    /* If the command argument vector was rewritten, use the original\n     * arguments. */\n    robj **argv = c->original_argv ? c->original_argv : c->argv;\n    int argc = c->original_argv ? c->original_argc : c->argc;\n    if (slowlogPushEntryIfNeeded(c,argv,argc,duration)) {\n        server.stat_slowlog_count++;\n        server.stat_slowlog_time_us_sum += duration;\n        if (duration > server.stat_slowlog_time_us_max)\n            server.stat_slowlog_time_us_max = duration;\n        cmd->slowlog_count++;\n        cmd->slowlog_time_us_sum += duration;\n        if (duration > cmd->slowlog_time_us_max)\n            cmd->slowlog_time_us_max = duration;\n    }\n}\n\n/* This function is called in order to update the total command histogram duration.\n * The latency unit is nanoseconds.\n * If needed, it will allocate the histogram memory and trim the duration to the upper/lower tracking limits. */\nvoid updateCommandLatencyHistogram(struct hdr_histogram **latency_histogram, int64_t duration_hist) {\n    if (duration_hist < LATENCY_HISTOGRAM_MIN_VALUE)\n        duration_hist = LATENCY_HISTOGRAM_MIN_VALUE;\n    if (duration_hist > LATENCY_HISTOGRAM_MAX_VALUE)\n        duration_hist = LATENCY_HISTOGRAM_MAX_VALUE;\n    if (*latency_histogram == NULL)\n        
hdr_init(LATENCY_HISTOGRAM_MIN_VALUE,LATENCY_HISTOGRAM_MAX_VALUE,LATENCY_HISTOGRAM_PRECISION,latency_histogram);\n    hdr_record_value(*latency_histogram,duration_hist);\n}\n\n/* Handle the alsoPropagate() API for commands that want to propagate\n * multiple separate commands. Note that alsoPropagate() is not affected\n * by the CLIENT_PREVENT_PROP flag. */\nstatic void propagatePendingCommands(void) {\n    if (server.also_propagate.numops == 0)\n        return;\n\n    int j;\n    redisOp *rop;\n\n    /* If we got here it means we have finished an execution-unit.\n     * If that unit has caused propagation of multiple commands, they\n     * should be propagated as a transaction. */\n    int transaction = server.also_propagate.numops > 1;\n\n    /* In case a command that may modify random keys was run *directly*\n     * (i.e. not from within a script, MULTI/EXEC, RM_Call, etc.) we want\n     * to avoid using a transaction (much like active-expire). */\n    if (server.current_client &&\n        server.current_client->cmd &&\n        server.current_client->cmd->flags & CMD_TOUCHES_ARBITRARY_KEYS)\n    {\n        transaction = 0;\n    }\n\n    if (transaction) {\n        /* We use dbid=-1 to indicate we do not want to replicate SELECT.\n         * It'll be inserted together with the next command (inside the MULTI). */\n        propagateNow(-1,&shared.multi,1,PROPAGATE_AOF|PROPAGATE_REPL);\n    }\n\n    for (j = 0; j < server.also_propagate.numops; j++) {\n        rop = &server.also_propagate.ops[j];\n        serverAssert(rop->target);\n        propagateNow(rop->dbid,rop->argv,rop->argc,rop->target);\n    }\n\n    if (transaction) {\n        /* We use dbid=-1 to indicate we do not want to replicate SELECT. */\n        propagateNow(-1,&shared.exec,1,PROPAGATE_AOF|PROPAGATE_REPL);\n    }\n\n    redisOpArrayFree(&server.also_propagate);\n}\n\n/* Performs operations that should be performed after an execution unit ends.\n * An execution unit is code that should be executed 
atomically.\n * Execution units can be nested and do not necessarily start with a Redis command.\n *\n * For example the following is a logical unit:\n *   active expire ->\n *      trigger del notification of some module ->\n *          accessing a key ->\n *              trigger key miss notification of some other module\n *\n * What we want to achieve is that the entire execution unit will be done atomically,\n * currently with respect to replication and post jobs, but in the future there might\n * be other considerations. So we basically want the `postUnitOperations` to trigger\n * after the entire chain has finished. */\nvoid postExecutionUnitOperations(void) {\n    if (server.execution_nesting)\n        return;\n\n    firePostExecutionUnitJobs();\n\n    /* If we are at the top-most call() and not inside an active module\n     * context (e.g. within a module timer) we can propagate what we accumulated. */\n    propagatePendingCommands();\n\n    /* Module subsystem post-execution-unit logic */\n    modulePostExecutionUnitOperations();\n}\n\n/* Increment the command failure counters (either rejected_calls or failed_calls).\n * The decision of which counter to increment is made using the flags argument; options are:\n * * ERROR_COMMAND_REJECTED - update rejected_calls\n * * ERROR_COMMAND_FAILED - update failed_calls\n *\n * The function also resets the prev_err_count to make sure we will not count the same error\n * twice. It's possible to pass a NULL cmd value to indicate that the error was counted elsewhere.\n *\n * The function returns true if the stats were updated and false if not. 
*/\nint incrCommandStatsOnError(struct redisCommand *cmd, int flags) {\n    /* hold the prev error count captured on the last command execution */\n    static long long prev_err_count = 0;\n    int res = 0;\n    if (cmd) {\n        if ((server.stat_total_error_replies - prev_err_count) > 0) {\n            if (flags & ERROR_COMMAND_REJECTED) {\n                cmd->rejected_calls++;\n                res = 1;\n            } else if (flags & ERROR_COMMAND_FAILED) {\n                cmd->failed_calls++;\n                res = 1;\n            }\n        }\n    }\n    prev_err_count = server.stat_total_error_replies;\n    return res;\n}\n\n/* Returns true if the command is not internal, or the connection is internal. */\nstatic bool commandVisibleForClient(client *c, struct redisCommand *cmd) {\n    return (!(cmd->flags & CMD_INTERNAL)) || (c->flags & CLIENT_INTERNAL);\n}\n\n/* Call() is the core of Redis execution of a command.\n *\n * The following flags can be passed:\n * CMD_CALL_NONE        No flags.\n * CMD_CALL_PROPAGATE_AOF   Append command to AOF if it modified the dataset\n *                          or if the client flags are forcing propagation.\n * CMD_CALL_PROPAGATE_REPL  Send command to slaves if it modified the dataset\n *                          or if the client flags are forcing propagation.\n * CMD_CALL_PROPAGATE   Alias for PROPAGATE_AOF|PROPAGATE_REPL.\n * CMD_CALL_FULL        Alias for SLOWLOG|STATS|PROPAGATE.\n *\n * The exact propagation behavior depends on the client flags.\n * Specifically:\n *\n * 1. If the client flags CLIENT_FORCE_AOF or CLIENT_FORCE_REPL are set\n *    and assuming the corresponding CMD_CALL_PROPAGATE_AOF/REPL is set\n *    in the call flags, then the command is propagated even if the\n *    dataset was not affected by the command.\n * 2. 
If the client flags CLIENT_PREVENT_REPL_PROP or CLIENT_PREVENT_AOF_PROP\n *    are set, the propagation into AOF or to slaves is not performed even\n *    if the command modified the dataset.\n *\n * Note that regardless of the client flags, if CMD_CALL_PROPAGATE_AOF\n * or CMD_CALL_PROPAGATE_REPL are not set, then respectively AOF or\n * slaves propagation will never occur.\n *\n * Client flags are modified by the implementation of a given command\n * using the following API:\n *\n * forceCommandPropagation(client *c, int flags);\n * preventCommandPropagation(client *c);\n * preventCommandAOF(client *c);\n * preventCommandReplication(client *c);\n *\n */\nvoid call(client *c, int flags) {\n    long long dirty;\n    uint64_t client_old_flags = c->flags;\n    struct redisCommand *real_cmd = c->realcmd;\n    client *prev_client = server.executing_client;\n    server.executing_client = c;\n\n    /* When call() is issued while loading the AOF we don't want commands called\n     * from a module, EXEC or Lua to go into the slowlog or to populate statistics. */\n    int update_command_stats = !isAOFLoadingContext();\n\n    /* We want to be aware of a client which is making a first-time attempt to execute this command\n     * and a client which is reprocessing the command again (after being unblocked).\n     * Blocked clients can be blocked in different places, and it does not always mean the call() function has been\n     * called. For example, this is required to avoid double logging to monitors. */\n    int reprocessing_command = (c->flags & CLIENT_REEXECUTING_COMMAND) ? 1 : 0;\n\n    /* Initialization: clear the flags that must be set by the command on\n     * demand, and initialize the array for additional commands propagation. 
*/\n    c->flags &= ~(CLIENT_FORCE_AOF|CLIENT_FORCE_REPL|CLIENT_PREVENT_PROP);\n\n    /* Redis core is in charge of propagation when the first entry point\n     * of call() is processCommand().\n     * The only other option to get to call() without having processCommand\n     * as an entry point is if a module triggers RM_Call outside of call()\n     * context (for example, in a timer).\n     * In that case, the module is in charge of propagation. */\n\n    /* Call the command. */\n    dirty = server.dirty;\n    long long old_master_repl_offset = server.master_repl_offset;\n    incrCommandStatsOnError(NULL, 0);\n\n    /* Use monotonic clock if available, and update cached time if needed */\n    const int use_hw_clock = monotonicGetType() == MONOTONIC_CLOCK_HW;\n    monotime monotonic_start = 0;\n    if (use_hw_clock) {\n        monotonic_start = getMonotonicUs();\n        if (server.execution_nesting == 0) {\n            server.accum_call_count_since_ustime++;\n            /* Sync cached time when monotonic clock moves more than 10us\n             * or after 25 commands */\n            if (monotonic_start - server.monotonic_us_when_ustime > 10 ||\n                server.accum_call_count_since_ustime > 25)\n            {\n                updateCachedTime(0);\n                /* Recalculate monotonic_start after time update as ustime()\n                 * in updateCachedTime() might have taken some time */\n                monotonic_start = getMonotonicUs();\n                server.monotonic_us_when_ustime = monotonic_start;\n                server.accum_call_count_since_ustime = 0;\n            }\n        }\n    }\n\n    /* Pass current server.ustime to avoid ustime() call if monotonic clock is used\n     * and time will be updated before command execution based on monotonic clock. */\n    const long long call_timer = use_hw_clock ? 
server.ustime : ustime();\n    enterExecutionUnit(1, call_timer);\n\n    /* Set the CLIENT_EXECUTING_COMMAND flag so we will avoid\n     * sending client-side caching messages in the middle of a command reply.\n     * In case of blocking commands, the flag will be unset only after successfully\n     * re-processing and unblocking the client. */\n    c->flags |= CLIENT_EXECUTING_COMMAND;\n\n    c->cmd->proc(c);\n\n    exitExecutionUnit();\n\n    /* In case the client is blocked after trying to execute the command,\n     * it means the execution is not yet completed and we MIGHT reprocess the command in the future. */\n    if (!(c->flags & CLIENT_BLOCKED)) c->flags &= ~(CLIENT_EXECUTING_COMMAND);\n\n    /* In order to avoid performance implications due to querying the clock using a system call 3 times,\n     * we use the monotonic clock when we are sure its cost is very low, and fall back to a non-monotonic call otherwise. */\n    ustime_t duration;\n    if (use_hw_clock)\n        duration = getMonotonicUs() - monotonic_start;\n    else\n        duration = ustime() - call_timer;\n\n    c->duration += duration;\n    dirty = server.dirty-dirty;\n    if (dirty < 0) dirty = 0;\n\n    /* Update failed command calls if required. */\n\n    if (!incrCommandStatsOnError(real_cmd, ERROR_COMMAND_FAILED) && c->deferred_reply_errors) {\n        /* When call is used from a module client, error stats and total_error_replies\n         * aren't updated since these errors, if handled by the module, are internal\n         * and not reflected to users. However, the commandstats do show these calls\n         * (made by RM_Call), so it should log whether they failed or succeeded. */\n        real_cmd->failed_calls++;\n    }\n\n    /* After executing the command, we will close the client after writing the entire\n     * reply if the 'CLIENT_CLOSE_AFTER_COMMAND' flag is set. 
*/\n    if (c->flags & CLIENT_CLOSE_AFTER_COMMAND) {\n        c->flags &= ~CLIENT_CLOSE_AFTER_COMMAND;\n        c->flags |= CLIENT_CLOSE_AFTER_REPLY;\n    }\n\n    /* Note: the code below uses the real command that was executed;\n     * c->cmd and c->lastcmd may be different, in case of MULTI-EXEC or\n     * re-written commands such as EXPIRE, GEOADD, etc. */\n\n    /* Record the latency this command induced on the main thread,\n     * unless instructed by the caller not to log (happens when processing\n     * a MULTI-EXEC from inside an AOF). */\n    if (update_command_stats) {\n        char *latency_event = (real_cmd->flags & CMD_FAST) ?\n                               \"fast-command\" : \"command\";\n        latencyAddSampleIfNeeded(latency_event,duration/1000);\n        if (server.execution_nesting == 0)\n            durationAddSample(EL_DURATION_TYPE_CMD, duration);\n    }\n\n    /* Log the command into the Slow log if needed.\n     * If the client is blocked we will handle slowlog when it is unblocked. */\n    if (update_command_stats && !(c->flags & CLIENT_BLOCKED))\n        slowlogPushCurrentCommand(c, real_cmd, c->duration);\n\n    /* Send the command to clients in MONITOR mode if applicable,\n     * since some administrative commands are considered too dangerous to be shown.\n     * Other exceptions are a client which is unblocked and retrying to process the command,\n     * or when we are currently in the process of loading the AOF. */\n    if (update_command_stats && !reprocessing_command &&\n        !(c->cmd->flags & (CMD_SKIP_MONITOR|CMD_ADMIN)))\n    {\n        robj **argv = c->original_argv ? c->original_argv : c->argv;\n        int argc = c->original_argv ? c->original_argc : c->argc;\n        replicationFeedMonitors(c,server.monitors,c->db->id,argv,argc);\n    }\n\n    /* Populate the per-command and per-slot statistics that we show in INFO commandstats and CLUSTER SLOT-STATS,\n     * respectively. 
If the client is blocked we will handle latency stats and duration when it is unblocked. */\n    if (update_command_stats && !(c->flags & CLIENT_BLOCKED)) {\n        real_cmd->calls++;\n        real_cmd->microseconds += c->duration;\n        if (server.latency_tracking_enabled && !(c->flags & CLIENT_BLOCKED))\n            updateCommandLatencyHistogram(&(real_cmd->latency_histogram), c->duration*1000);\n        clusterSlotStatsAddCpuDuration(c, c->duration);\n    }\n\n    /* Populate the per-key hotkey stats. Before updating stats for a command\n     * we need to do some setup on the hotkeyStats structure. We only do this\n     * once during the outer-most call in case of nesting. However, when we are\n     * inside a MULTI/EXEC block, we want to track each individual command.\n     * NOTE: even though we update the network bytes during nested calls, we\n     * only update the duration, since the outer-most call records the whole\n     * duration. */\n    if (update_command_stats && !(c->flags & CLIENT_BLOCKED) &&\n        (!server.execution_nesting || server.in_exec))\n    {\n        /* First we need to prepare the hotkeyStats for updates */\n        hotkeyStatsPreCurrentCmd(server.hotkeys, c);\n\n        /* Update the current cmd's keys with the command's duration */\n        hotkeyMetrics metrics = {c->duration, 0};\n        hotkeyStatsUpdateCurrentCmd(server.hotkeys, metrics);\n    }\n\n    /* The duration needs to be reset after each call except for a blocked command,\n     * which is expected to record and reset the duration after unblocking. 
*/\n    if (!(c->flags & CLIENT_BLOCKED)) {\n        c->duration = 0;\n    }\n\n    /* Propagate the command into the AOF and replication link.\n     * We never propagate EXEC explicitly; it will be implicitly\n     * propagated if needed (see propagatePendingCommands).\n     * Also, module commands take care of themselves. */\n    if (flags & CMD_CALL_PROPAGATE &&\n        (c->flags & CLIENT_PREVENT_PROP) != CLIENT_PREVENT_PROP &&\n        c->cmd->proc != execCommand &&\n        !(c->cmd->flags & CMD_MODULE))\n    {\n        int propagate_flags = PROPAGATE_NONE;\n\n        /* Check if the command made changes to the data set. If so,\n         * set it for replication / AOF propagation. */\n        if (dirty) propagate_flags |= (PROPAGATE_AOF|PROPAGATE_REPL);\n\n        /* If the client forced AOF / replication of the command, set\n         * the flags regardless of the command's effects on the data set. */\n        if (c->flags & CLIENT_FORCE_REPL) propagate_flags |= PROPAGATE_REPL;\n        if (c->flags & CLIENT_FORCE_AOF) propagate_flags |= PROPAGATE_AOF;\n\n        /* However, prevent AOF / replication propagation if the command\n         * implementation called preventCommandPropagation() or similar,\n         * or if we don't have the call() flags to do so. */\n        if (c->flags & CLIENT_PREVENT_REPL_PROP        ||\n            c->flags & CLIENT_MODULE_PREVENT_REPL_PROP ||\n            !(flags & CMD_CALL_PROPAGATE_REPL))\n                propagate_flags &= ~PROPAGATE_REPL;\n        if (c->flags & CLIENT_PREVENT_AOF_PROP        ||\n            c->flags & CLIENT_MODULE_PREVENT_AOF_PROP ||\n            !(flags & CMD_CALL_PROPAGATE_AOF))\n                propagate_flags &= ~PROPAGATE_AOF;\n\n        /* Call alsoPropagate() only if at least one of AOF / replication\n         * propagation is needed. 
*/\n        if (propagate_flags != PROPAGATE_NONE)\n            alsoPropagate(c->db->id,c->argv,c->argc,propagate_flags);\n    }\n\n    /* Restore the old replication flags, since call() can be executed\n     * recursively. */\n    c->flags &= ~(CLIENT_FORCE_AOF|CLIENT_FORCE_REPL|CLIENT_PREVENT_PROP);\n    c->flags |= client_old_flags &\n        (CLIENT_FORCE_AOF|CLIENT_FORCE_REPL|CLIENT_PREVENT_PROP);\n\n    /* If the client has keys tracking enabled for client side caching,\n     * make sure to remember the keys it fetched via this command. For read-only\n     * scripts, don't process the script, only the commands it executes. */\n    if ((c->cmd->flags & CMD_READONLY) && (c->cmd->proc != evalRoCommand)\n        && (c->cmd->proc != evalShaRoCommand) && (c->cmd->proc != fcallroCommand))\n    {\n        /* We use the tracking flag of the original external client that\n         * triggered the command, but we take the keys from the actual command\n         * being executed. */\n        if (server.current_client &&\n            (server.current_client->flags & CLIENT_TRACKING) &&\n            !(server.current_client->flags & CLIENT_TRACKING_BCAST))\n        {\n            trackingRememberKeys(server.current_client, c);\n        }\n    }\n\n    if (!(c->flags & CLIENT_BLOCKED)) {\n        /* Modules may call commands in cron, in which case server.current_client\n         * is not set. */\n        if (server.current_client) {\n            server.current_client->commands_processed++;\n        }\n        server.stat_numcommands++;\n    }\n\n    /* Do some maintenance job and cleanup */\n    afterCommand(c);\n\n    /* The afterCommand updates the replication network bytes. At this point we\n     * are ready to update the ingress/egress net bytes and cleanup tracking\n     * of the current command. 
*/\n    if (update_command_stats && !(c->flags & CLIENT_BLOCKED)) {\n        /* Update the current cmd's keys with the command's output bytes */\n        hotkeyMetrics metrics =\n            {0, c->net_output_bytes_curr_cmd + c->net_input_bytes_curr_cmd};\n        hotkeyStatsUpdateCurrentCmd(server.hotkeys, metrics);\n\n        /* Just like the curr cmd setup, we only do the cleanup in case we are not in\n         * a nested command. For MULTI/EXEC, we do cleanup for each individual\n         * command. */\n        if (!server.execution_nesting || server.in_exec)\n            hotkeyStatsPostCurrentCmd(server.hotkeys);\n    }\n\n    /* Clear the original argv.\n     * If the client is blocked we will handle slowlog when it is unblocked.\n     * NOTE: we free the original argv only after hotkeyStatsPostCurrentCmd as\n     * hotkeyStats updates depend on original_argv. */\n    if (!(c->flags & CLIENT_BLOCKED))\n        freeClientOriginalArgv(c);\n\n    /* Remember the replication offset of the client, right after its last\n     * command that resulted in propagation. */\n    if (old_master_repl_offset != server.master_repl_offset)\n        c->woff = server.master_repl_offset;\n\n    /* Client pause takes effect after a transaction has finished. This needs\n     * to be located after everything is propagated. */\n    if (!server.in_exec && server.client_pause_in_transaction) {\n        server.client_pause_in_transaction = 0;\n    }\n\n    server.executing_client = prev_client;\n}\n\n/* Used when a command that is ready for execution needs to be rejected, due to\n * various pre-execution checks. 
It returns the appropriate error to the client.\n * If there's a transaction, it flags it as dirty, and if the command is EXEC,\n * it aborts the transaction.\n * The duration is reset, since we reject the command and it was not recorded.\n * Note: 'reply' is expected to end with \\r\\n */\nvoid rejectCommand(client *c, robj *reply) {\n    flagTransaction(c);\n    c->duration = 0;\n    if (c->cmd) c->cmd->rejected_calls++;\n    if (c->cmd && c->cmd->proc == execCommand) {\n        execCommandAbort(c, reply->ptr);\n    } else {\n        /* Using addReplyError* rather than addReply so that the error can be logged. */\n        addReplyErrorObject(c, reply);\n    }\n}\n\nvoid rejectCommandSds(client *c, sds s) {\n    flagTransaction(c);\n    c->duration = 0;\n    if (c->cmd) c->cmd->rejected_calls++;\n    if (c->cmd && c->cmd->proc == execCommand) {\n        execCommandAbort(c, s);\n        sdsfree(s);\n    } else {\n        /* The following frees 's'. */\n        addReplyErrorSds(c, s);\n    }\n}\n\nvoid rejectCommandFormat(client *c, const char *fmt, ...) {\n    va_list ap;\n    va_start(ap,fmt);\n    sds s = sdscatvprintf(sdsempty(),fmt,ap);\n    va_end(ap);\n    /* Make sure there are no newlines in the string, otherwise invalid protocol\n     * is emitted (the args come from the user, they may contain any character). */\n    sdsmapchars(s, \"\\r\\n\", \"  \",  2);\n    rejectCommandSds(c, s);\n}\n\n/* This is called after a command in call(); we can do some maintenance work in it. */\nvoid afterCommand(client *c) {\n    /* Should be done before trackingHandlePendingKeyInvalidations so that we\n     * reply to the client before invalidating the cache (makes more sense). */\n    postExecutionUnitOperations();\n\n    /* Flush pending tracking invalidations. */\n    trackingHandlePendingKeyInvalidations();\n\n    clusterSlotStatsAddNetworkBytesOutForUserClient(c);\n\n    /* Flush other pending push messages. 
This is done only when we are not in a nested call,\n     * so the messages are not interleaved with the transaction response. */\n    if (!server.execution_nesting)\n        listJoin(c->reply, server.pending_push_messages);\n\n    /* Run debug assertions if any are enabled */\n    if (unlikely(server.dbg_assert_flags))\n        dbgRunAssertions(c->db);\n}\n\n/* Check if c->cmd exists, and fill `err` with details in case it doesn't.\n * Return 1 if it exists. */\nint commandCheckExistence(client *c, sds *err) {\n    if (c->cmd)\n        return 1;\n    if (!err)\n        return 0;\n    if (isContainerCommandBySds(c->argv[0]->ptr)) {\n        /* If we can't find the command but argv[0] by itself is a command,\n         * it means we're dealing with an invalid subcommand. Print help. */\n        sds cmd = sdsnew((char *)c->argv[0]->ptr);\n        sdstoupper(cmd);\n        *err = sdsnew(NULL);\n\n        if (c->argc < 2) {\n            *err = sdscatprintf(*err, \"missing subcommand. Try %s HELP.\", cmd);\n        } else {\n            *err = sdscatprintf(*err, \"unknown subcommand '%.128s'. Try %s HELP.\",\n                                (char *)c->argv[1]->ptr, cmd);\n        }\n\n        sdsfree(cmd);\n    } else {\n        *err = sdsnew(NULL);\n        *err = sdscatprintf(*err, \"unknown command '%.128s'\", (char *)c->argv[0]->ptr);\n\n        if (c->argc >= 2) {\n            sds args = sdsempty();\n            for (int i = 1; i < c->argc && sdslen(args) < 128; i++)\n                args = sdscatprintf(args, \"'%.*s' \", 128 - (int)sdslen(args), (char *)c->argv[i]->ptr);\n            *err = sdscatprintf(*err, \", with args beginning with: %s\", args);\n            sdsfree(args);\n        }\n    }\n    /* Make sure there are no newlines in the string, otherwise invalid protocol\n     * is emitted (the args come from the user, they may contain any character). 
*/\n    sdsmapchars(*err, \"\\r\\n\", \"  \",  2);\n    return 0;\n}\n\n/* Check if argc is valid for the given cmd, and fill `err` with details in case it isn't.\n * Return 1 if valid. */\nint commandCheckArity(struct redisCommand *cmd, int argc, sds *err) {\n    if ((cmd->arity > 0 && cmd->arity != argc) || (argc < -cmd->arity)) {\n        if (err) {\n            *err = sdsnew(NULL);\n            *err = sdscatprintf(*err, \"wrong number of arguments for '%s' command\", cmd->fullname);\n        }\n        return 0;\n    }\n\n    return 1;\n}\n\n/* If we're executing a script, try to extract a set of command flags from\n * it, in case it declared them. Note this is just an attempt; we don't yet\n * know if the script command is well formed. */\nuint64_t getCommandFlags(client *c) {\n    uint64_t cmd_flags = c->cmd->flags;\n\n    if (c->cmd->proc == fcallCommand || c->cmd->proc == fcallroCommand) {\n        cmd_flags = fcallGetCommandFlags(c, cmd_flags);\n    } else if (c->cmd->proc == evalCommand || c->cmd->proc == evalRoCommand ||\n               c->cmd->proc == evalShaCommand || c->cmd->proc == evalShaRoCommand)\n    {\n        cmd_flags = evalGetCommandFlags(c, cmd_flags);\n    }\n\n    return cmd_flags;\n}\n\nvoid preprocessCommand(client *c, pendingCommand *pcmd) {\n    pcmd->slot = INVALID_CLUSTER_SLOT;\n    if (pcmd->argc == 0)\n        return;\n\n    /* Check if we can reuse the previous command instead of looking it up.\n     * The previous command is either the penultimate pending command (if it exists), or c->lastcmd. */\n    struct redisCommand *last_cmd = pcmd->prev ? 
pcmd->prev->cmd : c->lastcmd;\n\n    if (isCommandReusable(last_cmd, pcmd->argv[0]))\n        pcmd->cmd = last_cmd;\n    else\n        pcmd->cmd = lookupCommand(pcmd->argv, pcmd->argc);\n\n    if (!pcmd->cmd) {\n        pcmd->read_error = CLIENT_READ_COMMAND_NOT_FOUND;\n        return;\n    }\n\n    if ((pcmd->cmd->arity > 0 && pcmd->cmd->arity != pcmd->argc) ||\n        (pcmd->argc < -pcmd->cmd->arity))\n    {\n        pcmd->read_error = CLIENT_READ_BAD_ARITY;\n        return;\n    }\n\n    pcmd->keys_result = (getKeysResult)GETKEYS_RESULT_INIT;\n    int num_keys = extractKeysAndSlot(pcmd->cmd, pcmd->argv, pcmd->argc,\n                                      &pcmd->keys_result, &pcmd->slot);\n    if (num_keys < 0) {\n        /* We skip the checks below since we expect the command to be rejected in this case. */\n        return;\n    } else if (num_keys > 0) {\n        /* Handle cross-slot keys: mark error and reset slot. */\n        if (pcmd->slot == CLUSTER_CROSSSLOT) {\n            pcmd->read_error = CLIENT_READ_CROSS_SLOT;\n            pcmd->slot = INVALID_CLUSTER_SLOT;\n        }\n    }\n    pcmd->flags |= PENDING_CMD_KEYS_RESULT_VALID;\n}\n\n/* If this function gets called, we already read a whole\n * command; arguments are in the client argv/argc fields.\n * processCommand() executes the command or prepares the\n * server for a bulk read from the client.\n *\n * If C_OK is returned the client is still alive and valid and\n * other operations can be performed by the caller. Otherwise\n * if C_ERR is returned the client was destroyed (i.e. after QUIT). */\nint processCommand(client *c) {\n    if (!scriptIsTimedout()) {\n        /* Both EXEC and scripts call call() directly so there should be\n         * no way in_exec or scriptIsRunning() is 1.\n         * That is unless lua_timedout, in which case the client may run\n         * some commands. 
*/\n        serverAssert(!server.in_exec);\n        serverAssert(!scriptIsRunning());\n    }\n\n    /* In case we are entering processCommand() and we already have a command, we assume\n     * this is a reprocessing of this command, so we do not want to perform some of the actions again. */\n    int client_reprocessing_command = c->cmd ? 1 : 0;\n\n    /* Only run the command filters if not reprocessing the command. */\n    if (!client_reprocessing_command) {\n        moduleCallCommandFilters(c);\n        reqresAppendRequest(c);\n    }\n\n    /* If we're inside a module blocked context yielding that wants to avoid\n     * processing clients, postpone the command. */\n    if (server.busy_module_yield_flags != BUSY_MODULE_YIELD_NONE &&\n        !(server.busy_module_yield_flags & BUSY_MODULE_YIELD_CLIENTS))\n    {\n        blockPostponeClient(c);\n        return C_OK;\n    }\n\n    /* Now look up the command and check ASAP for trivial error conditions\n     * such as wrong arity, bad command name and so forth.\n     * In case we are reprocessing a command after it was blocked,\n     * we do not have to repeat the same checks. */\n    if (!client_reprocessing_command) {\n        /* Check if we can reuse the last command instead of looking it up, if we already have that info. */\n        struct redisCommand *cmd = c->lookedcmd;\n\n        /* The command may have been modified by modules (e.g., in CommandFilters callbacks),\n         * so we need to look it up again. */\n        if (!cmd) {\n            if (isCommandReusable(c->lastcmd, c->argv[0]))\n                cmd = c->lastcmd;\n            else\n                cmd = lookupCommand(c->argv, c->argc);\n        }\n\n        if (!cmd) {\n            /* Handle possible security attacks. 
*/\n            if (!strcasecmp(c->argv[0]->ptr,\"host:\") || !strcasecmp(c->argv[0]->ptr,\"post\")) {\n                securityWarningCommand(c);\n                return C_ERR;\n            }\n        }\n\n        /* Internal commands appear nonexistent to non-internal connections.\n         * Masters and AOF loads are implicitly internal. */\n        if (cmd && (cmd->flags & CMD_INTERNAL) && !((c->flags & CLIENT_INTERNAL) || mustObeyClient(c))) {\n            cmd = NULL;\n        }\n\n        c->cmd = c->lastcmd = c->realcmd = cmd;\n        sds err;\n        if (!commandCheckExistence(c, &err)) {\n            rejectCommandSds(c, err);\n            return C_OK;\n        }\n        if (!commandCheckArity(c->cmd, c->argc, &err)) {\n            rejectCommandSds(c, err);\n            return C_OK;\n        }\n\n        /* Check if the command is marked as protected and the relevant configuration allows it */\n        if (c->cmd->flags & CMD_PROTECTED) {\n            if ((c->cmd->proc == debugCommand && !allowProtectedAction(server.enable_debug_cmd, c)) ||\n                (c->cmd->proc == moduleCommand && !allowProtectedAction(server.enable_module_cmd, c)))\n            {\n                rejectCommandFormat(c,\"%s command not allowed. If the %s option is set to \\\"local\\\", \"\n                                      \"you can run it from a local connection, otherwise you need to set this option \"\n                                      \"in the configuration file, and then restart the server.\",\n                                      c->cmd->proc == debugCommand ? \"DEBUG\" : \"MODULE\",\n                                      c->cmd->proc == debugCommand ? 
\"enable-debug-command\" : \"enable-module-command\");\n                return C_OK;\n            }\n        }\n    }\n\n    const uint64_t cmd_flags = getCommandFlags(c);\n\n    int is_read_command = (cmd_flags & CMD_READONLY) ||\n                           (c->cmd->proc == execCommand && (c->mstate.cmd_flags & CMD_READONLY));\n    int is_write_command = (cmd_flags & CMD_WRITE) ||\n                           (c->cmd->proc == execCommand && (c->mstate.cmd_flags & CMD_WRITE));\n    int is_denyoom_command = (cmd_flags & CMD_DENYOOM) ||\n                             (c->cmd->proc == execCommand && (c->mstate.cmd_flags & CMD_DENYOOM));\n    int is_denystale_command = !(cmd_flags & CMD_STALE) ||\n                               (c->cmd->proc == execCommand && (c->mstate.cmd_inv_flags & CMD_STALE));\n    int is_denyloading_command = !(cmd_flags & CMD_LOADING) ||\n                                 (c->cmd->proc == execCommand && (c->mstate.cmd_inv_flags & CMD_LOADING));\n    int is_may_replicate_command = (cmd_flags & (CMD_WRITE | CMD_MAY_REPLICATE)) ||\n                                   (c->cmd->proc == execCommand && (c->mstate.cmd_flags & (CMD_WRITE | CMD_MAY_REPLICATE)));\n    int is_deny_async_loading_command = (cmd_flags & CMD_NO_ASYNC_LOADING) ||\n                                        (c->cmd->proc == execCommand && (c->mstate.cmd_flags & CMD_NO_ASYNC_LOADING));\n    int obey_client = mustObeyClient(c);\n\n    if (authRequired(c)) {\n        /* AUTH, HELLO and commands with the CMD_NO_AUTH flag are valid even in a\n         * non-authenticated state. */\n        if (!(c->cmd->flags & CMD_NO_AUTH)) {\n            rejectCommand(c,shared.noautherr);\n            return C_OK;\n        }\n    }\n\n    if (c->flags & CLIENT_MULTI && c->cmd->flags & CMD_NO_MULTI) {\n        rejectCommandFormat(c,\"Command not allowed inside a transaction\");\n        return C_OK;\n    }\n\n    /* Check if the user can run this command according to the current\n     * ACLs. 
*/\n    int acl_errpos;\n    int acl_retval = ACLCheckAllPerm(c,&acl_errpos);\n    if (acl_retval != ACL_OK) {\n        addACLLogEntry(c,acl_retval,(c->flags & CLIENT_MULTI) ? ACL_LOG_CTX_MULTI : ACL_LOG_CTX_TOPLEVEL,acl_errpos,NULL,NULL);\n        sds msg = getAclErrorMessage(acl_retval, c->user, c->cmd, c->argv[acl_errpos]->ptr, 0);\n        rejectCommandFormat(c, \"-NOPERM %s\", msg);\n        sdsfree(msg);\n        return C_OK;\n    }\n\n    /* If cluster is enabled, perform the cluster redirection here.\n     * However, we don't perform the redirection if:\n     * 1) The sender of this command is our master.\n     * 2) The command has no key arguments. */\n    if (server.cluster_enabled &&\n        !mustObeyClient(c) &&\n        !(!(c->cmd->flags&CMD_MOVABLE_KEYS) && c->cmd->key_specs_num == 0 &&\n          c->cmd->proc != execCommand))\n    {\n        int error_code;\n        clusterNode *n = getNodeByQuery(c,c->cmd,c->argv,c->argc,\n            &c->slot,getClientCachedKeyResult(c),c->read_error,cmd_flags,&error_code);\n        if (n == NULL || !clusterNodeIsMyself(n)) {\n            if (c->cmd->proc == execCommand) {\n                discardTransaction(c);\n            } else {\n                flagTransaction(c);\n            }\n            clusterRedirectClient(c,n,c->slot,error_code);\n            c->duration = 0;\n            c->cmd->rejected_calls++;\n            return C_OK;\n        }\n    }\n\n    /* Check if the command keys are all in the same slot for cluster compatibility */\n    if (server.cluster_compatibility_sample_ratio && !server.cluster_enabled &&\n        !(!(c->cmd->flags&CMD_MOVABLE_KEYS) && c->cmd->key_specs_num == 0 &&\n          c->cmd->proc != execCommand) && SHOULD_CLUSTER_COMPATIBILITY_SAMPLE())\n    {\n        c->cluster_compatibility_check_slot = -1;\n        if (!areCommandKeysInSameSlot(c, &c->cluster_compatibility_check_slot)) {\n            server.stat_cluster_incompatible_ops++;\n            /* If we find cross-slot keys, 
reset slot to -2 to indicate we won't\n             * check this command again. That is useful for scripts, since we need\n             * this variable to decide whether to continue checking accessed keys. */\n            c->cluster_compatibility_check_slot = -2;\n        }\n    }\n\n    /* Disconnect some clients if total clients memory is too high. We do this\n     * before key eviction, after the last command was executed and consumed\n     * some client output buffer memory. */\n    evictClients();\n    if (server.current_client == NULL) {\n        /* If we evicted ourselves then abort processing the command. */\n        return C_ERR;\n    }\n\n    /* Handle the maxmemory directive.\n     *\n     * Note that we do not want to reclaim memory if we are here re-entering\n     * the event loop since there is a busy Lua script running in timeout\n     * condition, to avoid mixing the propagation of scripts with the\n     * propagation of DELs due to eviction. */\n    if (server.maxmemory && !isInsideYieldingLongCommand()) {\n        int out_of_memory = (performEvictions() == EVICT_FAIL);\n\n        /* performEvictions may evict keys, so we need to flush pending tracking\n         * invalidation keys. If we don't do this, we may get an invalidation\n         * message after we perform an operation on the key, when in fact the\n         * message belongs to the old value of the key before it got evicted. */\n        trackingHandlePendingKeyInvalidations();\n\n        /* performEvictions may flush slave output buffers. This may result\n         * in a slave, which may be the active client, being freed. 
*/\n        if (server.current_client == NULL) return C_ERR;\n\n        if (out_of_memory && is_denyoom_command) {\n            rejectCommand(c, shared.oomerr);\n            return C_OK;\n        }\n\n        /* Save the out_of_memory result at command start; otherwise, if we check\n         * OOM on the first write within a script, memory used by the Lua stack\n         * and arguments might interfere. We need to save it for EXEC and module\n         * calls too, since these can call EVAL, but avoid saving it during an\n         * interrupted / yielding busy script / module. */\n        server.pre_command_oom_state = out_of_memory;\n    }\n\n    /* Make sure to use a reasonable amount of memory for client-side\n     * caching metadata. */\n    if (server.tracking_clients) trackingLimitUsedSlots();\n\n    /* Don't accept write commands if there are problems persisting on disk,\n     * unless coming from our master, in which case check the replica ignore\n     * disk write error config to either log or crash. 
*/\n    int deny_write_type = writeCommandsDeniedByDiskError();\n    if (deny_write_type != DISK_ERROR_TYPE_NONE &&\n        (is_write_command || c->cmd->proc == pingCommand))\n    {\n        if (obey_client) {\n            if (!server.repl_ignore_disk_write_error && c->cmd->proc != pingCommand) {\n                serverPanic(\"Replica was unable to write command to disk.\");\n            } else {\n                static mstime_t last_log_time_ms = 0;\n                const mstime_t log_interval_ms = 10000;\n                if (server.mstime > last_log_time_ms + log_interval_ms) {\n                    last_log_time_ms = server.mstime;\n                    serverLog(LL_WARNING, \"Replica is applying a command even though \"\n                                          \"it is unable to write to disk.\");\n                }\n            }\n        } else {\n            sds err = writeCommandsGetDiskErrorMessage(deny_write_type);\n            /* Remove the trailing CRLF since rejectCommandSds adds it. */\n            sdssubstr(err, 0, sdslen(err)-2);\n            rejectCommandSds(c, err);\n            return C_OK;\n        }\n    }\n\n    /* Don't accept write commands if there are not enough good slaves and\n     * the user configured the min-slaves-to-write option. */\n    if (is_write_command && !checkGoodReplicasStatus()) {\n        rejectCommand(c, shared.noreplicaserr);\n        return C_OK;\n    }\n\n    /* Don't accept write commands if this is a read-only slave. But\n     * accept write commands if this is our master. 
*/\n    if (server.masterhost && server.repl_slave_ro &&\n        !obey_client &&\n        is_write_command)\n    {\n        rejectCommand(c, shared.roslaveerr);\n        return C_OK;\n    }\n\n    /* If this node is a replica and there is a trim job due to slot migration,\n     * we cannot process commands from the master for the slot being trimmed.\n     * Otherwise, the trim cycle could mistakenly delete newly added keys.\n     * In this case, the master will be blocked until the trim job finishes.\n     * This is supposed to be a rare event as it needs to migrate slots and\n     * import them back before the trim job is done. */\n    if ((c->flags & CLIENT_MASTER) && is_write_command && server.cluster_enabled) {\n        /* Check if the command is accessing keys in a slot being trimmed. */\n        int slot_in_trim = asmGetTrimmingSlotForCommand(c->cmd, c->argv, c->argc);\n        if (slot_in_trim != -1) {\n            serverLog(LL_WARNING, \"Master is sending a command for slot %d. \"\n                                  \"There is a trim job in progress for this slot. \"\n                                  \"This replica cannot process this command right now. \"\n                                  \"Blocking master client until the trim job is done.\", slot_in_trim);\n            /* Block the master client. */\n            blockPostponeClientWithType(c, BLOCKED_POSTPONE_TRIM);\n            return C_OK;\n        }\n    }\n\n    /* Only allow a subset of commands in the context of Pub/Sub if the\n     * connection is in RESP2 mode. With RESP3 there are no limits. 
*/\n    if ((c->flags & CLIENT_PUBSUB && c->resp == 2) &&\n        c->cmd->proc != pingCommand &&\n        c->cmd->proc != subscribeCommand &&\n        c->cmd->proc != ssubscribeCommand &&\n        c->cmd->proc != unsubscribeCommand &&\n        c->cmd->proc != sunsubscribeCommand &&\n        c->cmd->proc != psubscribeCommand &&\n        c->cmd->proc != punsubscribeCommand &&\n        c->cmd->proc != quitCommand &&\n        c->cmd->proc != resetCommand) {\n        rejectCommandFormat(c,\n            \"Can't execute '%s': only (P|S)SUBSCRIBE / \"\n            \"(P|S)UNSUBSCRIBE / PING / QUIT / RESET are allowed in this context\",\n            c->cmd->fullname);\n        return C_OK;\n    }\n\n    /* Only allow commands with the flag \"t\", such as INFO, REPLICAOF and so on,\n     * when replica-serve-stale-data is no and we are a replica with a broken\n     * link with master. */\n    if (server.masterhost && server.repl_state != REPL_STATE_CONNECTED &&\n        server.repl_serve_stale_data == 0 &&\n        is_denystale_command)\n    {\n        rejectCommand(c, shared.masterdownerr);\n        return C_OK;\n    }\n\n    /* Loading DB? Return an error if the command doesn't have the\n     * CMD_LOADING flag. */\n    if (server.loading && !server.async_loading && is_denyloading_command) {\n        rejectCommand(c, shared.loadingerr);\n        return C_OK;\n    }\n\n    /* During async-loading, block certain commands. 
*/\n    if (server.async_loading && is_deny_async_loading_command) {\n        rejectCommand(c,shared.loadingerr);\n        return C_OK;\n    }\n\n    /* When a busy job (script / module) is running, only allow a limited\n     * number of commands.\n     * Note that we need to allow the transaction commands; otherwise clients\n     * sending a pipelined transaction without error checking may have\n     * the MULTI plus a few initial commands refused, then the timeout\n     * condition resolves, and the bottom-half of the transaction gets\n     * executed, see GitHub PR #7022. */\n    if (isInsideYieldingLongCommand() && !(c->cmd->flags & CMD_ALLOW_BUSY)) {\n        if (server.busy_module_yield_flags && server.busy_module_yield_reply) {\n            rejectCommandFormat(c, \"-BUSY %s\", server.busy_module_yield_reply);\n        } else if (server.busy_module_yield_flags) {\n            rejectCommand(c, shared.slowmoduleerr);\n        } else if (scriptIsEval()) {\n            rejectCommand(c, shared.slowevalerr);\n        } else {\n            rejectCommand(c, shared.slowscripterr);\n        }\n        return C_OK;\n    }\n\n    /* Prevent a replica from sending commands that access the keyspace.\n     * The main objective here is to prevent abuse of the client pause check,\n     * from which replicas are exempt. */\n    if ((c->flags & CLIENT_SLAVE) && (is_may_replicate_command || is_write_command || is_read_command)) {\n        rejectCommandFormat(c, \"Replica can't interact with the keyspace\");\n        return C_OK;\n    }\n\n    /* If the server is paused, block the client until\n     * the pause has ended. Replicas are never paused. 
*/\n    if (!(c->flags & CLIENT_SLAVE) && \n        ((isPausedActions(PAUSE_ACTION_CLIENT_ALL)) ||\n        ((isPausedActions(PAUSE_ACTION_CLIENT_WRITE)) && is_may_replicate_command)))\n    {\n        blockPostponeClient(c);\n        return C_OK;       \n    }\n\n    /* Exec the command */\n    if (c->flags & CLIENT_MULTI &&\n        c->cmd->proc != execCommand &&\n        c->cmd->proc != discardCommand &&\n        c->cmd->proc != multiCommand &&\n        c->cmd->proc != watchCommand &&\n        c->cmd->proc != quitCommand &&\n        c->cmd->proc != resetCommand)\n    {\n        queueMultiCommand(c, cmd_flags);\n        addReply(c,shared.queued);\n    } else {\n        int flags = CMD_CALL_FULL;\n        call(c,flags);\n        if (listLength(server.ready_keys) && !isInsideYieldingLongCommand())\n            handleClientsBlockedOnKeys();\n    }\n    return C_OK;\n}\n\n/* Checks if all keys in a command (or a MULTI-EXEC) belong to the same hash slot.\n * If yes, return 1, otherwise 0. If hashslot is not NULL, it will be set to the\n * slot of the keys. */\nint areCommandKeysInSameSlot(client *c, int *hashslot) {\n    int slot = -1;\n    multiState *ms = NULL;\n\n    if (c->cmd->proc == execCommand) {\n        if (!(c->flags & CLIENT_MULTI)) return 1;\n        else ms = &c->mstate;\n    }\n\n    /* If client is in multi-exec, we need to check the slot of all keys\n     * in the transaction. */\n    for (int i = 0; i < (ms ? ms->count : 1); i++) {\n        struct redisCommand *cmd = ms ? ms->commands[i]->cmd : c->cmd;\n        robj **argv = ms ? ms->commands[i]->argv : c->argv;\n        int argc = ms ? 
ms->commands[i]->argc : c->argc;\n\n        getKeysResult result = GETKEYS_RESULT_INIT;\n        int numkeys = getKeysFromCommand(cmd, argv, argc, &result);\n        keyReference *keyindex = result.keys;\n\n        /* Check if all keys hash to the same slot; return 0 if not. */\n        for (int j = 0; j < numkeys; j++) {\n            robj *thiskey = argv[keyindex[j].pos];\n            int thisslot = keyHashSlot((char*)thiskey->ptr, sdslen(thiskey->ptr));\n            if (slot == -1) {\n                slot = thisslot;\n            } else if (slot != thisslot) {\n                getKeysFreeResult(&result);\n                return 0;\n            }\n        }\n        getKeysFreeResult(&result);\n    }\n    if (hashslot) *hashslot = slot;\n    return 1;\n}\n\n/* ====================== Error lookup and execution ===================== */\n\n/* Users who abuse lua error_reply will generate a new error object on each\n * error call, which can make server.errors grow without bound. This will\n * cause the server to block when calling INFO (we also return errorstats by\n * default). To prevent the damage it can cause, when a misuse is detected,\n * we log a warning and disable the errorstats to avoid adding\n * more new errors. It can be re-enabled via CONFIG RESETSTAT. */\n#define ERROR_STATS_NUMBER 128\nvoid incrementErrorCount(const char *fullerr, size_t namelen) {\n    /* errorstats is disabled, return ASAP. 
*/\n    if (!server.errors_enabled) return;\n\n    void *result;\n    if (!raxFind(server.errors,(unsigned char*)fullerr,namelen,&result)) {\n        if (server.errors->numele >= ERROR_STATS_NUMBER) {\n            sds errors = sdsempty();\n            raxIterator ri;\n            raxStart(&ri, server.errors);\n            raxSeek(&ri, \"^\", NULL, 0);\n            while (raxNext(&ri)) {\n                char *tmpsafe;\n                errors = sdscatlen(errors, getSafeInfoString((char *)ri.key, ri.key_len, &tmpsafe), ri.key_len);\n                errors = sdscatlen(errors, \", \", 2);\n                if (tmpsafe != NULL) zfree(tmpsafe);\n            }\n            sdsrange(errors, 0, -3); /* Remove final \", \". */\n            raxStop(&ri);\n\n            /* Print the warning log and the contents of server.errors to the log. */\n            serverLog(LL_WARNING,\n                      \"Errorstats stopped adding new errors because the number of \"\n                      \"errors reached the limit. This may be misuse of lua error_reply; \"\n                      \"please check INFO ERRORSTATS. This can be re-enabled via \"\n                      \"CONFIG RESETSTAT.\");\n            serverLog(LL_WARNING, \"Current error code list: %s\", errors);\n            sdsfree(errors);\n\n            /* Reset the errors and add a single element to indicate that it is disabled. */\n            resetErrorTableStats();\n            incrementErrorCount(\"ERRORSTATS_DISABLED\", 19);\n            server.errors_enabled = 0;\n            return;\n        }\n\n        struct redisError *error = zmalloc(sizeof(*error));\n        error->count = 1;\n        raxInsert(server.errors,(unsigned char*)fullerr,namelen,error,NULL);\n    } else {\n        struct redisError *error = result;\n        error->count++;\n    }\n}\n\n/*================================== Shutdown =============================== */\n\n/* Close listening sockets. 
Also unlink the unix domain socket if\n * unlink_unix_socket is non-zero. */\nvoid closeListeningSockets(int unlink_unix_socket) {\n    int j;\n\n    for (int i = 0; i < CONN_TYPE_MAX; i++) {\n        connListener *listener = &server.listeners[i];\n        if (listener->ct == NULL)\n            continue;\n\n        for (j = 0; j < listener->count; j++) close(listener->fd[j]);\n    }\n\n    if (server.cluster_enabled)\n        for (j = 0; j < server.clistener.count; j++) close(server.clistener.fd[j]);\n    if (unlink_unix_socket && server.unixsocket) {\n        serverLog(LL_NOTICE,\"Removing the unix socket file.\");\n        if (unlink(server.unixsocket) != 0)\n            serverLog(LL_WARNING,\"Error removing the unix socket file: %s\",strerror(errno));\n    }\n}\n\n/* Prepare for shutting down the server. Flags:\n *\n * - SHUTDOWN_SAVE: Save a database dump even if the server is configured not to\n *   save any dump.\n *\n * - SHUTDOWN_NOSAVE: Don't save any database dump even if the server is\n *   configured to save one.\n *\n * - SHUTDOWN_NOW: Don't wait for replicas to catch up before shutting down.\n *\n * - SHUTDOWN_FORCE: Ignore errors writing AOF and RDB files on disk, which\n *   would normally prevent a shutdown.\n *\n * Unless SHUTDOWN_NOW is set, if any replicas are lagging behind, C_ERR is\n * returned and server.shutdown_mstime is set to a timestamp to allow a grace\n * period for the replicas to catch up. This is checked and handled by\n * serverCron() which completes the shutdown as soon as possible.\n *\n * If shutting down fails due to errors writing RDB or AOF files, C_ERR is\n * returned and an error is logged. If the flag SHUTDOWN_FORCE is set, these\n * errors are logged but ignored and C_OK is returned.\n *\n * On success, this function returns C_OK and then it's OK to call exit(0). 
*/\nint prepareForShutdown(int flags) {\n    if (isShutdownInitiated()) return C_ERR;\n\n    /* When SHUTDOWN is called while the server is loading a dataset in\n     * memory we need to make sure no attempt is performed to save\n     * the dataset on shutdown (otherwise it could overwrite the current DB\n     * with half-read data).\n     *\n     * Also when in Sentinel mode clear the SAVE flag and force NOSAVE. */\n    if (server.loading || server.sentinel_mode)\n        flags = (flags & ~SHUTDOWN_SAVE) | SHUTDOWN_NOSAVE;\n\n    server.shutdown_flags = flags;\n\n    serverLog(LL_NOTICE,\"User requested shutdown...\");\n    if (server.supervised_mode == SUPERVISED_SYSTEMD)\n        redisCommunicateSystemd(\"STOPPING=1\\n\");\n\n    /* Cancel all ASM tasks before shutting down. */\n    clusterAsmCancel(NULL, \"server shutdown\");\n\n    /* If we have any replicas, let them catch up with the replication offset\n     * before we shut down, to avoid data loss. */\n    if (!(flags & SHUTDOWN_NOW) &&\n        server.shutdown_timeout != 0 &&\n        !isReadyToShutdown())\n    {\n        server.shutdown_mstime = server.mstime + server.shutdown_timeout * 1000;\n        if (!isPausedActions(PAUSE_ACTION_REPLICA)) sendGetackToReplicas();\n        pauseActions(PAUSE_DURING_SHUTDOWN,\n                     LLONG_MAX,\n                     PAUSE_ACTIONS_CLIENT_WRITE_SET);\n        serverLog(LL_NOTICE, \"Waiting for replicas before shutting down.\");\n        return C_ERR;\n    }\n\n    return finishShutdown();\n}\n\nstatic inline int isShutdownInitiated(void) {\n    return server.shutdown_mstime != 0;\n}\n\n/* Returns 0 if there are any replicas lagging in replication that we\n * need to wait for before shutting down. Returns 1 if we're ready to shut\n * down now. */\nint isReadyToShutdown(void) {\n    if (listLength(server.slaves) == 0) return 1;  /* No replicas. 
*/\n\n    listIter li;\n    listNode *ln;\n    listRewind(server.slaves, &li);\n    while ((ln = listNext(&li)) != NULL) {\n        client *replica = listNodeValue(ln);\n        /* Don't count migration destination replicas. */\n        if (replica->flags & CLIENT_ASM_MIGRATING) continue;\n        if (replica->repl_ack_off != server.master_repl_offset) return 0;\n    }\n    return 1;\n}\n\nstatic void cancelShutdown(void) {\n    atomicSet(server.shutdown_asap, 0);\n    server.shutdown_flags = 0;\n    server.shutdown_mstime = 0;\n    atomicSet(server.last_sig_received, 0);\n    replyToClientsBlockedOnShutdown();\n    unpauseActions(PAUSE_DURING_SHUTDOWN);\n}\n\n/* Returns C_OK if shutdown was aborted and C_ERR if shutdown wasn't ongoing. */\nint abortShutdown(void) {\n    if (isShutdownInitiated()) {\n        cancelShutdown();\n    } else if (shouldShutdownAsap()) {\n        /* Signal handler has requested shutdown, but it hasn't been initiated\n         * yet. Just clear the flag. */\n        atomicSet(server.shutdown_asap, 0);\n    } else {\n        /* Shutdown neither initiated nor requested. */\n        return C_ERR;\n    }\n    serverLog(LL_NOTICE, \"Shutdown manually aborted.\");\n    return C_OK;\n}\n\n/* The final step of the shutdown sequence. Returns C_OK if the shutdown\n * sequence was successful and it's OK to call exit(). If C_ERR is returned,\n * it's not safe to call exit(). */\nint finishShutdown(void) {\n\n    int save = server.shutdown_flags & SHUTDOWN_SAVE;\n    int nosave = server.shutdown_flags & SHUTDOWN_NOSAVE;\n    int force = server.shutdown_flags & SHUTDOWN_FORCE;\n\n    /* Log a warning for each replica that is lagging. 
*/\n    listIter replicas_iter;\n    listNode *replicas_list_node;\n    int num_replicas = 0, num_lagging_replicas = 0;\n    listRewind(server.slaves, &replicas_iter);\n    while ((replicas_list_node = listNext(&replicas_iter)) != NULL) {\n        client *replica = listNodeValue(replicas_list_node);\n        /* Don't count migration destination replicas. */\n        if (replica->flags & CLIENT_ASM_MIGRATING) continue;\n        num_replicas++;\n\n        /* We pause the IO thread this replica is running on so we avoid data\n         * races. */\n        int paused = 0;\n        if (replica->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n            pauseIOThread(replica->tid);\n            paused = 1;\n        }\n\n        if (replica->repl_ack_off != server.master_repl_offset) {\n            num_lagging_replicas++;\n            long lag = replica->replstate == SLAVE_STATE_ONLINE ?\n                time(NULL) - replica->repl_ack_time : 0;\n            serverLog(LL_NOTICE,\n                      \"Lagging replica %s reported offset %lld behind master, lag=%ld, state=%s.\",\n                      replicationGetSlaveName(replica),\n                      server.master_repl_offset - replica->repl_ack_off,\n                      lag,\n                      replstateToString(replica->replstate));\n        }\n\n        if (paused) resumeIOThread(replica->tid);\n    }\n    if (num_replicas > 0) {\n        serverLog(LL_NOTICE,\n                  \"%d of %d replicas are in sync when shutting down.\",\n                  num_replicas - num_lagging_replicas,\n                  num_replicas);\n    }\n\n    /* Kill all the Lua debugger forked sessions. */\n    ldbKillForkedSessions();\n\n    /* Kill the saving child if there is a background saving in progress.\n       We want to avoid race conditions, for instance our saving child may\n       overwrite the synchronous saving did by SHUTDOWN. 
*/\n    if (server.child_type == CHILD_TYPE_RDB) {\n        serverLog(LL_WARNING,\"There is a child saving an .rdb. Killing it!\");\n        killRDBChild();\n        /* Note that normally killRDBChild has backgroundSaveDoneHandler\n         * doing its cleanup, but in this case that code will not be reached,\n         * so we need to call rdbRemoveTempFile, which will close the fd (in\n         * order to actually unlink the file) in a background thread.\n         * The temp rdb file fd may not be closed when redis exits quickly,\n         * but the OS will close it when the process exits. */\n        rdbRemoveTempFile(server.child_pid, 0);\n        resetChildState();\n    }\n\n    /* Kill the module child if there is one. */\n    if (server.child_type == CHILD_TYPE_MODULE) {\n        serverLog(LL_WARNING,\"There is a module fork child. Killing it!\");\n        TerminateModuleForkChild(server.child_pid,0);\n    }\n\n    /* Kill the AOF saving child as the AOF we already have may be longer\n     * but contains the full dataset anyway. */\n    if (server.child_type == CHILD_TYPE_AOF) {\n        /* If we have AOF enabled but haven't written the AOF yet, don't\n         * shut down or else the dataset will be lost. */\n        if (server.aof_state == AOF_WAIT_REWRITE) {\n            if (force) {\n                serverLog(LL_WARNING, \"Writing initial AOF. Exit anyway.\");\n            } else {\n                serverLog(LL_WARNING, \"Writing initial AOF, can't exit.\");\n                if (server.supervised_mode == SUPERVISED_SYSTEMD)\n                    redisCommunicateSystemd(\"STATUS=Writing initial AOF, can't exit.\\n\");\n                goto error;\n            }\n        }\n        serverLog(LL_WARNING,\n                  \"There is a child rewriting the AOF. 
Killing it!\");\n        killAppendOnlyChild();\n    }\n    if (server.aof_state != AOF_OFF) {\n        /* Append only file: flush buffers and fsync() the AOF at exit */\n        serverLog(LL_NOTICE,\"Calling fsync() on the AOF file.\");\n        flushAppendOnlyFile(1);\n        if (redis_fsync(server.aof_fd) == -1) {\n            serverLog(LL_WARNING,\"Failed to fsync the AOF file: %s.\",\n                                 strerror(errno));\n        }\n    }\n\n    /* Create a new RDB file before exiting. */\n    if ((server.saveparamslen > 0 && !nosave) || save) {\n        serverLog(LL_NOTICE,\"Saving the final RDB snapshot before exiting.\");\n        if (server.supervised_mode == SUPERVISED_SYSTEMD)\n            redisCommunicateSystemd(\"STATUS=Saving the final RDB snapshot\\n\");\n        /* Snapshotting. Perform a SYNC SAVE and exit */\n        rdbSaveInfo rsi, *rsiptr;\n        rsiptr = rdbPopulateSaveInfo(&rsi);\n        /* Keep the page cache since it's likely to restart soon */\n        if (rdbSave(SLAVE_REQ_NONE,server.rdb_filename,rsiptr,RDBFLAGS_KEEP_CACHE) != C_OK) {\n            /* Oops... error saving! The best we can do is to continue\n             * operating. Note that if there was a background saving process,\n             * in the next cron() Redis will be notified that the background\n             * saving aborted, handling special stuff like slaves pending for\n             * synchronization... */\n            if (force) {\n                serverLog(LL_WARNING,\"Error trying to save the DB. Exit anyway.\");\n            } else {\n                serverLog(LL_WARNING,\"Error trying to save the DB, can't exit.\");\n                if (server.supervised_mode == SUPERVISED_SYSTEMD)\n                    redisCommunicateSystemd(\"STATUS=Error trying to save the DB, can't exit.\\n\");\n                goto error;\n            }\n        }\n    }\n\n    /* Update the end offset of the current INCR AOF if possible. 
*/\n    updateCurIncrAofEndOffset();\n\n    /* Free the AOF manifest. */\n    if (server.aof_manifest) aofManifestFree(server.aof_manifest);\n\n    /* Fire the shutdown modules event. */\n    moduleFireServerEvent(REDISMODULE_EVENT_SHUTDOWN,0,NULL);\n\n    /* Remove the pid file if possible and needed. */\n    if (server.daemonize || server.pidfile) {\n        serverLog(LL_NOTICE,\"Removing the pid file.\");\n        unlink(server.pidfile);\n    }\n\n    /* Best effort flush of slave output buffers, so that we hopefully\n     * send them pending writes. */\n    flushSlavesOutputBuffers();\n\n    /* Close the listening sockets. Apparently this allows faster restarts. */\n    closeListeningSockets(1);\n\n#if !defined(__sun)\n    /* Unlock the cluster config file before shutdown */\n    if (server.cluster_enabled && server.cluster_config_file_lock_fd != -1) {\n        flock(server.cluster_config_file_lock_fd, LOCK_UN|LOCK_NB);\n    }\n#endif /* __sun */\n\n\n    serverLog(LL_WARNING,\"%s is now ready to exit, bye bye...\",\n        server.sentinel_mode ? \"Sentinel\" : \"Redis\");\n    return C_OK;\n\nerror:\n    serverLog(LL_WARNING, \"Errors trying to shut down the server. Check the logs for more information.\");\n    cancelShutdown();\n    return C_ERR;\n}\n\n/*================================== Commands =============================== */\n\n/* Sometimes Redis cannot accept write commands because there is a persistence\n * error with the RDB or AOF file, and Redis is configured in order to stop\n * accepting writes in such situation. 
This function returns whether such a\n * condition is active, and the type of the condition.\n *\n * Function return values:\n *\n * DISK_ERROR_TYPE_NONE:    No problems, we can accept writes.\n * DISK_ERROR_TYPE_AOF:     Don't accept writes: AOF errors.\n * DISK_ERROR_TYPE_RDB:     Don't accept writes: RDB errors.\n */\nint writeCommandsDeniedByDiskError(void) {\n    if (server.stop_writes_on_bgsave_err &&\n        server.saveparamslen > 0 &&\n        server.lastbgsave_status == C_ERR)\n    {\n        return DISK_ERROR_TYPE_RDB;\n    } else if (server.aof_state != AOF_OFF) {\n        if (server.aof_last_write_status == C_ERR) {\n            return DISK_ERROR_TYPE_AOF;\n        }\n        /* AOF fsync error. */\n        int aof_bio_fsync_status;\n        atomicGet(server.aof_bio_fsync_status,aof_bio_fsync_status);\n        if (aof_bio_fsync_status == C_ERR) {\n            atomicGet(server.aof_bio_fsync_errno,server.aof_last_write_errno);\n            return DISK_ERROR_TYPE_AOF;\n        }\n    }\n\n    return DISK_ERROR_TYPE_NONE;\n}\n\nsds writeCommandsGetDiskErrorMessage(int error_code) {\n    sds ret = NULL;\n    if (error_code == DISK_ERROR_TYPE_RDB) {\n        ret = sdsdup(shared.bgsaveerr->ptr);\n    } else {\n        ret = sdscatfmt(sdsempty(),\n                \"-MISCONF Errors writing to the AOF file: %s\\r\\n\",\n                strerror(server.aof_last_write_errno));\n    }\n    return ret;\n}\n\n/* The PING command. It works differently if the client is\n * in Pub/Sub mode. */\nvoid pingCommand(client *c) {\n    /* The command takes zero or one argument. 
*/\n    if (c->argc > 2) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if (c->flags & CLIENT_PUBSUB && c->resp == 2) {\n        addReply(c,shared.mbulkhdr[2]);\n        addReplyBulkCBuffer(c,\"pong\",4);\n        if (c->argc == 1)\n            addReplyBulkCBuffer(c,\"\",0);\n        else\n            addReplyBulk(c,c->argv[1]);\n    } else {\n        if (c->argc == 1)\n            addReply(c,shared.pong);\n        else\n            addReplyBulk(c,c->argv[1]);\n    }\n}\n\nvoid echoCommand(client *c) {\n    addReplyBulk(c,c->argv[1]);\n}\n\nvoid timeCommand(client *c) {\n    addReplyArrayLen(c,2);\n    addReplyBulkLongLong(c, server.unixtime);\n    addReplyBulkLongLong(c, server.ustime-((long long)server.unixtime)*1000000);\n}\n\ntypedef struct replyFlagNames {\n    uint64_t flag;\n    const char *name;\n} replyFlagNames;\n\n/* Helper function to output flags. */\nvoid addReplyCommandFlags(client *c, uint64_t flags, replyFlagNames *replyFlags) {\n    int count = 0, j=0;\n    /* Count them so we don't have to use deferred reply. 
*/\n    while (replyFlags[j].name) {\n        if (flags & replyFlags[j].flag)\n            count++;\n        j++;\n    }\n\n    addReplySetLen(c, count);\n    j = 0;\n    while (replyFlags[j].name) {\n        if (flags & replyFlags[j].flag)\n            addReplyStatus(c, replyFlags[j].name);\n        j++;\n    }\n}\n\nvoid addReplyFlagsForCommand(client *c, struct redisCommand *cmd) {\n    replyFlagNames flagNames[] = {\n        {CMD_WRITE,             \"write\"},\n        {CMD_READONLY,          \"readonly\"},\n        {CMD_DENYOOM,           \"denyoom\"},\n        {CMD_MODULE,            \"module\"},\n        {CMD_ADMIN,             \"admin\"},\n        {CMD_PUBSUB,            \"pubsub\"},\n        {CMD_NOSCRIPT,          \"noscript\"},\n        {CMD_BLOCKING,          \"blocking\"},\n        {CMD_LOADING,           \"loading\"},\n        {CMD_STALE,             \"stale\"},\n        {CMD_SKIP_MONITOR,      \"skip_monitor\"},\n        {CMD_SKIP_SLOWLOG,      \"skip_slowlog\"},\n        {CMD_ASKING,            \"asking\"},\n        {CMD_FAST,              \"fast\"},\n        {CMD_NO_AUTH,           \"no_auth\"},\n        /* {CMD_MAY_REPLICATE,     \"may_replicate\"}, Hidden on purpose */\n        /* {CMD_SENTINEL,          \"sentinel\"}, Hidden on purpose */\n        /* {CMD_ONLY_SENTINEL,     \"only_sentinel\"}, Hidden on purpose */\n        {CMD_NO_MANDATORY_KEYS, \"no_mandatory_keys\"},\n        /* {CMD_PROTECTED,         \"protected\"}, Hidden on purpose */\n        {CMD_NO_ASYNC_LOADING,  \"no_async_loading\"},\n        {CMD_NO_MULTI,          \"no_multi\"},\n        {CMD_MOVABLE_KEYS,      \"movablekeys\"},\n        {CMD_ALLOW_BUSY,        \"allow_busy\"},\n        /* {CMD_TOUCHES_ARBITRARY_KEYS,  \"TOUCHES_ARBITRARY_KEYS\"}, Hidden on purpose */\n        {0,NULL}\n    };\n    addReplyCommandFlags(c, cmd->flags, flagNames);\n}\n\nvoid addReplyDocFlagsForCommand(client *c, struct redisCommand *cmd) {\n    replyFlagNames docFlagNames[] = {\n        
{CMD_DOC_DEPRECATED,         \"deprecated\"},\n        {CMD_DOC_SYSCMD,             \"syscmd\"},\n        {0,NULL}\n    };\n    addReplyCommandFlags(c, cmd->doc_flags, docFlagNames);\n}\n\nvoid addReplyFlagsForKeyArgs(client *c, uint64_t flags) {\n    replyFlagNames docFlagNames[] = {\n        {CMD_KEY_RO,              \"RO\"},\n        {CMD_KEY_RW,              \"RW\"},\n        {CMD_KEY_OW,              \"OW\"},\n        {CMD_KEY_RM,              \"RM\"},\n        {CMD_KEY_ACCESS,          \"access\"},\n        {CMD_KEY_UPDATE,          \"update\"},\n        {CMD_KEY_INSERT,          \"insert\"},\n        {CMD_KEY_DELETE,          \"delete\"},\n        {CMD_KEY_NOT_KEY,         \"not_key\"},\n        {CMD_KEY_INCOMPLETE,      \"incomplete\"},\n        {CMD_KEY_VARIABLE_FLAGS,  \"variable_flags\"},\n        {0,NULL}\n    };\n    addReplyCommandFlags(c, flags, docFlagNames);\n}\n\n/* Must match redisCommandArgType */\nconst char *ARG_TYPE_STR[] = {\n    \"string\",\n    \"integer\",\n    \"double\",\n    \"key\",\n    \"pattern\",\n    \"unix-time\",\n    \"pure-token\",\n    \"oneof\",\n    \"block\",\n};\n\nvoid addReplyFlagsForArg(client *c, uint64_t flags) {\n    replyFlagNames argFlagNames[] = {\n        {CMD_ARG_OPTIONAL,          \"optional\"},\n        {CMD_ARG_MULTIPLE,          \"multiple\"},\n        {CMD_ARG_MULTIPLE_TOKEN,    \"multiple_token\"},\n        {0,NULL}\n    };\n    addReplyCommandFlags(c, flags, argFlagNames);\n}\n\nvoid addReplyCommandArgList(client *c, struct redisCommandArg *args, int num_args) {\n    addReplyArrayLen(c, num_args);\n    for (int j = 0; j<num_args; j++) {\n        /* Count our reply len so we don't have to use deferred reply. 
*/\n        int has_display_text = 1;\n        long maplen = 2;\n        if (args[j].key_spec_index != -1) maplen++;\n        if (args[j].token) maplen++;\n        if (args[j].summary) maplen++;\n        if (args[j].since) maplen++;\n        if (args[j].deprecated_since) maplen++;\n        if (args[j].flags) maplen++;\n        if (args[j].type == ARG_TYPE_ONEOF || args[j].type == ARG_TYPE_BLOCK) {\n            has_display_text = 0;\n            maplen++;\n        }\n        if (has_display_text) maplen++;\n        addReplyMapLen(c, maplen);\n\n        addReplyBulkCString(c, \"name\");\n        addReplyBulkCString(c, args[j].name);\n\n        addReplyBulkCString(c, \"type\");\n        addReplyBulkCString(c, ARG_TYPE_STR[args[j].type]);\n\n        if (has_display_text) {\n            addReplyBulkCString(c, \"display_text\");\n            addReplyBulkCString(c, args[j].display_text ? args[j].display_text : args[j].name);\n        }\n        if (args[j].key_spec_index != -1) {\n            addReplyBulkCString(c, \"key_spec_index\");\n            addReplyLongLong(c, args[j].key_spec_index);\n        }\n        if (args[j].token) {\n            addReplyBulkCString(c, \"token\");\n            addReplyBulkCString(c, args[j].token);\n        }\n        if (args[j].summary) {\n            addReplyBulkCString(c, \"summary\");\n            addReplyBulkCString(c, args[j].summary);\n        }\n        if (args[j].since) {\n            addReplyBulkCString(c, \"since\");\n            addReplyBulkCString(c, args[j].since);\n        }\n        if (args[j].deprecated_since) {\n            addReplyBulkCString(c, \"deprecated_since\");\n            addReplyBulkCString(c, args[j].deprecated_since);\n        }\n        if (args[j].flags) {\n            addReplyBulkCString(c, \"flags\");\n            addReplyFlagsForArg(c, args[j].flags);\n        }\n        if (args[j].type == ARG_TYPE_ONEOF || args[j].type == ARG_TYPE_BLOCK) {\n            addReplyBulkCString(c, \"arguments\");\n        
    addReplyCommandArgList(c, args[j].subargs, args[j].num_args);\n        }\n    }\n}\n\n#ifdef LOG_REQ_RES\n\nvoid addReplyJson(client *c, struct jsonObject *rs) {\n    addReplyMapLen(c, rs->length);\n\n    for (int i = 0; i < rs->length; i++) {\n        struct jsonObjectElement *curr = &rs->elements[i];\n        addReplyBulkCString(c, curr->key);\n        switch (curr->type) {\n        case (JSON_TYPE_BOOLEAN):\n            addReplyBool(c, curr->value.boolean);\n            break;\n        case (JSON_TYPE_INTEGER):\n            addReplyLongLong(c, curr->value.integer);\n            break;\n        case (JSON_TYPE_STRING):\n            addReplyBulkCString(c, curr->value.string);\n            break;\n        case (JSON_TYPE_OBJECT):\n            addReplyJson(c, curr->value.object);\n            break;\n        case (JSON_TYPE_ARRAY):\n            addReplyArrayLen(c, curr->value.array.length);\n            for (int k = 0; k < curr->value.array.length; k++) {\n                struct jsonObject *object = curr->value.array.objects[k];\n                addReplyJson(c, object);\n            }\n            break;\n        default:\n            serverPanic(\"Invalid JSON type %d\", curr->type);\n        }\n    }\n}\n\n#endif\n\nvoid addReplyCommandHistory(client *c, struct redisCommand *cmd) {\n    addReplySetLen(c, cmd->num_history);\n    for (int j = 0; j<cmd->num_history; j++) {\n        addReplyArrayLen(c, 2);\n        addReplyBulkCString(c, cmd->history[j].since);\n        addReplyBulkCString(c, cmd->history[j].changes);\n    }\n}\n\nvoid addReplyCommandTips(client *c, struct redisCommand *cmd) {\n    addReplySetLen(c, cmd->num_tips);\n    for (int j = 0; j<cmd->num_tips; j++) {\n        addReplyBulkCString(c, cmd->tips[j]);\n    }\n}\n\nvoid addReplyCommandKeySpecs(client *c, struct redisCommand *cmd) {\n    addReplySetLen(c, cmd->key_specs_num);\n    for (int i = 0; i < cmd->key_specs_num; i++) {\n        int maplen = 3;\n        if (cmd->key_specs[i].notes) 
maplen++;\n\n        addReplyMapLen(c, maplen);\n\n        if (cmd->key_specs[i].notes) {\n            addReplyBulkCString(c, \"notes\");\n            addReplyBulkCString(c,cmd->key_specs[i].notes);\n        }\n\n        addReplyBulkCString(c, \"flags\");\n        addReplyFlagsForKeyArgs(c,cmd->key_specs[i].flags);\n\n        addReplyBulkCString(c, \"begin_search\");\n        switch (cmd->key_specs[i].begin_search_type) {\n            case KSPEC_BS_UNKNOWN:\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"unknown\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 0);\n                break;\n            case KSPEC_BS_INDEX:\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"index\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 1);\n                addReplyBulkCString(c, \"index\");\n                addReplyLongLong(c, cmd->key_specs[i].bs.index.pos);\n                break;\n            case KSPEC_BS_KEYWORD:\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"keyword\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"keyword\");\n                addReplyBulkCString(c, cmd->key_specs[i].bs.keyword.keyword);\n                addReplyBulkCString(c, \"startfrom\");\n                addReplyLongLong(c, cmd->key_specs[i].bs.keyword.startfrom);\n                break;\n            default:\n                serverPanic(\"Invalid begin_search key spec type %d\", cmd->key_specs[i].begin_search_type);\n        }\n\n        addReplyBulkCString(c, \"find_keys\");\n        switch (cmd->key_specs[i].find_keys_type) {\n            case KSPEC_FK_UNKNOWN:\n                
addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"unknown\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 0);\n                break;\n            case KSPEC_FK_RANGE:\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"range\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 3);\n                addReplyBulkCString(c, \"lastkey\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.range.lastkey);\n                addReplyBulkCString(c, \"keystep\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.range.keystep);\n                addReplyBulkCString(c, \"limit\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.range.limit);\n                break;\n            case KSPEC_FK_KEYNUM:\n                addReplyMapLen(c, 2);\n                addReplyBulkCString(c, \"type\");\n                addReplyBulkCString(c, \"keynum\");\n\n                addReplyBulkCString(c, \"spec\");\n                addReplyMapLen(c, 3);\n                addReplyBulkCString(c, \"keynumidx\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.keynum.keynumidx);\n                addReplyBulkCString(c, \"firstkey\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.keynum.firstkey);\n                addReplyBulkCString(c, \"keystep\");\n                addReplyLongLong(c, cmd->key_specs[i].fk.keynum.keystep);\n                break;\n            default:\n                serverPanic(\"Invalid find_keys key spec type %d\", cmd->key_specs[i].find_keys_type);\n        }\n    }\n}\n\n/* Reply with an array of sub-commands using the provided reply callback. 
*/\nvoid addReplyCommandSubCommands(client *c, struct redisCommand *cmd, void (*reply_function)(client*, struct redisCommand*), int use_map) {\n    if (!cmd->subcommands_dict || !commandVisibleForClient(c, cmd)) {\n        addReplySetLen(c, 0);\n        return;\n    }\n\n    if (use_map)\n        addReplyMapLen(c, dictSize(cmd->subcommands_dict));\n    else\n        addReplyArrayLen(c, dictSize(cmd->subcommands_dict));\n    dictEntry *de;\n    dictIterator di;\n    dictInitSafeIterator(&di, cmd->subcommands_dict);\n    while((de = dictNext(&di)) != NULL) {\n        struct redisCommand *sub = (struct redisCommand *)dictGetVal(de);\n        if (use_map)\n            addReplyBulkCBuffer(c, sub->fullname, sdslen(sub->fullname));\n        reply_function(c, sub);\n    }\n    dictResetIterator(&di);\n}\n\n/* Output the representation of a Redis command. Used by the COMMAND command and COMMAND INFO. */\nvoid addReplyCommandInfo(client *c, struct redisCommand *cmd) {\n    if (!cmd || !commandVisibleForClient(c, cmd)) {\n        addReplyNull(c);\n    } else {\n        int firstkey = 0, lastkey = 0, keystep = 0;\n        if (cmd->legacy_range_key_spec.begin_search_type != KSPEC_BS_INVALID) {\n            firstkey = cmd->legacy_range_key_spec.bs.index.pos;\n            lastkey = cmd->legacy_range_key_spec.fk.range.lastkey;\n            if (lastkey >= 0)\n                lastkey += firstkey;\n            keystep = cmd->legacy_range_key_spec.fk.range.keystep;\n        }\n\n        addReplyArrayLen(c, 10);\n        addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n        addReplyLongLong(c, cmd->arity);\n        addReplyFlagsForCommand(c, cmd);\n        addReplyLongLong(c, firstkey);\n        addReplyLongLong(c, lastkey);\n        addReplyLongLong(c, keystep);\n        addReplyCommandCategories(c, cmd);\n        addReplyCommandTips(c, cmd);\n        addReplyCommandKeySpecs(c, cmd);\n        addReplyCommandSubCommands(c, cmd, addReplyCommandInfo, 0);\n    }\n}\n\n/* 
Output the representation of a Redis command. Used by the COMMAND DOCS. */\nvoid addReplyCommandDocs(client *c, struct redisCommand *cmd) {\n    /* Count our reply len so we don't have to use deferred reply. */\n    long maplen = 1;\n    if (cmd->summary) maplen++;\n    if (cmd->since) maplen++;\n    if (cmd->flags & CMD_MODULE) maplen++;\n    if (cmd->complexity) maplen++;\n    if (cmd->doc_flags) maplen++;\n    if (cmd->deprecated_since) maplen++;\n    if (cmd->replaced_by) maplen++;\n    if (cmd->history) maplen++;\n#ifdef LOG_REQ_RES\n    if (cmd->reply_schema) maplen++;\n#endif\n    if (cmd->args) maplen++;\n    if (cmd->subcommands_dict) maplen++;\n    addReplyMapLen(c, maplen);\n\n    if (cmd->summary) {\n        addReplyBulkCString(c, \"summary\");\n        addReplyBulkCString(c, cmd->summary);\n    }\n    if (cmd->since) {\n        addReplyBulkCString(c, \"since\");\n        addReplyBulkCString(c, cmd->since);\n    }\n\n    /* Always have the group, for module commands the group is always \"module\". 
*/\n    addReplyBulkCString(c, \"group\");\n    addReplyBulkCString(c, commandGroupStr(cmd->group));\n\n    if (cmd->complexity) {\n        addReplyBulkCString(c, \"complexity\");\n        addReplyBulkCString(c, cmd->complexity);\n    }\n    if (cmd->flags & CMD_MODULE) {\n        addReplyBulkCString(c, \"module\");\n        addReplyBulkCString(c, moduleNameFromCommand(cmd));\n    }\n    if (cmd->doc_flags) {\n        addReplyBulkCString(c, \"doc_flags\");\n        addReplyDocFlagsForCommand(c, cmd);\n    }\n    if (cmd->deprecated_since) {\n        addReplyBulkCString(c, \"deprecated_since\");\n        addReplyBulkCString(c, cmd->deprecated_since);\n    }\n    if (cmd->replaced_by) {\n        addReplyBulkCString(c, \"replaced_by\");\n        addReplyBulkCString(c, cmd->replaced_by);\n    }\n    if (cmd->history) {\n        addReplyBulkCString(c, \"history\");\n        addReplyCommandHistory(c, cmd);\n    }\n#ifdef LOG_REQ_RES\n    if (cmd->reply_schema) {\n        addReplyBulkCString(c, \"reply_schema\");\n        addReplyJson(c, cmd->reply_schema);\n    }\n#endif\n    if (cmd->args) {\n        addReplyBulkCString(c, \"arguments\");\n        addReplyCommandArgList(c, cmd->args, cmd->num_args);\n    }\n    if (cmd->subcommands_dict) {\n        addReplyBulkCString(c, \"subcommands\");\n        addReplyCommandSubCommands(c, cmd, addReplyCommandDocs, 1);\n    }\n}\n\n/* Helper for COMMAND GETKEYS and GETKEYSANDFLAGS */\nvoid getKeysSubcommandImpl(client *c, int with_flags) {\n    struct redisCommand *cmd = lookupCommand(c->argv+2,c->argc-2);\n    getKeysResult result = GETKEYS_RESULT_INIT;\n    int j;\n\n    if (!cmd || !commandVisibleForClient(c, cmd)) {\n        addReplyError(c,\"Invalid command specified\");\n        return;\n    } else if (!doesCommandHaveKeys(cmd)) {\n        addReplyError(c,\"The command has no key arguments\");\n        return;\n    } else if ((cmd->arity > 0 && cmd->arity != c->argc-2) ||\n               ((c->argc-2) < -cmd->arity))\n    {\n   
     addReplyError(c,\"Invalid number of arguments specified for command\");\n        return;\n    }\n\n    if (!getKeysFromCommandWithSpecs(cmd,c->argv+2,c->argc-2,GET_KEYSPEC_DEFAULT,&result)) {\n        if (cmd->flags & CMD_NO_MANDATORY_KEYS) {\n            addReplyArrayLen(c,0);\n        } else {\n            addReplyError(c,\"Invalid arguments specified for command\");\n        }\n    } else {\n        addReplyArrayLen(c,result.numkeys);\n        for (j = 0; j < result.numkeys; j++) {\n            if (!with_flags) {\n                addReplyBulk(c,c->argv[result.keys[j].pos+2]);\n            } else {\n                addReplyArrayLen(c,2);\n                addReplyBulk(c,c->argv[result.keys[j].pos+2]);\n                addReplyFlagsForKeyArgs(c,result.keys[j].flags);\n            }\n        }\n    }\n    getKeysFreeResult(&result);\n}\n\n/* COMMAND GETKEYSANDFLAGS cmd arg1 arg2 ... */\nvoid commandGetKeysAndFlagsCommand(client *c) {\n    getKeysSubcommandImpl(c, 1);\n}\n\n/* COMMAND GETKEYS cmd arg1 arg2 ... 
*/\nvoid getKeysSubcommand(client *c) {\n    getKeysSubcommandImpl(c, 0);\n}\n\nvoid genericCommandCommand(client *c, int count_only) {\n    dictIterator di;\n    dictEntry *de;\n    void *len = NULL;\n    int count = 0;\n\n    if (!count_only)\n        len = addReplyDeferredLen(c);\n\n    dictInitIterator(&di, server.commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (!commandVisibleForClient(c, cmd))\n            continue;\n        if (!count_only)\n            addReplyCommandInfo(c, dictGetVal(de));\n        count++;\n    }\n    dictResetIterator(&di);\n    if (count_only)\n        addReplyLongLong(c, count);\n    else\n        setDeferredArrayLen(c, len, count);\n}\n\n/* COMMAND (no args) */\nvoid commandCommand(client *c) {\n    genericCommandCommand(c, 0);\n}\n\n/* COMMAND COUNT */\nvoid commandCountCommand(client *c) {\n    genericCommandCommand(c, 1);\n}\n\ntypedef enum {\n    COMMAND_LIST_FILTER_MODULE,\n    COMMAND_LIST_FILTER_ACLCAT,\n    COMMAND_LIST_FILTER_PATTERN,\n} commandListFilterType;\n\ntypedef struct {\n    commandListFilterType type;\n    sds arg;\n    struct {\n        int valid;\n        union {\n            uint64_t aclcat;\n            void *module_handle;\n        } u;\n    } cache;\n} commandListFilter;\n\nint shouldFilterFromCommandList(struct redisCommand *cmd, commandListFilter *filter) {\n    switch (filter->type) {\n        case (COMMAND_LIST_FILTER_MODULE):\n            if (!filter->cache.valid) {\n                filter->cache.u.module_handle = moduleGetHandleByName(filter->arg);\n                filter->cache.valid = 1;\n            }\n            return !moduleIsModuleCommand(filter->cache.u.module_handle, cmd);\n        case (COMMAND_LIST_FILTER_ACLCAT): {\n            if (!filter->cache.valid) {\n                filter->cache.u.aclcat = ACLGetCommandCategoryFlagByName(filter->arg);\n                filter->cache.valid = 1;\n            }\n            uint64_t 
cat = filter->cache.u.aclcat;\n            if (cat == 0)\n                return 1; /* Invalid ACL category */\n            return (!(cmd->acl_categories & cat));\n            break;\n        }\n        case (COMMAND_LIST_FILTER_PATTERN):\n            return !stringmatchlen(filter->arg, sdslen(filter->arg), cmd->fullname, sdslen(cmd->fullname), 1);\n        default:\n            serverPanic(\"Invalid filter type %d\", filter->type);\n    }\n}\n\n/* COMMAND LIST FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>) */\nvoid commandListWithFilter(client *c, dict *commands, commandListFilter filter, int *numcmds) {\n    dictEntry *de;\n    dictIterator di;\n\n    dictInitIterator(&di, commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (commandVisibleForClient(c, cmd) && !shouldFilterFromCommandList(cmd,&filter)) {\n            addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n            (*numcmds)++;\n        }\n\n        if (cmd->subcommands_dict) {\n            commandListWithFilter(c, cmd->subcommands_dict, filter, numcmds);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* COMMAND LIST */\nvoid commandListWithoutFilter(client *c, dict *commands, int *numcmds) {\n    dictEntry *de;\n    dictIterator di;\n\n    dictInitIterator(&di, commands);\n    while ((de = dictNext(&di)) != NULL) {\n        struct redisCommand *cmd = dictGetVal(de);\n        if (commandVisibleForClient(c, cmd)) {\n            addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n            (*numcmds)++;\n        }\n\n        if (cmd->subcommands_dict) {\n            commandListWithoutFilter(c, cmd->subcommands_dict, numcmds);\n        }\n    }\n    dictResetIterator(&di);\n}\n\n/* COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)] */\nvoid commandListCommand(client *c) {\n\n    /* Parse options. 
*/\n    int i = 2, got_filter = 0;\n    commandListFilter filter = {0};\n    for (; i < c->argc; i++) {\n        int moreargs = (c->argc-1) - i; /* Number of additional arguments. */\n        char *opt = c->argv[i]->ptr;\n        if (!strcasecmp(opt,\"filterby\") && moreargs == 2) {\n            char *filtertype = c->argv[i+1]->ptr;\n            if (!strcasecmp(filtertype,\"module\")) {\n                filter.type = COMMAND_LIST_FILTER_MODULE;\n            } else if (!strcasecmp(filtertype,\"aclcat\")) {\n                filter.type = COMMAND_LIST_FILTER_ACLCAT;\n            } else if (!strcasecmp(filtertype,\"pattern\")) {\n                filter.type = COMMAND_LIST_FILTER_PATTERN;\n            } else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n            got_filter = 1;\n            filter.arg = c->argv[i+2]->ptr;\n            i += 2;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    int numcmds = 0;\n    void *replylen = addReplyDeferredLen(c);\n\n    if (got_filter) {\n        commandListWithFilter(c, server.commands, filter, &numcmds);\n    } else {\n        commandListWithoutFilter(c, server.commands, &numcmds);\n    }\n\n    setDeferredArrayLen(c,replylen,numcmds);\n}\n\n/* COMMAND INFO [<command-name> ...] 
*/\nvoid commandInfoCommand(client *c) {\n    int i;\n\n    if (c->argc == 2) {\n        genericCommandCommand(c, 0);\n    } else {\n        addReplyArrayLen(c, c->argc-2);\n        for (i = 2; i < c->argc; i++) {\n            addReplyCommandInfo(c, lookupCommandBySds(c->argv[i]->ptr));\n        }\n    }\n}\n\n/* COMMAND DOCS [command-name [command-name ...]] */\nvoid commandDocsCommand(client *c) {\n    int i;\n    int numcmds = 0;\n    if (c->argc == 2) {\n        /* Reply with an array of all commands */\n        dictIterator di;\n        dictEntry *de;\n        void *replylen = addReplyDeferredLen(c);\n        dictInitIterator(&di, server.commands);\n        while ((de = dictNext(&di)) != NULL) {\n            struct redisCommand *cmd = dictGetVal(de);\n            if (commandVisibleForClient(c, cmd)) {\n                addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n                addReplyCommandDocs(c, cmd);\n                numcmds++;\n            }\n        }\n        dictResetIterator(&di);\n        setDeferredMapLen(c,replylen,numcmds);\n    } else {\n        /* Reply with an array of the requested commands (if we find them) */\n        void *replylen = addReplyDeferredLen(c);\n        for (i = 2; i < c->argc; i++) {\n            struct redisCommand *cmd = lookupCommandBySds(c->argv[i]->ptr);\n            if (!cmd || !commandVisibleForClient(c, cmd))\n                continue;\n            addReplyBulkCBuffer(c, cmd->fullname, sdslen(cmd->fullname));\n            addReplyCommandDocs(c, cmd);\n            numcmds++;\n        }\n        setDeferredMapLen(c,replylen,numcmds);\n    }\n}\n\n/* COMMAND GETKEYS arg0 arg1 arg2 ... 
*/\nvoid commandGetKeysCommand(client *c) {\n    getKeysSubcommand(c);\n}\n\n/* COMMAND HELP */\nvoid commandHelpCommand(client *c) {\n    const char *help[] = {\n\"(no subcommand)\",\n\"    Return details about all Redis commands.\",\n\"COUNT\",\n\"    Return the total number of commands in this Redis server.\",\n\"LIST\",\n\"    Return a list of all commands in this Redis server.\",\n\"INFO [<command-name> ...]\",\n\"    Return details about multiple Redis commands.\",\n\"    If no command names are given, documentation details for all\",\n\"    commands are returned.\",\n\"DOCS [<command-name> ...]\",\n\"    Return documentation details about multiple Redis commands.\",\n\"    If no command names are given, documentation details for all\",\n\"    commands are returned.\",\n\"GETKEYS <full-command>\",\n\"    Return the keys from a full Redis command.\",\n\"GETKEYSANDFLAGS <full-command>\",\n\"    Return the keys and the access flags from a full Redis command.\",\nNULL\n    };\n\n    addReplyHelp(c, help);\n}\n\n/* Convert an amount of bytes into a human readable string in the form\n * of 100B, 2G, 100M, 4K, and so forth. 
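A minimal standalone sketch of the same 1024-based formatting idea (hypothetical names; unlike the function below, it keeps dividing up to the petabyte unit instead of falling back to raw bytes above that):

```c
#include <stdio.h>
#include <string.h>

// Hedged sketch, not the function below: format a byte count with
// binary (1024-based) units, two decimals above the plain-byte range.
static void bytes_to_human_sketch(char *s, size_t size, unsigned long long n) {
    const char *units = "KMGTP";
    if (n < 1024) { snprintf(s, size, "%lluB", n); return; }
    double d = (double)n;
    int u = -1;
    while (d >= 1024 && u < 4) { d /= 1024; u++; }
    snprintf(s, size, "%.2f%c", d, units[u]);
}
```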
*/\nvoid bytesToHuman(char *s, size_t size, unsigned long long n) {\n    double d;\n\n    if (n < 1024) {\n        /* Bytes */\n        snprintf(s,size,\"%lluB\",n);\n    } else if (n < (1024*1024)) {\n        d = (double)n/(1024);\n        snprintf(s,size,\"%.2fK\",d);\n    } else if (n < (1024LL*1024*1024)) {\n        d = (double)n/(1024*1024);\n        snprintf(s,size,\"%.2fM\",d);\n    } else if (n < (1024LL*1024*1024*1024)) {\n        d = (double)n/(1024LL*1024*1024);\n        snprintf(s,size,\"%.2fG\",d);\n    } else if (n < (1024LL*1024*1024*1024*1024)) {\n        d = (double)n/(1024LL*1024*1024*1024);\n        snprintf(s,size,\"%.2fT\",d);\n    } else if (n < (1024LL*1024*1024*1024*1024*1024)) {\n        d = (double)n/(1024LL*1024*1024*1024*1024);\n        snprintf(s,size,\"%.2fP\",d);\n    } else {\n        /* Let's hope we never need this */\n        snprintf(s,size,\"%lluB\",n);\n    }\n}\n\n/* Fill percentile distribution of latencies. */\nsds fillPercentileDistributionLatencies(sds info, const char* histogram_name, struct hdr_histogram* histogram) {\n    info = sdscatfmt(info,\"latency_percentiles_usec_%s:\",histogram_name);\n    for (int j = 0; j < server.latency_tracking_info_percentiles_len; j++) {\n        char fbuf[128];\n        size_t len = snprintf(fbuf, sizeof(fbuf), \"%f\", server.latency_tracking_info_percentiles[j]);\n        trimDoubleString(fbuf, len);\n        info = sdscatprintf(info,\"p%s=%.3f\", fbuf,\n            ((double)hdr_value_at_percentile(histogram,server.latency_tracking_info_percentiles[j]))/1000.0f);\n        if (j != server.latency_tracking_info_percentiles_len-1)\n            info = sdscatlen(info,\",\",1);\n        }\n    info = sdscatprintf(info,\"\\r\\n\");\n    return info;\n}\n\nconst char *replstateToString(int replstate) {\n    switch (replstate) {\n    case SLAVE_STATE_WAIT_BGSAVE_START:\n    case SLAVE_STATE_WAIT_BGSAVE_END:\n    case SLAVE_STATE_WAIT_RDB_CHANNEL:\n        return \"wait_bgsave\";\n    case 
SLAVE_STATE_SEND_BULK_AND_STREAM:\n        return \"send_bulk_and_stream\";\n    case SLAVE_STATE_SEND_BULK:\n        return \"send_bulk\";\n    case SLAVE_STATE_ONLINE:\n        return \"online\";\n    default:\n        return \"\";\n    }\n}\n\n/* Characters we sanitize on INFO output to maintain expected format. */\nstatic char unsafe_info_chars[] = \"#:\\n\\r\";\nstatic char unsafe_info_chars_substs[] = \"____\";   /* Must be same length as above */\n\n/* Returns a sanitized version of s that contains no unsafe info string chars.\n * If no unsafe characters are found, simply returns s. Caller needs to\n * free tmp if it is non-null on return.\n */\nconst char *getSafeInfoString(const char *s, size_t len, char **tmp) {\n    *tmp = NULL;\n    if (mempbrk(s, len, unsafe_info_chars,sizeof(unsafe_info_chars)-1)\n        == NULL) return s;\n    char *new = *tmp = zmalloc(len + 1);\n    memcpy(new, s, len);\n    new[len] = '\\0';\n    return memmapchars(new, len, unsafe_info_chars, unsafe_info_chars_substs,\n                       sizeof(unsafe_info_chars)-1);\n}\n\n/* Active Clients Sliding Window\n *\n * Tracks unique clients with read activity in a sliding time window.\n * Uses a circular buffer where each slot covers SLOT_DURATION_MS milliseconds.\n * When a client becomes active, it increments the current slot and decrements\n * its previous slot (if it was already active within the window).\n * The total active count is the sum of all slots in the window.\n *\n * Slot duration and number of slots are constant powers of 2 to enable the compiler\n * to optimize division and modulo operations into bit shifts and bitwise ANDs. 
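A minimal standalone sketch of this scheme (hypothetical names; the per-client timestamp is passed explicitly since there is no client struct here):

```c
// Hedged sketch of the sliding-window unique-client counter described
// above; slot count and duration mirror the idea, not the exact code.
#define DEMO_SLOTS 4
#define DEMO_SLOT_MS 128LL

static int demo_window[DEMO_SLOTS];
static long long demo_window_ts = 0;

// Advance the window to 'now', clearing every slot crossed since the
// last update (capped at one full window).
static void demo_advance(long long now) {
    long long cur_ts = (now / DEMO_SLOT_MS) * DEMO_SLOT_MS;
    long long to_clear = (cur_ts - demo_window_ts) / DEMO_SLOT_MS;
    if (to_clear > DEMO_SLOTS) to_clear = DEMO_SLOTS;
    int prev = (int)((demo_window_ts / DEMO_SLOT_MS) % DEMO_SLOTS);
    for (long long i = 1; i <= to_clear; i++)
        demo_window[(prev + i) % DEMO_SLOTS] = 0;
    demo_window_ts = cur_ts;
}

// Count a client active at 'now'; 'last_ts' is its previous activity
// time (0 if never). Returns the new last_ts the caller should store.
static long long demo_touch(long long now, long long last_ts) {
    demo_advance(now);
    demo_window[(now / DEMO_SLOT_MS) % DEMO_SLOTS]++;
    // If already counted within the window, decrement the old slot so
    // each client is counted at most once across the whole window.
    long long old_boundary = (last_ts / DEMO_SLOT_MS) * DEMO_SLOT_MS;
    if (last_ts && old_boundary >= demo_window_ts - (DEMO_SLOTS - 1) * DEMO_SLOT_MS)
        demo_window[(last_ts / DEMO_SLOT_MS) % DEMO_SLOTS]--;
    return now;
}

static int demo_count(void) {
    int n = 0;
    for (int i = 0; i < DEMO_SLOTS; i++) n += demo_window[i];
    return n;
}
```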
*/\n\n#define WINDOW_SLOTS 4\n#define SLOT_DURATION_MS_BITS 7\n#define SLOT_DURATION_MS (1LL << SLOT_DURATION_MS_BITS) /* 128ms per slot */\n#define WINDOW_DURATION_MS (WINDOW_SLOTS * SLOT_DURATION_MS) /* 512ms total */\n\nstatic_assert((WINDOW_SLOTS & (WINDOW_SLOTS - 1)) == 0, \"WINDOW_SLOTS must be a power of 2\");\n\nstatic int active_clients_window[WINDOW_SLOTS];\nstatic long long active_clients_window_ts = 0;\n\nvoid statsUpdateActiveClients(client *c) {\n    mstime_t now = server.mstime;\n    int current_slot = (now / SLOT_DURATION_MS) % WINDOW_SLOTS;\n    long long current_window_ts = (now / SLOT_DURATION_MS) * SLOT_DURATION_MS;\n\n    if (current_window_ts != active_clients_window_ts) {\n        /* Clear every slot crossed since the last update. Cap at one full\n         * window so large time gaps still clear each slot exactly once. */\n        long long slots_to_clear = (current_window_ts - active_clients_window_ts) / SLOT_DURATION_MS;\n        if (slots_to_clear > WINDOW_SLOTS) slots_to_clear = WINDOW_SLOTS;\n        int prev_slot = (active_clients_window_ts / SLOT_DURATION_MS) % WINDOW_SLOTS;\n        for (int i = 1; i <= (int)slots_to_clear; i++) {\n            active_clients_window[(prev_slot + i) % WINDOW_SLOTS] = 0;\n        }\n        active_clients_window_ts = current_window_ts;\n    }\n\n    /* Called periodically from serverCron() with c==NULL to clear stale slots\n     * when no client activity has occurred. */\n    if (!c)\n        return;\n\n    active_clients_window[current_slot]++;\n\n    /* If the client was already counted in the window, decrement its old slot\n     * so each client is counted at most once across the entire window.\n     * Use slot-aligned timestamps to match the granularity of window\n     * maintenance — otherwise a slot cleared by advancement may still\n     * appear \"within the window\" by exact-timestamp comparison. 
*/\n    long long old_slot_boundary = (c->last_ts_when_counted_as_active / SLOT_DURATION_MS) * SLOT_DURATION_MS;\n    if (old_slot_boundary >= active_clients_window_ts - (WINDOW_SLOTS - 1) * SLOT_DURATION_MS) {\n        int old_slot = (c->last_ts_when_counted_as_active / SLOT_DURATION_MS) % WINDOW_SLOTS;\n        active_clients_window[old_slot]--;\n    }\n\n    c->last_ts_when_counted_as_active = now;\n}\n\nint getActiveClientsInWindow(void) {\n    int count = 0;\n    for (int i = 0; i < WINDOW_SLOTS; i++) {\n        count += active_clients_window[i];\n    }\n    return count;\n}\n\nsds genRedisInfoStringCommandStats(sds info, dict *commands) {\n    struct redisCommand *c;\n    dictEntry *de;\n    dictIterator di;\n    dictInitSafeIterator(&di, commands);\n    while((de = dictNext(&di)) != NULL) {\n        char *tmpsafe;\n        c = (struct redisCommand *) dictGetVal(de);\n        if (c->calls || c->failed_calls || c->rejected_calls) {\n            if (c->slowlog_count > 0) {\n                info = sdscatprintf(info,\n                    \"cmdstat_%s:calls=%lld,usec=%lld,usec_per_call=%.2f\"\n                    \",rejected_calls=%lld,failed_calls=%lld\"\n                    \",slowlog_count=%lld,slowlog_time_ms_sum=%.2f,slowlog_time_ms_max=%.2f\\r\\n\",\n                    getSafeInfoString(c->fullname, sdslen(c->fullname), &tmpsafe), c->calls, c->microseconds,\n                    (c->calls == 0) ? 
0 : ((float)c->microseconds/c->calls),\n                    c->rejected_calls, c->failed_calls,\n                    c->slowlog_count, (double)c->slowlog_time_us_sum / 1000,\n                    (double)c->slowlog_time_us_max / 1000);\n            } else {\n                info = sdscatprintf(info,\n                    \"cmdstat_%s:calls=%lld,usec=%lld,usec_per_call=%.2f\"\n                    \",rejected_calls=%lld,failed_calls=%lld\\r\\n\",\n                    getSafeInfoString(c->fullname, sdslen(c->fullname), &tmpsafe), c->calls, c->microseconds,\n                    (c->calls == 0) ? 0 : ((float)c->microseconds/c->calls),\n                    c->rejected_calls, c->failed_calls);\n            }\n            if (tmpsafe != NULL) zfree(tmpsafe);\n        }\n        if (c->subcommands_dict) {\n            info = genRedisInfoStringCommandStats(info, c->subcommands_dict);\n        }\n    }\n    dictResetIterator(&di);\n\n    return info;\n}\n\n/* Writes the ACL metrics to the info */\nsds genRedisInfoStringACLStats(sds info) {\n    info = sdscatprintf(info,\n\t     \"acl_access_denied_auth:%lld\\r\\n\"\n\t     \"acl_access_denied_cmd:%lld\\r\\n\"\n\t     \"acl_access_denied_key:%lld\\r\\n\"\n\t     \"acl_access_denied_channel:%lld\\r\\n\"\n\t     \"acl_access_denied_tls_cert:%lld\\r\\n\",\n\t     server.acl_info.user_auth_failures,\n\t     server.acl_info.invalid_cmd_accesses,\n\t     server.acl_info.invalid_key_accesses,\n\t     server.acl_info.invalid_channel_accesses,\n\t     server.acl_info.acl_access_denied_tls_cert);\n    return info;\n}\n\nsds genRedisInfoStringLatencyStats(sds info, dict *commands) {\n    struct redisCommand *c;\n    dictEntry *de;\n    dictIterator di;\n    dictInitSafeIterator(&di, commands);\n    while((de = dictNext(&di)) != NULL) {\n        char *tmpsafe;\n        c = (struct redisCommand *) dictGetVal(de);\n        if (c->latency_histogram) {\n            info = fillPercentileDistributionLatencies(info,\n                
getSafeInfoString(c->fullname, sdslen(c->fullname), &tmpsafe),\n                c->latency_histogram);\n            if (tmpsafe != NULL) zfree(tmpsafe);\n        }\n        if (c->subcommands_dict) {\n            info = genRedisInfoStringLatencyStats(info, c->subcommands_dict);\n        }\n    }\n    dictResetIterator(&di);\n\n    return info;\n}\n\n/* Takes a NULL-terminated list of section names and adds them to the dict. */\nvoid addInfoSectionsToDict(dict *section_dict, char **sections) {\n    while (*sections) {\n        sds section = sdsnew(*sections);\n        if (dictAdd(section_dict, section, NULL) == DICT_ERR)\n            sdsfree(section);\n        sections++;\n    }\n}\n\n/* Cached copy of the default sections, as an optimization. */\nstatic dict *cached_default_info_sections = NULL;\n\nvoid releaseInfoSectionDict(dict *sec) {\n    if (sec != cached_default_info_sections)\n        dictRelease(sec);\n}\n\n/* Create a dictionary with unique section names to be used by genRedisInfoString.\n * 'argv' and 'argc' are the list of arguments for INFO.\n * 'defaults' is an optional NULL-terminated list of default sections.\n * 'out_all' and 'out_everything' are optional.\n * The resulting dictionary should be released with releaseInfoSectionDict. */\ndict *genInfoSectionDict(robj **argv, int argc, char **defaults, int *out_all, int *out_everything) {\n    char *default_sections[] = {\n        \"server\", \"clients\", \"memory\", \"persistence\", \"stats\", \"replication\", \"threads\",\n        \"cpu\", \"hotkeys\", \"module_list\", \"errorstats\", \"cluster\", \"keyspace\", \"keysizes\", NULL};\n    if (!defaults)\n        defaults = default_sections;\n\n    if (argc == 0) {\n        /* In this case we know the dict is not going to be modified, so we\n         * cache it as an optimization for a common case. 
*/\n        if (cached_default_info_sections)\n            return cached_default_info_sections;\n        cached_default_info_sections = dictCreate(&stringSetDictType);\n        dictExpand(cached_default_info_sections, 16);\n        addInfoSectionsToDict(cached_default_info_sections, defaults);\n        return cached_default_info_sections;\n    }\n\n    dict *section_dict = dictCreate(&stringSetDictType);\n    dictExpand(section_dict, min(argc,16));\n    for (int i = 0; i < argc; i++) {\n        if (!strcasecmp(argv[i]->ptr,\"default\")) {\n            addInfoSectionsToDict(section_dict, defaults);\n        } else if (!strcasecmp(argv[i]->ptr,\"all\")) {\n            if (out_all) *out_all = 1;\n        } else if (!strcasecmp(argv[i]->ptr,\"everything\")) {\n            if (out_everything) *out_everything = 1;\n            if (out_all) *out_all = 1;\n        } else {\n            sds section = sdsnew(argv[i]->ptr);\n            if (dictAdd(section_dict, section, NULL) != DICT_OK)\n                sdsfree(section);\n        }\n    }\n    return section_dict;\n}\n\n/* Sets blocking_keys to the total number of keys which have at least one client blocked on them.\n * Sets blocking_keys_on_nokey to the total number of keys which have at least one client\n * blocked on them to be written or deleted.\n * Sets watched_keys to the total number of keys which have at least one client watching them. 
*/\nvoid totalNumberOfStatefulKeys(unsigned long *blocking_keys, unsigned long *blocking_keys_on_nokey, unsigned long *watched_keys) {\n    unsigned long bkeys=0, bkeys_on_nokey=0, wkeys=0;\n    for (int j = 0; j < server.dbnum; j++) {\n        bkeys += dictSize(server.db[j].blocking_keys);\n        bkeys_on_nokey += dictSize(server.db[j].blocking_keys_unblock_on_nokey);\n        wkeys += dictSize(server.db[j].watched_keys);\n    }\n    if (blocking_keys)\n        *blocking_keys = bkeys;\n    if (blocking_keys_on_nokey)\n        *blocking_keys_on_nokey = bkeys_on_nokey;\n    if (watched_keys)\n        *watched_keys = wkeys;\n}\n\n/* Append keysizes histograms to the info string in the format \"db<dbnum>_<field_name>:<label>=<count>,...\"\n * field_names is an array of field names indexed by type; NULL entries are skipped. */\nstatic sds sdscatHistograms(sds info, int dbnum, keysizesHist histogram, const char *field_names[]) {\n    static const char *expSizeLabels[] = {\n        \"0\", \"1\",   \"2\",  \"4\",  \"8\",  \"16\",  \"32\",  \"64\",  \"128\",  \"256\",  \"512\", /* Byte */\n        \"1K\", \"2K\", \"4K\", \"8K\", \"16K\", \"32K\", \"64K\", \"128K\", \"256K\", \"512K\", /* Kilo */\n        \"1M\", \"2M\", \"4M\", \"8M\", \"16M\", \"32M\", \"64M\", \"128M\", \"256M\", \"512M\", /* Mega */\n        \"1G\", \"2G\", \"4G\", \"8G\", \"16G\", \"32G\", \"64G\", \"128G\", \"256G\", \"512G\", /* Giga */\n        \"1T\", \"2T\", \"4T\", \"8T\", \"16T\", \"32T\", \"64T\", \"128T\", \"256T\", \"512T\", /* Tera */\n        \"1P\", \"2P\", \"4P\", \"8P\", \"16P\", \"32P\", \"64P\", \"128P\", \"256P\", \"512P\", /* Peta */\n        \"1E\", \"2E\", \"4E\"                                                     /* Exa */\n    };\n\n    for (int type = 0; type < OBJ_TYPE_BASIC_MAX; type++) {\n        if (field_names[type] == NULL) continue;\n\n        char buf[10000];\n        int cnt = 0, buflen = 0;\n\n        buflen += snprintf(buf + buflen, sizeof(buf) - buflen, \"db%d_%s:\", 
dbnum, field_names[type]);\n\n        for (int i = 0; i < MAX_KEYSIZES_BINS; i++) {\n            if (histogram[type][i] == 0)\n                continue;\n\n            int res = snprintf(buf + buflen, sizeof(buf) - buflen,\n                               (cnt == 0) ? \"%s=%llu\" : \",%s=%llu\",\n                               expSizeLabels[i], (unsigned long long) histogram[type][i]);\n            if (res < 0) break;\n            buflen += res;\n            cnt += histogram[type][i];\n        }\n\n        if (cnt) info = sdscatprintf(info, \"%s\\r\\n\", buf);\n    }\n    return info;\n}\n\n/* Create the string returned by the INFO command. This is decoupled\n * from the INFO command itself as we need to report the same information\n * on memory corruption problems. */\nsds genRedisInfoString(dict *section_dict, int all_sections, int everything) {\n    sds info = sdsempty();\n    time_t uptime = server.unixtime-server.stat_starttime;\n    int j;\n    int sections = 0;\n    if (everything) all_sections = 1;\n\n    /* Server */\n    if (all_sections || (dictFind(section_dict,\"server\") != NULL)) {\n        static int call_uname = 1;\n        static struct utsname name;\n        char *mode;\n        char *supervised;\n\n        if (server.cluster_enabled) mode = \"cluster\";\n        else if (server.sentinel_mode) mode = \"sentinel\";\n        else mode = \"standalone\";\n\n        if (server.supervised) {\n            if (server.supervised_mode == SUPERVISED_UPSTART) supervised = \"upstart\";\n            else if (server.supervised_mode == SUPERVISED_SYSTEMD) supervised = \"systemd\";\n            else supervised = \"unknown\";\n        } else {\n            supervised = \"no\";\n        }\n\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n\n        if (call_uname) {\n            /* Uname can be slow and always returns the same output. Cache it. 
*/\n            uname(&name);\n            call_uname = 0;\n        }\n\n        info = sdscatfmt(info, \"# Server\\r\\n\" FMTARGS(\n            \"redis_version:%s\\r\\n\", REDIS_VERSION,\n            \"redis_git_sha1:%s\\r\\n\", redisGitSHA1(),\n            \"redis_git_dirty:%i\\r\\n\", strtol(redisGitDirty(),NULL,10) > 0,\n            \"redis_build_id:%s\\r\\n\", redisBuildIdString(),\n            \"redis_mode:%s\\r\\n\", mode,\n            \"os:%s\", name.sysname,\n            \" %s\", name.release,\n            \" %s\\r\\n\", name.machine,\n            \"arch_bits:%i\\r\\n\", server.arch_bits,\n            \"monotonic_clock:%s\\r\\n\", monotonicInfoString(),\n            \"multiplexing_api:%s\\r\\n\", aeGetApiName(),\n            \"atomicvar_api:%s\\r\\n\", REDIS_ATOMIC_API,\n            \"gcc_version:%s\\r\\n\", GNUC_VERSION_STR,\n            \"process_id:%I\\r\\n\", (int64_t) getpid(),\n            \"process_supervised:%s\\r\\n\", supervised,\n            \"run_id:%s\\r\\n\", server.runid,\n            \"tcp_port:%i\\r\\n\", server.port ? server.port : server.tls_port,\n            \"server_time_usec:%I\\r\\n\", (int64_t)server.ustime,\n            \"uptime_in_seconds:%I\\r\\n\", (int64_t)uptime,\n            \"uptime_in_days:%I\\r\\n\", (int64_t)(uptime/(3600*24)),\n            \"hz:%i\\r\\n\", server.hz,\n            \"configured_hz:%i\\r\\n\", server.config_hz,\n            \"lru_clock:%u\\r\\n\", server.lruclock,\n            \"executable:%s\\r\\n\", server.executable ? server.executable : \"\",\n            \"config_file:%s\\r\\n\", server.configfile ? 
server.configfile : \"\",\n            \"io_threads_active:%i\\r\\n\", server.io_threads_active));\n\n        /* Conditional properties */\n        if (isShutdownInitiated()) {\n            info = sdscatfmt(info,\n                \"shutdown_in_milliseconds:%I\\r\\n\",\n                (int64_t)(server.shutdown_mstime - commandTimeSnapshot()));\n        }\n\n        /* get all the listeners information */\n        info = getListensInfoString(info);\n    }\n\n    /* Clients */\n    if (all_sections || (dictFind(section_dict,\"clients\") != NULL)) {\n        size_t maxin, maxout;\n        unsigned long blocking_keys, blocking_keys_on_nokey, watched_keys;\n        getExpansiveClientsInfo(&maxin,&maxout);\n        totalNumberOfStatefulKeys(&blocking_keys, &blocking_keys_on_nokey, &watched_keys);\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Clients\\r\\n\" FMTARGS(\n            \"connected_clients:%lu\\r\\n\", listLength(server.clients) - listLength(server.slaves),\n            \"cluster_connections:%lu\\r\\n\", getClusterConnectionsCount(),\n            \"maxclients:%u\\r\\n\", server.maxclients,\n            \"client_recent_max_input_buffer:%zu\\r\\n\", maxin,\n            \"client_recent_max_output_buffer:%zu\\r\\n\", maxout,\n            \"blocked_clients:%d\\r\\n\", server.blocked_clients,\n            \"tracking_clients:%d\\r\\n\", server.tracking_clients,\n            \"pubsub_clients:%d\\r\\n\", server.pubsub_clients,\n            \"watching_clients:%d\\r\\n\", server.watching_clients,\n            \"clients_in_timeout_table:%llu\\r\\n\", (unsigned long long) raxSize(server.clients_timeout_table),\n            \"active_clients:%d\\r\\n\", getActiveClientsInWindow(),\n            \"total_watched_keys:%lu\\r\\n\", watched_keys,\n            \"total_blocking_keys:%lu\\r\\n\", blocking_keys,\n            \"total_blocking_keys_on_nokey:%lu\\r\\n\", blocking_keys_on_nokey));\n    }\n\n    /* Memory */\n    if 
(all_sections || (dictFind(section_dict,\"memory\") != NULL)) {\n        char hmem[64];\n        char peak_hmem[64];\n        char total_system_hmem[64];\n        char used_memory_lua_hmem[64];\n        char used_memory_vm_total_hmem[64];\n        char used_memory_scripts_hmem[64];\n        char used_memory_rss_hmem[64];\n        char maxmemory_hmem[64];\n        size_t zmalloc_used = zmalloc_used_memory();\n        size_t total_system_mem = server.system_memory_size;\n        const char *evict_policy = evictPolicyToString();\n        long long memory_lua = evalScriptsMemoryVM();\n        long long memory_functions = functionsMemoryVM();\n        struct redisMemOverhead *mh = getMemoryOverheadData();\n\n        /* Peak memory is updated from time to time by serverCron() so it\n         * may happen that the instantaneous value is slightly bigger than\n         * the peak value. This may confuse users, so we update the peak\n         * if found smaller than the current memory usage. */\n        updatePeakMemory();\n\n        bytesToHuman(hmem,sizeof(hmem),zmalloc_used);\n        bytesToHuman(peak_hmem,sizeof(peak_hmem),server.stat_peak_memory);\n        bytesToHuman(total_system_hmem,sizeof(total_system_hmem),total_system_mem);\n        bytesToHuman(used_memory_lua_hmem,sizeof(used_memory_lua_hmem),memory_lua);\n        bytesToHuman(used_memory_vm_total_hmem,sizeof(used_memory_vm_total_hmem),memory_functions + memory_lua);\n        bytesToHuman(used_memory_scripts_hmem,sizeof(used_memory_scripts_hmem),mh->eval_caches + mh->functions_caches);\n        bytesToHuman(used_memory_rss_hmem,sizeof(used_memory_rss_hmem),server.cron_malloc_stats.process_rss);\n        bytesToHuman(maxmemory_hmem,sizeof(maxmemory_hmem),server.maxmemory);\n\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Memory\\r\\n\" FMTARGS(\n            \"used_memory:%zu\\r\\n\", zmalloc_used,\n            \"used_memory_human:%s\\r\\n\", hmem,\n            
\"used_memory_rss:%zu\\r\\n\", server.cron_malloc_stats.process_rss,\n            \"used_memory_rss_human:%s\\r\\n\", used_memory_rss_hmem,\n            \"used_memory_peak:%zu\\r\\n\", server.stat_peak_memory,\n            \"used_memory_peak_human:%s\\r\\n\", peak_hmem,\n            \"used_memory_peak_time:%jd\\r\\n\", (intmax_t)server.stat_peak_memory_time,\n            \"used_memory_peak_perc:%.2f%%\\r\\n\", mh->peak_perc,\n            \"used_memory_overhead:%zu\\r\\n\", mh->overhead_total,\n            \"used_memory_startup:%zu\\r\\n\", mh->startup_allocated,\n            \"used_memory_dataset:%zu\\r\\n\", mh->dataset,\n            \"used_memory_dataset_perc:%.2f%%\\r\\n\", mh->dataset_perc,\n            \"allocator_allocated:%zu\\r\\n\", server.cron_malloc_stats.allocator_allocated,\n            \"allocator_active:%zu\\r\\n\", server.cron_malloc_stats.allocator_active,\n            \"allocator_resident:%zu\\r\\n\", server.cron_malloc_stats.allocator_resident,\n            \"allocator_muzzy:%zu\\r\\n\", server.cron_malloc_stats.allocator_muzzy,\n            \"total_system_memory:%lu\\r\\n\", (unsigned long)total_system_mem,\n            \"total_system_memory_human:%s\\r\\n\", total_system_hmem,\n            \"used_memory_lua:%lld\\r\\n\", memory_lua, /* deprecated, renamed to used_memory_vm_eval */\n            \"used_memory_vm_eval:%lld\\r\\n\", memory_lua,\n            \"used_memory_lua_human:%s\\r\\n\", used_memory_lua_hmem, /* deprecated */\n            \"used_memory_scripts_eval:%lld\\r\\n\", (long long)mh->eval_caches,\n            \"number_of_cached_scripts:%lu\\r\\n\", dictSize(evalScriptsDict()),\n            \"number_of_functions:%lu\\r\\n\", functionsNum(),\n            \"number_of_libraries:%lu\\r\\n\", functionsLibNum(),\n            \"used_memory_vm_functions:%lld\\r\\n\", memory_functions,\n            \"used_memory_vm_total:%lld\\r\\n\", memory_functions + memory_lua,\n            \"used_memory_vm_total_human:%s\\r\\n\", 
used_memory_vm_total_hmem,\n            \"used_memory_functions:%lld\\r\\n\", (long long)mh->functions_caches,\n            \"used_memory_scripts:%lld\\r\\n\", (long long)mh->eval_caches + (long long)mh->functions_caches,\n            \"used_memory_scripts_human:%s\\r\\n\", used_memory_scripts_hmem,\n            \"maxmemory:%lld\\r\\n\", server.maxmemory,\n            \"maxmemory_human:%s\\r\\n\", maxmemory_hmem,\n            \"maxmemory_policy:%s\\r\\n\", evict_policy,\n            \"allocator_frag_ratio:%.2f\\r\\n\", mh->allocator_frag,\n            \"allocator_frag_bytes:%zu\\r\\n\", mh->allocator_frag_bytes,\n            \"allocator_rss_ratio:%.2f\\r\\n\", mh->allocator_rss,\n            \"allocator_rss_bytes:%zd\\r\\n\", mh->allocator_rss_bytes,\n            \"rss_overhead_ratio:%.2f\\r\\n\", mh->rss_extra,\n            \"rss_overhead_bytes:%zd\\r\\n\", mh->rss_extra_bytes,\n            /* The next field (mem_fragmentation_ratio) is the total RSS\n             * overhead, which includes fragmentation but is not limited to it.\n             * This field (and the next one) is named this way only for backward\n             * compatibility. 
*/\n            \"mem_fragmentation_ratio:%.2f\\r\\n\", mh->total_frag,\n            \"mem_fragmentation_bytes:%zd\\r\\n\", mh->total_frag_bytes,\n            \"mem_not_counted_for_evict:%zu\\r\\n\", freeMemoryGetNotCountedMemory(),\n            \"mem_replication_backlog:%zu\\r\\n\", mh->repl_backlog,\n            \"mem_total_replication_buffers:%zu\\r\\n\", server.repl_buffer_mem + server.repl_full_sync_buffer.mem_used,\n            \"mem_replica_full_sync_buffer:%zu\\r\\n\", server.repl_full_sync_buffer.mem_used,\n            \"mem_clients_slaves:%zu\\r\\n\", mh->clients_slaves,\n            \"mem_clients_normal:%zu\\r\\n\", mh->clients_normal,\n            \"mem_cluster_slot_migration_output_buffer:%zu\\r\\n\", mh->asm_migrate_output_buffer,\n            \"mem_cluster_slot_migration_input_buffer:%zu\\r\\n\", mh->asm_import_input_buffer,\n            \"mem_cluster_slot_migration_input_buffer_peak:%zu\\r\\n\", asmGetPeakSyncBufferSize(),\n            \"mem_cluster_links:%zu\\r\\n\", mh->cluster_links,\n            \"mem_aof_buffer:%zu\\r\\n\", mh->aof_buffer,\n            \"mem_allocator:%s\\r\\n\", ZMALLOC_LIB,\n            \"mem_overhead_db_hashtable_rehashing:%zu\\r\\n\", mh->overhead_db_hashtable_rehashing,\n            \"active_defrag_running:%d\\r\\n\", server.active_defrag_running,\n            \"lazyfree_pending_objects:%zu\\r\\n\", lazyfreeGetPendingObjectsCount(),\n            \"lazyfreed_objects:%zu\\r\\n\", lazyfreeGetFreedObjectsCount()));\n        freeMemoryOverheadData(mh);\n    }\n\n    /* Persistence */\n    if (all_sections || (dictFind(section_dict,\"persistence\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        double fork_perc = 0;\n        if (server.stat_module_progress) {\n            fork_perc = server.stat_module_progress * 100;\n        } else if (server.stat_current_save_keys_total) {\n            fork_perc = ((double)server.stat_current_save_keys_processed / server.stat_current_save_keys_total) * 100;\n    
    }\n        int aof_bio_fsync_status;\n        atomicGet(server.aof_bio_fsync_status,aof_bio_fsync_status);\n\n        info = sdscatprintf(info, \"# Persistence\\r\\n\" FMTARGS(\n            \"loading:%d\\r\\n\", (int)(server.loading && !server.async_loading),\n            \"async_loading:%d\\r\\n\", (int)server.async_loading,\n            \"current_cow_peak:%zu\\r\\n\", server.stat_current_cow_peak,\n            \"current_cow_size:%zu\\r\\n\", server.stat_current_cow_bytes,\n            \"current_cow_size_age:%lu\\r\\n\", (server.stat_current_cow_updated ?\n                                             (unsigned long) elapsedMs(server.stat_current_cow_updated) / 1000 : 0),\n            \"current_fork_perc:%.2f\\r\\n\", fork_perc,\n            \"current_save_keys_processed:%zu\\r\\n\", server.stat_current_save_keys_processed,\n            \"current_save_keys_total:%zu\\r\\n\", server.stat_current_save_keys_total,\n            \"rdb_changes_since_last_save:%lld\\r\\n\", server.dirty,\n            \"rdb_bgsave_in_progress:%d\\r\\n\", server.child_type == CHILD_TYPE_RDB,\n            \"rdb_last_save_time:%jd\\r\\n\", (intmax_t)server.lastsave,\n            \"rdb_last_bgsave_status:%s\\r\\n\", (server.lastbgsave_status == C_OK) ? 
\"ok\" : \"err\",\n            \"rdb_last_bgsave_time_sec:%jd\\r\\n\", (intmax_t)server.rdb_save_time_last,\n            \"rdb_current_bgsave_time_sec:%jd\\r\\n\", (intmax_t)((server.child_type != CHILD_TYPE_RDB) ?\n                                                              -1 : time(NULL)-server.rdb_save_time_start),\n            \"rdb_saves:%lld\\r\\n\", server.stat_rdb_saves,\n            \"rdb_saves_consecutive_failures:%lld\\r\\n\", server.stat_rdb_consecutive_failures,\n            \"rdb_last_cow_size:%zu\\r\\n\", server.stat_rdb_cow_bytes,\n            \"rdb_last_load_keys_expired:%lld\\r\\n\", server.rdb_last_load_keys_expired,\n            \"rdb_last_load_keys_loaded:%lld\\r\\n\", server.rdb_last_load_keys_loaded,\n            \"aof_enabled:%d\\r\\n\", server.aof_state != AOF_OFF,\n            \"aof_rewrite_in_progress:%d\\r\\n\", server.child_type == CHILD_TYPE_AOF,\n            \"aof_rewrite_scheduled:%d\\r\\n\", server.aof_rewrite_scheduled,\n            \"aof_last_rewrite_time_sec:%jd\\r\\n\", (intmax_t)server.aof_rewrite_time_last,\n            \"aof_current_rewrite_time_sec:%jd\\r\\n\", (intmax_t)((server.child_type != CHILD_TYPE_AOF) ?\n                                                               -1 : time(NULL)-server.aof_rewrite_time_start),\n            \"aof_last_bgrewrite_status:%s\\r\\n\", (server.aof_lastbgrewrite_status == C_OK ?\n                                                 \"ok\" : \"err\"),\n            \"aof_rewrites:%lld\\r\\n\", server.stat_aof_rewrites,\n            \"aof_rewrites_consecutive_failures:%lld\\r\\n\", server.stat_aofrw_consecutive_failures,\n            \"aof_last_write_status:%s\\r\\n\", (server.aof_last_write_status == C_OK &&\n                                             aof_bio_fsync_status == C_OK) ? 
\"ok\" : \"err\",\n            \"aof_last_cow_size:%zu\\r\\n\", server.stat_aof_cow_bytes,\n            \"module_fork_in_progress:%d\\r\\n\", server.child_type == CHILD_TYPE_MODULE,\n            \"module_fork_last_cow_size:%zu\\r\\n\", server.stat_module_cow_bytes));\n\n        if (server.aof_enabled) {\n            info = sdscatprintf(info, FMTARGS(\n                \"aof_current_size:%lld\\r\\n\", (long long) server.aof_current_size,\n                \"aof_base_size:%lld\\r\\n\", (long long) server.aof_rewrite_base_size,\n                \"aof_pending_rewrite:%d\\r\\n\", server.aof_rewrite_scheduled,\n                \"aof_buffer_length:%zu\\r\\n\", sdslen(server.aof_buf),\n                \"aof_pending_bio_fsync:%lu\\r\\n\", bioPendingJobsOfType(BIO_AOF_FSYNC),\n                \"aof_delayed_fsync:%lu\\r\\n\", server.aof_delayed_fsync));\n        }\n\n        if (server.loading) {\n            double perc = 0;\n            time_t eta, elapsed;\n            off_t remaining_bytes = 1;\n\n            if (server.loading_total_bytes) {\n                perc = ((double)server.loading_loaded_bytes / server.loading_total_bytes) * 100;\n                remaining_bytes = server.loading_total_bytes - server.loading_loaded_bytes;\n            } else if(server.loading_rdb_used_mem) {\n                perc = ((double)server.loading_loaded_bytes / server.loading_rdb_used_mem) * 100;\n                remaining_bytes = server.loading_rdb_used_mem - server.loading_loaded_bytes;\n                /* used mem is only a (bad) estimation of the rdb file size, avoid going over 100% */\n                if (perc > 99.99) perc = 99.99;\n                if (remaining_bytes < 1) remaining_bytes = 1;\n            }\n\n            elapsed = time(NULL)-server.loading_start_time;\n            if (elapsed == 0) {\n                eta = 1; /* A fake 1 second figure if we don't have\n                            enough info */\n            } else {\n                eta = 
(elapsed*remaining_bytes)/(server.loading_loaded_bytes+1);\n            }\n\n            info = sdscatprintf(info, FMTARGS(\n                \"loading_start_time:%jd\\r\\n\", (intmax_t) server.loading_start_time,\n                \"loading_total_bytes:%llu\\r\\n\", (unsigned long long) server.loading_total_bytes,\n                \"loading_rdb_used_mem:%llu\\r\\n\", (unsigned long long) server.loading_rdb_used_mem,\n                \"loading_loaded_bytes:%llu\\r\\n\", (unsigned long long) server.loading_loaded_bytes,\n                \"loading_loaded_perc:%.2f\\r\\n\", perc,\n                \"loading_eta_seconds:%jd\\r\\n\", (intmax_t)eta));\n        }\n    }\n\n    /* Threads */\n    int stat_io_ops_processed_calculated = 0;\n    long long stat_io_reads_processed = 0, stat_io_writes_processed = 0;\n    long long stat_total_reads_processed = 0, stat_total_writes_processed = 0;\n    if (all_sections || (dictFind(section_dict,\"threads\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Threads\\r\\n\");\n        long long reads, writes;\n        for (j = 0; j < server.io_threads_num; j++) {\n            atomicGet(server.stat_io_reads_processed[j], reads);\n            atomicGet(server.stat_io_writes_processed[j], writes);\n            info = sdscatprintf(info, \"io_thread_%d:clients=%d,reads=%lld,writes=%lld\\r\\n\",\n                                       j, server.io_threads_clients_num[j], reads, writes);\n            stat_total_reads_processed += reads;\n            if (j != 0) stat_io_reads_processed += reads; /* Skip the main thread */\n            stat_total_writes_processed += writes;\n            if (j != 0) stat_io_writes_processed += writes; /* Skip the main thread */\n        }\n        stat_io_ops_processed_calculated = 1;\n    }\n\n    /* Stats */\n    if (all_sections  || (dictFind(section_dict,\"stats\") != NULL)) {\n        long long stat_net_input_bytes, stat_net_output_bytes;\n        
long long stat_net_repl_input_bytes, stat_net_repl_output_bytes;\n        long long stat_total_client_process_input_buff_events;\n        long long stat_avg_pipeline_length_sum;\n        long long stat_avg_pipeline_length_cnt;\n        long long current_eviction_exceeded_time = server.stat_last_eviction_exceeded_time ?\n            (long long) elapsedUs(server.stat_last_eviction_exceeded_time): 0;\n        long long current_active_defrag_time = server.stat_last_active_defrag_time ?\n            (long long) elapsedUs(server.stat_last_active_defrag_time): 0;\n        long long stat_client_qbuf_limit_disconnections;\n        atomicGet(server.stat_net_input_bytes, stat_net_input_bytes);\n        atomicGet(server.stat_net_output_bytes, stat_net_output_bytes);\n        atomicGet(server.stat_net_repl_input_bytes, stat_net_repl_input_bytes);\n        atomicGet(server.stat_net_repl_output_bytes, stat_net_repl_output_bytes);\n        atomicGet(server.stat_client_qbuf_limit_disconnections, stat_client_qbuf_limit_disconnections);\n        atomicGet(server.stat_total_client_process_input_buff_events, stat_total_client_process_input_buff_events);\n        atomicGet(server.stat_avg_pipeline_length_sum, stat_avg_pipeline_length_sum);\n        atomicGet(server.stat_avg_pipeline_length_cnt, stat_avg_pipeline_length_cnt);\n\n        /* If we calculated the total reads and writes in the threads section,\n         * we don't need to do it again, and also keep the values consistent. 
*/\n        if (!stat_io_ops_processed_calculated) {\n            long long reads, writes;\n            for (j = 0; j < server.io_threads_num; j++) {\n                atomicGet(server.stat_io_reads_processed[j], reads);\n                stat_total_reads_processed += reads;\n                if (j != 0) stat_io_reads_processed += reads; /* Skip the main thread */\n                atomicGet(server.stat_io_writes_processed[j], writes);\n                stat_total_writes_processed += writes;\n                if (j != 0) stat_io_writes_processed += writes; /* Skip the main thread */\n            }\n        }\n\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Stats\\r\\n\" FMTARGS(\n            \"total_connections_received:%lld\\r\\n\", server.stat_numconnections,\n            \"total_commands_processed:%lld\\r\\n\", server.stat_numcommands,\n            \"instantaneous_ops_per_sec:%lld\\r\\n\", getInstantaneousMetric(STATS_METRIC_COMMAND),\n            \"total_net_input_bytes:%lld\\r\\n\", stat_net_input_bytes + stat_net_repl_input_bytes,\n            \"total_net_output_bytes:%lld\\r\\n\", stat_net_output_bytes + stat_net_repl_output_bytes,\n            \"total_net_repl_input_bytes:%lld\\r\\n\", stat_net_repl_input_bytes,\n            \"total_net_repl_output_bytes:%lld\\r\\n\", stat_net_repl_output_bytes,\n            \"instantaneous_input_kbps:%.2f\\r\\n\", (float)getInstantaneousMetric(STATS_METRIC_NET_INPUT)/1024,\n            \"instantaneous_output_kbps:%.2f\\r\\n\", (float)getInstantaneousMetric(STATS_METRIC_NET_OUTPUT)/1024,\n            \"instantaneous_input_repl_kbps:%.2f\\r\\n\", (float)getInstantaneousMetric(STATS_METRIC_NET_INPUT_REPLICATION)/1024,\n            \"instantaneous_output_repl_kbps:%.2f\\r\\n\", (float)getInstantaneousMetric(STATS_METRIC_NET_OUTPUT_REPLICATION)/1024,\n            \"rejected_connections:%lld\\r\\n\", server.stat_rejected_conn,\n            \"sync_full:%lld\\r\\n\", 
server.stat_sync_full,\n            \"sync_partial_ok:%lld\\r\\n\", server.stat_sync_partial_ok,\n            \"sync_partial_err:%lld\\r\\n\", server.stat_sync_partial_err,\n            \"expired_subkeys:%lld\\r\\n\", server.stat_expired_subkeys,\n            \"expired_subkeys_active:%lld\\r\\n\", server.stat_expired_subkeys_active,\n            \"expired_keys:%lld\\r\\n\", server.stat_expiredkeys,\n            \"expired_keys_active:%lld\\r\\n\", server.stat_expiredkeys_active,\n            \"expired_stale_perc:%.2f\\r\\n\", server.stat_expired_stale_perc*100,\n            \"expired_time_cap_reached_count:%lld\\r\\n\", server.stat_expired_time_cap_reached_count,\n            \"expire_cycle_cpu_milliseconds:%lld\\r\\n\", server.stat_expire_cycle_time_used/1000,\n            \"evicted_keys:%lld\\r\\n\", server.stat_evictedkeys,\n            \"evicted_clients:%lld\\r\\n\", server.stat_evictedclients,\n            \"evicted_scripts:%lld\\r\\n\", server.stat_evictedscripts,\n            \"total_eviction_exceeded_time:%lld\\r\\n\", (server.stat_total_eviction_exceeded_time + current_eviction_exceeded_time) / 1000,\n            \"current_eviction_exceeded_time:%lld\\r\\n\", current_eviction_exceeded_time / 1000,\n            \"keyspace_hits:%lld\\r\\n\", server.stat_keyspace_hits,\n            \"keyspace_misses:%lld\\r\\n\", server.stat_keyspace_misses,\n            \"pubsub_channels:%llu\\r\\n\", kvstoreSize(server.pubsub_channels),\n            \"pubsub_patterns:%lu\\r\\n\", dictSize(server.pubsub_patterns),\n            \"pubsubshard_channels:%llu\\r\\n\", kvstoreSize(server.pubsubshard_channels),\n            \"latest_fork_usec:%lld\\r\\n\", server.stat_fork_time,\n            \"total_forks:%lld\\r\\n\", server.stat_total_forks,\n            \"migrate_cached_sockets:%ld\\r\\n\", dictSize(server.migrate_cached_sockets),\n            \"slave_expires_tracked_keys:%zu\\r\\n\", getSlaveKeyWithExpireCount(),\n            \"active_defrag_hits:%lld\\r\\n\", 
server.stat_active_defrag_hits,\n            \"active_defrag_misses:%lld\\r\\n\", server.stat_active_defrag_misses,\n            \"active_defrag_key_hits:%lld\\r\\n\", server.stat_active_defrag_key_hits,\n            \"active_defrag_key_misses:%lld\\r\\n\", server.stat_active_defrag_key_misses,\n            \"total_active_defrag_time:%lld\\r\\n\", (server.stat_total_active_defrag_time + current_active_defrag_time) / 1000,\n            \"current_active_defrag_time:%lld\\r\\n\", current_active_defrag_time / 1000,\n            \"tracking_total_keys:%llu\\r\\n\", (unsigned long long) trackingGetTotalKeys(),\n            \"tracking_total_items:%llu\\r\\n\", (unsigned long long) trackingGetTotalItems(),\n            \"tracking_total_prefixes:%llu\\r\\n\", (unsigned long long) trackingGetTotalPrefixes(),\n            \"unexpected_error_replies:%lld\\r\\n\", server.stat_unexpected_error_replies,\n            \"total_error_replies:%lld\\r\\n\", server.stat_total_error_replies,\n            \"dump_payload_sanitizations:%lld\\r\\n\", server.stat_dump_payload_sanitizations,\n            \"total_reads_processed:%lld\\r\\n\", stat_total_reads_processed,\n            \"total_writes_processed:%lld\\r\\n\", stat_total_writes_processed,\n            \"io_threaded_reads_processed:%lld\\r\\n\", stat_io_reads_processed,\n            \"io_threaded_writes_processed:%lld\\r\\n\", stat_io_writes_processed,\n            \"io_threaded_total_prefetch_batches:%lld\\r\\n\", server.stat_total_prefetch_batches,\n            \"io_threaded_total_prefetch_entries:%lld\\r\\n\", server.stat_total_prefetch_entries,\n            \"client_query_buffer_limit_disconnections:%lld\\r\\n\", stat_client_qbuf_limit_disconnections,\n            \"client_output_buffer_limit_disconnections:%lld\\r\\n\", server.stat_client_outbuf_limit_disconnections,\n            \"reply_buffer_shrinks:%lld\\r\\n\", server.stat_reply_buffer_shrinks,\n            \"reply_buffer_expands:%lld\\r\\n\", 
server.stat_reply_buffer_expands,\n            \"eventloop_cycles:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_EL].cnt,\n            \"eventloop_duration_sum:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_EL].sum,\n            \"eventloop_duration_cmd_sum:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_CMD].sum,\n            \"instantaneous_eventloop_cycles_per_sec:%llu\\r\\n\", getInstantaneousMetric(STATS_METRIC_EL_CYCLE),\n            \"instantaneous_eventloop_duration_usec:%llu\\r\\n\", getInstantaneousMetric(STATS_METRIC_EL_DURATION),\n            \"eventloop_cycles_with_clients_processing:%zu\\r\\n\", server.stat_eventloop_cycles_with_clients_input_buff_processing,\n            \"total_client_processing_events:%lld\\r\\n\", stat_total_client_process_input_buff_events,\n            \"avg_pipeline_length_sum:%lld\\r\\n\", stat_avg_pipeline_length_sum,\n            \"avg_pipeline_length_cnt:%lld\\r\\n\", stat_avg_pipeline_length_cnt,\n            \"avg_pipeline_length:%.2f\\r\\n\", stat_avg_pipeline_length_cnt ? (double)stat_avg_pipeline_length_sum / stat_avg_pipeline_length_cnt : 0,\n            \"slowlog_commands_count:%lld\\r\\n\", server.stat_slowlog_count,\n            \"slowlog_commands_time_ms_max:%.2f\\r\\n\", (double)server.stat_slowlog_time_us_max / 1000,\n            \"slowlog_commands_time_ms_sum:%.2f\\r\\n\", (double)server.stat_slowlog_time_us_sum / 1000));\n        info = genRedisInfoStringACLStats(info);\n        if (!server.cluster_enabled && server.cluster_compatibility_sample_ratio) {\n            info = sdscatprintf(info, \"cluster_incompatible_ops:%lld\\r\\n\", server.stat_cluster_incompatible_ops);\n        }\n    }\n\n    /* Replication */\n    if (all_sections || (dictFind(section_dict,\"replication\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info,\n            \"# Replication\\r\\n\"\n            \"role:%s\\r\\n\",\n            server.masterhost == NULL ? 
\"master\" : \"slave\");\n        if (server.masterhost) {\n            long long slave_repl_offset = 1;\n            long long slave_read_repl_offset = 1;\n            time_t current_disconnect_time = server.repl_down_since ?\n                server.unixtime - server.repl_down_since : 0 ;\n\n            if (server.master) {\n                slave_repl_offset = server.master->reploff;\n                slave_read_repl_offset = server.master->read_reploff;\n            } else if (server.cached_master) {\n                slave_repl_offset = server.cached_master->reploff;\n                slave_read_repl_offset = server.cached_master->read_reploff;\n            }\n\n            info = sdscatprintf(info, FMTARGS(\n                \"master_host:%s\\r\\n\", server.masterhost,\n                \"master_port:%d\\r\\n\", server.masterport,\n                \"master_link_status:%s\\r\\n\", (server.repl_state == REPL_STATE_CONNECTED) ? \"up\" : \"down\",\n                \"master_last_io_seconds_ago:%d\\r\\n\", server.master ? 
((int)(server.unixtime-server.master->lastinteraction)) : -1,\n                \"master_sync_in_progress:%d\\r\\n\", server.repl_state == REPL_STATE_TRANSFER,\n                \"slave_read_repl_offset:%lld\\r\\n\", slave_read_repl_offset,\n                \"slave_repl_offset:%lld\\r\\n\", slave_repl_offset,\n                \"replica_full_sync_buffer_size:%zu\\r\\n\", server.repl_full_sync_buffer.size,\n                \"replica_full_sync_buffer_peak:%zu\\r\\n\", server.repl_full_sync_buffer.peak,\n                \"master_current_sync_attempts:%lld\\r\\n\", server.repl_current_sync_attempts,\n                \"master_total_sync_attempts:%lld\\r\\n\", server.repl_total_sync_attempts));\n            if (server.repl_state == REPL_STATE_TRANSFER) {\n                double perc = 0;\n                if (server.repl_transfer_size) {\n                    perc = ((double)server.repl_transfer_read / server.repl_transfer_size) * 100;\n                }\n                info = sdscatprintf(info, FMTARGS(\n                    \"master_sync_total_bytes:%lld\\r\\n\", (long long) server.repl_transfer_size,\n                    \"master_sync_read_bytes:%lld\\r\\n\", (long long) server.repl_transfer_read,\n                    \"master_sync_left_bytes:%lld\\r\\n\", (long long) (server.repl_transfer_size - server.repl_transfer_read),\n                    \"master_sync_perc:%.2f\\r\\n\", perc,\n                    \"master_sync_last_io_seconds_ago:%d\\r\\n\", (int)(server.unixtime-server.repl_transfer_lastio)));\n            }\n\n            if (server.repl_state != REPL_STATE_CONNECTED) {\n                info = sdscatprintf(info,\n                    \"master_link_down_since_seconds:%jd\\r\\n\",\n                    server.repl_down_since ?\n                    (intmax_t)(server.unixtime-server.repl_down_since) : -1);\n            } else {\n                info = sdscatprintf(info, FMTARGS(\n                    \"master_link_up_since_seconds:%jd\\r\\n\",\n                    
server.repl_up_since ? /* defensive code, should never be 0 when connected */\n                    (intmax_t)(server.unixtime-server.repl_up_since) : -1,\n                    \"master_client_io_thread:%d\\r\\n\", server.master->tid));\n            }\n            info = sdscatprintf(info, \"total_disconnect_time_sec:%jd\\r\\n\", (intmax_t)server.repl_total_disconnect_time+(current_disconnect_time));\n\n            info = sdscatprintf(info, FMTARGS(\n                \"slave_priority:%d\\r\\n\", server.slave_priority,\n                \"slave_read_only:%d\\r\\n\", server.repl_slave_ro,\n                \"replica_announced:%d\\r\\n\", server.replica_announced));\n        }\n\n        info = sdscatprintf(info,\n            \"connected_slaves:%lu\\r\\n\",\n            replicationLogicalReplicaCount());\n\n        /* If min-slaves-to-write is active, write the number of slaves\n         * currently considered 'good'. */\n        if (server.repl_min_slaves_to_write &&\n            server.repl_min_slaves_max_lag) {\n            info = sdscatprintf(info,\n                \"min_slaves_good_slaves:%d\\r\\n\",\n                server.repl_good_slaves_count);\n        }\n\n        if (listLength(server.slaves)) {\n            int slaveid = 0;\n            listNode *ln;\n            listIter li;\n\n            listRewind(server.slaves,&li);\n            while((ln = listNext(&li))) {\n                client *slave = listNodeValue(ln);\n                char ip[NET_IP_STR_LEN], *slaveip = slave->slave_addr;\n                int port;\n                long lag = 0;\n\n                /* During rdbchannel replication, replica opens two connections.\n                 * These are distinct slaves in server.slaves list from master\n                 * POV. We don't want to list these separately. If a rdbchannel\n                 * replica has an associated main-channel replica in\n                 * server.slaves list, we'll list main channel replica only. 
*/\n                if (replicationCheckHasMainChannel(slave))\n                    continue;\n\n                /* Don't list migration destination replicas. */\n                if (slave->flags & CLIENT_ASM_MIGRATING)\n                    continue;\n\n                if (!slaveip) {\n                    if (connAddrPeerName(slave->conn,ip,sizeof(ip),&port) == -1)\n                        continue;\n                    slaveip = ip;\n                }\n                const char *state = replstateToString(slave->replstate);\n                if (state[0] == '\\0') continue;\n                if (slave->replstate == SLAVE_STATE_ONLINE)\n                    lag = time(NULL) - slave->repl_ack_time;\n\n                info = sdscatprintf(info,\n                    \"slave%d:ip=%s,port=%d,state=%s,\"\n                    \"offset=%lld,lag=%ld,io-thread=%d\\r\\n\",\n                    slaveid,slaveip,slave->slave_listening_port,state,\n                    slave->repl_ack_off, lag, slave->tid);\n                slaveid++;\n            }\n        }\n        info = sdscatprintf(info, FMTARGS(\n            \"master_failover_state:%s\\r\\n\", getFailoverStateString(),\n            \"master_replid:%s\\r\\n\", server.replid,\n            \"master_replid2:%s\\r\\n\", server.replid2,\n            \"master_repl_offset:%lld\\r\\n\", server.master_repl_offset,\n            \"second_repl_offset:%lld\\r\\n\", server.second_replid_offset,\n            \"repl_backlog_active:%d\\r\\n\", server.repl_backlog != NULL,\n            \"repl_backlog_size:%lld\\r\\n\", server.repl_backlog_size,\n            \"repl_backlog_first_byte_offset:%lld\\r\\n\", server.repl_backlog ? server.repl_backlog->offset : 0,\n            \"repl_backlog_histlen:%lld\\r\\n\", server.repl_backlog ? 
server.repl_backlog->histlen : 0));\n    }\n\n    /* CPU */\n    if (all_sections || (dictFind(section_dict,\"cpu\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n\n        struct rusage self_ru, c_ru;\n        getrusage(RUSAGE_SELF, &self_ru);\n        getrusage(RUSAGE_CHILDREN, &c_ru);\n        info = sdscatprintf(info,\n        \"# CPU\\r\\n\"\n        \"used_cpu_sys:%ld.%06ld\\r\\n\"\n        \"used_cpu_user:%ld.%06ld\\r\\n\"\n        \"used_cpu_sys_children:%ld.%06ld\\r\\n\"\n        \"used_cpu_user_children:%ld.%06ld\\r\\n\",\n        (long)self_ru.ru_stime.tv_sec, (long)self_ru.ru_stime.tv_usec,\n        (long)self_ru.ru_utime.tv_sec, (long)self_ru.ru_utime.tv_usec,\n        (long)c_ru.ru_stime.tv_sec, (long)c_ru.ru_stime.tv_usec,\n        (long)c_ru.ru_utime.tv_sec, (long)c_ru.ru_utime.tv_usec);\n#ifdef RUSAGE_THREAD\n        struct rusage m_ru;\n        getrusage(RUSAGE_THREAD, &m_ru);\n        info = sdscatprintf(info,\n            \"used_cpu_sys_main_thread:%ld.%06ld\\r\\n\"\n            \"used_cpu_user_main_thread:%ld.%06ld\\r\\n\",\n            (long)m_ru.ru_stime.tv_sec, (long)m_ru.ru_stime.tv_usec,\n            (long)m_ru.ru_utime.tv_sec, (long)m_ru.ru_utime.tv_usec);\n#endif  /* RUSAGE_THREAD */\n    }\n\n    /* Hotkeys */\n    if (all_sections || (dictFind(section_dict,\"hotkeys\") != NULL))\n    {\n        if (sections++) info = sdscat(info,\"\\r\\n\"); \n\n        info = sdscatprintf(info, \"# Hotkeys\\r\\n\");\n        if (server.hotkeys) {\n            info = sdscatprintf(info,\n                \"hotkeys-tracking-active:%d\\r\\n\"\n                \"hotkeys-cmd-cpu-time:%lld\\r\\n\",\n                server.hotkeys->active ? 
1 : 0,\n                server.hotkeys->cpu_time);\n        }\n    }\n\n    /* Modules */\n    if (all_sections || (dictFind(section_dict,\"module_list\") != NULL) || (dictFind(section_dict,\"modules\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info,\"# Modules\\r\\n\");\n        info = genModulesInfoString(info);\n    }\n\n    /* Command statistics */\n    if (all_sections || (dictFind(section_dict,\"commandstats\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Commandstats\\r\\n\");\n        info = genRedisInfoStringCommandStats(info, server.commands);\n    }\n\n    /* Error statistics */\n    if (all_sections || (dictFind(section_dict,\"errorstats\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscat(info, \"# Errorstats\\r\\n\");\n        raxIterator ri;\n        raxStart(&ri,server.errors);\n        raxSeek(&ri,\"^\",NULL,0);\n        struct redisError *e;\n        while(raxNext(&ri)) {\n            char *tmpsafe;\n            e = (struct redisError *) ri.data;\n            info = sdscatprintf(info,\n                \"errorstat_%.*s:count=%lld\\r\\n\",\n                (int)ri.key_len, getSafeInfoString((char *) ri.key, ri.key_len, &tmpsafe), e->count);\n            if (tmpsafe != NULL) zfree(tmpsafe);\n        }\n        raxStop(&ri);\n    }\n\n    /* Latency by percentile distribution per command */\n    if (all_sections || (dictFind(section_dict,\"latencystats\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Latencystats\\r\\n\");\n        if (server.latency_tracking_enabled) {\n            info = genRedisInfoStringLatencyStats(info, server.commands);\n        }\n    }\n\n    /* Cluster */\n    if (all_sections || (dictFind(section_dict,\"cluster\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = 
sdscatprintf(info,\n        \"# Cluster\\r\\n\"\n        \"cluster_enabled:%d\\r\\n\",\n        server.cluster_enabled);\n    }\n\n    /* Key space */\n    if (all_sections || (dictFind(section_dict,\"keyspace\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Keyspace\\r\\n\");\n        for (j = 0; j < server.dbnum; j++) {\n            long long keys, vkeys, subexpiry;\n\n            keys = kvstoreSize(server.db[j].keys);\n            vkeys = kvstoreSize(server.db[j].expires);\n            subexpiry = estoreSize(server.db[j].subexpires);\n\n            if (keys || vkeys) {\n                info = sdscatprintf(info,\n                                    \"db%d:keys=%lld,expires=%lld,avg_ttl=%lld,subexpiry=%lld\\r\\n\",\n                                    j, keys, vkeys, server.db[j].avg_ttl, subexpiry);\n            }\n        }\n    }\n\n    /* keysizes */\n    if (all_sections || (dictFind(section_dict,\"keysizes\") != NULL)) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Keysizes\\r\\n\");\n\n        static const char *type_items_str[] = {\n            [OBJ_STRING] = \"distrib_strings_sizes\",\n            [OBJ_LIST] = \"distrib_lists_items\",\n            [OBJ_SET] = \"distrib_sets_items\",\n            [OBJ_ZSET] = \"distrib_zsets_items\",\n            [OBJ_HASH] = \"distrib_hashes_items\"\n        };\n        serverAssert(sizeof(type_items_str)/sizeof(type_items_str[0]) == OBJ_TYPE_BASIC_MAX);\n        static const char *type_sizes_str[] = {\n            [OBJ_STRING] = NULL, /* Skip strings to avoid confusion with distrib_strings_sizes */\n            [OBJ_LIST] = \"distrib_lists_sizes\",\n            [OBJ_SET] = \"distrib_sets_sizes\",\n            [OBJ_ZSET] = \"distrib_zsets_sizes\",\n            [OBJ_HASH] = \"distrib_hashes_sizes\"\n        };\n        serverAssert(sizeof(type_sizes_str)/sizeof(type_sizes_str[0]) == OBJ_TYPE_BASIC_MAX);\n\n      
  for (int dbnum = 0; dbnum < server.dbnum; dbnum++) {\n            if (kvstoreSize(server.db[dbnum].keys) == 0)\n                continue;\n\n            kvstoreMetadata *meta = kvstoreGetMetadata(server.db[dbnum].keys);\n\n            /* Collection sizes distribution */\n            info = sdscatHistograms(info, dbnum, meta->keysizes_hist, type_items_str);\n\n            if (!server.memory_tracking_enabled) continue;\n\n            /* Allocation sizes distribution */\n            info = sdscatHistograms(info, dbnum, meta->allocsizes_hist, type_sizes_str);\n        }\n    }\n\n    /* Get info from modules.\n     * Returned when the user asked for \"everything\", \"modules\", or a specific module section.\n     * We're not aware of the module section names here, and we'd rather avoid the search when we can,\n     * so we proceed if there's a requested section name that's not found yet, or when the user asked\n     * for \"all\" with any additional section names. */\n    if (everything || dictFind(section_dict, \"modules\") != NULL || sections < (int)dictSize(section_dict) ||\n        (all_sections && dictSize(section_dict)))\n    {\n\n        info = modulesCollectInfo(info,\n                                  everything || dictFind(section_dict, \"modules\") != NULL ? 
NULL: section_dict,\n                                  0, /* not a crash report */\n                                  sections);\n    }\n\n    if (dictFind(section_dict, \"debug\") != NULL) {\n        if (sections++) info = sdscat(info,\"\\r\\n\");\n        info = sdscatprintf(info, \"# Debug\\r\\n\" FMTARGS(\n            \"eventloop_duration_aof_sum:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_AOF].sum,\n            \"eventloop_duration_cron_sum:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_CRON].sum,\n            \"eventloop_duration_max:%llu\\r\\n\", server.duration_stats[EL_DURATION_TYPE_EL].max,\n            \"eventloop_cmd_per_cycle_max:%lld\\r\\n\", server.el_cmd_cnt_max,\n            \"allocator_allocated_lua:%zu\\r\\n\", server.cron_malloc_stats.lua_allocator_allocated,\n            \"allocator_active_lua:%zu\\r\\n\", server.cron_malloc_stats.lua_allocator_active,\n            \"allocator_resident_lua:%zu\\r\\n\", server.cron_malloc_stats.lua_allocator_resident,\n            \"allocator_frag_bytes_lua:%zu\\r\\n\", server.cron_malloc_stats.lua_allocator_frag_smallbins_bytes));\n    }\n\n    return info;\n}\n\n/* INFO [<section> [<section> ...]] */\nvoid infoCommand(client *c) {\n    if (server.sentinel_mode) {\n        sentinelInfoCommand(c);\n        return;\n    }\n    int all_sections = 0;\n    int everything = 0;\n    dict *sections_dict = genInfoSectionDict(c->argv+1, c->argc-1, NULL, &all_sections, &everything);\n    sds info = genRedisInfoString(sections_dict, all_sections, everything);\n    addReplyVerbatim(c,info,sdslen(info),\"txt\");\n    sdsfree(info);\n    releaseInfoSectionDict(sections_dict);\n    return;\n}\n\nvoid monitorCommand(client *c) {\n    if (c->flags & CLIENT_DENY_BLOCKING) {\n        /**\n         * A client that has CLIENT_DENY_BLOCKING flag on\n         * expects a reply per command and so can't execute MONITOR. 
*/\n        addReplyError(c, \"MONITOR isn't allowed for DENY BLOCKING client\");\n        return;\n    }\n\n    /* ignore MONITOR if already slave or in monitor mode */\n    if (c->flags & CLIENT_SLAVE) return;\n\n    c->flags |= (CLIENT_SLAVE|CLIENT_MONITOR);\n    listAddNodeTail(server.monitors,c);\n    addReply(c,shared.ok);\n}\n\n/* =================================== Main! ================================ */\n\nint checkIgnoreWarning(const char *warning) {\n    int argc, j;\n    sds *argv = sdssplitargs(server.ignore_warnings, &argc);\n    if (argv == NULL)\n        return 0;\n\n    for (j = 0; j < argc; j++) {\n        char *flag = argv[j];\n        if (!strcasecmp(flag, warning))\n            break;\n    }\n    sdsfreesplitres(argv,argc);\n    return j < argc;\n}\n\n#ifdef __linux__\n#include <sys/prctl.h>\n/* Since Linux 3.5, the kernel supports setting the state of the \"THP disable\"\n * flag for the calling thread. PR_SET_THP_DISABLE is defined in linux/prctl.h */\nstatic int THPDisable(void) {\n    int ret = -EINVAL;\n\n    if (!server.disable_thp)\n        return ret;\n\n#ifdef PR_SET_THP_DISABLE\n    ret = prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0);\n#endif\n\n    return ret;\n}\n\nvoid linuxMemoryWarnings(void) {\n    sds err_msg = NULL;\n    if (checkOvercommit(&err_msg) < 0) {\n        serverLog(LL_WARNING,\"WARNING %s\", err_msg);\n        sdsfree(err_msg);\n    }\n    if (checkTHPEnabled(&err_msg) < 0) {\n        server.thp_enabled = 1;\n        if (THPDisable() == 0) {\n            server.thp_enabled = 0;\n        } else {\n            serverLog(LL_WARNING, \"WARNING %s\", err_msg);\n        }\n        sdsfree(err_msg);\n    }\n}\n#endif /* __linux__ */\n\nvoid createPidFile(void) {\n    /* If pidfile requested, but no pidfile defined, use\n     * default pidfile path */\n    if (!server.pidfile) server.pidfile = zstrdup(CONFIG_DEFAULT_PID_FILE);\n\n    /* Try to write the pid file in a best-effort way. 
*/\n    FILE *fp = fopen(server.pidfile,\"w\");\n    if (fp) {\n        fprintf(fp,\"%d\\n\",(int)getpid());\n        fclose(fp);\n    } else {\n        serverLog(LL_WARNING, \"Failed to write PID file: %s\", strerror(errno));\n    }\n}\n\nvoid daemonize(void) {\n    int fd;\n\n    if (fork() != 0) exit(0); /* parent exits */\n    setsid(); /* create a new session */\n\n    /* Every output goes to /dev/null. If Redis is daemonized but\n     * the 'logfile' is set to 'stdout' in the configuration file\n     * it will not log at all. */\n    if ((fd = open(\"/dev/null\", O_RDWR, 0)) != -1) {\n        dup2(fd, STDIN_FILENO);\n        dup2(fd, STDOUT_FILENO);\n        dup2(fd, STDERR_FILENO);\n        if (fd > STDERR_FILENO) close(fd);\n    }\n}\n\nsds getVersion(void) {\n    sds version = sdscatprintf(sdsempty(),\n        \"v=%s sha=%s:%d malloc=%s bits=%d build=%llx\",\n        REDIS_VERSION,\n        redisGitSHA1(),\n        atoi(redisGitDirty()) > 0,\n        ZMALLOC_LIB,\n        sizeof(long) == 4 ? 
32 : 64,\n        (unsigned long long) redisBuildId());\n    return version;\n}\n\nvoid usage(void) {\n    fprintf(stderr,\"Usage: ./redis-server [/path/to/redis.conf] [options] [-]\\n\");\n    fprintf(stderr,\"       ./redis-server - (read config from stdin)\\n\");\n    fprintf(stderr,\"       ./redis-server -v or --version\\n\");\n    fprintf(stderr,\"       ./redis-server -h or --help\\n\");\n    fprintf(stderr,\"       ./redis-server --test-memory <megabytes>\\n\");\n    fprintf(stderr,\"       ./redis-server --check-system\\n\");\n    fprintf(stderr,\"\\n\");\n    fprintf(stderr,\"Examples:\\n\");\n    fprintf(stderr,\"       ./redis-server (run the server with default conf)\\n\");\n    fprintf(stderr,\"       echo 'maxmemory 128mb' | ./redis-server -\\n\");\n    fprintf(stderr,\"       ./redis-server /etc/redis/6379.conf\\n\");\n    fprintf(stderr,\"       ./redis-server --port 7777\\n\");\n    fprintf(stderr,\"       ./redis-server --port 7777 --replicaof 127.0.0.1 8888\\n\");\n    fprintf(stderr,\"       ./redis-server /etc/myredis.conf --loglevel verbose -\\n\");\n    fprintf(stderr,\"       ./redis-server /etc/myredis.conf --loglevel verbose\\n\\n\");\n    fprintf(stderr,\"Sentinel mode:\\n\");\n    fprintf(stderr,\"       ./redis-server /etc/sentinel.conf --sentinel\\n\");\n    exit(1);\n}\n\nvoid redisAsciiArt(void) {\n#include \"asciilogo.h\"\n    char *buf = zmalloc(1024*16);\n    char *mode;\n\n    if (server.cluster_enabled) mode = \"cluster\";\n    else if (server.sentinel_mode) mode = \"sentinel\";\n    else mode = \"standalone\";\n\n    /* Show the ASCII logo if: log file is stdout AND stdout is a\n     * tty AND syslog logging is disabled. Also show logo if the user\n     * forced us to do so via redis.conf. 
*/\n    int show_logo = ((!server.syslog_enabled &&\n                      server.logfile[0] == '\\0' &&\n                      isatty(fileno(stdout))) ||\n                     server.always_show_logo);\n\n    if (!show_logo) {\n        serverLog(LL_NOTICE,\n            \"Running mode=%s, port=%d.\",\n            mode, server.port ? server.port : server.tls_port\n        );\n    } else {\n        snprintf(buf,1024*16,ascii_logo,\n            REDIS_VERSION,\n            redisGitSHA1(),\n            strtol(redisGitDirty(),NULL,10) > 0,\n            (sizeof(long) == 8) ? \"64\" : \"32\",\n            mode, server.port ? server.port : server.tls_port,\n            (long) getpid()\n        );\n        serverLogRaw(LL_NOTICE|LL_RAW,buf);\n    }\n    zfree(buf);\n}\n\n/* Warn if the default user allows unauthenticated access. */\nvoid warnAboutInsecureConfig(void) {\n    if ((DefaultUser->flags & USER_FLAG_NOPASS) && !(DefaultUser->flags & USER_FLAG_DISABLED)) {\n        /* Check if Redis listens on all network interfaces */\n        int bind_all_interfaces = 0;\n        for (int j = 0; j < server.bindaddr_count; j++) {\n            char *addr = server.bindaddr[j];\n            if (addr[0] == '-') addr++;\n            if (!strcmp(addr, \"*\") || !strcmp(addr, \"0.0.0.0\") ||\n                !strcmp(addr, \"::\") || !strcmp(addr, \"::*\")) {\n                bind_all_interfaces = 1;\n                break;\n            }\n        }\n\n        if (!server.protected_mode && bind_all_interfaces) {\n            serverLog(LL_WARNING,\n                \"WARNING: Redis does not require authentication and is not protected by network restrictions. \"\n                \"Redis will accept connections from any IP address on any network interface.\");\n        } else if (!server.protected_mode) {\n            serverLog(LL_WARNING,\n                \"WARNING: Redis does not require authentication. 
\"\n                \"Redis will accept connections from any IP address on the configured network interface.\");\n        } else {\n            /* protected_mode is enabled */\n            serverLog(LL_WARNING,\n                \"WARNING: Redis does not require authentication. \"\n                \"Redis will accept connections from any local client.\");\n        }\n    }\n}\n\n/* Get the server listener by type name */\nconnListener *listenerByType(const char *typename) {\n    int conn_index;\n\n    conn_index = connectionIndexByType(typename);\n    if (conn_index < 0)\n        return NULL;\n\n    return &server.listeners[conn_index];\n}\n\n/* Close original listener, re-create a new listener from the updated bind address & port */\nint changeListener(connListener *listener) {\n    /* Close old servers */\n    closeListener(listener);\n\n    /* Just close the server if port disabled */\n    if (listener->port == 0) {\n        if (server.set_proc_title) redisSetProcTitle(NULL);\n        return C_OK;\n    }\n\n    /* Re-create listener */\n    if (connListen(listener) != C_OK) {\n        return C_ERR;\n    }\n\n    /* Create event handlers */\n    if (createSocketAcceptHandler(listener, listener->ct->accept_handler) != C_OK) {\n        serverPanic(\"Unrecoverable error creating %s accept handler.\", listener->ct->get_type(NULL));\n    }\n\n    if (server.set_proc_title) redisSetProcTitle(NULL);\n\n    return C_OK;\n}\n\nstatic void sigShutdownHandler(int sig) {\n    char *msg;\n\n    switch (sig) {\n    case SIGINT:\n        msg = \"Received SIGINT scheduling shutdown...\";\n        break;\n    case SIGTERM:\n        msg = \"Received SIGTERM scheduling shutdown...\";\n        break;\n    default:\n        msg = \"Received shutdown signal, scheduling shutdown...\";\n    };\n\n    /* SIGINT is often delivered via Ctrl+C in an interactive session.\n     * If we receive the signal the second time, we interpret this as\n     * the user really wanting to quit ASAP without 
waiting to persist\n     * on disk and without waiting for lagging replicas. */\n    if (shouldShutdownAsap() && sig == SIGINT) {\n        serverLogRawFromHandler(LL_WARNING, \"You insist... exiting now.\");\n        rdbRemoveTempFile(getpid(), 1);\n        exit(1); /* Exit with an error since this was not a clean shutdown. */\n    } else if (server.loading) {\n        msg = \"Received shutdown signal during loading, scheduling shutdown.\";\n    }\n\n    serverLogRawFromHandler(LL_WARNING, msg);\n    atomicSet(server.shutdown_asap, 1);\n    atomicSet(server.last_sig_received, sig);\n}\n\nvoid setupSignalHandlers(void) {\n    struct sigaction act;\n\n    sigemptyset(&act.sa_mask);\n    act.sa_flags = 0;\n    act.sa_handler = sigShutdownHandler;\n    sigaction(SIGTERM, &act, NULL);\n    sigaction(SIGINT, &act, NULL);\n\n    setupDebugSigHandlers();\n}\n\n/* This is the signal handler for child processes. It is currently used\n * to track SIGUSR1, which we send to a child in order to terminate\n * it in a clean way, without the parent detecting an error and stopping\n * accepting writes because of a write error condition. */\nstatic void sigKillChildHandler(int sig) {\n    UNUSED(sig);\n    int level = server.in_fork_child == CHILD_TYPE_MODULE? LL_VERBOSE: LL_WARNING;\n    serverLogRawFromHandler(level, \"Received SIGUSR1 in child, exiting now.\");\n    /* We don't want to perform any IO in the child when the parent is terminating us.\n     * We don't know what our stack trace is; it is possible that we were called during an IO operation.\n     * If we were to do another IO operation, we might end up in a deadlock. */\n    exitFromChild(SERVER_CHILD_NOERROR_RETVAL, 1);\n}\n\nvoid setupChildSignalHandlers(void) {\n    struct sigaction act;\n\n    /* When the SA_SIGINFO flag is set in sa_flags then sa_sigaction is used.\n     * Otherwise, sa_handler is used. 
*/\n    sigemptyset(&act.sa_mask);\n    act.sa_flags = 0;\n    act.sa_handler = sigKillChildHandler;\n    sigaction(SIGUSR1, &act, NULL);\n}\n\n/* After fork, the child process will inherit the resources\n * of the parent process, e.g. fds (sockets or flocks).\n * The child should close the resources it does not use, so that if the\n * parent restarts it can bind/lock despite the child possibly still running. */\nvoid closeChildUnusedResourceAfterFork(void) {\n    closeListeningSockets(0);\n    if (server.cluster_enabled && server.cluster_config_file_lock_fd != -1)\n        close(server.cluster_config_file_lock_fd);  /* don't care if this fails */\n\n    /* Clear server.pidfile, this is the parent pidfile which should not\n     * be touched (or deleted) by the child (on exit / crash) */\n    zfree(server.pidfile);\n    server.pidfile = NULL;\n}\n\n/* purpose is one of CHILD_TYPE_ types */\nint redisFork(int purpose) {\n    if (isMutuallyExclusiveChildType(purpose)) {\n        if (hasActiveChildProcess()) {\n            errno = EEXIST;\n            return -1;\n        }\n\n        openChildInfoPipe();\n    }\n\n    int childpid;\n    long long start = ustime();\n    if ((childpid = fork()) == 0) {\n        /* Child.\n         *\n         * The order of setting things up follows some reasoning:\n         * Setup signal handlers first because a signal could fire at any time.\n         * Adjust OOM score before everything else to assist the OOM killer if\n         * memory resources are low.\n         */\n        server.in_fork_child = purpose;\n        setupChildSignalHandlers();\n        setOOMScoreAdj(CONFIG_OOM_BGCHILD);\n        updateDictResizePolicy();\n        dismissMemoryInChild();\n        closeChildUnusedResourceAfterFork();\n        /* Memory tracking for slots is unnecessary in child processes. 
*/\n        server.memory_tracking_enabled = 0;\n        /* Close the reading part, so that if the parent crashes, the child will\n         * get a write error and exit. */\n        if (server.child_info_pipe[0] != -1)\n            close(server.child_info_pipe[0]);\n    } else {\n        /* Parent */\n        if (childpid == -1) {\n            int fork_errno = errno;\n            if (isMutuallyExclusiveChildType(purpose)) closeChildInfoPipe();\n            errno = fork_errno;\n            return -1;\n        }\n\n        server.stat_total_forks++;\n        server.stat_fork_time = ustime()-start;\n        server.stat_fork_rate = (double) zmalloc_used_memory() * 1000000 / server.stat_fork_time / (1024*1024*1024); /* GB per second. */\n        latencyAddSampleIfNeeded(\"fork\",server.stat_fork_time/1000);\n\n        /* The child_pid and child_type are only for mutually exclusive children.\n         * Other child types should handle and store their pids in dedicated variables.\n         *\n         * Today, we allow CHILD_TYPE_LDB to run in parallel with the other fork types:\n         * - it isn't used for production, so it will not make the server less efficient\n         * - used for debugging, and we don't want to block it from running while other\n         *   forks are running (like RDB and AOF) */\n        if (isMutuallyExclusiveChildType(purpose)) {\n            server.child_pid = childpid;\n            server.child_type = purpose;\n            server.stat_current_cow_peak = 0;\n            server.stat_current_cow_bytes = 0;\n            server.stat_current_cow_updated = 0;\n            server.stat_current_save_keys_processed = 0;\n            server.stat_module_progress = 0;\n            server.stat_current_save_keys_total = dbTotalServerKeyCount();\n        }\n\n        updateDictResizePolicy();\n        moduleFireServerEvent(REDISMODULE_EVENT_FORK_CHILD,\n                              REDISMODULE_SUBEVENT_FORK_CHILD_BORN,\n                              
NULL);\n    }\n    return childpid;\n}\n\nvoid sendChildCowInfo(childInfoType info_type, char *pname) {\n    sendChildInfoGeneric(info_type, 0, -1, pname);\n}\n\nvoid sendChildInfo(childInfoType info_type, size_t keys, char *pname) {\n    sendChildInfoGeneric(info_type, keys, -1, pname);\n}\n\n/* Try to release pages back to the OS directly (bypassing the allocator),\n * in an effort to decrease CoW during fork. For small allocations, we can't\n * release any full page, so in an effort to avoid getting the size of the\n * allocation from the allocator (malloc_size) when we already know it's small,\n * we check the size_hint. If the size is not already known, passing a size_hint\n * of 0 will lead to checking the real size of the allocation.\n * Also please note that the size may not be accurate, so in order to make this\n * solution effective, the judgement for releasing memory pages should not be\n * too strict. */\nvoid dismissMemory(void* ptr, size_t size_hint) {\n    if (ptr == NULL) return;\n\n    /* madvise(MADV_DONTNEED) cannot release pages if the memory region\n     * is too small, so we only try to release memory whose size\n     * is more than half of a page. */\n    if (size_hint && size_hint <= server.page_size/2) return;\n\n    zmadvise_dontneed(ptr);\n}\n\n/* Dismiss big chunks of memory inside a client structure, see dismissMemory() */\nvoid dismissClientMemory(client *c) {\n    /* Dismiss client query buffer and static reply buffer. */\n    dismissMemory(c->buf, c->buf_usable_size);\n    if (c->querybuf) dismissSds(c->querybuf);\n    /* Dismiss argv array only if we estimate it contains a big buffer. 
*/\n    if (c->argc && c->all_argv_len_sum/c->argc >= server.page_size) {\n        for (int i = 0; i < c->argc; i++) {\n            dismissObject(c->argv[i], 0);\n        }\n    }\n    if (c->argc) dismissMemory(c->argv, c->argc*sizeof(robj*));\n\n    /* Dismiss the reply array only if the average buffer size is bigger\n     * than a page. */\n    if (listLength(c->reply) &&\n        c->reply_bytes/listLength(c->reply) >= server.page_size)\n    {\n        listIter li;\n        listNode *ln;\n        listRewind(c->reply, &li);\n        while ((ln = listNext(&li))) {\n            clientReplyBlock *bulk = listNodeValue(ln);\n            /* Default bulk size is 16k, but it carries extra data, so it may\n             * occupy 20k according to the jemalloc bin size, if using jemalloc. */\n            if (bulk) dismissMemory(bulk, bulk->size);\n        }\n    }\n}\n\n/* Dismiss the hash table bucket arrays of a dict. */\nvoid dismissDictBucketsMemory(dict *d) {\n    if (!d) return;\n    dismissMemory(d->ht_table[0], DICTHT_SIZE(d->ht_size_exp[0]) * sizeof(dictEntry*));\n    dismissMemory(d->ht_table[1], DICTHT_SIZE(d->ht_size_exp[1]) * sizeof(dictEntry*));\n}\n\n/* Dismiss the hash table bucket arrays for all dicts in the given kvstore. */\nvoid dismissKvstoreBucketsMemory(kvstore *kvs) {\n    for (int didx = 0; didx < kvstoreNumDicts(kvs); didx++) {\n        dismissDictBucketsMemory(kvstoreGetDict(kvs, didx));\n    }\n}\n\n/* In the child process, we don't need some buffers anymore, and these are\n * likely to change in the parent when there's heavy write traffic.\n * We dismiss them right away, to avoid CoW.\n * See dismissMemory(). */\nvoid dismissMemoryInChild(void) {\n    /* madvise(MADV_DONTNEED) may not work if Transparent Huge Pages is enabled. */\n    if (server.thp_enabled) return;\n\n    /* Currently we use zmadvise_dontneed only when we use jemalloc with Linux,\n     * so we avoid these pointless loops when they're not going to do anything. 
*/\n#if defined(USE_JEMALLOC) && defined(__linux__)\n    listIter li;\n    listNode *ln;\n\n    /* Dismiss replication buffer. We don't need to separately dismiss the replication\n     * backlog and replicas' output buffers, because they just reference the global\n     * replication buffer but don't cost real memory. */\n    listRewind(server.repl_buffer_blocks, &li);\n    while((ln = listNext(&li))) {\n        replBufBlock *o = listNodeValue(ln);\n        dismissMemory(o, o->size);\n    }\n\n    /* Dismiss accumulated repl buffer on replica. */\n    if (server.repl_full_sync_buffer.blocks) {\n        listRewind(server.repl_full_sync_buffer.blocks, &li);\n        while((ln = listNext(&li))) {\n            replDataBufBlock *o = listNodeValue(ln);\n            dismissMemory(o, o->size);\n        }\n    }\n\n    /* Dismiss all clients' memory. */\n    listRewind(server.clients, &li);\n    while((ln = listNext(&li))) {\n        client *c = listNodeValue(ln);\n        dismissClientMemory(c);\n    }\n\n    /* Dismiss expires kvstore bucket arrays since the child process never\n     * accesses them; expire times are embedded in key objects. */\n    if (server.in_fork_child == CHILD_TYPE_RDB || server.in_fork_child == CHILD_TYPE_AOF) {\n        for (int dbid = 0; dbid < server.dbnum; dbid++) {\n            dismissKvstoreBucketsMemory(server.db[dbid].expires);\n        }\n    }\n#endif\n}\n\nvoid memtest(size_t megabytes, int passes);\n\n/* Returns 1 if there is --sentinel among the arguments or if the\n * executable name contains \"redis-sentinel\". */\nint checkForSentinelMode(int argc, char **argv, char *exec_name) {\n    if (strstr(exec_name,\"redis-sentinel\") != NULL) return 1;\n\n    for (int j = 1; j < argc; j++)\n        if (!strcmp(argv[j],\"--sentinel\")) return 1;\n    return 0;\n}\n\n/* Function called at startup to load RDB or AOF file in memory. 
*/\nvoid loadDataFromDisk(void) {\n    long long start = ustime();\n    if (server.aof_state == AOF_ON) {\n        int ret = loadAppendOnlyFiles(server.aof_manifest);\n        if (ret == AOF_FAILED || ret == AOF_OPEN_ERR)\n            exit(1);\n        if (ret != AOF_NOT_EXIST)\n            serverLog(LL_NOTICE, \"DB loaded from append only file: %.3f seconds\", (float)(ustime()-start)/1000000);\n        updateReplOffsetAndResetEndOffset();\n    } else {\n        rdbSaveInfo rsi = RDB_SAVE_INFO_INIT;\n        int rsi_is_valid = 0;\n        errno = 0; /* Prevent a stale value from affecting error checking */\n        int rdb_flags = RDBFLAGS_NONE;\n        if (iAmMaster()) {\n            /* Master may delete expired keys when loading, we should\n             * propagate expire to replication backlog. */\n            createReplicationBacklog();\n            rdb_flags |= RDBFLAGS_FEED_REPL;\n        }\n        int rdb_load_ret = rdbLoad(server.rdb_filename, &rsi, rdb_flags);\n        if (rdb_load_ret == RDB_OK) {\n            serverLog(LL_NOTICE,\"DB loaded from disk: %.3f seconds\",\n                (float)(ustime()-start)/1000000);\n\n            /* Restore the replication ID / offset from the RDB file. */\n            if (rsi.repl_id_is_set &&\n                rsi.repl_offset != -1 &&\n                /* Note that older implementations may save a repl_stream_db\n                 * of -1 inside the RDB file in a wrong way, see more\n                 * information in function rdbPopulateSaveInfo. */\n                rsi.repl_stream_db != -1)\n            {\n                rsi_is_valid = 1;\n                if (!iAmMaster()) {\n                    memcpy(server.replid,rsi.repl_id,sizeof(server.replid));\n                    server.master_repl_offset = rsi.repl_offset;\n                    /* If this is a replica, create a cached master from this\n                     * information, in order to allow partial resynchronizations\n                     * with masters. 
*/\n                    replicationCacheMasterUsingMyself();\n                    selectDb(server.cached_master,rsi.repl_stream_db);\n                } else {\n                    /* If this is a master, we can save the replication info\n                     * as secondary ID and offset, in order to allow replicas\n                     * to partially resynchronize with masters. */\n                    memcpy(server.replid2,rsi.repl_id,sizeof(server.replid));\n                    server.second_replid_offset = rsi.repl_offset+1;\n                    /* Rebase master_repl_offset from rsi.repl_offset. */\n                    server.master_repl_offset += rsi.repl_offset;\n                    serverAssert(server.repl_backlog);\n                    server.repl_backlog->offset = server.master_repl_offset -\n                              server.repl_backlog->histlen + 1;\n                    rebaseReplicationBuffer(rsi.repl_offset);\n                    server.repl_no_slaves_since = time(NULL);\n                }\n            }\n        } else if (rdb_load_ret != RDB_NOT_EXIST) {\n            serverLog(LL_WARNING, \"Fatal error loading the DB, check server logs. Exiting.\");\n            exit(1);\n        }\n\n        /* We always create the replication backlog if the server is a master, since\n         * we put DELs in it when loading expired keys from the RDB. But if the\n         * RDB doesn't have replication info, or there is no RDB at all, partial\n         * resynchronization is not possible, so to avoid the extra memory\n         * of the replication backlog, we drop it. */\n        if (!rsi_is_valid && server.repl_backlog)\n            freeReplicationBacklog();\n    }\n}\n\nvoid redisOutOfMemoryHandler(size_t allocation_size) {\n    serverLog(LL_WARNING,\"Out Of Memory allocating %zu bytes!\",\n        allocation_size);\n    serverPanic(\"Redis aborting for OUT OF MEMORY. 
Allocating %zu bytes!\",\n        allocation_size);\n}\n\n/* Callback for sdstemplate on proc-title-template. See redis.conf for\n * supported variables.\n */\nstatic sds redisProcTitleGetVariable(const sds varname, void *arg)\n{\n    if (!strcmp(varname, \"title\")) {\n        return sdsnew(arg);\n    } else if (!strcmp(varname, \"listen-addr\")) {\n        if (server.port || server.tls_port)\n            return sdscatprintf(sdsempty(), \"%s:%u\",\n                                server.bindaddr_count ? server.bindaddr[0] : \"*\",\n                                server.port ? server.port : server.tls_port);\n        else\n            return sdscatprintf(sdsempty(), \"unixsocket:%s\", server.unixsocket);\n    } else if (!strcmp(varname, \"server-mode\")) {\n        if (server.cluster_enabled) return sdsnew(\"[cluster]\");\n        else if (server.sentinel_mode) return sdsnew(\"[sentinel]\");\n        else return sdsempty();\n    } else if (!strcmp(varname, \"config-file\")) {\n        return sdsnew(server.configfile ? server.configfile : \"-\");\n    } else if (!strcmp(varname, \"port\")) {\n        return sdscatprintf(sdsempty(), \"%u\", server.port);\n    } else if (!strcmp(varname, \"tls-port\")) {\n        return sdscatprintf(sdsempty(), \"%u\", server.tls_port);\n    } else if (!strcmp(varname, \"unixsocket\")) {\n        return sdsnew(server.unixsocket);\n    } else\n        return NULL;    /* Unknown variable name */\n}\n\n/* Expand the specified proc-title-template string and return a newly\n * allocated sds, or NULL. */\nstatic sds expandProcTitleTemplate(const char *template, const char *title) {\n    sds res = sdstemplate(template, redisProcTitleGetVariable, (void *) title);\n    if (!res)\n        return NULL;\n    return sdstrim(res, \" \");\n}\n/* Validate the specified template, returns 1 if valid or 0 otherwise. 
*/\nint validateProcTitleTemplate(const char *template) {\n    int ok = 1;\n    sds res = expandProcTitleTemplate(template, \"\");\n    if (!res)\n        return 0;\n    if (sdslen(res) == 0) ok = 0;\n    sdsfree(res);\n    return ok;\n}\n\nint redisSetProcTitle(char *title) {\n#ifdef USE_SETPROCTITLE\n    if (!title) title = server.exec_argv[0];\n    sds proc_title = expandProcTitleTemplate(server.proc_title_template, title);\n    if (!proc_title) return C_ERR;  /* Not likely, proc_title_template is validated */\n\n    setproctitle(\"%s\", proc_title);\n    sdsfree(proc_title);\n#else\n    UNUSED(title);\n#endif\n\n    return C_OK;\n}\n\nvoid redisSetCpuAffinity(const char *cpulist) {\n#ifdef USE_SETCPUAFFINITY\n    setcpuaffinity(cpulist);\n#else\n    UNUSED(cpulist);\n#endif\n}\n\n/* Send a notify message to systemd. Returns sd_notify return code which is\n * a positive number on success. */\nint redisCommunicateSystemd(const char *sd_notify_msg) {\n#ifdef HAVE_LIBSYSTEMD\n    int ret = sd_notify(0, sd_notify_msg);\n\n    if (ret == 0)\n        serverLog(LL_WARNING, \"systemd supervision error: NOTIFY_SOCKET not found!\");\n    else if (ret < 0)\n        serverLog(LL_WARNING, \"systemd supervision error: sd_notify: %d\", ret);\n    return ret;\n#else\n    UNUSED(sd_notify_msg);\n    return 0;\n#endif\n}\n\n/* Attempt to set up upstart supervision. Returns 1 if successful. */\nstatic int redisSupervisedUpstart(void) {\n    const char *upstart_job = getenv(\"UPSTART_JOB\");\n\n    if (!upstart_job) {\n        serverLog(LL_WARNING,\n                \"upstart supervision requested, but UPSTART_JOB not found!\");\n        return 0;\n    }\n\n    serverLog(LL_NOTICE, \"supervised by upstart, will stop to signal readiness.\");\n    raise(SIGSTOP);\n    unsetenv(\"UPSTART_JOB\");\n    return 1;\n}\n\n/* Attempt to set up systemd supervision. Returns 1 if successful. 
*/\nstatic int redisSupervisedSystemd(void) {\n#ifndef HAVE_LIBSYSTEMD\n    serverLog(LL_WARNING,\n            \"systemd supervision requested or auto-detected, but Redis is compiled without libsystemd support!\");\n    return 0;\n#else\n    if (redisCommunicateSystemd(\"STATUS=Redis is loading...\\n\") <= 0)\n        return 0;\n    serverLog(LL_NOTICE,\n        \"Supervised by systemd. Please make sure you set appropriate values for TimeoutStartSec and TimeoutStopSec in your service unit.\");\n    return 1;\n#endif\n}\n\nint redisIsSupervised(int mode) {\n    int ret = 0;\n\n    if (mode == SUPERVISED_AUTODETECT) {\n        if (getenv(\"UPSTART_JOB\")) {\n            serverLog(LL_VERBOSE, \"Upstart supervision detected.\");\n            mode = SUPERVISED_UPSTART;\n        } else if (getenv(\"NOTIFY_SOCKET\")) {\n            serverLog(LL_VERBOSE, \"Systemd supervision detected.\");\n            mode = SUPERVISED_SYSTEMD;\n        }\n    }\n\n    switch (mode) {\n        case SUPERVISED_UPSTART:\n            ret = redisSupervisedUpstart();\n            break;\n        case SUPERVISED_SYSTEMD:\n            ret = redisSupervisedSystemd();\n            break;\n        default:\n            break;\n    }\n\n    if (ret)\n        server.supervised_mode = mode;\n\n    return ret;\n}\n\nint iAmMaster(void) {\n    return ((!server.cluster_enabled && server.masterhost == NULL) ||\n            (server.cluster_enabled && clusterNodeIsMaster(getMyClusterNode())));\n}\n\n#ifdef REDIS_TEST\n#include \"testhelp.h\"\n#include \"intset.h\"  /* Compact integer set structure */\n\nint __failed_tests = 0;\nint __test_num = 0;\n\n/* The flags are the following:\n* --accurate:     Runs tests with more iterations.\n* --large-memory: Enables tests that consume more than 100mb. 
*/\ntypedef int redisTestProc(int argc, char **argv, int flags);\nint bitopsTest(int argc, char **argv, int flags);\nint zsetTest(int argc, char **argv, int flags);\nint vectorTest(int argc, char **argv, int flags);\nstruct redisTest {\n    char *name;\n    redisTestProc *proc;\n    int test_count;\n    int passed_count;\n} redisTests[] = {\n    {\"ziplist\", ziplistTest},\n    {\"quicklist\", quicklistTest},\n    {\"intset\", intsetTest},\n    {\"zipmap\", zipmapTest},\n    {\"sha1test\", sha1Test},\n    {\"util\", utilTest},\n    {\"endianconv\", endianconvTest},\n    {\"crc64\", crc64Test},\n    {\"zmalloc\", zmalloc_test},\n    {\"sds\", sdsTest},\n    {\"mstr\", mstrTest},\n    {\"dict\", dictTest},\n    {\"listpack\", listpackTest},\n    {\"kvstore\", kvstoreTest},\n    {\"fwtree\", fwtreeTest},\n    {\"estore\", estoreTest},\n    {\"ebuckets\", ebucketsTest},\n    {\"vector\", vectorTest},\n    {\"bitmap\", bitopsTest},\n    {\"rax\", raxTest},\n    {\"zset\", zsetTest},\n    {\"topk\", chkTopKTest},\n    {\"fastfloat\", fastFloatTest},\n};\nredisTestProc *getTestProcByName(const char *name) {\n    int numtests = sizeof(redisTests)/sizeof(struct redisTest);\n    for (int j = 0; j < numtests; j++) {\n        if (!strcasecmp(name,redisTests[j].name)) {\n            return redisTests[j].proc;\n        }\n    }\n    return NULL;\n}\n#endif\n\nint main(int argc, char **argv) {\n    struct timeval tv;\n    int j;\n    char config_from_stdin = 0;\n\n#ifdef REDIS_TEST\n    monotonicInit(); /* Required for dict tests, that are relying on monotime during dict rehashing. 
*/\n    if (argc >= 3 && !strcasecmp(argv[1], \"test\")) {\n        int flags = 0;\n        for (j = 3; j < argc; j++) {\n            char *arg = argv[j];\n            if (!strcasecmp(arg, \"--accurate\")) flags |= REDIS_TEST_ACCURATE;\n            else if (!strcasecmp(arg, \"--large-memory\")) flags |= REDIS_TEST_LARGE_MEMORY;\n            else if (!strcasecmp(arg, \"--valgrind\")) flags |= REDIS_TEST_VALGRIND;\n            else if (!strcasecmp(arg, \"--verbose\")) flags |= REDIS_TEST_VERBOSE;\n        }\n\n        if (!strcasecmp(argv[2], \"all\")) {\n            int numtests = sizeof(redisTests)/sizeof(struct redisTest);\n            int failed_num = 0;\n            for (j = 0; j < numtests; j++) {\n                int before_total = __test_num;\n                int before_failed = __failed_tests;\n                redisTests[j].proc(argc,argv,flags);\n                redisTests[j].test_count = __test_num - before_total;\n                redisTests[j].passed_count = redisTests[j].test_count - (__failed_tests - before_failed);\n                if (redisTests[j].passed_count < redisTests[j].test_count)\n                    failed_num++;\n            }\n\n            printf(\"\\n========== Test Suite Summary ==========\\n\\n\");\n            for (j = 0; j < numtests; j++) {\n                int failed = redisTests[j].passed_count < redisTests[j].test_count;\n                printf(\"  %s %-15s (%d/%d passed)%s\\n\",\n                       failed ? \"\\033[31m[failed]\" : \"\\033[32m[ok]    \\033[0m\",\n                       redisTests[j].name,\n                       redisTests[j].passed_count, redisTests[j].test_count,\n                       failed ? \"\\033[0m\" : \"\");\n            }\n\n            printf(\"\\n  Test Groups: %s%d passed\\033[0m, %s%d failed\\033[0m, %d total\\n\",\n                   failed_num ? \"\" : \"\\033[32m\", numtests-failed_num,\n                   failed_num ? 
\"\\033[31m\" : \"\", failed_num, numtests);\n\n            test_report();\n        } else {\n            redisTestProc *proc = getTestProcByName(argv[2]);\n            if (!proc) return -1; /* test not found */\n            proc(argc,argv,flags);\n            test_report();\n        }\n\n        return __failed_tests ? 1 : 0;\n    }\n#endif\n\n    /* We need to initialize our libraries, and the server configuration. */\n#ifdef INIT_SETPROCTITLE_REPLACEMENT\n    spt_init(argc, argv);\n#endif\n    tzset(); /* Populates 'timezone' global. */\n    zmalloc_set_oom_handler(redisOutOfMemoryHandler);\n\n    /* To achieve entropy, in case of containers, their time() and getpid() can\n     * be the same. But value of tv_usec is fast enough to make the difference */\n    gettimeofday(&tv,NULL);\n    srand(time(NULL)^getpid()^tv.tv_usec);\n    srandom(time(NULL)^getpid()^tv.tv_usec);\n    init_genrand64(((long long) tv.tv_sec * 1000000 + tv.tv_usec) ^ getpid());\n    crc64_init();\n\n    /* Store umask value. Because umask(2) only offers a set-and-get API we have\n     * to reset it and restore it back. We do this early to avoid a potential\n     * race condition with threads that could be creating files or directories.\n     */\n    umask(server.umask = umask(0777));\n\n    uint8_t hashseed[16];\n    getRandomBytes(hashseed,sizeof(hashseed));\n    dictSetHashFunctionSeed(hashseed);\n\n    char *exec_name = strrchr(argv[0], '/');\n    if (exec_name == NULL) exec_name = argv[0];\n    server.sentinel_mode = checkForSentinelMode(argc,argv, exec_name);\n    initServerConfig();\n    ACLInit(); /* The ACL subsystem must be initialized ASAP because the\n                  basic networking code and client creation depends on it. */\n    moduleInitModulesSystem();\n    connTypeInitialize();\n    keyMetaInit();\n\n    /* Store the executable path and arguments in a safe place in order\n     * to be able to restart the server later. 
*/\n    server.executable = getAbsolutePath(argv[0]);\n    server.exec_argv = zmalloc(sizeof(char*)*(argc+1));\n    server.exec_argv[argc] = NULL;\n    for (j = 0; j < argc; j++) server.exec_argv[j] = zstrdup(argv[j]);\n\n    /* We need to init sentinel right now as parsing the configuration file\n     * in sentinel mode will have the effect of populating the sentinel\n     * data structures with master nodes to monitor. */\n    if (server.sentinel_mode) {\n        initSentinelConfig();\n        initSentinel();\n    }\n\n    /* Check if we need to start in redis-check-rdb/aof mode. We just execute\n     * the program main. However the program is part of the Redis executable\n     * so that we can easily execute an RDB check on loading errors. */\n    if (strstr(exec_name,\"redis-check-rdb\") != NULL)\n        redis_check_rdb_main(argc,argv,NULL);\n    else if (strstr(exec_name,\"redis-check-aof\") != NULL)\n        redis_check_aof_main(argc,argv);\n\n    if (argc >= 2) {\n        j = 1; /* First option to parse in argv[] */\n        sds options = sdsempty();\n\n        /* Handle special options --help and --version */\n        if (strcmp(argv[1], \"-v\") == 0 ||\n            strcmp(argv[1], \"--version\") == 0)\n        {\n            sds version = getVersion();\n            printf(\"Redis server %s\\n\", version);\n            sdsfree(version);\n            exit(0);\n        }\n        if (strcmp(argv[1], \"--help\") == 0 ||\n            strcmp(argv[1], \"-h\") == 0) usage();\n        if (strcmp(argv[1], \"--test-memory\") == 0) {\n            if (argc == 3) {\n                memtest(atoi(argv[2]),50);\n                exit(0);\n            } else {\n                fprintf(stderr,\"Please specify the amount of memory to test in megabytes.\\n\");\n                fprintf(stderr,\"Example: ./redis-server --test-memory 4096\\n\\n\");\n                exit(1);\n            }\n        } if (strcmp(argv[1], \"--check-system\") == 0) {\n            exit(syscheck() ? 
0 : 1);\n        }\n        /* Parse command line options\n         * Precedence wise, File, stdin, explicit options -- last config is the one that matters.\n         *\n         * First argument is the config file name? */\n        if (argv[1][0] != '-') {\n            /* Replace the config file in server.exec_argv with its absolute path. */\n            server.configfile = getAbsolutePath(argv[1]);\n            zfree(server.exec_argv[1]);\n            server.exec_argv[1] = zstrdup(server.configfile);\n            j = 2; // Skip this arg when parsing options\n        }\n        sds *argv_tmp;\n        int argc_tmp;\n        int handled_last_config_arg = 1;\n        while(j < argc) {\n            /* Either first or last argument - Should we read config from stdin? */\n            if (argv[j][0] == '-' && argv[j][1] == '\\0' && (j == 1 || j == argc-1)) {\n                config_from_stdin = 1;\n            }\n            /* All the other options are parsed and conceptually appended to the\n             * configuration file. For instance --port 6380 will generate the\n             * string \"port 6380\\n\" to be parsed after the actual config file\n             * and stdin input are parsed (if they exist).\n             * Only consider that if the last config has at least one argument. 
*/\n            else if (handled_last_config_arg && argv[j][0] == '-' && argv[j][1] == '-') {\n                /* Option name */\n                if (sdslen(options)) options = sdscat(options,\"\\n\");\n                /* argv[j]+2 for removing the preceding `--` */\n                options = sdscat(options,argv[j]+2);\n                options = sdscat(options,\" \");\n\n                argv_tmp = sdssplitargs(argv[j], &argc_tmp);\n                if (argc_tmp == 1) {\n                    /* Means that we only have one option name, like --port or \"--port \" */\n                    handled_last_config_arg = 0;\n\n                    if ((j != argc-1) && argv[j+1][0] == '-' && argv[j+1][1] == '-' &&\n                        !strcasecmp(argv[j], \"--save\"))\n                    {\n                        /* Special case: handle some things like `--save --config value`.\n                         * In this case, if the next argument starts with `--`, we will reset\n                         * the handled_last_config_arg flag and append an empty \"\" config value\n                         * to the options, so it will become `--save \"\" --config value`.\n                         * We are doing it to be compatible with pre 7.0 behavior (which we\n                         * broke in #10660, 7.0.1), since there might be users who generate\n                         * a command line from an array and when it's empty that's what they produce. */\n                        options = sdscat(options, \"\\\"\\\"\");\n                        handled_last_config_arg = 1;\n                    }\n                    else if ((j == argc-1) && !strcasecmp(argv[j], \"--save\")) {\n                        /* Special case: when empty save is the last argument.\n                         * In this case, we append an empty \"\" config value to the options,\n                         * so it will become `--save \"\"` and will follow the same reset handling. 
*/\n                        options = sdscat(options, \"\\\"\\\"\");\n                    }\n                    else if ((j != argc-1) && argv[j+1][0] == '-' && argv[j+1][1] == '-' &&\n                        !strcasecmp(argv[j], \"--sentinel\"))\n                    {\n                        /* Special case: handle some things like `--sentinel --config value`.\n                         * It is a pseudo config option with no value. In this case, if the next\n                         * argument starts with `--`, we will reset the handled_last_config_arg flag.\n                         * We are doing it to be compatible with pre 7.0 behavior (which we\n                         * broke in #10660, 7.0.1). */\n                        options = sdscat(options, \"\");\n                        handled_last_config_arg = 1;\n                    }\n                    else if ((j == argc-1) && !strcasecmp(argv[j], \"--sentinel\")) {\n                        /* Special case: when --sentinel is the last argument.\n                         * It is a pseudo config option with no value. In this case, do nothing.\n                         * We are doing it to be compatible with pre 7.0 behavior (which we\n                         * broke in #10660, 7.0.1). */\n                        options = sdscat(options, \"\");\n                    }\n                } else {\n                    /* Means that we are passing both the config name and its value in the same arg,\n                     * like \"--port 6380\", so we need to reset the handled_last_config_arg flag. 
*/\n                    handled_last_config_arg = 1;\n                }\n                sdsfreesplitres(argv_tmp, argc_tmp);\n            } else {\n                /* Option argument */\n                options = sdscatrepr(options,argv[j],strlen(argv[j]));\n                options = sdscat(options,\" \");\n                handled_last_config_arg = 1;\n            }\n            j++;\n        }\n\n        loadServerConfig(server.configfile, config_from_stdin, options);\n        if (server.sentinel_mode) loadSentinelConfigFromQueue();\n        sdsfree(options);\n    }\n    if (server.sentinel_mode) sentinelCheckConfigFile();\n\n    /* Do system checks */\n#ifdef __linux__\n    linuxMemoryWarnings();\n    sds err_msg = NULL;\n    if (checkXenClocksource(&err_msg) < 0) {\n        serverLog(LL_WARNING, \"WARNING %s\", err_msg);\n        sdsfree(err_msg);\n    }\n#if defined (__arm64__)\n    int ret;\n    if ((ret = checkLinuxMadvFreeForkBug(&err_msg)) <= 0) {\n        if (ret < 0) {\n            serverLog(LL_WARNING, \"WARNING %s\", err_msg);\n            sdsfree(err_msg);\n        } else\n            serverLog(LL_WARNING, \"Failed to test the kernel for a bug that could lead to data corruption during background save. \"\n                                  \"Your system could be affected, please report this error.\");\n        if (!checkIgnoreWarning(\"ARM64-COW-BUG\")) {\n            serverLog(LL_WARNING,\"Redis will now exit to prevent data corruption. 
\"\n                                 \"Note that it is possible to suppress this warning by setting the following config: ignore-warnings ARM64-COW-BUG\");\n            exit(1);\n        }\n    }\n#endif /* __arm64__ */\n#endif /* __linux__ */\n\n    /* Daemonize if needed */\n    server.supervised = redisIsSupervised(server.supervised_mode);\n    int background = server.daemonize && !server.supervised;\n    if (background) daemonize();\n\n    serverLog(LL_NOTICE, \"oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\");\n    serverLog(LL_NOTICE,\n        \"Redis version=%s, bits=%d, commit=%s, modified=%d, pid=%d, just started\",\n            REDIS_VERSION,\n            (sizeof(long) == 8) ? 64 : 32,\n            redisGitSHA1(),\n            strtol(redisGitDirty(),NULL,10) > 0,\n            (int)getpid());\n\n    if (argc == 1) {\n        serverLog(LL_WARNING, \"Warning: no config file specified, using the default config. In order to specify a config file use %s /path/to/redis.conf\", argv[0]);\n    } else {\n        serverLog(LL_NOTICE, \"Configuration loaded\");\n    }\n\n    initServer();\n    if (background || server.pidfile) createPidFile();\n    if (server.set_proc_title) redisSetProcTitle(NULL);\n    redisAsciiArt();\n    checkTcpBacklogSettings();\n    if (server.cluster_enabled) {\n        /* clusterCommonInit() initializes slot-stats required by clusterInit() */\n        clusterCommonInit();\n        clusterInit();\n    }\n    if (!server.sentinel_mode) {\n        moduleInitModulesSystemLast();\n        moduleLoadInternalModules();\n        moduleLoadFromQueue();\n    }\n    ACLLoadUsersAtStartup();\n    initListeners();\n    if (server.cluster_enabled) {\n        clusterInitLast();\n    }\n    InitServerLast();\n\n    if (!server.sentinel_mode) {\n        /* Things not needed when running in Sentinel mode. 
*/\n        serverLog(LL_NOTICE,\"Server initialized\");\n        aofLoadManifestFromDisk();\n        loadDataFromDisk();\n        aofOpenIfNeededOnServerStart();\n        aofDelHistoryFiles();\n        /* While loading data, we delay applying \"appendonly\" config change.\n         * If there was a config change while we were inside loadDataFromDisk()\n         * above, we'll apply it here. */\n        applyAppendOnlyConfig();\n\n        if (server.cluster_enabled) {\n            serverAssert(verifyClusterConfigWithData() == C_OK);\n        }\n\n        for (j = 0; j < CONN_TYPE_MAX; j++) {\n            connListener *listener = &server.listeners[j];\n            if (listener->ct == NULL)\n                continue;\n\n            serverLog(LL_NOTICE,\"Ready to accept connections %s\", listener->ct->get_type(NULL));\n        }\n\n        if (server.supervised_mode == SUPERVISED_SYSTEMD) {\n            if (!server.masterhost) {\n                redisCommunicateSystemd(\"STATUS=Ready to accept connections\\n\");\n            } else {\n                redisCommunicateSystemd(\"STATUS=Ready to accept connections in read-only mode. Waiting for MASTER <-> REPLICA sync\\n\");\n            }\n            redisCommunicateSystemd(\"READY=1\\n\");\n        }\n        warnAboutInsecureConfig();\n    } else {\n        sentinelIsRunning();\n        if (server.supervised_mode == SUPERVISED_SYSTEMD) {\n            redisCommunicateSystemd(\"STATUS=Ready to accept connections\\n\");\n            redisCommunicateSystemd(\"READY=1\\n\");\n        }\n    }\n\n    /* Warning the user about suspicious maxmemory setting. */\n    if (server.maxmemory > 0 && server.maxmemory < 1024*1024) {\n        serverLog(LL_WARNING,\"WARNING: You specified a maxmemory value that is less than 1MB (current value is %llu bytes). 
Are you sure this is what you really want?\", server.maxmemory);\n    }\n\n    redisSetCpuAffinity(server.server_cpulist);\n    setOOMScoreAdj(-1);\n\n    aeMain(server.el);\n    aeDeleteEventLoop(server.el);\n    return 0;\n}\n\n/* The End */\n"
  },
  {
    "path": "src/server.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#ifndef __REDIS_H\n#define __REDIS_H\n\n#include \"fmacros.h\"\n#include \"config.h\"\n#include \"solarisfixes.h\"\n#include \"rio.h\"\n#include \"atomicvar.h\"\n#include \"commands.h\"\n#include \"object.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <stddef.h>\n#include <string.h>\n#include <time.h>\n#include <limits.h>\n#include <unistd.h>\n#include <errno.h>\n#include <inttypes.h>\n#include <pthread.h>\n#include <syslog.h>\n#include <netinet/in.h>\n#include <sys/socket.h>\n#include <lua.h>\n#include <signal.h>\n\n#ifdef HAVE_LIBSYSTEMD\n#include <systemd/sd-daemon.h>\n#endif\n\ntypedef long long mstime_t; /* millisecond time type. */\ntypedef long long ustime_t; /* microsecond time type. 
*/\n\n#include \"ae.h\"      /* Event driven programming library */\n#include \"sds.h\"     /* Dynamic safe strings */\n#include \"entry.h\"   /* Entry objects (field-value pairs with optional expiration) */\n#include \"ebuckets.h\" /* expiry data structure */\n#include \"dict.h\"    /* Hash tables */\n#include \"kvstore.h\" /* Slot-based hash table */\n#include \"estore.h\"  /* Expiration store */\n#include \"adlist.h\"  /* Linked lists */\n#include \"zmalloc.h\" /* total memory usage aware version of malloc/free */\n#include \"anet.h\"    /* Networking the easy way */\n#include \"version.h\" /* Version macro */\n#include \"util.h\"    /* Misc functions useful in many places */\n#include \"latency.h\" /* Latency monitor API */\n#include \"sparkline.h\" /* ASCII graphs API */\n#include \"quicklist.h\"  /* Lists are encoded as linked lists of\n                           N-elements flat arrays */\n#include \"rax.h\"     /* Radix tree */\n#include \"connection.h\" /* Connection abstraction */\n#include \"eventnotifier.h\" /* Event notification */\n#include \"memory_prefetch.h\"\n\n/* Forward declarations needed by redismodule.h and keymeta.h */\nstruct redisObject;\nstruct RedisModule;\n\n/* This is a structure used to export some meta-information such as dbid to the module. */\nstruct RedisModuleKeyOptCtx {\n    struct redisObject *from_key, *to_key; /* Optional name of key processed, NULL when unknown.\n                                              In most cases, only 'from_key' is valid, but in callbacks\n                                              such as `copy2`, both 'from_key' and 'to_key' are valid. */\n    int from_dbid, to_dbid;                /* The dbid of the key being processed, -1 when unknown.\n                                              In most cases, only 'from_dbid' is valid, but in callbacks such\n                                              as `copy2`, 'from_dbid' and 'to_dbid' are both valid. 
*/\n}; \n\n#define REDISMODULE_CORE 1\n\n#include \"redismodule.h\"    /* Redis modules API defines. */\n\n/* Following includes allow test functions to be called from Redis main() */\n#include \"zipmap.h\"\n#include \"ziplist.h\" /* Compact list data structure */\n#include \"sha1.h\"\n#include \"endianconv.h\"\n#include \"crc64.h\"\n#include \"keymeta.h\"\n\nstruct hdr_histogram;\n\n/* helpers */\n#define numElements(x) (sizeof(x)/sizeof((x)[0]))\n\n/* min/max */\n#undef min\n#undef max\n#define min(a, b) ((a) < (b) ? (a) : (b))\n#define max(a, b) ((a) > (b) ? (a) : (b))\n\n/* Get the pointer of the outer struct from a member address */\n#define redis_member2struct(struct_name, member_name, member_addr) \\\n            ((struct_name *)((char*)member_addr - offsetof(struct_name, member_name)))\n\n/* Error codes */\n#define C_OK                    0\n#define C_ERR                   -1\n#define C_RETRY                 -2\n\n/* Static server configuration */\n#define CONFIG_DEFAULT_HZ        10             /* Time interrupt calls/sec. */\n#define CONFIG_MIN_HZ            1\n#define CONFIG_MAX_HZ            500\n#define MAX_CLIENTS_PER_CLOCK_TICK 200          /* HZ is adapted based on that. */\n#define CRON_DBS_PER_CALL 16\n#define CRON_DICTS_PER_DB 16\n#define NET_MAX_WRITES_PER_EVENT (1024*64)\n#define PROTO_SHARED_SELECT_CMDS 10\n#define OBJ_SHARED_INTEGERS 10000\n#define OBJ_SHARED_BULKHDR_LEN 32\n#define OBJ_SHARED_HDR_STRLEN(_len_) (((_len_) < 10) ? 4 : 5) /* see shared.mbulkhdr etc. */\n#define LOG_MAX_LEN    1024 /* Default maximum length of syslog messages.*/\n#define AOF_REWRITE_ITEMS_PER_CMD 64\n#define AOF_ANNOTATION_LINE_MAX_LEN 1024\n#define CONFIG_RUN_ID_SIZE 40\n#define RDB_EOF_MARK_SIZE 40\n#define CONFIG_REPL_BACKLOG_MIN_SIZE (1024*16)          /* 16k */\n#define CONFIG_BGSAVE_RETRY_DELAY 5 /* Wait a few secs before trying again. 
*/\n#define CONFIG_DEFAULT_PID_FILE \"/var/run/redis.pid\"\n#define CONFIG_DEFAULT_BINDADDR_COUNT 2\n#define CONFIG_DEFAULT_BINDADDR { \"*\", \"-::*\" }\n#define NET_HOST_STR_LEN 256 /* Longest valid hostname */\n#define NET_IP_STR_LEN 46 /* INET6_ADDRSTRLEN is 46, but we need to be sure */\n#define NET_ADDR_STR_LEN (NET_IP_STR_LEN+32) /* Must be enough for ip:port */\n#define NET_HOST_PORT_STR_LEN (NET_HOST_STR_LEN+32) /* Must be enough for hostname:port */\n#define CONFIG_BINDADDR_MAX 16\n#define CONFIG_MIN_RESERVED_FDS 32\n#define CONFIG_DEFAULT_PROC_TITLE_TEMPLATE \"{title} {listen-addr} {server-mode}\"\n#define INCREMENTAL_REHASHING_THRESHOLD_US 1000\n#define CLIENTS_CRON_MIN_ITERATIONS 5\n\n/* Stream IDMP configuration limits */\n#define CONFIG_STREAM_IDMP_MIN_DURATION 1        /* Min IDMP duration in seconds. */\n#define CONFIG_STREAM_IDMP_MAX_DURATION 86400    /* Max IDMP duration in seconds (24 hours). */\n#define CONFIG_STREAM_IDMP_MIN_MAXSIZE 1         /* Min IDMP max entries. */\n#define CONFIG_STREAM_IDMP_MAX_MAXSIZE 10000     /* Max IDMP max entries. */\n\n/* Bucket sizes for client eviction pools. Each bucket stores clients with\n * memory usage of up to twice the size of the bucket below it. */\n#define CLIENT_MEM_USAGE_BUCKET_MIN_LOG 15 /* Bucket sizes start at up to 32KB (2^15) */\n#define CLIENT_MEM_USAGE_BUCKET_MAX_LOG 33 /* Bucket for largest clients: sizes above 4GB (2^32) */\n#define CLIENT_MEM_USAGE_BUCKETS (1+CLIENT_MEM_USAGE_BUCKET_MAX_LOG-CLIENT_MEM_USAGE_BUCKET_MIN_LOG)\n\n#define ACTIVE_EXPIRE_CYCLE_SLOW 0\n#define ACTIVE_EXPIRE_CYCLE_FAST 1\n\n/* Children process will exit with this status code to signal that the\n * process terminated without an error: this is useful in order to kill\n * a saving child (RDB or AOF one), without triggering in the parent the\n * write protection that is normally turned on on write errors.\n * Usually children that are terminated with SIGUSR1 will exit with this\n * special code. 
*/\n#define SERVER_CHILD_NOERROR_RETVAL    255\n\n/* Reading copy-on-write info is sometimes expensive and may slow down child\n * processes that report it continuously. We measure the cost of obtaining it\n * and hold back additional reading based on this factor. */\n#define CHILD_COW_DUTY_CYCLE           100\n\n/* Instantaneous metrics tracking. */\n#define STATS_METRIC_SAMPLES 16     /* Number of samples per metric. */\n#define STATS_METRIC_COMMAND 0      /* Number of commands executed. */\n#define STATS_METRIC_NET_INPUT 1    /* Bytes read from network. */\n#define STATS_METRIC_NET_OUTPUT 2   /* Bytes written to network. */\n#define STATS_METRIC_NET_INPUT_REPLICATION 3   /* Bytes read from network during replication. */\n#define STATS_METRIC_NET_OUTPUT_REPLICATION 4   /* Bytes written to network during replication. */\n#define STATS_METRIC_EL_CYCLE 5     /* Number of eventloop cycles. */\n#define STATS_METRIC_EL_DURATION 6  /* Eventloop duration. */\n#define STATS_METRIC_COUNT 7\n\n/* Protocol and I/O related defines */\n#define PROTO_IOBUF_LEN         (1024*16)  /* Generic I/O buffer size */\n#define PROTO_REPLY_CHUNK_BYTES (16*1024) /* 16k output buffer */\n#define PROTO_INLINE_MAX_SIZE   (1024*64) /* Max size of inline reads */\n#define PROTO_MBULK_BIG_ARG     (1024*32)\n#define PROTO_RESIZE_THRESHOLD  (1024*32) /* Threshold for determining whether to resize query buffer */\n#define PROTO_REPLY_MIN_BYTES   (1024) /* the lower limit on reply buffer size */\n#define REDIS_AUTOSYNC_BYTES (1024*1024*4) /* Sync file every 4MB. 
*/\n\n#define REPLY_BUFFER_DEFAULT_PEAK_RESET_TIME 5000 /* 5 seconds */\n\n/* Reply copy avoidance thresholds */\n#define COPY_AVOID_MIN_IO_THREADS 7          /* Minimum number of IO threads for copy avoidance */\n#define COPY_AVOID_MIN_STRING_SIZE 16384     /* Minimum bulk string size for copy avoidance (no IO threads) */\n#define COPY_AVOID_MIN_STRING_SIZE_THREADED 65536  /* Minimum bulk string size for copy avoidance (with IO threads) */\n\n/* When configuring the server eventloop, we set it up so that the total number\n * of file descriptors we can handle is server.maxclients + RESERVED_FDS +\n * a few more to stay safe. Since RESERVED_FDS defaults to 32, we add 96\n * in order to make sure we don't over-provision by more than 128 fds. */\n#define CONFIG_FDSET_INCR (CONFIG_MIN_RESERVED_FDS+96)\n\n/* Default lookahead value */\n#define REDIS_DEFAULT_LOOKAHEAD 16\n\n/* OOM Score Adjustment classes. */\n#define CONFIG_OOM_MASTER 0\n#define CONFIG_OOM_REPLICA 1\n#define CONFIG_OOM_BGCHILD 2\n#define CONFIG_OOM_COUNT 3\n\nextern int configOOMScoreAdjValuesDefaults[CONFIG_OOM_COUNT];\n\n/* Hash table parameters */\n#define HASHTABLE_MAX_LOAD_FACTOR 1.618   /* Maximum hash table load factor. */\n\n/* Max number of IO threads */\n#define IO_THREADS_MAX_NUM 128\n\n/* To make IO threads and main thread run in parallel, we will transfer clients\n * between them if the number of clients in the pending list reaches this value. */\n#define IO_THREAD_MAX_PENDING_CLIENTS 16\n\n/* Main thread id for doing IO work: whether we enable or disable IO threads,\n * the main thread always does IO work, so we can consider that the main thread\n * is the io thread 0. */\n#define IOTHREAD_MAIN_THREAD_ID 0\n\n/* Command flags. Please check the definition of struct redisCommand in this file\n * for more information about the meaning of every flag. 
*/\n#define CMD_WRITE (1ULL<<0)\n#define CMD_READONLY (1ULL<<1)\n#define CMD_DENYOOM (1ULL<<2)\n#define CMD_MODULE (1ULL<<3)           /* Command exported by module. */\n#define CMD_ADMIN (1ULL<<4)\n#define CMD_PUBSUB (1ULL<<5)\n#define CMD_NOSCRIPT (1ULL<<6)\n#define CMD_BLOCKING (1ULL<<8)       /* Has potential to block. */\n#define CMD_LOADING (1ULL<<9)\n#define CMD_STALE (1ULL<<10)\n#define CMD_SKIP_MONITOR (1ULL<<11)\n#define CMD_SKIP_SLOWLOG (1ULL<<12)\n#define CMD_ASKING (1ULL<<13)\n#define CMD_FAST (1ULL<<14)\n#define CMD_NO_AUTH (1ULL<<15)\n#define CMD_MAY_REPLICATE (1ULL<<16)\n#define CMD_SENTINEL (1ULL<<17)\n#define CMD_ONLY_SENTINEL (1ULL<<18)\n#define CMD_NO_MANDATORY_KEYS (1ULL<<19)\n#define CMD_PROTECTED (1ULL<<20)\n#define CMD_MODULE_GETKEYS (1ULL<<21)  /* Use the modules getkeys interface. */\n#define CMD_MODULE_NO_CLUSTER (1ULL<<22) /* Deny on Redis Cluster. */\n#define CMD_NO_ASYNC_LOADING (1ULL<<23)\n#define CMD_NO_MULTI (1ULL<<24)\n#define CMD_MOVABLE_KEYS (1ULL<<25) /* The legacy range spec doesn't cover all keys.\n                                     * Populated by populateCommandLegacyRangeSpec. */\n#define CMD_ALLOW_BUSY ((1ULL<<26))\n#define CMD_MODULE_GETCHANNELS (1ULL<<27)  /* Use the modules getchannels interface. */\n#define CMD_TOUCHES_ARBITRARY_KEYS (1ULL<<28)\n#define CMD_INTERNAL (1ULL<<29) /* Internal command. */\n\n/* Command flags that describe ACLs categories. 
*/\n#define ACL_CATEGORY_KEYSPACE (1ULL<<0)\n#define ACL_CATEGORY_READ (1ULL<<1)\n#define ACL_CATEGORY_WRITE (1ULL<<2)\n#define ACL_CATEGORY_SET (1ULL<<3)\n#define ACL_CATEGORY_SORTEDSET (1ULL<<4)\n#define ACL_CATEGORY_LIST (1ULL<<5)\n#define ACL_CATEGORY_HASH (1ULL<<6)\n#define ACL_CATEGORY_STRING (1ULL<<7)\n#define ACL_CATEGORY_BITMAP (1ULL<<8)\n#define ACL_CATEGORY_HYPERLOGLOG (1ULL<<9)\n#define ACL_CATEGORY_GEO (1ULL<<10)\n#define ACL_CATEGORY_STREAM (1ULL<<11)\n#define ACL_CATEGORY_PUBSUB (1ULL<<12)\n#define ACL_CATEGORY_ADMIN (1ULL<<13)\n#define ACL_CATEGORY_FAST (1ULL<<14)\n#define ACL_CATEGORY_SLOW (1ULL<<15)\n#define ACL_CATEGORY_BLOCKING (1ULL<<16)\n#define ACL_CATEGORY_DANGEROUS (1ULL<<17)\n#define ACL_CATEGORY_CONNECTION (1ULL<<18)\n#define ACL_CATEGORY_TRANSACTION (1ULL<<19)\n#define ACL_CATEGORY_SCRIPTING (1ULL<<20)\n#define ACL_CATEGORY_RATE_LIMIT (1ULL<<21)\n\n/* Key-spec flags *\n * -------------- */\n/* The following refer to what the command actually does with the value or metadata\n * of the key, and not necessarily the user data or how it affects it.\n * Each key-spec must have exactly one of these. Any operation that's not\n * distinctly deletion, overwrite or read-only would be marked as RW. */\n#define CMD_KEY_RO (1ULL<<0)     /* Read-Only - Reads the value of the key, but\n                                  * doesn't necessarily return it. */\n#define CMD_KEY_RW (1ULL<<1)     /* Read-Write - Reads and modifies/deletes\n                                  * the data stored in the value of the key or\n                                  * its metadata. */\n#define CMD_KEY_OW (1ULL<<2)     /* Overwrite - Overwrites the data stored in\n                                  * the value of the key. */\n#define CMD_KEY_RM (1ULL<<3)     /* Deletes the key without reading its value. */\n/* The following refer to user data inside the value of the key, not the metadata\n * like LRU, type, cardinality. 
It refers to the logical operation on the user's\n * data (actual input strings / TTL), being used / returned / copied / changed.\n * It doesn't refer to modification or returning of metadata (like type, count,\n * presence of data). Any write that's not INSERT or DELETE would be an UPDATE.\n * Each key-spec may have one of the writes with or without access, or none: */\n#define CMD_KEY_ACCESS (1ULL<<4) /* Returns, copies or uses the user data from\n                                  * the value of the key. */\n#define CMD_KEY_UPDATE (1ULL<<5) /* Updates data to the value, new value may\n                                  * depend on the old value. */\n#define CMD_KEY_INSERT (1ULL<<6) /* Adds data to the value with no chance of\n                                  * modification or deletion of existing data. */\n#define CMD_KEY_DELETE (1ULL<<7) /* Explicitly deletes some content\n                                  * from the value of the key. */\n/* Other flags: */\n#define CMD_KEY_NOT_KEY (1ULL<<8)     /* A 'fake' key that should be routed\n                                       * like a key in cluster mode but is\n                                       * excluded from other key checks. 
*/\n#define CMD_KEY_INCOMPLETE (1ULL<<9)  /* Means that the keyspec might not point\n                                       * to all keys it should cover */\n#define CMD_KEY_VARIABLE_FLAGS (1ULL<<10)  /* Means that some keys might have\n                                            * different flags depending on arguments */\n#define CMD_KEY_PREFIX (1ULL<<11) /* Given key represents a prefix of a set of keys */\n\n/* Key flags for when access type is unknown */\n#define CMD_KEY_FULL_ACCESS (CMD_KEY_RW | CMD_KEY_ACCESS | CMD_KEY_UPDATE)\n\n/* Key flags for how key is removed */\n#define DB_FLAG_KEY_NONE 0\n#define DB_FLAG_KEY_DELETED (1ULL<<0)\n#define DB_FLAG_KEY_EXPIRED (1ULL<<1)\n#define DB_FLAG_KEY_EVICTED (1ULL<<2)\n#define DB_FLAG_KEY_OVERWRITE (1ULL<<3)\n#define DB_FLAG_NO_UPDATE_KEYSIZES (1ULL<<4) /* Don't update keysizes histograms */\n\n/* Channel flags share the same flag space as the key flags */\n#define CMD_CHANNEL_PATTERN (1ULL<<11)     /* The argument is a channel pattern */\n#define CMD_CHANNEL_SUBSCRIBE (1ULL<<12)   /* The command subscribes to channels */\n#define CMD_CHANNEL_UNSUBSCRIBE (1ULL<<13) /* The command unsubscribes from channels */\n#define CMD_CHANNEL_PUBLISH (1ULL<<14)     /* The command publishes to channels. */\n\n/* AOF states */\n#define AOF_OFF 0             /* AOF is off */\n#define AOF_ON 1              /* AOF is on */\n#define AOF_WAIT_REWRITE 2    /* AOF waits for rewrite to start appending */\n\n/* AOF return values for loadAppendOnlyFiles() and loadSingleAppendOnlyFile() */\n#define AOF_OK 0\n#define AOF_NOT_EXIST 1\n#define AOF_EMPTY 2\n#define AOF_OPEN_ERR 3\n#define AOF_FAILED 4\n#define AOF_TRUNCATED 5\n#define AOF_BROKEN_RECOVERED 6\n\n/* RDB return values for rdbLoad. */\n#define RDB_OK 0\n#define RDB_NOT_EXIST 1 /* RDB file doesn't exist. */\n#define RDB_FAILED 2 /* Failed to load the RDB file. 
*/\n\n/* Command doc flags */\n#define CMD_DOC_NONE 0\n#define CMD_DOC_DEPRECATED (1<<0) /* Command is deprecated */\n#define CMD_DOC_SYSCMD (1<<1) /* System (internal) command */\n\n/* Client flags */\n#define CLIENT_SLAVE (1<<0)   /* This client is a replica */\n#define CLIENT_MASTER (1<<1)  /* This client is a master */\n#define CLIENT_MONITOR (1<<2) /* This client is a slave monitor, see MONITOR */\n#define CLIENT_MULTI (1<<3)   /* This client is in a MULTI context */\n#define CLIENT_BLOCKED (1<<4) /* The client is waiting in a blocking operation */\n#define CLIENT_DIRTY_CAS (1<<5) /* Watched keys modified. EXEC will fail. */\n#define CLIENT_CLOSE_AFTER_REPLY (1<<6) /* Close after writing entire reply. */\n#define CLIENT_UNBLOCKED (1<<7) /* This client was unblocked and is stored in\n                                  server.unblocked_clients */\n#define CLIENT_SCRIPT (1<<8) /* This is a non connected client used by Lua */\n#define CLIENT_ASKING (1<<9)     /* Client issued the ASKING command */\n#define CLIENT_CLOSE_ASAP (1<<10)/* Close this client ASAP */\n#define CLIENT_UNIX_SOCKET (1<<11) /* Client connected via Unix domain socket */\n#define CLIENT_DIRTY_EXEC (1<<12)  /* EXEC will fail for errors while queueing */\n#define CLIENT_MASTER_FORCE_REPLY (1<<13)  /* Queue replies even if it is master */\n#define CLIENT_FORCE_AOF (1<<14)   /* Force AOF propagation of current cmd. */\n#define CLIENT_FORCE_REPL (1<<15)  /* Force replication of current cmd. */\n#define CLIENT_PRE_PSYNC (1<<16)   /* Instance doesn't understand PSYNC. */\n#define CLIENT_READONLY (1<<17)    /* Cluster client is in read-only state. */\n#define CLIENT_PUBSUB (1<<18)      /* Client is in Pub/Sub mode. */\n#define CLIENT_PREVENT_AOF_PROP (1<<19)  /* Don't propagate to AOF. */\n#define CLIENT_PREVENT_REPL_PROP (1<<20)  /* Don't propagate to slaves. 
*/\n#define CLIENT_PREVENT_PROP (CLIENT_PREVENT_AOF_PROP|CLIENT_PREVENT_REPL_PROP)\n#define CLIENT_PENDING_WRITE (1<<21) /* Client has output to send but a write\n                                        handler is not yet installed. */\n#define CLIENT_REPLY_OFF (1<<22)   /* Don't send replies to client. */\n#define CLIENT_REPLY_SKIP_NEXT (1<<23)  /* Set CLIENT_REPLY_SKIP for next cmd */\n#define CLIENT_REPLY_SKIP (1<<24)  /* Don't send just this reply. */\n#define CLIENT_LUA_DEBUG (1<<25)  /* Run EVAL in debug mode. */\n#define CLIENT_LUA_DEBUG_SYNC (1<<26)  /* EVAL debugging without fork() */\n#define CLIENT_MODULE (1<<27) /* Non connected client used by some module. */\n#define CLIENT_PROTECTED (1<<28) /* Client should not be freed for now. */\n#define CLIENT_EXECUTING_COMMAND (1<<29) /* Indicates that the client is currently in the process of handling\n                                          a command. Usually this will be marked only during call();\n                                          however, blocked clients might have this flag kept until they\n                                          try to reprocess the command. */\n\n#define CLIENT_PENDING_COMMAND (1<<30) /* Indicates the client has a fully\n                                        * parsed command ready for execution. */\n#define CLIENT_TRACKING (1ULL<<31) /* Client enabled keys tracking in order to\n                                   perform client side caching. */\n#define CLIENT_TRACKING_BROKEN_REDIR (1ULL<<32) /* Target client is invalid. */\n#define CLIENT_TRACKING_BCAST (1ULL<<33) /* Tracking in BCAST mode. */\n#define CLIENT_TRACKING_OPTIN (1ULL<<34)  /* Tracking in opt-in mode. */\n#define CLIENT_TRACKING_OPTOUT (1ULL<<35) /* Tracking in opt-out mode. */\n#define CLIENT_TRACKING_CACHING (1ULL<<36) /* CACHING yes/no was given,\n                                              depending on optin/optout mode. 
*/\n#define CLIENT_TRACKING_NOLOOP (1ULL<<37) /* Don't send invalidation messages\n                                             about writes performed by myself.*/\n#define CLIENT_IN_TO_TABLE (1ULL<<38) /* This client is in the timeout table. */\n#define CLIENT_PROTOCOL_ERROR (1ULL<<39) /* Protocol error chatting with it. */\n#define CLIENT_CLOSE_AFTER_COMMAND (1ULL<<40) /* Close after executing commands\n                                               * and writing entire reply. */\n#define CLIENT_DENY_BLOCKING (1ULL<<41) /* Indicate that the client should not be blocked.\n                                           currently, turned on inside MULTI, Lua, RM_Call,\n                                           and AOF client */\n#define CLIENT_REPL_RDBONLY (1ULL<<42) /* This client is a replica that only wants\n                                          RDB without replication buffer. */\n#define CLIENT_NO_EVICT (1ULL<<43) /* This client is protected against client\n                                      memory eviction. */\n#define CLIENT_ALLOW_OOM (1ULL<<44) /* Client used by RM_Call is allowed to fully execute\n                                       scripts even when in OOM */\n#define CLIENT_NO_TOUCH (1ULL<<45) /* This client will not touch LFU/LRU stats. */\n#define CLIENT_PUSHING (1ULL<<46) /* This client is pushing notifications. */\n#define CLIENT_MODULE_AUTH_HAS_RESULT (1ULL<<47) /* Indicates a client in the middle of module based\n                                                    auth had been authenticated from the Module. */\n#define CLIENT_MODULE_PREVENT_AOF_PROP (1ULL<<48) /* Module client do not want to propagate to AOF */\n#define CLIENT_MODULE_PREVENT_REPL_PROP (1ULL<<49) /* Module client do not want to propagate to replica */\n#define CLIENT_REEXECUTING_COMMAND (1ULL<<50) /* The client is re-executing the command. 
*/\n#define CLIENT_REPL_RDB_CHANNEL (1ULL<<51)      /* Client which is used for rdb delivery as part of rdb channel replication */\n#define CLIENT_INTERNAL (1ULL<<52) /* Internal client connection */\n#define CLIENT_ASM_MIGRATING (1ULL<<53) /* Client is migrating RDB/stream data during atomic slot migration. */\n#define CLIENT_ASM_IMPORTING (1ULL<<54) /* Client is importing RDB/stream data during atomic slot migration. */\n\n/* Any flag that does not let optimize FLUSH SYNC to run it in bg as blocking client ASYNC */\n#define CLIENT_AVOID_BLOCKING_ASYNC_FLUSH (CLIENT_DENY_BLOCKING|CLIENT_MULTI|CLIENT_LUA_DEBUG|CLIENT_LUA_DEBUG_SYNC|CLIENT_MODULE)\n\n/* Max deferred objects to be freed by IO thread for each client. */\n#define CLIENT_MAX_DEFERRED_OBJECTS 32\n\n/* Client flags for client IO */\n#define CLIENT_IO_READ_ENABLED (1ULL<<0) /* Client can read from socket. */\n#define CLIENT_IO_WRITE_ENABLED (1ULL<<1) /* Client can write to socket. */\n#define CLIENT_IO_PENDING_COMMAND (1ULL<<2) /* Similar to CLIENT_PENDING_COMMAND. */\n#define CLIENT_IO_REUSABLE_QUERYBUFFER (1ULL<<3) /* The client is using the reusable query buffer. */\n#define CLIENT_IO_CLOSE_ASAP (1ULL<<4) /* Close this client ASAP in IO thread. */\n#define CLIENT_IO_PENDING_CRON (1ULL<<5)  /* The client is pending cron job, to be processed in main thread. */\n\n/* Definitions for client read errors. These error codes are used to indicate\n * various issues that can occur while reading or parsing data from a client. 
*/\n#define CLIENT_READ_TOO_BIG_INLINE_REQUEST 1\n#define CLIENT_READ_UNBALANCED_QUOTES 2\n#define CLIENT_READ_MASTER_USING_INLINE_PROTOCAL 3\n#define CLIENT_READ_TOO_BIG_MBULK_COUNT_STRING 4\n#define CLIENT_READ_TOO_BIG_BUCK_COUNT_STRING 5\n#define CLIENT_READ_EXPECTED_DOLLAR 6\n#define CLIENT_READ_INVALID_BUCK_LENGTH 7\n#define CLIENT_READ_UNAUTH_BUCK_LENGTH 8\n#define CLIENT_READ_INVALID_MULTIBUCK_LENGTH 9\n#define CLIENT_READ_UNAUTH_MBUCK_COUNT 10\n#define CLIENT_READ_CONN_DISCONNECTED 11\n#define CLIENT_READ_CONN_CLOSED 12\n#define CLIENT_READ_REACHED_MAX_QUERYBUF 13\n#define CLIENT_READ_COMMAND_NOT_FOUND 14\n#define CLIENT_READ_BAD_ARITY 15\n#define CLIENT_READ_CROSS_SLOT 16\n\n/* Client block type (btype field in client structure)\n * if CLIENT_BLOCKED flag is set. */\ntypedef enum blocking_type {\n    BLOCKED_NONE,    /* Not blocked, no CLIENT_BLOCKED flag set. */\n    BLOCKED_LIST,    /* BLPOP & co. */\n    BLOCKED_WAIT,    /* WAIT for synchronous replication. */\n    BLOCKED_WAITAOF, /* WAITAOF for AOF file fsync. */\n    BLOCKED_MODULE,  /* Blocked by a loadable module. */\n    BLOCKED_STREAM,  /* XREAD. */\n    BLOCKED_ZSET,    /* BZPOP et al. */\n    BLOCKED_POSTPONE, /* Blocked by processCommand, re-try processing later. */\n    BLOCKED_POSTPONE_TRIM, /* Master client is blocked due to an active trim job. */\n    BLOCKED_SHUTDOWN, /* SHUTDOWN. */\n    BLOCKED_LAZYFREE, /* LAZYFREE */\n    BLOCKED_NUM,      /* Number of blocked states. */\n    BLOCKED_END       /* End of enumeration */\n} blocking_type;\n\n/* Client request types */\n#define PROTO_REQ_INLINE 1\n#define PROTO_REQ_MULTIBULK 2\n\n/* Client classes for client limits, currently used only for\n * the max-client-output-buffer limit implementation. */\n#define CLIENT_TYPE_NORMAL 0 /* Normal req-reply clients + MONITORs */\n#define CLIENT_TYPE_SLAVE 1  /* Slaves. */\n#define CLIENT_TYPE_PUBSUB 2 /* Clients subscribed to PubSub channels. */\n#define CLIENT_TYPE_MASTER 3 /* Master. 
*/\n#define CLIENT_TYPE_COUNT 4  /* Total number of client types. */\n#define CLIENT_TYPE_OBUF_COUNT 3 /* Number of clients to expose to output\n                                    buffer configuration. Just the first\n                                    three: normal, slave, pubsub. */\n\n/* Slave replication state. Used in server.repl_state for slaves to remember\n * what to do next. */\ntypedef enum {\n    REPL_STATE_NONE = 0,            /* No active replication */\n    REPL_STATE_CONNECT,             /* Must connect to master */\n    REPL_STATE_CONNECTING,          /* Connecting to master */\n    /* --- Handshake states, must be ordered --- */\n    REPL_STATE_RECEIVE_PING_REPLY,  /* Wait for PING reply */\n    REPL_STATE_SEND_HANDSHAKE,      /* Send handshake sequence to master */\n    REPL_STATE_RECEIVE_AUTH_REPLY,  /* Wait for AUTH reply */\n    REPL_STATE_RECEIVE_PORT_REPLY,  /* Wait for REPLCONF reply */\n    REPL_STATE_RECEIVE_IP_REPLY,    /* Wait for REPLCONF reply */\n    REPL_STATE_RECEIVE_REQ_REPLY,   /* Wait for REPLCONF reply */\n    REPL_STATE_RECEIVE_CAPA_REPLY,  /* Wait for REPLCONF reply */\n    REPL_STATE_SEND_PSYNC,          /* Send PSYNC */\n    REPL_STATE_RECEIVE_PSYNC_REPLY, /* Wait for PSYNC reply */\n    /* --- End of handshake states --- */\n    REPL_STATE_TRANSFER,        /* Receiving .rdb from master */\n    REPL_STATE_CONNECTED,       /* Connected to master */\n} repl_state;\n\n/* Replica rdb channel replication state. Used in server.repl_rdb_ch_state for\n * replicas to remember what to do next. 
*/\ntypedef enum {\n    REPL_RDB_CH_STATE_NONE = 0,         /* No active rdb channel sync */\n    REPL_RDB_CH_SEND_HANDSHAKE,         /* Send handshake sequence to master */\n    REPL_RDB_CH_RECEIVE_AUTH_REPLY,     /* Wait for AUTH reply */\n    REPL_RDB_CH_RECEIVE_REPLCONF_REPLY, /* Wait for REPLCONF reply */\n    REPL_RDB_CH_RECEIVE_FULLRESYNC,     /* Wait for +FULLRESYNC reply */\n    REPL_RDB_CH_RDB_LOADING,            /* Loading rdb using rdb channel */\n} repl_rdb_channel_state;\n\n#define REPL_MAIN_CH_NONE           (1 << 0)\n#define REPL_MAIN_CH_ACCUMULATE_BUF (1 << 1)\n#define REPL_MAIN_CH_STREAMING_BUF  (1 << 2)\n#define REPL_MAIN_CH_CLOSE_ASAP     (1 << 3)\n\n/* Replication debug flags for testing. */\n#define REPL_DEBUG_PAUSE_NONE             (1 << 0)\n#define REPL_DEBUG_AFTER_FORK             (1 << 1)\n#define REPL_DEBUG_BEFORE_RDB_CHANNEL     (1 << 2)\n#define REPL_DEBUG_ON_STREAMING_REPL_BUF  (1 << 3)\n\n/* The state of an in progress coordinated failover */\ntypedef enum {\n    NO_FAILOVER = 0,        /* No failover in progress */\n    FAILOVER_WAIT_FOR_SYNC, /* Waiting for target replica to catch up */\n    FAILOVER_IN_PROGRESS    /* Waiting for target replica to accept\n                             * PSYNC FAILOVER request. */\n} failover_state;\n\n/* State of slaves from the POV of the master. Used in client->replstate.\n * In SEND_BULK and ONLINE state the slave receives new updates\n * in its output queue. In the WAIT_BGSAVE states instead the server is waiting\n * to start the next background saving in order to send updates to it. */\n#define SLAVE_STATE_WAIT_BGSAVE_START 6 /* We need to produce a new RDB file. */\n#define SLAVE_STATE_WAIT_BGSAVE_END 7 /* Waiting RDB file creation to finish. */\n#define SLAVE_STATE_SEND_BULK 8 /* Sending RDB file to slave. */\n#define SLAVE_STATE_ONLINE 9 /* RDB file transmitted, sending just updates. 
*/\n#define SLAVE_STATE_RDB_TRANSMITTED 10 /* RDB file transmitted - This state is used only for\n                                        * a replica that only wants RDB without replication buffer. */\n#define SLAVE_STATE_WAIT_RDB_CHANNEL 11 /* Main channel of replica is connected,\n                                         * we are waiting for the rdbchannel connection to start delivery. */\n#define SLAVE_STATE_SEND_BULK_AND_STREAM 12 /* Main channel of a replica which uses rdb channel replication.\n                                             * Sending RDB file and replication stream in parallel. */\n\n/* Slave capabilities. */\n#define SLAVE_CAPA_NONE             0\n#define SLAVE_CAPA_EOF              (1<<0) /* Can parse the RDB EOF streaming format. */\n#define SLAVE_CAPA_PSYNC2           (1<<1) /* Supports PSYNC2 protocol. */\n#define SLAVE_CAPA_RDB_CHANNEL_REPL (1<<2) /* Supports rdb channel replication during full sync */\n\n/* Slave requirements. NO_COMPRESS and NO_CHECKSUM are hints rather than strict\n * requirements - the replica can handle compressed/checksummed RDB either way,\n * but prefers to skip them for diskless loading since they become redundant.\n * We reuse the REQ mechanism for simplicity, avoiding a separate HINT bitfield. 
*/\n#define SLAVE_REQ_NONE                  0\n#define SLAVE_REQ_RDB_EXCLUDE_DATA      (1 << 0) /* Exclude data from RDB */\n#define SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS (1 << 1) /* Exclude functions from RDB */\n#define SLAVE_REQ_SLOTS_SNAPSHOT        (1 << 2) /* Only slots snapshot is required */\n#define SLAVE_REQ_RDB_CHANNEL           (1 << 3) /* Use rdb channel replication, transfer RDB background */\n#define SLAVE_REQ_RDB_NO_COMPRESS       (1 << 4) /* Don't enable RDB compression */\n#define SLAVE_REQ_RDB_NO_CHECKSUM       (1 << 5) /* Don't enable RDB checksum */\n/* Mask of all bits in the slave requirements bitfield that represent non-standard (filtered) RDB requirements */\n#define SLAVE_REQ_RDB_MASK (SLAVE_REQ_RDB_EXCLUDE_DATA | SLAVE_REQ_RDB_EXCLUDE_FUNCTIONS | SLAVE_REQ_SLOTS_SNAPSHOT)\n\n/* Synchronous read timeout - slave side */\n#define CONFIG_REPL_SYNCIO_TIMEOUT 5\n\n/* The default number of replication backlog blocks to trim per call. */\n#define REPL_BACKLOG_TRIM_BLOCKS_PER_CALL 64\n\n/* In order to quickly find the requested offset for PSYNC requests,\n * we index some nodes in the replication buffer linked list into a rax. */\n#define REPL_BACKLOG_INDEX_PER_BLOCKS 64\n\n/* List related stuff */\n#define LIST_HEAD 0\n#define LIST_TAIL 1\n#define ZSET_MIN 0\n#define ZSET_MAX 1\n\n/* Sort operations */\n#define SORT_OP_GET 0\n\n/* Log levels */\n#define LL_DEBUG 0\n#define LL_VERBOSE 1\n#define LL_NOTICE 2\n#define LL_WARNING 3\n#define LL_NOTHING 4\n#define LL_RAW (1<<10) /* Modifier to log without timestamp */\n\n/* Supervision options */\n#define SUPERVISED_NONE 0\n#define SUPERVISED_AUTODETECT 1\n#define SUPERVISED_SYSTEMD 2\n#define SUPERVISED_UPSTART 3\n\n/* Anti-warning macro... 
*/\n#define UNUSED(V) ((void) V)\n\n#define ZSKIPLIST_MAXLEVEL 32 /* Should be enough for 2^64 elements */\n#define ZSKIPLIST_P 0.25      /* Skiplist P = 1/4 */\n#define ZSKIPLIST_MAX_SEARCH 10\n\n/* Append only defines */\n#define AOF_FSYNC_NO 0\n#define AOF_FSYNC_ALWAYS 1\n#define AOF_FSYNC_EVERYSEC 2\n\n/* Replication diskless load defines */\n#define REPL_DISKLESS_LOAD_DISABLED 0\n#define REPL_DISKLESS_LOAD_WHEN_DB_EMPTY 1\n#define REPL_DISKLESS_LOAD_SWAPDB 2\n#define REPL_DISKLESS_LOAD_ALWAYS 3\n\n/* TLS Client Authentication */\n#define TLS_CLIENT_AUTH_NO 0\n#define TLS_CLIENT_AUTH_YES 1\n#define TLS_CLIENT_AUTH_OPTIONAL 2\n\n/* TLS Client Certificate Authentication */\n#define TLS_CLIENT_FIELD_OFF 0\n#define TLS_CLIENT_FIELD_CN 1\n\n/* Sanitize dump payload */\n#define SANITIZE_DUMP_NO 0\n#define SANITIZE_DUMP_YES 1\n#define SANITIZE_DUMP_CLIENTS 2\n\n/* Enable protected config/command */\n#define PROTECTED_ACTION_ALLOWED_NO 0\n#define PROTECTED_ACTION_ALLOWED_YES 1\n#define PROTECTED_ACTION_ALLOWED_LOCAL 2\n\n/* Sets operations codes */\n#define SET_OP_UNION 0\n#define SET_OP_DIFF 1\n#define SET_OP_INTER 2\n\n/* oom-score-adj defines */\n#define OOM_SCORE_ADJ_NO 0\n#define OOM_SCORE_RELATIVE 1\n#define OOM_SCORE_ADJ_ABSOLUTE 2\n\n/* Redis maxmemory strategies. Instead of using just incremental numbers\n * for these defines, we use a set of flags so that testing for certain\n * properties common to multiple policies is faster. 
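 *\n * For example, properties of the configured policy become single bit tests\n * instead of comparisons against every policy value (illustrative):\n *\n *   int uses_lru = (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) != 0;\n *   int all_keys = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) != 0;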
*/\n#define MAXMEMORY_FLAG_LRU (1<<0)\n#define MAXMEMORY_FLAG_LFU (1<<1)\n#define MAXMEMORY_FLAG_ALLKEYS (1<<2)\n#define MAXMEMORY_FLAG_LRM (1<<3)\n#define MAXMEMORY_FLAG_NO_SHARED_INTEGERS \\\n    (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LFU|MAXMEMORY_FLAG_LRM)\n\n#define MAXMEMORY_VOLATILE_LRU ((0<<8)|MAXMEMORY_FLAG_LRU)\n#define MAXMEMORY_VOLATILE_LFU ((1<<8)|MAXMEMORY_FLAG_LFU)\n#define MAXMEMORY_VOLATILE_TTL (2<<8)\n#define MAXMEMORY_VOLATILE_RANDOM (3<<8)\n#define MAXMEMORY_ALLKEYS_LRU ((4<<8)|MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_ALLKEYS)\n#define MAXMEMORY_ALLKEYS_LFU ((5<<8)|MAXMEMORY_FLAG_LFU|MAXMEMORY_FLAG_ALLKEYS)\n#define MAXMEMORY_ALLKEYS_RANDOM ((6<<8)|MAXMEMORY_FLAG_ALLKEYS)\n#define MAXMEMORY_NO_EVICTION (7<<8)\n#define MAXMEMORY_VOLATILE_LRM ((8<<8)|MAXMEMORY_FLAG_LRM)\n#define MAXMEMORY_ALLKEYS_LRM ((9<<8)|MAXMEMORY_FLAG_LRM|MAXMEMORY_FLAG_ALLKEYS)\n\n/* Units */\n#define UNIT_SECONDS 0\n#define UNIT_MILLISECONDS 1\n\n/* SHUTDOWN flags */\n#define SHUTDOWN_NOFLAGS 0      /* No flags. */\n#define SHUTDOWN_SAVE 1         /* Force SAVE on SHUTDOWN even if no save\n                                   points are configured. */\n#define SHUTDOWN_NOSAVE 2       /* Don't SAVE on SHUTDOWN. */\n#define SHUTDOWN_NOW 4          /* Don't wait for replicas to catch up. */\n#define SHUTDOWN_FORCE 8        /* Don't let errors prevent shutdown. */\n\n/* Cluster slot stats flags */\n#define CLUSTER_SLOT_STATS_CPU 1  /* Track CPU usage per slot. */\n#define CLUSTER_SLOT_STATS_NET 2  /* Track network bytes per slot. */\n#define CLUSTER_SLOT_STATS_MEM 4  /* Track memory usage per slot. 
*/\n#define CLUSTER_SLOT_STATS_ALL (CLUSTER_SLOT_STATS_CPU | CLUSTER_SLOT_STATS_NET | CLUSTER_SLOT_STATS_MEM)\n\n/* IO thread pause status */\n#define IO_THREAD_UNPAUSED      0\n#define IO_THREAD_PAUSING       1\n#define IO_THREAD_PAUSED        2\n#define IO_THREAD_RESUMING      3\n\n/* Command call flags, see call() function */\n#define CMD_CALL_NONE 0\n#define CMD_CALL_PROPAGATE_AOF (1<<0)\n#define CMD_CALL_PROPAGATE_REPL (1<<1)\n#define CMD_CALL_FROM_MODULE (1<<2)  /* From RM_Call */\n#define CMD_CALL_PROPAGATE (CMD_CALL_PROPAGATE_AOF|CMD_CALL_PROPAGATE_REPL)\n#define CMD_CALL_FULL (CMD_CALL_PROPAGATE)\n\n/* Command propagation flags, see propagateNow() function */\n#define PROPAGATE_NONE 0\n#define PROPAGATE_AOF 1\n#define PROPAGATE_REPL 2\n\n/* Actions pause types */\n#define PAUSE_ACTION_CLIENT_WRITE     (1<<0)\n#define PAUSE_ACTION_CLIENT_ALL       (1<<1) /* must be bigger than PAUSE_ACTION_CLIENT_WRITE */\n#define PAUSE_ACTION_EXPIRE           (1<<2)\n#define PAUSE_ACTION_EVICT            (1<<3)\n#define PAUSE_ACTION_REPLICA          (1<<4) /* pause replica traffic */\n\n/* common sets of actions to pause/unpause */\n#define PAUSE_ACTIONS_CLIENT_WRITE_SET (PAUSE_ACTION_CLIENT_WRITE|\\\n                                        PAUSE_ACTION_EXPIRE|\\\n                                        PAUSE_ACTION_EVICT|\\\n                                        PAUSE_ACTION_REPLICA)\n#define PAUSE_ACTIONS_CLIENT_ALL_SET   (PAUSE_ACTION_CLIENT_ALL|\\\n                                        PAUSE_ACTION_EXPIRE|\\\n                                        PAUSE_ACTION_EVICT|\\\n                                        PAUSE_ACTION_REPLICA)\n\n/* Client pause purposes. Each purpose has its own end time and pause type. */\ntypedef enum {\n    PAUSE_BY_CLIENT_COMMAND = 0,\n    PAUSE_DURING_SHUTDOWN,\n    PAUSE_DURING_FAILOVER,\n    PAUSE_DURING_SLOT_HANDOFF,\n    NUM_PAUSE_PURPOSES /* This value is the number of purposes above. 
*/\n} pause_purpose;\n\ntypedef struct {\n    uint32_t paused_actions; /* Bitmask of actions */\n    mstime_t end;\n} pause_event;\n\n/* Ways that a cluster's endpoint can be described */\ntypedef enum {\n    CLUSTER_ENDPOINT_TYPE_IP = 0,          /* Show IP address */\n    CLUSTER_ENDPOINT_TYPE_HOSTNAME,        /* Show hostname */\n    CLUSTER_ENDPOINT_TYPE_UNKNOWN_ENDPOINT /* Show NULL or empty */\n} cluster_endpoint_type;\n\n/* RDB active child save type. */\n#define RDB_CHILD_TYPE_NONE 0\n#define RDB_CHILD_TYPE_DISK 1     /* RDB is written to disk. */\n#define RDB_CHILD_TYPE_SOCKET 2   /* RDB is written to slave socket. */\n\n/* Keyspace changes notification classes. Every class is associated with a\n * character for configuration purposes. */\n#define NOTIFY_KEYSPACE (1<<0)    /* K */\n#define NOTIFY_KEYEVENT (1<<1)    /* E */\n#define NOTIFY_GENERIC (1<<2)     /* g */\n#define NOTIFY_STRING (1<<3)      /* $ */\n#define NOTIFY_LIST (1<<4)        /* l */\n#define NOTIFY_SET (1<<5)         /* s */\n#define NOTIFY_HASH (1<<6)        /* h */\n#define NOTIFY_ZSET (1<<7)        /* z */\n#define NOTIFY_EXPIRED (1<<8)     /* x */\n#define NOTIFY_EVICTED (1<<9)     /* e */\n#define NOTIFY_STREAM (1<<10)     /* t */\n#define NOTIFY_KEY_MISS (1<<11)   /* m (Note: This one is excluded from NOTIFY_ALL on purpose) */\n#define NOTIFY_LOADED (1<<12)     /* module only key space notification, indicates a key loaded from rdb */\n#define NOTIFY_MODULE (1<<13)     /* d, module key space notification */\n#define NOTIFY_NEW (1<<14)        /* n, new key notification (Note: excluded from NOTIFY_ALL) */\n#define NOTIFY_OVERWRITTEN (1<<15)   /* o, key overwrite notification (Note: excluded from NOTIFY_ALL) */\n#define NOTIFY_TYPE_CHANGED (1<<16) /* c, key type changed notification (Note: excluded from NOTIFY_ALL) */\n#define NOTIFY_KEY_TRIMMED (1<<17)     /* module only key space notification, indicates a key trimmed during slot migration */\n#define NOTIFY_RATE_LIMIT (1<<18)      /* r, 
notify rate limit event (Note: excluded from NOTIFY_ALL)*/\n#define NOTIFY_SUBKEYSPACE (1<<19)       /* S, subkey-level keyspace notification */\n#define NOTIFY_SUBKEYEVENT (1<<20)       /* T, subkey-level keyevent notification */\n#define NOTIFY_SUBKEYSPACEITEM (1<<21)   /* I, subkey-level notification per item: channel=key\\nsubkey */\n#define NOTIFY_SUBKEYSPACEEVENT (1<<22)  /* V, subkey-level notification: channel=event|key */\n#define NOTIFY_ALL (NOTIFY_GENERIC | NOTIFY_STRING | NOTIFY_LIST | NOTIFY_SET | NOTIFY_HASH | NOTIFY_ZSET | NOTIFY_EXPIRED | NOTIFY_EVICTED | NOTIFY_STREAM | NOTIFY_MODULE) /* A flag */\n\n/* Using the following macro you can run code inside serverCron() at the\n * specified period, given in milliseconds.\n * The actual resolution depends on server.hz. */\n#define run_with_period(_ms_) if (((_ms_) <= 1000/server.hz) || !(server.cronloops%((_ms_)/(1000/server.hz))))\n\n/* We can print the stacktrace, so our assert is defined this way: */\n#define serverAssertWithInfo(_c,_o,_e) (likely(_e)?(void)0 : (_serverAssertWithInfo(_c,_o,#_e,__FILE__,__LINE__),redis_unreachable()))\n#define serverAssert(_e) (likely(_e)?(void)0 : (_serverAssert(#_e,__FILE__,__LINE__),redis_unreachable()))\n#define serverPanic(...) _serverPanic(__FILE__,__LINE__,__VA_ARGS__),redis_unreachable()\n\n/* The following macros provide assertions that are only executed during test builds and should be used to add\n * assertions that are too computationally expensive or dangerous to run during normal operations.  */\n#ifdef DEBUG_ASSERTIONS\n#define debugServerAssertWithInfo(...) serverAssertWithInfo(__VA_ARGS__)\n#define debugServerAssert(...) 
serverAssert(__VA_ARGS__)\n#else\n#define debugServerAssertWithInfo(...)\n#define debugServerAssert(...)\n#endif\n\n/* latency histogram per command init settings */\n#define LATENCY_HISTOGRAM_MIN_VALUE 1L        /* >= 1 nanosec */\n#define LATENCY_HISTOGRAM_MAX_VALUE 1000000000L  /* <= 1 sec */\n#define LATENCY_HISTOGRAM_PRECISION 2  /* Maintain a value precision of 2 significant digits across LATENCY_HISTOGRAM_MIN_VALUE and LATENCY_HISTOGRAM_MAX_VALUE range.\n                                        * Value quantization within the range will thus be no larger than 1/100th (or 1%) of any value.\n                                        * The total size per histogram should sit around 40 KiB. */\n\n/* Busy module flags, see busy_module_yield_flags */\n#define BUSY_MODULE_YIELD_NONE (0)\n#define BUSY_MODULE_YIELD_EVENTS (1<<0)\n#define BUSY_MODULE_YIELD_CLIENTS (1<<1)\n\n/* Key prefetch configs */\n#define PREFETCH_BATCH_MAX_SIZE 128\n\n/*-----------------------------------------------------------------------------\n * Data types\n *----------------------------------------------------------------------------*/\n\n/* A redis object, that is a type able to hold a string / list / set */\n\n/* The actual Redis Object */\n#define OBJ_STRING 0    /* String object. */\n#define OBJ_LIST 1      /* List object. */\n#define OBJ_SET 2       /* Set object. */\n#define OBJ_ZSET 3      /* Sorted set object. */\n#define OBJ_HASH 4      /* Hash object. */\n#define OBJ_TYPE_BASIC_MAX 5 /* Max number of basic object types. */\n\n/* The \"module\" object type is a special one that signals that the object\n * is one directly managed by a Redis module. 
In this case the value points\n * to a moduleValue struct, which contains the object value (which is only\n * handled by the module itself) and the RedisModuleType struct which lists\n * function pointers in order to serialize, deserialize, AOF-rewrite and\n * free the object.\n *\n * Inside the RDB file, module types are encoded as OBJ_MODULE followed\n * by a 64-bit module type ID, which has a 54-bit module-specific signature\n * in order to dispatch the loading to the right module, plus a 10-bit\n * encoding version. */\n#define OBJ_MODULE 5    /* Module object. */\n#define OBJ_STREAM 6    /* Stream object. */\n#define OBJ_GCRA 7    /* GCRA object. */\n#define OBJ_TYPE_MAX 8  /* Maximum number of object types */\n\n/* NOTE: adding a new object requires changes in the following places:\n * - rdb.c - save/load (also bump RDB_VERSION if needed)\n * - aof.c - rewrite\n * - db.c - obj_type_name, copyCommand\n * - debug.c - xorObjectDigest, serverLogObjectDebugInfo\n * - defrag.c - defragKey\n * - module.c - RM_KeyType (and add the new keytype to redismodule.h)\n * - object.c - object(create/free/dismiss/allocSize/Length)\n * - tests/support/util.tcl:generate_fuzzy_traffic_on_key - add command(s) for the new object type to the `commands` dict.\n *\n * If the new object type requires a new command group make sure to update the following:\n * - src/commands/command-docs.json - update the group:oneOf map with the new group\n * - utils/generate-command-code.py - add the new group to GROUPS and COMMAND_GROUP_STR arrays\n * - src/acl.c - add the new group to ACLDefaultCommandCategories array\n * - src/server.h - add the new group to redisCommandGroup enum\n * - if needed add new KSN type related to the group - search for NOTIFY_* and REDISMODULE_NOTIFY_* defines. */\n\n/* Extract encver / signature from a module type ID. 
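 *\n * E.g. for an id composed as (signature << 10) | encver, with\n * REDISMODULE_TYPE_ENCVER_BITS == 10:\n *\n *   REDISMODULE_TYPE_ENCVER(id)  ->  id & 0x3FF   (low 10 bits, encoding ver)\n *   REDISMODULE_TYPE_SIGN(id)    ->  id >> 10     (high 54 bits, signature)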
*/\n#define REDISMODULE_TYPE_ENCVER_BITS 10\n#define REDISMODULE_TYPE_ENCVER_MASK ((1<<REDISMODULE_TYPE_ENCVER_BITS)-1)\n#define REDISMODULE_TYPE_ENCVER(id) ((id) & REDISMODULE_TYPE_ENCVER_MASK)\n#define REDISMODULE_TYPE_SIGN(id) (((id) & ~((uint64_t)REDISMODULE_TYPE_ENCVER_MASK)) >>REDISMODULE_TYPE_ENCVER_BITS)\n\n/* Bit flags for moduleTypeAuxSaveFunc */\n#define REDISMODULE_AUX_BEFORE_RDB (1<<0)\n#define REDISMODULE_AUX_AFTER_RDB (1<<1)\n\nstruct RedisModule;\nstruct RedisModuleIO;\nstruct RedisModuleDigest;\nstruct RedisModuleCtx;\nstruct moduleLoadQueueEntry;\nstruct RedisModuleCommand;\nstruct clusterState;\nstruct slotRangeArray;\n\n/* Each module type implementation should export a set of methods in order\n * to serialize and deserialize the value in the RDB file, rewrite the AOF\n * log, create the digest for \"DEBUG DIGEST\", and free the value when a key\n * is deleted. */\ntypedef void *(*moduleTypeLoadFunc)(struct RedisModuleIO *io, int encver);\ntypedef void (*moduleTypeSaveFunc)(struct RedisModuleIO *io, void *value);\ntypedef int (*moduleTypeAuxLoadFunc)(struct RedisModuleIO *rdb, int encver, int when);\ntypedef void (*moduleTypeAuxSaveFunc)(struct RedisModuleIO *rdb, int when);\ntypedef void (*moduleTypeRewriteFunc)(struct RedisModuleIO *io, struct redisObject *key, void *value);\ntypedef void (*moduleTypeDigestFunc)(struct RedisModuleDigest *digest, void *value);\ntypedef size_t (*moduleTypeMemUsageFunc)(const void *value);\ntypedef void (*moduleTypeFreeFunc)(void *value);\ntypedef size_t (*moduleTypeFreeEffortFunc)(struct redisObject *key, const void *value);\ntypedef void (*moduleTypeUnlinkFunc)(struct redisObject *key, void *value);\ntypedef void *(*moduleTypeCopyFunc)(struct redisObject *fromkey, struct redisObject *tokey, const void *value);\ntypedef int (*moduleTypeDefragFunc)(struct RedisModuleDefragCtx *ctx, struct redisObject *key, void **value);\ntypedef size_t (*moduleTypeMemUsageFunc2)(struct RedisModuleKeyOptCtx *ctx, const void 
*value, size_t sample_size);\ntypedef void (*moduleTypeFreeFunc2)(struct RedisModuleKeyOptCtx *ctx, void *value);\ntypedef size_t (*moduleTypeFreeEffortFunc2)(struct RedisModuleKeyOptCtx *ctx, const void *value);\ntypedef void (*moduleTypeUnlinkFunc2)(struct RedisModuleKeyOptCtx *ctx, void *value);\ntypedef void *(*moduleTypeCopyFunc2)(struct RedisModuleKeyOptCtx *ctx, const void *value);\ntypedef int (*moduleTypeAuthCallback)(struct RedisModuleCtx *ctx, void *username, void *password, const char **err);\n\n/* Module Entity ID: module type or keymeta. */\ntypedef struct ModuleEntityId {\n    struct RedisModule *module;\n    char name[10]; /* 9 bytes name + null term. Charset: A-Z a-z 0-9 _- */\n    uint64_t id; /* Higher 54 bits of type ID + 10 lower bits of encoding ver. */\n} ModuleEntityId;\n\n/* The module type, which is referenced in each value of a given type, defines\n * the methods and links to the module exporting the type. */\ntypedef struct RedisModuleType {\n    ModuleEntityId entity;  /* module data type name and ID. 
*/\n    moduleTypeLoadFunc rdb_load;\n    moduleTypeSaveFunc rdb_save;\n    moduleTypeRewriteFunc aof_rewrite;\n    moduleTypeMemUsageFunc mem_usage;\n    moduleTypeDigestFunc digest;\n    moduleTypeFreeFunc free;\n    moduleTypeFreeEffortFunc free_effort;\n    moduleTypeUnlinkFunc unlink;\n    moduleTypeCopyFunc copy;\n    moduleTypeDefragFunc defrag;\n    moduleTypeAuxLoadFunc aux_load;\n    moduleTypeAuxSaveFunc aux_save;\n    moduleTypeMemUsageFunc2 mem_usage2;\n    moduleTypeFreeEffortFunc2 free_effort2;\n    moduleTypeUnlinkFunc2 unlink2;\n    moduleTypeCopyFunc2 copy2;\n    moduleTypeAuxSaveFunc aux_save2;\n    int aux_save_triggers;\n} moduleType;\n\n/* In Redis objects 'robj' structures of type OBJ_MODULE, the value pointer\n * is set to the following structure, referencing the moduleType structure\n * in order to work with the value, and at the same time providing a raw\n * pointer to the value, as created by the module commands operating with\n * the module type.\n *\n * So for example in order to free such a value, it is possible to use\n * the following code:\n *\n *  if (robj->type == OBJ_MODULE) {\n *      moduleValue *mt = robj->ptr;\n *      mt->type->free(mt->value);\n *      zfree(mt); // We need to release this in-the-middle struct as well.\n *  }\n */\ntypedef struct moduleValue {\n    moduleType *type;\n    void *value;\n} moduleValue;\n\n/* Describes the state of the module during loading, indicating which configs were loaded / applied already. */\ntypedef enum {\n    MODULE_CONFIGS_DEFAULTS = 0x1, /* The registered defaults were applied. */\n    MODULE_CONFIGS_USER_VALS  = 0x2, /* The user provided values were applied. */\n    MODULE_CONFIGS_ALL_APPLIED = 0x3 /* Both of the above applied. */\n} ModuleConfigsApplied;\n\n/* This structure represents a module inside the system. */\nstruct RedisModule {\n    void *handle;   /* Module dlopen() handle. */\n    char *name;     /* Module name. */\n    int ver;        /* Module version. 
We use just progressive integers. */\n    int apiver;     /* Module API version as requested during initialization.*/\n    list *types;    /* Module data types. */\n    list *usedby;   /* List of modules using APIs from this one. */\n    list *using;    /* List of modules we use some APIs of. */\n    list *filters;  /* List of filters the module has registered. */\n    list *module_configs; /* List of configurations the module has registered */\n    ModuleConfigsApplied configs_initialized; /* Have the module configurations been initialized? */\n    int in_call;    /* RM_Call() nesting level */\n    int in_hook;    /* Hooks callback nesting level for this module (0 or 1). */\n    int options;    /* Module options and capabilities. */\n    int blocked_clients;         /* Count of RedisModuleBlockedClient in this module. */\n    RedisModuleInfoFunc info_cb; /* Callback for module to add INFO fields. */\n    RedisModuleDefragFunc defrag_cb;    /* Callback for global data defrag. */\n    RedisModuleDefragFunc2 defrag_cb_2; /* Version 2 callback for global data defrag. */\n    RedisModuleDefragFunc defrag_start_cb;    /* Callback indicating defrag started. */\n    RedisModuleDefragFunc defrag_end_cb;      /* Callback indicating defrag ended. */\n    struct moduleLoadQueueEntry *loadmod; /* Module load arguments for config rewrite. */\n    int num_commands_with_acl_categories; /* Number of commands in this module included in acl categories */\n    int onload;     /* Flag to identify if the call is being made from Onload (0 or 1) */\n    size_t num_acl_categories_added; /* Number of acl categories added by this module. */\n};\ntypedef struct RedisModule RedisModule;\n\n/* The defrag context, used to manage state during calls to the data type\n * defrag callback.\n */\nstruct RedisModuleDefragCtx {\n    monotime endtime;\n    unsigned long *cursor;\n    struct redisObject *key; /* Optional name of key processed, NULL when unknown. 
*/\n    int dbid;                /* The dbid of the key being processed, -1 when unknown. */\n    long long last_stop_check_hits; /* Number of defrag hits at last check. */\n    long long last_stop_check_misses; /* Number of defrag misses at last check. */\n    int stopping; /* Flag indicating if defrag should stop. */\n};\n#define INIT_MODULE_DEFRAG_CTX(endtime, cursor, key, dbid) \\\n    ((RedisModuleDefragCtx) {               \\\n        (endtime), (cursor), (key), (dbid), \\\n        server.stat_active_defrag_hits,     \\\n        server.stat_active_defrag_misses    \\\n    })\n\n/* This is a wrapper for the 'rio' streams used inside rdb.c in Redis, so that\n * the user does not have to take the total count of the written bytes nor\n * to care about error conditions. */\nstruct RedisModuleIO {\n    size_t bytes;       /* Bytes read / written so far. */\n    rio *rio;           /* Rio stream. */\n    ModuleEntityId *entity; /* Module type or keymeta doing the operation. */\n    int error;          /* True if error condition happened. */\n    struct RedisModuleCtx *ctx; /* Optional context, see RM_GetContextFromIO()*/\n    struct redisObject *key;    /* Optional name of key processed */\n    int dbid;            /* The dbid of the key being processed, -1 when unknown. */\n    sds pre_flush_buffer; /* A buffer that should be flushed before next write operation\n                           * See rdbSaveSingleModuleAux for more details */\n};\n\n/* Initialize an IO context. Note that the 'ver' field is populated\n * inside rdb.c according to the version of the value to load. 
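 *\n * A typical save path then looks roughly like this, where 'mt' is the\n * moduleType and 'mv' the moduleValue (illustrative sketch):\n *\n *   RedisModuleIO io;\n *   moduleInitIOContext(&io, &mt->entity, rdb, key, dbid);\n *   mt->rdb_save(&io, mv->value);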
*/\nstatic inline void moduleInitIOContext(RedisModuleIO *io, ModuleEntityId *entity,\n                                       rio *rioptr, struct redisObject *keyptr, int db) \n{\n    io->rio = rioptr;\n    io->entity = entity;\n    io->bytes = 0;\n    io->error = 0;\n    io->key = keyptr;\n    io->dbid = db;\n    io->ctx = NULL;\n    io->pre_flush_buffer = NULL;\n}\n\n/* This is a structure used to export DEBUG DIGEST capabilities to Redis\n * modules. We want to capture both the ordered and unordered elements of\n * a data structure, so that a digest can be created in a way that correctly\n * reflects the values. See the DEBUG DIGEST command implementation for more\n * background. */\nstruct RedisModuleDigest {\n    unsigned char o[20];    /* Ordered elements. */\n    unsigned char x[20];    /* Xored elements. */\n    struct redisObject *key; /* Optional name of key processed */\n    int dbid;                /* The dbid of the key being processed */\n};\n\n/* Just start with a digest composed of all zero bytes. */\n#define moduleInitDigestContext(mdvar) do { \\\n    memset(mdvar.o,0,sizeof(mdvar.o)); \\\n    memset(mdvar.x,0,sizeof(mdvar.x)); \\\n} while(0)\n\n/* Macro to check if the client is in the middle of module based authentication. */\n#define clientHasModuleAuthInProgress(c) ((c)->module_auth_ctx != NULL)\n\n/* The string name for an object's type as listed above\n * Native types are checked against the OBJ_STRING, OBJ_LIST, OBJ_* defines,\n * and Module types have their registered name returned. */\nchar *getObjectTypeName(robj*);\n\n/* Macro used to initialize a Redis object allocated on the stack.\n * Note that this macro is taken near the structure definition to make sure\n * we'll update it when the structure is changed, to avoid bugs like\n * bug #85 introduced exactly in this way. 
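 *\n * Usage sketch - wrap an existing pointer in a stack-allocated string object\n * without any heap allocation (illustrative):\n *\n *   robj keyobj;\n *   initStaticStringObject(keyobj, key_ptr);\n *   ... pass &keyobj to lookup functions ...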
*/\n#define initStaticStringObject(_var,_ptr) do { \\\n    _var.refcount = OBJ_STATIC_REFCOUNT; \\\n    _var.type = OBJ_STRING; \\\n    _var.encoding = OBJ_ENCODING_RAW; \\\n    _var.metabits = 0; \\\n    _var.iskvobj = 0; \\\n    _var.ptr = _ptr; \\\n} while(0)\n\nstruct evictionPoolEntry; /* Defined in evict.c */\n\n/* Encoded buffers contain headers followed by either plain replies or\n * by bulk string references */\ntypedef enum {\n    PLAIN_REPLY = 0, /* plain reply */\n    BULK_STR_REF     /* bulk string references */\n} payloadType;\n\n/* Encoded reply buffers consist of chunks.\n * Each chunk contains a header followed by a payload.\n * The packed attribute is specified because the buffer is accessed at arbitrary offsets,\n * so there is no benefit in data structure padding, and packing saves space in the buffer. */\ntypedef struct __attribute__((__packed__)) payloadHeader {\n    size_t payload_len;   /* payload length in a reply buffer */\n    uint8_t payload_type; /* one of payloadType */\n} payloadHeader;\nstatic_assert(offsetof(payloadHeader, payload_len) == 0, \"payload_len must be at offset 0 to avoid unaligned access\");\n\n/* To avoid copying the whole string into the reply buffer\n * we store pointers to the object and the string itself */\ntypedef struct __attribute__((__packed__)) bulkStrRef {\n    robj *obj; /* pointer to object used for reference count management */\n    unsigned int prefix_cnt;\n    char prefix[LONG_STR_SIZE + 3]; /* $<len>\\r\\n */\n    char crlf[2]; /* \\r\\n */\n} bulkStrRef;\n\n/* This structure is used in order to represent the output buffer of a client,\n * which is actually a linked list of blocks like this one, that is: client->reply. */\ntypedef struct clientReplyBlock {\n    size_t size, used;\n    char buf_encoded;\n    char buf[];\n} clientReplyBlock;\n\n/* The replication buffer is a list of replBufBlock nodes.\n *\n * +--------------+       +--------------+       +--------------+\n * | refcount = 1 |  ...  | refcount = 0 |  ...  
| refcount = 2 |\n * +--------------+       +--------------+       +--------------+\n *      |                                            /       \\\n *      |                                           /         \\\n *      |                                          /           \\\n *  Repl Backlog                               Replica_A    Replica_B\n *\n * Each replica or replication backlog increments only the refcount of the\n * 'ref_repl_buf_node' which it points to. So when a replica walks to the next\n * node, it should first increase the next node's refcount, and when we trim\n * the replication buffer nodes, we always remove from the head node, whose\n * refcount must be 0. If the refcount of the head node is not 0, we must stop\n * trimming and not iterate to the next node.\n *\n * For replicas in IO threads we don't update the refcount while sending the\n * repl data, but only when the client is sent back to main. This avoids data\n * races. In order to achieve this, the replicas keep track of the following:\n * - io_curr_repl_node - the current node we've reached.\n * - io_bound_repl_node - the last node in the replication buffer as seen by\n *                        the replica client before it was sent to IO thread\n *\n * When the client is sent to IO thread for the first time io_curr_repl_node is\n * initialized with ref_repl_buf_node.\n * When the client is sent back to main it can decrement ref_repl_buf_node's\n * refcount and increment it for io_curr_repl_node, since all the nodes\n * in-between are already sent and the client doesn't hold a reference to them.\n *\n * `io_bound_repl_node` is needed because the IO thread needs to know when to stop\n * sending data. If it was reading directly from the replication buffer,\n * there would be a data race, because the main thread may be writing to it during\n * `feedReplicationBuffer`. 
`io_bound_repl_node` is cached in the client\n * together with its used size just before sending the client to IO thread\n * in `enqueuePendingClienstToIOThreads`. */\n\n/* Similar to 'clientReplyBlock', it is used for buffers shared between\n * all replica clients and the replication backlog. */\ntypedef struct replBufBlock {\n    int refcount;           /* Number of replicas or repl backlog using. */\n    long long id;           /* The unique incremental number. */\n    long long repl_offset;  /* Start replication offset of the block. */\n    size_t size;            /* Capacity of the buf in bytes */\n    size_t used;            /* Count of written bytes */\n    char buf[];\n} replBufBlock;\n\n/* Redis database representation. There are multiple databases identified\n * by integers from 0 (the default database) up to the max configured\n * database. The database number is the 'id' field in the structure. */\ntypedef struct redisDb {\n    kvstore *keys;              /* The keyspace for this DB. As metadata, holds keysizes histogram */\n    kvstore *expires;           /* Timeout of keys with a timeout set */\n    estore *subexpires;         /* Timeout of sub-keys with a timeout set. 
(Currently only used for hashes) */\n    dict *blocking_keys;        /* Keys with clients waiting for data (BLPOP)*/\n    dict *blocking_keys_unblock_on_nokey;   /* Keys with clients waiting for\n                                             * data, and should be unblocked if the key is deleted (XREADGROUP).\n                                             * This is a subset of blocking_keys*/\n    dict *stream_claim_pending_keys; /* Keys with clients waiting to claim pending entries */\n    dict *stream_idmp_keys; /* Stream keys with IDMP tracking */\n    dict *ready_keys;           /* Blocked keys that received a PUSH */\n    dict *watched_keys;         /* WATCHED keys for MULTI/EXEC CAS */\n    int id;                     /* Database ID */\n    long long avg_ttl;          /* Average TTL, just for stats */\n    unsigned long expires_cursor; /* Cursor of the active expire cycle. */\n} redisDb;\n\n/* maximum number of bins of keysizes histogram */\n#define MAX_KEYSIZES_BINS 60\n#define MAX_KEYSIZES_TYPES 5 /* static_assert at db.c verifies == OBJ_TYPE_BASIC_MAX */\ntypedef int64_t keysizesHist[MAX_KEYSIZES_TYPES][MAX_KEYSIZES_BINS];\n\n/* Metadata structure used for kvstores with type `kvstoreExType`, managed outside kvstore */\ntypedef struct {\n    keysizesHist keysizes_hist;\n    keysizesHist allocsizes_hist;\n} kvstoreMetadata;\n\n/* Like kvstoreMetadata, but one per dict */\ntypedef struct {\n    kvstoreDictMetaBase base;   /* must be first in struct ! 
*/\n    size_t alloc_size;          /* Total memory used (in bytes) by this slot */\n    uint64_t cpu_usec;          /* CPU time (in microseconds) spent on given slot */\n    uint64_t network_bytes_in;  /* Network ingress (in bytes) received for given slot */\n    uint64_t network_bytes_out; /* Network egress (in bytes) sent for given slot */\n} kvstoreDictMetadata;\n\n/* Context for ASM background trim with delta histogram tracking */\ntypedef struct asmTrimCtx {\n    int refcount;                      /* For shared bg/main thread ownership */\n    struct slotRangeArray *slots;      /* Slot ranges being trimmed */\n    kvstore *target_kvstore;           /* Target kvstore to update (for validation) */\n    keysizesHist delta_keysizes_hist;  /* Delta populated by BIO thread */\n    keysizesHist delta_allocsizes_hist;/* Delta populated by BIO thread */\n} asmTrimCtx;\n\n/* forward declaration for functions ctx */\ntypedef struct functionsLibCtx functionsLibCtx;\n\n/* Holds objects that need to be populated during\n * rdb loading. When loading ends it is possible to decide\n * whether or not to set those objects in their rightful place.\n * For example: dbarray needs to be set as the main database on\n *              successful loading and dropped on failure. */\ntypedef struct rdbLoadingCtx {\n    redisDb* dbarray;\n    functionsLibCtx* functions_lib_ctx;\n}rdbLoadingCtx;\n\ntypedef struct pendingCommand pendingCommand;\ntypedef struct multiState {\n    pendingCommand **commands;     /* Array of pointers to MULTI commands */\n    int executing_cmd;      /* The index of the currently executed transaction \n                               command (index in commands field) */\n    int count;              /* Total number of MULTI commands */\n    int cmd_flags;          /* The accumulated command flags OR-ed together.\n                               So if at least one command has a given flag, it\n                               will be set in this field. 
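                               E.g. (cmd_flags & CMD_WRITE) is non-zero if at\n                               least one queued command is a write command. 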
*/\n    int cmd_inv_flags;      /* Same as cmd_flags, OR-ing the ~flags, so that it\n                               is possible to know if all the commands have a\n                               certain flag. */\n    size_t argv_len_sums;    /* mem used by all command arguments */\n    int alloc_count;         /* total number of pendingCommand structs for which memory is reserved. */\n} multiState;\n\n/* This structure holds the blocking operation state for a client.\n * The fields used depend on client->btype. */\ntypedef struct blockingState {\n    /* Generic fields. */\n    blocking_type btype;                  /* Type of blocking op if CLIENT_BLOCKED. */\n    mstime_t timeout;           /* Blocking operation timeout. If UNIX current time\n                                 * is > timeout then the operation timed out. */\n    int unblock_on_nokey;       /* Whether to unblock the client when at least one of the keys\n                                   is deleted or does not exist anymore */\n    /* BLOCKED_LIST, BLOCKED_ZSET and BLOCKED_STREAM or any other keys-related blocking */\n    dict *keys;                 /* The keys we are blocked on */\n\n    /* BLOCKED_WAIT and BLOCKED_WAITAOF */\n    int numreplicas;        /* Number of replicas we are waiting for ACK. */\n    int numlocal;           /* Indication if WAITAOF is waiting for local fsync. */\n    long long reploffset;   /* Replication offset to reach. */\n\n    /* BLOCKED_MODULE */\n    void *module_blocked_handle; /* RedisModuleBlockedClient structure,\n                                    which is opaque for the Redis core, only\n                                    handled in module.c. */\n\n    void *async_rm_call_handle; /* RedisModuleAsyncRMCallPromise structure,\n                                   which is opaque for the Redis core, only\n                                   handled in module.c. 
*/\n\n    /* BLOCKED_LAZYFREE */\n    monotime lazyfreeStartTime;\n} blockingState;\n\n/* The following structure represents a node in the server.ready_keys list,\n * where we accumulate all the keys that had clients blocked with a blocking\n * operation such as B[LR]POP, but received new data in the context of the\n * last executed command.\n *\n * After the execution of every command or script, we iterate over this list to\n * check whether, as a result, we should serve data to blocked clients,\n * unblocking them.\n * Note that server.ready_keys will not have duplicates as there is a dictionary\n * also called ready_keys in every structure representing a Redis database,\n * where we make sure to remember if a given key was already added in the\n * server.ready_keys list. */\ntypedef struct readyList {\n    redisDb *db;\n    robj *key;\n} readyList;\n\n/* List of pending commands. */\ntypedef struct pendingCommandList {\n    pendingCommand *head;\n    pendingCommand *tail;\n    int len; /* Number of commands in the list */\n    int ready_len; /* Number of commands that are ready to be processed */\n} pendingCommandList;\n\n/* Pending command pool management structure */\n#define PENDING_COMMAND_POOL_SIZE 16\n#define PENDING_COMMAND_POOL_MAX_SIZE 1024\ntypedef struct pendingCommandPool {\n    pendingCommand **pool;  /* Pool array for reusing pendingCommand objects */\n    int size;               /* Current number of objects in pool */\n    int capacity;           /* Current capacity of the pool array */\n    int min_size;           /* Minimum size since last check (indicates peak usage) */\n} pendingCommandPool;\n\n/* This structure represents a Redis user. This is useful for ACLs: the\n * user is associated with the connection after the connection is authenticated.\n * If there is no associated user, the connection uses the default user. 
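The pendingCommandPool above reuses parsed command objects instead of allocating a fresh one per command. A minimal sketch of the acquire/release cycle, with simplified stand-in types and sizes (the real pool and its shrink policy live in the .c files):

```c
#include <stdlib.h>

#define POOL_INITIAL 16
#define POOL_MAX     1024

typedef struct pcmd { int dummy; } pcmd;  // stand-in for pendingCommand

typedef struct {
    pcmd **pool;
    int size, capacity, min_size;
} pcmdPool;

static void poolInit(pcmdPool *p) {
    p->pool = malloc(sizeof(pcmd *) * POOL_INITIAL);
    p->size = 0; p->capacity = POOL_INITIAL; p->min_size = 0;
}

// Take an object from the pool, or allocate a fresh one when empty.
static pcmd *poolGet(pcmdPool *p) {
    if (p->size > 0) {
        pcmd *c = p->pool[--p->size];
        if (p->size < p->min_size) p->min_size = p->size;  // low-water mark => peak usage
        return c;
    }
    return malloc(sizeof(pcmd));
}

// Return an object to the pool; free it outright once the hard cap is reached.
static void poolPut(pcmdPool *p, pcmd *c) {
    if (p->size == p->capacity && p->capacity < POOL_MAX) {
        p->capacity *= 2;
        p->pool = realloc(p->pool, sizeof(pcmd *) * p->capacity);
    }
    if (p->size < p->capacity) p->pool[p->size++] = c;
    else free(c);
}
```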
*/\n#define USER_COMMAND_BITS_COUNT 1024    /* The total number of command bits\n                                           in the user structure. The last valid\n                                           command ID we can set in the user\n                                           is USER_COMMAND_BITS_COUNT-1. */\n#define USER_FLAG_ENABLED (1<<0)        /* The user is active. */\n#define USER_FLAG_DISABLED (1<<1)       /* The user is disabled. */\n#define USER_FLAG_NOPASS (1<<2)         /* The user requires no password, any\n                                           provided password will work. For the\n                                           default user, this also means that\n                                           no AUTH is needed, and every\n                                           connection is immediately\n                                           authenticated. */\n#define USER_FLAG_SANITIZE_PAYLOAD (1<<3)       /* The user requires deep RESTORE\n                                                 * payload sanitization. */\n#define USER_FLAG_SANITIZE_PAYLOAD_SKIP (1<<4)  /* The user should skip the\n                                                 * deep sanitization of RESTORE\n                                                 * payload. */\n\n#define SELECTOR_FLAG_ROOT (1<<0)           /* This is the root user permission\n                                             * selector. */\n#define SELECTOR_FLAG_ALLKEYS (1<<1)        /* The user can mention any key. */\n#define SELECTOR_FLAG_ALLCOMMANDS (1<<2)    /* The user can run all commands. */\n#define SELECTOR_FLAG_ALLCHANNELS (1<<3)    /* The user can mention any Pub/Sub\n                                               channel. */\n\ntypedef struct {\n    sds name;       /* The username as an SDS string. */\n    redisAtomic uint32_t flags; /* See USER_FLAG_* */\n    list *passwords; /* A list of SDS valid passwords for this user. 
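USER_COMMAND_BITS_COUNT above sizes a per-user bitmap of permitted commands, indexed by numeric command ID. A sketch of how a fixed-size bitmap like that can be set and tested; the 64-bit word layout here is illustrative, not the actual ACL representation:

```c
#include <stdint.h>
#include <string.h>

#define COMMAND_BITS_COUNT 1024  // mirrors USER_COMMAND_BITS_COUNT

typedef struct { uint64_t words[COMMAND_BITS_COUNT / 64]; } cmdBitmap;

// Grant a command by its numeric ID (0 .. COMMAND_BITS_COUNT-1).
static void bitmapSet(cmdBitmap *bm, int id) {
    bm->words[id / 64] |= (uint64_t)1 << (id % 64);
}

// Test whether a command ID is granted.
static int bitmapGet(const cmdBitmap *bm, int id) {
    return (bm->words[id / 64] >> (id % 64)) & 1;
}
```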
*/\n    list *selectors; /* A list of selectors this user validates commands\n                        against. This list will always contain at least\n                        one selector for backwards compatibility. */\n    robj *acl_string; /* Cached string representation of the ACLs */\n} user;\n\n/* With multiplexing we need to take per-client state.\n * Clients are kept in a linked list. */\n\n#define CLIENT_ID_AOF (UINT64_MAX) /* Reserved ID for the AOF client. If you\n                                      need more reserved IDs use UINT64_MAX-1,\n                                      -2, ... and so forth. */\n#define CLIENT_ID_NONE (0)         /* Non-existent client ID, used when no client\n                                      is associated with an operation. */\n\n/* The replication backlog is not a separate chunk of memory; it is just one\n * consumer of the global replication buffer. This structure records the\n * reference of replication buffers. Since the replication buffer block list may\n * be very long, it would cost too much time to search for a replication offset\n * on partial resync, so we use a rax tree to index one block every\n * REPL_BACKLOG_INDEX_PER_BLOCKS blocks, making it faster to search for an\n * offset in the replication buffer block list. */\ntypedef struct replBacklog {\n    listNode *ref_repl_buf_node; /* Referenced node of replication buffer blocks,\n                                  * see the definition of replBufBlock. */\n    size_t unindexed_count;      /* Block count since the last index block was\n                                  * created. */\n    rax *blocks_index;           /* The index of recorded blocks of replication\n                                  * buffer for quickly searching replication\n                                  * offset on partial resynchronization. 
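The index described above exists so a partial resync does not have to walk the whole block list. A sketch of the idea under simplified assumptions: if every Nth block's start offset is recorded in sorted order, a lookup can binary-search the recorded offsets and only scan forward from there:

```c
// Hypothetical interval; the real constant is REPL_BACKLOG_INDEX_PER_BLOCKS.
#define INDEX_PER_BLOCKS 4

// Given the sorted start offsets of the indexed blocks, return the position of
// the last indexed block whose start offset is <= the wanted offset. The caller
// then scans at most INDEX_PER_BLOCKS blocks forward from that point.
static int indexSeek(const long long *indexed, int n, long long offset) {
    int lo = 0, hi = n - 1, best = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (indexed[mid] <= offset) { best = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    return best;
}
```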
*/\n    long long histlen;           /* Backlog actual data length */\n    long long offset;            /* Replication \"master offset\" of first\n                                  * byte in the replication backlog buffer.*/\n} replBacklog;\n\n/* Used by replDataBuf during rdb channel replication to accumulate replication\n * stream on replica side. */\ntypedef struct replDataBufBlock {\n    size_t used; /* Used bytes in the buf */\n    size_t size; /* Size of the buf */\n    char buf[];  /* Replication data */\n} replDataBufBlock;\n\n/* Linked list of replDataBufBlock structs, holds replication stream during\n * rdb channel replication on replica side. */\ntypedef struct replDataBuf {\n    list *blocks; /* List of replDataBufBlock */\n    size_t mem_used; /* Total allocated memory */\n    size_t size;  /* Total number of bytes available in all blocks. */\n    size_t used;  /* Total number of bytes actually used in all blocks. */\n    size_t peak;  /* Peak number of bytes stored in all blocks. */\n    size_t last_num_blocks; /* Used to verify we consume more than we read from\n                             * the master connection while streaming buffer to\n                             * the db. */\n} replDataBuf;\n\ntypedef struct {\n    list *clients;\n    size_t mem_usage_sum;\n} clientMemUsageBucket;\n\n#define DEFERRED_OBJECT_TYPE_PENDING_COMMAND 1\n#define DEFERRED_OBJECT_TYPE_ROBJ 2\n/* Structure to hold objects that need to be freed later by IO threads.\n * This allows the main thread to defer memory cleanup operations to\n * IO threads to avoid blocking the main event loop. 
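As described above, deferring frees keeps expensive cleanup off the main event loop. A minimal sketch of the queue-then-drain pattern with stand-in types; the real code tags each entry with a DEFERRED_OBJECT_TYPE_* value so the draining side knows how to release it, while this sketch frees every pointer uniformly:

```c
#include <stdlib.h>

#define DEF_TYPE_A 1
#define DEF_TYPE_B 2

typedef struct { int type; void *ptr; } defObj;
typedef struct { defObj *items; int num, cap; } defQueue;

// Queue an object for later release instead of freeing it inline.
static void deferFree(defQueue *q, int type, void *ptr) {
    if (q->num == q->cap) {
        q->cap = q->cap ? q->cap * 2 : 8;
        q->items = realloc(q->items, sizeof(defObj) * q->cap);
    }
    q->items[q->num].type = type;
    q->items[q->num].ptr = ptr;
    q->num++;
}

// Later, off the hot path, drain the queue; returns the number of freed objects.
static int drain(defQueue *q) {
    int freed = q->num;
    for (int i = 0; i < q->num; i++) free(q->items[i].ptr);
    q->num = 0;
    return freed;
}
```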
*/\ntypedef struct deferredObject {\n    int type;    /* Type of object: DEFERRED_OBJECT_TYPE_* */\n    void *ptr;   /* Pointer to the object to be freed */\n} deferredObject;\n\n#define SHOULD_CLUSTER_COMPATIBILITY_SAMPLE() \\\n            (server.cluster_compatibility_sample_ratio == 100 || \\\n             (double)rand()/RAND_MAX * 100 < server.cluster_compatibility_sample_ratio)\n\n#ifdef LOG_REQ_RES\n/* Structure used to log client's requests and their\n * responses (see logreqres.c) */\ntypedef struct {\n    /* General */\n    int argv_logged; /* 1 if the command was logged */\n    /* Vars for log buffer */\n    unsigned char *buf; /* Buffer holding the data (request and response) */\n    size_t used;\n    size_t capacity;\n    /* Vars for offsets within the client's reply */\n    struct {\n        /* General */\n        int saved; /* 1 if we already saved the offset (first time we call addReply*) */\n        /* Offset within the static reply buffer */\n        size_t bufpos;\n        /* Offset within the reply block list */\n        struct {\n            int index;\n            size_t used;\n        } last_node;\n    } offset;\n} clientReqResInfo;\n#endif\n\ntypedef struct client {\n    uint64_t id;            /* Client incremental unique ID. */\n    uint64_t flags;         /* Client flags: CLIENT_* macros. */\n    connection *conn;\n    uint8_t tid;            /* Thread assigned ID this client is bound to. */\n    uint8_t running_tid;    /* Thread assigned ID this client is running on. */\n    uint8_t io_flags;       /* Accessed by both main and IO threads, but not modified concurrently */\n    uint8_t read_error;     /* Client read error: CLIENT_READ_* macros. */\n    int resp;               /* RESP protocol version. Can be 2 or 3. */\n    redisDb *db;            /* Pointer to currently SELECTed DB. */\n    robj *name;             /* As set by CLIENT SETNAME. */\n    robj *lib_name;         /* The client library name as set by CLIENT SETINFO. 
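The SHOULD_CLUSTER_COMPATIBILITY_SAMPLE() macro defined earlier samples with probability sample_ratio/100, short-circuiting when the ratio is 100 so that case never depends on a floating-point comparison. The same shape, with the ratio passed explicitly for illustration:

```c
#include <stdlib.h>

// Returns 1 with probability ratio/100; always 1 when ratio is 100.
static int shouldSample(int ratio) {
    return ratio == 100 || (double)rand() / RAND_MAX * 100 < ratio;
}
```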
*/\n    robj *lib_ver;          /* The client library version as set by CLIENT SETINFO. */\n    sds querybuf;           /* Buffer we use to accumulate client queries. */\n    size_t qb_pos;          /* The position we have read in querybuf. */\n    size_t querybuf_peak;   /* Recent (100ms or more) peak of querybuf size. */\n    int argc;               /* Num of arguments of current command. */\n    robj **argv;            /* Arguments of current command. */\n    int argv_len;           /* Size of argv array (may be more than argc) */\n    int original_argc;      /* Num of arguments of original command if arguments were rewritten. */\n    robj **original_argv;   /* Arguments of original command if arguments were rewritten. */\n    size_t all_argv_len_sum;    /* Sum of lengths of objects in all pendingCommand argv lists */\n    pendingCommandList pending_cmds;  /* List of parsed pending commands */\n    pendingCommand *current_pending_cmd;\n    deferredObject *deferred_objects; /* Array of deferred objects to free. */\n    int deferred_objects_num;   /* Number of deferred objects to free. */\n    robj **io_deferred_objects;    /* Objects to be freed by main thread, queued by IO thread */\n    int io_deferred_objects_num;   /* Number of objects in io_deferred_objects */\n    int io_deferred_objects_size;  /* Allocated size of io_deferred_objects */\n    struct redisCommand *cmd, *lastcmd;  /* Current command and last command executed. */\n    struct redisCommand *lookedcmd; /* Command looked up in lookahead. */\n    struct redisCommand *realcmd; /* The original command that was executed by the client,\n                                     used to update error stats in case the c->cmd was modified\n                                     during the command invocation (like on GEOADD for example). */\n    user *user;             /* User associated with this connection. 
If the\n                               user is set to NULL the connection can do\n                               anything (admin). */\n    int reqtype;            /* Request protocol type: PROTO_REQ_* */\n    int multibulklen;       /* Number of multi bulk arguments left to read. */\n    long bulklen;           /* Length of bulk argument in multi bulk request. */\n    list *reply;            /* List of reply objects to send to the client. */\n    unsigned long long reply_bytes; /* Tot bytes of objects in reply list. */\n    list *deferred_reply_errors;    /* Used for module thread safe contexts. */\n    size_t sentlen;         /* Amount of bytes already sent in the current\n                               buffer or object being sent. */\n    time_t ctime;           /* Client creation time. */\n    long duration;          /* Current command duration. Used for measuring latency of blocking/non-blocking cmds */\n    int slot;               /* The slot the client is executing against. Set to -1 if no slot is being used */\n    int cluster_compatibility_check_slot; /* The slot the client is executing against for cluster compatibility check.\n                                           * -2 means we don't need to check slot violation, or we already found\n                                           * a violation, reported it and don't need to continue checking.\n                                           * -1 means we're looking for the slot number and didn't find it yet.\n                                           * any positive number means we found a slot and no violation yet. */\n    dictEntry *cur_script;  /* Cached pointer to the dictEntry of the script being executed. */\n    time_t lastinteraction; /* Time of the last interaction, used for timeout */\n    time_t io_lastinteraction; /* Time of the last interaction as seen from\n                                * IO thread. 
When the client is moved to main\n                                * it updates its `lastinteraction` value from\n                                * this. */\n    time_t obuf_soft_limit_reached_time;\n    mstime_t io_last_client_cron;  /* Timestamp of last invocation of client\n                                    * cron if client is running in IO thread */\n    mstime_t io_last_repl_cron;    /* Timestamp of last invocation of replication\n                                    * cron if client is running in IO thread. */\n    int authenticated;      /* Needed when the default user requires auth. */\n    int replstate;          /* Replication state if this is a slave. */\n    int repl_start_cmd_stream_on_ack; /* Install slave write handler on first ACK. */\n    int repldbfd;           /* Replication DB file descriptor. */\n    off_t repldboff;        /* Replication DB file offset. */\n    off_t repldbsize;       /* Replication DB file size. */\n    sds replpreamble;       /* Replication DB preamble. */\n    long long read_reploff; /* Read replication offset if this is a master. */\n    long long io_read_reploff; /* Copy of read_reploff but only used when\n                                * master client is in IO thread so we don't\n                                * have contention with IO thread. */\n    long long reploff;      /* Applied replication offset if this is a master. */\n    long long reploff_next; /* Next value to set for reploff when a command finishes executing */\n    long long repl_applied; /* Applied replication data count in querybuf, if this is a replica. */\n    long long repl_ack_off; /* Replication ack offset, if this is a slave. */\n    long long repl_aof_off; /* Replication AOF fsync ack offset, if this is a slave. */\n    long long repl_ack_time;/* Replication ack time, if this is a slave. */\n    long long io_repl_ack_time; /* Replication ack time, if this is a replica in\n                                 * IO thread. 
Keeps track of repl_ack_time while\n                                 * replica is in IO thread to avoid data races\n                                 * with main. repl_ack_time is updated with this\n                                 * value when replica returns to main thread. */\n    long long repl_last_partial_write; /* The last time the server did a partial write from the RDB child pipe to this replica  */\n    long long psync_initial_offset; /* FULLRESYNC reply offset other slaves\n                                       copying this slave output buffer\n                                       should use. */\n    char replid[CONFIG_RUN_ID_SIZE+1]; /* Master replication ID (if master). */\n    int slave_listening_port; /* As configured with: REPLCONF listening-port */\n    char *slave_addr;       /* Optionally given by REPLCONF ip-address */\n    int slave_capa;         /* Slave capabilities: SLAVE_CAPA_* bitwise OR. */\n    int slave_req;          /* Slave requirements: SLAVE_REQ_* */\n    uint64_t main_ch_client_id; /* The client id of this replica's main channel */\n    multiState mstate;      /* MULTI/EXEC state */\n    blockingState bstate;     /* blocking state */\n    long long woff;         /* Last write global replication offset. */\n    list *watched_keys;     /* Keys WATCHED for MULTI/EXEC CAS */\n    dict *pubsub_channels;  /* channels a client is interested in (SUBSCRIBE) */\n    dict *pubsub_patterns;  /* patterns a client is interested in (PSUBSCRIBE) */\n    dict *pubsubshard_channels;  /* shard level channels a client is interested in (SSUBSCRIBE) */\n    sds peerid;             /* Cached peer ID. */\n    sds sockname;           /* Cached connection target address. 
*/\n    listNode *client_list_node; /* list node in client list */\n    listNode *io_thread_client_list_node; /* list node in io thread client list */\n    listNode *postponed_list_node; /* list node within the postponed list */\n    void *module_blocked_client; /* Pointer to the RedisModuleBlockedClient associated with this\n                                  * client. This is set in case of module authentication before the\n                                  * unblocked client is reprocessed to handle reply callbacks. */\n    void *module_auth_ctx; /* Ongoing / attempted module based auth callback's ctx.\n                            * This is only tracked within the context of the command attempting\n                            * authentication. If not NULL, it means module auth is in progress. */\n    RedisModuleUserChangedFunc auth_callback; /* Module callback to execute\n                                               * when the authenticated user\n                                               * changes. */\n    void *auth_callback_privdata; /* Private data that is passed when the auth\n                                   * changed callback is executed. Opaque for\n                                   * Redis Core. */\n    void *auth_module;      /* The module that owns the callback, which is used\n                             * to disconnect the client if the module is\n                             * unloaded for cleanup. Opaque for Redis Core.*/\n\n    /* If this client is in tracking mode and this field is non zero,\n     * invalidation messages for keys fetched by this client will be sent to\n     * the specified client ID. */\n    uint64_t client_tracking_redirection;\n    rax *client_tracking_prefixes; /* A dictionary of prefixes we are already\n                                      subscribed to in BCAST mode, in the\n                                      context of client side caching. 
*/\n    /* In updateClientMemoryUsage() we track the memory usage of\n     * each client and add it to the sum of all the clients of a given type,\n     * however we need to remember what the old contribution of each\n     * client was, and in which category the client was, in order to remove it\n     * before adding the new value. */\n    size_t last_memory_usage;\n    int last_memory_type;\n\n    listNode *mem_usage_bucket_node;\n    clientMemUsageBucket *mem_usage_bucket;\n\n    listNode *ref_repl_buf_node; /* Referenced node of replication buffer blocks,\n                                  * see the definition of replBufBlock. */\n    size_t ref_block_pos;        /* Access position of referenced buffer block,\n                                  * i.e. the next offset to send. */\n    listNode *io_curr_repl_node; /* Current node we are sending repl data from in\n                                  * IO thread. */\n    size_t io_curr_block_pos;    /* Current position we are sending repl data from\n                                  * in IO thread. */\n    listNode *io_bound_repl_node;/* Bound node we are sending repl data from in\n                                  * IO thread. */\n    size_t io_bound_block_pos;   /* Bound position we are sending repl data from\n                                  * in IO thread. */\n\n    /* list node in clients_pending_write list */\n    listNode clients_pending_write_node;\n    /* list node in clients_with_pending_ref_reply list */\n    listNode pending_ref_reply_node;\n    /* Statistics and metrics */\n    size_t net_input_bytes_curr_cmd; /* Total network input bytes read for the\n                                      * execution of this client's current command. */\n    size_t net_output_bytes_curr_cmd; /* Total network output bytes sent to this\n                                       * client, by the current command. */\n    /* Response buffer */\n    size_t buf_peak; /* Peak used size of buffer in last 5 sec interval. 
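The bookkeeping described above must subtract a client's previous contribution before adding the new one, or the per-type sums drift. A minimal sketch, with a hypothetical number of client categories:

```c
#include <stddef.h>

#define CLIENT_TYPE_COUNT 4  // hypothetical number of client categories

// Running per-category totals; indexes are hypothetical category ids.
static size_t type_sums[CLIENT_TYPE_COUNT];

typedef struct { size_t last_memory_usage; int last_memory_type; } memTrack;

// Move a client's contribution from its remembered (type, usage) pair to the
// new one, mirroring the remove-then-add bookkeeping described above.
static void updateUsage(memTrack *t, int new_type, size_t new_usage) {
    type_sums[t->last_memory_type] -= t->last_memory_usage;  // remove old contribution
    type_sums[new_type] += new_usage;                        // add the new one
    t->last_memory_type = new_type;
    t->last_memory_usage = new_usage;
}
```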
*/\n    mstime_t buf_peak_last_reset_time; /* keeps the last time the buffer peak value was reset */\n    size_t bufpos;\n    size_t buf_usable_size; /* Usable size of buffer. */\n    char *buf;\n    uint8_t buf_encoded; /* True if c->buf content is encoded (e.g. for copy avoidance) */\n    payloadHeader *last_header; /* Pointer to the last header in a buffer when using copy avoidance */\n#ifdef LOG_REQ_RES\n    clientReqResInfo reqres;\n#endif\n    unsigned long long net_input_bytes;    /* Total network input bytes read from this client. */\n    unsigned long long net_output_bytes;   /* Total network output bytes sent to this client. */\n    unsigned long long commands_processed; /* Total count of commands this client executed. */\n    struct asmTask *task;       /* Atomic slot migration task */\n    char *node_id;              /* Node ID to connect to for atomic slot migration */\n\n    redisAtomic int pending_read; /* Flag indicating an IO thread client residing\n                                   * in main thread has received a read event. */\n\n    mstime_t last_ts_when_counted_as_active; /* Timestamp of last time this client was counted as active */\n    size_t stat_total_read_events; /* Number of times readQueryFromClient() was called */\n    size_t stat_avg_pipeline_length_sum; /* Sum of pipeline lengths for computing average */\n    size_t stat_avg_pipeline_length_cnt; /* Count of pipeline length samples */\n} client;\n\ntypedef struct __attribute__((aligned(CACHE_LINE_SIZE))) {\n    uint8_t id;                                 /* The unique ID assigned, if IO_THREADS_MAX_NUM is more\n                                                 * than 256, we should also promote the data type. */\n    pthread_t tid;                              /* Pthread ID */\n    redisAtomic int paused;                     /* Paused status for the io thread. */\n    redisAtomic int running;                    /* Running if true, main thread can send clients directly. 
*/\n    aeEventLoop *el;                            /* Main event loop of io thread. */\n    list *pending_clients;                      /* List of clients with pending writes. */\n    list *processing_clients;                   /* List of clients being processed. */\n    eventNotifier *pending_clients_notifier;    /* Used to wake up the loop when write should be performed. */\n    pthread_mutex_t pending_clients_mutex;      /* Mutex for pending write list */\n    list *pending_clients_to_main_thread;       /* Clients that are waiting to be executed by the main thread. */\n    list *clients;                              /* IO thread managed clients. */\n} IOThread;\n\n/* Context for streaming replDataBuf to database */\ntypedef struct replDataBufToDbCtx {\n    void *privdata;                     /* Private data of context */\n    client *client;                     /* Client to process commands */\n    size_t applied_offset;              /* Offset applied to the database */\n    int  (*should_continue)(void *ctx); /* Check if we should continue */\n    void (*yield_callback)(void *ctx);  /* Yield to the event loop */\n} replDataBufToDbCtx;\n\n/* ACL information */\ntypedef struct aclInfo {\n    long long user_auth_failures; /* Auth failure counts on user level */\n    long long invalid_cmd_accesses; /* Invalid command accesses that user doesn't have permission to */\n    long long invalid_key_accesses; /* Invalid key accesses that user doesn't have permission to */\n    long long invalid_channel_accesses; /* Invalid channel accesses that user doesn't have permission to */\n    long long acl_access_denied_tls_cert; /* TLS clients with cert not matching any existing user. 
*/\n} aclInfo;\n\nstruct saveparam {\n    time_t seconds;\n    int changes;\n};\n\nstruct moduleLoadQueueEntry {\n    sds path;\n    int argc;\n    robj **argv;\n};\n\nstruct sentinelLoadQueueEntry {\n    int argc;\n    sds *argv;\n    int linenum;\n    sds line;\n};\n\nstruct sentinelConfig {\n    list *pre_monitor_cfg;\n    list *monitor_cfg;\n    list *post_monitor_cfg;\n};\n\nstruct sharedObjectsStruct {\n    robj *ok, *err, *emptybulk, *czero, *cone, *pong, *space,\n    *queued, *null[4], *nullarray[4], *emptymap[4], *emptyset[4],\n    *emptyarray, *wrongtypeerr, *nokeyerr, *syntaxerr, *sameobjecterr,\n    *outofrangeerr, *noscripterr, *loadingerr,\n    *slowevalerr, *slowscripterr, *slowmoduleerr, *bgsaveerr,\n    *masterdownerr, *roslaveerr, *execaborterr, *noautherr, *noreplicaserr,\n    *busykeyerr, *oomerr, *plus, *messagebulk, *pmessagebulk, *subscribebulk,\n    *unsubscribebulk, *psubscribebulk, *punsubscribebulk, *del, *unlink,\n    *rpop, *lpop, *lpush, *rpoplpush, *lmove, *blmove, *zpopmin, *zpopmax,\n    *emptyscan, *multi, *exec, *left, *right, *hset, *srem, *xgroup, *xclaim, *xack,\n    *script, *replconf, *eval, *persist, *set, *pexpireat, *pexpire,\n    *hdel, *hpexpireat, *hpersist, *hsetex,\n    *time, *pxat, *absttl, *retrycount, *force, *justid, *entriesread,\n    *lastid, *ping, *setid, *keepttl, *load, *createconsumer, *fields,\n    *getack, *special_asterick, *special_equals, *default_username, *redacted,\n    *ssubscribebulk,*sunsubscribebulk, *smessagebulk,\n    *select[PROTO_SHARED_SELECT_CMDS],\n    *integers[OBJ_SHARED_INTEGERS],\n    *mbulkhdr[OBJ_SHARED_BULKHDR_LEN], /* \"*<value>\\r\\n\" */\n    *bulkhdr[OBJ_SHARED_BULKHDR_LEN],  /* \"$<value>\\r\\n\" */\n    *maphdr[OBJ_SHARED_BULKHDR_LEN],   /* \"%<value>\\r\\n\" */\n    *sethdr[OBJ_SHARED_BULKHDR_LEN];   /* \"~<value>\\r\\n\" */\n    sds minstring, maxstring;\n};\n\n/* ZSETs use a specialized version of Skiplists */\n\n/* Node info placed in level[0].span since it's unused at 
level 0 (static assert verified) */\ntypedef struct zskiplistNodeInfo {\n    uint16_t sdsoffset;  /* Offset from node start to sds data (after sds header) */\n    uint8_t levels;      /* Number of levels in this node (1-32) */\n    uint8_t reserved;\n} zskiplistNodeInfo;\n\ntypedef struct zskiplistNode {\n    double score;\n    struct zskiplistNode *backward;\n    struct zskiplistLevel {\n        struct zskiplistNode *forward;\n        /* Span is the number of elements between this node and the next node at this level.\n         * At level 0, span is repurposed to store zskiplistNodeInfo for regular nodes. */\n        unsigned long span;\n    } level[];\n    /* sds ele is embedded after the level[] array (use zslGetNodeElement(node) to access it) */\n} zskiplistNode;\n\ntypedef struct zskiplist {\n    struct zskiplistNode *header, *tail;\n    unsigned long length;\n    int level;\n    size_t alloc_size;\n} zskiplist;\n\ntypedef struct zset {\n    dict *dict;\n    zskiplist *zsl;\n} zset;\n\ntypedef struct clientBufferLimitsConfig {\n    unsigned long long hard_limit_bytes;\n    unsigned long long soft_limit_bytes;\n    time_t soft_limit_seconds;\n} clientBufferLimitsConfig;\n\nextern clientBufferLimitsConfig clientBufferLimitsDefaults[CLIENT_TYPE_OBUF_COUNT];\n\n/* The redisOp structure defines a Redis Operation, that is, an instance of\n * a command with an argument vector, database ID, propagation target\n * (PROPAGATE_*), and command pointer.\n *\n * Currently only used to additionally propagate more commands to AOF/Replication\n * after the propagation of the executed command. */\ntypedef struct redisOp {\n    robj **argv;\n    int argc, dbid, target;\n} redisOp;\n\n/* Defines an array of Redis operations. 
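A sketch of what appending into such an operation array can look like, with robj stubbed out and a grow-by-doubling policy assumed (the real redisOpArrayAppend may differ):

```c
#include <stdlib.h>

typedef struct { int dummy; } robj;  // stand-in for the real robj

typedef struct { robj **argv; int argc, dbid, target; } op;
typedef struct { op *ops; int numops; int capacity; } opArray;

// Append one operation, doubling the backing array when full;
// returns the new number of operations.
static int opArrayAppend(opArray *oa, int dbid, robj **argv, int argc, int target) {
    if (oa->numops == oa->capacity) {
        oa->capacity = oa->capacity ? oa->capacity * 2 : 4;
        oa->ops = realloc(oa->ops, sizeof(op) * oa->capacity);
    }
    op *o = &oa->ops[oa->numops];
    o->argv = argv; o->argc = argc; o->dbid = dbid; o->target = target;
    return ++oa->numops;
}

static void opArrayFree(opArray *oa) {
    free(oa->ops);
    oa->ops = NULL; oa->numops = oa->capacity = 0;
}
```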
There is an API to add to this\n * structure in an easy way.\n *\n * int redisOpArrayAppend(redisOpArray *oa, int dbid, robj **argv, int argc, int target);\n * void redisOpArrayFree(redisOpArray *oa);\n */\ntypedef struct redisOpArray {\n    redisOp *ops;\n    int numops;\n    int capacity;\n} redisOpArray;\n\n/* This structure is returned by the getMemoryOverheadData() function in\n * order to return memory overhead information. */\nstruct redisMemOverhead {\n    size_t peak_allocated;\n    size_t total_allocated;\n    size_t startup_allocated;\n    size_t repl_backlog;\n    size_t replica_fullsync_buffer;\n    size_t clients_slaves;\n    size_t clients_normal;\n    size_t cluster_links;\n    size_t aof_buffer;\n    size_t eval_caches;\n    size_t functions_caches;\n    size_t script_vm;\n    size_t overhead_total;\n    size_t dataset;\n    size_t total_keys;\n    size_t bytes_per_key;\n    float dataset_perc;\n    float peak_perc;\n    float total_frag;\n    ssize_t total_frag_bytes;\n    float allocator_frag;\n    ssize_t allocator_frag_bytes;\n    float allocator_rss;\n    ssize_t allocator_rss_bytes;\n    float rss_extra;\n    ssize_t rss_extra_bytes;\n    size_t num_dbs;\n    size_t overhead_db_hashtable_lut;\n    size_t overhead_db_hashtable_rehashing;\n    unsigned long db_dict_rehashing_count;\n    size_t asm_import_input_buffer;\n    size_t asm_migrate_output_buffer;\n    struct {\n        size_t dbid;\n        size_t overhead_ht_main;\n        size_t overhead_ht_expires;\n    } *db;\n};\n\n/* Replication error behavior determines the replica behavior\n * when it receives an error over the replication stream. In\n * either case the error is logged. 
*/\ntypedef enum {\n    PROPAGATION_ERR_BEHAVIOR_IGNORE = 0,\n    PROPAGATION_ERR_BEHAVIOR_PANIC,\n    PROPAGATION_ERR_BEHAVIOR_PANIC_ON_REPLICAS\n} replicationErrorBehavior;\n\n/* This structure can be optionally passed to RDB save/load functions in\n * order to implement additional functionalities, by storing and loading\n * metadata in the RDB file.\n *\n * For example, to select a DB at load time, useful in\n * replication in order to make sure that chained slaves (slaves of slaves)\n * select the correct DB and are able to accept the stream coming from the\n * top-level master. */\ntypedef struct rdbSaveInfo {\n    /* Used when saving and loading. */\n    int repl_stream_db;  /* DB to select in server.master client. */\n\n    /* Used only when loading. */\n    int repl_id_is_set;  /* True if repl_id field is set. */\n    char repl_id[CONFIG_RUN_ID_SIZE+1];     /* Replication ID. */\n    long long repl_offset;                  /* Replication offset. */\n} rdbSaveInfo;\n\n#define RDB_SAVE_INFO_INIT {-1,0,\"0000000000000000000000000000000000000000\",-1}\n\nstruct malloc_stats {\n    size_t zmalloc_used;\n    size_t process_rss;\n    size_t allocator_allocated;\n    size_t allocator_active;\n    size_t allocator_resident;\n    size_t allocator_muzzy;\n    size_t allocator_frag_smallbins_bytes;\n    size_t lua_allocator_allocated;\n    size_t lua_allocator_active;\n    size_t lua_allocator_resident;\n    size_t lua_allocator_frag_smallbins_bytes;\n};\n\n/*-----------------------------------------------------------------------------\n * TLS Context Configuration\n *----------------------------------------------------------------------------*/\n\ntypedef struct redisTLSContextConfig {\n    char *cert_file;                /* Server side and optionally client side cert file name */\n    char *key_file;                 /* Private key filename for cert_file */\n    char *key_file_pass;            /* Optional password for key_file */\n    char *client_cert_file;         /* 
Certificate to use as a client; if none, use cert_file */\n    char *client_key_file;          /* Private key filename for client_cert_file */\n    char *client_key_file_pass;     /* Optional password for client_key_file */\n    int client_auth_user;           /* Field to be used for automatic TLS authentication based on client TLS certificate */\n    char *dh_params_file;\n    char *ca_cert_file;\n    char *ca_cert_dir;\n    char *protocols;\n    char *ciphers;\n    char *ciphersuites;\n    int prefer_server_ciphers;\n    int session_caching;\n    int session_cache_size;\n    int session_cache_timeout;\n} redisTLSContextConfig;\n\n/*-----------------------------------------------------------------------------\n * AOF manifest definition\n *----------------------------------------------------------------------------*/\ntypedef enum {\n    AOF_FILE_TYPE_BASE  = 'b', /* BASE file */\n    AOF_FILE_TYPE_HIST  = 'h', /* HISTORY file */\n    AOF_FILE_TYPE_INCR  = 'i', /* INCR file */\n} aof_file_type;\n\ntypedef struct {\n    sds           file_name;  /* file name */\n    long long     file_seq;   /* file sequence */\n    aof_file_type file_type;  /* file type */\n    long long     start_offset;  /* the start replication offset of the file */\n    long long     end_offset;    /* the end replication offset of the file */\n} aofInfo;\n\ntypedef struct {\n    aofInfo     *base_aof_info;       /* BASE file information. NULL if there is no BASE file. */\n    list        *incr_aof_list;       /* INCR AOFs list. We may have multiple INCR AOFs when rewrite fails. */\n    list        *history_aof_list;    /* HISTORY AOF list. When AOFRW succeeds, the aofInfo contained in\n                                         `base_aof_info` and `incr_aof_list` will be moved to this list. We\n                                         will delete these AOF files when AOFRW finishes. */\n    long long   curr_base_file_seq;   /* The sequence number used by the current BASE file. 
*/\n    long long   curr_incr_file_seq;   /* The sequence number used by the current INCR file. */\n    int         dirty;                /* 1 indicates that the in-memory aofManifest is inconsistent with\n                                         the one on disk; we need to persist it immediately. */\n} aofManifest;\n\n/*-----------------------------------------------------------------------------\n * Global server state\n *----------------------------------------------------------------------------*/\n\n/* AIX defines hz to __hz; we don't use this define, and in order to allow\n * Redis to build on AIX we need to undef it. */\n#ifdef _AIX\n#undef hz\n#endif\n\n#define CHILD_TYPE_NONE 0\n#define CHILD_TYPE_RDB 1\n#define CHILD_TYPE_AOF 2\n#define CHILD_TYPE_LDB 3\n#define CHILD_TYPE_MODULE 4\n\ntypedef enum childInfoType {\n    CHILD_INFO_TYPE_CURRENT_INFO,\n    CHILD_INFO_TYPE_AOF_COW_SIZE,\n    CHILD_INFO_TYPE_RDB_COW_SIZE,\n    CHILD_INFO_TYPE_MODULE_COW_SIZE\n} childInfoType;\n\ntypedef struct hotkeyStats hotkeyStats;\n\nstruct redisServer {\n    /* General */\n    pid_t pid;                  /* Main process pid. */\n    pthread_t main_thread_id;         /* Main thread id */\n    char *configfile;           /* Absolute config file path, or NULL */\n    char *executable;           /* Absolute executable file path. */\n    char **exec_argv;           /* Executable argv vector (copy). */\n    int dynamic_hz;             /* Change hz value depending on # of clients. */\n    int config_hz;              /* Configured HZ value. May be different than\n                                   the actual 'hz' field value if dynamic-hz\n                                   is enabled. 
*/\n    mode_t umask;               /* The umask value of the process on startup */\n    int hz;                     /* serverCron() calls frequency in hertz */\n    int in_fork_child;          /* indication that this is a fork child */\n    redisDb *db;\n    dict *commands;             /* Command table */\n    dict *orig_commands;        /* Command table before command renaming. */\n    aeEventLoop *el;\n    rax *errors;                /* Errors table */\n    int errors_enabled;         /* If true, errorstats is enabled, and we will add new errors. */\n    unsigned int lruclock; /* Clock for LRU eviction */\n    redisAtomic int shutdown_asap; /* Shutdown ordered by signal handler. */\n    redisAtomic int crashing;      /* Server is generating a crash report. */\n    mstime_t shutdown_mstime;   /* Timestamp to limit graceful shutdown. */\n    redisAtomic int last_sig_received;      /* Indicates the last SIGNAL received, if any (e.g., SIGINT or SIGTERM). */\n    int shutdown_flags;         /* Flags passed to prepareForShutdown(). */\n    int activerehashing;        /* Incremental rehash in serverCron() */\n    int active_defrag_running;  /* Active defragmentation running (holds current scan aggressiveness) */\n    char *pidfile;              /* PID file path */\n    int arch_bits;              /* 32 or 64 depending on sizeof(long) */\n    int cronloops;              /* Number of times the cron function ran */\n    char runid[CONFIG_RUN_ID_SIZE+1];  /* ID always different at every exec. */\n    int sentinel_mode;          /* True if this instance is a Sentinel. */\n    size_t initial_memory_usage; /* Bytes used after initialization. */\n    int always_show_logo;       /* Show logo even for non-stdout logging. */\n    int in_exec;                /* Are we inside EXEC? */\n    int busy_module_yield_flags;         /* Are we inside a busy module? (triggered by RM_Yield). see BUSY_MODULE_YIELD_ flags. 
*/\n    const char *busy_module_yield_reply; /* When non-null, we are inside RM_Yield. */\n    char *ignore_warnings;      /* Config: warnings that should be ignored. */\n    int client_pause_in_transaction; /* Was a client pause executed during this EXEC? */\n    int thp_enabled;                 /* If true, THP is enabled. */\n    size_t page_size;                /* The page size of the OS. */\n    redisAtomic int running;    /* Running if true, IO threads can send clients without notification */\n    /* Modules */\n    dict *moduleapi;            /* Exported core APIs dictionary for modules. */\n    dict *sharedapi;            /* Like moduleapi but containing the APIs that\n                                   modules share with each other. */\n    dict *module_configs_queue; /* Unmapped configs are queued here, assumed to be module configs. Applied after modules are loaded during startup, or passed as arguments to loadex. */\n    list *loadmodule_queue;     /* List of modules to load at startup. */\n    int module_pipe[2];         /* Pipe used to wake up the event loop from module threads. */\n    pid_t child_pid;            /* PID of current child */\n    int child_type;             /* Type of current child */\n    redisAtomic int module_gil_acquring; /* Indicates whether the GIL is being acquired by the main thread. 
*/\n    /* Networking */\n    int port;                   /* TCP listening port */\n    int tls_port;               /* TLS listening port */\n    int tcp_backlog;            /* TCP listen() backlog */\n    char *bindaddr[CONFIG_BINDADDR_MAX]; /* Addresses we should bind to */\n    int bindaddr_count;         /* Number of addresses in server.bindaddr[] */\n    char *bind_source_addr;     /* Source address to bind on for outgoing connections */\n    char *unixsocket;           /* UNIX socket path */\n    unsigned int unixsocketperm; /* UNIX socket permission (see mode_t) */\n    connListener listeners[CONN_TYPE_MAX]; /* TCP/Unix/TLS even more types */\n    uint32_t socket_mark_id;    /* ID for listen socket marking */\n    connListener clistener;     /* Cluster bus listener */\n    list *clients;              /* List of active clients */\n    list *clients_to_close;     /* Clients to close asynchronously */\n    list *clients_pending_write; /* Clients with output to write or a write handler to install. */\n    list *clients_pending_read;  /* Client has pending read socket buffers. */\n    list *clients_with_pending_ref_reply; /* Clients with referenced reply objects. */\n    list *slaves, *monitors;    /* List of slaves and MONITORs */\n    client *current_client;     /* The client that triggered the command execution (External or AOF). */\n    client *executing_client;   /* The client executing the current command (possibly script or module). */\n\n#ifdef LOG_REQ_RES\n    char *req_res_logfile; /* Path of log file for logging all requests and their replies. If NULL, no logging will be performed */\n    unsigned int client_default_resp;\n#endif\n\n    /* Stuff for client mem eviction */\n    clientMemUsageBucket* client_mem_usage_buckets;\n\n    rax *clients_timeout_table; /* Radix tree for blocked clients timeouts. */\n    int execution_nesting;      /* Execution nesting level.\n                                 * e.g. 
call(), async module stuff (timers, events, etc.),\n                                 * cron stuff (active expire, eviction) */\n    rax *clients_index;         /* Active clients dictionary by client ID. */\n    uint32_t paused_actions;   /* Bitmask of actions that are currently paused */\n    list *postponed_clients;       /* List of postponed clients */\n    pause_event client_pause_per_purpose[NUM_PAUSE_PURPOSES];\n    char neterr[ANET_ERR_LEN];   /* Error buffer for anet.c */\n    dict *migrate_cached_sockets;/* MIGRATE cached sockets */\n    redisAtomic uint64_t next_client_id; /* Next client unique ID. Incremental. */\n    int protected_mode;         /* Don't accept external connections. */\n    int io_threads_num;         /* Number of IO threads to use. */\n    int io_threads_clients_num[IO_THREADS_MAX_NUM]; /* Number of clients assigned to each IO thread. */\n    int io_threads_do_reads;    /* Read and parse from IO threads? */\n    int io_threads_active;      /* Are IO threads currently active? 
*/\n    pendingCommandPool cmd_pool; /* Shared pool for reusing pendingCommand,\n                                  * only when IO threads disabled */\n    int prefetch_batch_max_size;/* Maximum number of keys to prefetch in a single batch */\n    long long events_processed_while_blocked; /* processEventsWhileBlocked() */\n    int enable_protected_configs;    /* Enable the modification of protected configs, see PROTECTED_ACTION_ALLOWED_* */\n    int enable_debug_cmd;            /* Enable DEBUG commands, see PROTECTED_ACTION_ALLOWED_* */\n    int enable_module_cmd;           /* Enable MODULE commands, see PROTECTED_ACTION_ALLOWED_* */\n\n    /* RDB / AOF loading information */\n    volatile sig_atomic_t loading; /* We are loading data from disk if true */\n    volatile sig_atomic_t async_loading; /* We are loading data without blocking the db being served */\n    off_t loading_total_bytes;\n    off_t loading_rdb_used_mem;\n    off_t loading_loaded_bytes;\n    time_t loading_start_time;\n    off_t loading_process_events_interval_bytes;\n    int loading_skip_checksum; /* Skip checksum verification during diskless loading */\n    /* Fields used only for stats */\n    time_t stat_starttime;          /* Server start time */\n    long long stat_numcommands;     /* Number of processed commands */\n    long long stat_numconnections;  /* Number of connections received */\n    long long stat_expiredkeys;     /* Number of expired keys */\n    long long stat_expiredkeys_active; /* Number of expired keys by active expire */\n    long long stat_expired_subkeys; /* Number of expired subkeys (Currently only hash-fields) */\n    long long stat_expired_subkeys_active; /* Number of expired subkeys by active expire */\n    double stat_expired_stale_perc; /* Percentage of keys probably expired */\n    long long stat_expired_time_cap_reached_count; /* Early expire cycle stops.*/\n    long long stat_expire_cycle_time_used; /* Cumulative microseconds used. 
*/\n    long long stat_evictedkeys;     /* Number of evicted keys (maxmemory) */\n    long long stat_evictedclients;  /* Number of evicted clients */\n    long long stat_evictedscripts;  /* Number of evicted lua scripts. */\n    long long stat_total_eviction_exceeded_time;  /* Total time over the memory limit, unit us */\n    monotime stat_last_eviction_exceeded_time;  /* Timestamp of current eviction start, unit us */\n    long long stat_keyspace_hits;   /* Number of successful lookups of keys */\n    long long stat_keyspace_misses; /* Number of failed lookups of keys */\n    long long stat_active_defrag_hits;      /* number of allocations moved */\n    long long stat_active_defrag_misses;    /* number of allocations scanned but not moved */\n    long long stat_active_defrag_key_hits;  /* number of keys with moved allocations */\n    long long stat_active_defrag_key_misses;/* number of keys scanned and not moved */\n    long long stat_active_defrag_scanned;   /* number of dictEntries scanned */\n    long long stat_total_active_defrag_time; /* Total time memory fragmentation over the limit, unit us */\n    monotime stat_last_active_defrag_time; /* Timestamp of current active defrag start */\n    size_t stat_peak_memory;        /* Max used memory record */\n    time_t stat_peak_memory_time;   /* Time when stat_peak_memory was recorded */\n    long long stat_aof_rewrites;    /* number of aof file rewrites performed */\n    long long stat_aofrw_consecutive_failures; /* The number of consecutive failures of aofrw */\n    long long stat_rdb_saves;       /* number of rdb saves performed */\n    long long stat_rdb_consecutive_failures; /* The number of consecutive failures of rdb saves */\n    long long stat_fork_time;       /* Time needed to perform latest fork() */\n    double stat_fork_rate;          /* Fork rate in GB/sec. */\n    long long stat_total_forks;     /* Total count of fork. 
*/\n    long long stat_rejected_conn;   /* Clients rejected because of maxclients */\n    long long stat_sync_full;       /* Number of full resyncs with slaves. */\n    long long stat_sync_partial_ok; /* Number of accepted PSYNC requests. */\n    long long stat_sync_partial_err;/* Number of unaccepted PSYNC requests. */\n    list *slowlog;                  /* SLOWLOG list of commands */\n    long long slowlog_entry_id;     /* SLOWLOG current entry ID */\n    long long slowlog_log_slower_than; /* SLOWLOG time limit (to get logged) */\n    unsigned long slowlog_max_len;     /* SLOWLOG max number of items logged */\n    long long stat_slowlog_count;          /* Total slowlog entries ever pushed */\n    long long stat_slowlog_time_us_sum;    /* Sum of all slowlog entry durations (usec) */\n    long long stat_slowlog_time_us_max;    /* Max slowlog entry duration (usec) */\n    struct malloc_stats cron_malloc_stats; /* sampled in serverCron(). */\n    redisAtomic long long stat_net_input_bytes; /* Bytes read from network. */\n    redisAtomic long long stat_net_output_bytes; /* Bytes written to network. */\n    redisAtomic long long stat_net_repl_input_bytes; /* Bytes read during replication, added to stat_net_input_bytes in 'info'. */\n    redisAtomic long long stat_net_repl_output_bytes; /* Bytes written during replication, added to stat_net_output_bytes in 'info'. */\n    size_t stat_current_cow_peak;   /* Peak size of copy on write bytes. */\n    size_t stat_current_cow_bytes;  /* Copy on write bytes while child is active. */\n    monotime stat_current_cow_updated;  /* Last update time of stat_current_cow_bytes */\n    size_t stat_current_save_keys_processed;  /* Processed keys while child is active. */\n    size_t stat_current_save_keys_total;  /* Number of keys when child started. */\n    size_t stat_rdb_cow_bytes;      /* Copy on write bytes during RDB saving. */\n    size_t stat_aof_cow_bytes;      /* Copy on write bytes during AOF rewrite. 
*/\n    size_t stat_module_cow_bytes;   /* Copy on write bytes during module fork. */\n    double stat_module_progress;   /* Module save progress. */\n    size_t stat_clients_type_memory[CLIENT_TYPE_COUNT];/* Mem usage by type */\n    size_t stat_cluster_links_memory; /* Mem usage by cluster links */\n    long long stat_unexpected_error_replies; /* Number of unexpected (aof-loading, replica to master, etc.) error replies */\n    long long stat_total_error_replies; /* Total number of issued error replies ( command + rejected errors ) */\n    long long stat_dump_payload_sanitizations; /* Number of deep dump payload integrity validations. */\n    redisAtomic long long stat_io_reads_processed[IO_THREADS_MAX_NUM]; /* Number of read events processed by IO / Main threads */\n    redisAtomic long long stat_io_writes_processed[IO_THREADS_MAX_NUM]; /* Number of write events processed by IO / Main threads */\n    redisAtomic long long stat_client_qbuf_limit_disconnections;  /* Total number of clients that reached the query buffer length limit */\n    long long stat_client_outbuf_limit_disconnections;  /* Total number of clients that reached the output buffer length limit */\n    long long stat_cluster_incompatible_ops; /* Number of operations that are incompatible with cluster mode */\n    long long stat_total_prefetch_entries;  /* Total number of prefetched dict entries */\n    long long stat_total_prefetch_batches;  /* Total number of prefetched batches */\n    redisAtomic long long stat_avg_pipeline_length_sum; /* Sum of pipeline lengths for computing average */\n    redisAtomic long long stat_avg_pipeline_length_cnt; /* Count of pipeline length samples */\n    redisAtomic long long stat_total_client_process_input_buff_events; /* Number of times processInputBuffer() was called */\n    size_t stat_eventloop_cycles_with_clients_input_buff_processing; /* Event loop cycles with client input buffer processing */\n    /* The following two are used to track instantaneous metrics, like\n     * number of 
operations per second, network traffic. */\n    struct {\n        long long last_sample_base;  /* The divisor of last sample window */\n        long long last_sample_value; /* The dividend of last sample window */\n        long long samples[STATS_METRIC_SAMPLES];\n        int idx;\n    } inst_metric[STATS_METRIC_COUNT];\n    long long stat_reply_buffer_shrinks; /* Total number of output buffer shrinks */\n    long long stat_reply_buffer_expands; /* Total number of output buffer expands */\n    monotime el_start;\n    /* The following two are used to record the max number of commands executed in one eventloop.\n     * Note that commands in transactions are also counted. */\n    long long el_cmd_cnt_start;\n    long long el_cmd_cnt_max;\n    /* The sum of active-expire, active-defrag and all other tasks done by cron and beforeSleep,\n       but excluding read, write and AOF, which are counted by other sets of metrics. */\n    monotime el_cron_duration;\n    durationStats duration_stats[EL_DURATION_TYPE_NUM];\n\n    /* Hotkey tracking */\n    hotkeyStats *hotkeys;\n\n    /* Configuration */\n    int verbosity;                  /* Loglevel in redis.conf */\n    int hide_user_data_from_log;    /* In the event of an assertion failure, hide command arguments from the operator */\n    int maxidletime;                /* Client timeout in seconds */\n    int tcpkeepalive;               /* Set SO_KEEPALIVE if non-zero. */\n    int active_expire_enabled;      /* Can be disabled for testing purposes. */\n    int active_expire_effort;       /* From 1 (default) to 10, active effort. */\n    int allow_access_expired;       /* If > 0, allow access to logically expired keys */\n    int allow_access_trimmed;       /* If > 0, allow access to logically trimmed keys */\n    int active_defrag_enabled;\n    int sanitize_dump_payload;      /* Enables deep sanitization for ziplist and listpack in RDB and RESTORE. 
*/\n    int skip_checksum_validation;   /* Disable checksum validation for RDB and RESTORE payload. */\n    int allow_keymeta_registration; /* Allow keymeta class registration outside server startup (for testing). */\n    int jemalloc_bg_thread;         /* Enable jemalloc background thread */\n    int active_defrag_configuration_changed; /* defrag configuration has been changed and we need to reconsider\n                                              * active_defrag_running in computeDefragCycles. */\n    size_t active_defrag_ignore_bytes; /* minimum amount of fragmentation waste to start active defrag */\n    int active_defrag_threshold_lower; /* minimum percentage of fragmentation to start active defrag */\n    int active_defrag_threshold_upper; /* maximum percentage of fragmentation at which we use maximum effort */\n    int active_defrag_cycle_min;       /* minimal effort for defrag in CPU percentage */\n    int active_defrag_cycle_max;       /* maximal effort for defrag in CPU percentage */\n    unsigned long active_defrag_max_scan_fields; /* maximum number of fields of set/hash/zset/list to process from within the main dict scan */\n    size_t client_max_querybuf_len; /* Limit for client query buffer length */\n    int lookahead;                  /* how many commands in each client pipeline to decode and prefetch */\n    int dbnum;                      /* Total number of configured DBs */\n    int supervised;                 /* 1 if supervised, 0 otherwise. 
*/\n    int supervised_mode;            /* See SUPERVISED_* */\n    int daemonize;                  /* True if running as a daemon */\n    int set_proc_title;             /* True if change proc title */\n    char *proc_title_template;      /* Process title template format */\n    clientBufferLimitsConfig client_obuf_limits[CLIENT_TYPE_OBUF_COUNT];\n    int pause_cron;                 /* Don't run cron tasks (debug) */\n    int dict_resizing;              /* Whether to allow main dict and expired dict to be resized (debug) */\n    int latency_tracking_enabled;   /* 1 if extended latency tracking is enabled, 0 otherwise. */\n    double *latency_tracking_info_percentiles; /* Extended latency tracking info output percentile list configuration. */\n    int latency_tracking_info_percentiles_len;\n    int memory_tracking_enabled;    /* Account used memory per slot */\n    unsigned int max_new_tls_conns_per_cycle; /* The maximum number of tls connections that will be accepted during each invocation of the event loop. */\n    unsigned int max_new_conns_per_cycle; /* The maximum number of tcp connections that will be accepted during each invocation of the event loop. */\n    int cluster_compatibility_sample_ratio; /* Sampling ratio for cluster mode incompatible commands. */\n    int lazyexpire_nested_arbitrary_keys; /* If disabled, avoid lazy-expire from commands that touch arbitrary keys (SCAN/RANDOMKEY) within transactions */\n\n    /* AOF persistence */\n    int aof_enabled;                /* AOF configuration */\n    int aof_state;                  /* AOF_(ON|OFF|WAIT_REWRITE) */\n    int aof_fsync;                  /* Kind of fsync() policy */\n    char *aof_filename;             /* Basename of the AOF file and manifest file */\n    char *aof_dirname;              /* Name of the AOF directory */\n    int aof_no_fsync_on_rewrite;    /* Don't fsync if a rewrite is in prog. */\n    int aof_rewrite_perc;           /* Rewrite AOF if % growth is > M and... 
*/\n    off_t aof_rewrite_min_size;     /* the AOF file is at least N bytes. */\n    off_t aof_rewrite_base_size;    /* AOF size on latest startup or rewrite. */\n    off_t aof_current_size;         /* AOF current size (Including BASE + INCRs). */\n    off_t aof_last_incr_size;       /* The size of the latest incr AOF. */\n    off_t aof_last_incr_fsync_offset; /* AOF offset which is already requested to be synced to disk.\n                                       * Compare with the aof_last_incr_size. */\n    int aof_flush_sleep;            /* Micros to sleep before flush. (used by tests) */\n    int aof_rewrite_scheduled;      /* Rewrite once BGSAVE terminates. */\n    sds aof_buf;      /* AOF buffer, written before entering the event loop */\n    int aof_fd;       /* File descriptor of currently selected AOF file */\n    int aof_selected_db; /* Currently selected DB in AOF */\n    mstime_t aof_flush_postponed_start; /* mstime of postponed AOF flush */\n    mstime_t aof_last_fsync;            /* mstime of last fsync() */\n    time_t aof_rewrite_time_last;   /* Time used by last AOF rewrite run. */\n    time_t aof_rewrite_time_start;  /* Current AOF rewrite start time. */\n    time_t aof_cur_timestamp;       /* Current record timestamp in AOF */\n    int aof_timestamp_enabled;      /* Enable record timestamp in AOF */\n    int aof_lastbgrewrite_status;   /* C_OK or C_ERR */\n    unsigned long aof_delayed_fsync;  /* delayed AOF fsync() counter */\n    int aof_rewrite_incremental_fsync;/* fsync incrementally while aof rewriting? */\n    int rdb_save_incremental_fsync;   /* fsync incrementally while rdb saving? */\n    int aof_last_write_status;      /* C_OK or C_ERR */\n    int aof_last_write_errno;       /* Valid if aof write/fsync status is ERR */\n    int aof_load_truncated;         /* Don't stop on unexpected AOF EOF. */\n    off_t aof_load_corrupt_tail_max_size; /* The max size of broken AOF tail that can be ignored. 
*/\n    int aof_use_rdb_preamble;       /* Specify base AOF to use RDB encoding on AOF rewrites. */\n    redisAtomic int aof_bio_fsync_status; /* Status of AOF fsync in bio job. */\n    redisAtomic int aof_bio_fsync_errno;  /* Errno of AOF fsync in bio job. */\n    aofManifest *aof_manifest;       /* Used to track AOFs. */\n    int aof_disable_auto_gc;         /* Whether to disable automatic deletion of HISTORY type AOFs.\n                                        Default no. (For testing.) */\n\n    /* RDB persistence */\n    long long dirty;                /* Changes to DB from the last save */\n    long long dirty_before_bgsave;  /* Used to restore dirty on failed BGSAVE */\n    long long rdb_last_load_keys_expired;  /* number of expired keys when loading RDB */\n    long long rdb_last_load_keys_loaded;   /* number of loaded keys when loading RDB */\n    int bgsave_aborted;             /* Set when killing a child, to treat it as aborted even if it succeeds. */\n    struct saveparam *saveparams;   /* Save points array for RDB */\n    int saveparamslen;              /* Number of saving points */\n    char *rdb_filename;             /* Name of RDB file */\n    int rdb_compression;            /* Use compression in RDB? */\n    int rdb_checksum;               /* Use RDB checksum? */\n    int rdb_del_sync_files;         /* Remove RDB files used only for SYNC if\n                                       the instance does not use persistence. */\n    time_t lastsave;                /* Unix time of last successful save */\n    time_t lastbgsave_try;          /* Unix time of last attempted bgsave */\n    time_t rdb_save_time_last;      /* Time used by last RDB save run. */\n    time_t rdb_save_time_start;     /* Current RDB save start time. */\n    int rdb_bgsave_scheduled;       /* BGSAVE when possible if true. */\n    int rdb_child_type;             /* Type of save by active child. 
*/\n    int lastbgsave_status;          /* C_OK or C_ERR */\n    int stop_writes_on_bgsave_err;  /* Don't allow writes if can't BGSAVE */\n    int rdb_pipe_read;              /* RDB pipe used to transfer the rdb data */\n                                    /* to the parent process in diskless repl. */\n    int rdb_child_exit_pipe;        /* Used by the diskless parent to allow the child to exit. */\n    connection **rdb_pipe_conns;    /* Connections which are currently the */\n    int rdb_pipe_numconns;          /* target of diskless rdb fork child. */\n    int rdb_pipe_numconns_writing;  /* Number of rdb conns with pending writes. */\n    char *rdb_pipe_buff;            /* In diskless replication, this buffer holds data */\n    int rdb_pipe_bufflen;           /* that was read from the rdb pipe. */\n    int rdb_key_save_delay;         /* Delay in microseconds between keys while\n                                     * writing aof or rdb. (for testing). negative\n                                     * value means fractions of microseconds (on average). */\n    int key_load_delay;             /* Delay in microseconds between keys while\n                                     * loading aof or rdb. (for testing). negative\n                                     * value means fractions of microseconds (on average). */\n    /* Pipe and data structures for child -> parent info sharing. */\n    int child_info_pipe[2];         /* Pipe used to write the child_info_data. */\n    int child_info_nread;           /* Num of bytes of the last read from pipe */\n    /* Propagation of commands in AOF / replication */\n    redisOpArray also_propagate;    /* Additional command to propagate. */\n    int replication_allowed;        /* Are we allowed to replicate? */\n    /* Logging */\n    char *logfile;                  /* Path of log file */\n    int syslog_enabled;             /* Is syslog enabled? 
*/\n    char *syslog_ident;             /* Syslog ident */\n    int syslog_facility;            /* Syslog facility */\n    int crashlog_enabled;           /* Enable signal handler for crashlog.\n                                     * Disable for clean core dumps. */\n    int memcheck_enabled;           /* Enable memory check on crash. */\n    int use_exit_on_panic;          /* Use exit() on panic and assert rather than\n                                     * abort(). Useful for Valgrind. */\n    /* Shutdown */\n    int shutdown_timeout;           /* Graceful shutdown time limit in seconds. */\n    int shutdown_on_sigint;         /* Shutdown flags configured for SIGINT. */\n    int shutdown_on_sigterm;        /* Shutdown flags configured for SIGTERM. */\n\n    /* Replication (master) */\n    char replid[CONFIG_RUN_ID_SIZE+1];  /* My current replication ID. */\n    char replid2[CONFIG_RUN_ID_SIZE+1]; /* replid inherited from master */\n    long long master_repl_offset;   /* My current replication offset */\n    long long second_replid_offset; /* Accept offsets up to this for replid2. 
*/\n    redisAtomic long long fsynced_reploff_pending;/* Largest replication offset to\n                                     * potentially have been fsynced, applied to\n                                       fsynced_reploff only when AOF state is AOF_ON\n                                       (not during the initial rewrite) */\n    long long fsynced_reploff;      /* Largest replication offset that has been confirmed to be fsynced */\n    int slaveseldb;                 /* Last SELECTed DB in replication output */\n    int repl_ping_slave_period;     /* Master pings the slave every N seconds */\n    replBacklog *repl_backlog;      /* Replication backlog for partial syncs */\n    long long repl_backlog_size;    /* Backlog circular buffer size */\n    long long repl_full_sync_buffer_limit; /* Accumulated repl data limit during rdb channel replication */\n    replDataBuf repl_full_sync_buffer;  /* Accumulated replication data for rdb channel replication */\n    time_t repl_backlog_time_limit; /* Time without slaves after which the backlog\n                                       gets released. */\n    time_t repl_no_slaves_since;    /* We have no slaves since that time.\n                                       Only valid if server.slaves len is 0. */\n    int repl_min_slaves_to_write;   /* Min number of slaves to write. */\n    int repl_min_slaves_max_lag;    /* Max lag of <count> slaves to write. */\n    int repl_good_slaves_count;     /* Number of slaves with lag <= max_lag. */\n    int repl_diskless_sync;         /* Master sends RDB to slave sockets directly. */\n    int repl_diskless_load;         /* Slave parses RDB directly from the socket.\n                                     * see REPL_DISKLESS_LOAD_* enum */\n    int repl_diskless_sync_delay;   /* Delay to start a diskless repl BGSAVE. */\n    int repl_diskless_sync_max_replicas;/* Max replicas for diskless repl BGSAVE\n                                         * delay (start sooner if they all connect). 
*/\n    int repl_rdb_channel;           /* Config used to determine if the replica should\n                                     * use rdb channel replication for full syncs. */\n    int repl_debug_pause;           /* Debug config to force the main process to pause. */\n    size_t repl_buffer_mem;         /* The memory of replication buffer. */\n    list *repl_buffer_blocks;       /* Replication buffers blocks list\n                                     * (serving replica clients and repl backlog) */\n    time_t repl_stream_lastio;      /* Unix time the replication stream was last sent. */\n    /* Replication (slave) */\n    char *masteruser;               /* AUTH with this user and masterauth with master */\n    sds masterauth;                 /* AUTH with this password with master */\n    char *masterhost;               /* Hostname of master */\n    int masterport;                 /* Port of master */\n    int repl_timeout;               /* Timeout after N seconds of master idle */\n    client *master;     /* Client that is master for this slave */\n    client *cached_master; /* Cached master to be reused for PSYNC. */\n    int repl_syncio_timeout; /* Timeout for synchronous I/O calls */\n    int repl_state;          /* Replication status if the instance is a slave */\n    int repl_rdb_ch_state; /* State of the replica's rdb channel during rdb channel replication */\n    int repl_main_ch_state; /* State of the replica's main channel during rdb channel replication */\n    uint64_t repl_num_master_disconnection; /* Number of times the master connection was disconnected */\n    uint64_t repl_main_ch_client_id; /* Main channel client id received in +RDBCHANNELSYNC reply. */\n    off_t repl_transfer_size; /* Size of RDB to read from master during sync. */\n    off_t repl_transfer_read; /* Amount of RDB read from master during sync. */\n    off_t repl_transfer_last_fsync_off; /* Offset when we fsync-ed last time. 
*/\n    connection *repl_transfer_s;     /* Slave -> Master SYNC connection */\n    connection *repl_rdb_transfer_s; /* Slave -> Master FULL SYNC connection (RDB download) */\n    int repl_transfer_fd;    /* Slave -> Master SYNC temp file descriptor */\n    char *repl_transfer_tmpfile; /* Slave -> Master SYNC temp file name */\n    time_t repl_transfer_lastio; /* Unix time of the latest read, for timeout */\n    int repl_serve_stale_data; /* Serve stale data when link is down? */\n    int repl_slave_ro;          /* Slave is read only? */\n    int repl_slave_ignore_maxmemory;    /* If true slaves do not evict. */\n    time_t repl_down_since; /* Unix time at which link with master went down */\n    time_t repl_up_since;   /* Unix time that master link is fully up and healthy */\n    int repl_disable_tcp_nodelay;   /* Disable TCP_NODELAY after SYNC? */\n    int slave_priority;             /* Reported in INFO and used by Sentinel. */\n    int replica_announced;          /* If true, replica is announced by Sentinel */\n    int slave_announce_port;        /* Give the master this listening port. */\n    char *slave_announce_ip;        /* Give the master this ip address. */\n    int propagation_error_behavior; /* Configures the behavior of the replica\n                                     * when it receives an error on the replication stream */\n    int repl_ignore_disk_write_error;   /* Configures whether replicas panic when unable to\n                                         * persist writes to AOF. */\n    /* The following two fields are where we store the master PSYNC replid/offset\n     * while the PSYNC is in progress. At the end we'll copy the fields into\n     * the server->master client structure. */\n    char master_replid[CONFIG_RUN_ID_SIZE+1];  /* Master PSYNC runid. */\n    long long master_initial_offset;           /* Master PSYNC offset. */\n    int repl_slave_lazy_flush;          /* Lazy FLUSHALL before loading DB? */\n    /* Synchronous replication. 
*/\n    list *clients_waiting_acks;         /* Clients waiting in WAIT or WAITAOF. */\n    int get_ack_from_slaves;            /* If true we send REPLCONF GETACK. */\n    long long repl_current_sync_attempts;    /* Number of times, in the current configuration, the replica attempted to sync since the last success. */\n    long long repl_total_sync_attempts;      /* Number of times, in the current configuration, the replica attempted to sync to a master. */\n    time_t repl_disconnect_start_time;       /* Unix time at which the master disconnection started. */\n    time_t repl_total_disconnect_time;       /* The total cumulative time we've been disconnected as a replica, visible when the link is up too. */\n    /* Limits */\n    unsigned int maxclients;            /* Max number of simultaneous clients */\n    unsigned long long maxmemory;   /* Max number of memory bytes to use */\n    ssize_t maxmemory_clients;       /* Memory limit for total client buffers */\n    int maxmemory_policy;           /* Policy for key eviction */\n    int maxmemory_samples;          /* Precision of random sampling */\n    int maxmemory_eviction_tenacity;/* Aggressiveness of eviction processing */\n    int lfu_log_factor;             /* LFU logarithmic counter factor. */\n    int lfu_decay_time;             /* LFU counter decay factor. */\n    long long proto_max_bulk_len;   /* Protocol bulk length maximum size. 
*/\n    int oom_score_adj_values[CONFIG_OOM_COUNT];   /* Linux oom_score_adj configuration */\n    int oom_score_adj;                            /* If true, oom_score_adj is managed */\n    int disable_thp;                              /* If true, disable THP by syscall */\n    /* Blocked clients */\n    unsigned int blocked_clients;   /* # of clients executing a blocking cmd. */\n    unsigned int blocked_clients_by_type[BLOCKED_NUM];\n    list *unblocked_clients; /* list of clients to unblock before next loop */\n    list *ready_keys;        /* List of readyList structures for BLPOP & co */\n    /* Client side caching. */\n    unsigned int tracking_clients;  /* # of clients with tracking enabled. */\n    size_t tracking_table_max_keys; /* Max number of keys in tracking table. */\n    list *tracking_pending_keys; /* tracking invalidation keys pending to flush */\n    list *pending_push_messages; /* pending publish or other push messages to flush */\n    /* Sort parameters - qsort_r() is only available under BSD so we\n     * have to make this state global, in order to pass it to sortCompare() */\n    int sort_desc;\n    int sort_alpha;\n    int sort_bypattern;\n    int sort_store;\n    /* Zip structure config, see redis.conf for more information */\n    size_t hash_max_listpack_entries;\n    size_t hash_max_listpack_value;\n    size_t set_max_intset_entries;\n    size_t set_max_listpack_entries;\n    size_t set_max_listpack_value;\n    size_t zset_max_listpack_entries;\n    size_t zset_max_listpack_value;\n    size_t hll_sparse_max_bytes;\n    size_t stream_node_max_bytes;\n    long long stream_node_max_entries;\n    /* Stream IDMP parameters */\n    long long stream_idmp_duration;     /* Default IDMP duration in seconds. */\n    long long stream_idmp_maxsize;      /* Default IDMP max entries. 
*/\n    /* List parameters */\n    int list_max_listpack_size;\n    int list_compress_depth;\n    /* time cache */\n    redisAtomic time_t unixtime; /* Unix time sampled every cron cycle. */\n    time_t timezone;            /* Cached timezone. As set by tzset(). */\n    redisAtomic int daylight_active; /* Currently in daylight saving time. */\n    mstime_t mstime;            /* 'unixtime' in milliseconds. */\n    ustime_t ustime;            /* 'unixtime' in microseconds. */\n    int accum_call_count_since_ustime; /* Command count since last ustime update */\n    monotime monotonic_us_when_ustime; /* Monotonic time of the last ustime update */\n    mstime_t cmd_time_snapshot; /* Time snapshot of the root execution nesting. */\n    size_t blocking_op_nesting; /* Nesting level of blocking operation, used to reset blocked_last_cron. */\n    long long blocked_last_cron; /* Indicates the mstime of the last time we did cron jobs from a blocking operation */\n    /* Pubsub */\n    kvstore *pubsub_channels;  /* Map channels to list of subscribed clients */\n    dict *pubsub_patterns;  /* A dict of pubsub_patterns */\n    int notify_keyspace_events; /* Events to propagate via Pub/Sub. This is an\n                                   xor of NOTIFY_... flags. */\n    kvstore *pubsubshard_channels;  /* Map shard channels in every slot to list of subscribed clients */\n    unsigned int pubsub_clients; /* # of clients in Pub/Sub mode */\n    unsigned int watching_clients; /* # of clients watching keys */\n    /* Cluster */\n    int cluster_enabled;      /* Is cluster enabled? */\n    int cluster_port;         /* Set the cluster port for a node. */\n    mstime_t cluster_node_timeout; /* Cluster node timeout. */\n    mstime_t cluster_ping_interval;    /* A debug configuration for setting how often cluster nodes send ping messages. */\n    char *cluster_configfile; /* Cluster auto-generated config file name. 
*/\n    long long asm_handoff_max_lag_bytes; /* Maximum lag in bytes before pausing writes for ASM handoff. */\n    long long asm_write_pause_timeout; /* Timeout in milliseconds to pause writes during ASM handoff. */\n    long long asm_sync_buffer_drain_timeout; /* Timeout in milliseconds for sync buffer to drain during ASM. */\n    int asm_max_archived_tasks; /* Maximum number of archived ASM tasks to keep in memory. */\n    struct clusterState *cluster;  /* State of the cluster */\n    int cluster_migration_barrier; /* Cluster replicas migration barrier. */\n    int cluster_allow_replica_migration; /* Automatic replica migrations to orphaned masters and from empty masters */\n    int cluster_slave_validity_factor; /* Slave max data age for failover. */\n    int cluster_require_full_coverage; /* If true, put the cluster down if\n                                          there is at least an uncovered slot.*/\n    int cluster_slave_no_failover;  /* Prevent slave from starting a failover\n                                       if the master is in failure state. */\n    char *cluster_announce_ip;  /* IP address to announce on cluster bus. */\n    char *cluster_announce_hostname;  /* hostname to announce on cluster bus. */\n    char *cluster_announce_human_nodename;  /* Human readable node name assigned to a node. */\n    int cluster_preferred_endpoint_type; /* Use the announced hostname when available. */\n    int cluster_announce_port;     /* base port to announce on cluster bus. */\n    int cluster_announce_tls_port; /* TLS port to announce on cluster bus. */\n    int cluster_announce_bus_port; /* bus port to announce on cluster bus. */\n    int cluster_module_flags;      /* Set of flags that Redis modules are able\n                                      to set in order to suppress certain\n                                      native Redis Cluster features. Check the\n                                      REDISMODULE_CLUSTER_FLAG_*. 
*/\n    int cluster_module_trim_disablers; /* Number of module requests to disable trimming */\n    int cluster_allow_reads_when_down; /* Are reads allowed when the cluster\n                                        is down? */\n    int cluster_config_file_lock_fd;   /* cluster config fd, will be flocked. */\n    unsigned long long cluster_link_msg_queue_limit_bytes;  /* Memory usage limit on individual link msg queue */\n    int cluster_drop_packet_filter; /* Debug config that allows tactically\n                                   * dropping packets of a specific type */\n    int cluster_slot_stats_enabled; /* Cluster slot usage statistics tracking enabled. */\n    /* Scripting */\n    unsigned int lua_arena;         /* eval lua arena used in jemalloc. */\n    mstime_t busy_reply_threshold;  /* Script / module timeout in milliseconds */\n    int pre_command_oom_state;         /* OOM before command (script?) was started */\n    int script_disable_deny_script;    /* Allow running commands marked \"noscript\" inside a script. */\n    int lua_enable_deprecated_api;     /* Config to enable deprecated api */\n    int key_memory_histograms;         /* Config to enable key memory histograms */\n    /* Lazy free */\n    int lazyfree_lazy_eviction;\n    int lazyfree_lazy_expire;\n    int lazyfree_lazy_server_del;\n    int lazyfree_lazy_user_del;\n    int lazyfree_lazy_user_flush;\n    /* Latency monitor */\n    long long latency_monitor_threshold;\n    dict *latency_events;\n    /* ACLs */\n    char *acl_filename;           /* ACL Users file. NULL if not configured. */\n    unsigned long acllog_max_len; /* Maximum length of the ACL LOG list. */\n    sds requirepass;              /* Remember the cleartext password set with\n                                     the old \"requirepass\" directive for\n                                     backward compatibility with Redis <= 5. 
*/\n    int acl_pubsub_default;      /* Default ACL pub/sub channels flag */\n    aclInfo acl_info; /* ACL info */\n    /* Assert & bug reporting */\n    int watchdog_period;  /* Software watchdog period in ms. 0 = off */\n    /* System hardware info */\n    size_t system_memory_size;  /* Total memory in system as reported by OS */\n    /* TLS Configuration */\n    int tls_cluster;\n    int tls_replication;\n    int tls_auth_clients;\n    redisTLSContextConfig tls_ctx_config;\n    /* cpu affinity */\n    char *server_cpulist; /* cpu affinity list of redis server main/io thread. */\n    char *bio_cpulist; /* cpu affinity list of bio thread. */\n    char *aof_rewrite_cpulist; /* cpu affinity list of aof rewrite process. */\n    char *bgsave_cpulist; /* cpu affinity list of bgsave process. */\n    /* Sentinel config */\n    struct sentinelConfig *sentinel_config; /* sentinel config to load at startup time. */\n    /* Coordinate failover info */\n    mstime_t failover_end_time; /* Deadline for failover command. */\n    int force_failover; /* If true then failover will be forced at the\n                         * deadline, otherwise failover is aborted. */\n    char *target_replica_host; /* Failover target host. If null during a\n                                * failover then any replica can be used. */\n    int target_replica_port; /* Failover target port */\n    int failover_state; /* Failover state */\n    int cluster_allow_pubsubshard_when_down; /* Is pubsubshard allowed when the cluster\n                                                is down, doesn't affect pubsub global. 
*/\n    long reply_buffer_peak_reset_time; /* The amount of time (in milliseconds) to wait between reply buffer peak resets */\n    int reply_buffer_resizing_enabled; /* Is reply buffer resizing enabled (1 by default) */\n    int reply_copy_avoidance_enabled; /* Is reply copy avoidance enabled (1 by default) */\n    /* Local environment */\n    char *locale_collate;\n    unsigned int dbg_assert_flags; /* Bitmask of debug assertions to run after each command */\n};\n\n/* Debug assertion flags for server.dbg_assert_flags */\n#define DBG_ASSERT_KEYSIZES    (1 << 0) /* Assert keysizes histogram */\n#define DBG_ASSERT_ALLOC_SLOT  (1 << 1) /* Assert per-slot alloc_size */\n\n/* we use 6 so that the whole getKeysResult fits in a cache line */\n#define MAX_KEYS_BUFFER 6\n\ntypedef struct {\n    int pos; /* The position of the key within the client array */\n    int flags; /* The flags associated with the key access, see\n                  CMD_KEY_* for more information */\n} keyReference;\n\n/* A result structure for the various getkeys function calls. It lists the\n * keys as indices to the provided argv. 
This functionality is also re-used\n * for returning channel information.\n */\ntypedef struct {\n    int numkeys;                                 /* Number of key indices returned */\n    int size;                                    /* Available array size */\n    keyReference keysbuf[MAX_KEYS_BUFFER];       /* Pre-allocated buffer, to save heap allocations */\n    keyReference *keys;                          /* Key indices array, points to keysbuf or heap */\n} getKeysResult;\n#define GETKEYS_RESULT_INIT { 0, MAX_KEYS_BUFFER, {{0}}, NULL }\n\n/*-----------------------------------------------------------------------------\n * Hotkey tracking\n *----------------------------------------------------------------------------*/\n\n/* Hotkeys tracking metric flags */\n#define HOTKEYS_TRACK_CPU (1ULL << 0)\n#define HOTKEYS_TRACK_NET (1ULL << 1)\n#define HOTKEYS_METRICS_COUNT 2 /* NOTE: update if adding new metric */\n\n/* A structure for tracking hotkey statistics by given metrics. */\nstruct hotkeyStats {\n    struct chkTopK *cpu;\n    struct chkTopK *net;\n    mstime_t start; /* Initial time point for wall time tracking */\n\n    /* Only keys from selected slots will be tracked. If slots is NULL,\n     * all keys are tracked. Stored as a sorted slotRangeArray. */\n    struct slotRangeArray *slots;\n\n    /* Statistics counters. 
*/\n    uint64_t time_sampled_commands_selected_slots;  /* microseconds */\n    uint64_t time_all_commands_selected_slots;       /* microseconds */\n    uint64_t time_all_commands_all_slots;            /* microseconds */\n    uint64_t net_bytes_sampled_commands_selected_slots;\n    uint64_t net_bytes_all_commands_selected_slots;\n    uint64_t net_bytes_all_commands_all_slots;\n\n    /* rusage stats for CPU time tracking */\n    struct timeval ru_utime;\n    struct timeval ru_stime;\n\n    int tracking_count; /* Count of top hotkeys we want to track */\n    int sample_ratio; /* Track a key with probability 1 / sample_ratio */\n    int active; /* True if tracking is currently active */\n    mstime_t duration; /* Tracking duration */\n    uint64_t tracked_metrics;  /* Bit flags: HOTKEYS_TRACK_CPU, HOTKEYS_TRACK_NET, etc. */\n    mstime_t cpu_time;  /* Total CPU time spent updating the topk struct in milliseconds */\n\n    /* Current command related fields */\n    getKeysResult keys_result; /* Key results for current command */\n    client *current_client;\n    int is_sampled; /* Indicates whether or not keys from cmd are sampled via sample_ratio */\n    int is_in_selected_slots; /* Indicates whether or not keys from cmd are in selected_slots */\n};\n\ntypedef struct hotkeyMetrics {\n    uint64_t cpu_time_usec;\n    uint64_t net_bytes;\n} hotkeyMetrics;\n\n/* pendingCommand flags */\nenum {\n    PENDING_CMD_FLAG_INCOMPLETE = 1 << 0,     /* Command parsing is incomplete, still waiting for more data */\n    PENDING_CMD_FLAG_PREPROCESSED = 1 << 1,   /* This command has passed pre-processing */\n    PENDING_CMD_KEYS_RESULT_VALID = 1 << 2,   /* Command's keys_result is valid and cached */\n};\n\n/* Parser state and parse result of a command from a client's input buffer. */\nstruct pendingCommand {\n    int argc;                 /* Num of arguments of current command. 
*/\n    int argv_len;             /* Size of argv array (may be more than argc) */\n    robj **argv;              /* Arguments of current command. */\n    size_t argv_len_sum;      /* Sum of lengths of objects in argv list. */\n    unsigned long long input_bytes;\n    struct redisCommand *cmd;\n    getKeysResult keys_result;\n    long long reploff;        /* c->reploff should be set to this value when the command is processed */\n    int flags;\n    int slot;         /* The slot the command is executing against. Set to INVALID_CLUSTER_SLOT\n                       * if no slot is being used or if the command has a cross slot error */\n    uint8_t read_error;\n\n    struct pendingCommand *next;\n    struct pendingCommand *prev;\n};\n\n/* Key specs definitions.\n *\n * Brief: This is a scheme that tries to describe the location\n * of key arguments better than the old [first,last,step] scheme\n * which is limited and doesn't fit many commands.\n *\n * There are two steps:\n * 1. begin_search (BS): in which index should we start searching for keys?\n * 2. find_keys (FK): relative to the output of BS, how can we tell which args are keys?\n *\n * There are two types of BS:\n * 1. index: key args start at a constant index\n * 2. keyword: key args start just after a specific keyword\n *\n * There are two kinds of FK:\n * 1. range: keys end at a specific index (or relative to the last argument)\n * 2. keynum: there's an arg that contains the number of key args somewhere before the keys themselves\n */\n\n/* WARNING! Must be synced with generate-command-code.py and RedisModuleKeySpecBeginSearchType */\ntypedef enum {\n    KSPEC_BS_INVALID = 0, /* Must be 0 */\n    KSPEC_BS_UNKNOWN,\n    KSPEC_BS_INDEX,\n    KSPEC_BS_KEYWORD\n} kspec_bs_type;\n\n/* WARNING! 
Must be synced with generate-command-code.py and RedisModuleKeySpecFindKeysType */\ntypedef enum {\n    KSPEC_FK_INVALID = 0, /* Must be 0 */\n    KSPEC_FK_UNKNOWN,\n    KSPEC_FK_RANGE,\n    KSPEC_FK_KEYNUM\n} kspec_fk_type;\n\n/* WARNING! This struct must match RedisModuleCommandKeySpec */\ntypedef struct {\n    /* Declarative data */\n    const char *notes;\n    uint64_t flags;\n    kspec_bs_type begin_search_type;\n    union {\n        struct {\n            /* The index from which we start the search for keys */\n            int pos;\n        } index;\n        struct {\n            /* The keyword that indicates the beginning of key args */\n            const char *keyword;\n            /* An index in argv from which to start searching.\n             * Can be negative, which means start search from the end, in reverse\n             * (Example: -2 means to start in reverse from the penultimate arg) */\n            int startfrom;\n        } keyword;\n    } bs;\n    kspec_fk_type find_keys_type;\n    union {\n        /* NOTE: Indices in this struct are relative to the result of the begin_search step!\n         * These are: range.lastkey, keynum.keynumidx, keynum.firstkey */\n        struct {\n            /* Index of the last key.\n             * Can be negative, in which case it's not relative: -1 indicates the\n             * last argument, -2 the one before the last, and so on. */\n            int lastkey;\n            /* How many args should we skip after finding a key, in order to find the next one. */\n            int keystep;\n            /* If lastkey is -1, we use limit to stop the search by a factor. 0 and 1 mean no limit.\n             * 2 means 1/2 of the remaining args, 3 means 1/3, and so on. 
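             * (Illustrative sketch, not an entry from the actual command table:\n             * with limit = 2, only half of the args remaining after begin_search\n             * are scanned for keys, e.g. 3 out of 6. A spec equivalent to the\n             * legacy (first=1, last=-1, step=1) scheme would be begin_search\n             * index {.pos = 1} plus find_keys range {.lastkey = -1,\n             * .keystep = 1, .limit = 0}.)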
*/\n            int limit;\n        } range;\n        struct {\n            /* Index of the argument containing the number of keys to come */\n            int keynumidx;\n            /* Index of the first key (Usually it's just after keynumidx, in\n             * which case it should be set to keynumidx+1). */\n            int firstkey;\n            /* How many args should we skip after finding a key, in order to find the next one. */\n            int keystep;\n        } keynum;\n    } fk;\n} keySpec;\n\n#ifdef LOG_REQ_RES\n\n/* Must be synced with generate-command-code.py */\ntypedef enum {\n    JSON_TYPE_STRING,\n    JSON_TYPE_INTEGER,\n    JSON_TYPE_BOOLEAN,\n    JSON_TYPE_OBJECT,\n    JSON_TYPE_ARRAY,\n} jsonType;\n\ntypedef struct jsonObjectElement {\n    jsonType type;\n    const char *key;\n    union {\n        const char *string;\n        long long integer;\n        int boolean;\n        struct jsonObject *object;\n        struct {\n            struct jsonObject **objects;\n            int length;\n        } array;\n    } value;\n} jsonObjectElement;\n\ntypedef struct jsonObject {\n    struct jsonObjectElement *elements;\n    int length;\n} jsonObject;\n\n#endif\n\n/* WARNING! 
This struct must match RedisModuleCommandHistoryEntry */\ntypedef struct {\n    const char *since;\n    const char *changes;\n} commandHistory;\n\n/* Must be synced with COMMAND_GROUP_STR and generate-command-code.py */\ntypedef enum {\n    COMMAND_GROUP_GENERIC,\n    COMMAND_GROUP_STRING,\n    COMMAND_GROUP_LIST,\n    COMMAND_GROUP_SET,\n    COMMAND_GROUP_SORTED_SET,\n    COMMAND_GROUP_HASH,\n    COMMAND_GROUP_PUBSUB,\n    COMMAND_GROUP_TRANSACTIONS,\n    COMMAND_GROUP_CONNECTION,\n    COMMAND_GROUP_SERVER,\n    COMMAND_GROUP_SCRIPTING,\n    COMMAND_GROUP_HYPERLOGLOG,\n    COMMAND_GROUP_CLUSTER,\n    COMMAND_GROUP_SENTINEL,\n    COMMAND_GROUP_GEO,\n    COMMAND_GROUP_STREAM,\n    COMMAND_GROUP_BITMAP,\n    COMMAND_GROUP_MODULE,\n    COMMAND_GROUP_RATE_LIMIT,\n} redisCommandGroup;\n\ntypedef void redisCommandProc(client *c);\ntypedef int redisGetKeysProc(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\n\n/* Redis command structure.\n *\n * Note that the command table is in commands.c and it is auto-generated.\n *\n * This is the meaning of the flags:\n *\n * CMD_WRITE:       Write command (may modify the key space).\n *\n * CMD_READONLY:    Commands just reading from keys without changing the content.\n *                  Note that commands that don't read from the keyspace such as\n *                  TIME, SELECT, INFO, administrative commands, and connection\n *                  or transaction related commands (multi, exec, discard, ...)\n *                  are not flagged as read-only commands, since they affect the\n *                  server or the connection in other ways.\n *\n * CMD_DENYOOM:     May increase memory usage once called. 
Don't allow if out\n *                  of memory.\n *\n * CMD_ADMIN:       Administrative command, like SAVE or SHUTDOWN.\n *\n * CMD_PUBSUB:      Pub/Sub related command.\n *\n * CMD_NOSCRIPT:    Command not allowed in scripts.\n *\n * CMD_BLOCKING:    The command has the potential to block the client.\n *\n * CMD_LOADING:     Allow the command while loading the database.\n *\n * CMD_NO_ASYNC_LOADING: Deny during async loading (when a replica uses diskless\n *                       sync swapdb, and allows access to the old dataset)\n *\n * CMD_STALE:       Allow the command while a slave has stale data but is not\n *                  allowed to serve this data. Normally no command is accepted\n *                  in this condition but just a few.\n *\n * CMD_SKIP_MONITOR:  Do not automatically propagate the command on MONITOR.\n *\n * CMD_SKIP_SLOWLOG:  Do not automatically propagate the command to the slowlog.\n *\n * CMD_ASKING:      Perform an implicit ASKING for this command, so the\n *                  command will be accepted in cluster mode if the slot is marked\n *                  as 'importing'.\n *\n * CMD_FAST:        Fast command: O(1) or O(log(N)) command that should never\n *                  delay its execution as long as the kernel scheduler is giving\n *                  us time. Note that commands that may trigger a DEL as a side\n *                  effect (like SET) are not fast commands.\n *\n * CMD_NO_AUTH:     Command doesn't require authentication\n *\n * CMD_MAY_REPLICATE:   Command may produce replication traffic, but should be\n *                      allowed under circumstances where write commands are disallowed.\n *                      Examples include PUBLISH, which replicates pubsub messages, and\n *                      EVAL, which may execute write commands, which are replicated,\n *                      or may just execute read commands. 
A command cannot be marked\n *                      both CMD_WRITE and CMD_MAY_REPLICATE\n *\n * CMD_SENTINEL:    This command is present in sentinel mode.\n *\n * CMD_ONLY_SENTINEL: This command is present only when in sentinel mode,\n *                    and should be removed from redis.\n *\n * CMD_NO_MANDATORY_KEYS: The key arguments for this command are optional.\n *\n * CMD_NO_MULTI: The command is not allowed inside a transaction\n *\n * CMD_ALLOW_BUSY: The command can run while another command is running for\n *                 a long time (timed-out script, module command that yields)\n *\n * CMD_TOUCHES_ARBITRARY_KEYS: The command may touch (and cause lazy-expire)\n *                             arbitrary keys (i.e. not provided in argv)\n *\n * CMD_INTERNAL: The command may perform operations without performing\n *               validations such as ACL.\n *\n * The following additional flags are only used in order to put commands\n * in a specific ACL category. Commands can have multiple ACL categories.\n * See redis.conf for the exact meaning of each.\n *\n * @keyspace, @read, @write, @set, @sortedset, @list, @hash, @string, @bitmap,\n * @hyperloglog, @stream, @admin, @fast, @slow, @pubsub, @blocking, @dangerous,\n * @connection, @transaction, @scripting, @geo.\n *\n * Note that:\n *\n * 1) The read-only flag implies the @read ACL category.\n * 2) The write flag implies the @write ACL category.\n * 3) The fast flag implies the @fast ACL category.\n * 4) The admin flag implies the @admin and @dangerous ACL categories.\n * 5) The pub-sub flag implies the @pubsub ACL category.\n * 6) The lack of fast flag implies the @slow ACL category.\n * 7) The non-obvious \"keyspace\" category includes the commands\n *    that interact with keys without having anything to do with\n *    specific data structures, such as: DEL, RENAME, MOVE, SELECT,\n *    TYPE, EXPIRE*, PEXPIRE*, TTL, PTTL, ...\n */\nstruct redisCommand {\n    /* Declarative data */\n    const char 
*declared_name; /* A string representing the command declared_name.\n                                * It is a const char * for native commands and SDS for module commands. */\n    const char *summary; /* Summary of the command (optional). */\n    const char *complexity; /* Complexity description (optional). */\n    const char *since; /* Debut version of the command (optional). */\n    int doc_flags; /* Flags for documentation (see CMD_DOC_*). */\n    const char *replaced_by; /* In case the command is deprecated, this is the successor command. */\n    const char *deprecated_since; /* In case the command is deprecated, when did it happen? */\n    redisCommandGroup group; /* Command group */\n    commandHistory *history; /* History of the command */\n    int num_history;\n    const char **tips; /* An array of strings that are meant to be tips for clients/proxies regarding this command */\n    int num_tips;\n    redisCommandProc *proc; /* Command implementation */\n    int arity; /* Number of arguments, it is possible to use -N to say >= N */\n    uint64_t flags; /* Command flags, see CMD_*. */\n    uint64_t acl_categories; /* ACL categories, see ACL_CATEGORY_*. */\n    keySpec *key_specs;\n    int key_specs_num;\n    /* Use a function to determine key arguments in a command line.\n     * Used for Redis Cluster redirect (may be NULL) */\n    redisGetKeysProc *getkeys_proc;\n    int num_args; /* Length of args array. */\n    /* Array of subcommands (may be NULL) */\n    struct redisCommand *subcommands;\n    /* Array of arguments (may be NULL) */\n    struct redisCommandArg *args;\n#ifdef LOG_REQ_RES\n    /* Reply schema */\n    struct jsonObject *reply_schema;\n#endif\n\n    /* Runtime populated data */\n    long long microseconds, calls, rejected_calls, failed_calls;\n    long long slowlog_count, slowlog_time_us_sum, slowlog_time_us_max;\n    int id;     /* Command ID. 
This is a progressive ID starting from 0 that\n                   is assigned at runtime, and is used in order to check\n                   ACLs. A connection is able to execute a given command if\n                   the user associated with the connection has this command\n                   bit set in the bitmap of allowed commands. */\n    sds fullname; /* An SDS string representing the command fullname. */\n    struct hdr_histogram* latency_histogram; /* Points to the command latency histogram (time unit: nanoseconds). */\n    keySpec legacy_range_key_spec; /* The legacy (first,last,step) key spec is\n                                     * still maintained (if applicable) so that\n                                     * we can still support the reply format of\n                                     * COMMAND INFO and COMMAND GETKEYS */\n    dict *subcommands_dict; /* A dictionary that holds the subcommands, the key is the subcommand sds name\n                             * (not the fullname), and the value is the redisCommand structure pointer. */\n    struct redisCommand *parent;\n    struct RedisModuleCommand *module_cmd; /* A pointer to the module command data (NULL if native command) */\n};\n\nstruct redisError {\n    long long count;\n};\n\nstruct redisFunctionSym {\n    char *name;\n    unsigned long pointer;\n};\n\ntypedef struct _redisSortObject {\n    robj *obj;\n    union {\n        double score;\n        robj *cmpobj;\n    } u;\n} redisSortObject;\n\ntypedef struct _redisSortOperation {\n    int type;\n    robj *pattern;\n} redisSortOperation;\n\n/* Structure to hold list iteration abstraction. */\ntypedef struct {\n    robj *subject;\n    unsigned char encoding;\n    unsigned char direction; /* Iteration direction */\n\n    unsigned char *lpi; /* listpack iterator */\n    quicklistIter iter; /* quicklist iterator */\n} listTypeIterator;\n\n/* Structure for an entry while iterating over a list. 
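 *\n * A typical iteration loop, sketched under the assumption that the usual\n * t_list.c helpers (listTypeInitIterator(), listTypeNext() and\n * listTypeReleaseIterator()) are available with their customary signatures:\n *\n *   listTypeIterator *li = listTypeInitIterator(subject, 0, LIST_TAIL);\n *   listTypeEntry entry;\n *   while (listTypeNext(li, &entry)) {\n *       // inspect entry.lpe or entry.entry depending on the encoding\n *   }\n *   listTypeReleaseIterator(li);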
*/\ntypedef struct {\n    listTypeIterator *li;\n    unsigned char *lpe; /* Entry in listpack */\n    quicklistEntry entry; /* Entry in quicklist */\n} listTypeEntry;\n\n/* Structure to hold set iteration abstraction. */\ntypedef struct {\n    robj *subject;\n    int encoding;\n    int ii; /* intset iterator */\n    dictIterator di;\n    unsigned char *lpi; /* listpack iterator */\n} setTypeIterator;\n\n/* Structure to hold hash iteration abstraction. Note that iteration over\n * hashes involves both fields and values. Because it is possible that\n * not both are required, store pointers in the iterator to avoid\n * unnecessary memory allocation for fields/values. */\ntypedef struct {\n    robj *subject;\n    int encoding;\n\n    unsigned char *fptr, *vptr, *tptr;\n    uint64_t expire_time; /* Only used with OBJ_ENCODING_LISTPACK_EX */\n\n    dictIterator di;\n    dictEntry *de;\n} hashTypeIterator;\n\n#include \"stream.h\"  /* Stream data type header file. */\n\n#define OBJ_HASH_KEY 1\n#define OBJ_HASH_VALUE 2\n\n/* Hash-field data type (of t_hash.c) - now using entry directly\n * Note: entry* is used directly instead of a typedef for clarity */\n\n/*-----------------------------------------------------------------------------\n * Extern declarations\n *----------------------------------------------------------------------------*/\n\nextern struct redisServer server;\nextern struct sharedObjectsStruct shared;\nextern dictType objectKeyPointerValueDictType;\nextern dictType objectKeyNoValueDictType;\nextern dictType objectKeyHeapPointerValueDictType;\nextern dictType setDictType;\nextern dictType BenchmarkDictType;\nextern dictType zsetDictType;\nextern dictType dbDictType;\nextern double R_Zero, R_PosInf, R_NegInf, R_Nan;\nextern dictType hashDictType;\nextern dictType entryHashDictType;\nextern dictType entryHashDictTypeWithHFE;\nextern dictType stringSetDictType;\nextern dictType externalStringType;\nextern dictType sdsHashDictType;\nextern dictType 
clientDictType;\nextern dictType objToDictDictType;\nextern dictType dbExpiresDictType;\nextern dictType modulesDictType;\nextern dictType sdsReplyDictType;\nextern dictType keylistDictType;\nextern kvstoreType kvstoreBaseType;\nextern kvstoreType kvstoreExType;\nextern dict *modules;\n\nextern EbucketsType subexpiresBucketsType;  /* global expires */\nextern EbucketsType hashFieldExpireBucketsType; /* local per hash */\n\n/*-----------------------------------------------------------------------------\n * Functions prototypes\n *----------------------------------------------------------------------------*/\n\n/* Command metadata */\nvoid populateCommandLegacyRangeSpec(struct redisCommand *c);\n\n/* Modules */\nvoid moduleInitModulesSystem(void);\nvoid moduleInitModulesSystemLast(void);\nvoid modulesCron(void);\nint moduleOnLoad(int (*onload)(void *, void **, int), const char *path, void *handle, void **module_argv, int module_argc, int is_loadex);\nint moduleLoad(const char *path, void **argv, int argc, int is_loadex);\nint moduleUnload(sds name, const char **errmsg, int forced_unload);\nvoid moduleLoadInternalModules(void);\nvoid moduleLoadFromQueue(void);\nint moduleGetCommandKeysViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint moduleGetCommandChannelsViaAPI(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nmoduleType *moduleTypeLookupModuleByID(uint64_t id);\nmoduleType *moduleTypeLookupModuleByName(const char *name);\nmoduleType *moduleTypeLookupModuleByNameIgnoreCase(const char *name);\nvoid moduleTypeNameByID(char *name, uint64_t moduleid);\nconst char *moduleTypeModuleName(moduleType *mt);\nconst char *moduleNameFromCommand(struct redisCommand *cmd);\nvoid moduleFreeContext(struct RedisModuleCtx *ctx);\nvoid moduleCallCommandUnblockedHandler(client *c);\nint isModuleClientUnblocked(client *c);\nvoid unblockClientFromModule(client *c);\nvoid moduleHandleBlockedClients(void);\nvoid 
moduleBlockedClientTimedOut(client *c);\nvoid modulePipeReadable(aeEventLoop *el, int fd, void *privdata, int mask);\nsize_t moduleCount(void);\nvoid moduleAcquireGIL(void);\nint moduleTryAcquireGIL(void);\nvoid moduleReleaseGIL(void);\nvoid moduleNotifyKeyspaceEvent(int type, const char *event, robj *key, int dbid, robj **subkeys, int count);\nvoid firePostExecutionUnitJobs(void);\nvoid moduleCallCommandFilters(client *c);\nvoid modulePostExecutionUnitOperations(void);\nvoid ModuleForkDoneHandler(int exitcode, int bysignal);\nint TerminateModuleForkChild(int child_pid, int wait);\nssize_t rdbSaveModulesAux(rio *rdb, int when);\nint moduleAllDatatypesHandleErrors(void);\nint moduleAllModulesHandleReplAsyncLoad(void);\nsds modulesCollectInfo(sds info, dict *sections_dict, int for_crash_report, int sections);\nvoid moduleFireServerEvent(uint64_t eid, int subid, void *data);\nvoid processModuleLoadingProgressEvent(int is_aof);\nint moduleTryServeClientBlockedOnKey(client *c, robj *key);\nvoid moduleUnblockClient(client *c);\nint moduleBlockedClientMayTimeout(client *c);\nint moduleClientIsBlockedOnKeys(client *c);\nvoid moduleNotifyUserChanged(client *c);\nvoid moduleNotifyKeyUnlink(robj *key, kvobj *kv, int dbid, int flags);\nsize_t moduleGetFreeEffort(robj *key, robj *val, int dbid);\nsize_t moduleGetMemUsage(robj *key, robj *val, size_t sample_size, int dbid);\nrobj *moduleTypeDupOrReply(client *c, robj *fromkey, robj *tokey, int todb, robj *value);\nint moduleDefragValue(robj *key, robj *obj, int dbid);\nint moduleLateDefrag(robj *key, robj *value, unsigned long *cursor, monotime endtime, int dbid);\nvoid moduleDefragStart(void);\nvoid moduleDefragEnd(void);\nvoid *moduleGetHandleByName(char *modulename);\nint moduleIsModuleCommand(void *module_handle, struct redisCommand *cmd);\nint moduleHasSubscribersForKeyspaceEvent(int type);\nint moduleHasSubscribersForKeyspaceEventWithSubkeys(int type);\n\n/* pcmd */\nvoid initPendingCommand(pendingCommand *pcmd);\nvoid 
freePendingCommand(client *c, pendingCommand *pcmd);\nvoid addPendingCommand(pendingCommandList *queue, pendingCommand *cmd);\npendingCommand *popPendingCommandFromHead(pendingCommandList *queue);\npendingCommand *popPendingCommandFromTail(pendingCommandList *queue);\nvoid shrinkPendingCommandPool(void);\n\n/* Utils */\nlong long ustime(void);\nmstime_t mstime(void);\nmstime_t commandTimeSnapshot(void);\nvoid getRandomHexChars(char *p, size_t len);\nvoid getRandomBytes(unsigned char *p, size_t len);\nuint64_t crc64(uint64_t crc, const unsigned char *s, uint64_t l);\nvoid exitFromChild(int retcode, int from_signal);\nlong long redisPopcount(void *s, long count);\nint redisSetProcTitle(char *title);\nint validateProcTitleTemplate(const char *template);\nint redisCommunicateSystemd(const char *sd_notify_msg);\nvoid redisSetCpuAffinity(const char *cpulist);\n\n/* afterErrorReply flags */\n#define ERR_REPLY_FLAG_NO_STATS_UPDATE (1ULL<<0) /* Indicating that we should not update\n                                                    error stats after sending error reply */\n/* networking.c -- Networking and Client related operations */\nclient *createClient(connection *conn);\nvoid freeClient(client *c);\nvoid freeClientAsync(client *c);\nvoid deauthenticateAndCloseClient(client *c);\nvoid logInvalidUseAndFreeClientAsync(client *c, const char *fmt, ...);\nint beforeNextClient(client *c);\nvoid clearClientConnectionState(client *c);\nvoid resetClient(client *c, int num_pcmds_to_free);\nvoid resetClientQbufState(client *c);\nvoid freeClientOriginalArgv(client *c);\nvoid freeClientArgv(client *c);\nvoid freeClientPendingCommands(client *c, int num_pcmds_to_free);\nvoid tryDeferFreeClientObject(client *c, int type, void *ptr);\nvoid freeClientDeferredObjects(client *c, int free_array);\nvoid freeClientIODeferredObjects(client *c, int free_array);\nvoid sendReplyToClient(connection *conn);\nvoid *addReplyDeferredLen(client *c);\nvoid setDeferredArrayLen(client *c, void *node, 
long length);\nvoid setDeferredMapLen(client *c, void *node, long length);\nvoid setDeferredSetLen(client *c, void *node, long length);\nvoid setDeferredAttributeLen(client *c, void *node, long length);\nvoid setDeferredPushLen(client *c, void *node, long length);\nint isClientReadErrorFatal(client *c);\nint processInputBuffer(client *c);\nvoid statsUpdateActiveClients(client *c);\nint getActiveClientsInWindow(void);\nvoid acceptCommonHandler(connection *conn, int flags, char *ip);\nvoid readQueryFromClient(connection *conn);\nint prepareClientToWrite(client *c);\nvoid addReplyNull(client *c);\nvoid addReplyNullArray(client *c);\nvoid addReplyBool(client *c, int b);\nvoid addReplyVerbatim(client *c, const char *s, size_t len, const char *ext);\nvoid addReplyProto(client *c, const char *s, size_t len);\nvoid AddReplyFromClient(client *c, client *src);\nvoid addReplyBulk(client *c, robj *obj);\nvoid addReplyBulkWithFlag(client *c, robj *obj, int avoid_copy);\nvoid addReplyBulkCString(client *c, const char *s);\nvoid addReplyBulkCBuffer(client *c, const void *p, size_t len);\nvoid addReplyBulkLongLong(client *c, long long ll);\nvoid addReply(client *c, robj *obj);\nvoid addReplyStatusLength(client *c, const char *s, size_t len);\nvoid addReplySds(client *c, sds s);\nvoid addReplyBulkSds(client *c, sds s);\nvoid setDeferredReplyBulkSds(client *c, void *node, sds s);\nvoid addReplyErrorObject(client *c, robj *err);\nvoid addReplyOrErrorObject(client *c, robj *reply);\nvoid afterErrorReply(client *c, const char *s, size_t len, int flags);\nvoid addReplyErrorFormatInternal(client *c, int flags, const char *fmt, va_list ap);\nvoid addReplyErrorSdsEx(client *c, sds err, int flags);\nvoid addReplyErrorSds(client *c, sds err);\nvoid addReplyErrorSdsSafe(client *c, sds err);\nvoid addReplyError(client *c, const char *err);\nvoid addReplyErrorArity(client *c);\nvoid addReplyErrorExpireTime(client *c);\nvoid addReplyStatus(client *c, const char *status);\nvoid 
addReplyDouble(client *c, double d);\nvoid addReplyBigNum(client *c, const char *num, size_t len);\nvoid addReplyHumanLongDouble(client *c, long double d);\nvoid addReplyLongLong(client *c, long long ll);\nvoid addReplyLongLongFromStr(client *c, robj* str);\nvoid addReplyArrayLen(client *c, long length);\nvoid addReplyMapLen(client *c, long length);\nvoid addReplySetLen(client *c, long length);\nvoid addReplyAttributeLen(client *c, long length);\nvoid addReplyPushLen(client *c, long length);\nvoid addReplyHelp(client *c, const char **help);\nvoid addExtendedReplyHelp(client *c, const char **help, const char **extended_help);\nvoid addReplySubcommandSyntaxError(client *c);\nvoid addReplyLoadedModules(client *c);\nvoid copyReplicaOutputBuffer(client *dst, client *src);\nvoid addListRangeReply(client *c, robj *o, long start, long end, int reverse);\nvoid deferredAfterErrorReply(client *c, list *errors);\nsize_t sdsZmallocSize(sds s);\nsize_t getStringObjectSdsUsedMemory(robj *o);\nvoid freeClientReplyValue(void *o);\nvoid *dupClientReplyValue(void *o);\nchar *getClientPeerId(client *client);\nchar *getClientSockName(client *client);\nsds catClientInfoString(sds s, client *client);\nsds getAllClientsInfoString(int type);\nint clientSetName(client *c, robj *name, const char **err);\nvoid rewriteClientCommandVector(client *c, int argc, ...);\nvoid rewriteClientCommandArgument(client *c, int i, robj *newval);\nvoid replaceClientCommandVector(client *c, int argc, robj **argv);\nvoid redactClientCommandArgument(client *c, int argc);\nsize_t getClientOutputBufferMemoryUsage(client *c);\nsize_t getNormalClientPendingReplyBytes(client *c);\nsize_t getClientMemoryUsage(client *c, size_t *output_buffer_mem_usage);\nint freeClientsInAsyncFreeQueue(void);\nint closeClientOnOutputBufferLimitReached(client *c, int async);\nint getClientType(client *c);\nint getClientTypeByName(char *name);\nchar *getClientTypeName(int class);\nvoid flushSlavesOutputBuffers(void);\nvoid 
disconnectSlaves(void);\nvoid evictClients(void);\nint listenToPort(connListener *fds);\nvoid pauseActions(pause_purpose purpose, mstime_t end, uint32_t actions_bitmask);\nvoid unpauseActions(pause_purpose purpose);\nuint32_t isPausedActions(uint32_t action_bitmask);\nuint32_t isPausedActionsWithUpdate(uint32_t action_bitmask);\nvoid updatePausedActions(void);\nvoid unblockPostponedClients(void);\nvoid processEventsWhileBlocked(void);\nvoid whileBlockedCron(void);\nvoid blockingOperationStarts(void);\nvoid blockingOperationEnds(void);\nint handleClientsWithPendingWrites(void);\nint clientHasPendingReplies(client *c);\nint updateClientMemUsageAndBucket(client *c);\nvoid removeClientFromMemUsageBucket(client *c, int allow_eviction);\nvoid unlinkClient(client *c);\nvoid tryUnlinkClientFromPendingRefReply(client *c, int force);\nint writeToClient(client *c, int handler_installed);\nvoid linkClient(client *c);\nvoid protectClient(client *c);\nvoid unprotectClient(client *c);\nclient *lookupClientByID(uint64_t id);\nint authRequired(client *c);\nvoid putClientInPendingWriteQueue(client *c);\ngetKeysResult *getClientCachedKeyResult(client *c);\n/* reply macros */\n#define ADD_REPLY_BULK_CBUFFER_STRING_CONSTANT(c, str) addReplyBulkCBuffer(c, str, strlen(str))\n\n/* iothread.c - the threaded io implementation */\nvoid initThreadedIO(void);\nvoid killIOThreads(void);\nvoid pauseIOThread(int id);\nvoid resumeIOThread(int id);\nvoid pauseAllIOThreads(void);\nvoid resumeAllIOThreads(void);\nvoid pauseIOThreadsRange(int start, int end);\nvoid resumeIOThreadsRange(int start, int end);\nint resizeAllIOThreadsEventLoops(size_t newsize);\nint sendPendingClientsToIOThreads(void);\nvoid enqueuePendingClientsToMainThread(client *c, int unbind);\nvoid enqueuePendingClienstToIOThreads(client *c);\nvoid handleClientReadError(client *c);\nvoid unbindClientFromIOThreadEventLoop(client *c);\nint processClientsOfAllIOThreads(void);\nint processClientsFromMainThread(IOThread *t);\nvoid 
assignClientToIOThread(client *c);\nvoid keepClientInMainThread(client *c);\nvoid fetchClientFromIOThread(client *c);\nint isClientMustHandledByMainThread(client *c);\n\n/* logreqres.c - logging of requests and responses */\nvoid reqresReset(client *c, int free_buf);\nvoid reqresSaveClientReplyOffset(client *c);\nsize_t reqresAppendRequest(client *c);\nsize_t reqresAppendResponse(client *c);\n\n#ifdef __GNUC__\nvoid addReplyErrorFormatEx(client *c, int flags, const char *fmt, ...)\n    __attribute__((format(printf, 3, 4)));\nvoid addReplyErrorFormat(client *c, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\nvoid addReplyStatusFormat(client *c, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\n#else\nvoid addReplyErrorFormatEx(client *c, int flags, const char *fmt, ...);\nvoid addReplyErrorFormat(client *c, const char *fmt, ...);\nvoid addReplyStatusFormat(client *c, const char *fmt, ...);\n#endif\n\n/* Client side caching (tracking mode) */\nvoid enableTracking(client *c, uint64_t redirect_to, uint64_t options, robj **prefix, size_t numprefix);\nvoid disableTracking(client *c);\nvoid trackingRememberKeys(client *tracking, client *executing);\nvoid trackingInvalidateKey(client *c, robj *keyobj, int bcast);\nvoid trackingScheduleKeyInvalidation(uint64_t client_id, robj *keyobj);\nvoid trackingHandlePendingKeyInvalidations(void);\nvoid trackingInvalidateKeysOnFlush(int async);\nvoid freeTrackingRadixTree(rax *rt);\nvoid freeTrackingRadixTreeAsync(rax *rt);\nvoid freeErrorsRadixTreeAsync(rax *errors);\nvoid trackingLimitUsedSlots(void);\nuint64_t trackingGetTotalItems(void);\nuint64_t trackingGetTotalKeys(void);\nuint64_t trackingGetTotalPrefixes(void);\nvoid trackingBroadcastInvalidationMessages(void);\nint checkPrefixCollisionsOrReply(client *c, robj **prefix, size_t numprefix);\n\n/* List data type */\nvoid listTypePush(robj *subject, robj *value, int where);\nrobj *listTypePop(robj *subject, int where);\nunsigned long 
listTypeLength(const robj *subject);\nsize_t listTypeAllocSize(const robj *o);\nvoid listTypeInitIterator(listTypeIterator *li, robj *subject, long index, unsigned char direction);\nvoid listTypeResetIterator(listTypeIterator *li);\nvoid listTypeSetIteratorDirection(listTypeIterator *li, listTypeEntry *entry, unsigned char direction);\nint listTypeNext(listTypeIterator *li, listTypeEntry *entry);\nrobj *listTypeGet(listTypeEntry *entry);\nunsigned char *listTypeGetValue(listTypeEntry *entry, size_t *vlen, long long *lval);\nvoid listTypeInsert(listTypeEntry *entry, robj *value, int where);\nvoid listTypeReplace(listTypeEntry *entry, robj *value);\nint listTypeEqual(listTypeEntry *entry, robj *o, size_t object_len,\n                  long long *cached_longval, int *cached_valid);\nvoid listTypeDelete(listTypeIterator *iter, listTypeEntry *entry);\nrobj *listTypeDup(robj *o);\nvoid listTypeDelRange(robj *o, long start, long stop);\nvoid popGenericCommand(client *c, int where);\nvoid listElementsRemoved(client *c, robj *key, int where, robj *o, long count, size_t oldsize, int signal, int *deleted);\ntypedef enum {\n    LIST_CONV_AUTO,\n    LIST_CONV_GROWING,\n    LIST_CONV_SHRINKING,\n} list_conv_type;\ntypedef void (*beforeConvertCB)(void *data);\nvoid listTypeTryConversion(robj *o, list_conv_type lct, beforeConvertCB fn, void *data);\nvoid listTypeTryConversionAppend(robj *o, robj **argv, int start, int end, beforeConvertCB fn, void *data);\n\n/* MULTI/EXEC/WATCH... 
*/\nvoid unwatchAllKeys(client *c);\nvoid initClientMultiState(client *c);\nvoid freeClientMultiState(client *c);\nvoid queueMultiCommand(client *c, uint64_t cmd_flags);\nsize_t multiStateMemOverhead(client *c);\nvoid touchWatchedKey(redisDb *db, robj *key);\nint isWatchedKeyExpired(client *c);\nvoid touchAllWatchedKeysInDb(redisDb *emptied, redisDb *replaced_with, struct slotRangeArray *slots);\nvoid discardTransaction(client *c);\nvoid flagTransaction(client *c);\nvoid execCommandAbort(client *c, sds error);\n\nunsigned char *getObjectReadOnlyString(robj *o, long *len, char *llbuf);\n\nunsigned long long estimateObjectIdleTime(robj *o);\n#define sdsEncodedObject(objptr) (objptr->encoding == OBJ_ENCODING_RAW || objptr->encoding == OBJ_ENCODING_EMBSTR)\n\n/* Synchronous I/O with timeout */\nssize_t syncWrite(int fd, char *ptr, ssize_t size, long long timeout);\nssize_t syncRead(int fd, char *ptr, ssize_t size, long long timeout);\nssize_t syncReadLine(int fd, char *ptr, ssize_t size, long long timeout);\n\n/* Replication */\nvoid replicationFeedSlaves(list *slaves, int dictid, robj **argv, int argc);\nvoid replicationFeedStreamFromMasterStream(char *buf, size_t buflen);\nvoid resetReplicationBuffer(void);\nvoid freeReplicaReferencedReplBuffer(client *replica);\nvoid replicationFeedMonitors(client *c, list *monitors, int dictid, robj **argv, int argc);\nvoid updateSlavesWaitingBgsave(int bgsaveerr, int type);\nvoid replicationCron(void);\nvoid replicationStartPendingFork(void);\nvoid replicationHandleMasterDisconnection(void);\nvoid replicationCacheMaster(client *c);\nvoid resizeReplicationBacklog(void);\nvoid replicationSetMaster(char *ip, int port);\nvoid replicationUnsetMaster(void);\nvoid refreshGoodSlavesCount(void);\nint checkGoodReplicasStatus(void);\nvoid processClientsWaitingReplicas(void);\nvoid unblockClientWaitingReplicas(client *c);\nint replicationCountAcksByOffset(long long offset);\nint replicationCountAOFAcksByOffset(long long offset);\nvoid 
replicationSendNewlineToMaster(void);\nlong long replicationGetSlaveOffset(void);\nchar *replicationGetSlaveName(client *c);\nlong long getPsyncInitialOffset(void);\nint replicationSetupSlaveForFullResync(client *slave, long long offset);\nvoid changeReplicationId(void);\nvoid clearReplicationId2(void);\nvoid createReplicationBacklog(void);\nvoid freeReplicationBacklog(void);\nvoid replicationCacheMasterUsingMyself(void);\nvoid feedReplicationBacklog(void *ptr, size_t len);\nvoid incrementalTrimReplicationBacklog(size_t blocks);\nint canFeedReplicaReplBuffer(client *replica);\nvoid rebaseReplicationBuffer(long long base_repl_offset);\nvoid showLatestBacklog(void);\nvoid rdbPipeReadHandler(struct aeEventLoop *eventLoop, int fd, void *clientData, int mask);\nvoid rdbPipeWriteHandlerConnRemoved(struct connection *conn);\nvoid clearFailoverState(void);\nvoid updateFailoverStatus(void);\nvoid abortFailover(const char *err);\nconst char *getFailoverStateString(void);\nint replicationCheckHasMainChannel(client *slave);\nunsigned long replicationLogicalReplicaCount(void);\nvoid replDataBufInit(replDataBuf *buf);\nvoid replDataBufClear(replDataBuf *buf);\nvoid replDataBufReadFromConn(connection *conn, replDataBuf *buf, void (*error_handler)(connection *conn));\nint replDataBufStreamToDb(replDataBuf *buf, replDataBufToDbCtx *ctx);\nint replicaFromIOThreadHasPendingRead(client *c);\nvoid putReplicasInPendingClientsToIOThreads(void);\nint replicationCronRunMasterClient(void);\n\n/* Generic persistence functions */\nvoid startLoadingFile(size_t size, char* filename, int rdbflags);\nvoid startLoading(size_t size, int rdbflags, int async);\nvoid loadingSetFlags(char *filename, size_t size, int async);\nvoid loadingFireEvent(int rdbflags);\nvoid loadingAbsProgress(off_t pos);\nvoid loadingIncrProgress(off_t size);\nvoid stopLoading(int success);\nvoid updateLoadingFileName(char* filename);\nvoid startSaving(int rdbflags);\nvoid stopSaving(int success);\nint 
allPersistenceDisabled(void);\n\n#define DISK_ERROR_TYPE_AOF 1       /* Don't accept writes: AOF errors. */\n#define DISK_ERROR_TYPE_RDB 2       /* Don't accept writes: RDB errors. */\n#define DISK_ERROR_TYPE_NONE 0      /* No problems, we can accept writes. */\nint writeCommandsDeniedByDiskError(void);\nsds writeCommandsGetDiskErrorMessage(int);\n\n/* RDB persistence */\n#include \"rdb.h\"\nvoid killRDBChild(void);\nint bg_unlink(const char *filename);\n\n/* AOF persistence */\nvoid flushAppendOnlyFile(int force);\nvoid feedAppendOnlyFile(int dictid, robj **argv, int argc);\nvoid aofRemoveTempFile(pid_t childpid);\nint rewriteAppendOnlyFileBackground(void);\nint loadAppendOnlyFiles(aofManifest *am);\nvoid stopAppendOnly(void);\nint startAppendOnly(void);\nvoid startAppendOnlyWithRetry(void);\nvoid applyAppendOnlyConfig(void);\nvoid backgroundRewriteDoneHandler(int exitcode, int bysignal);\nvoid killAppendOnlyChild(void);\nvoid aofLoadManifestFromDisk(void);\nvoid aofOpenIfNeededOnServerStart(void);\nvoid aofManifestFree(aofManifest *am);\nint aofDelHistoryFiles(void);\nint aofRewriteLimited(void);\nvoid updateCurIncrAofEndOffset(void);\nvoid updateReplOffsetAndResetEndOffset(void);\nint rewriteObject(rio *r, robj *key, robj *o, int dbid, long long expiretime);\n\n/* Child info */\nvoid openChildInfoPipe(void);\nvoid closeChildInfoPipe(void);\nvoid sendChildInfoGeneric(childInfoType info_type, size_t keys, double progress, char *pname);\nvoid sendChildCowInfo(childInfoType info_type, char *pname);\nvoid sendChildInfo(childInfoType info_type, size_t keys, char *pname);\nvoid receiveChildInfo(void);\n\n/* Fork helpers */\nint redisFork(int purpose);\nint hasActiveChildProcess(void);\nvoid resetChildState(void);\nint isMutuallyExclusiveChildType(int type);\n\n/* acl.c -- Authentication related prototypes. */\nextern rax *Users;\nextern user *DefaultUser;\nvoid ACLInit(void);\n/* Return values for ACLCheckAllPerm(). 
*/\n#define ACL_OK 0\n#define ACL_DENIED_CMD 1\n#define ACL_DENIED_KEY 2\n#define ACL_DENIED_AUTH 3 /* Only used for ACL LOG entries. */\n#define ACL_DENIED_CHANNEL 4 /* Only used for pub/sub commands */\n#define ACL_INVALID_TLS_CERT_AUTH 5 /* Only used for TLS Auto-authentication */\n\n/* Context values for addACLLogEntry(). */\n#define ACL_LOG_CTX_TOPLEVEL 0\n#define ACL_LOG_CTX_LUA 1\n#define ACL_LOG_CTX_MULTI 2\n#define ACL_LOG_CTX_MODULE 3\n\n/* ACL key permission types */\n#define ACL_READ_PERMISSION (1<<0)\n#define ACL_WRITE_PERMISSION (1<<1)\n#define ACL_ALL_PERMISSION (ACL_READ_PERMISSION|ACL_WRITE_PERMISSION)\n\n/* Return codes for Authentication functions to indicate the result. */\ntypedef enum {\n    AUTH_OK = 0,\n    AUTH_ERR,\n    AUTH_NOT_HANDLED,\n    AUTH_BLOCKED\n} AuthResult;\n\nint ACLCheckUserCredentials(robj *username, robj *password);\nint ACLAuthenticateUser(client *c, robj *username, robj *password, robj **err);\nint checkModuleAuthentication(client *c, robj *username, robj *password, robj **err);\nvoid addAuthErrReply(client *c, robj *err);\nunsigned long ACLGetCommandID(sds cmdname);\nvoid ACLClearCommandID(void);\nuser *ACLGetUserByName(const char *name, size_t namelen);\nint ACLUserCheckKeyPerm(user *u, const char *key, int keylen, int flags);\nint ACLUserCheckChannelPerm(user *u, sds channel, int literal);\nint ACLCheckAllUserCommandPerm(user *u, struct redisCommand *cmd, robj **argv, int argc, getKeysResult *key_result, int *idxptr);\nint ACLUserCheckCmdWithUnrestrictedKeyAccess(user *u, struct redisCommand *cmd, robj **argv, int argc, int flags);\nint ACLCheckAllPerm(client *c, int *idxptr);\nint ACLSetUser(user *u, const char *op, ssize_t oplen);\nsds ACLStringSetUser(user *u, sds username, sds *argv, int argc);\nuint64_t ACLGetCommandCategoryFlagByName(const char *name);\nint ACLAddCommandCategory(const char *name, uint64_t flag);\nvoid ACLCleanupCategoriesOnFailure(size_t num_acl_categories_added);\nint 
ACLAppendUserForLoading(sds *argv, int argc, int *argc_err);\nconst char *ACLSetUserStringError(void);\nint ACLLoadConfiguredUsers(void);\nrobj *ACLDescribeUser(user *u);\nvoid ACLLoadUsersAtStartup(void);\nvoid addReplyCommandCategories(client *c, struct redisCommand *cmd);\nuser *ACLCreateUnlinkedUser(void);\nvoid ACLFreeUserAndKillClients(user *u);\nvoid addACLLogEntry(client *c, int reason, int context, int argpos, sds username, sds object);\nsds getAclErrorMessage(int acl_res, user *user, struct redisCommand *cmd, sds errored_val, int verbose);\nvoid ACLUpdateDefaultUserPassword(sds password);\nsds genRedisInfoStringACLStats(sds info);\nvoid ACLRecomputeCommandBitsFromCommandRulesAllUsers(void);\n\n/* Sorted sets data type */\n\n/* Input flags. */\n#define ZADD_IN_NONE 0\n#define ZADD_IN_INCR (1<<0)    /* Increment the score instead of setting it. */\n#define ZADD_IN_NX (1<<1)      /* Don't touch elements not already existing. */\n#define ZADD_IN_XX (1<<2)      /* Only touch elements already existing. */\n#define ZADD_IN_GT (1<<3)      /* Only update existing when new scores are higher. */\n#define ZADD_IN_LT (1<<4)      /* Only update existing when new scores are lower. */\n\n/* Output flags. */\n#define ZADD_OUT_NOP (1<<0)     /* Operation not performed because of conditionals. */\n#define ZADD_OUT_NAN (1<<1)     /* The resulting score is not a number (NaN). */\n#define ZADD_OUT_ADDED (1<<2)   /* The element was new and was added. */\n#define ZADD_OUT_UPDATED (1<<3) /* The element already existed, score updated. */\n\n/* Struct to hold an inclusive/exclusive range spec by score comparison. */\ntypedef struct {\n    double min, max;\n    int minex, maxex; /* are min or max exclusive? */\n} zrangespec;\n\n/* Struct to hold an inclusive/exclusive range spec by lexicographic comparison. */\ntypedef struct {\n    sds min, max;     /* May be set to shared.(minstring|maxstring) */\n    int minex, maxex; /* are min or max exclusive? 
*/\n} zlexrangespec;\n\n/* flags for incrCommandFailedCalls */\n#define ERROR_COMMAND_REJECTED (1<<0) /* Indicate to update the command rejected stats */\n#define ERROR_COMMAND_FAILED (1<<1) /* Indicate to update the command failed stats */\n\nzskiplist *zslCreate(void);\nvoid zslFree(zskiplist *zsl);\nsize_t zslAllocSize(const zskiplist *zsl);\nsds zslGetNodeElement(const zskiplistNode *node);\nint zslCompareWithNode(double score, sds ele, const zskiplistNode *n);\nzskiplistNode *zslInsert(zskiplist *zsl, double score, sds ele);\nunsigned char *zzlInsert(unsigned char *zl, sds ele, double score);\nzskiplistNode *zslNthInRange(zskiplist *zsl, zrangespec *range, long n, unsigned long *out_rank);\ndouble zzlGetScore(unsigned char *sptr);\nvoid zzlNext(unsigned char *zl, unsigned char **eptr, unsigned char **sptr);\nvoid zzlPrev(unsigned char *zl, unsigned char **eptr, unsigned char **sptr);\nunsigned char *zzlFirstInRange(unsigned char *zl, zrangespec *range);\nunsigned char *zzlLastInRange(unsigned char *zl, zrangespec *range);\nunsigned long zsetLength(const robj *zobj);\nsize_t zsetAllocSize(const robj *o);\nvoid zsetConvert(robj *zobj, int encoding);\nvoid zsetConvertToListpackIfNeeded(robj *zobj, size_t maxelelen, size_t totelelen);\nint zsetScore(robj *zobj, sds member, double *score);\nunsigned long zslGetRank(zskiplist *zsl, double score, sds o);\nint zsetAdd(robj *zobj, double score, sds ele, int in_flags, int *out_flags, double *newscore);\nlong zsetRank(robj *zobj, sds ele, int reverse, double *score);\nint zsetDel(robj *zobj, sds ele);\nrobj *zsetDup(robj *o);\nvoid genericZpopCommand(client *c, robj **keyv, int keyc, int where, int emitkey, long count, int use_nested_array, int reply_nil_when_empty, int *deleted);\nsds lpGetObject(unsigned char *sptr);\nint zslValueGteMin(double value, zrangespec *spec);\nint zslValueLteMax(double value, zrangespec *spec);\nvoid zslFreeLexRange(zlexrangespec *spec);\nint zslParseLexRange(robj *min, robj *max, 
zlexrangespec *spec);\nunsigned char *zzlFirstInLexRange(unsigned char *zl, zlexrangespec *range);\nunsigned char *zzlLastInLexRange(unsigned char *zl, zlexrangespec *range);\nzskiplistNode *zslNthInLexRange(zskiplist *zsl, zlexrangespec *range, long n, unsigned long *out_rank);\nint zzlLexValueGteMin(unsigned char *p, zlexrangespec *spec);\nint zzlLexValueLteMax(unsigned char *p, zlexrangespec *spec);\nint zslLexValueGteMin(sds value, zlexrangespec *spec);\nint zslLexValueLteMax(sds value, zlexrangespec *spec);\n\n/* gcra related */\nrobj *gcraDup(robj *o);\n\n/* Core functions */\nint getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level);\nvoid updatePeakMemory(void);\nsize_t freeMemoryGetNotCountedMemory(void);\nint overMaxmemoryAfterAlloc(size_t moremem);\nuint64_t getCommandFlags(client *c);\nvoid preprocessCommand(client *c, pendingCommand *pcmd);\nint processCommand(client *c);\nvoid commandProcessed(client *c);\nvoid prepareForNextCommand(client *c, int update_slot_stats);\nint processPendingCommandAndInputBuffer(client *c);\nint processCommandAndResetClient(client *c);\nint areCommandKeysInSameSlot(client *c, int *hashslot);\nvoid setupSignalHandlers(void);\nint createSocketAcceptHandler(connListener *sfd, aeFileProc *accept_handler);\nconnListener *listenerByType(const char *typename);\nint changeListener(connListener *listener);\nvoid closeListener(connListener *listener);\nstruct redisCommand *lookupSubcommand(struct redisCommand *container, sds sub_name);\nstruct redisCommand *lookupCommand(robj **argv, int argc);\nstruct redisCommand *lookupCommandBySdsLogic(dict *commands, sds s);\nstruct redisCommand *lookupCommandBySds(sds s);\nstruct redisCommand *lookupCommandByCStringLogic(dict *commands, const char *s);\nstruct redisCommand *lookupCommandByCString(const char *s);\nstruct redisCommand *lookupCommandOrOriginal(robj **argv, int argc);\nint commandCheckExistence(client *c, sds *err);\nint commandCheckArity(struct 
redisCommand *cmd, int argc, sds *err);\nvoid startCommandExecution(void);\nint incrCommandStatsOnError(struct redisCommand *cmd, int flags);\nvoid call(client *c, int flags);\nvoid alsoPropagate(int dbid, robj **argv, int argc, int target);\nvoid postExecutionUnitOperations(void);\nint redisOpArrayAppend(redisOpArray *oa, int dbid, robj **argv, int argc, int target);\nvoid redisOpArrayFree(redisOpArray *oa);\nvoid forceCommandPropagation(client *c, int flags);\nvoid preventCommandPropagation(client *c);\nvoid preventCommandAOF(client *c);\nvoid preventCommandReplication(client *c);\nvoid slowlogPushCurrentCommand(client *c, struct redisCommand *cmd, ustime_t duration);\nvoid updateCommandLatencyHistogram(struct hdr_histogram** latency_histogram, int64_t duration_hist);\nint prepareForShutdown(int flags);\nvoid replyToClientsBlockedOnShutdown(void);\nint abortShutdown(void);\nvoid afterCommand(client *c);\nint mustObeyClient(client *c);\n#ifdef __GNUC__\nvoid _serverLog(int level, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\nvoid serverLogFromHandler(int level, const char *fmt, ...)\n    __attribute__((format(printf, 2, 3)));\n#else\nvoid serverLogFromHandler(int level, const char *fmt, ...);\nvoid _serverLog(int level, const char *fmt, ...);\n#endif\nvoid serverLogRaw(int level, const char *msg);\nvoid serverLogRawFromHandler(int level, const char *msg);\nvoid usage(void);\nvoid updateDictResizePolicy(void);\nvoid populateCommandTable(void);\nvoid resetCommandTableStats(dict* commands);\nvoid resetErrorTableStats(void);\nvoid adjustOpenFilesLimit(void);\nvoid incrementErrorCount(const char *fullerr, size_t namelen);\nvoid closeListeningSockets(int unlink_unix_socket);\nvoid updateCachedTime(int update_daylight_info);\nvoid enterExecutionUnit(int update_cached_time, long long us);\nvoid exitExecutionUnit(void);\nvoid resetServerStats(void);\nvoid activeDefragCycle(void);\nvoid defragWhileBlocked(void);\nunsigned int 
getLRUClock(void);\nunsigned int LRU_CLOCK(void);\nconst char *evictPolicyToString(void);\nstruct redisMemOverhead *getMemoryOverheadData(void);\nvoid freeMemoryOverheadData(struct redisMemOverhead *mh);\nvoid checkChildrenDone(void);\nint setOOMScoreAdj(int process_class);\nvoid rejectCommandFormat(client *c, const char *fmt, ...);\nvoid *activeDefragAlloc(void *ptr);\nvoid *activeDefragAllocRaw(size_t size);\nvoid activeDefragFreeRaw(void *ptr);\nrobj *activeDefragStringOb(robj* ob);\nvoid dismissSds(sds s);\nvoid dismissMemory(void* ptr, size_t size_hint);\nvoid dismissDictBucketsMemory(dict *d);\nvoid dismissKvstoreBucketsMemory(kvstore *kvs);\nvoid dismissMemoryInChild(void);\nint clientsCronRunClient(client *c);\n\n#define RESTART_SERVER_NONE 0\n#define RESTART_SERVER_GRACEFULLY (1<<0)     /* Do proper shutdown. */\n#define RESTART_SERVER_CONFIG_REWRITE (1<<1) /* CONFIG REWRITE before restart.*/\nint restartServer(int flags, mstime_t delay);\nint getKeySlot(sds key);\nint calculateKeySlot(sds key);\n\n/* kvstore wrappers */\nint dbExpand(redisDb *db, uint64_t db_size, int try_expand);\nint dbExpandExpires(redisDb *db, uint64_t db_size, int try_expand);\nkvobj *dbFind(redisDb *db, sds key);\nkvobj *dbFindByLink(redisDb *db, sds key, dictEntryLink *link);\nkvobj *dbFindExpires(redisDb *db, sds key);\nunsigned long long dbSize(redisDb *db);\nunsigned long long dbScan(redisDb *db, unsigned long long cursor, dictScanFunction *scan_cb, void *privdata);\n\n/* Set data type */\nrobj *setTypeCreate(sds value, size_t size_hint);\nint setTypeAdd(robj *subject, sds value);\nint setTypeAddAux(robj *set, char *str, size_t len, int64_t llval, int str_is_sds);\nint setTypeRemove(robj *subject, sds value);\nint setTypeRemoveAux(robj *set, char *str, size_t len, int64_t llval, int str_is_sds);\nint setTypeIsMember(robj *subject, sds value);\nint setTypeIsMemberAux(robj *set, char *str, size_t len, int64_t llval, int str_is_sds);\nvoid setTypeInitIterator(setTypeIterator *si, 
robj *subject);\nvoid setTypeResetIterator(setTypeIterator *si);\nint setTypeNext(setTypeIterator *si, char **str, size_t *len, int64_t *llele);\nsds setTypeNextObject(setTypeIterator *si);\nint setTypeRandomElement(robj *setobj, char **str, size_t *len, int64_t *llele);\nunsigned long setTypeSize(const robj *subject);\nsize_t setTypeAllocSize(const robj *o);\nvoid setTypeConvert(robj *subject, int enc);\nint setTypeConvertAndExpand(robj *setobj, int enc, unsigned long cap, int panic);\nrobj *setTypeDup(robj *o);\n\n/* Data structure for the OBJ_ENCODING_LISTPACK_EX hash encoding. It contains\n * the listpack and metadata fields for hash-field expiration. */\ntypedef struct listpackEx {\n    ExpireMeta meta;  /* Used to register the hash in the global ebuckets\n                         subexpires with the next (minimum) hash-field to\n                         expire. The TTL value might be inaccurate by up to a\n                         few seconds due to optimization considerations. */\n    void *lp;         /* listpack that contains 'key-value-ttl' tuples which\n                         are ordered by ttl. */\n} listpackEx;\n\n/* Each dict of a hash object that has fields with expiration will have the\n * following metadata attached to the dict header.\n * Note that the alloc_size field must be first because hash objects without\n * expiration already use sizeof(size_t) bytes of metadata for memory\n * accounting. */\ntypedef struct htMetadataEx {\n    size_t alloc_size;       /* Total memory used for keys and values */\n    ExpireMeta expireMeta;   /* Embedded ExpireMeta in dict.\n                                Used to register the hash in the subexpires DB\n                                with the next (minimum) hash-field to expire.\n                                The TTL value might be inaccurate by up to a\n                                few seconds due to optimization considerations. 
*/\n    ebuckets hfe;            /* DS of Hash Fields Expiration, associated with each hash */\n} htMetadataEx;\n\n/* hash metadata helpers */\nstatic inline htMetadataEx *htGetMetadataEx(dict *d) {\n    return (htMetadataEx *)dictMetadata(d);\n}\n\nstatic inline size_t *htGetMetadataSize(dict *d) {\n    return (size_t *)dictMetadata(d);\n}\n\n/* Hash data type */\n#define HASH_SET_TAKE_FIELD (1<<0)\n#define HASH_SET_TAKE_VALUE (1<<1)\n#define HASH_SET_COPY 0\n\n/* Hash field lazy expiration flags. Used by core hashTypeGetValue() and its callers */\n#define HFE_LAZY_EXPIRE              (0)    /* Delete expired field, and if it is the last field also the hash */\n#define HFE_LAZY_AVOID_FIELD_DEL     (1<<0) /* Avoid deleting expired field */\n#define HFE_LAZY_AVOID_HASH_DEL      (1<<1) /* Avoid deleting hash if the field is the last one */\n#define HFE_LAZY_NO_NOTIFICATION     (1<<2) /* Do not send notification, used when multiple fields\n                                             * may expire and only one notification is desired. */\n#define HFE_LAZY_NO_SIGNAL           (1<<3) /* Do not send signal, used when multiple fields\n                                             * may expire and only one signal is desired. 
*/\n#define HFE_LAZY_ACCESS_EXPIRED      (1<<4) /* Avoid lazy expire and allow access to expired fields */\n#define HFE_LAZY_NO_UPDATE_KEYSIZES  (1<<5) /* If field lazy deleted, avoid updating keysizes histogram */\n#define HFE_LAZY_NO_UPDATE_ALLOCSIZES (1<<6) /* If field lazy deleted, avoid updating slot allocation sizes */\n\nvoid hashTypeConvert(redisDb *db, robj *o, int enc);\nvoid hashTypeTryConversion(redisDb *db, kvobj *kv, robj **argv, int start, int end);\nint hashTypeExists(redisDb *db, kvobj *kv, sds field, int hfeFlags, int *isHashDeleted);\nint hashTypeDelete(robj *o, void *key);\nunsigned long hashTypeLength(const robj *o, int subtractExpiredFields);\nsize_t hashTypeAllocSize(const robj *o);\nvoid hashTypeInitIterator(hashTypeIterator *hi, robj *subject);\nvoid hashTypeResetIterator(hashTypeIterator *hi);\nint hashTypeNext(hashTypeIterator *hi, int skipExpiredFields);\nvoid hashTypeCurrentFromListpack(hashTypeIterator *hi, int what,\n                                 unsigned char **vstr,\n                                 unsigned int *vlen,\n                                 long long *vll,\n                                 uint64_t *expireTime);\nvoid hashTypeCurrentFromHashTable(hashTypeIterator *hi, int what, char **str,\n                                  size_t *len, uint64_t *expireTime);\nvoid hashTypeCurrentObject(hashTypeIterator *hi, int what, unsigned char **vstr,\n                           unsigned int *vlen, long long *vll, uint64_t *expireTime);\nsds hashTypeCurrentObjectNewSds(hashTypeIterator *hi, int what);\nEntry *hashTypeCurrentObjectNewEntry(hashTypeIterator *hi, size_t *usable);\nint hashTypeGetValueObject(redisDb *db, kvobj *kv, sds field, int hfeFlags,\n                           robj **val, uint64_t *expireTime, int *isHashDeleted);\nint hashTypeSet(redisDb *db, kvobj *kv, sds field, sds value, int flags);\nrobj *hashTypeDup(kvobj *kv, uint64_t *minHashExpire);\nuint64_t hashTypeExpire(redisDb *db, kvobj *o, uint32_t *quota, int 
updateSubexpires, int activeEx);\nvoid hashTypeFree(robj *o);\nint hashTypeIsExpired(const robj *o, uint64_t expireAt);\nunsigned char *hashTypeListpackGetLp(robj *o);\nuint64_t hashTypeGetMinExpire(robj *o, int accurate);\nebuckets *hashTypeGetDictMetaHFE(dict *d);\nvoid initDictExpireMetadata(robj *o);\nstruct listpackEx *listpackExCreate(void);\nvoid listpackExAddNew(robj *o, char *field, size_t flen,\n                      char *value, size_t vlen, uint64_t expireAt);\n\n/* Pub / Sub */\nint pubsubUnsubscribeAllChannels(client *c, int notify);\nint pubsubUnsubscribeShardAllChannels(client *c, int notify);\nvoid pubsubShardUnsubscribeAllChannelsInSlot(unsigned int slot);\nint pubsubUnsubscribeAllPatterns(client *c, int notify);\nint pubsubPublishMessage(robj *channel, robj *message, int sharded);\nint pubsubPublishMessageAndPropagateToCluster(robj *channel, robj *message, int sharded);\nvoid addReplyPubsubMessage(client *c, robj *channel, robj *msg, robj *message_bulk);\nint serverPubsubSubscriptionCount(void);\nint serverPubsubShardSubscriptionCount(void);\nsize_t pubsubMemOverhead(client *c);\nvoid unmarkClientAsPubSub(client *c);\nint pubsubTotalSubscriptions(void);\ndict *getClientPubSubChannels(client *c);\ndict *getClientPubSubShardChannels(client *c);\n\n/* Keyspace events notification */\nvoid notifyKeyspaceEvent(int type, const char *event, robj *key, int dbid);\nvoid notifyKeyspaceEventWithSubkeys(int type, const char *event, robj *key, int dbid, robj **subkeys, int count);\nint keyspaceEventsStringToFlags(char *classes);\nsds keyspaceEventsFlagsToString(int flags);\nint isSubkeyNotifyEnabled(int type);\n\n/* As part of KSN, the module should not attempt to modify the key. Nevertheless,\n * RediSearch does it in some specific flows and modifies key metadata which in\n * turn might invalidate the local kvobj pointer. 
Those specific flows are\n * protected by the following macro which invalidates the local kvobj pointer\n * after the notification to prevent further access to it (currently it is only\n * used with hash type keys, without hash field expiration) */\n#define KSN_INVALIDATE_KVOBJ(o) do { (o) = NULL; } while (0)\n\n/* Configuration */\n/* Configuration Flags */\n#define MODIFIABLE_CONFIG 0 /* This is the implied default for a standard\n                             * config, which is mutable. */\n#define IMMUTABLE_CONFIG (1ULL<<0) /* Can this value only be set at startup? */\n#define SENSITIVE_CONFIG (1ULL<<1) /* Does this value contain sensitive information? */\n#define DEBUG_CONFIG (1ULL<<2) /* Values that are useful for debugging. */\n#define MULTI_ARG_CONFIG (1ULL<<3) /* This config receives multiple arguments. */\n#define HIDDEN_CONFIG (1ULL<<4) /* This config is hidden in `config get <pattern>` (used for tests/debugging) */\n#define PROTECTED_CONFIG (1ULL<<5) /* Becomes immutable if enable-protected-configs is enabled. */\n#define DENY_LOADING_CONFIG (1ULL<<6) /* This config is forbidden during loading. */\n#define ALIAS_CONFIG (1ULL<<7) /* For configs with multiple names, this flag is set on the alias. */\n#define MODULE_CONFIG (1ULL<<8) /* This config is a module config */\n#define VOLATILE_CONFIG (1ULL<<9) /* The config is a reference to the config data and not the config data itself (e.g.\n                                   * a file name containing more configuration like a tls key). In this case we want\n                                   * to apply the configuration change even if the new config value is the same as\n                                   * the old. 
*/\n\n#define INTEGER_CONFIG 0 /* No flags means a simple integer configuration */\n#define MEMORY_CONFIG (1<<0) /* Indicates if this value can be loaded as a memory value */\n#define PERCENT_CONFIG (1<<1) /* Indicates if this value can be loaded as a percent (and stored as a negative int) */\n#define OCTAL_CONFIG (1<<2) /* This value uses octal representation */\n\n/* Enum Configs contain an array of configEnum objects that match a string with an integer. */\ntypedef struct configEnum {\n    char *name;\n    int val;\n} configEnum;\n\n/* Type of configuration. */\ntypedef enum {\n    BOOL_CONFIG,\n    NUMERIC_CONFIG,\n    STRING_CONFIG,\n    SDS_CONFIG,\n    ENUM_CONFIG,\n    SPECIAL_CONFIG,\n} configType;\n\nvoid loadServerConfig(char *filename, char config_from_stdin, char *options);\nvoid appendServerSaveParams(time_t seconds, int changes);\nvoid resetServerSaveParams(void);\nstruct rewriteConfigState; /* Forward declaration to export API. */\nint rewriteConfigRewriteLine(struct rewriteConfigState *state, const char *option, sds line, int force);\nvoid rewriteConfigMarkAsProcessed(struct rewriteConfigState *state, const char *option);\nint rewriteConfig(char *path, int force_write);\nvoid initConfigValues(void);\nvoid removeConfig(sds name);\nsds getConfigDebugInfo(void);\nint allowProtectedAction(int config, client *c);\nvoid initServerClientMemUsageBuckets(void);\nvoid freeServerClientMemUsageBuckets(void);\nstatic inline int clusterSlotStatsEnabled(int stat) { return server.cluster_enabled && (server.cluster_slot_stats_enabled & stat); }\n\n/* Module Configuration */\ntypedef struct ModuleConfig ModuleConfig;\nint performModuleConfigSetFromName(sds name, sds value, const char **err);\nint performModuleConfigSetDefaultFromName(sds name, const char **err);\nvoid addModuleBoolConfig(sds name, sds alias, int flags, void *privdata, int default_val);\nvoid addModuleStringConfig(sds name, sds alias, int flags, void *privdata, sds default_val);\nvoid 
addModuleEnumConfig(sds name, sds alias, int flags, void *privdata, int default_val, configEnum *enum_vals, int num_enum_vals);\nvoid addModuleNumericConfig(sds name, sds alias, int flags, void *privdata, long long default_val, int conf_flags, long long lower, long long upper);\nvoid addModuleConfigApply(list *module_configs, ModuleConfig *module_config);\nint moduleConfigApply(ModuleConfig *module_config, const char **err);\nint moduleConfigApplyConfig(list *module_configs, const char **err, const char **err_arg_name);\nint moduleConfigNeedsApply(ModuleConfig *config);\nint getModuleBoolConfig(ModuleConfig *module_config);\nint setModuleBoolConfig(ModuleConfig *config, int val, const char **err);\nsds getModuleStringConfig(ModuleConfig *module_config);\nint setModuleStringConfig(ModuleConfig *config, sds strval, const char **err);\nint getModuleEnumConfig(ModuleConfig *module_config);\nint setModuleEnumConfig(ModuleConfig *config, int val, const char **err);\nlong long getModuleNumericConfig(ModuleConfig *module_config);\nint setModuleNumericConfig(ModuleConfig *config, long long val, const char **err);\n\n/* API for modules to access config values. 
*/\ndictIterator *moduleGetConfigIterator(void);\nconst char *moduleConfigIteratorNext(dictIterator **iter, sds pattern, int is_glob, configType *typehint);\nint moduleGetConfigType(sds name, configType *res);\nint moduleGetBoolConfig(sds name, int *res);\nint moduleGetStringConfig(sds name, sds *res);\nint moduleGetEnumConfig(sds name, sds *res);\nint moduleGetNumericConfig(sds name, long long *res);\nint moduleSetBoolConfig(client *c, sds name, int val, const char **err);\nint moduleSetStringConfig(client *c, sds name, const char *val, const char **err);\nint moduleSetEnumConfig(client *c, sds name, sds *vals, int vals_cnt, const char **err);\nint moduleSetNumericConfig(client *c, sds name, long long val, const char **err);\n\n/* db.c -- Keyspace access API */\nvoid kvsUpdateHistogram(keysizesHist kvstoreHist, uint32_t type, int64_t oldLen, int64_t newLen);\nvoid updateKeysizesHist(redisDb *db, uint32_t type, int64_t oldLen, int64_t newLen);\nvoid updateSlotAllocSize(redisDb *db, int didx, kvobj *kv, int64_t oldsize, int64_t newsize);\nvoid dbgRunAssertions(redisDb *db);\nint removeExpire(redisDb *db, robj *key);\nvoid deleteExpiredKeyAndPropagate(redisDb *db, robj *keyobj);\nvoid deleteEvictedKeyAndPropagate(redisDb *db, robj *keyobj, long long *key_mem_freed);\nvoid propagateDeletion(redisDb *db, robj *key, int lazy);\nint keyIsExpired(redisDb *db, sds key, kvobj *kv);\nint confAllowsExpireDel(void);\nlong long getExpire(redisDb *db, sds key, kvobj *kv);\nkvobj *setExpire(client *c, redisDb *db, robj *key, long long when);\nkvobj *setExpireByLink(client *c, redisDb *db, sds key, long long when, dictEntryLink link);\nint checkAlreadyExpired(long long when);\nint parseExtendedExpireArgumentsOrReply(client *c, int *flags);\nkvobj *lookupKeyRead(redisDb *db, robj *key);\nkvobj *lookupKeyWrite(redisDb *db, robj *key);\nkvobj *lookupKeyWriteWithLink(redisDb *db, robj *key, dictEntryLink *link);\nkvobj *lookupKeyReadOrReply(client *c, robj *key, robj *reply);\nkvobj 
*lookupKeyWriteOrReply(client *c, robj *key, robj *reply);\nkvobj *lookupKeyReadWithFlags(redisDb *db, robj *key, int flags);\nkvobj *lookupKeyWriteWithFlags(redisDb *db, robj *key, int flags);\nkvobj *kvobjCommandLookup(client *c, robj *key);\nkvobj *kvobjCommandLookupOrReply(client *c, robj *key, robj *reply);\n\n#define LOOKUP_NONE 0\n#define LOOKUP_NOTOUCH (1<<0)        /* Don't update LRU. */\n#define LOOKUP_NONOTIFY (1<<1)       /* Don't trigger keyspace event on key misses. */\n#define LOOKUP_NOSTATS (1<<2)        /* Don't update keyspace hits/misses counters. */\n#define LOOKUP_WRITE (1<<3)          /* Delete expired keys even in replicas. */\n#define LOOKUP_NOEXPIRE (1<<4)       /* Avoid deleting lazy expired keys. */\n#define LOOKUP_ACCESS_EXPIRED (1<<5) /* Allow lookup to expired key. */\n#define LOOKUP_ACCESS_TRIMMED (1<<6) /* Allow lookup to key in slots being trimmed. */\n#define LOOKUP_NOEFFECTS (LOOKUP_NONOTIFY | LOOKUP_NOSTATS | LOOKUP_NOTOUCH | LOOKUP_NOEXPIRE) /* Avoid any effects from fetching the key */\n\nstatic inline kvobj *dictGetKV(const dictEntry *de) {return (kvobj *) dictGetKey(de);}\nkvobj *dbAdd(redisDb *db, robj *key, robj **valref);\nkvobj *dbAddByLink(redisDb *db, robj *key, robj **valref, dictEntryLink *link);\nkvobj *dbAddInternal(redisDb *db, robj *key, robj **valref, dictEntryLink *link, const KeyMetaSpec *m);\nkvobj *dbAddRDBLoad(redisDb *db, sds key, robj **valref, const KeyMetaSpec *keyMetaSpec);\nvoid dbReplaceValue(redisDb *db, robj *key, kvobj **ioKeyVal, int updateKeySizes);\nvoid dbReplaceValueWithLink(redisDb *db, robj *key, robj **val, dictEntryLink link);\n\n#define SETKEY_KEEPTTL 1\n#define SETKEY_NO_SIGNAL 2\n#define SETKEY_ALREADY_EXIST 4\n#define SETKEY_DOESNT_EXIST 8\n\nvoid setKey(client *c, redisDb *db, robj *key, robj **ioval, int flags);\nvoid setKeyByLink(client *c, redisDb *db, robj *key, robj **valref, int flags, dictEntryLink *link);\nrobj *dbRandomKey(redisDb *db);\nint dbGenericDelete(redisDb *db, robj 
*key, int async, int flags);\nint dbSyncDelete(redisDb *db, robj *key);\nint dbDelete(redisDb *db, robj *key);\nint dbDeleteSkipKeysizesUpdate(redisDb *db, robj *key);\nkvobj *dbUnshareStringValue(redisDb *db, robj *key, kvobj *o);\nkvobj *dbUnshareStringValueByLink(redisDb *db, robj *key, kvobj *kv, dictEntryLink link);\n\n#define FLUSH_TYPE_ALL   0\n#define FLUSH_TYPE_DB    1\n#define FLUSH_TYPE_SLOTS 2\nvoid replySlotsFlush(client *c, struct slotRangeArray *slots);\nint flushCommandCommon(client *c, int type, int flags, struct asmTrimCtx *trim_ctx);\nvoid kvsAsyncFreeDoneCB(uint64_t client_id, void *userdata);\nvoid unblockClientForAsyncFlush(uint64_t client_id, struct slotRangeArray *slots);\nvoid blockClientForAsyncFlush(client *c);\n#define EMPTYDB_NO_FLAGS 0      /* No flags. */\n#define EMPTYDB_ASYNC (1<<0)    /* Reclaim memory in another thread. */\n#define EMPTYDB_NOFUNCTIONS (1<<1) /* Indicate not to flush the functions. */\nlong long emptyData(int dbnum, int flags, void(callback)(dict*));\nlong long emptyDbStructure(redisDb *dbarray, int dbnum, int async, void(callback)(dict*));\nvoid flushAllDataAndResetRDB(int flags);\nlong long dbTotalServerKeyCount(void);\nredisDb *initTempDb(void);\nvoid discardTempDb(redisDb *tempDb);\n\n\nint selectDb(client *c, int id);\nvoid keyModified(client *c, redisDb *db, robj *key, robj *val, int signal);\nvoid signalFlushedDb(int dbid, int async, struct slotRangeArray *slots);\nvoid scanGenericCommand(client *c, robj *o, unsigned long long cursor);\nint parseScanCursorOrReply(client *c, robj *o, unsigned long long *cursor);\nint dbAsyncDelete(redisDb *db, robj *key);\nvoid emptyDbAsync(redisDb *db);\nvoid streamMoveIdmpKeys(dict *src, dict *dst, int slot);\nvoid emptyDbDataAsync(kvstore *keys, kvstore *expires, ebuckets hexpires, dict *stream_idmp_keys, struct asmTrimCtx *ctx);\nsize_t lazyfreeGetPendingObjectsCount(void);\nsize_t lazyfreeGetFreedObjectsCount(void);\nvoid lazyfreeResetStats(void);\nvoid freeObjAsync(robj 
*key, robj *obj, int dbid);\nvoid freeReplicationBacklogRefMemAsync(list *blocks, rax *index);\n\n/* API to get key arguments from commands */\n#define GET_KEYSPEC_DEFAULT 0\n#define GET_KEYSPEC_INCLUDE_NOT_KEYS (1<<0) /* Consider 'fake' keys as keys */\n#define GET_KEYSPEC_RETURN_PARTIAL (1<<1) /* Return all keys that can be found */\n\nint getKeysFromCommandWithSpecs(struct redisCommand *cmd, robj **argv, int argc, int search_flags, getKeysResult *result);\nkeyReference *getKeysPrepareResult(getKeysResult *result, int numkeys);\nint getKeysFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint getSlotFromCommand(struct redisCommand *cmd, robj **argv, int argc);\nint doesCommandHaveKeys(struct redisCommand *cmd);\nint getChannelsFromCommand(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint doesCommandHaveChannelsWithFlags(struct redisCommand *cmd, int flags);\nvoid getKeysFreeResult(getKeysResult *result);\nint extractKeysAndSlot(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result, int *slot);\nint sintercardGetKeys(struct redisCommand *cmd,robj **argv, int argc, getKeysResult *result);\nint zunionInterDiffGetKeys(struct redisCommand *cmd,robj **argv, int argc, getKeysResult *result);\nint zunionInterDiffStoreGetKeys(struct redisCommand *cmd,robj **argv, int argc, getKeysResult *result);\nint evalGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint functionGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint sortGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint pfmergeGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint sortROGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint migrateGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint georadiusGetKeys(struct redisCommand *cmd, robj 
**argv, int argc, getKeysResult *result);\nint xreadGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint lmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint blmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint zmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint bzmpopGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint setGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint delexGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\nint bitfieldGetKeys(struct redisCommand *cmd, robj **argv, int argc, getKeysResult *result);\n\nunsigned short crc16(const char *buf, int len);\n\n/* Sentinel */\nvoid initSentinelConfig(void);\nvoid initSentinel(void);\nvoid sentinelTimer(void);\nconst char *sentinelHandleConfiguration(char **argv, int argc);\nvoid queueSentinelConfig(sds *argv, int argc, int linenum, sds line);\nvoid loadSentinelConfigFromQueue(void);\nvoid sentinelIsRunning(void);\nvoid sentinelCheckConfigFile(void);\nvoid sentinelCommand(client *c);\nvoid sentinelInfoCommand(client *c);\nvoid sentinelPublishCommand(client *c);\nvoid sentinelRoleCommand(client *c);\n\n/* redis-check-rdb & aof */\nint redis_check_rdb(char *rdbfilename, FILE *fp);\nint redis_check_rdb_main(int argc, char **argv, FILE *fp);\nint redis_check_aof_main(int argc, char **argv);\n\n/* Scripting */\nvoid scriptingInit(int setup);\nint ldbRemoveChild(pid_t pid);\nvoid ldbKillForkedSessions(void);\nint ldbPendingChildren(void);\nvoid luaLdbLineHook(lua_State *lua, lua_Debug *ar);\nvoid freeLuaScriptsSync(dict *lua_scripts, list *lua_scripts_lru_list, lua_State *lua);\nvoid freeLuaScriptsAsync(dict *lua_scripts, list *lua_scripts_lru_list, lua_State *lua);\nvoid freeFunctionsAsync(functionsLibCtx *functions_lib_ctx, dict *engines);\nint 
ldbIsEnabled(void);\nvoid ldbLog(sds entry);\nvoid ldbLogRedisReply(char *reply);\nvoid sha1hex(char *digest, char *script, size_t len);\nunsigned long evalScriptsMemoryVM(void);\ndict* evalScriptsDict(void);\nunsigned long evalScriptsMemoryEngine(void);\nuint64_t evalGetCommandFlags(client *c, uint64_t orig_flags);\nuint64_t fcallGetCommandFlags(client *c, uint64_t orig_flags);\nint isInsideYieldingLongCommand(void);\n\ntypedef struct luaScript {\n    uint64_t flags;\n    robj *body;\n    listNode *node;  /* list node in lua_scripts_lru_list list. */\n} luaScript;\n/* Cache of recently used small arguments to avoid malloc calls. */\n#define LUA_CMD_OBJCACHE_SIZE 32\n#define LUA_CMD_OBJCACHE_MAX_LEN 64\n\n/* Blocked clients API */\nvoid processUnblockedClients(void);\nvoid initClientBlockingState(client *c);\nvoid blockClient(client *c, int btype);\nvoid unblockClient(client *c, int queue_for_reprocessing);\nvoid unblockClientOnTimeout(client *c);\nvoid unblockClientOnError(client *c, const char *err_str);\nvoid queueClientForReprocessing(client *c);\nint blockedClientMayTimeout(client *c);\nvoid replyToBlockedClientTimedOut(client *c);\nint getTimeoutFromObjectOrReply(client *c, robj *object, mstime_t *timeout, int unit);\nvoid disconnectAllBlockedClients(void);\nvoid handleClientsBlockedOnKeys(void);\nvoid signalKeyAsReady(redisDb *db, robj *key, int type);\nvoid blockForKeys(client *c, int btype, robj **keys, int numkeys, mstime_t timeout, int unblock_on_nokey);\nvoid blockClientShutdown(client *c);\nvoid blockPostponeClient(client *c);\nvoid blockPostponeClientWithType(client *c, int btype);\nvoid blockForReplication(client *c, mstime_t timeout, long long offset, long numreplicas);\nvoid blockForAofFsync(client *c, mstime_t timeout, long long offset, int numlocal, long numreplicas);\nvoid signalDeletedKeyAsReady(redisDb *db, robj *key, int type);\nvoid updateStatsOnUnblock(client *c, long blocked_us, long reply_us, int had_errors);\nvoid 
scanDatabaseForDeletedKeys(redisDb *emptied, redisDb *replaced_with, struct slotRangeArray *slots);\nvoid totalNumberOfStatefulKeys(unsigned long *blocking_keys, unsigned long *blocking_keys_on_nokey, unsigned long *watched_keys);\nvoid blockedBeforeSleep(void);\n\n/* timeout.c -- Blocked clients timeout and connections timeout. */\nvoid addClientToTimeoutTable(client *c);\nvoid removeClientFromTimeoutTable(client *c);\nvoid handleBlockedClientsTimeout(void);\nint clientsCronHandleTimeout(client *c, mstime_t now_ms);\n\n/* t_stream.c -- Handling of stream data structures */\nvoid handleClaimableStreamEntries(void);\nvoid handleExpiredIdmpEntries(void);\n\n/* expire.c -- Handling of expired keys */\nvoid activeExpireCycle(int type);\nvoid expireSlaveKeys(void);\nvoid rememberSlaveKeyWithExpire(redisDb *db, sds key);\nvoid flushSlaveKeysWithExpireList(void);\nsize_t getSlaveKeyWithExpireCount(void);\nuint64_t activeSubexpires(redisDb *db, int slot, uint32_t maxFieldsToExpire);\n\n/* evict.c -- maxmemory handling and LRU eviction. */\nvoid evictionPoolAlloc(void);\n#define LFU_INIT_VAL 5\nunsigned long LFUGetTimeInMinutes(void);\nuint8_t LFULogIncr(uint8_t value);\nunsigned long LFUDecrAndReturn(robj *o);\n#define EVICT_OK 0\n#define EVICT_RUNNING 1\n#define EVICT_FAIL 2\nint performEvictions(void);\nvoid startEvictionTimeProc(void);\n\n/* Keys hashing / comparison functions for dict.c hash tables. 
*/\nuint64_t dictSdsHash(const void *key);\nuint64_t dictPtrHash(const void *key);\nuint64_t dictSdsCaseHash(const void *key);\nsize_t dictSdsKeyLen(dict *d, const void *key);\nint dictSdsKeyCompare(dictCmpCache *cache, const void *key1, const void *key2);\nint dictSdsKeyCaseCompare(dictCmpCache *cache, const void *key1, const void *key2);\nvoid dictSdsDestructor(dict *d, void *val);\nvoid dictListDestructor(dict *d, void *val);\nvoid *dictSdsDup(dict *d, const void *key);\n\n/* Git SHA1 */\nchar *redisGitSHA1(void);\nchar *redisGitDirty(void);\nuint64_t redisBuildId(void);\nconst char *redisBuildIdRaw(void);\nchar *redisBuildIdString(void);\n\n/* XXH3 hash of a string as hex string */\nsds stringDigest(robj *o);\nint validateHexDigest(client *c, const sds digest);\n\n/* Hotkey tracking */\nhotkeyStats *hotkeyStatsCreate(int count, int duration, int sample_ratio,\n                               struct slotRangeArray *slots, uint64_t tracked_metrics);\nvoid hotkeyStatsRelease(hotkeyStats *hotkeys);\nvoid hotkeyStatsPreCurrentCmd(hotkeyStats *hotkeys, client *c);\nvoid hotkeyStatsUpdateCurrentCmd(hotkeyStats *hotkeys, hotkeyMetrics metrics);\nvoid hotkeyStatsPostCurrentCmd(hotkeyStats *hotkeys);\nsize_t hotkeysGetMemoryUsage(hotkeyStats *hotkeys);\n\n/* Commands prototypes */\nvoid authCommand(client *c);\nvoid pingCommand(client *c);\nvoid echoCommand(client *c);\nvoid commandCommand(client *c);\nvoid commandCountCommand(client *c);\nvoid commandListCommand(client *c);\nvoid commandInfoCommand(client *c);\nvoid commandGetKeysCommand(client *c);\nvoid commandGetKeysAndFlagsCommand(client *c);\nvoid commandHelpCommand(client *c);\nvoid commandDocsCommand(client *c);\nvoid setCommand(client *c);\nvoid setnxCommand(client *c);\nvoid setexCommand(client *c);\nvoid psetexCommand(client *c);\nvoid getCommand(client *c);\nvoid getexCommand(client *c);\nvoid getdelCommand(client *c);\nvoid delCommand(client *c);\nvoid delexCommand(client *c);\nvoid unlinkCommand(client 
*c);\nvoid existsCommand(client *c);\nvoid setbitCommand(client *c);\nvoid getbitCommand(client *c);\nvoid bitfieldCommand(client *c);\nvoid bitfieldroCommand(client *c);\nvoid setrangeCommand(client *c);\nvoid getrangeCommand(client *c);\nvoid incrCommand(client *c);\nvoid decrCommand(client *c);\nvoid incrbyCommand(client *c);\nvoid decrbyCommand(client *c);\nvoid incrbyfloatCommand(client *c);\nvoid selectCommand(client *c);\nvoid swapdbCommand(client *c);\nvoid randomkeyCommand(client *c);\nvoid keysCommand(client *c);\nvoid scanCommand(client *c);\nvoid dbsizeCommand(client *c);\nvoid lastsaveCommand(client *c);\nvoid saveCommand(client *c);\nvoid bgsaveCommand(client *c);\nvoid bgrewriteaofCommand(client *c);\nvoid shutdownCommand(client *c);\nvoid slowlogCommand(client *c);\nvoid moveCommand(client *c);\nvoid copyCommand(client *c);\nvoid renameCommand(client *c);\nvoid renamenxCommand(client *c);\nvoid lpushCommand(client *c);\nvoid rpushCommand(client *c);\nvoid lpushxCommand(client *c);\nvoid rpushxCommand(client *c);\nvoid linsertCommand(client *c);\nvoid lpopCommand(client *c);\nvoid rpopCommand(client *c);\nvoid lmpopCommand(client *c);\nvoid llenCommand(client *c);\nvoid lindexCommand(client *c);\nvoid lrangeCommand(client *c);\nvoid ltrimCommand(client *c);\nvoid typeCommand(client *c);\nvoid lsetCommand(client *c);\nvoid saddCommand(client *c);\nvoid sremCommand(client *c);\nvoid smoveCommand(client *c);\nvoid sismemberCommand(client *c);\nvoid smismemberCommand(client *c);\nvoid scardCommand(client *c);\nvoid spopCommand(client *c);\nvoid srandmemberCommand(client *c);\nvoid sinterCommand(client *c);\nvoid smembersCommand(client *c);\nvoid sinterCardCommand(client *c);\nvoid sinterstoreCommand(client *c);\nvoid sunionCommand(client *c);\nvoid sunionstoreCommand(client *c);\nvoid sdiffCommand(client *c);\nvoid sdiffstoreCommand(client *c);\nvoid sscanCommand(client *c);\nvoid syncCommand(client *c);\nvoid flushdbCommand(client *c);\nvoid 
flushallCommand(client *c);\nvoid trimslotsCommand(client *c);\nvoid sortCommand(client *c);\nvoid sortroCommand(client *c);\nvoid lremCommand(client *c);\nvoid lposCommand(client *c);\nvoid rpoplpushCommand(client *c);\nvoid lmoveCommand(client *c);\nvoid infoCommand(client *c);\nvoid mgetCommand(client *c);\nvoid monitorCommand(client *c);\nvoid expireCommand(client *c);\nvoid expireatCommand(client *c);\nvoid pexpireCommand(client *c);\nvoid pexpireatCommand(client *c);\nvoid getsetCommand(client *c);\nvoid ttlCommand(client *c);\nvoid touchCommand(client *c);\nvoid pttlCommand(client *c);\nvoid expiretimeCommand(client *c);\nvoid pexpiretimeCommand(client *c);\nvoid persistCommand(client *c);\nvoid replicaofCommand(client *c);\nvoid roleCommand(client *c);\nvoid debugCommand(client *c);\nvoid msetCommand(client *c);\nvoid msetnxCommand(client *c);\nvoid msetexCommand(client *c);\nvoid zaddCommand(client *c);\nvoid zincrbyCommand(client *c);\nvoid zrangeCommand(client *c);\nvoid zrangebyscoreCommand(client *c);\nvoid zrevrangebyscoreCommand(client *c);\nvoid zrangebylexCommand(client *c);\nvoid zrevrangebylexCommand(client *c);\nvoid zcountCommand(client *c);\nvoid zlexcountCommand(client *c);\nvoid zrevrangeCommand(client *c);\nvoid zcardCommand(client *c);\nvoid zremCommand(client *c);\nvoid zscoreCommand(client *c);\nvoid zmscoreCommand(client *c);\nvoid zremrangebyscoreCommand(client *c);\nvoid zremrangebylexCommand(client *c);\nvoid zpopminCommand(client *c);\nvoid zpopmaxCommand(client *c);\nvoid zmpopCommand(client *c);\nvoid bzpopminCommand(client *c);\nvoid bzpopmaxCommand(client *c);\nvoid bzmpopCommand(client *c);\nvoid zrandmemberCommand(client *c);\nvoid multiCommand(client *c);\nvoid execCommand(client *c);\nvoid discardCommand(client *c);\nvoid blpopCommand(client *c);\nvoid brpopCommand(client *c);\nvoid blmpopCommand(client *c);\nvoid brpoplpushCommand(client *c);\nvoid blmoveCommand(client *c);\nvoid appendCommand(client *c);\nvoid 
strlenCommand(client *c);\nvoid zrankCommand(client *c);\nvoid zrevrankCommand(client *c);\nvoid hsetCommand(client *c);\nvoid hsetexCommand(client *c);\nvoid hpexpireCommand(client *c);\nvoid hexpireCommand(client *c);\nvoid hpexpireatCommand(client *c);\nvoid hexpireatCommand(client *c);\nvoid httlCommand(client *c);\nvoid hpttlCommand(client *c);\nvoid hexpiretimeCommand(client *c);\nvoid hpexpiretimeCommand(client *c);\nvoid hpersistCommand(client *c);\nvoid hsetnxCommand(client *c);\nvoid hgetCommand(client *c);\nvoid hmgetCommand(client *c);\nvoid hgetexCommand(client *c);\nvoid hgetdelCommand(client *c);\nvoid hdelCommand(client *c);\nvoid hlenCommand(client *c);\nvoid hstrlenCommand(client *c);\nvoid zremrangebyrankCommand(client *c);\nvoid zunionstoreCommand(client *c);\nvoid zinterstoreCommand(client *c);\nvoid zdiffstoreCommand(client *c);\nvoid zunionCommand(client *c);\nvoid zinterCommand(client *c);\nvoid zinterCardCommand(client *c);\nvoid zrangestoreCommand(client *c);\nvoid zdiffCommand(client *c);\nvoid zscanCommand(client *c);\nvoid hkeysCommand(client *c);\nvoid hvalsCommand(client *c);\nvoid hgetallCommand(client *c);\nvoid hexistsCommand(client *c);\nvoid hscanCommand(client *c);\nvoid hrandfieldCommand(client *c);\nvoid configSetCommand(client *c);\nvoid configGetCommand(client *c);\nvoid configResetStatCommand(client *c);\nvoid configRewriteCommand(client *c);\nvoid configHelpCommand(client *c);\nint configExists(const sds name);\nvoid hincrbyCommand(client *c);\nvoid hincrbyfloatCommand(client *c);\nvoid subscribeCommand(client *c);\nvoid unsubscribeCommand(client *c);\nvoid psubscribeCommand(client *c);\nvoid punsubscribeCommand(client *c);\nvoid publishCommand(client *c);\nvoid pubsubCommand(client *c);\nvoid spublishCommand(client *c);\nvoid ssubscribeCommand(client *c);\nvoid sunsubscribeCommand(client *c);\nvoid watchCommand(client *c);\nvoid unwatchCommand(client *c);\nvoid clusterCommand(client *c);\nvoid 
clusterSlotStatsCommand(client *c);\nvoid restoreCommand(client *c);\nvoid migrateCommand(client *c);\nvoid askingCommand(client *c);\nvoid readonlyCommand(client *c);\nvoid readwriteCommand(client *c);\nvoid sflushCommand(client *c);\nint verifyDumpPayload(unsigned char *p, size_t len, uint16_t *rdbver_ptr);\nvoid dumpCommand(client *c);\nvoid clientCommand(client *c);\nvoid helloCommand(client *c);\nvoid clientSetinfoCommand(client *c);\nvoid evalCommand(client *c);\nvoid evalRoCommand(client *c);\nvoid evalShaCommand(client *c);\nvoid evalShaRoCommand(client *c);\nvoid scriptCommand(client *c);\nvoid fcallCommand(client *c);\nvoid fcallroCommand(client *c);\nvoid functionLoadCommand(client *c);\nvoid functionDeleteCommand(client *c);\nvoid functionKillCommand(client *c);\nvoid functionStatsCommand(client *c);\nvoid functionListCommand(client *c);\nvoid functionHelpCommand(client *c);\nvoid functionFlushCommand(client *c);\nvoid functionRestoreCommand(client *c);\nvoid functionDumpCommand(client *c);\nvoid timeCommand(client *c);\nvoid bitopCommand(client *c);\nvoid bitcountCommand(client *c);\nvoid bitposCommand(client *c);\nvoid replconfCommand(client *c);\nvoid waitCommand(client *c);\nvoid waitaofCommand(client *c);\nvoid georadiusbymemberCommand(client *c);\nvoid georadiusbymemberroCommand(client *c);\nvoid georadiusCommand(client *c);\nvoid georadiusroCommand(client *c);\nvoid geoaddCommand(client *c);\nvoid geohashCommand(client *c);\nvoid geoposCommand(client *c);\nvoid geodistCommand(client *c);\nvoid geosearchCommand(client *c);\nvoid geosearchstoreCommand(client *c);\nvoid pfselftestCommand(client *c);\nvoid pfaddCommand(client *c);\nvoid pfcountCommand(client *c);\nvoid pfmergeCommand(client *c);\nvoid pfdebugCommand(client *c);\nvoid latencyCommand(client *c);\nvoid moduleCommand(client *c);\nvoid securityWarningCommand(client *c);\nvoid xaddCommand(client *c);\nvoid xrangeCommand(client *c);\nvoid xrevrangeCommand(client *c);\nvoid 
xlenCommand(client *c);\nvoid xreadCommand(client *c);\nvoid xgroupCommand(client *c);\nvoid xsetidCommand(client *c);\nvoid xidmprecordCommand(client *c);\nvoid xackCommand(client *c);\nvoid xnackCommand(client *c);\nvoid xackdelCommand(client *c);\nvoid xpendingCommand(client *c);\nvoid xclaimCommand(client *c);\nvoid xautoclaimCommand(client *c);\nvoid xinfoCommand(client *c);\nvoid xcfgsetCommand(client *c);\nvoid xdelCommand(client *c);\nvoid xdelexCommand(client *c);\nvoid xtrimCommand(client *c);\nvoid lolwutCommand(client *c);\nvoid aclCommand(client *c);\nvoid hotkeysCommand(client *c);\nvoid lcsCommand(client *c);\nvoid quitCommand(client *c);\nvoid resetCommand(client *c);\nvoid failoverCommand(client *c);\nvoid digestCommand(client *c);\nvoid gcraCommand(client *c);\nvoid gcraSetValueCommand(client *c);\n\n#if defined(__GNUC__)\nvoid *calloc(size_t count, size_t size) __attribute__ ((deprecated));\nvoid free(void *ptr) __attribute__ ((deprecated));\nvoid *malloc(size_t size) __attribute__ ((deprecated));\nvoid *realloc(void *ptr, size_t size) __attribute__ ((deprecated));\n#endif\n\n/* Debugging stuff */\nvoid _serverAssertWithInfo(const client *c, const robj *o, const char *estr, const char *file, int line);\nvoid _serverAssert(const char *estr, const char *file, int line);\n#ifdef __GNUC__\nvoid _serverPanic(const char *file, int line, const char *msg, ...)\n    __attribute__ ((format (printf, 3, 4)));\n#else\nvoid _serverPanic(const char *file, int line, const char *msg, ...);\n#endif\nvoid serverLogObjectDebugInfo(const robj *o);\nvoid setupDebugSigHandlers(void);\nvoid setupSigSegvHandler(void);\nvoid removeSigSegvHandlers(void);\nconst char *getSafeInfoString(const char *s, size_t len, char **tmp);\ndict *genInfoSectionDict(robj **argv, int argc, char **defaults, int *out_all, int *out_everything);\nvoid releaseInfoSectionDict(dict *sec);\nsds genRedisInfoString(dict *section_dict, int all_sections, int everything);\nsds genModulesInfoString(sds 
info);\nvoid applyWatchdogPeriod(void);\nvoid watchdogScheduleSignal(int period);\nvoid serverLogHexDump(int level, char *descr, void *value, size_t len);\nint memtest_preserving_test(unsigned long *m, size_t bytes, int passes);\nvoid mixDigest(unsigned char *digest, const void *ptr, size_t len);\nvoid xorDigest(unsigned char *digest, const void *ptr, size_t len);\nsds catSubCommandFullname(const char *parent_name, const char *sub_name);\nvoid commandAddSubcommand(struct redisCommand *parent, struct redisCommand *subcommand, const char *declared_name);\nvoid debugDelay(int usec);\nvoid killThreads(void);\nvoid makeThreadKillable(void);\nvoid swapMainDbWithTempDb(redisDb *tempDb);\nsds getVersion(void);\nvoid debugPauseProcess(void);\n\n/* Log redaction helpers: return \"*redacted*\" when hide-user-data-from-log is on. */\nstatic inline const char *redactLogCstr(const char *s) {\n    return server.hide_user_data_from_log ? \"*redacted*\" : (s ? s : \"(null)\");\n}\n\n/* Use macro for checking log level to avoid evaluating arguments in cases log\n * should be ignored due to low level. */\n#define serverLog(level, ...) do {\\\n        if (((level)&0xff) < server.verbosity) break;\\\n        _serverLog(level, __VA_ARGS__);\\\n    } while(0)\n\n#define redisDebug(fmt, ...) \\\n    printf(\"DEBUG %s:%d > \" fmt \"\\n\", __FILE__, __LINE__, __VA_ARGS__)\n#define redisDebugMark() \\\n    printf(\"-- MARK %s:%d --\\n\", __FILE__, __LINE__)\n\nint iAmMaster(void);\n\n#define STRINGIFY_(x) #x\n#define STRINGIFY(x) STRINGIFY_(x)\n\n#endif\n"
  },
  {
    "path": "src/setcpuaffinity.c",
    "content": "/* ==========================================================================\n * setcpuaffinity.c - Linux/BSD setcpuaffinity.\n * --------------------------------------------------------------------------\n * Copyright (C) 2020  zhenwei pi\n *\n * Permission is hereby granted, free of charge, to any person obtaining a\n * copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to permit\n * persons to whom the Software is furnished to do so, subject to the\n * following conditions:\n *\n * The above copyright notice and this permission notice shall be included\n * in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN\n * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,\n * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE\n * USE OR OTHER DEALINGS IN THE SOFTWARE.\n * ==========================================================================\n */\n#ifndef _GNU_SOURCE\n#define _GNU_SOURCE\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#ifdef __linux__\n#include <sched.h>\n#endif\n#ifdef __FreeBSD__\n#include <sys/param.h>\n#include <sys/cpuset.h>\n#endif\n#ifdef __DragonFly__\n#include <pthread.h>\n#include <pthread_np.h>\n#endif\n#ifdef __NetBSD__\n#include <pthread.h>\n#include <sched.h>\n#endif\n#include \"config.h\"\n\n#ifdef USE_SETCPUAFFINITY\nstatic const char *next_token(const char *q,  int sep) {\n    if (q)\n        q = strchr(q, sep);\n    if (q)\n        q++;\n\n    return q;\n}\n\nstatic int next_num(const char *str, char **end, int *result) {\n    if (!str || *str == '\\0' || !isdigit(*str))\n        return -1;\n\n    *result = strtoul(str, end, 10);\n    if (str == *end)\n        return -1;\n\n    return 0;\n}\n\n/* Set the current thread's CPU affinity according to a cpu list. This works\n * like the taskset command (the cpulist parsing logic is adapted from\n * util-linux). Examples of accepted lists: \"0,2,3\", \"0,2-3\", \"0-20:2\". 
*/\nvoid setcpuaffinity(const char *cpulist) {\n    const char *p, *q;\n    char *end = NULL;\n#ifdef __linux__\n    cpu_set_t cpuset;\n#endif\n#if defined (__FreeBSD__) || defined(__DragonFly__)\n    cpuset_t cpuset;\n#endif\n#ifdef __NetBSD__\n    cpuset_t *cpuset;\n#endif\n\n    if (!cpulist)\n        return;\n\n#ifndef __NetBSD__\n    CPU_ZERO(&cpuset);\n#else\n    cpuset = cpuset_create();\n#endif\n\n    q = cpulist;\n    while (p = q, q = next_token(q, ','), p) {\n        int a, b, s;\n        const char *c1, *c2;\n\n        if (next_num(p, &end, &a) != 0)\n            return;\n\n        b = a;\n        s = 1;\n        p = end;\n\n        c1 = next_token(p, '-');\n        c2 = next_token(p, ',');\n\n        if (c1 != NULL && (c2 == NULL || c1 < c2)) {\n            if (next_num(c1, &end, &b) != 0)\n                return;\n\n            c1 = end && *end ? next_token(end, ':') : NULL;\n            if (c1 != NULL && (c2 == NULL || c1 < c2)) {\n                if (next_num(c1, &end, &s) != 0)\n                    return;\n\n                if (s == 0)\n                    return;\n            }\n        }\n\n        if ((a > b))\n            return;\n\n        while (a <= b) {\n#ifndef __NetBSD__\n            CPU_SET(a, &cpuset);\n#else\n            cpuset_set(a, cpuset);\n#endif\n            a += s;\n        }\n    }\n\n    if (end && *end)\n        return;\n\n#ifdef __linux__\n    sched_setaffinity(0, sizeof(cpuset), &cpuset);\n#endif\n#ifdef __FreeBSD__\n    cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_TID, -1, sizeof(cpuset), &cpuset);\n#endif\n#ifdef __DragonFly__\n    pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);\n#endif\n#ifdef __NetBSD__\n    pthread_setaffinity_np(pthread_self(), cpuset_size(cpuset), cpuset);\n    cpuset_destroy(cpuset);\n#endif\n}\n\n#endif /* USE_SETCPUAFFINITY */\n"
  },
  {
    "path": "src/setproctitle.c",
    "content": "/* ==========================================================================\n * setproctitle.c - Linux/Darwin setproctitle.\n * --------------------------------------------------------------------------\n * Copyright (C) 2010  William Ahern\n * Copyright (C) 2013-current  Redis Ltd.\n * Copyright (C) 2013  Stam He\n *\n * Permission is hereby granted, free of charge, to any person obtaining a\n * copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to permit\n * persons to whom the Software is furnished to do so, subject to the\n * following conditions:\n *\n * The above copyright notice and this permission notice shall be included\n * in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN\n * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,\n * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE\n * USE OR OTHER DEALINGS IN THE SOFTWARE.\n * ==========================================================================\n */\n#ifndef _GNU_SOURCE\n#define _GNU_SOURCE\n#endif\n\n#include <stddef.h>\t/* NULL size_t */\n#include <stdarg.h>\t/* va_list va_start va_end */\n#include <stdlib.h>\t/* malloc(3) setenv(3) clearenv(3) setproctitle(3) getprogname(3) */\n#include <stdio.h>\t/* vsnprintf(3) snprintf(3) */\n\n#include <string.h>\t/* strlen(3) strchr(3) strdup(3) memset(3) memcpy(3) */\n\n#include <errno.h>\t/* errno program_invocation_name program_invocation_short_name */\n\n#if !defined(HAVE_SETPROCTITLE)\n#if (defined __NetBSD__ || defined __FreeBSD__ || defined __OpenBSD__ || defined __DragonFly__)\n#define HAVE_SETPROCTITLE 1\n#else\n#define HAVE_SETPROCTITLE 0\n#endif\n#endif\n\n\n#if !HAVE_SETPROCTITLE\n#if (defined __linux || defined __APPLE__)\n\n#ifdef __GLIBC__\n#define HAVE_CLEARENV\n#endif\n\nextern char **environ;\n\nstatic struct {\n\t/* original value */\n\tconst char *arg0;\n\n\t/* title space available */\n\tchar *base, *end;\n\n\t /* pointer to original nul character within base */\n\tchar *nul;\n\n\t_Bool reset;\n\tint error;\n} SPT;\n\n\n#ifndef SPT_MIN\n#define SPT_MIN(a, b) (((a) < (b))? 
(a) : (b))\n#endif\n\nstatic inline size_t spt_min(size_t a, size_t b) {\n\treturn SPT_MIN(a, b);\n} /* spt_min() */\n\n\n/*\n * For discussion on the portability of the various methods, see\n * http://lists.freebsd.org/pipermail/freebsd-stable/2008-June/043136.html\n */\nint spt_clearenv(void) {\n#ifdef HAVE_CLEARENV\n\treturn clearenv();\n#else\n\textern char **environ;\n\tstatic char **tmp;\n\n\tif (!(tmp = malloc(sizeof *tmp)))\n\t\treturn errno;\n\n\ttmp[0]  = NULL;\n\tenviron = tmp;\n\n\treturn 0;\n#endif\n} /* spt_clearenv() */\n\n\nstatic int spt_copyenv(int envc, char *oldenv[]) {\n\textern char **environ;\n\tchar **envcopy = NULL;\n\tchar *eq;\n\tint i, error;\n\tint envsize;\n\n\tif (environ != oldenv)\n\t\treturn 0;\n\n\t/* Copy environ into envcopy before clearing it. Shallow copy is\n\t * enough as clearenv() only clears the environ array.\n\t */\n\tenvsize = (envc + 1) * sizeof(char *);\n\tenvcopy = malloc(envsize);\n\tif (!envcopy)\n\t\treturn ENOMEM;\n\tmemcpy(envcopy, oldenv, envsize);\n\n\t/* Note that the state after clearenv() failure is undefined, but we'll\n\t * just assume an error means it was left unchanged.\n\t */\n\tif ((error = spt_clearenv())) {\n\t\tenviron = oldenv;\n\t\tfree(envcopy);\n\t\treturn error;\n\t}\n\n\t/* Set environ from envcopy */\n\tfor (i = 0; envcopy[i]; i++) {\n\t\tif (!(eq = strchr(envcopy[i], '=')))\n\t\t\tcontinue;\n\n\t\t*eq = '\\0';\n\t\terror = (0 != setenv(envcopy[i], eq + 1, 1))? errno : 0;\n\t\t*eq = '=';\n\n\t\t/* On error, do our best to restore state */\n\t\tif (error) {\n#ifdef HAVE_CLEARENV\n\t\t\t/* We don't assume it is safe to free environ, so we\n\t\t\t * may leak it. 
As clearenv() was shallow using envcopy\n\t\t\t * here is safe.\n\t\t\t */\n\t\t\tenviron = envcopy;\n#else\n\t\t\tfree(envcopy);\n\t\t\tfree(environ);  /* Safe to free, we have just alloc'd it */\n\t\t\tenviron = oldenv;\n#endif\n\t\t\treturn error;\n\t\t}\n\t}\n\n\tfree(envcopy);\n\treturn 0;\n} /* spt_copyenv() */\n\n\nstatic int spt_copyargs(int argc, char *argv[]) {\n\tchar *tmp;\n\tint i;\n\n\tfor (i = 1; i < argc || (i >= argc && argv[i]); i++) {\n\t\tif (!argv[i])\n\t\t\tcontinue;\n\n\t\tif (!(tmp = strdup(argv[i])))\n\t\t\treturn errno;\n\n\t\targv[i] = tmp;\n\t}\n\n\treturn 0;\n} /* spt_copyargs() */\n\n/* Initialize and populate SPT to allow a future setproctitle()\n * call.\n *\n * As setproctitle() basically needs to overwrite argv[0], we're\n * trying to determine what is the largest contiguous block\n * starting at argv[0] we can use for this purpose.\n *\n * As this range will overwrite some or all of the argv and environ\n * strings, a deep copy of these two arrays is performed.\n */\nvoid spt_init(int argc, char *argv[]) {\n        char **envp = environ;\n\tchar *base, *end, *nul, *tmp;\n\tint i, error, envc;\n\n\tif (!(base = argv[0]))\n\t\treturn;\n\n\t/* We start with end pointing at the end of argv[0] */\n\tnul = &base[strlen(base)];\n\tend = nul + 1;\n\n\t/* Attempt to extend end as far as we can, while making sure\n\t * that the range between base and end is only allocated to\n\t * argv, or anything that immediately follows argv (presumably\n\t * envp).\n\t */\n\tfor (i = 0; i < argc || (i >= argc && argv[i]); i++) {\n\t\tif (!argv[i] || argv[i] < end)\n\t\t\tcontinue;\n\n\t\tif (end >= argv[i] && end <= argv[i] + strlen(argv[i]))\n\t\t\tend = argv[i] + strlen(argv[i]) + 1;\n\t}\n\n\t/* In case the envp array was not an immediate extension to argv,\n\t * scan it explicitly.\n\t */\n\tfor (i = 0; envp[i]; i++) {\n\t\tif (envp[i] < end)\n\t\t\tcontinue;\n\n\t\tif (end >= envp[i] && end <= envp[i] + strlen(envp[i]))\n\t\t\tend = envp[i] + 
strlen(envp[i]) + 1;\n\t}\n\tenvc = i;\n\n\t/* We're going to deep copy argv[], but argv[0] will still point to\n\t * the old memory for the purpose of updating the title so we need\n\t * to keep the original value elsewhere.\n\t */\n\tif (!(SPT.arg0 = strdup(argv[0])))\n\t\tgoto syerr;\n\n#if __linux__\n\tif (!(tmp = strdup(program_invocation_name)))\n\t\tgoto syerr;\n\n\tprogram_invocation_name = tmp;\n\n\tif (!(tmp = strdup(program_invocation_short_name)))\n\t\tgoto syerr;\n\n\tprogram_invocation_short_name = tmp;\n#elif __APPLE__\n\tif (!(tmp = strdup(getprogname())))\n\t\tgoto syerr;\n\n\tsetprogname(tmp);\n#endif\n\n    /* Now make a full deep copy of the environment and argv[] */\n\tif ((error = spt_copyenv(envc, envp)))\n\t\tgoto error;\n\n\tif ((error = spt_copyargs(argc, argv)))\n\t\tgoto error;\n\n\tSPT.nul  = nul;\n\tSPT.base = base;\n\tSPT.end  = end;\n\n\treturn;\nsyerr:\n\terror = errno;\nerror:\n\tSPT.error = error;\n} /* spt_init() */\n\n\n#ifndef SPT_MAXTITLE\n#define SPT_MAXTITLE 255\n#endif\n\nvoid setproctitle(const char *fmt, ...) 
{\n\tchar buf[SPT_MAXTITLE + 1]; /* use buffer in case argv[0] is passed */\n\tva_list ap;\n\tchar *nul;\n\tint len, error;\n\n\tif (!SPT.base)\n\t\treturn;\n\n\tif (fmt) {\n\t\tva_start(ap, fmt);\n\t\tlen = vsnprintf(buf, sizeof buf, fmt, ap);\n\t\tva_end(ap);\n\t} else {\n\t\tlen = snprintf(buf, sizeof buf, \"%s\", SPT.arg0);\n\t}\n\n\tif (len <= 0)\n\t\t{ error = errno; goto error; }\n\n\tif (!SPT.reset) {\n\t\tmemset(SPT.base, 0, SPT.end - SPT.base);\n\t\tSPT.reset = 1;\n\t} else {\n\t\tmemset(SPT.base, 0, spt_min(sizeof buf, SPT.end - SPT.base));\n\t}\n\n\tlen = spt_min(len, spt_min(sizeof buf, SPT.end - SPT.base) - 1);\n\tmemcpy(SPT.base, buf, len);\n\tnul = &SPT.base[len];\n\n\tif (nul < SPT.nul) {\n\t\t*SPT.nul = '.';\n\t} else if (nul == SPT.nul && &nul[1] < SPT.end) {\n\t\t*SPT.nul = ' ';\n\t\t*++nul = '\\0';\n\t}\n\n\treturn;\nerror:\n\tSPT.error = error;\n} /* setproctitle() */\n\n\n#endif /* __linux || __APPLE__ */\n#endif /* !HAVE_SETPROCTITLE */\n\n#ifdef SETPROCTITLE_TEST_MAIN\nint main(int argc, char *argv[]) {\n\tspt_init(argc, argv);\n\n\tprintf(\"SPT.arg0: [%p] '%s'\\n\", SPT.arg0, SPT.arg0);\n\tprintf(\"SPT.base: [%p] '%s'\\n\", SPT.base, SPT.base);\n\tprintf(\"SPT.end: [%p] (%d bytes after base)'\\n\", SPT.end, (int) (SPT.end - SPT.base));\n\treturn 0;\n}\n#endif\n"
  },
  {
    "path": "src/sha1.c",
    "content": "\n/* from valgrind tests */\n\n/* ================ sha1.c ================ */\n/*\nSHA-1 in C\nBy Steve Reid <steve@edmweb.com>\n100% Public Domain\n\nTest Vectors (from FIPS PUB 180-1)\n\"abc\"\n  A9993E36 4706816A BA3E2571 7850C26C 9CD0D89D\n\"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq\"\n  84983E44 1C3BD26E BAAE4AA1 F95129E5 E54670F1\nA million repetitions of \"a\"\n  34AA973C D4C4DAA4 F61EEB2B DBAD2731 6534016F\n*/\n\n/* #define LITTLE_ENDIAN * This should be #define'd already, if true. */\n/* #define SHA1HANDSOFF * Copies data before messing with it. */\n\n#define SHA1HANDSOFF\n\n#include <stdio.h>\n#include <string.h>\n#include <stdint.h>\n#include \"solarisfixes.h\"\n#include \"sha1.h\"\n#include \"config.h\"\n\n#define rol(value, bits) (((value) << (bits)) | ((value) >> (32 - (bits))))\n\n/* blk0() and blk() perform the initial expand. */\n/* I got the idea of expanding during the round function from SSLeay */\n#if BYTE_ORDER == LITTLE_ENDIAN\n#define blk0(i) (block->l[i] = (rol(block->l[i],24)&0xFF00FF00) \\\n    |(rol(block->l[i],8)&0x00FF00FF))\n#elif BYTE_ORDER == BIG_ENDIAN\n#define blk0(i) block->l[i]\n#else\n#error \"Endianness not defined!\"\n#endif\n#define blk(i) (block->l[i&15] = rol(block->l[(i+13)&15]^block->l[(i+8)&15] \\\n    ^block->l[(i+2)&15]^block->l[i&15],1))\n\n/* (R0+R1), R2, R3, R4 are the different operations used in SHA1 */\n#define R0(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk0(i)+0x5A827999+rol(v,5);w=rol(w,30);\n#define R1(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk(i)+0x5A827999+rol(v,5);w=rol(w,30);\n#define R2(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0x6ED9EBA1+rol(v,5);w=rol(w,30);\n#define R3(v,w,x,y,z,i) z+=(((w|x)&y)|(w&x))+blk(i)+0x8F1BBCDC+rol(v,5);w=rol(w,30);\n#define R4(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0xCA62C1D6+rol(v,5);w=rol(w,30);\n\n\n/* Hash a single 512-bit block. This is the core of the algorithm. 
*/\n\nvoid SHA1Transform(uint32_t state[5], const unsigned char buffer[64])\n{\n    uint32_t a, b, c, d, e;\n    typedef union {\n        unsigned char c[64];\n        uint32_t l[16];\n    } CHAR64LONG16;\n#ifdef SHA1HANDSOFF\n    CHAR64LONG16 block[1];  /* use array to appear as a pointer */\n    memcpy(block, buffer, 64);\n#else\n    /* The following had better never be used because it causes the\n     * pointer-to-const buffer to be cast into a pointer to non-const.\n     * And the result is written through.  I threw a \"const\" in, hoping\n     * this will cause a diagnostic.\n     */\n    CHAR64LONG16* block = (const CHAR64LONG16*)buffer;\n#endif\n    /* Copy context->state[] to working vars */\n    a = state[0];\n    b = state[1];\n    c = state[2];\n    d = state[3];\n    e = state[4];\n    /* 4 rounds of 20 operations each. Loop unrolled. */\n    R0(a,b,c,d,e, 0); R0(e,a,b,c,d, 1); R0(d,e,a,b,c, 2); R0(c,d,e,a,b, 3);\n    R0(b,c,d,e,a, 4); R0(a,b,c,d,e, 5); R0(e,a,b,c,d, 6); R0(d,e,a,b,c, 7);\n    R0(c,d,e,a,b, 8); R0(b,c,d,e,a, 9); R0(a,b,c,d,e,10); R0(e,a,b,c,d,11);\n    R0(d,e,a,b,c,12); R0(c,d,e,a,b,13); R0(b,c,d,e,a,14); R0(a,b,c,d,e,15);\n    R1(e,a,b,c,d,16); R1(d,e,a,b,c,17); R1(c,d,e,a,b,18); R1(b,c,d,e,a,19);\n    R2(a,b,c,d,e,20); R2(e,a,b,c,d,21); R2(d,e,a,b,c,22); R2(c,d,e,a,b,23);\n    R2(b,c,d,e,a,24); R2(a,b,c,d,e,25); R2(e,a,b,c,d,26); R2(d,e,a,b,c,27);\n    R2(c,d,e,a,b,28); R2(b,c,d,e,a,29); R2(a,b,c,d,e,30); R2(e,a,b,c,d,31);\n    R2(d,e,a,b,c,32); R2(c,d,e,a,b,33); R2(b,c,d,e,a,34); R2(a,b,c,d,e,35);\n    R2(e,a,b,c,d,36); R2(d,e,a,b,c,37); R2(c,d,e,a,b,38); R2(b,c,d,e,a,39);\n    R3(a,b,c,d,e,40); R3(e,a,b,c,d,41); R3(d,e,a,b,c,42); R3(c,d,e,a,b,43);\n    R3(b,c,d,e,a,44); R3(a,b,c,d,e,45); R3(e,a,b,c,d,46); R3(d,e,a,b,c,47);\n    R3(c,d,e,a,b,48); R3(b,c,d,e,a,49); R3(a,b,c,d,e,50); R3(e,a,b,c,d,51);\n    R3(d,e,a,b,c,52); R3(c,d,e,a,b,53); R3(b,c,d,e,a,54); R3(a,b,c,d,e,55);\n    R3(e,a,b,c,d,56); R3(d,e,a,b,c,57); R3(c,d,e,a,b,58); 
R3(b,c,d,e,a,59);\n    R4(a,b,c,d,e,60); R4(e,a,b,c,d,61); R4(d,e,a,b,c,62); R4(c,d,e,a,b,63);\n    R4(b,c,d,e,a,64); R4(a,b,c,d,e,65); R4(e,a,b,c,d,66); R4(d,e,a,b,c,67);\n    R4(c,d,e,a,b,68); R4(b,c,d,e,a,69); R4(a,b,c,d,e,70); R4(e,a,b,c,d,71);\n    R4(d,e,a,b,c,72); R4(c,d,e,a,b,73); R4(b,c,d,e,a,74); R4(a,b,c,d,e,75);\n    R4(e,a,b,c,d,76); R4(d,e,a,b,c,77); R4(c,d,e,a,b,78); R4(b,c,d,e,a,79);\n    /* Add the working vars back into context.state[] */\n    state[0] += a;\n    state[1] += b;\n    state[2] += c;\n    state[3] += d;\n    state[4] += e;\n    /* Wipe variables */\n    a = b = c = d = e = 0;\n#ifdef SHA1HANDSOFF\n    memset(block, '\\0', sizeof(block));\n#endif\n}\n\n\n/* SHA1Init - Initialize new context */\n\nvoid SHA1Init(SHA1_CTX* context)\n{\n    /* SHA1 initialization constants */\n    context->state[0] = 0x67452301;\n    context->state[1] = 0xEFCDAB89;\n    context->state[2] = 0x98BADCFE;\n    context->state[3] = 0x10325476;\n    context->state[4] = 0xC3D2E1F0;\n    context->count[0] = context->count[1] = 0;\n}\n\n/* This source code is referenced from\n * https://github.com/libevent/libevent/commit/e1d7d3e40a7fd50348d849046fbfd9bf976e643c */\n#if defined(__GNUC__) && __GNUC__ >= 12\n#pragma GCC diagnostic push\n/* Ignore the case when SHA1Transform() called with 'char *', that code passed\n * buffer of 64 bytes anyway (at least now) */\n#pragma GCC diagnostic ignored \"-Wstringop-overread\"\n#endif\n\n/* Run your data through this. 
*/\n\nvoid SHA1Update(SHA1_CTX* context, const unsigned char* data, uint32_t len)\n{\n    uint32_t i, j;\n\n    j = context->count[0];\n    if ((context->count[0] += len << 3) < j)\n        context->count[1]++;\n    context->count[1] += (len>>29);\n    j = (j >> 3) & 63;\n    if ((j + len) > 63) {\n        memcpy(&context->buffer[j], data, (i = 64-j));\n        SHA1Transform(context->state, context->buffer);\n        for ( ; i + 63 < len; i += 64) {\n            SHA1Transform(context->state, &data[i]);\n        }\n        j = 0;\n    }\n    else i = 0;\n    memcpy(&context->buffer[j], &data[i], len - i);\n}\n\n#if defined(__GNUC__) && __GNUC__ >= 12\n#pragma GCC diagnostic pop\n#endif\n\n/* Add padding and return the message digest. */\n\nvoid SHA1Final(unsigned char digest[20], SHA1_CTX* context)\n{\n    unsigned i;\n    unsigned char finalcount[8];\n    unsigned char c;\n\n#if 0\t/* untested \"improvement\" by DHR */\n    /* Convert context->count to a sequence of bytes\n     * in finalcount.  Second element first, but\n     * big-endian order within element.\n     * But we do it all backwards.\n     */\n    unsigned char *fcp = &finalcount[8];\n\n    for (i = 0; i < 2; i++)\n       {\n        uint32_t t = context->count[i];\n        int j;\n\n        for (j = 0; j < 4; t >>= 8, j++)\n\t          *--fcp = (unsigned char) t;\n    }\n#else\n    for (i = 0; i < 8; i++) {\n        finalcount[i] = (unsigned char)((context->count[(i >= 4 ? 
0 : 1)]\n         >> ((3-(i & 3)) * 8) ) & 255);  /* Endian independent */\n    }\n#endif\n    c = 0200;\n    SHA1Update(context, &c, 1);\n    while ((context->count[0] & 504) != 448) {\n\tc = 0000;\n        SHA1Update(context, &c, 1);\n    }\n    SHA1Update(context, finalcount, 8);  /* Should cause a SHA1Transform() */\n    for (i = 0; i < 20; i++) {\n        digest[i] = (unsigned char)\n         ((context->state[i>>2] >> ((3-(i & 3)) * 8) ) & 255);\n    }\n    /* Wipe variables */\n    memset(context, '\\0', sizeof(*context));\n    memset(&finalcount, '\\0', sizeof(finalcount));\n}\n/* ================ end of sha1.c ================ */\n\n#ifdef REDIS_TEST\n#define BUFSIZE 4096\n\n#define UNUSED(x) (void)(x)\nint sha1Test(int argc, char **argv, int flags)\n{\n    SHA1_CTX ctx;\n    unsigned char hash[20], buf[BUFSIZE];\n    int i;\n\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    for(i=0;i<BUFSIZE;i++)\n        buf[i] = i;\n\n    SHA1Init(&ctx);\n    for(i=0;i<1000;i++)\n        SHA1Update(&ctx, buf, BUFSIZE);\n    SHA1Final(hash, &ctx);\n\n    printf(\"SHA1=\");\n    for(i=0;i<20;i++)\n        printf(\"%02x\", hash[i]);\n    printf(\"\\n\");\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/sha1.h",
    "content": "#ifndef SHA1_H\n#define SHA1_H\n/* ================ sha1.h ================ */\n/*\nSHA-1 in C\nBy Steve Reid <steve@edmweb.com>\n100% Public Domain\n*/\n\ntypedef struct {\n    uint32_t state[5];\n    uint32_t count[2];\n    unsigned char buffer[64];\n} SHA1_CTX;\n\nvoid SHA1Transform(uint32_t state[5], const unsigned char buffer[64]);\nvoid SHA1Init(SHA1_CTX* context);\n/* 'noinline' attribute is intended to prevent the `-Wstringop-overread` warning\n * when using gcc-12 later with LTO enabled. It may be removed once the\n * bug[https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80922] is fixed. */\n__attribute__((noinline)) void SHA1Update(SHA1_CTX* context, const unsigned char* data, uint32_t len);\nvoid SHA1Final(unsigned char digest[20], SHA1_CTX* context);\n\n#ifdef REDIS_TEST\nint sha1Test(int argc, char **argv, int flags);\n#endif\n#endif\n"
  },
  {
    "path": "src/sha256.c",
    "content": "/*********************************************************************\n* Filename:   sha256.c\n* Author:     Brad Conte (brad AT bradconte.com)\n* Copyright:\n* Disclaimer: This code is presented \"as is\" without any guarantees.\n* Details:    Implementation of the SHA-256 hashing algorithm.\n              SHA-256 is one of the three algorithms in the SHA2\n              specification. The others, SHA-384 and SHA-512, are not\n              offered in this implementation.\n              Algorithm specification can be found here:\n               * http://csrc.nist.gov/publications/fips/fips180-2/fips180-2withchangenotice.pdf\n              This implementation uses little endian byte order.\n*********************************************************************/\n\n/*************************** HEADER FILES ***************************/\n#include <stdlib.h>\n#include <string.h>\n#include \"sha256.h\"\n\n/****************************** MACROS ******************************/\n#define ROTLEFT(a,b) (((a) << (b)) | ((a) >> (32-(b))))\n#define ROTRIGHT(a,b) (((a) >> (b)) | ((a) << (32-(b))))\n\n#define CH(x,y,z) (((x) & (y)) ^ (~(x) & (z)))\n#define MAJ(x,y,z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))\n#define EP0(x) (ROTRIGHT(x,2) ^ ROTRIGHT(x,13) ^ ROTRIGHT(x,22))\n#define EP1(x) (ROTRIGHT(x,6) ^ ROTRIGHT(x,11) ^ ROTRIGHT(x,25))\n#define SIG0(x) (ROTRIGHT(x,7) ^ ROTRIGHT(x,18) ^ ((x) >> 3))\n#define SIG1(x) (ROTRIGHT(x,17) ^ ROTRIGHT(x,19) ^ ((x) >> 10))\n\n/**************************** VARIABLES *****************************/\nstatic const WORD k[64] = 
{\n\t0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5,0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5,\n\t0xd807aa98,0x12835b01,0x243185be,0x550c7dc3,0x72be5d74,0x80deb1fe,0x9bdc06a7,0xc19bf174,\n\t0xe49b69c1,0xefbe4786,0x0fc19dc6,0x240ca1cc,0x2de92c6f,0x4a7484aa,0x5cb0a9dc,0x76f988da,\n\t0x983e5152,0xa831c66d,0xb00327c8,0xbf597fc7,0xc6e00bf3,0xd5a79147,0x06ca6351,0x14292967,\n\t0x27b70a85,0x2e1b2138,0x4d2c6dfc,0x53380d13,0x650a7354,0x766a0abb,0x81c2c92e,0x92722c85,\n\t0xa2bfe8a1,0xa81a664b,0xc24b8b70,0xc76c51a3,0xd192e819,0xd6990624,0xf40e3585,0x106aa070,\n\t0x19a4c116,0x1e376c08,0x2748774c,0x34b0bcb5,0x391c0cb3,0x4ed8aa4a,0x5b9cca4f,0x682e6ff3,\n\t0x748f82ee,0x78a5636f,0x84c87814,0x8cc70208,0x90befffa,0xa4506ceb,0xbef9a3f7,0xc67178f2\n};\n\n/*********************** FUNCTION DEFINITIONS ***********************/\nvoid sha256_transform(SHA256_CTX *ctx, const BYTE data[])\n{\n\tWORD a, b, c, d, e, f, g, h, i, j, t1, t2, m[64];\n\n    for (i = 0, j = 0; i < 16; ++i, j += 4) {\n        m[i] = ((WORD) data[j + 0] << 24) |\n               ((WORD) data[j + 1] << 16) |\n               ((WORD) data[j + 2] << 8) |\n               ((WORD) data[j + 3]);\n    }\n\n\tfor ( ; i < 64; ++i)\n\t\tm[i] = SIG1(m[i - 2]) + m[i - 7] + SIG0(m[i - 15]) + m[i - 16];\n\n\ta = ctx->state[0];\n\tb = ctx->state[1];\n\tc = ctx->state[2];\n\td = ctx->state[3];\n\te = ctx->state[4];\n\tf = ctx->state[5];\n\tg = ctx->state[6];\n\th = ctx->state[7];\n\n\tfor (i = 0; i < 64; ++i) {\n\t\tt1 = h + EP1(e) + CH(e,f,g) + k[i] + m[i];\n\t\tt2 = EP0(a) + MAJ(a,b,c);\n\t\th = g;\n\t\tg = f;\n\t\tf = e;\n\t\te = d + t1;\n\t\td = c;\n\t\tc = b;\n\t\tb = a;\n\t\ta = t1 + t2;\n\t}\n\n\tctx->state[0] += a;\n\tctx->state[1] += b;\n\tctx->state[2] += c;\n\tctx->state[3] += d;\n\tctx->state[4] += e;\n\tctx->state[5] += f;\n\tctx->state[6] += g;\n\tctx->state[7] += h;\n}\n\nvoid sha256_init(SHA256_CTX *ctx)\n{\n\tctx->datalen = 0;\n\tctx->bitlen = 0;\n\tctx->state[0] = 0x6a09e667;\n\tctx->state[1] = 
0xbb67ae85;\n\tctx->state[2] = 0x3c6ef372;\n\tctx->state[3] = 0xa54ff53a;\n\tctx->state[4] = 0x510e527f;\n\tctx->state[5] = 0x9b05688c;\n\tctx->state[6] = 0x1f83d9ab;\n\tctx->state[7] = 0x5be0cd19;\n}\n\nvoid sha256_update(SHA256_CTX *ctx, const BYTE data[], size_t len)\n{\n\tWORD i;\n\n\tfor (i = 0; i < len; ++i) {\n\t\tctx->data[ctx->datalen] = data[i];\n\t\tctx->datalen++;\n\t\tif (ctx->datalen == 64) {\n\t\t\tsha256_transform(ctx, ctx->data);\n\t\t\tctx->bitlen += 512;\n\t\t\tctx->datalen = 0;\n\t\t}\n\t}\n}\n\nvoid sha256_final(SHA256_CTX *ctx, BYTE hash[])\n{\n\tWORD i;\n\n\ti = ctx->datalen;\n\n\t// Pad whatever data is left in the buffer.\n\tif (ctx->datalen < 56) {\n\t\tctx->data[i++] = 0x80;\n\t\twhile (i < 56)\n\t\t\tctx->data[i++] = 0x00;\n\t}\n\telse {\n\t\tctx->data[i++] = 0x80;\n\t\twhile (i < 64)\n\t\t\tctx->data[i++] = 0x00;\n\t\tsha256_transform(ctx, ctx->data);\n\t\tmemset(ctx->data, 0, 56);\n\t}\n\n\t// Append to the padding the total message's length in bits and transform.\n\tctx->bitlen += ctx->datalen * 8;\n\tctx->data[63] = ctx->bitlen;\n\tctx->data[62] = ctx->bitlen >> 8;\n\tctx->data[61] = ctx->bitlen >> 16;\n\tctx->data[60] = ctx->bitlen >> 24;\n\tctx->data[59] = ctx->bitlen >> 32;\n\tctx->data[58] = ctx->bitlen >> 40;\n\tctx->data[57] = ctx->bitlen >> 48;\n\tctx->data[56] = ctx->bitlen >> 56;\n\tsha256_transform(ctx, ctx->data);\n\n\t// Since this implementation uses little endian byte ordering and SHA uses big endian,\n\t// reverse all the bytes when copying the final state to the output hash.\n\tfor (i = 0; i < 4; ++i) {\n\t\thash[i]      = (ctx->state[0] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 4]  = (ctx->state[1] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 8]  = (ctx->state[2] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 12] = (ctx->state[3] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 16] = (ctx->state[4] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 20] = (ctx->state[5] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 24] = 
(ctx->state[6] >> (24 - i * 8)) & 0x000000ff;\n\t\thash[i + 28] = (ctx->state[7] >> (24 - i * 8)) & 0x000000ff;\n\t}\n}\n"
  },
  {
    "path": "src/sha256.h",
    "content": "/*********************************************************************\n* Filename:   sha256.h\n* Author:     Brad Conte (brad AT bradconte.com)\n* Copyright:\n* Disclaimer: This code is presented \"as is\" without any guarantees.\n* Details:    Defines the API for the corresponding SHA256 implementation.\n*********************************************************************/\n\n#ifndef SHA256_H\n#define SHA256_H\n\n/*************************** HEADER FILES ***************************/\n#include <stddef.h>\n#include <stdint.h>\n\n/****************************** MACROS ******************************/\n#define SHA256_BLOCK_SIZE 32            // SHA256 outputs a 32 byte digest\n\n/**************************** DATA TYPES ****************************/\ntypedef uint8_t BYTE;   // 8-bit byte\ntypedef uint32_t WORD;  // 32-bit word\n\ntypedef struct {\n\tBYTE data[64];\n\tWORD datalen;\n\tunsigned long long bitlen;\n\tWORD state[8];\n} SHA256_CTX;\n\n/*********************** FUNCTION DECLARATIONS **********************/\nvoid sha256_init(SHA256_CTX *ctx);\nvoid sha256_update(SHA256_CTX *ctx, const BYTE data[], size_t len);\nvoid sha256_final(SHA256_CTX *ctx, BYTE hash[]);\n\n#endif   // SHA256_H\n"
  },
  {
    "path": "src/siphash.c",
    "content": "/*\n   SipHash reference C implementation\n\n   Copyright (c) 2012-2016 Jean-Philippe Aumasson\n   <jeanphilippe.aumasson@gmail.com>\n   Copyright (c) 2012-2014 Daniel J. Bernstein <djb@cr.yp.to>\n   Copyright (c) 2017-current Redis Ltd.\n\n   To the extent possible under law, the author(s) have dedicated all copyright\n   and related and neighboring rights to this software to the public domain\n   worldwide. This software is distributed without any warranty.\n\n   You should have received a copy of the CC0 Public Domain Dedication along\n   with this software. If not, see\n   <http://creativecommons.org/publicdomain/zero/1.0/>.\n\n   ----------------------------------------------------------------------------\n\n   This version was modified by Salvatore Sanfilippo <antirez@gmail.com>\n   in the following ways:\n\n   1. We use SipHash 1-2. This is not believed to be as strong as the\n      suggested 2-4 variant, but AFAIK there are no trivial attacks\n      against this reduced-rounds version, and it runs at the same speed\n      as the Murmurhash2 we used previously, while the 2-4 variant slowed\n      Redis down by roughly 4%.\n   2. Hard-code rounds in the hope the compiler can optimize it more\n      in this raw form. Anyway we always want the standard 2-4 variant.\n   3. Modify the prototype and implementation so that the function directly\n      returns a uint64_t value, the hash itself, instead of receiving an\n      output buffer. This also means that the output size is set to 8 bytes\n      and the 16-byte output handling code was removed.\n   4. Provide a case insensitive variant to be used when hashing strings that\n      must be considered identical by the hash table regardless of the case.\n      If we didn't have a case insensitive hash function directly, we would\n      need to transform the text in some temporary buffer first, which is\n      costly.\n   5. Remove debugging code.\n   6. 
Modified the original test.c file to be a stand-alone function testing\n      the function in the new form (returning a uint64_t) using just the\n      relevant test vector.\n */\n#include <stdint.h>\n#include <stdio.h>\n#include <string.h>\n#include <ctype.h>\n\n/* Fast tolower()-like function that does not care about locale\n * but just returns a-z instead of A-Z. */\nint siptlw(int c) {\n    if (c >= 'A' && c <= 'Z') {\n        return c+('a'-'A');\n    } else {\n        return c;\n    }\n}\n\n#if defined(__has_attribute)\n#if __has_attribute(no_sanitize)\n#define NO_SANITIZE(sanitizer) __attribute__((no_sanitize(sanitizer)))\n#endif\n#endif\n\n#if !defined(NO_SANITIZE)\n#define NO_SANITIZE(sanitizer)\n#endif\n\n/* Test if the CPU is little endian and supports unaligned accesses.\n * These two conditions let us speed up the function, and they happen\n * to hold on most x86 servers. */\n#if defined(__X86_64__) || defined(__x86_64__) || defined (__i386__) \\\n\t|| defined (__aarch64__) || defined (__arm64__) \\\n    || (defined(__riscv) && defined(__riscv_zicclsm))\n#define UNALIGNED_LE_CPU\n#endif\n\n#define ROTL(x, b) (uint64_t)(((x) << (b)) | ((x) >> (64 - (b))))\n\n#define U32TO8_LE(p, v)                                                        \\\n    (p)[0] = (uint8_t)((v));                                                   \\\n    (p)[1] = (uint8_t)((v) >> 8);                                              \\\n    (p)[2] = (uint8_t)((v) >> 16);                                             \\\n    (p)[3] = (uint8_t)((v) >> 24);\n\n#define U64TO8_LE(p, v)                                                        \\\n    U32TO8_LE((p), (uint32_t)((v)));                                           \\\n    U32TO8_LE((p) + 4, (uint32_t)((v) >> 32));\n\n#ifdef UNALIGNED_LE_CPU\n#define U8TO64_LE(p) (*((uint64_t*)(p)))\n#else\n#define U8TO64_LE(p)                                                           \\\n    (((uint64_t)((p)[0])) | ((uint64_t)((p)[1]) << 8) |             
           \\\n     ((uint64_t)((p)[2]) << 16) | ((uint64_t)((p)[3]) << 24) |                 \\\n     ((uint64_t)((p)[4]) << 32) | ((uint64_t)((p)[5]) << 40) |                 \\\n     ((uint64_t)((p)[6]) << 48) | ((uint64_t)((p)[7]) << 56))\n#endif\n\n#define U8TO64_LE_NOCASE(p)                                                    \\\n    (((uint64_t)(siptlw((p)[0]))) |                                           \\\n     ((uint64_t)(siptlw((p)[1])) << 8) |                                      \\\n     ((uint64_t)(siptlw((p)[2])) << 16) |                                     \\\n     ((uint64_t)(siptlw((p)[3])) << 24) |                                     \\\n     ((uint64_t)(siptlw((p)[4])) << 32) |                                              \\\n     ((uint64_t)(siptlw((p)[5])) << 40) |                                              \\\n     ((uint64_t)(siptlw((p)[6])) << 48) |                                              \\\n     ((uint64_t)(siptlw((p)[7])) << 56))\n\n#define SIPROUND                                                               \\\n    do {                                                                       \\\n        v0 += v1;                                                              \\\n        v1 = ROTL(v1, 13);                                                     \\\n        v1 ^= v0;                                                              \\\n        v0 = ROTL(v0, 32);                                                     \\\n        v2 += v3;                                                              \\\n        v3 = ROTL(v3, 16);                                                     \\\n        v3 ^= v2;                                                              \\\n        v0 += v3;                                                              \\\n        v3 = ROTL(v3, 21);                                                     \\\n        v3 ^= v0;                                                              \\\n        v2 += v1;     
                                                         \\\n        v1 = ROTL(v1, 17);                                                     \\\n        v1 ^= v2;                                                              \\\n        v2 = ROTL(v2, 32);                                                     \\\n    } while (0)\n\nNO_SANITIZE(\"alignment\")\nuint64_t siphash(const uint8_t *in, const size_t inlen, const uint8_t *k) {\n#ifndef UNALIGNED_LE_CPU\n    uint64_t hash;\n    uint8_t *out = (uint8_t*) &hash;\n#endif\n    uint64_t v0 = 0x736f6d6570736575ULL;\n    uint64_t v1 = 0x646f72616e646f6dULL;\n    uint64_t v2 = 0x6c7967656e657261ULL;\n    uint64_t v3 = 0x7465646279746573ULL;\n    uint64_t k0 = U8TO64_LE(k);\n    uint64_t k1 = U8TO64_LE(k + 8);\n    uint64_t m;\n    const uint8_t *end = in + inlen - (inlen % sizeof(uint64_t));\n    const int left = inlen & 7;\n    uint64_t b = ((uint64_t)inlen) << 56;\n    v3 ^= k1;\n    v2 ^= k0;\n    v1 ^= k1;\n    v0 ^= k0;\n\n    for (; in != end; in += 8) {\n        m = U8TO64_LE(in);\n        v3 ^= m;\n\n        SIPROUND;\n\n        v0 ^= m;\n    }\n\n    switch (left) {\n    case 7: b |= ((uint64_t)in[6]) << 48; /* fall-thru */\n    case 6: b |= ((uint64_t)in[5]) << 40; /* fall-thru */\n    case 5: b |= ((uint64_t)in[4]) << 32; /* fall-thru */\n    case 4: b |= ((uint64_t)in[3]) << 24; /* fall-thru */\n    case 3: b |= ((uint64_t)in[2]) << 16; /* fall-thru */\n    case 2: b |= ((uint64_t)in[1]) << 8; /* fall-thru */\n    case 1: b |= ((uint64_t)in[0]); break;\n    case 0: break;\n    }\n\n    v3 ^= b;\n\n    SIPROUND;\n\n    v0 ^= b;\n    v2 ^= 0xff;\n\n    SIPROUND;\n    SIPROUND;\n\n    b = v0 ^ v1 ^ v2 ^ v3;\n#ifndef UNALIGNED_LE_CPU\n    U64TO8_LE(out, b);\n    return hash;\n#else\n    return b;\n#endif\n}\n\nNO_SANITIZE(\"alignment\")\nuint64_t siphash_nocase(const uint8_t *in, const size_t inlen, const uint8_t *k)\n{\n#ifndef UNALIGNED_LE_CPU\n    uint64_t hash;\n    uint8_t *out = (uint8_t*) &hash;\n#endif\n   
 uint64_t v0 = 0x736f6d6570736575ULL;\n    uint64_t v1 = 0x646f72616e646f6dULL;\n    uint64_t v2 = 0x6c7967656e657261ULL;\n    uint64_t v3 = 0x7465646279746573ULL;\n    uint64_t k0 = U8TO64_LE(k);\n    uint64_t k1 = U8TO64_LE(k + 8);\n    uint64_t m;\n    const uint8_t *end = in + inlen - (inlen % sizeof(uint64_t));\n    const int left = inlen & 7;\n    uint64_t b = ((uint64_t)inlen) << 56;\n    v3 ^= k1;\n    v2 ^= k0;\n    v1 ^= k1;\n    v0 ^= k0;\n\n    for (; in != end; in += 8) {\n        m = U8TO64_LE_NOCASE(in);\n        v3 ^= m;\n\n        SIPROUND;\n\n        v0 ^= m;\n    }\n\n    switch (left) {\n    case 7: b |= ((uint64_t)siptlw(in[6])) << 48; /* fall-thru */\n    case 6: b |= ((uint64_t)siptlw(in[5])) << 40; /* fall-thru */\n    case 5: b |= ((uint64_t)siptlw(in[4])) << 32; /* fall-thru */\n    case 4: b |= ((uint64_t)siptlw(in[3])) << 24; /* fall-thru */\n    case 3: b |= ((uint64_t)siptlw(in[2])) << 16; /* fall-thru */\n    case 2: b |= ((uint64_t)siptlw(in[1])) << 8; /* fall-thru */\n    case 1: b |= ((uint64_t)siptlw(in[0])); break;\n    case 0: break;\n    }\n\n    v3 ^= b;\n\n    SIPROUND;\n\n    v0 ^= b;\n    v2 ^= 0xff;\n\n    SIPROUND;\n    SIPROUND;\n\n    b = v0 ^ v1 ^ v2 ^ v3;\n#ifndef UNALIGNED_LE_CPU\n    U64TO8_LE(out, b);\n    return hash;\n#else\n    return b;\n#endif\n}\n\n\n/* --------------------------------- TEST ------------------------------------ */\n\n#ifdef SIPHASH_TEST\n\nconst uint8_t vectors_sip64[64][8] = {\n    { 0x31, 0x0e, 0x0e, 0xdd, 0x47, 0xdb, 0x6f, 0x72, },\n    { 0xfd, 0x67, 0xdc, 0x93, 0xc5, 0x39, 0xf8, 0x74, },\n    { 0x5a, 0x4f, 0xa9, 0xd9, 0x09, 0x80, 0x6c, 0x0d, },\n    { 0x2d, 0x7e, 0xfb, 0xd7, 0x96, 0x66, 0x67, 0x85, },\n    { 0xb7, 0x87, 0x71, 0x27, 0xe0, 0x94, 0x27, 0xcf, },\n    { 0x8d, 0xa6, 0x99, 0xcd, 0x64, 0x55, 0x76, 0x18, },\n    { 0xce, 0xe3, 0xfe, 0x58, 0x6e, 0x46, 0xc9, 0xcb, },\n    { 0x37, 0xd1, 0x01, 0x8b, 0xf5, 0x00, 0x02, 0xab, },\n    { 0x62, 0x24, 0x93, 0x9a, 0x79, 0xf5, 0xf5, 0x93, },\n  
  { 0xb0, 0xe4, 0xa9, 0x0b, 0xdf, 0x82, 0x00, 0x9e, },\n    { 0xf3, 0xb9, 0xdd, 0x94, 0xc5, 0xbb, 0x5d, 0x7a, },\n    { 0xa7, 0xad, 0x6b, 0x22, 0x46, 0x2f, 0xb3, 0xf4, },\n    { 0xfb, 0xe5, 0x0e, 0x86, 0xbc, 0x8f, 0x1e, 0x75, },\n    { 0x90, 0x3d, 0x84, 0xc0, 0x27, 0x56, 0xea, 0x14, },\n    { 0xee, 0xf2, 0x7a, 0x8e, 0x90, 0xca, 0x23, 0xf7, },\n    { 0xe5, 0x45, 0xbe, 0x49, 0x61, 0xca, 0x29, 0xa1, },\n    { 0xdb, 0x9b, 0xc2, 0x57, 0x7f, 0xcc, 0x2a, 0x3f, },\n    { 0x94, 0x47, 0xbe, 0x2c, 0xf5, 0xe9, 0x9a, 0x69, },\n    { 0x9c, 0xd3, 0x8d, 0x96, 0xf0, 0xb3, 0xc1, 0x4b, },\n    { 0xbd, 0x61, 0x79, 0xa7, 0x1d, 0xc9, 0x6d, 0xbb, },\n    { 0x98, 0xee, 0xa2, 0x1a, 0xf2, 0x5c, 0xd6, 0xbe, },\n    { 0xc7, 0x67, 0x3b, 0x2e, 0xb0, 0xcb, 0xf2, 0xd0, },\n    { 0x88, 0x3e, 0xa3, 0xe3, 0x95, 0x67, 0x53, 0x93, },\n    { 0xc8, 0xce, 0x5c, 0xcd, 0x8c, 0x03, 0x0c, 0xa8, },\n    { 0x94, 0xaf, 0x49, 0xf6, 0xc6, 0x50, 0xad, 0xb8, },\n    { 0xea, 0xb8, 0x85, 0x8a, 0xde, 0x92, 0xe1, 0xbc, },\n    { 0xf3, 0x15, 0xbb, 0x5b, 0xb8, 0x35, 0xd8, 0x17, },\n    { 0xad, 0xcf, 0x6b, 0x07, 0x63, 0x61, 0x2e, 0x2f, },\n    { 0xa5, 0xc9, 0x1d, 0xa7, 0xac, 0xaa, 0x4d, 0xde, },\n    { 0x71, 0x65, 0x95, 0x87, 0x66, 0x50, 0xa2, 0xa6, },\n    { 0x28, 0xef, 0x49, 0x5c, 0x53, 0xa3, 0x87, 0xad, },\n    { 0x42, 0xc3, 0x41, 0xd8, 0xfa, 0x92, 0xd8, 0x32, },\n    { 0xce, 0x7c, 0xf2, 0x72, 0x2f, 0x51, 0x27, 0x71, },\n    { 0xe3, 0x78, 0x59, 0xf9, 0x46, 0x23, 0xf3, 0xa7, },\n    { 0x38, 0x12, 0x05, 0xbb, 0x1a, 0xb0, 0xe0, 0x12, },\n    { 0xae, 0x97, 0xa1, 0x0f, 0xd4, 0x34, 0xe0, 0x15, },\n    { 0xb4, 0xa3, 0x15, 0x08, 0xbe, 0xff, 0x4d, 0x31, },\n    { 0x81, 0x39, 0x62, 0x29, 0xf0, 0x90, 0x79, 0x02, },\n    { 0x4d, 0x0c, 0xf4, 0x9e, 0xe5, 0xd4, 0xdc, 0xca, },\n    { 0x5c, 0x73, 0x33, 0x6a, 0x76, 0xd8, 0xbf, 0x9a, },\n    { 0xd0, 0xa7, 0x04, 0x53, 0x6b, 0xa9, 0x3e, 0x0e, },\n    { 0x92, 0x59, 0x58, 0xfc, 0xd6, 0x42, 0x0c, 0xad, },\n    { 0xa9, 0x15, 0xc2, 0x9b, 0xc8, 0x06, 0x73, 0x18, },\n    { 0x95, 0x2b, 0x79, 0xf3, 
0xbc, 0x0a, 0xa6, 0xd4, },\n    { 0xf2, 0x1d, 0xf2, 0xe4, 0x1d, 0x45, 0x35, 0xf9, },\n    { 0x87, 0x57, 0x75, 0x19, 0x04, 0x8f, 0x53, 0xa9, },\n    { 0x10, 0xa5, 0x6c, 0xf5, 0xdf, 0xcd, 0x9a, 0xdb, },\n    { 0xeb, 0x75, 0x09, 0x5c, 0xcd, 0x98, 0x6c, 0xd0, },\n    { 0x51, 0xa9, 0xcb, 0x9e, 0xcb, 0xa3, 0x12, 0xe6, },\n    { 0x96, 0xaf, 0xad, 0xfc, 0x2c, 0xe6, 0x66, 0xc7, },\n    { 0x72, 0xfe, 0x52, 0x97, 0x5a, 0x43, 0x64, 0xee, },\n    { 0x5a, 0x16, 0x45, 0xb2, 0x76, 0xd5, 0x92, 0xa1, },\n    { 0xb2, 0x74, 0xcb, 0x8e, 0xbf, 0x87, 0x87, 0x0a, },\n    { 0x6f, 0x9b, 0xb4, 0x20, 0x3d, 0xe7, 0xb3, 0x81, },\n    { 0xea, 0xec, 0xb2, 0xa3, 0x0b, 0x22, 0xa8, 0x7f, },\n    { 0x99, 0x24, 0xa4, 0x3c, 0xc1, 0x31, 0x57, 0x24, },\n    { 0xbd, 0x83, 0x8d, 0x3a, 0xaf, 0xbf, 0x8d, 0xb7, },\n    { 0x0b, 0x1a, 0x2a, 0x32, 0x65, 0xd5, 0x1a, 0xea, },\n    { 0x13, 0x50, 0x79, 0xa3, 0x23, 0x1c, 0xe6, 0x60, },\n    { 0x93, 0x2b, 0x28, 0x46, 0xe4, 0xd7, 0x06, 0x66, },\n    { 0xe1, 0x91, 0x5f, 0x5c, 0xb1, 0xec, 0xa4, 0x6c, },\n    { 0xf3, 0x25, 0x96, 0x5c, 0xa1, 0x6d, 0x62, 0x9f, },\n    { 0x57, 0x5f, 0xf2, 0x8e, 0x60, 0x38, 0x1b, 0xe5, },\n    { 0x72, 0x45, 0x06, 0xeb, 0x4c, 0x32, 0x8a, 0x95, },\n};\n\n\n/* Test siphash using a test vector. Returns 0 if the function passed\n * all the tests, otherwise 1 is returned.\n *\n * IMPORTANT: The test vector is for SipHash 2-4. Before running\n * the test revert back the siphash() function to 2-4 rounds since\n * now it uses 1-2 rounds. */\nint siphash_test(void) {\n    uint8_t in[64], k[16];\n    int i;\n    int fails = 0;\n\n    for (i = 0; i < 16; ++i)\n        k[i] = i;\n\n    for (i = 0; i < 64; ++i) {\n        in[i] = i;\n        uint64_t hash = siphash(in, i, k);\n        const uint8_t *v = NULL;\n        v = (uint8_t *)vectors_sip64;\n        if (memcmp(&hash, v + (i * 8), 8)) {\n            /* printf(\"fail for %d bytes\\n\", i); */\n            fails++;\n        }\n    }\n\n    /* Run a few basic tests with the case insensitive version. 
*/\n    uint64_t h1, h2;\n    h1 = siphash((uint8_t*)\"hello world\",11,(uint8_t*)\"1234567812345678\");\n    h2 = siphash_nocase((uint8_t*)\"hello world\",11,(uint8_t*)\"1234567812345678\");\n    if (h1 != h2) fails++;\n\n    h1 = siphash((uint8_t*)\"hello world\",11,(uint8_t*)\"1234567812345678\");\n    h2 = siphash_nocase((uint8_t*)\"HELLO world\",11,(uint8_t*)\"1234567812345678\");\n    if (h1 != h2) fails++;\n\n    h1 = siphash((uint8_t*)\"HELLO world\",11,(uint8_t*)\"1234567812345678\");\n    h2 = siphash_nocase((uint8_t*)\"HELLO world\",11,(uint8_t*)\"1234567812345678\");\n    if (h1 == h2) fails++;\n\n    if (!fails) return 0;\n    return 1;\n}\n\nint main(void) {\n    if (siphash_test() == 0) {\n        printf(\"SipHash test: OK\\n\");\n        return 0;\n    } else {\n        printf(\"SipHash test: FAILED\\n\");\n        return 1;\n    }\n}\n\n#endif\n"
  },
  {
    "path": "src/slowlog.c",
    "content": "/* Slowlog implements a system that is able to remember the latest N\n * queries that took more than M microseconds to execute.\n *\n * The execution time threshold a command must exceed to be logged in the\n * slow log is set using the 'slowlog-log-slower-than' config directive,\n * which is also readable and writable using the CONFIG SET/GET commands.\n *\n * The slow queries log is actually not \"logged\" in the Redis log file\n * but is accessible thanks to the SLOWLOG command.\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n\n#include \"server.h\"\n#include \"slowlog.h\"\n\n/* Create a new slowlog entry.\n * Incrementing the ref count of all the objects retained is up to\n * this function. */\nslowlogEntry *slowlogCreateEntry(client *c, robj **argv, int argc, long long duration) {\n    slowlogEntry *se = zmalloc(sizeof(*se));\n    int j, slargc = argc;\n\n    if (slargc > SLOWLOG_ENTRY_MAX_ARGC) slargc = SLOWLOG_ENTRY_MAX_ARGC;\n    se->argc = slargc;\n    se->argv = zmalloc(sizeof(robj*)*slargc);\n    for (j = 0; j < slargc; j++) {\n        /* Logging too many arguments is a useless memory waste, so we stop\n         * at SLOWLOG_ENTRY_MAX_ARGC, but use the last argument to specify\n         * how many remaining arguments there were in the original command. */\n        if (slargc != argc && j == slargc-1) {\n            se->argv[j] = createObject(OBJ_STRING,\n                sdscatprintf(sdsempty(),\"... (%d more arguments)\",\n                argc-slargc+1));\n        } else {\n            /* Trim too long strings as well... 
*/\n            if (argv[j]->type == OBJ_STRING &&\n                sdsEncodedObject(argv[j]) &&\n                sdslen(argv[j]->ptr) > SLOWLOG_ENTRY_MAX_STRING)\n            {\n                sds s = sdsnewlen(argv[j]->ptr, SLOWLOG_ENTRY_MAX_STRING);\n\n                s = sdscatprintf(s,\"... (%lu more bytes)\",\n                    (unsigned long)\n                    sdslen(argv[j]->ptr) - SLOWLOG_ENTRY_MAX_STRING);\n                se->argv[j] = createObject(OBJ_STRING,s);\n            } else if (argv[j]->refcount == OBJ_SHARED_REFCOUNT) {\n                se->argv[j] = argv[j];\n            } else {\n                /* Here we need to duplicate the string objects composing the\n                 * argument vector of the command, because those may otherwise\n                 * end up shared with string objects stored in keys. Having\n                 * shared objects between any part of Redis, and the data\n                 * structure holding the data, is a problem: FLUSHALL ASYNC\n                 * may release the shared string object and create a race. */\n                se->argv[j] = dupStringObject(argv[j]);\n            }\n        }\n    }\n    se->time = time(NULL);\n    se->duration = duration;\n    se->id = server.slowlog_entry_id++;\n    se->peerid = sdsnew(getClientPeerId(c));\n    se->cname = c->name ? sdsnew(c->name->ptr) : sdsempty();\n    return se;\n}\n\n/* Free a slow log entry. The argument is void so that the prototype of this\n * function matches the one of the 'free' method of adlist.c.\n *\n * This function takes care of releasing all the retained objects. */\nvoid slowlogFreeEntry(void *septr) {\n    slowlogEntry *se = septr;\n    int j;\n\n    for (j = 0; j < se->argc; j++)\n        decrRefCount(se->argv[j]);\n    zfree(se->argv);\n    sdsfree(se->peerid);\n    sdsfree(se->cname);\n    zfree(se);\n}\n\n/* Initialize the slow log. This function should be called a single time\n * at server startup. 
*/\nvoid slowlogInit(void) {\n    server.slowlog = listCreate();\n    server.slowlog_entry_id = 0;\n    listSetFreeMethod(server.slowlog,slowlogFreeEntry);\n}\n\n/* Push a new entry into the slow log.\n * This function will make sure to trim the slow log according to the\n * configured max length.\n * Returns 1 if an entry was added, 0 otherwise. */\nint slowlogPushEntryIfNeeded(client *c, robj **argv, int argc, long long duration) {\n    if (server.slowlog_log_slower_than < 0 || server.slowlog_max_len == 0) return 0;\n    if (duration >= server.slowlog_log_slower_than) {\n        listAddNodeHead(server.slowlog,\n                        slowlogCreateEntry(c,argv,argc,duration));\n\n        /* Remove old entries if needed. */\n        while (listLength(server.slowlog) > server.slowlog_max_len)\n            listDelNode(server.slowlog,listLast(server.slowlog));\n        return 1;\n    }\n    return 0;\n}\n\n/* Remove all the entries from the current slow log. */\nvoid slowlogReset(void) {\n    while (listLength(server.slowlog) > 0)\n        listDelNode(server.slowlog,listLast(server.slowlog));\n}\n\n/* The SLOWLOG command. Implements all the subcommands needed to handle the\n * Redis slow log. 
*/\nvoid slowlogCommand(client *c) {\n    if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"help\")) {\n        const char *help[] = {\n\"GET [<count>]\",\n\"    Return top <count> entries from the slowlog (default: 10, -1 means all).\",\n\"    Entries are made of:\",\n\"    id, timestamp, time in microseconds, arguments array, client IP and port,\",\n\"    client name\",\n\"LEN\",\n\"    Return the length of the slowlog.\",\n\"RESET\",\n\"    Reset the slowlog.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"reset\")) {\n        slowlogReset();\n        addReply(c,shared.ok);\n    } else if (c->argc == 2 && !strcasecmp(c->argv[1]->ptr,\"len\")) {\n        addReplyLongLong(c,listLength(server.slowlog));\n    } else if ((c->argc == 2 || c->argc == 3) &&\n               !strcasecmp(c->argv[1]->ptr,\"get\"))\n    {\n        long count = 10;\n        listIter li;\n        listNode *ln;\n        slowlogEntry *se;\n\n        if (c->argc == 3) {\n            /* Consume count arg. 
*/\n            if (getRangeLongFromObjectOrReply(c, c->argv[2], -1,\n                    LONG_MAX, &count, \"count should be greater than or equal to -1\") != C_OK)\n                return;\n\n            if (count == -1) {\n                /* We treat -1 as a special value, which means to get all slow logs.\n                 * Simply set count to the length of server.slowlog.*/\n                count = listLength(server.slowlog);\n            }\n        }\n\n        if (count > (long)listLength(server.slowlog)) {\n            count = listLength(server.slowlog);\n        }\n        addReplyArrayLen(c, count);\n        listRewind(server.slowlog, &li);\n        while (count--) {\n            int j;\n\n            ln = listNext(&li);\n            se = ln->value;\n            addReplyArrayLen(c,6);\n            addReplyLongLong(c,se->id);\n            addReplyLongLong(c,se->time);\n            addReplyLongLong(c,se->duration);\n            addReplyArrayLen(c,se->argc);\n            for (j = 0; j < se->argc; j++)\n                addReplyBulk(c,se->argv[j]);\n            addReplyBulkCBuffer(c,se->peerid,sdslen(se->peerid));\n            addReplyBulkCBuffer(c,se->cname,sdslen(se->cname));\n        }\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n"
  },
  {
    "path": "src/slowlog.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SLOWLOG_H__\n#define __SLOWLOG_H__\n\n#define SLOWLOG_ENTRY_MAX_ARGC 32\n#define SLOWLOG_ENTRY_MAX_STRING 128\n\n/* This structure defines an entry inside the slow log list */\ntypedef struct slowlogEntry {\n    robj **argv;\n    int argc;\n    long long id;       /* Unique entry identifier. */\n    long long duration; /* Time spent by the query, in microseconds. */\n    time_t time;        /* Unix time at which the query was executed. */\n    sds cname;          /* Client name. */\n    sds peerid;         /* Client network address. */\n} slowlogEntry;\n\n/* Exported API */\nvoid slowlogInit(void);\nint slowlogPushEntryIfNeeded(client *c, robj **argv, int argc, long long duration);\n\n#endif /* __SLOWLOG_H__ */\n"
  },
  {
    "path": "src/socket.c",
    "content": "/*\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"connhelpers.h\"\n\n/* The connections module provides a lean abstraction of network connections\n * to avoid direct socket and async event management across the Redis code base.\n *\n * It does NOT provide advanced connection features commonly found in similar\n * libraries such as complete in/out buffer management, throttling, etc. These\n * functions remain in networking.c.\n *\n * The primary goal is to allow transparent handling of TCP and TLS based\n * connections. To do so, connections have the following properties:\n *\n * 1. A connection may live before its corresponding socket exists.  This\n *    allows various context and configuration settings to be handled before\n *    establishing the actual connection.\n * 2. 
The caller may register/unregister logical read/write handlers to be\n *    called when the connection has data to read from/can accept writes.\n *    These logical handlers may or may not correspond to actual AE events,\n *    depending on the implementation (for TCP they do; for TLS they don't).\n */\n\nstatic ConnectionType CT_Socket;\n\n/* When a connection is created we must know its type already, but the\n * underlying socket may or may not exist:\n *\n * - For accepted connections, it exists as we do not model the listen/accept\n *   part; So caller calls connCreateSocket() followed by connAccept().\n * - For outgoing connections, the socket is created by the connection module\n *   itself; So caller calls connCreateSocket() followed by connConnect(),\n *   which registers a connect callback that fires on connected/error state\n *   (and after any transport level handshake was done).\n *\n * NOTE: An earlier version relied on connections being part of other structs\n * and not independently allocated. This could lead to further optimizations\n * like using container_of(), etc.  However it was discontinued in favor of\n * this approach for these reasons:\n *\n * 1. In some cases conns are created/handled outside the context of the\n * containing struct, in which case it gets a bit awkward to copy them.\n * 2. Future implementations may wish to allocate arbitrary data for the\n * connection.\n * 3. 
The container_of() approach is anyway risky because connections may\n * be embedded in different structs, not just client.\n */\n\nstatic connection *connCreateSocket(struct aeEventLoop *el) {\n    connection *conn = zcalloc(sizeof(connection));\n    conn->type = &CT_Socket;\n    conn->fd = -1;\n    conn->iovcnt = IOV_MAX;\n    conn->el = el;\n\n    return conn;\n}\n\n/* Create a new socket-type connection that is already associated with\n * an accepted connection.\n *\n * The socket is not ready for I/O until connAccept() has been called and\n * invoked the connection-level accept handler.\n *\n * Callers should use connGetState() and verify the created connection\n * is not in an error state (which is not possible for a socket connection,\n * but could be possible with other protocols).\n */\nstatic connection *connCreateAcceptedSocket(struct aeEventLoop *el, int fd, void *priv) {\n    UNUSED(priv);\n    connection *conn = connCreateSocket(el);\n    conn->fd = fd;\n    conn->state = CONN_STATE_ACCEPTING;\n    return conn;\n}\n\nstatic int connSocketConnect(connection *conn, const char *addr, int port, const char *src_addr,\n        ConnectionCallbackFunc connect_handler) {\n    int fd = anetTcpNonBlockBestEffortBindConnect(NULL,addr,port,src_addr);\n    if (fd == -1) {\n        conn->state = CONN_STATE_ERROR;\n        conn->last_errno = errno;\n        return C_ERR;\n    }\n\n    conn->fd = fd;\n    conn->state = CONN_STATE_CONNECTING;\n\n    conn->conn_handler = connect_handler;\n    aeCreateFileEvent(conn->el, conn->fd, AE_WRITABLE,\n            conn->type->ae_handler, conn);\n\n    return C_OK;\n}\n\n/* ------ Pure socket connections ------- */\n\n/* A very incomplete list of implementation-specific calls.  
Much of the above shall\n * move here as we implement additional connection types.\n */\n\nstatic void connSocketShutdown(connection *conn) {\n    if (conn->fd == -1) return;\n\n    shutdown(conn->fd, SHUT_RDWR);\n}\n\n/* Close the connection and free resources. */\nstatic void connSocketClose(connection *conn) {\n    if (conn->fd != -1) {\n        if (conn->el) aeDeleteFileEvent(conn->el, conn->fd, AE_READABLE | AE_WRITABLE);\n        close(conn->fd);\n        conn->fd = -1;\n    }\n\n    /* If called from within a handler, schedule the close but\n     * keep the connection until the handler returns.\n     */\n    if (connHasRefs(conn)) {\n        conn->flags |= CONN_FLAG_CLOSE_SCHEDULED;\n        return;\n    }\n\n    zfree(conn);\n}\n\nstatic int connSocketWrite(connection *conn, const void *data, size_t data_len) {\n    int ret = write(conn->fd, data, data_len);\n    if (ret < 0 && errno != EAGAIN) {\n        conn->last_errno = errno;\n\n        /* Don't overwrite the state of a connection that is not already\n         * connected, not to mess with handler callbacks.\n         */\n        if (errno != EINTR && conn->state == CONN_STATE_CONNECTED)\n            conn->state = CONN_STATE_ERROR;\n    }\n\n    return ret;\n}\n\nstatic int connSocketWritev(connection *conn, const struct iovec *iov, int iovcnt) {\n    int ret = writev(conn->fd, iov, iovcnt);\n    if (ret < 0 && errno != EAGAIN) {\n        conn->last_errno = errno;\n\n        /* Don't overwrite the state of a connection that is not already\n         * connected, not to mess with handler callbacks.\n         */\n        if (errno != EINTR && conn->state == CONN_STATE_CONNECTED)\n            conn->state = CONN_STATE_ERROR;\n    }\n\n    return ret;\n}\n\nstatic int connSocketRead(connection *conn, void *buf, size_t buf_len) {\n    int ret = read(conn->fd, buf, buf_len);\n    if (!ret) {\n        conn->state = CONN_STATE_CLOSED;\n    } else if (ret < 0 && errno != EAGAIN) {\n        conn->last_errno = 
errno;\n\n        /* Don't overwrite the state of a connection that is not already\n         * connected, not to mess with handler callbacks.\n         */\n        if (errno != EINTR && conn->state == CONN_STATE_CONNECTED)\n            conn->state = CONN_STATE_ERROR;\n    }\n\n    return ret;\n}\n\nstatic int connSocketAccept(connection *conn, ConnectionCallbackFunc accept_handler) {\n    int ret = C_OK;\n\n    if (conn->state != CONN_STATE_ACCEPTING) return C_ERR;\n    conn->state = CONN_STATE_CONNECTED;\n\n    connIncrRefs(conn);\n    if (!callHandler(conn, accept_handler)) ret = C_ERR;\n    connDecrRefs(conn);\n\n    return ret;\n}\n\n/* Rebind the connection to another event loop; read/write handlers must not\n * be installed in the current event loop, otherwise it will cause two event\n * loops to manage the same connection at the same time. */\nstatic int connSocketRebindEventLoop(connection *conn, aeEventLoop *el) {\n    serverAssert(!conn->el && !conn->read_handler && !conn->write_handler);\n    conn->el = el;\n    return C_OK;\n}\n\n/* Register a write handler, to be called when the connection is writable.\n * If NULL, the existing handler is removed.\n *\n * The barrier flag indicates a write barrier is requested, resulting in\n * CONN_FLAG_WRITE_BARRIER being set. 
This will ensure that the write handler is\n * always called before and not after the read handler in a single event\n * loop.\n */\nstatic int connSocketSetWriteHandler(connection *conn, ConnectionCallbackFunc func, int barrier) {\n    if (func == conn->write_handler) return C_OK;\n\n    conn->write_handler = func;\n    if (barrier)\n        conn->flags |= CONN_FLAG_WRITE_BARRIER;\n    else\n        conn->flags &= ~CONN_FLAG_WRITE_BARRIER;\n    if (!conn->write_handler)\n        aeDeleteFileEvent(conn->el,conn->fd,AE_WRITABLE);\n    else\n        if (aeCreateFileEvent(conn->el,conn->fd,AE_WRITABLE,\n                    conn->type->ae_handler,conn) == AE_ERR) return C_ERR;\n    return C_OK;\n}\n\n/* Register a read handler, to be called when the connection is readable.\n * If NULL, the existing handler is removed.\n */\nstatic int connSocketSetReadHandler(connection *conn, ConnectionCallbackFunc func) {\n    if (func == conn->read_handler) return C_OK;\n\n    conn->read_handler = func;\n    if (!conn->read_handler)\n        aeDeleteFileEvent(conn->el,conn->fd,AE_READABLE);\n    else\n        if (aeCreateFileEvent(conn->el,conn->fd,\n                    AE_READABLE,conn->type->ae_handler,conn) == AE_ERR) return C_ERR;\n    return C_OK;\n}\n\nstatic const char *connSocketGetLastError(connection *conn) {\n    return strerror(conn->last_errno);\n}\n\nstatic void connSocketEventHandler(struct aeEventLoop *el, int fd, void *clientData, int mask)\n{\n    UNUSED(el);\n    UNUSED(fd);\n    connection *conn = clientData;\n\n    if (conn->state == CONN_STATE_CONNECTING &&\n            (mask & AE_WRITABLE) && conn->conn_handler) {\n\n        int conn_error = anetGetError(conn->fd);\n        if (conn_error) {\n            conn->last_errno = conn_error;\n            conn->state = CONN_STATE_ERROR;\n        } else {\n            conn->state = CONN_STATE_CONNECTED;\n        }\n\n        if (!conn->write_handler) aeDeleteFileEvent(conn->el, conn->fd, AE_WRITABLE);\n\n        if 
(!callHandler(conn, conn->conn_handler)) return;\n        conn->conn_handler = NULL;\n    }\n\n    /* Normally we execute the readable event first, and the writable\n     * event later. This is useful as sometimes we may be able\n     * to serve the reply of a query immediately after processing the\n     * query.\n     *\n     * However if WRITE_BARRIER is set in the mask, our application is\n     * asking us to do the reverse: never fire the writable event\n     * after the readable. In such a case, we invert the calls.\n     * This is useful when, for instance, we want to do things\n     * in the beforeSleep() hook, like fsync'ing a file to disk,\n     * before replying to a client. */\n    int invert = conn->flags & CONN_FLAG_WRITE_BARRIER;\n\n    int call_write = (mask & AE_WRITABLE) && conn->write_handler;\n    int call_read = (mask & AE_READABLE) && conn->read_handler;\n\n    /* Handle normal I/O flows */\n    if (!invert && call_read) {\n        if (!callHandler(conn, conn->read_handler)) return;\n    }\n    /* Fire the writable event. */\n    if (call_write) {\n        if (!callHandler(conn, conn->write_handler)) return;\n    }\n    /* If we have to invert the call, fire the readable event now\n     * after the writable one. 
*/\n    if (invert && call_read) {\n        if (!callHandler(conn, conn->read_handler)) return;\n    }\n}\n\nstatic void connSocketAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    int cport, cfd;\n    int max = server.max_new_conns_per_cycle;\n    char cip[NET_IP_STR_LEN];\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    while(max--) {\n        cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);\n        if (cfd == ANET_ERR) {\n            if (anetAcceptFailureNeedsRetry(errno))\n                continue;\n            if (errno != EWOULDBLOCK)\n                serverLog(LL_WARNING,\n                    \"Accepting client connection: %s\", server.neterr);\n            return;\n        }\n        serverLog(LL_VERBOSE,\"Accepted %s:%d\", cip, cport);\n        acceptCommonHandler(connCreateAcceptedSocket(el,cfd,NULL), 0, cip);\n    }\n}\n\nstatic int connSocketAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {\n    if (anetFdToString(conn->fd, ip, ip_len, port, remote) == 0)\n        return C_OK;\n\n    conn->last_errno = errno;\n    return C_ERR;\n}\n\nstatic int connSocketIsLocal(connection *conn) {\n    char cip[NET_IP_STR_LEN + 1] = { 0 };\n\n    if (connSocketAddr(conn, cip, sizeof(cip) - 1, NULL, 1) == C_ERR)\n        return -1;\n\n    return !strncmp(cip, \"127.\", 4) || !strcmp(cip, \"::1\");\n}\n\nstatic int connSocketListen(connListener *listener) {\n    return listenToPort(listener);\n}\n\nstatic int connSocketBlockingConnect(connection *conn, const char *addr, int port, long long timeout) {\n    int fd = anetTcpNonBlockConnect(NULL,addr,port);\n    if (fd == -1) {\n        conn->state = CONN_STATE_ERROR;\n        conn->last_errno = errno;\n        return C_ERR;\n    }\n\n    if ((aeWait(fd, AE_WRITABLE, timeout) & AE_WRITABLE) == 0) {\n        conn->state = CONN_STATE_ERROR;\n        conn->last_errno = ETIMEDOUT;\n        return C_ERR;\n    }\n\n    conn->fd = fd;\n    conn->state = 
CONN_STATE_CONNECTED;\n    return C_OK;\n}\n\n/* Connection-based versions of syncio.c functions.\n * NOTE: This should ideally be refactored out in favor of pure async work.\n */\n\nstatic ssize_t connSocketSyncWrite(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncWrite(conn->fd, ptr, size, timeout);\n}\n\nstatic ssize_t connSocketSyncRead(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncRead(conn->fd, ptr, size, timeout);\n}\n\nstatic ssize_t connSocketSyncReadLine(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncReadLine(conn->fd, ptr, size, timeout);\n}\n\nstatic const char *connSocketGetType(connection *conn) {\n    (void) conn;\n\n    return CONN_TYPE_SOCKET;\n}\n\nstatic ConnectionType CT_Socket = {\n    /* connection type */\n    .get_type = connSocketGetType,\n\n    /* connection type initialize & finalize & configure */\n    .init = NULL,\n    .cleanup = NULL,\n    .configure = NULL,\n\n    /* ae & accept & listen & error & address handler */\n    .ae_handler = connSocketEventHandler,\n    .accept_handler = connSocketAcceptHandler,\n    .addr = connSocketAddr,\n    .is_local = connSocketIsLocal,\n    .listen = connSocketListen,\n\n    /* create/shutdown/close connection */\n    .conn_create = connCreateSocket,\n    .conn_create_accepted = connCreateAcceptedSocket,\n    .shutdown = connSocketShutdown,\n    .close = connSocketClose,\n\n    /* connect & accept */\n    .connect = connSocketConnect,\n    .blocking_connect = connSocketBlockingConnect,\n    .accept = connSocketAccept,\n\n    /* event loop */\n    .unbind_event_loop = NULL,\n    .rebind_event_loop = connSocketRebindEventLoop,\n\n    /* IO */\n    .write = connSocketWrite,\n    .writev = connSocketWritev,\n    .read = connSocketRead,\n    .set_write_handler = connSocketSetWriteHandler,\n    .set_read_handler = connSocketSetReadHandler,\n    .get_last_error = connSocketGetLastError,\n    .sync_write = 
connSocketSyncWrite,\n    .sync_read = connSocketSyncRead,\n    .sync_readline = connSocketSyncReadLine,\n\n    /* pending data */\n    .has_pending_data = NULL,\n    .process_pending_data = NULL,\n};\n\nint connBlock(connection *conn) {\n    if (conn->fd == -1) return C_ERR;\n    return anetBlock(NULL, conn->fd);\n}\n\nint connNonBlock(connection *conn) {\n    if (conn->fd == -1) return C_ERR;\n    return anetNonBlock(NULL, conn->fd);\n}\n\nint connEnableTcpNoDelay(connection *conn) {\n    if (conn->fd == -1) return C_ERR;\n    return anetEnableTcpNoDelay(NULL, conn->fd);\n}\n\nint connDisableTcpNoDelay(connection *conn) {\n    if (conn->fd == -1) return C_ERR;\n    return anetDisableTcpNoDelay(NULL, conn->fd);\n}\n\nint connKeepAlive(connection *conn, int interval) {\n    if (conn->fd == -1) return C_ERR;\n    return anetKeepAlive(NULL, conn->fd, interval);\n}\n\nint connSendTimeout(connection *conn, long long ms) {\n    return anetSendTimeout(NULL, conn->fd, ms);\n}\n\nint connRecvTimeout(connection *conn, long long ms) {\n    return anetRecvTimeout(NULL, conn->fd, ms);\n}\n\nint RedisRegisterConnectionTypeSocket(void)\n{\n    return connTypeRegister(&CT_Socket);\n}\n"
  },
  {
    "path": "src/solarisfixes.h",
    "content": "/* Solaris specific fixes.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#if defined(__sun)\n\n#if defined(__GNUC__)\n#include <math.h>\n#undef isnan\n#define isnan(x) \\\n     __extension__({ __typeof (x) __x_a = (x); \\\n     __builtin_expect(__x_a != __x_a, 0); })\n\n#undef isfinite\n#define isfinite(x) \\\n     __extension__ ({ __typeof (x) __x_f = (x); \\\n     __builtin_expect(!isnan(__x_f - __x_f), 1); })\n\n#undef isinf\n#define isinf(x) \\\n     __extension__ ({ __typeof (x) __x_i = (x); \\\n     __builtin_expect(!isnan(__x_i) && !isfinite(__x_i), 0); })\n\n#define u_int uint\n#define u_int32_t uint32_t\n#endif /* __GNUC__ */\n\n#endif /* __sun */\n"
  },
  {
    "path": "src/sort.c",
    "content": "/* SORT command and helper functions.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fast_float_strtod.h\"\n#include \"server.h\"\n#include \"pqsort.h\" /* Partial qsort for SORT+LIMIT */\n#include <math.h> /* isnan() */\n#include \"cluster.h\"\n\nzskiplistNode* zslGetElementByRank(zskiplist *zsl, unsigned long rank);\n\nredisSortOperation *createSortOperation(int type, robj *pattern) {\n    redisSortOperation *so = zmalloc(sizeof(*so));\n    so->type = type;\n    so->pattern = pattern;\n    return so;\n}\n\n/* Return the value associated to the key with a name obtained using\n * the following rules:\n *\n * 1) The first occurrence of '*' in 'pattern' is substituted with 'subst'.\n *\n * 2) If 'pattern' matches the \"->\" string, everything on the left of\n *    the arrow is treated as the name of a hash field, and the part on the\n *    left as the key name containing a hash. The value of the specified\n *    field is returned.\n *\n * 3) If 'pattern' equals \"#\", the function simply returns 'subst' itself so\n *    that the SORT command can be used like: SORT key GET # to retrieve\n *    the Set/List elements directly.\n *\n * The returned object will always have its refcount increased by 1\n * when it is non-NULL. */\nrobj *lookupKeyByPattern(redisDb *db, robj *pattern, robj *subst) {\n    kvobj *kv;\n    char *p, *f, *k;\n    sds spat, ssub;\n    robj *keyobj, *fieldobj = NULL, *val;\n\n    int prefixlen, sublen, postfixlen, fieldlen;\n\n    /* If the pattern is \"#\" return the substitution object itself in order\n     * to implement the \"SORT ... GET #\" feature. 
*/\n    spat = pattern->ptr;\n    if (spat[0] == '#' && spat[1] == '\\0') {\n        incrRefCount(subst);\n        return subst;\n    }\n\n    /* The substitution object may be specially encoded. If so we create\n     * a decoded object on the fly. Otherwise getDecodedObject will just\n     * increment the ref count, which we'll decrement later. */\n    subst = getDecodedObject(subst);\n    ssub = subst->ptr;\n\n    /* If we can't find '*' in the pattern we return NULL, since GET with a\n     * fixed key does not make sense. */\n    p = strchr(spat,'*');\n    if (!p) {\n        decrRefCount(subst);\n        return NULL;\n    }\n\n    /* Find out if we're dealing with a hash dereference. */\n    if ((f = strstr(p+1, \"->\")) != NULL && *(f+2) != '\\0') {\n        fieldlen = sdslen(spat)-(f-spat)-2;\n        fieldobj = createStringObject(f+2,fieldlen);\n    } else {\n        fieldlen = 0;\n    }\n\n    /* Perform the '*' substitution. */\n    prefixlen = p-spat;\n    sublen = sdslen(ssub);\n    postfixlen = sdslen(spat)-(prefixlen+1)-(fieldlen ? fieldlen+2 : 0);\n    keyobj = createStringObject(NULL,prefixlen+sublen+postfixlen);\n    k = keyobj->ptr;\n    memcpy(k,spat,prefixlen);\n    memcpy(k+prefixlen,ssub,sublen);\n    memcpy(k+prefixlen+sublen,p+1,postfixlen);\n    decrRefCount(subst); /* Incremented by getDecodedObject() */\n\n    /* Lookup substituted key */\n    kv = lookupKeyRead(db, keyobj);\n    if (kv == NULL) goto noobj;\n\n    if (fieldobj) {\n        if (kv->type != OBJ_HASH) goto noobj;\n\n        /* Retrieve value from hash by the field name. The returned object\n         * is a new object with refcount already incremented. 
*/\n        int isHashDeleted;\n        hashTypeGetValueObject(db, kv, fieldobj->ptr, HFE_LAZY_EXPIRE, &val, NULL, &isHashDeleted);\n        kv = val;\n\n        if (isHashDeleted)\n            goto noobj;\n\n    } else {\n        if (kv->type != OBJ_STRING) goto noobj;\n\n        /* Every object that this function returns needs to have its refcount\n         * increased. sortCommand decreases it again. */\n        incrRefCount(kv);\n    }\n    decrRefCount(keyobj);\n    if (fieldobj) decrRefCount(fieldobj);\n    return kv;\n\nnoobj:\n    decrRefCount(keyobj);\n    if (fieldlen) decrRefCount(fieldobj);\n    return NULL;\n}\n\n/* sortCompare() is used by qsort in sortCommand(). Given that qsort_r with\n * the additional parameter is not standard but BSD-specific, we have to\n * pass sorting parameters via the global 'server' structure. */\nint sortCompare(const void *s1, const void *s2) {\n    const redisSortObject *so1 = s1, *so2 = s2;\n    int cmp;\n\n    if (!server.sort_alpha) {\n        /* Numeric sorting. Here it's trivial as we precomputed scores */\n        if (so1->u.score > so2->u.score) {\n            cmp = 1;\n        } else if (so1->u.score < so2->u.score) {\n            cmp = -1;\n        } else {\n            /* Objects have the same score, but we don't want the comparison\n             * to be undefined, so we compare objects lexicographically.\n             * This way the result of SORT is deterministic. 
*/\n            cmp = compareStringObjects(so1->obj,so2->obj);\n        }\n    } else {\n        /* Alphanumeric sorting */\n        if (server.sort_bypattern) {\n            if (!so1->u.cmpobj || !so2->u.cmpobj) {\n                /* At least one compare object is NULL */\n                if (so1->u.cmpobj == so2->u.cmpobj)\n                    cmp = 0;\n                else if (so1->u.cmpobj == NULL)\n                    cmp = -1;\n                else\n                    cmp = 1;\n            } else {\n                /* We have both the objects, compare them. */\n                if (server.sort_store) {\n                    cmp = compareStringObjects(so1->u.cmpobj,so2->u.cmpobj);\n                } else {\n                    /* Here we can use strcoll() directly as we are sure that\n                     * the objects are decoded string objects. */\n                    cmp = strcoll(so1->u.cmpobj->ptr,so2->u.cmpobj->ptr);\n                }\n            }\n        } else {\n            /* Compare elements directly. */\n            if (server.sort_store) {\n                cmp = compareStringObjects(so1->obj,so2->obj);\n            } else {\n                cmp = collateStringObjects(so1->obj,so2->obj);\n            }\n        }\n    }\n    return server.sort_desc ? -cmp : cmp;\n}\n\n/* The SORT command is the most complex command in Redis. 
Warning: this code\n * is optimized for speed and a bit less for readability */\nvoid sortCommandGeneric(client *c, int readonly) {\n    list *operations;\n    unsigned int outputlen = 0;\n    int desc = 0, alpha = 0;\n    long limit_start = 0, limit_count = -1, start, end;\n    int j, dontsort = 0, vectorlen;\n    int getop = 0; /* GET operation counter */\n    int int_conversion_error = 0;\n    int syntax_error = 0;\n    robj *sortval, *sortby = NULL, *storekey = NULL;\n    size_t oldsize = 0;\n    redisSortObject *vector; /* Resulting vector to sort */\n    int user_has_full_key_access = 0; /* ACL - used in order to verify 'get' and 'by' options can be used */\n    /* Create a list of operations to perform for every sorted element.\n     * Operations can be GET */\n    operations = listCreate();\n    listSetFreeMethod(operations,zfree);\n    j = 2; /* options start at argv[2] */\n\n    user_has_full_key_access = ACLUserCheckCmdWithUnrestrictedKeyAccess(c->user, c->cmd, c->argv, c->argc, CMD_KEY_ACCESS);\n\n    /* The SORT command has an SQL-alike syntax, parse it */\n    while(j < c->argc) {\n        int leftargs = c->argc-j-1;\n        if (!strcasecmp(c->argv[j]->ptr,\"asc\")) {\n            desc = 0;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"desc\")) {\n            desc = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"alpha\")) {\n            alpha = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"limit\") && leftargs >= 2) {\n            if ((getLongFromObjectOrReply(c, c->argv[j+1], &limit_start, NULL)\n                 != C_OK) ||\n                (getLongFromObjectOrReply(c, c->argv[j+2], &limit_count, NULL)\n                 != C_OK))\n            {\n                syntax_error++;\n                break;\n            }\n            j+=2;\n        } else if (readonly == 0 && !strcasecmp(c->argv[j]->ptr,\"store\") && leftargs >= 1) {\n            storekey = c->argv[j+1];\n            j++;\n        } else if 
(!strcasecmp(c->argv[j]->ptr,\"by\") && leftargs >= 1) {\n            sortby = c->argv[j+1];\n            /* If the BY pattern does not contain '*', i.e. it is constant,\n             * we don't need to sort nor to lookup the weight keys. */\n            if (strchr(c->argv[j+1]->ptr,'*') == NULL) {\n                dontsort = 1;\n            } else {\n                /* If BY is specified with a real pattern, we can't accept it in cluster mode,\n                 * unless we can make sure the keys formed by the pattern are in the same slot \n                 * as the key to sort. */\n                if (server.cluster_enabled && patternHashSlot(sortby->ptr, sdslen(sortby->ptr)) != getKeySlot(c->argv[1]->ptr)) {\n                    addReplyError(c, \"BY option of SORT denied in Cluster mode when \"\n                                 \"keys formed by the pattern may be in different slots.\");\n                    syntax_error++;\n                    break;\n                }\n\n                /* If the BY pattern slot is not equal with the slot of keys, we will record\n                 * an incompatible behavior as above comments. */\n                if (server.cluster_compatibility_sample_ratio && !server.cluster_enabled &&\n                    SHOULD_CLUSTER_COMPATIBILITY_SAMPLE())\n                {\n                    if (patternHashSlot(sortby->ptr, sdslen(sortby->ptr)) !=\n                        (int)keyHashSlot(c->argv[1]->ptr, sdslen(c->argv[1]->ptr)))\n                        server.stat_cluster_incompatible_ops++;\n                }\n\n                /* If BY is specified with a real pattern, we can't accept\n                 * it if no full ACL key access is applied for this command. 
*/\n                if (!user_has_full_key_access) {\n                    addReplyError(c,\"BY option of SORT denied due to insufficient ACL permissions.\");\n                    syntax_error++;\n                    break;\n                }\n            }\n            j++;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"get\") && leftargs >= 1) {\n            /* If GET is specified with a real pattern, we can't accept it in cluster mode,\n             * unless we can make sure the keys formed by the pattern are in the same slot \n             * as the key to sort. The pattern # represents the key itself, so just skip\n             * pattern slot check. */\n            if (server.cluster_enabled &&\n                strcmp(c->argv[j+1]->ptr, \"#\") &&\n                patternHashSlot(c->argv[j+1]->ptr, sdslen(c->argv[j+1]->ptr)) != getKeySlot(c->argv[1]->ptr))\n            {\n                addReplyError(c, \"GET option of SORT denied in Cluster mode when \"\n                              \"keys formed by the pattern may be in different slots.\");\n                syntax_error++;\n                break;\n            }\n\n            /* If the GET pattern slot is not equal with the slot of keys, we will record\n             * an incompatible behavior as above comments. 
*/\n            if (server.cluster_compatibility_sample_ratio && !server.cluster_enabled &&\n                strcmp(c->argv[j+1]->ptr, \"#\") &&\n                SHOULD_CLUSTER_COMPATIBILITY_SAMPLE())\n            {\n                if (patternHashSlot(c->argv[j+1]->ptr, sdslen(c->argv[j+1]->ptr)) !=\n                    (int)keyHashSlot(c->argv[1]->ptr, sdslen(c->argv[1]->ptr)))\n                    server.stat_cluster_incompatible_ops++;\n            }\n\n            if (!user_has_full_key_access) {\n                addReplyError(c,\"GET option of SORT denied due to insufficient ACL permissions.\");\n                syntax_error++;\n                break;\n            }\n            listAddNodeTail(operations,createSortOperation(\n                SORT_OP_GET,c->argv[j+1]));\n            getop++;\n            j++;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            syntax_error++;\n            break;\n        }\n        j++;\n    }\n\n    /* Handle syntax errors set during options parsing. */\n    if (syntax_error) {\n        listRelease(operations);\n        return;\n    }\n\n    /* Lookup the key to sort. 
It must be of the right types */\n    sortval = lookupKeyRead(c->db, c->argv[1]);\n    if (sortval && sortval->type != OBJ_SET &&\n                   sortval->type != OBJ_LIST &&\n                   sortval->type != OBJ_ZSET)\n    {\n        listRelease(operations);\n        addReplyErrorObject(c,shared.wrongtypeerr);\n        return;\n    }\n\n    /* Now we need to protect sortval incrementing its count, in the future\n     * SORT may have options able to overwrite/delete keys during the sorting\n     * and the sorted key itself may get destroyed */\n    if (sortval)\n        incrRefCount(sortval);\n    else\n        sortval = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);\n\n    /* When sorting a set with no sort specified, we must sort the output\n     * so the result is consistent across scripting and replication.\n     *\n     * The other types (list, sorted set) will retain their native order\n     * even if no sort order is requested, so they remain stable across\n     * scripting and replication. */\n    if (dontsort &&\n        sortval->type == OBJ_SET &&\n        (storekey || c->flags & CLIENT_SCRIPT))\n    {\n        /* Force ALPHA sorting */\n        dontsort = 0;\n        alpha = 1;\n        sortby = NULL;\n    }\n\n    /* Destructively convert encoded sorted sets for SORT. */\n    if (sortval->type == OBJ_ZSET) {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(sortval);\n        zsetConvert(sortval, OBJ_ENCODING_SKIPLIST);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), sortval, oldsize, kvobjAllocSize(sortval));\n    }\n\n    /* Obtain the length of the object to sort. 
*/\n    switch(sortval->type) {\n    case OBJ_LIST: vectorlen = listTypeLength(sortval); break;\n    case OBJ_SET: vectorlen =  setTypeSize(sortval); break;\n    case OBJ_ZSET: vectorlen = dictSize(((zset*)sortval->ptr)->dict); break;\n    default: vectorlen = 0; serverPanic(\"Bad SORT type\"); /* Avoid GCC warning */\n    }\n\n    /* Perform LIMIT start,count sanity checking.\n     * And avoid integer overflow by limiting inputs to object sizes. */\n    start = min(max(limit_start, 0), vectorlen);\n    limit_count = min(max(limit_count, -1), vectorlen);\n    end = (limit_count < 0) ? vectorlen-1 : start+limit_count-1;\n    if (start >= vectorlen) {\n        start = vectorlen-1;\n        end = vectorlen-2;\n    }\n    if (end >= vectorlen) end = vectorlen-1;\n\n    /* Whenever possible, we load elements into the output array in a more\n     * direct way. This is possible if:\n     *\n     * 1) The object to sort is a sorted set or a list (internally sorted).\n     * 2) There is nothing to sort as dontsort is true (BY <constant string>).\n     *\n     * In this special case, if we have a LIMIT option that actually reduces\n     * the number of elements to fetch, we also optimize to just load the\n     * range we are interested in and allocating a vector that is big enough\n     * for the selected range length. 
*/\n    if ((sortval->type == OBJ_ZSET || sortval->type == OBJ_LIST) &&\n        dontsort &&\n        (start != 0 || end != vectorlen-1))\n    {\n        vectorlen = end-start+1;\n    }\n\n    /* Load the sorting vector with all the objects to sort */\n    vector = zmalloc(sizeof(redisSortObject)*vectorlen);\n    j = 0;\n\n    if (sortval->type == OBJ_LIST && dontsort) {\n        /* Special handling for a list, if 'dontsort' is true.\n         * This makes sure we return elements in the list's original\n         * ordering, according to the DESC / ASC options.\n         *\n         * Note that in this case we also handle LIMIT here in a direct\n         * way, just getting the required range, as an optimization. */\n        if (end >= start) {\n            listTypeIterator li;\n            listTypeEntry entry;\n            listTypeInitIterator(&li, sortval,\n                    desc ? (long)(listTypeLength(sortval) - start - 1) : start,\n                    desc ? LIST_HEAD : LIST_TAIL);\n\n            while(j < vectorlen && listTypeNext(&li, &entry)) {\n                vector[j].obj = listTypeGet(&entry);\n                vector[j].u.score = 0;\n                vector[j].u.cmpobj = NULL;\n                j++;\n            }\n            listTypeResetIterator(&li);\n            /* Fix start/end: output code is not aware of this optimization. 
*/\n            end -= start;\n            start = 0;\n        }\n    } else if (sortval->type == OBJ_LIST) {\n        listTypeIterator li;\n        listTypeEntry entry;\n        listTypeInitIterator(&li, sortval, 0, LIST_TAIL);\n        while(listTypeNext(&li, &entry)) {\n            vector[j].obj = listTypeGet(&entry);\n            vector[j].u.score = 0;\n            vector[j].u.cmpobj = NULL;\n            j++;\n        }\n        listTypeResetIterator(&li);\n    } else if (sortval->type == OBJ_SET) {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(sortval);\n        setTypeIterator si;\n        sds sdsele;\n        setTypeInitIterator(&si, sortval);\n        while((sdsele = setTypeNextObject(&si)) != NULL) {\n            vector[j].obj = createObject(OBJ_STRING,sdsele);\n            vector[j].u.score = 0;\n            vector[j].u.cmpobj = NULL;\n            j++;\n        }\n        setTypeResetIterator(&si);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), sortval, oldsize, kvobjAllocSize(sortval));\n    } else if (sortval->type == OBJ_ZSET && dontsort) {\n        /* Special handling for a sorted set, if 'dontsort' is true.\n         * This makes sure we return elements in the sorted set's original\n         * ordering, according to the DESC / ASC options.\n         *\n         * Note that in this case we also handle LIMIT here in a direct\n         * way, just getting the required range, as an optimization. */\n\n        zset *zs = sortval->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n        sds sdsele;\n        int rangelen = vectorlen;\n\n        /* Check if starting point is trivial, before doing log(N) lookup. 
*/\n        if (desc) {\n            long zsetlen = dictSize(((zset*)sortval->ptr)->dict);\n\n            ln = zsl->tail;\n            if (start > 0)\n                ln = zslGetElementByRank(zsl,zsetlen-start);\n        } else {\n            ln = zsl->header->level[0].forward;\n            if (start > 0)\n                ln = zslGetElementByRank(zsl,start+1);\n        }\n\n        while(rangelen--) {\n            serverAssertWithInfo(c,sortval,ln != NULL);\n            sdsele = zslGetNodeElement(ln);\n            vector[j].obj = createStringObject(sdsele,sdslen(sdsele));\n            vector[j].u.score = 0;\n            vector[j].u.cmpobj = NULL;\n            j++;\n            ln = desc ? ln->backward : ln->level[0].forward;\n        }\n        /* Fix start/end: output code is not aware of this optimization. */\n        end -= start;\n        start = 0;\n    } else if (sortval->type == OBJ_ZSET) {\n        dict *set = ((zset*)sortval->ptr)->dict;\n        dictIterator di;\n        dictEntry *setele;\n        sds sdsele;\n\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(sortval);\n        dictInitIterator(&di, set);\n        while((setele = dictNext(&di)) != NULL) {\n            sdsele = zslGetNodeElement(dictGetKey(setele));\n            vector[j].obj = createStringObject(sdsele,sdslen(sdsele));\n            vector[j].u.score = 0;\n            vector[j].u.cmpobj = NULL;\n            j++;\n        }\n        dictResetIterator(&di);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), sortval, oldsize, kvobjAllocSize(sortval));\n    } else {\n        serverPanic(\"Unknown type\");\n    }\n    serverAssertWithInfo(c,sortval,j == vectorlen);\n\n    /* Now it's time to load the right scores in the sorting vector */\n    if (!dontsort) {\n        for (j = 0; j < vectorlen; j++) {\n            robj *byval;\n            if (sortby) {\n                /* lookup value to sort by 
*/\n                byval = lookupKeyByPattern(c->db,sortby,vector[j].obj);\n                if (!byval) continue;\n            } else {\n                /* use object itself to sort by */\n                byval = vector[j].obj;\n            }\n\n            if (alpha) {\n                if (sortby) vector[j].u.cmpobj = getDecodedObject(byval);\n            } else {\n                if (sdsEncodedObject(byval)) {\n                    if (string2d(byval->ptr,sdslen(byval->ptr),&vector[j].u.score) == 0) {\n                        int_conversion_error = 1;\n                    }\n                } else if (byval->encoding == OBJ_ENCODING_INT) {\n                    /* Don't need to decode the object if it's\n                     * integer-encoded (the only encoding supported so\n                     * far). We can just cast it. */\n                    vector[j].u.score = (long)byval->ptr;\n                } else {\n                    serverAssertWithInfo(c,sortval,1 != 1);\n                }\n            }\n\n            /* when the object was retrieved using lookupKeyByPattern,\n             * its refcount needs to be decreased. */\n            if (sortby) {\n                decrRefCount(byval);\n            }\n        }\n\n        server.sort_desc = desc;\n        server.sort_alpha = alpha;\n        server.sort_bypattern = sortby ? 1 : 0;\n        server.sort_store = storekey ? 1 : 0;\n        if (sortby && (start != 0 || end != vectorlen-1))\n            pqsort(vector,vectorlen,sizeof(redisSortObject),sortCompare, start,end);\n        else\n            qsort(vector,vectorlen,sizeof(redisSortObject),sortCompare);\n    }\n\n    /* Send command output to the output buffer, performing the specified\n     * GET/DEL/INCR/DECR operations if any. */\n    outputlen = getop ? 
getop*(end-start+1) : end-start+1;\n    if (int_conversion_error) {\n        addReplyError(c,\"One or more scores can't be converted into double\");\n    } else if (storekey == NULL) {\n        /* STORE option not specified, send the sorting result to the client */\n        addReplyArrayLen(c,outputlen);\n        for (j = start; j <= end; j++) {\n            listNode *ln;\n            listIter li;\n\n            if (!getop) addReplyBulk(c,vector[j].obj);\n            listRewind(operations,&li);\n            while((ln = listNext(&li))) {\n                redisSortOperation *sop = ln->value;\n                robj *val = lookupKeyByPattern(c->db,sop->pattern,\n                                               vector[j].obj);\n\n                if (sop->type == SORT_OP_GET) {\n                    if (!val) {\n                        addReplyNull(c);\n                    } else {\n                        addReplyBulk(c,val);\n                        decrRefCount(val);\n                    }\n                } else {\n                    /* Always fails */\n                    serverAssertWithInfo(c,sortval,sop->type == SORT_OP_GET);\n                }\n            }\n        }\n    } else {\n        /* We can't predict the size and encoding of the stored list, so we\n         * assume it's a large list and then convert it at the end if needed. 
*/\n        robj *sobj = createQuicklistObject(server.list_max_listpack_size, server.list_compress_depth);\n\n        /* STORE option specified, set the sorting result as a List object */\n        for (j = start; j <= end; j++) {\n            listNode *ln;\n            listIter li;\n\n            if (!getop) {\n                listTypePush(sobj,vector[j].obj,LIST_TAIL);\n            } else {\n                listRewind(operations,&li);\n                while((ln = listNext(&li))) {\n                    redisSortOperation *sop = ln->value;\n                    robj *val = lookupKeyByPattern(c->db,sop->pattern,\n                                                   vector[j].obj);\n\n                    if (sop->type == SORT_OP_GET) {\n                        if (!val) val = createStringObject(\"\",0);\n\n                        /* listTypePush does an incrRefCount, so we should take\n                         * care of the incremented refcount caused by either\n                         * lookupKeyByPattern or createStringObject(\"\",0). */\n                        listTypePush(sobj,val,LIST_TAIL);\n                        decrRefCount(val);\n                    } else {\n                        /* Always fails */\n                        serverAssertWithInfo(c,sortval,sop->type == SORT_OP_GET);\n                    }\n                }\n            }\n        }\n\n        if (outputlen) {\n            listTypeTryConversion(sobj,LIST_CONV_AUTO,NULL,NULL);\n            setKey(c, c->db, storekey, &sobj, 0);\n            /* Ownership of sobj transferred to the db. Set to NULL to prevent\n             * freeing it below. */\n            sobj = NULL;\n            notifyKeyspaceEvent(NOTIFY_LIST,\"sortstore\",storekey,\n                                c->db->id);\n            server.dirty += outputlen;\n            /* Ownership of sobj transferred to the db. No need to free it. 
*/\n        } else {\n            if (dbDelete(c->db, storekey)) {\n                keyModified(c, c->db, storekey, NULL, 1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", storekey, c->db->id);\n                server.dirty++;\n            }\n            decrRefCount(sobj);\n        }\n\n        addReplyLongLong(c,outputlen);\n    }\n\n    /* Cleanup */\n    for (j = 0; j < vectorlen; j++)\n        decrRefCount(vector[j].obj);\n\n    decrRefCount(sortval);\n    listRelease(operations);\n    for (j = 0; j < vectorlen; j++) {\n        if (alpha && vector[j].u.cmpobj)\n            decrRefCount(vector[j].u.cmpobj);\n    }\n    zfree(vector);\n}\n\n/* SORT wrapper function for read-only mode. */\nvoid sortroCommand(client *c) {\n    sortCommandGeneric(c, 1);\n}\n\nvoid sortCommand(client *c) {\n    sortCommandGeneric(c, 0);\n}\n"
  },
  {
    "path": "src/sparkline.c",
    "content": "/* sparkline.c -- ASCII Sparklines\n * This code is modified from http://github.com/antirez/aspark and adapted\n * in order to return SDS strings instead of outputting directly to\n * the terminal.\n *\n * ---------------------------------------------------------------------------\n *\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n\n#include <math.h>\n\n/* This is the charset used to display the graphs, but multiple rows are used\n * to increase the resolution. */\nstatic char charset[] = \"_-`\";\nstatic char charset_fill[] = \"_o#\";\nstatic int charset_len = sizeof(charset)-1;\nstatic int label_margin_top = 1;\n\n/* ----------------------------------------------------------------------------\n * Sequences are arrays of samples we use to represent data to turn\n * into sparklines. This is the API in order to generate a sparkline:\n *\n * struct sequence *seq = createSparklineSequence();\n * sparklineSequenceAddSample(seq, 10, NULL);\n * sparklineSequenceAddSample(seq, 20, NULL);\n * sparklineSequenceAddSample(seq, 30, \"last sample label\");\n * sds output = sparklineRender(sdsempty(), seq, 80, 4, SPARKLINE_FILL);\n * freeSparklineSequence(seq);\n * ------------------------------------------------------------------------- */\n\n/* Create a new sequence. */\nstruct sequence *createSparklineSequence(void) {\n    struct sequence *seq = zmalloc(sizeof(*seq));\n    seq->length = 0;\n    seq->labels = 0;\n    seq->samples = NULL;\n    seq->min = 0.0f;\n    seq->max = 0.0f;\n    return seq;\n}\n\n/* Add a new sample into a sequence. */\nvoid sparklineSequenceAddSample(struct sequence *seq, double value, char *label) {\n    label = (label == NULL || label[0] == '\\0') ? 
NULL : zstrdup(label);\n    if (seq->length == 0) {\n        seq->min = seq->max = value;\n    } else {\n        if (value < seq->min) seq->min = value;\n        else if (value > seq->max) seq->max = value;\n    }\n    seq->samples = zrealloc(seq->samples,sizeof(struct sample)*(seq->length+1));\n    seq->samples[seq->length].value = value;\n    seq->samples[seq->length].label = label;\n    seq->length++;\n    if (label) seq->labels++;\n}\n\n/* Free a sequence. */\nvoid freeSparklineSequence(struct sequence *seq) {\n    int j;\n\n    for (j = 0; j < seq->length; j++)\n        zfree(seq->samples[j].label);\n    zfree(seq->samples);\n    zfree(seq);\n}\n\n/* ----------------------------------------------------------------------------\n * ASCII rendering of sequence\n * ------------------------------------------------------------------------- */\n\n/* Render part of a sequence, so that render_sequence() can call this function\n * with different parts in order to create the full output without overflowing\n * the current terminal columns. 
*/\nsds sparklineRenderRange(sds output, struct sequence *seq, int rows, int offset, int len, int flags) {\n    int j;\n    double relmax = seq->max - seq->min;\n    int steps = charset_len*rows;\n    int row = 0;\n    char *chars = zmalloc(len);\n    int loop = 1;\n    int opt_fill = flags & SPARKLINE_FILL;\n    int opt_log = flags & SPARKLINE_LOG_SCALE;\n\n    if (opt_log) {\n        relmax = log(relmax+1);\n    } else if (relmax == 0) {\n        relmax = 1;\n    }\n\n    while(loop) {\n        loop = 0;\n        memset(chars,' ',len);\n        for (j = 0; j < len; j++) {\n            struct sample *s = &seq->samples[j+offset];\n            double relval = s->value - seq->min;\n            int step;\n\n            if (opt_log) relval = log(relval+1);\n            step = (int) (relval*steps)/relmax;\n            if (step < 0) step = 0;\n            if (step >= steps) step = steps-1;\n\n            if (row < rows) {\n                /* Print the character needed to create the sparkline */\n                int charidx = step-((rows-row-1)*charset_len);\n                loop = 1;\n                if (charidx >= 0 && charidx < charset_len) {\n                    chars[j] = opt_fill ? charset_fill[charidx] :\n                                          charset[charidx];\n                } else if(opt_fill && charidx >= charset_len) {\n                    chars[j] = '|';\n                }\n            } else {\n                /* Labels spacing */\n                if (seq->labels && row-rows < label_margin_top) {\n                    loop = 1;\n                    break;\n                }\n                /* Print the label if needed. 
*/\n                if (s->label) {\n                    int label_len = strlen(s->label);\n                    int label_char = row - rows - label_margin_top;\n\n                    if (label_len > label_char) {\n                        loop = 1;\n                        chars[j] = s->label[label_char];\n                    }\n                }\n            }\n        }\n        if (loop) {\n            row++;\n            output = sdscatlen(output,chars,len);\n            output = sdscatlen(output,\"\\n\",1);\n        }\n    }\n    zfree(chars);\n    return output;\n}\n\n/* Turn a sequence into its ASCII representation */\nsds sparklineRender(sds output, struct sequence *seq, int columns, int rows, int flags) {\n    int j;\n\n    for (j = 0; j < seq->length; j += columns) {\n        int sublen = (seq->length-j) < columns ? (seq->length-j) : columns;\n\n        if (j != 0) output = sdscatlen(output,\"\\n\",1);\n        output = sparklineRenderRange(output, seq, rows, j, sublen, flags);\n    }\n    return output;\n}\n\n"
  },
  {
    "path": "src/sparkline.h",
    "content": "/* sparkline.h -- ASCII Sparklines header file\n *\n * ---------------------------------------------------------------------------\n *\n * Copyright (c) 2011-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SPARKLINE_H\n#define __SPARKLINE_H\n\n/* A sequence is composed of many \"samples\" */\nstruct sample {\n    double value;\n    char *label;\n};\n\nstruct sequence {\n    int length;\n    int labels;\n    struct sample *samples;\n    double min, max;\n};\n\n#define SPARKLINE_NO_FLAGS 0\n#define SPARKLINE_FILL 1      /* Fill the area under the curve. */\n#define SPARKLINE_LOG_SCALE 2 /* Use logarithmic scale. */\n\nstruct sequence *createSparklineSequence(void);\nvoid sparklineSequenceAddSample(struct sequence *seq, double value, char *label);\nvoid freeSparklineSequence(struct sequence *seq);\nsds sparklineRenderRange(sds output, struct sequence *seq, int rows, int offset, int len, int flags);\nsds sparklineRender(sds output, struct sequence *seq, int columns, int rows, int flags);\n\n#endif /* __SPARKLINE_H */\n"
  },
  {
    "path": "src/stream.h",
    "content": "#ifndef STREAM_H\n#define STREAM_H\n\n#include \"rax.h\"\n#include \"listpack.h\"\n#include \"dict.h\"\n#include \"xxhash.h\"\n\n/* Stream item ID: a 128 bit number composed of a milliseconds time and\n * a sequence counter. IDs generated in the same millisecond (or in a past\n * millisecond if the clock jumped backward) will use the millisecond time\n * of the latest generated ID and an incremented sequence. */\ntypedef struct streamID {\n    uint64_t ms;        /* Unix time in milliseconds. */\n    uint64_t seq;       /* Sequence number. */\n} streamID;\n\n/* Structure to hold IID and stream ID for IDMP deduplication */\ntypedef struct idmpEntry {\n    struct idmpEntry *next;  /* Pointer to next entry in insertion order (linked list) */\n    streamID id;             /* Associated stream ID */\n    size_t iid_len;          /* Length of the IID */\n    char iid[];              /* Flexible array member for inline IID storage */\n} idmpEntry;\n\n/* IDMP Producer structure for per-producer deduplication tracking */\ntypedef struct idmpProducer {\n    dict *idmp_dict;       /* IDMP IID tracking tree. */\n    idmpEntry *idmp_head;  /* Head of the IDMP entries linked list. */\n    idmpEntry *idmp_tail;  /* Tail of the IDMP entries linked list. */\n} idmpProducer;\n\n/* Dictionary type for IDMP entries - uses IID as key */\nextern dictType idmpDictType;\n\ntypedef struct stream {\n    rax *rax;               /* The radix tree holding the stream. */\n    uint64_t length;        /* Current number of elements inside this stream. */\n    streamID last_id;       /* Zero if there are yet no items. */\n    streamID first_id;      /* The first non-tombstone entry, zero if empty. */\n    streamID max_deleted_entry_id;  /* The maximal ID that was deleted. */\n    uint64_t entries_added; /* All time count of elements added. */\n    size_t alloc_size;      /* Total allocated memory (in bytes) by this stream. 
*/\n    rax *cgroups;           /* Consumer groups dictionary: name -> streamCG */\n    rax *cgroups_ref;       /* Index mapping message IDs to their consumer groups. */\n    streamID min_cgroup_last_id;  /* The minimum last ID among all consumer groups. */\n    unsigned int min_cgroup_last_id_valid: 1;\n    uint64_t idmp_duration; /* IDMP duration in seconds. */\n    uint64_t idmp_max_entries; /* Max number of IIDs to track. */\n    rax *idmp_producers;   /* IDMP producers radix tree: pid -> idmpProducer */\n    uint64_t iids_added;   /* All time count of entries with IID added. */\n    uint64_t iids_duplicates; /* All time count of duplicate IIDs detected. */\n} stream;\n\n/* We define an iterator to iterate stream items in an abstract way, without\n * caring about the radix tree + listpack representation. Technically speaking\n * the iterator is only used inside streamReplyWithRange(), so could just\n * be implemented inside the function, but practically there is the AOF\n * rewriting code that also needs to iterate the stream to emit the XADD\n * commands. */\ntypedef struct streamIterator {\n    stream *stream;         /* The stream we are iterating. */\n    streamID master_id;     /* ID of the master entry at listpack head. */\n    uint64_t master_fields_count;       /* Master entries # of fields. */\n    unsigned char *master_fields_start; /* Master entries start in listpack. */\n    unsigned char *master_fields_ptr;   /* Master field to emit next. */\n    int entry_flags;                    /* Flags of entry we are emitting. */\n    int rev;                /* True if iterating end to start (reverse). */\n    int skip_tombstones;    /* True if not emitting tombstone entries. */\n    uint64_t start_key[2];  /* Start key as 128 bit big endian. */\n    uint64_t end_key[2];    /* End key as 128 bit big endian. 
*/\n    /* Decoded native-endian fields for fast numeric comparison */\n    uint64_t start_ms;\n    uint64_t start_seq;\n    uint64_t end_ms;\n    uint64_t end_seq;\n    raxIterator ri;         /* Rax iterator. */\n    unsigned char *lp;      /* Current listpack. */\n    unsigned char *lp_last_ele; /* Previous listpack element position for corruption detection. */\n    unsigned char *lp_ele;  /* Current listpack cursor. */\n    unsigned char *lp_flags; /* Current entry flags pointer. */\n    /* Buffers used to hold the string of lpGet() when the element is\n     * integer encoded, so that there is no string representation of the\n     * element inside the listpack itself. */\n    unsigned char field_buf[LP_INTBUF_SIZE];\n    unsigned char value_buf[LP_INTBUF_SIZE];\n} streamIterator;\n\n/* Forward declarations */\ntypedef struct streamNACK streamNACK;\n\n/* Consumer group. */\ntypedef struct streamCG {\n    streamID last_id;       /* Last delivered (not acknowledged) ID for this\n                               group. Consumers that will just ask for more\n                               messages will be served with IDs > this. */\n    long long entries_read; /* In a perfect world (CG starts at 0-0, no dels, no\n                               XGROUP SETID, ...), this is the total number of\n                               group reads. In the real world, the reasoning behind\n                               this value is detailed at the top comment of\n                               streamEstimateDistanceFromFirstEverEntry(). */\n    rax *pel;               /* Pending entries list. This is a radix tree that\n                               has every message delivered to consumers (without\n                               the NOACK option) that was not yet acknowledged\n                               as processed. 
The key of the radix tree is the\n                               ID as a 64 bit big endian number, while the\n                               associated value is a streamNACK structure.*/\n    streamNACK *pel_time_head; /* Head of time-ordered doubly-linked list of pending\n                                  entries (oldest delivery_time). Used for efficient\n                                  CLAIM operations. O(1) access to oldest entries. */\n    streamNACK *pel_time_tail; /* Tail of time-ordered doubly-linked list of pending\n                                  entries (newest delivery_time). O(1) append for\n                                  updates that set delivery_time to current time. */\n    streamNACK *pel_nack_tail; /* Tail of the NACK zone at the head of the\n                                  PEL time-ordered list. NACKed entries occupy\n                                  positions from pel_time_head to pel_nack_tail.\n                                  NULL if no NACKed entries exist. */\n    rax *consumers;         /* A radix tree representing the consumers by name\n                               and their associated representation in the form\n                               of streamConsumer structures. */\n} streamCG;\n\n/* A specific consumer in a consumer group.  */\ntypedef struct streamConsumer {\n    mstime_t seen_time;         /* Last time this consumer tried to perform an action (attempted reading/claiming). */\n    mstime_t active_time;       /* Last time this consumer was active (successful reading/claiming). */\n    sds name;                   /* Consumer name. This is how the consumer\n                                   will be identified in the consumer group\n                                   protocol. Case sensitive. */\n    rax *pel;                   /* Consumer specific pending entries list: all\n                                   the pending messages delivered to this\n                                   consumer not yet acknowledged. 
Keys are\n                                   big endian message IDs, while values are\n                                   the same streamNACK structure referenced\n                                   in the \"pel\" of the consumer group structure\n                                   itself, so the value is shared. */\n} streamConsumer;\n\n/* Pending (yet not acknowledged) message in a consumer group. */\nstruct streamNACK {\n    mstime_t delivery_time;     /* Last time this message was delivered. */\n    uint64_t delivery_count;    /* Number of times this message was delivered.*/\n    streamConsumer *consumer;   /* The consumer this message was delivered to\n                                   in the last delivery. */\n    listNode *cgroup_ref_node; /* Reference to this NACK in the cgroups_ref list. */\n    streamID id;                /* Stream ID for this pending entry. */\n    struct streamNACK *pel_prev; /* Previous NACK in time-ordered doubly-linked list. */\n    struct streamNACK *pel_next; /* Next NACK in time-ordered doubly-linked list. */\n};\n\n/* Stream propagation information, passed to functions in order to propagate\n * XCLAIM commands to AOF and slaves. */\ntypedef struct streamPropInfo {\n    robj *keyname;\n    robj *groupname;\n} streamPropInfo;\n\n/* Prototypes of exported APIs. 
*/\nstruct client;\n\n/* Flags for streamCreateConsumer */\n#define SCC_DEFAULT       0\n#define SCC_NO_NOTIFY     (1<<0) /* Do not notify key space if consumer created */\n#define SCC_NO_DIRTIFY    (1<<1) /* Do not dirty++ if consumer created */\n\n#define SCG_INVALID_ENTRIES_READ -1\n\nstream *streamNew(void);\nvoid freeStream(stream *s);\nunsigned long streamLength(const robj *subject);\nsize_t streamReplyWithRange(client *c, stream *s, streamID *start, streamID *end, size_t count, int rev, long long min_idle_time, streamCG *group, streamConsumer *consumer, int flags, streamPropInfo *spi, unsigned long *propCount);\nvoid streamIteratorStart(streamIterator *si, stream *s, streamID *start, streamID *end, int rev);\nint streamIteratorGetID(streamIterator *si, streamID *id, int64_t *numfields);\nvoid streamIteratorGetField(streamIterator *si, unsigned char **fieldptr, unsigned char **valueptr, int64_t *fieldlen, int64_t *valuelen);\nvoid streamIteratorRemoveEntry(streamIterator *si, streamID *current);\nvoid streamIteratorStop(streamIterator *si);\nstreamCG *streamLookupCG(stream *s, sds groupname);\nstreamConsumer *streamLookupConsumer(streamCG *cg, sds name);\nstreamConsumer *streamCreateConsumer(stream *s, streamCG *cg, sds name, robj *key, int dbid, int flags);\nstreamCG *streamCreateCG(stream *s, char *name, size_t namelen, streamID *id, long long entries_read);\nstreamNACK *streamCreateNACK(stream *s, streamConsumer *consumer, streamID *id);\nvoid streamEncodeID(void *buf, streamID *id);\nvoid streamDecodeID(void *buf, streamID *id);\nint streamCompareID(streamID *a, streamID *b);\nvoid streamFreeNACK(stream *s, streamNACK *na);\nvoid streamDestroyNACK(stream *s, streamNACK *na, unsigned char *key);\nint streamIncrID(streamID *id);\nint streamDecrID(streamID *id);\nvoid streamPropagateConsumerCreation(client *c, robj *key, robj *groupname, sds consumername);\nrobj *streamDup(robj *o);\nint streamValidateListpackIntegrity(unsigned char *lp, size_t size, int 
deep);\nint streamParseID(const robj *o, streamID *id);\nrobj *createObjectFromStreamID(streamID *id);\nint streamAppendItem(stream *s, robj **argv, int64_t numfields, streamID *added_id, streamID *use_id, int seq_given);\nint streamDeleteItem(stream *s, streamID *id);\nvoid streamGetEdgeID(stream *s, int first, int skip_tombstones, streamID *edge_id);\nlong long streamEstimateDistanceFromFirstEverEntry(stream *s, streamID *id);\nint64_t streamTrimByLength(stream *s, long long maxlen, int approx);\nint64_t streamTrimByID(stream *s, streamID minid, int approx);\nint streamEntryExists(stream *s, streamID *id);\nvoid streamKeyLoaded(redisDb *db, robj *key, robj *val);\nvoid streamKeyRemoved(redisDb *db, robj *key, robj *val);\n\nlistNode *streamLinkCGroupToEntry(stream *s, streamCG *cg, unsigned char *key);\n\n/* PEL time list management (used by RDB loading) */\nvoid pelListInsertSorted(streamCG *cg, streamNACK *nack);\nvoid pelListUnlink(streamCG *cg, streamNACK *nack);\nvoid pelListInsertNacked(streamCG *cg, streamNACK *nack);\nuint64_t pelListNackedCount(streamCG *cg);\n\n/* IDMP functions */\nidmpEntry *idmpEntryCreate(const char *iid, size_t iid_len, size_t *alloc_size);\nvoid idmpEntryFree(idmpEntry *entry, size_t *alloc_size);\nidmpProducer *idmpProducerCreate(size_t *alloc_size);\nvoid idmpProducerFree(idmpProducer *producer, size_t *alloc_size);\nvoid streamFreeIdmpProducerGeneric(void *producer, void *strm);\n\n#endif\n"
  },
  {
    "path": "src/strl.c",
    "content": "/*\n * Copyright (c) 1998, 2015 Todd C. Miller <millert@openbsd.org>\n *\n * Permission to use, copy, modify, and distribute this software for any\n * purpose with or without fee is hereby granted, provided that the above\n * copyright notice and this permission notice appear in all copies.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES\n * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF\n * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR\n * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES\n * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN\n * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF\n * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.\n */\n#include <string.h>\n\n/*\n * Copy string src to buffer dst of size dsize.  At most dsize-1\n * chars will be copied.  Always NUL terminates (unless dsize == 0).\n * Returns strlen(src); if retval >= dsize, truncation occurred.\n */\nsize_t\nredis_strlcpy(char *dst, const char *src, size_t dsize)\n{\n    const char *osrc = src;\n    size_t nleft = dsize;\n\n    /* Copy as many bytes as will fit. */\n    if (nleft != 0) {\n        while (--nleft != 0) {\n            if ((*dst++ = *src++) == '\\0')\n                break;\n        }\n    }\n\n    /* Not enough room in dst, add NUL and traverse rest of src. */\n    if (nleft == 0) {\n        if (dsize != 0)\n            *dst = '\\0';        /* NUL-terminate dst */\n        while (*src++)\n            ;\n    }\n\n    return(src - osrc - 1); /* count does not include NUL */\n}\n\n/*\n * Appends src to string dst of size dsize (unlike strncat, dsize is the\n * full size of dst, not space left).  At most dsize-1 characters\n * will be copied.  
Always NUL terminates (unless dsize <= strlen(dst)).\n * Returns strlen(src) + MIN(dsize, strlen(initial dst)).\n * If retval >= dsize, truncation occurred.\n */\nsize_t\nredis_strlcat(char *dst, const char *src, size_t dsize)\n{\n    const char *odst = dst;\n    const char *osrc = src;\n    size_t n = dsize;\n    size_t dlen;\n\n    /* Find the end of dst and adjust bytes left but don't go past end. */\n    while (n-- != 0 && *dst != '\\0')\n        dst++;\n    dlen = dst - odst;\n    n = dsize - dlen;\n\n    if (n-- == 0)\n        return(dlen + strlen(src));\n    while (*src != '\\0') {\n        if (n != 0) {\n            *dst++ = *src;\n            n--;\n        }\n        src++;\n    }\n    *dst = '\\0';\n\n    return(dlen + (src - osrc));    /* count does not include NUL */\n}\n\n\n\n\n\n"
  },
  {
    "path": "src/syncio.c",
    "content": "/* Synchronous socket and file I/O operations useful across the core.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n\n/* ----------------- Blocking sockets I/O with timeouts --------------------- */\n\n/* Redis performs most of the I/O in a nonblocking way, with the exception\n * of the SYNC command where the slave does it in a blocking way, and\n * the MIGRATE command that must be blocking in order to be atomic from the\n * point of view of the two instances (one migrating the key and one receiving\n * the key). This is why we need the following blocking I/O functions.\n *\n * All the functions take the timeout in milliseconds. */\n\n#define SYNCIO__RESOLUTION 10 /* Resolution in milliseconds */\n\n/* Write the specified payload to 'fd'. If writing the whole payload will be\n * done within 'timeout' milliseconds the operation succeeds and 'size' is\n * returned. Otherwise the operation fails, -1 is returned, and an unspecified\n * partial write could be performed against the file descriptor. */\nssize_t syncWrite(int fd, char *ptr, ssize_t size, long long timeout) {\n    ssize_t nwritten, ret = size;\n    long long start = mstime();\n    long long remaining = timeout;\n\n    while(1) {\n        long long wait = (remaining > SYNCIO__RESOLUTION) ?\n                          remaining : SYNCIO__RESOLUTION;\n        long long elapsed;\n\n        /* Optimistically try to write before checking if the file descriptor\n         * is actually writable. At worst we get EAGAIN. 
*/\n        nwritten = write(fd,ptr,size);\n        if (nwritten == -1) {\n            if (errno != EAGAIN) return -1;\n        } else {\n            ptr += nwritten;\n            size -= nwritten;\n        }\n        if (size == 0) return ret;\n\n        /* Wait */\n        aeWait(fd,AE_WRITABLE,wait);\n        elapsed = mstime() - start;\n        if (elapsed >= timeout) {\n            errno = ETIMEDOUT;\n            return -1;\n        }\n        remaining = timeout - elapsed;\n    }\n}\n\n/* Read the specified number of bytes from 'fd'. If all the bytes are read\n * within 'timeout' milliseconds the operation succeeds and 'size' is returned.\n * Otherwise the operation fails, -1 is returned, and an unspecified amount of\n * data could be read from the file descriptor. */\nssize_t syncRead(int fd, char *ptr, ssize_t size, long long timeout) {\n    ssize_t nread, totread = 0;\n    long long start = mstime();\n    long long remaining = timeout;\n\n    if (size == 0) return 0;\n    while(1) {\n        long long wait = (remaining > SYNCIO__RESOLUTION) ?\n                          remaining : SYNCIO__RESOLUTION;\n        long long elapsed;\n\n        /* Optimistically try to read before checking if the file descriptor\n         * is actually readable. At worst we get EAGAIN. */\n        nread = read(fd,ptr,size);\n        if (nread == 0) return -1; /* short read. 
*/\n        if (nread == -1) {\n            if (errno != EAGAIN) return -1;\n        } else {\n            ptr += nread;\n            size -= nread;\n            totread += nread;\n        }\n        if (size == 0) return totread;\n\n        /* Wait */\n        aeWait(fd,AE_READABLE,wait);\n        elapsed = mstime() - start;\n        if (elapsed >= timeout) {\n            errno = ETIMEDOUT;\n            return -1;\n        }\n        remaining = timeout - elapsed;\n    }\n}\n\n/* Read a line making sure that every char will not require more than 'timeout'\n * milliseconds to be read.\n *\n * On success the number of bytes read is returned, otherwise -1.\n * On success the string is always correctly terminated with a 0 byte. */\nssize_t syncReadLine(int fd, char *ptr, ssize_t size, long long timeout) {\n    ssize_t nread = 0;\n\n    size--;\n    while(size) {\n        char c;\n\n        if (syncRead(fd,&c,1,timeout) == -1) return -1;\n        if (c == '\\n') {\n            *ptr = '\\0';\n            if (nread && *(ptr-1) == '\\r') *(ptr-1) = '\\0';\n            return nread;\n        } else {\n            *ptr++ = c;\n            *ptr = '\\0';\n            nread++;\n        }\n        size--;\n    }\n    return nread;\n}\n"
  },
  {
    "path": "src/syscheck.c",
    "content": "/*\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n#include \"fmacros.h\"\n#include \"config.h\"\n#include \"syscheck.h\"\n#include \"sds.h\"\n#include \"anet.h\"\n\n#include <time.h>\n#include <sys/resource.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <sys/wait.h>\n\n#ifdef __linux__\n#include <sys/mman.h>\n#endif\n\n\n#ifdef __linux__\nstatic sds read_sysfs_line(char *path) {\n    char buf[256];\n    FILE *f = fopen(path, \"r\");\n    if (!f) return NULL;\n    if (!fgets(buf, sizeof(buf), f)) {\n        fclose(f);\n        return NULL;\n    }\n    fclose(f);\n    sds res = sdsnew(buf);\n    res = sdstrim(res, \" \\n\");\n    return res;\n}\n\n/* Verify our clocksource implementation doesn't go through a system call (uses vdso).\n * Going through a system call to check the time degrades Redis performance. */\nstatic int checkClocksource(sds *error_msg) {\n    unsigned long test_time_us, system_hz;\n    struct timespec ts;\n    unsigned long long start_us;\n    struct rusage ru_start, ru_end;\n\n    system_hz = sysconf(_SC_CLK_TCK);\n\n    if (getrusage(RUSAGE_SELF, &ru_start) != 0)\n        return 0;\n    if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0) {\n        return 0;\n    }\n    start_us = (ts.tv_sec * 1000000 + ts.tv_nsec / 1000);\n\n    /* clock_gettime() busy loop of 5 times system tick (for a system_hz of 100 this is 50ms)\n     * Using system_hz is required to ensure accurate measurements from getrusage().\n     * If our clocksource is configured correctly (vdso) this will result in no system calls.\n     * If our clocksource is inefficient it'll waste most of the busy loop in the kernel. 
*/\n    test_time_us = 5 * 1000000 / system_hz;\n    while (1) {\n        unsigned long long d;\n        if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0)\n            return 0;\n        d = (ts.tv_sec * 1000000 + ts.tv_nsec / 1000) - start_us;\n        if (d >= test_time_us) break;\n    }\n    if (getrusage(RUSAGE_SELF, &ru_end) != 0)\n        return 0;\n\n    long long stime_us = (ru_end.ru_stime.tv_sec * 1000000 + ru_end.ru_stime.tv_usec) - (ru_start.ru_stime.tv_sec * 1000000 + ru_start.ru_stime.tv_usec);\n    long long utime_us = (ru_end.ru_utime.tv_sec * 1000000 + ru_end.ru_utime.tv_usec) - (ru_start.ru_utime.tv_sec * 1000000 + ru_start.ru_utime.tv_usec);\n\n    /* If more than 10% of the process time was in system calls we probably have an inefficient clocksource, print a warning */\n    if (stime_us * 10 > stime_us + utime_us) {\n        sds avail = read_sysfs_line(\"/sys/devices/system/clocksource/clocksource0/available_clocksource\");\n        sds curr = read_sysfs_line(\"/sys/devices/system/clocksource/clocksource0/current_clocksource\");\n        *error_msg = sdscatprintf(sdsempty(),\n           \"Slow system clocksource detected. This can result in degraded performance. \"\n           \"Consider changing the system's clocksource. \"\n           \"Current clocksource: %s. Available clocksources: %s. \"\n           \"For example: run the command 'echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource' as root. \"\n           \"To permanently change the system's clocksource you'll need to set the 'clocksource=' kernel command line parameter.\",\n           curr ? curr : \"\", avail ? avail : \"\");\n        sdsfree(avail);\n        sdsfree(curr);\n        return -1;\n    } else {\n        return 1;\n    }\n}\n\n/* Verify we're not using the `xen` clocksource. The xen hypervisor's default clocksource is slow and affects\n * Redis's performance. This has been measured on ec2 xen based instances. 
ec2 recommends using the non-default\n * tsc clock source for these instances. */\nint checkXenClocksource(sds *error_msg) {\n    sds curr = read_sysfs_line(\"/sys/devices/system/clocksource/clocksource0/current_clocksource\");\n    int res = 1;\n    if (curr == NULL) {\n        res = 0;\n    } else if (strcmp(curr, \"xen\") == 0) {\n        *error_msg = sdsnew(\n            \"Your system is configured to use the 'xen' clocksource which might lead to degraded performance. \"\n            \"Check the result of the [slow-clocksource] system check: run 'redis-server --check-system' to check if \"\n            \"the system's clocksource isn't degrading performance.\");\n        res = -1;\n    }\n    sdsfree(curr);\n    return res;\n}\n\n/* Verify overcommit is enabled.\n * When overcommit memory is disabled Linux will kill the forked child of a background save\n * if we don't have enough free memory to satisfy double the current memory usage even though\n * the forked child uses copy-on-write to reduce its actual memory usage. */\nint checkOvercommit(sds *error_msg) {\n    FILE *fp = fopen(\"/proc/sys/vm/overcommit_memory\",\"r\");\n    char buf[64];\n\n    if (!fp) return 0;\n    if (fgets(buf,64,fp) == NULL) {\n        fclose(fp);\n        return 0;\n    }\n    fclose(fp);\n\n    if (strtol(buf, NULL, 10) != 1) {\n        *error_msg = sdsnew(\n            \"Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. \"\n#if defined(USE_JEMALLOC)\n            \"Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. 
\"\n#endif\n            \"To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the \"\n            \"command 'sysctl vm.overcommit_memory=1' for this to take effect.\");\n        return -1;\n    } else {\n        return 1;\n    }\n}\n\n/* Make sure transparent huge pages aren't always enabled. When they are this can cause copy-on-write logic\n * to consume much more memory and reduce performance during forks. */\nint checkTHPEnabled(sds *error_msg) {\n    char buf[1024];\n\n    FILE *fp = fopen(\"/sys/kernel/mm/transparent_hugepage/enabled\",\"r\");\n    if (!fp) return 0;\n    if (fgets(buf,sizeof(buf),fp) == NULL) {\n        fclose(fp);\n        return 0;\n    }\n    fclose(fp);\n\n    if (strstr(buf,\"[always]\") != NULL) {\n        *error_msg = sdsnew(\n            \"You have Transparent Huge Pages (THP) support enabled in your kernel. \"\n            \"This will create latency and memory usage issues with Redis. \"\n            \"To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, \"\n            \"and add it to your /etc/rc.local in order to retain the setting after a reboot. \"\n            \"Redis must be restarted after THP is disabled (set to 'madvise' or 'never').\");\n        return -1;\n    } else {\n        return 1;\n    }\n}\n\n#ifdef __arm64__\n/* Get size in kilobytes of the Shared_Dirty pages of the calling process for the\n * memory map corresponding to the provided address, or -1 on error. 
*/\nstatic int smapsGetSharedDirty(unsigned long addr) {\n    int ret, in_mapping = 0, val = -1;\n    unsigned long from, to;\n    char buf[64];\n    FILE *f;\n\n    f = fopen(\"/proc/self/smaps\", \"r\");\n    if (!f) return -1;\n\n    while (1) {\n        if (!fgets(buf, sizeof(buf), f))\n            break;\n\n        ret = sscanf(buf, \"%lx-%lx\", &from, &to);\n        if (ret == 2)\n            in_mapping = from <= addr && addr < to;\n\n        if (in_mapping && !memcmp(buf, \"Shared_Dirty:\", 13)) {\n            sscanf(buf, \"%*s %d\", &val);\n            /* If parsing fails, we remain with val == -1 */\n            break;\n        }\n    }\n\n    fclose(f);\n    return val;\n}\n\n/* Older arm64 Linux kernels have a bug that could lead to data corruption\n * during background save in certain scenarios. This function checks if the\n * kernel is affected.\n * The bug was fixed in commit ff1712f953e27f0b0718762ec17d0adb15c9fd0b\n * titled: \"arm64: pgtable: Ensure dirty bit is preserved across pte_wrprotect()\"\n */\nint checkLinuxMadvFreeForkBug(sds *error_msg) {\n    int ret, pipefd[2] = { -1, -1 };\n    pid_t pid;\n    char *p = NULL, *q;\n    int res = 1;\n    long page_size = sysconf(_SC_PAGESIZE);\n    long map_size = 3 * page_size;\n\n    /* Create a memory map that's in our full control (not one used by the allocator). */\n    p = mmap(NULL, map_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);\n    if (p == MAP_FAILED) {\n        return 0;\n    }\n\n    q = p + page_size;\n\n    /* Split the memory map in 3 pages by setting their protection as RO|RW|RO to prevent\n     * Linux from merging this memory map with adjacent VMAs. */\n    ret = mprotect(q, page_size, PROT_READ | PROT_WRITE);\n    if (ret < 0) {\n        res = 0;\n        goto exit;\n    }\n\n    /* Write to the page once to make it resident */\n    *(volatile char*)q = 0;\n\n    /* Tell the kernel that this page is free to be reclaimed. 
*/\n#ifndef MADV_FREE\n#define MADV_FREE 8\n#endif\n    ret = madvise(q, page_size, MADV_FREE);\n    if (ret < 0) {\n        /* MADV_FREE is not available on older kernels that are presumably\n         * not affected. */\n        if (errno == EINVAL) goto exit;\n\n        res = 0;\n        goto exit;\n    }\n\n    /* Write to the page after being marked for freeing, this is supposed to take\n     * ownership of that page again. */\n    *(volatile char*)q = 0;\n\n    /* Create a pipe for the child to return the info to the parent. */\n    ret = anetPipe(pipefd, 0, 0);\n    if (ret < 0) {\n        res = 0;\n        goto exit;\n    }\n\n    /* Fork the process. */\n    pid = fork();\n    if (pid < 0) {\n        res = 0;\n        goto exit;\n    } else if (!pid) {\n        /* Child: check if the page is marked as dirty, page_size in kb.\n         * A value of 0 means the kernel is affected by the bug. */\n        ret = smapsGetSharedDirty((unsigned long) q);\n        if (!ret)\n            res = -1;\n        else if (ret == -1)     /* Failed to read */\n            res = 0;\n\n        ret = write(pipefd[1], &res, sizeof(res)); /* Assume success, ignore return value*/\n        exit(0);\n    } else {\n        /* Read the result from the child. */\n        ret = read(pipefd[0], &res, sizeof(res));\n        if (ret < 0) {\n            res = 0;\n        }\n\n        /* Reap the child pid. */\n        waitpid(pid, NULL, 0);\n    }\n\nexit:\n    /* Cleanup */\n    if (pipefd[0] != -1) close(pipefd[0]);\n    if (pipefd[1] != -1) close(pipefd[1]);\n    if (p != NULL) munmap(p, map_size);\n\n    if (res == -1)\n        *error_msg = sdsnew(\n            \"Your kernel has a bug that could lead to data corruption during background save. 
\"\n            \"Please upgrade to the latest stable kernel.\");\n\n    return res;\n}\n#endif /* __arm64__ */\n#endif /* __linux__ */\n\n/*\n * Standard system check interface:\n * Each check has a name `name` and a function pointer `check_fn`.\n * `check_fn` should return:\n *   -1 in case the check fails.\n *   1 in case the check passes.\n *   0 in case the check could not be completed (usually because of some unexpected failed system call).\n *   When (and only when) the check fails and -1 is returned, an error description is placed in a new sds pointed to by\n *   the single `sds*` argument to `check_fn`. This message should be freed by the caller via `sdsfree()`.\n */\ntypedef struct {\n    const char *name;\n    int (*check_fn)(sds*);\n} check;\n\ncheck checks[] = {\n#ifdef __linux__\n    {.name = \"slow-clocksource\", .check_fn = checkClocksource},\n    {.name = \"xen-clocksource\", .check_fn = checkXenClocksource},\n    {.name = \"overcommit\", .check_fn = checkOvercommit},\n    {.name = \"THP\", .check_fn = checkTHPEnabled},\n#ifdef __arm64__\n    {.name = \"madvise-free-fork-bug\", .check_fn = checkLinuxMadvFreeForkBug},\n#endif\n#endif\n    {.name = NULL, .check_fn = NULL}\n};\n\n/* Performs various system checks, returns 0 if any check fails, 1 otherwise. */\nint syscheck(void) {\n    check *cur_check = checks;\n    int ret = 1;\n    sds err_msg = NULL;\n    while (cur_check->check_fn) {\n        int res = cur_check->check_fn(&err_msg);\n        printf(\"[%s]...\", cur_check->name);\n        if (res == 0) {\n            printf(\"skipped\\n\");\n        } else if (res == 1) {\n            printf(\"OK\\n\");\n        } else {\n            printf(\"WARNING:\\n\");\n            printf(\"%s\\n\", err_msg);\n            sdsfree(err_msg);\n            ret = 0;\n        }\n        cur_check++;\n    }\n\n    return ret;\n}\n"
  },
  {
    "path": "src/syscheck.h",
    "content": "/*\n * Copyright (c) 2022-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __SYSCHECK_H\n#define __SYSCHECK_H\n\n#include \"sds.h\"\n#include \"config.h\"\n\nint syscheck(void);\n#ifdef __linux__\nint checkXenClocksource(sds *error_msg);\nint checkTHPEnabled(sds *error_msg);\nint checkOvercommit(sds *error_msg);\n#ifdef __arm64__\nint checkLinuxMadvFreeForkBug(sds *error_msg);\n#endif\n#endif\n\n#endif\n"
  },
  {
    "path": "src/t_hash.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n * \n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"redisassert.h\"\n#include \"ebuckets.h\"\n#include \"entry.h\"\n#include \"cluster_asm.h\"\n#include \"vector.h\"\n#include <math.h>\n\n/* Threshold for HEXPIRE and HPERSIST to be considered whether it is worth to\n * update the expiration time of the hash object in global HFE DS. */\n#define HASH_NEW_EXPIRE_DIFF_THRESHOLD max(4000, 1<<EB_BUCKET_KEY_PRECISION)\n\n/* Reserve 2 bits out of hash-field expiration time for possible future lightweight\n * indexing/categorizing of fields. It can be achieved by hacking HFE as follows:\n *\n *    HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]\n *\n * Redis will also need to expose kind of HEXPIRESCAN and HEXPIRECOUNT for this\n * idea. Yet to be better defined.\n *\n * HFE_MAX_ABS_TIME_MSEC constraint must be enforced only at API level. Internally,\n * the expiration time can be up to EB_EXPIRE_TIME_MAX for future readiness.\n */\n#define HFE_MAX_ABS_TIME_MSEC (EB_EXPIRE_TIME_MAX >> 2)\n\ntypedef enum GetFieldRes {\n    /* common (Used by hashTypeGet* value family) */\n    GETF_OK = 0,            /* The field was found. */\n    GETF_NOT_FOUND,         /* The field was not found. */\n    GETF_EXPIRED,           /* Logically expired (Might be lazy deleted or not) */\n    GETF_EXPIRED_HASH,      /* Delete hash since retrieved field was expired and\n                             * it was the last field in the hash. 
*/\n} GetFieldRes;\n\ntypedef listpackEntry CommonEntry; /* extend usage beyond lp */\n\n#define FIELDS_STACK_SIZE 16\n\n/* A vec with an embedded stack buffer, used to collect field robj pointers\n * for subkey notifications without heap allocation in the common case. */\ntypedef struct fieldvec { vec v; void *buf[FIELDS_STACK_SIZE]; } fieldvec;\n\nstatic inline vec *fieldvecInit(fieldvec *fv, size_t cap) {\n    vecInit(&fv->v, fv->buf, FIELDS_STACK_SIZE);\n    vecReserve(&fv->v, cap);\n    return &fv->v;\n}\n\n/* hash field expiration (HFE) funcs */\nstatic ExpireAction onFieldExpire(eItem item, void *ctx);\nstatic ExpireMeta* hentryGetExpireMeta(const eItem field);\nstatic void hexpireGenericCommand(client *c, long long basetime, int unit);\nstatic void hfieldPersist(robj *hashObj, Entry *entry);\nstatic void propagateHashFieldDeletion(redisDb *db, sds key, char *field, size_t fieldLen);\n\n/* hash dictType funcs */\nstatic void dictEntryDestructor(dict *d, void *entry);\nstatic size_t hashDictMetadataBytes(dict *d);\nstatic size_t hashDictWithExpireMetadataBytes(dict *d);\nstatic void hashDictWithExpireOnRelease(dict *d);\nstatic kvobj* hashTypeLookupWriteOrCreate(client *c, robj *key);\n\n/*-----------------------------------------------------------------------------\n * Define dictType of hash\n *\n * - Stores fields as entry objects (field-value pairs) with optional expiration\n * - Note that small hashes are represented with listpacks\n * - Once expiration is set for a field, the dict instance and corresponding\n *   dictType are replaced with a dict containing metadata for Hash Field\n *   Expiration (HFE) and using dictType `entryHashDictTypeWithHFE`\n * - Dict uses no_value=1 since entry contains both field and value\n *----------------------------------------------------------------------------*/\ndictType entryHashDictType = {\n    dictSdsHash,                                /* lookup hash function */\n    NULL,                                       /* 
key dup */\n    NULL,                                       /* val dup */\n    dictSdsKeyCompare,                          /* lookup key compare */\n    dictEntryDestructor,                       /* key destructor */\n    NULL,                                       /* val destructor (value is in entry) */\n    .dictMetadataBytes = hashDictMetadataBytes,\n    .no_value = 1,                              /* entry contains both field and value */\n    .keys_are_odd = 1,                          /* entry pointers (SDS) are always odd */\n};\n\n/* Define alternative dictType of hash with hash-field expiration (HFE) support */\ndictType entryHashDictTypeWithHFE = {\n    dictSdsHash,                                /* lookup hash function */\n    NULL,                                       /* key dup */\n    NULL,                                       /* val dup */\n    dictSdsKeyCompare,                          /* lookup key compare */\n    dictEntryDestructor,                       /* key destructor */\n    NULL,                                       /* val destructor (value is in entry) */\n    .dictMetadataBytes = hashDictWithExpireMetadataBytes,\n    .onDictRelease = hashDictWithExpireOnRelease,\n    .no_value = 1,                              /* entry contains both field and value */\n    .keys_are_odd = 1,                          /* entry pointers (SDS) are always odd */\n};\n\n/*-----------------------------------------------------------------------------\n * Hash Field Expiration (HFE) Feature\n *\n * Each hash instance maintains its own set of hash field expiration within its\n * private ebuckets DS. In order to support HFE active expire cycle across hash\n * instances, hashes with associated HFE will be also registered in a global\n * ebuckets DS with expiration time value that reflects their next minimum\n * time to expire (db->subexpires). 
The global HFE Active expiration will be\n * triggered from the activeExpireCycle() function and in turn will invoke \"local\"\n * HFE Active sub-expiration for each hash instance that has expired fields.\n *----------------------------------------------------------------------------*/\nEbucketsType subexpiresBucketsType = {\n    .onDeleteItem = NULL,\n    .getExpireMeta = hashGetExpireMeta,   /* get ExpireMeta attached to each hash */\n    .itemsAddrAreOdd = 0,                 /* Addresses of dict are even */\n};\n\n/* htExpireMetadata - ebuckets type for hash fields with expiration times. An ebuckets\n * instance will be attached to each hash that has at least one field with an expiry\n * time. */\nEbucketsType hashFieldExpireBucketsType = {\n    .onDeleteItem = NULL,\n    .getExpireMeta = hentryGetExpireMeta, /* get ExpireMeta attached to each field */\n    .itemsAddrAreOdd = 1,                 /* Addresses of hfield (entry/sds) are odd!! */\n};\n\n/* OnFieldExpireCtx passed to OnFieldExpire() */\ntypedef struct OnFieldExpireCtx {\n    robj *hashObj;\n    redisDb *db;\n    int activeEx; /* 1 for active expire, 0 for lazy expire */\n    vec *vexpired; /* Expired fields vector */\n} OnFieldExpireCtx;\n\n/* The implementation of hashes by dict was modified from storing fields as sds\n * strings to storing \"entry\" objects (field-value pairs with optional expiration).\n * The entry structure unifies field and value into a single allocation, with\n * optional expiration metadata. 
This is simpler than the previous mstr approach\n * and provides better memory locality.\n */\n\n/* Used by hpersistCommand() */\ntypedef enum SetPersistRes {\n    HFE_PERSIST_NO_FIELD =     -2,   /* No such hash-field */\n    HFE_PERSIST_NO_TTL =       -1,   /* No TTL attached to the field */\n    HFE_PERSIST_OK =            1\n} SetPersistRes;\n\nstatic inline int isDictWithMetaHFE(dict *d) {\n    return d->type == &entryHashDictTypeWithHFE;\n}\n\n/*-----------------------------------------------------------------------------\n * setex* - Set field's expiration\n *\n * Setting expiration times on fields can be time-consuming and complex, since\n * each expiration-time update not only updates the `ebuckets` of the corresponding\n * hash but might also update the `ebuckets` of the global HFE DS. Therefore, a\n * sequence of field-expiration updates for a given hash is batched, such that the\n * global HFE DS gets updated only once the entire sequence is done.\n *\n * To do so, follow the scheme:\n * 1. Call hashTypeSetExInit() to initialize the HashTypeSetEx struct.\n * 2. Call hashTypeSetEx() one or more times, for each field/expiration update.\n * 3. 
Call hashTypeSetExDone() for notification and update of global HFE.\n *----------------------------------------------------------------------------*/\n\n/* Returned value of hashTypeSetEx() */\ntypedef enum SetExRes {\n    HSETEX_OK =                1,   /* Expiration time set/updated as expected */\n    HSETEX_NO_FIELD =         -2,   /* No such hash-field */\n    HSETEX_NO_CONDITION_MET =  0,   /* Specified NX | XX | GT | LT condition not met */\n    HSETEX_DELETED =           2,   /* Field deleted because the specified time is in the past */\n} SetExRes;\n\n/* Used by httlGenericCommand() */\ntypedef enum GetExpireTimeRes {\n    HFE_GET_NO_FIELD =          -2, /* No such hash-field */\n    HFE_GET_NO_TTL =            -1, /* No TTL attached to the field */\n} GetExpireTimeRes;\n\ntypedef enum ExpireSetCond {\n    HFE_NX = 1<<0,\n    HFE_XX = 1<<1,\n    HFE_GT = 1<<2,\n    HFE_LT = 1<<3\n} ExpireSetCond;\n\n/* Used by hashTypeSetEx() for setting fields or their expiry  */\ntypedef struct HashTypeSetEx {\n\n    /*** config ***/\n    ExpireSetCond expireSetCond;        /* [XX | NX | GT | LT] */\n\n    /*** metadata ***/\n    uint64_t minExpire;                 /* if uninit EB_EXPIRE_TIME_INVALID */\n    redisDb *db;\n    robj *key, *hashObj;\n    uint64_t minExpireFields;           /* Trace updated fields and their previous/new\n                                         * minimum expiration time. 
If minimum recorded\n                                         * is above minExpire of the hash, then we don't\n                                         * have to update global HFE DS */\n\n    /* Optionally provide client for notification */\n    client *c;\n    const char *cmd;\n} HashTypeSetEx;\n\nint hashTypeSetExInit(robj *key, kvobj *kvo, client *c, redisDb *db,\n                      ExpireSetCond expireSetCond, HashTypeSetEx *ex);\n\nSetExRes hashTypeSetEx(robj *o, sds field, uint64_t expireAt, HashTypeSetEx *exInfo);\n\nvoid hashTypeSetExDone(HashTypeSetEx *e);\n\n/*-----------------------------------------------------------------------------\n * Accessor functions for dictType of hash\n *----------------------------------------------------------------------------*/\n\nstatic void dictEntryDestructor(dict *d, void *entry) {\n    size_t usable;\n    size_t *alloc_size = htGetMetadataSize(d);\n\n    /* If attached TTL to the field, then remove it from hash's private ebuckets. */\n    if (entryGetExpiry(entry) != EB_EXPIRE_TIME_INVALID) {\n        htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n        ebRemove(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, entry);\n    }\n\n    entryFree(entry, &usable);\n    *alloc_size -= usable;\n\n    /* Don't have to update global HFE DS. It's unnecessary. Implementing this\n     * would introduce significant complexity and overhead for an operation that\n     * isn't critical. In the worst case scenario, the hash will be efficiently\n     * updated later by an active-expire operation, or it will be removed by the\n     * hash's dbGenericDelete() function. 
*/\n}\n\nstatic size_t hashDictMetadataBytes(dict *d) {\n    UNUSED(d);\n    return sizeof(size_t);\n}\n\nstatic size_t hashDictWithExpireMetadataBytes(dict *d) {\n    UNUSED(d);\n    /* expireMeta of the hash, ref to ebuckets and pointer to hash's key */\n    return sizeof(htMetadataEx);\n}\n\nstatic void hashDictWithExpireOnRelease(dict *d) {\n    /* for sure allocated with metadata. Otherwise, this func won't be registered */\n    htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n    ebDestroy(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, NULL);\n}\n\n/*-----------------------------------------------------------------------------\n * listpackEx functions\n *----------------------------------------------------------------------------*/\n/*\n * If any of hash field expiration command is called on a listpack hash object\n * for the first time, we convert it to OBJ_ENCODING_LISTPACK_EX encoding.\n * We allocate \"struct listpackEx\" which holds listpack pointer and expiry\n * metadata. In the listpack string, we append another TTL entry for each field\n * value pair. From now on, listpack will have triplets in it: field-value-ttl.\n * If TTL is not set for a field, we store 'zero' as the TTL value. 'zero' is\n * encoded as two bytes in the listpack. Memory overhead of a non-existing TTL\n * will be two bytes per field.\n *\n * Fields in the listpack will be ordered by TTL. Field with the smallest expiry\n * time will be the first item. Fields without TTL will be at the end of the\n * listpack. 
This way, it is easier/faster to find expired items.\n */\n\n#define HASH_LP_NO_TTL 0\n\nstruct listpackEx *listpackExCreate(void) {\n    listpackEx *lpt = zcalloc(sizeof(*lpt));\n    lpt->meta.trash = 1;\n    lpt->lp = NULL;\n    return lpt;\n}\n\nstatic void listpackExFree(listpackEx *lpt) {\n    lpFree(lpt->lp);\n    zfree(lpt);\n}\n\nstruct lpFingArgs {\n    uint64_t max_to_search; /* [in] Max number of tuples to search */\n    uint64_t expire_time;   /* [in] Find the tuple that has a TTL larger than expire_time */\n    unsigned char *p;       /* [out] First item of the tuple that has a TTL larger than expire_time */\n    int expired;            /* [out] Number of tuples that have TTLs less than expire_time */\n    int index;              /* Internally used */\n    unsigned char *fptr;    /* Internally used, temp ptr */\n};\n\n/* Callback for lpFindCb(). Used to find number of expired fields as part of\n * active expiry or when trying to find the position for the new field according\n * to its expiry time.*/\nstatic int cbFindInListpack(const unsigned char *lp, unsigned char *p,\n                            void *user, unsigned char *s, long long slen)\n{\n    (void) lp;\n    struct lpFingArgs *r = user;\n\n    r->index++;\n\n    if (r->max_to_search == 0)\n        return 0; /* Break the loop and return */\n\n    if (r->index % 3 == 1) {\n        r->fptr = p;  /* First item of the tuple. */\n    } else if (r->index % 3 == 0) {\n        serverAssert(!s);\n\n        /* Third item of a tuple is expiry time */\n        if (slen == HASH_LP_NO_TTL || (uint64_t) slen >= r->expire_time) {\n            r->p = r->fptr;\n            return 0; /* Break the loop and return */\n        }\n        r->expired++;\n        r->max_to_search--;\n    }\n\n    return 1;\n}\n\n/* Returns number of expired fields. 
*/\nstatic uint64_t listpackExExpireDryRun(const robj *o) {\n    serverAssert(o->encoding == OBJ_ENCODING_LISTPACK_EX);\n\n    listpackEx *lpt = o->ptr;\n\n    struct lpFingArgs r = {\n        .max_to_search = UINT64_MAX,\n        .expire_time = commandTimeSnapshot(),\n    };\n\n    lpFindCb(lpt->lp, NULL, &r, cbFindInListpack, 0);\n    return r.expired;\n}\n\n/* Returns the expiration time of the item with the nearest expiration. */\nstatic uint64_t listpackExGetMinExpire(robj *o) {\n    serverAssert(o->encoding == OBJ_ENCODING_LISTPACK_EX);\n\n    long long expireAt;\n    unsigned char *fptr;\n    listpackEx *lpt = o->ptr;\n\n    /* As fields are ordered by expire time, first field will have the smallest\n     * expiry time. Third element is the expiry time of the first field */\n    fptr = lpSeek(lpt->lp, 2);\n    if (fptr != NULL) {\n        serverAssert(lpGetIntegerValue(fptr, &expireAt));\n\n        /* Check if this is a non-volatile field. */\n        if (expireAt != HASH_LP_NO_TTL)\n            return expireAt;\n    }\n\n    return EB_EXPIRE_TIME_INVALID;\n}\n\n/* Walk over fields and delete the expired ones. */\nvoid listpackExExpire(redisDb *db, kvobj *kv, ExpireInfo *info) {\n    OnFieldExpireCtx *ctx = info->ctx;\n    serverAssert(kv->encoding == OBJ_ENCODING_LISTPACK_EX);\n    uint64_t expired = 0, min = EB_EXPIRE_TIME_INVALID;\n    unsigned char *ptr;\n    listpackEx *lpt = kv->ptr;\n\n    ptr = lpFirst(lpt->lp);\n\n    sds key = kvobjGetKey(kv);\n\n    while (ptr != NULL && (info->itemsExpired < info->maxToExpire)) {\n        long long val;\n        int64_t flen;\n        unsigned char intbuf[LP_INTBUF_SIZE], *fref;\n\n        fref = lpGet(ptr, &flen, intbuf);\n\n        ptr = lpNext(lpt->lp, ptr);\n        serverAssert(ptr);\n        ptr = lpNext(lpt->lp, ptr);\n        serverAssert(ptr && lpGetIntegerValue(ptr, &val));\n\n        /* Fields are ordered by expiry time. 
If we reach a non-expired\n         * or a non-volatile field, we know the rest are not yet expired. */\n        if (val == HASH_LP_NO_TTL || (uint64_t) val > info->now)\n            break;\n\n        /* Collect expired field for subkey notification. */\n        if (ctx->vexpired) {\n            char *fstr = (char *)(fref ? fref : intbuf);\n            vecPush(ctx->vexpired, createStringObject(fstr, flen));\n        }\n\n        propagateHashFieldDeletion(db, key, (char *)((fref) ? fref : intbuf), flen);\n        server.stat_expired_subkeys++;\n        if (ctx->activeEx) server.stat_expired_subkeys_active++;\n\n        ptr = lpNext(lpt->lp, ptr);\n\n        info->itemsExpired++;\n        expired++;\n    }\n\n    if (expired) {\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(kv);\n        lpt->lp = lpDeleteRange(lpt->lp, 0, expired * 3);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(db, getKeySlot(key), kv, oldsize, kvobjAllocSize(kv));\n\n        /* update keysizes */\n        unsigned long l = lpLength(lpt->lp) / 3;\n        updateKeysizesHist(db, OBJ_HASH, l + expired, l);\n    }\n\n    min = hashTypeGetMinExpire(kv, 1 /*accurate*/);\n    info->nextExpireTime = min;\n}\n\nstatic void listpackExAddInternal(robj *o, listpackEntry ent[3]) {\n    listpackEx *lpt = o->ptr;\n\n    /* Shortcut, just append at the end if this is a non-volatile field. */\n    if (ent[2].lval == HASH_LP_NO_TTL) {\n        lpt->lp = lpBatchAppend(lpt->lp, ent, 3);\n        return;\n    }\n\n    struct lpFingArgs r = {\n            .max_to_search = UINT64_MAX,\n            .expire_time = ent[2].lval,\n    };\n\n    /* Check if there is a field with a larger TTL. */\n    lpFindCb(lpt->lp, NULL, &r, cbFindInListpack, 0);\n\n    /* If the list is empty or there is no field with a larger TTL, the result will be\n     * NULL. 
Otherwise, just insert before the found item.*/\n    if (r.p)\n        lpt->lp = lpBatchInsert(lpt->lp, r.p, LP_BEFORE, ent, 3, NULL);\n    else\n        lpt->lp = lpBatchAppend(lpt->lp, ent, 3);\n}\n\n/* Add new field ordered by expire time. */\nvoid listpackExAddNew(robj *o, char *field, size_t flen,\n                      char *value, size_t vlen, uint64_t expireAt) {\n    listpackEntry ent[3] = {\n        {.sval = (unsigned char*) field, .slen = flen},\n        {.sval = (unsigned char*) value, .slen = vlen},\n        {.lval = expireAt}\n    };\n\n    listpackExAddInternal(o, ent);\n}\n\n/* If expiry time is changed, this function will place field into the correct\n * position. First, it deletes the field and re-inserts to the listpack ordered\n * by expiry time. */\nstatic void listpackExUpdateExpiry(robj *o, sds field,\n                                   unsigned char *fptr,\n                                   unsigned char *vptr,\n                                   uint64_t expire_at) {\n    unsigned int slen = 0;\n    long long val = 0;\n    unsigned char tmp[512] = {0};\n    unsigned char *valstr;\n    sds tmpval = NULL;\n    listpackEx *lpt = o->ptr;\n\n    /* Copy value */\n    valstr = lpGetValue(vptr, &slen, &val);\n    if (valstr) {\n        /* Normally, item length in the listpack is limited by\n         * 'hash-max-listpack-value' config. It is unlikely, but it might be\n         * larger than sizeof(tmp). */\n        if (slen > sizeof(tmp))\n            tmpval = sdsnewlen(valstr, slen);\n        else\n            memcpy(tmp, valstr, slen);\n    }\n\n    /* Delete field name, value and expiry time */\n    lpt->lp = lpDeleteRangeWithEntry(lpt->lp, &fptr, 3);\n\n    listpackEntry ent[3] = {{0}};\n\n    ent[0].sval = (unsigned char*) field;\n    ent[0].slen = sdslen(field);\n\n    if (valstr) {\n        ent[1].sval = tmpval ? 
(unsigned char *) tmpval : tmp;\n        ent[1].slen = slen;\n    } else {\n        ent[1].lval = val;\n    }\n    ent[2].lval = expire_at;\n\n    listpackExAddInternal(o, ent);\n    sdsfree(tmpval);\n}\n\n/* Update field expire time. */\nSetExRes hashTypeSetExpiryListpack(HashTypeSetEx *ex, sds field,\n                                   unsigned char *fptr, unsigned char *vptr,\n                                   unsigned char *tptr, uint64_t expireAt)\n{\n    long long expireTime;\n    uint64_t prevExpire = EB_EXPIRE_TIME_INVALID;\n\n    serverAssert(lpGetIntegerValue(tptr, &expireTime));\n\n    if (expireTime != HASH_LP_NO_TTL) {\n        prevExpire = (uint64_t) expireTime;\n    }\n\n    /* Special value of EXPIRE_TIME_INVALID indicates field should be persisted.*/\n    if (expireAt == EB_EXPIRE_TIME_INVALID) {\n        /* Return error if already there is no ttl. */\n        if (prevExpire == EB_EXPIRE_TIME_INVALID)\n            return HSETEX_NO_CONDITION_MET;\n        listpackExUpdateExpiry(ex->hashObj, field, fptr, vptr, HASH_LP_NO_TTL);\n        return HSETEX_OK;\n    }\n\n    if (prevExpire == EB_EXPIRE_TIME_INVALID) {\n        /* For fields without expiry, LT condition is considered valid */\n        if (ex->expireSetCond & (HFE_XX | HFE_GT))\n            return HSETEX_NO_CONDITION_MET;\n    } else {\n        if (((ex->expireSetCond == HFE_GT) && (prevExpire >= expireAt)) ||\n            ((ex->expireSetCond == HFE_LT) && (prevExpire <= expireAt)) ||\n            (ex->expireSetCond == HFE_NX) )\n            return HSETEX_NO_CONDITION_MET;\n\n        /* Track of minimum expiration time (only later update global HFE DS) */\n        if (ex->minExpireFields > prevExpire)\n            ex->minExpireFields = prevExpire;\n    }\n\n    /* If expired, then delete the field and propagate the deletion.\n     * If replica, continue like the field is valid */\n    if (unlikely(checkAlreadyExpired(expireAt))) {\n        propagateHashFieldDeletion(ex->db, ex->key->ptr, 
field, sdslen(field));\n        hashTypeDelete(ex->hashObj, field);\n        server.stat_expired_subkeys++;\n        return HSETEX_DELETED;\n    }\n\n    if (ex->minExpireFields > expireAt)\n        ex->minExpireFields = expireAt;\n\n    listpackExUpdateExpiry(ex->hashObj, field, fptr, vptr, expireAt);\n    return HSETEX_OK;\n}\n\n/* Returns 1 if expired */\nint hashTypeIsExpired(const robj *o, uint64_t expireAt) {\n    if (server.allow_access_expired) return 0;\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        if (expireAt == HASH_LP_NO_TTL)\n            return 0;\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        if (expireAt == EB_EXPIRE_TIME_INVALID)\n            return 0;\n    } else {\n        serverPanic(\"Unknown encoding: %d\", o->encoding);\n    }\n\n    return (mstime_t) expireAt < commandTimeSnapshot();\n}\n\n/* Returns listpack pointer of the object. */\nunsigned char *hashTypeListpackGetLp(robj *o) {\n    if (o->encoding == OBJ_ENCODING_LISTPACK)\n        return o->ptr;\n    else if (o->encoding == OBJ_ENCODING_LISTPACK_EX)\n        return ((listpackEx*)o->ptr)->lp;\n\n    serverPanic(\"Unknown encoding: %d\", o->encoding);\n}\n\n/*-----------------------------------------------------------------------------\n * Hash type API\n *----------------------------------------------------------------------------*/\n\n/* Check the length of a number of objects to see if we need to convert a\n * listpack to a real hash. Note that we only check string encoded objects\n * as their string length can be queried in constant time. 
*/\nvoid hashTypeTryConversion(redisDb *db, kvobj *o, robj **argv, int start, int end) {\n    int i;\n    size_t sum = 0;\n\n    if (o->encoding != OBJ_ENCODING_LISTPACK && o->encoding != OBJ_ENCODING_LISTPACK_EX)\n        return;\n\n    /* We guess that most of the values in the input are unique, so\n     * if there are enough arguments we create a pre-sized hash, which\n     * might over allocate memory if there are duplicates. */\n    size_t new_fields = (end - start + 1) / 2;\n    if (new_fields > server.hash_max_listpack_entries) {\n        hashTypeConvert(db, o, OBJ_ENCODING_HT);\n        dictExpand(o->ptr, new_fields);\n        return;\n    }\n\n    for (i = start; i <= end; i++) {\n        if (!sdsEncodedObject(argv[i]))\n            continue;\n        size_t len = sdslen(argv[i]->ptr);\n        if (len > server.hash_max_listpack_value) {\n            hashTypeConvert(db, o, OBJ_ENCODING_HT);\n            return;\n        }\n        sum += len;\n    }\n    if (!lpSafeToAdd(hashTypeListpackGetLp(o), sum)) {\n        hashTypeConvert(db, o, OBJ_ENCODING_HT);\n    }\n}\n\n/* Get the value from a listpack encoded hash, identified by field. 
*/\nGetFieldRes hashTypeGetFromListpack(robj *o, sds field,\n                            unsigned char **vstr,\n                            unsigned int *vlen,\n                            long long *vll,\n                            uint64_t *expiredAt)\n{\n    *expiredAt = EB_EXPIRE_TIME_INVALID;\n    unsigned char *zl, *fptr = NULL, *vptr = NULL;\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        zl = o->ptr;\n        fptr = lpFirst(zl);\n        if (fptr != NULL) {\n            fptr = lpFind(zl, fptr, (unsigned char*)field, sdslen(field), 1);\n            if (fptr != NULL) {\n                /* Grab pointer to the value (fptr points to the field) */\n                vptr = lpNext(zl, fptr);\n                serverAssert(vptr != NULL);\n            }\n        }\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        long long expire;\n        unsigned char *h;\n        listpackEx *lpt = o->ptr;\n\n        fptr = lpFirst(lpt->lp);\n        if (fptr != NULL) {\n            fptr = lpFind(lpt->lp, fptr, (unsigned char*)field, sdslen(field), 2);\n            if (fptr != NULL) {\n                vptr = lpNext(lpt->lp, fptr);\n                serverAssert(vptr != NULL);\n\n                h = lpNext(lpt->lp, vptr);\n                serverAssert(h && lpGetIntegerValue(h, &expire));\n                if (expire != HASH_LP_NO_TTL)\n                    *expiredAt = expire;\n            }\n        }\n    } else {\n        serverPanic(\"Unknown hash encoding: %d\", o->encoding);\n    }\n\n    if (vptr != NULL) {\n        *vstr = lpGetValue(vptr, vlen, vll);\n        return GETF_OK;\n    }\n\n    return GETF_NOT_FOUND;\n}\n\n/* Get the value from a hash table encoded hash, identified by field.\n * Returns GETF_NOT_FOUND when the field cannot be found; otherwise GETF_OK\n * is returned and the SDS value is stored in *value. 
*/\nGetFieldRes hashTypeGetFromHashTable(robj *o, sds field, sds *value, uint64_t *expiredAt) {\n    dictEntry *de;\n\n    *expiredAt = EB_EXPIRE_TIME_INVALID;\n\n    serverAssert(o->encoding == OBJ_ENCODING_HT);\n\n    de = dictFind(o->ptr, field);\n\n    if (de == NULL)\n        return GETF_NOT_FOUND;\n\n    Entry *entry = dictGetKey(de);\n    *expiredAt = entryGetExpiry(entry);\n    *value = entryGetValue(entry);\n    return GETF_OK;\n}\n\n/* Higher level function of hashTypeGet*() that returns the hash value\n * associated with the specified field.\n * Arguments:\n * hfeFlags      - Lookup for HFE_LAZY_* flags\n *\n * Returned:\n * GetFieldRes  - Result of get operation\n * vstr, vlen   - if string, ref in either *vstr and *vlen if it's\n *                returned in string form,\n * vll          - or stored in *vll if it's returned as a number.\n *                If *vll is populated *vstr is set to NULL, so the caller can\n *                always check the function return by checking the return value\n *                for GETF_OK and checking if vll (or vstr) is NULL.\n * expiredAt    - if the field has an expiration time, it will be set to the expiration \n *                time of the field. 
Otherwise, will be set to EB_EXPIRE_TIME_INVALID.\n */\nGetFieldRes hashTypeGetValue(redisDb *db, kvobj *o, sds field, unsigned char **vstr,\n                                   unsigned int *vlen, long long *vll, \n                                   int hfeFlags, uint64_t *expiredAt)\n{\n    sds key = kvobjGetKey(o);\n    GetFieldRes res;\n    uint64_t dummy;\n    size_t oldsize = 0;\n    if (expiredAt == NULL) expiredAt = &dummy;\n    if (o->encoding == OBJ_ENCODING_LISTPACK ||\n        o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        *vstr = NULL;\n        res = hashTypeGetFromListpack(o, field, vstr, vlen, vll, expiredAt);\n\n        if (res == GETF_NOT_FOUND)\n            return GETF_NOT_FOUND;\n\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        sds value = NULL;\n        if (server.memory_tracking_enabled && !(hfeFlags & HFE_LAZY_NO_UPDATE_ALLOCSIZES))\n            oldsize = kvobjAllocSize(o);\n        res = hashTypeGetFromHashTable(o, field, &value, expiredAt);\n        if (server.memory_tracking_enabled && !(hfeFlags & HFE_LAZY_NO_UPDATE_ALLOCSIZES))\n            updateSlotAllocSize(db, getKeySlot(key), o, oldsize, kvobjAllocSize(o));\n\n        if (res == GETF_NOT_FOUND)\n            return GETF_NOT_FOUND;\n\n        *vstr = (unsigned char*) value;\n        *vlen = sdslen(value);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n\n    if ((server.allow_access_expired) ||\n        (*expiredAt >= (uint64_t) commandTimeSnapshot()) ||\n        (hfeFlags & HFE_LAZY_ACCESS_EXPIRED))\n        return GETF_OK;\n\n    if (server.masterhost || server.cluster_enabled) {\n        /* If CLIENT_MASTER, assume valid as long as it didn't get deleted.\n         *\n         * In cluster mode, we also assume valid if we are importing data\n         * from the source, to avoid deleting fields that are still in use.\n         * We create a fake master client for data import, which can be\n         * identified using the CLIENT_MASTER flag. 
*/\n        if (server.current_client && (server.current_client->flags & CLIENT_MASTER))\n            return GETF_OK;\n\n        /* For replica, if user client, then act as if expired, but don't delete! */\n        if (server.masterhost) return GETF_EXPIRED;\n    }\n\n    if ((server.loading) ||\n        (hfeFlags & HFE_LAZY_AVOID_FIELD_DEL) ||\n        (isPausedActionsWithUpdate(PAUSE_ACTION_EXPIRE)))\n        return GETF_EXPIRED;\n\n    /* delete the field and propagate the deletion */\n    if (server.memory_tracking_enabled && !(hfeFlags & HFE_LAZY_NO_UPDATE_ALLOCSIZES))\n        oldsize = kvobjAllocSize(o);\n    serverAssert(hashTypeDelete(o, field) == 1);\n    if (server.memory_tracking_enabled && !(hfeFlags & HFE_LAZY_NO_UPDATE_ALLOCSIZES))\n        updateSlotAllocSize(db, getKeySlot(key), o, oldsize, kvobjAllocSize(o));\n    propagateHashFieldDeletion(db, key, field, sdslen(field));\n    server.stat_expired_subkeys++;\n\n    if (!(hfeFlags & HFE_LAZY_NO_UPDATE_KEYSIZES)) {\n        uint64_t l = hashTypeLength(o, 0);\n        updateKeysizesHist(db, OBJ_HASH, l+1, l);\n    }\n\n    /* If the field is the last one in the hash, then the hash will be deleted */\n    res = GETF_EXPIRED;\n    robj *keyObj = createStringObject(key, sdslen(key));\n    unsigned long length = hashTypeLength(o, 0);\n    if ((length != 0) && !(hfeFlags & HFE_LAZY_NO_NOTIFICATION)) {\n        robj fobj, *farr[1] = {&fobj};\n        initStaticStringObject(fobj, field);\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", keyObj, db->id, farr, 1);\n    }\n    if ((length == 0) && (!(hfeFlags & HFE_LAZY_AVOID_HASH_DEL))) {\n        if (!(hfeFlags & HFE_LAZY_NO_NOTIFICATION))\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", keyObj, db->id);\n        dbDelete(db,keyObj);\n        o = NULL;\n        res = GETF_EXPIRED_HASH;\n    }\n    keyModified(NULL, db, keyObj, o, !(hfeFlags & HFE_LAZY_NO_SIGNAL));\n    decrRefCount(keyObj);\n    return res;\n}\n\n/* Like 
hashTypeGetValue() but returns a Redis object, which is useful for\n * interaction with the hash type outside t_hash.c.\n * If the field is not found in the hash, the function returns 0. Otherwise\n * it returns 1 and *val is set to a newly allocated string object with the value.\n *\n * hfeFlags      - Lookup HFE_LAZY_* flags\n * isHashDeleted - If an expired field was accessed and it was the last field\n *                 in the hash, then the hash itself is deleted as well. In this case,\n *                 isHashDeleted will be set to 1.\n * val           - If the field is found, then val will be set to the value object.\n * expireTime    - If the field exists (`GETF_OK`) then expireTime will be set to\n *                 the expiration time of the field. Otherwise, it will be set to 0.\n *\n * Returns 1 if the field exists, and 0 when it doesn't.\n */\nint hashTypeGetValueObject(redisDb *db, kvobj *o, sds field, int hfeFlags,\n                           robj **val, uint64_t *expireTime, int *isHashDeleted) {\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vll;\n\n    if (isHashDeleted) *isHashDeleted = 0;\n    if (val) *val = NULL;\n    GetFieldRes res = hashTypeGetValue(db,o,field,&vstr,&vlen,&vll, \n                                                   hfeFlags, expireTime);\n\n    if (res == GETF_OK) {\n        /* expireTime set to 0 if the field has no expiration time */ \n        if (expireTime && (*expireTime == EB_EXPIRE_TIME_INVALID))\n            *expireTime = 0;\n        \n        /* If expected to return the value, then create a new object */\n        if (val) {\n            if (vstr) *val = createStringObject((char *) vstr, vlen);\n            else *val = createStringObjectFromLongLong(vll);\n        }\n        return 1;\n    }\n\n    if ((res == GETF_EXPIRED_HASH) && (isHashDeleted))\n        *isHashDeleted = 1;\n\n    /* GETF_EXPIRED_HASH, GETF_EXPIRED, GETF_NOT_FOUND */\n    return 0;\n}\n\n/* Test if the specified field exists in 
the given hash. If the field is\n * expired (HFE), then it will be lazily deleted unless the HFE_LAZY_AVOID_FIELD_DEL\n * hfeFlags is set.\n *\n * hfeFlags      - Lookup HFE_LAZY_* flags\n * isHashDeleted - If an expired field was accessed and it was the last field\n *                 in the hash, then the hash itself is deleted as well. In this case,\n *                 isHashDeleted will be set to 1.\n *\n * Returns 1 if the field exists, and 0 when it doesn't.\n */\nint hashTypeExists(redisDb *db, kvobj *o, sds field, int hfeFlags, int *isHashDeleted) {\n    unsigned char *vstr = NULL;\n    unsigned int vlen = UINT_MAX;\n    long long vll = LLONG_MAX;\n\n    GetFieldRes res = hashTypeGetValue(db, o, field, &vstr, &vlen, &vll, \n                                             hfeFlags, NULL);\n    if (isHashDeleted)\n        *isHashDeleted = (res == GETF_EXPIRED_HASH) ? 1 : 0;\n    return (res == GETF_OK) ? 1 : 0;\n}\n\n/* Add a new field, overwrite the old with the new value if it already exists.\n * Return 0 on insert and 1 on update.\n *\n * By default, the key and value SDS strings are copied if needed, so the\n * caller retains ownership of the strings passed. However this behavior\n * can be affected by passing appropriate flags (possibly bitwise OR-ed):\n *\n * HASH_SET_TAKE_FIELD  -- The SDS field ownership passes to the function.\n * HASH_SET_TAKE_VALUE  -- The SDS value ownership passes to the function.\n * HASH_SET_KEEP_TTL    -- Keep the original TTL if the field already exists.\n *\n * When the flags are used the caller does not need to release the passed\n * SDS string(s). 
It's up to the function to use the string to create a new\n * entry or to free the SDS string before returning to the caller.\n *\n * HASH_SET_COPY corresponds to no flags passed, and means the default\n * semantics of copying the values if needed.\n *\n */\n#define HASH_SET_TAKE_FIELD  (1<<0)\n#define HASH_SET_TAKE_VALUE  (1<<1)\n#define HASH_SET_KEEP_TTL (1<<2)\n\nstatic_assert(HASH_SET_TAKE_VALUE == ENTRY_TAKE_VALUE, \"ENTRY_TAKE_VALUE must match HASH_SET_TAKE_VALUE\");\n\nint hashTypeSet(redisDb *db, kvobj *o, sds field, sds value, int flags) {\n    int update = 0;\n\n    /* Check if the field is too long for listpack, and convert before adding the item.\n     * This is needed for HINCRBY* case since in other commands this is handled early by\n     * hashTypeTryConversion, so this check will be a NOP. */\n    if (o->encoding == OBJ_ENCODING_LISTPACK  ||\n        o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        if (sdslen(field) > server.hash_max_listpack_value || sdslen(value) > server.hash_max_listpack_value)\n            hashTypeConvert(db, o, OBJ_ENCODING_HT);\n    }\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl, *fptr, *vptr;\n\n        zl = o->ptr;\n        fptr = lpFirst(zl);\n        if (fptr != NULL) {\n            fptr = lpFind(zl, fptr, (unsigned char*)field, sdslen(field), 1);\n            if (fptr != NULL) {\n                /* Grab pointer to the value (fptr points to the field) */\n                vptr = lpNext(zl, fptr);\n                serverAssert(vptr != NULL);\n\n                /* Replace value */\n                zl = lpReplace(zl, &vptr, (unsigned char*)value, sdslen(value));\n                update = 1;\n            }\n        }\n\n        if (!update) {\n            listpackEntry entries[2] = {\n                {.sval = (unsigned char*) field, .slen = sdslen(field)},\n                {.sval = (unsigned char*) value, .slen = sdslen(value)},\n            };\n\n            /* Push new field/value pair 
onto the tail of the listpack */\n            zl = lpBatchAppend(zl, entries, 2);\n        }\n        o->ptr = zl;\n\n        /* Check if the listpack needs to be converted to a hash table */\n        if (hashTypeLength(o, 0) > server.hash_max_listpack_entries)\n            hashTypeConvert(db, o, OBJ_ENCODING_HT);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        unsigned char *fptr = NULL, *vptr = NULL, *tptr = NULL;\n        listpackEx *lpt = o->ptr;\n        long long expireTime = HASH_LP_NO_TTL;\n\n        fptr = lpFirst(lpt->lp);\n        if (fptr != NULL) {\n            fptr = lpFind(lpt->lp, fptr, (unsigned char*)field, sdslen(field), 2);\n            if (fptr != NULL) {\n                /* Grab pointer to the value (fptr points to the field) */\n                vptr = lpNext(lpt->lp, fptr);\n                serverAssert(vptr != NULL);\n\n                /* Replace value */\n                lpt->lp = lpReplace(lpt->lp, &vptr, (unsigned char *) value, sdslen(value));\n                update = 1;\n\n                fptr = lpPrev(lpt->lp, vptr);\n                serverAssert(fptr != NULL);\n\n                tptr = lpNext(lpt->lp, vptr);\n                serverAssert(tptr && lpGetIntegerValue(tptr, &expireTime));\n\n                if (flags & HASH_SET_KEEP_TTL) {\n                    /* keep old field along with TTL */\n                } else if (expireTime != HASH_LP_NO_TTL) {\n                    /* re-insert field and override TTL */\n                    listpackExUpdateExpiry(o, field, fptr, vptr, HASH_LP_NO_TTL);\n                }\n            }\n        }\n\n        if (!update)\n            listpackExAddNew(o, field, sdslen(field), value, sdslen(value),\n                             HASH_LP_NO_TTL);\n\n        /* Check if the listpack needs to be converted to a hash table */\n        if (hashTypeLength(o, 0) > server.hash_max_listpack_entries)\n            hashTypeConvert(db, o, OBJ_ENCODING_HT);\n\n    } else if (o->encoding == 
OBJ_ENCODING_HT) {\n        dict *ht = o->ptr;\n        /* check if field already exists */\n        dictEntryLink bucket, link = dictFindLink(ht, field, &bucket);\n        size_t *alloc_size = htGetMetadataSize(ht);\n\n        /* take ownership of value if requested */\n        uint32_t newEntryFlags = flags & HASH_SET_TAKE_VALUE;\n        flags &= ~HASH_SET_TAKE_VALUE;\n\n        if (link == NULL) {\n            /* Create entry and transfer value ownership if possible */\n            size_t usable;\n            Entry *newEntry = entryCreate(field, value, newEntryFlags, &usable);\n\n            dictSetKeyAtLink(ht, newEntry, &bucket, 1);\n            *alloc_size += usable;\n        } else {\n            /* Existing field - update value in entry */\n            Entry *oldEntry = dictGetKey(*link);\n\n            /* Check if old entry has expiration before potentially freeing it */\n            uint64_t oldExpireAt = entryGetExpiry(oldEntry);\n            uint64_t newExpireAt = EB_EXPIRE_TIME_INVALID;\n\n            /* If attached TTL to the old field, then remove it from hash's\n             * private ebuckets. We do this before updating the value because\n             * the entry might be reallocated and freed. 
*/\n            if (oldExpireAt != EB_EXPIRE_TIME_INVALID) {\n                hfieldPersist(o, oldEntry);\n                if (flags & HASH_SET_KEEP_TTL) {\n                    newExpireAt = oldExpireAt;\n                    newEntryFlags |= ENTRY_HAS_EXPIRY;\n                }\n            }\n            \n            ssize_t usableDiff;\n            Entry *newEntry = entryUpdate(oldEntry, value, newEntryFlags, &usableDiff);\n\n            /* If entry was reallocated, update the dict key */\n            if (newEntry != oldEntry) {\n                /* entryUpdate already freed the old entry if needed */\n                /* Update the dict to point to the new entry using dictSetKeyAtLink (no_value=1) */\n                dictSetKeyAtLink(ht, newEntry, &link, 0);\n            }\n\n            /* If keeping TTL, add the (potentially new) entry back to ebuckets */\n            if (newExpireAt != EB_EXPIRE_TIME_INVALID) {\n                dict *d = o->ptr;\n                htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n                ebAdd(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, newEntry, newExpireAt);\n            }\n\n            *alloc_size += usableDiff;\n            update = 1;\n        }\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n\n    /* Free the SDS strings we did not reference elsewhere if the flags\n     * make this function responsible for them. 
*/\n    if (flags & HASH_SET_TAKE_FIELD && field) sdsfree(field);\n    if (flags & HASH_SET_TAKE_VALUE && value) sdsfree(value);\n    return update;\n}\n\nSetExRes hashTypeSetExpiryHT(HashTypeSetEx *exInfo, sds field, uint64_t expireAt) {\n    dict *ht = exInfo->hashObj->ptr;\n    dictEntryLink link = NULL;\n    Entry *entryNew = NULL;\n\n    link = dictFindLink(ht, field, NULL);\n    if (link == NULL)\n        return HSETEX_NO_FIELD;\n\n    dictEntry *existingEntry = *link;\n    Entry *oldEntry = dictGetKey(existingEntry);\n    /* Special value of EXPIRE_TIME_INVALID indicates field should be persisted.*/\n    if (expireAt == EB_EXPIRE_TIME_INVALID) {\n        /* Return error if already there is no ttl. */\n        if (entryGetExpiry(oldEntry) == EB_EXPIRE_TIME_INVALID)\n            return HSETEX_NO_CONDITION_MET;\n\n        hfieldPersist(exInfo->hashObj, oldEntry);\n        return HSETEX_OK;\n    }\n\n    /* If field doesn't have expiry metadata attached */\n    if (!entryHasExpiry(oldEntry)) {\n        size_t *alloc_size = htGetMetadataSize(ht);\n\n        /* For fields without expiry, LT condition is considered valid */\n        if (exInfo->expireSetCond & (HFE_XX | HFE_GT))\n            return HSETEX_NO_CONDITION_MET;\n\n        ssize_t usableDiff;\n        entryNew = entryUpdate(oldEntry, NULL, ENTRY_HAS_EXPIRY, &usableDiff);\n        *alloc_size += usableDiff;\n    } else { /* field has ExpireMeta struct attached */\n        uint64_t prevExpire = entryGetExpiry(oldEntry);\n\n        /* If field has valid expiration time, then check GT|LT|NX */\n        if (prevExpire != EB_EXPIRE_TIME_INVALID) {\n            if (((exInfo->expireSetCond == HFE_GT) && (prevExpire >= expireAt)) ||\n                ((exInfo->expireSetCond == HFE_LT) && (prevExpire <= expireAt)) ||\n                (exInfo->expireSetCond == HFE_NX) )\n                return HSETEX_NO_CONDITION_MET;\n\n            /* If expiry time is the same, then nothing to do */\n            if (prevExpire == 
expireAt)\n                return HSETEX_OK;\n\n            /* remove old expiry time from hash's private ebuckets */\n            htMetadataEx *dm = htGetMetadataEx(ht);\n            ebRemove(&dm->hfe, &hashFieldExpireBucketsType, oldEntry);\n\n            /* Keep track of the minimum expiration time (the global HFE DS is updated later) */\n            if (exInfo->minExpireFields > prevExpire)\n                exInfo->minExpireFields = prevExpire;\n\n        } else {\n            /* field has invalid expiry. No need to ebRemove() */\n\n            /* Check XX|LT|GT */\n            if (exInfo->expireSetCond & (HFE_XX | HFE_GT))\n                return HSETEX_NO_CONDITION_MET;\n        }\n\n        /* Reuse oldEntry as entryNew and rewrite its expiry with ebAdd() */\n        entryNew = oldEntry;\n    }\n\n    dictSetKeyAtLink(ht, entryNew, &link, 0);  /* newItem=0 for updating existing entry */\n\n\n    /* If expired, then delete the field and propagate the deletion.\n     * If replica, continue like the field is valid */\n    if (unlikely(checkAlreadyExpired(expireAt))) {\n        /* replicas should not initiate deletion of fields */\n        propagateHashFieldDeletion(exInfo->db, exInfo->key->ptr, field, sdslen(field));\n        hashTypeDelete(exInfo->hashObj, field);\n        server.stat_expired_subkeys++;\n        return HSETEX_DELETED;\n    }\n\n    if (exInfo->minExpireFields > expireAt)\n        exInfo->minExpireFields = expireAt;\n\n    htMetadataEx *dm = htGetMetadataEx(ht);\n    ebAdd(&dm->hfe, &hashFieldExpireBucketsType, entryNew, expireAt);\n    return HSETEX_OK;\n}\n\n/*\n * Set field expiration\n *\n * Take care to first call hashTypeSetExInit() and then call this function.\n * Finally, call hashTypeSetExDone() to notify and update global HFE DS.\n *\n * Special value of EB_EXPIRE_TIME_INVALID for 'expireAt' argument will persist\n * the field.\n */\nSetExRes hashTypeSetEx(robj *o, sds field, uint64_t expireAt, HashTypeSetEx *exInfo) {\n    if (o->encoding == 
OBJ_ENCODING_LISTPACK_EX) {\n        unsigned char *fptr = NULL, *vptr = NULL, *tptr = NULL;\n        listpackEx *lpt = o->ptr;\n\n        fptr = lpFirst(lpt->lp);\n        if (fptr)\n            fptr = lpFind(lpt->lp, fptr, (unsigned char*)field, sdslen(field), 2);\n\n        if (!fptr)\n            return HSETEX_NO_FIELD;\n\n        /* Grab pointer to the value (fptr points to the field) */\n        vptr = lpNext(lpt->lp, fptr);\n        serverAssert(vptr != NULL);\n\n        tptr = lpNext(lpt->lp, vptr);\n        serverAssert(tptr);\n\n        /* update TTL */\n        return hashTypeSetExpiryListpack(exInfo, field, fptr, vptr, tptr, expireAt);\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        /* If needed to set the field along with expiry */\n        return hashTypeSetExpiryHT(exInfo, field, expireAt);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n\n    return HSETEX_OK; /* never reach here */\n}\n\nvoid initDictExpireMetadata(robj *o) {\n    dict *ht = o->ptr;\n\n    htMetadataEx *m = htGetMetadataEx(ht);\n    m->hfe = ebCreate();     /* Allocate HFE DS */\n    m->expireMeta.trash = 1; /* mark as trash (as long as it wasn't ebAdd()) */\n}\n\n/* Init HashTypeSetEx struct before calling hashTypeSetEx() */\nint hashTypeSetExInit(robj *key, kvobj *o, client *c, redisDb *db,\n                      ExpireSetCond expireSetCond, HashTypeSetEx *ex)\n{\n    dict *ht = o->ptr;\n    ex->expireSetCond = expireSetCond;\n    ex->minExpire = EB_EXPIRE_TIME_INVALID;\n    ex->c = c;\n    ex->db = db;\n    ex->key = key;\n    ex->hashObj = o;\n    ex->minExpireFields = EB_EXPIRE_TIME_INVALID;\n\n    /* Take care that the HASH supports expiration */\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        hashTypeConvert(c->db, o, OBJ_ENCODING_LISTPACK_EX);\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        /* Take care that the dict has HFE metadata */\n        if (!isDictWithMetaHFE(ht)) {\n            /* Realloc (only header of dict) with metadata 
for hash-field expiration */\n            dictTypeAddMeta(&ht, &entryHashDictTypeWithHFE);\n            htMetadataEx *m = htGetMetadataEx(ht);\n            o->ptr = ht;\n\n            /* Find the key in the keyspace. Need to keep reference to the key for\n             * notifications or even removal of the hash */\n\n            /* Fill up dict HFE metadata */\n            m->hfe = ebCreate();     /* Allocate HFE DS */\n            m->expireMeta.trash = 1; /* mark as trash (as long as it wasn't ebAdd()) */\n        }\n    }\n\n    /* Read minExpire from attached ExpireMeta to the hash */\n    ex->minExpire = hashTypeGetMinExpire(o, 0);\n    return C_OK;\n}\n\n/*\n * After calling hashTypeSetEx() for setting fields or their expiry, call this\n * function to update global HFE DS.\n */\nvoid hashTypeSetExDone(HashTypeSetEx *ex) {\n\n    if (hashTypeLength(ex->hashObj, 0) == 0)\n        return;\n\n    /* If the minimum HFE of the hash is smaller than the expiration times of\n     * the fields specified in the command, then the minimum HFE of the hash\n     * won't change following this command. */\n    if ((ex->minExpire < ex->minExpireFields))\n        return;\n\n    /* Retrieve the new minimum expiration time. It might have changed. */\n    uint64_t newMinExpire = hashTypeGetMinExpire(ex->hashObj, 1 /*accurate*/);\n\n    /* Calculate the diff between old minExpire and newMinExpire. If it is\n     * only a few seconds, then we don't have to update the global HFE DS. 
At the worst\n     * case fields of hash will be active-expired up to few seconds later.\n     *\n     * In any case, active-expire operation will know to update global\n     * HFE DS more efficiently than here for a single item.\n     */\n    uint64_t diff = (ex->minExpire > newMinExpire) ?\n                    (ex->minExpire - newMinExpire) : (newMinExpire - ex->minExpire);\n    if (diff < HASH_NEW_EXPIRE_DIFF_THRESHOLD) return;\n\n    int slot = getKeySlot(ex->key->ptr);\n    if (ex->minExpire != EB_EXPIRE_TIME_INVALID) {\n        if (newMinExpire != EB_EXPIRE_TIME_INVALID)\n            estoreUpdate(ex->db->subexpires, slot, ex->hashObj, newMinExpire);\n        else\n            estoreRemove(ex->db->subexpires, slot, ex->hashObj);\n    } else {\n        if (newMinExpire != EB_EXPIRE_TIME_INVALID)\n            estoreAdd(ex->db->subexpires, slot, ex->hashObj, newMinExpire);\n    }\n}\n\n/* Delete an element from a hash.\n *\n * Return 1 on deleted and 0 on not found.\n * field - sds field name to delete */\nint hashTypeDelete(robj *o, void *field) {\n    int deleted = 0;\n    int fieldLen = sdslen((sds)field);\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl, *fptr;\n\n        zl = o->ptr;\n        fptr = lpFirst(zl);\n        if (fptr != NULL) {\n            fptr = lpFind(zl, fptr, (unsigned char*)field, fieldLen, 1);\n            if (fptr != NULL) {\n                /* Delete both of the key and the value. 
*/\n                zl = lpDeleteRangeWithEntry(zl,&fptr,2);\n                o->ptr = zl;\n                deleted = 1;\n            }\n        }\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        unsigned char *fptr;\n        listpackEx *lpt = o->ptr;\n\n        fptr = lpFirst(lpt->lp);\n        if (fptr != NULL) {\n            fptr = lpFind(lpt->lp, fptr, (unsigned char*)field, fieldLen, 2);\n            if (fptr != NULL) {\n                /* Delete field, value and ttl */\n                lpt->lp = lpDeleteRangeWithEntry(lpt->lp, &fptr, 3);\n                deleted = 1;\n            }\n        }\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        /* dictDelete() will call dictEntryDestructor() */\n        if (dictDelete((dict*)o->ptr, field) == C_OK) {\n            deleted = 1;\n        }\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n    return deleted;\n}\n\n/* Return the number of elements in a hash.\n *\n * Note, subtractExpiredFields=1 might be pricy in case there are many HFEs\n */\nunsigned long hashTypeLength(const robj *o, int subtractExpiredFields) {\n    unsigned long length = ULONG_MAX;\n    /* If expired field access is allowed, don't subtract expired fields from the count. 
*/\n    if (server.allow_access_expired)\n        subtractExpiredFields = 0;\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        length = lpLength(o->ptr) / 2;\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = o->ptr;\n        length = lpLength(lpt->lp) / 3;\n\n        if (subtractExpiredFields && lpt->meta.trash == 0)\n            length -= listpackExExpireDryRun(o);\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        uint64_t expiredItems = 0;\n        dict *d = (dict*)o->ptr;\n        if (subtractExpiredFields && isDictWithMetaHFE(d)) {\n            htMetadataEx *meta = htGetMetadataEx(d);\n            /* If dict registered in global HFE DS */\n            if (meta->expireMeta.trash == 0)\n                expiredItems = ebExpireDryRun(meta->hfe,\n                                              &hashFieldExpireBucketsType,\n                                              commandTimeSnapshot());\n        }\n        length = dictSize(d) - expiredItems;\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n    return length;\n}\n\nsize_t hashTypeAllocSize(const robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_HASH);\n    size_t size = 0;\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        size = lpBytes(o->ptr);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = o->ptr;\n        size = sizeof(listpackEx) + lpBytes(lpt->lp);\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        dict *d = o->ptr;\n        size += sizeof(dict) + dictMemUsage(d) + *htGetMetadataSize(d);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n    return size;\n}\n\nvoid hashTypeInitIterator(hashTypeIterator *hi, robj *subject) {\n    hi->subject = subject;\n    hi->encoding = subject->encoding;\n\n    if (hi->encoding == OBJ_ENCODING_LISTPACK ||\n        hi->encoding == OBJ_ENCODING_LISTPACK_EX)\n    {\n        hi->fptr = NULL;\n        hi->vptr = NULL;\n 
       hi->tptr = NULL;\n        hi->expire_time = EB_EXPIRE_TIME_INVALID;\n    } else if (hi->encoding == OBJ_ENCODING_HT) {\n        dictInitIterator(&hi->di, subject->ptr);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\nvoid hashTypeResetIterator(hashTypeIterator *hi) {\n    if (hi->encoding == OBJ_ENCODING_HT)\n        dictResetIterator(&hi->di);\n}\n\n/* Move to the next entry in the hash. Return C_OK when the next entry\n * could be found and C_ERR when the iterator reaches the end. */\nint hashTypeNext(hashTypeIterator *hi, int skipExpiredFields) {\n    /* If expired field access is allowed, don't skip expired fields during iteration */\n    if (server.allow_access_expired)\n        skipExpiredFields = 0;\n\n    hi->expire_time = EB_EXPIRE_TIME_INVALID;\n    if (hi->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl;\n        unsigned char *fptr, *vptr;\n\n        zl = hi->subject->ptr;\n        fptr = hi->fptr;\n        vptr = hi->vptr;\n\n        if (fptr == NULL) {\n            /* Initialize cursor */\n            serverAssert(vptr == NULL);\n            fptr = lpFirst(zl);\n        } else {\n            /* Advance cursor */\n            serverAssert(vptr != NULL);\n            fptr = lpNext(zl, vptr);\n        }\n        if (fptr == NULL) return C_ERR;\n\n        /* Grab pointer to the value (fptr points to the field) */\n        vptr = lpNext(zl, fptr);\n        serverAssert(vptr != NULL);\n\n        /* fptr, vptr now point to the first or next pair */\n        hi->fptr = fptr;\n        hi->vptr = vptr;\n    } else if (hi->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        long long expire_time;\n        unsigned char *zl = hashTypeListpackGetLp(hi->subject);\n        unsigned char *fptr, *vptr, *tptr;\n\n        fptr = hi->fptr;\n        vptr = hi->vptr;\n        tptr = hi->tptr;\n\n        if (fptr == NULL) {\n            /* Initialize cursor */\n            serverAssert(vptr == NULL);\n            fptr = 
lpFirst(zl);\n        } else {\n            /* Advance cursor */\n            serverAssert(tptr != NULL);\n            fptr = lpNext(zl, tptr);\n        }\n        if (fptr == NULL) return C_ERR;\n\n        while (fptr != NULL) {\n            /* Grab pointer to the value (fptr points to the field) */\n            vptr = lpNext(zl, fptr);\n            serverAssert(vptr != NULL);\n\n            tptr = lpNext(zl, vptr);\n            serverAssert(tptr && lpGetIntegerValue(tptr, &expire_time));\n\n            if (!skipExpiredFields || !hashTypeIsExpired(hi->subject, expire_time))\n                break;\n\n            fptr = lpNext(zl, tptr);\n        }\n        if (fptr == NULL) return C_ERR;\n\n        /* fptr, vptr now point to the first or next pair */\n        hi->fptr = fptr;\n        hi->vptr = vptr;\n        hi->tptr = tptr;\n        hi->expire_time = (expire_time != HASH_LP_NO_TTL) ? (uint64_t) expire_time : EB_EXPIRE_TIME_INVALID;\n    } else if (hi->encoding == OBJ_ENCODING_HT) {\n\n        while ((hi->de = dictNext(&hi->di)) != NULL) {\n            Entry *e = dictGetKey(hi->de);\n            hi->expire_time = entryGetExpiry(e);\n            /* this condition still valid if expire_time equals EB_EXPIRE_TIME_INVALID */\n            if (skipExpiredFields && ((mstime_t)hi->expire_time < commandTimeSnapshot()))\n                continue;\n            return C_OK;\n        }\n        return C_ERR;\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n    return C_OK;\n}\n\n/* Get the field or value at iterator cursor, for an iterator on a hash value\n * encoded as a listpack. Prototype is similar to `hashTypeGetFromListpack`. 
*/\nvoid hashTypeCurrentFromListpack(hashTypeIterator *hi, int what,\n                                 unsigned char **vstr,\n                                 unsigned int *vlen,\n                                 long long *vll,\n                                 uint64_t *expireTime)\n{\n    serverAssert(hi->encoding == OBJ_ENCODING_LISTPACK ||\n                 hi->encoding == OBJ_ENCODING_LISTPACK_EX);\n\n    if (what & OBJ_HASH_KEY) {\n        *vstr = lpGetValue(hi->fptr, vlen, vll);\n    } else {\n        *vstr = lpGetValue(hi->vptr, vlen, vll);\n    }\n\n    if (expireTime)\n        *expireTime = hi->expire_time;\n}\n\n/* Get the field or value at iterator cursor, for an iterator on a hash value\n * encoded as a hash table. Prototype is similar to\n * `hashTypeGetFromHashTable`.\n *\n * expireTime - If not NULL, the function returns through it the expire time of\n *              the field, or EB_EXPIRE_TIME_INVALID if no expiry is set.\n */\nvoid hashTypeCurrentFromHashTable(hashTypeIterator *hi, int what, char **str, size_t *len, uint64_t *expireTime) {\n    serverAssert(hi->encoding == OBJ_ENCODING_HT);\n    Entry *e = dictGetKey(hi->de);\n\n    if (what & OBJ_HASH_KEY) {\n        sds field = entryGetField(e);\n        *str = field;\n        *len = sdslen(field);\n    } else {\n        sds val = entryGetValue(e);\n        *str = val;\n        *len = sdslen(val);\n    }\n\n    if (expireTime)\n        *expireTime = hi->expire_time;\n}\n\n/* Higher level function of hashTypeCurrent*() that returns the hash value\n * at current iterator position.\n *\n * The returned element is returned by reference in either *vstr and *vlen if\n * it's returned in string form, or stored in *vll if it's returned as\n * a number.\n *\n * If *vll is populated, *vstr is set to NULL, so the caller can always\n * determine the returned type by checking whether vstr == NULL. 
*/\nvoid hashTypeCurrentObject(hashTypeIterator *hi,\n                           int what,\n                           unsigned char **vstr,\n                           unsigned int *vlen,\n                           long long *vll,\n                           uint64_t *expireTime)\n{\n    if (hi->encoding == OBJ_ENCODING_LISTPACK ||\n        hi->encoding == OBJ_ENCODING_LISTPACK_EX)\n    {\n        *vstr = NULL;\n        hashTypeCurrentFromListpack(hi, what, vstr, vlen, vll, expireTime);\n    } else if (hi->encoding == OBJ_ENCODING_HT) {\n        char *ele;\n        size_t eleLen;\n        hashTypeCurrentFromHashTable(hi, what, &ele, &eleLen, expireTime);\n        *vstr = (unsigned char*) ele;\n        *vlen = eleLen;\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\n/* Return the key or value at the current iterator position as a new\n * SDS string. */\nsds hashTypeCurrentObjectNewSds(hashTypeIterator *hi, int what) {\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vll;\n\n    hashTypeCurrentObject(hi,what,&vstr,&vlen,&vll, NULL);\n    if (vstr) return sdsnewlen(vstr,vlen);\n    return sdsfromlonglong(vll);\n}\n\n/* Return the field-value pair at the current iterator position as a new Entry. 
*/\nEntry *hashTypeCurrentObjectNewEntry(hashTypeIterator *hi, size_t *usable) {\n    char fieldBuf[LONG_STR_SIZE], valueBuf[LONG_STR_SIZE];\n    unsigned char *fieldStr, *valueStr;\n    unsigned int fieldLen, valueLen;\n    long long fieldLl, valueLl;\n    Entry *entry;\n\n    /* Get field */\n    hashTypeCurrentObject(hi, OBJ_HASH_KEY, &fieldStr, &fieldLen, &fieldLl, NULL);\n    if (!fieldStr) {\n        fieldLen = ll2string(fieldBuf, sizeof(fieldBuf), fieldLl);\n        fieldStr = (unsigned char *) fieldBuf;\n    }\n    sds field = sdsnewlen(fieldStr, fieldLen);\n\n    /* Get value */\n    hashTypeCurrentObject(hi, OBJ_HASH_VALUE, &valueStr, &valueLen, &valueLl, NULL);\n    if (!valueStr) {\n        valueLen = ll2string(valueBuf, sizeof(valueBuf), valueLl);\n        valueStr = (unsigned char *) valueBuf;\n    }\n    sds value = sdsnewlen(valueStr, valueLen);\n    int hasExpiry = (hi->expire_time != EB_EXPIRE_TIME_INVALID);\n\n    /* Create entry with field and value, using iterator's expire_time */\n    uint32_t entryFlags = ENTRY_TAKE_VALUE | ((hasExpiry) ? ENTRY_HAS_EXPIRY : 0); \n    entry = entryCreate(field, value, entryFlags, usable);\n    sdsfree(field);  /* entryCreate() doesn't take ownership of field */\n\n    return entry;\n}\n\nstatic kvobj *hashTypeLookupWriteOrCreate(client *c, robj *key) {\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db, key, &link);\n    if (checkType(c, kv, OBJ_HASH)) return NULL;\n\n    if (kv == NULL) {\n        robj *o = createHashObject();\n        kv = dbAddByLink(c->db, key, &o, &link);\n    }\n    return kv;\n}\n\n\nvoid hashTypeConvertListpack(robj *o, int enc) {\n    serverAssert(o->encoding == OBJ_ENCODING_LISTPACK);\n\n    if (enc == OBJ_ENCODING_LISTPACK) {\n        /* Nothing to do... */\n\n    } else if (enc == OBJ_ENCODING_LISTPACK_EX) {\n        unsigned char *p;\n\n        /* Append HASH_LP_NO_TTL to each field name - value pair. 
*/\n        p = lpFirst(o->ptr);\n        while (p != NULL) {\n            p = lpNext(o->ptr, p);\n            serverAssert(p);\n\n            o->ptr = lpInsertInteger(o->ptr, HASH_LP_NO_TTL, p, LP_AFTER, &p);\n            p = lpNext(o->ptr, p);\n        }\n\n        listpackEx *lpt = listpackExCreate();\n        lpt->lp = o->ptr;\n        o->encoding = OBJ_ENCODING_LISTPACK_EX;\n        o->ptr = lpt;\n    } else if (enc == OBJ_ENCODING_HT) {\n        hashTypeIterator hi;\n        dict *dict;\n        int ret;\n\n        hashTypeInitIterator(&hi, o);\n        dict = dictCreate(&entryHashDictType);\n\n        /* Presize the dict to avoid rehashing */\n        dictExpand(dict,hashTypeLength(o, 0));\n\n        size_t usable, *alloc_size = htGetMetadataSize(dict);\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            Entry *entry = hashTypeCurrentObjectNewEntry(&hi, &usable);\n            ret = dictAdd(dict, entry, NULL);\n            if (ret != DICT_OK) {\n                entryFree(entry, NULL); /* Needed for gcc ASAN */\n                hashTypeResetIterator(&hi);  /* Needed for gcc ASAN */\n                serverLogHexDump(LL_WARNING,\"listpack with dup elements dump\",\n                    o->ptr,lpBytes(o->ptr));\n                serverPanic(\"Listpack corruption detected\");\n            }\n            *alloc_size += usable;\n        }\n        hashTypeResetIterator(&hi);\n        zfree(o->ptr);\n        o->encoding = OBJ_ENCODING_HT;\n        o->ptr = dict;\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\n/* db can be NULL to avoid registration in subexpires */\nvoid hashTypeConvertListpackEx(redisDb *db, robj *o, int enc) {\n    serverAssert(o->encoding == OBJ_ENCODING_LISTPACK_EX);\n\n    if (enc == OBJ_ENCODING_LISTPACK_EX) {\n        return;\n    } else if (enc == OBJ_ENCODING_HT) {\n        uint64_t minExpire = EB_EXPIRE_TIME_INVALID;\n        int ret, slot = -1;\n        hashTypeIterator hi;\n        dict *dict;\n        
htMetadataEx *dictExpireMeta;\n        listpackEx *lpt = o->ptr;\n\n        if (db && lpt->meta.trash != 1) {\n            minExpire = hashTypeGetMinExpire(o, 0);\n            slot = getKeySlot(kvobjGetKey(o));\n            estoreRemove(db->subexpires, slot, o);\n        }\n\n        dict = dictCreate(&entryHashDictTypeWithHFE);\n        dictExpand(dict,hashTypeLength(o, 0));\n        dictExpireMeta = htGetMetadataEx(dict);\n\n        /* Fill up dict HFE metadata */\n        dictExpireMeta->hfe = ebCreate();     /* Allocate HFE DS */\n        dictExpireMeta->expireMeta.trash = 1; /* mark as trash (as long as it wasn't ebAdd()-ed) */\n\n        hashTypeInitIterator(&hi, o);\n\n        size_t usable, *alloc_size = &dictExpireMeta->alloc_size;\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            /* Create entry with both field and value */\n            Entry *entry = hashTypeCurrentObjectNewEntry(&hi, &usable);\n            ret = dictAdd(dict, entry, NULL);\n            if (ret != DICT_OK) {\n                entryFree(entry, NULL); /* Needed for gcc ASAN */\n                hashTypeResetIterator(&hi);  /* Needed for gcc ASAN */\n                serverLogHexDump(LL_WARNING,\"listpack with dup elements dump\",\n                                 lpt->lp,lpBytes(lpt->lp));\n                serverPanic(\"Listpack corruption detected\");\n            }\n            *alloc_size += usable;\n\n            if (hi.expire_time != EB_EXPIRE_TIME_INVALID)\n                ebAdd(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, entry, hi.expire_time);\n        }\n        hashTypeResetIterator(&hi);\n        listpackExFree(lpt);\n\n        o->encoding = OBJ_ENCODING_HT;\n        o->ptr = dict;\n\n        if (minExpire != EB_EXPIRE_TIME_INVALID)\n            estoreAdd(db->subexpires, slot, o, minExpire);\n    } else {\n        serverPanic(\"Unknown hash encoding: %d\", enc);\n    }\n}\n\n/* NOTE: db can be NULL (Won't register in global HFE DS) */\nvoid hashTypeConvert(redisDb *db, 
robj *o, int enc) {\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        hashTypeConvertListpack(o, enc);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        hashTypeConvertListpackEx(db, o, enc);\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        serverPanic(\"Not implemented\");\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\n/* This is a helper function for the COPY command.\n * Duplicate a hash object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * The resulting object always has refcount set to 1 */\nrobj *hashTypeDup(kvobj *o, uint64_t *minHashExpire) {\n    robj *hobj;\n    hashTypeIterator hi;\n\n    serverAssert(o->type == OBJ_HASH);\n\n    if(o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = o->ptr;\n        size_t sz = lpBytes(zl);\n        unsigned char *new_zl = zmalloc(sz);\n        memcpy(new_zl, zl, sz);\n        hobj = createObject(OBJ_HASH, new_zl);\n        hobj->encoding = OBJ_ENCODING_LISTPACK;\n    } else if(o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = o->ptr;\n\n        if (lpt->meta.trash == 0)\n            *minHashExpire = ebGetMetaExpTime(&lpt->meta);\n\n        listpackEx *dup = listpackExCreate();\n\n        size_t sz = lpBytes(lpt->lp);\n        dup->lp = lpNew(sz);\n        memcpy(dup->lp, lpt->lp, sz);\n\n        hobj = createObject(OBJ_HASH, dup);\n        hobj->encoding = OBJ_ENCODING_LISTPACK_EX;\n    } else if(o->encoding == OBJ_ENCODING_HT) {\n        htMetadataEx *dictExpireMetaSrc, *dictExpireMetaDst = NULL;\n        dict *d;\n\n        /* If dict doesn't have HFE metadata, then create a new dict without it */\n        if (!isDictWithMetaHFE(o->ptr)) {\n            d = dictCreate(&entryHashDictType);\n        } else {\n            /* Create a new dict with HFE metadata */\n            d = dictCreate(&entryHashDictTypeWithHFE);\n            dictExpireMetaSrc = 
htGetMetadataEx((dict *) o->ptr);\n            dictExpireMetaDst = htGetMetadataEx(d);\n            dictExpireMetaDst->hfe = ebCreate();     /* Allocate HFE DS */\n            dictExpireMetaDst->expireMeta.trash = 1; /* mark as trash (as long as it wasn't ebAdd()-ed) */\n\n            /* Extract the minimum expire time of the source hash (Will be used by caller\n             * to register the new hash in the global subexpires DB) */\n            if (dictExpireMetaSrc->expireMeta.trash == 0)\n                *minHashExpire = ebGetMetaExpTime(&dictExpireMetaSrc->expireMeta);\n        }\n        dictExpand(d, dictSize((const dict*)o->ptr));\n\n        size_t usable, *alloc_size = htGetMetadataSize(d);\n        hashTypeInitIterator(&hi, o);\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            Entry *newEntry;\n            uint64_t expireTime;\n            /* Extract a field-value pair from the original hash object. */\n            char *field, *value;\n            size_t fieldLen, valueLen;\n            hashTypeCurrentFromHashTable(&hi, OBJ_HASH_KEY, &field, &fieldLen, &expireTime);\n            hashTypeCurrentFromHashTable(&hi, OBJ_HASH_VALUE, &value, &valueLen, NULL);\n\n            /* Duplicate field and value as new sds strings */\n            sds newFieldSds = sdsnewlen(field, fieldLen);\n            sds newValueSds = sdsnewlen(value, valueLen);\n            /* Create new entry with field and value, optional expiry. 
*/\n            if (expireTime == EB_EXPIRE_TIME_INVALID) {\n                newEntry = entryCreate(newFieldSds, newValueSds,\n                                       ENTRY_TAKE_VALUE, &usable);\n            } else {\n                newEntry = entryCreate(newFieldSds, newValueSds,\n                                       ENTRY_TAKE_VALUE | ENTRY_HAS_EXPIRY, &usable);\n                ebAdd(&dictExpireMetaDst->hfe, &hashFieldExpireBucketsType, newEntry, expireTime);\n            }\n            sdsfree(newFieldSds); /* (Only the value's ownership is transferred to the entry) */\n\n            /* Add entry to new hash object. */\n            dictAdd(d, newEntry, NULL);  /* no_value=1, so value is NULL */\n            *alloc_size += usable;\n        }\n        hashTypeResetIterator(&hi);\n\n        hobj = createObject(OBJ_HASH, d);\n        hobj->encoding = OBJ_ENCODING_HT;\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n    return hobj;\n}\n\n/* Create a new sds string from the listpack entry. */\nsds hashSdsFromListpackEntry(listpackEntry *e) {\n    return e->sval ? sdsnewlen(e->sval, e->slen) : sdsfromlonglong(e->lval);\n}\n\n/* Reply with bulk string from the listpack entry. */\nvoid hashReplyFromListpackEntry(client *c, listpackEntry *e) {\n    if (e->sval)\n        addReplyBulkCBuffer(c, e->sval, e->slen);\n    else\n        addReplyBulkLongLong(c, e->lval);\n}\n\n/* Return random element from a non-empty hash.\n * 'key' and 'val' will be set to hold the element.\n * The memory in them is not to be freed or modified by the caller.\n * 'val' can be NULL in which case it's not extracted. 
*/\nvoid hashTypeRandomElement(robj *hashobj, unsigned long hashsize, CommonEntry *key, CommonEntry *val) {\n    if (hashobj->encoding == OBJ_ENCODING_HT) {\n        dictEntry *de = dictGetFairRandomKey(hashobj->ptr);\n        Entry *entry = dictGetKey(de);\n        sds field = entryGetField(entry);\n        key->sval = (unsigned char*) field;\n        key->slen = sdslen(field);\n        if (val) {\n            sds s = entryGetValue(entry);\n            val->sval = (unsigned char*)s;\n            val->slen = sdslen(s);\n        }\n    } else if (hashobj->encoding == OBJ_ENCODING_LISTPACK) {\n        lpRandomPair(hashobj->ptr, hashsize, (listpackEntry *) key, (listpackEntry *) val, 2);\n    } else if (hashobj->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        lpRandomPair(hashTypeListpackGetLp(hashobj), hashsize, (listpackEntry *) key,\n                     (listpackEntry *) val, 3);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\n/* Delete all expired fields from the hash and delete the hash if left empty.\n *\n * updateSubexpires - Whether the hash should be updated in the subexpires DB\n *                    with the new expiration time in case expired fields were\n *                    deleted.\n *\n * Return the next expire time of the hash:\n * - 0 if the hash got deleted\n * - EB_EXPIRE_TIME_INVALID if no more fields to expire\n */\nuint64_t hashTypeExpire(redisDb *db, kvobj *o, uint32_t *quota, int updateSubexpires, int activeEx) {\n    uint64_t noExpireLeftRes = EB_EXPIRE_TIME_INVALID;\n\n    /* Collect expired field names for batched subkey notification.\n     * Skip allocation entirely when subkey notifications are disabled. 
*/\n    fieldvec fvexpired;\n    vec *vexpired = isSubkeyNotifyEnabled(NOTIFY_HASH) ?\n                        fieldvecInit(&fvexpired, FIELDS_STACK_SIZE) : NULL;\n\n    OnFieldExpireCtx onFieldExpireCtx = { .hashObj = o, .db = db, .activeEx = activeEx, .vexpired = vexpired };\n    ExpireInfo info = (ExpireInfo) {\n                .maxToExpire = *quota,\n                .now = commandTimeSnapshot(),\n                .ctx = &onFieldExpireCtx,\n                .itemsExpired = 0};\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackExExpire(db, o, &info);\n    } else {\n        serverAssert(o->encoding == OBJ_ENCODING_HT);\n\n        dict *d = o->ptr;\n        htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n\n        info.onExpireItem = onFieldExpire;\n        ebExpire(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, &info);\n    }\n\n    /* Update quota left */\n    *quota -= info.itemsExpired;\n\n    /* In some cases, a field might have been deleted without updating the global DS.\n     * As a result, active-expire might not expire any fields, in such cases,\n     * we don't need to send notifications or perform other operations for this key. */\n    if (info.itemsExpired) {\n        sds keystr = kvobjGetKey(o);\n        robj *key = createStringObject(keystr, sdslen(keystr));\n\n        /* Send subkey notification with all expired fields */\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", key, db->id,\n            vexpired ? (robj**)vecData(vexpired) : NULL, vexpired ? 
vecSize(vexpired) : 0);\n\n        int slot;\n        int deleted = 0;\n\n        if (updateSubexpires) {\n            slot = getKeySlot(keystr);\n            estoreRemove(db->subexpires, slot, o);\n        }\n\n        if (hashTypeLength(o, 0) == 0) {\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, db->id);\n            dbDelete(db, key);\n            noExpireLeftRes = 0;\n            deleted = 1;\n        } else {\n            if ((updateSubexpires) && (info.nextExpireTime != EB_EXPIRE_TIME_INVALID))\n                estoreAdd(db->subexpires, slot, o, info.nextExpireTime);\n        }\n\n        keyModified(NULL, db, key, deleted ? NULL : o, 1);\n        decrRefCount(key);\n    }\n\n    /* Free collected expired fields */\n    if (vexpired) {\n        for (size_t i = 0; i < vecSize(vexpired); i++) {\n            decrRefCount(vecGet(vexpired, i));\n        }\n        vecRelease(vexpired);\n    }\n\n    /* return 0 if hash got deleted, EB_EXPIRE_TIME_INVALID if no more fields\n     * with expiration. Else return next expiration time */\n    return (info.nextExpireTime == EB_EXPIRE_TIME_INVALID) ? noExpireLeftRes : info.nextExpireTime;\n}\n\n/* Delete all expired fields in hash if needed (Currently used only by HRANDFIELD)\n *\n * NOTICE: If we call this function in other places, we should consider the slot\n * migration scenario, where we don't want to delete expired fields. 
See also\n * expireIfNeeded().\n *\n * Return 1 if the entire hash was deleted, 0 otherwise.\n * This function might be costly when there are many expired fields.\n */\nstatic int hashTypeExpireIfNeeded(redisDb *db, kvobj *o) {\n    uint64_t nextExpireTime;\n    uint64_t minExpire = hashTypeGetMinExpire(o, 1 /*accurate*/);\n\n    /* Nothing to expire */\n    if ((mstime_t) minExpire >= commandTimeSnapshot())\n        return 0;\n\n    /* Follow expireIfNeeded() conditions for when not to lazy-expire */\n    if ( (server.loading) ||\n         (server.allow_access_expired) ||\n         (server.masterhost) ||  /* master-client or user-client, don't delete */\n         (isPausedActionsWithUpdate(PAUSE_ACTION_EXPIRE)))\n        return 0;\n\n    /* Take care to expire all the fields */\n    uint32_t quota = UINT32_MAX;\n    nextExpireTime = hashTypeExpire(db, o, &quota, 1, 0);\n    /* return 1 if the entire hash was deleted */\n    return nextExpireTime == 0;\n}\n\n/* Return the next/minimum expiry time of the hash-field.\n * accurate=1 - Return the exact time by looking into the object DS.\n * accurate=0 - Return the minimum expiration time maintained in expireMeta\n *              (Verify it is not trash before using it) which might not be\n *              accurate for optimization reasons.\n *\n * If not found, return EB_EXPIRE_TIME_INVALID\n */\nuint64_t hashTypeGetMinExpire(robj *o, int accurate) {\n    ExpireMeta *expireMeta = NULL;\n\n    if (!accurate) {\n        if (o->encoding == OBJ_ENCODING_LISTPACK) {\n            return EB_EXPIRE_TIME_INVALID;\n        } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n            listpackEx *lpt = o->ptr;\n            expireMeta = &lpt->meta;\n        } else {\n            serverAssert(o->encoding == OBJ_ENCODING_HT);\n\n            dict *d = o->ptr;\n            if (!isDictWithMetaHFE(d))\n                return EB_EXPIRE_TIME_INVALID;\n\n            expireMeta = &htGetMetadataEx(d)->expireMeta;\n        }\n\n        
/* Keep aside next hash-field expiry before updating HFE DS. Verify it is not trash */\n        if (expireMeta->trash == 1)\n            return EB_EXPIRE_TIME_INVALID;\n\n        return ebGetMetaExpTime(expireMeta);\n    }\n\n    /* accurate == 1 */\n\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        return EB_EXPIRE_TIME_INVALID;\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        return listpackExGetMinExpire(o);\n    } else {\n        serverAssert(o->encoding == OBJ_ENCODING_HT);\n\n        dict *d = o->ptr;\n        if (!isDictWithMetaHFE(d))\n            return EB_EXPIRE_TIME_INVALID;\n\n        htMetadataEx *expireMeta = htGetMetadataEx(d);\n        return ebGetNextTimeToExpire(expireMeta->hfe, &hashFieldExpireBucketsType);\n    }\n}\n\nint hashTypeIsFieldsWithExpire(robj *o) {\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        return 0;\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        return EB_EXPIRE_TIME_INVALID != listpackExGetMinExpire(o);\n    } else { /* o->encoding == OBJ_ENCODING_HT */\n        dict *d = o->ptr;\n        /* If dict doesn't hold HFE metadata */\n        if (!isDictWithMetaHFE(d))\n            return 0;\n        htMetadataEx *meta = htGetMetadataEx(d);\n        return ebGetTotalItems(meta->hfe, &hashFieldExpireBucketsType) != 0;\n    }\n}\n\nvoid hashTypeFree(robj *o) {\n    switch (o->encoding) {\n        case OBJ_ENCODING_HT:\n            /* Verify hash is not registered in global HFE ds */\n            if (isDictWithMetaHFE((dict*)o->ptr)) {\n                htMetadataEx *m = htGetMetadataEx((dict*)o->ptr);\n                serverAssert(m->expireMeta.trash == 1);\n            }\n#ifdef DEBUG_ASSERTIONS\n            dictEmpty(o->ptr, NULL);\n            debugServerAssert(*htGetMetadataSize(o->ptr) == 0);\n#endif\n            dictRelease((dict*) o->ptr);\n            break;\n        case OBJ_ENCODING_LISTPACK:\n            lpFree(o->ptr);\n            break;\n        case 
OBJ_ENCODING_LISTPACK_EX:\n            /* Verify hash is not registered in global HFE ds */\n            serverAssert(((listpackEx *) o->ptr)->meta.trash == 1);\n            listpackExFree(o->ptr);\n            break;\n        default:\n            serverPanic(\"Unknown hash encoding type\");\n            break;\n    }\n}\n\nebuckets *hashTypeGetDictMetaHFE(dict *d) {\n    htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n    return &dictExpireMeta->hfe;\n}\n\n/*-----------------------------------------------------------------------------\n * Hash type commands\n *----------------------------------------------------------------------------*/\n\nvoid hsetnxCommand(client *c) {\n    unsigned long hlen;\n    int isHashDeleted;\n    size_t oldsize = 0;\n    kvobj *kv = hashTypeLookupWriteOrCreate(c,c->argv[1]);\n    if (kv == NULL) return;\n\n    if (hashTypeExists(c->db, kv, c->argv[2]->ptr, HFE_LAZY_EXPIRE, &isHashDeleted)) {\n        addReply(c, shared.czero);\n        return;\n    }\n\n    /* The field expired and, in turn, the hash got deleted. Create a new one! 
*/\n    if (isHashDeleted) {\n        robj *o = createHashObject();\n        kv = dbAdd(c->db,c->argv[1],&o);\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n    hashTypeTryConversion(c->db, kv, c->argv, 2, 3);\n    hashTypeSet(c->db, kv, c->argv[2]->ptr, c->argv[3]->ptr, HASH_SET_COPY);\n    addReply(c, shared.cone);\n    keyModified(c,c->db,c->argv[1], kv, 1);\n    hlen = hashTypeLength(kv, 0);\n    updateKeysizesHist(c->db, OBJ_HASH, hlen - 1, hlen);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n    notifyKeyspaceEventWithSubkeys(NOTIFY_HASH,\"hset\",c->argv[1],c->db->id,&c->argv[2],1);\n    KSN_INVALIDATE_KVOBJ(kv);\n    server.dirty++;\n}\n\nvoid hsetCommand(client *c) {\n    int i, created = 0;\n    size_t oldsize = 0;\n    kvobj *kv;\n\n    if ((c->argc % 2) == 1) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    if ((kv = hashTypeLookupWriteOrCreate(c,c->argv[1])) == NULL) return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n    hashTypeTryConversion(c->db, kv, c->argv, 2, c->argc-1);\n\n    for (i = 2; i < c->argc; i += 2)\n        created += !hashTypeSet(c->db, kv, c->argv[i]->ptr, c->argv[i+1]->ptr, HASH_SET_COPY);\n\n    /* HMSET (deprecated) and HSET return value is different. */\n    char *cmdname = c->argv[0]->ptr;\n    if (cmdname[1] == 's' || cmdname[1] == 'S') {\n        /* HSET */\n        addReplyLongLong(c, created);\n    } else {\n        /* HMSET */\n        addReply(c, shared.ok);\n    }\n    keyModified(c,c->db,c->argv[1],kv,1);\n    unsigned long l = hashTypeLength(kv, 0);\n    updateKeysizesHist(c->db, OBJ_HASH, l - created, l);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n\n    /* Collect field pointers for subkey notification. Fields are at argv[2,4,6...]. 
*/\n    int numfields = (c->argc - 2) / 2;\n    fieldvec fvset;\n    vec *vset = fieldvecInit(&fvset, numfields);\n    for (i = 0; i < numfields; i++) {\n        vecPush(vset, c->argv[2 + i * 2]);\n    }\n    notifyKeyspaceEventWithSubkeys(NOTIFY_HASH,\"hset\",c->argv[1],c->db->id,(robj**)vecData(vset),numfields);\n    vecRelease(vset);\n    KSN_INVALIDATE_KVOBJ(kv);\n    server.dirty += (c->argc - 2)/2;\n}\n\n/* Parse expire time from argument and do boundary checks. */\nstatic int parseExpireTime(client *c, robj *o, int unit, long long basetime,\n                           long long *expire)\n{\n    long long val;\n\n    /* Read the expiry time from command */\n    if (getLongLongFromObjectOrReply(c, o, &val, NULL) != C_OK)\n        return C_ERR;\n\n    if (val < 0) {\n        addReplyError(c,\"invalid expire time, must be >= 0\");\n        return C_ERR;\n    }\n\n    if (unit == UNIT_SECONDS) {\n        if (val > (long long) HFE_MAX_ABS_TIME_MSEC / 1000) {\n            addReplyErrorExpireTime(c);\n            return C_ERR;\n        }\n        val *= 1000;\n    }\n\n    if (val > (long long) HFE_MAX_ABS_TIME_MSEC - basetime) {\n        addReplyErrorExpireTime(c);\n        return C_ERR;\n    }\n    val += basetime;\n    *expire = val;\n    return C_OK;\n}\n\n/* Flags that are used as part of HGETEX and HSETEX commands. 
*/\n#define HFE_EX       (1<<0) /* Expiration time in seconds */\n#define HFE_PX       (1<<1) /* Expiration time in milliseconds */\n#define HFE_EXAT     (1<<2) /* Expiration time in unix seconds */\n#define HFE_PXAT     (1<<3) /* Expiration time in unix milliseconds */\n#define HFE_PERSIST  (1<<4) /* Persist fields */\n#define HFE_KEEPTTL  (1<<5) /* Do not discard field ttl on set op */\n#define HFE_FXX      (1<<6) /* Set fields if all the fields already exist */\n#define HFE_FNX      (1<<7) /* Set fields if none of the fields exist */\n\n/* Command types for unified hash argument parser */\n#define HASH_CMD_HGETEX 0\n#define HASH_CMD_HSETEX 1\n\n/* Parse hash field expiration command arguments for both HGETEX and HSETEX.\n * HGETEX <key> [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|PERSIST]\n *              FIELDS <numfields> field [field ...]\n * HSETEX <key> [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\n *              [FXX|FNX] FIELDS <numfields> field value [field value ...]\n */\nstatic int parseHashFieldExpireArgs(client *c, int *flags,\n                                    long long *expire_time, int *expire_time_pos,\n                                    int *first_field_pos, int *field_count,\n                                    int command_type) {\n    *flags = 0;\n    *first_field_pos = -1;\n    *field_count = -1;\n    *expire_time_pos = -1;\n\n    for (int i = 2; i < c->argc; i++) {\n        if (!strcasecmp(c->argv[i]->ptr, \"fields\")) {\n            /* Ensure only one FIELDS argument is provided */\n            if (*first_field_pos != -1) {\n                addReplyError(c, \"FIELDS keyword specified multiple times\");\n                return C_ERR;\n            }\n\n            int args_per_field = (command_type == HASH_CMD_HSETEX) ? 
2 : 1;\n            long val;\n            /* Ensure we have at least the numfields argument */\n            if (i + 1 >= c->argc) {\n                addReplyErrorArity(c);\n                return C_ERR;\n            }\n\n            if (getRangeLongFromObjectOrReply(c, c->argv[i + 1], 1, INT_MAX, &val,\n                                              \"invalid number of fields\") != C_OK)\n                return C_ERR;\n\n            *first_field_pos = i + 2;\n            *field_count = (int) val;\n\n            /* Validate field count based on command type */\n            long long required_args = *first_field_pos + ((long long)*field_count * args_per_field);\n            if (required_args > c->argc) {\n                addReplyError(c, \"wrong number of arguments\");\n                return C_ERR;\n            }\n\n            /* Skip over numfields and all field-value pairs\n             * Set i to the last position of the FIELDS block, loop will increment past it */\n            i = *first_field_pos + (*field_count * args_per_field) - 1;\n            continue;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"EX\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_KEEPTTL | HFE_PERSIST))\n                goto err_expiration;\n\n            if (i >= c->argc - 1)\n                goto err_missing_expire;\n\n            *flags |= HFE_EX;\n            i++;\n            if (parseExpireTime(c, c->argv[i], UNIT_SECONDS,\n                                commandTimeSnapshot(), expire_time) != C_OK)\n                return C_ERR;\n\n            *expire_time_pos = i;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"PX\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_KEEPTTL | HFE_PERSIST))\n                goto err_expiration;\n\n            if (i >= c->argc - 1)\n                goto err_missing_expire;\n\n            *flags |= HFE_PX;\n            i++;\n            if (parseExpireTime(c, c->argv[i], 
UNIT_MILLISECONDS,\n                                commandTimeSnapshot(), expire_time) != C_OK)\n                return C_ERR;\n\n            *expire_time_pos = i;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"EXAT\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_KEEPTTL | HFE_PERSIST))\n                goto err_expiration;\n\n            if (i >= c->argc - 1)\n                goto err_missing_expire;\n\n            *flags |= HFE_EXAT;\n            i++;\n            if (parseExpireTime(c, c->argv[i], UNIT_SECONDS, 0, expire_time) != C_OK)\n                return C_ERR;\n\n            *expire_time_pos = i;\n        } else if (!strcasecmp(c->argv[i]->ptr, \"PXAT\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_KEEPTTL | HFE_PERSIST))\n                goto err_expiration;\n\n            if (i >= c->argc - 1)\n                goto err_missing_expire;\n\n            *flags |= HFE_PXAT;\n            i++;\n            if (parseExpireTime(c, c->argv[i], UNIT_MILLISECONDS, 0,\n                                expire_time) != C_OK)\n                return C_ERR;\n\n            *expire_time_pos = i;\n        } else if (command_type == HASH_CMD_HGETEX && !strcasecmp(c->argv[i]->ptr, \"PERSIST\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_PERSIST))\n                goto err_expiration;\n            *flags |= HFE_PERSIST;\n        } else if (command_type == HASH_CMD_HSETEX && !strcasecmp(c->argv[i]->ptr, \"KEEPTTL\")) {\n            if (*flags & (HFE_EX | HFE_EXAT | HFE_PX | HFE_PXAT | HFE_KEEPTTL))\n                goto err_expiration;\n            *flags |= HFE_KEEPTTL;\n        } else if (command_type == HASH_CMD_HSETEX && !strcasecmp(c->argv[i]->ptr, \"FXX\")) {\n            if (*flags & (HFE_FXX | HFE_FNX))\n                goto err_condition;\n            *flags |= HFE_FXX;\n        } else if (command_type == HASH_CMD_HSETEX && !strcasecmp(c->argv[i]->ptr, \"FNX\")) {\n            
if (*flags & (HFE_FXX | HFE_FNX))\n                goto err_condition;\n            *flags |= HFE_FNX;\n        } else {\n            addReplyErrorFormat(c, \"unknown argument: %s\", (char*) c->argv[i]->ptr);\n            return C_ERR;\n        }\n    }\n\n    /* Ensure FIELDS is specified */\n    if (*first_field_pos == -1) {\n        addReplyError(c, \"missing FIELDS argument\");\n        return C_ERR;\n    }\n\n    return C_OK;\n\nerr_missing_expire:\n    addReplyError(c, \"missing expire time\");\n    return C_ERR;\nerr_condition:\n    addReplyError(c, \"Only one of FXX or FNX arguments can be specified\");\n    return C_ERR;\nerr_expiration:\n    if (command_type == HASH_CMD_HSETEX) {\n        addReplyError(c, \"Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments can be specified\");\n    } else {\n        addReplyError(c, \"Only one of EX, PX, EXAT, PXAT or PERSIST arguments can be specified\");\n    }\n    return C_ERR;\n}\n\n/* Set the value of one or more fields of a given hash key, and optionally set\n * their expiration.\n *\n * HSETEX key\n *  [FNX | FXX]\n *  [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | KEEPTTL]\n *  FIELDS <numfields> field value [field value...]\n *\n * Reply:\n *   Integer reply: 0 if no fields were set (due to FXX/FNX args)\n *   Integer reply: 1 if all the fields were set\n */\nvoid hsetexCommand(client *c) {\n    int flags = 0, first_field_pos = 0, field_count = 0, expire_time_pos = -1;\n    int set_expiry;\n    long long expire_time = EB_EXPIRE_TIME_INVALID;\n    int64_t oldlen, newlen;\n    HashTypeSetEx setex;\n    dictEntryLink link;\n    size_t oldsize = 0;\n\n    if (parseHashFieldExpireArgs(c, &flags, &expire_time, &expire_time_pos,\n                                 &first_field_pos, &field_count, HASH_CMD_HSETEX) != C_OK)\n        return;\n\n    kvobj *o = lookupKeyWriteWithLink(c->db, c->argv[1], &link);\n    if (checkType(c, o, OBJ_HASH))\n        return;\n\n    if (!o) {\n      
  if (flags & HFE_FXX) {\n            addReplyLongLong(c, 0);\n            return;\n        }\n        o = createHashObject();\n        dbAddByLink(c->db, c->argv[1], &o, &link);\n    }\n    oldlen = (int64_t) hashTypeLength(o, 0);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n\n    /* Track fields for subkey notifications by event type. */\n    fieldvec fvexpired, fvset, fvdeleted, fvupdated;\n    vec *vexpired = fieldvecInit(&fvexpired, field_count);\n    vec *vset = fieldvecInit(&fvset, field_count);\n    vec *vdeleted = fieldvecInit(&fvdeleted, field_count);\n    vec *vupdated = fieldvecInit(&fvupdated, field_count);\n\n    if (flags & (HFE_FXX | HFE_FNX)) {\n        int found = 0;\n        for (int i = 0; i < field_count; i++) {\n            sds field = c->argv[first_field_pos + (i * 2)]->ptr;\n            unsigned char *vstr = NULL;\n            unsigned int vlen = UINT_MAX;\n            long long vll = LLONG_MAX;\n            const int opt = HFE_LAZY_NO_NOTIFICATION |\n                            HFE_LAZY_NO_SIGNAL |\n                            HFE_LAZY_AVOID_HASH_DEL |\n                            HFE_LAZY_NO_UPDATE_KEYSIZES |\n                            HFE_LAZY_NO_UPDATE_ALLOCSIZES;\n\n            GetFieldRes res = hashTypeGetValue(c->db, o, field, &vstr, &vlen, &vll, opt, NULL);\n            int exists = (res == GETF_OK);\n            if (res == GETF_EXPIRED) {\n                vecPush(vexpired, c->argv[first_field_pos + (i * 2)]);\n            }\n            found += exists;\n\n            /* Check for early exit if the condition is already invalid. 
*/\n            if (((flags & HFE_FXX) && !exists) ||\n                ((flags & HFE_FNX) && exists))\n                break;\n        }\n\n        int all_exists = (found == field_count);\n        int non_exists = (found == 0);\n\n        if (((flags & HFE_FNX) && !non_exists) ||\n            ((flags & HFE_FXX) && !all_exists))\n        {\n            addReplyLongLong(c, 0);\n            goto out;\n        }\n    }\n    hashTypeTryConversion(c->db, o,c->argv, first_field_pos, c->argc - 1);\n\n    /* Check if we will set the expiration time. */\n    set_expiry = flags & (HFE_EX | HFE_PX | HFE_EXAT | HFE_PXAT);\n    if (set_expiry)\n        hashTypeSetExInit(c->argv[1], o, c, c->db, 0, &setex);\n\n    for (int i = 0; i < field_count; i++) {\n        sds field = c->argv[first_field_pos + (i * 2)]->ptr;\n        sds value = c->argv[first_field_pos + (i * 2) + 1]->ptr;\n\n        int opt = HASH_SET_COPY;\n        /* If we are going to set the expiration time later, no need to discard\n         * it as part of the set operation now. */\n        if (flags & (HFE_EX | HFE_PX | HFE_EXAT | HFE_PXAT | HFE_KEEPTTL))\n            opt |= HASH_SET_KEEP_TTL;\n\n        hashTypeSet(c->db, o, field, value, opt);\n        vecPush(vset, c->argv[first_field_pos + (i * 2)]);\n        /* Update the expiration time. */\n        if (set_expiry) {\n            int ret = hashTypeSetEx(o, field, expire_time, &setex);\n            if (ret == HSETEX_OK) {\n                vecPush(vupdated, c->argv[first_field_pos + (i * 2)]);\n            } else if (ret == HSETEX_DELETED) {\n                vecPush(vdeleted, c->argv[first_field_pos + (i * 2)]);\n            }\n        }\n    }\n\n    if (set_expiry)\n        hashTypeSetExDone(&setex);\n\n    server.dirty += field_count;\n\n    if (vecSize(vdeleted)) {\n        /* If fields are deleted because the timestamp is in the past, HDELs\n         * are already propagated. No need to propagate the command itself. 
*/\n        preventCommandPropagation(c);\n    } else if (set_expiry && !(flags & HFE_PXAT)) {\n        /* Propagate as 'HSETEX <key> PXAT ..' if there is an EX/EXAT/PX flag. */\n\n        /* Replace EX/EXAT/PX with PXAT */\n        rewriteClientCommandArgument(c, expire_time_pos - 1, shared.pxat);\n        /* Replace the timestamp with a unix timestamp in milliseconds. */\n        robj *expire = createStringObjectFromLongLong(expire_time);\n        rewriteClientCommandArgument(c, expire_time_pos, expire);\n        decrRefCount(expire);\n    }\n\n    addReplyLongLong(c, 1);\n\nout:\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    /* Emit keyspace notifications based on field expiry, mutation, or key deletion */\n    if (vecSize(vset) || vecSize(vexpired)) {\n        newlen = (int64_t) hashTypeLength(o, 0); \n        keyModified(c, c->db, c->argv[1], o, 1);\n        if (vecSize(vexpired)) {\n            notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", c->argv[1],\n                                           c->db->id, (robj**)vecData(vexpired), vecSize(vexpired));\n        }\n        if (vecSize(vset)) {\n            notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hset\", c->argv[1],\n                                           c->db->id, (robj**)vecData(vset), vecSize(vset));\n            if (vecSize(vdeleted)) {\n                notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hdel\", c->argv[1],\n                                               c->db->id, (robj**)vecData(vdeleted), vecSize(vdeleted));\n            } else if (vecSize(vupdated)) {\n                notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpire\", c->argv[1],\n                                               c->db->id, (robj**)vecData(vupdated), vecSize(vupdated));\n            }\n        }\n        \n        KSN_INVALIDATE_KVOBJ(o);\n        \n        /* Key may become empty due to lazy expiry in 
hashTypeGetValue()\n         * or the new expiration time is in the past.*/\n        if (newlen == 0) {\n            newlen = -1;\n            /* Del key but don't update KEYSIZES. else it will decr wrong bin in histogram */\n            dbDeleteSkipKeysizesUpdate(c->db, c->argv[1]);\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n        }\n        if (oldlen != newlen)\n            updateKeysizesHist(c->db, OBJ_HASH, oldlen, newlen);\n    }\n\n    vecRelease(vexpired);\n    vecRelease(vset);\n    vecRelease(vdeleted);\n    vecRelease(vupdated);\n}\n\nvoid hincrbyCommand(client *c) {\n    long long value, incr, oldvalue;\n    kvobj *o;\n    sds new;\n    unsigned char *vstr;\n    unsigned int vlen;\n    size_t oldsize = 0;\n\n    if (getLongLongFromObjectOrReply(c,c->argv[3],&incr,NULL) != C_OK) return;\n    if ((o = hashTypeLookupWriteOrCreate(c,c->argv[1])) == NULL) return;\n\n    GetFieldRes res = hashTypeGetValue(c->db,o,c->argv[2]->ptr,&vstr,&vlen,&value,\n                                       HFE_LAZY_EXPIRE, NULL);\n    if (res == GETF_OK) {\n        if (vstr) {\n            if (string2ll((char*)vstr,vlen,&value) == 0) {\n                addReplyError(c,\"hash value is not an integer\");\n                return;\n            }\n        } /* Else hashTypeGetValue() already stored it into &value */\n    } else if ((res == GETF_NOT_FOUND) || (res == GETF_EXPIRED)) {\n        value = 0;\n        unsigned long l = hashTypeLength(o, 0);\n        updateKeysizesHist(c->db, OBJ_HASH, l, l + 1);\n    } else {\n        /* Field expired and in turn hash deleted. Create new one! 
*/\n        o = createHashObject();\n        dbAdd(c->db,c->argv[1],&o);\n        value = 0;\n        updateKeysizesHist(c->db, OBJ_HASH, 0, 1);\n    }\n\n    oldvalue = value;\n    if ((incr < 0 && oldvalue < 0 && incr < (LLONG_MIN-oldvalue)) ||\n        (incr > 0 && oldvalue > 0 && incr > (LLONG_MAX-oldvalue))) {\n        addReplyError(c,\"increment or decrement would overflow\");\n        return;\n    }\n    value += incr;\n    new = sdsfromlonglong(value);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    hashTypeSet(c->db, o,c->argv[2]->ptr,new,HASH_SET_TAKE_VALUE | HASH_SET_KEEP_TTL);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    addReplyLongLong(c,value);\n    keyModified(c,c->db,c->argv[1], o, 1);\n    notifyKeyspaceEventWithSubkeys(NOTIFY_HASH,\"hincrby\",c->argv[1],c->db->id,&c->argv[2],1);\n    KSN_INVALIDATE_KVOBJ(o);\n    server.dirty++;\n}\n\nvoid hincrbyfloatCommand(client *c) {\n    long double value, incr;\n    long long ll;\n    kvobj *o;\n    sds new;\n    unsigned char *vstr;\n    unsigned int vlen;\n    size_t oldsize = 0;\n\n    if (getLongDoubleFromObjectOrReply(c,c->argv[3],&incr,NULL) != C_OK) return;\n    if (isnan(incr) || isinf(incr)) {\n        addReplyError(c,\"value is NaN or Infinity\");\n        return;\n    }\n    if ((o = hashTypeLookupWriteOrCreate(c,c->argv[1])) == NULL) return;\n    GetFieldRes res = hashTypeGetValue(c->db, o,c->argv[2]->ptr,&vstr,&vlen,&ll,\n                                       HFE_LAZY_EXPIRE, NULL);\n    if (res == GETF_OK) {\n        if (vstr) {\n            if (string2ld((char*)vstr,vlen,&value) == 0) {\n                addReplyError(c,\"hash value is not a float\");\n                return;\n            }\n        } else {\n            value = (long double)ll;\n        }\n    } else if ((res == GETF_NOT_FOUND) || (res == GETF_EXPIRED)) {\n        value = 0;\n        unsigned 
long l = hashTypeLength(o, 0);\n        updateKeysizesHist(c->db, OBJ_HASH, l, l + 1);\n    } else {\n        /* Field expired and in turn hash deleted. Create new one! */\n        o = createHashObject();\n        dbAdd(c->db, c->argv[1], &o);\n        value = 0;\n        updateKeysizesHist(c->db, OBJ_HASH, 0, 1);\n    }\n\n    value += incr;\n    if (isnan(value) || isinf(value)) {\n        addReplyError(c,\"increment would produce NaN or Infinity\");\n        return;\n    }\n\n    char buf[MAX_LONG_DOUBLE_CHARS];\n    int len = ld2string(buf,sizeof(buf),value,LD_STR_HUMAN);\n    new = sdsnewlen(buf,len);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    hashTypeSet(c->db, o,c->argv[2]->ptr,new,HASH_SET_TAKE_VALUE | HASH_SET_KEEP_TTL);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    addReplyBulkCBuffer(c,buf,len);\n    keyModified(c,c->db,c->argv[1],o,1);\n    notifyKeyspaceEventWithSubkeys(NOTIFY_HASH,\"hincrbyfloat\",c->argv[1],c->db->id,&c->argv[2],1);\n    KSN_INVALIDATE_KVOBJ(o);\n    server.dirty++;\n\n    /* Always replicate HINCRBYFLOAT as an HSETEX command with the final value\n     * in order to make sure that differences in float precision or formatting\n     * will not create differences in replicas or after an AOF restart.\n     * The KEEPTTL flag is used to make sure the field TTL is preserved. 
*/\n    robj *newobj;\n    newobj = createRawStringObject(buf,len);\n    rewriteClientCommandVector(c, 7, shared.hsetex, c->argv[1], shared.keepttl,\n                        shared.fields, shared.integers[1], c->argv[2], newobj);\n    decrRefCount(newobj);\n}\n\nstatic GetFieldRes addHashFieldToReply(client *c, kvobj *o, sds field, int hfeFlags) {\n    if (o == NULL) {\n        addReplyNull(c);\n        return GETF_NOT_FOUND;\n    }\n\n    unsigned char *vstr = NULL;\n    unsigned int vlen = UINT_MAX;\n    long long vll = LLONG_MAX;\n\n    GetFieldRes res = hashTypeGetValue(c->db, o, field, &vstr, &vlen, &vll, hfeFlags, NULL);\n    if (res == GETF_OK) {\n        if (vstr) {\n            addReplyBulkCBuffer(c, vstr, vlen);\n        } else {\n            addReplyBulkLongLong(c, vll);\n        }\n    } else {\n        addReplyNull(c);\n    }\n    return res;\n}\n\nvoid hgetCommand(client *c) {\n    kvobj *o;\n\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    addHashFieldToReply(c, o, c->argv[2]->ptr, HFE_LAZY_EXPIRE);\n}\n\nvoid hmgetCommand(client *c) {\n    GetFieldRes res = GETF_OK;\n    int i, deleted = 0;\n\n    /* Don't abort when the key cannot be found. Non-existing keys are empty\n     * hashes, where HMGET should respond with a series of null bulks. */\n    kvobj *o = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c,o,OBJ_HASH)) return;\n\n    /* Track expired fields for subkey notification. 
*/\n    fieldvec fvexpired;\n    vec *vexpired = fieldvecInit(&fvexpired, c->argc-2);\n\n    addReplyArrayLen(c, c->argc-2);\n    for (i = 2; i < c->argc ; i++) {\n        if (!deleted) {\n            res = addHashFieldToReply(c, o, c->argv[i]->ptr, HFE_LAZY_NO_NOTIFICATION);\n            if (res == GETF_EXPIRED) {\n                vecPush(vexpired, c->argv[i]);\n            }\n            deleted += (res == GETF_EXPIRED_HASH);\n        } else {\n            /* If hash got lazy expired since all fields are expired (o is invalid),\n             * then fill the rest with trivial nulls and return. */\n            addReplyNull(c);\n        }\n    }\n\n    if (vecSize(vexpired)) {\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vexpired), vecSize(vexpired));\n    }\n    if (deleted)\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n\n    vecRelease(vexpired);\n}\n\n/* Get and delete the value of one or more fields of a given hash key.\n * HGETDEL <key> FIELDS <numfields> field1 field2 ...\n * Reply: list of the value associated with each field or nil if the field\n *        doesn’t exist.\n */\nvoid hgetdelCommand(client *c) {\n    int res = 0, hfe = 0;\n    int64_t oldlen = -1; /* not exists as long as it is not set */\n    long num_fields = 0;\n    size_t oldsize = 0;\n\n    kvobj *o = lookupKeyWrite(c->db, c->argv[1]);\n    if (checkType(c, o, OBJ_HASH))\n        return;\n\n    if (strcasecmp(c->argv[2]->ptr, \"FIELDS\") != 0) {\n        addReplyError(c, \"Mandatory argument FIELDS is missing or not at the right position\");\n        return;\n    }\n\n    /* Read number of fields */\n    if (getRangeLongFromObjectOrReply(c, c->argv[3], 1, LONG_MAX, &num_fields,\n                                      \"Number of fields must be a positive integer\") != C_OK)\n        return;\n\n    /* Verify `numFields` is consistent with number of arguments 
*/\n    if (num_fields != c->argc - 4) {\n        addReplyError(c, \"The `numfields` parameter must match the number of arguments\");\n        return;\n    }\n\n    /* Hash field expiration is optimized to avoid frequently updating the global\n     * HFE DS for each field deletion. Eventually, active expiration will run and\n     * update or remove the hash from the global HFE DS gracefully. Nevertheless,\n     * the \"subexpiry\" statistic might report a wrong number of hashes with HFE\n     * if the deleted field is the last one with expiration. The following logic\n     * checks whether this is the last field with expiration and, if so, removes\n     * the hash from the global HFE DS. */\n    if (o) {\n        hfe = hashTypeIsFieldsWithExpire(o);\n        oldlen = hashTypeLength(o, 0);\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(o);\n    }\n\n    /* Track fields for subkey notifications. */\n    fieldvec fvexpired, fvdeleted;\n    vec *vexpired = fieldvecInit(&fvexpired, num_fields);\n    vec *vdeleted = fieldvecInit(&fvdeleted, num_fields);\n\n    addReplyArrayLen(c, num_fields);\n    for (int i = 4; i < c->argc; i++) {\n        const int flags = HFE_LAZY_NO_NOTIFICATION |\n                          HFE_LAZY_NO_SIGNAL |\n                          HFE_LAZY_AVOID_HASH_DEL |\n                          HFE_LAZY_NO_UPDATE_KEYSIZES |\n                          HFE_LAZY_NO_UPDATE_ALLOCSIZES;\n        res = addHashFieldToReply(c, o, c->argv[i]->ptr, flags);\n        if (res == GETF_EXPIRED) {\n            vecPush(vexpired, c->argv[i]);\n        }\n        /* Try to delete only if it's found and not expired lazily. */\n        if (res == GETF_OK) {\n            vecPush(vdeleted, c->argv[i]);\n            serverAssert(hashTypeDelete(o, c->argv[i]->ptr) == 1);\n        }\n    }\n\n    /* Return if no modification has been made. 
*/\n    if (vecSize(vexpired) == 0 && vecSize(vdeleted) == 0) {\n        vecRelease(vexpired);\n        vecRelease(vdeleted);\n        return;\n    }\n\n    int64_t newlen = (int64_t) hashTypeLength(o, 0);\n    /* del key if become empty */\n    int delete_key = (newlen == 0);\n    /* update new len for keysizes histogram */\n    int64_t hist_newlen = delete_key ? -1 : newlen;\n    if (oldlen != hist_newlen)\n        updateKeysizesHist(c->db, OBJ_HASH, oldlen, hist_newlen);\n    /* update memory tracking */\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    /* is it last HFE */\n    if (!delete_key && hfe && (hashTypeIsFieldsWithExpire(o) == 0))\n        estoreRemove(c->db->subexpires, getKeySlot(c->argv[1]->ptr), o);\n    \n    keyModified(c, c->db, c->argv[1], o, 1);\n\n    if (vecSize(vexpired)) {\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vexpired), vecSize(vexpired));\n    }\n    if (vecSize(vdeleted)) {\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hdel\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vdeleted), vecSize(vdeleted));\n        server.dirty += vecSize(vdeleted);\n\n        /* Propagate as HDEL command.\n         * Orig: HGETDEL <key> FIELDS <numfields> field1 field2 ...\n         * Repl: HDEL <key> field1 field2 ... */\n        rewriteClientCommandArgument(c, 0, shared.hdel);\n        rewriteClientCommandArgument(c, 2, NULL);  /* Delete FIELDS arg */\n        rewriteClientCommandArgument(c, 2, NULL);  /* Delete <numfields> arg */\n    }\n\n    vecRelease(vexpired);\n    vecRelease(vdeleted);\n    KSN_INVALIDATE_KVOBJ(o);\n\n    /* Key may have become empty because of deleting fields or lazy expire. */\n    if (delete_key) {\n        /* Del key but don't update KEYSIZES. 
else it will decr wrong bin in histogram */\n        dbDeleteSkipKeysizesUpdate(c->db, c->argv[1]);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n    }\n}\n\n/* Get the value of one or more fields of a given hash key and optionally set \n * their expiration.\n *\n * HGETEX <key>\n *   [EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT unix-time-milliseconds | PERSIST]\n *   FIELDS <numfields> field1 field2 ...\n *\n * Reply: list of the value associated with each field or nil if the field\n *        doesn’t exist.\n */\nvoid hgetexCommand(client *c) {\n    int parse_flags = 0, expire_time_pos = -1, first_field_pos = -1, num_fields = -1;\n    long long expire_time = 0;\n    int64_t oldlen = 0, newlen = -1;\n    HashTypeSetEx setex;\n    size_t oldsize = 0;\n\n    kvobj *o = lookupKeyWrite(c->db, c->argv[1]);\n    if (checkType(c, o, OBJ_HASH))\n        return;\n\n    /* Parse arguments using flexible parser */\n    if (parseHashFieldExpireArgs(c, &parse_flags, &expire_time, &expire_time_pos, &first_field_pos, &num_fields, HASH_CMD_HGETEX) != C_OK)\n        return;\n\n    /* Non-existing keys and empty hashes are the same thing. Reply null if the\n     * key does not exist.*/\n    if (!o) {\n        addReplyArrayLen(c, num_fields);\n        for (int i = 0; i < num_fields; i++)\n            addReplyNull(c);\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    oldlen = hashTypeLength(o, 0);\n    if (parse_flags)\n        hashTypeSetExInit(c->argv[1], o, c, c->db, 0, &setex);\n\n    /* Track fields for subkey notifications by event type. 
*/\n    fieldvec fvexpired, fvdeleted, fvupdated;\n    vec *vexpired = fieldvecInit(&fvexpired, num_fields);\n    vec *vdeleted = fieldvecInit(&fvdeleted, num_fields);\n    vec *vupdated = fieldvecInit(&fvupdated, num_fields);\n\n    addReplyArrayLen(c, num_fields);\n    for (int i = first_field_pos; i < first_field_pos + num_fields; i++) {\n        const int flags = HFE_LAZY_NO_NOTIFICATION |\n                          HFE_LAZY_NO_SIGNAL |\n                          HFE_LAZY_AVOID_HASH_DEL |\n                          HFE_LAZY_NO_UPDATE_KEYSIZES |\n                          HFE_LAZY_NO_UPDATE_ALLOCSIZES;\n        sds field = c->argv[i]->ptr;\n        int res = addHashFieldToReply(c, o, c->argv[i]->ptr, flags);\n        if (res == GETF_EXPIRED) {\n            vecPush(vexpired, c->argv[i]);\n        }\n\n        /* Set expiration only if the field exists and not expired lazily. */\n        if (res == GETF_OK && parse_flags) {\n            if (parse_flags & HFE_PERSIST)\n                expire_time = EB_EXPIRE_TIME_INVALID;\n\n            res = hashTypeSetEx(o, field, expire_time, &setex);\n            if (res == HSETEX_DELETED) {\n                vecPush(vdeleted, c->argv[i]);\n            } else if (res == HSETEX_OK) {\n                vecPush(vupdated, c->argv[i]);\n            }\n        }\n    }\n\n    if (parse_flags)\n        hashTypeSetExDone(&setex);\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n\n    /* Exit early if no modification has been made. */\n    if (vecSize(vexpired) == 0 && vecSize(vdeleted) == 0 && vecSize(vupdated) == 0) {\n        vecRelease(vexpired);\n        vecRelease(vdeleted);\n        vecRelease(vupdated);\n        return;\n    }\n\n    server.dirty += vecSize(vdeleted) + vecSize(vupdated);\n    keyModified(c, c->db, c->argv[1], o, 1);\n\n    /* This command will never be propagated as it is. 
It will be propagated as\n     * HDELs when fields are lazily expired or deleted, if the new timestamp is\n     * in the past. HDELs will be emitted as part of addHashFieldToReply()\n     * or hashTypeSetEx() in this case.\n     *\n     * If the PERSIST flag is used, it will be propagated as an HPERSIST command.\n     * If EX/EXAT/PX/PXAT flags are used, it will be replicated as HPEXPIREAT.\n     */\n    if (vecSize(vexpired)) {\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpired\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vexpired), vecSize(vexpired));\n    }\n    if (vecSize(vupdated)) {\n        /* Build canonical command for propagation */\n        int canonical_argc;\n        robj **canonical_argv;\n        int idx = 0;\n\n        if (parse_flags & HFE_PERSIST) {\n            notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hpersist\", c->argv[1],\n                                           c->db->id, (robj**)vecData(vupdated), vecSize(vupdated));\n            /* Build canonical HPERSIST command: HPERSIST key FIELDS numfields field1 field2 ... */\n            canonical_argc = 4 + num_fields;\n            canonical_argv = zmalloc(sizeof(robj*) * canonical_argc);\n            canonical_argv[idx++] = shared.hpersist;\n            incrRefCount(shared.hpersist);\n            canonical_argv[idx++] = c->argv[1]; /* key */\n            incrRefCount(c->argv[1]);\n        } else {\n            notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpire\", c->argv[1],\n                                           c->db->id, (robj**)vecData(vupdated), vecSize(vupdated));\n            /* Build canonical HPEXPIREAT command: HPEXPIREAT key timestamp FIELDS numfields field1 field2 ... 
*/\n            canonical_argc = 5 + num_fields;\n            canonical_argv = zmalloc(sizeof(robj*) * canonical_argc);\n            canonical_argv[idx++] = shared.hpexpireat;\n            incrRefCount(shared.hpexpireat);\n            canonical_argv[idx++] = c->argv[1]; /* key */\n            incrRefCount(c->argv[1]);\n            canonical_argv[idx++] = createStringObjectFromLongLong(expire_time); /* timestamp */\n        }\n\n        canonical_argv[idx++] = shared.fields;\n        incrRefCount(shared.fields);\n        canonical_argv[idx++] = createStringObjectFromLongLong(num_fields);\n        for (int i = 0; i < num_fields; i++) {\n            canonical_argv[idx++] = c->argv[first_field_pos + i];\n            incrRefCount(c->argv[first_field_pos + i]);\n        }\n\n        replaceClientCommandVector(c, canonical_argc, canonical_argv);\n    } else if (vecSize(vdeleted)) {\n        /* If we are here, fields are deleted because new timestamp was in the\n         * past. HDELs are already propagated as part of hashTypeSetEx(). 
*/\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hdel\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vdeleted), vecSize(vdeleted));\n        preventCommandPropagation(c);\n    }\n\n    vecRelease(vexpired);\n    vecRelease(vdeleted);\n    vecRelease(vupdated);\n\n    /* Key may become empty due to lazy expiry in addHashFieldToReply()\n     * or because the new expiration time is in the past. */\n    newlen = hashTypeLength(o, 0);\n\n    updateKeysizesHist(c->db, OBJ_HASH, oldlen, newlen);\n    if (newlen == 0) {\n        dbDelete(c->db, c->argv[1]);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n    }\n}\n\nvoid hdelCommand(client *c) {\n    kvobj *o;\n    int j, keyremoved = 0;\n    size_t oldsize = 0;\n\n    if ((o = lookupKeyWriteOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    int64_t oldLen = (int64_t) hashTypeLength(o, 0);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n\n    /* Hash field expiration is optimized to avoid frequently updating the global\n     * HFE DS for each field deletion. Eventually, active expiration will run and\n     * update or remove the hash from the global HFE DS gracefully. Nevertheless,\n     * the \"subexpiry\" statistic might report a wrong number of hashes with HFE\n     * if the deleted field is the last one with expiration. The following logic\n     * checks if this is indeed the last field with expiration and removes the\n     * hash from the global HFE DS. */\n    int isHFE = hashTypeIsFieldsWithExpire(o);\n\n    /* Track which fields were actually deleted for subkey notification. 
*/\n    fieldvec fvdeleted;\n    vec *vdeleted = fieldvecInit(&fvdeleted, c->argc - 2);\n\n    if (o->encoding == OBJ_ENCODING_HT)\n        dictPauseAutoResize((dict*)o->ptr);\n    for (j = 2; j < c->argc; j++) {\n        if (hashTypeDelete(o,c->argv[j]->ptr)) {\n            vecPush(vdeleted, c->argv[j]);\n            if (hashTypeLength(o, 0) == 0) {\n                keyremoved = 1;\n                break;\n            }\n        }\n    }\n    \n    if (!keyremoved && o->encoding == OBJ_ENCODING_HT) {\n        dictResumeAutoResize((dict*)o->ptr);\n        dictShrinkIfNeeded((dict*)o->ptr);\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    if (vecSize(vdeleted)) {\n        /* Update keysizes histogram */\n        int64_t newLen = (int64_t) hashTypeLength(o, 0);\n        updateKeysizesHist(c->db, OBJ_HASH, oldLen, keyremoved ? -1 : newLen);\n        \n        if (keyremoved) {\n            /* del key but don't update KEYSIZES. Else it will decr wrong bin in histogram */\n            dbDeleteSkipKeysizesUpdate(c->db, c->argv[1]);\n        } else {\n            /* is it last HFE */\n            if (isHFE && (hashTypeIsFieldsWithExpire(o) == 0))\n                estoreRemove(c->db->subexpires, getKeySlot(c->argv[1]->ptr), o);\n        }\n\n        /* Signal key modification */\n        keyModified(c, c->db, c->argv[1], keyremoved ? 
NULL : o, 1);\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH,\"hdel\",c->argv[1],c->db->id,(robj**)vecData(vdeleted),vecSize(vdeleted));\n        \n        KSN_INVALIDATE_KVOBJ(o); /* Invalidate local kvobj pointer */\n        \n        /* Notify del event if key was deleted */\n        if (keyremoved) notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n        server.dirty += vecSize(vdeleted);\n    }\n    addReplyLongLong(c,vecSize(vdeleted));\n    vecRelease(vdeleted);\n}\n\nvoid hlenCommand(client *c) {\n    kvobj *o;\n\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    addReplyLongLong(c,hashTypeLength(o, 0));\n}\n\nvoid hstrlenCommand(client *c) {\n    kvobj *o;\n    unsigned char *vstr = NULL;\n    unsigned int vlen = UINT_MAX;\n    long long vll = LLONG_MAX;\n\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    GetFieldRes res = hashTypeGetValue(c->db, o, c->argv[2]->ptr, &vstr,\n                                       &vlen, &vll, HFE_LAZY_EXPIRE, NULL);\n\n    if (res == GETF_NOT_FOUND || res == GETF_EXPIRED || res == GETF_EXPIRED_HASH) {\n        addReply(c, shared.czero);\n        return;\n    }\n\n    size_t len = vstr ? 
vlen : sdigits10(vll);\n    addReplyLongLong(c,len);\n}\n\nstatic void addHashIteratorCursorToReply(client *c, hashTypeIterator *hi, int what) {\n    if (hi->encoding == OBJ_ENCODING_LISTPACK ||\n        hi->encoding == OBJ_ENCODING_LISTPACK_EX)\n    {\n        unsigned char *vstr = NULL;\n        unsigned int vlen = UINT_MAX;\n        long long vll = LLONG_MAX;\n\n        hashTypeCurrentFromListpack(hi, what, &vstr, &vlen, &vll, NULL);\n        if (vstr)\n            addReplyBulkCBuffer(c, vstr, vlen);\n        else\n            addReplyBulkLongLong(c, vll);\n    } else if (hi->encoding == OBJ_ENCODING_HT) {\n        char *value;\n        size_t len;\n        hashTypeCurrentFromHashTable(hi, what, &value, &len, NULL);\n        addReplyBulkCBuffer(c, value, len);\n    } else {\n        serverPanic(\"Unknown hash encoding\");\n    }\n}\n\nvoid genericHgetallCommand(client *c, int flags) {\n    kvobj *o;\n    hashTypeIterator hi;\n    int length, count = 0;\n    size_t oldsize = 0;\n\n    robj *emptyResp = (flags & OBJ_HASH_KEY && flags & OBJ_HASH_VALUE) ?\n        shared.emptymap[c->resp] : shared.emptyarray;\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],emptyResp))\n        == NULL || checkType(c,o,OBJ_HASH)) return;\n\n    /* We return a map if the user requested keys and values, like in the\n     * HGETALL case. Otherwise to use a flat array makes more sense. 
*/\n    if ((length = hashTypeLength(o, 1 /*subtractExpiredFields*/)) == 0) {\n        addReply(c, emptyResp);\n        return;\n    }\n\n    if (flags & OBJ_HASH_KEY && flags & OBJ_HASH_VALUE) {\n        addReplyMapLen(c, length);\n    } else {\n        addReplyArrayLen(c, length);\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n\n    /* Fast path: batched prefetch for hashtable-encoded HGETALL.\n     * Collect a batch of dict entries, prefetch their Entry structs and\n     * value SDS data, then emit replies while the data is cache-warm.\n     * This hides the latency of pointer chasing through scattered\n     * heap allocations (dictEntry → Entry → value SDS). */\n#define HGETALL_BATCH 16\n    if (o->encoding == OBJ_ENCODING_HT) {\n        int skip_expired = !server.allow_access_expired;\n        dict *d = o->ptr;\n        dictIterator di;\n        dictInitSafeIterator(&di, d);\n        Entry *batch_entry[HGETALL_BATCH];\n        sds batch_val[HGETALL_BATCH];\n\n        while (1) {\n            /* Phase 1: pull a batch of entries from the dict iterator and\n             * prefetch their Entry structs. Pure pointer-fetch — we don't\n             * dereference Entry here so the prefetch is effective. */\n            int batch_count = 0;\n            while (batch_count < HGETALL_BATCH) {\n                dictEntry *de = dictNext(&di);\n                if (!de) break;\n                Entry *e = dictGetKey(de);\n                batch_entry[batch_count++] = e;\n                redis_prefetch_read(e);\n            }\n            if (batch_count == 0) break;\n\n            /* Phase 2: Entry structs are warm — check expiry, extract value,\n             * and prefetch the value SDS. Expired entries are dropped from\n             * the batch by compacting in place. 
*/\n            int valid_count = 0;\n            for (int i = 0; i < batch_count; i++) {\n                Entry *e = batch_entry[i];\n                if (skip_expired) {\n                    uint64_t expire_time = entryGetExpiry(e);\n                    if (expire_time != EB_EXPIRE_TIME_INVALID && (mstime_t)expire_time < commandTimeSnapshot())\n                        continue;\n                }\n                batch_entry[valid_count] = e;\n                if (flags & OBJ_HASH_VALUE) {\n                    sds val = entryGetValue(e);\n                    batch_val[valid_count] = val;\n                    redis_prefetch_read(val);\n                }\n                valid_count++;\n            }\n\n            /* Phase 3: emit replies — field + value data is cache-warm. */\n            for (int i = 0; i < valid_count; i++) {\n                if (flags & OBJ_HASH_KEY) {\n                    sds field = entryGetField(batch_entry[i]);\n                    addReplyBulkCBuffer(c, field, sdslen(field));\n                    count++;\n                }\n                if (flags & OBJ_HASH_VALUE) {\n                    sds val = batch_val[i];\n                    addReplyBulkCBuffer(c, val, sdslen(val));\n                    count++;\n                }\n            }\n        }\n        dictResetIterator(&di);\n        goto done;\n    }\n\n    hashTypeInitIterator(&hi, o);\n\n    while (hashTypeNext(&hi, 1 /*skipExpiredFields*/) != C_ERR) {\n        if (flags & OBJ_HASH_KEY) {\n            addHashIteratorCursorToReply(c, &hi, OBJ_HASH_KEY);\n            count++;\n        }\n        if (flags & OBJ_HASH_VALUE) {\n            addHashIteratorCursorToReply(c, &hi, OBJ_HASH_VALUE);\n            count++;\n        }\n    }\n\n    hashTypeResetIterator(&hi);\n\ndone:\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n\n    /* Make sure we returned the right number of elements. 
*/\n    if (flags & OBJ_HASH_KEY && flags & OBJ_HASH_VALUE) count /= 2;\n    serverAssert(count == length);\n}\n\nvoid hkeysCommand(client *c) {\n    genericHgetallCommand(c,OBJ_HASH_KEY);\n}\n\nvoid hvalsCommand(client *c) {\n    genericHgetallCommand(c,OBJ_HASH_VALUE);\n}\n\nvoid hgetallCommand(client *c) {\n    genericHgetallCommand(c,OBJ_HASH_KEY|OBJ_HASH_VALUE);\n}\n\nvoid hexistsCommand(client *c) {\n    kvobj *o;\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    addReply(c,hashTypeExists(c->db,o,c->argv[2]->ptr,HFE_LAZY_EXPIRE, NULL) ?\n                                shared.cone : shared.czero);\n}\n\nvoid hscanCommand(client *c) {\n    kvobj *o;\n    unsigned long long cursor;\n    size_t oldsize = 0;\n\n    if (parseScanCursorOrReply(c,c->argv[2],&cursor) == C_ERR) return;\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptyscan)) == NULL ||\n        checkType(c,o,OBJ_HASH)) return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    scanGenericCommand(c,o,cursor);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n}\n\nstatic void hrandfieldReplyWithListpack(client *c, unsigned int count, listpackEntry *keys, listpackEntry *vals) {\n    for (unsigned long i = 0; i < count; i++) {\n        if (vals && c->resp > 2)\n            addReplyArrayLen(c,2);\n        if (keys[i].sval)\n            addReplyBulkCBuffer(c, keys[i].sval, keys[i].slen);\n        else\n            addReplyBulkLongLong(c, keys[i].lval);\n        if (vals) {\n            if (vals[i].sval)\n                addReplyBulkCBuffer(c, vals[i].sval, vals[i].slen);\n            else\n                addReplyBulkLongLong(c, vals[i].lval);\n        }\n    }\n}\n\n/* How many times bigger should be the hash compared to the requested size\n * for us to not use the \"remove elements\" strategy? 
Read later in the\n * implementation for more info. */\n#define HRANDFIELD_SUB_STRATEGY_MUL 3\n\n/* If the client asks for a very large number of random elements,\n * queuing them may consume an unlimited amount of memory, so we limit\n * the number of random elements sampled per iteration. */\n#define HRANDFIELD_RANDOM_SAMPLE_LIMIT 1000\n\nvoid hrandfieldWithCountCommand(client *c, long l, int withvalues) {\n    unsigned long count, size;\n    int uniq = 1;\n    kvobj *hash;\n    size_t oldsize = 0;\n\n    if ((hash = lookupKeyReadOrReply(c,c->argv[1],shared.emptyarray))\n        == NULL || checkType(c,hash,OBJ_HASH)) return;\n\n    if(l >= 0) {\n        count = (unsigned long) l;\n    } else {\n        count = -l;\n        uniq = 0;\n    }\n\n    /* Delete all expired fields. If the entire hash got deleted then return empty array. */\n    if (hashTypeExpireIfNeeded(c->db, hash)) {\n        addReply(c, shared.emptyarray);\n        return;\n    }\n\n    /* Get the number of remaining (non-expired) fields */\n    size = hashTypeLength(hash, 0);\n\n    /* If count is zero, serve it ASAP to avoid special cases later. */\n    if (count == 0) {\n        addReply(c,shared.emptyarray);\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(hash);\n\n    /* CASE 1: The count was negative, so the extraction method is just:\n     * \"return N random elements\" sampling the whole set every time.\n     * This case is trivial and can be served without auxiliary data\n     * structures. This case is the only one that also needs to return the\n     * elements in random order. 
*/\n    if (!uniq || count == 1) {\n        if (withvalues && c->resp == 2)\n            addReplyArrayLen(c, count*2);\n        else\n            addReplyArrayLen(c, count);\n        if (hash->encoding == OBJ_ENCODING_HT) {\n            while (count--) {\n                dictEntry *de = dictGetFairRandomKey(hash->ptr);\n                Entry *entry = dictGetKey(de);\n                sds fieldStr = entryGetField(entry);\n                if (withvalues && c->resp > 2)\n                    addReplyArrayLen(c,2);\n                addReplyBulkCBuffer(c, fieldStr, sdslen(fieldStr));\n                if (withvalues) {\n                    sds value = entryGetValue(entry);\n                    addReplyBulkCBuffer(c, value, sdslen(value));\n                }\n                if (c->flags & CLIENT_CLOSE_ASAP)\n                    break;\n            }\n        } else if (hash->encoding == OBJ_ENCODING_LISTPACK ||\n                   hash->encoding == OBJ_ENCODING_LISTPACK_EX)\n        {\n            listpackEntry *keys, *vals = NULL;\n            unsigned long limit, sample_count;\n            unsigned char *lp = hashTypeListpackGetLp(hash);\n            int tuple_len = hash->encoding == OBJ_ENCODING_LISTPACK ? 2 : 3;\n\n            limit = count > HRANDFIELD_RANDOM_SAMPLE_LIMIT ? HRANDFIELD_RANDOM_SAMPLE_LIMIT : count;\n            keys = zmalloc(sizeof(listpackEntry)*limit);\n            if (withvalues)\n                vals = zmalloc(sizeof(listpackEntry)*limit);\n            while (count) {\n                sample_count = count > limit ? 
limit : count;\n                count -= sample_count;\n                lpRandomPairs(lp, sample_count, keys, vals, tuple_len);\n                hrandfieldReplyWithListpack(c, sample_count, keys, vals);\n                if (c->flags & CLIENT_CLOSE_ASAP)\n                    break;\n            }\n            zfree(keys);\n            zfree(vals);\n        }\n        goto out;\n    }\n\n    /* Initiate reply count, RESP3 responds with nested array, RESP2 with flat one. */\n    long reply_size = count < size ? count : size;\n    if (withvalues && c->resp == 2)\n        addReplyArrayLen(c, reply_size*2);\n    else\n        addReplyArrayLen(c, reply_size);\n\n    /* CASE 2:\n     * The number of requested elements is greater than the number of\n     * elements inside the hash: simply return the whole hash. */\n    if(count >= size) {\n        hashTypeIterator hi;\n        hashTypeInitIterator(&hi, hash);\n        while (hashTypeNext(&hi, 0) != C_ERR) {\n            if (withvalues && c->resp > 2)\n                addReplyArrayLen(c,2);\n            addHashIteratorCursorToReply(c, &hi, OBJ_HASH_KEY);\n            if (withvalues)\n                addHashIteratorCursorToReply(c, &hi, OBJ_HASH_VALUE);\n        }\n        hashTypeResetIterator(&hi);\n        goto out;\n    }\n\n    /* CASE 2.5 listpack only. Sampling unique elements, in non-random order.\n     * Listpack encoded hashes are meant to be relatively small, so\n     * HRANDFIELD_SUB_STRATEGY_MUL isn't necessary and we'd rather not make\n     * copies of the entries. Instead, we emit them directly to the output\n     * buffer.\n     *\n     * And it is inefficient to repeatedly pick one random element from a\n     * listpack in CASE 4. So we use this instead. */\n    if (hash->encoding == OBJ_ENCODING_LISTPACK ||\n        hash->encoding == OBJ_ENCODING_LISTPACK_EX)\n    {\n        unsigned char *lp = hashTypeListpackGetLp(hash);\n        int tuple_len = hash->encoding == OBJ_ENCODING_LISTPACK ? 
2 : 3;\n        listpackEntry *keys, *vals = NULL;\n        keys = zmalloc(sizeof(listpackEntry)*count);\n        if (withvalues)\n            vals = zmalloc(sizeof(listpackEntry)*count);\n        serverAssert(lpRandomPairsUnique(lp, count, keys, vals, tuple_len) == count);\n        hrandfieldReplyWithListpack(c, count, keys, vals);\n        zfree(keys);\n        zfree(vals);\n        goto out;\n    }\n\n    /* CASE 3:\n     * The number of elements inside the hash of type dict is not greater than\n     * HRANDFIELD_SUB_STRATEGY_MUL times the number of requested elements.\n     * In this case we create an array of dictEntry pointers from the original hash,\n     * and remove random elements until we reach the requested number of elements.\n     *\n     * This is done because if the number of requested elements is just\n     * a bit less than the number of elements in the hash, the natural approach\n     * used in CASE 4 is highly inefficient. */\n    if (count*HRANDFIELD_SUB_STRATEGY_MUL > size) {\n        /* Hashtable encoding (generic implementation) */\n        dict *ht = hash->ptr;\n        dictIterator di;\n        dictEntry *de;\n        unsigned long idx = 0;\n\n        /* Allocate a temporary array of pointers to the key-values stored in the\n         * dict, then remove random elements from it to reach the requested count. */\n        struct FieldValPair {\n            sds field;\n            sds value;\n        } *pairs = zmalloc(sizeof(struct FieldValPair) * size);\n\n        /* Add all the elements into the temporary array. */\n        dictInitIterator(&di, ht);\n        while((de = dictNext(&di)) != NULL) {\n            Entry *e = dictGetKey(de);\n            pairs[idx++] = (struct FieldValPair) {entryGetField(e), entryGetValue(e)};\n        }\n        dictResetIterator(&di);\n\n        /* Remove random elements to reach the right count. 
*/\n        while (size > count) {\n            unsigned long toDiscardIdx = rand() % size;\n            pairs[toDiscardIdx] = pairs[--size];\n        }\n\n        /* Reply with what's in the array */\n        for (idx = 0; idx < size; idx++) {\n            if (withvalues && c->resp > 2)\n                addReplyArrayLen(c,2);\n            addReplyBulkCBuffer(c, pairs[idx].field, sdslen(pairs[idx].field));\n            if (withvalues)\n                addReplyBulkCBuffer(c, pairs[idx].value, sdslen(pairs[idx].value));\n        }\n\n        zfree(pairs);\n    }\n\n    /* CASE 4: We have a big hash compared to the requested number of elements.\n     * In this case we can simply get random elements from the hash and add\n     * to the temporary hash, trying to eventually get enough unique elements\n     * to reach the specified count. */\n    else {\n        /* Allocate temporary dictUnique to find unique elements. Just keep ref\n         * to key-value from the original hash. This dict relaxes hash function\n         * to be based on field's pointer */\n        dictType uniqueDictType = { .hashFunction =  dictPtrHash };\n        dict *dictUnique = dictCreate(&uniqueDictType);\n        dictExpand(dictUnique, count);\n\n        /* Hashtable encoding (generic implementation) */\n        unsigned long added = 0;\n\n        while(added < count) {\n            dictEntry *de = dictGetFairRandomKey(hash->ptr);\n            serverAssert(de != NULL);\n            Entry *e = dictGetKey(de);\n            sds field = entryGetField(e);\n            sds value = entryGetValue(e);\n\n            /* Try to add the object to the dictionary. If it already exists\n            * free it, otherwise increment the number of objects we have\n            * in the result dictionary. */\n            if (dictAdd(dictUnique, field, value) != DICT_OK)\n                continue;\n\n            added++;\n\n            /* We can reply right away, so that we don't need to store the value in the dict. 
*/\n            if (withvalues && c->resp > 2)\n                addReplyArrayLen(c,2);\n\n            addReplyBulkCBuffer(c, field, sdslen(field));\n            if (withvalues)\n                addReplyBulkCBuffer(c, value, sdslen(value));\n        }\n\n        /* Release memory */\n        dictRelease(dictUnique);\n    }\nout:\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), hash, oldsize, kvobjAllocSize(hash));\n}\n\n/*\n * HRANDFIELD - Return a random field from the hash value stored at key.\n * CLI usage: HRANDFIELD key [<count> [WITHVALUES]]\n *\n * Considerations for the current implementation of HRANDFIELD & the HFE feature:\n *  HRANDFIELD might access any of the fields in the hash, some of which might\n *  be expired. And so the implementation of HRANDFIELD along with HFEs\n *  could follow one of two options:\n *  1. Expire hash-fields before diving into handling HRANDFIELD.\n *  2. Refine HRANDFIELD cases to deal with expired fields.\n *\n *  Regarding the first option, as reference, the command RANDOMKEY also declares\n *  O(1) complexity, yet might be stuck in a very long (but not infinite) loop\n *  trying to find non-expired keys. Furthermore RANDOMKEY also evicts expired keys\n *  along the way even though it is categorized as a read-only command. Note that\n *  the case of HRANDFIELD is more lightweight versus RANDOMKEY since HFEs have\n *  much more effective and aggressive active-expiration for fields behind the\n *  scenes.\n *\n *  The second option introduces additional implementation complexity to HRANDFIELD.\n *  We could further refine HRANDFIELD cases to differentiate between scenarios\n *  with many expired fields versus few expired fields, and adjust based on the\n *  percentage of expired fields. However, this approach could still lead to long\n *  loops or necessitate expiring fields before selecting them. 
For the “lightweight”\n *  cases it is also expected to have a lightweight expiration.\n *\n *  Considering the pros and cons, and the fact that HRANDFIELD is an infrequent\n *  command (particularly with HFEs) and the fact we have effective active-expiration\n *  behind for hash-fields, it is better to keep it simple and choose the option #1.\n */\nvoid hrandfieldCommand(client *c) {\n    long l;\n    int withvalues = 0;\n    kvobj *hash;\n    CommonEntry ele;\n    size_t oldsize = 0;\n\n    if (c->argc >= 3) {\n        if (getRangeLongFromObjectOrReply(c,c->argv[2],-LONG_MAX,LONG_MAX,&l,NULL) != C_OK) return;\n        if (c->argc > 4 || (c->argc == 4 && strcasecmp(c->argv[3]->ptr,\"withvalues\"))) {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        } else if (c->argc == 4) {\n            withvalues = 1;\n            if (l < -LONG_MAX/2 || l > LONG_MAX/2) {\n                addReplyError(c,\"value is out of range\");\n                return;\n            }\n        }\n        hrandfieldWithCountCommand(c, l, withvalues);\n        return;\n    }\n\n    /* Handle variant without <count> argument. Reply with simple bulk string */\n    if ((hash = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))== NULL ||\n        checkType(c,hash,OBJ_HASH)) {\n        return;\n    }\n\n    /* Delete all expired fields. If the entire hash got deleted then return null. 
*/\n    if (hashTypeExpireIfNeeded(c->db, hash)) {\n        addReply(c,shared.null[c->resp]);\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(hash);\n    hashTypeRandomElement(hash,hashTypeLength(hash, 0),&ele,NULL);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), hash, oldsize, kvobjAllocSize(hash));\n\n    if (ele.sval)\n        addReplyBulkCBuffer(c, ele.sval, ele.slen);\n    else\n        addReplyBulkLongLong(c, ele.lval);\n}\n\n/*-----------------------------------------------------------------------------\n * Hash Field with optional expiry (based on entry)\n *----------------------------------------------------------------------------*/\n\nstatic ExpireMeta* hentryGetExpireMeta(const eItem e) {\n    /* extract the expireMeta from the field (entry) */\n    return entryRefExpiryMeta((Entry *)e);\n}\n\n/* Remove TTL from the field. Assumes ExpireMeta is attached and has a valid value */\nstatic void hfieldPersist(robj *hashObj, Entry *entry) {\n    uint64_t fieldExpireTime = entryGetExpiry(entry);\n    if (fieldExpireTime == EB_EXPIRE_TIME_INVALID)\n        return;\n\n    /* if the field is set with an expire, then the dict must have HFE metadata attached */\n    dict *d = hashObj->ptr;\n    htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n\n    /* Remove field from private HFE DS */\n    ebRemove(&dictExpireMeta->hfe, &hashFieldExpireBucketsType, entry);\n\n    /* There is no need to update the global HFE DS. Implementing this\n     * would introduce significant complexity and overhead for an operation that\n     * isn't critical. In the worst case scenario, the hash will be efficiently\n     * updated later by an active-expire operation, or it will be removed by the\n     * hash's dbGenericDelete() function. 
*/\n}\n\n/*-----------------------------------------------------------------------------\n * Hash Field Expiration (HFE)\n *----------------------------------------------------------------------------*/\n/*  Can be called either by active-expire cron job or query from the client */\nstatic void propagateHashFieldDeletion(redisDb *db, sds key, char *field, size_t fieldLen) {\n    robj *argv[] = {\n        shared.hdel,\n        createStringObject((char*) key, sdslen(key)),\n        createStringObject(field, fieldLen)\n    };\n\n    enterExecutionUnit(1, 0);\n    int prev_replication_allowed = server.replication_allowed;\n    server.replication_allowed = 1;\n    alsoPropagate(db->id,argv, 3, PROPAGATE_AOF|PROPAGATE_REPL);\n    server.replication_allowed = prev_replication_allowed;\n    exitExecutionUnit();\n\n    /* Propagate the HDEL command */\n    postExecutionUnitOperations();\n\n    decrRefCount(argv[1]);\n    decrRefCount(argv[2]);\n}\n\n/* Called during active expiration of hash-fields. Propagate to replica & Delete. 
*/\nstatic ExpireAction onFieldExpire(eItem item, void *ctx) {\n    OnFieldExpireCtx *expCtx = ctx;\n    Entry *e = item;\n    kvobj *kv = expCtx->hashObj;\n    size_t oldsize = 0;\n    sds key = kvobjGetKey(kv);\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n    sds field = entryGetField(e);\n\n    /* Collect expired field for subkey notification (before deletion) */\n    if (expCtx->vexpired)\n        vecPush(expCtx->vexpired, createStringObject(field, sdslen(field)));\n\n    propagateHashFieldDeletion(expCtx->db, key, field, sdslen(field));\n\n    /* update keysizes */\n    unsigned long l = hashTypeLength(expCtx->hashObj, 0);\n    updateKeysizesHist(expCtx->db, OBJ_HASH, l, l - 1);\n\n    serverAssert(hashTypeDelete(expCtx->hashObj, field) == 1);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(expCtx->db, getKeySlot(key), kv, oldsize, kvobjAllocSize(kv));\n    server.stat_expired_subkeys++;\n    if (expCtx->activeEx)\n        server.stat_expired_subkeys_active++;\n    return ACT_REMOVE_EXP_ITEM;\n}\n\n/* Retrieve the ExpireMeta associated with the hash.\n * The caller is responsible for ensuring that it is indeed attached. 
*/\nExpireMeta *hashGetExpireMeta(const eItem hash) {\n    robj *hashObj = (robj *)hash;\n    if (hashObj->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = hashObj->ptr;\n        return &lpt->meta;\n    } else if (hashObj->encoding == OBJ_ENCODING_HT) {\n        dict *d = hashObj->ptr;\n        htMetadataEx *dictExpireMeta = htGetMetadataEx(d);\n        return &dictExpireMeta->expireMeta;\n    } else {\n        serverPanic(\"Unknown encoding: %d\", hashObj->encoding);\n    }\n}\n\n/* Generic structure to hold parsed arguments for all hash field commands */\ntypedef struct {\n    /* FIELDS arguments */\n    int fieldsPos;          /* Position of FIELDS keyword (-1 if not found) */\n    int numFieldsPos;       /* Position of numfields argument */\n    int firstFieldPos;      /* Position of first field */\n    int fieldCount;         /* Number of fields */\n\n    /* HEXPIRE family arguments */\n    int expireTimePos;      /* Position of expire time argument */\n    long long expireTime;   /* Parsed expire time */\n    int expireCondition;    /* HFE_NX, HFE_XX, HFE_GT, HFE_LT */\n} HashCommandArgs;\n\n/* Parser for HEXPIRE family commands with flexible keyword ordering.\n * Returns C_OK on success, C_ERR on error (with reply sent). 
*/\nstatic int parseHashCommandArgs(client *c, HashCommandArgs *args,\n                                long long basetime, int unit)\n{\n    memset(args, 0, sizeof(*args));\n    args->fieldsPos = -1;\n    args->expireTimePos = 2;\n\n    if (parseExpireTime(c, c->argv[2], unit, basetime, &args->expireTime) != C_OK) {\n        return C_ERR;\n    }\n\n    /* Parse remaining arguments starting from position 3 */\n    for (int i = 3; i < c->argc; i++) {\n        char *arg = c->argv[i]->ptr;\n\n        /* FIELDS keyword - supported by ALL hash field commands */\n        if (!strcasecmp(arg, \"FIELDS\")) {\n            if (args->fieldsPos != -1) {\n                addReplyError(c, \"FIELDS keyword specified multiple times\");\n                return C_ERR;\n            }\n\n            if (i >= c->argc - 2) {\n                addReplyError(c, \"FIELDS requires at least numfields and one field argument\");\n                return C_ERR;\n            }\n\n            args->fieldsPos = i;\n            args->numFieldsPos = i + 1;\n            long numFields;\n            if (getRangeLongFromObjectOrReply(c, c->argv[args->numFieldsPos], 1, INT_MAX,\n                                              &numFields, \"Parameter `numFields` should be greater than 0\") != C_OK)\n                return C_ERR;\n\n            args->firstFieldPos = i + 2;\n\n            /* Check bounds - we must have exactly the right number of fields */\n            if (numFields > c->argc - args->firstFieldPos) {\n                addReplyError(c, \"wrong number of arguments\");\n                return C_ERR;\n            }\n\n            args->fieldCount = (int)numFields;\n\n            /* Skip over the field arguments */\n            i = args->firstFieldPos + args->fieldCount - 1;\n            continue;\n        }\n\n        /* Expiration condition keywords - validation moved outside loop for performance */\n        if (!strcasecmp(arg, \"NX\")) {\n            args->expireCondition |= HFE_NX;\n            
continue;\n        } else if (!strcasecmp(arg, \"XX\")) {\n            args->expireCondition |= HFE_XX;\n            continue;\n        } else if (!strcasecmp(arg, \"GT\")) {\n            args->expireCondition |= HFE_GT;\n            continue;\n        } else if (!strcasecmp(arg, \"LT\")) {\n            args->expireCondition |= HFE_LT;\n            continue;\n        }\n\n        addReplyErrorFormat(c, \"unknown argument: %s\", (char*) c->argv[i]->ptr);\n        return C_ERR;\n    }\n\n    /* Ensure FIELDS is specified */\n    if (args->fieldsPos == -1) {\n        addReplyError(c, \"missing FIELDS argument\");\n        return C_ERR;\n    }\n\n    if (__builtin_popcount(args->expireCondition & (HFE_NX|HFE_XX|HFE_GT|HFE_LT)) > 1) {\n        addReplyError(c, \"Multiple condition flags specified\");\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\n/* HTTL key <FIELDS count field [field ...]>  */\nstatic void httlGenericCommand(client *c, const char *cmd, long long basetime, int unit){\n    UNUSED(cmd);\n    kvobj *hashObj;\n    long numFields = 0, numFieldsAt = 3;\n\n    /* Read the hash object */\n    hashObj = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c, hashObj, OBJ_HASH))\n        return;\n\n    if (strcasecmp(c->argv[numFieldsAt-1]->ptr, \"FIELDS\")) {\n        addReplyError(c, \"Mandatory argument FIELDS is missing or not at the right position\");\n        return;\n    }\n\n    /* Read number of fields */\n    if (getRangeLongFromObjectOrReply(c, c->argv[numFieldsAt], 1, LONG_MAX,\n                                      &numFields, \"Number of fields must be a positive integer\") != C_OK)\n        return;\n\n    /* Verify `numFields` is consistent with number of arguments */\n    if (numFields != (c->argc - numFieldsAt - 1)) {\n        addReplyError(c, \"The `numfields` parameter must match the number of arguments\");\n        return;\n    }\n\n    /* Non-existing keys and empty hashes are the same thing. 
It also means\n     * fields in the command don't exist in the hash key. */\n    if (!hashObj) {\n        addReplyArrayLen(c, numFields);\n        for (int i = 0; i < numFields; i++) {\n            addReplyLongLong(c, HFE_GET_NO_FIELD);\n        }\n        return;\n    }\n\n    if (hashObj->encoding == OBJ_ENCODING_LISTPACK) {\n        void *lp = hashObj->ptr;\n\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            sds field = c->argv[numFieldsAt+1+i]->ptr;\n            void *fptr = lpFirst(lp);\n            if (fptr != NULL)\n                fptr = lpFind(lp, fptr, (unsigned char *) field, sdslen(field), 1);\n\n            if (!fptr)\n                addReplyLongLong(c, HFE_GET_NO_FIELD);\n            else\n                addReplyLongLong(c, HFE_GET_NO_TTL);\n        }\n        return;\n    } else if (hashObj->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        listpackEx *lpt = hashObj->ptr;\n\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            long long expire;\n            sds field = c->argv[numFieldsAt+1+i]->ptr;\n            void *fptr = lpFirst(lpt->lp);\n            if (fptr != NULL)\n                fptr = lpFind(lpt->lp, fptr, (unsigned char *) field, sdslen(field), 2);\n\n            if (!fptr) {\n                addReplyLongLong(c, HFE_GET_NO_FIELD);\n                continue;\n            }\n\n            fptr = lpNext(lpt->lp, fptr);\n            serverAssert(fptr);\n            fptr = lpNext(lpt->lp, fptr);\n            serverAssert(fptr && lpGetIntegerValue(fptr, &expire));\n\n            if (expire == HASH_LP_NO_TTL) {\n                addReplyLongLong(c, HFE_GET_NO_TTL);\n                continue;\n            }\n\n            if (expire <= commandTimeSnapshot()) {\n                addReplyLongLong(c, HFE_GET_NO_FIELD);\n                continue;\n            }\n\n            if (unit == UNIT_SECONDS)\n                addReplyLongLong(c, (expire + 
999 - basetime) / 1000);\n            else\n                addReplyLongLong(c, (expire - basetime));\n        }\n        return;\n    } else if (hashObj->encoding == OBJ_ENCODING_HT) {\n        dict *d = hashObj->ptr;\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(hashObj);\n\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            sds field = c->argv[numFieldsAt+1+i]->ptr;\n            dictEntry *de = dictFind(d, field);\n            if (de == NULL) {\n                addReplyLongLong(c, HFE_GET_NO_FIELD);\n                continue;\n            }\n\n            Entry *entry = dictGetKey(de);\n            uint64_t expire = entryGetExpiry(entry);\n            if (expire == EB_EXPIRE_TIME_INVALID) {\n                addReplyLongLong(c, HFE_GET_NO_TTL); /* no ttl */\n                continue;\n            }\n\n            if ( (long long) expire < commandTimeSnapshot()) {\n                addReplyLongLong(c, HFE_GET_NO_FIELD);\n                continue;\n            }\n\n            if (unit == UNIT_SECONDS)\n                addReplyLongLong(c, (expire + 999 - basetime) / 1000);\n            else\n                addReplyLongLong(c, (expire - basetime));\n        }\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), hashObj, oldsize, kvobjAllocSize(hashObj));\n        return;\n    } else {\n        serverPanic(\"Unknown encoding: %d\", hashObj->encoding);\n    }\n}\n\n/* This is the generic command implementation for HEXPIRE, HPEXPIRE, HEXPIREAT\n * and HPEXPIREAT. Because the command second argument may be relative or absolute\n * the \"basetime\" argument is used to signal what the base time is (either 0\n * for *AT variants of the command, or the current time for relative expires).\n *\n * unit is either UNIT_SECONDS or UNIT_MILLISECONDS, and is only used for\n * the argv[2] parameter. 
The basetime is always specified in milliseconds.\n *\n * PROPAGATE TO REPLICA:\n *   The command will be translated into HPEXPIREAT and the expiration time will be\n *   converted to absolute time in milliseconds.\n *\n *   As we need to propagate the H(P)EXPIRE(AT) command to the replica, each field\n *   mentioned in the command falls into one of four categories:\n *   1. Field's expiration time updated successfully - Propagate it to the replica\n *      as part of the HPEXPIREAT command.\n *   2. The field got deleted since the time is in the past - Also propagate an\n *      HDEL command to delete the field, and remove the field from the propagated\n *      HPEXPIREAT command.\n *   3. Condition not met for the field - Remove the field from the propagated\n *      HPEXPIREAT command.\n *   4. Field doesn't exist - Remove the field from the propagated HPEXPIREAT\n *      command.\n *\n *   If none of the provided fields match option #1, that is, the provided time of\n *   the command is in the past, then avoid propagating the HPEXPIREAT command to\n *   the replica.\n *\n *   This approach is aligned with the existing EXPIRE command. If a given key has\n *   already expired, then DEL will be propagated instead of EXPIRE. If the\n *   condition is not met, the command will be rejected. Otherwise, an EXPIRE\n *   command will be propagated for the given key.\n */\nstatic void hexpireGenericCommand(client *c, long long basetime, int unit) {\n    HashCommandArgs args;\n    int fieldsNotSet = 0;\n    int64_t oldlen, newlen;\n    robj *keyArg = c->argv[1];\n    size_t oldsize = 0;\n\n    /* Read the hash object */\n    kvobj *hashObj = lookupKeyWrite(c->db, keyArg);\n    if (checkType(c, hashObj, OBJ_HASH))\n        return;\n\n    /* Parse arguments using flexible keyword-based parsing */\n    if (parseHashCommandArgs(c, &args, basetime, unit) != C_OK)\n        return;\n\n    /* Non-existing keys and empty hashes are the same thing. 
It also means\n     * fields in the command don't exist in the hash key. */\n    if (!hashObj) {\n        addReplyArrayLen(c, args.fieldCount);\n        for (int i = 0; i < args.fieldCount; i++) {\n            addReplyLongLong(c, HSETEX_NO_FIELD);\n        }\n        return;\n    }\n\n    oldlen = hashTypeLength(hashObj, 0);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(hashObj);\n\n    HashTypeSetEx exCtx;\n    hashTypeSetExInit(keyArg, hashObj, c, c->db, args.expireCondition, &exCtx);\n    addReplyArrayLen(c, args.fieldCount);\n\n    /* Lazy allocation of fieldsToRemove - only allocate when failures occur */\n    int *fieldsToRemove = NULL;\n    int removeCount = 0;\n\n    /* Track fields for subkey notifications. */\n    fieldvec fvupdated, fvdeleted;\n    vec *vupdated = fieldvecInit(&fvupdated, args.fieldCount);\n    vec *vdeleted = fieldvecInit(&fvdeleted, args.fieldCount);\n\n    for (int i = 0; i < args.fieldCount; i++) {\n        int fieldPos = args.firstFieldPos + i;\n        sds field = c->argv[fieldPos]->ptr;\n        SetExRes res = hashTypeSetEx(hashObj, field, args.expireTime, &exCtx);\n        if (res == HSETEX_OK) {\n            vecPush(vupdated, c->argv[fieldPos]);\n        } else if (res == HSETEX_DELETED) {\n            vecPush(vdeleted, c->argv[fieldPos]);\n        }\n\n        if (unlikely(res != HSETEX_OK)) {\n            if (fieldsToRemove == NULL) {\n                fieldsToRemove = zmalloc(sizeof(int) * (args.fieldCount > 0 ? 
args.fieldCount : 1));\n            }\n            /* Remember this field position for later removal from propagation */\n            fieldsToRemove[removeCount++] = fieldPos;\n            fieldsNotSet = 1;\n        }\n\n        addReplyLongLong(c, res);\n    }\n\n    hashTypeSetExDone(&exCtx);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(keyArg->ptr), hashObj, oldsize, kvobjAllocSize(hashObj));\n\n    if (vecSize(vdeleted) + vecSize(vupdated) > 0) {\n        server.dirty += vecSize(vdeleted) + vecSize(vupdated);\n        keyModified(c, c->db, keyArg, hashObj, 1);\n        if (vecSize(vdeleted)) notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hdel\",\n                                keyArg, c->db->id, (robj**)vecData(vdeleted), vecSize(vdeleted));\n        if (vecSize(vupdated)) notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hexpire\",\n                                keyArg, c->db->id, (robj**)vecData(vupdated), vecSize(vupdated));\n    }\n\n    newlen = (int64_t) hashTypeLength(hashObj, 0);\n    if (newlen == 0) {\n        newlen = -1;\n        /* Del key but don't update KEYSIZES. 
Otherwise it will decrement the wrong bin in the histogram */\n        dbDeleteSkipKeysizesUpdate(c->db, keyArg);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", keyArg, c->db->id);\n    }\n\n    if (oldlen != newlen)\n        updateKeysizesHist(c->db, OBJ_HASH, oldlen, newlen);\n\n    /* Avoid propagating the command if not even one field was updated (either\n     * because the time is in the past and corresponding HDELs were sent, or\n     * conditions were not met): it is useless and invalid to propagate a\n     * command with no fields. */\n    if (vecSize(vupdated) == 0) {\n        vecRelease(vupdated);\n        vecRelease(vdeleted);\n        preventCommandPropagation(c);\n        zfree(fieldsToRemove);\n        return;\n    }\n\n    /* Handle propagation using command rewriting\n     * Rewrite to canonical HPEXPIREAT command */\n    if (c->cmd->proc != hpexpireatCommand) {\n        rewriteClientCommandArgument(c, 0, shared.hpexpireat);\n\n        robj *expireTimeObj = createStringObjectFromLongLong(args.expireTime);\n        rewriteClientCommandArgument(c, args.expireTimePos, expireTimeObj);\n        decrRefCount(expireTimeObj);\n    }\n\n    /* For partial failures, remove failed fields from the original command */\n    if (fieldsNotSet) {\n        for (int i = removeCount - 1; i >= 0; i--) {\n            rewriteClientCommandArgument(c, fieldsToRemove[i], NULL);\n        }\n        robj *newFieldCount = createStringObjectFromLongLong(vecSize(vupdated));\n        rewriteClientCommandArgument(c, args.fieldsPos + 1, newFieldCount);\n        decrRefCount(newFieldCount);\n    }\n\n    if (fieldsToRemove)\n        zfree(fieldsToRemove);\n\n    vecRelease(vupdated);\n    vecRelease(vdeleted);\n}\n\n/* HPEXPIRE key milliseconds [NX | XX | GT | LT] FIELDS numfields <field [field ...]> */\nvoid hpexpireCommand(client *c) {\n    hexpireGenericCommand(c,commandTimeSnapshot(),UNIT_MILLISECONDS);\n}\n\n/* HEXPIRE key seconds [NX | XX | GT | LT] FIELDS numfields <field [field ...]> */\nvoid 
hexpireCommand(client *c) {\n    hexpireGenericCommand(c,commandTimeSnapshot(),UNIT_SECONDS);\n}\n\n/* HEXPIREAT key unix-time-seconds [NX | XX | GT | LT] FIELDS numfields <field [field ...]> */\nvoid hexpireatCommand(client *c) {\n    hexpireGenericCommand(c,0,UNIT_SECONDS);\n}\n\n/* HPEXPIREAT key unix-time-milliseconds [NX | XX | GT | LT] FIELDS numfields <field [field ...]> */\nvoid hpexpireatCommand(client *c) {\n    hexpireGenericCommand(c,0,UNIT_MILLISECONDS);\n}\n\n/* For each specified field: get the remaining time to live in seconds */\n/* HTTL key FIELDS numfields <field [field ...]> */\nvoid httlCommand(client *c) {\n    httlGenericCommand(c, \"httl\", commandTimeSnapshot(), UNIT_SECONDS);\n}\n\n/* HPTTL key FIELDS numfields <field [field ...]> */\nvoid hpttlCommand(client *c) {\n    httlGenericCommand(c, \"hpttl\", commandTimeSnapshot(), UNIT_MILLISECONDS);\n}\n\n/* HEXPIRETIME key FIELDS numfields <field [field ...]> */\nvoid hexpiretimeCommand(client *c) {\n    httlGenericCommand(c, \"hexpiretime\", 0, UNIT_SECONDS);\n}\n\n/* HPEXPIRETIME key FIELDS numfields <field [field ...]> */\nvoid hpexpiretimeCommand(client *c) {\n    httlGenericCommand(c, \"hpexpiretime\", 0, UNIT_MILLISECONDS);\n}\n\n/* HPERSIST key FIELDS numfields <field [field ...]> */\nvoid hpersistCommand(client *c) {\n    long numFields = 0, numFieldsAt = 3;\n\n    /* Read the hash object */\n    kvobj *hashObj = lookupKeyWrite(c->db, c->argv[1]);\n    if (checkType(c, hashObj, OBJ_HASH))\n        return;\n\n    if (strcasecmp(c->argv[numFieldsAt-1]->ptr, \"FIELDS\")) {\n        addReplyError(c, \"Mandatory argument FIELDS is missing or not at the right position\");\n        return;\n    }\n\n    /* Read number of fields */\n    if (getRangeLongFromObjectOrReply(c, c->argv[numFieldsAt], 1, LONG_MAX,\n                                      &numFields, \"Number of fields must be a positive integer\") != C_OK)\n        return;\n\n    /* Verify `numFields` is consistent with number of 
arguments */\n    if (numFields != (c->argc - numFieldsAt - 1)) {\n        addReplyError(c, \"The `numfields` parameter must match the number of arguments\");\n        return;\n    }\n\n    /* Non-existing keys and empty hashes are the same thing. It also means\n     * fields in the command don't exist in the hash key. */\n    if (!hashObj) {\n        addReplyArrayLen(c, numFields);\n        for (int i = 0; i < numFields; i++) {\n            addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n        }\n        return;\n    }\n\n    /* Track which fields were successfully persisted for subkey notification. */\n    fieldvec fvpersisted;\n    vec *vpersisted = fieldvecInit(&fvpersisted, numFields);\n\n    if (hashObj->encoding == OBJ_ENCODING_LISTPACK) {\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            sds field = c->argv[numFieldsAt + 1 + i]->ptr;\n            unsigned char *fptr, *zl = hashObj->ptr;\n\n            fptr = lpFirst(zl);\n            if (fptr != NULL)\n                fptr = lpFind(zl, fptr, (unsigned char *) field, sdslen(field), 1);\n\n            if (!fptr)\n                addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n            else\n                addReplyLongLong(c, HFE_PERSIST_NO_TTL);\n        }\n        vecRelease(vpersisted);\n        return;\n    } else if (hashObj->encoding == OBJ_ENCODING_LISTPACK_EX) {\n        long long prevExpire;\n        unsigned char *fptr, *vptr, *tptr;\n        listpackEx *lpt = hashObj->ptr;\n        size_t oldsize = 0;\n\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            sds field = c->argv[numFieldsAt + 1 + i]->ptr;\n\n            fptr = lpFirst(lpt->lp);\n            if (fptr != NULL)\n                fptr = lpFind(lpt->lp, fptr, (unsigned char*)field, sdslen(field), 2);\n\n            if (!fptr) {\n                addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n                continue;\n            }\n\n            
vptr = lpNext(lpt->lp, fptr);\n            serverAssert(vptr);\n            tptr = lpNext(lpt->lp, vptr);\n            serverAssert(tptr && lpGetIntegerValue(tptr, &prevExpire));\n\n            if (prevExpire == HASH_LP_NO_TTL) {\n                addReplyLongLong(c, HFE_PERSIST_NO_TTL);\n                continue;\n            }\n\n            if (prevExpire < commandTimeSnapshot()) {\n                addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n                continue;\n            }\n\n            if (server.memory_tracking_enabled)\n                oldsize = kvobjAllocSize(hashObj);\n            listpackExUpdateExpiry(hashObj, field, fptr, vptr, HASH_LP_NO_TTL);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), hashObj, oldsize, kvobjAllocSize(hashObj));\n            addReplyLongLong(c, HFE_PERSIST_OK);\n            vecPush(vpersisted, c->argv[numFieldsAt + 1 + i]);\n        }\n    } else if (hashObj->encoding == OBJ_ENCODING_HT) {\n        dict *d = hashObj->ptr;\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(hashObj);\n\n        addReplyArrayLen(c, numFields);\n        for (int i = 0 ; i < numFields ; i++) {\n            sds field = c->argv[numFieldsAt + 1 + i]->ptr;\n            dictEntry *de = dictFind(d, field);\n            if (de == NULL) {\n                addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n                continue;\n            }\n\n            Entry *entry = dictGetKey(de);\n            uint64_t expire = entryGetExpiry(entry);\n            if (expire == EB_EXPIRE_TIME_INVALID) {\n                addReplyLongLong(c, HFE_PERSIST_NO_TTL);\n                continue;\n            }\n\n            /* Already expired. 
Pretend there is no such field */\n            if ( (long long) expire < commandTimeSnapshot()) {\n                addReplyLongLong(c, HFE_PERSIST_NO_FIELD);\n                continue;\n            }\n\n            hfieldPersist(hashObj, entry);\n            addReplyLongLong(c, HFE_PERSIST_OK);\n            vecPush(vpersisted, c->argv[numFieldsAt + 1 + i]);\n        }\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), hashObj, oldsize, kvobjAllocSize(hashObj));\n    } else {\n        serverPanic(\"Unknown encoding: %d\", hashObj->encoding);\n    }\n\n    /* Generates a hpersist event if the expiry time associated with any field\n     * has been successfully deleted. */\n    if (vecSize(vpersisted)) {\n        notifyKeyspaceEventWithSubkeys(NOTIFY_HASH, \"hpersist\", c->argv[1],\n                                       c->db->id, (robj**)vecData(vpersisted), vecSize(vpersisted));\n        keyModified(c, c->db, c->argv[1], hashObj, 1);\n        server.dirty++;\n    }\n    vecRelease(vpersisted);\n}\n"
  },
  {
    "path": "src/t_list.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"util.h\"\n\n/*-----------------------------------------------------------------------------\n * List API\n *----------------------------------------------------------------------------*/\n\n/* Check the length and size of a number of objects that will be added to the list\n * to see if we need to convert a listpack to a quicklist. Note that we only check\n * string encoded objects as their string length can be queried in constant time.\n *\n * If a callback is given, the function is called in order for the caller to do\n * some work before the list conversion. */\nstatic void listTypeTryConvertListpack(robj *o, robj **argv, int start, int end,\n                                       beforeConvertCB fn, void *data)\n{\n    serverAssert(o->encoding == OBJ_ENCODING_LISTPACK);\n\n    size_t add_bytes = 0;\n    size_t add_length = 0;\n\n    if (argv) {\n        for (int i = start; i <= end; i++) {\n            if (!sdsEncodedObject(argv[i]))\n                continue;\n            add_bytes += sdslen(argv[i]->ptr);\n        }\n        add_length = end - start + 1;\n    }\n\n    if (quicklistNodeExceedsLimit(server.list_max_listpack_size,\n            lpBytes(o->ptr) + add_bytes, lpLength(o->ptr) + add_length))\n    {\n        /* Invoke callback before conversion. */\n        if (fn) fn(data);\n\n        quicklist *ql = quicklistNew(server.list_max_listpack_size, server.list_compress_depth);\n\n        /* Append listpack to quicklist if it's not empty, otherwise release it. 
*/\n        if (lpLength(o->ptr))\n            quicklistAppendListpack(ql, o->ptr);\n        else\n            lpFree(o->ptr);\n        o->ptr = ql;\n        o->encoding = OBJ_ENCODING_QUICKLIST;\n    }\n}\n\n/* Check the length and size of a quicklist to see if we need to convert it to a listpack.\n *\n * 'shrinking' being 1 means that the conversion is due to the list shrinking. To\n * avoid frequent conversions between quicklist and listpack due to frequent\n * insertion and deletion, we don't convert a quicklist to a listpack until its\n * length or size is below half of the limit.\n *\n * If a callback is given, the function is called in order for the caller to do\n * some work before the list conversion. */\nstatic void listTypeTryConvertQuicklist(robj *o, int shrinking, beforeConvertCB fn, void *data) {\n    serverAssert(o->encoding == OBJ_ENCODING_QUICKLIST);\n\n    size_t sz_limit;\n    unsigned int count_limit;\n    quicklist *ql = o->ptr;\n\n    /* A quicklist can be converted to a listpack only if it has a single packed node. */\n    if (ql->len != 1 || ql->head->container != QUICKLIST_NODE_CONTAINER_PACKED)\n        return;\n\n    /* Check that the length or size of the quicklist is below the limit. */\n    quicklistNodeLimit(server.list_max_listpack_size, &sz_limit, &count_limit);\n    if (shrinking) {\n        sz_limit /= 2;\n        count_limit /= 2;\n    }\n    if (ql->head->sz > sz_limit || ql->count > count_limit) return;\n\n    /* Invoke callback before conversion. */\n    if (fn) fn(data);\n\n    /* Extract the listpack from the unique quicklist node,\n     * then reset it and release the quicklist. 
*/\n    o->ptr = ql->head->entry;\n    ql->head->entry = NULL;\n    ql->alloc_size -= ql->head->sz;\n    quicklistRelease(ql);\n    o->encoding = OBJ_ENCODING_LISTPACK;\n}\n\n/* Check if the list needs to be converted to the appropriate encoding due to\n * growing, shrinking or other cases.\n *\n * 'lct' can be one of the following values:\n * LIST_CONV_AUTO      - Used after we built a new list, and we want to let the\n *                       function decide on the best encoding for that list.\n * LIST_CONV_GROWING   - Used before or right after adding elements to the list,\n *                       in which case we are likely to only consider converting\n *                       from listpack to quicklist.\n *                       'argv' is only used in this case to calculate the size\n *                       of a number of objects that will be added to the list.\n * LIST_CONV_SHRINKING - Used after removing an element from the list, in which case we\n *                       want to consider converting from quicklist to listpack. When we\n *                       know we're shrinking, we use a lower (more strict) threshold in\n *                       order to avoid repeated conversions on every list change. 
*/\nstatic void listTypeTryConversionRaw(robj *o, list_conv_type lct,\n                                     robj **argv, int start, int end,\n                                     beforeConvertCB fn, void *data)\n{\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        if (lct == LIST_CONV_GROWING) return; /* Growing has nothing to do with quicklist */\n        listTypeTryConvertQuicklist(o, lct == LIST_CONV_SHRINKING, fn, data);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        if (lct == LIST_CONV_SHRINKING) return; /* Shrinking has nothing to do with listpack */\n        listTypeTryConvertListpack(o, argv, start, end, fn, data);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/* This is just a wrapper for listTypeTryConversionRaw() that is\n * able to try conversion without passing 'argv'. */\nvoid listTypeTryConversion(robj *o, list_conv_type lct, beforeConvertCB fn, void *data) {\n    listTypeTryConversionRaw(o, lct, NULL, 0, 0, fn, data);\n}\n\n/* This is just a wrapper for listTypeTryConversionRaw() that is\n * able to try conversion before adding elements to the list. */\nvoid listTypeTryConversionAppend(robj *o, robj **argv, int start, int end,\n                                 beforeConvertCB fn, void *data)\n{\n    listTypeTryConversionRaw(o, LIST_CONV_GROWING, argv, start, end, fn, data);\n}\n\n/* The function pushes an element to the specified list object 'subject',\n * at head or tail position as specified by 'where'.\n *\n * There is no need for the caller to increment the refcount of 'value' as\n * the function takes care of it if needed. */\nvoid listTypePush(robj *subject, robj *value, int where) {\n    if (subject->encoding == OBJ_ENCODING_QUICKLIST) {\n        int pos = (where == LIST_HEAD) ? 
QUICKLIST_HEAD : QUICKLIST_TAIL;\n        if (value->encoding == OBJ_ENCODING_INT) {\n            char buf[32];\n            ll2string(buf, 32, (long)value->ptr);\n            quicklistPush(subject->ptr, buf, strlen(buf), pos);\n        } else {\n            quicklistPush(subject->ptr, value->ptr, sdslen(value->ptr), pos);\n        }\n    } else if (subject->encoding == OBJ_ENCODING_LISTPACK) {\n        if (value->encoding == OBJ_ENCODING_INT) {\n            subject->ptr = (where == LIST_HEAD) ?\n                lpPrependInteger(subject->ptr, (long)value->ptr) :\n                lpAppendInteger(subject->ptr, (long)value->ptr);\n        } else {\n            subject->ptr = (where == LIST_HEAD) ?\n                lpPrepend(subject->ptr, value->ptr, sdslen(value->ptr)) :\n                lpAppend(subject->ptr, value->ptr, sdslen(value->ptr));\n        }\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\nvoid *listPopSaver(unsigned char *data, size_t sz) {\n    return createStringObject((char*)data,sz);\n}\n\nrobj *listTypePop(robj *subject, int where) {\n    robj *value = NULL;\n\n    if (subject->encoding == OBJ_ENCODING_QUICKLIST) {\n        long long vlong;\n        int ql_where = where == LIST_HEAD ? QUICKLIST_HEAD : QUICKLIST_TAIL;\n        if (quicklistPopCustom(subject->ptr, ql_where, (unsigned char **)&value,\n                               NULL, &vlong, listPopSaver)) {\n            if (!value)\n                value = createStringObjectFromLongLong(vlong);\n        }\n    } else if (subject->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *p;\n        unsigned char *vstr;\n        int64_t vlen;\n        unsigned char intbuf[LP_INTBUF_SIZE];\n\n        p = (where == LIST_HEAD) ? 
lpFirst(subject->ptr) : lpLast(subject->ptr);\n        if (p) {\n            vstr = lpGet(p, &vlen, intbuf);\n            value = createStringObject((char*)vstr, vlen);\n            subject->ptr = lpDelete(subject->ptr, p, NULL);\n        }\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n    return value;\n}\n\nunsigned long listTypeLength(const robj *subject) {\n    if (subject->encoding == OBJ_ENCODING_QUICKLIST) {\n        return quicklistCount(subject->ptr);\n    } else if (subject->encoding == OBJ_ENCODING_LISTPACK) {\n        return lpLength(subject->ptr);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\nsize_t listTypeAllocSize(const robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_LIST);\n    size_t size = 0;\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        size = quicklistAllocSize(o->ptr);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        size = lpBytes(o->ptr);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n    return size;\n}\n\n/* Initialize an iterator at the specified index. */\nvoid listTypeInitIterator(listTypeIterator *li, robj *subject,\n                          long index, unsigned char direction) {\n    li->subject = subject;\n    li->encoding = subject->encoding;\n    li->direction = direction;\n    /* LIST_HEAD means start at TAIL and move *towards* head.\n     * LIST_TAIL means start at HEAD and move *towards* tail. */\n    if (li->encoding == OBJ_ENCODING_QUICKLIST) {\n        int iter_direction = direction == LIST_HEAD ? AL_START_TAIL : AL_START_HEAD;\n        quicklistInitIteratorAtIdx(&li->iter, li->subject->ptr, iter_direction, index);\n    } else if (li->encoding == OBJ_ENCODING_LISTPACK) {\n        li->lpi = lpSeek(subject->ptr, index);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/* Sets the direction of an iterator. 
*/\nvoid listTypeSetIteratorDirection(listTypeIterator *li, listTypeEntry *entry, unsigned char direction) {\n    if (li->direction == direction) return;\n\n    li->direction = direction;\n    if (li->encoding == OBJ_ENCODING_QUICKLIST) {\n        int dir = direction == LIST_HEAD ? AL_START_TAIL : AL_START_HEAD;\n        quicklistSetDirection(&li->iter, dir);\n    } else if (li->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = li->subject->ptr;\n        /* Note that the iterator for listpack always points to the entry after the\n         * current one, so we need to update the position of the iterator depending\n         * on the direction. */\n        li->lpi = (direction == LIST_TAIL) ? lpNext(lp, entry->lpe) : lpPrev(lp, entry->lpe);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/* Clean up the iterator. */\nvoid listTypeResetIterator(listTypeIterator *li) {\n    if (li->encoding == OBJ_ENCODING_QUICKLIST)\n        quicklistResetIterator(&li->iter);\n}\n\n/* Stores a pointer to the current entry in the provided entry structure\n * and advances the position of the iterator. Returns 1 when the current\n * entry is in fact an entry, 0 otherwise. 
*/\nint listTypeNext(listTypeIterator *li, listTypeEntry *entry) {\n    /* Protect from converting when iterating */\n    serverAssert(li->subject->encoding == li->encoding);\n\n    entry->li = li;\n    if (li->encoding == OBJ_ENCODING_QUICKLIST) {\n        return quicklistNext(&li->iter, &entry->entry);\n    } else if (li->encoding == OBJ_ENCODING_LISTPACK) {\n        entry->lpe = li->lpi;\n        if (entry->lpe != NULL) {\n            li->lpi = (li->direction == LIST_TAIL) ?\n                lpNext(li->subject->ptr,li->lpi) : lpPrev(li->subject->ptr,li->lpi);\n            return 1;\n        }\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n    return 0;\n}\n\n/* Get entry value at the current position of the iterator.\n * When the function returns NULL, it populates the integer value by\n * reference in 'lval'. Otherwise a pointer to the string is returned,\n * and 'vlen' is set to the length of the string. */\nunsigned char *listTypeGetValue(listTypeEntry *entry, size_t *vlen, long long *lval) {\n    unsigned char *vstr = NULL;\n    if (entry->li->encoding == OBJ_ENCODING_QUICKLIST) {\n        if (entry->entry.value) {\n            vstr = entry->entry.value;\n            *vlen = entry->entry.sz;\n        } else {\n            *lval = entry->entry.longval;\n        }\n    } else if (entry->li->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned int slen;\n        vstr = lpGetValue(entry->lpe, &slen, lval);\n        *vlen = slen;\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n    return vstr;\n}\n\n/* Return entry or NULL at the current position of the iterator. 
*/\nrobj *listTypeGet(listTypeEntry *entry) {\n    unsigned char *vstr;\n    size_t vlen;\n    long long lval;\n\n    vstr = listTypeGetValue(entry, &vlen, &lval);\n    if (vstr) \n        return createStringObject((char *)vstr, vlen);\n    else\n        return createStringObjectFromLongLong(lval);\n}\n\nvoid listTypeInsert(listTypeEntry *entry, robj *value, int where) {\n    robj *subject = entry->li->subject;\n    value = getDecodedObject(value);\n    sds str = value->ptr;\n    size_t len = sdslen(str);\n\n    if (entry->li->encoding == OBJ_ENCODING_QUICKLIST) {\n        if (where == LIST_TAIL) {\n            quicklistInsertAfter(&entry->li->iter, &entry->entry, str, len);\n        } else if (where == LIST_HEAD) {\n            quicklistInsertBefore(&entry->li->iter, &entry->entry, str, len);\n        }\n    } else if (entry->li->encoding == OBJ_ENCODING_LISTPACK) {\n        int lpw = (where == LIST_TAIL) ? LP_AFTER : LP_BEFORE;\n        subject->ptr = lpInsertString(subject->ptr, (unsigned char *)str,\n                                      len, entry->lpe, lpw, &entry->lpe);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n    decrRefCount(value);\n}\n\n/* Replaces entry at the current position of the iterator. */\nvoid listTypeReplace(listTypeEntry *entry, robj *value) {\n    robj *subject = entry->li->subject;\n    value = getDecodedObject(value);\n    sds str = value->ptr;\n    size_t len = sdslen(str);\n\n    if (entry->li->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklistReplaceEntry(&entry->li->iter, &entry->entry, str, len);\n    } else if (entry->li->encoding == OBJ_ENCODING_LISTPACK) {\n        subject->ptr = lpReplace(subject->ptr, &entry->lpe, (unsigned char *)str, len);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n\n    decrRefCount(value);\n}\n\n/* Replace entry at offset 'index' by 'value'.\n *\n * Returns 1 if replace happened.\n * Returns 0 if replace failed and no changes happened. 
*/\nint listTypeReplaceAtIndex(robj *o, int index, robj *value) {\n    value = getDecodedObject(value);\n    sds vstr = value->ptr;\n    size_t vlen = sdslen(vstr);\n    int replaced = 0;\n\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklist *ql = o->ptr;\n        replaced = quicklistReplaceAtIndex(ql, index, vstr, vlen);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *p = lpSeek(o->ptr,index);\n        if (p) {\n            o->ptr = lpReplace(o->ptr, &p, (unsigned char *)vstr, vlen);\n            replaced = 1;\n        }\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n\n    decrRefCount(value);\n    return replaced;\n}\n\n/* Compare the given object with the entry at the current position.\n *\n * If the list encoding is quicklist, delegates to quicklistCompare(),\n * passing along the cached integer conversion state.\n *\n * If the list encoding is listpack, uses lpCompare().\n *\n * Returns 1 if equal, 0 otherwise.\n */\nint listTypeEqual(listTypeEntry *entry, robj *o, size_t object_len,\n                  long long *cached_longval, int *cached_valid) {\n    serverAssertWithInfo(NULL,o,sdsEncodedObject(o));\n    if (entry->li->encoding == OBJ_ENCODING_QUICKLIST) {\n        return quicklistCompare(&entry->entry,o->ptr,object_len,cached_longval,cached_valid);\n    } else if (entry->li->encoding == OBJ_ENCODING_LISTPACK) {\n        return lpCompare(entry->lpe,o->ptr,object_len,cached_longval,cached_valid);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/* Delete the element pointed to. 
*/\nvoid listTypeDelete(listTypeIterator *iter, listTypeEntry *entry) {\n    if (entry->li->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklistDelEntry(&iter->iter, &entry->entry);\n    } else if (entry->li->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *p = entry->lpe;\n        iter->subject->ptr = lpDelete(iter->subject->ptr,p,&p);\n\n        /* Update position of the iterator depending on the direction */\n        if (iter->direction == LIST_TAIL)\n            iter->lpi = p;\n        else {\n            if (p) {\n                iter->lpi = lpPrev(iter->subject->ptr,p);\n            } else {\n                /* We deleted the last element, so we need to set the\n                 * iterator to the last element. */\n                iter->lpi = lpLast(iter->subject->ptr);\n            }\n        }\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/* This is a helper function for the COPY command.\n * Duplicate a list object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * The resulting object always has refcount set to 1 */\nrobj *listTypeDup(robj *o) {\n    robj *lobj;\n\n    serverAssert(o->type == OBJ_LIST);\n\n    switch (o->encoding) {\n        case OBJ_ENCODING_LISTPACK:\n            lobj = createObject(OBJ_LIST, lpDup(o->ptr));\n            break;\n        case OBJ_ENCODING_QUICKLIST:\n            lobj = createObject(OBJ_LIST, quicklistDup(o->ptr));\n            break;\n        default:\n            serverPanic(\"Unknown list encoding\");\n            break;\n    }\n    lobj->encoding = o->encoding;\n    return lobj;\n}\n\n/* Delete a range of elements from the list. 
*/\nvoid listTypeDelRange(robj *subject, long start, long count) {\n    if (subject->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklistDelRange(subject->ptr, start, count);\n    } else if (subject->encoding == OBJ_ENCODING_LISTPACK) {\n        subject->ptr = lpDeleteRange(subject->ptr, start, count);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n}\n\n/*-----------------------------------------------------------------------------\n * List Commands\n *----------------------------------------------------------------------------*/\n\n/* Implements LPUSH/RPUSH/LPUSHX/RPUSHX. \n * 'xx': push if key exists. */\nvoid pushGenericCommand(client *c, int where, int xx) {\n    unsigned long llen;\n    dictEntryLink link;\n    int j;\n    size_t oldsize = 0;\n\n    kvobj *lobj = lookupKeyWriteWithLink(c->db, c->argv[1], &link);\n    if (checkType(c,lobj,OBJ_LIST)) return;\n    if (!lobj) {\n        if (xx) {\n            addReply(c, shared.czero);\n            return;\n        }\n\n        lobj = createListListpackObject();\n        dbAddByLink(c->db, c->argv[1], &lobj, &link);\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(lobj);\n    listTypeTryConversionAppend(lobj,c->argv,2,c->argc-1,NULL,NULL);\n    for (j = 2; j < c->argc; j++) {\n        listTypePush(lobj,c->argv[j],where);\n        server.dirty++;\n    }\n\n    llen = listTypeLength(lobj);\n    addReplyLongLong(c, llen);\n\n    char *event = (where == LIST_HEAD) ? \"lpush\" : \"rpush\";\n    keyModified(c,c->db,c->argv[1],lobj,1);\n    notifyKeyspaceEvent(NOTIFY_LIST,event,c->argv[1],c->db->id);\n    updateKeysizesHist(c->db, OBJ_LIST, llen - (c->argc - 2), llen);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), lobj, oldsize, kvobjAllocSize(lobj));\n}\n\n/* LPUSH <key> <element> [<element> ...] 
*/\nvoid lpushCommand(client *c) {\n    pushGenericCommand(c,LIST_HEAD,0);\n}\n\n/* RPUSH <key> <element> [<element> ...] */\nvoid rpushCommand(client *c) {\n    pushGenericCommand(c,LIST_TAIL,0);\n}\n\n/* LPUSHX <key> <element> [<element> ...] */\nvoid lpushxCommand(client *c) {\n    pushGenericCommand(c,LIST_HEAD,1);\n}\n\n/* RPUSHX <key> <element> [<element> ...] */\nvoid rpushxCommand(client *c) {\n    pushGenericCommand(c,LIST_TAIL,1);\n}\n\n/* LINSERT <key> (BEFORE|AFTER) <pivot> <element> */\nvoid linsertCommand(client *c) {\n    int where;\n    kvobj *subject;\n    listTypeIterator iter;\n    listTypeEntry entry;\n    int inserted = 0;\n    size_t oldsize = 0;\n\n    if (strcasecmp(c->argv[2]->ptr,\"after\") == 0) {\n        where = LIST_TAIL;\n    } else if (strcasecmp(c->argv[2]->ptr,\"before\") == 0) {\n        where = LIST_HEAD;\n    } else {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    if ((subject = lookupKeyWriteOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,subject,OBJ_LIST)) return;\n\n    /* We're not sure if this value can be inserted yet, but we cannot\n     * convert the list inside the iterator. We don't want to loop over\n     * the list twice (once to see if the value can be inserted and once\n     * to do the actual insert), so we assume this value can be inserted\n     * and convert the listpack to a regular list if necessary. 
*/\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(subject);\n    listTypeTryConversionAppend(subject,c->argv,4,4,NULL,NULL);\n\n    /* Seek pivot from head to tail */\n    listTypeInitIterator(&iter, subject, 0, LIST_TAIL);\n    const size_t object_len = sdslen(c->argv[3]->ptr);\n    long long cached_longval = 0;\n    int cached_valid = 0;\n    while (listTypeNext(&iter, &entry)) {\n        if (listTypeEqual(&entry,c->argv[3],object_len,&cached_longval,&cached_valid)) {\n            listTypeInsert(&entry,c->argv[4],where);\n            inserted = 1;\n            break;\n        }\n    }\n    listTypeResetIterator(&iter);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), subject, oldsize, kvobjAllocSize(subject));\n\n    if (inserted) {\n        keyModified(c,c->db,c->argv[1],subject,1);\n        notifyKeyspaceEvent(NOTIFY_LIST,\"linsert\",\n                            c->argv[1],c->db->id);\n        server.dirty++;\n        unsigned long ll = listTypeLength(subject);\n        updateKeysizesHist(c->db, OBJ_LIST, ll-1, ll);\n    } else {\n        /* Notify client of a failed insert */\n        addReplyLongLong(c,-1);\n        return;\n    }\n\n    addReplyLongLong(c,listTypeLength(subject));\n}\n\n/* LLEN <key> */\nvoid llenCommand(client *c) {\n    kvobj *kv = lookupKeyReadOrReply(c,c->argv[1],shared.czero);\n    if (kv == NULL || checkType(c,kv,OBJ_LIST)) return;\n    addReplyLongLong(c,listTypeLength(kv));\n}\n\n/* LINDEX <key> <index> */\nvoid lindexCommand(client *c) {\n    kvobj *o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]);\n    if (o == NULL || checkType(c,o,OBJ_LIST)) return;\n    long index;\n\n    if ((getLongFromObjectOrReply(c, c->argv[2], &index, NULL) != C_OK))\n        return;\n\n    listTypeIterator iter;\n    listTypeEntry entry;\n    unsigned char *vstr;\n    size_t vlen;\n    long long lval;\n\n    listTypeInitIterator(&iter, o, index, LIST_TAIL);\n  
  if (listTypeNext(&iter, &entry)) {\n        vstr = listTypeGetValue(&entry,&vlen,&lval);\n        if (vstr) {\n            addReplyBulkCBuffer(c, vstr, vlen);\n        } else {\n            addReplyBulkLongLong(c, lval);\n        }\n    } else {\n        addReplyNull(c);\n    }\n\n    listTypeResetIterator(&iter);\n}\n\n/* LSET <key> <index> <element> */\nvoid lsetCommand(client *c) {\n    kvobj *o = lookupKeyWriteOrReply(c, c->argv[1], shared.nokeyerr);\n    if (o == NULL || checkType(c,o,OBJ_LIST)) return;\n    long index;\n    robj *value = c->argv[3];\n    size_t oldsize = 0;\n\n    if ((getLongFromObjectOrReply(c, c->argv[2], &index, NULL) != C_OK))\n        return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    listTypeTryConversionAppend(o,c->argv,3,3,NULL,NULL);\n    if (listTypeReplaceAtIndex(o,index,value)) {\n        /* We might replace a big item with a small one or vice versa, but we've\n         * already handled the growing case in listTypeTryConversionAppend()\n         * above, so here we just need to try the conversion for shrinking. */\n        listTypeTryConversion(o,LIST_CONV_SHRINKING,NULL,NULL);\n        addReply(c,shared.ok);\n        keyModified(c,c->db,c->argv[1],o,1);\n        notifyKeyspaceEvent(NOTIFY_LIST,\"lset\",c->argv[1],c->db->id);\n        server.dirty++;\n    } else {\n        addReplyErrorObject(c,shared.outofrangeerr);\n    }\n    /* Always update db allocation sizes since listTypeTryConversionAppend()\n     * might have changed object encoding. 
*/\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n}\n\n/* A helper function like addListRangeReply; see below for more details.\n * The difference is that here we are returning nested arrays, like:\n * 1) keyname\n * 2) 1) element1\n *    2) element2\n *\n * It also actually pops the elements from the list by calling listElementsRemoved,\n * which is where server.dirty and the notifications are maintained.\n *\n * 'deleted' is an optional output argument to get an indication\n * if the key got deleted by this function. */\nvoid listPopRangeAndReplyWithKey(client *c, robj *o, robj *key, int where, long count, int signal, int *deleted) {\n    long llen = listTypeLength(o);\n    long rangelen = (count > llen) ? llen : count;\n    long rangestart = (where == LIST_HEAD) ? 0 : -rangelen;\n    long rangeend = (where == LIST_HEAD) ? rangelen - 1 : -1;\n    int reverse = (where == LIST_HEAD) ? 0 : 1;\n\n    /* We return the key name just once, and an array of elements */\n    addReplyArrayLen(c, 2);\n    addReplyBulk(c, key);\n    addListRangeReply(c, o, rangestart, rangeend, reverse);\n\n    /* Pop these elements. */\n    size_t oldsize = 0;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    listTypeDelRange(o, rangestart, rangelen);\n    /* Maintain the notifications and dirty. */\n    listElementsRemoved(c, key, where, o, rangelen, oldsize, signal, deleted);\n}\n\n/* Extracted from `addListRangeReply()` to reply with a quicklist list.\n * Note that the purpose is to make the methods small so that the\n * code in the loop can be inlined better to improve performance. */\nvoid addListQuicklistRangeReply(client *c, robj *o, int from, int rangelen, int reverse) {\n    /* Return the result in the form of a multi-bulk reply */\n    addReplyArrayLen(c,rangelen);\n\n    int direction = reverse ? 
AL_START_TAIL : AL_START_HEAD;\n    quicklistIter iter;\n    quicklistInitIteratorAtIdx(&iter, o->ptr, direction, from);\n    while(rangelen--) {\n        quicklistEntry qe;\n        serverAssert(quicklistNext(&iter, &qe)); /* fail on corrupt data */\n        if (qe.value) {\n            addReplyBulkCBuffer(c,qe.value,qe.sz);\n        } else {\n            addReplyBulkLongLong(c,qe.longval);\n        }\n    }\n    quicklistResetIterator(&iter);\n}\n\n/* Extracted from `addListRangeReply()` to reply with a listpack list.\n * Note that the purpose is to make the methods small so that the\n * code in the loop can be inlined better to improve performance. */\nvoid addListListpackRangeReply(client *c, robj *o, int from, int rangelen, int reverse) {\n    unsigned char *lp = o->ptr;\n    unsigned char *p = lpSeek(lp, from);\n    const size_t lpbytes = lpBytes(lp);\n    int64_t vlen;\n\n    /* Return the result in the form of a multi-bulk reply */\n    addReplyArrayLen(c,rangelen);\n\n    while(rangelen--) {\n        serverAssert(p); /* fail on corrupt data */\n        unsigned char buf[LP_INTBUF_SIZE];\n        unsigned char *vstr = lpGet(p,&vlen,buf);\n        addReplyBulkCBuffer(c,vstr,vlen);\n        p = reverse ? lpPrev(lp,p) : lpNextWithBytes(lp,p,lpbytes);\n    }\n}\n\n/* A helper for replying with a list's range between the inclusive start and end\n * indexes as multi-bulk, with support for negative indexes. Note that start\n * must not be greater than end, or an empty array is returned. When the reverse\n * argument is set to a non-zero value, the reply is reversed so that elements\n * are returned from end to start. */\nvoid addListRangeReply(client *c, robj *o, long start, long end, int reverse) {\n    long rangelen, llen = listTypeLength(o);\n\n    /* Convert negative indexes. 
*/\n    if (start < 0) start = llen+start;\n    if (end < 0) end = llen+end;\n    if (start < 0) start = 0;\n\n    /* Invariant: start >= 0, so this test will be true when end < 0.\n     * The range is empty when start > end or start >= length. */\n    if (start > end || start >= llen) {\n        addReply(c,shared.emptyarray);\n        return;\n    }\n    if (end >= llen) end = llen-1;\n    rangelen = (end-start)+1;\n\n    int from = reverse ? end : start;\n    if (o->encoding == OBJ_ENCODING_QUICKLIST)\n        addListQuicklistRangeReply(c, o, from, rangelen, reverse);\n    else if (o->encoding == OBJ_ENCODING_LISTPACK)\n        addListListpackRangeReply(c, o, from, rangelen, reverse);\n    else\n        serverPanic(\"Unknown list encoding\");\n}\n\n/* A housekeeping helper for list elements popping tasks.\n *\n * If 'signal' is 0, skip calling keyModified().\n *\n * 'deleted' is an optional output argument to get an indication\n * if the key got deleted by this function. */\nvoid listElementsRemoved(client *c, robj *key, int where, robj *o, long count, size_t oldsize, int signal, int *deleted) {\n    char *event = (where == LIST_HEAD) ? \"lpop\" : \"rpop\";\n    unsigned long llen = listTypeLength(o);\n    \n    notifyKeyspaceEvent(NOTIFY_LIST, event, key, c->db->id);\n    updateKeysizesHist(c->db, OBJ_LIST, llen + count, llen);\n    if (llen == 0) {\n        if (deleted) *deleted = 1;\n\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(key->ptr), o, oldsize, kvobjAllocSize(o));\n        dbDelete(c->db, key);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, c->db->id);\n    } else {\n        listTypeTryConversion(o, LIST_CONV_SHRINKING, NULL, NULL);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(key->ptr), o, oldsize, kvobjAllocSize(o));\n        if (deleted) *deleted = 0;\n    }\n    if (signal)\n        keyModified(c, c->db, key, llen ? 
o : NULL, 1);\n    server.dirty += count;\n}\n\n/* Implements the generic list pop operation for LPOP/RPOP.\n * The where argument specifies which end of the list is operated on. An\n * optional count may be provided as the third argument of the client's\n * command. */\nvoid popGenericCommand(client *c, int where) {\n    int hascount = (c->argc == 3);\n    long count = 0;\n    robj *value;\n\n    if (c->argc > 3) {\n        addReplyErrorArity(c);\n        return;\n    } else if (hascount) {\n        /* Parse the optional count argument. */\n        if (getPositiveLongFromObjectOrReply(c,c->argv[2],&count,NULL) != C_OK) \n            return;\n    }\n\n    kvobj *o = lookupKeyWriteOrReply(c, c->argv[1], hascount ? shared.nullarray[c->resp] : shared.null[c->resp]);\n    if (o == NULL || checkType(c, o, OBJ_LIST))\n        return;\n\n    if (hascount && !count) {\n        /* Fast exit path. */\n        addReply(c,shared.emptyarray);\n        return;\n    }\n\n    size_t oldsize = 0;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    if (!count) {\n        /* Pop a single element. This is POP's original behavior that replies\n         * with a bulk string. */\n        value = listTypePop(o,where);\n        serverAssert(value != NULL);\n        addReplyBulk(c,value);\n        decrRefCount(value);\n        listElementsRemoved(c,c->argv[1],where,o,1,oldsize,1,NULL);\n    } else {\n        /* Pop a range of elements. An addition to the original POP command,\n         *  which replies with a multi-bulk. */\n        long llen = listTypeLength(o);\n        long rangelen = (count > llen) ? llen : count;\n        long rangestart = (where == LIST_HEAD) ? 0 : -rangelen;\n        long rangeend = (where == LIST_HEAD) ? rangelen - 1 : -1;\n        int reverse = (where == LIST_HEAD) ? 
0 : 1;\n\n        addListRangeReply(c,o,rangestart,rangeend,reverse);\n        listTypeDelRange(o,rangestart,rangelen);\n        listElementsRemoved(c,c->argv[1],where,o,rangelen,oldsize,1,NULL);\n    }\n}\n\n/* Like popGenericCommand but works with multiple keys.\n * Takes multiple keys and returns multiple elements from just one key.\n *\n * 'numkeys' is the number of keys.\n * 'count' is the number of elements requested to pop.\n *\n * Always replies with an array. */\nvoid mpopGenericCommand(client *c, robj **keys, int numkeys, int where, long count) {\n    int j;\n    robj *o;\n    robj *key;\n\n    for (j = 0; j < numkeys; j++) {\n        key = keys[j];\n        o = lookupKeyWrite(c->db, key);\n\n        /* Non-existing key, move to next key. */\n        if (o == NULL) continue;\n\n        if (checkType(c, o, OBJ_LIST)) return;\n\n        long llen = listTypeLength(o);\n        /* Empty list, move to next key. */\n        if (llen == 0) continue;\n\n        /* Pop a range of elements, replying with nested arrays. */\n        listPopRangeAndReplyWithKey(c, o, key, where, count, 1, NULL);\n\n        /* Replicate it as [LR]POP COUNT. */\n        robj *count_obj = createStringObjectFromLongLong((count > llen) ? llen : count);\n        rewriteClientCommandVector(c, 3,\n                                   (where == LIST_HEAD) ? shared.lpop : shared.rpop,\n                                   key, count_obj);\n        decrRefCount(count_obj);\n        return;\n    }\n\n    /* Looks like we were not able to pop any elements. 
*/\n    addReplyNullArray(c);\n}\n\n/* LPOP <key> [count] */\nvoid lpopCommand(client *c) {\n    popGenericCommand(c,LIST_HEAD);\n}\n\n/* RPOP <key> [count] */\nvoid rpopCommand(client *c) {\n    popGenericCommand(c,LIST_TAIL);\n}\n\n/* LRANGE <key> <start> <stop> */\nvoid lrangeCommand(client *c) {\n    kvobj *o;\n    long start, end;\n\n    if ((getLongFromObjectOrReply(c, c->argv[2], &start, NULL) != C_OK) ||\n        (getLongFromObjectOrReply(c, c->argv[3], &end, NULL) != C_OK)) return;\n\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptyarray)) == NULL\n         || checkType(c,o,OBJ_LIST)) return;\n\n    addListRangeReply(c,o,start,end,0);\n}\n\n/* LTRIM <key> <start> <stop> */\nvoid ltrimCommand(client *c) {\n    kvobj *o;\n    long start, end, llen, ltrim, rtrim, llenNew;\n    size_t oldsize = 0;\n\n    if ((getLongFromObjectOrReply(c, c->argv[2], &start, NULL) != C_OK) ||\n        (getLongFromObjectOrReply(c, c->argv[3], &end, NULL) != C_OK)) return;\n\n    if ((o = lookupKeyWriteOrReply(c,c->argv[1],shared.ok)) == NULL ||\n        checkType(c,o,OBJ_LIST)) return;\n    llen = listTypeLength(o);\n\n    /* convert negative indexes */\n    if (start < 0) start = llen+start;\n    if (end < 0) end = llen+end;\n    if (start < 0) start = 0;\n\n    /* Invariant: start >= 0, so this test will be true when end < 0.\n     * The range is empty when start > end or start >= length. 
*/\n    if (start > end || start >= llen) {\n        /* Out of range start or start > end result in empty list */\n        ltrim = llen;\n        rtrim = 0;\n    } else {\n        if (end >= llen) end = llen-1;\n        ltrim = start;\n        rtrim = llen-end-1;\n    }\n\n    /* Remove list elements to perform the trim */\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    if (o->encoding == OBJ_ENCODING_QUICKLIST) {\n        quicklistDelRange(o->ptr,0,ltrim);\n        quicklistDelRange(o->ptr,-rtrim,rtrim);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        o->ptr = lpDeleteRange(o->ptr,0,ltrim);\n        o->ptr = lpDeleteRange(o->ptr,-rtrim,rtrim);\n    } else {\n        serverPanic(\"Unknown list encoding\");\n    }\n\n    notifyKeyspaceEvent(NOTIFY_LIST,\"ltrim\",c->argv[1],c->db->id);\n    if ((llenNew = listTypeLength(o)) == 0) {\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n        dbDeleteSkipKeysizesUpdate(c->db,c->argv[1]);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],c->db->id);\n        llenNew = -1; /* Indicate key deleted to updateKeysizesHist() */\n    } else {\n        listTypeTryConversion(o,LIST_CONV_SHRINKING,NULL,NULL);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n    }\n    updateKeysizesHist(c->db, OBJ_LIST, llen, llenNew);\n    keyModified(c, c->db, c->argv[1], (llenNew > 0) ? o : NULL, 1);\n    server.dirty += (ltrim + rtrim);\n    addReply(c,shared.ok);\n}\n\n/* LPOS key element [RANK rank] [COUNT num-matches] [MAXLEN len]\n *\n * The \"rank\" is the position of the match, so if it is 1, the first match\n * is returned, if it is 2 the second match is returned and so forth.\n * It is 1 by default. 
If negative, it has the same meaning, but the search is\n * performed starting from the end of the list.\n *\n * If COUNT is given, instead of returning the single element, a list of\n * all the matching elements up to \"num-matches\" is returned. COUNT can\n * be combined with RANK in order to return only the elements starting\n * from the Nth match. If COUNT is zero, all the matching elements are returned.\n *\n * MAXLEN tells the command to scan a max of len elements. If zero (the\n * default), all the elements in the list are scanned if needed.\n *\n * The returned element indexes always refer to what LINDEX\n * would return. So the first element from the head is 0, and so forth. */\nvoid lposCommand(client *c) {\n    robj *ele;\n    ele = c->argv[2];\n    int direction = LIST_TAIL;\n    long rank = 1, count = -1, maxlen = 0; /* Count -1: option not given. */\n\n    /* Parse the optional arguments. */\n    for (int j = 3; j < c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        int moreargs = (c->argc-1)-j;\n\n        if (!strcasecmp(opt,\"RANK\") && moreargs) {\n            j++;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j], -LONG_MAX, LONG_MAX, &rank, NULL) != C_OK)\n                return;\n            if (rank == 0) {\n                addReplyError(c,\"RANK can't be zero: use 1 to start from \"\n                                \"the first match, 2 from the second ... 
\"\n                                \"or use negative to start from the end of the list\");\n                return;\n            }\n        } else if (!strcasecmp(opt,\"COUNT\") && moreargs) {\n            j++;\n            if (getPositiveLongFromObjectOrReply(c, c->argv[j], &count,\n              \"COUNT can't be negative\") != C_OK)\n                return;\n        } else if (!strcasecmp(opt,\"MAXLEN\") && moreargs) {\n            j++;\n            if (getPositiveLongFromObjectOrReply(c, c->argv[j], &maxlen,\n              \"MAXLEN can't be negative\") != C_OK)\n                return;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* A negative rank means start from the tail. */\n    if (rank < 0) {\n        rank = -rank;\n        direction = LIST_HEAD;\n    }\n\n    /* We return NULL or an empty array if there is no such key (or\n     * if we find no matches, depending on the presence of the COUNT option). */\n    kvobj *o = lookupKeyRead(c->db,c->argv[1]);\n    if (o == NULL) {\n        if (count != -1)\n            addReply(c,shared.emptyarray);\n        else\n            addReply(c,shared.null[c->resp]);\n        return;\n    }\n    if (checkType(c,o,OBJ_LIST)) return;\n\n    /* If we got the COUNT option, prepare to emit an array. */\n    void *arraylenptr = NULL;\n    if (count != -1) arraylenptr = addReplyDeferredLen(c);\n\n    /* Seek the element. */\n    listTypeIterator li;\n    listTypeEntry entry;\n    listTypeInitIterator(&li, o, direction == LIST_HEAD ? 
-1 : 0, direction);\n    long llen = listTypeLength(o);\n    long index = 0, matches = 0, matchindex = -1, arraylen = 0;\n    const size_t ele_len = sdslen(ele->ptr);\n    long long cached_longval = 0;\n    int cached_valid = 0;\n    while (listTypeNext(&li, &entry) && (maxlen == 0 || index < maxlen)) {\n        if (listTypeEqual(&entry,ele,ele_len,&cached_longval,&cached_valid)) {\n            matches++;\n            matchindex = (direction == LIST_TAIL) ? index : llen - index - 1;\n            if (matches >= rank) {\n                if (arraylenptr) {\n                    arraylen++;\n                    addReplyLongLong(c,matchindex);\n                    if (count && matches-rank+1 >= count) break;\n                } else {\n                    break;\n                }\n            }\n        }\n        index++;\n        matchindex = -1; /* Remember if we exit the loop without a match. */\n    }\n    listTypeResetIterator(&li);\n\n    /* Reply to the client. Note that arraylenptr is not NULL only if\n     * the COUNT option was selected. 
*/\n    if (arraylenptr != NULL) {\n        setDeferredArrayLen(c,arraylenptr,arraylen);\n    } else {\n        if (matchindex != -1)\n            addReplyLongLong(c,matchindex);\n        else\n            addReply(c,shared.null[c->resp]);\n    }\n}\n\n/* LREM <key> <count> <element> */\nvoid lremCommand(client *c) {\n    robj *obj;\n    obj = c->argv[3];\n    long toremove;\n    long removed = 0;\n\n    if (getRangeLongFromObjectOrReply(c, c->argv[2], -LONG_MAX, LONG_MAX, &toremove, NULL) != C_OK)\n        return;\n\n    kvobj *subject = lookupKeyWriteOrReply(c, c->argv[1], shared.czero);\n    if (subject == NULL || checkType(c,subject,OBJ_LIST)) return;\n\n    listTypeIterator li;\n    if (toremove < 0) {\n        toremove = -toremove;\n        listTypeInitIterator(&li, subject, -1, LIST_HEAD);\n    } else {\n        listTypeInitIterator(&li, subject, 0, LIST_TAIL);\n    }\n\n    listTypeEntry entry;\n    const size_t object_len = sdslen(c->argv[3]->ptr);\n    long long cached_longval = 0;\n    int cached_valid = 0;\n    size_t oldsize = 0;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(subject);\n    while (listTypeNext(&li, &entry)) {\n        if (listTypeEqual(&entry,obj,object_len,&cached_longval,&cached_valid)) {\n            listTypeDelete(&li, &entry);\n            server.dirty++;\n            removed++;\n            if (toremove && removed == toremove) break;\n        }\n    }\n    listTypeResetIterator(&li);\n\n    if (removed) {\n        long ll = listTypeLength(subject);\n        updateKeysizesHist(c->db, OBJ_LIST, ll + removed, ll);\n        notifyKeyspaceEvent(NOTIFY_LIST,\"lrem\",c->argv[1],c->db->id);\n\n        if (ll == 0) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), subject, oldsize, kvobjAllocSize(subject));\n            dbDelete(c->db,c->argv[1]);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],c->db->id);\n        } 
else {\n            listTypeTryConversion(subject,LIST_CONV_SHRINKING,NULL,NULL);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), subject, oldsize, kvobjAllocSize(subject));\n        }\n        keyModified(c, c->db, c->argv[1], ll ? subject : NULL, 1);\n    }\n\n    addReplyLongLong(c,removed);\n}\n\nvoid lmoveHandlePush(client *c, robj *dstkey, robj *dstobj, robj *value,\n                     int where) {\n    size_t oldsize = 0;\n    /* Create the list if the key does not exist */\n    if (!dstobj) {\n        dstobj = createListListpackObject();\n        dbAdd(c->db, dstkey, &dstobj);\n    }\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(dstobj);\n    listTypeTryConversionAppend(dstobj,&value,0,0,NULL,NULL);\n    listTypePush(dstobj,value,where);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(dstkey->ptr), dstobj, oldsize, kvobjAllocSize(dstobj));\n    keyModified(c,c->db,dstkey,dstobj,1);\n\n    notifyKeyspaceEvent(NOTIFY_LIST,\n                        where == LIST_HEAD ? \"lpush\" : \"rpush\",\n                        dstkey,\n                        c->db->id);\n    /* Always send the pushed value to the client. 
*/\n    addReplyBulk(c,value);\n}\n\nint getListPositionFromObjectOrReply(client *c, robj *arg, int *position) {\n    if (strcasecmp(arg->ptr,\"right\") == 0) {\n        *position = LIST_TAIL;\n    } else if (strcasecmp(arg->ptr,\"left\") == 0) {\n        *position = LIST_HEAD;\n    } else {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\nrobj *getStringObjectFromListPosition(int position) {\n    if (position == LIST_HEAD) {\n        return shared.left;\n    } else {\n        // LIST_TAIL\n        return shared.right;\n    }\n}\n\nvoid lmoveGenericCommand(client *c, int wherefrom, int whereto) {\n    size_t oldsize = 0;\n    kvobj *kvsrc = lookupKeyWriteOrReply(c,c->argv[1],shared.null[c->resp]);\n    if (kvsrc == NULL || checkType(c,kvsrc,OBJ_LIST)) return;\n\n    if (listTypeLength(kvsrc) == 0) {\n        /* This may only happen after loading very old RDB files. Recent\n         * versions of Redis delete keys of empty lists. */\n        addReplyNull(c);\n    } else {\n        robj *kvdst, *skey = c->argv[1];\n        int64_t oldlen = 0, newlen = 1; /* init lengths assuming new dst object */\n\n        if ((kvdst = lookupKeyWrite(c->db,c->argv[2])) != NULL) {\n            if (checkType(c,kvdst,OBJ_LIST)) return;\n            /* dst object exists */\n            oldlen = (int64_t) listTypeLength(kvdst);\n            newlen = oldlen + 1;\n        }\n\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(kvsrc);\n        robj *value = listTypePop(kvsrc, wherefrom);\n        serverAssert(value); /* assertion for valgrind (avoid NPD) */\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kvsrc, oldsize, kvobjAllocSize(kvsrc));\n        lmoveHandlePush(c, c->argv[2], kvdst, value, whereto);\n        /* Update dst obj cardinality in KEYSIZES */\n        updateKeysizesHist(c->db, OBJ_LIST, oldlen, newlen);\n        /* Update 
src obj cardinality in KEYSIZES by listElementsRemoved() */\n        size_t srcsize = server.memory_tracking_enabled ? kvobjAllocSize(kvsrc) : 0;\n        listElementsRemoved(c, skey, wherefrom, kvsrc, 1, srcsize, 1, NULL);\n        /* listTypePop returns an object with its refcount incremented */\n        decrRefCount(value);\n\n        if (c->cmd->proc == blmoveCommand) {\n            rewriteClientCommandVector(c,5,shared.lmove,\n                                       c->argv[1],c->argv[2],c->argv[3],c->argv[4]);\n        } else if (c->cmd->proc == brpoplpushCommand) {\n            rewriteClientCommandVector(c,3,shared.rpoplpush,\n                                       c->argv[1],c->argv[2]);\n        }\n    }\n}\n\n/* LMOVE <source> <destination> (LEFT|RIGHT) (LEFT|RIGHT) */\nvoid lmoveCommand(client *c) {\n    int wherefrom, whereto;\n    if (getListPositionFromObjectOrReply(c,c->argv[3],&wherefrom)\n        != C_OK) return;\n    if (getListPositionFromObjectOrReply(c,c->argv[4],&whereto)\n        != C_OK) return;\n    lmoveGenericCommand(c, wherefrom, whereto);\n}\n\n/* This is the semantic of this command:\n *  RPOPLPUSH srclist dstlist:\n *    IF LLEN(srclist) > 0\n *      element = RPOP srclist\n *      LPUSH dstlist element\n *      RETURN element\n *    ELSE\n *      RETURN nil\n *    END\n *  END\n *\n * The idea is to be able to get an element from a list in a reliable way\n * since the element is not just returned but pushed against another list\n * as well. 
This command was originally proposed by Ezra Zygmuntowicz.\n */\nvoid rpoplpushCommand(client *c) {\n    lmoveGenericCommand(c, LIST_TAIL, LIST_HEAD);\n}\n\n/* Blocking RPOP/LPOP/LMPOP\n *\n * 'numkeys' is the number of keys.\n * 'timeout_idx' parameter position of block timeout.\n * 'where' LIST_HEAD for LEFT, LIST_TAIL for RIGHT.\n * 'count' is the number of elements requested to pop, or -1 for plain single pop.\n *\n * When count is -1, a reply of a single bulk-string will be used.\n * When count > 0, an array reply will be used. */\nvoid blockingPopGenericCommand(client *c, robj **keys, int numkeys, int where, int timeout_idx, long count) {\n    robj *o;\n    robj *key;\n    mstime_t timeout;\n    int j;\n\n    if (getTimeoutFromObjectOrReply(c,c->argv[timeout_idx],&timeout,UNIT_SECONDS)\n        != C_OK) return;\n\n    /* Traverse all input keys, we take action only based on one key. */\n    for (j = 0; j < numkeys; j++) {\n        key = keys[j];\n        o = lookupKeyWrite(c->db, key);\n\n        /* Non-existing key, move to next key. */\n        if (o == NULL) continue;\n\n        if (checkType(c, o, OBJ_LIST)) return;\n\n        long llen = listTypeLength(o);\n        /* Empty list, move to next key. */\n        if (llen == 0) continue;\n\n        if (count != -1) {\n            /* BLMPOP, non empty list, like a normal [LR]POP with count option.\n             * The difference here we pop a range of elements in a nested arrays way. */\n            listPopRangeAndReplyWithKey(c, o, key, where, count, 1, NULL);\n\n            /* Replicate it as [LR]POP COUNT. */\n            robj *count_obj = createStringObjectFromLongLong((count > llen) ? llen : count);\n            rewriteClientCommandVector(c, 3,\n                                       (where == LIST_HEAD) ? 
shared.lpop : shared.rpop,\n                                       key, count_obj);\n            decrRefCount(count_obj);\n            return;\n        }\n\n        /* Non empty list, this is like a normal [LR]POP. */\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(o);\n        robj *value = listTypePop(o,where);\n        serverAssert(value != NULL);\n\n        addReplyArrayLen(c,2);\n        addReplyBulk(c,key);\n        addReplyBulk(c,value);\n        decrRefCount(value);\n        listElementsRemoved(c,key,where,o,1,oldsize,1,NULL);\n\n        /* Replicate it as an [LR]POP instead of B[LR]POP. */\n        rewriteClientCommandVector(c,2,\n            (where == LIST_HEAD) ? shared.lpop : shared.rpop,\n            key);\n        return;\n    }\n\n    /* If we are not allowed to block the client, the only thing\n     * we can do is treating it as a timeout (even with timeout 0). */\n    if (c->flags & CLIENT_DENY_BLOCKING) {\n        addReplyNullArray(c);\n        return;\n    }\n\n    /* If the keys do not exist we must block */\n    blockForKeys(c,BLOCKED_LIST,keys,numkeys,timeout,0);\n}\n\n/* BLPOP <key> [<key> ...] <timeout> */\nvoid blpopCommand(client *c) {\n    blockingPopGenericCommand(c,c->argv+1,c->argc-2,LIST_HEAD,c->argc-1,-1);\n}\n\n/* BRPOP <key> [<key> ...] <timeout> */\nvoid brpopCommand(client *c) {\n    blockingPopGenericCommand(c,c->argv+1,c->argc-2,LIST_TAIL,c->argc-1,-1);\n}\n\nvoid blmoveGenericCommand(client *c, int wherefrom, int whereto, mstime_t timeout) {\n    robj *key = lookupKeyWrite(c->db, c->argv[1]);\n    if (checkType(c,key,OBJ_LIST)) return;\n\n    if (key == NULL) {\n        if (c->flags & CLIENT_DENY_BLOCKING) {\n            /* Blocking against an empty list when blocking is not allowed\n             * returns immediately. */\n            addReplyNull(c);\n        } else {\n            /* The list is empty and the client blocks. 
*/\n            blockForKeys(c,BLOCKED_LIST,c->argv + 1,1,timeout,0);\n        }\n    } else {\n        /* The list exists and has elements, so\n         * the regular lmoveCommand is executed. */\n        serverAssertWithInfo(c,key,listTypeLength(key) > 0);\n        lmoveGenericCommand(c,wherefrom,whereto);\n    }\n}\n\n/* BLMOVE <source> <destination> (LEFT|RIGHT) (LEFT|RIGHT) <timeout> */\nvoid blmoveCommand(client *c) {\n    mstime_t timeout;\n    int wherefrom, whereto;\n    if (getListPositionFromObjectOrReply(c,c->argv[3],&wherefrom)\n        != C_OK) return;\n    if (getListPositionFromObjectOrReply(c,c->argv[4],&whereto)\n        != C_OK) return;\n    if (getTimeoutFromObjectOrReply(c,c->argv[5],&timeout,UNIT_SECONDS)\n        != C_OK) return;\n    blmoveGenericCommand(c,wherefrom,whereto,timeout);\n}\n\n/* BRPOPLPUSH <source> <destination> <timeout> */\nvoid brpoplpushCommand(client *c) {\n    mstime_t timeout;\n    if (getTimeoutFromObjectOrReply(c,c->argv[3],&timeout,UNIT_SECONDS)\n        != C_OK) return;\n    blmoveGenericCommand(c, LIST_TAIL, LIST_HEAD, timeout);\n}\n\n/* LMPOP/BLMPOP\n *\n * 'numkeys_idx' parameter position of key number.\n * 'is_block' this indicates whether it is a blocking variant. */\nvoid lmpopGenericCommand(client *c, int numkeys_idx, int is_block) {\n    long j;\n    long numkeys = 0;      /* Number of keys. */\n    int where = 0;         /* HEAD for LEFT, TAIL for RIGHT. */\n    long count = -1;       /* Reply will consist of up to count elements, depending on the list's length. */\n\n    /* Parse the numkeys. */\n    if (getRangeLongFromObjectOrReply(c, c->argv[numkeys_idx], 1, LONG_MAX,\n                                      &numkeys, \"numkeys should be greater than 0\") != C_OK)\n        return;\n\n    /* Parse the where. where_idx: the index of where in the c->argv. 
*/\n    long where_idx = numkeys_idx + numkeys + 1;\n    if (where_idx >= c->argc) {\n        addReplyErrorObject(c, shared.syntaxerr);\n        return;\n    }\n    if (getListPositionFromObjectOrReply(c, c->argv[where_idx], &where) != C_OK)\n        return;\n\n    /* Parse the optional arguments. */\n    for (j = where_idx + 1; j < c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        int moreargs = (c->argc - 1) - j;\n\n        if (count == -1 && !strcasecmp(opt, \"COUNT\") && moreargs) {\n            j++;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j], 1, LONG_MAX,\n                                              &count,\"count should be greater than 0\") != C_OK)\n                return;\n        } else {\n            addReplyErrorObject(c, shared.syntaxerr);\n            return;\n        }\n    }\n\n    if (count == -1) count = 1;\n\n    if (is_block) {\n        /* BLOCK. We will handle CLIENT_DENY_BLOCKING flag in blockingPopGenericCommand. */\n        blockingPopGenericCommand(c, c->argv+numkeys_idx+1, numkeys, where, 1, count);\n    } else {\n        /* NON-BLOCK */\n        mpopGenericCommand(c, c->argv+numkeys_idx+1, numkeys, where, count);\n    }\n}\n\n/* LMPOP numkeys <key> [<key> ...] (LEFT|RIGHT) [COUNT count] */\nvoid lmpopCommand(client *c) {\n    lmpopGenericCommand(c, 1, 0);\n}\n\n/* BLMPOP timeout numkeys <key> [<key> ...] (LEFT|RIGHT) [COUNT count] */\nvoid blmpopCommand(client *c) {\n    lmpopGenericCommand(c, 2, 1);\n}\n"
  },
  {
    "path": "src/t_set.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n#include \"server.h\"\n#include \"intset.h\"  /* Compact integer set structure */\n\n/*-----------------------------------------------------------------------------\n * Set Commands\n *----------------------------------------------------------------------------*/\n\nvoid sunionDiffGenericCommand(client *c, robj **setkeys, int setnum,\n                              robj *dstkey, int op);\n\n/* Factory method to return a set that *can* hold \"value\". When the object has\n * an integer-encodable value, an intset will be returned. Otherwise a listpack\n * or a regular hash table.\n *\n * The size hint indicates approximately how many items will be added which is\n * used to determine the initial representation. */\nrobj *setTypeCreate(sds value, size_t size_hint) {\n    if (isSdsRepresentableAsLongLong(value,NULL) == C_OK && size_hint <= server.set_max_intset_entries)\n        return createIntsetObject();\n    if (size_hint <= server.set_max_listpack_entries)\n        return createSetListpackObject();\n\n    /* We may oversize the set by using the hint if the hint is not accurate,\n     * but we will assume this is acceptable to maximize performance. */\n    robj *o = createSetObject();\n    dictExpand(o->ptr, size_hint);\n    return o;\n}\n\n/* Check if the existing set should be converted to another encoding based off the\n * the size hint. 
*/\nvoid setTypeMaybeConvert(robj *set, size_t size_hint) {\n    if ((set->encoding == OBJ_ENCODING_LISTPACK && size_hint > server.set_max_listpack_entries)\n        || (set->encoding == OBJ_ENCODING_INTSET && size_hint > server.set_max_intset_entries))\n    {\n        setTypeConvertAndExpand(set, OBJ_ENCODING_HT, size_hint, 1);\n    }\n}\n\n/* Return the maximum number of entries to store in an intset. */\nstatic size_t intsetMaxEntries(void) {\n    size_t max_entries = server.set_max_intset_entries;\n    /* limit to 1G entries due to intset internals. */\n    if (max_entries >= 1<<30) max_entries = 1<<30;\n    return max_entries;\n}\n\n/* Converts intset to HT if it contains too many entries. */\nstatic void maybeConvertIntset(robj *subject) {\n    serverAssert(subject->encoding == OBJ_ENCODING_INTSET);\n    if (intsetLen(subject->ptr) > intsetMaxEntries())\n        setTypeConvert(subject,OBJ_ENCODING_HT);\n}\n\n/* When you know all set elements are integers, call this to convert the set to\n * an intset. No conversion happens if the set contains too many entries for an\n * intset. */\nstatic void maybeConvertToIntset(robj *set) {\n    if (set->encoding == OBJ_ENCODING_INTSET) return; /* already intset */\n    if (setTypeSize(set) > intsetMaxEntries()) return; /* can't use intset */\n    intset *is = intsetNew();\n    char *str;\n    size_t len = 0;\n    int64_t llval = 0;\n    setTypeIterator si;\n    setTypeInitIterator(&si, set);\n    while (setTypeNext(&si, &str, &len, &llval) != -1) {\n        if (str) {\n            /* If the element is returned as a string, we may be able to convert\n             * it to integer. This happens for OBJ_ENCODING_HT. 
*/\n            serverAssert(string2ll(str, len, (long long *)&llval));\n        }\n        uint8_t success = 0;\n        is = intsetAdd(is, llval, &success);\n        serverAssert(success);\n    }\n    setTypeResetIterator(&si);\n    freeSetObject(set); /* frees the internals but not robj itself */\n    set->ptr = is;\n    set->encoding = OBJ_ENCODING_INTSET;\n}\n\n/* Add the specified sds value into a set.\n *\n * If the value was already member of the set, nothing is done and 0 is\n * returned, otherwise the new element is added and 1 is returned. */\nint setTypeAdd(robj *subject, sds value) {\n    return setTypeAddAux(subject, value, sdslen(value), 0, 1);\n}\n\n/* Add member. This function is optimized for the different encodings. The\n * value can be provided as an sds string (indicated by passing str_is_sds =\n * 1), as string and length (str_is_sds = 0) or as an integer in which case str\n * is set to NULL and llval is provided instead.\n *\n * Returns 1 if the value was added and 0 if it was already a member. */\nint setTypeAddAux(robj *set, char *str, size_t len, int64_t llval, int str_is_sds) {\n    char tmpbuf[LONG_STR_SIZE];\n    if (!str) {\n        if (set->encoding == OBJ_ENCODING_INTSET) {\n            uint8_t success = 0;\n            set->ptr = intsetAdd(set->ptr, llval, &success);\n            if (success) maybeConvertIntset(set);\n            return success;\n        }\n        /* Convert int to string. */\n        len = ll2string(tmpbuf, sizeof tmpbuf, llval);\n        str = tmpbuf;\n        str_is_sds = 0;\n    }\n\n    serverAssert(str);\n    if (set->encoding == OBJ_ENCODING_HT) {\n        /* Avoid duping the string if it is an sds string. */\n        sds sdsval = str_is_sds ? (sds)str : sdsnewlen(str, len);\n        dict *ht = set->ptr;\n        dictEntryLink bucket, link = dictFindLink(ht, sdsval, &bucket);\n        if (link == NULL) {\n            /* Key doesn't already exist in the set. Add it but dup the key. 
*/\n            if (sdsval == str) sdsval = sdsdup(sdsval);\n            dictSetKeyAtLink(ht, sdsval, &bucket, 1);\n            *htGetMetadataSize(ht) += sdsAllocSize(sdsval);\n            return 1;\n        } else if (sdsval != str) {\n            /* String is already a member. Free our temporary sds copy. */\n            sdsfree(sdsval);\n            return 0;\n        }\n    } else if (set->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = set->ptr;\n        unsigned char *p = lpFirst(lp);\n        if (p != NULL)\n            p = lpFind(lp, p, (unsigned char*)str, len, 0);\n        if (p == NULL) {\n            /* Not found.  */\n            if (lpLength(lp) < server.set_max_listpack_entries &&\n                len <= server.set_max_listpack_value &&\n                lpSafeToAdd(lp, len))\n            {\n                if (str == tmpbuf) {\n                    /* This came in as integer so we can avoid parsing it again.\n                     * TODO: Create and use lpFindInteger; don't go via string. */\n                    lp = lpAppendInteger(lp, llval);\n                } else {\n                    lp = lpAppend(lp, (unsigned char*)str, len);\n                }\n                set->ptr = lp;\n            } else {\n                /* Size limit is reached. Convert to hashtable and add. 
*/\n                setTypeConvertAndExpand(set, OBJ_ENCODING_HT, lpLength(lp) + 1, 1);\n                sds newval = sdsnewlen(str,len);\n                serverAssert(dictAdd(set->ptr,newval,NULL) == DICT_OK);\n                *htGetMetadataSize(set->ptr) += sdsAllocSize(newval);\n            }\n            return 1;\n        }\n    } else if (set->encoding == OBJ_ENCODING_INTSET) {\n        long long value;\n        if (string2ll(str, len, &value)) {\n            uint8_t success = 0;\n            set->ptr = intsetAdd(set->ptr,value,&success);\n            if (success) {\n                maybeConvertIntset(set);\n                return 1;\n            }\n        } else {\n            /* Check if listpack encoding is safe not to cross any threshold. */\n            size_t maxelelen = 0, totsize = 0;\n            unsigned long n = intsetLen(set->ptr);\n            if (n != 0) {\n                size_t elelen1 = sdigits10(intsetMax(set->ptr));\n                size_t elelen2 = sdigits10(intsetMin(set->ptr));\n                maxelelen = max(elelen1, elelen2);\n                size_t s1 = lpEstimateBytesRepeatedInteger(intsetMax(set->ptr), n);\n                size_t s2 = lpEstimateBytesRepeatedInteger(intsetMin(set->ptr), n);\n                totsize = max(s1, s2);\n            }\n            if (intsetLen((const intset*)set->ptr) < server.set_max_listpack_entries &&\n                len <= server.set_max_listpack_value &&\n                maxelelen <= server.set_max_listpack_value &&\n                lpSafeToAdd(NULL, totsize + len))\n            {\n                /* In the \"safe to add\" check above we assumed all elements in\n                 * the intset are of size maxelelen. This is an upper bound. 
*/\n                setTypeConvertAndExpand(set, OBJ_ENCODING_LISTPACK,\n                                        intsetLen(set->ptr) + 1, 1);\n                unsigned char *lp = set->ptr;\n                lp = lpAppend(lp, (unsigned char *)str, len);\n                lp = lpShrinkToFit(lp);\n                set->ptr = lp;\n                return 1;\n            } else {\n                setTypeConvertAndExpand(set, OBJ_ENCODING_HT,\n                                        intsetLen(set->ptr) + 1, 1);\n                /* The set *was* an intset and this value is not integer\n                 * encodable, so dictAdd should always work. */\n                sds newval = sdsnewlen(str,len);\n                serverAssert(dictAdd(set->ptr,newval,NULL) == DICT_OK);\n                *htGetMetadataSize(set->ptr) += sdsAllocSize(newval);\n                return 1;\n            }\n        }\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n    return 0;\n}\n\n/* Deletes a value provided as an sds string from the set. Returns 1 if the\n * value was deleted and 0 if it was not a member of the set. */\nint setTypeRemove(robj *setobj, sds value) {\n    return setTypeRemoveAux(setobj, value, sdslen(value), 0, 1);\n}\n\n/* Remove a member. This function is optimized for the different encodings. The\n * value can be provided as an sds string (indicated by passing str_is_sds =\n * 1), as string and length (str_is_sds = 0) or as an integer in which case str\n * is set to NULL and llval is provided instead.\n *\n * Returns 1 if the value was deleted and 0 if it was not a member of the set. 
*/\nint setTypeRemoveAux(robj *setobj, char *str, size_t len, int64_t llval, int str_is_sds) {\n    char tmpbuf[LONG_STR_SIZE];\n    if (!str) {\n        if (setobj->encoding == OBJ_ENCODING_INTSET) {\n            int success;\n            setobj->ptr = intsetRemove(setobj->ptr,llval,&success);\n            return success;\n        }\n        len = ll2string(tmpbuf, sizeof tmpbuf, llval);\n        str = tmpbuf;\n        str_is_sds = 0;\n    }\n\n    if (setobj->encoding == OBJ_ENCODING_HT) {\n        sds sdsval = str_is_sds ? (sds)str : sdsnewlen(str, len);\n        int deleted = (dictDelete(setobj->ptr, sdsval) == DICT_OK);\n        if (sdsval != str) sdsfree(sdsval); /* free temp copy */\n        return deleted;\n    } else if (setobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = setobj->ptr;\n        unsigned char *p = lpFirst(lp);\n        if (p == NULL) return 0;\n        p = lpFind(lp, p, (unsigned char*)str, len, 0);\n        if (p != NULL) {\n            lp = lpDelete(lp, p, NULL);\n            setobj->ptr = lp;\n            return 1;\n        }\n    } else if (setobj->encoding == OBJ_ENCODING_INTSET) {\n        long long llval;\n        if (string2ll(str, len, &llval)) {\n            int success;\n            setobj->ptr = intsetRemove(setobj->ptr,llval,&success);\n            if (success) return 1;\n        }\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n    return 0;\n}\n\n/* Check if an sds string is a member of the set. Returns 1 if the value is a\n * member of the set and 0 if it isn't. */\nint setTypeIsMember(robj *subject, sds value) {\n    return setTypeIsMemberAux(subject, value, sdslen(value), 0, 1);\n}\n\n/* Membership checking optimized for the different encodings. 
The value can be\n * provided as an sds string (indicated by passing str_is_sds = 1), as string\n * and length (str_is_sds = 0) or as an integer in which case str is set to NULL\n * and llval is provided instead.\n *\n * Returns 1 if the value is a member of the set and 0 if it isn't. */\nint setTypeIsMemberAux(robj *set, char *str, size_t len, int64_t llval, int str_is_sds) {\n    char tmpbuf[LONG_STR_SIZE];\n    if (!str) {\n        if (set->encoding == OBJ_ENCODING_INTSET)\n            return intsetFind(set->ptr, llval);\n        len = ll2string(tmpbuf, sizeof tmpbuf, llval);\n        str = tmpbuf;\n        str_is_sds = 0;\n    }\n\n    if (set->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = set->ptr;\n        unsigned char *p = lpFirst(lp);\n        return p && lpFind(lp, p, (unsigned char*)str, len, 0);\n    } else if (set->encoding == OBJ_ENCODING_INTSET) {\n        long long llval;\n        return string2ll(str, len, &llval) && intsetFind(set->ptr, llval);\n    } else if (set->encoding == OBJ_ENCODING_HT && str_is_sds) {\n        return dictFind(set->ptr, (sds)str) != NULL;\n    } else if (set->encoding == OBJ_ENCODING_HT) {\n        sds sdsval = sdsnewlen(str, len);\n        int result = dictFind(set->ptr, sdsval) != NULL;\n        sdsfree(sdsval);\n        return result;\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n}\n\nvoid setTypeInitIterator(setTypeIterator *si, robj *subject) {\n    si->subject = subject;\n    si->encoding = subject->encoding;\n    if (si->encoding == OBJ_ENCODING_HT) {\n        dictInitIterator(&si->di, subject->ptr);\n    } else if (si->encoding == OBJ_ENCODING_INTSET) {\n        si->ii = 0;\n    } else if (si->encoding == OBJ_ENCODING_LISTPACK) {\n        si->lpi = NULL;\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n}\n\nvoid setTypeResetIterator(setTypeIterator *si) {\n    if (si->encoding == OBJ_ENCODING_HT)\n        dictResetIterator(&si->di);\n}\n\n/* Move to 
the next entry in the set. Returns the object at the current\n * position, as a string or as an integer.\n *\n * Since set elements can be internally be stored as SDS strings, char buffers or\n * simple arrays of integers, setTypeNext returns the encoding of the\n * set object you are iterating, and will populate the appropriate pointers\n * (str and len) or (llele) depending on whether the value is stored as a string\n * or as an integer internally.\n *\n * If OBJ_ENCODING_HT is returned, then str points to an sds string and can be\n * used as such. If OBJ_ENCODING_INTSET, then llele is populated and str is\n * pointed to NULL. If OBJ_ENCODING_LISTPACK is returned, the value can be\n * either a string or an integer. If *str is not NULL, then str and len are\n * populated with the string content and length. Otherwise, llele populated with\n * an integer value.\n *\n * Note that str, len and llele pointers should all be passed and cannot\n * be NULL since the function will try to defensively populate the non\n * used field with values which are easy to trap if misused.\n *\n * When there are no more elements -1 is returned. */\nint setTypeNext(setTypeIterator *si, char **str, size_t *len, int64_t *llele) {\n    if (si->encoding == OBJ_ENCODING_HT) {\n        dictEntry *de = dictNext(&si->di);\n        if (de == NULL) return -1;\n        *str = dictGetKey(de);\n        *len = sdslen(*str);\n        *llele = -123456789; /* Not needed. Defensive. 
*/\n    } else if (si->encoding == OBJ_ENCODING_INTSET) {\n        if (!intsetGet(si->subject->ptr,si->ii++,llele))\n            return -1;\n        *str = NULL;\n    } else if (si->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = si->subject->ptr;\n        unsigned char *lpi = si->lpi;\n        if (lpi == NULL) {\n            lpi = lpFirst(lp);\n        } else {\n            lpi = lpNext(lp, lpi);\n        }\n        if (lpi == NULL) return -1;\n        si->lpi = lpi;\n        unsigned int l = 0;\n        *str = (char *)lpGetValue(lpi, &l, (long long *)llele);\n        *len = (size_t)l;\n    } else {\n        serverPanic(\"Wrong set encoding in setTypeNext\");\n    }\n    return si->encoding;\n}\n\n/* The not copy on write friendly version but easy to use version\n * of setTypeNext() is setTypeNextObject(), returning new SDS\n * strings. So if you don't retain a pointer to this object you should call\n * sdsfree() against it.\n *\n * This function is the way to go for write operations where COW is not\n * an issue. */\nsds setTypeNextObject(setTypeIterator *si) {\n    int64_t intele = 0;\n    char *str;\n    size_t len = 0;\n\n    if (setTypeNext(si, &str, &len, &intele) == -1) return NULL;\n    if (str != NULL) return sdsnewlen(str, len);\n    return sdsfromlonglong(intele);\n}\n\n/* Return random element from a non empty set.\n * The returned element can be an int64_t value if the set is encoded\n * as an \"intset\" blob of integers, or an string.\n *\n * The caller provides three pointers to be populated with the right\n * object. The return value of the function is the object->encoding\n * field of the object and can be used by the caller to check if the\n * int64_t pointer or the str and len pointers were populated, as for\n * setTypeNext. 
If OBJ_ENCODING_HT is returned, str is pointed to a\n * string which is actually an sds string and it can be used as such.\n *\n * Note that both the str, len and llele pointers should be passed and cannot\n * be NULL. If str is set to NULL, the value is an integer stored in llele. */\nint setTypeRandomElement(robj *setobj, char **str, size_t *len, int64_t *llele) {\n    if (setobj->encoding == OBJ_ENCODING_HT) {\n        dictEntry *de = dictGetFairRandomKey(setobj->ptr);\n        *str = dictGetKey(de);\n        *len = sdslen(*str);\n        *llele = -123456789; /* Not needed. Defensive. */\n    } else if (setobj->encoding == OBJ_ENCODING_INTSET) {\n        *llele = intsetRandom(setobj->ptr);\n        *str = NULL; /* Not needed. Defensive. */\n    } else if (setobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = setobj->ptr;\n        int r = rand() % lpLength(lp);\n        unsigned char *p = lpSeek(lp, r);\n        unsigned int l;\n        *str = (char *)lpGetValue(p, &l, (long long *)llele);\n        *len = (size_t)l;\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n    return setobj->encoding;\n}\n\n/* Pops a random element and returns it as an object. */\nrobj *setTypePopRandom(robj *set) {\n    robj *obj;\n    if (set->encoding == OBJ_ENCODING_LISTPACK) {\n        /* Find random and delete it without re-seeking the listpack. 
*/\n        unsigned int i = 0;\n        unsigned char *p = lpNextRandom(set->ptr, lpFirst(set->ptr), &i, 1, 1);\n        unsigned int len = 0; /* initialize to silence warning */\n        long long llele = 0; /* initialize to silence warning */\n        char *str = (char *)lpGetValue(p, &len, &llele);\n        if (str)\n            obj = createStringObject(str, len);\n        else\n            obj = createStringObjectFromLongLong(llele);\n        set->ptr = lpDelete(set->ptr, p, NULL);\n    } else {\n        char *str;\n        size_t len = 0;\n        int64_t llele = 0;\n        int encoding = setTypeRandomElement(set, &str, &len, &llele);\n        if (str)\n            obj = createStringObject(str, len);\n        else\n            obj = createStringObjectFromLongLong(llele);\n        setTypeRemoveAux(set, str, len, llele, encoding == OBJ_ENCODING_HT);\n    }\n    return obj;\n}\n\nunsigned long setTypeSize(const robj *subject) {\n    if (subject->encoding == OBJ_ENCODING_HT) {\n        return dictSize((const dict*)subject->ptr);\n    } else if (subject->encoding == OBJ_ENCODING_INTSET) {\n        return intsetLen((const intset*)subject->ptr);\n    } else if (subject->encoding == OBJ_ENCODING_LISTPACK) {\n        return lpLength((unsigned char *)subject->ptr);\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n}\n\nsize_t setTypeAllocSize(const robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_SET);\n    size_t size = 0;\n    if (o->encoding == OBJ_ENCODING_HT) {\n        dict *d = o->ptr;\n        size += sizeof(dict) + dictMemUsage(d) + *htGetMetadataSize(d);\n    } else if (o->encoding == OBJ_ENCODING_INTSET) {\n        size = intsetAllocSize(o->ptr);\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        size = lpBytes(o->ptr);\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n    return size;\n}\n\n/* Convert the set to specified encoding. 
The resulting dict (when converting\n * to a hash table) is presized to hold the number of elements in the original\n * set. */\nvoid setTypeConvert(robj *setobj, int enc) {\n    setTypeConvertAndExpand(setobj, enc, setTypeSize(setobj), 1);\n}\n\n/* Converts a set to the specified encoding, pre-sizing it for 'cap' elements.\n * The 'panic' argument controls whether to panic on OOM (panic=1) or return\n * C_ERR on OOM (panic=0). If panic=1 is given, this function always returns\n * C_OK. */\nint setTypeConvertAndExpand(robj *setobj, int enc, unsigned long cap, int panic) {\n    setTypeIterator si;\n    serverAssertWithInfo(NULL,setobj,setobj->type == OBJ_SET &&\n                             setobj->encoding != enc);\n\n    if (enc == OBJ_ENCODING_HT) {\n        dict *d = dictCreate(&setDictType);\n        sds element;\n\n        /* Presize the dict to avoid rehashing */\n        if (panic) {\n            dictExpand(d, cap);\n        } else if (dictTryExpand(d, cap) != DICT_OK) {\n            dictRelease(d);\n            return C_ERR;\n        }\n\n        /* To add the elements we extract integers and create redis objects */\n        size_t *alloc_size = htGetMetadataSize(d);\n        setTypeInitIterator(&si, setobj);\n        while ((element = setTypeNextObject(&si)) != NULL) {\n            serverAssert(dictAdd(d,element,NULL) == DICT_OK);\n            *alloc_size += sdsAllocSize(element);\n        }\n        setTypeResetIterator(&si);\n\n        freeSetObject(setobj); /* frees the internals but not setobj itself */\n        setobj->encoding = OBJ_ENCODING_HT;\n        setobj->ptr = d;\n    } else if (enc == OBJ_ENCODING_LISTPACK) {\n        /* Preallocate the minimum two bytes per element (enc/value + backlen) */\n        size_t estcap = cap * 2;\n        if (setobj->encoding == OBJ_ENCODING_INTSET && setTypeSize(setobj) > 0) {\n            /* If we're converting from intset, we have a better estimate. 
*/\n            size_t s1 = lpEstimateBytesRepeatedInteger(intsetMin(setobj->ptr), cap);\n            size_t s2 = lpEstimateBytesRepeatedInteger(intsetMax(setobj->ptr), cap);\n            estcap = max(s1, s2);\n        }\n        unsigned char *lp = lpNew(estcap);\n        char *str;\n        size_t len = 0;\n        int64_t llele = 0;\n        setTypeInitIterator(&si, setobj);\n        while (setTypeNext(&si, &str, &len, &llele) != -1) {\n            if (str != NULL)\n                lp = lpAppend(lp, (unsigned char *)str, len);\n            else\n                lp = lpAppendInteger(lp, llele);\n        }\n        setTypeResetIterator(&si);\n\n        freeSetObject(setobj); /* frees the internals but not setobj itself */\n        setobj->encoding = OBJ_ENCODING_LISTPACK;\n        setobj->ptr = lp;\n    } else {\n        serverPanic(\"Unsupported set conversion\");\n    }\n    return C_OK;\n}\n\n/* This is a helper function for the COPY command.\n * Duplicate a set object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * The resulting object always has refcount set to 1 */\nrobj *setTypeDup(robj *o) {\n    robj *set;\n\n    serverAssert(o->type == OBJ_SET);\n\n    /* Create a new set object that have the same encoding as the original object's encoding */\n    if (o->encoding == OBJ_ENCODING_INTSET) {\n        intset *is = o->ptr;\n        size_t size = intsetBlobLen(is);\n        intset *newis = zmalloc(size);\n        memcpy(newis,is,size);\n        set = createObject(OBJ_SET, newis);\n        set->encoding = OBJ_ENCODING_INTSET;\n    } else if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = o->ptr;\n        size_t sz = lpBytes(lp);\n        unsigned char *new_lp = zmalloc(sz);\n        memcpy(new_lp, lp, sz);\n        set = createObject(OBJ_SET, new_lp);\n        set->encoding = OBJ_ENCODING_LISTPACK;\n    } else if (o->encoding == OBJ_ENCODING_HT) {\n        set = createSetObject();\n       
 dict *d = o->ptr;\n        dictExpand(set->ptr, dictSize(d));\n        setTypeIterator si;\n        setTypeInitIterator(&si, o);\n        char *str;\n        size_t len = 0;\n        int64_t intobj = 0;\n        while (setTypeNext(&si, &str, &len, &intobj) != -1) {\n            setTypeAdd(set, (sds)str);\n        }\n        setTypeResetIterator(&si);\n    } else {\n        serverPanic(\"Unknown set encoding\");\n    }\n    return set;\n}\n\nvoid saddCommand(client *c) {\n    kvobj *set;\n    int j, added = 0;\n    dictEntryLink link;\n    size_t oldsize = 0;\n\n    set = lookupKeyWriteWithLink(c->db,c->argv[1], &link);\n    if (checkType(c,set,OBJ_SET)) return;\n    \n    if (set == NULL) {\n        robj *o = setTypeCreate(c->argv[2]->ptr, c->argc - 2);\n        set = dbAddByLink(c->db, c->argv[1], &o, &link);\n    } else {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(set);\n        setTypeMaybeConvert(set, c->argc - 2);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(set);\n    for (j = 2; j < c->argc; j++) {\n        if (setTypeAdd(set,c->argv[j]->ptr)) added++;\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    if (added) {\n        unsigned long size = setTypeSize(set);\n        updateKeysizesHist(c->db, OBJ_SET, size - added, size);\n        keyModified(c,c->db,c->argv[1],set,1);\n        notifyKeyspaceEvent(NOTIFY_SET,\"sadd\",c->argv[1],c->db->id);\n    }\n    server.dirty += added;\n    addReplyLongLong(c,added);\n}\n\nvoid sremCommand(client *c) {\n    int j, deleted = 0, keyremoved = 0;\n    size_t oldsize = 0;\n\n    kvobj *set = lookupKeyWriteOrReply(c, c->argv[1], shared.czero);\n    if (set == NULL || checkType(c, set, 
OBJ_SET))\n        return;\n\n    unsigned long oldSize = setTypeSize(set);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(set);\n\n    if (set->encoding == OBJ_ENCODING_HT)\n        dictPauseAutoResize((dict*)set->ptr);\n    for (j = 2; j < c->argc; j++) {\n        if (setTypeRemove(set,c->argv[j]->ptr)) {\n            deleted++;\n            if (setTypeSize(set) == 0) {\n                if (server.memory_tracking_enabled)\n                    updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n                dbDeleteSkipKeysizesUpdate(c->db, c->argv[1]);\n                keyremoved = 1;\n                break;\n            }\n        }\n    }\n    if (!keyremoved && set->encoding == OBJ_ENCODING_HT) {\n        dictResumeAutoResize((dict*)set->ptr);\n        dictShrinkIfNeeded((dict*)set->ptr);\n    }\n    if (server.memory_tracking_enabled && !keyremoved)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    if (deleted) {\n        int64_t newSize = oldSize - deleted;\n\n        keyModified(c, c->db, c->argv[1], keyremoved ? 
NULL : set, 1);\n        notifyKeyspaceEvent(NOTIFY_SET,\"srem\",c->argv[1],c->db->id);\n        if (keyremoved) {\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],\n                                c->db->id);\n            newSize = -1; /* removed */\n        }\n        updateKeysizesHist(c->db, OBJ_SET, oldSize, newSize);\n        server.dirty += deleted;\n    }\n    addReplyLongLong(c,deleted);\n}\n\nvoid smoveCommand(client *c) {\n    robj *srcset, *dstset, *ele;\n    size_t oldSrcAllocSize = 0, oldDstAllocSize = 0;\n    srcset = lookupKeyWrite(c->db,c->argv[1]);\n    dstset = lookupKeyWrite(c->db,c->argv[2]);\n    ele = c->argv[3];\n\n    /* If the source key does not exist return 0 */\n    if (srcset == NULL) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    /* If the source key has the wrong type, or the destination key\n     * is set and has the wrong type, return with an error. */\n    if (checkType(c,srcset,OBJ_SET) ||\n        checkType(c,dstset,OBJ_SET)) return;\n\n    /* If srcset and dstset are equal, SMOVE is a no-op */\n    if (srcset == dstset) {\n        addReply(c,setTypeIsMember(srcset,ele->ptr) ?\n            shared.cone : shared.czero);\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldSrcAllocSize = kvobjAllocSize(srcset);\n    int deleted = setTypeRemove(srcset,ele->ptr);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), srcset, oldSrcAllocSize, kvobjAllocSize(srcset));\n    /* If the element cannot be removed from the src set, return 0. 
*/\n    if (!deleted) {\n        addReply(c,shared.czero);\n        return;\n    }\n    notifyKeyspaceEvent(NOTIFY_SET,\"srem\",c->argv[1],c->db->id);\n\n    /* Update keysizes histogram */\n    int64_t srcNewLen = setTypeSize(srcset), srcOldLen = srcNewLen + 1;\n\n    /* Remove the src set from the database when empty */\n    if (srcNewLen == 0) {\n        dbDeleteSkipKeysizesUpdate(c->db,c->argv[1]);\n        srcNewLen = -1; /* removed */\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],c->db->id);\n    }\n    updateKeysizesHist(c->db, OBJ_SET, srcOldLen, srcNewLen);\n\n    /* Create the destination set when it doesn't exist */\n    if (!dstset) {\n        dstset = setTypeCreate(ele->ptr, 1);\n        dbAdd(c->db, c->argv[2], &dstset);\n    }\n\n    keyModified(c, c->db, c->argv[1], (srcNewLen > 0) ? srcset : NULL, 1);\n    server.dirty++;\n\n    if (server.memory_tracking_enabled)\n        oldDstAllocSize = kvobjAllocSize(dstset);\n    /* An extra key has changed when ele was successfully added to dstset */\n    if (setTypeAdd(dstset,ele->ptr)) {\n        unsigned long dstLen = setTypeSize(dstset);\n        updateKeysizesHist(c->db, OBJ_SET, dstLen - 1, dstLen);\n        server.dirty++;\n        keyModified(c,c->db,c->argv[2],dstset,1);\n        notifyKeyspaceEvent(NOTIFY_SET,\"sadd\",c->argv[2],c->db->id);\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[2]->ptr), dstset, oldDstAllocSize, kvobjAllocSize(dstset));\n    addReply(c,shared.cone);\n}\n\nvoid sismemberCommand(client *c) {\n    kvobj *set;\n    size_t oldsize = 0;\n\n    if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,set,OBJ_SET)) return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(set);\n    if (setTypeIsMember(set,c->argv[2]->ptr))\n        addReply(c,shared.cone);\n    else\n        addReply(c,shared.czero);\n    if (server.memory_tracking_enabled)\n      
  updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n}\n\nvoid smismemberCommand(client *c) {\n    /* Don't abort when the key cannot be found. Non-existing keys are empty\n     * sets, where SMISMEMBER should respond with a series of zeros. */\n    size_t oldsize = 0;\n    kvobj *set = lookupKeyRead(c->db, c->argv[1]);\n    if (set && checkType(c,set,OBJ_SET)) return;\n\n    addReplyArrayLen(c,c->argc - 2);\n\n    if (server.memory_tracking_enabled && set)\n        oldsize = kvobjAllocSize(set);\n    for (int j = 2; j < c->argc; j++) {\n        if (set && setTypeIsMember(set,c->argv[j]->ptr))\n            addReply(c,shared.cone);\n        else\n            addReply(c,shared.czero);\n    }\n    if (server.memory_tracking_enabled && set)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n}\n\nvoid scardCommand(client *c) {\n    kvobj *kv;\n\n    if ((kv = lookupKeyReadOrReply(c,c->argv[1],shared.czero)) == NULL ||\n        checkType(c,kv,OBJ_SET)) return;\n\n    addReplyLongLong(c,setTypeSize(kv));\n}\n\n/* Handle the \"SPOP key <count>\" variant. The normal version of the\n * command is handled by the spopCommand() function itself. */\n\n/* How many times bigger the set should be compared to the remaining size\n * for us to use the \"create new set\" strategy? Read later in the\n * implementation for more info. */\n#define SPOP_MOVE_STRATEGY_MUL 5\n\nvoid spopWithCountCommand(client *c) {\n    long l;\n    unsigned long count, size, toRemove;\n    size_t oldsize = 0;\n\n    /* Get the count argument */\n    if (getPositiveLongFromObjectOrReply(c,c->argv[2],&l,NULL) != C_OK) return;\n    count = (unsigned long) l;\n\n    /* Make sure a key with the name inputted exists, and that its type is\n     * indeed a set. 
Otherwise, return nil */\n    robj *set = lookupKeyWriteOrReply(c, c->argv[1], shared.emptyset[c->resp]);\n    if (set == NULL || checkType(c, set, OBJ_SET)) return;\n\n    /* If count is zero, serve an empty set ASAP to avoid special\n     * cases later. */\n    if (count == 0) {\n        addReply(c,shared.emptyset[c->resp]);\n        return;\n    }\n\n    size = setTypeSize(set);\n    toRemove = (count >= size) ? size : count;\n\n    /* Generate an SPOP keyspace notification */\n    notifyKeyspaceEvent(NOTIFY_SET,\"spop\",c->argv[1],c->db->id);\n    server.dirty += toRemove;\n\n    /* CASE 1:\n     * The number of requested elements is greater than or equal to\n     * the number of elements inside the set: simply return the whole set. */\n    if (count >= size) {\n        /* We just return the entire set */\n        sunionDiffGenericCommand(c,c->argv+1,1,NULL,SET_OP_UNION);\n\n        /* Delete the set as it is now empty */\n        dbDelete(c->db,c->argv[1]);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],c->db->id);\n\n        /* todo: Move the spop notification to be executed after the command logic. */\n\n        /* Propagate this command as a DEL or UNLINK operation */\n        robj *aux = server.lazyfree_lazy_server_del ? shared.unlink : shared.del;\n        rewriteClientCommandVector(c, 2, aux, c->argv[1]);\n        keyModified(c,c->db,c->argv[1],NULL,1);\n        return;\n    }\n\n    /* Cases 2 and 3 require replicating SPOP as a set of SREM commands.\n     * Prepare our replication argument vector. Also send the array length\n     * which is common to both code paths. */\n    unsigned long batchsize = count > 1024 ? 1024 : count;\n    robj **propargv = zmalloc(sizeof(robj *) * (2 + batchsize));\n    propargv[0] = shared.srem;\n    propargv[1] = c->argv[1];\n    unsigned long propindex = 2;\n    addReplySetLen(c,count);\n\n    /* Common iteration vars. 
*/\n    char *str;\n    size_t len = 0;\n    int64_t llele = 0;\n    unsigned long remaining = size-count; /* Elements left after SPOP. */\n\n    /* If we are here, the number of requested elements is less than the\n     * number of elements inside the set. Also we are sure that count < size.\n     * Use two different strategies.\n     *\n     * CASE 2: The number of elements to return is small compared to the\n     * set size. We can just extract random elements, return them to the\n     * client, and remove them from the set. */\n    if (remaining*SPOP_MOVE_STRATEGY_MUL > count &&\n        set->encoding == OBJ_ENCODING_LISTPACK)\n    {\n        /* Specialized case for listpack. Traverse it only once. */\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(set);\n        unsigned char *lp = set->ptr;\n        unsigned char *p = lpFirst(lp);\n        unsigned int index = 0;\n        unsigned char **ps = zmalloc(sizeof(char *) * count);\n        for (unsigned long i = 0; i < count; i++) {\n            p = lpNextRandom(lp, p, &index, count - i, 1);\n            unsigned int len = 0;\n            str = (char *)lpGetValue(p, &len, (long long *)&llele);\n\n            if (str) {\n                addReplyBulkCBuffer(c, str, len);\n                propargv[propindex++] = createStringObject(str, len);\n            } else {\n                addReplyBulkLongLong(c, llele);\n                propargv[propindex++] = createStringObjectFromLongLong(llele);\n            }\n            /* Replicate/AOF this command as an SREM operation */\n            if (propindex == 2 + batchsize) {\n                alsoPropagate(c->db->id, propargv, propindex, PROPAGATE_AOF | PROPAGATE_REPL);\n                for (unsigned long j = 2; j < propindex; j++) {\n                    decrRefCount(propargv[j]);\n                }\n                propindex = 2;\n            }\n\n            /* Store pointer for later deletion and move to next. 
*/\n            ps[i] = p;\n            p = lpNext(lp, p);\n            index++;\n        }\n        lp = lpBatchDelete(lp, ps, count);\n        zfree(ps);\n        set->ptr = lp;\n        updateKeysizesHist(c->db, OBJ_SET, size, size - count);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    } else if (remaining*SPOP_MOVE_STRATEGY_MUL > count) {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(set);\n        for (unsigned long i = 0; i < count; i++) {\n            propargv[propindex] = setTypePopRandom(set);\n            addReplyBulk(c, propargv[propindex]);\n            propindex++;\n            /* Replicate/AOF this command as an SREM operation */\n            if (propindex == 2 + batchsize) {\n                alsoPropagate(c->db->id, propargv, propindex, PROPAGATE_AOF | PROPAGATE_REPL);\n                for (unsigned long j = 2; j < propindex; j++) {\n                    decrRefCount(propargv[j]);\n                }\n                propindex = 2;\n            }\n        }\n        updateKeysizesHist(c->db, OBJ_SET, size, size - count);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    } else {\n    /* CASE 3: The number of elements to return is very big, approaching\n     * the size of the set itself. After some time extracting random elements\n     * from such a set becomes computationally expensive, so we use\n     * a different strategy, we extract random elements that we don't\n     * want to return (the elements that will remain part of the set),\n     * creating a new set as we do this (that will be stored as the original\n     * set). Then we return the elements left in the original set and\n     * release it. 
*/\n        robj *newset = NULL;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(set);\n\n        /* Create a new set with just the remaining elements. */\n        if (set->encoding == OBJ_ENCODING_LISTPACK) {\n            /* Specialized case for listpack. Traverse it only once. */\n            newset = createSetListpackObject();\n            unsigned char *lp = set->ptr;\n            unsigned char *p = lpFirst(lp);\n            unsigned int index = 0;\n            unsigned char **ps = zmalloc(sizeof(char *) * remaining);\n            for (unsigned long i = 0; i < remaining; i++) {\n                p = lpNextRandom(lp, p, &index, remaining - i, 1);\n                unsigned int len = 0;\n                str = (char *)lpGetValue(p, &len, (long long *)&llele);\n                setTypeAddAux(newset, str, len, llele, 0);\n                ps[i] = p;\n                p = lpNext(lp, p);\n                index++;\n            }\n            lp = lpBatchDelete(lp, ps, remaining);\n            zfree(ps);\n            set->ptr = lp;\n        } else {\n            while(remaining--) {\n                int encoding = setTypeRandomElement(set, &str, &len, &llele);\n                if (!newset) {\n                    newset = str ? createSetListpackObject() : createIntsetObject();\n                }\n                setTypeAddAux(newset, str, len, llele, encoding == OBJ_ENCODING_HT);\n                setTypeRemoveAux(set, str, len, llele, encoding == OBJ_ENCODING_HT);\n            }\n        }\n\n        /* Transfer the old set to the client. 
*/\n        setTypeIterator si;\n        setTypeInitIterator(&si, set);\n        while (setTypeNext(&si, &str, &len, &llele) != -1) {\n            if (str == NULL) {\n                addReplyBulkLongLong(c,llele);\n                propargv[propindex++] = createStringObjectFromLongLong(llele);\n            } else {\n                addReplyBulkCBuffer(c, str, len);\n                propargv[propindex++] = createStringObject(str, len);\n            }\n            /* Replicate/AOF this command as an SREM operation */\n            if (propindex == 2 + batchsize) {\n                alsoPropagate(c->db->id, propargv, propindex, PROPAGATE_AOF | PROPAGATE_REPL);\n                for (unsigned long i = 2; i < propindex; i++) {\n                    decrRefCount(propargv[i]);\n                }\n                propindex = 2;\n            }\n        }\n        setTypeResetIterator(&si);\n\n        /* Update key size histogram \"explicitly\" and not indirectly by dbReplaceValue()\n         * since function dbReplaceValue() assumes the entire set is being replaced, \n         * but here we're building the new set from the existing one. As a result, \n         * the size of the old set has already changed by the time we reach this point. */\n        updateKeysizesHist(c->db, OBJ_SET, size, size-count);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n        dbReplaceValue(c->db, c->argv[1], &newset, 0);\n        set = newset;\n    }\n\n    /* Replicate/AOF the remaining elements as an SREM operation */\n    if (propindex != 2) {\n        alsoPropagate(c->db->id, propargv, propindex, PROPAGATE_AOF | PROPAGATE_REPL);\n        for (unsigned long i = 2; i < propindex; i++) {\n            decrRefCount(propargv[i]);\n        }\n        propindex = 2;\n    }\n    zfree(propargv);\n\n    /* Don't propagate the command itself even if we incremented the\n     * dirty counter. 
We don't want to propagate an SPOP command since\n     * we propagated the command as a set of SREM operations using\n     * the alsoPropagate() API. */\n    preventCommandPropagation(c);\n    keyModified(c,c->db,c->argv[1],set,1);\n}\n\nvoid spopCommand(client *c) {\n    unsigned long size;\n    robj *ele;\n    size_t oldsize = 0;\n\n    if (c->argc == 3) {\n        spopWithCountCommand(c);\n        return;\n    } else if (c->argc > 3) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Make sure a key with the name inputted exists, and that its type is\n     * indeed a set */\n    kvobj *kv = lookupKeyWriteOrReply(c, c->argv[1], shared.null[c->resp]);\n    if (kv == NULL || checkType(c, kv, OBJ_SET)) return;\n\n    size = setTypeSize(kv);\n    updateKeysizesHist(c->db, OBJ_SET, size, size-1);\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(kv);\n\n    /* Pop a random element from the set */\n    ele = setTypePopRandom(kv);\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n\n    notifyKeyspaceEvent(NOTIFY_SET,\"spop\",c->argv[1],c->db->id);\n\n    /* Replicate/AOF this command as an SREM operation */\n    rewriteClientCommandVector(c,3,shared.srem,c->argv[1],ele);\n\n    /* Add the element to the reply */\n    addReplyBulk(c, ele);\n    decrRefCount(ele);\n\n    /* Delete the set if it's empty */\n    int deleted = 0;\n    if (setTypeSize(kv) == 0) {\n        deleted = 1;\n        dbDelete(c->db,c->argv[1]);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",c->argv[1],c->db->id);\n    }\n\n    /* Set has been modified */\n    keyModified(c, c->db, c->argv[1], deleted ? NULL : kv, 1);\n    server.dirty++;\n}\n\n/* Handle the \"SRANDMEMBER key <count>\" variant. The normal version of the\n * command is handled by the srandmemberCommand() function itself. 
*/\n\n/* How many times bigger the set should be compared to the requested size\n * for us not to use the \"remove elements\" strategy? Read later in the\n * implementation for more info. */\n#define SRANDMEMBER_SUB_STRATEGY_MUL 3\n\n/* If the client asks for a very large number of random elements,\n * queuing may consume an unlimited amount of memory, so we limit\n * the number of random elements sampled at a time. */\n#define SRANDFIELD_RANDOM_SAMPLE_LIMIT 1000\n\nvoid srandmemberWithCountCommand(client *c) {\n    long l;\n    unsigned long count, size;\n    int uniq = 1;\n    kvobj *set;\n    char *str;\n    size_t len = 0;\n    int64_t llele = 0;\n\n    dict *d;\n\n    if (getRangeLongFromObjectOrReply(c,c->argv[2],-LONG_MAX,LONG_MAX,&l,NULL) != C_OK) return;\n    if (l >= 0) {\n        count = (unsigned long) l;\n    } else {\n        /* A negative count means: return the same elements multiple times\n         * (i.e. don't remove the extracted element after every extraction). */\n        count = -l;\n        uniq = 0;\n    }\n\n    if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.emptyarray))\n        == NULL || checkType(c,set,OBJ_SET)) return;\n    size = setTypeSize(set);\n\n    /* If count is zero, serve it ASAP to avoid special cases later. */\n    if (count == 0) {\n        addReply(c,shared.emptyarray);\n        return;\n    }\n\n    /* CASE 1: The count was negative, so the extraction method is just:\n     * \"return N random elements\" sampling the whole set every time.\n     * This case is trivial and can be served without auxiliary data\n     * structures. This case is the only one that also needs to return the\n     * elements in random order. */\n    if (!uniq || count == 1) {\n        addReplyArrayLen(c,count);\n\n        if (set->encoding == OBJ_ENCODING_LISTPACK && count > 1) {\n            /* Specialized case for listpack, traversing it only once. 
*/\n            unsigned long limit, sample_count;\n            limit = count > SRANDFIELD_RANDOM_SAMPLE_LIMIT ? SRANDFIELD_RANDOM_SAMPLE_LIMIT : count;\n            listpackEntry *entries = zmalloc(limit * sizeof(listpackEntry));\n            while (count) {\n                sample_count = count > limit ? limit : count;\n                count -= sample_count;\n                lpRandomEntries(set->ptr, sample_count, entries);\n                for (unsigned long i = 0; i < sample_count; i++) {\n                    if (entries[i].sval)\n                        addReplyBulkCBuffer(c, entries[i].sval, entries[i].slen);\n                    else\n                        addReplyBulkLongLong(c, entries[i].lval);\n                }\n                if (c->flags & CLIENT_CLOSE_ASAP)\n                    break;\n            }\n            zfree(entries);\n            return;\n        }\n\n        while(count--) {\n            setTypeRandomElement(set, &str, &len, &llele);\n            if (str == NULL) {\n                addReplyBulkLongLong(c,llele);\n            } else {\n                addReplyBulkCBuffer(c, str, len);\n            }\n            if (c->flags & CLIENT_CLOSE_ASAP)\n                break;\n        }\n        return;\n    }\n\n    /* CASE 2:\n     * The number of requested elements is greater than the number of\n     * elements inside the set: simply return the whole set. */\n    if (count >= size) {\n        setTypeIterator si;\n        addReplyArrayLen(c,size);\n        setTypeInitIterator(&si, set);\n        while (setTypeNext(&si, &str, &len, &llele) != -1) {\n            if (str == NULL) {\n                addReplyBulkLongLong(c,llele);\n            } else {\n                addReplyBulkCBuffer(c, str, len);\n            }\n            size--;\n        }\n        setTypeResetIterator(&si);\n        serverAssert(size==0);\n        return;\n    }\n\n    /* CASE 2.5 listpack only. 
Sampling unique elements, in non-random order.\n     * Listpack encoded sets are meant to be relatively small, so\n     * SRANDMEMBER_SUB_STRATEGY_MUL isn't necessary and we'd rather not make\n     * copies of the entries. Instead, we emit them directly to the output\n     * buffer.\n     *\n     * And it is inefficient to repeatedly pick one random element from a\n     * listpack in CASE 4. So we use this instead. */\n    if (set->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *lp = set->ptr;\n        unsigned char *p = lpFirst(lp);\n        unsigned int i = 0;\n        addReplyArrayLen(c, count);\n        while (count) {\n            p = lpNextRandom(lp, p, &i, count--, 1);\n            unsigned int len;\n            str = (char *)lpGetValue(p, &len, (long long *)&llele);\n            if (str == NULL) {\n                addReplyBulkLongLong(c, llele);\n            } else {\n                addReplyBulkCBuffer(c, str, len);\n            }\n            p = lpNext(lp, p);\n            i++;\n        }\n        return;\n    }\n\n    /* For CASE 3 and CASE 4 we need an auxiliary dictionary. */\n    d = dictCreate(&sdsReplyDictType);\n\n    /* CASE 3:\n     * The number of elements inside the set is not greater than\n     * SRANDMEMBER_SUB_STRATEGY_MUL times the number of requested elements.\n     * In this case we create a set from scratch with all the elements, and\n     * subtract random elements to reach the requested number of elements.\n     *\n     * This is done because if the number of requested elements is just\n     * a bit less than the number of elements in the set, the natural approach\n     * used in CASE 4 is highly inefficient. */\n    if (count*SRANDMEMBER_SUB_STRATEGY_MUL > size) {\n        setTypeIterator si;\n\n        /* Add all the elements into the temporary dictionary. 
*/\n        setTypeInitIterator(&si, set);\n        dictExpand(d, size);\n        while (setTypeNext(&si, &str, &len, &llele) != -1) {\n            int retval = DICT_ERR;\n\n            if (str == NULL) {\n                retval = dictAdd(d,sdsfromlonglong(llele),NULL);\n            } else {\n                retval = dictAdd(d, sdsnewlen(str, len), NULL);\n            }\n            serverAssert(retval == DICT_OK);\n        }\n        setTypeResetIterator(&si);\n        serverAssert(dictSize(d) == size);\n\n        /* Remove random elements to reach the right count. */\n        while (size > count) {\n            dictEntry *de;\n            de = dictGetFairRandomKey(d);\n            dictUnlink(d,dictGetKey(de));\n            sdsfree(dictGetKey(de));\n            dictFreeUnlinkedEntry(d,de);\n            size--;\n        }\n    }\n\n    /* CASE 4: We have a big set compared to the requested number of elements.\n     * In this case we can simply get random elements from the set and add\n     * to the temporary set, trying to eventually get enough unique elements\n     * to reach the specified count. */\n    else {\n        unsigned long added = 0;\n        sds sdsele;\n\n        dictExpand(d, count);\n        while (added < count) {\n            setTypeRandomElement(set, &str, &len, &llele);\n            if (str == NULL) {\n                sdsele = sdsfromlonglong(llele);\n            } else {\n                sdsele = sdsnewlen(str, len);\n            }\n            /* Try to add the object to the dictionary. If it already exists\n             * free it, otherwise increment the number of objects we have\n             * in the result dictionary. */\n            if (dictAdd(d,sdsele,NULL) == DICT_OK)\n                added++;\n            else\n                sdsfree(sdsele);\n        }\n    }\n\n    /* CASE 3 & 4: send the result to the user. 
*/\n    {\n        dictIterator di;\n        dictEntry *de;\n\n        addReplyArrayLen(c,count);\n        dictInitIterator(&di, d);\n        while((de = dictNext(&di)) != NULL)\n            addReplyBulkSds(c,dictGetKey(de));\n        dictResetIterator(&di);\n        dictRelease(d);\n    }\n}\n\n/* SRANDMEMBER <key> [<count>] */\nvoid srandmemberCommand(client *c) {\n    kvobj *set;\n    char *str;\n    size_t len = 0;\n    int64_t llele = 0;\n    size_t oldsize = 0;\n\n    if (c->argc == 3) {\n        srandmemberWithCountCommand(c);\n        return;\n    } else if (c->argc > 3) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Handle variant without <count> argument. Reply with simple bulk string */\n    if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))\n        == NULL || checkType(c,set,OBJ_SET)) return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(set);\n    setTypeRandomElement(set, &str, &len, &llele);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n    if (str == NULL) {\n        addReplyBulkLongLong(c,llele);\n    } else {\n        addReplyBulkCBuffer(c, str, len);\n    }\n}\n\ntypedef struct setopsrc {\n    robj *set;\n    size_t oldsize;\n} setopsrc;\n\nint qsortCompareSetsByCardinality(const void *s1, const void *s2) {\n    robj *o1 = ((setopsrc*)s1)->set, *o2 = ((setopsrc*)s2)->set;\n    if (setTypeSize(o1) > setTypeSize(o2)) return 1;\n    if (setTypeSize(o1) < setTypeSize(o2)) return -1;\n    return 0;\n}\n\n/* This is used by SDIFF and in this case we can receive NULL that should\n * be handled as empty sets. */\nint qsortCompareSetsByRevCardinality(const void *s1, const void *s2) {\n    robj *o1 = ((setopsrc*)s1)->set, *o2 = ((setopsrc*)s2)->set;\n    unsigned long first = o1 ? setTypeSize(o1) : 0;\n    unsigned long second = o2 ? 
setTypeSize(o2) : 0;\n\n    if (first < second) return 1;\n    if (first > second) return -1;\n    return 0;\n}\n\n/* SINTER / SINTERSTORE / SINTERCARD\n *\n * 'cardinality_only' works for SINTERCARD: only return the cardinality\n * with minimal processing and memory overhead.\n *\n * 'limit' works for SINTERCARD: stop searching after reaching the limit.\n * Passing 0 means unlimited.\n */\nvoid sinterGenericCommand(client *c, robj **setkeys,\n                          unsigned long setnum, robj *dstkey,\n                          int cardinality_only, unsigned long limit) {\n    setopsrc *sets = zmalloc(sizeof(setopsrc)*setnum);\n    setTypeIterator si;\n    robj *dstset = NULL;\n    char *str;\n    size_t len = 0;\n    int64_t intobj = 0;\n    void *replylen = NULL;\n    unsigned long j, cardinality = 0;\n    int encoding, empty = 0;\n\n    for (j = 0; j < setnum; j++) {\n        kvobj *kv = lookupKeyRead(c->db, setkeys[j]);\n        if (!kv) {\n            /* A NULL is considered an empty set */\n            empty += 1;\n            sets[j].set = NULL;\n            sets[j].oldsize = 0;\n            continue;\n        }\n        if (checkType(c, kv, OBJ_SET)) {\n            zfree(sets);\n            return;\n        }\n        sets[j].set = kv;\n        if (server.memory_tracking_enabled)\n            sets[j].oldsize = kvobjAllocSize(kv);\n    }\n\n    /* Set intersection with an empty set always results in an empty set.\n     * Return ASAP if there is an empty set. 
*/\n    if (empty > 0) {\n        zfree(sets);\n        if (dstkey) {\n            if (dbDelete(c->db,dstkey)) {\n                keyModified(c,c->db,dstkey,NULL,1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",dstkey,c->db->id);\n                server.dirty++;\n            }\n            addReply(c,shared.czero);\n        } else if (cardinality_only) {\n            addReplyLongLong(c,cardinality);\n        } else {\n            addReply(c,shared.emptyset[c->resp]);\n        }\n        return;\n    }\n\n    /* Sort sets from the smallest to largest; this will improve our\n     * algorithm's performance */\n    qsort(sets,setnum,sizeof(setopsrc),qsortCompareSetsByCardinality);\n\n    /* The first thing we should output is the total number of elements...\n     * since this is a multi-bulk write, but at this stage we don't know\n     * the intersection set size, so we use a trick: append an empty object\n     * to the output list and save the pointer to modify it later with the\n     * right length */\n    if (dstkey) {\n        /* If we have a target key in which to store the resulting set,\n         * create this key with an empty set inside */\n        if (sets[0].set->encoding == OBJ_ENCODING_INTSET) {\n            /* The first set is an intset, so the result is an intset too. The\n             * elements are inserted in ascending order which is efficient in an\n             * intset. */\n            dstset = createIntsetObject();\n        } else if (sets[0].set->encoding == OBJ_ENCODING_LISTPACK) {\n            /* To avoid many reallocs, we estimate that the result is a listpack\n             * of approximately the same size as the first set. Then we shrink\n             * it or possibly convert it to intset in the end. 
*/\n            unsigned char *lp = lpNew(lpBytes(sets[0].set->ptr));\n            dstset = createObject(OBJ_SET, lp);\n            dstset->encoding = OBJ_ENCODING_LISTPACK;\n        } else {\n            /* We start off with a listpack, since it's more efficient to append\n             * to than an intset. Later we can convert it to intset or a\n             * hashtable. */\n            dstset = createSetListpackObject();\n        }\n    } else if (!cardinality_only) {\n        replylen = addReplyDeferredLen(c);\n    }\n\n    /* Iterate all the elements of the first (smallest) set, and test\n     * the element against all the other sets, if at least one set does\n     * not include the element it is discarded */\n    int only_integers = 1;\n    setTypeInitIterator(&si, sets[0].set);\n    while((encoding = setTypeNext(&si, &str, &len, &intobj)) != -1) {\n        for (j = 1; j < setnum; j++) {\n            if (sets[j].set == sets[0].set) continue;\n            if (!setTypeIsMemberAux(sets[j].set, str, len, intobj,\n                                    encoding == OBJ_ENCODING_HT))\n                break;\n        }\n\n        /* Only take action when all sets contain the member */\n        if (j == setnum) {\n            if (cardinality_only) {\n                cardinality++;\n\n                /* We stop the searching after reaching the limit. */\n                if (limit && cardinality >= limit)\n                    break;\n            } else if (!dstkey) {\n                if (str != NULL)\n                    addReplyBulkCBuffer(c, str, len);\n                else\n                    addReplyBulkLongLong(c,intobj);\n                cardinality++;\n            } else {\n                if (str && only_integers) {\n                    /* It may be an integer although we got it as a string. 
*/\n                    if (encoding == OBJ_ENCODING_HT &&\n                        string2ll(str, len, (long long *)&intobj))\n                    {\n                        if (dstset->encoding == OBJ_ENCODING_LISTPACK ||\n                            dstset->encoding == OBJ_ENCODING_INTSET)\n                        {\n                            /* Adding it as an integer is more efficient. */\n                            str = NULL;\n                        }\n                    } else {\n                        /* It's not an integer */\n                        only_integers = 0;\n                    }\n                }\n                setTypeAddAux(dstset, str, len, intobj, encoding == OBJ_ENCODING_HT);\n            }\n        }\n    }\n    setTypeResetIterator(&si);\n\n    if (server.memory_tracking_enabled) {\n        for (j = 0; j < setnum; j++) {\n            robj *obj = sets[j].set;\n            if (!obj) continue;\n            updateSlotAllocSize(c->db, getKeySlot(setkeys[j]->ptr), obj,\n                                sets[j].oldsize, kvobjAllocSize(obj));\n        }\n    }\n\n    if (cardinality_only) {\n        addReplyLongLong(c,cardinality);\n    } else if (dstkey) {\n        /* Store the resulting set into the target, if the intersection\n         * is not an empty set. */\n        if (setTypeSize(dstset) > 0) {\n            if (only_integers) maybeConvertToIntset(dstset);\n            if (dstset->encoding == OBJ_ENCODING_LISTPACK) {\n                /* We allocated too much memory when we created it to avoid\n                 * frequent reallocs. Therefore, we shrink it now. 
*/\n                dstset->ptr = lpShrinkToFit(dstset->ptr);\n            }\n            setKey(c, c->db, dstkey, &dstset, 0);\n            addReplyLongLong(c,setTypeSize(dstset));\n            notifyKeyspaceEvent(NOTIFY_SET,\"sinterstore\",\n                dstkey,c->db->id);\n            server.dirty++;\n        } else {\n            addReply(c,shared.czero);\n            if (dbDelete(c->db,dstkey)) {\n                server.dirty++;\n                keyModified(c,c->db,dstkey,NULL,1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",dstkey,c->db->id);\n            }\n            decrRefCount(dstset);\n        }\n    } else {\n        setDeferredSetLen(c,replylen,cardinality);\n    }\n    zfree(sets);\n}\n\n/* SINTER key [key ...] */\nvoid sinterCommand(client *c) {\n    sinterGenericCommand(c, c->argv+1,  c->argc-1, NULL, 0, 0);\n}\n\n/* SMEMBERS key */\nvoid smembersCommand(client *c) {\n    setTypeIterator si;\n    char *str;\n    size_t len = 0;\n    int64_t intobj = 0;\n    size_t oldsize = 0;\n    kvobj *setobj = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c,setobj,OBJ_SET)) return;\n    if (!setobj) {\n        addReply(c, shared.emptyset[c->resp]);\n        return;\n    }\n\n    /* Prepare the response. */\n    unsigned long length = setTypeSize(setobj);\n    addReplySetLen(c,length);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(setobj);\n    /* Iterate through the elements of the set. 
*/\n    setTypeInitIterator(&si, setobj);\n\n    while (setTypeNext(&si, &str, &len, &intobj) != -1) {\n        if (str != NULL)\n            addReplyBulkCBuffer(c, str, len);\n        else\n            addReplyBulkLongLong(c, intobj);\n        length--;\n    }\n    setTypeResetIterator(&si);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), setobj, oldsize, kvobjAllocSize(setobj));\n    serverAssert(length == 0); /* fail on corrupt data */\n}\n\n/* SINTERCARD numkeys key [key ...] [LIMIT limit] */\nvoid sinterCardCommand(client *c) {\n    long j;\n    long numkeys = 0; /* Number of keys. */\n    long limit = 0;   /* 0 means no limit. */\n\n    if (getRangeLongFromObjectOrReply(c, c->argv[1], 1, LONG_MAX,\n                                      &numkeys, \"numkeys should be greater than 0\") != C_OK)\n        return;\n    if (numkeys > (c->argc - 2)) {\n        addReplyError(c, \"Number of keys can't be greater than number of args\");\n        return;\n    }\n\n    for (j = 2 + numkeys; j < c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        int moreargs = (c->argc - 1) - j;\n\n        if (!strcasecmp(opt, \"LIMIT\") && moreargs) {\n            j++;\n            if (getPositiveLongFromObjectOrReply(c, c->argv[j], &limit,\n                                                 \"LIMIT can't be negative\") != C_OK)\n                return;\n        } else {\n            addReplyErrorObject(c, shared.syntaxerr);\n            return;\n        }\n    }\n\n    sinterGenericCommand(c, c->argv+2, numkeys, NULL, 1, limit);\n}\n\n/* SINTERSTORE destination key [key ...] 
*/\nvoid sinterstoreCommand(client *c) {\n    sinterGenericCommand(c, c->argv+2, c->argc-2, c->argv[1], 0, 0);\n}\n\nvoid sunionDiffGenericCommand(client *c, robj **setkeys, int setnum,\n                              robj *dstkey, int op) {\n    setopsrc *sets = zmalloc(sizeof(setopsrc)*setnum);\n    setTypeIterator si;\n    robj *dstset = NULL;\n    int dstset_encoding = OBJ_ENCODING_INTSET;\n    char *str;\n    size_t len = 0;\n    int64_t llval = 0;\n    int encoding;\n    int j, cardinality = 0;\n    int diff_algo = 1;\n    int sameset = 0; \n\n    for (j = 0; j < setnum; j++) {\n        kvobj *setobj = lookupKeyRead(c->db, setkeys[j]);\n        if (!setobj) {\n            sets[j].set = NULL;\n            sets[j].oldsize = 0;\n            continue;\n        }\n        if (checkType(c,setobj,OBJ_SET)) {\n            zfree(sets);\n            return;\n        }\n        /* According to the factory method setTypeCreate(), a SET currently has 3 possible encodings:\n         * 1. OBJ_ENCODING_INTSET\n         * 2. OBJ_ENCODING_LISTPACK\n         * 3. OBJ_ENCODING_HT\n         * 'dstset_encoding' determines which encoding to use when initializing 'dstset'.\n         *\n         * If all sets use the OBJ_ENCODING_INTSET encoding, or 'dstkey' is not null, keep 'dstset'\n         * in OBJ_ENCODING_INTSET encoding when initializing it. Otherwise it is not efficient to create\n         * 'dstset' as an intset and then convert it to a listpack or hashtable.\n         *\n         * If one of the sets is OBJ_ENCODING_LISTPACK, default 'dstset' to the hashtable encoding,\n         * since lookups and comparisons are more efficient in a hashtable than in a listpack. The\n         * corresponding time complexities are O(1) vs O(n). 
*/\n        if (!dstkey && dstset_encoding == OBJ_ENCODING_INTSET &&\n            (setobj->encoding == OBJ_ENCODING_LISTPACK || setobj->encoding == OBJ_ENCODING_HT)) {\n            dstset_encoding = OBJ_ENCODING_HT;\n        }\n        sets[j].set = setobj;\n        if (server.memory_tracking_enabled)\n            sets[j].oldsize = kvobjAllocSize(setobj);\n        if (j > 0 && sets[0].set == sets[j].set) {\n            sameset = 1; \n        }\n    }\n\n    /* Select what DIFF algorithm to use.\n     *\n     * Algorithm 1 is O(N*M) where N is the size of the first set\n     * and M the total number of sets.\n     *\n     * Algorithm 2 is O(N) where N is the total number of elements in all\n     * the sets.\n     *\n     * We compute the best bet for the current input here. */\n    if (op == SET_OP_DIFF && sets[0].set && !sameset) {\n        long long algo_one_work = 0, algo_two_work = 0;\n\n        for (j = 0; j < setnum; j++) {\n            if (sets[j].set == NULL) continue;\n\n            algo_one_work += setTypeSize(sets[0].set);\n            algo_two_work += setTypeSize(sets[j].set);\n        }\n\n        /* Algorithm 1 has better constant times and performs fewer operations\n         * if there are elements in common. Give it some advantage. */\n        algo_one_work /= 2;\n        diff_algo = (algo_one_work <= algo_two_work) ? 1 : 2;\n\n        if (diff_algo == 1 && setnum > 1) {\n            /* With algorithm 1 it is better to order the sets to subtract\n             * by decreasing size, so that we are more likely to find\n             * duplicated elements ASAP. */\n            qsort(sets+1,setnum-1,sizeof(setopsrc),\n                qsortCompareSetsByRevCardinality);\n        }\n    }\n\n    /* We need a temp set object to store our union/diff. 
If the dstkey\n     * is not NULL (that is, we are inside an SUNIONSTORE/SDIFFSTORE operation) then\n     * this set object will be the resulting object to set into the target key. */\n    if (dstset_encoding == OBJ_ENCODING_INTSET) {\n        dstset = createIntsetObject();\n    } else {\n        dstset = createSetObject();\n    }\n\n    if (op == SET_OP_UNION) {\n        /* Union is trivial, just add every element of every set to the\n         * temporary set. */\n        for (j = 0; j < setnum; j++) {\n            if (!sets[j].set) continue; /* non existing keys are like empty sets */\n\n            setTypeInitIterator(&si, sets[j].set);\n            while ((encoding = setTypeNext(&si, &str, &len, &llval)) != -1) {\n                cardinality += setTypeAddAux(dstset, str, len, llval, encoding == OBJ_ENCODING_HT);\n            }\n            setTypeResetIterator(&si);\n        }\n    } else if (op == SET_OP_DIFF && sameset) {\n        /* At least one of the sets is the same one (same key) as the first one, result must be empty. */\n    } else if (op == SET_OP_DIFF && sets[0].set && diff_algo == 1) {\n        /* DIFF Algorithm 1:\n         *\n         * We perform the diff by iterating all the elements of the first set,\n         * and only adding an element to the target set if it does not exist\n         * in any of the other sets.\n         *\n         * This way we perform at most N*M operations, where N is the size of\n         * the first set, and M the number of sets. */\n        setTypeInitIterator(&si, sets[0].set);\n        while ((encoding = setTypeNext(&si, &str, &len, &llval)) != -1) {\n            for (j = 1; j < setnum; j++) {\n                if (!sets[j].set) continue; /* no key is an empty set. */\n                if (sets[j].set == sets[0].set) break; /* same set! 
*/\n                if (setTypeIsMemberAux(sets[j].set, str, len, llval,\n                                       encoding == OBJ_ENCODING_HT))\n                    break;\n            }\n            if (j == setnum) {\n                /* There is no other set with this element. Add it. */\n                cardinality += setTypeAddAux(dstset, str, len, llval, encoding == OBJ_ENCODING_HT);\n            }\n        }\n        setTypeResetIterator(&si);\n    } else if (op == SET_OP_DIFF && sets[0].set && diff_algo == 2) {\n        /* DIFF Algorithm 2:\n         *\n         * Add all the elements of the first set to the auxiliary set.\n         * Then remove all the elements of all the next sets from it.\n         *\n         * This is O(N) where N is the sum of all the elements in every\n         * set. */\n        for (j = 0; j < setnum; j++) {\n            if (!sets[j].set) continue; /* non existing keys are like empty sets */\n\n            setTypeInitIterator(&si, sets[j].set);\n            while((encoding = setTypeNext(&si, &str, &len, &llval)) != -1) {\n                if (j == 0) {\n                    cardinality += setTypeAddAux(dstset, str, len, llval,\n                                                 encoding == OBJ_ENCODING_HT);\n                } else {\n                    cardinality -= setTypeRemoveAux(dstset, str, len, llval,\n                                                    encoding == OBJ_ENCODING_HT);\n                }\n            }\n            setTypeResetIterator(&si);\n\n            /* Exit if result set is empty as any additional removal\n             * of elements will have no effect. 
*/\n            if (cardinality == 0) break;\n        }\n    }\n    if (server.memory_tracking_enabled) {\n        for (j = 0; j < setnum; j++) {\n            robj *obj = sets[j].set;\n            if (!obj) continue;\n            updateSlotAllocSize(c->db, getKeySlot(setkeys[j]->ptr), obj,\n                                sets[j].oldsize, kvobjAllocSize(obj));\n        }\n    }\n\n    /* Output the content of the resulting set, if not in STORE mode */\n    if (!dstkey) {\n        addReplySetLen(c,cardinality);\n        setTypeInitIterator(&si, dstset);\n        while (setTypeNext(&si, &str, &len, &llval) != -1) {\n            if (str)\n                addReplyBulkCBuffer(c, str, len);\n            else\n                addReplyBulkLongLong(c, llval);\n        }\n        setTypeResetIterator(&si);\n        server.lazyfree_lazy_server_del ? freeObjAsync(NULL, dstset, -1) :\n                                          decrRefCount(dstset);\n    } else {\n        /* If we have a target key where to store the resulting set\n         * create this key with the result set inside */\n        if (setTypeSize(dstset) > 0) {\n            setKey(c, c->db, dstkey, &dstset, 0);\n            addReplyLongLong(c,setTypeSize(dstset));\n            notifyKeyspaceEvent(NOTIFY_SET,\n                op == SET_OP_UNION ? \"sunionstore\" : \"sdiffstore\",\n                dstkey,c->db->id);\n            server.dirty++;\n        } else {\n            addReply(c,shared.czero);\n            if (dbDelete(c->db,dstkey)) {\n                server.dirty++;\n                keyModified(c,c->db,dstkey,NULL,1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",dstkey,c->db->id);\n            }\n            decrRefCount(dstset);\n        }\n    }\n    zfree(sets);\n}\n\n/* SUNION key [key ...] */\nvoid sunionCommand(client *c) {\n    sunionDiffGenericCommand(c,c->argv+1,c->argc-1,NULL,SET_OP_UNION);\n}\n\n/* SUNIONSTORE destination key [key ...] 
*/\nvoid sunionstoreCommand(client *c) {\n    sunionDiffGenericCommand(c,c->argv+2,c->argc-2,c->argv[1],SET_OP_UNION);\n}\n\n/* SDIFF key [key ...] */\nvoid sdiffCommand(client *c) {\n    sunionDiffGenericCommand(c,c->argv+1,c->argc-1,NULL,SET_OP_DIFF);\n}\n\n/* SDIFFSTORE destination key [key ...] */\nvoid sdiffstoreCommand(client *c) {\n    sunionDiffGenericCommand(c,c->argv+2,c->argc-2,c->argv[1],SET_OP_DIFF);\n}\n\nvoid sscanCommand(client *c) {\n    kvobj *set;\n    unsigned long long cursor;\n    size_t oldsize = 0;\n\n    if (parseScanCursorOrReply(c,c->argv[2],&cursor) == C_ERR) return;\n    if ((set = lookupKeyReadOrReply(c,c->argv[1],shared.emptyscan)) == NULL ||\n        checkType(c,set,OBJ_SET)) return;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(set);\n    scanGenericCommand(c,set,cursor);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), set, oldsize, kvobjAllocSize(set));\n}\n"
  },
  {
    "path": "src/t_stream.c",
    "content": "/*\n * Copyright (c) 2017-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"endianconv.h\"\n#include \"stream.h\"\n#include \"xxhash.h\"\n#include <string.h>\n\n/* Every stream item inside the listpack has a flags field that is used to\n * mark the entry as deleted, or as having the same fields as the \"master\"\n * entry at the start of the listpack. */\n#define STREAM_ITEM_FLAG_NONE 0             /* No special flags. */\n#define STREAM_ITEM_FLAG_DELETED (1<<0)     /* Entry is deleted. Skip it. */\n#define STREAM_ITEM_FLAG_SAMEFIELDS (1<<1)  /* Same fields as master entry. */\n\n/* For stream commands that require multiple IDs,\n * when the number of IDs is less than 'STREAMID_STATIC_VECTOR_LEN',\n * avoid a malloc allocation. */\n#define STREAMID_STATIC_VECTOR_LEN 8\n\n/* Max pre-allocation for listpack. This is done to avoid abuse of a user\n * setting stream_node_max_bytes to a huge number. */\n#define STREAM_LISTPACK_MAX_PRE_ALLOCATE 4096\n\n/* Don't let listpacks grow too big, even if the user config allows it.\n * Doing so can lead to an overflow (trying to store more than 32bit length\n * into the listpack header), or actually an assertion since lpInsert\n * will return NULL. 
*/\n#define STREAM_LISTPACK_MAX_SIZE (1<<30)\n\nvoid streamFreeCGGeneric(void *cg, void *s);\nvoid streamFreeNACK(stream *s, streamNACK *na);\nsize_t streamReplyWithRangeFromConsumerPEL(client *c, stream *s, streamID *start, streamID *end, size_t count, streamCG *group, streamConsumer *consumer);\nint streamParseStrictIDOrReply(client *c, robj *o, streamID *id, uint64_t missing_seq, int *seq_given);\nint streamParseIDOrReply(client *c, robj *o, streamID *id, uint64_t missing_seq);\n\nint streamEntryIsReferenced(stream *s, streamID *id);\nvoid streamCleanupEntryCGroupRefs(stream *s, streamID *id);\nvoid streamUpdateCGroupLastId(stream *s, streamCG *cg, streamID *id);\nvoid trackStreamClaimTimeouts(client *c, robj **keys, int numkeys, uint64_t expire_time);\n\n/* Forward declarations for IDMP functions (defined at end of file) */\nstatic void trackStreamIdmpEntries(client *c, robj *key);\nstatic void streamClearIdmpEntries(stream *s);\nstatic void idmpInsertEntry(stream *s, idmpProducer *producer, idmpEntry *entry, const streamID *id);\nstatic int idmpLookupAndReply(stream *s, idmpProducer *producer, idmpEntry *entry, client *c);\nstatic int idmpLookup(idmpProducer *producer, idmpEntry *entry, streamID *id);\nstatic idmpProducer *idmpGetOrCreateProducer(stream *s, const char *pid, size_t pid_len);\nstatic int createIdempotencyHash(robj **argv, int64_t numfields, XXH128_hash_t *out_hash);\nstatic void idmpEvictOldestEntry(stream *s, idmpProducer *producer);\n\n/* Forward declarations for PEL time list functions */\nstatic void pelListInsertAfter(streamCG *cg, streamNACK *after, streamNACK *nack);\nstatic void pelListInsertAtTail(streamCG *cg, streamNACK *nack);\nstatic void pelListUpdate(streamCG *cg, streamNACK *nack, mstime_t new_delivery_time);\n\n/* -----------------------------------------------------------------------\n * Low level stream encoding: a radix tree of listpacks.\n * ----------------------------------------------------------------------- */\n\n/* 
Create a new stream data structure. */\nstream *streamNew(void) {\n    size_t usable;\n    stream *s = zmalloc_usable(sizeof(*s), &usable);\n    s->alloc_size = usable;\n    s->rax = raxNewWithMetadata(0, &s->alloc_size);\n    s->length = 0;\n    s->first_id.ms = 0;\n    s->first_id.seq = 0;\n    s->last_id.ms = 0;\n    s->last_id.seq = 0;\n    s->max_deleted_entry_id.seq = 0;\n    s->max_deleted_entry_id.ms = 0;\n    s->entries_added = 0;\n    s->cgroups = NULL; /* Created on demand to save memory when not used. */\n    s->cgroups_ref = NULL;\n    s->min_cgroup_last_id.ms = UINT64_MAX;\n    s->min_cgroup_last_id.seq = UINT64_MAX;\n    s->min_cgroup_last_id_valid = 0;\n    s->idmp_duration = server.stream_idmp_duration; /* Default from server config */\n    s->idmp_max_entries = server.stream_idmp_maxsize; /* Default from server config */ \n    s->idmp_producers = NULL; /* Created on demand to save memory when not used. */\n    s->iids_added = 0;\n    s->iids_duplicates = 0;\n    return s;\n}\n\nstatic void streamLpFreeGeneric(void *lp, void *strm) {\n    stream *s = strm;\n    s->alloc_size -= lpBytes(lp);\n    lpFree(lp);\n}\n\nvoid streamFreeIdmpProducerGeneric(void *producer, void *strm) {\n    stream *s = strm;\n    idmpProducerFree((idmpProducer *)producer, &s->alloc_size);\n}\n\n/* Free a stream, including the listpacks stored inside the radix tree. */\nvoid freeStream(stream *s) {\n    raxFreeWithCbAndContext(s->rax, streamLpFreeGeneric, s);\n    if (s->cgroups)\n        raxFreeWithCbAndContext(s->cgroups, streamFreeCGGeneric, s);\n    if (s->cgroups_ref)\n        raxFreeWithCallback(s->cgroups_ref, listReleaseGeneric);\n    /* Free IDMP producers rax tree */\n    if (s->idmp_producers)\n        raxFreeWithCbAndContext(s->idmp_producers, streamFreeIdmpProducerGeneric, s);\n    debugServerAssert(s->alloc_size == zmalloc_usable_size(s));\n    zfree(s);\n}\n\n/* Return the length of a stream. 
*/\nunsigned long streamLength(const robj *subject) {\n    stream *s = subject->ptr;\n    return s->length;\n}\n\n/* Set 'id' to be its successor stream ID.\n * If 'id' is the maximal possible id, it is wrapped around to 0-0 and a\n * C_ERR is returned. */\nint streamIncrID(streamID *id) {\n    int ret = C_OK;\n    if (id->seq == UINT64_MAX) {\n        if (id->ms == UINT64_MAX) {\n            /* Special case where 'id' is the last possible streamID... */\n            id->ms = id->seq = 0;\n            ret = C_ERR;\n        } else {\n            id->ms++;\n            id->seq = 0;\n        }\n    } else {\n        id->seq++;\n    }\n    return ret;\n}\n\n/* Set 'id' to be its predecessor stream ID.\n * If 'id' is the minimal possible id, it remains 0-0 and a C_ERR is\n * returned. */\nint streamDecrID(streamID *id) {\n    int ret = C_OK;\n    if (id->seq == 0) {\n        if (id->ms == 0) {\n            /* Special case where 'id' is the first possible streamID... */\n            id->ms = id->seq = UINT64_MAX;\n            ret = C_ERR;\n        } else {\n            id->ms--;\n            id->seq = UINT64_MAX;\n        }\n    } else {\n        id->seq--;\n    }\n    return ret;\n}\n\n/* Generate the next stream item ID given the previous one. If the current\n * milliseconds Unix time is greater than the previous one, just use this\n * as time part and start with sequence part of zero. Otherwise we use the\n * previous time (and never go backward) and increment the sequence. 
*/\nvoid streamNextID(streamID *last_id, streamID *new_id) {\n    uint64_t ms = commandTimeSnapshot();\n    if (ms > last_id->ms) {\n        new_id->ms = ms;\n        new_id->seq = 0;\n    } else {\n        *new_id = *last_id;\n        streamIncrID(new_id);\n    }\n}\n\n/* This is a helper function for the COPY command.\n * Duplicate a Stream object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * The resulting object always has refcount set to 1 */\nrobj *streamDup(robj *o) {\n    robj *sobj;\n\n    serverAssert(o->type == OBJ_STREAM);\n\n    switch (o->encoding) {\n        case OBJ_ENCODING_STREAM:\n            sobj = createStreamObject();\n            break;\n        default:\n            serverPanic(\"Wrong encoding.\");\n            break;\n    }\n\n    stream *s;\n    stream *new_s;\n    s = o->ptr;\n    new_s = sobj->ptr;\n\n    raxIterator ri;\n    raxStart(&ri, s->rax);\n    raxSeek(&ri, \"^\", NULL, 0);\n    size_t lp_bytes = 0;      /* Total bytes in the listpack. */\n    unsigned char *lp = NULL; /* listpack pointer. */\n    /* Get a reference to the listpack node. 
*/\n    while (raxNext(&ri)) {\n        serverAssert(ri.key_len == sizeof(streamID));\n        lp = ri.data;\n        lp_bytes = lpBytes(lp);\n        unsigned char *new_lp = zmalloc(lp_bytes);\n        new_s->alloc_size += lp_bytes;\n        memcpy(new_lp, lp, lp_bytes);\n        raxInsert(new_s->rax, ri.key, ri.key_len,\n                  new_lp, NULL);\n    }\n    new_s->length = s->length;\n    new_s->first_id = s->first_id;\n    new_s->last_id = s->last_id;\n    new_s->max_deleted_entry_id = s->max_deleted_entry_id;\n    new_s->entries_added = s->entries_added;\n    raxStop(&ri);\n\n    /* IDMP state */\n    new_s->idmp_duration = s->idmp_duration;\n    new_s->idmp_max_entries = s->idmp_max_entries;\n    new_s->iids_added = s->iids_added;\n    new_s->iids_duplicates = s->iids_duplicates;\n\n    if (s->idmp_producers != NULL) {\n        new_s->idmp_producers = raxNewWithMetadata(0, &new_s->alloc_size);\n\n        raxIterator ri_prod;\n        raxStart(&ri_prod, s->idmp_producers);\n        raxSeek(&ri_prod, \"^\", NULL, 0);\n        while (raxNext(&ri_prod)) {\n            idmpProducer *src_prod = ri_prod.data;\n            idmpProducer *new_prod = idmpProducerCreate(&new_s->alloc_size);\n\n            /* Walk the linked list and duplicate each entry. */\n            idmpEntry *src_entry = src_prod->idmp_head;\n            while (src_entry != NULL) {\n                idmpEntry *new_entry = idmpEntryCreate(src_entry->iid,\n                                                       src_entry->iid_len,\n                                                       &new_s->alloc_size);\n                new_entry->id = src_entry->id;\n\n                /* Append to tail of the new producer's linked list. 
*/\n                if (new_prod->idmp_tail != NULL) {\n                    new_prod->idmp_tail->next = new_entry;\n                } else {\n                    new_prod->idmp_head = new_entry;\n                }\n                new_prod->idmp_tail = new_entry;\n\n                dictAdd(new_prod->idmp_dict, new_entry, NULL);\n                src_entry = src_entry->next;\n            }\n\n            raxInsert(new_s->idmp_producers, ri_prod.key, ri_prod.key_len,\n                      new_prod, NULL);\n        }\n        raxStop(&ri_prod);\n    }\n\n    if (s->cgroups == NULL) return sobj;\n\n    /* Consumer Groups */\n    raxIterator ri_cgroups;\n    raxStart(&ri_cgroups, s->cgroups);\n    raxSeek(&ri_cgroups, \"^\", NULL, 0);\n    while (raxNext(&ri_cgroups)) {\n        streamCG *cg = ri_cgroups.data;\n        streamCG *new_cg = streamCreateCG(new_s, (char *)ri_cgroups.key,\n                                          ri_cgroups.key_len, &cg->last_id,\n                                          cg->entries_read);\n\n        serverAssert(new_cg != NULL);\n\n        /* Consumer Group PEL -- walk the time-ordered list so we can\n         * append directly and preserve NACK zone structure. 
*/\n        for (streamNACK *nack = cg->pel_time_head; nack; nack = nack->pel_next) {\n            unsigned char buf[sizeof(streamID)];\n            streamEncodeID(buf, &nack->id);\n            streamNACK *new_nack = streamCreateNACK(new_s, NULL, &nack->id);\n            new_nack->delivery_time = nack->delivery_time;\n            new_nack->delivery_count = nack->delivery_count;\n            new_nack->cgroup_ref_node = streamLinkCGroupToEntry(new_s, new_cg, buf);\n            raxInsert(new_cg->pel, buf, sizeof(streamID), new_nack, NULL);\n            pelListInsertAtTail(new_cg, new_nack);\n            if (nack == cg->pel_nack_tail) new_cg->pel_nack_tail = new_nack;\n        }\n\n        /* Consumers */\n        raxIterator ri_consumers;\n        raxStart(&ri_consumers, cg->consumers);\n        raxSeek(&ri_consumers, \"^\", NULL, 0);\n        while (raxNext(&ri_consumers)) {\n            streamConsumer *consumer = ri_consumers.data;\n            streamConsumer *new_consumer;\n            size_t usable;\n            new_consumer = zmalloc_usable(sizeof(*new_consumer), &usable);\n            new_s->alloc_size += usable;\n            new_consumer->name = sdsdup(consumer->name);\n            new_s->alloc_size += sdsAllocSize(new_consumer->name);\n            new_consumer->pel = raxNewWithMetadata(0, &new_s->alloc_size);\n            raxInsert(new_cg->consumers,(unsigned char *)new_consumer->name,\n                        sdslen(new_consumer->name), new_consumer, NULL);\n            new_consumer->seen_time = consumer->seen_time;\n            new_consumer->active_time = consumer->active_time;\n\n            /* Consumer PEL */\n            raxIterator ri_cpel;\n            raxStart(&ri_cpel, consumer->pel);\n            raxSeek(&ri_cpel, \"^\", NULL, 0);\n            while (raxNext(&ri_cpel)) {\n                void *result;\n                int found = raxFind(new_cg->pel,ri_cpel.key,sizeof(streamID),&result);\n\n                serverAssert(found);\n\n                
streamNACK *new_nack = result;\n                new_nack->consumer = new_consumer;\n                raxInsert(new_consumer->pel,ri_cpel.key,sizeof(streamID),new_nack,NULL);\n            }\n            raxStop(&ri_cpel);\n        }\n        raxStop(&ri_consumers);\n    }\n    raxStop(&ri_cgroups);\n    return sobj;\n}\n\n/* This is a wrapper function for lpGet() to directly get an integer value\n * from the listpack (that may store numbers as a string), converting\n * the string if needed.\n * The 'valid' argument is an optional output parameter to get an indication\n * of whether the record was valid; when this parameter is NULL, the function\n * will fail with an assertion instead. */\nstatic inline int64_t lpGetIntegerIfValid(unsigned char *ele, int *valid) {\n    int64_t v;\n    unsigned char *e = lpGet(ele,&v,NULL);\n    if (e == NULL) {\n        if (valid)\n            *valid = 1;\n        return v;\n    }\n    /* The following code path should never be reached given how listpacks\n     * work: they should always be able to store an int64_t value in integer\n     * encoded form. However the implementation may change. */\n    long long ll = 0;\n    int ret = string2ll((char*)e,v,&ll);\n    if (valid)\n        *valid = ret;\n    else\n        serverAssert(ret != 0);\n    v = ll;\n    return v;\n}\n\n#define lpGetInteger(ele) lpGetIntegerIfValid(ele, NULL)\n\n/* Get an edge streamID of a given listpack.\n * 'master_id' is an input param, used to build the 'edge_id' output param. */\nint lpGetEdgeStreamID(unsigned char *lp, int first, streamID *master_id, streamID *edge_id)\n{\n   if (lp == NULL)\n       return 0;\n\n   unsigned char *lp_ele;\n\n   /* We need to seek either the first or the last entry depending\n    * on the direction of the iteration. */\n   if (first) {\n       /* Get the master fields count. */\n       lp_ele = lpFirst(lp);        /* Seek items count */\n       lp_ele = lpNext(lp, lp_ele); /* Seek deleted count. 
*/\n       lp_ele = lpNext(lp, lp_ele); /* Seek num fields. */\n       int64_t master_fields_count = lpGetInteger(lp_ele);\n       lp_ele = lpNext(lp, lp_ele); /* Seek first field. */\n\n       /* If we are iterating in normal order, skip the master fields\n        * to seek the first actual entry. */\n       for (int64_t i = 0; i < master_fields_count; i++)\n           lp_ele = lpNext(lp, lp_ele);\n\n       /* If we are going forward, skip the previous entry's\n        * lp-count field (or in case of the master entry, the zero\n        * term field) */\n       lp_ele = lpNext(lp, lp_ele);\n       if (lp_ele == NULL)\n           return 0;\n   } else {\n       /* If we are iterating in reverse direction, just seek the\n        * last part of the last entry in the listpack (that is, the\n        * fields count). */\n       lp_ele = lpLast(lp);\n\n       /* If we are going backward, read the number of elements this\n        * entry is composed of, and jump backward N times to seek\n        * its start. */\n       int64_t lp_count = lpGetInteger(lp_ele);\n       if (lp_count == 0) /* We reached the master entry. */\n           return 0;\n\n       while (lp_count--)\n           lp_ele = lpPrev(lp, lp_ele);\n   }\n\n   lp_ele = lpNext(lp, lp_ele); /* Seek ID (lp_ele currently points to 'flags'). */\n\n   /* Get the ID: it is encoded as difference between the master\n    * ID and this entry ID. */\n   streamID id = *master_id;\n   id.ms += lpGetInteger(lp_ele);\n   lp_ele = lpNext(lp, lp_ele);\n   id.seq += lpGetInteger(lp_ele);\n   *edge_id = id;\n   return 1;\n}\n\n/* Debugging function to log the full content of a listpack. Useful\n * for development and debugging. 
*/\nvoid streamLogListpackContent(unsigned char *lp) {\n    unsigned char *p = lpFirst(lp);\n    while(p) {\n        unsigned char buf[LP_INTBUF_SIZE];\n        int64_t v;\n        unsigned char *ele = lpGet(p,&v,buf);\n        serverLog(LL_WARNING,\"- [%d] '%.*s'\", (int)v, (int)v, ele);\n        p = lpNext(lp,p);\n    }\n}\n\n/* Convert the specified stream entry ID to a 128 bit big endian number, so\n * that the IDs can be sorted lexicographically. */\nvoid streamEncodeID(void *buf, streamID *id) {\n    uint64_t e[2];\n    e[0] = htonu64(id->ms);\n    e[1] = htonu64(id->seq);\n    memcpy(buf,e,sizeof(e));\n}\n\n/* This is the reverse of streamEncodeID(): the decoded ID will be stored\n * in the 'id' structure passed by reference. The buffer 'buf' must point\n * to a 128 bit big-endian encoded ID. */\nvoid streamDecodeID(void *buf, streamID *id) {\n    uint64_t e[2];\n    memcpy(e,buf,sizeof(e));\n    id->ms = ntohu64(e[0]);\n    id->seq = ntohu64(e[1]);\n}\n\n/* Compare two stream IDs. Return -1 if a < b, 0 if a == b, 1 if a > b. */\nint streamCompareID(streamID *a, streamID *b) {\n    if (a->ms > b->ms) return 1;\n    else if (a->ms < b->ms) return -1;\n    /* The ms part is the same. Check the sequence part. */\n    else if (a->seq > b->seq) return 1;\n    else if (a->seq < b->seq) return -1;\n    /* Everything is the same: IDs are equal. */\n    return 0;\n}\n\n/* Retrieves the ID of the stream edge entry. An edge is either the first or\n * the last ID in the stream, and may be a tombstone. To filter out tombstones,\n * set the 'skip_tombstones' argument to 1. 
*/\nvoid streamGetEdgeID(stream *s, int first, int skip_tombstones, streamID *edge_id)\n{\n    streamIterator si;\n    int64_t numfields;\n    streamIteratorStart(&si,s,NULL,NULL,!first);\n    si.skip_tombstones = skip_tombstones;\n    int found = streamIteratorGetID(&si,edge_id,&numfields);\n    if (!found) {\n        streamID min_id = {0, 0}, max_id = {UINT64_MAX, UINT64_MAX};\n        *edge_id = first ? max_id : min_id;\n    }\n    streamIteratorStop(&si);\n}\n\n/* Adds a new item into the stream 's' having the specified number of\n * field-value pairs as specified in 'numfields' and stored into 'argv'.\n * Returns the new entry ID populating the 'added_id' structure.\n *\n * If 'use_id' is not NULL, the ID is not auto-generated by the function,\n * but instead the passed ID is used to add the new entry. In this case\n * adding the entry may fail as specified later in this comment.\n *\n * When 'use_id' is used alongside a zero 'seq_given', the sequence\n * part of the passed ID is ignored and the function will attempt to use an\n * auto-generated sequence.\n *\n * The function returns C_OK if the item was added; this is always true\n * if the ID was generated by the function. However the function may return\n * C_ERR in several cases:\n * 1. If an ID was given via 'use_id', but adding it failed since the\n *    current top ID is greater than or equal. errno will be set to EDOM.\n * 2. If the size of a single element, or the sum of all elements, is too big\n *    to be stored into the stream. errno will be set to ERANGE. */\nint streamAppendItem(stream *s, robj **argv, int64_t numfields, streamID *added_id, streamID *use_id, int seq_given) {\n\n    /* Generate the new entry ID. */\n    streamID id;\n    if (use_id) {\n        if (seq_given) {\n            id = *use_id;\n        } else {\n            /* The automatically generated sequence can be either zero (new\n             * timestamps) or the incremented sequence of the last ID. 
In the\n             * latter case, we need to prevent an overflow/advancing forward\n             * in time. */\n            if (s->last_id.ms == use_id->ms) {\n                if (s->last_id.seq == UINT64_MAX) {\n                    errno = EDOM;\n                    return C_ERR;\n                }\n                id = s->last_id;\n                id.seq++;\n            } else {\n                id = *use_id;\n            }\n        }\n    } else {\n        streamNextID(&s->last_id,&id);\n    }\n\n    /* Check that the new ID is greater than the last entry ID\n     * or return an error. Automatically generated IDs might\n     * overflow (and wrap-around) when incrementing the sequence\n     * part. */\n    if (streamCompareID(&id,&s->last_id) <= 0) {\n        errno = EDOM;\n        return C_ERR;\n    }\n\n    /* Avoid overflow when trying to add an element to the stream (listpack\n     * can only host up to 32bit length strings, and also a total listpack size\n     * can't be bigger than 32bit length). */\n    size_t totelelen = 0;\n    for (int64_t i = 0; i < numfields*2; i++) {\n        sds ele = argv[i]->ptr;\n        totelelen += sdslen(ele);\n    }\n    if (totelelen > STREAM_LISTPACK_MAX_SIZE) {\n        errno = ERANGE;\n        return C_ERR;\n    }\n\n    /* Add the new entry. */\n    raxIterator ri;\n    raxStart(&ri,s->rax);\n    raxSeek(&ri,\"$\",NULL,0);\n\n    size_t lp_bytes = 0;        /* Total bytes in the tail listpack. */\n    unsigned char *lp = NULL;   /* Tail listpack pointer. */\n\n    if (!raxEOF(&ri)) {\n        /* Get a reference to the tail node listpack. */\n        lp = ri.data;\n        lp_bytes = lpBytes(lp);\n    }\n    raxStop(&ri);\n\n    /* We have to add the key into the radix tree in lexicographic order;\n     * to do so we consider the ID as a single 128 bit number written in\n     * big endian, so that the most significant bytes are the first ones. 
*/\n    uint64_t rax_key[2];    /* Key in the radix tree containing the listpack.*/\n    streamID master_id;     /* ID of the master entry in the listpack. */\n\n    /* Create a new listpack and radix tree node if needed. Note that when\n     * a new listpack is created, we populate it with a \"master entry\". This\n     * is just a set of fields that is taken as references in order to compress\n     * the stream entries that we'll add inside the listpack.\n     *\n     * Note that while we use the first added entry fields to create\n     * the master entry, the first added entry is NOT represented in the master\n     * entry, which is a stand alone object. But of course, the first entry\n     * will compress well because it's used as reference.\n     *\n     * The master entry is composed like in the following example:\n     *\n     * +-------+---------+------------+---------+--/--+---------+---------+-+\n     * | count | deleted | num-fields | field_1 | field_2 | ... | field_N |0|\n     * +-------+---------+------------+---------+--/--+---------+---------+-+\n     *\n     * count and deleted just represent respectively the total number of\n     * entries inside the listpack that are valid, and marked as deleted\n     * (deleted flag in the entry flags set). 
So the total number of items\n     * actually inside the listpack (both deleted and not) is count+deleted.\n     *\n     * The real entries will be encoded with an ID that is just the\n     * millisecond and sequence difference compared to the key stored at\n     * the radix tree node containing the listpack (delta encoding), and\n     * if the fields of the entry are the same as the master entry fields, the\n     * entry flags will specify this fact and the entry fields and number\n     * of fields will be omitted (see later in the code of this function).\n     *\n     * The \"0\" entry at the end is the same as the 'lp-count' entry in the\n     * regular stream entries (see below), and marks the fact that there are\n     * no more entries, when we scan the stream from right to left. */\n\n    /* First of all, check if we can append to the current macro node or\n     * if we need to switch to the next one. 'lp' will be set to NULL if\n     * the current node is full. */\n    if (lp != NULL) {\n        int new_node = 0;\n        size_t node_max_bytes = server.stream_node_max_bytes;\n        if (node_max_bytes == 0 || node_max_bytes > STREAM_LISTPACK_MAX_SIZE)\n            node_max_bytes = STREAM_LISTPACK_MAX_SIZE;\n        if (lp_bytes + totelelen >= node_max_bytes) {\n            new_node = 1;\n        } else if (server.stream_node_max_entries) {\n            unsigned char *lp_ele = lpFirst(lp);\n            /* Count both live entries and deleted ones. 
*/\n            int64_t count = lpGetInteger(lp_ele) + lpGetInteger(lpNext(lp,lp_ele));\n            if (count >= server.stream_node_max_entries) new_node = 1;\n        }\n\n        if (new_node) {\n            /* Shrink extra pre-allocated memory */\n            lp = lpShrinkToFit(lp);\n            s->alloc_size -= lp_bytes;\n            s->alloc_size += lpBytes(lp);\n            if (ri.data != lp)\n                raxSetData(ri.node, lp);\n            lp = NULL;\n        }\n    }\n\n    int flags = STREAM_ITEM_FLAG_NONE;\n    if (lp == NULL) {\n        master_id = id;\n        streamEncodeID(rax_key,&id);\n        /* Create the listpack having the master entry ID and fields.\n         * Pre-allocate some bytes when creating listpack to avoid realloc on\n         * every XADD. Since listpack.c uses malloc_size, it'll grow in steps,\n         * and won't realloc on every XADD.\n         * When listpack reaches max number of entries, we'll shrink the\n         * allocation to fit the data. */\n        size_t prealloc = STREAM_LISTPACK_MAX_PRE_ALLOCATE;\n        if (server.stream_node_max_bytes > 0 && server.stream_node_max_bytes < prealloc) {\n            prealloc = server.stream_node_max_bytes;\n        }\n        lp = lpNew(prealloc);\n        lp = lpAppendInteger(lp,1); /* One item, the one we are adding. */\n        lp = lpAppendInteger(lp,0); /* Zero deleted so far. */\n        lp = lpAppendInteger(lp,numfields);\n        for (int64_t i = 0; i < numfields; i++) {\n            sds field = argv[i*2]->ptr;\n            lp = lpAppend(lp,(unsigned char*)field,sdslen(field));\n        }\n        lp = lpAppendInteger(lp,0); /* Master entry zero terminator. */\n        s->alloc_size += lpBytes(lp);\n        raxInsert(s->rax,(unsigned char*)&rax_key,sizeof(rax_key),lp,NULL);\n        /* The first entry we insert, has obviously the same fields of the\n         * master entry. 
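The SAMEFIELDS optimization this comment describes hinges on an exact, ordered match of the entry's field names against the master entry's; a minimal illustrative check (`sketch_same_fields` is not a Redis function):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* An entry may omit its field names only when they are exactly the
 * master entry's fields, in the same order and the same count. */
static int sketch_same_fields(const char **master, size_t n_master,
                              const char **entry, size_t n_entry) {
    if (n_entry != n_master) return 0;
    for (size_t i = 0; i < n_master; i++)
        if (strcmp(master[i], entry[i]) != 0) return 0;
    return 1;
}
```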
*/\n        flags |= STREAM_ITEM_FLAG_SAMEFIELDS;\n    } else {\n        serverAssert(ri.key_len == sizeof(rax_key));\n        memcpy(rax_key,ri.key,sizeof(rax_key));\n\n        /* Read the master ID from the radix tree key. */\n        streamDecodeID(rax_key,&master_id);\n        unsigned char *lp_ele = lpFirst(lp);\n\n        /* Update count and skip the deleted fields. */\n        int64_t count = lpGetInteger(lp_ele);\n        size_t oldsize = lpBytes(lp);\n        lp = lpReplaceInteger(lp,&lp_ele,count+1);\n        s->alloc_size -= oldsize;\n        s->alloc_size += lpBytes(lp);\n        lp_ele = lpNext(lp,lp_ele); /* seek deleted. */\n        lp_ele = lpNext(lp,lp_ele); /* seek master entry num fields. */\n\n        /* Check if the entry we are adding has the same fields\n         * as the master entry. */\n        int64_t master_fields_count = lpGetInteger(lp_ele);\n        lp_ele = lpNext(lp,lp_ele);\n        if (numfields == master_fields_count) {\n            int64_t i;\n            for (i = 0; i < master_fields_count; i++) {\n                sds field = argv[i*2]->ptr;\n                int64_t e_len;\n                unsigned char buf[LP_INTBUF_SIZE];\n                unsigned char *e = lpGet(lp_ele,&e_len,buf);\n                /* Stop if there is a mismatch. */\n                if (sdslen(field) != (size_t)e_len ||\n                    memcmp(e,field,e_len) != 0) break;\n                lp_ele = lpNext(lp,lp_ele);\n            }\n            /* All fields are the same! We can compress the field names\n             * by setting a single bit in the flags. */\n            if (i == master_fields_count) flags |= STREAM_ITEM_FLAG_SAMEFIELDS;\n        }\n    }\n\n    /* Populate the listpack with the new entry. 
We use the following\n     * encoding:\n     *\n     * +-----+--------+----------+-------+-------+-/-+-------+-------+--------+\n     * |flags|entry-id|num-fields|field-1|value-1|...|field-N|value-N|lp-count|\n     * +-----+--------+----------+-------+-------+-/-+-------+-------+--------+\n     *\n     * However if the SAMEFIELDS flag is set, we just have to populate\n     * the entry with the values, so it becomes:\n     *\n     * +-----+--------+-------+-/-+-------+--------+\n     * |flags|entry-id|value-1|...|value-N|lp-count|\n     * +-----+--------+-------+-/-+-------+--------+\n     *\n     * The entry-id field is actually two separate fields: the ms\n     * and seq difference compared to the master entry.\n     *\n     * The lp-count field is a number that states the number of listpack pieces\n     * that compose the entry, so that it's possible to traverse the entry\n     * in reverse order: we can just start from the end of the listpack, read\n     * the entry, and jump back N times to seek the \"flags\" field to read\n     * the full stream entry. */\n    size_t oldsize = lpBytes(lp);\n    lp = lpAppendInteger(lp,flags);\n    lp = lpAppendInteger(lp,id.ms - master_id.ms);\n    lp = lpAppendInteger(lp,id.seq - master_id.seq);\n    if (!(flags & STREAM_ITEM_FLAG_SAMEFIELDS))\n        lp = lpAppendInteger(lp,numfields);\n    for (int64_t i = 0; i < numfields; i++) {\n        sds field = argv[i*2]->ptr, value = argv[i*2+1]->ptr;\n        if (!(flags & STREAM_ITEM_FLAG_SAMEFIELDS))\n            lp = lpAppend(lp,(unsigned char*)field,sdslen(field));\n        lp = lpAppend(lp,(unsigned char*)value,sdslen(value));\n    }\n    /* Compute and store the lp-count field. */\n    int64_t lp_count = numfields;\n    lp_count += 3; /* Add the 3 fixed fields flags + ms-diff + seq-diff. */\n    if (!(flags & STREAM_ITEM_FLAG_SAMEFIELDS)) {\n        /* If the item is not compressed, it also has the fields other than\n         * the values, and an additional num-fields field. 
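The lp-count bookkeeping used for reverse scans can be sketched as a pure function; `sketch_lp_count` is an illustrative name mirroring the arithmetic in this function (3 fixed elements, plus field names and a num-fields element when SAMEFIELDS is not set):

```c
#include <assert.h>

/* Number of listpack elements one stream entry occupies, so a reverse
 * scan can jump from the trailing lp-count back to the entry's flags. */
static long sketch_lp_count(long numfields, int samefields) {
    long count = numfields;      /* the values (one element each) */
    count += 3;                  /* flags + ms-diff + seq-diff */
    if (!samefields)
        count += numfields + 1;  /* field names + the num-fields element */
    return count;
}
```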
*/\n        lp_count += numfields+1;\n    }\n    lp = lpAppendInteger(lp,lp_count);\n    s->alloc_size -= oldsize;\n    s->alloc_size += lpBytes(lp);\n\n    /* Insert back into the tree in order to update the listpack pointer. */\n    if (ri.data != lp)\n        raxInsert(s->rax,(unsigned char*)&rax_key,sizeof(rax_key),lp,NULL);\n    s->length++;\n    s->entries_added++;\n    s->last_id = id;\n    if (s->length == 1) s->first_id = id;\n    if (added_id) *added_id = id;\n    return C_OK;\n}\n\ntypedef struct {\n    /* XADD options */\n    streamID id; /* User-provided ID, for XADD only. */\n    int id_given; /* Was an ID different than \"*\" specified? for XADD only. */\n    int seq_given; /* Was an ID different than \"ms-*\" specified? for XADD only. */\n    int no_mkstream; /* If set to 1, do not create a new stream. */\n    robj *idmp_pid; /* IDMP producer id parameter, for XADD only. */\n    robj *idmp_iid; /* IDMP idempotent id parameter, for XADD only. */\n    int idmp_auto; /* If set to 1, auto-generate IID from field-value pairs, for XADD only. */\n\n    /* XADD + XTRIM common options */\n    int trim_strategy; /* TRIM_STRATEGY_* */\n    int trim_strategy_arg_idx; /* Index of the count in MAXLEN/MINID, for rewriting. */\n    int delete_strategy; /* DELETE_STRATEGY_* */\n    int approx_trim; /* If 1, only delete whole radix tree nodes, so\n                      * the trim argument is not applied verbatim.\n                      * Note: This flag is ignored when delete_strategy is non-KEEPREF.\n                      * Individual entries may still be processed for consumer groups. */\n    long long limit; /* Maximum amount of entries to trim. If 0, no limitation\n                      * on the amount of trimming work is enforced. */\n    /* TRIM_STRATEGY_MAXLEN options */\n    long long maxlen; /* After trimming, leave the stream at this length. 
*/\n    /* TRIM_STRATEGY_MINID options */\n    streamID minid; /* Trim by ID (No stream entries with ID < 'minid' will remain) */\n} streamAddTrimArgs;\n\n#define TRIM_STRATEGY_NONE 0\n#define TRIM_STRATEGY_MAXLEN 1\n#define TRIM_STRATEGY_MINID 2\n\ntypedef struct {\n    int startidx; /* Starting index of IDs in argv */\n    long numids; /* Number of IDs to process */\n    int delete_strategy; /* DELETE_STRATEGY_* */\n} streamAckDelArgs;\n\n#define DELETE_STRATEGY_NONE 0\n#define DELETE_STRATEGY_KEEPREF 1   /* Delete and keep references */\n#define DELETE_STRATEGY_DELREF 2    /* Delete from pending entries list */\n#define DELETE_STRATEGY_ACKED 3     /* Only delete messages that are acknowledged */\n\n/* XNACK mode flags – control how the delivery counter is adjusted when\n * a pending entry is released back to the group (NACKed). */\n#define XNACK_SILENT 0  /* Decrement delivery_count by 1 (undo the delivery) */\n#define XNACK_FAIL   1  /* Keep delivery_count unchanged */\n#define XNACK_FATAL  2  /* Set delivery_count to LLONG_MAX (permanent failure) */\n\n/* Set the delivery attempts counter on a NACK entry.  When retrycount >= 0\n * the counter is set to that explicit value; otherwise it is adjusted\n * according to the XNACK mode (SILENT/FAIL/FATAL). */\nstatic void nackSetDeliveryCount(streamNACK *nack, int mode, long long retrycount) {\n    if (retrycount >= 0) {\n        nack->delivery_count = (uint64_t)retrycount;\n    } else {\n        switch (mode) {\n        case XNACK_SILENT:\n            if (nack->delivery_count > 0)\n                nack->delivery_count--;\n            break;\n        case XNACK_FAIL:\n            break;\n        case XNACK_FATAL:\n            nack->delivery_count = LLONG_MAX;\n            break;\n        }\n    }\n}\n\n/* Trim the stream 's' according to args->trim_strategy, and return the\n * number of elements removed from the stream. 
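The trimming decisions below lean on `streamCompareID`, which orders stream IDs by the millisecond part first and the sequence part second; a minimal stand-in with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for streamID / streamCompareID:
 * returns <0, 0, >0 like a classic comparator. */
typedef struct { uint64_t ms, seq; } sketchID;

static int sketch_compare_id(const sketchID *a, const sketchID *b) {
    if (a->ms != b->ms) return a->ms < b->ms ? -1 : 1;
    if (a->seq != b->seq) return a->seq < b->seq ? -1 : 1;
    return 0;
}
```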
The 'approx' option, if non-zero,\n * specifies that the trimming must be performed in an approximate way in\n * order to maximize performance. This means that the stream may contain\n * entries with IDs < 'id' in case of MINID (or more elements than 'maxlen'\n * in case of MAXLEN), and elements are only removed if we can remove\n * a *whole* node of the radix tree. The elements are removed from the head\n * of the stream (older elements).\n *\n * The function may return zero if:\n *\n * 1) The minimal entry ID of the stream is already >= 'id' (MINID); or\n * 2) The stream is already shorter than or equal to the specified max length (MAXLEN); or\n * 3) The 'approx' option is true and the head node did not have enough elements\n *    to be deleted.\n *\n * args->limit is the maximum number of entries to delete. The purpose is to\n * prevent this function from taking too long.\n * If 'limit' is 0 then we do not limit the number of deleted entries.\n * Much like 'approx', if 'limit' is smaller than the number of entries\n * that should be trimmed, there is a chance we will still have entries with\n * IDs < 'id' (or number of elements >= maxlen in case of MAXLEN).\n */\nint64_t streamTrim(stream *s, streamAddTrimArgs *args) {\n    size_t maxlen = args->maxlen;\n    streamID *id = &args->minid;\n    int approx = args->approx_trim;\n    int64_t limit = args->limit;\n    int trim_strategy = args->trim_strategy;\n    int delete_strategy = args->delete_strategy;\n\n    if (trim_strategy == TRIM_STRATEGY_NONE)\n        return 0;\n\n    raxIterator ri;\n    raxStart(&ri,s->rax);\n    raxSeek(&ri,\"^\",NULL,0);\n\n    int64_t deleted = 0;\n    while (raxNext(&ri)) {\n        if (trim_strategy == TRIM_STRATEGY_MAXLEN && s->length <= maxlen)\n            break;\n\n        unsigned char *lp = ri.data, *p = lpFirst(lp);\n        int64_t entries = lpGetInteger(p);\n\n        /* Check if we exceeded the amount of work we could do */\n        if (limit && (deleted + entries) > limit)\n
           break;\n\n        /* Check if we can remove the whole node */\n        int remove_node = 0; /* Final decision flag for node removal */\n        int node_eligible_for_remove = 0; /* Whether node meets the basic criteria for removal */\n        streamID master_id = {0};\n        /* Read the master ID from the radix tree key. */\n        streamDecodeID(ri.key, &master_id);\n        if (trim_strategy == TRIM_STRATEGY_MAXLEN) {\n            node_eligible_for_remove = s->length - entries >= maxlen;\n        } else {\n            /* Read last ID. */\n            streamID last_id = {0,0};\n            lpGetEdgeStreamID(lp, 0, &master_id, &last_id);\n\n            /* We can remove the entire node if its last ID < 'id'. */\n            node_eligible_for_remove = streamCompareID(&last_id, id) < 0;\n        }\n\n        if (node_eligible_for_remove && delete_strategy == DELETE_STRATEGY_KEEPREF) {\n            /* With KEEPREF strategy, we can remove the whole node directly since we don't need\n             * to check or clean up consumer group references. */\n            remove_node = 1;\n        }\n\n        if (remove_node) {\n            s->alloc_size -= lpBytes(lp);\n            lpFree(lp);\n            raxRemove(s->rax,ri.key,ri.key_len,NULL);\n            raxSeek(&ri,\">=\",ri.key,ri.key_len);\n            s->length -= entries;\n            deleted += entries;\n            continue;\n        }\n\n        /* If we cannot remove a whole node, and approx is true,\n         * stop here. However, for non-KEEPREF strategies, if the node was\n         * eligible for removal but we couldn't remove it (because we need\n         * to check consumer group references), we should continue to process\n         * entries within this node. 
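The MAXLEN half of the whole-node eligibility test reduces to simple arithmetic; a sketch with illustrative names (not Redis functions):

```c
#include <assert.h>

/* A node holding 'node_entries' live entries may be dropped outright only
 * if the stream would still have at least 'maxlen' entries without it. */
static int sketch_node_removable_maxlen(long long stream_len,
                                        long long node_entries,
                                        long long maxlen) {
    return stream_len - node_entries >= maxlen;
}
```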
*/\n        if (approx && delete_strategy == DELETE_STRATEGY_KEEPREF) break;\n\n        /* Now we have to trim entries from within 'lp' */\n        size_t oldsize = lpBytes(lp);\n        int64_t deleted_from_lp = 0;\n\n        p = lpNext(lp, p); /* Skip deleted field. */\n        p = lpNext(lp, p); /* Skip num-of-fields in the master entry. */\n\n        /* Skip all the master fields. */\n        int64_t master_fields_count = lpGetInteger(p);\n        p = lpNext(lp,p); /* Skip the first field. */\n        for (int64_t j = 0; j < master_fields_count; j++)\n            p = lpNext(lp,p); /* Skip all master fields. */\n        p = lpNext(lp,p); /* Skip the zero master entry terminator. */\n\n        /* 'p' is now pointing to the first entry inside the listpack.\n         * We have to run entry after entry, marking entries as deleted\n         * if they are not already deleted. */\n        while (p) {\n            /* We keep a copy of p (which points to the flags part) in order to\n             * update it after (and if) we actually remove the entry. */\n            unsigned char *pcopy = p;\n\n            int64_t flags = lpGetInteger(p);\n            p = lpNext(lp, p); /* Skip flags. */\n            int64_t to_skip;\n\n            int64_t ms_delta = lpGetInteger(p);\n            p = lpNext(lp, p); /* Skip ID ms delta */\n            int64_t seq_delta = lpGetInteger(p);\n            p = lpNext(lp, p); /* Skip ID seq delta */\n\n            streamID currid = {0};\n            currid.ms = master_id.ms + ms_delta;\n            currid.seq = master_id.seq + seq_delta;\n\n            int stop;\n            if (trim_strategy == TRIM_STRATEGY_MAXLEN) {\n                stop = s->length <= maxlen;\n            } else {\n                /* Following IDs will definitely be greater because the rax\n                 * tree is sorted, so there is no point in continuing. 
*/\n                stop = streamCompareID(&currid, id) >= 0;\n            }\n            if (stop)\n                break;\n\n            if (flags & STREAM_ITEM_FLAG_SAMEFIELDS) {\n                to_skip = master_fields_count;\n            } else {\n                to_skip = lpGetInteger(p); /* Get num-fields. */\n                p = lpNext(lp,p); /* Skip num-fields. */\n                to_skip *= 2; /* Fields and values. */\n            }\n\n            while(to_skip--) p = lpNext(lp,p); /* Skip the whole entry. */\n            p = lpNext(lp,p); /* Skip the final lp-count field. */\n\n            /* Mark the entry as deleted if allowed. */\n            if (!(flags & STREAM_ITEM_FLAG_DELETED)) {\n                int can_delete = 1;\n                if (delete_strategy == DELETE_STRATEGY_ACKED) {\n                    /* Only delete entry that has been acknowledged by all consumer groups. */\n                    can_delete = (streamEntryIsReferenced(s, &currid) == 0);\n                } else if (delete_strategy == DELETE_STRATEGY_DELREF) {\n                    /* Remove all consumer group references for this entry */\n                    streamCleanupEntryCGroupRefs(s, &currid);\n                }\n\n                if (can_delete) {\n                    /* Mark the entry as deleted. */\n                    intptr_t delta = p ? (p - lp) : 0; /* p may be NULL if this was the last entry */\n                    flags |= STREAM_ITEM_FLAG_DELETED;\n                    lp = lpReplaceInteger(lp, &pcopy, flags);\n                    deleted_from_lp++;\n                    s->length--;\n                    if (p) p = lp + delta;\n                }\n            }\n        }\n        deleted += deleted_from_lp;\n        /* If this node was originally eligible for removal but we couldn't remove it upfront\n         * due to delete strategy constraints, and now we've processed and deleted all entries\n         * in the node, we can finally remove the entire node. 
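The delta/rebase dance above guards against `lpReplaceInteger` reallocating the listpack while an interior pointer (`p`) is held. The same pattern, shown with plain `realloc()` on a byte buffer (illustrative helper, not a Redis API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Grow 'buf' to 'newlen' while keeping '*cursor' valid: save the cursor
 * as an offset, reallocate, then rebase the cursor on the new buffer. */
static char *sketch_grow_keeping_cursor(char *buf, size_t newlen, char **cursor) {
    ptrdiff_t delta = *cursor ? (*cursor - buf) : 0;
    char *newbuf = realloc(buf, newlen);
    if (newbuf && *cursor) *cursor = newbuf + delta; /* rebase the cursor */
    return newbuf;
}
```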
*/\n        if (node_eligible_for_remove && deleted_from_lp == entries) {\n            s->alloc_size -= oldsize;\n            lpFree(lp);\n            raxRemove(s->rax,ri.key,ri.key_len,NULL);\n            raxSeek(&ri,\">=\",ri.key,ri.key_len);\n            continue;\n        }\n\n        /* Now we update the entries/deleted counters. */\n        p = lpFirst(lp);\n        lp = lpReplaceInteger(lp,&p,entries-deleted_from_lp);\n        p = lpNext(lp,p); /* Skip deleted field. */\n        int64_t marked_deleted = lpGetInteger(p);\n        lp = lpReplaceInteger(lp,&p,marked_deleted+deleted_from_lp);\n        p = lpNext(lp,p); /* Skip num-of-fields in the master entry. */\n        s->alloc_size -= oldsize;\n        s->alloc_size += lpBytes(lp);\n\n        /* Here we should perform garbage collection in case at this point\n         * there are too many entries deleted inside the listpack. */\n        entries -= deleted_from_lp;\n        marked_deleted += deleted_from_lp;\n        if (entries + marked_deleted > 10 && marked_deleted > entries/2) {\n            /* TODO: perform a garbage collection. */\n        }\n\n        /* Update the node with the new pointer. */\n        raxSetData(ri.node,lp);\n\n        /* If the node is eligible for removal but we couldn't remove it due to delete strategy\n         * constraints (we need to check each entry individually), continue to the next node\n         * instead of stopping here. */\n        if (node_eligible_for_remove)\n            continue;\n\n        break; /* If we are here, there was enough to delete in the current\n                  node, so no need to go to the next node. */\n    }\n    raxStop(&ri);\n\n    /* Update the stream's first ID after the trimming. */\n    if (s->length == 0) {\n        s->first_id.ms = 0;\n        s->first_id.seq = 0;\n    } else if (deleted) {\n        streamGetEdgeID(s,1,1,&s->first_id);\n    }\n\n    return deleted;\n}\n\n/* Trims a stream by length. Returns the number of deleted items. 
*/\nint64_t streamTrimByLength(stream *s, long long maxlen, int approx) {\n    streamAddTrimArgs args = {\n        .trim_strategy = TRIM_STRATEGY_MAXLEN,\n        .approx_trim = approx,\n        .limit = approx ? 100 * server.stream_node_max_entries : 0,\n        .maxlen = maxlen,\n        .delete_strategy = DELETE_STRATEGY_KEEPREF\n    };\n    return streamTrim(s, &args);\n}\n\n/* Trims a stream by minimum ID. Returns the number of deleted items. */\nint64_t streamTrimByID(stream *s, streamID minid, int approx) {\n    streamAddTrimArgs args = {\n        .trim_strategy = TRIM_STRATEGY_MINID,\n        .approx_trim = approx,\n        .limit = approx ? 100 * server.stream_node_max_entries : 0,\n        .minid = minid,\n        .delete_strategy = DELETE_STRATEGY_KEEPREF\n    };\n    return streamTrim(s, &args);\n}\n\n/* Parse the arguments of XADD/XTRIM.\n *\n * See streamAddTrimArgs for more details about the arguments handled.\n *\n * This function returns the position of the ID argument (relevant only to XADD).\n * On error -1 is returned and a reply is sent. */\nstatic int streamParseAddOrTrimArgsOrReply(client *c, streamAddTrimArgs *args, int xadd) {\n    /* Initialize arguments to defaults */\n    memset(args, 0, sizeof(*args));\n    args->delete_strategy = DELETE_STRATEGY_NONE;\n\n    /* Parse options. */\n    int i = 2; /* This is the first argument position where we could\n                  find an option, or the ID. */\n    int limit_given = 0;\n    for (; i < c->argc; i++) {\n        int moreargs = (c->argc-1) - i; /* Number of additional arguments. */\n        char *opt = c->argv[i]->ptr;\n        if (xadd && opt[0] == '*' && opt[1] == '\\0') {\n            /* This is just a fast path for the common case of auto-ID\n             * creation. 
*/\n            break;\n        } else if (!strcasecmp(opt,\"maxlen\") && moreargs) {\n            if (args->trim_strategy != TRIM_STRATEGY_NONE) {\n                addReplyError(c,\"syntax error, MAXLEN and MINID options at the same time are not compatible\");\n                return -1;\n            }\n            args->approx_trim = 0;\n            char *next = c->argv[i+1]->ptr;\n            /* Check for the form MAXLEN ~ <count>. */\n            if (moreargs >= 2 && next[0] == '~' && next[1] == '\\0') {\n                args->approx_trim = 1;\n                i++;\n            } else if (moreargs >= 2 && next[0] == '=' && next[1] == '\\0') {\n                i++;\n            }\n            if (getLongLongFromObjectOrReply(c,c->argv[i+1],&args->maxlen,NULL)\n                != C_OK) return -1;\n\n            if (args->maxlen < 0) {\n                addReplyError(c,\"The MAXLEN argument must be >= 0.\");\n                return -1;\n            }\n            i++;\n            args->trim_strategy = TRIM_STRATEGY_MAXLEN;\n            args->trim_strategy_arg_idx = i;\n        } else if (!strcasecmp(opt,\"minid\") && moreargs) {\n            if (args->trim_strategy != TRIM_STRATEGY_NONE) {\n                addReplyError(c,\"syntax error, MAXLEN and MINID options at the same time are not compatible\");\n                return -1;\n            }\n            args->approx_trim = 0;\n            char *next = c->argv[i+1]->ptr;\n            /* Check for the form MINID ~ <id> */\n            if (moreargs >= 2 && next[0] == '~' && next[1] == '\\0') {\n                args->approx_trim = 1;\n                i++;\n            } else if (moreargs >= 2 && next[0] == '=' && next[1] == '\\0') {\n                i++;\n            }\n\n            if (streamParseStrictIDOrReply(c,c->argv[i+1],&args->minid,0,NULL) != C_OK)\n                return -1;\n            \n            i++;\n            args->trim_strategy = TRIM_STRATEGY_MINID;\n            args->trim_strategy_arg_idx = 
i;\n        } else if (!strcasecmp(opt,\"limit\") && moreargs) {\n            /* Note about LIMIT: If it was not provided by the caller we set\n             * it to 100*server.stream_node_max_entries, and that's to prevent the\n             * trimming from taking too long, on the expense of not deleting entries\n             * that should be trimmed.\n             * If user wanted exact trimming (i.e. no '~') we never limit the number\n             * of trimmed entries */\n            if (getLongLongFromObjectOrReply(c,c->argv[i+1],&args->limit,NULL) != C_OK)\n                return -1;\n\n            if (args->limit < 0) {\n                addReplyError(c,\"The LIMIT argument must be >= 0.\");\n                return -1;\n            }\n            limit_given = 1;\n            i++;\n        } else if (!strcasecmp(opt,\"keepref\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_KEEPREF;\n        } else if (!strcasecmp(opt,\"delref\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_DELREF;\n        } else if (!strcasecmp(opt,\"acked\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_ACKED;\n        } else if (xadd && !strcasecmp(opt,\"nomkstream\")) {\n            args->no_mkstream = 1;\n        } else if (xadd && !strcasecmp(opt,\"idmpauto\") && moreargs) {\n            /* IDMPAUTO pid - auto-generate IID from field-value pairs */\n            if (args->idmp_pid != NULL) {\n                addReplyError(c,\"syntax error, IDMP/IDMPAUTO specified multiple times\");\n                return -1;\n            }\n\n            size_t pid_len = sdslen((sds)c->argv[i+1]->ptr);\n            if (pid_len == 0) {\n                addReplyError(c, \"syntax error, IDMPAUTO requires a non-empty producer ID\");\n                return -1;\n            }\n\n            args->idmp_pid = c->argv[i+1];\n            
args->idmp_auto = 1;\n            i++;\n        } else if (xadd && !strcasecmp(opt,\"idmp\") && moreargs >= 2) {\n            /* IDMP pid iid - explicit producer ID and idempotent ID */\n            if (args->idmp_pid != NULL) {\n                addReplyError(c,\"syntax error, IDMP/IDMPAUTO specified multiple times\");\n                return -1;\n            }\n\n            size_t pid_len = sdslen((sds)c->argv[i+1]->ptr);\n            if (pid_len == 0) {\n                addReplyError(c, \"syntax error, IDMP requires a non-empty producer ID\");\n                return -1;\n            }\n\n            size_t iid_len = sdslen((sds)c->argv[i+2]->ptr);\n            if (iid_len == 0) {\n                addReplyError(c, \"syntax error, IDMP requires a non-empty idempotent ID\");\n                return -1;\n            }\n\n            args->idmp_pid = c->argv[i+1];\n            args->idmp_iid = c->argv[i+2];\n            i += 2;\n        } else if (xadd) {\n            /* If we are here is a syntax error or a valid ID. */\n            if (streamParseStrictIDOrReply(c,c->argv[i],&args->id,0,&args->seq_given) != C_OK)\n                return -1;\n\n            /* mustObeyClient is needed because IDMP can only be used with * (auto-generated IDs),\n             * but when we replicate the message we replace the * with the actual StreamID. 
*/\n            if (args->idmp_pid && opt[0] != '*' && !mustObeyClient(c)) {\n                addReplyError(c,\"syntax error, IDMP/IDMPAUTO can be used only with auto-generated IDs\");\n                return -1;\n            }\n            args->id_given = 1;\n            break;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return -1;\n        }\n    }\n\n    if (args->limit && args->trim_strategy == TRIM_STRATEGY_NONE) {\n        addReplyError(c,\"syntax error, LIMIT cannot be used without specifying a trimming strategy\");\n        return -1;\n    }\n\n    if (!xadd && args->trim_strategy == TRIM_STRATEGY_NONE) {\n        addReplyError(c,\"syntax error, XTRIM must be called with a trimming strategy\");\n        return -1;\n    }\n\n    if (mustObeyClient(c)) {\n        /* If the command came from the master or from the AOF we must not\n         * enforce maxnodes (the maxlen/minid argument was re-written to make\n         * sure there's no inconsistency). */\n        args->limit = 0;\n    } else {\n        /* We need to set the limit (only if we got '~') */\n        if (limit_given) {\n            if (!args->approx_trim) {\n                /* LIMIT was provided without ~ */\n                addReplyError(c,\"syntax error, LIMIT cannot be used without the special ~ option\");\n                return -1;\n            }\n        } else {\n            /* The user didn't provide LIMIT, we must set it. */\n            if (args->approx_trim) {\n                /* In order to prevent trimming from doing too much work and\n                 * causing latency spikes, we limit the amount of work it can\n                 * do. We have to cap args->limit from both sides in case\n                 * stream_node_max_entries is 0 or too big (could cause\n                 * overflow). */\n                args->limit = 100 * server.stream_node_max_entries; /* Maximum 100 rax nodes. 
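The LIMIT defaulting and clamping can be isolated as a pure function; `sketch_default_trim_limit` is an illustrative name using the same constants as the code here:

```c
#include <assert.h>

/* With '~' and no explicit LIMIT, cap trimming work at roughly 100 rax
 * nodes' worth of entries, clamped so a zero or huge
 * stream_node_max_entries setting still yields a sane bound. */
static long long sketch_default_trim_limit(long long node_max_entries) {
    long long limit = 100 * node_max_entries;
    if (limit <= 0) limit = 10000;
    if (limit > 1000000) limit = 1000000;
    return limit;
}
```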
*/\n                if (args->limit <= 0) args->limit = 10000;\n                if (args->limit > 1000000) args->limit = 1000000;\n            } else {\n                /* No LIMIT for exact trimming */\n                args->limit = 0;\n            }\n        }\n    }\n\n    /* Set default consumer group reference handling to KEEPREF if none was specified */\n    if (args->delete_strategy == DELETE_STRATEGY_NONE)\n        args->delete_strategy = DELETE_STRATEGY_KEEPREF;\n\n    return i;\n}\n\nstatic int streamParseAckDelArgsOrReply(client *c, int start_pos, streamAckDelArgs *args) {\n    /* Initialize arguments to defaults */\n    memset(args, 0, sizeof(*args));\n    args->startidx = -1;\n    args->delete_strategy = DELETE_STRATEGY_NONE;\n\n    /* Parse command options */\n    int j = start_pos;\n    while (j < c->argc) {\n        char *opt = c->argv[j]->ptr;\n        if (!strcasecmp(opt, \"KEEPREF\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_KEEPREF;\n            j++;\n        } else if (!strcasecmp(opt, \"DELREF\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_DELREF;\n            j++;\n        } else if (!strcasecmp(opt, \"ACKED\") && args->delete_strategy == DELETE_STRATEGY_NONE) {\n            args->delete_strategy = DELETE_STRATEGY_ACKED;\n            j++;\n        } else if (!strcasecmp(opt, \"IDS\") && j+1 < c->argc) {\n            /* Parse the number of IDs */\n            if (getRangeLongFromObjectOrReply(c, c->argv[j+1], 1, LONG_MAX,\n                &args->numids, \"Number of IDs must be a positive integer\") != C_OK)\n            {\n                return 0;\n            }\n\n            /* Verify that the specified number of IDs matches the actual arguments */\n            if (args->numids > (c->argc - j - 2)) {\n                addReplyError(c, \"The `numids` parameter must match the number of arguments\");\n                
return 0;\n            }\n\n            args->startidx = j + 2;  /* Skip \"IDS\" and numids */\n            j = args->startidx + args->numids;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return 0;\n        }\n    }\n\n    if (args->startidx == -1) {\n        addReplyError(c, \"IDS option is required\");\n        return 0;\n    }\n\n    /* Set default consumer group reference handling to KEEPREF if none was specified */\n    if (args->delete_strategy == DELETE_STRATEGY_NONE)\n        args->delete_strategy = DELETE_STRATEGY_KEEPREF;\n\n    return 1;\n}\n\n/* Initialize the stream iterator, so that we can call iterating functions\n * to get the next items. This requires a corresponding streamIteratorStop()\n * at the end. The 'rev' parameter controls the direction. If it's zero the\n * iteration is from the start to the end element (inclusive), otherwise\n * if rev is non-zero, the iteration is reversed.\n *\n * Once the iterator is initialized, we iterate like this:\n *\n *  streamIterator myiterator;\n *  streamIteratorStart(&myiterator,...);\n *  int64_t numfields;\n *  while(streamIteratorGetID(&myiterator,&ID,&numfields)) {\n *      while(numfields--) {\n *          unsigned char *key, *value;\n *          int64_t key_len, value_len;\n *          streamIteratorGetField(&myiterator,&key,&value,&key_len,&value_len);\n *\n *          ... do what you want with key and value ...\n *      }\n *  }\n *  streamIteratorStop(&myiterator); */\nvoid streamIteratorStart(streamIterator *si, stream *s, streamID *start, streamID *end, int rev) {\n    /* Initialize the iterator and translate the iteration start/stop\n     * elements into 128-bit big-endian numbers. 
*/\n    if (start) {\n        streamEncodeID(si->start_key,start);\n    } else {\n        si->start_key[0] = 0;\n        si->start_key[1] = 0;\n    }\n\n    if (end) {\n        streamEncodeID(si->end_key,end);\n    } else {\n        si->end_key[0] = UINT64_MAX;\n        si->end_key[1] = UINT64_MAX;\n    }\n\n    /* Decode the big-endian keys into native 64-bit integers\n     * for faster comparisons during iteration. */\n    si->start_ms  = htonu64(si->start_key[0]);\n    si->start_seq = htonu64(si->start_key[1]);\n    si->end_ms    = htonu64(si->end_key[0]);\n    si->end_seq   = htonu64(si->end_key[1]);\n\n    /* Seek the correct node in the radix tree. */\n    raxStart(&si->ri,s->rax);\n    if (!rev) {\n        if (start && (start->ms || start->seq)) {\n            raxSeek(&si->ri,\"<=\",(unsigned char*)si->start_key,\n                    sizeof(si->start_key));\n            if (raxEOF(&si->ri)) raxSeek(&si->ri,\"^\",NULL,0);\n        } else {\n            raxSeek(&si->ri,\"^\",NULL,0);\n        }\n    } else {\n        if (end && (end->ms || end->seq)) {\n            raxSeek(&si->ri,\"<=\",(unsigned char*)si->end_key,\n                    sizeof(si->end_key));\n            if (raxEOF(&si->ri)) raxSeek(&si->ri,\"$\",NULL,0);\n        } else {\n            raxSeek(&si->ri,\"$\",NULL,0);\n        }\n    }\n    si->stream = s;\n    si->lp = NULL;     /* There is no current listpack right now. */\n    si->lp_last_ele = NULL;\n    si->lp_ele = NULL; /* Current listpack cursor. */\n    si->rev = rev;     /* Direction, if non-zero reversed, from end to start. */\n    si->skip_tombstones = 1;    /* By default tombstones aren't emitted. */\n}\n\n/* Return 1 and store the current item ID at 'id' if there are still\n * elements within the iteration range, otherwise return 0 in order to\n * signal the iteration terminated. 
*/\nint streamIteratorGetID(streamIterator *si, streamID *id, int64_t *numfields) {\n    while(1) { /* Will stop when element > stop_key or end of radix tree. */\n        /* Record the previous lp_ele position to detect data corruption\n         * that might cause the iterator to move backwards unexpectedly. */\n        if (si->lp_ele && si->lp_last_ele)\n            serverAssert(si->rev ? si->lp_ele < si->lp_last_ele : si->lp_ele > si->lp_last_ele);\n        si->lp_last_ele = si->lp_ele;\n\n        /* If the current listpack is set to NULL, this is the start of the\n         * iteration or the previous listpack was completely iterated.\n         * Go to the next node. */\n        if (si->lp == NULL || si->lp_ele == NULL) {\n            if (!si->rev && !raxNext(&si->ri)) return 0;\n            else if (si->rev && !raxPrev(&si->ri)) return 0;\n            serverAssert(si->ri.key_len == sizeof(streamID));\n            /* Get the master ID. */\n            streamDecodeID(si->ri.key,&si->master_id);\n            /* Get the master fields count. */\n            si->lp = si->ri.data;\n            si->lp_ele = lpFirst(si->lp);           /* Seek items count */\n            si->lp_ele = lpNext(si->lp,si->lp_ele); /* Seek deleted count. */\n            si->lp_ele = lpNext(si->lp,si->lp_ele); /* Seek num fields. */\n            si->master_fields_count = lpGetInteger(si->lp_ele);\n            si->lp_ele = lpNext(si->lp,si->lp_ele); /* Seek first field. */\n            si->master_fields_start = si->lp_ele;\n            /* We are now pointing to the first field of the master entry.\n             * We need to seek either the first or the last entry depending\n             * on the direction of the iteration. */\n            if (!si->rev) {\n                /* If we are iterating in normal order, skip the master fields\n                 * to seek the first actual entry. 
*/\n                for (uint64_t i = 0; i < si->master_fields_count; i++)\n                    si->lp_ele = lpNext(si->lp,si->lp_ele);\n            } else {\n                /* If we are iterating in reverse direction, just seek the\n                 * last part of the last entry in the listpack (that is, the\n                 * fields count). */\n                si->lp_ele = lpLast(si->lp);\n            }\n        } else if (si->rev) {\n            /* If we are iterating in the reverse order, and this is not\n             * the first entry emitted for this listpack, then we already\n             * emitted the current entry, and have to go back to the previous\n             * one. */\n            int64_t lp_count = lpGetInteger(si->lp_ele);\n            while(lp_count--) si->lp_ele = lpPrev(si->lp,si->lp_ele);\n            /* Seek lp-count of prev entry. */\n            si->lp_ele = lpPrev(si->lp,si->lp_ele);\n        }\n\n        /* For every radix tree node, iterate the corresponding listpack,\n         * returning elements when they are within range. */\n        while(1) {\n            if (!si->rev) {\n                /* If we are going forward, skip the previous entry\n                 * lp-count field (or in case of the master entry, the zero\n                 * term field) */\n                si->lp_ele = lpNext(si->lp,si->lp_ele);\n                if (si->lp_ele == NULL) break;\n            } else {\n                /* If we are going backward, read the number of elements this\n                 * entry is composed of, and jump backward N times to seek\n                 * its start. */\n                int64_t lp_count = lpGetInteger(si->lp_ele);\n                if (lp_count == 0) { /* We reached the master entry. 
*/\n                    si->lp = NULL;\n                    si->lp_ele = NULL;\n                    break;\n                }\n                while(lp_count--) si->lp_ele = lpPrev(si->lp,si->lp_ele);\n            }\n\n            /* Get the flags entry. */\n            si->lp_flags = si->lp_ele;\n            int64_t flags = lpGetInteger(si->lp_ele);\n            si->lp_ele = lpNext(si->lp,si->lp_ele); /* Seek ID. */\n\n            /* Get the ID: it is encoded as difference between the master\n             * ID and this entry ID. */\n            *id = si->master_id;\n            id->ms += lpGetInteger(si->lp_ele);\n            si->lp_ele = lpNext(si->lp,si->lp_ele);\n            id->seq += lpGetInteger(si->lp_ele);\n            si->lp_ele = lpNext(si->lp,si->lp_ele);\n\n            /* The number of entries is here or not depending on the\n             * flags. */\n            if (flags & STREAM_ITEM_FLAG_SAMEFIELDS) {\n                *numfields = si->master_fields_count;\n            } else {\n                *numfields = lpGetInteger(si->lp_ele);\n                si->lp_ele = lpNext(si->lp,si->lp_ele);\n            }\n            serverAssert(*numfields>=0);\n\n            /* If current >= start, and the entry is not marked as\n             * deleted or tombstones are included, emit it. */\n            if (!si->rev) {\n                if ((id->ms > si->start_ms ||\n                    (id->ms == si->start_ms && id->seq >= si->start_seq)) &&\n                    (!si->skip_tombstones || !(flags & STREAM_ITEM_FLAG_DELETED)))\n                {\n                    if (id->ms > si->end_ms ||\n                        (id->ms == si->end_ms && id->seq > si->end_seq))\n                        return 0; /* We are already out of range. */\n                    si->entry_flags = flags;\n                    if (flags & STREAM_ITEM_FLAG_SAMEFIELDS)\n                        si->master_fields_ptr = si->master_fields_start;\n                    return 1; /* Valid item returned. 
*/\n                }\n            } else {\n                if ((id->ms < si->end_ms ||\n                    (id->ms == si->end_ms && id->seq <= si->end_seq)) &&\n                    (!si->skip_tombstones || !(flags & STREAM_ITEM_FLAG_DELETED)))\n                {\n                    if (id->ms < si->start_ms ||\n                        (id->ms == si->start_ms && id->seq < si->start_seq))\n                        return 0; /* We are already out of range. */\n                    si->entry_flags = flags;\n                    if (flags & STREAM_ITEM_FLAG_SAMEFIELDS)\n                        si->master_fields_ptr = si->master_fields_start;\n                    return 1; /* Valid item returned. */\n                }\n            }\n\n            /* If we do not emit, we have to discard if we are going\n             * forward, or seek the previous entry if we are going\n             * backward. */\n            if (!si->rev) {\n                int64_t to_discard = (flags & STREAM_ITEM_FLAG_SAMEFIELDS) ?\n                                      *numfields : *numfields*2;\n                for (int64_t i = 0; i < to_discard; i++)\n                    si->lp_ele = lpNext(si->lp,si->lp_ele);\n            } else {\n                int64_t prev_times = 4; /* flag + id ms + id seq + one more to\n                                           go back to the previous entry \"count\"\n                                           field. */\n                /* If the entry was not flagged SAMEFIELD we also read the\n                 * number of fields, so go back one more. */\n                if (!(flags & STREAM_ITEM_FLAG_SAMEFIELDS)) prev_times++;\n                while(prev_times--) si->lp_ele = lpPrev(si->lp,si->lp_ele);\n            }\n        }\n\n        /* End of listpack reached. Try the next/prev radix tree node. */\n    }\n}\n\n/* Get the field and value of the current item we are iterating. 
This should\n * be called immediately after streamIteratorGetID(), and for each field\n * according to the number of fields returned by streamIteratorGetID().\n * The function populates the field and value pointers and the corresponding\n * lengths by reference, that are valid until the next iterator call, assuming\n * no one touches the stream meanwhile. */\nvoid streamIteratorGetField(streamIterator *si, unsigned char **fieldptr, unsigned char **valueptr, int64_t *fieldlen, int64_t *valuelen) {\n    if (si->entry_flags & STREAM_ITEM_FLAG_SAMEFIELDS) {\n        *fieldptr = lpGet(si->master_fields_ptr,fieldlen,si->field_buf);\n        si->master_fields_ptr = lpNext(si->lp,si->master_fields_ptr);\n    } else {\n        *fieldptr = lpGet(si->lp_ele,fieldlen,si->field_buf);\n        si->lp_ele = lpNext(si->lp,si->lp_ele);\n    }\n    *valueptr = lpGet(si->lp_ele,valuelen,si->value_buf);\n    si->lp_ele = lpNext(si->lp,si->lp_ele);\n}\n\n/* Remove the current entry from the stream: can be called after the\n * GetID() API or after any GetField() call, however we need to iterate\n * a valid entry while calling this function. Moreover the function\n * requires the entry ID we are currently iterating, that was previously\n * returned by GetID().\n *\n * Note that after calling this function, next calls to GetField() can't\n * be performed: the entry is now deleted. Instead the iterator will\n * automatically re-seek to the next entry, so the caller should continue\n * with GetID(). */\nvoid streamIteratorRemoveEntry(streamIterator *si, streamID *current) {\n    stream *s = si->stream;\n    unsigned char *lp = si->lp;\n    size_t oldsize = lpBytes(lp);\n    int64_t aux;\n\n    /* We do not really delete the entry here. 
Instead we mark it as\n     * deleted by flagging it, and also incrementing the count of the\n     * deleted entries in the listpack header.\n     *\n     * We start flagging: */\n    int64_t flags = lpGetInteger(si->lp_flags);\n    flags |= STREAM_ITEM_FLAG_DELETED;\n    lp = lpReplaceInteger(lp,&si->lp_flags,flags);\n\n    /* Change the valid/deleted entries count in the master entry. */\n    unsigned char *p = lpFirst(lp);\n    aux = lpGetInteger(p);\n\n    if (aux == 1) {\n        /* If this is the last element in the listpack, we can remove the whole\n         * node. */\n        s->alloc_size -= oldsize;\n        lpFree(lp);\n        raxRemove(s->rax,si->ri.key,si->ri.key_len,NULL);\n    } else {\n        /* In the base case we alter the counters of valid/deleted entries. */\n        lp = lpReplaceInteger(lp,&p,aux-1);\n        p = lpNext(lp,p); /* Seek deleted field. */\n        aux = lpGetInteger(p);\n        lp = lpReplaceInteger(lp,&p,aux+1);\n        s->alloc_size -= oldsize;\n        s->alloc_size += lpBytes(lp);\n\n        /* Update the listpack with the new pointer. */\n        if (si->lp != lp)\n            raxInsert(s->rax,si->ri.key,si->ri.key_len,lp,NULL);\n    }\n\n    /* Update the number of entries counter. */\n    s->length--;\n\n    /* Re-seek the iterator to fix the now messed up state. */\n    streamID start, end;\n    if (si->rev) {\n        streamDecodeID(si->start_key,&start);\n        end = *current;\n    } else {\n        start = *current;\n        streamDecodeID(si->end_key,&end);\n    }\n    streamIteratorStop(si);\n    streamIteratorStart(si,s,&start,&end,si->rev);\n\n    /* TODO: perform a garbage collection here if the ratio between\n     * deleted and valid goes over a certain limit. */\n}\n\n/* Stop the stream iterator. The only cleanup we need is to free the rax\n * iterator, since the stream iterator itself is supposed to be stack\n * allocated. 
*/\nvoid streamIteratorStop(streamIterator *si) {\n    raxStop(&si->ri);\n}\n\n/* Return 1 if `id` exists in `s` (and not marked as deleted) */\nint streamEntryExists(stream *s, streamID *id) {\n    streamIterator si;\n    streamIteratorStart(&si,s,id,id,0);\n    streamID myid;\n    int64_t numfields;\n    int found = streamIteratorGetID(&si,&myid,&numfields);\n    streamIteratorStop(&si);\n    if (!found)\n        return 0;\n    serverAssert(streamCompareID(id,&myid) == 0);\n    return 1;\n}\n\n/* Delete the specified item ID from the stream, returning 1 if the item\n * was deleted 0 otherwise (if it does not exist). */\nint streamDeleteItem(stream *s, streamID *id) {\n    int deleted = 0;\n    streamIterator si;\n    streamIteratorStart(&si,s,id,id,0);\n    streamID myid;\n    int64_t numfields;\n    if (streamIteratorGetID(&si,&myid,&numfields)) {\n        streamIteratorRemoveEntry(&si,&myid);\n        deleted = 1;\n    }\n    streamIteratorStop(&si);\n    return deleted;\n}\n\n/* Get the last valid (non-tombstone) streamID of 's'. */\nvoid streamLastValidID(stream *s, streamID *maxid)\n{\n    streamIterator si;\n    streamIteratorStart(&si,s,NULL,NULL,1);\n    int64_t numfields;\n    if (!streamIteratorGetID(&si,maxid,&numfields) && s->length)\n        serverPanic(\"Corrupt stream, length is %llu, but no max id\", (unsigned long long)s->length);\n    streamIteratorStop(&si);\n}\n\n/* Maximum size for a stream ID string. In theory 20*2+1 should be enough,\n * But to avoid chance for off by one issues and null-term, in case this will\n * be used as parsing buffer, we use a slightly larger buffer. On the other\n * hand considering sds header is gonna add 4 bytes, we wanna keep below the\n * allocator's 48 bytes bin. */\n#define STREAM_ID_STR_LEN 44\n\nsds createStreamIDString(streamID *id) {\n    /* Optimization: pre-allocate a big enough buffer to avoid reallocs. 
*/\n    sds str = sdsnewlen(SDS_NOINIT, STREAM_ID_STR_LEN);\n    sdssetlen(str, 0);\n    return sdscatfmt(str,\"%U-%U\", id->ms,id->seq);\n}\n\n/* Emit a reply in the client output buffer by formatting a Stream ID\n * in the standard <ms>-<seq> format, as a RESP bulk string. */\nvoid addReplyStreamID(client *c, streamID *id) {\n    addReplyBulkSds(c,createStreamIDString(id));\n}\n\nvoid setDeferredReplyStreamID(client *c, void *dr, streamID *id) {\n    setDeferredReplyBulkSds(c, dr, createStreamIDString(id));\n}\n\n/* Similar to the above function, but just creates an object, usually useful\n * for replication purposes to create arguments. */\nrobj *createObjectFromStreamID(streamID *id) {\n    return createObject(OBJ_STRING, createStreamIDString(id));\n}\n\n/* Returns non-zero if the ID is 0-0. */\nint streamIDEqZero(streamID *id) {\n    return !(id->ms || id->seq);\n}\n\n/* A helper that returns non-zero if the range from 'start' to 'end'\n * contains a tombstone.\n *\n * NOTE: this assumes that the caller has verified that 'start' is less than\n * 's->last_id'. */\nint streamRangeHasTombstones(stream *s, streamID *start, streamID *end) {\n    streamID start_id, end_id;\n\n    if (!s->length || streamIDEqZero(&s->max_deleted_entry_id)) {\n        /* The stream is empty or has no tombstones. */\n        return 0;\n    }\n\n    if (start) {\n        start_id = *start;\n    } else {\n        start_id.ms = 0;\n        start_id.seq = 0;\n    }\n\n    if (end) {\n        end_id = *end;\n    } else {\n        end_id.ms = UINT64_MAX;\n        end_id.seq = UINT64_MAX;\n    }\n\n    if (streamCompareID(&start_id,&s->max_deleted_entry_id) <= 0 &&\n        streamCompareID(&s->max_deleted_entry_id,&end_id) <= 0)\n    {\n        /* start_id <= max_deleted_entry_id <= end_id: The range does include a tombstone. */\n        return 1;\n    }\n\n    /* The range doesn't include a tombstone. 
*/\n    return 0;\n}\n\n/* Replies with a consumer group's current lag, that is the number of messages\n * in the stream that are yet to be delivered. In case the lag isn't\n * available due to fragmentation, the reply to the client is a null. */\nvoid streamReplyWithCGLag(client *c, stream *s, streamCG *cg) {\n    int valid = 0;\n    long long lag = 0;\n\n    if (!s->entries_added) {\n        /* The lag of a newly-initialized stream is 0. */\n        lag = 0;\n        valid = 1;\n    } else if (!s->length) { /* All entries deleted, now empty. */\n        lag = 0;\n        valid = 1;\n    } else if (streamCompareID(&cg->last_id,&s->first_id) < 0 &&\n               streamCompareID(&s->max_deleted_entry_id,&s->first_id) < 0)\n    {\n        /* When both the consumer group's last_id and the maximum tombstone are behind\n         * the stream's first entry, the consumer group's lag will always be equal to\n         * the number of remaining entries in the stream. */\n        lag = s->length;\n        valid = 1;\n    } else if (cg->entries_read != SCG_INVALID_ENTRIES_READ && !streamRangeHasTombstones(s,&cg->last_id,NULL)) {\n        /* No fragmentation ahead means that the group's logical reads counter\n         * is valid for performing the lag calculation. */\n        lag = (long long)s->entries_added - cg->entries_read;\n        valid = 1;\n    } else {\n        /* Attempt to retrieve the group's last ID logical read counter. */\n        long long entries_read = streamEstimateDistanceFromFirstEverEntry(s,&cg->last_id);\n        if (entries_read != SCG_INVALID_ENTRIES_READ) {\n            /* A valid counter was obtained. 
*/\n            lag = (long long)s->entries_added - entries_read;\n            valid = 1;\n        }\n    }\n\n    if (valid) {\n        addReplyLongLong(c,lag);\n    } else {\n        addReplyNull(c);\n    }\n}\n\n/* This function returns a value that is the ID's logical read counter, or its\n * distance (the number of entries) from the first entry ever to have been added\n * to the stream.\n *\n * A counter is returned only in one of the following cases:\n * 1. The ID is the same as the stream's last ID. In this case, the returned\n *    value is the same as the stream's entries_added counter.\n * 2. The ID equals that of the currently first entry in the stream, and the\n *    stream has no tombstones. The returned value, in this case, is the result\n *    of subtracting the stream's length from its entries_added, incremented by\n *    one.\n * 3. The ID is less than the stream's first current entry's ID, and there are no\n *    tombstones. Here the estimated counter is the result of subtracting the\n *    stream's length from its entries_added.\n * 4. The stream's entries_added is zero, meaning that no entries were ever\n *    added.\n *\n * The special return value SCG_INVALID_ENTRIES_READ signals that the counter's\n * value isn't obtainable. It is returned in these cases:\n * 1. The provided ID, if it even exists, is somewhere between the stream's\n *    current first and last entries' IDs, or in the future.\n * 2. The stream contains one or more tombstones. */\nlong long streamEstimateDistanceFromFirstEverEntry(stream *s, streamID *id) {\n    /* The counter of any ID in an empty, never-before-used stream is 0. */\n    if (!s->entries_added) {\n        return 0;\n    }\n\n    /* In an empty stream, if the ID is smaller than or equal to the last ID,\n     * the counter can be set to the current entries_added value. 
*/\n    if (!s->length && streamCompareID(id,&s->last_id) < 1) {\n        return s->entries_added;\n    }\n\n    /* There is fragmentation (tombstones) between `id` and the stream's\n     * last-generated ID. */\n    if (!streamIDEqZero(id) && streamCompareID(id,&s->max_deleted_entry_id) < 0)\n        return SCG_INVALID_ENTRIES_READ;\n\n    int cmp_last = streamCompareID(id,&s->last_id);\n    if (cmp_last == 0) {\n        /* Return the exact counter of the last entry in the stream. */\n        return s->entries_added;\n    } else if (cmp_last > 0) {\n        /* The counter of a future ID is unknown. */\n        return SCG_INVALID_ENTRIES_READ;\n    }\n\n    int cmp_id_first = streamCompareID(id,&s->first_id);\n    int cmp_xdel_first = streamCompareID(&s->max_deleted_entry_id,&s->first_id);\n    if (streamIDEqZero(&s->max_deleted_entry_id) || cmp_xdel_first < 0) {\n        /* There's definitely no fragmentation ahead. */\n        if (cmp_id_first < 0) {\n            /* Return the estimated counter. */\n            return s->entries_added - s->length;\n        } else if (cmp_id_first == 0) {\n            /* Return the exact counter of the first entry in the stream. */\n            return s->entries_added - s->length + 1;\n        }\n    }\n\n    /* The ID is either before an XDEL that fragments the stream or an arbitrary\n     * ID. In either case, we can't make a prediction. */\n    return SCG_INVALID_ENTRIES_READ;\n}\n\n/* Copy-free version of streamPropagateXCLAIM that expects pre-created robj* arguments.\n * This is useful when propagating multiple XCLAIMs in a loop to avoid repeated\n * object creation/destruction overhead. 
*/\nstatic inline void streamPropagateXCLAIMCopyFree(int dbid, robj *key, robj *group_last_id, robj *groupname, robj *id, robj *consumername, robj *delivery_time, robj *delivery_count) {\n    /* We need to generate an XCLAIM that will work in an idempotent fashion:\n     *\n     * XCLAIM <key> <group> <consumer> 0 <id> TIME <milliseconds-unix-time>\n     *        RETRYCOUNT <count> FORCE JUSTID LASTID <id>.\n     *\n     * Note that JUSTID is useful in order to prevent XCLAIM from doing\n     * useless work on the replica side, trying to fetch the stream item. */\n    robj *argv[14];\n    argv[0] = shared.xclaim;\n    argv[1] = key;\n    argv[2] = groupname;\n    argv[3] = consumername;\n    argv[4] = shared.integers[0];\n    argv[5] = id;\n    argv[6] = shared.time;\n    argv[7] = delivery_time;\n    argv[8] = shared.retrycount;\n    argv[9] = delivery_count;\n    argv[10] = shared.force;\n    argv[11] = shared.justid;\n    argv[12] = shared.lastid;\n    argv[13] = group_last_id;\n\n    alsoPropagate(dbid,argv,14,PROPAGATE_AOF|PROPAGATE_REPL);\n}\n\n/* Propagate an XACK command to AOF and replicas. Used when a PEL entry is\n * removed implicitly (e.g. entry no longer exists during XCLAIM/XAUTOCLAIM)\n * and the NACK has no consumer, so XCLAIM propagation is not applicable. */\nstatic inline void streamPropagateXACK(int dbid, robj *key, robj *groupname, robj *id) {\n    robj *argv[4];\n    argv[0] = shared.xack;\n    argv[1] = key;\n    argv[2] = groupname;\n    argv[3] = id;\n    alsoPropagate(dbid,argv,4,PROPAGATE_AOF|PROPAGATE_REPL);\n}\n\n/* As a result of an explicit XCLAIM or XREADGROUP command, new entries\n * are created in the pending list of the stream and consumers. We need\n * to propagate these changes in the form of XCLAIM commands. 
*/\nstatic inline void streamPropagateXCLAIM(client *c, robj *key, streamCG *group, robj *groupname, robj *id, streamNACK *nack) {\n    robj *consumername = createStringObject(nack->consumer->name,sdslen(nack->consumer->name));\n    robj *delivery_time = createStringObjectFromLongLong(nack->delivery_time);\n    robj *delivery_count = createStringObjectFromLongLong(nack->delivery_count);\n    robj *group_last_id = createObjectFromStreamID(&group->last_id);\n\n    streamPropagateXCLAIMCopyFree(c->db->id, key, group_last_id, groupname, id, consumername, delivery_time, delivery_count);\n\n    decrRefCount(consumername);\n    decrRefCount(delivery_time);\n    decrRefCount(delivery_count);\n    decrRefCount(group_last_id);\n}\n\n/* We need this when we want to propagate the new last-id of a consumer group\n * that was consumed by XREADGROUP with the NOACK option: in that case we can't\n * propagate the last ID just using the XCLAIM LASTID option, so we emit\n *\n *  XGROUP SETID <key> <groupname> <id> ENTRIESREAD <entries_read>\n */\nvoid streamPropagateGroupID(client *c, robj *key, streamCG *group, robj *groupname) {\n    robj *argv[7];\n    argv[0] = shared.xgroup;\n    argv[1] = shared.setid;\n    argv[2] = key;\n    argv[3] = groupname;\n    argv[4] = createObjectFromStreamID(&group->last_id);\n    argv[5] = shared.entriesread;\n    argv[6] = createStringObjectFromLongLong(group->entries_read);\n\n    alsoPropagate(c->db->id,argv,7,PROPAGATE_AOF|PROPAGATE_REPL);\n\n    decrRefCount(argv[4]);\n    decrRefCount(argv[6]);\n}\n\n/* Propagate creation of a consumer that was implicitly created by XREADGROUP.\n * Called only when no XCLAIM commands were propagated for this consumer,\n * since XCLAIM implicitly creates the consumer on the replica.  
This covers\n * two cases:\n * (1) NOACK, where the PEL/XCLAIM path is skipped entirely.\n * (2) no messages were available to deliver (see #7140).\n *\n * XGROUP CREATECONSUMER <key> <groupname> <consumername>\n */\nvoid streamPropagateConsumerCreation(client *c, robj *key, robj *groupname, sds consumername) {\n    robj *argv[5];\n    argv[0] = shared.xgroup;\n    argv[1] = shared.createconsumer;\n    argv[2] = key;\n    argv[3] = groupname;\n    argv[4] = createObject(OBJ_STRING,sdsdup(consumername));\n\n    alsoPropagate(c->db->id,argv,5,PROPAGATE_AOF|PROPAGATE_REPL);\n\n    decrRefCount(argv[4]);\n}\n\n/* Send the stream items in the specified range to the client 'c'. The range\n * the client will receive is between start and end inclusive, if 'count' is\n * non zero, no more than 'count' elements are sent.\n *\n * The 'end' pointer can be NULL to mean that we want all the elements from\n * 'start' till the end of the stream. If 'rev' is non zero, elements are\n * produced in reversed order from end to start.\n *\n * The function returns the number of entries emitted.\n *\n * If 'min_idle_time' is not -1 and a group is specified, the function first\n * processes pending entries (from the group's PEL) that have been idle for at\n * least 'min_idle_time' milliseconds, claiming them for the specified consumer.\n * Each claimed entry is returned as a four-element array: ID, field-value pairs,\n * idle time, and delivery count. The NACK is transferred from the previous\n * consumer to the new consumer with updated delivery metadata.\n *\n * If group and consumer are not NULL, the function performs additional work:\n * 1. It updates the last delivered ID in the group in case we are\n *    sending IDs greater than the current last ID.\n * 2. If the requested IDs are already assigned to some other consumer, the\n *    function will not return it to the client.\n * 3. 
An entry in the pending list will be created for every entry delivered\n *    for the first time to this consumer.\n * 4. The group's read counter is incremented if it is already valid and there\n *    are no future tombstones, or is invalidated (set to 0) otherwise. If the\n *    counter is invalid to begin with, we try to obtain it for the last\n *    delivered ID.\n *\n * The behavior may be modified by passing non-zero flags:\n *\n * STREAM_RWR_NOACK: Do not create PEL entries, that is, the point \"3\" above\n *                   is not performed.\n * STREAM_RWR_RAWENTRIES: Do not emit array boundaries, but just the entries,\n *                        and return the number of entries emitted as usual.\n *                        This is used when the function is just used in order\n *                        to emit data and there is some higher level logic.\n * STREAM_RWR_HISTORY: Return entries from the consumer's own PEL history only.\n * STREAM_RWR_CLAIMED: Return only claimable entries from the PEL. New entries\n *                     from the stream are not returned.\n *\n * The final argument 'spi' (stream propagation info pointer) is a structure\n * filled with information needed to propagate the command execution to AOF\n * and replicas, in the case a consumer group was passed: we need to generate\n * XCLAIM commands to recreate the pending list on the AOF/replicas in that case.\n *\n * If 'spi' is set to NULL no propagation will happen even if the group was\n * given, but currently such a feature is never used by the code base, which\n * always passes 'spi' and propagates when a group is passed.\n *\n * Note that this function is recursive in certain cases. When it's called\n * with a non NULL group and consumer argument, it may call\n * streamReplyWithRangeFromConsumerPEL() in order to get entries from the\n * consumer pending entries list. 
However such a function will then call\n * streamReplyWithRange() in order to emit single entries (found in the\n * PEL by ID) to the client. This is the use case for the STREAM_RWR_RAWENTRIES\n * flag. */\n#define STREAM_RWR_NOACK (1<<0)         /* Do not create entries in the PEL. */\n#define STREAM_RWR_RAWENTRIES (1<<1)    /* Do not emit protocol for array\n                                           boundaries, just the entries. */\n#define STREAM_RWR_HISTORY (1<<2)       /* Only serve consumer local PEL. */\n#define STREAM_RWR_CLAIMED (1<<3)       /* Only serve claimed entries from PEL. */\nsize_t streamReplyWithRange(client *c, stream *s, streamID *start, streamID *end, size_t count, int rev, long long min_idle_time, streamCG *group, streamConsumer *consumer, int flags, streamPropInfo *spi, unsigned long *propCount) {\n    void *arraylen_ptr = NULL;\n    size_t arraylen = 0;\n    streamIterator si;\n    int64_t numfields;\n    streamID id;\n    int propagate_last_id = 0;\n    int noack = flags & STREAM_RWR_NOACK;\n    const int db_id = c->db->id;\n    const mstime_t cmd_time_snapshot = commandTimeSnapshot();\n    /* to be used in case of stream propagation */\n    robj *consumername = NULL;\n    robj *delivery_time = NULL;\n    robj *group_last_id = NULL;\n    if (spi && consumer) {\n        consumername = createStringObject(consumer->name,sdslen(consumer->name));\n        delivery_time = createStringObjectFromLongLong(cmd_time_snapshot);\n        group_last_id = createObjectFromStreamID(&group->last_id);\n    }\n    if (propCount) *propCount = 0;\n\n    if (group && min_idle_time != -1) {\n        arraylen_ptr = addReplyDeferredLen(c);\n        /* Scan and process the group's pending entries list (PEL) in a single loop.\n         * To prevent a dead loop caused by pelListUpdate() moving elements from the\n         * beginning to the end of the list, we store the current tail pointer before\n         * processing. 
We iterate only up to this pre-determined boundary, ensuring we\n         * never process entries that are added or moved during iteration.\n         *\n         * The iteration can terminate early when:\n         * 1. We find an entry that hasn't been idle long enough\n         * 2. We've processed enough entries to satisfy the count limit\n         * 3. We reach the pre-stored tail boundary */\n        \n        /* Store the current tail to prevent infinite loops */\n        streamNACK *tail = group->pel_time_tail;\n        size_t processed = 0;\n        \n        streamNACK *nack = group->pel_time_head;\n        while (nack) {\n            /* Capture next pointer BEFORE modifications (pelListUpdate may reorder) */\n            streamNACK *next = nack->pel_next;\n            \n            uint64_t idle = cmd_time_snapshot - nack->delivery_time;\n            if (idle < (uint64_t)min_idle_time) break;\n\n            /* Process and claim this entry */\n            uint64_t delivery_count = nack->delivery_count;\n\n            streamID pel_id;\n            streamIteratorStart(&si,s,&nack->id,&nack->id,rev);\n            if (streamIteratorGetID(&si,&pel_id,&numfields)) {\n                robj *idarg = createObjectFromStreamID(&pel_id);\n                addReplyArrayLen(c,4);\n                addReplyBulk(c,idarg);\n                addReplyArrayLen(c,numfields*2);\n\n                /* Emit field-value pairs */\n                while (numfields--) {\n                    unsigned char *key, *value;\n                    int64_t key_len, value_len;\n                    streamIteratorGetField(&si,&key,&value,&key_len,&value_len);\n                    addReplyBulkCBuffer(c,key,key_len);\n                    addReplyBulkCBuffer(c,value,value_len);\n                }\n\n                addReplyLongLong(c, idle);\n                addReplyLongLong(c, delivery_count);\n\n                /* Transfer ownership if needed */\n                if (nack->consumer != consumer) {\n       
             unsigned char buf[sizeof(streamID)];\n                    streamEncodeID(buf, &nack->id);\n                    if (nack->consumer)\n                        raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n                    nack->consumer = consumer;\n                    raxInsert(consumer->pel,buf,sizeof(buf),nack,NULL);\n                }\n                nack->delivery_count += nack->delivery_count == LLONG_MAX ? 0 : 1;\n                pelListUpdate(group, nack, cmd_time_snapshot); /* Moves element from beginning to end of list */\n\n                consumer->active_time = cmd_time_snapshot;\n\n                /* Propagate as XCLAIM */\n                if (spi) {\n                    robj *delivery_count = createStringObjectFromLongLong(nack->delivery_count);\n                    streamPropagateXCLAIMCopyFree(db_id,spi->keyname,group_last_id,spi->groupname,idarg,consumername,delivery_time,delivery_count);\n                    decrRefCount(delivery_count);\n                    if (propCount) (*propCount)++;\n                }\n                decrRefCount(idarg);\n                arraylen++;\n                \n                /* Check count limit */\n                if (count && ++processed >= count) {\n                    streamIteratorStop(&si);\n                    break;\n                }\n            }\n            streamIteratorStop(&si);\n            \n            /* Advance to next, stopping if we reached the tail */\n            nack = (nack == tail) ? NULL : next;\n        }\n    }\n    /* If the client is asking for some history, we serve it using a\n     * different function, so that we return entries *solely* from its\n     * own PEL. This ensures each consumer will always and only see\n     * the history of messages delivered to it and not yet confirmed\n     * as delivered. 
*/\n    if (group && (flags & STREAM_RWR_HISTORY)) {\n        if (spi && consumer) {\n            decrRefCount(delivery_time);\n            decrRefCount(consumername);\n            decrRefCount(group_last_id);\n        }\n        return streamReplyWithRangeFromConsumerPEL(c,s,start,end,count,\n                                                   group, consumer);\n    }\n\n    /* Stop here if client only wants claimed entries or count is satisfied. */\n    if ((group && (flags & STREAM_RWR_CLAIMED)) || (count && count == arraylen)) {\n        if (arraylen_ptr) setDeferredArrayLen(c,arraylen_ptr,arraylen);\n        if (spi && consumer) {\n            decrRefCount(delivery_time);\n            decrRefCount(consumername);\n            decrRefCount(group_last_id);\n        }\n        return arraylen;\n    }\n\n    if (!(flags & STREAM_RWR_RAWENTRIES) && !arraylen_ptr)\n        arraylen_ptr = addReplyDeferredLen(c);\n    streamIteratorStart(&si,s,start,end,rev);\n    while (streamIteratorGetID(&si,&id,&numfields)) {\n        /* Update the group last_id if needed. */\n        if (group && streamCompareID(&id,&group->last_id) > 0) {\n            if (group->entries_read != SCG_INVALID_ENTRIES_READ &&\n                streamCompareID(&group->last_id, &s->first_id) >= 0 &&\n                !streamRangeHasTombstones(s,&group->last_id,NULL))\n            {\n                /* A valid counter and no tombstones between the group's last-delivered-id\n                 * and the stream's last-generated-id mean we can increment the read counter\n                 * to keep tracking the group's progress. */\n                group->entries_read++;\n            } else if (s->entries_added) {\n                /* The group's counter may be invalid, so we try to obtain it. 
 */\n                group->entries_read = streamEstimateDistanceFromFirstEverEntry(s,&id);\n            }\n            streamUpdateCGroupLastId(s, group, &id);\n            /* In the past, we would only set it when NOACK was specified. And in\n             * #9127, XCLAIM did not propagate entries_read in ACK, which would\n             * cause entries_read to be inconsistent between master and replicas,\n             * so here we call streamPropagateGroupID unconditionally. */\n            propagate_last_id = 1;\n        }\n\n        if (min_idle_time != -1) {\n            /* If min-idle-time is specified, we emit a four-element\n             * array: ID, array of field-value pairs, idle time and delivery count. */\n            addReplyArrayLen(c,4);\n        } else {\n            /* Emit a two-element array for each item. The first is\n             * the ID, the second is an array of field-value pairs. */\n            addReplyArrayLen(c,2);\n        }\n        robj *idarg = createObjectFromStreamID(&id);\n        addReplyBulk(c,idarg);\n        addReplyArrayLen(c,numfields*2);\n\n        /* Emit the field-value pairs. */\n        while (numfields--) {\n            unsigned char *key, *value;\n            int64_t key_len, value_len;\n            streamIteratorGetField(&si,&key,&value,&key_len,&value_len);\n            addReplyBulkCBuffer(c,key,key_len);\n            addReplyBulkCBuffer(c,value,value_len);\n        }\n\n        if (min_idle_time != -1) {\n            /* For new entries, idle time and delivery count are 0. 
 */\n            addReplyLongLong(c, 0);\n            addReplyLongLong(c, 0);\n        }\n\n        /* If a group is passed, we need to create an entry in the\n         * PEL (pending entries list) of this group *and* this consumer.\n         *\n         * Note that we cannot be sure that the message is not\n         * already owned by another consumer, because the admin is able\n         * to change the consumer group last delivered ID using the\n         * XGROUP SETID command. So if we find that there is already\n         * a NACK for the entry, we need to associate it with the new\n         * consumer. */\n        if (group && !noack) {\n            unsigned char buf[sizeof(streamID)];\n            streamEncodeID(buf,&id);\n\n            /* Try to add a new NACK. Most of the time this will work and\n             * will not require extra lookups. We'll fix the problem later\n             * if we find that there is already an entry for this ID. */\n            streamNACK *nack = streamCreateNACK(s, consumer, &id);\n            int group_inserted =\n                raxTryInsert(group->pel,buf,sizeof(buf),nack,NULL);\n\n            /* Now we can check if the entry was already busy, and\n             * in that case reassign the entry to the new consumer,\n             * or update it if the consumer is the same as before. 
*/\n            if (group_inserted == 0) {\n                streamFreeNACK(s,nack);\n                void *result;\n                int found = raxFind(group->pel,buf,sizeof(buf),&result);\n                serverAssert(found);\n                nack = result;\n                /* Only transfer between consumers if they're different */\n                if (nack->consumer != consumer) {\n                    if (nack->consumer)\n                        raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n                    nack->consumer = consumer;\n                    raxInsert(consumer->pel,buf,sizeof(buf),nack,NULL);\n                }\n                nack->delivery_count = 1;\n                /* Update delivery time and reposition in time list */\n                pelListUpdate(group, nack, cmd_time_snapshot);\n            } else {\n                /* New NACK - insert into consumer's PEL and time list */\n                raxInsert(consumer->pel,buf,sizeof(buf),nack,NULL);\n                nack->cgroup_ref_node = streamLinkCGroupToEntry(s, group, buf);\n                pelListInsertAtTail(group, nack);\n            }\n\n            consumer->active_time = cmd_time_snapshot;\n\n            /* Propagate as XCLAIM. 
*/\n            if (spi) {\n                robj *delivery_count = createStringObjectFromLongLong(nack->delivery_count);\n                streamPropagateXCLAIMCopyFree(db_id,spi->keyname,group_last_id,spi->groupname,idarg,consumername,delivery_time,delivery_count);\n                decrRefCount(delivery_count);\n                if (propCount) (*propCount)++;\n            }\n        }\n        decrRefCount(idarg);\n        arraylen++;\n        if (count && count == arraylen) break;\n    }\n\n    if (spi && consumer) {\n        decrRefCount(delivery_time);\n        decrRefCount(consumername);\n        decrRefCount(group_last_id);\n    }\n\n    if (spi && propagate_last_id) {\n        streamPropagateGroupID(c,spi->keyname,group,spi->groupname);\n        if (propCount) (*propCount)++;\n    }\n\n    streamIteratorStop(&si);\n    if (arraylen_ptr) setDeferredArrayLen(c,arraylen_ptr,arraylen);\n    return arraylen;\n}\n\n/* This is a helper function for streamReplyWithRange() when called with\n * group and consumer arguments, but with a range that is referring to already\n * delivered messages. In this case we just emit messages that are already\n * in the history of the consumer, fetching the IDs from its PEL.\n *\n * Note that this function does not have a 'rev' argument because it's not\n * possible to iterate in reverse using a group. Basically this function\n * is only called as a result of the XREADGROUP command.\n *\n * This function is more expensive because it needs to inspect the PEL and then\n * seek into the radix tree of the messages in order to emit the full message\n * to the client. However clients only reach this code path when they are\n * fetching the history of already retrieved messages, which is rare. 
 */\nsize_t streamReplyWithRangeFromConsumerPEL(client *c, stream *s, streamID *start, streamID *end, size_t count, streamCG *group, streamConsumer *consumer) {\n    raxIterator ri;\n    unsigned char startkey[sizeof(streamID)];\n    unsigned char endkey[sizeof(streamID)];\n    streamEncodeID(startkey,start);\n    if (end) streamEncodeID(endkey,end);\n\n    size_t arraylen = 0;\n    void *arraylen_ptr = addReplyDeferredLen(c);\n    raxStart(&ri,consumer->pel);\n    raxSeek(&ri,\">=\",startkey,sizeof(startkey));\n    while(raxNext(&ri) && (!count || arraylen < count)) {\n        if (end && memcmp(ri.key,endkey,ri.key_len) > 0) break;\n        streamID thisid;\n        streamDecodeID(ri.key,&thisid);\n        if (streamReplyWithRange(c,s,&thisid,&thisid,1,0,-1,NULL,NULL,\n                                 STREAM_RWR_RAWENTRIES,NULL,NULL) == 0)\n        {\n            /* Note that we may have a not yet acknowledged entry in the PEL\n             * about a message that's no longer here because it was removed\n             * by the user by other means. In that case we signal it by emitting\n             * the ID but then a NULL entry for the fields. */\n            addReplyArrayLen(c,2);\n            addReplyStreamID(c,&thisid);\n            addReplyNullArray(c);\n        } else {\n            streamNACK *nack = ri.data;\n            nack->delivery_count += nack->delivery_count == LLONG_MAX ? 0 : 1;\n            pelListUpdate(group, nack, commandTimeSnapshot());\n        }\n        arraylen++;\n    }\n    raxStop(&ri);\n    setDeferredArrayLen(c,arraylen_ptr,arraylen);\n    return arraylen;\n}\n\n/* -----------------------------------------------------------------------\n * Stream commands implementation\n * ----------------------------------------------------------------------- */\n\n/* Look up the stream at 'key' and return the corresponding stream object.\n * The function creates the key, setting it to an empty stream, if needed. 
 */\nkvobj *streamTypeLookupWriteOrCreate(client *c, robj *key, int no_create) {\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db,key, &link);\n    if (checkType(c, kv, OBJ_STREAM)) return NULL;\n    if (kv != NULL) return kv;\n\n    if (no_create) {\n        addReplyNull(c);\n        return NULL;\n    }\n    robj *o = createStreamObject();\n    dbAddByLink(c->db, key, &o, &link);\n    return o;\n}\n\n/* Parse a stream ID in the format given by clients to Redis, that is\n * <ms>-<seq>, and convert it into a streamID structure. If\n * the specified ID is invalid, C_ERR is returned and an error is reported\n * to the client, otherwise C_OK is returned. The ID may be in incomplete\n * form, just stating the milliseconds time part of the stream. In such a case\n * the missing part is set according to the value of the 'missing_seq' parameter.\n *\n * The IDs \"-\" and \"+\" specify respectively the minimum and maximum IDs\n * that can be represented. If 'strict' is set to 1, \"-\" and \"+\" will be\n * treated as invalid IDs.\n *\n * The ID form <ms>-* specifies a milliseconds-only ID, leaving the sequence part\n * to be autogenerated. When a non-NULL 'seq_given' argument is provided, this\n * form is accepted and the argument is set to 0 unless the sequence part is\n * specified.\n * \n * If 'c' is set to NULL, no reply is sent to the client. */\nint streamGenericParseIDOrReply(client *c, const robj *o, streamID *id, uint64_t missing_seq, int strict, int *seq_given) {\n    char buf[128];\n    if (sdslen(o->ptr) > sizeof(buf)-1) goto invalid;\n    memcpy(buf,o->ptr,sdslen(o->ptr)+1);\n\n    if (strict && (buf[0] == '-' || buf[0] == '+') && buf[1] == '\\0')\n        goto invalid;\n\n    if (seq_given != NULL) {\n        *seq_given = 1;\n    }\n\n    /* Handle the \"-\" and \"+\" special cases. 
*/\n    if (buf[0] == '-' && buf[1] == '\\0') {\n        id->ms = 0;\n        id->seq = 0;\n        return C_OK;\n    } else if (buf[0] == '+' && buf[1] == '\\0') {\n        id->ms = UINT64_MAX;\n        id->seq = UINT64_MAX;\n        return C_OK;\n    }\n\n    /* Parse <ms>-<seq> form. */\n    unsigned long long ms, seq;\n    char *dot = strchr(buf,'-');\n    if (dot) *dot = '\\0';\n    if (string2ull(buf,&ms) == 0) goto invalid;\n    if (dot) {\n        size_t seqlen = strlen(dot+1);\n        if (seq_given != NULL && seqlen == 1 && *(dot + 1) == '*') {\n            /* Handle the <ms>-* form. */\n            seq = 0;\n            *seq_given = 0;\n        } else if (string2ull(dot+1,&seq) == 0) {\n            goto invalid;\n        }\n    } else {\n        seq = missing_seq;\n    }\n    id->ms = ms;\n    id->seq = seq;\n    return C_OK;\n\ninvalid:\n    if (c) addReplyError(c,\"Invalid stream ID specified as stream \"\n                           \"command argument\");\n    return C_ERR;\n}\n\n/* Wrapper for streamGenericParseIDOrReply() used by module API. */\nint streamParseID(const robj *o, streamID *id) {\n    return streamGenericParseIDOrReply(NULL,o,id,0,0,NULL);\n}\n\n/* Wrapper for streamGenericParseIDOrReply() with 'strict' argument set to\n * 0, to be used when - and + are acceptable IDs. */\nint streamParseIDOrReply(client *c, robj *o, streamID *id, uint64_t missing_seq) {\n    return streamGenericParseIDOrReply(c,o,id,missing_seq,0,NULL);\n}\n\n/* Wrapper for streamGenericParseIDOrReply() with 'strict' argument set to\n * 1, to be used when we want to return an error if the special IDs + or -\n * are provided. */\nint streamParseStrictIDOrReply(client *c, robj *o, streamID *id, uint64_t missing_seq, int *seq_given) {\n    return streamGenericParseIDOrReply(c,o,id,missing_seq,1,seq_given);\n}\n\n/* Helper for parsing a stream ID that is a range query interval. 
When the\n * exclude argument is NULL, streamParseIDOrReply() is called and the interval\n * is treated as closed (inclusive). Otherwise, the exclude argument is set if \n * the interval is open (the \"(\" prefix) and streamParseStrictIDOrReply() is\n * called in that case.\n */\nint streamParseIntervalIDOrReply(client *c, robj *o, streamID *id, int *exclude, uint64_t missing_seq) {\n    char *p = o->ptr;\n    size_t len = sdslen(p);\n    int invalid = 0;\n    \n    if (exclude != NULL) *exclude = (len > 1 && p[0] == '(');\n    if (exclude != NULL && *exclude) {\n        robj *t = createStringObject(p+1,len-1);\n        invalid = (streamParseStrictIDOrReply(c,t,id,missing_seq,NULL) == C_ERR);\n        decrRefCount(t);\n    } else \n        invalid = (streamParseIDOrReply(c,o,id,missing_seq) == C_ERR);\n    if (invalid)\n        return C_ERR;\n    return C_OK;\n}\n\nvoid streamRewriteApproxSpecifier(client *c, int idx) {\n    rewriteClientCommandArgument(c,idx,shared.special_equals);\n}\n\n/* We propagate MAXLEN/MINID ~ <count> as MAXLEN/MINID = <resulting-len-of-stream>\n * otherwise trimming is no longer deterministic on replicas / AOF. */\nvoid streamRewriteTrimArgument(client *c, stream *s, int trim_strategy, int idx) {\n    robj *arg;\n    if (trim_strategy == TRIM_STRATEGY_MAXLEN) {\n        arg = createStringObjectFromLongLong(s->length);\n    } else {\n        streamID first_id;\n        streamGetEdgeID(s,1,0,&first_id);\n        arg = createObjectFromStreamID(&first_id);\n    }\n\n    rewriteClientCommandArgument(c,idx,arg);\n    decrRefCount(arg);\n}\n\n/* XADD key [NOMKSTREAM] [KEEPREF | DELREF | ACKED] [IDMPAUTO pid | IDMP pid iid] [(MAXLEN [~|=] <count> | MINID [~|=] <id>) [LIMIT <entries>]] <ID or *> [field value] [field value] ... */\nvoid xaddCommand(client *c) {\n    /* Parse options. 
*/\n    streamAddTrimArgs parsed_args;\n    int idpos = streamParseAddOrTrimArgsOrReply(c, &parsed_args, 1);\n    if (idpos < 0)\n        return; /* streamParseAddOrTrimArgsOrReply already replied. */\n    int field_pos = idpos+1; /* The ID is always one argument before the first field */\n\n    /* Check arity. */\n    if ((c->argc - field_pos) < 2 || ((c->argc-field_pos) % 2) == 1) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    /* Return ASAP if minimal ID (0-0) was given so we avoid possibly creating\n     * a new stream and have streamAppendItem fail, leaving an empty key in the\n     * database. */\n    if (parsed_args.id_given && parsed_args.seq_given &&\n        parsed_args.id.ms == 0 && parsed_args.id.seq == 0)\n    {\n        addReplyError(c,\"The ID specified in XADD must be greater than 0-0\");\n        return;\n    }\n\n    /* Lookup the stream at key. */\n    kvobj *kv;\n    stream *s;\n    if ((kv = streamTypeLookupWriteOrCreate(c,c->argv[1],parsed_args.no_mkstream)) == NULL) return;\n    s = kv->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? 
kvobjAllocSize(kv) : 0;\n\n    /* IDMP: Check if IID already exists, save IID for later insertion */\n    XXH128_hash_t hash;\n    char *iid_str = NULL;\n    size_t iid_len = 0;\n    idmpProducer *producer = NULL;\n    idmpEntry *entry = NULL;\n    \n    if (parsed_args.idmp_pid != NULL) {\n        /* Get or create the producer for this pid */\n        char *pid_str = parsed_args.idmp_pid->ptr;\n        size_t pid_len = sdslen((sds)pid_str);\n        producer = idmpGetOrCreateProducer(s, pid_str, pid_len);\n\n        /* Get IID string based on option */\n        if (parsed_args.idmp_auto) {\n            /* Auto-generate IID by hashing field-value pairs */\n            int64_t numfields = (c->argc - field_pos) / 2;\n            if (createIdempotencyHash(&c->argv[field_pos], numfields, &hash) == C_ERR) {\n                addReplyError(c, \"Failed to create idempotency hash\");\n                return;\n            }\n            iid_str = (char *)&hash;\n            iid_len = sizeof(hash);\n        } else {\n            /* Use user-provided IID directly */\n            iid_str = parsed_args.idmp_iid->ptr;\n            iid_len = sdslen((sds)iid_str);\n        }\n        \n        /* Create entry for lookup and potential insertion */\n        entry = idmpEntryCreate(iid_str, iid_len, &s->alloc_size);\n        \n        /* Check if IID already exists and reply if found */\n        if (idmpLookupAndReply(s, producer, entry, c)) {\n            /* IID already exists, free the entry and return */\n            idmpEntryFree(entry, &s->alloc_size);\n            keyModified(c,c->db,c->argv[1],kv,0);\n            server.dirty++;\n            return;\n        }\n    }\n\n    /* Return ASAP if the stream has reached the last possible ID */\n    if (s->last_id.ms == UINT64_MAX && s->last_id.seq == UINT64_MAX) {\n        addReplyError(c,\"The stream has exhausted the last possible ID, \"\n                        \"unable to add more items\");\n        idmpEntryFree(entry, 
&s->alloc_size);\n        return;\n    }\n\n    /* Append using the low level function and return the ID. */\n    errno = 0;\n    streamID id;\n    if (streamAppendItem(s,c->argv+field_pos,(c->argc-field_pos)/2,\n        &id,parsed_args.id_given ? &parsed_args.id : NULL,parsed_args.seq_given) == C_ERR)\n    {\n        serverAssert(errno != 0);\n        if (errno == EDOM)\n            addReplyError(c,\"The ID specified in XADD is equal or smaller than \"\n                            \"the target stream top item\");\n        else\n            addReplyError(c,\"Elements are too large to be stored\");\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n        idmpEntryFree(entry, &s->alloc_size);\n        return;\n    }\n    sds replyid = createStreamIDString(&id);\n    addReplyBulkCBuffer(c, replyid, sdslen(replyid));\n\n    /* IDMP: Insert the entry now that we have the actual ID */\n    if (parsed_args.idmp_pid != NULL) {\n        idmpInsertEntry(s, producer, entry, &id);\n        trackStreamIdmpEntries(c, c->argv[1]);\n    }\n\n    notifyKeyspaceEvent(NOTIFY_STREAM,\"xadd\",c->argv[1],c->db->id);\n    server.dirty++;\n\n    /* Trim if needed. */\n    if (parsed_args.trim_strategy != TRIM_STRATEGY_NONE) {\n        if (streamTrim(s, &parsed_args))\n            notifyKeyspaceEvent(NOTIFY_STREAM,\"xtrim\",c->argv[1],c->db->id);\n        if (parsed_args.approx_trim) {\n            /* In case our trimming was limited (by LIMIT or by ~) we must\n             * re-write the relevant trim argument to make sure there will be\n             * no inconsistencies in AOF loading or in the replica.\n             * It's enough to check only args->approx because there is no\n             * way LIMIT is given without the ~ option. 
 */\n            streamRewriteApproxSpecifier(c,parsed_args.trim_strategy_arg_idx-1);\n            streamRewriteTrimArgument(c,s,parsed_args.trim_strategy,parsed_args.trim_strategy_arg_idx);\n        }\n    }\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n\n    keyModified(c,c->db,c->argv[1],kv,1);\n\n    /* Let's rewrite the ID argument with the one actually generated for\n     * AOF/replication propagation. */\n    if (!parsed_args.id_given || !parsed_args.seq_given) {\n        robj *idarg = createObject(OBJ_STRING, replyid);\n        rewriteClientCommandArgument(c, idpos, idarg);\n        decrRefCount(idarg);\n    } else {\n        sdsfree(replyid);\n    }\n\n    /* We need to signal to blocked clients that there is new data on this\n     * stream. */\n    signalKeyAsReady(c->db, c->argv[1], OBJ_STREAM);\n}\n\n/* XRANGE/XREVRANGE actual implementation.\n * The 'start' and 'end' IDs are parsed as follows:\n *   Incomplete 'start' has its sequence set to 0, and 'end' to UINT64_MAX.\n *   \"-\" and \"+\" mean the minimal and maximal ID values, respectively.\n *   The \"(\" prefix means an open (exclusive) range, so XRANGE stream (1-0 (2-0\n *   will match anything from 1-1 to 1-UINT64_MAX.\n */\nvoid xrangeGenericCommand(client *c, int rev) {\n    kvobj *kv;\n    stream *s;\n    streamID startid, endid;\n    long long count = -1;\n    robj *startarg = rev ? c->argv[3] : c->argv[2];\n    robj *endarg = rev ? c->argv[2] : c->argv[3];\n    int startex = 0, endex = 0;\n    size_t old_alloc = 0;\n    \n    /* Parse start and end IDs. 
*/\n    if (streamParseIntervalIDOrReply(c,startarg,&startid,&startex,0) != C_OK)\n        return;\n    if (startex && streamIncrID(&startid) != C_OK) {\n        addReplyError(c,\"invalid start ID for the interval\");\n        return;\n    }\n    if (streamParseIntervalIDOrReply(c,endarg,&endid,&endex,UINT64_MAX) != C_OK)\n        return;\n    if (endex && streamDecrID(&endid) != C_OK) {\n        addReplyError(c,\"invalid end ID for the interval\");\n        return;\n    }\n\n    /* Parse the COUNT option if any. */\n    if (c->argc > 4) {\n        for (int j = 4; j < c->argc; j++) {\n            int additional = c->argc-j-1;\n            if (strcasecmp(c->argv[j]->ptr,\"COUNT\") == 0 && additional >= 1) {\n                if (getLongLongFromObjectOrReply(c,c->argv[j+1],&count,NULL)\n                    != C_OK) return;\n                if (count < 0) count = 0;\n                j++; /* Consume additional arg. */\n            } else {\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n    }\n\n    /* Return the specified range to the user. 
 */\n    if ((kv = lookupKeyReadOrReply(c, c->argv[1], shared.emptyarray)) == NULL ||\n        checkType(c, kv, OBJ_STREAM)) return;\n\n    s = kv->ptr;\n\n    if (count == 0) {\n        addReplyNullArray(c);\n    } else {\n        if (count == -1) count = 0;\n        if (server.memory_tracking_enabled)\n            old_alloc = kvobjAllocSize(kv);\n        streamReplyWithRange(c,s,&startid,&endid,count,rev,-1,NULL,NULL,0,NULL,NULL);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n    }\n}\n\n/* XRANGE key start end [COUNT <n>] */\nvoid xrangeCommand(client *c) {\n    xrangeGenericCommand(c,0);\n}\n\n/* XREVRANGE key end start [COUNT <n>] */\nvoid xrevrangeCommand(client *c) {\n    xrangeGenericCommand(c,1);\n}\n\n/* XLEN key */\nvoid xlenCommand(client *c) {\n    kvobj *kv;\n    if ((kv = lookupKeyReadOrReply(c, c->argv[1], shared.czero)) == NULL\n        || checkType(c, kv, OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n    addReplyLongLong(c,s->length);\n}\n\n/* XREAD [BLOCK <milliseconds>] [COUNT <count>] STREAMS key_1 key_2 ... key_N\n *       ID_1 ID_2 ... ID_N\n *\n * This function also implements the XREADGROUP command, which is like XREAD\n * but accepts the additional [GROUP group-name consumer-name] option.\n * This is useful because while XREAD is a read command and can be called\n * on slaves, XREADGROUP is not. */\n#define XREAD_BLOCKED_DEFAULT_COUNT 1000\nvoid xreadCommand(client *c) {\n    long long min_idle_time = -1; /* -1 means no IDLE argument given. */\n    long long timeout = -1; /* -1 means no BLOCK argument given. */\n    long long count = 0;\n    int streams_count = 0;\n    int streams_arg = 0;\n    int noack = 0;          /* True if NOACK option was specified. 
*/\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    streamCG **groups = NULL;\n    int xreadgroup = sdslen(c->argv[0]->ptr) == 10; /* XREAD or XREADGROUP? */\n    robj *groupname = NULL;\n    robj *consumername = NULL;\n    size_t old_alloc = 0;\n\n    /* Parse arguments. */\n    for (int i = 1; i < c->argc; i++) {\n        int moreargs = c->argc-i-1;\n        char *o = c->argv[i]->ptr;\n        if (!strcasecmp(o,\"CLAIM\") && moreargs) {\n            if (!xreadgroup) {\n                addReplyError(c,\"The CLAIM option is only supported by \"\n                                \"XREADGROUP. You called XREAD instead.\");\n                return;\n            }\n            i++;\n            min_idle_time = -1;\n            if (getLongLongFromObjectOrReply(c, c->argv[i], &min_idle_time, \n                    \"min-idle-time is not an integer or out of range\") != C_OK)\n                return;\n            if (min_idle_time < 0) {\n                addReplyError(c,\"min-idle-time must be a positive integer\");\n                return;\n            }\n        } else if (!strcasecmp(o,\"BLOCK\") && moreargs) {\n            i++;\n            if (getTimeoutFromObjectOrReply(c,c->argv[i],&timeout,\n                UNIT_MILLISECONDS) != C_OK) return;\n        } else if (!strcasecmp(o,\"COUNT\") && moreargs) {\n            i++;\n            if (getLongLongFromObjectOrReply(c,c->argv[i],&count,NULL) != C_OK)\n                return;\n            if (count < 0) count = 0;\n        } else if (!strcasecmp(o,\"STREAMS\") && moreargs) {\n            streams_arg = i+1;\n            streams_count = (c->argc-streams_arg);\n            if ((streams_count % 2) != 0) {\n                const char *symbol = xreadgroup ? 
\"ID or '>'\" : \"ID, '+', or '$'\";\n                addReplyErrorFormat(c,\"Unbalanced '%s' list of streams: \"\n                                      \"for each stream key an %s must be \"\n                                      \"specified.\", c->cmd->fullname,symbol);\n                return;\n            }\n            streams_count /= 2; /* We have two arguments for each stream. */\n            break;\n        } else if (!strcasecmp(o,\"GROUP\") && moreargs >= 2) {\n            if (!xreadgroup) {\n                addReplyError(c,\"The GROUP option is only supported by \"\n                                \"XREADGROUP. You called XREAD instead.\");\n                return;\n            }\n            groupname = c->argv[i+1];\n            consumername = c->argv[i+2];\n            i += 2;\n        } else if (!strcasecmp(o,\"NOACK\")) {\n            if (!xreadgroup) {\n                addReplyError(c,\"The NOACK option is only supported by \"\n                                \"XREADGROUP. You called XREAD instead.\");\n                return;\n            }\n            noack = 1;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* STREAMS option is mandatory. */\n    if (streams_arg == 0) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* If the user specified XREADGROUP then it must also\n     * provide the GROUP option. */\n    if (xreadgroup && groupname == NULL) {\n        addReplyError(c,\"Missing GROUP option for XREADGROUP\");\n        return;\n    }\n\n    /* Parse the IDs and resolve the group name. 
 */\n    if (streams_count > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*streams_count);\n    if (groupname) groups = zmalloc(sizeof(streamCG*)*streams_count);\n\n    for (int i = streams_arg + streams_count; i < c->argc; i++) {\n        /* Specifying \"$\" as last-known-id means that the client wants to be\n         * served with just the messages that will arrive into the stream\n         * starting from now. */\n        int id_idx = i - streams_arg - streams_count;\n        robj *key = c->argv[i-streams_count];\n        kvobj *o = lookupKeyRead(c->db, key);\n        if (checkType(c,o,OBJ_STREAM)) goto cleanup;\n        streamCG *group = NULL;\n\n        /* If a group was specified, then we need to be sure that the\n         * key and group actually exist. */\n        if (groupname) {\n            if (o == NULL ||\n                (group = streamLookupCG(o->ptr,groupname->ptr)) == NULL)\n            {\n                addReplyErrorFormat(c, \"-NOGROUP No such key '%s' or consumer \"\n                                       \"group '%s' in XREADGROUP with GROUP \"\n                                       \"option\",\n                                    (char*)key->ptr,(char*)groupname->ptr);\n                goto cleanup;\n            }\n            groups[id_idx] = group;\n        }\n\n        if (strcmp(c->argv[i]->ptr,\"$\") == 0) {\n            if (xreadgroup) {\n                addReplyError(c,\"The $ ID is meaningless in the context of \"\n                                \"XREADGROUP: you want to read the history of \"\n                                \"this consumer by specifying a proper ID, or \"\n                                \"use the > ID to get new messages. 
The $ ID would \"\n                                \"just return an empty result set.\");\n                goto cleanup;\n            }\n            if (o) {\n                stream *s = o->ptr;\n                ids[id_idx] = s->last_id;\n            } else {\n                ids[id_idx].ms = 0;\n                ids[id_idx].seq = 0;\n            }\n            continue;\n        } else if (strcmp(c->argv[i]->ptr,\"+\") == 0) {\n            if (xreadgroup) {\n                addReplyError(c,\"The + ID is meaningless in the context of \"\n                                \"XREADGROUP: you want to read the history of \"\n                                \"this consumer by specifying a proper ID, or \"\n                                \"use the > ID to get new messages. The + ID would \"\n                                \"just return an empty result set.\");\n                goto cleanup;\n            }\n            if (o && ((stream *)o->ptr)->length) {\n                stream *s = o->ptr;\n                /* We need to get the last valid ID.\n                 * It is impossible to use s->last_id because\n                 * entry with s->last_id may have been removed. 
*/\n                streamLastValidID(s, &ids[id_idx]);\n                streamDecrID(&ids[id_idx]);\n            } else {\n                ids[id_idx].ms = 0;\n                ids[id_idx].seq = 0;\n            }\n            continue;\n        } else if (strcmp(c->argv[i]->ptr,\">\") == 0) {\n            if (!xreadgroup) {\n                addReplyError(c,\"The > ID can be specified only when calling \"\n                                \"XREADGROUP using the GROUP <group> \"\n                                \"<consumer> option.\");\n                goto cleanup;\n            }\n            /* We use just the maximum ID to signal this is a \">\" ID, anyway\n             * the code handling the blocking clients will have to update the\n             * ID later in order to match the changing consumer group last ID. */\n            ids[id_idx].ms = UINT64_MAX;\n            ids[id_idx].seq = UINT64_MAX;\n            continue;\n        }\n        if (streamParseStrictIDOrReply(c,c->argv[i],ids+id_idx,0,NULL) != C_OK)\n            goto cleanup;\n    }\n\n    /* Try to serve the client synchronously. */\n    size_t arraylen = 0;\n    void *arraylen_ptr = NULL;\n    mstime_t min_pel_delivery_time = LLONG_MAX;\n    for (int i = 0; i < streams_count; i++) {\n        kvobj *o = lookupKeyRead(c->db, c->argv[streams_arg + i]);\n        if (o == NULL) continue;\n        stream *s = o->ptr;\n        streamID *gt = ids+i; /* ID must be greater than this. */\n        int serve_claimed = 0;\n        int serve_synchronously = 0;\n        int serve_history = 0; /* True for XREADGROUP with ID != \">\". */\n        int consumer_created = 0;\n        streamConsumer *consumer = NULL; /* Unused if XREAD */\n        streamPropInfo spi = {c->argv[streams_arg+i],groupname}; /* Unused if XREAD */\n\n        /* Check if there are the conditions to serve the client\n         * synchronously. 
*/\n        if (groups) {\n            /* If min_idle_time is set we need to check whether there is any\n             * pending message in the PEL idle enough to be claimed. We also\n             * need to get the minimum delivery time in the PEL, in order to\n             * use it later if the BLOCK option is set. */\n            if (min_idle_time != -1) {\n                streamNACK *nack = groups[i]->pel_time_head;\n                /* Iterate through PEL entries to find the first one that exists */\n                while (nack) {\n                    /* Skip entries that don't exist in the stream anymore */\n                    if (!streamEntryExists(s, &nack->id)) {\n                        nack = nack->pel_next;\n                        continue;\n                    }\n\n                    if (nack->delivery_time < min_pel_delivery_time) {\n                        min_pel_delivery_time = nack->delivery_time;\n                    }\n\n                    uint64_t idle = commandTimeSnapshot() - nack->delivery_time;\n                    if (idle >= (uint64_t)min_idle_time) {\n                        serve_claimed = 1;\n                    }\n                    break; /* Found a valid entry, stop searching */\n                }\n            }\n\n            /* If the consumer is blocked on a group, we always serve it\n             * synchronously (serving its local history) if the ID specified\n             * was not the special \">\" ID. */\n            if (gt->ms != UINT64_MAX ||\n                gt->seq != UINT64_MAX)\n            {\n                serve_synchronously = 1;\n                serve_history = 1;\n            } else if (s->length) {\n                /* We also want to serve a consumer in a consumer group\n                 * synchronously in case the group top item delivered is smaller\n                 * than what the stream has inside. 
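The pel_time_head walk above relies on the PEL being kept in a list ordered by
delivery time: the first entry that still exists in the stream is both the oldest
pending entry and the only one that needs checking against min_idle_time. A
minimal standalone sketch of that scan; sketchNACK, sketchOldestPending and the
"exists" flag are hypothetical stand-ins for the real structures, not the actual
implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

// Illustrative PEL entry, ordered by delivery time via pel_next.
// "exists" stands in for streamEntryExists(): trimmed entries are
// skipped, and the scan stops at the first entry still present.
typedef struct sketchNACK {
    int64_t delivery_time;
    int exists;                     // entry still in the stream?
    struct sketchNACK *pel_next;
} sketchNACK;

// Return the delivery time of the oldest still-existing pending
// entry, or -1 when every pending entry has been trimmed away.
static int64_t sketchOldestPending(sketchNACK *head) {
    for (sketchNACK *n = head; n != NULL; n = n->pel_next) {
        if (!n->exists) continue;   // skip trimmed entries
        return n->delivery_time;    // first survivor ends the scan
    }
    return -1;
}
```

Because the list is time-ordered, a single surviving entry is enough to decide
both serve_claimed (is it idle enough?) and min_pel_delivery_time.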
*/\n                streamID maxid, *last = &groups[i]->last_id;\n                streamLastValidID(s, &maxid);\n                if (streamCompareID(&maxid, last) > 0) {\n                    serve_synchronously = 1;\n                    *gt = *last;\n                }\n            }\n            consumer = streamLookupConsumer(groups[i],consumername->ptr);\n            if (consumer == NULL) {\n                if (server.memory_tracking_enabled)\n                    old_alloc = kvobjAllocSize(o);\n                consumer = streamCreateConsumer(s,groups[i],consumername->ptr,\n                                                c->argv[streams_arg+i],\n                                                c->db->id,SCC_DEFAULT);\n                if (server.memory_tracking_enabled)\n                    updateSlotAllocSize(c->db,getKeySlot(c->argv[streams_arg+i]->ptr),o,old_alloc,kvobjAllocSize(o));\n                consumer_created = 1;\n            }\n            consumer->seen_time = commandTimeSnapshot();\n            keyModified(c,c->db,c->argv[streams_arg+i],o,0); /* only update LRM */\n        } else if (s->length) {\n            /* For consumers without a group, we serve synchronously if we can\n             * actually provide at least one item from the stream. */\n            streamID maxid;\n            streamLastValidID(s, &maxid);\n            if (streamCompareID(&maxid, gt) > 0) {\n                serve_synchronously = 1;\n            }\n        }\n\n        int flags = 0;\n        if (serve_history) {\n            /* The CLAIM option is ignored when we serve from the consumer's\n             * history. */\n            min_idle_time = -1;\n        } else if (!serve_synchronously && serve_claimed) {\n            /* We serve the client synchronously if the CLAIM option was\n             * specified and there are messages in the PEL that are idle\n             * enough. 
*/\n            serve_synchronously = 1;\n            flags |= STREAM_RWR_CLAIMED;\n        }\n\n        unsigned long propCount = 0;\n        if (serve_synchronously) {\n            arraylen++;\n            if (arraylen == 1) arraylen_ptr = addReplyDeferredLen(c);\n            /* streamReplyWithRange() handles the 'start' ID as inclusive,\n             * so start from the next ID, since we want only messages with\n             * IDs greater than start. */\n            streamID start = *gt;\n            streamIncrID(&start);\n\n            /* Emit the two elements sub-array consisting of the name\n             * of the stream and the data we extracted from it. */\n            if (c->resp == 2) addReplyArrayLen(c,2);\n            addReplyBulk(c,c->argv[streams_arg+i]);\n            \n            if (noack) flags |= STREAM_RWR_NOACK;\n            if (serve_history) flags |= STREAM_RWR_HISTORY;\n            if (server.memory_tracking_enabled)\n                old_alloc = kvobjAllocSize(o);\n            streamReplyWithRange(c,s,&start,NULL,count,0, min_idle_time,\n                                 groups ? groups[i] : NULL,\n                                 consumer, flags, &spi, &propCount);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db,getKeySlot(c->argv[streams_arg+i]->ptr),o,old_alloc,kvobjAllocSize(o));\n            if (propCount) {\n                server.dirty++;\n                keyModified(c,c->db,c->argv[streams_arg+i],o,0); /* only update LRM */\n            }\n        }\n\n        /* Propagate consumer creation only when no XCLAIM was generated,\n         * since XCLAIM implicitly creates the consumer on the replica.\n         * With NOACK the PEL/XCLAIM path is skipped entirely, so we\n         * always need explicit propagation regardless of propCount. 
*/\n        if (consumer_created && (noack || propCount == 0)) {\n            streamPropagateConsumerCreation(c,spi.keyname, spi.groupname, consumer->name);\n        }\n    }\n\n    /* We replied synchronously! Set the top array len and return to caller. */\n    if (arraylen) {\n        if (c->resp == 2)\n            setDeferredArrayLen(c,arraylen_ptr,arraylen);\n        else\n            setDeferredMapLen(c,arraylen_ptr,arraylen);\n        goto cleanup;\n    }\n\n    /* Block if needed. */\n    if (timeout != -1) {\n        /* If we are not allowed to block the client, the only thing\n         * we can do is to treat it as a timeout (even with timeout 0). */\n        if (c->flags & CLIENT_DENY_BLOCKING) {\n            addReplyNullArray(c);\n            goto cleanup;\n        }\n        /* We change the '$' to the current last ID for this stream. This is\n         * because later on, when we unblock on arriving data, we want to\n         * re-process the command, and if '$' remained we would spin-block\n         * forever. */\n        for (int id_idx = 0; id_idx < streams_count; id_idx++) {\n            int arg_idx = id_idx + streams_arg + streams_count;\n            if (strcmp(c->argv[arg_idx]->ptr,\"$\") == 0) {\n                robj *argv_streamid = createObjectFromStreamID(&ids[id_idx]);\n                rewriteClientCommandArgument(c, arg_idx, argv_streamid);\n                decrRefCount(argv_streamid);\n            }\n        }\n        /* If min_idle_time is set we need to unblock the client if a PEL entry\n         * becomes claimable before new messages arrive. min_pel_delivery_time is\n         * the minimum delivery time of all entries in the PELs of the different\n         * streams specified in the command. We add it to min_idle_time to get the\n         * earliest time when an entry will be eligible for claiming.\n         * If there are no entries in the PELs we will unblock the client after min_idle_time. 
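The wake-up deadline described here is simple arithmetic, but the LLONG_MAX
sentinel makes the two cases easy to mix up. A minimal sketch of the computation
that pel_expire_time performs just below; SKETCH_NO_PEL and sketchClaimDeadline
are hypothetical names standing in for the LLONG_MAX "no PEL entry seen" marker
and the inline code:

```c
#include <assert.h>
#include <stdint.h>

#define SKETCH_NO_PEL INT64_MAX  // stands in for the LLONG_MAX sentinel

// Earliest moment a PEL entry becomes claimable: its delivery time
// plus min_idle_time. With an empty PEL (sentinel still set) we fall
// back to now + min_idle_time instead.
static int64_t sketchClaimDeadline(int64_t min_idle_time,
                                   int64_t min_pel_delivery_time,
                                   int64_t now) {
    int64_t deadline = min_idle_time;
    if (min_pel_delivery_time != SKETCH_NO_PEL)
        deadline += min_pel_delivery_time;   // oldest entry's eligibility time
    else
        deadline += now;                     // no pending entries yet
    return deadline;
}
```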
*/\n        if (min_idle_time != -1) {\n            uint64_t pel_expire_time = min_idle_time;\n            if (min_pel_delivery_time != LLONG_MAX)\n                pel_expire_time += min_pel_delivery_time;\n            else\n                pel_expire_time += commandTimeSnapshot();\n            trackStreamClaimTimeouts(c, c->argv+streams_arg, streams_count, pel_expire_time);\n        }\n        blockForKeys(c, BLOCKED_STREAM, c->argv+streams_arg, streams_count, timeout, xreadgroup);\n        goto cleanup;\n    }\n\n    /* No BLOCK option, nor any stream we can serve. Reply as if a\n     * timeout had happened. */\n    addReplyNullArray(c);\n    /* Continue to cleanup... */\n\ncleanup: /* Cleanup. */\n\n    /* The command is propagated (in the READGROUP form) as a side effect\n     * of calling lower level APIs. So stop any implicit propagation. */\n    preventCommandPropagation(c);\n    if (ids != static_ids) zfree(ids);\n    zfree(groups);\n}\n\n/* -----------------------------------------------------------------------\n * Low level implementation of consumer groups\n * ----------------------------------------------------------------------- */\n\n/* Update a consumer group's last_id and handle minimum last_id tracking.\n * We will recalculate the minimum last_id when needed. */\nvoid streamUpdateCGroupLastId(stream *s, streamCG *cg, streamID *id) {\n    /* When a consumer group's last_id is updated, we need to invalidate the cached\n     * minimum last_id in two cases:\n     * 1. If the consumer group's previous last_id equals the minimum last_id.\n     * 2. If the new ID being set is smaller than the current minimum last_id. 
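These two invalidation cases are worth seeing in isolation: the cache stays valid
only while the update can be proven not to change the minimum. A minimal,
self-contained sketch of the same invariant; the sketch* types are illustrative
stand-ins that mirror (but are not) the real stream/streamCG fields:

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical, simplified model of the cached minimum consumer-group
// last_id. Field names mirror the real ones; structures are
// illustrative only.
typedef struct { uint64_t ms, seq; } sketchStreamID;

static int sketchCompareID(const sketchStreamID *a, const sketchStreamID *b) {
    if (a->ms != b->ms) return a->ms < b->ms ? -1 : 1;
    if (a->seq != b->seq) return a->seq < b->seq ? -1 : 1;
    return 0;
}

typedef struct {
    sketchStreamID min_last_id;  // cached minimum across all groups
    int min_valid;               // is the cache trustworthy?
} sketchStream;

// Invalidate the cache when (1) the group being updated currently
// holds the minimum, or (2) the new ID is below the cached minimum;
// otherwise the minimum provably cannot change.
static void sketchUpdateLastId(sketchStream *s, sketchStreamID *group_last,
                               const sketchStreamID *new_id) {
    if (s->min_valid &&
        (sketchCompareID(group_last, &s->min_last_id) == 0 ||
         sketchCompareID(new_id, &s->min_last_id) < 0))
    {
        s->min_valid = 0;
    }
    *group_last = *new_id;
}
```

Deferring the recomputation keeps XACK/XREADGROUP hot paths cheap; the minimum
is only rebuilt when streamEntryIsReferenced actually needs it.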
*/\n    if (s->min_cgroup_last_id_valid && \n        (streamCompareID(&cg->last_id, &s->min_cgroup_last_id) == 0 ||\n         streamCompareID(id, &s->min_cgroup_last_id) < 0)) \n    {\n        s->min_cgroup_last_id_valid = 0;\n    }\n    cg->last_id = *id;\n}\n\n/* Link a consumer group to a stream entry in the cgroups_ref index.\n * Returns a pointer to the list node, so that it can be used for future deletion. */\nlistNode *streamLinkCGroupToEntry(stream *s, streamCG *cg, unsigned char *key) {\n    list *cglist;\n\n    if (!s->cgroups_ref)\n        s->cgroups_ref = raxNewWithMetadata(0, &s->alloc_size);\n    \n    /* Try to find the list for this stream ID, create it if it doesn't exist */\n    if (!raxFind(s->cgroups_ref, key, sizeof(streamID), (void**)&cglist)) {\n        cglist = listCreate();\n        serverAssert(raxInsert(s->cgroups_ref, key, sizeof(streamID), cglist, NULL));\n    }\n    \n    /* Add the consumer group to the list and return the list node */\n    listAddNodeTail(cglist, cg);\n    return listLast(cglist);\n}\n\n/* Unlink a consumer group reference from the entry index for a specific stream ID.\n * This is called when a message is acknowledged or when a consumer group is deleted. */\nvoid streamUnlinkEntryFromCGroupRef(stream *s, streamNACK *na, unsigned char *key) {\n    list *cglist;\n    if (!s->cgroups_ref) return;\n    if (raxFind(s->cgroups_ref, key, sizeof(streamID), (void**)&cglist)) {\n        listDelNode(cglist, na->cgroup_ref_node);\n        \n        /* If the list is now empty, remove it from the index. */\n        if (listLength(cglist) == 0) {\n            raxRemove(s->cgroups_ref, key, sizeof(streamID), NULL);\n            listRelease(cglist);\n        }\n    }\n}\n\n/* Remove all consumer group references to a specific stream message. 
*/\nvoid streamCleanupEntryCGroupRefs(stream *s, streamID *id) {\n    if (!s->cgroups_ref) return;\n    list *cglist;\n    listIter li;\n    listNode *ln;\n    unsigned char buf[sizeof(streamID)];\n    streamEncodeID(buf, id);\n\n    /* If message is not in any consumer group, nothing to do */\n    if (!raxFind(s->cgroups_ref, buf, sizeof(streamID), (void **)&cglist))\n        return;\n\n    listRewind(cglist, &li);\n    while ((ln = listNext(&li))) {\n        streamNACK *nack;\n        streamCG *group = listNodeValue(ln);\n        \n        /* Find the message in this consumer group's PEL */\n        serverAssert(raxFind(group->pel, buf, sizeof(buf), (void **)&nack));\n        \n        /* Remove from group and consumer PELs */\n        pelListUnlink(group, nack);\n        raxRemove(group->pel, buf, sizeof(buf), NULL);\n        if (nack->consumer)\n            raxRemove(nack->consumer->pel, buf, sizeof(buf), NULL);\n        /* Since we're removing all references from the cgroups_ref, we can directly\n         * free the NACK without unlinking it from the cgroups_ref. */\n        streamFreeNACK(s, nack);\n    }\n\n    raxRemove(s->cgroups_ref, buf, sizeof(streamID), NULL);\n    listRelease(cglist);\n}\n\n/* Check if a stream entry is still referenced by any consumer group.\n *\n * An entry is considered referenced if:\n * 1. Its ID is smaller than the minimum last_id of all consumer groups,\n *    which means at least one group hasn't read it yet.\n * 2. It exists in any consumer group's PEL.\n *\n * Returns 1 if the entry is referenced, 0 if it's fully acknowledged by all groups. 
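The cgroups_ref rax (and the PEL raxes above) key entries by the 16-byte
big-endian encoding that streamEncodeID produces, so lexicographic key order in
the radix tree matches numeric ID order. A minimal re-sketch of just that
property, assuming the big-endian layout; sketchEncodeID is a hypothetical
stand-in, not the real function:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

// Pack a 128-bit stream ID (ms,seq) big-endian into 16 bytes so that
// memcmp() byte order equals numeric ID order -- the property the
// radix-tree keys depend on.
static void sketchEncodeID(unsigned char *buf, uint64_t ms, uint64_t seq) {
    for (int i = 0; i < 8; i++) {
        buf[i]     = (unsigned char)(ms  >> (56 - 8*i));  // most significant first
        buf[8 + i] = (unsigned char)(seq >> (56 - 8*i));
    }
}
```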
*/\nint streamEntryIsReferenced(stream *s, streamID *id) {\n    if (!s->cgroups || !raxSize(s->cgroups)) return 0;\n    if (!s->min_cgroup_last_id_valid) {\n        /* If the cached minimum last_id is invalid, we need to recalculate it\n         * by iterating through all consumer groups to find the minimum last_id */\n        s->min_cgroup_last_id_valid = 1;\n        s->min_cgroup_last_id.ms = UINT64_MAX;\n        s->min_cgroup_last_id.seq = UINT64_MAX;\n        raxIterator ri;\n        raxStart(&ri, s->cgroups);\n        raxSeek(&ri, \"^\", NULL, 0);\n        while (raxNext(&ri)) {\n            streamCG *cg = ri.data;\n            if (streamCompareID(&cg->last_id, &s->min_cgroup_last_id) < 0)\n                s->min_cgroup_last_id = cg->last_id;\n        }\n        raxStop(&ri);\n    }\n\n    /* At least one consumer group hasn't read this entry yet. */\n    if (streamCompareID(&s->min_cgroup_last_id, id) < 0)\n        return 1;\n\n    /* Check if the message is in any consumer group's PEL */\n    if (!s->cgroups_ref) return 0;\n    unsigned char buf[sizeof(streamID)];\n    streamEncodeID(buf, id);\n    return raxFind(s->cgroups_ref, buf, sizeof(streamID), NULL);\n}\n\n/* Create a NACK entry setting the delivery count to 1 and the delivery\n * time to the current time. The NACK consumer will be set to the one\n * specified as argument of the function. */\nstreamNACK *streamCreateNACK(stream *s, streamConsumer *consumer, streamID *id) {\n    size_t usable;\n    streamNACK *nack = zmalloc_usable(sizeof(*nack), &usable);\n    s->alloc_size += usable;\n    nack->delivery_time = commandTimeSnapshot();\n    nack->delivery_count = 1;\n    nack->consumer = consumer;\n    nack->cgroup_ref_node = NULL;  /* Will be set when added to cgroups_ref */\n    nack->id = *id;\n    nack->pel_prev = NULL;\n    nack->pel_next = NULL;\n    return nack;\n}\n\n/* Free a NACK entry. 
*/\nvoid streamFreeNACK(stream *s, streamNACK *na) {\n    size_t usable;\n    zfree_usable(na, &usable);\n    s->alloc_size -= usable;\n}\n\n/* Free a NACK entry and remove its reference from the cgroups_ref.\n * This ensures proper cleanup of the consumer group list associated with the message ID.\n * Note: Caller must ensure NACK is unlinked from pel_time list before calling. */\nvoid streamDestroyNACK(stream *s, streamNACK *na, unsigned char *key) {\n    size_t usable;\n    serverAssert(na->pel_prev == NULL && na->pel_next == NULL);\n    streamUnlinkEntryFromCGroupRef(s, na, key);\n    zfree_usable(na, &usable);\n    s->alloc_size -= usable;\n}\n\n/* Context for streamFreeNACKGeneric callback. */\ntypedef struct {\n    stream *s;\n    streamCG *cg;\n} streamFreeNACKCtx;\n\n/* Generic version of streamFreeNACK with PEL list unlinking. */\nvoid streamFreeNACKGeneric(void *na, void *ctx) {\n    streamFreeNACKCtx *c = (streamFreeNACKCtx *)ctx;\n    streamNACK *nack = (streamNACK *)na;\n    pelListUnlink(c->cg, nack);\n    streamFreeNACK(c->s, nack);\n}\n\n/* Free a consumer and associated data structures. Note that this function\n * will neither reassign the pending messages associated with this consumer\n * nor delete them from the stream, so when it is called to delete a single\n * consumer (rather than to destroy the whole stream) the caller should\n * handle those pending entries first. */\nvoid streamFreeConsumer(stream *s, streamConsumer *sc) {\n    size_t usable;\n    raxFree(sc->pel); /* No value free callback: the PEL entries are shared\n                         between the consumer and the main stream PEL. */\n    s->alloc_size -= sdsAllocSize(sc->name);\n    sdsfree(sc->name);\n    zfree_usable(sc, &usable);\n    s->alloc_size -= usable;\n}\n\n/* Generic version of streamFreeConsumer. 
*/\nvoid streamFreeConsumerGeneric(void *sc, void *s) {\n    streamFreeConsumer((stream *)s, (streamConsumer *)sc);\n}\n\n/* Create a new consumer group in the context of the stream 's', having the\n * specified name, last server ID and reads counter. If a consumer group with\n * the same name already exists NULL is returned, otherwise the pointer to the\n * consumer group is returned. */\nstreamCG *streamCreateCG(stream *s, char *name, size_t namelen, streamID *id, long long entries_read) {\n    if (s->cgroups == NULL)\n        s->cgroups = raxNewWithMetadata(0, &s->alloc_size);\n    if (raxFind(s->cgroups,(unsigned char*)name,namelen,NULL))\n        return NULL;\n\n    size_t usable;\n    streamCG *cg = zmalloc_usable(sizeof(*cg), &usable);\n    s->alloc_size += usable;\n    cg->pel = raxNewWithMetadata(0, &s->alloc_size);\n    cg->pel_time_head = NULL;\n    cg->pel_time_tail = NULL;\n    cg->pel_nack_tail = NULL;\n    cg->consumers = raxNewWithMetadata(0, &s->alloc_size);\n    cg->last_id.ms = 0;\n    cg->last_id.seq = 0;\n    streamUpdateCGroupLastId(s, cg, id);\n    cg->entries_read = entries_read;\n    raxInsert(s->cgroups,(unsigned char*)name,namelen,cg,NULL);\n    return cg;\n}\n\n/* Free a consumer group and all its associated data. */\nstatic void streamFreeCG(stream *s, streamCG *cg) {\n    /* Free the pel, unlinking each NACK from the time list in the callback */\n    streamFreeNACKCtx ctx = {s, cg};\n    raxFreeWithCbAndContext(cg->pel, streamFreeNACKGeneric, &ctx);\n    \n    /* pel_time_head/tail/pel_nack_tail should now be NULL after unlinking all NACKs */\n    serverAssert(cg->pel_time_head == NULL && cg->pel_time_tail == NULL && cg->pel_nack_tail == NULL);\n    \n    raxFreeWithCbAndContext(cg->consumers, streamFreeConsumerGeneric, s);\n    size_t usable;\n    zfree_usable(cg, &usable);\n    s->alloc_size -= usable;\n}\n\n/* Destroy a consumer group and clean up all associated references. 
*/\nvoid streamDestroyCG(stream *s, streamCG *cg) {\n    /* Remove all references from the cgroups_ref. */\n    raxIterator it;\n    raxStart(&it, cg->pel);\n    raxSeek(&it, \"^\", NULL, 0);\n    while (raxNext(&it)) {\n        streamNACK *nack = it.data;\n        streamUnlinkEntryFromCGroupRef(s, nack, it.key);\n    }\n    raxStop(&it);\n\n    /* If we're destroying the group with the minimum last_id, the cached\n     * minimum is no longer valid and needs to be recalculated from the\n     * remaining groups. */\n    if (s->min_cgroup_last_id_valid && streamCompareID(&s->min_cgroup_last_id, &cg->last_id) == 0)\n        s->min_cgroup_last_id_valid = 0;\n\n    streamFreeCG(s, cg);\n}\n\n/* Generic version of streamFreeCG. */\nvoid streamFreeCGGeneric(void *cg, void *s) {\n    streamFreeCG((stream *)s, (streamCG *)cg);\n}\n\n/* Look up the consumer group in the specified stream and return its\n * pointer; if there is no such group, NULL is returned. */\nstreamCG *streamLookupCG(stream *s, sds groupname) {\n    if (s->cgroups == NULL) return NULL;\n    void *cg = NULL;\n    raxFind(s->cgroups,(unsigned char*)groupname,sdslen(groupname),&cg);\n    return cg;\n}\n\n/* Create a consumer with the specified name in the group 'cg' and return it.\n * If the consumer already exists, return NULL. As a side effect, when the\n * consumer is successfully created, the key space will be notified and dirty++\n * unless the SCC_NO_NOTIFY or SCC_NO_DIRTIFY flag is specified. 
*/\nstreamConsumer *streamCreateConsumer(stream *s, streamCG *cg, sds name, robj *key, int dbid, int flags) {\n    if (cg == NULL) return NULL;\n    int notify = !(flags & SCC_NO_NOTIFY);\n    int dirty = !(flags & SCC_NO_DIRTIFY);\n    size_t usable;\n    streamConsumer *consumer = zmalloc_usable(sizeof(*consumer), &usable);\n    int success = raxTryInsert(cg->consumers,(unsigned char*)name,\n                               sdslen(name),consumer,NULL);\n    if (!success) {\n        zfree(consumer);\n        return NULL;\n    }\n    s->alloc_size += usable;\n    consumer->name = sdsdup(name);\n    s->alloc_size += sdsAllocSize(consumer->name);\n    consumer->pel = raxNewWithMetadata(0, &s->alloc_size);\n    consumer->active_time = -1;\n    consumer->seen_time = commandTimeSnapshot();\n    if (dirty) server.dirty++;\n    if (notify) notifyKeyspaceEvent(NOTIFY_STREAM,\"xgroup-createconsumer\",key,dbid);\n    return consumer;\n}\n\n/* Lookup the consumer with the specified name in the group 'cg'. */\nstreamConsumer *streamLookupConsumer(streamCG *cg, sds name) {\n    if (cg == NULL) return NULL;\n    void *consumer = NULL;\n    raxFind(cg->consumers,(unsigned char*)name,sdslen(name),&consumer);\n    return consumer;\n}\n\n/* Delete the consumer specified in the consumer group 'cg'. */\nvoid streamDelConsumer(stream *s, streamCG *cg, streamConsumer *consumer) {\n    /* Iterate all the consumer pending messages, deleting every corresponding\n     * entry from the group's global PEL. */\n    raxIterator ri;\n    raxStart(&ri,consumer->pel);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        streamNACK *nack = ri.data;\n        streamUnlinkEntryFromCGroupRef(s, nack, ri.key);\n\n        streamID id;\n        streamDecodeID(ri.key, &id);\n\n        pelListUnlink(cg, nack);\n        raxRemove(cg->pel,ri.key,ri.key_len,NULL);\n\n        streamFreeNACK(s, nack);\n    }\n    raxStop(&ri);\n\n    /* Deallocate the consumer. 
*/\n    raxRemove(cg->consumers,(unsigned char*)consumer->name,\n              sdslen(consumer->name),NULL);\n    streamFreeConsumer(s,consumer);\n}\n\n/* -----------------------------------------------------------------------\n * Consumer groups commands\n * ----------------------------------------------------------------------- */\n\n/* XGROUP CREATE <key> <groupname> <id or $> [MKSTREAM] [ENTRIESREAD entries_read]\n * XGROUP SETID <key> <groupname> <id or $> [ENTRIESREAD entries_read]\n * XGROUP DESTROY <key> <groupname>\n * XGROUP CREATECONSUMER <key> <groupname> <consumer>\n * XGROUP DELCONSUMER <key> <groupname> <consumername> */\nvoid xgroupCommand(client *c) {\n    stream *s = NULL;\n    sds grpname = NULL;\n    streamCG *cg = NULL;\n    char *opt = c->argv[1]->ptr; /* Subcommand name. */\n    int mkstream = 0;\n    long long entries_read = SCG_INVALID_ENTRIES_READ;\n    robj *o;\n    size_t old_alloc = 0;\n\n    /* Everything but the \"HELP\" option requires a key and group name. 
*/\n    if (c->argc >= 4) {\n        /* Parse optional arguments for CREATE and SETID */\n        int i = 5;\n        int create_subcmd = !strcasecmp(opt,\"CREATE\");\n        int setid_subcmd = !strcasecmp(opt,\"SETID\");\n        while (i < c->argc) {\n            if (create_subcmd && !strcasecmp(c->argv[i]->ptr,\"MKSTREAM\")) {\n                mkstream = 1;\n                i++;\n            } else if ((create_subcmd || setid_subcmd) && !strcasecmp(c->argv[i]->ptr,\"ENTRIESREAD\") && i + 1 < c->argc) {\n                if (getLongLongFromObjectOrReply(c,c->argv[i+1],&entries_read,NULL) != C_OK)\n                    return;\n                if (entries_read < 0 && entries_read != SCG_INVALID_ENTRIES_READ) {\n                    addReplyError(c,\"value for ENTRIESREAD must be positive or -1\");\n                    return;\n                }\n                i += 2;\n            } else {\n                addReplySubcommandSyntaxError(c);\n                return;\n            }\n        }\n\n        o = lookupKeyWrite(c->db,c->argv[2]);\n        if (o) {\n            if (checkType(c,o,OBJ_STREAM)) return;\n            s = o->ptr;\n        }\n        grpname = c->argv[3]->ptr;\n    }\n\n    /* Check for missing key/group. */\n    if (c->argc >= 4 && !mkstream) {\n        /* At this point key must exist, or there is an error. */\n        if (s == NULL) {\n            addReplyError(c,\n                \"The XGROUP subcommand requires the key to exist. \"\n                \"Note that for CREATE you may want to use the MKSTREAM \"\n                \"option to create an empty stream automatically.\");\n            return;\n        }\n\n        /* Certain subcommands require the group to exist. 
*/\n        if ((cg = streamLookupCG(s,grpname)) == NULL &&\n            (!strcasecmp(opt,\"SETID\") ||\n             !strcasecmp(opt,\"CREATECONSUMER\") ||\n             !strcasecmp(opt,\"DELCONSUMER\")))\n        {\n            addReplyErrorFormat(c, \"-NOGROUP No such consumer group '%s' \"\n                                   \"for key name '%s'\",\n                                   (char*)grpname, (char*)c->argv[2]->ptr);\n            return;\n        }\n    }\n\n    /* Dispatch the different subcommands. */\n    if (c->argc == 2 && !strcasecmp(opt,\"HELP\")) {\n        const char *help[] = {\n\"CREATE <key> <groupname> <id|$> [option]\",\n\"    Create a new consumer group. Options are:\",\n\"    * MKSTREAM\",\n\"      Create the empty stream if it does not exist.\",\n\"    * ENTRIESREAD entries_read\",\n\"      Set the group's entries_read counter (internal use).\",\n\"CREATECONSUMER <key> <groupname> <consumer>\",\n\"    Create a new consumer in the specified group.\",\n\"DELCONSUMER <key> <groupname> <consumer>\",\n\"    Remove the specified consumer.\",\n\"DESTROY <key> <groupname>\",\n\"    Remove the specified group.\",\n\"SETID <key> <groupname> <id|$> [ENTRIESREAD entries_read]\",\n\"    Set the current group ID and entries_read counter.\",\nNULL\n        };\n        addReplyHelp(c, help);\n    } else if (!strcasecmp(opt,\"CREATE\") && (c->argc >= 5 && c->argc <= 8)) {\n        streamID id;\n        if (!strcmp(c->argv[4]->ptr,\"$\")) {\n            if (s) {\n                id = s->last_id;\n            } else {\n                id.ms = 0;\n                id.seq = 0;\n            }\n        } else if (streamParseStrictIDOrReply(c,c->argv[4],&id,0,NULL) != C_OK) {\n            return;\n        }\n\n        /* Handle the MKSTREAM option now that the command can no longer fail. 
*/\n        if (s == NULL) {\n            serverAssert(mkstream);\n            o = createStreamObject();\n            dbAdd(c->db, c->argv[2], &o);\n            s = o->ptr;\n            keyModified(c,c->db,c->argv[2],o,1);\n        }\n        \n        if (entries_read != SCG_INVALID_ENTRIES_READ && (uint64_t)entries_read > s->entries_added) {\n            entries_read = s->entries_added;\n        }\n\n        if (server.memory_tracking_enabled)\n            old_alloc = kvobjAllocSize(o);\n        streamCG *cg = streamCreateCG(s,grpname,sdslen(grpname),&id,entries_read);\n        if (cg) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db,getKeySlot(c->argv[2]->ptr),o,old_alloc,kvobjAllocSize(o));\n            addReply(c,shared.ok);\n            server.dirty++;\n            notifyKeyspaceEvent(NOTIFY_STREAM,\"xgroup-create\",\n                                c->argv[2],c->db->id);\n            keyModified(c,c->db,c->argv[2],o,0);\n        } else {\n            addReplyError(c,\"-BUSYGROUP Consumer Group name already exists\");\n        }\n    } else if (!strcasecmp(opt,\"SETID\") && (c->argc == 5 || c->argc == 7)) {\n        streamID id;\n        if (!strcmp(c->argv[4]->ptr,\"$\")) {\n            id = s->last_id;\n        } else if (streamParseIDOrReply(c,c->argv[4],&id,0) != C_OK) {\n            return;\n        }\n\n        if (entries_read != SCG_INVALID_ENTRIES_READ && (uint64_t)entries_read > s->entries_added) {\n            entries_read = s->entries_added;\n        }\n\n        streamUpdateCGroupLastId(s, cg, &id);\n        cg->entries_read = entries_read;\n        addReply(c,shared.ok);\n        server.dirty++;\n        notifyKeyspaceEvent(NOTIFY_STREAM,\"xgroup-setid\",c->argv[2],c->db->id);\n        keyModified(c,c->db,c->argv[2],o,0);\n    } else if (!strcasecmp(opt,\"DESTROY\") && c->argc == 4) {\n        if (cg) {\n            if (server.memory_tracking_enabled)\n                old_alloc = kvobjAllocSize(o);\n  
          raxRemove(s->cgroups,(unsigned char*)grpname,sdslen(grpname),NULL);\n            streamDestroyCG(s, cg);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db,getKeySlot(c->argv[2]->ptr),o,old_alloc,kvobjAllocSize(o));\n            addReply(c,shared.cone);\n            server.dirty++;\n            notifyKeyspaceEvent(NOTIFY_STREAM,\"xgroup-destroy\",\n                                c->argv[2],c->db->id);\n            keyModified(c,c->db,c->argv[2],o,0);\n            /* We want to unblock any XREADGROUP consumers with -NOGROUP. */\n            signalKeyAsReady(c->db,c->argv[2],OBJ_STREAM);\n        } else {\n            addReply(c,shared.czero);\n        }\n    } else if (!strcasecmp(opt,\"CREATECONSUMER\") && c->argc == 5) {\n        if (server.memory_tracking_enabled)\n            old_alloc = kvobjAllocSize(o);\n        streamConsumer *created = streamCreateConsumer(s,cg,c->argv[4]->ptr,c->argv[2],\n                                                       c->db->id,SCC_DEFAULT);\n        keyModified(c,c->db,c->argv[2],o,0);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(c->argv[2]->ptr),o,old_alloc,kvobjAllocSize(o));\n        addReplyLongLong(c,created ? 1 : 0);\n    } else if (!strcasecmp(opt,\"DELCONSUMER\") && c->argc == 5) {\n        long long pending = 0;\n        streamConsumer *consumer = streamLookupConsumer(cg,c->argv[4]->ptr);\n        if (consumer) {\n            /* Delete the consumer and return the number of pending messages\n             * that were still associated with it. 
*/\n            if (server.memory_tracking_enabled)\n                old_alloc = kvobjAllocSize(o);\n            pending = raxSize(consumer->pel);\n            streamDelConsumer(s,cg,consumer);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db,getKeySlot(c->argv[2]->ptr),o,old_alloc,kvobjAllocSize(o));\n            server.dirty++;\n            notifyKeyspaceEvent(NOTIFY_STREAM,\"xgroup-delconsumer\",\n                                c->argv[2],c->db->id);\n            keyModified(c,c->db,c->argv[2],o,0);\n        }\n        addReplyLongLong(c,pending);\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\n/* XSETID <stream> <id> [ENTRIESADDED entries_added] [MAXDELETEDID max_deleted_entry_id]\n *\n * Set the internal \"last ID\", \"added entries\" and \"maximal deleted entry ID\"\n * of a stream. */\nvoid xsetidCommand(client *c) {\n    streamID id, max_xdel_id = {0, 0};\n    long long entries_added = -1;\n\n    if (streamParseStrictIDOrReply(c,c->argv[2],&id,0,NULL) != C_OK)\n        return;\n\n    int i = 3;\n    while (i < c->argc) {\n        int moreargs = (c->argc-1) - i; /* Number of additional arguments. 
*/\n        char *opt = c->argv[i]->ptr;\n        if (!strcasecmp(opt,\"ENTRIESADDED\") && moreargs) {\n            if (getLongLongFromObjectOrReply(c,c->argv[i+1],&entries_added,NULL) != C_OK) {\n                return;\n            } else if (entries_added < 0) {\n                addReplyError(c,\"entries_added must be positive\");\n                return;\n            }\n            i += 2;\n        } else if (!strcasecmp(opt,\"MAXDELETEDID\") && moreargs) {\n            if (streamParseStrictIDOrReply(c,c->argv[i+1],&max_xdel_id,0,NULL) != C_OK) {\n                return;\n            } else if (streamCompareID(&id,&max_xdel_id) < 0) {\n                addReplyError(c,\"The ID specified in XSETID is smaller than the provided max_deleted_entry_id\");\n                return;\n            }\n            i += 2;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    kvobj *kv = lookupKeyWriteOrReply(c, c->argv[1], shared.nokeyerr);\n    if (kv == NULL || checkType(c, kv, OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n\n    if (streamCompareID(&id,&s->max_deleted_entry_id) < 0) {\n        addReplyError(c,\"The ID specified in XSETID is smaller than current max_deleted_entry_id\");\n        return;\n    }\n\n    /* If the stream has at least one item, we want to check that the user\n     * is setting a last ID that is equal or greater than the current top\n     * item, otherwise the fundamental ID monotonicity assumption is violated. */\n    if (s->length > 0) {\n        streamID maxid;\n        streamLastValidID(s,&maxid);\n\n        if (streamCompareID(&id,&maxid) < 0) {\n            addReplyError(c,\"The ID specified in XSETID is smaller than the target stream top item\");\n            return;\n        }\n\n        /* If an entries_added was provided, it can't be lower than the length. 
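Taken together, the three XSETID validations (new last ID not below the maximal tombstone, not below the current top entry, and entries_added not below the live length) reduce to one predicate. A minimal sketch under assumed names, not the server's API:

```c
#include <stdint.h>

typedef struct { uint64_t ms, seq; } sketch_stream_id;

// Same contract as streamCompareID(): -1, 0, or 1.
static int sketch_cmp(const sketch_stream_id *a, const sketch_stream_id *b) {
    if (a->ms != b->ms) return a->ms < b->ms ? -1 : 1;
    if (a->seq != b->seq) return a->seq < b->seq ? -1 : 1;
    return 0;
}

// Returns 1 when an XSETID-style update would be accepted. has_entries
// models s->length > 0, entries_added == -1 means "not provided".
static int sketch_xsetid_ok(const sketch_stream_id *new_last,
                            const sketch_stream_id *max_deleted,
                            const sketch_stream_id *top, int has_entries,
                            uint64_t length, int64_t entries_added) {
    if (sketch_cmp(new_last, max_deleted) < 0) return 0;
    if (has_entries && sketch_cmp(new_last, top) < 0) return 0;
    if (has_entries && entries_added != -1 &&
        length > (uint64_t)entries_added) return 0;
    return 1;
}
```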
*/\n        if (entries_added != -1 && s->length > (uint64_t)entries_added) {\n            addReplyError(c,\"The entries_added specified in XSETID is smaller than the target stream length\");\n            return;\n        }\n    }\n\n    s->last_id = id;\n    if (entries_added != -1)\n        s->entries_added = entries_added;\n    if (!streamIDEqZero(&max_xdel_id))\n        s->max_deleted_entry_id = max_xdel_id;\n    addReply(c,shared.ok);\n    server.dirty++;\n    notifyKeyspaceEvent(NOTIFY_STREAM,\"xsetid\",c->argv[1],c->db->id);\n    keyModified(c,c->db,c->argv[1],kv,0);\n}\n\n/* XIDMPRECORD <key> <pid> <iid> <streamID>\n * Set IDMP metadata (producer id + idempotency id) on an existing stream message. */\nvoid xidmprecordCommand(client *c) {\n    streamID id;\n\n    if (streamParseStrictIDOrReply(c, c->argv[4], &id, 0, NULL) != C_OK)\n        return;\n\n    const char *pid_str = c->argv[2]->ptr;\n    size_t pid_len = sdslen((sds)pid_str);\n    const char *iid_str = c->argv[3]->ptr;\n    size_t iid_len = sdslen((sds)iid_str);\n\n    if (pid_len == 0) {\n        addReplyError(c,\"producer ID must be non-empty\");\n        return;\n    }\n    if (iid_len == 0) {\n        addReplyError(c,\"idempotent ID must be non-empty\");\n        return;\n    }\n\n    kvobj *kv = lookupKeyWriteOrReply(c, c->argv[1], shared.nokeyerr);\n    if (kv == NULL || checkType(c, kv, OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n\n    if (!streamEntryExists(s, &id)) {\n        addReplyError(c, \"No such message in stream\");\n        return;\n    }\n\n    size_t old_alloc = server.memory_tracking_enabled ? 
kvobjAllocSize(kv) : 0;\n\n    idmpProducer *producer = idmpGetOrCreateProducer(s, pid_str, pid_len);\n    idmpEntry *entry = idmpEntryCreate(iid_str, iid_len, &s->alloc_size);\n    int found = idmpLookup(producer, entry, &id);\n    if (found) {\n        idmpEntryFree(entry, &s->alloc_size);\n        if (found == 1)\n            addReply(c, shared.ok);\n        else\n            addReplyError(c, \"IID already exists for this producer with a different stream ID\");\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n        return;\n    }\n\n    idmpInsertEntry(s, producer, entry, &id);\n    trackStreamIdmpEntries(c, c->argv[1]);\n    addReply(c, shared.ok);\n    server.dirty++;\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n\n    keyModified(c, c->db, c->argv[1], kv, 0);\n}\n\n/* XACK <key> <group> <id> <id> ... <id>\n * Acknowledge a message as processed. In practical terms we just check the\n * pending entries list (PEL) of the group, and delete the PEL entry both from\n * the group and the consumer (pending messages are referenced in both places).\n *\n * Return value of the command is the number of messages successfully\n * acknowledged, that is, the IDs we were actually able to resolve in the PEL.\n */\nvoid xackCommand(client *c) {\n    streamCG *group = NULL;\n    kvobj *kv = lookupKeyRead(c->db, c->argv[1]);\n    if (kv) {\n        if (checkType(c, kv, OBJ_STREAM)) return; /* Type error. */\n        group = streamLookupCG(kv->ptr, c->argv[2]->ptr);\n    }\n\n    /* No key or group? Nothing to ack. 
*/\n    if (kv == NULL || group == NULL) {\n        addReply(c,shared.czero);\n        return;\n    }\n\n    /* Start parsing the IDs, so that we abort ASAP if there is a syntax\n     * error: the reply of this command cannot be an error once the client\n     * has successfully acknowledged some messages, so it must be executed\n     * in an \"all or nothing\" fashion. */\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    int id_count = c->argc-3;\n    if (id_count > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*id_count);\n    for (int j = 3; j < c->argc; j++) {\n        if (streamParseStrictIDOrReply(c,c->argv[j],&ids[j-3],0,NULL) != C_OK) goto cleanup;\n    }\n\n    int acknowledged = 0;\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(kv) : 0;\n    for (int j = 3; j < c->argc; j++) {\n        unsigned char buf[sizeof(streamID)];\n        streamEncodeID(buf,&ids[j-3]);\n\n        /* Lookup the ID in the group PEL: it will have a reference to the\n         * NACK structure that will have a reference to the consumer, so that\n         * we are able to remove the entry from both PELs. 
*/\n        void *result;\n        if (raxFind(group->pel,buf,sizeof(buf),&result)) {\n            streamNACK *nack = result;\n            pelListUnlink(group, nack);\n            raxRemove(group->pel,buf,sizeof(buf),NULL);\n            if (nack->consumer)\n                raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n            streamDestroyNACK(kv->ptr, nack, buf);\n            acknowledged++;\n            server.dirty++;\n            keyModified(c,c->db,c->argv[1],kv,0);\n        }\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n    addReplyLongLong(c,acknowledged);\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* XNACK key group <SILENT|FAIL|FATAL> IDS numids id [id ...]\n *       [RETRYCOUNT count] [FORCE]\n *\n * Release pending messages back to the group's PEL without acknowledging them.\n * Entries are disassociated from their consumer (consumer = NULL) and\n * repositioned to the head of the PEL time-ordered list (delivery_time = 0),\n * making them immediately claimable by other consumers.\n *\n * Delivery counter behavior (when RETRYCOUNT is not specified):\n *   SILENT: decrement by 1 (undo the delivery increment)\n *   FAIL:   no change (already incremented during delivery)\n *   FATAL:  set to LLONG_MAX\n *\n * RETRYCOUNT count: directly sets delivery_count to the specified value,\n *   overriding the mode-based adjustment.\n *\n * FORCE: create new unowned PEL entries (consumer = NULL) for IDs that\n *   are not already in the group PEL. When FORCE creates an entry, the\n *   delivery counter is set to 0 (or to RETRYCOUNT if specified, or to\n *   LLONG_MAX if mode is FATAL). 
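The delivery-counter rules above condense into a single helper. A sketch of the documented behavior under assumed names; clamping SILENT at zero is an assumption of this sketch, not taken from the implementation:

```c
#include <limits.h>

enum { SKETCH_SILENT, SKETCH_FAIL, SKETCH_FATAL };

// New delivery_count for an existing PEL entry, given the XNACK mode
// and the optional RETRYCOUNT override (-1 when not specified).
static long long sketch_nack_count(long long current, int mode,
                                   long long retrycount) {
    if (retrycount >= 0) return retrycount;         // RETRYCOUNT wins
    if (mode == SKETCH_SILENT)                      // undo the delivery
        return current > 0 ? current - 1 : 0;       // (clamp: assumption)
    if (mode == SKETCH_FATAL) return LLONG_MAX;     // poison-pill marker
    return current;                                 // FAIL: unchanged
}
```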
*/\nvoid xnackCommand(client *c) {\n    streamCG *group = NULL;\n    kvobj *kv = lookupKeyWrite(c->db,c->argv[1]);\n    if (kv) {\n        if (checkType(c,kv,OBJ_STREAM)) return;\n        group = streamLookupCG(kv->ptr,c->argv[2]->ptr);\n    }\n\n    if (kv == NULL || group == NULL) {\n        addReplyErrorFormat(c,\"-NOGROUP No such key '%s' or \"\n                              \"consumer group '%s'\", (char*)c->argv[1]->ptr,\n                              (char*)c->argv[2]->ptr);\n        return;\n    }\n\n    int mode;\n    if (!strcasecmp(c->argv[3]->ptr,\"SILENT\")) {\n        mode = XNACK_SILENT;\n    } else if (!strcasecmp(c->argv[3]->ptr,\"FAIL\")) {\n        mode = XNACK_FAIL;\n    } else if (!strcasecmp(c->argv[3]->ptr,\"FATAL\")) {\n        mode = XNACK_FATAL;\n    } else {\n        addReplyError(c,\"mode must be SILENT, FAIL, or FATAL\");\n        return;\n    }\n\n    int ids_start = 0;\n    int numids = 0;\n    int force = 0;\n    long long retrycount = -1;\n    for (int i = 4; i < c->argc; i++) {\n        int moreargs = (c->argc-1) - i; /* Number of additional arguments. 
*/\n        char *opt = c->argv[i]->ptr;\n        if (!strcasecmp(opt,\"IDS\") && moreargs) {\n            long numids_long;\n            if (getRangeLongFromObjectOrReply(c,c->argv[i+1],1,INT_MAX,\n                &numids_long,\"numids must be a positive integer\") != C_OK)\n                return;\n            numids = (int)numids_long;\n            ids_start = i + 2;\n            if (numids > (c->argc - ids_start)) {\n                addReplyError(c,\"number of IDs doesn't match numids\");\n                return;\n            }\n            i = ids_start + numids - 1;\n        } else if (!strcasecmp(opt,\"FORCE\")) {\n            force = 1;\n        } else if (!strcasecmp(opt,\"RETRYCOUNT\") && moreargs) {\n            i++;\n            if (getLongLongFromObjectOrReply(c,c->argv[i],&retrycount,NULL) != C_OK)\n                return;\n            if (retrycount < 0) {\n                addReplyError(c,\"Invalid RETRYCOUNT value, must be >= 0\");\n                return;\n            }\n        } else {\n            addReplyErrorFormat(c,\"Unrecognized XNACK option '%s'\",\n                                (char *)c->argv[i]->ptr);\n            return;\n        }\n    }\n\n    if (ids_start == 0) {\n        addReplyError(c,\"syntax error, expected IDS keyword\");\n        return;\n    }\n\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    if (numids > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*numids);\n    for (int j = 0; j < numids; j++) {\n        if (streamParseStrictIDOrReply(c,c->argv[ids_start+j],&ids[j],0,NULL) != C_OK) goto cleanup;\n    }\n\n    stream *s = kv->ptr;\n    int nacked = 0;\n    size_t old_alloc = server.memory_tracking_enabled ? 
kvobjAllocSize(kv) : 0;\n    for (int j = 0; j < numids; j++) {\n        unsigned char buf[sizeof(streamID)];\n        streamEncodeID(buf,&ids[j]);\n\n        void *result;\n        int found = raxFind(group->pel,buf,sizeof(buf),&result);\n        if (found) {\n            streamNACK *nack = result;\n            nackSetDeliveryCount(nack, mode, retrycount);\n            if (nack->consumer != NULL) {\n                raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n                nack->consumer = NULL;\n            }\n\n            /* Move to NACK zone: unlink from current position, insert at\n             * end of NACK zone (head region of PEL). */\n            pelListUnlink(group, nack);\n            pelListInsertNacked(group, nack);\n        } else if (force) {\n            /* FORCE: create new unowned PEL entry only if the stream\n             * entry exists, otherwise skip silently (same as XCLAIM). */\n            if (!streamEntryExists(s, &ids[j]))\n                continue;\n            streamNACK *nack = streamCreateNACK(s, NULL, &ids[j]);\n            \n            /* streamCreateNACK() initialises delivery_count to 1 (a real\n             * delivery), but FORCE creates a synthetic entry with no actual\n             * delivery, so reset to 0 before letting nackSetDeliveryCount()\n             * apply the mode/retrycount logic on a clean baseline. */\n            nack->delivery_count = 0;\n            nackSetDeliveryCount(nack, mode, retrycount);\n\n            raxInsert(group->pel, buf, sizeof(buf), nack, NULL);\n            pelListInsertNacked(group, nack);\n            nack->cgroup_ref_node = streamLinkCGroupToEntry(s, group, buf);\n        } else {\n            continue;\n        }\n        nacked++;\n    }\n\n    if (nacked > 0) {\n        server.dirty += nacked;\n        keyModified(c,c->db,c->argv[1],kv,0);\n        /* XNACK can make entries immediately claimable. 
*/\n        signalKeyAsReady(c->db, c->argv[1], OBJ_STREAM);\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n\n    addReplyLongLong(c,nacked);\n\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* Used by xackdelCommand() */\ntypedef enum XAckDelRes {\n    XACKDEL_NO_ID = -1,           /* ID not found in PEL. */\n    XACKDEL_DELETED = 1,          /* Message acknowledged and deleted. */\n    XACKDEL_STILL_REFERENCED = 2, /* Message acknowledged but not deleted (still referenced). */\n} XAckDelRes;\n\n/* XACKDEL <key> <group> [KEEPREF|DELREF|ACKED] [IDS <numids> <id ...>]\n * Acknowledges messages as processed and deletes them from the stream.\n *\n * Returns an array of status codes for each ID, indicating whether it\n * was deleted, still referenced, or not found. */\nvoid xackdelCommand(client *c) {\n    stream *s = NULL;\n    streamCG *group = NULL;\n    kvobj *kv = lookupKeyRead(c->db, c->argv[1]);\n    if (checkType(c, kv, OBJ_STREAM)) return; /* Type error. */\n\n    /* Parse command options */\n    streamAckDelArgs args;\n    if (!streamParseAckDelArgsOrReply(c, 3, &args)) return;\n\n    /* Reply with an array of XACKDEL_NO_ID codes if the key or the group\n     * doesn't exist. */\n    if (!kv || !(group = streamLookupCG(kv->ptr, c->argv[2]->ptr))) {\n        addReplyArrayLen(c, args.numids);\n        for (int i = 0; i < args.numids; i++)\n            addReplyLongLong(c, XACKDEL_NO_ID);\n        return;\n    }\n\n    /* Start parsing the IDs, so that we abort ASAP if there is a syntax\n     * error: the reply of this command cannot be an error once the client\n     * has successfully acknowledged some messages, so it must be executed\n     * in an \"all or nothing\" fashion. 
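Per ID, the reply code reduces to a small decision on PEL membership and remaining references. A sketch under hypothetical names, mirroring the XAckDelRes values this command replies with:

```c
// Mirrors XACKDEL's per-ID status codes.
enum { SKETCH_NO_ID = -1, SKETCH_DELETED = 1, SKETCH_STILL_REFERENCED = 2 };

// in_pel: the ID was found in the group PEL; acked_strategy: the ACKED
// delete strategy was requested; still_referenced: another consumer
// group still holds the entry in its PEL.
static int sketch_ackdel_result(int in_pel, int acked_strategy,
                                int still_referenced) {
    if (!in_pel) return SKETCH_NO_ID;
    if (acked_strategy && still_referenced) return SKETCH_STILL_REFERENCED;
    return SKETCH_DELETED;
}
```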
*/\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    if (args.numids > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*args.numids);\n    for (int j = 0; j < args.numids; j++) {\n        if (streamParseStrictIDOrReply(c,c->argv[j+args.startidx],&ids[j],0,NULL) != C_OK)\n            goto cleanup;\n    }\n\n    s = kv->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(kv) : 0;\n    int first_entry = 0;\n    int deleted = 0, dirty = server.dirty;\n    addReplyArrayLen(c, args.numids);\n    for (int j = 0; j < args.numids; j++) {\n        int res = XACKDEL_NO_ID;\n        streamID *id = &ids[j];\n        unsigned char buf[sizeof(streamID)];\n        streamEncodeID(buf,id);\n\n        /* Lookup the ID in the group PEL: it will have a reference to the\n         * NACK structure that will have a reference to the consumer, so that\n         * we are able to remove the entry from both PELs. */\n        void *result;\n        if (raxFind(group->pel,buf,sizeof(buf),&result)) {\n            streamNACK *nack = result;\n            pelListUnlink(group, nack);\n            raxRemove(group->pel,buf,sizeof(buf),NULL);\n            if (nack->consumer)\n                raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n            streamDestroyNACK(s, nack, buf);\n            server.dirty++;\n\n            int can_delete = 1;\n            if (args.delete_strategy == DELETE_STRATEGY_ACKED) {\n                /* Only delete if acknowledged by all consumer groups */\n                if (streamEntryIsReferenced(s, id))\n                    can_delete = 0;\n            } else if (args.delete_strategy == DELETE_STRATEGY_DELREF) {\n                streamCleanupEntryCGroupRefs(s, id);\n            }\n\n            if (can_delete && streamDeleteItem(s,id)) {\n                /* We want to know if the first entry in the stream was deleted\n                 * so we can later set the new one. 
*/\n                if (streamCompareID(id,&s->first_id) == 0) {\n                    first_entry = 1;\n                }\n                /* Update the stream's maximal tombstone if needed. */\n                if (streamCompareID(id,&s->max_deleted_entry_id) > 0) {\n                    s->max_deleted_entry_id = *id;\n                }\n                deleted++;\n            }\n\n            /* If the entry was in the PEL but not found in the stream,\n             * we still consider it successfully deleted. */\n            res = can_delete ? XACKDEL_DELETED : XACKDEL_STILL_REFERENCED;\n        }\n        addReplyLongLong(c, res);\n    }\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n\n    /* Update the stream's first ID. */\n    if (deleted) {\n        if (s->length == 0) {\n            s->first_id.ms = 0;\n            s->first_id.seq = 0;\n        } else if (first_entry) {\n            streamGetEdgeID(s,1,1,&s->first_id);\n        }\n\n        /* Propagate the write. */\n        keyModified(c,c->db,c->argv[1],kv,1);\n        notifyKeyspaceEvent(NOTIFY_STREAM,\"xdel\",c->argv[1],c->db->id);\n    } else if (server.dirty > dirty) {\n        /* Only ACK succeeded without deleting elements, just update LRM without signaling */\n        keyModified(c,c->db,c->argv[1],kv,0);\n    }\n\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* XPENDING <key> <group> [[IDLE <idle>] <start> <stop> <count> [<consumer>]]\n *\n * If start and stop are omitted, the command just outputs information about\n * the amount of pending messages for the key/group pair, together with\n * the minimum and maximum ID of pending messages.\n *\n * If start and stop are provided instead, the pending messages are returned\n * with information about the current owner, number of deliveries and last\n * delivery time and so forth. 
*/\nvoid xpendingCommand(client *c) {\n    int justinfo = c->argc == 3; /* Without the range just outputs general\n                                    information about the PEL. */\n    robj *key = c->argv[1];\n    robj *groupname = c->argv[2];\n    robj *consumername = NULL;\n    streamID startid, endid;\n    long long count = 0;\n    long long minidle = 0;\n    int startex = 0, endex = 0;\n\n    /* Start and stop, and the consumer, can be omitted. Also the IDLE modifier. */\n    if (c->argc != 3 && (c->argc < 6 || c->argc > 9)) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Parse start/end/count arguments ASAP if needed, in order to report\n     * syntax errors before any other error. */\n    if (c->argc >= 6) {\n        int startidx = 3; /* Without IDLE */\n\n        if (!strcasecmp(c->argv[3]->ptr, \"IDLE\")) {\n            if (getLongLongFromObjectOrReply(c, c->argv[4], &minidle, NULL) == C_ERR)\n                return;\n            if (c->argc < 8) {\n                /* If IDLE was provided we must have at least 'start end count' */\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n            /* Search for rest of arguments after 'IDLE <idle>' */\n            startidx += 2;\n        }\n\n        /* count argument. */\n        if (getLongLongFromObjectOrReply(c,c->argv[startidx+2],&count,NULL) == C_ERR)\n            return;\n        if (count < 0) count = 0;\n\n        /* start and end arguments. 
*/\n        if (streamParseIntervalIDOrReply(c,c->argv[startidx],&startid,&startex,0) != C_OK)\n            return;\n        if (startex && streamIncrID(&startid) != C_OK) {\n            addReplyError(c,\"invalid start ID for the interval\");\n            return;\n        }\n        if (streamParseIntervalIDOrReply(c,c->argv[startidx+1],&endid,&endex,UINT64_MAX) != C_OK)\n            return;\n        if (endex && streamDecrID(&endid) != C_OK) {\n            addReplyError(c,\"invalid end ID for the interval\");\n            return;\n        }\n\n        if (startidx+3 < c->argc) {\n            /* 'consumer' was provided */\n            consumername = c->argv[startidx+3];\n        }\n    }\n\n    /* Lookup the key and the group inside the stream. */\n    kvobj *kv = lookupKeyRead(c->db, c->argv[1]);\n    streamCG *group;\n\n    if (checkType(c, kv, OBJ_STREAM)) return;\n    if (kv == NULL ||\n        (group = streamLookupCG(kv->ptr, groupname->ptr)) == NULL)\n    {\n        addReplyErrorFormat(c, \"-NOGROUP No such key '%s' or consumer \"\n                               \"group '%s'\",\n                               (char*)key->ptr,(char*)groupname->ptr);\n        return;\n    }\n\n    /* XPENDING <key> <group> variant. */\n    if (justinfo) {\n        addReplyArrayLen(c,4);\n        /* Total number of messages in the PEL. */\n        addReplyLongLong(c,raxSize(group->pel));\n        /* First and last IDs. */\n        if (raxSize(group->pel) == 0) {\n            addReplyNull(c); /* Start. */\n            addReplyNull(c); /* End. */\n            addReplyNullArray(c); /* Clients. */\n        } else {\n            /* Start. */\n            raxIterator ri;\n            raxStart(&ri,group->pel);\n            raxSeek(&ri,\"^\",NULL,0);\n            raxNext(&ri);\n            streamDecodeID(ri.key,&startid);\n            addReplyStreamID(c,&startid);\n\n            /* End. 
*/\n            raxSeek(&ri,\"$\",NULL,0);\n            raxNext(&ri);\n            streamDecodeID(ri.key,&endid);\n            addReplyStreamID(c,&endid);\n            raxStop(&ri);\n\n            /* Consumers with pending messages. */\n            raxStart(&ri,group->consumers);\n            raxSeek(&ri,\"^\",NULL,0);\n            void *arraylen_ptr = addReplyDeferredLen(c);\n            size_t arraylen = 0;\n            while(raxNext(&ri)) {\n                streamConsumer *consumer = ri.data;\n                if (raxSize(consumer->pel) == 0) continue;\n                addReplyArrayLen(c,2);\n                addReplyBulkCBuffer(c,ri.key,ri.key_len);\n                addReplyBulkLongLong(c,raxSize(consumer->pel));\n                arraylen++;\n            }\n            setDeferredArrayLen(c,arraylen_ptr,arraylen);\n            raxStop(&ri);\n        }\n    } else { /* <start>, <stop> and <count> provided, return actual pending entries (not just info) */\n        streamConsumer *consumer = NULL;\n        if (consumername) {\n            consumer = streamLookupConsumer(group,consumername->ptr);\n\n            /* If a consumer name was mentioned but it does not exist, we can\n             * just return an empty array. */\n            if (consumer == NULL) {\n                addReplyArrayLen(c,0);\n                return;\n            }\n        }\n\n        rax *pel = consumer ? 
consumer->pel : group->pel;\n        unsigned char startkey[sizeof(streamID)];\n        unsigned char endkey[sizeof(streamID)];\n        raxIterator ri;\n        mstime_t now = commandTimeSnapshot();\n\n        streamEncodeID(startkey,&startid);\n        streamEncodeID(endkey,&endid);\n        raxStart(&ri,pel);\n        raxSeek(&ri,\">=\",startkey,sizeof(startkey));\n        void *arraylen_ptr = addReplyDeferredLen(c);\n        size_t arraylen = 0;\n\n        while(count && raxNext(&ri) && memcmp(ri.key,endkey,ri.key_len) <= 0) {\n            streamNACK *nack = ri.data;\n\n            if (nack->consumer && minidle) {\n                mstime_t this_idle = now - nack->delivery_time;\n                if (this_idle < minidle) continue;\n            }\n\n            arraylen++;\n            count--;\n            addReplyArrayLen(c,4);\n\n            /* Entry ID. */\n            streamID id;\n            streamDecodeID(ri.key,&id);\n            addReplyStreamID(c,&id);\n\n            /* Consumer name (empty string if NACKed / unowned). */\n            if (nack->consumer) {\n                addReplyBulkCBuffer(c,nack->consumer->name,\n                                    sdslen(nack->consumer->name));\n            } else {\n                addReplyBulkCBuffer(c,\"\",0);\n            }\n\n            /* Milliseconds elapsed since last delivery (-1 if unowned / NACKed). */\n            mstime_t elapsed;\n            if (nack->consumer) {\n                elapsed = now - nack->delivery_time;\n                if (elapsed < 0) elapsed = 0;\n            } else {\n                elapsed = -1;\n            }\n            addReplyLongLong(c,elapsed);\n\n            /* Number of deliveries. 
*/\n            addReplyLongLong(c,nack->delivery_count);\n        }\n        raxStop(&ri);\n        setDeferredArrayLen(c,arraylen_ptr,arraylen);\n    }\n}\n\n/* XCLAIM <key> <group> <consumer> <min-idle-time> <ID-1> <ID-2>\n *        [IDLE <milliseconds>] [TIME <mstime>] [RETRYCOUNT <count>]\n *        [FORCE] [JUSTID]\n *\n * Changes ownership of one or multiple messages in the Pending Entries List\n * of a given stream consumer group.\n *\n * If the message ID (among the specified ones) exists, and its idle\n * time is greater than or equal to <min-idle-time>, then the message's new\n * owner becomes the specified <consumer>. If the minimum idle time specified\n * is zero, messages are claimed regardless of their idle time.\n *\n * All the messages that cannot be found inside the pending entries list\n * are ignored, unless the FORCE option is used: in that case we\n * create the NACK (representing a not yet acknowledged message) entry in\n * the consumer group PEL.\n *\n * This command creates the consumer as a side effect if it does not yet\n * exist. Moreover the command resets the idle time of the message to 0,\n * though the user can control the new idle time via the IDLE or TIME\n * options.\n *\n * The options at the end can be used in order to specify more attributes\n * to set in the representation of the pending message:\n *\n * 1. IDLE <ms>:\n *      Set the idle time (last time it was delivered) of the message.\n *      If IDLE is not specified, an IDLE of 0 is assumed, that is,\n *      the time count is reset because the message now has a new\n *      owner trying to process it.\n *\n * 2. TIME <ms-unix-time>:\n *      This is the same as IDLE but instead of a relative amount of\n *      milliseconds, it sets the idle time to a specific unix time\n *      (in milliseconds). This is useful in order to rewrite the AOF\n *      file by generating XCLAIM commands.\n *\n * 3. RETRYCOUNT <count>:\n *      Set the retry counter to the specified value. 
This counter is\n *      incremented every time a message is delivered again. Normally\n *      XCLAIM does not alter this counter, which is simply served to clients\n *      when the XPENDING command is called: this way clients can detect\n *      anomalies, like messages that are never processed for some reason\n *      after a big number of delivery attempts.\n *\n * 4. FORCE:\n *      Creates the pending message entry in the PEL even if certain\n *      specified IDs are not already in the PEL and assigned to a\n *      different client. However the message must exist in the stream,\n *      otherwise the IDs of non-existing messages are ignored.\n *\n * 5. JUSTID:\n *      Return just an array of IDs of messages successfully claimed,\n *      without returning the actual messages.\n *\n * 6. LASTID <id>:\n *      Update the consumer group last ID with the specified ID if the\n *      current last ID is smaller than the provided one.\n *      This is used for replication / AOF, so that when we read from a\n *      consumer group, the XCLAIM that gets propagated to give ownership\n *      to the consumer is also used to update the group's current ID.\n *\n * The command returns an array of messages that the user\n * successfully claimed, so that the caller is able to understand\n * what messages it is now in charge of. */\nvoid xclaimCommand(client *c) {\n    streamCG *group = NULL;\n    kvobj *o = lookupKeyRead(c->db,c->argv[1]);\n    long long minidle; /* Minimum idle time argument. */\n    long long retrycount = -1;   /* -1 means RETRYCOUNT option not given. */\n    mstime_t deliverytime = -1;  /* -1 means IDLE/TIME options not given. */\n    int force = 0;\n    int justid = 0;\n\n    if (o) {\n        if (checkType(c,o,OBJ_STREAM)) return; /* Type error. */\n        group = streamLookupCG(o->ptr,c->argv[2]->ptr);\n    }\n\n    /* No key or group? Send an error given that the group creation\n     * is mandatory. 
*/\n    if (o == NULL || group == NULL) {\n        addReplyErrorFormat(c,\"-NOGROUP No such key '%s' or \"\n                              \"consumer group '%s'\", (char*)c->argv[1]->ptr,\n                              (char*)c->argv[2]->ptr);\n        return;\n    }\n\n    if (getLongLongFromObjectOrReply(c,c->argv[4],&minidle,\n        \"Invalid min-idle-time argument for XCLAIM\")\n        != C_OK) return;\n    if (minidle < 0) minidle = 0;\n\n    /* Start parsing the IDs, so that we abort ASAP if there is a syntax\n     * error: the reply of this command cannot be an error once the client\n     * has successfully claimed some messages, so it must be executed\n     * in an \"all or nothing\" fashion. */\n    int j;\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    int id_count = c->argc-5;\n    if (id_count > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*id_count);\n    for (j = 5; j < c->argc; j++) {\n        if (streamParseStrictIDOrReply(NULL,c->argv[j],&ids[j-5],0,NULL) != C_OK) break;\n    }\n    int last_id_arg = j-1; /* Next time we iterate the IDs we know the range. */\n\n    /* If we stopped because some IDs cannot be parsed, perhaps they\n     * are trailing options. */\n    mstime_t now = commandTimeSnapshot();\n    streamID last_id = {0,0};\n    int propagate_last_id = 0;\n    for (; j < c->argc; j++) {\n        int moreargs = (c->argc-1) - j; /* Number of additional arguments. 
*/\n        char *opt = c->argv[j]->ptr;\n        if (!strcasecmp(opt,\"FORCE\")) {\n            force = 1;\n        } else if (!strcasecmp(opt,\"JUSTID\")) {\n            justid = 1;\n        } else if (!strcasecmp(opt,\"IDLE\") && moreargs) {\n            j++;\n            if (getLongLongFromObjectOrReply(c,c->argv[j],&deliverytime,\n                \"Invalid IDLE option argument for XCLAIM\")\n                != C_OK) goto cleanup;\n            deliverytime = now - deliverytime;\n        } else if (!strcasecmp(opt,\"TIME\") && moreargs) {\n            j++;\n            if (getLongLongFromObjectOrReply(c,c->argv[j],&deliverytime,\n                \"Invalid TIME option argument for XCLAIM\")\n                != C_OK) goto cleanup;\n        } else if (!strcasecmp(opt,\"RETRYCOUNT\") && moreargs) {\n            j++;\n            if (getLongLongFromObjectOrReply(c,c->argv[j],&retrycount,\n                \"Invalid RETRYCOUNT option argument for XCLAIM\")\n                != C_OK) goto cleanup;\n        } else if (!strcasecmp(opt,\"LASTID\") && moreargs) {\n            j++;\n            if (streamParseStrictIDOrReply(c,c->argv[j],&last_id,0,NULL) != C_OK) goto cleanup;\n        } else {\n            addReplyErrorFormat(c,\"Unrecognized XCLAIM option '%s'\",opt);\n            goto cleanup;\n        }\n    }\n\n    if (streamCompareID(&last_id,&group->last_id) > 0) {\n        streamUpdateCGroupLastId(o->ptr, group, &last_id);\n        propagate_last_id = 1;\n    }\n\n    if (deliverytime != -1) {\n        /* If a delivery time was passed, either with IDLE or TIME, we\n         * do some sanity check on it, and set the deliverytime to now\n         * (which is a sane choice usually) if the value is bogus.\n         * To raise an error here is not wise because clients may compute\n         * the idle time doing some math starting from their local time,\n         * and this is not a good excuse to fail in case, for instance,\n         * the computer time is a bit in the 
future from our POV. */\n        if (deliverytime < 0 || deliverytime > now) deliverytime = now;\n    } else {\n        /* If no IDLE/TIME option was passed, we want the last delivery\n         * time to be now, so that the idle time of the message will be\n         * zero. */\n        deliverytime = now;\n    }\n\n    /* Do the actual claiming. */\n    stream *s = o->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(o) : 0;\n    streamConsumer *consumer = streamLookupConsumer(group,c->argv[3]->ptr);\n    if (consumer == NULL) {\n        consumer = streamCreateConsumer(o->ptr,group,c->argv[3]->ptr,c->argv[1],c->db->id,SCC_DEFAULT);\n    }\n    consumer->seen_time = commandTimeSnapshot();\n\n    void *arraylenptr = addReplyDeferredLen(c);\n    size_t arraylen = 0;\n    for (int j = 5; j <= last_id_arg; j++) {\n        streamID id = ids[j-5];\n        unsigned char buf[sizeof(streamID)];\n        streamEncodeID(buf,&id);\n\n        /* Lookup the ID in the group PEL. */\n        void *result = NULL;\n        raxFind(group->pel,buf,sizeof(buf),&result);\n        streamNACK *nack = result;\n\n        /* Item must exist for us to transfer it to another consumer. */\n        if (!streamEntryExists(s,&id)) {\n            /* Clear this entry from the PEL, it no longer exists */\n            if (nack != NULL) {\n                /* Propagate this change (we are going to delete the NACK). */\n                if (nack->consumer) {\n                    streamPropagateXCLAIM(c,c->argv[1],group,c->argv[2],c->argv[j],nack);\n                    propagate_last_id = 0; /* Will be propagated by XCLAIM itself. */\n                } else {\n                    /* Unowned NACK (NACK zone entry from XNACK) — can't use\n                     * XCLAIM propagation without a consumer; use XACK instead. 
*/\n                    streamPropagateXACK(c->db->id,c->argv[1],c->argv[2],c->argv[j]);\n                }\n                server.dirty++;\n                /* Release the NACK */\n                pelListUnlink(group, nack);\n                raxRemove(group->pel,buf,sizeof(buf),NULL);\n                if (nack->consumer)\n                    raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n                streamDestroyNACK(s, nack, buf);\n            }\n            continue;\n        }\n\n        /* If FORCE is passed, let's check if at least the entry\n         * exists in the Stream. In such case, we'll create a new\n         * entry in the PEL from scratch, so that XCLAIM can also\n         * be used to create entries in the PEL. Useful for AOF\n         * and replication of consumer groups. */\n        if (force && nack == NULL) {\n            /* Create the NACK. */\n            nack = streamCreateNACK(s, NULL, &id);\n            raxInsert(group->pel,buf,sizeof(buf),nack,NULL);\n            pelListInsertAtTail(group, nack);\n            nack->cgroup_ref_node = streamLinkCGroupToEntry(s, group, buf);\n        }\n\n        if (nack != NULL) {\n            /* We need to check if the minimum idle time requested\n             * by the caller is satisfied by this entry.\n             *\n             * Note that the nack could be created by FORCE, in this\n             * case there was no pre-existing entry and minidle should\n             * be ignored, but in that case nack->consumer is NULL. */\n            if (nack->consumer && minidle) {\n                mstime_t this_idle = now - nack->delivery_time;\n                if (this_idle < minidle) continue;\n            }\n\n            if (nack->consumer != consumer) {\n                /* Remove the entry from the old consumer.\n                 * Note that nack->consumer is NULL if we created the\n                 * NACK above because of the FORCE option. 
*/\n                if (nack->consumer) {\n                    raxRemove(nack->consumer->pel,buf,sizeof(buf),NULL);\n                }\n            }\n\n            pelListUpdate(group, nack, deliverytime);\n\n            /* Set the delivery attempts counter if given, otherwise\n             * autoincrement unless JUSTID option provided */\n            if (retrycount >= 0) {\n                nack->delivery_count = retrycount;\n            } else if (!justid) {\n                nack->delivery_count += nack->delivery_count == LLONG_MAX ? 0 : 1;\n            }\n            if (nack->consumer != consumer) {\n                /* Add the entry in the new consumer local PEL. */\n                raxInsert(consumer->pel,buf,sizeof(buf),nack,NULL);\n                nack->consumer = consumer;\n            }\n            /* Send the reply for this entry. */\n            if (justid) {\n                addReplyStreamID(c,&id);\n            } else {\n                serverAssert(streamReplyWithRange(c,o->ptr,&id,&id,1,0,-1,NULL,NULL,STREAM_RWR_RAWENTRIES,NULL,NULL) == 1);\n            }\n            arraylen++;\n\n            consumer->active_time = commandTimeSnapshot();\n\n            /* Propagate this change. */\n            streamPropagateXCLAIM(c,c->argv[1],group,c->argv[2],c->argv[j],nack);\n            propagate_last_id = 0; /* Will be propagated by XCLAIM itself. 
*/\n            server.dirty++;\n        }\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),o,old_alloc,kvobjAllocSize(o));\n    if (propagate_last_id) {\n        streamPropagateGroupID(c,c->argv[1],group,c->argv[2]);\n        server.dirty++;\n    }\n    setDeferredArrayLen(c,arraylenptr,arraylen);\n    preventCommandPropagation(c);\n    keyModified(c,c->db,c->argv[1],o,0);\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* XAUTOCLAIM <key> <group> <consumer> <min-idle-time> <start> [COUNT <count>] [JUSTID]\n *\n * Changes ownership of one or multiple messages in the Pending Entries List\n * of a given stream consumer group.\n *\n * For each PEL entry, if its idle time is greater than or equal to\n * <min-idle-time>, then the message's new owner becomes the specified\n * <consumer>. If the minimum idle time specified is zero, messages are\n * claimed regardless of their idle time.\n *\n * This command creates the consumer as a side effect if it does not yet\n * exist. Moreover, the command resets the idle time of the message to 0.\n *\n * The command returns an array of messages that the user\n * successfully claimed, so that the caller is able to understand\n * what messages it is now in charge of. */\nvoid xautoclaimCommand(client *c) {\n    streamCG *group = NULL;\n    kvobj *o = lookupKeyRead(c->db,c->argv[1]);\n    long long minidle; /* Minimum idle time argument, in milliseconds. */\n    long count = 100; /* Maximum entries to claim. */\n    const unsigned attempts_factor = 10;\n    streamID startid;\n    int startex;\n    int justid = 0;\n\n    /* Parse idle/start/end/count arguments ASAP if needed, in order to report\n     * syntax errors before any other error. 
*/\n    if (getLongLongFromObjectOrReply(c,c->argv[4],&minidle,\"Invalid min-idle-time argument for XAUTOCLAIM\") != C_OK)\n        return;\n    if (minidle < 0) minidle = 0;\n\n    if (streamParseIntervalIDOrReply(c,c->argv[5],&startid,&startex,0) != C_OK)\n        return;\n    if (startex && streamIncrID(&startid) != C_OK) {\n        addReplyError(c,\"invalid start ID for the interval\");\n        return;\n    }\n\n    int j = 6; /* options start at argv[6] */\n    while(j < c->argc) {\n        int moreargs = (c->argc-1) - j; /* Number of additional arguments. */\n        char *opt = c->argv[j]->ptr;\n        if (!strcasecmp(opt,\"COUNT\") && moreargs) {\n            long max_count = LONG_MAX / (max(sizeof(streamID), attempts_factor));\n            if (getRangeLongFromObjectOrReply(c,c->argv[j+1],1,max_count,&count,\"COUNT must be > 0\") != C_OK)\n                return;\n            j++;\n        } else if (!strcasecmp(opt,\"JUSTID\")) {\n            justid = 1;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n        j++;\n    }\n\n    if (o) {\n        if (checkType(c,o,OBJ_STREAM))\n            return; /* Type error. */\n        group = streamLookupCG(o->ptr,c->argv[2]->ptr);\n    }\n\n    /* No key or group? Send an error given that the group creation\n     * is mandatory. */\n    if (o == NULL || group == NULL) {\n        addReplyErrorFormat(c,\"-NOGROUP No such key '%s' or consumer group '%s'\",\n                            (char*)c->argv[1]->ptr,\n                            (char*)c->argv[2]->ptr);\n        return;\n    }\n\n    streamID *deleted_ids = ztrymalloc(count * sizeof(streamID));\n    if (!deleted_ids) {\n        addReplyError(c, \"Insufficient memory, failed allocating transient memory, COUNT too high.\");\n        return;\n    }\n\n    /* Do the actual claiming. */\n    stream *s = o->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? 
kvobjAllocSize(o) : 0;\n    streamConsumer *consumer = streamLookupConsumer(group,c->argv[3]->ptr);\n    if (consumer == NULL) {\n        consumer = streamCreateConsumer(o->ptr,group,c->argv[3]->ptr,c->argv[1],c->db->id,SCC_DEFAULT);\n    }\n    consumer->seen_time = commandTimeSnapshot();\n\n    long long attempts = count * attempts_factor;\n\n    addReplyArrayLen(c, 3); /* We add another reply later */\n    void *endidptr = addReplyDeferredLen(c); /* reply[0] */\n    void *arraylenptr = addReplyDeferredLen(c); /* reply[1] */\n\n    unsigned char startkey[sizeof(streamID)];\n    streamEncodeID(startkey,&startid);\n    raxIterator ri;\n    raxStart(&ri,group->pel);\n    raxSeek(&ri,\">=\",startkey,sizeof(startkey));\n    size_t arraylen = 0;\n    mstime_t now = commandTimeSnapshot();\n    int deleted_id_num = 0;\n    while (attempts-- && count && raxNext(&ri)) {\n        streamNACK *nack = ri.data;\n\n        streamID id;\n        streamDecodeID(ri.key, &id);\n\n        /* Item must exist for us to transfer it to another consumer. */\n        if (!streamEntryExists(s,&id)) {\n            /* Propagate this change (we are going to delete the NACK). */\n            if (nack->consumer) {\n                robj *idstr = createObjectFromStreamID(&id);\n                streamPropagateXCLAIM(c,c->argv[1],group,c->argv[2],idstr,nack);\n                decrRefCount(idstr);\n            } else {\n                /* Unowned NACK (NACK zone entry from XNACK) — can't use\n                 * XCLAIM propagation without a consumer; use XACK instead. 
*/\n                robj *idstr = createObjectFromStreamID(&id);\n                streamPropagateXACK(c->db->id,c->argv[1],c->argv[2],idstr);\n                decrRefCount(idstr);\n            }\n            server.dirty++;\n            /* Clear this entry from the PEL, it no longer exists */\n            pelListUnlink(group, nack);\n            raxRemove(group->pel,ri.key,ri.key_len,NULL);\n            if (nack->consumer)\n                raxRemove(nack->consumer->pel,ri.key,ri.key_len,NULL);\n            streamDestroyNACK(s, nack, ri.key);\n            /* Remember the ID for later */\n            deleted_ids[deleted_id_num++] = id;\n            raxSeek(&ri,\">=\",ri.key,ri.key_len);\n            count--; /* Count is a limit of the command response size. */\n            continue;\n        }\n\n        if (nack->consumer && minidle) {\n            mstime_t this_idle = now - nack->delivery_time;\n            if (this_idle < minidle)\n                continue;\n        }\n\n        if (nack->consumer != consumer) {\n            /* Remove the entry from the old consumer.\n             * Note that nack->consumer is NULL for unowned NACK\n             * entries (XAUTOCLAIM has no FORCE option). */\n            if (nack->consumer) {\n                raxRemove(nack->consumer->pel,ri.key,ri.key_len,NULL);\n            }\n        }\n\n        /* Update the consumer and idle time. */\n        pelListUpdate(group, nack, now);\n\n        /* Increment the delivery attempts counter unless the JUSTID option was provided */\n        if (!justid)\n            nack->delivery_count += nack->delivery_count == LLONG_MAX ? 0 : 1;\n\n        if (nack->consumer != consumer) {\n            /* Add the entry in the new consumer local PEL. */\n            raxInsert(consumer->pel,ri.key,ri.key_len,nack,NULL);\n            nack->consumer = consumer;\n        }\n\n        /* Send the reply for this entry. 
*/\n        if (justid) {\n            addReplyStreamID(c,&id);\n        } else {\n            serverAssert(streamReplyWithRange(c,o->ptr,&id,&id,1,0,-1,NULL,NULL,STREAM_RWR_RAWENTRIES,NULL,NULL) == 1);\n        }\n        arraylen++;\n        count--;\n\n        consumer->active_time = commandTimeSnapshot();\n\n        /* Propagate this change. */\n        robj *idstr = createObjectFromStreamID(&id);\n        streamPropagateXCLAIM(c,c->argv[1],group,c->argv[2],idstr,nack);\n        decrRefCount(idstr);\n        server.dirty++;\n    }\n\n    /* We need to return the next entry as a cursor for the next XAUTOCLAIM call */\n    raxNext(&ri);\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),o,old_alloc,kvobjAllocSize(o));\n\n    streamID endid;\n    if (raxEOF(&ri)) {\n        endid.ms = endid.seq = 0;\n    } else {\n        streamDecodeID(ri.key, &endid);\n    }\n    raxStop(&ri);\n\n    setDeferredArrayLen(c,arraylenptr,arraylen);\n    setDeferredReplyStreamID(c,endidptr,&endid);\n\n    addReplyArrayLen(c, deleted_id_num); /* reply[2] */\n    for (int i = 0; i < deleted_id_num; i++) {\n        addReplyStreamID(c, &deleted_ids[i]);\n    }\n    zfree(deleted_ids);\n\n    preventCommandPropagation(c);\n    /* Update LRM but don't signal. */\n    keyModified(c,c->db,c->argv[1],o,0);\n}\n\n/* XDEL <key> [<ID1> <ID2> ... <IDN>]\n *\n * Removes the specified entries from the stream. Returns the number\n * of items actually deleted, which may differ from the number\n * of IDs passed when certain IDs do not exist. */\nvoid xdelCommand(client *c) {\n    kvobj *kv = lookupKeyWriteOrReply(c, c->argv[1], shared.czero);\n    if (kv == NULL || checkType(c, kv, OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(kv) : 0;\n\n    /* We need to sanity check the IDs passed to start. 
Even if not\n     * a big issue, it is not great that the command is only partially\n     * executed because at some point an invalid ID is parsed. */\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    int id_count = c->argc-2;\n    if (id_count > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*id_count);\n    for (int j = 2; j < c->argc; j++) {\n        if (streamParseStrictIDOrReply(c,c->argv[j],&ids[j-2],0,NULL) != C_OK) goto cleanup;\n    }\n\n    /* Actually apply the command. */\n    int deleted = 0;\n    int first_entry = 0;\n    for (int j = 2; j < c->argc; j++) {\n        streamID *id = &ids[j-2];\n        if (streamDeleteItem(s,id)) {\n            /* We want to know if the first entry in the stream was deleted\n             * so we can later set the new one. */\n            if (streamCompareID(id,&s->first_id) == 0) {\n                first_entry = 1;\n            }\n            /* Update the stream's maximal tombstone if needed. */\n            if (streamCompareID(id,&s->max_deleted_entry_id) > 0) {\n                s->max_deleted_entry_id = *id;\n            }\n            deleted++;\n        }\n    }\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n\n    /* Update the stream's first ID. */\n    if (deleted) {\n        if (s->length == 0) {\n            s->first_id.ms = 0;\n            s->first_id.seq = 0;\n        } else if (first_entry) {\n            streamGetEdgeID(s,1,1,&s->first_id);\n        }\n    }\n\n    /* Propagate the write if needed. 
*/\n    if (deleted) {\n        keyModified(c,c->db,c->argv[1],kv,1);\n        notifyKeyspaceEvent(NOTIFY_STREAM,\"xdel\",c->argv[1],c->db->id);\n        server.dirty += deleted;\n    }\n    addReplyLongLong(c,deleted);\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* Used by xdelexCommand() */\ntypedef enum XDelexRes {\n    XDELEX_NO_ID = -1,           /* ID not found in the stream. */\n    XDELEX_DELETED = 1,          /* Message deleted. */\n    XDELEX_STILL_REFERENCED = 2, /* Message not deleted (still referenced). */\n} XDelexRes;\n\n/* XDELEX <key> [KEEPREF|DELREF|ACKED] [IDS <numids> <id ...>]\n *\n * Removes the specified entries from the stream. Returns an array of status codes for\n * each ID, indicating whether it was deleted, still referenced, or not found. */\nvoid xdelexCommand(client *c) {\n    kvobj *kv = lookupKeyWrite(c->db, c->argv[1]);\n    if (checkType(c, kv, OBJ_STREAM)) return;\n\n    /* Parse command options */\n    streamAckDelArgs args;\n    if (!streamParseAckDelArgsOrReply(c, 2, &args)) return;\n\n    /* A non-existing key and an empty stream are the same thing. Reply with\n     * the no-such-ID status for each requested ID if the key does not exist. */\n    if (!kv) {\n        addReplyArrayLen(c, args.numids);\n        for (int i = 0; i < args.numids; i++)\n            addReplyLongLong(c, XDELEX_NO_ID);\n        return;\n    }\n\n    /* We need to sanity check the IDs passed to start. Even if not\n     * a big issue, it is not great that the command is only partially\n     * executed because at some point an invalid ID is parsed. */\n    streamID static_ids[STREAMID_STATIC_VECTOR_LEN];\n    streamID *ids = static_ids;\n    if (args.numids > STREAMID_STATIC_VECTOR_LEN)\n        ids = zmalloc(sizeof(streamID)*args.numids);\n    for (int j = 0; j < args.numids; j++) {\n        if (streamParseStrictIDOrReply(c,c->argv[j+args.startidx],&ids[j],0,NULL) != C_OK)\n            goto cleanup;\n    }\n\n    stream *s = kv->ptr;\n    size_t old_alloc = server.memory_tracking_enabled ? 
kvobjAllocSize(kv) : 0;\n    int first_entry = 0;\n    int deleted = 0;\n    addReplyArrayLen(c, args.numids);\n    for (int j = 0; j < args.numids; j++) {\n        int res = XDELEX_NO_ID;\n        streamID *id = &ids[j];\n        unsigned char buf[sizeof(streamID)];\n        streamEncodeID(buf,id);\n\n        int can_delete = 1;\n        if (args.delete_strategy == DELETE_STRATEGY_ACKED) {\n            /* Only delete if acknowledged by all consumer groups */\n            if (streamEntryIsReferenced(s, id))\n                can_delete = 0;\n        } else if (args.delete_strategy == DELETE_STRATEGY_DELREF) {\n            streamCleanupEntryCGroupRefs(s, id);\n        }\n\n        if (can_delete) { /* can_delete being true doesn't guarantee the ID exists */\n            if (streamDeleteItem(s,id)) {\n                /* We want to know if the first entry in the stream was deleted\n                 * so we can later set the new one. */\n                if (streamCompareID(id,&s->first_id) == 0) {\n                    first_entry = 1;\n                }\n                /* Update the stream's maximal tombstone if needed. */\n                if (streamCompareID(id,&s->max_deleted_entry_id) > 0) {\n                    s->max_deleted_entry_id = *id;\n                }\n                deleted++;\n                res = XDELEX_DELETED;\n            } else {\n                /* This id doesn't exist. */\n            }\n        } else {\n            res = XDELEX_STILL_REFERENCED;\n        }\n\n        addReplyLongLong(c, res);\n    }\n\n    /* Update the stream's first ID. */\n    if (deleted) {\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n        if (s->length == 0) {\n            s->first_id.ms = 0;\n            s->first_id.seq = 0;\n        } else if (first_entry) {\n            streamGetEdgeID(s,1,1,&s->first_id);\n        }\n\n        /* Propagate the write. 
*/\n        keyModified(c,c->db,c->argv[1],kv,1);\n        notifyKeyspaceEvent(NOTIFY_STREAM,\"xdel\",c->argv[1],c->db->id);\n        server.dirty += deleted;\n    }\n\ncleanup:\n    if (ids != static_ids) zfree(ids);\n}\n\n/* General form: XTRIM <key> [... options ...]\n *\n * List of options:\n *\n * Trim strategies:\n *\n * MAXLEN [~|=] <count>     -- Trim so that the stream will be capped at\n *                             the specified length. Use ~ before the\n *                             count in order to demand approximated trimming\n *                             (like the XADD MAXLEN option).\n * MINID [~|=] <id>         -- Trim so that the stream will not contain entries\n *                             with IDs smaller than 'id'. Use ~ before the\n *                             id in order to demand approximated trimming\n *                             (like the XADD MINID option).\n *\n * Consumer group reference handling (optional, defaults to KEEPREF):\n *\n * KEEPREF                  -- Keeps existing consumer group references\n * DELREF                   -- Cleans up all consumer group references\n * ACKED                    -- Only deletes messages that are acknowledged\n *\n * Other options:\n *\n * LIMIT <entries>          -- The maximum number of entries to trim.\n *                             0 means unlimited. Unless specified, it is set\n *                             to a default of 100*server.stream_node_max_entries,\n *                             and that's in order to keep the trimming time sane.\n *                             Has meaning only if `~` was provided.\n */\nvoid xtrimCommand(client *c) {\n    /* Argument parsing. */\n    streamAddTrimArgs parsed_args;\n    if (streamParseAddOrTrimArgsOrReply(c, &parsed_args, 0) < 0)\n        return; /* streamParseAddOrTrimArgsOrReply already replied. */\n\n    /* If the key does not exist, we are ok returning zero, that is, the\n     * number of elements removed from the stream. 
*/\n    kvobj *kv = lookupKeyWriteOrReply(c, c->argv[1], shared.czero); \n    if (kv == NULL || checkType(c, kv, OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n\n    /* Perform the trimming. */\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(kv) : 0;\n    int64_t deleted = streamTrim(s, &parsed_args);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(c->argv[1]->ptr),kv,old_alloc,kvobjAllocSize(kv));\n    if (deleted) {\n        notifyKeyspaceEvent(NOTIFY_STREAM,\"xtrim\",c->argv[1],c->db->id);\n        if (parsed_args.approx_trim) {\n            /* In case our trimming was limited (by LIMIT or by ~) we must\n             * re-write the relevant trim argument to make sure there will be\n             * no inconsistencies in AOF loading or in the replica.\n             * It's enough to check only args->approx because there is no\n             * way LIMIT is given without the ~ option. */\n            streamRewriteApproxSpecifier(c,parsed_args.trim_strategy_arg_idx-1);\n            streamRewriteTrimArgument(c,s,parsed_args.trim_strategy,parsed_args.trim_strategy_arg_idx);\n        }\n\n        /* Propagate the write. */\n        keyModified(c, c->db,c->argv[1], kv, 1);\n        server.dirty += deleted;\n    }\n    addReplyLongLong(c,deleted);\n}\n\n/* Helper function for xinfoCommand.\n * Handles the variants of XINFO STREAM */\nvoid xinfoReplyWithStreamInfo(client *c, robj *key, kvobj *kv) {\n    stream *s = kv->ptr;\n    int full = 1;\n    long long count = 10; /* Default COUNT is 10 so we don't block the server */\n    robj **optv = c->argv + 3; /* Options start after XINFO STREAM <key> */\n    int optc = c->argc - 3;\n\n    /* Parse options. 
*/\n    if (optc == 0) {\n        full = 0;\n    } else {\n        /* Valid options are [FULL] or [FULL COUNT <count>] */\n        if (optc != 1 && optc != 3) {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n\n        /* First option must be \"FULL\" */\n        if (strcasecmp(optv[0]->ptr,\"full\")) {\n            addReplySubcommandSyntaxError(c);\n            return;\n        }\n\n        if (optc == 3) {\n            /* Second option must be \"COUNT\" */\n            if (strcasecmp(optv[1]->ptr,\"count\")) {\n                addReplySubcommandSyntaxError(c);\n                return;\n            }\n            if (getLongLongFromObjectOrReply(c,optv[2],&count,NULL) == C_ERR)\n                return;\n            if (count < 0) count = 10;\n        }\n    }\n\n    addReplyMapLen(c,full ? 15 : 16);\n    addReplyBulkCString(c,\"length\");\n    addReplyLongLong(c,s->length);\n    addReplyBulkCString(c,\"radix-tree-keys\");\n    addReplyLongLong(c,raxSize(s->rax));\n    addReplyBulkCString(c,\"radix-tree-nodes\");\n    addReplyLongLong(c,s->rax->numnodes);\n    addReplyBulkCString(c,\"last-generated-id\");\n    addReplyStreamID(c,&s->last_id);\n    addReplyBulkCString(c,\"max-deleted-entry-id\");\n    addReplyStreamID(c,&s->max_deleted_entry_id);\n    addReplyBulkCString(c,\"entries-added\");\n    addReplyLongLong(c,s->entries_added);\n    addReplyBulkCString(c,\"recorded-first-entry-id\");\n    addReplyStreamID(c,&s->first_id);\n    addReplyBulkCString(c,\"idmp-duration\");\n    addReplyLongLong(c,s->idmp_duration);\n    addReplyBulkCString(c,\"idmp-maxsize\");\n    addReplyLongLong(c,s->idmp_max_entries);\n    addReplyBulkCString(c,\"pids-tracked\");\n    addReplyLongLong(c, s->idmp_producers ? 
raxSize(s->idmp_producers) : 0);\n    addReplyBulkCString(c,\"iids-tracked\");\n    /* Count total IIDs across all producers */\n    size_t total_iids = 0;\n    if (s->idmp_producers) {\n        raxIterator ri;\n        raxStart(&ri, s->idmp_producers);\n        raxSeek(&ri, \"^\", NULL, 0);\n        while (raxNext(&ri)) {\n            idmpProducer *producer = ri.data;\n            total_iids += dictSize(producer->idmp_dict);\n        }\n        raxStop(&ri);\n    }\n    addReplyLongLong(c, total_iids);\n    addReplyBulkCString(c,\"iids-added\");\n    addReplyLongLong(c,s->iids_added);\n    addReplyBulkCString(c,\"iids-duplicates\");\n    addReplyLongLong(c,s->iids_duplicates);\n\n    size_t old_alloc = server.memory_tracking_enabled ? kvobjAllocSize(kv) : 0;\n    if (!full) {\n        /* XINFO STREAM <key> */\n\n        addReplyBulkCString(c,\"groups\");\n        addReplyLongLong(c,s->cgroups ? raxSize(s->cgroups) : 0);\n\n        /* To emit the first/last entry we use streamReplyWithRange(). 
*/\n        int emitted;\n        streamID start, end;\n        start.ms = start.seq = 0;\n        end.ms = end.seq = UINT64_MAX;\n        addReplyBulkCString(c,\"first-entry\");\n        emitted = streamReplyWithRange(c,s,&start,&end,1,0,-1,NULL,NULL,\n                                       STREAM_RWR_RAWENTRIES,NULL,NULL);\n        if (!emitted) addReplyNull(c);\n        addReplyBulkCString(c,\"last-entry\");\n        emitted = streamReplyWithRange(c,s,&start,&end,1,1,-1,NULL,NULL,\n                                       STREAM_RWR_RAWENTRIES,NULL,NULL);\n        if (!emitted) addReplyNull(c);\n    } else {\n        /* XINFO STREAM <key> FULL [COUNT <count>] */\n\n        /* Stream entries */\n        addReplyBulkCString(c,\"entries\");\n        streamReplyWithRange(c,s,NULL,NULL,count,0,-1,NULL,NULL,0,NULL,NULL);\n\n        /* Consumer groups */\n        addReplyBulkCString(c,\"groups\");\n        if (s->cgroups == NULL) {\n            addReplyArrayLen(c,0);\n        } else {\n            addReplyArrayLen(c,raxSize(s->cgroups));\n            raxIterator ri_cgroups;\n            raxStart(&ri_cgroups,s->cgroups);\n            raxSeek(&ri_cgroups,\"^\",NULL,0);\n            while(raxNext(&ri_cgroups)) {\n                streamCG *cg = ri_cgroups.data;\n                addReplyMapLen(c,8);\n\n                /* Name */\n                addReplyBulkCString(c,\"name\");\n                addReplyBulkCBuffer(c,ri_cgroups.key,ri_cgroups.key_len);\n\n                /* Last delivered ID */\n                addReplyBulkCString(c,\"last-delivered-id\");\n                addReplyStreamID(c,&cg->last_id);\n\n                /* Read counter of the last delivered ID */\n                addReplyBulkCString(c,\"entries-read\");\n                if (cg->entries_read != SCG_INVALID_ENTRIES_READ) {\n                    addReplyLongLong(c,cg->entries_read);\n                } else {\n                    addReplyNull(c);\n                }\n\n                /* Group lag */\n          
      addReplyBulkCString(c,\"lag\");\n                streamReplyWithCGLag(c,s,cg);\n\n                /* Group PEL count */\n                addReplyBulkCString(c,\"pel-count\");\n                addReplyLongLong(c,raxSize(cg->pel));\n\n                /* NACKed entries count (entries in the NACK zone) */\n                addReplyBulkCString(c,\"nacked-count\");\n                addReplyLongLong(c,pelListNackedCount(cg));\n\n                /* Group PEL */\n                addReplyBulkCString(c,\"pending\");\n                long long arraylen_cg_pel = 0;\n                void *arrayptr_cg_pel = addReplyDeferredLen(c);\n                raxIterator ri_cg_pel;\n                raxStart(&ri_cg_pel,cg->pel);\n                raxSeek(&ri_cg_pel,\"^\",NULL,0);\n                while(raxNext(&ri_cg_pel) && (!count || arraylen_cg_pel < count)) {\n                    streamNACK *nack = ri_cg_pel.data;\n                    addReplyArrayLen(c,4);\n\n                    /* Entry ID. */\n                    streamID id;\n                    streamDecodeID(ri_cg_pel.key,&id);\n                    addReplyStreamID(c,&id);\n\n                    /* Consumer name (empty string if NACKed / unowned). */\n                    if (nack->consumer) {\n                        addReplyBulkCBuffer(c,nack->consumer->name,\n                                            sdslen(nack->consumer->name));\n                    } else {\n                        addReplyBulkCBuffer(c,\"\",0);\n                    }\n\n                    /* Last delivery. */\n                    addReplyLongLong(c,nack->delivery_time);\n\n                    /* Number of deliveries. 
*/\n                    addReplyLongLong(c,nack->delivery_count);\n\n                    arraylen_cg_pel++;\n                }\n                setDeferredArrayLen(c,arrayptr_cg_pel,arraylen_cg_pel);\n                raxStop(&ri_cg_pel);\n\n                /* Consumers */\n                addReplyBulkCString(c,\"consumers\");\n                addReplyArrayLen(c,raxSize(cg->consumers));\n                raxIterator ri_consumers;\n                raxStart(&ri_consumers,cg->consumers);\n                raxSeek(&ri_consumers,\"^\",NULL,0);\n                while(raxNext(&ri_consumers)) {\n                    streamConsumer *consumer = ri_consumers.data;\n                    addReplyMapLen(c,5);\n\n                    /* Consumer name */\n                    addReplyBulkCString(c,\"name\");\n                    addReplyBulkCBuffer(c,consumer->name,sdslen(consumer->name));\n\n                    /* Seen-time */\n                    addReplyBulkCString(c,\"seen-time\");\n                    addReplyLongLong(c,consumer->seen_time);\n\n                    /* Active-time */\n                    addReplyBulkCString(c,\"active-time\");\n                    addReplyLongLong(c,consumer->active_time);\n\n                    /* Consumer PEL count */\n                    addReplyBulkCString(c,\"pel-count\");\n                    addReplyLongLong(c,raxSize(consumer->pel));\n\n                    /* Consumer PEL */\n                    addReplyBulkCString(c,\"pending\");\n                    long long arraylen_cpel = 0;\n                    void *arrayptr_cpel = addReplyDeferredLen(c);\n                    raxIterator ri_cpel;\n                    raxStart(&ri_cpel,consumer->pel);\n                    raxSeek(&ri_cpel,\"^\",NULL,0);\n                    while(raxNext(&ri_cpel) && (!count || arraylen_cpel < count)) {\n                        streamNACK *nack = ri_cpel.data;\n                        addReplyArrayLen(c,3);\n\n                        /* Entry ID. 
*/\n                        streamID id;\n                        streamDecodeID(ri_cpel.key,&id);\n                        addReplyStreamID(c,&id);\n\n                        /* Last delivery. */\n                        addReplyLongLong(c,nack->delivery_time);\n\n                        /* Number of deliveries. */\n                        addReplyLongLong(c,nack->delivery_count);\n\n                        arraylen_cpel++;\n                    }\n                    setDeferredArrayLen(c,arrayptr_cpel,arraylen_cpel);\n                    raxStop(&ri_cpel);\n                }\n                raxStop(&ri_consumers);\n            }\n            raxStop(&ri_cgroups);\n        }\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db,getKeySlot(key->ptr),kv,old_alloc,kvobjAllocSize(kv));\n}\n\n/* XINFO CONSUMERS <key> <group>\n * XINFO GROUPS <key>\n * XINFO STREAM <key> [FULL [COUNT <count>]]\n * XINFO HELP. */\nvoid xinfoCommand(client *c) {\n    stream *s = NULL;\n    char *opt;\n    robj *key;\n\n    /* HELP is special. Handle it ASAP. */\n    if (!strcasecmp(c->argv[1]->ptr,\"HELP\")) {\n        const char *help[] = {\n\"CONSUMERS <key> <groupname>\",\n\"    Show consumers of <groupname>.\",\n\"GROUPS <key>\",\n\"    Show the stream consumer groups.\",\n\"STREAM <key> [FULL [COUNT <count>]]\",\n\"    Show information about the stream.\",\nNULL\n        };\n        addReplyHelp(c, help);\n        return;\n    }\n\n    /* With the exception of HELP, which is handled before any other\n     * subcommand, all the rest are in the form \"<subcommand> <key>\". */\n    opt = c->argv[1]->ptr;\n    key = c->argv[2];\n\n    /* Lookup the key now, this is common for all the subcommands but HELP. */\n    kvobj *kv = lookupKeyReadOrReply(c, key, shared.nokeyerr);\n    if (kv == NULL || checkType(c, kv, OBJ_STREAM)) return;\n    s = kv->ptr;\n\n    /* Dispatch the different subcommands. 
*/\n    if (!strcasecmp(opt,\"CONSUMERS\") && c->argc == 4) {\n        /* XINFO CONSUMERS <key> <group>. */\n        streamCG *cg = streamLookupCG(s,c->argv[3]->ptr);\n        if (cg == NULL) {\n            addReplyErrorFormat(c, \"-NOGROUP No such consumer group '%s' \"\n                                   \"for key name '%s'\",\n                                   (char*)c->argv[3]->ptr, (char*)key->ptr);\n            return;\n        }\n\n        addReplyArrayLen(c,raxSize(cg->consumers));\n        raxIterator ri;\n        raxStart(&ri,cg->consumers);\n        raxSeek(&ri,\"^\",NULL,0);\n        mstime_t now = commandTimeSnapshot();\n        while(raxNext(&ri)) {\n            streamConsumer *consumer = ri.data;\n            mstime_t inactive = consumer->active_time != -1 ? now - consumer->active_time : consumer->active_time;\n            mstime_t idle = now - consumer->seen_time;\n            if (idle < 0) idle = 0;\n\n            addReplyMapLen(c,4);\n            addReplyBulkCString(c,\"name\");\n            addReplyBulkCBuffer(c,consumer->name,sdslen(consumer->name));\n            addReplyBulkCString(c,\"pending\");\n            addReplyLongLong(c,raxSize(consumer->pel));\n            addReplyBulkCString(c,\"idle\");\n            addReplyLongLong(c,idle);\n            addReplyBulkCString(c,\"inactive\");\n            addReplyLongLong(c,inactive);\n        }\n        raxStop(&ri);\n    } else if (!strcasecmp(opt,\"GROUPS\") && c->argc == 3) {\n        /* XINFO GROUPS <key>. 
*/\n        if (s->cgroups == NULL) {\n            addReplyArrayLen(c,0);\n            return;\n        }\n\n        addReplyArrayLen(c,raxSize(s->cgroups));\n        raxIterator ri;\n        raxStart(&ri,s->cgroups);\n        raxSeek(&ri,\"^\",NULL,0);\n        while(raxNext(&ri)) {\n            streamCG *cg = ri.data;\n            addReplyMapLen(c,6);\n            addReplyBulkCString(c,\"name\");\n            addReplyBulkCBuffer(c,ri.key,ri.key_len);\n            addReplyBulkCString(c,\"consumers\");\n            addReplyLongLong(c,raxSize(cg->consumers));\n            addReplyBulkCString(c,\"pending\");\n            addReplyLongLong(c,raxSize(cg->pel));\n            addReplyBulkCString(c,\"last-delivered-id\");\n            addReplyStreamID(c,&cg->last_id);\n            addReplyBulkCString(c,\"entries-read\");\n            if (cg->entries_read != SCG_INVALID_ENTRIES_READ) {\n                addReplyLongLong(c,cg->entries_read);\n            } else {\n                addReplyNull(c);\n            }\n            addReplyBulkCString(c,\"lag\");\n            streamReplyWithCGLag(c,s,cg);\n        }\n        raxStop(&ri);\n    } else if (!strcasecmp(opt,\"STREAM\")) {\n        /* XINFO STREAM <key> [FULL [COUNT <count>]]. 
*/\n        xinfoReplyWithStreamInfo(c,key,kv);\n    } else {\n        addReplySubcommandSyntaxError(c);\n    }\n}\n\n/* XCFGSET <key> [IDMP-DURATION <duration>] [IDMP-MAXSIZE <maxsize>] */\nvoid xcfgsetCommand(client *c) {\n    robj *key = c->argv[1];\n\n    /* Lookup the stream key */\n    kvobj *kv = lookupKeyWriteOrReply(c,key,shared.nokeyerr);\n    if (kv == NULL || checkType(c,kv,OBJ_STREAM)) return;\n    stream *s = kv->ptr;\n    size_t old_alloc = 0;\n    if (server.memory_tracking_enabled)\n        old_alloc = kvobjAllocSize(kv);\n\n    /* XCFGSET <key> [IDMP-DURATION <duration>] [IDMP-MAXSIZE <maxsize>] */\n    long long duration = -1;\n    long long maxsize = -1;\n\n    /* Parse parameters */\n    for (int i = 2; i < c->argc; i++) {\n        int moreargs = c->argc - i - 1;\n        char *param = c->argv[i]->ptr;\n        if (!strcasecmp(param,\"IDMP-DURATION\") && moreargs) {\n            if (duration != -1) {\n                addReplyError(c,\"IDMP-DURATION specified multiple times\");\n                return;\n            }\n            i++;\n            if (getLongLongFromObjectOrReply(c,c->argv[i],&duration,NULL) != C_OK)\n                return;\n            if (duration < CONFIG_STREAM_IDMP_MIN_DURATION ||\n                duration > CONFIG_STREAM_IDMP_MAX_DURATION) {\n                addReplyErrorFormat(c,\"IDMP-DURATION must be between %d and %d seconds\",\n                    CONFIG_STREAM_IDMP_MIN_DURATION,CONFIG_STREAM_IDMP_MAX_DURATION);\n                return;\n            }\n        } else if (!strcasecmp(param,\"IDMP-MAXSIZE\") && moreargs) {\n            if (maxsize != -1) {\n                addReplyError(c,\"IDMP-MAXSIZE specified multiple times\");\n                return;\n            }\n            i++;\n            if (getLongLongFromObjectOrReply(c,c->argv[i],&maxsize,NULL) != C_OK)\n                return;\n            if (maxsize < CONFIG_STREAM_IDMP_MIN_MAXSIZE ||\n                maxsize > CONFIG_STREAM_IDMP_MAX_MAXSIZE) {\n    
            addReplyErrorFormat(c,\"IDMP-MAXSIZE must be between %d and %d entries\",\n                    CONFIG_STREAM_IDMP_MIN_MAXSIZE,CONFIG_STREAM_IDMP_MAX_MAXSIZE);\n                return;\n            }\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* At least one parameter must be specified */\n    if (duration == -1 && maxsize == -1) {\n        addReplyError(c,\"At least one parameter must be specified\");\n        return;\n    }\n\n    /* Track if we made any changes */\n    int changed = 0;\n\n    /* Update the stream configuration. When we set IDMP-DURATION or IDMP-MAXSIZE to a\n     * different value, we clear all existing producer IDMP maps for the stream.\n     * If the value is the same, we don't clear, to allow multiple publishers\n     * to call this before starting to publish without clearing each time. */\n    if (duration != -1 && s->idmp_duration != (uint64_t)duration) {\n        s->idmp_duration = duration;\n        streamClearIdmpEntries(s);\n        changed = 1;\n    }\n    if (maxsize != -1 && s->idmp_max_entries != (uint64_t)maxsize) {\n        s->idmp_max_entries = maxsize;\n        streamClearIdmpEntries(s);\n        changed = 1;\n    }\n\n    /* Clean up and propagate if we changed something */\n    if (changed) {\n        dictDelete(c->db->stream_idmp_keys, key); /* Untrack cleared IDMP key */\n        keyModified(c,c->db,key,kv,0);\n        server.dirty++;\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db,getKeySlot(key->ptr),kv,old_alloc,kvobjAllocSize(kv));\n    }\n    addReply(c,shared.ok);\n}\n\n/* Validate the integrity of a stream listpack's entries structure: both that\n * it is a valid listpack, and that the structure of the entries matches a\n * valid stream. Return 1 if valid, 0 if not. 
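\n *\n * As a sketch (field names here are descriptive, not taken from the code),\n * the parsing order below implies this master-entry layout at the head of\n * each listpack node:\n *\n *   count | deleted | num-master-fields | field_1 .. field_N | 0\n *\n * followed by the entries, each of the form:\n *\n *   flags | ms-diff | seq-diff | [num-fields | field_1 .. field_K |]\n *   value_1 .. value_K | lp-count\n *\n * where the bracketed part is present only when the SAMEFIELDS flag is not\n * set. 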
*/\nint streamValidateListpackIntegrity(unsigned char *lp, size_t size, int deep) {\n    int valid_record;\n    unsigned char *p, *next;\n\n    /* Since we don't want to run validation of all records twice, we'll\n     * run the listpack validation of just the header and do the rest here. */\n    if (!lpValidateIntegrity(lp, size, 0, NULL, NULL))\n        return 0;\n\n    /* In non-deep mode we just validated the listpack header (encoded size) */\n    if (!deep) return 1;\n\n    next = p = lpValidateFirst(lp);\n    if (!lpValidateNext(lp, &next, size)) return 0;\n    if (!p) return 0;\n\n    /* entry count */\n    int64_t entry_count = lpGetIntegerIfValid(p, &valid_record);\n    if (!valid_record) return 0;\n    p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n    /* deleted */\n    int64_t deleted_count = lpGetIntegerIfValid(p, &valid_record);\n    if (!valid_record) return 0;\n    p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n    /* num-of-fields */\n    int64_t master_fields = lpGetIntegerIfValid(p, &valid_record);\n    if (!valid_record) return 0;\n    p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n    /* the field names */\n    for (int64_t j = 0; j < master_fields; j++) {\n        p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n    }\n\n    /* the zero master entry terminator. 
*/\n    int64_t zero = lpGetIntegerIfValid(p, &valid_record);\n    if (!valid_record || zero != 0) return 0;\n    p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n    entry_count += deleted_count;\n    while (entry_count--) {\n        if (!p) return 0;\n        int64_t fields = master_fields, extra_fields = 3;\n        int64_t flags = lpGetIntegerIfValid(p, &valid_record);\n        if (!valid_record) return 0;\n        p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n        /* entry id */\n        lpGetIntegerIfValid(p, &valid_record);\n        if (!valid_record) return 0;\n        p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n        lpGetIntegerIfValid(p, &valid_record);\n        if (!valid_record) return 0;\n        p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n        if (!(flags & STREAM_ITEM_FLAG_SAMEFIELDS)) {\n            /* num-of-fields */\n            fields = lpGetIntegerIfValid(p, &valid_record);\n            if (!valid_record) return 0;\n            p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n\n            /* the field names */\n            for (int64_t j = 0; j < fields; j++) {\n                p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n            }\n\n            extra_fields += fields + 1;\n        }\n\n        /* the values */\n        for (int64_t j = 0; j < fields; j++) {\n            p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n        }\n\n        /* lp-count */\n        int64_t lp_count = lpGetIntegerIfValid(p, &valid_record);\n        if (!valid_record) return 0;\n        if (lp_count != fields + extra_fields) return 0;\n        p = next; if (!lpValidateNext(lp, &next, size)) return 0;\n    }\n\n    if (next)\n        return 0;\n\n    return 1;\n}\n\n/* -----------------------------------------------------------------------\n * PEL Time-Ordered List Helpers\n * ----------------------------------------------------------------------- */\n\n/* The 
following functions manage a doubly-linked list of pending entries (NACKs)\n * ordered by delivery_time. Almost all NACK updates set delivery_time to current\n * time, making this an append-to-tail workload. The doubly-linked list provides\n * O(1) unlink from any position, O(1) append to tail, O(1) access to oldest\n * entries for CLAIM operations. */\n\n/* Insert a NACK after 'after' in the time-ordered list.\n * If after is NULL, insert at the head. */\nstatic void pelListInsertAfter(streamCG *cg, streamNACK *after, streamNACK *nack) {\n    if (after) {\n        nack->pel_prev = after;\n        nack->pel_next = after->pel_next;\n        if (after->pel_next)\n            after->pel_next->pel_prev = nack;\n        else\n            cg->pel_time_tail = nack;\n        after->pel_next = nack;\n    } else {\n        nack->pel_prev = NULL;\n        nack->pel_next = cg->pel_time_head;\n        if (cg->pel_time_head)\n            cg->pel_time_head->pel_prev = nack;\n        else\n            cg->pel_time_tail = nack;\n        cg->pel_time_head = nack;\n    }\n}\n\n/* Insert a NACK at the tail of the PEL time-ordered list. This is used when\n * delivery_time is set to current time, which is the common case. */\nstatic void pelListInsertAtTail(streamCG *cg, streamNACK *nack) {\n    pelListInsertAfter(cg, cg->pel_time_tail, nack);\n}\n\n/* Unlink a NACK from the PEL time-ordered list. */\nvoid pelListUnlink(streamCG *cg, streamNACK *nack) {\n    if (nack == cg->pel_nack_tail) {\n        cg->pel_nack_tail = nack->pel_prev;\n    }\n    if (nack->pel_prev) {\n        nack->pel_prev->pel_next = nack->pel_next;\n    } else {\n        /* Removing head. */\n        cg->pel_time_head = nack->pel_next;\n    }\n    if (nack->pel_next) {\n        nack->pel_next->pel_prev = nack->pel_prev;\n    } else {\n        /* Removing tail. */\n        cg->pel_time_tail = nack->pel_prev;\n    }\n    nack->pel_prev = nack->pel_next = NULL;\n}\n\n/* Insert a NACK in sorted order by delivery_time. 
Used for edge cases where\n * delivery_time is set to a past time, and also by RDB loading where entries\n * may not be time-ordered. We scan backwards from the tail since most times\n * are recent, so the common case is still fast.\n *\n * The NACK zone (pel_time_head..pel_nack_tail) is skipped: new entries are\n * never placed before pel_nack_tail, so the NACK zone stays intact. */\nvoid pelListInsertSorted(streamCG *cg, streamNACK *nack) {\n    /* Empty list or append to tail (common case). */\n    if (cg->pel_time_head == NULL ||\n        nack->delivery_time >= cg->pel_time_tail->delivery_time) {\n        pelListInsertAtTail(cg, nack);\n        return;\n    }\n\n    /* Scan backwards from tail, stopping at the NACK-zone boundary\n     * (pel_nack_tail) so we never insert inside the zone. If boundary\n     * is NULL (no NACK zone), the scan may reach the list head. */\n    streamNACK *boundary = cg->pel_nack_tail;\n    streamNACK *curr = cg->pel_time_tail;\n    while (curr != boundary && curr->delivery_time > nack->delivery_time) {\n        curr = curr->pel_prev;\n    }\n\n    pelListInsertAfter(cg, curr, nack);\n}\n\n/* Insert a NACKed entry at the end of the NACK zone (head region of the PEL\n * time-ordered list). The NACK zone occupies positions from pel_time_head to\n * pel_nack_tail. This is O(1) and maintains FIFO order among NACKed entries. */\nvoid pelListInsertNacked(streamCG *cg, streamNACK *nack) {\n    nack->delivery_time = 0;\n    pelListInsertAfter(cg, cg->pel_nack_tail, nack);\n    cg->pel_nack_tail = nack;\n}\n\n/* Return the number of entries in the NACK zone (pel_time_head..pel_nack_tail).\n * Returns 0 when no NACKed entries exist. 
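\n *\n * Layout sketch (delivery times illustrative):\n *\n *   pel_time_head -> [t=0] -> [t=0](= pel_nack_tail) -> [t=100] ->\n *   [t=250](= pel_time_tail)\n *\n * NACKed entries carry delivery_time == 0 and occupy the head region;\n * time-ordered entries follow them. 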
*/\nuint64_t pelListNackedCount(streamCG *cg) {\n    uint64_t count = 0;\n    if (cg->pel_nack_tail) {\n        streamNACK *nack = cg->pel_time_head;\n        while (nack) {\n            count++;\n            if (nack == cg->pel_nack_tail) break;\n            nack = nack->pel_next;\n        }\n    }\n    return count;\n}\n\n/* Update a NACK's delivery_time and reposition it in the time-ordered list. */\nstatic void pelListUpdate(streamCG *cg, streamNACK *nack, mstime_t new_delivery_time) {\n    pelListUnlink(cg, nack);\n    nack->delivery_time = new_delivery_time;\n    pelListInsertSorted(cg, nack);\n}\n\n\n/* Register stream keys for monitoring of expired pending entries to enable\n * reactive blocking behavior for XREADGROUP commands with CLAIM. When a client\n * blocks waiting for either new messages or expired pending entries, this\n * function records the earliest timestamp when pending entries will expire\n * (satisfy the min-idle-time requirement).\n *\n * For multi-client coordination, when multiple clients are blocked on the same\n * stream with different min-idle-time values, the dictionary stores the minimum\n * (earliest) expire_time across all clients to ensure the earliest possible\n * wakeup when any pending entry expires and becomes available for claiming.\n *\n * 'c' is the client that is blocking on the stream(s).\n * 'keys' is an array of stream key objects to monitor.\n * 'numkeys' is the number of keys in the array.\n * 'expire_time' is the absolute timestamp (in milliseconds) when the next\n *   pending entry will expire for this client, calculated as\n *   next_delivery_time + min_idle_time, where next_delivery_time is the\n *   delivery timestamp of the oldest pending entry in the stream.\n *\n * For new entries, the key is added with the given expire_time and the\n * reference count is incremented. 
For existing entries, the expire_time\n * is updated to the minimum value if the new expire_time is earlier,\n * ensuring the earliest wakeup time is preserved for multi-client scenarios.\n * Note that the reference count is only incremented for newly added keys,\n * not for updates to existing entries. */\nvoid trackStreamClaimTimeouts(client *c, robj **keys, int numkeys, uint64_t expire_time) {\n    dictEntry *db_watch_entry, *db_watch_existing_entry;\n    uint64_t old_expire_time;\n    int j;\n\n    for (j = 0; j < numkeys; j++) {\n        db_watch_entry = dictAddRaw(c->db->stream_claim_pending_keys, keys[j], &db_watch_existing_entry);\n        if (db_watch_entry != NULL) {\n            dictSetUnsignedIntegerVal(db_watch_entry, expire_time);\n            incrRefCount(keys[j]);\n        } else {\n            old_expire_time = dictGetUnsignedIntegerVal(db_watch_existing_entry);\n            if (expire_time < old_expire_time) {\n                dictSetUnsignedIntegerVal(db_watch_existing_entry, expire_time);\n            }\n        }\n    }\n}\n\n/* Check and wake clients waiting for expired pending entries. This function\n * is invoked regularly from blockedBeforeSleep() to monitor streams being\n * watched for expired pending entries and wake up blocked clients when\n * entries expire and become available for claiming.\n *\n * The function processes up to CRON_DBS_PER_CALL databases per call in a\n * round-robin fashion, cycling through all databases over multiple invocations.\n * For each database, it iterates through the stream_claim_pending_keys dictionary.\n * For each watched stream, it compares the registered expire_time against the\n * current server time. When expire_time is less than the current server time,\n * the pending entry has expired and the stream is signaled as ready via\n * signalKeyAsReady(), which wakes all blocked clients waiting on that stream.\n * The entry is then removed from stream_claim_pending_keys. 
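\n *\n * Worked example (illustrative numbers): if the oldest pending entry was\n * last delivered at mstime 1000 and a client blocks with min-idle-time\n * 5000, the registered expire_time is 1000 + 5000 = 6000; once\n * server.mstime exceeds 6000, the key is signaled ready and blocked\n * clients wake up to claim the entry. 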
*/\nvoid handleClaimableStreamEntries(void) {\n    static unsigned int current_db = 0;\n    int dbs_per_call = CRON_DBS_PER_CALL;\n    int j;\n\n    if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;\n\n    for (j = 0; j < dbs_per_call; j++) {\n        redisDb *db = &server.db[current_db % server.dbnum];\n        current_db++;\n\n        if (dictIsEmpty(db->stream_claim_pending_keys))\n            continue;\n\n        dictEntry *de;\n        dictIterator di;\n        dictInitSafeIterator(&di, db->stream_claim_pending_keys);\n        while ((de = dictNext(&di)) != NULL) {\n            robj *key = dictGetKey(de);\n            uint64_t expire_time = dictGetUnsignedIntegerVal(de);\n            kvobj *kv = dbFind(db, key->ptr);\n\n            if (!kv || kv->type != OBJ_STREAM) {\n                dictDelete(db->stream_claim_pending_keys, key);\n                continue;\n            }\n\n            if (expire_time < (uint64_t)server.mstime) {\n                signalKeyAsReady(db, key, kv->type);\n                dictDelete(db->stream_claim_pending_keys, key);\n            }\n        }\n        dictResetIterator(&di);\n    }\n}\n\n/* -----------------------------------------------------------------------\n * IDMP (Idempotent Message Producer) Functions\n * ----------------------------------------------------------------------- */\n\n/* Hash function for idmpEntry - hashes the embedded iid buffer */\nstatic uint64_t idmpDictHashFunction(const void *key) {\n    const idmpEntry *entry = (const idmpEntry *)key;\n    return dictGenHashFunction((const char *)entry->iid, entry->iid_len);\n}\n\n/* Key comparison function for idmpEntry - compares embedded iid buffers */\nstatic int idmpDictKeyCompare(dictCmpCache *cache, const void *key1, const void *key2) {\n    UNUSED(cache);\n    const idmpEntry *e1 = (const idmpEntry *)key1;\n    const idmpEntry *e2 = (const idmpEntry *)key2;\n    if (e1->iid_len != e2->iid_len) return 0;\n    return memcmp((const char *)e1->iid, 
(const char *)e2->iid, e1->iid_len) == 0;\n}\n\n/* Dictionary type for IDMP entries - keys are idmpEntry pointers, values are NULL */\ndictType idmpDictType = {\n    idmpDictHashFunction,       /* hash function */\n    NULL,                       /* key dup */\n    NULL,                       /* val dup */\n    idmpDictKeyCompare,         /* key compare */\n    NULL,                       /* key destructor - handled manually with linked list */\n    NULL,                       /* val destructor */\n    NULL,                       /* resize allowed */\n    NULL,                       /* rehashing started */\n    NULL,                       /* rehashing completed */\n    NULL,                       /* bucket changed */\n    NULL,                       /* dict metadata bytes */\n    NULL,                       /* userdata */\n    .no_value = 0,              /* Use regular dict entries with NULL values to support defrag */\n    .keys_are_odd = 0,          /* keys are not odd */\n    .force_full_rehash = 0,     /* no force full rehash */\n    NULL,                       /* key from stored key */\n    NULL,                       /* on dict release */\n};\n\n/* Create a new idmpEntry with the given IID string.\n * The entry and IID are allocated together using flexible array member.\n * alloc_size must not be NULL and will be updated with the allocation size. */\nidmpEntry *idmpEntryCreate(const char *iid, size_t iid_len, size_t *alloc_size) {\n    size_t usable;\n    idmpEntry *entry = zmalloc_usable(sizeof(idmpEntry) + iid_len, &usable);\n    \n    entry->next = NULL;\n    entry->iid_len = iid_len;\n    memcpy(entry->iid, iid, iid_len);\n    \n    *alloc_size += usable;\n    \n    return entry;\n}\n\n/* Free an idmpEntry (iid is embedded via flexible array member).\n * alloc_size must not be NULL and will be updated with the freed size. 
*/\nvoid idmpEntryFree(idmpEntry *entry, size_t *alloc_size) {\n    if (entry == NULL) return;\n    \n    size_t usable;\n    zfree_usable(entry, &usable);\n    *alloc_size -= usable;\n}\n\n/* Create a new idmpProducer with an empty dict and linked list.\n * alloc_size must not be NULL and will be updated with the allocation size. */\nidmpProducer *idmpProducerCreate(size_t *alloc_size) {\n    size_t usable;\n    idmpProducer *producer = zmalloc_usable(sizeof(idmpProducer), &usable);\n    producer->idmp_dict = dictCreate(&idmpDictType);\n    producer->idmp_head = NULL;\n    producer->idmp_tail = NULL;\n\n    *alloc_size += usable;\n\n    return producer;\n}\n\n/* Free an idmpProducer including its dict and all linked list entries.\n * alloc_size must not be NULL and will be updated with the freed size. */\nvoid idmpProducerFree(idmpProducer *producer, size_t *alloc_size) {\n    if (producer == NULL) return;\n\n    /* Release the dict */\n    if (producer->idmp_dict)\n        dictRelease(producer->idmp_dict);\n\n    /* Free IDMP linked list entries */\n    idmpEntry *entry = producer->idmp_head;\n    while (entry) {\n        idmpEntry *next = entry->next;\n        idmpEntryFree(entry, alloc_size);\n        entry = next;\n    }\n\n    size_t usable;\n    zfree_usable(producer, &usable);\n    *alloc_size -= usable;\n}\n\n/* Check if an IID already exists in the producer's idmp_dict.\n * If found, sends the existing stream ID as a reply and returns 1.\n * Returns 0 if the IID was not found.\n * \n * The 'entry' parameter should be an idmpEntry with the IID already set\n * (iid and iid_len fields must be initialized). 
*/\nstatic int idmpLookupAndReply(stream *s, idmpProducer *producer, idmpEntry *entry, client *c) {\n    dictEntry *de = dictFind(producer->idmp_dict, entry);\n    if (de != NULL) {\n        /* IID already exists, return the existing stream ID */\n        idmpEntry *existing = (idmpEntry *)dictGetKey(de);\n        addReplyStreamID(c, &existing->id);\n        s->iids_duplicates++;\n        return 1;\n    }\n    return 0;\n}\n\n/* Lookup IID in the producer's dict.\n * Return: 0 = not found, 1 = found same ID, -1 = found different ID. */\nstatic int idmpLookup(idmpProducer *producer, idmpEntry *entry, streamID *id) {\n    dictEntry *de = dictFind(producer->idmp_dict, entry);\n    if (de == NULL)\n        return 0;\n    idmpEntry *existing = (idmpEntry *)dictGetKey(de);\n    return streamCompareID(&existing->id, id) == 0 ? 1 : -1;\n}\n\n/* Insert an idmpEntry into the producer's dict and linked list with the given stream ID. */\nstatic void idmpInsertEntry(stream *s, idmpProducer *producer, idmpEntry *entry, const streamID *id) {\n    /* Set the stream ID and initialize next pointer */\n    entry->next = NULL;\n    entry->id = *id;\n\n    /* Insert into dict (should always succeed since we already checked with lookup) */\n    serverAssert(dictAdd(producer->idmp_dict, entry, NULL) == DICT_OK);\n    \n    /* Add to linked list tail */\n    if (producer->idmp_tail == NULL) {\n        producer->idmp_head = producer->idmp_tail = entry;\n    } else {\n        producer->idmp_tail->next = entry;\n        producer->idmp_tail = entry;\n    }\n    \n    s->iids_added++;\n    \n    /* Remove oldest entry if exceeding max entries */\n    idmpEvictOldestEntry(s, producer);\n}\n\n/* Get or create an idmpProducer for the given producer ID.\n * Returns the producer, or NULL on allocation failure. 
*/\nstatic idmpProducer *idmpGetOrCreateProducer(stream *s, const char *pid, size_t pid_len) {\n    /* Create the producers rax tree if it doesn't exist */\n    if (s->idmp_producers == NULL) {\n        s->idmp_producers = raxNewWithMetadata(0, &s->alloc_size);\n    }\n\n    /* Look up the producer */\n    idmpProducer *producer = NULL;\n    int found = raxFind(s->idmp_producers, (unsigned char *)pid, pid_len, (void **)&producer);\n    if (!found) {\n        /* Create a new producer */\n        producer = idmpProducerCreate(&s->alloc_size);\n        /* Insert into the rax tree - must succeed since we checked it doesn't exist */\n        serverAssert(raxInsert(s->idmp_producers, (unsigned char *)pid, pid_len, producer, NULL));\n    }\n\n    return producer;\n}\n\n/* Register a stream key for IDMP entry tracking.\n * This registers a stream key in the database's stream_idmp_keys dictionary,\n * allowing the cron job handleExpiredIdmpEntries() to periodically check\n * and clean up expired idempotency entries from the stream's idmp_dict.\n *\n * 'c' is the client that is performing the XADD operation with IDMP.\n * 'key' is the stream key object to track.\n *\n * If the key is not already tracked, it is added to stream_idmp_keys and its\n * reference count is incremented. If the key is already being tracked (added\n * by a previous XADD operation), this function does nothing, as the stream\n * is already registered for periodic cleanup. 
*/\nstatic void trackStreamIdmpEntries(client *c, robj *key) {\n    if (dictAddRaw(c->db->stream_idmp_keys, key, NULL)) {\n        incrRefCount(key);\n    }\n}\n\n/* To be used when a stream key was loaded into ram, re-register it in stream_idmp_keys if needed */\nvoid streamKeyLoaded(redisDb *db, robj *key, robj *val) {\n    stream *s = val->ptr;\n    if (s->idmp_producers != NULL) {\n        robj *tracked_key = key;\n        if (key->refcount == OBJ_STATIC_REFCOUNT)\n            tracked_key = createStringObject(key->ptr, sdslen(key->ptr));\n        if (dictAddRaw(db->stream_idmp_keys, tracked_key, NULL)) {\n            incrRefCount(tracked_key);\n        }\n        if (tracked_key != key)\n            decrRefCount(tracked_key);\n    }\n}\n\n/* To be used when a stream key was removed from ram, un-register from stream_idmp_keys if needed */\nvoid streamKeyRemoved(redisDb *db, robj *key, robj *val) {\n    UNUSED(val);\n    dictDelete(db->stream_idmp_keys, key);\n}\n\n/* Clean up expired idempotency entries from tracked streams. This function\n * is invoked regularly from serverCron() to remove expired entries\n * from the idmp_dict of streams that have idempotency tracking enabled,\n * keeping memory usage under control.\n *\n * The function processes up to CRON_DBS_PER_CALL databases per call in a\n * round-robin fashion, cycling through all databases over multiple invocations.\n * For each database, it iterates through the stream_idmp_keys dictionary.\n * For each tracked stream, it compares the timestamp of entries in the stream's\n * idmp linked list against the expiration threshold (current time - idmp_duration).\n * Entries with timestamps older than the threshold are removed from the head\n * of the linked list. When all entries have been removed and the list becomes empty,\n * the stream key is removed from stream_idmp_keys to stop tracking it. 
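\n *\n * Worked example (illustrative numbers): with idmp_duration = 3600 seconds\n * and server.mstime = 10,000,000 ms, the expiration threshold is\n * 10,000,000 - 3600 * 1000 = 6,400,000, so entries whose id.ms is at or\n * below 6,400,000 are removed. 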
*/\nvoid handleExpiredIdmpEntries(void) {\n    static unsigned int current_db = 0;\n    int dbs_per_call = CRON_DBS_PER_CALL;\n    int j;\n\n    if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;\n\n    for (j = 0; j < dbs_per_call; j++) {\n        redisDb *db = &server.db[current_db % server.dbnum];\n        current_db++;\n\n        if (dictIsEmpty(db->stream_idmp_keys))\n            continue;\n\n        dictEntry *de;\n        dictIterator di;\n        dictInitSafeIterator(&di, db->stream_idmp_keys);\n        while ((de = dictNext(&di)) != NULL) {\n            robj *key = dictGetKey(de);\n            kvobj *kv = dbFind(db, key->ptr);\n\n            serverAssert(kv && kv->type == OBJ_STREAM);\n\n            stream *s = kv->ptr;\n            uint64_t expire_time = server.mstime - (s->idmp_duration * 1000);\n            \n            /* Skip if no producers */\n            if (s->idmp_producers == NULL) {\n                dictDelete(db->stream_idmp_keys, key);\n                continue;\n            }\n\n            /* Iterate through all producers and remove expired entries */\n            int modified = 0;\n            raxIterator ri;\n            raxStart(&ri, s->idmp_producers);\n            raxSeek(&ri, \"^\", NULL, 0);\n            while (raxNext(&ri)) {\n                idmpProducer *producer = ri.data;\n                \n                /* Remove expired entries from the head of this producer's linked list */\n                while (producer->idmp_head != NULL) {\n                    idmpEntry *entry = producer->idmp_head;\n                    if (entry->id.ms <= expire_time) {\n                        /* Remove from dict */\n                        dictDelete(producer->idmp_dict, entry);\n                        /* Remove from linked list head */\n                        producer->idmp_head = entry->next;\n                        if (producer->idmp_head == NULL) {\n                            producer->idmp_tail = NULL;\n                        
}\n                        /* Free the entry */\n                        idmpEntryFree(entry, &s->alloc_size);\n                        modified = 1;\n                    } else {\n                        break;\n                    }\n                }\n\n                /* If this producer has no entries left, remove it from the rax tree */\n                if (producer->idmp_head == NULL) {\n                    raxRemove(s->idmp_producers, ri.key, ri.key_len, NULL);\n                    idmpProducerFree(producer, &s->alloc_size);\n                    raxSeek(&ri, \">=\", ri.key, ri.key_len);\n                    modified = 1;\n                }\n            }\n            raxStop(&ri);\n\n            if (modified)\n                keyModified(NULL, db, key, kv, 0);\n\n            /* If no producers remain, free the entire rax tree */\n            if (raxSize(s->idmp_producers) == 0) {\n                raxFree(s->idmp_producers);\n                s->idmp_producers = NULL;\n                dictDelete(db->stream_idmp_keys, key);\n                continue;\n            }\n        }\n        dictResetIterator(&di);\n    }\n}\n\n/* 64-bit left rotation helper for hash combination */\nstatic inline uint64_t rotl64(uint64_t x, int r) {\n    return (x << r) | (x >> (64 - r));\n}\n\n/* Hash field-value pairs using XXH3_128bits for AUTOIDMP. The function takes\n * an array of robj pointers in 'argv' representing field-value pairs (field1,\n * value1, field2, value2, ...) and 'numfields' indicating the number of pairs\n * (not the array length). Each field-value pair is hashed using streaming\n * XXH3_128bits with the field length included as a separator to prevent hash\n * collisions from ambiguous concatenations. The resulting pair hashes are \n * combined using an order-independent Sum + XOR approach with rotation to \n * produce a final 128-bit hash stored in 'out_hash'. Returns C_OK on success,\n * C_ERR on error. 
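\n * For example, hashing the pairs (f1,v1),(f2,v2) yields the same result\n * as (f2,v2),(f1,v1): the per-pair hashes are combined with sum and XOR,\n * both of which are commutative.\n * 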
XXH128 is a non-cryptographic hash function: fast and \n * well-distributed, but does NOT prevent intentional collision attacks. */\nstatic int createIdempotencyHash(robj **argv, int64_t numfields, XXH128_hash_t *out_hash) {\n    uint64_t sum_lo = 0, sum_hi = 0;\n    uint64_t xor_lo = 0, xor_hi = 0;\n    XXH3_state_t* state = XXH3_createState();\n    if (state == NULL) return C_ERR;\n    \n    char llbuf[LONG_STR_SIZE];\n    XXH_errorcode err;\n    \n    /* Process each field-value pair */\n    for (int64_t i = 0; i < numfields; i++) {\n        robj *field = argv[i * 2];\n        robj *value = argv[i * 2 + 1];\n        \n        /* Initialize hash state for this pair */\n        err = XXH3_128bits_reset(state);\n        if (err != XXH_OK) goto cleanup;\n        \n        /* Hash the field */\n        long field_len;\n        unsigned char *field_data = getObjectReadOnlyString(field, &field_len, llbuf);\n        err = XXH3_128bits_update(state, field_data, field_len);\n        if (err != XXH_OK) goto cleanup;\n        \n        /* Hash the field length as separator to prevent collisions */\n        err = XXH3_128bits_update(state, &field_len, sizeof(field_len));\n        if (err != XXH_OK) goto cleanup;\n        \n        /* Hash the value */\n        long value_len;\n        unsigned char *value_data = getObjectReadOnlyString(value, &value_len, llbuf);\n        err = XXH3_128bits_update(state, value_data, value_len);\n        if (err != XXH_OK) goto cleanup;\n        \n        /* Get the hash for this pair */\n        XXH128_hash_t pair_hash = XXH3_128bits_digest(state);\n        \n        /* Accumulate with both sum and xor for order-independent combination */\n        sum_lo += pair_hash.low64;\n        sum_hi += pair_hash.high64;\n        xor_lo ^= pair_hash.low64;\n        xor_hi ^= pair_hash.high64;\n    }\n    \n    /* Combine sum and xor with rotation for better distribution */\n    XXH128_hash_t hash_result;\n    hash_result.low64 = sum_lo ^ rotl64(xor_hi, 
1);\n    hash_result.high64 = sum_hi ^ rotl64(xor_lo, 1);\n    \n    XXH3_freeState(state);\n    *out_hash = hash_result;\n    return C_OK;\n\ncleanup:\n    XXH3_freeState(state);\n    return C_ERR;\n}\n\n/* Clear all IDMP entries from a stream - free all producers and their entries */\nstatic void streamClearIdmpEntries(stream *s) {\n    if (s->idmp_producers == NULL) return;\n\n    /* Iterate through all producers and free them */\n    raxIterator ri;\n    raxStart(&ri, s->idmp_producers);\n    raxSeek(&ri, \"^\", NULL, 0);\n    while (raxNext(&ri)) {\n        idmpProducerFree(ri.data, &s->alloc_size);\n    }\n    raxStop(&ri);\n\n    /* Free the producers rax tree and reset */\n    raxFree(s->idmp_producers);\n    s->idmp_producers = NULL;\n}\n\n/* Evict the oldest entry from the IDMP producer when max entries is exceeded.\n * This function checks if the number of entries exceeds the stream's max limit,\n * and if so, removes the oldest entry from the producer's linked list and\n * dictionary, maintaining the integrity of both data structures. If the list\n * becomes empty after removal, both head and tail pointers are set to NULL. */\nstatic void idmpEvictOldestEntry(stream *s, idmpProducer *producer) {\n    if (dictSize(producer->idmp_dict) <= s->idmp_max_entries) {\n        return;\n    }\n    \n    idmpEntry *oldest = producer->idmp_head;\n    producer->idmp_head = oldest->next;\n    if (producer->idmp_head == NULL) {\n        producer->idmp_tail = NULL;\n    }\n    dictDelete(producer->idmp_dict, oldest);\n    idmpEntryFree(oldest, &s->alloc_size);\n}\n"
  },
  {
    "path": "src/t_string.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"xxhash.h\"\n#include <math.h> /* isnan(), isinf() */\n\n/* XXH3 64-bit hash produces 16 hex characters when formatted */\n#define DIGEST_HEX_LENGTH 16\n\n/* Forward declarations */\nint getGenericCommand(client *c);\n\n/*-----------------------------------------------------------------------------\n * String Commands\n *----------------------------------------------------------------------------*/\n\nstatic int checkStringLength(client *c, long long size, long long append) {\n    if (mustObeyClient(c))\n        return C_OK;\n    /* 'uint64_t' cast is there just to prevent undefined behavior on overflow */\n    long long total = (uint64_t)size + append;\n    /* Test configured max-bulk-len representing a limit of the biggest string object,\n     * and also test for overflow. */\n    if (total > server.proto_max_bulk_len || total < size || total < append) {\n        addReplyError(c,\"string exceeds maximum allowed size (proto-max-bulk-len)\");\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* The setGenericCommand() function implements the SET operation with different\n * options and variants. This function is called in order to implement the\n * following commands: SET, SETEX, PSETEX, SETNX, GETSET.\n *\n * 'flags' changes the behavior of the command (NX, XX, GET, IFEQ, IFNE, IFDEQ\n * or IFDNE - see below).\n *\n * 'expire' represents an expire to set in form of a Redis object as passed\n * by the user. 
It is interpreted according to the specified 'unit'.\n *\n * 'match_value' is a value to check against if any of IFEQ/IFNE/IFDEQ/IFDNE is\n * present.\n *\n * 'ok_reply' and 'abort_reply' are what the function will reply to the client\n * if the operation is performed, or when it is not because of NX or\n * XX flags.\n *\n * If ok_reply is NULL \"+OK\" is used.\n * If abort_reply is NULL, \"$-1\" is used. */\n\n#define OBJ_NO_FLAGS 0\n#define OBJ_SET_NX (1<<0)          /* Set if key does not exist. */\n#define OBJ_SET_XX (1<<1)          /* Set if key exists. */\n#define OBJ_EX (1<<2)              /* Set if time in seconds is given */\n#define OBJ_PX (1<<3)              /* Set if time in ms is given */\n#define OBJ_KEEPTTL (1<<4)         /* Set and keep the ttl */\n#define OBJ_SET_GET (1<<5)         /* Set if we want to get the key before set */\n#define OBJ_EXAT (1<<6)            /* Set if timestamp in seconds is given */\n#define OBJ_PXAT (1<<7)            /* Set if timestamp in ms is given */\n#define OBJ_PERSIST (1<<8)         /* Set if we need to remove the ttl */\n#define OBJ_SET_IFEQ (1<<9)        /* Set if value equals match value */\n#define OBJ_SET_IFNE (1<<10)       /* Set if value does not equal match value */\n#define OBJ_SET_IFDEQ (1<<11)      /* Set if current digest equals match digest */\n#define OBJ_SET_IFDNE (1<<12)      /* Set if current digest does not equal match digest */\n\n/* Forward declaration */\nstatic int getExpireMillisecondsOrReply(client *c, robj *expire, int flags, int unit, long long *milliseconds);\n\n/* Generic SET command family (SET, SETEX, PSETEX, SETNX)\n *\n * Arguments:\n *   valref: A pointer to the robj to be set. This argument may be updated by the function.\n *           The object is expected to have a refcount of 1, allowing its ownership to be\n *           transferred directly to the database to avoid making a copy. 
If needed, the\n *           function will replace *valref with a new allocation and increment its refcount\n *           so that both the database and the caller maintain valid references.\n */\nvoid setGenericCommand(client *c, int flags, robj *key, robj **valref, robj *expire,\n                       int unit, robj *match_value, robj *ok_reply, robj *abort_reply)\n{\n    long long milliseconds = 0; /* initialized to avoid any harmless warning */\n    int found = 0;\n    int setkey_flags = 0;\n\n    if (expire && getExpireMillisecondsOrReply(c, expire, flags, unit, &milliseconds) != C_OK) {\n        return;\n    }\n\n    if (flags & OBJ_SET_GET) {\n        if (getGenericCommand(c) == C_ERR) return;\n    }\n\n    dictEntryLink link = NULL;\n    found = (lookupKeyWriteWithLink(c->db,key,&link) != NULL);\n\n    if ((flags & OBJ_SET_NX && found) ||\n        (flags & (OBJ_SET_XX | OBJ_SET_IFEQ | OBJ_SET_IFDEQ) && !found))\n    {\n        if (!(flags & OBJ_SET_GET)) {\n            addReply(c, abort_reply ? abort_reply : shared.null[c->resp]);\n        }\n        return;\n    }\n\n    /* Handle conditional set operations - only set if key is found and condition\n     * is met - otherwise return nil. */\n    if (found && (flags & (OBJ_SET_IFEQ | OBJ_SET_IFNE | OBJ_SET_IFDEQ | OBJ_SET_IFDNE))) {\n        kvobj *current = lookupKeyRead(c->db, key);\n        if (checkType(c, current, OBJ_STRING)) {\n            return;\n        }\n\n        if (flags & OBJ_SET_IFEQ || flags & OBJ_SET_IFNE) {\n            robj *current_decoded = getDecodedObject(current);\n            int condition = (flags & OBJ_SET_IFEQ) ?\n                            sdscmp(current_decoded->ptr, match_value->ptr) == 0 :\n                            sdscmp(current_decoded->ptr, match_value->ptr) != 0;\n            decrRefCount(current_decoded);\n            if (!condition) {\n                if (!(flags & OBJ_SET_GET)) {\n                    addReply(c, abort_reply ? 
abort_reply : shared.null[c->resp]);\n                }\n                return;\n            }\n        } else if (flags & OBJ_SET_IFDEQ || flags & OBJ_SET_IFDNE) {\n            if (validateHexDigest(c, match_value->ptr) != C_OK)\n                return;\n\n            sds current_digest = stringDigest(current);\n            int condition = flags & OBJ_SET_IFDEQ ?\n                            strcasecmp(current_digest, match_value->ptr) == 0 :\n                            strcasecmp(current_digest, match_value->ptr) != 0;\n            sdsfree(current_digest);\n            if (!condition) {\n                if (!(flags & OBJ_SET_GET)) {\n                    addReply(c, abort_reply ? abort_reply : shared.null[c->resp]);\n                }\n                return;\n            }\n        }\n    }\n\n    /* If the expire time is already elapsed, we don't need to add the key,\n     * but we still need to update the stats, and we also need to delete the\n     * key if it exists.\n     *\n     * From stats perspective, we behave as if we inserted a new key (possibly\n     * an overwrite) and later expired it, but from the per-key KSN observability,\n     * we reflect what we've actually done in the db (deletion of old key, and\n     * no insertion of new one), so we don't confuse modules. */\n    if (expire && checkAlreadyExpired(milliseconds)) {\n        if (found) {\n            dbDelete(c->db, key);\n            robj *aux = server.lazyfree_lazy_server_del ? shared.unlink : shared.del;\n            rewriteClientCommandVector(c, 2, aux, key);\n            keyModified(c, c->db, key, NULL, 1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, c->db->id);\n            server.dirty++;\n        }\n        server.stat_expiredkeys++;\n        if (!(flags & OBJ_SET_GET)) {\n            addReply(c, ok_reply ? 
ok_reply : shared.ok);\n        }\n        return;\n    }\n\n    /* When expire is not NULL, we avoid deleting the TTL so it can be updated later instead of being deleted and then created again. */\n    setkey_flags |= ((flags & OBJ_KEEPTTL) || expire) ? SETKEY_KEEPTTL : 0;\n    setkey_flags |= found ? SETKEY_ALREADY_EXIST : SETKEY_DOESNT_EXIST;\n\n    setKeyByLink(c, c->db, key, valref, setkey_flags, &link);\n    /* If there's an expiration, setExpireByLink may reallocate the object.\n     * We must update valref to reflect the new object if that happens. */\n    if (expire) *valref = setExpireByLink(c, c->db, key->ptr, milliseconds, link);\n    /* The client still holds a reference to the original object via c->argv[i],\n     * and will call decrRefCount() at the end of call(). We increment the refcount\n     * from 1 to 2 to ensure both DB and client have valid references. */\n    incrRefCount(*valref); /* 1->2 */\n\n    server.dirty++;\n    notifyKeyspaceEvent(NOTIFY_STRING,\"set\",key,c->db->id);\n\n    if (expire) {\n        /* Propagate as SET Key Value PXAT millisecond-timestamp if there is\n         * EX/PX/EXAT flag. */\n        if (!(flags & OBJ_PXAT)) {\n            robj *milliseconds_obj = createStringObjectFromLongLong(milliseconds);\n            /* If command is exactly \"SET key value EX/PX/EXAT ttl\", we can just\n             * replace the expire type and value in-place. Otherwise, we need to\n             * rewrite the entire command to strip extra flags (NX, XX, GET, etc). 
*/\n            if ((c->cmd->proc == setCommand) && c->argc == 5) {\n                rewriteClientCommandArgument(c, 3, shared.pxat);\n                rewriteClientCommandArgument(c, 4, milliseconds_obj);\n            } else {\n                rewriteClientCommandVector(c, 5, shared.set, key, *valref, shared.pxat, milliseconds_obj);\n            }\n            decrRefCount(milliseconds_obj);\n        }\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",key,c->db->id);\n    }\n\n    if (!(flags & OBJ_SET_GET)) {\n        addReply(c, ok_reply ? ok_reply : shared.ok);\n    }\n\n    /* Propagate without the GET argument (not needed if we had an expire, since in that case we completely rewrote the command argv) */\n    if ((flags & OBJ_SET_GET) && !expire) {\n        for (int j = c->argc - 1; j >= 3; j--) {\n            char *a = c->argv[j]->ptr;\n            /* Skip GET which may be repeated multiple times. */\n            if ((a[0] == 'g' || a[0] == 'G') &&\n                (a[1] == 'e' || a[1] == 'E') &&\n                (a[2] == 't' || a[2] == 'T') && a[3] == '\\0')\n            {\n                rewriteClientCommandArgument(c, j, NULL);\n            }\n        }\n    }\n}\n\n/*\n * Extract the `expire` argument of a given GET/SET command as an absolute timestamp in milliseconds.\n *\n * \"c\" is the client that sent the `expire` argument.\n * \"expire\" is the `expire` argument to be extracted.\n * \"flags\" represents the behavior of the command (e.g. PX or EX).\n * \"unit\" is the original unit of the given `expire` argument (e.g. 
UNIT_SECONDS).\n * \"milliseconds\" is an output argument.\n *\n * If C_OK is returned, the \"milliseconds\" output argument will be set to the resulting absolute timestamp.\n * If C_ERR is returned, an error reply has been added to the given client.\n */\nstatic int getExpireMillisecondsOrReply(client *c, robj *expire, int flags, int unit, long long *milliseconds) {\n    int ret = getLongLongFromObjectOrReply(c, expire, milliseconds, NULL);\n    if (ret != C_OK) {\n        return ret;\n    }\n\n    if (*milliseconds <= 0 || (unit == UNIT_SECONDS && *milliseconds > LLONG_MAX / 1000)) {\n        /* Negative value provided or the multiplication would overflow. */\n        addReplyErrorExpireTime(c);\n        return C_ERR;\n    }\n\n    if (unit == UNIT_SECONDS) *milliseconds *= 1000;\n\n    if ((flags & OBJ_PX) || (flags & OBJ_EX)) {\n        *milliseconds += commandTimeSnapshot();\n    }\n\n    if (*milliseconds <= 0) {\n        /* Overflow detected. */\n        addReplyErrorExpireTime(c);\n        return C_ERR;\n    }\n\n    return C_OK;\n}\n\n#define COMMAND_GET 0\n#define COMMAND_SET 1\n#define COMMAND_MSETEX 2\n\n/* Extended string command arguments structure */\ntypedef struct {\n    int flags;\n    int unit;\n    int expire_pos;  /* Position of EX/PX flag for replication rewriting */\n    robj *expire;\n    robj *match_value; /* For IFEQ/IFNE/IFDEQ/IFDNE conditions */\n} extendedStringArgs;\n\n/*\n * The parseExtendedStringArgumentsOrReply() function performs the common validation for extended\n * string arguments used in SET, GET and MSETEX commands.\n *\n * Get specific commands - PERSIST/DEL\n * Set specific commands - XX/NX/GET/IFEQ/IFNE/IFDEQ/IFDNE\n * Common commands - EX/EXAT/PX/PXAT/KEEPTTL\n *\n * The function takes a pointer to the client, a start_pos indicating where to begin parsing, an extendedStringArgs\n * structure to populate, and a command_type which can be COMMAND_GET, COMMAND_SET, or COMMAND_MSETEX.\n *\n * If there are any syntax violations, C_ERR is returned; otherwise C_OK is 
returned.\n *\n * The args structure is updated upon parsing the arguments. Unit and expire are updated if there are any\n * EX/EXAT/PX/PXAT arguments. Unit is updated to millisecond if PX/PXAT is set.\n * match_value is updated if any of IFEQ/IFNE/IFDEQ/IFDNE is set.\n */\nint parseExtendedStringArgumentsOrReply(client *c, int start_pos, extendedStringArgs *args, int command_type) {\n    /* Initialize arguments to defaults */\n    memset(args, 0, sizeof(*args));\n    args->expire_pos = -1;\n    args->unit = UNIT_SECONDS;\n\n    int j = start_pos;\n   /* We can have either none or exactly one of these conditionals as they are\n     * mutually exclusive. We'll make sure to check if none of the other flags\n     * are already set if we are going to set one of them. This is done via the\n     * check:\n     *\n     * if (opt == OBJ_SET_XXX && !(*flags & (cond_mut_excl & ~OBJ_SET_XXX)))\n     *\n     * A bit ugly - but concise.\n     */\n    int cond_mut_excl = OBJ_SET_NX | OBJ_SET_XX | OBJ_SET_IFEQ | OBJ_SET_IFNE |\n                        OBJ_SET_IFDEQ | OBJ_SET_IFDNE;\n    for (; j < c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        robj *next = (j == c->argc-1) ? 
NULL : c->argv[j+1];\n\n        if ((opt[0] == 'n' || opt[0] == 'N') &&\n            (opt[1] == 'x' || opt[1] == 'X') && opt[2] == '\\0' &&\n            !(args->flags & OBJ_SET_XX) && (command_type == COMMAND_SET || command_type == COMMAND_MSETEX))\n        {\n            args->flags |= OBJ_SET_NX;\n        } else if ((opt[0] == 'x' || opt[0] == 'X') &&\n                   (opt[1] == 'x' || opt[1] == 'X') && opt[2] == '\\0' &&\n                   !(args->flags & OBJ_SET_NX) && (command_type == COMMAND_SET || command_type == COMMAND_MSETEX))\n        {\n            args->flags |= OBJ_SET_XX;\n        } else if ((opt[0] == 'g' || opt[0] == 'G') &&\n                   (opt[1] == 'e' || opt[1] == 'E') &&\n                   (opt[2] == 't' || opt[2] == 'T') && opt[3] == '\\0' &&\n                   (command_type == COMMAND_SET))\n        {\n            args->flags |= OBJ_SET_GET;\n        } else if (!strcasecmp(opt, \"KEEPTTL\") && !(args->flags & OBJ_PERSIST) &&\n            !(args->flags & OBJ_EX) && !(args->flags & OBJ_EXAT) &&\n            !(args->flags & OBJ_PX) && !(args->flags & OBJ_PXAT) &&\n            (command_type == COMMAND_SET || command_type == COMMAND_MSETEX))\n        {\n            args->flags |= OBJ_KEEPTTL;\n        } else if (!strcasecmp(opt,\"PERSIST\") && (command_type == COMMAND_GET) &&\n               !(args->flags & OBJ_EX) && !(args->flags & OBJ_EXAT) &&\n               !(args->flags & OBJ_PX) && !(args->flags & OBJ_PXAT) &&\n               !(args->flags & OBJ_KEEPTTL))\n        {\n            args->flags |= OBJ_PERSIST;\n        } else if ((opt[0] == 'e' || opt[0] == 'E') &&\n                   (opt[1] == 'x' || opt[1] == 'X') && opt[2] == '\\0' &&\n                   !(args->flags & OBJ_KEEPTTL) && !(args->flags & OBJ_PERSIST) &&\n                   !(args->flags & OBJ_EXAT) && !(args->flags & OBJ_PX) &&\n                   !(args->flags & OBJ_PXAT) && next)\n        {\n            args->flags |= OBJ_EX;\n            args->expire = next;\n    
        args->expire_pos = j;\n            j++;\n        } else if ((opt[0] == 'p' || opt[0] == 'P') &&\n                   (opt[1] == 'x' || opt[1] == 'X') && opt[2] == '\\0' &&\n                   !(args->flags & OBJ_KEEPTTL) && !(args->flags & OBJ_PERSIST) &&\n                   !(args->flags & OBJ_EX) && !(args->flags & OBJ_EXAT) &&\n                   !(args->flags & OBJ_PXAT) && next)\n        {\n            args->flags |= OBJ_PX;\n            args->unit = UNIT_MILLISECONDS;\n            args->expire = next;\n            args->expire_pos = j;\n            j++;\n        } else if ((opt[0] == 'e' || opt[0] == 'E') &&\n                   (opt[1] == 'x' || opt[1] == 'X') &&\n                   (opt[2] == 'a' || opt[2] == 'A') &&\n                   (opt[3] == 't' || opt[3] == 'T') && opt[4] == '\\0' &&\n                   !(args->flags & OBJ_KEEPTTL) && !(args->flags & OBJ_PERSIST) &&\n                   !(args->flags & OBJ_EX) && !(args->flags & OBJ_PX) &&\n                   !(args->flags & OBJ_PXAT) && next)\n        {\n            args->flags |= OBJ_EXAT;\n            args->expire = next;\n            j++;\n        } else if ((opt[0] == 'p' || opt[0] == 'P') &&\n                   (opt[1] == 'x' || opt[1] == 'X') &&\n                   (opt[2] == 'a' || opt[2] == 'A') &&\n                   (opt[3] == 't' || opt[3] == 'T') && opt[4] == '\\0' &&\n                   !(args->flags & OBJ_KEEPTTL) && !(args->flags & OBJ_PERSIST) &&\n                   !(args->flags & OBJ_EX) && !(args->flags & OBJ_EXAT) &&\n                   !(args->flags & OBJ_PX) && next)\n        {\n            args->flags |= OBJ_PXAT;\n            args->unit = UNIT_MILLISECONDS;\n            args->expire = next;\n            j++;\n        } else if (!strcasecmp(opt, \"ifeq\") && next &&\n                   !(args->flags & (cond_mut_excl & ~OBJ_SET_IFEQ)) &&\n                   (command_type == COMMAND_SET))\n        {\n            args->flags |= OBJ_SET_IFEQ;\n            args->match_value = 
next;\n            j++;\n        } else if (!strcasecmp(opt, \"ifne\") && next &&\n                   !(args->flags & (cond_mut_excl & ~OBJ_SET_IFNE)) &&\n                   (command_type == COMMAND_SET))\n        {\n            args->flags |= OBJ_SET_IFNE;\n            args->match_value = next;\n            j++;\n        } else if (!strcasecmp(opt, \"ifdeq\") && next &&\n                   !(args->flags & (cond_mut_excl & ~OBJ_SET_IFDEQ)) &&\n                   (command_type == COMMAND_SET))\n        {\n            args->flags |= OBJ_SET_IFDEQ;\n            args->match_value = next;\n            j++;\n        } else if (!strcasecmp(opt, \"ifdne\") && next &&\n                   !(args->flags & (cond_mut_excl & ~OBJ_SET_IFDNE)) &&\n                   (command_type == COMMAND_SET))\n        {\n            args->flags |= OBJ_SET_IFDNE;\n            args->match_value = next;\n            j++;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return C_ERR;\n        }\n    }\n    return C_OK;\n}\n\n/* SET key value [NX] [XX] [KEEPTTL] [GET] [EX <seconds>] [PX <milliseconds>]\n *     [EXAT <seconds-timestamp>][PXAT <milliseconds-timestamp>]\n *     [IFEQ <match-value>|IFNE <match-value>|IFDEQ <match-digest>|\n *      IFDNE <match-digest>]*/\nvoid setCommand(client *c) {\n    extendedStringArgs args;\n\n    if (parseExtendedStringArgumentsOrReply(c, 3, &args, COMMAND_SET) != C_OK) {\n        return;\n    }\n\n    c->argv[2] = tryObjectEncoding(c->argv[2]);\n    setGenericCommand(c, args.flags, c->argv[1], &(c->argv[2]), args.expire, args.unit, args.match_value, NULL, NULL);\n}\n\nvoid setnxCommand(client *c) {\n    c->argv[2] = tryObjectEncoding(c->argv[2]);\n    setGenericCommand(c, OBJ_SET_NX, c->argv[1], &(c->argv[2]), NULL, 0, NULL, shared.cone, shared.czero);\n}\n\nvoid setexCommand(client *c) {\n    c->argv[3] = tryObjectEncoding(c->argv[3]);\n    setGenericCommand(c, OBJ_EX, c->argv[1], &(c->argv[3]), c->argv[2], UNIT_SECONDS, 
NULL, NULL, NULL);\n}\n\nvoid psetexCommand(client *c) {\n    c->argv[3] = tryObjectEncoding(c->argv[3]);\n    setGenericCommand(c, OBJ_PX, c->argv[1], &(c->argv[3]), c->argv[2], UNIT_MILLISECONDS, NULL, NULL, NULL);\n}\n\nint getGenericCommand(client *c) {\n    kvobj *o;\n\n    if ((o = lookupKeyReadOrReply(c, c->argv[1], shared.null[c->resp])) == NULL)\n        return C_OK;\n\n    if (checkType(c,o,OBJ_STRING)) {\n        return C_ERR;\n    }\n\n    addReplyBulk(c,o);\n    return C_OK;\n}\n\nvoid getCommand(client *c) {\n    getGenericCommand(c);\n}\n\n/*\n * GETEX <key> [PERSIST][EX seconds][PX milliseconds][EXAT seconds-timestamp][PXAT milliseconds-timestamp]\n *\n * The getexCommand() function implements extended options and variants of the GET command. Unlike the GET\n * command, this command is not read-only.\n *\n * The default behavior when no options are specified is the same as GET and does not alter any TTL.\n *\n * Only one of the below options can be used at a given time.\n *\n * 1. PERSIST removes any TTL associated with the key.\n * 2. EX Set expiry TTL in seconds.\n * 3. PX Set expiry TTL in milliseconds.\n * 4. EXAT Same as EX, but instead of specifying the number of seconds representing the TTL\n *      (time to live), it takes an absolute Unix timestamp\n * 5. 
PXAT Same as PX, but instead of specifying the number of milliseconds representing the TTL\n *      (time to live), it takes an absolute Unix timestamp\n *\n * The command returns either the bulk string, an error or nil.\n */\nvoid getexCommand(client *c) {\n    extendedStringArgs args;\n\n    if (parseExtendedStringArgumentsOrReply(c, 2, &args, COMMAND_GET) != C_OK) {\n        return;\n    }\n\n    kvobj *o;\n\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp])) == NULL)\n        return;\n\n    if (checkType(c,o,OBJ_STRING)) {\n        return;\n    }\n\n    /* Validate the expiration time value first */\n    long long milliseconds = 0;\n    if (args.expire && getExpireMillisecondsOrReply(c, args.expire, args.flags, args.unit, &milliseconds) != C_OK) {\n        return;\n    }\n\n    /* We need to do this before we expire the key or delete it */\n    addReplyBulk(c,o);\n\n    /* This command is never propagated as is. It is either propagated as PEXPIRE[AT],DEL,UNLINK or PERSIST.\n     * This is why it doesn't need special handling in feedAppendOnlyFile to convert a relative expire time to an absolute one. */\n    if (((args.flags & OBJ_PXAT) || (args.flags & OBJ_EXAT)) && checkAlreadyExpired(milliseconds)) {\n        /* When a PXAT/EXAT absolute timestamp is specified, there is a chance that the timestamp\n         * has already elapsed, so delete the key in that case. */\n        int deleted = dbGenericDelete(c->db, c->argv[1], server.lazyfree_lazy_expire, DB_FLAG_KEY_EXPIRED);\n        serverAssert(deleted);\n        robj *aux = server.lazyfree_lazy_expire ? 
shared.unlink : shared.del;\n        rewriteClientCommandVector(c,2,aux,c->argv[1]);\n        keyModified(c, c->db, c->argv[1], NULL, 1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n        server.dirty++;\n    } else if (args.expire) {\n        o = setExpire(c,c->db,c->argv[1],milliseconds);\n        /* Propagate as PEXPIREAT millisecond-timestamp if there is an\n         * EX/PX/EXAT/PXAT flag and the key has not expired. */\n        robj *milliseconds_obj = createStringObjectFromLongLong(milliseconds);\n        rewriteClientCommandVector(c,3,shared.pexpireat,c->argv[1],milliseconds_obj);\n        decrRefCount(milliseconds_obj);\n        keyModified(c, c->db, c->argv[1], o, 1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",c->argv[1],c->db->id);\n        server.dirty++;\n    } else if (args.flags & OBJ_PERSIST) {\n        if (removeExpire(c->db, c->argv[1])) {\n            keyModified(c, c->db, c->argv[1], o, 1);\n            rewriteClientCommandVector(c, 2, shared.persist, c->argv[1]);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"persist\",c->argv[1],c->db->id);\n            server.dirty++;\n        }\n    }\n}\n\nvoid getdelCommand(client *c) {\n    if (getGenericCommand(c) == C_ERR) return;\n    if (dbSyncDelete(c->db, c->argv[1])) {\n        /* Propagate as DEL command */\n        rewriteClientCommandVector(c,2,shared.del,c->argv[1]);\n        keyModified(c, c->db, c->argv[1], NULL, 1);\n        notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", c->argv[1], c->db->id);\n        server.dirty++;\n    }\n}\n\nvoid getsetCommand(client *c) {\n    if (getGenericCommand(c) == C_ERR) return;\n    c->argv[2] = tryObjectEncoding(c->argv[2]);\n    setKey(c, c->db, c->argv[1], &c->argv[2], 0);\n    incrRefCount(c->argv[2]);\n    notifyKeyspaceEvent(NOTIFY_STRING,\"set\",c->argv[1],c->db->id);\n    server.dirty++;\n\n    /* Propagate as SET command */\n    rewriteClientCommandArgument(c,0,shared.set);\n}\n\nvoid 
setrangeCommand(client *c) {\n    int64_t oldLen = -1, newLen;\n    long offset;\n    sds value = c->argv[3]->ptr;\n    const size_t value_len = sdslen(value);\n\n    if (getLongFromObjectOrReply(c,c->argv[2],&offset,NULL) != C_OK)\n        return;\n\n    if (offset < 0) {\n        addReplyError(c,\"offset is out of range\");\n        return;\n    }\n\n    dictEntryLink link;\n    kvobj *kv = lookupKeyWriteWithLink(c->db, c->argv[1], &link);\n    if (kv == NULL) {\n        /* Return 0 when setting nothing on a non-existing string */\n        if (value_len == 0) {\n            addReply(c,shared.czero);\n            return;\n        }\n\n        /* Return when the resulting string exceeds allowed size */\n        if (checkStringLength(c,offset,value_len) != C_OK)\n            return;\n\n        newLen = offset+value_len;\n        robj *o = createObject(OBJ_STRING,sdsnewlen(NULL, newLen));\n        kv = dbAddByLink(c->db, c->argv[1], &o, &link);\n    } else {\n        /* Key exists, check type */\n        if (checkType(c,kv,OBJ_STRING))\n            return;\n\n        /* Return existing string length when setting nothing */\n        oldLen = stringObjectLen(kv);\n        if (value_len == 0) {\n            addReplyLongLong(c, oldLen);\n            return;\n        }\n\n        /* Return when the resulting string exceeds allowed size */\n        if (checkStringLength(c,offset,value_len) != C_OK)\n            return;\n\n        /* Create a copy when the object is shared or encoded. 
*/\n        kv = dbUnshareStringValueByLink(c->db, c->argv[1], kv, link);\n\n        newLen = max(oldLen, (int64_t) (offset + value_len));\n        updateKeysizesHist(c->db, OBJ_STRING, oldLen, newLen);            \n    }\n\n    if (value_len > 0) {\n        size_t oldsize = 0;\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(kv);\n        kv->ptr = sdsgrowzero(kv->ptr,offset+value_len);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), kv, oldsize, kvobjAllocSize(kv));\n        memcpy((char*)kv->ptr+offset,value,value_len);\n        keyModified(c,c->db,c->argv[1],kv,1);\n        notifyKeyspaceEvent(NOTIFY_STRING,\n            \"setrange\",c->argv[1],c->db->id);\n        server.dirty++;\n    }\n\n    addReplyLongLong(c,newLen);\n}\n\nvoid getrangeCommand(client *c) {\n    kvobj *o;\n    long long start, end;\n    char *str, llbuf[32];\n    size_t strlen;\n\n    if (getLongLongFromObjectOrReply(c,c->argv[2],&start,NULL) != C_OK)\n        return;\n    if (getLongLongFromObjectOrReply(c,c->argv[3],&end,NULL) != C_OK)\n        return;\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptybulk)) == NULL ||\n        checkType(c,o,OBJ_STRING)) return;\n\n    if (o->encoding == OBJ_ENCODING_INT) {\n        str = llbuf;\n        strlen = ll2string(llbuf,sizeof(llbuf),(long)o->ptr);\n    } else {\n        str = o->ptr;\n        strlen = sdslen(str);\n    }\n\n    /* Convert negative indexes */\n    if (start < 0 && end < 0 && start > end) {\n        addReply(c,shared.emptybulk);\n        return;\n    }\n    if (start < 0) start = strlen+start;\n    if (end < 0) end = strlen+end;\n    if (start < 0) start = 0;\n    if (end < 0) end = 0;\n    if ((unsigned long long)end >= strlen) end = strlen-1;\n\n    /* Precondition: end >= 0 && end < strlen, so the only condition where\n     * nothing can be returned is: start > end. 
*/\n    if (start > end || strlen == 0) {\n        addReply(c,shared.emptybulk);\n    } else {\n        addReplyBulkCBuffer(c,(char*)str+start,end-start+1);\n    }\n}\n\nvoid mgetCommand(client *c) {\n    int j;\n\n    addReplyArrayLen(c,c->argc-1);\n    for (j = 1; j < c->argc; j++) {\n        kvobj *o = lookupKeyRead(c->db, c->argv[j]);\n        if (o == NULL) {\n            addReplyNull(c);\n        } else {\n            if (o->type != OBJ_STRING) {\n                addReplyNull(c);\n            } else {\n                addReplyBulk(c,o);\n            }\n        }\n    }\n}\n\nvoid msetGenericCommand(client *c, int nx) {\n    int j;\n\n    if ((c->argc % 2) == 0) {\n        addReplyErrorArity(c);\n        return;\n    }\n\n    /* Handle the NX flag. The MSETNX semantic is to return zero and don't\n     * set anything if at least one key already exists. */\n    if (nx) {\n        for (j = 1; j < c->argc; j += 2) {\n            if (lookupKeyWrite(c->db,c->argv[j]) != NULL) {\n                addReply(c, shared.czero);\n                return;\n            }\n        }\n    }\n\n    for (j = 1; j < c->argc; j += 2) {\n        c->argv[j+1] = tryObjectEncoding(c->argv[j+1]);\n        /* if 'NX', no need set flags SETKEY_DOESNT_EXIST. Already verified earlier! */\n        setKey(c, c->db, c->argv[j], &(c->argv[j+1]) , 0 /*flags*/);\n        incrRefCount(c->argv[j+1]);  /* refcnt not incr by setKey() */\n        notifyKeyspaceEvent(NOTIFY_STRING,\"set\",c->argv[j],c->db->id);\n    }\n    server.dirty += (c->argc-1)/2;\n    addReply(c, nx ? 
shared.cone : shared.ok);\n}\n\nvoid msetCommand(client *c) {\n    msetGenericCommand(c,0);\n}\n\nvoid msetnxCommand(client *c) {\n    msetGenericCommand(c,1);\n}\n\nvoid msetexCommand(client *c) {\n    /* Parse numkeys parameter */\n    long kv_count;\n    if (getRangeLongFromObjectOrReply(c, c->argv[1], 1, INT_MAX,\n        &kv_count, \"invalid numkeys value\") != C_OK)\n    {\n        return;\n    }\n\n    /* Validate we have enough arguments: command + numkeys + (key-value pairs) * 2\n     * Be careful to avoid overflow when calculating kv_count * 2 */\n    if ((long long)kv_count * 2 + 2 > c->argc) {\n        addReplyError(c, \"wrong number of key-value pairs\");\n        return;\n    }\n\n    extendedStringArgs args;\n    if (parseExtendedStringArgumentsOrReply(c, kv_count * 2 + 2, &args, COMMAND_MSETEX) != C_OK) {\n        return;\n    }\n\n    /* Validate the expiration time value first */\n    long long milliseconds = 0;\n    if (args.expire && getExpireMillisecondsOrReply(c, args.expire, args.flags, args.unit, &milliseconds) != C_OK) {\n        return;\n    }\n\n    if (args.flags & (OBJ_SET_NX | OBJ_SET_XX)) {\n        /* Check NX/XX conditions for each key - pattern from setGenericCommand */\n        for (int j = 0; j < kv_count; j++) {\n            int key_idx = (j * 2) + 2;\n            robj *found = lookupKeyWrite(c->db, c->argv[key_idx]);\n\n            if ((args.flags & OBJ_SET_NX && found) ||\n                (args.flags & OBJ_SET_XX && !found))\n            {\n                addReply(c, shared.czero);\n                return;\n            }\n        }\n    }\n\n    /* Set all key-value pairs */\n    for (int j = 0; j < kv_count; j++) {\n        int key_idx = (j * 2) + 2;\n        int val_idx = key_idx + 1;\n\n        c->argv[val_idx] = tryObjectEncoding(c->argv[val_idx]);\n\n        /* Handle KEEPTTL - preserve existing TTL */\n        int setkey_flags = 0;\n        if (args.flags & OBJ_KEEPTTL) {\n            setkey_flags |= SETKEY_KEEPTTL;\n   
     }\n\n        setKey(c, c->db, c->argv[key_idx], &(c->argv[val_idx]), setkey_flags);\n        incrRefCount(c->argv[val_idx]);\n\n        /* Set expiration for each key (but not for KEEPTTL) */\n        if (args.expire && !(args.flags & OBJ_KEEPTTL)) {\n            setExpire(c, c->db, c->argv[key_idx], milliseconds);\n            notifyKeyspaceEvent(NOTIFY_GENERIC,\"expire\",c->argv[key_idx],c->db->id);\n        }\n        notifyKeyspaceEvent(NOTIFY_STRING,\"set\",c->argv[key_idx],c->db->id);\n    }\n\n    /* Handle replication rewriting for relative expiration times */\n    if (args.expire && !(args.flags & OBJ_PXAT) && !(args.flags & OBJ_EXAT) && args.expire_pos != -1) {\n        /* Convert EX/PX (relative) to PXAT (absolute) for consistent replication */\n        robj *milliseconds_obj = createStringObjectFromLongLong(milliseconds);\n        rewriteClientCommandArgument(c, args.expire_pos, shared.pxat);\n        rewriteClientCommandArgument(c, args.expire_pos + 1, milliseconds_obj);\n        decrRefCount(milliseconds_obj);\n    }\n\n    server.dirty += kv_count;\n    addReply(c, shared.cone);\n}\n\nvoid incrDecrCommand(client *c, long long incr) {\n    long long value, oldvalue;\n    robj *new;\n    dictEntryLink link;\n    kvobj *o = lookupKeyWriteWithLink(c->db, c->argv[1], &link);\n    if (checkType(c,o,OBJ_STRING)) return;\n    if (getLongLongFromObjectOrReply(c,o,&value,NULL) != C_OK) return;\n\n    oldvalue = value;\n    if ((incr < 0 && oldvalue < 0 && incr < (LLONG_MIN-oldvalue)) ||\n        (incr > 0 && oldvalue > 0 && incr > (LLONG_MAX-oldvalue))) {\n        addReplyError(c,\"increment or decrement would overflow\");\n        return;\n    }\n    value += incr;\n\n    if (o && o->refcount == 1 && o->encoding == OBJ_ENCODING_INT &&\n        value >= LONG_MIN && value <= LONG_MAX)\n    {\n        new = o;\n        o->ptr = (void*)((long)value);\n        updateKeysizesHist(c->db, OBJ_STRING,\n                           (int64_t) sdigits10(oldvalue),\n   
                        (int64_t) sdigits10(value));\n    } else {\n        new = createStringObjectFromLongLongForValue(value);\n        if (o) {\n            /* replace value in db and also update keysizes hist */\n            dbReplaceValueWithLink(c->db, c->argv[1], &new, link);\n        } else {\n            /* Add new key to db and also update keysizes hist */\n            dbAddByLink(c->db, c->argv[1], &new, &link);\n        }\n    }\n    addReplyLongLongFromStr(c,new);\n    keyModified(c,c->db,c->argv[1],new,1);\n    notifyKeyspaceEvent(NOTIFY_STRING,\"incrby\",c->argv[1],c->db->id);\n    server.dirty++;\n}\n\nvoid incrCommand(client *c) {\n    incrDecrCommand(c,1);\n}\n\nvoid decrCommand(client *c) {\n    incrDecrCommand(c,-1);\n}\n\nvoid incrbyCommand(client *c) {\n    long long incr;\n\n    if (getLongLongFromObjectOrReply(c, c->argv[2], &incr, NULL) != C_OK) return;\n    incrDecrCommand(c,incr);\n}\n\nvoid decrbyCommand(client *c) {\n    long long incr;\n\n    if (getLongLongFromObjectOrReply(c, c->argv[2], &incr, NULL) != C_OK) return;\n    /* Overflow check: negating LLONG_MIN will cause an overflow */\n    if (incr == LLONG_MIN) {\n        addReplyError(c, \"decrement would overflow\");\n        return;\n    }\n    incrDecrCommand(c,-incr);\n}\n\nvoid incrbyfloatCommand(client *c) {\n    long double incr, value;\n\n    dictEntryLink link;\n    kvobj *o = lookupKeyWriteWithLink(c->db,c->argv[1],&link);\n    if (checkType(c,o,OBJ_STRING)) return;\n    if (getLongDoubleFromObjectOrReply(c,o,&value,NULL) != C_OK ||\n        getLongDoubleFromObjectOrReply(c,c->argv[2],&incr,NULL) != C_OK)\n        return;\n\n    value += incr;\n    if (isnan(value) || isinf(value)) {\n        addReplyError(c,\"increment would produce NaN or Infinity\");\n        return;\n    }\n    robj *new = createStringObjectFromLongDouble(value,1);\n    if (o)\n        dbReplaceValueWithLink(c->db, c->argv[1], &new, link);\n    else\n        dbAddByLink(c->db, c->argv[1], &new, 
&link);\n    keyModified(c,c->db,c->argv[1],new,1);\n    notifyKeyspaceEvent(NOTIFY_STRING,\"incrbyfloat\",c->argv[1],c->db->id);\n    server.dirty++;\n    addReplyBulk(c,new);\n\n    /* Always replicate INCRBYFLOAT as a SET command with the final value\n     * in order to make sure that differences in float precision or formatting\n     * will not create differences in replicas or after an AOF restart. */\n    rewriteClientCommandArgument(c,0,shared.set);\n    rewriteClientCommandArgument(c,2,new);\n    rewriteClientCommandArgument(c,3,shared.keepttl);\n}\n\nvoid appendCommand(client *c) {\n    size_t totlen;\n    robj *append;\n    kvobj *o;\n    size_t oldsize = 0;\n\n    dictEntryLink link;\n    o = lookupKeyWriteWithLink(c->db,c->argv[1],&link);\n    if (o == NULL) {\n        /* Create the key */\n        c->argv[2] = tryObjectEncoding(c->argv[2]);\n        o = dbAddByLink(c->db, c->argv[1], &c->argv[2], &link);\n        incrRefCount(c->argv[2]);\n        totlen = stringObjectLen(c->argv[2]);\n    } else {\n        /* Key exists, check type */\n        if (checkType(c,o,OBJ_STRING))\n            return;\n\n        /* \"append\" is an argument, so always an sds */\n        append = c->argv[2];\n        size_t append_len = sdslen(append->ptr);\n        if (checkStringLength(c,stringObjectLen(o),append_len) != C_OK)\n            return;\n\n        /* Append the value */\n        o = dbUnshareStringValueByLink(c->db,c->argv[1],o,link);\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(o);\n        o->ptr = sdscatlen(o->ptr,append->ptr,append_len);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n        totlen = sdslen(o->ptr);\n        int64_t oldlen = totlen - append_len;\n        updateKeysizesHist(c->db, OBJ_STRING, oldlen, totlen);\n    }\n    keyModified(c,c->db,c->argv[1],o,1);\n    
notifyKeyspaceEvent(NOTIFY_STRING,\"append\",c->argv[1],c->db->id);\n    server.dirty++;\n\n    addReplyLongLong(c,totlen);\n}\n\nvoid strlenCommand(client *c) {\n    kvobj *kv;\n    if ((kv = lookupKeyReadOrReply(c, c->argv[1], shared.czero)) == NULL ||\n        checkType(c, kv, OBJ_STRING)) return;\n    addReplyLongLong(c,stringObjectLen(kv));\n}\n\n/* LCS key1 key2 [LEN] [IDX] [MINMATCHLEN <len>] [WITHMATCHLEN] */\nvoid lcsCommand(client *c) {\n    uint32_t i, j;\n    long long minmatchlen = 0;\n    sds a = NULL, b = NULL;\n    int getlen = 0, getidx = 0, withmatchlen = 0;\n\n    kvobj *obja = lookupKeyRead(c->db, c->argv[1]);\n    kvobj *objb = lookupKeyRead(c->db, c->argv[2]);\n    if ((obja && obja->type != OBJ_STRING) ||\n        (objb && objb->type != OBJ_STRING))\n    {\n        addReplyError(c,\n            \"The specified keys must contain string values\");\n        /* Don't cleanup the objects, we need to do that\n         * only after calling getDecodedObject(). */\n        obja = NULL;\n        objb = NULL;\n        goto cleanup;\n    }\n    obja = obja ? getDecodedObject(obja) : createStringObject(\"\",0);\n    objb = objb ? 
getDecodedObject(objb) : createStringObject(\"\",0);\n    a = obja->ptr;\n    b = objb->ptr;\n\n    for (j = 3; j < (uint32_t)c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        int moreargs = (c->argc-1) - j;\n\n        if (!strcasecmp(opt,\"IDX\")) {\n            getidx = 1;\n        } else if (!strcasecmp(opt,\"LEN\")) {\n            getlen = 1;\n        } else if (!strcasecmp(opt,\"WITHMATCHLEN\")) {\n            withmatchlen = 1;\n        } else if (!strcasecmp(opt,\"MINMATCHLEN\") && moreargs) {\n            if (getLongLongFromObjectOrReply(c,c->argv[j+1],&minmatchlen,NULL)\n                != C_OK) goto cleanup;\n            if (minmatchlen < 0) minmatchlen = 0;\n            j++;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            goto cleanup;\n        }\n    }\n\n    /* Complain if the user passed ambiguous parameters. */\n    if (getlen && getidx) {\n        addReplyError(c,\n            \"If you want both the length and indexes, please just use IDX.\");\n        goto cleanup;\n    }\n\n    /* Detect string truncation or later overflows. */\n    if (sdslen(a) >= UINT32_MAX-1 || sdslen(b) >= UINT32_MAX-1) {\n        addReplyError(c, \"String too long for LCS\");\n        goto cleanup;\n    }\n\n    /* Compute the LCS using the vanilla dynamic programming technique of\n     * building a table of LCS(x,y) substrings. */\n    uint32_t alen = sdslen(a);\n    uint32_t blen = sdslen(b);\n\n    /* Setup an uint32_t array to store at LCS[i,j] the length of the\n     * LCS A0..i-1, B0..j-1. Note that we have a linear array here, so\n     * we index it as LCS[j+(blen+1)*i] */\n    #define LCS(A,B) lcs[(B)+((A)*(blen+1))]\n\n    /* Try to allocate the LCS table, and abort on overflow or insufficient memory. */\n    unsigned long long lcssize = (unsigned long long)(alen+1)*(blen+1); /* Can't overflow due to the size limits above. 
*/\n    unsigned long long lcsalloc = lcssize * sizeof(uint32_t);\n    uint32_t *lcs = NULL;\n    if (lcsalloc < SIZE_MAX && lcsalloc / lcssize == sizeof(uint32_t)) {\n        if (lcsalloc > (size_t)server.proto_max_bulk_len) {\n            addReplyError(c, \"Insufficient memory, transient memory for LCS exceeds proto-max-bulk-len\");\n            goto cleanup;\n        }\n        lcs = ztrymalloc(lcsalloc);\n    }\n    if (!lcs) {\n        addReplyError(c, \"Insufficient memory, failed allocating transient memory for LCS\");\n        goto cleanup;\n    }\n\n    /* Start building the LCS table. */\n    for (uint32_t i = 0; i <= alen; i++) {\n        for (uint32_t j = 0; j <= blen; j++) {\n            if (i == 0 || j == 0) {\n                /* If one substring has length of zero, the\n                 * LCS length is zero. */\n                LCS(i,j) = 0;\n            } else if (a[i-1] == b[j-1]) {\n                /* The len LCS (and the LCS itself) of two\n                 * sequences with the same final character, is the\n                 * LCS of the two sequences without the last char\n                 * plus that last char. */\n                LCS(i,j) = LCS(i-1,j-1)+1;\n            } else {\n                /* If the last character is different, take the longest\n                 * between the LCS of the first string and the second\n                 * minus the last char, and the reverse. */\n                uint32_t lcs1 = LCS(i-1,j);\n                uint32_t lcs2 = LCS(i,j-1);\n                LCS(i,j) = lcs1 > lcs2 ? lcs1 : lcs2;\n            }\n        }\n    }\n\n    /* Store the actual LCS string in \"result\" if needed. We create\n     * it backward, but the length is already known, we store it into idx. */\n    uint32_t idx = LCS(alen,blen);\n    sds result = NULL;        /* Resulting LCS string. */\n    void *arraylenptr = NULL; /* Deferred length of the array for IDX. */\n    uint32_t arange_start = alen, /* alen signals that values are not set. 
*/\n             arange_end = 0,\n             brange_start = 0,\n             brange_end = 0;\n\n    /* Do we need to compute the actual LCS string? Allocate it in that case. */\n    int computelcs = getidx || !getlen;\n    if (computelcs) result = sdsnewlen(SDS_NOINIT,idx);\n\n    /* Start with a deferred array if we have to emit the ranges. */\n    uint32_t arraylen = 0;  /* Number of ranges emitted in the array. */\n    if (getidx) {\n        addReplyMapLen(c,2);\n        addReplyBulkCString(c,\"matches\");\n        arraylenptr = addReplyDeferredLen(c);\n    }\n\n    i = alen, j = blen;\n    while (computelcs && i > 0 && j > 0) {\n        int emit_range = 0;\n        if (a[i-1] == b[j-1]) {\n            /* If there is a match, store the character and reduce\n             * the indexes to look for a new match. */\n            result[idx-1] = a[i-1];\n\n            /* Track the current range. */\n            if (arange_start == alen) {\n                arange_start = i-1;\n                arange_end = i-1;\n                brange_start = j-1;\n                brange_end = j-1;\n            } else {\n                /* Let's see if we can extend the range backward since\n                 * it is contiguous. */\n                if (arange_start == i && brange_start == j) {\n                    arange_start--;\n                    brange_start--;\n                } else {\n                    emit_range = 1;\n                }\n            }\n            /* Emit the range if we matched with the first byte of\n             * one of the two strings. We'll exit the loop ASAP. */\n            if (arange_start == 0 || brange_start == 0) emit_range = 1;\n            idx--; i--; j--;\n        } else {\n            /* Otherwise reduce i and j depending on the largest\n             * LCS between, to understand what direction we need to go. 
*/\n            uint32_t lcs1 = LCS(i-1,j);\n            uint32_t lcs2 = LCS(i,j-1);\n            if (lcs1 > lcs2)\n                i--;\n            else\n                j--;\n            if (arange_start != alen) emit_range = 1;\n        }\n\n        /* Emit the current range if needed. */\n        uint32_t match_len = arange_end - arange_start + 1;\n        if (emit_range) {\n            if (minmatchlen == 0 || match_len >= minmatchlen) {\n                if (arraylenptr) {\n                    addReplyArrayLen(c,2+withmatchlen);\n                    addReplyArrayLen(c,2);\n                    addReplyLongLong(c,arange_start);\n                    addReplyLongLong(c,arange_end);\n                    addReplyArrayLen(c,2);\n                    addReplyLongLong(c,brange_start);\n                    addReplyLongLong(c,brange_end);\n                    if (withmatchlen) addReplyLongLong(c,match_len);\n                    arraylen++;\n                }\n            }\n            arange_start = alen; /* Restart at the next match. */\n        }\n    }\n\n    /* Signal modified key, increment dirty, ... */\n\n    /* Reply depending on the given options. */\n    if (arraylenptr) {\n        addReplyBulkCString(c,\"len\");\n        addReplyLongLong(c,LCS(alen,blen));\n        setDeferredArrayLen(c,arraylenptr,arraylen);\n    } else if (getlen) {\n        addReplyLongLong(c,LCS(alen,blen));\n    } else {\n        addReplyBulkSds(c,result);\n        result = NULL;\n    }\n\n    /* Cleanup. */\n    sdsfree(result);\n    zfree(lcs);\n\ncleanup:\n    if (obja) decrRefCount(obja);\n    if (objb) decrRefCount(objb);\n    return;\n}\n\n/* Validate that a digest string has the correct length (DIGEST_HEX_LENGTH characters).\n * Note: This only validates length, not whether characters are valid hex digits.\n * Invalid hex characters will simply fail to match during comparison.\n * Returns C_OK if length is correct, C_ERR otherwise. 
*/\nint validateHexDigest(client *c, const sds digest) {\n    size_t len = sdslen(digest);\n    if (len != DIGEST_HEX_LENGTH) {\n        addReplyErrorFormat(c, \"must be exactly %d hexadecimal characters\", DIGEST_HEX_LENGTH);\n        return C_ERR;\n    }\n    return C_OK;\n}\n\n/* Return the xxh3 hash of a string object as a hex string stored in an sds.\n * The user is responsible for freeing the sds. */\nsds stringDigest(robj *o) {\n    serverAssert(o && o->type == OBJ_STRING);\n\n    XXH64_hash_t hash = 0;\n    if (sdsEncodedObject(o)) {\n        hash = XXH3_64bits(o->ptr, sdslen(o->ptr));\n    } else if (o->encoding == OBJ_ENCODING_INT) {\n        char buf[LONG_STR_SIZE];\n        size_t len = ll2string(buf,sizeof(buf),(long)o->ptr);\n        hash = XXH3_64bits(buf, len);\n    } else {\n        serverPanic(\"Wrong obj->encoding stringDigest()\");\n    }\n\n    sds hexhash = sdsempty();\n    hexhash = sdscatprintf(hexhash, \"%0\" STRINGIFY(DIGEST_HEX_LENGTH) PRIx64, hash);\n    return hexhash;\n}\n\n/* DIGEST key\n *\n * Return digest of the key's value computed via XXH3 hash. The key must be a\n * STRING object. */\nvoid digestCommand(client *c) {\n    kvobj *o;\n\n    if ((o = lookupKeyReadOrReply(c, c->argv[1], shared.null[c->resp])) == NULL)\n        return;\n\n    if (checkType(c,o,OBJ_STRING))\n        return;\n\n    addReplyBulkSds(c, stringDigest(o));\n}\n\n"
  },
  {
    "path": "src/t_zset.c",
    "content": "/* t_zset.c -- zset data type implementation.\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n *\n * Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n */\n\n/*-----------------------------------------------------------------------------\n * Sorted set API\n *----------------------------------------------------------------------------*/\n\n/* ZSETs are ordered sets using two data structures to hold the same elements\n * in order to get O(log(N)) INSERT and REMOVE operations into a sorted\n * data structure.\n *\n * The elements are added to a hash table mapping Redis objects to scores.\n * At the same time the elements are added to a skip list mapping scores\n * to Redis objects (so objects are sorted by scores in this \"view\").\n *\n * Note that the SDS string representing the element is the same in both\n * the hash table and skiplist in order to save memory. What we do in order\n * to manage the shared SDS string more easily is to free the SDS string\n * only in zslFreeNode(). The dictionary has no value free method set.\n * So we should always remove an element from the dictionary, and later from\n * the skiplist.\n *\n * This skiplist implementation is almost a C translation of the original\n * algorithm described by William Pugh in \"Skip Lists: A Probabilistic\n * Alternative to Balanced Trees\", modified in three ways:\n * a) this implementation allows for repeated scores.\n * b) the comparison is not just by key (our 'score') but by satellite data.\n * c) there is a back pointer, so it's a doubly linked list with the back\n * pointers being only at \"level 1\". 
This allows to traverse the list\n * from tail to head, useful for ZREVRANGE. */\n#include \"fast_float_strtod.h\"\n#include \"server.h\"\n#include \"intset.h\"  /* Compact integer set structure */\n#include <math.h>\n\n#define ZSL_OFFSET_MAX_ELE  UINT16_MAX\n#define ZSL_OFFSET_NO_ELE   UINT16_MAX\n\nconst void *zslGetNodeElementForDict(const void *node);\n\n/* dictType for zset's dict (maps sds to zskiplistNode*) */\ndictType zsetDictType = {\n    dictSdsHash,        /* hash function */\n    NULL,               /* key dup */\n    NULL,               /* val dup */\n    dictSdsKeyCompare,  /* compares embedded sds by keyFromStoredKey */\n    NULL,               /* key destructor - skiplist owns the node memory */\n    NULL,               /* val destructor */\n    NULL,               /* allow to expand */\n    .no_value = 1,      /* no values stored (only nodes) */\n    .keyFromStoredKey = zslGetNodeElementForDict,  /* extract embedded sds from node */\n};\n\n/*-----------------------------------------------------------------------------\n * Skiplist implementation of the low level API\n *----------------------------------------------------------------------------*/\n\nint zslLexValueGteMin(sds value, zlexrangespec *spec);\nint zslLexValueLteMax(sds value, zlexrangespec *spec);\nvoid zsetConvertAndExpand(robj *zobj, int encoding, unsigned long cap);\nstatic zskiplistNode *zslGetElementByRankFromNode(zskiplistNode *start_node, int start_level, unsigned long rank);\n\nstatic inline unsigned long zslGetNodeSpanAtLevel(zskiplistNode *x, int level) {\n    /* At level 0, span stores node level instead of distance, so return the actual span value:\n     * 1 for all nodes except the last node (which has span 0). */\n    if (level > 0) return x->level[level].span;\n    /* For level 0, if regular node, span is 1. If tail node, span is 0. */\n    return x->level[0].forward ? 
1 : 0;\n}\n\nstatic inline void zslSetNodeSpanAtLevel(zskiplistNode *x, int level, unsigned long span) {\n    /* Skip level 0 since it stores node level, not span. */\n    if (level > 0)\n        x->level[level].span = span;\n}\n\nstatic inline void zslIncrNodeSpanAtLevel(zskiplistNode *x, int level, unsigned long incr) {\n    /* Skip level 0 since it stores node level, not span. */\n    if (level > 0)\n        x->level[level].span += incr;\n}\n\nstatic inline void zslDecrNodeSpanAtLevel(zskiplistNode *x, int level, unsigned long decr) {\n    /* Skip level 0 since it stores node level, not span. */\n    if (level > 0)\n        x->level[level].span -= decr;\n}\n\n/* Get zskiplistNodeInfo from node (stored in level[0].span). */\nstatic_assert(sizeof(zskiplistNodeInfo) <= sizeof(((zskiplistNode *)0)->level[0].span), \"Must fit in level[0].span\");\nstatic inline zskiplistNodeInfo *zslGetNodeInfo(const zskiplistNode *node) {\n    return (zskiplistNodeInfo *)&node->level[0].span;\n}\n\n/* Set zskiplistNodeInfo in node (stored in level[0].span) */\nstatic inline void zslSetNodeInfo(zskiplistNode *node, uint8_t levels, uint16_t sdsoffset) {\n    union {\n        zskiplistNodeInfo info;\n        unsigned long span;\n    } u = { .info = { .levels = levels, .sdsoffset = sdsoffset } };\n    node->level[0].span = u.span;\n}\n\n/* Compare {score, ele} with node. Returns: 1=bigger 0=equal -1=smaller\n *\n * Ordering is by score first, then lexicographically by element.\n * NULL is treated as +infinity (comes after any real node). */\nint zslCompareWithNode(double score, sds ele, const zskiplistNode *n) {\n    if (/*score < */ n == NULL) return -1; /* NULL is +infinity, comes after any real node */\n    if (score < n->score) return -1;\n    if (score > n->score) return 1;\n    /* Scores are equal, compare elements lexicographically */\n    return sdscmp(ele, zslGetNodeElement(n));\n}\n\n/* Get embedded sds from node. 
Uses the stored offset to directly access the sds data */\nsds zslGetNodeElement(const zskiplistNode *node) {\n    zskiplistNodeInfo *info = zslGetNodeInfo(node);\n    debugServerAssert(info->sdsoffset != ZSL_OFFSET_NO_ELE);\n    return (char*)node + info->sdsoffset;\n}\n\n/* Wrapper for dict getKeyId callback - extracts sds from node pointer.\n * This allows the dict to store zskiplistNode* but look them up using sds. */\nconst void *zslGetNodeElementForDict(const void *node) {\n    return zslGetNodeElement((zskiplistNode*)node);\n}\n\n/* Create a skiplist header node with ZSKIPLIST_MAXLEVEL levels */\nstatic zskiplistNode *zslCreateHeaderNode(zskiplist *zsl) {\n    size_t usable;\n    zskiplistNode *zn = zmalloc_usable(sizeof(*zn) + ZSKIPLIST_MAXLEVEL * sizeof(struct zskiplistLevel), &usable);\n\n    /* Initialize all fields */\n    zn->score = 0;\n    zn->backward = NULL;\n\n    /* Initialize all level pointers and spans */\n    for (int j = 0; j < ZSKIPLIST_MAXLEVEL; j++) {\n        zn->level[j].forward = NULL;\n        zn->level[j].span = 0;  /* Will be overwritten for level[0] below */\n    }\n\n    /* Use ZSL_OFFSET_NO_ELE as sentinel to indicate no embedded sds (header node) */\n    zslSetNodeInfo(zn, ZSKIPLIST_MAXLEVEL, ZSL_OFFSET_NO_ELE);\n\n    /* Track allocation size */\n    zsl->alloc_size += usable;\n\n    return zn;\n}\n\n/* Create a skiplist node with the specified number of levels.\n * The SDS string 'ele' is COPIED into an embedded sds within the node allocation.\n * This creates a single allocation containing: node + level[] + embedded sds.\n * The caller is responsible for freeing 'ele' if it's no longer needed. 
*/\nstatic zskiplistNode *zslCreateNode(zskiplist *zsl, int level, double score, sds ele) {\n    size_t usable;\n    size_t ele_len = sdslen(ele);\n    char sds_type = sdsReqType(ele_len);\n    size_t sds_hdr_len = sdsHdrSize(sds_type);\n\n    /* Calculate total size: node fixed part + level[] + sds buffer space */\n    size_t node_size = sizeof(zskiplistNode) + level * sizeof(struct zskiplistLevel);\n    size_t sds_buf_size = sds_hdr_len + ele_len + 1;  /* header + data + null terminator */\n    size_t total_size = node_size + sds_buf_size;\n\n    /* Allocate single block for everything */\n    zskiplistNode *zn = zmalloc_usable(total_size, &usable);\n\n    /* Initialize node fields */\n    zn->score = score;\n    zn->backward = NULL;\n\n    /* Calculate offset from node start to sds data (after sds header) */\n    size_t sds_offset = node_size + sds_hdr_len;\n    debugServerAssert(sds_offset < ZSL_OFFSET_MAX_ELE);\n\n    /* Initialize embedded sds using sdsnewplacement */\n    char *sds_buf = (char*)zn + node_size;\n    sds embedded_sds = sdsnewplacement(sds_buf, sds_buf_size, sds_type, ele, ele_len);\n\n    /* Store node info in level[0].span */\n    zslSetNodeInfo(zn, level, sds_offset);\n\n    /* Verify that embedded_sds matches our calculated offset */\n    serverAssert(embedded_sds == (sds)((char*)zn + sds_offset));\n\n    /* Update allocation size tracking */\n    zsl->alloc_size += usable;\n\n    return zn;\n}\n\n/* Create a new skiplist. */\nzskiplist *zslCreate(void) {\n    zskiplist *zsl;\n    size_t zsl_size;\n\n    zsl = zmalloc_usable(sizeof(*zsl), &zsl_size);\n    zsl->level = 1;\n    zsl->length = 0;\n    zsl->alloc_size = zsl_size;\n    zsl->header = zslCreateHeaderNode(zsl);\n    zsl->header->backward = NULL;\n    zsl->tail = NULL;\n    return zsl;\n}\n\n/* Free the specified skiplist node. The embedded SDS is freed as part of\n * the single allocation (node + level[] + embedded sds). 
*/\nstatic void zslFreeNode(zskiplist *zsl, zskiplistNode *node) {\n    size_t usable;\n    /* No separate sdsfree() needed - embedded sds is part of node allocation */\n    zfree_usable(node, &usable);\n    zsl->alloc_size -= usable;\n}\n\n/* Free a whole skiplist. */\nvoid zslFree(zskiplist *zsl) {\n    zskiplistNode *node = zsl->header->level[0].forward, *next;\n    size_t usable;\n\n    zfree_usable(zsl->header, &usable);\n    zsl->alloc_size -= usable;\n    while(node) {\n        next = node->level[0].forward;\n        zslFreeNode(zsl, node);\n        node = next;\n    }\n    debugServerAssert(zsl->alloc_size == zmalloc_usable_size(zsl));\n    zfree(zsl);\n}\n\n/* Return cached total memory used (in bytes) */\nsize_t zslAllocSize(const zskiplist *zsl) { return zsl->alloc_size; }\n\n/* Returns a random level for the new skiplist node we are going to create.\n * The return value of this function is between 1 and ZSKIPLIST_MAXLEVEL\n * (both inclusive), with a powerlaw-alike distribution where higher\n * levels are less likely to be returned. */\nstatic int zslRandomLevel(void) {\n    static const int threshold = ZSKIPLIST_P*RAND_MAX;\n    int level = 1;\n    while (random() < threshold)\n        level += 1;\n    return (level<ZSKIPLIST_MAXLEVEL) ? level : ZSKIPLIST_MAXLEVEL;\n}\n\n/* Insert an already-created node, with its score, element set into the skiplist \n * at the correct position. Updates all forward/backward pointers and spans.\n * The node's level must already be set via zslSetNodeInfo(). 
*/\nstatic void zslInsertNode(zskiplist *zsl, zskiplistNode *node) {\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL];  /* Nodes that will point to the new node at each level */\n    unsigned long rank[ZSKIPLIST_MAXLEVEL];     /* Rank (0-based) at each level during traversal */\n    zskiplistNode *x;\n    int i, level;\n    double score = node->score;\n    sds ele = zslGetNodeElement(node);\n    level = zslGetNodeInfo(node)->levels;\n    serverAssert(!isnan(score));\n\n    /* Find the position where this node should be inserted */\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        /* store rank that is crossed to reach the insert position */\n        rank[i] = i == (zsl->level-1) ? 0 : rank[i+1];\n        while (zslCompareWithNode(score, ele, x->level[i].forward) > 0) {\n            rank[i] += zslGetNodeSpanAtLevel(x, i);\n            x = x->level[i].forward;\n        }\n        update[i] = x;\n    }\n\n    /* Update skiplist level if needed */\n    if (level > zsl->level) {\n        for (i = zsl->level; i < level; i++) {\n            rank[i] = 0;\n            update[i] = zsl->header;\n            zslSetNodeSpanAtLevel(update[i], i, zsl->length);\n        }\n        zsl->level = level;\n        zslGetNodeInfo(zsl->header)->levels = level;\n    }\n\n    /* Insert the node at the found position */\n    for (i = 0; i < level; i++) {\n        node->level[i].forward = update[i]->level[i].forward;\n        update[i]->level[i].forward = node;\n\n        /* update span covered by update[i] as node is inserted here */\n        zslSetNodeSpanAtLevel(node, i, zslGetNodeSpanAtLevel(update[i], i) - (rank[0] - rank[i]));\n        zslSetNodeSpanAtLevel(update[i], i, (rank[0] - rank[i]) + 1);\n    }\n\n    /* increment span for untouched levels */\n    for (i = level; i < zsl->level; i++) {\n        zslIncrNodeSpanAtLevel(update[i], i, 1);\n    }\n\n    /* Update backward pointers */\n    node->backward = (update[0] == zsl->header) ? 
NULL : update[0];\n    if (node->level[0].forward)\n        node->level[0].forward->backward = node;\n    else\n        zsl->tail = node;\n\n    zsl->length++;\n}\n\n/* Insert a new node in the skiplist. Assumes the element does not already\n * exist (up to the caller to enforce that). The element 'ele' is COPIED\n * into the new node, so the caller retains ownership and can free it. */\nzskiplistNode *zslInsert(zskiplist *zsl, double score, sds ele) {\n    int level;\n\n    serverAssert(!isnan(score));\n\n    /* we assume the element is not already inside, since we allow duplicated\n     * scores, reinserting the same element should never happen since the\n     * caller of zslInsert() should test in the hash table if the element is\n     * already inside or not. */\n    level = zslRandomLevel();\n    zskiplistNode *node = zslCreateNode(zsl, level, score, ele);\n    zslInsertNode(zsl, node);\n    return node;\n}\n\n/* Internal function used by zslDelete, zslDeleteRangeByScore and\n * zslDeleteRangeByRank.\n * This function only unlinks the node from the skiplist structure but does NOT free it.\n * The caller is responsible for freeing the node with zslFreeNode(). 
*/\nstatic void zslUnlinkNode(zskiplist *zsl, zskiplistNode *x, zskiplistNode **update) {\n    int i;\n    for (i = 0; i < zsl->level; i++) {\n        if (update[i]->level[i].forward == x) {\n            zslIncrNodeSpanAtLevel(update[i], i, zslGetNodeSpanAtLevel(x, i) - 1);\n            update[i]->level[i].forward = x->level[i].forward;\n        } else {\n            zslDecrNodeSpanAtLevel(update[i], i, 1);\n        }\n    }\n    if (x->level[0].forward) {\n        x->level[0].forward->backward = x->backward;\n    } else {\n        zsl->tail = x->backward;\n    }\n    /* Decrease skiplist level if top levels are empty, and clear their spans */\n    while(zsl->level > 1 && zsl->header->level[zsl->level-1].forward == NULL) {\n        zsl->header->level[zsl->level-1].span = 0;\n        zsl->level--;\n    }\n    zsl->length--;\n}\n\n/* Delete the specified node from the skiplist.\n * The node is unlinked from all levels and then freed by zslFreeNode(),\n * which also frees the embedded SDS string. */\nstatic void zslDelete(zskiplist *zsl, zskiplistNode *node) {\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;\n    int i;\n    double score = node->score;\n    sds ele = zslGetNodeElement(node);\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (zslCompareWithNode(score, ele, x->level[i].forward) > 0) {\n            x = x->level[i].forward;\n        }\n        update[i] = x;\n    }\n\n    /* Verify we truly found the node */\n    serverAssert(x->level[0].forward == node);\n\n    zslUnlinkNode(zsl, node, update);\n    zslFreeNode(zsl, node);\n}\n\n/* Update the score of an element inside the sorted set skiplist.\n * If the new score would keep the node in its current position, the score is\n * updated in place. Otherwise the node is unlinked, its score updated, and the\n * node reinserted at the correct position. Either way the node pointer stays\n * the same, so no dict update is needed. 
*/\nstatic void zslUpdateScore(zskiplist *zsl, zskiplistNode *node, double newscore) {\n    /* Fast path: if the node, after the score update, would be still exactly\n     * at the same position, we can just update the score without\n     * actually removing and re-inserting the element in the skiplist. */\n    if ((node->backward == NULL || node->backward->score < newscore) &&\n        (node->level[0].forward == NULL || node->level[0].forward->score > newscore))\n    {\n        node->score = newscore;\n        return;\n    }\n\n    /* Slow path: need to reposition the node.\n     * Find the update[] array for unlinking. */\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;\n    int i;\n    double curscore = node->score;\n    sds ele = zslGetNodeElement(node);\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (zslCompareWithNode(curscore, ele, x->level[i].forward) > 0) {\n            x = x->level[i].forward;\n        }\n        update[i] = x;\n    }\n\n    /* Verify we found the right node */\n    serverAssert(x->level[0].forward == node);\n\n    /* Unlink, update score, and reinsert at new position.\n     * We reuse the same node to avoid dict updates. */\n    zslUnlinkNode(zsl, node, update);\n    node->score = newscore;\n    zslInsertNode(zsl, node);\n}\n\nint zslValueGteMin(double value, zrangespec *spec) {\n    return spec->minex ? (value > spec->min) : (value >= spec->min);\n}\n\nint zslValueLteMax(double value, zrangespec *spec) {\n    return spec->maxex ? (value < spec->max) : (value <= spec->max);\n}\n\n/* Returns non-zero if a part of the zset falls within the range. */\nstatic int zslIsInRange(zskiplist *zsl, zrangespec *range) {\n    zskiplistNode *x;\n\n    /* Test for ranges that will always be empty. 
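The fast path in zslUpdateScore boils down to one predicate: the new score must still sort strictly after the previous node and strictly before the next one. A tiny sketch of that check in isolation (the `score_update_in_place` helper is hypothetical; `prev`/`next` are NULL at the list edges):

```c
#include <stddef.h>

/* Returns 1 when a score can be changed in place, i.e. the node would keep
 * its position between its neighbors. An equal neighbor score returns 0,
 * forcing the slow path, since ordering then depends on the element bytes. */
static int score_update_in_place(const double *prev, const double *next,
                                 double newscore) {
    return (prev == NULL || *prev < newscore) &&
           (next == NULL || *next > newscore);
}
```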
*/\n    if (range->min > range->max ||\n            (range->min == range->max && (range->minex || range->maxex)))\n        return 0;\n    x = zsl->tail;\n    if (x == NULL || !zslValueGteMin(x->score,range))\n        return 0;\n    x = zsl->header->level[0].forward;\n    if (x == NULL || !zslValueLteMax(x->score,range))\n        return 0;\n    return 1;\n}\n\n/* Find the Nth element within the specified score range.\n *\n * Parameters:\n *   - N is 0-based for forward direction (0 = first element in range)\n *   - N can be negative for reverse direction (-1 = last element in range)\n *\n * Returns:\n *   - The skiplist node at position N within the range, or NULL if:\n *     * N is out of bounds for the range\n *     * The range contains no elements\n *   - If out_rank!=NULL, it receives the 1-based absolute rank of the returned node \n */\nzskiplistNode *zslNthInRange(zskiplist *zsl, zrangespec *range, long n, unsigned long *out_rank) {\n    zskiplistNode *x;\n    int i;\n    long edge_rank = 0; /* 0-based rank of the last element smaller than the range. */\n    long last_highest_level_rank = 0;\n    zskiplistNode *last_highest_level_node = NULL;\n    unsigned long rank_diff;\n\n    /* If everything is out of range, return early. */\n    if (!zslIsInRange(zsl,range)) return NULL;\n\n    /* Go forward while *OUT* of range at level of zsl->level-1. */\n    x = zsl->header;\n    i = zsl->level - 1;\n    while (x->level[i].forward && !zslValueGteMin(x->level[i].forward->score, range)) {\n        edge_rank += zslGetNodeSpanAtLevel(x, i);\n        x = x->level[i].forward;\n    }\n    /* Remember the last node which has zsl->level-1 levels and its rank. */\n    last_highest_level_node = x;\n    last_highest_level_rank = edge_rank;\n\n    if (n >= 0) {\n        for (i = zsl->level - 2; i >= 0; i--) {\n            /* Go forward while *OUT* of range. 
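The zrangespec predicates above follow a simple pattern: `minex`/`maxex` switch between strict and inclusive comparison, and a range is provably empty when min > max, or min == max with either end exclusive. A standalone sketch (the `rangespec` name here is a hypothetical stand-in for the real struct):

```c
/* Inclusive/exclusive score-range predicates, mirroring zslValueGteMin,
 * zslValueLteMax and the empty-range test at the top of zslIsInRange. */
typedef struct {
    double min, max;
    int minex, maxex; /* 1 when the corresponding end is exclusive */
} rangespec;

static int value_gte_min(double v, const rangespec *r) {
    return r->minex ? (v > r->min) : (v >= r->min);
}

static int value_lte_max(double v, const rangespec *r) {
    return r->maxex ? (v < r->max) : (v <= r->max);
}

static int range_is_empty(const rangespec *r) {
    return r->min > r->max ||
           (r->min == r->max && (r->minex || r->maxex));
}
```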
*/\n            while (x->level[i].forward && !zslValueGteMin(x->level[i].forward->score, range)) {\n                /* Count the rank of the last element smaller than the range. */\n                edge_rank += zslGetNodeSpanAtLevel(x, i);\n                x = x->level[i].forward;\n            }\n        }\n        /* Check if zsl is long enough. */\n        if ((unsigned long)(edge_rank + n) >= zsl->length) return NULL;\n        if (n < ZSKIPLIST_MAX_SEARCH) {\n            /* If offset is small, we can just jump node by node */\n            /* rank+1 is the first element in range, so we need n+1 steps to reach target. */\n            for (i = 0; i < n + 1; i++) { \n                x = x->level[0].forward;\n            }\n        } else {\n            /* If offset is big, we can jump from the last zsl->level-1 node. */\n            rank_diff = edge_rank + 1 + n - last_highest_level_rank;\n            x = zslGetElementByRankFromNode(last_highest_level_node, zsl->level - 1, rank_diff);\n        }\n        /* Check if score <= max. */\n        if (x && !zslValueLteMax(x->score,range)) return NULL;\n        /* Store rank if requested. For n >= 0, the returned node is at rank edge_rank + n + 1. */\n        if (x && out_rank) *out_rank = edge_rank + n + 1;\n    } else  {\n        for (i = zsl->level - 1; i >= 0; i--) {\n            /* Go forward while *IN* range. */\n            while (x->level[i].forward && zslValueLteMax(x->level[i].forward->score, range)) {\n                /* Count the rank of the last element in range. */\n                edge_rank += zslGetNodeSpanAtLevel(x, i);\n                x = x->level[i].forward;\n            }\n        }\n        /* Check if the range is big enough. */\n        if (edge_rank < -n) return NULL;\n        if (n + 1 > -ZSKIPLIST_MAX_SEARCH) {\n            /* If offset is small, we can just jump node by node */\n            /* rank is the -1th element in range, so we need -n-1 steps to reach target. 
*/\n            for (i = 0; i < -n - 1; i++) {\n                x = x->backward;\n            }\n        } else {\n            /* If offset is big, we can jump from the last zsl->level-1 node. */\n            /* rank is the last element in range, n is -1-based, so we need n+1 to count backwards. */\n            rank_diff = edge_rank + 1 + n - last_highest_level_rank;\n            x = zslGetElementByRankFromNode(last_highest_level_node, zsl->level - 1, rank_diff);\n        }\n        /* Check if score >= min. */\n        if (x && !zslValueGteMin(x->score, range)) return NULL;\n        /* Store rank if requested. For n < 0, the returned node is at rank edge_rank + n + 1. */\n        if (x && out_rank) *out_rank = edge_rank + n + 1;\n    }\n\n    return x;\n}\n\n/* Delete all the elements with score between min and max from the skiplist.\n * Both min and max can be inclusive or exclusive (see range->minex and\n * range->maxex). When inclusive a score >= min && score <= max is deleted.\n * Note that this function takes the reference to the hash table view of the\n * sorted set, in order to remove the elements from the hash table too. */\nstatic unsigned long zslDeleteRangeByScore(zskiplist *zsl, zrangespec *range, dict *dict) {\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;\n    unsigned long removed = 0;\n    int i;\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (x->level[i].forward &&\n            !zslValueGteMin(x->level[i].forward->score, range))\n                x = x->level[i].forward;\n        update[i] = x;\n    }\n\n    /* Current node is the last with score < or <= min. */\n    x = x->level[0].forward;\n\n    /* Delete nodes while in range. */\n    while (x && zslValueLteMax(x->score, range)) {\n        zskiplistNode *next = x->level[0].forward;\n        zslUnlinkNode(zsl,x,update);\n        dictDelete(dict,zslGetNodeElement(x));\n        zslFreeNode(zsl, x); /* Here is where x->ele is actually released. 
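The contract of zslNthInRange (0-based n forward, negative n from the end, 1-based absolute rank via out_rank) is easiest to see on a plain sorted array, where the skiplist traversal collapses into index arithmetic. A hypothetical model with an inclusive [min,max] range:

```c
#include <stddef.h>

/* Array model of the zslNthInRange contract: return the index of the Nth
 * in-range element (n >= 0 counts from the range start, n < 0 from its end),
 * or -1 when out of bounds. If out_rank is non-NULL it receives the 1-based
 * rank within the whole array, as the real function does. */
static long nth_in_range(const double *scores, long len, double min, double max,
                         long n, unsigned long *out_rank) {
    long first = 0, last = len - 1;
    while (first < len && scores[first] < min) first++;   /* edge_rank walk */
    while (last >= first && scores[last] > max) last--;
    if (last < first) return -1; /* range contains no elements */
    long idx = (n >= 0) ? first + n : last + n + 1;
    if (idx < first || idx > last) return -1;
    if (out_rank) *out_rank = (unsigned long)idx + 1;
    return idx;
}
```

In the skiplist, `first` corresponds to `edge_rank + 1`, and the two code paths (node-by-node stepping versus rank-based jumping from the highest-level node) are just two ways of reaching index `edge_rank + n + 1`.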
*/\n        removed++;\n        x = next;\n    }\n    return removed;\n}\n\nstatic unsigned long zslDeleteRangeByLex(zskiplist *zsl, zlexrangespec *range, dict *dict) {\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;\n    unsigned long removed = 0;\n    int i;\n\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (x->level[i].forward &&\n            !zslLexValueGteMin(zslGetNodeElement(x->level[i].forward),range))\n                x = x->level[i].forward;\n        update[i] = x;\n    }\n\n    /* Current node is the last with score < or <= min. */\n    x = x->level[0].forward;\n\n    /* Delete nodes while in range. */\n    while (x && zslLexValueLteMax(zslGetNodeElement(x),range)) {\n        zskiplistNode *next = x->level[0].forward;\n        zslUnlinkNode(zsl,x,update);\n        dictDelete(dict,zslGetNodeElement(x));\n        zslFreeNode(zsl, x); /* Here is where x->ele is actually released. */\n        removed++;\n        x = next;\n    }\n    return removed;\n}\n\n/* Delete all the elements with rank between start and end from the skiplist.\n * Start and end are inclusive. 
Note that start and end need to be 1-based */\nstatic unsigned long zslDeleteRangeByRank(zskiplist *zsl, unsigned int start, unsigned int end, dict *dict) {\n    zskiplistNode *update[ZSKIPLIST_MAXLEVEL], *x;\n    unsigned long traversed = 0, removed = 0;\n    int i;\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (x->level[i].forward && (traversed + zslGetNodeSpanAtLevel(x, i)) < start) {\n            traversed += zslGetNodeSpanAtLevel(x, i);\n            x = x->level[i].forward;\n        }\n        update[i] = x;\n    }\n\n    traversed++;\n    x = x->level[0].forward;\n    while (x && traversed <= end) {\n        zskiplistNode *next = x->level[0].forward;\n        zslUnlinkNode(zsl,x,update);\n        dictDelete(dict,zslGetNodeElement(x));\n        zslFreeNode(zsl, x);\n        removed++;\n        traversed++;\n        x = next;\n    }\n    return removed;\n}\n\n/* Find the rank for an element by both score and key.\n * Returns 0 when the element cannot be found, rank otherwise.\n * Note that the rank is 1-based due to the span of zsl->header to the\n * first element. */\nunsigned long zslGetRank(zskiplist *zsl, double score, sds ele) {\n    zskiplistNode *x;\n    unsigned long rank = 0;\n    int i;\n\n    x = zsl->header;\n    for (i = zsl->level-1; i >= 0; i--) {\n        while (zslCompareWithNode(score, ele, x->level[i].forward) >= 0) {\n            rank += zslGetNodeSpanAtLevel(x, i);\n            x = x->level[i].forward;\n        }\n\n        if (x != zsl->header && zslCompareWithNode(score, ele, x) == 0) {\n            return rank;\n        }\n    }\n    return 0;\n}\n\n/* Find the rank for a skiplist node by walking forward from the node to the end.\n * This avoids expensive string comparisons during traversal. The algorithm:\n * 1. Start at the given node's top level\n * 2. Walk forward to the tail, jumping at each node's top level\n * 3. Sum the spans to get distance from node to end\n * 4. 
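zslDeleteRangeByRank takes 1-based, inclusive bounds; ranks past the end of the list simply stop the loop early. The same contract can be sketched on a flat array (the `delete_range_by_rank` helper is hypothetical, and clipping replaces the skiplist's natural termination):

```c
#include <string.h>

/* Remove elements with 1-based rank in [start, end] inclusive, clipping the
 * range to the array bounds. Returns the number of removed elements. */
static unsigned long delete_range_by_rank(int *arr, unsigned long *len,
                                          unsigned long start, unsigned long end) {
    if (start < 1) start = 1;
    if (end > *len) end = *len;
    if (start > *len || start > end) return 0;
    unsigned long removed = end - start + 1;
    /* Shift the tail left over the deleted range. */
    memmove(arr + start - 1, arr + end, (*len - end) * sizeof(int));
    *len -= removed;
    return removed;
}
```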
Calculate rank as (list_length - distance_to_end)\n * Time complexity: O(log N) on average, same as traditional approach but faster\n * due to avoiding string comparisons. */\nunsigned long zslGetRankByNode(zskiplist *zsl, zskiplistNode *x) {\n    unsigned long distance_to_end = 0;\n    int level;\n    \n    /* Walk forward from x to the end, using top level of each node for fast jumps */\n    while (x) {\n        level = zslGetNodeInfo(x)->levels - 1;\n        distance_to_end += zslGetNodeSpanAtLevel(x, level);\n        x = x->level[level].forward;\n    }\n    \n    /* Rank = total nodes - nodes after this one */\n    return zsl->length - distance_to_end;\n}\n\n/* Finds an element by its rank from start node. The rank argument needs to be 1-based. */\nstatic zskiplistNode *zslGetElementByRankFromNode(zskiplistNode *start_node, int start_level, unsigned long rank) {\n    zskiplistNode *x;\n    unsigned long traversed = 0;\n    int i;\n\n    x = start_node;\n    for (i = start_level; i >= 0; i--) {\n        while (x->level[i].forward && (traversed + zslGetNodeSpanAtLevel(x, i)) <= rank)\n        {\n            traversed += zslGetNodeSpanAtLevel(x, i);\n            x = x->level[i].forward;\n        }\n        if (traversed == rank) {\n            return x;\n        }\n    }\n    return NULL;\n}\n\n/* Finds an element by its rank. The rank argument needs to be 1-based. */\nzskiplistNode *zslGetElementByRank(zskiplist *zsl, unsigned long rank) {\n    return zslGetElementByRankFromNode(zsl->header, zsl->level - 1, rank);\n}\n\n/* Populate the rangespec according to the objects min and max. */\nstatic int zslParseRange(robj *min, robj *max, zrangespec *spec) {\n    char *eptr;\n    spec->minex = spec->maxex = 0;\n\n    /* Parse the min-max interval. If one of the values is prefixed\n     * by the \"(\" character, it's considered \"open\". 
For instance\n     * ZRANGEBYSCORE zset (1.5 (2.5 will match min < x < max\n     * ZRANGEBYSCORE zset 1.5 2.5 will instead match min <= x <= max */\n    if (min->encoding == OBJ_ENCODING_INT) {\n        spec->min = (long)min->ptr;\n    } else {\n        size_t len = sdslen(min->ptr);\n        if (((char*)min->ptr)[0] == '(') {\n            spec->min = fast_float_strtod((char*)min->ptr+1,len-1,&eptr);\n            if (eptr[0] != '\\0' || isnan(spec->min)) return C_ERR;\n            spec->minex = 1;\n        } else {\n            spec->min = fast_float_strtod((char*)min->ptr,len,&eptr);\n            if (eptr[0] != '\\0' || isnan(spec->min)) return C_ERR;\n        }\n    }\n    if (max->encoding == OBJ_ENCODING_INT) {\n        spec->max = (long)max->ptr;\n    } else {\n        size_t len = sdslen(max->ptr);\n        if (((char*)max->ptr)[0] == '(') {\n            spec->max = fast_float_strtod((char*)max->ptr+1,len-1,&eptr);\n            if (eptr[0] != '\\0' || isnan(spec->max)) return C_ERR;\n            spec->maxex = 1;\n        } else {\n            spec->max = fast_float_strtod((char*)max->ptr,len,&eptr);\n            if (eptr[0] != '\\0' || isnan(spec->max)) return C_ERR;\n        }\n    }\n\n    return C_OK;\n}\n\n/* ------------------------ Lexicographic ranges ---------------------------- */\n\n/* Parse max or min argument of ZRANGEBYLEX.\n  * (foo means foo (open interval)\n  * [foo means foo (closed interval)\n  * - means the min string possible\n  * + means the max string possible\n  *\n  * If the string is valid the *dest pointer is set to the redis object\n  * that will be used for the comparison, and ex will be set to 0 or 1\n  * respectively if the item is exclusive or inclusive. C_OK will be\n  * returned.\n  *\n  * If the string is not a valid range C_ERR is returned, and the value\n  * of *dest and *ex is undefined. 
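One side of zslParseRange can be sketched with the standard `strtod` in place of `fast_float_strtod`: a leading '(' marks the bound exclusive, and trailing garbage, an empty number, or NaN rejects the input (the `parse_score_bound` helper is hypothetical):

```c
#include <math.h>
#include <stdlib.h>

/* Parse one score bound like "2.5" or "(1.5". Returns 0 on success and -1 on
 * error, setting *score and *exclusive on success. */
static int parse_score_bound(const char *s, double *score, int *exclusive) {
    char *eptr;
    *exclusive = (s[0] == '(');
    if (*exclusive) s++;
    *score = strtod(s, &eptr);
    /* Reject empty input, trailing junk, and NaN (like the real parser). */
    if (eptr == s || eptr[0] != '\0' || isnan(*score)) return -1;
    return 0;
}
```

Note that `strtod` also accepts "inf"/"-inf", which matches ZRANGEBYSCORE's behavior of allowing infinite bounds.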
*/\nstatic int zslParseLexRangeItem(robj *item, sds *dest, int *ex) {\n    char *c = item->ptr;\n\n    switch(c[0]) {\n    case '+':\n        if (c[1] != '\\0') return C_ERR;\n        *ex = 1;\n        *dest = shared.maxstring;\n        return C_OK;\n    case '-':\n        if (c[1] != '\\0') return C_ERR;\n        *ex = 1;\n        *dest = shared.minstring;\n        return C_OK;\n    case '(':\n        *ex = 1;\n        *dest = sdsnewlen(c+1,sdslen(c)-1);\n        return C_OK;\n    case '[':\n        *ex = 0;\n        *dest = sdsnewlen(c+1,sdslen(c)-1);\n        return C_OK;\n    default:\n        return C_ERR;\n    }\n}\n\n/* Free a lex range structure, must be called only after zslParseLexRange()\n * populated the structure with success (C_OK returned). */\nvoid zslFreeLexRange(zlexrangespec *spec) {\n    if (spec->min != shared.minstring &&\n        spec->min != shared.maxstring) sdsfree(spec->min);\n    if (spec->max != shared.minstring &&\n        spec->max != shared.maxstring) sdsfree(spec->max);\n}\n\n/* Populate the lex rangespec according to the objects min and max.\n *\n * Return C_OK on success. On error C_ERR is returned.\n * When OK is returned the structure must be freed with zslFreeLexRange(),\n * otherwise no release is needed. */\nint zslParseLexRange(robj *min, robj *max, zlexrangespec *spec) {\n    /* The range can't be valid if objects are integer encoded.\n     * Every item must start with ( or [. 
*/\n    if (min->encoding == OBJ_ENCODING_INT ||\n        max->encoding == OBJ_ENCODING_INT) return C_ERR;\n\n    spec->min = spec->max = NULL;\n    if (zslParseLexRangeItem(min, &spec->min, &spec->minex) == C_ERR ||\n        zslParseLexRangeItem(max, &spec->max, &spec->maxex) == C_ERR) {\n        zslFreeLexRange(spec);\n        return C_ERR;\n    } else {\n        return C_OK;\n    }\n}\n\n/* This is just a wrapper to sdscmp() that is able to\n * handle shared.minstring and shared.maxstring as the equivalent of\n * -inf and +inf for strings */\nstatic int sdscmplex(sds a, sds b) {\n    if (a == b) return 0;\n    if (a == shared.minstring || b == shared.maxstring) return -1;\n    if (a == shared.maxstring || b == shared.minstring) return 1;\n    return sdscmp(a,b);\n}\n\nint zslLexValueGteMin(sds value, zlexrangespec *spec) {\n    return spec->minex ?\n        (sdscmplex(value,spec->min) > 0) :\n        (sdscmplex(value,spec->min) >= 0);\n}\n\nint zslLexValueLteMax(sds value, zlexrangespec *spec) {\n    return spec->maxex ?\n        (sdscmplex(value,spec->max) < 0) :\n        (sdscmplex(value,spec->max) <= 0);\n}\n\n/* Returns if there is a part of the zset is in the lex range. */\nstatic int zslIsInLexRange(zskiplist *zsl, zlexrangespec *range) {\n    zskiplistNode *x;\n\n    /* Test for ranges that will always be empty. */\n    int cmp = sdscmplex(range->min,range->max);\n    if (cmp > 0 || (cmp == 0 && (range->minex || range->maxex)))\n        return 0;\n    x = zsl->tail;\n    if ((x == NULL) || (!zslLexValueGteMin(zslGetNodeElement(x),range)))\n        return 0;\n    x = zsl->header->level[0].forward;\n    if ((x == NULL) || (!zslLexValueLteMax(zslGetNodeElement(x),range)))\n        return 0;\n    return 1;\n}\n\n/* Find the Nth node that is contained in the specified range. N should be 0-based.\n * Negative N works for reversed order (-1 represents the last element). 
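The lex-range machinery rests on two ideas: '+' and '-' map to sentinel objects compared by pointer identity (like shared.maxstring/shared.minstring in sdscmplex), and '(' versus '[' selects an exclusive or inclusive bound. A self-contained sketch with plain C strings; all names here are hypothetical:

```c
#include <string.h>

static char LEX_MIN[] = "<min>"; /* sentinel: smaller than any string */
static char LEX_MAX[] = "<max>"; /* sentinel: larger than any string */

/* strcmp wrapper that treats the sentinels as -inf/+inf for strings,
 * by pointer identity, like sdscmplex. */
static int cmp_lex(const char *a, const char *b) {
    if (a == b) return 0;
    if (a == LEX_MIN || b == LEX_MAX) return -1;
    if (a == LEX_MAX || b == LEX_MIN) return 1;
    return strcmp(a, b);
}

/* Parse one ZRANGEBYLEX item: '+'/'-' yield the sentinels, '(' marks an
 * exclusive bound, '[' an inclusive one. Returns 0 on success, -1 on error. */
static int parse_lex_item(const char *s, const char **dest, int *ex) {
    switch (s[0]) {
    case '+': if (s[1] != '\0') return -1; *ex = 1; *dest = LEX_MAX; return 0;
    case '-': if (s[1] != '\0') return -1; *ex = 1; *dest = LEX_MIN; return 0;
    case '(': *ex = 1; *dest = s + 1; return 0;
    case '[': *ex = 0; *dest = s + 1; return 0;
    default:  return -1;
    }
}
```

Because the sentinels compare strictly smaller/larger than everything, the exclusivity flag on '+' and '-' never changes the result; it just keeps the call sites uniform, as in the real code.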
Returns\n * NULL when no element is contained in the range.\n * If out_rank is not NULL, stores the 1-based rank of the returned node. */\nzskiplistNode *zslNthInLexRange(zskiplist *zsl, zlexrangespec *range, long n, unsigned long *out_rank) {\n    zskiplistNode *x;\n    int i;\n    long edge_rank = 0;\n    long last_highest_level_rank = 0;\n    zskiplistNode *last_highest_level_node = NULL;\n    unsigned long rank_diff;\n\n    /* If everything is out of range, return early. */\n    if (!zslIsInLexRange(zsl,range)) return NULL;\n\n    /* Go forward while *OUT* of range at level of zsl->level-1. */\n    x = zsl->header;\n    i = zsl->level - 1;\n    while (x->level[i].forward && !zslLexValueGteMin(zslGetNodeElement(x->level[i].forward), range)) {\n        edge_rank += zslGetNodeSpanAtLevel(x, i);\n        x = x->level[i].forward;\n    }\n    /* Remember the last node which has zsl->level-1 levels and its rank. */\n    last_highest_level_node = x;\n    last_highest_level_rank = edge_rank;\n\n    if (n >= 0) {\n        for (i = zsl->level - 2; i >= 0; i--) {\n            /* Go forward while *OUT* of range. */\n            while (x->level[i].forward && !zslLexValueGteMin(zslGetNodeElement(x->level[i].forward), range)) {\n                /* Count the rank of the last element smaller than the range. */\n                edge_rank += zslGetNodeSpanAtLevel(x, i);\n                x = x->level[i].forward;\n            }\n        }\n        /* Check if zsl is long enough. */\n        if ((unsigned long)(edge_rank + n) >= zsl->length) return NULL; \n        if (n < ZSKIPLIST_MAX_SEARCH) {\n            /* If offset is small, we can just jump node by node */\n            /* rank+1 is the first element in range, so we need n+1 steps to reach target. */\n            for (i = 0; i < n + 1; i++) { \n                x = x->level[0].forward;\n            }\n        } else {\n            /* If offset is big, we can jump from the last zsl->level-1 node. 
*/\n            rank_diff = edge_rank + 1 + n - last_highest_level_rank;\n            x = zslGetElementByRankFromNode(last_highest_level_node, zsl->level - 1, rank_diff);\n        }\n        /* Check if score <= max. */\n        if (x && !zslLexValueLteMax(zslGetNodeElement(x),range)) return NULL;\n        /* Store rank if requested. For n >= 0, the returned node is at rank edge_rank + n + 1. */\n        if (x && out_rank) *out_rank = edge_rank + n + 1;\n    } else {\n        for (i = zsl->level - 1; i >= 0; i--) {\n            /* Go forward while *IN* range. */\n            while (x->level[i].forward && zslLexValueLteMax(zslGetNodeElement(x->level[i].forward), range)) {\n                /* Count the rank of the last element in range. */\n                edge_rank += zslGetNodeSpanAtLevel(x, i);\n                x = x->level[i].forward;\n            }\n        }\n        /* Check if the range is big enough. */\n        if (edge_rank < -n) return NULL;\n        if (n + 1 > -ZSKIPLIST_MAX_SEARCH) {\n            /* If offset is small, we can just jump node by node */\n            for (i = 0; i < -n - 1; i++) {\n                x = x->backward;\n            }\n        } else {\n            /* If offset is big, we can jump from the last zsl->level-1 node. */\n            /* rank is the last element in range, n is -1-based, so we need n+1 to count backwards. */\n            rank_diff = edge_rank + 1 + n - last_highest_level_rank;\n            x = zslGetElementByRankFromNode(last_highest_level_node, zsl->level - 1, rank_diff);\n        }\n        /* Check if score >= min. */\n        if (x && !zslLexValueGteMin(zslGetNodeElement(x), range)) return NULL;\n        /* Store rank if requested. For n < 0, the returned node is at rank edge_rank + n + 1. 
*/\n        if (x && out_rank) *out_rank = edge_rank + n + 1;\n    }\n\n    return x;\n}\n\n/*-----------------------------------------------------------------------------\n * Listpack-backed sorted set API\n *----------------------------------------------------------------------------*/\n\nstatic double zzlStrtod(unsigned char *vstr, unsigned int vlen) {\n    return fast_float_strtod((char*)vstr, vlen, NULL);\n}\n\ndouble zzlGetScore(unsigned char *sptr) {\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vlong;\n    double score;\n\n    serverAssert(sptr != NULL);\n    vstr = lpGetValue(sptr,&vlen,&vlong);\n\n    if (vstr) {\n        score = zzlStrtod(vstr,vlen);\n    } else {\n        score = vlong;\n    }\n\n    return score;\n}\n\n/* Return a listpack element as an SDS string. */\nsds lpGetObject(unsigned char *sptr) {\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vlong;\n\n    serverAssert(sptr != NULL);\n    vstr = lpGetValue(sptr,&vlen,&vlong);\n\n    if (vstr) {\n        return sdsnewlen((char*)vstr,vlen);\n    } else {\n        return sdsfromlonglong(vlong);\n    }\n}\n\n/* Compare element in sorted set with given element. */\nstatic int zzlCompareElements(unsigned char *eptr, unsigned char *cstr, unsigned int clen) {\n    unsigned char *vstr;\n    unsigned int vlen;\n    long long vlong;\n    unsigned char vbuf[32];\n    int minlen, cmp;\n\n    vstr = lpGetValue(eptr,&vlen,&vlong);\n    if (vstr == NULL) {\n        /* Store string representation of long long in buf. */\n        vlen = ll2string((char*)vbuf,sizeof(vbuf),vlong);\n        vstr = vbuf;\n    }\n\n    minlen = (vlen < clen) ? vlen : clen;\n    cmp = memcmp(vstr,cstr,minlen);\n    if (cmp == 0) return vlen-clen;\n    return cmp;\n}\n\nstatic unsigned int zzlLength(unsigned char *zl) {\n    return lpLength(zl)/2;\n}\n\n/* Move to next entry based on the values in eptr and sptr. Both are set to\n * NULL when there is no next entry. 
*/\nvoid zzlNext(unsigned char *zl, unsigned char **eptr, unsigned char **sptr) {\n    unsigned char *_eptr, *_sptr;\n    serverAssert(*eptr != NULL && *sptr != NULL);\n\n    _eptr = lpNext(zl,*sptr);\n    if (_eptr != NULL) {\n        _sptr = lpNext(zl,_eptr);\n        serverAssert(_sptr != NULL);\n    } else {\n        /* No next entry. */\n        _sptr = NULL;\n    }\n\n    *eptr = _eptr;\n    *sptr = _sptr;\n}\n\n/* Move to the previous entry based on the values in eptr and sptr. Both are\n * set to NULL when there is no prev entry. */\nvoid zzlPrev(unsigned char *zl, unsigned char **eptr, unsigned char **sptr) {\n    unsigned char *_eptr, *_sptr;\n    serverAssert(*eptr != NULL && *sptr != NULL);\n\n    _sptr = lpPrev(zl,*eptr);\n    if (_sptr != NULL) {\n        _eptr = lpPrev(zl,_sptr);\n        serverAssert(_eptr != NULL);\n    } else {\n        /* No previous entry. */\n        _eptr = NULL;\n    }\n\n    *eptr = _eptr;\n    *sptr = _sptr;\n}\n\n/* Returns if there is a part of the zset is in range. Should only be used\n * internally by zzlFirstInRange and zzlLastInRange. */\nstatic int zzlIsInRange(unsigned char *zl, zrangespec *range) {\n    unsigned char *p;\n    double score;\n\n    /* Test for ranges that will always be empty. */\n    if (range->min > range->max ||\n            (range->min == range->max && (range->minex || range->maxex)))\n        return 0;\n\n    p = lpSeek(zl,-1); /* Last score. */\n    if (p == NULL) return 0; /* Empty sorted set */\n    score = zzlGetScore(p);\n    if (!zslValueGteMin(score,range))\n        return 0;\n\n    p = lpSeek(zl,1); /* First score. */\n    serverAssert(p != NULL);\n    score = zzlGetScore(p);\n    if (!zslValueLteMax(score,range))\n        return 0;\n\n    return 1;\n}\n\n/* Find pointer to the first element contained in the specified range.\n * Returns NULL when no element is contained in the range. 
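The zzl functions all rely on the same layout invariant: a listpack-encoded zset stores a flat (element, score, element, score, ...) sequence, so the zset length is half the entry count and next/prev move two entries at a time. A toy index-based model of that stepping (indices stand in for the eptr/sptr pointers; names are hypothetical):

```c
/* Zset length of a listpack holding `entries` listpack entries. */
static unsigned int zzl_model_length(unsigned int entries) {
    return entries / 2;
}

/* Advance from the element at index eptr to the next pair; -1 past the end. */
static long zzl_model_next(unsigned int entries, long eptr) {
    long next = eptr + 2;
    return (next < (long)entries) ? next : -1;
}

/* Step back to the previous pair; -1 before the first pair. */
static long zzl_model_prev(long eptr) {
    return (eptr >= 2) ? eptr - 2 : -1;
}
```

This is also why zzlLastInRange seeks to index -2 (the last element, not the last score) and why zzlIsInRange reads the score at index -1.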
*/\nunsigned char *zzlFirstInRange(unsigned char *zl, zrangespec *range) {\n    unsigned char *eptr = lpSeek(zl,0), *sptr;\n    double score;\n\n    /* If everything is out of range, return early. */\n    if (!zzlIsInRange(zl,range)) return NULL;\n\n    while (eptr != NULL) {\n        sptr = lpNext(zl,eptr);\n        serverAssert(sptr != NULL);\n\n        score = zzlGetScore(sptr);\n        if (zslValueGteMin(score,range)) {\n            /* Check if score <= max. */\n            if (zslValueLteMax(score,range))\n                return eptr;\n            return NULL;\n        }\n\n        /* Move to next element. */\n        eptr = lpNext(zl,sptr);\n    }\n\n    return NULL;\n}\n\n/* Find pointer to the last element contained in the specified range.\n * Returns NULL when no element is contained in the range. */\nunsigned char *zzlLastInRange(unsigned char *zl, zrangespec *range) {\n    unsigned char *eptr = lpSeek(zl,-2), *sptr;\n    double score;\n\n    /* If everything is out of range, return early. */\n    if (!zzlIsInRange(zl,range)) return NULL;\n\n    while (eptr != NULL) {\n        sptr = lpNext(zl,eptr);\n        serverAssert(sptr != NULL);\n\n        score = zzlGetScore(sptr);\n        if (zslValueLteMax(score,range)) {\n            /* Check if score >= min. */\n            if (zslValueGteMin(score,range))\n                return eptr;\n            return NULL;\n        }\n\n        /* Move to previous element by moving to the score of previous element.\n         * When this returns NULL, we know there also is no element. 
*/\n        sptr = lpPrev(zl,eptr);\n        if (sptr != NULL)\n            serverAssert((eptr = lpPrev(zl,sptr)) != NULL);\n        else\n            eptr = NULL;\n    }\n\n    return NULL;\n}\n\nint zzlLexValueGteMin(unsigned char *p, zlexrangespec *spec) {\n    sds value = lpGetObject(p);\n    int res = zslLexValueGteMin(value,spec);\n    sdsfree(value);\n    return res;\n}\n\nint zzlLexValueLteMax(unsigned char *p, zlexrangespec *spec) {\n    sds value = lpGetObject(p);\n    int res = zslLexValueLteMax(value,spec);\n    sdsfree(value);\n    return res;\n}\n\n/* Returns if there is a part of the zset is in range. Should only be used\n * internally by zzlFirstInLexRange and zzlLastInLexRange. */\nstatic int zzlIsInLexRange(unsigned char *zl, zlexrangespec *range) {\n    unsigned char *p;\n\n    /* Test for ranges that will always be empty. */\n    int cmp = sdscmplex(range->min,range->max);\n    if (cmp > 0 || (cmp == 0 && (range->minex || range->maxex)))\n        return 0;\n\n    p = lpSeek(zl,-2); /* Last element. */\n    if (p == NULL) return 0;\n    if (!zzlLexValueGteMin(p,range))\n        return 0;\n\n    p = lpSeek(zl,0); /* First element. */\n    serverAssert(p != NULL);\n    if (!zzlLexValueLteMax(p,range))\n        return 0;\n\n    return 1;\n}\n\n/* Find pointer to the first element contained in the specified lex range.\n * Returns NULL when no element is contained in the range. */\nunsigned char *zzlFirstInLexRange(unsigned char *zl, zlexrangespec *range) {\n    unsigned char *eptr = lpSeek(zl,0), *sptr;\n\n    /* If everything is out of range, return early. */\n    if (!zzlIsInLexRange(zl,range)) return NULL;\n\n    while (eptr != NULL) {\n        if (zzlLexValueGteMin(eptr,range)) {\n            /* Check if score <= max. */\n            if (zzlLexValueLteMax(eptr,range))\n                return eptr;\n            return NULL;\n        }\n\n        /* Move to next element. */\n        sptr = lpNext(zl,eptr); /* This element score. Skip it. 
*/\n        serverAssert(sptr != NULL);\n        eptr = lpNext(zl,sptr); /* Next element. */\n    }\n\n    return NULL;\n}\n\n/* Find pointer to the last element contained in the specified lex range.\n * Returns NULL when no element is contained in the range. */\nunsigned char *zzlLastInLexRange(unsigned char *zl, zlexrangespec *range) {\n    unsigned char *eptr = lpSeek(zl,-2), *sptr;\n\n    /* If everything is out of range, return early. */\n    if (!zzlIsInLexRange(zl,range)) return NULL;\n\n    while (eptr != NULL) {\n        if (zzlLexValueLteMax(eptr,range)) {\n            /* Check if score >= min. */\n            if (zzlLexValueGteMin(eptr,range))\n                return eptr;\n            return NULL;\n        }\n\n        /* Move to previous element by moving to the score of previous element.\n         * When this returns NULL, we know there also is no element. */\n        sptr = lpPrev(zl,eptr);\n        if (sptr != NULL)\n            serverAssert((eptr = lpPrev(zl,sptr)) != NULL);\n        else\n            eptr = NULL;\n    }\n\n    return NULL;\n}\n\nstatic unsigned char *zzlFind(unsigned char *lp, sds ele, double *score) {\n    unsigned char *eptr, *sptr;\n\n    if ((eptr = lpFirst(lp)) == NULL) return NULL;\n    eptr = lpFind(lp, eptr, (unsigned char*)ele, sdslen(ele), 1);\n    if (eptr) {\n        sptr = lpNext(lp,eptr);\n        serverAssert(sptr != NULL);\n\n        /* Matching element, pull out score. */\n        if (score != NULL) *score = zzlGetScore(sptr);\n        return eptr;\n    }\n\n    return NULL;\n}\n\n/* Delete (element,score) pair from listpack. Use local copy of eptr because we\n * don't want to modify the one given as argument. 
*/\nstatic unsigned char *zzlDelete(unsigned char *zl, unsigned char *eptr) {\n    return lpDeleteRangeWithEntry(zl,&eptr,2);\n}\n\nstatic unsigned char *zzlInsertAt(unsigned char *zl, unsigned char *eptr, sds ele, double score) {\n    char scorebuf[MAX_D2STRING_CHARS];\n    int scorelen = 0;\n    long long lscore;\n    int score_is_long = double2ll(score, &lscore);\n    if (!score_is_long)\n        scorelen = d2string(scorebuf,sizeof(scorebuf),score);\n\n    listpackEntry entries[2];\n    entries[0].sval = (unsigned char*)ele;\n    entries[0].slen = sdslen(ele);\n    if (score_is_long) {\n        entries[1].sval = NULL;\n        entries[1].lval = lscore;\n    } else {\n        entries[1].sval = (unsigned char*)scorebuf;\n        entries[1].slen = scorelen;\n    }\n\n    if (eptr == NULL)\n        zl = lpBatchAppend(zl, entries, 2);\n    else\n        zl = lpBatchInsert(zl, eptr, LP_BEFORE, entries, 2, NULL);\n\n    return zl;\n}\n\n/* Insert (element,score) pair in listpack. This function assumes the element is\n * not yet present in the list. */\nunsigned char *zzlInsert(unsigned char *zl, sds ele, double score) {\n    unsigned char *eptr = lpSeek(zl,0), *sptr;\n    double s;\n\n    while (eptr != NULL) {\n        sptr = lpNext(zl,eptr);\n        serverAssert(sptr != NULL);\n        s = zzlGetScore(sptr);\n\n        if (s > score) {\n            /* First element with score larger than score for element to be\n             * inserted. This means we should take its spot in the list to\n             * maintain ordering. */\n            zl = zzlInsertAt(zl,eptr,ele,score);\n            break;\n        } else if (s == score) {\n            /* Ensure lexicographical ordering for elements. */\n            if (zzlCompareElements(eptr,(unsigned char*)ele,sdslen(ele)) > 0) {\n                zl = zzlInsertAt(zl,eptr,ele,score);\n                break;\n            }\n        }\n\n        /* Move to next element. 
*/\n        eptr = lpNext(zl,sptr);\n    }\n\n    /* Push on tail of list when it was not yet inserted. */\n    if (eptr == NULL)\n        zl = zzlInsertAt(zl,NULL,ele,score);\n    return zl;\n}\n\nstatic unsigned char *zzlDeleteRangeByScore(unsigned char *zl, zrangespec *range, unsigned long *deleted) {\n    unsigned char *eptr, *sptr;\n    double score;\n    unsigned long num = 0;\n\n    if (deleted != NULL) *deleted = 0;\n\n    eptr = zzlFirstInRange(zl,range);\n    if (eptr == NULL) return zl;\n\n    /* When the tail of the listpack is deleted, eptr will be NULL. */\n    while (eptr && (sptr = lpNext(zl,eptr)) != NULL) {\n        score = zzlGetScore(sptr);\n        if (zslValueLteMax(score,range)) {\n            /* Delete both the element and the score. */\n            zl = lpDeleteRangeWithEntry(zl,&eptr,2);\n            num++;\n        } else {\n            /* No longer in range. */\n            break;\n        }\n    }\n\n    if (deleted != NULL) *deleted = num;\n    return zl;\n}\n\nstatic unsigned char *zzlDeleteRangeByLex(unsigned char *zl, zlexrangespec *range, unsigned long *deleted) {\n    unsigned char *eptr, *sptr;\n    unsigned long num = 0;\n\n    if (deleted != NULL) *deleted = 0;\n\n    eptr = zzlFirstInLexRange(zl,range);\n    if (eptr == NULL) return zl;\n\n    /* When the tail of the listpack is deleted, eptr will be NULL. */\n    while (eptr && (sptr = lpNext(zl,eptr)) != NULL) {\n        if (zzlLexValueLteMax(eptr,range)) {\n            /* Delete both the element and the score. */\n            zl = lpDeleteRangeWithEntry(zl,&eptr,2);\n            num++;\n        } else {\n            /* No longer in range. */\n            break;\n        }\n    }\n\n    if (deleted != NULL) *deleted = num;\n    return zl;\n}\n\n/* Delete all the elements with rank between start and end from the skiplist.\n * Start and end are inclusive. 
Note that start and end need to be 1-based. */\nstatic unsigned char *zzlDeleteRangeByRank(unsigned char *zl, unsigned int start, unsigned int end, unsigned long *deleted) {\n    unsigned int num = (end-start)+1;\n    if (deleted) *deleted = num;\n    zl = lpDeleteRange(zl,2*(start-1),2*num);\n    return zl;\n}\n\n/*-----------------------------------------------------------------------------\n * Common sorted set API\n *----------------------------------------------------------------------------*/\n\nunsigned long zsetLength(const robj *zobj) {\n    unsigned long length = 0;\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        length = zzlLength(zobj->ptr);\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        length = ((const zset*)zobj->ptr)->zsl->length;\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return length;\n}\n\nsize_t zsetAllocSize(const robj *o) {\n    serverAssertWithInfo(NULL,o,o->type == OBJ_ZSET);\n    size_t size = 0;\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        size = lpBytes(o->ptr);\n    } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n        dict *d = ((zset*)o->ptr)->dict;\n        zskiplist *zsl = ((zset*)o->ptr)->zsl;\n        size = sizeof(zset) + zslAllocSize(zsl) +\n            sizeof(dict) + dictMemUsage(d);\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return size;\n}\n\n/* Factory method to return a zset.\n *\n * The size hint indicates approximately how many items will be added,\n * and the value len hint indicates the approximate individual size of the added\n * elements; they are used to determine the initial representation.\n *\n * If the hints are not known, an underestimation or 0 is suitable.\n * We should never pass a negative value because it will convert to a very large unsigned number. 
*/\nrobj *zsetTypeCreate(size_t size_hint, size_t val_len_hint) {\n    if (size_hint <= server.zset_max_listpack_entries &&\n        val_len_hint <= server.zset_max_listpack_value)\n    {\n        return createZsetListpackObject();\n    }\n\n    robj *zobj = createZsetObject();\n    zset *zs = zobj->ptr;\n    dictExpand(zs->dict, size_hint);\n    return zobj;\n}\n\n/* Check if the existing zset should be converted to another encoding based on\n * the size hint. */\nvoid zsetTypeMaybeConvert(robj *zobj, size_t size_hint) {\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK &&\n        size_hint > server.zset_max_listpack_entries)\n    {\n        zsetConvertAndExpand(zobj, OBJ_ENCODING_SKIPLIST, size_hint);\n    }\n}\n\n/* Convert the zset to the specified encoding. The zset dict (when converting\n * to a skiplist) is presized to hold the number of elements in the original\n * zset. */\nvoid zsetConvert(robj *zobj, int encoding) {\n    zsetConvertAndExpand(zobj, encoding, zsetLength(zobj));\n}\n\n/* Converts a zset to the specified encoding, pre-sizing it for 'cap' elements. 
*/\nvoid zsetConvertAndExpand(robj *zobj, int encoding, unsigned long cap) {\n    zset *zs;\n    zskiplistNode *node, *next;\n    sds ele;\n    double score;\n\n    if (zobj->encoding == encoding) return;\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr;\n        unsigned int vlen;\n        long long vlong;\n\n        if (encoding != OBJ_ENCODING_SKIPLIST)\n            serverPanic(\"Unknown target encoding\");\n\n        zs = zmalloc(sizeof(*zs));\n        zs->dict = dictCreate(&zsetDictType);\n        zs->zsl = zslCreate();\n\n        /* Presize the dict to avoid rehashing */\n        dictExpand(zs->dict, cap);\n\n        eptr = lpSeek(zl,0);\n        if (eptr != NULL) {\n            sptr = lpNext(zl,eptr);\n            serverAssertWithInfo(NULL,zobj,sptr != NULL);\n        }\n\n        while (eptr != NULL) {\n            score = zzlGetScore(sptr);\n            vstr = lpGetValue(eptr,&vlen,&vlong);\n            if (vstr == NULL)\n                ele = sdsfromlonglong(vlong);\n            else\n                ele = sdsnewlen((char*)vstr,vlen);\n\n            node = zslInsert(zs->zsl,score,ele);\n            serverAssert(dictAdd(zs->dict, node, NULL) == DICT_OK);\n            sdsfree(ele); /* zslInsert copied it, we can free our copy */\n            zzlNext(zl,&eptr,&sptr);\n        }\n\n        zfree(zobj->ptr);\n        zobj->ptr = zs;\n        zobj->encoding = OBJ_ENCODING_SKIPLIST;\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        unsigned char *zl = lpNew(0);\n\n        if (encoding != OBJ_ENCODING_LISTPACK)\n            serverPanic(\"Unknown target encoding\");\n\n        /* Approach similar to zslFree(), since we want to free the skiplist at\n         * the same time as creating the listpack. 
*/\n        zs = zobj->ptr;\n        dictRelease(zs->dict);\n        node = zs->zsl->header->level[0].forward;\n        zfree(zs->zsl->header);\n\n        while (node) {\n            zl = zzlInsertAt(zl,NULL,zslGetNodeElement(node),node->score);\n            next = node->level[0].forward;\n            zslFreeNode(zs->zsl, node);\n            node = next;\n        }\n\n        zfree(zs->zsl);\n        zfree(zs);\n        zobj->ptr = zl;\n        zobj->encoding = OBJ_ENCODING_LISTPACK;\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n}\n\n/* Convert the sorted set object into a listpack if it is not already a listpack\n * and if the number of elements, the maximum element size, and the total\n * element size are within the expected ranges. */\nvoid zsetConvertToListpackIfNeeded(robj *zobj, size_t maxelelen, size_t totelelen) {\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) return;\n    zset *zset = zobj->ptr;\n\n    if (zset->zsl->length <= server.zset_max_listpack_entries &&\n        maxelelen <= server.zset_max_listpack_value &&\n        lpSafeToAdd(NULL, totelelen))\n    {\n        zsetConvert(zobj,OBJ_ENCODING_LISTPACK);\n    }\n}\n\n/* Return (by reference) the score of the specified member of the sorted set,\n * storing it into *score. If the element does not exist, C_ERR is returned;\n * otherwise C_OK is returned and *score is correctly populated.\n * If 'zobj' or 'member' is NULL, C_ERR is returned. 
*/\nint zsetScore(robj *zobj, sds member, double *score) {\n    if (!zobj || !member) return C_ERR;\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        if (zzlFind(zobj->ptr, member, score) == NULL) return C_ERR;\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        dictEntry *de = dictFind(zs->dict, member);\n        if (de == NULL) return C_ERR;\n        zskiplistNode *znode = dictGetKey(de);\n        *score = znode->score;\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return C_OK;\n}\n\n/* Add a new element or update the score of an existing element in a sorted\n * set, regardless of its encoding.\n *\n * The set of flags changes the command behavior.\n *\n * The input flags are the following:\n *\n * ZADD_INCR: Increment the current element score by 'score' instead of updating\n *            the current element score. If the element does not exist, we\n *            assume 0 as the previous score.\n * ZADD_NX:   Perform the operation only if the element does not exist.\n * ZADD_XX:   Perform the operation only if the element already exists.\n * ZADD_GT:   Perform the operation on existing elements only if the new score is\n *            greater than the current score.\n * ZADD_LT:   Perform the operation on existing elements only if the new score is\n *            less than the current score.\n *\n * When ZADD_INCR is used, the new score of the element is stored in\n * '*newscore' if 'newscore' is not NULL.\n *\n * The returned flags are the following:\n *\n * ZADD_NAN:     The resulting score is not a number.\n * ZADD_ADDED:   The element was added (not present before the call).\n * ZADD_UPDATED: The element score was updated.\n * ZADD_NOP:     No operation was performed because of NX or XX.\n *\n * Return value:\n *\n * The function returns 1 on success, and sets the appropriate flags\n * ADDED or UPDATED to signal what happened during the operation (note that\n * none 
could be set if we re-added an element using the same score it used\n * to have, or in the case a zero increment is used).\n *\n * The function returns 0 on error, currently only when the increment\n * produces a NAN condition, or when the 'score' value is NAN since the\n * start.\n *\n * The command as a side effect of adding a new element may convert the sorted\n * set internal encoding from listpack to hashtable+skiplist.\n *\n * Memory management of 'ele':\n *\n * The function does not take ownership of the 'ele' SDS string, but copies\n * it if needed. */\nint zsetAdd(robj *zobj, double score, sds ele, int in_flags, int *out_flags, double *newscore) {\n    /* Turn options into simple to check vars. */\n    int incr = (in_flags & ZADD_IN_INCR) != 0;\n    int nx = (in_flags & ZADD_IN_NX) != 0;\n    int xx = (in_flags & ZADD_IN_XX) != 0;\n    int gt = (in_flags & ZADD_IN_GT) != 0;\n    int lt = (in_flags & ZADD_IN_LT) != 0;\n    *out_flags = 0; /* We'll return our response flags. */\n    double curscore;\n\n    /* NaN as input is an error regardless of all the other parameters. */\n    if (isnan(score)) {\n        *out_flags = ZADD_OUT_NAN;\n        return 0;\n    }\n\n    /* Update the sorted set according to its encoding. */\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *eptr;\n\n        if ((eptr = zzlFind(zobj->ptr,ele,&curscore)) != NULL) {\n            /* NX? Return, same element already exists. */\n            if (nx) {\n                *out_flags |= ZADD_OUT_NOP;\n                return 1;\n            }\n\n            /* Prepare the score for the increment if needed. */\n            if (incr) {\n                score += curscore;\n                if (isnan(score)) {\n                    *out_flags |= ZADD_OUT_NAN;\n                    return 0;\n                }\n            }\n\n            /* GT/LT? Only update if score is greater/less than current. 
*/\n            if ((lt && score >= curscore) || (gt && score <= curscore)) {\n                *out_flags |= ZADD_OUT_NOP;\n                return 1;\n            }\n\n            if (newscore) *newscore = score;\n\n            /* Remove and re-insert when score changed. */\n            if (score != curscore) {\n                zobj->ptr = zzlDelete(zobj->ptr,eptr);\n                zobj->ptr = zzlInsert(zobj->ptr,ele,score);\n                *out_flags |= ZADD_OUT_UPDATED;\n            }\n            return 1;\n        } else if (!xx) {\n            /* check if the element is too large or the list\n             * becomes too long *before* executing zzlInsert. */\n            if (zzlLength(zobj->ptr)+1 > server.zset_max_listpack_entries ||\n                sdslen(ele) > server.zset_max_listpack_value ||\n                !lpSafeToAdd(zobj->ptr, sdslen(ele)))\n            {\n                zsetConvertAndExpand(zobj, OBJ_ENCODING_SKIPLIST, zsetLength(zobj) + 1);\n            } else {\n                zobj->ptr = zzlInsert(zobj->ptr,ele,score);\n                if (newscore) *newscore = score;\n                *out_flags |= ZADD_OUT_ADDED;\n                return 1;\n            }\n        } else {\n            *out_flags |= ZADD_OUT_NOP;\n            return 1;\n        }\n    }\n\n    /* Note that the above block handling listpack would have either returned or\n     * converted the key to skiplist. */\n    if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplistNode *znode;\n        dictEntry *de;\n        dictEntryLink bucket, link;\n\n        /* Use dictFindLink to find the element and get the bucket for potential insertion.\n         * This avoids a second lookup in dictAdd() if the element doesn't exist. */\n        link = dictFindLink(zs->dict, ele, &bucket);\n\n        if (link != NULL) {\n            /* Element exists - get the dictEntry from the link */\n            de = *link;\n\n            /* NX? 
Return, same element already exists. */\n            if (nx) {\n                *out_flags |= ZADD_OUT_NOP;\n                return 1;\n            }\n\n            /* Get the node pointer from dict entry */\n            znode = dictGetKey(de);\n            curscore = znode->score;\n\n            /* Prepare the score for the increment if needed. */\n            if (incr) {\n                score += curscore;\n                if (isnan(score)) {\n                    *out_flags |= ZADD_OUT_NAN;\n                    return 0;\n                }\n            }\n\n            /* GT/LT? Only update if score is greater/less than current. */\n            if ((lt && score >= curscore) || (gt && score <= curscore)) {\n                *out_flags |= ZADD_OUT_NOP;\n                return 1;\n            }\n\n            if (newscore) *newscore = score;\n\n            /* Remove and re-insert when score changes. */\n            if (score != curscore) {\n                zslUpdateScore(zs->zsl, znode, score);\n                /* Note that we did not remove the original element from\n                 * the hash table representing the sorted set, so we don't\n                 * need to update the dict - the node pointer stays the same. */\n                *out_flags |= ZADD_OUT_UPDATED;\n            }\n            return 1;\n        } else if (!xx) {\n            /* Element doesn't exist - create node with embedded sds and add to skiplist */\n            znode = zslInsert(zs->zsl, score, ele);\n\n            /* Add node pointer to dict using the bucket we already found */\n            dictSetKeyAtLink(zs->dict, znode, &bucket, 1);\n\n            *out_flags |= ZADD_OUT_ADDED;\n            if (newscore) *newscore = score;\n            return 1;\n        } else {\n            *out_flags |= ZADD_OUT_NOP;\n            return 1;\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return 0; /* Never reached. 
*/\n}\n\n/* Deletes the element 'ele' from the sorted set encoded as a skiplist+dict,\n * returning 1 if the element existed and was deleted, 0 otherwise (the\n * element was not there). It does not resize the dict after deleting the\n * element. */\nstatic int zsetRemoveFromSkiplist(zset *zs, sds ele) {\n    dictEntry *de;\n\n    de = dictUnlink(zs->dict,ele);\n    if (de != NULL) {\n        /* Get the node and score in order to delete from the skiplist later. */\n        zskiplistNode *znode = dictGetKey(de);\n\n        /* Delete from the hash table and later from the skiplist.\n         * Note that the order is important: deleting from the skiplist\n         * actually releases the SDS string representing the element,\n         * which is shared between the skiplist and the hash table, so\n         * we need to delete from the skiplist as the final step. */\n        dictFreeUnlinkedEntry(zs->dict,de);\n\n        /* Delete from skiplist. */\n        zslDelete(zs->zsl, znode);\n\n        return 1;\n    }\n\n    return 0;\n}\n\n/* Delete the element 'ele' from the sorted set, returning 1 if the element\n * existed and was deleted, 0 otherwise (the element was not there). */\nint zsetDel(robj *zobj, sds ele) {\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *eptr;\n\n        if ((eptr = zzlFind(zobj->ptr,ele,NULL)) != NULL) {\n            zobj->ptr = zzlDelete(zobj->ptr,eptr);\n            return 1;\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        if (zsetRemoveFromSkiplist(zs, ele)) {\n            return 1;\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return 0; /* No such element found. */\n}\n\n/* Given a sorted set object returns the 0-based rank of the object or\n * -1 if the object does not exist.\n *\n * For rank we mean the position of the element in the sorted collection\n * of elements. 
So the first element has rank 0, the second rank 1, and so\n * forth, up to rank length-1.\n *\n * If 'reverse' is zero, the rank is returned considering as the first element\n * the one with the lowest score. Otherwise, if 'reverse' is non-zero,\n * the rank is computed considering as the element with rank 0 the one with\n * the highest score. */\nlong zsetRank(robj *zobj, sds ele, int reverse, double *output_score) {\n    unsigned long llen;\n    unsigned long rank;\n\n    llen = zsetLength(zobj);\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n\n        eptr = lpSeek(zl,0);\n        serverAssert(eptr != NULL);\n        sptr = lpNext(zl,eptr);\n        serverAssert(sptr != NULL);\n        const size_t ele_len = sdslen(ele);\n        long long cached_val = 0;\n        int cached_valid = 0;\n        rank = 1;\n        while(eptr != NULL) {\n            if (lpCompare(eptr,(unsigned char*)ele,ele_len,&cached_val,&cached_valid))\n                break;\n            rank++;\n            zzlNext(zl,&eptr,&sptr);\n        }\n\n        if (eptr != NULL) {\n            if (output_score)\n                *output_score = zzlGetScore(sptr);\n            if (reverse)\n                return llen-rank;\n            else\n                return rank-1;\n        } else {\n            return -1;\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        dictEntry *de;\n\n        de = dictFind(zs->dict,ele);\n        if (de != NULL) {\n            zskiplistNode *n = dictGetKey(de);\n            rank = zslGetRankByNode(zsl, n);\n            /* Existing elements always have a rank. 
*/\n            serverAssert(rank != 0);\n            if (output_score)\n                *output_score = n->score;\n            if (reverse)\n                return llen-rank;\n            else\n                return rank-1;\n        } else {\n            return -1;\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n}\n\n/* This is a helper function for the COPY command.\n * Duplicate a sorted set object, with the guarantee that the returned object\n * has the same encoding as the original one.\n *\n * The resulting object always has its refcount set to 1. */\nrobj *zsetDup(robj *o) {\n    robj *zobj;\n    zset *zs;\n    zset *new_zs;\n\n    serverAssert(o->type == OBJ_ZSET);\n\n    /* Create a new sorted set object with the same encoding as the original object. */\n    if (o->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = o->ptr;\n        size_t sz = lpBytes(zl);\n        unsigned char *new_zl = zmalloc(sz);\n        memcpy(new_zl, zl, sz);\n        zobj = createObject(OBJ_ZSET, new_zl);\n        zobj->encoding = OBJ_ENCODING_LISTPACK;\n    } else if (o->encoding == OBJ_ENCODING_SKIPLIST) {\n        zobj = createZsetObject();\n        zs = o->ptr;\n        new_zs = zobj->ptr;\n        dictExpand(new_zs->dict,dictSize(zs->dict));\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n        sds ele;\n        long llen = zsetLength(o);\n\n        /* We copy the skiplist elements from the greatest to the\n         * smallest (that's trivial since the elements are already ordered in\n         * the skiplist): this improves the load process, since the next loaded\n         * element will always be smaller, so adding to the skiplist\n         * will always immediately stop at the head, making the insertion\n         * O(1) instead of O(log(N)). 
*/\n        ln = zsl->tail;\n        while (llen--) {\n            ele = zslGetNodeElement(ln);\n            zskiplistNode *znode = zslInsert(new_zs->zsl,ln->score,ele);\n            dictAdd(new_zs->dict, znode, NULL);\n            ln = ln->backward;\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n    return zobj;\n}\n\n/* Create a new sds string from the listpack entry. */\nsds zsetSdsFromListpackEntry(listpackEntry *e) {\n    return e->sval ? sdsnewlen(e->sval, e->slen) : sdsfromlonglong(e->lval);\n}\n\n/* Reply with bulk string from the listpack entry. */\nvoid zsetReplyFromListpackEntry(client *c, listpackEntry *e) {\n    if (e->sval)\n        addReplyBulkCBuffer(c, e->sval, e->slen);\n    else\n        addReplyBulkLongLong(c, e->lval);\n}\n\n\n/* Return a random element from a non-empty zset.\n * 'key' will be set to hold the element.\n * The memory in `key` must not be freed or modified by the caller.\n * 'score' can be NULL, in which case it's not extracted. 
*/\nvoid zsetTypeRandomElement(robj *zsetobj, unsigned long zsetsize, listpackEntry *key, double *score) {\n    if (zsetobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zsetobj->ptr;\n        dictEntry *de = dictGetFairRandomKey(zs->dict);\n        zskiplistNode *znode = dictGetKey(de);\n        sds s = zslGetNodeElement(znode);\n        key->sval = (unsigned char*)s;\n        key->slen = sdslen(s);\n        if (score) {\n            *score = znode->score;\n        }\n    } else if (zsetobj->encoding == OBJ_ENCODING_LISTPACK) {\n        listpackEntry val;\n        lpRandomPair(zsetobj->ptr, zsetsize, key, &val, 2);\n        if (score) {\n            if (val.sval) {\n                *score = zzlStrtod(val.sval,val.slen);\n            } else {\n                *score = (double)val.lval;\n            }\n        }\n    } else {\n        serverPanic(\"Unknown zset encoding\");\n    }\n}\n\n/*-----------------------------------------------------------------------------\n * Sorted set commands\n *----------------------------------------------------------------------------*/\n\n/* This generic command implements both ZADD and ZINCRBY. */\nvoid zaddGenericCommand(client *c, int flags) {\n    static char *nanerr = \"resulting score is not a number (NaN)\";\n    robj *key = c->argv[1];\n    robj *zobj;\n    sds ele;\n    size_t oldsize = 0;\n    double score = 0, *scores = NULL;\n    int j, elements, ch = 0;\n    int scoreidx = 0;\n    /* The following vars are used in order to track what the command actually\n     * did during the execution, to reply to the client and to trigger the\n     * notification of keyspace change. */\n    int added = 0;      /* Number of new elements added. */\n    int updated = 0;    /* Number of elements with updated score. */\n    int processed = 0;  /* Number of elements processed, may remain zero with\n                           options like XX. */\n\n    /* Parse options. 
At the end 'scoreidx' is set to the argument position\n     * of the score of the first score-element pair. */\n    scoreidx = 2;\n    while(scoreidx < c->argc) {\n        char *opt = c->argv[scoreidx]->ptr;\n        if (!strcasecmp(opt,\"nx\")) flags |= ZADD_IN_NX;\n        else if (!strcasecmp(opt,\"xx\")) flags |= ZADD_IN_XX;\n        else if (!strcasecmp(opt,\"ch\")) ch = 1; /* Return num of elements added or updated. */\n        else if (!strcasecmp(opt,\"incr\")) flags |= ZADD_IN_INCR;\n        else if (!strcasecmp(opt,\"gt\")) flags |= ZADD_IN_GT;\n        else if (!strcasecmp(opt,\"lt\")) flags |= ZADD_IN_LT;\n        else break;\n        scoreidx++;\n    }\n\n    /* Turn options into simple to check vars. */\n    int incr = (flags & ZADD_IN_INCR) != 0;\n    int nx = (flags & ZADD_IN_NX) != 0;\n    int xx = (flags & ZADD_IN_XX) != 0;\n    int gt = (flags & ZADD_IN_GT) != 0;\n    int lt = (flags & ZADD_IN_LT) != 0;\n\n    /* After the options, we expect to have an even number of args, since\n     * we expect any number of score-element pairs. */\n    elements = c->argc-scoreidx;\n    if (elements % 2 || !elements) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n    elements /= 2; /* Now this holds the number of score-element pairs. */\n\n    /* Check for incompatible options. 
*/\n    if (nx && xx) {\n        addReplyError(c,\n            \"XX and NX options at the same time are not compatible\");\n        return;\n    }\n    \n    if ((gt && nx) || (lt && nx) || (gt && lt)) {\n        addReplyError(c,\n            \"GT, LT, and/or NX options at the same time are not compatible\");\n        return;\n    }\n    /* Note that XX is compatible with either GT or LT */\n\n    if (incr && elements > 1) {\n        addReplyError(c,\n            \"INCR option supports a single increment-element pair\");\n        return;\n    }\n\n    /* Start parsing all the scores; we need to emit any syntax error\n     * before executing additions to the sorted set, as the command should\n     * either execute fully or nothing at all. */\n    scores = zmalloc(sizeof(double)*elements);\n    for (j = 0; j < elements; j++) {\n        if (getDoubleFromObjectOrReply(c,c->argv[scoreidx+j*2],&scores[j],NULL)\n            != C_OK) goto cleanup;\n    }\n\n    /* Lookup the key and create the sorted set if it does not exist. */\n    zobj = lookupKeyWrite(c->db,key);\n    if (checkType(c,zobj,OBJ_ZSET)) goto cleanup;\n    if (zobj == NULL) {\n        if (xx) goto reply_to_client; /* No key + XX option: nothing to do. 
*/\n        robj *o = zsetTypeCreate(elements, sdslen(c->argv[scoreidx + 1]->ptr));\n        zobj = dbAdd(c->db,key,&o);\n    } else {\n        if (server.memory_tracking_enabled)\n            oldsize = kvobjAllocSize(zobj);\n        zsetTypeMaybeConvert(zobj, elements);\n        if (server.memory_tracking_enabled)\n            updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    unsigned long llen = zsetLength(zobj);\n    for (j = 0; j < elements; j++) {\n        double newscore;\n        score = scores[j];\n        int retflags = 0;\n\n        ele = c->argv[scoreidx+1+j*2]->ptr;\n        int retval = zsetAdd(zobj, score, ele, flags, &retflags, &newscore);\n        if (retval == 0) {\n            addReplyError(c,nanerr);\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n            goto cleanup;\n        }\n        if (retflags & ZADD_OUT_ADDED) added++;\n        if (retflags & ZADD_OUT_UPDATED) updated++;\n        if (!(retflags & ZADD_OUT_NOP)) processed++;\n        score = newscore;\n    }\n    server.dirty += (added+updated);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n    updateKeysizesHist(c->db, OBJ_ZSET, llen, llen+added);\n\nreply_to_client:\n    if (incr) { /* ZINCRBY or INCR option. */\n        if (processed)\n            addReplyDouble(c,score);\n        else\n            addReplyNull(c);\n    } else { /* ZADD. */\n        addReplyLongLong(c,ch ? added+updated : added);\n    }\n\ncleanup:\n    zfree(scores);\n    if (added || updated) {\n        keyModified(c,c->db,key,zobj,1);\n        notifyKeyspaceEvent(NOTIFY_ZSET,\n            incr ? 
\"zincr\" : \"zadd\", key, c->db->id);\n    }\n}\n\nvoid zaddCommand(client *c) {\n    zaddGenericCommand(c,ZADD_IN_NONE);\n}\n\nvoid zincrbyCommand(client *c) {\n    zaddGenericCommand(c,ZADD_IN_INCR);\n}\n\nvoid zremCommand(client *c) {\n    robj *key = c->argv[1];\n    int deleted = 0, keyremoved = 0, j;\n    size_t oldsize = 0;\n\n    kvobj *zobj = lookupKeyWriteOrReply(c, key, shared.czero); \n    if (zobj == NULL || checkType(c,zobj,OBJ_ZSET)) return;\n\n    int64_t oldlen = (int64_t) zsetLength(zobj);\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    if (zobj->encoding == OBJ_ENCODING_SKIPLIST)\n        dictPauseAutoResize(((zset*)zobj->ptr)->dict);\n    for (j = 2; j < c->argc; j++) {\n        if (zsetDel(zobj, c->argv[j]->ptr)) deleted++;\n        if (zsetLength(zobj) == 0) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n            /* Del key but don't update KEYSIZES. Else it will decr wrong bin in histogram */\n            dbDeleteSkipKeysizesUpdate(c->db, key);\n            keyremoved = 1;\n            break;\n        }\n    }\n    if (!keyremoved && zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        dictResumeAutoResize(((zset*)zobj->ptr)->dict);\n        dictShrinkIfNeeded(((zset*)zobj->ptr)->dict);\n    }\n\n    if (server.memory_tracking_enabled && !keyremoved)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n    if (deleted) {\n        int64_t newlen = oldlen - deleted;\n        notifyKeyspaceEvent(NOTIFY_ZSET,\"zrem\",key,c->db->id);\n        if (keyremoved) {\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, c->db->id);\n            newlen = -1; /* means key got deleted */\n        }\n\n        updateKeysizesHist(c->db, OBJ_ZSET, oldlen, newlen);\n        keyModified(c, c->db, key, keyremoved ? 
NULL : zobj, 1);\n        server.dirty += deleted;\n    }\n    addReplyLongLong(c,deleted);\n}\n\ntypedef enum {\n    ZRANGE_AUTO = 0,\n    ZRANGE_RANK,\n    ZRANGE_SCORE,\n    ZRANGE_LEX,\n} zrange_type;\n\n/* Implements ZREMRANGEBYRANK, ZREMRANGEBYSCORE, ZREMRANGEBYLEX commands. */\nvoid zremrangeGenericCommand(client *c, zrange_type rangetype) {\n    robj *key = c->argv[1];\n    int keyremoved = 0;\n    unsigned long deleted = 0;\n    zrangespec range;\n    zlexrangespec lexrange;\n    long start, end, llen;\n    char *notify_type = NULL;\n    size_t oldsize = 0;\n\n    /* Step 1: Parse the range. */\n    if (rangetype == ZRANGE_RANK) {\n        notify_type = \"zremrangebyrank\";\n        if ((getLongFromObjectOrReply(c,c->argv[2],&start,NULL) != C_OK) ||\n            (getLongFromObjectOrReply(c,c->argv[3],&end,NULL) != C_OK))\n            return;\n    } else if (rangetype == ZRANGE_SCORE) {\n        notify_type = \"zremrangebyscore\";\n        if (zslParseRange(c->argv[2],c->argv[3],&range) != C_OK) {\n            addReplyError(c,\"min or max is not a float\");\n            return;\n        }\n    } else if (rangetype == ZRANGE_LEX) {\n        notify_type = \"zremrangebylex\";\n        if (zslParseLexRange(c->argv[2],c->argv[3],&lexrange) != C_OK) {\n            addReplyError(c,\"min or max not valid string range item\");\n            return;\n        }\n    } else {\n        serverPanic(\"unknown rangetype %d\", (int)rangetype);\n    }\n\n    /* Step 2: Lookup & range sanity checks if needed. */\n    kvobj *zobj = lookupKeyWriteOrReply(c, key, shared.czero);\n    if (zobj == NULL || checkType(c, zobj, OBJ_ZSET)) goto cleanup;\n\n    if (rangetype == ZRANGE_RANK) {\n        /* Sanitize indexes. 
*/\n        llen = zsetLength(zobj);\n        if (start < 0) start = llen+start;\n        if (end < 0) end = llen+end;\n        if (start < 0) start = 0;\n\n        /* Invariant: start >= 0, so this test will be true when end < 0.\n         * The range is empty when start > end or start >= length. */\n        if (start > end || start >= llen) {\n            addReply(c,shared.czero);\n            goto cleanup;\n        }\n        if (end >= llen) end = llen-1;\n    }\n\n    /* Step 3: Perform the range deletion operation. */\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        switch(rangetype) {\n        case ZRANGE_AUTO:\n        case ZRANGE_RANK:\n            zobj->ptr = zzlDeleteRangeByRank(zobj->ptr,start+1,end+1,&deleted);\n            break;\n        case ZRANGE_SCORE:\n            zobj->ptr = zzlDeleteRangeByScore(zobj->ptr,&range,&deleted);\n            break;\n        case ZRANGE_LEX:\n            zobj->ptr = zzlDeleteRangeByLex(zobj->ptr,&lexrange,&deleted);\n            break;\n        }\n        if (zzlLength(zobj->ptr) == 0) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n            dbDeleteSkipKeysizesUpdate(c->db, key);\n            keyremoved = 1;\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        dictPauseAutoResize(zs->dict);\n        switch(rangetype) {\n        case ZRANGE_AUTO:\n        case ZRANGE_RANK:\n            deleted = zslDeleteRangeByRank(zs->zsl,start+1,end+1,zs->dict);\n            break;\n        case ZRANGE_SCORE:\n            deleted = zslDeleteRangeByScore(zs->zsl,&range,zs->dict);\n            break;\n        case ZRANGE_LEX:\n            deleted = zslDeleteRangeByLex(zs->zsl,&lexrange,zs->dict);\n            break;\n        }\n        dictResumeAutoResize(zs->dict);\n        
if (dictSize(zs->dict) == 0) {\n            if (server.memory_tracking_enabled)\n                updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n            dbDeleteSkipKeysizesUpdate(c->db, key);\n            keyremoved = 1;\n        } else {\n            dictShrinkIfNeeded(zs->dict);\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    /* Step 4: Notifications and reply. */\n    if (server.memory_tracking_enabled && !keyremoved)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n    if (deleted) {\n        int64_t oldlen, newlen;\n        keyModified(c,c->db,key,NULL,1);\n        notifyKeyspaceEvent(NOTIFY_ZSET,notify_type,key,c->db->id);\n        if (keyremoved) {\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", key, c->db->id);\n            newlen = -1;\n            oldlen = deleted;\n        } else {\n            newlen = zsetLength(zobj);\n            oldlen = newlen + deleted;\n        }\n        updateKeysizesHist(c->db, OBJ_ZSET, oldlen, newlen);\n    }\n    server.dirty += deleted;\n    addReplyLongLong(c,deleted);\n\ncleanup:\n    if (rangetype == ZRANGE_LEX) zslFreeLexRange(&lexrange);\n}\n\nvoid zremrangebyrankCommand(client *c) {\n    zremrangeGenericCommand(c,ZRANGE_RANK);\n}\n\nvoid zremrangebyscoreCommand(client *c) {\n    zremrangeGenericCommand(c,ZRANGE_SCORE);\n}\n\nvoid zremrangebylexCommand(client *c) {\n    zremrangeGenericCommand(c,ZRANGE_LEX);\n}\n\n/* Unified iterator source for set operations (ZUNION/ZINTER/ZDIFF).\n * Provides polymorphic iteration over sets and sorted sets with different encodings. */\ntypedef struct {\n    robj *subject;\n    int type; /* Set, sorted set */\n    int encoding;\n    double weight;\n    size_t oldsize;\n\n    union {\n        /* Set iterators. 
*/\n        union _iterset {\n            struct {\n                intset *is;\n                int ii;\n            } is;\n            struct {\n                dict *dict;\n                dictIterator *di;\n                dictEntry *de;\n            } ht;\n            struct {\n                unsigned char *lp;\n                unsigned char *p;\n            } lp;\n        } set;\n\n        /* Sorted set iterators. */\n        union _iterzset {\n            struct {\n                unsigned char *zl;\n                unsigned char *eptr, *sptr;\n            } zl;\n            struct {\n                zset *zs;\n                zskiplistNode *node;\n            } sl;\n        } zset;\n    } iter;\n} zsetopsrc;\n\n\n/* Use dirty flags for pointers that need to be cleaned up in the next\n * iteration over the zsetopval. The dirty flag for the long long value is\n * special, since long long values don't need cleanup. Instead, it means that\n * we already checked that \"ell\" holds a long long, or tried to convert another\n * representation into a long long value. When this was successful,\n * OPVAL_VALID_LL is set as well. */\n#define OPVAL_DIRTY_SDS 1\n#define OPVAL_DIRTY_LL 2\n#define OPVAL_VALID_LL 4\n\n/* Store value retrieved from the iterator. */\ntypedef struct {\n    int flags;\n    unsigned char _buf[32]; /* Private buffer. 
*/\n    sds ele;\n    unsigned char *estr;\n    unsigned int elen;\n    long long ell;\n    double score;\n} zsetopval;\n\ntypedef union _iterset iterset;\ntypedef union _iterzset iterzset;\n\nvoid zuiInitIterator(zsetopsrc *op) {\n    if (op->subject == NULL)\n        return;\n\n    if (op->type == OBJ_SET) {\n        iterset *it = &op->iter.set;\n        if (op->encoding == OBJ_ENCODING_INTSET) {\n            it->is.is = op->subject->ptr;\n            it->is.ii = 0;\n        } else if (op->encoding == OBJ_ENCODING_HT) {\n            it->ht.dict = op->subject->ptr;\n            it->ht.di = dictGetIterator(op->subject->ptr);\n            it->ht.de = dictNext(it->ht.di);\n        } else if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            it->lp.lp = op->subject->ptr;\n            it->lp.p = lpFirst(it->lp.lp);\n        } else {\n            serverPanic(\"Unknown set encoding\");\n        }\n    } else if (op->type == OBJ_ZSET) {\n        /* Sorted sets are traversed in reverse order to optimize for\n         * the insertion of the elements in a new list as in\n         * ZDIFF/ZINTER/ZUNION */\n        iterzset *it = &op->iter.zset;\n        if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            it->zl.zl = op->subject->ptr;\n            it->zl.eptr = lpSeek(it->zl.zl,-2);\n            if (it->zl.eptr != NULL) {\n                it->zl.sptr = lpNext(it->zl.zl,it->zl.eptr);\n                serverAssert(it->zl.sptr != NULL);\n            }\n        } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {\n            it->sl.zs = op->subject->ptr;\n            it->sl.node = it->sl.zs->zsl->tail;\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else {\n        serverPanic(\"Unsupported type\");\n    }\n}\n\nvoid zuiClearIterator(zsetopsrc *op) {\n    if (op->subject == NULL)\n        return;\n\n    if (op->type == OBJ_SET) {\n        iterset *it = &op->iter.set;\n        if (op->encoding == OBJ_ENCODING_INTSET) {\n      
      UNUSED(it); /* skip */\n        } else if (op->encoding == OBJ_ENCODING_HT) {\n            dictReleaseIterator(it->ht.di);\n        } else if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            UNUSED(it);\n        } else {\n            serverPanic(\"Unknown set encoding\");\n        }\n    } else if (op->type == OBJ_ZSET) {\n        iterzset *it = &op->iter.zset;\n        if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            UNUSED(it); /* skip */\n        } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {\n            UNUSED(it); /* skip */\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else {\n        serverPanic(\"Unsupported type\");\n    }\n}\n\nvoid zuiDiscardDirtyValue(zsetopval *val) {\n    if (val->flags & OPVAL_DIRTY_SDS) {\n        sdsfree(val->ele);\n        val->ele = NULL;\n        val->flags &= ~OPVAL_DIRTY_SDS;\n    }\n}\n\nunsigned long zuiLength(zsetopsrc *op) {\n    if (op->subject == NULL)\n        return 0;\n\n    if (op->type == OBJ_SET) {\n        return setTypeSize(op->subject);\n    } else if (op->type == OBJ_ZSET) {\n        if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            return zzlLength(op->subject->ptr);\n        } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = op->subject->ptr;\n            return zs->zsl->length;\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else {\n        serverPanic(\"Unsupported type\");\n    }\n}\n\n/* Check if the current value is valid. If so, store it in the passed structure\n * and move to the next element. If not valid, this means we have reached the\n * end of the structure and can abort. 
*/\nint zuiNext(zsetopsrc *op, zsetopval *val) {\n    if (op->subject == NULL)\n        return 0;\n\n    zuiDiscardDirtyValue(val);\n\n    memset(val,0,sizeof(zsetopval));\n\n    if (op->type == OBJ_SET) {\n        iterset *it = &op->iter.set;\n        if (op->encoding == OBJ_ENCODING_INTSET) {\n            int64_t ell;\n\n            if (!intsetGet(it->is.is,it->is.ii,&ell))\n                return 0;\n            val->ell = ell;\n            val->score = 1.0;\n\n            /* Move to next element. */\n            it->is.ii++;\n        } else if (op->encoding == OBJ_ENCODING_HT) {\n            if (it->ht.de == NULL)\n                return 0;\n            val->ele = dictGetKey(it->ht.de);\n            val->score = 1.0;\n\n            /* Move to next element. */\n            it->ht.de = dictNext(it->ht.di);\n        } else if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            if (it->lp.p == NULL)\n                return 0;\n            val->estr = lpGetValue(it->lp.p, &val->elen, &val->ell);\n            val->score = 1.0;\n\n            /* Move to next element. */\n            it->lp.p = lpNext(it->lp.lp, it->lp.p);\n        } else {\n            serverPanic(\"Unknown set encoding\");\n        }\n    } else if (op->type == OBJ_ZSET) {\n        iterzset *it = &op->iter.zset;\n        if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            /* No need to check both, but better be explicit. */\n            if (it->zl.eptr == NULL || it->zl.sptr == NULL)\n                return 0;\n            val->estr = lpGetValue(it->zl.eptr,&val->elen,&val->ell);\n            val->score = zzlGetScore(it->zl.sptr);\n\n            /* Move to next element (going backwards, see zuiInitIterator). 
*/\n            zzlPrev(it->zl.zl,&it->zl.eptr,&it->zl.sptr);\n        } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {\n            if (it->sl.node == NULL)\n                return 0;\n            val->ele = zslGetNodeElement(it->sl.node);\n            val->score = it->sl.node->score;\n\n            /* Move to next element. (going backwards, see zuiInitIterator) */\n            it->sl.node = it->sl.node->backward;\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else {\n        serverPanic(\"Unsupported type\");\n    }\n    return 1;\n}\n\nint zuiLongLongFromValue(zsetopval *val) {\n    if (!(val->flags & OPVAL_DIRTY_LL)) {\n        val->flags |= OPVAL_DIRTY_LL;\n\n        if (val->ele != NULL) {\n            if (string2ll(val->ele,sdslen(val->ele),&val->ell))\n                val->flags |= OPVAL_VALID_LL;\n        } else if (val->estr != NULL) {\n            if (string2ll((char*)val->estr,val->elen,&val->ell))\n                val->flags |= OPVAL_VALID_LL;\n        } else {\n            /* The long long was already set, flag as valid. */\n            val->flags |= OPVAL_VALID_LL;\n        }\n    }\n    return val->flags & OPVAL_VALID_LL;\n}\n\nsds zuiSdsFromValue(zsetopval *val) {\n    if (val->ele == NULL) {\n        if (val->estr != NULL) {\n            val->ele = sdsnewlen((char*)val->estr,val->elen);\n        } else {\n            val->ele = sdsfromlonglong(val->ell);\n        }\n        val->flags |= OPVAL_DIRTY_SDS;\n    }\n    return val->ele;\n}\n\n/* This is different from zuiSdsFromValue since it returns a new SDS string\n * which is up to the caller to free. */\nsds zuiNewSdsFromValue(zsetopval *val) {\n    if (val->flags & OPVAL_DIRTY_SDS) {\n        /* We already have one to return! 
*/\n        sds ele = val->ele;\n        val->flags &= ~OPVAL_DIRTY_SDS;\n        val->ele = NULL;\n        return ele;\n    } else if (val->ele) {\n        return sdsdup(val->ele);\n    } else if (val->estr) {\n        return sdsnewlen((char*)val->estr,val->elen);\n    } else {\n        return sdsfromlonglong(val->ell);\n    }\n}\n\nint zuiBufferFromValue(zsetopval *val) {\n    if (val->estr == NULL) {\n        if (val->ele != NULL) {\n            val->elen = sdslen(val->ele);\n            val->estr = (unsigned char*)val->ele;\n        } else {\n            val->elen = ll2string((char*)val->_buf,sizeof(val->_buf),val->ell);\n            val->estr = val->_buf;\n        }\n    }\n    return 1;\n}\n\n/* Find the value pointed to by val in the source pointed to by op. When found,\n * return 1 and store its score in *score. Return 0 otherwise. */\nint zuiFind(zsetopsrc *op, zsetopval *val, double *score) {\n    if (op->subject == NULL)\n        return 0;\n\n    if (op->type == OBJ_SET) {\n        char *str = val->ele ? val->ele : (char *)val->estr;\n        size_t len = val->ele ? sdslen(val->ele) : val->elen;\n        if (setTypeIsMemberAux(op->subject, str, len, val->ell, val->ele != NULL)) {\n            *score = 1.0;\n            return 1;\n        } else {\n            return 0;\n        }\n    } else if (op->type == OBJ_ZSET) {\n        zuiSdsFromValue(val);\n\n        if (op->encoding == OBJ_ENCODING_LISTPACK) {\n            if (zzlFind(op->subject->ptr,val->ele,score) != NULL) {\n                /* Score is already set by zzlFind. 
*/\n                return 1;\n            } else {\n                return 0;\n            }\n        } else if (op->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = op->subject->ptr;\n            dictEntry *de;\n            if ((de = dictFind(zs->dict,val->ele)) != NULL) {\n                zskiplistNode *znode = dictGetKey(de);\n                *score = znode->score;\n                return 1;\n            } else {\n                return 0;\n            }\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n    } else {\n        serverPanic(\"Unsupported type\");\n    }\n}\n\nint zuiCompareByCardinality(const void *s1, const void *s2) {\n    unsigned long first = zuiLength((zsetopsrc*)s1);\n    unsigned long second = zuiLength((zsetopsrc*)s2);\n    if (first > second) return 1;\n    if (first < second) return -1;\n    return 0;\n}\n\nstatic int zuiCompareByRevCardinality(const void *s1, const void *s2) {\n    return zuiCompareByCardinality(s1, s2) * -1;\n}\n\n#define REDIS_AGGR_SUM 1\n#define REDIS_AGGR_MIN 2\n#define REDIS_AGGR_MAX 3\n#define REDIS_AGGR_COUNT 4\n\n/* Return the weighted contribution of a single sorted set member.\n * For COUNT aggregation the actual score is irrelevant — each member\n * contributes its set's weight (i.e. \"one occurrence worth <weight>\").\n * For all other aggregation modes the contribution is weight * score. */\ninline static double zuiWeightedScore(double score, double weight, int aggregate) {\n    return (aggregate == REDIS_AGGR_COUNT) ? weight : weight * score;\n}\n\ninline static void zunionInterAggregate(double *target, double val, int aggregate) {\n    if (aggregate == REDIS_AGGR_SUM) {\n        *target = *target + val;\n        /* The result of adding two doubles is NaN when one variable\n         * is +inf and the other is -inf. When these numbers are added,\n         * we maintain the convention of the result being 0.0. 
*/\n        if (isnan(*target)) *target = 0.0;\n    } else if (aggregate == REDIS_AGGR_COUNT) {\n        *target += val;\n        /* The val is zuiWeightedScore(…) == weight, which can be +inf/-inf,\n         * so the NaN guard applies here. */\n        if (isnan(*target)) *target = 0.0;\n    } else if (aggregate == REDIS_AGGR_MIN) {\n        *target = val < *target ? val : *target;\n    } else if (aggregate == REDIS_AGGR_MAX) {\n        *target = val > *target ? val : *target;\n    } else {\n        /* safety net */\n        serverPanic(\"Unknown ZUNION/INTER aggregate type\");\n    }\n}\n\nstatic size_t zsetDictGetMaxElementLength(dict *d, size_t *totallen) {\n    dictIterator di;\n    dictEntry *de;\n    size_t maxelelen = 0;\n\n    dictInitIterator(&di, d);\n\n    while((de = dictNext(&di)) != NULL) {\n        /* Extract sds from the node (key is zskiplistNode*) */\n        zskiplistNode *znode = dictGetKey(de);\n        sds ele = zslGetNodeElement(znode);\n        if (sdslen(ele) > maxelelen) maxelelen = sdslen(ele);\n        if (totallen)\n            (*totallen) += sdslen(ele);\n    }\n\n    dictResetIterator(&di);\n\n    return maxelelen;\n}\n\nstatic void zdiffAlgorithm1(zsetopsrc *src, long setnum, zset *dstzset, size_t *maxelelen, size_t *totelelen) {\n    /* DIFF Algorithm 1:\n     *\n     * We perform the diff by iterating all the elements of the first set,\n     * adding each element to the target set only if it does not exist\n     * in any of the other sets.\n     *\n     * This way we perform at most N*M operations, where N is the size of\n     * the first set, and M the number of sets.\n     *\n     * There is also an O(K*log(K)) cost for adding the resulting elements\n     * to the target set, where K is the final size of the target set.\n     *\n     * The final complexity of this algorithm is O(N*M + K*log(K)). 
*/\n    int j;\n    zsetopval zval;\n    zskiplistNode *znode;\n    sds tmp;\n\n    /* With algorithm 1 it is better to order the sets to subtract\n     * by decreasing size, so that we are more likely to find\n     * duplicated elements ASAP. */\n    qsort(src+1,setnum-1,sizeof(zsetopsrc),zuiCompareByRevCardinality);\n\n    memset(&zval, 0, sizeof(zval));\n    zuiInitIterator(&src[0]);\n    while (zuiNext(&src[0],&zval)) {\n        double value;\n        int exists = 0;\n\n        for (j = 1; j < setnum; j++) {\n            /* It is not safe to access the zset we are\n             * iterating, so explicitly check for equal object.\n             * This check isn't really needed anymore since we already\n             * check for a duplicate set in the zsetChooseDiffAlgorithm\n             * function, but we're leaving it for future-proofing. */\n            if (src[j].subject == src[0].subject ||\n                zuiFind(&src[j],&zval,&value)) {\n                exists = 1;\n                break;\n            }\n        }\n\n        if (!exists) {\n            tmp = zuiNewSdsFromValue(&zval);\n            znode = zslInsert(dstzset->zsl,zval.score,tmp);\n            dictAdd(dstzset->dict, znode, NULL);\n            if (sdslen(tmp) > *maxelelen) *maxelelen = sdslen(tmp);\n            (*totelelen) += sdslen(tmp);\n            sdsfree(tmp); /* zslInsert copied it, we can free our copy */\n        }\n    }\n    zuiClearIterator(&src[0]);\n}\n\nstatic void zdiffAlgorithm2(zsetopsrc *src, long setnum, zset *dstzset, size_t *maxelelen, size_t *totelelen) {\n    /* DIFF Algorithm 2:\n     *\n     * Add all the elements of the first set to the auxiliary set.\n     * Then remove all the elements of all the next sets from it.\n     *\n     * This is O(L + (N-K)log(N)) where L is the total number of elements in\n     * all the sets, N is the size of the first set, and K is the size of the\n     * result set.\n     *\n     * Note that from the (L-N) dict searches, (N-K) got to the 
zsetRemoveFromSkiplist\n     * which costs log(N)\n     *\n     * There is also an O(K) cost at the end for finding the largest element\n     * size, but this doesn't change the algorithm complexity since K < L, and\n     * O(2L) is the same as O(L). */\n    int j;\n    int cardinality = 0;\n    zsetopval zval;\n    zskiplistNode *znode;\n    sds tmp;\n\n    for (j = 0; j < setnum; j++) {\n        if (zuiLength(&src[j]) == 0) continue;\n\n        memset(&zval, 0, sizeof(zval));\n        zuiInitIterator(&src[j]);\n        while (zuiNext(&src[j],&zval)) {\n            if (j == 0) {\n                tmp = zuiNewSdsFromValue(&zval);\n                znode = zslInsert(dstzset->zsl,zval.score,tmp);\n                dictAdd(dstzset->dict, znode, NULL);\n                cardinality++;\n                sdsfree(tmp); /* zslInsert copied it, we can free our copy */\n            } else {\n                dictPauseAutoResize(dstzset->dict);\n                tmp = zuiSdsFromValue(&zval);\n                if (zsetRemoveFromSkiplist(dstzset, tmp)) {\n                    cardinality--;\n                }\n                dictResumeAutoResize(dstzset->dict);\n            }\n\n            /* Exit if result set is empty as any additional removal\n             * of elements will have no effect. */\n            if (cardinality == 0) {\n                zuiDiscardDirtyValue(&zval);\n                break;\n            }\n        }\n        zuiClearIterator(&src[j]);\n\n        if (cardinality == 0) break;\n    }\n\n    /* Resize dict if needed after removing multiple elements */\n    dictShrinkIfNeeded(dstzset->dict);\n\n    /* Using this algorithm, we can't calculate the max element as we go;\n     * we have to iterate through all elements to find the max one after. 
*/\n    *maxelelen = zsetDictGetMaxElementLength(dstzset->dict, totelelen);\n}\n\nstatic int zsetChooseDiffAlgorithm(zsetopsrc *src, long setnum) {\n    int j;\n\n    /* Select what DIFF algorithm to use.\n     *\n     * Algorithm 1 is O(N*M + K*log(K)) where N is the size of the\n     * first set, M the total number of sets, and K is the size of the\n     * result set.\n     *\n     * Algorithm 2 is O(L + (N-K)log(N)) where L is the total number of elements\n     * in all the sets, N is the size of the first set, and K is the size of the\n     * result set.\n     *\n     * We compute what is the best bet with the current input here. */\n    long long algo_one_work = 0;\n    long long algo_two_work = 0;\n\n    for (j = 0; j < setnum; j++) {\n        /* If any other set is equal to the first set, there is nothing to be\n         * done, since we would remove all elements anyway. */\n        if (j > 0 && src[0].subject == src[j].subject) {\n            return 0;\n        }\n\n        algo_one_work += zuiLength(&src[0]);\n        algo_two_work += zuiLength(&src[j]);\n    }\n\n    /* Algorithm 1 has better constant times and performs less operations\n     * if there are elements in common. Give it some advantage. */\n    algo_one_work /= 2;\n    return (algo_one_work <= algo_two_work) ? 1 : 2;\n}\n\nstatic void zdiff(zsetopsrc *src, long setnum, zset *dstzset, size_t *maxelelen, size_t *totelelen) {\n    /* Skip everything if the smallest input is empty. 
*/\n    if (zuiLength(&src[0]) > 0) {\n        int diff_algo = zsetChooseDiffAlgorithm(src, setnum);\n        if (diff_algo == 1) {\n            zdiffAlgorithm1(src, setnum, dstzset, maxelelen, totelelen);\n        } else if (diff_algo == 2) {\n            zdiffAlgorithm2(src, setnum, dstzset, maxelelen, totelelen);\n        } else if (diff_algo != 0) {\n            serverPanic(\"Unknown algorithm\");\n        }\n    }\n}\n\n/* The zunionInterDiffGenericCommand() function is called in order to implement the\n * following commands: ZUNION, ZINTER, ZDIFF, ZUNIONSTORE, ZINTERSTORE, ZDIFFSTORE,\n * ZINTERCARD.\n *\n * 'numkeysIndex' is the position of the numkeys argument: for the\n * ZUNION/ZINTER/ZDIFF commands this value is 1, for the\n * ZUNIONSTORE/ZINTERSTORE/ZDIFFSTORE commands it is 2.\n *\n * 'op' is one of SET_OP_INTER, SET_OP_UNION or SET_OP_DIFF.\n *\n * 'cardinality_only' is currently only applicable when 'op' is SET_OP_INTER.\n * It is used for ZINTERCARD: only the cardinality is returned, with minimum\n * processing and memory overheads.\n */\nvoid zunionInterDiffGenericCommand(client *c, robj *dstkey, int numkeysIndex, int op,\n                                   int cardinality_only) {\n    int i, j;\n    long setnum;\n    int aggregate = REDIS_AGGR_SUM;\n    zsetopsrc *src;\n    zsetopval zval;\n    sds tmp;\n    size_t maxelelen = 0, totelelen = 0;\n    robj *dstobj = NULL;\n    zset *dstzset = NULL;\n    zskiplistNode *znode;\n    int withscores = 0;\n    unsigned long cardinality = 0;\n    long limit = 0; /* Stop searching after reaching the limit. 0 means unlimited. 
*/\n\n    /* expect setnum input keys to be given */\n    if ((getLongFromObjectOrReply(c, c->argv[numkeysIndex], &setnum, NULL) != C_OK))\n        return;\n\n    if (setnum < 1) {\n        addReplyErrorFormat(c,\n            \"at least 1 input key is needed for '%s' command\", c->cmd->fullname);\n        return;\n    }\n\n    /* test if the expected number of keys would overflow */\n    if (setnum > (c->argc-(numkeysIndex+1))) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    /* Try to allocate the src table, and abort on insufficient memory. */\n    src = ztrycalloc(sizeof(zsetopsrc) * setnum);\n    if (src == NULL) {\n        addReplyError(c, \"Insufficient memory, failed allocating transient memory, too many args.\");\n        return;\n    }\n\n    /* read keys to be used for input */\n    for (i = 0, j = numkeysIndex+1; i < setnum; i++, j++) {\n        kvobj *obj = lookupKeyRead(c->db, c->argv[j]);\n        if (obj != NULL) {\n            if (obj->type != OBJ_ZSET && obj->type != OBJ_SET) {\n                zfree(src);\n                addReplyErrorObject(c,shared.wrongtypeerr);\n                return;\n            }\n\n            src[i].subject = obj;\n            src[i].type = obj->type;\n            src[i].encoding = obj->encoding;\n            if (server.memory_tracking_enabled)\n                src[i].oldsize = kvobjAllocSize(obj);\n        } else {\n            src[i].subject = NULL;\n        }\n\n        /* Default all weights to 1. 
*/\n        src[i].weight = 1.0;\n    }\n\n    /* parse optional extra arguments */\n    if (j < c->argc) {\n        int remaining = c->argc - j;\n\n        while (remaining) {\n            if (op != SET_OP_DIFF && !cardinality_only &&\n                remaining >= (setnum + 1) &&\n                !strcasecmp(c->argv[j]->ptr,\"weights\"))\n            {\n                j++; remaining--;\n                for (i = 0; i < setnum; i++, j++, remaining--) {\n                    if (getDoubleFromObjectOrReply(c,c->argv[j],&src[i].weight,\n                            \"weight value is not a float\") != C_OK)\n                    {\n                        zfree(src);\n                        return;\n                    }\n                }\n            } else if (op != SET_OP_DIFF && !cardinality_only &&\n                       remaining >= 2 &&\n                       !strcasecmp(c->argv[j]->ptr,\"aggregate\"))\n            {\n                j++; remaining--;\n                if (!strcasecmp(c->argv[j]->ptr,\"sum\")) {\n                    aggregate = REDIS_AGGR_SUM;\n                } else if (!strcasecmp(c->argv[j]->ptr,\"min\")) {\n                    aggregate = REDIS_AGGR_MIN;\n                } else if (!strcasecmp(c->argv[j]->ptr,\"max\")) {\n                    aggregate = REDIS_AGGR_MAX;\n                } else if (!strcasecmp(c->argv[j]->ptr,\"count\")) {\n                    aggregate = REDIS_AGGR_COUNT;\n                } else {\n                    zfree(src);\n                    addReplyErrorObject(c,shared.syntaxerr);\n                    return;\n                }\n                j++; remaining--;\n            } else if (remaining >= 1 &&\n                       !dstkey && !cardinality_only &&\n                       !strcasecmp(c->argv[j]->ptr,\"withscores\"))\n            {\n                j++; remaining--;\n                withscores = 1;\n            } else if (cardinality_only && remaining >= 2 &&\n                       
!strcasecmp(c->argv[j]->ptr, \"limit\"))\n            {\n                j++; remaining--;\n                if (getPositiveLongFromObjectOrReply(c, c->argv[j], &limit,\n                                                     \"LIMIT can't be negative\") != C_OK)\n                {\n                    zfree(src);\n                    return;\n                }\n                j++; remaining--;\n            } else {\n                zfree(src);\n                addReplyErrorObject(c,shared.syntaxerr);\n                return;\n            }\n        }\n    }\n\n    if (op != SET_OP_DIFF) {\n        /* Sort sets from the smallest to largest; this will improve our\n         * algorithm's performance. */\n        qsort(src,setnum,sizeof(zsetopsrc),zuiCompareByCardinality);\n    }\n\n    /* We need a temp zset object to store our union/inter/diff. If the dstkey\n     * is not NULL (that is, we are inside a ZUNIONSTORE/ZINTERSTORE/ZDIFFSTORE\n     * operation) then this zset object will be the resulting object stored into\n     * the target key.\n     * In the ZINTERCARD case, we don't need the temp obj, so we can avoid creating it. */\n    if (!cardinality_only) {\n        dstobj = createZsetObject();\n        dstzset = dstobj->ptr;\n    }\n    memset(&zval, 0, sizeof(zval));\n\n    if (op == SET_OP_INTER) {\n        /* Skip everything if the smallest input is empty. */\n        if (zuiLength(&src[0]) > 0) {\n            /* Precondition: as src[0] is non-empty and the inputs are ordered\n             * by size, all src[i > 0] are non-empty too. */\n            zuiInitIterator(&src[0]);\n            while (zuiNext(&src[0],&zval)) {\n                double score, value;\n\n                score = zuiWeightedScore(zval.score, src[0].weight, aggregate);\n                if (isnan(score)) score = 0;\n\n                for (j = 1; j < setnum; j++) {\n                    /* It is not safe to access the zset we are\n                     * iterating, so explicitly check for equal object. 
*/\n                    if (src[j].subject == src[0].subject) {\n                        value = zuiWeightedScore(zval.score, src[j].weight, aggregate);\n                        zunionInterAggregate(&score,value,aggregate);\n                    } else if (zuiFind(&src[j],&zval,&value)) {\n                        value = zuiWeightedScore(value, src[j].weight, aggregate);\n                        zunionInterAggregate(&score,value,aggregate);\n                    } else {\n                        break;\n                    }\n                }\n\n                /* Only continue when present in every input. */\n                if (j == setnum && cardinality_only) {\n                    cardinality++;\n\n                    /* We stop searching after reaching the limit. */\n                    if (limit && cardinality >= (unsigned long)limit) {\n                        /* Cleanup before we break the zuiNext loop. */\n                        zuiDiscardDirtyValue(&zval);\n                        break;\n                    }\n                } else if (j == setnum) {\n                    tmp = zuiNewSdsFromValue(&zval);\n                    znode = zslInsert(dstzset->zsl,score,tmp);\n                    dictAdd(dstzset->dict, znode, NULL);\n                    totelelen += sdslen(tmp);\n                    if (sdslen(tmp) > maxelelen) maxelelen = sdslen(tmp);\n                    sdsfree(tmp); /* zslInsert copied it, we can free our copy */\n                }\n            }\n            zuiClearIterator(&src[0]);\n        }\n    } else if (op == SET_OP_UNION) {\n        dictIterator di;\n        dictEntry *de;\n        double score;\n\n        if (setnum) {\n            /* Our union is at least as large as the largest set.\n             * Resize the dictionary ASAP to avoid useless rehashing. 
*/\n            dictExpand(dstzset->dict,zuiLength(&src[setnum-1]));\n        }\n\n        /* Step 1: Iterate all sorted sets and aggregate scores.\n         * For each element, either create a node and add it to the dict (new)\n         * or aggregate its score (existing). */\n        for (i = 0; i < setnum; i++) {\n            if (zuiLength(&src[i]) == 0) continue;\n\n            zuiInitIterator(&src[i]);\n            while (zuiNext(&src[i],&zval)) {\n                /* Initialize value */\n                score = zuiWeightedScore(zval.score, src[i].weight, aggregate);\n                if (isnan(score)) score = 0;\n\n                /* Search for this element in the dict (which stores node pointers). */\n                dictEntryLink bucket, link;\n                link = dictFindLink(dstzset->dict, zuiSdsFromValue(&zval), &bucket);\n\n                if (link == NULL) {  /* does not exist yet */\n                    /* New element: create node and insert into dict */\n                    tmp = zuiNewSdsFromValue(&zval);\n                    /* Remember the longest single element encountered,\n                     * to understand if it's possible to convert to listpack\n                     * at the end. 
*/\n                    totelelen += sdslen(tmp);\n                    if (sdslen(tmp) > maxelelen) maxelelen = sdslen(tmp);\n\n                    /* Create node with embedded sds and score */\n                    znode = zslCreateNode(dstzset->zsl, zslRandomLevel(), score, tmp);\n                    /* Add node pointer to dict using the bucket we already found */\n                    dictSetKeyAtLink(dstzset->dict, znode, &bucket, 1);\n                    sdsfree(tmp); /* zslCreateNode copied it, we can free our copy */\n                } else {\n                    /* Existing element: aggregate score */\n                    de = *link;\n                    znode = dictGetKey(de);\n                    double newscore = znode->score;\n                    zunionInterAggregate(&newscore, score, aggregate);\n                    znode->score = newscore;\n                }\n            }\n            zuiClearIterator(&src[i]);\n        }\n\n        /* Step 2: Done filling the dict with nodes and updating scores.\n         * Now insert the nodes into the skiplist. */\n        dictInitIterator(&di, dstzset->dict);\n\n        while((de = dictNext(&di)) != NULL) {\n            zskiplistNode *znode = dictGetKey(de);\n            zslInsertNode(dstzset->zsl, znode);\n        }\n        dictResetIterator(&di);\n    } else if (op == SET_OP_DIFF) {\n        zdiff(src, setnum, dstzset, &maxelelen, &totelelen);\n    } else {\n        serverPanic(\"Unknown operator\");\n    }\n    if (server.memory_tracking_enabled) {\n        for (i = 0; i < setnum; i++) {\n            robj *obj = src[i].subject;\n            if (obj == NULL) continue;\n            updateSlotAllocSize(c->db, getKeySlot(kvobjGetKey(obj)), obj,\n                                src[i].oldsize, kvobjAllocSize(obj));\n        }\n    }\n\n    if (dstkey) {\n        if (dstzset->zsl->length) {\n            zsetConvertToListpackIfNeeded(dstobj, maxelelen, totelelen);\n            setKey(c, c->db, dstkey, &dstobj, 0);\n            addReplyLongLong(c, zsetLength(dstobj));\n            notifyKeyspaceEvent(NOTIFY_ZSET,\n                                (op == SET_OP_UNION) ? \"zunionstore\" :\n                                    (op == SET_OP_INTER ? \"zinterstore\" : \"zdiffstore\"),\n                                dstkey, c->db->id);\n            server.dirty++;\n        } else {\n            addReply(c, shared.czero);\n            if (dbDelete(c->db, dstkey)) {\n                keyModified(c, c->db, dstkey, NULL, 1);\n                notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", dstkey, c->db->id);\n                server.dirty++;\n            }\n            decrRefCount(dstobj);\n        }\n    } else if (cardinality_only) {\n        addReplyLongLong(c, cardinality);\n    } else {\n        unsigned long length = dstzset->zsl->length;\n        zskiplist *zsl = dstzset->zsl;\n        zskiplistNode *zn = zsl->header->level[0].forward;\n        /* In case of WITHSCORES, respond with a single array in RESP2, and\n         * nested arrays in RESP3. 
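The RESP2/RESP3 reply-shape rule described in the comments here reduces to simple arithmetic: a RESP2 WITHSCORES reply is one flat array interleaving members and scores (2*N entries), while RESP3 sends N two-element arrays. `reply_array_len` is a hypothetical helper for illustration, not part of the source:

```c
#include <assert.h>

/* Top-level reply array length for N results (sketch): RESP2 interleaves
 * member,score pairs into one flat array; RESP3 nests each pair, so the
 * top-level length stays N. */
static long reply_array_len(long nresults, int withscores, int resp) {
    if (withscores && resp == 2) return nresults * 2;
    return nresults;
}
```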
We can't use a map response type since the\n         * client library needs to know to respect the order. */\n        if (withscores && c->resp == 2)\n            addReplyArrayLen(c, length*2);\n        else\n            addReplyArrayLen(c, length);\n\n        while (zn != NULL) {\n            if (withscores && c->resp > 2) addReplyArrayLen(c,2);\n            sds ele = zslGetNodeElement(zn);\n            addReplyBulkCBuffer(c,ele,sdslen(ele));\n            if (withscores) addReplyDouble(c,zn->score);\n            zn = zn->level[0].forward;\n        }\n        server.lazyfree_lazy_server_del ? freeObjAsync(NULL, dstobj, -1) :\n                                          decrRefCount(dstobj);\n    }\n    zfree(src);\n}\n\n/* ZUNIONSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] */\nvoid zunionstoreCommand(client *c) {\n    zunionInterDiffGenericCommand(c, c->argv[1], 2, SET_OP_UNION, 0);\n}\n\n/* ZINTERSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] */\nvoid zinterstoreCommand(client *c) {\n    zunionInterDiffGenericCommand(c, c->argv[1], 2, SET_OP_INTER, 0);\n}\n\n/* ZDIFFSTORE destination numkeys key [key ...] */\nvoid zdiffstoreCommand(client *c) {\n    zunionInterDiffGenericCommand(c, c->argv[1], 2, SET_OP_DIFF, 0);\n}\n\n/* ZUNION numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] [WITHSCORES] */\nvoid zunionCommand(client *c) {\n    zunionInterDiffGenericCommand(c, NULL, 1, SET_OP_UNION, 0);\n}\n\n/* ZINTER numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX] [WITHSCORES] */\nvoid zinterCommand(client *c) {\n    zunionInterDiffGenericCommand(c, NULL, 1, SET_OP_INTER, 0);\n}\n\n/* ZINTERCARD numkeys key [key ...] [LIMIT limit] */\nvoid zinterCardCommand(client *c) {\n    zunionInterDiffGenericCommand(c, NULL, 1, SET_OP_INTER, 1);\n}\n\n/* ZDIFF numkeys key [key ...] 
[WITHSCORES] */\nvoid zdiffCommand(client *c) {\n    zunionInterDiffGenericCommand(c, NULL, 1, SET_OP_DIFF, 0);\n}\n\ntypedef enum {\n    ZRANGE_DIRECTION_AUTO = 0,\n    ZRANGE_DIRECTION_FORWARD,\n    ZRANGE_DIRECTION_REVERSE\n} zrange_direction;\n\ntypedef enum {\n    ZRANGE_CONSUMER_TYPE_CLIENT = 0,\n    ZRANGE_CONSUMER_TYPE_INTERNAL\n} zrange_consumer_type;\n\ntypedef struct zrange_result_handler zrange_result_handler;\n\ntypedef void (*zrangeResultBeginFunction)(zrange_result_handler *c, long length);\ntypedef void (*zrangeResultFinalizeFunction)(\n    zrange_result_handler *c, size_t result_count);\ntypedef void (*zrangeResultEmitCBufferFunction)(\n    zrange_result_handler *c, const void *p, size_t len, double score);\ntypedef void (*zrangeResultEmitLongLongFunction)(\n    zrange_result_handler *c, long long ll, double score);\n\nvoid zrangeGenericCommand (zrange_result_handler *handler, int argc_start, int store,\n                           zrange_type rangetype, zrange_direction direction);\n\n/* Interface struct for ZRANGE/ZRANGESTORE generic implementation.\n * There is one implementation of this interface that sends a RESP reply to clients,\n * and one implementation that stores the range result into a zset object. 
*/\nstruct zrange_result_handler {\n    zrange_consumer_type                 type;\n    client                              *client;\n    robj                                *dstkey;\n    robj                                *dstobj;\n    void                                *userdata;\n    int                                  withscores;\n    int                                  should_emit_array_length;\n    zrangeResultBeginFunction            beginResultEmission;\n    zrangeResultFinalizeFunction         finalizeResultEmission;\n    zrangeResultEmitCBufferFunction      emitResultFromCBuffer;\n    zrangeResultEmitLongLongFunction     emitResultFromLongLong;\n};\n\n/* Result handler methods for sending the ZRANGE reply to clients.\n * length can be used to provide the result length in advance (avoids deferred reply overhead).\n * length can be set to -1 if the result length is not known in advance.\n */\nstatic void zrangeResultBeginClient(zrange_result_handler *handler, long length) {\n    if (length > 0) {\n        /* In case of WITHSCORES, respond with a single array in RESP2, and\n         * nested arrays in RESP3. We can't use a map response type since the\n         * client library needs to know to respect the order. 
*/\n        if (handler->withscores && (handler->client->resp == 2)) {\n            length *= 2;\n        }\n        addReplyArrayLen(handler->client, length);\n        handler->userdata = NULL;\n        return;\n    }\n    handler->userdata = addReplyDeferredLen(handler->client);\n}\n\nstatic void zrangeResultEmitCBufferToClient(zrange_result_handler *handler,\n    const void *value, size_t value_length_in_bytes, double score)\n{\n    if (handler->should_emit_array_length) {\n        addReplyArrayLen(handler->client, 2);\n    }\n\n    addReplyBulkCBuffer(handler->client, value, value_length_in_bytes);\n\n    if (handler->withscores) {\n        addReplyDouble(handler->client, score);\n    }\n}\n\nstatic void zrangeResultEmitLongLongToClient(zrange_result_handler *handler,\n    long long value, double score)\n{\n    if (handler->should_emit_array_length) {\n        addReplyArrayLen(handler->client, 2);\n    }\n\n    addReplyBulkLongLong(handler->client, value);\n\n    if (handler->withscores) {\n        addReplyDouble(handler->client, score);\n    }\n}\n\nstatic void zrangeResultFinalizeClient(zrange_result_handler *handler,\n    size_t result_count)\n{\n    /* If the reply size was known at start there's nothing left to do */\n    if (!handler->userdata)\n        return;\n    /* In case of WITHSCORES, respond with a single array in RESP2, and\n     * nested arrays in RESP3. We can't use a map response type since the\n     * client library needs to know to respect the order. */\n    if (handler->withscores && (handler->client->resp == 2)) {\n        result_count *= 2;\n    }\n\n    setDeferredArrayLen(handler->client, handler->userdata, result_count);\n}\n\n/* Result handler methods for storing the ZRANGESTORE result into a zset. */\nstatic void zrangeResultBeginStore(zrange_result_handler *handler, long length)\n{\n    handler->dstobj = zsetTypeCreate(length >= 0 ? 
length : 0, 0);\n}\n\nstatic void zrangeResultEmitCBufferForStore(zrange_result_handler *handler,\n    const void *value, size_t value_length_in_bytes, double score)\n{\n    double newscore;\n    int retflags = 0;\n    sds ele = sdsnewlen(value, value_length_in_bytes);\n    int retval = zsetAdd(handler->dstobj, score, ele, ZADD_IN_NONE, &retflags, &newscore);\n    sdsfree(ele);\n    serverAssert(retval);\n}\n\nstatic void zrangeResultEmitLongLongForStore(zrange_result_handler *handler,\n    long long value, double score)\n{\n    double newscore;\n    int retflags = 0;\n    sds ele = sdsfromlonglong(value);\n    int retval = zsetAdd(handler->dstobj, score, ele, ZADD_IN_NONE, &retflags, &newscore);\n    sdsfree(ele);\n    serverAssert(retval);\n}\n\nstatic void zrangeResultFinalizeStore(zrange_result_handler *handler, size_t result_count)\n{\n    if (result_count) {\n        setKey(handler->client, handler->client->db, handler->dstkey, &handler->dstobj, 0);\n        addReplyLongLong(handler->client, result_count);\n        notifyKeyspaceEvent(NOTIFY_ZSET, \"zrangestore\", handler->dstkey, handler->client->db->id);\n        server.dirty++;\n    } else {\n        addReply(handler->client, shared.czero);\n        if (dbDelete(handler->client->db, handler->dstkey)) {\n            keyModified(handler->client, handler->client->db, handler->dstkey, NULL, 1);\n            notifyKeyspaceEvent(NOTIFY_GENERIC, \"del\", handler->dstkey, handler->client->db->id);\n            server.dirty++;\n        }\n        decrRefCount(handler->dstobj);\n    }\n}\n\n/* Initialize the consumer interface type with the requested type. 
*/\nstatic void zrangeResultHandlerInit(zrange_result_handler *handler,\n    client *client, zrange_consumer_type type)\n{\n    memset(handler, 0, sizeof(*handler));\n\n    handler->client = client;\n\n    switch (type) {\n    case ZRANGE_CONSUMER_TYPE_CLIENT:\n        handler->beginResultEmission = zrangeResultBeginClient;\n        handler->finalizeResultEmission = zrangeResultFinalizeClient;\n        handler->emitResultFromCBuffer = zrangeResultEmitCBufferToClient;\n        handler->emitResultFromLongLong = zrangeResultEmitLongLongToClient;\n        break;\n\n    case ZRANGE_CONSUMER_TYPE_INTERNAL:\n        handler->beginResultEmission = zrangeResultBeginStore;\n        handler->finalizeResultEmission = zrangeResultFinalizeStore;\n        handler->emitResultFromCBuffer = zrangeResultEmitCBufferForStore;\n        handler->emitResultFromLongLong = zrangeResultEmitLongLongForStore;\n        break;\n    }\n}\n\nstatic void zrangeResultHandlerScoreEmissionEnable(zrange_result_handler *handler) {\n    handler->withscores = 1;\n    handler->should_emit_array_length = (handler->client->resp > 2);\n}\n\nstatic void zrangeResultHandlerDestinationKeySet (zrange_result_handler *handler,\n    robj *dstkey)\n{\n    handler->dstkey = dstkey;\n}\n\n/* This command implements ZRANGE, ZREVRANGE. */\nvoid genericZrangebyrankCommand(zrange_result_handler *handler,\n    robj *zobj, long start, long end, int withscores, int reverse) {\n\n    client *c = handler->client;\n    long llen;\n    long rangelen;\n    size_t result_cardinality;\n\n    /* Sanitize indexes. */\n    llen = zsetLength(zobj);\n    if (start < 0) start = llen+start;\n    if (end < 0) end = llen+end;\n    if (start < 0) start = 0;\n\n\n    /* Invariant: start >= 0, so this test will be true when end < 0.\n     * The range is empty when start > end or start >= length. 
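The index sanitization performed by genericZrangebyrankCommand can be condensed into a pure function. `rank_range_len` is a hypothetical helper, shown only to illustrate the clamping rules; it returns how many elements a rank range selects (0 when empty):

```c
#include <assert.h>

/* Sketch of ZRANGE rank sanitization: negative indexes count from the
 * end; start is clamped to 0 and end to len-1; the range is empty when
 * start > end or start >= len. */
static long rank_range_len(long start, long end, long llen) {
    if (start < 0) start = llen + start;
    if (end < 0) end = llen + end;
    if (start < 0) start = 0;
    if (start > end || start >= llen) return 0;
    if (end >= llen) end = llen - 1;
    return end - start + 1;
}
```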
*/\n    if (start > end || start >= llen) {\n        handler->beginResultEmission(handler, 0);\n        handler->finalizeResultEmission(handler, 0);\n        return;\n    }\n    if (end >= llen) end = llen-1;\n    rangelen = (end-start)+1;\n    result_cardinality = rangelen;\n\n    handler->beginResultEmission(handler, rangelen);\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr;\n        unsigned int vlen;\n        long long vlong;\n        double score = 0.0;\n\n        if (reverse)\n            eptr = lpSeek(zl,-2-(2*start));\n        else\n            eptr = lpSeek(zl,2*start);\n\n        serverAssertWithInfo(c,zobj,eptr != NULL);\n        sptr = lpNext(zl,eptr);\n\n        while (rangelen--) {\n            serverAssertWithInfo(c,zobj,eptr != NULL && sptr != NULL);\n            vstr = lpGetValue(eptr,&vlen,&vlong);\n\n            if (withscores) /* don't bother to extract the score if it's gonna be ignored. */\n                score = zzlGetScore(sptr);\n\n            if (vstr == NULL) {\n                handler->emitResultFromLongLong(handler, vlong, score);\n            } else {\n                handler->emitResultFromCBuffer(handler, vstr, vlen, score);\n            }\n\n            if (reverse)\n                zzlPrev(zl,&eptr,&sptr);\n            else\n                zzlNext(zl,&eptr,&sptr);\n        }\n\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n\n        /* Check if starting point is trivial, before doing log(N) lookup. 
*/\n        if (reverse) {\n            ln = zsl->tail;\n            if (start > 0)\n                ln = zslGetElementByRank(zsl,llen-start);\n        } else {\n            ln = zsl->header->level[0].forward;\n            if (start > 0)\n                ln = zslGetElementByRank(zsl,start+1);\n        }\n\n        while(rangelen--) {\n            serverAssertWithInfo(c,zobj,ln != NULL);\n            sds ele = zslGetNodeElement(ln);\n            handler->emitResultFromCBuffer(handler, ele, sdslen(ele), ln->score);\n            ln = reverse ? ln->backward : ln->level[0].forward;\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    handler->finalizeResultEmission(handler, result_cardinality);\n}\n\n/* ZRANGESTORE <dst> <src> <min> <max> [BYSCORE | BYLEX] [REV] [LIMIT offset count] */\nvoid zrangestoreCommand (client *c) {\n    robj *dstkey = c->argv[1];\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_INTERNAL);\n    zrangeResultHandlerDestinationKeySet(&handler, dstkey);\n    zrangeGenericCommand(&handler, 2, 1, ZRANGE_AUTO, ZRANGE_DIRECTION_AUTO);\n}\n\n/* ZRANGE <key> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count] */\nvoid zrangeCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_AUTO, ZRANGE_DIRECTION_AUTO);\n}\n\n/* ZREVRANGE <key> <start> <stop> [WITHSCORES] */\nvoid zrevrangeCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_RANK, ZRANGE_DIRECTION_REVERSE);\n}\n\n/* This command implements ZRANGEBYSCORE, ZREVRANGEBYSCORE. 
*/\nvoid genericZrangebyscoreCommand(zrange_result_handler *handler,\n    zrangespec *range, robj *zobj, long offset, long limit, \n    int reverse) {\n    unsigned long rangelen = 0;\n\n    handler->beginResultEmission(handler, -1);\n\n    /* For invalid offset, return directly. */\n    if (offset < 0 || (offset > 0 && offset >= (long)zsetLength(zobj))) {\n        handler->finalizeResultEmission(handler, 0);\n        return;\n    }\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr;\n        unsigned int vlen;\n        long long vlong;\n\n        /* If reversed, get the last node in range as starting point. */\n        if (reverse) {\n            eptr = zzlLastInRange(zl,range);\n        } else {\n            eptr = zzlFirstInRange(zl,range);\n        }\n\n        /* Get score pointer for the first element. */\n        if (eptr)\n            sptr = lpNext(zl,eptr);\n\n        /* If there is an offset, just traverse the number of elements without\n         * checking the score because that is done in the next loop. */\n        while (eptr && offset--) {\n            if (reverse) {\n                zzlPrev(zl,&eptr,&sptr);\n            } else {\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n\n        while (eptr && limit--) {\n            double score = zzlGetScore(sptr);\n\n            /* Abort when the node is no longer in range. 
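The offset/limit traversal used here (skip `offset` in-range elements without further checks, then emit until `limit` is exhausted or the range ends) can be sketched over a sorted int array. `window_count` is illustrative only; unlike the real code, which treats a negative limit as unbounded, it assumes a concrete non-negative limit:

```c
#include <assert.h>
#include <stddef.h>

/* Count how many elements a [lo,hi] range query would emit from a sorted
 * ascending array after skipping `offset` matches, capped at `limit`. */
static size_t window_count(const int *v, size_t n, int lo, int hi,
                           long offset, long limit) {
    size_t emitted = 0, i = 0;
    while (i < n && v[i] < lo) i++;                  /* find first in range */
    while (i < n && offset-- > 0 && v[i] <= hi) i++; /* skip offset matches */
    while (i < n && limit-- > 0 && v[i] <= hi) {     /* bounded emission */
        emitted++;
        i++;
    }
    return emitted;
}
```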
*/\n            if (reverse) {\n                if (!zslValueGteMin(score,range)) break;\n            } else {\n                if (!zslValueLteMax(score,range)) break;\n            }\n\n            vstr = lpGetValue(eptr,&vlen,&vlong);\n            rangelen++;\n            if (vstr == NULL) {\n                handler->emitResultFromLongLong(handler, vlong, score);\n            } else {\n                handler->emitResultFromCBuffer(handler, vstr, vlen, score);\n            }\n\n            /* Move to next node */\n            if (reverse) {\n                zzlPrev(zl,&eptr,&sptr);\n            } else {\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n\n        /* If reversed, get the last node in range as starting point. */\n        if (reverse) {\n            ln = zslNthInRange(zsl, range, -offset-1, NULL);\n        } else {\n            ln = zslNthInRange(zsl, range, offset, NULL);\n        }\n\n        while (ln && limit--) {\n            /* Abort when the node is no longer in range. 
*/\n            if (reverse) {\n                if (!zslValueGteMin(ln->score,range)) break;\n            } else {\n                if (!zslValueLteMax(ln->score,range)) break;\n            }\n\n            rangelen++;\n            sds ele = zslGetNodeElement(ln);\n            handler->emitResultFromCBuffer(handler, ele, sdslen(ele), ln->score);\n\n            /* Move to next node */\n            if (reverse) {\n                ln = ln->backward;\n            } else {\n                ln = ln->level[0].forward;\n            }\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    handler->finalizeResultEmission(handler, rangelen);\n}\n\n/* ZRANGEBYSCORE <key> <min> <max> [WITHSCORES] [LIMIT offset count] */\nvoid zrangebyscoreCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_SCORE, ZRANGE_DIRECTION_FORWARD);\n}\n\n/* ZREVRANGEBYSCORE <key> <max> <min> [WITHSCORES] [LIMIT offset count] */\nvoid zrevrangebyscoreCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_SCORE, ZRANGE_DIRECTION_REVERSE);\n}\n\nvoid zcountCommand(client *c) {\n    robj *key = c->argv[1];\n    kvobj *zobj;\n    zrangespec range;\n    unsigned long count = 0;\n\n    /* Parse the range arguments */\n    if (zslParseRange(c->argv[2],c->argv[3],&range) != C_OK) {\n        addReplyError(c,\"min or max is not a float\");\n        return;\n    }\n\n    /* Lookup the sorted set */\n    if ((zobj = lookupKeyReadOrReply(c, key, shared.czero)) == NULL ||\n        checkType(c, zobj, OBJ_ZSET)) return;\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        double score;\n\n        /* Use the first element in range as the starting point */\n        eptr = zzlFirstInRange(zl,&range);\n\n        /* No \"first\" element */\n        if (eptr == NULL) {\n            addReply(c, shared.czero);\n            return;\n        }\n\n        /* First element is in range */\n        sptr = lpNext(zl,eptr);\n        score = zzlGetScore(sptr);\n        serverAssertWithInfo(c,zobj,zslValueLteMax(score,&range));\n\n        /* Iterate over elements in range */\n        while (eptr) {\n            score = zzlGetScore(sptr);\n\n            /* Abort when the node is no longer in range. */\n            if (!zslValueLteMax(score,&range)) {\n                break;\n            } else {\n                count++;\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *zn;\n        unsigned long rank;\n\n        /* Find first element in range and get its rank */\n        zn = zslNthInRange(zsl, &range, 0, &rank);\n\n        /* Use rank of first element, if any, to determine preliminary count */\n        if (zn != NULL) {\n            count = (zsl->length - (rank - 1));\n\n            /* Find last element in range and get its rank */\n            zn = zslNthInRange(zsl, &range, -1, &rank);\n\n            /* Use rank of last element, if any, to determine the actual count */\n            if (zn != NULL) {\n                count -= (zsl->length - rank);\n            }\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    addReplyLongLong(c, count);\n}\n\nvoid zlexcountCommand(client *c) {\n    robj *key = c->argv[1];\n    kvobj *zobj;\n    zlexrangespec range;\n    unsigned long count = 0;\n\n    /* Parse the range arguments */\n    if (zslParseLexRange(c->argv[2],c->argv[3],&range) != C_OK) {\n        addReplyError(c,\"min or max not valid string range item\");\n        return;\n    }\n\n    /* Lookup the sorted set */\n    if ((zobj = 
lookupKeyReadOrReply(c, key, shared.czero)) == NULL ||\n        checkType(c, zobj, OBJ_ZSET))\n    {\n        zslFreeLexRange(&range);\n        return;\n    }\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n\n        /* Use the first element in range as the starting point */\n        eptr = zzlFirstInLexRange(zl,&range);\n\n        /* No \"first\" element */\n        if (eptr == NULL) {\n            zslFreeLexRange(&range);\n            addReply(c, shared.czero);\n            return;\n        }\n\n        /* First element is in range */\n        sptr = lpNext(zl,eptr);\n        serverAssertWithInfo(c,zobj,zzlLexValueLteMax(eptr,&range));\n\n        /* Iterate over elements in range */\n        while (eptr) {\n            /* Abort when the node is no longer in range. */\n            if (!zzlLexValueLteMax(eptr,&range)) {\n                break;\n            } else {\n                count++;\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *zn;\n        unsigned long rank;\n\n        /* Find first element in range and get its rank */\n        zn = zslNthInLexRange(zsl, &range, 0, &rank);\n\n        /* Use rank of first element, if any, to determine preliminary count */\n        if (zn != NULL) {\n            count = (zsl->length - (rank - 1));\n\n            /* Find last element in range and get its rank */\n            zn = zslNthInLexRange(zsl, &range, -1, &rank);\n\n            /* Use rank of last element, if any, to determine the actual count */\n            if (zn != NULL) {\n                count -= (zsl->length - rank);\n            }\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    zslFreeLexRange(&range);\n    addReplyLongLong(c, count);\n}\n\n/* This command 
implements ZRANGEBYLEX, ZREVRANGEBYLEX. */\nvoid genericZrangebylexCommand(zrange_result_handler *handler,\n    zlexrangespec *range, robj *zobj, int withscores, long offset, long limit,\n    int reverse)\n{\n    unsigned long rangelen = 0;\n\n    handler->beginResultEmission(handler, -1);\n\n    /* For invalid offset, return directly. */\n    if (offset < 0 || (offset > 0 && offset >= (long)zsetLength(zobj))) {\n        handler->finalizeResultEmission(handler, 0);\n        return;\n    }\n\n    if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n        unsigned char *zl = zobj->ptr;\n        unsigned char *eptr, *sptr;\n        unsigned char *vstr;\n        unsigned int vlen;\n        long long vlong;\n\n        /* If reversed, get the last node in range as starting point. */\n        if (reverse) {\n            eptr = zzlLastInLexRange(zl,range);\n        } else {\n            eptr = zzlFirstInLexRange(zl,range);\n        }\n\n        /* Get score pointer for the first element. */\n        if (eptr)\n            sptr = lpNext(zl,eptr);\n\n        /* If there is an offset, just traverse the number of elements without\n         * checking the score because that is done in the next loop. */\n        while (eptr && offset--) {\n            if (reverse) {\n                zzlPrev(zl,&eptr,&sptr);\n            } else {\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n\n        while (eptr && limit--) {\n            double score = 0;\n            if (withscores) /* don't bother to extract the score if it's gonna be ignored. */\n                score = zzlGetScore(sptr);\n\n            /* Abort when the node is no longer in range. 
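The skiplist branches of zcountCommand and zlexcountCommand above count by rank instead of iterating: a preliminary count from the first in-range rank, then subtracting the tail beyond the last in-range rank. The arithmetic collapses to last - first + 1, as this small sketch (1-based ranks, illustrative name) shows:

```c
#include <assert.h>

/* Rank-based counting (sketch): given 1-based ranks of the first and last
 * in-range elements, len-(first-1) overcounts by the tail len-last, so the
 * result is exactly last-first+1. */
static unsigned long count_by_ranks(unsigned long len,
                                    unsigned long rank_first,
                                    unsigned long rank_last) {
    unsigned long count = len - (rank_first - 1);
    count -= (len - rank_last);
    return count;
}
```

Two rank lookups (O(log N) each) replace an O(N) walk over the range.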
*/\n            if (reverse) {\n                if (!zzlLexValueGteMin(eptr,range)) break;\n            } else {\n                if (!zzlLexValueLteMax(eptr,range)) break;\n            }\n\n            vstr = lpGetValue(eptr,&vlen,&vlong);\n            rangelen++;\n            if (vstr == NULL) {\n                handler->emitResultFromLongLong(handler, vlong, score);\n            } else {\n                handler->emitResultFromCBuffer(handler, vstr, vlen, score);\n            }\n\n            /* Move to next node */\n            if (reverse) {\n                zzlPrev(zl,&eptr,&sptr);\n            } else {\n                zzlNext(zl,&eptr,&sptr);\n            }\n        }\n    } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n        zset *zs = zobj->ptr;\n        zskiplist *zsl = zs->zsl;\n        zskiplistNode *ln;\n\n        /* If reversed, get the last node in range as starting point. */\n        if (reverse) {\n            ln = zslNthInLexRange(zsl,range,-offset-1,NULL);\n        } else {\n            ln = zslNthInLexRange(zsl,range,offset,NULL);\n        }\n\n        while (ln && limit--) {\n            /* Abort when the node is no longer in range. 
*/\n            if (reverse) {\n                if (!zslLexValueGteMin(zslGetNodeElement(ln),range)) break;\n            } else {\n                if (!zslLexValueLteMax(zslGetNodeElement(ln),range)) break;\n            }\n\n            rangelen++;\n            sds ele = zslGetNodeElement(ln);\n            handler->emitResultFromCBuffer(handler, ele, sdslen(ele), ln->score);\n\n            /* Move to next node */\n            if (reverse) {\n                ln = ln->backward;\n            } else {\n                ln = ln->level[0].forward;\n            }\n        }\n    } else {\n        serverPanic(\"Unknown sorted set encoding\");\n    }\n\n    handler->finalizeResultEmission(handler, rangelen);\n}\n\n/* ZRANGEBYLEX <key> <min> <max> [LIMIT offset count] */\nvoid zrangebylexCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_LEX, ZRANGE_DIRECTION_FORWARD);\n}\n\n/* ZREVRANGEBYLEX <key> <max> <min> [LIMIT offset count] */\nvoid zrevrangebylexCommand(client *c) {\n    zrange_result_handler handler;\n    zrangeResultHandlerInit(&handler, c, ZRANGE_CONSUMER_TYPE_CLIENT);\n    zrangeGenericCommand(&handler, 1, 0, ZRANGE_LEX, ZRANGE_DIRECTION_REVERSE);\n}\n\n/**\n * This function handles ZRANGE and ZRANGESTORE, and also the deprecated\n * Z[REV]RANGE[BYSCORE|BYLEX] commands.\n *\n * The simple ZRANGE and ZRANGESTORE can take _AUTO in rangetype and direction,\n * other commands pass explicit values.\n *\n * argc_start points to the <src> key argument, so the remaining syntax looks like:\n * <src> <min> <max> [BYSCORE | BYLEX] [REV] [WITHSCORES] [LIMIT offset count]\n */\nvoid zrangeGenericCommand(zrange_result_handler *handler, int argc_start, int store,\n                          zrange_type rangetype, zrange_direction direction)\n{\n    client *c = handler->client;\n    robj *key = c->argv[argc_start];\n    zrangespec range;\n    zlexrangespec lexrange;\n    int minidx = argc_start + 1;\n    int maxidx = argc_start + 2;\n    size_t oldsize = 0;\n\n    /* Options common to all */\n    long opt_start = 0;\n    long opt_end = 0;\n    int opt_withscores = 0;\n    long opt_offset = 0;\n    long opt_limit = -1;\n\n    /* Step 1: Skip the <src> <min> <max> args and parse remaining optional arguments. */\n    for (int j=argc_start + 3; j < c->argc; j++) {\n        int leftargs = c->argc-j-1;\n        if (!store && !strcasecmp(c->argv[j]->ptr,\"withscores\")) {\n            opt_withscores = 1;\n        } else if (!strcasecmp(c->argv[j]->ptr,\"limit\") && leftargs >= 2) {\n            if ((getLongFromObjectOrReply(c, c->argv[j+1], &opt_offset, NULL) != C_OK) ||\n                (getLongFromObjectOrReply(c, c->argv[j+2], &opt_limit, NULL) != C_OK))\n            {\n                return;\n            }\n            j += 2;\n        } else if (direction == ZRANGE_DIRECTION_AUTO &&\n                   !strcasecmp(c->argv[j]->ptr,\"rev\"))\n        {\n            direction = ZRANGE_DIRECTION_REVERSE;\n        } else if (rangetype == ZRANGE_AUTO &&\n                   !strcasecmp(c->argv[j]->ptr,\"bylex\"))\n        {\n            rangetype = ZRANGE_LEX;\n        } else if (rangetype == ZRANGE_AUTO &&\n                   !strcasecmp(c->argv[j]->ptr,\"byscore\"))\n        {\n            rangetype = ZRANGE_SCORE;\n        } else {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        }\n    }\n\n    /* Use defaults if not overridden by arguments. */\n    if (direction == ZRANGE_DIRECTION_AUTO)\n        direction = ZRANGE_DIRECTION_FORWARD;\n    if (rangetype == ZRANGE_AUTO)\n        rangetype = ZRANGE_RANK;\n\n    /* Check for conflicting arguments. 
*/\n    if (opt_limit != -1 && rangetype == ZRANGE_RANK) {\n        addReplyError(c,\"syntax error, LIMIT is only supported in combination with either BYSCORE or BYLEX\");\n        return;\n    }\n    if (opt_withscores && rangetype == ZRANGE_LEX) {\n        addReplyError(c,\"syntax error, WITHSCORES not supported in combination with BYLEX\");\n        return;\n    }\n\n    if (direction == ZRANGE_DIRECTION_REVERSE &&\n        ((ZRANGE_SCORE == rangetype) || (ZRANGE_LEX == rangetype)))\n    {\n        /* Range is given as [max,min] */\n        int tmp = maxidx;\n        maxidx = minidx;\n        minidx = tmp;\n    }\n\n    /* Step 2: Parse the range. */\n    switch (rangetype) {\n    case ZRANGE_AUTO:\n    case ZRANGE_RANK:\n        /* Z[REV]RANGE, ZRANGESTORE [REV]RANGE */\n        if ((getLongFromObjectOrReply(c, c->argv[minidx], &opt_start,NULL) != C_OK) ||\n            (getLongFromObjectOrReply(c, c->argv[maxidx], &opt_end,NULL) != C_OK))\n        {\n            return;\n        }\n        break;\n\n    case ZRANGE_SCORE:\n        /* Z[REV]RANGEBYSCORE, ZRANGESTORE [REV]RANGEBYSCORE */\n        if (zslParseRange(c->argv[minidx], c->argv[maxidx], &range) != C_OK) {\n            addReplyError(c, \"min or max is not a float\");\n            return;\n        }\n        break;\n\n    case ZRANGE_LEX:\n        /* Z[REV]RANGEBYLEX, ZRANGESTORE [REV]RANGEBYLEX */\n        if (zslParseLexRange(c->argv[minidx], c->argv[maxidx], &lexrange) != C_OK) {\n            addReplyError(c, \"min or max not valid string range item\");\n            return;\n        }\n        break;\n    }\n\n    if (opt_withscores || store) {\n        zrangeResultHandlerScoreEmissionEnable(handler);\n    }\n\n    /* Step 3: Lookup the key and get the range. 
*/\n    kvobj *zobj = lookupKeyRead(c->db, key);\n    if (zobj == NULL) {\n        if (store) {\n            handler->beginResultEmission(handler, -1);\n            handler->finalizeResultEmission(handler, 0);\n        } else {\n            addReply(c, shared.emptyarray);\n        }\n        goto cleanup;\n    }\n\n    if (checkType(c,zobj,OBJ_ZSET)) goto cleanup;\n\n    /* Step 4: Pass this to the command-specific handler. */\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    switch (rangetype) {\n    case ZRANGE_AUTO:\n    case ZRANGE_RANK:\n        genericZrangebyrankCommand(handler, zobj, opt_start, opt_end,\n            opt_withscores || store, direction == ZRANGE_DIRECTION_REVERSE);\n        break;\n\n    case ZRANGE_SCORE:\n        genericZrangebyscoreCommand(handler, &range, zobj, opt_offset,\n            opt_limit, direction == ZRANGE_DIRECTION_REVERSE);\n        break;\n\n    case ZRANGE_LEX:\n        genericZrangebylexCommand(handler, &lexrange, zobj, opt_withscores || store,\n            opt_offset, opt_limit, direction == ZRANGE_DIRECTION_REVERSE);\n        break;\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n\n    /* Instead of returning here, we'll just fall-through the clean-up. 
*/\n\ncleanup:\n\n    if (rangetype == ZRANGE_LEX) {\n        zslFreeLexRange(&lexrange);\n    }\n}\n\nvoid zcardCommand(client *c) {\n    robj *key = c->argv[1];\n    kvobj *zobj;\n\n    if ((zobj = lookupKeyReadOrReply(c,key,shared.czero)) == NULL ||\n        checkType(c,zobj,OBJ_ZSET)) return;\n\n    addReplyLongLong(c,zsetLength(zobj));\n}\n\nvoid zscoreCommand(client *c) {\n    robj *key = c->argv[1];\n    kvobj *zobj;\n    double score;\n    size_t oldsize = 0;\n\n    if ((zobj = lookupKeyReadOrReply(c,key,shared.null[c->resp])) == NULL ||\n        checkType(c,zobj,OBJ_ZSET)) return;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    if (zsetScore(zobj,c->argv[2]->ptr,&score) == C_ERR) {\n        addReplyNull(c);\n    } else {\n        addReplyDouble(c,score);\n    }\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n}\n\nvoid zmscoreCommand(client *c) {\n    robj *key = c->argv[1];\n    double score;\n    size_t oldsize = 0;\n    kvobj *zobj = lookupKeyRead(c->db, key);\n    if (checkType(c,zobj,OBJ_ZSET)) return;\n\n    if (server.memory_tracking_enabled && zobj != NULL)\n        oldsize = kvobjAllocSize(zobj);\n    addReplyArrayLen(c,c->argc - 2);\n    for (int j = 2; j < c->argc; j++) {\n        /* Treat a missing set the same way as an empty set */\n        if (zobj == NULL || zsetScore(zobj,c->argv[j]->ptr,&score) == C_ERR) {\n            addReplyNull(c);\n        } else {\n            addReplyDouble(c,score);\n        }\n    }\n    if (server.memory_tracking_enabled && zobj != NULL)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n}\n\nvoid zrankGenericCommand(client *c, int reverse) {\n    robj *key = c->argv[1];\n    robj *ele = c->argv[2];\n    kvobj *zobj;\n    robj* reply;\n    long rank;\n    int opt_withscore = 0;\n    double score;\n    size_t oldsize = 0;\n\n    if (c->argc 
> 4) {\n        addReplyErrorArity(c);\n        return;\n    }\n    if (c->argc > 3) {\n        if (!strcasecmp(c->argv[3]->ptr, \"withscore\")) {\n            opt_withscore = 1;\n        } else {\n            addReplyErrorObject(c, shared.syntaxerr);\n            return;\n        }\n    }\n    reply = opt_withscore ? shared.nullarray[c->resp] : shared.null[c->resp];\n    if ((zobj = lookupKeyReadOrReply(c, key, reply)) == NULL || checkType(c, zobj, OBJ_ZSET)) {\n        return;\n    }\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    serverAssertWithInfo(c, ele, sdsEncodedObject(ele));\n    rank = zsetRank(zobj, ele->ptr, reverse, opt_withscore ? &score : NULL);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n    if (rank >= 0) {\n        if (opt_withscore) {\n            addReplyArrayLen(c, 2);\n        }\n        addReplyLongLong(c, rank);\n        if (opt_withscore) {\n            addReplyDouble(c, score);\n        }\n    } else {\n        if (opt_withscore) {\n            addReplyNullArray(c);\n        } else {\n            addReplyNull(c);\n        }\n    }\n}\n\nvoid zrankCommand(client *c) {\n    zrankGenericCommand(c, 0);\n}\n\nvoid zrevrankCommand(client *c) {\n    zrankGenericCommand(c, 1);\n}\n\nvoid zscanCommand(client *c) {\n    kvobj *o;\n    unsigned long long cursor;\n    size_t oldsize = 0;\n\n    if (parseScanCursorOrReply(c,c->argv[2],&cursor) == C_ERR) return;\n    if ((o = lookupKeyReadOrReply(c,c->argv[1],shared.emptyscan)) == NULL ||\n        checkType(c,o,OBJ_ZSET)) return;\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(o);\n    scanGenericCommand(c,o,cursor);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), o, oldsize, kvobjAllocSize(o));\n}\n\n/* This command implements the generic zpop operation, used by:\n * ZPOPMIN, ZPOPMAX, 
BZPOPMIN, BZPOPMAX and ZMPOP. This function is also used\n * inside blocked.c in the unblocking stage of BZPOPMIN, BZPOPMAX and BZMPOP.\n *\n * If 'emitkey' is true the key name is also emitted, which is useful for the\n * blocking behavior of BZPOP[MIN|MAX], since we can block on multiple keys,\n * and for ZMPOP/BZMPOP, which also take multiple keys.\n *\n * 'count' is the number of elements requested to pop, or -1 for a plain single pop.\n *\n * 'use_nested_array', when false, generates a flat array (with or without the key\n * name). When true, it generates a nested 2-level array of member + score pairs,\n * or a 3-level array when emitkey is set.\n *\n * 'reply_nil_when_empty', when true, makes us reply with NIL if we are unable to\n * pop any elements. In ZMPOP/BZMPOP we reply with a structured nested array\n * containing the key name and member + score pairs, and with null when we have\n * no result. Otherwise, in ZPOPMIN/ZPOPMAX, we reply with an empty array by default.\n *\n * 'deleted' is an optional output argument indicating whether the key was\n * deleted by this function.\n */\nvoid genericZpopCommand(client *c, robj **keyv, int keyc, int where, int emitkey,\n                        long count, int use_nested_array, int reply_nil_when_empty, int *deleted) {\n    int idx;\n    robj *key = NULL;\n    robj *zobj = NULL;\n    sds ele;\n    double score;\n    size_t oldsize = 0;\n\n    if (deleted) *deleted = 0;\n\n    /* Check type and break on the first error, otherwise identify candidate. */\n    idx = 0;\n    while (idx < keyc) {\n        key = keyv[idx++];\n        zobj = lookupKeyWrite(c->db,key);\n        if (!zobj) continue;\n        if (checkType(c,zobj,OBJ_ZSET)) return;\n        break;\n    }\n\n    /* No candidate for zpopping, return empty. 
*/\n    if (!zobj) {\n        if (reply_nil_when_empty) {\n            addReplyNullArray(c);\n        } else {\n            addReply(c,shared.emptyarray);\n        }\n        return;\n    }\n\n    if (count == 0) {\n        /* ZPOPMIN/ZPOPMAX with count 0. */\n        addReply(c, shared.emptyarray);\n        return;\n    }\n\n    long result_count = 0;\n\n    /* When count is -1, we need to correct it to 1 for plain single pop. */\n    if (count == -1) count = 1;\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zobj);\n    long llen = zsetLength(zobj);\n    long rangelen = (count > llen) ? llen : count;\n\n    if (!use_nested_array && !emitkey) {\n        /* ZPOPMIN/ZPOPMAX with or without COUNT option in RESP2. */\n        addReplyArrayLen(c, rangelen * 2);\n    } else if (use_nested_array && !emitkey) {\n        /* ZPOPMIN/ZPOPMAX with COUNT option in RESP3. */\n        addReplyArrayLen(c, rangelen);\n    } else if (!use_nested_array && emitkey) {\n        /* BZPOPMIN/BZPOPMAX in RESP2 and RESP3. */\n        addReplyArrayLen(c, rangelen * 2 + 1);\n        addReplyBulk(c, key);\n    } else if (use_nested_array && emitkey) {\n        /* ZMPOP/BZMPOP in RESP2 and RESP3. */\n        addReplyArrayLen(c, 2);\n        addReplyBulk(c, key);\n        addReplyArrayLen(c, rangelen);\n    }\n\n    /* Remove the element. */\n    do {\n        if (zobj->encoding == OBJ_ENCODING_LISTPACK) {\n            unsigned char *zl = zobj->ptr;\n            unsigned char *eptr, *sptr;\n            unsigned char *vstr;\n            unsigned int vlen;\n            long long vlong;\n\n            /* Get the first or last element in the sorted set. */\n            eptr = lpSeek(zl,where == ZSET_MAX ? 
-2 : 0);\n            serverAssertWithInfo(c,zobj,eptr != NULL);\n            vstr = lpGetValue(eptr,&vlen,&vlong);\n            if (vstr == NULL)\n                ele = sdsfromlonglong(vlong);\n            else\n                ele = sdsnewlen(vstr,vlen);\n\n            /* Get the score. */\n            sptr = lpNext(zl,eptr);\n            serverAssertWithInfo(c,zobj,sptr != NULL);\n            score = zzlGetScore(sptr);\n        } else if (zobj->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = zobj->ptr;\n            zskiplist *zsl = zs->zsl;\n            zskiplistNode *zln;\n\n            /* Get the first or last element in the sorted set. */\n            zln = (where == ZSET_MAX ? zsl->tail :\n                                       zsl->header->level[0].forward);\n\n            /* There must be an element in the sorted set. */\n            serverAssertWithInfo(c,zobj,zln != NULL);\n            ele = sdsdup(zslGetNodeElement(zln));\n            score = zln->score;\n        } else {\n            serverPanic(\"Unknown sorted set encoding\");\n        }\n\n        serverAssertWithInfo(c,zobj,zsetDel(zobj,ele));\n        server.dirty++;\n\n        if (result_count == 0) { /* Do this only for the first iteration. */\n            char *events[2] = {\"zpopmin\",\"zpopmax\"};\n            notifyKeyspaceEvent(NOTIFY_ZSET,events[where],key,c->db->id);\n        }\n\n        if (use_nested_array) {\n            addReplyArrayLen(c,2);\n        }\n        addReplyBulkCBuffer(c,ele,sdslen(ele));\n        addReplyDouble(c,score);\n        sdsfree(ele);\n        ++result_count;\n    } while(--rangelen);\n\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(key->ptr), zobj, oldsize, kvobjAllocSize(zobj));\n\n    int64_t oldlen = llen, newlen = llen - result_count;\n\n    /* Remove the key, if indeed needed. 
*/\n    if (zsetLength(zobj) == 0) {\n        if (deleted) *deleted = 1;\n\n        dbDeleteSkipKeysizesUpdate(c->db, key);\n        notifyKeyspaceEvent(NOTIFY_GENERIC,\"del\",key,c->db->id);\n\n        newlen = -1;\n    }\n    updateKeysizesHist(c->db, OBJ_ZSET, oldlen, newlen);\n    keyModified(c, c->db, key, (newlen > 0) ? zobj : NULL, 1);\n\n    if (c->cmd->proc == zmpopCommand) {\n        /* Always replicate it as ZPOP[MIN|MAX] with COUNT option instead of ZMPOP. */\n        robj *count_obj = createStringObjectFromLongLong((count > llen) ? llen : count);\n        rewriteClientCommandVector(c, 3,\n                                   (where == ZSET_MAX) ? shared.zpopmax : shared.zpopmin,\n                                   key, count_obj);\n        decrRefCount(count_obj);\n    }\n}\n\n/* ZPOPMIN/ZPOPMAX key [<count>] */\nvoid zpopMinMaxCommand(client *c, int where) {\n    if (c->argc > 3) {\n        addReplyErrorObject(c,shared.syntaxerr);\n        return;\n    }\n\n    long count = -1; /* -1 for plain single pop. */\n    if (c->argc == 3 && getPositiveLongFromObjectOrReply(c, c->argv[2], &count, NULL) != C_OK)\n        return;\n\n    /* Respond with a single (flat) array in RESP2, or if count is -1\n     * (returning a single element). In RESP3, when count > 0, use a nested array. 
*/\n    int use_nested_array = (c->resp > 2 && count != -1);\n\n    genericZpopCommand(c, &c->argv[1], 1, where, 0, count, use_nested_array, 0, NULL);\n}\n\n/* ZPOPMIN key [<count>] */\nvoid zpopminCommand(client *c) {\n    zpopMinMaxCommand(c, ZSET_MIN);\n}\n\n/* ZPOPMAX key [<count>] */\nvoid zpopmaxCommand(client *c) {\n    zpopMinMaxCommand(c, ZSET_MAX);\n}\n\n/* BZPOPMIN, BZPOPMAX, BZMPOP actual implementation.\n *\n * 'numkeys' is the number of keys.\n *\n * 'timeout_idx' is the position of the blocking timeout argument.\n *\n * 'where' is ZSET_MIN or ZSET_MAX.\n *\n * 'count' is the number of elements requested to pop, or -1 for a plain single pop.\n *\n * 'use_nested_array', when false, generates a flat array (with or without the key\n * name). When true, it generates a nested 3-level array of key name and\n * member + score pairs.\n */\nvoid blockingGenericZpopCommand(client *c, robj **keys, int numkeys, int where,\n                                int timeout_idx, long count, int use_nested_array, int reply_nil_when_empty) {\n    robj *o;\n    robj *key;\n    mstime_t timeout;\n    int j;\n\n    if (getTimeoutFromObjectOrReply(c,c->argv[timeout_idx],&timeout,UNIT_SECONDS)\n        != C_OK) return;\n\n    for (j = 0; j < numkeys; j++) {\n        key = keys[j];\n        o = lookupKeyWrite(c->db,key);\n        /* Non-existing key, move to next key. */\n        if (o == NULL) continue;\n\n        if (checkType(c,o,OBJ_ZSET)) return;\n\n        long llen = zsetLength(o);\n        /* Empty zset, move to next key. */\n        if (llen == 0) continue;\n\n        /* Non-empty zset, this is like a normal ZPOP[MIN|MAX]. */\n        genericZpopCommand(c, &key, 1, where, 1, count, use_nested_array, reply_nil_when_empty, NULL);\n\n        if (count == -1) {\n            /* Replicate it as ZPOP[MIN|MAX] instead of BZPOP[MIN|MAX]. */\n            rewriteClientCommandVector(c,2,\n                                       (where == ZSET_MAX) ? 
shared.zpopmax : shared.zpopmin,\n                                       key);\n        } else {\n            /* Replicate it as ZPOP[MIN|MAX] with COUNT option. */\n            robj *count_obj = createStringObjectFromLongLong((count > llen) ? llen : count);\n            rewriteClientCommandVector(c, 3,\n                                       (where == ZSET_MAX) ? shared.zpopmax : shared.zpopmin,\n                                       key, count_obj);\n            decrRefCount(count_obj);\n        }\n\n        return;\n    }\n\n    /* If we are not allowed to block the client and the zset is empty the only thing\n     * we can do is treating it as a timeout (even with timeout 0). */\n    if (c->flags & CLIENT_DENY_BLOCKING) {\n        addReplyNullArray(c);\n        return;\n    }\n\n    /* If the keys do not exist we must block */\n    blockForKeys(c,BLOCKED_ZSET,keys,numkeys,timeout,0);\n}\n\n// BZPOPMIN key [key ...] timeout\nvoid bzpopminCommand(client *c) {\n    blockingGenericZpopCommand(c, c->argv+1, c->argc-2, ZSET_MIN, c->argc-1, -1, 0, 0);\n}\n\n// BZPOPMAX key [key ...] timeout\nvoid bzpopmaxCommand(client *c) {\n    blockingGenericZpopCommand(c, c->argv+1, c->argc-2, ZSET_MAX, c->argc-1, -1, 0, 0);\n}\n\nstatic void zrandmemberReplyWithListpack(client *c, unsigned int count, listpackEntry *keys, listpackEntry *vals) {\n    for (unsigned long i = 0; i < count; i++) {\n        if (vals && c->resp > 2)\n            addReplyArrayLen(c,2);\n        if (keys[i].sval)\n            addReplyBulkCBuffer(c, keys[i].sval, keys[i].slen);\n        else\n            addReplyBulkLongLong(c, keys[i].lval);\n        if (vals) {\n            if (vals[i].sval) {\n                addReplyDouble(c, zzlStrtod(vals[i].sval,vals[i].slen));\n            } else\n                addReplyDouble(c, vals[i].lval);\n        }\n    }\n}\n\n/* How many times bigger should be the zset compared to the requested size\n * for us to not use the \"remove elements\" strategy? 
Read later in the\n * implementation for more info. */\n#define ZRANDMEMBER_SUB_STRATEGY_MUL 3\n\n/* If client is trying to ask for a very large number of random elements,\n * queuing may consume an unlimited amount of memory, so we want to limit\n * the number of randoms per time. */\n#define ZRANDMEMBER_RANDOM_SAMPLE_LIMIT 1000\n\nvoid zrandmemberWithCountCommand(client *c, long l, int withscores) {\n    unsigned long count, size;\n    int uniq = 1;\n    kvobj *zsetobj;\n    size_t oldsize = 0;\n\n    if ((zsetobj = lookupKeyReadOrReply(c, c->argv[1], shared.emptyarray))\n        == NULL || checkType(c, zsetobj, OBJ_ZSET)) return;\n    size = zsetLength(zsetobj);\n\n    if(l >= 0) {\n        count = (unsigned long) l;\n    } else {\n        count = -l;\n        uniq = 0;\n    }\n\n    /* If count is zero, serve it ASAP to avoid special cases later. */\n    if (count == 0) {\n        addReply(c,shared.emptyarray);\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zsetobj);\n\n    /* CASE 1: The count was negative, so the extraction method is just:\n     * \"return N random elements\" sampling the whole set every time.\n     * This case is trivial and can be served without auxiliary data\n     * structures. This case is the only one that also needs to return the\n     * elements in random order. 
*/\n    if (!uniq || count == 1) {\n        if (withscores && c->resp == 2)\n            addReplyArrayLen(c, count*2);\n        else\n            addReplyArrayLen(c, count);\n        if (zsetobj->encoding == OBJ_ENCODING_SKIPLIST) {\n            zset *zs = zsetobj->ptr;\n            while (count--) {\n                dictEntry *de = dictGetFairRandomKey(zs->dict);\n                zskiplistNode *znode = dictGetKey(de);\n                sds key = zslGetNodeElement(znode);\n                if (withscores && c->resp > 2)\n                    addReplyArrayLen(c,2);\n                addReplyBulkCBuffer(c, key, sdslen(key));\n                if (withscores) {\n                    addReplyDouble(c, znode->score);\n                }\n                if (c->flags & CLIENT_CLOSE_ASAP)\n                    break;\n            }\n        } else if (zsetobj->encoding == OBJ_ENCODING_LISTPACK) {\n            listpackEntry *keys, *vals = NULL;\n            unsigned long limit, sample_count;\n            limit = count > ZRANDMEMBER_RANDOM_SAMPLE_LIMIT ? ZRANDMEMBER_RANDOM_SAMPLE_LIMIT : count;\n            keys = zmalloc(sizeof(listpackEntry)*limit);\n            if (withscores)\n                vals = zmalloc(sizeof(listpackEntry)*limit);\n            while (count) {\n                sample_count = count > limit ? 
limit : count;\n                count -= sample_count;\n                lpRandomPairs(zsetobj->ptr, sample_count, keys, vals, 2);\n                zrandmemberReplyWithListpack(c, sample_count, keys, vals);\n                if (c->flags & CLIENT_CLOSE_ASAP)\n                    break;\n            }\n            zfree(keys);\n            zfree(vals);\n        }\n        goto out;\n    }\n\n    zsetopsrc src;\n    zsetopval zval;\n    src.subject = zsetobj;\n    src.type = zsetobj->type;\n    src.encoding = zsetobj->encoding;\n    zuiInitIterator(&src);\n    memset(&zval, 0, sizeof(zval));\n\n    /* Initiate reply count, RESP3 responds with nested array, RESP2 with flat one. */\n    long reply_size = count < size ? count : size;\n    if (withscores && c->resp == 2)\n        addReplyArrayLen(c, reply_size*2);\n    else\n        addReplyArrayLen(c, reply_size);\n\n    /* CASE 2:\n    * The number of requested elements is greater than the number of\n    * elements inside the zset: simply return the whole zset. */\n    if (count >= size) {\n        while (zuiNext(&src, &zval)) {\n            if (withscores && c->resp > 2)\n                addReplyArrayLen(c,2);\n            addReplyBulkSds(c, zuiNewSdsFromValue(&zval));\n            if (withscores)\n                addReplyDouble(c, zval.score);\n        }\n        zuiClearIterator(&src);\n        goto out;\n    }\n\n    /* CASE 2.5 listpack only. Sampling unique elements, in non-random order.\n     * Listpack encoded zsets are meant to be relatively small, so\n     * ZRANDMEMBER_SUB_STRATEGY_MUL isn't necessary and we rather not make\n     * copies of the entries. Instead, we emit them directly to the output\n     * buffer.\n     *\n     * And it is inefficient to repeatedly pick one random element from a\n     * listpack in CASE 4. So we use this instead. 
*/\n    if (zsetobj->encoding == OBJ_ENCODING_LISTPACK) {\n        listpackEntry *keys, *vals = NULL;\n        keys = zmalloc(sizeof(listpackEntry)*count);\n        if (withscores)\n            vals = zmalloc(sizeof(listpackEntry)*count);\n        serverAssert(lpRandomPairsUnique(zsetobj->ptr, count, keys, vals, 2) == count);\n        zrandmemberReplyWithListpack(c, count, keys, vals);\n        zfree(keys);\n        zfree(vals);\n        zuiClearIterator(&src);\n        goto out;\n    }\n\n    /* CASE 3:\n     * The number of elements inside the zset is not greater than\n     * ZRANDMEMBER_SUB_STRATEGY_MUL times the number of requested elements.\n     * In this case we create a dict from scratch with all the elements, and\n     * subtract random elements to reach the requested number of elements.\n     *\n     * This is done because if the number of requested elements is just\n     * a bit less than the number of elements in the set, the natural approach\n     * used into CASE 4 is highly inefficient. */\n    if (count*ZRANDMEMBER_SUB_STRATEGY_MUL > size) {\n        /* Hashtable encoding (generic implementation) */\n        dict *d = dictCreate(&sdsReplyDictType);\n        dictExpand(d, size);\n        /* Add all the elements into the temporary dictionary. */\n        while (zuiNext(&src, &zval)) {\n            sds key = zuiNewSdsFromValue(&zval);\n            dictEntry *de = dictAddRaw(d, key, NULL);\n            serverAssert(de);\n            if (withscores)\n                dictSetDoubleVal(de, zval.score);\n        }\n        serverAssert(dictSize(d) == size);\n\n        /* Remove random elements to reach the right count. 
*/\n        while (size > count) {\n            dictEntry *de;\n            de = dictGetFairRandomKey(d);\n            dictUnlink(d,dictGetKey(de));\n            sdsfree(dictGetKey(de));\n            dictFreeUnlinkedEntry(d,de);\n            size--;\n        }\n\n        /* Reply with what's in the dict and release memory */\n        dictIterator di;\n        dictEntry *de;\n\n        dictInitIterator(&di, d);\n        while ((de = dictNext(&di)) != NULL) {\n            if (withscores && c->resp > 2)\n                addReplyArrayLen(c,2);\n            addReplyBulkSds(c, dictGetKey(de));\n            if (withscores)\n                addReplyDouble(c, dictGetDoubleVal(de));\n        }\n\n        dictResetIterator(&di);\n        dictRelease(d);\n    }\n\n    /* CASE 4: We have a big zset compared to the requested number of elements.\n     * In this case we can simply get random elements from the zset and add\n     * to the temporary set, trying to eventually get enough unique elements\n     * to reach the specified count. */\n    else {\n        /* Hashtable encoding (generic implementation) */\n        unsigned long added = 0;\n        dict *d = dictCreate(&hashDictType);\n        dictExpand(d, count);\n\n        while (added < count) {\n            listpackEntry key;\n            double score;\n            zsetTypeRandomElement(zsetobj, size, &key, withscores ? &score: NULL);\n\n            /* Try to add the object to the dictionary. If it already exists\n            * free it, otherwise increment the number of objects we have\n            * in the result dictionary. 
*/\n            sds skey = zsetSdsFromListpackEntry(&key);\n            if (dictAdd(d,skey,NULL) != DICT_OK) {\n                sdsfree(skey);\n                continue;\n            }\n            added++;\n\n            if (withscores && c->resp > 2)\n                addReplyArrayLen(c,2);\n            zsetReplyFromListpackEntry(c, &key);\n            if (withscores)\n                addReplyDouble(c, score);\n        }\n\n        /* Release memory */\n        dictRelease(d);\n    }\n    zuiClearIterator(&src);\nout:\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), zsetobj, oldsize, kvobjAllocSize(zsetobj));\n}\n\n/* ZRANDMEMBER key [<count> [WITHSCORES]] */\nvoid zrandmemberCommand(client *c) {\n    long l;\n    int withscores = 0;\n    kvobj *zset;\n    listpackEntry ele;\n    size_t oldsize = 0;\n\n    if (c->argc >= 3) {\n        if (getRangeLongFromObjectOrReply(c,c->argv[2],-LONG_MAX,LONG_MAX,&l,NULL) != C_OK) return;\n        if (c->argc > 4 || (c->argc == 4 && strcasecmp(c->argv[3]->ptr,\"withscores\"))) {\n            addReplyErrorObject(c,shared.syntaxerr);\n            return;\n        } else if (c->argc == 4) {\n            withscores = 1;\n            if (l < -LONG_MAX/2 || l > LONG_MAX/2) {\n                addReplyError(c,\"value is out of range\");\n                return;\n            }\n        }\n        zrandmemberWithCountCommand(c, l, withscores);\n        return;\n    }\n\n    /* Handle variant without <count> argument. 
Reply with simple bulk string */\n    if ((zset = lookupKeyReadOrReply(c,c->argv[1],shared.null[c->resp]))== NULL ||\n        checkType(c,zset,OBJ_ZSET)) {\n        return;\n    }\n\n    if (server.memory_tracking_enabled)\n        oldsize = kvobjAllocSize(zset);\n    zsetTypeRandomElement(zset, zsetLength(zset), &ele,NULL);\n    zsetReplyFromListpackEntry(c,&ele);\n    if (server.memory_tracking_enabled)\n        updateSlotAllocSize(c->db, getKeySlot(c->argv[1]->ptr), zset, oldsize, kvobjAllocSize(zset));\n}\n\n/* ZMPOP/BZMPOP\n *\n * 'numkeys_idx' parameter position of key number.\n * 'is_block' this indicates whether it is a blocking variant. */\nvoid zmpopGenericCommand(client *c, int numkeys_idx, int is_block) {\n    long j;\n    long numkeys = 0;      /* Number of keys. */\n    int where = 0;         /* ZSET_MIN or ZSET_MAX. */\n    long count = -1;       /* Reply will consist of up to count elements, depending on the zset's length. */\n\n    /* Parse the numkeys. */\n    if (getRangeLongFromObjectOrReply(c, c->argv[numkeys_idx], 1, LONG_MAX,\n                                      &numkeys, \"numkeys should be greater than 0\") != C_OK)\n        return;\n\n    /* Parse the where. where_idx: the index of where in the c->argv. */\n    long where_idx = numkeys_idx + numkeys + 1;\n    if (where_idx >= c->argc) {\n        addReplyErrorObject(c, shared.syntaxerr);\n        return;\n    }\n    if (!strcasecmp(c->argv[where_idx]->ptr, \"MIN\")) {\n        where = ZSET_MIN;\n    } else if (!strcasecmp(c->argv[where_idx]->ptr, \"MAX\")) {\n        where = ZSET_MAX;\n    } else {\n        addReplyErrorObject(c, shared.syntaxerr);\n        return;\n    }\n\n    /* Parse the optional arguments. 
*/\n    for (j = where_idx + 1; j < c->argc; j++) {\n        char *opt = c->argv[j]->ptr;\n        int moreargs = (c->argc - 1) - j;\n\n        if (count == -1 && !strcasecmp(opt, \"COUNT\") && moreargs) {\n            j++;\n            if (getRangeLongFromObjectOrReply(c, c->argv[j], 1, LONG_MAX,\n                                              &count,\"count should be greater than 0\") != C_OK)\n                return;\n        } else {\n            addReplyErrorObject(c, shared.syntaxerr);\n            return;\n        }\n    }\n\n    if (count == -1) count = 1;\n\n    if (is_block) {\n        /* BLOCK. We will handle CLIENT_DENY_BLOCKING flag in blockingGenericZpopCommand. */\n        blockingGenericZpopCommand(c, c->argv+numkeys_idx+1, numkeys, where, 1, count, 1, 1);\n    } else {\n        /* NON-BLOCK */\n        genericZpopCommand(c, c->argv+numkeys_idx+1, numkeys, where, 1, count, 1, 1, NULL);\n    }\n}\n\n/* ZMPOP numkeys key [<key> ...] MIN|MAX [COUNT count] */\nvoid zmpopCommand(client *c) {\n    zmpopGenericCommand(c, 1, 0);\n}\n\n/* BZMPOP timeout numkeys key [<key> ...] MIN|MAX [COUNT count] */\nvoid bzmpopCommand(client *c) {\n    zmpopGenericCommand(c, 2, 1);\n}\n\n#ifdef REDIS_TEST\n#include <assert.h>\n#include \"testhelp.h\"\n\n/* Verify the entire skiplist structure for debugging purposes:\n * - Header node has correct structure\n * - Level is correct (highest non-NULL level)\n * - Forward and backward pointers are correct\n * - Scores are in sorted order (with lexicographic tie-breaking)\n * - Node levels stored in level[0].span are correct\n * - Span values across all levels sum to zsl->length\n * - Length matches actual node count\n * - Tail pointer is correct\n *\n * Panics with detailed error message if any invariant is violated. 
*/\nstatic void zslDebugVerifyStruct(zskiplist *zsl) {\n    zskiplistNode *x;\n    unsigned long length = 0;\n    int i;\n\n    /* Verify header node */\n    serverAssert(zsl->header != NULL);\n    serverAssert(zslGetNodeInfo(zsl->header)->sdsoffset == ZSL_OFFSET_NO_ELE);\n    serverAssert(zsl->header->backward == NULL);\n\n    /* Verify level is in valid range */\n    serverAssert(zsl->level >= 1 && zsl->level <= ZSKIPLIST_MAXLEVEL);\n\n    /* Verify that all levels >= zsl->level in header are NULL */\n    for (i = zsl->level; i < ZSKIPLIST_MAXLEVEL; i++) {\n        serverAssert(zsl->header->level[i].forward == NULL);\n        serverAssert(zsl->header->level[i].span == 0);\n    }\n\n    /* Verify that level zsl->level-1 has at least one node (if list is not empty) */\n    if (zsl->length > 0) {\n        serverAssert(zsl->header->level[zsl->level-1].forward != NULL);\n    }\n\n    /* Single pass: verify forward/backward pointers, scores, node levels, and accumulate spans */\n    x = zsl->header->level[0].forward;\n    zskiplistNode *prev = NULL;\n\n    while (x) {\n        length++;\n\n        /* Verify backward pointer */\n        serverAssert(x->backward == prev);\n\n        /* Verify node has valid element */\n        serverAssert(zslGetNodeInfo(x)->sdsoffset != ZSL_OFFSET_NO_ELE);\n\n        /* Verify node level is in valid range */\n        unsigned long node_level = zslGetNodeInfo(x)->levels;\n        serverAssert(node_level >= 1 && node_level <= ZSKIPLIST_MAXLEVEL);\n\n        /* Verify score ordering */\n        if (x->level[0].forward) {\n            zskiplistNode *next = x->level[0].forward;\n            serverAssert(next->score > x->score ||\n                       (next->score == x->score && sdscmp(zslGetNodeElement(next), zslGetNodeElement(x)) > 0));\n        }\n\n        /* Verify spans are correct for all levels this node participates in.\n         * Note: level 0 doesn't store span (it stores node level), so start from level 1.\n         *\n         
* Span semantics:\n         * - If forward != NULL: span represents distance to next node at this level (must be > 0)\n         * - If forward == NULL: span represents number of nodes after this node at level 0\n         *   (needed for zslGetRankByNode optimization) */\n        for (i = 1; i < (int)node_level; i++) {\n            if (x->level[i].forward) {\n                /* Verify span is positive when there's a next node */\n                serverAssert(x->level[i].span > 0);\n            } else {\n                /* When forward is NULL, span should equal the number of nodes after this node.\n                 * We can verify this by counting remaining nodes at level 0. */\n                unsigned long nodes_after = 0;\n                zskiplistNode *temp = x->level[0].forward;\n                while (temp) {\n                    nodes_after++;\n                    temp = temp->level[0].forward;\n                }\n                serverAssert(x->level[i].span == nodes_after);\n            }\n        }\n\n        prev = x;\n        x = x->level[0].forward;\n    }\n\n    /* Verify length matches actual count */\n    serverAssert(length == zsl->length);\n\n    /* Verify tail pointer */\n    if (zsl->length == 0) {\n        serverAssert(zsl->tail == NULL);\n    } else {\n        serverAssert(zsl->tail == prev);\n        serverAssert(zsl->tail->level[0].forward == NULL);\n    }\n\n    /* Verify that the sum of spans at each level is consistent.\n     * At each level, we traverse from header following forward pointers and sum all spans.\n     * The sum should equal the rank of the last node at that level.\n     * If the last node at a level is the tail, the sum should equal zsl->length. 
*/\n    for (i = 1; i < zsl->level; i++) {\n        unsigned long span_sum = 0;\n        zskiplistNode *last_at_level = zsl->header;\n        x = zsl->header;\n        while (x->level[i].forward) {\n            span_sum += x->level[i].span;\n            x = x->level[i].forward;\n            last_at_level = x;\n        }\n        /* If the last node at this level is the tail, span sum should equal length */\n        if (last_at_level == zsl->tail) {\n            serverAssert(span_sum == zsl->length);\n        } else {\n            /* Otherwise, span sum should be less than length */\n            serverAssert(span_sum < zsl->length);\n        }\n    }\n}\n\nint zsetTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    printf(\"Testing skiplist operations with structure verification\\n\");\n\n    const int N = 1000;\n    zskiplist *zsl = zslCreate();\n\n    /* Store inserted elements for later deletion */\n    typedef struct {\n        double score;\n        sds ele;\n        zskiplistNode *node;\n    } InsertedElement;\n\n    InsertedElement *elements = zmalloc(sizeof(InsertedElement) * N);\n\n    /* Seed random number generator for reproducible tests */\n    srand(12345);\n\n    printf(\"Inserting %d elements with scores 0-100 (with duplicates)...\\n\", N);\n\n    /* Insert N elements with random scores between 0 and 100 */\n    for (int i = 0; i < N; i++) {\n        double score = (double)(rand() % 101); /* 0 to 100 */\n        char buf[32];\n        snprintf(buf, sizeof(buf), \"elem%d\", i);\n        sds ele = sdsnew(buf);\n\n        zskiplistNode *node = zslInsert(zsl, score, ele);\n\n        /* Store for later deletion - keep a copy of the element name */\n        elements[i].score = score;\n        elements[i].ele = ele;\n        elements[i].node = node;\n\n        /* Verify structure after each insertion */\n        zslDebugVerifyStruct(zsl);\n\n        /* Query the inserted element */\n        unsigned long 
rank = zslGetRank(zsl, score, ele);\n        assert(rank != 0);\n\n        /* Verify we can get the element by rank */\n        zskiplistNode *found = zslGetElementByRank(zsl, rank);\n        assert(found != NULL && found == node);\n\n        /* Verify rank by node */\n        unsigned long node_rank = zslGetRankByNode(zsl, node);\n        assert(node_rank == rank);\n    }\n\n    test_cond(\"Insert N elements with verification\",\n        zsl->length == (unsigned long)N);\n\n    printf(\"Deleting %d elements...\\n\", N);\n\n    /* Delete all elements in reverse order */\n    for (int i = N - 1; i >= 0; i--) {\n        double score = elements[i].score;\n        sds ele = elements[i].ele;\n\n        /* Verify element exists before deletion with valid rank */\n        unsigned long rank = zslGetRank(zsl, score, ele);\n        assert(rank >= 1 && rank <= (unsigned long)(i + 1));\n\n        /* Delete the element - zslDelete frees the node's SDS string */\n        zslDelete(zsl, elements[i].node);\n\n        /* Verify structure after each deletion */\n        zslDebugVerifyStruct(zsl);\n        sdsfree(elements[i].ele);\n    }\n\n    test_cond(\"Delete N elements with verification\",\n        zsl->length == 0 && zsl->tail == NULL);\n\n    zfree(elements);\n    zslFree(zsl);\n    \n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/testhelp.h",
    "content": "/* This is a really minimal testing framework for C.\n *\n * Example:\n *\n * test_cond(\"Check if 1 == 1\", 1==1)\n * test_cond(\"Check if 5 > 10\", 5 > 10)\n * test_report()\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __TESTHELP_H\n#define __TESTHELP_H\n\n#define REDIS_TEST_ACCURATE     (1<<0)\n#define REDIS_TEST_LARGE_MEMORY (1<<1)\n#define REDIS_TEST_VALGRIND     (1<<2)\n#define REDIS_TEST_VERBOSE      (1<<3)\n\n\nextern int __failed_tests;\nextern int __test_num;\n\n#define test_cond(descr,_c) do { \\\n    __test_num++; printf(\"%d - %s: \", __test_num, descr); \\\n    if(_c) printf(\"\\033[32mPASSED\\033[0m\\n\"); else {printf(\"\\033[31mFAILED\\033[0m\\n\"); __failed_tests++;} \\\n} while(0)\n#define test_report() do { \\\n    if (__failed_tests) { \\\n        printf(\"  Tests:       %d passed, \\033[31m%d failed\\033[0m, %d total\\n\", \\\n                        __test_num-__failed_tests, __failed_tests, __test_num); \\\n        printf(\"\\033[31m=== WARNING === We have failed tests here...\\033[0m\\n\"); \\\n        exit(1); \\\n    } else { \\\n        printf(\"  Tests:       \\033[32m%d passed\\033[0m, %d failed, %d total\\n\", \\\n                        __test_num-__failed_tests, __failed_tests, __test_num); \\\n    } \\\n} while(0)\n\n#endif\n"
  },
  {
    "path": "src/threads_mngr.c",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"threads_mngr.h\"\n/* Anti-warning macro... */\n#define UNUSED(V) ((void) V)\n\n#ifdef __linux__\n#include \"atomicvar.h\"\n#include \"server.h\"\n\n#include <signal.h>\n#include <time.h>\n#include <sys/syscall.h>\n\n#define IN_PROGRESS 1\nstatic const clock_t RUN_ON_THREADS_TIMEOUT = 2;\n\n/*================================= Globals ================================= */\n\nstatic redisAtomic run_on_thread_cb g_callback = NULL;\nstatic redisAtomic size_t g_tids_len = 0;\nstatic redisAtomic size_t g_num_threads_done = 0;\n\n/* This flag is set while ThreadsManager_runOnThreads is running */\nstatic redisAtomic int g_in_progress = 0;\n\n/*============================ Internal prototypes ========================== */\n\nstatic void invoke_callback(int sig);\n/* returns 0 if it is safe to start, IN_PROGRESS otherwise. 
 */\nstatic int test_and_start(void);\nstatic void wait_threads(void);\n/* Clean up global variables.\nAssuming we are under the g_in_progress protection, this is not a thread-safe function. */\nstatic void ThreadsManager_cleanups(void);\n\n/*============================ API functions implementations ========================== */\n\nvoid ThreadsManager_init(void) {\n    /* Register signal handler */\n    struct sigaction act;\n    sigemptyset(&act.sa_mask);\n    /* Not setting the SA_RESTART flag means that if a signal handler is invoked while a\n    system call or library function call is blocked, the default behavior is used,\n    i.e., the call fails with the error EINTR */\n    act.sa_flags = 0;\n    act.sa_handler = invoke_callback;\n    sigaction(SIGUSR2, &act, NULL);\n}\n\n__attribute__ ((noinline))\nint ThreadsManager_runOnThreads(pid_t *tids, size_t tids_len, run_on_thread_cb callback) {\n    /* Check if it is safe to start running. If not - return */\n    if(test_and_start() == IN_PROGRESS) {\n        return 0;\n    }\n\n    /* Update g_callback */\n    atomicSet(g_callback, callback);\n\n    /* Set g_tids_len */\n    atomicSet(g_tids_len, tids_len);\n\n    /* Set g_num_threads_done to 0 to handle the case where in the previous run we reached the timeout\n    and called ThreadsManager_cleanups before one or more threads were done and incremented\n    the (already reset to 0) g_num_threads_done */\n    atomicSet(g_num_threads_done, 0);\n\n    /* Send signal to all the threads in tids */\n    pid_t pid = getpid();\n    for (size_t i = 0; i < tids_len ; ++i) {\n        syscall(SYS_tgkill, pid, tids[i], THREADS_SIGNAL);\n    }\n\n    /* Wait for all the threads to write to the output array, or until timeout is reached */\n    wait_threads();\n\n    /* Cleanups to allow next execution */\n    ThreadsManager_cleanups();\n\n    return 1;\n}\n\n/*============================ Internal functions implementations ========================== */\n\n\nstatic int test_and_start(void) 
{\n    /* atomicFlagGetSet sets the variable to 1 and returns the previous value */\n    int prev_state;\n    atomicFlagGetSet(g_in_progress, prev_state);\n\n    /* If prev_state is 1, g_in_progress was on. */\n    return prev_state;\n}\n\n__attribute__ ((noinline))\nstatic void invoke_callback(int sig) {\n    UNUSED(sig);\n\n    run_on_thread_cb callback;\n    atomicGet(g_callback, callback);\n    if (callback) {\n        callback();\n        atomicIncr(g_num_threads_done, 1);\n    } else {\n        serverLogFromHandler(LL_WARNING, \"tid %ld: ThreadsManager g_callback is NULL\", syscall(SYS_gettid));\n    }\n}\n\nstatic void wait_threads(void) {\n    struct timespec timeout_time;\n    clock_gettime(CLOCK_REALTIME, &timeout_time);\n\n    /* Compute the absolute time at which we time out */\n    timeout_time.tv_sec += RUN_ON_THREADS_TIMEOUT;\n\n    /* Wait until all threads have invoked the callback, or until the timeout is reached */\n    size_t curr_done_count;\n    struct timespec curr_time;\n    size_t tids_len;\n\n    do {\n        struct timeval tv = {\n            .tv_sec = 0,\n            .tv_usec = 10};\n        /* Sleep a bit to yield to other threads. 
*/\n        /* usleep isn't listed as signal safe, so we use select instead */\n        select(0, NULL, NULL, NULL, &tv);\n        atomicGet(g_num_threads_done, curr_done_count);\n        clock_gettime(CLOCK_REALTIME, &curr_time);\n        atomicGet(g_tids_len, tids_len);\n    } while (curr_done_count < tids_len &&\n             curr_time.tv_sec <= timeout_time.tv_sec);\n\n    if (curr_time.tv_sec > timeout_time.tv_sec) {\n        serverLogRawFromHandler(LL_WARNING, \"wait_threads(): waiting threads timed out\");\n    }\n\n}\n\nstatic void ThreadsManager_cleanups(void) {\n    atomicSet(g_callback, NULL);\n    atomicSet(g_tids_len, 0);\n    atomicSet(g_num_threads_done, 0);\n\n    /* Lastly, turn off g_in_progress */\n    atomicSet(g_in_progress, 0);\n\n}\n#else\n\nvoid ThreadsManager_init(void) {\n    /* DO NOTHING */\n}\n\nint ThreadsManager_runOnThreads(pid_t *tids, size_t tids_len, run_on_thread_cb callback) {\n    /* DO NOTHING */\n    UNUSED(tids);\n    UNUSED(tids_len);\n    UNUSED(callback);\n    return 1;\n}\n\n#endif /* __linux__ */\n"
  },
  {
    "path": "src/threads_mngr.h",
    "content": "/*\n * Copyright (c) 2021-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#pragma once\n\n#include \"fmacros.h\"\n\n#include <sys/types.h>\n#include <unistd.h>\n\n/** This is an API to invoke a callback on a list of threads using a user-defined signal handler.\n * NOTE: This API is only supported on Linux systems.\n * Calling the functions below on any other system does nothing.\n*/\n\n#define THREADS_SIGNAL SIGUSR2\n\n/* Callback signature */\ntypedef void(*run_on_thread_cb)(void);\n\n/* Register the process to THREADS_SIGNAL */\nvoid ThreadsManager_init(void);\n\n/** @brief Invoke callback by each thread in tids.\n *\n * @param tids  An array of threads that need to invoke callback.\n * @param tids_len The number of threads in @param tids.\n * @param callback A callback to be invoked by each thread in @param tids.\n *\n * NOTES:\n * It is assumed that none of the threads block or ignore THREADS_SIGNAL.\n *\n * It is safe to include the calling thread in @param tids. However, be aware that subsequent tids will\n * not be signaled until the calling thread returns from the callback invocation.\n * Hence, it is recommended to place the calling thread last in @param tids.\n *\n * The function returns only when @param tids_len threads have returned from @param callback, or when the timeout is reached.\n *\n * @return 1 if successful, 0 if ThreadsManager_runOnThreads is already in the middle of execution.\n *\n**/\n\nint ThreadsManager_runOnThreads(pid_t *tids, size_t tids_len, run_on_thread_cb callback);\n"
  },
  {
    "path": "src/timeout.c",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n#include \"cluster.h\"\n\n#include <math.h>\n\n/* ========================== Clients timeouts ============================= */\n\n/* Check if this blocked client timed out (does nothing if the client is\n * not blocked right now). If so, send a reply, unblock it, and return 1.\n * Otherwise 0 is returned and no operation is performed. */\nint checkBlockedClientTimeout(client *c, mstime_t now) {\n    if (c->flags & CLIENT_BLOCKED &&\n        c->bstate.timeout != 0\n        && c->bstate.timeout < now)\n    {\n        /* Handle blocking operation specific timeout. */\n        unblockClientOnTimeout(c);\n        return 1;\n    } else {\n        return 0;\n    }\n}\n\n/* Check for timeouts. Returns non-zero if the client was terminated.\n * The function gets the current time in milliseconds as argument since\n * it gets called multiple times in a loop, so calling gettimeofday() for\n * each iteration would be costly without any actual gain. */\nint clientsCronHandleTimeout(client *c, mstime_t now_ms) {\n    time_t now = now_ms/1000;\n\n    if (server.maxidletime &&\n        /* This handles the idle clients connection timeout if set. 
*/\n        !(c->flags & CLIENT_SLAVE) &&   /* No timeout for slaves and monitors */\n        !mustObeyClient(c) &&         /* No timeout for masters and AOF */\n        !(c->flags & CLIENT_BLOCKED) && /* No timeout for BLPOP */\n        !(c->flags & CLIENT_PUBSUB) &&  /* No timeout for Pub/Sub clients */\n        (now - c->lastinteraction > server.maxidletime))\n    {\n        serverLog(LL_VERBOSE,\"Closing idle client\");\n        freeClient(c);\n        return 1;\n    } else if (c->flags & CLIENT_BLOCKED) {\n        /* Cluster: handle unblock & redirect of clients blocked\n         * into keys no longer served by this server. */\n        if (server.cluster_enabled) {\n            if (clusterRedirectBlockedClientIfNeeded(c))\n                unblockClientOnError(c, NULL);\n        }\n    }\n    return 0;\n}\n\n/* For blocked clients timeouts we populate a radix tree of 128 bit keys\n * composed as such:\n *\n *  [8 byte big endian expire time]+[8 byte client ID]\n *\n * We don't do any cleanup in the Radix tree: when we run the clients that\n * reached the timeout already, if they are no longer existing or no longer\n * blocked with such timeout, we just go forward.\n *\n * Every time a client blocks with a timeout, we add the client in\n * the tree. In beforeSleep() we call handleBlockedClientsTimeout() to run\n * the tree and unblock the clients. */\n\n#define CLIENT_ST_KEYLEN 16    /* 8 bytes mstime + 8 bytes client ID. */\n\n/* Given client ID and timeout, write the resulting radix tree key in buf. */\nvoid encodeTimeoutKey(unsigned char *buf, uint64_t timeout, client *c) {\n    timeout = htonu64(timeout);\n    memcpy(buf,&timeout,sizeof(timeout));\n    memcpy(buf+8,&c,sizeof(c));\n    if (sizeof(c) == 4) memset(buf+12,0,4); /* Zero padding for 32bit target. */\n}\n\n/* Given a key encoded with encodeTimeoutKey(), resolve the fields and write\n * the timeout into *toptr and the client pointer into *cptr. 
*/\nvoid decodeTimeoutKey(unsigned char *buf, uint64_t *toptr, client **cptr) {\n    memcpy(toptr,buf,sizeof(*toptr));\n    *toptr = ntohu64(*toptr);\n    memcpy(cptr,buf+8,sizeof(*cptr));\n}\n\n/* Add the specified client id / timeout as a key in the radix tree we use\n * to handle blocked clients timeouts. The client is not added to the list\n * if its timeout is zero (block forever). */\nvoid addClientToTimeoutTable(client *c) {\n    if (c->bstate.timeout == 0) return;\n    uint64_t timeout = c->bstate.timeout;\n    unsigned char buf[CLIENT_ST_KEYLEN];\n    encodeTimeoutKey(buf,timeout,c);\n    if (raxTryInsert(server.clients_timeout_table,buf,sizeof(buf),NULL,NULL))\n        c->flags |= CLIENT_IN_TO_TABLE;\n}\n\n/* Remove the client from the table when it is unblocked for reasons\n * different than timing out. */\nvoid removeClientFromTimeoutTable(client *c) {\n    if (!(c->flags & CLIENT_IN_TO_TABLE)) return;\n    c->flags &= ~CLIENT_IN_TO_TABLE;\n    uint64_t timeout = c->bstate.timeout;\n    unsigned char buf[CLIENT_ST_KEYLEN];\n    encodeTimeoutKey(buf,timeout,c);\n    raxRemove(server.clients_timeout_table,buf,sizeof(buf),NULL);\n}\n\n/* This function is called in beforeSleep() in order to unblock clients\n * that are waiting in blocking operations with a timeout set. */\nvoid handleBlockedClientsTimeout(void) {\n    if (raxSize(server.clients_timeout_table) == 0) return;\n    uint64_t now = mstime();\n    raxIterator ri;\n    raxStart(&ri,server.clients_timeout_table);\n    raxSeek(&ri,\"^\",NULL,0);\n\n    while(raxNext(&ri)) {\n        uint64_t timeout;\n        client *c;\n        decodeTimeoutKey(ri.key,&timeout,&c);\n        if (timeout >= now) break; /* All the timeouts are in the future. 
*/\n        c->flags &= ~CLIENT_IN_TO_TABLE;\n        checkBlockedClientTimeout(c,now);\n        raxRemove(server.clients_timeout_table,ri.key,ri.key_len,NULL);\n        raxSeek(&ri,\"^\",NULL,0);\n    }\n    raxStop(&ri);\n}\n\n/* Get a timeout value from an object and store it into 'timeout'.\n * The final timeout is always stored as milliseconds as a time where the\n * timeout will expire, however the parsing is performed according to\n * the 'unit' that can be seconds or milliseconds.\n *\n * Note that if the timeout is zero (usually from the point of view of\n * commands API this means no timeout) the value stored into 'timeout'\n * is zero. */\nint getTimeoutFromObjectOrReply(client *c, robj *object, mstime_t *timeout, int unit) {\n    long long tval;\n    long double ftval;\n    mstime_t now = commandTimeSnapshot();\n\n    if (unit == UNIT_SECONDS) {\n        if (getLongDoubleFromObjectOrReply(c,object,&ftval,\n            \"timeout is not a float or out of range\") != C_OK)\n            return C_ERR;\n\n        ftval *= 1000.0;  /* seconds => millisec */\n        if (ftval > (long double)LLONG_MAX) {\n            addReplyError(c, \"timeout is out of range\");\n            return C_ERR;\n        }\n        tval = (long long) ceill(ftval);\n    } else {\n        if (getLongLongFromObjectOrReply(c,object,&tval,\n            \"timeout is not an integer or out of range\") != C_OK)\n            return C_ERR;\n    }\n\n    if (tval < 0) {\n        addReplyError(c,\"timeout is negative\");\n        return C_ERR;\n    }\n\n    if (tval > 0) {\n        if  (tval > LLONG_MAX - now) {\n            addReplyError(c,\"timeout is out of range\"); /* 'tval+now' would overflow */\n            return C_ERR;\n        }\n        tval += now;\n    }\n    *timeout = tval;\n\n    return C_OK;\n}\n"
  },
  {
    "path": "src/tls.c",
    "content": "/*\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Copyright (c) 2024-present, Valkey contributors.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#define REDISMODULE_CORE_MODULE /* A module that's part of the redis core, uses server.h too. */\n\n#include \"server.h\"\n#include \"connhelpers.h\"\n#include \"adlist.h\"\n\n#if (USE_OPENSSL == 1 /* BUILD_YES */ ) || ((USE_OPENSSL == 2 /* BUILD_MODULE */) && (BUILD_TLS_MODULE == 2))\n\n#include <openssl/conf.h>\n#include <openssl/ssl.h>\n#include <openssl/x509.h>\n#include <openssl/err.h>\n#include <openssl/rand.h>\n#include <openssl/pem.h>\n#if OPENSSL_VERSION_NUMBER >= 0x30000000L\n#include <openssl/decoder.h>\n#endif\n#include <sys/uio.h>\n#include <arpa/inet.h>\n\n#define REDIS_TLS_PROTO_TLSv1       (1<<0)\n#define REDIS_TLS_PROTO_TLSv1_1     (1<<1)\n#define REDIS_TLS_PROTO_TLSv1_2     (1<<2)\n#define REDIS_TLS_PROTO_TLSv1_3     (1<<3)\n\n/* Use safe defaults */\n#ifdef TLS1_3_VERSION\n#define REDIS_TLS_PROTO_DEFAULT     (REDIS_TLS_PROTO_TLSv1_2|REDIS_TLS_PROTO_TLSv1_3)\n#else\n#define REDIS_TLS_PROTO_DEFAULT     (REDIS_TLS_PROTO_TLSv1_2)\n#endif\n\nSSL_CTX *redis_tls_ctx = NULL;\nSSL_CTX *redis_tls_client_ctx = NULL;\n\nstatic int parseProtocolsConfig(const char *str) {\n    int i, count = 0;\n    int protocols = 0;\n\n    if (!str) return REDIS_TLS_PROTO_DEFAULT;\n    sds *tokens = sdssplitlen(str, strlen(str), \" \", 1, &count);\n\n    if (!tokens) { \n        serverLog(LL_WARNING, \"Invalid tls-protocols configuration string\");\n        return -1;\n    }\n    for (i = 0; i < count; i++) {\n        if (!strcasecmp(tokens[i], \"tlsv1\")) protocols |= REDIS_TLS_PROTO_TLSv1;\n        else if (!strcasecmp(tokens[i], \"tlsv1.1\")) protocols |= REDIS_TLS_PROTO_TLSv1_1;\n        else 
if (!strcasecmp(tokens[i], \"tlsv1.2\")) protocols |= REDIS_TLS_PROTO_TLSv1_2;\n        else if (!strcasecmp(tokens[i], \"tlsv1.3\")) {\n#ifdef TLS1_3_VERSION\n            protocols |= REDIS_TLS_PROTO_TLSv1_3;\n#else\n            serverLog(LL_WARNING, \"TLSv1.3 is specified in tls-protocols but not supported by OpenSSL.\");\n            protocols = -1;\n            break;\n#endif\n        } else {\n            serverLog(LL_WARNING, \"Invalid tls-protocols specified. \"\n                    \"Use a combination of 'TLSv1', 'TLSv1.1', 'TLSv1.2' and 'TLSv1.3'.\");\n            protocols = -1;\n            break;\n        }\n    }\n    sdsfreesplitres(tokens, count);\n\n    return protocols;\n}\n\n/**\n * OpenSSL global initialization and locking handling callbacks.\n * Note that this is only required for OpenSSL < 1.1.0.\n */\n\n#if OPENSSL_VERSION_NUMBER < 0x10100000L\n#define USE_CRYPTO_LOCKS\n#endif\n\n#ifdef USE_CRYPTO_LOCKS\n\nstatic pthread_mutex_t *openssl_locks;\n\nstatic void sslLockingCallback(int mode, int lock_id, const char *f, int line) {\n    pthread_mutex_t *mt = openssl_locks + lock_id;\n\n    if (mode & CRYPTO_LOCK) {\n        pthread_mutex_lock(mt);\n    } else {\n        pthread_mutex_unlock(mt);\n    }\n\n    (void)f;\n    (void)line;\n}\n\nstatic void initCryptoLocks(void) {\n    unsigned i, nlocks;\n    if (CRYPTO_get_locking_callback() != NULL) {\n        /* Someone already set the callback before us. Don't destroy it! 
*/\n        return;\n    }\n    nlocks = CRYPTO_num_locks();\n    openssl_locks = zmalloc(sizeof(*openssl_locks) * nlocks);\n    for (i = 0; i < nlocks; i++) {\n        pthread_mutex_init(openssl_locks + i, NULL);\n    }\n    CRYPTO_set_locking_callback(sslLockingCallback);\n}\n#endif /* USE_CRYPTO_LOCKS */\n\nstatic void tlsInit(void) {\n    /* Enable configuring OpenSSL using the standard openssl.cnf\n     * OPENSSL_config()/OPENSSL_init_crypto() should be the first \n     * call to the OpenSSL* library.\n     *  - OPENSSL_config() should be used for OpenSSL versions < 1.1.0\n     *  - OPENSSL_init_crypto() should be used for OpenSSL versions >= 1.1.0\n     */\n    #if OPENSSL_VERSION_NUMBER < 0x10100000L\n    OPENSSL_config(NULL);\n    SSL_load_error_strings();\n    SSL_library_init();\n    #elif OPENSSL_VERSION_NUMBER < 0x10101000L\n    OPENSSL_init_crypto(OPENSSL_INIT_LOAD_CONFIG, NULL);\n    #else\n    OPENSSL_init_crypto(OPENSSL_INIT_LOAD_CONFIG|OPENSSL_INIT_ATFORK, NULL);\n    #endif\n\n#ifdef USE_CRYPTO_LOCKS\n    initCryptoLocks();\n#endif\n\n    if (!RAND_poll()) {\n        serverLog(LL_WARNING, \"OpenSSL: Failed to seed random number generator.\");\n    }\n}\n\nstatic void tlsCleanup(void) {\n    if (redis_tls_ctx) {\n        SSL_CTX_free(redis_tls_ctx);\n        redis_tls_ctx = NULL;\n    }\n    if (redis_tls_client_ctx) {\n        SSL_CTX_free(redis_tls_client_ctx);\n        redis_tls_client_ctx = NULL;\n    }\n\n    #if OPENSSL_VERSION_NUMBER >= 0x10100000L && !defined(LIBRESSL_VERSION_NUMBER)\n    // unavailable on LibreSSL\n    OPENSSL_cleanup();\n    #endif\n}\n\n/* Callback for passing a keyfile password stored as an sds to OpenSSL */\nstatic int tlsPasswordCallback(char *buf, int size, int rwflag, void *u) {\n    UNUSED(rwflag);\n\n    const char *pass = u;\n    size_t pass_len;\n\n    if (!pass) return -1;\n    pass_len = strlen(pass);\n    if (pass_len > (size_t) size) return -1;\n    memcpy(buf, pass, pass_len);\n\n    return (int) 
pass_len;\n}\n\n/* Create a *base* SSL_CTX using the SSL configuration provided. The base context\n * includes everything that's common for both client-side and server-side connections.\n */\nstatic SSL_CTX *createSSLContext(redisTLSContextConfig *ctx_config, int protocols, int client) {\n    const char *cert_file = client ? ctx_config->client_cert_file : ctx_config->cert_file;\n    const char *key_file = client ? ctx_config->client_key_file : ctx_config->key_file;\n    const char *key_file_pass = client ? ctx_config->client_key_file_pass : ctx_config->key_file_pass;\n    char errbuf[256];\n    SSL_CTX *ctx = NULL;\n\n    ctx = SSL_CTX_new(SSLv23_method());\n    if (!ctx) goto error;\n\n    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2|SSL_OP_NO_SSLv3);\n\n#ifdef SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS\n    SSL_CTX_set_options(ctx, SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS);\n#endif\n\n    if (!(protocols & REDIS_TLS_PROTO_TLSv1))\n        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1);\n    if (!(protocols & REDIS_TLS_PROTO_TLSv1_1))\n        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1_1);\n#ifdef SSL_OP_NO_TLSv1_2\n    if (!(protocols & REDIS_TLS_PROTO_TLSv1_2))\n        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1_2);\n#endif\n#ifdef SSL_OP_NO_TLSv1_3\n    if (!(protocols & REDIS_TLS_PROTO_TLSv1_3))\n        SSL_CTX_set_options(ctx, SSL_OP_NO_TLSv1_3);\n#endif\n\n#ifdef SSL_OP_NO_COMPRESSION\n    SSL_CTX_set_options(ctx, SSL_OP_NO_COMPRESSION);\n#endif\n\n    SSL_CTX_set_mode(ctx, SSL_MODE_ENABLE_PARTIAL_WRITE|SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER);\n    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL);\n\n    SSL_CTX_set_default_passwd_cb(ctx, tlsPasswordCallback);\n    SSL_CTX_set_default_passwd_cb_userdata(ctx, (void *) key_file_pass);\n\n    if (SSL_CTX_use_certificate_chain_file(ctx, cert_file) <= 0) {\n        ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));\n        serverLog(LL_WARNING, \"Failed to load certificate: %s: %s\", cert_file, 
errbuf);\n        goto error;\n    }\n\n    if (SSL_CTX_use_PrivateKey_file(ctx, key_file, SSL_FILETYPE_PEM) <= 0) {\n        ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));\n        serverLog(LL_WARNING, \"Failed to load private key: %s: %s\", key_file, errbuf);\n        goto error;\n    }\n\n    if ((ctx_config->ca_cert_file || ctx_config->ca_cert_dir) &&\n        SSL_CTX_load_verify_locations(ctx, ctx_config->ca_cert_file, ctx_config->ca_cert_dir) <= 0) {\n        ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));\n        serverLog(LL_WARNING, \"Failed to configure CA certificate(s) file/directory: %s\", errbuf);\n        goto error;\n    }\n\n    if (ctx_config->ciphers && !SSL_CTX_set_cipher_list(ctx, ctx_config->ciphers)) {\n        serverLog(LL_WARNING, \"Failed to configure ciphers: %s\", ctx_config->ciphers);\n        goto error;\n    }\n\n#ifdef TLS1_3_VERSION\n    if (ctx_config->ciphersuites && !SSL_CTX_set_ciphersuites(ctx, ctx_config->ciphersuites)) {\n        serverLog(LL_WARNING, \"Failed to configure ciphersuites: %s\", ctx_config->ciphersuites);\n        goto error;\n    }\n#endif\n\n    return ctx;\n\nerror:\n    if (ctx) SSL_CTX_free(ctx);\n    return NULL;\n}\n\n/* Attempt to configure/reconfigure TLS. 
This operation is atomic and will\n * leave the SSL_CTX unchanged if it fails.\n * @priv: config of redisTLSContextConfig.\n * @reconfigure: if true, ignore the previous configuration; if false, only\n *               configure from @ctx_config if redis_tls_ctx is NULL.\n */\nstatic int tlsConfigure(void *priv, int reconfigure) {\n    redisTLSContextConfig *ctx_config = (redisTLSContextConfig *)priv;\n    char errbuf[256];\n    SSL_CTX *ctx = NULL;\n    SSL_CTX *client_ctx = NULL;\n\n    if (!reconfigure && redis_tls_ctx) {\n        return C_OK;\n    }\n\n    if (!ctx_config->cert_file) {\n        serverLog(LL_WARNING, \"No tls-cert-file configured!\");\n        goto error;\n    }\n\n    if (!ctx_config->key_file) {\n        serverLog(LL_WARNING, \"No tls-key-file configured!\");\n        goto error;\n    }\n\n    if (((server.tls_auth_clients != TLS_CLIENT_AUTH_NO) || server.tls_cluster || server.tls_replication) &&\n            !ctx_config->ca_cert_file && !ctx_config->ca_cert_dir) {\n        serverLog(LL_WARNING, \"Either tls-ca-cert-file or tls-ca-cert-dir must be specified when tls-cluster, tls-replication or tls-auth-clients are enabled!\");\n        goto error;\n    }\n\n    int protocols = parseProtocolsConfig(ctx_config->protocols);\n    if (protocols == -1) goto error;\n\n    /* Create server side/general context */\n    ctx = createSSLContext(ctx_config, protocols, 0);\n    if (!ctx) goto error;\n\n    if (ctx_config->session_caching) {\n        SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_SERVER);\n        SSL_CTX_sess_set_cache_size(ctx, ctx_config->session_cache_size);\n        SSL_CTX_set_timeout(ctx, ctx_config->session_cache_timeout);\n        SSL_CTX_set_session_id_context(ctx, (void *) \"redis\", 5);\n    } else {\n        SSL_CTX_set_session_cache_mode(ctx, SSL_SESS_CACHE_OFF);\n    }\n\n#ifdef SSL_OP_NO_CLIENT_RENEGOTIATION\n    SSL_CTX_set_options(ctx, SSL_OP_NO_CLIENT_RENEGOTIATION);\n#endif\n\n    if (ctx_config->prefer_server_ciphers)\n      
  SSL_CTX_set_options(ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);\n\n#if ((OPENSSL_VERSION_NUMBER < 0x30000000L) && defined(SSL_CTX_set_ecdh_auto))\n    SSL_CTX_set_ecdh_auto(ctx, 1);\n#endif\n    SSL_CTX_set_options(ctx, SSL_OP_SINGLE_DH_USE);\n\n    if (ctx_config->dh_params_file) {\n        FILE *dhfile = fopen(ctx_config->dh_params_file, \"r\");\n        if (!dhfile) {\n            serverLog(LL_WARNING, \"Failed to load %s: %s\", ctx_config->dh_params_file, strerror(errno));\n            goto error;\n        }\n\n#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)\n        EVP_PKEY *pkey = NULL;\n        OSSL_DECODER_CTX *dctx = OSSL_DECODER_CTX_new_for_pkey(\n                &pkey, \"PEM\", NULL, \"DH\", OSSL_KEYMGMT_SELECT_DOMAIN_PARAMETERS, NULL, NULL);\n        if (!dctx) {\n            serverLog(LL_WARNING, \"No decoder for DH params.\");\n            fclose(dhfile);\n            goto error;\n        }\n        if (!OSSL_DECODER_from_fp(dctx, dhfile)) {\n            serverLog(LL_WARNING, \"%s: failed to read DH params.\", ctx_config->dh_params_file);\n            OSSL_DECODER_CTX_free(dctx);\n            fclose(dhfile);\n            goto error;\n        }\n\n        OSSL_DECODER_CTX_free(dctx);\n        fclose(dhfile);\n\n        if (SSL_CTX_set0_tmp_dh_pkey(ctx, pkey) <= 0) {\n            ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));\n            serverLog(LL_WARNING, \"Failed to load DH params file: %s: %s\", ctx_config->dh_params_file, errbuf);\n            EVP_PKEY_free(pkey);\n            goto error;\n        }\n        /* Not freeing pkey, it is owned by OpenSSL now */\n#else\n        DH *dh = PEM_read_DHparams(dhfile, NULL, NULL, NULL);\n        fclose(dhfile);\n        if (!dh) {\n            serverLog(LL_WARNING, \"%s: failed to read DH params.\", ctx_config->dh_params_file);\n            goto error;\n        }\n\n        if (SSL_CTX_set_tmp_dh(ctx, dh) <= 0) {\n            ERR_error_string_n(ERR_get_error(), errbuf, sizeof(errbuf));\n           
 serverLog(LL_WARNING, \"Failed to load DH params file: %s: %s\", ctx_config->dh_params_file, errbuf);\n            DH_free(dh);\n            goto error;\n        }\n\n        DH_free(dh);\n#endif\n    } else {\n#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)\n        SSL_CTX_set_dh_auto(ctx, 1);\n#endif\n    }\n\n    /* If a client-side certificate is configured, create an explicit client context */\n    if (ctx_config->client_cert_file && ctx_config->client_key_file) {\n        client_ctx = createSSLContext(ctx_config, protocols, 1);\n        if (!client_ctx) goto error;\n    }\n\n    SSL_CTX_free(redis_tls_ctx);\n    SSL_CTX_free(redis_tls_client_ctx);\n    redis_tls_ctx = ctx;\n    redis_tls_client_ctx = client_ctx;\n\n    return C_OK;\n\nerror:\n    if (ctx) SSL_CTX_free(ctx);\n    if (client_ctx) SSL_CTX_free(client_ctx);\n    return C_ERR;\n}\n\n#ifdef TLS_DEBUGGING\n#define TLSCONN_DEBUG(fmt, ...) \\\n    serverLog(LL_DEBUG, \"TLSCONN: \" fmt, __VA_ARGS__)\n#else\n#define TLSCONN_DEBUG(fmt, ...)\n#endif\n\nstatic ConnectionType CT_TLS;\n\n/* Normal socket connections have a simple events/handler correlation.\n *\n * With TLS connections we need to handle cases where during a logical read\n * or write operation, the SSL library asks to block for the opposite\n * socket operation.\n *\n * When this happens, we need to do two things:\n * 1. Make sure we register for the event.\n * 2. Make sure we know which handler needs to execute when the\n *    event fires.  
That is, if we notify the caller of a write operation\n *    that it blocks, and SSL asks for a read, we need to trigger the\n *    write handler again on the next read event.\n *\n */\n\ntypedef enum {\n    WANT_READ = 1,\n    WANT_WRITE\n} WantIOType;\n\n#define TLS_CONN_FLAG_READ_WANT_WRITE   (1<<0)\n#define TLS_CONN_FLAG_WRITE_WANT_READ   (1<<1)\n#define TLS_CONN_FLAG_FD_SET            (1<<2)\n\ntypedef struct tls_connection {\n    connection c;\n    int flags;\n    SSL *ssl;\n    char *ssl_error;\n    listNode *pending_list_node;\n} tls_connection;\n\nstatic connection *createTLSConnection(struct aeEventLoop *el, int client_side) {\n    SSL_CTX *ctx = redis_tls_ctx;\n    if (client_side && redis_tls_client_ctx)\n        ctx = redis_tls_client_ctx;\n    tls_connection *conn = zcalloc(sizeof(tls_connection));\n    conn->c.type = &CT_TLS;\n    conn->c.fd = -1;\n    conn->c.el = el;\n    conn->c.iovcnt = IOV_MAX;\n    conn->ssl = SSL_new(ctx);\n    return (connection *) conn;\n}\n\nstatic connection *connCreateTLS(struct aeEventLoop *el) {\n    return createTLSConnection(el, 1);\n}\n\n/* Fetch the latest OpenSSL error and store it in the connection */\nstatic void updateTLSError(tls_connection *conn) {\n    conn->c.last_errno = 0;\n    if (conn->ssl_error) zfree(conn->ssl_error);\n    conn->ssl_error = zmalloc(512);\n    ERR_error_string_n(ERR_get_error(), conn->ssl_error, 512);\n}\n\n/* Create a new TLS connection that is already associated with\n * an accepted underlying file descriptor.\n *\n * The socket is not ready for I/O until connAccept() was called and\n * invoked the connection-level accept handler.\n *\n * Callers should use connGetState() and verify the created connection\n * is not in an error state.\n */\nstatic connection *connCreateAcceptedTLS(struct aeEventLoop *el, int fd, void *priv) {\n    int require_auth = *(int *)priv;\n    tls_connection *conn = (tls_connection *) createTLSConnection(el, 0);\n    conn->c.fd = fd;\n    conn->c.el = el;\n    
conn->c.state = CONN_STATE_ACCEPTING;\n\n    if (!conn->ssl) {\n        updateTLSError(conn);\n        conn->c.state = CONN_STATE_ERROR;\n        return (connection *) conn;\n    }\n\n    switch (require_auth) {\n        case TLS_CLIENT_AUTH_NO:\n            SSL_set_verify(conn->ssl, SSL_VERIFY_NONE, NULL);\n            break;\n        case TLS_CLIENT_AUTH_OPTIONAL:\n            SSL_set_verify(conn->ssl, SSL_VERIFY_PEER, NULL);\n            break;\n        default: /* TLS_CLIENT_AUTH_YES, also fail-secure */\n            SSL_set_verify(conn->ssl, SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT, NULL);\n            break;\n    }\n\n    SSL_set_fd(conn->ssl, conn->c.fd);\n    SSL_set_accept_state(conn->ssl);\n\n    return (connection *) conn;\n}\n\nstatic void tlsEventHandler(struct aeEventLoop *el, int fd, void *clientData, int mask);\nstatic void updateSSLEvent(tls_connection *conn);\n\n/* Process the return code received from OpenSSL.\n * Update the want parameter with expected I/O.\n * Update the connection's error state if a real error has occurred.\n * Returns an SSL error code, or 0 if no further handling is required.\n */\nstatic int handleSSLReturnCode(tls_connection *conn, int ret_value, WantIOType *want) {\n    if (ret_value <= 0) {\n        int ssl_err = SSL_get_error(conn->ssl, ret_value);\n        switch (ssl_err) {\n            case SSL_ERROR_WANT_WRITE:\n                *want = WANT_WRITE;\n                return 0;\n            case SSL_ERROR_WANT_READ:\n                *want = WANT_READ;\n                return 0;\n            case SSL_ERROR_SYSCALL:\n                conn->c.last_errno = errno;\n                if (conn->ssl_error) zfree(conn->ssl_error);\n                conn->ssl_error = errno ? zstrdup(strerror(errno)) : NULL;\n                break;\n            default:\n                /* Error! 
*/\n                updateTLSError(conn);\n                break;\n        }\n\n        return ssl_err;\n    }\n\n    return 0;\n}\n\n/* Handle OpenSSL return code following SSL_write() or SSL_read():\n *\n * - Updates conn state and last_errno.\n * - If update_event is nonzero, calls updateSSLEvent() when necessary.\n *\n * Returns ret_value, or -1 on error or dropped connection.\n */\nstatic int updateStateAfterSSLIO(tls_connection *conn, int ret_value, int update_event) {\n    /* If system call was interrupted, there's no need to go through the full\n     * OpenSSL error handling and just report this for the caller to retry the\n     * operation.\n     */\n    if (errno == EINTR) {\n        conn->c.last_errno = EINTR;\n        return -1;\n    }\n\n    if (ret_value <= 0) {\n        WantIOType want = 0;\n        int ssl_err;\n        if (!(ssl_err = handleSSLReturnCode(conn, ret_value, &want))) {\n            if (want == WANT_READ) conn->flags |= TLS_CONN_FLAG_WRITE_WANT_READ;\n            if (want == WANT_WRITE) conn->flags |= TLS_CONN_FLAG_READ_WANT_WRITE;\n            if (update_event) updateSSLEvent(conn);\n            errno = EAGAIN;\n            return -1;\n        } else {\n            if (ssl_err == SSL_ERROR_ZERO_RETURN ||\n                ((ssl_err == SSL_ERROR_SYSCALL && !errno))) {\n                conn->c.state = CONN_STATE_CLOSED;\n                return -1;\n            } else {\n                conn->c.state = CONN_STATE_ERROR;\n                return -1;\n            }\n        }\n    }\n\n    return ret_value;\n}\n\nstatic void registerSSLEvent(tls_connection *conn, WantIOType want) {\n    int mask = aeGetFileEvents(conn->c.el, conn->c.fd);\n\n    switch (want) {\n        case WANT_READ:\n            if (mask & AE_WRITABLE) aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_WRITABLE);\n            if (!(mask & AE_READABLE)) aeCreateFileEvent(conn->c.el, conn->c.fd, AE_READABLE,\n                        tlsEventHandler, conn);\n            break;\n     
   case WANT_WRITE:\n            if (mask & AE_READABLE) aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_READABLE);\n            if (!(mask & AE_WRITABLE)) aeCreateFileEvent(conn->c.el, conn->c.fd, AE_WRITABLE,\n                        tlsEventHandler, conn);\n            break;\n        default:\n            serverAssert(0);\n            break;\n    }\n}\n\nstatic void updateSSLEvent(tls_connection *conn) {\n    serverAssert(conn->c.el);\n    int mask = aeGetFileEvents(conn->c.el, conn->c.fd);\n    int need_read = conn->c.read_handler || (conn->flags & TLS_CONN_FLAG_WRITE_WANT_READ);\n    int need_write = conn->c.write_handler || (conn->flags & TLS_CONN_FLAG_READ_WANT_WRITE);\n\n    if (need_read && !(mask & AE_READABLE))\n        aeCreateFileEvent(conn->c.el, conn->c.fd, AE_READABLE, tlsEventHandler, conn);\n    if (!need_read && (mask & AE_READABLE))\n        aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_READABLE);\n\n    if (need_write && !(mask & AE_WRITABLE))\n        aeCreateFileEvent(conn->c.el, conn->c.fd, AE_WRITABLE, tlsEventHandler, conn);\n    if (!need_write && (mask & AE_WRITABLE))\n        aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_WRITABLE);\n}\n\n/* Add a connection to the list of connections with pending data that has\n * already been read from the socket but has not yet been served to the reader. */\nstatic void tlsPendingAdd(tls_connection *conn) {\n    if (!conn->c.el->privdata[1])\n        conn->c.el->privdata[1] = listCreate();\n\n    list *pending_list = conn->c.el->privdata[1];\n    if (!conn->pending_list_node) {\n        listAddNodeTail(pending_list, conn);\n        conn->pending_list_node = listLast(pending_list);\n    }\n}\n\n/* Removes a connection from the list of connections with pending data. 
*/\nstatic void tlsPendingRemove(tls_connection *conn) {\n    if (conn->pending_list_node) {\n        list *pending_list = conn->c.el->privdata[1];\n        listDelNode(pending_list, conn->pending_list_node);\n        conn->pending_list_node = NULL;\n    }\n}\n\nstatic int getCertFieldByName(X509 *cert, const char *field, char *out, size_t outlen) {\n    if (!cert || !field || !out) return 0;\n\n    int nid = -1;\n\n    if (!strcasecmp(field, \"CN\"))\n        nid = NID_commonName;\n    else if (!strcasecmp(field, \"O\"))\n        nid = NID_organizationName;\n    /* Add more mappings here as needed */\n\n    if (nid == -1) return 0;\n\n    X509_NAME *subject = X509_get_subject_name(cert);\n    if (!subject) return 0;\n\n    return X509_NAME_get_text_by_NID(subject, nid, out, outlen) > 0;\n}\n\nsds tlsGetPeerUsername(connection *conn_) {\n    tls_connection *conn = (tls_connection *)conn_;\n    if (!conn || !SSL_is_init_finished(conn->ssl)) return NULL;\n\n    /* Find the corresponding field name from the enum mapping */\n    const char *field = NULL;\n    switch (server.tls_ctx_config.client_auth_user) {\n    case TLS_CLIENT_FIELD_CN:\n        field = \"CN\";\n        break;\n    default:\n        return NULL;\n    }\n\n    if (!field) return NULL;\n\n    X509 *cert = SSL_get_peer_certificate(conn->ssl);\n    if (!cert) return NULL;\n\n    char field_value[256];\n    sds result = NULL;\n\n    if (getCertFieldByName(cert, field, field_value, sizeof(field_value))) {\n        result = sdsnew(field_value);\n    } else {\n        serverLog(LL_NOTICE, \"TLS: Failed to extract field '%s' from certificate\", field);\n    }\n\n    X509_free(cert);\n    return result;\n}\n\nstatic void tlsHandleEvent(tls_connection *conn, int mask) {\n    int ret, conn_error;\n\n    TLSCONN_DEBUG(\"tlsEventHandler(): fd=%d, state=%d, mask=%d, r=%d, w=%d, flags=%d\",\n            conn->c.fd, conn->c.state, mask, conn->c.read_handler != NULL, conn->c.write_handler != NULL,\n            
conn->flags);\n\n    switch (conn->c.state) {\n        case CONN_STATE_CONNECTING:\n            ERR_clear_error();\n            conn_error = anetGetError(conn->c.fd);\n            if (conn_error) {\n                conn->c.last_errno = conn_error;\n                conn->c.state = CONN_STATE_ERROR;\n            } else {\n                if (!(conn->flags & TLS_CONN_FLAG_FD_SET)) {\n                    SSL_set_fd(conn->ssl, conn->c.fd);\n                    conn->flags |= TLS_CONN_FLAG_FD_SET;\n                }\n                ret = SSL_connect(conn->ssl);\n                if (ret <= 0) {\n                    WantIOType want = 0;\n                    if (!handleSSLReturnCode(conn, ret, &want)) {\n                        registerSSLEvent(conn, want);\n\n                        /* Avoid hitting UpdateSSLEvent, which knows nothing\n                         * of what SSL_connect() wants and instead looks at our\n                         * R/W handlers.\n                         */\n                        return;\n                    }\n\n                    /* If not handled, it's an error */\n                    conn->c.state = CONN_STATE_ERROR;\n                } else {\n                    conn->c.state = CONN_STATE_CONNECTED;\n                }\n            }\n\n            if (!callHandler((connection *) conn, conn->c.conn_handler)) return;\n            conn->c.conn_handler = NULL;\n            break;\n        case CONN_STATE_ACCEPTING:\n            ERR_clear_error();\n            ret = SSL_accept(conn->ssl);\n            if (ret <= 0) {\n                WantIOType want = 0;\n                if (!handleSSLReturnCode(conn, ret, &want)) {\n                    /* Avoid hitting UpdateSSLEvent, which knows nothing\n                     * of what SSL_connect() wants and instead looks at our\n                     * R/W handlers.\n                     */\n                    registerSSLEvent(conn, want);\n                    return;\n                }\n\n                
/* If not handled, it's an error */\n                conn->c.state = CONN_STATE_ERROR;\n            } else {\n                conn->c.state = CONN_STATE_CONNECTED;\n            }\n\n            if (!callHandler((connection *) conn, conn->c.conn_handler)) return;\n            conn->c.conn_handler = NULL;\n            break;\n        case CONN_STATE_CONNECTED:\n        {\n            int call_read = ((mask & AE_READABLE) && conn->c.read_handler) ||\n                ((mask & AE_WRITABLE) && (conn->flags & TLS_CONN_FLAG_READ_WANT_WRITE));\n            int call_write = ((mask & AE_WRITABLE) && conn->c.write_handler) ||\n                ((mask & AE_READABLE) && (conn->flags & TLS_CONN_FLAG_WRITE_WANT_READ));\n\n            /* Normally we execute the readable event first, and the writable\n             * event later. This is useful as sometimes we may be able\n             * to serve the reply of a query immediately after processing the\n             * query.\n             *\n             * However if WRITE_BARRIER is set in the mask, our application is\n             * asking us to do the reverse: never fire the writable event\n             * after the readable. In such a case, we invert the calls.\n             * This is useful when, for instance, we want to do things\n             * in the beforeSleep() hook, like fsyncing a file to disk,\n             * before replying to a client. */\n            int invert = conn->c.flags & CONN_FLAG_WRITE_BARRIER;\n\n            if (!invert && call_read) {\n                conn->flags &= ~TLS_CONN_FLAG_READ_WANT_WRITE;\n                if (!callHandler((connection *) conn, conn->c.read_handler)) return;\n            }\n\n            /* Fire the writable event. 
*/\n            if (call_write) {\n                conn->flags &= ~TLS_CONN_FLAG_WRITE_WANT_READ;\n                if (!callHandler((connection *) conn, conn->c.write_handler)) return;\n            }\n\n            /* If we have to invert the call, fire the readable event now\n             * after the writable one. */\n            if (invert && call_read) {\n                conn->flags &= ~TLS_CONN_FLAG_READ_WANT_WRITE;\n                if (!callHandler((connection *) conn, conn->c.read_handler)) return;\n            }\n\n            /* If SSL has pending data that was already read from the socket,\n             * we're at risk of not calling the read handler again; make sure\n             * to add it to a list of pending connections that should be\n             * handled anyway. */\n            if ((mask & AE_READABLE)) {\n                if (SSL_pending(conn->ssl) > 0) {\n                    tlsPendingAdd(conn);\n                } else if (conn->pending_list_node) {\n                    tlsPendingRemove(conn);\n                }\n            }\n\n            break;\n        }\n        default:\n            break;\n    }\n\n    /* The event loop may have been unbound during the event processing above. 
*/\n    if (conn->c.el) updateSSLEvent(conn);\n}\n\nstatic void tlsEventHandler(struct aeEventLoop *el, int fd, void *clientData, int mask) {\n    UNUSED(el);\n    UNUSED(fd);\n    tls_connection *conn = clientData;\n    tlsHandleEvent(conn, mask);\n}\n\nstatic void tlsAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    int cport, cfd;\n    int max = server.max_new_tls_conns_per_cycle;\n    char cip[NET_IP_STR_LEN];\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    while(max--) {\n        cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);\n        if (cfd == ANET_ERR) {\n            if (anetAcceptFailureNeedsRetry(errno))\n                continue;\n            if (errno != EWOULDBLOCK)\n                serverLog(LL_WARNING,\n                    \"Accepting client connection: %s\", server.neterr);\n            return;\n        }\n        serverLog(LL_VERBOSE,\"Accepted %s:%d\", cip, cport);\n        acceptCommonHandler(connCreateAcceptedTLS(el,cfd,&server.tls_auth_clients), 0, cip);\n    }\n}\n\nstatic int connTLSAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {\n    return anetFdToString(conn->fd, ip, ip_len, port, remote);\n}\n\nstatic int connTLSIsLocal(connection *conn) {\n    return connectionTypeTcp()->is_local(conn);\n}\n\nstatic int connTLSListen(connListener *listener) {\n    return listenToPort(listener);\n}\n\nstatic void connTLSShutdown(connection *conn_) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    if (conn->ssl) {\n        if (conn->c.state == CONN_STATE_CONNECTED)\n            SSL_shutdown(conn->ssl);\n        SSL_free(conn->ssl);\n        conn->ssl = NULL;\n    }\n\n    connectionTypeTcp()->shutdown(conn_);\n}\n\nstatic void connTLSClose(connection *conn_) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    if (conn->ssl) {\n        if (conn->c.state == CONN_STATE_CONNECTED)\n            SSL_shutdown(conn->ssl);\n        SSL_free(conn->ssl);\n        conn->ssl = 
NULL;\n    }\n\n    if (conn->ssl_error) {\n        zfree(conn->ssl_error);\n        conn->ssl_error = NULL;\n    }\n\n    if (conn->pending_list_node) {\n        list *pending_list = conn->c.el->privdata[1];\n        listDelNode(pending_list, conn->pending_list_node);\n        conn->pending_list_node = NULL;\n    }\n\n    connectionTypeTcp()->close(conn_);\n}\n\nstatic int connTLSAccept(connection *_conn, ConnectionCallbackFunc accept_handler) {\n    tls_connection *conn = (tls_connection *) _conn;\n    int ret;\n\n    if (conn->c.state != CONN_STATE_ACCEPTING) return C_ERR;\n    ERR_clear_error();\n\n    /* Try to accept */\n    conn->c.conn_handler = accept_handler;\n    ret = SSL_accept(conn->ssl);\n\n    if (ret <= 0) {\n        WantIOType want = 0;\n        if (!handleSSLReturnCode(conn, ret, &want)) {\n            registerSSLEvent(conn, want);   /* We'll fire back */\n            return C_OK;\n        } else {\n            conn->c.state = CONN_STATE_ERROR;\n            return C_ERR;\n        }\n    }\n\n    conn->c.state = CONN_STATE_CONNECTED;\n    if (!callHandler((connection *) conn, conn->c.conn_handler)) return C_OK;\n    conn->c.conn_handler = NULL;\n\n    return C_OK;\n}\n\nstatic int connTLSConnect(connection *conn_, const char *addr, int port, const char *src_addr, ConnectionCallbackFunc connect_handler) {\n    tls_connection *conn = (tls_connection *) conn_;\n    unsigned char addr_buf[sizeof(struct in6_addr)];\n\n    if (conn->c.state != CONN_STATE_NONE) return C_ERR;\n    ERR_clear_error();\n\n    /* Check whether addr is an IP address, if not, use the value for Server Name Indication */\n    if (inet_pton(AF_INET, addr, addr_buf) != 1 && inet_pton(AF_INET6, addr, addr_buf) != 1) {\n        SSL_set_tlsext_host_name(conn->ssl, addr);\n    }\n\n    /* Initiate Socket connection first */\n    if (connectionTypeTcp()->connect(conn_, addr, port, src_addr, connect_handler) == C_ERR) return C_ERR;\n\n    /* Return now, once the socket is connected we'll 
initiate\n     * TLS connection from the event handler.\n     */\n    return C_OK;\n}\n\nstatic void connTLSUnbindEventLoop(connection *conn_) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    /* We need to remove all events from the old event loop. The subsequent\n     * updateSSLEvent() will add the appropriate events to the new event loop. */\n    if (conn->c.el) {\n        int mask = aeGetFileEvents(conn->c.el, conn->c.fd);\n        if (mask & AE_READABLE) aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_READABLE);\n        if (mask & AE_WRITABLE) aeDeleteFileEvent(conn->c.el, conn->c.fd, AE_WRITABLE);\n\n        /* Check if there are pending events and handle accordingly. */\n        int has_pending = conn->pending_list_node != NULL;\n        if (has_pending) tlsPendingRemove(conn);\n    }\n}\n\nstatic int connTLSRebindEventLoop(connection *conn_, aeEventLoop *el) {\n    tls_connection *conn = (tls_connection *) conn_;\n    serverAssert(!conn->c.el && !conn->c.read_handler &&\n                 !conn->c.write_handler && !conn->pending_list_node);\n    conn->c.el = el;\n    if (el && SSL_pending(conn->ssl)) tlsPendingAdd(conn);\n    /* Add the appropriate events to the new event loop. */\n    updateSSLEvent((tls_connection *) conn);\n    return C_OK;\n}\n\nstatic int connTLSWrite(connection *conn_, const void *data, size_t data_len) {\n    tls_connection *conn = (tls_connection *) conn_;\n    int ret;\n\n    if (conn->c.state != CONN_STATE_CONNECTED) return -1;\n    ERR_clear_error();\n    ret = SSL_write(conn->ssl, data, data_len);\n    return updateStateAfterSSLIO(conn, ret, 1);\n}\n\nstatic int connTLSWritev(connection *conn_, const struct iovec *iov, int iovcnt) {\n    if (iovcnt == 1) return connTLSWrite(conn_, iov[0].iov_base, iov[0].iov_len);\n\n    /* Accumulate the amount of bytes of each buffer and check if it exceeds NET_MAX_WRITES_PER_EVENT. 
*/\n    size_t iov_bytes_len = 0;\n    for (int i = 0; i < iovcnt; i++) {\n        iov_bytes_len += iov[i].iov_len;\n        if (iov_bytes_len > NET_MAX_WRITES_PER_EVENT) break;\n    }\n\n    /* If the total size of all buffers exceeds NET_MAX_WRITES_PER_EVENT, the\n     * memory copying needed to reduce system calls is not worthwhile;\n     * instead, invoke connTLSWrite() multiple times to avoid memory copies. */\n    if (iov_bytes_len > NET_MAX_WRITES_PER_EVENT) {\n        ssize_t tot_sent = 0;\n        for (int i = 0; i < iovcnt; i++) {\n            ssize_t sent = connTLSWrite(conn_, iov[i].iov_base, iov[i].iov_len);\n            if (sent <= 0) return tot_sent > 0 ? tot_sent : sent;\n            tot_sent += sent;\n            if ((size_t) sent != iov[i].iov_len) break;\n        }\n        return tot_sent;\n    }\n\n    /* The total size of all buffers is less than NET_MAX_WRITES_PER_EVENT,\n     * so extra memory copies are worth the reduction in system calls:\n     * concatenate the scattered buffers into a contiguous piece of memory\n     * and send it with one call to connTLSWrite(). 
*/\n    char buf[iov_bytes_len];\n    size_t offset = 0;\n    for (int i = 0; i < iovcnt; i++) {\n        memcpy(buf + offset, iov[i].iov_base, iov[i].iov_len);\n        offset += iov[i].iov_len;\n    }\n    return connTLSWrite(conn_, buf, iov_bytes_len);\n}\n\nstatic int connTLSRead(connection *conn_, void *buf, size_t buf_len) {\n    tls_connection *conn = (tls_connection *) conn_;\n    int ret;\n\n    if (conn->c.state != CONN_STATE_CONNECTED) return -1;\n    ERR_clear_error();\n    ret = SSL_read(conn->ssl, buf, buf_len);\n    return updateStateAfterSSLIO(conn, ret, 1);\n}\n\nstatic const char *connTLSGetLastError(connection *conn_) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    if (conn->ssl_error) return conn->ssl_error;\n    return NULL;\n}\n\nstatic int connTLSSetWriteHandler(connection *conn, ConnectionCallbackFunc func, int barrier) {\n    conn->write_handler = func;\n    if (barrier)\n        conn->flags |= CONN_FLAG_WRITE_BARRIER;\n    else\n        conn->flags &= ~CONN_FLAG_WRITE_BARRIER;\n    updateSSLEvent((tls_connection *) conn);\n    return C_OK;\n}\n\nstatic int connTLSSetReadHandler(connection *conn, ConnectionCallbackFunc func) {\n    conn->read_handler = func;\n    updateSSLEvent((tls_connection *) conn);\n    return C_OK;\n}\n\nstatic void setBlockingTimeout(tls_connection *conn, long long timeout) {\n    anetBlock(NULL, conn->c.fd);\n    anetSendTimeout(NULL, conn->c.fd, timeout);\n    anetRecvTimeout(NULL, conn->c.fd, timeout);\n}\n\nstatic void unsetBlockingTimeout(tls_connection *conn) {\n    anetNonBlock(NULL, conn->c.fd);\n    anetSendTimeout(NULL, conn->c.fd, 0);\n    anetRecvTimeout(NULL, conn->c.fd, 0);\n}\n\nstatic int connTLSBlockingConnect(connection *conn_, const char *addr, int port, long long timeout) {\n    tls_connection *conn = (tls_connection *) conn_;\n    int ret;\n\n    if (conn->c.state != CONN_STATE_NONE) return C_ERR;\n\n    /* Initiate socket blocking connect first */\n    if 
(connectionTypeTcp()->blocking_connect(conn_, addr, port, timeout) == C_ERR) return C_ERR;\n\n    /* Initiate TLS connection now.  We set up a send/recv timeout on the socket,\n     * which means the specified timeout will not be enforced accurately. */\n    SSL_set_fd(conn->ssl, conn->c.fd);\n    setBlockingTimeout(conn, timeout);\n    ERR_clear_error();\n\n    if ((ret = SSL_connect(conn->ssl)) <= 0) {\n        conn->c.state = CONN_STATE_ERROR;\n        return C_ERR;\n    }\n    unsetBlockingTimeout(conn);\n\n    conn->c.state = CONN_STATE_CONNECTED;\n    return C_OK;\n}\n\nstatic ssize_t connTLSSyncWrite(connection *conn_, char *ptr, ssize_t size, long long timeout) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    setBlockingTimeout(conn, timeout);\n    SSL_clear_mode(conn->ssl, SSL_MODE_ENABLE_PARTIAL_WRITE);\n    ERR_clear_error();\n    int ret = SSL_write(conn->ssl, ptr, size);\n    ret = updateStateAfterSSLIO(conn, ret, 0);\n    SSL_set_mode(conn->ssl, SSL_MODE_ENABLE_PARTIAL_WRITE);\n    unsetBlockingTimeout(conn);\n\n    return ret;\n}\n\nstatic ssize_t connTLSSyncRead(connection *conn_, char *ptr, ssize_t size, long long timeout) {\n    tls_connection *conn = (tls_connection *) conn_;\n\n    setBlockingTimeout(conn, timeout);\n    ERR_clear_error();\n    int ret = SSL_read(conn->ssl, ptr, size);\n    ret = updateStateAfterSSLIO(conn, ret, 0);\n    unsetBlockingTimeout(conn);\n\n    return ret;\n}\n\nstatic ssize_t connTLSSyncReadLine(connection *conn_, char *ptr, ssize_t size, long long timeout) {\n    tls_connection *conn = (tls_connection *) conn_;\n    ssize_t nread = 0;\n\n    setBlockingTimeout(conn, timeout);\n\n    size--;\n    while(size) {\n        char c;\n\n        ERR_clear_error();\n        int ret = SSL_read(conn->ssl, &c, 1);\n        ret = updateStateAfterSSLIO(conn, ret, 0);\n        if (ret <= 0) {\n            nread = -1;\n            goto exit;\n        }\n        if (c == '\\n') {\n            *ptr = '\\0';\n            
if (nread && *(ptr-1) == '\\r') *(ptr-1) = '\\0';\n            goto exit;\n        } else {\n            *ptr++ = c;\n            *ptr = '\\0';\n            nread++;\n        }\n        size--;\n    }\nexit:\n    unsetBlockingTimeout(conn);\n    return nread;\n}\n\nstatic const char *connTLSGetType(connection *conn_) {\n    (void) conn_;\n\n    return CONN_TYPE_TLS;\n}\n\nstatic int tlsHasPendingData(struct aeEventLoop *el) {\n    list *pending_list = el->privdata[1];\n    if (!pending_list)\n        return 0;\n    return listLength(pending_list) > 0;\n}\n\nstatic int tlsProcessPendingData(struct aeEventLoop *el) {\n    listIter li;\n    listNode *ln;\n\n    list *pending_list = el->privdata[1];\n    if (!pending_list) return 0;\n    int processed = listLength(pending_list);\n    listRewind(pending_list,&li);\n    while((ln = listNext(&li))) {\n        tls_connection *conn = listNodeValue(ln);\n        tlsHandleEvent(conn, AE_READABLE);\n    }\n    return processed;\n}\n\n/* Fetch the peer certificate used for authentication on the specified\n * connection and return it as a PEM-encoded sds.\n */\nstatic sds connTLSGetPeerCert(connection *conn_) {\n    tls_connection *conn = (tls_connection *) conn_;\n    if ((conn_->type != connectionTypeTls()) || !conn->ssl) return NULL;\n\n    X509 *cert = SSL_get_peer_certificate(conn->ssl);\n    if (!cert) return NULL;\n\n    BIO *bio = BIO_new(BIO_s_mem());\n    if (bio == NULL || !PEM_write_bio_X509(bio, cert)) {\n        if (bio != NULL) BIO_free(bio);\n        X509_free(cert);\n        return NULL;\n    }\n\n    const char *bio_ptr;\n    long long bio_len = BIO_get_mem_data(bio, &bio_ptr);\n    sds cert_pem = sdsnewlen(bio_ptr, bio_len);\n    BIO_free(bio);\n    X509_free(cert);\n\n    return cert_pem;\n}\n\nstatic ConnectionType CT_TLS = {\n    /* connection type */\n    .get_type = connTLSGetType,\n\n    /* connection type initialize & finalize & configure */\n    .init = tlsInit,\n    .cleanup = tlsCleanup,\n    
.configure = tlsConfigure,\n\n    /* ae & accept & listen & error & address handler */\n    .ae_handler = tlsEventHandler,\n    .accept_handler = tlsAcceptHandler,\n    .addr = connTLSAddr,\n    .is_local = connTLSIsLocal,\n    .listen = connTLSListen,\n\n    /* create/shutdown/close connection */\n    .conn_create = connCreateTLS,\n    .conn_create_accepted = connCreateAcceptedTLS,\n    .shutdown = connTLSShutdown,\n    .close = connTLSClose,\n\n    /* connect & accept */\n    .connect = connTLSConnect,\n    .blocking_connect = connTLSBlockingConnect,\n    .accept = connTLSAccept,\n\n    /* event loop */\n    .unbind_event_loop = connTLSUnbindEventLoop,\n    .rebind_event_loop = connTLSRebindEventLoop,\n\n    /* IO */\n    .read = connTLSRead,\n    .write = connTLSWrite,\n    .writev = connTLSWritev,\n    .set_write_handler = connTLSSetWriteHandler,\n    .set_read_handler = connTLSSetReadHandler,\n    .get_last_error = connTLSGetLastError,\n    .sync_write = connTLSSyncWrite,\n    .sync_read = connTLSSyncRead,\n    .sync_readline = connTLSSyncReadLine,\n\n    /* pending data */\n    .has_pending_data = tlsHasPendingData,\n    .process_pending_data = tlsProcessPendingData,\n\n    /* TLS specified methods */\n    .get_peer_cert = connTLSGetPeerCert,\n    .get_peer_username = tlsGetPeerUsername,\n};\n\nint RedisRegisterConnectionTypeTLS(void) {\n    return connTypeRegister(&CT_TLS);\n}\n\n#else   /* USE_OPENSSL */\n\nint RedisRegisterConnectionTypeTLS(void) {\n    serverLog(LL_VERBOSE, \"Connection type %s not builtin\", CONN_TYPE_TLS);\n    return C_ERR;\n}\n\n#endif\n\n#if BUILD_TLS_MODULE == 2 /* BUILD_MODULE */\n\n#include \"release.h\"\n\nint RedisModule_OnLoad(void *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    /* Connection modules must be part of the same build as redis. 
*/\n    if (strcmp(REDIS_BUILD_ID_RAW, redisBuildIdRaw())) {\n        serverLog(LL_NOTICE, \"Connection type %s was not built together with the redis-server used.\", CONN_TYPE_TLS);\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_Init(ctx,\"tls\",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Connection modules are available only during bootup. */\n    if ((RedisModule_GetContextFlags(ctx) & REDISMODULE_CTX_FLAGS_SERVER_STARTUP) == 0) {\n        serverLog(LL_NOTICE, \"Connection type %s can be loaded only during bootup\", CONN_TYPE_TLS);\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);\n\n    if(connTypeRegister(&CT_TLS) != C_OK)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(void *arg) {\n    UNUSED(arg);\n    serverLog(LL_NOTICE, \"Connection type %s can not be unloaded\", CONN_TYPE_TLS);\n    return REDISMODULE_ERR;\n}\n#endif\n"
  },
  {
    "path": "src/tracking.c",
    "content": "/* tracking.c - Client side caching: keys tracking and invalidation\n *\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"server.h\"\n\n/* The tracking table is constituted by a radix tree of keys, each pointing\n * to a radix tree of client IDs, used to track the clients that may have\n * certain keys in their local, client side, cache.\n *\n * When a client enables tracking with \"CLIENT TRACKING on\", each key served to\n * the client is remembered in the table mapping the keys to the client IDs.\n * Later, when a key is modified, all the clients that may have local copy\n * of such key will receive an invalidation message.\n *\n * Clients will normally take frequently requested objects in memory, removing\n * them when invalidation messages are received. */\nrax *TrackingTable = NULL;\nrax *PrefixTable = NULL;\nuint64_t TrackingTableTotalItems = 0; /* Total number of IDs stored across\n                                         the whole tracking table. This gives\n                                         an hint about the total memory we\n                                         are using server side for CSC. */\nrobj *TrackingChannelName;\n\n/* This is the structure that we have as value of the PrefixTable, and\n * represents the list of keys modified, and the list of clients that need\n * to be notified, for a given prefix. */\ntypedef struct bcastState {\n    rax *keys;      /* Keys modified in the current event loop cycle. */\n    rax *clients;   /* Clients subscribed to the notification events for this\n                       prefix. */\n} bcastState;\n\n/* Remove the tracking state from the client 'c'. 
Note that there is not much\n * to do for us here, if not to decrement the counter of the clients in\n * tracking mode, because we just store the ID of the client in the tracking\n * table, so we'll remove the ID reference in a lazy way. Otherwise when a\n * client with many entries in the table is removed, it would cost a lot of\n * time to do the cleanup. */\nvoid disableTracking(client *c) {\n    /* If this client is in broadcasting mode, we need to unsubscribe it\n     * from all the prefixes it is registered to. */\n    if (c->flags & CLIENT_TRACKING_BCAST) {\n        raxIterator ri;\n        raxStart(&ri,c->client_tracking_prefixes);\n        raxSeek(&ri,\"^\",NULL,0);\n        while(raxNext(&ri)) {\n            void *result;\n            int found = raxFind(PrefixTable,ri.key,ri.key_len,&result);\n            serverAssert(found);\n            bcastState *bs = result;\n            raxRemove(bs->clients,(unsigned char*)&c,sizeof(c),NULL);\n            /* Was it the last client? Remove the prefix from the\n             * table. */\n            if (raxSize(bs->clients) == 0) {\n                raxFree(bs->clients);\n                raxFree(bs->keys);\n                zfree(bs);\n                raxRemove(PrefixTable,ri.key,ri.key_len,NULL);\n            }\n        }\n        raxStop(&ri);\n        raxFree(c->client_tracking_prefixes);\n        c->client_tracking_prefixes = NULL;\n    }\n\n    /* Clear flags and adjust the count. */\n    if (c->flags & CLIENT_TRACKING) {\n        server.tracking_clients--;\n        c->flags &= ~(CLIENT_TRACKING|CLIENT_TRACKING_BROKEN_REDIR|\n                      CLIENT_TRACKING_BCAST|CLIENT_TRACKING_OPTIN|\n                      CLIENT_TRACKING_OPTOUT|CLIENT_TRACKING_CACHING|\n                      CLIENT_TRACKING_NOLOOP);\n    }\n}\n\nstatic int stringCheckPrefix(unsigned char *s1, size_t s1_len, unsigned char *s2, size_t s2_len) {\n    size_t min_length = s1_len < s2_len ? 
s1_len : s2_len;\n    return memcmp(s1,s2,min_length) == 0;\n}\n\n/* Check if any of the provided prefixes collide with one another or\n * with an existing prefix for the client. A collision is defined as two\n * prefixes that will emit an invalidation for the same key. If no prefix\n * collision is found, 1 is returned, otherwise 0 is returned and an error\n * describing the collision is emitted to the client. */\nint checkPrefixCollisionsOrReply(client *c, robj **prefixes, size_t numprefix) {\n    for (size_t i = 0; i < numprefix; i++) {\n        /* Check input list has no overlap with existing prefixes. */\n        if (c->client_tracking_prefixes) {\n            raxIterator ri;\n            raxStart(&ri,c->client_tracking_prefixes);\n            raxSeek(&ri,\"^\",NULL,0);\n            while(raxNext(&ri)) {\n                if (stringCheckPrefix(ri.key,ri.key_len,\n                    prefixes[i]->ptr,sdslen(prefixes[i]->ptr)))\n                {\n                    sds collision = sdsnewlen(ri.key,ri.key_len);\n                    addReplyErrorFormat(c,\n                        \"Prefix '%s' overlaps with an existing prefix '%s'. \"\n                        \"Prefixes for a single client must not overlap.\",\n                        (unsigned char *)prefixes[i]->ptr,\n                        (unsigned char *)collision);\n                    sdsfree(collision);\n                    raxStop(&ri);\n                    return 0;\n                }\n            }\n            raxStop(&ri);\n        }\n        /* Check input has no overlap with itself. */\n        for (size_t j = i + 1; j < numprefix; j++) {\n            if (stringCheckPrefix(prefixes[i]->ptr,sdslen(prefixes[i]->ptr),\n                prefixes[j]->ptr,sdslen(prefixes[j]->ptr)))\n            {\n                addReplyErrorFormat(c,\n                    \"Prefix '%s' overlaps with another provided prefix '%s'. 
\"\n                    \"Prefixes for a single client must not overlap.\",\n                    (unsigned char *)prefixes[i]->ptr,\n                    (unsigned char *)prefixes[j]->ptr);\n                return 0;\n            }\n        }\n    }\n    return 1;\n}\n\n/* Set the client 'c' to track the prefix 'prefix'. If the client 'c' is\n * already registered for the specified prefix, no operation is performed. */\nvoid enableBcastTrackingForPrefix(client *c, char *prefix, size_t plen) {\n    void *result;\n    bcastState *bs;\n    /* If this is the first client subscribing to such prefix, create\n     * the prefix in the table. */\n    if (!raxFind(PrefixTable,(unsigned char*)prefix,plen,&result)) {\n        bs = zmalloc(sizeof(*bs));\n        bs->keys = raxNew();\n        bs->clients = raxNew();\n        raxInsert(PrefixTable,(unsigned char*)prefix,plen,bs,NULL);\n    } else {\n        bs = result;\n    }\n    if (raxTryInsert(bs->clients,(unsigned char*)&c,sizeof(c),NULL,NULL)) {\n        if (c->client_tracking_prefixes == NULL)\n            c->client_tracking_prefixes = raxNew();\n        raxInsert(c->client_tracking_prefixes,\n                  (unsigned char*)prefix,plen,NULL,NULL);\n    }\n}\n\n/* Enable the tracking state for the client 'c', and as a side effect allocates\n * the tracking table if needed. If the 'redirect_to' argument is non zero, the\n * invalidation messages for this client will be sent to the client ID\n * specified by the 'redirect_to' argument. Note that if such client will\n * eventually get freed, we'll send a message to the original client to\n * inform it of the condition. Multiple clients can redirect the invalidation\n * messages to the same client ID. 
*/\nvoid enableTracking(client *c, uint64_t redirect_to, uint64_t options, robj **prefix, size_t numprefix) {\n    if (!(c->flags & CLIENT_TRACKING)) server.tracking_clients++;\n    c->flags |= CLIENT_TRACKING;\n    c->flags &= ~(CLIENT_TRACKING_BROKEN_REDIR|CLIENT_TRACKING_BCAST|\n                  CLIENT_TRACKING_OPTIN|CLIENT_TRACKING_OPTOUT|\n                  CLIENT_TRACKING_NOLOOP);\n    c->client_tracking_redirection = redirect_to;\n\n    /* This may be the first client we ever enable. Create the tracking\n     * table if it does not exist. */\n    if (TrackingTable == NULL) {\n        TrackingTable = raxNew();\n        PrefixTable = raxNew();\n        TrackingChannelName = createStringObject(\"__redis__:invalidate\",20);\n    }\n\n    /* For broadcasting, set the list of prefixes in the client. */\n    if (options & CLIENT_TRACKING_BCAST) {\n        c->flags |= CLIENT_TRACKING_BCAST;\n        if (numprefix == 0) enableBcastTrackingForPrefix(c,\"\",0);\n        for (size_t j = 0; j < numprefix; j++) {\n            sds sdsprefix = prefix[j]->ptr;\n            enableBcastTrackingForPrefix(c,sdsprefix,sdslen(sdsprefix));\n        }\n    }\n\n    /* Set the remaining flags that don't need any special handling. */\n    c->flags |= options & (CLIENT_TRACKING_OPTIN|CLIENT_TRACKING_OPTOUT|\n                           CLIENT_TRACKING_NOLOOP);\n}\n\n/* This function is called after the execution of a readonly command in the\n * case the client 'c' has keys tracking enabled and the tracking is not\n * in BCAST mode. It will populate the tracking invalidation table according\n * to the keys the user fetched, so that Redis will know which clients\n * should receive an invalidation message when certain groups of keys\n * are modified. */\nvoid trackingRememberKeys(client *tracking, client *executing) {\n    /* Shard channels are treated as special keys: client\n     * libraries rely on the `COMMAND` command to discover the node\n     * to connect to. 
These channels don't need to be tracked. */\n    if (executing->cmd->flags & CMD_PUBSUB) {\n        return;\n    }\n\n    /* Return if we are in optin/out mode and the right CACHING command\n     * was/wasn't given in order to modify the default behavior. */\n    uint64_t optin = tracking->flags & CLIENT_TRACKING_OPTIN;\n    uint64_t optout = tracking->flags & CLIENT_TRACKING_OPTOUT;\n    uint64_t caching_given = tracking->flags & CLIENT_TRACKING_CACHING;\n    if ((optin && !caching_given) || (optout && caching_given)) return;\n\n    getKeysResult result = GETKEYS_RESULT_INIT;\n    int numkeys = getKeysFromCommand(executing->cmd,executing->argv,executing->argc,&result);\n    if (!numkeys) {\n        getKeysFreeResult(&result);\n        return;\n    }\n\n    keyReference *keys = result.keys;\n\n    for(int j = 0; j < numkeys; j++) {\n        int idx = keys[j].pos;\n        sds sdskey = executing->argv[idx]->ptr;\n        void *result;\n        rax *ids;\n        if (!raxFind(TrackingTable,(unsigned char*)sdskey,sdslen(sdskey),&result)) {\n            ids = raxNew();\n            int inserted = raxTryInsert(TrackingTable,(unsigned char*)sdskey,\n                                        sdslen(sdskey),ids, NULL);\n            serverAssert(inserted == 1);\n        } else {\n            ids = result;\n        }\n        if (raxTryInsert(ids,(unsigned char*)&tracking->id,sizeof(tracking->id),NULL,NULL))\n            TrackingTableTotalItems++;\n    }\n    getKeysFreeResult(&result);\n}\n\n/* Given a key name, this function sends an invalidation message in the\n * proper channel (depending on RESP version: PubSub or Push message) and\n * to the proper client (in case of redirection), in the context of the\n * client 'c' with tracking enabled.\n *\n * In case the 'proto' argument is non zero, the function will assume that\n * 'keyname' points to a buffer of 'keylen' bytes already expressed in the\n * form of Redis RESP protocol. 
This is used for:\n * - In BCAST mode, to send an array of invalidated keys to all\n *   applicable clients\n * - Following a flush command, to send a single RESP NULL to indicate\n *   that all keys are now invalid. */\nvoid sendTrackingMessage(client *c, char *keyname, size_t keylen, int proto) {\n    int paused = 0;\n    uint64_t old_flags = c->flags;\n    c->flags |= CLIENT_PUSHING;\n\n    int using_redirection = 0;\n    if (c->client_tracking_redirection) {\n        client *redir = lookupClientByID(c->client_tracking_redirection);\n        if (!redir) {\n            c->flags |= CLIENT_TRACKING_BROKEN_REDIR;\n            /* We need to signal to the original connection that we\n             * are unable to send invalidation messages to the redirected\n             * connection, because the client no longer exist. */\n            if (c->resp > 2) {\n                addReplyPushLen(c,2);\n                addReplyBulkCBuffer(c,\"tracking-redir-broken\",21);\n                addReplyLongLong(c,c->client_tracking_redirection);\n            }\n            if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n            return;\n        }\n        if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n        c = redir;\n        using_redirection = 1;\n        /* Start to touch another client data. */\n        if (c->running_tid != IOTHREAD_MAIN_THREAD_ID) {\n            pauseIOThread(c->running_tid);\n            paused = 1;\n        }\n        old_flags = c->flags;\n        c->flags |= CLIENT_PUSHING;\n    }\n\n    /* Only send such info for clients in RESP version 3 or more. However\n     * if redirection is active, and the connection we redirect to is\n     * in Pub/Sub mode, we can support the feature with RESP 2 as well,\n     * by sending Pub/Sub messages in the __redis__:invalidate channel. 
*/\n    if (c->resp > 2) {\n        addReplyPushLen(c,2);\n        addReplyBulkCBuffer(c,\"invalidate\",10);\n    } else if (using_redirection && c->flags & CLIENT_PUBSUB) {\n        /* We use a static object to speedup things, however we assume\n         * that addReplyPubsubMessage() will not take a reference. */\n        addReplyPubsubMessage(c,TrackingChannelName,NULL,shared.messagebulk);\n    } else {\n        /* If are here, the client is not using RESP3, nor is\n         * redirecting to another client. We can't send anything to\n         * it since RESP2 does not support push messages in the same\n         * connection. */\n        if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n        goto done;\n    }\n\n    /* Send the \"value\" part, which is the array of keys. */\n    if (proto) {\n        addReplyProto(c,keyname,keylen);\n    } else {\n        addReplyArrayLen(c,1);\n        addReplyBulkCBuffer(c,keyname,keylen);\n    }\n    updateClientMemUsageAndBucket(c);\n    if (!(old_flags & CLIENT_PUSHING)) c->flags &= ~CLIENT_PUSHING;\n\ndone:\n    if (paused) {\n        if (clientHasPendingReplies(c)) {\n            serverAssert(!(c->flags & CLIENT_PENDING_WRITE));\n            /* Actually we install write handler of client which is in IO thread\n             * event loop, it is safe since the io thread is paused */\n            connSetWriteHandler(c->conn, sendReplyToClient);\n        }\n        resumeIOThread(c->running_tid);\n    }\n}\n\n/* This function is called when a key is modified in Redis and in the case\n * we have at least one client with the BCAST mode enabled.\n * Its goal is to set the key in the right broadcast state if the key\n * matches one or more prefixes in the prefix table. Later when we\n * return to the event loop, we'll send invalidation messages to the\n * clients subscribed to each prefix. 
*/\nvoid trackingRememberKeyToBroadcast(client *c, char *keyname, size_t keylen) {\n    raxIterator ri;\n    raxStart(&ri,PrefixTable);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        if (ri.key_len > keylen) continue;\n        if (ri.key_len != 0 && memcmp(ri.key,keyname,ri.key_len) != 0)\n            continue;\n        bcastState *bs = ri.data;\n        /* We insert the client pointer as associated value in the radix\n         * tree. This way we know who was the client that did the last\n         * change to the key, and can avoid sending the notification in the\n         * case the client is in NOLOOP mode. */\n        raxInsert(bs->keys,(unsigned char*)keyname,keylen,c,NULL);\n    }\n    raxStop(&ri);\n}\n\n/* This function is called from keyModified() or other places in Redis\n * when a key changes value. In the context of keys tracking, our task here is\n * to send a notification to every client that may have keys about such caching\n * slot.\n *\n * Note that 'c' may be NULL in case the operation was performed outside the\n * context of a client modifying the database (for instance when we delete a\n * key because of expire).\n *\n * The last argument 'bcast' tells the function if it should also schedule\n * the key for broadcasting to clients in BCAST mode. This is the case when\n * the function is called from the Redis core once a key is modified, however\n * we also call the function in order to evict keys in the key table in case\n * of memory pressure: in that case the key didn't really change, so we want\n * just to notify the clients that are in the table for this key, that would\n * otherwise miss the fact we are no longer tracking the key for them. 
*/\nvoid trackingInvalidateKey(client *c, robj *keyobj, int bcast) {\n    if (TrackingTable == NULL) return;\n\n    unsigned char *key = (unsigned char*)keyobj->ptr;\n    size_t keylen = sdslen(keyobj->ptr);\n\n    if (bcast && raxSize(PrefixTable) > 0)\n        trackingRememberKeyToBroadcast(c,(char *)key,keylen);\n\n    void *result;\n    if (!raxFind(TrackingTable,key,keylen,&result)) return;\n    rax *ids = result;\n\n    raxIterator ri;\n    raxStart(&ri,ids);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        uint64_t id;\n        memcpy(&id,ri.key,sizeof(id));\n        client *target = lookupClientByID(id);\n        /* Note that if the client is in BCAST mode, we don't want to\n         * send invalidation messages that were pending in the case\n         * previously the client was not in BCAST mode. This can happen if\n         * TRACKING is enabled normally, and then the client switches to\n         * BCAST mode. */\n        if (target == NULL ||\n            !(target->flags & CLIENT_TRACKING)||\n            target->flags & CLIENT_TRACKING_BCAST)\n        {\n            continue;\n        }\n\n        /* If the client enabled the NOLOOP mode, don't send notifications\n         * about keys changed by the client itself. */\n        if (target->flags & CLIENT_TRACKING_NOLOOP &&\n            target == server.current_client)\n        {\n            continue;\n        }\n\n        /* If the target is the current client and it's executing a command,\n         * we need to schedule the key invalidation: the invalidation messages\n         * may be interleaved with the command response, and must be sent\n         * after it. 
*/\n        if (target == server.current_client && (server.current_client->flags & CLIENT_EXECUTING_COMMAND)) {\n            incrRefCount(keyobj);\n            listAddNodeTail(server.tracking_pending_keys, keyobj);\n        } else {\n            sendTrackingMessage(target,(char *)keyobj->ptr,sdslen(keyobj->ptr),0);\n        }\n    }\n    raxStop(&ri);\n\n    /* Free the tracking table: we'll create the radix tree and populate it\n     * again if more keys will be modified in this caching slot. */\n    TrackingTableTotalItems -= raxSize(ids);\n    raxFree(ids);\n    raxRemove(TrackingTable,(unsigned char*)key,keylen,NULL);\n}\n\nvoid trackingHandlePendingKeyInvalidations(void) {\n    if (!listLength(server.tracking_pending_keys)) return;\n\n    /* Flush pending invalidation messages only when we are not in nested call.\n     * So the messages are not interleaved with transaction response. */\n    if (server.execution_nesting) return;\n\n    listNode *ln;\n    listIter li;\n\n    listRewind(server.tracking_pending_keys,&li);\n    while ((ln = listNext(&li)) != NULL) {\n        robj *key = listNodeValue(ln);\n        /* current_client maybe freed, so we need to send invalidation\n         * message only when current_client is still alive */\n        if (server.current_client != NULL) {\n            if (key != NULL) {\n                sendTrackingMessage(server.current_client,(char *)key->ptr,sdslen(key->ptr),0);\n            } else {\n                sendTrackingMessage(server.current_client,shared.null[server.current_client->resp]->ptr,\n                    sdslen(shared.null[server.current_client->resp]->ptr),1);\n            }\n        }\n        if (key != NULL) decrRefCount(key);\n    }\n    listEmpty(server.tracking_pending_keys);\n}\n\n/* This function is called when one or all the Redis databases are\n * flushed. 
Caching keys are not specific for each DB but are global: \n * currently what we do is send a special notification to clients with \n * tracking enabled, sending a RESP NULL, which means, \"all the keys\", \n * in order to avoid flooding clients with many invalidation messages \n * for all the keys they may hold.\n */\nvoid freeTrackingRadixTreeCallback(void *rt) {\n    raxFree(rt);\n}\n\nvoid freeTrackingRadixTree(rax *rt) {\n    raxFreeWithCallback(rt,freeTrackingRadixTreeCallback);\n}\n\n/* A RESP NULL is sent to indicate that all keys are invalid */\nvoid trackingInvalidateKeysOnFlush(int async) {\n    if (server.tracking_clients) {\n        listNode *ln;\n        listIter li;\n        listRewind(server.clients,&li);\n        while ((ln = listNext(&li)) != NULL) {\n            client *c = listNodeValue(ln);\n            if (c->flags & CLIENT_TRACKING) {\n                if (c == server.current_client) {\n                    /* We use a special NULL to indicate that we should send null */\n                    listAddNodeTail(server.tracking_pending_keys,NULL);\n                } else {\n                    sendTrackingMessage(c,shared.null[c->resp]->ptr,sdslen(shared.null[c->resp]->ptr),1);\n                }\n            }\n        }\n    }\n\n    /* In case of FLUSHALL, reclaim all the memory used by tracking. */\n    if (TrackingTable) {\n        if (async) {\n            freeTrackingRadixTreeAsync(TrackingTable);\n        } else {\n            freeTrackingRadixTree(TrackingTable);\n        }\n        TrackingTable = raxNew();\n        TrackingTableTotalItems = 0;\n    }\n}\n\n/* Tracking forces Redis to remember information about which client may have\n * certain keys. 
In workloads where there are a lot of reads, but keys are\n * hardly modified, the amount of information we have to remember server side\n * could be a lot, with the number of keys being totally not bound.\n *\n * So Redis allows the user to configure a maximum number of keys for the\n * invalidation table. This function makes sure that we don't go over the\n * specified fill rate: if we are over, we can just evict information about\n * a random key, and send invalidation messages to clients like if the key was\n * modified. */\nvoid trackingLimitUsedSlots(void) {\n    static unsigned int timeout_counter = 0;\n    if (TrackingTable == NULL) return;\n    if (server.tracking_table_max_keys == 0) return; /* No limits set. */\n    size_t max_keys = server.tracking_table_max_keys;\n    if (raxSize(TrackingTable) <= max_keys) {\n        timeout_counter = 0;\n        return; /* Limit not reached. */\n    }\n\n    /* We have to invalidate a few keys to reach the limit again. The effort\n     * we do here is proportional to the number of times we entered this\n     * function and found that we are still over the limit. */\n    int effort = 100 * (timeout_counter+1);\n\n    /* We just remove one key after another by using a random walk. */\n    raxIterator ri;\n    raxStart(&ri,TrackingTable);\n    while(effort > 0) {\n        effort--;\n        raxSeek(&ri,\"^\",NULL,0);\n        raxRandomWalk(&ri,0);\n        if (raxEOF(&ri)) break;\n        robj *keyobj = createStringObject((char*)ri.key,ri.key_len);\n        trackingInvalidateKey(NULL,keyobj,0);\n        decrRefCount(keyobj);\n        if (raxSize(TrackingTable) <= max_keys) {\n            timeout_counter = 0;\n            raxStop(&ri);\n            return; /* Return ASAP: we are again under the limit. */\n        }\n    }\n\n    /* If we reach this point, we were not able to go under the configured\n     * limit using the maximum effort we had for this run. 
*/\n    raxStop(&ri);\n    timeout_counter++;\n}\n\n/* Generate Redis protocol for an array containing all the key names\n * in the 'keys' radix tree. If the client is not NULL, the list will not\n * include keys that were modified the last time by this client, in order\n * to implement the NOLOOP option.\n *\n * If the resulting array would be empty, NULL is returned instead. */\nsds trackingBuildBroadcastReply(client *c, rax *keys) {\n    raxIterator ri;\n    uint64_t count;\n\n    if (c == NULL) {\n        count = raxSize(keys);\n    } else {\n        count = 0;\n        raxStart(&ri,keys);\n        raxSeek(&ri,\"^\",NULL,0);\n        while(raxNext(&ri)) {\n            if (ri.data != c) count++;\n        }\n        raxStop(&ri);\n\n        if (count == 0) return NULL;\n    }\n\n    /* Create the array reply with the list of keys once, then send\n    * it to all the clients subscribed to this prefix. */\n    char buf[32];\n    size_t len = ll2string(buf,sizeof(buf),count);\n    sds proto = sdsempty();\n    proto = sdsMakeRoomFor(proto,count*15);\n    proto = sdscatlen(proto,\"*\",1);\n    proto = sdscatlen(proto,buf,len);\n    proto = sdscatlen(proto,\"\\r\\n\",2);\n    raxStart(&ri,keys);\n    raxSeek(&ri,\"^\",NULL,0);\n    while(raxNext(&ri)) {\n        if (c && ri.data == c) continue;\n        len = ll2string(buf,sizeof(buf),ri.key_len);\n        proto = sdscatlen(proto,\"$\",1);\n        proto = sdscatlen(proto,buf,len);\n        proto = sdscatlen(proto,\"\\r\\n\",2);\n        proto = sdscatlen(proto,ri.key,ri.key_len);\n        proto = sdscatlen(proto,\"\\r\\n\",2);\n    }\n    raxStop(&ri);\n    return proto;\n}\n\n/* This function will run the prefixes of clients in BCAST mode and\n * keys that were modified about each prefix, and will send the\n * notifications to each client in each prefix. */\nvoid trackingBroadcastInvalidationMessages(void) {\n    raxIterator ri, ri2;\n\n    /* Return ASAP if there is nothing to do here. 
*/\n    if (TrackingTable == NULL || !server.tracking_clients) return;\n\n    raxStart(&ri,PrefixTable);\n    raxSeek(&ri,\"^\",NULL,0);\n\n    /* For each prefix... */\n    while(raxNext(&ri)) {\n        bcastState *bs = ri.data;\n\n        if (raxSize(bs->keys)) {\n            /* Generate the common protocol for all the clients that are\n             * not using the NOLOOP option. */\n            sds proto = trackingBuildBroadcastReply(NULL,bs->keys);\n\n            /* Send this array of keys to every client in the list. */\n            raxStart(&ri2,bs->clients);\n            raxSeek(&ri2,\"^\",NULL,0);\n            while(raxNext(&ri2)) {\n                client *c;\n                memcpy(&c,ri2.key,sizeof(c));\n                if (c->flags & CLIENT_TRACKING_NOLOOP) {\n                    /* This client may have certain keys excluded. */\n                    sds adhoc = trackingBuildBroadcastReply(c,bs->keys);\n                    if (adhoc) {\n                        sendTrackingMessage(c,adhoc,sdslen(adhoc),1);\n                        sdsfree(adhoc);\n                    }\n                } else {\n                    sendTrackingMessage(c,proto,sdslen(proto),1);\n                }\n            }\n            raxStop(&ri2);\n\n            /* Clean up: we can remove everything from this state, because we\n             * want to only track the new keys that will be accumulated starting\n             * from now. */\n            sdsfree(proto);\n        }\n        raxFree(bs->keys);\n        bs->keys = raxNew();\n    }\n    raxStop(&ri);\n}\n\n/* This is just used in order to access the amount of used slots in the\n * tracking table. */\nuint64_t trackingGetTotalItems(void) {\n    return TrackingTableTotalItems;\n}\n\nuint64_t trackingGetTotalKeys(void) {\n    if (TrackingTable == NULL) return 0;\n    return raxSize(TrackingTable);\n}\n\nuint64_t trackingGetTotalPrefixes(void) {\n    if (PrefixTable == NULL) return 0;\n    return raxSize(PrefixTable);\n}\n"
  },
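In tracking.c, `trackingBuildBroadcastReply()` builds the invalidation payload once per prefix as raw RESP: an array header `*<count>\r\n` followed by one bulk string `$<len>\r\n<key>\r\n` per invalidated key, then sends the same buffer to every subscribed client. A minimal sketch of that wire format, using plain NUL-terminated buffers instead of the binary-safe `sds` strings the real code uses (function and buffer names are illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Serialize an array of key names into a RESP array of bulk strings,
 * the same shape trackingBuildBroadcastReply() produces:
 *   *<count>\r\n  then, per key,  $<len>\r\n<key>\r\n
 * Returns the number of bytes written. Unlike sds, C strings are used
 * here, so keys containing NUL bytes are not handled in this sketch. */
static size_t build_invalidation_reply(char *out, size_t outlen,
                                       const char **keys, size_t nkeys) {
    size_t off = (size_t)snprintf(out, outlen, "*%zu\r\n", nkeys);
    for (size_t i = 0; i < nkeys; i++) {
        size_t klen = strlen(keys[i]);
        off += (size_t)snprintf(out + off, outlen - off,
                                "$%zu\r\n%s\r\n", klen, keys[i]);
    }
    return off;
}
```

For two keys `foo` and `key:1` this yields `*2\r\n$3\r\nfoo\r\n$5\r\nkey:1\r\n`; in the server this payload is prefixed by the `invalidate` push header (RESP3) or wrapped in a `__redis__:invalidate` Pub/Sub message (redirected RESP2 clients).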
  {
    "path": "src/tsan.sup",
    "content": "# collect_stacktrace_data() calls backtrace() from a signal handler but\n# backtrace() is signal-unsafe since it might allocate memory, at least on\n# glibc 2.39 it does through a call to _dl_map_object_deps().\nsignal:collect_stacktrace_data\nsignal:printCrashReport\n# TODO Investigate this race in jemalloc probably related to\n# https://github.com/jemalloc/jemalloc/issues/2621\nrace:malloc_mutex_trylock_final\n\n# A race can happen on conn->last_errno if replica client is reading/writing\n# data in IO thread and main thread is calling connAddrPeerName for some reason\n# (f.e genRedisInfoString/roleCommand...).\n# Not worth the additional code for synchronization as:\n# - errno is thread-safe according to POSIX std\n# - we don't support systems that allow word tearing, i.e last_errno value would\n#   be a correct value at the end - either the errno from main or from IO thread\n# - even if we fix the data race on last_errno we still have the problem of it\n#   being set to either errno unless we pause the IO thread during main-thread's\n#   execution which would incur too big of a cost.\n# - the race happens rarely\nrace:connSocketAddr\n"
  },
  {
    "path": "src/unix.c",
    "content": "/* ==========================================================================\n * unix.c - unix socket connection implementation\n * --------------------------------------------------------------------------\n * Copyright (C) 2022  zhenwei pi\n *\n * Permission is hereby granted, free of charge, to any person obtaining a\n * copy of this software and associated documentation files (the\n * \"Software\"), to deal in the Software without restriction, including\n * without limitation the rights to use, copy, modify, merge, publish,\n * distribute, sublicense, and/or sell copies of the Software, and to permit\n * persons to whom the Software is furnished to do so, subject to the\n * following conditions:\n *\n * The above copyright notice and this permission notice shall be included\n * in all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN\n * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,\n * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE\n * USE OR OTHER DEALINGS IN THE SOFTWARE.\n * ==========================================================================\n */\n\n#include \"server.h\"\n#include \"connection.h\"\n\nstatic ConnectionType CT_Unix;\n\nstatic const char *connUnixGetType(connection *conn) {\n    UNUSED(conn);\n\n    return CONN_TYPE_UNIX;\n}\n\nstatic void connUnixEventHandler(struct aeEventLoop *el, int fd, void *clientData, int mask) {\n    connectionTypeTcp()->ae_handler(el, fd, clientData, mask);\n}\n\nstatic int connUnixAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {\n    return connectionTypeTcp()->addr(conn, ip, ip_len, port, remote);\n}\n\nstatic int connUnixIsLocal(connection *conn) {\n    UNUSED(conn);\n\n    return 1; /* Unix socket is always local connection */\n}\n\nstatic int connUnixListen(connListener *listener) {\n    int fd;\n    mode_t *perm = (mode_t *)listener->priv;\n\n    if (listener->bindaddr_count == 0)\n        return C_OK;\n\n    /* currently listener->bindaddr_count is always 1, we still use a loop here in case Redis supports multi Unix socket in the future */\n    for (int j = 0; j < listener->bindaddr_count; j++) {\n        char *addr = listener->bindaddr[j];\n\n        unlink(addr); /* don't care if this fails */\n        fd = anetUnixServer(server.neterr, addr, *perm, server.tcp_backlog);\n        if (fd == ANET_ERR) {\n            serverLog(LL_WARNING, \"Failed opening Unix socket: %s\", server.neterr);\n            exit(1);\n        }\n        anetNonBlock(NULL, fd);\n        anetCloexec(fd);\n        listener->fd[listener->count++] = fd;\n    }\n\n    return C_OK;\n}\n\nstatic connection *connCreateUnix(struct aeEventLoop *el) {\n    connection *conn = zcalloc(sizeof(connection));\n    conn->type = 
&CT_Unix;\n    conn->fd = -1;\n    conn->iovcnt = IOV_MAX;\n    conn->el = el;\n\n    return conn;\n}\n\nstatic connection *connCreateAcceptedUnix(struct aeEventLoop *el, int fd, void *priv) {\n    UNUSED(priv);\n    connection *conn = connCreateUnix(el);\n    conn->fd = fd;\n    conn->state = CONN_STATE_ACCEPTING;\n    return conn;\n}\n\nstatic void connUnixAcceptHandler(aeEventLoop *el, int fd, void *privdata, int mask) {\n    int cfd;\n    int max = server.max_new_conns_per_cycle;\n    UNUSED(el);\n    UNUSED(mask);\n    UNUSED(privdata);\n\n    while(max--) {\n        cfd = anetUnixAccept(server.neterr, fd);\n        if (cfd == ANET_ERR) {\n            if (anetAcceptFailureNeedsRetry(errno))\n                continue;\n            if (errno != EWOULDBLOCK)\n                serverLog(LL_WARNING,\n                    \"Accepting client connection: %s\", server.neterr);\n            return;\n        }\n        serverLog(LL_VERBOSE,\"Accepted connection to %s\", server.unixsocket);\n        acceptCommonHandler(connCreateAcceptedUnix(el, cfd, NULL),CLIENT_UNIX_SOCKET,NULL);\n    }\n}\n\nstatic void connUnixShutdown(connection *conn) {\n    connectionTypeTcp()->shutdown(conn);\n}\n\nstatic void connUnixClose(connection *conn) {\n    connectionTypeTcp()->close(conn);\n}\n\nstatic int connUnixAccept(connection *conn, ConnectionCallbackFunc accept_handler) {\n    return connectionTypeTcp()->accept(conn, accept_handler);\n}\n\nstatic int connUnixRebindEventLoop(connection *conn, aeEventLoop *el) {\n    return connectionTypeTcp()->rebind_event_loop(conn, el);\n}\n\nstatic int connUnixWrite(connection *conn, const void *data, size_t data_len) {\n    return connectionTypeTcp()->write(conn, data, data_len);\n}\n\nstatic int connUnixWritev(connection *conn, const struct iovec *iov, int iovcnt) {\n    return connectionTypeTcp()->writev(conn, iov, iovcnt);\n}\n\nstatic int connUnixRead(connection *conn, void *buf, size_t buf_len) {\n    return connectionTypeTcp()->read(conn, 
buf, buf_len);\n}\n\nstatic int connUnixSetWriteHandler(connection *conn, ConnectionCallbackFunc func, int barrier) {\n    return connectionTypeTcp()->set_write_handler(conn, func, barrier);\n}\n\nstatic int connUnixSetReadHandler(connection *conn, ConnectionCallbackFunc func) {\n    return connectionTypeTcp()->set_read_handler(conn, func);\n}\n\nstatic const char *connUnixGetLastError(connection *conn) {\n    return strerror(conn->last_errno);\n}\n\nstatic ssize_t connUnixSyncWrite(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncWrite(conn->fd, ptr, size, timeout);\n}\n\nstatic ssize_t connUnixSyncRead(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncRead(conn->fd, ptr, size, timeout);\n}\n\nstatic ssize_t connUnixSyncReadLine(connection *conn, char *ptr, ssize_t size, long long timeout) {\n    return syncReadLine(conn->fd, ptr, size, timeout);\n}\n\nstatic ConnectionType CT_Unix = {\n    /* connection type */\n    .get_type = connUnixGetType,\n\n    /* connection type initialize & finalize & configure */\n    .init = NULL,\n    .cleanup = NULL,\n    .configure = NULL,\n\n    /* ae & accept & listen & error & address handler */\n    .ae_handler = connUnixEventHandler,\n    .accept_handler = connUnixAcceptHandler,\n    .addr = connUnixAddr,\n    .is_local = connUnixIsLocal,\n    .listen = connUnixListen,\n\n    /* create/shutdown/close connection */\n    .conn_create = connCreateUnix,\n    .conn_create_accepted = connCreateAcceptedUnix,\n    .shutdown = connUnixShutdown,\n    .close = connUnixClose,\n\n    /* connect & accept */\n    .connect = NULL,\n    .blocking_connect = NULL,\n    .accept = connUnixAccept,\n\n    /* event loop */\n    .unbind_event_loop = NULL,\n    .rebind_event_loop = connUnixRebindEventLoop,\n\n    /* IO */\n    .write = connUnixWrite,\n    .writev = connUnixWritev,\n    .read = connUnixRead,\n    .set_write_handler = connUnixSetWriteHandler,\n    .set_read_handler = 
connUnixSetReadHandler,\n    .get_last_error = connUnixGetLastError,\n    .sync_write = connUnixSyncWrite,\n    .sync_read = connUnixSyncRead,\n    .sync_readline = connUnixSyncReadLine,\n\n    /* pending data */\n    .has_pending_data = NULL,\n    .process_pending_data = NULL,\n};\n\nint RedisRegisterConnectionTypeUnix(void)\n{\n    return connTypeRegister(&CT_Unix);\n}\n"
  },
  {
    "path": "src/util.c",
    "content": "/*\n * Copyright (c) 2009-current, Redis Ltd.\n * Copyright (c) 2012, Twitter, Inc.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include \"fmacros.h\"\n#include \"fpconv_dtoa.h\"\n#include \"fast_float_strtod.h\"\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <ctype.h>\n#include <limits.h>\n#include <math.h>\n#include <unistd.h>\n#include <sys/time.h>\n#include <float.h>\n#include <stdint.h>\n#include <errno.h>\n#include <time.h>\n#include <sys/stat.h>\n#include <dirent.h>\n#include <fcntl.h>\n#include <libgen.h>\n\n#include \"util.h\"\n#include \"sha256.h\"\n#include \"config.h\"\n\n#define UNUSED(x) ((void)(x))\n\n/* Selectively define static_assert. Attempt to avoid include server.h in this file. */\n#ifndef static_assert\n#define static_assert(expr, lit) extern char __static_assert_failure[(expr) ? 1:-1]\n#endif\n\nstatic_assert(UINTPTR_MAX == 0xffffffffffffffff || UINTPTR_MAX == 0xffffffff, \"Unsupported pointer size\");\n\n/* Glob-style pattern matching. */\nstatic int stringmatchlen_impl(const char *pattern, int patternLen,\n        const char *string, int stringLen, int nocase, int *skipLongerMatches, int nesting)\n{\n    /* Protection against abusive patterns. 
*/\n    if (nesting > 1000) return 0;\n\n    while(patternLen && stringLen) {\n        switch(pattern[0]) {\n        case '*':\n            while (patternLen && pattern[1] == '*') {\n                pattern++;\n                patternLen--;\n            }\n            if (patternLen == 1)\n                return 1; /* match */\n            while(stringLen) {\n                if (stringmatchlen_impl(pattern+1, patternLen-1,\n                            string, stringLen, nocase, skipLongerMatches, nesting+1))\n                    return 1; /* match */\n                if (*skipLongerMatches)\n                    return 0; /* no match */\n                string++;\n                stringLen--;\n            }\n            /* There was no match for the rest of the pattern starting\n             * from anywhere in the rest of the string. If there were\n             * any '*' earlier in the pattern, we can terminate the\n             * search early without trying to match them to longer\n             * substrings. This is because a longer match for the\n             * earlier part of the pattern would require the rest of the\n             * pattern to match starting later in the string, and we\n             * have just determined that there is no match for the rest\n             * of the pattern starting from anywhere in the current\n             * string. 
*/\n            *skipLongerMatches = 1;\n            return 0; /* no match */\n            break;\n        case '?':\n            string++;\n            stringLen--;\n            break;\n        case '[':\n        {\n            int not, match;\n\n            pattern++;\n            patternLen--;\n            not = patternLen && pattern[0] == '^';\n            if (not) {\n                pattern++;\n                patternLen--;\n            }\n            match = 0;\n            while(1) {\n                if (patternLen >= 2 && pattern[0] == '\\\\') {\n                    pattern++;\n                    patternLen--;\n                    if (pattern[0] == string[0])\n                        match = 1;\n                } else if (patternLen == 0) {\n                    pattern--;\n                    patternLen++;\n                    break;\n                } else if (pattern[0] == ']') {\n                    break;\n                } else if (patternLen >= 3 && pattern[1] == '-') {\n                    int start = pattern[0];\n                    int end = pattern[2];\n                    int c = string[0];\n                    if (start > end) {\n                        int t = start;\n                        start = end;\n                        end = t;\n                    }\n                    if (nocase) {\n                        start = tolower(start);\n                        end = tolower(end);\n                        c = tolower(c);\n                    }\n                    pattern += 2;\n                    patternLen -= 2;\n                    if (c >= start && c <= end)\n                        match = 1;\n                } else {\n                    if (!nocase) {\n                        if (pattern[0] == string[0])\n                            match = 1;\n                    } else {\n                        if (tolower((int)pattern[0]) == tolower((int)string[0]))\n                            match = 1;\n                    }\n              
  }\n                pattern++;\n                patternLen--;\n            }\n            if (not)\n                match = !match;\n            if (!match)\n                return 0; /* no match */\n            string++;\n            stringLen--;\n            break;\n        }\n        case '\\\\':\n            if (patternLen >= 2) {\n                pattern++;\n                patternLen--;\n            }\n            /* fall through */\n        default:\n            if (!nocase) {\n                if (pattern[0] != string[0])\n                    return 0; /* no match */\n            } else {\n                if (tolower((int)pattern[0]) != tolower((int)string[0]))\n                    return 0; /* no match */\n            }\n            string++;\n            stringLen--;\n            break;\n        }\n        pattern++;\n        patternLen--;\n        if (stringLen == 0) {\n            while(patternLen && *pattern == '*') {\n                pattern++;\n                patternLen--;\n            }\n            break;\n        }\n    }\n    if (patternLen == 0 && stringLen == 0)\n        return 1;\n    return 0;\n}\n\n/* \n * glob-style pattern matching to check if a given pattern fully includes \n * the prefix of a string. For the match to succeed, the pattern must end with \n * an unescaped '*' character.\n * \n * Returns: 1 if the `pattern` fully matches the `prefixStr`. Returns 0 otherwise.\n */\nint prefixmatch(const char *pattern, int patternLen,\n                const char *prefixStr, int prefixStrLen, int nocase) {\n    int skipLongerMatches = 0;\n    \n    /* Step 1: Verify if the pattern matches the prefix string completely. */\n    if (!stringmatchlen_impl(pattern, patternLen, prefixStr, prefixStrLen, nocase, &skipLongerMatches, 0))\n        return 0;\n\n    /* Step 2: Verify that the pattern ends with an unescaped '*', indicating\n     * it can match any suffix of the string beyond the prefix. 
This check\n     * remains outside stringmatchlen_impl() to keep its complexity manageable.\n     */\n    if (patternLen == 0 || pattern[patternLen - 1] != '*' )\n        return 0;\n\n    /* Count backward the number of consecutive backslashes preceding the '*'\n     * to determine if the '*' is escaped. */\n    int backslashCount = 0;\n    for (int i = patternLen - 2; i >= 0; i--) {\n        if (pattern[i] == '\\\\')\n            ++backslashCount;\n        else\n            break; /* Stop counting when a non-backslash character is found. */\n    }\n\n    /* Return 1 if the '*' is not escaped (i.e., even count), 0 otherwise. */\n    return (backslashCount % 2 == 0);\n}\n\n/* Glob-style pattern matching to a string. */\nint stringmatchlen(const char *pattern, int patternLen,\n        const char *string, int stringLen, int nocase) {\n    int skipLongerMatches = 0;\n    return stringmatchlen_impl(pattern,patternLen,string,stringLen,nocase,&skipLongerMatches,0);\n}\n\nint stringmatch(const char *pattern, const char *string, int nocase) {\n    return stringmatchlen(pattern,strlen(pattern),string,strlen(string),nocase);\n}\n\n/* Fuzz stringmatchlen() trying to crash it with bad input. */\nint stringmatchlen_fuzz_test(void) {\n    char str[32];\n    char pat[32];\n    int cycles = 10000000;\n    int total_matches = 0;\n    while(cycles--) {\n        int strlen = rand() % sizeof(str);\n        int patlen = rand() % sizeof(pat);\n        for (int j = 0; j < strlen; j++) str[j] = rand() % 128;\n        for (int j = 0; j < patlen; j++) pat[j] = rand() % 128;\n        total_matches += stringmatchlen(pat, patlen, str, strlen, 0);\n    }\n    return total_matches;\n}\n\n\n/* Convert a string representing an amount of memory into the number of\n * bytes, so for instance memtoull(\"1Gb\") will return 1073741824 that is\n * (1024*1024*1024).\n *\n * On parsing error, if *err is not NULL, it's set to 1, otherwise it's\n * set to 0. 
On error the function return value is 0, regardless of\n * whether 'err' is NULL or not. */\nunsigned long long memtoull(const char *p, int *err) {\n    const char *u;\n    char buf[128];\n    long mul; /* unit multiplier */\n    unsigned long long val;\n    unsigned int digits;\n\n    if (err) *err = 0;\n\n    /* Search the first non digit character. */\n    u = p;\n    if (*u == '-') {\n        if (err) *err = 1;\n        return 0;\n    }\n    while(*u && isdigit(*u)) u++;\n    if (*u == '\\0' || !strcasecmp(u,\"b\")) {\n        mul = 1;\n    } else if (!strcasecmp(u,\"k\")) {\n        mul = 1000;\n    } else if (!strcasecmp(u,\"kb\")) {\n        mul = 1024;\n    } else if (!strcasecmp(u,\"m\")) {\n        mul = 1000*1000;\n    } else if (!strcasecmp(u,\"mb\")) {\n        mul = 1024*1024;\n    } else if (!strcasecmp(u,\"g\")) {\n        mul = 1000L*1000*1000;\n    } else if (!strcasecmp(u,\"gb\")) {\n        mul = 1024L*1024*1024;\n    } else {\n        if (err) *err = 1;\n        return 0;\n    }\n\n    /* Copy the digits into a buffer, we'll use strtoull() to convert\n     * the digits (without the unit) into a number. 
*/\n    digits = u-p;\n    if (digits >= sizeof(buf)) {\n        if (err) *err = 1;\n        return 0;\n    }\n    memcpy(buf,p,digits);\n    buf[digits] = '\\0';\n\n    char *endptr;\n    errno = 0;\n    val = strtoull(buf,&endptr,10);\n    if ((val == 0 && errno == EINVAL) || *endptr != '\\0') {\n        if (err) *err = 1;\n        return 0;\n    }\n    return val*mul;\n}\n\n/* Search a memory buffer for any set of bytes, like strpbrk().\n * Returns pointer to first found char or NULL.\n */\nconst char *mempbrk(const char *s, size_t len, const char *chars, size_t charslen) {\n    for (size_t j = 0; j < len; j++) {\n        for (size_t n = 0; n < charslen; n++)\n            if (s[j] == chars[n]) return &s[j];\n    }\n\n    return NULL;\n}\n\n/* Modify the buffer replacing all occurrences of chars from the 'from'\n * set with the corresponding char in the 'to' set. Always returns s.\n */\nchar *memmapchars(char *s, size_t len, const char *from, const char *to, size_t setlen) {\n    for (size_t j = 0; j < len; j++) {\n        for (size_t i = 0; i < setlen; i++) {\n            if (s[j] == from[i]) {\n                s[j] = to[i];\n                break;\n            }\n        }\n    }\n    return s;\n}\n\n/* Return the number of digits of 'v' when converted to string in radix 10.\n * See ll2string() for more information. */\nuint32_t digits10(uint64_t v) {\n    if (v < 10) return 1;\n    if (v < 100) return 2;\n    if (v < 1000) return 3;\n    if (v < 1000000000000UL) {\n        if (v < 100000000UL) {\n            if (v < 1000000) {\n                if (v < 10000) return 4;\n                return 5 + (v >= 100000);\n            }\n            return 7 + (v >= 10000000UL);\n        }\n        if (v < 10000000000UL) {\n            return 9 + (v >= 1000000000UL);\n        }\n        return 11 + (v >= 100000000000UL);\n    }\n    return 12 + digits10(v / 1000000000000UL);\n}\n\n/* Like digits10() but for signed values. 
*/\nuint32_t sdigits10(int64_t v) {\n    if (v < 0) {\n        /* Abs value of LLONG_MIN requires special handling. */\n        uint64_t uv = (v != LLONG_MIN) ?\n                      (uint64_t)-v : ((uint64_t) LLONG_MAX)+1;\n        return digits10(uv)+1; /* +1 for the minus. */\n    } else {\n        return digits10(v);\n    }\n}\n\n/* Convert a long long into a string. Returns the number of\n * characters needed to represent the number.\n * If the buffer is not big enough to store the string, 0 is returned. */\nint ll2string(char *dst, size_t dstlen, long long svalue) {\n    unsigned long long value;\n    int negative = 0;\n\n    /* The ull2string function works with 64bit unsigned integers for simplicity, so\n     * we convert the number here and remember if it is negative. */\n    if (svalue < 0) {\n        if (svalue != LLONG_MIN) {\n            value = -svalue;\n        } else {\n            value = ((unsigned long long) LLONG_MAX)+1;\n        }\n        if (dstlen < 2)\n            goto err;\n        negative = 1;\n        dst[0] = '-';\n        dst++;\n        dstlen--;\n    } else {\n        value = svalue;\n    }\n\n    /* Convert the unsigned long long value to a string. */\n    int length = ull2string(dst, dstlen, value);\n    if (length == 0) return 0;\n    return length + negative;\n\nerr:\n    /* force add Null termination */\n    if (dstlen > 0)\n        dst[0] = '\\0';\n    return 0;\n}\n\n/* Convert an unsigned long long into a string. 
Returns the number of\n * characters needed to represent the number.\n * If the buffer is not big enough to store the string, 0 is returned.\n *\n * Based on the following article (that apparently does not provide a\n * novel approach but only publicizes an already used technique):\n *\n * https://www.facebook.com/notes/facebook-engineering/three-optimization-tips-for-c/10151361643253920 */\nint ull2string(char *dst, size_t dstlen, unsigned long long value) {\n    static const char digits[201] =\n        \"0001020304050607080910111213141516171819\"\n        \"2021222324252627282930313233343536373839\"\n        \"4041424344454647484950515253545556575859\"\n        \"6061626364656667686970717273747576777879\"\n        \"8081828384858687888990919293949596979899\";\n\n    /* Check length. */\n    uint32_t length = digits10(value);\n    if (length >= dstlen) goto err;\n\n    /* Null term. */\n    uint32_t next = length - 1;\n    dst[next + 1] = '\\0';\n    while (value >= 100) {\n        int const i = (value % 100) * 2;\n        value /= 100;\n        dst[next] = digits[i + 1];\n        dst[next - 1] = digits[i];\n        next -= 2;\n    }\n\n    /* Handle last 1-2 digits. */\n    if (value < 10) {\n        dst[next] = '0' + (uint32_t) value;\n    } else {\n        int i = (uint32_t) value * 2;\n        dst[next] = digits[i + 1];\n        dst[next - 1] = digits[i];\n    }\n    return length;\nerr:\n    /* force add Null termination */\n    if (dstlen > 0)\n        dst[0] = '\\0';\n    return 0;\n}\n\n/* Convert a string into a long long. Returns 1 if the string could be parsed\n * into a (non-overflowing) long long, 0 otherwise. 
The value will be set to\n * the parsed value when appropriate.\n *\n * Note that this function demands that the string strictly represents\n * a long long: no spaces or other characters before or after the string\n * representing the number are accepted, nor zeroes at the start if not\n * for the string \"0\" representing the zero number.\n *\n * Because of its strictness, it is safe to use this function to check if\n * you can convert a string into a long long, and obtain back the string\n * from the number without any loss in the string representation. */\nint string2ll(const char *s, size_t slen, long long *value) {\n    const char *p = s;\n    size_t plen = 0;\n    int negative = 0;\n    unsigned long long v;\n\n    /* A string of zero length or excessive length is not a valid number. */\n    if (plen == slen || slen >= LONG_STR_SIZE)\n        return 0;\n\n    /* Special case: first and only digit is 0. */\n    if (slen == 1 && p[0] == '0') {\n        if (value != NULL) *value = 0;\n        return 1;\n    }\n\n    /* Handle negative numbers: just set a flag and continue like if it\n     * was a positive number. Later convert into negative. */\n    if (p[0] == '-') {\n        negative = 1;\n        p++; plen++;\n\n        /* Abort on only a negative sign. */\n        if (plen == slen)\n            return 0;\n    }\n\n    /* First digit should be 1-9, otherwise the string should just be 0. */\n    if (p[0] >= '1' && p[0] <= '9') {\n        v = p[0]-'0';\n        p++; plen++;\n    } else {\n        return 0;\n    }\n\n    /* Parse all the other digits, checking for overflow at every step. */\n    while (plen < slen && p[0] >= '0' && p[0] <= '9') {\n        if (v > (ULLONG_MAX / 10)) /* Overflow. */\n            return 0;\n        v *= 10;\n\n        if (v > (ULLONG_MAX - (p[0]-'0'))) /* Overflow. */\n            return 0;\n        v += p[0]-'0';\n\n        p++; plen++;\n    }\n\n    /* Return if not all bytes were used. 
*/\n    if (plen < slen)\n        return 0;\n\n    /* Convert to negative if needed, and do the final overflow check when\n     * converting from unsigned long long to long long. */\n    if (negative) {\n        if (v > ((unsigned long long)(-(LLONG_MIN+1))+1)) /* Overflow. */\n            return 0;\n        if (value != NULL) *value = -v;\n    } else {\n        if (v > LLONG_MAX) /* Overflow. */\n            return 0;\n        if (value != NULL) *value = v;\n    }\n    return 1;\n}\n\n/* Helper function to convert a string to an unsigned long long value.\n * The function attempts to use the faster string2ll() function inside\n * Redis: if it fails, strtoull() is used instead. The function returns\n * 1 if the conversion happened successfully or 0 if the number is\n * invalid or out of range. */\nint string2ull(const char *s, unsigned long long *value) {\n    long long ll;\n    if (string2ll(s,strlen(s),&ll)) {\n        if (ll < 0) return 0; /* Negative values are out of range. */\n        *value = ll;\n        return 1;\n    }\n    errno = 0;\n    char *endptr = NULL;\n    *value = strtoull(s,&endptr,10);\n    if (errno == EINVAL || errno == ERANGE || !(*s != '\\0' && *endptr == '\\0'))\n        return 0; /* strtoull() failed. */\n    return 1; /* Conversion done! */\n}\n\n/* Convert a string into a long. Returns 1 if the string could be parsed into a\n * (non-overflowing) long, 0 otherwise. The value will be set to the parsed\n * value when appropriate. 
*/\nint string2l(const char *s, size_t slen, long *lval) {\n    long long llval;\n\n    if (!string2ll(s,slen,&llval))\n        return 0;\n\n    if (llval < LONG_MIN || llval > LONG_MAX)\n        return 0;\n\n    *lval = (long)llval;\n    return 1;\n}\n\n/* Return 1 if c >= start && c <= end, 0 otherwise. */\nstatic int safe_is_c_in_range(char c, char start, char end) {\n    if (c >= start && c <= end) return 1;\n    return 0;\n}\n\nstatic int base_16_char_type(char c) {\n    if (safe_is_c_in_range(c, '0', '9')) return 0;\n    if (safe_is_c_in_range(c, 'a', 'f')) return 1;\n    if (safe_is_c_in_range(c, 'A', 'F')) return 2;\n    return -1;\n}\n\n/** This is an async-signal safe helper to convert a base-16 string into an unsigned long.\n * The function parses @param src until it reaches a character that is not 0-9, a-f or A-F,\n * or until @param slen characters have been read.\n * On success it writes the result to @param result_output and returns 1.\n * If the string represents an overflowing value, -1 is returned. */\nint string2ul_base16_async_signal_safe(const char *src, size_t slen, unsigned long *result_output) {\n    static char ascii_to_dec[] = {'0', 'a' - 10, 'A' - 10};\n\n    int char_type = 0;\n    size_t curr_char_idx = 0;\n    unsigned long result = 0;\n    int base = 16;\n    /* Check the index bound before classifying the character, so we never\n     * read past the end of the buffer. */\n    while (curr_char_idx < slen &&\n            (-1 != (char_type = base_16_char_type(src[curr_char_idx])))) {\n        unsigned long curr_val = src[curr_char_idx] - ascii_to_dec[char_type];\n        if ((result > ULONG_MAX / base) || (result > (ULONG_MAX - curr_val)/base)) /* Overflow. */\n            return -1;\n        result = result * base + curr_val;\n        ++curr_char_idx;\n    }\n\n    *result_output = result;\n    return 1;\n}\n\n/* Convert a string into a double. Returns 1 if the string could be parsed\n * into a (non-overflowing) double, 0 otherwise. 
The value will be set to\n * the parsed value when appropriate.\n *\n * Note that this function demands that the string strictly represents\n * a double: no spaces or other characters before or after the string\n * representing the number are accepted. */\nint string2ld(const char *s, size_t slen, long double *dp) {\n    char buf[MAX_LONG_DOUBLE_CHARS];\n    long double value;\n    char *eptr;\n\n    if (slen == 0 || slen >= sizeof(buf)) return 0;\n    memcpy(buf,s,slen);\n    buf[slen] = '\\0';\n\n    errno = 0;\n    value = strtold(buf, &eptr);\n    if (isspace(buf[0]) || eptr[0] != '\\0' ||\n        (size_t)(eptr-buf) != slen ||\n        (errno == ERANGE &&\n            (value == HUGE_VAL || value == -HUGE_VAL || fpclassify(value) == FP_ZERO)) ||\n        errno == EINVAL ||\n        isnan(value))\n        return 0;\n\n    if (dp) *dp = value;\n    return 1;\n}\n\n/* Convert a string into a double. Returns 1 if the string could be parsed\n * into a (non-overflowing) double, 0 otherwise. The value will be set to\n * the parsed value when appropriate.\n *\n * Note that this function demands that the string strictly represents\n * a double: no spaces or other characters before or after the string\n * representing the number are accepted. */\nint string2d(const char *s, size_t slen, double *dp) {\n    errno = 0;\n    char *eptr;\n    /* Fast path to reject empty strings, or strings starting by space explicitly */\n    if (unlikely(slen == 0 ||\n        isspace(((const char*)s)[0])))\n        return 0;\n    *dp = fast_float_strtod(s, slen, &eptr);\n    /* Reject if not all characters were consumed by the parser. 
*/\n    if (unlikely((size_t)(eptr - (char*)s) != slen)) {\n        return 0;\n    }\n    if (unlikely(errno == EINVAL ||\n        (errno == ERANGE &&\n            (*dp == HUGE_VAL || *dp == -HUGE_VAL || fpclassify(*dp) == FP_ZERO)) ||\n        isnan(*dp)))\n        return 0;\n    return 1;\n}\n\n/* Returns 1 if the double value can safely be represented in long long without\n * precision loss, in which case the corresponding long long is stored in the out variable. */\nint double2ll(double d, long long *out) {\n#if (DBL_MANT_DIG >= 52) && (DBL_MANT_DIG <= 63) && (LLONG_MAX == 0x7fffffffffffffffLL)\n    /* Check if the float is in a safe range to be cast into a\n     * long long. We are assuming that long long is 64 bit here.\n     * Also we are assuming that there are no implementations around where\n     * double has precision < 52 bit.\n     *\n     * Under these assumptions we test if a double is inside a range\n     * where casting to long long is safe. Then using two casts we\n     * make sure the decimal part is zero. If all this is true we can use\n     * the integer without precision loss.\n     *\n     * Note that numbers above 2^52 and below 2^63 use all the fraction bits as real part,\n     * and the exponent bits are positive, which means the \"decimal\" part must be 0.\n     * i.e. all double values in that range are representable as a long without precision loss,\n     * but not all long values in that range can be represented as a double.\n     * We only care about the first part here. */\n    if (d < (double)(-LLONG_MAX/2) || d > (double)(LLONG_MAX/2))\n        return 0;\n    long long ll = d;\n    if (ll == d) {\n        *out = ll;\n        return 1;\n    }\n#endif\n    return 0;\n}\n\n/* Convert a double to a string representation. Returns the number of bytes\n * required. The representation should always be parsable by strtod(3).\n * This function does not support human-friendly formatting like ld2string\n * does. 
It is intended mainly to be used inside t_zset.c when writing scores\n * into a listpack representing a sorted set. */\nint d2string(char *buf, size_t len, double value) {\n    if (isnan(value)) {\n        /* Libc in some systems will format nan in a different way,\n         * like nan, -nan, NAN, nan(char-sequence).\n         * So we normalize it and create a single nan form in an explicit way. */\n        len = snprintf(buf,len,\"nan\");\n    } else if (isinf(value)) {\n        /* Libc in odd systems (Hi Solaris!) will format infinite in a\n         * different way, so better to handle it in an explicit way. */\n        if (value < 0)\n            len = snprintf(buf,len,\"-inf\");\n        else\n            len = snprintf(buf,len,\"inf\");\n    } else if (value == 0) {\n        /* See: http://en.wikipedia.org/wiki/Signed_zero, \"Comparisons\". */\n        if (1.0/value < 0)\n            len = snprintf(buf,len,\"-0\");\n        else\n            len = snprintf(buf,len,\"0\");\n    } else {\n        long long lvalue;\n        /* Integer printing function is much faster, check if we can safely use it. 
*/\n        if (double2ll(value, &lvalue))\n            len = ll2string(buf,len,lvalue);\n        else {\n            len = fpconv_dtoa(value, buf);\n            buf[len] = '\\0';\n        }\n    }\n\n    return len;\n}\n\n/* Convert a double into a string with 'fractional_digits' digits after the decimal point.\n * This is an optimized version of snprintf \"%.<fractional_digits>f\".\n * We multiply the input double by 10 ^ <fractional_digits> and convert it to a long long\n * to shift the decimal places.\n * Note that multiplying the input value by 10 ^ <fractional_digits> can overflow, but in\n * the scenarios where we currently use it within Redis that is not possible.\n * After we get the long long representation we use the logic from the ull2string function\n * in this file, which is based on the following article:\n * https://www.facebook.com/notes/facebook-engineering/three-optimization-tips-for-c/10151361643253920\n *\n * Input values:\n * dst: the buffer to store the string representation\n * dstlen: the buffer length\n * dvalue: the input double\n * fractional_digits: the number of digits after the decimal point, between 1 and 17\n *\n * Return values:\n * Returns the number of characters needed to represent the number.\n * If the buffer is not big enough to store the string, 0 is returned.\n */\nint fixedpoint_d2string(char *dst, size_t dstlen, double dvalue, int fractional_digits) {\n    if (fractional_digits < 1 || fractional_digits > 17)\n        goto err;\n    /* min size of 2 ( due to 0. 
) + n fractional_digits + \\0 */\n    if ((int)dstlen < (fractional_digits+3))\n        goto err;\n    if (dvalue == 0) {\n        dst[0] = '0';\n        dst[1] = '.';\n        memset(dst + 2, '0', fractional_digits);\n        dst[fractional_digits+2] = '\\0';\n        return fractional_digits + 2;\n    }\n    /* scale and round */\n    static double powers_of_ten[] = {1.0, 10.0, 100.0, 1000.0, 10000.0, 100000.0, 1000000.0,\n    10000000.0, 100000000.0, 1000000000.0, 10000000000.0, 100000000000.0, 1000000000000.0,\n    10000000000000.0, 100000000000000.0, 1000000000000000.0, 10000000000000000.0,\n    100000000000000000.0 };\n    long long svalue = llrint(dvalue * powers_of_ten[fractional_digits]);\n    unsigned long long value;\n    /* write sign */\n    int negative = 0;\n    if (svalue < 0) {\n        if (svalue != LLONG_MIN) {\n            value = -svalue;\n        } else {\n            value = ((unsigned long long) LLONG_MAX)+1;\n        }\n        if (dstlen < 2)\n            goto err;\n        negative = 1;\n        dst[0] = '-';\n        dst++;\n        dstlen--;\n    } else {\n        value = svalue;\n    }\n\n    static const char digitsd[201] =\n        \"0001020304050607080910111213141516171819\"\n        \"2021222324252627282930313233343536373839\"\n        \"4041424344454647484950515253545556575859\"\n        \"6061626364656667686970717273747576777879\"\n        \"8081828384858687888990919293949596979899\";\n\n    /* Check length. 
*/\n    uint32_t ndigits = digits10(value);\n    if (ndigits >= dstlen) goto err;\n    int integer_digits = ndigits - fractional_digits;\n    /* Fractional only check to avoid representing 0.7750 as .7750.\n     * This means we need to increment the length and store 0 as the first character.\n     */\n    if (integer_digits < 1) {\n        dst[0] = '0';\n        integer_digits = 1;\n    }\n    dst[integer_digits] = '.';\n    int size = integer_digits + 1 + fractional_digits;\n    /* fill with 0 from fractional digits until size */\n    memset(dst + integer_digits + 1, '0', fractional_digits);\n    int next = size - 1;\n    while (value >= 100) {\n        int const i = (value % 100) * 2;\n        value /= 100;\n        dst[next] = digitsd[i + 1];\n        dst[next - 1] = digitsd[i];\n        next -= 2;\n        /* dot position */\n        if (next == integer_digits) {\n            next--;\n        }\n    }\n\n    /* Handle last 1-2 digits. */\n    if (value < 10) {\n        dst[next] = '0' + (uint32_t) value;\n    } else {\n        int i = (uint32_t) value * 2;\n        dst[next] = digitsd[i + 1];\n        dst[next - 1] = digitsd[i];\n    }\n    /* Null term. */\n    dst[size] = '\\0';\n    return size + negative;\nerr:\n    /* force add Null termination */\n    if (dstlen > 0)\n        dst[0] = '\\0';\n    return 0;\n}\n\n/* Trims off trailing zeros from a string representing a double. 
*/\nint trimDoubleString(char *buf, size_t len) {\n    if (strchr(buf,'.') != NULL) {\n        char *p = buf+len-1;\n        while(*p == '0') {\n            p--;\n            len--;\n        }\n        if (*p == '.') len--;\n    }\n    buf[len] = '\\0';\n    return len;\n}\n\n/* Create a string object from a long double.\n * If mode is humanfriendly it does not use exponential format and trims trailing\n * zeroes at the end (may result in loss of precision).\n * If mode is default exp format is used and the output of snprintf()\n * is not modified (may result in loss of precision).\n * If mode is hex hexadecimal format is used (no loss of precision)\n *\n * The function returns the length of the string or zero if there was not\n * enough buffer room to store it. */\nint ld2string(char *buf, size_t len, long double value, ld2string_mode mode) {\n    size_t l = 0;\n\n    if (isinf(value)) {\n        /* Libc in odd systems (Hi Solaris!) will format infinity in a\n         * different way, so better to handle it in an explicit way. */\n        if (len < 5) goto err; /* No room. 5 is \"-inf\\0\" */\n        if (value > 0) {\n            memcpy(buf,\"inf\",3);\n            l = 3;\n        } else {\n            memcpy(buf,\"-inf\",4);\n            l = 4;\n        }\n    } else if (isnan(value)) {\n        /* Libc in some systems will format nan in a different way,\n         * like nan, -nan, NAN, nan(char-sequence).\n         * So we normalize it and create a single nan form in an explicit way. */\n        if (len < 4) goto err; /* No room. 4 is \"nan\\0\" */\n        memcpy(buf, \"nan\", 3);\n        l = 3;\n    } else {\n        switch (mode) {\n        case LD_STR_AUTO:\n            l = snprintf(buf,len,\"%.17Lg\",value);\n            if (l+1 > len) goto err; /* No room. */\n            break;\n        case LD_STR_HEX:\n            l = snprintf(buf,len,\"%La\",value);\n            if (l+1 > len) goto err; /* No room. 
*/\n            break;\n        case LD_STR_HUMAN:\n            /* We use 17 digits precision since with 128 bit floats that precision\n             * after rounding is able to represent most small decimal numbers in a\n             * way that is \"non surprising\" for the user (that is, most small\n             * decimal numbers will be represented in a way that when converted\n             * back into a string are exactly the same as what the user typed.) */\n            l = snprintf(buf,len,\"%.17Lf\",value);\n            if (l+1 > len) goto err; /* No room. */\n            /* Now remove trailing zeroes after the '.' */\n            if (strchr(buf,'.') != NULL) {\n                char *p = buf+l-1;\n                while(*p == '0') {\n                    p--;\n                    l--;\n                }\n                if (*p == '.') l--;\n            }\n            if (l == 2 && buf[0] == '-' && buf[1] == '0') {\n                buf[0] = '0';\n                l = 1;\n            }\n            break;\n        default: goto err; /* Invalid mode. */\n        }\n    }\n    buf[l] = '\\0';\n    return l;\nerr:\n    /* force add Null termination */\n    if (len > 0)\n        buf[0] = '\\0';\n    return 0;\n}\n\n/* Get random bytes, attempts to get an initial seed from /dev/urandom and\n * then uses a one-way hash function in counter mode to generate a random\n * stream. However if /dev/urandom is not available, a weaker seed is used.\n *\n * This function is not thread safe, since the state is global. */\nvoid getRandomBytes(unsigned char *p, size_t len) {\n    /* Global state. */\n    static int seed_initialized = 0;\n    static unsigned char seed[64]; /* 512 bit internal block size. */\n    static uint64_t counter = 0; /* The counter we hash with the seed. */\n\n    if (!seed_initialized) {\n        /* Initialize a seed and use SHA256 in counter mode, where we hash\n         * the same seed with a progressive counter. 
For the goals of this\n         * function we just need non-colliding strings, there are no\n         * cryptographic security needs. */\n        FILE *fp = fopen(\"/dev/urandom\",\"r\");\n        if (fp == NULL || fread(seed,sizeof(seed),1,fp) != 1) {\n            /* Revert to a weaker seed, and in this case reseed again\n             * at every call.*/\n            for (unsigned int j = 0; j < sizeof(seed); j++) {\n                struct timeval tv;\n                gettimeofday(&tv,NULL);\n                pid_t pid = getpid();\n                seed[j] = tv.tv_sec ^ tv.tv_usec ^ pid ^ (long)fp;\n            }\n        } else {\n            seed_initialized = 1;\n        }\n        if (fp) fclose(fp);\n    }\n\n    while(len) {\n        /* This implements SHA256-HMAC. */\n        unsigned char digest[SHA256_BLOCK_SIZE];\n        unsigned char kxor[64];\n        unsigned int copylen =\n            len > SHA256_BLOCK_SIZE ? SHA256_BLOCK_SIZE : len;\n\n        /* IKEY: key xored with 0x36. */\n        memcpy(kxor,seed,sizeof(kxor));\n        for (unsigned int i = 0; i < sizeof(kxor); i++) kxor[i] ^= 0x36;\n\n        /* Obtain HASH(IKEY||MESSAGE). */\n        SHA256_CTX ctx;\n        sha256_init(&ctx);\n        sha256_update(&ctx,kxor,sizeof(kxor));\n        sha256_update(&ctx,(unsigned char*)&counter,sizeof(counter));\n        sha256_final(&ctx,digest);\n\n        /* OKEY: key xored with 0x5c. */\n        memcpy(kxor,seed,sizeof(kxor));\n        for (unsigned int i = 0; i < sizeof(kxor); i++) kxor[i] ^= 0x5C;\n\n        /* Obtain HASH(OKEY || HASH(IKEY||MESSAGE)). */\n        sha256_init(&ctx);\n        sha256_update(&ctx,kxor,sizeof(kxor));\n        sha256_update(&ctx,digest,SHA256_BLOCK_SIZE);\n        sha256_final(&ctx,digest);\n\n        /* Increment the counter for the next iteration. 
*/\n        counter++;\n\n        memcpy(p,digest,copylen);\n        len -= copylen;\n        p += copylen;\n    }\n}\n\n/* Generate the Redis \"Run ID\", a SHA1-sized random number that identifies a\n * given execution of Redis, so that if you are talking with an instance\n * having run_id == A, and you reconnect and it has run_id == B, you can be\n * sure that it is either a different instance or it was restarted. */\nvoid getRandomHexChars(char *p, size_t len) {\n    char *charset = \"0123456789abcdef\";\n    size_t j;\n\n    getRandomBytes((unsigned char*)p,len);\n    for (j = 0; j < len; j++) p[j] = charset[p[j] & 0x0F];\n}\n\n/* Given the filename, return the absolute path as an SDS string, or NULL\n * if it fails for some reason. Note that \"filename\" may be an absolute path\n * already, this will be detected and handled correctly.\n *\n * The function does not try to normalize everything, but only the obvious\n * case of one or more \"../\" appearing at the start of \"filename\"\n * relative path. */\nsds getAbsolutePath(char *filename) {\n    char cwd[1024];\n    sds abspath;\n    sds relpath = sdsnew(filename);\n\n    relpath = sdstrim(relpath,\" \\r\\n\\t\");\n    if (relpath[0] == '/') return relpath; /* Path is already absolute. */\n\n    /* If path is relative, join cwd and relative path. */\n    if (getcwd(cwd,sizeof(cwd)) == NULL) {\n        sdsfree(relpath);\n        return NULL;\n    }\n    abspath = sdsnew(cwd);\n    if (sdslen(abspath) && abspath[sdslen(abspath)-1] != '/')\n        abspath = sdscat(abspath,\"/\");\n\n    /* At this point we have the current path always ending with \"/\", and\n     * the trimmed relative path. Try to normalize the obvious case of\n     * trailing ../ elements at the start of the path.\n     *\n     * For every \"../\" we find in the filename, we remove it and also remove\n     * the last element of the cwd, unless the current cwd is \"/\". */\n    while (sdslen(relpath) >= 3 &&\n           relpath[0] == '.' 
&& relpath[1] == '.' && relpath[2] == '/')\n    {\n        sdsrange(relpath,3,-1);\n        if (sdslen(abspath) > 1) {\n            char *p = abspath + sdslen(abspath)-2;\n            int trimlen = 1;\n\n            while(*p != '/') {\n                p--;\n                trimlen++;\n            }\n            sdsrange(abspath,0,-(trimlen+1));\n        }\n    }\n\n    /* Finally glue the two parts together. */\n    abspath = sdscatsds(abspath,relpath);\n    sdsfree(relpath);\n    return abspath;\n}\n\n/*\n * Gets the proper timezone in a more portable fashion,\n * i.e. the timezone variable is Linux-specific.\n */\nlong getTimeZone(void) {\n#if defined(__linux__) || defined(__sun)\n    return timezone;\n#else\n    struct timezone tz;\n\n    gettimeofday(NULL, &tz);\n\n    return tz.tz_minuteswest * 60L;\n#endif\n}\n\n/* Return true if the specified path is just a file basename without any\n * relative or absolute path. This function just checks that no / or \\\n * character exists inside the specified path, that's enough in the\n * environments where Redis runs. 
*/\nint pathIsBaseName(char *path) {\n    return strchr(path,'/') == NULL && strchr(path,'\\\\') == NULL;\n}\n\nint fileExist(char *filename) {\n    struct stat statbuf;\n    return stat(filename, &statbuf) == 0 && S_ISREG(statbuf.st_mode);\n}\n\nint dirExists(char *dname) {\n    struct stat statbuf;\n    return stat(dname, &statbuf) == 0 && S_ISDIR(statbuf.st_mode);\n}\n\nint dirCreateIfMissing(char *dname) {\n    if (mkdir(dname, 0755) != 0) {\n        if (errno != EEXIST) {\n            return -1;\n        } else if (!dirExists(dname)) {\n            errno = ENOTDIR;\n            return -1;\n        }\n    }\n    return 0;\n}\n\nint dirRemove(char *dname) {\n    DIR *dir;\n    struct stat stat_entry;\n    struct dirent *entry;\n    char full_path[PATH_MAX + 1];\n\n    if ((dir = opendir(dname)) == NULL) {\n        return -1;\n    }\n\n    while ((entry = readdir(dir)) != NULL) {\n        if (!strcmp(entry->d_name, \".\") || !strcmp(entry->d_name, \"..\")) continue;\n\n        snprintf(full_path, sizeof(full_path), \"%s/%s\", dname, entry->d_name);\n\n        int fd = open(full_path, O_RDONLY|O_NONBLOCK);\n        if (fd == -1) {\n            closedir(dir);\n            return -1;\n        }\n\n        if (fstat(fd, &stat_entry) == -1) {\n            close(fd);\n            closedir(dir);\n            return -1;\n        }\n        close(fd);\n\n        if (S_ISDIR(stat_entry.st_mode) != 0) {\n            if (dirRemove(full_path) == -1) {\n                closedir(dir);\n                return -1;\n            }\n            continue;\n        }\n\n        if (unlink(full_path) != 0) {\n            closedir(dir);\n            return -1;\n        }\n    }\n\n    if (rmdir(dname) != 0) {\n        closedir(dir);\n        return -1;\n    }\n\n    closedir(dir);\n    return 0;\n}\n\nsds makePath(char *path, char *filename) {\n    return sdscatfmt(sdsempty(), \"%s/%s\", path, filename);\n}\n\n/* Given the filename, sync the corresponding directory.\n *\n * Usually a 
portable and safe pattern to overwrite existing files would be like:\n * 1. create a new temp file (on the same file system!)\n * 2. write data to the temp file\n * 3. fsync() the temp file\n * 4. rename the temp file to the appropriate name\n * 5. fsync() the containing directory */\nint fsyncFileDir(const char *filename) {\n#ifdef _AIX\n    /* AIX is unable to fsync a directory */\n    return 0;\n#endif\n    char temp_filename[PATH_MAX + 1];\n    char *dname;\n    int dir_fd;\n\n    if (strlen(filename) > PATH_MAX) {\n        errno = ENAMETOOLONG;\n        return -1;\n    }\n\n    /* In the glibc implementation dirname may modify its argument. */\n    memcpy(temp_filename, filename, strlen(filename) + 1);\n    dname = dirname(temp_filename);\n\n    dir_fd = open(dname, O_RDONLY);\n    if (dir_fd == -1) {\n        /* Some OSs don't allow us to open directories at all, just\n         * ignore the error in that case */\n        if (errno == EISDIR) {\n            return 0;\n        }\n        return -1;\n    }\n    /* Some OSs don't allow us to fsync directories at all, so we can ignore\n     * those errors. 
*/\n    if (redis_fsync(dir_fd) == -1 && !(errno == EBADF || errno == EINVAL)) {\n        int save_errno = errno;\n        close(dir_fd);\n        errno = save_errno;\n        return -1;\n    }\n\n    close(dir_fd);\n    return 0;\n}\n\n /* free OS pages backed by file */\nint reclaimFilePageCache(int fd, size_t offset, size_t length) {\n#ifdef HAVE_FADVISE\n    int ret = posix_fadvise(fd, offset, length, POSIX_FADV_DONTNEED);\n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n#else\n    UNUSED(fd);\n    UNUSED(offset);\n    UNUSED(length);\n    return 0;\n#endif\n}\n\n/** An async signal safe version of fgets().\n * Has the same behaviour as standard fgets(): reads a line from fd and stores it into the dest buffer.\n * It stops when either (buff_size-1) characters are read, the newline character is read, or the end-of-file is reached,\n * whichever comes first.\n *\n * On success, the function returns the same dest parameter. If the End-of-File is encountered and no characters have\n * been read, the contents of dest remain unchanged and a null pointer is returned.\n * If an error occurs, a null pointer is returned. */\nchar *fgets_async_signal_safe(char *dest, int buff_size, int fd) {\n    for (int i = 0; i < buff_size; i++) {\n        /* Read one byte */\n        ssize_t bytes_read_count = read(fd, dest + i, 1);\n        /* On EOF or error return NULL */\n        if (bytes_read_count < 1) {\n            return NULL;\n        }\n        /* we found the end of the line. 
*/\n        if (dest[i] == '\\n') {\n            break;\n        }\n    }\n    return dest;\n}\n\nstatic const char HEX[] = \"0123456789abcdef\";\n\nstatic char *u2string_async_signal_safe(int _base, uint64_t val, char *buf) {\n    uint32_t base = (uint32_t) _base;\n    *buf-- = 0;\n    do {\n        *buf-- = HEX[val % base];\n    } while ((val /= base) != 0);\n    return buf + 1;\n}\n\nstatic char *i2string_async_signal_safe(int base, int64_t val, char *buf) {\n    char *orig_buf = buf;\n    const int32_t is_neg = (val < 0);\n    *buf-- = 0;\n\n    if (is_neg) {\n        val = -val;\n    }\n    if (is_neg && base == 16) {\n        int ix;\n        val -= 1;\n        for (ix = 0; ix < 16; ++ix)\n            buf[-ix] = '0';\n    }\n\n    do {\n        *buf-- = HEX[val % base];\n    } while ((val /= base) != 0);\n\n    if (is_neg && base == 10) {\n        *buf-- = '-';\n    }\n\n    if (is_neg && base == 16) {\n        int ix;\n        buf = orig_buf - 1;\n        for (ix = 0; ix < 16; ++ix, --buf) {\n            /* *INDENT-OFF* */\n            switch (*buf) {\n            case '0': *buf = 'f'; break;\n            case '1': *buf = 'e'; break;\n            case '2': *buf = 'd'; break;\n            case '3': *buf = 'c'; break;\n            case '4': *buf = 'b'; break;\n            case '5': *buf = 'a'; break;\n            case '6': *buf = '9'; break;\n            case '7': *buf = '8'; break;\n            case '8': *buf = '7'; break;\n            case '9': *buf = '6'; break;\n            case 'a': *buf = '5'; break;\n            case 'b': *buf = '4'; break;\n            case 'c': *buf = '3'; break;\n            case 'd': *buf = '2'; break;\n            case 'e': *buf = '1'; break;\n            case 'f': *buf = '0'; break;\n            }\n            /* *INDENT-ON* */\n        }\n    }\n    return buf + 1;\n}\n\nstatic const char *check_longlong_async_signal_safe(const char *fmt, int32_t *have_longlong) {\n    *have_longlong = 0;\n    if (*fmt == 'l') {\n        fmt++;\n 
       if (*fmt != 'l') {\n            *have_longlong = (sizeof(long) == sizeof(int64_t));\n        } else {\n            fmt++;\n            *have_longlong = 1;\n        }\n    }\n    return fmt;\n}\n\nint vsnprintf_async_signal_safe(char *to, size_t size, const char *format, va_list ap) {\n    char *start = to;\n    char *end = start + size - 1;\n    for (; *format; ++format) {\n        int32_t have_longlong = 0;\n        if (*format != '%') {\n            if (to == end) { /* end of buffer */\n                break;\n            }\n            *to++ = *format; /* copy ordinary char */\n            continue;\n        }\n        ++format; /* skip '%' */\n\n        format = check_longlong_async_signal_safe(format, &have_longlong);\n\n        switch (*format) {\n        case 'd':\n        case 'i':\n        case 'u':\n        case 'x':\n        case 'p':\n            {\n                int64_t ival = 0;\n                uint64_t uval = 0;\n                if (*format == 'p')\n                    have_longlong = (sizeof(void *) == sizeof(uint64_t));\n                if (have_longlong) {\n                    if (*format == 'u') {\n                        uval = va_arg(ap, uint64_t);\n                    } else {\n                        ival = va_arg(ap, int64_t);\n                    }\n                } else {\n                    if (*format == 'u') {\n                        uval = va_arg(ap, uint32_t);\n                    } else {\n                        ival = va_arg(ap, int32_t);\n                    }\n                }\n\n                {\n                    char buff[22];\n                    const int base = (*format == 'x' || *format == 'p') ? 
16 : 10;\n\n/* *INDENT-OFF* */\n                    char *val_as_str = (*format == 'u') ?\n                        u2string_async_signal_safe(base, uval, &buff[sizeof(buff) - 1]) :\n                        i2string_async_signal_safe(base, ival, &buff[sizeof(buff) - 1]);\n/* *INDENT-ON* */\n\n                    /* Strip off \"ffffffff\" if we have 'x' format without 'll' */\n                    if (*format == 'x' && !have_longlong && ival < 0) {\n                        val_as_str += 8;\n                    }\n\n                    while (*val_as_str && to < end) {\n                        *to++ = *val_as_str++;\n                    }\n                    continue;\n                }\n            }\n        case 's':\n            {\n                const char *val = va_arg(ap, char *);\n                if (!val) {\n                    val = \"(null)\";\n                }\n                while (*val && to < end) {\n                    *to++ = *val++;\n                }\n                continue;\n            }\n        }\n    }\n    *to = 0;\n    return (int)(to - start);\n}\n\nint snprintf_async_signal_safe(char *to, size_t n, const char *fmt, ...) {\n    int result;\n    va_list args;\n    va_start(args, fmt);\n    result = vsnprintf_async_signal_safe(to, n, fmt, args);\n    va_end(args);\n    return result;\n}\n\n#ifdef REDIS_TEST\n#include <assert.h>\n#include <sys/mman.h>\n#include \"testhelp.h\"\n\nstatic void test_string2ll(void) {\n    char buf[32];\n    long long v;\n\n    /* May not start with +. */\n    redis_strlcpy(buf,\"+1\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n\n    /* Leading space. */\n    redis_strlcpy(buf,\" 1\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n\n    /* Trailing space. */\n    redis_strlcpy(buf,\"1 \",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n\n    /* May not start with 0. 
*/\n    redis_strlcpy(buf,\"01\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n\n    redis_strlcpy(buf,\"-1\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == -1);\n\n    redis_strlcpy(buf,\"0\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == 0);\n\n    redis_strlcpy(buf,\"1\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == 1);\n\n    redis_strlcpy(buf,\"99\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == 99);\n\n    redis_strlcpy(buf,\"-99\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == -99);\n\n    redis_strlcpy(buf,\"-9223372036854775808\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == LLONG_MIN);\n\n    redis_strlcpy(buf,\"-9223372036854775809\",sizeof(buf)); /* overflow */\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n\n    redis_strlcpy(buf,\"9223372036854775807\",sizeof(buf));\n    assert(string2ll(buf,strlen(buf),&v) == 1);\n    assert(v == LLONG_MAX);\n\n    redis_strlcpy(buf,\"9223372036854775808\",sizeof(buf)); /* overflow */\n    assert(string2ll(buf,strlen(buf),&v) == 0);\n}\n\nstatic void test_string2l(void) {\n    char buf[32];\n    long v;\n\n    /* May not start with +. */\n    redis_strlcpy(buf,\"+1\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 0);\n\n    /* May not start with 0. 
*/\n    redis_strlcpy(buf,\"01\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 0);\n\n    redis_strlcpy(buf,\"-1\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == -1);\n\n    redis_strlcpy(buf,\"0\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == 0);\n\n    redis_strlcpy(buf,\"1\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == 1);\n\n    redis_strlcpy(buf,\"99\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == 99);\n\n    redis_strlcpy(buf,\"-99\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == -99);\n\n#if LONG_MAX != LLONG_MAX\n    redis_strlcpy(buf,\"-2147483648\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == LONG_MIN);\n\n    redis_strlcpy(buf,\"-2147483649\",sizeof(buf)); /* overflow */\n    assert(string2l(buf,strlen(buf),&v) == 0);\n\n    redis_strlcpy(buf,\"2147483647\",sizeof(buf));\n    assert(string2l(buf,strlen(buf),&v) == 1);\n    assert(v == LONG_MAX);\n\n    redis_strlcpy(buf,\"2147483648\",sizeof(buf)); /* overflow */\n    assert(string2l(buf,strlen(buf),&v) == 0);\n#endif\n}\n\nstatic void test_string2d(void) {\n    char buf[1024];\n    double v;\n\n    /* Valid hexadecimal value. 
*/\n    redis_strlcpy(buf,\"0x0p+0\",sizeof(buf));\n    assert(string2d(buf,strlen(buf),&v) == 1);\n    assert(v == 0.0);\n\n    redis_strlcpy(buf,\"0x1p+0\",sizeof(buf));\n    assert(string2d(buf,strlen(buf),&v) == 1);\n    assert(v == 1.0);\n\n    /* Valid floating-point numbers */\n    redis_strlcpy(buf, \"1.5\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 1);\n    assert(v == 1.5);\n\n    redis_strlcpy(buf, \"-3.14\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 1);\n    assert(v == -3.14);\n\n    redis_strlcpy(buf, \"2.0e10\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 1);\n    assert(v == 2.0e10);\n\n    redis_strlcpy(buf, \"1e-3\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 1);\n    assert(v == 0.001);\n\n    /* Valid integer */\n    redis_strlcpy(buf, \"42\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 1);\n    assert(v == 42.0);\n\n    /* Invalid cases */\n    /* Empty. */\n    redis_strlcpy(buf, \"\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 0);\n\n    /* Starting by space. */\n    redis_strlcpy(buf, \" 1.23\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 0);\n\n    /* Invalid hexadecimal format. */\n    redis_strlcpy(buf, \"0x1.2g\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 0);\n\n    /* Hexadecimal NaN */\n    redis_strlcpy(buf, \"0xNan\", sizeof(buf));\n    assert(string2d(buf, strlen(buf), &v) == 0);\n\n    /* overflow. 
*/\n    redis_strlcpy(buf,\"23456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789\",sizeof(buf));\n    assert(string2d(buf,strlen(buf),&v) == 0);\n}\n\nstatic void test_ll2string(void) {\n    char buf[32];\n    long long v;\n    int sz;\n\n    v = 0;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 1);\n    assert(!strcmp(buf, \"0\"));\n\n    v = -1;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 2);\n    assert(!strcmp(buf, \"-1\"));\n\n    v = 99;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 2);\n    assert(!strcmp(buf, \"99\"));\n\n    v = -99;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 3);\n    assert(!strcmp(buf, \"-99\"));\n\n    v = -2147483648;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 11);\n    assert(!strcmp(buf, \"-2147483648\"));\n\n    v = LLONG_MIN;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 20);\n    assert(!strcmp(buf, \"-9223372036854775808\"));\n\n    v = LLONG_MAX;\n    sz = ll2string(buf, sizeof buf, v);\n    assert(sz == 19);\n    assert(!strcmp(buf, \"9223372036854775807\"));\n}\n\nstatic void test_ld2string(void) {\n    char buf[32];\n    long double v;\n    int sz;\n\n    v = 0.0 / 0.0;\n    sz = ld2string(buf, sizeof(buf), v, LD_STR_AUTO);\n    assert(sz == 3);\n    assert(!strcmp(buf, \"nan\"));\n}\n\nstatic void test_fixedpoint_d2string(void) {\n    char buf[32];\n    double v;\n    int sz;\n    v = 0.0;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 6);\n    assert(!strcmp(buf, \"0.0000\"));\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 1);\n    assert(sz == 3);\n    assert(!strcmp(buf, \"0.0\"));\n    /* set junk in buffer */\n    memset(buf,'A',32);\n    v = 
0.0001;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 6);\n    assert(buf[sz] == '\\0');\n    assert(!strcmp(buf, \"0.0001\"));\n    /* set junk in buffer */\n    memset(buf,'A',32);\n    v = 6.0642951598391699e-05;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 6);\n    assert(buf[sz] == '\\0');\n    assert(!strcmp(buf, \"0.0001\"));\n    v = 0.01;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 6);\n    assert(!strcmp(buf, \"0.0100\"));\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 1);\n    assert(sz == 3);\n    assert(!strcmp(buf, \"0.0\"));\n    v = -0.01;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 7);\n    assert(!strcmp(buf, \"-0.0100\"));\n     v = -0.1;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 1);\n    assert(sz == 4);\n    assert(!strcmp(buf, \"-0.1\"));\n    v = 0.1;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 1);\n    assert(sz == 3);\n    assert(!strcmp(buf, \"0.1\"));\n    v = 0.01;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 17);\n    assert(sz == 19);\n    assert(!strcmp(buf, \"0.01000000000000000\"));\n    v = 10.01;\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 4);\n    assert(sz == 7);\n    assert(!strcmp(buf, \"10.0100\"));\n    /* negative tests */\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 18);\n    assert(sz == 0);\n    sz = fixedpoint_d2string(buf, sizeof buf, v, 0);\n    assert(sz == 0);\n    sz = fixedpoint_d2string(buf, 1, v, 1);\n    assert(sz == 0);\n}\n\n#if defined(__linux__)\n/* Since fadvise and mincore is only supported in specific platforms like\n * Linux, we only verify the fadvise mechanism works in Linux */\nstatic int cache_exist(int fd) {\n    unsigned char flag;\n    void *m = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);\n    assert(m);\n    assert(mincore(m, 4096, &flag) == 0);\n    munmap(m, 4096);\n    /* the least significant bit of the byte will be set if the corresponding\n     * 
page is currently resident in memory */\n    return flag&1;\n}\n\nstatic void test_reclaimFilePageCache(void) {\n    char *tmpfile = \"/tmp/redis-reclaim-cache-test\";\n    int fd = open(tmpfile, O_RDWR|O_CREAT, 0644);\n    assert(fd >= 0);\n\n    /* test write file */\n    char buf[4] = \"foo\";\n    assert(write(fd, buf, sizeof(buf)) > 0);\n    assert(cache_exist(fd));\n    assert(redis_fsync(fd) == 0);\n    assert(reclaimFilePageCache(fd, 0, 0) == 0);\n    assert(!cache_exist(fd));\n\n    /* test read file */\n    assert(pread(fd, buf, sizeof(buf), 0) > 0);\n    assert(cache_exist(fd));\n    assert(reclaimFilePageCache(fd, 0, 0) == 0);\n    assert(!cache_exist(fd));\n\n    unlink(tmpfile);\n    printf(\"reclaimFilePageCache test is ok\\n\");\n}\n#endif\n\nint utilTest(int argc, char **argv, int flags) {\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    test_string2ll();\n    test_string2l();\n    test_string2d();\n    test_ll2string();\n    test_ld2string();\n    test_fixedpoint_d2string();\n#if defined(__linux__)\n    if (!(flags & REDIS_TEST_VALGRIND)) {\n        test_reclaimFilePageCache();\n    }\n#endif\n    printf(\"Done testing util\\n\");\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/util.h",
    "content": "/*\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __REDIS_UTIL_H\n#define __REDIS_UTIL_H\n\n#include <stdint.h>\n#include \"sds.h\"\n\n/* The maximum number of characters needed to represent a long double\n * as a string (long double has a huge range of some 4952 chars, see LDBL_MAX).\n * This should be the size of the buffer given to ld2string */\n#define MAX_LONG_DOUBLE_CHARS 5*1024\n\n/* The maximum number of characters needed to represent a double\n * as a string (double has a huge range of some 328 chars, see DBL_MAX).\n * This should be the size of the buffer for sprintf with %f */\n#define MAX_DOUBLE_CHARS 400\n\n/* The maximum number of characters needed for a d2string/fpconv_dtoa call.\n * Since it uses %g and not %f, some 40 chars should be enough. 
*/\n#define MAX_D2STRING_CHARS 128\n\n/* Bytes needed for long -> str + '\\0' */\n#define LONG_STR_SIZE      21\n\n/* long double to string conversion options */\ntypedef enum {\n    LD_STR_AUTO,     /* %.17Lg */\n    LD_STR_HUMAN,    /* %.17Lf + Trimming of trailing zeros */\n    LD_STR_HEX       /* %La */\n} ld2string_mode;\n\nint prefixmatch(const char *pattern, int patternLen, const char *prefixStr, \n                int prefixStrLen, int nocase);\nint stringmatchlen(const char *p, int plen, const char *s, int slen, int nocase);\nint stringmatch(const char *p, const char *s, int nocase);\nint stringmatchlen_fuzz_test(void);\nunsigned long long memtoull(const char *p, int *err);\nconst char *mempbrk(const char *s, size_t len, const char *chars, size_t charslen);\nchar *memmapchars(char *s, size_t len, const char *from, const char *to, size_t setlen);\nuint32_t digits10(uint64_t v);\nuint32_t sdigits10(int64_t v);\nint ll2string(char *s, size_t len, long long value);\nint ull2string(char *s, size_t len, unsigned long long value);\nint string2ll(const char *s, size_t slen, long long *value);\nint string2ull(const char *s, unsigned long long *value);\nint string2l(const char *s, size_t slen, long *value);\nint string2ul_base16_async_signal_safe(const char *src, size_t slen, unsigned long *result_output);\nint string2ld(const char *s, size_t slen, long double *dp);\nint string2d(const char *s, size_t slen, double *dp);\nint trimDoubleString(char *buf, size_t len);\nint d2string(char *buf, size_t len, double value);\nint fixedpoint_d2string(char *dst, size_t dstlen, double dvalue, int fractional_digits);\nint ld2string(char *buf, size_t len, long double value, ld2string_mode mode);\nint double2ll(double d, long long *out);\nint yesnotoi(char *s);\nsds getAbsolutePath(char *filename);\nlong getTimeZone(void);\nint pathIsBaseName(char *path);\nint dirCreateIfMissing(char *dname);\nint dirExists(char *dname);\nint dirRemove(char *dname);\nint fileExist(char 
*filename);\nsds makePath(char *path, char *filename);\nint fsyncFileDir(const char *filename);\nint reclaimFilePageCache(int fd, size_t offset, size_t length);\nchar *fgets_async_signal_safe(char *dest, int buff_size, int fd);\nint vsnprintf_async_signal_safe(char *to, size_t size, const char *format, va_list ap);\n#ifdef __GNUC__\nint snprintf_async_signal_safe(char *to, size_t n, const char *fmt, ...)\n    __attribute__((format(printf, 3, 4)));\n#else\nint snprintf_async_signal_safe(char *to, size_t n, const char *fmt, ...);\n#endif\nsize_t redis_strlcpy(char *dst, const char *src, size_t dsize);\nsize_t redis_strlcat(char *dst, const char *src, size_t dsize);\n\n/* To keep it optimized without conditionals. Works only for: 0 < x < 2^63 */\nstatic inline int log2ceil(size_t x) {\n#if UINTPTR_MAX == 0xffffffffffffffff\n    return 63 - __builtin_clzll(x);\n#else\n    return 31 - __builtin_clz(x);\n#endif\n}\n\n#ifndef static_assert\n#define static_assert(expr, lit) extern char __static_assert_failure[(expr) ? 1:-1]\n#endif\n\n#ifdef REDIS_TEST\nint utilTest(int argc, char **argv, int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/valgrind.sup",
    "content": "{\n   <lzf_uninitialized_hash_table>\n   Memcheck:Cond\n   fun:lzf_compress\n}\n\n{\n   <lzf_uninitialized_hash_table>\n   Memcheck:Value4\n   fun:lzf_compress\n}\n\n{\n   <lzf_uninitialized_hash_table>\n   Memcheck:Value8\n   fun:lzf_compress\n}\n\n{\n   <negative size allocatoin, see integration/corrupt-dump>\n   Memcheck:FishyValue\n   malloc(size)\n   fun:malloc\n   fun:ztrymalloc_usable\n   fun:ztrymalloc\n}\n"
  },
  {
    "path": "src/vector.c",
    "content": "/* vector.c - Simple append-only vector implementation\n *\n * Copyright (c) 2026-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdint.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"vector.h\"\n#include \"redisassert.h\"\n#include \"zmalloc.h\"\n\n#define VEC_DEFAULT_INITCAP 8\n\n/*\n * Vector initialization.\n *\n * Modes:\n * - stack != NULL: use caller-provided storage for the first initcap items.\n * - stack == NULL && initcap > 0: start heap-backed with an initial 'initcap' capacity.\n * - stack == NULL && initcap == 0: start heap-backed with no initial storage.\n */\nvoid vecInit(vec *v, void **stack, size_t initcap) {\n    /* If stack is provided, initcap must be > 0 and at the size of the stack */\n    assert(initcap > 0 || stack == NULL);\n    \n    v->size = 0;\n    v->cap = initcap;\n    v->stack = stack; /* stack is NULL if not used */\n    v->free = NULL;\n\n    /* now init data either stack, heap or NULL */\n    v->data = (stack) ? stack : ((initcap > 0) ? zmalloc(initcap * sizeof(void *)) : NULL);\n}\n\n/* Release storage. If a free method is set, it is applied to every element\n * before the backing storage is released. Stack storage is never freed. */\nvoid vecRelease(vec *v) {\n    if (v->free) {\n        for (size_t i = 0; i < v->size; i++)\n            v->free(v->data[i]);\n    }\n    /* if data is not stack-allocated and is not NULL, free it */\n    if (v->data && v->data != v->stack)\n        zfree(v->data);\n    v->size = 0;\n    v->cap = 0;\n    v->data = NULL;\n    v->stack = NULL;\n    v->free = NULL;\n}\n\n/* Reset the logical length to zero while preserving allocated storage. */\nvoid vecClear(vec *v) {\n    v->size = 0;\n}\n\n/* Get element at index. index must be < vecSize(v). 
*/\nvoid *vecGet(const vec *v, size_t index) {\n    assert(index < v->size);\n    return v->data[index];\n}\n\n/* Ensure capacity is at least mincap. */\nvoid vecReserve(vec *v, size_t mincap) {\n    void **newdata;\n\n    if (mincap <= v->cap) return;\n\n    /* If no heap storage is used yet, allocate and copy from stack if needed. */\n    if (v->data == v->stack) {\n        newdata = zmalloc(mincap * sizeof(void *));\n        if (v->size) memcpy(newdata, v->data, v->size * sizeof(void *));\n    } else {\n        newdata = zrealloc(v->data, mincap * sizeof(void *));\n    }\n\n    v->data = newdata;\n    v->cap = mincap;\n}\n\n/* Append one element, growing storage as needed. */\nvoid vecPush(vec *v, void *value) {\n    if (unlikely(v->size == v->cap)) {\n        size_t newcap = (v->cap > 0) ? v->cap * 2 : VEC_DEFAULT_INITCAP;\n        vecReserve(v, newcap);\n    }\n\n    v->data[v->size++] = value;\n}\n\n#ifdef REDIS_TEST\n\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"testhelp.h\"\n\n#define UNUSED(x) (void)(x)\n\nstatic int vecTestFreeCalls = 0;\nstatic void vecTestFree(void *ptr) {\n    UNUSED(ptr);\n    vecTestFreeCalls++;\n}\n\nint vectorTest(int argc, char **argv, int flags)\n{\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    vec v;\n    void *vstack[2];\n    int one = 1, two = 2, three = 3, four = 4, five = 5, six = 6;\n\n    vecInit(&v, vstack, 2);\n    test_cond(\"vecInit() stack-backed size is 0\", vecSize(&v) == 0);\n    test_cond(\"vecInit() uses stack buffer\", vecData(&v) == vstack);\n    vecReserve(&v, 1);\n    test_cond(\"vecReserve() no-ops when capacity is already sufficient\",\n              v.cap == 2 && vecData(&v) == vstack);\n    vecPush(&v, &one);\n    vecPush(&v, &two);\n    test_cond(\"vecPush() appends into stack storage\",\n              vecSize(&v) == 2 && vecData(&v) == vstack &&\n              vecGet(&v, 0) == &one && vecGet(&v, 1) == &two);\n    vecReserve(&v, 4);\n    test_cond(\"vecReserve() spills from 
stack to heap preserving values\",\n              v.cap == 4 && vecData(&v) != vstack &&\n              vecGet(&v, 0) == &one && vecGet(&v, 1) == &two);\n    vecPush(&v, &three);\n    test_cond(\"vecPush() spills from stack to heap preserving values\",\n              vecSize(&v) == 3 &&\n              vecData(&v) != vstack && vecGet(&v, 0) == &one &&\n              vecGet(&v, 1) == &two && vecGet(&v, 2) == &three);\n\n    void **heap_data = vecData(&v);\n    vecClear(&v);\n    test_cond(\"vecClear() resets size but preserves storage\",\n              vecSize(&v) == 0 && vecData(&v) == heap_data);\n    vecRelease(&v);\n    test_cond(\"vecRelease() resets vector state\",\n              vecSize(&v) == 0 && vecData(&v) == NULL && v.cap == 0);\n\n    vecInit(&v, NULL, 4);\n    test_cond(\"vecInit() heap-backed hint allocates storage\",\n              vecSize(&v) == 0 && vecData(&v) != NULL && v.cap == 4);\n    vecPush(&v, &four);\n    test_cond(\"vecPush() works in heap-backed mode\",\n              vecGet(&v, 0) == &four);\n    vecReserve(&v, 8);\n    test_cond(\"vecReserve() grows heap-backed storage preserving values\",\n              v.cap == 8 && vecGet(&v, 0) == &four);\n    vecRelease(&v);\n\n    vecInit(&v, NULL, 0);\n    vecReserve(&v, 6);\n    test_cond(\"vecReserve() allocates heap storage from empty vector\",\n              v.cap == 6 && vecData(&v) != NULL);\n    vecPush(&v, &five);\n    vecPush(&v, &six);\n    test_cond(\"vecPush() works after vecReserve() on empty vector\",\n              vecSize(&v) == 2 &&\n              vecGet(&v, 0) == &five && vecGet(&v, 1) == &six);\n    vecRelease(&v);\n\n    /* vecSetFreeMethod: element free callback is invoked on release. 
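As an illustrative sketch, vecSetFreeMethod(&v, zfree) would make vecRelease() pass each stored element to zfree() before the backing array itself is freed, assuming the elements were allocated with zmalloc(). 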
*/\n    void *vstack2[2];\n    vecInit(&v, vstack2, 2);\n    vecSetFreeMethod(&v, vecTestFree);\n    vecPush(&v, &one);\n    vecPush(&v, &two);\n    vecPush(&v, &three); /* triggers spill to heap */\n    vecTestFreeCalls = 0;\n    vecRelease(&v);\n    test_cond(\"vecRelease() invokes free method on each element\",\n              vecTestFreeCalls == 3);\n\n    vecInit(&v, NULL, 4);\n    vecSetFreeMethod(&v, vecTestFree);\n    vecTestFreeCalls = 0;\n    vecRelease(&v);\n    test_cond(\"vecRelease() free method is a no-op on empty vector\",\n              vecTestFreeCalls == 0);\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/vector.h",
    "content": "#ifndef REDIS_VECTOR_H\n#define REDIS_VECTOR_H\n\n#include <stddef.h>\n\n/*\n * Simple append-only vector (dynamic array) of void * elements.\n *\n * Design:\n * --------\n * - Stores elements in a contiguous array (void **).\n * - Supports append (vecPush) and read access.\n * - Optionally uses caller-provided stack buffer to avoid heap allocations.\n * - See also comment in vector.c of vecInit() for more details.\n *\n * Memory:\n * -------\n * - vecRelease() frees heap memory if used.\n * - Stack buffer is never freed.\n * - Stored elements are never freed.\n *\n * Modes:\n * ------- \n * 1. Start On Stack (grow to heap): vec v;\n *                                   void *vstack[8];\n *                                   ...\n *                                   vecInit(&v, vstack, 8);\n *\n *   Start Embedded (grow to heap):  typedef struct { \n *                                     vec v; \n *                                     void *vembedded[8]; \n *                                   } obj;\n *                                   ...\n *                                   vecInit(&obj->v, obj->vembedded, 8);\n *\n * 2. Heap only, init capacity 8:    vec v;\n *                                   ...\n *                                   vecInit(&v, NULL, 8);\n *\n *    Heap only, init capacity 0:    vec v;\n *                                   ...\n *                                   vecInit(&v, NULL, 0);\n *\n * 3. Depends on var size:           vec v;\n *                                   void *vstack[8];\n *                                   vecInit(&v, vstack, 8);\n *                                   vecReserve(&v, varsize); // varsize <= 8 ? 
stack : heap\n *\n * Notes:\n * ------\n * - Not thread-safe.\n * - If stack == NULL and initcap > 0, initcap is treated as an initial\n *   heap-capacity hint.\n * - When used in Redis core, the implementation should use the Redis allocator\n *   wrappers (zmalloc / zrealloc / zfree) rather than libc allocation APIs.\n */\n\ntypedef struct vec {\n    size_t size;       /* Number of elements in the vector. */\n    size_t cap;        /* Capacity of the vector. */\n    void **data;       /* Heap-allocated storage or refers to stack. */\n    void **stack;      /* Optional stack buffer. */\n    void (*free)(void *ptr); /* Optional free method, applied to each\n                              * element on vecRelease. NULL = no-op. */\n} vec;\n\n/* Return the contiguous backing array. */\nstatic inline void **vecData(const vec *v) { return v->data; }\n\n/* Return the number of elements in the vector. */\nstatic inline size_t vecSize(const vec *v) { return v->size; }\n\n/* Initialize a vector */\nvoid vecInit(vec *v, void **stack, size_t initcap);\n\n/* Set a free method applied to every element on vecRelease.\n * Symmetric to listSetFreeMethod for adlist. */\nstatic inline void vecSetFreeMethod(vec *v, void (*freefn)(void *ptr)) {\n    v->free = freefn;\n}\n\n/* Release storage. If a free method is set, it is applied to every element\n * before the backing storage is released. Stack storage is never freed. */\nvoid vecRelease(vec *v);\n\n/* Reset the logical length to zero while preserving allocated storage. */\nvoid vecClear(vec *v);\n\n/* Requires index < vecSize(v). */\nvoid *vecGet(const vec *v, size_t index);\n\n/* Ensure capacity is at least mincap. */\nvoid vecReserve(vec *v, size_t mincap);\n\n/* Append one element, growing storage as needed. */\nvoid vecPush(vec *v, void *value);\n\n#ifdef REDIS_TEST\nint vectorTest(int argc, char **argv, int flags);\n#endif\n\n#endif /* REDIS_VECTOR_H */\n"
  },
  {
    "path": "src/version.h",
    "content": "#define REDIS_VERSION \"255.255.255\"\n#define REDIS_VERSION_NUM 0x00ffffff\n"
  },
  {
    "path": "src/ziplist.c",
    "content": "/* The ziplist is a specially encoded dually linked list that is designed\n * to be very memory efficient. It stores both strings and integer values,\n * where integers are encoded as actual integers instead of a series of\n * characters. It allows push and pop operations on either side of the list\n * in O(1) time. However, because every operation requires a reallocation of\n * the memory used by the ziplist, the actual complexity is related to the\n * amount of memory used by the ziplist.\n *\n * ----------------------------------------------------------------------------\n *\n * ZIPLIST OVERALL LAYOUT\n * ======================\n *\n * The general layout of the ziplist is as follows:\n *\n * <zlbytes> <zltail> <zllen> <entry> <entry> ... <entry> <zlend>\n *\n * NOTE: all fields are stored in little endian, if not specified otherwise.\n *\n * <uint32_t zlbytes> is an unsigned integer to hold the number of bytes that\n * the ziplist occupies, including the four bytes of the zlbytes field itself.\n * This value needs to be stored to be able to resize the entire structure\n * without the need to traverse it first.\n *\n * <uint32_t zltail> is the offset to the last entry in the list. This allows\n * a pop operation on the far side of the list without the need for full\n * traversal.\n *\n * <uint16_t zllen> is the number of entries. When there are more than\n * 2^16-2 entries, this value is set to 2^16-1 and we need to traverse the\n * entire list to know how many items it holds.\n *\n * <uint8_t zlend> is a special entry representing the end of the ziplist.\n * Is encoded as a single byte equal to 255. No other normal entry starts\n * with a byte set to the value of 255.\n *\n * ZIPLIST ENTRIES\n * ===============\n *\n * Every entry in the ziplist is prefixed by metadata that contains two pieces\n * of information. First, the length of the previous entry is stored to be\n * able to traverse the list from back to front. 
Second, the entry encoding is\n * provided. It represents the entry type, integer or string, and in the case\n * of strings it also represents the length of the string payload.\n * So a complete entry is stored like this:\n *\n * <prevlen> <encoding> <entry-data>\n *\n * Sometimes the encoding represents the entry itself, like for small integers\n * as we'll see later. In such a case the <entry-data> part is missing, and we\n * could have just:\n *\n * <prevlen> <encoding>\n *\n * The length of the previous entry, <prevlen>, is encoded in the following way:\n * If this length is smaller than 254 bytes, it will only consume a single\n * byte representing the length as an unsigned 8 bit integer. When the length\n * is greater than or equal to 254, it will consume 5 bytes. The first byte is\n * set to 254 (FE) to indicate a larger value is following. The remaining 4\n * bytes take the length of the previous entry as value.\n *\n * So practically an entry is encoded in the following way:\n *\n * <prevlen from 0 to 253> <encoding> <entry>\n *\n * Or alternatively if the previous entry length is greater than 253 bytes\n * the following encoding is used:\n *\n * 0xFE <4 bytes unsigned little endian prevlen> <encoding> <entry>\n *\n * The encoding field of the entry depends on the content of the\n * entry. When the entry is a string, the first 2 bits of the encoding first\n * byte will hold the type of encoding used to store the length of the string,\n * followed by the actual length of the string. When the entry is an integer\n * the first 2 bits are both set to 1. The following 2 bits are used to specify\n * what kind of integer will be stored after this header. An overview of the\n * different types and encodings is as follows. 
The first byte is always enough\n * to determine the kind of entry.\n *\n * |00pppppp| - 1 byte\n *      String value with length less than or equal to 63 bytes (6 bits).\n *      \"pppppp\" represents the unsigned 6 bit length.\n * |01pppppp|qqqqqqqq| - 2 bytes\n *      String value with length less than or equal to 16383 bytes (14 bits).\n *      IMPORTANT: The 14 bit number is stored in big endian.\n * |10000000|qqqqqqqq|rrrrrrrr|ssssssss|tttttttt| - 5 bytes\n *      String value with length greater than or equal to 16384 bytes.\n *      Only the 4 bytes following the first byte represent the length\n *      up to 2^32-1. The 6 lower bits of the first byte are not used and\n *      are set to zero.\n *      IMPORTANT: The 32 bit number is stored in big endian.\n * |11000000| - 3 bytes\n *      Integer encoded as int16_t (2 bytes).\n * |11010000| - 5 bytes\n *      Integer encoded as int32_t (4 bytes).\n * |11100000| - 9 bytes\n *      Integer encoded as int64_t (8 bytes).\n * |11110000| - 4 bytes\n *      Integer encoded as 24 bit signed (3 bytes).\n * |11111110| - 2 bytes\n *      Integer encoded as 8 bit signed (1 byte).\n * |1111xxxx| - (with xxxx between 0001 and 1101) immediate 4 bit integer.\n *      Unsigned integer from 0 to 12. The encoded value is actually from\n *      1 to 13 because 0000 and 1111 cannot be used, so 1 should be\n *      subtracted from the encoded 4 bit value to obtain the right value.\n * |11111111| - End of ziplist special entry.\n *\n * Like for the ziplist header, all the integers are represented in little\n * endian byte order, even when this code is compiled in big endian systems.\n *\n * EXAMPLES OF ACTUAL ZIPLISTS\n * ===========================\n *\n * The following is a ziplist containing the two elements representing\n * the strings \"2\" and \"5\". 
It is composed of 15 bytes, that we visually\n * split into sections:\n *\n *  [0f 00 00 00] [0c 00 00 00] [02 00] [00 f3] [02 f6] [ff]\n *        |             |          |       |       |     |\n *     zlbytes        zltail     zllen    \"2\"     \"5\"   end\n *\n * The first 4 bytes represent the number 15, that is the number of bytes\n * the whole ziplist is composed of. The second 4 bytes are the offset\n * at which the last ziplist entry is found, that is 12, in fact the\n * last entry, that is \"5\", is at offset 12 inside the ziplist.\n * The next 16 bit integer represents the number of elements inside the\n * ziplist, its value is 2 since there are just two elements inside.\n * Finally \"00 f3\" is the first entry representing the number 2. It is\n * composed of the previous entry length, which is zero because this is\n * our first entry, and the byte F3 which corresponds to the encoding\n * |1111xxxx| with xxxx between 0001 and 1101. We need to remove the \"F\"\n * higher order bits 1111, and subtract 1 from the \"3\", so the entry value\n * is \"2\". The next entry has a prevlen of 02, since the first entry is\n * composed of exactly two bytes. The entry itself, F6, is encoded exactly\n * like the first entry, and 6-1 = 5, so the value of the entry is 5.\n * Finally the special entry FF signals the end of the ziplist.\n *\n * Adding another element to the above string with the value \"Hello World\"\n * allows us to show how the ziplist encodes small strings. We'll just show\n * the hex dump of the entry itself. Imagine the bytes as following the\n * entry that stores \"5\" in the ziplist above:\n *\n * [02] [0b] [48 65 6c 6c 6f 20 57 6f 72 6c 64]\n *\n * The first byte, 02, is the length of the previous entry. The next\n * byte represents the encoding in the pattern |00pppppp| that means\n * that the entry is a string of length <pppppp>, so 0B means that\n * an 11 bytes string follows. 
From the third byte (48) to the last (64)\n * there are just the ASCII characters for \"Hello World\".\n *\n * ----------------------------------------------------------------------------\n *\n * Copyright (c) 2009-2012, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2009-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <stdint.h>\n#include <limits.h>\n#include \"zmalloc.h\"\n#include \"util.h\"\n#include \"ziplist.h\"\n#include \"config.h\"\n#include \"endianconv.h\"\n#include \"redisassert.h\"\n\n#define ZIP_END 255         /* Special \"end of ziplist\" entry. */\n#define ZIP_BIG_PREVLEN 254 /* ZIP_BIG_PREVLEN - 1 is the max number of bytes of\n                               the previous entry, for the \"prevlen\" field prefixing\n                               each entry, to be represented with just a single byte.\n                               Otherwise it is represented as FE AA BB CC DD, where\n                               AA BB CC DD are a 4 bytes unsigned integer\n                               representing the previous entry len. */\n\n/* Different encoding/length possibilities */\n#define ZIP_STR_MASK 0xc0\n#define ZIP_INT_MASK 0x30\n#define ZIP_STR_06B (0 << 6)\n#define ZIP_STR_14B (1 << 6)\n#define ZIP_STR_32B (2 << 6)\n#define ZIP_INT_16B (0xc0 | 0<<4)\n#define ZIP_INT_32B (0xc0 | 1<<4)\n#define ZIP_INT_64B (0xc0 | 2<<4)\n#define ZIP_INT_24B (0xc0 | 3<<4)\n#define ZIP_INT_8B 0xfe\n\n/* 4 bit integer immediate encoding |1111xxxx| with xxxx between\n * 0001 and 1101. */\n#define ZIP_INT_IMM_MASK 0x0f   /* Mask to extract the 4 bit value. Add one\n                                   to reconstruct the actual value. 
*/\n#define ZIP_INT_IMM_MIN 0xf1    /* 11110001 */\n#define ZIP_INT_IMM_MAX 0xfd    /* 11111101 */\n\n#define INT24_MAX 0x7fffff\n#define INT24_MIN (-INT24_MAX - 1)\n\n/* Macro to determine if the entry is a string. String entries never start\n * with \"11\" as most significant bits of the first byte. */\n#define ZIP_IS_STR(enc) (((enc) & ZIP_STR_MASK) < ZIP_STR_MASK)\n\n/* Utility macros.*/\n\n/* Return total bytes a ziplist is composed of. */\n#define ZIPLIST_BYTES(zl)       (*((uint32_t*)(zl)))\n\n/* Return the offset of the last item inside the ziplist. */\n#define ZIPLIST_TAIL_OFFSET(zl) (*((uint32_t*)((zl)+sizeof(uint32_t))))\n\n/* Return the length of a ziplist, or UINT16_MAX if the length cannot be\n * determined without scanning the whole ziplist. */\n#define ZIPLIST_LENGTH(zl)      (*((uint16_t*)((zl)+sizeof(uint32_t)*2)))\n\n/* The size of a ziplist header: two 32 bit integers for the total\n * bytes count and last item offset. One 16 bit integer for the number\n * of items field. */\n#define ZIPLIST_HEADER_SIZE     (sizeof(uint32_t)*2+sizeof(uint16_t))\n\n/* Size of the \"end of ziplist\" entry. Just one byte. */\n#define ZIPLIST_END_SIZE        (sizeof(uint8_t))\n\n/* Return the pointer to the first entry of a ziplist. */\n#define ZIPLIST_ENTRY_HEAD(zl)  ((zl)+ZIPLIST_HEADER_SIZE)\n\n/* Return the pointer to the last entry of a ziplist, using the\n * last entry offset inside the ziplist header. */\n#define ZIPLIST_ENTRY_TAIL(zl)  ((zl)+intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl)))\n\n/* Return the pointer to the last byte of a ziplist, which is, the\n * end of ziplist FF entry. */\n#define ZIPLIST_ENTRY_END(zl)   ((zl)+intrev32ifbe(ZIPLIST_BYTES(zl))-ZIPLIST_END_SIZE)\n\n/* Increment the number of items field in the ziplist header. Note that this\n * macro should never overflow the unsigned 16 bit integer, since entries are\n * always pushed one at a time. 
When UINT16_MAX is reached we want the count\n * to stay there to signal that a full scan is needed to get the number of\n * items inside the ziplist. */\n#define ZIPLIST_INCR_LENGTH(zl,incr) { \\\n    if (intrev16ifbe(ZIPLIST_LENGTH(zl)) < UINT16_MAX) \\\n        ZIPLIST_LENGTH(zl) = intrev16ifbe(intrev16ifbe(ZIPLIST_LENGTH(zl))+incr); \\\n}\n\n/* Don't let ziplists grow over 1GB in any case; we don't want to risk\n * overflowing zlbytes. */\n#define ZIPLIST_MAX_SAFETY_SIZE (1<<30)\nint ziplistSafeToAdd(unsigned char* zl, size_t add) {\n    size_t len = zl? ziplistBlobLen(zl): 0;\n    if (len + add > ZIPLIST_MAX_SAFETY_SIZE)\n        return 0;\n    return 1;\n}\n\n\n/* We use this structure to receive information about a ziplist entry.\n * Note that this is not how the data is actually encoded; it is just a\n * decoded view that a helper function fills in so we can operate on the\n * entry more easily. */\ntypedef struct zlentry {\n    unsigned int prevrawlensize; /* Bytes used to encode the previous entry len*/\n    unsigned int prevrawlen;     /* Previous entry len. */\n    unsigned int lensize;        /* Bytes used to encode this entry type/len.\n                                    For example strings have a 1, 2 or 5 bytes\n                                    header. Integers always use a single byte.*/\n    unsigned int len;            /* Bytes used to represent the actual entry.\n                                    For strings this is just the string length\n                                    while for integers it is 1, 2, 3, 4, 8 or\n                                    0 (for 4 bit immediate) depending on the\n                                    number range. */\n    unsigned int headersize;     /* prevrawlensize + lensize. */\n    unsigned char encoding;      /* Set to ZIP_STR_* or ZIP_INT_* depending on\n                                    the entry encoding. 
However for 4 bits\n                                    immediate integers this can assume a range\n                                    of values and must be range-checked. */\n    unsigned char *p;            /* Pointer to the very start of the entry, that\n                                    is, this points to prev-entry-len field. */\n} zlentry;\n\n#define ZIPLIST_ENTRY_ZERO(zle) { \\\n    (zle)->prevrawlensize = (zle)->prevrawlen = 0; \\\n    (zle)->lensize = (zle)->len = (zle)->headersize = 0; \\\n    (zle)->encoding = 0; \\\n    (zle)->p = NULL; \\\n}\n\n/* Extract the encoding from the byte pointed by 'ptr' and set it into\n * 'encoding' field of the zlentry structure. */\n#define ZIP_ENTRY_ENCODING(ptr, encoding) do {  \\\n    (encoding) = ((ptr)[0]); \\\n    if ((encoding) < ZIP_STR_MASK) (encoding) &= ZIP_STR_MASK; \\\n} while(0)\n\n#define ZIP_ENCODING_SIZE_INVALID 0xff\n/* Return the number of bytes required to encode the entry type + length.\n * On error, return ZIP_ENCODING_SIZE_INVALID */\nstatic inline unsigned int zipEncodingLenSize(unsigned char encoding) {\n    if (encoding == ZIP_INT_16B || encoding == ZIP_INT_32B ||\n        encoding == ZIP_INT_24B || encoding == ZIP_INT_64B ||\n        encoding == ZIP_INT_8B)\n        return 1;\n    if (encoding >= ZIP_INT_IMM_MIN && encoding <= ZIP_INT_IMM_MAX)\n        return 1;\n    if (encoding == ZIP_STR_06B)\n        return 1;\n    if (encoding == ZIP_STR_14B)\n        return 2;\n    if (encoding == ZIP_STR_32B)\n        return 5;\n    return ZIP_ENCODING_SIZE_INVALID;\n}\n\n#define ZIP_ASSERT_ENCODING(encoding) do {                                     \\\n    assert(zipEncodingLenSize(encoding) != ZIP_ENCODING_SIZE_INVALID);         \\\n} while (0)\n\n/* Return bytes needed to store integer encoded by 'encoding' */\nstatic inline unsigned int zipIntSize(unsigned char encoding) {\n    switch(encoding) {\n    case ZIP_INT_8B:  return 1;\n    case ZIP_INT_16B: return 2;\n    case ZIP_INT_24B: return 3;\n   
 case ZIP_INT_32B: return 4;\n    case ZIP_INT_64B: return 8;\n    }\n    if (encoding >= ZIP_INT_IMM_MIN && encoding <= ZIP_INT_IMM_MAX)\n        return 0; /* 4 bit immediate */\n    /* bad encoding, covered by a previous call to ZIP_ASSERT_ENCODING */\n    redis_unreachable();\n    return 0;\n}\n\n/* Write the encoding header of the entry in 'p'. If p is NULL it just returns\n * the amount of bytes required to encode such a length. Arguments:\n *\n * 'encoding' is the encoding we are using for the entry. It could be\n * ZIP_INT_* or ZIP_STR_* or between ZIP_INT_IMM_MIN and ZIP_INT_IMM_MAX\n * for single-byte small immediate integers.\n *\n * 'rawlen' is only used for ZIP_STR_* encodings and is the length of the\n * string that this entry represents.\n *\n * The function returns the number of bytes used by the encoding/length\n * header stored in 'p'. */\nunsigned int zipStoreEntryEncoding(unsigned char *p, unsigned char encoding, unsigned int rawlen) {\n    unsigned char len = 1, buf[5];\n\n    if (ZIP_IS_STR(encoding)) {\n        /* Although encoding is given it may not be set for strings,\n         * so we determine it here using the raw length. */\n        if (rawlen <= 0x3f) {\n            if (!p) return len;\n            buf[0] = ZIP_STR_06B | rawlen;\n        } else if (rawlen <= 0x3fff) {\n            len += 1;\n            if (!p) return len;\n            buf[0] = ZIP_STR_14B | ((rawlen >> 8) & 0x3f);\n            buf[1] = rawlen & 0xff;\n        } else {\n            len += 4;\n            if (!p) return len;\n            buf[0] = ZIP_STR_32B;\n            buf[1] = (rawlen >> 24) & 0xff;\n            buf[2] = (rawlen >> 16) & 0xff;\n            buf[3] = (rawlen >> 8) & 0xff;\n            buf[4] = rawlen & 0xff;\n        }\n    } else {\n        /* Implies integer encoding, so length is always 1. */\n        if (!p) return len;\n        buf[0] = encoding;\n    }\n\n    /* Store this length at p. 
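As a worked example (illustrative): for a 100 byte string, rawlen <= 0x3fff, so the header is the two bytes 0x40 0x64, i.e. |01pppppp|qqqqqqqq| carrying the big endian 14 bit length 100. 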
*/\n    memcpy(p,buf,len);\n    return len;\n}\n\n/* Decode the entry encoding type and data length (string length for strings,\n * number of bytes used for the integer for integer entries) encoded in 'ptr'.\n * The 'encoding' variable is input, extracted by the caller, the 'lensize'\n * variable will hold the number of bytes required to encode the entry\n * length, and the 'len' variable will hold the entry length.\n * On invalid encoding error, lensize is set to 0. */\n#define ZIP_DECODE_LENGTH(ptr, encoding, lensize, len) do {                    \\\n    if ((encoding) < ZIP_STR_MASK) {                                           \\\n        if ((encoding) == ZIP_STR_06B) {                                       \\\n            (lensize) = 1;                                                     \\\n            (len) = (ptr)[0] & 0x3f;                                           \\\n        } else if ((encoding) == ZIP_STR_14B) {                                \\\n            (lensize) = 2;                                                     \\\n            (len) = (((ptr)[0] & 0x3f) << 8) | (ptr)[1];                       \\\n        } else if ((encoding) == ZIP_STR_32B) {                                \\\n            (lensize) = 5;                                                     \\\n            (len) = ((uint32_t)(ptr)[1] << 24) |                               \\\n                    ((uint32_t)(ptr)[2] << 16) |                               \\\n                    ((uint32_t)(ptr)[3] <<  8) |                               \\\n                    ((uint32_t)(ptr)[4]);                                      \\\n        } else {                                                               \\\n            (lensize) = 0; /* bad encoding, should be covered by a previous */ \\\n            (len) = 0;     /* ZIP_ASSERT_ENCODING / zipEncodingLenSize, or  */ \\\n                           /* match the lensize after this macro with 0.    
*/ \\\n        }                                                                      \\\n    } else {                                                                   \\\n        (lensize) = 1;                                                         \\\n        if ((encoding) == ZIP_INT_8B)  (len) = 1;                              \\\n        else if ((encoding) == ZIP_INT_16B) (len) = 2;                         \\\n        else if ((encoding) == ZIP_INT_24B) (len) = 3;                         \\\n        else if ((encoding) == ZIP_INT_32B) (len) = 4;                         \\\n        else if ((encoding) == ZIP_INT_64B) (len) = 8;                         \\\n        else if (encoding >= ZIP_INT_IMM_MIN && encoding <= ZIP_INT_IMM_MAX)   \\\n            (len) = 0; /* 4 bit immediate */                                   \\\n        else                                                                   \\\n            (lensize) = (len) = 0; /* bad encoding */                          \\\n    }                                                                          \\\n} while(0)\n\n/* Encode the length of the previous entry and write it to \"p\". This only\n * uses the larger encoding (required in __ziplistCascadeUpdate). */\nint zipStorePrevEntryLengthLarge(unsigned char *p, unsigned int len) {\n    uint32_t u32;\n    if (p != NULL) {\n        p[0] = ZIP_BIG_PREVLEN;\n        u32 = len;\n        memcpy(p+1,&u32,sizeof(u32));\n        memrev32ifbe(p+1);\n    }\n    return 1 + sizeof(uint32_t);\n}\n\n/* Encode the length of the previous entry and write it to \"p\". Return the\n * number of bytes needed to encode this length if \"p\" is NULL. */\nunsigned int zipStorePrevEntryLength(unsigned char *p, unsigned int len) {\n    if (p == NULL) {\n        return (len < ZIP_BIG_PREVLEN) ? 
1 : sizeof(uint32_t) + 1;\n    } else {\n        if (len < ZIP_BIG_PREVLEN) {\n            p[0] = len;\n            return 1;\n        } else {\n            return zipStorePrevEntryLengthLarge(p,len);\n        }\n    }\n}\n\n/* Return the number of bytes used to encode the length of the previous\n * entry. The length is returned by setting the var 'prevlensize'. */\n#define ZIP_DECODE_PREVLENSIZE(ptr, prevlensize) do {                          \\\n    if ((ptr)[0] < ZIP_BIG_PREVLEN) {                                          \\\n        (prevlensize) = 1;                                                     \\\n    } else {                                                                   \\\n        (prevlensize) = 5;                                                     \\\n    }                                                                          \\\n} while(0)\n\n/* Return the length of the previous element, and the number of bytes that\n * are used in order to encode the previous element length.\n * 'ptr' must point to the prevlen prefix of an entry (that encodes the\n * length of the previous entry in order to navigate the elements backward).\n * The length of the previous entry is stored in 'prevlen', the number of\n * bytes needed to encode the previous entry length are stored in\n * 'prevlensize'. 
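 *
 * As a standalone sketch of this prevlen encoding (illustrative only, not
 * the actual macros; 'BIG_PREVLEN' stands in for ZIP_BIG_PREVLEN, assumed
 * to be 254, and the 4-byte length is kept in host byte order for brevity,
 * whereas the real code byte-swaps on big-endian hosts):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BIG_PREVLEN 254  // stand-in for ZIP_BIG_PREVLEN (0xFE)

// Write a prevlen field for 'len' into 'buf'; return the bytes used.
static unsigned prevlen_encode(unsigned char *buf, uint32_t len) {
    if (len < BIG_PREVLEN) {
        buf[0] = (unsigned char)len;   // small length: one literal byte
        return 1;
    }
    buf[0] = BIG_PREVLEN;              // marker: real length in next 4 bytes
    memcpy(buf + 1, &len, sizeof(len));
    return 1 + sizeof(uint32_t);
}

// Read the field back; 'size' receives the bytes it occupied.
static uint32_t prevlen_decode(const unsigned char *buf, unsigned *size) {
    if (buf[0] < BIG_PREVLEN) {
        *size = 1;
        return buf[0];
    }
    uint32_t len;
    memcpy(&len, buf + 1, sizeof(len));
    *size = 1 + sizeof(uint32_t);
    return len;
}
```

 * A length below 254 round-trips through a single byte; anything larger
 * takes the marker byte plus four bytes, which is why growing a previous
 * entry can cost exactly 4 extra bytes here.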
*/\n#define ZIP_DECODE_PREVLEN(ptr, prevlensize, prevlen) do {                     \\\n    ZIP_DECODE_PREVLENSIZE(ptr, prevlensize);                                  \\\n    if ((prevlensize) == 1) {                                                  \\\n        (prevlen) = (ptr)[0];                                                  \\\n    } else { /* prevlensize == 5 */                                            \\\n        (prevlen) = ((ptr)[4] << 24) |                                         \\\n                    ((ptr)[3] << 16) |                                         \\\n                    ((ptr)[2] <<  8) |                                         \\\n                    ((ptr)[1]);                                                \\\n    }                                                                          \\\n} while(0)\n\n/* Given a pointer 'p' to the prevlen info that prefixes an entry, this\n * function returns the difference in number of bytes needed to encode\n * the prevlen if the previous entry changes of size.\n *\n * So if A is the number of bytes used right now to encode the 'prevlen'\n * field.\n *\n * And B is the number of bytes that are needed in order to encode the\n * 'prevlen' if the previous element will be updated to one of size 'len'.\n *\n * Then the function returns B - A\n *\n * So the function returns a positive number if more space is needed,\n * a negative number if less space is needed, or zero if the same space\n * is needed. */\nint zipPrevLenByteDiff(unsigned char *p, unsigned int len) {\n    unsigned int prevlensize;\n    ZIP_DECODE_PREVLENSIZE(p, prevlensize);\n    return zipStorePrevEntryLength(NULL, len) - prevlensize;\n}\n\n/* Check if string pointed to by 'entry' can be encoded as an integer.\n * Stores the integer value in 'v' and its encoding in 'encoding'. 
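 *
 * The size ladder used below can be sketched in isolation (illustrative
 * only; 'int_payload_bytes' is a hypothetical helper, and the 24-bit
 * bounds are spelled out here since this file takes them from a header):

```c
#include <assert.h>
#include <stdint.h>

#define INT24_MAX ((1 << 23) - 1)   // illustrative 24-bit signed range
#define INT24_MIN (-(1 << 23))

// Payload bytes the encoder below would pick for an integer value
// (0 means a 4-bit "immediate" stored inside the encoding byte itself).
static int int_payload_bytes(long long v) {
    if (v >= 0 && v <= 12) return 0;
    if (v >= INT8_MIN && v <= INT8_MAX) return 1;
    if (v >= INT16_MIN && v <= INT16_MAX) return 2;
    if (v >= INT24_MIN && v <= INT24_MAX) return 3;
    if (v >= INT32_MIN && v <= INT32_MAX) return 4;
    return 8;
}
```

 * Values 0..12 cost no payload bytes at all: they live in the encoding
 * byte itself, which is what the immediate encodings below express.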
*/\nint zipTryEncoding(unsigned char *entry, unsigned int entrylen, long long *v, unsigned char *encoding) {\n    long long value;\n\n    if (entrylen >= 32 || entrylen == 0) return 0;\n    if (string2ll((char*)entry,entrylen,&value)) {\n        /* Great, the string can be encoded. Check what's the smallest\n         * of our encoding types that can hold this value. */\n        if (value >= 0 && value <= 12) {\n            *encoding = ZIP_INT_IMM_MIN+value;\n        } else if (value >= INT8_MIN && value <= INT8_MAX) {\n            *encoding = ZIP_INT_8B;\n        } else if (value >= INT16_MIN && value <= INT16_MAX) {\n            *encoding = ZIP_INT_16B;\n        } else if (value >= INT24_MIN && value <= INT24_MAX) {\n            *encoding = ZIP_INT_24B;\n        } else if (value >= INT32_MIN && value <= INT32_MAX) {\n            *encoding = ZIP_INT_32B;\n        } else {\n            *encoding = ZIP_INT_64B;\n        }\n        *v = value;\n        return 1;\n    }\n    return 0;\n}\n\n/* Store integer 'value' at 'p', encoded as 'encoding' */\nvoid zipSaveInteger(unsigned char *p, int64_t value, unsigned char encoding) {\n    int16_t i16;\n    int32_t i32;\n    int64_t i64;\n    if (encoding == ZIP_INT_8B) {\n        ((int8_t*)p)[0] = (int8_t)value;\n    } else if (encoding == ZIP_INT_16B) {\n        i16 = value;\n        memcpy(p,&i16,sizeof(i16));\n        memrev16ifbe(p);\n    } else if (encoding == ZIP_INT_24B) {\n        i32 = ((uint64_t)value)<<8;\n        memrev32ifbe(&i32);\n        memcpy(p,((uint8_t*)&i32)+1,sizeof(i32)-sizeof(uint8_t));\n    } else if (encoding == ZIP_INT_32B) {\n        i32 = value;\n        memcpy(p,&i32,sizeof(i32));\n        memrev32ifbe(p);\n    } else if (encoding == ZIP_INT_64B) {\n        i64 = value;\n        memcpy(p,&i64,sizeof(i64));\n        memrev64ifbe(p);\n    } else if (encoding >= ZIP_INT_IMM_MIN && encoding <= ZIP_INT_IMM_MAX) {\n        /* Nothing to do, the value is stored in the encoding itself. 
*/\n    } else {\n        assert(NULL);\n    }\n}\n\n/* Read integer encoded as 'encoding' from 'p' */\nint64_t zipLoadInteger(unsigned char *p, unsigned char encoding) {\n    int16_t i16;\n    int32_t i32;\n    int64_t i64, ret = 0;\n    if (encoding == ZIP_INT_8B) {\n        ret = ((int8_t*)p)[0];\n    } else if (encoding == ZIP_INT_16B) {\n        memcpy(&i16,p,sizeof(i16));\n        memrev16ifbe(&i16);\n        ret = i16;\n    } else if (encoding == ZIP_INT_32B) {\n        memcpy(&i32,p,sizeof(i32));\n        memrev32ifbe(&i32);\n        ret = i32;\n    } else if (encoding == ZIP_INT_24B) {\n        i32 = 0;\n        memcpy(((uint8_t*)&i32)+1,p,sizeof(i32)-sizeof(uint8_t));\n        memrev32ifbe(&i32);\n        ret = i32>>8;\n    } else if (encoding == ZIP_INT_64B) {\n        memcpy(&i64,p,sizeof(i64));\n        memrev64ifbe(&i64);\n        ret = i64;\n    } else if (encoding >= ZIP_INT_IMM_MIN && encoding <= ZIP_INT_IMM_MAX) {\n        ret = (encoding & ZIP_INT_IMM_MASK)-1;\n    } else {\n        assert(NULL);\n    }\n    return ret;\n}\n\n/* Fills a struct with all information about an entry.\n * This function is the \"unsafe\" alternative to the one below.\n * Generally, all functions that return a pointer to an element in the ziplist\n * will assert that this element is valid, so it can be freely used.\n * Generally, functions such as ziplistGet assume the input pointer is already\n * validated (since it's the return value of another function). */\nstatic inline void zipEntry(unsigned char *p, zlentry *e) {\n    ZIP_DECODE_PREVLEN(p, e->prevrawlensize, e->prevrawlen);\n    ZIP_ENTRY_ENCODING(p + e->prevrawlensize, e->encoding);\n    ZIP_DECODE_LENGTH(p + e->prevrawlensize, e->encoding, e->lensize, e->len);\n    assert(e->lensize != 0); /* check that encoding was valid. 
*/\n    e->headersize = e->prevrawlensize + e->lensize;\n    e->p = p;\n}\n\n/* Fills a struct with all information about an entry.\n * This function is safe to use on untrusted pointers, it'll make sure not to\n * try to access memory outside the ziplist payload.\n * Returns 1 if the entry is valid, and 0 otherwise. */\nstatic inline int zipEntrySafe(unsigned char* zl, size_t zlbytes, unsigned char *p, zlentry *e, int validate_prevlen) {\n    unsigned char *zlfirst = zl + ZIPLIST_HEADER_SIZE;\n    unsigned char *zllast = zl + zlbytes - ZIPLIST_END_SIZE;\n#define OUT_OF_RANGE(p) (unlikely((p) < zlfirst || (p) > zllast))\n\n    /* If there's no possibility for the header to reach outside the ziplist,\n     * take the fast path. (max lensize and prevrawlensize are both 5 bytes) */\n    if (p >= zlfirst && p + 10 < zllast) {\n        ZIP_DECODE_PREVLEN(p, e->prevrawlensize, e->prevrawlen);\n        ZIP_ENTRY_ENCODING(p + e->prevrawlensize, e->encoding);\n        ZIP_DECODE_LENGTH(p + e->prevrawlensize, e->encoding, e->lensize, e->len);\n        e->headersize = e->prevrawlensize + e->lensize;\n        e->p = p;\n        /* We didn't call ZIP_ASSERT_ENCODING, so we check lensize was set to 0. 
*/\n        if (unlikely(e->lensize == 0))\n            return 0;\n        /* Make sure the entry doesn't reach outside the edge of the ziplist */\n        if (OUT_OF_RANGE(p + e->headersize + e->len))\n            return 0;\n        /* Make sure prevlen doesn't reach outside the edge of the ziplist */\n        if (validate_prevlen && OUT_OF_RANGE(p - e->prevrawlen))\n            return 0;\n        return 1;\n    }\n\n    /* Make sure the pointer doesn't reach outside the edge of the ziplist */\n    if (OUT_OF_RANGE(p))\n        return 0;\n\n    /* Make sure the encoded prevlen header doesn't reach outside the allocation */\n    ZIP_DECODE_PREVLENSIZE(p, e->prevrawlensize);\n    if (OUT_OF_RANGE(p + e->prevrawlensize))\n        return 0;\n\n    /* Make sure encoded entry header is valid. */\n    ZIP_ENTRY_ENCODING(p + e->prevrawlensize, e->encoding);\n    e->lensize = zipEncodingLenSize(e->encoding);\n    if (unlikely(e->lensize == ZIP_ENCODING_SIZE_INVALID))\n        return 0;\n\n    /* Make sure the encoded entry header doesn't reach outside the allocation */\n    if (OUT_OF_RANGE(p + e->prevrawlensize + e->lensize))\n        return 0;\n\n    /* Decode the prevlen and entry len headers. */\n    ZIP_DECODE_PREVLEN(p, e->prevrawlensize, e->prevrawlen);\n    ZIP_DECODE_LENGTH(p + e->prevrawlensize, e->encoding, e->lensize, e->len);\n    e->headersize = e->prevrawlensize + e->lensize;\n\n    /* Make sure the entry doesn't reach outside the edge of the ziplist */\n    if (OUT_OF_RANGE(p + e->headersize + e->len))\n        return 0;\n\n    /* Make sure prevlen doesn't reach outside the edge of the ziplist */\n    if (validate_prevlen && OUT_OF_RANGE(p - e->prevrawlen))\n        return 0;\n\n    e->p = p;\n    return 1;\n#undef OUT_OF_RANGE\n}\n\n/* Return the total number of bytes used by the entry pointed to by 'p'. 
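 *
 * That total is headersize (prevlensize + lensize) plus the payload. A
 * rough standalone sketch for string entries ('str_entry_bytes' is a
 * hypothetical helper; the thresholds follow the string encodings above,
 * with a 1-byte prevlen assumed below 254):

```c
#include <assert.h>

// Total bytes a string entry occupies: prevlen field + length encoding
// + payload. Length encoding: 1 byte up to 63 (06B), 2 bytes up to
// 16383 (14B), otherwise 5 bytes (32B). Illustrative only.
static unsigned str_entry_bytes(unsigned prevlen, unsigned slen) {
    unsigned prevlensize = (prevlen < 254) ? 1 : 5;
    unsigned lensize = (slen <= 0x3f) ? 1 : (slen <= 0x3fff ? 2 : 5);
    return prevlensize + lensize + slen;
}
```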
*/\nstatic inline unsigned int zipRawEntryLengthSafe(unsigned char* zl, size_t zlbytes, unsigned char *p) {\n    zlentry e;\n    assert(zipEntrySafe(zl, zlbytes, p, &e, 0));\n    return e.headersize + e.len;\n}\n\n/* Return the total number of bytes used by the entry pointed to by 'p'. */\nstatic inline unsigned int zipRawEntryLength(unsigned char *p) {\n    zlentry e;\n    zipEntry(p, &e);\n    return e.headersize + e.len;\n}\n\n/* Validate that the entry doesn't reach outside the ziplist allocation. */\nstatic inline void zipAssertValidEntry(unsigned char* zl, size_t zlbytes, unsigned char *p) {\n    zlentry e;\n    assert(zipEntrySafe(zl, zlbytes, p, &e, 1));\n}\n\n/* Create a new empty ziplist. */\nunsigned char *ziplistNew(void) {\n    unsigned int bytes = ZIPLIST_HEADER_SIZE+ZIPLIST_END_SIZE;\n    unsigned char *zl = zmalloc(bytes);\n    ZIPLIST_BYTES(zl) = intrev32ifbe(bytes);\n    ZIPLIST_TAIL_OFFSET(zl) = intrev32ifbe(ZIPLIST_HEADER_SIZE);\n    ZIPLIST_LENGTH(zl) = 0;\n    zl[bytes-1] = ZIP_END;\n    return zl;\n}\n\n/* Resize the ziplist. */\nunsigned char *ziplistResize(unsigned char *zl, size_t len) {\n    assert(len < UINT32_MAX);\n    zl = zrealloc(zl,len);\n    ZIPLIST_BYTES(zl) = intrev32ifbe(len);\n    zl[len-1] = ZIP_END;\n    return zl;\n}\n\n/* When an entry is inserted, we need to set the prevlen field of the next\n * entry to equal the length of the inserted entry. It can occur that this\n * length cannot be encoded in 1 byte and the next entry needs to grow\n * a bit larger to hold the 5-byte encoded prevlen. This can be done for free,\n * because this only happens when an entry is already being inserted (which\n * causes a realloc and memmove). However, encoding the prevlen may require\n * that this entry is grown as well. 
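 *
 * The growth step can be illustrated in isolation (a hypothetical sketch,
 * not this file's code; 254 stands in for ZIP_BIG_PREVLEN, and every entry
 * that has to grow gets 4 bytes bigger, possibly pushing the next entry's
 * prevlen over the 1-byte limit in turn):

```c
#include <assert.h>

#define BIG 254  // stand-in for ZIP_BIG_PREVLEN

static int prevlen_size(unsigned len) { return len < BIG ? 1 : 5; }

// Given entry sizes (header + payload) whose prevlen fields currently all
// fit in one byte, count how many entries must grow when the entry before
// the first one reaches 'newprev' bytes. Illustrative only.
static int cascade_count(const unsigned *rawlen, int n, unsigned newprev) {
    int grown = 0;
    unsigned prev = newprev;
    for (int i = 0; i < n; i++) {
        if (prevlen_size(prev) == 1) break;  // prevlen still fits: stop
        prev = rawlen[i] + 4;                // prevlen grows 1 -> 5 bytes
        grown++;
    }
    return grown;
}
```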
This effect may cascade throughout\n * the ziplist when there are consecutive entries with a size close to\n * ZIP_BIG_PREVLEN, so we need to check that the prevlen can be encoded in\n * every consecutive entry.\n *\n * Note that this effect can also happen in reverse, where the bytes required\n * to encode the prevlen field can shrink. This effect is deliberately ignored,\n * because it can cause a \"flapping\" effect where a chain of prevlen fields is\n * first grown and then shrunk again after consecutive inserts. Rather, the\n * field is allowed to stay larger than necessary, because a large prevlen\n * field implies the ziplist is holding large entries anyway.\n *\n * The pointer \"p\" points to the first entry that does NOT need to be\n * updated, i.e. consecutive fields MAY need an update. */\nunsigned char *__ziplistCascadeUpdate(unsigned char *zl, unsigned char *p) {\n    zlentry cur;\n    size_t prevlen, prevlensize, prevoffset; /* Information about the last changed entry. */\n    size_t firstentrylen; /* Used to handle insert at head. */\n    size_t rawlen, curlen = intrev32ifbe(ZIPLIST_BYTES(zl));\n    size_t extra = 0, cnt = 0, offset;\n    size_t delta = 4; /* Extra bytes needed to update an entry's prevlen (5-1). */\n    unsigned char *tail = zl + intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl));\n\n    /* Empty ziplist */\n    if (p[0] == ZIP_END) return zl;\n\n    zipEntry(p, &cur); /* no need for \"safe\" variant since the input pointer was validated by the function that returned it. */\n    firstentrylen = prevlen = cur.headersize + cur.len;\n    prevlensize = zipStorePrevEntryLength(NULL, prevlen);\n    prevoffset = p - zl;\n    p += prevlen;\n\n    /* Iterate the ziplist to find out how many extra bytes we need to update it. */\n    while (p[0] != ZIP_END) {\n        assert(zipEntrySafe(zl, curlen, p, &cur, 0));\n\n        /* Abort when \"prevlen\" has not changed. 
*/\n        if (cur.prevrawlen == prevlen) break;\n\n        /* Abort when entry's \"prevlensize\" is big enough. */\n        if (cur.prevrawlensize >= prevlensize) {\n            if (cur.prevrawlensize == prevlensize) {\n                zipStorePrevEntryLength(p, prevlen);\n            } else {\n                /* This would result in shrinking, which we want to avoid.\n                 * So, set \"prevlen\" in the available bytes. */\n                zipStorePrevEntryLengthLarge(p, prevlen);\n            }\n            break;\n        }\n\n        /* cur.prevrawlen == 0 means cur is the former head entry. */\n        assert(cur.prevrawlen == 0 || cur.prevrawlen + delta == prevlen);\n\n        /* Update prev entry's info and advance the cursor. */\n        rawlen = cur.headersize + cur.len;\n        prevlen = rawlen + delta; \n        prevlensize = zipStorePrevEntryLength(NULL, prevlen);\n        prevoffset = p - zl;\n        p += rawlen;\n        extra += delta;\n        cnt++;\n    }\n\n    /* When extra is zero, all updates have been done (or no update was needed). */\n    if (extra == 0) return zl;\n\n    /* Update tail offset after loop. */\n    if (tail == zl + prevoffset) {\n        /* When the last entry we need to update is also the tail, update tail offset\n         * unless this is the only entry that was updated (so the tail offset didn't change). */\n        if (extra - delta != 0) {\n            ZIPLIST_TAIL_OFFSET(zl) =\n                intrev32ifbe(intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl))+extra-delta);\n        }\n    } else {\n        /* Update the tail offset in cases where the last entry we updated is not the tail. */\n        ZIPLIST_TAIL_OFFSET(zl) =\n            intrev32ifbe(intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl))+extra);\n    }\n\n    /* Now \"p\" points at the first unchanged byte in the original ziplist,\n     * so move the data after it into the new ziplist. 
*/\n    offset = p - zl;\n    zl = ziplistResize(zl, curlen + extra);\n    p = zl + offset;\n    memmove(p + extra, p, curlen - offset - 1);\n    p += extra;\n\n    /* Iterate all entries that need to be updated tail to head. */\n    while (cnt) {\n        zipEntry(zl + prevoffset, &cur); /* no need for \"safe\" variant since we already iterated on all these entries above. */\n        rawlen = cur.headersize + cur.len;\n        /* Move entry to tail and reset prevlen. */\n        memmove(p - (rawlen - cur.prevrawlensize), \n                zl + prevoffset + cur.prevrawlensize, \n                rawlen - cur.prevrawlensize);\n        p -= (rawlen + delta);\n        if (cur.prevrawlen == 0) {\n            /* \"cur\" is the previous head entry, update its prevlen with firstentrylen. */\n            zipStorePrevEntryLength(p, firstentrylen);\n        } else {\n            /* An entry's prevlen can only increment 4 bytes. */\n            zipStorePrevEntryLength(p, cur.prevrawlen+delta);\n        }\n        /* Forward to previous entry. */\n        prevoffset -= cur.prevrawlen;\n        cnt--;\n    }\n    return zl;\n}\n\n/* Delete \"num\" entries, starting at \"p\". Returns pointer to the ziplist. */\nunsigned char *__ziplistDelete(unsigned char *zl, unsigned char *p, unsigned int num) {\n    unsigned int i, totlen, deleted = 0;\n    size_t offset;\n    int nextdiff = 0;\n    zlentry first, tail;\n    size_t zlbytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n\n    zipEntry(p, &first); /* no need for \"safe\" variant since the input pointer was validated by the function that returned it. */\n    for (i = 0; p[0] != ZIP_END && i < num; i++) {\n        p += zipRawEntryLengthSafe(zl, zlbytes, p);\n        deleted++;\n    }\n\n    assert(p >= first.p);\n    totlen = p-first.p; /* Bytes taken by the element(s) to delete. 
*/\n    if (totlen > 0) {\n        uint32_t set_tail;\n        if (p[0] != ZIP_END) {\n            /* Storing `prevrawlen` in this entry may increase or decrease the\n             * number of bytes required compared to the current `prevrawlen`.\n             * There always is room to store this, because it was previously\n             * stored by an entry that is now being deleted. */\n            nextdiff = zipPrevLenByteDiff(p,first.prevrawlen);\n\n            /* Note that there is always space when p jumps backward: if\n             * the new previous entry is large, one of the deleted elements\n             * had a 5-byte prevlen header, so there is for sure at least\n             * 5 bytes free and we need just 4. */\n            p -= nextdiff;\n            assert(p >= first.p && p<zl+zlbytes-1);\n            zipStorePrevEntryLength(p,first.prevrawlen);\n\n            /* Update offset for tail */\n            set_tail = intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl))-totlen;\n\n            /* When the tail contains more than one entry, we need to take\n             * \"nextdiff\" into account as well. Otherwise, a change in the\n             * size of prevlen doesn't have an effect on the *tail* offset. */\n            assert(zipEntrySafe(zl, zlbytes, p, &tail, 1));\n            if (p[tail.headersize+tail.len] != ZIP_END) {\n                set_tail = set_tail + nextdiff;\n            }\n\n            /* Move tail to the front of the ziplist */\n            /* Since we asserted that p >= first.p, we know totlen >= 0,\n             * so we know that p > first.p and this is guaranteed not to reach\n             * beyond the allocation, even if the entries' lens are corrupted. */\n            size_t bytes_to_move = zlbytes-(p-zl)-1;\n            memmove(first.p,p,bytes_to_move);\n        } else {\n            /* The entire tail was deleted. No need to move memory. 
*/\n            set_tail = (first.p-zl)-first.prevrawlen;\n        }\n\n        /* Resize the ziplist */\n        offset = first.p-zl;\n        zlbytes -= totlen - nextdiff;\n        zl = ziplistResize(zl, zlbytes);\n        p = zl+offset;\n\n        /* Update record count */\n        ZIPLIST_INCR_LENGTH(zl,-deleted);\n\n        /* Set the tail offset computed above */\n        assert(set_tail <= zlbytes - ZIPLIST_END_SIZE);\n        ZIPLIST_TAIL_OFFSET(zl) = intrev32ifbe(set_tail);\n\n        /* When nextdiff != 0, the raw length of the next entry has changed, so\n         * we need to cascade the update throughout the ziplist */\n        if (nextdiff != 0)\n            zl = __ziplistCascadeUpdate(zl,p);\n    }\n    return zl;\n}\n\n/* Insert item at \"p\". */\nunsigned char *__ziplistInsert(unsigned char *zl, unsigned char *p, unsigned char *s, unsigned int slen) {\n    size_t curlen = intrev32ifbe(ZIPLIST_BYTES(zl)), reqlen, newlen;\n    unsigned int prevlensize, prevlen = 0;\n    size_t offset;\n    int nextdiff = 0;\n    unsigned char encoding = 0;\n    long long value = 123456789; /* initialized to avoid warning. Using a value\n                                    that is easy to see if for some reason\n                                    we use it uninitialized. */\n    zlentry tail;\n\n    /* Find out prevlen for the entry that is inserted. 
*/\n    if (p[0] != ZIP_END) {\n        ZIP_DECODE_PREVLEN(p, prevlensize, prevlen);\n    } else {\n        unsigned char *ptail = ZIPLIST_ENTRY_TAIL(zl);\n        if (ptail[0] != ZIP_END) {\n            prevlen = zipRawEntryLengthSafe(zl, curlen, ptail);\n        }\n    }\n\n    /* See if the entry can be encoded */\n    if (zipTryEncoding(s,slen,&value,&encoding)) {\n        /* 'encoding' is set to the appropriate integer encoding */\n        reqlen = zipIntSize(encoding);\n    } else {\n        /* 'encoding' is untouched, however zipStoreEntryEncoding will use the\n         * string length to figure out how to encode it. */\n        reqlen = slen;\n    }\n    /* We need space for both the length of the previous entry and\n     * the length of the payload. */\n    reqlen += zipStorePrevEntryLength(NULL,prevlen);\n    reqlen += zipStoreEntryEncoding(NULL,encoding,slen);\n\n    /* When the insert position is not equal to the tail, we need to\n     * make sure that the next entry can hold this entry's length in\n     * its prevlen field. */\n    int forcelarge = 0;\n    nextdiff = (p[0] != ZIP_END) ? zipPrevLenByteDiff(p,reqlen) : 0;\n    if (nextdiff == -4 && reqlen < 4) {\n        nextdiff = 0;\n        forcelarge = 1;\n    }\n\n    /* Store offset because a realloc may change the address of zl. */\n    offset = p-zl;\n    newlen = curlen+reqlen+nextdiff;\n    zl = ziplistResize(zl,newlen);\n    p = zl+offset;\n\n    /* Apply memory move when necessary and update tail offset. */\n    if (p[0] != ZIP_END) {\n        /* Subtract one because of the ZIP_END bytes */\n        memmove(p+reqlen,p-nextdiff,curlen-offset-1+nextdiff);\n\n        /* Encode this entry's raw length in the next entry. 
*/\n        if (forcelarge)\n            zipStorePrevEntryLengthLarge(p+reqlen,reqlen);\n        else\n            zipStorePrevEntryLength(p+reqlen,reqlen);\n\n        /* Update offset for tail */\n        ZIPLIST_TAIL_OFFSET(zl) =\n            intrev32ifbe(intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl))+reqlen);\n\n        /* When the tail contains more than one entry, we need to take\n         * \"nextdiff\" into account as well. Otherwise, a change in the\n         * size of prevlen doesn't have an effect on the *tail* offset. */\n        assert(zipEntrySafe(zl, newlen, p+reqlen, &tail, 1));\n        if (p[reqlen+tail.headersize+tail.len] != ZIP_END) {\n            ZIPLIST_TAIL_OFFSET(zl) =\n                intrev32ifbe(intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl))+nextdiff);\n        }\n    } else {\n        /* This element will be the new tail. */\n        ZIPLIST_TAIL_OFFSET(zl) = intrev32ifbe(p-zl);\n    }\n\n    /* When nextdiff != 0, the raw length of the next entry has changed, so\n     * we need to cascade the update throughout the ziplist */\n    if (nextdiff != 0) {\n        offset = p-zl;\n        zl = __ziplistCascadeUpdate(zl,p+reqlen);\n        p = zl+offset;\n    }\n\n    /* Write the entry */\n    p += zipStorePrevEntryLength(p,prevlen);\n    p += zipStoreEntryEncoding(p,encoding,slen);\n    if (ZIP_IS_STR(encoding)) {\n        memcpy(p,s,slen);\n    } else {\n        zipSaveInteger(p,value,encoding);\n    }\n    ZIPLIST_INCR_LENGTH(zl,1);\n    return zl;\n}\n\n/* Merge ziplists 'first' and 'second' by appending 'second' to 'first'.\n *\n * NOTE: The larger ziplist is reallocated to contain the new merged ziplist.\n * Either 'first' or 'second' can be used for the result.  
The parameter not\n * used will be free'd and set to NULL.\n *\n * After calling this function, the input parameters are no longer valid since\n * they are changed and free'd in-place.\n *\n * The result ziplist is the contents of 'first' followed by 'second'.\n *\n * On failure: returns NULL if the merge is impossible.\n * On success: returns the merged ziplist (which is an expanded version of\n * either 'first' or 'second'), frees the other unused input ziplist, and sets\n * the input ziplist argument equal to the newly reallocated ziplist return\n * value. */\nunsigned char *ziplistMerge(unsigned char **first, unsigned char **second) {\n    /* If any params are null, we can't merge, so return NULL. */\n    if (first == NULL || *first == NULL || second == NULL || *second == NULL)\n        return NULL;\n\n    /* Can't merge same list into itself. */\n    if (*first == *second)\n        return NULL;\n\n    size_t first_bytes = intrev32ifbe(ZIPLIST_BYTES(*first));\n    size_t first_len = intrev16ifbe(ZIPLIST_LENGTH(*first));\n\n    size_t second_bytes = intrev32ifbe(ZIPLIST_BYTES(*second));\n    size_t second_len = intrev16ifbe(ZIPLIST_LENGTH(*second));\n\n    int append;\n    unsigned char *source, *target;\n    size_t target_bytes, source_bytes;\n    /* Pick the largest ziplist so we can resize easily in-place.\n     * We must also track if we are now appending or prepending to\n     * the target ziplist. */\n    if (first_len >= second_len) {\n        /* retain first, append second to first. */\n        target = *first;\n        target_bytes = first_bytes;\n        source = *second;\n        source_bytes = second_bytes;\n        append = 1;\n    } else {\n        /* else, retain second, prepend first to second. 
*/\n        target = *second;\n        target_bytes = second_bytes;\n        source = *first;\n        source_bytes = first_bytes;\n        append = 0;\n    }\n\n    /* Calculate final bytes (subtract one pair of metadata) */\n    size_t zlbytes = first_bytes + second_bytes -\n                     ZIPLIST_HEADER_SIZE - ZIPLIST_END_SIZE;\n    size_t zllength = first_len + second_len;\n\n    /* Combined zl length should be limited within UINT16_MAX */\n    zllength = zllength < UINT16_MAX ? zllength : UINT16_MAX;\n\n    /* larger values can't be stored into ZIPLIST_BYTES */\n    assert(zlbytes < UINT32_MAX);\n\n    /* Save offset positions before we start ripping memory apart. */\n    size_t first_offset = intrev32ifbe(ZIPLIST_TAIL_OFFSET(*first));\n    size_t second_offset = intrev32ifbe(ZIPLIST_TAIL_OFFSET(*second));\n\n    /* Extend target to new zlbytes then append or prepend source. */\n    target = zrealloc(target, zlbytes);\n    if (append) {\n        /* append == appending to target */\n        /* Copy source after target (copying over original [END]):\n         *   [TARGET - END, SOURCE - HEADER] */\n        memcpy(target + target_bytes - ZIPLIST_END_SIZE,\n               source + ZIPLIST_HEADER_SIZE,\n               source_bytes - ZIPLIST_HEADER_SIZE);\n    } else {\n        /* !append == prepending to target */\n        /* Move target *contents* exactly size of (source - [END]),\n         * then copy source into vacated space (source - [END]):\n         *   [SOURCE - END, TARGET - HEADER] */\n        memmove(target + source_bytes - ZIPLIST_END_SIZE,\n                target + ZIPLIST_HEADER_SIZE,\n                target_bytes - ZIPLIST_HEADER_SIZE);\n        memcpy(target, source, source_bytes - ZIPLIST_END_SIZE);\n    }\n\n    /* Update header metadata. 
*/\n    ZIPLIST_BYTES(target) = intrev32ifbe(zlbytes);\n    ZIPLIST_LENGTH(target) = intrev16ifbe(zllength);\n    /* New tail offset is:\n     *   + N bytes of first ziplist\n     *   - 1 byte for [END] of first ziplist\n     *   + M bytes for the offset of the original tail of the second ziplist\n     *   - J bytes for HEADER because second_offset keeps no header. */\n    ZIPLIST_TAIL_OFFSET(target) = intrev32ifbe(\n                                   (first_bytes - ZIPLIST_END_SIZE) +\n                                   (second_offset - ZIPLIST_HEADER_SIZE));\n\n    /* __ziplistCascadeUpdate just fixes the prev length values until it finds a\n     * correct prev length value (then it assumes the rest of the list is okay).\n     * We tell CascadeUpdate to start at the first ziplist's tail element to fix\n     * the merge seam. */\n    target = __ziplistCascadeUpdate(target, target+first_offset);\n\n    /* Now free and NULL out what we didn't realloc */\n    if (append) {\n        zfree(*second);\n        *second = NULL;\n        *first = target;\n    } else {\n        zfree(*first);\n        *first = NULL;\n        *second = target;\n    }\n    return target;\n}\n\nunsigned char *ziplistPush(unsigned char *zl, unsigned char *s, unsigned int slen, int where) {\n    unsigned char *p;\n    p = (where == ZIPLIST_HEAD) ? ZIPLIST_ENTRY_HEAD(zl) : ZIPLIST_ENTRY_END(zl);\n    return __ziplistInsert(zl,p,s,slen);\n}\n\n/* Returns an offset to use for iterating with ziplistNext. When the given\n * index is negative, the list is traversed back to front. When the list\n * doesn't contain an element at the provided index, NULL is returned. 
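 *
 * The index convention is the usual negative-indexing rule; sketched here
 * as a plain position computation ('ziplist_index_pos' is hypothetical,
 * since the real function walks entries instead of computing an offset):

```c
#include <assert.h>

// Map a ziplist-style index to a position in a list of 'n' elements,
// or -1 when out of range. Negative indexes count from the tail, so
// -1 is the last element and -n is the first.
static int ziplist_index_pos(int index, int n) {
    if (index < 0) index = n + index;
    return (index >= 0 && index < n) ? index : -1;
}
```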
*/\nunsigned char *ziplistIndex(unsigned char *zl, int index) {\n    unsigned char *p;\n    unsigned int prevlensize, prevlen = 0;\n    size_t zlbytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n    if (index < 0) {\n        index = (-index)-1;\n        p = ZIPLIST_ENTRY_TAIL(zl);\n        if (p[0] != ZIP_END) {\n            /* No need for \"safe\" check: when going backwards, we know the header\n             * we're parsing is in the range, we just need to assert (below) that\n             * the size we take doesn't cause p to go outside the allocation. */\n            ZIP_DECODE_PREVLENSIZE(p, prevlensize);\n            assert(p + prevlensize < zl + zlbytes - ZIPLIST_END_SIZE);\n            ZIP_DECODE_PREVLEN(p, prevlensize, prevlen);\n            while (prevlen > 0 && index--) {\n                p -= prevlen;\n                assert(p >= zl + ZIPLIST_HEADER_SIZE && p < zl + zlbytes - ZIPLIST_END_SIZE);\n                ZIP_DECODE_PREVLEN(p, prevlensize, prevlen);\n            }\n        }\n    } else {\n        p = ZIPLIST_ENTRY_HEAD(zl);\n        while (index--) {\n            /* Use the \"safe\" length: When we go forward, we need to be careful\n             * not to decode an entry header if it's past the ziplist allocation. */\n            p += zipRawEntryLengthSafe(zl, zlbytes, p);\n            if (p[0] == ZIP_END)\n                break;\n        }\n    }\n    if (p[0] == ZIP_END || index > 0)\n        return NULL;\n    zipAssertValidEntry(zl, zlbytes, p);\n    return p;\n}\n\n/* Return pointer to next entry in ziplist.\n *\n * zl is the pointer to the ziplist\n * p is the pointer to the current element\n *\n * The element after 'p' is returned, otherwise NULL if we are at the end. */\nunsigned char *ziplistNext(unsigned char *zl, unsigned char *p) {\n    ((void) zl);\n    size_t zlbytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n\n    /* \"p\" could be equal to ZIP_END, caused by ziplistDelete,\n     * and we should return NULL. 
Otherwise, we should return NULL\n     * when the *next* element is ZIP_END (there is no next entry). */\n    if (p[0] == ZIP_END) {\n        return NULL;\n    }\n\n    p += zipRawEntryLength(p);\n    if (p[0] == ZIP_END) {\n        return NULL;\n    }\n\n    zipAssertValidEntry(zl, zlbytes, p);\n    return p;\n}\n\n/* Return pointer to previous entry in ziplist. */\nunsigned char *ziplistPrev(unsigned char *zl, unsigned char *p) {\n    unsigned int prevlensize, prevlen = 0;\n\n    /* Iterating backwards from ZIP_END should return the tail. When \"p\" is\n     * equal to the first element of the list, we're already at the head,\n     * and should return NULL. */\n    if (p[0] == ZIP_END) {\n        p = ZIPLIST_ENTRY_TAIL(zl);\n        return (p[0] == ZIP_END) ? NULL : p;\n    } else if (p == ZIPLIST_ENTRY_HEAD(zl)) {\n        return NULL;\n    } else {\n        ZIP_DECODE_PREVLEN(p, prevlensize, prevlen);\n        assert(prevlen > 0);\n        p-=prevlen;\n        size_t zlbytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n        zipAssertValidEntry(zl, zlbytes, p);\n        return p;\n    }\n}\n\n/* Get entry pointed to by 'p' and store in either '*sstr' or 'sval' depending\n * on the encoding of the entry. '*sstr' is always set to NULL to be able\n * to find out whether the string pointer or the integer value was set.\n * Return 0 if 'p' points to the end of the ziplist, 1 otherwise. */\nunsigned int ziplistGet(unsigned char *p, unsigned char **sstr, unsigned int *slen, long long *sval) {\n    zlentry entry;\n    if (p == NULL || p[0] == ZIP_END) return 0;\n    if (sstr) *sstr = NULL;\n\n    zipEntry(p, &entry); /* no need for \"safe\" variant since the input pointer was validated by the function that returned it. 
*/\n    if (ZIP_IS_STR(entry.encoding)) {\n        if (sstr) {\n            *slen = entry.len;\n            *sstr = p+entry.headersize;\n        }\n    } else {\n        if (sval) {\n            *sval = zipLoadInteger(p+entry.headersize,entry.encoding);\n        }\n    }\n    return 1;\n}\n\n/* Insert an entry at \"p\". */\nunsigned char *ziplistInsert(unsigned char *zl, unsigned char *p, unsigned char *s, unsigned int slen) {\n    return __ziplistInsert(zl,p,s,slen);\n}\n\n/* Delete a single entry from the ziplist, pointed to by *p.\n * Also update *p in place, so that it stays valid for iterating over the\n * ziplist while deleting entries. */\nunsigned char *ziplistDelete(unsigned char *zl, unsigned char **p) {\n    size_t offset = *p-zl;\n    zl = __ziplistDelete(zl,*p,1);\n\n    /* Store pointer to current element in p, because ziplistDelete will\n     * do a realloc which might result in a different \"zl\"-pointer.\n     * When the delete direction is back to front, we might delete the last\n     * entry and end up with \"p\" pointing to ZIP_END, so check this. */\n    *p = zl+offset;\n    return zl;\n}\n\n/* Delete a range of entries from the ziplist. */\nunsigned char *ziplistDeleteRange(unsigned char *zl, int index, unsigned int num) {\n    unsigned char *p = ziplistIndex(zl,index);\n    return (p == NULL) ? zl : __ziplistDelete(zl,p,num);\n}\n\n/* Replaces the entry at p. This is equivalent to a delete and an insert,\n * but avoids some overhead when replacing a value of the same size. */\nunsigned char *ziplistReplace(unsigned char *zl, unsigned char *p, unsigned char *s, unsigned int slen) {\n\n    /* Get metadata of the current entry. */\n    zlentry entry;\n    zipEntry(p, &entry);\n\n    /* Compute the length of the entry to store, excluding prevlen. */\n    unsigned int reqlen;\n    unsigned char encoding = 0;\n    long long value = 123456789; /* initialized to avoid warning. 
*/\n    if (zipTryEncoding(s,slen,&value,&encoding)) {\n        reqlen = zipIntSize(encoding); /* encoding is set */\n    } else {\n        reqlen = slen; /* encoding == 0 */\n    }\n    reqlen += zipStoreEntryEncoding(NULL,encoding,slen);\n\n    if (reqlen == entry.lensize + entry.len) {\n        /* Simply overwrite the element. */\n        p += entry.prevrawlensize;\n        p += zipStoreEntryEncoding(p,encoding,slen);\n        if (ZIP_IS_STR(encoding)) {\n            memcpy(p,s,slen);\n        } else {\n            zipSaveInteger(p,value,encoding);\n        }\n    } else {\n        /* Fallback. */\n        zl = ziplistDelete(zl,&p);\n        zl = ziplistInsert(zl,p,s,slen);\n    }\n    return zl;\n}\n\n/* Compare the entry pointed to by 'p' with 'sstr' of length 'slen'.\n * Return 1 if they are equal, 0 otherwise. */\nunsigned int ziplistCompare(unsigned char *p, unsigned char *sstr, unsigned int slen) {\n    zlentry entry;\n    unsigned char sencoding;\n    long long zval, sval;\n    if (p[0] == ZIP_END) return 0;\n\n    zipEntry(p, &entry); /* no need for \"safe\" variant since the input pointer was validated by the function that returned it. */\n    if (ZIP_IS_STR(entry.encoding)) {\n        /* Raw compare */\n        if (entry.len == slen) {\n            return memcmp(p+entry.headersize,sstr,slen) == 0;\n        } else {\n            return 0;\n        }\n    } else {\n        /* Try to compare encoded values. Don't compare the encoding because\n         * different implementations may encode integers differently. */\n        if (zipTryEncoding(sstr,slen,&sval,&sencoding)) {\n            zval = zipLoadInteger(p+entry.headersize,entry.encoding);\n            return zval == sval;\n        }\n    }\n    return 0;\n}\n\n/* Find pointer to the entry equal to the specified entry. Skip 'skip' entries\n * between every comparison. Returns NULL when the field could not be found. 
*/\nunsigned char *ziplistFind(unsigned char *zl, unsigned char *p, unsigned char *vstr, unsigned int vlen, unsigned int skip) {\n    int skipcnt = 0;\n    unsigned char vencoding = 0;\n    long long vll = 0;\n    size_t zlbytes = ziplistBlobLen(zl);\n\n    while (p[0] != ZIP_END) {\n        struct zlentry e;\n        unsigned char *q;\n\n        assert(zipEntrySafe(zl, zlbytes, p, &e, 1));\n        q = p + e.prevrawlensize + e.lensize;\n\n        if (skipcnt == 0) {\n            /* Compare current entry with specified entry */\n            if (ZIP_IS_STR(e.encoding)) {\n                if (e.len == vlen && memcmp(q, vstr, vlen) == 0) {\n                    return p;\n                }\n            } else {\n                /* Find out if the searched field can be encoded. Note that\n                 * we do it only the first time: once done, vencoding is set\n                 * to non-zero and vll is set to the integer value. */\n                if (vencoding == 0) {\n                    if (!zipTryEncoding(vstr, vlen, &vll, &vencoding)) {\n                        /* If the entry can't be encoded we set vencoding to\n                         * UCHAR_MAX so that we don't try again the next\n                         * time. */\n                        vencoding = UCHAR_MAX;\n                    }\n                    /* Must be non-zero by now */\n                    assert(vencoding);\n                }\n\n                /* Compare current entry with specified entry, but only if\n                 * vencoding != UCHAR_MAX: when no integer encoding is\n                 * possible for the field, it can't be a valid integer. 
*/\n                if (vencoding != UCHAR_MAX) {\n                    long long ll = zipLoadInteger(q, e.encoding);\n                    if (ll == vll) {\n                        return p;\n                    }\n                }\n            }\n\n            /* Reset skip count */\n            skipcnt = skip;\n        } else {\n            /* Skip entry */\n            skipcnt--;\n        }\n\n        /* Move to next entry */\n        p = q + e.len;\n    }\n\n    return NULL;\n}\n\n/* Return length of ziplist. */\nunsigned int ziplistLen(unsigned char *zl) {\n    unsigned int len = 0;\n    if (intrev16ifbe(ZIPLIST_LENGTH(zl)) < UINT16_MAX) {\n        len = intrev16ifbe(ZIPLIST_LENGTH(zl));\n    } else {\n        unsigned char *p = zl+ZIPLIST_HEADER_SIZE;\n        size_t zlbytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n        while (*p != ZIP_END) {\n            p += zipRawEntryLengthSafe(zl, zlbytes, p);\n            len++;\n        }\n\n        /* Re-store length if small enough */\n        if (len < UINT16_MAX) ZIPLIST_LENGTH(zl) = intrev16ifbe(len);\n    }\n    return len;\n}\n\n/* Return ziplist blob size in bytes. 
*/\nsize_t ziplistBlobLen(unsigned char *zl) {\n    return intrev32ifbe(ZIPLIST_BYTES(zl));\n}\n\nvoid ziplistRepr(unsigned char *zl) {\n    unsigned char *p;\n    int index = 0;\n    zlentry entry;\n    size_t zlbytes = ziplistBlobLen(zl);\n\n    printf(\n        \"{total bytes %u} \"\n        \"{num entries %u}\\n\"\n        \"{tail offset %u}\\n\",\n        intrev32ifbe(ZIPLIST_BYTES(zl)),\n        intrev16ifbe(ZIPLIST_LENGTH(zl)),\n        intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl)));\n    p = ZIPLIST_ENTRY_HEAD(zl);\n    while(*p != ZIP_END) {\n        assert(zipEntrySafe(zl, zlbytes, p, &entry, 1));\n        printf(\n            \"{\\n\"\n                \"\\taddr 0x%08lx,\\n\"\n                \"\\tindex %2d,\\n\"\n                \"\\toffset %5lu,\\n\"\n                \"\\thdr+entry len: %5u,\\n\"\n                \"\\thdr len%2u,\\n\"\n                \"\\tprevrawlen: %5u,\\n\"\n                \"\\tprevrawlensize: %2u,\\n\"\n                \"\\tpayload %5u\\n\",\n            (long unsigned)p,\n            index,\n            (unsigned long) (p-zl),\n            entry.headersize+entry.len,\n            entry.headersize,\n            entry.prevrawlen,\n            entry.prevrawlensize,\n            entry.len);\n        printf(\"\\tbytes: \");\n        for (unsigned int i = 0; i < entry.headersize+entry.len; i++) {\n            printf(\"%02x|\",p[i]);\n        }\n        printf(\"\\n\");\n        p += entry.headersize;\n        if (ZIP_IS_STR(entry.encoding)) {\n            printf(\"\\t[str]\");\n            if (entry.len > 40) {\n                if (fwrite(p,40,1,stdout) == 0) perror(\"fwrite\");\n                printf(\"...\");\n            } else {\n                if (entry.len &&\n                    fwrite(p,entry.len,1,stdout) == 0) perror(\"fwrite\");\n            }\n        } else {\n            printf(\"\\t[int]%lld\", (long long) zipLoadInteger(p,entry.encoding));\n        }\n        printf(\"\\n}\\n\");\n        p += entry.len;\n        index++;\n   
 }\n    printf(\"{end}\\n\\n\");\n}\n\n/* Validate the integrity of the data structure.\n * when `deep` is 0, only the integrity of the header is validated.\n * when `deep` is 1, we scan all the entries one by one. */\nint ziplistValidateIntegrity(unsigned char *zl, size_t size, int deep,\n    ziplistValidateEntryCB entry_cb, void *cb_userdata) {\n    /* check that we can actually read the header. (and ZIP_END) */\n    if (size < ZIPLIST_HEADER_SIZE + ZIPLIST_END_SIZE)\n        return 0;\n\n    /* check that the encoded size in the header must match the allocated size. */\n    size_t bytes = intrev32ifbe(ZIPLIST_BYTES(zl));\n    if (bytes != size)\n        return 0;\n\n    /* the last byte must be the terminator. */\n    if (zl[size - ZIPLIST_END_SIZE] != ZIP_END)\n        return 0;\n\n    /* make sure the tail offset isn't reaching outside the allocation. */\n    if (intrev32ifbe(ZIPLIST_TAIL_OFFSET(zl)) > size - ZIPLIST_END_SIZE)\n        return 0;\n\n    if (!deep)\n        return 1;\n\n    unsigned int count = 0;\n    unsigned int header_count = intrev16ifbe(ZIPLIST_LENGTH(zl));\n    unsigned char *p = ZIPLIST_ENTRY_HEAD(zl);\n    unsigned char *prev = NULL;\n    size_t prev_raw_size = 0;\n    while(*p != ZIP_END) {\n        struct zlentry e;\n        /* Decode the entry headers and fail if invalid or reaches outside the allocation */\n        if (!zipEntrySafe(zl, size, p, &e, 1))\n            return 0;\n\n        /* Make sure the record stating the prev entry size is correct. */\n        if (e.prevrawlen != prev_raw_size)\n            return 0;\n\n        /* Optionally let the caller validate the entry too. */\n        if (entry_cb && !entry_cb(p, header_count, cb_userdata))\n            return 0;\n\n        /* Move to the next entry */\n        prev_raw_size = e.headersize + e.len;\n        prev = p;\n        p += e.headersize + e.len;\n        count++;\n    }\n\n    /* Make sure 'p' really does point to the end of the ziplist. 
*/\n    if (p != zl + bytes - ZIPLIST_END_SIZE)\n        return 0;\n\n    /* Make sure the <zltail> entry really does point to the start of the last entry. */\n    if (prev != NULL && prev != ZIPLIST_ENTRY_TAIL(zl))\n        return 0;\n\n    /* Check that the count in the header is correct */\n    if (header_count != UINT16_MAX && count != header_count)\n        return 0;\n\n    return 1;\n}\n\n/* Randomly select a pair of key and value.\n * total_count is a pre-computed length/2 of the ziplist (to avoid calls to ziplistLen).\n * 'key' and 'val' are used to store the resulting key-value pair.\n * 'val' can be NULL if the value is not needed. */\nvoid ziplistRandomPair(unsigned char *zl, unsigned long total_count, ziplistEntry *key, ziplistEntry *val) {\n    int ret;\n    unsigned char *p;\n\n    /* Avoid div by zero on corrupt ziplist */\n    assert(total_count);\n\n    /* Generate an even index, because the ziplist stores K-V pairs */\n    int r = (rand() % total_count) * 2;\n    p = ziplistIndex(zl, r);\n    ret = ziplistGet(p, &key->sval, &key->slen, &key->lval);\n    assert(ret != 0);\n\n    if (!val)\n        return;\n    p = ziplistNext(zl, p);\n    ret = ziplistGet(p, &val->sval, &val->slen, &val->lval);\n    assert(ret != 0);\n}\n\n/* unsigned int compare for qsort */\nint uintCompare(const void *a, const void *b) {\n    return (*(unsigned int *) a - *(unsigned int *) b);\n}\n\n/* Helper function to store a string from 'val' or 'lval' into 'dest' */\nstatic inline void ziplistSaveValue(unsigned char *val, unsigned int len, long long lval, ziplistEntry *dest) {\n    dest->sval = val;\n    dest->slen = len;\n    dest->lval = lval;\n}\n\n/* Randomly select 'count' key-value pairs and store them into the 'keys' and\n * 'vals' args. The order of the picked entries is random, and the selections\n * are non-unique (repetitions are possible).\n * The 'vals' arg can be NULL in which case we skip these. 
*/\nvoid ziplistRandomPairs(unsigned char *zl, unsigned int count, ziplistEntry *keys, ziplistEntry *vals) {\n    unsigned char *p, *key, *value;\n    unsigned int klen = 0, vlen = 0;\n    long long klval = 0, vlval = 0;\n\n    /* Notice: the index member must be first due to the use in uintCompare */\n    typedef struct {\n        unsigned int index;\n        unsigned int order;\n    } rand_pick;\n    rand_pick *picks = zmalloc(sizeof(rand_pick)*count);\n    unsigned int total_size = ziplistLen(zl)/2;\n\n    /* Avoid div by zero on corrupt ziplist */\n    assert(total_size);\n\n    /* Create a pool of random indexes (some may be duplicates). */\n    for (unsigned int i = 0; i < count; i++) {\n        picks[i].index = (rand() % total_size) * 2; /* Generate even indexes */\n        /* keep track of the order we picked them */\n        picks[i].order = i;\n    }\n\n    /* Sort by indexes. */\n    qsort(picks, count, sizeof(rand_pick), uintCompare);\n\n    /* Fetch the elements from the ziplist into the output array, respecting the original order. */\n    unsigned int zipindex = picks[0].index, pickindex = 0;\n    p = ziplistIndex(zl, zipindex);\n    while (ziplistGet(p, &key, &klen, &klval) && pickindex < count) {\n        p = ziplistNext(zl, p);\n        assert(ziplistGet(p, &value, &vlen, &vlval));\n        while (pickindex < count && zipindex == picks[pickindex].index) {\n            int storeorder = picks[pickindex].order;\n            ziplistSaveValue(key, klen, klval, &keys[storeorder]);\n            if (vals)\n                ziplistSaveValue(value, vlen, vlval, &vals[storeorder]);\n            pickindex++;\n        }\n        zipindex += 2;\n        p = ziplistNext(zl, p);\n    }\n\n    zfree(picks);\n}\n\n/* Randomly select 'count' key-value pairs and store them into the 'keys' and\n * 'vals' args. 
The selections are unique (no repetitions), and the order of\n * the picked entries is NOT random.\n * The 'vals' arg can be NULL in which case we skip these.\n * The return value is the number of items picked, which can be lower than the\n * requested count if the ziplist doesn't hold enough pairs. */\nunsigned int ziplistRandomPairsUnique(unsigned char *zl, unsigned int count, ziplistEntry *keys, ziplistEntry *vals) {\n    unsigned char *p, *key;\n    unsigned int klen = 0;\n    long long klval = 0;\n    unsigned int total_size = ziplistLen(zl)/2;\n    unsigned int index = 0;\n    if (count > total_size)\n        count = total_size;\n\n    /* To iterate only once, every time we consider a member, the probability\n     * of picking it is the number of picks we still need divided by the\n     * number of members we haven't visited yet. This way every member is\n     * picked with equal probability. */\n    p = ziplistIndex(zl, 0);\n    unsigned int picked = 0, remaining = count;\n    while (picked < count && p) {\n        double randomDouble = ((double)rand()) / RAND_MAX;\n        double threshold = ((double)remaining) / (total_size - index);\n        if (randomDouble <= threshold) {\n            assert(ziplistGet(p, &key, &klen, &klval));\n            ziplistSaveValue(key, klen, klval, &keys[picked]);\n            p = ziplistNext(zl, p);\n            assert(p);\n            if (vals) {\n                assert(ziplistGet(p, &key, &klen, &klval));\n                ziplistSaveValue(key, klen, klval, &vals[picked]);\n            }\n            remaining--;\n            picked++;\n        } else {\n            p = ziplistNext(zl, p);\n            assert(p);\n        }\n        p = ziplistNext(zl, p);\n        index++;\n    }\n    return picked;\n}\n\n#ifdef REDIS_TEST\n#include <sys/time.h>\n#include \"adlist.h\"\n#include \"sds.h\"\n#include \"testhelp.h\"\n\n#define debug(f, ...) 
{ if (DEBUG) printf(f, __VA_ARGS__); }\n\nstatic unsigned char *createList(void) {\n    unsigned char *zl = ziplistNew();\n    zl = ziplistPush(zl, (unsigned char*)\"foo\", 3, ZIPLIST_TAIL);\n    zl = ziplistPush(zl, (unsigned char*)\"quux\", 4, ZIPLIST_TAIL);\n    zl = ziplistPush(zl, (unsigned char*)\"hello\", 5, ZIPLIST_HEAD);\n    zl = ziplistPush(zl, (unsigned char*)\"1024\", 4, ZIPLIST_TAIL);\n    return zl;\n}\n\nstatic unsigned char *createIntList(void) {\n    unsigned char *zl = ziplistNew();\n    char buf[32];\n\n    snprintf(buf, sizeof(buf), \"100\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_TAIL);\n    snprintf(buf, sizeof(buf), \"128000\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_TAIL);\n    snprintf(buf, sizeof(buf), \"-100\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_HEAD);\n    snprintf(buf, sizeof(buf), \"4294967296\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_HEAD);\n    snprintf(buf, sizeof(buf), \"non integer\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_TAIL);\n    snprintf(buf,sizeof(buf), \"much much longer non integer\");\n    zl = ziplistPush(zl, (unsigned char*)buf, strlen(buf), ZIPLIST_TAIL);\n    return zl;\n}\n\nstatic long long usec(void) {\n    struct timeval tv;\n    gettimeofday(&tv,NULL);\n    return (((long long)tv.tv_sec)*1000000)+tv.tv_usec;\n}\n\nstatic void stress(int pos, int num, int maxsize, int dnum) {\n    int i,j,k;\n    unsigned char *zl;\n    char posstr[2][5] = { \"HEAD\", \"TAIL\" };\n    long long start;\n    for (i = 0; i < maxsize; i+=dnum) {\n        zl = ziplistNew();\n        for (j = 0; j < i; j++) {\n            zl = ziplistPush(zl,(unsigned char*)\"quux\",4,ZIPLIST_TAIL);\n        }\n\n        /* Do num times a push+pop from pos */\n        start = usec();\n        for (k = 0; k < num; k++) {\n            zl = ziplistPush(zl,(unsigned char*)\"quux\",4,pos);\n            zl = 
ziplistDeleteRange(zl,0,1);\n        }\n        printf(\"List size: %8d, bytes: %8d, %dx push+pop (%s): %6lld usec\\n\",\n            i,intrev32ifbe(ZIPLIST_BYTES(zl)),num,posstr[pos],usec()-start);\n        zfree(zl);\n    }\n}\n\nstatic unsigned char *pop(unsigned char *zl, int where) {\n    unsigned char *p, *vstr;\n    unsigned int vlen;\n    long long vlong = 0;\n\n    p = ziplistIndex(zl,where == ZIPLIST_HEAD ? 0 : -1);\n    if (ziplistGet(p,&vstr,&vlen,&vlong)) {\n        if (where == ZIPLIST_HEAD)\n            printf(\"Pop head: \");\n        else\n            printf(\"Pop tail: \");\n\n        if (vstr) {\n            if (vlen && fwrite(vstr,vlen,1,stdout) == 0) perror(\"fwrite\");\n        }\n        else {\n            printf(\"%lld\", vlong);\n        }\n\n        printf(\"\\n\");\n        return ziplistDelete(zl,&p);\n    } else {\n        printf(\"ERROR: Could not pop\\n\");\n        exit(1);\n    }\n}\n\nstatic int randstring(char *target, unsigned int min, unsigned int max) {\n    int p = 0;\n    int len = min+rand()%(max-min+1);\n    int minval, maxval;\n    switch(rand() % 3) {\n    case 0:\n        minval = 0;\n        maxval = 255;\n    break;\n    case 1:\n        minval = 48;\n        maxval = 122;\n    break;\n    case 2:\n        minval = 48;\n        maxval = 52;\n    break;\n    default:\n        assert(NULL);\n    }\n\n    while(p < len)\n        target[p++] = minval+rand()%(maxval-minval+1);\n    return len;\n}\n\nstatic void verify(unsigned char *zl, zlentry *e) {\n    int len = ziplistLen(zl);\n    zlentry _e;\n\n    ZIPLIST_ENTRY_ZERO(&_e);\n\n    for (int i = 0; i < len; i++) {\n        memset(&e[i], 0, sizeof(zlentry));\n        zipEntry(ziplistIndex(zl, i), &e[i]);\n\n        memset(&_e, 0, sizeof(zlentry));\n        zipEntry(ziplistIndex(zl, -len+i), &_e);\n\n        assert(memcmp(&e[i], &_e, sizeof(zlentry)) == 0);\n    }\n}\n\nstatic unsigned char *insertHelper(unsigned char *zl, char ch, size_t len, unsigned char *pos) {\n    
assert(len <= ZIP_BIG_PREVLEN);\n    unsigned char data[ZIP_BIG_PREVLEN] = {0};\n    memset(data, ch, len);\n    return ziplistInsert(zl, pos, data, len);\n}\n\nstatic int compareHelper(unsigned char *zl, char ch, size_t len, int index) {\n    assert(len <= ZIP_BIG_PREVLEN);\n    unsigned char data[ZIP_BIG_PREVLEN] = {0};\n    memset(data, ch, len);\n    unsigned char *p = ziplistIndex(zl, index);\n    assert(p != NULL);\n    return ziplistCompare(p, data, len);\n}\n\nstatic size_t strEntryBytesSmall(size_t slen) {\n    return slen + zipStorePrevEntryLength(NULL, 0) + zipStoreEntryEncoding(NULL, 0, slen);\n}\n\nstatic size_t strEntryBytesLarge(size_t slen) {\n    return slen + zipStorePrevEntryLength(NULL, ZIP_BIG_PREVLEN) + zipStoreEntryEncoding(NULL, 0, slen);\n}\n\n/* ./redis-server test ziplist <randomseed> */\nint ziplistTest(int argc, char **argv, int flags) {\n    int accurate = (flags & REDIS_TEST_ACCURATE);\n    unsigned char *zl, *p;\n    unsigned char *entry;\n    unsigned int elen;\n    long long value;\n    int iteration;\n\n    /* If an argument is given, use it as the random seed. 
*/\n    if (argc >= 4)\n        srand(atoi(argv[3]));\n\n    zl = createIntList();\n    ziplistRepr(zl);\n\n    zfree(zl);\n\n    zl = createList();\n    ziplistRepr(zl);\n\n    zl = pop(zl,ZIPLIST_TAIL);\n    ziplistRepr(zl);\n\n    zl = pop(zl,ZIPLIST_HEAD);\n    ziplistRepr(zl);\n\n    zl = pop(zl,ZIPLIST_TAIL);\n    ziplistRepr(zl);\n\n    zl = pop(zl,ZIPLIST_TAIL);\n    ziplistRepr(zl);\n\n    zfree(zl);\n\n    printf(\"Get element at index 3:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, 3);\n        if (!ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"ERROR: Could not access index 3\\n\");\n            return 1;\n        }\n        if (entry) {\n            if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            printf(\"\\n\");\n        } else {\n            printf(\"%lld\\n\", value);\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Get element at index 4 (out of range):\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, 4);\n        if (p == NULL) {\n            printf(\"No entry\\n\");\n        } else {\n            printf(\"ERROR: Out of range index should return NULL, returned offset: %ld\\n\", (long)(p-zl));\n            return 1;\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Get element at index -1 (last element):\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, -1);\n        if (!ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"ERROR: Could not access index -1\\n\");\n            return 1;\n        }\n        if (entry) {\n            if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            printf(\"\\n\");\n        } else {\n            printf(\"%lld\\n\", value);\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Get element at index -4 (first element):\\n\");\n    {\n        zl = createList();\n        p = 
ziplistIndex(zl, -4);\n        if (!ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"ERROR: Could not access index -4\\n\");\n            return 1;\n        }\n        if (entry) {\n            if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            printf(\"\\n\");\n        } else {\n            printf(\"%lld\\n\", value);\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Get element at index -5 (reverse out of range):\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, -5);\n        if (p == NULL) {\n            printf(\"No entry\\n\");\n        } else {\n            printf(\"ERROR: Out of range index should return NULL, returned offset: %ld\\n\", (long)(p-zl));\n            return 1;\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate list from 0 to end:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, 0);\n        while (ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"Entry: \");\n            if (entry) {\n                if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            } else {\n                printf(\"%lld\", value);\n            }\n            p = ziplistNext(zl,p);\n            printf(\"\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate list from 1 to end:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, 1);\n        while (ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"Entry: \");\n            if (entry) {\n                if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            } else {\n                printf(\"%lld\", value);\n            }\n            p = ziplistNext(zl,p);\n            printf(\"\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate list from 2 to end:\\n\");\n    {\n        zl = createList();\n        p 
= ziplistIndex(zl, 2);\n        while (ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"Entry: \");\n            if (entry) {\n                if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            } else {\n                printf(\"%lld\", value);\n            }\n            p = ziplistNext(zl,p);\n            printf(\"\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate starting out of range:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, 4);\n        if (!ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"No entry\\n\");\n        } else {\n            printf(\"ERROR\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate from back to front:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, -1);\n        while (ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"Entry: \");\n            if (entry) {\n                if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            } else {\n                printf(\"%lld\", value);\n            }\n            p = ziplistPrev(zl,p);\n            printf(\"\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Iterate from back to front, deleting all items:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl, -1);\n        while (ziplistGet(p, &entry, &elen, &value)) {\n            printf(\"Entry: \");\n            if (entry) {\n                if (elen && fwrite(entry,elen,1,stdout) == 0) perror(\"fwrite\");\n            } else {\n                printf(\"%lld\", value);\n            }\n            zl = ziplistDelete(zl,&p);\n            p = ziplistPrev(zl,p);\n            printf(\"\\n\");\n        }\n        printf(\"\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Delete inclusive range 0,0:\\n\");\n    {\n        zl = createList();\n        zl = 
ziplistDeleteRange(zl, 0, 1);\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Delete inclusive range 0,1:\\n\");\n    {\n        zl = createList();\n        zl = ziplistDeleteRange(zl, 0, 2);\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Delete inclusive range 1,2:\\n\");\n    {\n        zl = createList();\n        zl = ziplistDeleteRange(zl, 1, 2);\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Delete with start index out of range:\\n\");\n    {\n        zl = createList();\n        zl = ziplistDeleteRange(zl, 5, 1);\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Delete with num overflow:\\n\");\n    {\n        zl = createList();\n        zl = ziplistDeleteRange(zl, 1, 5);\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Delete foo while iterating:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl,0);\n        while (ziplistGet(p,&entry,&elen,&value)) {\n            if (entry && strncmp(\"foo\",(char*)entry,elen) == 0) {\n                printf(\"Delete foo\\n\");\n                zl = ziplistDelete(zl,&p);\n            } else {\n                printf(\"Entry: \");\n                if (entry) {\n                    if (elen && fwrite(entry,elen,1,stdout) == 0)\n                        perror(\"fwrite\");\n                } else {\n                    printf(\"%lld\",value);\n                }\n                p = ziplistNext(zl,p);\n                printf(\"\\n\");\n            }\n        }\n        printf(\"\\n\");\n        ziplistRepr(zl);\n        zfree(zl);\n    }\n\n    printf(\"Replace with same size:\\n\");\n    {\n        zl = createList(); /* \"hello\", \"foo\", \"quux\", \"1024\" */\n        unsigned char *orig_zl = zl;\n        p = ziplistIndex(zl, 0);\n        zl = ziplistReplace(zl, p, (unsigned char*)\"zoink\", 5);\n        p = ziplistIndex(zl, 3);\n        zl = ziplistReplace(zl, p, (unsigned char*)\"yy\", 2);\n        p = 
ziplistIndex(zl, 1);\n        zl = ziplistReplace(zl, p, (unsigned char*)\"65536\", 5);\n        p = ziplistIndex(zl, 0);\n        assert(!memcmp((char*)p,\n                       \"\\x00\\x05zoink\"\n                       \"\\x07\\xf0\\x00\\x00\\x01\" /* 65536 as int24 */\n                       \"\\x05\\x04quux\" \"\\x06\\x02yy\" \"\\xff\",\n                       23));\n        assert(zl == orig_zl); /* no reallocations have happened */\n        zfree(zl);\n        printf(\"SUCCESS\\n\\n\");\n    }\n\n    printf(\"Replace with different size:\\n\");\n    {\n        zl = createList(); /* \"hello\", \"foo\", \"quux\", \"1024\" */\n        p = ziplistIndex(zl, 1);\n        zl = ziplistReplace(zl, p, (unsigned char*)\"squirrel\", 8);\n        p = ziplistIndex(zl, 0);\n        assert(!strncmp((char*)p,\n                        \"\\x00\\x05hello\" \"\\x07\\x08squirrel\" \"\\x0a\\x04quux\"\n                        \"\\x06\\xc0\\x00\\x04\" \"\\xff\",\n                        28));\n        zfree(zl);\n        printf(\"SUCCESS\\n\\n\");\n    }\n\n    printf(\"Regression test for >255 byte strings:\\n\");\n    {\n        char v1[257] = {0}, v2[257] = {0};\n        memset(v1,'x',256);\n        memset(v2,'y',256);\n        zl = ziplistNew();\n        zl = ziplistPush(zl,(unsigned char*)v1,strlen(v1),ZIPLIST_TAIL);\n        zl = ziplistPush(zl,(unsigned char*)v2,strlen(v2),ZIPLIST_TAIL);\n\n        /* Pop values again and compare their value. 
*/\n        p = ziplistIndex(zl,0);\n        assert(ziplistGet(p,&entry,&elen,&value));\n        assert(strncmp(v1,(char*)entry,elen) == 0);\n        p = ziplistIndex(zl,1);\n        assert(ziplistGet(p,&entry,&elen,&value));\n        assert(strncmp(v2,(char*)entry,elen) == 0);\n        printf(\"SUCCESS\\n\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Regression test deleting next to last entries:\\n\");\n    {\n        char v[3][257] = {{0}};\n        zlentry e[3] = {{.prevrawlensize = 0, .prevrawlen = 0, .lensize = 0,\n                         .len = 0, .headersize = 0, .encoding = 0, .p = NULL}};\n        size_t i;\n\n        for (i = 0; i < (sizeof(v)/sizeof(v[0])); i++) {\n            memset(v[i], 'a' + i, sizeof(v[0]));\n        }\n\n        v[0][256] = '\\0';\n        v[1][  1] = '\\0';\n        v[2][256] = '\\0';\n\n        zl = ziplistNew();\n        for (i = 0; i < (sizeof(v)/sizeof(v[0])); i++) {\n            zl = ziplistPush(zl, (unsigned char *) v[i], strlen(v[i]), ZIPLIST_TAIL);\n        }\n\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1);\n        assert(e[1].prevrawlensize == 5);\n        assert(e[2].prevrawlensize == 1);\n\n        /* Deleting entry 1 will increase `prevrawlensize` for entry 2 */\n        unsigned char *p = e[1].p;\n        zl = ziplistDelete(zl, &p);\n\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1);\n        assert(e[1].prevrawlensize == 5);\n\n        printf(\"SUCCESS\\n\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Create long list and check indices:\\n\");\n    {\n        unsigned long long start = usec();\n        zl = ziplistNew();\n        char buf[32];\n        int i,len;\n        for (i = 0; i < 1000; i++) {\n            len = snprintf(buf,sizeof(buf),\"%d\",i);\n            zl = ziplistPush(zl,(unsigned char*)buf,len,ZIPLIST_TAIL);\n        }\n        for (i = 0; i < 1000; i++) {\n            p = ziplistIndex(zl,i);\n            assert(ziplistGet(p,NULL,NULL,&value));\n     
       assert(i == value);\n\n            p = ziplistIndex(zl,-i-1);\n            assert(ziplistGet(p,NULL,NULL,&value));\n            assert(999-i == value);\n        }\n        printf(\"SUCCESS. usec=%lld\\n\\n\", usec()-start);\n        zfree(zl);\n    }\n\n    printf(\"Compare strings with ziplist entries:\\n\");\n    {\n        zl = createList();\n        p = ziplistIndex(zl,0);\n        if (!ziplistCompare(p,(unsigned char*)\"hello\",5)) {\n            printf(\"ERROR: not \\\"hello\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"hella\",5)) {\n            printf(\"ERROR: \\\"hella\\\"\\n\");\n            return 1;\n        }\n\n        p = ziplistIndex(zl,3);\n        if (!ziplistCompare(p,(unsigned char*)\"1024\",4)) {\n            printf(\"ERROR: not \\\"1024\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"1025\",4)) {\n            printf(\"ERROR: \\\"1025\\\"\\n\");\n            return 1;\n        }\n        printf(\"SUCCESS\\n\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Merge test:\\n\");\n    {\n        /* create list gives us: [hello, foo, quux, 1024] */\n        zl = createList();\n        unsigned char *zl2 = createList();\n\n        unsigned char *zl3 = ziplistNew();\n        unsigned char *zl4 = ziplistNew();\n\n        if (ziplistMerge(&zl4, &zl4)) {\n            printf(\"ERROR: Allowed merging of one ziplist into itself.\\n\");\n            return 1;\n        }\n\n        /* Merge two empty ziplists, get empty result back. 
*/\n        zl4 = ziplistMerge(&zl3, &zl4);\n        ziplistRepr(zl4);\n        if (ziplistLen(zl4)) {\n            printf(\"ERROR: Merging two empty ziplists created entries.\\n\");\n            return 1;\n        }\n        zfree(zl4);\n\n        zl2 = ziplistMerge(&zl, &zl2);\n        /* merge gives us: [hello, foo, quux, 1024, hello, foo, quux, 1024] */\n        ziplistRepr(zl2);\n\n        if (ziplistLen(zl2) != 8) {\n            printf(\"ERROR: Merged length not 8, but: %u\\n\", ziplistLen(zl2));\n            return 1;\n        }\n\n        p = ziplistIndex(zl2,0);\n        if (!ziplistCompare(p,(unsigned char*)\"hello\",5)) {\n            printf(\"ERROR: not \\\"hello\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"hella\",5)) {\n            printf(\"ERROR: \\\"hella\\\"\\n\");\n            return 1;\n        }\n\n        p = ziplistIndex(zl2,3);\n        if (!ziplistCompare(p,(unsigned char*)\"1024\",4)) {\n            printf(\"ERROR: not \\\"1024\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"1025\",4)) {\n            printf(\"ERROR: \\\"1025\\\"\\n\");\n            return 1;\n        }\n\n        p = ziplistIndex(zl2,4);\n        if (!ziplistCompare(p,(unsigned char*)\"hello\",5)) {\n            printf(\"ERROR: not \\\"hello\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"hella\",5)) {\n            printf(\"ERROR: \\\"hella\\\"\\n\");\n            return 1;\n        }\n\n        p = ziplistIndex(zl2,7);\n        if (!ziplistCompare(p,(unsigned char*)\"1024\",4)) {\n            printf(\"ERROR: not \\\"1024\\\"\\n\");\n            return 1;\n        }\n        if (ziplistCompare(p,(unsigned char*)\"1025\",4)) {\n            printf(\"ERROR: \\\"1025\\\"\\n\");\n            return 1;\n        }\n        printf(\"SUCCESS\\n\\n\");\n        zfree(zl);\n    }\n\n    printf(\"Stress with random payloads of different 
encoding:\\n\");\n    {\n        unsigned long long start = usec();\n        int i,j,len,where;\n        unsigned char *p;\n        char buf[1024];\n        int buflen;\n        list *ref;\n        listNode *refnode;\n\n        /* Hold temp vars from ziplist */\n        unsigned char *sstr;\n        unsigned int slen;\n        long long sval;\n\n        iteration = accurate ? 20000 : 20;\n        for (i = 0; i < iteration; i++) {\n            zl = ziplistNew();\n            ref = listCreate();\n            listSetFreeMethod(ref, sdsfreegeneric);\n            len = rand() % 256;\n\n            /* Create lists */\n            for (j = 0; j < len; j++) {\n                where = (rand() & 1) ? ZIPLIST_HEAD : ZIPLIST_TAIL;\n                if (rand() % 2) {\n                    buflen = randstring(buf,1,sizeof(buf)-1);\n                } else {\n                    switch(rand() % 3) {\n                    case 0:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()) >> 20);\n                        break;\n                    case 1:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()));\n                        break;\n                    case 2:\n                        buflen = snprintf(buf,sizeof(buf),\"%lld\",(0LL + rand()) << 20);\n                        break;\n                    default:\n                        assert(NULL);\n                    }\n                }\n\n                /* Add to ziplist */\n                zl = ziplistPush(zl, (unsigned char*)buf, buflen, where);\n\n                /* Add to reference list */\n                if (where == ZIPLIST_HEAD) {\n                    listAddNodeHead(ref,sdsnewlen(buf, buflen));\n                } else if (where == ZIPLIST_TAIL) {\n                    listAddNodeTail(ref,sdsnewlen(buf, buflen));\n                } else {\n                    assert(NULL);\n                }\n            }\n\n            assert(listLength(ref) == 
ziplistLen(zl));\n            for (j = 0; j < len; j++) {\n                /* Naive way to get elements, but similar to the stresser\n                 * executed from the Tcl test suite. */\n                p = ziplistIndex(zl,j);\n                refnode = listIndex(ref,j);\n\n                assert(ziplistGet(p,&sstr,&slen,&sval));\n                if (sstr == NULL) {\n                    buflen = snprintf(buf,sizeof(buf),\"%lld\",sval);\n                } else {\n                    buflen = slen;\n                    memcpy(buf,sstr,buflen);\n                    buf[buflen] = '\\0';\n                }\n                assert(memcmp(buf,listNodeValue(refnode),buflen) == 0);\n            }\n            zfree(zl);\n            listRelease(ref);\n        }\n        printf(\"Done. usec=%lld\\n\\n\", usec()-start);\n    }\n\n    printf(\"Stress with variable ziplist size:\\n\");\n    {\n        unsigned long long start = usec();\n        int maxsize = accurate ? 16384 : 16;\n        stress(ZIPLIST_HEAD,100000,maxsize,256);\n        stress(ZIPLIST_TAIL,100000,maxsize,256);\n        printf(\"Done. usec=%lld\\n\\n\", usec()-start);\n    }\n\n    /* Benchmarks */\n    {\n        zl = ziplistNew();\n        iteration = accurate ? 
100000 : 100;\n        for (int i=0; i<iteration; i++) {\n            char buf[4096] = \"asdf\";\n            zl = ziplistPush(zl, (unsigned char*)buf, 4, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)buf, 40, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)buf, 400, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)buf, 4000, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"1\", 1, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"10\", 2, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"100\", 3, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"1000\", 4, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"10000\", 5, ZIPLIST_TAIL);\n            zl = ziplistPush(zl, (unsigned char*)\"100000\", 6, ZIPLIST_TAIL);\n        }\n\n        printf(\"Benchmark ziplistFind:\\n\");\n        {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *fptr = ziplistIndex(zl, ZIPLIST_HEAD);\n                fptr = ziplistFind(zl, fptr, (unsigned char*)\"nothing\", 7, 1);\n            }\n            printf(\"%lld\\n\", usec()-start);\n        }\n\n        printf(\"Benchmark ziplistIndex:\\n\");\n        {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                ziplistIndex(zl, 99999);\n            }\n            printf(\"%lld\\n\", usec()-start);\n        }\n\n        printf(\"Benchmark ziplistValidateIntegrity:\\n\");\n        {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                ziplistValidateIntegrity(zl, ziplistBlobLen(zl), 1, NULL, NULL);\n            }\n            printf(\"%lld\\n\", usec()-start);\n        }\n\n        printf(\"Benchmark ziplistCompare with string\\n\");\n        {\n            unsigned long long start = usec();\n            for (int i = 
0; i < 2000; i++) {\n                unsigned char *eptr = ziplistIndex(zl,0);\n                while (eptr != NULL) {\n                    ziplistCompare(eptr,(unsigned char*)\"nothing\",7);\n                    eptr = ziplistNext(zl,eptr);\n                }\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        printf(\"Benchmark ziplistCompare with number\\n\");\n        {\n            unsigned long long start = usec();\n            for (int i = 0; i < 2000; i++) {\n                unsigned char *eptr = ziplistIndex(zl,0);\n                while (eptr != NULL) {\n                    ziplistCompare(eptr,(unsigned char*)\"99999\",5);\n                    eptr = ziplistNext(zl,eptr);\n                }\n            }\n            printf(\"Done. usec=%lld\\n\", usec()-start);\n        }\n\n        zfree(zl);\n    }\n\n    printf(\"Stress __ziplistCascadeUpdate:\\n\");\n    {\n        char data[ZIP_BIG_PREVLEN];\n        zl = ziplistNew();\n        iteration = accurate ? 100000 : 100;\n        for (int i = 0; i < iteration; i++) {\n            zl = ziplistPush(zl, (unsigned char*)data, ZIP_BIG_PREVLEN-4, ZIPLIST_TAIL);\n        }\n        unsigned long long start = usec();\n        zl = ziplistPush(zl, (unsigned char*)data, ZIP_BIG_PREVLEN-3, ZIPLIST_HEAD);\n        printf(\"Done. usec=%lld\\n\\n\", usec()-start);\n        zfree(zl);\n    }\n\n    printf(\"Edge cases of __ziplistCascadeUpdate:\\n\");\n    {\n        /* Inserting an entry with data length greater than ZIP_BIG_PREVLEN-4\n         * will lead to a cascade update. 
*/\n        size_t s1 = ZIP_BIG_PREVLEN-4, s2 = ZIP_BIG_PREVLEN-3;\n        zl = ziplistNew();\n\n        zlentry e[4] = {{.prevrawlensize = 0, .prevrawlen = 0, .lensize = 0,\n                         .len = 0, .headersize = 0, .encoding = 0, .p = NULL}};\n\n        zl = insertHelper(zl, 'a', s1, ZIPLIST_ENTRY_HEAD(zl));\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1 && e[0].prevrawlen == 0);\n        assert(compareHelper(zl, 'a', s1, 0));\n        ziplistRepr(zl);\n\n        /* No expand. */\n        zl = insertHelper(zl, 'b', s1, ZIPLIST_ENTRY_HEAD(zl));\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1 && e[0].prevrawlen == 0);\n        assert(compareHelper(zl, 'b', s1, 0));\n\n        assert(e[1].prevrawlensize == 1 && e[1].prevrawlen == strEntryBytesSmall(s1));\n        assert(compareHelper(zl, 'a', s1, 1));\n\n        ziplistRepr(zl);\n\n        /* Expand(tail included). */\n        zl = insertHelper(zl, 'c', s2, ZIPLIST_ENTRY_HEAD(zl));\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1 && e[0].prevrawlen == 0);\n        assert(compareHelper(zl, 'c', s2, 0));\n\n        assert(e[1].prevrawlensize == 5 && e[1].prevrawlen == strEntryBytesSmall(s2));\n        assert(compareHelper(zl, 'b', s1, 1));\n\n        assert(e[2].prevrawlensize == 5 && e[2].prevrawlen == strEntryBytesLarge(s1));\n        assert(compareHelper(zl, 'a', s1, 2));\n\n        ziplistRepr(zl);\n\n        /* Expand(only previous head entry). 
*/\n        zl = insertHelper(zl, 'd', s2, ZIPLIST_ENTRY_HEAD(zl));\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1 && e[0].prevrawlen == 0);\n        assert(compareHelper(zl, 'd', s2, 0));\n\n        assert(e[1].prevrawlensize == 5 && e[1].prevrawlen == strEntryBytesSmall(s2));\n        assert(compareHelper(zl, 'c', s2, 1));\n\n        assert(e[2].prevrawlensize == 5 && e[2].prevrawlen == strEntryBytesLarge(s2));\n        assert(compareHelper(zl, 'b', s1, 2));\n\n        assert(e[3].prevrawlensize == 5 && e[3].prevrawlen == strEntryBytesLarge(s1));\n        assert(compareHelper(zl, 'a', s1, 3));\n\n        ziplistRepr(zl);\n\n        /* Delete from the middle. */\n        unsigned char *p = ziplistIndex(zl, 2);\n        zl = ziplistDelete(zl, &p);\n        verify(zl, e);\n\n        assert(e[0].prevrawlensize == 1 && e[0].prevrawlen == 0);\n        assert(compareHelper(zl, 'd', s2, 0));\n\n        assert(e[1].prevrawlensize == 5 && e[1].prevrawlen == strEntryBytesSmall(s2));\n        assert(compareHelper(zl, 'c', s2, 1));\n\n        assert(e[2].prevrawlensize == 5 && e[2].prevrawlen == strEntryBytesLarge(s2));\n        assert(compareHelper(zl, 'a', s1, 2));\n\n        ziplistRepr(zl);\n\n        zfree(zl);\n    }\n\n    printf(\"__ziplistInsert nextdiff == -4 && reqlen < 4 (issue #7170):\\n\");\n    {\n        zl = ziplistNew();\n\n        /* Set some lengths just below the critical point (254). */\n        char A_252[253] = {0}, A_250[251] = {0};\n        memset(A_252, 'A', 252);\n        memset(A_250, 'A', 250);\n\n        /* After the rpush, the list looks like: [one two A_252 A_250 three 10] */\n        zl = ziplistPush(zl, (unsigned char*)\"one\", 3, ZIPLIST_TAIL);\n        zl = ziplistPush(zl, (unsigned char*)\"two\", 3, ZIPLIST_TAIL);\n        zl = ziplistPush(zl, (unsigned char*)A_252, strlen(A_252), ZIPLIST_TAIL);\n        zl = ziplistPush(zl, (unsigned char*)A_250, strlen(A_250), ZIPLIST_TAIL);\n        zl = ziplistPush(zl, (unsigned 
char*)\"three\", 5, ZIPLIST_TAIL);\n        zl = ziplistPush(zl, (unsigned char*)\"10\", 2, ZIPLIST_TAIL);\n        ziplistRepr(zl);\n\n        p = ziplistIndex(zl, 2);\n        if (!ziplistCompare(p, (unsigned char*)A_252, strlen(A_252))) {\n            printf(\"ERROR: not \\\"A_252\\\"\\n\");\n            return 1;\n        }\n\n        /* When we remove A_252 the list becomes: [one two A_250 three 10].\n         * A_250's prev node is now node two, which is quite small, so A_250's\n         * prevlenSize shrinks to 1 and its total size becomes 253 (1+2+250).\n         * The prev node of node three is still A_250; we do not shrink node\n         * three's prevlenSize, it stays at 5 bytes. */\n        zl = ziplistDelete(zl, &p);\n        ziplistRepr(zl);\n\n        p = ziplistIndex(zl, 3);\n        if (!ziplistCompare(p, (unsigned char*)\"three\", 5)) {\n            printf(\"ERROR: not \\\"three\\\"\\n\");\n            return 1;\n        }\n\n        /* Insert a node after A_250; the list becomes: [one two A_250 10 three 10].\n         * Because the new node is quite small, node three's prevlenSize\n         * shrinks to 1. */\n        zl = ziplistInsert(zl, p, (unsigned char*)\"10\", 2);\n        ziplistRepr(zl);\n\n        /* Last element should equal 10 */\n        p = ziplistIndex(zl, -1);\n        if (!ziplistCompare(p, (unsigned char*)\"10\", 2)) {\n            printf(\"ERROR: not \\\"10\\\"\\n\");\n            return 1;\n        }\n\n        zfree(zl);\n    }\n\n    printf(\"ALL TESTS PASSED!\\n\");\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/ziplist.h",
    "content": "/*\n * Copyright (c) 2009-2012, Pieter Noordhuis <pcnoordhuis at gmail dot com>\n * Copyright (c) 2009-current, Redis Ltd.\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions are met:\n *\n *   * Redistributions of source code must retain the above copyright notice,\n *     this list of conditions and the following disclaimer.\n *   * Redistributions in binary form must reproduce the above copyright\n *     notice, this list of conditions and the following disclaimer in the\n *     documentation and/or other materials provided with the distribution.\n *   * Neither the name of Redis nor the names of its contributors may be used\n *     to endorse or promote products derived from this software without\n *     specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n * POSSIBILITY OF SUCH DAMAGE.\n */\n\n#ifndef _ZIPLIST_H\n#define _ZIPLIST_H\n\n#define ZIPLIST_HEAD 0\n#define ZIPLIST_TAIL 1\n\n/* Each entry in the ziplist is either a string or an integer. */\ntypedef struct {\n    /* When string is used, it is provided with the length (slen). 
*/\n    unsigned char *sval;\n    unsigned int slen;\n    /* When integer is used, 'sval' is NULL, and lval holds the value. */\n    long long lval;\n} ziplistEntry;\n\nunsigned char *ziplistNew(void);\nunsigned char *ziplistMerge(unsigned char **first, unsigned char **second);\nunsigned char *ziplistPush(unsigned char *zl, unsigned char *s, unsigned int slen, int where);\nunsigned char *ziplistIndex(unsigned char *zl, int index);\nunsigned char *ziplistNext(unsigned char *zl, unsigned char *p);\nunsigned char *ziplistPrev(unsigned char *zl, unsigned char *p);\nunsigned int ziplistGet(unsigned char *p, unsigned char **sval, unsigned int *slen, long long *lval);\nunsigned char *ziplistInsert(unsigned char *zl, unsigned char *p, unsigned char *s, unsigned int slen);\nunsigned char *ziplistDelete(unsigned char *zl, unsigned char **p);\nunsigned char *ziplistDeleteRange(unsigned char *zl, int index, unsigned int num);\nunsigned char *ziplistReplace(unsigned char *zl, unsigned char *p, unsigned char *s, unsigned int slen);\nunsigned int ziplistCompare(unsigned char *p, unsigned char *s, unsigned int slen);\nunsigned char *ziplistFind(unsigned char *zl, unsigned char *p, unsigned char *vstr, unsigned int vlen, unsigned int skip);\nunsigned int ziplistLen(unsigned char *zl);\nsize_t ziplistBlobLen(unsigned char *zl);\nvoid ziplistRepr(unsigned char *zl);\ntypedef int (*ziplistValidateEntryCB)(unsigned char* p, unsigned int head_count, void* userdata);\nint ziplistValidateIntegrity(unsigned char *zl, size_t size, int deep,\n                             ziplistValidateEntryCB entry_cb, void *cb_userdata);\nvoid ziplistRandomPair(unsigned char *zl, unsigned long total_count, ziplistEntry *key, ziplistEntry *val);\nvoid ziplistRandomPairs(unsigned char *zl, unsigned int count, ziplistEntry *keys, ziplistEntry *vals);\nunsigned int ziplistRandomPairsUnique(unsigned char *zl, unsigned int count, ziplistEntry *keys, ziplistEntry *vals);\nint ziplistSafeToAdd(unsigned char* zl, 
size_t add);\n\n#ifdef REDIS_TEST\nint ziplistTest(int argc, char *argv[], int flags);\n#endif\n\n#endif /* _ZIPLIST_H */\n"
  },
  {
    "path": "src/zipmap.c",
    "content": "/* String -> String Map data structure optimized for size.\n * This file implements a data structure mapping strings to other strings\n * implementing an O(n) lookup data structure designed to be very memory\n * efficient.\n *\n * The Redis Hash type uses this data structure for hashes composed of a small\n * number of elements, to switch to a hash table once a given number of\n * elements is reached.\n *\n * Given that many times Redis Hashes are used to represent objects composed\n * of few fields, this is a very big win in terms of used memory.\n *\n * --------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* Memory layout of a zipmap, for the map \"foo\" => \"bar\", \"hello\" => \"world\":\n *\n * <zmlen><len>\"foo\"<len><free>\"bar\"<len>\"hello\"<len><free>\"world\"\n *\n * <zmlen> is 1 byte length that holds the current size of the zipmap.\n * When the zipmap length is greater than or equal to 254, this value\n * is not used and the zipmap needs to be traversed to find out the length.\n *\n * <len> is the length of the following string (key or value).\n * <len> lengths are encoded in a single value or in a 5 bytes value.\n * If the first byte value (as an unsigned 8 bit value) is between 0 and\n * 253, it's a single-byte length. If it is 254 then a four bytes unsigned\n * integer follows (in the host byte ordering). A value of 255 is used to\n * signal the end of the hash.\n *\n * <free> is the number of free unused bytes after the string, resulting\n * from modification of values associated to a key. 
For instance if \"foo\"\n * is set to \"bar\", and later \"foo\" is set to \"hi\", the entry will have a\n * free byte to use if the value enlarges again later, or even in\n * order to add a key/value pair if it fits.\n *\n * <free> is always an unsigned 8 bit number, because if after an\n * update operation there are more than a few free bytes, the zipmap will be\n * reallocated to make sure it is as small as possible.\n *\n * The most compact representation of the above two-element hash is actually:\n *\n * \"\\x02\\x03foo\\x03\\x00bar\\x05hello\\x05\\x00world\\xff\"\n *\n * Note that because keys and values are length-prefixed \"objects\",\n * the lookup will take O(N) where N is the number of elements\n * in the zipmap and *not* the number of bytes needed to represent the zipmap.\n * This keeps the constant factors low.\n */\n\n#include <stdio.h>\n#include <string.h>\n#include <assert.h>\n#include \"zmalloc.h\"\n#include \"endianconv.h\"\n\n#define ZIPMAP_BIGLEN 254\n#define ZIPMAP_END 255\n\n/* The following defines the max value for the <free> field described in the\n * comments above, that is, the max number of trailing bytes in a value. */\n#define ZIPMAP_VALUE_MAX_FREE 4\n\n/* The following macro returns the number of bytes needed to encode the length\n * for the integer value _l, that is, 1 byte for lengths < ZIPMAP_BIGLEN and\n * 5 bytes for all the other lengths. */\n#define ZIPMAP_LEN_BYTES(_l) (((_l) < ZIPMAP_BIGLEN) ? 1 : sizeof(unsigned int)+1)\n\n/* Create a new empty zipmap. 
*/\nunsigned char *zipmapNew(void) {\n    unsigned char *zm = zmalloc(2);\n\n    zm[0] = 0; /* Length */\n    zm[1] = ZIPMAP_END;\n    return zm;\n}\n\n/* Decode the encoded length pointed by 'p' */\nstatic unsigned int zipmapDecodeLength(unsigned char *p) {\n    unsigned int len = *p;\n\n    if (len < ZIPMAP_BIGLEN) return len;\n    memcpy(&len,p+1,sizeof(unsigned int));\n    memrev32ifbe(&len);\n    return len;\n}\n\nstatic unsigned int zipmapGetEncodedLengthSize(unsigned char *p) {\n    return (*p < ZIPMAP_BIGLEN) ? 1: 5;\n}\n\n/* Encode the length 'l' writing it in 'p'. If p is NULL it just returns\n * the amount of bytes required to encode such a length. */\nstatic unsigned int zipmapEncodeLength(unsigned char *p, unsigned int len) {\n    if (p == NULL) {\n        return ZIPMAP_LEN_BYTES(len);\n    } else {\n        if (len < ZIPMAP_BIGLEN) {\n            p[0] = len;\n            return 1;\n        } else {\n            p[0] = ZIPMAP_BIGLEN;\n            memcpy(p+1,&len,sizeof(len));\n            memrev32ifbe(p+1);\n            return 1+sizeof(len);\n        }\n    }\n}\n\n/* Search for a matching key, returning a pointer to the entry inside the\n * zipmap. Returns NULL if the key is not found.\n *\n * If NULL is returned, and totlen is not NULL, it is set to the entire\n * size of the zipmap, so that the calling function will be able to\n * reallocate the original zipmap to make room for more entries. */\nstatic unsigned char *zipmapLookupRaw(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned int *totlen) {\n    unsigned char *p = zm+1, *k = NULL;\n    unsigned int l,llen;\n\n    while(*p != ZIPMAP_END) {\n        unsigned char free;\n\n        /* Match or skip the key */\n        l = zipmapDecodeLength(p);\n        llen = zipmapEncodeLength(NULL,l);\n        if (key != NULL && k == NULL && l == klen && !memcmp(p+llen,key,l)) {\n            /* Only return when the user doesn't care\n             * for the total length of the zipmap. 
*/\n            if (totlen != NULL) {\n                k = p;\n            } else {\n                return p;\n            }\n        }\n        p += llen+l;\n        /* Skip the value as well */\n        l = zipmapDecodeLength(p);\n        p += zipmapEncodeLength(NULL,l);\n        free = p[0];\n        p += l+1+free; /* +1 to skip the free byte */\n    }\n    if (totlen != NULL) *totlen = (unsigned int)(p-zm)+1;\n    return k;\n}\n\nstatic unsigned long zipmapRequiredLength(unsigned int klen, unsigned int vlen) {\n    unsigned int l;\n\n    l = klen+vlen+3;\n    if (klen >= ZIPMAP_BIGLEN) l += 4;\n    if (vlen >= ZIPMAP_BIGLEN) l += 4;\n    return l;\n}\n\n/* Return the total amount used by a key (encoded length + payload) */\nstatic unsigned int zipmapRawKeyLength(unsigned char *p) {\n    unsigned int l = zipmapDecodeLength(p);\n    return zipmapEncodeLength(NULL,l) + l;\n}\n\n/* Return the total amount used by a value\n * (encoded length + single byte free count + payload) */\nstatic unsigned int zipmapRawValueLength(unsigned char *p) {\n    unsigned int l = zipmapDecodeLength(p);\n    unsigned int used;\n\n    used = zipmapEncodeLength(NULL,l);\n    used += p[used] + 1 + l;\n    return used;\n}\n\n/* If 'p' points to a key, this function returns the total amount of\n * bytes used to store this entry (entry = key + associated value + trailing\n * free space if any). */\nstatic unsigned int zipmapRawEntryLength(unsigned char *p) {\n    unsigned int l = zipmapRawKeyLength(p);\n    return l + zipmapRawValueLength(p+l);\n}\n\nstatic inline unsigned char *zipmapResize(unsigned char *zm, unsigned int len) {\n    zm = zrealloc(zm, len);\n    zm[len-1] = ZIPMAP_END;\n    return zm;\n}\n\n/* Set key to value, creating the key if it does not already exist.\n * If 'update' is not NULL, *update is set to 1 if the key was\n * already present, otherwise to 0. 
*/\nunsigned char *zipmapSet(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned char *val, unsigned int vlen, int *update) {\n    unsigned int zmlen, offset;\n    unsigned int freelen, reqlen = zipmapRequiredLength(klen,vlen);\n    unsigned int empty, vempty;\n    unsigned char *p;\n\n    freelen = reqlen;\n    if (update) *update = 0;\n    p = zipmapLookupRaw(zm,key,klen,&zmlen);\n    if (p == NULL) {\n        /* Key not found: enlarge */\n        zm = zipmapResize(zm, zmlen+reqlen);\n        p = zm+zmlen-1;\n        zmlen = zmlen+reqlen;\n\n        /* Increase zipmap length (this is an insert) */\n        if (zm[0] < ZIPMAP_BIGLEN) zm[0]++;\n    } else {\n        /* Key found. Is there enough space for the new value? */\n        /* Compute the total length: */\n        if (update) *update = 1;\n        freelen = zipmapRawEntryLength(p);\n        if (freelen < reqlen) {\n            /* Store the offset of this key within the current zipmap, so\n             * it can be resized. Then, move the tail backwards so this\n             * pair fits at the current position. */\n            offset = p-zm;\n            zm = zipmapResize(zm, zmlen-freelen+reqlen);\n            p = zm+offset;\n\n            /* The +1 in the number of bytes to be moved is caused by the\n             * end-of-zipmap byte. Note: the *original* zmlen is used. */\n            memmove(p+reqlen, p+freelen, zmlen-(offset+freelen+1));\n            zmlen = zmlen-freelen+reqlen;\n            freelen = reqlen;\n        }\n    }\n\n    /* We now have a suitable block where the key/value entry can\n     * be written. If there is too much free space, move the tail\n     * of the zipmap a few bytes to the front and shrink the zipmap,\n     * as we want zipmaps to be very space efficient. */\n    empty = freelen-reqlen;\n    if (empty >= ZIPMAP_VALUE_MAX_FREE) {\n        /* First, move the tail <empty> bytes to the front, then resize\n         * the zipmap to be <empty> bytes smaller. 
*/\n        offset = p-zm;\n        memmove(p+reqlen, p+freelen, zmlen-(offset+freelen+1));\n        zmlen -= empty;\n        zm = zipmapResize(zm, zmlen);\n        p = zm+offset;\n        vempty = 0;\n    } else {\n        vempty = empty;\n    }\n\n    /* Just write the key + value and we are done. */\n    /* Key: */\n    p += zipmapEncodeLength(p,klen);\n    assert(klen < freelen);\n    memcpy(p,key,klen);\n    p += klen;\n    /* Value: */\n    p += zipmapEncodeLength(p,vlen);\n    *p++ = vempty;\n    memcpy(p,val,vlen);\n    return zm;\n}\n\n/* Remove the specified key. If 'deleted' is not NULL the pointed integer is\n * set to 0 if the key was not found, to 1 if it was found and deleted. */\nunsigned char *zipmapDel(unsigned char *zm, unsigned char *key, unsigned int klen, int *deleted) {\n    unsigned int zmlen, freelen;\n    unsigned char *p = zipmapLookupRaw(zm,key,klen,&zmlen);\n    if (p) {\n        freelen = zipmapRawEntryLength(p);\n        memmove(p, p+freelen, zmlen-((p-zm)+freelen+1));\n        zm = zipmapResize(zm, zmlen-freelen);\n\n        /* Decrease zipmap length */\n        if (zm[0] < ZIPMAP_BIGLEN) zm[0]--;\n\n        if (deleted) *deleted = 1;\n    } else {\n        if (deleted) *deleted = 0;\n    }\n    return zm;\n}\n\n/* Call before iterating through elements via zipmapNext() */\nunsigned char *zipmapRewind(unsigned char *zm) {\n    return zm+1;\n}\n\n/* This function is used to iterate through all the zipmap elements.\n * In the first call the first argument is the pointer to the zipmap + 1.\n * In the next calls what zipmapNext returns is used as first argument.\n * Example:\n *\n * unsigned char *i = zipmapRewind(my_zipmap);\n * while((i = zipmapNext(i,&key,&klen,&value,&vlen)) != NULL) {\n *     printf(\"%d bytes key at %p\\n\", klen, key);\n *     printf(\"%d bytes value at %p\\n\", vlen, value);\n * }\n */\nunsigned char *zipmapNext(unsigned char *zm, unsigned char **key, unsigned int *klen, unsigned char **value, unsigned int *vlen) 
{\n    if (zm[0] == ZIPMAP_END) return NULL;\n    if (key) {\n        *key = zm;\n        *klen = zipmapDecodeLength(zm);\n        *key += ZIPMAP_LEN_BYTES(*klen);\n    }\n    zm += zipmapRawKeyLength(zm);\n    if (value) {\n        *value = zm+1;\n        *vlen = zipmapDecodeLength(zm);\n        *value += ZIPMAP_LEN_BYTES(*vlen);\n    }\n    zm += zipmapRawValueLength(zm);\n    return zm;\n}\n\n/* Search for a key and retrieve the pointer and length of the associated value.\n * If the key is found the function returns 1, otherwise 0. */\nint zipmapGet(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned char **value, unsigned int *vlen) {\n    unsigned char *p;\n\n    if ((p = zipmapLookupRaw(zm,key,klen,NULL)) == NULL) return 0;\n    p += zipmapRawKeyLength(p);\n    *vlen = zipmapDecodeLength(p);\n    *value = p + ZIPMAP_LEN_BYTES(*vlen) + 1;\n    return 1;\n}\n\n/* Return 1 if the key exists, otherwise 0 is returned. */\nint zipmapExists(unsigned char *zm, unsigned char *key, unsigned int klen) {\n    return zipmapLookupRaw(zm,key,klen,NULL) != NULL;\n}\n\n/* Return the number of entries inside a zipmap */\nunsigned int zipmapLen(unsigned char *zm) {\n    unsigned int len = 0;\n    if (zm[0] < ZIPMAP_BIGLEN) {\n        len = zm[0];\n    } else {\n        unsigned char *p = zipmapRewind(zm);\n        while((p = zipmapNext(p,NULL,NULL,NULL,NULL)) != NULL) len++;\n\n        /* Re-store length if small enough */\n        if (len < ZIPMAP_BIGLEN) zm[0] = len;\n    }\n    return len;\n}\n\n/* Return the raw size in bytes of a zipmap, so that we can serialize\n * the zipmap on disk (or wherever needed) by just writing the returned\n * amount of bytes of the C array starting at the zipmap pointer. 
*/\nsize_t zipmapBlobLen(unsigned char *zm) {\n    unsigned int totlen;\n    zipmapLookupRaw(zm,NULL,0,&totlen);\n    return totlen;\n}\n\n/* Validate the integrity of the data structure.\n * When `deep` is 0, only the integrity of the header is validated.\n * When `deep` is 1, we scan all the entries one by one. */\nint zipmapValidateIntegrity(unsigned char *zm, size_t size, int deep) {\n#define OUT_OF_RANGE(p) ( \\\n        (p) < zm + 2 || \\\n        (p) > zm + size - 1)\n    unsigned int l, s, e;\n\n    /* check that we can actually read the header (or ZIPMAP_END). */\n    if (size < 2)\n        return 0;\n\n    /* the last byte must be the terminator. */\n    if (zm[size-1] != ZIPMAP_END)\n        return 0;\n\n    if (!deep)\n        return 1;\n\n    unsigned int count = 0;\n    unsigned char *p = zm + 1; /* skip the count */\n    while(*p != ZIPMAP_END) {\n        /* read the field name length encoding type */\n        s = zipmapGetEncodedLengthSize(p);\n        /* make sure the entry length doesn't reach outside the edge of the zipmap */\n        if (OUT_OF_RANGE(p+s))\n            return 0;\n\n        /* read the field name length */\n        l = zipmapDecodeLength(p);\n        p += s; /* skip the encoded field size */\n        p += l; /* skip the field */\n\n        /* make sure the entry doesn't reach outside the edge of the zipmap */\n        if (OUT_OF_RANGE(p))\n            return 0;\n\n        /* read the value length encoding type */\n        s = zipmapGetEncodedLengthSize(p);\n        /* make sure the entry length doesn't reach outside the edge of the zipmap */\n        if (OUT_OF_RANGE(p+s))\n            return 0;\n\n        /* read the value length */\n        l = zipmapDecodeLength(p);\n        p += s; /* skip the encoded value size */\n        e = *p++; /* skip the encoded free space (always encoded in one byte) */\n        p += l+e; /* skip the value and free space */\n        count++;\n\n        /* make sure the entry doesn't reach outside the 
edge of the zipmap */\n        if (OUT_OF_RANGE(p))\n            return 0;\n    }\n\n    /* check that the zipmap is not empty. */\n    if (count == 0) return 0;\n\n    /* check that the count in the header is correct */\n    if (zm[0] != ZIPMAP_BIGLEN && zm[0] != count)\n        return 0;\n\n    return 1;\n#undef OUT_OF_RANGE\n}\n\n#ifdef REDIS_TEST\nstatic void zipmapRepr(unsigned char *p) {\n    unsigned int l;\n\n    printf(\"{status %u}\",*p++);\n    while(1) {\n        if (p[0] == ZIPMAP_END) {\n            printf(\"{end}\");\n            break;\n        } else {\n            unsigned char e;\n\n            l = zipmapDecodeLength(p);\n            printf(\"{key %u}\",l);\n            p += zipmapEncodeLength(NULL,l);\n            if (l != 0 && fwrite(p,l,1,stdout) == 0) perror(\"fwrite\");\n            p += l;\n\n            l = zipmapDecodeLength(p);\n            printf(\"{value %u}\",l);\n            p += zipmapEncodeLength(NULL,l);\n            e = *p++;\n            if (l != 0 && fwrite(p,l,1,stdout) == 0) perror(\"fwrite\");\n            p += l+e;\n            if (e) {\n                printf(\"[\");\n                while(e--) printf(\".\");\n                printf(\"]\");\n            }\n        }\n    }\n    printf(\"\\n\");\n}\n\n#define UNUSED(x) (void)(x)\nint zipmapTest(int argc, char *argv[], int flags) {\n    unsigned char *zm;\n\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    zm = zipmapNew();\n\n    zm = zipmapSet(zm,(unsigned char*) \"name\",4, (unsigned char*) \"foo\",3,NULL);\n    zm = zipmapSet(zm,(unsigned char*) \"surname\",7, (unsigned char*) \"foo\",3,NULL);\n    zm = zipmapSet(zm,(unsigned char*) \"age\",3, (unsigned char*) \"foo\",3,NULL);\n    zipmapRepr(zm);\n\n    zm = zipmapSet(zm,(unsigned char*) \"hello\",5, (unsigned char*) \"world!\",6,NULL);\n    zm = zipmapSet(zm,(unsigned char*) \"foo\",3, (unsigned char*) \"bar\",3,NULL);\n    zm = zipmapSet(zm,(unsigned char*) \"foo\",3, (unsigned char*) \"!\",1,NULL);\n   
 zipmapRepr(zm);\n    zm = zipmapSet(zm,(unsigned char*) \"foo\",3, (unsigned char*) \"12345\",5,NULL);\n    zipmapRepr(zm);\n    zm = zipmapSet(zm,(unsigned char*) \"new\",3, (unsigned char*) \"xx\",2,NULL);\n    zm = zipmapSet(zm,(unsigned char*) \"noval\",5, (unsigned char*) \"\",0,NULL);\n    zipmapRepr(zm);\n    zm = zipmapDel(zm,(unsigned char*) \"new\",3,NULL);\n    zipmapRepr(zm);\n\n    printf(\"\\nLook up large key:\\n\");\n    {\n        unsigned char buf[512];\n        unsigned char *value;\n        unsigned int vlen, i;\n        for (i = 0; i < 512; i++) buf[i] = 'a';\n\n        zm = zipmapSet(zm,buf,512,(unsigned char*) \"long\",4,NULL);\n        if (zipmapGet(zm,buf,512,&value,&vlen)) {\n            printf(\"  <long key> is associated to the %d bytes value: %.*s\\n\",\n                vlen, vlen, value);\n        }\n    }\n\n    printf(\"\\nPerform a direct lookup:\\n\");\n    {\n        unsigned char *value;\n        unsigned int vlen;\n\n        if (zipmapGet(zm,(unsigned char*) \"foo\",3,&value,&vlen)) {\n            printf(\"  foo is associated to the %d bytes value: %.*s\\n\",\n                vlen, vlen, value);\n        }\n    }\n    printf(\"\\nIterate through elements:\\n\");\n    {\n        unsigned char *i = zipmapRewind(zm);\n        unsigned char *key, *value;\n        unsigned int klen, vlen;\n\n        while((i = zipmapNext(i,&key,&klen,&value,&vlen)) != NULL) {\n            printf(\"  %d:%.*s => %d:%.*s\\n\", klen, klen, key, vlen, vlen, value);\n        }\n    }\n    zfree(zm);\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/zipmap.h",
    "content": "/* String -> String Map data structure optimized for size.\n *\n * See zipmap.c for more info.\n *\n * --------------------------------------------------------------------------\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef _ZIPMAP_H\n#define _ZIPMAP_H\n\nunsigned char *zipmapNew(void);\nunsigned char *zipmapSet(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned char *val, unsigned int vlen, int *update);\nunsigned char *zipmapDel(unsigned char *zm, unsigned char *key, unsigned int klen, int *deleted);\nunsigned char *zipmapRewind(unsigned char *zm);\nunsigned char *zipmapNext(unsigned char *zm, unsigned char **key, unsigned int *klen, unsigned char **value, unsigned int *vlen);\nint zipmapGet(unsigned char *zm, unsigned char *key, unsigned int klen, unsigned char **value, unsigned int *vlen);\nint zipmapExists(unsigned char *zm, unsigned char *key, unsigned int klen);\nunsigned int zipmapLen(unsigned char *zm);\nsize_t zipmapBlobLen(unsigned char *zm);\nvoid zipmapRepr(unsigned char *p);\nint zipmapValidateIntegrity(unsigned char *zm, size_t size, int deep);\n\n#ifdef REDIS_TEST\nint zipmapTest(int argc, char *argv[], int flags);\n#endif\n\n#endif\n"
  },
  {
    "path": "src/zmalloc.c",
    "content": "/* zmalloc - total amount of allocated memory aware version of malloc()\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"fmacros.h\"\n#include \"config.h\"\n#include \"solarisfixes.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdint.h>\n#include <unistd.h>\n\n#ifdef __linux__\n#include <sys/mman.h>\n#endif\n\n/* This function provide us access to the original libc free(). This is useful\n * for instance to free results obtained by backtrace_symbols(). We need\n * to define this function before including zmalloc.h that may shadow the\n * free implementation if we use jemalloc or another non standard allocator. */\nvoid zlibc_free(void *ptr) {\n    free(ptr);\n}\n\n#include <string.h>\n#include \"zmalloc.h\"\n#include \"atomicvar.h\"\n#include \"redisassert.h\"\n\n#define UNUSED(x) ((void)(x))\n\n#ifdef HAVE_MALLOC_SIZE\n#define PREFIX_SIZE (0)\n#else\n/* Use at least 8 bytes alignment on all systems. */\n#if SIZE_MAX < 0xffffffffffffffffull\n#define PREFIX_SIZE 8\n#else\n#define PREFIX_SIZE (sizeof(size_t))\n#endif\n#endif\n\n/* When using the libc allocator, use a minimum allocation size to match the\n * jemalloc behavior that doesn't return NULL in this case.\n */\n#define MALLOC_MIN_SIZE(x) ((x) > 0 ? (x) : sizeof(long))\n\n/* Explicitly override malloc/free etc when using tcmalloc. */\n#if defined(USE_TCMALLOC)\n#define malloc(size) tc_malloc(size)\n#define calloc(count,size) tc_calloc(count,size)\n#define realloc(ptr,size) tc_realloc(ptr,size)\n#define free(ptr) tc_free(ptr)\n/* Explicitly override malloc/free etc when using jemalloc. 
*/\n#elif defined(USE_JEMALLOC)\n#define malloc(size) je_malloc(size)\n#define calloc(count,size) je_calloc(count,size)\n#define realloc(ptr,size) je_realloc(ptr,size)\n#define free(ptr) je_free(ptr)\n#define mallocx(size,flags) je_mallocx(size,flags)\n#define rallocx(ptr,size,flags) je_rallocx(ptr,size,flags)\n#define dallocx(ptr,flags) je_dallocx(ptr,flags)\n#if defined(HAVE_ALLOC_WITH_USIZE)\nvoid *je_malloc_with_usize(size_t size, size_t *usize);\nvoid *je_calloc_with_usize(size_t num, size_t size, size_t *usize);\nvoid *je_realloc_with_usize(void *ptr, size_t size, size_t *old_usize, size_t *new_usize);\nvoid je_free_with_usize(void *ptr, size_t *usize);\n#define malloc_with_usize(size,usize) je_malloc_with_usize(size,usize)\n#define calloc_with_usize(num,size,usize) je_calloc_with_usize(num,size,usize)\n#define realloc_with_usize(ptr,size,old_usize,new_usize) je_realloc_with_usize(ptr,size,old_usize,new_usize)\n#define free_with_usize(ptr,usize) je_free_with_usize(ptr,usize)\n#endif\n#endif\n\n#define MAX_THREADS 16 /* Keep it a power of 2 so we can use '&' instead of '%'. 
*/\n#define THREAD_MASK (MAX_THREADS - 1)\n#define PEAK_CHECK_THRESHOLD (1024 * 100) /* 100KB */\n\ntypedef struct used_memory_entry {\n    redisAtomic long long used_memory;\n    redisAtomic long long last_peak_check;\n    char padding[CACHE_LINE_SIZE - sizeof(long long) - sizeof(long long)];\n} used_memory_entry;\n\nstatic __attribute__((aligned(CACHE_LINE_SIZE))) used_memory_entry used_memory[MAX_THREADS];\nstatic redisAtomic size_t num_active_threads = 0;\nstatic redisAtomic size_t zmalloc_peak = 0;\nstatic redisAtomic time_t zmalloc_peak_time = 0;\nstatic __thread long my_thread_index = -1;\n\nstatic inline void init_my_thread_index(void) {\n    if (unlikely(my_thread_index == -1)) {\n        atomicGetIncr(num_active_threads, my_thread_index, 1);\n        my_thread_index &= THREAD_MASK;\n    }\n}\n\nstatic void update_zmalloc_stat_alloc(long long bytes_delta) {\n    init_my_thread_index();\n\n    /* Per-thread allocation counter and the last counter value at which we ran a\n     * global peak check (throttles how often we call zmalloc_used_memory()). */\n    long long thread_used, thread_last_peak_check_used;\n    atomicIncrGet(used_memory[my_thread_index].used_memory, thread_used, bytes_delta);\n    atomicGet(used_memory[my_thread_index].last_peak_check, thread_last_peak_check_used);\n\n    /* Only run the (expensive) global used/peak check after this thread's\n     * allocation counter has advanced enough since the last check. */\n    if (unlikely(thread_used - thread_last_peak_check_used > PEAK_CHECK_THRESHOLD)) {\n        /* Snapshot of global used memory across all threads. */\n        size_t used_mem = zmalloc_used_memory();\n\n        /* Current published global peak. */\n        size_t published_peak;\n        atomicGet(zmalloc_peak, published_peak);\n\n        if (used_mem > published_peak) {\n            /* Try to publish `used_mem` as the new global peak.\n             *\n             * Another thread may update `zmalloc_peak` concurrently. 
Use a CAS loop:\n             * on failure, `old_peak` is refreshed with the latest peak value, and we\n             * retry only while our snapshot still exceeds it. */\n            size_t old_peak = published_peak;\n            while (used_mem > old_peak && !atomicCompareExchange(size_t, zmalloc_peak, old_peak, used_mem)) {\n                /* CAS failed: `old_peak` now holds the current `zmalloc_peak`. */\n            }\n\n            /* If we raised the peak, record when it was reached. */\n            if (used_mem > old_peak) {\n                atomicSet(zmalloc_peak_time, time(NULL));\n            }\n        }\n\n        /* Record the thread counter value at which we last ran a global peak check,\n         * to throttle future checks for this thread. */\n        atomicSet(used_memory[my_thread_index].last_peak_check, thread_used);\n    }\n}\n\nstatic void update_zmalloc_stat_free(long long num) {\n    init_my_thread_index();\n    atomicDecr(used_memory[my_thread_index].used_memory, num);\n}\n\nstatic void zmalloc_default_oom(size_t size) {\n    fprintf(stderr, \"zmalloc: Out of memory trying to allocate %zu bytes\\n\",\n        size);\n    fflush(stderr);\n    abort();\n}\n\nstatic void (*zmalloc_oom_handler)(size_t) = zmalloc_default_oom;\n\n#ifdef HAVE_MALLOC_SIZE\nvoid *extend_to_usable(void *ptr, size_t size) {\n    UNUSED(size);\n    return ptr;\n}\n#endif\n\n/* Try allocating memory, and return NULL if failed.\n * '*usable' is set to the usable size if non NULL. */\nstatic inline void *ztrymalloc_usable_internal(size_t size, size_t *usable) {\n    /* Possible overflow, return NULL, so that the caller can panic or handle a failed allocation. 
*/\n    if (size >= SIZE_MAX/2) return NULL;\n#ifdef HAVE_ALLOC_WITH_USIZE\n    void *ptr = malloc_with_usize(MALLOC_MIN_SIZE(size)+PREFIX_SIZE, &size);\n#else\n    void *ptr = malloc(MALLOC_MIN_SIZE(size)+PREFIX_SIZE);\n#endif\n    if (!ptr) return NULL;\n#ifdef HAVE_ALLOC_WITH_USIZE\n    update_zmalloc_stat_alloc(size);\n    if (usable) *usable = size;\n    return ptr;\n#elif HAVE_MALLOC_SIZE\n    size = zmalloc_size(ptr);\n    update_zmalloc_stat_alloc(size);\n    if (usable) *usable = size;\n    return ptr;\n#else\n    size = MALLOC_MIN_SIZE(size);\n    *((size_t*)ptr) = size;\n    update_zmalloc_stat_alloc(size+PREFIX_SIZE);\n    if (usable) *usable = size;\n    return (char*)ptr+PREFIX_SIZE;\n#endif\n}\n\nvoid *ztrymalloc_usable(size_t size, size_t *usable) {\n    size_t usable_size = 0;\n    void *ptr = ztrymalloc_usable_internal(size, &usable_size);\n#ifdef HAVE_MALLOC_SIZE\n    ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n/* Allocate memory or panic */\nvoid *zmalloc(size_t size) {\n    void *ptr = ztrymalloc_usable_internal(size, NULL);\n    if (!ptr) zmalloc_oom_handler(size);\n    return ptr;\n}\n\n/* Try allocating memory, and return NULL if failed. */\nvoid *ztrymalloc(size_t size) {\n    void *ptr = ztrymalloc_usable_internal(size, NULL);\n    return ptr;\n}\n\n/* Allocate memory or panic.\n * '*usable' is set to the usable size if non NULL. 
*/\nvoid *zmalloc_usable(size_t size, size_t *usable) {\n    size_t usable_size = 0;\n    void *ptr = ztrymalloc_usable_internal(size, &usable_size);\n    if (!ptr) zmalloc_oom_handler(size);\n#ifdef HAVE_MALLOC_SIZE\n    if (ptr) ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n#if defined(USE_JEMALLOC)\nvoid *zmalloc_with_flags(size_t size, int flags) {\n    if (size >= SIZE_MAX/2) zmalloc_oom_handler(size);\n    void *ptr = mallocx(size+PREFIX_SIZE, flags);\n    if (!ptr) zmalloc_oom_handler(size);\n    update_zmalloc_stat_alloc(zmalloc_size(ptr));\n    return ptr;\n}\n\nvoid *zrealloc_with_flags(void *ptr, size_t size, int flags) {\n    /* Not allocating anything, just redirect to free. */\n    if (size == 0 && ptr != NULL) {\n        zfree_with_flags(ptr, flags);\n        return NULL;\n    }\n\n    /* Not freeing anything, just redirect to malloc. */\n    if (ptr == NULL)\n        return zmalloc_with_flags(size, flags);\n\n    /* Possible overflow, return NULL, so that the caller can panic or handle a failed allocation. */\n    if (size >= SIZE_MAX/2) {\n        zfree_with_flags(ptr, flags);\n        zmalloc_oom_handler(size);\n        return NULL;\n    }\n\n    size_t oldsize = zmalloc_size(ptr);\n    void *newptr = rallocx(ptr, size, flags);\n    if (newptr == NULL) {\n        zmalloc_oom_handler(size);\n        return NULL;\n    }\n\n    update_zmalloc_stat_free(oldsize);\n    size = zmalloc_size(newptr);\n    update_zmalloc_stat_alloc(size);\n    return newptr;\n}\n\nvoid zfree_with_flags(void *ptr, int flags) {\n    if (ptr == NULL) return;\n    update_zmalloc_stat_free(zmalloc_size(ptr));\n    dallocx(ptr, flags);\n}\n#endif\n\n/* Allocation and free functions that bypass the thread cache\n * and go straight to the allocator arena bins.\n * Currently implemented only for jemalloc. Used for online defragmentation. 
*/\n#if (defined(USE_JEMALLOC) && defined(HAVE_DEFRAG))\nvoid *zmalloc_no_tcache(size_t size) {\n    if (size >= SIZE_MAX/2) zmalloc_oom_handler(size);\n    void *ptr = mallocx(size+PREFIX_SIZE, MALLOCX_TCACHE_NONE);\n    if (!ptr) zmalloc_oom_handler(size);\n    update_zmalloc_stat_alloc(zmalloc_size(ptr));\n    return ptr;\n}\n\nvoid zfree_no_tcache(void *ptr) {\n    if (ptr == NULL) return;\n    update_zmalloc_stat_free(zmalloc_size(ptr));\n    dallocx(ptr, MALLOCX_TCACHE_NONE);\n}\n#endif\n\n/* Try allocating memory and zero it, and return NULL if failed.\n * '*usable' is set to the usable size if non NULL. */\nstatic inline void *ztrycalloc_usable_internal(size_t size, size_t *usable) {\n    /* Possible overflow, return NULL, so that the caller can panic or handle a failed allocation. */\n    if (size >= SIZE_MAX/2) return NULL;\n#ifdef HAVE_ALLOC_WITH_USIZE\n    void *ptr = calloc_with_usize(1, MALLOC_MIN_SIZE(size)+PREFIX_SIZE, &size);\n#else\n    void *ptr = calloc(1, MALLOC_MIN_SIZE(size)+PREFIX_SIZE);\n#endif\n    if (ptr == NULL) return NULL;\n\n#ifdef HAVE_ALLOC_WITH_USIZE\n    update_zmalloc_stat_alloc(size);\n    if (usable) *usable = size;\n    return ptr;\n#elif HAVE_MALLOC_SIZE\n    size = zmalloc_size(ptr);\n    update_zmalloc_stat_alloc(size);\n    if (usable) *usable = size;\n    return ptr;\n#else\n    size = MALLOC_MIN_SIZE(size);\n    *((size_t*)ptr) = size;\n    update_zmalloc_stat_alloc(size+PREFIX_SIZE);\n    if (usable) *usable = size;\n    return (char*)ptr+PREFIX_SIZE;\n#endif\n}\n\nvoid *ztrycalloc_usable(size_t size, size_t *usable) {\n    size_t usable_size = 0;\n    void *ptr = ztrycalloc_usable_internal(size, &usable_size);\n#ifdef HAVE_MALLOC_SIZE\n    ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n/* Allocate memory and zero it or panic.\n * We need this wrapper to have a calloc compatible signature */\nvoid *zcalloc_num(size_t num, size_t size) {\n    /* Ensure 
that the arguments to calloc(), when multiplied, do not wrap.\n     * Division operations are susceptible to divide-by-zero errors so we also check it. */\n    if ((size == 0) || (num > SIZE_MAX/size)) {\n        zmalloc_oom_handler(SIZE_MAX);\n        return NULL;\n    }\n    void *ptr = ztrycalloc_usable_internal(num*size, NULL);\n    if (!ptr) zmalloc_oom_handler(num*size);\n    return ptr;\n}\n\n/* Allocate memory and zero it or panic */\nvoid *zcalloc(size_t size) {\n    void *ptr = ztrycalloc_usable_internal(size, NULL);\n    if (!ptr) zmalloc_oom_handler(size);\n    return ptr;\n}\n\n/* Try allocating memory, and return NULL if failed. */\nvoid *ztrycalloc(size_t size) {\n    void *ptr = ztrycalloc_usable_internal(size, NULL);\n    return ptr;\n}\n\n/* Allocate memory or panic.\n * '*usable' is set to the usable size if non NULL. */\nvoid *zcalloc_usable(size_t size, size_t *usable) {\n    size_t usable_size = 0;\n    void *ptr = ztrycalloc_usable_internal(size, &usable_size);\n    if (!ptr) zmalloc_oom_handler(size);\n#ifdef HAVE_MALLOC_SIZE\n    ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n/* Try reallocating memory, and return NULL if failed.\n * '*usable' is set to the usable size if non NULL\n * '*old_usable' is set to the previous usable size if non NULL. */\nstatic inline void *ztryrealloc_usable_internal(void *ptr, size_t size, size_t *usable, size_t *old_usable) {\n#ifndef HAVE_MALLOC_SIZE\n    void *realptr;\n#endif\n    size_t oldsize, dummy;\n    void *newptr;\n\n    if (!usable) usable = &dummy;\n    if (!old_usable) old_usable = &dummy;\n\n    /* not allocating anything, just redirect to free. */\n    if (size == 0 && ptr != NULL) {\n        zfree_usable(ptr, &oldsize);\n        *usable = 0;\n        *old_usable = oldsize;\n        return NULL;\n    }\n    /* Not freeing anything, just redirect to malloc. 
*/\n    if (ptr == NULL) {\n        *old_usable = 0;\n        return ztrymalloc_usable(size, usable);\n    }\n\n    /* Possible overflow, return NULL, so that the caller can panic or handle a failed allocation. */\n    if (size >= SIZE_MAX/2) {\n        zfree_usable(ptr, &oldsize);\n        *usable = 0;\n        *old_usable = oldsize;\n        return NULL;\n    }\n#ifdef HAVE_ALLOC_WITH_USIZE\n    newptr = realloc_with_usize(ptr, size, &oldsize, &size);\n    if (newptr == NULL) {\n        *usable = 0;\n        *old_usable = oldsize;\n        return NULL;\n    }\n    update_zmalloc_stat_free(oldsize);\n    update_zmalloc_stat_alloc(size);\n    *usable = size;\n    *old_usable = oldsize;\n    return newptr;\n#elif HAVE_MALLOC_SIZE\n    oldsize = zmalloc_size(ptr);\n    newptr = realloc(ptr,size);\n    if (newptr == NULL) {\n        *usable = 0;\n        *old_usable = oldsize;\n        return NULL;\n    }\n\n    update_zmalloc_stat_free(oldsize);\n    size = zmalloc_size(newptr);\n    update_zmalloc_stat_alloc(size);\n    *usable = size;\n    *old_usable = oldsize;\n    return newptr;\n#else\n    realptr = (char*)ptr-PREFIX_SIZE;\n    oldsize = *((size_t*)realptr);\n    newptr = realloc(realptr,size+PREFIX_SIZE);\n    if (newptr == NULL) {\n        *usable = 0;\n        *old_usable = oldsize;\n        return NULL;\n    }\n\n    *((size_t*)newptr) = size;\n    update_zmalloc_stat_free(oldsize);\n    update_zmalloc_stat_alloc(size);\n    *usable = size;\n    *old_usable = oldsize;\n    return (char*)newptr+PREFIX_SIZE;\n#endif\n}\n\nvoid *ztryrealloc_usable(void *ptr, size_t size, size_t *usable, size_t *old_usable) {\n    size_t usable_size = 0;\n    ptr = ztryrealloc_usable_internal(ptr, size, &usable_size, old_usable);\n#ifdef HAVE_MALLOC_SIZE\n    ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n/* Reallocate memory and zero it or panic */\nvoid *zrealloc(void *ptr, size_t size) {\n    ptr = 
ztryrealloc_usable_internal(ptr, size, NULL, NULL);\n    if (!ptr && size != 0) zmalloc_oom_handler(size);\n    return ptr;\n}\n\n/* Try Reallocating memory, and return NULL if failed. */\nvoid *ztryrealloc(void *ptr, size_t size) {\n    ptr = ztryrealloc_usable_internal(ptr, size, NULL, NULL);\n    return ptr;\n}\n\n/* Reallocate memory or panic.\n * '*old_usable' is set to the previous usable size if non NULL\n * '*usable' is set to the usable size if non NULL. */\nvoid *zrealloc_usable(void *ptr, size_t size, size_t *usable, size_t *old_usable) {\n    size_t usable_size = 0;\n    ptr = ztryrealloc_usable(ptr, size, &usable_size, old_usable);\n    if (!ptr && size != 0) zmalloc_oom_handler(size);\n#ifdef HAVE_MALLOC_SIZE\n    ptr = extend_to_usable(ptr, usable_size);\n#endif\n    if (usable) *usable = usable_size;\n    return ptr;\n}\n\n/* Provide zmalloc_size() for systems where this function is not provided by\n * malloc itself, given that in that case we store a header with this\n * information as the first bytes of every allocation. */\n#ifndef HAVE_MALLOC_SIZE\nsize_t zmalloc_size(void *ptr) {\n    void *realptr = (char*)ptr-PREFIX_SIZE;\n    size_t size = *((size_t*)realptr);\n    return size+PREFIX_SIZE;\n}\nsize_t zmalloc_usable_size(void *ptr) {\n    return zmalloc_size(ptr)-PREFIX_SIZE;\n}\n#endif\n\nvoid zfree(void *ptr) {\n    if (ptr == NULL) return;\n\n#ifdef HAVE_ALLOC_WITH_USIZE\n    size_t oldsize;\n    free_with_usize(ptr, &oldsize);\n    update_zmalloc_stat_free(oldsize);\n#elif HAVE_MALLOC_SIZE\n    update_zmalloc_stat_free(zmalloc_size(ptr));\n    free(ptr);\n#else\n    size_t oldsize;\n    void *realptr = (char*)ptr-PREFIX_SIZE;\n    oldsize = *((size_t*)realptr);\n    update_zmalloc_stat_free(oldsize+PREFIX_SIZE);\n    free(realptr);\n#endif\n}\n\n/* Similar to zfree, '*usable' is set to the usable size being freed. 
*/\nvoid zfree_usable(void *ptr, size_t *usable) {\n    size_t oldsize;\n#ifndef HAVE_MALLOC_SIZE\n    void *realptr;\n#endif\n\n    if (ptr == NULL) {\n        if (usable) *usable = 0;\n        return;\n    }\n\n#ifdef HAVE_ALLOC_WITH_USIZE\n    free_with_usize(ptr, &oldsize);\n    update_zmalloc_stat_free(oldsize);\n#elif HAVE_MALLOC_SIZE\n    update_zmalloc_stat_free(oldsize = zmalloc_size(ptr));\n    free(ptr);\n#else\n    realptr = (char*)ptr-PREFIX_SIZE;\n    oldsize = *((size_t*)realptr);\n    update_zmalloc_stat_free(oldsize+PREFIX_SIZE);\n    free(realptr);\n#endif\n    if (usable) *usable = oldsize;\n}\n\nchar *zstrdup_usable(const char *s, size_t *usable) {\n    size_t l = strlen(s)+1;\n    char *p = zmalloc_usable(l, usable);\n\n    memcpy(p,s,l);\n    return p;\n}\n\nchar *zstrdup(const char *s) {\n    return zstrdup_usable(s, NULL);\n}\n\nsize_t zmalloc_used_memory(void) {\n    size_t local_num_active_threads;\n    long long total_mem = 0;\n    atomicGet(num_active_threads,local_num_active_threads);\n    if (local_num_active_threads > MAX_THREADS) {\n        local_num_active_threads = MAX_THREADS;\n    }\n    for (size_t i = 0; i < local_num_active_threads; ++i) {\n        long long thread_used_mem;\n        atomicGet(used_memory[i].used_memory, thread_used_mem);\n        total_mem += thread_used_mem;\n    }\n    return total_mem;\n}\n\nsize_t zmalloc_get_peak_memory(void) {\n    size_t peak;\n    atomicGet(zmalloc_peak, peak);\n    return peak;\n}\n\ntime_t zmalloc_get_peak_memory_time(void) {\n    time_t t;\n    atomicGet(zmalloc_peak_time, t);\n    return t;\n}\n\nvoid zmalloc_set_oom_handler(void (*oom_handler)(size_t)) {\n    zmalloc_oom_handler = oom_handler;\n}\n\n/* Use 'MADV_DONTNEED' to release memory to operating system quickly.\n * We do that in a fork child process to avoid CoW when the parent modifies\n * these shared pages. 
*/\nvoid zmadvise_dontneed(void *ptr) {\n#if defined(USE_JEMALLOC) && defined(__linux__)\n    static size_t page_size = 0;\n    if (page_size == 0) page_size = sysconf(_SC_PAGESIZE);\n    size_t page_size_mask = page_size - 1;\n\n    size_t real_size = zmalloc_size(ptr);\n    if (real_size < page_size) return;\n\n    /* We need to align the pointer up to the next page boundary, because\n     * madvise() can only release memory in whole pages, so the unaligned\n     * head of the allocation must be skipped. */\n    char *aligned_ptr = (char *)(((size_t)ptr+page_size_mask) & ~page_size_mask);\n    real_size -= (aligned_ptr-(char*)ptr);\n    if (real_size >= page_size) {\n        madvise((void *)aligned_ptr, real_size&~page_size_mask, MADV_DONTNEED);\n    }\n#else\n    (void)(ptr);\n#endif\n}\n\n/* Get the RSS information in an OS-specific way.\n *\n * WARNING: the function zmalloc_get_rss() is not designed to be fast\n * and should not be called in the busy loops where Redis tries to release\n * memory expiring or swapping out objects.\n *\n * For this kind of \"fast RSS reporting\" usage, use instead the\n * function RedisEstimateRSS(), which is a much faster (and less precise)\n * version of the function. 
*/\n\n#if defined(HAVE_PROC_STAT)\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#endif\n\n/* Get the i'th field from \"/proc/self/stat\" note i is 1 based as appears in the 'proc' man page */\nint get_proc_stat_ll(int i, long long *res) {\n#if defined(HAVE_PROC_STAT)\n    char buf[4096];\n    int fd, l;\n    char *p, *x;\n\n    if ((fd = open(\"/proc/self/stat\",O_RDONLY)) == -1) return 0;\n    if ((l = read(fd,buf,sizeof(buf)-1)) <= 0) {\n        close(fd);\n        return 0;\n    }\n    close(fd);\n    buf[l] = '\\0';\n    if (buf[l-1] == '\\n') buf[l-1] = '\\0';\n\n    /* Skip pid and process name (surrounded with parentheses) */\n    p = strrchr(buf, ')');\n    if (!p) return 0;\n    p++;\n    while (*p == ' ') p++;\n    if (*p == '\\0') return 0;\n    i -= 3;\n    if (i < 0) return 0;\n\n    while (p && i--) {\n        p = strchr(p, ' ');\n        if (p) p++;\n        else return 0;\n    }\n    x = strchr(p,' ');\n    if (x) *x = '\\0';\n\n    *res = strtoll(p,&x,10);\n    if (*x != '\\0') return 0;\n    return 1;\n#else\n    UNUSED(i);\n    UNUSED(res);\n    return 0;\n#endif\n}\n\n#if defined(HAVE_PROC_STAT)\nsize_t zmalloc_get_rss(void) {\n    int page = sysconf(_SC_PAGESIZE);\n    long long rss;\n\n    /* RSS is the 24th field in /proc/<pid>/stat */\n    if (!get_proc_stat_ll(24, &rss)) return 0;\n    rss *= page;\n    return rss;\n}\n#elif defined(HAVE_TASKINFO)\n#include <sys/types.h>\n#include <sys/sysctl.h>\n#include <mach/task.h>\n#include <mach/mach_init.h>\n\nsize_t zmalloc_get_rss(void) {\n    task_t task = MACH_PORT_NULL;\n    struct task_basic_info t_info;\n    mach_msg_type_number_t t_info_count = TASK_BASIC_INFO_COUNT;\n\n    if (task_for_pid(current_task(), getpid(), &task) != KERN_SUCCESS)\n        return 0;\n    task_info(task, TASK_BASIC_INFO, (task_info_t)&t_info, &t_info_count);\n\n    return t_info.resident_size;\n}\n#elif defined(__FreeBSD__) || defined(__DragonFly__)\n#include <sys/types.h>\n#include 
<sys/sysctl.h>\n#include <sys/user.h>\n\nsize_t zmalloc_get_rss(void) {\n    struct kinfo_proc info;\n    size_t infolen = sizeof(info);\n    int mib[4];\n    mib[0] = CTL_KERN;\n    mib[1] = KERN_PROC;\n    mib[2] = KERN_PROC_PID;\n    mib[3] = getpid();\n\n    if (sysctl(mib, 4, &info, &infolen, NULL, 0) == 0)\n#if defined(__FreeBSD__)\n        return (size_t)info.ki_rssize * getpagesize();\n#else\n        return (size_t)info.kp_vm_rssize * getpagesize();\n#endif\n\n    return 0L;\n}\n#elif defined(__NetBSD__) || defined(__OpenBSD__)\n#include <sys/types.h>\n#include <sys/sysctl.h>\n\n#if defined(__OpenBSD__)\n#define kinfo_proc2 kinfo_proc\n#define KERN_PROC2 KERN_PROC\n#define __arraycount(a) (sizeof(a) / sizeof(a[0]))\n#endif\n\nsize_t zmalloc_get_rss(void) {\n    struct kinfo_proc2 info;\n    size_t infolen = sizeof(info);\n    int mib[6];\n    mib[0] = CTL_KERN;\n    mib[1] = KERN_PROC2;\n    mib[2] = KERN_PROC_PID;\n    mib[3] = getpid();\n    mib[4] = sizeof(info);\n    mib[5] = 1;\n    if (sysctl(mib, __arraycount(mib), &info, &infolen, NULL, 0) == 0)\n        return (size_t)info.p_vm_rssize * getpagesize();\n\n    return 0L;\n}\n#elif defined(__HAIKU__)\n#include <OS.h>\n\nsize_t zmalloc_get_rss(void) {\n    area_info info;\n    thread_info th;\n    size_t rss = 0;\n    ssize_t cookie = 0;\n\n    if (get_thread_info(find_thread(0), &th) != B_OK)\n        return 0;\n\n    while (get_next_area_info(th.team, &cookie, &info) == B_OK)\n        rss += info.ram_size;\n\n    return rss;\n}\n#elif defined(HAVE_PSINFO)\n#include <unistd.h>\n#include <sys/procfs.h>\n#include <fcntl.h>\n\nsize_t zmalloc_get_rss(void) {\n    struct prpsinfo info;\n    char filename[256];\n    int fd;\n\n    snprintf(filename,256,\"/proc/%ld/psinfo\",(long) getpid());\n\n    if ((fd = open(filename,O_RDONLY)) == -1) return 0;\n    if (ioctl(fd, PIOCPSINFO, &info) == -1) {\n        close(fd);\n        return 0;\n    }\n\n    close(fd);\n    return info.pr_rssize;\n}\n#else\nsize_t 
zmalloc_get_rss(void) {\n    /* If we can't get the RSS in an OS-specific way for this system, just\n     * return the memory usage we estimated in zmalloc().\n     *\n     * Fragmentation will appear to be always 1 (no fragmentation)\n     * of course... */\n    return zmalloc_used_memory();\n}\n#endif\n\n#if defined(USE_JEMALLOC)\n\n/* Compute the total memory wasted in fragmentation inside small arena bins.\n * Done by summing the memory in unused regs in all slabs of all small bins.\n *\n * Pass in arena to get the information of the specified arena, otherwise pass\n * in MALLCTL_ARENAS_ALL to get all. */\nsize_t zmalloc_get_frag_smallbins_by_arena(unsigned int arena) {\n    unsigned nbins;\n    size_t sz, frag = 0;\n\n    /* Pre-convert mallctl paths to MIB for better performance.\n     * This eliminates snprintf and string parsing overhead in the loop. */\n    size_t bin_size_mib[8], bin_nregs_mib[8], curregs_mib[8], curslabs_mib[8];\n    size_t bin_size_miblen = 8, bin_nregs_miblen = 8, curregs_miblen = 8, curslabs_miblen = 8;\n\n    sz = sizeof(unsigned);\n    assert(!je_mallctl(\"arenas.nbins\", &nbins, &sz, NULL, 0));\n\n    /* Convert all patterns to MIB (required before using je_mallctlbymib) */\n    assert(!je_mallctlnametomib(\"arenas.bin.0.size\", bin_size_mib, &bin_size_miblen));\n    assert(!je_mallctlnametomib(\"arenas.bin.0.nregs\", bin_nregs_mib, &bin_nregs_miblen));\n    assert(!je_mallctlnametomib(\"stats.arenas.0.bins.0.curregs\", curregs_mib, &curregs_miblen));\n    assert(!je_mallctlnametomib(\"stats.arenas.0.bins.0.curslabs\", curslabs_mib, &curslabs_miblen));\n\n    for (unsigned j = 0; j < nbins; j++) {\n        size_t curregs, curslabs, reg_size;\n        uint32_t nregs;\n\n        /* The size of the current bin */\n        bin_size_mib[2] = j;\n        sz = sizeof(size_t);\n        assert(!je_mallctlbymib(bin_size_mib, bin_size_miblen, &reg_size, &sz, NULL, 0));\n\n        /* Number of used regions in the bin */\n        
curregs_mib[2] = arena;\n        curregs_mib[4] = j;\n        sz = sizeof(size_t);\n        assert(!je_mallctlbymib(curregs_mib, curregs_miblen, &curregs, &sz, NULL, 0));\n\n        /* Number of regions per slab */\n        bin_nregs_mib[2] = j;\n        sz = sizeof(uint32_t);\n        assert(!je_mallctlbymib(bin_nregs_mib, bin_nregs_miblen, &nregs, &sz, NULL, 0));\n\n        /* Number of current slabs in the bin */\n        curslabs_mib[2] = arena;\n        curslabs_mib[4] = j;\n        sz = sizeof(size_t);\n        assert(!je_mallctlbymib(curslabs_mib, curslabs_miblen, &curslabs, &sz, NULL, 0));\n\n        /* Calculate the fragmentation bytes for the current bin and add it to the total. */\n        frag += ((nregs * curslabs) - curregs) * reg_size;\n    }\n\n    return frag;\n}\n\n/* Compute the total memory wasted in fragmentation inside small arena bins.\n * Done by summing the memory in unused regs in all slabs of all small bins. */\nsize_t zmalloc_get_frag_smallbins(void) {\n    return zmalloc_get_frag_smallbins_by_arena(MALLCTL_ARENAS_ALL);\n}\n\n/* Get memory allocation information from allocator.\n *\n * refresh_stats indicates whether to refresh cached statistics.\n * For the meaning of the other parameters, please refer to the function implementation\n * and INFO's allocator_* in redis-doc. */\nint zmalloc_get_allocator_info(int refresh_stats, size_t *allocated, size_t *active, size_t *resident,\n                               size_t *retained, size_t *muzzy, size_t *frag_smallbins_bytes)\n{\n    size_t sz;\n    *allocated = *resident = *active = 0;\n\n    /* Update the statistics cached by mallctl. */\n    if (refresh_stats) {\n        uint64_t epoch = 1;\n        sz = sizeof(epoch);\n        je_mallctl(\"epoch\", &epoch, &sz, &epoch, sz);\n    }\n\n    sz = sizeof(size_t);\n    /* Unlike RSS, this does not include RSS from shared libraries and other non\n     * heap mappings. 
*/\n    je_mallctl(\"stats.resident\", resident, &sz, NULL, 0);\n    /* Unlike resident, this doesn't include the pages jemalloc reserves\n     * for re-use (purge will clean that). */\n    je_mallctl(\"stats.active\", active, &sz, NULL, 0);\n    /* Unlike zmalloc_used_memory, this matches the stats.resident by taking\n     * into account all allocations done by this process (not only zmalloc). */\n    je_mallctl(\"stats.allocated\", allocated, &sz, NULL, 0);\n\n    /* Retained memory is memory released by `madvise(..., MADV_DONTNEED)`, which is not part\n     * of RSS or mapped memory, and doesn't have a strong association with physical memory in the OS.\n     * It is still part of the VM-Size, and may be used again in later allocations. */\n    if (retained) {\n        *retained = 0;\n        je_mallctl(\"stats.retained\", retained, &sz, NULL, 0);\n    }\n\n    /* Unlike retained, muzzy represents memory released with `madvise(..., MADV_FREE)`.\n     * These pages will show as RSS for the process, until the OS decides to re-use them. */\n    if (muzzy) {\n        char buf[100];\n        size_t pmuzzy, page;\n        snprintf(buf, sizeof(buf), \"stats.arenas.%u.pmuzzy\", MALLCTL_ARENAS_ALL);\n        assert(!je_mallctl(buf, &pmuzzy, &sz, NULL, 0));\n        assert(!je_mallctl(\"arenas.page\", &page, &sz, NULL, 0));\n        *muzzy = pmuzzy * page;\n    }\n\n    /* Total size of consumed memory in unused regs in small bins (AKA external fragmentation). */\n    *frag_smallbins_bytes = zmalloc_get_frag_smallbins();\n    return 1;\n}\n\n/* Get the specified arena memory allocation information from allocator.\n *\n * refresh_stats indicates whether to refresh cached statistics.\n * For the meaning of the other parameters, please refer to the function implementation\n * and INFO's allocator_* in redis-doc. 
*/\nint zmalloc_get_allocator_info_by_arena(unsigned int arena, int refresh_stats, size_t *allocated,\n                                        size_t *active, size_t *resident, size_t *frag_smallbins_bytes)\n{\n    char buf[100];\n    size_t sz;\n    *allocated = *resident = *active = 0;\n\n    /* Update the statistics cached by mallctl. */\n    if (refresh_stats) {\n        uint64_t epoch = 1;\n        sz = sizeof(epoch);\n        je_mallctl(\"epoch\", &epoch, &sz, &epoch, sz);\n    }\n\n    sz = sizeof(size_t);\n    /* Unlike RSS, this does not include RSS from shared libraries and other non\n     * heap mappings. */\n    snprintf(buf, sizeof(buf), \"stats.arenas.%u.small.resident\", arena);\n    je_mallctl(buf, resident, &sz, NULL, 0);\n    /* Unlike resident, this doesn't include the pages jemalloc reserves\n     * for re-use (purge will clean that). */\n    size_t pactive, page;\n    snprintf(buf, sizeof(buf), \"stats.arenas.%u.pactive\", arena);\n    assert(!je_mallctl(buf, &pactive, &sz, NULL, 0));\n    assert(!je_mallctl(\"arenas.page\", &page, &sz, NULL, 0));\n    *active = pactive * page;\n    /* Unlike zmalloc_used_memory, this matches the stats.resident by taking\n     * into account all allocations done by this process (not only zmalloc). */\n    size_t small_allocated, large_allocated;\n    snprintf(buf, sizeof(buf), \"stats.arenas.%u.small.allocated\", arena);\n    assert(!je_mallctl(buf, &small_allocated, &sz, NULL, 0));\n    *allocated += small_allocated;\n    snprintf(buf, sizeof(buf), \"stats.arenas.%u.large.allocated\", arena);\n    assert(!je_mallctl(buf, &large_allocated, &sz, NULL, 0));\n    *allocated += large_allocated;\n\n    /* Total size of consumed memory in unused regs in small bins (AKA external fragmentation). 
*/\n    *frag_smallbins_bytes = zmalloc_get_frag_smallbins_by_arena(arena);\n    return 1;\n}\n\n\nvoid set_jemalloc_bg_thread(int enable) {\n    /* let jemalloc do purging asynchronously, required when there's no traffic\n     * after flushdb */\n    char val = !!enable;\n    je_mallctl(\"background_thread\", NULL, 0, &val, 1);\n}\n\nint jemalloc_purge(void) {\n    /* return all unused (reserved) pages to the OS */\n    char tmp[32];\n    unsigned narenas = 0;\n    size_t sz = sizeof(unsigned);\n    if (!je_mallctl(\"arenas.narenas\", &narenas, &sz, NULL, 0)) {\n        snprintf(tmp, sizeof(tmp), \"arena.%u.purge\", narenas);\n        if (!je_mallctl(tmp, NULL, 0, NULL, 0))\n            return 0;\n    }\n    return -1;\n}\n\n#else\n\nint zmalloc_get_allocator_info(int refresh_stats, size_t *allocated, size_t *active, size_t *resident,\n                               size_t *retained, size_t *muzzy, size_t *frag_smallbins_bytes)\n{\n    UNUSED(refresh_stats);\n    *allocated = *resident = *active = *frag_smallbins_bytes = 0;\n    if (retained) *retained = 0;\n    if (muzzy) *muzzy = 0;\n    return 1;\n}\n\nint zmalloc_get_allocator_info_by_arena(unsigned int arena, int refresh_stats, size_t *allocated,\n                                        size_t *active, size_t *resident, size_t *frag_smallbins_bytes)\n{\n    UNUSED(arena);\n    UNUSED(refresh_stats);\n    *allocated = *resident = *active = *frag_smallbins_bytes = 0;\n    return 1;\n}\n\n\nvoid set_jemalloc_bg_thread(int enable) {\n    ((void)(enable));\n}\n\nint jemalloc_purge(void) {\n    return 0;\n}\n\n#endif\n\n#if defined(__APPLE__)\n/* For proc_pidinfo() used later in zmalloc_get_smap_bytes_by_field().\n * Note that this file cannot be included in zmalloc.h because it includes\n * a Darwin queue.h file where there is a \"LIST_HEAD\" macro (!) defined\n * conflicting with Redis user code. 
*/\n#include <libproc.h>\n#endif\n\n/* Get the sum of the specified field (converted from kb to bytes) in\n * /proc/self/smaps. The field must be specified with trailing \":\" as it\n * appears in the smaps output.\n *\n * If a pid is specified, the information is extracted for such a pid,\n * otherwise if pid is -1 the reported information is about the\n * current process.\n *\n * Example: zmalloc_get_smap_bytes_by_field(\"Rss:\",-1);\n */\n#if defined(HAVE_PROC_SMAPS)\nsize_t zmalloc_get_smap_bytes_by_field(char *field, long pid) {\n    char line[1024];\n    size_t bytes = 0;\n    int flen = strlen(field);\n    FILE *fp;\n\n    if (pid == -1) {\n        fp = fopen(\"/proc/self/smaps\",\"r\");\n    } else {\n        char filename[128];\n        snprintf(filename,sizeof(filename),\"/proc/%ld/smaps\",pid);\n        fp = fopen(filename,\"r\");\n    }\n\n    if (!fp) return 0;\n    while(fgets(line,sizeof(line),fp) != NULL) {\n        if (strncmp(line,field,flen) == 0) {\n            char *p = strchr(line,'k');\n            if (p) {\n                *p = '\\0';\n                bytes += strtol(line+flen,NULL,10) * 1024;\n            }\n        }\n    }\n    fclose(fp);\n    return bytes;\n}\n#else\n/* Get the sum of the specified field from a libproc API call.\n * As the values are on a per-page basis we need to convert\n * them accordingly.\n *\n * Note that AnonHugePages is a no-op as the THP feature\n * is not supported on this platform.\n */\nsize_t zmalloc_get_smap_bytes_by_field(char *field, long pid) {\n#if defined(__APPLE__)\n    struct proc_regioninfo pri;\n    if (pid == -1) pid = getpid();\n    if (proc_pidinfo(pid, PROC_PIDREGIONINFO, 0, &pri,\n                     PROC_PIDREGIONINFO_SIZE) == PROC_PIDREGIONINFO_SIZE)\n    {\n        int pagesize = getpagesize();\n        if (!strcmp(field, \"Private_Dirty:\")) {\n            return (size_t)pri.pri_pages_dirtied * pagesize;\n        } else if (!strcmp(field, \"Rss:\")) {\n            return 
(size_t)pri.pri_pages_resident * pagesize;\n        } else if (!strcmp(field, \"AnonHugePages:\")) {\n            return 0;\n        }\n    }\n    return 0;\n#endif\n    ((void) field);\n    ((void) pid);\n    return 0;\n}\n#endif\n\n/* Return the total number of bytes in pages marked as Private Dirty.\n *\n * Note: depending on the platform and memory footprint of the process, this\n * call can be slow, exceeding 1000ms!\n */\nsize_t zmalloc_get_private_dirty(long pid) {\n    return zmalloc_get_smap_bytes_by_field(\"Private_Dirty:\",pid);\n}\n\n/* Returns the size of physical memory (RAM) in bytes.\n * It looks ugly, but this is the cleanest way to achieve cross platform results.\n * Cleaned up from:\n *\n * http://nadeausoftware.com/articles/2012/09/c_c_tip_how_get_physical_memory_size_system\n *\n * Note that this function:\n * 1) Was released under the following CC attribution license:\n *    http://creativecommons.org/licenses/by/3.0/deed.en_US.\n * 2) Was originally implemented by David Robert Nadeau.\n * 3) Was modified for Redis by Matt Stancliff.\n * 4) This note exists in order to comply with the original license.\n */\nsize_t zmalloc_get_memory_size(void) {\n#if defined(__unix__) || defined(__unix) || defined(unix) || \\\n    (defined(__APPLE__) && defined(__MACH__))\n#if defined(CTL_HW) && (defined(HW_MEMSIZE) || defined(HW_PHYSMEM64))\n    int mib[2];\n    mib[0] = CTL_HW;\n#if defined(HW_MEMSIZE)\n    mib[1] = HW_MEMSIZE;            /* OSX. --------------------- */\n#elif defined(HW_PHYSMEM64)\n    mib[1] = HW_PHYSMEM64;          /* NetBSD, OpenBSD. --------- */\n#endif\n    int64_t size = 0;               /* 64-bit */\n    size_t len = sizeof(size);\n    if (sysctl(mib, 2, &size, &len, NULL, 0) == 0)\n        return (size_t)size;\n    return 0L;          /* Failed? */\n\n#elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)\n    /* FreeBSD, Linux, OpenBSD, and Solaris. 
-------------------- */\n    return (size_t)sysconf(_SC_PHYS_PAGES) * (size_t)sysconf(_SC_PAGESIZE);\n\n#elif defined(CTL_HW) && (defined(HW_PHYSMEM) || defined(HW_REALMEM))\n    /* DragonFly BSD, FreeBSD, NetBSD, OpenBSD, and OSX. -------- */\n    int mib[2];\n    mib[0] = CTL_HW;\n#if defined(HW_REALMEM)\n    mib[1] = HW_REALMEM;        /* FreeBSD. ----------------- */\n#elif defined(HW_PHYSMEM)\n    mib[1] = HW_PHYSMEM;        /* Others. ------------------ */\n#endif\n    unsigned int size = 0;      /* 32-bit */\n    size_t len = sizeof(size);\n    if (sysctl(mib, 2, &size, &len, NULL, 0) == 0)\n        return (size_t)size;\n    return 0L;          /* Failed? */\n#else\n    return 0L;          /* Unknown method to get the data. */\n#endif\n#else\n    return 0L;          /* Unknown OS. */\n#endif\n}\n\n#ifdef REDIS_TEST\n#include \"testhelp.h\"\n#include \"redisassert.h\"\n\n#define TEST(name) printf(\"test — %s\\n\", name);\n\nint zmalloc_test(int argc, char **argv, int flags) {\n    void *ptr, *ptr2;\n\n    UNUSED(argc);\n    UNUSED(argv);\n    UNUSED(flags);\n\n    printf(\"Malloc prefix size: %d\\n\", (int) PREFIX_SIZE);\n\n    TEST(\"Initial used memory is 0\") {\n        assert(zmalloc_used_memory() == 0);\n    }\n\n    TEST(\"Allocated 123 bytes\") {\n        ptr = zmalloc(123);\n        printf(\"Allocated 123 bytes; used: %zu\\n\", zmalloc_used_memory());\n    }\n\n    TEST(\"Reallocated to 456 bytes\") {\n        ptr = zrealloc(ptr, 456);\n        printf(\"Reallocated to 456 bytes; used: %zu\\n\", zmalloc_used_memory());\n    }\n\n    TEST(\"Callocated 123 bytes\") {\n        ptr2 = zcalloc(123);\n        printf(\"Callocated 123 bytes; used: %zu\\n\", zmalloc_used_memory());\n    }\n\n    TEST(\"Freed pointers\") {\n        zfree(ptr);\n        zfree(ptr2);\n        printf(\"Freed pointers; used: %zu\\n\", zmalloc_used_memory());\n    }\n\n    TEST(\"Allocated 0 bytes\") {\n        ptr = zmalloc(0);\n        printf(\"Allocated 0 bytes; used: %zu\\n\", 
zmalloc_used_memory());\n        zfree(ptr);\n    }\n\n    TEST(\"At the end used memory is 0\") {\n        assert(zmalloc_used_memory() == 0);\n    }\n\n    return 0;\n}\n#endif\n"
  },
  {
    "path": "src/zmalloc.h",
    "content": "/* zmalloc - total amount of allocated memory aware version of malloc()\n *\n * Copyright (c) 2009-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#ifndef __ZMALLOC_H\n#define __ZMALLOC_H\n\n/* Double expansion needed for stringification of macro values. */\n#define __xstr(s) __str(s)\n#define __str(s) #s\n\n#if defined(USE_TCMALLOC)\n#define ZMALLOC_LIB (\"tcmalloc-\" __xstr(TC_VERSION_MAJOR) \".\" __xstr(TC_VERSION_MINOR))\n#include <google/tcmalloc.h>\n#if (TC_VERSION_MAJOR == 1 && TC_VERSION_MINOR >= 6) || (TC_VERSION_MAJOR > 1)\n#define HAVE_MALLOC_SIZE 1\n#define zmalloc_size(p) tc_malloc_size(p)\n#else\n#error \"Newer version of tcmalloc required\"\n#endif\n\n#elif defined(USE_JEMALLOC)\n#define ZMALLOC_LIB (\"jemalloc-\" __xstr(JEMALLOC_VERSION_MAJOR) \".\" __xstr(JEMALLOC_VERSION_MINOR) \".\" __xstr(JEMALLOC_VERSION_BUGFIX))\n#include <jemalloc/jemalloc.h>\n#if (JEMALLOC_VERSION_MAJOR == 2 && JEMALLOC_VERSION_MINOR >= 1) || (JEMALLOC_VERSION_MAJOR > 2)\n#define HAVE_MALLOC_SIZE 1\n#define zmalloc_size(p) je_malloc_usable_size(p)\n#else\n#error \"Newer version of jemalloc required\"\n#endif\n\n#elif defined(__APPLE__)\n#include <malloc/malloc.h>\n#define HAVE_MALLOC_SIZE 1\n#define zmalloc_size(p) malloc_size(p)\n#endif\n\n/* On native libc implementations, we should still do our best to provide a\n * HAVE_MALLOC_SIZE capability. 
This can be set explicitly as well:\n *\n * NO_MALLOC_USABLE_SIZE disables it on all platforms, even if they are\n *      known to support it.\n * USE_MALLOC_USABLE_SIZE forces use of malloc_usable_size() regardless\n *      of platform.\n */\n#ifndef ZMALLOC_LIB\n#define ZMALLOC_LIB \"libc\"\n\n#if !defined(NO_MALLOC_USABLE_SIZE) && \\\n    (defined(__GLIBC__) || defined(__FreeBSD__) || \\\n     defined(__DragonFly__) || defined(__HAIKU__) || \\\n     defined(USE_MALLOC_USABLE_SIZE))\n\n/* Includes for malloc_usable_size() */\n#ifdef __FreeBSD__\n#include <malloc_np.h>\n#else\n#ifndef _GNU_SOURCE\n#define _GNU_SOURCE\n#endif\n#include <malloc.h>\n#endif\n\n#define HAVE_MALLOC_SIZE 1\n#define zmalloc_size(p) malloc_usable_size(p)\n\n#endif\n#endif\n\n/* We can enable the Redis defrag capabilities only if we are using Jemalloc\n * and the version used is our special version modified for Redis having\n * the ability to return per-allocation fragmentation hints. */\n#if (defined(USE_JEMALLOC) && defined(JEMALLOC_FRAG_HINT)) || defined(DEBUG_DEFRAG_FORCE)\n#define HAVE_DEFRAG\n#endif\n\n/* We can enable allocation with usable size capabilities only if we are using Jemalloc\n * and the version used is our special version modified for Redis having\n * the ability to return usable size during allocation or deallocation. */\n#if defined(USE_JEMALLOC) && defined(JEMALLOC_ALLOC_WITH_USIZE)\n#define HAVE_ALLOC_WITH_USIZE\n#endif\n\n#include <time.h>\n\n/* 'noinline' attribute is intended to prevent the `-Wstringop-overread` warning\n * when using gcc-12 or later with LTO enabled. It may be removed once the\n * bug[https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96503] is fixed. 
*/\n__attribute__((malloc,alloc_size(1),noinline)) void *zmalloc(size_t size);\n__attribute__((malloc,alloc_size(1),noinline)) void *zcalloc(size_t size);\n__attribute__((malloc,alloc_size(1,2),noinline)) void *zcalloc_num(size_t num, size_t size);\n__attribute__((alloc_size(2),noinline)) void *zrealloc(void *ptr, size_t size);\n__attribute__((malloc,alloc_size(1),noinline)) void *ztrymalloc(size_t size);\n__attribute__((malloc,alloc_size(1),noinline)) void *ztrycalloc(size_t size);\n__attribute__((alloc_size(2),noinline)) void *ztryrealloc(void *ptr, size_t size);\nvoid zfree(void *ptr);\nvoid *zmalloc_usable(size_t size, size_t *usable);\nvoid *zcalloc_usable(size_t size, size_t *usable);\nvoid *zrealloc_usable(void *ptr, size_t size, size_t *usable, size_t *old_usable);\nvoid *ztrymalloc_usable(size_t size, size_t *usable);\nvoid *ztrycalloc_usable(size_t size, size_t *usable);\nvoid *ztryrealloc_usable(void *ptr, size_t size, size_t *usable, size_t *old_usable);\nvoid zfree_usable(void *ptr, size_t *usable);\n__attribute__((malloc)) char *zstrdup(const char *s);\n__attribute__((malloc)) char *zstrdup_usable(const char *s, size_t *usable);\nsize_t zmalloc_used_memory(void);\nsize_t zmalloc_get_peak_memory(void);\ntime_t zmalloc_get_peak_memory_time(void);\nvoid zmalloc_set_oom_handler(void (*oom_handler)(size_t));\nsize_t zmalloc_get_rss(void);\nint zmalloc_get_allocator_info(int refresh_stats, size_t *allocated, size_t *active, size_t *resident,\n                               size_t *retained, size_t *muzzy, size_t *frag_smallbins_bytes);\nint zmalloc_get_allocator_info_by_arena(unsigned int arena, int refresh_stats, size_t *allocated,\n                                        size_t *active, size_t *resident, size_t *frag_smallbins_bytes);\nvoid set_jemalloc_bg_thread(int enable);\nint jemalloc_purge(void);\nsize_t zmalloc_get_private_dirty(long pid);\nsize_t zmalloc_get_smap_bytes_by_field(char *field, long pid);\nsize_t zmalloc_get_memory_size(void);\nvoid 
zlibc_free(void *ptr);\nvoid zmadvise_dontneed(void *ptr);\n\n#if defined(USE_JEMALLOC)\nvoid *zmalloc_with_flags(size_t size, int flags);\nvoid *zrealloc_with_flags(void *ptr, size_t size, int flags);\nvoid zfree_with_flags(void *ptr, int flags);\n#endif\n\n#if (defined(USE_JEMALLOC) && defined(HAVE_DEFRAG))\nvoid zfree_no_tcache(void *ptr);\n__attribute__((malloc)) void *zmalloc_no_tcache(size_t size);\n#endif\n\n#ifndef HAVE_MALLOC_SIZE\nsize_t zmalloc_size(void *ptr);\nsize_t zmalloc_usable_size(void *ptr);\n#else\n/* If we use 'zmalloc_usable_size()' to obtain additional available memory size\n * and manipulate it, we need to call 'extend_to_usable()' afterwards to ensure\n * the compiler recognizes this extra memory. However, if we use the pointer\n * obtained from z[*]_usable() family functions, there is no need for this step. */\n#define zmalloc_usable_size(p) zmalloc_size(p)\n\n/* derived from https://github.com/systemd/systemd/pull/25688\n * We use zmalloc_usable_size() everywhere to use memory blocks, but that is an abuse since the\n * malloc_usable_size() isn't meant for this kind of use; it is for diagnostics only. That is also why the\n * behavior is flaky when built with _FORTIFY_SOURCE: the compiler can sense that we reach outside\n * the allocated block and SIGABRT.\n * We use a dummy allocator function to tell the compiler that the new size of ptr is newsize.\n * The implementation returns the pointer as is; the only reason for its existence is as a conduit for the\n * alloc_size attribute. This cannot be a static inline because gcc then loses the attributes on the function.\n * See: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96503 */\n__attribute__((alloc_size(2),noinline)) void *extend_to_usable(void *ptr, size_t size);\n#endif\n\nint get_proc_stat_ll(int i, long long *res);\n\n#ifdef REDIS_TEST\nint zmalloc_test(int argc, char **argv, int flags);\n#endif\n\n#endif /* __ZMALLOC_H */\n"
  },
  {
    "path": "tests/README.md",
    "content": "Redis Test Suite\n================\n\nThe normal execution mode of the test suite involves starting and manipulating\nlocal `redis-server` instances, inspecting process state, log files, etc.\n\nThe test suite also supports execution against an external server, which is\nenabled using the `--host` and `--port` parameters. When executing against an\nexternal server, tests tagged `external:skip` are skipped.\n\nThere are additional runtime options that can further adjust the test suite to\nmatch different external server configurations:\n\n| Option               | Impact                                                   |\n| -------------------- | -------------------------------------------------------- |\n| `--singledb`         | Only use database 0, don't assume others are supported. |\n| `--ignore-encoding`  | Skip all checks for specific encoding.  |\n| `--ignore-digest`    | Skip key value digest validations. |\n| `--cluster-mode`     | Run in strict Redis Cluster compatibility mode. |\n| `--large-memory`     | Enables tests that consume more than 100MB. |\n\nTags\n----\n\nTags are applied to tests to classify them according to the subsystem they test,\nbut also to indicate compatibility with different run modes and required\ncapabilities.\n\nTags can be applied in different context levels:\n* `start_server` context\n* `tags` context that bundles several tests together\n* A single test context.\n\nThe following compatibility and capability tags are currently used:\n\n| Tag                       | Indicates |\n| ---------------------     | --------- |\n| `external:skip`           | Not compatible with external servers. |\n| `cluster:skip`            | Not compatible with `--cluster-mode`. |\n| `large-memory`            | Tests that require more than 100MB. |\n| `tls:skip`                | Not compatible with `--tls`. |\n| `tsan:skip`               | Not compatible with running under thread sanitizer. 
|\n| `needs:repl`              | Uses replication and needs to be able to `SYNC` from server. |\n| `needs:debug`             | Uses the `DEBUG` command or other debugging focused commands (like `OBJECT REFCOUNT`). |\n| `needs:pfdebug`           | Uses the `PFDEBUG` command. |\n| `needs:config-maxmemory`  | Uses `CONFIG SET` to manipulate memory limit, eviction policies, etc. |\n| `needs:config-resetstat`  | Uses `CONFIG RESETSTAT` to reset statistics. |\n| `needs:reset`             | Uses `RESET` to reset client connections. |\n| `needs:save`              | Uses `SAVE` or `BGSAVE` to create an RDB file. |\n\nWhen using an external server (`--host` and `--port`), filtering using the\n`external:skip` tag is done automatically.\n\nWhen using `--cluster-mode`, filtering using the `cluster:skip` tag is done\nautomatically.\n\nWhen not using `--large-memory`, filtering using the `large-memory` tag is done\nautomatically.\n\nIn addition, it is possible to specify additional configuration. For example, to\nrun tests on a server that does not permit `SYNC` use:\n\n    ./runtest --host <host> --port <port> --tags -needs:repl\n\n"
  },
  {
    "path": "tests/assets/default.conf",
    "content": "# Redis configuration for testing.\n\nalways-show-logo yes\nnotify-keyspace-events KEA\ndaemonize no\npidfile /var/run/redis.pid\nport 6379\ntimeout 0\nbind 127.0.0.1\nloglevel verbose\nlogfile ''\ndatabases 16\nlatency-monitor-threshold 1\nrepl-diskless-sync-delay 0\n\n# Turn off RDB by default (to speed up tests)\n# Note the infrastructure in server.tcl uses a dict, so we can't provide several save directives\nsave ''\n\nrdbcompression yes\ndbfilename dump.rdb\ndir ./\n\nslave-serve-stale-data yes\nappendonly no\nappendfsync everysec\nno-appendfsync-on-rewrite no\nactiverehashing yes\n\nenable-protected-configs yes\nenable-debug-command yes\nenable-module-command yes\n\npropagation-error-behavior panic\n\n# Make sure shutdown doesn't fail if there's an initial AOFRW\nshutdown-on-sigterm force\n"
  },
  {
    "path": "tests/assets/minimal.conf",
    "content": "# Minimal configuration for testing.\nalways-show-logo yes\ndaemonize no\npidfile /var/run/redis.pid\nloglevel verbose\n"
  },
  {
    "path": "tests/assets/nodefaultuser.acl",
    "content": "user alice on nopass ~* +@all\nuser bob on nopass ~* &* +@all"
  },
  {
    "path": "tests/assets/test_cli_hint_suite.txt",
    "content": "# Test suite for redis-cli command-line hinting mechanism.\n# Each test case consists of two strings: a (partial) input command line, and the expected hint string.\n\n# Command with one arg: GET key\n\"GET \" \"key\"\n\"GET abc \" \"\"\n\n# Command with two args: DECRBY key decrement\n\"DECRBY xyz 2 \" \"\"\n\"DECRBY xyz \" \"decrement\"\n\"DECRBY \" \"key decrement\"\n\n# Command with optional arg: LPOP key [count]\n\"LPOP key \" \"[count]\"\n\"LPOP key 3 \" \"\"\n\n# Command with optional token arg: XRANGE key start end [COUNT count]\n\"XRANGE \" \"key start end [COUNT count]\"\n\"XRANGE k 4 2 \" \"[COUNT count]\"\n\"XRANGE k 4 2 COU\" \"[COUNT count]\"\n\"XRANGE k 4 2 COUNT\" \"[COUNT count]\"\n\"XRANGE k 4 2 COUNT \" \"count\"\n\n# Command with optional token block arg: BITFIELD_RO key [GET encoding offset [GET encoding offset ...]]\n\"BITFIELD_RO k \" \"[GET encoding offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GE\" \"[GET encoding offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GET\" \"[GET encoding offset [GET encoding offset ...]]\"\n# TODO: The following hints end with an unbalanced \"]\" which shouldn't be there.\n\"BITFIELD_RO k GET \" \"encoding offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GET xyz \" \"offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GET xyz 12 \" \"[GET encoding offset ...]]\"\n\"BITFIELD_RO k GET xyz 12 GET \" \"encoding offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GET enc1 12 GET enc2 \" \"offset [GET encoding offset ...]]\"\n\"BITFIELD_RO k GET enc1 12 GET enc2 34 \" \"[GET encoding offset ...]]\"\n\n# Two-word command with multiple non-token block args:  CONFIG SET parameter value [parameter value ...]\n\"CONFIG SET param \" \"value [parameter value ...]\"\n\"CONFIG SET param val \" \"[parameter value ...]\"\n\"CONFIG SET param val parm2 val2 \" \"[parameter value ...]\"\n\n# Command with nested optional args:  ZRANDMEMBER key [count [WITHSCORES]]\n\"ZRANDMEMBER k \" \"[count 
[WITHSCORES]]\"\n\"ZRANDMEMBER k 3 \" \"[WITHSCORES]\"\n\"ZRANDMEMBER k 3 WI\" \"[WITHSCORES]\"\n\"ZRANDMEMBER k 3 WITHSCORES \" \"\"\n# Wrong data type: count must be an integer. Hinting fails.\n\"ZRANDMEMBER k cnt \" \"\"\n\n# Command ends with repeated arg: MGET key [key ...]\n\"MGET \" \"key [key ...]\"\n\"MGET k \" \"[key ...]\"\n\"MGET k k \" \"[key ...]\"\n\n# Optional args can be in any order: SCAN cursor [MATCH pattern] [COUNT count] [TYPE type]\n\"SCAN 2 MATCH \" \"pattern [COUNT count] [TYPE type]\"\n\"SCAN 2 COUNT \" \"count [MATCH pattern] [TYPE type]\"\n\n# One-of choices: BLMOVE source destination LEFT|RIGHT LEFT|RIGHT timeout\n\"BLMOVE src dst LEFT \" \"LEFT|RIGHT timeout\"\n\n# Optional args can be in any order: ZRANGE key min max [BYSCORE|BYLEX] [REV] [LIMIT offset count] [WITHSCORES]\n\"ZRANGE k 1 2 \" \"[BYSCORE|BYLEX] [REV] [LIMIT offset count] [WITHSCORES]\"\n\"ZRANGE k 1 2 bylex \" \"[REV] [LIMIT offset count] [WITHSCORES]\"\n\"ZRANGE k 1 2 bylex rev \" \"[LIMIT offset count] [WITHSCORES]\"\n\"ZRANGE k 1 2 limit 2 4 \" \"[BYSCORE|BYLEX] [REV] [WITHSCORES]\"\n\"ZRANGE k 1 2 bylex rev limit 2 4 WITHSCORES \" \"\"\n\"ZRANGE k 1 2 rev \" \"[BYSCORE|BYLEX] [LIMIT offset count] [WITHSCORES]\"\n\"ZRANGE k 1 2 WITHSCORES \" \"[BYSCORE|BYLEX] [REV] [LIMIT offset count]\"\n\n# Optional one-of args with parameters: SET key value [NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\n\"SET key value \" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\"SET key value EX\" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\"SET key value EX \" \"seconds [NX|XX|IFEQ ifeq-value|IFNE 
ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value EX 23 \" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value EXAT\" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\"SET key value EXAT \" \"unix-time-seconds [NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value PX\" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\"SET key value PX \" \"milliseconds [NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value PXAT\" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\"SET key value PXAT \" \"unix-time-milliseconds [NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value KEEPTTL \" \"[NX|XX|IFEQ ifeq-value|IFNE ifne-value|IFDEQ ifdeq-digest|IFDNE ifdne-digest] [GET]\"\n\"SET key value XX \" \"[GET] [EX seconds|PX milliseconds|EXAT unix-time-seconds|PXAT unix-time-milliseconds|KEEPTTL]\"\n\n# If an input word can't be matched, stop hinting.\n\"SET key value FOOBAR \" \"\"\n# Incorrect type for EX 'seconds' parameter - stop hinting.\n\"SET key value EX sec \" \"\"\n\n# Reordering partially-matched optional argument: GEORADIUS key longitude latitude radius M|KM|FT|MI [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC|DESC] [STORE key|STOREDIST key]\n\"GEORADIUS key \" \"longitude latitude radius M|KM|FT|MI [WITHCOORD] [WITHDIST] [WITHHASH] [COUNT count [ANY]] [ASC|DESC] [STORE key|STOREDIST key]\"\n\"GEORADIUS key 1 2 3 M \" \"[WITHCOORD] [WITHDIST] [WITHHASH] [COUNT 
count [ANY]] [ASC|DESC] [STORE key|STOREDIST key]\"\n\"GEORADIUS key 1 2 3 M COUNT \" \"count [ANY] [WITHCOORD] [WITHDIST] [WITHHASH] [ASC|DESC] [STORE key|STOREDIST key]\"\n\"GEORADIUS key 1 2 3 M COUNT 12 \" \"[ANY] [WITHCOORD] [WITHDIST] [WITHHASH] [ASC|DESC] [STORE key|STOREDIST key]\"\n\"GEORADIUS key 1 -2.345 3 M COUNT 12 \" \"[ANY] [WITHCOORD] [WITHDIST] [WITHHASH] [ASC|DESC] [STORE key|STOREDIST key]\"\n# Wrong data type: latitude must be a double. Hinting fails.\n\"GEORADIUS key 1 X \" \"\"\n# Once the next optional argument is started, the [ANY] hint completing the COUNT argument disappears.\n\"GEORADIUS key 1 2 3 M COUNT 12 ASC \" \"[WITHCOORD] [WITHDIST] [WITHHASH] [STORE key|STOREDIST key]\"\n\n# Incorrect argument type for double-valued token parameter.\n\"GEOSEARCH k FROMLONLAT \" \"longitude latitude BYRADIUS radius M|KM|FT|MI|BYBOX width height M|KM|FT|MI [ASC|DESC] [COUNT count [ANY]] [WITHCOORD] [WITHDIST] [WITHHASH]\"\n\"GEOSEARCH k FROMLONLAT 2.34 4.45 BYRADIUS badvalue \" \"\"\n\n# Optional parameters followed by mandatory params: ZADD key [NX|XX] [GT|LT] [CH] [INCR] score member [score member ...]\n\"ZADD key \" \"[NX|XX] [GT|LT] [CH] [INCR] score member [score member ...]\"\n\"ZADD key CH LT \" \"[NX|XX] [INCR] score member [score member ...]\"\n\"ZADD key 0 \" \"member [score member ...]\"\n\n# Empty-valued token argument represented as a pair of double-quotes.\n\"MIGRATE \" \"host port key|\\\"\\\" destination-db timeout [COPY] [REPLACE] [AUTH password|AUTH2 username password] [KEYS key [key ...]]\"\n"
  },
  {
    "path": "tests/assets/user.acl",
    "content": "user alice on allcommands allkeys &* >alice\nuser bob on -@all +@set +acl ~set* &* >bob\nuser doug on resetchannels &test +@all ~* >doug\nuser default on nopass ~* &* +@all\n"
  },
  {
    "path": "tests/assets/userwithselectors.acl",
    "content": "user alice on (+get ~rw*)\nuser bob on (+set %W~w*) (+get %R~r*)"
  },
  {
    "path": "tests/cluster/cluster.tcl",
    "content": "# Cluster-specific test functions.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\n# Track cluster configuration as created by create_cluster below\nset ::cluster_master_nodes 0\nset ::cluster_replica_nodes 0\n\n# Returns a parsed CLUSTER NODES output as a list of dictionaries. Optional status field\n# can be specified to only return entries that match the provided status.\nproc get_cluster_nodes {id {status \"*\"}} {\n    set lines [split [R $id cluster nodes] \"\\r\\n\"]\n    set nodes {}\n    foreach l $lines {\n        set l [string trim $l]\n        if {$l eq {}} continue\n        set args [split $l]\n        set node [dict create \\\n            id [lindex $args 0] \\\n            addr [lindex $args 1] \\\n            flags [split [lindex $args 2] ,] \\\n            slaveof [lindex $args 3] \\\n            ping_sent [lindex $args 4] \\\n            pong_recv [lindex $args 5] \\\n            config_epoch [lindex $args 6] \\\n            linkstate [lindex $args 7] \\\n            slots [lrange $args 8 end] \\\n        ]\n        if {[string match $status [lindex $args 7]]} {\n            lappend nodes $node\n        }\n    }\n    return $nodes\n}\n\n# Test node for flag.\nproc has_flag {node flag} {\n    expr {[lsearch -exact [dict get $node flags] $flag] != -1}\n}\n\n# Returns the parsed myself node entry as a dictionary.\nproc get_myself id {\n    set nodes [get_cluster_nodes $id]\n    foreach n $nodes {\n        if {[has_flag $n myself]} {return $n}\n    }\n    return {}\n}\n\n# Get a specific node by ID by parsing the CLUSTER NODES output\n# of the instance number 'instance_id'\nproc get_node_by_id {instance_id node_id} {\n    set nodes [get_cluster_nodes $instance_id]\n    foreach n $nodes {\n        if {[dict get 
$n id] eq $node_id} {return $n}\n    }\n    return {}\n}\n\n# Return the value of the specified CLUSTER INFO field.\nproc CI {n field} {\n    get_info_field [R $n cluster info] $field\n}\n\n# Return the value of the specified INFO field.\nproc s {n field} {\n    get_info_field [R $n info] $field\n}\n\n# Assuming nodes are reset, this function performs slot allocation.\n# Only the first 'n' nodes are used.\nproc cluster_allocate_slots {n} {\n    set slot 16383\n    while {$slot >= 0} {\n        # Allocate successive slots to random nodes.\n        set node [randomInt $n]\n        lappend slots_$node $slot\n        incr slot -1\n    }\n    for {set j 0} {$j < $n} {incr j} {\n        R $j cluster addslots {*}[set slots_${j}]\n    }\n}\n\n# Check that cluster nodes agree about \"state\", or raise an error.\nproc assert_cluster_state {state} {\n    foreach_redis_id id {\n        if {[instance_is_killed redis $id]} continue\n        wait_for_condition 1000 50 {\n            [CI $id cluster_state] eq $state\n        } else {\n            fail \"Cluster node $id cluster_state:[CI $id cluster_state]\"\n        }\n    }\n\n    wait_for_secrets_match 50 100\n}\n\nproc num_unique_secrets {} {\n    set secrets [list]\n    foreach_redis_id id {\n        if {[instance_is_killed redis $id]} continue\n        lappend secrets [R $id debug internal_secret]\n    }\n    set num_secrets [llength [lsort -unique $secrets]]\n    return $num_secrets\n}\n\n# Check that all the cluster nodes share the same internal secret, or raise\n# an error.\nproc assert_secrets_match {} {\n    assert_equal {1} [num_unique_secrets]\n}\n\nproc wait_for_secrets_match {maxtries delay} {\n    wait_for_condition $maxtries $delay {\n        [num_unique_secrets] eq 1\n    } else {\n        fail \"Failed waiting for secrets to sync\"\n    }\n}\n\n# Search for the first node starting from ID $first that is not\n# already configured as a slave.\nproc cluster_find_available_slave {first} {\n    foreach_redis_id id {\n        if {$id < 
$first} continue\n        if {[instance_is_killed redis $id]} continue\n        set me [get_myself $id]\n        if {[dict get $me slaveof] eq {-}} {return $id}\n    }\n    fail \"No available slaves\"\n}\n\n# Add 'slaves' slaves to a cluster composed of 'masters' masters.\n# It assumes that masters are allocated sequentially from instance ID 0\n# to N-1.\nproc cluster_allocate_slaves {masters slaves} {\n    for {set j 0} {$j < $slaves} {incr j} {\n        set master_id [expr {$j % $masters}]\n        set slave_id [cluster_find_available_slave $masters]\n        set master_myself [get_myself $master_id]\n        R $slave_id cluster replicate [dict get $master_myself id]\n    }\n}\n\n# Create a cluster composed of the specified number of masters and slaves.\nproc create_cluster {masters slaves} {\n    cluster_allocate_slots $masters\n    if {$slaves} {\n        cluster_allocate_slaves $masters $slaves\n    }\n    assert_cluster_state ok\n\n    set ::cluster_master_nodes $masters\n    set ::cluster_replica_nodes $slaves\n}\n\nproc cluster_allocate_with_continuous_slots {n} {\n    set slot 16383\n    set avg [expr ($slot+1) / $n]\n    while {$slot >= 0} {\n        set node [expr $slot/$avg >= $n ? $n-1 : $slot/$avg]\n        lappend slots_$node $slot\n        incr slot -1\n    }\n    for {set j 0} {$j < $n} {incr j} {\n        R $j cluster addslots {*}[set slots_${j}]\n    }\n}\n\n# Create a cluster composed of the specified number of masters and slaves,\n# but with a continuous slot range. 
\nproc cluster_create_with_continuous_slots {masters slaves} {\n    cluster_allocate_with_continuous_slots $masters\n    if {$slaves} {\n        cluster_allocate_slaves $masters $slaves\n    }\n    assert_cluster_state ok\n\n    set ::cluster_master_nodes $masters\n    set ::cluster_replica_nodes $slaves\n}\n\n\n# Set the cluster node-timeout on all the reachable nodes.\nproc set_cluster_node_timeout {to} {\n    foreach_redis_id id {\n        catch {R $id CONFIG SET cluster-node-timeout $to}\n    }\n}\n\n# Check if the cluster is writable and readable. Use node \"id\"\n# as a starting point to talk with the cluster.\nproc cluster_write_test {id} {\n    set prefix [randstring 20 20 alpha]\n    set port [get_instance_attrib redis $id port]\n    set cluster [redis_cluster 127.0.0.1:$port]\n    for {set j 0} {$j < 100} {incr j} {\n        $cluster set key.$j $prefix.$j\n    }\n    for {set j 0} {$j < 100} {incr j} {\n        assert {[$cluster get key.$j] eq \"$prefix.$j\"}\n    }\n    $cluster close\n}\n\n# Normalize cluster slots configuration by sorting replicas by node ID\nproc normalize_cluster_slots {slots_config} {\n    set normalized {}\n    foreach slot_range $slots_config {\n        if {[llength $slot_range] <= 3} {\n            lappend normalized $slot_range\n        } else {\n            # Sort replicas (index 3+) by node ID, keep start/end/master unchanged\n            set replicas [lrange $slot_range 3 end]\n            set sorted_replicas [lsort -index 2 $replicas]\n            lappend normalized [concat [lrange $slot_range 0 2] $sorted_replicas]\n        }\n    }\n    return $normalized\n}\n\n# Check if cluster configuration is consistent.\nproc cluster_config_consistent {} {\n    for {set j 0} {$j < $::cluster_master_nodes + $::cluster_replica_nodes} {incr j} {\n        if {$j == 0} {\n            set base_cfg [R $j cluster slots]\n            set base_secret [R $j debug internal_secret]\n            set normalized_base_cfg [normalize_cluster_slots 
$base_cfg]\n        } else {\n            set cfg [R $j cluster slots]\n            set secret [R $j debug internal_secret]\n            set normalized_cfg [normalize_cluster_slots $cfg]\n            if {$normalized_cfg != $normalized_base_cfg || $secret != $base_secret} {\n                return 0\n            }\n        }\n    }\n\n    return 1\n}\n\n# Wait for cluster configuration to propagate and be consistent across nodes.\nproc wait_for_cluster_propagation {} {\n    wait_for_condition 50 100 {\n        [cluster_config_consistent] eq 1\n    } else {\n        fail \"cluster config did not reach a consistent state\"\n    }\n}\n\n# Check if cluster's view of hostnames is consistent\nproc are_hostnames_propagated {match_string} {\n    for {set j 0} {$j < $::cluster_master_nodes + $::cluster_replica_nodes} {incr j} {\n        set cfg [R $j cluster slots]\n        foreach node $cfg {\n            for {set i 2} {$i < [llength $node]} {incr i} {\n                if {! [string match $match_string [lindex [lindex [lindex $node $i] 3] 1]] } {\n                    return 0\n                }\n            }\n        }\n    }\n    return 1\n}\n"
  },
  {
    "path": "tests/cluster/run.tcl",
    "content": "# Cluster test suite.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\ncd tests/cluster\nsource cluster.tcl\nsource ../instances.tcl\nsource ../../support/cluster.tcl ; # Redis Cluster client.\n\nset ::instances_count 20 ; # How many instances we use at max.\nset ::tlsdir \"../../tls\"\n\nproc main {} {\n    parse_options\n    spawn_instance redis $::redis_base_port $::instances_count {\n        \"cluster-enabled yes\"\n        \"appendonly yes\"\n        \"enable-protected-configs yes\"\n        \"enable-debug-command yes\"\n        \"save ''\"\n    }\n    run_tests\n    cleanup\n    end_tests\n}\n\nif {[catch main e]} {\n    puts $::errorInfo\n    if {$::pause_on_error} pause_on_error\n    cleanup\n    exit 1\n}\n"
  },
  {
    "path": "tests/cluster/tests/00-base.tcl",
    "content": "# Check the basic monitoring and failover capabilities.\n\nsource \"../tests/includes/init-tests.tcl\"\n\nif {$::simulate_error} {\n    test \"This test will fail\" {\n        fail \"Simulated error\"\n    }\n}\n\ntest \"Different nodes have different IDs\" {\n    set ids {}\n    set numnodes 0\n    foreach_redis_id id {\n        incr numnodes\n        # Every node should just know itself.\n        set nodeid [dict get [get_myself $id] id]\n        assert {$nodeid ne {}}\n        lappend ids $nodeid\n    }\n    set numids [llength [lsort -unique $ids]]\n    assert {$numids == $numnodes}\n}\n\ntest \"It is possible to perform slot allocation\" {\n    cluster_allocate_slots 5\n}\n\ntest \"After the join, every node gets a different config epoch\" {\n    set trynum 60\n    while {[incr trynum -1] != 0} {\n        # We check that this condition is true for *all* the nodes.\n        set ok 1 ; # Will be set to 0 every time a node is not ok.\n        foreach_redis_id id {\n            set epochs {}\n            foreach n [get_cluster_nodes $id] {\n                lappend epochs [dict get $n config_epoch]\n            }\n            if {[lsort $epochs] != [lsort -unique $epochs]} {\n                set ok 0 ; # At least one collision!\n            }\n        }\n        if {$ok} break\n        after 1000\n        puts -nonewline .\n        flush stdout\n    }\n    if {$trynum == 0} {\n        fail \"Config epoch conflict resolution is not working.\"\n    }\n}\n\ntest \"Nodes should report cluster_state is ok now\" {\n    assert_cluster_state ok\n}\n\ntest \"Sanity for CLUSTER COUNTKEYSINSLOT\" {\n    set reply [R 0 CLUSTER COUNTKEYSINSLOT 0]\n    assert {$reply eq 0}\n}\n\ntest \"It is possible to write and read from the cluster\" {\n    cluster_write_test 0\n}\n\ntest \"CLUSTER RESET SOFT test\" {\n    set last_epoch_node0 [get_info_field [R 0 cluster info] cluster_current_epoch]\n    R 0 FLUSHALL\n    R 0 CLUSTER RESET\n    assert {[get_info_field [R 0 
cluster info] cluster_current_epoch] eq $last_epoch_node0}\n\n    set last_epoch_node1 [get_info_field [R 1 cluster info] cluster_current_epoch]\n    R 1 FLUSHALL\n    R 1 CLUSTER RESET SOFT\n    assert {[get_info_field [R 1 cluster info] cluster_current_epoch] eq $last_epoch_node1}\n}\n\ntest \"Coverage: CLUSTER HELP\" {\n    assert_match \"*CLUSTER <subcommand> *\" [R 0 CLUSTER HELP]\n}\n\ntest \"Coverage: ASKING\" {\n    assert_equal {OK} [R 0 ASKING]\n}\n\ntest \"CLUSTER SLAVES and CLUSTER REPLICAS with zero replicas\" {\n    assert_equal {} [R 0 cluster slaves [R 0 CLUSTER MYID]]\n    assert_equal {} [R 0 cluster replicas [R 0 CLUSTER MYID]]\n}\n\ntest \"CLUSTER FORGET with invalid node ID\" {\n    assert_error {*ERR Unknown node*} {R 0 cluster forget 1}\n}\n"
  },
  {
    "path": "tests/cluster/tests/01-faildet.tcl",
    "content": "# Check the basic monitoring and failover capabilities.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\ntest \"Killing two slave nodes\" {\n    kill_instance redis 5\n    kill_instance redis 6\n}\n\ntest \"Cluster should still be up\" {\n    assert_cluster_state ok\n}\n\ntest \"Killing one master node\" {\n    kill_instance redis 0\n}\n\n# Note: the only slave of instance 0 is already down, so no\n# failover is possible; a failover would change the state back to ok.\ntest \"Cluster should be down now\" {\n    assert_cluster_state fail\n}\n\ntest \"Restarting master node\" {\n    restart_instance redis 0\n}\n\ntest \"Cluster should be up again\" {\n    assert_cluster_state ok\n}\n"
  },
  {
    "path": "tests/cluster/tests/02-failover.tcl",
    "content": "# Check the basic monitoring and failover capabilities.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\nset current_epoch [CI 1 cluster_current_epoch]\n\ntest \"Killing one master node\" {\n    kill_instance redis 0\n}\n\ntest \"Wait for failover\" {\n    wait_for_condition 1000 50 {\n        [CI 1 cluster_current_epoch] > $current_epoch\n    } else {\n        fail \"No failover detected\"\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 1\n}\n\ntest \"Instance #5 is now a master\" {\n    assert {[RI 5 role] eq {master}}\n}\n\ntest \"Restarting the previously killed master node\" {\n    restart_instance redis 0\n}\n\ntest \"Instance #0 gets converted into a slave\" {\n    wait_for_condition 1000 50 {\n        [RI 0 role] eq {slave}\n    } else {\n        fail \"Old master was not converted into slave\"\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/03-failover-loop.tcl",
    "content": "# Failover stress test.\n# In this test a different node is killed in a loop for N\n# iterations. The test checks that certain properties\n# are preserved across iterations.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\nset iterations 20\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n\nwhile {[incr iterations -1]} {\n    set tokill [randomInt 10]\n    set other [expr {($tokill+1)%10}] ; # Some other instance.\n    set key [randstring 20 20 alpha]\n    set val [randstring 20 20 alpha]\n    set role [RI $tokill role]\n    if {$role eq {master}} {\n        set slave {}\n        set myid [dict get [get_myself $tokill] id]\n        foreach_redis_id id {\n            if {$id == $tokill} continue\n            if {[dict get [get_myself $id] slaveof] eq $myid} {\n                set slave $id\n            }\n        }\n        if {$slave eq {}} {\n            fail \"Unable to retrieve slave's ID for master #$tokill\"\n        }\n    }\n\n    puts \"--- Iteration $iterations ---\"\n\n    if {$role eq {master}} {\n        test \"Wait for slave of #$tokill to sync\" {\n            wait_for_condition 1000 50 {\n                [string match {*state=online*} [RI $tokill slave0]]\n            } else {\n                fail \"Slave of node #$tokill is not ok\"\n            }\n        }\n        set slave_config_epoch [CI $slave cluster_my_epoch]\n    }\n\n    test \"Cluster is writable before failover\" {\n        for {set i 0} {$i < 100} {incr i} {\n            catch {$cluster set $key:$i $val:$i} err\n            assert {$err eq {OK}}\n        }\n        # Wait for the write to propagate to the slave if we\n        # are going to kill a master.\n        if {$role eq {master}} {\n            R $tokill wait 1 20000\n        }\n    }\n\n    test \"Terminating node #$tokill\" {\n        # Stop AOF so that an 
initial AOFRW won't prevent the instance from terminating\n        R $tokill config set appendonly no\n        kill_instance redis $tokill\n    }\n\n    if {$role eq {master}} {\n        test \"Wait for failover by #$slave with old epoch $slave_config_epoch\" {\n            wait_for_condition 1000 50 {\n                [CI $slave cluster_my_epoch] > $slave_config_epoch\n            } else {\n                fail \"No failover detected, epoch is still [CI $slave cluster_my_epoch]\"\n            }\n        }\n    }\n\n    test \"Cluster should eventually be up again\" {\n        assert_cluster_state ok\n    }\n\n    test \"Cluster is writable again\" {\n        for {set i 0} {$i < 100} {incr i} {\n            catch {$cluster set $key:$i:2 $val:$i:2} err\n            assert {$err eq {OK}}\n        }\n    }\n\n    test \"Restarting node #$tokill\" {\n        restart_instance redis $tokill\n    }\n\n    test \"Instance #$tokill is now a slave\" {\n        wait_for_condition 1000 50 {\n            [RI $tokill role] eq {slave}\n        } else {\n            fail \"Restarted instance is not a slave\"\n        }\n    }\n\n    test \"We can read back the value we set before\" {\n        for {set i 0} {$i < 100} {incr i} {\n            catch {$cluster get $key:$i} err\n            assert {$err eq \"$val:$i\"}\n            catch {$cluster get $key:$i:2} err\n            assert {$err eq \"$val:$i:2\"}\n        }\n    }\n}\n\ntest \"Post condition: current_epoch >= my_epoch everywhere\" {\n    foreach_redis_id id {\n        assert {[CI $id cluster_current_epoch] >= [CI $id cluster_my_epoch]}\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/04-resharding.tcl",
    "content": "# Resharding stress test.\n# In this test random writes are performed against the cluster while a\n# live resharding is in progress, checking that the cluster content\n# stays consistent.\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../../../tests/support/cli.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Enable AOF in all the instances\" {\n    foreach_redis_id id {\n        R $id config set appendonly yes\n        # We use \"appendfsync no\" because it's fast but also guarantees that\n        # write(2) is performed before replying to the client.\n        R $id config set appendfsync no\n    }\n\n    foreach_redis_id id {\n        wait_for_condition 1000 500 {\n            [RI $id aof_rewrite_in_progress] == 0 &&\n            [RI $id aof_enabled] == 1\n        } else {\n            fail \"Failed to enable AOF on instance #$id\"\n        }\n    }\n}\n\n# Return non-zero if the specified PID corresponds to a process still in\n# execution, otherwise 0 is returned.\nproc process_is_running {pid} {\n    # PS should return with an error if the PID is non-existent,\n    # and catch will return non-zero. 
We want to return non-zero if\n    # the PID exists, so we invert the return value with expr not operator.\n    expr {![catch {exec ps -p $pid}]}\n}\n\n# Our resharding test performs the following actions:\n#\n# - N commands are sent to the cluster in the course of the test.\n# - Every command selects a random key from key:0 to key:MAX-1.\n# - The operation RPUSH key <randomvalue> is performed.\n# - Tcl remembers into an array all the values pushed to each list.\n# - After N/2 commands, the resharding process is started in background.\n# - The test continues while the resharding is in progress.\n# - At the end of the test, we wait for the resharding process to stop.\n# - Finally the keys are checked to see if they contain the value they should.\n\nset numkeys 50000\nset numops 200000\nset start_node_port [get_instance_attrib redis 0 port]\nset cluster [redis_cluster 127.0.0.1:$start_node_port]\nif {$::tls} {\n    # setup a non-TLS cluster client to the TLS cluster\n    set plaintext_port [get_instance_attrib redis 0 plaintext-port]\n    set cluster_plaintext [redis_cluster 127.0.0.1:$plaintext_port 0]\n    puts \"Testing TLS cluster on start node 127.0.0.1:$start_node_port, plaintext port $plaintext_port\"\n} else {\n    set cluster_plaintext $cluster\n    puts \"Testing using non-TLS cluster\"\n}\ncatch {unset content}\narray set content {}\nset tribpid {}\n\ntest \"Cluster consistency during live resharding\" {\n    set ele 0\n    for {set j 0} {$j < $numops} {incr j} {\n        # Trigger the resharding once we execute half the ops.\n        if {$tribpid ne {} &&\n            ($j % 10000) == 0 &&\n            ![process_is_running $tribpid]} {\n            set tribpid {}\n        }\n\n        if {$j >= $numops/2 && $tribpid eq {}} {\n            puts -nonewline \"...Starting resharding...\"\n            flush stdout\n            set target [dict get [get_myself [randomInt 5]] id]\n            set tribpid [lindex [exec \\\n                ../../../src/redis-cli 
--cluster reshard \\\n                127.0.0.1:[get_instance_attrib redis 0 port] \\\n                --cluster-from all \\\n                --cluster-to $target \\\n                --cluster-slots 100 \\\n                --cluster-yes \\\n                {*}[rediscli_tls_config \"../../../tests\"] \\\n                | [info nameofexecutable] \\\n                ../tests/helpers/onlydots.tcl \\\n                &] 0]\n        }\n\n        # Write random data to random list.\n        set listid [randomInt $numkeys]\n        set key \"key:$listid\"\n        incr ele\n        # We write both with Lua scripts and with plain commands.\n        # This way we also stress the Lua -> Redis command invocation\n        # path, which has checks to prevent Lua from writing into the\n        # wrong hash slots.\n        # We also use both TLS and plaintext connections.\n        if {$listid % 3 == 0} {\n            $cluster rpush $key $ele\n        } elseif {$listid % 3 == 1} {\n            $cluster_plaintext rpush $key $ele\n        } else {\n            $cluster eval {redis.call(\"rpush\",KEYS[1],ARGV[1])} 1 $key $ele\n        }\n        lappend content($key) $ele\n\n        if {($j % 1000) == 0} {\n            puts -nonewline W; flush stdout\n        }\n    }\n\n    # Wait for the resharding process to end\n    wait_for_condition 1000 500 {\n        [process_is_running $tribpid] == 0\n    } else {\n        fail \"Resharding did not terminate within the allotted time.\"\n    }\n\n}\n\ntest \"Verify $numkeys keys for consistency with logical content\" {\n    # Check that the Redis Cluster content matches our logical content.\n    foreach {key value} [array get content] {\n        if {[$cluster lrange $key 0 -1] ne $value} {\n            fail \"Key $key expected to hold '$value' but actual content is [$cluster lrange $key 0 -1]\"\n        }\n    }\n}\n\ntest \"Terminate and restart all the instances\" {\n    foreach_redis_id id {\n        # Stop AOF so that an initial AOFRW won't prevent 
the instance from terminating\n        R $id config set appendonly no\n        kill_instance redis $id\n        restart_instance redis $id\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Verify $numkeys keys after the restart\" {\n    # Check that the Redis Cluster content matches our logical content.\n    foreach {key value} [array get content] {\n        if {[$cluster lrange $key 0 -1] ne $value} {\n            fail \"Key $key expected to hold '$value' but actual content is [$cluster lrange $key 0 -1]\"\n        }\n    }\n}\n\ntest \"Disable AOF in all the instances\" {\n    foreach_redis_id id {\n        R $id config set appendonly no\n    }\n}\n\ntest \"Verify slaves consistency\" {\n    set verified_masters 0\n    foreach_redis_id id {\n        set role [R $id role]\n        lassign $role myrole myoffset slaves\n        if {$myrole eq {slave}} continue\n        set masterport [get_instance_attrib redis $id port]\n        set masterdigest [R $id debug digest]\n        foreach_redis_id sid {\n            set srole [R $sid role]\n            if {[lindex $srole 0] eq {master}} continue\n            if {[lindex $srole 2] != $masterport} continue\n            wait_for_condition 1000 500 {\n                [R $sid debug digest] eq $masterdigest\n            } else {\n                fail \"Master and slave data digest are different\"\n            }\n            incr verified_masters\n        }\n    }\n    assert {$verified_masters >= 5}\n}\n\ntest \"Dump sanitization was skipped for migrations\" {\n    set verified_masters 0\n    foreach_redis_id id {\n        assert {[RI $id dump_payload_sanitizations] == 0}\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/05-slave-selection.tcl",
    "content": "# Slave selection test\n# Check the algorithm trying to pick the slave with the most complete history.\n\nsource \"../tests/includes/init-tests.tcl\"\n\n# Create a cluster with 5 masters and 10 slaves, so that we have 2\n# slaves for each master.\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 10\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"The first master has actually two slaves\" {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R 0 role] 2]] == 2\n        && [llength [R 0 cluster replicas [R 0 CLUSTER MYID]]] == 2\n    } else {\n        fail \"replicas didn't connect\"\n    }\n}\n\ntest \"CLUSTER SLAVES and CLUSTER REPLICAS output is consistent\" {\n    # Because we already have coverage for the CLUSTER REPLICAS output elsewhere,\n    # here we simply check that the two outputs are consistent, which also\n    # covers CLUSTER SLAVES.\n    set myid [R 0 CLUSTER MYID]\n    R 0 multi\n    R 0 cluster slaves $myid\n    R 0 cluster replicas $myid\n    lassign [R 0 exec] res res2\n    assert_equal $res $res2\n}\n\ntest {Slaves of #0 are instances #5 and #10 as expected} {\n    set port0 [get_instance_attrib redis 0 port]\n    assert {[lindex [R 5 role] 2] == $port0}\n    assert {[lindex [R 10 role] 2] == $port0}\n}\n\ntest \"Instance #5 and #10 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up} &&\n        [RI 10 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 or #10 master link status is not up\"\n    }\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n\ntest \"Slaves are both able to receive and acknowledge writes\" {\n    for {set j 0} {$j < 100} {incr j} {\n        $cluster set $j $j\n    }\n    assert {[R 0 wait 2 60000] == 2}\n}\n\ntest \"Write data while slave #10 is paused and can't receive it\" {\n    # Stop the slave with a multi/exec transaction so that the master will\n    # be killed as soon as it can 
accept writes again.\n    R 10 multi\n    R 10 debug sleep 10\n    R 10 client kill 127.0.0.1:$port0\n    R 10 deferred 1\n    R 10 exec\n\n    # Write some data the slave can't receive.\n    for {set j 0} {$j < 100} {incr j} {\n        $cluster set $j $j\n    }\n\n    # Prevent the master from accepting new slaves.\n    # Use a large pause value since we'll kill it anyway.\n    R 0 CLIENT PAUSE 60000\n\n    # Wait for the slave to become available again\n    R 10 deferred 0\n    assert {[R 10 read] eq {OK OK}}\n\n    # Kill the master so that a reconnection will not be possible.\n    kill_instance redis 0\n}\n\ntest \"Wait for instance #5 (and not #10) to turn into a master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 role] eq {master}\n    } else {\n        fail \"No failover detected\"\n    }\n}\n\ntest \"Wait for node #10 to come back alive before ending the test\" {\n    R 10 ping\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Node #10 should eventually replicate node #5\" {\n    set port5 [get_instance_attrib redis 5 port]\n    wait_for_condition 1000 50 {\n        ([lindex [R 10 role] 2] == $port5) &&\n        ([lindex [R 10 role] 3] eq {connected})\n    } else {\n        fail \"#10 didn't become a slave of #5\"\n    }\n}\n\nsource \"../tests/includes/init-tests.tcl\"\n\n# Create a cluster with 3 masters and 15 slaves, so that we have 5\n# slaves for each master.\ntest \"Create a 3 nodes cluster\" {\n    create_cluster 3 15\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"The first master has actually 5 slaves\" {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R 0 role] 2]] == 5\n    } else {\n        fail \"replicas didn't connect\"\n    }\n}\n\ntest {Slaves of #0 are instances #3, #6, #9, #12 and #15 as expected} {\n    set port0 [get_instance_attrib redis 0 port]\n    assert {[lindex [R 3 role] 2] == $port0}\n    assert {[lindex [R 6 role] 2] == $port0}\n    
assert {[lindex [R 9 role] 2] == $port0}\n    assert {[lindex [R 12 role] 2] == $port0}\n    assert {[lindex [R 15 role] 2] == $port0}\n}\n\ntest {Instance #3, #6, #9, #12 and #15 synced with the master} {\n    wait_for_condition 1000 50 {\n        [RI 3 master_link_status] eq {up} &&\n        [RI 6 master_link_status] eq {up} &&\n        [RI 9 master_link_status] eq {up} &&\n        [RI 12 master_link_status] eq {up} &&\n        [RI 15 master_link_status] eq {up}\n    } else {\n        fail \"Instance #3 or #6 or #9 or #12 or #15 master link status is not up\"\n    }\n}\n\nproc master_detected {instances} {\n    foreach instance [dict keys $instances] {\n        if {[RI $instance role] eq {master}} {\n            return true\n        }\n    }\n\n    return false\n}\n\ntest \"New Master down consecutively\" {\n    set instances [dict create 0 1 3 1 6 1 9 1 12 1 15 1]\n\n    set loops [expr {[dict size $instances]-1}]\n    for {set i 0} {$i < $loops} {incr i} {\n        set master_id -1\n        foreach instance [dict keys $instances] {\n            if {[RI $instance role] eq {master}} {\n                set master_id $instance\n                break;\n            }\n        }\n\n        if {$master_id eq -1} {\n            fail \"no master detected, #loop $i\"\n        }\n\n        set instances [dict remove $instances $master_id]\n\n        kill_instance redis $master_id\n        wait_for_condition 1000 50 {\n            [master_detected $instances]\n        } else {\n            fail \"No failover detected when master $master_id fails\"\n        }\n\n        assert_cluster_state ok\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/06-slave-stop-cond.tcl",
    "content": "# Slave stop condition test\n# Check that a slave whose link with the master was broken for longer than\n# the allowed limit will not try to fail over its master.\n\nsource \"../tests/includes/init-tests.tcl\"\n\n# Create a cluster with 5 masters and 5 slaves.\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"The first master has actually one slave\" {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R 0 role] 2]] == 1\n    } else {\n        fail \"replicas didn't connect\"\n    }\n}\n\ntest {Slave of #0 is instance #5 as expected} {\n    set port0 [get_instance_attrib redis 0 port]\n    assert {[lindex [R 5 role] 2] == $port0}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\ntest \"Lower the slave validity factor of #5 to the value of 2\" {\n    assert {[R 5 config set cluster-slave-validity-factor 2] eq {OK}}\n}\n\ntest \"Break master-slave link and prevent further reconnections\" {\n    # Stop the slave with a multi/exec transaction so that the master will\n    # be killed as soon as it can accept writes again.\n    R 5 multi\n    R 5 client kill 127.0.0.1:$port0\n    # Here we should sleep 6 or more seconds (node_timeout * slave_validity)\n    # but the actual validity time is incremented by the\n    # repl-ping-slave-period value which is 10 seconds by default. 
So we\n    # need to wait more than 16 seconds.\n    R 5 debug sleep 20\n    R 5 deferred 1\n    R 5 exec\n\n    # Prevent the master from accepting new slaves.\n    # Use a large pause value since we'll kill it anyway.\n    R 0 CLIENT PAUSE 60000\n\n    # Wait for the slave to return available again\n    R 5 deferred 0\n    assert {[R 5 read] eq {OK OK}}\n\n    # Kill the master so that a reconnection will not be possible.\n    kill_instance redis 0\n}\n\ntest \"Slave #5 is reachable and alive\" {\n    assert {[R 5 ping] eq {PONG}}\n}\n\ntest \"Slave #5 should not be able to failover\" {\n    after 10000\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Cluster should be down\" {\n    assert_cluster_state fail\n}\n"
  },
  {
    "path": "tests/cluster/tests/07-replica-migration.tcl",
    "content": "# Replica migration test.\n# Check that orphaned masters are joined by replicas of masters having\n# multiple replicas attached, according to the migration barrier settings.\n\nsource \"../tests/includes/init-tests.tcl\"\n\n# Create a cluster with 5 master and 10 slaves, so that we have 2\n# slaves for each master.\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 10\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Each master should have two replicas attached\" {\n    foreach_redis_id id {\n        if {$id < 5} {\n            wait_for_condition 1000 50 {\n                [llength [lindex [R $id role] 2]] == 2\n            } else {\n                fail \"Master #$id does not have 2 slaves as expected\"\n            }\n        }\n    }\n}\n\ntest \"Killing all the slaves of master #0 and #1\" {\n    kill_instance redis 5\n    kill_instance redis 10\n    kill_instance redis 6\n    kill_instance redis 11\n    after 4000\n}\n\nforeach_redis_id id {\n    if {$id < 5} {\n        test \"Master #$id should have at least one replica\" {\n            wait_for_condition 1000 50 {\n                [llength [lindex [R $id role] 2]] >= 1\n            } else {\n                fail \"Master #$id has no replicas\"\n            }\n        }\n    }\n}\n\n# Now test the migration to a master which used to be a slave, after\n# a failver.\n\nsource \"../tests/includes/init-tests.tcl\"\n\n# Create a cluster with 5 master and 10 slaves, so that we have 2\n# slaves for each master.\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 10\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Kill slave #7 of master #2. 
Only slave left is #12 now\" {\n    kill_instance redis 7\n}\n\nset current_epoch [CI 1 cluster_current_epoch]\n\ntest \"Killing master node #2, #12 should failover\" {\n    kill_instance redis 2\n}\n\ntest \"Wait for failover\" {\n    wait_for_condition 1000 50 {\n        [CI 1 cluster_current_epoch] > $current_epoch\n    } else {\n        fail \"No failover detected\"\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 1\n}\n\ntest \"Instance 12 is now a master without slaves\" {\n    assert {[RI 12 role] eq {master}}\n}\n\n# The remaining instance is now without slaves. Some other slave\n# should migrate to it.\n\ntest \"Master #12 should get at least one migrated replica\" {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R 12 role] 2]] >= 1\n    } else {\n        fail \"Master #12 has no replicas\"\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/08-update-msg.tcl",
    "content": "# Test UPDATE messages sent by other nodes when the currently authorirative\n# master is unavailable. The test is performed in the following steps:\n#\n# 1) Master goes down.\n# 2) Slave failover and becomes new master.\n# 3) New master is partitioned away.\n# 4) Old master returns.\n# 5) At this point we expect the old master to turn into a slave ASAP because\n#    of the UPDATE messages it will receive from the other nodes when its\n#    configuration will be found to be outdated.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\nset current_epoch [CI 1 cluster_current_epoch]\n\ntest \"Killing one master node\" {\n    kill_instance redis 0\n}\n\ntest \"Wait for failover\" {\n    wait_for_condition 1000 50 {\n        [CI 1 cluster_current_epoch] > $current_epoch\n    } else {\n        fail \"No failover detected\"\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 1\n}\n\ntest \"Instance #5 is now a master\" {\n    assert {[RI 5 role] eq {master}}\n}\n\ntest \"Killing the new master #5\" {\n    kill_instance redis 5\n}\n\ntest \"Cluster should be down now\" {\n    assert_cluster_state fail\n}\n\ntest \"Restarting the old master node\" {\n    restart_instance redis 0\n}\n\ntest \"Instance #0 gets converted into a slave\" {\n    wait_for_condition 1000 50 {\n        [RI 0 role] eq {slave}\n    } else {\n        fail \"Old master was not converted into slave\"\n    }\n}\n\ntest 
\"Restarting the new master node\" {\n    restart_instance redis 5\n}\n\ntest \"Cluster is up again\" {\n    assert_cluster_state ok\n}\n"
  },
  {
    "path": "tests/cluster/tests/09-pubsub.tcl",
    "content": "# Test PUBLISH propagation across the cluster.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\nproc test_cluster_publish {instance instances} {\n    # Subscribe all the instances but the one we use to send.\n    for {set j 0} {$j < $instances} {incr j} {\n        if {$j != $instance} {\n            R $j deferred 1\n            R $j subscribe testchannel\n            R $j read; # Read the subscribe reply\n        }\n    }\n\n    set data [randomValue]\n    R $instance PUBLISH testchannel $data\n\n    # Read the message back from all the nodes.\n    for {set j 0} {$j < $instances} {incr j} {\n        if {$j != $instance} {\n            set msg [R $j read]\n            assert {$data eq [lindex $msg 2]}\n            R $j unsubscribe testchannel\n            R $j read; # Read the unsubscribe reply\n            R $j deferred 0\n        }\n    }\n}\n\ntest \"Test publishing to master\" {\n    test_cluster_publish 0 10\n}\n\ntest \"Test publishing to slave\" {\n    test_cluster_publish 5 10\n}\n"
  },
  {
    "path": "tests/cluster/tests/10-manual-failover.tcl",
    "content": "# Check the manual failover\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\nset current_epoch [CI 1 cluster_current_epoch]\n\nset numkeys 50000\nset numops 10000\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\ncatch {unset content}\narray set content {}\n\ntest \"Send CLUSTER FAILOVER to #5, during load\" {\n    for {set j 0} {$j < $numops} {incr j} {\n        # Write random data to random list.\n        set listid [randomInt $numkeys]\n        set key \"key:$listid\"\n        set ele [randomValue]\n        # We write both with Lua scripts and with plain commands.\n        # This way we are able to stress Lua -> Redis command invocation\n        # as well, that has tests to prevent Lua to write into wrong\n        # hash slots.\n        if {$listid % 2} {\n            $cluster rpush $key $ele\n        } else {\n           $cluster eval {redis.call(\"rpush\",KEYS[1],ARGV[1])} 1 $key $ele\n        }\n        lappend content($key) $ele\n\n        if {($j % 1000) == 0} {\n            puts -nonewline W; flush stdout\n        }\n\n        if {$j == $numops/2} {R 5 cluster failover}\n    }\n}\n\ntest \"Wait for failover\" {\n    wait_for_condition 1000 50 {\n        [CI 1 cluster_current_epoch] > $current_epoch\n    } else {\n        fail \"No failover detected\"\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 1\n}\n\ntest \"Instance #5 is now a master\" {\n   
 assert {[RI 5 role] eq {master}}\n}\n\ntest \"Verify $numkeys keys for consistency with logical content\" {\n    # Check that the Redis Cluster content matches our logical content.\n    foreach {key value} [array get content] {\n        assert {[$cluster lrange $key 0 -1] eq $value}\n    }\n}\n\ntest \"Instance #0 gets converted into a slave\" {\n    wait_for_condition 1000 50 {\n        [RI 0 role] eq {slave}\n    } else {\n        fail \"Old master was not converted into slave\"\n    }\n}\n\n## Check that manual failover does not happen if we can't talk with the master.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\ntest \"Make instance #0 unreachable without killing it\" {\n    R 0 deferred 1\n    R 0 DEBUG SLEEP 10\n}\n\ntest \"Send CLUSTER FAILOVER to instance #5\" {\n    R 5 cluster failover\n}\n\ntest \"Instance #5 is still a slave after some time (no failover)\" {\n    after 5000\n    assert {[RI 5 role] eq {master}}\n}\n\ntest \"Wait for instance #0 to return back alive\" {\n    R 0 deferred 0\n    assert {[R 0 read] eq {OK}}\n}\n\n## Check with \"force\" failover happens anyway.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 
master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\ntest \"Make instance #0 unreachable without killing it\" {\n    R 0 deferred 1\n    R 0 DEBUG SLEEP 10\n}\n\ntest \"Send CLUSTER FAILOVER to instance #5\" {\n    R 5 cluster failover force\n}\n\ntest \"Instance #5 is a master after some time\" {\n    wait_for_condition 1000 50 {\n        [RI 5 role] eq {master}\n    } else {\n        fail \"Instance #5 is not a master after some time regardless of FORCE\"\n    }\n}\n\ntest \"Wait for instance #0 to return back alive\" {\n    R 0 deferred 0\n    assert {[R 0 read] eq {OK}}\n}\n"
  },
  {
    "path": "tests/cluster/tests/11-manual-takeover.tcl",
    "content": "# Manual takeover test\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\n# For this test, disable replica failover until\n# all of the primaries are confirmed killed. Otherwise\n# there might be enough time to elect a replica.\nset replica_ids { 5 6 7 }\nforeach id $replica_ids {\n    R $id config set cluster-replica-no-failover yes\n}\n\ntest \"Killing majority of master nodes\" {\n    kill_instance redis 0\n    kill_instance redis 1\n    kill_instance redis 2\n}\n\nforeach id $replica_ids {\n    R $id config set cluster-replica-no-failover no\n}\n\ntest \"Cluster should eventually be down\" {\n    assert_cluster_state fail\n}\n\ntest \"Use takeover to bring slaves back\" {\n    foreach id $replica_ids {\n        R $id cluster failover takeover\n    }\n}\n\ntest \"Cluster should eventually be up again\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 4\n}\n\ntest \"Instance #5, #6, #7 are now masters\" {\n    foreach id $replica_ids {\n        assert {[RI $id role] eq {master}}\n    }\n}\n\ntest \"Restarting the previously killed master nodes\" {\n    restart_instance redis 0\n    restart_instance redis 1\n    restart_instance redis 2\n}\n\ntest \"Instance #0, #1, #2 gets converted into a slaves\" {\n    wait_for_condition 1000 50 {\n        [RI 0 role] eq {slave} && [RI 1 role] eq {slave} && [RI 2 role] eq {slave}\n    } else {\n        fail \"Old masters not converted into slaves\"\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/12-replica-migration-2.tcl",
    "content": "# Replica migration test #2.\n#\n# Check that the status of master that can be targeted by replica migration\n# is acquired again, after being getting slots again, in a cluster where the\n# other masters have slaves.\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../../../tests/support/cli.tcl\"\n\n# Create a cluster with 5 master and 15 slaves, to make sure there are no\n# empty masters and make rebalancing simpler to handle during the test.\ntest \"Create a 5 nodes cluster\" {\n    cluster_create_with_continuous_slots 5 15\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Each master should have at least two replicas attached\" {\n    foreach_redis_id id {\n        if {$id < 5} {\n            wait_for_condition 1000 50 {\n                [llength [lindex [R $id role] 2]] >= 2\n            } else {\n                fail \"Master #$id does not have 2 slaves as expected\"\n            }\n        }\n    }\n}\n\ntest \"Set allow-replica-migration yes\" {\n    foreach_redis_id id {\n        R $id CONFIG SET cluster-allow-replica-migration yes\n    }\n}\n\nset master0_id [dict get [get_myself 0] id]\ntest \"Resharding all the master #0 slots away from it\" {\n    set output [exec \\\n        ../../../src/redis-cli --cluster rebalance \\\n        127.0.0.1:[get_instance_attrib redis 0 port] \\\n        {*}[rediscli_tls_config \"../../../tests\"] \\\n        --cluster-weight ${master0_id}=0 >@ stdout ]\n\n}\n\ntest \"Master #0 who lost all slots should turn into a replica without replicas\" {\n    wait_for_condition 2000 50 {\n        [RI 0 role] == \"slave\" && [RI 0 connected_slaves] == 0\n    } else {\n        puts [R 0 info replication]\n        fail \"Master #0 didn't turn itself into a replica\"\n    }\n}\n\ntest \"Resharding back some slot to master #0\" {\n    # Wait for the cluster config to propagate before attempting a\n    # new resharding.\n    after 10000\n    set output [exec \\\n        
../../../src/redis-cli --cluster rebalance \\\n        127.0.0.1:[get_instance_attrib redis 0 port] \\\n        {*}[rediscli_tls_config \"../../../tests\"] \\\n        --cluster-weight ${master0_id}=.01 \\\n        --cluster-use-empty-masters  >@ stdout]\n}\n\ntest \"Master #0 should re-acquire one or more replicas\" {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R 0 role] 2]] >= 1\n    } else {\n        fail \"Master #0 has no has replicas\"\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/12.1-replica-migration-3.tcl",
    "content": "# Replica migration test #2.\n#\n# Check that if 'cluster-allow-replica-migration' is set to 'no', slaves do not\n# migrate when master becomes empty.\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../tests/includes/utils.tcl\"\n\n# Create a cluster with 5 master and 15 slaves, to make sure there are no\n# empty masters and make rebalancing simpler to handle during the test.\ntest \"Create a 5 nodes cluster\" {\n    cluster_create_with_continuous_slots 5 15\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Each master should have at least two replicas attached\" {\n    foreach_redis_id id {\n        if {$id < 5} {\n            wait_for_condition 1000 50 {\n                [llength [lindex [R $id role] 2]] >= 2\n            } else {\n                fail \"Master #$id does not have 2 slaves as expected\"\n            }\n        }\n    }\n}\n\ntest \"Set allow-replica-migration no\" {\n    foreach_redis_id id {\n        R $id CONFIG SET cluster-allow-replica-migration no\n    }\n}\n\nset master0_id [dict get [get_myself 0] id]\ntest \"Resharding all the master #0 slots away from it\" {\n    set output [exec \\\n        ../../../src/redis-cli --cluster rebalance \\\n        127.0.0.1:[get_instance_attrib redis 0 port] \\\n        {*}[rediscli_tls_config \"../../../tests\"] \\\n        --cluster-weight ${master0_id}=0 >@ stdout ]\n}\n\ntest \"Wait cluster to be stable\" {\n    wait_cluster_stable\n}\n\ntest \"Master #0 still should have its replicas\" {\n    assert { [llength [lindex [R 0 role] 2]] >= 2 }\n}\n\ntest \"Each master should have at least two replicas attached\" {\n    foreach_redis_id id {\n        if {$id < 5} {\n            wait_for_condition 1000 50 {\n                [llength [lindex [R $id role] 2]] >= 2\n            } else {\n                fail \"Master #$id does not have 2 slaves as expected\"\n            }\n        }\n    }\n}\n\n"
  },
  {
    "path": "tests/cluster/tests/13-no-failover-option.tcl",
    "content": "# Check that the no-failover option works\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n\n    # Configure it to never failover the master\n    R 5 CONFIG SET cluster-slave-no-failover yes\n}\n\ntest \"Instance #5 synced with the master\" {\n    wait_for_condition 1000 50 {\n        [RI 5 master_link_status] eq {up}\n    } else {\n        fail \"Instance #5 master link status is not up\"\n    }\n}\n\ntest \"The nofailover flag is propagated\" {\n    set slave5_id [dict get [get_myself 5] id]\n\n    foreach_redis_id id {\n        wait_for_condition 1000 50 {\n            [has_flag [get_node_by_id $id $slave5_id] nofailover]\n        } else {\n            fail \"Instance $id can't see the nofailover flag of slave\"\n        }\n    }\n}\n\nset current_epoch [CI 1 cluster_current_epoch]\n\ntest \"Killing one master node\" {\n    kill_instance redis 0\n}\n\ntest \"Cluster should be still down after some time\" {\n    after 10000\n    assert_cluster_state fail\n}\n\ntest \"Instance #5 is still a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"Restarting the previously killed master node\" {\n    restart_instance redis 0\n}\n"
  },
  {
    "path": "tests/cluster/tests/14-consistency-check.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\nsource \"../../../tests/support/cli.tcl\"\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster 5 5\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\nproc find_non_empty_master {} {\n    set master_id_no {}\n    foreach_redis_id id {\n        if {[RI $id role] eq {master} && [R $id dbsize] > 0} {\n            set master_id_no $id\n            break\n        }\n    }\n    return $master_id_no\n}\n\nproc get_one_of_my_replica {id} {\n    wait_for_condition 1000 50 {\n        [llength [lindex [R $id role] 2]] > 0\n    } else {\n        fail \"replicas didn't connect\"\n    }\n    set replica_port [lindex [lindex [lindex [R $id role] 2] 0] 1]\n    set replica_id_num [get_instance_id_by_port redis $replica_port]\n\n    # To avoid -LOADING reply, wait until replica syncs with master.\n    wait_for_condition 1000 50 {\n        [RI $replica_id_num master_link_status] eq {up} &&\n        [R $replica_id_num dbsize] eq [R $id dbsize]\n    } else {\n        fail \"Replica did not sync in time.\"\n    }\n    return $replica_id_num\n}\n\nproc cluster_write_keys_with_expire {id ttl} {\n    set prefix [randstring 20 20 alpha]\n    set port [get_instance_attrib redis $id port]\n    set cluster [redis_cluster 127.0.0.1:$port]\n    for {set j 100} {$j < 200} {incr j} {\n        $cluster setex key_expire.$j $ttl $prefix.$j\n    }\n    $cluster close\n}\n\n# make sure that replica who restarts from persistence will load keys\n# that have already expired, critical for correct execution of commands\n# that arrive from the master\nproc test_slave_load_expired_keys {aof} {\n    test \"Slave expired keys is loaded when restarted: appendonly=$aof\" {\n        set master_id [find_non_empty_master]\n        set replica_id [get_one_of_my_replica $master_id]\n\n        set master_dbsize_0 [R $master_id dbsize]\n        set replica_dbsize_0 [R 
$replica_id dbsize]\n        assert_equal $master_dbsize_0 $replica_dbsize_0\n\n        # config the replica persistency and rewrite the config file to survive restart\n        # note that this needs to be done before populating the volatile keys since\n        # that triggers and AOFRW, and we rather the AOF file to have 'SET PXAT' commands\n        # rather than an RDB with volatile keys\n        R $replica_id config set appendonly $aof\n        R $replica_id config rewrite\n\n        # fill with 100 keys with 3 second TTL\n        set data_ttl 3\n        cluster_write_keys_with_expire $master_id $data_ttl\n\n        # wait for replica to be in sync with master\n        wait_for_condition 500 10 {\n            [RI $replica_id master_link_status] eq {up} &&\n            [R $replica_id dbsize] eq [R $master_id dbsize]\n        } else {\n            fail \"replica didn't sync\"\n        }\n        \n        set replica_dbsize_1 [R $replica_id dbsize]\n        assert {$replica_dbsize_1 > $replica_dbsize_0}\n\n        # make replica create persistence file\n        if {$aof == \"yes\"} {\n            # we need to wait for the initial AOFRW to be done, otherwise\n            # kill_instance (which now uses SIGTERM will fail (\"Writing initial AOF, can't exit\")\n            wait_for_condition 100 10 {\n                [RI $replica_id aof_rewrite_scheduled] eq 0 &&\n                [RI $replica_id aof_rewrite_in_progress] eq 0\n            } else {\n                fail \"AOFRW didn't finish\"\n            }\n        } else {\n            R $replica_id save\n        }\n\n        # kill the replica (would stay down until re-started)\n        kill_instance redis $replica_id\n\n        # Make sure the master doesn't do active expire (sending DELs to the replica)\n        R $master_id DEBUG SET-ACTIVE-EXPIRE 0\n\n        # wait for all the keys to get logically expired\n        after [expr $data_ttl*1000]\n\n        # start the replica again (loading an RDB or AOF file)\n   
     restart_instance redis $replica_id\n\n        # Replica may start a full sync after restart, trying in a loop to avoid\n        # -LOADING reply in that case.\n        wait_for_condition 1000 50 {\n            [catch {set replica_dbsize_3 [R $replica_id dbsize]} e] == 0\n        } else {\n            fail \"Replica is not up.\"\n        }\n\n        # make sure the keys are still there\n        assert {$replica_dbsize_3 > $replica_dbsize_0}\n        \n        # restore settings\n        R $master_id DEBUG SET-ACTIVE-EXPIRE 1\n\n        # wait for the master to expire all keys and replica to get the DELs\n        wait_for_condition 500 10 {\n            [RI $replica_id master_link_status] eq {up} &&\n            [R $replica_id dbsize] eq $master_dbsize_0\n        } else {\n            fail \"keys didn't expire\"\n        }\n    }\n}\n\ntest_slave_load_expired_keys no\ntest_slave_load_expired_keys yes\n"
  },
  {
    "path": "tests/cluster/tests/15-cluster-slots.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\nproc cluster_allocate_mixedSlots {n} {\n    set slot 16383\n    while {$slot >= 0} {\n        set node [expr {$slot % $n}]\n        lappend slots_$node $slot\n        incr slot -1\n    }\n    for {set j 0} {$j < $n} {incr j} {\n        R $j cluster addslots {*}[set slots_${j}]\n    }\n}\n\nproc create_cluster_with_mixedSlot {masters slaves} {\n    cluster_allocate_mixedSlots $masters\n    if {$slaves} {\n        cluster_allocate_slaves $masters $slaves\n    }\n    assert_cluster_state ok\n}\n\ntest \"Create a 5 nodes cluster\" {\n    create_cluster_with_mixedSlot 5 15\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Instance #5 is a slave\" {\n    assert {[RI 5 role] eq {slave}}\n}\n\ntest \"client do not break when cluster slot\" {\n    R 0 config set client-output-buffer-limit \"normal 33554432 16777216 60\"\n    if { [catch {R 0 cluster slots}] } {\n        fail \"output overflow when cluster slots\"\n    }\n}\n\ntest \"client can handle keys with hash tag\" {\n    set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n    $cluster set foo{tag} bar\n    $cluster close\n}\n\ntest \"slot migration is valid from primary to another primary\" {\n    set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n    set key order1\n    set slot [$cluster cluster keyslot $key]\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot node $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot node $nodeto(id)]\n}\n\ntest \"Client unblocks after slot migration from one primary to another\" {\n    set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n    set key mystream\n    set slot [$cluster cluster keyslot $key]\n    array set 
nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    # Create a stream group on the source node\n    $nodefrom(link) XGROUP CREATE $key mygroup $ MKSTREAM \n\n    # block another client on xreadgroup\n    set rd [redis_deferring_client_by_addr $nodefrom(host) $nodefrom(port)]\n    $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS $key \">\"\n    wait_for_condition 1000 50 {\n        [getInfoProperty [$nodefrom(link) info clients] blocked_clients] eq {1}\n    } else {\n        fail \"client wasn't blocked\"\n    }\n\n    # Start slot migration from the source node to the target node.\n    # Because the `unblock_on_nokey` option of xreadgroup is set to 1, the client\n    # will be unblocked when the key `mystream` is migrated.\n    assert_equal {OK} [$nodefrom(link) CLUSTER SETSLOT $slot MIGRATING $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) CLUSTER SETSLOT $slot IMPORTING $nodefrom(id)]\n    $nodefrom(link) MIGRATE $nodeto(host) $nodeto(port) $key 0 5000\n}\n\ntest \"slot migration is invalid from primary to replica\" {\n    set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n    set key order1\n    set slot [$cluster cluster keyslot $key]\n    array set nodefrom [$cluster masternode_for_slot $slot]\n\n    # Get replica node serving slot.\n    set replicanodeinfo [$cluster cluster replicas $nodefrom(id)]\n    puts $replicanodeinfo\n    set args [split $replicanodeinfo \" \"]\n    set replicaid [lindex [split [lindex $args 0] \\{] 1]\n    puts $replicaid\n\n    catch {[$nodefrom(link) cluster setslot $slot node $replicaid]} err\n    assert_match \"*Target node is not a master\" $err\n}\n\nproc count_bound_slots {n} {\n     set slot_count 0\n     foreach slot_range_mapping [$n cluster slots] {\n         set start_slot [lindex $slot_range_mapping 0]\n         set end_slot [lindex $slot_range_mapping 1]\n         incr slot_count [expr $end_slot - $start_slot + 1]\n     }\n     
return $slot_count\n }\n\n test \"slot must be unbound on the owner when it is deleted\" {\n     set node0 [Rn 0]\n     set node1 [Rn 1]\n     assert {[count_bound_slots $node0] eq 16384}\n     assert {[count_bound_slots $node1] eq 16384}\n\n     set slot_to_delete 0\n     # Delete\n     $node0 CLUSTER DELSLOTS $slot_to_delete\n\n     # Verify\n     # The node that owns the slot must unbind the slot that was deleted\n     wait_for_condition 1000 50 {\n         [count_bound_slots $node0] == 16383\n     } else {\n         fail \"Cluster slot deletion was not recorded on the node that owns the slot\"\n     }\n\n     # We don't propagate slot deletion across all nodes in the cluster.\n     # This can lead to extra redirect before the clients find out that the slot is unbound.\n     wait_for_condition 1000 50 {\n         [count_bound_slots $node1] == 16384\n     } else {\n         fail \"Cluster slot deletion should not be propagated to all nodes in the cluster\"\n     }\n }\n\nif {$::tls} {\n    test {CLUSTER SLOTS from non-TLS client in TLS cluster} {\n        set slots_tls [R 0 cluster slots]\n        set host [get_instance_attrib redis 0 host]\n        set plaintext_port [get_instance_attrib redis 0 plaintext-port]\n        set client_plain [redis $host $plaintext_port 0 0]\n        set slots_plain [$client_plain cluster slots]\n        $client_plain close\n        # Compare the ports in the first row\n        assert_no_match [lindex $slots_tls 0 3 1] [lindex $slots_plain 0 3 1]\n    }\n}"
  },
  {
    "path": "tests/cluster/tests/16-transactions-on-replica.tcl",
    "content": "# Check basic transactions on a replica.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a primary with a replica\" {\n    create_cluster 1 1\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\nset primary [Rn 0]\nset replica [Rn 1]\n\ntest \"Can't read from replica without READONLY\" {\n    $primary SET a 1\n    wait_for_ofs_sync $primary $replica\n    catch {$replica GET a} err\n    assert {[string range $err 0 4] eq {MOVED}}\n}\n\ntest \"Can't read from replica after READWRITE\" {\n    $replica READWRITE\n    catch {$replica GET a} err\n    assert {[string range $err 0 4] eq {MOVED}}\n}\n\ntest \"Can read from replica after READONLY\" {\n    $replica READONLY\n    assert {[$replica GET a] eq {1}}\n}\n\ntest \"Can perform HSET primary and HGET from replica\" {\n    $primary HSET h a 1\n    $primary HSET h b 2\n    $primary HSET h c 3\n    wait_for_ofs_sync $primary $replica\n    assert {[$replica HGET h a] eq {1}}\n    assert {[$replica HGET h b] eq {2}}\n    assert {[$replica HGET h c] eq {3}}\n}\n\ntest \"Can MULTI-EXEC transaction of HGET operations from replica\" {\n    $replica MULTI\n    assert {[$replica HGET h a] eq {QUEUED}}\n    assert {[$replica HGET h b] eq {QUEUED}}\n    assert {[$replica HGET h c] eq {QUEUED}}\n    assert {[$replica EXEC] eq {1 2 3}}\n}\n\ntest \"MULTI-EXEC with write operations is MOVED\" {\n    $replica MULTI\n    catch {$replica HSET h b 4} err\n    assert {[string range $err 0 4] eq {MOVED}}\n    catch {$replica exec} err\n    assert {[string range $err 0 8] eq {EXECABORT}}\n}\n\ntest \"read-only blocking operations from replica\" {\n    set rd [redis_deferring_client redis 1]\n    $rd readonly\n    $rd read\n    $rd XREAD BLOCK 0 STREAMS k 0\n\n    wait_for_condition 1000 50 {\n        [RI 1 blocked_clients] eq {1}\n    } else {\n        fail \"client wasn't blocked\"\n    }\n\n    $primary XADD k * foo bar\n    set res [$rd read]\n    set res [lindex [lindex [lindex 
[lindex $res 0] 1] 0] 1]\n    assert {$res eq {foo bar}}\n    $rd close\n}\n\ntest \"reply MOVED when eval from replica for update\" {\n    catch {[$replica eval {#!lua\n        return redis.call('del','a')\n        } 1 a\n    ]} err\n    assert {[string range $err 0 4] eq {MOVED}}\n}"
  },
  {
    "path": "tests/cluster/tests/17-diskless-load-swapdb.tcl",
    "content": "# Check that the replica's keys and keys-to-slots map are correct after a failed diskless load using SWAPDB.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a primary with a replica\" {\n    create_cluster 1 1\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\ntest \"Main db not affected when fail to diskless load\" {\n    set master [Rn 0]\n    set replica [Rn 1]\n    set master_id 0\n    set replica_id 1\n\n    $replica READONLY\n    $replica config set repl-diskless-load swapdb\n    $replica config set appendonly no\n    $replica config set save \"\"\n    $replica config rewrite\n    $master config set repl-backlog-size 1024\n    $master config set repl-diskless-sync yes\n    $master config set repl-diskless-sync-delay 0\n    $master config set rdb-key-save-delay 10000\n    $master config set rdbcompression no\n    $master config set appendonly no\n    $master config set save \"\"\n\n    # Write a key that belongs to slot 0\n    set slot0_key \"06S\"\n    $master set $slot0_key 1\n    wait_for_ofs_sync $master $replica\n    assert_equal {1} [$replica get $slot0_key]\n    assert_equal $slot0_key [$replica CLUSTER GETKEYSINSLOT 0 1]\n\n    # Save an RDB and kill the replica\n    $replica save\n    kill_instance redis $replica_id\n\n    # Delete the key from master\n    $master del $slot0_key\n\n    # The replica must do a full sync with the master on restart because the\n    # replication backlog size is very small, and dumping the RDB takes several seconds.\n    set num 10000\n    set value [string repeat A 1024]\n    set rd [redis_deferring_client redis $master_id]\n    for {set j 0} {$j < $num} {incr j} {\n        $rd set $j $value\n\n        if {($j + 1) % 500 == 0} {\n            for {set i 0} {$i < 500} {incr i} {\n                $rd read\n            }\n        }\n    }\n\n    # Start the replica again\n    restart_instance redis $replica_id\n    $replica 
READONLY\n\n    # Start the full sync and wait until the db has started loading in the background\n    wait_for_condition 500 10 {\n        [s $replica_id async_loading] eq 1\n    } else {\n        fail \"Replica didn't start async loading\"\n    }\n\n    # Kill the master to abort the full sync\n    kill_instance redis $master_id\n\n    # Wait until the replica detects the disconnection and aborts the async load\n    wait_for_condition 500 10 {\n        [s $replica_id async_loading] eq 0\n    } else {\n        fail \"Replica didn't abort async loading\"\n    }\n\n    # Both the replica's keys and its keys-to-slots map should still be correct.\n    # CLUSTERDOWN errors are acceptable here because the cluster may be in a transient state\n    # due to the timing relationship with cluster-node-timeout.\n    if {[catch {$replica get $slot0_key} result]} {\n        assert_match \"*CLUSTERDOWN*\" $result\n    } else {\n        assert_equal {1} $result\n    }\n\n    assert_equal $slot0_key [$replica CLUSTER GETKEYSINSLOT 0 1]\n}\n"
  },
  {
    "path": "tests/cluster/tests/18-info.tcl",
    "content": "# Check cluster info stats\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a cluster with two primaries and no replicas\" {\n    create_cluster 2 0\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\nset primary1 [Rn 0]\nset primary2 [Rn 1]\n\nproc cmdstat {instance cmd} {\n    return [cmdrstat $cmd $instance]\n}\n\nproc errorstat {instance cmd} {\n    return [errorrstat $cmd $instance]\n}\n\ntest \"errorstats: rejected call due to MOVED redirection\" {\n    $primary1 config resetstat\n    $primary2 config resetstat\n    assert_match {} [errorstat $primary1 MOVED]\n    assert_match {} [errorstat $primary2 MOVED]\n    # we know that one will have a MOVED reply and one will succeed\n    catch {$primary1 set key b} replyP1\n    catch {$primary2 set key b} replyP2\n    # sort servers so we know which one failed\n    if {$replyP1 eq {OK}} {\n        assert_match {MOVED*} $replyP2\n        set pok $primary1\n        set perr $primary2\n    } else {\n        assert_match {MOVED*} $replyP1\n        set pok $primary2\n        set perr $primary1\n    }\n    assert_match {} [errorstat $pok MOVED]\n    assert_match {*count=1*} [errorstat $perr MOVED]\n    assert_match {*calls=0,*,rejected_calls=1,failed_calls=0*} [cmdstat $perr set]\n}\n"
  },
  {
    "path": "tests/cluster/tests/19-cluster-nodes-slots.tcl",
    "content": "# Test the CLUSTER NODES optimization that generates the slot topology for all nodes first\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a cluster with 2 primaries and 2 replicas\" {\n    cluster_create_with_continuous_slots 2 2\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\nset master1 [Rn 0]\nset master2 [Rn 1]\n\ntest \"Continuous slots distribution\" {\n    assert_match \"* 0-8191*\" [$master1 CLUSTER NODES]\n    assert_match \"* 8192-16383*\" [$master2 CLUSTER NODES]\n    assert_match \"*0 8191*\" [$master1 CLUSTER SLOTS]\n    assert_match \"*8192 16383*\" [$master2 CLUSTER SLOTS]\n\n    $master1 CLUSTER DELSLOTS 4096\n    assert_match \"* 0-4095 4097-8191*\" [$master1 CLUSTER NODES]\n    assert_match \"*0 4095*4097 8191*\" [$master1 CLUSTER SLOTS]\n\n    $master2 CLUSTER DELSLOTS 12288\n    assert_match \"* 8192-12287 12289-16383*\" [$master2 CLUSTER NODES]\n    assert_match \"*8192 12287*12289 16383*\" [$master2 CLUSTER SLOTS]\n}\n\ntest \"Discontinuous slots distribution\" {\n    # Remove middle slots\n    $master1 CLUSTER DELSLOTS 4092 4094\n    assert_match \"* 0-4091 4093 4095 4097-8191*\" [$master1 CLUSTER NODES]\n    assert_match \"*0 4091*4093 4093*4095 4095*4097 8191*\" [$master1 CLUSTER SLOTS]\n    $master2 CLUSTER DELSLOTS 12284 12286\n    assert_match \"* 8192-12283 12285 12287 12289-16383*\" [$master2 CLUSTER NODES]\n    assert_match \"*8192 12283*12285 12285*12287 12287*12289 16383*\" [$master2 CLUSTER SLOTS]\n\n    # Remove head slots\n    $master1 CLUSTER DELSLOTS 0 2\n    assert_match \"* 1 3-4091 4093 4095 4097-8191*\" [$master1 CLUSTER NODES]\n    assert_match \"*1 1*3 4091*4093 4093*4095 4095*4097 8191*\" [$master1 CLUSTER SLOTS]\n\n    # Remove tail slots\n    $master2 CLUSTER DELSLOTS 16380 16382 16383\n    assert_match \"* 8192-12283 12285 12287 12289-16379 16381*\" [$master2 CLUSTER NODES]\n    assert_match \"*8192 12283*12285 12285*12287 12287*12289 16379*16381 16381*\" [$master2 CLUSTER SLOTS]\n}\n"
  },
  {
    "path": "tests/cluster/tests/20-half-migrated-slot.tcl",
    "content": "# Tests for fixing migrating slot at all stages:\n# 1. when migration is half inited on \"migrating\" node\n# 2. when migration is half inited on \"importing\" node\n# 3. migration inited, but not finished\n# 4. migration is half finished on \"migrating\" node\n# 5. migration is half finished on \"importing\" node\n\n# TODO: Test is currently disabled until it is stabilized (fixing the test\n# itself or real issues in Redis).\n\nif {false} {\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../tests/includes/utils.tcl\"\n\ntest \"Create a 2 nodes cluster\" {\n    create_cluster 2 0\n    config_set_all_nodes cluster-allow-replica-migration no\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\ncatch {unset nodefrom}\ncatch {unset nodeto}\n\nproc reset_cluster {} {\n    uplevel 1 {\n        $cluster refresh_nodes_map\n        array set nodefrom [$cluster masternode_for_slot 609]\n        array set nodeto [$cluster masternode_notfor_slot 609]\n    }\n}\n\nreset_cluster\n\n$cluster set aga xyz\n\ntest \"Half init migration in 'migrating' is fixable\" {\n    assert_equal {OK} [$nodefrom(link) cluster setslot 609 migrating $nodeto(id)]\n    fix_cluster $nodefrom(addr)\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\ntest \"Half init migration in 'importing' is fixable\" {\n    assert_equal {OK} [$nodeto(link) cluster setslot 609 importing $nodefrom(id)]\n    fix_cluster $nodefrom(addr)\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\ntest \"Init migration and move key\" {\n    assert_equal {OK} [$nodefrom(link) cluster setslot 609 migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot 609 importing $nodefrom(id)]\n    assert_equal {OK} [$nodefrom(link) migrate $nodeto(host) $nodeto(port) aga 0 10000]\n    wait_for_cluster_propagation\n    assert_equal \"xyz\" [$cluster get aga]\n    fix_cluster $nodefrom(addr)\n    assert_equal \"xyz\" 
[$cluster get aga]\n}\n\nreset_cluster\n\ntest \"Move key again\" {\n    wait_for_cluster_propagation\n    assert_equal {OK} [$nodefrom(link) cluster setslot 609 migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot 609 importing $nodefrom(id)]\n    assert_equal {OK} [$nodefrom(link) migrate $nodeto(host) $nodeto(port) aga 0 10000]\n    wait_for_cluster_propagation\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\ntest \"Half-finish migration\" {\n    # half finish migration on 'migrating' node\n    assert_equal {OK} [$nodefrom(link) cluster setslot 609 node $nodeto(id)]\n    fix_cluster $nodefrom(addr)\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\nreset_cluster\n\ntest \"Move key back\" {\n    # 'aga' key is in 609 slot\n    assert_equal {OK} [$nodefrom(link) cluster setslot 609 migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot 609 importing $nodefrom(id)]\n    assert_equal {OK} [$nodefrom(link) migrate $nodeto(host) $nodeto(port) aga 0 10000]\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\ntest \"Half-finish importing\" {\n    # Now we half finish 'importing' node\n    assert_equal {OK} [$nodeto(link) cluster setslot 609 node $nodeto(id)]\n    fix_cluster $nodefrom(addr)\n    assert_equal \"xyz\" [$cluster get aga]\n}\n\nconfig_set_all_nodes cluster-allow-replica-migration yes\n}\n"
  },
  {
    "path": "tests/cluster/tests/21-many-slot-migration.tcl",
    "content": "# Tests for many simultaneous migrations.\n\n# TODO: Test is currently disabled until it is stabilized (fixing the test\n# itself or real issues in Redis).\n\nif {false} {\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../tests/includes/utils.tcl\"\n\n# TODO: This test currently runs without replicas, as failovers (which may\n# happen on lower-end CI platforms) are still not handled properly by the\n# cluster during slot migration (related to #6339).\n\ntest \"Create a 10 nodes cluster\" {\n    create_cluster 10 0\n    config_set_all_nodes cluster-allow-replica-migration no\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\ncatch {unset nodefrom}\ncatch {unset nodeto}\n\n$cluster refresh_nodes_map\n\ntest \"Set many keys\" {\n    for {set i 0} {$i < 40000} {incr i} {\n        $cluster set key:$i val:$i\n    }\n}\n\ntest \"Keys are accessible\" {\n    for {set i 0} {$i < 40000} {incr i} {\n        assert { [$cluster get key:$i] eq \"val:$i\" }\n    }\n}\n\ntest \"Init migration of many slots\" {\n    for {set slot 0} {$slot < 1000} {incr slot} {\n        array set nodefrom [$cluster masternode_for_slot $slot]\n        array set nodeto [$cluster masternode_notfor_slot $slot]\n\n        $nodefrom(link) cluster setslot $slot migrating $nodeto(id)\n        $nodeto(link) cluster setslot $slot importing $nodefrom(id)\n    }\n}\n\ntest \"Fix cluster\" {\n    wait_for_cluster_propagation\n    fix_cluster $nodefrom(addr)\n}\n\ntest \"Keys are accessible\" {\n    for {set i 0} {$i < 40000} {incr i} {\n        assert { [$cluster get key:$i] eq \"val:$i\" }\n    }\n}\n\nconfig_set_all_nodes cluster-allow-replica-migration yes\n}\n"
  },
  {
    "path": "tests/cluster/tests/22-replica-in-sync.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 1 node cluster\" {\n    create_cluster 1 0\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\ntest \"Cluster is writable\" {\n    cluster_write_test 0\n}\n\nproc is_in_slots {master_id replica} {\n    set slots [R $master_id cluster slots]\n    set found_position [string first $replica $slots]\n    set result [expr {$found_position != -1}]\n    return $result\n}\n\nproc is_replica_online {info_repl} {\n    set found_position [string first \"state=online\" $info_repl]\n    set result [expr {$found_position != -1}]\n    return $result\n}\n\nproc get_last_pong_time {node_id target_cid} {\n    foreach item [split [R $node_id cluster nodes] \\n] {\n        set args [split $item \" \"]\n        if {[lindex $args 0] eq $target_cid} {\n            return [lindex $args 5]\n        }\n    }\n    fail \"Target node ID was not present\"\n}\n\nset master_id 0\n\ntest \"Fill up primary with data\" {\n    # Set 1 MB of data\n    R $master_id debug populate 1000 key 1000\n}\n\ntest \"Add new node as replica\" {\n    set replica_id 1\n    set replica [R $replica_id CLUSTER MYID]\n    R $replica_id cluster replicate [R $master_id CLUSTER MYID]\n}\n\ntest \"Check digest and replica state\" {\n    wait_for_condition 1000 50 {\n        [is_in_slots $master_id $replica]\n    } else {\n        fail \"New replica didn't appear in the slots\"\n    }\n\n    wait_for_condition 100 50 {\n        [is_replica_online [R $master_id info replication]]\n    } else {\n        fail \"Replica is down for too long\"\n    }\n    set replica_digest [R $replica_id debug digest]\n    assert {$replica_digest ne 0}\n}\n\ntest \"Replica in loading state is hidden\" {\n    # Kill replica client for master and load new data to the primary\n    R $master_id config set repl-backlog-size 100\n\n    # Set the key load delay so that it will take at least\n    # 2 seconds to fully load the data.\n    R $replica_id config set 
key-load-delay 4000\n\n    # Trigger event loop processing every 1024 bytes, this trigger\n    # allows us to send and receive cluster messages, so we are setting\n    # it low so that the cluster messages are sent more frequently.\n    R $replica_id config set loading-process-events-interval-bytes 1024\n\n    R $master_id multi\n    R $master_id client kill type replica\n    set num 100\n    set value [string repeat A 1024]\n    for {set j 0} {$j < $num} {incr j} {\n        set key \"{0}\"\n        append key $j\n        R $master_id set $key $value\n    }\n    R $master_id exec\n\n    # The master will be the last to know the replica\n    # is loading, so we will wait on that and assert\n    # the replica is loading afterwards. \n    wait_for_condition 100 50 {\n        ![is_in_slots $master_id $replica]\n    } else {\n        fail \"Replica was always present in cluster slots\"\n    }\n    assert_equal 1 [s $replica_id loading]\n\n    # Wait for the replica to finish full-sync and become online\n    wait_for_condition 200 50 {\n        [s $replica_id master_link_status] eq \"up\"\n    } else {\n        fail \"Replica didn't finish loading\"\n    }\n\n    # Return configs to default values\n    R $replica_id config set loading-process-events-interval-bytes 2097152\n    R $replica_id config set key-load-delay 0\n\n    # Check replica is back in cluster slots\n    wait_for_condition 100 50 {\n        [is_in_slots $master_id $replica] \n    } else {\n        fail \"Replica is not back to slots\"\n    }\n    assert_equal 1 [is_in_slots $replica_id $replica] \n}\n\ntest \"Check disconnected replica not hidden from slots\" {\n    # We want to disconnect the replica, but keep it alive so it can still gossip\n\n    # Make sure that the replica will not be able to re-connect to the master\n    R $master_id config set requirepass asdf\n\n    # Disconnect replica from primary\n    R $master_id client kill type replica\n\n    # Check master to have no replicas\n    assert 
{[s $master_id connected_slaves] == 0}\n\n    set replica_cid [R $replica_id cluster myid]\n    set initial_pong [get_last_pong_time $master_id $replica_cid]\n    wait_for_condition 50 100 {\n        $initial_pong != [get_last_pong_time $master_id $replica_cid]\n    } else {\n        fail \"Primary never received gossip from replica\"\n    }\n\n    # Check that replica is still in the cluster slots\n    assert {[is_in_slots $master_id $replica]}\n\n    # undo config\n    R $master_id config set requirepass \"\"\n}\n"
  },
  {
    "path": "tests/cluster/tests/25-pubsubshard-slot-migration.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 3 nodes cluster\" {\n    cluster_create_with_continuous_slots 3 3\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n\nproc get_addr_replica_serving_slot slot {\n    set cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\n    array set node [$cluster masternode_for_slot $slot]\n\n    set replicanodeinfo [$cluster cluster replicas $node(id)]\n    set args [split $replicanodeinfo \" \"]\n    set addr [lindex [split [lindex $args 1] @] 0]\n    set replicahost [lindex [split $addr :] 0]\n    set replicaport [lindex [split $addr :] 1]\n    return [list $replicahost $replicaport]\n}\n\ntest \"Migrate a slot, verify client receives sunsubscribe on primary serving the slot.\" {\n\n    # Setup the to and from node\n    set channelname mychannel\n    set slot [$cluster cluster keyslot $channelname]\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    set subscribeclient [redis_deferring_client_by_addr $nodefrom(host) $nodefrom(port)]\n\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot importing $nodefrom(id)]\n\n    # Verify subscribe is still valid, able to receive messages.\n    $nodefrom(link) spublish $channelname hello\n    assert_equal {smessage mychannel hello} [$subscribeclient read]\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot node $nodeto(id)]\n   \n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"0\" eq [lindex $msg 2]}\n\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot node 
$nodeto(id)]\n\n    $subscribeclient close\n}\n\ntest \"Client subscribes to multiple channels, migrate a slot, verify client receives sunsubscribe on primary serving the slot.\" {\n\n    # Setup the to and from node\n    set channelname ch3\n    set anotherchannelname ch7\n    set slot [$cluster cluster keyslot $channelname]\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    set subscribeclient [redis_deferring_client_by_addr $nodefrom(host) $nodefrom(port)]\n\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    $subscribeclient ssubscribe $anotherchannelname\n    $subscribeclient read\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot importing $nodefrom(id)]\n\n    # Verify subscribe is still valid, able to receive messages.\n    $nodefrom(link) spublish $channelname hello\n    assert_equal {smessage ch3 hello} [$subscribeclient read]\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot node $nodeto(id)]\n\n    # Verify the client receives sunsubscribe message for the channel(slot) which got migrated.\n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"1\" eq [lindex $msg 2]}\n\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot node $nodeto(id)]\n\n    $nodefrom(link) spublish $anotherchannelname hello\n\n    # Verify the client is still connected and receives message from the other channel.\n    set msg [$subscribeclient read]\n    assert {\"smessage\" eq [lindex $msg 0]}\n    assert {$anotherchannelname eq [lindex $msg 1]}\n    assert {\"hello\" eq [lindex $msg 2]}\n\n    $subscribeclient close\n}\n\ntest \"Migrate a slot, verify client receives sunsubscribe on replica serving the slot.\" {\n\n    # Setup the to and 
from node\n    set channelname mychannel1\n    set slot [$cluster cluster keyslot $channelname]\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    # Get replica node serving slot (mychannel) to connect a client.\n    set replica_addr [get_addr_replica_serving_slot $slot]\n    set replicahost [lindex $replica_addr 0]\n    set replicaport [lindex $replica_addr 1]\n    set subscribeclient [redis_deferring_client_by_addr $replicahost $replicaport]\n\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot migrating $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot importing $nodefrom(id)]\n\n    # Verify subscribe is still valid, able to receive messages.\n    $nodefrom(link) spublish $channelname hello\n    assert_equal {smessage mychannel1 hello} [$subscribeclient read]\n\n    assert_equal {OK} [$nodefrom(link) cluster setslot $slot node $nodeto(id)]\n    assert_equal {OK} [$nodeto(link) cluster setslot $slot node $nodeto(id)]\n\n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"0\" eq [lindex $msg 2]}\n\n    $subscribeclient close\n}\n\ntest \"Move a replica to another primary, verify client receives sunsubscribe on replica serving the slot.\" {\n    # Setup the to and from node\n    set channelname mychannel2\n    set slot [$cluster cluster keyslot $channelname]\n\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n    set replica_addr [get_addr_replica_serving_slot $slot]\n    set replica_host [lindex $replica_addr 0]\n    set replica_port [lindex $replica_addr 1]\n    set replica_client [redis_client_by_addr $replica_host $replica_port]\n    set subscribeclient 
[redis_deferring_client_by_addr $replica_host $replica_port]\n\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    # Verify subscribe is still valid, able to receive messages.\n    $nodefrom(link) spublish $channelname hello\n    assert_equal {smessage mychannel2 hello} [$subscribeclient read]\n\n    assert_equal {OK} [$replica_client cluster replicate $nodeto(id)]\n\n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"0\" eq [lindex $msg 2]}\n\n    $subscribeclient close\n}\n\ntest \"Delete a slot, verify sunsubscribe message\" {\n    set channelname ch2\n    set slot [$cluster cluster keyslot $channelname]\n\n    array set primary_client [$cluster masternode_for_slot $slot]\n\n    set subscribeclient [redis_deferring_client_by_addr $primary_client(host) $primary_client(port)]\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    $primary_client(link) cluster DELSLOTS $slot\n\n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"0\" eq [lindex $msg 2]}\n    \n    $subscribeclient close\n}\n\ntest \"Reset cluster, verify sunsubscribe message\" {\n    set channelname ch4\n    set slot [$cluster cluster keyslot $channelname]\n\n    array set primary_client [$cluster masternode_for_slot $slot]\n\n    set subscribeclient [redis_deferring_client_by_addr $primary_client(host) $primary_client(port)]\n    $subscribeclient deferred 1\n    $subscribeclient ssubscribe $channelname\n    $subscribeclient read\n\n    $cluster cluster reset HARD\n\n    set msg [$subscribeclient read]\n    assert {\"sunsubscribe\" eq [lindex $msg 0]}\n    assert {$channelname eq [lindex $msg 1]}\n    assert {\"0\" eq [lindex $msg 2]}\n    \n    $cluster close\n    $subscribeclient close\n}"
  },
  {
    "path": "tests/cluster/tests/26-pubsubshard.tcl",
    "content": "# Test PUBSUB shard propagation in a cluster slot.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"Create a 3 nodes cluster\" {\n    cluster_create_with_continuous_slots 3 3\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\ntest \"Pub/Sub shard basics\" {\n\n    set slot [$cluster cluster keyslot \"channel.0\"]\n    array set publishnode [$cluster masternode_for_slot $slot]\n    array set notshardnode [$cluster masternode_notfor_slot $slot]\n\n    set publishclient [redis_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient2 [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set anotherclient [redis_deferring_client_by_addr $notshardnode(host) $notshardnode(port)]\n\n    $subscribeclient ssubscribe channel.0\n    $subscribeclient read\n\n    $subscribeclient2 ssubscribe channel.0\n    $subscribeclient2 read\n\n    $anotherclient ssubscribe channel.0\n    catch {$anotherclient read} err\n    assert_match {MOVED *} $err\n\n    set data [randomValue]\n    $publishclient spublish channel.0 $data\n\n    set msg [$subscribeclient read]\n    assert_equal $data [lindex $msg 2]\n\n    set msg [$subscribeclient2 read]\n    assert_equal $data [lindex $msg 2]\n\n    $publishclient close\n    $subscribeclient close\n    $subscribeclient2 close\n    $anotherclient close\n}\n\ntest \"client can't subscribe to multiple shard channels across different slots in same call\" {\n    catch {$cluster ssubscribe channel.0 channel.1} err\n    assert_match {CROSSSLOT Keys*} $err\n}\n\ntest \"client can subscribe to multiple shard channels across different slots in separate call\" {\n    $cluster ssubscribe ch3\n    $cluster ssubscribe ch7\n\n    $cluster sunsubscribe ch3\n    $cluster sunsubscribe ch7\n}\n\ntest \"sunsubscribe without specifying any channel would unsubscribe all shard 
channels subscribed\" {\n    set publishclient [redis_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n\n    set sub_res [ssubscribe $subscribeclient [list \"\\{channel.0\\}1\" \"\\{channel.0\\}2\" \"\\{channel.0\\}3\"]]\n    assert_equal [list 1 2 3] $sub_res\n    sunsubscribe $subscribeclient\n\n    assert_equal 0 [$publishclient spublish \"\\{channel.0\\}1\" hello]\n    assert_equal 0 [$publishclient spublish \"\\{channel.0\\}2\" hello]\n    assert_equal 0 [$publishclient spublish \"\\{channel.0\\}3\" hello]\n\n    $publishclient close\n    $subscribeclient close\n}\n\ntest \"Verify Pub/Sub and Pub/Sub shard no overlap\" {\n    set slot [$cluster cluster keyslot \"channel.0\"]\n    array set publishnode [$cluster masternode_for_slot $slot]\n    array set notshardnode [$cluster masternode_notfor_slot $slot]\n\n    set publishshardclient [redis_client_by_addr $publishnode(host) $publishnode(port)]\n    set publishclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeshardclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n\n    $subscribeshardclient deferred 1\n    $subscribeshardclient ssubscribe channel.0\n    $subscribeshardclient read\n\n    $subscribeclient deferred 1\n    $subscribeclient subscribe channel.0\n    $subscribeclient read\n\n    set sharddata \"testingpubsubdata\"\n    $publishshardclient spublish channel.0 $sharddata\n\n    set data \"somemoredata\"\n    $publishclient publish channel.0 $data\n\n    set msg [$subscribeshardclient read]\n    assert_equal $sharddata [lindex $msg 2]\n\n    set msg [$subscribeclient read]\n    assert_equal $data [lindex $msg 2]\n\n    $cluster close\n    $publishclient close\n    $subscribeclient close\n    $subscribeshardclient close\n}\n\ntest 
\"PUBSUB channels/shardchannels\" {\n    set subscribeclient [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient2 [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set subscribeclient3 [redis_deferring_client_by_addr $publishnode(host) $publishnode(port)]\n    set publishclient [redis_client_by_addr  $publishnode(host) $publishnode(port)]\n\n    ssubscribe $subscribeclient [list \"\\{channel.0\\}1\"]\n    ssubscribe $subscribeclient2 [list \"\\{channel.0\\}2\"]\n    ssubscribe $subscribeclient3 [list \"\\{channel.0\\}3\"]\n    assert_equal {3} [llength [$publishclient pubsub shardchannels]]\n\n    subscribe $subscribeclient [list \"\\{channel.0\\}4\"]\n    assert_equal {3} [llength [$publishclient pubsub shardchannels]]\n\n    sunsubscribe $subscribeclient\n    $subscribeclient read\n    set channel_list [$publishclient pubsub shardchannels]\n    assert_equal {2} [llength $channel_list]\n    assert {[lsearch -exact $channel_list \"\\{channel.0\\}2\"] >= 0}\n    assert {[lsearch -exact $channel_list \"\\{channel.0\\}3\"] >= 0}\n}\n"
  },
  {
    "path": "tests/cluster/tests/28-cluster-shards.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\n# Initial slot distribution.\nset ::slot0 [list 0 1000 1002 5459 5461 5461 10926 10926]\nset ::slot1 [list 5460 5460 5462 10922 10925 10925]\nset ::slot2 [list 10923 10924 10927 16383]\nset ::slot3 [list 1001 1001]\n\nproc cluster_create_with_split_slots {masters replicas} {\n    for {set j 0} {$j < $masters} {incr j} {\n        R $j cluster ADDSLOTSRANGE {*}[set ::slot${j}]\n    }\n    if {$replicas} {\n        cluster_allocate_slaves $masters $replicas\n    }\n    set ::cluster_master_nodes $masters\n    set ::cluster_replica_nodes $replicas\n}\n\n# Get the node info with the specific node_id from the\n# given reference node. Valid type options are \"node\" and \"shard\"\nproc get_node_info_from_shard {id reference {type node}} {\n    set shards_response [R $reference CLUSTER SHARDS]\n    foreach shard_response $shards_response {\n        set nodes [dict get $shard_response nodes]\n        foreach node $nodes {\n            if {[dict get $node id] eq $id} {\n                if {$type eq \"node\"} {\n                    return $node\n                } elseif {$type eq \"shard\"} {\n                    return $shard_response\n                } else {\n                    return {}\n                }\n            }\n        }\n    }\n    # No shard found, return nothing\n    return {}\n}\n\nproc cluster_ensure_master {id} {\n    if { [regexp \"master\" [R $id role]] == 0 } {\n        assert_equal {OK} [R $id CLUSTER FAILOVER]\n        wait_for_condition 50 100 {\n            [regexp \"master\" [R $id role]] == 1\n        } else {\n            fail \"instance $id is not master\"\n        }\n    }\n}\n\ntest \"Create a 8 nodes cluster with 4 shards\" {\n    cluster_create_with_split_slots 4 4\n}\n\ntest \"Cluster should start ok\" {\n    assert_cluster_state ok\n}\n\ntest \"Set cluster hostnames and verify they are propagated\" {\n    for {set j 0} {$j < $::cluster_master_nodes + $::cluster_replica_nodes} 
{incr j} {\n        R $j config set cluster-announce-hostname \"host-$j.com\"\n    }\n\n    # Wait for everyone to agree about the state\n    wait_for_cluster_propagation\n}\n\ntest \"Verify information about the shards\" {\n    set ids {}\n    for {set j 0} {$j < $::cluster_master_nodes + $::cluster_replica_nodes} {incr j} {\n        lappend ids [R $j CLUSTER MYID]\n    }\n    set slots [list $::slot0 $::slot1 $::slot2 $::slot3 $::slot0 $::slot1 $::slot2 $::slot3]\n\n    # Verify on each node (primary/replica), the response of the `CLUSTER SLOTS` command is consistent.\n    for {set ref 0} {$ref < $::cluster_master_nodes + $::cluster_replica_nodes} {incr ref} {\n        for {set i 0} {$i < $::cluster_master_nodes + $::cluster_replica_nodes} {incr i} {\n            assert_equal [lindex $slots $i] [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"shard\"] slots]\n            assert_equal \"host-$i.com\" [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] hostname]\n            assert_equal \"127.0.0.1\"  [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] ip]\n            # Default value of 'cluster-preferred-endpoint-type' is ip.\n            assert_equal \"127.0.0.1\"  [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] endpoint]\n\n            if {$::tls} {\n                assert_equal [get_instance_attrib redis $i plaintext-port] [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] port]\n                assert_equal [get_instance_attrib redis $i port] [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] tls-port]\n            } else {\n                assert_equal [get_instance_attrib redis $i port] [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] port]\n            }\n\n            if {$i < 4} {\n                assert_equal \"master\" [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] role]\n                assert_equal \"online\" [dict 
get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] health]\n            } else {\n                assert_equal \"replica\" [dict get [get_node_info_from_shard [lindex $ids $i] $ref \"node\"] role]\n                # Replica could be in the online or loading state\n            }\n        }\n    }\n}\n\ntest \"Verify no slot shard\" {\n    # Node 8 has no slots assigned\n    set node_8_id [R 8 CLUSTER MYID]\n    assert_equal {} [dict get [get_node_info_from_shard $node_8_id 8 \"shard\"] slots]\n    assert_equal {} [dict get [get_node_info_from_shard $node_8_id 0 \"shard\"] slots]\n}\n\nset node_0_id [R 0 CLUSTER MYID]\n\ntest \"Kill a node and tell the replica to immediately take over\" {\n    kill_instance redis 0\n    R 4 cluster failover force\n}\n\n# Primary node 0 should report as fail; wait until the new primary acknowledges it.\ntest \"Verify health as fail for killed node\" {\n    wait_for_condition 50 100 {\n        \"fail\" eq [dict get [get_node_info_from_shard $node_0_id 4 \"node\"] \"health\"]\n    } else {\n        fail \"New primary never detected the node failed\"\n    }\n}\n\ntest \"Verify that other nodes can correctly output the new master's slots\" {\n    assert_not_equal {} [dict get [get_node_info_from_shard [R 4 CLUSTER MYID] 8 \"shard\"] slots]\n}\n\nset primary_id 4\nset replica_id 0\n\ntest \"Restarting the old primary node\" {\n    restart_instance redis $replica_id\n}\n\ntest \"Instance #0 gets converted into a replica\" {\n    wait_for_condition 1000 50 {\n        [RI $replica_id role] eq {slave} &&\n        [RI $replica_id master_link_status] eq {up}\n    } else {\n        fail \"Old primary was not converted into replica\"\n    }\n}\n\ntest \"Test the replica reports a loading state while it's loading\" {\n    # First verify the command reports that everything has reached a happy state\n    set replica_cluster_id [R $replica_id CLUSTER MYID]\n    wait_for_condition 50 1000 {\n        [dict get [get_node_info_from_shard $replica_cluster_id 
$primary_id \"node\"] health] eq \"online\"\n    } else {\n        fail \"Replica never transitioned to online\"\n    }\n\n    # Set 1 MB of data, so there is something to load on a full sync\n    R $primary_id debug populate 1000 key 1000\n\n    # Kill the replica client for the primary and load new data to the primary\n    R $primary_id config set repl-backlog-size 100\n\n    # Set the key load delay so that it will take at least\n    # 2 seconds to fully load the data.\n    R $replica_id config set key-load-delay 4000\n\n    # Trigger event loop processing every 1024 bytes; this trigger\n    # allows us to send and receive cluster messages, so we set\n    # it low so that the cluster messages are sent more frequently.\n    R $replica_id config set loading-process-events-interval-bytes 1024\n\n    R $primary_id multi\n    R $primary_id client kill type replica\n    # Populate the correct data\n    set num 100\n    set value [string repeat A 1024]\n    for {set j 0} {$j < $num} {incr j} {\n        # Use a hashtag valid for shard #0\n        set key \"{ch3}$j\"\n        R $primary_id set $key $value\n    }\n    R $primary_id exec\n\n    # The replica should reconnect and start a full sync; it will gossip about its health to the primary.\n    wait_for_condition 50 1000 {\n        \"loading\" eq [dict get [get_node_info_from_shard $replica_cluster_id $primary_id \"node\"] health]\n    } else {\n        fail \"Replica never transitioned to loading\"\n    }\n\n    # Verify the CLUSTER SHARDS and (deprecated) CLUSTER SLOTS APIs respond while the node is loading data.\n    R $replica_id CLUSTER SHARDS\n    R $replica_id CLUSTER SLOTS\n\n    # Speed up the key loading and verify everything resumes\n    R $replica_id config set key-load-delay 0\n\n    wait_for_condition 50 1000 {\n        \"online\" eq [dict get [get_node_info_from_shard $replica_cluster_id $primary_id \"node\"] health]\n    } else {\n        fail \"Replica never transitioned to online\"\n    }\n\n    # Final 
sanity, the replica agrees it is online.\n    assert_equal \"online\" [dict get [get_node_info_from_shard $replica_cluster_id $replica_id \"node\"] health]\n}\n\ntest \"Regression test for a crash when calling SHARDS during handshake\" {\n    # Reset and forget a node, so we can use it to establish handshaking connections\n    set id [R 19 CLUSTER MYID]\n    R 19 CLUSTER RESET HARD\n    for {set i 0} {$i < 19} {incr i} {\n        R $i CLUSTER FORGET $id\n    }\n    R 19 cluster meet 127.0.0.1 [get_instance_attrib redis 0 port]\n    # This line would previously crash, since all the outbound\n    # connections were in handshake state.\n    R 19 CLUSTER SHARDS\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\ntest \"Shard ids are unique\" {\n    set shard_ids {}\n    for {set i 0} {$i < 4} {incr i} {\n        set shard_id [R $i cluster myshardid]\n        assert_equal [dict exists $shard_ids $shard_id] 0\n        dict set shard_ids $shard_id 1\n    }\n}\n\ntest \"CLUSTER MYSHARDID reports same id for both primary and replica\" {\n    for {set i 0} {$i < 4} {incr i} {\n        assert_equal [R $i cluster myshardid] [R [expr $i+4] cluster myshardid]\n        assert_equal [string length [R $i cluster myshardid]] 40\n    }\n}\n\ntest \"New replica receives primary's shard id\" {\n    # Find a primary\n    set id 0\n    for {} {$id < 8} {incr id} {\n        if {[regexp \"master\" [R $id role]]} {\n            break\n        }\n    }\n    assert_not_equal [R 8 cluster myshardid] [R $id cluster myshardid]\n    assert_equal {OK} [R 8 cluster replicate [R $id cluster myid]]\n    assert_equal [R 8 cluster myshardid] [R $id cluster myshardid]\n}\n\ntest \"CLUSTER MYSHARDID reports same shard id after shard restart\" {\n    set node_ids {}\n    for {set i 0} {$i < 8} {incr i 4} {\n        dict set node_ids $i [R $i cluster myshardid]\n        kill_instance redis $i\n        wait_for_condition 50 100 {\n            [instance_is_killed redis $i]\n        } else {\n   
         fail \"instance $i is not killed\"\n        }\n    }\n    for {set i 0} {$i < 8} {incr i 4} {\n        restart_instance redis $i\n    }\n    assert_cluster_state ok\n    for {set i 0} {$i < 8} {incr i 4} {\n        assert_equal [dict get $node_ids $i] [R $i cluster myshardid]\n    }\n}\n\ntest \"CLUSTER MYSHARDID reports same shard id after cluster restart\" {\n    set node_ids {}\n    for {set i 0} {$i < 8} {incr i} {\n        dict set node_ids $i [R $i cluster myshardid]\n    }\n    for {set i 0} {$i < 8} {incr i} {\n        kill_instance redis $i\n        wait_for_condition 50 100 {\n            [instance_is_killed redis $i]\n        } else {\n            fail \"instance $i is not killed\"\n        }\n    }\n    for {set i 0} {$i < 8} {incr i} {\n        restart_instance redis $i\n    }\n    assert_cluster_state ok\n    for {set i 0} {$i < 8} {incr i} {\n        assert_equal [dict get $node_ids $i] [R $i cluster myshardid]\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/29-slot-migration-response.tcl",
    "content": "# Tests for the response of slot migrations.\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../tests/includes/utils.tcl\"\n\ntest \"Create a 2 nodes cluster\" {\n    create_cluster 2 0\n    config_set_all_nodes cluster-allow-replica-migration no\n}\n\ntest \"Cluster is up\" {\n    assert_cluster_state ok\n}\n\nset cluster [redis_cluster 127.0.0.1:[get_instance_attrib redis 0 port]]\ncatch {unset nodefrom}\ncatch {unset nodeto}\n\n$cluster refresh_nodes_map\n\ntest \"Set many keys in the cluster\" {\n    for {set i 0} {$i < 5000} {incr i} {\n        $cluster set $i $i\n        assert { [$cluster get $i] eq $i }\n    }\n}\n\ntest \"Test cluster responses during migration of slot x\" {\n\n    set slot 10\n    array set nodefrom [$cluster masternode_for_slot $slot]\n    array set nodeto [$cluster masternode_notfor_slot $slot]\n\n    $nodeto(link) cluster setslot $slot importing $nodefrom(id)\n    $nodefrom(link) cluster setslot $slot migrating $nodeto(id)\n\n    # Get a key from that slot\n    set key [$nodefrom(link) cluster GETKEYSINSLOT $slot \"1\"]\n\n    # MOVED REPLY\n    assert_error \"*MOVED*\" {$nodeto(link) set $key \"newVal\"}\n\n    # ASK REPLY\n    assert_error \"*ASK*\" {$nodefrom(link) set \"abc{$key}\" \"newVal\"}\n\n    # UNSTABLE REPLY\n    assert_error \"*TRYAGAIN*\" {$nodefrom(link) mset \"a{$key}\" \"newVal\" $key \"newVal2\"}\n}\n\nconfig_set_all_nodes cluster-allow-replica-migration yes\n"
  },
  {
    "path": "tests/cluster/tests/helpers/onlydots.tcl",
    "content": "# Read the standard input and only shows dots in the output, filtering out\n# all the other characters. Designed to avoid bufferization so that when\n# we get the output of redis-trib and want to show just the dots, we'll see\n# the dots as soon as redis-trib will output them.\n\nfconfigure stdin -buffering none\n\nwhile 1 {\n    set c [read stdin 1]\n    if {$c eq {}} {\n        exit 0; # EOF\n    } elseif {$c eq {.}} {\n        puts -nonewline .\n        flush stdout\n    }\n}\n"
  },
  {
    "path": "tests/cluster/tests/includes/init-tests.tcl",
    "content": "# Initialization tests -- most units will start including this.\n\ntest \"(init) Restart killed instances\" {\n    foreach type {redis} {\n        foreach_${type}_id id {\n            if {[get_instance_attrib $type $id pid] == -1} {\n                puts -nonewline \"$type/$id \"\n                flush stdout\n                restart_instance $type $id\n            }\n        }\n    }\n}\n\ntest \"Cluster nodes are reachable\" {\n    foreach_redis_id id {\n        # Every node should be reachable.\n        wait_for_condition 1000 50 {\n            ([catch {R $id ping} ping_reply] == 0) &&\n            ($ping_reply eq {PONG})\n        } else {\n            catch {R $id ping} err\n            fail \"Node #$id keeps replying '$err' to PING.\"\n        }\n    }\n}\n\ntest \"Cluster nodes hard reset\" {\n    foreach_redis_id id {\n        if {$::valgrind} {\n            set node_timeout 10000\n        } else {\n            set node_timeout 3000\n        }\n\n        # Wait until slave is synced. 
Otherwise, it may reply -LOADING\n        # for any commands below.\n        if {[RI $id role] eq {slave}} {\n            wait_for_condition 50 1000 {\n                [RI $id master_link_status] eq {up}\n            } else {\n                fail \"Slave was not able to sync.\"\n            }\n        }\n\n        catch {R $id flushall} ; # May fail for readonly slaves.\n        R $id MULTI\n        R $id cluster reset hard\n        R $id cluster set-config-epoch [expr {$id+1}]\n        R $id EXEC\n        R $id config set cluster-node-timeout $node_timeout\n        R $id config set cluster-slave-validity-factor 10\n        R $id config set loading-process-events-interval-bytes 2097152\n        R $id config set key-load-delay 0\n        R $id config set repl-diskless-load disabled\n        R $id config set cluster-announce-hostname \"\"\n        R $id DEBUG DROP-CLUSTER-PACKET-FILTER -1\n        R $id config rewrite\n    }\n}\n\n# Helper function that attempts to have each node in the cluster\n# meet every other node.\nproc join_nodes_in_cluster {} {\n    # Join node 0 with 1, 1 with 2, ... 
and so forth.\n    # If auto-discovery works all nodes will know every other node\n    # eventually.\n    set ids {}\n    foreach_redis_id id {lappend ids $id}\n    for {set j 0} {$j < [expr [llength $ids]-1]} {incr j} {\n        set a [lindex $ids $j]\n        set b [lindex $ids [expr $j+1]]\n        set b_port [get_instance_attrib redis $b port]\n        R $a cluster meet 127.0.0.1 $b_port\n    }\n\n    foreach_redis_id id {\n        wait_for_condition 1000 50 {\n            [llength [get_cluster_nodes $id connected]] == [llength $ids]\n        } else {\n            return 0\n        }\n    }\n    return 1\n}\n\ntest \"Cluster Join and auto-discovery test\" {\n    # Use multiple attempts since sometimes nodes timeout\n    # while attempting to connect.\n    for {set attempts 3} {$attempts > 0} {incr attempts -1} {\n        if {[join_nodes_in_cluster] == 1} {\n            break\n        }\n    }\n    if {$attempts == 0} {\n        fail \"Cluster failed to form full mesh\"\n    }\n}\n\ntest \"Before slots allocation, all nodes report cluster failure\" {\n    assert_cluster_state fail\n}\n"
  },
  {
    "path": "tests/cluster/tests/includes/utils.tcl",
    "content": "source \"../../../tests/support/cli.tcl\"\n\nproc config_set_all_nodes {keyword value} {\n    foreach_redis_id id {\n        R $id config set $keyword $value\n    }\n}\n\nproc fix_cluster {addr} {\n    set code [catch {\n        exec ../../../src/redis-cli {*}[rediscli_tls_config \"../../../tests\"] --cluster fix $addr << yes\n    } result]\n    if {$code != 0} {\n        puts \"redis-cli --cluster fix returns non-zero exit code, output below:\\n$result\"\n    }\n    # Note: redis-cli --cluster fix may return a non-zero exit code if nodes don't agree,\n    # but we can ignore that and rely on the check below.\n    assert_cluster_state ok\n    wait_for_condition 100 100 {\n        [catch {exec ../../../src/redis-cli {*}[rediscli_tls_config \"../../../tests\"] --cluster check $addr} result] == 0\n    } else {\n        puts \"redis-cli --cluster check returns non-zero exit code, output below:\\n$result\"\n        fail \"Cluster could not settle with configuration\"\n    }\n}\n\nproc wait_cluster_stable {} {\n    wait_for_condition 1000 50 {\n        [catch {exec ../../../src/redis-cli --cluster \\\n            check 127.0.0.1:[get_instance_attrib redis 0 port] \\\n            {*}[rediscli_tls_config \"../../../tests\"] \\\n            }] == 0\n    } else {\n        fail \"Cluster doesn't stabilize\"\n    }\n}"
  },
  {
    "path": "tests/cluster/tmp/.gitignore",
    "content": "redis_*\nsentinel_*\n"
  },
  {
    "path": "tests/helpers/bg_block_op.tcl",
    "content": "source tests/support/redis.tcl\nsource tests/support/util.tcl\n\nset ::tlsdir \"tests/tls\"\n\n# This function sometimes writes sometimes blocking-reads from lists/sorted\n# sets. There are multiple processes like this executing at the same time\n# so that we have some chance to trap some corner condition if there is\n# a regression. For this to happen it is important that we narrow the key\n# space to just a few elements, and balance the operations so that it is\n# unlikely that lists and zsets just get more data without ever causing\n# blocking.\nproc bg_block_op {host port db ops tls} {\n    set r [redis $host $port 0 $tls]\n    $r client setname LOAD_HANDLER\n    $r select $db\n\n    for {set j 0} {$j < $ops} {incr j} {\n\n        # List side\n        set k list_[randomInt 10]\n        set k2 list_[randomInt 10]\n        set v [randomValue]\n\n        randpath {\n            randpath {\n                $r rpush $k $v\n            } {\n                $r lpush $k $v\n            }\n        } {\n            $r blpop $k 2\n        } {\n            $r blpop $k $k2 2\n        }\n\n        # Zset side\n        set k zset_[randomInt 10]\n        set k2 zset_[randomInt 10]\n        set v1 [randomValue]\n        set v2 [randomValue]\n\n        randpath {\n            $r zadd $k [randomInt 10000] $v\n        } {\n            $r zadd $k [randomInt 10000] $v [randomInt 10000] $v2\n        } {\n            $r bzpopmin $k 2\n        } {\n            $r bzpopmax $k 2\n        }\n    }\n}\n\nbg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]\n"
  },
  {
    "path": "tests/helpers/bg_complex_data.tcl",
    "content": "source tests/support/redis.tcl\nsource tests/support/util.tcl\n\nset ::tlsdir \"tests/tls\"\n\nproc bg_complex_data {host port db ops tls} {\n    set r [redis $host $port 0 $tls]\n    $r client setname LOAD_HANDLER\n    $r select $db\n    createComplexDataset $r $ops\n}\n\nbg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]\n"
  },
  {
    "path": "tests/helpers/fake_redis_node.tcl",
    "content": "# A fake Redis node for replaying predefined/expected traffic with a client.\n#\n# Usage: tclsh fake_redis_node.tcl PORT COMMAND REPLY [ COMMAND REPLY [ ... ] ]\n#\n# Commands are given as space-separated strings, e.g. \"GET foo\", and replies as\n# RESP-encoded replies minus the trailing \\r\\n, e.g. \"+OK\".\n\nset port [lindex $argv 0];\nset expected_traffic [lrange $argv 1 end];\n\n# Reads and parses a command from a socket and returns it as a space-separated\n# string, e.g. \"set foo bar\".\nproc read_command {sock} {\n    set char [read $sock 1]\n    switch $char {\n        * {\n            set numargs [gets $sock]\n            set result {}\n            for {set i 0} {$i<$numargs} {incr i} {\n                read $sock 1;       # dollar sign\n                set len [gets $sock]\n                set str [read $sock $len]\n                gets $sock;         # trailing \\r\\n\n                lappend result $str\n            }\n            return $result\n        }\n        {} {\n            # EOF\n            return {}\n        }\n        default {\n            # Non-RESP command\n            set rest [gets $sock]\n            return \"$char$rest\"\n        }\n    }\n}\n\nproc accept {sock host port} {\n    global expected_traffic\n    foreach {expect_cmd reply} $expected_traffic {\n        if {[eof $sock]} {break}\n        set cmd [read_command $sock]\n        if {[string equal -nocase $cmd $expect_cmd]} {\n            puts $sock $reply\n            flush $sock\n        } else {\n            puts $sock \"-ERR unexpected command $cmd\"\n            break\n        }\n    }\n    close $sock\n}\n\nset sockfd [socket -server accept -myaddr 127.0.0.1 $port]\nafter 5000 set done timeout\nvwait done\nclose $sockfd\n\n"
  },
  {
    "path": "tests/helpers/gen_write_load.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nsource tests/support/redis.tcl\n\nset ::tlsdir \"tests/tls\"\n\n# Continuously sends SET commands to the server. If key is omitted, a random key\n# is used for every SET command. The value is always random.\n# ignore_error_reply (default 0): when non-zero, MOVED/ASK replies are tolerated\n# while draining pipelined responses (periodic 500-reply batches and final drain).\nproc gen_write_load {host port seconds tls {key \"\"} {size 0} {sleep 0} {ignore_error_reply 0}} {\n    set start_time [clock seconds]\n    set r [redis $host $port 1 $tls]\n    $r client setname LOAD_HANDLER\n    $r read\n    catch {\n        $r select 9\n        $r read\n    } ;# select 9 will fail in cluster mode\n\n    # fixed size value\n    if {$size != 0} {\n        set value [string repeat \"x\" $size]\n    }\n\n    set count 0\n    while 1 {\n        if {$size == 0} {\n            set value [expr rand()]\n        }\n\n        if {$key == \"\"} {\n            $r set [expr rand()] $value\n        } else {\n            $r set $key $value\n        }\n\n        incr count\n        if {$count % 500 == 0} {\n            for {set i 0} {$i < 500} {incr i} {\n                # Capture opts to preserve original errorInfo/errorCode on re-raise.\n                if {[catch {$r read} err opts]} {\n                    if {$ignore_error_reply && ([string match {MOVED*} $err] || [string match {ASK*} $err])} {\n                        continue\n                    }\n                    return -options $opts $err\n                }\n      
      }\n            set count 0\n        }\n\n        if {[clock seconds]-$start_time > $seconds} {\n            break\n        }\n        if {$sleep ne 0} {\n            after $sleep\n        }\n    }\n\n    # Read remaining replies\n    for {set i 0} {$i < $count} {incr i} {\n        if {[catch {$r read} err opts]} {\n            if {$ignore_error_reply && ([string match {MOVED*} $err] || [string match {ASK*} $err])} {\n                continue\n            }\n            return -options $opts $err\n        }\n    }\n    exit 0\n}\n\ngen_write_load [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4] [lindex $argv 5] [lindex $argv 6] [lindex $argv 7]\n"
  },
  {
    "path": "tests/instances.tcl",
    "content": "# Multi-instance test framework.\n# This is used in order to test Sentinel and Redis Cluster, and provides\n# basic capabilities for spawning and handling N parallel Redis / Sentinel\n# instances.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\npackage require Tcl 8.5-10\n\nif {$tcl_version < 9.0} { set tcl_precision 17 }\nsource ../support/redis.tcl\nsource ../support/util.tcl\nsource ../support/aofmanifest.tcl\nsource ../support/server.tcl\nsource ../support/test.tcl\n\nset ::verbose 0\nset ::valgrind 0\nset ::tls 0\nset ::tls_module 0\nset ::pause_on_error 0\nset ::dont_clean 0\nset ::simulate_error 0\nset ::failed 0\nset ::sentinel_instances {}\nset ::redis_instances {}\nset ::global_config {}\nset ::sentinel_base_port 20000\nset ::redis_base_port 30000\nset ::redis_port_count 1024\nset ::host \"127.0.0.1\"\nset ::leaked_fds_file [file normalize \"tmp/leaked_fds.txt\"]\nset ::pids {} ; # We kill everything at exit\nset ::dirs {} ; # We remove all the temp dirs at exit\nset ::run_matching {} ; # If non empty, only tests matching pattern are run.\nset ::stop_on_failure 0\nset ::loop 0\nset ::tsan 0\n\nif {[catch {cd tmp}]} {\n    puts \"tmp directory not found.\"\n    puts \"Please run this test from the Redis source root.\"\n    exit 1\n}\n\n# Execute the specified instance of the server specified by 'type', using\n# the provided configuration file. 
Returns the PID of the process.\nproc exec_instance {type dirname cfgfile} {\n    if {$type eq \"redis\"} {\n        set prgname redis-server\n    } elseif {$type eq \"sentinel\"} {\n        set prgname redis-sentinel\n    } else {\n        error \"Unknown instance type.\"\n    }\n\n    set errfile [file join $dirname err.txt]\n    if {$::valgrind} {\n        set pid [exec valgrind --track-origins=yes --suppressions=../../../src/valgrind.sup --show-reachable=no --show-possibly-lost=no --leak-check=full ../../../src/${prgname} $cfgfile 2>> $errfile &]\n    } else {\n        set pid [exec ../../../src/${prgname} $cfgfile 2>> $errfile &]\n    }\n    return $pid\n}\n\n# Spawn a redis or sentinel instance, depending on 'type'.\nproc spawn_instance {type base_port count {conf {}} {base_conf_file \"\"}} {\n    for {set j 0} {$j < $count} {incr j} {\n        set port [find_available_port $base_port $::redis_port_count]\n        # plaintext port (only used for TLS cluster)\n        set pport 0\n        # Create a directory for this instance.\n        set dirname \"${type}_${j}\"\n        lappend ::dirs $dirname\n        catch {exec rm -rf $dirname}\n        file mkdir $dirname\n\n        # Write the instance config file.\n        set cfgfile [file join $dirname $type.conf]\n        if {$base_conf_file ne \"\"} {\n            file copy -- $base_conf_file $cfgfile\n            set cfg [open $cfgfile a+]\n        } else {\n            set cfg [open $cfgfile w]\n        }\n\n        if {$::tls} {\n            if {$::tls_module} {\n                puts $cfg [format \"loadmodule %s/../../../src/redis-tls.so\" [pwd]]\n            }\n\n            puts $cfg \"tls-port $port\"\n            puts $cfg \"tls-replication yes\"\n            puts $cfg \"tls-cluster yes\"\n            # plaintext port, only used by plaintext clients in a TLS cluster\n            set pport [find_available_port $base_port $::redis_port_count]\n            puts $cfg \"port $pport\"\n            puts $cfg 
[format \"tls-cert-file %s/../../tls/server.crt\" [pwd]]\n            puts $cfg [format \"tls-key-file %s/../../tls/server.key\" [pwd]]\n            puts $cfg [format \"tls-client-cert-file %s/../../tls/client.crt\" [pwd]]\n            puts $cfg [format \"tls-client-key-file %s/../../tls/client.key\" [pwd]]\n            puts $cfg [format \"tls-dh-params-file %s/../../tls/redis.dh\" [pwd]]\n            puts $cfg [format \"tls-ca-cert-file %s/../../tls/ca.crt\" [pwd]]\n        } else {\n            puts $cfg \"port $port\"\n        }\n\n        if {$::log_req_res} {\n            puts $cfg \"req-res-logfile stdout.reqres\"\n        }\n\n        if {$::force_resp3} {\n            puts $cfg \"client-default-resp 3\"\n        }\n\n        puts $cfg \"repl-diskless-sync-delay 0\"\n        puts $cfg \"dir ./$dirname\"\n        puts $cfg \"logfile log.txt\"\n        # Add additional config files\n        foreach directive $conf {\n            puts $cfg $directive\n        }\n        dict for {name val} $::global_config {\n            puts $cfg \"$name $val\"\n        }\n        close $cfg\n\n        # Finally exec it and remember the pid for later cleanup.\n        set retry 100\n        while {$retry} {\n            set pid [exec_instance $type $dirname $cfgfile]\n\n            # Check availability\n            if {[server_is_up 127.0.0.1 $port 100] == 0} {\n                puts \"Starting $type #$j at port $port failed, try another\"\n                incr retry -1\n                set port [find_available_port $base_port $::redis_port_count]\n                set cfg [open $cfgfile a+]\n                if {$::tls} {\n                    puts $cfg \"tls-port $port\"\n                    set pport [find_available_port $base_port $::redis_port_count]\n                    puts $cfg \"port $pport\"\n                } else {\n                    puts $cfg \"port $port\"\n                }\n                close $cfg\n            } else {\n                puts \"Starting $type 
#$j at port $port\"\n                lappend ::pids $pid\n                break\n            }\n        }\n\n        # Check availability finally\n        if {[server_is_up $::host $port 100] == 0} {\n            set logfile [file join $dirname log.txt]\n            puts [exec tail $logfile]\n            abort_sentinel_test \"Problems starting $type #$j: ping timeout, maybe server start failed, check $logfile\"\n        }\n\n        # Push the instance into the right list\n        set link [redis $::host $port 0 $::tls]\n        $link reconnect 1\n        lappend ::${type}_instances [list \\\n            pid $pid \\\n            host $::host \\\n            port $port \\\n            plaintext-port $pport \\\n            link $link \\\n        ]\n    }\n}\n\nproc log_crashes {} {\n    set start_pattern {*REDIS BUG REPORT START*}\n    set logs [glob */log.txt]\n    foreach log $logs {\n        set fd [open $log]\n        set found 0\n        while {[gets $fd line] >= 0} {\n            if {[string match $start_pattern $line]} {\n                puts \"\\n*** Crash report found in $log ***\"\n                set found 1\n            }\n            if {$found} {\n                puts $line\n                incr ::failed\n            }\n        }\n    }\n\n    set logs [glob */err.txt]\n    foreach log $logs {\n        set res [find_valgrind_errors $log true]\n        if {$res != \"\"} {\n            puts $res\n            incr ::failed\n        }\n    }\n\n    set logs [glob */err.txt]\n    foreach log $logs {\n        set res [sanitizer_errors_from_file $log]\n        if {$res != \"\"} {\n            puts $res\n            incr ::failed\n        }\n    }\n}\n\nproc is_alive pid {\n    if {[catch {exec ps -p $pid} err]} {\n        return 0\n    } else {\n        return 1\n    }\n}\n\nproc stop_instance pid {\n    # Node might have been stopped in the test\n    # Send SIGCONT before SIGTERM, otherwise shutdown may be slow with ASAN.\n    catch {exec kill -SIGCONT 
$pid}\n    catch {exec kill $pid}\n    if {$::valgrind} {\n        set max_wait 120000\n    } else {\n        set max_wait 10000\n    }\n    while {[is_alive $pid]} {\n        incr wait 10\n\n        if {$wait == $max_wait} {\n            puts [colorstr red \"Forcing process $pid to crash...\"]\n            catch {exec kill -SEGV $pid}\n        } elseif {$wait >= $max_wait * 2} {\n            puts [colorstr red \"Forcing process $pid to exit...\"]\n            catch {exec kill -KILL $pid}\n        } elseif {$wait % 1000 == 0} {\n            puts \"Waiting for process $pid to exit...\"\n        }\n        after 10\n    }\n}\n\nproc cleanup {} {\n    puts \"Cleaning up...\"\n    foreach pid $::pids {\n        puts \"killing stale instance $pid\"\n        stop_instance $pid\n    }\n    log_crashes\n    if {$::dont_clean} {\n        return\n    }\n    foreach dir $::dirs {\n        catch {exec rm -rf $dir}\n    }\n}\n\nproc abort_sentinel_test msg {\n    incr ::failed\n    puts \"WARNING: Aborting the test.\"\n    puts \">>>>>>>> $msg\"\n    if {$::pause_on_error} pause_on_error\n    cleanup\n    exit 1\n}\n\nproc parse_options {} {\n    for {set j 0} {$j < [llength $::argv]} {incr j} {\n        set opt [lindex $::argv $j]\n        set val [lindex $::argv [expr $j+1]]\n        if {$opt eq \"--single\"} {\n            incr j\n            lappend ::run_matching \"*${val}*\"\n        } elseif {$opt eq \"--pause-on-error\"} {\n            set ::pause_on_error 1\n        } elseif {$opt eq {--dont-clean}} {\n            set ::dont_clean 1\n        } elseif {$opt eq \"--fail\"} {\n            set ::simulate_error 1\n        } elseif {$opt eq {--valgrind}} {\n            set ::valgrind 1\n        } elseif {$opt eq {--host}} {\n            incr j\n            set ::host ${val}\n        } elseif {$opt eq {--tls} || $opt eq {--tls-module}} {\n            package require tls 1.6\n            ::tls::init \\\n                -cafile \"$::tlsdir/ca.crt\" \\\n                -certfile 
\"$::tlsdir/client.crt\" \\\n                -keyfile \"$::tlsdir/client.key\"\n            set ::tls 1\n            if {$opt eq {--tls-module}} {\n                set ::tls_module 1\n            }\n        } elseif {$opt eq {--config}} {\n            set val2 [lindex $::argv [expr $j+2]]\n            dict set ::global_config $val $val2\n            incr j 2\n        } elseif {$opt eq {--stop}} {\n            set ::stop_on_failure 1\n        } elseif {$opt eq {--loop}} {\n            set ::loop 1\n        } elseif {$opt eq {--log-req-res}} {\n            set ::log_req_res 1\n        } elseif {$opt eq {--force-resp3}} {\n            set ::force_resp3 1\n        } elseif {$opt eq {--tsan}} {\n            set ::tsan 1\n        } elseif {$opt eq \"--help\"} {\n            puts \"--single <pattern>      Only runs tests specified by pattern.\"\n            puts \"--dont-clean            Keep log files on exit.\"\n            puts \"--pause-on-error        Pause for manual inspection on error.\"\n            puts \"--fail                  Simulate a test failure.\"\n            puts \"--valgrind              Run with valgrind.\"\n            puts \"--tls                   Run tests in TLS mode.\"\n            puts \"--tls-module            Run tests in TLS mode with Redis module.\"\n            puts \"--host <host>           Use hostname instead of 127.0.0.1.\"\n            puts \"--config <k> <v>        Extra config argument(s).\"\n            puts \"--stop                  Blocks once the first test fails.\"\n            puts \"--loop                  Execute the specified set of tests forever.\"\n            puts \"--help                  Shows this help.\"\n            exit 0\n        } else {\n            puts \"Unknown option $opt\"\n            exit 1\n        }\n    }\n}\n\n# If --pause-on-error option was passed at startup this function is called\n# on error in order to give the developer a chance to understand more about\n# the error condition while the 
instances are still running.\nproc pause_on_error {} {\n    puts \"\"\n    puts [colorstr yellow \"*** Please inspect the error now ***\"]\n    puts \"\\nType \\\"continue\\\" to resume the test, \\\"help\\\" for help screen.\\n\"\n    while 1 {\n        puts -nonewline \"> \"\n        flush stdout\n        set line [gets stdin]\n        set argv [split $line \" \"]\n        set cmd [lindex $argv 0]\n        if {$cmd eq {continue}} {\n            break\n        } elseif {$cmd eq {show-redis-logs}} {\n            set count 10\n            if {[lindex $argv 1] ne {}} {set count [lindex $argv 1]}\n            foreach_redis_id id {\n                puts \"=== REDIS $id ====\"\n                puts [exec tail -$count redis_$id/log.txt]\n                puts \"---------------------\\n\"\n            }\n        } elseif {$cmd eq {show-sentinel-logs}} {\n            set count 10\n            if {[lindex $argv 1] ne {}} {set count [lindex $argv 1]}\n            foreach_sentinel_id id {\n                puts \"=== SENTINEL $id ====\"\n                puts [exec tail -$count sentinel_$id/log.txt]\n                puts \"---------------------\\n\"\n            }\n        } elseif {$cmd eq {ls}} {\n            foreach_redis_id id {\n                puts -nonewline \"Redis $id\"\n                set errcode [catch {\n                    set str {}\n                    append str \"@[RI $id tcp_port]: \"\n                    append str \"[RI $id role] \"\n                    if {[RI $id role] eq {slave}} {\n                        append str \"[RI $id master_host]:[RI $id master_port]\"\n                    }\n                    set str\n                } retval]\n                if {$errcode} {\n                    puts \" -- $retval\"\n                } else {\n                    puts $retval\n                }\n            }\n            foreach_sentinel_id id {\n                puts -nonewline \"Sentinel $id\"\n                set errcode [catch {\n                    set 
str {}\n                    append str \"@[SI $id tcp_port]: \"\n                    append str \"[join [S $id sentinel get-master-addr-by-name mymaster]]\"\n                    set str\n                } retval]\n                if {$errcode} {\n                    puts \" -- $retval\"\n                } else {\n                    puts $retval\n                }\n            }\n        } elseif {$cmd eq {help}} {\n            puts \"ls                     List Sentinel and Redis instances.\"\n            puts \"show-sentinel-logs \\[N\\] Show latest N lines of logs.\"\n            puts \"show-redis-logs \\[N\\]    Show latest N lines of logs.\"\n            puts \"S <id> cmd ... arg     Call command in Sentinel <id>.\"\n            puts \"R <id> cmd ... arg     Call command in Redis <id>.\"\n            puts \"SI <id> <field>        Show Sentinel <id> INFO <field>.\"\n            puts \"RI <id> <field>        Show Redis <id> INFO <field>.\"\n            puts \"continue               Resume test.\"\n        } else {\n            set errcode [catch {eval $line} retval]\n            if {$retval ne {}} {puts \"$retval\"}\n        }\n    }\n}\n\n# We redefine 'test' as for Sentinel we don't use the server-client\n# architecture for the test, everything is sequential.\nproc test {descr code} {\n    set ts [clock format [clock seconds] -format %H:%M:%S]\n    puts -nonewline \"$ts> $descr: \"\n    flush stdout\n\n    if {[catch {set retval [uplevel 1 $code]} error]} {\n        incr ::failed\n        if {[string match \"assertion:*\" $error]} {\n            set msg \"FAILED: [string range $error 10 end]\"\n            puts [colorstr red $msg]\n            if {$::pause_on_error} pause_on_error\n            puts [colorstr red \"(Jumping to next unit after error)\"]\n            return -code continue\n        } else {\n            # Re-raise, let handler up the stack take care of this.\n            error $error $::errorInfo\n        }\n    } else {\n        puts [colorstr 
green OK]\n    }\n}\n\n# Check memory leaks when running on OSX using the \"leaks\" utility.\nproc check_leaks instance_types {\n    if {[string match {*Darwin*} [exec uname -a]]} {\n        puts -nonewline \"Testing for memory leaks...\"; flush stdout\n        foreach type $instance_types {\n            foreach_instance_id [set ::${type}_instances] id {\n                if {[instance_is_killed $type $id]} continue\n                set pid [get_instance_attrib $type $id pid]\n                set output {0 leaks}\n                catch {exec leaks $pid} output\n                if {[string match {*process does not exist*} $output] ||\n                    [string match {*cannot examine*} $output]} {\n                    # In a few tests we kill the server process.\n                    set output \"0 leaks\"\n                } else {\n                    puts -nonewline \"$type/$pid \"\n                    flush stdout\n                }\n                if {![string match {*0 leaks*} $output]} {\n                    puts [colorstr red \"=== MEMORY LEAK DETECTED ===\"]\n                    puts \"Instance type $type, ID $id:\"\n                    puts $output\n                    puts \"===\"\n                    incr ::failed\n                }\n            }\n        }\n        puts \"\"\n    }\n}\n\n# Execute all the units inside the 'tests' directory.\nproc run_tests {} {\n    set tests [lsort [glob ../tests/*]]\n\nwhile 1 {\n    foreach test $tests {\n        # Remove leaked_fds file before starting\n        if {$::leaked_fds_file != \"\" && [file exists $::leaked_fds_file]} {\n            file delete $::leaked_fds_file\n        }\n\n        if {[llength $::run_matching] != 0 && ![search_pattern_list $test $::run_matching true]} {\n            continue\n        }\n        if {[file isdirectory $test]} continue\n        puts [colorstr yellow \"Testing unit: [lindex [file split $test] end]\"]\n        if {[catch { source $test } err]} {\n            puts \"FAILED: 
caught an error in the test $err\"\n            puts $::errorInfo\n            incr ::failed\n            # letting the tests resume, so we'll eventually reach the cleanup and report crashes\n\n            if {$::stop_on_failure} {\n                puts -nonewline \"(Test stopped, press enter to resume the tests)\"\n                flush stdout\n                gets stdin\n            }\n        }\n        check_leaks {redis sentinel}\n\n        # Check if a leaked fds file was created and abort the test.\n        if {$::leaked_fds_file != \"\" && [file exists $::leaked_fds_file]} {\n            puts [colorstr red \"ERROR: Sentinel has leaked fds to scripts:\"]\n            puts [exec cat $::leaked_fds_file]\n            puts \"----\"\n            incr ::failed\n        }\n    }\n\n    if {$::loop == 0} { break }\n} ;# while 1\n}\n\n# Print a message and exit with 0 / 1 depending on whether there were failures.\nproc end_tests {} {\n    if {$::failed == 0 } {\n        puts [colorstr green \"GOOD! 
No errors.\"]\n        exit 0\n    } else {\n        puts [colorstr red \"WARNING $::failed test(s) failed.\"]\n        exit 1\n    }\n}\n\n# The \"S\" command is used to interact with the N-th Sentinel.\n# The general form is:\n#\n# S <sentinel-id> command arg arg arg ...\n#\n# Example to ping the Sentinel 0 (first instance): S 0 PING\nproc S {n args} {\n    set s [lindex $::sentinel_instances $n]\n    [dict get $s link] {*}$args\n}\n\n# Returns a Redis instance by index.\n# Example:\n#     [Rn 0] info\nproc Rn {n} {\n    return [dict get [lindex $::redis_instances $n] link]\n}\n\n# Like S but to talk with Redis instances.\nproc R {n args} {\n    [Rn $n] {*}$args\n}\n\nproc get_info_field {info field} {\n    set fl [string length $field]\n    append field :\n    foreach line [split $info \"\\n\"] {\n        set line [string trim $line \"\\r\\n \"]\n        if {[string range $line 0 $fl] eq $field} {\n            return [string range $line [expr {$fl+1}] end]\n        }\n    }\n    return {}\n}\n\nproc SI {n field} {\n    get_info_field [S $n info] $field\n}\n\nproc RI {n field} {\n    get_info_field [R $n info] $field\n}\n\nproc RPort {n} {\n    if {$::tls} {\n        return [lindex [R $n config get tls-port] 1]\n    } else {\n        return [lindex [R $n config get port] 1]\n    }\n}\n\n# Iterate over IDs of sentinel or redis instances.\nproc foreach_instance_id {instances idvar code} {\n    upvar 1 $idvar id\n    for {set id 0} {$id < [llength $instances]} {incr id} {\n        set errcode [catch {uplevel 1 $code} result]\n        if {$errcode == 1} {\n            error $result $::errorInfo $::errorCode\n        } elseif {$errcode == 4} {\n            continue\n        } elseif {$errcode == 3} {\n            break\n        } elseif {$errcode != 0} {\n            return -code $errcode $result\n        }\n    }\n}\n\nproc foreach_sentinel_id {idvar code} {\n    set errcode [catch {uplevel 1 [list foreach_instance_id $::sentinel_instances $idvar $code]} result]\n    
return -code $errcode $result\n}\n\nproc foreach_redis_id {idvar code} {\n    set errcode [catch {uplevel 1 [list foreach_instance_id $::redis_instances $idvar $code]} result]\n    return -code $errcode $result\n}\n\n# Get the specific attribute of the specified instance type, id.\nproc get_instance_attrib {type id attrib} {\n    dict get [lindex [set ::${type}_instances] $id] $attrib\n}\n\n# Set the specific attribute of the specified instance type, id.\nproc set_instance_attrib {type id attrib newval} {\n    set d [lindex [set ::${type}_instances] $id]\n    dict set d $attrib $newval\n    lset ::${type}_instances $id $d\n}\n\n# Create a master-slave cluster of the given number of total instances.\n# The first instance \"0\" is the master, all others are configured as\n# slaves.\nproc create_redis_master_slave_cluster n {\n    foreach_redis_id id {\n        if {$id == 0} {\n            # Our master.\n            R $id slaveof no one\n            R $id flushall\n        } elseif {$id < $n} {\n            R $id slaveof [get_instance_attrib redis 0 host] \\\n                          [get_instance_attrib redis 0 port]\n        } else {\n            # Instances not part of the cluster.\n            R $id slaveof no one\n        }\n    }\n    # Wait for all the slaves to sync.\n    wait_for_condition 1000 50 {\n        [RI 0 connected_slaves] == ($n-1)\n    } else {\n        fail \"Unable to create a master-slave cluster.\"\n    }\n}\n\nproc get_instance_id_by_port {type port} {\n    foreach_${type}_id id {\n        if {[get_instance_attrib $type $id port] == $port} {\n            return $id\n        }\n    }\n    fail \"Instance $type port $port not found.\"\n}\n\n# Kill an instance of the specified type/id with SIGKILL.\n# This function will mark the instance PID as -1 to remember that this instance\n# is no longer running and will remove its PID from the list of pids that\n# we kill at cleanup.\n#\n# The instance can be restarted with restart_instance.\nproc 
kill_instance {type id} {\n    set pid [get_instance_attrib $type $id pid]\n    set port [get_instance_attrib $type $id port]\n\n    if {$pid == -1} {\n        error \"You tried to kill $type $id twice.\"\n    }\n\n    stop_instance $pid\n    set_instance_attrib $type $id pid -1\n    set_instance_attrib $type $id link you_tried_to_talk_with_killed_instance\n\n    # Remove the PID from the list of pids to kill at exit.\n    set ::pids [lsearch -all -inline -not -exact $::pids $pid]\n\n    # Wait for the port it was using to become available again, so that a new\n    # server can be started ASAP on the same port.\n    set retry 100\n    while {[incr retry -1]} {\n        set port_is_free [catch {set s [socket 127.0.0.1 $port]}]\n        if {$port_is_free} break\n        catch {close $s}\n        after 100\n    }\n    if {$retry == 0} {\n        error \"Port $port did not become available after killing the instance.\"\n    }\n}\n\n# Return true if the instance of the specified type/id is killed.\nproc instance_is_killed {type id} {\n    set pid [get_instance_attrib $type $id pid]\n    expr {$pid == -1}\n}\n\n# Restart an instance previously killed by kill_instance.\nproc restart_instance {type id} {\n    set dirname \"${type}_${id}\"\n    set cfgfile [file join $dirname $type.conf]\n    set port [get_instance_attrib $type $id port]\n\n    # Execute the instance with its old setup and append the new pid\n    # file for cleanup.\n    set pid [exec_instance $type $dirname $cfgfile]\n    set_instance_attrib $type $id pid $pid\n    lappend ::pids $pid\n\n    # Check that the instance is running\n    if {[server_is_up 127.0.0.1 $port 100] == 0} {\n        set logfile [file join $dirname log.txt]\n        puts [exec tail $logfile]\n        abort_sentinel_test \"Problems starting $type #$id: ping timeout, maybe server start failed, check $logfile\"\n    }\n\n    # Connect to it with a fresh link\n    set link [redis 127.0.0.1 $port 0 $::tls]\n    $link reconnect 1\n    
set_instance_attrib $type $id link $link\n\n    # Make sure the instance is not loading the dataset when this\n    # function returns.\n    while 1 {\n        catch {$link ping} retval\n        if {[string match {*LOADING*} $retval]} {\n            after 100\n            continue\n        } else {\n            break\n        }\n    }\n}\n\nproc redis_deferring_client {type id} {\n    set port [get_instance_attrib $type $id port]\n    set host [get_instance_attrib $type $id host]\n    set client [redis $host $port 1 $::tls]\n    return $client\n}\n\nproc redis_deferring_client_by_addr {host port} {\n    set client [redis $host $port 1 $::tls]\n    return $client\n}\n\nproc redis_client {type id} {\n    set port [get_instance_attrib $type $id port]\n    set host [get_instance_attrib $type $id host]\n    set client [redis $host $port 0 $::tls]\n    return $client\n}\n\nproc redis_client_by_addr {host port} {\n    set client [redis $host $port 0 $::tls]\n    return $client\n}\n"
  },
  {
    "path": "tests/integration/aof-multi-part.tcl",
    "content": "source tests/support/aofmanifest.tcl\nset defaults {appendonly {yes} appendfilename {appendonly.aof} appenddirname {appendonlydir} auto-aof-rewrite-percentage {0}}\nset server_path [tmpdir server.multi.aof]\nset aof_dirname \"appendonlydir\"\nset aof_basename \"appendonly.aof\"\nset aof_dirpath \"$server_path/$aof_dirname\"\nset aof_base1_file \"$server_path/$aof_dirname/${aof_basename}.1$::base_aof_sufix$::aof_format_suffix\"\nset aof_base2_file \"$server_path/$aof_dirname/${aof_basename}.2$::base_aof_sufix$::aof_format_suffix\"\nset aof_incr1_file \"$server_path/$aof_dirname/${aof_basename}.1$::incr_aof_sufix$::aof_format_suffix\"\nset aof_incr2_file \"$server_path/$aof_dirname/${aof_basename}.2$::incr_aof_sufix$::aof_format_suffix\"\nset aof_incr3_file \"$server_path/$aof_dirname/${aof_basename}.3$::incr_aof_sufix$::aof_format_suffix\"\nset aof_manifest_file \"$server_path/$aof_dirname/${aof_basename}$::manifest_suffix\"\nset aof_old_name_old_path \"$server_path/$aof_basename\"\nset aof_old_name_new_path \"$aof_dirpath/$aof_basename\"\nset aof_old_name_old_path2 \"$server_path/${aof_basename}2\"\nset aof_manifest_file2 \"$server_path/$aof_dirname/${aof_basename}2$::manifest_suffix\"\n\ntags {\"external:skip\"} {\n\n    # Test Part 1\n\n    # In order to test the loading logic of redis under different combinations of manifest and AOF\n    # files, we manually construct the manifest and AOF files, then start redis to verify whether\n    # its behavior is as expected.\n\n    test {Multi Part AOF can't load data when some file is missing} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr2_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest 
\"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"file appendonly.aof.2.incr.aof seq 2 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! [is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"appendonly.aof.1.incr.aof .*No such file or directory\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the sequence does not increase monotonically} {\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr2_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.2.incr.aof seq 2 type i\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"Found a non-monotonic sequence number\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when there are blank lines in the manifest file} {\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr3_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"\\n\"\n            append_to_manifest \"file appendonly.aof.3.incr.aof seq 3 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"Invalid AOF manifest file format\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when there is a duplicate base file} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_base2_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.2.base.aof seq 2 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"Found duplicate base file information\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest format is wrong (type unknown)} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type x\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! [is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"Unknown AOF file type\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest format is wrong (missing key)} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"filx appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 2 [count_message_lines $server_path/stdout \"Invalid AOF manifest file format\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest format is wrong (line too short)} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! [is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 3 [count_message_lines $server_path/stdout \"Invalid AOF manifest file format\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest format is wrong (line too long)} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 
type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"The AOF manifest file contains too long line\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest format is wrong (odd parameter)} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i newkey\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! [is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 4 [count_message_lines $server_path/stdout \"Invalid AOF manifest file format\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can't load data when the manifest file is empty} {\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! 
[is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"Found an empty AOF manifest\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can start when no aof and no manifest} {\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n\n            set client [redis [srv host] [srv port] 0 $::tls]\n\n            assert_equal OK [$client set k1 v1]\n            assert_equal v1 [$client get k1]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can start when we have an empty AOF dir} {\n        create_aof_dir $aof_dirpath\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n    }\n\n    test {Multi Part AOF can load data with a discontinuously increasing sequence} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof $aof_dirpath $aof_incr3_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"file appendonly.aof.3.incr.aof seq 3 type i\\n\"\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n            assert_equal v3 [$client get 
k3]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can load data when the manifest adds a new k-v} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof $aof_dirpath $aof_incr3_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b newkey newvalue\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"file appendonly.aof.3.incr.aof seq 3 type i\\n\"\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n            assert_equal v3 [$client get k3]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can load data when some AOFs are empty} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n        }\n\n        create_aof $aof_dirpath $aof_incr3_file {\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"file appendonly.aof.3.incr.aof seq 3 type i\\n\"\n        }\n\n        start_server_aof [list dir $server_path] {\n            
assert_equal 1 [is_alive [srv pid]]\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal \"\" [$client get k2]\n            assert_equal v3 [$client get k3]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can load data from old version redis (rdb preamble no)} {\n        create_aof $server_path $aof_old_name_old_path {\n            append_to_aof [formatCommand set k1 v1]\n            append_to_aof [formatCommand set k2 v2]\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n            assert_equal v3 [$client get k3]\n\n            assert_equal 0 [check_file_exist $server_path $aof_basename]\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_basename]\n\n            assert_aof_manifest_content $aof_manifest_file  {\n                {file appendonly.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            assert_equal OK [$client set k4 v4]\n\n            $client bgrewriteaof\n            waitForBgrewriteaof $client\n\n            assert_equal OK [$client set k5 v5]\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.2.base.rdb seq 2 type b}\n                {file appendonly.aof.2.incr.aof seq 2 type i}\n            }\n\n            set d1 [$client debug digest]\n            $client debug loadaof\n            set d2 [$client debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can load data 
from old version redis (rdb preamble yes)} {\n        exec cp tests/assets/rdb-preamble.aof $aof_old_name_old_path\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            # k1 k2 in rdb header and k3 in AOF tail\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n            assert_equal v3 [$client get k3]\n\n            assert_equal 0 [check_file_exist $server_path $aof_basename]\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_basename]\n\n            assert_aof_manifest_content $aof_manifest_file  {\n                {file appendonly.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            assert_equal OK [$client set k4 v4]\n\n            $client bgrewriteaof\n            waitForBgrewriteaof $client\n\n            assert_equal OK [$client set k5 v5]\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.2.base.rdb seq 2 type b}\n                {file appendonly.aof.2.incr.aof seq 2 type i}\n            }\n\n            set d1 [$client debug digest]\n            $client debug loadaof\n            set d2 [$client debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can continue the upgrade from the interrupted upgrade state} {\n        create_aof $server_path $aof_old_name_old_path {\n            append_to_aof [formatCommand set k1 v1]\n            append_to_aof [formatCommand set k2 v2]\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        # Create a layout of an interrupted upgrade (interrupted before the rename).\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof seq 1 type 
b\\n\"\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n            assert_equal v3 [$client get k3]\n\n            assert_equal 0 [check_file_exist $server_path $aof_basename]\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_basename]\n\n            assert_aof_manifest_content $aof_manifest_file  {\n                {file appendonly.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can be loaded correctly when both server dir and aof dir contain old AOF} {\n        create_aof $server_path $aof_old_name_old_path {\n            append_to_aof [formatCommand set k1 v1]\n            append_to_aof [formatCommand set k2 v2]\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof seq 1 type b\\n\"\n        }\n\n        create_aof $aof_dirpath $aof_old_name_new_path {\n            append_to_aof [formatCommand set k4 v4]\n            append_to_aof [formatCommand set k5 v5]\n            append_to_aof [formatCommand set k6 v6]\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal 0 [$client exists k1]\n            assert_equal 0 [$client exists k2]\n            assert_equal 0 [$client exists k3]\n\n            assert_equal v4 [$client get k4]\n            assert_equal v5 [$client get k5]\n            assert_equal v6 [$client get k6]\n\n            
assert_equal 1 [check_file_exist $server_path $aof_basename]\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_basename]\n\n            assert_aof_manifest_content $aof_manifest_file  {\n                {file appendonly.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n        }\n\n        clean_aof_persistence $aof_dirpath\n        catch {exec rm -rf $aof_old_name_old_path}\n    }\n\n    test {Multi Part AOF can't load data when the manifest contains the old AOF file name but the file does not exist in server dir and aof dir} {\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof seq 1 type b\\n\"\n        }\n\n        start_server_aof_ex [list dir $server_path] [list wait_ready false] {\n            wait_for_condition 100 50 {\n                ! [is_alive [srv pid]]\n            } else {\n                fail \"AOF loading didn't fail\"\n            }\n\n            assert_equal 1 [count_message_lines $server_path/stdout \"appendonly.aof .*No such file or directory\"]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can upgrade when two redis share the same server dir} {\n        create_aof $server_path $aof_old_name_old_path {\n            append_to_aof [formatCommand set k1 v1]\n            append_to_aof [formatCommand set k2 v2]\n            append_to_aof [formatCommand set k3 v3]\n        }\n\n        create_aof $server_path $aof_old_name_old_path2 {\n            append_to_aof [formatCommand set k4 v4]\n            append_to_aof [formatCommand set k5 v5]\n            append_to_aof [formatCommand set k6 v6]\n        }\n\n        start_server_aof [list dir $server_path] {\n            set redis1 [redis [srv host] [srv port] 0 $::tls]\n\n            start_server [list overrides [list dir $server_path appendonly yes appendfilename appendonly.aof2]] {\n                set redis2 [redis [srv 
host] [srv port] 0 $::tls]\n\n                test \"Multi Part AOF can upgrade when two redis share the same server dir (redis1)\" {\n                    wait_done_loading $redis1\n                    assert_equal v1 [$redis1 get k1]\n                    assert_equal v2 [$redis1 get k2]\n                    assert_equal v3 [$redis1 get k3]\n\n                    assert_equal 0 [$redis1 exists k4]\n                    assert_equal 0 [$redis1 exists k5]\n                    assert_equal 0 [$redis1 exists k6]\n\n                    assert_aof_manifest_content $aof_manifest_file  {\n                        {file appendonly.aof seq 1 type b}\n                        {file appendonly.aof.1.incr.aof seq 1 type i}\n                    }\n\n                    $redis1 bgrewriteaof\n                    waitForBgrewriteaof $redis1\n\n                    assert_equal OK [$redis1 set k v]\n\n                    assert_aof_manifest_content $aof_manifest_file {\n                        {file appendonly.aof.2.base.rdb seq 2 type b}\n                        {file appendonly.aof.2.incr.aof seq 2 type i}\n                    }\n\n                    set d1 [$redis1 debug digest]\n                    $redis1 debug loadaof\n                    set d2 [$redis1 debug digest]\n                    assert {$d1 eq $d2}\n                }\n\n                test \"Multi Part AOF can upgrade when two redis share the same server dir (redis2)\" {\n                    wait_done_loading $redis2\n\n                    assert_equal 0 [$redis2 exists k1]\n                    assert_equal 0 [$redis2 exists k2]\n                    assert_equal 0 [$redis2 exists k3]\n\n                    assert_equal v4 [$redis2 get k4]\n                    assert_equal v5 [$redis2 get k5]\n                    assert_equal v6 [$redis2 get k6]\n\n                    assert_aof_manifest_content $aof_manifest_file2  {\n                        {file appendonly.aof2 seq 1 type b}\n                        {file 
appendonly.aof2.1.incr.aof seq 1 type i}\n                    }\n\n                    $redis2 bgrewriteaof\n                    waitForBgrewriteaof $redis2\n\n                    assert_equal OK [$redis2 set k v]\n\n                    assert_aof_manifest_content $aof_manifest_file2 {\n                        {file appendonly.aof2.2.base.rdb seq 2 type b}\n                        {file appendonly.aof2.2.incr.aof seq 2 type i}\n                    }\n\n                    set d1 [$redis2 debug digest]\n                    $redis2 debug loadaof\n                    set d2 [$redis2 debug digest]\n                    assert {$d1 eq $d2}\n                }\n            }\n        }\n    }\n\n    test {Multi Part AOF can handle appendfilename containing whitespaces} {\n        start_server [list overrides [list appendonly yes appendfilename \"\\\" file seq \\\\n\\\\n.aof \\\"\"]] {\n            set dir [get_redis_dir]\n            set aof_manifest_name [format \"%s/%s/%s%s\" $dir \"appendonlydir\" \" file seq \\n\\n.aof \" $::manifest_suffix]\n            set redis [redis [srv host] [srv port] 0 $::tls]\n\n            assert_equal OK [$redis set k1 v1]\n\n            $redis bgrewriteaof\n            waitForBgrewriteaof $redis\n\n            assert_aof_manifest_content $aof_manifest_name {\n                {file \" file seq \\n\\n.aof .2.base.rdb\" seq 2 type b}\n                {file \" file seq \\n\\n.aof .2.incr.aof\" seq 2 type i}\n            }\n\n            set d1 [$redis debug digest]\n            $redis debug loadaof\n            set d2 [$redis debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can create BASE (RDB format) when redis starts from empty} {\n        start_server_aof [list dir $server_path] {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal 1 [check_file_exist $aof_dirpath 
\"${aof_basename}.1${::base_aof_sufix}${::rdb_format_suffix}\"]\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.rdb seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            $client set foo behavior\n\n            set d1 [$client debug digest]\n            $client debug loadaof\n            set d2 [$client debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can create BASE (AOF format) when redis starts from empty} {\n        start_server_aof [list dir $server_path aof-use-rdb-preamble no] {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.1${::base_aof_sufix}${::aof_format_suffix}\"]\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            $client set foo behavior\n\n            set d1 [$client debug digest]\n            $client debug loadaof\n            set d2 [$client debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    # Test Part 2\n    #\n    # To test whether AOFRW behaves as expected while redis is running.\n    # We start redis first, then apply write pressure, enable and disable AOF, and run bgrewrite\n    # and other actions both manually and automatically, to test whether the correct AOF files are\n    # created, whether the correct manifest is generated, and whether the data can be reloaded\n    # correctly under continuous write pressure, etc.\n\n\n    start_server {tags {\"Multi Part AOF\"} overrides {aof-use-rdb-preamble {yes} appendonly {no} save {}}} {\n        set dir [get_redis_dir]\n        set 
aof_basename \"appendonly.aof\"\n        set aof_dirname \"appendonlydir\"\n        set aof_dirpath \"$dir/$aof_dirname\"\n        set aof_manifest_name \"$aof_basename$::manifest_suffix\"\n        set aof_manifest_file \"$dir/$aof_dirname/$aof_manifest_name\"\n\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        catch {exec rm -rf $aof_manifest_file}\n\n        test \"Make sure aof manifest $aof_manifest_name is not in aof directory\" {\n            assert_equal 0 [file exists $aof_manifest_file]\n        }\n\n        test \"AOF enable will create manifest file\" {\n            r config set appendonly yes ; # Will create manifest and new INCR aof\n            r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.\n            waitForBgrewriteaof r\n\n            # Start write load\n            set load_handle0 [start_write_load $master_host $master_port 10]\n\n            wait_for_condition 50 100 {\n                [r dbsize] > 0\n            } else {\n                fail \"No write load detected.\"\n            }\n\n            # First AOFRW done\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.rdb seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            # Check we really have these files\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_manifest_name]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.1${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.1${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # The second AOFRW done\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.2.base.rdb seq 2 type b}\n                {file 
appendonly.aof.2.incr.aof seq 2 type i}\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath $aof_manifest_name]\n            # Wait bio delete history\n            wait_for_condition 1000 10 {\n                [check_file_exist $aof_dirpath \"${aof_basename}.1${::base_aof_sufix}${::rdb_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.1${::incr_aof_sufix}${::aof_format_suffix}\"] == 0\n            } else {\n                fail \"Failed to delete history AOF\"\n            }\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.2${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.2${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            stop_write_load $load_handle0\n            wait_load_handlers_disconnected\n\n            set d1 [r debug digest]\n            r debug loadaof\n            set d2 [r debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        test \"AOF multiple rewrite failures will open multiple INCR AOFs\" {\n            # Start write load\n            r config set rdb-key-save-delay 10000000\n\n            set orig_size [r dbsize]\n            set load_handle0 [start_write_load $master_host $master_port 10]\n\n            wait_for_condition 50 100 {\n                [r dbsize] > $orig_size\n            } else {\n                fail \"No write load detected.\"\n            }\n\n            # Let AOFRW fail three times\n            r bgrewriteaof\n            set pid1 [get_child_pid 0]\n            catch {exec kill -9 $pid1}\n            waitForBgrewriteaof r\n\n            r bgrewriteaof\n            set pid2 [get_child_pid 0]\n            catch {exec kill -9 $pid2}\n            waitForBgrewriteaof r\n\n            r bgrewriteaof\n            set pid3 [get_child_pid 0]\n            catch {exec kill -9 $pid3}\n            waitForBgrewriteaof r\n\n            assert_equal 0 
[check_file_exist $dir \"temp-rewriteaof-bg-$pid1.aof\"]\n            assert_equal 0 [check_file_exist $dir \"temp-rewriteaof-bg-$pid2.aof\"]\n            assert_equal 0 [check_file_exist $dir \"temp-rewriteaof-bg-$pid3.aof\"]\n\n            # We will have four INCR AOFs\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.2.base.rdb seq 2 type b}\n                {file appendonly.aof.2.incr.aof seq 2 type i}\n                {file appendonly.aof.3.incr.aof seq 3 type i}\n                {file appendonly.aof.4.incr.aof seq 4 type i}\n                {file appendonly.aof.5.incr.aof seq 5 type i}\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.2${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.2${::incr_aof_sufix}${::aof_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.3${::incr_aof_sufix}${::aof_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.4${::incr_aof_sufix}${::aof_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.5${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            stop_write_load $load_handle0\n            wait_load_handlers_disconnected\n\n            set d1 [r debug digest]\n            r debug loadaof\n            set d2 [r debug digest]\n            assert {$d1 eq $d2}\n\n            r config set rdb-key-save-delay 0\n            catch {exec kill -9 [get_child_pid 0]}\n            wait_for_condition 1000 10 {\n                [s rdb_bgsave_in_progress] eq 0\n            } else {\n                fail \"bgsave did not stop in time\"\n            }\n\n            # AOFRW success\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # All previous INCR AOFs have become history\n            # and have been deleted\n            
assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.3.base.rdb seq 3 type b}\n                {file appendonly.aof.6.incr.aof seq 6 type i}\n            }\n\n            # Wait bio delete history\n            wait_for_condition 1000 10 {\n                [check_file_exist $aof_dirpath \"${aof_basename}.2${::base_aof_sufix}${::rdb_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.2${::incr_aof_sufix}${::aof_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.3${::incr_aof_sufix}${::aof_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.4${::incr_aof_sufix}${::aof_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.5${::incr_aof_sufix}${::aof_format_suffix}\"] == 0\n            } else {\n                fail \"Failed to delete history AOF\"\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.3${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.6${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            set d1 [r debug digest]\n            r debug loadaof\n            set d2 [r debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        test \"AOF rewrite doesn't open new aof when AOF is turned off\" {\n            r config set appendonly no\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # We only have BASE AOF, no INCR AOF\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.4.base.rdb seq 4 type b}\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.4${::base_aof_sufix}${::rdb_format_suffix}\"]\n            wait_for_condition 1000 10 {\n                [check_file_exist $aof_dirpath 
\"${aof_basename}.6${::incr_aof_sufix}${::aof_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.7${::incr_aof_sufix}${::aof_format_suffix}\"] == 0\n            } else {\n                fail \"Failed to delete history AOF\"\n            }\n\n            set d1 [r debug digest]\n            r debug loadaof\n            set d2 [r debug digest]\n            assert {$d1 eq $d2}\n\n            # Turn on AOF again\n            r config set appendonly yes\n            waitForBgrewriteaof r\n\n            # A new INCR AOF was created\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.5.base.rdb seq 5 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n\n            # Wait bio delete history\n            wait_for_condition 1000 10 {\n                [check_file_exist $aof_dirpath \"${aof_basename}.4${::base_aof_sufix}${::rdb_format_suffix}\"] == 0\n            } else {\n                fail \"Failed to delete history AOF\"\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.5${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.1${::incr_aof_sufix}${::aof_format_suffix}\"]\n        }\n\n        test \"AOF enable/disable auto gc\" {\n            r config set aof-disable-auto-gc yes\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # We can see four history AOFs (Evolved from two BASE and two INCR)\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.7.base.rdb seq 7 type b}\n                {file appendonly.aof.2.incr.aof seq 2 type h}\n                {file appendonly.aof.6.base.rdb seq 6 type h}\n                {file appendonly.aof.1.incr.aof seq 1 type h}\n                {file 
appendonly.aof.5.base.rdb seq 5 type h}\n                {file appendonly.aof.3.incr.aof seq 3 type i}\n            }\n\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.5${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.6${::base_aof_sufix}${::rdb_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.1${::incr_aof_sufix}${::aof_format_suffix}\"]\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.2${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            r config set aof-disable-auto-gc no\n\n            # Auto gc success\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.7.base.rdb seq 7 type b}\n                {file appendonly.aof.3.incr.aof seq 3 type i}\n            }\n\n            # wait bio delete history\n            wait_for_condition 1000 10 {\n                [check_file_exist $aof_dirpath \"${aof_basename}.5${::base_aof_sufix}${::rdb_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.6${::base_aof_sufix}${::rdb_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.1${::incr_aof_sufix}${::aof_format_suffix}\"] == 0 &&\n                [check_file_exist $aof_dirpath \"${aof_basename}.2${::incr_aof_sufix}${::aof_format_suffix}\"] == 0\n            } else {\n                fail \"Failed to delete history AOF\"\n            }\n        }\n\n        test \"AOF can produce consecutive sequence number after reload\" {\n            # Current manifest, BASE seq 7 and INCR seq 3\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.7.base.rdb seq 7 type b}\n                {file appendonly.aof.3.incr.aof seq 3 type i}\n            }\n\n            r debug loadaof\n\n            # Trigger AOFRW\n            r bgrewriteaof\n      
      waitForBgrewriteaof r\n\n            # Now BASE seq is 8 and INCR seq is 4\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.8.base.rdb seq 8 type b}\n                {file appendonly.aof.4.incr.aof seq 4 type i}\n            }\n        }\n\n        test \"AOF enable during BGSAVE will not write data until AOFRW finishes\" {\n            r config set appendonly no\n            r config set save \"\"\n            r config set rdb-key-save-delay 10000000\n\n            r set k1 v1\n            r bgsave\n\n            wait_for_condition 1000 10 {\n                [s rdb_bgsave_in_progress] eq 1\n            } else {\n                fail \"bgsave did not start in time\"\n            }\n\n            # Make server.aof_rewrite_scheduled = 1\n            r config set appendonly yes\n            assert_equal [s aof_rewrite_scheduled] 1\n\n            # No new INCR aof is opened\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.8.base.rdb seq 8 type b}\n                {file appendonly.aof.4.incr.aof seq 4 type i}\n            }\n\n            r set k2 v2\n            r debug loadaof\n\n            # Both k1 and k2 are lost\n            assert_equal 0 [r exists k1]\n            assert_equal 0 [r exists k2]\n\n            set total_forks [s total_forks]\n            assert_equal [s rdb_bgsave_in_progress] 1\n            r config set rdb-key-save-delay 0\n            catch {exec kill -9 [get_child_pid 0]}\n            wait_for_condition 1000 10 {\n                [s rdb_bgsave_in_progress] eq 0\n            } else {\n                fail \"bgsave did not stop in time\"\n            }\n\n            # Make sure AOFRW was scheduled\n            wait_for_condition 1000 10 {\n                [s total_forks] == [expr $total_forks + 1]\n            } else {\n                fail \"aof rewrite was not scheduled\"\n            }\n            waitForBgrewriteaof r\n\n            
assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.9.base.rdb seq 9 type b}\n                {file appendonly.aof.5.incr.aof seq 5 type i}\n            }\n\n            r set k3 v3\n            r debug loadaof\n            assert_equal v3 [r get k3]\n        }\n\n        test \"AOF will trigger limit when AOFRW fails many times\" {\n            # Clear all data and trigger a successful AOFRW, so that\n            # server.aof_current_size equals 0\n            r flushall\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            r config set rdb-key-save-delay 10000000\n            # Let us trigger AOFRW easily\n            r config set auto-aof-rewrite-percentage 1\n            r config set auto-aof-rewrite-min-size 1kb\n\n            # Set a key so that AOFRW can be delayed\n            r set k v\n\n            # Let AOFRW fail 3 times; this will trigger the AOFRW limit\n            r bgrewriteaof\n            catch {exec kill -9 [get_child_pid 0]}\n            waitForBgrewriteaof r\n\n            r bgrewriteaof\n            catch {exec kill -9 [get_child_pid 0]}\n            waitForBgrewriteaof r\n\n            r bgrewriteaof\n            catch {exec kill -9 [get_child_pid 0]}\n            waitForBgrewriteaof r\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.10.base.rdb seq 10 type b}\n                {file appendonly.aof.6.incr.aof seq 6 type i}\n                {file appendonly.aof.7.incr.aof seq 7 type i}\n                {file appendonly.aof.8.incr.aof seq 8 type i}\n                {file appendonly.aof.9.incr.aof seq 9 type i}\n            }\n\n            # Write 1KB data to trigger AOFRW\n            r set x [string repeat x 1024]\n\n            # Make sure we have the limit log\n            wait_for_condition 1000 50 {\n                [count_log_message 0 \"triggered the limit\"] == 1\n            } else {\n                fail \"aof 
rewrite did not trigger limit\"\n            }\n            assert_equal [status r aof_rewrite_in_progress] 0\n\n            # No new INCR AOF is created\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.10.base.rdb seq 10 type b}\n                {file appendonly.aof.6.incr.aof seq 6 type i}\n                {file appendonly.aof.7.incr.aof seq 7 type i}\n                {file appendonly.aof.8.incr.aof seq 8 type i}\n                {file appendonly.aof.9.incr.aof seq 9 type i}\n            }\n\n            # Turn off auto rewrite\n            r config set auto-aof-rewrite-percentage 0\n            r config set rdb-key-save-delay 0\n            catch {exec kill -9 [get_child_pid 0]}\n            wait_for_condition 1000 10 {\n                [s aof_rewrite_in_progress] eq 0\n            } else {\n                fail \"aof rewrite did not stop in time\"\n            }\n\n            # We can still manually execute AOFRW immediately\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # A new INCR AOF can be created\n            assert_equal 1 [check_file_exist $aof_dirpath \"${aof_basename}.10${::incr_aof_sufix}${::aof_format_suffix}\"]\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.11.base.rdb seq 11 type b}\n                {file appendonly.aof.10.incr.aof seq 10 type i}\n            }\n\n            set d1 [r debug digest]\n            r debug loadaof\n            set d2 [r debug digest]\n            assert {$d1 eq $d2}\n        }\n\n        start_server {overrides {aof-use-rdb-preamble {yes} appendonly {no} save {}}} {\n            set dir [get_redis_dir]\n            set aof_basename \"appendonly.aof\"\n            set aof_dirname \"appendonlydir\"\n            set aof_dirpath \"$dir/$aof_dirname\"\n            set aof_manifest_name \"$aof_basename$::manifest_suffix\"\n            set aof_manifest_file 
\"$dir/$aof_dirname/$aof_manifest_name\"\n\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            test \"AOF will open a temporary INCR AOF to accumulate data until the first AOFRW succeeds when AOF is dynamically enabled\" {\n                r config set save \"\"\n                # Increase AOFRW execution time to give us enough time to kill it\n                r config set rdb-key-save-delay 10000000\n\n                # Start write load\n                set load_handle0 [start_write_load $master_host $master_port 10]\n\n                wait_for_condition 50 100 {\n                    [r dbsize] > 0\n                } else {\n                    fail \"No write load detected.\"\n                }\n\n                # Enabling AOF will trigger an initial AOFRW\n                r config set appendonly yes\n                # Let AOFRW fail\n                assert_equal 1 [s aof_rewrite_in_progress]\n                set pid1 [get_child_pid 0]\n                catch {exec kill -9 $pid1}\n\n                # Wait for AOFRW to exit and delete temp incr aof\n                wait_for_condition 1000 100 {\n                    [count_log_message 0 \"Removing the temp incr aof file\"] == 1\n                } else {\n                    fail \"temp aof was not deleted\"\n                }\n\n                # Make sure manifest file is not created\n                assert_equal 0 [check_file_exist $aof_dirpath $aof_manifest_name]\n                # Make sure BASE AOF is not created\n                assert_equal 0 [check_file_exist $aof_dirpath \"${aof_basename}.1${::base_aof_sufix}${::rdb_format_suffix}\"]\n\n                # Make sure the next AOFRW has started\n                wait_for_condition 1000 50 {\n                    [s aof_rewrite_in_progress] == 1\n                } else {\n                    fail \"aof rewrite was not scheduled\"\n                }\n\n                # Do a 
successful AOFRW\n                set total_forks [s total_forks]\n                r config set rdb-key-save-delay 0\n                catch {exec kill -9 [get_child_pid 0]}\n\n                # Make sure the next AOFRW has started\n                wait_for_condition 1000 10 {\n                    [s total_forks] == [expr $total_forks + 1]\n                } else {\n                    fail \"aof rewrite was not scheduled\"\n                }\n                waitForBgrewriteaof r\n\n                assert_equal 2 [count_log_message 0 \"Removing the temp incr aof file\"]\n\n                # BASE and INCR AOF are successfully created\n                assert_aof_manifest_content $aof_manifest_file {\n                    {file appendonly.aof.1.base.rdb seq 1 type b}\n                    {file appendonly.aof.1.incr.aof seq 1 type i}\n                }\n\n                stop_write_load $load_handle0\n                wait_load_handlers_disconnected\n\n                set d1 [r debug digest]\n                r debug loadaof\n                set d2 [r debug digest]\n                assert {$d1 eq $d2}\n\n                # Dynamically disable AOF again\n                r config set appendonly no\n\n                # Disabling AOF does not delete previous AOF files\n                r debug loadaof\n                set d2 [r debug digest]\n                assert {$d1 eq $d2}\n\n                assert_equal 0 [s rdb_changes_since_last_save]\n                r config set rdb-key-save-delay 10000000\n                set load_handle0 [start_write_load $master_host $master_port 10]\n                wait_for_condition 50 100 {\n                    [s rdb_changes_since_last_save] > 0\n                } else {\n                    fail \"No write load detected.\"\n                }\n\n                # Re-enable AOF\n                r config set appendonly yes\n\n                # Let AOFRW fail\n                assert_equal 1 [s aof_rewrite_in_progress]\n                set pid1 
[get_child_pid 0]\n                catch {exec kill -9 $pid1}\n\n                # Wait for AOFRW to exit and delete temp incr aof\n                wait_for_condition 1000 100 {\n                    [count_log_message 0 \"Removing the temp incr aof file\"] == 3\n                } else {\n                    fail \"temp aof was not deleted 3 times\"\n                }\n\n                # Make sure no new incr AOF was created\n                assert_aof_manifest_content $aof_manifest_file {\n                    {file appendonly.aof.1.base.rdb seq 1 type b}\n                    {file appendonly.aof.1.incr.aof seq 1 type i}\n                }\n\n                # Make sure the next AOFRW has started\n                wait_for_condition 1000 50 {\n                    [s aof_rewrite_in_progress] == 1\n                } else {\n                    fail \"aof rewrite was not scheduled\"\n                }\n\n                # Do a successful AOFRW\n                set total_forks [s total_forks]\n                r config set rdb-key-save-delay 0\n                catch {exec kill -9 [get_child_pid 0]}\n\n                wait_for_condition 1000 10 {\n                    [s total_forks] == [expr $total_forks + 1]\n                } else {\n                    fail \"aof rewrite was not scheduled\"\n                }\n                waitForBgrewriteaof r\n\n                assert_equal 4 [count_log_message 0 \"Removing the temp incr aof file\"]\n\n                # New BASE and INCR AOF are successfully created\n                assert_aof_manifest_content $aof_manifest_file {\n                    {file appendonly.aof.2.base.rdb seq 2 type b}\n                    {file appendonly.aof.2.incr.aof seq 2 type i}\n                }\n\n                stop_write_load $load_handle0\n                wait_load_handlers_disconnected\n\n                set d1 [r debug digest]\n                r debug loadaof\n                set d2 [r debug digest]\n                assert {$d1 
eq $d2}\n            }\n        }\n    }\n\n    # Test Part 3\n    #\n    # Test if INCR AOF offset information is as expected\n    test {Multi Part AOF writes start offset in the manifest} {\n        set aof_dirpath \"$server_path/$aof_dirname\"\n        set aof_manifest_file \"$server_path/$aof_dirname/${aof_basename}$::manifest_suffix\"\n\n        start_server_aof [list dir $server_path] {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            # The manifest file has startoffset now\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.rdb seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i startoffset 0}\n            }\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF won't add the offset of incr AOF from old version} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        start_server_aof [list dir $server_path] {\n            assert_equal 1 [is_alive [srv pid]]\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_equal v1 [$client get k1]\n            assert_equal v2 [$client get k2]\n\n            $client set k3 v3\n            catch {$client shutdown}\n\n            # Should not add offsets to the manifest since we also don't know their\n            # correct starting replication offsets.\n            set fp [open $aof_manifest_file r]\n            set content [read $fp]\n            close 
$fp\n            assert ![regexp {startoffset} $content]\n\n            # The manifest file still has information from the old version\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.aof seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i}\n            }\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can update master_repl_offset with only startoffset info} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i startoffset 100\\n\"\n        }\n\n        start_server [list overrides [list dir $server_path appendonly yes ]] {\n            wait_done_loading r\n            r select 0\n            assert_equal v1 [r get k1]\n            assert_equal v2 [r get k2]\n\n            # After loading the AOF, redis will update the replication offset based on\n            # the information of the last INCR AOF, to avoid a rollback of the\n            # start offset of the new INCR AOF. 
If the INCR file doesn't have an end offset\n            # info, redis will calculate the replication offset by the start offset\n            # plus the file size.\n            set file_size [file size $aof_incr1_file]\n            set offset [expr $file_size + 100]\n            assert_equal $offset [s master_repl_offset]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF can update master_repl_offset with endoffset info} {\n        create_aof $aof_dirpath $aof_base1_file {\n            append_to_aof [formatCommand set k1 v1]\n        }\n\n        create_aof $aof_dirpath $aof_incr1_file {\n            append_to_aof [formatCommand set k2 v2]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i startoffset 100 endoffset 200\\n\"\n        }\n\n        start_server [list overrides [list dir $server_path appendonly yes ]] {\n            wait_done_loading r\n            r select 0\n            assert_equal v1 [r get k1]\n            assert_equal v2 [r get k2]\n\n            # If the INCR file has an end offset, redis directly uses it as replication offset\n            assert_equal 200 [s master_repl_offset]\n\n            # We should reset endoffset in manifest file\n            set fp [open $aof_manifest_file r]\n            set content [read $fp]\n            close $fp\n            assert ![regexp {endoffset} $content]\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {Multi Part AOF will add the end offset if we close gracefully the AOF} {\n        start_server_aof [list dir $server_path] {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.rdb seq 1 type b}\n      
          {file appendonly.aof.1.incr.aof seq 1 type i startoffset 0}\n            }\n\n            $client set k1 v1\n            $client set k2 v2\n            # Closing AOF gracefully when stopping appendonly should add an endoffset\n            # to the manifest file; 'endoffset' should be 2 since we wrote 2 commands\n            r config set appendonly no\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.1.base.rdb seq 1 type b}\n                {file appendonly.aof.1.incr.aof seq 1 type i startoffset 0 endoffset 2}\n            }\n            r config set appendonly yes\n            waitForBgrewriteaof $client\n\n            $client set k3 v3\n            # Closing AOF gracefully when shutting down the server should add an endoffset\n            # to the manifest file; 'endoffset' should be 3 since we wrote 3 commands\n            catch {$client shutdown}\n            assert_aof_manifest_content $aof_manifest_file {\n                {file appendonly.aof.2.base.rdb seq 2 type b}\n                {file appendonly.aof.2.incr.aof seq 2 type i startoffset 2 endoffset 3}\n            }\n        }\n\n        clean_aof_persistence $aof_dirpath\n    }\n\n    test {INCR AOF has accurate start offset when AOFRW} {\n        start_server [list overrides [list dir $server_path appendonly yes ]] {\n            r config set auto-aof-rewrite-percentage 0\n\n            # Start a write load to let master_repl_offset keep increasing\n            # since appendonly is enabled\n            set load_handle0 [start_write_load [srv 0 host] [srv 0 port] 10]\n            wait_for_condition 50 100 {\n                [r dbsize] > 0\n            } else {\n                fail \"No write load detected.\"\n            }\n\n            # We obtain the master_repl_offset at the time of bgrewriteaof by pausing\n            # the redis process, sending pipeline commands, and then resuming the process\n            set rd [redis_deferring_client]\n 
           pause_process [srv 0 pid]\n            set buf \"info replication\\r\\n\"\n            append buf \"bgrewriteaof\\r\\n\"\n            $rd write $buf\n            $rd flush\n            resume_process [srv 0 pid]\n            # Read the replication offset and the reply confirming bgrewriteaof started\n            regexp {master_repl_offset:(\\d+)} [$rd read] -> offset1\n            assert_match {*rewriting started*} [$rd read]\n            $rd close\n\n            # Get the start offset from the manifest file after bgrewriteaof\n            waitForBgrewriteaof r\n            set fp [open $aof_manifest_file r]\n            set content [read $fp]\n            close $fp\n            set offset2 [lindex [regexp -inline {startoffset (\\d+)} $content] 1]\n\n            # The start offset of the INCR AOF should be the same as master_repl_offset\n            # when we trigger bgrewriteaof\n            assert {$offset1 == $offset2}\n            stop_write_load $load_handle0\n            wait_load_handlers_disconnected\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/aof-race.tcl",
    "content": "source tests/support/aofmanifest.tcl\nset defaults { appendonly {yes} appendfilename {appendonly.aof} appenddirname {appendonlydir} aof-use-rdb-preamble {no} }\nset server_path [tmpdir server.aof]\n\ntags {\"aof external:skip\"} {\n    # Specific test for a regression where internal buffers were not properly\n    # cleaned after a child responsible for an AOF rewrite exited. This buffer\n    # was subsequently appended to the new AOF, resulting in duplicate commands.\n    start_server_aof [list dir $server_path] {\n        set client [redis [srv host] [srv port] 0 $::tls]\n        set bench [open \"|src/redis-benchmark -q -s [srv unixsocket] -c 20 -n 20000 incr foo\" \"r+\"]\n\n        wait_for_condition 100 1 {\n            [$client get foo] > 0\n        } else {\n            # Don't care if it fails.\n        }\n\n        # Benchmark should be running by now: start background rewrite\n        $client bgrewriteaof\n\n        # Read until benchmark pipe reaches EOF\n        while {[string length [read $bench]] > 0} {}\n\n        waitForBgrewriteaof $client\n\n        # Check contents of foo\n        assert_equal 20000 [$client get foo]\n    }\n\n    # Restart server to replay AOF\n    start_server_aof [list dir $server_path] {\n        set client [redis [srv host] [srv port] 0 $::tls]\n        wait_done_loading $client\n        assert_equal 20000 [$client get foo]\n    }\n}\n"
  },
  {
    "path": "tests/integration/aof.tcl",
    "content": "source tests/support/aofmanifest.tcl\nset defaults { appendonly {yes} appendfilename {appendonly.aof} appenddirname {appendonlydir} auto-aof-rewrite-percentage {0}}\nset server_path [tmpdir server.aof]\nset aof_dirname \"appendonlydir\"\nset aof_basename \"appendonly.aof\"\nset aof_dirpath \"$server_path/$aof_dirname\"\nset aof_base_file \"$server_path/$aof_dirname/${aof_basename}.1$::base_aof_sufix$::aof_format_suffix\"\nset aof_file \"$server_path/$aof_dirname/${aof_basename}.1$::incr_aof_sufix$::aof_format_suffix\"\nset aof_manifest_file \"$server_path/$aof_dirname/$aof_basename$::manifest_suffix\"\n\ntags {\"aof external:skip\"} {\n    # Server can start when aof-load-truncated is set to yes and AOF\n    # is truncated, with an incomplete MULTI block.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof [formatCommand multi]\n        append_to_aof [formatCommand set bar world]\n    }\n\n    create_aof_manifest $aof_dirpath $aof_manifest_file {\n        append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated yes] {\n        test \"Unfinished MULTI: Server should start if load-truncated is yes\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n    }\n\n    ## Should also start with truncated AOF without incomplete MULTI block.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [string range [formatCommand incr foo] 0 end-1]\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated yes] {\n        test \"Short read: Server should start if load-truncated is yes\" {\n            assert_equal 1 [is_alive [srv pid]]\n        
}\n\n        test \"Truncated AOF loaded: we expect foo to be equal to 5\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get foo] eq \"5\"}\n        }\n\n        test \"Append a new command after loading an incomplete AOF\" {\n            $client incr foo\n        }\n    }\n\n    # Now the AOF file is expected to be correct\n    start_server_aof [list dir $server_path aof-load-truncated yes] {\n        test \"Short read + command: Server should start\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"Truncated AOF loaded: we expect foo to be equal to 6 now\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get foo] eq \"6\"}\n        }\n    }\n\n    ## Test that the server exits when the AOF contains a format error\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof \"!!!\"\n        append_to_aof [formatCommand set foo hello]\n    }\n\n    start_server_aof_ex [list dir $server_path aof-load-truncated yes] [list wait_ready false] {\n        test \"Bad format: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Bad file format reading the append only file*\"} 0 10 1000\n        }\n    }\n\n    ## Test the server doesn't start when the AOF contains an unfinished MULTI\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof [formatCommand multi]\n        append_to_aof [formatCommand set bar world]\n    }\n\n    start_server_aof_ex [list dir $server_path aof-load-truncated no] [list wait_ready false] {\n        test \"Unfinished MULTI: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Unexpected end of file reading the append only file*\"} 0 10 1000\n        }\n    }\n\n    ## Test that 
the server exits when the AOF contains a short read\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof [string range [formatCommand set bar world] 0 end-1]\n    }\n\n    start_server_aof_ex [list dir $server_path aof-load-truncated no] [list wait_ready false] {\n        test \"Short read: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Unexpected end of file reading the append only file*\"} 0 10 1000\n        }\n    }\n\n    ## Test that redis-check-aof indeed sees this AOF is not valid\n    test \"Short read: Utility should confirm the AOF is not valid\" {\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*not valid*\" $result\n    }\n\n    test \"Short read: Utility should show the abnormal line num in AOF\" {\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof \"!!!\"\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*ok_up_to_line=8*\" $result\n    }\n\n    test \"Short read: Utility should be able to fix the AOF\" {\n        set result [exec src/redis-check-aof --fix $aof_manifest_file << \"y\\n\"]\n        assert_match \"*Successfully truncated AOF*\" $result\n    }\n\n    ## Test that the server can be started using the truncated AOF\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"Fixed AOF: Server should have been started\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"Fixed AOF: Keyspace should contain values that were parseable\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert_equal \"hello\" [$client get foo]\n            assert_equal \"\" [$client get bar]\n        }\n    }\n\n    ## Test that 
SPOP (that modifies the client's argc/argv) is correctly free'd\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand sadd set foo]\n        append_to_aof [formatCommand sadd set bar]\n        append_to_aof [formatCommand spop set]\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+SPOP: Server should have been started\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"AOF+SPOP: Set should have 1 member\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert_equal 1 [$client scard set]\n        }\n    }\n\n    ## Uses the alsoPropagate() API.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand sadd set foo]\n        append_to_aof [formatCommand sadd set bar]\n        append_to_aof [formatCommand sadd set gah]\n        append_to_aof [formatCommand spop set 2]\n    }\n\n    start_server_aof [list dir $server_path] {\n        test \"AOF+SPOP: Server should have been started\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"AOF+SPOP: Set should have 1 member\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert_equal 1 [$client scard set]\n        }\n    }\n\n    ## Test that PEXPIREAT is loaded correctly\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand rpush list foo]\n        append_to_aof [formatCommand pexpireat list 1000]\n        append_to_aof [formatCommand rpush list bar]\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+EXPIRE: Server should have been started\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"AOF+EXPIRE: List should be empty\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n       
     assert_equal 0 [$client llen list]\n        }\n    }\n\n    start_server {overrides {appendonly {yes}}} {\n        test {Redis should not try to convert DEL into EXPIREAT for EXPIRE -1} {\n            r set x 10\n            r expire x -1\n        }\n    }\n\n    start_server {tags {\"tsan:skip\"} overrides {appendonly {yes} appendfsync always}} {\n        test {AOF fsync always barrier issue} {\n            set rd [redis_deferring_client]\n            # Set a sleep when aof is flushed, so that we have a chance to look\n            # at the aof size and detect if the response of an incr command\n            # arrives before the data was written (and hopefully fsynced)\n            # We create a big reply, which will hopefully not have room in the\n            # socket buffers, and will install a write handler, then we sleep\n            # a bit and issue the incr command, hoping that the write of the last\n            # portion of the output buffer and the processing of the incr happen\n            # in the same event loop cycle.\n            # Since the socket buffers and timing are unpredictable, we fuzz this\n            # test with slightly different sizes and sleeps a few times.\n            for {set i 0} {$i < 10} {incr i} {\n                r debug aof-flush-sleep 0\n                r del x\n                r setrange x [expr {int(rand()*5000000)+10000000}] x\n                r debug aof-flush-sleep 500000\n                set aof [get_last_incr_aof_path r]\n                set size1 [file size $aof]\n                $rd get x\n                after [expr {int(rand()*30)}]\n                $rd incr new_value\n                $rd read\n                $rd read\n                set size2 [file size $aof]\n                assert {$size1 != $size2}\n            }\n        }\n    }\n\n    start_server {overrides {appendonly {yes} appendfsync always}} {\n        test {GETEX should not append to AOF} {\n            set aof [get_last_incr_aof_path r]\n           
 r set foo bar\n            set before [file size $aof]\n            r getex foo\n            set after [file size $aof]\n            assert_equal $before $after\n        }\n    }\n\n    ## Test that the server exits when the AOF contains an unknown command\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof [formatCommand bla foo hello]\n        append_to_aof [formatCommand set foo hello]\n    }\n\n    start_server_aof_ex [list dir $server_path aof-load-truncated yes] [list wait_ready false] {\n        test \"Unknown command: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Unknown command 'bla' reading the append only file*\"} 0 10 1000\n        }\n    }\n\n    # Test that LMPOP/BLMPOP work fine with AOF.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand lpush mylist a b c]\n        append_to_aof [formatCommand rpush mylist2 1 2 3]\n        append_to_aof [formatCommand lpush mylist3 a b c d e]\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+LMPOP/BLMPOP: pop elements from the list\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            set client2 [redis [srv host] [srv port] 1 $::tls]\n            wait_done_loading $client\n\n            # Pop all elements from mylist; blmpop should delete mylist.\n            $client lmpop 1 mylist left count 1\n            $client blmpop 0 1 mylist left count 10\n\n            # Pop all elements from mylist2; lmpop should delete mylist2.\n            $client blmpop 0 2 mylist mylist2 right count 10\n            $client lmpop 2 mylist mylist2 right count 2\n\n            # Blocking path: get blocked and then released.\n            $client2 blmpop 0 2 mylist mylist2 left count 2\n            after 100\n            $client lpush mylist2 a b c\n\n            # Pop the last element in mylist2\n            $client blmpop 0 3 
mylist mylist2 mylist3 left count 1\n\n            # Leave two elements in mylist3.\n            $client blmpop 0 3 mylist mylist2 mylist3 right count 3\n        }\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+LMPOP/BLMPOP: after pop elements from the list\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            # mylist and mylist2 no longer exist.\n            assert_equal 0 [$client exists mylist mylist2]\n\n            # Length of mylist3 is two.\n            assert_equal 2 [$client llen mylist3]\n        }\n    }\n\n    # Test that ZMPOP/BZMPOP work fine with AOF.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand zadd myzset 1 one 2 two 3 three]\n        append_to_aof [formatCommand zadd myzset2 4 four 5 five 6 six]\n        append_to_aof [formatCommand zadd myzset3 1 one 2 two 3 three 4 four 5 five]\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+ZMPOP/BZMPOP: pop elements from the zset\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            set client2 [redis [srv host] [srv port] 1 $::tls]\n            wait_done_loading $client\n\n            # Pop all elements from myzset; bzmpop should delete myzset.\n            $client zmpop 1 myzset min count 1\n            $client bzmpop 0 1 myzset min count 10\n\n            # Pop all elements from myzset2; zmpop should delete myzset2.\n            $client bzmpop 0 2 myzset myzset2 max count 10\n            $client zmpop 2 myzset myzset2 max count 2\n\n            # Blocking path: get blocked and then released.\n            $client2 bzmpop 0 2 myzset myzset2 min count 2\n            after 100\n            $client zadd myzset2 1 one 2 two 3 three\n\n            # Pop the last element in myzset2\n            $client bzmpop 0 3 myzset myzset2 myzset3 min count 1\n\n            # Leave two 
elements in myzset3.\n            $client bzmpop 0 3 myzset myzset2 myzset3 max count 3\n        }\n    }\n\n    start_server_aof [list dir $server_path aof-load-truncated no] {\n        test \"AOF+ZMPOP/BZMPOP: after pop elements from the zset\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n\n            # myzset and myzset2 no longer exist.\n            assert_equal 0 [$client exists myzset myzset2]\n\n            # Length of myzset3 is two.\n            assert_equal 2 [$client zcard myzset3]\n        }\n    }\n\n    test {Generate timestamp annotations in AOF} {\n        start_server {overrides {appendonly {yes}}} {\n            r config set aof-timestamp-enabled yes\n            r config set aof-use-rdb-preamble no\n            set aof [get_last_incr_aof_path r]\n\n            r set foo bar\n            assert_match \"#TS:*\" [exec head -n 1 $aof]\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            set aof [get_base_aof_path r]\n            assert_match \"#TS:*\" [exec head -n 1 $aof]\n        }\n    }\n\n    test {skip EXEC ACL check during AOF load} {\n        set user_acl \"default on nopass ~* &* +@read -@write +multi +exec +select +ping\"\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set beforetx beforetx]\n            append_to_aof [formatCommand multi]\n            append_to_aof [formatCommand set tx1 tx1]\n            append_to_aof [formatCommand set tx2 tx2]\n            append_to_aof [formatCommand exec]\n            append_to_aof [formatCommand set aftertx aftertx]\n        }\n\n        start_server_aof [list dir $server_path user $user_acl] {\n            set c [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $c\n            assert_equal 
{beforetx} [$c get beforetx]\n            assert_equal {aftertx}  [$c get aftertx]\n            assert_equal {tx1} [$c get tx1]\n            assert_equal {tx2} [$c get tx2]\n\n            catch {$c set newkey value} e\n            assert_match {*NOPERM*set*} $e\n        }\n    }\n\n    # redis could load AOF which has timestamp annotations inside\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof \"#TS:1628217470\\r\\n\"\n        append_to_aof [formatCommand set foo1 bar1]\n        append_to_aof \"#TS:1628217471\\r\\n\"\n        append_to_aof [formatCommand set foo2 bar2]\n        append_to_aof \"#TS:1628217472\\r\\n\"\n        append_to_aof \"#TS:1628217473\\r\\n\"\n        append_to_aof [formatCommand set foo3 bar3]\n        append_to_aof \"#TS:1628217474\\r\\n\"\n    }\n    start_server_aof [list dir $server_path] {\n        test {Successfully load AOF which has timestamp annotations inside} {\n            set c [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $c\n            assert_equal \"bar1\" [$c get foo1]\n            assert_equal \"bar2\" [$c get foo2]\n            assert_equal \"bar3\" [$c get foo3]\n        }\n    }\n\n    test {Truncate AOF to specific timestamp} {\n        # truncate to timestamp 1628217473\n        exec src/redis-check-aof --truncate-to-timestamp 1628217473 $aof_manifest_file\n        start_server_aof [list dir $server_path] {\n            set c [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $c\n            assert_equal \"bar1\" [$c get foo1]\n            assert_equal \"bar2\" [$c get foo2]\n            assert_equal \"bar3\" [$c get foo3]\n        }\n\n        # truncate to timestamp 1628217471\n        exec src/redis-check-aof --truncate-to-timestamp 1628217471 $aof_manifest_file\n        start_server_aof [list dir $server_path] {\n            set c [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $c\n            assert_equal \"bar1\" [$c get foo1]\n       
     assert_equal \"bar2\" [$c get foo2]\n            assert_equal \"\" [$c get foo3]\n        }\n\n        # truncate to timestamp 1628217470\n        exec src/redis-check-aof --truncate-to-timestamp 1628217470 $aof_manifest_file\n        start_server_aof [list dir $server_path] {\n            set c [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $c\n            assert_equal \"bar1\" [$c get foo1]\n            assert_equal \"\" [$c get foo2]\n        }\n\n        # truncate to timestamp 1628217469\n        catch {exec src/redis-check-aof --truncate-to-timestamp 1628217469 $aof_manifest_file} e\n        assert_match {*aborting*} $e\n    }\n\n    test {EVAL timeout with slow verbatim Lua script from AOF} {\n        start_server [list overrides [list dir $server_path appendonly yes lua-time-limit 1 aof-use-rdb-preamble no]] {\n            # generate a long running script that is propagated to the AOF as script\n            # make sure that the script times out during loading\n            create_aof $aof_dirpath $aof_file {\n                append_to_aof [formatCommand select 9]\n                append_to_aof [formatCommand eval {redis.call('set',KEYS[1],'y'); for i=1,1500000 do redis.call('ping') end return 'ok'} 1 x]\n            }\n            set rd [redis_deferring_client]\n            $rd debug loadaof\n            $rd flush\n            wait_for_condition 100 10 {\n                [catch {r ping} e] == 1\n            } else {\n                fail \"server didn't start loading\"\n            }\n            assert_error {LOADING*} {r ping}\n            $rd read\n            $rd close\n            wait_for_log_messages 0 {\"*Slow script detected*\"} 0 100 100\n            assert_equal [r get x] y\n        }\n    }\n\n    test {EVAL can process writes from AOF in read-only replicas} {\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n   
     create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand select 9]\n            append_to_aof [formatCommand eval {redis.call(\"set\",KEYS[1],\"100\")} 1 foo]\n            append_to_aof [formatCommand eval {redis.call(\"incr\",KEYS[1])} 1 foo]\n            append_to_aof [formatCommand eval {redis.call(\"incr\",KEYS[1])} 1 foo]\n        }\n        start_server [list overrides [list dir $server_path appendonly yes replica-read-only yes replicaof \"127.0.0.1 0\"]] {\n            assert_equal [r get foo] 102\n        }\n    }\n\n    test {Test redis-check-aof for old style resp AOF} {\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand set bar world]\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_file\n        } result\n        assert_match \"*Start checking Old-Style AOF*is valid*\" $result\n    }\n\n    test {Test redis-check-aof for old style resp AOF - has data in the same format as manifest} {\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set file file]\n            append_to_aof [formatCommand set \"file appendonly.aof.2.base.rdb seq 2 type b\" \"file appendonly.aof.2.base.rdb seq 2 type b\"]\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_file\n        } result\n        assert_match \"*Start checking Old-Style AOF*is valid*\" $result\n    }\n\n    test {Test redis-check-aof for old style rdb-preamble AOF} {\n        catch {\n            exec src/redis-check-aof tests/assets/rdb-preamble.aof\n        } result\n        assert_match \"*Start checking Old-Style AOF*RDB preamble is OK, proceeding with AOF tail*is valid*\" $result\n    }\n\n    test {Test redis-check-aof for Multi Part AOF with resp AOF base} {\n        create_aof $aof_dirpath $aof_base_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand set 
bar world]\n        }\n\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand set bar world]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*Start checking Multi Part AOF*Start to check BASE AOF (RESP format)*BASE AOF*is valid*Start to check INCR files*INCR AOF*is valid*All AOF files and manifest are valid*\" $result\n    }\n\n    test {Test redis-check-aof for Multi Part AOF with rdb-preamble AOF base} {\n        exec cp tests/assets/rdb-preamble.aof $aof_base_file\n\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand set bar world]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*Start checking Multi Part AOF*Start to check BASE AOF (RDB format)*DB preamble is OK, proceeding with AOF tail*BASE AOF*is valid*Start to check INCR files*INCR AOF*is valid*All AOF files and manifest are valid*\" $result\n    }\n\n    test {Test redis-check-aof for Multi Part AOF contains a format error} {\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n            append_to_manifest \"!!!\\n\"\n 
       }\n\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*Invalid AOF manifest file format*\" $result\n    }\n\n    test {Test redis-check-aof only truncates the last file for Multi Part AOF in fix mode} {\n        create_aof $aof_dirpath $aof_base_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand multi]\n            append_to_aof [formatCommand set bar world]\n        }\n\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof [formatCommand set bar world]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        catch {\n            exec src/redis-check-aof $aof_manifest_file\n        } result\n        assert_match \"*not valid*\" $result\n\n        catch {\n            exec src/redis-check-aof --fix $aof_manifest_file\n        } result\n        assert_match \"*Failed to truncate AOF*because it is not the last file*\" $result\n    }\n\n    test {Test redis-check-aof only truncates the last file for Multi Part AOF in truncate-to-timestamp mode} {\n        create_aof $aof_dirpath $aof_base_file {\n            append_to_aof \"#TS:1628217470\\r\\n\"\n            append_to_aof [formatCommand set foo1 bar1]\n            append_to_aof \"#TS:1628217471\\r\\n\"\n            append_to_aof [formatCommand set foo2 bar2]\n            append_to_aof \"#TS:1628217472\\r\\n\"\n            append_to_aof \"#TS:1628217473\\r\\n\"\n            append_to_aof [formatCommand set foo3 bar3]\n            append_to_aof \"#TS:1628217474\\r\\n\"\n        }\n\n        create_aof $aof_dirpath $aof_file {\n            append_to_aof [formatCommand set foo hello]\n            append_to_aof 
[formatCommand set bar world]\n        }\n\n        create_aof_manifest $aof_dirpath $aof_manifest_file {\n            append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n            append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        }\n\n        catch {\n            exec src/redis-check-aof --truncate-to-timestamp 1628217473 $aof_manifest_file\n        } result\n        assert_match \"*Failed to truncate AOF*to timestamp*because it is not the last file*\" $result\n    }\n\n    start_server {overrides {appendonly yes appendfsync always}} {\n        test {FLUSHDB / FLUSHALL should persist in AOF} {\n            set aof [get_last_incr_aof_path r]\n\n            r set key value\n            r flushdb\n            r set key value2\n            r flushdb\n\n            # DB is empty\n            r flushdb\n            r flushdb\n            r flushdb\n\n            r set key value\n            r flushall\n            r set key value2\n            r flushall\n\n            # DBs are empty.\n            r flushall\n            r flushall\n            r flushall\n\n            # Assert that each FLUSHDB command is persisted even if the DB is empty.\n            # Assert that each FLUSHALL command is persisted even if the DBs are empty.\n            assert_aof_content $aof {\n                {select *}\n                {set key value}\n                {flushdb}\n                {set key value2}\n                {flushdb}\n                {flushdb}\n                {flushdb}\n                {flushdb}\n                {set key value}\n                {flushall}\n                {set key value2}\n                {flushall}\n                {flushall}\n                {flushall}\n                {flushall}\n            }\n        }\n    }\n\n    start_server {overrides {loading-process-events-interval-bytes 1024}} {\n        test \"Allow changing appendonly config while loading from AOF on startup\" {\n            # Set AOF enabled, 
populate db and restart.\n            r config set appendonly yes\n            r config set key-load-delay 100\n            r config rewrite\n            populate 10000\n            restart_server 0 false false\n\n            # Disable AOF while loading from the disk.\n            assert_equal 1 [s loading]\n            r config set appendonly no\n            assert_equal 1 [s loading]\n\n            # Speed up loading, verify AOF disabled.\n            r config set key-load-delay 0\n            wait_done_loading r\n            assert_equal {10000} [r dbsize]\n            assert_equal 0 [s aof_enabled]\n        }\n\n        test \"Allow changing appendonly config while loading from RDB on startup\" {\n            # Set AOF disabled, populate db and restart.\n            r flushall\n            r config set appendonly no\n            r config set key-load-delay 100\n            r config rewrite\n            populate 10000\n            r save\n            restart_server 0 false false\n\n            # Enable AOF while loading from the disk.\n            assert_equal 1 [s loading]\n            r config set appendonly yes\n            assert_equal 1 [s loading]\n\n            # Speed up loading, verify AOF enabled, do a quick sanity check.\n            r config set key-load-delay 0\n            wait_done_loading r\n            assert_equal {10000} [r dbsize]\n            assert_equal 1 [s aof_enabled]\n            r set t 1\n            assert_equal {1} [r get t]\n        }\n    }\n\n    # Check broken AOF load behavior\n    # Corrupted base AOF, existing AOF files\n    create_aof $aof_dirpath $aof_base_file {\n        append_to_aof [formatCommand set param ok]\n        append_to_aof \"corruption\"\n    }\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n    }\n    start_server_aof_ex [list dir $server_path aof-load-corrupt-tail-max-size 4096] [list wait_ready false] {\n        test \"Corrupted base AOF should be recovered\" {\n    
        wait_for_log_messages 0 {\n                {*AOF*loaded anyway because aof-load-corrupt-tail-max-size is enabled*}\n            } 0 10 1000\n        }\n    }\n\n    # Remove all incr AOF files to make the base file the last file\n    exec rm -f $aof_dirpath/appendonly.aof.*\n    start_server_aof [list dir $server_path aof-load-corrupt-tail-max-size 4096] {\n        test \"Corrupted base AOF (last file): should recover\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"param should be 'ok'\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get param] eq \"ok\"}\n        }\n    }\n\n    # Should also start with broken incr AOF.\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo 1]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof [formatCommand incr foo]\n        append_to_aof \"corruption\"\n    }\n\n    start_server_aof [list dir $server_path aof-load-corrupt-tail-max-size 4096] {\n        test \"Short read: Server should start if aof-load-broken is yes\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        # The AOF file is expected to be correct because aof-load-corrupt-tail-max-size is set to 4096,\n        # so the AOF will reload without the corruption\n        test \"Broken AOF loaded: we expect foo to be equal to 5\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get foo] eq \"5\"}\n        }\n\n        test \"Append a new command after loading an incomplete AOF\" {\n            $client incr foo\n        }\n    }\n\n    start_server_aof [list dir $server_path aof-load-corrupt-tail-max-size 4096] {\n        test \"Short read + command: Server should start\" {\n            
assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"Broken AOF loaded: we expect foo to be equal to 6 now\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get foo] eq \"6\"}\n        }\n    }\n\n    # Test that the server exits when the AOF contains a format error\n    create_aof $aof_dirpath $aof_file {\n        append_to_aof [formatCommand set foo hello]\n        append_to_aof [string range [formatCommand incr foo] 0 end-3]\n        append_to_aof \"corruption\"\n    }\n\n    # We set the maximum allowed corrupted size to 2 bytes, but the actual corrupted portion is larger,\n    # so the AOF file will not be reloaded.\n    start_server_aof_ex [list dir $server_path aof-load-corrupt-tail-max-size 2] [list wait_ready false] {\n        test \"Bad format: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Bad file format reading the append only file*aof-load-corrupt-tail-max-size*\"} 0 10 1000\n        }\n    }\n\n    create_aof_manifest $aof_dirpath $aof_manifest_file {\n        append_to_manifest \"file appendonly.aof.1.base.aof seq 1 type b\\n\"\n        append_to_manifest \"file appendonly.aof.1.incr.aof seq 1 type i\\n\"\n        append_to_manifest \"file appendonly.aof.2.incr.aof seq 2 type i\\n\"\n    }\n    # Create base AOF file\n    set base_aof_file \"$aof_dirpath/appendonly.aof.1.base.aof\"\n    create_aof $aof_dirpath $base_aof_file {\n        append_to_aof [formatCommand set fo base]\n    }\n\n    # Create middle incr AOF file with corruption\n    set mid_aof_file \"$aof_dirpath/appendonly.aof.1.incr.aof\"\n    create_aof $aof_dirpath $mid_aof_file {\n        append_to_aof [formatCommand set fo mid]\n        append_to_aof \"CORRUPTION\"\n    }\n\n    # Create last incr AOF file (valid)\n    set last_aof_file \"$aof_dirpath/appendonly.aof.2.incr.aof\"\n    create_aof $aof_dirpath $last_aof_file {\n        append_to_aof 
[formatCommand set fo last]\n    }\n\n    # Check that Redis recovers even though the corruption is in the middle file\n    start_server_aof_ex [list dir $server_path aof-load-corrupt-tail-max-size 4096] [list wait_ready false] {\n        test \"Intermediate AOF is broken: should recover successfully\" {\n            wait_for_log_messages 0 {\n                {*AOF*loaded anyway because aof-load-corrupt-tail-max-size is enabled*}\n            } 0 10 1000\n        }\n    }\n\n    # Swap mid and last files\n    set tmp_file \"$aof_dirpath/temp.aof\"\n    file rename -force $mid_aof_file $tmp_file\n    file rename -force $last_aof_file $mid_aof_file\n    file rename -force $tmp_file $last_aof_file\n\n    # Should now start successfully since corruption is in last AOF file\n    start_server_aof [list dir $server_path aof-load-corrupt-tail-max-size 4096] {\n        test \"Corrupted last AOF file: Server should still start and recover\" {\n            assert_equal 1 [is_alive [srv pid]]\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get fo] eq \"mid\"}\n        }\n    }\n\n    # Test corrupt tail recovery with realistic corruption scenario\n    # Create corruption in the LAST file instead of middle file\n    set last_aof_file \"$aof_dirpath/appendonly.aof.2.incr.aof\"\n    create_aof $aof_dirpath $last_aof_file {\n        append_to_aof [formatCommand set foo 5]\n        append_to_aof \"!!!\"\n        append_to_aof [formatCommand set foo 3]\n    }\n\n    start_server_aof_ex [list dir $server_path aof-load-truncated yes] [list wait_ready false] {\n        test \"Bad format: Server should have logged an error\" {\n            wait_for_log_messages 0 {\"*Bad file format reading the append only file*\"} 0 10 1000\n        }\n    }\n\n    start_server_aof [list dir $server_path aof-load-corrupt-tail-max-size 64] {\n        test \"Corrupt tail: Server should start if aof-load-corrupt-tail-max-size 
is set\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"Corrupt tail: Server should have logged warning\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            wait_for_log_messages 0 {\"*corrupt AOF file tail*\"} 0 10 1000\n        }\n\n        test \"Corrupt tail: we expect foo to be equal to 5\" {\n            assert {[$client get foo] eq \"5\"}\n        }\n\n        test \"Append a new command after loading an incomplete AOF\" {\n            $client incr foo\n        }\n    }\n\n    # Now the AOF file is expected to be correct\n    start_server_aof [list dir $server_path] {\n        test \"Corrupt tail + command: Server should start\" {\n            assert_equal 1 [is_alive [srv pid]]\n        }\n\n        test \"Corrupt tail: we expect foo to be equal to 6 now\" {\n            set client [redis [srv host] [srv port] 0 $::tls]\n            wait_done_loading $client\n            assert {[$client get foo] eq \"6\"}\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/block-repl.tcl",
    "content": "# Test replication of blocking lists and zset operations.\n# Unlike stream operations such operations are \"pop\" style, so they consume\n# the list or sorted set, and must be replicated correctly.\n\nproc start_bg_block_op {host port db ops tls} {\n    set tclsh [info nameofexecutable]\n    exec $tclsh tests/helpers/bg_block_op.tcl $host $port $db $ops $tls &\n}\n\nproc stop_bg_block_op {handle} {\n    catch {exec /bin/kill -9 $handle}\n}\n\nstart_server {tags {\"repl\" \"external:skip\"}} {\n    start_server {overrides {save {}}} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        set load_handle0 [start_bg_block_op $master_host $master_port 9 100000 $::tls]\n        set load_handle1 [start_bg_block_op $master_host $master_port 9 100000 $::tls]\n        set load_handle2 [start_bg_block_op $master_host $master_port 9 100000 $::tls]\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            after 1000\n            s 0 role\n        } {slave}\n\n        test {Test replication with blocking lists and sorted sets operations} {\n            after 25000\n            stop_bg_block_op $load_handle0\n            stop_bg_block_op $load_handle1\n            stop_bg_block_op $load_handle2\n            wait_for_condition 100 100 {\n                [$master debug digest] == [$slave debug digest]\n            } else {\n                set csv1 [csvdump r]\n                set csv2 [csvdump {r -1}]\n                set fd [open /tmp/repldump1.txt w]\n                puts -nonewline $fd $csv1\n                close $fd\n                set fd [open /tmp/repldump2.txt w]\n                puts -nonewline $fd $csv2\n                close $fd\n                fail \"Master - Replica inconsistency, Run diff -u against /tmp/repldump*.txt for more info\"\n            }\n        }\n    
}\n}\n"
  },
  {
    "path": "tests/integration/convert-ziplist-hash-on-load.tcl",
    "content": "tags {\"external:skip\"} {\n\n# Copy RDB with ziplist encoded hash to server path\nset server_path [tmpdir \"server.convert-ziplist-hash-on-load\"]\n\nexec cp -f tests/assets/hash-ziplist.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"hash-ziplist.rdb\"]] {\n    test \"RDB load ziplist hash: converts to listpack when RDB loading\" {\n        r select 0\n\n        assert_encoding listpack hash\n        assert_equal 2 [r hlen hash]\n        assert_match {v1 v2} [r hmget hash f1 f2]\n    }\n}\n\nexec cp -f tests/assets/hash-ziplist.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"hash-ziplist.rdb\" \"hash-max-ziplist-entries\" 1]] {\n    test \"RDB load ziplist hash: converts to hash table when hash-max-ziplist-entries is exceeded\" {\n        r select 0\n\n        assert_encoding hashtable hash\n        assert_equal 2 [r hlen hash]\n        assert_match {v1 v2} [r hmget hash f1 f2]\n    }\n}\n\n}\n"
  },
  {
    "path": "tests/integration/convert-ziplist-zset-on-load.tcl",
    "content": "tags {\"external:skip\"} {\n\n# Copy RDB with ziplist encoded hash to server path\nset server_path [tmpdir \"server.convert-ziplist-hash-on-load\"]\n\nexec cp -f tests/assets/zset-ziplist.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"zset-ziplist.rdb\"]] {\n    test \"RDB load ziplist zset: converts to listpack when RDB loading\" {\n        r select 0\n\n        assert_encoding listpack zset\n        assert_equal 2 [r zcard zset]\n        assert_match {one 1 two 2} [r zrange zset 0 -1 withscores]\n    }\n}\n\nexec cp -f tests/assets/zset-ziplist.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"zset-ziplist.rdb\" \"zset-max-ziplist-entries\" 1]] {\n    test \"RDB load ziplist zset: converts to skiplist when zset-max-ziplist-entries is exceeded\" {\n        r select 0\n\n        assert_encoding skiplist zset\n        assert_equal 2 [r zcard zset]\n        assert_match {one 1 two 2} [r zrange zset 0 -1 withscores]\n    }\n}\n\n}\n"
  },
  {
    "path": "tests/integration/convert-zipmap-hash-on-load.tcl",
    "content": "tags {\"external:skip\"} {\n\n# Copy RDB with zipmap encoded hash to server path\nset server_path [tmpdir \"server.convert-zipmap-hash-on-load\"]\n\nexec cp -f tests/assets/hash-zipmap.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"hash-zipmap.rdb\"]] {\n  test \"RDB load zipmap hash: converts to listpack\" {\n    r select 0\n\n    assert_match \"*listpack*\" [r debug object hash]\n    assert_equal 2 [r hlen hash]\n    assert_match {v1 v2} [r hmget hash f1 f2]\n  }\n}\n\nexec cp -f tests/assets/hash-zipmap.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"hash-zipmap.rdb\" \"hash-max-ziplist-entries\" 1]] {\n  test \"RDB load zipmap hash: converts to hash table when hash-max-ziplist-entries is exceeded\" {\n    r select 0\n\n    assert_match \"*hashtable*\" [r debug object hash]\n    assert_equal 2 [r hlen hash]\n    assert_match {v1 v2} [r hmget hash f1 f2]\n  }\n}\n\nexec cp -f tests/assets/hash-zipmap.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"hash-zipmap.rdb\" \"hash-max-ziplist-value\" 1]] {\n  test \"RDB load zipmap hash: converts to hash table when hash-max-ziplist-value is exceeded\" {\n    r select 0\n\n    assert_match \"*hashtable*\" [r debug object hash]\n    assert_equal 2 [r hlen hash]\n    assert_match {v1 v2} [r hmget hash f1 f2]\n  }\n}\n\n}\n"
  },
  {
    "path": "tests/integration/corrupt-dump-fuzzer.tcl",
    "content": "# tests of corrupt listpack payload with valid CRC\n\n# The fuzzer can cause corrupt the state in many places, which could\n# mess up the reply, so we decided to skip logreqres.\ntags {\"dump\" \"corruption\" \"external:skip\" \"logreqres:skip\"} {\n\n# catch sigterm so that in case one of the random command hangs the test,\n# usually due to redis not putting a response in the output buffers,\n# we'll know which command it was\nif { ! [ catch {\n    package require Tclx\n} err ] } {\n    signal error SIGTERM\n}\n\nproc generate_collections {suffix elements} {\n    set rd [redis_deferring_client]\n    set numcmd 7\n    set has_vsets [server_has_command vadd]\n    if {$has_vsets} {incr numcmd}\n\n    for {set j 0} {$j < $elements} {incr j} {\n        # add both string values and integers\n        if {$j % 2 == 0} {set val $j} else {set val \"_$j\"}\n        $rd hset hash$suffix $j $val\n        $rd hset hashmd$suffix $j $val\n        $rd hexpire hashmd$suffix [expr {int(rand() * 10000)}] FIELDS 1 $j\n        $rd lpush list$suffix $val\n        $rd zadd zset$suffix $j $val\n        $rd sadd set$suffix $val\n        $rd xadd stream$suffix * item 1 value $val\n        if {$has_vsets} {\n            $rd vadd vset$suffix VALUES 3 1 1 1 $j\n        }\n    }\n    for {set j 0} {$j < $elements * $numcmd} {incr j} {\n        $rd read ; # Discard replies\n    }\n    $rd close\n}\n\n# generate keys with various types and encodings\nproc generate_types {} {\n    r config set list-max-ziplist-size 5\n    r config set hash-max-ziplist-entries 5\n    r config set set-max-listpack-entries 5\n    r config set zset-max-ziplist-entries 5\n    r config set stream-node-max-entries 5\n\n    # create small (ziplist / listpack encoded) objects with 3 items\n    generate_collections \"\" 3\n\n    # add some metadata to the stream\n    r xgroup create stream mygroup 0\n    set records [r xreadgroup GROUP mygroup Alice COUNT 2 STREAMS stream >]\n    r xdel stream [lindex 
[lindex [lindex [lindex $records 0] 1] 1] 0]\n    r xack stream mygroup [lindex [lindex [lindex [lindex $records 0] 1] 0] 0]\n\n    # create other non-collection types\n    r incr int\n    r set string str\n    r gcra gcra 10 5 60000\n\n    # create bigger objects with 10 items (more than a single ziplist / listpack)\n    generate_collections big 10\n\n    # make sure our big stream also has a listpack record that has different\n    # field names than the master recorded\n    r xadd streambig * item 1 value 1\n    r xadd streambig * item 1 unique value\n}\n\nproc corrupt_payload {payload} {\n    set len [string length $payload]\n    set count 1 ;# usually corrupt only one byte\n    if {rand() > 0.9} { set count 2 }\n    while { $count > 0 } {\n        set idx [expr {int(rand() * $len)}]\n        set ch [binary format c [expr {int(rand()*255)}]]\n        set payload [string replace $payload $idx $idx $ch]\n        incr count -1\n    }\n    return $payload\n}\n\n# fuzzy tester for corrupt RESTORE payloads\n# valgrind will make sure there were no leaks in the rdb loader error handling code\nforeach sanitize_dump {no yes} {\n    if {$::accurate} {\n        set min_duration [expr {60 * 10}] ;# run at least 10 minutes\n        set min_cycles 1000 ;# run at least 1k cycles (max 16 minutes)\n    } else {\n        set min_duration 10 ; # run at least 10 seconds\n        set min_cycles 10 ; # run at least 10 cycles\n    }\n\n    # Don't execute this on FreeBSD due to a yet-undiscovered memory issue\n    # which causes tclsh to bloat.\n    if {[exec uname] == \"FreeBSD\"} {\n        set min_cycles 1\n        set min_duration 1\n    }\n\n    test \"Fuzzer corrupt restore payloads - sanitize_dump: $sanitize_dump\" {\n        if {$min_duration * 2 > $::timeout} {\n            fail \"insufficient timeout\"\n        }\n        # start a server, fill with data and save an RDB file once (avoid re-save)\n        start_server [list overrides [list \"save\" \"\" use-exit-on-panic yes 
crash-memcheck-enabled no loglevel verbose] ] {\n            set stdout [srv 0 stdout]\n            r config set sanitize-dump-payload $sanitize_dump\n            r debug set-skip-checksum-validation 1\n            set start_time [clock seconds]\n            generate_types\n            set dbsize [r dbsize]\n            r save\n            set cycle 0\n            set stat_terminated_in_restore 0\n            set stat_terminated_in_traffic 0\n            set stat_terminated_by_signal 0\n            set stat_successful_restore 0\n            set stat_rejected_restore 0\n            set stat_traffic_commands_sent 0\n            # repeatedly DUMP a random key, corrupt it and try RESTORE into a new key\n            while true {\n                set k [r randomkey]\n                set dump [r dump $k]\n                set dump [corrupt_payload $dump]\n                set printable_dump [string2printable $dump]\n                set restore_failed false\n                set report_and_restart false\n                set sent {}\n                set expired_subkeys [s expired_subkeys]\n                # RESTORE can fail, but hopefully not terminate\n                if { [catch { r restore \"_$k\" 0 $dump REPLACE } err] } {\n                    set restore_failed true\n                    # skip if RESTORE failed with an error response.\n                    if {[string match \"ERR*\" $err]} {\n                        incr stat_rejected_restore\n                    } else {\n                        set report_and_restart true\n                        incr stat_terminated_in_restore\n                        write_log_line 0 \"corrupt payload: $printable_dump\"\n                        if {$sanitize_dump == yes} {\n                            puts \"Server crashed in RESTORE with payload: $printable_dump\"\n                        }\n                    }\n                } else {\n                    # an attempt to check if the server didn't terminate (this will throw an 
error that will terminate the tests)\n                    if { [catch { r ping } err] } {\n                        set msg \"Server crashed after RESTORE with payload: $printable_dump\"\n                        write_log_line 0 $msg\n                        puts $msg\n                        error $err\n                    }\n                }\n\n                set print_commands false\n                if {!$restore_failed} {\n                    # if RESTORE didn't fail or terminate, run some random traffic on the new key\n                    incr stat_successful_restore\n                    if { [ catch {\n                        set type [r type \"_$k\"]\n                        if {$type eq {none}} {\n                            # The key has been removed due to expiration.\n                            # Ensure the server didn't terminate during expiration and verify\n                            # expire stats to confirm the key was removed due to expiration.\n                            r ping\n                            assert_morethan [s expired_subkeys] $expired_subkeys\n                        } else {\n                            set sent [generate_fuzzy_traffic_on_key \"_$k\" $type 1] ;# traffic for 1 second\n                        }\n\n                        incr stat_traffic_commands_sent [llength $sent]\n                        r del \"_$k\" ;# in case the server terminated, here's where we'll detect it.\n                        if {$dbsize != [r dbsize]} {\n                            puts \"unexpected keys\"\n                            puts \"keys: [r keys *]\"\n                            puts \"commands leading to it:\"\n                            foreach cmd $sent {\n                                foreach arg $cmd {\n                                    puts -nonewline \"[string2printable $arg] \"\n                                }\n                                puts \"\"\n                            }\n                            exit 
1\n                        }\n                    } err ] } {\n                        set err [format \"%s\" $err] ;# convert to string for pattern matching\n                        if {[string match \"*SIGTERM*\" $err]} {\n                            puts \"payload that caused test to hang: $printable_dump\"\n                            if {$::dump_logs} {\n                                set srv [get_srv 0]\n                                dump_server_log $srv\n                            }\n                            exit 1\n                        }\n                        # if the server terminated update stats and restart it\n                        set report_and_restart true\n                        incr stat_terminated_in_traffic\n                        set by_signal [count_log_message 0 \"crashed by signal\"]\n                        incr stat_terminated_by_signal $by_signal\n\n                        if {$by_signal != 0 || $sanitize_dump == yes} {\n                            if {$::dump_logs} {\n                                set srv [get_srv 0]\n                                dump_server_log $srv\n                            }\n\n                            puts \"Server crashed (by signal: $by_signal, err: $err), with payload: $printable_dump\"\n                            set print_commands true\n                        }\n                    }\n                }\n\n                # check valgrind report for invalid reads after each RESTORE\n                # payload so that we have a report that is easier to reproduce\n                set valgrind_errors [find_valgrind_errors [srv 0 stderr] false]\n                set asan_errors [sanitizer_errors_from_file [srv 0 stderr]]\n                if {$valgrind_errors != \"\" || $asan_errors != \"\"} {\n                    puts \"valgrind or asan found an issue for payload: $printable_dump\"\n                    set report_and_restart true\n                    set print_commands true\n                
}\n\n                if {$report_and_restart} {\n                    if {$print_commands} {\n                        puts \"violating commands:\"\n                        foreach cmd $sent {\n                            foreach arg $cmd {\n                                puts -nonewline \"[string2printable $arg] \"\n                            }\n                            puts \"\"\n                        }\n                    }\n\n                    # restart the server and re-apply debug configuration\n                    write_log_line 0 \"corrupt payload: $printable_dump\"\n                    restart_server 0 true true\n                    r config set sanitize-dump-payload $sanitize_dump\n                    r debug set-skip-checksum-validation 1\n                }\n\n                incr cycle\n                if { ([clock seconds]-$start_time) >= $min_duration && $cycle >= $min_cycles} {\n                    break\n                }\n            }\n            if {$::verbose} {\n                puts \"Done $cycle cycles in [expr {[clock seconds]-$start_time}] seconds.\"\n                puts \"RESTORE: successful: $stat_successful_restore, rejected: $stat_rejected_restore\"\n                puts \"Total commands sent in traffic: $stat_traffic_commands_sent, crashes during traffic: $stat_terminated_in_traffic ($stat_terminated_by_signal by signal).\"\n            }\n        }\n        # if we run sanitization we never expect the server to crash at runtime\n        if {$sanitize_dump == yes} {\n            assert_equal $stat_terminated_in_restore 0\n            assert_equal $stat_terminated_in_traffic 0\n        }\n        # make sure all terminations were due to an assertion and not a SIGSEGV\n        assert_equal $stat_terminated_by_signal 0\n    }\n}\n\n\n\n} ;# tags\n\n"
  },
  {
    "path": "tests/integration/corrupt-dump.tcl",
    "content": "# tests of corrupt ziplist payload with valid CRC\n# * setting crash-memcheck-enabled to no to avoid issues with valgrind\n# * setting use-exit-on-panic to yes so that valgrind can search for leaks\n# * setting debug set-skip-checksum-validation to 1 on some tests for which we\n#   didn't bother to fake a valid checksum\n# * some tests set sanitize-dump-payload to no and some to yes, depending on\n#   what we want to test\n\ntags {\"dump\" \"corruption\" \"external:skip\"} {\n\n# We only run OOM related tests on x86_64 and aarch64, as jemalloc on other\n# platforms (notably s390x) may actually satisfy very large allocations. As\n# a result the test may hang for a very long time at the cleanup phase,\n# iterating as many as 2^61 hash table slots.\n\nset arch_name [exec uname -m]\nset run_oom_tests [expr {($arch_name == \"x86_64\" || $arch_name == \"aarch64\") && !$::tsan}]\n\nset corrupt_payload_7445 \"\\x0E\\x01\\x1D\\x1D\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x03\\x00\\x00\\x04\\x43\\x43\\x43\\x43\\x06\\x04\\x42\\x42\\x42\\x42\\x06\\x3F\\x41\\x41\\x41\\x41\\xFF\\x09\\x00\\x88\\xA5\\xCA\\xA8\\xC5\\x41\\xF4\\x35\"\n\ntest {corrupt payload: #7445 - with sanitize} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 $corrupt_payload_7445\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: hash with valid zip list header, invalid entry len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        catch {\n            r restore key 0 \"\\x0D\\x1B\\x1B\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x04\\x00\\x00\\x02\\x61\\x00\\x04\\x02\\x62\\x00\\x04\\x14\\x63\\x00\\x04\\x02\\x64\\x00\\xFF\\x09\\x00\\xD9\\x10\\x54\\x92\\x15\\xF5\\x5F\\x52\"\n       
 } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: invalid zlbytes header} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        catch {\n            r restore key 0 \"\\x0D\\x1B\\x25\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x04\\x00\\x00\\x02\\x61\\x00\\x04\\x02\\x62\\x00\\x04\\x02\\x63\\x00\\x04\\x02\\x64\\x00\\xFF\\x09\\x00\\xB7\\xF7\\x6E\\x9F\\x43\\x43\\x14\\xC6\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: valid zipped hash header, dup records} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        catch {\n            r restore key 0 \"\\x0D\\x1B\\x1B\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x04\\x00\\x00\\x02\\x61\\x00\\x04\\x02\\x62\\x00\\x04\\x02\\x61\\x00\\x04\\x02\\x64\\x00\\xFF\\x09\\x00\\xA1\\x98\\x36\\x78\\xCC\\x8E\\x93\\x2E\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: hash listpackex with invalid string TTL} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x17\\x2d\\x2d\\x00\\x00\\x00\\x09\\x00\\x81\\x61\\x02\\x01\\x01\\xf4\\xa6\\x96\\x18\\xb8\\x8f\\x01\\x00\\x00\\x09\\x82\\x66\\x31\\x03\\x82\\x76\\x31\\x03\\x83\\x66\\x6f\\x6f\\x04\\x82\\x66\\x32\\x03\\x82\\x76\\x32\\x03\\x00\\x01\\xff\\x0c\\x00\\xde\\x40\\xe5\\x37\\x51\\x1c\\x12\\x56\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: hash listpackex with TTL larger than EB_EXPIRE_TIME_MAX} {\n    start_server [list 
overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x17\\x33\\x33\\x00\\x00\\x00\\x09\\x00\\x00\\x01\\x00\\x01\\xf4\\x01\\xc5\\x89\\x95\\x8f\\x01\\x00\\x00\\x09\\x01\\x01\\x82\\x5f\\x31\\x03\\xf4\\x29\\x94\\x97\\x95\\x8f\\x01\\x00\\x00\\x09\\x02\\x01\\x02\\x01\\xf4\\x01\\x5e\\xaf\\x95\\x8f\\x01\\x33\\x00\\x09\\xff\\x0c\\x00\\x7e\\x4f\\xf4\\x33\\xe9\\xc5\\x3e\\x56\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: hash listpackex with unordered TTL fields} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x17\\xc3\\x30\\x35\\x14\\x35\\x00\\x00\\x00\\t\\x00\\x82\\x66\\x32\\x03\\x82\\x76\\x32\\x03\\xf4\\x80\\x73\\x16\\xd1\\x8f\\x01\\x20\\x12\\x02\\x82\\x66\\x31\\x20\\x11\\x03\\x31\\x03\\xf4\\x7f\\xe0\\x01\\x11\\x00\\x33\\x20\\x11\\x04\\x33\\x03\\x00\\x01\\xff\\x0c\\x00\\xf6\\x70\\x29\\x57\\x11\\x68\\x9d\\xe5\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: hash listpackex field without TTL should not be followed by field with TTL} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x17\\x2d\\x2d\\x00\\x00\\x00\\x09\\x00\\x82\\x66\\x31\\x03\\x82\\x76\\x31\\x03\\x00\\x01\\x82\\x66\\x32\\x03\\x82\\x76\\x32\\x03\\xf4\\xe0\\x59\\x7a\\x96\\x00\\x00\\x00\\x00\\x09\\x82\\x66\\x33\\x03\\x82\\x76\\x33\\x03\\x00\\x01\\xff\\x0c\\x00\\x42\\x66\\xd4\\xbe\\x17\\xc3\\x96\\x72\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt 
payload: hash hashtable with TTL larger than EB_EXPIRE_TIME_MAX} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set hash-max-listpack-entries 0\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x16\\x02\\x81\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x66\\x31\\x02\\x76\\x31\\x81\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x66\\x32\\x02\\x76\\x32\\x0c\\x00\\xb9\\x3c\\x65\\x28\\x40\\x94\\x58\\x36\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: quicklist big ziplist prev len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch {r restore key 0 \"\\x0E\\x01\\x13\\x13\\x00\\x00\\x00\\x0E\\x00\\x00\\x00\\x02\\x00\\x00\\x02\\x61\\x00\\x0E\\x02\\x62\\x00\\xFF\\x09\\x00\\x49\\x97\\x30\\xB2\\x0D\\xA1\\xED\\xAA\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: quicklist small ziplist prev len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x0E\\x01\\x13\\x13\\x00\\x00\\x00\\x0E\\x00\\x00\\x00\\x02\\x00\\x00\\x02\\x61\\x00\\x02\\x02\\x62\\x00\\xFF\\x09\\x00\\xC7\\x71\\x03\\x97\\x07\\x75\\xB0\\x63\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: quicklist ziplist wrong count} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch {r restore key 0 
\"\\x0E\\x01\\x13\\x13\\x00\\x00\\x00\\x0E\\x00\\x00\\x00\\x03\\x00\\x00\\x02\\x61\\x00\\x04\\x02\\x62\\x00\\xFF\\x09\\x00\\x4D\\xE2\\x0A\\x2F\\x08\\x25\\xDF\\x91\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: #3080 - quicklist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch {\n            r RESTORE key 0 \"\\x0E\\x01\\x80\\x00\\x00\\x00\\x10\\x41\\x41\\x41\\x41\\x41\\x41\\x41\\x41\\x02\\x00\\x00\\x80\\x41\\x41\\x41\\x41\\x07\\x00\\x03\\xC7\\x1D\\xEF\\x54\\x68\\xCC\\xF3\"\n            r DUMP key ;# DUMP was used in the original issue, but now even with shallow sanitization restore safely fails, so this is dead code\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: quicklist with empty ziplist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\x0E\\x01\\x0B\\x0B\\x00\\x00\\x00\\x0A\\x00\\x00\\x00\\x00\\x00\\xFF\\x09\\x00\\xC2\\x69\\x37\\x83\\x3C\\x7F\\xFE\\x6F\" replace} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: quicklist encoded_len is 0} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        catch { r restore _list 0 \"\\x12\\x01\\x01\\x00\\x0a\\x00\\x8f\\xc6\\xc0\\x57\\x1c\\x0a\\xb3\\x3c\" replace } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: quicklist listpack entry start with EOF} {\n    start_server [list overrides [list loglevel verbose 
use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch { r restore _list 0 \"\\x12\\x01\\x02\\x0b\\x0b\\x00\\x00\\x00\\x01\\x00\\x81\\x61\\x02\\xff\\xff\\x0a\\x00\\x7e\\xd8\\xde\\x5b\\x0d\\xd7\\x70\\xb8\" replace } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: #3080 - ziplist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        # shallow sanitization is enough for restore to safely reject the payload with wrong size\n        r config set sanitize-dump-payload no\n        catch {\n            r RESTORE key 0 \"\\x0A\\x80\\x00\\x00\\x00\\x10\\x41\\x41\\x41\\x41\\x41\\x41\\x41\\x41\\x02\\x00\\x00\\x80\\x41\\x41\\x41\\x41\\x07\\x00\\x39\\x5B\\x49\\xE0\\xC1\\xC6\\xDD\\x76\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: load corrupted rdb with no CRC - #3505} {\n    set server_path [tmpdir \"server.rdb-corruption-test\"]\n    exec cp tests/assets/corrupt_ziplist.rdb $server_path\n    set srv [start_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"corrupt_ziplist.rdb\" loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no sanitize-dump-payload no]]]\n\n    # wait for termination\n    wait_for_condition 100 50 {\n        ! 
[is_alive [dict get $srv pid]]\n    } else {\n        fail \"rdb loading didn't fail\"\n    }\n\n    set stdout [dict get $srv stdout]\n    assert_equal [count_message_lines $stdout \"Terminating server after rdb file reading failure.\"]  1\n    assert_lessthan 1 [count_message_lines $stdout \"integrity check failed\"]\n    kill_server $srv ;# let valgrind look for issues\n}\n\nforeach sanitize_dump {no yes} {\n    test {corrupt payload: load corrupted rdb with empty keys} {\n        set server_path [tmpdir \"server.rdb-corruption-empty-keys-test\"]\n        exec cp tests/assets/corrupt_empty_keys.rdb $server_path\n        start_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"corrupt_empty_keys.rdb\" \"sanitize-dump-payload\" $sanitize_dump]] {\n            r select 0\n            assert_equal [r dbsize] 0\n\n            verify_log_message 0 \"*skipping empty key: set*\" 0\n            verify_log_message 0 \"*skipping empty key: list_quicklist*\" 0\n            verify_log_message 0 \"*skipping empty key: list_quicklist_empty_ziplist*\" 0\n            verify_log_message 0 \"*skipping empty key: list_ziplist*\" 0\n            verify_log_message 0 \"*skipping empty key: hash*\" 0\n            verify_log_message 0 \"*skipping empty key: hash_ziplist*\" 0\n            verify_log_message 0 \"*skipping empty key: zset*\" 0\n            verify_log_message 0 \"*skipping empty key: zset_ziplist*\" 0\n            verify_log_message 0 \"*skipping empty key: zset_listpack*\" 0\n            verify_log_message 0 \"*empty keys skipped: 9*\" 0\n        }\n    }\n}\n\ntest {corrupt payload: listpack invalid size header} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch {\n            r restore key 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x40\\x55\\x5F\\x00\\x00\\x00\\x0F\\x00\\x01\\x01\\x00\\x01\\x02\\x01\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x01\\x00\\x01\\x00\\x01\\x00\\x01\\x02\\x02\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x61\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x62\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x08\\x01\\xFF\\x0A\\x01\\x00\\x00\\x09\\x00\\x45\\x91\\x0A\\x87\\x2F\\xA5\\xF9\\x2E\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Stream listpack integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: listpack too long entry len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch {\n            r restore key 0 \"\\x0F\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x40\\x55\\x55\\x00\\x00\\x00\\x0F\\x00\\x01\\x01\\x00\\x01\\x02\\x01\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x01\\x00\\x01\\x00\\x01\\x00\\x01\\x02\\x02\\x89\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x61\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x62\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x08\\x01\\xFF\\x0A\\x01\\x00\\x00\\x09\\x00\\x40\\x63\\xC9\\x37\\x03\\xA2\\xE5\\x68\"\n        } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: listpack very long entry len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n   
     catch {\n            # This will catch migrated payloads from v6.2.x\n            r restore key 0 \"\\x0F\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x40\\x55\\x55\\x00\\x00\\x00\\x0F\\x00\\x01\\x01\\x00\\x01\\x02\\x01\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x01\\x00\\x01\\x00\\x01\\x00\\x01\\x02\\x02\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x61\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x9C\\x62\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x08\\x01\\xFF\\x0A\\x01\\x00\\x00\\x09\\x00\\x63\\x6F\\x42\\x8E\\x7C\\xB5\\xA2\\x9D\"\n        } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: listpack too long entry prev len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        catch {\n            r restore key 0 \"\\x0F\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x40\\x55\\x55\\x00\\x00\\x00\\x0F\\x00\\x01\\x01\\x00\\x15\\x02\\x01\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x01\\x00\\x01\\x00\\x01\\x00\\x01\\x02\\x02\\x88\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x61\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x88\\x62\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x08\\x01\\xFF\\x0A\\x01\\x00\\x00\\x09\\x00\\x06\\xFB\\x44\\x24\\x0A\\x8E\\x75\\xEA\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Stream listpack integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: stream entry with invalid lp_count causing infinite loop in reverse iteration} {\n  
  start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r restore key 0 \"\\x15\\x03\\x10\\x00\\x00\\x01\\x99\\x52\\xB3\\xAC\\x2F\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4F\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x19\\x20\\x1C\\x40\\x09\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x00\\x02\\x20\\x0D\\x00\\x02\\xA0\\x19\\x00\\x03\\x20\\x0B\\x02\\x82\\x5F\\x33\\xA0\\x19\\x00\\x04\\x20\\x0D\\x00\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x99\\x52\\xB3\\xAC\\x32\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x51\\x40\\x5E\\x18\\x5E\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x00\\x01\\x60\\x0D\\x01\\x06\\x01\\x40\\x0B\\x00\\x04\\x60\\x0B\\x02\\x82\\x5F\\x37\\x60\\x19\\x40\\x3E\\x02\\x01\\x01\\x08\\x20\\x07\\x40\\x0B\\x40\\x00\\x02\\x82\\x5F\\x39\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x99\\x52\\xB3\\xAC\\x39\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x3B\\x40\\x49\\x18\\x49\\x00\\x00\\x00\\x15\\x00\\x02\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x40\\x00\\x00\\x05\\x20\\x07\\x40\\x09\\xC0\\x22\\x09\\x01\\x01\\x86\\x75\\x6E\\x69\\x71\\x75\\x65\\x07\\xA0\\x2C\\x02\\x08\\x01\\xFF\\x0C\\x81\\x00\\x00\\x01\\x99\\x52\\xB3\\xAC\\x39\\x01\\x81\\x00\\x00\\x01\\x99\\x52\\xB3\\xAC\\x2F\\x00\\x00\\x00\\x0C\\x00\\x0C\\x00\\xA4\\x99\\xB6\\x4E\\x9D\\x69\\x79\\x6A\"\n\n        catch {r XREVRANGE key + -}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION 
FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: stream entry with invalid numfields causing infinite loop in reverse iteration} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n\n        r restore key 0 \"\\x15\\x01\\x10\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x3e\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x5c\\x5c\\x00\\x00\\x00\\x1f\\x00\\x03\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6d\\x05\\x85\\x76\\x61\\x6c\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x03\\x01\\x0e\\x01\\x00\\x01\\x01\\x01\\x82\\x5f\\x31\\x03\\x05\\x01\\x30\\x01\\x0e\\x01\\x01\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\x00\\x01\\xf3\\x91\\x20\\x13\\x17\\x05\\x00\\x01\\x01\\x01\\xf3\\x64\\x2f\\xdf\\xe7\\x05\\xf3\\x80\\xd3\\x91\\x1d\\x05\\x06\\x01\\xff\\x03\\x81\\x00\\x00\\x01\\x9a\\x25\\x7b\\xfd\\xcf\\x00\\x81\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x3e\\x00\\x81\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x4c\\x00\\x04\\x01\\x07\\x6d\\x79\\x67\\x72\\x6f\\x75\\x70\\x81\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x4c\\x00\\x02\\x01\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x4c\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x4d\\xdd\\x68\\x0e\\x9a\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6c\\x69\\x63\\x65\\x4d\\xdd\\x68\\x0e\\x9a\\x01\\x00\\x00\\x4d\\xdd\\x68\\x0e\\x9a\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x9a\\x0e\\x68\\xdd\\x4c\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0c\\x00\\xd8\\xd6\\x84\\x4e\\xc6\\xc0\\x63\\xdb\" replace\n        catch {r XREVRANGE key + -}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: stream with duplicate consumers} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        catch {\n            r restore key 0 
\"\\x0F\\x00\\x00\\x00\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x00\\x00\\x00\\x02\\x04\\x6E\\x61\\x6D\\x65\\x2A\\x4C\\xAA\\x9A\\x7D\\x01\\x00\\x00\\x00\\x04\\x6E\\x61\\x6D\\x65\\x2B\\x4C\\xAA\\x9A\\x7D\\x01\\x00\\x00\\x00\\x0A\\x00\\xCC\\xED\\x8C\\xA7\\x62\\xEE\\xC7\\xC8\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Duplicate stream consumer detected*\" 0\n        r ping\n    }\n}\n\ntest {corrupt payload: hash ziplist with duplicate records} {\n    # when we do perform full sanitization, we expect duplicate records to fail the restore\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\x0D\\x3D\\x3D\\x00\\x00\\x00\\x3A\\x00\\x00\\x00\\x14\\x13\\x00\\xF5\\x02\\xF5\\x02\\xF2\\x02\\x53\\x5F\\x31\\x04\\xF3\\x02\\xF3\\x02\\xF7\\x02\\xF7\\x02\\xF8\\x02\\x02\\x5F\\x37\\x04\\xF1\\x02\\xF1\\x02\\xF6\\x02\\x02\\x5F\\x35\\x04\\xF4\\x02\\x02\\x5F\\x33\\x04\\xFA\\x02\\x02\\x5F\\x39\\x04\\xF9\\x02\\xF9\\xFF\\x09\\x00\\xB5\\x48\\xDE\\x62\\x31\\xD0\\xE5\\x63\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: hash listpack with duplicate records} {\n    # when we do perform full sanitization, we expect duplicate records to fail the restore\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\x10\\x17\\x17\\x00\\x00\\x00\\x04\\x00\\x82\\x61\\x00\\x03\\x82\\x62\\x00\\x03\\x82\\x61\\x00\\x03\\x82\\x64\\x00\\x03\\xff\\x0a\\x00\\xc0\\xcf\\xa6\\x87\\xe5\\xa7\\xc5\\xbe\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: hash listpack with duplicate records - 
convert} {\n    # when we do NOT perform full sanitization, but we convert to hash, we expect duplicate records panic\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r config set hash-max-listpack-entries 1\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\x10\\x17\\x17\\x00\\x00\\x00\\x04\\x00\\x82\\x61\\x00\\x03\\x82\\x62\\x00\\x03\\x82\\x61\\x00\\x03\\x82\\x64\\x00\\x03\\xff\\x0a\\x00\\xc0\\xcf\\xa6\\x87\\xe5\\xa7\\xc5\\xbe\" } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"listpack with dup elements\"] 1\n    }\n}\n\ntest {corrupt payload: hash ziplist uneven record count} {\n    # when we do NOT perform full sanitization, but shallow sanitization can detect uneven count\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\r\\x1b\\x1b\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x04\\x00\\x00\\x02a\\x00\\x04\\x02b\\x00\\x04\\x02a\\x00\\x04\\x02d\\x00\\xff\\t\\x00\\xa1\\x98\\x36x\\xcc\\x8e\\x93\\x2e\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: hash duplicate records} {\n    # when we do perform full sanitization, we expect duplicate records to fail the restore\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\x04\\x02\\x01a\\x01b\\x01a\\x01d\\t\\x00\\xc6\\x9c\\xab\\xbc\\bk\\x0c\\x06\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: hash empty zipmap} {\n    start_server [list 
overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _hash 0 \"\\x09\\x02\\x00\\xFF\\x09\\x00\\xC0\\xF1\\xB8\\x67\\x4C\\x16\\xAC\\xE3\" } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Zipmap integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: hash zipmap with duplicate keys} {\n    # Craft a structurally valid zipmap with duplicate key \"a\" to trigger the\n    # dictAdd failure path in rdbLoadObject (RDB_TYPE_HASH_ZIPMAP).\n    # This exercises the error-handling branch that must free the listpack\n    # allocated for the zipmap-to-listpack conversion.\n    #\n    # Zipmap layout (12 bytes):\n    #   \\x02           zmlen=2\n    #   \\x01 a         key \"a\" (len 1)\n    #   \\x01 \\x00 b    value \"b\" (len 1, free 0)\n    #   \\x01 a         key \"a\" again (duplicate!)\n    #   \\x01 \\x00 d    value \"d\" (len 1, free 0)\n    #   \\xFF           end\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE _hash 0 \"\\x09\\x0C\\x02\\x01\\x61\\x01\\x00\\x62\\x01\\x61\\x01\\x00\\x64\\xFF\\x09\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Hash zipmap with dup elements*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - NPD in streamIteratorGetID} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE key 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x73\\xBD\\x68\\x48\\x71\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x02\\x01\\x00\\x01\\x01\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x00\\x01\\x02\\x01\\x01\\x01\\x02\\x01\\x48\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x73\\xBD\\x68\\x48\\x71\\x02\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x73\\xBD\\x68\\x48\\x71\\x00\\x01\\x00\\x00\\x01\\x73\\xBD\\x68\\x48\\x71\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x72\\x48\\x68\\xBD\\x73\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x72\\x48\\x68\\xBD\\x73\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x73\\xBD\\x68\\x48\\x71\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x80\\xCD\\xB0\\xD5\\x1A\\xCE\\xFF\\x10\"\n            r XREVRANGE key 725 233\n        }\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - listpack NPD on invalid stream} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x73\\xDC\\xB6\\x6B\\xF1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x02\\x01\\x1F\\x01\\x00\\x01\\x01\\x01\\x6D\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x29\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x73\\xDC\\xB6\\x6C\\x1A\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x73\\xDC\\xB6\\x6B\\xF1\\x00\\x01\\x00\\x00\\x01\\x73\\xDC\\xB6\\x6B\\xF1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x4B\\x6C\\xB6\\xDC\\x73\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x3D\\x6C\\xB6\\xDC\\x73\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x73\\xDC\\xB6\\x6B\\xF1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xC7\\x7D\\x1C\\xD7\\x04\\xFF\\xE6\\x9D\"\n            r XREAD STREAMS _stream 519389898758\n        }\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - NPD in quicklistIndex} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE key 0 \"\\x0E\\x01\\x13\\x13\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x03\\x12\\x00\\xF3\\x02\\x02\\x5F\\x31\\x04\\xF1\\xFF\\x09\\x00\\xC9\\x4B\\x31\\xFE\\x61\\xC0\\x96\\xFE\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - encoded entry header reach outside the allocation} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug 
set-skip-checksum-validation 1\n        catch {\n            r RESTORE key 0 \"\\x0D\\x19\\x19\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x06\\x00\\x00\\xF1\\x02\\xF1\\x02\\xF2\\x02\\x02\\x5F\\x31\\x04\\x99\\x02\\xF3\\xFF\\x09\\x00\\xC5\\xB8\\x10\\xC0\\x8A\\xF9\\x16\\xDF\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\n\ntest {corrupt payload: fuzzer findings - invalid ziplist encoding} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE _listbig 0 \"\\x0E\\x02\\x1B\\x1B\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\x02\\x5F\\x39\\x04\\xF9\\x02\\x86\\x5F\\x37\\x04\\xF7\\x02\\x02\\x5F\\x35\\xFF\\x19\\x19\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\xF5\\x02\\x02\\x5F\\x33\\x04\\xF3\\x02\\x02\\x5F\\x31\\x04\\xF1\\xFF\\x09\\x00\\x0C\\xFC\\x99\\x2C\\x23\\x45\\x15\\x60\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - hash crash} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        r RESTORE _hash 0 \"\\x0D\\x19\\x19\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x06\\x00\\x00\\xF1\\x02\\xF1\\x02\\xF2\\x02\\x02\\x5F\\x31\\x04\\xF3\\x02\\xF3\\xFF\\x09\\x00\\x38\\xB8\\x10\\xC0\\x8A\\xF9\\x16\\xDF\"\n        r HSET _hash 394891450 1635910264\n        r HMGET _hash 887312884855\n    }\n}\n\ntest {corrupt payload: fuzzer findings - uneven entry count in hash} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug 
set-skip-checksum-validation 1\n        catch {\n            r RESTORE _hashbig 0 \"\\x0D\\x3D\\x3D\\x00\\x00\\x00\\x38\\x00\\x00\\x00\\x14\\x00\\x00\\xF2\\x02\\x02\\x5F\\x31\\x04\\x1C\\x02\\xF7\\x02\\xF1\\x02\\xF1\\x02\\xF5\\x02\\xF5\\x02\\xF4\\x02\\x02\\x5F\\x33\\x04\\xF6\\x02\\x02\\x5F\\x35\\x04\\xF8\\x02\\x02\\x5F\\x37\\x04\\xF9\\x02\\xF9\\x02\\xF3\\x02\\xF3\\x02\\xFA\\x02\\x02\\x5F\\x39\\xFF\\x09\\x00\\x73\\xB7\\x68\\xC8\\x97\\x24\\x8E\\x88\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - invalid read in lzf_decompress} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _setbig 0 \"\\x02\\x03\\x02\\x5F\\x31\\xC0\\x02\\xC3\\x00\\x09\\x00\\xE6\\xDC\\x76\\x44\\xFF\\xEB\\x3D\\xFE\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: fuzzer findings - leak in rdbloading due to dup entry in set} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _setbig 0 \"\\x02\\x0A\\x02\\x5F\\x39\\xC0\\x06\\x02\\x5F\\x31\\xC0\\x00\\xC0\\x04\\x02\\x5F\\x35\\xC0\\x02\\xC0\\x08\\x02\\x5F\\x31\\x02\\x5F\\x33\\x09\\x00\\x7A\\x5A\\xFB\\x90\\x3A\\xE9\\x3C\\xBE\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: fuzzer findings - empty intset} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _setbig 0 
\"\\x02\\xC0\\xC0\\x06\\x02\\x5F\\x39\\xC0\\x02\\x02\\x5F\\x33\\xC0\\x00\\x02\\x5F\\x31\\xC0\\x04\\xC0\\x08\\x02\\x5F\\x37\\x02\\x5F\\x35\\x09\\x00\\xC5\\xD4\\x6D\\xBA\\xAD\\x14\\xB7\\xE7\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - zset ziplist entry lensize is 0} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _zsetbig 0 \"\\x0C\\x3D\\x3D\\x00\\x00\\x00\\x3A\\x00\\x00\\x00\\x14\\x00\\x00\\xF1\\x02\\xF1\\x02\\x02\\x5F\\x31\\x04\\xF2\\x02\\xF3\\x02\\xF3\\x02\\x02\\x5F\\x33\\x04\\xF4\\x02\\xEE\\x02\\xF5\\x02\\x02\\x5F\\x35\\x04\\xF6\\x02\\xF7\\x02\\xF7\\x02\\x02\\x5F\\x37\\x04\\xF8\\x02\\xF9\\x02\\xF9\\x02\\x02\\x5F\\x39\\x04\\xFA\\xFF\\x09\\x00\\xAE\\xF9\\x77\\x2A\\x47\\x24\\x33\\xF6\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Zset ziplist integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind ziplist prevlen reaches outside the ziplist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _listbig 0 \"\\x0E\\x02\\x1B\\x1B\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\x02\\x5F\\x39\\x04\\xF9\\x02\\x02\\x5F\\x37\\x04\\xF7\\x02\\x02\\x5F\\x35\\xFF\\x19\\x19\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\xF5\\x02\\x02\\x5F\\x33\\x04\\xF3\\x95\\x02\\x5F\\x31\\x04\\xF1\\xFF\\x09\\x00\\x0C\\xFC\\x99\\x2C\\x23\\x45\\x15\\x60\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind - bad rdbLoadDoubleValue} {\n    start_server 
[list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _list 0 \"\\x03\\x01\\x11\\x11\\x00\\x00\\x00\\x0A\\x00\\x00\\x00\\x01\\x00\\x00\\xD0\\x07\\x1A\\xE9\\x02\\xFF\\x09\\x00\\x1A\\x06\\x07\\x32\\x41\\x28\\x3A\\x46\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind ziplist prev too big} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _list 0 \"\\x0E\\x01\\x13\\x13\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x03\\x00\\x00\\xF3\\x02\\x02\\x5F\\x31\\xC1\\xF1\\xFF\\x09\\x00\\xC9\\x4B\\x31\\xFE\\x61\\xC0\\x96\\xFE\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - lzf decompression fails, avoid valgrind invalid read} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _stream 0 
\"\\x0F\\x02\\x10\\x00\\x00\\x01\\x73\\xDD\\xAA\\x2A\\xB9\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4B\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x05\\x20\\x1C\\x40\\x07\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x40\\x00\\x00\\x02\\x60\\x19\\x40\\x27\\x40\\x19\\x00\\x33\\x60\\x19\\x40\\x29\\x02\\x01\\x01\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x73\\xDD\\xAA\\x2A\\xBC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4D\\x40\\x5E\\x18\\x5E\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x17\\x0B\\x03\\x01\\x01\\x06\\x01\\x40\\x0B\\x00\\x01\\x60\\x0D\\x02\\x82\\x5F\\x37\\x60\\x19\\x80\\x00\\x00\\x08\\x60\\x19\\x80\\x27\\x02\\x82\\x5F\\x39\\x20\\x19\\x00\\xFF\\x0A\\x81\\x00\\x00\\x01\\x73\\xDD\\xAA\\x2A\\xBE\\x00\\x00\\x09\\x00\\x21\\x85\\x77\\x43\\x71\\x7B\\x17\\x88\"} err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream bad lp_count} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x56\\x01\\x02\\x01\\x22\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x2C\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\xC7\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x01\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xF9\\x7D\\xDF\\xDE\\x73\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\xEB\\x7D\\xDF\\xDE\\x73\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xB2\\xA8\\xA7\\x5F\\x1B\\x61\\x72\\xD5\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream bad lp_count - unsanitized} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r RESTORE _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x56\\x01\\x02\\x01\\x22\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x2C\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\xC7\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x01\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xF9\\x7D\\xDF\\xDE\\x73\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\xEB\\x7D\\xDF\\xDE\\x73\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x73\\xDE\\xDF\\x7D\\x9B\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xB2\\xA8\\xA7\\x5F\\x1B\\x61\\x72\\xD5\"\n        catch { r XREVRANGE _stream 638932639 738}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream integrity check issue} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE _stream 0 
\"\\x0F\\x02\\x10\\x00\\x00\\x01\\x75\\x2D\\xA2\\x90\\x67\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4F\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x4A\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x05\\x20\\x1C\\x40\\x09\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x00\\x02\\x20\\x0D\\x00\\x02\\xA0\\x19\\x00\\x03\\x20\\x0B\\x02\\x82\\x5F\\x33\\xA0\\x19\\x00\\x04\\x20\\x0D\\x00\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x75\\x2D\\xA2\\x90\\x67\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\xC3\\x40\\x56\\x40\\x60\\x18\\x60\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x40\\x0B\\x03\\x01\\x01\\x06\\x01\\x80\\x0B\\x00\\x02\\x20\\x0B\\x02\\x82\\x5F\\x37\\x60\\x19\\x03\\x01\\x01\\xDF\\xFB\\x20\\x05\\x00\\x08\\x60\\x1A\\x20\\x0C\\x00\\xFC\\x20\\x05\\x02\\x82\\x5F\\x39\\x20\\x1B\\x00\\xFF\\x0A\\x81\\x00\\x00\\x01\\x75\\x2D\\xA2\\x90\\x68\\x01\\x00\\x09\\x00\\x1D\\x6F\\xC0\\x69\\x8A\\xDE\\xF7\\x92\" } err\n        assert_match \"*Bad data format*\" $err\n    }\n}\n\ntest {corrupt payload: fuzzer findings - infinite loop} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r RESTORE _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x75\\x3A\\xA6\\xD0\\x93\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x02\\x01\\x00\\x01\\x01\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\xFD\\x01\\x02\\x01\\x00\\x01\\x02\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x75\\x3A\\xA6\\xD0\\x93\\x02\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x75\\x3A\\xA6\\xD0\\x93\\x00\\x01\\x00\\x00\\x01\\x75\\x3A\\xA6\\xD0\\x93\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x94\\xD0\\xA6\\x3A\\x75\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x94\\xD0\\xA6\\x3A\\x75\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x75\\x3A\\xA6\\xD0\\x93\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xC4\\x09\\xAD\\x69\\x7E\\xEE\\xA6\\x2F\"\n        catch { r XREVRANGE _stream 288270516 971031845 }\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - hash ziplist too long entry len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        catch {\n            r RESTORE _hash 0 \"\\x0D\\x3D\\x3D\\x00\\x00\\x00\\x3A\\x00\\x00\\x00\\x14\\x13\\x00\\xF5\\x02\\xF5\\x02\\xF2\\x02\\x53\\x5F\\x31\\x04\\xF3\\x02\\xF3\\x02\\xF7\\x02\\xF7\\x02\\xF8\\x02\\x02\\x5F\\x37\\x04\\xF1\\x02\\xF1\\x02\\xF6\\x02\\x02\\x5F\\x35\\x04\\xF4\\x02\\x02\\x5F\\x33\\x04\\xFA\\x02\\x02\\x5F\\x39\\x04\\xF9\\x02\\xF9\\xFF\\x09\\x00\\xB5\\x48\\xDE\\x62\\x31\\xD0\\xE5\\x63\"\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\nif {$run_oom_tests} {\n\ntest {corrupt payload: OOM in 
rdbGenericLoadStringObject} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        catch { r RESTORE x 0 \"\\x0A\\x81\\x7F\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\xFF\\x13\\x00\\x00\\x00\\x0E\\x00\\x00\\x00\\x02\\x00\\x00\\x02\\x61\\x00\\x04\\x02\\x62\\x00\\xFF\\x09\\x00\\x57\\x04\\xE5\\xCD\\xD4\\x37\\x6C\\x57\" } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - OOM in dictExpand} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch { r RESTORE x 0 \"\\x02\\x81\\x02\\x5F\\x31\\xC0\\x00\\xC0\\x02\\x09\\x00\\xCD\\x84\\x2C\\xB7\\xE8\\xA4\\x49\\x57\" } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n} {} {tsan:skip}\n\n}\n\ntest {corrupt payload: fuzzer findings - zset ziplist invalid tail offset} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _zset 0 \"\\x0C\\x19\\x19\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x06\\x00\\x00\\xF1\\x02\\xF1\\x02\\x02\\x5F\\x31\\x04\\xF2\\x02\\xF3\\x02\\xF3\\xFF\\x09\\x00\\x4D\\x72\\x7B\\x97\\xCD\\x9A\\x70\\xC1\"} err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*Zset ziplist integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - negative reply length} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r RESTORE _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x75\\xCF\\xA1\\x16\\xA7\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x03\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x02\\x01\\x00\\x01\\x01\\x01\\x01\\x01\\x14\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x00\\x01\\x02\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x03\\x81\\x00\\x00\\x01\\x75\\xCF\\xA1\\x16\\xA7\\x02\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x75\\xCF\\xA1\\x16\\xA7\\x01\\x01\\x00\\x00\\x01\\x75\\xCF\\xA1\\x16\\xA7\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\xA7\\x16\\xA1\\xCF\\x75\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\xA7\\x16\\xA1\\xCF\\x75\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x75\\xCF\\xA1\\x16\\xA7\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x09\\x00\\x1B\\x42\\x52\\xB8\\xDD\\x5C\\xE5\\x4E\"\n        catch {r XADD _stream * -956 -2601503852}\n        catch {r XINFO STREAM _stream FULL}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind negative malloc} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _key 0 
\"\\x0E\\x01\\x81\\xD6\\xD6\\x00\\x00\\x00\\x0A\\x00\\x00\\x00\\x01\\x00\\x00\\x40\\xC8\\x6F\\x2F\\x36\\xE2\\xDF\\xE3\\x2E\\x26\\x64\\x8B\\x87\\xD1\\x7A\\xBD\\xFF\\xEF\\xEF\\x63\\x65\\xF6\\xF8\\x8C\\x4E\\xEC\\x96\\x89\\x56\\x88\\xF8\\x3D\\x96\\x5A\\x32\\xBD\\xD1\\x36\\xD8\\x02\\xE6\\x66\\x37\\xCB\\x34\\x34\\xC4\\x52\\xA7\\x2A\\xD5\\x6F\\x2F\\x7E\\xEE\\xA2\\x94\\xD9\\xEB\\xA9\\x09\\x38\\x3B\\xE1\\xA9\\x60\\xB6\\x4E\\x09\\x44\\x1F\\x70\\x24\\xAA\\x47\\xA8\\x6E\\x30\\xE1\\x13\\x49\\x4E\\xA1\\x92\\xC4\\x6C\\xF0\\x35\\x83\\xD9\\x4F\\xD9\\x9C\\x0A\\x0D\\x7A\\xE7\\xB1\\x61\\xF5\\xC1\\x2D\\xDC\\xC3\\x0E\\x87\\xA6\\x80\\x15\\x18\\xBA\\x7F\\x72\\xDD\\x14\\x75\\x46\\x44\\x0B\\xCA\\x9C\\x8F\\x1C\\x3C\\xD7\\xDA\\x06\\x62\\x18\\x7E\\x15\\x17\\x24\\xAB\\x45\\x21\\x27\\xC2\\xBC\\xBB\\x86\\x6E\\xD8\\xBD\\x8E\\x50\\xE0\\xE0\\x88\\xA4\\x9B\\x9D\\x15\\x2A\\x98\\xFF\\x5E\\x78\\x6C\\x81\\xFC\\xA8\\xC9\\xC8\\xE6\\x61\\xC8\\xD1\\x4A\\x7F\\x81\\xD6\\xA6\\x1A\\xAD\\x4C\\xC1\\xA2\\x1C\\x90\\x68\\x15\\x2A\\x8A\\x36\\xC0\\x58\\xC3\\xCC\\xA6\\x54\\x19\\x12\\x0F\\xEB\\x46\\xFF\\x6E\\xE3\\xA7\\x92\\xF8\\xFF\\x09\\x00\\xD0\\x71\\xF7\\x9F\\xF7\\x6A\\xD6\\x2E\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind invalid read} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _key 0 
\"\\x05\\x0A\\x02\\x5F\\x39\\x00\\x00\\x00\\x00\\x00\\x00\\x22\\x40\\xC0\\x08\\x00\\x00\\x00\\x00\\x00\\x00\\x20\\x40\\x02\\x5F\\x37\\x00\\x00\\x00\\x00\\x00\\x00\\x1C\\x40\\xC0\\x06\\x00\\x00\\x00\\x00\\x00\\x00\\x18\\x40\\x02\\x5F\\x33\\x00\\x00\\x00\\x00\\x00\\x00\\x14\\x40\\xC0\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x10\\x40\\x02\\x5F\\x33\\x00\\x00\\x00\\x00\\x00\\x00\\x08\\x40\\xC0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x02\\x5F\\x31\\x00\\x00\\x00\\x00\\x00\\x00\\xF0\\x3F\\xC0\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x3C\\x66\\xD7\\x14\\xA9\\xDA\\x3C\\x69\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - empty hash ziplist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r RESTORE _int 0 \"\\x04\\xC0\\x01\\x09\\x00\\xF6\\x8A\\xB6\\x7A\\x85\\x87\\x72\\x4D\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream with no records} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r restore _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x78\\x4D\\x55\\x68\\x09\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x03\\x01\\x3E\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x50\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x23\\xFF\\x02\\x81\\x00\\x00\\x01\\x78\\x4D\\x55\\x68\\x59\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x78\\x4D\\x55\\x68\\x47\\x00\\x01\\x00\\x00\\x01\\x78\\x4D\\x55\\x68\\x47\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x9F\\x68\\x55\\x4D\\x78\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x85\\x68\\x55\\x4D\\x78\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x78\\x4D\\x55\\x68\\x47\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xF1\\xC0\\x72\\x70\\x39\\x40\\x1E\\xA9\" replace\n        catch {r XREAD STREAMS _stream $}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"Guru Meditation\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - quicklist ziplist tail followed by extra data which start with 0xff} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {\n            r restore key 0 \"\\x0E\\x01\\x11\\x11\\x00\\x00\\x00\\x0A\\x00\\x00\\x00\\x01\\x00\\x00\\xF6\\xFF\\xB0\\x6C\\x9C\\xFF\\x09\\x00\\x9C\\x37\\x47\\x49\\x4D\\xDE\\x94\\xF5\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - dict init to huge size} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set 
sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\x02\\x81\\xC0\\x00\\x02\\x5F\\x31\\xC0\\x02\\x09\\x00\\xB2\\x1B\\xE5\\x17\\x2E\\x15\\xF4\\x6C\" replace} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n} {} {tsan:skip}\n\ntest {corrupt payload: fuzzer findings - huge string} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\x00\\x81\\x01\\x09\\x00\\xF6\\x2B\\xB6\\x7A\\x85\\x87\\x72\\x4D\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n} {} {tsan:skip}\n\ntest {corrupt payload: fuzzer findings - stream PEL without consumer} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _stream 0 \"\\x0F\\x01\\x10\\x00\\x00\\x01\\x7B\\x08\\xF0\\xB2\\x34\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x3B\\x40\\x42\\x19\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x20\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x02\\x05\\x01\\x03\\x20\\x05\\x40\\x00\\x04\\x82\\x5F\\x31\\x03\\x05\\x60\\x19\\x80\\x32\\x02\\x05\\x01\\xFF\\x02\\x81\\x00\\x00\\x01\\x7B\\x08\\xF0\\xB2\\x34\\x02\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x7B\\x08\\xF0\\xB2\\x34\\x01\\x01\\x00\\x00\\x01\\x7B\\x08\\xF0\\xB2\\x34\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x35\\xB2\\xF0\\x08\\x7B\\x01\\x00\\x00\\x01\\x01\\x13\\x41\\x6C\\x69\\x63\\x65\\x35\\xB2\\xF0\\x08\\x7B\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x7B\\x08\\xF0\\xB2\\x34\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x09\\x00\\x28\\x2F\\xE0\\xC5\\x04\\xBB\\xA7\\x31\"} err\n  
      assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream listpack valgrind issue} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r restore _stream 0 \"\\x0F\\x01\\x10\\x00\\x00\\x01\\x7B\\x09\\x5E\\x94\\xFF\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x03\\x01\\x25\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x32\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\xF0\\x01\\xFF\\x02\\x81\\x00\\x00\\x01\\x7B\\x09\\x5E\\x95\\x31\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x7B\\x09\\x5E\\x95\\x24\\x00\\x01\\x00\\x00\\x01\\x7B\\x09\\x5E\\x95\\x24\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x5C\\x95\\x5E\\x09\\x7B\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x4B\\x95\\x5E\\x09\\x7B\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x7B\\x09\\x5E\\x95\\x24\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\x19\\x29\\x94\\xDF\\x76\\xF8\\x1A\\xC6\"\n        catch {r XINFO STREAM _stream FULL }\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream with bad lpFirst} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _stream 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x7B\\x0E\\x52\\xD2\\xEC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\xF7\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x03\\x01\\x01\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x01\\x01\\x01\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x02\\x81\\x00\\x00\\x01\\x7B\\x0E\\x52\\xD2\\xED\\x01\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x7B\\x0E\\x52\\xD2\\xED\\x00\\x01\\x00\\x00\\x01\\x7B\\x0E\\x52\\xD2\\xED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xED\\xD2\\x52\\x0E\\x7B\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\xED\\xD2\\x52\\x0E\\x7B\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x7B\\x0E\\x52\\xD2\\xED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xAC\\x05\\xC9\\x97\\x5D\\x45\\x80\\xB3\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream listpack lpPrev valgrind issue} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        r restore _stream 0  
\"\\x0F\\x01\\x10\\x00\\x00\\x01\\x7B\\x0E\\xAE\\x66\\x36\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x1D\\x01\\x03\\x01\\x24\\x01\\x00\\x01\\x01\\x69\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x33\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x02\\x81\\x00\\x00\\x01\\x7B\\x0E\\xAE\\x66\\x69\\x00\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x7B\\x0E\\xAE\\x66\\x5A\\x00\\x01\\x00\\x00\\x01\\x7B\\x0E\\xAE\\x66\\x5A\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x94\\x66\\xAE\\x0E\\x7B\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x83\\x66\\xAE\\x0E\\x7B\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x7B\\x0E\\xAE\\x66\\x5A\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x00\\xD5\\xD7\\xA5\\x5C\\x63\\x1C\\x09\\x40\"\n        catch {r XREVRANGE _stream 1618622681 606195012389}\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream with non-integer entry id} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _streambig 0 
\"\\x0F\\x03\\x10\\x00\\x00\\x01\\x7B\\x13\\x34\\xC3\\xB2\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4F\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x80\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x05\\x20\\x1C\\x40\\x09\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x00\\x02\\x20\\x0D\\x00\\x02\\xA0\\x19\\x00\\x03\\x20\\x0B\\x02\\x82\\x5F\\x33\\xA0\\x19\\x00\\x04\\x20\\x0D\\x00\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x7B\\x13\\x34\\xC3\\xB2\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\xC3\\x40\\x56\\x40\\x61\\x18\\x61\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x40\\x0B\\x03\\x01\\x01\\x06\\x01\\x40\\x0B\\x03\\x01\\x01\\xDF\\xFB\\x20\\x05\\x02\\x82\\x5F\\x37\\x60\\x1A\\x20\\x0E\\x00\\xFC\\x20\\x05\\x00\\x08\\xC0\\x1B\\x00\\xFD\\x20\\x0C\\x02\\x82\\x5F\\x39\\x20\\x1B\\x00\\xFF\\x10\\x00\\x00\\x01\\x7B\\x13\\x34\\xC3\\xB3\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x03\\xC3\\x3D\\x40\\x4A\\x18\\x4A\\x00\\x00\\x00\\x15\\x00\\x02\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x40\\x00\\x00\\x05\\x60\\x07\\x02\\xDF\\xFD\\x02\\xC0\\x23\\x09\\x01\\x01\\x86\\x75\\x6E\\x69\\x71\\x75\\x65\\x07\\xA0\\x2D\\x02\\x08\\x01\\xFF\\x0C\\x81\\x00\\x00\\x01\\x7B\\x13\\x34\\xC3\\xB4\\x00\\x00\\x09\\x00\\x9D\\xBD\\xD5\\xB9\\x33\\xC4\\xC5\\xFF\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - empty quicklist} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {\n            r restore key 0 
\"\\x0E\\xC0\\x2B\\x15\\x00\\x00\\x00\\x0A\\x00\\x00\\x00\\x01\\x00\\x00\\xE0\\x62\\x58\\xEA\\xDF\\x22\\x00\\x00\\x00\\xFF\\x09\\x00\\xDF\\x35\\xD2\\x67\\xDC\\x0E\\x89\\xAB\" replace\n        } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - empty zset} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\x05\\xC0\\x01\\x09\\x00\\xF6\\x8A\\xB6\\x7A\\x85\\x87\\x72\\x4D\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - hash with len of 0} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\x04\\xC0\\x21\\x09\\x00\\xF6\\x8A\\xB6\\x7A\\x85\\x87\\x72\\x4D\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - hash listpack first element too long entry len} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        r config set sanitize-dump-payload yes\n        catch { r restore _hash 0 \"\\x10\\x15\\x15\\x00\\x00\\x00\\x06\\x00\\xF0\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x02\\x01\\x02\\x01\\xFF\\x0A\\x00\\x94\\x21\\x0A\\xFA\\x06\\x52\\x9F\\x44\" replace } err\n        assert_match \"*Bad data format*\" $err\n        verify_log_message 0 \"*integrity check failed*\" 0\n    }\n}\n\ntest {corrupt payload: fuzzer findings - stream double free of listpack when inserting a dup node into rax returns 0} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic 
yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        r config set sanitize-dump-payload yes\n        catch { r restore _stream 0 \"\\x0F\\x03\\x10\\x00\\x00\\x01\\x7B\\x60\\x5A\\x23\\x79\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4F\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x05\\x20\\x1C\\x40\\x09\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x00\\x02\\x20\\x0D\\x00\\x02\\xA0\\x19\\x00\\x03\\x20\\x0B\\x02\\x82\\x5F\\x33\\xA0\\x19\\x00\\x04\\x20\\x0D\\x00\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x7B\\x60\\x5A\\x23\\x79\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\xC3\\x40\\x51\\x40\\x5E\\x18\\x5E\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x40\\x0B\\x03\\x01\\x01\\x06\\x01\\x80\\x0B\\x00\\x02\\x20\\x0B\\x02\\x82\\x5F\\x37\\xA0\\x19\\x00\\x03\\x20\\x0D\\x00\\x08\\xA0\\x19\\x00\\x04\\x20\\x0B\\x02\\x82\\x5F\\x39\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x7B\\x60\\x5A\\x23\\x79\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x3B\\x40\\x49\\x18\\x49\\x00\\x00\\x00\\x15\\x00\\x02\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x40\\x00\\x00\\x05\\x20\\x07\\x40\\x09\\xC0\\x22\\x09\\x01\\x01\\x86\\x75\\x6E\\x69\\x71\\x75\\x65\\x07\\xA0\\x2C\\x02\\x08\\x01\\xFF\\x0C\\x81\\x00\\x00\\x01\\x7B\\x60\\x5A\\x23\\x7A\\x01\\x00\\x0A\\x00\\x9C\\x8F\\x1E\\xBF\\x2E\\x05\\x59\\x09\" replace } err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - LCS OOM} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r SETRANGE _int 423324 
1450173551\n        catch {r LCS _int _int} err\n        assert_match \"*Insufficient memory*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - gcc asan reports false leak on assert} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        r config set sanitize-dump-payload no\n        catch { r restore _list 0 \"\\x12\\x01\\x02\\x13\\x13\\x00\\x00\\x00\\x10\\x00\\x00\\x00\\x03\\x00\\x00\\xF3\\xFE\\x02\\x5F\\x31\\x04\\xF1\\xFF\\x0A\\x00\\x19\\x8D\\x3D\\x74\\x85\\x94\\x29\\xBD\" }\n        catch { r LPOP _list } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - lpFind invalid access} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        r config set sanitize-dump-payload no\n         r restore _hashbig 0 \"\\x10\\x39\\x39\\x00\\x00\\x00\\x14\\x00\\x06\\x01\\x06\\x01\\x03\\x01\\x82\\x5F\\x33\\x03\\x07\\x01\\x82\\x5F\\x37\\x03\\x00\\x01\\x00\\x01\\x04\\x01\\x04\\x01\\x09\\x01\\x82\\x5F\\x39\\x03\\x05\\x01\\x82\\x5F\\x35\\x03\\x08\\x01\\x08\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x02\\x01\\xF0\\x01\\xFF\\x0A\\x00\\x29\\xD7\\xE4\\x52\\x79\\x7A\\x95\\x82\"\n         catch { r HLEN _hashbig }\n         catch { r HSETNX _hashbig 513072881620 
\"\\x9A\\x4B\\x1F\\xF2\\x99\\x74\\x6E\\x96\\x84\\x7F\\xB9\\x85\\xBE\\xD6\\x1A\\x93\\x0A\\xED\\xAE\\x19\\xA0\\x5A\\x67\\xD6\\x89\\xA8\\xF9\\xF2\\xB8\\xBD\\x3E\\x5A\\xCF\\xD2\\x5B\\x17\\xA4\\xBB\\xB2\\xA9\\x56\\x67\\x6E\\x0B\\xED\\xCD\\x36\\x49\\xC6\\x84\\xFF\\xC2\\x76\\x9B\\xF3\\x49\\x88\\x97\\x92\\xD2\\x54\\xE9\\x08\\x19\\x86\\x40\\x96\\x24\\x68\\x25\\x9D\\xF7\\x0E\\xB7\\x36\\x85\\x68\\x6B\\x2A\\x97\\x64\\x30\\xE6\\xFF\\x9A\\x2A\\x42\\x2B\\x31\\x01\\x32\\xB3\\xEE\\x78\\x1A\\x26\\x94\\xE2\\x07\\x34\\x50\\x8A\\xFF\\xF9\\xAE\\xEA\\xEC\\x59\\x42\\xF5\\x39\\x40\\x65\\xDE\\x55\\xCC\\x77\\x1B\\x32\\x02\\x19\\xEE\\x3C\\xD4\\x79\\x48\\x01\\x4F\\x51\\xFE\\x22\\xE0\\x0C\\xF4\\x07\\x06\\xCD\\x55\\x30\\xC0\\x24\\x32\\xD4\\xCC\\xAF\\x82\\x05\\x48\\x14\\x10\\x55\\xA1\\x3D\\xF6\\x81\\x45\\x54\\xEA\\x71\\x24\\x27\\x06\\xDC\\xFA\\xE4\\xE4\\x87\\xCC\\x81\\xA0\\x47\\xA5\\xAF\\xD1\\x89\\xE7\\x42\\xC3\\x24\\xD0\\x32\\x7A\\xDE\\x44\\x47\\x6E\\x1F\\xCB\\xEE\\xA6\\x46\\xDE\\x0D\\xE6\\xD5\\x16\\x03\\x2A\\xD6\\x9E\\xFD\\x94\\x02\\x2C\\xDB\\x1F\\xD0\\xBE\\x98\\x10\\xE3\\xEB\\xEA\\xBE\\xE5\\xD1\" }\n    }\n}\n\ntest {corrupt payload: fuzzer findings - invalid access in ziplist tail prevlen decoding} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r debug set-skip-checksum-validation 1\n        r config set sanitize-dump-payload no\n        catch {r restore _listbig 0 \"\\x0e\\x02\\x1B\\x1B\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\x02\\x5F\\x39\\x04\\xF9\\x02\\x02\\x5F\\x37\\x04\\xF7\\x02\\x02\\x5F\\x35\\xFF\\x19\\x19\\x00\\x00\\x00\\x16\\x00\\x00\\x00\\x05\\x00\\x00\\xF5\\x02\\x02\\x5F\\x33\\x04\\xF3\\x02\\x02\\x5F\\x31\\xFE\\xF1\\xFF\\x0A\\x00\\x6B\\x43\\x32\\x2F\\xBB\\x29\\x0a\\xBE\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - zset zslInsert with a NAN score} {\n    start_server [list overrides [list loglevel verbose 
use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r restore _nan_zset 0 \"\\x05\\x0A\\x02\\x5F\\x39\\x00\\x00\\x00\\x00\\x00\\x00\\x22\\x40\\xC0\\x08\\x00\\x00\\x00\\x00\\x00\\x00\\x20\\x40\\x02\\x5F\\x37\\x00\\x00\\x00\\x00\\x00\\x00\\x1C\\x40\\xC0\\x06\\x00\\x00\\x00\\x00\\x00\\x00\\x18\\x40\\x02\\x5F\\x35\\x00\\x00\\x00\\x00\\x00\\x00\\x14\\x40\\xC0\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x10\\x40\\x02\\x5F\\x33\\x00\\x00\\x00\\x00\\x00\\x00\\x08\\x40\\xC0\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x02\\x5F\\x31\\x00\\x00\\x00\\x00\\x00\\x55\\xF0\\x7F\\xC0\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0A\\x00\\xEC\\x94\\x86\\xD8\\xFD\\x5C\\x5F\\xD8\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - streamLastValidID panic} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _streambig 0 
\"\\x13\\xC0\\x10\\x00\\x00\\x01\\x80\\x20\\x48\\xA0\\x33\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4F\\x40\\x5C\\x18\\x5C\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x00\\x01\\x20\\x03\\x00\\x05\\x20\\x1C\\x40\\x09\\x05\\x01\\x01\\x82\\x5F\\x31\\x03\\x80\\x0D\\x00\\x02\\x20\\x0D\\x00\\x02\\xA0\\x19\\x00\\x03\\x20\\x0B\\x02\\x82\\x5F\\x33\\x60\\x19\\x40\\x2F\\x02\\x01\\x01\\x04\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x80\\x20\\x48\\xA0\\x34\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\xC3\\x40\\x51\\x40\\x5E\\x18\\x5E\\x00\\x00\\x00\\x24\\x00\\x05\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x06\\x01\\x01\\x82\\x5F\\x35\\x03\\x05\\x20\\x1E\\x40\\x0B\\x03\\x01\\x01\\x06\\x01\\x80\\x0B\\x00\\x02\\x20\\x0B\\x02\\x82\\x5F\\x37\\xA0\\x19\\x00\\x03\\x20\\x0D\\x00\\x08\\xA0\\x19\\x00\\x04\\x20\\x0B\\x02\\x82\\x5F\\x39\\x20\\x19\\x00\\xFF\\x10\\x00\\x00\\x01\\x80\\x20\\x48\\xA0\\x34\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x06\\xC3\\x3D\\x40\\x4A\\x18\\x4A\\x00\\x00\\x00\\x15\\x00\\x02\\x01\\x00\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x40\\x10\\x00\\x00\\x20\\x01\\x40\\x00\\x00\\x05\\x60\\x07\\x02\\xDF\\xFA\\x02\\xC0\\x23\\x09\\x01\\x01\\x86\\x75\\x6E\\x69\\x71\\x75\\x65\\x07\\xA0\\x2D\\x02\\x08\\x01\\xFF\\x0C\\x81\\x00\\x00\\x01\\x80\\x20\\x48\\xA0\\x35\\x00\\x81\\x00\\x00\\x01\\x80\\x20\\x48\\xA0\\x33\\x00\\x00\\x00\\x0C\\x00\\x0A\\x00\\x34\\x8B\\x0E\\x5B\\x42\\xCD\\xD6\\x08\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - valgrind fishy value warning} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r 
restore _key 0 \"\\x13\\x01\\x10\\x00\\x00\\x01\\x81\\xCC\\x07\\xDC\\xF2\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x42\\x42\\x00\\x00\\x00\\x18\\x00\\x02\\x01\\x01\\x01\\x02\\x01\\x84\\x69\\x74\\x65\\x6D\\x05\\x85\\x76\\x61\\x6C\\x75\\x65\\x06\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x01\\x01\\x00\\x01\\x05\\x01\\x03\\x01\\x2C\\x01\\x00\\x01\\x01\\x01\\x82\\x5F\\x31\\x03\\x05\\x01\\x02\\x01\\x3C\\x01\\x00\\x01\\x01\\x01\\x02\\x01\\x05\\x01\\xFF\\x02\\xD0\\x00\\x00\\x01\\x81\\xCC\\x07\\xDD\\x2E\\x00\\x81\\x00\\x00\\x01\\x81\\xCC\\x07\\xDC\\xF2\\x00\\x81\\x00\\x00\\x01\\x81\\xCC\\x07\\xDD\\x1E\\x00\\x03\\x01\\x07\\x6D\\x79\\x67\\x72\\x6F\\x75\\x70\\x81\\x00\\x00\\x01\\x81\\xCC\\x07\\xDD\\x1E\\x00\\x02\\x01\\x00\\x00\\x01\\x81\\xCC\\x07\\xDD\\x1E\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x71\\xDD\\x07\\xCC\\x81\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\x58\\xDD\\x07\\xCC\\x81\\x01\\x00\\x00\\x01\\x00\\x00\\x01\\x81\\xCC\\x07\\xDD\\x1E\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x0A\\x00\\x2F\\xB0\\xD1\\x15\\x0A\\x97\\x87\\x6B\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - empty set listpack} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r restore _key 0 \"\\x14\\x25\\x25\\x00\\x00\\x00\\x00\\x00\\x02\\x01\\x82\\x5F\\x37\\x03\\x06\\x01\\x82\\x5F\\x35\\x03\\x82\\x5F\\x33\\x03\\x00\\x01\\x82\\x5F\\x31\\x03\\x82\\x5F\\x39\\x03\\x04\\xA9\\x08\\x01\\xFF\\x0B\\x00\\xA3\\x26\\x49\\xB4\\x86\\xB0\\x0F\\x41\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - set with duplicate elements causes sdiff to hang} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set 
sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _key 0 \"\\x14\\x25\\x25\\x00\\x00\\x00\\x0A\\x00\\x06\\x01\\x82\\x5F\\x35\\x03\\x04\\x01\\x82\\x5F\\x31\\x03\\x82\\x5F\\x33\\x03\\x00\\x01\\x82\\x5F\\x39\\x03\\x82\\x5F\\x33\\x03\\x08\\x01\\x02\\x01\\xFF\\x0B\\x00\\x31\\xBE\\x7D\\x41\\x01\\x03\\x5B\\xEC\" replace} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n\n        # In the past, it generated a broken protocol and left the client hung in sdiff\n        r config set sanitize-dump-payload no\n        assert_equal {OK} [r restore _key 0 \"\\x14\\x25\\x25\\x00\\x00\\x00\\x0A\\x00\\x06\\x01\\x82\\x5F\\x35\\x03\\x04\\x01\\x82\\x5F\\x31\\x03\\x82\\x5F\\x33\\x03\\x00\\x01\\x82\\x5F\\x39\\x03\\x82\\x5F\\x33\\x03\\x08\\x01\\x02\\x01\\xFF\\x0B\\x00\\x31\\xBE\\x7D\\x41\\x01\\x03\\x5B\\xEC\" replace]\n        assert_type set _key\n        assert_encoding listpack _key\n        assert_equal 10 [r scard _key]\n        assert_equal {0 2 4 6 8 _1 _3 _3 _5 _9} [lsort [r smembers _key]]\n        assert_equal {0 2 4 6 8 _1 _3 _5 _9} [lsort [r sdiff _key]]\n    }\n} {} {logreqres:skip} ;# This test violates {\"uniqueItems\": true}\n\ntest {corrupt payload: fuzzer findings - set with invalid length causes smembers to hang} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        # In the past, it generated a broken protocol and left the client hung in smembers\n        r config set sanitize-dump-payload no\n        assert_equal {OK} [r restore _set 0 \"\\x14\\x16\\x16\\x00\\x00\\x00\\x0c\\x00\\x81\\x61\\x02\\x81\\x62\\x02\\x81\\x63\\x02\\x01\\x01\\x02\\x01\\x03\\x01\\xff\\x0c\\x00\\x91\\x00\\x56\\x73\\xc1\\x82\\xd5\\xbd\" replace]\n        assert_encoding listpack _set\n        catch { r SMEMBERS _set } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    
}\n}\n\ntest {corrupt payload: fuzzer findings - set with invalid length causes sscan to hang} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        # In the past, it generated a broken protocol and left the client hung in sscan\n        r config set sanitize-dump-payload no\n        assert_equal {OK} [r restore _set 0 \"\\x14\\x16\\x16\\x00\\x00\\x00\\x0c\\x00\\x81\\x61\\x02\\x81\\x62\\x02\\x81\\x63\\x02\\x01\\x01\\x02\\x01\\x03\\x01\\xff\\x0c\\x00\\x91\\x00\\x56\\x73\\xc1\\x82\\xd5\\xbd\" replace]\n        assert_encoding listpack _set\n        catch { r SSCAN _set 0 } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: zset listpack encoded with invalid length causes zscan to hang} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        assert_equal {OK} [r restore _zset 0 \"\\x11\\x16\\x16\\x00\\x00\\x00\\x1a\\x00\\x81\\x61\\x02\\x01\\x01\\x81\\x62\\x02\\x02\\x01\\x81\\x63\\x02\\x03\\x01\\xff\\x0c\\x00\\x81\\xa7\\xcd\\x31\\x22\\x6c\\xef\\xf7\" replace]\n        assert_encoding listpack _zset\n        catch { r ZSCAN _zset 0 } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: hash listpack encoded with invalid length causes hscan to hang} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        assert_equal {OK} [r restore _hash 0 \"\\x10\\x17\\x17\\x00\\x00\\x00\\x0e\\x00\\x82\\x66\\x31\\x03\\x82\\x76\\x31\\x03\\x82\\x66\\x32\\x03\\x82\\x76\\x32\\x03\\xff\\x0c\\x00\\xf1\\xc5\\x36\\x92\\x29\\x6a\\x8c\\xc5\" replace]\n        
assert_encoding listpack _hash\n        catch { r HSCAN _hash 0 } err\n        assert_equal [count_log_message 0 \"crashed by signal\"] 0\n        assert_equal [count_log_message 0 \"ASSERTION FAILED\"] 1\n    }\n}\n\ntest {corrupt payload: fuzzer findings - vector sets with wrong encoding} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload yes\n        r debug set-skip-checksum-validation 1\n        catch {r restore _key 0 \"\\x07\\x81\\xBD\\xE7\\x2D\\xA2\\xBB\\x1E\\xB4\\x00\\x02\\x03\\x02\\x03\\x02\\x50\\x8F\\x02\\x00\\x05\\xC0\\x02\\x05\\x03\\x7F\\x7F\\x7F\\x02\\x07\\x02\\x03\\x02\\x00\\x02\\x02\\x02\\x20\\x02\\x01\\x02\\x02\\x02\\x81\\x3F\\x13\\xCD\\x3A\\x3F\\xDD\\xB3\\xD7\\x05\\xC0\\x01\\x05\\x03\\x7F\\x7F\\x7F\\x02\\x0B\\x02\\x02\\x02\\x02\\x02\\x02\\x02\\x20\\x02\\x01\\x02\\x03\\x02\\x06\\x02\\x10\\x02\\x00\\x02\\x10\\x02\\x81\\x3F\\x13\\xCD\\x3A\\x3F\\xDD\\xB3\\xD7\\x05\\xC0\\x00\\x05\\x03\\x7F\\x7F\\x7F\\x02\\x07\\x02\\x01\\x02\\x00\\x02\\x02\\x02\\x20\\x02\\x02\\x02\\x03\\x02\\x81\\x3F\\x13\\xCD\\x3A\\x3F\\xDD\\xB3\\xD7\\x00\\x0C\\x00\\xC6\\xA3\\x70\\x40\\x02\\x26\\xE8\\x9B\"} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: fuzzer findings - decrRefCount on NULL robj on corrupt KEY_META payload} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no] ] {\n        r config set sanitize-dump-payload no\n        r debug set-skip-checksum-validation 1\n        catch {r restore key 0 \"\\xF3\\x02\\x01\\x0D\\x00\\x54\\x23\\x3F\\xC9\\x82\\x32\\x05\\x8D\" replace} err\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\ntest {corrupt payload: stream with NACK shared between two consumers} {\n    start_server [list overrides [list loglevel verbose use-exit-on-panic yes crash-memcheck-enabled no]] {\n        r debug 
set-skip-checksum-validation 1\n        # Payload: stream with entry 1-0, one consumer group (mygroup),\n        # two consumers whose PELs both reference 1-0 (shared NACK).\n        # XACK on one consumer frees the NACK, leaving a dangling\n        # pointer in the other consumer's PEL (use-after-free).\n        catch {r RESTORE mystream 0 \"\\x1a\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1d\\x1d\\x00\\x00\\x00\\x0a\\x00\\x01\\x01\\x00\\x01\\x01\\x01\\x81\\x6b\\x02\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x81\\x76\\x02\\x04\\x01\\xff\\x01\\x01\\x00\\x01\\x00\\x00\\x00\\x01\\x01\\x07\\x6d\\x79\\x67\\x72\\x6f\\x75\\x70\\x01\\x00\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x64\\x42\\xb9\\x9d\\x01\\x00\\x00\\x01\\x02\\x09\\x63\\x6f\\x6e\\x73\\x75\\x6d\\x65\\x72\\x41\\x01\\x64\\x42\\xb9\\x9d\\x01\\x00\\x00\\x01\\x64\\x42\\xb9\\x9d\\x01\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x09\\x63\\x6f\\x6e\\x73\\x75\\x6d\\x65\\x72\\x42\\x01\\x64\\x42\\xb9\\x9d\\x01\\x00\\x00\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x40\\x64\\x40\\x64\\x00\\x00\\x00\\x0d\\x00\\xe7\\x12\\xf7\\xcc\\x25\\xd5\\x0e\\x44\"} err\n        catch {r XACK mystream mygroup 1-0} _\n        catch {r XREADGROUP GROUP mygroup consumerA COUNT 10 STREAMS mystream 0} _\n        catch {r DEL mystream} _\n        assert_match \"*Bad data format*\" $err\n        r ping\n    }\n}\n\n} ;# tags\n\n"
  },
  {
    "path": "tests/integration/dismiss-mem.tcl",
"content": "# The tests in this file aim to get coverage of all the \"dismiss\" methods\n# that dismiss data-type memory in the fork child, such as the client query\n# buffer, client output buffer and replication backlog.\n# There aren't many asserts in these tests; we mainly check for crashes and\n# dump file inconsistencies.\n\nstart_server {tags {\"dismiss external:skip needs:debug\"}} {\n    # Other tests also cover the child process dumping an RDB file, but their\n    # key/value memory allocations are usually too small to exercise the\n    # \"dismiss\" object methods. In this test we create big keys/values so the\n    # conditions for releasing memory pages are satisfied; in some cases we\n    # assume the OS page size is 4KB.\n    test {dismiss all data types memory} {\n        set bigstr [string repeat A 8192]\n        set 64bytes [string repeat A 64]\n\n        # string\n        populate 100 bigstring 8192\n\n        # list\n        r lpush biglist1 $bigstr            ; # uncompressed ziplist node\n        r config set list-compress-depth 1  ; # compressed ziplist nodes\n        for {set i 0} {$i < 16} {incr i} {\n            r lpush biglist2 $bigstr\n        }\n\n        # set\n        r sadd bigset1 $bigstr              ; # hash encoding\n        set biginteger [string repeat 1 19]\n        for {set i 0} {$i < 512} {incr i} {\n            r sadd bigset2 $biginteger      ; # intset encoding\n        }\n\n        # zset\n        r zadd bigzset1 1.0 $bigstr         ; # skiplist encoding\n        for {set i 0} {$i < 128} {incr i} {\n            r zadd bigzset2 1.0 $64bytes    ; # ziplist encoding\n        }\n\n        # hash\n        r hset bighash1 field1 $bigstr      ; # hash encoding\n        for {set i 0} {$i < 128} {incr i} {\n            r hset bighash2 $i $64bytes     ; # ziplist encoding\n        }\n\n        # stream\n        r xadd bigstream * entry1 $bigstr entry2 $bigstr\n\n        set digest 
[debug_digest]\n        # Test both RDB (yes) and AOF (no) rewrite paths.\n        foreach preamble {yes no} {\n            r config set aof-use-rdb-preamble $preamble\n            r bgrewriteaof\n            waitForBgrewriteaof r\n            r debug loadaof\n            set newdigest [debug_digest]\n            assert {$digest eq $newdigest}\n        }\n    }\n\n    test {dismiss client output buffer} {\n        # Big output buffer\n        set item [string repeat \"x\" 100000]\n        for {set i 0} {$i < 100} {incr i} {\n            r lpush mylist $item\n        }\n        set rd [redis_deferring_client]\n        $rd lrange mylist 0 -1\n        $rd flush\n        after 100\n\n        r bgsave\n        waitForBgsave r\n        assert_equal $item [r lpop mylist]\n    }\n\n    test {dismiss client query buffer} {\n        # Big pending query buffer\n        set bigstr [string repeat A 8192]\n        set rd [redis_deferring_client]\n        $rd write \"*2\\r\\n\\$8192\\r\\n\"\n        $rd write $bigstr\\r\\n\n        $rd flush\n        after 100\n\n        r bgsave\n        waitForBgsave r\n    }\n\n    test {dismiss replication backlog} {\n        set master [srv 0 client]\n        start_server {} {\n            r slaveof [srv -1 host] [srv -1 port]\n            wait_for_sync r\n            waitForBgsave $master\n\n            set bigstr [string repeat A 8192]\n            for {set i 0} {$i < 20} {incr i} {\n                $master set $i $bigstr\n            }\n            $master bgsave\n            waitForBgsave $master\n        }\n    }\n\n    test {dismiss multi-db kvstore bucket memory in standalone mode} {\n        r flushall\n        regexp {db=(\\d+)} [r client info] -> curdb\n        # Populate multiple DBs to verify each DB's bucket arrays can be dismissed.\n        foreach db {0 1 2 3} {\n            r select $db\n            populate 2000 \"db${db}key:\" 3 0 false 3600\n        }\n        set digest [debug_digest]\n\n        # Test both RDB (yes) and 
AOF (no) rewrite paths.\n        foreach preamble {yes no} {\n            r config set aof-use-rdb-preamble $preamble\n            r bgrewriteaof\n            waitForBgrewriteaof r\n            r debug loadaof\n            set newdigest [debug_digest]\n            assert {$digest eq $newdigest}\n        }\n        r select $curdb\n    }\n}\n\nstart_cluster 1 0 {tags {dismiss external:skip cluster needs:debug}} {\n    test {dismiss slot dict bucket memory in cluster mode} {\n        # Concentrate keys into a few slots using hash tags so each slot's\n        # bucket array is large enough to be dismissed.\n        # {06S} -> slot 0, {Qi} -> slot 1, {5L5} -> slot 2\n        foreach tag {{06S} {Qi} {5L5}} {\n            populate 2000 \"${tag}key:\" 3 0 false 3600\n        }\n        set digest [r debug digest]\n\n        # Test both RDB (yes) and AOF (no) rewrite paths.\n        foreach preamble {yes no} {\n            r config set aof-use-rdb-preamble $preamble\n            r bgrewriteaof\n            waitForBgrewriteaof r\n            r debug loadaof\n            set newdigest [r debug digest]\n            assert {$digest eq $newdigest}\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/failover.tcl",
"content": "start_server {tags {\"failover external:skip\"} overrides {save {}}} {\nstart_server {overrides {save {}}} {\nstart_server {overrides {save {}}} {\n    set node_0 [srv 0 client]\n    set node_0_host [srv 0 host]\n    set node_0_port [srv 0 port]\n    set node_0_pid [srv 0 pid]\n\n    set node_1 [srv -1 client]\n    set node_1_host [srv -1 host]\n    set node_1_port [srv -1 port]\n    set node_1_pid [srv -1 pid]\n\n    set node_2 [srv -2 client]\n    set node_2_host [srv -2 host]\n    set node_2_port [srv -2 port]\n    set node_2_pid [srv -2 pid]\n\n    proc assert_digests_match {n1 n2 n3} {\n        assert_equal [$n1 debug digest] [$n2 debug digest]\n        assert_equal [$n2 debug digest] [$n3 debug digest]\n    }\n\n    test {failover command fails without connected replica} {\n        catch { $node_0 failover to $node_1_host $node_1_port } err\n        if {! [string match \"ERR*\" $err]} {\n            fail \"failover command succeeded when replica not connected\"\n        }\n    }\n\n    test {setup replication for following tests} {\n        $node_1 replicaof $node_0_host $node_0_port\n        $node_2 replicaof $node_0_host $node_0_port\n        wait_for_sync $node_1\n        wait_for_sync $node_2\n        # wait for both replicas to be online from the perspective of the master\n        wait_for_condition 50 100 {\n            [string match \"*slave0:*,state=online*slave1:*,state=online*\" [$node_0 info replication]]\n        } else {\n            fail \"replicas didn't come online in time\"\n        }\n    }\n\n    test {failover command fails with invalid host} {\n        catch { $node_0 failover to invalidhost $node_1_port } err\n        assert_match \"ERR*\" $err\n    }\n\n    test {failover command fails with invalid port} {\n        catch { $node_0 failover to $node_1_host invalidport } err\n        assert_match \"ERR*\" $err\n    }\n\n    test {failover command fails with just force and timeout} {\n        catch { $node_0 FAILOVER FORCE 
TIMEOUT 100} err\n        assert_match \"ERR*\" $err\n    }\n\n    test {failover command fails when sent to a replica} {\n        catch { $node_1 failover to $node_1_host $node_1_port } err\n        assert_match \"ERR*\" $err\n    }\n\n    test {failover command fails with force without timeout} {\n        catch { $node_0 failover to $node_1_host $node_1_port FORCE } err\n        assert_match \"ERR*\" $err\n    }\n\n    test {failover command to specific replica works} {\n        set initial_psyncs [s -1 sync_partial_ok]\n        set initial_syncs [s -1 sync_full]\n\n        # Generate a delta between primary and replica\n        set load_handler [start_write_load $node_0_host $node_0_port 5]\n        pause_process [srv -1 pid]\n        wait_for_condition 50 100 {\n            [s 0 total_commands_processed] > 100\n        } else {\n            fail \"Node 0 did not accept writes\"\n        }\n        resume_process [srv -1 pid]\n\n        # Execute the failover\n        assert_equal \"OK\" [$node_0 failover to $node_1_host $node_1_port]\n\n        # Wait for failover to end\n        wait_for_condition 50 100 {\n            [s 0 master_failover_state] == \"no-failover\"\n        } else {\n            fail \"Failover from node 0 to node 1 did not finish\"\n        }\n\n        # stop the write load and make sure no more commands are processed\n        stop_write_load $load_handler\n        wait_load_handlers_disconnected\n\n        $node_2 replicaof $node_1_host $node_1_port\n        wait_for_sync $node_0\n        wait_for_sync $node_2\n\n        assert_match *slave* [$node_0 role]\n        assert_match *master* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        # We should accept psyncs from both nodes\n        assert_equal [expr [s -1 sync_partial_ok] - $initial_psyncs] 2\n        assert_equal [expr [s -1 sync_full] - $initial_syncs] 0\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n\n    test {failover command to any replica works} 
{\n        set initial_psyncs [s -2 sync_partial_ok]\n        set initial_syncs [s -2 sync_full]\n\n        wait_for_ofs_sync $node_1 $node_2\n        # We stop node 0 and make sure node 2 is selected\n        pause_process $node_0_pid\n        $node_1 set CASE 1\n        $node_1 FAILOVER\n\n        # Wait for failover to end\n        wait_for_condition 50 100 {\n            [s -1 master_failover_state] == \"no-failover\"\n        } else {\n            fail \"Failover from node 1 to node 2 did not finish\"\n        }\n        resume_process $node_0_pid\n        $node_0 replicaof $node_2_host $node_2_port\n\n        wait_for_sync $node_0\n        wait_for_sync $node_1\n\n        assert_match *slave* [$node_0 role]\n        assert_match *slave* [$node_1 role]\n        assert_match *master* [$node_2 role]\n\n        # We should accept psyncs from both nodes\n        assert_equal [expr [s -2 sync_partial_ok] - $initial_psyncs] 2\n        assert_equal [expr [s -2 sync_full] - $initial_syncs] 0\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n\n    test {failover to a replica with force works} {\n        set initial_psyncs [s 0 sync_partial_ok]\n        set initial_syncs [s 0 sync_full]\n\n        pause_process $node_0_pid\n        # node 0 will never acknowledge this write\n        $node_2 set case 2\n        $node_2 failover to $node_0_host $node_0_port TIMEOUT 100 FORCE\n\n        # Wait for node 0 to give up on sync attempt and start failover\n        wait_for_condition 50 100 {\n            [s -2 master_failover_state] == \"failover-in-progress\"\n        } else {\n            fail \"Failover from node 2 to node 0 did not time out\"\n        }\n\n        # Quick check that everyone is a replica, we never want a\n        # state where there are two masters.\n        assert_match *slave* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        resume_process $node_0_pid\n\n        # Wait for failover to end\n        wait_for_condition 50 
100 {\n            [s -2 master_failover_state] == \"no-failover\"\n        } else {\n            fail \"Failover from node 2 to node 0 did not finish\"\n        }\n        $node_1 replicaof $node_0_host $node_0_port\n\n        wait_for_sync $node_1\n        wait_for_sync $node_2\n\n        assert_match *master* [$node_0 role]\n        assert_match *slave* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        assert_equal [count_log_message -2 \"time out exceeded, failing over.\"] 1\n\n        # Ideally we'd see two partial syncs, but since the replica never caught\n        # up, slow environments (TSAN, IO threads, etc) can force a full resync\n        # instead of a partial one. Count both partial and full syncs and verify\n        # the total increments by two.\n        set psyncs [expr [s 0 sync_partial_ok] - $initial_psyncs]\n        set full_syncs [expr [s 0 sync_full] - $initial_syncs]\n        # Either we get 2 partial syncs, or some combination of partial/full that totals 2\n        assert_lessthan_equal $psyncs 2\n        assert_morethan_equal $full_syncs 0\n        assert_equal [expr $psyncs + $full_syncs] 2\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n\n    test {failover with timeout aborts if replica never catches up} {\n        set initial_psyncs [s 0 sync_partial_ok]\n        set initial_syncs [s 0 sync_full]\n\n        # Stop replica so it never catches up\n        pause_process [srv -1 pid]\n        $node_0 SET CASE 1\n\n        $node_0 failover to [srv -1 host] [srv -1 port] TIMEOUT 500\n        # Wait for failover to end\n        wait_for_condition 50 20 {\n            [s 0 master_failover_state] == \"no-failover\"\n        } else {\n            fail \"Failover from node_0 to replica did not finish\"\n        }\n\n        resume_process [srv -1 pid]\n\n        # We need to make sure the nodes actually sync back up\n        
wait_for_ofs_sync $node_0 $node_1\n        wait_for_ofs_sync $node_0 $node_2\n\n        assert_match *master* [$node_0 role]\n        assert_match *slave* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        # Since we never caught up, there should be no syncs\n        assert_equal [expr [s 0 sync_partial_ok] - $initial_psyncs] 0\n        assert_equal [expr [s 0 sync_full] - $initial_syncs] 0\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n\n    test {failovers can be aborted} {\n        set initial_psyncs [s 0 sync_partial_ok]\n        set initial_syncs [s 0 sync_full]\n    \n        # Stop replica so it never catches up\n        pause_process [srv -1 pid]\n        $node_0 SET CASE 2\n        \n        $node_0 failover to [srv -1 host] [srv -1 port] TIMEOUT 60000\n        assert_match [s 0 master_failover_state] \"waiting-for-sync\"\n\n        # Sanity check that read commands are still accepted\n        $node_0 GET CASE\n\n        $node_0 failover abort\n        assert_match [s 0 master_failover_state] \"no-failover\"\n\n        resume_process [srv -1 pid]\n\n        # Just make sure everything is still synced\n        wait_for_ofs_sync $node_0 $node_1\n        wait_for_ofs_sync $node_0 $node_2\n\n        assert_match *master* [$node_0 role]\n        assert_match *slave* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        # Since we never caught up, there should be no syncs\n        assert_equal [expr [s 0 sync_partial_ok] - $initial_psyncs] 0\n        assert_equal [expr [s 0 sync_full] - $initial_syncs] 0\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n\n    test {failover aborts if target rejects sync request} {\n        set initial_psyncs [s 0 sync_partial_ok]\n        set initial_syncs [s 0 sync_full]\n\n        # We block psync, so the failover will fail\n        $node_1 acl setuser default -psync\n\n        # We pause the target long enough to send a write command\n        # during the pause. 
This write will not be interrupted.\n        pause_process [srv -1 pid]\n        set rd [redis_deferring_client]\n        $rd SET FOO BAR\n        $node_0 failover to $node_1_host $node_1_port\n        resume_process [srv -1 pid]\n\n        # Wait for failover to end\n        wait_for_condition 50 100 {\n            [s 0 master_failover_state] == \"no-failover\"\n        } else {\n            fail \"Failover from node_0 to replica did not finish\"\n        }\n\n        assert_equal [$rd read] \"OK\"\n        $rd close\n\n        # restore access to psync\n        $node_1 acl setuser default +psync\n\n        # We need to make sure the nodes actually sync back up\n        wait_for_sync $node_1\n        wait_for_sync $node_2\n\n        assert_match *master* [$node_0 role]\n        assert_match *slave* [$node_1 role]\n        assert_match *slave* [$node_2 role]\n\n        # We will cycle all of our replicas here and force a psync.\n        assert_equal [expr [s 0 sync_partial_ok] - $initial_psyncs] 2\n        assert_equal [expr [s 0 sync_full] - $initial_syncs] 0\n\n        assert_equal [count_log_message 0 \"Failover target rejected psync request\"] 1\n        assert_digests_match $node_0 $node_1 $node_2\n    }\n}\n}\n}\n"
  },
  {
    "path": "tests/integration/logging.tcl",
    "content": "tags {\"external:skip\"} {\n\nset system_name [string tolower [exec uname -s]]\nset backtrace_supported [system_backtrace_supported]\nset threads_mngr_supported 0 ;# Do we support printing stack trace from all threads, not just the one that got the signal?\nif {$system_name eq {linux}} {\n    set threads_mngr_supported 1\n}\n\n# look for the DEBUG command in the backtrace, used when we triggered\n# a stack trace print while we know redis is running that command.\nproc check_log_backtrace_for_debug {log_pattern} {\n    # search for the final line in the stacktraces generation to make sure it was completed.\n    set pattern \"* STACK TRACE DONE *\"\n    set res [wait_for_log_messages 0 \\\"$pattern\\\" 0 100 100]\n\n    set res [wait_for_log_messages 0 \\\"$log_pattern\\\" 0 100 100]\n    if {$::verbose} { puts $res}\n\n    # If the stacktrace is printed more than once, it means redis crashed during crash report generation\n    assert_equal [count_log_message 0 \"STACK TRACE -\"] 1\n\n    upvar threads_mngr_supported threads_mngr_supported\n\n    # the following checks are only done if we support printing stack trace from all threads\n    if {$threads_mngr_supported} {\n        assert_equal [count_log_message 0 \"setupStacktracePipe failed\"] 0\n        assert_equal [count_log_message 0 \"failed to open /proc/\"] 0\n        assert_equal [count_log_message 0 \"failed to find SigBlk or/and SigIgn\"] 0\n        # the following are skipped since valgrind is slow and a timeout can happen\n        if {!$::valgrind} {\n            assert_equal [count_log_message 0 \"wait_threads(): waiting threads timed out\"] 0\n            # make sure redis prints stack trace for all threads. 
we know 3 threads are idle in bio.c\n            assert_equal [count_log_message 0 \"bioProcessBackgroundJobs\"] 3\n        }\n    }\n\n    set pattern \"*debugCommand*\"\n    set res [wait_for_log_messages 0 \\\"$pattern\\\" 0 100 100]\n    if {$::verbose} { puts $res}\n}\n\n# used when backtrace_supported == 0\nproc check_crash_log {log_pattern} {\n    set res [wait_for_log_messages 0 \\\"$log_pattern\\\" 0 50 100]\n    if {$::verbose} { puts $res }\n}\n\n# test the watchdog and the stack trace report from multiple threads\nif {$backtrace_supported} {\n    set server_path [tmpdir server.log]\n    start_server [list overrides [list dir $server_path]] {\n        test \"Server is able to generate a stack trace on selected systems\" {\n            r config set watchdog-period 200\n            r debug sleep 1\n            \n            check_log_backtrace_for_debug \"*WATCHDOG TIMER EXPIRED*\"\n            # make sure redis is still alive\n            assert_equal \"PONG\" [r ping]\n        }\n    }\n}\n\n# Valgrind will complain that the process terminated by a signal, skip it.\nif {!$::valgrind} {\n    if {$backtrace_supported} {\n        set check_cb check_log_backtrace_for_debug\n    } else {  \n        set check_cb check_crash_log\n    }\n\n    # test being killed by a SIGABRT from outside\n    set server_path [tmpdir server1.log]\n    start_server [list overrides [list dir $server_path crash-memcheck-enabled no]] {\n        test \"Crash report generated on SIGABRT\" {\n            set pid [s process_id]\n            r deferred 1\n            r debug sleep 10 ;# so that we see the function in the stack trace\n            r flush\n            after 100 ;# wait for redis to get into the sleep\n            exec kill -SIGABRT $pid\n            $check_cb \"*crashed by signal*\"\n        }\n    }\n\n    # test DEBUG SEGFAULT\n    set server_path [tmpdir server2.log]\n    start_server [list overrides [list dir $server_path crash-memcheck-enabled no]] {\n        test 
\"Crash report generated on DEBUG SEGFAULT\" {\n            catch {r debug segfault}\n            $check_cb \"*crashed by signal*\"\n        }\n    }\n\n    # test DEBUG SIGALRM being non-fatal\n    set server_path [tmpdir server3.log]\n    start_server [list overrides [list dir $server_path]] {\n        test \"Stacktraces generated on SIGALRM\" {\n            set pid [s process_id]\n            r deferred 1\n            r debug sleep 10 ;# so that we see the function in the stack trace\n            r flush\n            after 100 ;# wait for redis to get into the sleep\n            exec kill -SIGALRM $pid\n            $check_cb \"*Received SIGALRM*\"\n            r read\n            r deferred 0\n            # make sure redis is still alive\n            assert_equal \"PONG\" [r ping]\n        }\n    }\n}\n\n# test DEBUG ASSERT\nif {$backtrace_supported} {\n    set server_path [tmpdir server4.log]\n    # Use exit() instead of abort() upon assertion so Valgrind tests won't fail.\n    start_server [list overrides [list dir $server_path use-exit-on-panic yes crash-memcheck-enabled no]] {\n        test \"Generate stacktrace on assertion\" {\n            catch {r debug assert}\n            check_log_backtrace_for_debug \"*ASSERTION FAILED*\"\n        }\n    }\n}\n\n# Tests that when `hide-user-data-from-log` is enabled, user information from logs is hidden\nif {$backtrace_supported} {\n    if {!$::valgrind} {\n        set server_path [tmpdir server5.log]\n        start_server [list overrides [list dir $server_path crash-memcheck-enabled no]] {\n            test \"Crash report generated on DEBUG SEGFAULT with user data hidden when 'hide-user-data-from-log' is enabled\" {\n                r config set hide-user-data-from-log yes\n                catch {r debug segfault}\n                check_log_backtrace_for_debug \"*crashed by signal*\"\n                check_log_backtrace_for_debug \"*argv*0*: *debug*\"\n                check_log_backtrace_for_debug \"*argv*1*: 
*redacted*\"\n                check_log_backtrace_for_debug \"*hide-user-data-from-log is on, skip logging stack content to avoid spilling PII*\"\n            }\n        }\n    }\n\n    set server_path [tmpdir server6.log]\n    start_server [list overrides [list dir $server_path use-exit-on-panic yes crash-memcheck-enabled no]] {\n        test \"Generate stacktrace on assertion with user data hidden when 'hide-user-data-from-log' is enabled\" {\n            r config set hide-user-data-from-log yes\n            catch {r debug assert}\n            check_log_backtrace_for_debug \"*ASSERTION FAILED*\"\n            check_log_backtrace_for_debug \"*argv*0* = *debug*\"\n            check_log_backtrace_for_debug \"*argv*1* = *redacted*\"\n        }\n    }\n}\n\n}\n"
  },
  {
    "path": "tests/integration/psync2-master-restart.tcl",
    "content": "start_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\nstart_server {} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    set replica [srv -1 client]\n    set replica_host [srv -1 host]\n    set replica_port [srv -1 port]\n\n    set sub_replica [srv -2 client]\n\n    # Make sure the server saves an RDB on shutdown\n    $master config set save \"3600 1\"\n\n    # Because we will test partial resync later, we don't want a timeout to cause\n    # the master-replica disconnect, then the extra reconnections will break the\n    # sync_partial_ok stat test\n    $master config set repl-timeout 3600\n    $replica config set repl-timeout 3600\n    $sub_replica config set repl-timeout 3600\n\n    # Avoid PINGs\n    $master config set repl-ping-replica-period 3600\n    $master config rewrite\n\n    # Build replication chain\n    $replica replicaof $master_host $master_port\n    $sub_replica replicaof $replica_host $replica_port\n\n    wait_for_condition 50 100 {\n        [status $replica master_link_status] eq {up} &&\n        [status $sub_replica master_link_status] eq {up}\n    } else {\n        fail \"Replication not started.\"\n    }\n\n    test \"PSYNC2: Partial resync after Master restart using RDB aux fields when offset is 0\" {\n        assert {[status $master master_repl_offset] == 0}\n\n        set replid [status $master master_replid]\n        $replica config resetstat\n\n        catch {\n            restart_server 0 true false true now\n            set master [srv 0 client]\n        }\n        wait_for_condition 50 1000 {\n            [status $replica master_link_status] eq {up} &&\n            [status $sub_replica master_link_status] eq {up}\n        } else {\n            fail \"Replicas didn't sync after master restart\"\n        }\n\n        # Make sure master restore replication info correctly\n        assert {[status $master master_replid] != $replid}\n        assert 
{[status $master master_repl_offset] == 0}\n        assert {[status $master master_replid2] eq $replid}\n        assert {[status $master second_repl_offset] == 1}\n\n        # Make sure master sets the replication backlog correctly\n        assert {[status $master repl_backlog_active] == 1}\n        assert {[status $master repl_backlog_first_byte_offset] == 1}\n        assert {[status $master repl_backlog_histlen] == 0}\n\n        # Partial resync after Master restart\n        assert {[status $master sync_partial_ok] == 1}\n        assert {[status $replica sync_partial_ok] == 1}\n    }\n\n    # Generate some data\n    createComplexDataset $master 1000\n\n    test \"PSYNC2: Partial resync after Master restart using RDB aux fields with data\" {\n        wait_for_condition 500 100 {\n            [status $master master_repl_offset] == [status $replica master_repl_offset] &&\n            [status $master master_repl_offset] == [status $sub_replica master_repl_offset]\n        } else {\n            fail \"Replicas and master offsets were unable to match *exactly*.\"\n        }\n\n        set replid [status $master master_replid]\n        set offset [status $master master_repl_offset]\n        $replica config resetstat\n\n        catch {\n            # SHUTDOWN NOW ensures master doesn't send GETACK to replicas before\n            # shutting down, which would affect the replication offset.\n            restart_server 0 true false true now\n            set master [srv 0 client]\n        }\n        wait_for_condition 50 1000 {\n            [status $replica master_link_status] eq {up} &&\n            [status $sub_replica master_link_status] eq {up}\n        } else {\n            fail \"Replicas didn't sync after master restart\"\n        }\n\n        # Make sure master restores replication info correctly\n        assert {[status $master master_replid] != $replid}\n        assert {[status $master master_repl_offset] == $offset}\n        assert {[status $master master_replid2] eq 
$replid}\n        assert {[status $master second_repl_offset] == [expr $offset+1]}\n\n        # Make sure master sets the replication backlog correctly\n        assert {[status $master repl_backlog_active] == 1}\n        assert {[status $master repl_backlog_first_byte_offset] == [expr $offset+1]}\n        assert {[status $master repl_backlog_histlen] == 0}\n\n        # Partial resync after Master restart\n        assert {[status $master sync_partial_ok] == 1}\n        assert {[status $replica sync_partial_ok] == 1}\n    }\n\n    test \"PSYNC2: Partial resync after Master restart using RDB aux fields with expire\" {\n        $master debug set-active-expire 0\n        for {set j 0} {$j < 1024} {incr j} {\n            $master select [expr $j%16]\n            $master set $j somevalue px 10\n        }\n\n        after 20\n\n        # Wait until master has received ACK from replica. If the master thinks\n        # that any replica is lagging when it shuts down, it will send\n        # GETACK to the replicas, affecting the replication offset.\n        set offset [status $master master_repl_offset]\n        wait_for_condition 500 100 {\n            [string match \"*slave0:*,offset=$offset,*\" [$master info replication]] &&\n            $offset == [status $replica master_repl_offset] &&\n            $offset == [status $sub_replica master_repl_offset]\n        } else {\n            show_cluster_status\n            fail \"Replicas and master offsets were unable to match *exactly*.\"\n        }\n\n        set offset [status $master master_repl_offset]\n        $replica config resetstat\n\n        catch {\n            # Unlike the test above, here we use SIGTERM, which behaves\n            # differently compared to SHUTDOWN NOW if there are lagging\n            # replicas. This is just to increase coverage and let each test use\n            # a different shutdown approach. 
In this case there are no lagging\n            # replicas though.\n            restart_server 0 true false\n            set master [srv 0 client]\n        }\n        wait_for_condition 50 1000 {\n            [status $replica master_link_status] eq {up} &&\n            [status $sub_replica master_link_status] eq {up}\n        } else {\n            fail \"Replicas didn't sync after master restart\"\n        }\n\n        set expired_offset [status $master repl_backlog_histlen]\n        # Stale keys expired and master_repl_offset grows correctly\n        assert {[status $master rdb_last_load_keys_expired] == 1024}\n        assert {[status $master master_repl_offset] == [expr $offset+$expired_offset]}\n\n        # Partial resync after Master restart\n        assert {[status $master sync_partial_ok] == 1}\n        assert {[status $replica sync_partial_ok] == 1}\n\n        set digest [$master debug digest]\n        wait_for_condition 10 100 {\n          $digest eq [$replica debug digest] &&\n          $digest eq [$sub_replica debug digest]\n        } else {\n            fail \"Replica and sub-replica didn't sync after master restart in time...\"\n        }\n    }\n\n    test \"PSYNC2: Full resync after Master restart when too many keys expired\" {\n        $master config set repl-backlog-size 16384\n        $master config rewrite\n\n        $master debug set-active-expire 0\n        # Make sure replication backlog is full and will be trimmed.\n        for {set j 0} {$j < 2048} {incr j} {\n            $master select [expr $j%16]\n            $master set $j somevalue px 10\n        }\n\n        ##### hash-field-expiration\n        # Hashes of type OBJ_ENCODING_LISTPACK_EX won't be discarded during\n        # RDB load, even if they are expired.\n        $master hset myhash1 f1 v1 f2 v2 f3 v3\n        $master hpexpire myhash1 10 FIELDS 3 f1 f2 f3\n        # Hashes of type RDB_TYPE_HASH_METADATA will be discarded during RDB load.\n        $master config set 
hash-max-listpack-entries 0\n        $master hset myhash2 f1 v1 f2 v2\n        $master hpexpire myhash2 10 FIELDS 2 f1 f2\n        $master config set hash-max-listpack-entries 1\n\n        after 20\n\n        wait_for_condition 500 100 {\n            [status $master master_repl_offset] == [status $replica master_repl_offset] &&\n            [status $master master_repl_offset] == [status $sub_replica master_repl_offset]\n        } else {\n            fail \"Replicas and master offsets were unable to match *exactly*.\"\n        }\n\n        $replica config resetstat\n\n        catch {\n            # Unlike the test above, here we use SIGTERM. This is just to\n            # increase coverage and let each test use a different shutdown\n            # approach.\n            restart_server 0 true false\n            set master [srv 0 client]\n        }\n        wait_for_condition 50 1000 {\n            [status $replica master_link_status] eq {up} &&\n            [status $sub_replica master_link_status] eq {up}\n        } else {\n            fail \"Replicas didn't sync after master restart\"\n        }\n\n        # Replication backlog is full\n        assert {[status $master repl_backlog_first_byte_offset] > [status $master second_repl_offset]}\n        assert {[status $master sync_partial_ok] == 0}\n        assert {[status $master sync_full] == 1}\n        assert {[status $master rdb_last_load_keys_expired] == 2048}\n        assert {[status $replica sync_full] == 1}\n\n        set digest [$master debug digest]\n        assert {$digest eq [$replica debug digest]}\n        assert {$digest eq [$sub_replica debug digest]}\n    }\n}}}\n"
  },
  {
    "path": "tests/integration/psync2-pingoff.tcl",
    "content": "# These tests were added together with the meaningful offset implementation\n# in redis 6.0.0, which was later abandoned in 6.0.4, they used to test that\n# servers are able to PSYNC with replicas even if the replication stream has\n# PINGs at the end which present in one sever and missing on another.\n# We keep these tests just because they reproduce edge cases in the replication\n# logic in hope they'll be able to spot some problem in the future.\n\nstart_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\n    # Config\n    set debug_msg 0                 ; # Enable additional debug messages\n\n    for {set j 0} {$j < 2} {incr j} {\n        set R($j) [srv [expr 0-$j] client]\n        set R_host($j) [srv [expr 0-$j] host]\n        set R_port($j) [srv [expr 0-$j] port]\n        $R($j) CONFIG SET repl-ping-replica-period 1\n        if {$debug_msg} {puts \"Log file: [srv [expr 0-$j] stdout]\"}\n    }\n\n    # Setup replication\n    test \"PSYNC2 pingoff: setup\" {\n        $R(1) replicaof $R_host(0) $R_port(0)\n        $R(0) set foo bar\n        wait_for_condition 50 1000 {\n            [status $R(1) master_link_status] == \"up\" &&\n            [$R(0) dbsize] == 1 && [$R(1) dbsize] == 1\n        } else {\n            fail \"Replicas not replicating from master\"\n        }\n    }\n\n    test \"PSYNC2 pingoff: write and wait replication\" {\n        $R(0) INCR counter\n        $R(0) INCR counter\n        $R(0) INCR counter\n        wait_for_condition 50 1000 {\n            [$R(0) GET counter] eq [$R(1) GET counter]\n        } else {\n            fail \"Master and replica don't agree about counter\"\n        }\n    }\n\n    # In this test we'll make sure the replica will get stuck, but with\n    # an active connection: this way the master will continue to send PINGs\n    # every second (we modified the PING period earlier)\n    test \"PSYNC2 pingoff: pause replica and promote it\" {\n        $R(1) MULTI\n        $R(1) DEBUG SLEEP 5\n        
$R(1) SLAVEOF NO ONE\n        $R(1) EXEC\n        $R(1) ping ; # Wait for it to become available again\n    }\n\n    test \"Make the old master a replica of the new one and check conditions\" {\n        # We set the new master's ping period to a high value, so that there's\n        # no chance of a race condition where a PING is sent in between the\n        # two INFO calls in the assert for master_repl_offset match below.\n        $R(1) CONFIG SET repl-ping-replica-period 1000\n\n        assert_equal [status $R(1) sync_full] 0\n        $R(0) REPLICAOF $R_host(1) $R_port(1)\n\n        wait_for_condition 50 1000 {\n            [status $R(0) master_link_status] == \"up\"\n        } else {\n            fail \"The new master was not able to sync\"\n        }\n\n        # make sure replication is still alive and kicking\n        $R(1) incr x\n        wait_for_condition 50 1000 {\n            [status $R(0) loading] == 0 &&\n            [$R(0) get x] == 1\n        } else {\n            fail \"replica didn't get incr\"\n        }\n        assert_equal [status $R(0) master_repl_offset] [status $R(1) master_repl_offset]\n    }\n}}\n\n\nstart_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\nstart_server {} {\nstart_server {} {\nstart_server {} {\n    test {test various edge cases of repl topology changes with missing pings at the end} {\n        set master [srv -4 client]\n        set master_host [srv -4 host]\n        set master_port [srv -4 port]\n        set replica1 [srv -3 client]\n        set replica2 [srv -2 client]\n        set replica3 [srv -1 client]\n        set replica4 [srv -0 client]\n\n        $replica1 replicaof $master_host $master_port\n        $replica2 replicaof $master_host $master_port\n        $replica3 replicaof $master_host $master_port\n        $replica4 replicaof $master_host $master_port\n        wait_for_condition 50 1000 {\n            [status $master connected_slaves] == 4\n        } else {\n            fail \"replicas didn't 
connect\"\n        }\n\n        $master incr x\n        wait_for_condition 50 1000 {\n            [$replica1 get x] == 1 && [$replica2 get x] == 1 &&\n            [$replica3 get x] == 1 && [$replica4 get x] == 1\n        } else {\n            fail \"replicas didn't get incr\"\n        }\n\n        # disconnect replica1 and replica2\n        # and wait for the master to send a ping to replica3 and replica4\n        $replica1 replicaof no one\n        $replica2 replicaof 127.0.0.1 1 ;# we can't promote it to master since that will cycle the replication id\n        $master config set repl-ping-replica-period 1\n        set replofs [status $master master_repl_offset]\n        wait_for_condition 50 100 {\n            [status $replica3 master_repl_offset] > $replofs &&\n            [status $replica4 master_repl_offset] > $replofs\n        } else {\n            fail \"replica didn't sync in time\"\n        }\n\n        # make everyone sync from the replica1 that didn't get the last ping from the old master\n        # replica4 will keep syncing from the old master which now syncs from replica1\n        # and replica2 will re-connect to the old master (which went back in time)\n        set new_master_host [srv -3 host]\n        set new_master_port [srv -3 port]\n        $replica3 replicaof $new_master_host $new_master_port\n        $master replicaof $new_master_host $new_master_port\n        $replica2 replicaof $master_host $master_port\n        wait_for_condition 50 1000 {\n            [status $replica2 master_link_status] == \"up\" &&\n            [status $replica3 master_link_status] == \"up\" &&\n            [status $replica4 master_link_status] == \"up\" &&\n            [status $master master_link_status] == \"up\"\n        } else {\n            fail \"replicas didn't connect\"\n        }\n\n        # make sure replication is still alive and kicking\n        $replica1 incr x\n        wait_for_condition 50 1000 {\n            [$replica2 get x] == 2 &&\n            
[$replica3 get x] == 2 &&\n            [$replica4 get x] == 2 &&\n            [$master get x] == 2\n        } else {\n            fail \"replicas didn't get incr\"\n        }\n\n        # make sure we have the right amount of full syncs\n        assert_equal [status $master sync_full] 6\n        assert_equal [status $replica1 sync_full] 2\n        assert_equal [status $replica2 sync_full] 0\n        assert_equal [status $replica3 sync_full] 0\n        assert_equal [status $replica4 sync_full] 0\n\n        # force psync\n        $master client kill type master\n        $replica2 client kill type master\n        $replica3 client kill type master\n        $replica4 client kill type master\n\n        # make sure replication is still alive and kicking\n        $replica1 incr x\n        wait_for_condition 50 1000 {\n            [$replica2 get x] == 3 &&\n            [$replica3 get x] == 3 &&\n            [$replica4 get x] == 3 &&\n            [$master get x] == 3\n        } else {\n            fail \"replicas didn't get incr\"\n        }\n\n        # make sure we have the right amount of full syncs\n        assert_equal [status $master sync_full] 6\n        assert_equal [status $replica1 sync_full] 2\n        assert_equal [status $replica2 sync_full] 0\n        assert_equal [status $replica3 sync_full] 0\n        assert_equal [status $replica4 sync_full] 0\n}\n}}}}}\n\nstart_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\nstart_server {} {\n\n    for {set j 0} {$j < 3} {incr j} {\n        set R($j) [srv [expr 0-$j] client]\n        set R_host($j) [srv [expr 0-$j] host]\n        set R_port($j) [srv [expr 0-$j] port]\n        $R($j) CONFIG SET repl-ping-replica-period 1\n    }\n\n    test \"Chained replicas disconnect when replica re-connect with the same master\" {\n        # Add a second replica as a chained replica of the current replica\n        $R(1) replicaof $R_host(0) $R_port(0)\n        $R(2) replicaof $R_host(1) $R_port(1)\n        
wait_for_condition 50 1000 {\n            [status $R(2) master_link_status] == \"up\"\n        } else {\n            fail \"Chained replica not replicating from its master\"\n        }\n\n        # Do a write on the master, and wait for the master to\n        # send some PINGs to its replica\n        $R(0) INCR counter2\n        set replofs [status $R(0) master_repl_offset]\n        wait_for_condition 50 100 {\n            [status $R(1) master_repl_offset] > $replofs &&\n            [status $R(2) master_repl_offset] > $replofs\n        } else {\n            fail \"replica didn't sync in time\"\n        }\n        set sync_partial_master [status $R(0) sync_partial_ok]\n        set sync_partial_replica [status $R(1) sync_partial_ok]\n        $R(0) CONFIG SET repl-ping-replica-period 100\n\n        # Disconnect the master's direct replica\n        $R(0) client kill type replica\n        wait_for_condition 50 1000 {\n            [status $R(1) master_link_status] == \"up\" && \n            [status $R(2) master_link_status] == \"up\" &&\n            [status $R(0) sync_partial_ok] == $sync_partial_master + 1 &&\n            [status $R(1) sync_partial_ok] == $sync_partial_replica\n        } else {\n            fail \"Disconnected replica failed to PSYNC with master\"\n        }\n\n        # Verify that the replica and its replica's meaningful and real\n        # offsets match with the master\n        assert_equal [status $R(0) master_repl_offset] [status $R(1) master_repl_offset]\n        assert_equal [status $R(0) master_repl_offset] [status $R(2) master_repl_offset]\n\n        # make sure replication is still alive and kicking\n        $R(0) incr counter2\n        wait_for_condition 50 1000 {\n            [$R(1) get counter2] == 2 && [$R(2) get counter2] == 2\n        } else {\n            fail \"replicas didn't get incr\"\n        }\n        assert_equal [status $R(0) master_repl_offset] [status $R(1) master_repl_offset]\n        assert_equal [status $R(0) 
master_repl_offset] [status $R(2) master_repl_offset]\n    }\n}}}\n"
  },
  {
    "path": "tests/integration/psync2-reg.tcl",
    "content": "# Issue 3899 regression test.\n# We create a chain of three instances: master -> slave -> slave2\n# and continuously break the link while traffic is generated by\n# redis-benchmark. At the end we check that the data is the same\n# everywhere.\n\nstart_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\nstart_server {} {\n    # Config\n    set debug_msg 0                 ; # Enable additional debug messages\n\n    set no_exit 0                   ; # Do not exit at end of the test\n\n    set duration 20                 ; # Total test seconds\n\n    for {set j 0} {$j < 3} {incr j} {\n        set R($j) [srv [expr 0-$j] client]\n        set R_host($j) [srv [expr 0-$j] host]\n        set R_port($j) [srv [expr 0-$j] port]\n        set R_unixsocket($j) [srv [expr 0-$j] unixsocket]\n        if {$debug_msg} {puts \"Log file: [srv [expr 0-$j] stdout]\"}\n    }\n\n    # Setup the replication and backlog parameters\n    test \"PSYNC2 #3899 regression: setup\" {\n        $R(1) slaveof $R_host(0) $R_port(0)\n        $R(2) slaveof $R_host(0) $R_port(0)\n        $R(0) set foo bar\n        wait_for_condition 50 1000 {\n            [status $R(1) master_link_status] == \"up\" &&\n            [status $R(2) master_link_status] == \"up\" &&\n            [$R(1) dbsize] == 1 &&\n            [$R(2) dbsize] == 1\n        } else {\n            fail \"Replicas not replicating from master\"\n        }\n        $R(0) config set repl-backlog-size 10mb\n        $R(1) config set repl-backlog-size 10mb\n    }\n\n    set cycle_start_time [clock milliseconds]\n    set bench_pid [exec src/redis-benchmark -s $R_unixsocket(0) -n 10000000 -r 1000 incr __rand_int__ > /dev/null &]\n    while 1 {\n        set elapsed [expr {[clock milliseconds]-$cycle_start_time}]\n        if {$elapsed > $duration*1000} break\n        if {rand() < .05} {\n            test \"PSYNC2 #3899 regression: kill first replica\" {\n                $R(1) client kill type master\n            }\n        }\n     
   if {rand() < .05} {\n            test \"PSYNC2 #3899 regression: kill chained replica\" {\n                $R(2) client kill type master\n            }\n        }\n        after 100\n    }\n    exec kill -9 $bench_pid\n\n    if {$debug_msg} {\n        for {set j 0} {$j < 100} {incr j} {\n            if {\n                [$R(0) debug digest] == [$R(1) debug digest] &&\n                [$R(1) debug digest] == [$R(2) debug digest]\n            } break\n            puts [$R(0) debug digest]\n            puts [$R(1) debug digest]\n            puts [$R(2) debug digest]\n            after 1000\n        }\n    }\n\n    test \"PSYNC2 #3899 regression: verify consistency\" {\n        wait_for_condition 50 1000 {\n            ([$R(0) debug digest] eq [$R(1) debug digest]) &&\n            ([$R(1) debug digest] eq [$R(2) debug digest])\n        } else {\n            fail \"The three instances have different data sets\"\n        }\n    }\n}}}\n"
  },
  {
    "path": "tests/integration/psync2.tcl",
    "content": "\nproc show_cluster_status {} {\n    uplevel 1 {\n        # The following is the regexp we use to match the log line\n        # time info. Logs are in the following form:\n        #\n        # 11296:M 25 May 2020 17:37:14.652 # Server initialized\n        set log_regexp {^[0-9]+:[A-Z] [0-9]+ [A-z]+ [0-9]+ ([0-9:.]+) .*}\n        set repl_regexp {(master|repl|sync|backlog|meaningful|offset)}\n\n        puts \"Master ID is $master_id\"\n        for {set j 0} {$j < 5} {incr j} {\n            puts \"$j: sync_full: [status $R($j) sync_full]\"\n            puts \"$j: id1      : [status $R($j) master_replid]:[status $R($j) master_repl_offset]\"\n            puts \"$j: id2      : [status $R($j) master_replid2]:[status $R($j) second_repl_offset]\"\n            puts \"$j: backlog  : firstbyte=[status $R($j) repl_backlog_first_byte_offset] len=[status $R($j) repl_backlog_histlen]\"\n            puts \"$j: x var is : [$R($j) GET x]\"\n            puts \"---\"\n        }\n\n        # Show the replication logs of every instance, interleaving\n        # them by the log date.\n        #\n        # First: load the lines as lists for each instance.\n        array set log {}\n        for {set j 0} {$j < 5} {incr j} {\n            set fd [open $R_log($j)]\n            while {[gets $fd l] >= 0} {\n                if {[regexp $log_regexp $l] &&\n                    [regexp -nocase $repl_regexp $l]} {\n                    lappend log($j) $l\n                }\n            }\n            close $fd\n        }\n\n        # To interleave the lines, at every step consume the element of\n        # the list with the lowest time and remove it. 
Do it until\n        # all the lists are empty.\n        #\n        # regexp {^[0-9]+:[A-Z] [0-9]+ [A-z]+ [0-9]+ ([0-9:.]+) .*} $l - logdate\n        while 1 {\n            # Find the log with smallest time.\n            set empty 0\n            set best 0\n            set bestdate {}\n            for {set j 0} {$j < 5} {incr j} {\n                if {[llength $log($j)] == 0} {\n                    incr empty\n                    continue\n                }\n                regexp $log_regexp [lindex $log($j) 0] - date\n                if {$bestdate eq {}} {\n                    set best $j\n                    set bestdate $date\n                } else {\n                    if {[string compare $bestdate $date] > 0} {\n                        set best $j\n                        set bestdate $date\n                    }\n                }\n            }\n            if {$empty == 5} break ; # Our exit condition: no more logs\n\n            # Emit the one with the smallest time (that is the first\n            # event in the time line).\n            puts \"\\[$best port $R_port($best)\\] [lindex $log($best) 0]\"\n            set log($best) [lrange $log($best) 1 end]\n        }\n    }\n}\n\nstart_server {tags {\"psync2 external:skip\"}} {\nstart_server {} {\nstart_server {} {\nstart_server {} {\nstart_server {} {\n    set master_id 0                 ; # Current master\n    set start_time [clock seconds]  ; # Test start time\n    set counter_value 0             ; # Current value of the Redis counter \"x\"\n\n    # Config\n    set debug_msg 0                 ; # Enable additional debug messages\n\n    set no_exit 0                   ; # Do not exit at end of the test\n\n    set duration 40                 ; # Total test seconds\n\n    set genload 1                   ; # Load master with writes at every cycle\n\n    set genload_time 5000           ; # Writes duration time in ms\n\n    set disconnect 1                ; # Break replication link between random\n            
                          # master and slave instances while the\n                                      # master is loaded with writes.\n\n    set disconnect_period 1000      ; # Disconnect repl link every N ms.\n\n    for {set j 0} {$j < 5} {incr j} {\n        set R($j) [srv [expr 0-$j] client]\n        set R_host($j) [srv [expr 0-$j] host]\n        set R_port($j) [srv [expr 0-$j] port]\n        set R_id_from_port($R_port($j)) $j ;# To get a replica index by port\n        set R_log($j) [srv [expr 0-$j] stdout]\n        if {$debug_msg} {puts \"Log file: [srv [expr 0-$j] stdout]\"}\n    }\n\n    set cycle 0\n    while {([clock seconds]-$start_time) < $duration} {\n        incr cycle\n        test \"PSYNC2: --- CYCLE $cycle ---\" {}\n\n        # Create a random replication layout.\n        # Start with switching master (this simulates a failover).\n\n        # 1) Select the new master.\n        set master_id [randomInt 5]\n        set used [list $master_id]\n        test \"PSYNC2: \\[NEW LAYOUT\\] Set #$master_id as master\" {\n            $R($master_id) slaveof no one\n            $R($master_id) config set repl-ping-replica-period 1 ;# increase the chance that random ping will cause issues\n            if {$counter_value == 0} {\n                $R($master_id) set x $counter_value\n            }\n        }\n\n        # Build a lookup with the root master of each replica (head of the chain).\n        array set root_master {}\n        for {set j 0} {$j < 5} {incr j} {\n            set r $j\n            while {1} {\n                set r_master_port [status $R($r) master_port]\n                if {$r_master_port == \"\"} {\n                    set root_master($j) $r\n                    break\n                }\n                set r_master_id $R_id_from_port($r_master_port)\n                set r $r_master_id\n            }\n        }\n\n        # Wait for the newly detached master-replica chain (new master and existing replicas that were\n        # already connected 
to it) to get updated on the new replication id.\n        # This is needed to avoid a race that can result in a full sync when a replica that already\n        # got an updated repl id tries to psync from one that's not yet aware of it.\n        wait_for_condition 50 1000 {\n            ([status $R(0) master_replid] == [status $R($root_master(0)) master_replid]) &&\n            ([status $R(1) master_replid] == [status $R($root_master(1)) master_replid]) &&\n            ([status $R(2) master_replid] == [status $R($root_master(2)) master_replid]) &&\n            ([status $R(3) master_replid] == [status $R($root_master(3)) master_replid]) &&\n            ([status $R(4) master_replid] == [status $R($root_master(4)) master_replid])\n        } else {\n            show_cluster_status\n            fail \"Replica did not inherit the new replid.\"\n        }\n\n        # Build a lookup with the direct connection master of each replica.\n        # First loop that uses random to decide who replicates from whom.\n        array set slave_to_master {}\n        while {[llength $used] != 5} {\n            while 1 {\n                set slave_id [randomInt 5]\n                if {[lsearch -exact $used $slave_id] == -1} break\n            }\n            set rand [randomInt [llength $used]]\n            set mid [lindex $used $rand]\n            set slave_to_master($slave_id) $mid\n            lappend used $slave_id\n        }\n\n        # 2) Attach all the slaves to a random instance.\n        # Second loop that runs the actual SLAVEOF commands, making sure to execute them in the right order.\n        while {[array size slave_to_master] > 0} {\n            foreach slave_id [array names slave_to_master] {\n                set mid $slave_to_master($slave_id)\n\n                # We only attach the replica to a random instance that is already in the old/new chain.\n                if {$root_master($mid) == $root_master($master_id)} {\n                    # Find a replica that can be attached to 
the new master's chain.\n                    # My new master is in the new chain.\n                } elseif {$root_master($mid) == $root_master($slave_id)} {\n                    # My new master and I are in the old chain.\n                } else {\n                    # In cycle 1, we do not care about the order.\n                    if {$cycle != 1} {\n                        # Skip this replica for now to avoid attaching in a bad order.\n                        # This is done to avoid an unexpected full sync when we take a\n                        # replica that already reconnected to the new chain and got a new replid,\n                        # and is then set to connect to a master that's still not aware of that new replid.\n                        continue\n                    }\n                }\n\n                set master_host $R_host($master_id)\n                set master_port $R_port($master_id)\n\n                test \"PSYNC2: Set #$slave_id to replicate from #$mid\" {\n                    $R($slave_id) slaveof $master_host $master_port\n                }\n\n                # Wait for the replica to be connected before we proceed.\n                wait_for_condition 50 1000 {\n                    [status $R($slave_id) master_link_status] == \"up\"\n                } else {\n                    show_cluster_status\n                    fail \"Replica not reconnecting.\"\n                }\n\n                set root_master($slave_id) $root_master($mid)\n                unset slave_to_master($slave_id)\n                break\n            }\n        }\n\n        # Wait for replicas to sync, 
so next loop won't get -LOADING error\n        wait_for_condition 50 1000 {\n            [status $R([expr {($master_id+1)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+2)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+3)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+4)%5}]) master_link_status] == \"up\"\n        } else {\n            show_cluster_status\n            fail \"Replica not reconnecting\"\n        }\n\n        # 3) Increment the counter and wait for all the instances\n        # to converge.\n        test \"PSYNC2: cluster is consistent after failover\" {\n            $R($master_id) incr x; incr counter_value\n            for {set j 0} {$j < 5} {incr j} {\n                wait_for_condition 50 1000 {\n                    [$R($j) get x] == $counter_value\n                } else {\n                    show_cluster_status\n                    fail \"Instance #$j x variable is inconsistent\"\n                }\n            }\n        }\n\n        # 4) Generate load while breaking the connection of random\n        # slave-master pairs.\n        test \"PSYNC2: generate load while killing replication links\" {\n            set t [clock milliseconds]\n            set next_break [expr {$t+$disconnect_period}]\n            while {[clock milliseconds]-$t < $genload_time} {\n                if {$genload} {\n                    $R($master_id) incr x; incr counter_value\n                }\n                if {[clock milliseconds] == $next_break} {\n                    set next_break \\\n                        [expr {[clock milliseconds]+$disconnect_period}]\n                    set slave_id [randomInt 5]\n                    if {$disconnect} {\n                        $R($slave_id) client kill type master\n                        if {$debug_msg} {\n                            puts \"+++ Breaking link for replica #$slave_id\"\n                        }\n              
      }\n                }\n            }\n        }\n\n        # 5) Wait for all the instances to converge on the counter value.\n        set x [$R($master_id) get x]\n        test \"PSYNC2: cluster is consistent after load (x = $x)\" {\n            for {set j 0} {$j < 5} {incr j} {\n                wait_for_condition 50 1000 {\n                    [$R($j) get x] == $counter_value\n                } else {\n                    show_cluster_status\n                    fail \"Instance #$j x variable is inconsistent\"\n                }\n            }\n        }\n\n        # wait for all the slaves to be in sync.\n        set masteroff [status $R($master_id) master_repl_offset]\n        wait_for_condition 500 100 {\n            [status $R(0) master_repl_offset] >= $masteroff &&\n            [status $R(1) master_repl_offset] >= $masteroff &&\n            [status $R(2) master_repl_offset] >= $masteroff &&\n            [status $R(3) master_repl_offset] >= $masteroff &&\n            [status $R(4) master_repl_offset] >= $masteroff\n        } else {\n            show_cluster_status\n            fail \"Replica offsets didn't catch up with the master in time.\"\n        }\n\n        if {$debug_msg} {\n            show_cluster_status\n        }\n\n        test \"PSYNC2: total sum of full synchronizations is exactly 4\" {\n            set sum 0\n            for {set j 0} {$j < 5} {incr j} {\n                incr sum [status $R($j) sync_full]\n            }\n            if {$sum != 4} {\n                show_cluster_status\n                assert {$sum == 4}\n            }\n        }\n\n        # In the absence of pings, are the instances really able to have\n        # the exact same offset?\n        $R($master_id) config set repl-ping-replica-period 3600\n        for {set j 0} {$j < 5} {incr j} {\n            if {$j == $master_id} continue\n            $R($j) config set repl-timeout 10000\n        }\n        wait_for_condition 500 100 {\n            [status $R($master_id) 
master_repl_offset] == [status $R(0) master_repl_offset] &&\n            [status $R($master_id) master_repl_offset] == [status $R(1) master_repl_offset] &&\n            [status $R($master_id) master_repl_offset] == [status $R(2) master_repl_offset] &&\n            [status $R($master_id) master_repl_offset] == [status $R(3) master_repl_offset] &&\n            [status $R($master_id) master_repl_offset] == [status $R(4) master_repl_offset]\n        } else {\n            show_cluster_status\n            fail \"Replica and master offsets were unable to match *exactly*.\"\n        }\n\n        # In any case, limit the maximum number of cycles. This is useful when\n        # the test is skipped via the --only option of the test suite. In that\n        # case we don't want to see many seconds of this test being just skipped.\n        if {$cycle > 50} break\n    }\n\n    test \"PSYNC2: Bring the master back again for next test\" {\n        $R($master_id) slaveof no one\n        set master_host $R_host($master_id)\n        set master_port $R_port($master_id)\n        for {set j 0} {$j < 5} {incr j} {\n            if {$j == $master_id} continue\n            $R($j) slaveof $master_host $master_port\n        }\n\n        # Wait for replicas to sync; 
it is not enough to just wait for connected_slaves==4\n        # since we might do the check before the master realizes that they're disconnected\n        wait_for_condition 50 1000 {\n            [status $R($master_id) connected_slaves] == 4 &&\n            [status $R([expr {($master_id+1)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+2)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+3)%5}]) master_link_status] == \"up\" &&\n            [status $R([expr {($master_id+4)%5}]) master_link_status] == \"up\"\n        } else {\n            show_cluster_status\n            fail \"Replica not reconnecting\"\n        }\n    }\n\n    test \"PSYNC2: Partial resync after restart using RDB aux fields\" {\n        # Pick a replica (the one right after the current master)\n        set slave_id [expr {($master_id+1)%5}]\n        set sync_count [status $R($master_id) sync_full]\n        set sync_partial [status $R($master_id) sync_partial_ok]\n        set sync_partial_err [status $R($master_id) sync_partial_err]\n        catch {\n            # Make sure the server saves an RDB on shutdown\n            $R($slave_id) config set save \"900 1\"\n            $R($slave_id) config rewrite\n            restart_server [expr {0-$slave_id}] true false\n            set R($slave_id) [srv [expr {0-$slave_id}] client]\n        }\n        # note: just waiting for connected_slaves==4 has a race condition since\n        # we might do the check before the master realizes that the slave disconnected\n        wait_for_condition 50 1000 {\n            [status $R($master_id) sync_partial_ok] == $sync_partial + 1\n        } else {\n            puts \"prev sync_full: $sync_count\"\n            puts \"prev sync_partial_ok: $sync_partial\"\n            puts \"prev sync_partial_err: $sync_partial_err\"\n            puts [$R($master_id) info stats]\n            show_cluster_status\n            fail \"Replica didn't partial sync\"\n        }\n        set new_sync_count [status 
$R($master_id) sync_full]\n        assert {$sync_count == $new_sync_count}\n    }\n\n    if {$no_exit} {\n        while 1 { puts -nonewline .; flush stdout; after 1000}\n    }\n\n}}}}}\n"
  },
  {
    "path": "tests/integration/rdb.tcl",
    "content": "tags {\"rdb external:skip\"} {\n\nset server_path [tmpdir \"server.rdb-encoding-test\"]\n\n# Copy RDB with different encodings in server path\nexec cp tests/assets/encodings.rdb $server_path\nexec cp tests/assets/list-quicklist.rdb $server_path\n\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"list-quicklist.rdb\" save \"\"]] {\n    test \"test old version rdb file\" {\n        r select 0\n        assert_equal [r get x] 7\n        assert_encoding listpack list\n        r lpop list\n    } {7}\n}\n\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"encodings.rdb\"]] {\n  test \"RDB encoding loading test\" {\n    r select 0\n    csvdump r\n  } {\"0\",\"compressible\",\"string\",\"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\"\n\"0\",\"hash\",\"hash\",\"a\",\"1\",\"aa\",\"10\",\"aaa\",\"100\",\"b\",\"2\",\"bb\",\"20\",\"bbb\",\"200\",\"c\",\"3\",\"cc\",\"30\",\"ccc\",\"300\",\"ddd\",\"400\",\"eee\",\"5000000000\",\n\"0\",\"hash_zipped\",\"hash\",\"a\",\"1\",\"b\",\"2\",\"c\",\"3\",\n\"0\",\"list\",\"list\",\"1\",\"2\",\"3\",\"a\",\"b\",\"c\",\"100000\",\"6000000000\",\"1\",\"2\",\"3\",\"a\",\"b\",\"c\",\"100000\",\"6000000000\",\"1\",\"2\",\"3\",\"a\",\"b\",\"c\",\"100000\",\"6000000000\",\n\"0\",\"list_zipped\",\"list\",\"1\",\"2\",\"3\",\"a\",\"b\",\"c\",\"100000\",\"6000000000\",\n\"0\",\"number\",\"string\",\"10\"\n\"0\",\"set\",\"set\",\"1\",\"100000\",\"2\",\"3\",\"6000000000\",\"a\",\"b\",\"c\",\n\"0\",\"set_zipped_1\",\"set\",\"1\",\"2\",\"3\",\"4\",\n\"0\",\"set_zipped_2\",\"set\",\"100000\",\"200000\",\"300000\",\"400000\",\n\"0\",\"set_zipped_3\",\"set\",\"1000000000\",\"2000000000\",\"3000000000\",\"4000000000\",\"5000000000\",\"6000000000\",\n\"0\",\"string\",\"string\",\"Hello 
World\"\n\"0\",\"zset\",\"zset\",\"a\",\"1\",\"b\",\"2\",\"c\",\"3\",\"aa\",\"10\",\"bb\",\"20\",\"cc\",\"30\",\"aaa\",\"100\",\"bbb\",\"200\",\"ccc\",\"300\",\"aaaa\",\"1000\",\"cccc\",\"123456789\",\"bbbb\",\"5000000000\",\n\"0\",\"zset_zipped\",\"zset\",\"a\",\"1\",\"b\",\"2\",\"c\",\"3\",\n}\n}\n\nset server_path [tmpdir \"server.rdb-startup-test\"]\n\nstart_server [list overrides [list \"dir\" $server_path] keep_persistence true] {\n    test {Server started empty with non-existing RDB file} {\n        debug_digest\n    } {0000000000000000000000000000000000000000}\n    # Save an RDB file, needed for the next test.\n    r save\n}\n\nstart_server [list overrides [list \"dir\" $server_path] keep_persistence true] {\n    test {Server started empty with empty RDB file} {\n        debug_digest\n    } {0000000000000000000000000000000000000000}\n}\n\nstart_server [list overrides [list \"dir\" $server_path] keep_persistence true] {\n    test {Test RDB stream encoding} {\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r xadd stream * foo abc\n            } else {\n                r xadd stream * bar $j\n            }\n        }\n        r xgroup create stream mygroup 0\n        set records [r xreadgroup GROUP mygroup Alice COUNT 2 STREAMS stream >]\n        r xdel stream [lindex [lindex [lindex [lindex $records 0] 1] 1] 0]\n        r xack stream mygroup [lindex [lindex [lindex [lindex $records 0] 1] 0] 0]\n        set digest [debug_digest]\n        r config set sanitize-dump-payload no\n        r debug reload\n        set newdigest [debug_digest]\n        assert {$digest eq $newdigest}\n    }\n    test {Test RDB stream encoding - sanitize dump} {\n        r config set sanitize-dump-payload yes\n        r debug reload\n        set newdigest [debug_digest]\n        assert {$digest eq $newdigest}\n    }\n    # delete the stream, maybe valgrind will find something\n    r del stream\n}\n\nstart_server {overrides {loglevel 
verbose}} {\n    test {RDB load applies RESIZEDB hint to expand hash tables} {\n        # Populate keys and save RDB\n        r flushall sync\n        regexp {db=(\\d+)} [r client info] -> dbid\n        # 500 keys with 3600 second expiration, 500 without\n        populate 500 \"key1:\" 3 0 false 3600\n        populate 500 \"key2:\" 3 0 false 0\n        r save\n\n        restart_server 0 true false\n\n        # Verify DB resize log message\n        verify_log_message 0 \"*DB $dbid resized*1024 key*512 expire*\" 0\n    }\n}\n\n# Helper function to start a server and kill it, just to check the error\n# that gets logged.\nset defaults {}\nproc start_server_and_kill_it {overrides code} {\n    upvar defaults defaults srv srv server_path server_path\n    set config [concat $defaults $overrides]\n    set srv [start_server [list overrides $config keep_persistence true]]\n    uplevel 1 $code\n    kill_server $srv\n}\n\n# Make the RDB file unreadable\nfile attributes [file join $server_path dump.rdb] -permissions 0222\n\n# Detect root account (it is able to read the file even with 0222 permissions)\nset isroot 0\ncatch {\n    open [file join $server_path dump.rdb]\n    set isroot 1\n}\n\n# Now make sure the server aborted with an error\nif {!$isroot} {\n    start_server_and_kill_it [list \"dir\" $server_path] {\n        test {Server should not start if RDB file can't be open} {\n            wait_for_condition 50 100 {\n                [string match {*Fatal error loading*} \\\n                    [exec tail -1 < [dict get $srv stdout]]]\n            } else {\n                fail \"Server started even if RDB was unreadable!\"\n            }\n        }\n    }\n}\n\n# Fix permissions of the RDB file.\nfile attributes [file join $server_path dump.rdb] -permissions 0666\n\n# Corrupt its CRC64 checksum.\nset filesize [file size [file join $server_path dump.rdb]]\nset fd [open [file join $server_path dump.rdb] r+]\nfconfigure $fd -translation binary\nseek $fd -8 end\nputs -nonewline $fd \"foobar00\"; 
# Corrupt the checksum\nclose $fd\n\n# Now make sure the server aborted with an error\nstart_server_and_kill_it [list \"dir\" $server_path] {\n    test {Server should not start if RDB is corrupted} {\n        wait_for_condition 50 100 {\n            [string match {*CRC error*} \\\n                [exec tail -10 < [dict get $srv stdout]]]\n        } else {\n            fail \"Server started even if RDB was corrupted!\"\n        }\n    }\n}\n\nstart_server {} {\n    test {Test FLUSHALL aborts bgsave} {\n        r config set save \"\"\n        # 5000 keys with a 1ms sleep per key should take 5 seconds\n        r config set rdb-key-save-delay 1000\n        populate 5000\n        assert_lessthan 999 [s rdb_changes_since_last_save]\n        r bgsave\n        assert_equal [s rdb_bgsave_in_progress] 1\n        r flushall\n        # wait a second max (the bgsave should take 5)\n        wait_for_condition 10 100 {\n            [s rdb_bgsave_in_progress] == 0\n        } else {\n            fail \"bgsave not aborted\"\n        }\n        # verify that the bgsave failed by checking that the change counter is still high\n        assert_lessthan 999 [s rdb_changes_since_last_save]\n        # make sure the server is still writable\n        r set x xx\n    }\n\n    test {bgsave resets the change counter} {\n        r config set rdb-key-save-delay 0\n        r bgsave\n        wait_for_condition 50 100 {\n            [s rdb_bgsave_in_progress] == 0\n        } else {\n            fail \"bgsave not done\"\n        }\n        assert_equal [s rdb_changes_since_last_save] 0\n    }\n}\n\ntest {client freed during loading} {\n    start_server [list overrides [list key-load-delay 50 loading-process-events-interval-bytes 1024 rdbcompression no save \"900 1\"]] {\n        # create a big rdb that will take a long time to load. 
it is important\n        # for keys to be big since the server processes events only once in 2mb.\n        # 100mb of rdb, 100k keys will load in more than 5 seconds\n        r debug populate 100000 key 1000\n\n        restart_server 0 false false\n\n        # make sure it's still loading\n        assert_equal [s loading] 1\n\n        # connect and disconnect 5 clients\n        set clients {}\n        for {set j 0} {$j < 5} {incr j} {\n            lappend clients [redis_deferring_client]\n        }\n        foreach rd $clients {\n            $rd debug log bla\n        }\n        foreach rd $clients {\n            $rd read\n        }\n        foreach rd $clients {\n            $rd close\n        }\n\n        # make sure the server freed the clients\n        wait_for_condition 100 100 {\n            [s connected_clients] < 3\n        } else {\n            fail \"clients didn't disconnect\"\n        }\n\n        # make sure it's still loading\n        assert_equal [s loading] 1\n\n        # no need to keep waiting for loading to complete\n        exec kill [srv 0 pid]\n    }\n}\n\nstart_server {} {\n    test {Test RDB load info} {\n        r debug populate 1000\n        r save\n        assert {[r lastsave] <= [lindex [r time] 0]}\n        restart_server 0 true false\n        wait_done_loading r\n        assert {[s rdb_last_load_keys_expired] == 0}\n        assert {[s rdb_last_load_keys_loaded] == 1000}\n\n        r debug set-active-expire 0\n        for {set j 0} {$j < 1024} {incr j} {\n            r select [expr $j%16]\n            r set $j somevalue px 10\n        }\n        after 20\n\n        r save\n        restart_server 0 true false\n        wait_done_loading r\n        assert {[s rdb_last_load_keys_expired] == 1024}\n        assert {[s rdb_last_load_keys_loaded] == 1000}\n    }\n}\n\n# Our COW metrics (Private_Dirty) work only on Linux\nset system_name [string tolower [exec uname -s]]\nset page_size [exec getconf PAGESIZE]\nif {$system_name eq {linux} && 
$page_size == 4096} {\n\nstart_server {overrides {save \"\"}} {\n    test {Test child sending info} {\n        # make sure that rdb_last_cow_size and current_cow_size are zero (the test uses a new server),\n        # so that the comparisons during the test will be valid\n        assert {[s current_cow_size] == 0}\n        assert {[s current_save_keys_processed] == 0}\n        assert {[s current_save_keys_total] == 0}\n\n        assert {[s rdb_last_cow_size] == 0}\n\n        # using a 200us delay, the bgsave is empirically taking about 10 seconds.\n        # we need it to take more than about 5 seconds, since redis only reports COW once a second.\n        r config set rdb-key-save-delay 200\n        r config set loglevel debug\n\n        # populate the db with 10k keys of 512B each (since we want to measure the COW size by\n        # changing some keys and reading the reported COW size, we use a small key size to prevent\n        # the \"dismiss mechanism\" from freeing memory and reducing the COW size)\n        set rd [redis_deferring_client 0]\n        set size 500 ;# aim for the 512 bin (sds overhead)\n        set cmd_count 10000\n        for {set k 0} {$k < $cmd_count} {incr k} {\n            $rd set key$k [string repeat A $size]\n        }\n\n        for {set k 0} {$k < $cmd_count} {incr k} {\n            catch { $rd read }\n        }\n\n        $rd close\n\n        # start background rdb save\n        r bgsave\n\n        set current_save_keys_total [s current_save_keys_total]\n        if {$::verbose} {\n            puts \"Keys before bgsave start: $current_save_keys_total\"\n        }\n\n        # on each iteration, we will write some keys to the server to trigger copy-on-write, and\n        # wait to see that it is reflected in INFO.\n        set iteration 1\n        set key_idx 0\n        while 1 {\n            # take samples before writing new data to the server\n            set cow_size [s current_cow_size]\n            if {$::verbose} {\n                puts \"COW 
info before copy-on-write: $cow_size\"\n            }\n\n            set keys_processed [s current_save_keys_processed]\n            if {$::verbose} {\n                puts \"current_save_keys_processed info : $keys_processed\"\n            }\n\n            # trigger copy-on-write\n            set modified_keys 16\n            for {set k 0} {$k < $modified_keys} {incr k} {\n                r setrange key$key_idx 0 [string repeat B $size]\n                incr key_idx 1\n            }\n\n            # changing 16 keys (512B each) will create at least 8192 bytes of COW (2 pages), but we don't\n            # want the test to be too strict, so we check for a change of at least 4096 bytes\n            set exp_cow [expr $cow_size + 4096]\n            # wait to see that the current_cow_size value is updated (as long as the child is in progress)\n            wait_for_condition 80 100 {\n                [s rdb_bgsave_in_progress] == 0 ||\n                [s current_cow_size] >= $exp_cow &&\n                [s current_save_keys_processed] > $keys_processed &&\n                [s current_fork_perc] > 0\n            } else {\n                if {$::verbose} {\n                    puts \"COW info on fail: [s current_cow_size]\"\n                    puts [exec tail -n 100 < [srv 0 stdout]]\n                }\n                fail \"COW info wasn't reported\"\n            }\n\n            # assert that $keys_processed is not greater than the total number of keys.\n            assert_morethan_equal $current_save_keys_total $keys_processed\n\n            # when not running in accurate mode, stop after 2 iterations\n            if {!$::accurate && $iteration == 2} {\n                break\n            }\n\n            # stop iterating if the bgsave completed\n            if { [s rdb_bgsave_in_progress] == 0 } {\n                break\n            }\n\n            incr iteration 1\n        }\n\n        # make sure we saw a report of current_cow_size\n        if {$iteration < 2 && $::verbose} {\n            puts [exec tail -n 100 < 
[srv 0 stdout]]\n        }\n        assert_morethan_equal $iteration 2\n\n        # if bgsave completed, check that rdb_last_cow_size (fork exit report)\n        # is at least 90% of the last rdb_active_cow_size.\n        if { [s rdb_bgsave_in_progress] == 0 } {\n            set final_cow [s rdb_last_cow_size]\n            set cow_size [expr $cow_size * 0.9]\n            if {$final_cow < $cow_size && $::verbose} {\n                puts [exec tail -n 100 < [srv 0 stdout]]\n            }\n            assert_morethan_equal $final_cow $cow_size\n        }\n    }\n}\n} ;# system_name\n\nexec cp -f tests/assets/scriptbackup.rdb $server_path\nstart_server [list overrides [list \"dir\" $server_path \"dbfilename\" \"scriptbackup.rdb\" \"appendonly\" \"no\"]] {\n    # the script is: \"return redis.call('set', 'foo', 'bar')\"\n    # its sha1   is: a0c38691e9fffe4563723c32ba77a34398e090e6\n    test {script won't load anymore if it's in rdb} {\n        assert_equal [r script exists a0c38691e9fffe4563723c32ba77a34398e090e6] 0\n    }\n}\n\nstart_server {} {\n    test \"failed bgsave prevents writes\" {\n        # Make sure the server saves an RDB on shutdown\n        r config set save \"900 1\"\n\n        r config set rdb-key-save-delay 10000000\n        populate 1000\n        r set x x\n        r bgsave\n        set pid1 [get_child_pid 0]\n        catch {exec kill -9 $pid1}\n        waitForBgsave r\n\n        # make sure a read command succeeds\n        assert_equal [r get x] x\n\n        # make sure a write command fails\n        assert_error {MISCONF *} {r set x y}\n\n        # repeat with a script\n        assert_error {MISCONF *} {r eval {\n            return redis.call('set','x',1)\n            } 1 x\n        }\n        assert_equal {x} [r eval {\n            return redis.call('get','x')\n            } 1 x\n        ]\n\n        # again with a script using shebang\n        assert_error {MISCONF *} {r eval {#!lua\n            return redis.call('set','x',1)\n            } 1 x\n     
   }\n        assert_equal {x} [r eval {#!lua flags=no-writes\n            return redis.call('get','x')\n            } 1 x\n        ]\n\n        r config set rdb-key-save-delay 0\n        r bgsave\n        waitForBgsave r\n\n        # server is writable again\n        r set x y\n    } {OK}\n}\n\nstart_server {overrides {save \"900 1\"}} {\n    test \"rdb_saves_consecutive_failures metric\" {\n        assert_equal [s rdb_saves_consecutive_failures] 0\n\n        # First bgsave failure\n        r config set rdb-key-save-delay 10000000\n        populate 100\n        r bgsave\n        set pid1 [get_child_pid 0]\n        catch {exec kill -9 $pid1}\n        waitForBgsave r\n\n        assert_equal [s rdb_saves_consecutive_failures] 1\n\n        # Second bgsave failure\n        r bgsave\n        set pid2 [get_child_pid 0]\n        catch {exec kill -9 $pid2}\n        waitForBgsave r\n\n        assert_equal [s rdb_saves_consecutive_failures] 2\n\n        # Successful bgsave should reset counter\n        r config set rdb-key-save-delay 0\n        r bgsave\n        waitForBgsave r\n\n        # Counter should be reset to 0 after success\n        assert_equal [s rdb_saves_consecutive_failures] 0\n    }\n}\n\nset server_path [tmpdir \"server.partial-hfield-exp-test\"]\n\n# verifies writing and reading hash key with expiring and persistent fields\nstart_server [list overrides [list \"dir\" $server_path]] {\n    foreach {type lp_entries} {listpack 512 dict 0} {\n        test \"HFE - save and load expired fields, expired soon after, or long after ($type)\" {\n            r config set hash-max-listpack-entries $lp_entries\n\n            r FLUSHALL\n\n            r HMSET key a 1 b 2 c 3 d 4 e 5\n            # expected to be expired long after restart\n            r HEXPIREAT key 2524600800 FIELDS 1 a\n            # expected long TTL value (46 bits) is saved and loaded correctly\n            r HPEXPIREAT key 65755674080852 FIELDS 1 b\n            # expected to be already expired after 
restart\n            r HPEXPIRE key 80 FIELDS 1 d\n            # expected to be expired soon after restart\n            r HPEXPIRE key 200 FIELDS 1 e\n\n            r save\n            # sleep 101 ms to make sure d will expire after restart\n            after 101\n            restart_server 0 true false\n            wait_done_loading r\n\n            # We can never be sure when active-expire kicks into action\n            wait_for_condition 100 10 {\n                [lsort [r hgetall key]] == \"1 2 3 a b c\"\n            } else {\n                fail \"hgetall of key is not as expected\"\n            }\n\n            assert_equal [r hpexpiretime key FIELDS 3 a b c] {2524600800000 65755674080852 -1}\n            assert_equal [s rdb_last_load_keys_loaded] 1\n\n            # wait until expired_subkeys equals 2\n            wait_for_condition 10 100 {\n                [s expired_subkeys] == 2\n            } else {\n                fail \"Value of expired_subkeys is not as expected\"\n            }\n        }\n    }\n}\n\nset server_path [tmpdir \"server.all-hfield-exp-test\"]\n\n# verifies writing hash with several expired fields, and active-expiring it on load\nstart_server [list overrides [list \"dir\" $server_path]] {\n    foreach {type lp_entries} {listpack 512 dict 0} {\n        test \"HFE - save and load rdb all fields expired, ($type)\" {\n            r config set hash-max-listpack-entries $lp_entries\n\n            r FLUSHALL\n\n            r HMSET key a 1 b 2 c 3 d 4\n            r HPEXPIRE key 100 FIELDS 4 a b c d\n\n            r save\n            # sleep 101 ms to make sure all fields will expire after restart\n            after 101\n\n            restart_server 0 true false\n            wait_done_loading r\n\n            # it is expected that no field was expired on load and the key was\n            # loaded, even though all its fields are actually expired.\n            assert_equal [s rdb_last_load_keys_loaded] 1\n\n            assert_equal [r hgetall key] 
{}\n        }\n    }\n}\n\nset server_path [tmpdir \"server.listpack-to-dict-test\"]\n\ntest \"save listpack, load dict\" {\n    start_server [list overrides [list \"dir\" $server_path  enable-debug-command yes]] {\n        r config set hash-max-listpack-entries 512\n\n        r FLUSHALL\n\n        r HMSET key a 1 b 2 c 3 d 4\n        assert_match \"*encoding:listpack*\" [r debug object key]\n        r HPEXPIRE key 100 FIELDS 1 d\n        r save\n\n        # sleep 200 ms to make sure 'd' will expire when reloading\n        after 200\n\n        # change configuration and reload - result should be dict-encoded key\n        r config set hash-max-listpack-entries 0\n        r debug reload nosave\n\n        # first verify d was not expired during load (no expiry when loading\n        # a hash that was saved listpack-encoded)\n        assert_equal [s rdb_last_load_keys_loaded] 1\n\n        # d should be lazy expired in hgetall\n        assert_equal [lsort [r hgetall key]] \"1 2 3 a b c\"\n        assert_match \"*encoding:hashtable*\" [r debug object key]\n    }\n}\n\nset server_path [tmpdir \"server.dict-to-listpack-test\"]\n\ntest \"save dict, load listpack\" {\n    start_server [list overrides [list \"dir\" $server_path  enable-debug-command yes]] {\n        r config set hash-max-listpack-entries 0\n\n        r FLUSHALL\n\n        r HMSET key a 1 b 2 c 3 d 4\n        assert_match \"*encoding:hashtable*\" [r debug object key]\n        r HPEXPIRE key 200 FIELDS 1 d\n        r save\n\n        # sleep 201 ms to make sure 'd' will expire during reload\n        after 201\n\n        # change configuration and reload - result should be LP-encoded key\n        r config set hash-max-listpack-entries 512\n        r debug reload nosave\n\n        # verify d was expired during load\n        assert_equal [s rdb_last_load_keys_loaded] 1\n\n        assert_equal [lsort [r hgetall key]] \"1 2 3 a b c\"\n        assert_match \"*encoding:listpack*\" [r debug object key]\n    
}\n}\n\nset server_path [tmpdir \"server.active-expiry-after-load\"]\n\n# verifies a field is correctly expired by active expiry AFTER loading from RDB\nforeach {type lp_entries} {listpack 512 dict 0} {\n    start_server [list overrides [list \"dir\" $server_path enable-debug-command yes]] {\n        test \"active field expiry after load, ($type)\" {\n            r config set hash-max-listpack-entries $lp_entries\n\n            r FLUSHALL\n\n            r HMSET key a 1 b 2 c 3 d 4 e 5 f 6\n            r HEXPIREAT key 2524600800 FIELDS 2 a b\n            r HPEXPIRE key 200 FIELDS 2 c d\n\n            r save\n            r debug reload nosave\n\n            # wait at most 2 secs to make sure 'c' and 'd' will active-expire\n            wait_for_condition 20 100 {\n                [s expired_subkeys] == 2\n            } else {\n                fail \"expired hash fields is [s expired_subkeys] != 2\"\n            }\n\n            assert_equal [s rdb_last_load_keys_loaded] 1\n\n            # hgetall might lazy expire fields, so it's only called after the stat asserts\n            assert_equal [lsort [r hgetall key]] \"1 2 5 6 a b e f\"\n            assert_equal [r hexpiretime key FIELDS 6 a b c d e f] {2524600800 2524600800 -2 -2 -1 -1}\n        }\n    }\n}\n\nset server_path [tmpdir \"server.lazy-expiry-after-load\"]\n\nforeach {type lp_entries} {listpack 512 dict 0} {\n    start_server [list overrides [list \"dir\" $server_path enable-debug-command yes]] {\n        test \"lazy field expiry after load, ($type)\" {\n            r config set hash-max-listpack-entries $lp_entries\n            r debug set-active-expire 0\n\n            r FLUSHALL\n\n            r HMSET key a 1 b 2 c 3 d 4 e 5 f 6\n            r HEXPIREAT key 2524600800 FIELDS 2 a b\n            r HPEXPIRE key 200 FIELDS 2 c d\n\n            r save\n            r debug reload nosave\n\n            # sleep 500 msec to make sure 'c' and 'd' will lazy-expire when calling hgetall\n            after 500\n\n       
     assert_equal [s rdb_last_load_keys_loaded] 1\n            assert_equal [s expired_subkeys] 0\n\n            # hgetall will lazy expire fields, so it's only called after the stat asserts\n            assert_equal [lsort [r hgetall key]] \"1 2 5 6 a b e f\"\n            assert_equal [r hexpiretime key FIELDS 6 a b c d e f] {2524600800 2524600800 -2 -2 -1 -1}\n        }\n    }\n}\n\nset server_path [tmpdir \"server.unexpired-items-rax-list-boundary\"]\n\nforeach {type lp_entries} {listpack 512 dict 0} {\n    start_server [list overrides [list \"dir\" $server_path enable-debug-command yes]] {\n        test \"load un-expired items below and above rax-list boundary, ($type)\" {\n            r config set hash-max-listpack-entries $lp_entries\n\n            r flushall\n\n            set hash_sizes {15 16 17 31 32 33}\n            foreach h $hash_sizes {\n                for {set i 1} {$i <= $h} {incr i} {\n                    r hset key$h f$i v$i\n                    r hexpireat key$h 2524600800 FIELDS 1 f$i\n                }\n            }\n\n            r save\n\n            restart_server 0 true false\n            wait_done_loading r\n\n            set hash_sizes {15 16 17 31 32 33}\n            foreach h $hash_sizes {\n                for {set i 1} {$i <= $h} {incr i} {\n                    # verify each field value and expiration time were loaded correctly\n                    assert_equal [r hget key$h f$i] v$i\n                    assert_equal [r hexpiretime key$h FIELDS 1 f$i] 2524600800\n                }\n            }\n        }\n    }\n}\n\n} ;# tags\n
  },
  {
    "path": "tests/integration/redis-benchmark.tcl",
    "content": "source tests/support/benchmark.tcl\nsource tests/support/cli.tcl\n\nproc cmdstat {cmd} {\n    return [cmdrstat $cmd r]\n}\n\n# common code to reset stats, flush the db and run redis-benchmark\nproc common_bench_setup {cmd} {\n    r config resetstat\n    r flushall\n    if {[catch { exec {*}$cmd } error]} {\n        set first_line [lindex [split $error \"\\n\"] 0]\n        puts [colorstr red \"redis-benchmark non zero code. first line: $first_line\"]\n        fail \"redis-benchmark non zero code. first line: $first_line\"\n    }\n}\n\n# we use these extra asserts on a simple set,get test for features like uri parsing\n# and other simple flag-related tests\nproc default_set_get_checks {} {\n    assert_match  {*calls=10,*} [cmdstat set]\n    assert_match  {*calls=10,*} [cmdstat get]\n    # assert one of the non benchmarked commands is not present\n    assert_match  {} [cmdstat lrange]\n}\n\ntags {\"benchmark network external:skip logreqres:skip\"} {\n    start_server {} {\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n        r select 0\n\n        test {benchmark: set,get} {\n            set cmd [redisbenchmark $master_host $master_port \"-c 5 -n 10 -t set,get\"]\n            common_bench_setup $cmd\n            default_set_get_checks\n        }\n\n        test {benchmark: connecting using URI set,get} {\n            set cmd [redisbenchmarkuri $master_host $master_port \"-c 5 -n 10 -t set,get\"]\n            common_bench_setup $cmd\n            default_set_get_checks\n        }\n\n        test {benchmark: connecting using URI with authentication set,get} {\n            r config set masterauth pass\n            set cmd [redisbenchmarkuriuserpass $master_host $master_port \"default\" pass \"-c 5 -n 10 -t set,get\"]\n            common_bench_setup $cmd\n            default_set_get_checks\n        }\n\n        test {benchmark: full test suite} {\n            set cmd [redisbenchmark $master_host $master_port \"-c 10 -n 100\"]\n  
          common_bench_setup $cmd\n\n            # ping total calls are 2*issued commands per test due to PING_INLINE and PING_MBULK\n            assert_match  {*calls=200,*} [cmdstat ping]\n            assert_match  {*calls=100,*} [cmdstat set]\n            assert_match  {*calls=100,*} [cmdstat get]\n            assert_match  {*calls=100,*} [cmdstat incr]\n            # lpush total calls are 2*issued commands per test due to the lrange tests\n            assert_match  {*calls=200,*} [cmdstat lpush]\n            assert_match  {*calls=100,*} [cmdstat rpush]\n            assert_match  {*calls=100,*} [cmdstat lpop]\n            assert_match  {*calls=100,*} [cmdstat rpop]\n            assert_match  {*calls=100,*} [cmdstat sadd]\n            assert_match  {*calls=100,*} [cmdstat hset]\n            assert_match  {*calls=100,*} [cmdstat spop]\n            assert_match  {*calls=100,*} [cmdstat zadd]\n            assert_match  {*calls=100,*} [cmdstat zpopmin]\n            assert_match  {*calls=400,*} [cmdstat lrange]\n            assert_match  {*calls=100,*} [cmdstat mset]\n            # assert one of the non benchmarked commands is not present\n            assert_match {} [cmdstat rpoplpush]\n        }\n\n        test {benchmark: multi-thread set,get} {\n            set cmd [redisbenchmark $master_host $master_port \"--threads 10 -c 5 -n 10 -t set,get\"]\n            common_bench_setup $cmd\n            default_set_get_checks\n\n            # ensure only one key was populated\n            assert_match  {1} [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n        }\n\n        test {benchmark: pipelined full set,get} {\n            set cmd [redisbenchmark $master_host $master_port \"-P 5 -c 10 -n 10010 -t set,get\"]\n            common_bench_setup $cmd\n            assert_match  {*calls=10010,*} [cmdstat set]\n            assert_match  {*calls=10010,*} [cmdstat get]\n            # assert one of the non benchmarked commands is not present\n            
assert_match  {} [cmdstat lrange]\n\n            # ensure only one key was populated\n            assert_match  {1} [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n        }\n\n        test {benchmark: arbitrary command} {\n            set cmd [redisbenchmark $master_host $master_port \"-c 5 -n 150 INCRBYFLOAT mykey 10.0\"]\n            common_bench_setup $cmd\n            assert_match  {*calls=150,*} [cmdstat incrbyfloat]\n            # assert one of the non benchmarked commands is not present\n            assert_match  {} [cmdstat get]\n\n            # ensure only one key was populated\n            assert_match  {1} [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n        }\n\n        test {benchmark: keyspace length} {\n            set cmd [redisbenchmark $master_host $master_port \"-r 50 -t set -n 1000\"]\n            common_bench_setup $cmd\n            assert_match  {*calls=1000,*} [cmdstat set]\n            # assert one of the non benchmarked commands is not present\n            assert_match  {} [cmdstat get]\n\n            # ensure the keyspace has the desired size\n            assert_match  {50} [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n        }\n        \n        test {benchmark: clients idle mode should return error when reached maxclients limit} {\n            set cmd [redisbenchmark $master_host $master_port \"-c 10 -I\"]\n            set original_maxclients [lindex [r config get maxclients] 1]\n            r config set maxclients 5\n            catch { exec {*}$cmd } error\n            assert_match \"*Error*\" $error\n            r config set maxclients $original_maxclients\n        }\n\n        test {benchmark: read last argument from stdin} {\n            set base_cmd [redisbenchmark $master_host $master_port \"-x -n 10 set key\"]\n            set cmd \"printf arg | $base_cmd\"\n            common_bench_setup $cmd\n            r get key\n        } {arg}\n\n        test {benchmark: 
no NaN or Inf in latency report with fast requests} {\n            # With -n 1 on localhost, totlatency can round to 0 ms. Verify showLatencyReport() handles this gracefully.\n            set cmd [redisbenchmark $master_host $master_port \"-c 1 -n 1 -t set\"]\n            set output [exec {*}$cmd 2>@1]\n            if {[regexp -nocase {nan|(?:^|[^a-z])inf(?:[^o]|$)} $output]} {\n                fail \"redis-benchmark output contains NaN or Inf: $output\"\n            }\n        }\n\n        # tls specific tests\n        if {$::tls} {\n            test {benchmark: specific tls-ciphers} {\n                set cmd [redisbenchmark $master_host $master_port \"-r 50 -t set -n 1000 --tls-ciphers \\\"DEFAULT:-AES128-SHA256\\\"\"]\n                common_bench_setup $cmd\n                assert_match  {*calls=1000,*} [cmdstat set]\n                # assert one of the non benchmarked commands is not present\n                assert_match  {} [cmdstat get]\n            }\n\n            test {benchmark: tls connecting using URI with authentication set,get} {\n                r config set masterauth pass\n                set cmd [redisbenchmarkuriuserpass $master_host $master_port \"default\" pass \"-c 5 -n 10 -t set,get\"]\n                common_bench_setup $cmd\n                default_set_get_checks\n            }\n\n            test {benchmark: specific tls-ciphersuites} {\n                r flushall\n                r config resetstat\n                set ciphersuites_supported 1\n                set cmd [redisbenchmark $master_host $master_port \"-r 50 -t set -n 1000 --tls-ciphersuites \\\"TLS_AES_128_GCM_SHA256\\\"\"]\n                if {[catch { exec {*}$cmd } error]} {\n                    set first_line [lindex [split $error \"\\n\"] 0]\n                    if {[string match \"*Invalid option*\" $first_line]} {\n                        set ciphersuites_supported 0\n                        if {$::verbose} {\n                            puts \"Skipping test, TLSv1.3 
not supported.\"\n                        }\n                    } else {\n                        puts [colorstr red \"redis-benchmark non zero code. first line: $first_line\"]\n                        fail \"redis-benchmark non zero code. first line: $first_line\"\n                    }\n                }\n                if {$ciphersuites_supported} {\n                    assert_match  {*calls=1000,*} [cmdstat set]\n                    # assert one of the non benchmarked commands is not present\n                    assert_match  {} [cmdstat get]\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/redis-cli.tcl",
    "content": "source tests/support/cli.tcl\n\nif {$::singledb} {\n    set ::dbnum 0\n} else {\n    set ::dbnum 9\n}\n\nfile delete ./.rediscli_history_test\nset ::env(REDISCLI_HISTFILE) \".rediscli_history_test\"\n\nstart_server {tags {\"cli\"}} {\n    proc open_cli {{opts \"\"} {infile \"\"}} {\n        if { $opts == \"\" } {\n            set opts \"-n $::dbnum\"\n        }\n        set ::env(TERM) dumb\n        set cmdline [rediscli [srv host] [srv port] $opts]\n        if {$infile ne \"\"} {\n            set cmdline \"$cmdline < $infile\"\n            set mode \"r\"\n        } else {\n            set mode \"r+\"\n        }\n        set fd [open \"|$cmdline\" $mode]\n        fconfigure $fd -buffering none\n        fconfigure $fd -blocking false\n        fconfigure $fd -translation binary\n        set _ $fd\n    }\n\n    proc close_cli {fd} {\n        close $fd\n    }\n\n    proc read_cli {fd} {\n        set ret [read $fd]\n        while {[string length $ret] == 0} {\n            after 10\n            set ret [read $fd]\n        }\n\n        # We may have a short read, try to read some more.\n        set empty_reads 0\n        while {$empty_reads < 5} {\n            set buf [read $fd]\n            if {[string length $buf] == 0} {\n                after 10\n                incr empty_reads\n            } else {\n                append ret $buf\n                set empty_reads 0\n            }\n        }\n        return $ret\n    }\n\n    proc write_cli {fd buf} {\n        puts $fd $buf\n        flush $fd\n    }\n\n    # Helpers to run tests in interactive mode\n\n    proc format_output {output} {\n        set _ [string trimright $output \"\\n\"]\n    }\n\n    proc run_command {fd cmd} {\n        write_cli $fd $cmd\n        set _ [format_output [read_cli $fd]]\n    }\n\n    proc test_interactive_cli_with_prompt {name code} {\n        set ::env(FAKETTY_WITH_PROMPT) 1\n        test_interactive_cli $name $code\n        unset ::env(FAKETTY_WITH_PROMPT)\n    }\n\n    
proc test_interactive_cli {name code} {\n        set ::env(FAKETTY) 1\n        set fd [open_cli]\n        test \"Interactive CLI: $name\" $code\n        close_cli $fd\n        unset ::env(FAKETTY)\n    }\n\n    proc test_interactive_nontty_cli {name code} {\n        set fd [open_cli]\n        test \"Interactive non-TTY CLI: $name\" $code\n        close_cli $fd\n    }\n\n    # Helpers to run tests where stdout is not a tty\n    proc write_tmpfile {contents} {\n        set tmp [tmpfile \"cli\"]\n        set tmpfd [open $tmp \"w\"]\n        puts -nonewline $tmpfd $contents\n        close $tmpfd\n        set _ $tmp\n    }\n\n    proc _run_cli {host port db opts args} {\n        set cmd [rediscli $host $port [list -n $db {*}$args]]\n        foreach {key value} $opts {\n            if {$key eq \"pipe\"} {\n                set cmd \"sh -c \\\"$value | $cmd\\\"\"\n            }\n            if {$key eq \"path\"} {\n                set cmd \"$cmd < $value\"\n            }\n        }\n\n        set fd [open \"|$cmd\" \"r\"]\n        fconfigure $fd -buffering none\n        fconfigure $fd -translation binary\n        set resp [read $fd 1048576]\n        close $fd\n        set _ [format_output $resp]\n    }\n\n    proc run_cli {args} {\n        _run_cli [srv host] [srv port] $::dbnum {} {*}$args\n    }\n\n    proc run_cli_with_input_pipe {mode cmd args} {\n        if {$mode == \"x\" } {\n            _run_cli [srv host] [srv port] $::dbnum [list pipe $cmd] -x {*}$args\n        } elseif {$mode == \"X\"} {\n            _run_cli [srv host] [srv port] $::dbnum [list pipe $cmd] -X tag {*}$args\n        }\n    }\n\n    proc run_cli_with_input_file {mode path args} {\n        if {$mode == \"x\" } {\n            _run_cli [srv host] [srv port] $::dbnum [list path $path] -x {*}$args\n        } elseif {$mode == \"X\"} {\n            _run_cli [srv host] [srv port] $::dbnum [list path $path] -X tag {*}$args\n        }\n    }\n\n    proc run_cli_host_port_db {host port db args} {\n        
_run_cli $host $port $db {} {*}$args\n    }\n\n    proc test_nontty_cli {name code} {\n        test \"Non-interactive non-TTY CLI: $name\" $code\n    }\n\n    # Helpers to run tests where stdout is a tty (fake it)\n    proc test_tty_cli {name code} {\n        set ::env(FAKETTY) 1\n        test \"Non-interactive TTY CLI: $name\" $code\n        unset ::env(FAKETTY)\n    }\n\n    test_interactive_cli_with_prompt \"should find first search result\" {\n        run_command $fd \"keys one\\x0D\"\n        run_command $fd \"keys two\\x0D\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        read_cli $fd\n\n        puts -nonewline $fd \"ey\"\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\): \\x1B\\[0mk\\x1B\\[1mey\\x1B\\[0ms two} $result]\n    }\n\n    test_interactive_cli_with_prompt \"should find and use the first search result\" {\n        set now [clock seconds]\n        run_command $fd \"SET blah \\\"myvalue\\\"\\x0D\"\n        run_command $fd \"GET blah\\x0D\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        read_cli $fd\n\n        puts -nonewline $fd \"ET b\"\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\): \\x1B\\[0mG\\x1B\\[1mET b\\x1B\\[0mlah} $result]\n\n        puts $fd \"\\x0D\" ;# ENTER\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {.*\"myvalue\"\\n} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should be ok if there is no result\" {\n        puts $fd \"\\x12\" ;# CTRL+R\n\n        set now [clock seconds]\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        set result2 [run_command $fd \"keys \\\"$now\\\"\\x0D\"]\n        assert_equal 1 [regexp {.*(empty array).*} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"upon submitting search, (reverse-i-search) prompt should go away\" {\n        puts $fd \"\\x12\" ;# CTRL+R\n\n        set now 
[clock seconds]\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        set result2 [run_command $fd \"keys \\\"$now\\\"\\x0D\"]\n\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should find second search result if user presses ctrl+r again\" {\n        run_command $fd \"keys one\\x0D\"\n        run_command $fd \"keys two\\x0D\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        read_cli $fd\n\n        puts -nonewline $fd \"ey\"\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\): \\x1B\\[0mk\\x1B\\[1mey\\x1B\\[0ms two} $result]\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\): \\x1B\\[0mk\\x1B\\[1mey\\x1B\\[0ms one} $result]\n    }\n\n    test_interactive_cli_with_prompt \"should find second search result if user presses ctrl+s\" {\n        run_command $fd \"keys one\\x0D\"\n        run_command $fd \"keys two\\x0D\"\n\n        puts $fd \"\\x13\" ;# CTRL+S\n        read_cli $fd\n\n        puts -nonewline $fd \"ey\"\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(i-search\\): \\x1B\\[0mk\\x1B\\[1mey\\x1B\\[0ms one} $result]\n\n        puts $fd \"\\x13\" ;# CTRL+S\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(i-search\\): \\x1B\\[0mk\\x1B\\[1mey\\x1B\\[0ms two} $result]\n    }\n\n    test_interactive_cli_with_prompt \"should exit reverse search if user presses ctrl+g\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts $fd \"\\x07\" ;# CTRL+G\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should exit reverse search 
if user presses up arrow\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts $fd \"\\x1B\\x5B\\x41\" ;# up arrow\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should exit reverse search if user presses right arrow\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts $fd \"\\x1B\\x5B\\x43\" ;# right arrow\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should exit reverse search if user presses down arrow\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts $fd \"\\x1B\\x5B\\x42\" ;# down arrow\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should exit reverse search if user presses left arrow\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts $fd \"\\x1B\\x5B\\x44\" ;# left arrow\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?>} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should disable and persist line if user presses tab\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n  
      puts -nonewline $fd \"GET blah\"\n        read_cli $fd\n\n        puts -nonewline $fd \"\\x09\" ;# TAB\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?> GET blah} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should disable and persist search result if user presses tab\" {\n        run_command $fd \"GET one\\x0D\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts -nonewline $fd \"one\"\n        read_cli $fd\n\n        puts -nonewline $fd \"\\x09\" ;# TAB\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?> GET one} $result2]\n    }\n\n    test_interactive_cli_with_prompt \"should disable and persist line and move the cursor if user presses tab\" {\n        run_command $fd \"\"\n\n        puts $fd \"\\x12\" ;# CTRL+R\n        set result [read_cli $fd]\n        assert_equal 1 [regexp {\\(reverse-i-search\\):} $result]\n\n        puts -nonewline $fd \"GET blah\"\n        read_cli $fd\n\n        puts -nonewline $fd \"\\x09\" ;# TAB\n        set result2 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?> GET blah} $result2]\n\n        puts -nonewline $fd \"suffix\"\n        set result3 [read_cli $fd]\n        assert_equal 1 [regexp {127\\.0\\.0\\.1:[0-9]*(\\[[0-9]])?> GET blahsuffix} $result3]\n    }\n\n    test_interactive_cli \"INFO response should be printed raw\" {\n        set lines [split [run_command $fd info] \"\\n\"]\n        foreach line $lines {\n            # Info lines end in \\r\\n, so they now end in \\r.\n            if {![regexp {^\\r$|^#|^[^#:]+:} $line]} {\n                fail \"Malformed info line: $line\"\n            }\n        }\n    }\n\n    test_interactive_cli \"Status reply\" {\n        assert_equal \"OK\" [run_command $fd \"set key foo\"]\n    }\n\n    test_interactive_cli 
\"Integer reply\" {\n        assert_equal \"(integer) 1\" [run_command $fd \"incr counter\"]\n    }\n\n    test_interactive_cli \"Bulk reply\" {\n        r set key foo\n        assert_equal \"\\\"foo\\\"\" [run_command $fd \"get key\"]\n    }\n\n    test_interactive_cli \"Multi-bulk reply\" {\n        r rpush list foo\n        r rpush list bar\n        assert_equal \"1) \\\"foo\\\"\\n2) \\\"bar\\\"\" [run_command $fd \"lrange list 0 -1\"]\n    }\n\n    test_interactive_cli \"Parsing quotes\" {\n        assert_equal \"OK\" [run_command $fd \"set key \\\"bar\\\"\"]\n        assert_equal \"bar\" [r get key]\n        assert_equal \"OK\" [run_command $fd \"set key \\\" bar \\\"\"]\n        assert_equal \" bar \" [r get key]\n        assert_equal \"OK\" [run_command $fd \"set key \\\"\\\\\\\"bar\\\\\\\"\\\"\"]\n        assert_equal \"\\\"bar\\\"\" [r get key]\n        assert_equal \"OK\" [run_command $fd \"set key \\\"\\tbar\\t\\\"\"]\n        assert_equal \"\\tbar\\t\" [r get key]\n\n        # invalid quotation\n        assert_equal \"Invalid argument(s)\" [run_command $fd \"get \\\"\\\"key\"]\n        assert_equal \"Invalid argument(s)\" [run_command $fd \"get \\\"key\\\"x\"]\n\n        # quotes after the argument are weird, but should be allowed\n        assert_equal \"OK\" [run_command $fd \"set key\\\"\\\" bar\"]\n        assert_equal \"bar\" [r get key]\n    }\n\n    test_interactive_cli \"Subscribed mode\" {\n        if {$::force_resp3} {\n            run_command $fd \"hello 3\"\n        }\n\n        set reading \"Reading messages... 
(press Ctrl-C to quit or any key to type command)\\r\"\n        set erase \"\\033\\[K\"; # Erases the \"Reading messages...\" line.\n\n        # Subscribe to some channels.\n        set sub1 \"1) \\\"subscribe\\\"\\n2) \\\"ch1\\\"\\n3) (integer) 1\\n\"\n        set sub2 \"1) \\\"subscribe\\\"\\n2) \\\"ch2\\\"\\n3) (integer) 2\\n\"\n        set sub3 \"1) \\\"subscribe\\\"\\n2) \\\"ch3\\\"\\n3) (integer) 3\\n\"\n        assert_equal $sub1$sub2$sub3$reading \\\n            [run_command $fd \"subscribe ch1 ch2 ch3\"]\n\n        # Receive pubsub message.\n        r publish ch2 hello\n        set message \"1) \\\"message\\\"\\n2) \\\"ch2\\\"\\n3) \\\"hello\\\"\\n\"\n        assert_equal $erase$message$reading [read_cli $fd]\n\n        # Unsubscribe some.\n        set unsub1 \"1) \\\"unsubscribe\\\"\\n2) \\\"ch1\\\"\\n3) (integer) 2\\n\"\n        set unsub2 \"1) \\\"unsubscribe\\\"\\n2) \\\"ch2\\\"\\n3) (integer) 1\\n\"\n        assert_equal $erase$unsub1$unsub2$reading \\\n            [run_command $fd \"unsubscribe ch1 ch2\"]\n\n        run_command $fd \"hello 2\"\n\n        # Command forbidden in subscribed mode (RESP2).\n        set err \"(error) ERR Can't execute 'get': only (P|S)SUBSCRIBE / (P|S)UNSUBSCRIBE / PING / QUIT / RESET are allowed in this context\\n\"\n        assert_equal $erase$err$reading [run_command $fd \"get k\"]\n\n        # Command allowed in subscribed mode.\n        set pong \"1) \\\"pong\\\"\\n2) \\\"\\\"\\n\"\n        assert_equal $erase$pong$reading [run_command $fd \"ping\"]\n\n        # Reset exits subscribed mode.\n        assert_equal ${erase}RESET [run_command $fd \"reset\"]\n        assert_equal PONG [run_command $fd \"ping\"]\n\n        # Check TTY output of push messages in RESP3 has \")\" prefix (to be changed to \">\" in the future).\n        assert_match \"1#*\" [run_command $fd \"hello 3\"]\n        set sub1 \"1) \\\"subscribe\\\"\\n2) \\\"ch1\\\"\\n3) (integer) 1\\n\"\n        assert_equal $sub1$reading \\\n            [run_command 
$fd \"subscribe ch1\"]\n    }\n\n    test_interactive_nontty_cli \"Subscribed mode\" {\n        # Raw output and no \"Reading messages...\" info message.\n        # Use RESP3 in this test case.\n        assert_match {*proto 3*} [run_command $fd \"hello 3\"]\n\n        # Subscribe to some channels.\n        set sub1 \"subscribe\\nch1\\n1\"\n        set sub2 \"subscribe\\nch2\\n2\"\n        assert_equal $sub1\\n$sub2 \\\n            [run_command $fd \"subscribe ch1 ch2\"]\n\n        assert_equal OK [run_command $fd \"client tracking on\"]\n        assert_equal OK [run_command $fd \"set k 42\"]\n        assert_equal 42 [run_command $fd \"get k\"]\n\n        # Interleaving invalidate and pubsub messages.\n        r publish ch1 hello\n        r del k\n        r publish ch2 world\n        set message1 \"message\\nch1\\nhello\"\n        set invalidate \"invalidate\\nk\"\n        set message2 \"message\\nch2\\nworld\"\n        assert_equal $message1\\n$invalidate\\n$message2\\n [read_cli $fd]\n\n        # Unsubscribe all.\n        set unsub1 \"unsubscribe\\nch1\\n1\"\n        set unsub2 \"unsubscribe\\nch2\\n0\"\n        assert_equal $unsub1\\n$unsub2 [run_command $fd \"unsubscribe ch1 ch2\"]\n    }\n\n    test_tty_cli \"Status reply\" {\n        assert_equal \"OK\" [run_cli set key bar]\n        assert_equal \"bar\" [r get key]\n    }\n\n    test_tty_cli \"Integer reply\" {\n        r del counter\n        assert_equal \"(integer) 1\" [run_cli incr counter]\n    }\n\n    test_tty_cli \"Bulk reply\" {\n        r set key \"tab\\tnewline\\n\"\n        assert_equal \"\\\"tab\\\\tnewline\\\\n\\\"\" [run_cli get key]\n    }\n\n    test_tty_cli \"Multi-bulk reply\" {\n        r del list\n        r rpush list foo\n        r rpush list bar\n        assert_equal \"1) \\\"foo\\\"\\n2) \\\"bar\\\"\" [run_cli lrange list 0 -1]\n    }\n\n    test_tty_cli \"Read last argument from pipe\" {\n        assert_equal \"OK\" [run_cli_with_input_pipe x \"echo foo\" set key]\n        assert_equal 
\"foo\\n\" [r get key]\n\n        assert_equal \"OK\" [run_cli_with_input_pipe X \"echo foo\" set key2 tag]\n        assert_equal \"foo\\n\" [r get key2]\n    }\n\n    test_tty_cli \"Read last argument from file\" {\n        set tmpfile [write_tmpfile \"from file\"]\n\n        assert_equal \"OK\" [run_cli_with_input_file x $tmpfile set key]\n        assert_equal \"from file\" [r get key]\n\n        assert_equal \"OK\" [run_cli_with_input_file X $tmpfile set key2 tag]\n        assert_equal \"from file\" [r get key2]\n\n        file delete $tmpfile\n    }\n\n    test_tty_cli \"Escape character in JSON mode\" {\n        # reverse solidus\n        r hset solidus \\/ \\/\n        assert_equal \\/ \\/ [run_cli hgetall solidus]\n        set escaped_reverse_solidus \\\"\\\\\"\n        assert_equal $escaped_reverse_solidus $escaped_reverse_solidus [run_cli --json hgetall \\/]\n        # non printable (0xF0 in ISO-8859-1, not UTF-8(0xC3 0xB0))\n        set eth \"\\u00f0\\u0065\"\n        r hset eth test $eth\n        assert_equal \\\"\\\\xf0e\\\" [run_cli hget eth test]\n        assert_equal \\\"\\u00f0e\\\" [run_cli --json hget eth test]\n        assert_equal \\\"\\\\\\\\xf0e\\\" [run_cli --quoted-json hget eth test]\n        # control characters\n        r hset control test \"Hello\\x00\\x01\\x02\\x03World\"\n        assert_equal \\\"Hello\\\\u0000\\\\u0001\\\\u0002\\\\u0003World\" [run_cli --json hget control test]\n        # non-string keys\n        r hset numkey 1 One\n        assert_equal \\{\\\"1\\\":\\\"One\\\"\\} [run_cli --json hgetall numkey]\n        # non-string, non-printable keys\n        r hset npkey \"K\\u0000\\u0001ey\" \"V\\u0000\\u0001alue\"\n        assert_equal \\{\\\"K\\\\u0000\\\\u0001ey\\\":\\\"V\\\\u0000\\\\u0001alue\\\"\\} [run_cli --json hgetall npkey]\n        assert_equal \\{\\\"K\\\\\\\\x00\\\\\\\\x01ey\\\":\\\"V\\\\\\\\x00\\\\\\\\x01alue\\\"\\} [run_cli --quoted-json hgetall npkey]\n    }\n\n    test_nontty_cli \"Status reply\" {\n        
assert_equal \"OK\" [run_cli set key bar]\n        assert_equal \"bar\" [r get key]\n    }\n\n    test_nontty_cli \"Integer reply\" {\n        r del counter\n        assert_equal \"1\" [run_cli incr counter]\n    }\n\n    test_nontty_cli \"Bulk reply\" {\n        r set key \"tab\\tnewline\\n\"\n        assert_equal \"tab\\tnewline\" [run_cli get key]\n    }\n\n    test_nontty_cli \"Multi-bulk reply\" {\n        r del list\n        r rpush list foo\n        r rpush list bar\n        assert_equal \"foo\\nbar\" [run_cli lrange list 0 -1]\n    }\n\nif {!$::tls} { ;# fake_redis_node doesn't support TLS\n    test_nontty_cli \"ASK redirect test\" {\n        # Set up two fake Redis nodes.\n        set tclsh [info nameofexecutable]\n        set script \"tests/helpers/fake_redis_node.tcl\"\n        set port1 [find_available_port $::baseport $::portcount]\n        set port2 [find_available_port $::baseport $::portcount]\n        set p1 [exec $tclsh $script $port1 \\\n                \"SET foo bar\" \"-ASK 12182 127.0.0.1:$port2\" &]\n        set p2 [exec $tclsh $script $port2 \\\n                \"ASKING\" \"+OK\" \\\n                \"SET foo bar\" \"+OK\" &]\n        # Make sure both fake nodes have started listening\n        wait_for_condition 50 50 {\n            [catch {close [socket \"127.0.0.1\" $port1]}] == 0 && \\\n            [catch {close [socket \"127.0.0.1\" $port2]}] == 0\n        } else {\n            fail \"Failed to start fake Redis nodes\"\n        }\n        # Run the cli\n        assert_equal \"OK\" [run_cli_host_port_db \"127.0.0.1\" $port1 0 -c SET foo bar]\n    }\n}\n\n    test_nontty_cli \"Quoted input arguments\" {\n        r set \"\\x00\\x00\" \"value\"\n        assert_equal \"value\" [run_cli --quoted-input get {\"\\x00\\x00\"}]\n    }\n\n    test_nontty_cli \"No accidental unquoting of input arguments\" {\n        run_cli --quoted-input set {\"\\x41\\x41\"} quoted-val\n        run_cli set {\"\\x41\\x41\"} unquoted-val\n        assert_equal 
\"quoted-val\" [r get AA]\n        assert_equal \"unquoted-val\" [r get {\"\\x41\\x41\"}]\n    }\n\n    test_nontty_cli \"Invalid quoted input arguments\" {\n        catch {run_cli --quoted-input set {\"Unterminated}} err\n        assert_match {*exited abnormally*} $err\n\n        # A single arg that unquotes to two arguments is also not expected\n        catch {run_cli --quoted-input set {\"arg1\" \"arg2\"}} err\n        assert_match {*exited abnormally*} $err\n    }\n\n    test_nontty_cli \"Read last argument from pipe\" {\n        assert_equal \"OK\" [run_cli_with_input_pipe x \"echo foo\" set key]\n        assert_equal \"foo\\n\" [r get key]\n\n        assert_equal \"OK\" [run_cli_with_input_pipe X \"echo foo\" set key2 tag]\n        assert_equal \"foo\\n\" [r get key2]\n    }\n\n    test_nontty_cli \"Read last argument from file\" {\n        set tmpfile [write_tmpfile \"from file\"]\n\n        assert_equal \"OK\" [run_cli_with_input_file x $tmpfile set key]\n        assert_equal \"from file\" [r get key]\n\n        assert_equal \"OK\" [run_cli_with_input_file X $tmpfile set key2 tag]\n        assert_equal \"from file\" [r get key2]\n\n        file delete $tmpfile\n    }\n\n    test_nontty_cli \"Test command-line hinting - latest server\" {\n        # cli will connect to the running server and will use COMMAND DOCS\n        catch {run_cli --test_hint_file tests/assets/test_cli_hint_suite.txt} output\n        assert_match \"*SUCCESS*\" $output\n    }\n\n    test_nontty_cli \"Test command-line hinting - no server\" {\n        # cli will fail to connect to the server and will use the cached commands.c\n        catch {run_cli -p 123 --test_hint_file tests/assets/test_cli_hint_suite.txt} output\n        assert_match \"*SUCCESS*\" $output\n    }\n\n    test_nontty_cli \"Test command-line hinting - old server\" {\n        # cli will connect to the server but will not use COMMAND DOCS,\n        # and complete the missing info from the cached commands.c\n        r ACL 
setuser clitest on nopass +@all -command|docs\n        catch {run_cli --user clitest -a nopass --no-auth-warning --test_hint_file tests/assets/test_cli_hint_suite.txt} output\n        assert_match \"*SUCCESS*\" $output\n        r acl deluser clitest\n    }\n    \n    proc test_redis_cli_rdb_dump {functions_only} {\n        r flushdb\n        r function flush\n\n        if {!$::external} {\n            set lines [count_log_lines 0]\n        }\n        set dir [lindex [r config get dir] 1]\n\n        assert_equal \"OK\" [r debug populate 100000 key 1000]\n        assert_equal \"lib1\" [r function load \"#!lua name=lib1\\nredis.register_function('func1', function() return 123 end)\"]\n        if {$functions_only} {\n            set args \"--functions-rdb $dir/cli.rdb\"\n        } else {\n            set args \"--rdb $dir/cli.rdb\"\n        }\n        catch {run_cli {*}$args} output\n        assert_match {*Transfer finished with success*} $output\n\n        # If server enabled diskless sync, it should transfer the RDB by RDB channel\n        # since server will automatically apply this optimization for rdb-only.\n        if {[r config get repl-diskless-sync] == \"yes\" && !$::external} {\n            verify_log_message 0 \"*replicas sockets (rdb-channel)*\" $lines\n        }\n\n        file delete \"$dir/dump.rdb\"\n        file rename \"$dir/cli.rdb\" \"$dir/dump.rdb\"\n\n        assert_equal \"OK\" [r set should-not-exist 1]\n        assert_equal \"should_not_exist_func\" [r function load \"#!lua name=should_not_exist_func\\nredis.register_function('should_not_exist_func', function() return 456 end)\"]\n        assert_equal \"OK\" [r debug reload nosave]\n        assert_equal {} [r get should-not-exist]\n        assert_equal {{library_name lib1 engine LUA functions {{name func1 description {} flags {}}}}} [r function list]\n        if {$functions_only} {\n            assert_equal 0 [r dbsize]\n        } else {\n            assert_equal 100000 [r dbsize]\n        }\n  
  }\n\n    foreach {functions_only} {no yes} {\n\n    test \"Dumping an RDB - functions only: $functions_only\" {\n        # Disk-based master\n        assert_match \"OK\" [r config set repl-diskless-sync no]\n        test_redis_cli_rdb_dump $functions_only\n\n        # Disk-less master\n        assert_match \"OK\" [r config set repl-diskless-sync yes]\n        assert_match \"OK\" [r config set repl-diskless-sync-delay 0]\n        test_redis_cli_rdb_dump $functions_only\n    } {} {needs:repl needs:debug}\n\n    } ;# foreach functions_only\n\n    test \"Scan mode\" {\n        r flushdb\n        populate 1000 key: 1\n\n        # basic use\n        assert_equal 1000 [llength [split [run_cli --scan]]]\n\n        # pattern\n        assert_equal {key:2} [run_cli --scan --pattern \"*:2\"]\n\n        # pattern matching with a quoted string\n        assert_equal {key:2} [run_cli --scan --quoted-pattern {\"*:\\x32\"}]\n    }\n\n    proc test_redis_cli_repl {} {\n        set fd [open_cli \"--replica\"]\n        wait_for_condition 500 100 {\n            [string match {*slave0:*state=online*} [r info]]\n        } else {\n            fail \"redis-cli --replica did not connect\"\n        }\n\n        for {set i 0} {$i < 100} {incr i} {\n           r set test-key test-value-$i\n        }\n\n        wait_for_condition 500 100 {\n            [string match {*test-value-99*} [read_cli $fd]]\n        } else {\n            fail \"redis-cli --replica didn't read commands\"\n        }\n\n        fconfigure $fd -blocking true\n        r client kill type slave\n        catch { close_cli $fd } err\n        assert_match {*Server closed the connection*} $err\n    }\n\n    test \"Connecting as a replica\" {\n        # Disk-based master\n        assert_match \"OK\" [r config set repl-diskless-sync no]\n        test_redis_cli_repl\n\n        # Disk-less master\n        assert_match \"OK\" [r config set repl-diskless-sync yes]\n        assert_match \"OK\" [r config set repl-diskless-sync-delay 
0]\n        test_redis_cli_repl\n    } {} {needs:repl}\n\n    test \"Piping raw protocol\" {\n        set cmds [tmpfile \"cli_cmds\"]\n        set cmds_fd [open $cmds \"w\"]\n\n        set cmds_count 2101\n\n        if {!$::singledb} {\n            puts $cmds_fd [formatCommand select 9]\n            incr cmds_count\n        }\n        puts $cmds_fd [formatCommand del test-counter]\n\n        for {set i 0} {$i < 1000} {incr i} {\n            puts $cmds_fd [formatCommand incr test-counter]\n            puts $cmds_fd [formatCommand set large-key [string repeat \"x\" 20000]]\n        }\n\n        for {set i 0} {$i < 100} {incr i} {\n            puts $cmds_fd [formatCommand set very-large-key [string repeat \"x\" 512000]]\n        }\n        close $cmds_fd\n\n        set cli_fd [open_cli \"--pipe\" $cmds]\n        fconfigure $cli_fd -blocking true\n        set output [read_cli $cli_fd]\n\n        assert_equal {1000} [r get test-counter]\n        assert_match \"*All data transferred*errors: 0*replies: ${cmds_count}*\" $output\n\n        file delete $cmds\n    }\n\n    test \"Options -X with illegal argument\" {\n        assert_error \"*-x and -X are mutually exclusive*\" {run_cli -x -X tag}\n\n        assert_error \"*Unrecognized option or bad number*\" {run_cli -X}\n\n        assert_error \"*tag not match*\" {run_cli_with_input_pipe X \"echo foo\" set key wrong_tag}\n    }\n\n    test \"DUMP RESTORE with -x option\" {\n        set cmdline [rediscli [srv host] [srv port]]\n\n        exec {*}$cmdline DEL set new_set\n        exec {*}$cmdline SADD set 1 2 3 4 5 6\n        assert_equal 6 [exec {*}$cmdline SCARD set]\n\n        assert_equal \"OK\" [exec {*}$cmdline -D \"\" --raw DUMP set | \\\n                                {*}$cmdline -x RESTORE new_set 0]\n\n        assert_equal 6 [exec {*}$cmdline SCARD new_set]\n        assert_equal \"1\\n2\\n3\\n4\\n5\\n6\" [exec {*}$cmdline SMEMBERS new_set]\n    }\n\n    test \"DUMP RESTORE with -X option\" {\n        set cmdline 
[rediscli [srv host] [srv port]]\n\n        exec {*}$cmdline DEL zset new_zset\n        exec {*}$cmdline ZADD zset 1 a 2 b 3 c\n        assert_equal 3 [exec {*}$cmdline ZCARD zset]\n\n        assert_equal \"OK\" [exec {*}$cmdline -D \"\" --raw DUMP zset | \\\n                                {*}$cmdline -X dump_tag RESTORE new_zset 0 dump_tag REPLACE]\n\n        assert_equal 3 [exec {*}$cmdline ZCARD new_zset]\n        assert_equal \"a\\n1\\nb\\n2\\nc\\n3\" [exec {*}$cmdline ZRANGE new_zset 0 -1 WITHSCORES]\n    }\n\n    test \"Send eval command by using --eval option\" {\n        set tmpfile [write_tmpfile {return ARGV[1]}]\n        set cmdline [rediscli [srv host] [srv port]]\n        assert_equal \"foo\" [exec {*}$cmdline --eval $tmpfile , foo]\n    }\n}\n\nstart_server {tags {\"cli external:skip\"}} {\n    test_interactive_cli_with_prompt \"db_num showed in redis-cli after reconnected\" {\n        run_command $fd \"select 0\\x0D\"\n        run_command $fd \"set a zoo-0\\x0D\"\n        run_command $fd \"select 6\\x0D\"\n        run_command $fd \"set a zoo-6\\x0D\"\n        r save\n\n        # kill server and restart\n        exec kill [s process_id]\n        wait_for_log_messages 0 {\"*Redis is now ready to exit*\"} 0 1000 10\n        catch {[run_command $fd \"ping\\x0D\"]} err\n        restart_server 0 true false 0\n\n        # redis-cli should show '[6]' after reconnected and return 'zoo-6'\n        write_cli $fd \"GET a\\x0D\"\n        after 100\n        set result [format_output [read_cli $fd]]\n        set regex {not connected> GET a.*\"zoo-6\".*127\\.0\\.0\\.1:[0-9]*\\[6\\]>}\n        assert_equal 1 [regexp $regex $result]\n    }\n}\n\nstart_server {tags {\"cli external:skip\"}} {\n    test \"keystats on empty database should not produce garbage stats\" {\n        # On an empty DB the keystats histogram has total_count = 0. 
Verify hdr_mean(), hdr_stddev(),\n        # and the percentile calculation handle this gracefully.\n        set cmd [rediscli [srv host] [srv port] [list --keystats]]\n        catch {exec {*}$cmd 2>@1} result\n        assert_match \"*Scanning the entire keyspace*\" $result\n\n        # The \"Note:\" line with Mean/StdDeviation is only printed when displayKeyStatsSizeDist()\n        # computes stats. When keysize_histogram->total_count == 0, it should be skipped entirely.\n        assert_match \"*No key size samples collected*\" $result\n    }\n}\n\nfile delete ./.rediscli_history_test\n"
  },
  {
    "path": "tests/integration/replication-2.tcl",
    "content": "start_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        test {First server should have role slave after SLAVEOF} {\n            r -1 slaveof [srv 0 host] [srv 0 port]\n            wait_replica_online r\n            wait_for_condition 50 100 {\n                [s -1 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {If min-slaves-to-write is honored, write is accepted} {\n            r config set min-slaves-to-write 1\n            r config set min-slaves-max-lag 10\n            r set foo 12345\n            wait_for_condition 50 100 {\n                [r -1 get foo] eq {12345}\n            } else {\n                fail \"Write did not reach replica\"\n            }\n        }\n\n        test {No write if min-slaves-to-write is < attached slaves} {\n            r config set min-slaves-to-write 2\n            r config set min-slaves-max-lag 10\n            catch {r set foo 12345} err\n            set err\n        } {NOREPLICAS*}\n\n        test {If min-slaves-to-write is honored, write is accepted (again)} {\n            r config set min-slaves-to-write 1\n            r config set min-slaves-max-lag 10\n            r set foo 12345\n            wait_for_condition 50 100 {\n                [r -1 get foo] eq {12345}\n            } else {\n                fail \"Write did not reach replica\"\n            }\n        }\n\n        test {No write if min-slaves-max-lag is > of the slave lag} {\n            r config set min-slaves-to-write 1\n            r config set min-slaves-max-lag 2\n            pause_process [srv -1 pid]\n            assert {[r set foo 12345] eq {OK}}\n            wait_for_condition 100 100 {\n                [catch {r set foo 12345}] != 0\n            } else {\n                fail \"Master didn't become readonly\"\n            }\n            catch {r set foo 12345} err\n            assert_match {NOREPLICAS*} $err\n      
  }\n        resume_process [srv -1 pid]\n\n        test {min-slaves-to-write is ignored by slaves} {\n            r config set min-slaves-to-write 1\n            r config set min-slaves-max-lag 10\n            r -1 config set min-slaves-to-write 1\n            r -1 config set min-slaves-max-lag 10\n            r set foo aaabbb\n            wait_for_condition 50 100 {\n                [r -1 get foo] eq {aaabbb}\n            } else {\n                fail \"Write did not reach replica\"\n            }\n        }\n\n        # Fix parameters for the next test to work\n        r config set min-slaves-to-write 0\n        r -1 config set min-slaves-to-write 0\n        r flushall\n\n        test {MASTER and SLAVE dataset should be identical after complex ops} {\n            createComplexDataset r 10000\n            after 500\n            if {[r debug digest] ne [r -1 debug digest]} {\n                set csv1 [csvdump r]\n                set csv2 [csvdump {r -1}]\n                set fd [open /tmp/repldump1.txt w]\n                puts -nonewline $fd $csv1\n                close $fd\n                set fd [open /tmp/repldump2.txt w]\n                puts -nonewline $fd $csv2\n                close $fd\n                puts \"Master - Replica inconsistency\"\n                puts \"Run diff -u against /tmp/repldump*.txt for more info\"\n            }\n            assert_equal [r debug digest] [r -1 debug digest]\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/replication-3.tcl",
    "content": "start_server {tags {\"repl external:skip debug_defrag:skip\"}} {\n    start_server {} {\n        test {First server should have role slave after SLAVEOF} {\n            r -1 slaveof [srv 0 host] [srv 0 port]\n            wait_for_condition 50 100 {\n                [s -1 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        if {$::accurate} {set numops 50000} else {set numops 5000}\n\n        test {MASTER and SLAVE consistency with expire} {\n            createComplexDataset r $numops useexpire\n\n            # Make sure everything expired before taking the digest\n            # createComplexDataset uses max expire time of 2 seconds\n            wait_for_condition 50 100 {\n                0 == [scan [regexp -inline {expires\\=([\\d]*)} [r -1 info keyspace]] expires=%d]\n            } else {\n                fail \"expire didn't end\"\n            }\n\n            # make sure the replica got all the DELs\n            wait_for_ofs_sync [srv 0 client] [srv -1 client]\n\n            if {[r debug digest] ne [r -1 debug digest]} {\n                set csv1 [csvdump r]\n                set csv2 [csvdump {r -1}]\n                set fd [open /tmp/repldump1.txt w]\n                puts -nonewline $fd $csv1\n                close $fd\n                set fd [open /tmp/repldump2.txt w]\n                puts -nonewline $fd $csv2\n                close $fd\n                puts \"Master - Replica inconsistency\"\n                puts \"Run diff -u against /tmp/repldump*.txt for more info\"\n            }\n            assert_equal [r debug digest] [r -1 debug digest]\n        }\n\n        test {Master can replicate command longer than client-query-buffer-limit on replica} {\n            # Configure the master to have a bigger query buffer limit\n            r config set client-query-buffer-limit 2000000\n            r -1 config set client-query-buffer-limit 1048576\n            # 
Write a very large command onto the master\n            r set key [string repeat \"x\" 1100000]\n            wait_for_condition 300 100 {\n                [r -1 get key] eq [string repeat \"x\" 1100000]\n            } else {\n                fail \"Unable to replicate command longer than client-query-buffer-limit\"\n            }\n        }\n\n        test {Slave is able to evict keys created in writable slaves} {\n            r -1 select 5\n            assert {[r -1 dbsize] == 0}\n            r -1 config set slave-read-only no\n            r -1 set key1 1 ex 5\n            r -1 set key2 2 ex 5\n            r -1 set key3 3 ex 5\n            assert {[r -1 dbsize] == 3}\n            after 6000\n            r -1 dbsize\n        } {0}\n\n        test {Writable replica doesn't return expired keys} {\n            r select 5\n            assert {[r dbsize] == 0}\n            r debug set-active-expire 0\n            r set key1 5 px 10\n            r set key2 5 px 10\n            r -1 select 5\n            wait_for_condition 50 100 {\n                [r -1 dbsize] == 2 && [r -1 exists key1 key2] == 0\n            } else {\n                fail \"Keys didn't replicate or didn't expire.\"\n            }\n            r -1 config set slave-read-only no\n            assert_equal 2 [r -1 dbsize]    ; # active expire is off\n            assert_equal 1 [r -1 incr key1] ; # incr expires and re-creates key1\n            assert_equal -1 [r -1 ttl key1] ; # incr created key1 without TTL\n            assert_equal {} [r -1 get key2] ; # key2 expired but not deleted\n            assert_equal 2 [r -1 dbsize]\n            # cleanup\n            r debug set-active-expire 1\n            r -1 del key1 key2\n            r -1 config set slave-read-only yes\n            r del key1 key2\n        }\n\n        test {PFCOUNT updates cache on readonly replica} {\n            r select 5\n            assert {[r dbsize] == 0}\n            r pfadd key a b c d e f g h i j k l m n o p q\n            set 
strval [r get key]\n            r -1 select 5\n            wait_for_condition 50 100 {\n                [r -1 dbsize] == 1\n            } else {\n                fail \"Replication timeout.\"\n            }\n            assert {$strval == [r -1 get key]}\n            assert_equal 17 [r -1 pfcount key]\n            assert {$strval != [r -1 get key]}; # cache updated\n            # cleanup\n            r del key\n        }\n\n        test {PFCOUNT doesn't use expired key on readonly replica} {\n            r select 5\n            assert {[r dbsize] == 0}\n            r debug set-active-expire 0\n            r pfadd key a b c d e f g h i j k l m n o p q\n            r pexpire key 10\n            r -1 select 5\n            wait_for_condition 50 100 {\n                [r -1 dbsize] == 1 && [r -1 exists key] == 0\n            } else {\n                fail \"Key didn't replicate or didn't expire.\"\n            }\n            assert_equal [r -1 pfcount key] 0 ; # expired key not used\n            assert_equal [r -1 dbsize] 1      ; # but it's also not deleted\n            # cleanup\n            r debug set-active-expire 1\n            r del key\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/replication-4.tcl",
    "content": "start_server {tags {\"repl network external:skip singledb:skip\"} overrides {save {}}} {\n    start_server { overrides {save {}}} {\n\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        set load_handle0 [start_bg_complex_data $master_host $master_port 9 100000]\n        set load_handle1 [start_bg_complex_data $master_host $master_port 11 100000]\n        set load_handle2 [start_bg_complex_data $master_host $master_port 12 100000]\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 role] eq {slave}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {Test replication with parallel clients writing in different DBs} {\n            # Gives the random workloads a chance to add some complex commands.\n            after 5000\n\n            # Make sure all parallel clients have written data.\n            wait_for_condition 1000 50 {\n                [$master select 9] == {OK} && [$master dbsize] > 0 &&\n                [$master select 11] == {OK} && [$master dbsize] > 0 &&\n                [$master select 12] == {OK} && [$master dbsize] > 0\n            } else {\n                fail \"Parallel clients are not writing in different DBs.\"\n            }\n\n            stop_bg_complex_data $load_handle0\n            stop_bg_complex_data $load_handle1\n            stop_bg_complex_data $load_handle2\n            wait_for_condition 100 100 {\n                [$master debug digest] == [$slave debug digest]\n            } else {\n                set csv1 [csvdump r]\n                set csv2 [csvdump {r -1}]\n                set fd [open /tmp/repldump1.txt w]\n                puts -nonewline $fd $csv1\n                close $fd\n                set fd [open 
/tmp/repldump2.txt w]\n                puts -nonewline $fd $csv2\n                close $fd\n                fail \"Master - Replica inconsistency, Run diff -u against /tmp/repldump*.txt for more info\"\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        # Load some functions to be used later\n        $master FUNCTION load replace {#!lua name=test\n            redis.register_function{function_name='f_default_flags', callback=function(keys, args) return redis.call('get',keys[1]) end, flags={}}\n            redis.register_function{function_name='f_no_writes', callback=function(keys, args) return redis.call('get',keys[1]) end, flags={'no-writes'}}\n        }\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            wait_replica_online $master\n        }\n\n        test {With min-slaves-to-write (1,3): master should be writable} {\n            $master config set min-slaves-max-lag 3\n            $master config set min-slaves-to-write 1\n            assert_equal OK [$master set foo 123]\n            assert_equal OK [$master eval \"return redis.call('set','foo',12345)\" 0]\n        }\n\n        test {With min-slaves-to-write (2,3): master should not be writable} {\n            $master config set min-slaves-max-lag 3\n            $master config set min-slaves-to-write 2\n            assert_error \"*NOREPLICAS*\" {$master set foo bar}\n            assert_error \"*NOREPLICAS*\" {$master eval \"redis.call('set','foo','bar')\" 0}\n        }\n\n        test {With min-slaves-to-write function without no-write flag} {\n            assert_error \"*NOREPLICAS*\" {$master fcall f_default_flags 1 foo}\n            assert_equal \"12345\" [$master fcall f_no_writes 1 foo]\n        }\n\n        
test {With not enough good slaves, read in Lua script is still accepted} {\n            $master config set min-slaves-max-lag 3\n            $master config set min-slaves-to-write 1\n            $master eval \"redis.call('set','foo','bar')\" 0\n\n            $master config set min-slaves-to-write 2\n            $master eval \"return redis.call('get','foo')\" 0\n        } {bar}\n\n        test {With min-slaves-to-write: master not writable with lagged slave} {\n            $master config set min-slaves-max-lag 2\n            $master config set min-slaves-to-write 1\n            assert_equal OK [$master set foo 123]\n            assert_equal OK [$master eval \"return redis.call('set','foo',12345)\" 0]\n            # Killing a slave to make it become a lagged slave.\n            pause_process [srv 0 pid]\n            # Waiting for slave kill.\n            wait_for_condition 100 100 {\n                [catch {$master set foo 123}] != 0\n            } else {\n                fail \"Master didn't become readonly\"\n            }\n            assert_error \"*NOREPLICAS*\" {$master set foo 123}\n            assert_error \"*NOREPLICAS*\" {$master eval \"return redis.call('set','foo',12345)\" 0}\n            resume_process [srv 0 pid]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {Replication of an expired key does not delete the expired key} {\n            # This test is very likely to do a false positive if the wait_for_ofs_sync\n            # takes longer 
than the expiration time, so give it a few more chances.\n            # Go with 5 retries of increasing timeout, i.e. start with 500ms, then go\n            # to 1000ms, 2000ms, 4000ms, 8000ms.\n            set px_ms 500\n            for {set i 0} {$i < 5} {incr i} {\n\n            wait_for_ofs_sync $master $slave\n            $master debug set-active-expire 0\n            $master set k 1 px $px_ms\n            wait_for_ofs_sync $master $slave\n            pause_process [srv 0 pid]\n            $master incr k\n            after [expr $px_ms + 1]\n            # Stopping the replica for one second to make sure the INCR arrives\n            # at the replica after the key is logically expired.\n            resume_process [srv 0 pid]\n            wait_for_ofs_sync $master $slave\n            # Check that k is logically expired but is present in the replica.\n            set res [$slave exists k]\n            set errcode [catch {$slave debug object k} err] ; # Raises exception if k is gone.\n            if {$res == 0 && $errcode == 0} { break }\n            set px_ms [expr $px_ms * 2]\n\n            } ;# for\n\n            if {$::verbose} { puts \"Replication of an expired key does not delete the expired key test attempts: $i\" }\n            assert_equal $res 0\n            assert_equal $errcode 0\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 role] eq {slave}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {Replication: commands with many arguments (issue #1221)} {\n            # We now issue large MSET commands, that may 
trigger a specific\n            # class of bugs, see issue #1221.\n            for {set j 0} {$j < 100} {incr j} {\n                set cmd [list mset]\n                for {set x 0} {$x < 1000} {incr x} {\n                    lappend cmd [randomKey] [randomValue]\n                }\n                $master {*}$cmd\n            }\n\n            set retry 10\n            while {$retry && ([$master debug digest] ne [$slave debug digest])}\\\n            {\n                after 1000\n                incr retry -1\n            }\n            assert {[$master dbsize] > 0}\n        }\n\n        test {spopwithcount rewrite srem command} {\n            $master del myset\n\n            set content {}\n            for {set j 0} {$j < 4000} {} {\n                lappend content [incr j]\n            }\n            $master sadd myset {*}$content\n            $master spop myset 1023\n            $master spop myset 1024\n            $master spop myset 1025\n\n            assert_match 928 [$master scard myset]\n            assert_match {*calls=3,*} [cmdrstat spop $master]\n\n            wait_for_condition 50 100 {\n                 [status $slave master_repl_offset] == [status $master master_repl_offset]\n            } else {\n                fail \"SREM replication inconsistency.\"\n            }\n            assert_match {*calls=4,*} [cmdrstat srem $slave]\n            assert_match 928 [$slave scard myset]\n        }\n\n        test {Replication of SPOP command -- alsoPropagate() API} {\n            $master del myset\n            set size [expr 1+[randomInt 100]]\n            set content {}\n            for {set j 0} {$j < $size} {incr j} {\n                lappend content [randomValue]\n            }\n            $master sadd myset {*}$content\n\n            set count [randomInt 100]\n            set result [$master spop myset $count]\n\n            wait_for_condition 50 100 {\n                [$master debug digest] eq [$slave debug digest]\n            } else {\n             
   fail \"SPOP replication inconsistency\"\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set replica [srv 0 client]\n\n
        test {First server should have role slave after SLAVEOF} {\n            $replica slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 role] eq {slave}\n            } else {\n                fail \"Replication not started.\"\n            }\n            wait_for_sync $replica\n        }\n\n
        test {Data divergence can happen under default conditions} {\n            $replica config set propagation-error-behavior ignore\n            $master debug replicate fake-command-1\n\n            # Wait for replication to normalize\n            $master set foo bar2\n            $master wait 1 2000\n\n            # Make sure we triggered the error by finding the critical\n            # message and the fake command.\n            assert_equal [count_log_message 0 \"fake-command-1\"] 1\n            assert_equal [count_log_message 0 \"== CRITICAL ==\"] 1\n        }\n\n
        test {Data divergence is allowed on writable replicas} {\n            $replica config set replica-read-only no\n            $replica set number2 foo\n            $master incrby number2 1\n            $master wait 1 2000\n\n            assert_equal [$master get number2] 1\n            assert_equal [$replica get number2] foo\n\n            assert_equal [count_log_message 0 \"incrby\"] 1\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/replication-buffer.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n
# This test group verifies that all replicas share one global replication buffer:\n# adding replicas doesn't double the replication buffer size, and the replication\n# buffer shrinks once no replica needs it.\nforeach rdbchannel {\"yes\" \"no\"} {\nstart_server {tags {\"repl external:skip\"}} {\nstart_server {} {\nstart_server {} {\nstart_server {} {\n    set replica1 [srv -3 client]\n    set replica2 [srv -2 client]\n    set replica3 [srv -1 client]\n\n
    $replica1 config set repl-rdb-channel $rdbchannel\n    $replica2 config set repl-rdb-channel $rdbchannel\n    $replica3 config set repl-rdb-channel $rdbchannel\n\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n
    $master config set save \"\"\n    $master config set repl-backlog-size 16384\n    $master config set repl-diskless-sync-delay 5\n    $master config set repl-diskless-sync-max-replicas 1\n    $master config set client-output-buffer-limit \"replica 0 0 0\"\n    $master config set repl-rdb-channel $rdbchannel\n\n
    # Make sure replica3 is synchronized with the master\n    $replica3 replicaof $master_host $master_port\n    wait_for_sync $replica3\n\n    # Generating the RDB will take about 100 seconds\n    $master config set rdb-key-save-delay 1000000\n    populate 100 \"\" 16\n\n
    # Make sure replica1 and replica2 are waiting for bgsave\n    $master config set repl-diskless-sync-max-replicas 2\n    $replica1 replicaof $master_host $master_port\n    $replica2 replicaof $master_host $master_port\n  
  wait_for_condition 50 100 {\n        ([s rdb_bgsave_in_progress] == 1) &&\n        [lindex [$replica1 role] 3] eq {sync} &&\n        [lindex [$replica2 role] 3] eq {sync}\n    } else {\n        fail \"failed to sync with replicas\"\n    }\n\n
    test \"All replicas share one global replication buffer rdbchannel=$rdbchannel\" {\n        set before_used [s used_memory]\n        populate 1024 \"\" 1024 ; # Write an extra 1MB of data\n\n
        # In case we are running with IO-threads we need to give a few cycles\n        # for IO-threads to start sending the cmd stream. If we don't do that\n        # the checks related to the repl_buf_mem will be incorrect as the buffer\n        # will still be full with the above 1MB of data.\n        set iothreads [s io_threads_active]\n        if {$iothreads && $rdbchannel == \"yes\"} {\n            after 1000\n        }\n\n
        # The new data uses 1MB of memory, but all replicas share a single\n        # replication buffer, so the total replica output memory is no\n        # more than double the replication buffer size.\n        set repl_buf_mem [s mem_total_replication_buffers]\n        set extra_mem [expr {[s used_memory]-$before_used-1024*1024}]\n        if {$rdbchannel == \"yes\"} {\n            # master's replication buffers should not grow\n            assert {$extra_mem < 1024*1024}\n            assert {$repl_buf_mem < 1024*1024}\n        } else {\n            assert {$extra_mem < 2*$repl_buf_mem}\n        }\n\n
        # Kill replica1; the replication buffer will not become smaller\n        catch {$replica1 shutdown nosave}\n        wait_for_condition 50 100 {\n            [s connected_slaves] eq {2}\n        } else {\n            fail \"replica1 didn't disconnect from the master\"\n        }\n        assert_equal $repl_buf_mem [s mem_total_replication_buffers]\n    }\n\n
    test \"Replication buffer will become smaller when no replica uses it rdbchannel=$rdbchannel\" {\n        # Make sure replica3 catches up with the master\n        wait_for_ofs_sync $master $replica3\n\n
        set repl_buf_mem [s mem_total_replication_buffers]\n        # Kill replica2; the replication buffer will become smaller\n        catch {$replica2 shutdown nosave}\n        wait_for_condition 50 100 {\n            [s connected_slaves] eq {1}\n        } else {\n            fail \"replica2 didn't disconnect from the master\"\n        }\n        if {$rdbchannel == \"yes\"} {\n            # master's replication buffers should not grow\n            assert {1024*512 > [s mem_total_replication_buffers]}\n        } else {\n            assert {[expr $repl_buf_mem - 1024*1024] > [s mem_total_replication_buffers]}\n        }\n    }\n}\n}\n}\n}\n}\n\n
# This test group verifies that the replication backlog can outgrow the configured\n# backlog limit when a slow replica keeps a massive replication buffer alive, and\n# that replicas can use this replication buffer (beyond the backlog config) for\n# partial resynchronization. Replication backlog memory can also shrink again when\n# the master disconnects slow replicas because their output buffer limit is reached.\nforeach rdbchannel {\"yes\" \"no\"} {\nstart_server {tags {\"repl external:skip debug_defrag:skip\"}} {\nstart_server {} {\nstart_server {} {\n
    set replica1 [srv -2 client]\n    set replica1_pid [s -2 process_id]\n    set replica2 [srv -1 client]\n    set replica2_pid [s -1 process_id]\n\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n
    $master config set save \"\"\n    $master config set repl-backlog-size 16384\n    $master config set repl-rdb-channel $rdbchannel\n    $master config set client-output-buffer-limit \"replica 0 0 0\"\n\n
    # Executing 'debug digest' on a master with many keys takes a long time\n    # (especially under valgrind); this can cause replica1 and replica2 to\n    # disconnect from the master.\n    $master config set repl-timeout 1000\n    $replica1 config set repl-timeout 1000\n    $replica1 config set repl-rdb-channel $rdbchannel\n    $replica1 config set client-output-buffer-limit \"replica 1024 0 0\"\n    $replica2 config set repl-timeout 1000\n    $replica2 config set client-output-buffer-limit \"replica 1024 0 0\"\n    $replica2 config set repl-rdb-channel $rdbchannel\n\n
    $replica1 replicaof $master_host $master_port\n    wait_for_sync $replica1\n\n
    test \"Replication backlog size can outgrow the backlog limit config rdbchannel=$rdbchannel\" {\n        # Generating the RDB will take about 1000 seconds\n        $master config set rdb-key-save-delay 1000000\n        populate 1000 master 10000\n        $replica2 replicaof $master_host $master_port\n        # Make sure replica2 is waiting for bgsave\n        wait_for_condition 5000 100 {\n            ([s rdb_bgsave_in_progress] == 1) &&\n            [lindex [$replica2 role] 3] eq {sync}\n        } else {\n            fail \"failed to sync with replica2\"\n        }\n        # The actual replication backlog grows beyond the configured limit since\n        # the slow replica2 keeps the replication buffer referenced.\n        populate 20000 master 10000\n        assert {[s repl_backlog_histlen] > [expr 10000*10000]}\n    }\n\n
    # Wait for replica1 to catch up with the master\n    wait_for_condition 1000 100 {\n        [s -2 master_repl_offset] eq [s master_repl_offset]\n    } else {\n        fail \"Replica offset didn't catch up with the master in time\"\n    }\n\n
    test \"Replica could use replication buffer (beyond backlog config) for partial resynchronization rdbchannel=$rdbchannel\" {\n        # replica1 disconnects from the master\n        $replica1 replicaof [srv -1 host] [srv -1 port]\n        # Write a mass of data that exceeds repl-backlog-size\n        populate 10000 master 10000\n        # replica1 reconnects with the master\n        $replica1 replicaof $master_host $master_port\n        wait_for_condition 1000 100 {\n            [s -2 master_repl_offset] eq [s master_repl_offset]\n        } else {\n            fail \"Replica offset didn't catch up with the master in time\"\n        }\n\n
        # replica2 is still waiting for bgsave to finish\n        assert {[s rdb_bgsave_in_progress] eq {1} && [lindex [$replica2 role] 3] eq {sync}}\n        # master accepted replica1 partial resync\n        assert_equal [s sync_partial_ok] {1}\n        assert_equal [$master debug digest] [$replica1 debug digest]\n    }\n\n
    test \"Replication backlog memory will become smaller when disconnecting a replica rdbchannel=$rdbchannel\" {\n        assert {[s repl_backlog_histlen] > [expr 2*10000*10000]}\n        assert_equal [s connected_slaves] {2}\n\n
        pause_process $replica2_pid\n        r config set client-output-buffer-limit \"replica 128k 0 0\"\n        # trigger output buffer limit check\n        r set key [string repeat A [expr 64*1024]]\n        # master will close replica2's connection since replica2's output\n        # buffer limit is reached, so only replica1 remains.\n        # In case of rdbchannel=yes, only the main channel will be disconnected.\n        wait_for_condition 100 100 {\n            [s connected_slaves] eq {1} ||\n            ([s connected_slaves] eq {2} &&\n            [string match {*slave*state=wait_bgsave*} [$master info]])\n        } else {\n            fail \"master didn't disconnect replica2\"\n        }\n\n
        # Since we trim the replication backlog incrementally, replication backlog\n        # memory may take time to be reclaimed.\n        wait_for_condition 1000 100 {\n            [s repl_backlog_histlen] < [expr 10000*10000]\n        } else {\n            fail \"Replication backlog memory is not smaller\"\n        }\n        resume_process $replica2_pid\n    }\n    # speed up termination\n    $master config set shutdown-timeout 0\n}\n}\n}\n}\n\n
foreach rdbchannel {\"yes\" \"no\"} {\ntest \"Partial resynchronization is successful even if client-output-buffer-limit is less than repl-backlog-size rdbchannel=$rdbchannel\" {\n    start_server {tags {\"repl external:skip\"}} {\n        
start_server {} {\n            r config set save \"\"\n            r config set repl-backlog-size 100mb\n            r config set client-output-buffer-limit \"replica 512k 0 0\"\n            r config set repl-rdb-channel $rdbchannel\n\n
            set replica [srv -1 client]\n            $replica config set repl-rdb-channel $rdbchannel\n            $replica replicaof [srv 0 host] [srv 0 port]\n            wait_for_sync $replica\n\n
            set big_str [string repeat A [expr 10*1024*1024]] ;# 10mb big string\n            r multi\n            r client kill type replica\n            r set key $big_str\n            r set key $big_str\n            r debug sleep 2 ;# wait for replica reconnecting\n            r exec\n
            # When the replica reconnects with the master, the master accepts a\n            # partial resync and doesn't close the replica client even if the\n            # client output buffer limit is reached.\n            r set key $big_str ;# trigger output buffer limit check\n            wait_for_ofs_sync r $replica\n            # master accepted replica partial resync\n            assert_equal [s sync_full] {1}\n            assert_equal [s sync_partial_ok] {1}\n\n
            r multi\n            r set key $big_str\n            r set key $big_str\n            r exec\n
            # The replica's reply buffer size exceeds client-output-buffer-limit but\n            # not repl-backlog-size, so we don't close the replica client.\n            wait_for_condition 1000 100 {\n                [s -1 master_repl_offset] eq [s master_repl_offset]\n            } else {\n                fail \"Replica offset didn't catch up with the master in time\"\n            }\n            assert_equal [s sync_full] {1}\n            assert_equal [s sync_partial_ok] {1}\n        }\n    }\n}\n\n
# This test was added to make sure big keys added to the backlog do not trigger a psync loop.\ntest \"Replica client-output-buffer size is limited to backlog_limit/16 when no replication data is pending rdbchannel=$rdbchannel\" {\n
    proc client_field {r type f} {\n        set client [$r client list type $type]\n        if {![regexp $f=(\\[a-zA-Z0-9-\\]+) $client - res]} {\n            error \"field $f not found in $client\"\n        }\n        return $res\n    }\n\n
    start_server {tags {\"repl external:skip\"}} {\n        start_server {} {\n            set replica [srv -1 client]\n            set replica_host [srv -1 host]\n            set replica_port [srv -1 port]\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n            $master config set maxmemory-policy allkeys-lru\n\n
            $master config set repl-backlog-size 16384\n            $master config set client-output-buffer-limit \"replica 32768 32768 60\"\n            $master config set repl-rdb-channel $rdbchannel\n            $replica config set repl-rdb-channel $rdbchannel\n            # Key has to be larger than the replica client-output-buffer limit.\n            set keysize [expr 256*1024]\n\n
            $replica replicaof $master_host $master_port\n            wait_for_condition 50 100 {\n                [lindex [$replica role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$replica info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n
            # Write a big key that will breach the obuf limit and cause the replica to disconnect,\n            # then in the same event loop, add at least 16 more keys, and enable eviction, so that the\n            # eviction code has a chance to call flushSlavesOutputBuffers, and then run PING to trigger the eviction code\n            set _v [prepare_value $keysize]\n            $master write \"[format_command mset key $_v k1 1 k2 2 k3 3 k4 4 k5 5 k6 6 k7 7 k8 8 k9 9 ka a kb b kc c kd d ke e kf f kg g kh h]config set maxmemory 1\\r\\nping\\r\\n\"\n            $master flush\n  
          $master read\n            $master read\n            $master read\n            wait_for_ofs_sync $master $replica\n\n            # Write another key to force the test to wait for another event loop iteration so that we\n            # give the serverCron a chance to disconnect replicas with COB size exceeding the limits\n            $master config set maxmemory 0\n            $master set key1 1\n            wait_for_ofs_sync $master $replica\n\n            assert {[status $master connected_slaves] == 1}\n\n
            wait_for_condition 50 100 {\n                [client_field $master replica tot-mem] < $keysize\n            } else {\n                fail \"replica client-output-buffer usage is higher than expected.\"\n            }\n\n            # Now we expect the replica to reconnect but fail the partial sync (it doesn't\n            # have a large enough COB limit, so it must result in a full sync)\n            assert {[status $master sync_partial_ok] == 0}\n\n
            # Before this fix (#11905), the test would trigger an assertion in 'o->used >= c->ref_block_pos'\n            test {The update of replBufBlock's repl_offset is ok - Regression test for #11666} {\n                set rd [redis_deferring_client]\n                set replid [status $master master_replid]\n                set offset [status $master repl_backlog_first_byte_offset]\n                $rd psync $replid $offset\n                assert_equal {PONG} [$master ping] ;# Make sure the master doesn't crash.\n                $rd close\n            }\n        }\n    }\n}\n}\n\n"
  },
  {
    "path": "tests/integration/replication-iothreads.tcl",
    "content": "#\n# Copyright (c) 2025-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n\n# Tests for master and slave clients in IO threads\n\n# Helper function to get master client IO thread from INFO replication\nproc get_master_client_io_thread {r} {\n    return [status $r master_client_io_thread]\n}\n\n# Helper function to get slave client IO thread from INFO replication\nproc get_slave_client_io_thread {r slave_idx} {\n    set info [$r info replication]\n    set lines [split $info \"\\r\\n\"]\n\n    foreach line $lines {\n        if {[string match \"slave${slave_idx}:*\" $line]} {\n            # Parse the slave line to extract io-thread value\n            set parts [split $line \",\"]\n            foreach part $parts {\n                if {[string match \"*io-thread=*\" $part]} {\n                    set kv [split $part \"=\"]\n                    assert_equal [llength $kv] 2\n                    return [lindex $kv 1]\n                }\n            }\n        }\n    }\n    return -1\n}\n\nstart_server {overrides {io-threads 4 save \"\"} tags {\"iothreads repl network external:skip\"}} {\nstart_server {overrides {io-threads 4 save \"\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    set slave [srv -1 client]\n\n    test {Setup slave} {\n        $slave slaveof $master_host $master_port\n        wait_for_condition 1000 100 {\n            [s -1 master_link_status] eq {up}\n        } else {\n            fail \"Replication not started.\"\n        }\n    }\n\n    test {Master client moves to IO thread after sync complete} {\n        # Check master client thread assignment (master client is on slave side)\n        wait_for_condition 100 100 {\n            [get_master_client_io_thread $slave] > 0\n        } 
else {\n            fail \"Master client was not assigned to IO thread\"\n        }\n    }\n\n    test {Slave client assignment to IO threads} {\n        # Verify slave is connected and online\n        wait_replica_online $master 0\n\n        # Slave client is connected - force a write so that it's assigned to an\n        # IO thread.\n        assert_equal \"OK\" [$master set x x]\n\n        # Check slave client thread assignment\n        wait_for_condition 50 100 {\n            [get_slave_client_io_thread $master 0] > 0\n        } else {\n            fail \"Slave client was not assigned to IO thread\"\n        }\n    }\n\n    test {WAIT command works with master/slave in IO threads} {\n        # Test basic WAIT functionality\n        $master set wait_test_key1 value1\n        $master set wait_test_key2 value2\n        $master incr wait_counter\n\n        assert {[$master wait 1 2000] == 1}\n\n        # Verify data reached slave\n        wait_for_condition 10 100 {\n            [$slave get wait_test_key1] eq \"value1\" &&\n            [$slave get wait_test_key2] eq \"value2\" &&\n            [$slave get wait_counter] eq \"1\"\n        } else {\n            fail \"commands not propagated to IO thread slave in time\"\n        }\n    }\n\n    test {Replication data integrity with IO threads} {\n        # Generate significant replication traffic\n        for {set i 0} {$i < 100} {incr i} {\n            $master set bulk_key_$i [string repeat \"data\" 10]\n            $master lpush bulk_list element_$i\n            $master zadd bulk_zset $i member_$i\n            if {$i % 20 == 0} {\n                # Periodically verify WAIT works\n                assert {[$master wait 1 2000] == 1}\n            }\n        }\n\n        # Final verification\n        wait_for_condition 50 100 {\n            [$slave get bulk_key_99] eq [string repeat \"data\" 10] &&\n            [$slave llen bulk_list] eq 100 &&\n            [$slave zcard bulk_zset] eq 100\n        } else {\n            
fail \"Replication data integrity failed\"\n        }\n    }\n\n    test {WAIT timeout behavior with slave in IO thread} {\n        set slave_pid [srv -1 pid]\n\n        # Pause slave to test timeout\n        pause_process $slave_pid\n\n        # Should timeout and return 0 acks\n        $master set timeout_test_key timeout_value\n        set start_time [clock milliseconds]\n        assert {[$master wait 1 2000] == 0}\n        set elapsed [expr {[clock milliseconds] - $start_time}]\n        assert_range $elapsed 2000 2500\n\n        # Resume and verify recovery\n        resume_process $slave_pid\n\n        assert {[$master wait 1 2000] == 1}\n\n        # Verify data reached slave after resume\n        wait_for_condition 10 100 {\n            [$slave get timeout_test_key] eq \"timeout_value\"\n        } else {\n            fail \"commands not propagated to IO thread slave in time\"\n        }\n    }\n\n    test {Network interruption recovery with IO threads} {\n        # Generate traffic before interruption\n        for {set i 0} {$i < 50} {incr i} {\n            $master set pre_interrupt_$i value_$i\n        }\n\n        # Simulate network interruption\n        pause_process $slave_pid\n\n        # Continue writing during interruption\n        for {set i 0} {$i < 50} {incr i} {\n            $master set during_interrupt_$i value_$i\n        }\n\n        # WAIT should timeout\n        assert {[$master wait 1 2000] == 0}\n\n        # Resume slave and verify recovery\n        resume_process $slave_pid\n\n        # Verify WAIT works again\n        assert {[$master wait 1 2000] == 1}\n\n        # Wait for reconnection and catch up\n        wait_for_condition 100 100 {\n            [$slave get during_interrupt_49] eq \"value_49\"\n        } else {\n            fail \"Slave didn't catch up after network recovery\"\n        }\n\n        $master set post_recovery_test recovery_value\n        wait_for_condition 10 100 {\n          [$slave get post_recovery_test] eq 
\"recovery_value\"\n        } else {\n          fail \"Slave didn't receive 'set post_recovery_test' command\"\n        }\n\n        # Check thread assignments after recovery\n        wait_for_condition 100 100 {\n            [get_master_client_io_thread $slave] > 0\n        } else {\n            fail \"Slave client not assigned to IO thread after recovery\"\n        }\n    }\n\n    test {Replication reconnection cycles with IO threads} {\n        # Test multiple disconnect/reconnect cycles\n        for {set cycle 0} {$cycle < 3} {incr cycle} {\n            # Generate traffic\n            for {set i 0} {$i < 20} {incr i} {\n                $master set cycle_${cycle}_key_$i value_$i\n            }\n\n            assert {[$master wait 1 2000] == 1}\n\n            # Record thread assignments during cycle\n            set master_thread [get_master_client_io_thread $slave]\n            set slave_thread [get_slave_client_io_thread $master 0]\n            puts \"Cycle $cycle - Master thread: $master_thread, Slave thread: $slave_thread\"\n\n            # Disconnect and reconnect (except last cycle)\n            if {$cycle < 2} {\n                $slave replicaof no one\n                after 100\n                $slave replicaof $master_host $master_port\n                wait_for_sync $slave\n            }\n        }\n\n        # Verify final state\n        wait_for_condition 10 100 {\n            [$slave get cycle_2_key_19] eq \"value_19\"\n        } else {\n            fail \"last command not propagated to IO thread slave in time\"\n        }\n    }\n\n    test {INFO replication shows correct thread information} {\n        # Test INFO replication output format\n        set info [$master info replication]\n\n        # Should show master role\n        assert_match \"*role:master*\" $info\n\n        # Should have slave thread information\n        assert_match \"*slave0:*io-thread=*\" $info\n\n        # Test we can parse the thread ID\n        set slave_thread 
[get_slave_client_io_thread $master 0]\n        assert_morethan $slave_thread 0\n\n        # Test master client thread info\n        set slave_info [$slave info replication]\n        assert_match \"*role:slave*\" $slave_info\n        assert_match \"*master_client_io_thread:*\" $slave_info\n\n        set master_thread [get_master_client_io_thread $slave]\n        assert_morethan $master_thread 0\n    }\n}\n}\n\nstart_server {overrides {io-threads 4 save \"\"} tags {\"iothreads repl network external:skip\"}} {\nstart_server {overrides {io-threads 4 save \"\"}} {\nstart_server {overrides {io-threads 4 save \"\"}} {\nstart_server {overrides {io-threads 4 save \"\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    set slave1 [srv -1 client]\n    set slave2 [srv -2 client]\n    set slave3 [srv -3 client]\n\n    test {Multiple slaves across IO threads} {\n        # Setup replication for all slaves\n        $slave1 replicaof $master_host $master_port\n        $slave2 replicaof $master_host $master_port\n        $slave3 replicaof $master_host $master_port\n\n        # Wait for all slaves to be online\n        wait_replica_online $master 0\n        wait_replica_online $master 1\n        wait_replica_online $master 2\n\n        set iterations 5\n        while {[incr iterations -1] >= 0} {\n            # Slave clients are connected - force a write so that they are assigned\n            # to IO threads.\n            assert_equal \"OK\" [$master set x x]\n\n            wait_for_condition 10 100 {\n                ([get_slave_client_io_thread $master 0] > 0) &&\n                ([get_slave_client_io_thread $master 1] > 0) &&\n                ([get_slave_client_io_thread $master 2] > 0)\n            } else {\n                continue\n            }\n\n            break\n        }\n        if {$iterations < 0} {\n            fail \"Replicas failed to be assigned to IO threads in time\"\n        }\n\n        # Test concurrent 
replication to all slaves\n        for {set i 0} {$i < 200} {incr i} {\n            $master set multi_key_$i value_$i\n            if {$i % 50 == 0} {\n                assert {[$master wait 3 2000] == 3}\n            }\n        }\n\n        # Final verification all slaves got data\n        wait_for_condition 50 100 {\n            [$slave1 get multi_key_199] eq \"value_199\" &&\n            [$slave2 get multi_key_199] eq \"value_199\" &&\n            [$slave3 get multi_key_199] eq \"value_199\"\n        } else {\n            fail \"Multi-slave replication failed\"\n        }\n    }\n\n    test {WAIT with multiple slaves in IO threads} {\n        # Test various WAIT scenarios\n        $master set wait_multi_test1 value1\n        assert {[$master wait 3 2000] == 3}\n\n        $master set wait_multi_test2 value2\n        assert {[$master wait 2 2000] >= 2}\n\n        $master set wait_multi_test3 value3\n        assert {[$master wait 1 2000] >= 1}\n\n        # Verify all slaves have the data\n        wait_for_condition 10 100 {\n            [$slave1 get wait_multi_test3] eq \"value3\" &&\n            [$slave2 get wait_multi_test3] eq \"value3\" &&\n            [$slave3 get wait_multi_test3] eq \"value3\"\n        } else {\n            fail \"commands not propagated to io thread slaves in time\"\n        }\n    }\n}\n}\n}\n}\n\n"
  },
  {
    "path": "tests/integration/replication-psync.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n
# Creates a master-slave pair and breaks the link continuously to force\n# partial resync attempts, all this while flooding the master with\n# write queries.\n#\n# You can specify backlog size, ttl, delay before reconnection, test duration\n# in seconds, and an additional condition to verify at the end.\n#\n# If reconnect is > 0, the test actually tries to break the connection and\n# reconnect with the master, otherwise just the initial synchronization is\n# checked for consistency.\n
proc test_psync {descr duration backlog_size backlog_ttl delay cond mdl sdl reconnect rdbchannel} {\n    start_server {tags {\"repl\"} overrides {save {}}} {\n        start_server {overrides {save {}}} {\n\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n            set slave [srv 0 client]\n\n
            $master config set repl-backlog-size $backlog_size\n            $master config set repl-backlog-ttl $backlog_ttl\n            $master config set repl-diskless-sync $mdl\n            $master config set repl-diskless-sync-delay 1\n            $master config set repl-rdb-channel $rdbchannel\n            $slave config set repl-diskless-load $sdl\n            $slave config set repl-rdb-channel $rdbchannel\n\n
            set load_handle0 [start_bg_complex_data $master_host $master_port 9 100000]\n            set load_handle1 [start_bg_complex_data $master_host $master_port 11 100000]\n            set load_handle2 [start_bg_complex_data $master_host $master_port 12 100000]\n\n
            test {Slave should be able to synchronize with the master} {\n                $slave slaveof $master_host $master_port\n                wait_for_condition 50 100 {\n                    [lindex [r role] 0] eq {slave} &&\n                    [lindex [r role] 3] eq {connected}\n                } else {\n                    fail \"Replication not started.\"\n                }\n            }\n\n
            # Check that the background clients are actually writing.\n            test {Detect write load to master} {\n                wait_for_condition 50 1000 {\n                    [$master dbsize] > 100\n                } else {\n                    fail \"Can't detect write load from background clients.\"\n                }\n            }\n\n
            test \"Test replication partial resync: $descr (diskless: $mdl, $sdl, reconnect: $reconnect, rdbchannel: $rdbchannel)\" {\n                # Now while the clients are writing data, break the master-slave\n                # link multiple times.\n                if {$reconnect} {\n                    for {set j 0} {$j < $duration*10} {incr j} {\n                        after 100\n                        # catch {puts \"MASTER [$master dbsize] keys, REPLICA [$slave dbsize] keys\"}\n\n
                        if {($j % 20) == 0} {\n                            catch {\n                                if {$delay} {\n                                    $slave multi\n                                    $slave client kill $master_host:$master_port\n                                    $slave debug sleep $delay\n                                    $slave exec\n                                } else {\n                                    $slave client kill $master_host:$master_port\n                                }\n                            }\n                        }\n                    }\n                }\n                stop_bg_complex_data $load_handle0\n                
stop_bg_complex_data $load_handle1\n                stop_bg_complex_data $load_handle2\n\n                # Wait for the slave to reach the \"online\"\n                # state from the POV of the master.\n                set retry 5000\n                while {$retry} {\n                    set info [$master info]\n                    if {[string match {*slave0:*state=online*} $info]} {\n                        break\n                    } else {\n                        incr retry -1\n                        after 100\n                    }\n                }\n                if {$retry == 0} {\n                    error \"assertion:Slave not correctly synchronized\"\n                }\n\n                # Wait until the slave acknowledges it is online, so\n                # we are sure that DBSIZE and DEBUG DIGEST will not\n                # fail because of timing issues. (-LOADING error)\n                wait_for_condition 5000 100 {\n                    [lindex [$slave role] 3] eq {connected}\n                } else {\n                    fail \"Slave still not connected after some time\"\n                }\n\n                wait_for_condition 100 100 {\n                    [$master debug digest] == [$slave debug digest]\n                } else {\n                    set csv1 [csvdump r]\n                    set csv2 [csvdump {r -1}]\n                    set fd [open /tmp/repldump1.txt w]\n                    puts -nonewline $fd $csv1\n                    close $fd\n                    set fd [open /tmp/repldump2.txt w]\n                    puts -nonewline $fd $csv2\n                    close $fd\n                    fail \"Master - Replica inconsistency, run diff -u against /tmp/repldump*.txt for more info\"\n                }\n                assert {[$master dbsize] > 0}\n                eval $cond\n            }\n        }\n    }\n}\n\ntags {\"external:skip\"} {\nforeach mdl {no yes} {\n    foreach sdl {disabled swapdb flushdb} {\n        foreach rdbchannel {yes 
no} {\n            if {$rdbchannel == \"yes\" && $mdl == \"no\"} {\n                # rdbchannel replication requires repl-diskless-sync enabled\n                continue\n            }\n\n            test_psync {no reconnection, just sync} 6 1000000 3600 0 {\n            } $mdl $sdl 0 $rdbchannel\n\n            test_psync {ok psync} 6 100000000 3600 0 {\n            assert {[s -1 sync_partial_ok] > 0}\n            } $mdl $sdl 1 $rdbchannel\n\n            test_psync {no backlog} 6 100 3600 0.5 {\n            assert {[s -1 sync_partial_err] > 0}\n            } $mdl $sdl 1 $rdbchannel\n\n            test_psync {ok after delay} 3 100000000 3600 3 {\n            assert {[s -1 sync_partial_ok] > 0}\n            } $mdl $sdl 1 $rdbchannel\n\n            test_psync {backlog expired} 3 100000000 1 3 {\n            assert {[s -1 sync_partial_err] > 0}\n            } $mdl $sdl 1 $rdbchannel\n        }\n    }\n}\n}\n"
  },
  {
    "path": "tests/integration/replication-rdbchannel.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n# Returns either main or rdbchannel client id\n# Assumes there is one replica with two channels\nproc get_replica_client_id {master rdbchannel} {\n    set input [$master client list type replica]\n\n    foreach line [split $input \"\\n\"] {\n        if {[regexp {id=(\\d+).*flags=(\\S+)} $line match id flags]} {\n            if {$rdbchannel == \"yes\"} {\n                # rdbchannel will have C flag\n                if {[string match *C* $flags]} {\n                    return $id\n                }\n            } else {\n                return $id\n            }\n        }\n    }\n\n    error \"Replica not found\"\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica1 [srv 0 client]\n\n    start_server {} {\n        set replica2 [srv 0 client]\n\n        start_server {} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            $master config set repl-diskless-sync yes\n            $master config set repl-rdb-channel yes\n            populate 1000 master 10\n\n            test \"Test replication with multiple replicas (rdbchannel enabled on both)\" {\n                $replica1 config set repl-rdb-channel yes\n                $replica1 replicaof $master_host $master_port\n\n                $replica2 config set repl-rdb-channel yes\n                $replica2 replicaof $master_host $master_port\n\n                wait_replica_online $master 0\n                wait_replica_online $master 1\n\n             
   $master set x 1\n\n                # Wait until replicas catch up with the master\n                wait_for_ofs_sync $master $replica1\n                wait_for_ofs_sync $master $replica2\n\n                # Verify db's are identical\n                assert_morethan [$master dbsize] 0\n                assert_equal [$master get x] 1\n                assert_equal [$master debug digest] [$replica1 debug digest]\n                assert_equal [$master debug digest] [$replica2 debug digest]\n            }\n\n            test \"Test replication with multiple replicas (rdbchannel enabled on one of them)\" {\n                # Allow both replicas to ask for sync\n                $master config set repl-diskless-sync-delay 5\n\n                $replica1 replicaof no one\n                $replica2 replicaof no one\n                $replica1 config set repl-rdb-channel yes\n                $replica2 config set repl-rdb-channel no\n\n                set loglines [count_log_lines 0]\n                set prev_forks [s 0 total_forks]\n                $master set x 2\n\n                # There will be two forks subsequently, one for the rdbchannel\n                # replica, another for the replica without rdbchannel config.\n                $replica1 replicaof $master_host $master_port\n                $replica2 replicaof $master_host $master_port\n\n                wait_for_log_messages 0 {\"*Starting BGSAVE* replicas sockets (rdb-channel)*\"} $loglines 300 100\n                wait_for_log_messages 0 {\"*Starting BGSAVE* replicas sockets\"} $loglines 300 100\n\n                wait_replica_online $master 0 100 100\n                wait_replica_online $master 1 100 100\n\n                # Verify two new forks.\n                assert_equal [s 0 total_forks] [expr $prev_forks + 2]\n\n                wait_for_ofs_sync $master $replica1\n     
           wait_for_ofs_sync $master $replica2\n\n                # Verify db's are identical\n                assert_equal [$replica1 get x] 2\n                assert_equal [$replica2 get x] 2\n                assert_equal [$master debug digest] [$replica1 debug digest]\n                assert_equal [$master debug digest] [$replica2 debug digest]\n            }\n\n            test \"Test rdbchannel is not used if repl-diskless-sync config is disabled on master\" {\n                $replica1 replicaof no one\n                $replica2 replicaof no one\n\n                $master config set repl-diskless-sync-delay 0\n                $master config set repl-diskless-sync no\n\n                $master set x 3\n                $replica1 replicaof $master_host $master_port\n\n                # Verify log message does not mention rdbchannel\n                wait_for_log_messages 0 {\"*Starting BGSAVE for SYNC with target: disk*\"} 0 2000 1\n\n                wait_replica_online $master 0\n                wait_for_ofs_sync $master $replica1\n\n                # Verify db's are identical\n                assert_equal [$replica1 get x] 3\n                assert_equal [$master debug digest] [$replica1 debug digest]\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    set replica_pid [srv 0 pid]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        $master config set repl-rdb-channel yes\n        $replica config set repl-rdb-channel yes\n\n        # Reuse this test to verify large key delivery\n        $master config set rdbcompression no\n        $master config set rdb-key-save-delay 3000\n        populate 1000 prefix1 10\n        populate 5 prefix2 3000000\n        populate 5 prefix3 2000000\n        populate 5 prefix4 1000000\n\n        # On master info output, we should see state transition in this order:\n      
  # 1. wait_bgsave: Replica receives psync error (+RDBCHANNELSYNC)\n        # 2. send_bulk_and_stream: Replica opens rdbchannel and delivery started\n        # 3. online: Sync is completed\n        test \"Test replica state should start with wait_bgsave\" {\n            $replica config set key-load-delay 100000\n            # Pause replica before opening rdb channel conn\n            $replica debug repl-pause before-rdb-channel\n            $replica replicaof $master_host $master_port\n\n            wait_for_condition 50 200 {\n                [s 0 connected_slaves] == 1 &&\n                [string match \"*wait_bgsave*\" [s 0 slave0]]\n            } else {\n                fail \"replica failed\"\n            }\n        }\n\n        test \"Test replica state advances to send_bulk_and_stream when rdbchannel connects\" {\n            $master set x 1\n            resume_process $replica_pid\n\n            wait_for_condition 50 200 {\n                [s 0 connected_slaves] == 1 &&\n                [s 0 rdb_bgsave_in_progress] == 1 &&\n                [string match \"*send_bulk_and_stream*\" [s 0 slave0]]\n            } else {\n                fail \"replica failed\"\n            }\n        }\n\n        test \"Test replica rdbchannel client has SC flag on client list output\" {\n            set input [$master client list type replica]\n\n            # There will be two replica clients; the second one should be the rdbchannel\n            set trimmed_input [string trimright $input]\n            set lines [split $trimmed_input \"\\n\"]\n            if {[llength $lines] < 2} {\n                error \"There is no second line in the input: $input\"\n            }\n            set second_line [lindex $lines 1]\n\n            # Check if 'flags=SC' exists in the second line\n            if {![regexp {flags=SC} $second_line]} {\n                error \"Flags are not 'SC' in the second line: $second_line\"\n            }\n        }\n\n        test \"Test replica state advances to online when 
fullsync is completed\" {\n            # Speed up loading\n            $replica config set key-load-delay 0\n\n            wait_replica_online $master 0 100 1000\n            wait_for_ofs_sync $master $replica\n\n            wait_for_condition 50 200 {\n                [s 0 rdb_bgsave_in_progress] == 0 &&\n                [s 0 connected_slaves] == 1 &&\n                [string match \"*online*\" [s 0 slave0]]\n            } else {\n                fail \"replica failed\"\n            }\n\n            wait_replica_online $master 0 100 1000\n            wait_for_ofs_sync $master $replica\n\n            # Verify db's are identical\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip debug_defrag:skip\"}} {\n    set replica [srv 0 client]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        $master config set repl-rdb-channel yes\n        $replica config set repl-rdb-channel yes\n\n        test \"Test master memory does not increase during replication\" {\n            # Put some delay to rdb generation. 
If master doesn't forward\n            # incoming traffic to replica, master's replication buffer will grow\n            $master config set repl-diskless-sync-delay 0\n            $master config set rdb-key-save-delay 500 ;# 500us delay and 10k keys means at least 5 seconds replication\n            $master config set repl-backlog-size 5mb\n            $replica config set replica-full-sync-buffer-limit 200mb\n            populate 10000 master 10000 ;# 10k keys of 10k, means 100mb\n            $replica config set loading-process-events-interval-bytes 262144 ;# process events every 256kb of rdb or command stream\n\n            # Start write traffic\n            set load_handle [start_write_load $master_host $master_port 100 \"key1\" 5000 4]\n\n            set prev_used [s 0 used_memory]\n\n            $replica replicaof $master_host $master_port\n            set backlog_size [lindex [$master config get repl-backlog-size] 1]\n\n            # Verify used_memory stays low\n            set max_retry 1000\n            set peak_replica_buf_size 0\n            set peak_master_slave_buf_size 0\n            set peak_master_used_mem 0\n            set peak_master_rpl_buf 0\n            while {$max_retry} {\n                set replica_buf_size [s -1 replica_full_sync_buffer_size]\n                set master_slave_buf_size [s mem_clients_slaves]\n                set master_used_mem [s used_memory]\n                set master_rpl_buf [s mem_total_replication_buffers]\n                if {$replica_buf_size > $peak_replica_buf_size} {set peak_replica_buf_size $replica_buf_size}\n                if {$master_slave_buf_size > $peak_master_slave_buf_size} {set peak_master_slave_buf_size $master_slave_buf_size}\n                if {$master_used_mem > $peak_master_used_mem} {set peak_master_used_mem $master_used_mem}\n                if {$master_rpl_buf > $peak_master_rpl_buf} {set peak_master_rpl_buf $master_rpl_buf}\n                if {$::verbose} {\n                    puts \"[clock 
format [clock seconds] -format %H:%M:%S] master: $master_slave_buf_size replica: $replica_buf_size\"\n                }\n\n                # Wait for the replica to finish reading the rdb (also from the master's perspective), and also consume much of the replica buffer\n                if {[string match *slave0*state=online* [$master info]] &&\n                    [s -1 master_link_status] == \"up\" &&\n                    $replica_buf_size < 1000000} {\n                    break\n                } else {\n                    incr max_retry -1\n                    after 10\n                }\n            }\n            if {$max_retry == 0} {\n                error \"assertion:Replica not in sync after 10 seconds\"\n            }\n\n            if {$::verbose} {\n                puts \"peak_master_used_mem $peak_master_used_mem\"\n                puts \"peak_master_rpl_buf $peak_master_rpl_buf\"\n                puts \"peak_master_slave_buf_size $peak_master_slave_buf_size\"\n                puts \"peak_replica_buf_size $peak_replica_buf_size\"\n            }\n            # memory on the master is less than 1mb\n            assert_lessthan [expr $peak_master_used_mem - $prev_used - $backlog_size] 1000000\n            assert_lessthan $peak_master_rpl_buf [expr {$backlog_size + 1000000}]\n            assert_lessthan $peak_master_slave_buf_size 1000000\n            # buffers in the replica are more than 5mb\n            assert_morethan $peak_replica_buf_size 5000000\n\n            stop_write_load $load_handle\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        $master config set repl-rdb-channel yes\n        $replica config set repl-rdb-channel yes\n\n        test \"Test replication stream buffer becomes full on replica\" {\n            # For replication stream accumulation, 
replica inherits slave output\n            # buffer limit as the size limit. In this test, we create traffic to\n            # fill the buffer fully. Once the limit is reached, accumulation\n            # will stop. This is not a failure scenario though. From that point,\n            # further accumulation may occur on master side. Replication should\n            # be completed successfully.\n\n            # Create some artificial delay for rdb delivery and load. We'll\n            # generate some traffic to fill the replication buffer.\n            $master config set rdb-key-save-delay 1000\n            $replica config set key-load-delay 1000\n            $replica config set client-output-buffer-limit \"replica 64kb 64kb 0\"\n            populate 2000 master 1\n\n            set prev_sync_full [s 0 sync_full]\n            $replica replicaof $master_host $master_port\n\n            # Wait for replica to establish psync using main channel\n            wait_for_condition 500 1000 {\n                [string match \"*state=send_bulk_and_stream*\" [s 0 slave0]]\n            } else {\n                fail \"replica didn't start sync\"\n            }\n\n            # Create some traffic on replication stream\n            populate 100 master 100000\n\n            # Wait for replica's buffer limit reached\n            wait_for_log_messages -1 {\"*Replication buffer limit has been reached*\"} 0 1000 10\n\n            # Speed up loading\n            $replica config set key-load-delay 0\n\n            # Wait until sync is successful\n            wait_for_condition 200 200 {\n                [status $master master_repl_offset] eq [status $replica master_repl_offset] &&\n                [status $master master_repl_offset] eq [status $replica slave_repl_offset]\n            } else {\n                fail \"replica offsets didn't match in time\"\n            }\n\n            # Verify sync was not interrupted.\n            assert_equal [s 0 sync_full] [expr $prev_sync_full + 1]\n\n 
           # Verify db's are identical\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n\n        test \"Test replication stream buffer config replica-full-sync-buffer-limit\" {\n            # By default, replica inherits client-output-buffer-limit of replica\n            # to limit accumulated repl data during rdbchannel sync.\n            # replica-full-sync-buffer-limit should override it if it is set.\n            $replica replicaof no one\n\n            # Create some artificial delay for rdb delivery and load. We'll\n            # generate some traffic to fill the replication buffer.\n            $master config set rdb-key-save-delay 1000\n            $replica config set key-load-delay 1000\n            $replica config set client-output-buffer-limit \"replica 1024 1024 0\"\n            $replica config set replica-full-sync-buffer-limit 20mb\n            populate 2000 master 1\n\n            $replica replicaof $master_host $master_port\n\n            # Wait until replication starts\n            wait_for_condition 500 1000 {\n                [string match \"*state=send_bulk_and_stream*\" [s 0 slave0]]\n            } else {\n                fail \"replica didn't start sync\"\n            }\n\n            # Create some traffic on replication stream\n            populate 100 master 100000\n\n            # Make sure config is used, we accumulated more than\n            # client-output-buffer-limit\n            assert_morethan [s -1 replica_full_sync_buffer_size] 1024\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip debug_defrag:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    set master_pid  [srv 0 pid]\n    set loglines [count_log_lines 0]\n\n    $master config set repl-diskless-sync yes\n    $master config set repl-rdb-channel yes\n    $master config set repl-backlog-size 1mb\n    $master config set 
client-output-buffer-limit \"replica 100k 0 0\"\n    $master config set repl-diskless-sync-delay 3\n\n    start_server {} {\n        set replica [srv 0 client]\n        set replica_pid [srv 0 pid]\n\n        $replica config set repl-rdb-channel yes\n        $replica config set repl-timeout 10\n        $replica config set key-load-delay 10000\n        $replica config set loading-process-events-interval-bytes 1024\n\n        test \"Test master disconnects replica when output buffer limit is reached\" {\n            populate 20000 master 100 -1\n\n            $replica replicaof $master_host $master_port\n            wait_for_condition 100 200 {\n                [s 0 loading] == 1\n            } else {\n                fail \"Replica did not start loading\"\n            }\n\n            # Generate replication traffic of ~20mb to disconnect the slave on obuf limit\n            populate 20 master 1000000 -1\n\n            wait_for_log_messages -1 {\"*Client * closed * for overcoming of output buffer limits.*\"} $loglines 1000 10\n            $replica config set key-load-delay 0\n\n            # Wait until replica loads RDB\n            wait_for_log_messages 0 {\"*Done loading RDB*\"} 0 1000 10\n        }\n\n        test \"Test replication recovers after output buffer failures\" {\n            # Verify system is operational\n            $master set x 1\n\n            # Wait until replica catches up\n            wait_replica_online $master 0 1000 100\n            wait_for_ofs_sync $master $replica\n\n            # Verify db's are identical\n            assert_morethan [$master dbsize] 0\n            assert_equal [$replica get x] 1\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    $master config set repl-diskless-sync yes\n    $master config set repl-rdb-channel yes\n    
$master config set rdb-key-save-delay 300\n    $master config set client-output-buffer-limit \"replica 0 0 0\"\n    $master config set repl-diskless-sync-delay 5\n\n    populate 10000 master 1\n\n    start_server {} {\n        set replica1 [srv 0 client]\n        $replica1 config set repl-rdb-channel yes\n\n        start_server {} {\n            set replica2 [srv 0 client]\n            $replica2 config set repl-rdb-channel yes\n\n            set load_handle [start_write_load $master_host $master_port 100 \"key\"]\n\n            test \"Test master continues RDB delivery if not all replicas are dropped\" {\n                $replica1 replicaof $master_host $master_port\n                $replica2 replicaof $master_host $master_port\n\n                wait_for_condition 50 200 {\n                    [s -2 rdb_bgsave_in_progress] == 1\n                } else {\n                    fail \"Sync did not start\"\n                }\n\n                # Verify replicas are connected\n                wait_for_condition 500 100 {\n                    [s -2 connected_slaves] == 2\n                } else {\n                    fail \"Replicas didn't connect: [s -2 connected_slaves]\"\n                }\n\n                # kill one of the replicas\n                catch {$replica1 shutdown nosave}\n\n                # Wait until replica completes full sync\n                # Verify there is no other full sync attempt\n                wait_for_condition 50 1000 {\n                    [s 0 master_link_status] == \"up\" &&\n                    [s -2 sync_full] == 2 &&\n                    [s -2 connected_slaves] == 1\n                } else {\n                    fail \"Sync session did not continue\n                          master_link_status: [s 0 master_link_status]\n                          sync_full:[s -2 sync_full]\n                          connected_slaves: [s -2 connected_slaves]\"\n                }\n\n                # Wait until replica catches up\n                
wait_replica_online $master 0 200 100\n                wait_for_condition 200 100 {\n                    [s 0 mem_replica_full_sync_buffer] == 0\n                } else {\n                    fail \"Replica did not consume buffer in time\"\n                }\n            }\n\n            test \"Test master aborts rdb delivery if all replicas are dropped\" {\n                $replica2 replicaof no one\n\n                # Start replication\n                $replica2 replicaof $master_host $master_port\n\n                wait_for_condition 50 1000 {\n                    [s -2 rdb_bgsave_in_progress] == 1\n                } else {\n                    fail \"Sync did not start\"\n                }\n                set loglines [count_log_lines -2]\n\n                # kill replica\n                catch {$replica2 shutdown nosave}\n\n                # Verify master aborts rdb save\n                wait_for_condition 50 1000 {\n                    [s -2 rdb_bgsave_in_progress] == 0 &&\n                    [s -2 connected_slaves] == 0\n                } else {\n                    fail \"Master should abort the sync\n                          rdb_bgsave_in_progress:[s -2 rdb_bgsave_in_progress]\n                          connected_slaves: [s -2 connected_slaves]\"\n                }\n                wait_for_log_messages -2 {\"*Background transfer error*\"} $loglines 1000 50\n            }\n\n            stop_write_load $load_handle\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    $master config set repl-diskless-sync yes\n    $master config set repl-rdb-channel yes\n    $master config set rdb-key-save-delay 1000\n\n    populate 3000 prefix1 1\n    populate 100 prefix2 100000\n\n    start_server {} {\n        set replica [srv 0 client]\n        set replica_pid [srv 0 pid]\n\n        $replica config set repl-rdb-channel yes\n        $replica 
config set repl-timeout 10\n\n        set load_handle [start_write_load $master_host $master_port 100 \"key\"]\n\n        test \"Test replica recovers when rdb channel connection is killed\" {\n            $replica replicaof $master_host $master_port\n\n            # Wait for sync session to start\n            wait_for_condition 500 200 {\n                [string match \"*state=send_bulk_and_stream*\" [s -1 slave0]] &&\n                [s -1 rdb_bgsave_in_progress] eq 1\n            } else {\n                fail \"replica didn't start sync session in time\"\n            }\n\n            set loglines [count_log_lines -1]\n\n            # Kill rdb channel client\n            set id [get_replica_client_id $master yes]\n            $master client kill id $id\n\n            wait_for_log_messages -1 {\"*Background transfer error*\"} $loglines 1000 10\n\n            # Verify master rejects main-ch-client-id after connection is killed\n            assert_error {*Unrecognized*} {$master replconf main-ch-client-id $id}\n\n            # Replica should retry\n            wait_for_condition 500 200 {\n                [string match \"*state=send_bulk_and_stream*\" [s -1 slave0]] &&\n                [s -1 rdb_bgsave_in_progress] eq 1\n            } else {\n                fail \"replica didn't retry after connection close\"\n            }\n        }\n\n        test \"Test replica recovers when main channel connection is killed\" {\n            set loglines [count_log_lines -1]\n\n            # Kill main channel client\n            set id [get_replica_client_id $master yes]\n            $master client kill id $id\n\n            wait_for_log_messages -1 {\"*Background transfer error*\"} $loglines 1000 20\n\n            # Replica should retry\n            wait_for_condition 500 2000 {\n                [string match \"*state=send_bulk_and_stream*\" [s -1 slave0]] &&\n                [s -1 rdb_bgsave_in_progress] eq 1\n            } else {\n                fail \"replica didn't retry 
after connection close\"\n            }\n        }\n\n        stop_write_load $load_handle\n\n        test \"Test replica recovers connection failures\" {\n            # Wait until replica catches up\n            wait_replica_online $master 0 1000 100\n            wait_for_ofs_sync $master $replica\n\n            # Verify db's are identical\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip tsan:skip\"}} {\n    set replica [srv 0 client]\n    set replica_pid  [srv 0 pid]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test \"Test master connection drops while streaming repl buffer into the db\" {\n            # Just after replica loads RDB, it will stream repl buffer into the\n            # db. During streaming, we kill the master connection. Replica\n            # will abort streaming and then try another psync with master.\n            $master config set rdb-key-save-delay 1000\n            $master config set repl-rdb-channel yes\n            $master config set repl-diskless-sync yes\n            $replica config set repl-rdb-channel yes\n            $replica config set loading-process-events-interval-bytes 1024\n\n            # Populate db and start write traffic\n            populate 2000 master 1000\n            set load_handle [start_write_load $master_host $master_port 100 \"key1\"]\n\n            # Replica will pause in the loop of repl buffer streaming\n            $replica debug repl-pause on-streaming-repl-buf\n            $replica replicaof $master_host $master_port\n\n            # Check if repl stream accumulation is started.\n            wait_for_condition 50 1000 {\n                [s -1 replica_full_sync_buffer_size] > 0\n            } else {\n                fail \"repl stream accumulation not started\"\n            
}\n\n            # Wait until replica starts streaming repl buffer\n            wait_for_log_messages -1 {\"*Starting to stream replication buffer*\"} 0 2000 10\n            stop_write_load $load_handle\n            $master config set rdb-key-save-delay 0\n\n            # Kill master connection and resume the process\n            $replica deferred 1\n            $replica client kill type master\n            $replica debug repl-pause clear\n            resume_process $replica_pid\n            $replica read\n            $replica read\n            $replica deferred 0\n\n            wait_for_log_messages -1 {\"*Master client was freed while streaming*\"} 0 500 10\n\n            # Quick check for stats test coverage\n            assert_morethan_equal [s -1 replica_full_sync_buffer_peak] [s -1 replica_full_sync_buffer_size]\n\n            # Wait until replica recovers and verify db's are identical\n            wait_replica_online $master 0 1000 10\n            wait_for_ofs_sync $master $replica\n\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    set replica_pid  [srv 0 pid]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test \"Test main channel connection drops while loading rdb (disk based)\" {\n            # While loading rdb, we kill main channel connection.\n            # We expect replica to complete loading RDB and then try psync\n            # with the master.\n            $master config set repl-rdb-channel yes\n            $replica config set repl-rdb-channel yes\n            $replica config set repl-diskless-load disabled\n            $replica config set key-load-delay 10000\n            $replica config set loading-process-events-interval-bytes 1024\n\n            # Populate db and start 
write traffic\n            populate 10000 master 100\n            $replica replicaof $master_host $master_port\n\n            # Wait until replica starts loading\n            wait_for_condition 50 200 {\n                [s -1 loading] == 1\n            } else {\n                fail \"replica did not start loading\"\n            }\n\n            # Kill replica connections\n            $master client kill type replica\n            $master set x 1\n\n            # At this point, we expect replica to complete loading RDB. Then,\n            # it will try psync with master.\n            wait_for_log_messages -1 {\"*Aborting rdb channel sync while loading the RDB*\"} 0 2000 10\n            wait_for_log_messages -1 {\"*After loading RDB, replica will try psync with master*\"} 0 2000 10\n\n            # Speed up loading\n            $replica config set key-load-delay 0\n\n            # Wait until replica becomes online\n            wait_replica_online $master 0 100 100\n\n            # Verify there is another successful psync and no other full sync\n            wait_for_condition 50 200 {\n                [s 0 sync_full] == 1 &&\n                [s 0 sync_partial_ok] == 1\n            } else {\n                fail \"psync was not successful [s 0 sync_full] [s 0 sync_partial_ok]\"\n            }\n\n            # Verify db's are identical after recovery\n            wait_for_ofs_sync $master $replica\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    set replica_pid  [srv 0 pid]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test \"Test main channel connection drops while loading rdb (diskless)\" {\n            # While loading rdb, kill both main and rdbchannel connections.\n            # We expect 
replica to abort sync and later retry again.\n            $master config set repl-rdb-channel yes\n            $replica config set repl-rdb-channel yes\n            $replica config set repl-diskless-load swapdb\n            $replica config set key-load-delay 10000\n            $replica config set loading-process-events-interval-bytes 1024\n\n            # Populate db and start write traffic\n            populate 10000 master 100\n\n            $replica replicaof $master_host $master_port\n\n            # Wait until replica starts loading\n            wait_for_condition 50 200 {\n                [s -1 loading] == 1\n            } else {\n                fail \"replica did not start loading\"\n            }\n\n            # Kill replica connections\n            $master client kill type replica\n            $master set x 1\n\n            # At this point, we expect replica to abort loading RDB.\n            wait_for_log_messages -1 {\"*Aborting rdb channel sync while loading the RDB*\"} 0 2000 10\n            wait_for_log_messages -1 {\"*Failed trying to load the MASTER synchronization DB from socket*\"} 0 2000 10\n\n            # Speed up loading\n            $replica config set key-load-delay 0\n\n            # Wait until replica recovers and becomes online\n            wait_replica_online $master 0 100 100\n\n            # Verify replica attempts another full sync\n            wait_for_condition 50 200 {\n                [s 0 sync_full] == 2 &&\n                [s 0 sync_partial_ok] == 0\n            } else {\n                fail \"sync was not successful [s 0 sync_full] [s 0 sync_partial_ok]\"\n            }\n\n            # Verify db's are identical after recovery\n            wait_for_ofs_sync $master $replica\n            assert_morethan [$master dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip tsan:skip\"}} {\n    set master2 [srv 0 client]\n    set master2_host 
[srv 0 host]\n    set master2_port [srv 0 port]\n    start_server {tags {\"repl external:skip\"}} {\n        set replica [srv 0 client]\n        set replica_pid  [srv 0 pid]\n\n        start_server {} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            test \"Test replicaof command while streaming repl buffer into the db\" {\n                # After replica loads the RDB, it will stream repl buffer into\n                # the db. During streaming, replica receives command\n                # \"replicaof newmaster\". Replica will abort streaming and then\n                # should be able to connect to the new master.\n                $master config set rdb-key-save-delay 1000\n                $master config set repl-rdb-channel yes\n                $master config set repl-diskless-sync yes\n                $replica config set repl-rdb-channel yes\n                $replica config set loading-process-events-interval-bytes 1024\n\n                # Populate db and start write traffic\n                populate 2000 master 1000\n                set load_handle [start_write_load $master_host $master_port 100 \"key1\"]\n\n                # Replica will pause in the loop of repl buffer streaming\n                $replica debug repl-pause on-streaming-repl-buf\n                $replica replicaof $master_host $master_port\n\n                # Check if repl stream accumulation is started.\n                wait_for_condition 50 1000 {\n                    [s -1 replica_full_sync_buffer_size] > 0\n                } else {\n                    fail \"repl stream accumulation not started\"\n                }\n\n                # Wait until replica starts streaming repl buffer\n                wait_for_log_messages -1 {\"*Starting to stream replication buffer*\"} 0 2000 10\n                stop_write_load $load_handle\n                $master config set rdb-key-save-delay 0\n\n                # 
Populate the other master\n                populate 100 master2 100 -2\n\n                # Send \"replicaof newmaster\" command and resume the process\n                $replica deferred 1\n                $replica replicaof $master2_host $master2_port\n                $replica debug repl-pause clear\n                resume_process $replica_pid\n                $replica read\n                $replica read\n                $replica deferred 0\n\n                wait_for_log_messages -1 {\"*Master client was freed while streaming*\"} 0 500 10\n\n                # Wait until replica recovers and verify db's are identical\n                wait_replica_online $master2 0 1000 10\n                wait_for_ofs_sync $master2 $replica\n                assert_morethan [$master2 dbsize] 0\n                assert_equal [$master2 debug digest] [$replica debug digest]\n\n                # Try replication once more to be sure everything is okay.\n                $replica replicaof no one\n                $master2 set x 100\n\n                $replica replicaof $master2_host $master2_port\n                wait_replica_online $master2 0 1000 10\n                wait_for_ofs_sync $master2 $replica\n                assert_morethan [$master2 dbsize] 0\n                assert_equal [$master2 debug digest] [$replica debug digest]\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/replication.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nproc log_file_matches {log pattern} {\n    set fp [open $log r]\n    set content [read $fp]\n    close $fp\n    string match $pattern $content\n}\n\nstart_server {tags {\"repl network external:skip\"}} {\n    set slave [srv 0 client]\n    set slave_host [srv 0 host]\n    set slave_port [srv 0 port]\n    set slave_log [srv 0 stdout]\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        # Configure the master in order to hang waiting for the BGSAVE\n        # operation, so that the slave remains in the handshake state.\n        $master config set repl-diskless-sync yes\n        $master config set repl-diskless-sync-delay 1000\n\n        # Start the replication process...\n        $slave slaveof $master_host $master_port\n\n        test {Slave enters handshake} {\n            wait_for_condition 50 1000 {\n                [string match *handshake* [$slave role]]\n            } else {\n                fail \"Replica does not enter handshake state\"\n            }\n        }\n\n        test {Slave enters wait_bgsave} {\n            # Wait until the rdbchannel is connected to prevent the following\n            # 'debug sleep' occurring during the rdbchannel handshake.\n            wait_for_condition 50 1000 {\n                [string match *state=wait_bgsave* [$master info replication]] &&\n                [llength [split [string trim [$master client list type slave]] \"\\r\\n\"]] == 2\n            } else {\n      
          fail \"Replica does not enter wait_bgsave state\"\n            }\n        }\n\n        # Use a short replication timeout on the slave, so that if there\n        # are no bugs the timeout is triggered in a reasonable amount\n        # of time.\n        $slave config set repl-timeout 5\n\n        # But make the master unable to send\n        # the periodic newlines to refresh the connection. The slave\n        # should detect the timeout.\n        $master debug sleep 10\n\n        test {Slave is able to detect timeout during handshake} {\n            wait_for_condition 50 1000 {\n                [log_file_matches $slave_log \"*Timeout connecting to the MASTER*\"]\n            } else {\n                fail \"Replica is not able to detect timeout\"\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set A [srv 0 client]\n    set A_host [srv 0 host]\n    set A_port [srv 0 port]\n    start_server {} {\n        set B [srv 0 client]\n        set B_host [srv 0 host]\n        set B_port [srv 0 port]\n\n        test {Set instance A as slave of B} {\n            $A slaveof $B_host $B_port\n            wait_for_condition 50 100 {\n                [lindex [$A role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$A info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n        }\n\n        test {INCRBYFLOAT replication, should not remove expire} {\n            r set test 1 EX 100\n            r incrbyfloat test 0.1\n            wait_for_ofs_sync $A $B\n            assert_equal [$A debug digest] [$B debug digest]\n        }\n\n        test {GETSET replication} {\n            $A config resetstat\n            $A config set loglevel debug\n            $B config set loglevel debug\n            r set test foo\n            assert_equal [r getset test bar] foo\n            wait_for_condition 500 10 {\n                [$A get test] eq \"bar\"\n   
         } else {\n                fail \"getset wasn't propagated\"\n            }\n            assert_equal [r set test vaz get] bar\n            wait_for_condition 500 10 {\n                [$A get test] eq \"vaz\"\n            } else {\n                fail \"set get wasn't propagated\"\n            }\n            assert_equal [r set test qaz get get] vaz\n            wait_for_condition 500 10 {\n                [$A get test] eq \"qaz\"\n            } else {\n                fail \"set get get wasn't propagated\"\n            }\n            assert_match {*calls=4,*} [cmdrstat set $A]\n            assert_match {} [cmdrstat getset $A]\n        }\n\n        test {BRPOPLPUSH replication, when blocking against empty list} {\n            $A config resetstat\n            set rd [redis_deferring_client]\n            $rd brpoplpush a b 5\n            wait_for_blocked_client\n            r lpush a foo\n            wait_for_ofs_sync $B $A\n            assert_equal [$A debug digest] [$B debug digest]\n            assert_match {*calls=1,*} [cmdrstat rpoplpush $A]\n            assert_match {} [cmdrstat lmove $A]\n            assert_equal [$rd read] {foo}\n            $rd close\n        }\n\n        test {BRPOPLPUSH replication, list exists} {\n            $A config resetstat\n            r lpush c 1\n            r lpush c 2\n            r lpush c 3\n            assert_equal [r brpoplpush c d 5] {1}\n            wait_for_ofs_sync $B $A\n            assert_equal [$A debug digest] [$B debug digest]\n            assert_match {*calls=1,*} [cmdrstat rpoplpush $A]\n            assert_match {} [cmdrstat lmove $A]\n        }\n\n        foreach wherefrom {left right} {\n            foreach whereto {left right} {\n                test \"BLMOVE ($wherefrom, $whereto) replication, when blocking against empty list\" {\n                    $A config resetstat\n                    set rd [redis_deferring_client]\n                    $rd blmove a b $wherefrom $whereto 5\n                    
$rd flush\n                    wait_for_blocked_client\n                    r lpush a foo\n                    wait_for_ofs_sync $B $A\n                    assert_equal [$A debug digest] [$B debug digest]\n                    assert_match {*calls=1,*} [cmdrstat lmove $A]\n                    assert_match {} [cmdrstat rpoplpush $A]\n                    assert_equal [$rd read] {foo}\n                    $rd close\n                }\n\n                test \"BLMOVE ($wherefrom, $whereto) replication, list exists\" {\n                    $A config resetstat\n                    r lpush c 1\n                    r lpush c 2\n                    r lpush c 3\n                    r blmove c d $wherefrom $whereto 5\n                    wait_for_ofs_sync $B $A\n                    assert_equal [$A debug digest] [$B debug digest]\n                    assert_match {*calls=1,*} [cmdrstat lmove $A]\n                    assert_match {} [cmdrstat rpoplpush $A]\n                }\n            }\n        }\n\n        test {BLPOP followed by role change, issue #2473} {\n            set rd [redis_deferring_client]\n            $rd blpop foo 0 ; # Block while B is a master\n            wait_for_blocked_client\n\n            # Turn B into master of A\n            $A slaveof no one\n            $B slaveof $A_host $A_port\n            wait_for_condition 50 100 {\n                [lindex [$B role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$B info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            # Push elements into the \"foo\" list of the new replica.\n            # If the client is still attached to the instance, we'll get\n            # a desync between the two instances.\n            $A rpush foo a b c\n            wait_for_ofs_sync $B $A\n\n            wait_for_condition 50 100 {\n                [$A debug digest] eq [$B debug digest] &&\n                [$A lrange foo 0 
-1] eq {a b c} &&\n                [$B lrange foo 0 -1] eq {a b c}\n            } else {\n                fail \"Master and replica have different digest: [$A debug digest] VS [$B debug digest]\"\n            }\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdrstat blpop $B]\n\n            assert_error {UNBLOCKED*} {$rd read}\n            $rd close\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    r set mykey foo\n\n    start_server {} {\n        test {Second server should have role master at first} {\n            s role\n        } {master}\n\n        test {SLAVEOF should start with link status \"down\"} {\n            r multi\n            r slaveof [srv -1 host] [srv -1 port]\n            r info replication\n            r exec\n        } {*master_link_status:down*}\n\n        test {The role should immediately be changed to \"replica\"} {\n            s role\n        } {slave}\n\n        wait_for_sync r\n        test {Sync should have transferred keys from master} {\n            r get mykey\n        } {foo}\n\n        test {The link status should be up} {\n            s master_link_status\n        } {up}\n\n        test {SET on the master should immediately propagate} {\n            r -1 set mykey bar\n\n            wait_for_condition 500 100 {\n                [r  0 get mykey] eq {bar}\n            } else {\n                fail \"SET on master did not propagate to replica\"\n            }\n        }\n\n        test {FLUSHDB / FLUSHALL should replicate} {\n            # we're attaching to a sub-replica, so we need to stop pings on the real master\n            r -1 config set repl-ping-replica-period 3600\n\n            set repl [attach_to_replication_stream]\n\n            r -1 set key value\n            r -1 flushdb\n\n            r -1 set key value2\n            r -1 flushall\n\n            wait_for_ofs_sync [srv 0 client] [srv -1 client]\n            assert_equal [r -1 dbsize] 0\n            
assert_equal [r 0 dbsize] 0\n\n            # DB is empty.\n            r -1 flushdb\n            r -1 flushdb\n            r -1 eval {redis.call(\"flushdb\")} 0\n\n            # DBs are empty.\n            r -1 flushall\n            r -1 flushall\n            r -1 eval {redis.call(\"flushall\")} 0\n\n            # add another command to check nothing else was propagated after the above\n            r -1 incr x\n\n            # Assert that each FLUSHDB command is replicated even if the DB is empty.\n            # Assert that each FLUSHALL command is replicated even if the DBs are empty.\n            assert_replication_stream $repl {\n                {set key value}\n                {flushdb}\n                {set key value2}\n                {flushall}\n                {flushdb}\n                {flushdb}\n                {flushdb}\n                {flushall}\n                {flushall}\n                {flushall}\n                {incr x}\n            }\n            close_replication_stream $repl\n        }\n\n        test {ROLE in master reports master with a slave} {\n            set res [r -1 role]\n            lassign $res role offset slaves\n            assert {$role eq {master}}\n            assert {$offset > 0}\n            assert {[llength $slaves] == 1}\n            lassign [lindex $slaves 0] master_host master_port slave_offset\n            assert {$slave_offset <= $offset}\n        }\n\n        test {ROLE in slave reports slave in connected state} {\n            set res [r role]\n            lassign $res role master_host master_port slave_state slave_offset\n            assert {$role eq {slave}}\n            assert {$slave_state eq {connected}}\n        }\n    }\n}\n\nforeach mdl {no yes} rdbchannel {no yes} {\n    foreach sdl {disabled swapdb flushdb} {\n        start_server {tags {\"repl external:skip debug_defrag:skip\"} overrides {save {}}} {\n            set master [srv 0 client]\n            $master config set repl-diskless-sync $mdl\n            $master 
config set repl-diskless-sync-delay 5\n            $master config set repl-diskless-sync-max-replicas 3\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n            set slaves {}\n            start_server {overrides {save {}}} {\n                lappend slaves [srv 0 client]\n                start_server {overrides {save {}}} {\n                    lappend slaves [srv 0 client]\n                    start_server {overrides {save {}}} {\n                        lappend slaves [srv 0 client]\n                        test \"Connect multiple replicas at the same time (issue #141), master diskless=$mdl, replica diskless=$sdl, rdbchannel=$rdbchannel\" {\n\n                            $master config set repl-rdb-channel $rdbchannel\n                            [lindex $slaves 0] config set repl-rdb-channel $rdbchannel\n                            [lindex $slaves 1] config set repl-rdb-channel $rdbchannel\n                            [lindex $slaves 2] config set repl-rdb-channel $rdbchannel\n\n                            # start load handles only inside the test, so that the test can be skipped\n                            set load_handle0 [start_bg_complex_data $master_host $master_port 9 100000000]\n                            set load_handle1 [start_bg_complex_data $master_host $master_port 11 100000000]\n                            set load_handle2 [start_bg_complex_data $master_host $master_port 12 100000000]\n                            set load_handle3 [start_write_load $master_host $master_port 8]\n                            set load_handle4 [start_write_load $master_host $master_port 4]\n                            after 5000 ;# wait for some data to accumulate so that we have RDB part for the fork\n\n                            # Send SLAVEOF commands to slaves\n                            [lindex $slaves 0] config set repl-diskless-load $sdl\n                            [lindex $slaves 1] config set repl-diskless-load $sdl\n  
                          [lindex $slaves 2] config set repl-diskless-load $sdl\n                            [lindex $slaves 0] slaveof $master_host $master_port\n                            [lindex $slaves 1] slaveof $master_host $master_port\n                            [lindex $slaves 2] slaveof $master_host $master_port\n\n                            # Wait for all three slaves to reach the \"online\"\n                            # state from the POV of the master.\n                            set retry 500\n                            while {$retry} {\n                                set info [r -3 info]\n                                if {[string match {*slave0:*state=online*slave1:*state=online*slave2:*state=online*} $info]} {\n                                    break\n                                } else {\n                                    incr retry -1\n                                    after 100\n                                }\n                            }\n                            if {$retry == 0} {\n                                error \"assertion:Slaves not correctly synchronized\"\n                            }\n\n                            # Wait until the slaves acknowledge they are online so\n                            # we are sure that DBSIZE and DEBUG DIGEST will not\n                            # fail because of timing issues.\n                            wait_for_condition 500 100 {\n                                [lindex [[lindex $slaves 0] role] 3] eq {connected} &&\n                                [lindex [[lindex $slaves 1] role] 3] eq {connected} &&\n                                [lindex [[lindex $slaves 2] role] 3] eq {connected}\n                            } else {\n                                fail \"Slaves still not connected after some time\"\n                            }\n\n                            # Stop the write load\n                            stop_bg_complex_data $load_handle0\n                     
       stop_bg_complex_data $load_handle1\n                            stop_bg_complex_data $load_handle2\n                            stop_write_load $load_handle3\n                            stop_write_load $load_handle4\n\n                            # Make sure no more commands are processed\n                            wait_load_handlers_disconnected -3\n\n                            wait_for_ofs_sync $master [lindex $slaves 0]\n                            wait_for_ofs_sync $master [lindex $slaves 1]\n                            wait_for_ofs_sync $master [lindex $slaves 2]\n\n                            # Check digests\n                            set digest [$master debug digest]\n                            set digest0 [[lindex $slaves 0] debug digest]\n                            set digest1 [[lindex $slaves 1] debug digest]\n                            set digest2 [[lindex $slaves 2] debug digest]\n                            assert {$digest ne 0000000000000000000000000000000000000000}\n                            assert {$digest eq $digest0}\n                            assert {$digest eq $digest1}\n                            assert {$digest eq $digest2}\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"} overrides {save {}}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    start_server {overrides {save {}}} {\n        test \"Master stream is correctly processed while the replica has a script in -BUSY state\" {\n            set load_handle0 [start_write_load $master_host $master_port 3]\n            set slave [srv 0 client]\n            $slave config set lua-time-limit 500\n            $slave slaveof $master_host $master_port\n\n            # Wait for the slave to be online\n            wait_for_condition 500 100 {\n                [lindex [$slave role] 3] eq {connected}\n            } else {\n           
     fail \"Replica still not connected after some time\"\n            }\n\n            # Wait some time to make sure the master is sending data\n            # to the slave.\n            after 5000\n\n            # Stop the ability of the slave to process data by sending\n            # a script that will put it in BUSY state.\n            $slave eval {for i=1,3000000000 do end} 0\n\n            # Wait some time again so that more master stream will\n            # be processed.\n            after 2000\n\n            # Stop the write load\n            stop_write_load $load_handle0\n\n            # Verify master and replica datasets are identical\n            wait_for_condition 500 100 {\n                [$master debug digest] eq [$slave debug digest]\n            } else {\n                fail \"Different datasets between replica and master\"\n            }\n        }\n    }\n}\n\n# Diskless load swapdb when NOT async_loading (different master replid)\nforeach testType {Successful Aborted} rdbchannel {yes no} {\n    start_server {tags {\"repl external:skip\"}} {\n        set replica [srv 0 client]\n        set replica_host [srv 0 host]\n        set replica_port [srv 0 port]\n        set replica_log [srv 0 stdout]\n        start_server {} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            # Set master and replica to use diskless replication on swapdb mode\n            $master config set repl-diskless-sync yes\n            $master config set repl-diskless-sync-delay 0\n            $master config set save \"\"\n            $master config set repl-rdb-channel $rdbchannel\n            $replica config set repl-diskless-load swapdb\n            $replica config set save \"\"\n\n            # Put different data sets on the master and replica\n            # We need to put large keys on the master since the replica replies to info only once in 2mb\n            $replica debug populate 200 slave 10\n            $master debug 
populate 1000 master 100000\n            $master config set rdbcompression no\n\n            # Set a key value on replica to check status on failure and after swapping db\n            $replica set mykey myvalue\n\n            switch $testType {\n                \"Aborted\" {\n                    # Set master with a slow rdb generation, so that we can easily intercept loading\n                    # 10ms per key, with 1000 keys is 10 seconds\n                    $master config set rdb-key-save-delay 10000\n\n                    # Start the replication process\n                    $replica replicaof $master_host $master_port\n\n                    test \"Diskless load swapdb (different replid): replica enter loading rdbchannel=$rdbchannel\" {\n                        # Wait for the replica to start reading the rdb\n                        wait_for_condition 100 100 {\n                            [s -1 loading] eq 1\n                        } else {\n                            fail \"Replica didn't get into loading mode\"\n                        }\n\n                        assert_equal [s -1 async_loading] 0\n                    }\n\n                    # Make sure that next sync will not start immediately so that we can catch the replica in between syncs\n                    $master config set repl-diskless-sync-delay 5\n\n                    # Kill the replica connection on the master\n                    set killed [$master client kill type replica]\n\n                    # Wait for loading to stop (fail)\n                    wait_for_condition 100 100 {\n                        [s -1 loading] eq 0\n                    } else {\n                        fail \"Replica didn't disconnect\"\n                    }\n\n                    test \"Diskless load swapdb (different replid): old database is exposed after replication fails rdbchannel=$rdbchannel\" {\n                        # Ensure we see old values from replica\n                        assert_equal [$replica 
get mykey] \"myvalue\"\n\n                        # Make sure amount of replica keys didn't change\n                        assert_equal [$replica dbsize] 201\n                    }\n\n                    # Speed up shutdown\n                    $master config set rdb-key-save-delay 0\n                }\n                \"Successful\" {\n                    # Start the replication process\n                    $replica replicaof $master_host $master_port\n\n                    # Let replica finish sync with master\n                    wait_for_condition 100 100 {\n                        [s -1 master_link_status] eq \"up\"\n                    } else {\n                        fail \"Master <-> Replica didn't finish sync\"\n                    }\n\n                    test {Diskless load swapdb (different replid): new database is exposed after swapping} {\n                        # Ensure we no longer see the key that was stored only on the replica, and that we don't get a LOADING status\n                        assert_equal [$replica GET mykey] \"\"\n\n                        # Make sure amount of keys matches master\n                        assert_equal [$replica dbsize] 1000\n                    }\n                }\n            }\n        }\n    }\n}\n\n# Diskless load swapdb when async_loading (matching master replid)\nforeach testType {Successful Aborted} {\n    start_server {tags {\"repl external:skip\"}} {\n        set replica [srv 0 client]\n        set replica_host [srv 0 host]\n        set replica_port [srv 0 port]\n        set replica_log [srv 0 stdout]\n        start_server {} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            # Set master and replica to use diskless replication on swapdb mode\n            $master config set repl-diskless-sync yes\n            $master config set repl-diskless-sync-delay 0\n            $master config set save \"\"\n            
$replica config set repl-diskless-load swapdb\n            $replica config set save \"\"\n\n            # Set replica writable so we can check that a key we manually added is served\n            # during replication and after failure, but disappears on success\n            $replica config set replica-read-only no\n\n            # Initial sync to have matching replids between master and replica\n            $replica replicaof $master_host $master_port\n\n            # Let replica finish initial sync with master\n            wait_for_condition 100 100 {\n                [s -1 master_link_status] eq \"up\"\n            } else {\n                fail \"Master <-> Replica didn't finish sync\"\n            }\n\n            # Put different data sets on the master and replica\n            # We need to put large keys on the master since the replica replies to info only once in 2mb\n            $replica debug populate 2000 slave 10\n            $master debug populate 2000 master 100000\n            $master config set rdbcompression no\n\n            # Set a key value on replica to check status during loading, on failure and after swapping db\n            $replica set mykey myvalue\n\n            # Set a function value on replica to check status during loading, on failure and after swapping db\n            $replica function load {#!lua name=test\n                redis.register_function('test', function() return 'hello1' end)\n            }\n\n            # Set a function value on master to check it reaches the replica when replication ends\n            $master function load {#!lua name=test\n                redis.register_function('test', function() return 'hello2' end)\n            }\n\n            # Remember the sync_full stat before the client kill.\n            set sync_full [s 0 sync_full]\n\n            if {$testType == \"Aborted\"} {\n                # Set master with a slow rdb generation, so that we can easily intercept loading\n                # 20ms per key, with 
2000 keys is 40 seconds\n                $master config set rdb-key-save-delay 20000\n            }\n\n            # Force the replica to try another full sync (this time it will have matching master replid)\n            $master multi\n            $master client kill type replica\n            # Fill replication backlog with new content\n            $master config set repl-backlog-size 16384\n            for {set keyid 0} {$keyid < 10} {incr keyid} {\n                $master set \"$keyid string_$keyid\" [string repeat A 16384]\n            }\n            $master exec\n\n            # Wait for sync_full to get incremented from the previous value.\n            # After the client kill, make sure we reconnect and do a FULL SYNC.\n            wait_for_condition 100 100 {\n                [s 0 sync_full] > $sync_full\n            } else {\n                fail \"Master <-> Replica didn't start the full sync\"\n            }\n\n            switch $testType {\n                \"Aborted\" {\n                    test {Diskless load swapdb (async_loading): replica enters async_loading} {\n                        # Wait for the replica to start reading the rdb\n                        wait_for_condition 100 100 {\n                            [s -1 async_loading] eq 1\n                        } else {\n                            fail \"Replica didn't get into async_loading mode\"\n                        }\n\n                        assert_equal [s -1 loading] 0\n                    }\n\n                    test {Diskless load swapdb (async_loading): old database is exposed while async replication is in progress} {\n                        # Ensure we still see old values while async_loading is in progress, and that there is no LOADING status\n                        assert_equal [$replica get mykey] \"myvalue\"\n\n                        # Ensure we can still call the old function while async_loading is in progress\n                        assert_equal [$replica fcall test 0] \"hello1\"\n\n                        # Make sure we're still async_loading to validate the previous assertion\n                        assert_equal [s -1 async_loading] 1\n\n                        # Make sure the number of replica keys didn't change\n                        assert_equal [$replica dbsize] 2001\n                    }\n\n                    test {Busy script during async loading} {\n                        set rd_replica [redis_deferring_client -1]\n                        $replica config set lua-time-limit 10\n                        $rd_replica eval {while true do end} 0\n                        after 200\n                        assert_error {BUSY*} {$replica ping}\n                        $replica script kill\n                        after 200 ; # Give Lua some time to call the hook again...\n                        assert_equal [$replica ping] \"PONG\"\n                        $rd_replica close\n                    }\n\n                    test {Blocked commands and configs during async-loading} {\n                        assert_error {LOADING*} {$replica REPLICAOF no one}\n                    }\n\n                    # Make sure that the next sync will not start immediately so that we can catch the replica in between syncs\n                    $master config set repl-diskless-sync-delay 5\n\n                    # Kill the replica connection on the master\n                    set killed [$master client kill type replica]\n\n                    # Wait for loading to stop (fail)\n                    wait_for_condition 100 100 {\n                        [s -1 async_loading] eq 0\n                    } else {\n                        fail \"Replica didn't disconnect\"\n                    }\n\n                    test {Diskless load swapdb (async_loading): old database is exposed after async replication fails} {\n                        # Ensure we see old values from the replica\n                        assert_equal [$replica get mykey] \"myvalue\"\n\n         
               # Ensure we can still call the old function\n                        assert_equal [$replica fcall test 0] \"hello1\"\n\n                        # Make sure the number of replica keys didn't change\n                        assert_equal [$replica dbsize] 2001\n                    }\n\n                    # Speed up shutdown\n                    $master config set rdb-key-save-delay 0\n                }\n                \"Successful\" {\n                    # Let replica finish sync with master\n                    wait_for_condition 100 100 {\n                        [s -1 master_link_status] eq \"up\"\n                    } else {\n                        fail \"Master <-> Replica didn't finish sync\"\n                    }\n\n                    test {Diskless load swapdb (async_loading): new database is exposed after swapping} {\n                        # Ensure we no longer see the key that was stored only on the replica, and that we don't get a LOADING status\n                        assert_equal [$replica GET mykey] \"\"\n\n                        # Ensure we got the new function\n                        assert_equal [$replica fcall test 0] \"hello2\"\n\n                        # Make sure the number of keys matches the master\n                        assert_equal [$replica dbsize] 2010\n                    }\n                }\n            }\n        }\n    }\n}\n\ntest {diskless loading short read} {\n    start_server {tags {\"repl\"} overrides {save \"\"}} {\n        set replica [srv 0 client]\n        set replica_host [srv 0 host]\n        set replica_port [srv 0 port]\n        start_server {overrides {save \"\"}} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            # Set master and replica to use diskless replication\n            $master config set repl-diskless-sync yes\n            $master config set rdbcompression no\n            $replica config set repl-diskless-load swapdb\n            $master config set hz 500\n            $replica config set hz 500\n            $master config set dynamic-hz no\n            $replica config set dynamic-hz no\n            # Try to fill the master with all data types / encodings\n            set start [clock clicks -milliseconds]\n\n            # Set a function value to check short read handling on functions\n            r function load {#!lua name=test\n                redis.register_function('test', function() return 'hello1' end)\n            }\n\n            set has_vector_sets [server_has_command vadd]\n\n            for {set k 0} {$k < 3} {incr k} {\n                for {set i 0} {$i < 10} {incr i} {\n                    r set \"$k int_$i\" [expr {int(rand()*10000)}]\n                    r expire \"$k int_$i\" [expr {int(rand()*10000)}]\n                    r set \"$k string_$i\" [string repeat A [expr {int(rand()*1000000)}]]\n                    r hset \"$k hash_small\" [string repeat A [expr {int(rand()*10)}]]  0[string repeat A [expr {int(rand()*10)}]]\n                    r hset \"$k hash_large\" [string repeat A [expr {int(rand()*10000)}]] [string repeat A [expr {int(rand()*1000000)}]]\n                    r hsetex \"$k hfe_small\" EX [expr {int(rand()*100)}] FIELDS 1 [string repeat A [expr {int(rand()*10)}]] 0[string repeat A [expr {int(rand()*10)}]]\n                    r hsetex \"$k hfe_large\" EX [expr {int(rand()*100)}] FIELDS 1 [string repeat A [expr {int(rand()*10000)}]] [string repeat A [expr {int(rand()*1000000)}]]\n                    r sadd \"$k set_small\" [string repeat A [expr {int(rand()*10)}]]\n                    r sadd \"$k set_large\" [string repeat A [expr {int(rand()*1000000)}]]\n                    r zadd \"$k zset_small\" [expr {rand()}] [string repeat A [expr {int(rand()*10)}]]\n                    r zadd \"$k zset_large\" [expr {rand()}] [string repeat A [expr {int(rand()*1000000)}]]\n                    r lpush \"$k list_small\" [string repeat A [expr {int(rand()*10)}]]\n                    r lpush \"$k list_large\" [string repeat A [expr {int(rand()*1000000)}]]\n\n                    if {$has_vector_sets} {\n                        r vadd \"$k vector_set\" VALUES 3 [expr {rand()}] [expr {rand()}] [expr {rand()}] [string repeat A [expr {int(rand()*1000)}]]\n                    }\n\n                    for {set j 0} {$j < 10} {incr j} {\n                        r xadd \"$k stream\" * foo \"asdf\" bar \"1234\"\n                    }\n                    r xgroup create \"$k stream\" \"mygroup_$i\" 0\n                    r xreadgroup GROUP \"mygroup_$i\" Alice COUNT 1 STREAMS \"$k stream\" >\n                }\n            }\n\n            if {$::verbose} {\n                set end [clock clicks -milliseconds]\n                set duration [expr $end - $start]\n                puts \"filling took $duration ms (TODO: use pipeline)\"\n                set start [clock clicks -milliseconds]\n            }\n\n            # Start the replication process...\n            set loglines [count_log_lines -1]\n            $master config set repl-diskless-sync-delay 0\n            $replica replicaof $master_host $master_port\n\n            # kill the replication at various points\n            set attempts 100\n            if {$::accurate} { set attempts 500 }\n            for {set i 0} {$i < $attempts} {incr i} {\n                # wait for the replica to start reading the rdb\n                # using the log file since the replica only responds to INFO once in 2mb\n                set res [wait_for_log_messages -1 {\"*Loading DB in memory*\"} $loglines 2000 1]\n                set loglines [lindex $res 1]\n\n                # add some additional random sleep so that we kill the connection at a different place each time\n                after [expr {int(rand()*50)}]\n\n                # kill the replica connection on the master\n                set killed [$master client kill type replica]\n\n                
set res [wait_for_log_messages -1 {\"*Internal error in RDB*\" \"*Finished with success*\" \"*Successful partial resynchronization*\"} $loglines 500 10]\n                if {$::verbose} { puts $res }\n                set log_text [lindex $res 0]\n                set loglines [lindex $res 1]\n                if {![string match \"*Internal error in RDB*\" $log_text]} {\n                    # force the replica to try another full sync\n                    $master multi\n                    $master client kill type replica\n                    $master set asdf asdf\n                    # fill replication backlog with new content\n                    $master config set repl-backlog-size 16384\n                    for {set keyid 0} {$keyid < 10} {incr keyid} {\n                        $master set \"$keyid string_$keyid\" [string repeat A 16384]\n                    }\n                    $master exec\n                }\n\n                # wait for loading to stop (fail)\n                # After a successful load, the next loop will enter `async_loading`\n                wait_for_condition 1000 1 {\n                    [s -1 async_loading] eq 0 &&\n                    [s -1 loading] eq 0\n                } else {\n                    fail \"Replica didn't disconnect\"\n                }\n            }\n            if {$::verbose} {\n                set end [clock clicks -milliseconds]\n                set duration [expr $end - $start]\n                puts \"test took $duration ms\"\n            }\n            # enable fast shutdown\n            $master config set rdb-key-save-delay 0\n        }\n    }\n} {} {external:skip}\n\n# get current stime and utime metrics for a thread (since its creation)\nproc get_cpu_metrics { statfile } {\n    if { [ catch {\n        set fid   [ open $statfile r ]\n        set data  [ read $fid 1024 ]\n        ::close $fid\n        set data  [ split $data ]\n\n        ;## number of jiffies it has been scheduled...\n        set utime [ 
lindex $data 13 ]\n        set stime [ lindex $data 14 ]\n    } err ] } {\n        error \"assertion:can't parse /proc: $err\"\n    }\n    set mstime [clock milliseconds]\n    return [ list $mstime $utime $stime ]\n}\n\n# compute %utime and %stime of a thread between two measurements\nproc compute_cpu_usage {start end} {\n    set clock_ticks [exec getconf CLK_TCK]\n    # convert ms time to jiffies and calc delta\n    set dtime [ expr { ([lindex $end 0] - [lindex $start 0]) * double($clock_ticks) / 1000 } ]\n    set utime [ expr { [lindex $end 1] - [lindex $start 1] } ]\n    set stime [ expr { [lindex $end 2] - [lindex $start 2] } ]\n    set pucpu  [ expr { ($utime / $dtime) * 100 } ]\n    set pscpu  [ expr { ($stime / $dtime) * 100 } ]\n    return [ list $pucpu $pscpu ]\n}\n\n\n# test diskless rdb pipe with multiple replicas, which may drop halfway\nstart_server {tags {\"repl external:skip tsan:skip\"} overrides {save \"\"}} {\n    set master [srv 0 client]\n    $master config set repl-diskless-sync yes\n    $master config set repl-diskless-sync-delay 5\n    $master config set repl-diskless-sync-max-replicas 2\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    set master_pid [srv 0 pid]\n    # put enough data in the db that the rdb file will be bigger than the socket buffers\n    # and since we'll have key-load-delay of 100, 20000 keys will take at least 2 seconds\n    # we also need the replica to process requests during transfer (which it does only once in 2mb)\n    $master debug populate 20000 test 10000\n    $master config set rdbcompression no\n    $master config set repl-rdb-channel no\n    # If running on Linux, we also measure utime/stime to detect possible I/O handling issues\n    catch {exec uname} os ;# catch stores the uname output in os\n    set measure_time [expr {$os == \"Linux\"} ? 
1 : 0]\n    foreach all_drop {no slow fast all timeout} {\n        test \"diskless $all_drop replicas drop during rdb pipe\" {\n            set replicas {}\n            set replicas_alive {}\n            # start one replica that will read the rdb fast, and one that will be slow\n            start_server {overrides {save \"\"}} {\n                lappend replicas [srv 0 client]\n                lappend replicas_alive [srv 0 client]\n                start_server {overrides {save \"\"}} {\n                    lappend replicas [srv 0 client]\n                    lappend replicas_alive [srv 0 client]\n\n                    # start replication\n                    # it's enough for just one replica to be slow, and have its write handler enabled\n                    # so that the whole rdb generation process is bound to that\n                    set loglines [count_log_lines -2]\n                    [lindex $replicas 0] config set repl-diskless-load swapdb\n                    [lindex $replicas 1] config set repl-diskless-load swapdb\n                    [lindex $replicas 0] config set key-load-delay 100 ;# 20k keys and 100 microseconds sleep means at least 2 seconds\n                    [lindex $replicas 0] replicaof $master_host $master_port\n                    [lindex $replicas 1] replicaof $master_host $master_port\n\n                    # wait for the replicas to start reading the rdb\n                    # using the log file since the replica only responds to INFO once in 2mb\n                    wait_for_log_messages -1 {\"*Loading DB in memory*\"} 0 1500 10\n\n                    if {$measure_time} {\n                        set master_statfile \"/proc/$master_pid/stat\"\n                        set master_start_metrics [get_cpu_metrics $master_statfile]\n                        set start_time [clock seconds]\n                    }\n\n                    # wait a while so that the pipe socket writer will be\n                    # blocked on write (since replica 
0 is slow to read from the socket)\n                    after 500\n\n                    # add some command to be present in the command stream after the rdb.\n                    $master incr $all_drop\n\n                    # disconnect replicas depending on the current test\n                    if {$all_drop == \"all\" || $all_drop == \"fast\"} {\n                        exec kill [srv 0 pid]\n                        set replicas_alive [lreplace $replicas_alive 1 1]\n                    }\n                    if {$all_drop == \"all\" || $all_drop == \"slow\"} {\n                        exec kill [srv -1 pid]\n                        set replicas_alive [lreplace $replicas_alive 0 0]\n                    }\n                    if {$all_drop == \"timeout\"} {\n                        $master config set repl-timeout 2\n                        # we want the slow replica to hang on a key for very long so it'll reach repl-timeout\n                        pause_process [srv -1 pid]\n                        after 2000\n                    }\n\n                    # wait for rdb child to exit\n                    wait_for_condition 500 100 {\n                        [s -2 rdb_bgsave_in_progress] == 0\n                    } else {\n                        fail \"rdb child didn't terminate\"\n                    }\n\n                    # make sure we got what we were aiming for, by looking for the message in the log file\n                    if {$all_drop == \"all\"} {\n                        wait_for_log_messages -2 {\"*Diskless rdb transfer, last replica dropped, killing fork child*\"} $loglines 1 1\n                    }\n                    if {$all_drop == \"no\"} {\n                        wait_for_log_messages -2 {\"*Diskless rdb transfer, done reading from pipe, 2 replicas still up*\"} $loglines 1 1\n                    }\n                    if {$all_drop == \"slow\" || $all_drop == \"fast\"} {\n                        wait_for_log_messages -2 {\"*Diskless rdb 
transfer, done reading from pipe, 1 replicas still up*\"} $loglines 1 1\n                    }\n                    if {$all_drop == \"timeout\"} {\n                        wait_for_log_messages -2 {\"*Disconnecting timedout replica (full sync)*\"} $loglines 1 1\n                        wait_for_log_messages -2 {\"*Diskless rdb transfer, done reading from pipe, 1 replicas still up*\"} $loglines 1 1\n                        # master disconnected the slow replica, remove from array\n                        set replicas_alive [lreplace $replicas_alive 0 0]\n                        # release it\n                        resume_process [srv -1 pid]\n                    }\n\n                    # make sure we don't have a busy loop going through epoll_wait\n                    if {$measure_time} {\n                        set master_end_metrics [get_cpu_metrics $master_statfile]\n                        set time_elapsed [expr {[clock seconds]-$start_time}]\n                        set master_cpu [compute_cpu_usage $master_start_metrics $master_end_metrics]\n                        set master_utime [lindex $master_cpu 0]\n                        set master_stime [lindex $master_cpu 1]\n                        if {$::verbose} {\n                            puts \"elapsed: $time_elapsed\"\n                            puts \"master utime: $master_utime\"\n                            puts \"master stime: $master_stime\"\n                        }\n                        if {!$::no_latency && ($all_drop == \"all\" || $all_drop == \"slow\" || $all_drop == \"timeout\")} {\n                            assert {$master_utime < 70}\n                            assert {$master_stime < 70}\n                        }\n                        if {!$::no_latency && ($all_drop == \"no\" || $all_drop == \"fast\")} {\n                            assert {$master_utime < 15}\n                            assert {$master_stime < 15}\n                        }\n                    }\n\n        
             # verify the data integrity\n                    foreach replica $replicas_alive {\n                        # Wait until replicas acknowledge they are online, so\n                        # we are sure that DBSIZE and DEBUG DIGEST will not\n                        # fail because of timing issues.\n                        wait_for_condition 150 100 {\n                            [lindex [$replica role] 3] eq {connected}\n                        } else {\n                            fail \"replicas still not connected after some time\"\n                        }\n\n                        # Make sure that replicas and master have the same\n                        # number of keys\n                        wait_for_condition 50 100 {\n                            [$master dbsize] == [$replica dbsize]\n                        } else {\n                            fail \"Different number of keys between master and replicas after too long a time.\"\n                        }\n\n                        # Check digests\n                        set digest [$master debug digest]\n                        set digest0 [$replica debug digest]\n                        assert {$digest ne 0000000000000000000000000000000000000000}\n                        assert {$digest eq $digest0}\n                    }\n                }\n            }\n        }\n    }\n}\n\ntest \"diskless replication child being killed is collected\" {\n    # when diskless master is waiting for the replica to become writable\n    # it removes the read event from the rdb pipe so if the child gets killed\n    # the replica will hang, and the master may not collect the pid with waitpid\n    start_server {tags {\"repl\"} overrides {save \"\"}} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n        set master_pid [srv 0 pid]\n        $master config set repl-diskless-sync yes\n        $master config set repl-diskless-sync-delay 0\n        $master config set repl-rdb-channel no\n        # put enough data in the db that the rdb file will be bigger than the socket buffers\n        $master debug populate 20000 test 10000\n        $master config set rdbcompression no\n        start_server {overrides {save \"\"}} {\n            set replica [srv 0 client]\n            set loglines [count_log_lines 0]\n            $replica config set repl-diskless-load swapdb\n            $replica config set key-load-delay 1000000\n            $replica config set loading-process-events-interval-bytes 1024\n            $replica replicaof $master_host $master_port\n\n            # wait for the replicas to start reading the rdb\n            wait_for_log_messages 0 {\"*Loading DB in memory*\"} $loglines 1500 10\n\n            # wait to be sure the replica is hung and the master is blocked on write\n            after 500\n\n            # simulate the OOM killer or anyone else killing the child\n            set fork_child_pid [get_child_pid -1]\n            exec kill -9 $fork_child_pid\n\n            # wait for the parent to notice the child has exited\n            wait_for_condition 50 100 {\n                [s -1 rdb_bgsave_in_progress] == 0\n            } else {\n                fail \"rdb child didn't terminate\"\n            }\n\n            # Speed up shutdown\n            $replica config set key-load-delay 0\n        }\n    }\n} {} {external:skip}\n\nforeach mdl {yes no} {\n    test \"replication child dies when parent is killed - diskless: $mdl\" {\n        # when master is killed, make sure the fork child can detect that and exit\n        start_server {tags {\"repl\"} overrides {save \"\"}} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n            set master_pid [srv 0 pid]\n            $master config set repl-diskless-sync $mdl\n            $master config set repl-diskless-sync-delay 0\n            # create keys that will take 10 seconds to save\n            $master config set rdb-key-save-delay 1000\n            $master debug populate 10000\n            start_server {overrides {save \"\"}} {\n                set replica [srv 0 client]\n                $replica replicaof $master_host $master_port\n\n                # wait for rdb child to start\n                wait_for_condition 5000 10 {\n                    [s -1 rdb_bgsave_in_progress] == 1\n                } else {\n                    fail \"rdb child didn't start\"\n                }\n                set fork_child_pid [get_child_pid -1]\n\n                # simulate the OOM killer or anyone else killing the parent\n                exec kill -9 $master_pid\n\n                # wait for the child to notice the parent has died and exit\n                wait_for_condition 500 10 {\n                    [process_is_alive $fork_child_pid] == 0\n                } else {\n                    fail \"rdb child didn't terminate\"\n                }\n            }\n        }\n    } {} {external:skip}\n}\n\ntest \"diskless replication read pipe cleanup\" {\n    # In diskless replication, we create a read pipe for the RDB, between the child and the parent.\n    # When we close this pipe (fd), the read handler also needs to be removed from the event loop (if it is still registered).\n    # Otherwise, the next time we use the same fd, the registration will fail (panic), because\n    # we would use EPOLL_CTL_MOD (the fd is still registered in the event loop) on an fd that was already removed from epoll_ctl\n    start_server {tags {\"repl\"} overrides {save \"\"}} {\n        set master [srv 0 client]\n      
  set master_host [srv 0 host]\n        set master_port [srv 0 port]\n        set master_pid [srv 0 pid]\n        $master config set repl-diskless-sync yes\n        $master config set repl-diskless-sync-delay 0\n\n        # put enough data in the db, and slow down the save, to keep the parent busy in the read process\n        $master config set rdb-key-save-delay 100000\n        $master debug populate 20000 test 10000\n        $master config set rdbcompression no\n        start_server {overrides {save \"\"}} {\n            set replica [srv 0 client]\n            set loglines [count_log_lines 0]\n            $replica config set repl-diskless-load swapdb\n            $replica replicaof $master_host $master_port\n\n            # wait for the replicas to start reading the rdb\n            wait_for_log_messages 0 {\"*Loading DB in memory*\"} $loglines 1500 10\n\n            set loglines [count_log_lines -1]\n            # send FLUSHALL so the RDB child will be killed\n            $master flushall\n\n            # wait for another RDB child process to be started\n            wait_for_log_messages -1 {\"*Background RDB transfer started by pid*\"} $loglines 800 10\n\n            # make sure master is alive\n            $master ping\n        }\n    }\n} {} {external:skip tsan:skip}\n\ntest {replicaof right after disconnection} {\n    # this is a rare race condition that was reproduced sporadically by the psync2 unit.\n    # see details in #7205\n    start_server {tags {\"repl\"} overrides {save \"\"}} {\n        set replica1 [srv 0 client]\n        set replica1_host [srv 0 host]\n        set replica1_port [srv 0 port]\n        set replica1_log [srv 0 stdout]\n        start_server {overrides {save \"\"}} {\n            set replica2 [srv 0 client]\n            set replica2_host [srv 0 host]\n            set replica2_port [srv 0 port]\n            set replica2_log [srv 0 stdout]\n            start_server {overrides {save \"\"}} {\n                set master [srv 0 client]\n     
           set master_host [srv 0 host]\n                set master_port [srv 0 port]\n                $replica1 replicaof $master_host $master_port\n                $replica2 replicaof $master_host $master_port\n\n                wait_for_condition 50 100 {\n                    [string match {*master_link_status:up*} [$replica1 info replication]] &&\n                    [string match {*master_link_status:up*} [$replica2 info replication]]\n                } else {\n                    fail \"Can't turn the instance into a replica\"\n                }\n\n                set rd [redis_deferring_client -1]\n                $rd debug sleep 1\n                after 100\n\n                # when replica2 wakes up from the sleep it will find both a disconnection\n                # from its master and a replicaof command in the same event loop\n                $master client kill type replica\n                $replica2 replicaof $replica1_host $replica1_port\n                $rd read\n\n                wait_for_condition 50 100 {\n                    [string match {*master_link_status:up*} [$replica2 info replication]]\n                } else {\n                    fail \"role change failed.\"\n                }\n\n                # make sure psync succeeded, and there were no unexpected full syncs.\n                assert_equal [status $master sync_full] 2\n                assert_equal [status $replica1 sync_full] 0\n                assert_equal [status $replica2 sync_full] 0\n            }\n        }\n    }\n} {} {external:skip}\n\ntest {Kill rdb child process if its dumping RDB is not useful} {\n    start_server {tags {\"repl\"}} {\n        set slave1 [srv 0 client]\n        start_server {} {\n            set slave2 [srv 0 client]\n            start_server {} {\n                set master [srv 0 client]\n                set master_host [srv 0 host]\n                set master_port [srv 0 port]\n                for {set i 0} {$i < 10} {incr i} {\n                
    $master set $i $i\n                }\n                # Generating RDB will cost 10s(10 * 1s)\n                $master config set rdb-key-save-delay 1000000\n                $master config set repl-diskless-sync no\n                $master config set save \"\"\n\n                $slave1 slaveof $master_host $master_port\n                $slave2 slaveof $master_host $master_port\n\n                # Wait for starting child\n                wait_for_condition 50 100 {\n                    ([s 0 rdb_bgsave_in_progress] == 1) &&\n                    ([string match \"*wait_bgsave*\" [s 0 slave0]]) &&\n                    ([string match \"*wait_bgsave*\" [s 0 slave1]])\n                } else {\n                    fail \"rdb child didn't start\"\n                }\n\n                # Slave1 disconnect with master\n                $slave1 slaveof no one\n                # Shouldn't kill child since another slave wait for rdb\n                after 100\n                assert {[s 0 rdb_bgsave_in_progress] == 1}\n\n                # Slave2 disconnect with master\n                $slave2 slaveof no one\n                # Should kill child\n                wait_for_condition 1000 10 {\n                    [s 0 rdb_bgsave_in_progress] eq 0\n                } else {\n                    fail \"can't kill rdb child\"\n                }\n\n                # If have save parameters, won't kill child\n                $master config set save \"900 1\"\n                $slave1 slaveof $master_host $master_port\n                $slave2 slaveof $master_host $master_port\n                wait_for_condition 50 100 {\n                    ([s 0 rdb_bgsave_in_progress] == 1) &&\n                    ([string match \"*wait_bgsave*\" [s 0 slave0]]) &&\n                    ([string match \"*wait_bgsave*\" [s 0 slave1]])\n                } else {\n                    fail \"rdb child didn't start\"\n                }\n                $slave1 slaveof no one\n                $slave2 slaveof 
no one\n                after 200\n                assert {[s 0 rdb_bgsave_in_progress] == 1}\n                catch {$master shutdown nosave}\n            }\n        }\n    }\n} {} {external:skip}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master1_host [srv 0 host]\n    set master1_port [srv 0 port]\n    r set a b\n\n    start_server {} {\n        set master2 [srv 0 client]\n        set master2_host [srv 0 host]\n        set master2_port [srv 0 port]\n        # Take 10s for dumping RDB\n        $master2 debug populate 10 master2 10\n        $master2 config set rdb-key-save-delay 1000000\n\n        start_server {} {\n            set sub_replica [srv 0 client]\n\n            start_server {} {\n                # Full sync with master1\n                r slaveof $master1_host $master1_port\n                wait_for_sync r\n                assert_equal \"b\" [r get a]\n\n                # Let sub replicas sync with me\n                $sub_replica slaveof [srv 0 host] [srv 0 port]\n                wait_for_sync $sub_replica\n                assert_equal \"b\" [$sub_replica get a]\n\n                # Full sync with master2, and then kill master2 before finishing dumping RDB\n                r slaveof $master2_host $master2_port\n                wait_for_condition 50 100 {\n                    ([s -2 rdb_bgsave_in_progress] == 1) &&\n                        ([string match \"*wait_bgsave*\" [s -2 slave0]] ||\n                         [string match \"*send_bulk_and_stream*\" [s -2 slave0]])\n                } else {\n                    fail \"full sync didn't start\"\n                }\n                catch {$master2 shutdown nosave}\n\n                test {Don't disconnect with replicas before loading transferred RDB when full sync} {\n                    assert ![log_file_matches [srv -1 stdout] \"*Connection with master lost*\"]\n                    # The replication id is not changed in entire replication chain\n                    assert_equal [s 
master_replid] [s -3 master_replid]\n                    assert_equal [s master_replid] [s -1 master_replid]\n                }\n\n                test {Discard cache master before loading transferred RDB when full sync} {\n                    set full_sync [s -3 sync_full]\n                    set partial_sync [s -3 sync_partial_ok]\n                    # Partial sync with master1\n                    r slaveof $master1_host $master1_port\n                    wait_for_sync r\n                    # master1 accepts partial sync instead of full sync\n                    assert_equal $full_sync [s -3 sync_full]\n                    assert_equal [expr $partial_sync+1] [s -3 sync_partial_ok]\n\n                    # Since the master only partially syncs the replica, and the repl id is not changed,\n                    # the replica doesn't disconnect from its sub-replicas\n                    assert_equal [s master_replid] [s -3 master_replid]\n                    assert_equal [s master_replid] [s -1 master_replid]\n                    assert ![log_file_matches [srv -1 stdout] \"*Connection with master lost*\"]\n                    # The sub replica has just one full sync, no partial resync.\n                    assert_equal 1 [s sync_full]\n                    assert_equal 0 [s sync_partial_ok]\n                }\n            }\n        }\n    }\n}\n\ntest {replica can handle EINTR if use diskless load} {\n    start_server {tags {\"repl\"}} {\n        set replica [srv 0 client]\n        set replica_log [srv 0 stdout]\n        start_server {} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            $master debug populate 100 master 100000\n            $master config set rdbcompression no\n            $master config set repl-diskless-sync yes\n            $master config set repl-diskless-sync-delay 0\n            $replica config set repl-diskless-load on-empty-db\n            # Construct EINTR error by 
using the built-in watchdog\n            $replica config set watchdog-period 200\n            # Block replica in read()\n            $master config set rdb-key-save-delay 10000\n            # Set speedy shutdown\n            $master config set save \"\"\n            # Start the replication process...\n            $replica replicaof $master_host $master_port\n\n            # Wait for the replica to start reading the rdb\n            set res [wait_for_log_messages -1 {\"*Loading DB in memory*\"} 0 200 10]\n            set loglines [lindex $res 1]\n\n            # Wait till we see the watchdog log line AFTER the loading started\n            wait_for_log_messages -1 {\"*WATCHDOG TIMER EXPIRED*\"} $loglines 200 10\n\n            # Make sure we're still loading, and that there was just one full sync attempt\n            assert ![log_file_matches [srv -1 stdout] \"*Reconnecting to MASTER*\"]\n            assert_equal 1 [s 0 sync_full]\n            assert_equal 1 [s -1 loading]\n        }\n    }\n} {} {external:skip}\n\nstart_server {tags {\"repl\" \"external:skip\"}} {\n    test \"replica do not write the reply to the replication link - SYNC (_addReplyToBufferOrList)\" {\n        set rd [redis_deferring_client]\n        set lines [count_log_lines 0]\n\n        $rd sync\n        $rd ping\n        catch {$rd read} e\n        if {$::verbose} { puts \"SYNC _addReplyToBufferOrList: $e\" }\n        assert_equal \"PONG\" [r ping]\n\n        # Check we got the warning logs about the PING command.\n        verify_log_message 0 \"*Replica generated a reply to command 'ping', disconnecting it: *\" $lines\n\n        $rd close\n        waitForBgsave r\n    }\n\n    test \"replica do not write the reply to the replication link - SYNC (addReplyDeferredLen)\" {\n        set rd [redis_deferring_client]\n        set lines [count_log_lines 0]\n\n        $rd sync\n        $rd xinfo help\n        catch {$rd read} e\n        if {$::verbose} { puts \"SYNC addReplyDeferredLen: $e\" }\n        
assert_equal \"PONG\" [r ping]\n\n        # Check we got the warning logs about the XINFO HELP command.\n        verify_log_message 0 \"*Replica generated a reply to command 'xinfo|help', disconnecting it: *\" $lines\n\n        $rd close\n        waitForBgsave r\n    }\n\n    test \"replica do not write the reply to the replication link - PSYNC (_addReplyToBufferOrList)\" {\n        set rd [redis_deferring_client]\n        set lines [count_log_lines 0]\n\n        $rd psync replicationid -1\n        assert_match {FULLRESYNC * 0} [$rd read]\n        $rd get foo\n        catch {$rd read} e\n        if {$::verbose} { puts \"PSYNC _addReplyToBufferOrList: $e\" }\n        assert_equal \"PONG\" [r ping]\n\n        # Check we got the warning logs about the GET command.\n        verify_log_message 0 \"*Replica generated a reply to command 'get', disconnecting it: *\" $lines\n        verify_log_message 0 \"*== CRITICAL == This master is sending an error to its replica: *\" $lines\n        verify_log_message 0 \"*Replica can't interact with the keyspace*\" $lines\n\n        $rd close\n        waitForBgsave r\n    }\n\n    test \"replica do not write the reply to the replication link - PSYNC (addReplyDeferredLen)\" {\n        set rd [redis_deferring_client]\n        set lines [count_log_lines 0]\n\n        $rd psync replicationid -1\n        assert_match {FULLRESYNC * 0} [$rd read]\n        $rd slowlog get\n        catch {$rd read} e\n        if {$::verbose} { puts \"PSYNC addReplyDeferredLen: $e\" }\n        assert_equal \"PONG\" [r ping]\n\n        # Check we got the warning logs about the SLOWLOG GET command.\n        verify_log_message 0 \"*Replica generated a reply to command 'slowlog|get', disconnecting it: *\" $lines\n\n        $rd close\n        waitForBgsave r\n    }\n\n    test \"PSYNC with wrong offset should throw error\" {\n        # It used to accept the FULL SYNC, but also replied with an error.\n        assert_error {ERR value is not an integer or out of range} 
{r psync replicationid offset_str}\n        set logs [exec tail -n 100 < [srv 0 stdout]]\n        assert_match {*Replica * asks for synchronization but with a wrong offset} $logs\n        assert_equal \"PONG\" [r ping]\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    $master debug SET-ACTIVE-EXPIRE 0\n    start_server {} {\n        set slave [srv 0 client]\n        $slave debug SET-ACTIVE-EXPIRE 0\n        $slave slaveof $master_host $master_port\n\n        test \"Test replication with lazy expire\" {\n            # wait for replication to be in sync\n            wait_for_condition 50 100 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            $master sadd s foo\n            $master pexpire s 1\n            after 10\n            $master sadd s foo\n            assert_equal 1 [$master wait 1 0]\n\n            assert_equal \"set\" [$master type s]\n            assert_equal \"set\" [$slave type s]\n        }\n    }\n}\n\nforeach disklessload {disabled on-empty-db} {\n    test \"Replica should reply LOADING while flushing a large db (disklessload: $disklessload)\" {\n        start_server {} {\n            set replica [srv 0 client]\n            start_server {} {\n                set master [srv 0 client]\n                set master_host [srv 0 host]\n                set master_port [srv 0 port]\n\n                $replica config set repl-diskless-load $disklessload\n\n                # Populate replica with many keys, master with a few keys.\n                $replica debug populate 4000000\n                populate 3 master 10\n\n                # Start the replication process...\n                $replica replicaof $master_host $master_port\n\n              
  wait_for_condition 100 100 {\n                    [s -1 loading] eq 1\n                } else {\n                    fail \"Replica didn't get into loading mode\"\n                }\n\n                # If the replica has a large db, it may take some time to discard it\n                # after receiving the new db from the master. In this case, the\n                # replica should reply -LOADING. The replica may reply -LOADING\n                # while loading the new db as well. To test the first case, we\n                # populated the replica with a large amount of keys and the\n                # master with a few keys. Discarding the old db will take a long\n                # time and loading the new one will be quick. So, if we receive\n                # -LOADING, it most probably happens while flushing the db.\n                wait_for_condition 1 10000 {\n                    [catch {$replica ping} err] &&\n                    [string match *LOADING* $err]\n                } else {\n                    # There is a chance that we may not catch the LOADING response\n                    # if flushing the db happens too fast compared to test execution.\n                    # Then, we may consider increasing the key count or introducing\n                    # an artificial delay to the db flush.\n                    fail \"Replica did not reply LOADING.\"\n                }\n\n                catch {$replica shutdown nosave}\n            }\n        }\n    } {} {repl external:skip debug_defrag:skip}\n}\n\nstart_server {tags {\"repl external:skip\"} overrides {save {}}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    populate 10000 master 10\n\n    start_server {overrides {save {} rdb-del-sync-files yes loading-process-events-interval-bytes 1024}} {\n        test \"Allow appendonly config change while loading rdb on slave\" {\n            set replica [srv 0 client]\n\n            # While loading rdb on slave, verify appendonly config changes are allowed\n            
# 1- Change appendonly config from no to yes\n            $replica config set appendonly no\n            $replica config set key-load-delay 100\n            $replica debug populate 1000\n\n            # Start the replication process...\n            $replica replicaof $master_host $master_port\n\n            wait_for_condition 10 1000 {\n                [s loading] eq 1\n            } else {\n                fail \"Replica didn't get into loading mode\"\n            }\n\n            # Change config while replica is loading data\n            $replica config set appendonly yes\n            assert_equal 1 [s loading]\n\n            # Speed up loading and verify aof is enabled\n            $replica config set key-load-delay 0\n            wait_done_loading $replica\n            assert_equal 1 [s aof_enabled]\n\n            # Quick sanity for AOF\n            $replica replicaof no one\n            set prev [s aof_current_size]\n            $replica set x 100\n            assert_morethan [s aof_current_size] $prev\n\n            # 2- While loading rdb, change appendonly from yes to no\n            $replica config set appendonly yes\n            $replica config set key-load-delay 100\n            $replica flushall\n\n            # Start the replication process...\n            $replica replicaof $master_host $master_port\n\n            wait_for_condition 10 1000 {\n                [s loading] eq 1\n            } else {\n                fail \"Replica didn't get into loading mode\"\n            }\n\n            # Change config while replica is loading data\n            $replica config set appendonly no\n            assert_equal 1 [s loading]\n\n            # Speed up loading and verify aof is disabled\n            $replica config set key-load-delay 0\n            wait_done_loading $replica\n            assert_equal 0 [s 0 aof_enabled]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    start_server {} {\n        set 
master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test \"Replica flushes db lazily when replica-lazy-flush enabled\" {\n            $replica config set replica-lazy-flush yes\n            $replica debug populate 1000\n            populate 1 master 10\n\n            # Start the replication process...\n            $replica replicaof $master_host $master_port\n\n            wait_for_condition 100 100 {\n                [s -1 lazyfreed_objects] >= 1000 &&\n                [s -1 master_link_status] eq {up}\n            } else {\n                fail \"Replica did not free db lazily\"\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test \"Test replication with functions when repl-diskless-load is set to on-empty-db\" {\n            $replica config set repl-diskless-load on-empty-db\n\n            populate 10 master 10\n            $master function load {#!lua name=test\n                redis.register_function{function_name='func1', callback=function() return 'hello' end, flags={'no-writes'}}\n            }\n\n            $replica replicaof $master_host $master_port\n\n            # Wait until replication is completed\n            wait_for_sync $replica\n            wait_for_ofs_sync $master $replica\n\n            # Sanity check\n            assert_equal [$replica fcall func1 0] \"hello\"\n            assert_morethan [$replica dbsize] 0\n            assert_equal [$master debug digest] [$replica debug digest]\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    start_server {} {\n        set slave [srv 0 client]\n        $slave slaveof $master_host $master_port\n\n        test 
\"Accumulate repl_total_disconnect_time with delayed reconnection\" {\n            wait_for_condition 50 100 {\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Initial replica setup failed\"\n            }\n\n            # Simulate a disconnect by pointing to an invalid master\n            $slave slaveof $master_host 0\n            after 1000\n\n            $slave slaveof $master_host $master_port\n\n            wait_for_condition 50 100 {\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Replica did not reconnect\"\n            }\n            assert {[status $slave total_disconnect_time_sec] >= 1}\n        }\n\n        test \"Test the total_disconnect_time_sec incr after slaveof no one\" {\n            $slave slaveof no one\n            after 1000\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n            assert {[status $slave total_disconnect_time_sec] >= 2}\n        }\n\n        test \"Test correct replication disconnection time counters behavior\" {\n            # Simulate disconnection\n            $slave slaveof $master_host 0\n\n            after 1000\n\n            set total_disconnect_time [status $slave total_disconnect_time_sec]\n            set link_down_since [status $slave master_link_down_since_seconds]\n\n            # Restore the real master\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Replication did not reconnect\"\n            }\n 
           # total_disconnect_time and link_down_since increase\n            assert {$total_disconnect_time >= 3}\n            assert {$link_down_since > 0}\n            assert {$total_disconnect_time > $link_down_since}\n\n            # total_disconnect_time_reconnect can be up to 5 seconds more than total_disconnect_time due to reconnection time\n            set total_disconnect_time_reconnect [status $slave total_disconnect_time_sec]\n            assert {$total_disconnect_time_reconnect >= $total_disconnect_time && $total_disconnect_time_reconnect <= $total_disconnect_time + 5}\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    start_server {} {\n        set slave [srv 0 client]\n        $slave slaveof $master_host $master_port\n\n        # Test: Normal establishment of the master link\n        test \"Test normal establishment process of the master link\" {\n            wait_for_condition 50 100 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            assert_equal 1 [status $slave master_current_sync_attempts]\n            assert_equal 1 [status $slave master_total_sync_attempts]\n        }\n\n        # Test: Sync attempts reset after 'slaveof no one'\n        test \"Test sync attempts reset after slaveof no one\" {\n            $slave slaveof no one\n            $slave slaveof $master_host $master_port\n\n            wait_for_condition 50 100 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            assert_equal 1 [status $slave 
master_current_sync_attempts]\n            assert_equal 1 [status $slave master_total_sync_attempts]\n        }\n\n        # Test: Sync attempts reset on master reconnect\n        test \"Test sync attempts reset on master reconnect\" {\n            $slave client kill type master\n\n            wait_for_condition 50 100 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            assert_equal 1 [status $slave master_current_sync_attempts]\n            assert_equal 2 [status $slave master_total_sync_attempts]\n        }\n\n        # Test: Sync attempts reset on master switch\n        test \"Test sync attempts reset on master switch\" {\n            start_server {} {\n                set new_master_host [srv 0 host]\n                set new_master_port [srv 0 port]\n                $slave slaveof $new_master_host $new_master_port\n\n                wait_for_condition 50 100 {\n                    [lindex [$slave role] 0] eq {slave} &&\n                    [string match {*master_link_status:up*} [$slave info replication]]\n                } else {\n                    fail \"Can't turn the instance into a replica\"\n                }\n\n                assert_equal 1 [status $slave master_current_sync_attempts]\n                assert_equal 1 [status $slave master_total_sync_attempts]\n            }\n        }\n\n        # Test: Replication current attempts counter behavior\n        test \"Replication current attempts counter behavior\" {\n            $slave slaveof $master_host $master_port\n\n            # Wait until replica state becomes \"connected\"\n            wait_for_condition 1000 50 {\n                [lindex [$slave role] 0] eq {slave} &&\n                [string match {*master_link_status:up*} [$slave info replication]]\n            } else {\n                
fail \"slave did not connect to master.\"\n            }\n\n            assert_equal 1 [status $slave master_current_sync_attempts]\n\n            # Connect to an invalid master\n            $slave slaveof $master_host 0\n\n            # Expect current sync attempts to increase\n            wait_for_condition 100 50 {\n                [status $slave master_current_sync_attempts] >= 2\n            } else {\n                fail \"Timeout waiting for master_current_sync_attempts\"\n            }\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    start_server {overrides {io-threads 2}} {\n        set slave [srv 0 client]\n\n        test {prefetchCommands handles NULL argv and keys during RDB replication with IO threads} {\n            # Enable diskless sync to trigger RDB streaming during replication\n            $master config set repl-diskless-sync yes\n            $master config set repl-diskless-sync-delay 0\n\n            # Populate keys in the format key:$i with 128-byte values.\n            $slave debug populate 700000 key 128\n\n            # Force a full resync by resetting the slave.\n            set rd [redis_deferring_client 0]\n            $rd slaveof $master_host $master_port\n\n            # Create a large pipeline command.\n            set batch_size 1000\n            set buf \"\"\n            for {set i 0} {$i < $batch_size} {incr i} {\n                append buf [format_command get key:1]\n            }\n            \n            # Continuously send pipelined commands so that the replica processes\n            # and prefetches them while it is emptying old data during full sync.\n            set start_time [clock milliseconds]\n            while {[clock milliseconds] - $start_time < 5000} {\n                $rd write $buf\n                $rd flush\n                if {[s 0 master_link_status] eq \"up\"} break\n            }\n 
           $rd close\n        }\n    }\n}\n"
  },
  {
    "path": "tests/integration/shutdown.tcl",
    "content": "# This test suite tests shutdown when there are lagging replicas connected.\n\n# Fill up the OS socket send buffer for the replica connection 1M at a time.\n# When the replication buffer memory increases beyond 2M (often after writing 4M\n# or so), we assume it's because the OS socket send buffer can't swallow\n# anymore.\nproc fill_up_os_socket_send_buffer_for_repl {idx} {\n    set i 0\n    while {1} {\n        incr i\n        populate 1024 junk$i: 1024 $idx\n        after 10\n        set buf_size [s $idx mem_total_replication_buffers]\n        if {$buf_size > 2*1024*1024} {\n            break\n        }\n    }\n}\n\nforeach how {sigterm shutdown} {\n    test \"Shutting down master waits for replica to catch up ($how)\" {\n        start_server {overrides {save \"\"}} {\n            start_server {overrides {save \"\"}} {\n                set master [srv -1 client]\n                set master_host [srv -1 host]\n                set master_port [srv -1 port]\n                set master_pid [srv -1 pid]\n                set replica [srv 0 client]\n                set replica_pid [srv 0 pid]\n\n                # Config master.\n                $master config set shutdown-timeout 300; # 5min for slow CI\n                $master config set repl-backlog-size 1;  # small as possible\n                $master config set hz 100;               # cron runs every 10ms\n\n                # Config replica.\n                $replica replicaof $master_host $master_port\n                wait_for_sync $replica\n\n                # Preparation: Set k to 1 on both master and replica.\n                $master set k 1\n                wait_for_ofs_sync $master $replica\n\n                # Pause the replica.\n                pause_process $replica_pid\n\n                # Fill up the OS socket send buffer for the replica connection\n                # to prevent the following INCR from reaching the replica via\n                # the OS.\n                
fill_up_os_socket_send_buffer_for_repl -1\n\n                # Incr k and immediately shutdown master.\n                $master incr k\n                switch $how {\n                    sigterm {\n                        exec kill -SIGTERM $master_pid\n                    }\n                    shutdown {\n                        set rd [redis_deferring_client -1]\n                        $rd shutdown\n                    }\n                }\n                wait_for_condition 50 100 {\n                    [s -1 shutdown_in_milliseconds] > 0\n                } else {\n                    fail \"Master not indicating ongoing shutdown.\"\n                }\n\n                # Wake up replica and check if master has waited for it.\n                after 20; # 2 cron intervals\n                resume_process $replica_pid\n                wait_for_condition 300 1000 {\n                    [$replica get k] eq 2\n                } else {\n                    fail \"Master exited before replica could catch up.\"\n                }\n\n                # Check shutdown log messages on master\n                wait_for_log_messages -1 {\"*ready to exit, bye bye*\"} 0 100 500\n                assert_equal 0 [count_log_message -1 \"*Lagging replica*\"]\n                verify_log_message -1 \"*1 of 1 replicas are in sync*\" 0\n            }\n        }\n    } {} {repl external:skip}\n}\n\ntest {Shutting down master waits for replica timeout} {\n    start_server {overrides {save \"\"}} {\n        start_server {overrides {save \"\"}} {\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n            set master_pid [srv -1 pid]\n            set replica [srv 0 client]\n            set replica_pid [srv 0 pid]\n\n            # Config master.\n            $master config set shutdown-timeout 1; # second\n\n            # Config replica.\n            $replica replicaof $master_host $master_port\n            
wait_for_sync $replica\n\n            # Preparation: Set k to 1 on both master and replica.\n            $master set k 1\n            wait_for_ofs_sync $master $replica\n\n            # Pause the replica.\n            pause_process $replica_pid\n\n            # Fill up the OS socket send buffer for the replica connection to\n            # prevent the following INCR k from reaching the replica via the OS.\n            fill_up_os_socket_send_buffer_for_repl -1\n\n            # Incr k and immediately shutdown master.\n            $master incr k\n            exec kill -SIGTERM $master_pid\n            wait_for_condition 50 100 {\n                [s -1 shutdown_in_milliseconds] > 0\n            } else {\n                fail \"Master not indicating ongoing shutdown.\"\n            }\n\n            # Let master finish shutting down and check log.\n            wait_for_log_messages -1 {\"*ready to exit, bye bye*\"} 0 100 100\n            verify_log_message -1 \"*Lagging replica*\" 0\n            verify_log_message -1 \"*0 of 1 replicas are in sync*\" 0\n\n            # Wake up replica.\n            resume_process $replica_pid\n            assert_equal 1 [$replica get k]\n        }\n    }\n} {} {repl external:skip}\n\ntest \"Shutting down master waits for replica then fails\" {\n    start_server {overrides {save \"\"}} {\n        start_server {overrides {save \"\"}} {\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n            set master_pid [srv -1 pid]\n            set replica [srv 0 client]\n            set replica_pid [srv 0 pid]\n\n            # Config master and replica.\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            # Pause the replica and write a key on master.\n            pause_process $replica_pid\n            $master incr k\n\n            # Two clients call blocking SHUTDOWN in parallel.\n            set rd1 
[redis_deferring_client -1]\n            set rd2 [redis_deferring_client -1]\n            $rd1 shutdown\n            $rd2 shutdown\n            wait_for_condition 100 10 {\n                [llength [regexp -all -inline {cmd=shutdown} [$master client list]]] eq 2\n            } else {\n                fail \"shutdown did not arrive\"\n            }\n            set info_clients [$master info clients]\n            assert_match \"*connected_clients:3*\" $info_clients\n            assert_match \"*blocked_clients:2*\" $info_clients\n\n            # Start a very slow initial AOFRW, which will prevent shutdown.\n            $master config set rdb-key-save-delay 30000000; # 30 seconds\n            $master config set appendonly yes\n\n            # Wake up replica, causing master to continue shutting down.\n            resume_process $replica_pid\n\n            # SHUTDOWN returns an error to both clients blocking on SHUTDOWN.\n            catch { $rd1 read } e1\n            catch { $rd2 read } e2\n            assert_match \"*Errors trying to SHUTDOWN. Check logs*\" $e1\n            assert_match \"*Errors trying to SHUTDOWN. 
Check logs*\" $e2\n\n            # Verify that after shutdown is cancelled, the client is properly\n            # reset and can handle other commands normally.\n            $rd1 PING\n            assert_equal \"PONG\" [$rd1 read]\n\n            $rd1 close\n            $rd2 close\n\n            # Check shutdown log messages on master.\n            verify_log_message -1 \"*1 of 1 replicas are in sync*\" 0\n            verify_log_message -1 \"*Writing initial AOF, can't exit*\" 0\n            verify_log_message -1 \"*Errors trying to shut down*\" 0\n\n            # Let the master exit fast, without waiting for the very slow AOFRW.\n            catch {$master shutdown nosave force}\n        }\n    }\n} {} {repl external:skip}\n\ntest \"Shutting down master waits for replica then aborted\" {\n    start_server {overrides {save \"\"}} {\n        start_server {overrides {save \"\"}} {\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n            set master_pid [srv -1 pid]\n            set replica [srv 0 client]\n            set replica_pid [srv 0 pid]\n\n            # Config master and replica.\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            # Pause the replica and write a key on master.\n            pause_process $replica_pid\n            $master incr k\n\n            # Two clients call blocking SHUTDOWN in parallel.\n            set rd1 [redis_deferring_client -1]\n            set rd2 [redis_deferring_client -1]\n            $rd1 shutdown\n            $rd2 shutdown\n            wait_for_condition 100 10 {\n                [llength [regexp -all -inline {cmd=shutdown} [$master client list]]] eq 2\n            } else {\n                fail \"shutdown did not arrive\"\n            }\n            set info_clients [$master info clients]\n            assert_match \"*connected_clients:3*\" $info_clients\n            assert_match 
\"*blocked_clients:2*\" $info_clients\n\n            # Abort the shutdown\n            $master shutdown abort\n\n            # Wake up replica, causing master to continue shutting down.\n            resume_process $replica_pid\n\n            # SHUTDOWN returns an error to both clients blocking on SHUTDOWN.\n            catch { $rd1 read } e1\n            catch { $rd2 read } e2\n            assert_match \"*Errors trying to SHUTDOWN. Check logs*\" $e1\n            assert_match \"*Errors trying to SHUTDOWN. Check logs*\" $e2\n            $rd1 close\n            $rd2 close\n\n            # Check shutdown log messages on master.\n            verify_log_message -1 \"*Shutdown manually aborted*\" 0\n        }\n    }\n} {} {repl external:skip}\n"
  },
  {
    "path": "tests/modules/Makefile",
    "content": "\n# find the OS\nuname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not')\n\nwarning_cflags = -W -Wall -Wno-missing-field-initializers\nifeq ($(uname_S),Darwin)\n\tSHOBJ_CFLAGS ?= $(warning_cflags) -dynamic -fno-common -g -ggdb -std=gnu11 -O2\n\tSHOBJ_LDFLAGS ?= -bundle -undefined dynamic_lookup\nelse\t# Linux, others\n\tSHOBJ_CFLAGS ?= $(warning_cflags) -fno-common -g -ggdb -std=gnu11 -O2\n\tSHOBJ_LDFLAGS ?= -shared\nendif\n\nCLANG := $(findstring clang,$(shell sh -c '$(CC) --version | head -1'))\n\nifeq ($(SANITIZER),memory)\nifeq (clang, $(CLANG))\n\tLD=clang\n\tMALLOC=libc\n\tCFLAGS+=-fsanitize=memory -fsanitize-memory-track-origins=2 -fno-sanitize-recover=all -fno-omit-frame-pointer\n\tLDFLAGS+=-fsanitize=memory\nelse\n    $(error \"MemorySanitizer needs to be compiled and linked with clang. Please use CC=clang\")\nendif\nendif\n\n\n# This is a hack to override the default CC. When running with SANITIZER=memory\n# though we want to keep the compiler as clang as MSan is not supported for gcc\nifeq ($(uname_S),Linux)\nifneq ($(SANITIZER),memory)\n\tLD = gcc\n\tCC = gcc\nendif\nendif\n\n# OS X 11.x doesn't have /usr/lib/libSystem.dylib and needs an explicit setting.\nifeq ($(uname_S),Darwin)\nifeq (\"$(wildcard /usr/lib/libSystem.dylib)\",\"\")\nLIBS = -L /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib -lsystem\nendif\nendif\n\nTEST_MODULES = \\\n    commandfilter.so \\\n    basics.so \\\n    testrdb.so \\\n    fork.so \\\n    infotest.so \\\n    propagate.so \\\n    misc.so \\\n    hooks.so \\\n    blockonkeys.so \\\n    blockonbackground.so \\\n    scan.so \\\n    datatype.so \\\n    datatype2.so \\\n    auth.so \\\n    keyspace_events.so \\\n    blockedclient.so \\\n    getkeys.so \\\n    getchannels.so \\\n    test_lazyfree.so \\\n    timer.so \\\n    defragtest.so \\\n    keyspecs.so \\\n    hash.so \\\n    zset.so \\\n    stream.so \\\n    mallocsize.so \\\n    aclcheck.so \\\n    list.so \\\n    subcommands.so \\\n    
reply.so \\\n    cmdintrospection.so \\\n    eventloop.so \\\n    moduleconfigs.so \\\n    moduleconfigstwo.so \\\n    publish.so \\\n    usercall.so \\\n    postnotifications.so \\\n    moduleauthtwo.so \\\n    rdbloadsave.so \\\n    crash.so \\\n    internalsecret.so \\\n    configaccess.so \\\n    test_keymeta.so \\\n    keymeta_notify.so \\\n    atomicslotmigration.so\n\n.PHONY: all\n\nall: $(TEST_MODULES)\n\n32bit:\n\t$(MAKE) CFLAGS=\"-m32\" LDFLAGS=\"-m32\"\n\n%.xo: %.c ../../src/redismodule.h\n\t$(CC) -I../../src $(CFLAGS) $(SHOBJ_CFLAGS) -fPIC -c $< -o $@\n\n%.so: %.xo\n\t$(LD) -o $@ $^ $(SHOBJ_LDFLAGS) $(LDFLAGS) $(LIBS)\n\n.PHONY: clean\n\nclean:\n\trm -f $(TEST_MODULES) $(TEST_MODULES:.so=.xo)\n"
  },
  {
    "path": "tests/modules/aclcheck.c",
    "content": "\n#include \"redismodule.h\"\n#include <errno.h>\n#include <assert.h>\n#include <string.h>\n#include <strings.h>\n\n/* A wrap for SET command with ACL check on the key. */\nint set_aclcheck_key(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 4) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    int permissions;\n    const char *flags = RedisModule_StringPtrLen(argv[1], NULL);\n\n    if (!strcasecmp(flags, \"W\")) {\n        permissions = REDISMODULE_CMD_KEY_UPDATE;\n    } else if (!strcasecmp(flags, \"R\")) {\n        permissions = REDISMODULE_CMD_KEY_ACCESS;\n    } else if (!strcasecmp(flags, \"*\")) {\n        permissions = REDISMODULE_CMD_KEY_UPDATE | REDISMODULE_CMD_KEY_ACCESS;\n    } else if (!strcasecmp(flags, \"~\")) {\n        permissions = 0; /* Requires either read or write */\n    } else {\n        RedisModule_ReplyWithError(ctx, \"INVALID FLAGS\");\n        return REDISMODULE_OK;\n    }\n\n    /* Check that the key can be accessed */\n    RedisModuleString *user_name = RedisModule_GetCurrentUserName(ctx);\n    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(user_name);\n    int ret = RedisModule_ACLCheckKeyPermissions(user, argv[2], permissions);\n    if (ret != 0) {\n        RedisModule_ReplyWithError(ctx, \"DENIED KEY\");\n        RedisModule_FreeModuleUser(user);\n        RedisModule_FreeString(ctx, user_name);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, \"SET\", \"v\", argv + 2, (size_t)argc - 2);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    RedisModule_FreeModuleUser(user);\n    RedisModule_FreeString(ctx, user_name);\n    return REDISMODULE_OK;\n}\n\n/* A wrap for SET command with ACL check on the key. 
*/\nint set_aclcheck_prefixkey(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 4) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    int permissions;\n    const char *flags = RedisModule_StringPtrLen(argv[1], NULL);\n\n    if (!strcasecmp(flags, \"W\")) {\n        permissions = REDISMODULE_CMD_KEY_UPDATE;\n    } else if (!strcasecmp(flags, \"R\")) {\n        permissions = REDISMODULE_CMD_KEY_ACCESS;\n    } else if (!strcasecmp(flags, \"*\")) {\n        permissions = REDISMODULE_CMD_KEY_UPDATE | REDISMODULE_CMD_KEY_ACCESS;\n    } else if (!strcasecmp(flags, \"~\")) {\n        permissions = 0; /* Requires either read or write */\n    } else {\n        RedisModule_ReplyWithError(ctx, \"INVALID FLAGS\");\n        return REDISMODULE_OK;\n    }\n\n    /* Check that the key can be accessed */\n    RedisModuleString *user_name = RedisModule_GetCurrentUserName(ctx);\n    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(user_name);\n    int ret = RedisModule_ACLCheckKeyPrefixPermissions(user, argv[2], permissions);\n    if (ret != 0) {\n        RedisModule_ReplyWithError(ctx, \"DENIED KEY\");\n        RedisModule_FreeModuleUser(user);\n        RedisModule_FreeString(ctx, user_name);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, \"SET\", \"v\", argv + 3, (size_t)argc - 3);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    RedisModule_FreeModuleUser(user);\n    RedisModule_FreeString(ctx, user_name);\n    return REDISMODULE_OK;\n}\n\n/* A wrap for PUBLISH command with ACL check on the channel. 
*/\nint publish_aclcheck_channel(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Check that the pubsub channel can be accessed */\n    RedisModuleString *user_name = RedisModule_GetCurrentUserName(ctx);\n    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(user_name);\n    int ret = RedisModule_ACLCheckChannelPermissions(user, argv[1], REDISMODULE_CMD_CHANNEL_SUBSCRIBE);\n    if (ret != 0) {\n        RedisModule_ReplyWithError(ctx, \"DENIED CHANNEL\");\n        RedisModule_FreeModuleUser(user);\n        RedisModule_FreeString(ctx, user_name);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, \"PUBLISH\", \"v\", argv + 1, (size_t)argc - 1);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    RedisModule_FreeModuleUser(user);\n    RedisModule_FreeString(ctx, user_name);\n    return REDISMODULE_OK;\n}\n\n/* A wrap for RM_Call that checks first that the command can be executed */\nint rm_call_aclcheck_cmd(RedisModuleCtx *ctx, RedisModuleUser *user, RedisModuleString **argv, int argc) {\n    if (argc < 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Check that the command can be executed */\n    int ret = RedisModule_ACLCheckCommandPermissions(user, argv + 1, argc - 1);\n    if (ret != 0) {\n        RedisModule_ReplyWithError(ctx, \"DENIED CMD\");\n        /* Add entry to ACL log */\n        RedisModule_ACLAddLogEntry(ctx, user, argv[1], REDISMODULE_ACL_LOG_CMD);\n        return REDISMODULE_OK;\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"v\", argv + 2, (size_t)argc - 2);\n    if(!rep){\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    
}else{\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint rm_call_aclcheck_cmd_default_user(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModuleString *user_name = RedisModule_GetCurrentUserName(ctx);\n    RedisModuleUser *user = RedisModule_GetModuleUserFromUserName(user_name);\n\n    int res = rm_call_aclcheck_cmd(ctx, user, argv, argc);\n\n    RedisModule_FreeModuleUser(user);\n    RedisModule_FreeString(ctx, user_name);\n    return res;\n}\n\nint rm_call_aclcheck_cmd_module_user(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    /* Create a user and authenticate */\n    RedisModuleUser *user = RedisModule_CreateModuleUser(\"testuser1\");\n    RedisModule_SetModuleUserACL(user, \"allcommands\");\n    RedisModule_SetModuleUserACL(user, \"allkeys\");\n    RedisModule_SetModuleUserACL(user, \"on\");\n    RedisModule_AuthenticateClientWithUser(ctx, user, NULL, NULL, NULL);\n\n    int res = rm_call_aclcheck_cmd(ctx, user, argv, argc);\n\n    /* Authenticate back to the \"default\" user (so once we free testuser1 we will not be disconnected) */\n    RedisModule_AuthenticateClientWithACLUser(ctx, \"default\", 7, NULL, NULL, NULL);\n    RedisModule_FreeModuleUser(user);\n    return res;\n}\n\nint rm_call_aclcheck_with_errors(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"vEC\", argv + 2, (size_t)argc - 2);\n    RedisModule_ReplyWithCallReply(ctx, rep);\n    RedisModule_FreeCallReply(rep);\n    return REDISMODULE_OK;\n}\n\n/* A wrap for RM_Call that passes the 'C' flag to do ACL check on the command. 
*/\nint rm_call_aclcheck(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"vC\", argv + 2, (size_t)argc - 2);\n    if(!rep) {\n        char err[100];\n        switch (errno) {\n            case EACCES:\n                RedisModule_ReplyWithError(ctx, \"ERR NOPERM\");\n                break;\n            default:\n                snprintf(err, sizeof(err) - 1, \"ERR errno=%d\", errno);\n                RedisModule_ReplyWithError(ctx, err);\n                break;\n        }\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint module_test_acl_category(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint commandBlockCheck(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    int response_ok = 0;\n    int result = RedisModule_CreateCommand(ctx,\"command.that.should.fail\", module_test_acl_category, \"\", 0, 0, 0);\n    response_ok |= (result == REDISMODULE_OK);\n\n    result = RedisModule_AddACLCategory(ctx,\"blockedcategory\");\n    response_ok |= (result == REDISMODULE_OK);\n    \n    RedisModuleCommand *parent = RedisModule_GetCommand(ctx,\"block.commands.outside.onload\");\n    result = RedisModule_SetCommandACLCategories(parent, \"write\");\n    response_ok |= (result == REDISMODULE_OK);\n\n    result = RedisModule_CreateSubcommand(parent,\"subcommand.that.should.fail\",module_test_acl_category,\"\",0,0,0);\n    response_ok |= (result == REDISMODULE_OK);\n    
\n    /* This validates that it's not possible to create commands or add\n     * a new ACL category outside the OnLoad function,\n     * thus it replies with an error if they succeed. */\n    if (response_ok) {\n        RedisModule_ReplyWithError(ctx, \"UNEXPECTEDOK\");\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    }\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n\n    if (RedisModule_Init(ctx,\"aclcheck\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (argc > 1) return RedisModule_WrongArity(ctx);\n    \n    /* When that flag is passed, we try to create too many categories,\n     * and the test expects this to fail. In this case redis returns REDISMODULE_ERR\n     * and sets errno to ENOMEM. */\n    if (argc == 1) {\n        long long fail_flag = 0;\n        RedisModule_StringToLongLong(argv[0], &fail_flag);\n        if (fail_flag) {\n            for (size_t j = 0; j < 45; j++) {\n                RedisModuleString* name =  RedisModule_CreateStringPrintf(ctx, \"customcategory%zu\", j);\n                if (RedisModule_AddACLCategory(ctx, RedisModule_StringPtrLen(name, NULL)) == REDISMODULE_ERR) {\n                    RedisModule_Assert(errno == ENOMEM);\n                    RedisModule_FreeString(ctx, name);\n                    return REDISMODULE_ERR;\n                }\n                RedisModule_FreeString(ctx, name);\n            }\n        }\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.set.check.key\", set_aclcheck_key,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.set.check.prefixkey\", set_aclcheck_prefixkey,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;    \n\n    if (RedisModule_CreateCommand(ctx,\"block.commands.outside.onload\", commandBlockCheck,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    
if (RedisModule_CreateCommand(ctx,\"aclcheck.module.command.aclcategories.write\", module_test_acl_category,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *aclcategories_write = RedisModule_GetCommand(ctx,\"aclcheck.module.command.aclcategories.write\");\n\n    if (RedisModule_SetCommandACLCategories(aclcategories_write, \"write\") == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.module.command.aclcategories.write.function.read.category\", module_test_acl_category,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *read_category = RedisModule_GetCommand(ctx,\"aclcheck.module.command.aclcategories.write.function.read.category\");\n\n    if (RedisModule_SetCommandACLCategories(read_category, \"read\") == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.module.command.aclcategories.read.only.category\", module_test_acl_category,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *read_only_category = RedisModule_GetCommand(ctx,\"aclcheck.module.command.aclcategories.read.only.category\");\n\n    if (RedisModule_SetCommandACLCategories(read_only_category, \"read\") == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.publish.check.channel\", publish_aclcheck_channel,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.rm_call.check.cmd\", rm_call_aclcheck_cmd_default_user,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.rm_call.check.cmd.module.user\", rm_call_aclcheck_cmd_module_user,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.rm_call\", rm_call_aclcheck,\n                             
     \"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"aclcheck.rm_call_with_errors\", rm_call_aclcheck_with_errors,\n                                      \"write\",0,0,0) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    /* This validates that, when module tries to add a category with invalid characters,\n     * redis returns REDISMODULE_ERR and set errno to `EINVAL` */\n    if (RedisModule_AddACLCategory(ctx,\"!nval!dch@r@cter$\") == REDISMODULE_ERR)\n        RedisModule_Assert(errno == EINVAL);\n    else \n        return REDISMODULE_ERR;\n    \n    /* This validates that, when module tries to add a category that already exists,\n     * redis returns REDISMODULE_ERR and set errno to `EBUSY` */\n    if (RedisModule_AddACLCategory(ctx,\"write\") == REDISMODULE_ERR)\n        RedisModule_Assert(errno == EBUSY);\n    else \n        return REDISMODULE_ERR;\n    \n    if (RedisModule_AddACLCategory(ctx,\"foocategory\") == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    \n    if (RedisModule_CreateCommand(ctx,\"aclcheck.module.command.test.add.new.aclcategories\", module_test_acl_category,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *test_add_new_aclcategories = RedisModule_GetCommand(ctx,\"aclcheck.module.command.test.add.new.aclcategories\");\n\n    if (RedisModule_SetCommandACLCategories(test_add_new_aclcategories, \"foocategory\") == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    \n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/atomicslotmigration.c",
    "content": "#include \"redismodule.h\"\n\n#include <stdlib.h>\n#include <memory.h>\n#include <errno.h>\n\n#define MAX_EVENTS 1024\n\n/* Log of cluster events. */\nconst char *clusterEventLog[MAX_EVENTS];\nint numClusterEvents = 0;\n\n/* Log of cluster trim events. */\nconst char *clusterTrimEventLog[MAX_EVENTS];\nint numClusterTrimEvents = 0;\n\n/* Log of last deleted key event. */\nconst char *lastDeletedKeyLog = NULL;\n\n/* Flag to disable trim. */\nint disableTrimFlag = 0;\n\nint replicateModuleCommand = 0;   /* Enable or disable module command replication. */\nRedisModuleString *moduleCommandKeyName = NULL; /* Key name to replicate. */\nRedisModuleString *moduleCommandKeyVal = NULL;  /* Key value to replicate. */\n\n/* Enable or disable module command replication. */\nint replicate_module_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) {\n        RedisModule_ReplyWithError(ctx, \"ERR wrong number of arguments\");\n        return REDISMODULE_OK;\n    }\n\n    long long enable = 0;\n    if (RedisModule_StringToLongLong(argv[1], &enable) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR enable value\");\n        return REDISMODULE_OK;\n    }\n    replicateModuleCommand = (enable != 0);\n\n    /* Set the key name and value to replicate. 
*/\n    if (moduleCommandKeyName) RedisModule_FreeString(ctx, moduleCommandKeyName);\n    if (moduleCommandKeyVal) RedisModule_FreeString(ctx, moduleCommandKeyVal);\n    moduleCommandKeyName = RedisModule_CreateStringFromString(ctx, argv[2]);\n    moduleCommandKeyVal = RedisModule_CreateStringFromString(ctx, argv[3]);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint lpush_and_replicate_crossslot_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    /* LPUSH */\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, \"LPUSH\", \"!ss\", argv[1], argv[2]);\n    RedisModule_Assert(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_ERROR);\n    RedisModule_FreeCallReply(rep);\n\n    /* Replicate cross slot command */\n    int ret = RedisModule_Replicate(ctx, \"MSET\", \"cccccc\", \"key1\", \"val1\", \"key2\", \"val2\", \"key3\", \"val3\");\n    RedisModule_Assert(ret == REDISMODULE_OK);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint testClusterGetLocalSlotRanges(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    static int use_auto_memory = 0;\n    use_auto_memory = !use_auto_memory;\n\n    RedisModuleSlotRangeArray *slots;\n    if (use_auto_memory) {\n        RedisModule_AutoMemory(ctx);\n        slots = RedisModule_ClusterGetLocalSlotRanges(ctx);\n    } else {\n        slots = RedisModule_ClusterGetLocalSlotRanges(NULL);\n    }\n\n    RedisModule_ReplyWithArray(ctx, slots->num_ranges);\n    for (int i = 0; i < slots->num_ranges; i++) {\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModule_ReplyWithLongLong(ctx, slots->ranges[i].start);\n        RedisModule_ReplyWithLongLong(ctx, slots->ranges[i].end);\n    }\n    if (!use_auto_memory)\n        RedisModule_ClusterFreeSlotRanges(NULL, slots);\n    return 
REDISMODULE_OK;\n}\n\n/* Helper function to check if a slot range array contains a given slot. */\nint slotRangeArrayContains(RedisModuleSlotRangeArray *sra, unsigned int slot) {\n    for (int i = 0; i < sra->num_ranges; i++)\n        if (sra->ranges[i].start <= slot && sra->ranges[i].end >= slot)\n            return 1;\n    return 0;\n}\n\n/* Sanity check. */\nint sanity(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_Assert(RedisModule_ClusterCanAccessKeysInSlot(-1) == 0);\n    RedisModule_Assert(RedisModule_ClusterCanAccessKeysInSlot(16384) == 0);\n    RedisModule_Assert(RedisModule_ClusterCanAccessKeysInSlot(100000) == 0);\n\n    /* Call with invalid args. */\n    errno = 0;\n    RedisModule_Assert(RedisModule_ClusterPropagateForSlotMigration(NULL, NULL, NULL) == REDISMODULE_ERR);\n    RedisModule_Assert(errno == EINVAL);\n\n    /* Call with invalid args. */\n    errno = 0;\n    RedisModule_Assert(RedisModule_ClusterPropagateForSlotMigration(ctx, NULL, NULL) == REDISMODULE_ERR);\n    RedisModule_Assert(errno == EINVAL);\n\n    /* Call with invalid args. */\n    errno = 0;\n    RedisModule_Assert(RedisModule_ClusterPropagateForSlotMigration(NULL, \"asm.keyless_cmd\", \"\") == REDISMODULE_ERR);\n    RedisModule_Assert(errno == EINVAL);\n\n    /* Call outside of slot migration. */\n    errno = 0;\n    RedisModule_Assert(RedisModule_ClusterPropagateForSlotMigration(ctx, \"asm.keyless_cmd\", \"\") == REDISMODULE_ERR);\n    RedisModule_Assert(errno == EBADF);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Command to test RM_ClusterCanAccessKeysInSlot(). 
*/\nint testClusterCanAccessKeysInSlot(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argc);\n    long long slot = 0;\n\n    if (RedisModule_StringToLongLong(argv[1],&slot) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid slot\");\n    }\n    RedisModule_ReplyWithLongLong(ctx, RedisModule_ClusterCanAccessKeysInSlot(slot));\n    return REDISMODULE_OK;\n}\n\n/* Generate a string representation of the info struct and subevent.\n   e.g. 'sub: cluster-slot-migration-import-started, task_id: aeBd..., slots: 0-100,200-300' */\nconst char *clusterAsmInfoToString(RedisModuleClusterSlotMigrationInfo *info, uint64_t sub) {\n    char buf[1024] = {0};\n\n    if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_STARTED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-import-started, \");\n    else  if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_FAILED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-import-failed, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_IMPORT_COMPLETED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-import-completed, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_STARTED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-migrate-started, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_FAILED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-migrate-failed, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-migrate-completed, \");\n    else {\n        RedisModule_Assert(0);\n    }\n    snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \"source_node_id:%.40s, destination_node_id:%.40s, \",\n             info->source_node_id, info->destination_node_id);\n    snprintf(buf + strlen(buf), 
sizeof(buf) - strlen(buf), \"task_id:%s, slots:\", info->task_id);\n    for (int i = 0; i < info->slots->num_ranges; i++) {\n        RedisModuleSlotRange *sr = &info->slots->ranges[i];\n        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \"%d-%d\", sr->start, sr->end);\n        if (i != info->slots->num_ranges - 1)\n            snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \",\");\n    }\n    return RedisModule_Strdup(buf);\n}\n\n/* Generate a string representation of the info struct and subevent.\n   e.g. 'sub: cluster-slot-migration-trim-started, task_id: aeBd..., slots:0-100,200-300' */\nconst char *clusterTrimInfoToString(RedisModuleClusterSlotMigrationTrimInfo *info, uint64_t sub) {\n    RedisModule_Assert(info);\n    char buf[1024] = {0};\n\n    if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_BACKGROUND)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-trim-background, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_STARTED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-trim-started, \");\n    else if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_TRIM_COMPLETED)\n        snprintf(buf, sizeof(buf), \"sub: cluster-slot-migration-trim-completed, \");\n    else {\n        RedisModule_Assert(0);\n    }\n    snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \"slots:\");\n    for (int i = 0; i < info->slots->num_ranges; i++) {\n        RedisModuleSlotRange *sr = &info->slots->ranges[i];\n        snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \"%d-%d\", sr->start, sr->end);\n        if (i != info->slots->num_ranges - 1)\n            snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), \",\");\n    }\n    return RedisModule_Strdup(buf);\n}\n\nstatic void testReplicatingOutsideSlotRange(RedisModuleCtx *ctx, RedisModuleClusterSlotMigrationInfo *info) {\n    int slot = 0;\n    while (slot >= 0 && slot <= 16383) {\n        if 
(!slotRangeArrayContains(info->slots, slot)) {\n            break;\n        }\n        slot++;\n    }\n    char buf[128] = {0};\n    const char *prefix = RedisModule_ClusterCanonicalKeyNameInSlot(slot);\n    snprintf(buf, sizeof(buf), \"{%s}%s\", prefix, \"modulekey\");\n    errno = 0;\n    int ret = RedisModule_ClusterPropagateForSlotMigration(ctx, \"SET\", \"cc\", buf, \"value\");\n    RedisModule_Assert(ret == REDISMODULE_ERR);\n    RedisModule_Assert(errno == ERANGE);\n}\n\nstatic void testReplicatingCrossslotCommand(RedisModuleCtx *ctx) {\n    errno = 0;\n    int ret = RedisModule_ClusterPropagateForSlotMigration(ctx, \"MSET\", \"cccccc\", \"key1\", \"val1\", \"key2\", \"val2\", \"key3\", \"val3\");\n    RedisModule_Assert(ret == REDISMODULE_ERR);\n    RedisModule_Assert(errno == ENOTSUP);\n}\n\nstatic void testReplicatingUnknownCommand(RedisModuleCtx *ctx) {\n    errno = 0;\n    int ret = RedisModule_ClusterPropagateForSlotMigration(ctx, \"unknowncommand\", \"\");\n    RedisModule_Assert(ret == REDISMODULE_ERR);\n    RedisModule_Assert(errno == ENOENT);\n}\n\nstatic void testNonFatalScenarios(RedisModuleCtx *ctx, RedisModuleClusterSlotMigrationInfo *info) {\n    testReplicatingOutsideSlotRange(ctx, info);\n    testReplicatingCrossslotCommand(ctx);\n    testReplicatingUnknownCommand(ctx);\n}\n\nint disableTrimCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    disableTrimFlag = 1;\n    /* Only disable when MIGRATE_COMPLETED for simulating recommended usage. 
*/\n    // RedisModule_ClusterDisableTrim(ctx)\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint enableTrimCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    disableTrimFlag = 0;\n    RedisModule_Assert(RedisModule_ClusterEnableTrim(ctx) == REDISMODULE_OK);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint trimInProgressCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    uint64_t flags = RedisModule_GetContextFlags(ctx);\n    RedisModule_ReplyWithLongLong(ctx, !!(flags & REDISMODULE_CTX_FLAGS_TRIM_IN_PROGRESS));\n    return REDISMODULE_OK;\n}\n\nvoid clusterEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    int ret;\n\n    RedisModule_Assert(RedisModule_IsSubEventSupported(e, sub));\n\n    if (e.id == REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION) {\n        RedisModuleClusterSlotMigrationInfo *info = data;\n\n        if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_MODULE_PROPAGATE) {\n            /* Test some non-fatal scenarios. */\n            testNonFatalScenarios(ctx, info);\n\n            if (replicateModuleCommand == 0) return;\n\n            /* Replicate a keyless command. */\n            ret = RedisModule_ClusterPropagateForSlotMigration(ctx, \"asm.keyless_cmd\", \"\");\n            RedisModule_Assert(ret == REDISMODULE_OK);\n\n            /* Propagate configured key and value. */\n            ret = RedisModule_ClusterPropagateForSlotMigration(ctx, \"SET\", \"ss\", moduleCommandKeyName, moduleCommandKeyVal);\n            RedisModule_Assert(ret == REDISMODULE_OK);\n        } else {\n            /* Log the event. 
*/\n            if (numClusterEvents >= MAX_EVENTS) return;\n            clusterEventLog[numClusterEvents++] = clusterAsmInfoToString(info, sub);\n\n            if (sub == REDISMODULE_SUBEVENT_CLUSTER_SLOT_MIGRATION_MIGRATE_COMPLETED) {\n                /* If users ask to disable trim, we disable trim. */\n                if (disableTrimFlag) {\n                    RedisModule_Assert(RedisModule_ClusterDisableTrim(ctx) == REDISMODULE_OK);\n                }\n            }\n        }\n    }\n}\n\nint getPendingTrimKeyCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_ReplyWithError(ctx, \"ERR wrong number of arguments\");\n        return REDISMODULE_ERR;\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1],\n                                REDISMODULE_READ | REDISMODULE_OPEN_KEY_ACCESS_TRIMMED);\n    if (!key) {\n        RedisModule_ReplyWithNull(ctx);\n        return REDISMODULE_OK;\n    }\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_STRING) {\n        RedisModule_ReplyWithError(ctx, \"key is not a string\");\n        return REDISMODULE_ERR;\n    }\n    size_t len;\n    const char *value = RedisModule_StringDMA(key, &len, 0);\n    RedisModule_ReplyWithStringBuffer(ctx, value, len);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nvoid clusterTrimEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n\n    RedisModule_Assert(RedisModule_IsSubEventSupported(e, sub));\n\n    if (e.id == REDISMODULE_EVENT_CLUSTER_SLOT_MIGRATION_TRIM) {\n        /* Log the event. 
*/\n        if (numClusterTrimEvents >= MAX_EVENTS) return;\n        RedisModuleClusterSlotMigrationTrimInfo *info = data;\n        clusterTrimEventLog[numClusterTrimEvents++] = clusterTrimInfoToString(info, sub);\n    }\n}\n\nstatic int keyspaceNotificationTrimmedCallback(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(ctx);\n\n    RedisModule_Assert(type == REDISMODULE_NOTIFY_KEY_TRIMMED);\n    RedisModule_Assert(strcmp(event, \"key_trimmed\") == 0);\n\n    if (numClusterTrimEvents >= MAX_EVENTS) return REDISMODULE_OK;\n\n    /* Log the trimmed key event. */\n    size_t len;\n    const char *key_str = RedisModule_StringPtrLen(key, &len);\n\n    char buf[1024] = {0};\n    snprintf(buf, sizeof(buf), \"keyspace: key_trimmed, key: %s\", key_str);\n\n    clusterTrimEventLog[numClusterTrimEvents++] = RedisModule_Strdup(buf);\n    return REDISMODULE_OK;\n}\n\n/* ASM.PARENT SET key value  (just proxy to Redis SET) */\nstatic int asmParentSet(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n    RedisModuleCallReply *reply = RedisModule_Call(ctx, \"SET\", \"ss\", argv[2], argv[3]);\n    if (!reply) return RedisModule_ReplyWithError(ctx, \"ERR internal\");\n    RedisModule_ReplyWithCallReply(ctx, reply);\n    RedisModule_FreeCallReply(reply);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* Clear both the cluster and trim event logs. 
*/\nint clearEventLog(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    for (int i = 0; i < numClusterEvents; i++)\n        RedisModule_Free((void *)clusterEventLog[i]);\n    numClusterEvents = 0;\n\n    for (int i = 0; i < numClusterTrimEvents; i++)\n        RedisModule_Free((void *)clusterTrimEventLog[i]);\n    numClusterTrimEvents = 0;\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Reply with the cluster event log. */\nint getClusterEventLog(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithArray(ctx, numClusterEvents);\n    for (int i = 0; i < numClusterEvents; i++)\n        RedisModule_ReplyWithStringBuffer(ctx, clusterEventLog[i], strlen(clusterEventLog[i]));\n    return REDISMODULE_OK;\n}\n\n/* Reply with the cluster trim event log. */\nint getClusterTrimEventLog(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithArray(ctx, numClusterTrimEvents);\n    for (int i = 0; i < numClusterTrimEvents; i++)\n        RedisModule_ReplyWithStringBuffer(ctx, clusterTrimEventLog[i], strlen(clusterTrimEventLog[i]));\n    return REDISMODULE_OK;\n}\n\n/* A keyless command to test module command replication. 
*/\nint moduledata = 0;\nint keylessCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    moduledata++;\n    RedisModule_ReplyWithLongLong(ctx, moduledata);\n    return REDISMODULE_OK;\n}\nint readkeylessCmdVal(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithLongLong(ctx, moduledata);\n    return REDISMODULE_OK;\n}\n\nint subscribeTrimmedEvent(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    long long subscribe = 0;\n    if (RedisModule_StringToLongLong(argv[1], &subscribe) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR subscribe value\");\n        return REDISMODULE_OK;\n    }\n\n    if (subscribe) {\n        /* Unsubscribe first to avoid duplicate subscription. 
*/\n        RedisModule_UnsubscribeFromKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_TRIMMED, keyspaceNotificationTrimmedCallback);\n        int ret = RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_TRIMMED, keyspaceNotificationTrimmedCallback);\n        RedisModule_Assert(ret == REDISMODULE_OK);\n    } else {\n        int ret = RedisModule_UnsubscribeFromKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_TRIMMED, keyspaceNotificationTrimmedCallback);\n        RedisModule_Assert(ret == REDISMODULE_OK);\n    }\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nvoid keyEventCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(e);\n\n    if (sub == REDISMODULE_SUBEVENT_KEY_DELETED) {\n        RedisModuleKeyInfoV1 *ei = data;\n        RedisModuleKey *kp = ei->key;\n        RedisModuleString *key = (RedisModuleString *) RedisModule_GetKeyNameFromModuleKey(kp);\n        size_t keylen;\n        const char *keyname = RedisModule_StringPtrLen(key, &keylen);\n\n        /* Verify value can be read. It will be used to verify key's value can\n         * be read in a trim callback. 
*/\n        size_t valuelen = 0;\n        const char *value = \"\";\n        RedisModuleKey *mk = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n        if (RedisModule_KeyType(mk) == REDISMODULE_KEYTYPE_STRING) {\n            value = RedisModule_StringDMA(mk, &valuelen, 0);\n        }\n        RedisModule_CloseKey(mk);\n\n        char buf[1024] = {0};\n        snprintf(buf, sizeof(buf), \"keyevent: key: %.*s, value: %.*s\", (int) keylen, keyname, (int)valuelen, value);\n\n        if (lastDeletedKeyLog) RedisModule_Free((void *)lastDeletedKeyLog);\n        lastDeletedKeyLog = RedisModule_Strdup(buf);\n    }\n}\n\nint getLastDeletedKey(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (lastDeletedKeyLog) {\n        RedisModule_ReplyWithStringBuffer(ctx, lastDeletedKeyLog, strlen(lastDeletedKeyLog));\n    } else {\n        RedisModule_ReplyWithNull(ctx);\n    }\n    return REDISMODULE_OK;\n}\n\nint asmGetCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(ctx);\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    if (key == NULL) {\n        RedisModule_ReplyWithNull(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_Assert(RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_STRING);\n    size_t len;\n    const char *value = RedisModule_StringDMA(key, &len, 0);\n    RedisModule_ReplyWithStringBuffer(ctx, value, len);\n    RedisModule_CloseKey(key);\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"asm\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, 
\"asm.cluster_can_access_keys_in_slot\", testClusterCanAccessKeysInSlot, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.clear_event_log\", clearEventLog, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.get_cluster_event_log\", getClusterEventLog, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.get_cluster_trim_event_log\", getClusterTrimEventLog, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.keyless_cmd\", keylessCmd, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.disable_trim\", disableTrimCmd, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.enable_trim\", enableTrimCmd, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.read_pending_trim_key\", getPendingTrimKeyCmd, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    \n    if (RedisModule_CreateCommand(ctx, \"asm.trim_in_progress\", trimInProgressCmd, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.read_keyless_cmd_val\", readkeylessCmdVal, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.sanity\", sanity, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.subscribe_trimmed_event\", subscribeTrimmedEvent, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.replicate_module_command\", replicate_module_command, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        
return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.lpush_replicate_crossslot_command\", lpush_and_replicate_crossslot_command, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.cluster_get_local_slot_ranges\", testClusterGetLocalSlotRanges, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.get_last_deleted_key\", getLastDeletedKey, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.get\", asmGetCommand, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"asm.parent\", NULL, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *parent = RedisModule_GetCommand(ctx, \"asm.parent\");\n    if (!parent) return REDISMODULE_ERR;\n\n    /* Subcommand: ASM.PARENT SET (write) */\n    if (RedisModule_CreateSubcommand(parent, \"set\", asmParentSet, \"write fast\", 2, 2, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ClusterSlotMigration, clusterEventCallback) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_ClusterSlotMigrationTrim, clusterTrimEventCallback) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_TRIMMED, keyspaceNotificationTrimmedCallback) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key, keyEventCallback) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/auth.c",
    "content": "/* define macros for having usleep */\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE\n\n#include \"redismodule.h\"\n\n#include <string.h>\n#include <unistd.h>\n#include <pthread.h>\n\n#define UNUSED(V) ((void) V)\n\n// A simple global user\nstatic RedisModuleUser *global = NULL;\nstatic long long client_change_delta = 0;\nstatic pthread_t tid;\n\nvoid UserChangedCallback(uint64_t client_id, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(client_id);\n    client_change_delta++;\n}\n\nint Auth_CreateModuleUser(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (global) {\n        RedisModule_FreeModuleUser(global);\n    }\n\n    global = RedisModule_CreateModuleUser(\"global\");\n    RedisModule_SetModuleUserACL(global, \"allcommands\");\n    RedisModule_SetModuleUserACL(global, \"allkeys\");\n    RedisModule_SetModuleUserACL(global, \"on\");\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nint Auth_AuthModuleUser(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    uint64_t client_id;\n    RedisModule_AuthenticateClientWithUser(ctx, global, UserChangedCallback, NULL, &client_id);\n\n    return RedisModule_ReplyWithLongLong(ctx, (uint64_t) client_id);\n}\n\nint Auth_AuthRealUser(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    size_t length;\n    uint64_t client_id;\n\n    RedisModuleString *user_string = argv[1];\n    const char *name = RedisModule_StringPtrLen(user_string, &length);\n\n    if (RedisModule_AuthenticateClientWithACLUser(ctx, name, length, \n            UserChangedCallback, NULL, &client_id) == REDISMODULE_ERR) {\n        return RedisModule_ReplyWithError(ctx, \"Invalid user\");   \n    }\n\n    return RedisModule_ReplyWithLongLong(ctx, (uint64_t) 
client_id);\n}\n\n/* This command redacts every other argument and returns OK */\nint Auth_RedactedAPI(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    for(int i = argc - 1; i > 0; i -= 2) {\n        int result = RedisModule_RedactClientCommandArgument(ctx, i);\n        RedisModule_Assert(result == REDISMODULE_OK);\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\"); \n}\n\nint Auth_ChangeCount(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    long long result = client_change_delta;\n    client_change_delta = 0;\n    return RedisModule_ReplyWithLongLong(ctx, result);\n}\n\n/* The Module functionality below validates that module authentication callbacks can be registered\n * to support both non-blocking and blocking module based authentication. */\n\n/* Non Blocking Module Auth callback / implementation. */\nint auth_cb(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {\n    const char *user = RedisModule_StringPtrLen(username, NULL);\n    const char *pwd = RedisModule_StringPtrLen(password, NULL);\n    if (!strcmp(user,\"foo\") && !strcmp(pwd,\"allow\")) {\n        RedisModule_AuthenticateClientWithACLUser(ctx, \"foo\", 3, NULL, NULL, NULL);\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    else if (!strcmp(user,\"foo\") && !strcmp(pwd,\"deny\")) {\n        RedisModuleString *log = RedisModule_CreateString(ctx, \"Module Auth\", 11);\n        RedisModule_ACLAddLogEntryByUserName(ctx, username, log, REDISMODULE_ACL_LOG_AUTH);\n        RedisModule_FreeString(ctx, log);\n        const char *err_msg = \"Auth denied by Misc Module.\";\n        *err = RedisModule_CreateString(ctx, err_msg, strlen(err_msg));\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    return REDISMODULE_AUTH_NOT_HANDLED;\n}\n\nint test_rm_register_auth_cb(RedisModuleCtx *ctx, RedisModuleString 
**argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_RegisterAuthCallback(ctx, auth_cb);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/*\n * The thread entry point that actually executes the blocking part of the AUTH command.\n * This function sleeps for 0.5 seconds and then unblocks the client which will later call\n * `AuthBlock_Reply`.\n * `arg` is expected to contain the RedisModuleBlockedClient, username, and password.\n */\nvoid *AuthBlock_ThreadMain(void *arg) {\n    usleep(500000);\n    void **targ = arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    int result = 2;\n    const char *user = RedisModule_StringPtrLen(targ[1], NULL);\n    const char *pwd = RedisModule_StringPtrLen(targ[2], NULL);\n    if (!strcmp(user,\"foo\") && !strcmp(pwd,\"block_allow\")) {\n        result = 1;\n    }\n    else if (!strcmp(user,\"foo\") && !strcmp(pwd,\"block_deny\")) {\n        result = 0;\n    }\n    else if (!strcmp(user,\"foo\") && !strcmp(pwd,\"block_abort\")) {\n        RedisModule_BlockedClientMeasureTimeEnd(bc);\n        RedisModule_AbortBlock(bc);\n        goto cleanup;\n    }\n    /* Provide the result to the blocking reply cb. */\n    void **replyarg = RedisModule_Alloc(sizeof(void*));\n    replyarg[0] = (void *) (uintptr_t) result;\n    RedisModule_BlockedClientMeasureTimeEnd(bc);\n    RedisModule_UnblockClient(bc, replyarg);\ncleanup:\n    /* Free the username and password and thread / arg data. */\n    RedisModule_FreeString(NULL, targ[1]);\n    RedisModule_FreeString(NULL, targ[2]);\n    RedisModule_Free(targ);\n    return NULL;\n}\n\n/*\n * Reply callback for a blocking AUTH command. 
This is called when the client is unblocked.\n */\nint AuthBlock_Reply(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(password);\n    void **targ = RedisModule_GetBlockedClientPrivateData(ctx);\n    int result = (uintptr_t) targ[0];\n    size_t userlen = 0;\n    const char *user = RedisModule_StringPtrLen(username, &userlen);\n    /* Handle the success case by authenticating. */\n    if (result == 1) {\n        RedisModule_AuthenticateClientWithACLUser(ctx, user, userlen, NULL, NULL, NULL);\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    /* Handle the Error case by denying auth */\n    else if (result == 0) {\n        RedisModuleString *log = RedisModule_CreateString(ctx, \"Module Auth\", 11);\n        RedisModule_ACLAddLogEntryByUserName(ctx, username, log, REDISMODULE_ACL_LOG_AUTH);\n        RedisModule_FreeString(ctx, log);\n        const char *err_msg = \"Auth denied by Misc Module.\";\n        *err = RedisModule_CreateString(ctx, err_msg, strlen(err_msg));\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    /* \"Skip\" Authentication */\n    return REDISMODULE_AUTH_NOT_HANDLED;\n}\n\n/* Private data freeing callback for Module Auth. */\nvoid AuthBlock_FreeData(RedisModuleCtx *ctx, void *privdata) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModule_Free(privdata);\n}\n\n/* Callback triggered when the engine attempts module auth\n * Return code here is one of the following: Auth succeeded, Auth denied,\n * Auth not handled, Auth blocked.\n * The Module can have auth succeed / denied here itself, but this is an example\n * of blocking module auth.\n */\nint blocking_auth_cb(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(username);\n    REDISMODULE_NOT_USED(password);\n    REDISMODULE_NOT_USED(err);\n    /* Block the client from the Module. 
*/\n    RedisModuleBlockedClient *bc = RedisModule_BlockClientOnAuth(ctx, AuthBlock_Reply, AuthBlock_FreeData);\n    int ctx_flags = RedisModule_GetContextFlags(ctx);\n    if (ctx_flags & REDISMODULE_CTX_FLAGS_MULTI || ctx_flags & REDISMODULE_CTX_FLAGS_LUA) {\n        /* Clean up by using RedisModule_UnblockClient since we attempted blocking the client. */\n        RedisModule_UnblockClient(bc, NULL);\n        return REDISMODULE_AUTH_HANDLED;\n    }\n\n    /* Another blocking auth cb may have spawned a thread, we'll just wait for it\n     * to finish here */\n    if (tid) pthread_join(tid, NULL);\n\n    RedisModule_BlockedClientMeasureTimeStart(bc);\n\n    /* Allocate memory for information needed. */\n    void **targ = RedisModule_Alloc(sizeof(void*)*3);\n    targ[0] = bc;\n    targ[1] = RedisModule_CreateStringFromString(NULL, username);\n    targ[2] = RedisModule_CreateStringFromString(NULL, password);\n\n    /* Create bg thread and pass the blockedclient, username and password to it. */\n    if (pthread_create(&tid, NULL, AuthBlock_ThreadMain, targ) != 0) {\n        RedisModule_AbortBlock(bc);\n\n        /* These are freed in AuthBlock_ThreadMain but since we failed to spawn\n         * the thread need to free them here. */\n        RedisModule_FreeString(NULL, targ[1]);\n        RedisModule_FreeString(NULL, targ[2]);\n        RedisModule_Free(targ);\n    }\n\n    return REDISMODULE_AUTH_HANDLED;\n}\n\nint test_rm_register_blocking_auth_cb(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_RegisterAuthCallback(ctx, blocking_auth_cb);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"testacl\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"auth.authrealuser\",\n        Auth_AuthRealUser,\"no-auth\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"auth.createmoduleuser\",\n        Auth_CreateModuleUser,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"auth.authmoduleuser\",\n        Auth_AuthModuleUser,\"no-auth\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"auth.changecount\",\n        Auth_ChangeCount,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"auth.redact\",\n        Auth_RedactedAPI,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testmoduleone.rm_register_auth_cb\",\n        test_rm_register_auth_cb,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testmoduleone.rm_register_blocking_auth_cb\",\n        test_rm_register_blocking_auth_cb,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n\n    if (tid) pthread_join(tid, NULL);\n\n    if (global)\n        RedisModule_FreeModuleUser(global);\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/basics.c",
    "content": "/* Module designed to test the Redis modules subsystem.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2016-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"redismodule.h\"\n#include <string.h>\n#include <stdlib.h>\n\n/* --------------------------------- Helpers -------------------------------- */\n\n/* Return true if the reply and the C null term string matches. */\nint TestMatchReply(RedisModuleCallReply *reply, char *str) {\n    RedisModuleString *mystr;\n    mystr = RedisModule_CreateStringFromCallReply(reply);\n    if (!mystr) return 0;\n    const char *ptr = RedisModule_StringPtrLen(mystr,NULL);\n    return strcmp(ptr,str) == 0;\n}\n\n/* ------------------------------- Test units ------------------------------- */\n\n/* TEST.CALL -- Test Call() API. 
*/\nint TestCall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"mylist\");\n    RedisModuleString *mystr = RedisModule_CreateString(ctx,\"foo\",3);\n    RedisModule_Call(ctx,\"RPUSH\",\"csl\",\"mylist\",mystr,(long long)1234);\n    reply = RedisModule_Call(ctx,\"LRANGE\",\"ccc\",\"mylist\",\"0\",\"-1\");\n    long long items = RedisModule_CallReplyLength(reply);\n    if (items != 2) goto fail;\n\n    RedisModuleCallReply *item0, *item1;\n\n    item0 = RedisModule_CallReplyArrayElement(reply,0);\n    item1 = RedisModule_CallReplyArrayElement(reply,1);\n    if (!TestMatchReply(item0,\"foo\")) goto fail;\n    if (!TestMatchReply(item1,\"1234\")) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Attribute(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"attrib\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_STRING) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 (it might be a string but it contains an attribute) */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    if (!TestMatchReply(reply,\"Some real reply following the attribute\")) goto fail;\n\n    reply = RedisModule_CallReplyAttribute(reply);\n    if (!reply || RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_ATTRIBUTE) goto fail;\n    /* make sure we can not reply to resp2 client with resp3 attribute */\n    if 
(RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n    if (RedisModule_CallReplyLength(reply) != 1) goto fail;\n\n    RedisModuleCallReply *key, *val;\n    if (RedisModule_CallReplyAttributeElement(reply,0,&key,&val) != REDISMODULE_OK) goto fail;\n    if (!TestMatchReply(key,\"key-popularity\")) goto fail;\n    if (RedisModule_CallReplyType(val) != REDISMODULE_REPLY_ARRAY) goto fail;\n    if (RedisModule_CallReplyLength(val) != 2) goto fail;\n    if (!TestMatchReply(RedisModule_CallReplyArrayElement(val, 0),\"key:123\")) goto fail;\n    if (!TestMatchReply(RedisModule_CallReplyArrayElement(val, 1),\"90\")) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestGetResp(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    int flags = RedisModule_GetContextFlags(ctx);\n\n    if (flags & REDISMODULE_CTX_FLAGS_RESP3) {\n        RedisModule_ReplyWithLongLong(ctx, 3);\n    } else {\n        RedisModule_ReplyWithLongLong(ctx, 2);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint TestCallRespAutoMode(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"myhash\");\n    RedisModule_Call(ctx,\"HSET\",\"ccccc\",\"myhash\", \"f1\", \"v1\", \"f2\", \"v2\");\n    /* 0 stands for auto mode, we will get the reply in the same format as the client */\n    reply = RedisModule_Call(ctx,\"HGETALL\",\"0c\" ,\"myhash\");\n    RedisModule_ReplyWithCallReply(ctx, reply);\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Map(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    
RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"myhash\");\n    RedisModule_Call(ctx,\"HSET\",\"ccccc\",\"myhash\", \"f1\", \"v1\", \"f2\", \"v2\");\n    reply = RedisModule_Call(ctx,\"HGETALL\",\"3c\" ,\"myhash\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_MAP) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 map */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    long long items = RedisModule_CallReplyLength(reply);\n    if (items != 2) goto fail;\n\n    RedisModuleCallReply *key0, *key1;\n    RedisModuleCallReply *val0, *val1;\n    if (RedisModule_CallReplyMapElement(reply,0,&key0,&val0) != REDISMODULE_OK) goto fail;\n    if (RedisModule_CallReplyMapElement(reply,1,&key1,&val1) != REDISMODULE_OK) goto fail;\n    if (!TestMatchReply(key0,\"f1\")) goto fail;\n    if (!TestMatchReply(key1,\"f2\")) goto fail;\n    if (!TestMatchReply(val0,\"v1\")) goto fail;\n    if (!TestMatchReply(val1,\"v2\")) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Bool(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"true\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_BOOL) goto fail;\n    /* make sure we can not reply to resp2 client with resp3 bool */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    if (!RedisModule_CallReplyBool(reply)) goto fail;\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"false\"); /* 3 stands for resp 3 
reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_BOOL) goto fail;\n    if (RedisModule_CallReplyBool(reply)) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Null(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"null\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_NULL) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 null */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallReplyWithNestedReply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"mylist\");\n    RedisModule_Call(ctx,\"RPUSH\",\"ccl\",\"mylist\",\"test\",(long long)1234);\n    reply = RedisModule_Call(ctx,\"LRANGE\",\"ccc\",\"mylist\",\"0\",\"-1\");\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_ARRAY) goto fail;\n    if (RedisModule_CallReplyLength(reply) < 1) goto fail;\n    RedisModuleCallReply *nestedReply = RedisModule_CallReplyArrayElement(reply, 0);\n\n    RedisModule_ReplyWithCallReply(ctx,nestedReply);\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallReplyWithArrayReply(RedisModuleCtx *ctx, RedisModuleString **argv, int 
argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"mylist\");\n    RedisModule_Call(ctx,\"RPUSH\",\"ccl\",\"mylist\",\"test\",(long long)1234);\n    reply = RedisModule_Call(ctx,\"LRANGE\",\"ccc\",\"mylist\",\"0\",\"-1\");\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_ARRAY) goto fail;\n\n    RedisModule_ReplyWithCallReply(ctx,reply);\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Double(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"double\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_DOUBLE) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 double*/\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    double d = RedisModule_CallReplyDouble(reply);\n    /* we compare strings, since comparing doubles directly can fail in various architectures, e.g. 
32bit */\n    char got[30], expected[30];\n    snprintf(got, sizeof(got), \"%.17g\", d);\n    snprintf(expected, sizeof(expected), \"%.17g\", 3.141);\n    if (strcmp(got, expected) != 0) goto fail;\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3BigNumber(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"bignum\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_BIG_NUMBER) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 big number */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    size_t len;\n    const char* big_num = RedisModule_CallReplyBigNumber(reply, &len);\n    RedisModule_ReplyWithStringBuffer(ctx,big_num,len);\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Verbatim(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx,\"DEBUG\",\"3cc\" ,\"PROTOCOL\", \"verbatim\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_VERBATIM_STRING) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 verbatim string */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    const char* format;\n    size_t len;\n    const char* str = RedisModule_CallReplyVerbatim(reply, &len, &format);\n    RedisModuleString *s = 
RedisModule_CreateStringPrintf(ctx, \"%.*s:%.*s\", 3, format, (int)len, str);\n    RedisModule_ReplyWithString(ctx,s);\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\nint TestCallResp3Set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    RedisModule_Call(ctx,\"DEL\",\"c\",\"myset\");\n    RedisModule_Call(ctx,\"sadd\",\"ccc\",\"myset\", \"v1\", \"v2\");\n    reply = RedisModule_Call(ctx,\"smembers\",\"3c\" ,\"myset\"); /* 3 stands for resp 3 reply */\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_SET) goto fail;\n\n    /* make sure we can not reply to resp2 client with resp3 set */\n    if (RedisModule_ReplyWithCallReply(ctx, reply) != REDISMODULE_ERR) goto fail;\n\n    long long items = RedisModule_CallReplyLength(reply);\n    if (items != 2) goto fail;\n\n    RedisModuleCallReply *val0, *val1;\n\n    val0 = RedisModule_CallReplySetElement(reply,0);\n    val1 = RedisModule_CallReplySetElement(reply,1);\n\n    /*\n     * The order of elements in sets is not guaranteed, so we just\n     * verify that each reply matches one of the elements.\n     */\n    if (!TestMatchReply(val0,\"v1\") && !TestMatchReply(val0,\"v2\")) goto fail;\n    if (!TestMatchReply(val1,\"v1\") && !TestMatchReply(val1,\"v2\")) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\"ERR\");\n    return REDISMODULE_OK;\n}\n\n/* TEST.STRING.APPEND -- Test appending to an existing string object. 
*/\nint TestStringAppend(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleString *s = RedisModule_CreateString(ctx,\"foo\",3);\n    RedisModule_StringAppendBuffer(ctx,s,\"bar\",3);\n    RedisModule_ReplyWithString(ctx,s);\n    RedisModule_FreeString(ctx,s);\n    return REDISMODULE_OK;\n}\n\n/* TEST.STRING.APPEND.AM -- Test append with retain when auto memory is on. */\nint TestStringAppendAM(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleString *s = RedisModule_CreateString(ctx,\"foo\",3);\n    RedisModule_RetainString(ctx,s);\n    RedisModule_TrimStringAllocation(s);    /* Mostly NOP, but exercises the API function */\n    RedisModule_StringAppendBuffer(ctx,s,\"bar\",3);\n    RedisModule_ReplyWithString(ctx,s);\n    RedisModule_FreeString(ctx,s);\n    return REDISMODULE_OK;\n}\n\n/* TEST.STRING.TRIM -- Test we trim a string with free space. */\nint TestTrimString(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleString *s = RedisModule_CreateString(ctx,\"foo\",3);\n    char *tmp = RedisModule_Alloc(1024);\n    RedisModule_StringAppendBuffer(ctx,s,tmp,1024);\n    size_t string_len = RedisModule_MallocSizeString(s);\n    RedisModule_TrimStringAllocation(s);\n    size_t len_after_trim = RedisModule_MallocSizeString(s);\n\n    /* Determine if using jemalloc memory allocator. */\n    RedisModuleServerInfoData *info = RedisModule_GetServerInfo(ctx, \"memory\");\n    const char *field = RedisModule_ServerInfoGetFieldC(info, \"mem_allocator\");\n    int use_jemalloc = !strncmp(field, \"jemalloc\", 8);\n\n    /* Jemalloc will reallocate `s` from 2k to 1k after RedisModule_TrimStringAllocation(),\n     * but non-jemalloc memory allocators may keep the old size. 
*/\n    if ((use_jemalloc && len_after_trim < string_len) ||\n        (!use_jemalloc && len_after_trim <= string_len))\n    {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithError(ctx, \"String was not trimmed as expected.\");\n    }\n    RedisModule_FreeServerInfo(ctx, info);\n    RedisModule_Free(tmp);\n    RedisModule_FreeString(ctx,s);\n    return REDISMODULE_OK;\n}\n\n/* TEST.STRING.PRINTF -- Test string formatting. */\nint TestStringPrintf(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    if (argc < 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n    RedisModuleString *s = RedisModule_CreateStringPrintf(ctx,\n        \"Got %d args. argv[1]: %s, argv[2]: %s\",\n        argc,\n        RedisModule_StringPtrLen(argv[1], NULL),\n        RedisModule_StringPtrLen(argv[2], NULL)\n    );\n\n    RedisModule_ReplyWithString(ctx,s);\n\n    return REDISMODULE_OK;\n}\n\nint failTest(RedisModuleCtx *ctx, const char *msg) {\n    RedisModule_ReplyWithError(ctx, msg);\n    return REDISMODULE_ERR;\n}\n\nint TestUnlink(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleKey *k = RedisModule_OpenKey(ctx, RedisModule_CreateStringPrintf(ctx, \"unlinked\"), REDISMODULE_WRITE | REDISMODULE_READ);\n    if (!k) return failTest(ctx, \"Could not create key\");\n\n    if (REDISMODULE_ERR == RedisModule_StringSet(k, RedisModule_CreateStringPrintf(ctx, \"Foobar\"))) {\n        return failTest(ctx, \"Could not set string value\");\n    }\n\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, \"EXISTS\", \"c\", \"unlinked\");\n    if (!rep || RedisModule_CallReplyInteger(rep) != 1) {\n        return failTest(ctx, \"Key does not exist before unlink\");\n    }\n\n    if (REDISMODULE_ERR == RedisModule_UnlinkKey(k)) {\n        return failTest(ctx, \"Could not 
unlink key\");\n    }\n\n    rep = RedisModule_Call(ctx, \"EXISTS\", \"c\", \"unlinked\");\n    if (!rep || RedisModule_CallReplyInteger(rep) != 0) {\n        return failTest(ctx, \"Could not verify key to be unlinked\");\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nint TestNestedCallReplyArrayElement(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleString *expect_key = RedisModule_CreateString(ctx, \"mykey\", strlen(\"mykey\"));\n    RedisModule_SelectDb(ctx, 1);\n    RedisModule_Call(ctx, \"LPUSH\", \"sc\", expect_key, \"myvalue\");\n\n    RedisModuleCallReply *scan_reply = RedisModule_Call(ctx, \"SCAN\", \"l\", (long long)0);\n    RedisModule_Assert(scan_reply != NULL && RedisModule_CallReplyType(scan_reply) == REDISMODULE_REPLY_ARRAY);\n    RedisModule_Assert(RedisModule_CallReplyLength(scan_reply) == 2);\n\n    long long scan_cursor;\n    RedisModuleCallReply *cursor_reply = RedisModule_CallReplyArrayElement(scan_reply, 0);\n    RedisModule_Assert(RedisModule_CallReplyType(cursor_reply) == REDISMODULE_REPLY_STRING);\n    RedisModule_Assert(RedisModule_StringToLongLong(RedisModule_CreateStringFromCallReply(cursor_reply), &scan_cursor) == REDISMODULE_OK);\n    RedisModule_Assert(scan_cursor == 0);\n\n    RedisModuleCallReply *keys_reply = RedisModule_CallReplyArrayElement(scan_reply, 1);\n    RedisModule_Assert(RedisModule_CallReplyType(keys_reply) == REDISMODULE_REPLY_ARRAY);\n    RedisModule_Assert( RedisModule_CallReplyLength(keys_reply) == 1);\n \n    RedisModuleCallReply *key_reply = RedisModule_CallReplyArrayElement(keys_reply, 0);\n    RedisModule_Assert(RedisModule_CallReplyType(key_reply) == REDISMODULE_REPLY_STRING);\n    RedisModuleString *key = RedisModule_CreateStringFromCallReply(key_reply);\n    RedisModule_Assert(RedisModule_StringCompare(key, expect_key) == 0);\n\n    
RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* TEST.STRING.TRUNCATE -- Test truncating an existing string object. */\nint TestStringTruncate(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_Call(ctx, \"SET\", \"cc\", \"foo\", \"abcde\");\n    RedisModuleKey *k = RedisModule_OpenKey(ctx, RedisModule_CreateStringPrintf(ctx, \"foo\"), REDISMODULE_READ | REDISMODULE_WRITE);\n    if (!k) return failTest(ctx, \"Could not create key\");\n\n    size_t len = 0;\n    char* s;\n\n    /* expand from 5 to 8 and check null pad */\n    if (REDISMODULE_ERR == RedisModule_StringTruncate(k, 8)) {\n        return failTest(ctx, \"Could not truncate string value (8)\");\n    }\n    s = RedisModule_StringDMA(k, &len, REDISMODULE_READ);\n    if (!s) {\n        return failTest(ctx, \"Failed to read truncated string (8)\");\n    } else if (len != 8) {\n        return failTest(ctx, \"Failed to expand string value (8)\");\n    } else if (0 != strncmp(s, \"abcde\\0\\0\\0\", 8)) {\n        return failTest(ctx, \"Failed to null pad string value (8)\");\n    }\n\n    /* shrink from 8 to 4 */\n    if (REDISMODULE_ERR == RedisModule_StringTruncate(k, 4)) {\n        return failTest(ctx, \"Could not truncate string value (4)\");\n    }\n    s = RedisModule_StringDMA(k, &len, REDISMODULE_READ);\n    if (!s) {\n        return failTest(ctx, \"Failed to read truncated string (4)\");\n    } else if (len != 4) {\n        return failTest(ctx, \"Failed to shrink string value (4)\");\n    } else if (0 != strncmp(s, \"abcd\", 4)) {\n        return failTest(ctx, \"Failed to truncate string value (4)\");\n    }\n\n    /* shrink to 0 */\n    if (REDISMODULE_ERR == RedisModule_StringTruncate(k, 0)) {\n        return failTest(ctx, \"Could not truncate string value (0)\");\n    }\n    s = RedisModule_StringDMA(k, &len, REDISMODULE_READ);\n    if (!s) 
{\n        return failTest(ctx, \"Failed to read truncated string (0)\");\n    } else if (len != 0) {\n        return failTest(ctx, \"Failed to shrink string value to (0)\");\n    }\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nint NotifyCallback(RedisModuleCtx *ctx, int type, const char *event,\n                   RedisModuleString *key) {\n  RedisModule_AutoMemory(ctx);\n  /* For each key notified we increment a per-key counter in the\n   * \"notifications\" hash */\n  RedisModule_Log(ctx, \"notice\", \"Got event type %d, event %s, key %s\", type,\n                  event, RedisModule_StringPtrLen(key, NULL));\n\n  RedisModule_Call(ctx, \"HINCRBY\", \"csc\", \"notifications\", key, \"1\");\n  return REDISMODULE_OK;\n}\n\n/* TEST.NOTIFICATIONS -- Test Keyspace Notifications. */\nint TestNotifications(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n#define FAIL(msg, ...)                                                                       \\\n    {                                                                                        \\\n        RedisModule_Log(ctx, \"warning\", \"Failed NOTIFY Test. 
Reason: \" #msg, ##__VA_ARGS__); \\\n        goto err;                                                                            \\\n    }\n    RedisModule_Call(ctx, \"FLUSHDB\", \"\");\n\n    RedisModule_Call(ctx, \"SET\", \"cc\", \"foo\", \"bar\");\n    RedisModule_Call(ctx, \"SET\", \"cc\", \"foo\", \"baz\");\n    RedisModule_Call(ctx, \"SADD\", \"cc\", \"bar\", \"x\");\n    RedisModule_Call(ctx, \"SADD\", \"cc\", \"bar\", \"y\");\n\n    RedisModule_Call(ctx, \"HSET\", \"ccc\", \"baz\", \"x\", \"y\");\n    /* LPUSH should be ignored and not increment any counters */\n    RedisModule_Call(ctx, \"LPUSH\", \"cc\", \"l\", \"y\");\n    RedisModule_Call(ctx, \"LPUSH\", \"cc\", \"l\", \"y\");\n\n    /* Miss some keys intentionally so we will get a \"keymiss\" notification. */\n    RedisModule_Call(ctx, \"GET\", \"c\", \"nosuchkey\");\n    RedisModule_Call(ctx, \"SMEMBERS\", \"c\", \"nosuchkey\");\n\n    size_t sz;\n    const char *rep;\n    RedisModuleCallReply *r = RedisModule_Call(ctx, \"HGET\", \"cc\", \"notifications\", \"foo\");\n    if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_STRING) {\n        FAIL(\"Wrong or no reply for foo\");\n    } else {\n        rep = RedisModule_CallReplyStringPtr(r, &sz);\n        if (sz != 1 || *rep != '2') {\n            FAIL(\"Got reply '%s'. expected '2'\", RedisModule_CallReplyStringPtr(r, NULL));\n        }\n    }\n\n    r = RedisModule_Call(ctx, \"HGET\", \"cc\", \"notifications\", \"bar\");\n    if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_STRING) {\n        FAIL(\"Wrong or no reply for bar\");\n    } else {\n        rep = RedisModule_CallReplyStringPtr(r, &sz);\n        if (sz != 1 || *rep != '2') {\n            FAIL(\"Got reply '%s'. 
expected '2'\", rep);\n        }\n    }\n\n    r = RedisModule_Call(ctx, \"HGET\", \"cc\", \"notifications\", \"baz\");\n    if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_STRING) {\n        FAIL(\"Wrong or no reply for baz\");\n    } else {\n        rep = RedisModule_CallReplyStringPtr(r, &sz);\n        if (sz != 1 || *rep != '1') {\n            FAIL(\"Got reply '%.*s'. expected '1'\", (int)sz, rep);\n        }\n    }\n    /* For l we expect nothing since we didn't subscribe to list events */\n    r = RedisModule_Call(ctx, \"HGET\", \"cc\", \"notifications\", \"l\");\n    if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_NULL) {\n        FAIL(\"Wrong reply for l\");\n    }\n\n    r = RedisModule_Call(ctx, \"HGET\", \"cc\", \"notifications\", \"nosuchkey\");\n    if (r == NULL || RedisModule_CallReplyType(r) != REDISMODULE_REPLY_STRING) {\n        FAIL(\"Wrong or no reply for nosuchkey\");\n    } else {\n        rep = RedisModule_CallReplyStringPtr(r, &sz);\n        if (sz != 1 || *rep != '2') {\n            FAIL(\"Got reply '%.*s'. expected '2'\", (int)sz, rep);\n        }\n    }\n\n    RedisModule_Call(ctx, \"FLUSHDB\", \"\");\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\nerr:\n    RedisModule_Call(ctx, \"FLUSHDB\", \"\");\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"ERR\");\n}\n\n/* TEST.CTXFLAGS -- Test GetContextFlags. 
*/\nint TestCtxFlags(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argc);\n    REDISMODULE_NOT_USED(argv);\n\n    RedisModule_AutoMemory(ctx);\n\n    int ok = 1;\n    const char *errString = NULL;\n#undef FAIL\n#define FAIL(msg)        \\\n    {                    \\\n        ok = 0;          \\\n        errString = msg; \\\n        goto end;        \\\n    }\n\n    int flags = RedisModule_GetContextFlags(ctx);\n    if (flags == 0) {\n        FAIL(\"Got no flags\");\n    }\n\n    if (flags & REDISMODULE_CTX_FLAGS_LUA) FAIL(\"Lua flag was set\");\n    if (flags & REDISMODULE_CTX_FLAGS_MULTI) FAIL(\"Multi flag was set\");\n\n    if (flags & REDISMODULE_CTX_FLAGS_AOF) FAIL(\"AOF Flag was set\");\n    /* Enable AOF to test AOF flags */\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"appendonly\", \"yes\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (!(flags & REDISMODULE_CTX_FLAGS_AOF)) FAIL(\"AOF Flag not set after config set\");\n\n    /* Disable RDB saving and test the flag. */\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"save\", \"\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (flags & REDISMODULE_CTX_FLAGS_RDB) FAIL(\"RDB Flag was set\");\n    /* Enable RDB to test RDB flags */\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"save\", \"900 1\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (!(flags & REDISMODULE_CTX_FLAGS_RDB)) FAIL(\"RDB Flag was not set after config set\");\n\n    if (!(flags & REDISMODULE_CTX_FLAGS_MASTER)) FAIL(\"Master flag was not set\");\n    if (flags & REDISMODULE_CTX_FLAGS_SLAVE) FAIL(\"Slave flag was set\");\n    if (flags & REDISMODULE_CTX_FLAGS_READONLY) FAIL(\"Read-only flag was set\");\n    if (flags & REDISMODULE_CTX_FLAGS_CLUSTER) FAIL(\"Cluster flag was set\");\n\n    /* Disable maxmemory and test the flag. (It is implicitly set in 32bit builds.) 
*/\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"maxmemory\", \"0\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (flags & REDISMODULE_CTX_FLAGS_MAXMEMORY) FAIL(\"Maxmemory flag was set\");\n\n    /* Enable maxmemory and test the flag. */\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"maxmemory\", \"100000000\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (!(flags & REDISMODULE_CTX_FLAGS_MAXMEMORY))\n        FAIL(\"Maxmemory flag was not set after config set\");\n\n    if (flags & REDISMODULE_CTX_FLAGS_EVICT) FAIL(\"Eviction flag was set\");\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"maxmemory-policy\", \"allkeys-lru\");\n    flags = RedisModule_GetContextFlags(ctx);\n    if (!(flags & REDISMODULE_CTX_FLAGS_EVICT)) FAIL(\"Eviction flag was not set after config set\");\n\nend:\n    /* Revert config changes */\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"appendonly\", \"no\");\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"save\", \"\");\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"maxmemory\", \"0\");\n    RedisModule_Call(ctx, \"config\", \"ccc\", \"set\", \"maxmemory-policy\", \"noeviction\");\n\n    if (!ok) {\n        RedisModule_Log(ctx, \"warning\", \"Failed CTXFLAGS Test. Reason: %s\", errString);\n        return RedisModule_ReplyWithSimpleString(ctx, \"ERR\");\n    }\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* ----------------------------- Test framework ----------------------------- */\n\n/* Return 1 if the reply matches the specified string, otherwise log errors\n * in the server log and return 0. 
*/\nint TestAssertErrorReply(RedisModuleCtx *ctx, RedisModuleCallReply *reply, char *str, size_t len) {\n    RedisModuleString *mystr, *expected;\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_ERROR) {\n        return 0;\n    }\n\n    mystr = RedisModule_CreateStringFromCallReply(reply);\n    expected = RedisModule_CreateString(ctx,str,len);\n    if (RedisModule_StringCompare(mystr,expected) != 0) {\n        const char *mystr_ptr = RedisModule_StringPtrLen(mystr,NULL);\n        const char *expected_ptr = RedisModule_StringPtrLen(expected,NULL);\n        RedisModule_Log(ctx,\"warning\",\n            \"Unexpected error reply '%s' (instead of '%s')\",\n            mystr_ptr, expected_ptr);\n        return 0;\n    }\n    return 1;\n}\n\nint TestAssertStringReply(RedisModuleCtx *ctx, RedisModuleCallReply *reply, char *str, size_t len) {\n    RedisModuleString *mystr, *expected;\n\n    if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_ERROR) {\n        RedisModule_Log(ctx,\"warning\",\"Test error reply: %s\",\n            RedisModule_CallReplyStringPtr(reply, NULL));\n        return 0;\n    } else if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_STRING) {\n        RedisModule_Log(ctx,\"warning\",\"Unexpected reply type %d\",\n            RedisModule_CallReplyType(reply));\n        return 0;\n    }\n    mystr = RedisModule_CreateStringFromCallReply(reply);\n    expected = RedisModule_CreateString(ctx,str,len);\n    if (RedisModule_StringCompare(mystr,expected) != 0) {\n        const char *mystr_ptr = RedisModule_StringPtrLen(mystr,NULL);\n        const char *expected_ptr = RedisModule_StringPtrLen(expected,NULL);\n        RedisModule_Log(ctx,\"warning\",\n            \"Unexpected string reply '%s' (instead of '%s')\",\n            mystr_ptr, expected_ptr);\n        return 0;\n    }\n    return 1;\n}\n\n/* Return 1 if the reply matches the specified integer, otherwise log errors\n * in the server log and return 0. 
*/\nint TestAssertIntegerReply(RedisModuleCtx *ctx, RedisModuleCallReply *reply, long long expected) {\n    if (RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_ERROR) {\n        RedisModule_Log(ctx,\"warning\",\"Test error reply: %s\",\n            RedisModule_CallReplyStringPtr(reply, NULL));\n        return 0;\n    } else if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_INTEGER) {\n        RedisModule_Log(ctx,\"warning\",\"Unexpected reply type %d\",\n            RedisModule_CallReplyType(reply));\n        return 0;\n    }\n    long long val = RedisModule_CallReplyInteger(reply);\n    if (val != expected) {\n        RedisModule_Log(ctx,\"warning\",\n            \"Unexpected integer reply '%lld' (instead of '%lld')\",\n            val, expected);\n        return 0;\n    }\n    return 1;\n}\n\n/* Replies \"yes\" if the context may execute debug commands, \"no\" otherwise */\nint TestCanDebug(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    int flags = RedisModule_GetContextFlags(ctx);\n    int allFlags = RedisModule_GetContextFlagsAll();\n    if ((allFlags & REDISMODULE_CTX_FLAGS_DEBUG_ENABLED) &&\n        (flags & REDISMODULE_CTX_FLAGS_DEBUG_ENABLED)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"yes\");\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"no\");\n    }\n    return REDISMODULE_OK;\n}\n\n#define T(name,...) \\\n    do { \\\n        RedisModule_Log(ctx,\"warning\",\"Testing %s\", name); \\\n        reply = RedisModule_Call(ctx,name,__VA_ARGS__); \\\n    } while (0)\n\n/* TEST.BASICS -- Run all the tests.\n * Note: it is useful to run these tests from the module rather than TCL,\n * since this way it's easier to check the reply types and to make a distinction\n * between 0 and \"0\", etc. 
*/\nint TestBasics(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleCallReply *reply;\n\n    /* Make sure the DB is empty before to proceed. */\n    T(\"dbsize\",\"\");\n    if (!TestAssertIntegerReply(ctx,reply,0)) goto fail;\n\n    T(\"ping\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"PONG\",4)) goto fail;\n\n    T(\"test.call\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callresp3map\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callresp3set\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callresp3double\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callresp3bool\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callresp3null\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callreplywithnestedreply\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"test\",4)) goto fail;\n\n    T(\"test.callreplywithbignumberreply\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"1234567999999999999999999999999999999\",37)) goto fail;\n\n    T(\"test.callreplywithverbatimstringreply\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"txt:This is a verbatim\\nstring\",29)) goto fail;\n\n    T(\"test.ctxflags\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.string.append\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"foobar\",6)) goto fail;\n\n    T(\"test.string.truncate\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.unlink\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.nestedcallreplyarray\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    
T(\"test.string.append.am\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"foobar\",6)) goto fail;\n    \n    T(\"test.string.trim\",\"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.string.printf\", \"cc\", \"foo\", \"bar\");\n    if (!TestAssertStringReply(ctx,reply,\"Got 3 args. argv[1]: foo, argv[2]: bar\",38)) goto fail;\n\n    T(\"test.notify\", \"\");\n    if (!TestAssertStringReply(ctx,reply,\"OK\",2)) goto fail;\n\n    T(\"test.callreplywitharrayreply\", \"\");\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_ARRAY) goto fail;\n    if (RedisModule_CallReplyLength(reply) != 2) goto fail;\n    if (!TestAssertStringReply(ctx,RedisModule_CallReplyArrayElement(reply, 0),\"test\",4)) goto fail;\n    if (!TestAssertStringReply(ctx,RedisModule_CallReplyArrayElement(reply, 1),\"1234\",4)) goto fail;\n\n    T(\"foo\", \"E\");\n    if (!TestAssertErrorReply(ctx,reply,\"ERR unknown command 'foo'\",25)) goto fail;\n\n    T(\"set\", \"Ec\", \"x\");\n    if (!TestAssertErrorReply(ctx,reply,\"ERR wrong number of arguments for 'set' command\",47)) goto fail;\n\n    T(\"shutdown\", \"SE\");\n    if (!TestAssertErrorReply(ctx,reply,\"ERR command 'shutdown' is not allowed on script mode\",52)) goto fail;\n\n    T(\"set\", \"WEcc\", \"x\", \"1\");\n    if (!TestAssertErrorReply(ctx,reply,\"ERR Write command 'set' was called while write is not allowed.\",62)) goto fail;\n\n    RedisModule_ReplyWithSimpleString(ctx,\"ALL TESTS PASSED\");\n    return REDISMODULE_OK;\n\nfail:\n    RedisModule_ReplyWithSimpleString(ctx,\n        \"SOME TEST DID NOT PASS! 
Check server logs\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"test\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Perform RM_Call inside the RedisModule_OnLoad\n     * to verify that it works as expected without crashing.\n     * The tests will verify it on different configuration\n     * options (cluster/no cluster). A simple ping command\n     * is enough for this test. */\n    RedisModuleCallReply *reply = RedisModule_Call(ctx, \"ping\", \"\");\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_STRING) {\n        RedisModule_FreeCallReply(reply);\n        return REDISMODULE_ERR;\n    }\n    size_t len;\n    const char *reply_str = RedisModule_CallReplyStringPtr(reply, &len);\n    if (len != 4) {\n        RedisModule_FreeCallReply(reply);\n        return REDISMODULE_ERR;\n    }\n    if (memcmp(reply_str, \"PONG\", 4) != 0) {\n        RedisModule_FreeCallReply(reply);\n        return REDISMODULE_ERR;\n    }\n    RedisModule_FreeCallReply(reply);\n\n    if (RedisModule_CreateCommand(ctx,\"test.call\",\n        TestCall,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3map\",\n        TestCallResp3Map,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3attribute\",\n        TestCallResp3Attribute,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3set\",\n        TestCallResp3Set,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3double\",\n        TestCallResp3Double,\"write deny-oom\",1,1,1) == 
REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3bool\",\n        TestCallResp3Bool,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callresp3null\",\n        TestCallResp3Null,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callreplywitharrayreply\",\n        TestCallReplyWithArrayReply,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callreplywithnestedreply\",\n        TestCallReplyWithNestedReply,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callreplywithbignumberreply\",\n        TestCallResp3BigNumber,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.callreplywithverbatimstringreply\",\n        TestCallResp3Verbatim,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.string.append\",\n        TestStringAppend,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.string.trim\",\n        TestTrimString,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.string.append.am\",\n        TestStringAppendAM,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.string.truncate\",\n        TestStringTruncate,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.string.printf\",\n        TestStringPrintf,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return 
REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.ctxflags\",\n        TestCtxFlags,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.unlink\",\n        TestUnlink,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.nestedcallreplyarray\",\n        TestNestedCallReplyArrayElement,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.basics\",\n        TestBasics,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* the following commands are used by an external test and should not be added to TestBasics */\n    if (RedisModule_CreateCommand(ctx,\"test.rmcallautomode\",\n        TestCallRespAutoMode,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.getresp\",\n        TestGetResp,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.candebug\",\n        TestCanDebug,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModule_SubscribeToKeyspaceEvents(ctx,\n                                            REDISMODULE_NOTIFY_HASH |\n                                            REDISMODULE_NOTIFY_SET |\n                                            REDISMODULE_NOTIFY_STRING |\n                                            REDISMODULE_NOTIFY_KEY_MISS,\n                                        NotifyCallback);\n    if (RedisModule_CreateCommand(ctx,\"test.notify\",\n        TestNotifications,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/blockedclient.c",
"content": "/* define macros for having usleep */\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE\n#include <unistd.h>\n\n#include \"redismodule.h\"\n#include <assert.h>\n#include <stdio.h>\n#include <pthread.h>\n#include <strings.h>\n\n#define UNUSED(V) ((void) V)\n\n/* used to test processing events during slow bg operation */\nstatic volatile int g_slow_bg_operation = 0;\nstatic volatile int g_is_in_slow_bg_operation = 0;\n\nvoid *sub_worker(void *arg) {\n    // Get Redis module context\n    RedisModuleCtx *ctx = (RedisModuleCtx *)arg;\n\n    // Try acquiring GIL\n    int res = RedisModule_ThreadSafeContextTryLock(ctx);\n\n    // The GIL is already taken by the calling thread, so the try-lock is expected to fail.\n    assert(res != REDISMODULE_OK);\n\n    return NULL;\n}\n\nvoid *worker(void *arg) {\n    // Retrieve blocked client\n    RedisModuleBlockedClient *bc = (RedisModuleBlockedClient *)arg;\n\n    // Get Redis module context\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);\n\n    // Acquire GIL\n    RedisModule_ThreadSafeContextLock(ctx);\n\n    // Create another thread which will try to acquire the GIL\n    pthread_t tid;\n    int res = pthread_create(&tid, NULL, sub_worker, ctx);\n    assert(res == 0);\n\n    // Wait for thread\n    pthread_join(tid, NULL);\n\n    // Release GIL\n    RedisModule_ThreadSafeContextUnlock(ctx);\n\n    // Reply to client\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n    // Unblock client\n    RedisModule_UnblockClient(bc, NULL);\n\n    // Free the Redis module context\n    RedisModule_FreeThreadSafeContext(ctx);\n\n    return NULL;\n}\n\nint acquire_gil(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    UNUSED(argv);\n    UNUSED(argc);\n\n    int flags = RedisModule_GetContextFlags(ctx);\n    int allFlags = RedisModule_GetContextFlagsAll();\n    if ((allFlags & REDISMODULE_CTX_FLAGS_MULTI) &&\n        (flags & REDISMODULE_CTX_FLAGS_MULTI)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked 
client is not supported inside multi\");\n        return REDISMODULE_OK;\n    }\n\n    if ((allFlags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING) &&\n        (flags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked client is not allowed\");\n        return REDISMODULE_OK;\n    }\n\n    /* This command handler tries to acquire the GIL twice\n     * once in the worker thread using \"RedisModule_ThreadSafeContextLock\"\n     * second in the sub-worker thread\n     * using \"RedisModule_ThreadSafeContextTryLock\"\n     * as the GIL is already locked. */\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n\n    pthread_t tid;\n    int res = pthread_create(&tid, NULL, worker, bc);\n    assert(res == 0);\n    pthread_detach(tid);\n\n    return REDISMODULE_OK;\n}\n\ntypedef struct {\n    RedisModuleString **argv;\n    int argc;\n    RedisModuleBlockedClient *bc;\n} bg_call_data;\n\nvoid *bg_call_worker(void *arg) {\n    bg_call_data *bg = arg;\n    RedisModuleBlockedClient *bc = bg->bc;\n\n    // Get Redis module context\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bg->bc);\n\n    // Acquire GIL\n    RedisModule_ThreadSafeContextLock(ctx);\n\n    // Test slow operation yielding\n    if (g_slow_bg_operation) {\n        g_is_in_slow_bg_operation = 1;\n        while (g_slow_bg_operation) {\n            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS, \"Slow module operation\");\n            usleep(1000);\n        }\n        g_is_in_slow_bg_operation = 0;\n    }\n\n    // Call the command\n    const char *module_cmd = RedisModule_StringPtrLen(bg->argv[0], NULL);\n    int cmd_pos = 1;\n    RedisModuleString *format_redis_str = RedisModule_CreateString(NULL, \"v\", 1);\n    if (!strcasecmp(module_cmd, \"do_bg_rm_call_format\")) {\n        cmd_pos = 2;\n        size_t format_len;\n        const char *format = RedisModule_StringPtrLen(bg->argv[1], &format_len);\n        
RedisModule_StringAppendBuffer(NULL, format_redis_str, format, format_len);\n        RedisModule_StringAppendBuffer(NULL, format_redis_str, \"E\", 1);\n    }\n    const char *format = RedisModule_StringPtrLen(format_redis_str, NULL);\n    const char *cmd = RedisModule_StringPtrLen(bg->argv[cmd_pos], NULL);\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, cmd, format, bg->argv + cmd_pos + 1, (size_t)bg->argc - cmd_pos - 1);\n    RedisModule_FreeString(NULL, format_redis_str);\n\n    /* Free the arguments within GIL to prevent simultaneous freeing in main thread. */\n    for (int i=0; i<bg->argc; i++)\n        RedisModule_FreeString(ctx, bg->argv[i]);\n    RedisModule_Free(bg->argv);\n    RedisModule_Free(bg);\n\n    // Release GIL\n    RedisModule_ThreadSafeContextUnlock(ctx);\n\n    // Reply to client\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    // Unblock client\n    RedisModule_UnblockClient(bc, NULL);\n\n    // Free the Redis module context\n    RedisModule_FreeThreadSafeContext(ctx);\n\n    return NULL;\n}\n\nint do_bg_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    UNUSED(argv);\n    UNUSED(argc);\n\n    /* Make sure we're not trying to block a client when we shouldn't */\n    int flags = RedisModule_GetContextFlags(ctx);\n    int allFlags = RedisModule_GetContextFlagsAll();\n    if ((allFlags & REDISMODULE_CTX_FLAGS_MULTI) &&\n        (flags & REDISMODULE_CTX_FLAGS_MULTI)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked client is not supported inside multi\");\n        return REDISMODULE_OK;\n    }\n    if ((allFlags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING) &&\n        (flags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked client is not allowed\");\n        return REDISMODULE_OK;\n    }\n\n    /* Make a copy of 
the arguments and pass them to the thread. */\n    bg_call_data *bg = RedisModule_Alloc(sizeof(bg_call_data));\n    bg->argv = RedisModule_Alloc(sizeof(RedisModuleString*)*argc);\n    bg->argc = argc;\n    for (int i=0; i<argc; i++)\n        bg->argv[i] = RedisModule_HoldString(ctx, argv[i]);\n\n    /* Block the client */\n    bg->bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n\n    /* Start a thread to handle the request */\n    pthread_t tid;\n    int res = pthread_create(&tid, NULL, bg_call_worker, bg);\n    assert(res == 0);\n    pthread_detach(tid);\n\n    return REDISMODULE_OK;\n}\n\nint do_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"Ev\", argv + 2, (size_t)argc - 2);\n    if(!rep){\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    }else{\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\nstatic void rm_call_async_send_reply(RedisModuleCtx *ctx, RedisModuleCallReply *reply) {\n    RedisModule_ReplyWithCallReply(ctx, reply);\n    RedisModule_FreeCallReply(reply);\n}\n\n/* Called when the command that was blocked on 'RM_Call' gets unblocked\n * and send the reply to the blocked client. 
*/\nstatic void rm_call_async_on_unblocked(RedisModuleCtx *ctx, RedisModuleCallReply *reply, void *private_data) {\n    UNUSED(ctx);\n    RedisModuleBlockedClient *bc = private_data;\n    RedisModuleCtx *bctx = RedisModule_GetThreadSafeContext(bc);\n    rm_call_async_send_reply(bctx, reply);\n    RedisModule_FreeThreadSafeContext(bctx);\n    RedisModule_UnblockClient(bc, RedisModule_BlockClientGetPrivateData(bc));\n}\n\nint do_rm_call_async_fire_and_forget(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"!KEv\", argv + 2, (size_t)argc - 2);\n\n    if(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_PROMISE) {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked\");\n    }\n    RedisModule_FreeCallReply(rep);\n\n    return REDISMODULE_OK;\n}\n\nstatic void do_rm_call_async_free_pd(RedisModuleCtx * ctx, void *pd) {\n    UNUSED(ctx);\n    RedisModule_FreeCallReply(pd);\n}\n\nstatic void do_rm_call_async_disconnect(RedisModuleCtx *ctx, struct RedisModuleBlockedClient *bc) {\n    UNUSED(ctx);\n    RedisModuleCallReply* rep = RedisModule_BlockClientGetPrivateData(bc);\n    RedisModule_CallReplyPromiseAbort(rep, NULL);\n    RedisModule_FreeCallReply(rep);\n    RedisModule_AbortBlock(bc);\n}\n\n/*\n * Callback for do_rm_call_async / do_rm_call_async_script_mode\n * Gets the command to invoke as the first argument to the command and runs it,\n * passing the rest of the arguments to the command invocation.\n * If the command got blocked, blocks the client and unblocks it when the command gets unblocked.\n * This allows checking the K (allow blocking) argument to RM_Call.\n */\nint do_rm_call_async(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    
UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t format_len = 0;\n    char format[6] = {0};\n\n    if (!(RedisModule_GetContextFlags(ctx) & REDISMODULE_CTX_FLAGS_DENY_BLOCKING)) {\n        /* We are allowed to block the client so we can allow RM_Call to also block us */\n        format[format_len++] = 'K';\n    }\n\n    const char* invoked_cmd = RedisModule_StringPtrLen(argv[0], NULL);\n    if (strcasecmp(invoked_cmd, \"do_rm_call_async_script_mode\") == 0) {\n        format[format_len++] = 'S';\n    }\n\n    format[format_len++] = 'E';\n    format[format_len++] = 'v';\n    if (strcasecmp(invoked_cmd, \"do_rm_call_async_no_replicate\") != 0) {\n        /* Notice, without the '!' flag we will have inconsistency between master and replica.\n         * This is used only to check '!' flag correctness on blocked commands. */\n        format[format_len++] = '!';\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, format, argv + 2, (size_t)argc - 2);\n\n    if(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_PROMISE) {\n        rm_call_async_send_reply(ctx, rep);\n    } else {\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, do_rm_call_async_free_pd, 0);\n        RedisModule_SetDisconnectCallback(bc, do_rm_call_async_disconnect);\n        RedisModule_BlockClientSetPrivateData(bc, rep);\n        RedisModule_CallReplyPromiseSetUnblockHandler(rep, rm_call_async_on_unblocked, bc);\n    }\n\n    return REDISMODULE_OK;\n}\n\ntypedef struct ThreadedAsyncRMCallCtx{\n    RedisModuleBlockedClient *bc;\n    RedisModuleCallReply *reply;\n} ThreadedAsyncRMCallCtx;\n\nvoid *send_async_reply(void *arg) {\n    ThreadedAsyncRMCallCtx *ta_rm_call_ctx = arg;\n    rm_call_async_on_unblocked(NULL, ta_rm_call_ctx->reply, ta_rm_call_ctx->bc);\n    RedisModule_Free(ta_rm_call_ctx);\n    return NULL;\n}\n\n/* 
Called when the command that was blocked on 'RM_Call' gets unblocked\n * and schedules a thread to send the reply to the blocked client. */\nstatic void rm_call_async_reply_on_thread(RedisModuleCtx *ctx, RedisModuleCallReply *reply, void *private_data) {\n    UNUSED(ctx);\n    ThreadedAsyncRMCallCtx *ta_rm_call_ctx = RedisModule_Alloc(sizeof(*ta_rm_call_ctx));\n    ta_rm_call_ctx->bc = private_data;\n    ta_rm_call_ctx->reply = reply;\n    pthread_t tid;\n    int res = pthread_create(&tid, NULL, send_async_reply, ta_rm_call_ctx);\n    assert(res == 0);\n    pthread_detach(tid);\n}\n\n/*\n * Callback for do_rm_call_async_on_thread.\n * Gets the command to invoke as the first argument to the command and runs it,\n * passing the rest of the arguments to the command invocation.\n * If the command got blocked, blocks the client and unblocks it on a background thread.\n * This allows checking the K (allow blocking) argument to RM_Call, and makes sure that the reply\n * passed to the unblock handler is owned by the handler and is not attached to any\n * context that might be freed after the callback ends.\n */\nint do_rm_call_async_on_thread(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"KEv\", argv + 2, (size_t)argc - 2);\n\n    if(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_PROMISE) {\n        rm_call_async_send_reply(ctx, rep);\n    } else {\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n        RedisModule_CallReplyPromiseSetUnblockHandler(rep, rm_call_async_reply_on_thread, bc);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Private data for wait_and_do_rm_call_async that holds information about:\n * 1. 
the block client, to unblock when done.\n * 2. the arguments, contains the command to run using RM_Call */\ntypedef struct WaitAndDoRMCallCtx {\n    RedisModuleBlockedClient *bc;\n    RedisModuleString **argv;\n    int argc;\n} WaitAndDoRMCallCtx;\n\n/*\n * This callback will be called when the 'wait' command invoke on 'wait_and_do_rm_call_async' will finish.\n * This callback will continue the execution flow just like 'do_rm_call_async' command.\n */\nstatic void wait_and_do_rm_call_async_on_unblocked(RedisModuleCtx *ctx, RedisModuleCallReply *reply, void *private_data) {\n    WaitAndDoRMCallCtx *wctx = private_data;\n    if (RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_INTEGER) {\n        goto done;\n    }\n\n    if (RedisModule_CallReplyInteger(reply) != 1) {\n        goto done;\n    }\n\n    RedisModule_FreeCallReply(reply);\n    reply = NULL;\n\n    const char* cmd = RedisModule_StringPtrLen(wctx->argv[0], NULL);\n    reply = RedisModule_Call(ctx, cmd, \"!EKv\", wctx->argv + 1, (size_t)wctx->argc - 1);\n\ndone:\n    if(RedisModule_CallReplyType(reply) != REDISMODULE_REPLY_PROMISE) {\n        RedisModuleCtx *bctx = RedisModule_GetThreadSafeContext(wctx->bc);\n        rm_call_async_send_reply(bctx, reply);\n        RedisModule_FreeThreadSafeContext(bctx);\n        RedisModule_UnblockClient(wctx->bc, NULL);\n    } else {\n        RedisModule_CallReplyPromiseSetUnblockHandler(reply, rm_call_async_on_unblocked, wctx->bc);\n        RedisModule_FreeCallReply(reply);\n    }\n    for (int i = 0 ; i < wctx->argc ; ++i) {\n        RedisModule_FreeString(NULL, wctx->argv[i]);\n    }\n    RedisModule_Free(wctx->argv);\n    RedisModule_Free(wctx);\n}\n\n/*\n * Callback for wait_and_do_rm_call\n * Gets the command to invoke as the first argument, runs 'wait'\n * command (using the K flag to RM_Call). 
Once the wait finished, runs the\n * command that was given (just like 'do_rm_call_async').\n */\nint wait_and_do_rm_call_async(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    int flags = RedisModule_GetContextFlags(ctx);\n    if (flags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING) {\n        return RedisModule_ReplyWithError(ctx, \"Err can not run wait, blocking is not allowed.\");\n    }\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"wait\", \"!EKcc\", \"1\", \"0\");\n    if(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_PROMISE) {\n        rm_call_async_send_reply(ctx, rep);\n    } else {\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n        WaitAndDoRMCallCtx *wctx = RedisModule_Alloc(sizeof(*wctx));\n        *wctx = (WaitAndDoRMCallCtx){\n                .bc = bc,\n                .argv = RedisModule_Alloc((argc - 1) * sizeof(RedisModuleString*)),\n                .argc = argc - 1,\n        };\n\n        for (int i = 1 ; i < argc ; ++i) {\n            wctx->argv[i - 1] = RedisModule_HoldString(NULL, argv[i]);\n        }\n        RedisModule_CallReplyPromiseSetUnblockHandler(rep, wait_and_do_rm_call_async_on_unblocked, wctx);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\nstatic void blpop_and_set_multiple_keys_on_unblocked(RedisModuleCtx *ctx, RedisModuleCallReply *reply, void *private_data) {\n    /* ignore the reply */\n    RedisModule_FreeCallReply(reply);\n    WaitAndDoRMCallCtx *wctx = private_data;\n    for (int i = 0 ; i < wctx->argc ; i += 2) {\n        RedisModuleCallReply* rep = RedisModule_Call(ctx, \"set\", \"!ss\", wctx->argv[i], wctx->argv[i + 1]);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    RedisModuleCtx *bctx = RedisModule_GetThreadSafeContext(wctx->bc);\n    RedisModule_ReplyWithSimpleString(bctx, \"OK\");\n    
RedisModule_FreeThreadSafeContext(bctx);\n    RedisModule_UnblockClient(wctx->bc, NULL);\n\n    for (int i = 0 ; i < wctx->argc ; ++i) {\n        RedisModule_FreeString(NULL, wctx->argv[i]);\n    }\n    RedisModule_Free(wctx->argv);\n    RedisModule_Free(wctx);\n\n}\n\n/*\n * Performs a blpop command on a given list and, when unblocked, sets multiple string keys.\n * This command allows checking that the unblock callback is performed as a unit\n * and that its effects are replicated to the replica and AOF wrapped in MULTI/EXEC.\n */\nint blpop_and_set_multiple_keys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if(argc < 2 || argc % 2 != 0){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    int flags = RedisModule_GetContextFlags(ctx);\n    if (flags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING) {\n        return RedisModule_ReplyWithError(ctx, \"Err can not run wait, blocking is not allowed.\");\n    }\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"blpop\", \"!EKsc\", argv[1], \"0\");\n    if(RedisModule_CallReplyType(rep) != REDISMODULE_REPLY_PROMISE) {\n        rm_call_async_send_reply(ctx, rep);\n    } else {\n        RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n        WaitAndDoRMCallCtx *wctx = RedisModule_Alloc(sizeof(*wctx));\n        *wctx = (WaitAndDoRMCallCtx){\n                .bc = bc,\n                .argv = RedisModule_Alloc((argc - 2) * sizeof(RedisModuleString*)),\n                .argc = argc - 2,\n        };\n\n        for (int i = 0 ; i < argc - 2 ; ++i) {\n            wctx->argv[i] = RedisModule_HoldString(NULL, argv[i + 2]);\n        }\n        RedisModule_CallReplyPromiseSetUnblockHandler(rep, blpop_and_set_multiple_keys_on_unblocked, wctx);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* simulate a blocked client replying to a thread safe context without creating a thread */\nint 
do_fake_bg_true(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n    RedisModuleCtx *bctx = RedisModule_GetThreadSafeContext(bc);\n\n    RedisModule_ReplyWithBool(bctx, 1);\n\n    RedisModule_FreeThreadSafeContext(bctx);\n    RedisModule_UnblockClient(bc, NULL);\n\n    return REDISMODULE_OK;\n}\n\n\n/* This flag is used to work with busy commands that might take a while,\n * providing the ability to stop the busy work with a different command. */\nstatic volatile int abort_flag = 0;\n\nint slow_fg_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    long long block_time = 0;\n    if (RedisModule_StringToLongLong(argv[1], &block_time) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return REDISMODULE_OK;\n    }\n\n    uint64_t start_time = RedisModule_MonotonicMicroseconds();\n    /* when not blocking indefinitely, we don't process client commands in this test. */\n    int yield_flags = block_time? 
REDISMODULE_YIELD_FLAG_NONE: REDISMODULE_YIELD_FLAG_CLIENTS;\n    while (!abort_flag) {\n        RedisModule_Yield(ctx, yield_flags, \"Slow module operation\");\n        usleep(1000);\n        if (block_time && RedisModule_MonotonicMicroseconds() - start_time > (uint64_t)block_time)\n            break;\n    }\n\n    abort_flag = 0;\n    RedisModule_ReplyWithLongLong(ctx, 1);\n    return REDISMODULE_OK;\n}\n\nint stop_slow_fg_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    abort_flag = 1;\n    RedisModule_ReplyWithLongLong(ctx, 1);\n    return REDISMODULE_OK;\n}\n\n/* used to enable or disable slow operation in do_bg_rm_call */\nstatic int set_slow_bg_operation(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    long long ll;\n    if (RedisModule_StringToLongLong(argv[1], &ll) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return REDISMODULE_OK;\n    }\n    g_slow_bg_operation = ll;\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* used to test if we reached the slow operation in do_bg_rm_call */\nstatic int is_in_slow_bg_operation(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    if (argc != 1) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, g_is_in_slow_bg_operation);\n    return REDISMODULE_OK;\n}\n\nstatic void timer_callback(RedisModuleCtx *ctx, void *data)\n{\n    UNUSED(ctx);\n\n    RedisModuleBlockedClient *bc = data;\n\n    // Get Redis module context\n    RedisModuleCtx *reply_ctx = RedisModule_GetThreadSafeContext(bc);\n\n    // Reply to client\n    RedisModule_ReplyWithSimpleString(reply_ctx, \"OK\");\n\n    // Unblock client\n    RedisModule_UnblockClient(bc, 
NULL);\n\n    // Free the Redis module context\n    RedisModule_FreeThreadSafeContext(reply_ctx);\n}\n\n/* unblock_by_timer <period_ms> <timeout_ms>\n * period_ms is the period of the timer.\n * timeout_ms is the blocking timeout. */\nint unblock_by_timer(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n\n    long long period;\n    long long timeout;\n    if (RedisModule_StringToLongLong(argv[1],&period) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid period\");\n    if (RedisModule_StringToLongLong(argv[2],&timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid timeout\");\n    }\n\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, timeout);\n    RedisModule_CreateTimer(ctx, period, timer_callback, bc);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"blockedclient\", 1, REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"acquire_gil\", acquire_gil, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_rm_call\", do_rm_call,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_rm_call_async\", do_rm_call_async,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_rm_call_async_on_thread\", do_rm_call_async_on_thread,\n                                      \"write\", 0, 0, 0) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, 
\"do_rm_call_async_script_mode\", do_rm_call_async,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_rm_call_async_no_replicate\", do_rm_call_async,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_rm_call_fire_and_forget\", do_rm_call_async_fire_and_forget,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"wait_and_do_rm_call\", wait_and_do_rm_call_async,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blpop_and_set_multiple_keys\", blpop_and_set_multiple_keys,\n                                      \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_bg_rm_call\", do_bg_rm_call, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_bg_rm_call_format\", do_bg_rm_call, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"do_fake_bg_true\", do_fake_bg_true, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"slow_fg_command\", slow_fg_command,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"stop_slow_fg_command\", stop_slow_fg_command,\"allow-busy\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"set_slow_bg_operation\", set_slow_bg_operation, \"allow-busy\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"is_in_slow_bg_operation\", 
is_in_slow_bg_operation, \"allow-busy\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"unblock_by_timer\", unblock_by_timer, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/blockonbackground.c",
    "content": "#define _XOPEN_SOURCE 700\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <pthread.h>\n#include <time.h>\n\n#define UNUSED(x) (void)(x)\n\ntypedef struct {\n    /* Mutex for protecting RedisModule_BlockedClientMeasureTime*() API from race\n     * conditions due to timeout callback triggered in the main thread. */\n    pthread_mutex_t measuretime_mutex;\n    int measuretime_completed; /* Indicates that time measure has ended and will not continue further */\n    int myint; /* Used for replying */\n} BlockPrivdata;\n\nvoid blockClientPrivdataInit(RedisModuleBlockedClient *bc) {\n    BlockPrivdata *block_privdata = RedisModule_Calloc(1, sizeof(*block_privdata));\n    block_privdata->measuretime_mutex = (pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER;\n    RedisModule_BlockClientSetPrivateData(bc, block_privdata);\n}\n\nvoid blockClientMeasureTimeStart(RedisModuleBlockedClient *bc, BlockPrivdata *block_privdata) {\n    pthread_mutex_lock(&block_privdata->measuretime_mutex);\n    RedisModule_BlockedClientMeasureTimeStart(bc);\n    pthread_mutex_unlock(&block_privdata->measuretime_mutex);\n}\n\nvoid blockClientMeasureTimeEnd(RedisModuleBlockedClient *bc, BlockPrivdata *block_privdata, int completed) {\n    pthread_mutex_lock(&block_privdata->measuretime_mutex);\n    if (!block_privdata->measuretime_completed) {\n        RedisModule_BlockedClientMeasureTimeEnd(bc);\n        if (completed) block_privdata->measuretime_completed = 1;\n    }\n    pthread_mutex_unlock(&block_privdata->measuretime_mutex);\n}\n\n/* Reply callback for blocking command BLOCK.DEBUG */\nint HelloBlock_Reply(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    BlockPrivdata *block_privdata = RedisModule_GetBlockedClientPrivateData(ctx);\n    return RedisModule_ReplyWithLongLong(ctx,block_privdata->myint);\n}\n\n/* Timeout callback for blocking command BLOCK.DEBUG */\nint HelloBlock_Timeout(RedisModuleCtx *ctx, 
RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModuleBlockedClient *bc = RedisModule_GetBlockedClientHandle(ctx);\n    BlockPrivdata *block_privdata = RedisModule_GetBlockedClientPrivateData(ctx);\n    blockClientMeasureTimeEnd(bc, block_privdata, 1);\n    return RedisModule_ReplyWithSimpleString(ctx,\"Request timedout\");\n}\n\n/* Private data freeing callback for BLOCK.DEBUG command. */\nvoid HelloBlock_FreeData(RedisModuleCtx *ctx, void *privdata) {\n    UNUSED(ctx);\n    BlockPrivdata *block_privdata = privdata;\n    pthread_mutex_destroy(&block_privdata->measuretime_mutex);\n    RedisModule_Free(privdata);\n}\n\n/* Private data freeing callback for BLOCK.BLOCK command. */\nvoid HelloBlock_FreeStringData(RedisModuleCtx *ctx, void *privdata) {\n    RedisModule_FreeString(ctx, (RedisModuleString*)privdata);\n}\n\n/* The thread entry point that actually executes the blocking part\n * of the command BLOCK.DEBUG. */\nvoid *BlockDebug_ThreadMain(void *arg) {\n    void **targ = arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    long long delay = (unsigned long)targ[1];\n    long long enable_time_track = (unsigned long)targ[2];\n    BlockPrivdata *block_privdata = RedisModule_BlockClientGetPrivateData(bc);\n\n    if (enable_time_track)\n        blockClientMeasureTimeStart(bc, block_privdata);\n    RedisModule_Free(targ);\n\n    struct timespec ts;\n    ts.tv_sec = delay / 1000;\n    ts.tv_nsec = (delay % 1000) * 1000000;\n    nanosleep(&ts, NULL);\n    if (enable_time_track)\n        blockClientMeasureTimeEnd(bc, block_privdata, 0);\n    block_privdata->myint = rand();\n    RedisModule_UnblockClient(bc,block_privdata);\n    return NULL;\n}\n\n/* The thread entry point that actually executes the blocking part\n * of the command BLOCK.DOUBLE_DEBUG. 
*/\nvoid *DoubleBlock_ThreadMain(void *arg) {\n    void **targ = arg;\n    RedisModuleBlockedClient *bc = targ[0];\n    long long delay = (unsigned long)targ[1];\n    BlockPrivdata *block_privdata = RedisModule_BlockClientGetPrivateData(bc);\n    blockClientMeasureTimeStart(bc, block_privdata);\n    RedisModule_Free(targ);\n    struct timespec ts;\n    ts.tv_sec = delay / 1000;\n    ts.tv_nsec = (delay % 1000) * 1000000;\n    nanosleep(&ts, NULL);\n    blockClientMeasureTimeEnd(bc, block_privdata, 0);\n    /* call again RedisModule_BlockedClientMeasureTimeStart() and\n     * RedisModule_BlockedClientMeasureTimeEnd and ensure that the\n     * total execution time is 2x the delay. */\n    blockClientMeasureTimeStart(bc, block_privdata);\n    nanosleep(&ts, NULL);\n    blockClientMeasureTimeEnd(bc, block_privdata, 0);\n    block_privdata->myint = rand();\n    RedisModule_UnblockClient(bc,block_privdata);\n    return NULL;\n}\n\nvoid HelloBlock_Disconnected(RedisModuleCtx *ctx, RedisModuleBlockedClient *bc) {\n    RedisModule_Log(ctx,\"warning\",\"Blocked client %p disconnected!\",\n        (void*)bc);\n}\n\n/* BLOCK.DEBUG <delay_ms> <timeout_ms> -- Block for <count> milliseconds, then reply with\n * a random number. Timeout is the command timeout, so that you can test\n * what happens when the delay is greater than the timeout. 
*/\nint HelloBlock_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    long long delay;\n    long long timeout;\n\n    if (RedisModule_StringToLongLong(argv[1],&delay) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    if (RedisModule_StringToLongLong(argv[2],&timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,HelloBlock_Reply,HelloBlock_Timeout,HelloBlock_FreeData,timeout);\n    blockClientPrivdataInit(bc);\n\n    /* Here we set a disconnection handler, however since this module will\n     * block in sleep() in a thread, there is not much we can do in the\n     * callback, so this is just to show you the API. */\n    RedisModule_SetDisconnectCallback(bc,HelloBlock_Disconnected);\n\n    /* Now that we setup a blocking client, we need to pass the control\n     * to the thread. However we need to pass arguments to the thread:\n     * the delay and a reference to the blocked client handle. */\n    void **targ = RedisModule_Alloc(sizeof(void*)*3);\n    targ[0] = bc;\n    targ[1] = (void*)(unsigned long) delay;\n    // pass 1 as flag to enable time tracking\n    targ[2] = (void*)(unsigned long) 1;\n\n    if (pthread_create(&tid,NULL,BlockDebug_ThreadMain,targ) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    pthread_detach(tid);\n    return REDISMODULE_OK;\n}\n\n/* BLOCK.DEBUG_NOTRACKING <delay_ms> <timeout_ms> -- Block for <count> milliseconds, then reply with\n * a random number. 
Timeout is the command timeout, so that you can test\n * what happens when the delay is greater than the timeout.\n * This command does not track background time, so the background time should not appear in stats. */\nint HelloBlockNoTracking_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    long long delay;\n    long long timeout;\n\n    if (RedisModule_StringToLongLong(argv[1],&delay) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    if (RedisModule_StringToLongLong(argv[2],&timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,HelloBlock_Reply,HelloBlock_Timeout,HelloBlock_FreeData,timeout);\n    blockClientPrivdataInit(bc);\n\n    /* Here we set a disconnection handler, however since this module will\n     * block in sleep() in a thread, there is not much we can do in the\n     * callback, so this is just to show you the API. */\n    RedisModule_SetDisconnectCallback(bc,HelloBlock_Disconnected);\n\n    /* Now that we setup a blocking client, we need to pass the control\n     * to the thread. However we need to pass arguments to the thread:\n     * the delay and a reference to the blocked client handle. 
*/\n    void **targ = RedisModule_Alloc(sizeof(void*)*3);\n    targ[0] = bc;\n    targ[1] = (void*)(unsigned long) delay;\n    // pass 0 as flag to disable time tracking\n    targ[2] = (void*)(unsigned long) 0;\n\n    if (pthread_create(&tid,NULL,BlockDebug_ThreadMain,targ) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    pthread_detach(tid);\n    return REDISMODULE_OK;\n}\n\n/* BLOCK.DOUBLE_DEBUG <delay_ms> -- Block for 2 x <count> milliseconds,\n * then reply with a random number.\n * This command is used to test multiple calls to RedisModule_BlockedClientMeasureTimeStart()\n * and RedisModule_BlockedClientMeasureTimeEnd() within the same execution. */\nint HelloDoubleBlock_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    long long delay;\n\n    if (RedisModule_StringToLongLong(argv[1],&delay) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid count\");\n    }\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,HelloBlock_Reply,HelloBlock_Timeout,HelloBlock_FreeData,0);\n    blockClientPrivdataInit(bc);\n\n    /* Now that we setup a blocking client, we need to pass the control\n     * to the thread. However we need to pass arguments to the thread:\n     * the delay and a reference to the blocked client handle. */\n    void **targ = RedisModule_Alloc(sizeof(void*)*2);\n    targ[0] = bc;\n    targ[1] = (void*)(unsigned long) delay;\n\n    if (pthread_create(&tid,NULL,DoubleBlock_ThreadMain,targ) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    pthread_detach(tid);\n    return REDISMODULE_OK;\n}\n\nRedisModuleBlockedClient *blocked_client = NULL;\n\n/* BLOCK.BLOCK [TIMEOUT] -- Blocks the current client until released\n * or TIMEOUT seconds. 
If TIMEOUT is zero, no timeout function is\n * registered.\n */\nint Block_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (RedisModule_IsBlockedReplyRequest(ctx)) {\n        RedisModuleString *r = RedisModule_GetBlockedClientPrivateData(ctx);\n        return RedisModule_ReplyWithString(ctx, r);\n    } else if (RedisModule_IsBlockedTimeoutRequest(ctx)) {\n        RedisModule_UnblockClient(blocked_client, NULL); /* Must be called to avoid leaks. */\n        blocked_client = NULL;\n        return RedisModule_ReplyWithSimpleString(ctx, \"Timed out\");\n    }\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    long long timeout;\n\n    if (RedisModule_StringToLongLong(argv[1], &timeout) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid timeout\");\n    }\n    if (blocked_client) {\n        return RedisModule_ReplyWithError(ctx, \"ERR another client already blocked\");\n    }\n\n    /* Block the client. We use this function as both the reply and the optional\n     * timeout callback, and distinguish between the code flows above.\n     */\n    blocked_client = RedisModule_BlockClient(ctx, Block_RedisCommand,\n            timeout > 0 ? Block_RedisCommand : NULL, HelloBlock_FreeStringData, timeout);\n    return REDISMODULE_OK;\n}\n\n/* BLOCK.IS_BLOCKED -- Returns 1 if we have a blocked client, or 0 otherwise.\n */\nint IsBlocked_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModule_ReplyWithLongLong(ctx, blocked_client ? 
1 : 0);\n    return REDISMODULE_OK;\n}\n\n/* BLOCK.RELEASE <reply> -- Releases the blocked client and produces the specified reply.\n */\nint Release_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    if (!blocked_client) {\n        return RedisModule_ReplyWithError(ctx, \"ERR No blocked client\");\n    }\n\n    RedisModuleString *replystr = argv[1];\n    RedisModule_RetainString(ctx, replystr);\n    RedisModule_UnblockClient(blocked_client, replystr);\n    blocked_client = NULL;\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    if (RedisModule_Init(ctx,\"block\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"block.debug\",\n        HelloBlock_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"block.double_debug\",\n        HelloDoubleBlock_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"block.debug_no_track\",\n        HelloBlockNoTracking_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"block.block\",\n        Block_RedisCommand, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"block.is_blocked\",\n        IsBlocked_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"block.release\",\n        Release_RedisCommand,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/blockonkeys.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <strings.h>\n#include <assert.h>\n#include <unistd.h>\n\n#define UNUSED(V) ((void) V)\n\n#define LIST_SIZE 1024\n\n/* The FSL (Fixed-Size List) data type is a low-budget imitation of the\n * native Redis list, in order to test list-like commands implemented\n * by a module.\n * Examples: FSL.PUSH, FSL.BPOP, etc. */\n\ntypedef struct {\n    long long list[LIST_SIZE];\n    long long length;\n} fsl_t; /* Fixed-size list */\n\nstatic RedisModuleType *fsltype = NULL;\n\nfsl_t *fsl_type_create(void) {\n    fsl_t *o;\n    o = RedisModule_Alloc(sizeof(*o));\n    o->length = 0;\n    return o;\n}\n\nvoid fsl_type_free(fsl_t *o) {\n    RedisModule_Free(o);\n}\n\n/* ========================== \"fsltype\" type methods ======================= */\n\nvoid *fsl_rdb_load(RedisModuleIO *rdb, int encver) {\n    if (encver != 0) {\n        return NULL;\n    }\n    fsl_t *fsl = fsl_type_create();\n    fsl->length = RedisModule_LoadUnsigned(rdb);\n    for (long long i = 0; i < fsl->length; i++)\n        fsl->list[i] = RedisModule_LoadSigned(rdb);\n    return fsl;\n}\n\nvoid fsl_rdb_save(RedisModuleIO *rdb, void *value) {\n    fsl_t *fsl = value;\n    RedisModule_SaveUnsigned(rdb,fsl->length);\n    for (long long i = 0; i < fsl->length; i++)\n        RedisModule_SaveSigned(rdb, fsl->list[i]);\n}\n\nvoid fsl_aofrw(RedisModuleIO *aof, RedisModuleString *key, void *value) {\n    fsl_t *fsl = value;\n    for (long long i = 0; i < fsl->length; i++)\n        RedisModule_EmitAOF(aof, \"FSL.PUSH\",\"sl\", key, fsl->list[i]);\n}\n\nvoid fsl_free(void *value) {\n    fsl_type_free(value);\n}\n\n/* ========================== helper methods ======================= */\n\n/* Wrapper to the boilerplate code of opening a key, checking its type, etc.\n * Returns 0 if `keyname` exists in the dataset, but it's of the wrong type (i.e. 
not FSL); returns 1 otherwise. */\nint get_fsl(RedisModuleCtx *ctx, RedisModuleString *keyname, int mode, int create, fsl_t **fsl, int reply_on_failure) {\n    *fsl = NULL;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, mode);\n\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY) {\n        /* Key exists */\n        if (RedisModule_ModuleTypeGetType(key) != fsltype) {\n            /* Key is not FSL */\n            RedisModule_CloseKey(key);\n            if (reply_on_failure)\n                RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n            RedisModuleCallReply *reply = RedisModule_Call(ctx, \"INCR\", \"c\", \"fsl_wrong_type\");\n            RedisModule_FreeCallReply(reply);\n            return 0;\n        }\n\n        *fsl = RedisModule_ModuleTypeGetValue(key);\n        if (*fsl && !(*fsl)->length && mode & REDISMODULE_WRITE) {\n            /* Key exists, but it's logically empty */\n            if (create) {\n                create = 0; /* No need to create, key exists in its basic state */\n            } else {\n                RedisModule_DeleteKey(key);\n                *fsl = NULL;\n            }\n        } else {\n            /* Key exists, and has elements in it - no need to create anything */\n            create = 0;\n        }\n    }\n\n    if (create) {\n        *fsl = fsl_type_create();\n        RedisModule_ModuleTypeSetValue(key, fsltype, *fsl);\n    }\n\n    RedisModule_CloseKey(key);\n    return 1;\n}\n\n/* ========================== commands ======================= */\n\n/* FSL.PUSH <key> <int> - Push an integer to the fixed-size list (to the right).\n * It must be greater than the element in the head of the list. 
*/\nint fsl_push(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n\n    long long ele;\n    if (RedisModule_StringToLongLong(argv[2],&ele) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid integer\");\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_WRITE, 1, &fsl, 1))\n        return REDISMODULE_OK;\n\n    if (fsl->length == LIST_SIZE)\n        return RedisModule_ReplyWithError(ctx,\"ERR list is full\");\n\n    if (fsl->length != 0 && fsl->list[fsl->length-1] >= ele)\n        return RedisModule_ReplyWithError(ctx,\"ERR new element has to be greater than the head element\");\n\n    fsl->list[fsl->length++] = ele;\n    RedisModule_SignalKeyAsReady(ctx, argv[1]);\n\n    RedisModule_ReplicateVerbatim(ctx);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\ntypedef struct {\n    RedisModuleString *keyname;\n    long long ele;\n} timer_data_t;\n\nstatic void timer_callback(RedisModuleCtx *ctx, void *data)\n{\n    timer_data_t *td = data;\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, td->keyname, REDISMODULE_WRITE, 1, &fsl, 1))\n        return;\n\n    if (fsl->length == LIST_SIZE)\n        return; /* list is full */\n\n    if (fsl->length != 0 && fsl->list[fsl->length-1] >= td->ele)\n        return; /* new element has to be greater than the head element */\n\n    fsl->list[fsl->length++] = td->ele;\n    RedisModule_SignalKeyAsReady(ctx, td->keyname);\n\n    RedisModule_Replicate(ctx, \"FSL.PUSH\", \"sl\", td->keyname, td->ele);\n\n    RedisModule_FreeString(ctx, td->keyname);\n    RedisModule_Free(td);\n}\n\n/* FSL.PUSHTIMER <key> <int> <period-in-ms> - Push <int> to the fixed-size list (to the right)\n * after <period-in-ms> milliseconds. It must be greater than the element in the head of the list. 
*/\nint fsl_pushtimer(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 4)\n        return RedisModule_WrongArity(ctx);\n\n    long long ele;\n    if (RedisModule_StringToLongLong(argv[2],&ele) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid integer\");\n\n    long long period;\n    if (RedisModule_StringToLongLong(argv[3],&period) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid period\");\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_WRITE, 1, &fsl, 1))\n        return REDISMODULE_OK;\n\n    if (fsl->length == LIST_SIZE)\n        return RedisModule_ReplyWithError(ctx,\"ERR list is full\");\n\n    timer_data_t *td = RedisModule_Alloc(sizeof(*td));\n    td->keyname = argv[1];\n    RedisModule_RetainString(ctx, td->keyname);\n    td->ele = ele;\n\n    RedisModuleTimerID id = RedisModule_CreateTimer(ctx, period, timer_callback, td);\n    RedisModule_ReplyWithLongLong(ctx, id);\n\n    return REDISMODULE_OK;\n}\n\nint bpop_reply_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleString *keyname = RedisModule_GetBlockedClientReadyKey(ctx);\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, keyname, REDISMODULE_WRITE, 0, &fsl, 0) || !fsl)\n        return REDISMODULE_ERR;\n\n    RedisModule_Assert(fsl->length);\n    RedisModule_ReplyWithLongLong(ctx, fsl->list[--fsl->length]);\n\n    /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. 
*/\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\nint bpop_timeout_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx, \"Request timedout\");\n}\n\n/* FSL.BPOP <key> <timeout> [NO_TO_CB] - Block clients until the list has an element.\n * When that happens, unblock the client and pop the last element (from the right). */\nint fsl_bpop(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3)\n        return RedisModule_WrongArity(ctx);\n\n    long long timeout;\n    if (RedisModule_StringToLongLong(argv[2],&timeout) != REDISMODULE_OK || timeout < 0)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid timeout\");\n\n    int to_cb = 1;\n    if (argc == 4) {\n        if (strcasecmp(\"NO_TO_CB\", RedisModule_StringPtrLen(argv[3], NULL)))\n            return RedisModule_ReplyWithError(ctx,\"ERR invalid argument\");\n        to_cb = 0;\n    }\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_WRITE, 0, &fsl, 1))\n        return REDISMODULE_OK;\n\n    if (!fsl) {\n        RedisModule_BlockClientOnKeys(ctx, bpop_reply_callback, to_cb ? bpop_timeout_callback : NULL,\n                                      NULL, timeout, &argv[1], 1, NULL);\n    } else {\n        RedisModule_Assert(fsl->length);\n        RedisModule_ReplyWithLongLong(ctx, fsl->list[--fsl->length]);\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. 
*/\n        RedisModule_ReplicateVerbatim(ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint bpopgt_reply_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleString *keyname = RedisModule_GetBlockedClientReadyKey(ctx);\n    long long *pgt = RedisModule_GetBlockedClientPrivateData(ctx);\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, keyname, REDISMODULE_WRITE, 0, &fsl, 0) || !fsl)\n        return RedisModule_ReplyWithError(ctx,\"UNBLOCKED key no longer exists\");\n\n    if (fsl->list[fsl->length-1] <= *pgt)\n        return REDISMODULE_ERR;\n\n    RedisModule_Assert(fsl->length);\n    RedisModule_ReplyWithLongLong(ctx, fsl->list[--fsl->length]);\n    /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\nint bpopgt_timeout_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx, \"Request timedout\");\n}\n\nvoid bpopgt_free_privdata(RedisModuleCtx *ctx, void *privdata) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModule_Free(privdata);\n}\n\n/* FSL.BPOPGT <key> <gt> <timeout> - Block clients until list has an element greater than <gt>.\n * When that happens, unblock client and pop the last element (from the right). 
*/\nint fsl_bpopgt(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4)\n        return RedisModule_WrongArity(ctx);\n\n    long long gt;\n    if (RedisModule_StringToLongLong(argv[2],&gt) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid integer\");\n\n    long long timeout;\n    if (RedisModule_StringToLongLong(argv[3],&timeout) != REDISMODULE_OK || timeout < 0)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid timeout\");\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_WRITE, 0, &fsl, 1))\n        return REDISMODULE_OK;\n\n    if (!fsl)\n        return RedisModule_ReplyWithError(ctx,\"ERR key must exist\");\n\n    if (fsl->list[fsl->length-1] <= gt) {\n        /* We use malloc so the tests in blockedonkeys.tcl can check for memory leaks */\n        long long *pgt = RedisModule_Alloc(sizeof(long long));\n        *pgt = gt;\n        RedisModule_BlockClientOnKeysWithFlags(\n            ctx, bpopgt_reply_callback, bpopgt_timeout_callback,\n            bpopgt_free_privdata, timeout, &argv[1], 1, pgt,\n            REDISMODULE_BLOCK_UNBLOCK_DELETED);\n    } else {\n        RedisModule_Assert(fsl->length);\n        RedisModule_ReplyWithLongLong(ctx, fsl->list[--fsl->length]);\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. 
*/\n        RedisModule_ReplicateVerbatim(ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint bpoppush_reply_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleString *src_keyname = RedisModule_GetBlockedClientReadyKey(ctx);\n    RedisModuleString *dst_keyname = RedisModule_GetBlockedClientPrivateData(ctx);\n\n    fsl_t *src;\n    if (!get_fsl(ctx, src_keyname, REDISMODULE_WRITE, 0, &src, 0) || !src)\n        return REDISMODULE_ERR;\n\n    fsl_t *dst;\n    if (!get_fsl(ctx, dst_keyname, REDISMODULE_WRITE, 1, &dst, 0) || !dst)\n        return REDISMODULE_ERR;\n\n    RedisModule_Assert(src->length);\n    long long ele = src->list[--src->length];\n    dst->list[dst->length++] = ele;\n    RedisModule_SignalKeyAsReady(ctx, dst_keyname);\n    /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n    RedisModule_ReplicateVerbatim(ctx);\n    return RedisModule_ReplyWithLongLong(ctx, ele);\n}\n\nint bpoppush_timeout_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx, \"Request timedout\");\n}\n\nvoid bpoppush_free_privdata(RedisModuleCtx *ctx, void *privdata) {\n    RedisModule_FreeString(ctx, privdata);\n}\n\n/* FSL.BPOPPUSH <src> <dst> <timeout> - Block clients until <src> has an element.\n * When that happens, unblock client, pop the last element from <src> and push it to <dst>\n * (from the right). 
*/\nint fsl_bpoppush(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4)\n        return RedisModule_WrongArity(ctx);\n\n    long long timeout;\n    if (RedisModule_StringToLongLong(argv[3],&timeout) != REDISMODULE_OK || timeout < 0)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid timeout\");\n\n    fsl_t *src;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_WRITE, 0, &src, 1))\n        return REDISMODULE_OK;\n\n    if (!src) {\n        /* Retain string for reply callback */\n        RedisModule_RetainString(ctx, argv[2]);\n        /* Key is empty, we must block */\n        RedisModule_BlockClientOnKeys(ctx, bpoppush_reply_callback, bpoppush_timeout_callback,\n                                      bpoppush_free_privdata, timeout, &argv[1], 1, argv[2]);\n    } else {\n        fsl_t *dst;\n        if (!get_fsl(ctx, argv[2], REDISMODULE_WRITE, 1, &dst, 1))\n            return REDISMODULE_OK;\n\n        RedisModule_Assert(src->length);\n        long long ele = src->list[--src->length];\n        dst->list[dst->length++] = ele;\n        RedisModule_SignalKeyAsReady(ctx, argv[2]);\n        RedisModule_ReplyWithLongLong(ctx, ele);\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n        RedisModule_ReplicateVerbatim(ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* FSL.GETALL <key> - Reply with an array containing all elements. 
*/\nint fsl_getall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    fsl_t *fsl;\n    if (!get_fsl(ctx, argv[1], REDISMODULE_READ, 0, &fsl, 1))\n        return REDISMODULE_OK;\n\n    if (!fsl)\n        return RedisModule_ReplyWithArray(ctx, 0);\n\n    RedisModule_ReplyWithArray(ctx, fsl->length);\n    for (int i = 0; i < fsl->length; i++)\n        RedisModule_ReplyWithLongLong(ctx, fsl->list[i]);\n    return REDISMODULE_OK;\n}\n\n/* Callback for blockonkeys_popall */\nint blockonkeys_popall_reply_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_LIST) {\n        RedisModuleString *elem;\n        long len = 0;\n        RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);\n        while ((elem = RedisModule_ListPop(key, REDISMODULE_LIST_HEAD)) != NULL) {\n            len++;\n            RedisModule_ReplyWithString(ctx, elem);\n            RedisModule_FreeString(ctx, elem);\n        }\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n        RedisModule_ReplicateVerbatim(ctx);\n        RedisModule_ReplySetArrayLength(ctx, len);\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR Not a list\");\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint blockonkeys_popall_timeout_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithError(ctx, \"ERR Timeout\");\n}\n\n/* BLOCKONKEYS.POPALL key\n *\n * Blocks on an empty key for up to 3 seconds. When unblocked by a list\n * operation like LPUSH, all the elements are popped and returned. Fails with an\n * error on timeout. 
*/\nint blockonkeys_popall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) {\n        RedisModule_BlockClientOnKeys(ctx, blockonkeys_popall_reply_callback,\n                                      blockonkeys_popall_timeout_callback,\n                                      NULL, 3000, &argv[1], 1, NULL);\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR Key not empty\");\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* BLOCKONKEYS.LPUSH key val [val ..]\n * BLOCKONKEYS.LPUSH_UNBLOCK key val [val ..]\n *\n * A module equivalent of LPUSH. If the name LPUSH_UNBLOCK is used,\n * RM_SignalKeyAsReady() is also called. */\nint blockonkeys_lpush(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_LIST) {\n        RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    } else {\n        for (int i = 2; i < argc; i++) {\n            if (RedisModule_ListPush(key, REDISMODULE_LIST_HEAD,\n                                     argv[i]) != REDISMODULE_OK) {\n                RedisModule_CloseKey(key);\n                return RedisModule_ReplyWithError(ctx, \"ERR Push failed\");\n            }\n        }\n    }\n    RedisModule_CloseKey(key);\n\n    /* signal key as ready if the command is lpush_unblock */\n    size_t len;\n    const char *str = RedisModule_StringPtrLen(argv[0], &len);\n    if (!strncasecmp(str, \"blockonkeys.lpush_unblock\", len)) {\n        RedisModule_SignalKeyAsReady(ctx, argv[1]);\n    }\n    
RedisModule_ReplicateVerbatim(ctx);\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* Callback for the BLOCKONKEYS.BLPOPN command */\nint blockonkeys_blpopn_reply_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argc);\n    long long n;\n    RedisModule_StringToLongLong(argv[2], &n);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    int result;\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_LIST &&\n        RedisModule_ValueLength(key) >= (size_t)n) {\n        RedisModule_ReplyWithArray(ctx, n);\n        for (long i = 0; i < n; i++) {\n            RedisModuleString *elem = RedisModule_ListPop(key, REDISMODULE_LIST_HEAD);\n            RedisModule_ReplyWithString(ctx, elem);\n            RedisModule_FreeString(ctx, elem);\n        }\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n        RedisModule_ReplicateVerbatim(ctx);\n        result = REDISMODULE_OK;\n    } else if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_LIST ||\n               RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) {\n        const char *module_cmd = RedisModule_StringPtrLen(argv[0], NULL);\n        if (!strcasecmp(module_cmd, \"blockonkeys.blpopn_or_unblock\"))\n            RedisModule_UnblockClient(RedisModule_GetBlockedClientHandle(ctx), NULL);\n\n        /* continue blocking */\n        result = REDISMODULE_ERR;\n    } else {\n        result = RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n    RedisModule_CloseKey(key);\n    return result;\n}\n\nint blockonkeys_blpopn_timeout_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithError(ctx, \"ERR Timeout\");\n}\n\nint blockonkeys_blpopn_abort_callback(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    
REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithSimpleString(ctx, \"Action aborted\");\n}\n\n/* BLOCKONKEYS.BLPOPN key N [timeout]\n *\n * Blocks until key has N elements and then pops them, or fails after the\n * timeout in milliseconds (3000 by default).\n */\nint blockonkeys_blpopn(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3) return RedisModule_WrongArity(ctx);\n\n    long long n, timeout = 3000LL;\n    if (RedisModule_StringToLongLong(argv[2], &n) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR Invalid N\");\n    }\n\n    if (argc > 3) {\n        if (RedisModule_StringToLongLong(argv[3], &timeout) != REDISMODULE_OK) {\n            return RedisModule_ReplyWithError(ctx, \"ERR Invalid timeout value\");\n        }\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    int keytype = RedisModule_KeyType(key);\n    if (keytype != REDISMODULE_KEYTYPE_EMPTY &&\n        keytype != REDISMODULE_KEYTYPE_LIST) {\n        RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    } else if (keytype == REDISMODULE_KEYTYPE_LIST &&\n               RedisModule_ValueLength(key) >= (size_t)n) {\n        RedisModule_ReplyWithArray(ctx, n);\n        for (long i = 0; i < n; i++) {\n            RedisModuleString *elem = RedisModule_ListPop(key, REDISMODULE_LIST_HEAD);\n            RedisModule_ReplyWithString(ctx, elem);\n            RedisModule_FreeString(ctx, elem);\n        }\n        /* I'm lazy so i'll replicate a potentially blocking command, it shouldn't block in this flow. */\n        RedisModule_ReplicateVerbatim(ctx);\n    } else {\n        RedisModule_BlockClientOnKeys(ctx, blockonkeys_blpopn_reply_callback,\n                                      timeout ? 
blockonkeys_blpopn_timeout_callback : blockonkeys_blpopn_abort_callback,\n                                      NULL, timeout, &argv[1], 1, NULL);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"blockonkeys\", 1, REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = fsl_rdb_load,\n        .rdb_save = fsl_rdb_save,\n        .aof_rewrite = fsl_aofrw,\n        .mem_usage = NULL,\n        .free = fsl_free,\n        .digest = NULL,\n    };\n\n    fsltype = RedisModule_CreateDataType(ctx, \"fsltype_t\", 0, &tm);\n    if (fsltype == NULL)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.push\",fsl_push,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.pushtimer\",fsl_pushtimer,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.bpop\",fsl_bpop,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.bpopgt\",fsl_bpopgt,\"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.bpoppush\",fsl_bpoppush,\"write\",1,2,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fsl.getall\",fsl_getall,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blockonkeys.popall\", blockonkeys_popall,\n                                  \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blockonkeys.lpush\", 
blockonkeys_lpush,\n                                  \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blockonkeys.lpush_unblock\", blockonkeys_lpush,\n                                  \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blockonkeys.blpopn\", blockonkeys_blpopn,\n                                  \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"blockonkeys.blpopn_or_unblock\", blockonkeys_blpopn,\n                                      \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/cmdintrospection.c",
    "content": "#include \"redismodule.h\"\n\n#define UNUSED(V) ((void) V)\n\nint cmd_xadd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"cmdintrospection\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"cmdintrospection.xadd\",cmd_xadd,\"write deny-oom random fast\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *xadd = RedisModule_GetCommand(ctx,\"cmdintrospection.xadd\");\n\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .arity = -5,\n        .summary = \"Appends a new message to a stream. Creates the key if it doesn't exist.\",\n        .since = \"5.0.0\",\n        .complexity = \"O(1) when adding a new entry, O(N) when trimming where N being the number of entries evicted.\",\n        .tips = \"nondeterministic_output\",\n        .history = (RedisModuleCommandHistoryEntry[]){\n            /* NOTE: All versions specified should be the module's versions, not\n             * Redis'! We use Redis versions in this example for the purpose of\n             * testing (comparing the output with the output of the vanilla\n             * XADD). 
*/\n            {\"6.2.0\", \"Added the `NOMKSTREAM` option, `MINID` trimming strategy and the `LIMIT` option.\"},\n            {\"7.0.0\", \"Added support for the `<ms>-*` explicit ID form.\"},\n            {\"8.2.0\", \"Added the `KEEPREF`, `DELREF` and `ACKED` options.\"},\n            {0}\n        },\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .notes = \"UPDATE instead of INSERT because of the optional trimming feature\",\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {0}\n        },\n        .args = (RedisModuleCommandArg[]){\n            {\n                .name = \"key\",\n                .type = REDISMODULE_ARG_TYPE_KEY,\n                .key_spec_index = 0\n            },\n            {\n                .name = \"nomkstream\",\n                .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                .token = \"NOMKSTREAM\",\n                .since = \"6.2.0\",\n                .flags = REDISMODULE_CMD_ARG_OPTIONAL\n            },\n            {\n                .name = \"condition\",\n                .type = REDISMODULE_ARG_TYPE_ONEOF,\n                .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                .subargs = (RedisModuleCommandArg[]){\n                    {\n                        .name = \"keepref\",\n                        .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                        .token = \"KEEPREF\"\n                    },\n                    {\n                        .name = \"delref\",\n                        .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                        .token = \"DELREF\"\n                    },\n                    {\n                        .name = \"acked\",\n                        .type = 
REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                        .token = \"ACKED\"\n                    },\n                    {0}\n                }\n            },\n            {\n                .name = \"idmp\",\n                .type = REDISMODULE_ARG_TYPE_ONEOF,\n                .since = \"8.6.0\",\n                .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                .subargs = (RedisModuleCommandArg[]){\n                    {\n                        .name = \"pid\",\n                        .type = REDISMODULE_ARG_TYPE_STRING,\n                        .token = \"IDMPAUTO\"\n                    },\n                    {\n                        .name = \"idmp\",\n                        .type = REDISMODULE_ARG_TYPE_BLOCK,\n                        .token = \"IDMP\",\n                        .subargs = (RedisModuleCommandArg[]){\n                            {\n                                .name = \"pid\",\n                                .type = REDISMODULE_ARG_TYPE_STRING,\n                            },\n                            {\n                                .name = \"iid\",\n                                .type = REDISMODULE_ARG_TYPE_STRING,\n                            },\n                            {0}\n                        }\n                    },\n                    {0}\n                }\n            },\n            {\n                .name = \"trim\",\n                .type = REDISMODULE_ARG_TYPE_BLOCK,\n                .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                .subargs = (RedisModuleCommandArg[]){\n                    {\n                        .name = \"strategy\",\n                        .type = REDISMODULE_ARG_TYPE_ONEOF,\n                        .subargs = (RedisModuleCommandArg[]){\n                            {\n                                .name = \"maxlen\",\n                                .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                .token = \"MAXLEN\",\n                         
   },\n                            {\n                                .name = \"minid\",\n                                .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                .token = \"MINID\",\n                                .since = \"6.2.0\",\n                            },\n                            {0}\n                        }\n                    },\n                    {\n                        .name = \"operator\",\n                        .type = REDISMODULE_ARG_TYPE_ONEOF,\n                        .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                        .subargs = (RedisModuleCommandArg[]){\n                            {\n                                .name = \"equal\",\n                                .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                .token = \"=\"\n                            },\n                            {\n                                .name = \"approximately\",\n                                .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                .token = \"~\"\n                            },\n                            {0}\n                        }\n                    },\n                    {\n                        .name = \"threshold\",\n                        .type = REDISMODULE_ARG_TYPE_STRING,\n                        .display_text = \"threshold\" /* Just for coverage, doesn't have a visible effect */\n                    },\n                    {\n                        .name = \"count\",\n                        .type = REDISMODULE_ARG_TYPE_INTEGER,\n                        .token = \"LIMIT\",\n                        .since = \"6.2.0\",\n                        .flags = REDISMODULE_CMD_ARG_OPTIONAL\n                    },\n                    {0}\n                }\n            },\n            {\n                .name = \"id-selector\",\n                .type = REDISMODULE_ARG_TYPE_ONEOF,\n                .subargs = 
(RedisModuleCommandArg[]){\n                    {\n                        .name = \"auto-id\",\n                        .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                        .token = \"*\"\n                    },\n                    {\n                        .name = \"id\",\n                        .type = REDISMODULE_ARG_TYPE_STRING,\n                    },\n                    {0}\n                }\n            },\n            {\n                .name = \"data\",\n                .type = REDISMODULE_ARG_TYPE_BLOCK,\n                .flags = REDISMODULE_CMD_ARG_MULTIPLE,\n                .subargs = (RedisModuleCommandArg[]){\n                    {\n                        .name = \"field\",\n                        .type = REDISMODULE_ARG_TYPE_STRING,\n                    },\n                    {\n                        .name = \"value\",\n                        .type = REDISMODULE_ARG_TYPE_STRING,\n                    },\n                    {0}\n                }\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(xadd, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/commandfilter.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <strings.h>\n\nstatic RedisModuleString *log_key_name;\n\nstatic const char log_command_name[] = \"commandfilter.log\";\nstatic const char ping_command_name[] = \"commandfilter.ping\";\nstatic const char retained_command_name[] = \"commandfilter.retained\";\nstatic const char unregister_command_name[] = \"commandfilter.unregister\";\nstatic const char unfiltered_clientid_name[] = \"unfilter_clientid\";\nstatic int in_log_command = 0;\n\nunsigned long long unfiltered_clientid = 0;\n\nstatic RedisModuleCommandFilter *filter, *filter1;\nstatic RedisModuleString *retained;\n\nint CommandFilter_UnregisterCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    (void) argc;\n    (void) argv;\n\n    RedisModule_ReplyWithLongLong(ctx,\n            RedisModule_UnregisterCommandFilter(ctx, filter));\n\n    return REDISMODULE_OK;\n}\n\nint CommandFilter_PingCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    (void) argc;\n    (void) argv;\n\n    RedisModuleCallReply *reply = RedisModule_Call(ctx, \"ping\", \"c\", \"@log\");\n    if (reply) {\n        RedisModule_ReplyWithCallReply(ctx, reply);\n        RedisModule_FreeCallReply(reply);\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"Unknown command or invalid arguments\");\n    }\n\n    return REDISMODULE_OK;\n}\n\nint CommandFilter_Retained(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    (void) argc;\n    (void) argv;\n\n    if (retained) {\n        RedisModule_ReplyWithString(ctx, retained);\n    } else {\n        RedisModule_ReplyWithNull(ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint CommandFilter_LogCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    RedisModuleString *s = RedisModule_CreateString(ctx, \"\", 0);\n\n    int i;\n    for (i = 1; i < argc; i++) {\n        size_t arglen;\n        const char *arg = RedisModule_StringPtrLen(argv[i], 
&arglen);\n\n        if (i > 1) RedisModule_StringAppendBuffer(ctx, s, \" \", 1);\n        RedisModule_StringAppendBuffer(ctx, s, arg, arglen);\n    }\n\n    RedisModuleKey *log = RedisModule_OpenKey(ctx, log_key_name, REDISMODULE_WRITE|REDISMODULE_READ);\n    RedisModule_ListPush(log, REDISMODULE_LIST_HEAD, s);\n    RedisModule_CloseKey(log);\n    RedisModule_FreeString(ctx, s);\n\n    in_log_command = 1;\n\n    size_t cmdlen;\n    const char *cmdname = RedisModule_StringPtrLen(argv[1], &cmdlen);\n    RedisModuleCallReply *reply = RedisModule_Call(ctx, cmdname, \"v\", &argv[2], (size_t)argc - 2);\n    if (reply) {\n        RedisModule_ReplyWithCallReply(ctx, reply);\n        RedisModule_FreeCallReply(reply);\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"Unknown command or invalid arguments\");\n    }\n\n    in_log_command = 0;\n\n    return REDISMODULE_OK;\n}\n\nint CommandFilter_UnfilteredClientId(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc < 2)\n        return RedisModule_WrongArity(ctx);\n\n    long long id;\n    if (RedisModule_StringToLongLong(argv[1], &id) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"invalid client id\");\n        return REDISMODULE_OK;\n    }\n    if (id < 0) {\n        RedisModule_ReplyWithError(ctx, \"invalid client id\");\n        return REDISMODULE_OK;\n    }\n\n    unfiltered_clientid = id;\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Filter to protect against Bug #11894 reappearing\n *\n * ensures that the filter is only run the first time through, and not on reprocessing\n */\nvoid CommandFilter_BlmoveSwap(RedisModuleCommandFilterCtx *filter)\n{\n    if (RedisModule_CommandFilterArgsCount(filter) != 6)\n        return;\n\n    RedisModuleString *arg = RedisModule_CommandFilterArgGet(filter, 0);\n    size_t arg_len;\n    const char *arg_str = RedisModule_StringPtrLen(arg, &arg_len);\n\n    if (arg_len != 6 || strncmp(arg_str, 
\"blmove\", 6))\n        return;\n\n    /*\n     * Swapping directional args (right/left) from source and destination.\n     * We must hold the strings here rather than passing the ArgGet results straight\n     * into the ArgReplace calls: replacing one arg frees its string while the other\n     * is still referenced -> use after free.\n     */\n    RedisModuleString *dir1 = RedisModule_HoldString(NULL, RedisModule_CommandFilterArgGet(filter, 3));\n    RedisModuleString *dir2 = RedisModule_HoldString(NULL, RedisModule_CommandFilterArgGet(filter, 4));\n    RedisModule_CommandFilterArgReplace(filter, 3, dir2);\n    RedisModule_CommandFilterArgReplace(filter, 4, dir1);\n}\n\nvoid CommandFilter_CommandFilter(RedisModuleCommandFilterCtx *filter)\n{\n    unsigned long long id = RedisModule_CommandFilterGetClientId(filter);\n    if (id == unfiltered_clientid) return;\n\n    if (in_log_command) return;  /* don't process our own RM_Call() from CommandFilter_LogCommand() */\n\n    /* Fun manipulations:\n     * - Remove @delme\n     * - Replace @replaceme\n     * - Append @insertbefore or @insertafter\n     * - Prefix with Log command if @log encountered\n     */\n    int log = 0;\n    int pos = 0;\n    while (pos < RedisModule_CommandFilterArgsCount(filter)) {\n        const RedisModuleString *arg = RedisModule_CommandFilterArgGet(filter, pos);\n        size_t arg_len;\n        const char *arg_str = RedisModule_StringPtrLen(arg, &arg_len);\n\n        if (arg_len == 6 && !memcmp(arg_str, \"@delme\", 6)) {\n            RedisModule_CommandFilterArgDelete(filter, pos);\n            continue;\n        }\n        if (arg_len == 10 && !memcmp(arg_str, \"@replaceme\", 10)) {\n            RedisModule_CommandFilterArgReplace(filter, pos,\n                    RedisModule_CreateString(NULL, \"--replaced--\", 12));\n        } else if (arg_len == 13 && !memcmp(arg_str, \"@insertbefore\", 13)) {\n            RedisModule_CommandFilterArgInsert(filter, pos,\n                    RedisModule_CreateString(NULL, \"--inserted-before--\", 19));\n            pos++;\n        } else if (arg_len == 12 
&& !memcmp(arg_str, \"@insertafter\", 12)) {\n            RedisModule_CommandFilterArgInsert(filter, pos + 1,\n                    RedisModule_CreateString(NULL, \"--inserted-after--\", 18));\n            pos++;\n        } else if (arg_len == 7 && !memcmp(arg_str, \"@retain\", 7)) {\n            if (retained) RedisModule_FreeString(NULL, retained);\n            retained = RedisModule_CommandFilterArgGet(filter, pos + 1);\n            RedisModule_RetainString(NULL, retained);\n            pos++;\n        } else if (arg_len == 4 && !memcmp(arg_str, \"@log\", 4)) {\n            log = 1;\n        }\n        pos++;\n    }\n\n    if (log) RedisModule_CommandFilterArgInsert(filter, 0,\n            RedisModule_CreateString(NULL, log_command_name, sizeof(log_command_name)-1));\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (RedisModule_Init(ctx,\"commandfilter\",1,REDISMODULE_APIVER_1)\n            == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (argc != 2 && argc != 3) {\n        RedisModule_Log(ctx, \"warning\", \"Log key name not specified\");\n        return REDISMODULE_ERR;\n    }\n\n    long long noself = 0;\n    log_key_name = RedisModule_CreateStringFromString(ctx, argv[0]);\n    RedisModule_StringToLongLong(argv[1], &noself);\n    retained = NULL;\n\n    if (RedisModule_CreateCommand(ctx,log_command_name,\n                CommandFilter_LogCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,ping_command_name,\n                CommandFilter_PingCommand,\"deny-oom\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,retained_command_name,\n                CommandFilter_Retained,\"readonly\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,unregister_command_name,\n                CommandFilter_UnregisterCommand,\"write 
deny-oom\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, unfiltered_clientid_name,\n                CommandFilter_UnfilteredClientId, \"admin\", 1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if ((filter = RedisModule_RegisterCommandFilter(ctx, CommandFilter_CommandFilter, \n                    noself ? REDISMODULE_CMDFILTER_NOSELF : 0))\n            == NULL) return REDISMODULE_ERR;\n\n    if ((filter1 = RedisModule_RegisterCommandFilter(ctx, CommandFilter_BlmoveSwap, 0)) == NULL)\n        return REDISMODULE_ERR;\n\n    if (argc == 3) {\n        const char *ptr = RedisModule_StringPtrLen(argv[2], NULL);\n        if (!strcasecmp(ptr, \"noload\")) {\n            /* This is a hint that we return ERR at the last moment of OnLoad. */\n            RedisModule_FreeString(ctx, log_key_name);\n            if (retained) RedisModule_FreeString(NULL, retained);\n            return REDISMODULE_ERR;\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    RedisModule_FreeString(ctx, log_key_name);\n    if (retained) RedisModule_FreeString(NULL, retained);\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/configaccess.c",
    "content": "#include \"redismodule.h\"\n#include <assert.h>\n#include <string.h>\n\n/* See moduleconfigs.c for registering module configs. We need to register some\n * module configs with our module in order to test the interaction between\n * module configs and the RM_Get/Set*Config APIs. */\nint configaccess_bool;\n\nint getBoolConfigCommand(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(name);\n    return (*(int *)privdata);\n}\n\nint setBoolConfigCommand(const char *name, int new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(name);\n    REDISMODULE_NOT_USED(err);\n    *(int *)privdata = new;\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_GetConfigType */\nint TestGetConfigType_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &len);\n\n    RedisModuleConfigType type;\n    int res = RedisModule_ConfigGetType(config_name, &type);\n    if (res == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"ERR Config does not exist\");\n        return REDISMODULE_ERR;\n    }\n\n    const char *type_str;\n    switch (type) {\n    case REDISMODULE_CONFIG_TYPE_BOOL:\n        type_str = \"bool\";\n        break;\n    case REDISMODULE_CONFIG_TYPE_NUMERIC:\n        type_str = \"numeric\";\n        break;\n    case REDISMODULE_CONFIG_TYPE_STRING:\n        type_str = \"string\";\n        break;\n    case REDISMODULE_CONFIG_TYPE_ENUM:\n        type_str = \"enum\";\n        break;\n    default:\n        assert(0);\n        break;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, type_str);\n    return REDISMODULE_OK;\n}\n\n/* Test command for config iteration */\nint TestConfigIteration_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n\n    if (argc > 2) {\n        return 
RedisModule_WrongArity(ctx);\n    }\n\n    const char *pattern = NULL;\n    if (argc == 2) {\n        pattern = RedisModule_StringPtrLen(argv[1], NULL);\n    }\n\n    RedisModuleConfigIterator *iter = RedisModule_ConfigIteratorCreate(ctx, pattern);\n    if (!iter) {\n        RedisModule_ReplyWithError(ctx, \"ERR Failed to get config iterator\");\n        return REDISMODULE_ERR;\n    }\n\n    /* Start array reply for the configs */\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);\n\n    /* Iterate through the dictionary */\n    const char *config_name = NULL;\n    long count = 0;\n    while ((config_name = RedisModule_ConfigIteratorNext(iter)) != NULL) {\n        RedisModuleString *value = NULL;\n        RedisModule_ConfigGet(ctx, config_name, &value);\n\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModule_ReplyWithStringBuffer(ctx, config_name, strlen(config_name));\n        RedisModule_ReplyWithString(ctx, value);\n\n        RedisModule_FreeString(ctx, value);\n        ++count;\n    }\n    RedisModule_ReplySetArrayLength(ctx, count);\n\n    /* Free the iterator */\n    RedisModule_ConfigIteratorRelease(ctx, iter);\n\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_GetBoolConfig */\nint TestGetBoolConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &len);\n\n    int value;\n    if (RedisModule_ConfigGetBool(ctx, config_name, &value) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"ERR Failed to get bool config\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, value);\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_GetNumericConfig */\nint TestGetNumericConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return 
RedisModule_WrongArity(ctx);\n    }\n\n    size_t len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &len);\n\n    long long value;\n    if (RedisModule_ConfigGetNumeric(ctx, config_name, &value) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"ERR Failed to get numeric config\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, value);\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_GetConfig */\nint TestGetConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &len);\n\n    RedisModuleString *value;\n    if (RedisModule_ConfigGet(ctx, config_name, &value) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"ERR Failed to get string config\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_ReplyWithString(ctx, value);\n    RedisModule_FreeString(ctx,value);\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_GetEnumConfig */\nint TestGetEnumConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &len);\n\n    RedisModuleString *value;\n    if (RedisModule_ConfigGetEnum(ctx, config_name, &value) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"ERR Failed to get enum name config\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_ReplyWithString(ctx, value);\n    RedisModule_Free(value);\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_SetBoolConfig */\nint TestSetBoolConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t name_len, value_len;\n    const char 
*config_name = RedisModule_StringPtrLen(argv[1], &name_len);\n    const char *config_value = RedisModule_StringPtrLen(argv[2], &value_len);\n\n    int bool_value;\n    if (!strcasecmp(config_value, \"yes\")) {\n        bool_value = 1;\n    } else if (!strcasecmp(config_value, \"no\")) {\n        bool_value = 0;\n    } else {\n        bool_value = -1;\n    }\n\n    RedisModuleString *error = NULL;\n    int result = RedisModule_ConfigSetBool(ctx, config_name, bool_value, &error);\n    if (result == REDISMODULE_ERR) {\n        RedisModule_ReplyWithErrorFormat(ctx, \"ERR Failed to set bool config %s: %s\", config_name, RedisModule_StringPtrLen(error, NULL));\n        RedisModule_FreeString(ctx, error);\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_SetNumericConfig */\nint TestSetNumericConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t name_len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &name_len);\n\n    long long value;\n    const char *value_str= RedisModule_StringPtrLen(argv[2], NULL);\n    if (value_str[0] == '-') {\n        if (RedisModule_StringToLongLong(argv[2], &value) != REDISMODULE_OK) {\n            RedisModule_ReplyWithError(ctx, \"ERR Invalid numeric value\");\n            return REDISMODULE_ERR;\n        }\n    } else {\n        if (RedisModule_StringToULongLong(argv[2], (unsigned long long*)&value) != REDISMODULE_OK) {\n            RedisModule_ReplyWithError(ctx, \"ERR Invalid numeric value\");\n            return REDISMODULE_ERR;\n        }\n    }\n\n    RedisModuleString *error = NULL;\n    int result = RedisModule_ConfigSetNumeric(ctx, config_name, value, &error);\n    if (result == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        
RedisModule_ReplyWithErrorFormat(ctx, \"ERR Failed to set numeric config %s: %s\", config_name, RedisModule_StringPtrLen(error, NULL));\n        RedisModule_FreeString(ctx, error);\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_SetConfig */\nint TestSetConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t name_len;\n    const char *config_name = RedisModule_StringPtrLen(argv[1], &name_len);\n\n    RedisModuleString *error = NULL;\n    int result = RedisModule_ConfigSet(ctx, config_name, argv[2], &error);\n    if (result == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithErrorFormat(ctx, \"ERR Failed to set string config %s: %s\", config_name, RedisModule_StringPtrLen(error, NULL));\n        RedisModule_FreeString(ctx, error);\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_SetEnumConfig with name */\nint TestSetEnumConfig_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char *config_name = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleString *error = NULL;\n    int result = RedisModule_ConfigSetEnum(ctx, config_name, argv[2], &error);\n\n    if (result == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithErrorFormat(ctx, \"ERR Failed to set enum config %s: %s\", config_name, RedisModule_StringPtrLen(error, NULL));\n        RedisModule_FreeString(ctx, error);\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if 
(RedisModule_Init(ctx, \"configaccess\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.getconfigs\",\n                                 TestConfigIteration_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.getbool\",\n                                 TestGetBoolConfig_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.getnumeric\",\n                                 TestGetNumericConfig_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.get\",\n                                 TestGetConfig_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.getenum\",\n                                 TestGetEnumConfig_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.setbool\",\n                                 TestSetBoolConfig_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.setnumeric\",\n                                 TestSetNumericConfig_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.set\",\n                                 TestSetConfig_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.setenum\",\n                                 TestSetEnumConfig_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return 
REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"configaccess.getconfigtype\", TestGetConfigType_RedisCommand, \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_RegisterBoolConfig(ctx, \"bool\", 1, REDISMODULE_CONFIG_DEFAULT,\n                                       getBoolConfigCommand, setBoolConfigCommand, NULL, &configaccess_bool) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register configaccess_bool\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_Log(ctx, \"debug\", \"Loading configaccess module configuration\");\n    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to load configaccess module configuration\");\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/crash.c",
    "content": "#include \"redismodule.h\"\n\n#include <strings.h>\n#include <sys/mman.h>\n\n#define UNUSED(V) ((void) V)\n\nvoid assertCrash(RedisModuleInfoCtx *ctx, int for_crash_report) {\n    UNUSED(ctx);\n    UNUSED(for_crash_report);\n    RedisModule_Assert(0);\n}\n\nvoid segfaultCrash(RedisModuleInfoCtx *ctx, int for_crash_report) {\n    UNUSED(ctx);\n    UNUSED(for_crash_report);\n    /* Compiler gives warnings about writing to a random address\n     * e.g \"*((char*)-1) = 'x';\". As a workaround, we map a read-only area\n     * and try to write there to trigger segmentation fault. */\n    char *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);\n    *p = 'x';\n}\n\nint cmd_crash(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(ctx);\n    UNUSED(argv);\n    UNUSED(argc);\n\n    RedisModule_Assert(0);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"modulecrash\",1,REDISMODULE_APIVER_1)\n            == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (argc >= 1) {\n        if (!strcasecmp(RedisModule_StringPtrLen(argv[0], NULL), \"segfault\")) {\n            if (RedisModule_RegisterInfoFunc(ctx, segfaultCrash) == REDISMODULE_ERR) return REDISMODULE_ERR;\n        } else if (!strcasecmp(RedisModule_StringPtrLen(argv[0], NULL),\"assert\")) {\n            if (RedisModule_RegisterInfoFunc(ctx, assertCrash) == REDISMODULE_ERR) return REDISMODULE_ERR;\n        }\n    }\n\n    /* Create modulecrash.xadd command which is similar to xadd command.\n     * It will crash in the command handler to verify we print command tokens\n     * when hide-user-data-from-log config is enabled */\n    RedisModuleCommandInfo info = {\n            .version = REDISMODULE_COMMAND_INFO_VERSION,\n            .arity = -5,\n            .key_specs = (RedisModuleCommandKeySpec[]){\n         
           {\n                            .notes = \"UPDATE instead of INSERT because of the optional trimming feature\",\n                            .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                            .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                            .bs.index.pos = 1,\n                            .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                            .fk.range = {0,1,0}\n                    },\n                    {0}\n            },\n            .args = (RedisModuleCommandArg[]){\n                    {\n                            .name = \"key\",\n                            .type = REDISMODULE_ARG_TYPE_KEY,\n                            .key_spec_index = 0\n                    },\n                    {\n                            .name = \"nomkstream\",\n                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                            .token = \"NOMKSTREAM\",\n                            .since = \"6.2.0\",\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL\n                    },\n                    {\n                            .name = \"trim\",\n                            .type = REDISMODULE_ARG_TYPE_BLOCK,\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                            .subargs = (RedisModuleCommandArg[]){\n                                    {\n                                            .name = \"strategy\",\n                                            .type = REDISMODULE_ARG_TYPE_ONEOF,\n                                            .subargs = (RedisModuleCommandArg[]){\n                                                    {\n                                                            .name = \"maxlen\",\n                                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                                            .token = \"MAXLEN\",\n                  
                                  },\n                                                    {\n                                                            .name = \"minid\",\n                                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                                            .token = \"MINID\",\n                                                            .since = \"6.2.0\",\n                                                    },\n                                                    {0}\n                                            }\n                                    },\n                                    {\n                                            .name = \"operator\",\n                                            .type = REDISMODULE_ARG_TYPE_ONEOF,\n                                            .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                                            .subargs = (RedisModuleCommandArg[]){\n                                                    {\n                                                            .name = \"equal\",\n                                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                                            .token = \"=\"\n                                                    },\n                                                    {\n                                                            .name = \"approximately\",\n                                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                                            .token = \"~\"\n                                                    },\n                                                    {0}\n                                            }\n                                    },\n                                    {\n                                            .name = \"threshold\",\n            
                                .type = REDISMODULE_ARG_TYPE_STRING,\n                                            .display_text = \"threshold\" /* Just for coverage, doesn't have a visible effect */\n                                    },\n                                    {\n                                            .name = \"count\",\n                                            .type = REDISMODULE_ARG_TYPE_INTEGER,\n                                            .token = \"LIMIT\",\n                                            .since = \"6.2.0\",\n                                            .flags = REDISMODULE_CMD_ARG_OPTIONAL\n                                    },\n                                    {0}\n                            }\n                    },\n                    {\n                            .name = \"id-selector\",\n                            .type = REDISMODULE_ARG_TYPE_ONEOF,\n                            .subargs = (RedisModuleCommandArg[]){\n                                    {\n                                            .name = \"auto-id\",\n                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                            .token = \"*\"\n                                    },\n                                    {\n                                            .name = \"id\",\n                                            .type = REDISMODULE_ARG_TYPE_STRING,\n                                    },\n                                    {0}\n                            }\n                    },\n                    {\n                            .name = \"data\",\n                            .type = REDISMODULE_ARG_TYPE_BLOCK,\n                            .flags = REDISMODULE_CMD_ARG_MULTIPLE,\n                            .subargs = (RedisModuleCommandArg[]){\n                                    {\n                                            .name = \"field\",\n                                  
          .type = REDISMODULE_ARG_TYPE_STRING,\n                                    },\n                                    {\n                                            .name = \"value\",\n                                            .type = REDISMODULE_ARG_TYPE_STRING,\n                                    },\n                                    {0}\n                            }\n                    },\n                    {0}\n            }\n    };\n\n    RedisModuleCommand *cmd;\n\n    if (RedisModule_CreateCommand(ctx,\"modulecrash.xadd\", cmd_crash,\"write deny-oom random fast\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    cmd = RedisModule_GetCommand(ctx,\"modulecrash.xadd\");\n    if (RedisModule_SetCommandInfo(cmd, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Create a subcommand: modulecrash.parent subcmd\n     * It will crash in the command handler to verify we print the subcommand name\n     * when the hide-user-data-from-log config is enabled */\n    RedisModuleCommandInfo subcommand_info = {\n            .version = REDISMODULE_COMMAND_INFO_VERSION,\n            .arity = -5,\n            .key_specs = (RedisModuleCommandKeySpec[]){\n                    {\n                            .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                            .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                            .bs.index.pos = 1,\n                            .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                            .fk.range = {0,1,0}\n                    },\n                    {0}\n            },\n            .args = (RedisModuleCommandArg[]){\n                    {\n                            .name = \"key\",\n                            .type = REDISMODULE_ARG_TYPE_KEY,\n                            .key_spec_index = 0\n                    },\n                    {\n                            .name = \"token\",\n                            .type = 
REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                            .token = \"TOKEN\",\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL\n                    },\n                    {\n                            .name = \"data\",\n                            .type = REDISMODULE_ARG_TYPE_BLOCK,\n                            .subargs = (RedisModuleCommandArg[]){\n                                    {\n                                            .name = \"field\",\n                                            .type = REDISMODULE_ARG_TYPE_STRING,\n                                    },\n                                    {\n                                            .name = \"value\",\n                                            .type = REDISMODULE_ARG_TYPE_STRING,\n                                    },\n                                    {0}\n                            }\n                    },\n                    {0}\n            }\n    };\n\n    if (RedisModule_CreateCommand(ctx,\"modulecrash.parent\",NULL,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *parent = RedisModule_GetCommand(ctx,\"modulecrash.parent\");\n\n    if (RedisModule_CreateSubcommand(parent,\"subcmd\",cmd_crash,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    cmd = RedisModule_GetCommand(ctx,\"modulecrash.parent|subcmd\");\n    if (RedisModule_SetCommandInfo(cmd, &subcommand_info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Create modulecrash.zunion command which is similar to zunion command.\n    * It will crash in the command handler to verify we print command tokens\n    * when hide-user-data-from-log config is enabled */\n    RedisModuleCommandInfo zunioninfo = {\n            .version = REDISMODULE_COMMAND_INFO_VERSION,\n            .arity = -5,\n            .key_specs = (RedisModuleCommandKeySpec[]){\n                    {\n                            .flags = REDISMODULE_CMD_KEY_RO,\n       
                     .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                            .bs.index.pos = 1,\n                            .find_keys_type = REDISMODULE_KSPEC_FK_KEYNUM,\n                            .fk.keynum = {0,1,1}\n                    },\n                    {0}\n            },\n            .args = (RedisModuleCommandArg[]){\n                    {\n                            .name = \"numkeys\",\n                            .type = REDISMODULE_ARG_TYPE_INTEGER,\n                    },\n                    {\n                            .name = \"key\",\n                            .type = REDISMODULE_ARG_TYPE_KEY,\n                            .key_spec_index = 0,\n                            .flags = REDISMODULE_CMD_ARG_MULTIPLE\n                    },\n                    {\n                            .name = \"weights\",\n                            .type = REDISMODULE_ARG_TYPE_INTEGER,\n                            .token = \"WEIGHTS\",\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL | REDISMODULE_CMD_ARG_MULTIPLE\n                    },\n                    {\n                            .name = \"aggregate\",\n                            .type = REDISMODULE_ARG_TYPE_ONEOF,\n                            .token = \"AGGREGATE\",\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL,\n                            .subargs = (RedisModuleCommandArg[]){\n                                    {\n                                            .name = \"sum\",\n                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                            .token = \"sum\"\n                                    },\n                                    {\n                                            .name = \"min\",\n                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                            .token = \"min\"\n                          
          },\n                                    {\n                                            .name = \"max\",\n                                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                                            .token = \"max\"\n                                    },\n                                    {0}\n                            }\n                    },\n                    {\n                            .name = \"withscores\",\n                            .type = REDISMODULE_ARG_TYPE_PURE_TOKEN,\n                            .token = \"WITHSCORES\",\n                            .flags = REDISMODULE_CMD_ARG_OPTIONAL\n                    },\n                    {0}\n            }\n    };\n\n    if (RedisModule_CreateCommand(ctx,\"modulecrash.zunion\", cmd_crash,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    cmd = RedisModule_GetCommand(ctx,\"modulecrash.zunion\");\n    if (RedisModule_SetCommandInfo(cmd, &zunioninfo) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/datatype.c",
    "content": "/* This module currently tests a small subset but should be extended in the future\n * for general ModuleDataType coverage.\n */\n\n/* define feature-test macros needed for usleep */\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE\n#include <unistd.h>\n\n#include \"redismodule.h\"\n\nstatic RedisModuleType *datatype = NULL;\nstatic int load_encver = 0;\n\n/* used to test processing events during slow loading */\nstatic volatile int slow_loading = 0;\nstatic volatile int is_in_slow_loading = 0;\n\n#define DATATYPE_ENC_VER 1\n\ntypedef struct {\n    long long intval;\n    RedisModuleString *strval;\n} DataType;\n\nstatic void *datatype_load(RedisModuleIO *io, int encver) {\n    load_encver = encver;\n    int intval = RedisModule_LoadSigned(io);\n    if (RedisModule_IsIOError(io)) return NULL;\n\n    RedisModuleString *strval = RedisModule_LoadString(io);\n    if (RedisModule_IsIOError(io)) return NULL;\n\n    DataType *dt = (DataType *) RedisModule_Alloc(sizeof(DataType));\n    dt->intval = intval;\n    dt->strval = strval;\n\n    if (slow_loading) {\n        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(io);\n        is_in_slow_loading = 1;\n        while (slow_loading) {\n            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS, \"Slow module operation\");\n            usleep(1000);\n        }\n        is_in_slow_loading = 0;\n    }\n\n    return dt;\n}\n\nstatic void datatype_save(RedisModuleIO *io, void *value) {\n    DataType *dt = (DataType *) value;\n    RedisModule_SaveSigned(io, dt->intval);\n    RedisModule_SaveString(io, dt->strval);\n}\n\nstatic void datatype_free(void *value) {\n    if (value) {\n        DataType *dt = (DataType *) value;\n\n        if (dt->strval) RedisModule_FreeString(NULL, dt->strval);\n        RedisModule_Free(dt);\n    }\n}\n\nstatic void *datatype_copy(RedisModuleString *fromkey, RedisModuleString *tokey, const void *value) {\n    const DataType *old = value;\n\n    /* Answers to ultimate questions cannot be copied! 
*/\n    if (old->intval == 42)\n        return NULL;\n\n    DataType *new = (DataType *) RedisModule_Alloc(sizeof(DataType));\n\n    new->intval = old->intval;\n    new->strval = RedisModule_CreateStringFromString(NULL, old->strval);\n\n    /* Breaking the rules here! We return a copy that also includes traces\n     * of fromkey/tokey to confirm we get what we expect.\n     */\n    size_t len;\n    const char *str = RedisModule_StringPtrLen(fromkey, &len);\n    RedisModule_StringAppendBuffer(NULL, new->strval, \"/\", 1);\n    RedisModule_StringAppendBuffer(NULL, new->strval, str, len);\n    RedisModule_StringAppendBuffer(NULL, new->strval, \"/\", 1);\n    str = RedisModule_StringPtrLen(tokey, &len);\n    RedisModule_StringAppendBuffer(NULL, new->strval, str, len);\n\n    return new;\n}\n\nstatic int datatype_set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long intval;\n\n    if (RedisModule_StringToLongLong(argv[2], &intval) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    DataType *dt = RedisModule_Calloc(sizeof(DataType), 1);\n    dt->intval = intval;\n    dt->strval = argv[3];\n    RedisModule_RetainString(ctx, dt->strval);\n\n    RedisModule_ModuleTypeSetValue(key, datatype, dt);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n    return REDISMODULE_OK;\n}\n\nstatic int datatype_restore(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long encver;\n    if (RedisModule_StringToLongLong(argv[3], &encver) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return 
REDISMODULE_OK;\n    }\n\n    DataType *dt = RedisModule_LoadDataTypeFromStringEncver(argv[2], datatype, encver);\n    if (!dt) {\n        RedisModule_ReplyWithError(ctx, \"Invalid data\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    RedisModule_ModuleTypeSetValue(key, datatype, dt);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithLongLong(ctx, load_encver);\n\n    return REDISMODULE_OK;\n}\n\nstatic int datatype_get(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    DataType *dt = RedisModule_ModuleTypeGetValue(key);\n    RedisModule_CloseKey(key);\n\n    if (!dt) {\n        RedisModule_ReplyWithNullArray(ctx);\n    } else {\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModule_ReplyWithLongLong(ctx, dt->intval);\n        RedisModule_ReplyWithString(ctx, dt->strval);\n    }\n    return REDISMODULE_OK;\n}\n\nstatic int datatype_dump(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    DataType *dt = RedisModule_ModuleTypeGetValue(key);\n    RedisModule_CloseKey(key);\n\n    RedisModuleString *reply = RedisModule_SaveDataTypeToString(ctx, dt, datatype);\n    if (!reply) {\n        RedisModule_ReplyWithError(ctx, \"Failed to save\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithString(ctx, reply);\n    RedisModule_FreeString(ctx, reply);\n    return REDISMODULE_OK;\n}\n\nstatic int datatype_swap(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    
RedisModuleKey *a = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    RedisModuleKey *b = RedisModule_OpenKey(ctx, argv[2], REDISMODULE_WRITE);\n    void *val = RedisModule_ModuleTypeGetValue(a);\n\n    int error = (RedisModule_ModuleTypeReplaceValue(b, datatype, val, &val) == REDISMODULE_ERR ||\n                 RedisModule_ModuleTypeReplaceValue(a, datatype, val, NULL) == REDISMODULE_ERR);\n    if (!error)\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    else\n        RedisModule_ReplyWithError(ctx, \"ERR failed\");\n\n    RedisModule_CloseKey(a);\n    RedisModule_CloseKey(b);\n\n    return REDISMODULE_OK;\n}\n\n/* used to enable or disable slow loading */\nstatic int datatype_slow_loading(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long ll;\n    if (RedisModule_StringToLongLong(argv[1], &ll) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return REDISMODULE_OK;\n    }\n    slow_loading = ll;\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* used to test if we reached the slow loading code */\nstatic int datatype_is_in_slow_loading(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, is_in_slow_loading);\n    return REDISMODULE_OK;\n}\n\nint createDataTypeBlockCheck(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    static RedisModuleType *datatype_outside_onload = NULL;\n\n    RedisModuleTypeMethods datatype_methods = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = datatype_load,\n        .rdb_save = datatype_save,\n        .free = 
datatype_free,\n        .copy = datatype_copy\n    };\n\n    datatype_outside_onload = RedisModule_CreateDataType(ctx, \"test_dt_outside_onload\", 1, &datatype_methods);\n\n    /* This validates that it's not possible to create a datatype outside OnLoad,\n     * so we reply with an error if the creation succeeds. */\n    if (datatype_outside_onload == NULL) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithError(ctx, \"UNEXPECTEDOK\");\n    }\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"datatype\",DATATYPE_ENC_VER,REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Creates a command which attempts to create a datatype outside the OnLoad() function. */\n    if (RedisModule_CreateCommand(ctx,\"block.create.datatype.outside.onload\", createDataTypeBlockCheck, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);\n\n    RedisModuleTypeMethods datatype_methods = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = datatype_load,\n        .rdb_save = datatype_save,\n        .free = datatype_free,\n        .copy = datatype_copy\n    };\n\n    datatype = RedisModule_CreateDataType(ctx, \"test___dt\", 1, &datatype_methods);\n    if (datatype == NULL)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"datatype.set\", datatype_set,\n                                  \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"datatype.get\", datatype_get,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"datatype.restore\", datatype_restore,\n                                  \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"datatype.dump\", datatype_dump,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"datatype.swap\", datatype_swap,\n                                  \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"datatype.slow_loading\", datatype_slow_loading,\n                                  \"allow-loading\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"datatype.is_in_slow_loading\", datatype_is_in_slow_loading,\n                                  \"allow-loading\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    if (datatype) {\n        RedisModule_Free(datatype);\n        datatype = NULL;\n    }\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/datatype2.c",
    "content": "/* This module is used to test a use case of a module that stores information\n * about keys in global memory, and relies on the enhanced data type callbacks to\n * get the key name and dbid on various operations.\n *\n * It simulates a simple memory allocator. The smallest allocation unit of\n * the allocator is a mem block with a size of 4KB. Multiple mem blocks are combined\n * using a linked list. These linked lists are placed in a global dict named 'mem_pool'.\n * Each db has a 'mem_pool'. You can use the 'mem.alloc' command to allocate a specified\n * number of mem blocks, and use 'mem.free' to release the memory. Use 'mem.write' and 'mem.read'\n * to write and read the specified mem block (note that each mem block can only be written once).\n * Use 'mem.usage' to get the memory usage of a given db; it returns the total number of\n * mem blocks and the number of used mem blocks in that db.\n * The structure is shown in the following diagram:\n * \n * \n * Global variables of the module:\n * \n *                                           mem blocks link\n *                          ┌─────┬─────┐\n *                          │     │     │    ┌───┐    ┌───┐    ┌───┐\n *                          │ k1  │  ───┼───►│4KB├───►│4KB├───►│4KB│\n *                          │     │     │    └───┘    └───┘    └───┘\n *                          ├─────┼─────┤\n *    ┌───────┐      ┌────► │     │     │    ┌───┐    ┌───┐\n *    │       │      │      │ k2  │  ───┼───►│4KB├───►│4KB│\n *    │ db0   ├──────┘      │     │     │    └───┘    └───┘\n *    │       │             ├─────┼─────┤\n *    ├───────┤             │     │     │    ┌───┐    ┌───┐    ┌───┐\n *    │       │             │ k3  │  ───┼───►│4KB├───►│4KB├───►│4KB│\n *    │ db1   ├──►null      │     │     │    └───┘    └───┘    └───┘\n *    │       │             └─────┴─────┘\n *    ├───────┤                  dict\n *    │       │\n *    │ db2   ├─────────┐\n *    │       │         │\n *    ├───────┤         │   
┌─────┬─────┐\n *    │       │         │   │     │     │    ┌───┐    ┌───┐    ┌───┐\n *    │ db3   ├──►null  │   │ k1  │  ───┼───►│4KB├───►│4KB├───►│4KB│\n *    │       │         │   │     │     │    └───┘    └───┘    └───┘\n *    └───────┘         │   ├─────┼─────┤\n * mem_pool[MAX_DB]     │   │     │     │    ┌───┐    ┌───┐\n *                      └──►│ k2  │  ───┼───►│4KB├───►│4KB│\n *                          │     │     │    └───┘    └───┘\n *                          └─────┴─────┘\n *                               dict\n * \n * \n * Keys in redis database:\n * \n *                                ┌───────┐\n *                                │ size  │\n *                   ┌───────────►│ used  │\n *                   │            │ mask  │\n *     ┌─────┬─────┐ │            └───────┘                                   ┌───────┐\n *     │     │     │ │          MemAllocObject                                │ size  │\n *     │ k1  │  ───┼─┘                                           ┌───────────►│ used  │\n *     │     │     │                                             │            │ mask  │\n *     ├─────┼─────┤              ┌───────┐        ┌─────┬─────┐ │            └───────┘\n *     │     │     │              │ size  │        │     │     │ │          MemAllocObject\n *     │ k2  │  ───┼─────────────►│ used  │        │ k1  │  ───┼─┘\n *     │     │     │              │ mask  │        │     │     │\n *     ├─────┼─────┤              └───────┘        ├─────┼─────┤\n *     │     │     │            MemAllocObject     │     │     │\n *     │ k3  │  ───┼─┐                             │ k2  │  ───┼─┐\n *     │     │     │ │                             │     │     │ │\n *     └─────┴─────┘ │            ┌───────┐        └─────┴─────┘ │            ┌───────┐\n *      redis db[0]  │            │ size  │          redis db[1] │            │ size  │\n *                   └───────────►│ used  │                      └───────────►│ used  │\n *                                │ 
mask  │                                   │ mask  │\n *                                └───────┘                                   └───────┘\n *                              MemAllocObject                              MemAllocObject\n *\n **/\n\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n#include <stdint.h>\n\nstatic RedisModuleType *MemAllocType;\n\n#define MAX_DB 16\nRedisModuleDict *mem_pool[MAX_DB];\ntypedef struct MemAllocObject {\n    long long size;\n    long long used;\n    uint64_t mask;\n} MemAllocObject;\n\nMemAllocObject *createMemAllocObject(void) {\n    MemAllocObject *o = RedisModule_Calloc(1, sizeof(*o));\n    return o;\n}\n\n/*---------------------------- mem block apis ------------------------------------*/\n#define BLOCK_SIZE 4096\nstruct MemBlock {\n    char block[BLOCK_SIZE];\n    struct MemBlock *next;\n};\n\nvoid MemBlockFree(struct MemBlock *head) {\n    if (head) {\n        struct MemBlock *block = head->next, *next;\n        RedisModule_Free(head);\n        while (block) {\n            next = block->next;\n            RedisModule_Free(block);\n            block = next;\n        }\n    }\n}\nstruct MemBlock *MemBlockCreate(long long num) {\n    if (num <= 0) {\n        return NULL;\n    }\n\n    struct MemBlock *head = RedisModule_Calloc(1, sizeof(struct MemBlock));\n    struct MemBlock *block = head;\n    while (--num) {\n        block->next = RedisModule_Calloc(1, sizeof(struct MemBlock));\n        block = block->next;\n    }\n\n    return head;\n}\n\nlong long MemBlockNum(const struct MemBlock *head) {\n    long long num = 0;\n    const struct MemBlock *block = head;\n    while (block) {\n        num++;\n        block = block->next;\n    }\n\n    return num;\n}\n\nsize_t MemBlockWrite(struct MemBlock *head, long long block_index, const char *data, size_t size) {\n    size_t w_size = 0;\n    struct MemBlock *block = head;\n    while (block_index-- && block) {\n        
block = block->next;\n    }\n\n    if (block) {\n        size = size > BLOCK_SIZE ? BLOCK_SIZE:size;\n        memcpy(block->block, data, size);\n        w_size += size;\n    }\n\n    return w_size;\n}\n\nint MemBlockRead(struct MemBlock *head, long long block_index, char *data, size_t size) {\n    size_t r_size = 0;\n    struct MemBlock *block = head;\n    while (block_index-- && block) {\n        block = block->next;\n    }\n\n    if (block) {\n        size = size > BLOCK_SIZE ? BLOCK_SIZE:size;\n        memcpy(data, block->block, size);\n        r_size += size;\n    }\n\n    return r_size;\n}\n\nvoid MemPoolFreeDb(RedisModuleCtx *ctx, int dbid) {\n    RedisModuleString *key;\n    void *tdata;\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(mem_pool[dbid], \"^\", NULL, 0);\n    while((key = RedisModule_DictNext(ctx, iter, &tdata)) != NULL) {\n        MemBlockFree((struct MemBlock *)tdata);\n    }\n    RedisModule_DictIteratorStop(iter);\n    RedisModule_FreeDict(NULL, mem_pool[dbid]);\n    mem_pool[dbid] = RedisModule_CreateDict(NULL);\n}\n\nstruct MemBlock *MemBlockClone(const struct MemBlock *head) {\n    struct MemBlock *newhead = NULL;\n    if (head) {\n        newhead = RedisModule_Calloc(1, sizeof(struct MemBlock));\n        memcpy(newhead->block, head->block, BLOCK_SIZE);\n        struct MemBlock *newblock = newhead;\n        const struct MemBlock *oldblock = head->next;\n        while (oldblock) {\n            newblock->next = RedisModule_Calloc(1, sizeof(struct MemBlock));\n            newblock = newblock->next;\n            memcpy(newblock->block, oldblock->block, BLOCK_SIZE);\n            oldblock = oldblock->next;\n        }\n    }\n\n    return newhead;\n}\n\n/*---------------------------- event handler ------------------------------------*/\nvoid swapDbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(sub);\n\n    
RedisModuleSwapDbInfo *ei = data;\n\n    // swap\n    RedisModuleDict *tmp = mem_pool[ei->dbnum_first];\n    mem_pool[ei->dbnum_first] = mem_pool[ei->dbnum_second];\n    mem_pool[ei->dbnum_second] = tmp;\n}\n\nvoid flushdbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(e);\n    int i;\n    RedisModuleFlushInfo *fi = data;\n\n    RedisModule_AutoMemory(ctx);\n\n    if (sub == REDISMODULE_SUBEVENT_FLUSHDB_START) {\n        if (fi->dbnum != -1) {\n           MemPoolFreeDb(ctx, fi->dbnum);\n        } else {\n            for (i = 0; i < MAX_DB; i++) {\n                MemPoolFreeDb(ctx, i);\n            }\n        }\n    }\n}\n\n/*---------------------------- command implementation ------------------------------------*/\n\n/* MEM.ALLOC key block_num */\nint MemAlloc_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);  \n\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long block_num;\n    if ((RedisModule_StringToLongLong(argv[2], &block_num) != REDISMODULE_OK) || block_num <= 0) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid block_num: must be a value greater than 0\");\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ | REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(key) != MemAllocType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    MemAllocObject *o;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        o = createMemAllocObject();\n        RedisModule_ModuleTypeSetValue(key, MemAllocType, o);\n    } else {\n        o = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    struct MemBlock *mem = MemBlockCreate(block_num);\n    RedisModule_Assert(mem != NULL);\n    
RedisModule_DictSet(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], mem);\n    o->size = block_num;\n    o->used = 0;\n    o->mask = 0;\n\n    RedisModule_ReplyWithLongLong(ctx, block_num);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* MEM.FREE key */\nint MemFree_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);  \n\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(key) != MemAllocType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    int ret = 0;\n    MemAllocObject *o;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        RedisModule_ReplyWithLongLong(ctx, ret);\n        return REDISMODULE_OK;\n    } else {\n        o = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    int nokey;\n    struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], &nokey);\n    if (!nokey && mem) {\n        RedisModule_DictDel(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], NULL);\n        MemBlockFree(mem);\n        o->used = 0;\n        o->size = 0;\n        o->mask = 0;\n        ret = 1;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, ret);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* MEM.WRITE key block_index data */\nint MemWrite_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);  \n\n    if (argc != 4) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long block_index;\n    if ((RedisModule_StringToLongLong(argv[2], &block_index) != REDISMODULE_OK) || block_index < 0) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid block_index: must be a value 
greater than or equal to 0\");\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ | REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(key) != MemAllocType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    MemAllocObject *o;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        return RedisModule_ReplyWithError(ctx, \"ERR Memory has not been allocated\");\n    } else {\n        o = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    if (o->mask & (1UL << block_index)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR block is busy\");\n    }\n\n    int ret = 0;\n    int nokey;\n    struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], &nokey);\n    if (!nokey && mem) {\n        size_t len;\n        const char *buf = RedisModule_StringPtrLen(argv[3], &len);\n        ret = MemBlockWrite(mem, block_index, buf, len);\n        o->mask |= (1UL << block_index);\n        o->used++;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, ret);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* MEM.READ key block_index */\nint MemRead_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long block_index;\n    if ((RedisModule_StringToLongLong(argv[2], &block_index) != REDISMODULE_OK) || block_index < 0) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid block_index: must be a value greater than or equal to 0\");\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(key) != MemAllocType) {\n        return RedisModule_ReplyWithError(ctx, 
REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    MemAllocObject *o;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        return RedisModule_ReplyWithError(ctx, \"ERR Memory has not been allocated\");\n    } else {\n        o = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    if (!(o->mask & (1UL << block_index))) {\n        return RedisModule_ReplyWithNull(ctx);\n    }\n\n    int nokey;\n    struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], &nokey);\n    RedisModule_Assert(nokey == 0 && mem != NULL);\n\n    char buf[BLOCK_SIZE];\n    MemBlockRead(mem, block_index, buf, sizeof(buf));\n\n    /* Assuming that the contents are all C-style strings */\n    RedisModule_ReplyWithStringBuffer(ctx, buf, strlen(buf));\n    return REDISMODULE_OK;\n}\n\n/* MEM.USAGE dbid */\nint MemUsage_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);\n\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long dbid;\n    if ((RedisModule_StringToLongLong(argv[1], &dbid) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid value: must be an integer\");\n    }\n\n    if (dbid < 0 || dbid >= MAX_DB) {\n        return RedisModule_ReplyWithError(ctx, \"ERR dbid out of range\");\n    }\n\n    long long size = 0, used = 0;\n\n    void *data;\n    RedisModuleString *key;\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(mem_pool[dbid], \"^\", NULL, 0);\n    while((key = RedisModule_DictNext(ctx, iter, &data)) != NULL) {\n        int dbbackup = RedisModule_GetSelectedDb(ctx);\n        RedisModule_SelectDb(ctx, dbid);\n        RedisModuleKey *openkey = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n        int type = RedisModule_KeyType(openkey);\n        RedisModule_Assert(type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(openkey) == MemAllocType);\n        
MemAllocObject *o = RedisModule_ModuleTypeGetValue(openkey);\n        used += o->used;\n        size += o->size;\n        RedisModule_CloseKey(openkey);\n        RedisModule_SelectDb(ctx, dbbackup);\n    }\n    RedisModule_DictIteratorStop(iter);\n\n    RedisModule_ReplyWithArray(ctx, 4);\n    RedisModule_ReplyWithSimpleString(ctx, \"total\");\n    RedisModule_ReplyWithLongLong(ctx, size);\n    RedisModule_ReplyWithSimpleString(ctx, \"used\");\n    RedisModule_ReplyWithLongLong(ctx, used);\n    return REDISMODULE_OK;\n}\n\n/* MEM.ALLOCANDWRITE key block_num block_index data block_index data ... */\nint MemAllocAndWrite_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx);  \n\n    if (argc < 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long block_num;\n    if ((RedisModule_StringToLongLong(argv[2], &block_num) != REDISMODULE_OK) || block_num <= 0) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid block_num: must be a value greater than 0\");\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ | REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY && RedisModule_ModuleTypeGetType(key) != MemAllocType) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    MemAllocObject *o;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        o = createMemAllocObject();\n        RedisModule_ModuleTypeSetValue(key, MemAllocType, o);\n    } else {\n        o = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    struct MemBlock *mem = MemBlockCreate(block_num);\n    RedisModule_Assert(mem != NULL);\n    RedisModule_DictSet(mem_pool[RedisModule_GetSelectedDb(ctx)], argv[1], mem);\n    o->used = 0;\n    o->mask = 0;\n    o->size = block_num;\n\n    int i = 3;\n    long long block_index;\n    for (; i < argc; i++) {\n        /* Security is guaranteed internally, so no 
security check. */\n        RedisModule_StringToLongLong(argv[i], &block_index);\n        size_t len;\n        const char * buf = RedisModule_StringPtrLen(argv[i + 1], &len);\n        MemBlockWrite(mem, block_index, buf, len);\n        o->used++;\n        o->mask |= (1UL << block_index);\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/*---------------------------- type callbacks ------------------------------------*/\n\nvoid *MemAllocRdbLoad(RedisModuleIO *rdb, int encver) {\n    if (encver != 0) {\n        return NULL;\n    }\n\n    MemAllocObject *o = createMemAllocObject();\n    o->size = RedisModule_LoadSigned(rdb);\n    o->used = RedisModule_LoadSigned(rdb);\n    o->mask = RedisModule_LoadUnsigned(rdb);\n\n    const RedisModuleString *key = RedisModule_GetKeyNameFromIO(rdb);\n    int dbid = RedisModule_GetDbIdFromIO(rdb);\n\n    if (o->size) {\n        size_t size;\n        char *tmpbuf;\n        long long num = o->size;\n        struct MemBlock *head = RedisModule_Calloc(1, sizeof(struct MemBlock));\n        tmpbuf = RedisModule_LoadStringBuffer(rdb, &size);\n        memcpy(head->block, tmpbuf, size > BLOCK_SIZE ? BLOCK_SIZE:size);\n        RedisModule_Free(tmpbuf);\n        struct MemBlock *block = head;\n        while (--num) {\n            block->next = RedisModule_Calloc(1, sizeof(struct MemBlock));\n            block = block->next;\n\n            tmpbuf = RedisModule_LoadStringBuffer(rdb, &size);\n            memcpy(block->block, tmpbuf, size > BLOCK_SIZE ? 
BLOCK_SIZE:size);\n            RedisModule_Free(tmpbuf);\n        }\n\n        RedisModule_DictSet(mem_pool[dbid], (RedisModuleString *)key, head);\n    }\n     \n    return o;\n}\n\nvoid MemAllocRdbSave(RedisModuleIO *rdb, void *value) {\n    MemAllocObject *o = value;\n    RedisModule_SaveSigned(rdb, o->size);\n    RedisModule_SaveSigned(rdb, o->used);\n    RedisModule_SaveUnsigned(rdb, o->mask);\n\n    const RedisModuleString *key = RedisModule_GetKeyNameFromIO(rdb);\n    int dbid = RedisModule_GetDbIdFromIO(rdb);\n\n    if (o->size) {\n        int nokey;\n        struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[dbid], (RedisModuleString *)key, &nokey);\n        RedisModule_Assert(nokey == 0 && mem != NULL);\n\n        struct MemBlock *block = mem; \n        while (block) {\n            RedisModule_SaveStringBuffer(rdb, block->block, BLOCK_SIZE);\n            block = block->next;\n        }\n    }\n}\n\nvoid MemAllocAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {\n    MemAllocObject *o = (MemAllocObject *)value;\n    if (o->size) {\n        int dbid = RedisModule_GetDbIdFromIO(aof);\n        int nokey;\n        size_t i = 0, j = 0;\n        struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[dbid], (RedisModuleString *)key, &nokey);\n        RedisModule_Assert(nokey == 0 && mem != NULL);\n        size_t array_size = o->size * 2;\n        RedisModuleString ** string_array = RedisModule_Calloc(array_size, sizeof(RedisModuleString *));\n        while (mem) {\n            string_array[i] = RedisModule_CreateStringFromLongLong(NULL, j);\n            string_array[i + 1] = RedisModule_CreateString(NULL, mem->block, BLOCK_SIZE);\n            mem = mem->next;\n            i += 2;\n            j++;\n        }\n        RedisModule_EmitAOF(aof, \"mem.allocandwrite\", \"slv\", key, o->size, string_array, array_size);\n        for (i = 0; i < array_size; i++) {\n            RedisModule_FreeString(NULL, 
string_array[i]);\n        }\n        RedisModule_Free(string_array);\n    } else {\n        RedisModule_EmitAOF(aof, \"mem.allocandwrite\", \"sl\", key, o->size);\n    }\n}\n\nvoid MemAllocFree(void *value) {\n    RedisModule_Free(value);\n}\n\nvoid MemAllocUnlink(RedisModuleString *key, const void *value) {\n    REDISMODULE_NOT_USED(key);\n    REDISMODULE_NOT_USED(value);\n\n    /* When unlink and unlink2 exist at the same time, we will only call unlink2. */\n    RedisModule_Assert(0);\n}\n\nvoid MemAllocUnlink2(RedisModuleKeyOptCtx *ctx, const void *value) {\n    MemAllocObject *o = (MemAllocObject *)value;\n\n    const RedisModuleString *key = RedisModule_GetKeyNameFromOptCtx(ctx);\n    int dbid = RedisModule_GetDbIdFromOptCtx(ctx);\n    \n    if (o->size) {\n        void *oldval;\n        RedisModule_DictDel(mem_pool[dbid], (RedisModuleString *)key, &oldval);\n        RedisModule_Assert(oldval != NULL);\n        MemBlockFree((struct MemBlock *)oldval);\n    }\n}\n\nvoid MemAllocDigest(RedisModuleDigest *md, void *value) {\n    MemAllocObject *o = (MemAllocObject *)value;\n    RedisModule_DigestAddLongLong(md, o->size);\n    RedisModule_DigestAddLongLong(md, o->used);\n    RedisModule_DigestAddLongLong(md, o->mask);\n\n    int dbid = RedisModule_GetDbIdFromDigest(md);\n    const RedisModuleString *key = RedisModule_GetKeyNameFromDigest(md);\n    \n    if (o->size) {\n        int nokey;\n        struct MemBlock *mem = (struct MemBlock *)RedisModule_DictGet(mem_pool[dbid], (RedisModuleString *)key, &nokey);\n        RedisModule_Assert(nokey == 0 && mem != NULL);\n\n        struct MemBlock *block = mem;\n        while (block) {\n            RedisModule_DigestAddStringBuffer(md, (const char *)block->block, BLOCK_SIZE);\n            block = block->next;\n        }\n    }\n}\n\nvoid *MemAllocCopy2(RedisModuleKeyOptCtx *ctx, const void *value) {\n    const MemAllocObject *old = value;\n    MemAllocObject *new = createMemAllocObject();\n    new->size = old->size;\n    
new->used = old->used;\n    new->mask = old->mask;\n\n    int from_dbid = RedisModule_GetDbIdFromOptCtx(ctx);\n    int to_dbid = RedisModule_GetToDbIdFromOptCtx(ctx);\n    const RedisModuleString *fromkey = RedisModule_GetKeyNameFromOptCtx(ctx);\n    const RedisModuleString *tokey = RedisModule_GetToKeyNameFromOptCtx(ctx);\n\n    if (old->size) {\n        int nokey;\n        struct MemBlock *oldmem = (struct MemBlock *)RedisModule_DictGet(mem_pool[from_dbid], (RedisModuleString *)fromkey, &nokey);\n        RedisModule_Assert(nokey == 0 && oldmem != NULL);\n        struct MemBlock *newmem = MemBlockClone(oldmem);\n        RedisModule_Assert(newmem != NULL);\n        RedisModule_DictSet(mem_pool[to_dbid], (RedisModuleString *)tokey, newmem);\n    }   \n\n    return new;\n}\n\nsize_t MemAllocMemUsage2(RedisModuleKeyOptCtx *ctx, const void *value, size_t sample_size) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(sample_size);\n    uint64_t size = 0;\n    MemAllocObject *o = (MemAllocObject *)value;\n\n    size += sizeof(*o);\n    size += o->size * sizeof(struct MemBlock);\n\n    return size;\n}\n\nsize_t MemAllocMemFreeEffort2(RedisModuleKeyOptCtx *ctx, const void *value) {\n    REDISMODULE_NOT_USED(ctx);\n    MemAllocObject *o = (MemAllocObject *)value;\n    return o->size;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"datatype2\", 1,REDISMODULE_APIVER_1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = MemAllocRdbLoad,\n        .rdb_save = MemAllocRdbSave,\n        .aof_rewrite = MemAllocAofRewrite,\n        .free = MemAllocFree,\n        .digest = MemAllocDigest,\n        .unlink = MemAllocUnlink,\n        // .defrag = MemAllocDefrag, // Tested in defragtest.c\n        .unlink2 = 
MemAllocUnlink2,\n        .copy2 = MemAllocCopy2,\n        .mem_usage2 = MemAllocMemUsage2,\n        .free_effort2 = MemAllocMemFreeEffort2,\n    };\n\n    MemAllocType = RedisModule_CreateDataType(ctx, \"mem_alloc\", 0, &tm);\n    if (MemAllocType == NULL) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"mem.alloc\", MemAlloc_RedisCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"mem.free\", MemFree_RedisCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"mem.write\", MemWrite_RedisCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"mem.read\", MemRead_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"mem.usage\", MemUsage_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    /* used for internal aof rewrite */\n    if (RedisModule_CreateCommand(ctx, \"mem.allocandwrite\", MemAllocAndWrite_RedisCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    for(int i = 0; i < MAX_DB; i++){\n        mem_pool[i] = RedisModule_CreateDict(NULL);\n    }\n\n    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_FlushDB, flushdbCallback);\n    RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_SwapDB, swapDbCallback);\n  \n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/defragtest.c",
    "content": "/* A module that implements defrag callback mechanisms.\n */\n\n#include \"redismodule.h\"\n#include <stdlib.h>\n#include <string.h>\n\n#define UNUSED(V) ((void) V)\n\nstatic RedisModuleType *FragType;\n\nstruct FragObject {\n    unsigned long len;\n    void **values;\n    int maxstep;\n};\n\n/* Make sure we get the expected cursor */\nunsigned long int last_set_cursor = 0;\n\nunsigned long int datatype_attempts = 0;\nunsigned long int datatype_defragged = 0;\nunsigned long int datatype_raw_defragged = 0;\nunsigned long int datatype_resumes = 0;\nunsigned long int datatype_wrong_cursor = 0;\nunsigned long int defrag_started = 0;\nunsigned long int defrag_ended = 0;\nunsigned long int global_strings_attempts = 0;\nunsigned long int global_strings_defragged = 0;\nunsigned long int global_dicts_resumes = 0;  /* Number of dict defragmentation resumed from a previous break */\nunsigned long int global_subdicts_resumes = 0;  /* Number of subdict defragmentation resumed from a previous break */\nunsigned long int global_dicts_attempts = 0; /* Number of attempts to defragment dictionary */\nunsigned long int global_dicts_defragged = 0; /* Number of dictionaries successfully defragmented */\nunsigned long int global_dicts_items_defragged = 0; /* Number of dictionaries items successfully defragmented */\n\nunsigned long global_strings_len = 0;\nRedisModuleString **global_strings = NULL;\n\nunsigned long global_dicts_len = 0;\nRedisModuleDict **global_dicts = NULL;\n\nstatic void createGlobalStrings(RedisModuleCtx *ctx, unsigned long count)\n{\n    global_strings_len = count;\n    global_strings = RedisModule_Alloc(sizeof(RedisModuleString *) * count);\n\n    for (unsigned long i = 0; i < count; i++) {\n        global_strings[i] = RedisModule_CreateStringFromLongLong(ctx, i);\n    }\n}\n\nstatic int defragGlobalStrings(RedisModuleDefragCtx *ctx)\n{\n    unsigned long cursor = 0;\n    RedisModule_DefragCursorGet(ctx, &cursor);\n\n    if (!global_strings_len) 
return 0; /* strings is empty. */\n    RedisModule_Assert(cursor < global_strings_len);\n    for (; cursor < global_strings_len; cursor++) {\n        RedisModuleString *str = global_strings[cursor];\n        if (!str) continue;\n        RedisModuleString *new = RedisModule_DefragRedisModuleString(ctx, str);\n        global_strings_attempts++;\n        if (new != NULL) {\n            global_strings[cursor] = new;\n            global_strings_defragged++;\n        }\n\n        if (RedisModule_DefragShouldStop(ctx)) {\n            RedisModule_DefragCursorSet(ctx, cursor);\n            return 1;\n        }\n    }\n    return 0;\n}\n\nstatic void createFragGlobalStrings(RedisModuleCtx *ctx) {\n    for (unsigned long i = 0; i < global_strings_len; i++) {\n        if (i % 2 == 1) {\n            RedisModule_FreeString(ctx, global_strings[i]);\n            global_strings[i] = NULL;\n        }\n    }\n}\n\nstatic void createGlobalDicts(RedisModuleCtx *ctx, unsigned long count) {\n    global_dicts_len = count;\n    global_dicts = RedisModule_Alloc(sizeof(RedisModuleDict *) * count);\n\n    /* Create some nested dictionaries:\n     * - Each main dict contains some subdicts.\n     * - Each sub-dict contains some strings. */\n    for (unsigned long i = 0; i < count; i++) {\n        RedisModuleDict *dict = RedisModule_CreateDict(ctx);\n        for (unsigned long j = 0; j < 10; j++) {\n            /* Create sub dict. 
*/\n            RedisModuleDict *subdict = RedisModule_CreateDict(ctx);\n            for (unsigned long k = 0; k < 10; k++) {\n                RedisModuleString *str = RedisModule_CreateStringFromULongLong(ctx, k);\n                RedisModule_DictSet(subdict, str, str);\n            }\n\n            RedisModuleString *key = RedisModule_CreateStringFromULongLong(ctx, j);\n            RedisModule_DictSet(dict, key, subdict);\n            RedisModule_FreeString(ctx, key);\n        }\n        global_dicts[i] = dict;\n    }\n}\n\nstatic void freeFragGlobalSubDict(RedisModuleCtx *ctx, RedisModuleDict *subdict) {\n    char *key;\n    size_t keylen;\n    RedisModuleString *str;\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(subdict, \"^\", NULL, 0);\n    while ((key = RedisModule_DictNextC(iter, &keylen, (void**)&str))) {\n        RedisModule_FreeString(ctx, str);\n    }\n    RedisModule_FreeDict(ctx, subdict);\n    RedisModule_DictIteratorStop(iter);\n}\n\nstatic void createFragGlobalDicts(RedisModuleCtx *ctx) {\n    char *key;\n    size_t keylen;\n    RedisModuleDict *subdict;\n\n    for (unsigned long i = 0; i < global_dicts_len; i++) {\n        RedisModuleDict *dict = global_dicts[i];\n        if (!dict) continue;\n\n        /* Handle dictionaries differently based on their index in global_dicts array:\n         * 1. For odd indices (i % 2 == 1): Remove the entire dictionary.\n         * 2. For even indices: Keep the dictionary but remove half of its items. 
*/\n        if (i % 2 == 1) {\n            RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(dict, \"^\", NULL, 0);\n            while ((key = RedisModule_DictNextC(iter, &keylen, (void**)&subdict))) {\n                freeFragGlobalSubDict(ctx, subdict);\n            }\n            RedisModule_FreeDict(ctx, dict);\n            global_dicts[i] = NULL;\n            RedisModule_DictIteratorStop(iter);\n        } else {\n            int key_index = 0;\n            RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(dict, \"^\", NULL, 0);\n            while ((key = RedisModule_DictNextC(iter, &keylen, (void**)&subdict))) {\n                if (key_index++ % 2 == 1) {\n                    freeFragGlobalSubDict(ctx, subdict);\n                    RedisModule_DictReplaceC(dict, key, keylen, NULL);\n                }\n            }\n            RedisModule_DictIteratorStop(iter);\n        }\n    }\n}\n\nstatic int defragGlobalSubDictValueCB(RedisModuleDefragCtx *ctx, void *data, unsigned char *key, size_t keylen, void **newptr) {\n    REDISMODULE_NOT_USED(key);\n    REDISMODULE_NOT_USED(keylen);\n    if (!data) return 0;\n    *newptr = RedisModule_DefragAlloc(ctx, data);\n    return 0;\n}\n\nstatic int defragGlobalDictValueCB(RedisModuleDefragCtx *ctx, void *data, unsigned char *key, size_t keylen, void **newptr) {\n    REDISMODULE_NOT_USED(key);\n    REDISMODULE_NOT_USED(keylen);\n    static RedisModuleString *seekTo = NULL;\n    RedisModuleDict *subdict = data;\n    if (!subdict) return 0;\n    if (seekTo != NULL) global_subdicts_resumes++;\n\n    *newptr = RedisModule_DefragRedisModuleDict(ctx, subdict, defragGlobalSubDictValueCB, &seekTo);\n    if (*newptr) global_dicts_items_defragged++;\n    /* Return 1 if seekTo is not NULL, indicating this node needs more defrag work. 
*/\n    return seekTo != NULL;\n}\n\nstatic int defragGlobalDicts(RedisModuleDefragCtx *ctx) {\n    static RedisModuleString *seekTo = NULL;\n    static unsigned long dict_index = 0;\n    unsigned long cursor = 0;\n\n    RedisModule_DefragCursorGet(ctx, &cursor);\n    if (cursor == 0) { /* Start a new defrag. */\n        if (seekTo) {\n            RedisModule_FreeString(NULL, seekTo);\n            seekTo = NULL;\n        }\n        dict_index = 0;\n    } else {\n        global_dicts_resumes++;\n    }\n\n    if (!global_dicts_len) return 0; /* dicts is empty. */\n    RedisModule_Assert(dict_index < global_dicts_len);\n    for (; dict_index < global_dicts_len; dict_index++) {\n        RedisModuleDict *dict = global_dicts[dict_index];\n        if (!dict) continue;\n        RedisModuleDict *new = RedisModule_DefragRedisModuleDict(ctx, dict, defragGlobalDictValueCB, &seekTo);\n        global_dicts_attempts++;\n        if (new != NULL) {\n            global_dicts[dict_index] = new;\n            global_dicts_defragged++;\n        }\n\n        if (seekTo != NULL) {\n            /* Set cursor to 1 to indicate defragmentation is not finished. */\n            RedisModule_DefragCursorSet(ctx, 1);\n            return 1;\n        }\n    }\n\n    /* Set cursor to 0 to indicate completion. */\n    dict_index = 0;\n    RedisModule_DefragCursorSet(ctx, 0);\n    return 0;\n}\n\ntypedef enum { DEFRAG_NOT_START, DEFRAG_STRING, DEFRAG_DICT } defrag_module_stage;\nstatic int defragGlobal(RedisModuleDefragCtx *ctx) {\n    static defrag_module_stage stage = DEFRAG_NOT_START;\n    if (stage == DEFRAG_NOT_START) {\n        stage = DEFRAG_STRING; /* Start a new global defrag. 
*/\n    }\n\n    if (stage == DEFRAG_STRING) {\n        if (defragGlobalStrings(ctx) != 0) return 1;\n        stage = DEFRAG_DICT;\n    }\n    if (stage == DEFRAG_DICT) {\n        if (defragGlobalDicts(ctx) != 0) return 1;\n        stage = DEFRAG_NOT_START;\n    }\n    return 0;\n}\n\nstatic void defragStart(RedisModuleDefragCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    defrag_started++;\n}\n\nstatic void defragEnd(RedisModuleDefragCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    defrag_ended++;\n}\n\nstatic void FragInfo(RedisModuleInfoCtx *ctx, int for_crash_report) {\n    REDISMODULE_NOT_USED(for_crash_report);\n\n    RedisModule_InfoAddSection(ctx, \"stats\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"datatype_attempts\", datatype_attempts);\n    RedisModule_InfoAddFieldLongLong(ctx, \"datatype_defragged\", datatype_defragged);\n    RedisModule_InfoAddFieldLongLong(ctx, \"datatype_raw_defragged\", datatype_raw_defragged);\n    RedisModule_InfoAddFieldLongLong(ctx, \"datatype_resumes\", datatype_resumes);\n    RedisModule_InfoAddFieldLongLong(ctx, \"datatype_wrong_cursor\", datatype_wrong_cursor);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_strings_attempts\", global_strings_attempts);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_strings_defragged\", global_strings_defragged);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_dicts_resumes\", global_dicts_resumes);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_subdicts_resumes\", global_subdicts_resumes);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_dicts_attempts\", global_dicts_attempts);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_dicts_defragged\", global_dicts_defragged);\n    RedisModule_InfoAddFieldLongLong(ctx, \"global_dicts_items_defragged\", global_dicts_items_defragged);\n    RedisModule_InfoAddFieldLongLong(ctx, \"defrag_started\", defrag_started);\n    RedisModule_InfoAddFieldLongLong(ctx, \"defrag_ended\", defrag_ended);\n}\n\nstruct FragObject 
*createFragObject(unsigned long len, unsigned long size, int maxstep) {\n    struct FragObject *o = RedisModule_Alloc(sizeof(*o));\n    o->len = len;\n    o->values = RedisModule_Alloc(sizeof(RedisModuleString*) * len);\n    o->maxstep = maxstep;\n\n    for (unsigned long i = 0; i < len; i++) {\n        o->values[i] = RedisModule_Calloc(1, size);\n    }\n\n    return o;\n}\n\n/* FRAG.RESETSTATS */\nstatic int fragResetStatsCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    datatype_attempts = 0;\n    datatype_defragged = 0;\n    datatype_raw_defragged = 0;\n    datatype_resumes = 0;\n    datatype_wrong_cursor = 0;\n    global_strings_attempts = 0;\n    global_strings_defragged = 0;\n    global_dicts_resumes = 0;\n    global_subdicts_resumes = 0;\n    global_dicts_attempts = 0;\n    global_dicts_defragged = 0;\n    global_dicts_items_defragged = 0;\n    defrag_started = 0;\n    defrag_ended = 0;\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* FRAG.CREATE key len size maxstep */\nstatic int fragCreateCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 5)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n                                              REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY)\n    {\n        return RedisModule_ReplyWithError(ctx, \"ERR key exists\");\n    }\n\n    long long len;\n    if ((RedisModule_StringToLongLong(argv[2], &len) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid len\");\n    }\n\n    long long size;\n    if ((RedisModule_StringToLongLong(argv[3], &size) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid size\");\n    }\n\n    long long maxstep;\n    if 
((RedisModule_StringToLongLong(argv[4], &maxstep) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid maxstep\");\n    }\n\n    struct FragObject *o = createFragObject(len, size, maxstep);\n    RedisModule_ModuleTypeSetValue(key, FragType, o);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    RedisModule_CloseKey(key);\n\n    return REDISMODULE_OK;\n}\n\n/* FRAG.create_frag_global len */\nstatic int fragCreateGlobalCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    long long glen;\n    if ((RedisModule_StringToLongLong(argv[1], &glen) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid len\");\n    }\n\n    createGlobalStrings(ctx, glen);\n    createGlobalDicts(ctx, glen);\n    createFragGlobalStrings(ctx);\n    createFragGlobalDicts(ctx);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nvoid FragFree(void *value) {\n    struct FragObject *o = value;\n\n    for (unsigned long i = 0; i < o->len; i++)\n        RedisModule_Free(o->values[i]);\n    RedisModule_Free(o->values);\n    RedisModule_Free(o);\n}\n\nsize_t FragFreeEffort(RedisModuleString *key, const void *value) {\n    REDISMODULE_NOT_USED(key);\n\n    const struct FragObject *o = value;\n    return o->len;\n}\n\nint FragDefrag(RedisModuleDefragCtx *ctx, RedisModuleString *key, void **value) {\n    unsigned long i = 0;\n    int steps = 0;\n\n    int dbid = RedisModule_GetDbIdFromDefragCtx(ctx);\n    RedisModule_Assert(dbid != -1);\n\n    RedisModule_Log(NULL, \"notice\", \"Defrag key: %s\", RedisModule_StringPtrLen(key, NULL));\n\n    /* Attempt to get cursor, validate it's what we're expecting */\n    if (RedisModule_DefragCursorGet(ctx, &i) == REDISMODULE_OK) {\n        if (i > 0) datatype_resumes++;\n\n        /* Validate we're expecting this cursor */\n        if (i != last_set_cursor) 
datatype_wrong_cursor++;\n    } else {\n        if (last_set_cursor != 0) datatype_wrong_cursor++;\n    }\n\n    /* Attempt to defrag the object itself */\n    datatype_attempts++;\n    struct FragObject *o = RedisModule_DefragAlloc(ctx, *value);\n    if (o == NULL) {\n        /* Not defragged */\n        o = *value;\n    } else {\n        /* Defragged */\n        *value = o;\n        datatype_defragged++;\n    }\n\n    /* Deep defrag now */\n    for (; i < o->len; i++) {\n        datatype_attempts++;\n        void *new = RedisModule_DefragAlloc(ctx, o->values[i]);\n        if (new) {\n            o->values[i] = new;\n            datatype_defragged++;\n        }\n\n        if ((o->maxstep && ++steps > o->maxstep) ||\n            ((i % 64 == 0) && RedisModule_DefragShouldStop(ctx)))\n        {\n            RedisModule_DefragCursorSet(ctx, i);\n            last_set_cursor = i;\n            return 1;\n        }\n    }\n\n    /* Defrag the values array itself using RedisModule_DefragAllocRaw\n     * and RedisModule_DefragFreeRaw for testing purposes. 
*/\n    void *new_values = RedisModule_DefragAllocRaw(ctx, o->len * sizeof(void*));\n    memcpy(new_values, o->values, o->len * sizeof(void*));\n    RedisModule_DefragFreeRaw(ctx, o->values);\n    o->values = new_values;\n    datatype_raw_defragged++;\n\n    last_set_cursor = 0;\n    return 0;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"defragtest\", 1, REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_GetTypeMethodVersion() < REDISMODULE_TYPE_METHOD_VERSION) {\n        return REDISMODULE_ERR;\n    }\n\n    RedisModuleTypeMethods tm = {\n            .version = REDISMODULE_TYPE_METHOD_VERSION,\n            .free = FragFree,\n            .free_effort = FragFreeEffort,\n            .defrag = FragDefrag\n    };\n\n    FragType = RedisModule_CreateDataType(ctx, \"frag_type\", 0, &tm);\n    if (FragType == NULL) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"frag.create\",\n                                  fragCreateCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"frag.create_frag_global\",\n        fragCreateGlobalCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR)\n    return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"frag.resetstats\",\n                                  fragResetStatsCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModule_RegisterInfoFunc(ctx, FragInfo);\n    RedisModule_RegisterDefragFunc2(ctx, defragGlobal);\n    RedisModule_RegisterDefragCallbacks(ctx, defragStart, defragEnd);\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/eventloop.c",
    "content": "/* This module contains four tests :\n * 1- test.sanity    : Basic tests for argument validation mostly.\n * 2- test.sendbytes : Creates a pipe and registers its fds to the event loop,\n *                     one end of the pipe for read events and the other end for\n *                     the write events. On writable event, data is written. On\n *                     readable event data is read. Repeated until all data is\n *                     received.\n * 3- test.iteration : A test for BEFORE_SLEEP and AFTER_SLEEP callbacks.\n *                     Counters are incremented each time these events are\n *                     fired. They should be equal and increment monotonically.\n * 4- test.oneshot   : Test for oneshot API\n */\n\n#include \"redismodule.h\"\n#include <stdlib.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <memory.h>\n#include <errno.h>\n\nint fds[2];\nlong long buf_size;\nchar *src;\nlong long src_offset;\nchar *dst;\nlong long dst_offset;\n\nRedisModuleBlockedClient *bc;\nRedisModuleCtx *reply_ctx;\n\nvoid onReadable(int fd, void *user_data, int mask) {\n    REDISMODULE_NOT_USED(mask);\n\n    RedisModule_Assert(strcmp(user_data, \"userdataread\") == 0);\n\n    while (1) {\n        int rd = read(fd, dst + dst_offset, buf_size - dst_offset);\n        if (rd <= 0)\n            return;\n        dst_offset += rd;\n\n        /* Received all bytes */\n        if (dst_offset == buf_size) {\n            if (memcmp(src, dst, buf_size) == 0)\n                RedisModule_ReplyWithSimpleString(reply_ctx, \"OK\");\n            else\n                RedisModule_ReplyWithError(reply_ctx, \"ERR bytes mismatch\");\n\n            RedisModule_EventLoopDel(fds[0], REDISMODULE_EVENTLOOP_READABLE);\n            RedisModule_EventLoopDel(fds[1], REDISMODULE_EVENTLOOP_WRITABLE);\n            RedisModule_Free(src);\n            RedisModule_Free(dst);\n            close(fds[0]);\n            close(fds[1]);\n\n            
RedisModule_FreeThreadSafeContext(reply_ctx);\n            RedisModule_UnblockClient(bc, NULL);\n            return;\n        }\n    };\n}\n\nvoid onWritable(int fd, void *user_data, int mask) {\n    REDISMODULE_NOT_USED(user_data);\n    REDISMODULE_NOT_USED(mask);\n\n    RedisModule_Assert(strcmp(user_data, \"userdatawrite\") == 0);\n\n    while (1) {\n        /* Check if we sent all data */\n        if (src_offset >= buf_size)\n            return;\n        int written = write(fd, src + src_offset, buf_size - src_offset);\n        if (written <= 0) {\n            return;\n        }\n\n        src_offset += written;\n    };\n}\n\n/* Create a pipe(), register pipe fds to the event loop and send/receive data\n * using them. */\nint sendbytes(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    if (RedisModule_StringToLongLong(argv[1], &buf_size) != REDISMODULE_OK ||\n        buf_size == 0) {\n        RedisModule_ReplyWithError(ctx, \"Invalid integer value\");\n        return REDISMODULE_OK;\n    }\n\n    bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n    reply_ctx = RedisModule_GetThreadSafeContext(bc);\n\n    /* Allocate source buffer and write some random data */\n    src = RedisModule_Calloc(1,buf_size);\n    src_offset = 0;\n    memset(src, rand() % 0xFF, buf_size);\n    memcpy(src, \"randomtestdata\", strlen(\"randomtestdata\"));\n\n    dst = RedisModule_Calloc(1,buf_size);\n    dst_offset = 0;\n\n    /* Create a pipe and register it to the event loop. 
*/\n    if (pipe(fds) < 0) return REDISMODULE_ERR;\n    if (fcntl(fds[0], F_SETFL, O_NONBLOCK) < 0) return REDISMODULE_ERR;\n    if (fcntl(fds[1], F_SETFL, O_NONBLOCK) < 0) return REDISMODULE_ERR;\n\n    if (RedisModule_EventLoopAdd(fds[0], REDISMODULE_EVENTLOOP_READABLE,\n        onReadable, \"userdataread\") != REDISMODULE_OK) return REDISMODULE_ERR;\n    if (RedisModule_EventLoopAdd(fds[1], REDISMODULE_EVENTLOOP_WRITABLE,\n        onWritable, \"userdatawrite\") != REDISMODULE_OK) return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n\nint sanity(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (pipe(fds) < 0) return REDISMODULE_ERR;\n\n    if (RedisModule_EventLoopAdd(fds[0], 9999999, onReadable, NULL)\n        == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, \"ERR non-existing event type should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(-1, REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL)\n        == REDISMODULE_OK || errno != ERANGE) {\n        RedisModule_ReplyWithError(ctx, \"ERR out of range fd should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(99999999, REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL)\n        == REDISMODULE_OK || errno != ERANGE) {\n        RedisModule_ReplyWithError(ctx, \"ERR out of range fd should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(fds[0], REDISMODULE_EVENTLOOP_READABLE, NULL, NULL)\n        == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, \"ERR null callback should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(fds[0], 9999999, onReadable, NULL)\n        == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, \"ERR non-existing event type should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopDel(fds[0], REDISMODULE_EVENTLOOP_READABLE)\n 
       != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR del on non-registered fd should not fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopDel(fds[0], 9999999) == REDISMODULE_OK ||\n        errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, \"ERR non-existing event type should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopDel(-1, REDISMODULE_EVENTLOOP_READABLE)\n        == REDISMODULE_OK || errno != ERANGE) {\n        RedisModule_ReplyWithError(ctx, \"ERR out of range fd should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopDel(99999999, REDISMODULE_EVENTLOOP_READABLE)\n        == REDISMODULE_OK || errno != ERANGE) {\n        RedisModule_ReplyWithError(ctx, \"ERR out of range fd should fail\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(fds[0], REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL)\n        != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR Add failed\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAdd(fds[0], REDISMODULE_EVENTLOOP_READABLE, onReadable, NULL)\n        != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR Adding same fd twice failed\");\n        goto out;\n    }\n    if (RedisModule_EventLoopDel(fds[0], REDISMODULE_EVENTLOOP_READABLE)\n        != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR Del failed\");\n        goto out;\n    }\n    if (RedisModule_EventLoopAddOneShot(NULL, NULL) == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, \"ERR null callback should fail\");\n        goto out;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\nout:\n    close(fds[0]);\n    close(fds[1]);\n    return REDISMODULE_OK;\n}\n\nstatic long long beforeSleepCount;\nstatic long long afterSleepCount;\n\nint iteration(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    
REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    /* On each event loop iteration, eventloopCallback() is called. We increment\n     * beforeSleepCount and afterSleepCount, so these two should be equal.\n     * We reply with iteration count, caller can test if iteration count\n     * increments monotonically */\n    RedisModule_Assert(beforeSleepCount == afterSleepCount);\n    RedisModule_ReplyWithLongLong(ctx, beforeSleepCount);\n    return REDISMODULE_OK;\n}\n\nvoid oneshotCallback(void* arg)\n{\n    RedisModule_Assert(strcmp(arg, \"userdata\") == 0);\n    RedisModule_ReplyWithSimpleString(reply_ctx, \"OK\");\n    RedisModule_FreeThreadSafeContext(reply_ctx);\n    RedisModule_UnblockClient(bc, NULL);\n}\n\nint oneshot(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n    reply_ctx = RedisModule_GetThreadSafeContext(bc);\n\n    if (RedisModule_EventLoopAddOneShot(oneshotCallback, \"userdata\") != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR oneshot failed\");\n        RedisModule_FreeThreadSafeContext(reply_ctx);\n        RedisModule_UnblockClient(bc, NULL);\n    }\n    return REDISMODULE_OK;\n}\n\nvoid eventloopCallback(struct RedisModuleCtx *ctx, RedisModuleEvent eid, uint64_t subevent, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(eid);\n    REDISMODULE_NOT_USED(subevent);\n    REDISMODULE_NOT_USED(data);\n\n    RedisModule_Assert(eid.id == REDISMODULE_EVENT_EVENTLOOP);\n    if (subevent == REDISMODULE_SUBEVENT_EVENTLOOP_BEFORE_SLEEP)\n        beforeSleepCount++;\n    else if (subevent == REDISMODULE_SUBEVENT_EVENTLOOP_AFTER_SLEEP)\n        afterSleepCount++;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if 
(RedisModule_Init(ctx,\"eventloop\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Test basics. */\n    if (RedisModule_CreateCommand(ctx, \"test.sanity\", sanity, \"\", 0, 0, 0)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Register a command to create a pipe() and send data through it by using\n     * event loop API. */\n    if (RedisModule_CreateCommand(ctx, \"test.sendbytes\", sendbytes, \"\", 0, 0, 0)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Register a command to return event loop iteration count. */\n    if (RedisModule_CreateCommand(ctx, \"test.iteration\", iteration, \"\", 0, 0, 0)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"test.oneshot\", oneshot, \"\", 0, 0, 0)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_EventLoop,\n        eventloopCallback) != REDISMODULE_OK) return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/fork.c",
    "content": "\n/* define macros for having usleep */\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE\n\n#include \"redismodule.h\"\n#include <string.h>\n#include <assert.h>\n#include <unistd.h>\n\n#define UNUSED(V) ((void) V)\n\nint child_pid = -1;\nint exitted_with_code = -1;\n\nvoid done_handler(int exitcode, int bysignal, void *user_data) {\n    child_pid = -1;\n    exitted_with_code = exitcode;\n    assert(user_data==(void*)0xdeadbeef);\n    UNUSED(bysignal);\n}\n\nint fork_create(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    long long code_to_exit_with;\n    long long usleep_us;\n    if (argc != 3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    if(!RMAPI_FUNC_SUPPORTED(RedisModule_Fork)){\n        RedisModule_ReplyWithError(ctx, \"Fork api is not supported in the current redis version\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_StringToLongLong(argv[1], &code_to_exit_with);\n    RedisModule_StringToLongLong(argv[2], &usleep_us);\n    exitted_with_code = -1;\n    int fork_child_pid = RedisModule_Fork(done_handler, (void*)0xdeadbeef);\n    if (fork_child_pid < 0) {\n        RedisModule_ReplyWithError(ctx, \"Fork failed\");\n        return REDISMODULE_OK;\n    } else if (fork_child_pid > 0) {\n        /* parent */\n        child_pid = fork_child_pid;\n        RedisModule_ReplyWithLongLong(ctx, child_pid);\n        return REDISMODULE_OK;\n    }\n\n    /* child */\n    RedisModule_Log(ctx, \"notice\", \"fork child started\");\n    usleep(usleep_us);\n    RedisModule_Log(ctx, \"notice\", \"fork child exiting\");\n    RedisModule_ExitFromChild(code_to_exit_with);\n    /* unreachable */\n    return 0;\n}\n\nint fork_exitcode(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModule_ReplyWithLongLong(ctx, exitted_with_code);\n    return REDISMODULE_OK;\n}\n\nint fork_kill(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    
UNUSED(argv);\n    UNUSED(argc);\n    if (RedisModule_KillForkChild(child_pid) != REDISMODULE_OK)\n        RedisModule_ReplyWithError(ctx, \"KillForkChild failed\");\n    else\n        RedisModule_ReplyWithLongLong(ctx, 1);\n    child_pid = -1;\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    if (RedisModule_Init(ctx,\"fork\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fork.create\", fork_create,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fork.exitcode\", fork_exitcode,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"fork.kill\", fork_kill,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/getchannels.c",
    "content": "#include \"redismodule.h\"\n#include <strings.h>\n#include <assert.h>\n#include <unistd.h>\n#include <errno.h>\n\n/* A sample with declarable channels, that are used to validate against ACLs */\nint getChannels_subscribe(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if ((argc - 1) % 3 != 0) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    char *err = NULL;\n    \n    /* getchannels.command [[subscribe|unsubscribe|publish] [pattern|literal] <channel> ...]\n     * This command marks the given channel is accessed based on the\n     * provided modifiers. */\n    for (int i = 1; i < argc; i += 3) {\n        const char *operation = RedisModule_StringPtrLen(argv[i], NULL);\n        const char *type = RedisModule_StringPtrLen(argv[i+1], NULL);\n        int flags = 0;\n\n        if (!strcasecmp(operation, \"subscribe\")) {\n            flags |= REDISMODULE_CMD_CHANNEL_SUBSCRIBE;\n        } else if (!strcasecmp(operation, \"unsubscribe\")) {\n            flags |= REDISMODULE_CMD_CHANNEL_UNSUBSCRIBE;\n        } else if (!strcasecmp(operation, \"publish\")) {\n            flags |= REDISMODULE_CMD_CHANNEL_PUBLISH;\n        } else {\n            err = \"Invalid channel operation\";\n            break;\n        }\n\n        if (!strcasecmp(type, \"literal\")) {\n            /* No op */\n        } else if (!strcasecmp(type, \"pattern\")) {\n            flags |= REDISMODULE_CMD_CHANNEL_PATTERN;\n        } else {\n            err = \"Invalid channel type\";\n            break;\n        }\n        if (RedisModule_IsChannelsPositionRequest(ctx)) {\n            RedisModule_ChannelAtPosWithFlags(ctx, i+2, flags);\n        }\n    }\n\n    if (!RedisModule_IsChannelsPositionRequest(ctx)) {\n        if (err) {\n            RedisModule_ReplyWithError(ctx, err);\n        } else {\n            /* Normal implementation would go here, but for tests just return okay */\n            RedisModule_ReplyWithSimpleString(ctx, 
\"OK\");\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"getchannels\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"getchannels.command\", getChannels_subscribe, \"getchannels-api\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/getkeys.c",
    "content": "\n#include \"redismodule.h\"\n#include <strings.h>\n#include <assert.h>\n#include <unistd.h>\n#include <errno.h>\n\n#define UNUSED(V) ((void) V)\n\n/* A sample movable keys command that returns a list of all\n * arguments that follow a KEY argument, i.e.\n */\nint getkeys_command(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    int i;\n    int count = 0;\n\n    /* Handle getkeys-api introspection */\n    if (RedisModule_IsKeysPositionRequest(ctx)) {\n        for (i = 0; i < argc; i++) {\n            size_t len;\n            const char *str = RedisModule_StringPtrLen(argv[i], &len);\n\n            if (len == 3 && !strncasecmp(str, \"key\", 3) && i + 1 < argc)\n                RedisModule_KeyAtPos(ctx, i + 1);\n        }\n\n        return REDISMODULE_OK;\n    }\n\n    /* Handle real command invocation */\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);\n    for (i = 0; i < argc; i++) {\n        size_t len;\n        const char *str = RedisModule_StringPtrLen(argv[i], &len);\n\n        if (len == 3 && !strncasecmp(str, \"key\", 3) && i + 1 < argc) {\n            RedisModule_ReplyWithString(ctx, argv[i+1]);\n            count++;\n        }\n    }\n    RedisModule_ReplySetArrayLength(ctx, count);\n\n    return REDISMODULE_OK;\n}\n\nint getkeys_command_with_flags(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    int i;\n    int count = 0;\n\n    /* Handle getkeys-api introspection */\n    if (RedisModule_IsKeysPositionRequest(ctx)) {\n        for (i = 0; i < argc; i++) {\n            size_t len;\n            const char *str = RedisModule_StringPtrLen(argv[i], &len);\n\n            if (len == 3 && !strncasecmp(str, \"key\", 3) && i + 1 < argc)\n                RedisModule_KeyAtPosWithFlags(ctx, i + 1, REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);\n        }\n\n        return REDISMODULE_OK;\n    }\n\n    /* Handle real command invocation */\n    RedisModule_ReplyWithArray(ctx, 
REDISMODULE_POSTPONED_LEN);\n    for (i = 0; i < argc; i++) {\n        size_t len;\n        const char *str = RedisModule_StringPtrLen(argv[i], &len);\n\n        if (len == 3 && !strncasecmp(str, \"key\", 3) && i + 1 < argc) {\n            RedisModule_ReplyWithString(ctx, argv[i+1]);\n            count++;\n        }\n    }\n    RedisModule_ReplySetArrayLength(ctx, count);\n\n    return REDISMODULE_OK;\n}\n\nint getkeys_fixed(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    int i;\n\n    RedisModule_ReplyWithArray(ctx, argc - 1);\n    for (i = 1; i < argc; i++) {\n        RedisModule_ReplyWithString(ctx, argv[i]);\n    }\n    return REDISMODULE_OK;\n}\n\n/* Introspect a command using RM_GetCommandKeys() and returns the list\n * of keys. Essentially this is COMMAND GETKEYS implemented in a module.\n * INTROSPECT <with-flags> <cmd> <args>\n */\nint getkeys_introspect(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    long long with_flags = 0;\n\n    if (argc < 4) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    if (RedisModule_StringToLongLong(argv[1],&with_flags) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid integer\");\n\n    int num_keys, *keyflags = NULL;\n    int *keyidx = RedisModule_GetCommandKeysWithFlags(ctx, &argv[2], argc - 2, &num_keys, with_flags ? 
&keyflags : NULL);\n\n    if (!keyidx) {\n        if (!errno)\n            RedisModule_ReplyWithEmptyArray(ctx);\n        else {\n            char err[100];\n            switch (errno) {\n                case ENOENT:\n                    RedisModule_ReplyWithError(ctx, \"ERR ENOENT\");\n                    break;\n                case EINVAL:\n                    RedisModule_ReplyWithError(ctx, \"ERR EINVAL\");\n                    break;\n                default:\n                    snprintf(err, sizeof(err) - 1, \"ERR errno=%d\", errno);\n                    RedisModule_ReplyWithError(ctx, err);\n                    break;\n            }\n        }\n    } else {\n        int i;\n\n        RedisModule_ReplyWithArray(ctx, num_keys);\n        for (i = 0; i < num_keys; i++) {\n            if (!with_flags) {\n                RedisModule_ReplyWithString(ctx, argv[2 + keyidx[i]]);\n                continue;\n            }\n            RedisModule_ReplyWithArray(ctx, 2);\n            RedisModule_ReplyWithString(ctx, argv[2 + keyidx[i]]);\n            char* sflags = \"\";\n            if (keyflags[i] & REDISMODULE_CMD_KEY_RO)\n                sflags = \"RO\";\n            else if (keyflags[i] & REDISMODULE_CMD_KEY_RW)\n                sflags = \"RW\";\n            else if (keyflags[i] & REDISMODULE_CMD_KEY_OW)\n                sflags = \"OW\";\n            else if (keyflags[i] & REDISMODULE_CMD_KEY_RM)\n                sflags = \"RM\";\n            RedisModule_ReplyWithCString(ctx, sflags);\n        }\n\n        RedisModule_Free(keyidx);\n        RedisModule_Free(keyflags);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    if (RedisModule_Init(ctx,\"getkeys\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"getkeys.command\", getkeys_command,\"getkeys-api\",0,0,0) == REDISMODULE_ERR)\n  
      return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"getkeys.command_with_flags\", getkeys_command_with_flags,\"getkeys-api\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"getkeys.fixed\", getkeys_fixed,\"\",2,4,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"getkeys.introspect\", getkeys_introspect,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/hash.c",
    "content": "#include \"redismodule.h\"\n#include <strings.h>\n#include <errno.h>\n#include <stdlib.h>\n\n#define UNUSED(x) (void)(x)\n\n/* If a string is \":deleted:\", the special value for deleted hash fields is\n * returned; otherwise the input string is returned. */\nstatic RedisModuleString *value_or_delete(RedisModuleString *s) {\n    if (!strcasecmp(RedisModule_StringPtrLen(s, NULL), \":delete:\"))\n        return REDISMODULE_HASH_DELETE;\n    else\n        return s;\n}\n\n/* HASH.SET key flags field1 value1 [field2 value2 ..]\n *\n * Sets 1-4 fields. Returns the same as RedisModule_HashSet().\n * Flags is a string of \"nxa\" where n = NX, x = XX, a = COUNT_ALL.\n * To delete a field, use the value \":delete:\".\n */\nint hash_set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 5 || argc % 2 == 0 || argc > 11)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModule_AutoMemory(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n\n    size_t flags_len;\n    const char *flags_str = RedisModule_StringPtrLen(argv[2], &flags_len);\n    int flags = REDISMODULE_HASH_NONE;\n    for (size_t i = 0; i < flags_len; i++) {\n        switch (flags_str[i]) {\n        case 'n': flags |= REDISMODULE_HASH_NX; break;\n        case 'x': flags |= REDISMODULE_HASH_XX; break;\n        case 'a': flags |= REDISMODULE_HASH_COUNT_ALL; break;\n        }\n    }\n\n    /* Test some varargs. (In real-world, use a loop and set one at a time.) 
*/\n    int result;\n    errno = 0;\n    if (argc == 5) {\n        result = RedisModule_HashSet(key, flags,\n                                     argv[3], value_or_delete(argv[4]),\n                                     NULL);\n    } else if (argc == 7) {\n        result = RedisModule_HashSet(key, flags,\n                                     argv[3], value_or_delete(argv[4]),\n                                     argv[5], value_or_delete(argv[6]),\n                                     NULL);\n    } else if (argc == 9) {\n        result = RedisModule_HashSet(key, flags,\n                                     argv[3], value_or_delete(argv[4]),\n                                     argv[5], value_or_delete(argv[6]),\n                                     argv[7], value_or_delete(argv[8]),\n                                     NULL);\n    } else if (argc == 11) {\n        result = RedisModule_HashSet(key, flags,\n                                     argv[3], value_or_delete(argv[4]),\n                                     argv[5], value_or_delete(argv[6]),\n                                     argv[7], value_or_delete(argv[8]),\n                                     argv[9], value_or_delete(argv[10]),\n                                     NULL);\n    } else {\n        return RedisModule_ReplyWithError(ctx, \"ERR too many fields\");\n    }\n\n    /* Check errno */\n    if (result == 0) {\n        if (errno == ENOTSUP)\n            return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n        else\n            RedisModule_Assert(errno == ENOENT);\n    }\n\n    return RedisModule_ReplyWithLongLong(ctx, result);\n}\n\nRedisModuleKey* openKeyWithMode(RedisModuleCtx *ctx, RedisModuleString *keyName, int mode) {\n    int supportedMode = RedisModule_GetOpenKeyModesAll();\n    if (!(supportedMode & REDISMODULE_READ) || ((supportedMode & mode)!=mode)) {\n        RedisModule_ReplyWithError(ctx, \"OpenKey mode is not supported\");\n        return NULL;\n    }\n\n    
RedisModuleKey *key = RedisModule_OpenKey(ctx, keyName, REDISMODULE_READ | mode);\n    if (!key) {\n        RedisModule_ReplyWithError(ctx, \"key not found\");\n        return NULL;\n    }\n\n    return key;\n}\n\nint test_open_key_subexpired_hget(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc<3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = openKeyWithMode(ctx, argv[1], REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);\n    if (!key) return REDISMODULE_OK;\n\n    RedisModuleString *value;\n    RedisModule_HashGet(key,REDISMODULE_HASH_NONE,argv[2],&value,NULL);\n\n    /* return the value */\n    if (value) {\n        RedisModule_ReplyWithString(ctx, value);\n        RedisModule_FreeString(ctx, value);\n    } else {\n        RedisModule_ReplyWithNull(ctx);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_open_key_hget_expire(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc<3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = openKeyWithMode(ctx, argv[1], REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);\n    if (!key) return REDISMODULE_OK;\n\n    mstime_t expireAt;\n    \n    /* Let's test here that we get error if using invalid flags combination */\n    RedisModule_Assert(\n            RedisModule_HashGet(key,\n                                REDISMODULE_HASH_EXISTS |\n                                REDISMODULE_HASH_EXPIRE_TIME,\n                                argv[2], &expireAt, NULL) == REDISMODULE_ERR);    \n    \n    /* Now let's get the expire time */\n    RedisModule_HashGet(key, REDISMODULE_HASH_EXPIRE_TIME,argv[2],&expireAt,NULL);\n    RedisModule_ReplyWithLongLong(ctx, expireAt);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* Test variadic function to get two expiration times */\nint test_open_key_hget_two_expire(RedisModuleCtx *ctx, RedisModuleString 
**argv, int argc) {\n    if (argc<3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = openKeyWithMode(ctx, argv[1], REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);\n    if (!key) return REDISMODULE_OK;\n\n    mstime_t expireAt1, expireAt2;\n    RedisModule_HashGet(key,REDISMODULE_HASH_EXPIRE_TIME,argv[2],&expireAt1,argv[3],&expireAt2,NULL);\n    \n    /* return the two expire time */\n    RedisModule_ReplyWithArray(ctx, 2);\n    RedisModule_ReplyWithLongLong(ctx, expireAt1);\n    RedisModule_ReplyWithLongLong(ctx, expireAt2);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_open_key_hget_min_expire(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc!=2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = openKeyWithMode(ctx, argv[1], REDISMODULE_READ);\n    if (!key) return REDISMODULE_OK;\n\n    volatile mstime_t minExpire = RedisModule_HashFieldMinExpire(key);\n    RedisModule_ReplyWithLongLong(ctx, minExpire);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint  numReplies;\nvoid ScanCallback(RedisModuleKey *key, RedisModuleString *field, RedisModuleString *value, void *privdata) {\n    UNUSED(key);\n    RedisModuleCtx *ctx = (RedisModuleCtx *)privdata;\n\n    /* Reply with the field and value (or NULL for sets) */\n    RedisModule_ReplyWithString(ctx, field);\n    if (value) {\n        RedisModule_ReplyWithString(ctx, value);\n    } else {\n        RedisModule_ReplyWithCString(ctx, \"(null)\");\n    }\n    numReplies+=2;\n}\n\nint test_open_key_access_expired_hscan(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = openKeyWithMode(ctx, argv[1], REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);\n\n    if (!key)\n        return RedisModule_ReplyWithError(ctx, \"ERR key not 
exists\");\n\n    /* Verify it is a hash */\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_HASH) {\n        RedisModule_CloseKey(key);\n        return RedisModule_ReplyWithError(ctx, \"ERR key is not a hash\");\n    }\n\n    /* Scan the hash and reply pairs of key-value */\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);\n    numReplies = 0;\n    RedisModuleScanCursor *cursor = RedisModule_ScanCursorCreate();\n    while (RedisModule_ScanKey(key, cursor, ScanCallback, ctx));\n    RedisModule_ScanCursorDestroy(cursor);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplySetArrayLength(ctx, numReplies);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"hash\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"hash.set\", hash_set, \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"hash.hget_expired\", test_open_key_subexpired_hget,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"hash.hscan_expired\", test_open_key_access_expired_hscan,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"hash.hget_expire\", test_open_key_hget_expire,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"hash.hget_two_expire\", test_open_key_hget_two_expire,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"hash.hget_min_expire\", test_open_key_hget_min_expire,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/hooks.c",
    "content": "/* This module is used to test the server events hooks API.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <string.h>\n#include <strings.h>\n#include <assert.h>\n\n/* We need to store events to be able to test and see what we got, and we can't\n * store them in the key-space since that would mess up rdb loading (duplicates)\n * and be lost of flushdb. */\nRedisModuleDict *event_log = NULL;\n/* stores all the keys on which we got 'removed' event */\nRedisModuleDict *removed_event_log = NULL;\n/* stores all the subevent on which we got 'removed' event */\nRedisModuleDict *removed_subevent_type = NULL;\n/* stores all the keys on which we got 'removed' event with expiry information */\nRedisModuleDict *removed_expiry_log = NULL;\n\ntypedef struct EventElement {\n    long count;\n    RedisModuleString *last_val_string;\n    long last_val_int;\n} EventElement;\n\nvoid LogStringEvent(RedisModuleCtx *ctx, const char* keyname, const char* data) {\n    EventElement *event = RedisModule_DictGetC(event_log, (void*)keyname, strlen(keyname), NULL);\n    if (!event) {\n        event = RedisModule_Alloc(sizeof(EventElement));\n        memset(event, 0, sizeof(EventElement));\n        RedisModule_DictSetC(event_log, (void*)keyname, strlen(keyname), event);\n    }\n    if (event->last_val_string) RedisModule_FreeString(ctx, event->last_val_string);\n    event->last_val_string = RedisModule_CreateString(ctx, data, strlen(data));\n    event->count++;\n}\n\nvoid LogNumericEvent(RedisModuleCtx *ctx, const char* keyname, long data) {\n    REDISMODULE_NOT_USED(ctx);\n    EventElement *event = 
RedisModule_DictGetC(event_log, (void*)keyname, strlen(keyname), NULL);\n    if (!event) {\n        event = RedisModule_Alloc(sizeof(EventElement));\n        memset(event, 0, sizeof(EventElement));\n        RedisModule_DictSetC(event_log, (void*)keyname, strlen(keyname), event);\n    }\n    event->last_val_int = data;\n    event->count++;\n}\n\nvoid FreeEvent(RedisModuleCtx *ctx, EventElement *event) {\n    if (event->last_val_string)\n        RedisModule_FreeString(ctx, event->last_val_string);\n    RedisModule_Free(event);\n}\n\nint cmdEventCount(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    EventElement *event = RedisModule_DictGet(event_log, argv[1], NULL);\n    RedisModule_ReplyWithLongLong(ctx, event? event->count: 0);\n    return REDISMODULE_OK;\n}\n\nint cmdEventLast(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    EventElement *event = RedisModule_DictGet(event_log, argv[1], NULL);\n    if (event && event->last_val_string)\n        RedisModule_ReplyWithString(ctx, event->last_val_string);\n    else if (event)\n        RedisModule_ReplyWithLongLong(ctx, event->last_val_int);\n    else\n        RedisModule_ReplyWithNull(ctx);\n    return REDISMODULE_OK;\n}\n\nvoid clearEvents(RedisModuleCtx *ctx)\n{\n    RedisModuleString *key;\n    EventElement *event;\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStart(event_log, \"^\", NULL);\n    while((key = RedisModule_DictNext(ctx, iter, (void**)&event)) != NULL) {\n        event->count = 0;\n        event->last_val_int = 0;\n        if (event->last_val_string) RedisModule_FreeString(ctx, event->last_val_string);\n        event->last_val_string = NULL;\n        RedisModule_DictDel(event_log, key, NULL);\n        RedisModule_Free(event);\n    }\n    
RedisModule_DictIteratorStop(iter);\n}\n\nint cmdEventsClear(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argc);\n    REDISMODULE_NOT_USED(argv);\n    clearEvents(ctx);\n    return REDISMODULE_OK;\n}\n\n/* Client state change callback. */\nvoid clientChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleClientInfo *ci = data;\n    char *keyname = (sub == REDISMODULE_SUBEVENT_CLIENT_CHANGE_CONNECTED) ?\n        \"client-connected\" : \"client-disconnected\";\n    LogNumericEvent(ctx, keyname, ci->id);\n}\n\nvoid flushdbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleFlushInfo *fi = data;\n    char *keyname = (sub == REDISMODULE_SUBEVENT_FLUSHDB_START) ?\n        \"flush-start\" : \"flush-end\";\n    LogNumericEvent(ctx, keyname, fi->dbnum);\n}\n\nvoid roleChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    RedisModuleReplicationInfo *ri = data;\n    char *keyname = (sub == REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER) ?\n        \"role-master\" : \"role-replica\";\n    LogStringEvent(ctx, keyname, ri->masterhost);\n}\n\nvoid replicationChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    char *keyname = (sub == REDISMODULE_SUBEVENT_REPLICA_CHANGE_ONLINE) ?\n        \"replica-online\" : \"replica-offline\";\n    LogNumericEvent(ctx, keyname, 0);\n}\n\nvoid rasterLinkChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    char *keyname = (sub == REDISMODULE_SUBEVENT_MASTER_LINK_UP) ?\n        \"masterlink-up\" : \"masterlink-down\";\n    LogNumericEvent(ctx, keyname, 
0);\n}\n\nvoid persistenceCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    char *keyname = NULL;\n    switch (sub) {\n        case REDISMODULE_SUBEVENT_PERSISTENCE_RDB_START: keyname = \"persistence-rdb-start\"; break;\n        case REDISMODULE_SUBEVENT_PERSISTENCE_AOF_START: keyname = \"persistence-aof-start\"; break;\n        case REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START: keyname = \"persistence-syncaof-start\"; break;\n        case REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START: keyname = \"persistence-syncrdb-start\"; break;\n        case REDISMODULE_SUBEVENT_PERSISTENCE_ENDED: keyname = \"persistence-end\"; break;\n        case REDISMODULE_SUBEVENT_PERSISTENCE_FAILED: keyname = \"persistence-failed\"; break;\n    }\n    /* modifying the keyspace from the fork child is not an option, using log instead */\n    RedisModule_Log(ctx, \"warning\", \"module-event-%s\", keyname);\n    if (sub == REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_RDB_START ||\n        sub == REDISMODULE_SUBEVENT_PERSISTENCE_SYNC_AOF_START) \n    {\n        LogNumericEvent(ctx, keyname, 0);\n    }\n}\n\nvoid loadingCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    char *keyname = NULL;\n    switch (sub) {\n        case REDISMODULE_SUBEVENT_LOADING_RDB_START: keyname = \"loading-rdb-start\"; break;\n        case REDISMODULE_SUBEVENT_LOADING_AOF_START: keyname = \"loading-aof-start\"; break;\n        case REDISMODULE_SUBEVENT_LOADING_REPL_START: keyname = \"loading-repl-start\"; break;\n        case REDISMODULE_SUBEVENT_LOADING_ENDED: keyname = \"loading-end\"; break;\n        case REDISMODULE_SUBEVENT_LOADING_FAILED: keyname = \"loading-failed\"; break;\n    }\n    LogNumericEvent(ctx, keyname, 0);\n}\n\nvoid loadingProgressCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void 
*data)\n{\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleLoadingProgress *ei = data;\n    char *keyname = (sub == REDISMODULE_SUBEVENT_LOADING_PROGRESS_RDB) ?\n        \"loading-progress-rdb\" : \"loading-progress-aof\";\n    LogNumericEvent(ctx, keyname, ei->progress);\n}\n\nvoid shutdownCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n    REDISMODULE_NOT_USED(sub);\n\n    RedisModule_Log(ctx, \"warning\", \"module-event-%s\", \"shutdown\");\n}\n\nvoid cronLoopCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(sub);\n\n    RedisModuleCronLoop *ei = data;\n    LogNumericEvent(ctx, \"cron-loop\", ei->hz);\n}\n\nvoid moduleChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n\n    RedisModuleModuleChange *ei = data;\n    char *keyname = (sub == REDISMODULE_SUBEVENT_MODULE_LOADED) ?\n        \"module-loaded\" : \"module-unloaded\";\n    LogStringEvent(ctx, keyname, ei->module_name);\n}\n\nvoid swapDbCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(sub);\n\n    RedisModuleSwapDbInfo *ei = data;\n    LogNumericEvent(ctx, \"swapdb-first\", ei->dbnum_first);\n    LogNumericEvent(ctx, \"swapdb-second\", ei->dbnum_second);\n}\n\nvoid configChangeCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    if (sub != REDISMODULE_SUBEVENT_CONFIG_CHANGE) {\n        return;\n    }\n\n    RedisModuleConfigChangeV1 *ei = data;\n    LogNumericEvent(ctx, \"config-change-count\", ei->num_changes);\n    LogStringEvent(ctx, \"config-change-first\", ei->config_names[0]);\n}\n\nvoid keyInfoCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n\n    
RedisModuleKeyInfoV1 *ei = data;\n    RedisModuleKey *kp = ei->key;\n    RedisModuleString *key = (RedisModuleString *) RedisModule_GetKeyNameFromModuleKey(kp);\n    const char *keyname = RedisModule_StringPtrLen(key, NULL);\n    RedisModuleString *event_keyname = RedisModule_CreateStringPrintf(ctx, \"key-info-%s\", keyname);\n    LogStringEvent(ctx, RedisModule_StringPtrLen(event_keyname, NULL), keyname);\n    RedisModule_FreeString(ctx, event_keyname);\n\n    /* Despite getting a key object from the callback, we also try to re-open it\n     * to make sure the callback is called before it is actually removed from the keyspace. */\n    RedisModuleKey *kp_open = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n    assert(RedisModule_ValueLength(kp) == RedisModule_ValueLength(kp_open));\n    RedisModule_CloseKey(kp_open);\n\n    /* We also RM_Call a command that accesses that key, again to make sure it's still in the keyspace. */\n    char *size_command = NULL;\n    int key_type = RedisModule_KeyType(kp);\n    if (key_type == REDISMODULE_KEYTYPE_STRING) {\n        size_command = \"STRLEN\";\n    } else if (key_type == REDISMODULE_KEYTYPE_LIST) {\n        size_command = \"LLEN\";\n    } else if (key_type == REDISMODULE_KEYTYPE_HASH) {\n        size_command = \"HLEN\";\n    } else if (key_type == REDISMODULE_KEYTYPE_SET) {\n        size_command = \"SCARD\";\n    } else if (key_type == REDISMODULE_KEYTYPE_ZSET) {\n        size_command = \"ZCARD\";\n    } else if (key_type == REDISMODULE_KEYTYPE_STREAM) {\n        size_command = \"XLEN\";\n    }\n    if (size_command != NULL) {\n        RedisModuleCallReply *reply = RedisModule_Call(ctx, size_command, \"s\", key);\n        assert(reply != NULL);\n        assert(RedisModule_ValueLength(kp) == (size_t) RedisModule_CallReplyInteger(reply));\n        RedisModule_FreeCallReply(reply);\n    }\n\n    /* Now use the key object we got from the callback for various validations. 
*/\n    RedisModuleString *prev = RedisModule_DictGetC(removed_event_log, (void*)keyname, strlen(keyname), NULL);\n    /* We keep the object length */\n    RedisModuleString *v = RedisModule_CreateStringPrintf(ctx, \"%zd\", RedisModule_ValueLength(kp));\n    /* For string types, we keep the value instead of the length */\n    if (RedisModule_KeyType(kp) == REDISMODULE_KEYTYPE_STRING) {\n        RedisModule_FreeString(ctx, v);\n        size_t len;\n        /* We need to access the string value with RedisModule_StringDMA.\n         * RedisModule_StringDMA may call dbUnshareStringValue to free the original object,\n         * so we can test that as well. */\n        char *s = RedisModule_StringDMA(kp, &len, REDISMODULE_READ);\n        v = RedisModule_CreateString(ctx, s, len);\n    }\n    RedisModule_DictReplaceC(removed_event_log, (void*)keyname, strlen(keyname), v);\n    if (prev != NULL) {\n        RedisModule_FreeString(ctx, prev);\n    }\n\n    const char *subevent = \"deleted\";\n    if (sub == REDISMODULE_SUBEVENT_KEY_EXPIRED) {\n        subevent = \"expired\";\n    } else if (sub == REDISMODULE_SUBEVENT_KEY_EVICTED) {\n        subevent = \"evicted\";\n    } else if (sub == REDISMODULE_SUBEVENT_KEY_OVERWRITTEN) {\n        subevent = \"overwritten\";\n    }\n    RedisModule_DictReplaceC(removed_subevent_type, (void*)keyname, strlen(keyname), (void *)subevent);\n\n    RedisModuleString *prevexpire = RedisModule_DictGetC(removed_expiry_log, (void*)keyname, strlen(keyname), NULL);\n    RedisModuleString *expire = RedisModule_CreateStringPrintf(ctx, \"%lld\", RedisModule_GetAbsExpire(kp));\n    RedisModule_DictReplaceC(removed_expiry_log, (void*)keyname, strlen(keyname), (void *)expire);\n    if (prevexpire != NULL) {\n        RedisModule_FreeString(ctx, prevexpire);\n    }\n}\n\nstatic int cmdIsKeyRemoved(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc != 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char *key  = 
RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleString *value = RedisModule_DictGetC(removed_event_log, (void*)key, strlen(key), NULL);\n\n    if (value == NULL) {\n        return RedisModule_ReplyWithError(ctx, \"ERR Key was not removed\");\n    }\n\n    const char *subevent = RedisModule_DictGetC(removed_subevent_type, (void*)key, strlen(key), NULL);\n    RedisModule_ReplyWithArray(ctx, 2);\n    RedisModule_ReplyWithString(ctx, value);\n    RedisModule_ReplyWithSimpleString(ctx, subevent);\n\n    return REDISMODULE_OK;\n}\n\nstatic int cmdKeyExpiry(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc != 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* key  = RedisModule_StringPtrLen(argv[1], NULL);\n    RedisModuleString *expire = RedisModule_DictGetC(removed_expiry_log, (void*)key, strlen(key), NULL);\n    if (expire == NULL) {\n        return RedisModule_ReplyWithError(ctx, \"ERR Key was not removed\");\n    }\n    RedisModule_ReplyWithString(ctx, expire);\n    return REDISMODULE_OK;\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n#define VerifySubEventSupported(e, s) \\\n    if (!RedisModule_IsSubEventSupported(e, s)) { \\\n        return REDISMODULE_ERR; \\\n    }\n\n    if (RedisModule_Init(ctx,\"testhook\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* Example on how to check if a server sub event is supported */\n    if (!RedisModule_IsSubEventSupported(RedisModuleEvent_ReplicationRoleChanged, REDISMODULE_EVENT_REPLROLECHANGED_NOW_MASTER)) {\n        return REDISMODULE_ERR;\n    }\n\n    /* replication related hooks */\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ReplicationRoleChanged, roleChangeCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ReplicaChange, replicationChangeCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_MasterLinkChange, rasterLinkChangeCallback);\n\n    /* persistence related hooks */\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_Persistence, persistenceCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_Loading, loadingCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_LoadingProgress, loadingProgressCallback);\n\n    /* other hooks */\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ClientChange, clientChangeCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_FlushDB, flushdbCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_Shutdown, shutdownCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_CronLoop, cronLoopCallback);\n\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ModuleChange, moduleChangeCallback);\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_SwapDB, swapDbCallback);\n\n    
RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_Config, configChangeCallback);\n\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_Key, keyInfoCallback);\n\n    event_log = RedisModule_CreateDict(ctx);\n    removed_event_log = RedisModule_CreateDict(ctx);\n    removed_subevent_type = RedisModule_CreateDict(ctx);\n    removed_expiry_log = RedisModule_CreateDict(ctx);\n\n    if (RedisModule_CreateCommand(ctx,\"hooks.event_count\", cmdEventCount,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"hooks.event_last\", cmdEventLast,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"hooks.clear\", cmdEventsClear,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"hooks.is_key_removed\", cmdIsKeyRemoved,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"hooks.pexpireat\", cmdKeyExpiry,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (argc == 1) {\n        const char *ptr = RedisModule_StringPtrLen(argv[0], NULL);\n        if (!strcasecmp(ptr, \"noload\")) {\n            /* This is a hint that we return ERR at the last moment of OnLoad. 
*/\n            RedisModule_FreeDict(ctx, event_log);\n            RedisModule_FreeDict(ctx, removed_event_log);\n            RedisModule_FreeDict(ctx, removed_subevent_type);\n            RedisModule_FreeDict(ctx, removed_expiry_log);\n            return REDISMODULE_ERR;\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    clearEvents(ctx);\n    RedisModule_FreeDict(ctx, event_log);\n    event_log = NULL;\n\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(removed_event_log, \"^\", NULL, 0);\n    char* key;\n    size_t keyLen;\n    RedisModuleString* val;\n    while((key = RedisModule_DictNextC(iter, &keyLen, (void**)&val))){\n        RedisModule_FreeString(ctx, val);\n    }\n    RedisModule_FreeDict(ctx, removed_event_log);\n    RedisModule_DictIteratorStop(iter);\n    removed_event_log = NULL;\n\n    RedisModule_FreeDict(ctx, removed_subevent_type);\n    removed_subevent_type = NULL;\n\n    iter = RedisModule_DictIteratorStartC(removed_expiry_log, \"^\", NULL, 0);\n    while((key = RedisModule_DictNextC(iter, &keyLen, (void**)&val))){\n        RedisModule_FreeString(ctx, val);\n    }\n    RedisModule_FreeDict(ctx, removed_expiry_log);\n    RedisModule_DictIteratorStop(iter);\n    removed_expiry_log = NULL;\n\n    return REDISMODULE_OK;\n}\n\n"
  },
  {
    "path": "tests/modules/infotest.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n\nvoid InfoFunc(RedisModuleInfoCtx *ctx, int for_crash_report) {\n    static int info_func_calls = 0;\n    RedisModule_InfoAddSection(ctx, \"\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"global\", -2);\n    RedisModule_InfoAddFieldULongLong(ctx, \"uglobal\", (unsigned long long)-2);\n    RedisModule_InfoAddFieldLongLong(ctx, \"info_calls\", ++info_func_calls);\n\n    RedisModule_InfoAddSection(ctx, \"Spanish\");\n    RedisModule_InfoAddFieldCString(ctx, \"uno\", \"one\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"dos\", 2);\n\n    RedisModule_InfoAddSection(ctx, \"Italian\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"due\", 2);\n    RedisModule_InfoAddFieldDouble(ctx, \"tre\", 3.3);\n\n    RedisModule_InfoAddSection(ctx, \"keyspace\");\n    RedisModule_InfoBeginDictField(ctx, \"db0\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"keys\", 3);\n    RedisModule_InfoAddFieldLongLong(ctx, \"expires\", 1);\n    RedisModule_InfoEndDictField(ctx);\n\n    RedisModule_InfoAddSection(ctx, \"unsafe\");\n    RedisModule_InfoBeginDictField(ctx, \"unsafe:field\");\n    RedisModule_InfoAddFieldLongLong(ctx, \"value\", 1);\n    RedisModule_InfoEndDictField(ctx);\n\n    if (for_crash_report) {\n        RedisModule_InfoAddSection(ctx, \"Klingon\");\n        RedisModule_InfoAddFieldCString(ctx, \"one\", \"wa'\");\n        RedisModule_InfoAddFieldCString(ctx, \"two\", \"cha'\");\n        RedisModule_InfoAddFieldCString(ctx, \"three\", \"wej\");\n    }\n\n}\n\nint info_get(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, char field_type)\n{\n    if (argc != 3 && argc != 4) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    int err = REDISMODULE_OK;\n    const char *section, *field;\n    section = RedisModule_StringPtrLen(argv[1], NULL);\n    field = RedisModule_StringPtrLen(argv[2], NULL);\n    RedisModuleServerInfoData *info = RedisModule_GetServerInfo(ctx, section);\n  
  if (field_type=='i') {\n        long long ll = RedisModule_ServerInfoGetFieldSigned(info, field, &err);\n        if (err==REDISMODULE_OK)\n            RedisModule_ReplyWithLongLong(ctx, ll);\n    } else if (field_type=='u') {\n        unsigned long long ll = (unsigned long long)RedisModule_ServerInfoGetFieldUnsigned(info, field, &err);\n        if (err==REDISMODULE_OK)\n            RedisModule_ReplyWithLongLong(ctx, ll);\n    } else if (field_type=='d') {\n        double d = RedisModule_ServerInfoGetFieldDouble(info, field, &err);\n        if (err==REDISMODULE_OK)\n            RedisModule_ReplyWithDouble(ctx, d);\n    } else if (field_type=='c') {\n        const char *str = RedisModule_ServerInfoGetFieldC(info, field);\n        if (str)\n            RedisModule_ReplyWithCString(ctx, str);\n    } else {\n        RedisModuleString *str = RedisModule_ServerInfoGetField(ctx, info, field);\n        if (str) {\n            RedisModule_ReplyWithString(ctx, str);\n            RedisModule_FreeString(ctx, str);\n        } else\n            err=REDISMODULE_ERR;\n    }\n    if (err!=REDISMODULE_OK)\n        RedisModule_ReplyWithError(ctx, \"not found\");\n    RedisModule_FreeServerInfo(ctx, info);\n    return REDISMODULE_OK;\n}\n\nint info_gets(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return info_get(ctx, argv, argc, 's');\n}\n\nint info_getc(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return info_get(ctx, argv, argc, 'c');\n}\n\nint info_geti(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return info_get(ctx, argv, argc, 'i');\n}\n\nint info_getu(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return info_get(ctx, argv, argc, 'u');\n}\n\nint info_getd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return info_get(ctx, argv, argc, 'd');\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    
REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"infotest\",1,REDISMODULE_APIVER_1)\n            == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_RegisterInfoFunc(ctx, InfoFunc) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"info.gets\", info_gets,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"info.getc\", info_getc,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"info.geti\", info_geti,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"info.getu\", info_getu,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"info.getd\", info_getd,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/internalsecret.c",
    "content": "#include \"redismodule.h\"\n#include <errno.h>\n\nint InternalAuth_GetInternalSecret(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    /* NOTE: The internal secret SHOULD NOT be exposed by any module. This is\n    done for testing purposes only. */\n    size_t len;\n    const char *secret = RedisModule_GetInternalSecret(ctx, &len);\n    if(secret) {\n        RedisModule_ReplyWithStringBuffer(ctx, secret, len);\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR no internal secret available\");\n    }\n    return REDISMODULE_OK;\n}\n\nint InternalAuth_InternalCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\ntypedef enum {\n    RM_CALL_REGULAR = 0,\n    RM_CALL_WITHUSER = 1,\n    RM_CALL_WITHDETACHEDCLIENT = 2,\n    RM_CALL_REPLICATED = 3\n} RMCallMode;\n\nint call_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc, RMCallMode mode) {\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n    RedisModuleCallReply *rep = NULL;\n    RedisModuleCtx *detached_ctx = NULL;\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    switch (mode) {\n        case RM_CALL_REGULAR:\n            // Regular call, with the unrestricted user.\n            rep = RedisModule_Call(ctx, cmd, \"vE\", argv + 2, (size_t)argc - 2);\n            break;\n        case RM_CALL_WITHUSER:\n            // Simply call the command with the current client.\n            rep = RedisModule_Call(ctx, cmd, \"vCE\", argv + 2, (size_t)argc - 2);\n            break;\n        case RM_CALL_WITHDETACHEDCLIENT:\n            // Use a context created with the thread-safe-context API\n            detached_ctx = RedisModule_GetThreadSafeContext(NULL);\n            if(!detached_ctx){\n                
RedisModule_ReplyWithError(ctx, \"ERR failed to create detached context\");\n                return REDISMODULE_ERR;\n            }\n            // Dispatch the command with the detached context\n            rep = RedisModule_Call(detached_ctx, cmd, \"vCE\", argv + 2, (size_t)argc - 2);\n            break;\n        case RM_CALL_REPLICATED:\n            rep = RedisModule_Call(ctx, cmd, \"vE\", argv + 2, (size_t)argc - 2);\n    }\n\n    if(!rep) {\n        char err[100];\n        switch (errno) {\n            case EACCES:\n                RedisModule_ReplyWithError(ctx, \"ERR NOPERM\");\n                break;\n            case ENOENT:\n                RedisModule_ReplyWithError(ctx, \"ERR unknown command\");\n                break;\n            default:\n                snprintf(err, sizeof(err) - 1, \"ERR errno=%d\", errno);\n                RedisModule_ReplyWithError(ctx, err);\n                break;\n        }\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n        if (mode == RM_CALL_REPLICATED)\n            RedisModule_ReplicateVerbatim(ctx);\n    }\n\n    if (mode == RM_CALL_WITHDETACHEDCLIENT) {\n        RedisModule_FreeThreadSafeContext(detached_ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint internal_rmcall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return call_rm_call(ctx, argv, argc, RM_CALL_REGULAR);\n}\n\nint noninternal_rmcall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return call_rm_call(ctx, argv, argc, RM_CALL_REGULAR);\n}\n\nint noninternal_rmcall_withuser(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return call_rm_call(ctx, argv, argc, RM_CALL_WITHUSER);\n}\n\nint noninternal_rmcall_detachedcontext_withuser(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    return call_rm_call(ctx, argv, argc, RM_CALL_WITHDETACHEDCLIENT);\n}\n\nint internal_rmcall_replicated(RedisModuleCtx *ctx, RedisModuleString **argv, int 
argc) {\n    return call_rm_call(ctx, argv, argc, RM_CALL_REPLICATED);\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"testinternalsecret\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* WARNING: A module should NEVER expose the internal secret - this is for\n     * testing purposes only. */\n    if (RedisModule_CreateCommand(ctx,\"internalauth.getinternalsecret\",\n        InternalAuth_GetInternalSecret,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.internalcommand\",\n        InternalAuth_InternalCommand,\"internal\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.internal_rmcall\",\n        internal_rmcall,\"write internal\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.noninternal_rmcall\",\n        noninternal_rmcall,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.noninternal_rmcall_withuser\",\n        noninternal_rmcall_withuser,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.noninternal_rmcall_detachedcontext_withuser\",\n        noninternal_rmcall_detachedcontext_withuser,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"internalauth.internal_rmcall_replicated\",\n        internal_rmcall_replicated,\"write internal\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/keymeta_notify.c",
    "content": "/* Test module for KSN paths that must tolerate keymeta writes.\n *\n * In general, keyspace notification callbacks must not perform write\n * operations. However, the Search module modifies key metadata as part of KSN,\n * so this module exercises the subset of KSN flows that must remain resilient\n * to such keymeta modifications, including cases that may trigger kvobj\n * reallocation.\n *\n * Commands:\n *   KEYMETANOTIFY.GET <key>      - Get the metadata value attached to a key\n *   KEYMETANOTIFY.SETCOUNT       - Get how many times metadata was set in notifications\n *\n * Copyright (c) 2006-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"redismodule.h\"\n#include <string.h>\n#include <stdlib.h>\n\nstatic RedisModuleKeyMetaClassId meta_class_id = -1;\n\n/* Counter incremented each time we successfully set metadata in a notification */\nstatic long long meta_set_count = 0;\n\n/* Notification callback: sets metadata on the key during notifications. 
*/\nstatic int HashNotifyCallback(RedisModuleCtx *ctx, int type, const char *event,\n                               RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    if (meta_class_id < 0) return REDISMODULE_OK;\n\n    RedisModuleKey *k = RedisModule_OpenKey(ctx, key, REDISMODULE_WRITE);\n    if (!k) return REDISMODULE_OK;\n\n    if (RedisModule_KeyType(k) == REDISMODULE_KEYTYPE_EMPTY) {\n        RedisModule_CloseKey(k);\n        return REDISMODULE_OK;\n    }\n\n    /* Free existing metadata if any */\n    uint64_t existing = 0;\n    if (RedisModule_GetKeyMeta(meta_class_id, k, &existing) == REDISMODULE_OK) {\n        if (existing != 0) {\n            free((char *)existing);\n        }\n    }\n\n    /* Set new metadata - a simple string \"notified\" */\n    char *new_str = strdup(\"notified\");\n    if (RedisModule_SetKeyMeta(meta_class_id, k, (uint64_t)new_str) == REDISMODULE_OK) {\n        meta_set_count++;\n    } else {\n        free(new_str);\n    }\n\n    RedisModule_CloseKey(k);\n    return REDISMODULE_OK;\n}\n\n/* Free callback for metadata */\nstatic void MetaFreeCallback(const char *keyname, uint64_t meta) {\n    REDISMODULE_NOT_USED(keyname);\n    if (meta != 0) {\n        free((char *)meta);\n    }\n}\n\n/* KEYMETANOTIFY.GET <key> - Get the metadata string attached to a key */\nstatic int GetMetaCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    RedisModuleKey *k = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    if (!k || RedisModule_KeyType(k) == REDISMODULE_KEYTYPE_EMPTY) {\n        if (k) RedisModule_CloseKey(k);\n        RedisModule_ReplyWithNull(ctx);\n        return REDISMODULE_OK;\n    }\n\n    uint64_t meta = 0;\n    if (RedisModule_GetKeyMeta(meta_class_id, k, &meta) == REDISMODULE_OK && meta != 0) {\n        RedisModule_ReplyWithCString(ctx, (const char *)meta);\n    } else {\n        
RedisModule_ReplyWithNull(ctx);\n    }\n    RedisModule_CloseKey(k);\n    return REDISMODULE_OK;\n}\n\n/* KEYMETANOTIFY.SETCOUNT - Get how many times we successfully set metadata in notifications */\nstatic int SetCountCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithLongLong(ctx, meta_set_count);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"keymetanotify\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Register a metadata class */\n    RedisModuleKeyMetaClassConfig config = {0};\n    config.version = REDISMODULE_KEY_META_VERSION;\n    config.flags = (1 << REDISMODULE_META_ALLOW_IGNORE);\n    config.reset_value = (uint64_t)NULL;\n    config.free = MetaFreeCallback;\n    config.rdb_load = NULL;\n    config.rdb_save = NULL;\n    config.aof_rewrite = NULL;\n    config.copy = NULL;\n    config.rename = NULL;\n    config.move = NULL;\n    config.defrag = NULL;\n    config.unlink = NULL;\n    config.mem_usage = NULL;\n    config.free_effort = NULL;\n\n    meta_class_id = RedisModule_CreateKeyMetaClass(ctx, \"kmno\", 1, &config);\n    if (meta_class_id < 0) return REDISMODULE_ERR;\n\n    /* Subscribe to keyspace events matching RediSearch's notification types:\n     * GENERIC, HASH, STRING, EXPIRED, and EVICTED. 
*/\n    int notifyFlags = REDISMODULE_NOTIFY_GENERIC | REDISMODULE_NOTIFY_HASH |\n                      REDISMODULE_NOTIFY_STRING | REDISMODULE_NOTIFY_EXPIRED |\n                      REDISMODULE_NOTIFY_EVICTED;\n    if (RedisModule_SubscribeToKeyspaceEvents(ctx, notifyFlags,\n                                              HashNotifyCallback) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"keymetanotify.get\", GetMetaCommand,\n                                  \"readonly\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"keymetanotify.setcount\", SetCountCommand,\n                                  \"readonly\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n\n    if (meta_class_id >= 0) {\n        RedisModule_ReleaseKeyMetaClass(meta_class_id);\n        meta_class_id = -1;\n    }\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/keyspace_events.c",
    "content": "/* This module is used to test the server keyspace events API.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2020-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE /* For usleep */\n\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <string.h>\n#include <strings.h>\n#include <unistd.h>\n\nustime_t cached_time = 0;\n\n/** stores all the keys on which we got 'loaded' keyspace notification **/\nRedisModuleDict *loaded_event_log = NULL;\n/** stores all the keys on which we got 'module' keyspace notification **/\nRedisModuleDict *module_event_log = NULL;\n\n/** Counts how many deleted KSN we got on keys with a prefix of \"count_dels_\" **/\nstatic size_t dels = 0;\n\n/* Subkey notification log */\n#define SUBKEY_LOG_MAX 256\nstatic char subkey_log[SUBKEY_LOG_MAX][512];\nstatic int subkey_log_count = 0;\n\nstatic int KeySpace_NotificationLoaded(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key){\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n\n    if(strcmp(event, \"loaded\") == 0){\n        const char* keyName = RedisModule_StringPtrLen(key, NULL);\n        int nokey;\n        RedisModule_DictGetC(loaded_event_log, (void*)keyName, strlen(keyName), &nokey);\n        if(nokey){\n            RedisModule_DictSetC(loaded_event_log, (void*)keyName, strlen(keyName), RedisModule_HoldString(ctx, key));\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nstatic long long callback_call_count = 0;\nstatic int KeySpace_NotificationGeneric(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    callback_call_count++;\n    const char *key_str = 
RedisModule_StringPtrLen(key, NULL);\n    if (strncmp(key_str, \"count_dels_\", 11) == 0 && strcmp(event, \"del\") == 0) {\n        if (RedisModule_GetContextFlags(ctx) & REDISMODULE_CTX_FLAGS_MASTER) {\n            dels++;\n            RedisModule_Replicate(ctx, \"keyspace.incr_dels\", \"\");\n        }\n        return REDISMODULE_OK;\n    }\n    if (cached_time) {\n        RedisModule_Assert(cached_time == RedisModule_CachedMicroseconds());\n        usleep(1);\n        RedisModule_Assert(cached_time != RedisModule_Microseconds());\n    }\n\n    if (strcmp(event, \"del\") == 0) {\n        RedisModuleString *copykey = RedisModule_CreateStringPrintf(ctx, \"%s_copy\", RedisModule_StringPtrLen(key, NULL));\n        RedisModuleCallReply* rep = RedisModule_Call(ctx, \"DEL\", \"s!\", copykey);\n        RedisModule_FreeString(ctx, copykey);\n        RedisModule_FreeCallReply(rep);\n\n        int ctx_flags = RedisModule_GetContextFlags(ctx);\n        if (ctx_flags & REDISMODULE_CTX_FLAGS_LUA) {\n            RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"c\", \"lua\");\n            RedisModule_FreeCallReply(rep);\n        }\n        if (ctx_flags & REDISMODULE_CTX_FLAGS_MULTI) {\n            RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"c\", \"multi\");\n            RedisModule_FreeCallReply(rep);\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NotificationExpired(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    REDISMODULE_NOT_USED(key);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"c!\", \"testkeyspace:expired\");\n    RedisModule_FreeCallReply(rep);\n\n    return REDISMODULE_OK;\n}\n\n/* This key miss notification handler performs a write command inside the notification callback.\n * Notice, it is discouraged and currently wrong to perform a write command inside a key miss event.\n * It can 
cause read commands to be replicated to the replica/aof. This test is here temporarily (for coverage and\n * verification that it's not crashing). */\nstatic int KeySpace_NotificationModuleKeyMiss(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    REDISMODULE_NOT_USED(key);\n\n    int flags = RedisModule_GetContextFlags(ctx);\n    if (!(flags & REDISMODULE_CTX_FLAGS_MASTER)) {\n        return REDISMODULE_OK; // ignore the event on replica\n    }\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"incr\", \"!c\", \"missed\");\n    RedisModule_FreeCallReply(rep);\n\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NotificationModuleString(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    RedisModuleKey *redis_key = RedisModule_OpenKey(ctx, key, REDISMODULE_READ);\n\n    size_t len = 0;\n    /* RedisModule_StringDMA could change the data format and cause the old robj to be freed.\n     * This code verifies that such a format change will not cause any crashes. */\n    char *data = RedisModule_StringDMA(redis_key, &len, REDISMODULE_READ);\n    int res = strncmp(data, \"dummy\", 5);\n    REDISMODULE_NOT_USED(res);\n\n    RedisModule_CloseKey(redis_key);\n\n    return REDISMODULE_OK;\n}\n\nstatic void KeySpace_PostNotificationStringFreePD(void *pd) {\n    RedisModule_FreeString(NULL, pd);\n}\n\nstatic void KeySpace_PostNotificationString(RedisModuleCtx *ctx, void *pd) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"incr\", \"!s\", pd);\n    RedisModule_FreeCallReply(rep);\n}\n\nstatic int KeySpace_NotificationModuleStringPostNotificationJob(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    const char 
*key_str = RedisModule_StringPtrLen(key, NULL);\n\n    if (strncmp(key_str, \"string1_\", 8) != 0) {\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleString *new_key = RedisModule_CreateStringPrintf(NULL, \"string_changed{%s}\", key_str);\n    RedisModule_AddPostNotificationJob(ctx, KeySpace_PostNotificationString, new_key, KeySpace_PostNotificationStringFreePD);\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NotificationModule(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    const char* keyName = RedisModule_StringPtrLen(key, NULL);\n    int nokey;\n    RedisModule_DictGetC(module_event_log, (void*)keyName, strlen(keyName), &nokey);\n    if(nokey){\n        RedisModule_DictSetC(module_event_log, (void*)keyName, strlen(keyName), RedisModule_HoldString(ctx, key));\n    }\n    return REDISMODULE_OK;\n}\n\nstatic int cmdNotify(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc != 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    RedisModule_NotifyKeyspaceEvent(ctx, REDISMODULE_NOTIFY_MODULE, \"notify\", argv[1]);\n    RedisModule_ReplyWithNull(ctx);\n    return REDISMODULE_OK;\n}\n\nstatic int cmdIsModuleKeyNotified(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc != 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* key  = RedisModule_StringPtrLen(argv[1], NULL);\n\n    int nokey;\n    RedisModuleString* keyStr = RedisModule_DictGetC(module_event_log, (void*)key, strlen(key), &nokey);\n\n    RedisModule_ReplyWithArray(ctx, 2);\n    RedisModule_ReplyWithLongLong(ctx, !nokey);\n    if(nokey){\n        RedisModule_ReplyWithNull(ctx);\n    }else{\n        RedisModule_ReplyWithString(ctx, keyStr);\n    }\n    return REDISMODULE_OK;\n}\n\nstatic int cmdIsKeyLoaded(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc != 2){\n        
return RedisModule_WrongArity(ctx);\n    }\n\n    const char* key  = RedisModule_StringPtrLen(argv[1], NULL);\n\n    int nokey;\n    RedisModuleString* keyStr = RedisModule_DictGetC(loaded_event_log, (void*)key, strlen(key), &nokey);\n\n    RedisModule_ReplyWithArray(ctx, 2);\n    RedisModule_ReplyWithLongLong(ctx, !nokey);\n    if(nokey){\n        RedisModule_ReplyWithNull(ctx);\n    }else{\n        RedisModule_ReplyWithString(ctx, keyStr);\n    }\n    return REDISMODULE_OK;\n}\n\nstatic int cmdDelKeyCopy(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    cached_time = RedisModule_CachedMicroseconds();\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"DEL\", \"s!\", argv[1]);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    cached_time = 0;\n    return REDISMODULE_OK;\n}\n\n/* Call INCR and propagate using RM_Call with `!`. */\nstatic int cmdIncrCase1(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"s!\", argv[1]);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    return REDISMODULE_OK;\n}\n\n/* Call INCR and propagate using RM_Replicate. 
*/\nstatic int cmdIncrCase2(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"s\", argv[1]);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    RedisModule_Replicate(ctx, \"INCR\", \"s\", argv[1]);\n    return REDISMODULE_OK;\n}\n\n/* Call INCR and propagate using RM_ReplicateVerbatim. */\nstatic int cmdIncrCase3(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2)\n        return RedisModule_WrongArity(ctx);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"s\", argv[1]);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\nstatic int cmdIncrDels(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    dels++;\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nstatic int cmdGetDels(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    return RedisModule_ReplyWithLongLong(ctx, dels);\n}\n\n/* Subkey notification callback */\nstatic void KeySpace_NotificationSubkeys(RedisModuleCtx *ctx, int type, const char *event,\n                                          RedisModuleString *key, RedisModuleString **subkeys, int count) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n\n    if (subkey_log_count >= SUBKEY_LOG_MAX) return;\n\n    const char *key_str = RedisModule_StringPtrLen(key, NULL);\n\n    /* Format: \"<event> <key> <count> <subkey1> 
<subkey2> ...\" or \"<event> <key> 0\" */\n    char buf[512];\n    int off = snprintf(buf, sizeof(buf), \"%s %s %d\", event, key_str, count);\n    for (int i = 0; i < count && (size_t)off < sizeof(buf) - 1; i++) {\n        const char *sk = RedisModule_StringPtrLen(subkeys[i], NULL);\n        off += snprintf(buf + off, sizeof(buf) - off, \" %s\", sk);\n    }\n    snprintf(subkey_log[subkey_log_count], sizeof(subkey_log[0]), \"%s\", buf);\n    subkey_log_count++;\n}\n\n/* keyspace.get_subkey_events — return all logged subkey events as an array */\nstatic int cmdGetSubkeyEvents(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithArray(ctx, subkey_log_count);\n    for (int i = 0; i < subkey_log_count; i++) {\n        RedisModule_ReplyWithCString(ctx, subkey_log[i]);\n    }\n    return REDISMODULE_OK;\n}\n\n/* keyspace.reset_subkey_events — clear the log */\nstatic int cmdResetSubkeyEvents(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    subkey_log_count = 0;\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* keyspace.notify_with_subkeys <key> <subkey1> [subkey2 ...] 
— trigger a module subkey notification */\nstatic int cmdNotifyWithSubkeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3) return RedisModule_WrongArity(ctx);\n\n    RedisModuleString *key = argv[1];\n    RedisModuleString **subkeys = &argv[2];\n    int count = argc - 2;\n\n    RedisModule_NotifyKeyspaceEventWithSubkeys(ctx, REDISMODULE_NOTIFY_HASH, \"module_subkey_event\", key, subkeys, count);\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* keyspace.subscribe_subkeys — subscribe with NONE flag (all events) */\nstatic int cmdSubscribeSubkeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_SubscribeToKeyspaceEventsWithSubkeys(ctx, REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_GENERIC,\n                                                         REDISMODULE_NOTIFY_FLAG_NONE, KeySpace_NotificationSubkeys) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR subscribe failed\");\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* keyspace.unsubscribe_subkeys — unsubscribe the subkey callback */\nstatic int cmdUnsubscribeSubkeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_UnsubscribeFromKeyspaceEventsWithSubkeys(ctx, REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_GENERIC,\n                                                             REDISMODULE_NOTIFY_FLAG_NONE, KeySpace_NotificationSubkeys) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR unsubscribe failed\");\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* keyspace.subscribe_require_subkeys — subscribe with SUBKEYS_REQUIRED flag */\nstatic int cmdSubscribeRequireSubkeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    
REDISMODULE_NOT_USED(argc);\n    if (RedisModule_SubscribeToKeyspaceEventsWithSubkeys(ctx, REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_GENERIC,\n                                                         REDISMODULE_NOTIFY_FLAG_SUBKEYS_REQUIRED,\n                                                         KeySpace_NotificationSubkeys) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR subscribe failed\");\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* keyspace.unsubscribe_require_subkeys — unsubscribe the SUBKEYS_REQUIRED callback */\nstatic int cmdUnsubscribeRequireSubkeys(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_UnsubscribeFromKeyspaceEventsWithSubkeys(ctx, REDISMODULE_NOTIFY_HASH | REDISMODULE_NOTIFY_GENERIC,\n                                                             REDISMODULE_NOTIFY_FLAG_SUBKEYS_REQUIRED,\n                                                             KeySpace_NotificationSubkeys) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR unsubscribe failed\");\n    }\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nstatic RedisModuleNotificationFunc get_callback_for_event(int event_mask) {\n    switch(event_mask) {\n    case REDISMODULE_NOTIFY_LOADED:\n        return KeySpace_NotificationLoaded;\n    case REDISMODULE_NOTIFY_GENERIC:\n        return KeySpace_NotificationGeneric;\n    case REDISMODULE_NOTIFY_EXPIRED:\n        return KeySpace_NotificationExpired;\n    case REDISMODULE_NOTIFY_MODULE:\n        return KeySpace_NotificationModule;\n    case REDISMODULE_NOTIFY_KEY_MISS:\n        return KeySpace_NotificationModuleKeyMiss;\n    case REDISMODULE_NOTIFY_STRING:\n        // We register two callbacks for STRING events in OnLoad;\n        // for simplicity, pick the first:\n        return KeySpace_NotificationModuleString;\n    default:\n        return 
NULL;\n    }\n}\n\nint GetCallbackCountCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithLongLong(ctx, callback_call_count);\n    return REDISMODULE_OK;\n}\n\nstatic int CmdUnsub(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    long long event_mask;\n    if (RedisModule_StringToLongLong(argv[1], &event_mask) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR invalid event mask\");\n    }\n\n    RedisModuleNotificationFunc cb = get_callback_for_event((int)event_mask);\n    if (cb == NULL) {\n        return RedisModule_ReplyWithError(ctx, \"ERR unknown event mask\");\n    }\n\n    if (RedisModule_UnsubscribeFromKeyspaceEvents(ctx, (int)event_mask, cb) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR unsubscribe failed\");\n    }\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. 
*/\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (RedisModule_Init(ctx,\"testkeyspace\",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    loaded_event_log = RedisModule_CreateDict(ctx);\n    module_event_log = RedisModule_CreateDict(ctx);\n\n    int keySpaceAll = RedisModule_GetKeyspaceNotificationFlagsAll();\n\n    if (!(keySpaceAll & REDISMODULE_NOTIFY_LOADED)) {\n        // REDISMODULE_NOTIFY_LOADED events are not supported; we cannot start\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_LOADED, KeySpace_NotificationLoaded) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_GENERIC, KeySpace_NotificationGeneric) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EXPIRED, KeySpace_NotificationExpired) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_MODULE, KeySpace_NotificationModule) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_MISS, KeySpace_NotificationModuleKeyMiss) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, KeySpace_NotificationModuleString) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, KeySpace_NotificationModuleStringPostNotificationJob) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"keyspace.notify\", cmdNotify,\"\",0,0,0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if 
(RedisModule_CreateCommand(ctx,\"keyspace.is_module_key_notified\", cmdIsModuleKeyNotified,\"\",0,0,0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"keyspace.is_key_loaded\", cmdIsKeyLoaded,\"\",0,0,0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.del_key_copy\", cmdDelKeyCopy,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.incr_case1\", cmdIncrCase1,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.incr_case2\", cmdIncrCase2,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.incr_case3\", cmdIncrCase3,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.incr_dels\", cmdIncrDels,\n                                  \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.get_dels\", cmdGetDels,\n                                  \"readonly\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.unsubscribe\", CmdUnsub, \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.callback_count\", GetCallbackCountCommand, \"\", 0, 0, 0)== REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.subscribe_subkeys\", cmdSubscribeSubkeys, \"\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n 
   }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.unsubscribe_subkeys\", cmdUnsubscribeSubkeys, \"\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.get_subkey_events\", cmdGetSubkeyEvents, \"readonly\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.reset_subkey_events\", cmdResetSubkeyEvents, \"\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.notify_with_subkeys\", cmdNotifyWithSubkeys, \"write\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.subscribe_require_subkeys\", cmdSubscribeRequireSubkeys, \"\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keyspace.unsubscribe_require_subkeys\", cmdUnsubscribeRequireSubkeys, \"\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (argc == 1) {\n        const char *ptr = RedisModule_StringPtrLen(argv[0], NULL);\n        if (!strcasecmp(ptr, \"noload\")) {\n            /* This is a hint that we return ERR at the last moment of OnLoad. 
*/\n            RedisModule_FreeDict(ctx, loaded_event_log);\n            RedisModule_FreeDict(ctx, module_event_log);\n            return REDISMODULE_ERR;\n        }\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(loaded_event_log, \"^\", NULL, 0);\n    char* key;\n    size_t keyLen;\n    RedisModuleString* val;\n    while((key = RedisModule_DictNextC(iter, &keyLen, (void**)&val))){\n        RedisModule_FreeString(ctx, val);\n    }\n    /* Stop the iterator before freeing the dict it refers to. */\n    RedisModule_DictIteratorStop(iter);\n    RedisModule_FreeDict(ctx, loaded_event_log);\n    loaded_event_log = NULL;\n\n    iter = RedisModule_DictIteratorStartC(module_event_log, \"^\", NULL, 0);\n    while((key = RedisModule_DictNextC(iter, &keyLen, (void**)&val))){\n        RedisModule_FreeString(ctx, val);\n    }\n    RedisModule_DictIteratorStop(iter);\n    RedisModule_FreeDict(ctx, module_event_log);\n    module_event_log = NULL;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/keyspecs.c",
    "content": "#include \"redismodule.h\"\n\n#define UNUSED(V) ((void) V)\n\n/* This function implements all commands in this module. All we care about is\n * the COMMAND metadata anyway. */\nint kspec_impl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    /* Handle getkeys-api introspection (for \"kspec.nonewithgetkeys\")  */\n    if (RedisModule_IsKeysPositionRequest(ctx)) {\n        for (int i = 1; i < argc; i += 2)\n            RedisModule_KeyAtPosWithFlags(ctx, i, REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS);\n\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint createKspecNone(RedisModuleCtx *ctx) {\n    /* A command without keyspecs; only the legacy (first,last,step) triple (MSET like spec). */\n    if (RedisModule_CreateCommand(ctx,\"kspec.none\",kspec_impl,\"\",1,-1,2) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n\nint createKspecNoneWithGetkeys(RedisModuleCtx *ctx) {\n    /* A command without keyspecs; only the legacy (first,last,step) triple (MSET like spec), but also has a getkeys callback */\n    if (RedisModule_CreateCommand(ctx,\"kspec.nonewithgetkeys\",kspec_impl,\"getkeys-api\",1,-1,2) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n\nint createKspecTwoRanges(RedisModuleCtx *ctx) {\n    /* Test that two position/range-based key specs are combined to produce the\n     * legacy (first,last,step) values representing both keys. 
*/\n    if (RedisModule_CreateCommand(ctx,\"kspec.tworanges\",kspec_impl,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *command = RedisModule_GetCommand(ctx,\"kspec.tworanges\");\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .arity = -2,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 2,\n                /* Omitted find_keys_type is shorthand for RANGE {0,1,0} */\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(command, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint createKspecTwoRangesWithGap(RedisModuleCtx *ctx) {\n    /* Test that two position/range-based key specs are combined to produce the\n     * legacy (first,last,step) values representing just one key. 
*/\n    if (RedisModule_CreateCommand(ctx,\"kspec.tworangeswithgap\",kspec_impl,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *command = RedisModule_GetCommand(ctx,\"kspec.tworangeswithgap\");\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .arity = -2,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 3,\n                /* Omitted find_keys_type is shorthand for RANGE {0,1,0} */\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(command, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint createKspecKeyword(RedisModuleCtx *ctx) {\n    /* Only keyword-based specs. The legacy triple is wiped and set to (0,0,0). 
*/\n    if (RedisModule_CreateCommand(ctx,\"kspec.keyword\",kspec_impl,\"\",3,-1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *command = RedisModule_GetCommand(ctx,\"kspec.keyword\");\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_KEYWORD,\n                .bs.keyword.keyword = \"KEYS\",\n                .bs.keyword.startfrom = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {-1,1,0}\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(command, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint createKspecComplex1(RedisModuleCtx *ctx) {\n    /* First is a range of a single key. The rest are keyword-based specs. 
*/\n    if (RedisModule_CreateCommand(ctx,\"kspec.complex1\",kspec_impl,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *command = RedisModule_GetCommand(ctx,\"kspec.complex1\");\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RO,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_KEYWORD,\n                .bs.keyword.keyword = \"STORE\",\n                .bs.keyword.startfrom = 2,\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_KEYWORD,\n                .bs.keyword.keyword = \"KEYS\",\n                .bs.keyword.startfrom = 2,\n                .find_keys_type = REDISMODULE_KSPEC_FK_KEYNUM,\n                .fk.keynum = {0,1,1}\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(command, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint createKspecComplex2(RedisModuleCtx *ctx) {\n    /* First is not legacy, more than STATIC_KEYS_SPECS_NUM specs */\n    if (RedisModule_CreateCommand(ctx,\"kspec.complex2\",kspec_impl,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *command = RedisModule_GetCommand(ctx,\"kspec.complex2\");\n    RedisModuleCommandInfo info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = 
REDISMODULE_KSPEC_BS_KEYWORD,\n                .bs.keyword.keyword = \"STORE\",\n                .bs.keyword.startfrom = 5,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 2,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 3,\n                .find_keys_type = REDISMODULE_KSPEC_FK_KEYNUM,\n                .fk.keynum = {0,1,1}\n            },\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_KEYWORD,\n                .bs.keyword.keyword = \"MOREKEYS\",\n                .bs.keyword.startfrom = 5,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {-1,1,0}\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(command, &info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"keyspecs\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return 
REDISMODULE_ERR;\n\n    if (createKspecNone(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecNoneWithGetkeys(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecTwoRanges(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecTwoRangesWithGap(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecKeyword(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecComplex1(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    if (createKspecComplex2(ctx) == REDISMODULE_ERR) return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/list.c",
    "content": "#include \"redismodule.h\"\n#include <assert.h>\n#include <errno.h>\n#include <strings.h>\n\n/* LIST.GETALL key [REVERSE] */\nint list_getall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2 || argc > 3) return RedisModule_WrongArity(ctx);\n    int reverse = (argc == 3 &&\n                   !strcasecmp(RedisModule_StringPtrLen(argv[2], NULL),\n                               \"REVERSE\"));\n    RedisModule_AutoMemory(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_LIST) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n    long n = RedisModule_ValueLength(key);\n    RedisModule_ReplyWithArray(ctx, n);\n    if (!reverse) {\n        for (long i = 0; i < n; i++) {\n            RedisModuleString *elem = RedisModule_ListGet(key, i);\n            RedisModule_ReplyWithString(ctx, elem);\n            RedisModule_FreeString(ctx, elem);\n        }\n    } else {\n        for (long i = -1; i >= -n; i--) {\n            RedisModuleString *elem = RedisModule_ListGet(key, i);\n            RedisModule_ReplyWithString(ctx, elem);\n            RedisModule_FreeString(ctx, elem);\n        }\n    }\n\n    /* Test error condition: index out of bounds */\n    assert(RedisModule_ListGet(key, n) == NULL);\n    assert(errno == EDOM); /* no more elements in list */\n\n    /* RedisModule_CloseKey(key); //implicit, done by auto memory */\n    return REDISMODULE_OK;\n}\n\n/* LIST.EDIT key [REVERSE] cmdstr [value ..]\n *\n * cmdstr is a string of the following characters:\n *\n *     k -- keep\n *     d -- delete\n *     i -- insert value from args\n *     r -- replace with value from args\n *\n * The number of occurrences of \"i\" and \"r\" in cmdstr should correspond to the\n * number of args after cmdstr.\n *\n * Reply with a RESP3 Map, containing the number of edits (inserts, replaces, deletes)\n * performed, as 
well as the last index and the entry it points to.\n */\nint list_edit(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3) return RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx);\n    int argpos = 1; /* the next arg */\n\n    /* key */\n    int keymode = REDISMODULE_READ | REDISMODULE_WRITE;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[argpos++], keymode);\n    if (RedisModule_KeyType(key) != REDISMODULE_KEYTYPE_LIST) {\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    /* REVERSE */\n    int reverse = 0;\n    if (argc >= 4 &&\n        !strcasecmp(RedisModule_StringPtrLen(argv[argpos], NULL), \"REVERSE\")) {\n        reverse = 1;\n        argpos++;\n    }\n\n    /* cmdstr */\n    size_t cmdstr_len;\n    const char *cmdstr = RedisModule_StringPtrLen(argv[argpos++], &cmdstr_len);\n\n    /* validate cmdstr vs. argc */\n    long num_req_args = 0;\n    long min_list_length = 0;\n    for (size_t cmdpos = 0; cmdpos < cmdstr_len; cmdpos++) {\n        char c = cmdstr[cmdpos];\n        if (c == 'i' || c == 'r') num_req_args++;\n        if (c == 'd' || c == 'r' || c == 'k') min_list_length++;\n    }\n    if (argc < argpos + num_req_args) {\n        return RedisModule_ReplyWithError(ctx, \"ERR too few args\");\n    }\n    if ((long)RedisModule_ValueLength(key) < min_list_length) {\n        return RedisModule_ReplyWithError(ctx, \"ERR list too short\");\n    }\n\n    /* Iterate over the chars in cmdstr (edit instructions) */\n    long long num_inserts = 0, num_deletes = 0, num_replaces = 0;\n    long index = reverse ? -1 : 0;\n    RedisModuleString *value;\n\n    for (size_t cmdpos = 0; cmdpos < cmdstr_len; cmdpos++) {\n        switch (cmdstr[cmdpos]) {\n        case 'i': /* insert */\n            value = argv[argpos++];\n            assert(RedisModule_ListInsert(key, index, value) == REDISMODULE_OK);\n            index += reverse ? 
-1 : 1;\n            num_inserts++;\n            break;\n        case 'd': /* delete */\n            assert(RedisModule_ListDelete(key, index) == REDISMODULE_OK);\n            num_deletes++;\n            break;\n        case 'r': /* replace */\n            value = argv[argpos++];\n            assert(RedisModule_ListSet(key, index, value) == REDISMODULE_OK);\n            index += reverse ? -1 : 1;\n            num_replaces++;\n            break;\n        case 'k': /* keep */\n            index += reverse ? -1 : 1;\n            break;\n        }\n    }\n\n    RedisModuleString *v = RedisModule_ListGet(key, index);\n    RedisModule_ReplyWithMap(ctx, v ? 5 : 4);\n    RedisModule_ReplyWithCString(ctx, \"i\");\n    RedisModule_ReplyWithLongLong(ctx, num_inserts);\n    RedisModule_ReplyWithCString(ctx, \"d\");\n    RedisModule_ReplyWithLongLong(ctx, num_deletes);\n    RedisModule_ReplyWithCString(ctx, \"r\");\n    RedisModule_ReplyWithLongLong(ctx, num_replaces);\n    RedisModule_ReplyWithCString(ctx, \"index\");\n    RedisModule_ReplyWithLongLong(ctx, index);\n    if (v) {\n        RedisModule_ReplyWithCString(ctx, \"entry\");\n        RedisModule_ReplyWithString(ctx, v);\n        RedisModule_FreeString(ctx, v);\n    } \n\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* Reply based on errno as set by the List API functions. 
*/\nstatic int replyByErrno(RedisModuleCtx *ctx) {\n    switch (errno) {\n    case EDOM:\n        return RedisModule_ReplyWithError(ctx, \"ERR index out of bounds\");\n    case ENOTSUP:\n        return RedisModule_ReplyWithError(ctx, REDISMODULE_ERRORMSG_WRONGTYPE);\n    default: assert(0); /* Can't happen */\n    }\n}\n\n/* LIST.GET key index */\nint list_get(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    long long index;\n    if (RedisModule_StringToLongLong(argv[2], &index) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"ERR index must be a number\");\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    RedisModuleString *value = RedisModule_ListGet(key, index);\n    if (value) {\n        RedisModule_ReplyWithString(ctx, value);\n        RedisModule_FreeString(ctx, value);\n    } else {\n        replyByErrno(ctx);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* LIST.SET key index value */\nint list_set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n    long long index;\n    if (RedisModule_StringToLongLong(argv[2], &index) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR index must be a number\");\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_ListSet(key, index, argv[3]) == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        replyByErrno(ctx);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* LIST.INSERT key index value\n *\n * If index is negative, value is inserted after, otherwise before the element\n * at index.\n */\nint list_insert(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n 
   long long index;\n    if (RedisModule_StringToLongLong(argv[2], &index) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR index must be a number\");\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_ListInsert(key, index, argv[3]) == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        replyByErrno(ctx);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* LIST.DELETE key index */\nint list_delete(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    long long index;\n    if (RedisModule_StringToLongLong(argv[2], &index) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR index must be a number\");\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_ListDelete(key, index) == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        replyByErrno(ctx);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"list\", 1, REDISMODULE_APIVER_1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.getall\", list_getall, \"\",\n                                  1, 1, 1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.edit\", list_edit, \"write\",\n                                  1, 1, 1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.get\", list_get, \"write\",\n                                  1, 1, 1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.set\", list_set, \"write\",\n                                  1, 1, 
1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.insert\", list_insert, \"write\",\n                                  1, 1, 1) == REDISMODULE_OK &&\n        RedisModule_CreateCommand(ctx, \"list.delete\", list_delete, \"write\",\n                                  1, 1, 1) == REDISMODULE_OK) {\n        return REDISMODULE_OK;\n    } else {\n        return REDISMODULE_ERR;\n    }\n}\n"
  },
  {
    "path": "tests/modules/mallocsize.c",
    "content": "#include \"redismodule.h\"\n#include <string.h>\n#include <assert.h>\n#include <unistd.h>\n\n#define UNUSED(V) ((void) V)\n\n/* Registered type */\nRedisModuleType *mallocsize_type = NULL;\n\ntypedef enum {\n    UDT_RAW,\n    UDT_STRING,\n    UDT_DICT\n} udt_type_t;\n\ntypedef struct {\n    void *ptr;\n    size_t len;\n} raw_t;\n\ntypedef struct {\n    udt_type_t type;\n    union {\n        raw_t raw;\n        RedisModuleString *str;\n        RedisModuleDict *dict;\n    } data;\n} udt_t;\n\nvoid udt_free(void *value) {\n    udt_t *udt = value;\n    switch (udt->type) {\n        case (UDT_RAW): {\n            RedisModule_Free(udt->data.raw.ptr);\n            break;\n        }\n        case (UDT_STRING): {\n            RedisModule_FreeString(NULL, udt->data.str);\n            break;\n        }\n        case (UDT_DICT): {\n            RedisModuleString *dk, *dv;\n            RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(udt->data.dict, \"^\", NULL, 0);\n            while((dk = RedisModule_DictNext(NULL, iter, (void **)&dv)) != NULL) {\n                RedisModule_FreeString(NULL, dk);\n                RedisModule_FreeString(NULL, dv);\n            }\n            RedisModule_DictIteratorStop(iter);\n            RedisModule_FreeDict(NULL, udt->data.dict);\n            break;\n        }\n    }\n    RedisModule_Free(udt);\n}\n\nvoid udt_rdb_save(RedisModuleIO *rdb, void *value) {\n    udt_t *udt = value;\n    RedisModule_SaveUnsigned(rdb, udt->type);\n    switch (udt->type) {\n        case (UDT_RAW): {\n            RedisModule_SaveStringBuffer(rdb, udt->data.raw.ptr, udt->data.raw.len);\n            break;\n        }\n        case (UDT_STRING): {\n            RedisModule_SaveString(rdb, udt->data.str);\n            break;\n        }\n        case (UDT_DICT): {\n            RedisModule_SaveUnsigned(rdb, RedisModule_DictSize(udt->data.dict));\n            RedisModuleString *dk, *dv;\n            RedisModuleDictIter *iter = 
RedisModule_DictIteratorStartC(udt->data.dict, \"^\", NULL, 0);\n            while((dk = RedisModule_DictNext(NULL, iter, (void **)&dv)) != NULL) {\n                RedisModule_SaveString(rdb, dk);\n                RedisModule_SaveString(rdb, dv);\n                RedisModule_FreeString(NULL, dk); /* Allocated by RedisModule_DictNext */\n            }\n            RedisModule_DictIteratorStop(iter);\n            break;\n        }\n    }\n}\n\nvoid *udt_rdb_load(RedisModuleIO *rdb, int encver) {\n    if (encver != 0)\n        return NULL;\n    udt_t *udt = RedisModule_Alloc(sizeof(*udt));\n    udt->type = RedisModule_LoadUnsigned(rdb);\n    switch (udt->type) {\n        case (UDT_RAW): {\n            udt->data.raw.ptr = RedisModule_LoadStringBuffer(rdb, &udt->data.raw.len);\n            break;\n        }\n        case (UDT_STRING): {\n            udt->data.str = RedisModule_LoadString(rdb);\n            break;\n        }\n        case (UDT_DICT): {\n            long long dict_len = RedisModule_LoadUnsigned(rdb);\n            udt->data.dict = RedisModule_CreateDict(NULL);\n            /* dict_len counts entries; each iteration loads one key-value pair. */\n            for (int i = 0; i < dict_len; i++) {\n                RedisModuleString *key = RedisModule_LoadString(rdb);\n                RedisModuleString *val = RedisModule_LoadString(rdb);\n                RedisModule_DictSet(udt->data.dict, key, val);\n                RedisModule_FreeString(NULL, key); /* DictSet copies the key */\n            }\n            break;\n        }\n    }\n\n    return udt;\n}\n\nsize_t udt_mem_usage(RedisModuleKeyOptCtx *ctx, const void *value, size_t sample_size) {\n    UNUSED(ctx);\n    UNUSED(sample_size);\n    \n    const udt_t *udt = value;\n    size_t size = sizeof(*udt);\n    \n    switch (udt->type) {\n        case (UDT_RAW): {\n            size += RedisModule_MallocSize(udt->data.raw.ptr);\n            break;\n        }\n        case (UDT_STRING): {\n            size += RedisModule_MallocSizeString(udt->data.str);\n            break;\n        }\n        case (UDT_DICT): {\n            void *dk;\n            size_t keylen;\n            
RedisModuleString *dv;\n            RedisModuleDictIter *iter = RedisModule_DictIteratorStartC(udt->data.dict, \"^\", NULL, 0);\n            while((dk = RedisModule_DictNextC(iter, &keylen, (void **)&dv)) != NULL) {\n                size += keylen;\n                size += RedisModule_MallocSizeString(dv);\n            }\n            RedisModule_DictIteratorStop(iter);\n            break;\n        }\n    }\n    \n    return size;\n}\n\n/* MALLOCSIZE.SETRAW key len */\nint cmd_setraw(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n        \n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n\n    udt_t *udt = RedisModule_Alloc(sizeof(*udt));\n    udt->type = UDT_RAW;\n    \n    long long raw_len;\n    RedisModule_StringToLongLong(argv[2], &raw_len);\n    udt->data.raw.ptr = RedisModule_Alloc(raw_len);\n    udt->data.raw.len = raw_len;\n    \n    RedisModule_ModuleTypeSetValue(key, mallocsize_type, udt);\n    RedisModule_CloseKey(key);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* MALLOCSIZE.SETSTR key string */\nint cmd_setstr(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n        \n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n\n    udt_t *udt = RedisModule_Alloc(sizeof(*udt));\n    udt->type = UDT_STRING;\n    \n    udt->data.str = argv[2];\n    RedisModule_RetainString(ctx, argv[2]);\n    \n    RedisModule_ModuleTypeSetValue(key, mallocsize_type, udt);\n    RedisModule_CloseKey(key);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\n/* MALLOCSIZE.SETDICT key field value [field value ...] 
*/\nint cmd_setdict(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 4 || argc % 2)\n        return RedisModule_WrongArity(ctx);\n        \n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n\n    udt_t *udt = RedisModule_Alloc(sizeof(*udt));\n    udt->type = UDT_DICT;\n    \n    udt->data.dict = RedisModule_CreateDict(ctx);\n    for (int i = 2; i < argc; i += 2) {\n        RedisModule_DictSet(udt->data.dict, argv[i], argv[i+1]);\n        /* No need to retain argv[i], it is copied as the rax key */\n        RedisModule_RetainString(ctx, argv[i+1]);   \n    }\n    \n    RedisModule_ModuleTypeSetValue(key, mallocsize_type, udt);\n    RedisModule_CloseKey(key);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    if (RedisModule_Init(ctx,\"mallocsize\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n        \n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = udt_rdb_load,\n        .rdb_save = udt_rdb_save,\n        .free = udt_free,\n        .mem_usage2 = udt_mem_usage,\n    };\n\n    mallocsize_type = RedisModule_CreateDataType(ctx, \"allocsize\", 0, &tm);\n    if (mallocsize_type == NULL)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"mallocsize.setraw\", cmd_setraw, \"\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n        \n    if (RedisModule_CreateCommand(ctx, \"mallocsize.setstr\", cmd_setstr, \"\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n        \n    if (RedisModule_CreateCommand(ctx, \"mallocsize.setdict\", cmd_setdict, \"\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/misc.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <assert.h>\n#include <unistd.h>\n#include <errno.h>\n#include <limits.h>\n\n#define UNUSED(x) (void)(x)\n\nstatic int n_events = 0;\n\nstatic int KeySpace_NotificationModuleKeyMissExpired(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    UNUSED(ctx);\n    UNUSED(type);\n    UNUSED(event);\n    UNUSED(key);\n    n_events++;\n    return REDISMODULE_OK;\n}\n\nint test_clear_n_events(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    n_events = 0;\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint test_get_n_events(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModule_ReplyWithLongLong(ctx, n_events);\n    return REDISMODULE_OK;\n}\n\nint test_open_key_no_effects(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc<2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    int supportedMode = RedisModule_GetOpenKeyModesAll();\n    if (!(supportedMode & REDISMODULE_READ) || !(supportedMode & REDISMODULE_OPEN_KEY_NOEFFECTS)) {\n        RedisModule_ReplyWithError(ctx, \"OpenKey modes are not supported\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ | REDISMODULE_OPEN_KEY_NOEFFECTS);\n    if (!key) {\n        RedisModule_ReplyWithError(ctx, \"key not found\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint test_call_generic(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc<2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    const char* cmdname = RedisModule_StringPtrLen(argv[1], NULL);\n    RedisModuleCallReply *reply = 
RedisModule_Call(ctx, cmdname, \"v\", argv+2, (size_t)argc-2);\n    if (reply) {\n        RedisModule_ReplyWithCallReply(ctx, reply);\n        RedisModule_FreeCallReply(reply);\n    } else {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n    }\n    return REDISMODULE_OK;\n}\n\nint test_call_info(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    RedisModuleCallReply *reply;\n    if (argc>1)\n        reply = RedisModule_Call(ctx, \"info\", \"s\", argv[1]);\n    else\n        reply = RedisModule_Call(ctx, \"info\", \"\");\n    if (reply) {\n        RedisModule_ReplyWithCallReply(ctx, reply);\n        RedisModule_FreeCallReply(reply);\n    } else {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n    }\n    return REDISMODULE_OK;\n}\n\nint test_ld_conv(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    long double ld = 0.00000000000000001L;\n    const char *ldstr = \"0.00000000000000001\";\n    RedisModuleString *s1 = RedisModule_CreateStringFromLongDouble(ctx, ld, 1);\n    RedisModuleString *s2 =\n        RedisModule_CreateString(ctx, ldstr, strlen(ldstr));\n    if (RedisModule_StringCompare(s1, s2) != 0) {\n        char err[4096];\n        snprintf(err, 4096,\n            \"Failed to convert long double to string ('%s' != '%s')\",\n            RedisModule_StringPtrLen(s1, NULL),\n            RedisModule_StringPtrLen(s2, NULL));\n        RedisModule_ReplyWithError(ctx, err);\n        goto final;\n    }\n    long double ld2 = 0;\n    if (RedisModule_StringToLongDouble(s2, &ld2) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx,\n            \"Failed to convert string to long double\");\n        goto final;\n    }\n    if (ld2 != ld) {\n        char err[4096];\n        snprintf(err, 4096,\n            \"Failed to convert string to long double (%.40Lf != %.40Lf)\",\n            ld2,\n            ld);\n        RedisModule_ReplyWithError(ctx, err);\n        goto final;\n   
 }\n\n    /* Make sure we can't convert a string that has \\0 in it */\n    char buf[4] = \"123\";\n    buf[1] = '\\0';\n    RedisModuleString *s3 = RedisModule_CreateString(ctx, buf, 3);\n    long double ld3;\n    if (RedisModule_StringToLongDouble(s3, &ld3) == REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid string successfully converted to long double\");\n        RedisModule_FreeString(ctx, s3);\n        goto final;\n    }\n    RedisModule_FreeString(ctx, s3);\n\n    RedisModule_ReplyWithLongDouble(ctx, ld2);\nfinal:\n    RedisModule_FreeString(ctx, s1);\n    RedisModule_FreeString(ctx, s2);\n    return REDISMODULE_OK;\n}\n\nint test_flushall(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ResetDataset(1, 0);\n    RedisModule_ReplyWithCString(ctx, \"Ok\");\n    return REDISMODULE_OK;\n}\n\nint test_dbsize(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    long long ll = RedisModule_DbSize(ctx);\n    RedisModule_ReplyWithLongLong(ctx, ll);\n    return REDISMODULE_OK;\n}\n\nint test_randomkey(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleString *str = RedisModule_RandomKey(ctx);\n    RedisModule_ReplyWithString(ctx, str);\n    RedisModule_FreeString(ctx, str);\n    return REDISMODULE_OK;\n}\n\nint test_keyexists(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2) return RedisModule_WrongArity(ctx);\n    RedisModuleString *key = argv[1];\n    int exists = RedisModule_KeyExists(ctx, key);\n    return RedisModule_ReplyWithBool(ctx, exists);\n}\n\nRedisModuleKey *open_key_or_reply(RedisModuleCtx *ctx, RedisModuleString *keyname, int mode) {\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, mode);\n    if (!key) {\n        
RedisModule_ReplyWithError(ctx, \"key not found\");\n        return NULL;\n    }\n    return key;\n}\n\nint test_getlru(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc<2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = open_key_or_reply(ctx, argv[1], REDISMODULE_READ|REDISMODULE_OPEN_KEY_NOTOUCH);\n    if (!key) return REDISMODULE_OK; /* error reply already sent */\n    mstime_t lru;\n    RedisModule_GetLRU(key, &lru);\n    RedisModule_ReplyWithLongLong(ctx, lru);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_setlru(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc<3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = open_key_or_reply(ctx, argv[1], REDISMODULE_READ|REDISMODULE_OPEN_KEY_NOTOUCH);\n    if (!key) return REDISMODULE_OK; /* error reply already sent */\n    mstime_t lru;\n    if (RedisModule_StringToLongLong(argv[2], &lru) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"invalid idle time\");\n        return REDISMODULE_OK;\n    }\n    int was_set = RedisModule_SetLRU(key, lru)==REDISMODULE_OK;\n    RedisModule_ReplyWithLongLong(ctx, was_set);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_getlfu(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc<2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = open_key_or_reply(ctx, argv[1], REDISMODULE_READ|REDISMODULE_OPEN_KEY_NOTOUCH);\n    if (!key) return REDISMODULE_OK; /* error reply already sent */\n    mstime_t lfu;\n    RedisModule_GetLFU(key, &lfu);\n    RedisModule_ReplyWithLongLong(ctx, lfu);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_setlfu(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc<3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    RedisModuleKey *key = open_key_or_reply(ctx, argv[1], REDISMODULE_READ|REDISMODULE_OPEN_KEY_NOTOUCH);\n    if (!key) return REDISMODULE_OK; /* error reply already sent */\n    mstime_t lfu;\n    if (RedisModule_StringToLongLong(argv[2], 
&lfu) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"invalid freq\");\n        return REDISMODULE_OK;\n    }\n    int was_set = RedisModule_SetLFU(key, lfu)==REDISMODULE_OK;\n    RedisModule_ReplyWithLongLong(ctx, was_set);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint test_redisversion(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    (void) argv;\n    (void) argc;\n\n    int version = RedisModule_GetServerVersion();\n    int patch = version & 0x000000ff;\n    int minor = (version & 0x0000ff00) >> 8;\n    int major = (version & 0x00ff0000) >> 16;\n\n    RedisModuleString* vStr = RedisModule_CreateStringPrintf(ctx, \"%d.%d.%d\", major, minor, patch);\n    RedisModule_ReplyWithString(ctx, vStr);\n    RedisModule_FreeString(ctx, vStr);\n  \n    return REDISMODULE_OK;\n}\n\nint test_getclientcert(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    (void) argv;\n    (void) argc;\n\n    RedisModuleString *cert = RedisModule_GetClientCertificate(ctx,\n            RedisModule_GetClientId(ctx));\n    if (!cert) {\n        RedisModule_ReplyWithNull(ctx);\n    } else {\n        RedisModule_ReplyWithString(ctx, cert);\n        RedisModule_FreeString(ctx, cert);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint test_clientinfo(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    (void) argv;\n    (void) argc;\n\n    RedisModuleClientInfoV1 ci = REDISMODULE_CLIENTINFO_INITIALIZER_V1;\n    uint64_t client_id = RedisModule_GetClientId(ctx);\n\n    /* Check expected result from the V1 initializer. */\n    assert(ci.version == 1);\n    /* Trying to populate a future version of the struct should fail. 
*/\n    ci.version = REDISMODULE_CLIENTINFO_VERSION + 1;\n    assert(RedisModule_GetClientInfoById(&ci, client_id) == REDISMODULE_ERR);\n\n    ci.version = 1;\n    if (RedisModule_GetClientInfoById(&ci, client_id) == REDISMODULE_ERR) {\n            RedisModule_ReplyWithError(ctx, \"failed to get client info\");\n            return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithArray(ctx, 10);\n    char flags[512];\n    snprintf(flags, sizeof(flags) - 1, \"%s:%s:%s:%s:%s:%s\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_SSL ? \"ssl\" : \"\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_PUBSUB ? \"pubsub\" : \"\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_BLOCKED ? \"blocked\" : \"\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_TRACKING ? \"tracking\" : \"\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_UNIXSOCKET ? \"unixsocket\" : \"\",\n        ci.flags & REDISMODULE_CLIENTINFO_FLAG_MULTI ? \"multi\" : \"\");\n\n    RedisModule_ReplyWithCString(ctx, \"flags\");\n    RedisModule_ReplyWithCString(ctx, flags);\n    RedisModule_ReplyWithCString(ctx, \"id\");\n    RedisModule_ReplyWithLongLong(ctx, ci.id);\n    RedisModule_ReplyWithCString(ctx, \"addr\");\n    RedisModule_ReplyWithCString(ctx, ci.addr);\n    RedisModule_ReplyWithCString(ctx, \"port\");\n    RedisModule_ReplyWithLongLong(ctx, ci.port);\n    RedisModule_ReplyWithCString(ctx, \"db\");\n    RedisModule_ReplyWithLongLong(ctx, ci.db);\n\n    return REDISMODULE_OK;\n}\n\nint test_getname(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    (void)argv;\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n    unsigned long long id = RedisModule_GetClientId(ctx);\n    RedisModuleString *name = RedisModule_GetClientNameById(ctx, id);\n    if (name == NULL)\n        return RedisModule_ReplyWithError(ctx, \"-ERR No name\");\n    RedisModule_ReplyWithString(ctx, name);\n    RedisModule_FreeString(ctx, name);\n    return REDISMODULE_OK;\n}\n\nint test_setname(RedisModuleCtx 
*ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    unsigned long long id = RedisModule_GetClientId(ctx);\n    if (RedisModule_SetClientNameById(id, argv[1]) == REDISMODULE_OK)\n        return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    else\n        return RedisModule_ReplyWithError(ctx, strerror(errno));\n}\n\nint test_log_tsctx(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    /* Create the detached context only after the arity check, so it is not\n     * leaked on the early-return path. */\n    RedisModuleCtx *tsctx = RedisModule_GetDetachedThreadSafeContext(ctx);\n\n    char level[50];\n    size_t level_len;\n    const char *level_str = RedisModule_StringPtrLen(argv[1], &level_len);\n    snprintf(level, sizeof(level) - 1, \"%.*s\", (int) level_len, level_str);\n\n    size_t msg_len;\n    const char *msg_str = RedisModule_StringPtrLen(argv[2], &msg_len);\n\n    RedisModule_Log(tsctx, level, \"%.*s\", (int) msg_len, msg_str);\n    RedisModule_FreeThreadSafeContext(tsctx);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint test_weird_cmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint test_monotonic_time(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithLongLong(ctx, RedisModule_MonotonicMicroseconds());\n    return REDISMODULE_OK;\n}\n\n/* wrapper for RM_Call */\nint test_rm_call(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc < 2){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char* cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, \"Ev\", argv + 2, (size_t)argc - 2);\n    if(!rep){\n        
RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    }else{\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* wrapper for RM_Call which also replicates the module command */\nint test_rm_call_replicate(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    test_rm_call(ctx, argv, argc);\n    RedisModule_ReplicateVerbatim(ctx);\n\n    return REDISMODULE_OK;\n}\n\n/* wrapper for RM_Call with flags */\nint test_rm_call_flags(RedisModuleCtx *ctx, RedisModuleString **argv, int argc){\n    if(argc < 3){\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Append Ev to the provided flags. */\n    RedisModuleString *flags = RedisModule_CreateStringFromString(ctx, argv[1]);\n    RedisModule_StringAppendBuffer(ctx, flags, \"Ev\", 2);\n\n    const char* flg = RedisModule_StringPtrLen(flags, NULL);\n    const char* cmd = RedisModule_StringPtrLen(argv[2], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, flg, argv + 3, (size_t)argc - 3);\n    if(!rep){\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    }else{\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    RedisModule_FreeString(ctx, flags);\n\n    return REDISMODULE_OK;\n}\n\nint test_ull_conv(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    unsigned long long ull = 18446744073709551615ULL;\n    const char *ullstr = \"18446744073709551615\";\n\n    RedisModuleString *s1 = RedisModule_CreateStringFromULongLong(ctx, ull);\n    RedisModuleString *s2 =\n        RedisModule_CreateString(ctx, ullstr, strlen(ullstr));\n    if (RedisModule_StringCompare(s1, s2) != 0) {\n        char err[4096];\n        snprintf(err, 4096,\n            \"Failed to convert unsigned long long to string ('%s' != '%s')\",\n            RedisModule_StringPtrLen(s1, NULL),\n            
RedisModule_StringPtrLen(s2, NULL));\n        RedisModule_ReplyWithError(ctx, err);\n        goto final;\n    }\n    unsigned long long ull2 = 0;\n    if (RedisModule_StringToULongLong(s2, &ull2) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx,\n            \"Failed to convert string to unsigned long long\");\n        goto final;\n    }\n    if (ull2 != ull) {\n        char err[4096];\n        snprintf(err, 4096,\n            \"Failed to convert string to unsigned long long (%llu != %llu)\",\n            ull2,\n            ull);\n        RedisModule_ReplyWithError(ctx, err);\n        goto final;\n    }\n    \n    /* Make sure we can't convert a string more than ULLONG_MAX or less than 0 */\n    ullstr = \"18446744073709551616\";\n    RedisModuleString *s3 = RedisModule_CreateString(ctx, ullstr, strlen(ullstr));\n    unsigned long long ull3;\n    if (RedisModule_StringToULongLong(s3, &ull3) == REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid string successfully converted to unsigned long long\");\n        RedisModule_FreeString(ctx, s3);\n        goto final;\n    }\n    RedisModule_FreeString(ctx, s3);\n    ullstr = \"-1\";\n    RedisModuleString *s4 = RedisModule_CreateString(ctx, ullstr, strlen(ullstr));\n    unsigned long long ull4;\n    if (RedisModule_StringToULongLong(s4, &ull4) == REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid string successfully converted to unsigned long long\");\n        RedisModule_FreeString(ctx, s4);\n        goto final;\n    }\n    RedisModule_FreeString(ctx, s4);\n   \n    RedisModule_ReplyWithSimpleString(ctx, \"ok\");\n\nfinal:\n    RedisModule_FreeString(ctx, s1);\n    RedisModule_FreeString(ctx, s2);\n    return REDISMODULE_OK;\n}\n\nint test_malloc_api(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    void *p;\n\n    p = RedisModule_TryAlloc(1024);\n    memset(p, 0, 1024);\n    RedisModule_Free(p);\n\n    p = 
RedisModule_TryCalloc(1, 1024);\n    memset(p, 1, 1024);\n\n    p = RedisModule_TryRealloc(p, 5 * 1024);\n    memset(p, 1, 5 * 1024);\n    RedisModule_Free(p);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint test_keyslot(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    /* Static check of the ClusterKeySlot + ClusterKeySlotC + ClusterCanonicalKeyNameInSlot\n     * round-trip for all slots. */\n    for (unsigned int slot = 0; slot < 16384; slot++) {\n        const char *tag = RedisModule_ClusterCanonicalKeyNameInSlot(slot);\n        RedisModuleString *key = RedisModule_CreateStringPrintf(ctx, \"x{%s}y\", tag);\n        assert(slot == RedisModule_ClusterKeySlot(key));\n        size_t len;\n        const char *key_c = RedisModule_StringPtrLen(key, &len);\n        assert(slot == RedisModule_ClusterKeySlotC(key_c, len));\n        RedisModule_FreeString(ctx, key);\n    }\n    if (argc != 2){\n        return RedisModule_WrongArity(ctx);\n    }\n    unsigned int slot = RedisModule_ClusterKeySlot(argv[1]);\n    return RedisModule_ReplyWithLongLong(ctx, slot);\n}\n\nint only_reply_ok(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Test command for RM_SignalModifiedKey. 
*/\nint test_signalmodifiedkey(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    /* Manually signal that the key was modified */\n    RedisModule_SignalModifiedKey(ctx, argv[1]);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"misc\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_KEY_MISS | REDISMODULE_NOTIFY_EXPIRED, KeySpace_NotificationModuleKeyMissExpired) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"test.call_generic\", test_call_generic,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.call_info\", test_call_info,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.ld_conversion\", test_ld_conv, \"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.ull_conversion\", test_ull_conv, \"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.flushall\", test_flushall,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.dbsize\", test_dbsize,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.randomkey\", test_randomkey,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.keyexists\", test_keyexists,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.setlru\", 
test_setlru,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.getlru\", test_getlru,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.setlfu\", test_setlfu,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.getlfu\", test_getlfu,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.clientinfo\", test_clientinfo,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.getname\", test_getname,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.setname\", test_setname,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.redisversion\", test_redisversion,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.getclientcert\", test_getclientcert,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.log_tsctx\", test_log_tsctx,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    /* Add a command with ':' in its name, so that we can check commandstats sanitization. 
*/\n    if (RedisModule_CreateCommand(ctx,\"test.weird:cmd\", test_weird_cmd,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.monotonic_time\", test_monotonic_time,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.rm_call\", test_rm_call,\"allow-stale\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.rm_call_flags\", test_rm_call_flags,\"allow-stale\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.rm_call_replicate\", test_rm_call_replicate,\"allow-stale\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.silent_open_key\", test_open_key_no_effects,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.get_n_events\", test_get_n_events,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.clear_n_events\", test_clear_n_events,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.malloc_api\", test_malloc_api,\"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.keyslot\", test_keyslot, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.incompatible_cluster_cmd\", only_reply_ok, \"\", 1, -1, 2) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.no_cluster_cmd\", NULL, \"no-cluster\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *parent = RedisModule_GetCommand(ctx, \"test.no_cluster_cmd\");\n    if (RedisModule_CreateSubcommand(parent, \"set\", only_reply_ok, 
\"no-cluster\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"test.signalmodifiedkey\", test_signalmodifiedkey, \"write\", 1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/moduleauthtwo.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n\n/* This is a second sample module to validate that module authentication callbacks can be registered\n * from multiple modules. */\n\n/* Non Blocking Module Auth callback / implementation. */\nint auth_cb(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *password, RedisModuleString **err) {\n    const char *user = RedisModule_StringPtrLen(username, NULL);\n    const char *pwd = RedisModule_StringPtrLen(password, NULL);\n    if (!strcmp(user,\"foo\") && !strcmp(pwd,\"allow_two\")) {\n        RedisModule_AuthenticateClientWithACLUser(ctx, \"foo\", 3, NULL, NULL, NULL);\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    else if (!strcmp(user,\"foo\") && !strcmp(pwd,\"deny_two\")) {\n        RedisModuleString *log = RedisModule_CreateString(ctx, \"Module Auth\", 11);\n        RedisModule_ACLAddLogEntryByUserName(ctx, username, log, REDISMODULE_ACL_LOG_AUTH);\n        RedisModule_FreeString(ctx, log);\n        const char *err_msg = \"Auth denied by Misc Module.\";\n        *err = RedisModule_CreateString(ctx, err_msg, strlen(err_msg));\n        return REDISMODULE_AUTH_HANDLED;\n    }\n    return REDISMODULE_AUTH_NOT_HANDLED;\n}\n\nint test_rm_register_auth_cb(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_RegisterAuthCallback(ctx, auth_cb);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"moduleauthtwo\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"testmoduletwo.rm_register_auth_cb\", test_rm_register_auth_cb,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    return REDISMODULE_OK;\n}"
  },
  {
    "path": "tests/modules/moduleconfigs.c",
    "content": "#include \"redismodule.h\"\n#include <string.h>\n#include <strings.h>\n#include <assert.h>\nint mutable_bool_val, no_prefix_bool, no_prefix_bool2;\nint immutable_bool_val;\nlong long longval, no_prefix_longval;\nlong long memval, no_prefix_memval;\nRedisModuleString *strval = NULL;\nRedisModuleString *strval2 = NULL;\nint enumval, no_prefix_enumval;\nint flagsval;\n\n/* Series of get and set callbacks for each type of config, these rely on the privdata ptr\n * to point to the config, and they register the configs as such. Note that one could also just\n * use names if they wanted, and store anything in privdata. */\nint getBoolConfigCommand(const char *name, void *privdata) {\n    assert(strcmp(name, \"mutable_bool\") == 0 || strcmp(name, \"immutable_bool\") == 0);\n    return (*(int *)privdata);\n}\n\nint setBoolConfigCommand(const char *name, int new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"mutable_bool\") == 0 || strcmp(name, \"immutable_bool\") == 0);\n    *(int *)privdata = new;\n    return REDISMODULE_OK;\n}\n\nlong long getNumericConfigCommand(const char *name, void *privdata) {\n    assert(strcmp(name, \"numeric\") == 0 || strcmp(name, \"memory_numeric\") == 0);\n    return (*(long long *) privdata);\n}\n\nint setNumericConfigCommand(const char *name, long long new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"numeric\") == 0 || strcmp(name, \"memory_numeric\") == 0);\n    *(long long *)privdata = new;\n    return REDISMODULE_OK;\n}\n\nRedisModuleString *getStringConfigCommand(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"string\") == 0);\n    return strval;\n}\nint setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"string\") == 0);\n    size_t len;\n   
 if (!strcasecmp(RedisModule_StringPtrLen(new, &len), \"rejectisfreed\")) {\n        *err = RedisModule_CreateString(NULL, \"Cannot set string to 'rejectisfreed'\", 36);\n        return REDISMODULE_ERR;\n    }\n    if (strval) RedisModule_FreeString(NULL, strval);\n    RedisModule_RetainString(NULL, new);\n    strval = new;\n    return REDISMODULE_OK;\n}\n\nint getEnumConfigCommand(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"enum\") == 0);\n    return enumval;\n}\n\nint setEnumConfigCommand(const char *name, int val, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"enum\") == 0);\n    enumval = val;\n    return REDISMODULE_OK;\n}\n\nint getFlagsConfigCommand(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"flags\") == 0);\n    return flagsval;\n}\n\nint setFlagsConfigCommand(const char *name, int val, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(err);\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"flags\") == 0);\n    flagsval = val;\n    return REDISMODULE_OK;\n}\n\nint boolApplyFunc(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(privdata);\n    if (mutable_bool_val && immutable_bool_val) {\n        *err = RedisModule_CreateString(NULL, \"Bool configs cannot both be yes.\", 32);\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}\n\nint longlongApplyFunc(RedisModuleCtx *ctx, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(privdata);\n    if (longval == memval) {\n        *err = RedisModule_CreateString(NULL, \"These configs cannot equal each other.\", 38);\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}    \n\nRedisModuleString *getStringConfigUnprefix(const char 
*name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"unprefix-string\") == 0 || strcmp(name, \"unprefix.string-alias\") == 0);\n    return strval2;\n}\n\nint setStringConfigUnprefix(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"unprefix-string\") == 0 || strcmp(name, \"unprefix.string-alias\") == 0);\n    if (strval2) RedisModule_FreeString(NULL, strval2);\n    RedisModule_RetainString(NULL, new);\n    strval2 = new;\n    return REDISMODULE_OK;\n}\n\nint getBoolConfigUnprefix(const char *name, void *privdata) {\n    assert(strcmp(name, \"unprefix-bool\") == 0 || strcmp(name, \"unprefix-bool-alias\") == 0 ||\n           strcmp(name, \"unprefix-noalias-bool\") == 0);\n    return (*(int *)privdata);\n}\n\nint setBoolConfigUnprefix(const char *name, int new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"unprefix-bool\") == 0 || strcmp(name, \"unprefix-bool-alias\") == 0 ||\n           strcmp(name, \"unprefix-noalias-bool\") == 0);\n    *(int *)privdata = new;\n    return REDISMODULE_OK;\n}\n\nlong long getNumericConfigUnprefix(const char *name, void *privdata) {\n    assert(strcmp(name, \"unprefix.numeric\") == 0 || strcmp(name, \"unprefix.numeric-alias\") == 0);\n    return (*(long long *) privdata);\n}\n\nint setNumericConfigUnprefix(const char *name, long long new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"unprefix.numeric\") == 0 || strcmp(name, \"unprefix.numeric-alias\") == 0);\n    *(long long *)privdata = new;\n    return REDISMODULE_OK;\n}\n\nint getEnumConfigUnprefix(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    assert(strcmp(name, \"unprefix-enum\") == 0 || strcmp(name, \"unprefix-enum-alias\") == 0);\n    return no_prefix_enumval;\n}\n\nint 
setEnumConfigUnprefix(const char *name, int val, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(err);\n    assert(strcmp(name, \"unprefix-enum\") == 0 || strcmp(name, \"unprefix-enum-alias\") == 0);\n    no_prefix_enumval = val;\n    return REDISMODULE_OK;\n}\n\nint registerBlockCheck(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    int response_ok = 0;\n    int result = RedisModule_RegisterBoolConfig(ctx, \"mutable_bool\", 1, REDISMODULE_CONFIG_DEFAULT, getBoolConfigCommand, setBoolConfigCommand, boolApplyFunc, &mutable_bool_val);\n    response_ok |= (result == REDISMODULE_OK);\n\n    result = RedisModule_RegisterStringConfig(ctx, \"string\", \"secret password\", REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL);\n    response_ok |= (result == REDISMODULE_OK);\n\n    const char *enum_vals[] = {\"none\", \"five\", \"one\", \"two\", \"four\"};\n    const int int_vals[] = {0, 5, 1, 2, 4};\n    result = RedisModule_RegisterEnumConfig(ctx, \"enum\", 1, REDISMODULE_CONFIG_DEFAULT, enum_vals, int_vals, 5, getEnumConfigCommand, setEnumConfigCommand, NULL, NULL);\n    response_ok |= (result == REDISMODULE_OK);\n\n    result = RedisModule_RegisterNumericConfig(ctx, \"numeric\", -1, REDISMODULE_CONFIG_DEFAULT, -5, 2000, getNumericConfigCommand, setNumericConfigCommand, longlongApplyFunc, &longval);\n    response_ok |= (result == REDISMODULE_OK);\n\n    result = RedisModule_LoadConfigs(ctx);\n    response_ok |= (result == REDISMODULE_OK);\n    \n    /* This validates that it's not possible to register/load configs outside OnLoad,\n     * thus returns an error if they succeed. 
*/\n    if (response_ok) {\n        RedisModule_ReplyWithError(ctx, \"UNEXPECTEDOK\");\n    } else {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    }\n    return REDISMODULE_OK;\n}\n\nvoid cleanup(RedisModuleCtx *ctx) {\n    if (strval) {\n        RedisModule_FreeString(ctx, strval);\n        strval = NULL;\n    }\n    if (strval2) {\n        RedisModule_FreeString(ctx, strval2);\n        strval2 = NULL;\n    }\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"moduleconfigs\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to init module\");\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_RegisterBoolConfig(ctx, \"mutable_bool\", 1, REDISMODULE_CONFIG_DEFAULT, getBoolConfigCommand, setBoolConfigCommand, boolApplyFunc, &mutable_bool_val) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register mutable_bool\");\n        return REDISMODULE_ERR;\n    }\n    /* Immutable config here. */\n    if (RedisModule_RegisterBoolConfig(ctx, \"immutable_bool\", 0, REDISMODULE_CONFIG_IMMUTABLE, getBoolConfigCommand, setBoolConfigCommand, boolApplyFunc, &immutable_bool_val) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register immutable_bool\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterStringConfig(ctx, \"string\", \"secret password\", REDISMODULE_CONFIG_DEFAULT, getStringConfigCommand, setStringConfigCommand, NULL, NULL) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register string\");\n        return REDISMODULE_ERR;\n    }\n\n    /* On the stack to make sure we're copying them. 
*/\n    const char *enum_vals[] = {\"none\", \"five\", \"one\", \"two\", \"four\"};\n    const int int_vals[] = {0, 5, 1, 2, 4};\n\n    if (RedisModule_RegisterEnumConfig(ctx, \"enum\", 1, REDISMODULE_CONFIG_DEFAULT, enum_vals, int_vals, 5, getEnumConfigCommand, setEnumConfigCommand, NULL, NULL) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register enum\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterEnumConfig(ctx, \"flags\", 3, REDISMODULE_CONFIG_DEFAULT | REDISMODULE_CONFIG_BITFLAGS, enum_vals, int_vals, 5, getFlagsConfigCommand, setFlagsConfigCommand, NULL, NULL) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register flags\");\n        return REDISMODULE_ERR;\n    }\n    /* Memory config here. */\n    if (RedisModule_RegisterNumericConfig(ctx, \"memory_numeric\", 1024, REDISMODULE_CONFIG_DEFAULT | REDISMODULE_CONFIG_MEMORY, 0, 3000000, getNumericConfigCommand, setNumericConfigCommand, longlongApplyFunc, &memval) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register memory_numeric\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterNumericConfig(ctx, \"numeric\", -1, REDISMODULE_CONFIG_DEFAULT, -5, 2000, getNumericConfigCommand, setNumericConfigCommand, longlongApplyFunc, &longval) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register numeric\");\n        return REDISMODULE_ERR;\n    }\n\n    /*** unprefixed and aliased configuration ***/\n    if (RedisModule_RegisterBoolConfig(ctx, \"unprefix-bool|unprefix-bool-alias\", 1, REDISMODULE_CONFIG_DEFAULT|REDISMODULE_CONFIG_UNPREFIXED,\n                                       getBoolConfigUnprefix, setBoolConfigUnprefix, NULL, &no_prefix_bool) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register unprefix-bool\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterBoolConfig(ctx, 
\"unprefix-noalias-bool\", 1, REDISMODULE_CONFIG_DEFAULT|REDISMODULE_CONFIG_UNPREFIXED,\n                                       getBoolConfigUnprefix, setBoolConfigUnprefix, NULL, &no_prefix_bool2) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register unprefix-noalias-bool\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterNumericConfig(ctx, \"unprefix.numeric|unprefix.numeric-alias\", -1, REDISMODULE_CONFIG_DEFAULT|REDISMODULE_CONFIG_UNPREFIXED,\n                                          -5, 2000, getNumericConfigUnprefix, setNumericConfigUnprefix, NULL, &no_prefix_longval) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register unprefix.numeric\");\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_RegisterStringConfig(ctx, \"unprefix-string|unprefix.string-alias\", \"secret unprefix\", REDISMODULE_CONFIG_DEFAULT|REDISMODULE_CONFIG_UNPREFIXED, \n                                         getStringConfigUnprefix, setStringConfigUnprefix, NULL, NULL) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register unprefix-string\");\n        return REDISMODULE_ERR;\n    }    \n    if (RedisModule_RegisterEnumConfig(ctx, \"unprefix-enum|unprefix-enum-alias\", 1, REDISMODULE_CONFIG_DEFAULT|REDISMODULE_CONFIG_UNPREFIXED, \n                                       enum_vals, int_vals, 5, getEnumConfigUnprefix, setEnumConfigUnprefix, NULL, NULL) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register unprefix-enum\");\n        return REDISMODULE_ERR;\n    }\n\n    RedisModule_Log(ctx, \"debug\", \"Registered configuration\");\n    size_t len;\n    if (argc && !strcasecmp(RedisModule_StringPtrLen(argv[0], &len), \"noload\")) {\n        return REDISMODULE_OK;\n    } else if (argc && !strcasecmp(RedisModule_StringPtrLen(argv[0], &len), \"override-default\")) {\n        if (RedisModule_LoadDefaultConfigs(ctx) == REDISMODULE_ERR) {\n   
         RedisModule_Log(ctx, \"warning\", \"Failed to load default configuration\");\n            goto err;\n        }\n        // simulate configuration values being overwritten by the command line\n        RedisModule_Log(ctx, \"debug\", \"Overriding configuration values\");\n        if (strval) RedisModule_FreeString(ctx, strval);\n        strval = RedisModule_CreateString(ctx, \"foo\", 3);\n        longval = memval = 123;\n    }\n    RedisModule_Log(ctx, \"debug\", \"Loading configuration\");\n    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to load configuration\");\n        goto err;\n    }\n    /* Creates a command which registers configs outside OnLoad() function. */\n    if (RedisModule_CreateCommand(ctx,\"block.register.configs.outside.onload\", registerBlockCheck, \"write\", 0, 0, 0) == REDISMODULE_ERR) {\n        RedisModule_Log(ctx, \"warning\", \"Failed to register command\");\n        goto err;\n    }\n  \n    return REDISMODULE_OK;\nerr:\n    cleanup(ctx);\n    return REDISMODULE_ERR;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    cleanup(ctx);\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/moduleconfigstwo.c",
    "content": "#include \"redismodule.h\"\n#include <strings.h>\n\n/* Second module configs module, for testing.\n * Need to make sure that multiple modules with configs don't interfere with each other */\nint bool_config;\n\nint getBoolConfigCommand(const char *name, void *privdata) {\n    REDISMODULE_NOT_USED(privdata);\n    if (!strcasecmp(name, \"test\")) {\n        return bool_config;\n    }\n    return 0;\n}\n\nint setBoolConfigCommand(const char *name, int new, void *privdata, RedisModuleString **err) {\n    REDISMODULE_NOT_USED(privdata);\n    REDISMODULE_NOT_USED(err);\n    if (!strcasecmp(name, \"test\")) {\n        bool_config = new;\n        return REDISMODULE_OK;\n    }\n    return REDISMODULE_ERR;\n}\n\n/* No arguments are expected */ \nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"configs\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    if (RedisModule_RegisterBoolConfig(ctx, \"test\", 1, REDISMODULE_CONFIG_DEFAULT, getBoolConfigCommand, setBoolConfigCommand, NULL, &argc) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n    return REDISMODULE_OK;\n}"
  },
  {
    "path": "tests/modules/postnotifications.c",
    "content": "/* This module is used to test the server post keyspace jobs API.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2020-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n/* This module allows verifying 'RedisModule_AddPostNotificationJob' by registering for 3\n * keyspace events:\n * * STRINGS - the module registers for all string notifications and sets a post notification job\n *             that increases a counter indicating how many times the string key was changed.\n *             In addition, it increases another counter that counts the total changes made\n *             on all string keys.\n * * EXPIRED - the module registers for the expired event and sets a post notification job that\n *             counts the total number of expired events.\n * * EVICTED - the module registers for the evicted event and sets a post notification job that\n *             counts the total number of evicted events.\n *\n * In addition, the module registers a new command, 'postnotification.async_set', that performs a set\n * command from a background thread. This allows checking 'RedisModule_AddPostNotificationJob' on\n * notifications triggered from a background thread. 
*/\n\n#define _BSD_SOURCE\n#define _DEFAULT_SOURCE /* For usleep */\n\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n#include <pthread.h>\n\nstatic void KeySpace_PostNotificationStringFreePD(void *pd) {\n    RedisModule_FreeString(NULL, pd);\n}\n\nstatic void KeySpace_PostNotificationReadKey(RedisModuleCtx *ctx, void *pd) {\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"get\", \"!s\", pd);\n    RedisModule_FreeCallReply(rep);\n}\n\nstatic void KeySpace_PostNotificationString(RedisModuleCtx *ctx, void *pd) {\n    REDISMODULE_NOT_USED(ctx);\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"incr\", \"!s\", pd);\n    RedisModule_FreeCallReply(rep);\n}\n\nstatic int KeySpace_NotificationExpired(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key){\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    REDISMODULE_NOT_USED(key);\n\n    RedisModuleString *new_key = RedisModule_CreateString(NULL, \"expired\", 7);\n    int res = RedisModule_AddPostNotificationJob(ctx, KeySpace_PostNotificationString, new_key, KeySpace_PostNotificationStringFreePD);\n    if (res == REDISMODULE_ERR) KeySpace_PostNotificationStringFreePD(new_key);\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NotificationEvicted(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key){\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    REDISMODULE_NOT_USED(key);\n\n    const char *key_str = RedisModule_StringPtrLen(key, NULL);\n\n    if (strncmp(key_str, \"evicted\", 7) == 0) {\n        return REDISMODULE_OK; /* do not count the evicted key */\n    }\n\n    if (strncmp(key_str, \"before_evicted\", 14) == 0) {\n        return REDISMODULE_OK; /* do not count the before_evicted key */\n    }\n\n    RedisModuleString *new_key = RedisModule_CreateString(NULL, \"evicted\", 7);\n    int res = RedisModule_AddPostNotificationJob(ctx, KeySpace_PostNotificationString, new_key, 
KeySpace_PostNotificationStringFreePD);\n    if (res == REDISMODULE_ERR) KeySpace_PostNotificationStringFreePD(new_key);\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NotificationString(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key){\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    const char *key_str = RedisModule_StringPtrLen(key, NULL);\n\n    if (strncmp(key_str, \"string_\", 7) != 0) {\n        return REDISMODULE_OK;\n    }\n\n    if (strcmp(key_str, \"string_total\") == 0) {\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleString *new_key;\n    if (strncmp(key_str, \"string_changed{\", 15) == 0) {\n        new_key = RedisModule_CreateString(NULL, \"string_total\", 12);\n    } else {\n        new_key = RedisModule_CreateStringPrintf(NULL, \"string_changed{%s}\", key_str);\n    }\n\n    int res = RedisModule_AddPostNotificationJob(ctx, KeySpace_PostNotificationString, new_key, KeySpace_PostNotificationStringFreePD);\n    if (res == REDISMODULE_ERR) KeySpace_PostNotificationStringFreePD(new_key);\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_LazyExpireInsidePostNotificationJob(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key){\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    const char *key_str = RedisModule_StringPtrLen(key, NULL);\n\n    if (strncmp(key_str, \"read_\", 5) != 0) {\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleString *new_key = RedisModule_CreateString(NULL, key_str + 5, strlen(key_str) - 5);\n    int res = RedisModule_AddPostNotificationJob(ctx, KeySpace_PostNotificationReadKey, new_key, KeySpace_PostNotificationStringFreePD);\n    if (res == REDISMODULE_ERR) KeySpace_PostNotificationStringFreePD(new_key);\n    return REDISMODULE_OK;\n}\n\nstatic int KeySpace_NestedNotification(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString 
*key){\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n\n    const char *key_str = RedisModule_StringPtrLen(key, NULL);\n\n    if (strncmp(key_str, \"write_sync_\", 11) != 0) {\n        return REDISMODULE_OK;\n    }\n\n    /* This test was only meant to check REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS.\n     * In general it is wrong and discouraged to perform any writes inside a notification callback.  */\n    RedisModuleString *new_key = RedisModule_CreateString(NULL, key_str + 11, strlen(key_str) - 11);\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"set\", \"!sc\", new_key, \"1\");\n    RedisModule_FreeCallReply(rep);\n    RedisModule_FreeString(NULL, new_key);\n    return REDISMODULE_OK;\n}\n\nstatic void *KeySpace_PostNotificationsAsyncSetInner(void *arg) {\n    RedisModuleBlockedClient *bc = arg;\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bc);\n    RedisModule_ThreadSafeContextLock(ctx);\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"set\", \"!cc\", \"string_x\", \"1\");\n    RedisModule_ThreadSafeContextUnlock(ctx);\n    RedisModule_ReplyWithCallReply(ctx, rep);\n    RedisModule_FreeCallReply(rep);\n\n    RedisModule_UnblockClient(bc, NULL);\n    RedisModule_FreeThreadSafeContext(ctx);\n    return NULL;\n}\n\nstatic int KeySpace_PostNotificationsAsyncSet(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1)\n        return RedisModule_WrongArity(ctx);\n\n    pthread_t tid;\n    RedisModuleBlockedClient *bc = RedisModule_BlockClient(ctx,NULL,NULL,NULL,0);\n\n    if (pthread_create(&tid,NULL,KeySpace_PostNotificationsAsyncSetInner,bc) != 0) {\n        RedisModule_AbortBlock(bc);\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    }\n    pthread_detach(tid);\n    return REDISMODULE_OK;\n}\n\ntypedef struct KeySpace_EventPostNotificationCtx {\n    RedisModuleString 
*triggered_on;\n    RedisModuleString *new_key;\n} KeySpace_EventPostNotificationCtx;\n\nstatic void KeySpace_ServerEventPostNotificationFree(void *pd) {\n    KeySpace_EventPostNotificationCtx *pn_ctx = pd;\n    RedisModule_FreeString(NULL, pn_ctx->new_key);\n    RedisModule_FreeString(NULL, pn_ctx->triggered_on);\n    RedisModule_Free(pn_ctx);\n}\n\nstatic void KeySpace_ServerEventPostNotification(RedisModuleCtx *ctx, void *pd) {\n    REDISMODULE_NOT_USED(ctx);\n    KeySpace_EventPostNotificationCtx *pn_ctx = pd;\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"lpush\", \"!ss\", pn_ctx->new_key, pn_ctx->triggered_on);\n    RedisModule_FreeCallReply(rep);\n}\n\nstatic void KeySpace_ServerEventCallback(RedisModuleCtx *ctx, RedisModuleEvent eid, uint64_t subevent, void *data) {\n    REDISMODULE_NOT_USED(eid);\n    REDISMODULE_NOT_USED(data);\n    if (subevent > 3) {\n        RedisModule_Log(ctx, \"warning\", \"Got an unexpected subevent '%llu'\", (unsigned long long)subevent);\n        return;\n    }\n    static const char* events[] = {\n            \"before_deleted\",\n            \"before_expired\",\n            \"before_evicted\",\n            \"before_overwritten\",\n    };\n\n    const RedisModuleString *key_name = RedisModule_GetKeyNameFromModuleKey(((RedisModuleKeyInfo*)data)->key);\n    const char *key_str = RedisModule_StringPtrLen(key_name, NULL);\n\n    for (int i = 0 ; i < 4 ; ++i) {\n        const char *event = events[i];\n        if (strncmp(key_str, event , strlen(event)) == 0) {\n            return; /* don't log any event on our tracking keys */\n        }\n    }\n\n    KeySpace_EventPostNotificationCtx *pn_ctx = RedisModule_Alloc(sizeof(*pn_ctx));\n    pn_ctx->triggered_on = RedisModule_HoldString(NULL, (RedisModuleString*)key_name);\n    pn_ctx->new_key = RedisModule_CreateString(NULL, events[subevent], strlen(events[subevent]));\n    int res = RedisModule_AddPostNotificationJob(ctx, KeySpace_ServerEventPostNotification, pn_ctx, 
KeySpace_ServerEventPostNotificationFree);\n    if (res == REDISMODULE_ERR) KeySpace_ServerEventPostNotificationFree(pn_ctx);\n}\n\n/* This function must be present on each Redis module. It is used in order to\n * register the commands into the Redis server. */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"postnotifications\",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    if (!(RedisModule_GetModuleOptionsAll() & REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS)) {\n        return REDISMODULE_ERR;\n    }\n\n    int with_key_events = 0;\n    if (argc >= 1) {\n        const char *arg = RedisModule_StringPtrLen(argv[0], 0);\n        if (strcmp(arg, \"with_key_events\") == 0) {\n            with_key_events = 1;\n        }\n    }\n\n    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_ALLOW_NESTED_KEYSPACE_NOTIFICATIONS);\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, KeySpace_NotificationString) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, KeySpace_LazyExpireInsidePostNotificationJob) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_STRING, KeySpace_NestedNotification) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EXPIRED, KeySpace_NotificationExpired) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if(RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_EVICTED, KeySpace_NotificationEvicted) != REDISMODULE_OK){\n        return REDISMODULE_ERR;\n    }\n\n    if (with_key_events) {\n        if(RedisModule_SubscribeToServerEvent(ctx, RedisModuleEvent_Key, 
KeySpace_ServerEventCallback) != REDISMODULE_OK){\n            return REDISMODULE_ERR;\n        }\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"postnotification.async_set\", KeySpace_PostNotificationsAsyncSet,\n                                      \"write\", 0, 0, 0) == REDISMODULE_ERR){\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/propagate.c",
    "content": "/* This module is used to test the propagation (replication + AOF) of\n * commands, via the RedisModule_Replicate() interface, in asynchronous\n * contexts, such as callbacks not implementing commands, and thread safe\n * contexts.\n *\n * We create a timer callback and a threads using a thread safe context.\n * Using both we try to propagate counters increments, and later we check\n * if the replica contains the changes as expected.\n *\n * -----------------------------------------------------------------------------\n *\n * Copyright (c) 2019-Present, Redis Ltd.\n * All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include \"redismodule.h\"\n#include <pthread.h>\n#include <errno.h>\n\n#define UNUSED(V) ((void) V)\n\nRedisModuleCtx *detached_ctx = NULL;\n\nstatic int KeySpace_NotificationGeneric(RedisModuleCtx *ctx, int type, const char *event, RedisModuleString *key) {\n    REDISMODULE_NOT_USED(type);\n    REDISMODULE_NOT_USED(event);\n    REDISMODULE_NOT_USED(key);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, \"INCR\", \"c!\", \"notifications\");\n    RedisModule_FreeCallReply(rep);\n\n    return REDISMODULE_OK;\n}\n\n/* Timer callback. 
*/\nvoid timerHandler(RedisModuleCtx *ctx, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(data);\n\n    static int times = 0;\n\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"timer\");\n    times++;\n\n    if (times < 3)\n        RedisModule_CreateTimer(ctx,100,timerHandler,NULL);\n    else\n        times = 0;\n}\n\nint propagateTestTimerCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleTimerID timer_id =\n        RedisModule_CreateTimer(ctx,100,timerHandler,NULL);\n    REDISMODULE_NOT_USED(timer_id);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\n/* Timer callback. */\nvoid timerNestedHandler(RedisModuleCtx *ctx, void *data) {\n    int repl = (long long)data;\n\n    /* The goal is the trigger a module command that calls RM_Replicate\n     * in order to test MULTI/EXEC structure */\n    RedisModule_Replicate(ctx,\"INCRBY\",\"cc\",\"timer-nested-start\",\"1\");\n    RedisModuleCallReply *reply = RedisModule_Call(ctx,\"propagate-test.nested\", repl? \"!\" : \"\");\n    RedisModule_FreeCallReply(reply);\n    reply = RedisModule_Call(ctx, \"INCR\", repl? 
\"c!\" : \"c\", \"timer-nested-middle\");\n    RedisModule_FreeCallReply(reply);\n    RedisModule_Replicate(ctx,\"INCRBY\",\"cc\",\"timer-nested-end\",\"1\");\n}\n\nint propagateTestTimerNestedCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleTimerID timer_id =\n        RedisModule_CreateTimer(ctx,100,timerNestedHandler,(void*)0);\n    REDISMODULE_NOT_USED(timer_id);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint propagateTestTimerNestedReplCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleTimerID timer_id =\n        RedisModule_CreateTimer(ctx,100,timerNestedHandler,(void*)1);\n    REDISMODULE_NOT_USED(timer_id);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nvoid timerHandlerMaxmemory(RedisModuleCtx *ctx, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(data);\n\n    RedisModuleCallReply *reply = RedisModule_Call(ctx,\"SETEX\",\"ccc!\",\"timer-maxmemory-volatile-start\",\"100\",\"1\");\n    RedisModule_FreeCallReply(reply);\n    reply = RedisModule_Call(ctx, \"CONFIG\", \"ccc!\", \"SET\", \"maxmemory\", \"1\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_Replicate(ctx, \"INCR\", \"c\", \"timer-maxmemory-middle\");\n\n    reply = RedisModule_Call(ctx,\"SETEX\",\"ccc!\",\"timer-maxmemory-volatile-end\",\"100\",\"1\");\n    RedisModule_FreeCallReply(reply);\n}\n\nint propagateTestTimerMaxmemoryCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleTimerID timer_id =\n        RedisModule_CreateTimer(ctx,100,timerHandlerMaxmemory,(void*)1);\n    REDISMODULE_NOT_USED(timer_id);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return 
REDISMODULE_OK;\n}\n\nvoid timerHandlerEval(RedisModuleCtx *ctx, void *data) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(data);\n\n    RedisModuleCallReply *reply = RedisModule_Call(ctx,\"INCRBY\",\"cc!\",\"timer-eval-start\",\"1\");\n    RedisModule_FreeCallReply(reply);\n    reply = RedisModule_Call(ctx, \"EVAL\", \"cccc!\", \"redis.call('set',KEYS[1],ARGV[1])\", \"1\", \"foo\", \"bar\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_Replicate(ctx, \"INCR\", \"c\", \"timer-eval-middle\");\n\n    reply = RedisModule_Call(ctx,\"INCRBY\",\"cc!\",\"timer-eval-end\",\"1\");\n    RedisModule_FreeCallReply(reply);\n}\n\nint propagateTestTimerEvalCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleTimerID timer_id =\n        RedisModule_CreateTimer(ctx,100,timerHandlerEval,(void*)1);\n    REDISMODULE_NOT_USED(timer_id);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\n/* The thread entry point. */\nvoid *threadMain(void *arg) {\n    REDISMODULE_NOT_USED(arg);\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(NULL);\n    RedisModule_SelectDb(ctx,9); /* Tests ran in database number 9. 
*/\n    for (int i = 0; i < 3; i++) {\n        RedisModule_ThreadSafeContextLock(ctx);\n        RedisModule_Replicate(ctx,\"INCR\",\"c\",\"a-from-thread\");\n        RedisModuleCallReply *reply = RedisModule_Call(ctx,\"INCR\",\"c!\",\"thread-call\");\n        RedisModule_FreeCallReply(reply);\n        RedisModule_Replicate(ctx,\"INCR\",\"c\",\"b-from-thread\");\n        RedisModule_ThreadSafeContextUnlock(ctx);\n    }\n    RedisModule_FreeThreadSafeContext(ctx);\n    return NULL;\n}\n\nint propagateTestThreadCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    pthread_t tid;\n    if (pthread_create(&tid,NULL,threadMain,NULL) != 0)\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    pthread_detach(tid);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\n/* The thread entry point. */\nvoid *threadDetachedMain(void *arg) {\n    REDISMODULE_NOT_USED(arg);\n    RedisModule_SelectDb(detached_ctx,9); /* Tests ran in database number 9. 
*/\n\n    RedisModule_ThreadSafeContextLock(detached_ctx);\n    RedisModule_Replicate(detached_ctx,\"INCR\",\"c\",\"thread-detached-before\");\n    RedisModuleCallReply *reply = RedisModule_Call(detached_ctx,\"INCR\",\"c!\",\"thread-detached-1\");\n    RedisModule_FreeCallReply(reply);\n    reply = RedisModule_Call(detached_ctx,\"INCR\",\"c!\",\"thread-detached-2\");\n    RedisModule_FreeCallReply(reply);\n    RedisModule_Replicate(detached_ctx,\"INCR\",\"c\",\"thread-detached-after\");\n    RedisModule_ThreadSafeContextUnlock(detached_ctx);\n\n    return NULL;\n}\n\nint propagateTestDetachedThreadCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    pthread_t tid;\n    if (pthread_create(&tid,NULL,threadDetachedMain,NULL) != 0)\n        return RedisModule_ReplyWithError(ctx,\"-ERR Can't start thread\");\n    pthread_detach(tid);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint propagateTestSimpleCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    /* Replicate two commands to test MULTI/EXEC wrapping. */\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-1\");\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-2\");\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint propagateTestMixedCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleCallReply *reply;\n\n    /* This test mixes multiple propagation systems. 
*/\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"using-call\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-1\");\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-2\");\n\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"after-call\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint propagateTestNestedCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleCallReply *reply;\n\n    /* This test mixes multiple propagation systems. */\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"using-call\");\n    RedisModule_FreeCallReply(reply);\n\n    reply = RedisModule_Call(ctx,\"propagate-test.simple\", \"!\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-3\");\n    RedisModule_Replicate(ctx,\"INCR\",\"c\",\"counter-4\");\n\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"after-call\");\n    RedisModule_FreeCallReply(reply);\n\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"before-call-2\");\n    RedisModule_FreeCallReply(reply);\n\n    reply = RedisModule_Call(ctx, \"keyspace.incr_case1\", \"c!\", \"asdf\"); /* Propagates INCR */\n    RedisModule_FreeCallReply(reply);\n\n    reply = RedisModule_Call(ctx, \"keyspace.del_key_copy\", \"c!\", \"asdf\"); /* Propagates DEL */\n    RedisModule_FreeCallReply(reply);\n\n    reply = RedisModule_Call(ctx, \"INCR\", \"c!\", \"after-call-2\");\n    RedisModule_FreeCallReply(reply);\n\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint propagateTestIncr(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argc);\n    RedisModuleCallReply *reply;\n\n    /* This test propagates the module command, not the INCR it executes. 
*/\n    reply = RedisModule_Call(ctx, \"INCR\", \"s\", argv[1]);\n    RedisModule_ReplyWithCallReply(ctx,reply);\n    RedisModule_FreeCallReply(reply);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\nint propagateTestVerbatim(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long replicate_num;\n    RedisModule_StringToLongLong(argv[1], &replicate_num);\n    /* Replicate the command verbatim for the specified number of times. */\n    for (long long i = 0; i < replicate_num; i++)\n        RedisModule_ReplicateVerbatim(ctx);\n    RedisModule_ReplyWithSimpleString(ctx,\"OK\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"propagate-test\",1,REDISMODULE_APIVER_1)\n            == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    detached_ctx = RedisModule_GetDetachedThreadSafeContext(ctx);\n\n    if (RedisModule_SubscribeToKeyspaceEvents(ctx, REDISMODULE_NOTIFY_ALL, KeySpace_NotificationGeneric) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.timer\",\n                propagateTestTimerCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.timer-nested\",\n                propagateTestTimerNestedCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.timer-nested-repl\",\n                propagateTestTimerNestedReplCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.timer-maxmemory\",\n                
propagateTestTimerMaxmemoryCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.timer-eval\",\n                propagateTestTimerEvalCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.thread\",\n                propagateTestThreadCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.detached-thread\",\n                propagateTestDetachedThreadCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.simple\",\n                propagateTestSimpleCommand,\n                \"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.mixed\",\n                propagateTestMixedCommand,\n                \"write\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.nested\",\n                propagateTestNestedCommand,\n                \"write\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"propagate-test.incr\",\n                propagateTestIncr,\n                \"write\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n        \n    if (RedisModule_CreateCommand(ctx,\"propagate-test.verbatim\",\n            propagateTestVerbatim,\n            \"write\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    UNUSED(ctx);\n\n    if (detached_ctx)\n        RedisModule_FreeThreadSafeContext(detached_ctx);\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/publish.c",
    "content": "#include \"redismodule.h\"\n#include <string.h>\n#include <assert.h>\n#include <unistd.h>\n\n#define UNUSED(V) ((void) V)\n\nint cmd_publish_classic_multi(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc < 3)\n        return RedisModule_WrongArity(ctx);\n    RedisModule_ReplyWithArray(ctx, argc-2);\n    for (int i = 2; i < argc; i++) {\n        int receivers = RedisModule_PublishMessage(ctx, argv[1], argv[i]);\n        RedisModule_ReplyWithLongLong(ctx, receivers);\n    }\n    return REDISMODULE_OK;\n}\n\nint cmd_publish_classic(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n    \n    int receivers = RedisModule_PublishMessage(ctx, argv[1], argv[2]);\n    RedisModule_ReplyWithLongLong(ctx, receivers);\n    return REDISMODULE_OK;\n}\n\nint cmd_publish_shard(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3)\n        return RedisModule_WrongArity(ctx);\n    \n    int receivers = RedisModule_PublishMessageShard(ctx, argv[1], argv[2]);\n    RedisModule_ReplyWithLongLong(ctx, receivers);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    \n    if (RedisModule_Init(ctx,\"publish\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"publish.classic\",cmd_publish_classic,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"publish.classic_multi\",cmd_publish_classic_multi,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"publish.shard\",cmd_publish_shard,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n        \n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/rdbloadsave.c",
    "content": "#include \"redismodule.h\"\n\n#include <stdlib.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <memory.h>\n#include <errno.h>\n\n/* Sanity tests to verify inputs and return values. */\nint sanity(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    RedisModuleRdbStream *s = RedisModule_RdbStreamCreateFromFile(\"dbnew.rdb\");\n\n    /* NULL stream should fail. */\n    if (RedisModule_RdbLoad(ctx, NULL, 0) == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    /* Invalid flags should fail. */\n    if (RedisModule_RdbLoad(ctx, s, 188) == REDISMODULE_OK || errno != EINVAL) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    /* Missing file should fail. */\n    if (RedisModule_RdbLoad(ctx, s, 0) == REDISMODULE_OK || errno != ENOENT) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    /* Save RDB file. */\n    if (RedisModule_RdbSave(ctx, s, 0) != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    /* Load the saved RDB file. 
*/\n    if (RedisModule_RdbLoad(ctx, s, 0) != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n out:\n    RedisModule_RdbStreamFree(s);\n    return REDISMODULE_OK;\n}\n\nint cmd_rdbsave(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    size_t len;\n    const char *filename = RedisModule_StringPtrLen(argv[1], &len);\n\n    char tmp[len + 1];\n    memcpy(tmp, filename, len);\n    tmp[len] = '\\0';\n\n    RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile(tmp);\n\n    if (RedisModule_RdbSave(ctx, stream, 0) != REDISMODULE_OK || errno != 0) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        goto out;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\nout:\n    RedisModule_RdbStreamFree(stream);\n    return REDISMODULE_OK;\n}\n\n/* Fork before calling RM_RdbSave(). 
*/\nint cmd_rdbsave_fork(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    size_t len;\n    const char *filename = RedisModule_StringPtrLen(argv[1], &len);\n\n    char tmp[len + 1];\n    memcpy(tmp, filename, len);\n    tmp[len] = '\\0';\n\n    int fork_child_pid = RedisModule_Fork(NULL, NULL);\n    if (fork_child_pid < 0) {\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        return REDISMODULE_OK;\n    } else if (fork_child_pid > 0) {\n        /* parent */\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile(tmp);\n\n    int ret = 0;\n    if (RedisModule_RdbSave(ctx, stream, 0) != REDISMODULE_OK) {\n        ret = errno;\n    }\n    RedisModule_RdbStreamFree(stream);\n\n    RedisModule_ExitFromChild(ret);\n    return REDISMODULE_OK;\n}\n\nint cmd_rdbload(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    size_t len;\n    const char *filename = RedisModule_StringPtrLen(argv[1], &len);\n\n    char tmp[len + 1];\n    memcpy(tmp, filename, len);\n    tmp[len] = '\\0';\n\n    RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile(tmp);\n\n    if (RedisModule_RdbLoad(ctx, stream, 0) != REDISMODULE_OK || errno != 0) {\n        RedisModule_RdbStreamFree(stream);\n        RedisModule_ReplyWithError(ctx, strerror(errno));\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_RdbStreamFree(stream);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"rdbloadsave\", 1, REDISMODULE_APIVER_1) == 
REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"test.sanity\", sanity, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"test.rdbsave\", cmd_rdbsave, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"test.rdbsave_fork\", cmd_rdbsave_fork, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"test.rdbload\", cmd_rdbload, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/reply.c",
    "content": "/* \n * A module the tests RM_ReplyWith family of commands\n */\n\n#include \"redismodule.h\"\n#include <math.h>\n\nint rw_string(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    return RedisModule_ReplyWithString(ctx, argv[1]);\n}\n\nint rw_cstring(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n\n    return RedisModule_ReplyWithSimpleString(ctx, \"A simple string\");\n}\n\nint rw_int(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    long long integer;\n    if (RedisModule_StringToLongLong(argv[1], &integer) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as an integer\");\n\n    return RedisModule_ReplyWithLongLong(ctx, integer);\n}\n\n/* When one argument is given, it is returned as a double,\n * when two arguments are given, it returns a/b. 
*/\nint rw_double(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc==1)\n        return RedisModule_ReplyWithDouble(ctx, NAN);\n\n    if (argc != 2 && argc != 3) return RedisModule_WrongArity(ctx);\n\n    double dbl, dbl2;\n    if (RedisModule_StringToDouble(argv[1], &dbl) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as a double\");\n    if (argc == 3) {\n        if (RedisModule_StringToDouble(argv[2], &dbl2) != REDISMODULE_OK)\n            return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as a double\");\n        dbl /= dbl2;\n    }\n\n    return RedisModule_ReplyWithDouble(ctx, dbl);\n}\n\nint rw_longdouble(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    long double longdbl;\n    if (RedisModule_StringToLongDouble(argv[1], &longdbl) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as a double\");\n\n    return RedisModule_ReplyWithLongDouble(ctx, longdbl);\n}\n\nint rw_bignumber(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    size_t bignum_len;\n    const char *bignum_str = RedisModule_StringPtrLen(argv[1], &bignum_len);\n\n    return RedisModule_ReplyWithBigNumber(ctx, bignum_str, bignum_len);\n}\n\nint rw_array(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    long long integer;\n    if (RedisModule_StringToLongLong(argv[1], &integer) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as a integer\");\n\n    RedisModule_ReplyWithArray(ctx, integer);\n    for (int i = 0; i < integer; ++i) {\n        RedisModule_ReplyWithLongLong(ctx, i);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint rw_map(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return 
RedisModule_WrongArity(ctx);\n\n    long long integer;\n    if (RedisModule_StringToLongLong(argv[1], &integer) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as an integer\");\n\n    RedisModule_ReplyWithMap(ctx, integer);\n    for (int i = 0; i < integer; ++i) {\n        RedisModule_ReplyWithLongLong(ctx, i);\n        RedisModule_ReplyWithDouble(ctx, i * 1.5);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint rw_set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    long long integer;\n    if (RedisModule_StringToLongLong(argv[1], &integer) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as an integer\");\n\n    RedisModule_ReplyWithSet(ctx, integer);\n    for (int i = 0; i < integer; ++i) {\n        RedisModule_ReplyWithLongLong(ctx, i);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint rw_attribute(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    long long integer;\n    if (RedisModule_StringToLongLong(argv[1], &integer) != REDISMODULE_OK)\n        return RedisModule_ReplyWithError(ctx, \"Arg cannot be parsed as an integer\");\n\n    if (RedisModule_ReplyWithAttribute(ctx, integer) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"Attributes aren't supported by RESP 2\");\n    }\n\n    for (int i = 0; i < integer; ++i) {\n        RedisModule_ReplyWithLongLong(ctx, i);\n        RedisModule_ReplyWithDouble(ctx, i * 1.5);\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint rw_bool(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n\n    RedisModule_ReplyWithArray(ctx, 2);\n    RedisModule_ReplyWithBool(ctx, 0);\n    return RedisModule_ReplyWithBool(ctx, 1);\n}\n\nint 
rw_null(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n\n    return RedisModule_ReplyWithNull(ctx);\n}\n\nint rw_error(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n\n    return RedisModule_ReplyWithError(ctx, \"An error\");\n}\n\nint rw_error_format(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n\n    return RedisModule_ReplyWithErrorFormat(ctx,\n                                            RedisModule_StringPtrLen(argv[1], NULL),\n                                            RedisModule_StringPtrLen(argv[2], NULL));\n}\n\nint rw_verbatim(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n\n    size_t verbatim_len;\n    const char *verbatim_str = RedisModule_StringPtrLen(argv[1], &verbatim_len);\n\n    return RedisModule_ReplyWithVerbatimString(ctx, verbatim_str, verbatim_len);\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"replywith\", 1, REDISMODULE_APIVER_1) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"rw.string\",rw_string,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.cstring\",rw_cstring,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.bignumber\",rw_bignumber,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.int\",rw_int,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.double\",rw_double,\"\",0,0,0) != 
REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.longdouble\",rw_longdouble,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.array\",rw_array,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.map\",rw_map,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.attribute\",rw_attribute,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.set\",rw_set,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.bool\",rw_bool,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.null\",rw_null,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.error\",rw_error,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.error_format\",rw_error_format,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"rw.verbatim\",rw_verbatim,\"\",0,0,0) != REDISMODULE_OK)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/scan.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <assert.h>\n#include <unistd.h>\n\ntypedef struct {\n    size_t nkeys;\n} scan_strings_pd;\n\nvoid scan_strings_callback(RedisModuleCtx *ctx, RedisModuleString* keyname, RedisModuleKey* key, void *privdata) {\n    scan_strings_pd* pd = privdata;\n    int was_opened = 0;\n    if (!key) {\n        key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);\n        was_opened = 1;\n    }\n\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_STRING) {\n        size_t len;\n        char * data = RedisModule_StringDMA(key, &len, REDISMODULE_READ);\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModule_ReplyWithString(ctx, keyname);\n        RedisModule_ReplyWithStringBuffer(ctx, data, len);\n        pd->nkeys++;\n    }\n    if (was_opened)\n        RedisModule_CloseKey(key);\n}\n\nint scan_strings(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    scan_strings_pd pd = {\n        .nkeys = 0,\n    };\n\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);\n\n    RedisModuleScanCursor* cursor = RedisModule_ScanCursorCreate();\n    while(RedisModule_Scan(ctx, cursor, scan_strings_callback, &pd));\n    RedisModule_ScanCursorDestroy(cursor);\n\n    RedisModule_ReplySetArrayLength(ctx, pd.nkeys);\n    return REDISMODULE_OK;\n}\n\ntypedef struct {\n    RedisModuleCtx *ctx;\n    size_t nreplies;\n} scan_key_pd;\n\nvoid scan_key_callback(RedisModuleKey *key, RedisModuleString* field, RedisModuleString* value, void *privdata) {\n    REDISMODULE_NOT_USED(key);\n    scan_key_pd* pd = privdata;\n    RedisModule_ReplyWithArray(pd->ctx, 2);\n    size_t fieldCStrLen;\n\n    // The implementation of RedisModuleString is robj with lots of encodings.\n    // We want to make sure the robj that passes to this callback in\n    // String encoded, this is why we use RedisModule_StringPtrLen and\n    // 
RedisModule_ReplyWithStringBuffer instead of using\n    // RedisModule_ReplyWithString directly.\n    const char* fieldCStr = RedisModule_StringPtrLen(field, &fieldCStrLen);\n    RedisModule_ReplyWithStringBuffer(pd->ctx, fieldCStr, fieldCStrLen);\n    if(value){\n        size_t valueCStrLen;\n        const char* valueCStr = RedisModule_StringPtrLen(value, &valueCStrLen);\n        RedisModule_ReplyWithStringBuffer(pd->ctx, valueCStr, valueCStrLen);\n    } else {\n        RedisModule_ReplyWithNull(pd->ctx);\n    }\n\n    pd->nreplies++;\n}\n\nint scan_key(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    scan_key_pd pd = {\n        .ctx = ctx,\n        .nreplies = 0,\n    };\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    if (!key) {\n        RedisModule_ReplyWithError(ctx, \"not found\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_ARRAY_LEN);\n\n    RedisModuleScanCursor* cursor = RedisModule_ScanCursorCreate();\n    while(RedisModule_ScanKey(key, cursor, scan_key_callback, &pd));\n    RedisModule_ScanCursorDestroy(cursor);\n\n    RedisModule_ReplySetArrayLength(ctx, pd.nreplies);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"scan\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"scan.scan_strings\", scan_strings, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"scan.scan_key\", scan_key, \"\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/stream.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <strings.h>\n#include <assert.h>\n#include <unistd.h>\n#include <errno.h>\n\n/* Command which adds a stream entry with automatic ID, like XADD *.\n *\n * Syntax: STREAM.ADD key field1 value1 [ field2 value2 ... ]\n *\n * The response is the ID of the added stream entry or an error message.\n */\nint stream_add(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2 || argc % 2 != 0) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    RedisModuleStreamID id;\n    if (RedisModule_StreamAdd(key, REDISMODULE_STREAM_ADD_AUTOID, &id,\n                              &argv[2], (argc-2)/2) == REDISMODULE_OK) {\n        RedisModuleString *id_str = RedisModule_CreateStringFromStreamID(ctx, &id);\n        RedisModule_ReplyWithString(ctx, id_str);\n        RedisModule_FreeString(ctx, id_str);\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR StreamAdd failed\");\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* Command which adds a stream entry N times.\n *\n * Syntax: STREAM.ADD key N field1 value1 [ field2 value2 ... 
]\n *\n * Returns the number of successfully added entries.\n */\nint stream_addn(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3 || argc % 2 == 0) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long n, i;\n    if (RedisModule_StringToLongLong(argv[2], &n) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"N must be a number\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    for (i = 0; i < n; i++) {\n        if (RedisModule_StreamAdd(key, REDISMODULE_STREAM_ADD_AUTOID, NULL,\n                                  &argv[3], (argc-3)/2) == REDISMODULE_ERR)\n            break;\n    }\n    RedisModule_ReplyWithLongLong(ctx, i);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* STREAM.DELETE key stream-id */\nint stream_delete(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    RedisModuleStreamID id;\n    if (RedisModule_StringToStreamID(argv[2], &id) != REDISMODULE_OK) {\n        return RedisModule_ReplyWithError(ctx, \"Invalid stream ID\");\n    }\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    if (RedisModule_StreamDelete(key, &id) == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR StreamDelete failed\");\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/* STREAM.RANGE key start-id end-id\n *\n * Returns an array of stream items. Each item is an array of the form\n * [stream-id, [field1, value1, field2, value2, ...]].\n *\n * A funny side-effect used for testing RM_StreamIteratorDelete() is that if any\n * entry has a field named \"selfdestruct\", the stream entry is deleted. 
It is\n * however included in the results of this command.\n */\nint stream_range(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleStreamID startid, endid;\n    if (RedisModule_StringToStreamID(argv[2], &startid) != REDISMODULE_OK ||\n        RedisModule_StringToStreamID(argv[3], &endid) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"Invalid stream ID\");\n        return REDISMODULE_OK;\n    }\n\n    /* If startid > endid, we swap and set the reverse flag. */\n    int flags = 0;\n    if (startid.ms > endid.ms ||\n        (startid.ms == endid.ms && startid.seq > endid.seq)) {\n        RedisModuleStreamID tmp = startid;\n        startid = endid;\n        endid = tmp;\n        flags |= REDISMODULE_STREAM_ITERATOR_REVERSE;\n    }\n\n    /* Open key and start iterator. */\n    int openflags = REDISMODULE_READ | REDISMODULE_WRITE;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], openflags);\n    if (RedisModule_StreamIteratorStart(key, flags,\n                                        &startid, &endid) != REDISMODULE_OK) {\n        /* Key is not a stream, etc. */\n        RedisModule_ReplyWithError(ctx, \"ERR StreamIteratorStart failed\");\n        RedisModule_CloseKey(key);\n        return REDISMODULE_OK;\n    }\n\n    /* Check error handling: Delete current entry when no current entry. */\n    assert(RedisModule_StreamIteratorDelete(key) ==\n           REDISMODULE_ERR);\n    assert(errno == ENOENT);\n\n    /* Check error handling: Fetch fields when no current entry. */\n    assert(RedisModule_StreamIteratorNextField(key, NULL, NULL) ==\n           REDISMODULE_ERR);\n    assert(errno == ENOENT);\n\n    /* Return array. 
*/\n    RedisModule_ReplyWithArray(ctx, REDISMODULE_POSTPONED_LEN);\n    RedisModule_AutoMemory(ctx);\n    RedisModuleStreamID id;\n    long numfields;\n    long len = 0;\n    while (RedisModule_StreamIteratorNextID(key, &id,\n                                            &numfields) == REDISMODULE_OK) {\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModuleString *id_str = RedisModule_CreateStringFromStreamID(ctx, &id);\n        RedisModule_ReplyWithString(ctx, id_str);\n        RedisModule_ReplyWithArray(ctx, numfields * 2);\n        int delete = 0;\n        RedisModuleString *field, *value;\n        for (long i = 0; i < numfields; i++) {\n            assert(RedisModule_StreamIteratorNextField(key, &field, &value) ==\n                   REDISMODULE_OK);\n            RedisModule_ReplyWithString(ctx, field);\n            RedisModule_ReplyWithString(ctx, value);\n            /* check if this is a \"selfdestruct\" field */\n            size_t field_len;\n            const char *field_str = RedisModule_StringPtrLen(field, &field_len);\n            if (!strncmp(field_str, \"selfdestruct\", field_len)) delete = 1;\n        }\n        if (delete) {\n            assert(RedisModule_StreamIteratorDelete(key) == REDISMODULE_OK);\n        }\n        /* check error handling: no more fields to fetch */\n        assert(RedisModule_StreamIteratorNextField(key, &field, &value) ==\n               REDISMODULE_ERR);\n        assert(errno == ENOENT);\n        len++;\n    }\n    RedisModule_ReplySetArrayLength(ctx, len);\n    RedisModule_StreamIteratorStop(key);\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\n/*\n * STREAM.TRIM key (MAXLEN (=|~) length | MINID (=|~) id)\n */\nint stream_trim(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 5) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    /* Parse args */\n    int trim_by_id = 0; /* 0 = maxlen, 1 = minid */\n    long long maxlen;\n    
RedisModuleStreamID minid;\n    size_t arg_len;\n    const char *arg = RedisModule_StringPtrLen(argv[2], &arg_len);\n    if (!strcasecmp(arg, \"minid\")) {\n        trim_by_id = 1;\n        if (RedisModule_StringToStreamID(argv[4], &minid) != REDISMODULE_OK) {\n            RedisModule_ReplyWithError(ctx, \"ERR Invalid stream ID\");\n            return REDISMODULE_OK;\n        }\n    } else if (!strcasecmp(arg, \"maxlen\")) {\n        if (RedisModule_StringToLongLong(argv[4], &maxlen) == REDISMODULE_ERR) {\n            RedisModule_ReplyWithError(ctx, \"ERR Maxlen must be a number\");\n            return REDISMODULE_OK;\n        }\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR Invalid arguments\");\n        return REDISMODULE_OK;\n    }\n\n    /* Approx or exact */\n    int flags;\n    arg = RedisModule_StringPtrLen(argv[3], &arg_len);\n    if (arg_len == 1 && arg[0] == '~') {\n        flags = REDISMODULE_STREAM_TRIM_APPROX;\n    } else if (arg_len == 1 && arg[0] == '=') {\n        flags = 0;\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR Invalid approx-or-exact mark\");\n        return REDISMODULE_OK;\n    }\n\n    /* Trim */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    long long trimmed;\n    if (trim_by_id) {\n        trimmed = RedisModule_StreamTrimByID(key, flags, &minid);\n    } else {\n        trimmed = RedisModule_StreamTrimByLength(key, flags, maxlen);\n    }\n\n    /* Return result */\n    if (trimmed < 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR Trimming failed\");\n    } else {\n        RedisModule_ReplyWithLongLong(ctx, trimmed);\n    }\n    RedisModule_CloseKey(key);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"stream\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if 
(RedisModule_CreateCommand(ctx, \"stream.add\", stream_add, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"stream.addn\", stream_addn, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"stream.delete\", stream_delete, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"stream.range\", stream_range, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx, \"stream.trim\", stream_trim, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/subcommands.c",
    "content": "#include \"redismodule.h\"\n\n#define UNUSED(V) ((void) V)\n\nint cmd_set(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint cmd_get(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n\n    if (argc > 4) /* For testing */\n        return RedisModule_WrongArity(ctx);\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    return REDISMODULE_OK;\n}\n\nint cmd_get_fullname(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    UNUSED(argv);\n    UNUSED(argc);\n\n    const char *command_name = RedisModule_GetCurrentCommandName(ctx);\n    RedisModule_ReplyWithSimpleString(ctx, command_name);\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"subcommands\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Module command names cannot contain special characters. */\n    RedisModule_Assert(RedisModule_CreateCommand(ctx,\"subcommands.char\\r\",NULL,\"\",0,0,0) == REDISMODULE_ERR);\n    RedisModule_Assert(RedisModule_CreateCommand(ctx,\"subcommands.char\\n\",NULL,\"\",0,0,0) == REDISMODULE_ERR);\n    RedisModule_Assert(RedisModule_CreateCommand(ctx,\"subcommands.char \",NULL,\"\",0,0,0) == REDISMODULE_ERR);\n\n    if (RedisModule_CreateCommand(ctx,\"subcommands.bitarray\",NULL,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *parent = RedisModule_GetCommand(ctx,\"subcommands.bitarray\");\n\n    if (RedisModule_CreateSubcommand(parent,\"set\",cmd_set,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Module subcommand names cannot contain special characters. 
*/\n    RedisModule_Assert(RedisModule_CreateSubcommand(parent,\"char|\",cmd_set,\"\",0,0,0) == REDISMODULE_ERR);\n    RedisModule_Assert(RedisModule_CreateSubcommand(parent,\"char@\",cmd_set,\"\",0,0,0) == REDISMODULE_ERR);\n    RedisModule_Assert(RedisModule_CreateSubcommand(parent,\"char=\",cmd_set,\"\",0,0,0) == REDISMODULE_ERR);\n\n    RedisModuleCommand *subcmd = RedisModule_GetCommand(ctx,\"subcommands.bitarray|set\");\n    RedisModuleCommandInfo cmd_set_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RW | REDISMODULE_CMD_KEY_UPDATE,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(subcmd, &cmd_set_info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateSubcommand(parent,\"get\",cmd_get,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    subcmd = RedisModule_GetCommand(ctx,\"subcommands.bitarray|get\");\n    RedisModuleCommandInfo cmd_get_info = {\n        .version = REDISMODULE_COMMAND_INFO_VERSION,\n        .key_specs = (RedisModuleCommandKeySpec[]){\n            {\n                .flags = REDISMODULE_CMD_KEY_RO | REDISMODULE_CMD_KEY_ACCESS,\n                .begin_search_type = REDISMODULE_KSPEC_BS_INDEX,\n                .bs.index.pos = 1,\n                .find_keys_type = REDISMODULE_KSPEC_FK_RANGE,\n                .fk.range = {0,1,0}\n            },\n            {0}\n        }\n    };\n    if (RedisModule_SetCommandInfo(subcmd, &cmd_get_info) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Get the name of the command currently running. 
*/\n    if (RedisModule_CreateCommand(ctx,\"subcommands.parent_get_fullname\",cmd_get_fullname,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Get the name of the subcommand currently running. */\n    if (RedisModule_CreateCommand(ctx,\"subcommands.sub\",NULL,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModuleCommand *fullname_parent = RedisModule_GetCommand(ctx,\"subcommands.sub\");\n    if (RedisModule_CreateSubcommand(fullname_parent,\"get_fullname\",cmd_get_fullname,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    /* Sanity */\n\n    /* Trying to create the same subcommand fails */\n    RedisModule_Assert(RedisModule_CreateSubcommand(parent,\"get\",NULL,\"\",0,0,0) == REDISMODULE_ERR);\n\n    /* Trying to create a sub-subcommand fails */\n    RedisModule_Assert(RedisModule_CreateSubcommand(subcmd,\"get\",NULL,\"\",0,0,0) == REDISMODULE_ERR);\n\n    /* Internal container command for testing */\n    if (RedisModule_CreateCommand(ctx,\"subcommands.internal_container\",NULL,\"internal\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    RedisModuleCommand *internal_parent = RedisModule_GetCommand(ctx,\"subcommands.internal_container\");\n    if (RedisModule_CreateSubcommand(internal_parent,\"test\",cmd_set,\"internal\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/test_keymeta.c",
    "content": "/* An example module for attaching metadata to keys.\n *\n * This example lets tests create metadata-key classes and then SET and GET metadata\n * to keys. The 8-byte slot stores a handle to a module-managed allocation; here\n * we use to attach a string per-key.\n *\n * The module pre-registers several metadata classes during initialization and exposes\n * the following commands (via RedisModule_CreateCommand):\n *\n * 1) KEYMETA.REGISTER <4-char-id> <version> [FLAGS]\n *    Register a new metadata-key class during module load.\n *    Returns the <keymeta-class-id> index (Returned from RedisModule_CreateKeyMetaClass)\n *    On failure, returns nil\n *    In a real module it should be registered \"automatically\" via OnLoad.\n *\n *    FLAGS (colon-separated):\n *      KEEPONCOPY     - Keep metadata on COPY operation\n *      KEEPONRENAME   - Keep metadata on RENAME operation\n *      KEEPONMOVE     - Keep metadata on MOVE operation\n *      UNLINKFREE     - Use unlink callback for async free\n *      RDBLOAD        - Enable rdb_load callback (metadata can be loaded from RDB)\n *      RDBSAVE        - Enable rdb_save callback (metadata can be saved to RDB)\n *      ALLOWIGNORE    - Enable ALLOW_IGNORE flag (graceful discard on load if\n *                       class not registered or no rdb_load callback)\n *\n *    Example: > keymeta.register KMT1 1 KEEPONCOPY:KEEPONRENAME:ALLOWIGNORE:RDBLOAD:RDBSAVE\n *    Example: > keymeta.register KMT2 1 ALLOWIGNORE\n *\n * 2) KEYMETA.SET <4-char-id> <key> <string-value>\n *    Set the string value as metadata to given key.\n *    Note:\n *    - If already set earlier, then it is expected that it will released before setting a\n *      new string. That is why this command should start with trying to get first\n *      metadata for given key.\n *\n * 3) KEYMETA.GET <4-char-id> <key>\n *    Get the metadata attached to the key for the given class.\n *    Returns a string attached to the given key. 
Or nil if nothing is attached.\n *\n * 4) KEYMETA.UNREGISTER <4-char-id>\n *    This will mark the key metadata class as released. It can later be reused again\n *    by the same class (consider comment above).\n *    Return REDISMODULE_OK/REDISMODULE_ERR.\n *\n * 5) KEYMETA.ACTIVE\n *    Return total number of active metadata at the moment.\n */\n\n#include \"redismodule.h\"\n#include <string.h>\n#include <stdlib.h>\n#include <assert.h>\n\n/* Virtualize class IDs for testing. Values: 0 unused, 1..7 used, -1 released */\nRedisModuleKeyMetaClassId class_ids[8] = { 0 };\n\n/* Mapping from 4-char-id to class-id */\ntypedef struct {\n    char name[5];  /* 4 chars + null terminator */\n    RedisModuleKeyMetaClassId class_id;\n} ClassMapping;\n\n#define MAX_CLASS_MAPPINGS 8\nstatic ClassMapping class_mappings[MAX_CLASS_MAPPINGS];\nstatic int num_class_mappings = 0;\n\n/* Reverse lookup: given a class_id, find the 4-char-id name */\nstatic const char* lookupClassName(RedisModuleKeyMetaClassId class_id) {\n    for (int i = 0; i < num_class_mappings; i++) {\n        if (class_mappings[i].class_id == class_id) {\n            return class_mappings[i].name;\n        }\n    }\n    return NULL;\n}\n\n/* Track active metadata instances (not yet freed) */\nstatic long long active_metadata_count = 0;\n\n/* Helper functions for class mapping */\n\n/* Add a mapping from 4-char-id to class-id */\nstatic int addClassMapping(const char *name, RedisModuleKeyMetaClassId class_id) {\n    if (num_class_mappings >= MAX_CLASS_MAPPINGS) {\n        return 0; /* No space */\n    }\n    strncpy(class_mappings[num_class_mappings].name, name, 4);\n    class_mappings[num_class_mappings].name[4] = '\\0';\n    class_mappings[num_class_mappings].class_id = class_id;\n    num_class_mappings++;\n    return 1;\n}\n\n/* Lookup class-id by 4-char-id. Returns -1 if not found. 
*/\nstatic RedisModuleKeyMetaClassId lookupClassId(const char *name) {\n    for (int i = 0; i < num_class_mappings; i++) {\n        if (strncmp(class_mappings[i].name, name, 4) == 0) {\n            return class_mappings[i].class_id;\n        }\n    }\n    return -1;\n}\n\n/* Remove a mapping by 4-char-id */\nstatic int removeClassMapping(const char *name) {\n    for (int i = 0; i < num_class_mappings; i++) {\n        if (strncmp(class_mappings[i].name, name, 4) == 0) {\n            /* Shift remaining entries down */\n            for (int j = i; j < num_class_mappings - 1; j++) {\n                class_mappings[j] = class_mappings[j + 1];\n            }\n            num_class_mappings--;\n            return 1;\n        }\n    }\n    return 0;\n}\n\n/* Callback functions for metadata lifecycle */\n\n/* Copy callback - called when a key is copied */\nstatic int KeyMetaCopyCallback(RedisModuleKeyOptCtx *ctx, uint64_t *meta) {\n    REDISMODULE_NOT_USED(ctx);\n    char *str = (char *)*meta;\n    /* Note, condition is redundant since cb only invoked when meta != reset_value */\n    if (str) {\n        char *new_str = strdup(str);\n        *meta = (uint64_t)new_str;\n        active_metadata_count++; /* New metadata instance created */\n    }\n    return 1; /* Keep metadata */\n}\n\n/* Rename callback - called when a key is renamed. */\nstatic int KeyMetaRenameDiscardCallback(RedisModuleKeyOptCtx *ctx, uint64_t *meta) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(meta);\n    return 0;\n}\n\n/* Unlink callback - called when a key is unlinked */\nstatic void KeyMetaUnlinkCallback(RedisModuleKeyOptCtx *ctx, uint64_t *meta) {\n    /* Deliberately free the metadata early here, before the free callback runs */\n    /* Note, condition is redundant since cb only invoked when meta != reset_value */\n    if (*meta != 0) {\n        char *str = (char *)*meta;\n        free(str);\n        *meta = 0;  /* Set to reset_value !!! 
*/\n        active_metadata_count--; /* Metadata instance freed */\n    }\n    REDISMODULE_NOT_USED(ctx);\n}\n\n/* Free callback - called when metadata needs to be freed */\nstatic void KeyMetaFreeCallback(const char *keyname, uint64_t meta) {\n    REDISMODULE_NOT_USED(keyname);\n    /* Note, condition is redundant since cb only invoked when meta != reset_value */\n    if (meta != 0) {\n        char *str = (char *)meta;\n        free(str);\n        active_metadata_count--; /* Metadata instance freed */\n    }\n}\n\nstatic int KeyMetaMoveDiscardCallback(RedisModuleKeyOptCtx *ctx, uint64_t *meta) {\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(meta);\n    return 0; /* discard metadata */\n}\n\n/* RDB Save Callback - Serialize metadata to RDB\n * This callback is invoked during RDB save to write the metadata value.\n *\n * Parameters:\n *   - rdb: RedisModuleIO context for writing to RDB\n *   - reserved: Reserved for future use\n *   - meta: Pointer to the 8-byte metadata value (pointer to our string)\n */\nstatic void KeyMetaRDBSaveCallback(RedisModuleIO *rdb, void *reserved, uint64_t *meta) {\n    REDISMODULE_NOT_USED(reserved);\n\n    /* If metadata is NULL (reset_value), don't save anything */\n    if (*meta == 0) return;\n\n    /* Extract the string from the metadata pointer */\n    char *metadata_string = (char *)*meta;\n\n    /* Save the string to RDB using SaveStringBuffer */\n    RedisModule_SaveStringBuffer(rdb, metadata_string, strlen(metadata_string));\n    /* Save more silly data */\n    RedisModule_SaveSigned(rdb, 1);\n    RedisModule_SaveFloat(rdb, 1.5);\n    RedisModule_SaveLongDouble(rdb, 0.333333333333333333L);\n}\n\n/* RDB Load Callback - Deserialize metadata from RDB\n * This callback is invoked during RDB load to read the metadata value.\n *\n * Parameters:\n *   - rdb: RedisModuleIO context for reading from RDB\n *   - meta: Pointer to store the loaded 8-byte metadata value\n *   - encver: Encoding version (class version from RDB)\n 
*\n * Returns:\n *   - 1: Attach metadata to key (success)\n *   - 0: Ignore/skip metadata (not an error)\n *   - -1: Error - abort RDB load\n */\nstatic int KeyMetaRDBLoadCallback(RedisModuleIO *rdb, uint64_t *meta, int encver) {\n    REDISMODULE_NOT_USED(encver);\n\n    /* Load the string from RDB using LoadStringBuffer */\n    size_t len;\n    char *loaded_string = RedisModule_LoadStringBuffer(rdb, &len);\n\n    if (loaded_string == NULL) {\n        /* Error loading string */\n        return -1;\n    }\n\n    /* Allocate and copy the string (LoadStringBuffer returns a buffer that must be freed) */\n    char *metadata_string = malloc(len + 1);\n    if (metadata_string == NULL) {\n        RedisModule_Free(loaded_string);\n        return -1;\n    }\n\n    memcpy(metadata_string, loaded_string, len);\n    metadata_string[len] = '\\0';\n    RedisModule_Free(loaded_string);\n\n    /* Load the additional data that was saved (must match rdb_save) */\n    int64_t signed_val = RedisModule_LoadSigned(rdb);\n    float float_val = RedisModule_LoadFloat(rdb);\n    long double ldouble_val = RedisModule_LoadLongDouble(rdb);\n    /* We don't use these values, just need to consume them from the stream */\n    (void)signed_val;\n    (void)float_val;\n    (void)ldouble_val;\n\n    /* Store the pointer in metadata */\n    *meta = (uint64_t)metadata_string;\n    active_metadata_count++; /* New metadata instance created */\n\n    /* Return 1 to attach metadata to the key */\n    return 1;\n}\n\n/* AOF Rewrite Callback - Common implementation for all classes\n * This callback is invoked during AOF rewrite to emit commands that will\n * recreate the metadata when the AOF is loaded.\n *\n * Parameters:\n *   - aof: RedisModuleIO context for writing to AOF\n *   - reserved: Reserved for future use\n *   - meta: The 8-byte metadata value (pointer to our string)\n *   - class_id: The class ID for this metadata\n */\nstatic void KeyMetaAOFRewriteCallback_Class(RedisModuleIO *aof, void 
*reserved, uint64_t meta, RedisModuleKeyMetaClassId class_id) {\n    REDISMODULE_NOT_USED(reserved);\n\n    /* If metadata is NULL (reset_value), don't emit anything */\n    if (meta == 0) return;\n\n    /* Extract the string from the metadata pointer */\n    char *metadata_string = (char *)meta;\n\n    /* Lookup the 9-byte-id name for this class */\n    const char *class_name = lookupClassName(class_id);\n    if (!class_name) {\n        /* This shouldn't happen, but handle gracefully */\n        return;\n    }\n\n    /* Get the key name from the AOF IO context */\n    const RedisModuleString *key = RedisModule_GetKeyNameFromIO(aof);\n    if (!key) {\n        /* Key name not available - shouldn't happen during AOF rewrite */\n        return;\n    }\n\n    /* Emit the KEYMETA.SET command to recreate this metadata\n     * Format: KEYMETA.SET <9-byte-id> <key> <string-value> */\n    RedisModule_EmitAOF(aof, \"KEYMETA.SET\", \"csc\",\n                        class_name,           /* c: 9-byte-id (C string) */\n                        key,                  /* s: key name (RedisModuleString) */\n                        metadata_string);     /* c: metadata value (C string) */\n}\n\n/* Individual AOF rewrite callbacks for each class (1-7)\n * Each callback wraps the common implementation with its specific class ID */\nstatic void KeyMetaAOFRewriteCb1(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 1);\n}\n\nstatic void KeyMetaAOFRewriteCb2(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 2);\n}\n\nstatic void KeyMetaAOFRewriteCb3(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 3);\n}\n\nstatic void KeyMetaAOFRewriteCb4(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 4);\n}\n\nstatic void KeyMetaAOFRewriteCb5(RedisModuleIO 
*aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 5);\n}\n\nstatic void KeyMetaAOFRewriteCb6(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 6);\n}\n\nstatic void KeyMetaAOFRewriteCb7(RedisModuleIO *aof, void *reserved, uint64_t meta) {\n    KeyMetaAOFRewriteCallback_Class(aof, reserved, meta, 7);\n}\n\n/* KEYMETA.REGISTER <9-byte-id> <version> [KEEPONCOPY:KEEPONRENAME:UNLINKFREE:KEEPONMOVE:ALLOWIGNORE:RDBLOAD:RDBSAVE] */\nstatic int KeyMetaRegister_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3 || argc > 4) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* argv[1]: key metadata class name */\n    size_t namelen;\n    const char *metaname = RedisModule_StringPtrLen(argv[1], &namelen);\n\n    /* argv[2]: key metadata class version */\n    long long metaver;\n    if (RedisModule_StringToLongLong(argv[2], &metaver) != REDISMODULE_OK) {\n        RedisModule_ReplyWithError(ctx, \"ERR invalid version number\");\n        return REDISMODULE_OK;\n    }\n\n    /* Parse optional callback flags */\n    int keep_on_copy = 0, keep_on_rename = 0, unlink_free = 0, keep_on_move = 0;\n    int allow_ignore = 0;  /* Default: ALLOW_IGNORE disabled */\n    int rdb_load = 0;      /* Default: rdb_load disabled */\n    int rdb_save = 0;      /* Default: rdb_save disabled */\n\n    if (argc == 4) {\n        const char *flags = RedisModule_StringPtrLen(argv[3], NULL);\n        if (strstr(flags, \"KEEPONCOPY\")) keep_on_copy = 1;\n        if (strstr(flags, \"KEEPONRENAME\")) keep_on_rename = 1;\n        if (strstr(flags, \"UNLINKFREE\")) unlink_free = 1;\n        if (strstr(flags, \"KEEPONMOVE\")) keep_on_move = 1;\n        if (strstr(flags, \"ALLOWIGNORE\")) allow_ignore = 1;   /* Enable ALLOW_IGNORE */\n        if (strstr(flags, \"RDBLOAD\")) rdb_load = 1;           /* Enable rdb_load */\n        if (strstr(flags, \"RDBSAVE\")) 
rdb_save = 1;           /* Enable rdb_save */\n    }\n\n    /* Setup configuration */\n    RedisModuleKeyMetaClassConfig config = {0};\n    config.version = REDISMODULE_KEY_META_VERSION;\n    config.flags = allow_ignore ? (1 << REDISMODULE_META_ALLOW_IGNORE) : 0;\n    config.reset_value = (uint64_t)NULL;  /* NULL pointer means no resource to free */\n    config.rdb_load = rdb_load ? KeyMetaRDBLoadCallback : NULL;\n    config.rdb_save = rdb_save ? KeyMetaRDBSaveCallback : NULL;\n    switch (num_class_mappings + 1) { /* distinct cb per class */\n        case 1: config.aof_rewrite = KeyMetaAOFRewriteCb1; break;\n        case 2: config.aof_rewrite = KeyMetaAOFRewriteCb2; break;\n        case 3: config.aof_rewrite = KeyMetaAOFRewriteCb3; break;\n        case 4: config.aof_rewrite = KeyMetaAOFRewriteCb4; break;\n        case 5: config.aof_rewrite = KeyMetaAOFRewriteCb5; break;\n        case 6: config.aof_rewrite = KeyMetaAOFRewriteCb6; break;\n        case 7: config.aof_rewrite = KeyMetaAOFRewriteCb7; break;\n        default: config.aof_rewrite = NULL; break;\n    }\n    config.free = KeyMetaFreeCallback;\n    config.copy = keep_on_copy ? KeyMetaCopyCallback : NULL;\n    config.rename = keep_on_rename ? NULL : KeyMetaRenameDiscardCallback;\n    config.move = keep_on_move ? NULL : KeyMetaMoveDiscardCallback;\n    config.defrag = NULL;\n    config.unlink = unlink_free ? 
KeyMetaUnlinkCallback : NULL;\n    config.mem_usage = NULL;\n    config.free_effort = NULL;\n\n    /* Create the metadata class */\n    RedisModuleKeyMetaClassId class_id = RedisModule_CreateKeyMetaClass(ctx, metaname, (int)metaver, &config);\n\n    if (class_id < 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR failed to create metadata class\");\n        return REDISMODULE_OK;\n    } else {\n        /* Store the mapping from 9-byte-id to class-id */\n        if (!addClassMapping(metaname, class_id)) {\n            RedisModule_ReplyWithError(ctx, \"ERR failed to store class mapping\");\n            return REDISMODULE_OK;\n        }\n        RedisModule_ReplyWithLongLong(ctx, class_id);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* KEYMETA.SET <9-byte-id> <key> <string-value> */\nstatic int KeyMetaSet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Parse arguments */\n    const char *metaname = RedisModule_StringPtrLen(argv[1], NULL);\n    RedisModuleString *keyname = argv[2];\n    const char *value = RedisModule_StringPtrLen(argv[3], NULL);\n\n    /* Lookup the metadata class by name */\n    RedisModuleKeyMetaClassId class_id = lookupClassId(metaname);\n    if (class_id < 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR metadata class not found\");\n        return REDISMODULE_OK;\n    }\n\n    /* Open the key for writing */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ | REDISMODULE_WRITE);\n    \n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) {\n        RedisModule_ReplyWithNull(ctx);\n        RedisModule_CloseKey(key);\n        return REDISMODULE_OK;\n    }    \n\n    /* Check if metadata already exists and free it first. 
\n     * \n     * Note: The caller is responsible for retrieving and freeing any existing \n     *       pointer-based metadata before RM_SetKeyMeta() to a new value \n     */\n    uint64_t meta = 0;\n    if (RedisModule_GetKeyMeta(class_id, key, &meta) == REDISMODULE_OK) {\n        if (meta != 0) {\n            free((char *)meta);\n            active_metadata_count--; /* Old metadata freed */\n        }\n    }\n\n    char *new_str = strdup(value);\n    int res = RedisModule_SetKeyMeta(class_id, key, (uint64_t)new_str);\n\n    if (res == REDISMODULE_OK) {\n        active_metadata_count++; /* New metadata instance created */\n    }\n\n    RedisModule_CloseKey(key);\n\n    if (res == REDISMODULE_OK) {\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        free(new_str);\n        RedisModule_ReplyWithError(ctx, \"ERR failed to set metadata\");\n    }\n    return REDISMODULE_OK;\n}\n\n/* KEYMETA.GET <9-byte-id> <key> */\nstatic int KeyMetaGet_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Parse arguments */\n    const char *metaname = RedisModule_StringPtrLen(argv[1], NULL);\n    RedisModuleString *keyname = argv[2];\n\n    /* Lookup the metadata class by name */\n    RedisModuleKeyMetaClassId class_id = lookupClassId(metaname);\n    if (class_id < 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR metadata class not found\");\n        return REDISMODULE_OK;\n    }\n\n    /* Open the key for reading */\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_EMPTY) {\n        RedisModule_ReplyWithNull(ctx);\n        RedisModule_CloseKey(key);\n        return REDISMODULE_OK;\n    }\n\n    /* Get the metadata */\n    uint64_t meta = 0;\n    int result = RedisModule_GetKeyMeta(class_id, key, &meta);\n\n    RedisModule_CloseKey(key);\n\n    if (result == 
REDISMODULE_OK && meta != 0) {\n        char *str = (char *)meta;\n        RedisModule_ReplyWithCString(ctx, str);\n    } else {\n        RedisModule_ReplyWithNull(ctx);\n    }\n\n    return REDISMODULE_OK;\n}\n\n/* KEYMETA.UNREGISTER <9-byte-id> */\nstatic int KeyMetaUnregister_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    /* Parse arguments */\n    const char *metaname = RedisModule_StringPtrLen(argv[1], NULL);\n\n    /* Lookup the metadata class by name */\n    RedisModuleKeyMetaClassId class_id = lookupClassId(metaname);\n    if (class_id < 0) {\n        RedisModule_ReplyWithError(ctx, \"ERR metadata class not found\");\n        return REDISMODULE_OK;\n    }\n\n    /* Release the metadata class */\n    int result = RedisModule_ReleaseKeyMetaClass(class_id);\n\n    if (result == REDISMODULE_OK) {\n        /* Remove the mapping */\n        removeClassMapping(metaname);\n        RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    } else {\n        RedisModule_ReplyWithError(ctx, \"ERR failed to unregister class\");\n    }\n    return REDISMODULE_OK;\n}\n\n/* KEYMETA.ACTIVE\n * Returns the total number of active metadata instances that haven't been freed yet.\n * This is useful for testing to verify that metadata is properly cleaned up. 
*/\nstatic int KeyMetaActive_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 1) {\n        return RedisModule_WrongArity(ctx);\n    }\n    REDISMODULE_NOT_USED(argv);\n\n    RedisModule_ReplyWithLongLong(ctx, active_metadata_count);\n    return REDISMODULE_OK;\n}\n\n/* Module initialization */\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx, \"test_keymeta\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    /* Register commands */\n    if (RedisModule_CreateCommand(ctx, \"keymeta.register\",\n        KeyMetaRegister_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keymeta.set\",\n        KeyMetaSet_RedisCommand, \"write deny-oom\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keymeta.get\",\n        KeyMetaGet_RedisCommand, \"readonly\", 1, 1, 1) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keymeta.unregister\",\n        KeyMetaUnregister_RedisCommand, \"write\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    if (RedisModule_CreateCommand(ctx, \"keymeta.active\",\n        KeyMetaActive_RedisCommand, \"readonly fast\", 0, 0, 0) == REDISMODULE_ERR) {\n        return REDISMODULE_ERR;\n    }\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    REDISMODULE_NOT_USED(ctx);\n    long unsigned int i;\n    for (i = 0 ; i < sizeof(class_ids) / sizeof(class_ids[0]); i++) {\n        if (class_ids[i] > 0)\n            RedisModule_ReleaseKeyMetaClass(class_ids[i]);\n    }\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/test_lazyfree.c",
    "content": "/* This module emulates a linked list for lazyfree testing of modules, which\n is a simplified version of 'hellotype.c'\n */\n#include \"redismodule.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n#include <stdint.h>\n\nstatic RedisModuleType *LazyFreeLinkType;\n\nstruct LazyFreeLinkNode {\n    int64_t value;\n    struct LazyFreeLinkNode *next;\n};\n\nstruct LazyFreeLinkObject {\n    struct LazyFreeLinkNode *head;\n    size_t len; /* Number of elements added. */\n};\n\nstruct LazyFreeLinkObject *createLazyFreeLinkObject(void) {\n    struct LazyFreeLinkObject *o;\n    o = RedisModule_Alloc(sizeof(*o));\n    o->head = NULL;\n    o->len = 0;\n    return o;\n}\n\nvoid LazyFreeLinkInsert(struct LazyFreeLinkObject *o, int64_t ele) {\n    struct LazyFreeLinkNode *next = o->head, *newnode, *prev = NULL;\n\n    while(next && next->value < ele) {\n        prev = next;\n        next = next->next;\n    }\n    newnode = RedisModule_Alloc(sizeof(*newnode));\n    newnode->value = ele;\n    newnode->next = next;\n    if (prev) {\n        prev->next = newnode;\n    } else {\n        o->head = newnode;\n    }\n    o->len++;\n}\n\nvoid LazyFreeLinkReleaseObject(struct LazyFreeLinkObject *o) {\n    struct LazyFreeLinkNode *cur, *next;\n    cur = o->head;\n    while(cur) {\n        next = cur->next;\n        RedisModule_Free(cur);\n        cur = next;\n    }\n    RedisModule_Free(o);\n}\n\n/* LAZYFREELINK.INSERT key value */\nint LazyFreeLinkInsert_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. 
*/\n\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n        REDISMODULE_READ|REDISMODULE_WRITE);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != LazyFreeLinkType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    long long value;\n    if ((RedisModule_StringToLongLong(argv[2],&value) != REDISMODULE_OK)) {\n        return RedisModule_ReplyWithError(ctx,\"ERR invalid value: must be a signed 64 bit integer\");\n    }\n\n    struct LazyFreeLinkObject *hto;\n    if (type == REDISMODULE_KEYTYPE_EMPTY) {\n        hto = createLazyFreeLinkObject();\n        RedisModule_ModuleTypeSetValue(key,LazyFreeLinkType,hto);\n    } else {\n        hto = RedisModule_ModuleTypeGetValue(key);\n    }\n\n    LazyFreeLinkInsert(hto,value);\n    RedisModule_SignalKeyAsReady(ctx,argv[1]);\n\n    RedisModule_ReplyWithLongLong(ctx,hto->len);\n    RedisModule_ReplicateVerbatim(ctx);\n    return REDISMODULE_OK;\n}\n\n/* LAZYFREELINK.LEN key */\nint LazyFreeLinkLen_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    RedisModule_AutoMemory(ctx); /* Use automatic memory management. */\n\n    if (argc != 2) return RedisModule_WrongArity(ctx);\n    RedisModuleKey *key = RedisModule_OpenKey(ctx,argv[1],\n                                              REDISMODULE_READ);\n    int type = RedisModule_KeyType(key);\n    if (type != REDISMODULE_KEYTYPE_EMPTY &&\n        RedisModule_ModuleTypeGetType(key) != LazyFreeLinkType)\n    {\n        return RedisModule_ReplyWithError(ctx,REDISMODULE_ERRORMSG_WRONGTYPE);\n    }\n\n    struct LazyFreeLinkObject *hto = RedisModule_ModuleTypeGetValue(key);\n    RedisModule_ReplyWithLongLong(ctx,hto ? 
hto->len : 0);\n    return REDISMODULE_OK;\n}\n\nvoid *LazyFreeLinkRdbLoad(RedisModuleIO *rdb, int encver) {\n    if (encver != 0) {\n        return NULL;\n    }\n    uint64_t elements = RedisModule_LoadUnsigned(rdb);\n    struct LazyFreeLinkObject *hto = createLazyFreeLinkObject();\n    while(elements--) {\n        int64_t ele = RedisModule_LoadSigned(rdb);\n        LazyFreeLinkInsert(hto,ele);\n    }\n    return hto;\n}\n\nvoid LazyFreeLinkRdbSave(RedisModuleIO *rdb, void *value) {\n    struct LazyFreeLinkObject *hto = value;\n    struct LazyFreeLinkNode *node = hto->head;\n    RedisModule_SaveUnsigned(rdb,hto->len);\n    while(node) {\n        RedisModule_SaveSigned(rdb,node->value);\n        node = node->next;\n    }\n}\n\nvoid LazyFreeLinkAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {\n    struct LazyFreeLinkObject *hto = value;\n    struct LazyFreeLinkNode *node = hto->head;\n    while(node) {\n        RedisModule_EmitAOF(aof,\"LAZYFREELINK.INSERT\",\"sl\",key,node->value);\n        node = node->next;\n    }\n}\n\nvoid LazyFreeLinkFree(void *value) {\n    LazyFreeLinkReleaseObject(value);\n}\n\nsize_t LazyFreeLinkFreeEffort(RedisModuleString *key, const void *value) {\n    REDISMODULE_NOT_USED(key);\n    const struct LazyFreeLinkObject *hto = value;\n    return hto->len;\n}\n\nvoid LazyFreeLinkUnlink(RedisModuleString *key, const void *value) {\n    REDISMODULE_NOT_USED(key);\n    REDISMODULE_NOT_USED(value);\n    /* Here you can know which key and value is about to be freed. 
*/\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"lazyfreetest\",1,REDISMODULE_APIVER_1)\n        == REDISMODULE_ERR) return REDISMODULE_ERR;\n\n    /* We only allow our module to be loaded when the redis core version is greater than the version of my module */\n    if (RedisModule_GetTypeMethodVersion() < REDISMODULE_TYPE_METHOD_VERSION) {\n        return REDISMODULE_ERR;\n    }\n\n    RedisModuleTypeMethods tm = {\n        .version = REDISMODULE_TYPE_METHOD_VERSION,\n        .rdb_load = LazyFreeLinkRdbLoad,\n        .rdb_save = LazyFreeLinkRdbSave,\n        .aof_rewrite = LazyFreeLinkAofRewrite,\n        .free = LazyFreeLinkFree,\n        .free_effort = LazyFreeLinkFreeEffort,\n        .unlink = LazyFreeLinkUnlink,\n    };\n\n    LazyFreeLinkType = RedisModule_CreateDataType(ctx,\"test_lazy\",0,&tm);\n    if (LazyFreeLinkType == NULL) return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"lazyfreelink.insert\",\n        LazyFreeLinkInsert_RedisCommand,\"write deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"lazyfreelink.len\",\n        LazyFreeLinkLen_RedisCommand,\"readonly\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/testrdb.c",
    "content": "#include \"redismodule.h\"\n\n#include <string.h>\n#include <assert.h>\n\n/* Module configuration, save aux or not? */\n#define CONF_AUX_OPTION_NO_AUX           0\n#define CONF_AUX_OPTION_SAVE2            1 << 0\n#define CONF_AUX_OPTION_BEFORE_KEYSPACE  1 << 1\n#define CONF_AUX_OPTION_AFTER_KEYSPACE   1 << 2\n#define CONF_AUX_OPTION_NO_DATA          1 << 3\nlong long conf_aux_count = 0;\n\n/* Registered type */\nRedisModuleType *testrdb_type = NULL;\n\n/* Global values to store and persist to aux */\nRedisModuleString *before_str = NULL;\nRedisModuleString *after_str = NULL;\n\n/* Global values used to keep aux from db being loaded (in case of async_loading) */\nRedisModuleString *before_str_temp = NULL;\nRedisModuleString *after_str_temp = NULL;\n\n/* Indicates whether there is an async replication in progress.\n * We control this value from RedisModuleEvent_ReplAsyncLoad events. */\nint async_loading = 0;\n\nint n_aux_load_called = 0;\n\nvoid replAsyncLoadCallback(RedisModuleCtx *ctx, RedisModuleEvent e, uint64_t sub, void *data)\n{\n    REDISMODULE_NOT_USED(e);\n    REDISMODULE_NOT_USED(data);\n\n    switch (sub) {\n    case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED:\n        assert(async_loading == 0);\n        async_loading = 1;\n        break;\n    case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED:\n        /* Discard temp aux */\n        if (before_str_temp)\n            RedisModule_FreeString(ctx, before_str_temp);\n        if (after_str_temp)\n            RedisModule_FreeString(ctx, after_str_temp);\n        before_str_temp = NULL;\n        after_str_temp = NULL;\n\n        async_loading = 0;\n        break;\n    case REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_COMPLETED:\n        if (before_str)\n            RedisModule_FreeString(ctx, before_str);\n        if (after_str)\n            RedisModule_FreeString(ctx, after_str);\n        before_str = before_str_temp;\n        after_str = after_str_temp;\n\n        before_str_temp = NULL;\n        
after_str_temp = NULL;\n\n        async_loading = 0;\n        break;\n    default:\n        assert(0);\n    }\n}\n\nvoid *testrdb_type_load(RedisModuleIO *rdb, int encver) {\n    int count = RedisModule_LoadSigned(rdb);\n    RedisModuleString *str = RedisModule_LoadString(rdb);\n    float f = RedisModule_LoadFloat(rdb);\n    long double ld = RedisModule_LoadLongDouble(rdb);\n    if (RedisModule_IsIOError(rdb)) {\n        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(rdb);\n        if (str)\n            RedisModule_FreeString(ctx, str);\n        return NULL;\n    }\n    /* Using the values only after checking for io errors. */\n    assert(count==1);\n    assert(encver==1);\n    assert(f==1.5f);\n    assert(ld==0.333333333333333333L);\n    return str;\n}\n\nvoid testrdb_type_save(RedisModuleIO *rdb, void *value) {\n    RedisModuleString *str = (RedisModuleString*)value;\n    RedisModule_SaveSigned(rdb, 1);\n    RedisModule_SaveString(rdb, str);\n    RedisModule_SaveFloat(rdb, 1.5);\n    RedisModule_SaveLongDouble(rdb, 0.333333333333333333L);\n}\n\nvoid testrdb_aux_save(RedisModuleIO *rdb, int when) {\n    if (!(conf_aux_count & CONF_AUX_OPTION_BEFORE_KEYSPACE)) assert(when == REDISMODULE_AUX_AFTER_RDB);\n    if (!(conf_aux_count & CONF_AUX_OPTION_AFTER_KEYSPACE)) assert(when == REDISMODULE_AUX_BEFORE_RDB);\n    assert(conf_aux_count!=CONF_AUX_OPTION_NO_AUX);\n    if (when == REDISMODULE_AUX_BEFORE_RDB) {\n        if (before_str) {\n            RedisModule_SaveSigned(rdb, 1);\n            RedisModule_SaveString(rdb, before_str);\n        } else {\n            RedisModule_SaveSigned(rdb, 0);\n        }\n    } else {\n        if (after_str) {\n            RedisModule_SaveSigned(rdb, 1);\n            RedisModule_SaveString(rdb, after_str);\n        } else {\n            RedisModule_SaveSigned(rdb, 0);\n        }\n    }\n}\n\nint testrdb_aux_load(RedisModuleIO *rdb, int encver, int when) {\n    assert(encver == 1);\n    if (!(conf_aux_count & 
CONF_AUX_OPTION_BEFORE_KEYSPACE)) assert(when == REDISMODULE_AUX_AFTER_RDB);\n    if (!(conf_aux_count & CONF_AUX_OPTION_AFTER_KEYSPACE)) assert(when == REDISMODULE_AUX_BEFORE_RDB);\n    assert(conf_aux_count!=CONF_AUX_OPTION_NO_AUX);\n    RedisModuleCtx *ctx = RedisModule_GetContextFromIO(rdb);\n    if (when == REDISMODULE_AUX_BEFORE_RDB) {\n        if (async_loading == 0) {\n            if (before_str)\n                RedisModule_FreeString(ctx, before_str);\n            before_str = NULL;\n            int count = RedisModule_LoadSigned(rdb);\n            if (RedisModule_IsIOError(rdb))\n                return REDISMODULE_ERR;\n            if (count)\n                before_str = RedisModule_LoadString(rdb);\n        } else {\n            if (before_str_temp)\n                RedisModule_FreeString(ctx, before_str_temp);\n            before_str_temp = NULL;\n            int count = RedisModule_LoadSigned(rdb);\n            if (RedisModule_IsIOError(rdb))\n                return REDISMODULE_ERR;\n            if (count)\n                before_str_temp = RedisModule_LoadString(rdb);\n        }\n    } else {\n        if (async_loading == 0) {\n            if (after_str)\n                RedisModule_FreeString(ctx, after_str);\n            after_str = NULL;\n            int count = RedisModule_LoadSigned(rdb);\n            if (RedisModule_IsIOError(rdb))\n                return REDISMODULE_ERR;\n            if (count)\n                after_str = RedisModule_LoadString(rdb);\n        } else {\n            if (after_str_temp)\n                RedisModule_FreeString(ctx, after_str_temp);\n            after_str_temp = NULL;\n            int count = RedisModule_LoadSigned(rdb);\n            if (RedisModule_IsIOError(rdb))\n                return REDISMODULE_ERR;\n            if (count)\n                after_str_temp = RedisModule_LoadString(rdb);\n        }\n    }\n\n    if (RedisModule_IsIOError(rdb))\n        return REDISMODULE_ERR;\n    return 
REDISMODULE_OK;\n}\n\nvoid testrdb_type_free(void *value) {\n    if (value)\n        RedisModule_FreeString(NULL, (RedisModuleString*)value);\n}\n\nint testrdb_set_before(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    if (before_str)\n        RedisModule_FreeString(ctx, before_str);\n    before_str = argv[1];\n    RedisModule_RetainString(ctx, argv[1]);\n    RedisModule_ReplyWithLongLong(ctx, 1);\n    return REDISMODULE_OK;\n}\n\nint testrdb_get_before(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    if (before_str)\n        RedisModule_ReplyWithString(ctx, before_str);\n    else\n        RedisModule_ReplyWithStringBuffer(ctx, \"\", 0);\n    return REDISMODULE_OK;\n}\n\n/* For purpose of testing module events, expose variable state during async_loading. 
*/\nint testrdb_async_loading_get_before(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    if (before_str_temp)\n        RedisModule_ReplyWithString(ctx, before_str_temp);\n    else\n        RedisModule_ReplyWithStringBuffer(ctx, \"\", 0);\n    return REDISMODULE_OK;\n}\n\nint testrdb_set_after(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    if (after_str)\n        RedisModule_FreeString(ctx, after_str);\n    after_str = argv[1];\n    RedisModule_RetainString(ctx, argv[1]);\n    RedisModule_ReplyWithLongLong(ctx, 1);\n    return REDISMODULE_OK;\n}\n\nint testrdb_get_after(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(argv);\n    if (argc != 1){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n    if (after_str)\n        RedisModule_ReplyWithString(ctx, after_str);\n    else\n        RedisModule_ReplyWithStringBuffer(ctx, \"\", 0);\n    return REDISMODULE_OK;\n}\n\nint testrdb_set_key(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_WRITE);\n    RedisModuleString *str = RedisModule_ModuleTypeGetValue(key);\n    if (str)\n        RedisModule_FreeString(ctx, str);\n    RedisModule_ModuleTypeSetValue(key, testrdb_type, argv[2]);\n    RedisModule_RetainString(ctx, argv[2]);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithLongLong(ctx, 1);\n    return REDISMODULE_OK;\n}\n\nint testrdb_get_key(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2){\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    
RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], REDISMODULE_READ);\n    RedisModuleString *str = RedisModule_ModuleTypeGetValue(key);\n    RedisModule_CloseKey(key);\n    RedisModule_ReplyWithString(ctx, str);\n    return REDISMODULE_OK;\n}\n\nint testrdb_get_n_aux_load_called(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    REDISMODULE_NOT_USED(ctx);\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    RedisModule_ReplyWithLongLong(ctx, n_aux_load_called);\n    return REDISMODULE_OK;\n}\n\nint test2rdb_aux_load(RedisModuleIO *rdb, int encver, int when) {\n    REDISMODULE_NOT_USED(rdb);\n    REDISMODULE_NOT_USED(encver);\n    REDISMODULE_NOT_USED(when);\n    n_aux_load_called++;\n    return REDISMODULE_OK;\n}\n\nvoid test2rdb_aux_save(RedisModuleIO *rdb, int when) {\n    REDISMODULE_NOT_USED(rdb);\n    REDISMODULE_NOT_USED(when);\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"testrdb\",1,REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS | REDISMODULE_OPTIONS_HANDLE_REPL_ASYNC_LOAD);\n\n    if (argc > 0)\n        RedisModule_StringToLongLong(argv[0], &conf_aux_count);\n\n    if (conf_aux_count==CONF_AUX_OPTION_NO_AUX) {\n        RedisModuleTypeMethods datatype_methods = {\n            .version = 1,\n            .rdb_load = testrdb_type_load,\n            .rdb_save = testrdb_type_save,\n            .aof_rewrite = NULL,\n            .digest = NULL,\n            .free = testrdb_type_free,\n        };\n\n        testrdb_type = RedisModule_CreateDataType(ctx, \"test__rdb\", 1, &datatype_methods);\n        if (testrdb_type == NULL)\n            return REDISMODULE_ERR;\n    } else if (!(conf_aux_count & CONF_AUX_OPTION_NO_DATA)) {\n        RedisModuleTypeMethods datatype_methods = {\n           
 .version = REDISMODULE_TYPE_METHOD_VERSION,\n            .rdb_load = testrdb_type_load,\n            .rdb_save = testrdb_type_save,\n            .aof_rewrite = NULL,\n            .digest = NULL,\n            .free = testrdb_type_free,\n            .aux_load = testrdb_aux_load,\n            .aux_save = testrdb_aux_save,\n            .aux_save_triggers = ((conf_aux_count & CONF_AUX_OPTION_BEFORE_KEYSPACE) ? REDISMODULE_AUX_BEFORE_RDB : 0) |\n                                 ((conf_aux_count & CONF_AUX_OPTION_AFTER_KEYSPACE)  ? REDISMODULE_AUX_AFTER_RDB : 0)\n        };\n\n        if (conf_aux_count & CONF_AUX_OPTION_SAVE2) {\n            datatype_methods.aux_save2 = testrdb_aux_save;\n        }\n\n        testrdb_type = RedisModule_CreateDataType(ctx, \"test__rdb\", 1, &datatype_methods);\n        if (testrdb_type == NULL)\n            return REDISMODULE_ERR;\n    } else {\n\n        /* Used to verify that aux_save2 api without any data, saves nothing to the RDB. */\n        RedisModuleTypeMethods datatype_methods = {\n            .version = REDISMODULE_TYPE_METHOD_VERSION,\n            .aux_load = test2rdb_aux_load,\n            .aux_save = test2rdb_aux_save,\n            .aux_save_triggers = ((conf_aux_count & CONF_AUX_OPTION_BEFORE_KEYSPACE) ? REDISMODULE_AUX_BEFORE_RDB : 0) |\n                                 ((conf_aux_count & CONF_AUX_OPTION_AFTER_KEYSPACE)  ? 
REDISMODULE_AUX_AFTER_RDB : 0)\n        };\n        if (conf_aux_count & CONF_AUX_OPTION_SAVE2) {\n            datatype_methods.aux_save2 = test2rdb_aux_save;\n        }\n\n        RedisModule_CreateDataType(ctx, \"test__rdb\", 1, &datatype_methods);\n    }\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.set.before\", testrdb_set_before,\"deny-oom\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.get.before\", testrdb_get_before,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.async_loading.get.before\", testrdb_async_loading_get_before,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.set.after\", testrdb_set_after,\"deny-oom\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.get.after\", testrdb_get_after,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.set.key\", testrdb_set_key,\"deny-oom\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.get.key\", testrdb_get_key,\"\",1,1,1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"testrdb.get.n_aux_load_called\", testrdb_get_n_aux_load_called,\"\",1,1,1) == REDISMODULE_ERR)\n            return REDISMODULE_ERR;\n\n    RedisModule_SubscribeToServerEvent(ctx,\n        RedisModuleEvent_ReplAsyncLoad, replAsyncLoadCallback);\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnUnload(RedisModuleCtx *ctx) {\n    if (before_str)\n        RedisModule_FreeString(ctx, before_str);\n    if (after_str)\n        RedisModule_FreeString(ctx, after_str);\n    if (before_str_temp)\n        RedisModule_FreeString(ctx, before_str_temp);\n    if (after_str_temp)\n        RedisModule_FreeString(ctx, 
after_str_temp);\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/timer.c",
    "content": "\n#include \"redismodule.h\"\n\nstatic void timer_callback(RedisModuleCtx *ctx, void *data)\n{\n    RedisModuleString *keyname = data;\n    RedisModuleCallReply *reply;\n\n    reply = RedisModule_Call(ctx, \"INCR\", \"s\", keyname);\n    if (reply != NULL)\n        RedisModule_FreeCallReply(reply);\n    RedisModule_FreeString(ctx, keyname);\n}\n\nint test_createtimer(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 3) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long period;\n    if (RedisModule_StringToLongLong(argv[1], &period) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"Invalid time specified.\");\n        return REDISMODULE_OK;\n    }\n\n    RedisModuleString *keyname = argv[2];\n    RedisModule_RetainString(ctx, keyname);\n\n    RedisModuleTimerID id = RedisModule_CreateTimer(ctx, period, timer_callback, keyname);\n    RedisModule_ReplyWithLongLong(ctx, id);\n\n    return REDISMODULE_OK;\n}\n\nint test_gettimer(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long long id;\n    if (RedisModule_StringToLongLong(argv[1], &id) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"Invalid id specified.\");\n        return REDISMODULE_OK;\n    }\n\n    uint64_t remaining;\n    RedisModuleString *keyname;\n    if (RedisModule_GetTimerInfo(ctx, id, &remaining, (void **)&keyname) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithNull(ctx);\n    } else {\n        RedisModule_ReplyWithArray(ctx, 2);\n        RedisModule_ReplyWithString(ctx, keyname);\n        RedisModule_ReplyWithLongLong(ctx, remaining);\n    }\n\n    return REDISMODULE_OK;\n}\n\nint test_stoptimer(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    if (argc != 2) {\n        RedisModule_WrongArity(ctx);\n        return REDISMODULE_OK;\n    }\n\n    long 
long id;\n    if (RedisModule_StringToLongLong(argv[1], &id) == REDISMODULE_ERR) {\n        RedisModule_ReplyWithError(ctx, \"Invalid id specified.\");\n        return REDISMODULE_OK;\n    }\n\n    int ret = 0;\n    RedisModuleString *keyname;\n    if (RedisModule_StopTimer(ctx, id, (void **) &keyname) == REDISMODULE_OK) {\n        RedisModule_FreeString(ctx, keyname);\n        ret = 1;\n    }\n\n    RedisModule_ReplyWithLongLong(ctx, ret);\n    return REDISMODULE_OK;\n}\n\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx,\"timer\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"test.createtimer\", test_createtimer,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.gettimer\", test_gettimer,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n    if (RedisModule_CreateCommand(ctx,\"test.stoptimer\", test_stoptimer,\"\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/usercall.c",
    "content": "#include \"redismodule.h\"\n#include <pthread.h>\n#include <assert.h>\n\n#define UNUSED(V) ((void) V)\n\nRedisModuleUser *user = NULL;\n\nint call_without_user(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    const char *cmd = RedisModule_StringPtrLen(argv[1], NULL);\n\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, cmd, \"Ev\", argv + 2, (size_t)argc - 2);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    return REDISMODULE_OK;\n}\n\nint call_with_user_flag(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc < 3) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    RedisModule_SetContextUser(ctx, user);\n\n    /* Append Ev to the provided flags. */\n    RedisModuleString *flags = RedisModule_CreateStringFromString(ctx, argv[1]);\n    RedisModule_StringAppendBuffer(ctx, flags, \"Ev\", 2);\n\n    const char* flg = RedisModule_StringPtrLen(flags, NULL);\n    const char* cmd = RedisModule_StringPtrLen(argv[2], NULL);\n\n    RedisModuleCallReply* rep = RedisModule_Call(ctx, cmd, flg, argv + 3, (size_t)argc - 3);\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n    RedisModule_FreeString(ctx, flags);\n\n    return REDISMODULE_OK;\n}\n\nint add_to_acl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 2) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    size_t acl_len;\n    const char *acl = RedisModule_StringPtrLen(argv[1], &acl_len);\n\n    RedisModuleString *error;\n    int ret = RedisModule_SetModuleUserACLString(ctx, user, acl, &error);\n    if (ret) {\n        size_t len;\n        const char * e 
= RedisModule_StringPtrLen(error, &len);\n        RedisModule_ReplyWithError(ctx, e);\n        return REDISMODULE_OK;\n    }\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n    return REDISMODULE_OK;\n}\n\nint get_acl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n\n    if (argc != 1) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    RedisModule_Assert(user != NULL);\n\n    RedisModuleString *acl = RedisModule_GetModuleUserACLString(user);\n\n    RedisModule_ReplyWithString(ctx, acl);\n\n    RedisModule_FreeString(NULL, acl);\n\n    return REDISMODULE_OK;\n}\n\n/* Sets the context user via SetContextUser, retrieves it via GetContextUser,\n * then returns the ACL string of that user. Tests SetContextUser + GetContextUser. */\nint get_context_acl(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n\n    if (argc != 1) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    RedisModule_Assert(user != NULL);\n    RedisModule_SetContextUser(ctx, user);\n    const RedisModuleUser *ctx_user = RedisModule_GetContextUser(ctx);\n    RedisModuleString *acl = RedisModule_GetModuleUserACLString((RedisModuleUser *)ctx_user);\n    RedisModule_ReplyWithString(ctx, acl);\n    RedisModule_FreeString(NULL, acl);\n    return REDISMODULE_OK;\n}\n\n/* Returns the username of the module user via RedisModule_GetUserUsername. Tests GetUserUsername API. 
*/\nint get_user_username(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n\n    if (argc != 1) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    if (user == NULL) {\n        RedisModule_ReplyWithSimpleString(ctx, \"none\");\n        return REDISMODULE_OK;\n    }\n    RedisModuleString *name = RedisModule_GetUserUsername(ctx, user);\n    if (name == NULL) {\n        RedisModule_ReplyWithSimpleString(ctx, \"none\");\n        return REDISMODULE_OK;\n    }\n    RedisModule_ReplyWithString(ctx, name);\n    RedisModule_FreeString(ctx, name);\n    return REDISMODULE_OK;\n}\n\nint reset_user(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n\n    if (argc != 1) {\n        return RedisModule_WrongArity(ctx);\n    }\n\n    if (user != NULL) {\n        RedisModule_FreeModuleUser(user);\n    }\n\n    user = RedisModule_CreateModuleUser(\"module_user\");\n\n    RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n\n    return REDISMODULE_OK;\n}\n\ntypedef struct {\n    RedisModuleString **argv;\n    int argc;\n    RedisModuleBlockedClient *bc;\n} bg_call_data;\n\nvoid *bg_call_worker(void *arg) {\n    bg_call_data *bg = arg;\n    RedisModuleBlockedClient *bc = bg->bc;\n\n    // Get Redis module context\n    RedisModuleCtx *ctx = RedisModule_GetThreadSafeContext(bg->bc);\n\n    // Acquire GIL\n    RedisModule_ThreadSafeContextLock(ctx);\n\n    // Set user\n    RedisModule_SetContextUser(ctx, user);\n\n    // Call the command\n    size_t format_len;\n    RedisModuleString *format_redis_str = RedisModule_CreateString(NULL, \"v\", 1);\n    const char *format = RedisModule_StringPtrLen(bg->argv[1], &format_len);\n    RedisModule_StringAppendBuffer(NULL, format_redis_str, format, format_len);\n    RedisModule_StringAppendBuffer(NULL, format_redis_str, \"E\", 1);\n    format = RedisModule_StringPtrLen(format_redis_str, NULL);\n    const char *cmd = RedisModule_StringPtrLen(bg->argv[2], 
NULL);\n    RedisModuleCallReply *rep = RedisModule_Call(ctx, cmd, format, bg->argv + 3, (size_t)bg->argc - 3);\n    RedisModule_FreeString(NULL, format_redis_str);\n\n    /* Free the arguments within GIL to prevent simultaneous freeing in main thread. */\n    for (int i=0; i<bg->argc; i++)\n        RedisModule_FreeString(ctx, bg->argv[i]);\n    RedisModule_Free(bg->argv);\n    RedisModule_Free(bg);\n\n    // Release GIL\n    RedisModule_ThreadSafeContextUnlock(ctx);\n\n    // Reply to client\n    if (!rep) {\n        RedisModule_ReplyWithError(ctx, \"NULL reply returned\");\n    } else {\n        RedisModule_ReplyWithCallReply(ctx, rep);\n        RedisModule_FreeCallReply(rep);\n    }\n\n    // Unblock client\n    RedisModule_UnblockClient(bc, NULL);\n\n    // Free the Redis module context\n    RedisModule_FreeThreadSafeContext(ctx);\n\n    return NULL;\n}\n\nint call_with_user_bg(RedisModuleCtx *ctx, RedisModuleString **argv, int argc)\n{\n    UNUSED(argv);\n    UNUSED(argc);\n\n    /* Make sure we're not trying to block a client when we shouldn't */\n    int flags = RedisModule_GetContextFlags(ctx);\n    int allFlags = RedisModule_GetContextFlagsAll();\n    if ((allFlags & REDISMODULE_CTX_FLAGS_MULTI) &&\n        (flags & REDISMODULE_CTX_FLAGS_MULTI)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked client is not supported inside multi\");\n        return REDISMODULE_OK;\n    }\n    if ((allFlags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING) &&\n        (flags & REDISMODULE_CTX_FLAGS_DENY_BLOCKING)) {\n        RedisModule_ReplyWithSimpleString(ctx, \"Blocked client is not allowed\");\n        return REDISMODULE_OK;\n    }\n\n    /* Make a copy of the arguments and pass them to the thread. 
*/\n    bg_call_data *bg = RedisModule_Alloc(sizeof(bg_call_data));\n    bg->argv = RedisModule_Alloc(sizeof(RedisModuleString*)*argc);\n    bg->argc = argc;\n    for (int i=0; i<argc; i++)\n        bg->argv[i] = RedisModule_HoldString(ctx, argv[i]);\n\n    /* Block the client */\n    bg->bc = RedisModule_BlockClient(ctx, NULL, NULL, NULL, 0);\n\n    /* Start a thread to handle the request */\n    pthread_t tid;\n    int res = pthread_create(&tid, NULL, bg_call_worker, bg);\n    assert(res == 0);\n    pthread_detach(tid);\n\n    return REDISMODULE_OK;\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n\n    if (RedisModule_Init(ctx,\"usercall\",1,REDISMODULE_APIVER_1)== REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.call_without_user\", call_without_user,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.call_with_user_flag\", call_with_user_flag,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"usercall.call_with_user_bg\", call_with_user_bg, \"write\", 0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"usercall.add_to_acl\", add_to_acl, \"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.reset_user\", reset_user,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.get_acl\", get_acl,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.get_context_acl\", get_context_acl,\"write\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx,\"usercall.get_user_username\", 
get_user_username,\"readonly\",0,0,0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/modules/zset.c",
    "content": "#include \"redismodule.h\"\n#include <math.h>\n#include <errno.h>\n\n/* ZSET.REM key element\n *\n * Removes an occurrence of an element from a sorted set. Replies with the\n * number of removed elements (0 or 1).\n */\nint zset_rem(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 3) return RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx);\n    int keymode = REDISMODULE_READ | REDISMODULE_WRITE;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], keymode);\n    int deleted;\n    if (RedisModule_ZsetRem(key, argv[2], &deleted) == REDISMODULE_OK)\n        return RedisModule_ReplyWithLongLong(ctx, deleted);\n    else\n        return RedisModule_ReplyWithError(ctx, \"ERR ZsetRem failed\");\n}\n\n/* ZSET.ADD key score member\n *\n * Adds a specified member with the specified score to the sorted\n * set stored at key.\n */\nint zset_add(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx);\n    int keymode = REDISMODULE_READ | REDISMODULE_WRITE;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], keymode);\n\n    size_t len;\n    double score;\n    char *endptr;\n    const char *str = RedisModule_StringPtrLen(argv[2], &len);\n    score = strtod(str, &endptr);\n    if (*endptr != '\\0' || errno == ERANGE)\n        return RedisModule_ReplyWithError(ctx, \"value is not a valid float\");\n\n    if (RedisModule_ZsetAdd(key, score, argv[3], NULL) == REDISMODULE_OK)\n        return RedisModule_ReplyWithSimpleString(ctx, \"OK\");\n    else\n        return RedisModule_ReplyWithError(ctx, \"ERR ZsetAdd failed\");\n}\n\n/* ZSET.INCRBY key member increment\n *\n * Increments the score stored at member in the sorted set stored at key by increment.\n * Replies with the new score of this element.\n */\nint zset_incrby(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    if (argc != 4) return 
RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx);\n    int keymode = REDISMODULE_READ | REDISMODULE_WRITE;\n    RedisModuleKey *key = RedisModule_OpenKey(ctx, argv[1], keymode);\n\n    size_t len;\n    double score, newscore;\n    char *endptr;\n    const char *str = RedisModule_StringPtrLen(argv[3], &len);\n    score = strtod(str, &endptr);\n    if (*endptr != '\\0' || errno == ERANGE)\n        return RedisModule_ReplyWithError(ctx, \"value is not a valid float\");\n\n    if (RedisModule_ZsetIncrby(key, score, argv[2], NULL, &newscore) == REDISMODULE_OK)\n        return RedisModule_ReplyWithDouble(ctx, newscore);\n    else\n        return RedisModule_ReplyWithError(ctx, \"ERR ZsetIncrby failed\");\n}\n\n/* Structure to hold data for the delall scan callback */\ntypedef struct {\n    RedisModuleCtx *ctx;\n    RedisModuleString **keys_to_delete;\n    size_t keys_capacity;\n    size_t keys_count;\n} zset_delall_data;\n\n/* Callback function for scanning keys and collecting zset keys to delete */\nvoid zset_delall_callback(RedisModuleCtx *ctx, RedisModuleString *keyname, RedisModuleKey *key, void *privdata) {\n    zset_delall_data *data = privdata;\n    int was_opened = 0;\n\n    /* Open the key if it wasn't already opened */\n    if (!key) {\n        key = RedisModule_OpenKey(ctx, keyname, REDISMODULE_READ);\n        was_opened = 1;\n    }\n\n    /* Check if the key is a zset and add it to the list */\n    if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_ZSET) {\n        /* Expand the array if needed */\n        if (data->keys_count >= data->keys_capacity) {\n            data->keys_capacity = data->keys_capacity ? 
data->keys_capacity * 2 : 16;\n            data->keys_to_delete = RedisModule_Realloc(data->keys_to_delete,\n                                                       data->keys_capacity * sizeof(RedisModuleString*));\n        }\n\n        /* Store the key name (retain it so it doesn't get freed) */\n        data->keys_to_delete[data->keys_count] = keyname;\n        RedisModule_RetainString(ctx, keyname);\n        data->keys_count++;\n    }\n\n    /* Close the key if we opened it */\n    if (was_opened) {\n        RedisModule_CloseKey(key);\n    }\n}\n\n/* ZSET.DELALL\n *\n * Iterates through the keyspace and deletes all keys of type \"zset\".\n * Returns the number of deleted keys.\n */\nint zset_delall(RedisModuleCtx *ctx, REDISMODULE_ATTR_UNUSED RedisModuleString **argv, int argc) {\n    if (argc != 1) return RedisModule_WrongArity(ctx);\n    RedisModule_AutoMemory(ctx);\n\n    zset_delall_data data = {\n        .ctx = ctx,\n        .keys_to_delete = NULL,\n        .keys_capacity = 0,\n        .keys_count = 0\n    };\n\n    /* Create a scan cursor and iterate through all keys */\n    RedisModuleScanCursor *cursor = RedisModule_ScanCursorCreate();\n    while (RedisModule_Scan(ctx, cursor, zset_delall_callback, &data));\n    RedisModule_ScanCursorDestroy(cursor);\n\n    /* Delete all the collected zset keys after scan is complete */\n    size_t deleted_count = 0;\n    for (size_t i = 0; i < data.keys_count; i++) {\n        RedisModuleCallReply *reply = RedisModule_Call(ctx, \"DEL\", \"s!\", data.keys_to_delete[i]);\n        if (reply && RedisModule_CallReplyType(reply) == REDISMODULE_REPLY_INTEGER) {\n            long long del_result = RedisModule_CallReplyInteger(reply);\n            if (del_result > 0) {\n                deleted_count++;\n            }\n        }\n        if (reply) {\n            RedisModule_FreeCallReply(reply);\n        }\n        RedisModule_FreeString(ctx, data.keys_to_delete[i]);\n    }\n\n    /* Free the keys array */\n    if 
(data.keys_to_delete) {\n        RedisModule_Free(data.keys_to_delete);\n    }\n\n    return RedisModule_ReplyWithLongLong(ctx, deleted_count);\n}\n\nint RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {\n    REDISMODULE_NOT_USED(argv);\n    REDISMODULE_NOT_USED(argc);\n    if (RedisModule_Init(ctx, \"zset\", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"zset.rem\", zset_rem, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"zset.add\", zset_add, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"zset.incrby\", zset_incrby, \"write\",\n                                  1, 1, 1) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    if (RedisModule_CreateCommand(ctx, \"zset.delall\", zset_delall, \"write touches-arbitrary-keys\",\n                                  0, 0, 0) == REDISMODULE_ERR)\n        return REDISMODULE_ERR;\n\n    return REDISMODULE_OK;\n}\n"
  },
  {
    "path": "tests/sentinel/run.tcl",
    "content": "# Sentinel test suite.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\ncd tests/sentinel\nsource ../instances.tcl\n\nset ::instances_count 5 ; # How many instances we use at max.\nset ::tlsdir \"../../tls\"\n\nproc main {} {\n    parse_options\n    if {$::leaked_fds_file != \"\"} {\n        set ::env(LEAKED_FDS_FILE) $::leaked_fds_file\n    }\n    spawn_instance sentinel $::sentinel_base_port $::instances_count {\n        \"sentinel deny-scripts-reconfig no\"\n        \"enable-protected-configs yes\"\n        \"enable-debug-command yes\"\n    } \"../tests/includes/sentinel.conf\"\n\n    spawn_instance redis $::redis_base_port $::instances_count {\n        \"enable-protected-configs yes\"\n        \"enable-debug-command yes\"\n        \"save ''\"\n    }\n    run_tests\n    cleanup\n    end_tests\n}\n\nif {[catch main e]} {\n    puts $::errorInfo\n    cleanup\n    exit 1\n}\n"
  },
  {
    "path": "tests/sentinel/tests/00-base.tcl",
    "content": "# Check the basic monitoring and failover capabilities.\nsource \"../tests/includes/start-init-tests.tcl\"\nsource \"../tests/includes/init-tests.tcl\"\n\nforeach_sentinel_id id {\n    S $id sentinel debug default-down-after 1000\n}\n\nif {$::simulate_error} {\n    test \"This test will fail\" {\n        fail \"Simulated error\"\n    }\n}\n\ntest \"Sentinel command flag infrastructure works correctly\" {\n    foreach_sentinel_id id {\n        set command_list [S $id command list]\n\n        foreach cmd {ping info subscribe client|setinfo} {\n            assert_not_equal [S $id command docs $cmd] {}\n            assert_not_equal [lsearch $command_list $cmd] -1\n        }\n\n        foreach cmd {save bgrewriteaof blpop replicaof} {\n            assert_equal [S $id command docs $cmd] {}\n            assert_equal [lsearch $command_list $cmd] -1\n            assert_error {ERR unknown command*} {S $id $cmd}\n        }\n\n        assert_error {ERR unknown subcommand*} {S $id client no-touch}\n    }\n}\n\ntest \"SENTINEL HELP outputs the sentinel subcommand help\" {\n    assert_match \"*SENTINEL <subcommand> *\" [S 0 SENTINEL HELP]\n}\n\ntest \"SENTINEL MYID returns the sentinel instance ID\" {\n    assert_equal 40 [string length [S 0 SENTINEL MYID]]\n    assert_equal [S 0 SENTINEL MYID] [S 0 SENTINEL MYID]\n}\n\ntest \"SENTINEL INFO CACHE returns the cached info\" {\n    set res [S 0 SENTINEL INFO-CACHE mymaster]\n    assert_morethan_equal [llength $res] 2\n    assert_equal \"mymaster\" [lindex $res 0]\n\n    set res [lindex $res 1]\n    assert_morethan_equal [llength $res] 2\n    assert_morethan [lindex $res 0] 0\n    assert_match \"*# Server*\" [lindex $res 1]\n}\n\ntest \"SENTINEL PENDING-SCRIPTS returns the information about pending scripts\" {\n    # may or may not have a value, so assert greater than or equal to 0.\n    assert_morethan_equal [llength [S 0 SENTINEL PENDING-SCRIPTS]] 0\n}\n\ntest \"SENTINEL MASTERS returns a list of monitored masters\" 
{\n    assert_match \"*mymaster*\" [S 0 SENTINEL MASTERS]\n    assert_morethan_equal [llength [S 0 SENTINEL MASTERS]] 1\n}\n\ntest \"SENTINEL SENTINELS returns a list of sentinel instances\" {\n     assert_morethan_equal [llength [S 0 SENTINEL SENTINELS mymaster]] 1\n}\n\ntest \"SENTINEL SLAVES returns a list of the monitored replicas\" {\n     assert_morethan_equal [llength [S 0 SENTINEL SLAVES mymaster]] 1\n}\n\ntest \"SENTINEL SIMULATE-FAILURE HELP lists supported flags\" {\n    set res [S 0 SENTINEL SIMULATE-FAILURE HELP]\n    assert_morethan_equal [llength $res] 2\n    assert_equal {crash-after-election crash-after-promotion} $res\n}\n\ntest \"Basic failover works if the master is down\" {\n    set old_port [RPort $master_id]\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n    kill_instance redis $master_id\n    foreach_sentinel_id id {\n        S $id sentinel debug ping-period 500\n        S $id sentinel debug ask-period 500\n        wait_for_condition 1000 100 {\n            [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n        } else {\n            fail \"At least one Sentinel did not receive failover info\"\n        }\n    }\n    restart_instance redis $master_id\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n}\n\ntest \"New master [join $addr {:}] role matches\" {\n    assert {[RI $master_id role] eq {master}}\n}\n\ntest \"All the other slaves now point to the new master\" {\n    foreach_redis_id id {\n        if {$id != $master_id && $id != 0} {\n            wait_for_condition 1000 50 {\n                [RI $id master_port] == [lindex $addr 1]\n            } else {\n                fail \"Redis ID $id not configured to replicate with new master\"\n            }\n        }\n    }\n}\n\ntest \"The old master eventually gets reconfigured as a slave\" {\n    wait_for_condition 1000 50 {\n 
       [RI 0 master_port] == [lindex $addr 1]\n    } else {\n        fail \"Old master not reconfigured as slave of new master\"\n    }\n}\n\ntest \"ODOWN is not possible without N (quorum) Sentinels reports\" {\n    foreach_sentinel_id id {\n        S $id SENTINEL SET mymaster quorum [expr $sentinels+1]\n    }\n    set old_port [RPort $master_id]\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n    kill_instance redis $master_id\n\n    # Make sure failover did not happen.\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n    restart_instance redis $master_id\n}\n\ntest \"Failover is not possible without majority agreement\" {\n    foreach_sentinel_id id {\n        S $id SENTINEL SET mymaster quorum $quorum\n    }\n\n    # Crash majority of sentinels\n    for {set id 0} {$id < $quorum} {incr id} {\n        kill_instance sentinel $id\n    }\n\n    # Kill the current master\n    kill_instance redis $master_id\n\n    # Make sure failover did not happen.\n    set addr [S $quorum SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n    restart_instance redis $master_id\n\n    # Cleanup: restart Sentinels to monitor the master.\n    for {set id 0} {$id < $quorum} {incr id} {\n        restart_instance sentinel $id\n    }\n}\n\ntest \"Failover works if we configure for absolute agreement\" {\n    foreach_sentinel_id id {\n        S $id SENTINEL SET mymaster quorum $sentinels\n    }\n\n    # Wait for Sentinels to monitor the master again\n    foreach_sentinel_id id {\n        wait_for_condition 1000 100 {\n            [dict get [S $id SENTINEL MASTER mymaster] info-refresh] < 100000\n        } else {\n            fail \"At least one Sentinel is not monitoring the master\"\n        }\n    }\n\n    kill_instance redis $master_id\n\n    foreach_sentinel_id id {\n        wait_for_condition 1000 100 {\n            [lindex [S 
$id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n        } else {\n            fail \"At least one Sentinel did not receive failover info\"\n        }\n    }\n    restart_instance redis $master_id\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n\n    # Set the min ODOWN agreement back to strict majority.\n    foreach_sentinel_id id {\n        S $id SENTINEL SET mymaster quorum $quorum\n    }\n}\n\ntest \"New master [join $addr {:}] role matches\" {\n    assert {[RI $master_id role] eq {master}}\n}\n\ntest \"SENTINEL RESET can reset the master\" {\n    # After SENTINEL RESET, sometimes the sentinel can sense the master again,\n    # causing the test to fail. Here we give it a few more chances.\n    for {set j 0} {$j < 10} {incr j} {\n        assert_equal 1 [S 0 SENTINEL RESET mymaster]\n        set res1 [llength [S 0 SENTINEL SENTINELS mymaster]]\n        set res2 [llength [S 0 SENTINEL SLAVES mymaster]]\n        set res3 [llength [S 0 SENTINEL REPLICAS mymaster]]\n        if {$res1 eq 0 && $res2 eq 0 && $res3 eq 0} break\n    }\n    assert_equal 0 $res1\n    assert_equal 0 $res2\n    assert_equal 0 $res3\n}\n"
  },
  {
    "path": "tests/sentinel/tests/01-conf-update.tcl",
    "content": "# Test Sentinel configuration consistency after partitions heal.\n\nsource \"../tests/includes/init-tests.tcl\"\n\ntest \"We can failover with Sentinel 1 crashed\" {\n    set old_port [RPort $master_id]\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n\n    # Crash Sentinel 1\n    kill_instance sentinel 1\n\n    kill_instance redis $master_id\n    foreach_sentinel_id id {\n        if {$id != 1} {\n            wait_for_condition 1000 50 {\n                [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n            } else {\n                fail \"Sentinel $id did not receive failover info\"\n            }\n        }\n    }\n    restart_instance redis $master_id\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n}\n\ntest \"After Sentinel 1 is restarted, its config gets updated\" {\n    restart_instance sentinel 1\n    wait_for_condition 1000 50 {\n        [lindex [S 1 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n    } else {\n        fail \"Restarted Sentinel did not receive failover info\"\n    }\n}\n\ntest \"New master [join $addr {:}] role matches\" {\n    assert {[RI $master_id role] eq {master}}\n}\n\ntest \"Update log level\" {\n    set current_loglevel [S 0 SENTINEL CONFIG GET loglevel]\n    assert {[lindex $current_loglevel 1] == {notice}}\n\n    foreach {loglevel} {debug verbose notice warning nothing} {\n        S 0 SENTINEL CONFIG SET loglevel $loglevel\n        set updated_loglevel [S 0 SENTINEL CONFIG GET loglevel]\n        assert {[lindex $updated_loglevel 1] == $loglevel}\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/02-slaves-reconf.tcl",
    "content": "# Check that slaves are reconfigured at a latter time if they are partitioned.\n#\n# Here we should test:\n# 1) That slaves point to the new master after failover.\n# 2) That partitioned slaves point to new master when they are partitioned\n#    away during failover and return at a latter time.\n\nsource \"../tests/includes/init-tests.tcl\"\n\nproc 02_test_slaves_replication {} {\n    uplevel 1 {\n        test \"Check that slaves replicate from current master\" {\n            set master_port [RPort $master_id]\n            foreach_redis_id id {\n                if {$id == $master_id} continue\n                if {[instance_is_killed redis $id]} continue\n                wait_for_condition 1000 50 {\n                    ([RI $id master_port] == $master_port) &&\n                    ([RI $id master_link_status] eq {up})\n                } else {\n                    fail \"Redis slave $id is replicating from wrong master\"\n                }\n            }\n        }\n    }\n}\n\nproc 02_crash_and_failover {} {\n    uplevel 1 {\n        test \"Crash the master and force a failover\" {\n            set old_port [RPort $master_id]\n            set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n            assert {[lindex $addr 1] == $old_port}\n            kill_instance redis $master_id\n            foreach_sentinel_id id {\n                wait_for_condition 1000 50 {\n                    [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n                } else {\n                    fail \"At least one Sentinel did not receive failover info\"\n                }\n            }\n            restart_instance redis $master_id\n            set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n            set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n        }\n    }\n}\n\n02_test_slaves_replication\n02_crash_and_failover\n\nforeach_sentinel_id id {\n    S $id sentinel debug info-period 100\n    S $id 
sentinel debug default-down-after 1000\n    S $id sentinel debug publish-period 100\n}\n\n02_test_slaves_replication\n\ntest \"Kill a slave instance\" {\n    foreach_redis_id id {\n        if {$id == $master_id} continue\n        set killed_slave_id $id\n        kill_instance redis $id\n        break\n    }\n}\n\n02_crash_and_failover\n02_test_slaves_replication\n\ntest \"Wait for failover to end\" {\n    set inprogress 1\n    while {$inprogress} {\n        set inprogress 0\n        foreach_sentinel_id id {\n            if {[dict exists [S $id SENTINEL MASTER mymaster] failover-state]} {\n                incr inprogress\n            }\n        }\n        if {$inprogress} {after 100}\n    }\n}\n\ntest \"Restart killed slave and test replication of slaves again...\" {\n    restart_instance redis $killed_slave_id\n}\n\n# Now we check if the slave rejoining the partition is reconfigured even\n# if the failover finished.\n02_test_slaves_replication\n"
  },
  {
    "path": "tests/sentinel/tests/03-runtime-reconf.tcl",
    "content": "# Test runtime reconfiguration command SENTINEL SET.\nsource \"../tests/includes/init-tests.tcl\"\nset num_sentinels [llength $::sentinel_instances]\n\nset ::user \"testuser\"\nset ::password \"secret\"\n\nproc server_set_password {} {\n    foreach_redis_id id {\n        assert_equal {OK} [R $id CONFIG SET requirepass $::password]\n        assert_equal {OK} [R $id AUTH $::password]\n        assert_equal {OK} [R $id CONFIG SET masterauth $::password]\n    }\n}\n\nproc server_reset_password {} {\n    foreach_redis_id id {\n        assert_equal {OK} [R $id CONFIG SET requirepass \"\"]\n        assert_equal {OK} [R $id CONFIG SET masterauth \"\"]\n    }\n}\n\nproc server_set_acl {id} {\n    assert_equal {OK} [R $id ACL SETUSER $::user on >$::password allchannels +@all]\n    assert_equal {OK} [R $id ACL SETUSER default off]\n\n    R $id CLIENT KILL USER default SKIPME no\n    assert_equal {OK} [R $id AUTH $::user $::password]\n    assert_equal {OK} [R $id CONFIG SET masteruser $::user]\n    assert_equal {OK} [R $id CONFIG SET masterauth $::password]\n}\n\nproc server_reset_acl {id} {\n    assert_equal {OK} [R $id ACL SETUSER default on]\n    assert_equal {1} [R $id ACL DELUSER $::user]\n\n    assert_equal {OK} [R $id CONFIG SET masteruser \"\"]\n    assert_equal {OK} [R $id CONFIG SET masterauth \"\"]\n}\n\nproc verify_sentinel_connect_replicas {id} {\n    foreach replica [S $id SENTINEL REPLICAS mymaster] {\n        if {[string match \"*disconnected*\" [dict get $replica flags]]} {\n            return 0\n        }\n    }\n    return 1\n}\n\nproc wait_for_sentinels_connect_servers { {is_connect 1} } {\n    foreach_sentinel_id id {\n        wait_for_condition 1000 50 {\n            [string match \"*disconnected*\" [dict get [S $id SENTINEL MASTER mymaster] flags]] != $is_connect\n        } else {\n            fail \"At least some sentinel can't connect to master\"\n        }\n\n        wait_for_condition 1000 50 {\n            
[verify_sentinel_connect_replicas $id] == $is_connect\n        } else {\n            fail \"At least some sentinel can't connect to replica\"\n        }\n    }\n}\n\ntest \"Sentinels (re)connection following SENTINEL SET mymaster auth-pass\" {\n    # 3 types of sentinels to test:\n    # (re)started while master changed pwd. Manage to connect only after setting pwd\n    set sent2re 0\n    # (up)dated in advance with master new password\n    set sent2up 1\n    # (un)touched. Yet manage to maintain (old) connection\n    set sent2un 2\n\n    wait_for_sentinels_connect_servers\n    kill_instance sentinel $sent2re\n    server_set_password\n    assert_equal {OK} [S $sent2up SENTINEL SET mymaster auth-pass $::password]\n    restart_instance sentinel $sent2re\n\n    # Verify sentinel that restarted failed to connect master\n    wait_for_condition 100 50 {\n        [string match \"*disconnected*\" [dict get [S $sent2re SENTINEL MASTER mymaster] flags]] != 0\n    } else {\n        fail \"Expected to be disconnected from master due to wrong password\"\n    }\n\n    # Update restarted sentinel with master password\n    assert_equal {OK} [S $sent2re SENTINEL SET mymaster auth-pass $::password]\n\n    # All sentinels expected to connect successfully\n    wait_for_sentinels_connect_servers\n\n    # remove requirepass and verify sentinels manage to connect servers\n    server_reset_password\n    wait_for_sentinels_connect_servers\n    # Sanity check\n    verify_sentinel_auto_discovery\n}\n\ntest \"Sentinels (re)connection following master ACL change\" {\n    # Three types of sentinels to test during ACL change:\n    # 1. (re)started Sentinel. Manage to connect only after setting new pwd\n    # 2. (up)dated Sentinel, get just before ACL change the new password\n    # 3. 
(un)touched Sentinel that kept old connection with master and didn't\n    #    set new ACL password won't persist ACL pwd change (unlike legacy auth-pass)\n    set sent2re 0\n    set sent2up 1\n    set sent2un 2\n\n    wait_for_sentinels_connect_servers\n    # kill sentinel 'sent2re' and restart it after ACL change\n    kill_instance sentinel $sent2re\n\n    # Update sentinel 'sent2up' with new user and pwd\n    assert_equal {OK} [S $sent2up SENTINEL SET mymaster auth-user $::user]\n    assert_equal {OK} [S $sent2up SENTINEL SET mymaster auth-pass $::password]\n\n    foreach_redis_id id {\n        server_set_acl $id\n    }\n\n    restart_instance sentinel $sent2re\n\n    # Verify that the restarted sentinel failed to reconnect to the master\n    wait_for_condition 100 50 {\n        [string match \"*disconnected*\" [dict get [S $sent2re SENTINEL MASTER mymaster] flags]] != 0\n    } else {\n        fail \"Expected: Restarted sentinel to be disconnected from master due to obsolete password\"\n    }\n\n    # Verify sentinel with updated password managed to connect (wait for sentinelTimer to reconnect)\n    wait_for_condition 100 50 {\n        [string match \"*disconnected*\" [dict get [S $sent2up SENTINEL MASTER mymaster] flags]] == 0\n    } else {\n        fail \"Expected: Sentinel to be connected to master\"\n    }\n\n    # Verify that the untouched sentinel failed to connect to the master\n    wait_for_condition 100 50 {\n        [string match \"*disconnected*\" [dict get [S $sent2un SENTINEL MASTER mymaster] flags]] != 0\n    } else {\n        fail \"Expected: Sentinel to be disconnected from master due to obsolete password\"\n    }\n\n    # Now update all sentinels with new password\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id SENTINEL SET mymaster auth-user $::user]\n        assert_equal {OK} [S $id SENTINEL SET mymaster auth-pass $::password]\n    }\n\n    # All sentinels expected to connect successfully\n    wait_for_sentinels_connect_servers\n\n    # remove 
requirepass and verify sentinels manage to connect servers\n    foreach_redis_id id {\n        server_reset_acl $id\n    }\n\n    wait_for_sentinels_connect_servers\n    # Sanity check\n    verify_sentinel_auto_discovery\n}\n\ntest \"Set parameters in normal case\" {\n\n    set info [S 0 SENTINEL master mymaster]\n    set origin_quorum [dict get $info quorum]\n    set origin_down_after_milliseconds [dict get $info down-after-milliseconds]\n    set update_quorum [expr $origin_quorum+1]\n    set update_down_after_milliseconds [expr $origin_down_after_milliseconds+1000]\n\n    assert_equal [S 0 SENTINEL SET mymaster quorum $update_quorum] \"OK\"\n    assert_equal [S 0 SENTINEL SET mymaster down-after-milliseconds $update_down_after_milliseconds] \"OK\"\n\n    set update_info [S 0 SENTINEL master mymaster]\n    assert {[dict get $update_info quorum] != $origin_quorum}\n    assert {[dict get $update_info down-after-milliseconds] != $origin_down_after_milliseconds}\n\n    #restore to origin config parameters\n    assert_equal [S 0 SENTINEL SET mymaster quorum $origin_quorum] \"OK\"\n    assert_equal [S 0 SENTINEL SET mymaster down-after-milliseconds $origin_down_after_milliseconds] \"OK\"\n}\n\ntest \"Set parameters in normal case with bad format\" {\n\n    set info [S 0 SENTINEL master mymaster]\n    set origin_down_after_milliseconds [dict get $info down-after-milliseconds]\n\n    assert_error \"ERR Invalid argument '-20' for SENTINEL SET 'down-after-milliseconds'*\" {S 0 SENTINEL SET mymaster down-after-milliseconds -20}\n    assert_error \"ERR Invalid argument 'abc' for SENTINEL SET 'down-after-milliseconds'*\" {S 0 SENTINEL SET mymaster down-after-milliseconds \"abc\"}\n\n    set current_info [S 0 SENTINEL master mymaster]\n    assert {[dict get $current_info down-after-milliseconds] == $origin_down_after_milliseconds}\n}\n\ntest \"Sentinel Set with other error situations\" {\n\n   # non-existing script\n   assert_error \"ERR Notification script seems non 
existing*\" {S 0 SENTINEL SET mymaster notification-script test.txt}\n\n   # wrong parameter number\n   assert_error \"ERR wrong number of arguments for 'sentinel|set' command\" {S 0 SENTINEL SET mymaster fakeoption}\n\n   # unknown parameter option\n   assert_error \"ERR Unknown option or number of arguments for SENTINEL SET 'fakeoption'\" {S 0 SENTINEL SET mymaster fakeoption fakevalue}\n\n   # save new config to disk failed\n   set info [S 0 SENTINEL master mymaster]\n   set origin_quorum [dict get $info quorum]\n   set update_quorum [expr $origin_quorum+1]\n   set sentinel_id 0\n   set configfilename [file join \"sentinel_$sentinel_id\" \"sentinel.conf\"]\n   set configfilename_bak [file join \"sentinel_$sentinel_id\" \"sentinel.conf.bak\"]\n\n   file rename $configfilename $configfilename_bak\n   file mkdir $configfilename\n\n   catch {[S 0 SENTINEL SET mymaster quorum $update_quorum]} err\n\n   file delete $configfilename\n   file rename $configfilename_bak $configfilename\n\n   assert_match \"ERR Failed to save config file*\" $err\n}\n"
  },
  {
    "path": "tests/sentinel/tests/04-slave-selection.tcl",
    "content": "# Test slave selection algorithm.\n#\n# This unit should test:\n# 1) That when there are no suitable slaves no failover is performed.\n# 2) That among the available slaves, the one with better offset is picked.\n"
  },
  {
    "path": "tests/sentinel/tests/05-manual.tcl",
    "content": "# Test manual failover\n\nsource \"../tests/includes/init-tests.tcl\"\n\nforeach_sentinel_id id {\n    S $id sentinel debug info-period 2000\n    S $id sentinel debug default-down-after 6000\n    S $id sentinel debug publish-period 1000\n}\n\ntest \"Manual failover works\" {\n    set old_port [RPort $master_id]\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n\n    # Since we reduced the info-period (default 10000) above immediately,\n    # sentinel - replica may not have enough time to exchange INFO and update\n    # the replica's info-period, so the test may get a NOGOODSLAVE.\n    wait_for_condition 300 50 {\n        [catch {S 0 SENTINEL FAILOVER mymaster}] == 0\n    } else {\n        catch {S 0 SENTINEL FAILOVER mymaster} reply\n        puts [S 0 SENTINEL REPLICAS mymaster]\n        fail \"Sentinel manual failover did not work, got: $reply\"\n    }\n\n    catch {S 0 SENTINEL FAILOVER mymaster} reply\n    assert_match {*INPROG*} $reply ;# Failover already in progress\n\n    foreach_sentinel_id id {\n        wait_for_condition 1000 50 {\n            [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n        } else {\n            fail \"At least one Sentinel did not receive failover info\"\n        }\n    }\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n}\n\ntest \"New master [join $addr {:}] role matches\" {\n    assert {[RI $master_id role] eq {master}}\n}\n\ntest \"All the other slaves now point to the new master\" {\n    foreach_redis_id id {\n        if {$id != $master_id && $id != 0} {\n            wait_for_condition 1000 50 {\n                [RI $id master_port] == [lindex $addr 1]\n            } else {\n                fail \"Redis ID $id not configured to replicate with new master\"\n            }\n        }\n    }\n}\n\ntest \"The old master eventually gets reconfigured as a 
slave\" {\n    wait_for_condition 1000 50 {\n        [RI 0 master_port] == [lindex $addr 1]\n    } else {\n        fail \"Old master not reconfigured as slave of new master\"\n    }\n}\n\nforeach flag {crash-after-election crash-after-promotion} {\n    # Before each SIMULATE-FAILURE test, re-source init-tests to get a clean environment\n    source \"../tests/includes/init-tests.tcl\"\n\n    test \"SENTINEL SIMULATE-FAILURE $flag works\" {\n        assert_equal {OK} [S 0 SENTINEL SIMULATE-FAILURE $flag]\n\n        # Trigger a failover, failover will trigger leader election, replica promotion\n        # Sentinel may enter failover and exit before the command, catch it and allow it\n        wait_for_condition 300 50 {\n            [catch {S 0 SENTINEL FAILOVER mymaster}] == 0\n            ||\n            ([catch {S 0 SENTINEL FAILOVER mymaster} reply] == 1 &&\n            [string match {*couldn't open socket: connection refused*} $reply])\n        } else {\n            catch {S 0 SENTINEL FAILOVER mymaster} reply\n            fail \"Sentinel manual failover did not work, got: $reply\"\n        }\n\n        # Wait for sentinel to exit (due to simulate-failure flags)\n        wait_for_condition 1000 50 {\n            [catch {S 0 PING}] == 1\n        } else {\n            fail \"Sentinel set $flag but did not exit\"\n        }\n        assert_error {*couldn't open socket: connection refused*} {S 0 PING}\n\n        restart_instance sentinel 0\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/06-ckquorum.tcl",
    "content": "# Test for the SENTINEL CKQUORUM command\n\nsource \"../tests/includes/init-tests.tcl\"\nset num_sentinels [llength $::sentinel_instances]\n\ntest \"CKQUORUM reports OK and the right amount of Sentinels\" {\n    foreach_sentinel_id id {\n        assert_match \"*OK $num_sentinels usable*\" [S $id SENTINEL CKQUORUM mymaster]\n    }\n}\n\ntest \"CKQUORUM detects quorum cannot be reached\" {\n    set orig_quorum [expr {$num_sentinels/2+1}]\n    S 0 SENTINEL SET mymaster quorum [expr {$num_sentinels+1}]\n    catch {[S 0 SENTINEL CKQUORUM mymaster]} err\n    assert_match \"*NOQUORUM*\" $err\n    S 0 SENTINEL SET mymaster quorum $orig_quorum\n}\n\ntest \"CKQUORUM detects failover authorization cannot be reached\" {\n    set orig_quorum [expr {$num_sentinels/2+1}]\n    S 0 SENTINEL SET mymaster quorum 1\n    for {set i 0} {$i < $orig_quorum} {incr i} {\n        kill_instance sentinel [expr {$i + 1}]\n    }\n\n    # We need to make sure that other sentinels are in `DOWN` state\n    # from the point of view of S 0 before we executing `CKQUORUM`.\n    wait_for_condition 300 50 {\n        [catch {S 0 SENTINEL CKQUORUM mymaster}] == 1\n    } else {\n        fail \"At least $orig_quorum sentinels did not enter the down state.\"\n    }\n\n    assert_error \"*NOQUORUM*\" {S 0 SENTINEL CKQUORUM mymaster}\n\n    S 0 SENTINEL SET mymaster quorum $orig_quorum\n    for {set i 0} {$i < $orig_quorum} {incr i} {\n        restart_instance sentinel [expr {$i + 1}]\n    }\n}\n\n"
  },
  {
    "path": "tests/sentinel/tests/07-down-conditions.tcl",
    "content": "# Test conditions where an instance is considered to be down\n\nsource \"../tests/includes/init-tests.tcl\"\nsource \"../../../tests/support/cli.tcl\"\n\nforeach_sentinel_id id {\n    S $id sentinel debug info-period 1000\n    S $id sentinel debug ask-period 100\n    S $id sentinel debug default-down-after 3000\n    S $id sentinel debug publish-period 200\n    S $id sentinel debug ping-period 100\n}\n\nset ::alive_sentinel [expr {$::instances_count/2+2}]\nproc ensure_master_up {} {\n    S $::alive_sentinel sentinel debug info-period 1000\n    S $::alive_sentinel sentinel debug ping-period 100\n    S $::alive_sentinel sentinel debug ask-period 100\n    S $::alive_sentinel sentinel debug publish-period 100\n    wait_for_condition 1000 50 {\n        [dict get [S $::alive_sentinel sentinel master mymaster] flags] eq \"master\"\n    } else {\n        fail \"Master flags are not just 'master'\"\n    }\n}\n\nproc ensure_master_down {} {\n    S $::alive_sentinel sentinel debug info-period 1000\n    S $::alive_sentinel sentinel debug ping-period 100\n    S $::alive_sentinel sentinel debug ask-period 100\n    S $::alive_sentinel sentinel debug publish-period 100\n    wait_for_condition 1000 50 {\n        [string match *down* \\\n            [dict get [S $::alive_sentinel sentinel master mymaster] flags]]\n    } else {\n        fail \"Master is not flagged SDOWN\"\n    }\n}\n\ntest \"Crash the majority of Sentinels to prevent failovers for this unit\" {\n    for {set id 0} {$id < $quorum} {incr id} {\n        kill_instance sentinel $id\n    }\n}\n\ntest \"SDOWN is triggered by non-responding but not crashed instance\" {\n    ensure_master_up\n    set master_addr [S $::alive_sentinel SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $master_addr 1]]\n\n    set pid [get_instance_attrib redis $master_id pid]\n    pause_process $pid\n    ensure_master_down\n    resume_process $pid\n    ensure_master_up\n}\n\ntest 
\"SDOWN is triggered by crashed instance\" {\n    lassign [S $::alive_sentinel SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] host port\n    ensure_master_up\n    kill_instance redis 0\n    ensure_master_down\n    restart_instance redis 0\n    ensure_master_up\n}\n\ntest \"SDOWN is triggered by masters advertising as slaves\" {\n    ensure_master_up\n    R 0 slaveof 127.0.0.1 34567\n    ensure_master_down\n    R 0 slaveof no one\n    ensure_master_up\n}\n\nif {!$::log_req_res} { # this test changes 'dir' config to '/' and logreqres.c cannot open protocol dump file under the root directory.\ntest \"SDOWN is triggered by misconfigured instance replying with errors\" {\n    ensure_master_up\n    set orig_dir [lindex [R 0 config get dir] 1]\n    set orig_save [lindex [R 0 config get save] 1]\n    # Set dir to / and filename to \"tmp\" to make sure it will fail.\n    R 0 config set dir /\n    R 0 config set dbfilename tmp\n    R 0 config set save \"1000000 1000000\"\n    after 5000\n    R 0 bgsave\n    after 5000\n    ensure_master_down\n    R 0 config set save $orig_save\n    R 0 config set dir $orig_dir\n    R 0 config set dbfilename dump.rdb\n    R 0 bgsave\n    ensure_master_up\n}\n}\n\n# We use this test setup to also test command renaming, as a side\n# effect of the master going down if we send PONG instead of PING\ntest \"SDOWN is triggered if we rename PING to PONG\" {\n    ensure_master_up\n    S $::alive_sentinel SENTINEL SET mymaster rename-command PING PONG\n    ensure_master_down\n    S $::alive_sentinel SENTINEL SET mymaster rename-command PING PING\n    ensure_master_up\n}\n"
  },
  {
    "path": "tests/sentinel/tests/08-hostname-conf.tcl",
    "content": "source \"../tests/includes/utils.tcl\"\n\nproc set_redis_announce_ip {addr} {\n    foreach_redis_id id {\n        R $id config set replica-announce-ip $addr\n    }\n}\n\nproc set_sentinel_config {keyword value} {\n    foreach_sentinel_id id {\n        S $id sentinel config set $keyword $value\n    }\n}\n\nproc set_all_instances_hostname {hostname} {\n    foreach_sentinel_id id {\n        set_instance_attrib sentinel $id host $hostname\n    }\n    foreach_redis_id id {\n        set_instance_attrib redis $id host $hostname\n    }\n}\n\ntest \"(pre-init) Configure instances and sentinel for hostname use\" {\n    set ::host \"localhost\"\n    restart_killed_instances\n    set_all_instances_hostname $::host\n    set_redis_announce_ip $::host\n    set_sentinel_config resolve-hostnames yes\n    set_sentinel_config announce-hostnames yes\n}\n\nsource \"../tests/includes/init-tests.tcl\"\n\nproc verify_hostname_announced {hostname} {\n    foreach_sentinel_id id {\n        # Master is reported with its hostname\n        if {![string equal [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 0] $hostname]} {\n            return 0\n        }\n\n        # Replicas are reported with their hostnames\n        foreach replica [S $id SENTINEL REPLICAS mymaster] {\n            if {![string equal [dict get $replica ip] $hostname]} {\n                return 0\n            }\n        }\n    }\n    return 1\n}\n\ntest \"Sentinel announces hostnames\" {\n    # Check initial state\n    verify_hostname_announced $::host\n\n    # Disable announce-hostnames and confirm IPs are used\n    set_sentinel_config announce-hostnames no\n    assert {[verify_hostname_announced \"127.0.0.1\"] || [verify_hostname_announced \"::1\"]}\n}\n\n# We need to revert any special configuration because all tests currently\n# share the same instances.\ntest \"(post-cleanup) Configure instances and sentinel for IPs\" {\n    set ::host \"127.0.0.1\"\n    set_all_instances_hostname $::host\n    
set_redis_announce_ip $::host\n    set_sentinel_config resolve-hostnames no\n    set_sentinel_config announce-hostnames no\n}"
  },
  {
    "path": "tests/sentinel/tests/09-acl-support.tcl",
    "content": "\nsource \"../tests/includes/init-tests.tcl\"\n\nset ::user \"testuser\"\nset ::password \"secret\"\n\nproc setup_acl {} {\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id ACL SETUSER $::user >$::password +@all on]\n        assert_equal {OK} [S $id ACL SETUSER default off]\n\n        S $id CLIENT KILL USER default SKIPME no\n        assert_equal {OK} [S $id AUTH $::user $::password]\n    }\n}\n\nproc teardown_acl {} {\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id ACL SETUSER default on]\n        assert_equal {1} [S $id ACL DELUSER $::user]\n\n        S $id SENTINEL CONFIG SET sentinel-user \"\"\n        S $id SENTINEL CONFIG SET sentinel-pass \"\"\n    }\n}\n\ntest \"(post-init) Set up ACL configuration\" {\n    setup_acl\n    assert_equal $::user [S 1 ACL WHOAMI]\n}\n\ntest \"SENTINEL CONFIG SET handles on-the-fly credentials reconfiguration\" {\n    # Make sure we're starting with a broken state...\n    wait_for_condition 200 50 {\n        [catch {S 1 SENTINEL CKQUORUM mymaster}] == 1\n    } else {\n        fail \"Expected: Sentinel to be disconnected from master due to wrong password\"\n    }\n    assert_error \"*NOQUORUM*\" {S 1 SENTINEL CKQUORUM mymaster}\n\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id SENTINEL CONFIG SET sentinel-user $::user]\n        assert_equal {OK} [S $id SENTINEL CONFIG SET sentinel-pass $::password]\n    }\n\n    wait_for_condition 200 50 {\n        [catch {S 1 SENTINEL CKQUORUM mymaster}] == 0\n    } else {\n         fail \"Expected: Sentinel to be connected to master after setting password\"\n    }\n    assert_match {*OK*} [S 1 SENTINEL CKQUORUM mymaster]\n}\n\ntest \"(post-cleanup) Tear down ACL configuration\" {\n    teardown_acl\n}\n"
  },
  {
    "path": "tests/sentinel/tests/10-replica-priority.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"Check acceptable replica-priority values\" {\n    foreach_redis_id id {\n        if {$id == $master_id} continue\n\n        # ensure replica-announced accepts yes and no\n        catch {R $id CONFIG SET replica-announced no} e\n        if {$e ne \"OK\"} {\n            fail \"Unable to set replica-announced to no\"\n        }\n        catch {R $id CONFIG SET replica-announced yes} e\n        if {$e ne \"OK\"} {\n            fail \"Unable to set replica-announced to yes\"\n        }\n\n        # ensure a random value throw error\n        catch {R $id CONFIG SET replica-announced 321} e\n        if {$e eq \"OK\"} {\n            fail \"Able to set replica-announced with something else than yes or no (321) whereas it should not be possible\"\n        }\n        catch {R $id CONFIG SET replica-announced a3b2c1} e\n        if {$e eq \"OK\"} {\n            fail \"Able to set replica-announced with something else than yes or no (a3b2c1) whereas it should not be possible\"\n        }\n\n        # test only the first redis replica, no need to double test\n        break\n    }\n}\n\nproc 10_test_number_of_replicas {n_replicas_expected} {\n    test \"Check sentinel replies with $n_replicas_expected replicas\" {\n        # ensure sentinels replies with the right number of replicas\n        foreach_sentinel_id id {\n            S $id sentinel debug info-period 100\n            S $id sentinel debug default-down-after 1000\n            S $id sentinel debug publish-period 100\n            set len [llength [S $id SENTINEL REPLICAS mymaster]]\n            wait_for_condition 200 100 {\n                [llength [S $id SENTINEL REPLICAS mymaster]] == $n_replicas_expected\n            } else {\n                fail \"Sentinel replies with a wrong number of replicas with replica-announced=yes (expected $n_replicas_expected but got $len) on sentinel $id\"\n            }\n        }\n    }\n}\n\nproc 
10_set_replica_announced {master_id announced n_replicas} {\n    test \"Set replica-announced=$announced on $n_replicas replicas\" {\n        set i 0\n        foreach_redis_id id {\n            if {$id == $master_id} continue\n            #puts \"set replica-announce=$announced on redis #$id\"\n            R $id CONFIG SET replica-announced \"$announced\"\n            incr i\n            if { $n_replicas!=\"all\" && $i >= $n_replicas } { break }\n        }\n    }\n}\n\n# ensure all replicas are announced\n10_set_replica_announced $master_id \"yes\" \"all\"\n# ensure all replicas are announced by sentinels\n10_test_number_of_replicas 4\n\n# ensure the first 2 replicas are not announced\n10_set_replica_announced $master_id \"no\" 2\n# ensure sentinels are not announcing the first 2 replicas that have been set unannounced\n10_test_number_of_replicas 2\n\n# ensure all replicas are announced\n10_set_replica_announced $master_id \"yes\" \"all\"\n# ensure all replicas are announced by sentinels again\n10_test_number_of_replicas 4\n\n"
  },
  {
    "path": "tests/sentinel/tests/11-port-0.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"Start/Stop sentinel on same port with a different runID should not change the total number of sentinels\" {\n        set sentinel_id [expr $::instances_count-1]\n        # Kill sentinel instance\n        kill_instance sentinel $sentinel_id\n\n        # Delete line with myid in sentinels config file\n        set orgfilename [file join \"sentinel_$sentinel_id\" \"sentinel.conf\"]\n        set tmpfilename \"sentinel.conf_tmp\"\n        set dirname \"sentinel_$sentinel_id\"\n\n        delete_lines_with_pattern $orgfilename $tmpfilename \"myid\"\n\n        # Get count of total sentinels\n        set a [S 0 SENTINEL  master mymaster]\n        set original_count [lindex $a 33]\n\n        # Restart sentinel with the modified config file\n        set pid [exec_instance \"sentinel\" $dirname $orgfilename]\n        lappend ::pids $pid\n\n        after 1000\n\n        # Get new count of total sentinel\n        set b [S 0 SENTINEL master mymaster]\n        set curr_count [lindex $b 33]\n\n        # If the count is not the same then fail the test\n        if {$original_count != $curr_count} {\n                fail \"Sentinel count is incorrect, original count being $original_count and current count is $curr_count\"\n        }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/12-master-reboot.tcl",
    "content": "# Check the basic monitoring and failover capabilities.\nsource \"../tests/includes/init-tests.tcl\"\n\n\nif {$::simulate_error} {\n    test \"This test will fail\" {\n        fail \"Simulated error\"\n    }\n}\n\n\n# Reboot an instance previously in very short time but do not check if it is loading\nproc reboot_instance {type id} {\n    set dirname \"${type}_${id}\"\n    set cfgfile [file join $dirname $type.conf]\n    set port [get_instance_attrib $type $id port]\n\n    # Execute the instance with its old setup and append the new pid\n    # file for cleanup.\n    set pid [exec_instance $type $dirname $cfgfile]\n    set_instance_attrib $type $id pid $pid\n    lappend ::pids $pid\n\n    # Check that the instance is running\n    if {[server_is_up 127.0.0.1 $port 100] == 0} {\n        set logfile [file join $dirname log.txt]\n        puts [exec tail $logfile]\n        abort_sentinel_test \"Problems starting $type #$id: ping timeout, maybe server start failed, check $logfile\"\n    }\n\n    # Connect with it with a fresh link\n    set link [redis 127.0.0.1 $port 0 $::tls]\n    $link reconnect 1\n    set_instance_attrib $type $id link $link\n}\n\n\ntest \"Master reboot in very short time\" {\n    set old_port [RPort $master_id]\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    assert {[lindex $addr 1] == $old_port}\n    \n    R $master_id debug populate 20000\n    R $master_id bgsave\n    R $master_id config set key-load-delay 1500\n    R $master_id config set loading-process-events-interval-bytes 1024\n    R $master_id config rewrite\n\n    foreach_sentinel_id id {\n        S $id SENTINEL SET mymaster master-reboot-down-after-period 5000\n        S $id sentinel debug ping-period 500\n        S $id sentinel debug ask-period 500 \n    }\n\n    kill_instance redis $master_id\n    reboot_instance redis $master_id\n    \n    set max_tries 1000\n    if {$::tsan} {\n        set max_tries 5000\n    }\n\n    foreach_sentinel_id id {        \n   
     wait_for_condition $max_tries 100 {\n            [lindex [S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster] 1] != $old_port\n        } else {\n            fail \"At least one Sentinel did not receive failover info\"\n        }\n    }\n\n    # Reset configuration to avoid unnecessary delays\n    R $master_id config set key-load-delay 0\n    R $master_id config rewrite\n\n    # Remember the rebooted (old master) instance id before it is reassigned\n    set rebooted_id $master_id\n\n    set addr [S 0 SENTINEL GET-MASTER-ADDR-BY-NAME mymaster]\n    set master_id [get_instance_id_by_port redis [lindex $addr 1]]\n\n    # Make sure the rebooted instance loads the whole dataset\n    while 1 {\n        catch {R $rebooted_id ping} retval\n        if {[string match {*LOADING*} $retval]} {\n            after 100\n            continue\n        } else {\n            break\n        }\n    }\n}\n\ntest \"New master [join $addr {:}] role matches\" {\n    assert {[RI $master_id role] eq {master}}\n}\n\ntest \"All the other slaves now point to the new master\" {\n    foreach_redis_id id {\n        if {$id != $master_id && $id != 0} {\n            wait_for_condition 1000 50 {\n                [RI $id master_port] == [lindex $addr 1]\n            } else {\n                fail \"Redis ID $id not configured to replicate with new master\"\n            }\n        }\n    }\n}\n\ntest \"The old master eventually gets reconfigured as a slave\" {\n    wait_for_condition 1000 50 {\n        [RI 0 master_port] == [lindex $addr 1]\n    } else {\n        fail \"Old master not reconfigured as slave of new master\"\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/13-info-command.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"info command with at most one argument\" {\n    set subCommandList {}\n    foreach arg {\"\" \"all\" \"default\" \"everything\"} {\n        if {$arg == \"\"} {\n            set info [S 0 info]\n        } else {\n            set info [S 0 info $arg]\n        }\n        assert { [string match \"*redis_version*\" $info] }\n        assert { [string match \"*maxclients*\" $info] }\n        assert { [string match \"*used_cpu_user*\" $info] }\n        assert { [string match \"*sentinel_tilt*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n        assert { ![string match \"*rdb_last_bgsave*\" $info] }\n        assert { ![string match \"*master_repl_offset*\" $info] }\n        assert { ![string match \"*cluster_enabled*\" $info] }\n    }\n}\n\ntest \"info command with one sub-section\" {\n    set info [S 0 info cpu]\n    assert { [string match \"*used_cpu_user*\" $info] }\n    assert { ![string match \"*sentinel_tilt*\" $info] }\n    assert { ![string match \"*redis_version*\" $info] }\n\n    set info [S 0 info sentinel]\n    assert { [string match \"*sentinel_tilt*\" $info] }\n    assert { ![string match \"*used_cpu_user*\" $info] }\n    assert { ![string match \"*redis_version*\" $info] }\n}\n\ntest \"info command with multiple sub-sections\" {\n    set info [S 0 info server sentinel replication]\n    assert { [string match \"*redis_version*\" $info] }\n    assert { [string match \"*sentinel_tilt*\" $info] }\n    assert { ![string match \"*used_memory*\" $info] }\n    assert { ![string match \"*used_cpu_user*\" $info] }\n\n    set info [S 0 info cpu all]\n    assert { [string match \"*used_cpu_user*\" $info] }\n    assert { [string match \"*sentinel_tilt*\" $info] }\n    assert { [string match \"*redis_version*\" $info] }\n    assert { ![string match \"*used_memory*\" $info] }\n    assert { ![string match \"*master_repl_offset*\" $info] }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/14-debug-command.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"Sentinel debug test with arguments and without argument\" {\n   set current_info_period [lindex [S 0 SENTINEL DEBUG] 1]\n   S 0 SENTINEL DEBUG info-period 8888\n   assert_equal {8888} [lindex [S 0 SENTINEL DEBUG] 1] \n   S 0 SENTINEL DEBUG info-period $current_info_period\n   assert_equal $current_info_period [lindex [S 0 SENTINEL DEBUG] 1]\n}\n"
  },
  {
    "path": "tests/sentinel/tests/15-config-set-config-get.tcl",
    "content": "source \"../tests/includes/init-tests.tcl\"\n\ntest \"SENTINEL CONFIG SET and SENTINEL CONFIG GET handles multiple variables\" {\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id SENTINEL CONFIG SET resolve-hostnames yes announce-port 1234]\n    }\n    assert_match {*yes*1234*} [S 1 SENTINEL CONFIG GET resolve-hostnames announce-port]\n    assert_match {announce-port 1234} [S 1 SENTINEL CONFIG GET announce-port]\n}\n\ntest \"SENTINEL CONFIG GET for duplicate and unknown variables\" {\n    assert_equal {OK} [S 1 SENTINEL CONFIG SET resolve-hostnames yes announce-port 1234]\n    assert_match {resolve-hostnames yes} [S 1 SENTINEL CONFIG GET resolve-hostnames resolve-hostnames does-not-exist]\n}\n\ntest \"SENTINEL CONFIG GET for patterns\" {\n    assert_equal {OK} [S 1 SENTINEL CONFIG SET loglevel notice announce-port 1234 announce-hostnames yes ]\n    assert_match {loglevel notice} [S 1 SENTINEL CONFIG GET log* *level loglevel]\n    assert_match {announce-hostnames yes announce-ip*announce-port 1234} [S 1 SENTINEL CONFIG GET announce*]\n}\n\ntest \"SENTINEL CONFIG SET duplicate variables\" {\n    catch {[S 1 SENTINEL CONFIG SET resolve-hostnames yes announce-port 1234 announce-port 100]} e\n    if {![string match \"*Duplicate argument*\" $e]} {\n        fail \"Should give wrong arity error\"\n    }\n}\n\ntest \"SENTINEL CONFIG SET, one option does not exist\" {\n    foreach_sentinel_id id {\n        assert_equal {OK} [S $id SENTINEL CONFIG SET announce-port 111]\n        catch {[S $id SENTINEL CONFIG SET does-not-exist yes announce-port 1234]} e\n        if {![string match \"*Invalid argument*\" $e]} {\n            fail \"Should give Invalid argument error\"\n        }\n    }\n    # The announce-port should not be set to 1234 as it was called with a wrong argument\n    assert_match {*111*} [S 1 SENTINEL CONFIG GET announce-port]\n}\n\ntest \"SENTINEL CONFIG SET, one option with wrong value\" {\n    foreach_sentinel_id id {\n        
assert_equal {OK} [S $id SENTINEL CONFIG SET resolve-hostnames no]\n        catch {[S $id SENTINEL CONFIG SET announce-port -1234 resolve-hostnames yes]} e\n        if {![string match \"*Invalid value*\" $e]} {\n            fail \"Expected to return Invalid value error\"\n        }\n    }\n    # The resolve-hostnames should not be set to yes as it was called after an argument with an invalid value\n    assert_match {*no*} [S 1 SENTINEL CONFIG GET resolve-hostnames]\n}\n\ntest \"SENTINEL CONFIG SET, wrong number of arguments\" {\n    catch {[S 1 SENTINEL CONFIG SET resolve-hostnames yes announce-port 1234 announce-ip]} e\n    if {![string match \"*Missing argument*\" $e]} {\n        fail \"Expected to return Missing argument error\"\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/helpers/check_leaked_fds.tcl",
    "content": "#!/usr/bin/env tclsh\n#\n# This script detects file descriptors that have leaked from a parent process.\n#\n# Our goal is to detect file descriptors that were opened by the parent and\n# not cleaned up prior to exec(), but not file descriptors that were inherited\n# from the grandparent which the parent knows nothing about. To do that, we\n# look up every potential leak and try to match it against open files by the\n# grandparent process.\n\n# Get PID of parent process\nproc get_parent_pid {_pid} {\n    set fd [open \"/proc/$_pid/status\" \"r\"]\n    set content [read $fd]\n    close $fd\n\n    if {[regexp {\\nPPid:\\s+(\\d+)} $content _ ppid]} {\n        return $ppid\n    }\n\n    error \"failed to get parent pid\"\n}\n\n# Read symlink to get info about the specified fd of the specified process.\n# The result can be the file name or an arbitrary string that identifies it.\n# When not able to read, an empty string is returned.\nproc get_fdlink {_pid fd} {\n    if { [catch {set fdlink [file readlink \"/proc/$_pid/fd/$fd\"]} err] } {\n        return \"\"\n    }\n    return $fdlink\n}\n\n# Linux only\nset os [exec uname]\nif {$os != \"Linux\"} {\n    puts \"Only Linux is supported.\"\n    exit 0\n}\n\nif {![info exists env(LEAKED_FDS_FILE)]} {\n    puts \"Missing LEAKED_FDS_FILE environment variable.\"\n    exit 0\n}\n\nset outfile $::env(LEAKED_FDS_FILE)\nset parent_pid [get_parent_pid [pid]]\nset grandparent_pid [get_parent_pid $parent_pid]\nset leaked_fds {}\n\n# Look for fds that were directly inherited from our parent but not from\n# our grandparent (tcl)\nforeach fd [glob -tails -directory \"/proc/self/fd\" *] {\n    # Ignore stdin/stdout/stderr\n    if {$fd == 0 || $fd == 1 || $fd == 2} {\n        continue\n    }\n\n    set fdlink [get_fdlink \"self\" $fd]\n    if {$fdlink == \"\"} {\n        continue\n    }\n\n    # We ignore fds that existed in the grandparent, or fds that don't exist\n    # in our parent (Sentinel process).\n    if 
{[get_fdlink $grandparent_pid $fd] == $fdlink ||\n\t[get_fdlink $parent_pid $fd] != $fdlink} {\n        continue\n    }\n\n    lappend leaked_fds [list $fd $fdlink]\n}\n\n# Produce report only if we found leaks\nif {[llength $leaked_fds] > 0} {\n    set fd [open $outfile \"w\"]\n    puts $fd [join $leaked_fds \"\\n\"]\n    close $fd\n}\n"
  },
  {
    "path": "tests/sentinel/tests/includes/init-tests.tcl",
    "content": "# Initialization tests -- most units will start including this.\nsource \"../tests/includes/utils.tcl\"\n\ntest \"(init) Restart killed instances\" {\n    restart_killed_instances\n}\n\ntest \"(init) Remove old master entry from sentinels\" {\n    foreach_sentinel_id id {\n        catch {S $id SENTINEL REMOVE mymaster}\n    }\n}\n\nset redis_slaves [expr $::instances_count - 1]\ntest \"(init) Create a master-slaves cluster of [expr $redis_slaves+1] instances\" {\n    create_redis_master_slave_cluster [expr {$redis_slaves+1}]\n}\nset master_id 0\n\ntest \"(init) Sentinels can start monitoring a master\" {\n    set sentinels [llength $::sentinel_instances]\n    set quorum [expr {$sentinels/2+1}]\n    foreach_sentinel_id id {\n        S $id SENTINEL MONITOR mymaster \\\n              [get_instance_attrib redis $master_id host] \\\n              [get_instance_attrib redis $master_id port] $quorum\n    }\n    foreach_sentinel_id id {\n        assert {[S $id sentinel master mymaster] ne {}}\n        S $id SENTINEL SET mymaster down-after-milliseconds 2000\n        S $id SENTINEL SET mymaster failover-timeout 10000\n        S $id SENTINEL debug tilt-period 5000\n        S $id SENTINEL SET mymaster parallel-syncs 10\n        if {$::leaked_fds_file != \"\" && [exec uname] == \"Linux\"} {\n            S $id SENTINEL SET mymaster notification-script ../../tests/helpers/check_leaked_fds.tcl\n            S $id SENTINEL SET mymaster client-reconfig-script ../../tests/helpers/check_leaked_fds.tcl\n        }\n    }\n}\n\ntest \"(init) Sentinels can talk with the master\" {\n    foreach_sentinel_id id {\n        wait_for_condition 1000 50 {\n            [catch {S $id SENTINEL GET-MASTER-ADDR-BY-NAME mymaster}] == 0\n        } else {\n            fail \"Sentinel $id can't talk with the master.\"\n        }\n    }\n}\n\ntest \"(init) Sentinels are able to auto-discover other sentinels\" {\n    verify_sentinel_auto_discovery\n}\n\ntest \"(init) Sentinels are able to 
auto-discover slaves\" {\n    foreach_sentinel_id id {\n        wait_for_condition 1000 50 {\n            [dict get [S $id SENTINEL MASTER mymaster] num-slaves] == $redis_slaves\n        } else {\n            fail \"At least some sentinel can't detect some slave\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tests/includes/sentinel.conf",
    "content": "# assume master is down after being unresponsive for 20s\nsentinel down-after-milliseconds setmaster 20000\n# reconfigure one slave at a time\nsentinel parallel-syncs setmaster 2\n# wait for 4m before assuming failover went wrong\nsentinel failover-timeout setmaster 240000\n# monitoring set\nsentinel monitor setmaster 10.0.0.1 30000 2\n\n"
  },
  {
    "path": "tests/sentinel/tests/includes/start-init-tests.tcl",
    "content": "test \"(start-init) Flush config and compare rewrite config file lines\" {\n    foreach_sentinel_id id {\n        assert_match \"OK\" [S $id SENTINEL FLUSHCONFIG]\n        set file1 ../tests/includes/sentinel.conf\n        set file2 [file join \"sentinel_${id}\" \"sentinel.conf\"] \n        set fh1 [open $file1 r]\n        set fh2 [open $file2 r]\n        while {[gets $fh1 line1]} {\n            if {[gets $fh2 line2]} {\n                assert [string equal $line1 $line2]\n            } else {\n                fail \"sentinel config file rewrite sequence changed\"\n            }\n        }\n        close $fh1\n        close $fh2  \n    }\n}"
  },
  {
    "path": "tests/sentinel/tests/includes/utils.tcl",
    "content": "proc restart_killed_instances {} {\n    foreach type {redis sentinel} {\n        foreach_${type}_id id {\n            if {[get_instance_attrib $type $id pid] == -1} {\n                puts -nonewline \"$type/$id \"\n                flush stdout\n                restart_instance $type $id\n            }\n        }\n    }\n}\n\nproc verify_sentinel_auto_discovery {} {\n    set sentinels [llength $::sentinel_instances]\n    foreach_sentinel_id id {\n        wait_for_condition 1000 50 {\n            [dict get [S $id SENTINEL MASTER mymaster] num-other-sentinels] == ($sentinels-1)\n        } else {\n            fail \"At least some sentinel can't detect some other sentinel\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/sentinel/tmp/.gitignore",
    "content": "redis_*\nsentinel_*\n"
  },
  {
    "path": "tests/support/aofmanifest.tcl",
    "content": "set ::base_aof_sufix \".base\"\nset ::incr_aof_sufix \".incr\"\nset ::manifest_suffix \".manifest\"\nset ::aof_format_suffix \".aof\"\nset ::rdb_format_suffix \".rdb\"\n\nproc get_full_path {dir filename} {\n    set _ [format \"%s/%s\" $dir $filename]\n}\n\nproc join_path {dir1 dir2} {\n    return [format \"%s/%s\" $dir1 $dir2]\n}\n\nproc get_redis_dir {} {\n    set config [srv config]\n    set _ [dict get $config \"dir\"]\n}\n\nproc check_file_exist {dir filename} {\n    set file_path [get_full_path $dir $filename]\n    return [file exists $file_path]\n}\n\nproc del_file {dir filename} {\n    set file_path [get_full_path $dir $filename]\n    catch {exec rm -rf $file_path}\n}\n\nproc get_cur_base_aof_name {manifest_filepath} {\n    set fp [open $manifest_filepath r+]\n    set lines {}\n    while {1} {\n        set line [gets $fp]\n        if {[eof $fp]} {\n           close $fp\n           break;\n        }\n\n        lappend lines $line\n    }\n\n    if {[llength $lines] == 0} {\n        return \"\"\n    }\n\n    set first_line [lindex $lines 0]\n    set aofname [lindex [split $first_line \" \"] 1]\n    set aoftype [lindex [split $first_line \" \"] 5]\n    if { $aoftype eq \"b\" } {\n        return $aofname\n    }\n\n    return \"\"\n}\n\nproc get_last_incr_aof_name {manifest_filepath} {\n    set fp [open $manifest_filepath r+]\n    set lines {}\n    while {1} {\n        set line [gets $fp]\n        if {[eof $fp]} {\n           close $fp\n           break;\n        }\n\n        lappend lines $line\n    }\n\n    if {[llength $lines] == 0} {\n        return \"\"\n    }\n\n    set len [llength $lines]\n    set last_line [lindex $lines [expr $len - 1]]\n    set aofname [lindex [split $last_line \" \"] 1]\n    set aoftype [lindex [split $last_line \" \"] 5]\n    if { $aoftype eq \"i\" } {\n        return $aofname\n    }\n\n    return \"\"\n}\n\nproc get_last_incr_aof_path {r} {\n    set dir [lindex [$r config get dir] 1]\n    set appenddirname [lindex 
[$r config get appenddirname] 1]\n    set appendfilename [lindex [$r config get appendfilename] 1]\n    set manifest_filepath [file join $dir $appenddirname $appendfilename$::manifest_suffix]\n    set last_incr_aof_name [get_last_incr_aof_name $manifest_filepath]\n    if {$last_incr_aof_name == \"\"} {\n        return \"\"\n    }\n    return [file join $dir $appenddirname $last_incr_aof_name]\n}\n\nproc get_base_aof_path {r} {\n    set dir [lindex [$r config get dir] 1]\n    set appenddirname [lindex [$r config get appenddirname] 1]\n    set appendfilename [lindex [$r config get appendfilename] 1]\n    set manifest_filepath [file join $dir $appenddirname $appendfilename$::manifest_suffix]\n    set cur_base_aof_name [get_cur_base_aof_name $manifest_filepath]\n    if {$cur_base_aof_name == \"\"} {\n        return \"\"\n    }\n    return [file join $dir $appenddirname $cur_base_aof_name]\n}\n\nproc assert_aof_manifest_content {manifest_path content} {\n    set fp [open $manifest_path r+]\n    set lines {}\n    while {1} {\n        set line [gets $fp]\n        if {[eof $fp]} {\n           close $fp\n           break;\n        }\n\n        lappend lines $line\n    }\n\n    assert_equal [llength $lines] [llength $content]\n\n    for { set i 0 } { $i < [llength $lines] } {incr i} {\n        assert {[string first [lindex $content $i] [lindex $lines $i]] != -1}\n    }\n}\n\nproc clean_aof_persistence {aof_dirpath} {\n    catch {eval exec rm -rf [glob $aof_dirpath]}\n}\n\nproc append_to_manifest {str} {\n    upvar fp fp\n    puts -nonewline $fp $str\n}\n\nproc create_aof_manifest {dir aof_manifest_file code} {\n    create_aof_dir $dir\n    upvar fp fp\n    set fp [open $aof_manifest_file w+]\n    uplevel 1 $code\n    close $fp\n}\n\nproc append_to_aof {str} {\n    upvar fp fp\n    puts -nonewline $fp $str\n}\n\nproc create_aof {dir aof_file code} {\n    create_aof_dir $dir\n    upvar fp fp\n    set fp [open $aof_file w+]\n    uplevel 1 $code\n    close $fp\n}\n\nproc 
create_aof_dir {dir_path} {\n    file mkdir $dir_path\n}\n\nproc start_server_aof {overrides code} {\n    upvar defaults defaults srv srv server_path server_path aof_basename aof_basename aof_dirpath aof_dirpath aof_manifest_file aof_manifest_file aof_manifest_file2 aof_manifest_file2\n    set config [concat $defaults $overrides]\n    start_server [list overrides $config keep_persistence true] $code\n}\n\nproc start_server_aof_ex {overrides options code} {\n    upvar defaults defaults srv srv server_path server_path\n    set config [concat $defaults $overrides]\n    start_server [concat [list overrides $config keep_persistence true] $options] $code\n}\n"
  },
  {
    "path": "tests/support/benchmark.tcl",
    "content": "proc redisbenchmark_tls_config {testsdir} {\n    set tlsdir [file join $testsdir tls]\n    set cert [file join $tlsdir client.crt]\n    set key [file join $tlsdir client.key]\n    set cacert [file join $tlsdir ca.crt]\n\n    if {$::tls} {\n        return [list --tls --cert $cert --key $key --cacert $cacert]\n    } else {\n        return {}\n    }\n}\n\nproc redisbenchmark {host port {opts {}}} {\n    set cmd [list src/redis-benchmark -h $host -p $port]\n    lappend cmd {*}[redisbenchmark_tls_config \"tests\"]\n    lappend cmd {*}$opts\n    return $cmd\n}\n\nproc redisbenchmarkuri {host port {opts {}}} {\n    set cmd [list src/redis-benchmark -u redis://$host:$port]\n    lappend cmd {*}[redisbenchmark_tls_config \"tests\"]\n    lappend cmd {*}$opts\n    return $cmd\n}\n\nproc redisbenchmarkuriuserpass {host port user pass {opts {}}} {\n    set cmd [list src/redis-benchmark -u redis://$user:$pass@$host:$port]\n    lappend cmd {*}[redisbenchmark_tls_config \"tests\"]\n    lappend cmd {*}$opts\n    return $cmd\n}\n"
  },
  {
    "path": "tests/support/cli.tcl",
    "content": "proc rediscli_tls_config {testsdir} {\n    set tlsdir [file join $testsdir tls]\n    set cert [file join $tlsdir client.crt]\n    set key [file join $tlsdir client.key]\n    set cacert [file join $tlsdir ca.crt]\n\n    if {$::tls} {\n        return [list --tls --cert $cert --key $key --cacert $cacert]\n    } else {\n        return {}\n    }\n}\n\n# Returns command line for executing redis-cli\nproc rediscli {host port {opts {}}} {\n    set cmd [list src/redis-cli -h $host -p $port]\n    lappend cmd {*}[rediscli_tls_config \"tests\"]\n    lappend cmd {*}$opts\n    return $cmd\n}\n\n# Returns command line for executing redis-cli with a unix socket address\nproc rediscli_unixsocket {unixsocket {opts {}}} {\n    return [list src/redis-cli -s $unixsocket {*}$opts]\n}\n\n# Run redis-cli with specified args on the server of specified level.\n# Returns output broken down into individual lines.\nproc rediscli_exec {level args} {\n    set cmd [rediscli_unixsocket [srv $level unixsocket] $args]\n    set fd [open \"|$cmd\" \"r\"]\n    set ret [lrange [split [read $fd] \"\\n\"] 0 end-1]\n    close $fd\n\n    return $ret\n}\n"
  },
  {
    "path": "tests/support/cluster.tcl",
    "content": "# Tcl redis cluster client as a wrapper of redis.rb.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Example usage:\n#\n# set c [redis_cluster {127.0.0.1:6379 127.0.0.1:6380}]\n# $c set foo\n# $c get foo\n# $c close\n\npackage require Tcl 8.5-10\npackage provide redis_cluster 0.1\n\nnamespace eval redis_cluster {}\nset ::redis_cluster::internal_id 0\nset ::redis_cluster::id 0\narray set ::redis_cluster::startup_nodes {}\narray set ::redis_cluster::nodes {}\narray set ::redis_cluster::slots {}\narray set ::redis_cluster::tls {}\n\n# List of \"plain\" commands, which are commands where the sole key is always\n# the first argument.\nset ::redis_cluster::plain_commands {\n    get set setnx setex psetex append strlen exists setbit getbit\n    setrange getrange substr incr decr rpush lpush rpushx lpushx\n    linsert rpop lpop brpop llen lindex lset lrange ltrim lrem\n    sadd srem sismember smismember scard spop srandmember smembers sscan zadd\n    zincrby zrem zremrangebyscore zremrangebyrank zremrangebylex zrange\n    zrangebyscore zrevrangebyscore zrangebylex zrevrangebylex zcount\n    zlexcount zrevrange zcard zscore zmscore zrank zrevrank zscan hset hsetnx\n    hget hmset hmget hincrby hincrbyfloat hdel hlen hkeys hvals\n    hgetall hexists hscan incrby decrby incrbyfloat getset move\n    expire expireat pexpire pexpireat type ttl pttl persist restore\n    dump bitcount bitpos pfadd pfcount cluster ssubscribe spublish\n    sunsubscribe\n}\n\n# Create a cluster client. The nodes are given as a list of host:port. 
The TLS\n# parameter (1 or 0) is optional and defaults to the global $::tls.\nproc redis_cluster {nodes {tls -1}} {\n    set id [incr ::redis_cluster::id]\n    set ::redis_cluster::startup_nodes($id) $nodes\n    set ::redis_cluster::nodes($id) {}\n    set ::redis_cluster::slots($id) {}\n    set ::redis_cluster::tls($id) [expr $tls == -1 ? $::tls : $tls]\n    set handle [interp alias {} ::redis_cluster::instance$id {} ::redis_cluster::__dispatch__ $id]\n    $handle refresh_nodes_map\n    return $handle\n}\n\n# Totally reset the slots / nodes state for the client, calls\n# CLUSTER NODES in the first startup node available, populates the\n# list of nodes ::redis_cluster::nodes($id) with an hash mapping node\n# ip:port to a representation of the node (another hash), and finally\n# maps ::redis_cluster::slots($id) with an hash mapping slot numbers\n# to node IDs.\n#\n# This function is called when a new Redis Cluster client is initialized\n# and every time we get a -MOVED redirection error.\nproc ::redis_cluster::__method__refresh_nodes_map {id} {\n    # Contact the first responding startup node.\n    set idx 0; # Index of the node that will respond.\n    set errmsg {}\n    foreach start_node $::redis_cluster::startup_nodes($id) {\n        set ip_port [lindex [split $start_node @] 0]\n        lassign [split $ip_port :] start_host start_port\n        set tls $::redis_cluster::tls($id)\n        if {[catch {\n            set r {}\n            set r [redis $start_host $start_port 0 $tls]\n            set nodes_descr [$r cluster nodes]\n            $r close\n        } e]} {\n            if {$r ne {}} {catch {$r close}}\n            incr idx\n            if {[string length $errmsg] < 200} {\n                append errmsg \" $ip_port: $e\"\n            }\n            continue ; # Try next.\n        } else {\n            break; # Good node found.\n        }\n    }\n\n    if {$idx == [llength $::redis_cluster::startup_nodes($id)]} {\n        error \"No good startup node found. 
$errmsg\"\n    }\n\n    # Put the node that responded as first in the list if it is not\n    # already the first.\n    if {$idx != 0} {\n        set l $::redis_cluster::startup_nodes($id)\n        set left [lrange $l 0 [expr {$idx-1}]]\n        set right [lrange $l [expr {$idx+1}] end]\n        set l [concat [lindex $l $idx] $left $right]\n        set ::redis_cluster::startup_nodes($id) $l\n    }\n\n    # Parse CLUSTER NODES output to populate the nodes description.\n    set nodes {} ; # addr -> node description hash.\n    foreach line [split $nodes_descr \"\\n\"] {\n        set line [string trim $line]\n        if {$line eq {}} continue\n        set args [split $line \" \"]\n        lassign $args nodeid addr flags slaveof pingsent pongrecv configepoch linkstate\n        set slots [lrange $args 8 end]\n        set addr [lindex [split $addr @] 0]\n        if {$addr eq {:0}} {\n            set addr $start_host:$start_port\n        }\n        lassign [split $addr :] host port\n\n        # Connect to the node\n        set link {}\n        set tls $::redis_cluster::tls($id)\n        catch {set link [redis $host $port 0 $tls]}\n\n        # Build this node description as an hash.\n        set node [dict create \\\n            id $nodeid \\\n            internal_id $id \\\n            addr $addr \\\n            host $host \\\n            port $port \\\n            flags $flags \\\n            slaveof $slaveof \\\n            slots $slots \\\n            link $link \\\n        ]\n        dict set nodes $addr $node\n        lappend ::redis_cluster::startup_nodes($id) $addr\n    }\n\n    # Close all the existing links in the old nodes map, and set the new\n    # map as current.\n    foreach n $::redis_cluster::nodes($id) {\n        catch {\n            [dict get $n link] close\n        }\n    }\n    set ::redis_cluster::nodes($id) $nodes\n\n    # Populates the slots -> nodes map.\n    dict for {addr node} $nodes {\n        foreach slotrange [dict get $node slots] {\n          
  lassign [split $slotrange -] start end\n            if {$end == {}} {set end $start}\n            for {set j $start} {$j <= $end} {incr j} {\n                dict set ::redis_cluster::slots($id) $j $addr\n            }\n        }\n    }\n\n    # Only retain unique entries in the startup nodes list\n    set ::redis_cluster::startup_nodes($id) [lsort -unique $::redis_cluster::startup_nodes($id)]\n}\n\n# Free a redis_cluster handle.\nproc ::redis_cluster::__method__close {id} {\n    catch {\n        set nodes $::redis_cluster::nodes($id)\n        dict for {addr node} $nodes {\n            catch {\n                [dict get $node link] close\n            }\n        }\n    }\n    catch {unset ::redis_cluster::startup_nodes($id)}\n    catch {unset ::redis_cluster::nodes($id)}\n    catch {unset ::redis_cluster::slots($id)}\n    catch {unset ::redis_cluster::tls($id)}\n    catch {interp alias {} ::redis_cluster::instance$id {}}\n}\n\nproc ::redis_cluster::__method__masternode_for_slot {id slot} {\n    # Get the node mapped to this slot.\n    set node_addr [dict get $::redis_cluster::slots($id) $slot]\n    if {$node_addr eq {}} {\n        error \"No mapped node for slot $slot.\"\n    }\n    return [dict get $::redis_cluster::nodes($id) $node_addr]\n}\n\nproc ::redis_cluster::__method__masternode_notfor_slot {id slot} {\n    # Get a node that is not mapped to this slot.\n    set node_addr [dict get $::redis_cluster::slots($id) $slot]\n    set addrs [dict keys $::redis_cluster::nodes($id)]\n    foreach addr [lshuffle $addrs] {\n        set node [dict get $::redis_cluster::nodes($id) $addr]\n        if {$node_addr ne $addr && [dict get $node slaveof] eq \"-\"} {\n            return $node\n        }\n    }\n    error \"Slot $slot is everywhere\"\n}\n\nproc ::redis_cluster::__dispatch__ {id method args} {\n    if {[info command ::redis_cluster::__method__$method] eq {}} {\n        # Get the keys from the command.\n        set keys [::redis_cluster::get_keys_from_command 
$method $args]\n        if {$keys eq {}} {\n            error \"Redis command '$method' is not supported by redis_cluster.\"\n        }\n\n        # Resolve the keys in the corresponding hash slot they hash to.\n        set slot [::redis_cluster::get_slot_from_keys $keys]\n        if {$slot eq {}} {\n            error \"Invalid command: multiple keys not hashing to the same slot.\"\n        }\n\n        # Get the node mapped to this slot.\n        set node_addr [dict get $::redis_cluster::slots($id) $slot]\n        if {$node_addr eq {}} {\n            error \"No mapped node for slot $slot.\"\n        }\n\n        # Execute the command in the node we think is the slot owner.\n        set retry 100\n        set asking 0\n        while {[incr retry -1]} {\n            if {$retry < 5} {after 100}\n            set node [dict get $::redis_cluster::nodes($id) $node_addr]\n            set link [dict get $node link]\n            if {$asking} {\n                $link ASKING\n                set asking 0\n            }\n            if {[catch {$link $method {*}$args} e]} {\n                if {$link eq {} || \\\n                    [string range $e 0 4] eq {MOVED} || \\\n                    [string range $e 0 2] eq {I/O} \\\n                } {\n                    # MOVED redirection.\n                    ::redis_cluster::__method__refresh_nodes_map $id\n                    set node_addr [dict get $::redis_cluster::slots($id) $slot]\n                    continue\n                } elseif {[string range $e 0 2] eq {ASK}} {\n                    # ASK redirection.\n                    set node_addr [lindex $e 2]\n                    set asking 1\n                    continue\n                } else {\n                    # Non redirecting error.\n                    error $e $::errorInfo $::errorCode\n                }\n            } else {\n                # OK query went fine\n                return $e\n            }\n        }\n        error \"Too many redirections or 
failures contacting Redis Cluster.\"\n    } else {\n        uplevel 1 [list ::redis_cluster::__method__$method $id] $args\n    }\n}\n\nproc ::redis_cluster::get_keys_from_command {cmd argv} {\n    set cmd [string tolower $cmd]\n    # Most Redis commands get just one key as first argument.\n    if {[lsearch -exact $::redis_cluster::plain_commands $cmd] != -1} {\n        return [list [lindex $argv 0]]\n    }\n\n    # Special handling for other commands\n    switch -exact $cmd {\n        mget {return $argv}\n        eval {return [lrange $argv 2 1+[lindex $argv 1]]}\n        evalsha {return [lrange $argv 2 1+[lindex $argv 1]]}\n        spublish {return [list [lindex $argv 1]]}\n    }\n\n    # All the remaining commands are not handled.\n    return {}\n}\n\n# Returns the CRC16 of the specified string.\n# The CRC parameters are described in the Redis Cluster specification.\nset ::redis_cluster::XMODEMCRC16Lookup {\n    0x0000 0x1021 0x2042 0x3063 0x4084 0x50a5 0x60c6 0x70e7\n    0x8108 0x9129 0xa14a 0xb16b 0xc18c 0xd1ad 0xe1ce 0xf1ef\n    0x1231 0x0210 0x3273 0x2252 0x52b5 0x4294 0x72f7 0x62d6\n    0x9339 0x8318 0xb37b 0xa35a 0xd3bd 0xc39c 0xf3ff 0xe3de\n    0x2462 0x3443 0x0420 0x1401 0x64e6 0x74c7 0x44a4 0x5485\n    0xa56a 0xb54b 0x8528 0x9509 0xe5ee 0xf5cf 0xc5ac 0xd58d\n    0x3653 0x2672 0x1611 0x0630 0x76d7 0x66f6 0x5695 0x46b4\n    0xb75b 0xa77a 0x9719 0x8738 0xf7df 0xe7fe 0xd79d 0xc7bc\n    0x48c4 0x58e5 0x6886 0x78a7 0x0840 0x1861 0x2802 0x3823\n    0xc9cc 0xd9ed 0xe98e 0xf9af 0x8948 0x9969 0xa90a 0xb92b\n    0x5af5 0x4ad4 0x7ab7 0x6a96 0x1a71 0x0a50 0x3a33 0x2a12\n    0xdbfd 0xcbdc 0xfbbf 0xeb9e 0x9b79 0x8b58 0xbb3b 0xab1a\n    0x6ca6 0x7c87 0x4ce4 0x5cc5 0x2c22 0x3c03 0x0c60 0x1c41\n    0xedae 0xfd8f 0xcdec 0xddcd 0xad2a 0xbd0b 0x8d68 0x9d49\n    0x7e97 0x6eb6 0x5ed5 0x4ef4 0x3e13 0x2e32 0x1e51 0x0e70\n    0xff9f 0xefbe 0xdfdd 0xcffc 0xbf1b 0xaf3a 0x9f59 0x8f78\n    0x9188 0x81a9 0xb1ca 0xa1eb 0xd10c 0xc12d 0xf14e 0xe16f\n    0x1080 0x00a1 0x30c2 0x20e3 0x5004 
0x4025 0x7046 0x6067\n    0x83b9 0x9398 0xa3fb 0xb3da 0xc33d 0xd31c 0xe37f 0xf35e\n    0x02b1 0x1290 0x22f3 0x32d2 0x4235 0x5214 0x6277 0x7256\n    0xb5ea 0xa5cb 0x95a8 0x8589 0xf56e 0xe54f 0xd52c 0xc50d\n    0x34e2 0x24c3 0x14a0 0x0481 0x7466 0x6447 0x5424 0x4405\n    0xa7db 0xb7fa 0x8799 0x97b8 0xe75f 0xf77e 0xc71d 0xd73c\n    0x26d3 0x36f2 0x0691 0x16b0 0x6657 0x7676 0x4615 0x5634\n    0xd94c 0xc96d 0xf90e 0xe92f 0x99c8 0x89e9 0xb98a 0xa9ab\n    0x5844 0x4865 0x7806 0x6827 0x18c0 0x08e1 0x3882 0x28a3\n    0xcb7d 0xdb5c 0xeb3f 0xfb1e 0x8bf9 0x9bd8 0xabbb 0xbb9a\n    0x4a75 0x5a54 0x6a37 0x7a16 0x0af1 0x1ad0 0x2ab3 0x3a92\n    0xfd2e 0xed0f 0xdd6c 0xcd4d 0xbdaa 0xad8b 0x9de8 0x8dc9\n    0x7c26 0x6c07 0x5c64 0x4c45 0x3ca2 0x2c83 0x1ce0 0x0cc1\n    0xef1f 0xff3e 0xcf5d 0xdf7c 0xaf9b 0xbfba 0x8fd9 0x9ff8\n    0x6e17 0x7e36 0x4e55 0x5e74 0x2e93 0x3eb2 0x0ed1 0x1ef0\n}\n\nproc ::redis_cluster::crc16 {s} {\n    set s [encoding convertto ascii $s]\n    set crc 0\n    foreach char [split $s {}] {\n        scan $char %c byte\n        set crc [expr {(($crc<<8)&0xffff) ^ [lindex $::redis_cluster::XMODEMCRC16Lookup [expr {(($crc>>8)^$byte) & 0xff}]]}]\n    }\n    return $crc\n}\n\n# Hash a single key returning the slot it belongs to, Implemented hash\n# tags as described in the Redis Cluster specification.\nproc ::redis_cluster::hash {key} {\n    set keylen [string length $key]\n    set s {}\n    set e {}\n    for {set s 0} {$s < $keylen} {incr s} {\n        if {[string index $key $s] eq \"\\{\"} break\n    }\n\n    if {[expr {$s == $keylen}]} {\n        set res [expr {[crc16 $key] & 16383}]\n        return $res\n    }\n\n    for {set e [expr {$s+1}]} {$e < $keylen} {incr e} {\n        if {[string index $key $e] == \"\\}\"} break\n    }\n\n    if {$e == $keylen || $e == [expr {$s+1}]} {\n        set res [expr {[crc16 $key] & 16383}]\n        return $res\n    }\n\n    set key_sub [string range $key [expr {$s+1}] [expr {$e-1}]]\n    return [expr {[crc16 $key_sub] & 
16383}]\n}\n\n# Return the slot the specified keys hash to.\n# If the keys hash to multiple slots, an empty string is returned to\n# signal that the command can't be run in Redis Cluster.\nproc ::redis_cluster::get_slot_from_keys {keys} {\n    set slot {}\n    foreach k $keys {\n        set s [::redis_cluster::hash $k]\n        if {$slot eq {}} {\n            set slot $s\n        } elseif {$slot != $s} {\n            return {} ; # Error\n        }\n    }\n    return $slot\n}\n"
  },
  {
    "path": "tests/support/cluster_util.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n# Cluster helper functions\n# Normalize cluster slots configuration by sorting replicas by node ID\nproc normalize_cluster_slots {slots_config} {\n    set normalized {}\n    foreach slot_range $slots_config {\n        if {[llength $slot_range] <= 3} {\n            lappend normalized $slot_range\n        } else {\n            # Sort replicas (index 3+) by node ID, keep start/end/master unchanged\n            set replicas [lrange $slot_range 3 end]\n            set sorted_replicas [lsort -index 2 $replicas]\n            lappend normalized [concat [lrange $slot_range 0 2] $sorted_replicas]\n        }\n    }\n    return $normalized\n}\n\n# Check if cluster configuration is consistent.\nproc cluster_config_consistent {} {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        if {$j == 0} {\n            set base_cfg [R $j cluster slots]\n            set base_secret [R $j debug internal_secret]\n            set normalized_base_cfg [normalize_cluster_slots $base_cfg]\n        } else {\n            set cfg [R $j cluster slots]\n            set secret [R $j debug internal_secret]\n            set normalized_cfg [normalize_cluster_slots $cfg]\n            if {$normalized_cfg != $normalized_base_cfg || $secret != $base_secret} {\n                return 0\n            }\n        }\n    }\n\n    return 1\n}\n\n# Check if cluster size is consistent.\nproc cluster_size_consistent {cluster_size} {\n    for {set j 0} {$j < $cluster_size} {incr j} {\n        if {[CI $j cluster_known_nodes] ne 
$cluster_size} {\n            return 0\n        }\n    }\n    return 1\n}\n\n# Wait for cluster configuration to propagate and be consistent across nodes.\nproc wait_for_cluster_propagation {} {\n    wait_for_condition 50 100 {\n        [cluster_config_consistent] eq 1\n    } else {\n        fail \"cluster config did not reach a consistent state\"\n    }\n}\n\n# Wait for cluster size to be consistent across nodes.\nproc wait_for_cluster_size {cluster_size} {\n    wait_for_condition 1000 50 {\n        [cluster_size_consistent $cluster_size] eq 1\n    } else {\n        fail \"cluster size did not reach a consistent size $cluster_size\"\n    }\n}\n\n# Check that cluster nodes agree about \"state\", or raise an error.\nproc wait_for_cluster_state {state} {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        wait_for_condition 100 50 {\n            [CI $j cluster_state] eq $state\n        } else {\n            fail \"Cluster node $j cluster_state:[CI $j cluster_state]\"\n        }\n    }\n}\n\n# Default slot allocation for clusters, each master has a continuous block\n# and approximately equal number of slots.\nproc continuous_slot_allocation {masters} {\n    set avg [expr double(16384) / $masters]\n    set slot_start 0\n    for {set j 0} {$j < $masters} {incr j} {\n        set slot_end [expr int(ceil(($j + 1) * $avg) - 1)]\n        R $j cluster addslotsrange $slot_start $slot_end\n        set slot_start [expr $slot_end + 1]\n    }\n}\n\n# Setup method to be executed to configure the cluster before the\n# tests run.\nproc cluster_setup {masters node_count slot_allocator code} {\n    # Have all nodes meet\n    if {$::tls} {\n        set tls_cluster [lindex [R 0 CONFIG GET tls-cluster] 1]\n    }\n    if {$::tls && !$tls_cluster} {\n        for {set i 1} {$i < $node_count} {incr i} {\n            R 0 CLUSTER MEET [srv -$i host] [srv -$i pport]\n        }         \n    } else {\n        for {set i 1} {$i < $node_count} {incr i} {\n            R 0 CLUSTER MEET 
[srv -$i host] [srv -$i port]\n        }\n    }  \n\n    $slot_allocator $masters\n\n    wait_for_cluster_propagation\n\n    # Setup master/replica relationships\n    for {set i 0} {$i < $masters} {incr i} {\n        set nodeid [R $i CLUSTER MYID]\n        for {set j [expr $i + $masters]} {$j < $node_count} {incr j $masters} {\n            R $j CLUSTER REPLICATE $nodeid\n        }\n    }\n\n    wait_for_cluster_propagation\n    wait_for_cluster_state \"ok\"\n\n    uplevel 1 $code\n}\n\n# Start a cluster with the given number of masters and replicas. Replicas\n# will be allocated to masters by round robin.\nproc start_cluster {masters replicas options code {slot_allocator continuous_slot_allocation}} {\n    set ::cluster_master_nodes $masters\n    set ::cluster_replica_nodes $replicas\n    set node_count [expr $masters + $replicas]\n\n    # Set the final code to be the tests + cluster setup\n    set code [list cluster_setup $masters $node_count $slot_allocator $code]\n\n    # Configure the starting of multiple servers. Set cluster node timeout\n    # aggressively since many tests depend on ping/pong messages. \n    set cluster_options [list overrides [list cluster-enabled yes cluster-ping-interval 100 cluster-node-timeout 3000 cluster-slot-stats-enabled yes]]\n    set options [concat $cluster_options $options]\n\n    # Cluster mode only supports a single database, so before executing the tests\n    # it needs to be configured correctly and needs to be reset after the tests. 
\n    set old_singledb $::singledb\n    set ::singledb 1\n    start_multiple_servers $node_count $options $code\n    set ::singledb $old_singledb\n}\n\n# Test node for flag.\nproc cluster_has_flag {node flag} {\n    expr {[lsearch -exact [dict get $node flags] $flag] != -1}\n}\n\n# Returns the parsed \"myself\" node entry as a dictionary.\nproc cluster_get_myself id {\n    set nodes [get_cluster_nodes $id]\n    foreach n $nodes {\n        if {[cluster_has_flag $n myself]} {return $n}\n    }\n    return {}\n}\n\n# Returns a parsed CLUSTER NODES output as a list of dictionaries.\nproc get_cluster_nodes id {\n    set lines [split [R $id cluster nodes] \"\\r\\n\"]\n    set nodes {}\n    foreach l $lines {\n        set l [string trim $l]\n        if {$l eq {}} continue\n        set args [split $l]\n        set node [dict create \\\n            id [lindex $args 0] \\\n            addr [lindex $args 1] \\\n            flags [split [lindex $args 2] ,] \\\n            slaveof [lindex $args 3] \\\n            ping_sent [lindex $args 4] \\\n            pong_recv [lindex $args 5] \\\n            config_epoch [lindex $args 6] \\\n            linkstate [lindex $args 7] \\\n            slots [lrange $args 8 end] \\\n        ]\n        lappend nodes $node\n    }\n    return $nodes\n}\n\n# Returns 1 if no node knows node_id, 0 if any node knows it.\nproc node_is_forgotten {node_id} {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        set cluster_nodes [R $j CLUSTER NODES]\n        if { [string match \"*$node_id*\" $cluster_nodes] } {\n            return 0\n        }\n    }\n    return 1\n}\n\n# Isolate a node from the cluster and give it a new nodeid\nproc isolate_node {id} {\n    set node_id [R $id CLUSTER MYID]\n    R $id CLUSTER RESET HARD\n    # Here we additionally test that CLUSTER FORGET propagates to all nodes.\n    set other_id [expr $id == 0 ? 
1 : 0]\n    R $other_id CLUSTER FORGET $node_id\n    wait_for_condition 50 100 {\n        [node_is_forgotten $node_id]\n    } else {\n        fail \"CLUSTER FORGET was not propagated to all nodes\"\n    }\n}\n\n# Check if cluster's view of hostnames is consistent\nproc are_hostnames_propagated {match_string} {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        set cfg [R $j cluster slots]\n        foreach node $cfg {\n            for {set i 2} {$i < [llength $node]} {incr i} {\n                if {! [string match $match_string [lindex [lindex [lindex $node $i] 3] 1]] } {\n                    return 0\n                }\n            }\n        }\n    }\n    return 1\n}\n\nproc wait_node_marked_fail {ref_node_index instance_id_to_check} {\n    wait_for_condition 1000 50 {\n        [check_cluster_node_mark fail $ref_node_index $instance_id_to_check]\n    } else {\n        fail \"Replica node never marked as FAIL ('fail')\"\n    }\n}\n\nproc wait_node_marked_pfail {ref_node_index instance_id_to_check} {\n    wait_for_condition 1000 50 {\n        [check_cluster_node_mark fail\\? $ref_node_index $instance_id_to_check]\n    } else {\n        fail \"Replica node never marked as PFAIL ('fail?')\"\n    }\n}\n\nproc check_cluster_node_mark {flag ref_node_index instance_id_to_check} {\n    set nodes [get_cluster_nodes $ref_node_index]\n\n    foreach n $nodes {\n        if {[dict get $n id] eq $instance_id_to_check} {\n            return [cluster_has_flag $n $flag]\n        }\n    }\n    fail \"Unable to find instance id in cluster nodes. ID: $instance_id_to_check\"\n}\n"
  },
  {
    "path": "tests/support/redis.tcl",
    "content": "# Tcl client library - used by the Redis test\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Example usage:\n#\n# set r [redis 127.0.0.1 6379]\n# $r lpush mylist foo\n# $r lpush mylist bar\n# $r lrange mylist 0 -1\n# $r close\n#\n# Non blocking usage example:\n#\n# proc handlePong {r type reply} {\n#     puts \"PONG $type '$reply'\"\n#     if {$reply ne \"PONG\"} {\n#         $r ping [list handlePong]\n#     }\n# }\n#\n# set r [redis]\n# $r blocking 0\n# $r get fo [list handlePong]\n#\n# vwait forever\n\npackage require Tcl 8.5-10\npackage provide redis 0.1\n\nsource [file join [file dirname [info script]] \"response_transformers.tcl\"]\n\nnamespace eval redis {}\nset ::redis::id 0\narray set ::redis::fd {}\narray set ::redis::addr {}\narray set ::redis::blocking {}\narray set ::redis::deferred {}\narray set ::redis::readraw {}\narray set ::redis::attributes {} ;# Holds the RESP3 attributes from the last call\narray set ::redis::reconnect {}\narray set ::redis::tls {}\narray set ::redis::callback {}\narray set ::redis::state {} ;# State in non-blocking reply reading\narray set ::redis::statestack {} ;# Stack of states, for nested mbulks\narray set ::redis::curr_argv {} ;# Remember the current argv, to be used in response_transformers.tcl\narray set ::redis::testing_resp3 {} ;# Indicating if the current client is using RESP3 (only if the test is trying to test RESP3 specific behavior. 
It won't be on in case of force_resp3)\n\nset ::force_resp3 0\nset ::log_req_res 0\n\nproc redis {{server 127.0.0.1} {port 6379} {defer 0} {tls 0} {tlsoptions {}} {readraw 0}} {\n    if {$tls} {\n        package require tls\n        ::tls::init \\\n            -cafile \"$::tlsdir/ca.crt\" \\\n            -certfile \"$::tlsdir/client.crt\" \\\n            -keyfile \"$::tlsdir/client.key\" \\\n            {*}$tlsoptions\n        set fd [::tls::socket $server $port]\n    } else {\n        set fd [socket $server $port]\n    }\n    fconfigure $fd -translation binary\n    set id [incr ::redis::id]\n    set ::redis::fd($id) $fd\n    set ::redis::addr($id) [list $server $port]\n    set ::redis::blocking($id) 1\n    set ::redis::deferred($id) $defer\n    set ::redis::readraw($id) $readraw\n    set ::redis::reconnect($id) 0\n    set ::redis::curr_argv($id) 0\n    set ::redis::testing_resp3($id) 0\n    set ::redis::tls($id) $tls\n    ::redis::redis_reset_state $id\n    interp alias {} ::redis::redisHandle$id {} ::redis::__dispatch__ $id\n}\n\n# On recent versions of tcl-tls/OpenSSL, reading from a dropped connection\n# results with an error we need to catch and mimic the old behavior.\nproc ::redis::redis_safe_read {fd len} {\n    if {$len == -1} {\n        set err [catch {set val [read $fd]} msg]\n    } else {\n        set err [catch {set val [read $fd $len]} msg]\n    }\n    if {!$err} {\n        return $val\n    }\n    if {[string match \"*connection abort*\" $msg]} {\n        return {}\n    }\n    error $msg\n}\n\nproc ::redis::redis_safe_gets {fd} {\n    if {[catch {set val [gets $fd]} msg]} {\n        if {[string match \"*connection abort*\" $msg]} {\n            return {}\n        }\n        error $msg\n    }\n    return $val\n}\n\n# This is a wrapper to the actual dispatching procedure that handles\n# reconnection if needed.\nproc ::redis::__dispatch__ {id method args} {\n    set errorcode [catch {::redis::__dispatch__raw__ $id $method $args} retval]\n    if 
{$errorcode && $::redis::reconnect($id) && $::redis::fd($id) eq {}} {\n        # Try again if the connection was lost.\n        # FIXME: we don't re-select the previously selected DB, nor do we check\n        # if we are inside a transaction that needs to be re-issued from\n        # scratch.\n        set errorcode [catch {::redis::__dispatch__raw__ $id $method $args} retval]\n    }\n    return -code $errorcode $retval\n}\n\nproc ::redis::__dispatch__raw__ {id method argv} {\n    set fd $::redis::fd($id)\n\n    # Reconnect the link if needed.\n    if {$fd eq {} && $method ne {close}} {\n        lassign $::redis::addr($id) host port\n        if {$::redis::tls($id)} {\n            set ::redis::fd($id) [::tls::socket $host $port]\n        } else {\n            set ::redis::fd($id) [socket $host $port]\n        }\n        fconfigure $::redis::fd($id) -translation binary\n        set fd $::redis::fd($id)\n    }\n\n    # Transform HELLO 2 to HELLO 3 if force_resp3.\n    # Also set the connection var testing_resp3 in case of HELLO 3\n    if {[llength $argv] > 0 && [string compare -nocase $method \"HELLO\"] == 0} {\n        if {[lindex $argv 0] == 3} {\n            set ::redis::testing_resp3($id) 1\n        } else {\n            set ::redis::testing_resp3($id) 0\n            if {$::force_resp3} {\n                # If we are in force_resp3 we run HELLO 3 instead of HELLO 2\n                lset argv 0 3\n            }\n        }\n    }\n\n    set blocking $::redis::blocking($id)\n    set deferred $::redis::deferred($id)\n    if {$blocking == 0} {\n        if {[llength $argv] == 0} {\n            error \"Please provide a callback in non-blocking mode\"\n        }\n        set callback [lindex $argv end]\n        set argv [lrange $argv 0 end-1]\n    }\n    if {[info command ::redis::__method__$method] eq {}} {\n        catch {unset ::redis::attributes($id)}\n        set cmd \"*[expr {[llength $argv]+1}]\\r\\n\"\n        append cmd \"$[string length $method]\\r\\n$method\\r\\n\"\n 
       foreach a $argv {\n            # In Tcl 9.0, only convert to UTF-8 if the string contains non-byte characters\n            # to preserve binary data while handling unicode correctly\n            # Uses regexp rather than \"string match *[^\\u0000-\\u00ff]*\" to avoid the\n            # catastrophic backtracking bug in Tcl 9.0's glob engine.\n            if {$::tcl_version >= 9.0 && [regexp {[^\\u0000-\\u00ff]} $a]} {\n                set a [encoding convertto utf-8 $a]\n            }\n            append cmd \"$[string length $a]\\r\\n$a\\r\\n\"\n        }\n        ::redis::redis_write $fd $cmd\n        if {[catch {flush $fd}]} {\n            catch {close $fd}\n            set ::redis::fd($id) {}\n            return -code error \"I/O error reading reply\"\n        }\n\n        set ::redis::curr_argv($id) [concat $method $argv]\n        if {!$deferred} {\n            if {$blocking} {\n                ::redis::redis_read_reply $id $fd\n            } else {\n                # Every well formed reply read will pop an element from this\n                # list and use it as a callback. 
So pipelining is supported\n                # in non blocking mode.\n                lappend ::redis::callback($id) $callback\n                fileevent $fd readable [list ::redis::redis_readable $fd $id]\n            }\n        }\n    } else {\n        uplevel 1 [list ::redis::__method__$method $id $fd] $argv\n    }\n}\n\nproc ::redis::__method__blocking {id fd val} {\n    set ::redis::blocking($id) $val\n    fconfigure $fd -blocking $val\n}\n\nproc ::redis::__method__reconnect {id fd val} {\n    set ::redis::reconnect($id) $val\n}\n\nproc ::redis::__method__read {id fd} {\n    ::redis::redis_read_reply $id $fd\n}\n\nproc ::redis::__method__rawread {id fd {len -1}} {\n    return [redis_safe_read $fd $len]\n}\n\nproc ::redis::__method__write {id fd buf} {\n    ::redis::redis_write $fd $buf\n}\n\nproc ::redis::__method__flush {id fd} {\n    flush $fd\n}\n\nproc ::redis::__method__close {id fd} {\n    catch {close $fd}\n    catch {unset ::redis::fd($id)}\n    catch {unset ::redis::addr($id)}\n    catch {unset ::redis::blocking($id)}\n    catch {unset ::redis::deferred($id)}\n    catch {unset ::redis::readraw($id)}\n    catch {unset ::redis::attributes($id)}\n    catch {unset ::redis::reconnect($id)}\n    catch {unset ::redis::tls($id)}\n    catch {unset ::redis::state($id)}\n    catch {unset ::redis::statestack($id)}\n    catch {unset ::redis::callback($id)}\n    catch {unset ::redis::curr_argv($id)}\n    catch {unset ::redis::testing_resp3($id)}\n    catch {interp alias {} ::redis::redisHandle$id {}}\n}\n\nproc ::redis::__method__channel {id fd} {\n    return $fd\n}\n\nproc ::redis::__method__deferred {id fd val} {\n    set ::redis::deferred($id) $val\n}\n\nproc ::redis::__method__readraw {id fd val} {\n    set ::redis::readraw($id) $val\n}\n\nproc ::redis::__method__readingraw {id fd} {\n    return $::redis::readraw($id)\n}\n\nproc ::redis::__method__attributes {id fd} {\n    set _ $::redis::attributes($id)\n}\n\nproc ::redis::redis_write {fd buf} {\n    puts 
-nonewline $fd $buf\n}\n\nproc ::redis::redis_writenl {fd buf} {\n    redis_write $fd $buf\n    redis_write $fd \"\\r\\n\"\n    flush $fd\n}\n\nproc ::redis::redis_readnl {fd len} {\n    set buf [redis_safe_read $fd $len]\n    redis_safe_read $fd 2 ; # discard CR LF\n    return $buf\n}\n\nproc ::redis::redis_bulk_read {fd} {\n    set count [redis_read_line $fd]\n    if {$count == -1} return {}\n    set buf [redis_readnl $fd $count]\n    return $buf\n}\n\nproc ::redis::redis_multi_bulk_read {id fd} {\n    set count [redis_read_line $fd]\n    if {$count == -1} return {}\n    set l {}\n    set err {}\n    for {set i 0} {$i < $count} {incr i} {\n        if {[catch {\n            lappend l [redis_read_reply_logic $id $fd]\n        } e] && $err eq {}} {\n            set err $e\n        }\n    }\n    if {$err ne {}} {return -code error $err}\n    return $l\n}\n\nproc ::redis::redis_read_map {id fd} {\n    set count [redis_read_line $fd]\n    if {$count == -1} return {}\n    set d {}\n    set err {}\n    for {set i 0} {$i < $count} {incr i} {\n        if {[catch {\n            set k [redis_read_reply_logic $id $fd] ; # key\n            set v [redis_read_reply_logic $id $fd] ; # value\n            dict set d $k $v\n        } e] && $err eq {}} {\n            set err $e\n        }\n    }\n    if {$err ne {}} {return -code error $err}\n    return $d\n}\n\nproc ::redis::redis_read_line fd {\n    string trim [redis_safe_gets $fd]\n}\n\nproc ::redis::redis_read_null fd {\n    redis_safe_gets $fd\n    return {}\n}\n\nproc ::redis::redis_read_bool fd {\n    set v [redis_read_line $fd]\n    if {$v == \"t\"} {return 1}\n    if {$v == \"f\"} {return 0}\n    return -code error \"Bad protocol, '$v' as bool type\"\n}\n\nproc ::redis::redis_read_double {id fd} {\n    set v [redis_read_line $fd]\n    # unlike many other DTs, there is a textual difference between double and a string with the same value,\n    # so we need to transform to double if we are testing RESP3 (i.e. 
some tests check that a\n    # double reply is \"1.0\" and not \"1\")\n    if {[should_transform_to_resp2 $id]} {\n        return $v\n    } else {\n        return [expr {double($v)}]\n    }\n}\n\nproc ::redis::redis_read_verbatim_str fd {\n    set v [redis_bulk_read $fd]\n    # strip the first 4 chars (\"txt:\")\n    return [string range $v 4 end]\n}\n\nproc ::redis::redis_read_reply_logic {id fd} {\n    if {$::redis::readraw($id)} {\n        return [redis_read_line $fd]\n    }\n\n    while {1} {\n        set type [redis_safe_read $fd 1]\n        switch -exact -- $type {\n            _ {return [redis_read_null $fd]}\n            : -\n            ( -\n            + {return [redis_read_line $fd]}\n            , {return [redis_read_double $id $fd]}\n            # {return [redis_read_bool $fd]}\n            = {return [redis_read_verbatim_str $fd]}\n            - {return -code error [redis_read_line $fd]}\n            $ {return [redis_bulk_read $fd]}\n            > -\n            ~ -\n            * {return [redis_multi_bulk_read $id $fd]}\n            % {return [redis_read_map $id $fd]}\n            | {\n                set attrib [redis_read_map $id $fd]\n                set ::redis::attributes($id) $attrib\n                continue\n            }\n            default {\n                if {$type eq {}} {\n                    catch {close $fd}\n                    set ::redis::fd($id) {}\n                    return -code error \"I/O error reading reply\"\n                }\n                return -code error \"Bad protocol, '$type' as reply type byte\"\n            }\n        }\n    }\n}\n\nproc ::redis::redis_read_reply {id fd} {\n    set response [redis_read_reply_logic $id $fd]\n    ::response_transformers::transform_response_if_needed $id $::redis::curr_argv($id) $response\n}\n\nproc ::redis::redis_reset_state id {\n    set ::redis::state($id) [dict create buf {} mbulk -1 bulk -1 reply {}]\n    set ::redis::statestack($id) {}\n}\n\nproc ::redis::redis_call_callback 
{id type reply} {\n    set cb [lindex $::redis::callback($id) 0]\n    set ::redis::callback($id) [lrange $::redis::callback($id) 1 end]\n    uplevel #0 $cb [list ::redis::redisHandle$id $type $reply]\n    ::redis::redis_reset_state $id\n}\n\n# Read a reply in non-blocking mode.\nproc ::redis::redis_readable {fd id} {\n    if {[eof $fd]} {\n        redis_call_callback $id eof {}\n        ::redis::__method__close $id $fd\n        return\n    }\n    if {[dict get $::redis::state($id) bulk] == -1} {\n        set line [gets $fd]\n        if {$line eq {}} return ;# No complete line available, return\n        switch -exact -- [string index $line 0] {\n            : -\n            + {redis_call_callback $id reply [string range $line 1 end-1]}\n            - {redis_call_callback $id err [string range $line 1 end-1]}\n            ( {redis_call_callback $id reply [string range $line 1 end-1]}\n            $ {\n                dict set ::redis::state($id) bulk \\\n                    [expr [string range $line 1 end-1]+2]\n                if {[dict get $::redis::state($id) bulk] == 1} {\n                    # We got a $-1, hack the state to play well with this.\n                    dict set ::redis::state($id) bulk 2\n                    dict set ::redis::state($id) buf \"\\r\\n\"\n                    ::redis::redis_readable $fd $id\n                }\n            }\n            * {\n                dict set ::redis::state($id) mbulk [string range $line 1 end-1]\n                # Handle *-1\n                if {[dict get $::redis::state($id) mbulk] == -1} {\n                    redis_call_callback $id reply {}\n                }\n            }\n            default {\n                redis_call_callback $id err \\\n                    \"Bad protocol, '[string index $line 0]' as reply type byte\"\n            }\n        }\n    } else {\n        set totlen [dict get $::redis::state($id) bulk]\n        set buflen [string length [dict get $::redis::state($id) buf]]\n        set toread [expr 
{$totlen-$buflen}]\n        set data [read $fd $toread]\n        set nread [string length $data]\n        dict append ::redis::state($id) buf $data\n        # Check if we read a complete bulk reply\n        if {[string length [dict get $::redis::state($id) buf]] ==\n            [dict get $::redis::state($id) bulk]} {\n            if {[dict get $::redis::state($id) mbulk] == -1} {\n                redis_call_callback $id reply \\\n                    [string range [dict get $::redis::state($id) buf] 0 end-2]\n            } else {\n                dict with ::redis::state($id) {\n                    lappend reply [string range $buf 0 end-2]\n                    incr mbulk -1\n                    set bulk -1\n                }\n                if {[dict get $::redis::state($id) mbulk] == 0} {\n                    redis_call_callback $id reply \\\n                        [dict get $::redis::state($id) reply]\n                }\n            }\n        }\n    }\n}\n\n# when forcing resp3 some tests that rely on resp2 can fail, so we have to translate the resp3 response to resp2\nproc ::redis::should_transform_to_resp2 {id} {\n    return [expr {$::force_resp3 && !$::redis::testing_resp3($id)}]\n}\n"
  },
  {
    "path": "tests/support/response_transformers.tcl",
    "content": "# Tcl client library - used by the Redis test\n#\n# Copyright (C) 2009-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# This file contains a bunch of commands whose purpose is to transform\n# a RESP3 response to RESP2\n# Why is it needed?\n# When writing the reply_schema part in COMMAND DOCS we decided to use\n# the existing tests in order to verify the schemas (see logreqres.c)\n# The problem was that many tests were relying on the RESP2 structure\n# of the response (e.g. HRANDFIELD WITHVALUES in RESP2: {f1 v1 f2 v2}\n# vs. RESP3: {{f1 v1} {f2 v2}}).\n# Instead of adjusting the tests to expect RESP3 responses (a lot of\n# changes in many files) we decided to transform the response to RESP2\n# when running with --force-resp3\n\npackage require Tcl 8.5-10\n\nnamespace eval response_transformers {}\n\n# Transform a map response into an array of tuples (tuple = array with 2 elements)\n# Used for XREAD[GROUP]\nproc transfrom_map_to_tupple_array {argv response} {\n    set tuparray {}\n    foreach {key val} $response {\n        set tmp {}\n        lappend tmp $key\n        lappend tmp $val\n        lappend tuparray $tmp\n    }\n    return $tuparray\n}\n\n# Transform an array of tuples to a flat array\nproc transfrom_tuple_array_to_flat_array {argv response} {\n    set flatarray {}\n    foreach pair $response {\n        lappend flatarray {*}$pair\n    }\n    return $flatarray\n}\n\n# With HRANDFIELD, we only need to transform the response if the request had WITHVALUES\n# (otherwise the returned response is a flat array in both RESPs)\nproc transfrom_hrandfield_command {argv response} {\n    foreach ele $argv {\n        if {[string compare -nocase $ele \"WITHVALUES\"] == 0} {\n            return [transfrom_tuple_array_to_flat_array $argv $response]\n       
 }\n    }\n    return $response\n}\n\n# With some zset commands, we only need to transform the response if the request had WITHSCORES\n# (otherwise the returned response is a flat array in both RESPs)\nproc transfrom_zset_withscores_command {argv response} {\n    foreach ele $argv {\n        if {[string compare -nocase $ele \"WITHSCORES\"] == 0} {\n            return [transfrom_tuple_array_to_flat_array $argv $response]\n        }\n    }\n    return $response\n}\n\n# With ZPOPMIN/ZPOPMAX, we only need to transform the response if the request had COUNT (3rd arg)\n# (otherwise the returned response is a flat array in both RESPs)\nproc transfrom_zpopmin_zpopmax {argv response} {\n    if {[llength $argv] == 3} {\n        return [transfrom_tuple_array_to_flat_array $argv $response]\n    }\n    return $response\n}\n\nset ::trasformer_funcs {\n    XREAD transfrom_map_to_tupple_array\n    XREADGROUP transfrom_map_to_tupple_array\n    HRANDFIELD transfrom_hrandfield_command\n    ZRANDMEMBER transfrom_zset_withscores_command\n    ZRANGE transfrom_zset_withscores_command\n    ZRANGEBYSCORE transfrom_zset_withscores_command\n    ZRANGEBYLEX transfrom_zset_withscores_command\n    ZREVRANGE transfrom_zset_withscores_command\n    ZREVRANGEBYSCORE transfrom_zset_withscores_command\n    ZREVRANGEBYLEX transfrom_zset_withscores_command\n    ZUNION transfrom_zset_withscores_command\n    ZDIFF transfrom_zset_withscores_command\n    ZINTER transfrom_zset_withscores_command\n    ZPOPMIN transfrom_zpopmin_zpopmax\n    ZPOPMAX transfrom_zpopmin_zpopmax\n}\n\nproc ::response_transformers::transform_response_if_needed {id argv response} {\n    if {![::redis::should_transform_to_resp2 $id] || $::redis::readraw($id)} {\n        return $response\n    }\n\n    set key [string toupper [lindex $argv 0]]\n    if {![dict exists $::trasformer_funcs $key]} {\n        return $response\n    }\n\n    set transform [dict get $::trasformer_funcs $key]\n\n    return [$transform $argv $response]\n}\n"
  },
  {
    "path": "tests/support/server.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nset ::global_overrides {}\nset ::tags {}\nset ::valgrind_errors {}\n\nproc start_server_error {config_file error} {\n    set err {}\n    append err \"Can't start the Redis server\\n\"\n    append err \"CONFIGURATION:\\n\"\n    append err [exec cat $config_file]\n    append err \"\\nERROR:\\n\"\n    append err [string trim $error]\n    send_data_packet $::test_server_fd err $err\n}\n\nproc check_valgrind_errors stderr {\n    set res [find_valgrind_errors $stderr true]\n    if {$res != \"\"} {\n        send_data_packet $::test_server_fd err \"Valgrind error: $res\\n\"\n    }\n}\n\nproc check_sanitizer_errors stderr {\n    set res [sanitizer_errors_from_file $stderr]\n    if {$res != \"\"} {\n        send_data_packet $::test_server_fd err \"Sanitizer error: $res\\n\"\n    }\n}\n\nproc clean_persistence config {\n    # we may wanna keep the logs for later, but let's clean the persistence\n    # files right away, since they can accumulate and take up a lot of space\n    set config [dict get $config \"config\"]\n    set dir [dict get $config \"dir\"]\n    set rdb [format \"%s/%s\" $dir \"dump.rdb\"]\n    if {[dict exists $config \"appenddirname\"]} {\n        set aofdir [dict get $config \"appenddirname\"]\n    } else {\n        set aofdir \"appendonlydir\"\n    }\n    set aof_dirpath [format \"%s/%s\" $dir $aofdir]\n    clean_aof_persistence $aof_dirpath\n    catch {exec rm -rf $rdb}\n}\n\nproc kill_server config {\n    # nothing to kill when running against external server\n    if {$::external} 
return\n\n    # Close client connection if it exists\n    if {[dict exists $config \"client\"]} {\n        [dict get $config \"client\"] close\n    }\n\n    # never mind if it's already dead\n    set pid [dict get $config pid]\n    if {![is_alive $pid]} {\n        # Check valgrind errors if needed\n        if {$::valgrind} {\n            check_valgrind_errors [dict get $config stderr]\n        }\n\n        check_sanitizer_errors [dict get $config stderr]\n\n        # Remove this pid from the set of active pids in the test server.\n        send_data_packet $::test_server_fd server-killed $pid\n\n        return\n    }\n\n    # check for leaks\n    if {![dict exists $config \"skipleaks\"]} {\n        catch {\n            if {[string match {*Darwin*} [exec uname -a]]} {\n                tags {\"leaks\"} {\n                    test \"Check for memory leaks (pid $pid)\" {\n                        set output {0 leaks}\n                        catch {exec leaks $pid} output option\n                        # In a few tests we kill the server process, so leaks will not find it.\n                        # It'll exit with an exit code >1 on error, so we ignore these.\n                        if {[dict exists $option -errorcode]} {\n                            set details [dict get $option -errorcode]\n                            if {[lindex $details 0] eq \"CHILDSTATUS\"} {\n                                  set status [lindex $details 2]\n                                  if {$status > 1} {\n                                      set output \"0 leaks\"\n                                  }\n                            }\n                        }\n                        set output\n                    } {*0 leaks*}\n                }\n            }\n        }\n    }\n\n    # kill server and wait for the process to be totally exited\n    send_data_packet $::test_server_fd server-killing $pid\n    # Node might have been stopped in the test\n    # Send SIGCONT before SIGTERM, otherwise 
shutdown may be slow with ASAN.\n    catch {exec kill -SIGCONT $pid}\n    catch {exec kill $pid}\n    if {$::valgrind} {\n        set max_wait 120000\n    } else {\n        set max_wait 10000\n    }\n    while {[is_alive $pid]} {\n        incr wait 10\n\n        if {$wait == $max_wait} {\n            puts \"Forcing process $pid to crash...\"\n            catch {exec kill -SEGV $pid}\n        } elseif {$wait >= $max_wait * 2} {\n            puts \"Forcing process $pid to exit...\"\n            catch {exec kill -KILL $pid}\n        } elseif {$wait % 1000 == 0} {\n            puts \"Waiting for process $pid to exit...\"\n        }\n        after 10\n    }\n\n    # Check valgrind errors if needed\n    if {$::valgrind} {\n        check_valgrind_errors [dict get $config stderr]\n    }\n\n    check_sanitizer_errors [dict get $config stderr]\n\n    # Remove this pid from the set of active pids in the test server.\n    send_data_packet $::test_server_fd server-killed $pid\n}\n\nproc is_alive pid {\n    if {[catch {exec kill -0 $pid} err]} {\n        return 0\n    } else {\n        return 1\n    }\n}\n\nproc ping_server {host port} {\n    set retval 0\n    if {[catch {\n        if {$::tls} {\n            set fd [::tls::socket $host $port]\n        } else {\n            set fd [socket $host $port]\n        }\n        fconfigure $fd -translation binary\n        puts $fd \"PING\\r\\n\"\n        flush $fd\n        set reply [gets $fd]\n        if {[string range $reply 0 0] eq {+} ||\n            [string range $reply 0 0] eq {-}} {\n            set retval 1\n        }\n        close $fd\n    } e]} {\n        if {$::verbose} {\n            puts -nonewline \".\"\n        }\n    } else {\n        if {$::verbose} {\n            puts -nonewline \"ok\"\n        }\n    }\n    return $retval\n}\n\n# Ping server with a timeout. Returns 1 if server responds within timeout_ms,\n# otherwise returns 0. 
Uses blocking TCP connect (instant on localhost) and\n# Tcl's event loop (fileevent + vwait + after) for reliable timeouts.\n# For TLS, the handshake is performed separately in non-blocking mode so that\n# a paused/unresponsive server triggers the timeout instead of hanging.\nproc ping_server_with_timeout {host port timeout_ms} {\n    set retval 0\n    set fd {}\n    set wait_var \"::ping_wait_[incr ::ping_server_uid]\"\n    if {[catch {\n        # TCP connect is always blocking: instant on localhost even if the\n        # server process is paused, because the kernel handles SYN/ACK.\n        set fd [socket $host $port]\n\n        if {$::tls} {\n            # Perform TLS handshake in non-blocking mode with a timeout.\n            # We avoid ::tls::socket because it blocks indefinitely when the\n            # server is paused (SIGSTOP) -- the TLS handshake requires active\n            # server participation that a paused process cannot provide.\n            ::tls::import $fd \\\n                -cafile \"$::tlsdir/ca.crt\" \\\n                -certfile \"$::tlsdir/client.crt\" \\\n                -keyfile \"$::tlsdir/client.key\"\n            fconfigure $fd -blocking 0\n            set hs_done 0\n            set hs_end [expr {[clock milliseconds] + $timeout_ms}]\n            while {!$hs_done && [clock milliseconds] < $hs_end} {\n                if {[catch {set hs_done [::tls::handshake $fd]}]} break\n                if {!$hs_done} {\n                    set $wait_var \"\"\n                    after 10 [list set $wait_var \"x\"]\n                    vwait $wait_var\n                }\n            }\n            if {!$hs_done} {\n                error \"TLS handshake did not complete\"\n            }\n            fconfigure $fd -blocking 1\n        }\n\n        fconfigure $fd -translation binary -buffering full\n        puts $fd \"PING\\r\\n\"\n        flush $fd\n\n        # Read timeout via event loop: whichever fires first unblocks vwait\n        set $wait_var 
\"\"\n        set timer [after $timeout_ms [list set $wait_var \"timeout\"]]\n        fileevent $fd readable [list set $wait_var \"readable\"]\n        vwait $wait_var\n\n        after cancel $timer\n        fileevent $fd readable {}\n\n        if {[set $wait_var] eq \"readable\"} {\n            set reply [gets $fd]\n            if {[string range $reply 0 0] eq {+} ||\n                [string range $reply 0 0] eq {-}} {\n                set retval 1\n            }\n        }\n        close $fd\n        set fd {}\n    } e]} {\n        if {$fd ne {}} {\n            catch {close $fd}\n        }\n    }\n    unset -nocomplain $wait_var\n    return $retval\n}\nset ::ping_server_uid 0\n\n# Save configuration for a single server.\n# Arguments:\n#   client - Redis client object to use for CONFIG GET\n# Returns: A dict of {param value} pairs\nproc save_single_server_config {client} {\n    set saved_config {}\n    foreach {param val} [$client config get *] {\n        dict set saved_config $param $val\n    }\n    return $saved_config\n}\n\n# Restore configuration for a single server.\n# Arguments:\n#   client       - Redis client object to use for CONFIG SET\n#   saved_config - Dict of {param value} pairs from save_single_server_config\n#   diff_based   - If 1, only restore configs that actually changed (default: 0)\nproc restore_single_server_config {client saved_config {diff_based 0}} {\n    if {$diff_based} {\n        # Get current config state for comparison\n        set current_config [save_single_server_config $client]\n\n        # Only restore configs that changed\n        dict for {param saved_val} $saved_config {\n            if {[catch {dict get $current_config $param} current_val]} {\n                # Parameter no longer exists, skip it\n                continue\n            }\n            if {$saved_val ne $current_val} {\n                # Config was modified - restore it\n                catch {$client config set $param $saved_val}\n            }\n        }\n    
} else {\n        # Restore all configs (original behavior)\n        dict for {param val} $saved_config {\n            # Some may fail, specifically immutable ones\n            catch {$client config set $param $val}\n        }\n    }\n}\n\n# Return 1 if the server at the specified addr is reachable by PING,\n# otherwise 0. Retries every 50 milliseconds, up to the specified number\n# of retries.\nproc server_is_up {host port retrynum} {\n    after 10 ;# Use a small delay to make a first-try success likely.\n    set retval 0\n    while {[incr retrynum -1]} {\n        if {[catch {ping_server $host $port} ping]} {\n            set ping 0\n        }\n        if {$ping} {return 1}\n        after 50\n    }\n    return 0\n}\n\n# Check if current ::tags match requested tags. If ::allowtags are used,\n# there must be some intersection. If ::denytags are used, no intersection\n# is allowed. Returns 1 if tags are acceptable or 0 otherwise, in which\n# case err_return names a return variable for the message to be logged.\nproc tags_acceptable {tags err_return} {\n    upvar $err_return err\n\n    # If tags are whitelisted, make sure there's a match\n    if {[llength $::allowtags] > 0} {\n        set matched 0\n        foreach tag $::allowtags {\n            if {[lsearch $tags $tag] >= 0} {\n                incr matched\n            }\n        }\n        if {$matched < 1} {\n            set err \"Tag: none of the tags allowed\"\n            return 0\n        }\n    }\n\n    foreach tag $::denytags {\n        if {[lsearch $tags $tag] >= 0} {\n            set err \"Tag: $tag denied\"\n            return 0\n        }\n    }\n\n    # Some units mess with the client output buffer so we can't really use the req-res logging mechanism.\n    if {$::log_req_res && [lsearch $tags \"logreqres:skip\"] >= 0} {\n        set err \"Not supported when running in log-req-res mode\"\n        return 0\n    }\n\n    if {$::external && [lsearch $tags \"external:skip\"] >= 0} {\n        set 
err \"Not supported on external server\"\n        return 0\n    }\n\n    if {$::debug_defrag && [lsearch $tags \"debug_defrag:skip\"] >= 0} {\n        set err \"Not supported on server compiled with DEBUG_DEFRAG option\"\n        return 0\n    }\n\n    if {$::singledb && [lsearch $tags \"singledb:skip\"] >= 0} {\n        set err \"Not supported on singledb\"\n        return 0\n    }\n\n    if {$::cluster_mode && [lsearch $tags \"cluster:skip\"] >= 0} {\n        set err \"Not supported in cluster mode\"\n        return 0\n    }\n\n    if {$::tsan && [lsearch $tags \"tsan:skip\"] >= 0} {\n        set err \"Not supported under thread sanitizer\"\n        return 0\n    }\n\n    if {$::tls && [lsearch $tags \"tls:skip\"] >= 0} {\n        set err \"Not supported in tls mode\"\n        return 0\n    }\n\n    if {!$::large_memory && [lsearch $tags \"large-memory\"] >= 0} {\n        set err \"large memory flag not provided\"\n        return 0\n    }\n\n    if { [lsearch $tags \"experimental\"] >=0 && [lsearch $::allowtags \"experimental\"] == -1 } {\n        set err \"experimental test not allowed\"\n        return 0\n    }\n\n    return 1\n}\n\n# doesn't really belong here, but highly coupled to code in start_server\nproc tags {tags code} {\n    # If we 'tags' contain multiple tags, quoted and separated by spaces,\n    # we want to get rid of the quotes in order to have a proper list\n    set tags [string map { \\\" \"\" } $tags]\n    set ::tags [concat $::tags $tags]\n    if {![tags_acceptable $::tags err]} {\n        incr ::num_aborted\n        send_data_packet $::test_server_fd ignore $err\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        return\n    }\n    if {[catch {uplevel 1 $code} error]} {\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        error $error $::errorInfo\n    }\n    set ::tags [lrange $::tags 0 end-[llength $tags]]\n}\n\n# Write the configuration in the dictionary 'config' in the specified\n# file name.\nproc 
create_server_config_file {filename config config_lines} {\n    set fp [open $filename w+]\n    foreach directive [dict keys $config] {\n        puts -nonewline $fp \"$directive \"\n        puts $fp [dict get $config $directive]\n    }\n    foreach {config_line_directive config_line_args} $config_lines {\n        puts $fp \"$config_line_directive $config_line_args\"\n    }\n    close $fp\n}\n\nproc spawn_server {config_file stdout stderr args} {\n    set cmd [list src/redis-server $config_file]\n    set args {*}$args\n    if {[llength $args] > 0} {\n        lappend cmd {*}$args\n    }\n\n    if {$::valgrind} {\n        set pid [exec valgrind --track-origins=yes --trace-children=yes --suppressions=[pwd]/src/valgrind.sup --show-reachable=no --show-possibly-lost=no --leak-check=full {*}$cmd >> $stdout 2>> $stderr &]\n    } elseif {$::stack_logging} {\n        set pid [exec /usr/bin/env MallocStackLogging=1 MallocLogFile=/tmp/malloc_log.txt {*}$cmd >> $stdout 2>> $stderr &]\n    } else {\n        # The ASAN_OPTIONS environment variable is for the address sanitizer. If a test\n        # tries to allocate a huge memory area and expects the allocator to return\n        # NULL, the address sanitizer throws an error without this setting.\n        set env [list \\\n            \"ASAN_OPTIONS=allocator_may_return_null=1\" \\\n            \"MSAN_OPTIONS=allocator_may_return_null=1\" \\\n            \"TSAN_OPTIONS=allocator_may_return_null=1,detect_deadlocks=0,suppressions=src/tsan.sup\" \\\n        ]\n        set pid [exec /usr/bin/env {*}$env {*}$cmd >> $stdout 2>> $stderr &]\n    }\n\n    if {$::wait_server} {\n        set msg \"server started PID: $pid. 
press any key to continue...\"\n        puts $msg\n        read stdin 1\n    }\n\n    # Tell the test server about this new instance.\n    send_data_packet $::test_server_fd server-spawned $pid\n    return $pid\n}\n\n# Wait for actual startup, return 1 if port is busy, 0 otherwise\nproc wait_server_started {config_file stdout pid} {\n    set checkperiod 100; # Milliseconds\n    set maxiter [expr {120*1000/$checkperiod}] ; # Wait up to 2 minutes.\n    set port_busy 0\n    while 1 {\n        if {[regexp -- \" PID: $pid.*Server initialized\" [exec cat $stdout]]} {\n            break\n        }\n        after $checkperiod\n        incr maxiter -1\n        if {$maxiter == 0} {\n            start_server_error $config_file \"No PID detected in log $stdout\"\n            puts \"--- LOG CONTENT ---\"\n            puts [exec cat $stdout]\n            puts \"-------------------\"\n            break\n        }\n\n        # Check if the port is actually busy and the server failed\n        # for this reason.\n        if {[regexp {Failed listening on port} [exec cat $stdout]]} {\n            set port_busy 1\n            break\n        }\n    }\n    return $port_busy\n}\n\nproc dump_server_log {srv} {\n    set pid [dict get $srv \"pid\"]\n    puts \"\\n===== Start of server log (pid $pid) =====\\n\"\n    puts [exec cat [dict get $srv \"stdout\"]]\n    puts \"===== End of server log (pid $pid) =====\\n\"\n\n    puts \"\\n===== Start of server stderr log (pid $pid) =====\\n\"\n    puts [exec cat [dict get $srv \"stderr\"]]\n    puts \"===== End of server stderr log (pid $pid) =====\\n\"\n}\n\nproc run_external_server_test {code overrides} {\n    set srv {}\n    dict set srv \"host\" $::host\n    dict set srv \"port\" $::port\n    set client [redis $::host $::port 0 $::tls]\n    dict set srv \"client\" $client\n    if {!$::singledb} {\n        $client select 9\n    }\n\n    set config {}\n    dict set config \"port\" $::port\n    dict set srv \"config\" $config\n\n    # append the 
server to the stack\n    lappend ::servers $srv\n\n    if {[llength $::servers] > 1} {\n        if {$::verbose} {\n            puts \"Notice: nested start_server statements in external server mode, the test must be aware of that!\"\n        }\n    }\n\n    r flushall\n    r function flush\n    r script flush\n    r config resetstat\n\n    # Resolve client dynamically via srv (not the captured $client variable)\n    # to handle reconnections that replace the client in ::servers.\n    set saved_config [save_single_server_config [srv 0 \"client\"]]\n\n    # apply overrides\n    foreach {param val} $overrides {\n        r config set $param $val\n\n        # If we enable appendonly, wait for the rewrite to complete. This is\n        # required for tests that begin with a bg* command, which will fail if\n        # the rewriteaof operation is not completed at this point.\n        if {$param == \"appendonly\" && $val == \"yes\"} {\n            waitForBgrewriteaof r\n        }\n    }\n\n    if {[catch {set retval [uplevel 2 $code]} error]} {\n        if {$::durable} {\n            set msg [string range $error 10 end]\n            lappend details $msg\n            lappend details $::errorInfo\n            lappend ::tests_failed $details\n\n            incr ::num_failed\n            send_data_packet $::test_server_fd err [join $details \"\\n\"]\n        } else {\n            # Re-raise, let handler up the stack take care of this.\n            error $error $::errorInfo\n        }\n    }\n\n    # Resolve client dynamically from ::servers rather than using the captured\n    # $client variable. 
If a reconnect occurred during test execution, $client\n    # references the old (closed) connection while ::servers holds the new one.\n    restore_single_server_config [srv 0 \"client\"] $saved_config\n\n    set srv [lpop ::servers]\n\n    if {[dict exists $srv \"client\"]} {\n        [dict get $srv \"client\"] close\n    }\n}\n\nproc start_server {options {code undefined}} {\n    # setup defaults\n    set baseconfig \"default.conf\"\n    set overrides {}\n    set omit {}\n    set tags {}\n    set args {}\n    set keep_persistence false\n    set config_lines {}\n\n    # Wait for the server to be ready and check for server liveness/client connectivity before starting the test.\n    set wait_ready true\n\n    # parse options\n    foreach {option value} $options {\n        switch $option {\n            \"config\" {\n                set baseconfig $value\n            }\n            \"overrides\" {\n                set overrides [concat $overrides $value]\n            }\n            \"config_lines\" {\n                set config_lines $value\n            }\n            \"args\" {\n                set args $value\n            }\n            \"omit\" {\n                set omit $value\n            }\n            \"tags\" {\n                # If 'tags' contains multiple tags, quoted and separated by spaces,\n                # we want to get rid of the quotes in order to have a proper list\n                set _tags [string map { \\\" \"\" } $value]\n                set tags [concat $tags $_tags]\n            }\n            \"keep_persistence\" {\n                set keep_persistence $value\n            }\n            \"wait_ready\" {\n                set wait_ready $value\n            }\n            default {\n                error \"Unknown option $option\"\n            }\n        }\n    }\n    set ::tags [concat $::tags $tags]\n\n    # We skip unwanted tags\n    if {![tags_acceptable $::tags err]} {\n        incr ::num_aborted\n        send_data_packet 
$::test_server_fd ignore $err\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        return\n    }\n\n    # If we are running against an external server, we just push the\n    # host/port pair in the stack the first time\n    if {$::external} {\n        run_external_server_test $code $overrides\n\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        return\n    }\n\n    set data [split [exec cat \"tests/assets/$baseconfig\"] \"\\n\"]\n    set config {}\n    if {$::tls} {\n        if {$::tls_module} {\n            lappend config_lines [list \"loadmodule\" [format \"%s/src/redis-tls.so\" [pwd]]]\n        }\n        dict set config \"tls-cert-file\" [format \"%s/tests/tls/server.crt\" [pwd]]\n        dict set config \"tls-key-file\" [format \"%s/tests/tls/server.key\" [pwd]]\n        dict set config \"tls-client-cert-file\" [format \"%s/tests/tls/client.crt\" [pwd]]\n        dict set config \"tls-client-key-file\" [format \"%s/tests/tls/client.key\" [pwd]]\n        dict set config \"tls-dh-params-file\" [format \"%s/tests/tls/redis.dh\" [pwd]]\n        dict set config \"tls-ca-cert-file\" [format \"%s/tests/tls/ca.crt\" [pwd]]\n        dict set config \"loglevel\" \"debug\"\n    }\n    foreach line $data {\n        if {[string length $line] > 0 && [string index $line 0] ne \"#\"} {\n            set elements [split $line \" \"]\n            set directive [lrange $elements 0 0]\n            set arguments [lrange $elements 1 end]\n            dict set config $directive $arguments\n        }\n    }\n\n    # use a different directory every time a server is started\n    dict set config dir [tmpdir server]\n\n    # start every server on a different port\n    set port [find_available_port $::baseport $::portcount]\n    if {$::tls} {\n        set pport [find_available_port $::baseport $::portcount]\n        dict set config \"port\" $pport\n        dict set config \"tls-port\" $port\n        dict set config \"tls-cluster\" \"yes\"\n        dict set 
config \"tls-replication\" \"yes\"\n    } else {\n        dict set config port $port\n    }\n\n    set unixsocket [file normalize [format \"%s/%s\" [dict get $config \"dir\"] \"socket\"]]\n    dict set config \"unixsocket\" $unixsocket\n\n    # apply overrides from global space and arguments\n    foreach {directive arguments} [concat $::global_overrides $overrides] {\n        dict set config $directive $arguments\n    }\n\n    # remove directives that are marked to be omitted\n    foreach directive $omit {\n        dict unset config $directive\n    }\n\n    if {$::log_req_res} {\n        dict set config \"req-res-logfile\" \"stdout.reqres\"\n    }\n\n    if {$::force_resp3} {\n        dict set config \"client-default-resp\" \"3\"\n    }\n\n    if {$::debug_defrag} {\n        dict set config \"activedefrag\" \"yes\" ;# defrag enabled\n        dict set config \"active-defrag-cycle-min\" \"65\"\n        dict set config \"active-defrag-cycle-max\" \"75\"\n    }\n\n    # write new configuration to temporary file\n    set config_file [tmpfile redis.conf]\n    create_server_config_file $config_file $config $config_lines\n\n    set stdout [format \"%s/%s\" [dict get $config \"dir\"] \"stdout\"]\n    set stderr [format \"%s/%s\" [dict get $config \"dir\"] \"stderr\"]\n\n    # if we're inside a test, write the test name to the server log file\n    if {[info exists ::cur_test]} {\n        set fd [open $stdout \"a+\"]\n        puts $fd \"### Starting server for test $::cur_test\"\n        close $fd\n        if {$::verbose > 1} {\n            puts \"### Starting server $stdout for test - $::cur_test\"\n        }\n    }\n\n    # We may have a stdout left over from the previous tests, so we need\n    # to get the current count of ready logs\n    set previous_ready_count [count_message_lines $stdout \"Ready to accept\"]\n\n    # We need a loop here to retry with different ports.\n    set server_started 0\n    while {$server_started == 0} {\n        if {$::verbose} {\n            
puts -nonewline \"=== ($tags) Starting server ${::host}:${port} \"\n        }\n\n        send_data_packet $::test_server_fd \"server-spawning\" \"port $port\"\n\n        set pid [spawn_server $config_file $stdout $stderr $args]\n\n        # check that the server actually started\n        set port_busy [wait_server_started $config_file $stdout $pid]\n\n        # Sometimes we have to try a different port, even if we checked\n        # for availability. Other test clients may grab the port before we\n        # are able to do it for example.\n        if {$port_busy} {\n            puts \"Port $port was already busy, trying another port...\"\n            set port [find_available_port $::baseport $::portcount]\n            if {$::tls} {\n                set pport [find_available_port $::baseport $::portcount]\n                dict set config port $pport\n                dict set config \"tls-port\" $port\n            } else {\n                dict set config port $port\n            }\n            create_server_config_file $config_file $config $config_lines\n\n            # Truncate log so wait_server_started will not be looking at\n            # output of the failed server.\n            close [open $stdout \"w\"]\n\n            continue; # Try again\n        }\n\n        if {$::valgrind} {set retrynum 1000} else {set retrynum 100}\n        if {$code ne \"undefined\" && $wait_ready} {\n            set serverisup [server_is_up $::host $port $retrynum]\n        } else {\n            set serverisup 1\n        }\n\n        if {$::verbose} {\n            puts \"\"\n        }\n\n        if {!$serverisup} {\n            set err {}\n            append err [exec cat $stdout] \"\\n\" [exec cat $stderr]\n            start_server_error $config_file $err\n            set ::tags [lrange $::tags 0 end-[llength $tags]]\n            return\n        }\n        set server_started 1\n    }\n\n    # setup properties to be able to initialize a client object\n    set port_param [expr $::tls ? 
{\"tls-port\"} : {\"port\"}]\n    set host $::host\n    if {[dict exists $config bind]} { set host [dict get $config bind] }\n    if {[dict exists $config $port_param]} { set port [dict get $config $port_param] }\n\n    # setup config dict\n    dict set srv \"config_file\" $config_file\n    dict set srv \"config\" $config\n    dict set srv \"pid\" $pid\n    dict set srv \"host\" $host\n    dict set srv \"port\" $port\n    dict set srv \"stdout\" $stdout\n    dict set srv \"stderr\" $stderr\n    dict set srv \"unixsocket\" $unixsocket\n    if {$::tls} {\n        dict set srv \"pport\" $pport\n    }\n\n    # if a block of code is supplied, we wait for the server to become\n    # available, create a client object and kill the server afterwards\n    if {$code ne \"undefined\"} {\n        set line [exec head -n1 $stdout]\n        if {[string match {*already in use*} $line]} {\n            set ::tags [lrange $::tags 0 end-[llength $tags]]\n            error_and_quit $config_file $line\n        }\n\n        # append the server to the stack\n        lappend ::servers $srv\n\n        if {$wait_ready} {\n            while 1 {\n                # check that the server actually started and is ready for connections\n                if {[count_message_lines $stdout \"Ready to accept\"] > $previous_ready_count} {\n                    break\n                }\n                after 10\n            }\n\n            # connect client (after server dict is put on the stack)\n            reconnect\n        }\n\n        # remember previous num_failed to catch new errors\n        set prev_num_failed $::num_failed\n\n        # execute provided block\n        set num_tests $::num_tests\n        if {[catch { uplevel 1 $code } error]} {\n            set backtrace $::errorInfo\n            set assertion [string match \"assertion:*\" $error]\n\n            # fetch srv back from the server list, in case it was restarted by restart_server (new PID)\n            set srv [lindex $::servers end]\n\n 
           # pop the server object\n            set ::servers [lrange $::servers 0 end-1]\n\n            # Kill the server without checking for leaks\n            dict set srv \"skipleaks\" 1\n            kill_server $srv\n\n            if {$::dump_logs && $assertion} {\n                # if we caught an assertion ($::num_failed isn't incremented yet)\n                # this happens when the test spawns a server and not the other way around\n                dump_server_log $srv\n            } else {\n                # Print crash report from log\n                set crashlog [crashlog_from_file [dict get $srv \"stdout\"]]\n                if {[string length $crashlog] > 0} {\n                    puts [format \"\\nLogged crash report (pid %d):\" [dict get $srv \"pid\"]]\n                    puts \"$crashlog\"\n                    puts \"\"\n                }\n\n                set sanitizerlog [sanitizer_errors_from_file [dict get $srv \"stderr\"]]\n                if {[string length $sanitizerlog] > 0} {\n                    puts [format \"\\nLogged sanitizer errors (pid %d):\" [dict get $srv \"pid\"]]\n                    puts \"$sanitizerlog\"\n                    puts \"\"\n                }\n            }\n\n            if {!$assertion && $::durable} {\n                # durable is meant to prevent the whole tcl test from exiting on\n                # an exception. 
an assertion will be caught by the test proc.\n                set msg [string range $error 10 end]\n                lappend details $msg\n                lappend details $backtrace\n                lappend ::tests_failed $details\n\n                incr ::num_failed\n                send_data_packet $::test_server_fd err [join $details \"\\n\"]\n            } else {\n                # Re-raise, let handler up the stack take care of this.\n                set ::tags [lrange $::tags 0 end-[llength $tags]]\n                error $error $backtrace\n            }\n        } else {\n            if {$::dump_logs && $prev_num_failed != $::num_failed} {\n                dump_server_log $srv\n            }\n        }\n\n        # fetch srv back from the server list, in case it was restarted by restart_server (new PID)\n        set srv [lindex $::servers end]\n\n        # pop the server object\n        set ::servers [lrange $::servers 0 end-1]\n\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        kill_server $srv\n        if {!$keep_persistence} {\n            clean_persistence $srv\n        }\n        set _ \"\"\n    } else {\n        set ::tags [lrange $::tags 0 end-[llength $tags]]\n        set _ $srv\n    }\n}\n\n# Start multiple servers with the same options, run code, then stop them.\nproc start_multiple_servers {num options code} {\n    for {set i 0} {$i < $num} {incr i} {\n        set code [list start_server $options $code]\n    }\n    uplevel 1 $code\n}\n\nproc restart_server {level wait_ready rotate_logs {reconnect 1} {shutdown sigterm}} {\n    set srv [lindex $::servers end+$level]\n    if {$shutdown ne {sigterm}} {\n        catch {[dict get $srv \"client\"] shutdown $shutdown}\n    }\n    # Kill server doesn't mind if the server is already dead\n    kill_server $srv\n    # Remove the default client from the server\n    dict unset srv \"client\"\n\n    set pid [dict get $srv \"pid\"]\n    set stdout [dict get $srv \"stdout\"]\n    set stderr [dict 
get $srv \"stderr\"]\n    if {$rotate_logs} {\n        set ts [clock format [clock seconds] -format %y%m%d%H%M%S]\n        file rename $stdout $stdout.$ts.$pid\n        file rename $stderr $stderr.$ts.$pid\n    }\n    set prev_ready_count [count_message_lines $stdout \"Ready to accept\"]\n\n    # if we're inside a test, write the test name to the server log file\n    if {[info exists ::cur_test]} {\n        set fd [open $stdout \"a+\"]\n        puts $fd \"### Restarting server for test $::cur_test\"\n        close $fd\n    }\n\n    set config_file [dict get $srv \"config_file\"]\n\n    set pid [spawn_server $config_file $stdout $stderr {}]\n\n    # check that the server actually started\n    wait_server_started $config_file $stdout $pid\n\n    # update the pid in the servers list\n    dict set srv \"pid\" $pid\n    # re-set $srv in the servers list\n    lset ::servers end+$level $srv\n\n    if {$wait_ready} {\n        while 1 {\n            # check that the server actually started and is ready for connections\n            if {[count_message_lines $stdout \"Ready to accept\"] > $prev_ready_count} {\n                break\n            }\n            after 10\n        }\n    }\n    if {$reconnect} {\n        reconnect $level\n    }\n}\n"
  },
  {
    "path": "tests/support/test.tcl",
    "content": "set ::num_tests 0\nset ::num_passed 0\nset ::num_failed 0\nset ::num_skipped 0\nset ::num_aborted 0\nset ::tests_failed {}\nset ::cur_test \"\"\n\nproc fail {msg} {\n    error \"assertion:$msg\"\n}\n\nproc assert {condition} {\n    if {![uplevel 1 [list expr $condition]]} {\n        set context \"(context: [info frame -1])\"\n        error \"assertion:Expected [uplevel 1 [list subst -nocommands $condition]] $context\"\n    }\n}\n\nproc assert_no_match {pattern value} {\n    if {[string match $pattern $value]} {\n        set context \"(context: [info frame -1])\"\n        error \"assertion:Expected '$value' to not match '$pattern' $context\"\n    }\n}\n\nproc assert_match {pattern value {detail \"\"} {context \"\"}} {\n    if {![string match $pattern $value]} {\n        if {$context eq \"\"} {\n            set context \"(context: [info frame -1])\"\n        }\n        error \"assertion:Expected '$value' to match '$pattern' $context $detail\"\n    }\n}\n\nproc assert_failed {expected_err detail} {\n     if {$detail ne \"\"} {\n        set detail \"(detail: $detail)\"\n     } else {\n        set detail \"(context: [info frame -2])\"\n     }\n     error \"assertion:$expected_err $detail\"\n}\n\nproc assert_not_equal {value expected {detail \"\"}} {\n    if {!($expected ne $value)} {\n        assert_failed \"Expected '$value' not equal to '$expected'\" $detail\n    }\n}\n\nproc assert_equal {value expected {detail \"\"}} {\n    if {$expected ne $value} {\n        assert_failed \"Expected '$value' to be equal to '$expected'\" $detail\n    }\n}\n\nproc assert_lessthan {value expected {detail \"\"}} {\n    if {!($value < $expected)} {\n        assert_failed \"Expected '$value' to be less than '$expected'\" $detail\n    }\n}\n\nproc assert_lessthan_equal {value expected {detail \"\"}} {\n    if {!($value <= $expected)} {\n        assert_failed \"Expected '$value' to be less than or equal to '$expected'\" $detail\n    }\n}\n\nproc assert_morethan {value 
expected {detail \"\"}} {\n    if {!($value > $expected)} {\n        assert_failed \"Expected '$value' to be more than '$expected'\" $detail\n    }\n}\n\nproc assert_morethan_equal {value expected {detail \"\"}} {\n    if {!($value >= $expected)} {\n        assert_failed \"Expected '$value' to be more than or equal to '$expected'\" $detail\n    }\n}\n\nproc assert_range {value min max {detail \"\"}} {\n    if {!($value <= $max && $value >= $min)} {\n        assert_failed \"Expected '$value' to be between to '$min' and '$max'\" $detail\n    }\n}\n\nproc assert_error {pattern code {detail \"\"}} {\n    if {[catch {uplevel 1 $code} error]} {\n        assert_match $pattern $error $detail\n    } else {\n        assert_failed \"Expected an error matching '$pattern' but got '$error'\" $detail\n    }\n}\n\nproc assert_encoding {enc key} {\n    if {$::ignoreencoding} {\n        return\n    }\n    set val [r object encoding $key]\n    assert_match $enc $val\n}\n\nproc assert_type {type key} {\n    assert_equal $type [r type $key]\n}\n\nproc assert_refcount {ref key} {\n    if {[lsearch $::denytags \"needs:debug\"] >= 0} {\n        return\n    }\n\n    set val [r object refcount $key]\n    assert_equal $ref $val\n}\n\nproc assert_refcount_morethan {key ref} {\n    if {[lsearch $::denytags \"needs:debug\"] >= 0} {\n        return\n    }\n\n    set val [r object refcount $key]\n    assert_morethan $val $ref\n}\n\n# Wait for the specified condition to be true, with the specified number of\n# max retries and delay between retries. 
Otherwise the 'elsescript' is\n# executed.\nproc wait_for_condition {maxtries delay e _else_ elsescript} {\n    if {$_else_ ne \"else\"} {\n        error \"$_else_ must be equal to \\\"else\\\"\"\n    }\n\n    while {[incr maxtries -1] >= 0} {\n        set errcode [catch {uplevel 1 [list expr $e]} result]\n        if {$errcode == 0} {\n            if {$result} break\n        } else {\n            return -code $errcode $result\n        }\n        after $delay\n    }\n    if {$maxtries == -1} {\n        set errcode [catch {uplevel 1 $elsescript} result]\n        return -code $errcode $result\n    }\n}\n\n# try to match a value to a list of patterns that are either regex (starts with \"/\") or plain string.\n# The caller can specify to use only glob-pattern match\nproc search_pattern_list {value pattern_list {glob_pattern false}} {\n    foreach el $pattern_list {\n        if {[string length $el] == 0} { continue }\n        if { $glob_pattern } {\n            if {[string match $el $value]} {\n                return 1\n            }\n            continue\n        }\n        if {[string equal / [string index $el 0]] && [regexp -- [string range $el 1 end] $value]} {\n            return 1\n        } elseif {[string equal $el $value]} {\n            return 1\n        }\n    }\n    return 0\n}\n\n# Save configuration for all servers in the ::servers stack\n# Returns a list of [server_index config_dict] pairs\n# Uses save_single_server_config helper from server.tcl\nproc save_server_configs {} {\n    set saved_configs {}\n    set num_servers [llength $::servers]\n    for {set i 0} {$i < $num_servers} {incr i} {\n        set level [expr {0 - $i}]\n        # Use catch to handle servers that may not be accessible\n        if {[catch {srv $level \"client\"} config_client]} {\n            continue\n        }\n        # Use shared helper for single-server config save\n        set server_config [save_single_server_config $config_client]\n        lappend saved_configs [list $i 
$server_config]\n    }\n    return $saved_configs\n}\n\n# Restore configuration for all servers that have changes\n# Uses diff-based restoration: only restore configs that actually changed\n# Uses restore_single_server_config helper from server.tcl\n# Arguments:\n#   saved_configs - List of [server_index config_dict] pairs from save_server_configs\nproc restore_server_configs {saved_configs} {\n    foreach entry $saved_configs {\n        lassign $entry server_idx saved_config\n        set level [expr {0 - $server_idx}]\n\n        # Use catch to handle servers that may have terminated\n        if {[catch {srv $level \"client\"} config_client]} {\n            if {$::verbose} {\n                puts \"Warning: Failed to get client for server $server_idx during config restore\"\n            }\n            continue\n        }\n\n        # Check if server is responsive before attempting restore\n        # This prevents hanging on paused/unresponsive servers\n        set host [srv $level \"host\"]\n        set port [srv $level \"port\"]\n        if {$::valgrind} {set ping_timeout 5000} else {set ping_timeout 500}\n        if {![ping_server_with_timeout $host $port $ping_timeout]} {\n            # Server unresponsive - skip restoration\n            if {$::verbose} {\n                puts \"Warning: Server $server_idx unresponsive, skipping config restore\"\n            }\n            continue\n        }\n\n        # Use shared helper for single-server config restore (diff-based)\n        # Catch errors to ensure restoration failures don't propagate to caller\n        # This is \"best effort\" restoration - log failures but continue\n        if {[catch {restore_single_server_config $config_client $saved_config 1} err]} {\n            if {$::verbose} {\n                puts \"Warning: Failed to restore config for server $server_idx: $err\"\n            }\n        }\n    }\n}\n\nproc test {name code {okpattern undefined} {tags {}}} {\n    # abort if test name in skiptests\n   
 if {[search_pattern_list $name $::skiptests]} {\n        incr ::num_skipped\n        send_data_packet $::test_server_fd skip $name\n        return\n    }\n    if {$::verbose > 1} {\n        puts \"starting test $name\"\n    }\n    # abort if only_tests was set but test name is not included\n    if {[llength $::only_tests] > 0 && ![search_pattern_list $name $::only_tests]} {\n        incr ::num_skipped\n        send_data_packet $::test_server_fd skip $name\n        return\n    }\n\n    set tags [concat $::tags $tags]\n    if {![tags_acceptable $tags err]} {\n        incr ::num_aborted\n        send_data_packet $::test_server_fd ignore \"$name: $err\"\n        return\n    }\n\n    # Check if config restoration is requested\n    set restore_config 0\n    if {[lsearch $tags \"config:restore\"] >= 0} {\n        set restore_config 1\n    }\n\n    incr ::num_tests\n    set details {}\n    lappend details \"$name in $::curfile\"\n\n    # set a cur_test global to be logged into new servers that are spawn\n    # and log the test name in all existing servers\n    set prev_test $::cur_test\n    set ::cur_test \"$name in $::curfile\"\n    if {$::external} {\n        catch {\n            set r [redis [srv 0 host] [srv 0 port] 0 $::tls]\n            catch {\n                $r debug log \"### Starting test $::cur_test\"\n            }\n            $r close\n        }\n    } else {\n        set servers {}\n        foreach srv $::servers {\n            set stdout [dict get $srv stdout]\n            set fd [open $stdout \"a+\"]\n            puts $fd \"### Starting test $::cur_test\"\n            close $fd\n            lappend servers $stdout\n        }\n        if {$::verbose > 1} {\n            puts \"### Starting test $::cur_test - with servers: $servers\"\n        }\n    }\n\n    send_data_packet $::test_server_fd testing $name\n\n    # Save server configuration if restoration is requested\n    set saved_configs {}\n    if {$restore_config} {\n        set saved_configs 
[save_server_configs]\n    }\n\n    set failed false\n    set test_start_time [clock milliseconds]\n    if {[catch {set retval [uplevel 1 $code]} error]} {\n        set assertion [string match \"assertion:*\" $error]\n        if {$assertion || $::durable} {\n            # durable prevents the whole tcl test from exiting on an exception.\n            # an assertion is handled gracefully anyway.\n            set msg [string range $error 10 end]\n            lappend details $msg\n            if {!$assertion} {\n                lappend details $::errorInfo\n            }\n            lappend ::tests_failed $details\n\n            incr ::num_failed\n            set failed true\n            send_data_packet $::test_server_fd err [join $details \"\\n\"]\n\n            if {$::stop_on_failure} {\n                puts \"Test error (last server port:[srv port], log:[srv stdout]), press enter to teardown the test.\"\n                flush stdout\n                gets stdin\n            }\n        } else {\n            # Re-raise, let handler up the stack take care of this.\n            # But first, restore config if needed (since we won't reach the normal restoration code at the end)\n            # Save errorInfo before restore_server_configs, whose internal\n            # catch blocks would overwrite the global $::errorInfo.\n            set saved_errorInfo $::errorInfo\n            if {$restore_config && [llength $saved_configs] > 0} {\n                catch {restore_server_configs $saved_configs}\n            }\n            error $error $saved_errorInfo\n        }\n    } else {\n        if {$okpattern eq \"undefined\" || $okpattern eq $retval || [string match $okpattern $retval]} {\n            incr ::num_passed\n            set elapsed [expr {[clock milliseconds]-$test_start_time}]\n            send_data_packet $::test_server_fd ok $name $elapsed\n        } else {\n            set msg \"Expected '$okpattern' to equal or match '$retval'\"\n            lappend details $msg\n 
           lappend ::tests_failed $details\n\n            incr ::num_failed\n            set failed true\n            send_data_packet $::test_server_fd err [join $details \"\\n\"]\n        }\n    }\n\n    if {$::dump_logs && $failed} {\n        foreach srv $::servers {\n            dump_server_log $srv\n        }\n    }\n\n    if {$::traceleaks} {\n        set output [exec leaks redis-server]\n        if {![string match {*0 leaks*} $output]} {\n            send_data_packet $::test_server_fd err \"Detected a memory leak in test '$name': $output\"\n        }\n    }\n\n    # Restore server configuration if it was saved\n    if {$restore_config && [llength $saved_configs] > 0} {\n        restore_server_configs $saved_configs\n    }\n\n    set ::cur_test $prev_test\n}\n"
  },
  {
    "path": "tests/support/tmpfile.tcl",
    "content": "set ::tmpcounter 0\nset ::tmproot \"./tests/tmp\"\nfile mkdir $::tmproot\n\n# returns a dirname unique to this process to write to\nproc tmpdir {basename} {\n    set dir [file join $::tmproot $basename.[pid].[incr ::tmpcounter]]\n    file mkdir $dir\n    set _ $dir\n}\n\n# return a filename unique to this process to write to\nproc tmpfile {basename} {\n    file join $::tmproot $basename.[pid].[incr ::tmpcounter]\n}\n"
  },
  {
    "path": "tests/support/util.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nproc randstring {min max {type binary}} {\n    set len [expr {$min+int(rand()*($max-$min+1))}]\n    set output {}\n    if {$type eq {binary}} {\n        set minval 0\n        set maxval 255\n    } elseif {$type eq {alpha} || $type eq {simplealpha}} {\n        set minval 48\n        set maxval 122\n    } elseif {$type eq {compr}} {\n        set minval 48\n        set maxval 52\n    }\n    while {$len} {\n        set num [expr {$minval+int(rand()*($maxval-$minval+1))}]\n        set rr [format \"%c\" $num]\n        if {$type eq {simplealpha} && ![string is alnum $rr]} {continue}\n        if {$type eq {alpha} && $num eq 92} {continue} ;# avoid putting '\\' char in the string, it can mess up TCL processing\n        append output $rr\n        incr len -1\n    }\n    return $output\n}\n\n# Useful for some tests\nproc zlistAlikeSort {a b} {\n    if {[lindex $a 0] > [lindex $b 0]} {return 1}\n    if {[lindex $a 0] < [lindex $b 0]} {return -1}\n    string compare [lindex $a 1] [lindex $b 1]\n}\n\n# Return all log lines starting with the first line that contains a warning.\n# Generally, this will be an assertion error with a stack trace.\nproc crashlog_from_file {filename} {\n    set lines [split [exec cat $filename] \"\\n\"]\n    set matched 0\n    set logall 0\n    set result {}\n    foreach line $lines {\n        if {[string match {*REDIS BUG REPORT START*} $line]} {\n            set logall 1\n        }\n        if {[regexp {^\\[\\d+\\]\\s+\\d+\\s+\\w+\\s+\\d{2}:\\d{2}:\\d{2} \\#} $line]} {\n            
set matched 1\n        }\n        if {$logall || $matched} {\n            lappend result $line\n        }\n    }\n    join $result \"\\n\"\n}\n\n# Return sanitizer log lines\nproc sanitizer_errors_from_file {filename} {\n    set log [exec cat $filename]\n    set lines [split [exec cat $filename] \"\\n\"]\n\n    foreach line $lines {\n        # Ignore huge allocation warnings for both ASan and MSan\n        if {[string match {*WARNING: AddressSanitizer failed to allocate*} $line]} {\n            continue\n        }\n\n        if {[string match {*WARNING: MemorySanitizer failed to allocate*} $line]} {\n            continue\n        }\n\n        # GCC UBSAN output does not contain 'Sanitizer' but 'runtime error'.\n        if {[string match {*runtime error*} $line] ||\n            [string match {*Sanitizer*} $line]} {\n            return $log\n        }\n    }\n\n    return \"\"\n}\n\nproc getInfoProperty {infostr property} {\n    if {[regexp -lineanchor \"^$property:(.*?)\\r\\n\" $infostr _ value]} {\n        return $value\n    }\n}\n\n# Return value for INFO property\nproc status {r property} {\n    set _ [getInfoProperty [{*}$r info] $property]\n}\n\nproc waitForBgsave r {\n    while 1 {\n        if {[status $r rdb_bgsave_in_progress] eq 1} {\n            if {$::verbose} {\n                puts -nonewline \"\\nWaiting for background save to finish... \"\n                flush stdout\n            }\n            after 50\n        } else {\n            break\n        }\n    }\n}\n\nproc waitForBgrewriteaof r {\n    while 1 {\n        if {[status $r aof_rewrite_in_progress] eq 1} {\n            if {$::verbose} {\n                puts -nonewline \"\\nWaiting for background AOF rewrite to finish... 
\"\n                flush stdout\n            }\n            after 50\n        } else {\n            break\n        }\n    }\n}\n\nproc wait_for_sync r {\n    set maxtries 50\n    # tsan adds significant overhead to the execution time, so we increase the\n    # wait time here JIC\n    if {$::tsan} {\n        set maxtries 100\n    }\n\n    wait_for_condition $maxtries 100 {\n        [status $r master_link_status] eq \"up\"\n    } else {\n        fail \"replica didn't sync in time\"\n    }\n}\n\nproc wait_replica_online {r {replica_id 0} {maxtries 50} {delay 100}} {\n    # tsan adds significant overhead to the execution time, so we increase the\n    # wait time here JIC\n    if {$::tsan} {\n        set maxtries [expr {$maxtries * 2}]\n    }\n\n    wait_for_condition $maxtries $delay {\n        [string match \"*slave$replica_id:*,state=online*\" [$r info replication]]\n    } else {\n        fail \"replica $replica_id did not become online in time\"\n    }\n}\n\nproc wait_for_ofs_sync {r1 r2} {\n    set maxtries 50\n    # tsan adds significant overhead to the execution time, so we increase the\n    # wait time here JIC\n    if {$::tsan} {\n        set maxtries 100\n    }\n    wait_for_condition $maxtries 100 {\n        [status $r1 master_repl_offset] eq [status $r2 master_repl_offset]\n    } else {\n        fail \"replica offset didn't match in time\"\n    }\n}\n\nproc wait_done_loading r {\n    wait_for_condition 50 100 {\n        [catch {$r ping} e] == 0\n    } else {\n        fail \"Loading DB is taking too much time.\"\n    }\n}\n\nproc wait_lazyfree_done r {\n    wait_for_condition 50 100 {\n        [status $r lazyfree_pending_objects] == 0\n    } else {\n        fail \"lazyfree isn't done\"\n    }\n}\n\n# count current log lines in server's stdout\nproc count_log_lines {srv_idx} {\n    set _ [string trim [exec wc -l < [srv $srv_idx stdout]]]\n}\n\n# returns the number of times a line with that pattern appears in a file\nproc count_message_lines {file pattern} {\n 
   set res 0\n    # exec fails when grep exits with status other than 0 (when the pattern wasn't found)\n    catch {\n        set res [string trim [exec grep $pattern $file 2> /dev/null | wc -l]]\n    }\n    return $res\n}\n\n# returns the number of times a line with that pattern appears in the log\nproc count_log_message {srv_idx pattern} {\n    set stdout [srv $srv_idx stdout]\n    return [count_message_lines $stdout $pattern]\n}\n\n# verify pattern exists in server's stdout after a certain line number\nproc verify_log_message {srv_idx pattern from_line} {\n    incr from_line\n    set result [exec tail -n +$from_line < [srv $srv_idx stdout]]\n    if {![string match $pattern $result]} {\n        error \"assertion:expected message not found in log file: $pattern\"\n    }\n}\n\n# wait for pattern to be found in server's stdout after certain line number\n# return value is a list containing the line that matched the pattern and the line number\nproc wait_for_log_messages {srv_idx patterns from_line maxtries delay} {\n    set retry $maxtries\n    set next_line [expr $from_line + 1] ;# searching from the line after\n    set stdout [srv $srv_idx stdout]\n    while {$retry} {\n        # re-read the last line (unless it's before our first); last time we read it, it might have been incomplete\n        set next_line [expr $next_line - 1 > $from_line + 1 ? 
$next_line - 1 : $from_line + 1]\n        set result [exec tail -n +$next_line < $stdout]\n        set result [split $result \"\\n\"]\n        foreach line $result {\n            foreach pattern $patterns {\n                if {[string match $pattern $line]} {\n                    return [list $line $next_line]\n                }\n            }\n            incr next_line\n        }\n        incr retry -1\n        after $delay\n    }\n    if {$retry == 0} {\n        if {$::verbose} {\n            puts \"content of $stdout from line: $from_line:\"\n            puts [exec tail -n +$from_line < $stdout]\n        }\n        fail \"log message of '$patterns' not found in $stdout after line: $from_line till line: [expr $next_line -1]\"\n    }\n}\n\n# write line to server log file\nproc write_log_line {srv_idx msg} {\n    set logfile [srv $srv_idx stdout]\n    set fd [open $logfile \"a+\"]\n    puts $fd \"### $msg\"\n    close $fd\n}\n\n# Random integer between 0 (included) and max (excluded).\nproc randomInt {max} {\n    expr {int(rand()*$max)}\n}\n\n# Random integer between min (included) and max (excluded).\nproc randomRange {min max} {\n    expr {int(rand()*[expr $max - $min]) + $min}\n}\n\n# Random signed integer between -max and max (both extremes excluded).\nproc randomSignedInt {max} {\n    set i [randomInt $max]\n    if {rand() > 0.5} {\n        set i -$i\n    }\n    return $i\n}\n\nproc randpath args {\n    set path [expr {int(rand()*[llength $args])}]\n    uplevel 1 [lindex $args $path]\n}\n\nproc randomValue {} {\n    randpath {\n        # Small enough to likely collide\n        randomSignedInt 1000\n    } {\n        # 32 bit compressible signed/unsigned\n        randpath {randomSignedInt 2000000000} {randomSignedInt 4000000000}\n    } {\n        # 64 bit\n        randpath {randomSignedInt 1000000000000}\n    } {\n        # Random string\n        randpath {randstring 0 256 alpha} \\\n                {randstring 0 256 compr} \\\n                {randstring 0 256 binary}\n    
}\n}\n\nproc randomKey {} {\n    randpath {\n        # Small enough to likely collide\n        randomInt 1000\n    } {\n        # 32 bit compressible signed/unsigned\n        randpath {randomInt 2000000000} {randomInt 4000000000}\n    } {\n        # 64 bit\n        randpath {randomInt 1000000000000}\n    } {\n        # Random string\n        randpath {randstring 1 256 alpha} \\\n                {randstring 1 256 compr}\n    }\n}\n\nproc findKeyWithType {r type} {\n    for {set j 0} {$j < 20} {incr j} {\n        set k [{*}$r randomkey]\n        if {$k eq {}} {\n            return {}\n        }\n        if {[{*}$r type $k] eq $type} {\n            return $k\n        }\n    }\n    return {}\n}\n\nproc createComplexDataset {r ops {opt {}}} {\n    set useexpire [expr {[lsearch -exact $opt useexpire] != -1}]\n    set usehexpire [expr {[lsearch -exact $opt usehexpire] != -1}]\n\n    if {[lsearch -exact $opt usetag] != -1} {\n        set tag \"{t}\"\n    } else {\n        set tag \"\"\n    }\n    for {set j 0} {$j < $ops} {incr j} {\n        set k [randomKey]$tag\n        set k2 [randomKey]$tag\n        set f [randomValue]\n        set v [randomValue]\n\n        if {$useexpire} {\n            if {rand() < 0.1} {\n                {*}$r expire [randomKey] [randomInt 2]\n            }\n        }\n\n        randpath {\n            set d [expr {rand()}]\n        } {\n            set d [expr {rand()}]\n        } {\n            set d [expr {rand()}]\n        } {\n            set d [expr {rand()}]\n        } {\n            set d [expr {rand()}]\n        } {\n            randpath {set d +inf} {set d -inf}\n        }\n        set t [{*}$r type $k]\n\n        if {$t eq {none}} {\n            randpath {\n                {*}$r set $k $v\n            } {\n                {*}$r lpush $k $v\n            } {\n                {*}$r sadd $k $v\n            } {\n                {*}$r zadd $k $d $v\n            } {\n                {*}$r hset $k $f $v\n            } {\n                {*}$r 
del $k\n            }\n            set t [{*}$r type $k]\n        }\n\n        switch $t {\n            {string} {\n                # Nothing to do\n            }\n            {list} {\n                randpath {{*}$r lpush $k $v} \\\n                        {{*}$r rpush $k $v} \\\n                        {{*}$r lrem $k 0 $v} \\\n                        {{*}$r rpop $k} \\\n                        {{*}$r lpop $k}\n            }\n            {set} {\n                randpath {{*}$r sadd $k $v} \\\n                        {{*}$r srem $k $v} \\\n                        {\n                            set otherset [findKeyWithType {*}$r set]\n                            if {$otherset ne {}} {\n                                randpath {\n                                    {*}$r sunionstore $k2 $k $otherset\n                                } {\n                                    {*}$r sinterstore $k2 $k $otherset\n                                } {\n                                    {*}$r sdiffstore $k2 $k $otherset\n                                }\n                            }\n                        }\n            }\n            {zset} {\n                randpath {{*}$r zadd $k $d $v} \\\n                        {{*}$r zrem $k $v} \\\n                        {\n                            set otherzset [findKeyWithType {*}$r zset]\n                            if {$otherzset ne {}} {\n                                randpath {\n                                    {*}$r zunionstore $k2 2 $k $otherzset\n                                } {\n                                    {*}$r zinterstore $k2 2 $k $otherzset\n                                }\n                            }\n                        }\n            }\n            {hash} {\n                randpath {{*}$r hset $k $f $v} \\\n                        {{*}$r hdel $k $f}\n\n                if { [{*}$r hexists $k $f] && $usehexpire && rand() < 0.5} {\n                    {*}$r hexpire $k 1000 FIELDS 1 
$f\n                }\n            }\n        }\n    }\n}\n\nproc formatCommand {args} {\n    set cmd \"*[llength $args]\\r\\n\"\n    foreach a $args {\n        append cmd \"$[string length $a]\\r\\n$a\\r\\n\"\n    }\n    set _ $cmd\n}\n\nproc csvdump r {\n    set o {}\n    if {$::singledb} {\n        set maxdb 1\n    } else {\n        set maxdb 16\n    }\n    for {set db 0} {$db < $maxdb} {incr db} {\n        if {!$::singledb} {\n            {*}$r select $db\n        }\n        foreach k [lsort [{*}$r keys *]] {\n            set type [{*}$r type $k]\n            append o [csvstring $db] , [csvstring $k] , [csvstring $type] ,\n            switch $type {\n                string {\n                    append o [csvstring [{*}$r get $k]] \"\\n\"\n                }\n                list {\n                    foreach e [{*}$r lrange $k 0 -1] {\n                        append o [csvstring $e] ,\n                    }\n                    append o \"\\n\"\n                }\n                set {\n                    foreach e [lsort [{*}$r smembers $k]] {\n                        append o [csvstring $e] ,\n                    }\n                    append o \"\\n\"\n                }\n                zset {\n                    foreach e [{*}$r zrange $k 0 -1 withscores] {\n                        append o [csvstring $e] ,\n                    }\n                    append o \"\\n\"\n                }\n                hash {\n                    set fields [{*}$r hgetall $k]\n                    set newfields {}\n                    foreach {f v} $fields {\n                        set expirylist [{*}$r hexpiretime $k FIELDS 1 $f]\n                        if {$expirylist eq {-1}} {\n                            lappend newfields [list $f $v]\n                        } else {\n                            set e [lindex $expirylist 0]\n                            lappend newfields [list $f $e $v] ;# TODO: extract the actual ttl value from the list in $e\n                      
  }\n                    }\n                    set fields [lsort -index 0 $newfields]\n                    foreach kv $fields {\n                        append o [csvstring [lindex $kv 0]] ,\n                        append o [csvstring [lindex $kv 1]] ,\n                    }\n                    append o \"\\n\"\n                }\n            }\n        }\n    }\n    if {!$::singledb} {\n        {*}$r select 9\n    }\n    return $o\n}\n\nproc csvstring s {\n    return \"\\\"$s\\\"\"\n}\n\nproc roundFloat f {\n    format \"%.10g\" $f\n}\n\nset ::last_port_attempted 0\nproc find_available_port {start count} {\n    set port [expr $::last_port_attempted + 1]\n    for {set attempts 0} {$attempts < $count} {incr attempts} {\n        if {$port < $start || $port >= $start+$count} {\n            set port $start\n        }\n        set fd1 -1\n        proc dummy_accept {chan addr port} {}\n        if {[catch {set fd1 [socket -server dummy_accept -myaddr 127.0.0.1 $port]}] ||\n            [catch {set fd2 [socket -server dummy_accept -myaddr 127.0.0.1 [expr $port+10000]]}]} {\n            if {$fd1 != -1} {\n                close $fd1\n            }\n        } else {\n            close $fd1\n            close $fd2\n            set ::last_port_attempted $port\n            return $port\n        }\n        incr port\n    }\n    error \"Can't find a non-busy port in the $start-[expr {$start+$count-1}] range.\"\n}\n\n# Test if TERM looks like it supports colors\nproc color_term {} {\n    expr {[info exists ::env(TERM)] && [string match *xterm* $::env(TERM)]}\n}\n\nproc colorstr {color str} {\n    if {[color_term]} {\n        set b 0\n        if {[string range $color 0 4] eq {bold-}} {\n            set b 1\n            set color [string range $color 5 end]\n        }\n        switch $color {\n            red {set colorcode {31}}\n            green {set colorcode {32}}\n            yellow {set colorcode {33}}\n            blue {set colorcode {34}}\n            magenta {set colorcode 
{35}}\n            cyan {set colorcode {36}}\n            white {set colorcode {37}}\n            default {set colorcode {37}}\n        }\n        if {$colorcode ne {}} {\n            return \"\\033\\[$b;${colorcode};49m$str\\033\\[0m\"\n        }\n    } else {\n        return $str\n    }\n}\n\nproc find_valgrind_errors {stderr on_termination} {\n    set fd [open $stderr]\n    set buf [read $fd]\n    close $fd\n\n    # Look for stack trace (\" at 0x\") and other errors (Invalid, Mismatched, etc).\n    # Look for \"Warnings\", but not the \"set address range perms\". These don't indicate any real concern.\n    # corrupt-dump unit, not sure why but it seems they don't indicate any real concern.\n    if {[regexp -- { at 0x} $buf] ||\n        [regexp -- {^(?=.*Warning)(?:(?!set address range perms).)*$} $buf] ||\n        [regexp -- {Invalid} $buf] ||\n        [regexp -- {Mismatched} $buf] ||\n        [regexp -- {uninitialized} $buf] ||\n        [regexp -- {has a fishy} $buf] ||\n        [regexp -- {overlap} $buf]} {\n        return $buf\n    }\n\n    # If the process didn't terminate yet, we can't look for the summary report\n    if {!$on_termination} {\n        return \"\"\n    }\n\n    # Look for the absence of a leak free summary (happens when redis isn't terminated properly).\n    if {(![regexp -- {definitely lost: 0 bytes} $buf] &&\n         ![regexp -- {no leaks are possible} $buf])} {\n        return $buf\n    }\n\n    return \"\"\n}\n\n# Execute a background process writing random data for the specified number\n# of seconds to the specified Redis instance. 
If key is omitted, a random key\n# is used for every SET command.\n# ignore_error_reply (default 0): set non-zero in cluster slot-migration tests to tolerate\n# MOVED/ASK replies while draining pipelined writes in the load helper.\nproc start_write_load {host port seconds {key \"\"} {size 0} {sleep 0} {ignore_error_reply 0}} {\n    set tclsh [info nameofexecutable]\n    exec $tclsh tests/helpers/gen_write_load.tcl $host $port $seconds $::tls $key $size $sleep $ignore_error_reply &\n}\n\n# Stop a process generating write load executed with start_write_load.\nproc stop_write_load {handle} {\n    catch {exec /bin/kill -9 $handle}\n}\n\nproc wait_load_handlers_disconnected {{level 0}} {\n    wait_for_condition 50 100 {\n        ![string match {*name=LOAD_HANDLER*} [r $level client list]]\n    } else {\n        fail \"load_handler(s) still connected after too long time.\"\n    }\n}\n\nproc K { x y } { set x } \n\n# Shuffle a list with Fisher-Yates algorithm.\nproc lshuffle {list} {\n    set n [llength $list]\n    while {$n>1} {\n        set j [expr {int(rand()*$n)}]\n        incr n -1\n        if {$n==$j} continue\n        set v [lindex $list $j]\n        lset list $j [lindex $list $n]\n        lset list $n $v\n    }\n    return $list\n}\n\n# Execute a background process writing complex data for the specified number\n# of ops to the specified Redis instance.\nproc start_bg_complex_data {host port db ops} {\n    set tclsh [info nameofexecutable]\n    exec $tclsh tests/helpers/bg_complex_data.tcl $host $port $db $ops $::tls &\n}\n\n# Stop a process generating write load executed with start_bg_complex_data.\nproc stop_bg_complex_data {handle} {\n    catch {exec /bin/kill -9 $handle}\n}\n\n# Write num keys with the given key prefix and value size (in bytes). 
If idx is\n# given, it's the index (AKA level) used with the srv procedure and it specifies\n# to which Redis instance to write the keys.\nproc populate {num {prefix key:} {size 3} {idx 0} {prints false} {expires 0}} {\n    r $idx deferred 1\n    if {$num > 16} {set pipeline 16} else {set pipeline $num}\n    set val [string repeat A $size]\n    for {set j 0} {$j < $pipeline} {incr j} {\n        if {$expires > 0} {\n            r $idx set $prefix$j $val ex $expires\n        } else {\n            r $idx set $prefix$j $val\n        }\n        if {$prints} {puts $j}\n    }\n    for {} {$j < $num} {incr j} {\n        if {$expires > 0} {\n            r $idx set $prefix$j $val ex $expires\n        } else {\n            r $idx set $prefix$j $val\n        }\n        r $idx read\n        if {$prints} {puts $j}\n    }\n    for {set j 0} {$j < $pipeline} {incr j} {\n        r $idx read\n        if {$prints} {puts $j}\n    }\n    r $idx deferred 0\n}\n\nproc get_child_pid {idx} {\n    set pid [srv $idx pid]\n    if {[file exists \"/usr/bin/pgrep\"]} {\n        set fd [open \"|pgrep -P $pid\" \"r\"]\n        set child_pid [string trim [lindex [split [read $fd] \\n] 0]]\n    } else {\n        set fd [open \"|ps --ppid $pid -o pid\" \"r\"]\n        set child_pid [string trim [lindex [split [read $fd] \\n] 1]]\n    }\n    close $fd\n\n    return $child_pid\n}\n\nproc process_is_alive pid {\n    if {[catch {exec ps -p $pid -f} err]} {\n        return 0\n    } else {\n        if {[string match \"*<defunct>*\" $err]} { return 0 }\n        return 1\n    }\n}\n\nproc get_system_name {} {\n    return [string tolower [exec uname -s]]\n}\n\nproc get_proc_state {pid} {\n    if {[get_system_name] eq {sunos}} {\n        return [exec ps -o s= -p $pid]\n    } else {\n        return [exec ps -o state= -p $pid]\n    }\n}\n\nproc get_proc_job {pid} {\n    if {[get_system_name] eq {sunos}} {\n        return [exec ps -l -p $pid]\n    } else {\n        return [exec ps j $pid]\n    }\n}\n\nproc 
pause_process {pid} {\n    exec kill -SIGSTOP $pid\n    wait_for_condition 50 100 {\n        [string match \"T*\" [get_proc_state $pid]]\n    } else {\n        puts [get_proc_job $pid]\n        fail \"process didn't stop\"\n    }\n}\n\nproc resume_process {pid} {\n    wait_for_condition 50 1000 {\n        [string match \"T*\" [get_proc_state $pid]]\n    } else {\n        puts [get_proc_job $pid]\n        fail \"process was not stopped\"\n    }\n\n    set max_attempts 10\n    set attempt 0\n    while {($attempt < $max_attempts) && [string match \"T*\" [exec ps -o state= -p $pid]]} {\n        exec kill -SIGCONT $pid\n\n        incr attempt\n        after 100\n    }\n\n    wait_for_condition 50 1000 {\n        [string match \"R*\" [exec ps -o state= -p $pid]] ||\n        [string match \"S*\" [exec ps -o state= -p $pid]]\n    } else {\n        puts [exec ps j $pid]\n        fail \"process was not resumed\"\n    }\n}\n\nproc cmdrstat {cmd r} {\n    if {[regexp \"\\r\\ncmdstat_$cmd:(.*?)\\r\\n\" [$r info commandstats] _ value]} {\n        set _ $value\n    }\n}\n\nproc errorrstat {cmd r} {\n    if {[regexp \"\\r\\nerrorstat_$cmd:(.*?)\\r\\n\" [$r info errorstats] _ value]} {\n        set _ $value\n    }\n}\n\nproc latencyrstat_percentiles {cmd r} {\n    if {[regexp \"\\r\\nlatency_percentiles_usec_$cmd:(.*?)\\r\\n\" [$r info latencystats] _ value]} {\n        set _ $value\n    }\n}\n\nproc get_io_thread_clients {id {client r}} {\n    set pattern \"io_thread_$id:clients=(\\[0-9\\]+)\"\n    set info [$client info threads]\n    if {[regexp $pattern $info _ value]} {\n        return $value\n    } else {\n        return -1\n    }\n}\n\nproc generate_fuzzy_traffic_on_key {key type duration} {\n    # Commands per type, blocking commands removed\n    # TODO: extract these from COMMAND DOCS, and improve to include other types\n    set string_commands {APPEND BITCOUNT BITFIELD BITOP BITPOS DECR DECRBY GET GETBIT GETRANGE GETSET INCR INCRBY INCRBYFLOAT MGET MSET MSETNX PSETEX SET 
SETBIT SETEX SETNX SETRANGE LCS STRLEN}\n    set hash_commands {HDEL HEXISTS HGET HGETALL HINCRBY HINCRBYFLOAT HKEYS HLEN HMGET HMSET HSCAN HSET HSETNX HSTRLEN HVALS HRANDFIELD}\n    set zset_commands {ZADD ZCARD ZCOUNT ZINCRBY ZINTERSTORE ZLEXCOUNT ZPOPMAX ZPOPMIN ZRANGE ZRANGEBYLEX ZRANGEBYSCORE ZRANK ZREM ZREMRANGEBYLEX ZREMRANGEBYRANK ZREMRANGEBYSCORE ZREVRANGE ZREVRANGEBYLEX ZREVRANGEBYSCORE ZREVRANK ZSCAN ZSCORE ZUNIONSTORE ZRANDMEMBER}\n    set list_commands {LINDEX LINSERT LLEN LPOP LPOS LPUSH LPUSHX LRANGE LREM LSET LTRIM RPOP RPOPLPUSH RPUSH RPUSHX}\n    set set_commands {SADD SCARD SDIFF SDIFFSTORE SINTER SINTERSTORE SISMEMBER SMEMBERS SMOVE SPOP SRANDMEMBER SREM SSCAN SUNION SUNIONSTORE}\n    set stream_commands {XACK XADD XCLAIM XDEL XGROUP XINFO XLEN XPENDING XRANGE XREAD XREADGROUP XREVRANGE XTRIM XDELEX XACKDEL XNACK}\n    set vset_commands {VADD VREM}\n    set gcra_commands {GCRA}\n    set commands [dict create string $string_commands hash $hash_commands zset $zset_commands list $list_commands set $set_commands stream $stream_commands vectorset $vset_commands gcra $gcra_commands]\n\n    set cmds [dict get $commands $type]\n    set start_time [clock seconds]\n    set sent {}\n    set succeeded 0\n    while {([clock seconds]-$start_time) < $duration} {\n        # find a random command for our key type\n        set cmd_idx [expr {int(rand()*[llength $cmds])}]\n        set cmd [lindex $cmds $cmd_idx]\n        # get the command details from redis\n        if { [ catch {\n            set cmd_info [lindex [r command info $cmd] 0]\n        } err ] } {\n            # if we failed, it means redis crashed after the previous command\n            return $sent\n        }\n        # try to build a valid command argument\n        set arity [lindex $cmd_info 1]\n        set arity [expr $arity < 0 ? 
- $arity: $arity]\n        set firstkey [lindex $cmd_info 3]\n        set lastkey [lindex $cmd_info 4]\n        set i 1\n        if {$cmd == \"XINFO\"} {\n            lappend cmd \"STREAM\"\n            lappend cmd $key\n            lappend cmd \"FULL\"\n            incr i 3\n        }\n        if {$cmd == \"XREAD\"} {\n            lappend cmd \"STREAMS\"\n            lappend cmd $key\n            randpath {\n                lappend cmd \\$\n            } {\n                lappend cmd [randomValue]\n            }\n            incr i 3\n        }\n        if {$cmd == \"XADD\"} {\n            lappend cmd $key\n            randpath {\n                lappend cmd \"*\"\n            } {\n                lappend cmd [randomValue]\n            }\n            lappend cmd [randomValue]\n            lappend cmd [randomValue]\n            incr i 4\n        }\n        if {$cmd == \"VADD\"} {\n            lappend cmd $key\n            lappend cmd VALUES 3 1 1 1\n            lappend cmd [randomValue]\n            incr i 7\n        }\n        if {$cmd == \"VREM\"} {\n            lappend cmd $key\n            lappend cmd [randomValue]\n            incr i 2\n        }\n\n        for {} {$i < $arity} {incr i} {\n            if {$i == $firstkey || $i == $lastkey} {\n                lappend cmd $key\n            } else {\n                lappend cmd [randomValue]\n            }\n        }\n        # execute the command, we expect commands to fail on syntax errors\n        lappend sent $cmd\n        if { ! 
[ catch {\n            r {*}$cmd\n        } err ] } {\n            incr succeeded\n        } else {\n            set err [format \"%s\" $err] ;# convert to string for pattern matching\n            if {[string match \"*SIGTERM*\" $err]} {\n                puts \"commands caused test to hang:\"\n                foreach cmd $sent {\n                    foreach arg $cmd {\n                        puts -nonewline \"[string2printable $arg] \"\n                    }\n                    puts \"\"\n                }\n                # Re-raise, let handler up the stack take care of this.\n                error $err $::errorInfo\n            }\n        }\n    }\n\n    # print stats so that we know if we managed to generate commands that actually made sense\n    #if {$::verbose} {\n    #    set count [llength $sent]\n    #    puts \"Fuzzy traffic sent: $count, succeeded: $succeeded\"\n    #}\n\n    # return the list of commands we sent\n    return $sent\n}\n\nproc string2printable s {\n    set res {}\n    set has_special_chars false\n    foreach i [split $s {}] {\n        scan $i %c int\n        # non printable characters, including space and excluding: \" \\ $ { }\n        if {$int < 32 || $int > 122 || $int == 34 || $int == 36 || $int == 92} {\n            set has_special_chars true\n        }\n        # TCL8.5 has issues mixing \\x notation and normal chars in the same\n        # source code string, so we'll convert the entire string.\n        append res \\\\x[format %02X $int]\n    }\n    if {!$has_special_chars} {\n        return $s\n    }\n    set res \"\\\"$res\\\"\"\n    return $res\n}\n\n# Calculation value of Chi-Square Distribution. 
With this value\n# we can verify the confidence of the random distribution sample.\n# Based on the following wiki:\n# https://en.wikipedia.org/wiki/Chi-square_distribution\n#\n# param res    Random sample list\n# return       Value of Chi-Square Distribution\n#\n# x2_value: return of chi_square_value function\n# df: Degrees of freedom, Number of independent values minus 1\n#\n# By using x2_value and df to back check the cardinality table,\n# we can know the confidence of the random sample.\nproc chi_square_value {res} {\n    unset -nocomplain mydict\n    foreach key $res {\n        dict incr mydict $key 1\n    }\n\n    set x2_value 0\n    set p [expr [llength $res] / [dict size $mydict]]\n    foreach key [dict keys $mydict] {\n        set value [dict get $mydict $key]\n\n        # Aggregate the chi-square value of each element\n        set v [expr {pow($value - $p, 2) / $p}]\n        set x2_value [expr {$x2_value + $v}]\n    }\n\n    return $x2_value\n}\n\n# Subscribe to Pub/Sub channels\nproc consume_subscribe_messages {client type channels} {\n    set numsub -1\n    set counts {}\n\n    for {set i [llength $channels]} {$i > 0} {incr i -1} {\n        set msg [$client read]\n        assert_equal $type [lindex $msg 0]\n\n        # when receiving subscribe messages the channel names\n        # are ordered. 
when receiving unsubscribe messages\n        # they are unordered\n        set idx [lsearch -exact $channels [lindex $msg 1]]\n        if {[string match \"*unsubscribe\" $type]} {\n            assert {$idx >= 0}\n        } else {\n            assert {$idx == 0}\n        }\n        set channels [lreplace $channels $idx $idx]\n\n        # aggregate the subscription count to return to the caller\n        lappend counts [lindex $msg 2]\n    }\n\n    # we should have received messages for channels\n    assert {[llength $channels] == 0}\n    return $counts\n}\n\nproc subscribe {client channels} {\n    $client subscribe {*}$channels\n    consume_subscribe_messages $client subscribe $channels\n}\n\nproc ssubscribe {client channels} {\n    $client ssubscribe {*}$channels\n    consume_subscribe_messages $client ssubscribe $channels\n}\n\nproc unsubscribe {client {channels {}}} {\n    $client unsubscribe {*}$channels\n    consume_subscribe_messages $client unsubscribe $channels\n}\n\nproc sunsubscribe {client {channels {}}} {\n    $client sunsubscribe {*}$channels\n    consume_subscribe_messages $client sunsubscribe $channels\n}\n\nproc psubscribe {client channels} {\n    $client psubscribe {*}$channels\n    consume_subscribe_messages $client psubscribe $channels\n}\n\nproc punsubscribe {client {channels {}}} {\n    $client punsubscribe {*}$channels\n    consume_subscribe_messages $client punsubscribe $channels\n}\n\nproc debug_digest_value {key} {\n    if {[lsearch $::denytags \"needs:debug\"] >= 0 || $::ignoredigest} {\n        return \"dummy-digest-value\"\n    }\n    r debug digest-value $key\n}\n\nproc debug_digest {{level 0}} {\n    if {[lsearch $::denytags \"needs:debug\"] >= 0 || $::ignoredigest} {\n        return \"dummy-digest\"\n    }\n    r $level debug digest\n}\n\nproc wait_for_blocked_client {{idx 0}} {\n    wait_for_condition 50 100 {\n        [s $idx blocked_clients] ne 0\n    } else {\n        fail \"no blocked clients\"\n    }\n}\n\nproc 
wait_for_blocked_clients_count {count {maxtries 100} {delay 10} {idx 0}} {\n    wait_for_condition $maxtries $delay  {\n        [s $idx blocked_clients] == $count\n    } else {\n        fail \"Timeout waiting for blocked clients (expected $count, actual [s $idx blocked_clients])\"\n    }\n}\n\nproc wait_for_watched_clients_count {count {maxtries 100} {delay 10} {idx 0}} {\n    wait_for_condition $maxtries $delay  {\n        [s $idx watching_clients] == $count\n    } else {\n        fail \"Timeout waiting for watched clients\"\n    }\n}\n\nproc read_from_aof {fp} {\n    # Input fp is a blocking binary file descriptor of an opened AOF file.\n    if {[gets $fp count] == -1} return \"\"\n    set count [string range $count 1 end]\n\n    # Return a list of arguments for the command.\n    set res {}\n    for {set j 0} {$j < $count} {incr j} {\n        read $fp 1\n        set arg [::redis::redis_bulk_read $fp]\n        if {$j == 0} {set arg [string tolower $arg]}\n        lappend res $arg\n    }\n    return $res\n}\n\nproc assert_aof_content {aof_path patterns} {\n    set fp [open $aof_path r]\n    fconfigure $fp -translation binary\n    fconfigure $fp -blocking 1\n\n    for {set j 0} {$j < [llength $patterns]} {incr j} {\n        assert_match [lindex $patterns $j] [read_from_aof $fp]\n    }\n}\n\nproc config_set {param value {options {}}} {\n    set mayfail 0\n    foreach option $options {\n        switch $option {\n            \"mayfail\" {\n                set mayfail 1\n            }\n            default {\n                error \"Unknown option $option\"\n            }\n        }\n    }\n\n    if {[catch {r config set $param $value} err]} {\n        if {!$mayfail} {\n            error $err\n        } else {\n            if {$::verbose} {\n                puts \"Ignoring CONFIG SET $param $value failure: $err\"\n            }\n        }\n    }\n}\n\nproc config_get_set {param value {options {}}} {\n    set config [lindex [r config get $param] 1]\n    config_set $param 
$value $options\n    return $config\n}\n\nproc delete_lines_with_pattern {filename tmpfilename pattern} {\n    set fh_in [open $filename r]\n    set fh_out [open $tmpfilename w]\n    while {[gets $fh_in line] != -1} {\n        if {![regexp $pattern $line]} {\n            puts $fh_out $line\n        }\n    }\n    close $fh_in\n    close $fh_out\n    file rename -force $tmpfilename $filename\n}\n\nproc get_nonloopback_addr {} {\n    set addrlist [list {}]\n    catch { set addrlist [exec hostname -I] }\n    return [lindex $addrlist 0]\n}\n\nproc get_nonloopback_client {} {\n    return [redis [get_nonloopback_addr] [srv 0 \"port\"] 0 $::tls]\n}\n\n# The following functions and variables are used only when running large-memory\n# tests. We avoid defining them when not running large-memory tests because the\n# global variables take up lots of memory.\nproc init_large_mem_vars {} {\n    if {![info exists ::str500]} {\n        set ::str500 [string repeat x 500000000] ;# 500mb\n        set ::str500_len [string length $::str500]\n    }\n}\n\n# Utility function to write a big argument into the redis client connection\nproc write_big_bulk {size {prefix \"\"} {skip_read no}} {\n    init_large_mem_vars\n\n    assert {[string length $prefix] <= $size}\n    r write \"\\$$size\\r\\n\"\n    r write $prefix\n    incr size -[string length $prefix]\n    while {$size >= 500000000} {\n        r write $::str500\n        incr size -500000000\n    }\n    if {$size > 0} {\n        r write [string repeat x $size]\n    }\n    r write \"\\r\\n\"\n    if {!$skip_read} {\n        r flush\n        r read\n    }\n}\n\n# Utility to read big bulk response (work around Tcl limitations)\nproc read_big_bulk {code {compare no} {prefix \"\"}} {\n    init_large_mem_vars\n\n    r readraw 1\n    set resp_len [uplevel 1 $code] ;# get the first line of the RESP response\n    assert_equal [string range $resp_len 0 0] \"$\"\n    set resp_len [string range $resp_len 1 end]\n    set prefix_len [string length $prefix]\n  
  if {$compare} {\n        assert {$prefix_len <= $resp_len}\n        assert {$prefix_len <= $::str500_len}\n    }\n\n    set remaining $resp_len\n    while {$remaining > 0} {\n        set l $remaining\n        if {$l > $::str500_len} {set l $::str500_len} ; # can't read more than 2gb at a time, so read 500mb so we can easily verify read data\n        set read_data [r rawread $l]\n        set nbytes [string length $read_data]\n        if {$compare} {\n            set comp_len $nbytes\n            # Compare prefix part\n            if {$remaining == $resp_len} {\n                assert_equal $prefix [string range $read_data 0 [expr $prefix_len - 1]]\n                set read_data [string range $read_data $prefix_len $nbytes]\n                incr comp_len -$prefix_len\n            }\n            # Compare rest of data, evaluate and then assert to avoid huge print in case of failure\n            set data_equal [expr {$read_data == [string range $::str500 0 [expr $comp_len - 1]]}]\n            assert $data_equal\n        }\n        incr remaining -$nbytes\n    }\n    assert_equal [r rawread 2] \"\\r\\n\"\n    r readraw 0\n    return $resp_len\n}\n\nproc prepare_value {size} {\n    set _v \"c\"\n    for {set i 1} {$i < $size} {incr i} {\n        append _v 0\n    }\n    return $_v\n}\n\nproc memory_usage {key} {\n    set usage [r memory usage $key]\n    if {![string match {*jemalloc*} [s mem_allocator]]} {\n        # libc allocator can sometimes return a different size allocation for the same requested size\n        # this makes tests that rely on MEMORY USAGE unreliable, so instead we return a constant 1\n        set usage 1\n    }\n    return $usage\n}\n\n# Test if the server supports the specified command.\nproc server_has_command {cmd_wanted} {\n    set lowercase_commands {}\n    foreach cmd [r command list] {\n        lappend lowercase_commands [string tolower $cmd]\n    }\n    expr {[lsearch $lowercase_commands [string tolower $cmd_wanted]] != -1}\n}\n\n# forward 
compatibility, lmap missing in TCL 8.5\nproc lmap args {\n    set body [lindex $args end]\n    set args [lrange $args 0 end-1]\n    set n 0\n    set pairs [list]\n    foreach {varnames listval} $args {\n        set varlist [list]\n        foreach varname $varnames {\n            upvar 1 $varname var$n\n            lappend varlist var$n\n            incr n\n        }\n        lappend pairs $varlist $listval\n    }\n    set temp [list]\n    foreach {*}$pairs {\n        lappend temp [uplevel 1 $body]\n    }\n    set temp\n}\n\nproc format_command {args} {\n    set cmd \"*[llength $args]\\r\\n\"\n    foreach a $args {\n        append cmd \"$[string length $a]\\r\\n$a\\r\\n\"\n    }\n    set _ $cmd\n}\n\n# Returns whether or not the system supports stack traces\nproc system_backtrace_supported {} {\n    # Thread sanitizer reports backtrace_symbols_fd() as\n    # signal-unsafe since it allocates memory\n    if {$::tsan} {\n        return 0\n    }\n\n    set system_name [get_system_name]\n    if {$system_name eq {darwin}} {\n        return 1\n    } elseif {$system_name ne {linux}} {\n        return 0\n    }\n\n    # libmusl does not support backtrace. Also return 0 on\n    # static binaries (ldd exit code 1) where we can't detect libmusl\n    if {![catch {set ldd [exec ldd src/redis-server]}]} {\n        if {![string match {*libc.*musl*} $ldd]} {\n            return 1\n        }\n    }\n    return 0\n}\n\nproc generate_largevalue_test_array {} {\n    array set largevalue {}\n    set largevalue(listpack) \"hello\"\n    set largevalue(quicklist) [string repeat \"x\" 8192]\n    return [array get largevalue]\n}\n"
  },
  {
    "path": "tests/test_helper.tcl",
    "content": "# Redis test suite.\n#\n# Copyright (C) 2014-Present, Redis Ltd.\n# All Rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\npackage require Tcl 8.5-10\n\nif {$tcl_version < 9.0} { set tcl_precision 17 }\nsource tests/support/redis.tcl\nsource tests/support/aofmanifest.tcl\nsource tests/support/server.tcl\nsource tests/support/cluster_util.tcl\nsource tests/support/tmpfile.tcl\nsource tests/support/test.tcl\nsource tests/support/util.tcl\n\nset dir [pwd]\nset ::all_tests []\n\nset test_dirs {\n    unit\n    unit/type\n    unit/moduleapi\n    unit/cluster\n    integration\n}\n\nforeach test_dir $test_dirs {\n    set files [glob -nocomplain $dir/tests/$test_dir/*.tcl]\n\n    foreach file [lsort $files] {\n        lappend ::all_tests $test_dir/[file root [file tail $file]]\n    }\n}\n# Index to the next test to run in the ::all_tests list.\nset ::next_test 0\n\nset ::host 127.0.0.1\nset ::port 6379; # port for external server\nset ::baseport 21111; # initial port for spawned redis servers\nset ::portcount 8000; # we don't wanna use more than 10000 to avoid collision with cluster bus ports\nset ::traceleaks 0\nset ::valgrind 0\nset ::tsan 0\nset ::durable 0\nset ::tls 0\nset ::tls_module 0\nset ::stack_logging 0\nset ::verbose 0\nset ::quiet 0\nset ::denytags {}\nset ::skiptests {}\nset ::skipunits {}\nset ::no_latency 0\nset ::allowtags {}\nset ::only_tests {}\nset ::single_tests {}\nset ::run_solo_tests {}\nset ::skip_till \"\"\nset ::external 0; # If \"1\" this means, we are running against external instance\nset ::file \"\"; # If set, runs only the tests in this comma separated list\nset ::curfile 
\"\"; # Hold the filename of the current suite\nset ::accurate 0; # If true runs fuzz tests with more iterations\nset ::force_failure 0\nset ::timeout 1200; # 20 minutes without progresses will quit the test.\nset ::last_progress [clock seconds]\nset ::active_servers {} ; # Pids of active Redis instances.\nset ::dont_clean 0\nset ::dont_pre_clean 0\nset ::wait_server 0\nset ::stop_on_failure 0\nset ::dump_logs 0\nset ::loop 0\nset ::tlsdir \"tests/tls\"\nset ::singledb 0\nset ::cluster_mode 0\nset ::ignoreencoding 0\nset ::ignoredigest 0\nset ::large_memory 0\nset ::log_req_res 0\nset ::force_resp3 0\nset ::debug_defrag 0\n\n# Set to 1 when we are running in client mode. The Redis test uses a\n# server-client model to run tests simultaneously. The server instance\n# runs the specified number of client instances that will actually run tests.\n# The server is responsible of showing the result to the user, and exit with\n# the appropriate exit code depending on the test outcome.\nset ::client 0\nset ::numclients 16\n\n# This function is called by one of the test clients when it receives\n# a \"run\" command from the server, with a filename as data.\n# It will run the specified test source file and signal it to the\n# test server when finished.\nproc execute_test_file __testname {\n    set path \"tests/$__testname.tcl\"\n    set ::curfile $path\n    source $path\n    send_data_packet $::test_server_fd done \"$__testname\"\n}\n\n# This function is called by one of the test clients when it receives\n# a \"run_code\" command from the server, with a verbatim test source code\n# as argument, and an associated name.\n# It will run the specified code and signal it to the test server when\n# finished.\nproc execute_test_code {__testname filename code} {\n    set ::curfile $filename\n    eval $code\n    send_data_packet $::test_server_fd done \"$__testname\"\n}\n\n# Setup a list to hold a stack of server configs. 
When calls to start_server\n# are nested, use \"srv 0 pid\" to get the pid of the inner server. To access\n# outer servers, use \"srv -1 pid\" etcetera.\nset ::servers {}\nproc srv {args} {\n    set level 0\n    if {[string is integer [lindex $args 0]]} {\n        set level [lindex $args 0]\n        set property [lindex $args 1]\n    } else {\n        set property [lindex $args 0]\n    }\n    set srv [lindex $::servers end+$level]\n    dict get $srv $property\n}\n\n# Take an index to get a srv.\nproc get_srv {level} {\n    set srv [lindex $::servers end+$level]\n    return $srv\n}\n\n# Provide easy access to the client for the inner server. It's possible to\n# prepend the argument list with a negative level to access clients for\n# servers running in outer blocks.\nproc r {args} {\n    set level 0\n    if {[string is integer [lindex $args 0]]} {\n        set level [lindex $args 0]\n        set args [lrange $args 1 end]\n    }\n    [srv $level \"client\"] {*}$args\n}\n\n# Returns a Redis instance by index.\nproc Rn {n} {\n    set level [expr -1*$n]\n    return [srv $level \"client\"]\n}\n\n# Provide easy access to a client for an inner server. 
Requires a positive\n# index, unlike r which uses an optional negative index.\nproc R {n args} {\n    [Rn $n] {*}$args\n}\n\nproc reconnect {args} {\n    set level [lindex $args 0]\n    if {[string length $level] == 0 || ![string is integer $level]} {\n        set level 0\n    }\n\n    set srv [lindex $::servers end+$level]\n    set host [dict get $srv \"host\"]\n    set port [dict get $srv \"port\"]\n    set config [dict get $srv \"config\"]\n    set client [redis $host $port 0 $::tls]\n    if {[dict exists $srv \"client\"]} {\n        set old [dict get $srv \"client\"]\n        $old close\n    }\n    dict set srv \"client\" $client\n\n    # select the right db when we don't have to authenticate\n    if {![dict exists $config \"requirepass\"] && !$::singledb} {\n        $client select 9\n    }\n\n    # re-set $srv in the servers list\n    lset ::servers end+$level $srv\n}\n\nproc redis_deferring_client {args} {\n    set level 0\n    if {[llength $args] > 0 && [string is integer [lindex $args 0]]} {\n        set level [lindex $args 0]\n        set args [lrange $args 1 end]\n    }\n\n    # create a client that defers reading the reply\n    set client [redis [srv $level \"host\"] [srv $level \"port\"] 1 $::tls]\n\n    # select the right db and read the response (OK)\n    if {!$::singledb} {\n        $client select 9\n        $client read\n    } else {\n        # For timing/symmetry with the above select\n        $client ping\n        $client read\n    }\n    return $client\n}\n\nproc redis_client {args} {\n    set level 0\n    if {[llength $args] > 0 && [string is integer [lindex $args 0]]} {\n        set level [lindex $args 0]\n        set args [lrange $args 1 end]\n    }\n\n    # create a client that won't defer reading the reply\n    set client [redis [srv $level \"host\"] [srv $level \"port\"] 0 $::tls]\n\n    # select the right db and read the response (OK), or at least ping\n    # the server if we're in singledb mode.\n    if {$::singledb} {\n        $client ping\n    } 
else {\n        $client select 9\n    }\n    return $client\n}\n\n# Provide easy access to INFO properties. Same semantics as \"proc r\".\nproc s {args} {\n    set level 0\n    if {[string is integer [lindex $args 0]]} {\n        set level [lindex $args 0]\n        set args [lrange $args 1 end]\n    }\n    status [srv $level \"client\"] [lindex $args 0]\n}\n\nproc S {index field} {\n    getInfoProperty [R $index info] $field\n}\n\n# Get the specified field from the given instance's cluster info output.\nproc CI {index field} {\n    getInfoProperty [R $index cluster info] $field\n}\n\n# Tests wrapped into run_solo are sent back from the client to the\n# test server, so that the test server will send them again to\n# clients once the clients are idle.\nproc run_solo {name code} {\n    if {$::numclients == 1 || $::loop || $::external} {\n        # run_solo is not supported in these scenarios, just run the code.\n        eval $code\n        return\n    }\n    send_data_packet $::test_server_fd run_solo [list $name $::curfile $code]\n}\n\nproc cleanup {} {\n    if {!$::quiet} {puts -nonewline \"Cleanup: may take some time... 
\"}\n    flush stdout\n    catch {exec rm -rf {*}[glob tests/tmp/redis.conf.*]}\n    catch {exec rm -rf {*}[glob tests/tmp/server.*]}\n    if {!$::quiet} {puts \"OK\"}\n}\n\nproc test_server_main {} {\n    if {!$::dont_pre_clean} cleanup\n    set tclsh [info nameofexecutable]\n    # Open a listening socket, trying different ports in order to find a\n    # non busy one.\n    set clientport [find_available_port [expr {$::baseport - 32}] 32]\n    if {!$::quiet} {\n        puts \"Starting test server at port $clientport\"\n    }\n    socket -server accept_test_clients  -myaddr 127.0.0.1 $clientport\n\n    # Start the client instances\n    set ::clients_pids {}\n    if {$::external} {\n        set p [exec $tclsh [info script] {*}$::argv \\\n            --client $clientport &]\n        lappend ::clients_pids $p\n    } else {\n        set start_port $::baseport\n        set port_count [expr {$::portcount / $::numclients}]\n        for {set j 0} {$j < $::numclients} {incr j} {\n            set p [exec $tclsh [info script] {*}$::argv \\\n                --client $clientport --baseport $start_port --portcount $port_count &]\n            lappend ::clients_pids $p\n            incr start_port $port_count\n        }\n    }\n\n    # Setup global state for the test server\n    set ::idle_clients {}\n    set ::active_clients {}\n    array set ::active_clients_task {}\n    array set ::clients_start_time {}\n    set ::clients_time_history {}\n    set ::failed_tests {}\n\n    # Enter the event loop to handle clients I/O\n    after 100 test_server_cron\n    vwait forever\n}\n\n# This function gets called 10 times per second.\nproc test_server_cron {} {\n    set elapsed [expr {[clock seconds]-$::last_progress}]\n\n    if {$elapsed > $::timeout} {\n        set err \"\\[[colorstr red TIMEOUT]\\]: clients state report follows.\"\n        puts $err\n        lappend ::failed_tests $err\n        show_clients_state\n        kill_clients\n        force_kill_all_servers\n        the_end\n    
}\n\n    after 100 test_server_cron\n}\n\nproc accept_test_clients {fd addr port} {\n    fconfigure $fd -translation binary\n    fileevent $fd readable [list read_from_test_client $fd]\n}\n\n# This is the readable handler of our test server. Clients send us messages\n# in the form of a status code and additional data. Supported\n# status types are:\n#\n# ready: the client is ready to execute the command. Only sent at client\n#        startup. The server will queue the client FD in the list of idle\n#        clients.\n# testing: just used to signal that a given test started.\n# ok: a test was executed with success.\n# err: a test was executed with an error.\n# skip: a test was skipped by skipfile or individual test options.\n# ignore: a test was skipped by a group tag.\n# exception: there was a runtime exception while executing the test.\n# done: the specified test file was fully processed; this test client is\n#       ready to accept a new task.\nproc read_from_test_client fd {\n    set bytes [gets $fd]\n    set payload [encoding convertfrom utf-8 [read $fd $bytes]]\n    foreach {status data elapsed} $payload break\n    set ::last_progress [clock seconds]\n\n    if {$status eq {ready}} {\n        if {!$::quiet} {\n            puts \"\\[$status\\]: $data\"\n        }\n        signal_idle_client $fd\n    } elseif {$status eq {done}} {\n        set elapsed [expr {[clock seconds]-$::clients_start_time($fd)}]\n        set all_tests_count [llength $::all_tests]\n        set running_tests_count [expr {[llength $::active_clients]-1}]\n        set completed_tests_count [expr {$::next_test-$running_tests_count}]\n        puts \"\\[$completed_tests_count/$all_tests_count [colorstr yellow $status]\\]: $data ($elapsed seconds)\"\n        lappend ::clients_time_history $elapsed $data\n        signal_idle_client $fd\n        set ::active_clients_task($fd) \"(DONE) $data\"\n    } elseif {$status eq {ok}} {\n        if {!$::quiet} {\n            puts \"\\[[colorstr green 
$status]\\]: $data ($elapsed ms)\"\n        }\n        set ::active_clients_task($fd) \"(OK) $data\"\n    } elseif {$status eq {skip}} {\n        if {!$::quiet} {\n            puts \"\\[[colorstr yellow $status]\\]: $data\"\n        }\n    } elseif {$status eq {ignore}} {\n        if {!$::quiet} {\n            puts \"\\[[colorstr cyan $status]\\]: $data\"\n        }\n    } elseif {$status eq {err}} {\n        set err \"\\[[colorstr red $status]\\]: $data\"\n        puts $err\n        lappend ::failed_tests $err\n        set ::active_clients_task($fd) \"(ERR) $data\"\n        if {$::stop_on_failure} {\n            puts -nonewline \"(Test stopped, press enter to resume the tests)\"\n            flush stdout\n            gets stdin\n        }\n    } elseif {$status eq {exception}} {\n        puts \"\\[[colorstr red $status]\\]: $data\"\n        kill_clients\n        force_kill_all_servers\n        exit 1\n    } elseif {$status eq {testing}} {\n        set ::active_clients_task($fd) \"(IN PROGRESS) $data\"\n    } elseif {$status eq {server-spawning}} {\n        set ::active_clients_task($fd) \"(SPAWNING SERVER) $data\"\n    } elseif {$status eq {server-spawned}} {\n        lappend ::active_servers $data\n        set ::active_clients_task($fd) \"(SPAWNED SERVER) pid:$data\"\n    } elseif {$status eq {server-killing}} {\n        set ::active_clients_task($fd) \"(KILLING SERVER) pid:$data\"\n    } elseif {$status eq {server-killed}} {\n        set ::active_servers [lsearch -all -inline -not -exact $::active_servers $data]\n        set ::active_clients_task($fd) \"(KILLED SERVER) pid:$data\"\n    } elseif {$status eq {run_solo}} {\n        lappend ::run_solo_tests $data\n    } else {\n        if {!$::quiet} {\n            puts \"\\[$status\\]: $data\"\n        }\n    }\n}\n\nproc show_clients_state {} {\n    # The following loop is only useful for debugging tests that may\n    # enter an infinite loop.\n    foreach x $::active_clients {\n        if {[info exist 
::active_clients_task($x)]} {\n            puts \"$x => $::active_clients_task($x)\"\n        } else {\n            puts \"$x => ???\"\n        }\n    }\n}\n\nproc kill_clients {} {\n    foreach p $::clients_pids {\n        catch {exec kill $p}\n    }\n}\n\nproc force_kill_all_servers {} {\n    foreach p $::active_servers {\n        puts \"Killing still running Redis server $p\"\n        catch {exec kill -9 $p}\n    }\n}\n\nproc lpop {listVar {count 1}} {\n    upvar 1 $listVar l\n    set ele [lindex $l 0]\n    set l [lrange $l 1 end]\n    set ele\n}\n\nproc lremove {listVar value} {\n    upvar 1 $listVar var\n    set idx [lsearch -exact $var $value]\n    set var [lreplace $var $idx $idx]\n}\n\n# A new client is idle. Remove it from the list of active clients and\n# if there are still test units to run, launch them.\nproc signal_idle_client fd {\n    # Remove this fd from the list of active clients.\n    set ::active_clients \\\n        [lsearch -all -inline -not -exact $::active_clients $fd]\n\n    # New unit to process?\n    if {$::next_test != [llength $::all_tests]} {\n        if {!$::quiet} {\n            puts [colorstr bold-white \"Testing [lindex $::all_tests $::next_test]\"]\n            set ::active_clients_task($fd) \"ASSIGNED: $fd ([lindex $::all_tests $::next_test])\"\n        }\n        set ::clients_start_time($fd) [clock seconds]\n        send_data_packet $fd run [lindex $::all_tests $::next_test]\n        lappend ::active_clients $fd\n        incr ::next_test\n        if {$::loop && $::next_test == [llength $::all_tests]} {\n            set ::next_test 0\n            incr ::loop -1\n        }\n    } elseif {[llength $::run_solo_tests] != 0 && [llength $::active_clients] == 0} {\n        if {!$::quiet} {\n            puts [colorstr bold-white \"Testing solo test\"]\n            set ::active_clients_task($fd) \"ASSIGNED: $fd solo test\"\n        }\n        set ::clients_start_time($fd) [clock seconds]\n        send_data_packet $fd run_code [lpop 
::run_solo_tests]\n        lappend ::active_clients $fd\n    } else {\n        lappend ::idle_clients $fd\n        set ::active_clients_task($fd) \"SLEEPING, no more units to assign\"\n        if {[llength $::active_clients] == 0} {\n            the_end\n        }\n    }\n}\n\n# The the_end function gets called when all the test units have already\n# been executed, so the whole test run is finished.\nproc the_end {} {\n    # TODO: print the status, exit with the right exit code.\n    puts \"\\n                   The End\\n\"\n    puts \"Execution time of different units:\"\n    foreach {time name} $::clients_time_history {\n        puts \"  $time seconds - $name\"\n    }\n    if {[llength $::failed_tests]} {\n        puts \"\\n[colorstr bold-red {!!! WARNING}] The following tests failed:\\n\"\n        foreach failed $::failed_tests {\n            puts \"*** $failed\"\n        }\n        if {!$::dont_clean} cleanup\n        exit 1\n    } else {\n        puts \"\\n[colorstr bold-white {\\o/}] [colorstr bold-green {All tests passed without errors!}]\\n\"\n        if {!$::dont_clean} cleanup\n        exit 0\n    }\n}\n\n# The client is not event driven (the test server is instead) as we just need\n# to read the command, execute, reply... 
all this in a loop.\nproc test_client_main server_port {\n    set ::test_server_fd [socket localhost $server_port]\n    fconfigure $::test_server_fd -translation binary\n    send_data_packet $::test_server_fd ready [pid]\n    while 1 {\n        set bytes [gets $::test_server_fd]\n        set payload [encoding convertfrom utf-8 [read $::test_server_fd $bytes]]\n        foreach {cmd data} $payload break\n        if {$cmd eq {run}} {\n            execute_test_file $data\n        } elseif {$cmd eq {run_code}} {\n            foreach {name filename code} $data break\n            execute_test_code $name $filename $code\n        } else {\n            error \"Unknown test client command: $cmd\"\n        }\n    }\n}\n\nproc send_data_packet {fd status data {elapsed 0}} {\n    set payload [list $status $data $elapsed]\n    # Convert to UTF-8 bytes before sending so that:\n    # 1. The byte count is accurate (Tcl 9.0 string length returns character\n    #    count, which differs from byte count for non-ASCII Unicode chars).\n    # 2. Characters above U+00FF (not representable in the channel's iso8859-1\n    #    encoding) are safely transmitted as multi-byte UTF-8 sequences.\n    set payload_bytes [encoding convertto utf-8 $payload]\n    puts $fd [string length $payload_bytes]\n    puts -nonewline $fd $payload_bytes\n    flush $fd\n}\n\nproc print_help_screen {} {\n    puts [join {\n        \"--valgrind         Run the test over valgrind.\"\n        \"--tsan             Run the test with thread sanitizer.\"\n        \"--durable          Suppress test crashes and keep running.\"\n        \"--stack-logging    Enable OSX leaks/malloc stack logging.\"\n        \"--accurate         Run slow randomized tests for more iterations.\"\n        \"--quiet            Don't show individual tests.\"\n        \"--single <unit>    Just execute the specified unit (see next option). 
This option can be repeated.\"\n        \"--verbose          Increases verbosity.\"\n        \"--list-tests       List all the available test units.\"\n        \"--only <test>      Just execute the specified test by test name or tests that match <test> regexp (if <test> starts with '/'). This option can be repeated.\"\n        \"--skip-till <unit> Skip all units until (and including) the specified one.\"\n        \"--skipunit <unit>  Skip one unit.\"\n        \"--clients <num>    Number of test clients (default 16).\"\n        \"--timeout <sec>    Test timeout in seconds (default 20 min).\"\n        \"--force-failure    Force the execution of a test that always fails.\"\n        \"--config <k> <v>   Extra config file argument.\"\n        \"--skipfile <file>  Name of a file containing test names or regexp patterns (if <test> starts with '/') that should be skipped (one per line). This option can be repeated.\"\n        \"--skiptest <test>  Test name or regexp pattern (if <test> starts with '/') to skip. 
This option can be repeated.\"\n        \"--tags <tags>      Run only tests having specified tags or not having '-' prefixed tags.\"\n        \"--dont-clean       Don't delete redis log files after the run.\"\n        \"--dont-pre-clean   Don't delete existing redis log files before the run.\"\n        \"--no-latency       Skip latency measurements and validation by some tests.\"\n        \"--stop             Blocks once the first test fails.\"\n        \"--loop             Execute the specified set of tests forever.\"\n        \"--loops <count>    Execute the specified set of tests several times.\"\n        \"--wait-server      Wait after server is started (so that you can attach a debugger).\"\n        \"--dump-logs        Dump server log on test failure.\"\n        \"--tls              Run tests in TLS mode.\"\n        \"--tls-module       Run tests in TLS mode with Redis module.\"\n        \"--host <addr>      Run tests against an external host.\"\n        \"--port <port>      TCP port to use against external host.\"\n        \"--baseport <port>  Initial port number for spawned redis servers.\"\n        \"--portcount <num>  Port range for spawned redis servers.\"\n        \"--singledb         Use a single database, avoid SELECT.\"\n        \"--cluster-mode     Run tests in cluster protocol compatible mode.\"\n        \"--ignore-encoding  Don't validate object encoding.\"\n        \"--ignore-digest    Don't use debug digest validations.\"\n        \"--large-memory     Run tests using over 100MB.\"\n        \"--debug-defrag     Indicate the test is running against a server compiled with the DEBUG_DEFRAG option.\"\n        \"--help             Print this help screen.\"\n    } \"\\n\"]\n}\n\n# parse arguments\nfor {set j 0} {$j < [llength $argv]} {incr j} {\n    set opt [lindex $argv $j]\n    set arg [lindex $argv [expr $j+1]]\n    if {$opt eq {--tags}} {\n        foreach tag $arg {\n            if {[string index $tag 0] eq \"-\"} {\n                lappend 
[string range $tag 1 end]\n            } else {\n                lappend ::allowtags $tag\n            }\n        }\n        incr j\n    } elseif {$opt eq {--config}} {\n        set arg2 [lindex $argv [expr $j+2]]\n        lappend ::global_overrides $arg\n        lappend ::global_overrides $arg2\n        incr j 2\n    } elseif {$opt eq {--log-req-res}} {\n        set ::log_req_res 1\n    } elseif {$opt eq {--force-resp3}} {\n        set ::force_resp3 1\n    } elseif {$opt eq {--skipfile}} {\n        incr j\n        set fp [open $arg r]\n        set file_data [read $fp]\n        close $fp\n        set ::skiptests [concat $::skiptests [split $file_data \"\\n\"]]\n    } elseif {$opt eq {--skiptest}} {\n        lappend ::skiptests $arg\n        incr j\n    } elseif {$opt eq {--valgrind}} {\n        set ::valgrind 1\n    } elseif {$opt eq {--tsan}} {\n        set ::tsan 1\n    } elseif {$opt eq {--stack-logging}} {\n        if {[string match {*Darwin*} [exec uname -a]]} {\n            set ::stack_logging 1\n        }\n    } elseif {$opt eq {--quiet}} {\n        set ::quiet 1\n    } elseif {$opt eq {--tls} || $opt eq {--tls-module}} {\n        package require tls 1.6\n        set ::tls 1\n        ::tls::init \\\n            -cafile \"$::tlsdir/ca.crt\" \\\n            -certfile \"$::tlsdir/client.crt\" \\\n            -keyfile \"$::tlsdir/client.key\"\n        if {$opt eq {--tls-module}} {\n            set ::tls_module 1\n        }\n    } elseif {$opt eq {--host}} {\n        set ::external 1\n        set ::host $arg\n        incr j\n    } elseif {$opt eq {--port}} {\n        set ::port $arg\n        incr j\n    } elseif {$opt eq {--baseport}} {\n        set ::baseport $arg\n        incr j\n    } elseif {$opt eq {--portcount}} {\n        set ::portcount $arg\n        incr j\n    } elseif {$opt eq {--accurate}} {\n        set ::accurate 1\n    } elseif {$opt eq {--force-failure}} {\n        set ::force_failure 1\n    } elseif {$opt eq {--single}} {\n        lappend 
::single_tests $arg\n        incr j\n    } elseif {$opt eq {--only}} {\n        lappend ::only_tests $arg\n        incr j\n    } elseif {$opt eq {--skipunit}} {\n        lappend ::skipunits $arg\n        incr j\n    } elseif {$opt eq {--skip-till}} {\n        set ::skip_till $arg\n        incr j\n    } elseif {$opt eq {--list-tests}} {\n        foreach t $::all_tests {\n            puts $t\n        }\n        exit 0\n    } elseif {$opt eq {--verbose}} {\n        incr ::verbose\n    } elseif {$opt eq {--client}} {\n        set ::client 1\n        set ::test_server_port $arg\n        incr j\n    } elseif {$opt eq {--clients}} {\n        set ::numclients $arg\n        incr j\n    } elseif {$opt eq {--durable}} {\n        set ::durable 1\n    } elseif {$opt eq {--dont-clean}} {\n        set ::dont_clean 1\n    } elseif {$opt eq {--dont-pre-clean}} {\n        set ::dont_pre_clean 1\n    } elseif {$opt eq {--no-latency}} {\n        set ::no_latency 1\n    } elseif {$opt eq {--wait-server}} {\n        set ::wait_server 1\n    } elseif {$opt eq {--dump-logs}} {\n        set ::dump_logs 1\n    } elseif {$opt eq {--stop}} {\n        set ::stop_on_failure 1\n    } elseif {$opt eq {--loop}} {\n        set ::loop 2147483647\n    } elseif {$opt eq {--loops}} {\n        set ::loop $arg\n        incr j\n    } elseif {$opt eq {--timeout}} {\n        set ::timeout $arg\n        incr j\n    } elseif {$opt eq {--singledb}} {\n        set ::singledb 1\n    } elseif {$opt eq {--cluster-mode}} {\n        set ::cluster_mode 1\n        set ::singledb 1\n    } elseif {$opt eq {--large-memory}} {\n        set ::large_memory 1\n    } elseif {$opt eq {--ignore-encoding}} {\n        set ::ignoreencoding 1\n    } elseif {$opt eq {--ignore-digest}} {\n        set ::ignoredigest 1\n    } elseif {$opt eq {--debug-defrag}} {\n        set ::debug_defrag 1\n    } elseif {$opt eq {--help}} {\n        print_help_screen\n        exit 0\n    } else {\n        puts \"Wrong argument: $opt\"\n        exit 
1\n    }\n}\n\nset filtered_tests {}\n\n# Set the filtered tests to be the short list (single_tests) if it exists.\n# Otherwise, we start filtering all_tests.\nif {[llength $::single_tests] > 0} {\n    set filtered_tests $::single_tests\n} else {\n    set filtered_tests $::all_tests\n}\n\n# If the --skip-till option was given, we populate the list of single tests\n# to run with everything *after* the specified unit.\nif {$::skip_till != \"\"} {\n    set skipping 1\n    foreach t $::all_tests {\n        if {$skipping == 1} {\n            lremove filtered_tests $t\n        }\n        if {$t == $::skip_till} {\n            set skipping 0\n        }\n    }\n    if {$skipping} {\n        puts \"test $::skip_till not found\"\n        exit 0\n    }\n}\n\n# If the --skipunit option was given, we populate the list of single tests\n# to run with everything *not* in the skipunits list.\nif {[llength $::skipunits] > 0} {\n    foreach t $::all_tests {\n        if {[lsearch $::skipunits $t] != -1} {\n            lremove filtered_tests $t\n        }\n    }\n}\n\n# Override the list of tests with the specific tests we want to run\n# in case there was some filter, that is --single, --skipunit or --skip-till options.\nif {[llength $filtered_tests] < [llength $::all_tests]} {\n    set ::all_tests $filtered_tests\n}\n\nproc attach_to_replication_stream_on_connection {conn} {\n    r config set repl-ping-replica-period 3600\n    if {$::tls} {\n        set s [::tls::socket [srv $conn \"host\"] [srv $conn \"port\"]]\n    } else {\n        set s [socket [srv $conn \"host\"] [srv $conn \"port\"]]\n    }\n    fconfigure $s -translation binary\n    puts -nonewline $s \"SYNC\\r\\n\"\n    flush $s\n\n    # Get the count\n    while 1 {\n        set count [gets $s]\n        set prefix [string range $count 0 0]\n        if {$prefix ne {}} break; # Newlines are allowed as PINGs.\n    }\n    if {$prefix ne {$}} {\n        error \"attach_to_replication_stream error. 
Received '$count' as count.\"\n    }\n    set count [string range $count 1 end]\n\n    # Consume the bulk payload\n    while {$count} {\n        set buf [read $s $count]\n        set count [expr {$count-[string length $buf]}]\n    }\n    return $s\n}\n\nproc attach_to_replication_stream {} {\n    return [attach_to_replication_stream_on_connection 0]\n}\n\nproc read_from_replication_stream {s} {\n    fconfigure $s -blocking 0\n    set attempt 0\n    while {[gets $s count] == -1} {\n        if {[incr attempt] == 10} return \"\"\n        after 100\n    }\n    fconfigure $s -blocking 1\n    set count [string range $count 1 end]\n\n    # Return a list of arguments for the command.\n    set res {}\n    for {set j 0} {$j < $count} {incr j} {\n        read $s 1\n        set arg [::redis::redis_bulk_read $s]\n        if {$j == 0} {set arg [string tolower $arg]}\n        lappend res $arg\n    }\n    return $res\n}\n\nproc assert_replication_stream {s patterns} {\n    set errors 0\n    set values_list {}\n    set patterns_list {}\n    for {set j 0} {$j < [llength $patterns]} {incr j} {\n        set pattern [lindex $patterns $j]\n        lappend patterns_list $pattern\n        set value [read_from_replication_stream $s]\n        lappend values_list $value\n        if {![string match $pattern $value]} { incr errors }\n    }\n\n    if {$errors == 0} { return }\n\n    set context [info frame -1]\n    close_replication_stream $s ;# for fast exit\n    assert_match $patterns_list $values_list \"\" $context\n}\n\nproc close_replication_stream {s} {\n    close $s\n    r config set repl-ping-replica-period 10\n    return\n}\n\n# With the parallel test running multiple Redis instances at the same time\n# we need a fast enough computer, otherwise a lot of tests may generate\n# false positives.\n# If the computer is too slow we revert to the sequential test without any\n# parallelism, that is, clients == 1.\nproc is_a_slow_computer {} {\n    set start [clock milliseconds]\n    for {set j 0} 
{$j < 1000000} {incr j} {}\n    set elapsed [expr [clock milliseconds]-$start]\n    expr {$elapsed > 200}\n}\n\nif {$::client} {\n    if {[catch { test_client_main $::test_server_port } err]} {\n        set estr \"Executing test client: $err.\\n$::errorInfo\"\n        if {[catch {send_data_packet $::test_server_fd exception $estr}]} {\n            puts $estr\n        }\n        exit 1\n    }\n} else {\n    if {[is_a_slow_computer]} {\n        puts \"** SLOW COMPUTER ** Using a single client to avoid false positives.\"\n        set ::numclients 1\n    }\n\n    if {[catch { test_server_main } err]} {\n        if {[string length $err] > 0} {\n            # only display error when not generated by the test suite\n            if {$err ne \"exception\"} {\n                puts $::errorInfo\n            }\n            exit 1\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/acl-v2.tcl",
    "content": "start_server {tags {\"acl external:skip\"}} {\n    set r2 [redis_client]\n    test {Test basic multiple selectors} {\n        r ACL SETUSER selector-1 on -@all resetkeys nopass\n        $r2 auth selector-1 password\n        catch {$r2 ping} err\n        assert_match \"*NOPERM*command*\" $err\n        catch {$r2 set write::foo bar} err\n        assert_match \"*NOPERM*command*\" $err\n        catch {$r2 get read::foo} err\n        assert_match \"*NOPERM*command*\" $err\n\n        r ACL SETUSER selector-1 (+@write ~write::*) (+@read ~read::*)\n        catch {$r2 ping} err\n        assert_equal \"OK\" [$r2 set write::foo bar]\n        assert_equal \"\" [$r2 get read::foo]\n        catch {$r2 get write::foo} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 set read::foo bar} err\n        assert_match \"*NOPERM*key*\" $err\n    }\n\n    test {Test ACL selectors by default have no permissions} {\n        r ACL SETUSER selector-default reset ()\n        set user [r ACL GETUSER \"selector-default\"]\n        assert_equal 1 [llength [dict get $user selectors]]\n        assert_equal \"\" [dict get [lindex [dict get $user selectors] 0] keys]\n        assert_equal \"\" [dict get [lindex [dict get $user selectors] 0] channels]\n        assert_equal \"-@all\" [dict get [lindex [dict get $user selectors] 0] commands]\n    }\n\n    test {Test deleting selectors} {\n        r ACL SETUSER selector-del on \"(~added-selector)\"\n        set user [r ACL GETUSER \"selector-del\"]\n        assert_equal \"~added-selector\" [dict get [lindex [dict get $user selectors] 0] keys]\n        assert_equal [llength [dict get $user selectors]] 1\n\n        r ACL SETUSER selector-del clearselectors\n        set user [r ACL GETUSER \"selector-del\"]\n        assert_equal [llength [dict get $user selectors]] 0\n    }\n\n    test {Test selector syntax error reports the error in the selector context} {\n        catch {r ACL SETUSER selector-syntax on (this-is-invalid)} 
e\n        assert_match \"*ERR Error in ACL SETUSER modifier '(*)*Syntax*\" $e\n\n        catch {r ACL SETUSER selector-syntax on (&* &fail)} e\n        assert_match \"*ERR Error in ACL SETUSER modifier '(*)*Adding a pattern after the*\" $e\n\n        catch {r ACL SETUSER selector-syntax on (+PING (+SELECT (+DEL} e\n        assert_match \"*ERR Unmatched parenthesis in acl selector*\" $e\n\n        catch {r ACL SETUSER selector-syntax on (+PING (+SELECT (+DEL ) ) ) } e\n        assert_match \"*ERR Error in ACL SETUSER modifier*\" $e\n\n        catch {r ACL SETUSER selector-syntax on (+PING (+SELECT (+DEL ) } e\n        assert_match \"*ERR Error in ACL SETUSER modifier*\" $e\n\n        assert_equal \"\" [r ACL GETUSER selector-syntax]\n    }\n\n    test {Test flexible selector definition} {\n        # Test valid selectors\n        r ACL SETUSER selector-2 \"(~key1 +get )\" \"( ~key2 +get )\" \"( ~key3 +get)\" \"(~key4 +get)\"\n        r ACL SETUSER selector-2 (~key5 +get ) ( ~key6 +get ) ( ~key7 +get) (~key8 +get)\n        set user [r ACL GETUSER \"selector-2\"]\n        assert_equal \"~key1\" [dict get [lindex [dict get $user selectors] 0] keys]\n        assert_equal \"~key2\" [dict get [lindex [dict get $user selectors] 1] keys]\n        assert_equal \"~key3\" [dict get [lindex [dict get $user selectors] 2] keys]\n        assert_equal \"~key4\" [dict get [lindex [dict get $user selectors] 3] keys]\n        assert_equal \"~key5\" [dict get [lindex [dict get $user selectors] 4] keys]\n        assert_equal \"~key6\" [dict get [lindex [dict get $user selectors] 5] keys]\n        assert_equal \"~key7\" [dict get [lindex [dict get $user selectors] 6] keys]\n        assert_equal \"~key8\" [dict get [lindex [dict get $user selectors] 7] keys]\n\n        # Test invalid selector syntax\n        catch {r ACL SETUSER invalid-selector \" () \"} err\n        assert_match \"*ERR*Syntax error*\" $err\n        catch {r ACL SETUSER invalid-selector (} err\n        assert_match 
\"*Unmatched parenthesis*\" $err\n        catch {r ACL SETUSER invalid-selector )} err\n        assert_match \"*ERR*Syntax error\" $err\n    }\n\n    test {Test separate read permission} {\n        r ACL SETUSER key-permission-R on nopass %R~read* +@all\n        $r2 auth key-permission-R password\n        assert_equal PONG [$r2 PING]\n        r set readstr bar\n        assert_equal bar [$r2 get readstr]\n        catch {$r2 set readstr bar} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 get notread} err\n        assert_match \"*NOPERM*key*\" $err\n    }\n\n    test {Test separate write permission} {\n        r ACL SETUSER key-permission-W on nopass %W~write* +@all\n        $r2 auth key-permission-W password\n        assert_equal PONG [$r2 PING]\n        # Note, SET is a RW command, so it's not used for testing\n        $r2 LPUSH writelist 10\n        catch {$r2 GET writestr} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 LPUSH notwrite 10} err\n        assert_match \"*NOPERM*key*\" $err\n    }\n\n    test {Test separate read and write permissions} {\n        r ACL SETUSER key-permission-RW on nopass %R~read* %W~write* +@all\n        $r2 auth key-permission-RW password\n        assert_equal PONG [$r2 PING]\n        r set read bar\n        $r2 copy read write\n        catch {$r2 copy write read} err\n        assert_match \"*NOPERM*key*\" $err\n    }\n\n    test {Validate read and write permissions format - empty permission} {\n        catch {r ACL SETUSER key-permission-RW %~} err\n        set err\n    } {ERR Error in ACL SETUSER modifier '%~': Syntax error}\n\n    test {Validate read and write permissions format - empty selector} {\n        catch {r ACL SETUSER key-permission-RW %} err\n        set err\n    } {ERR Error in ACL SETUSER modifier '%': Syntax error}\n\n    test {Validate read and write permissions format - empty pattern} {\n        # Empty pattern results with R/W access to no key\n        r ACL SETUSER 
key-permission-RW on nopass %RW~ +@all\n        $r2 auth key-permission-RW password\n        catch {$r2 SET x 5} err\n        set err\n    } {NOPERM No permissions to access a key}\n\n    test {Validate read and write permissions format - no pattern} {\n        # No pattern results with R/W access to no key (currently we accept this syntax error)\n        r ACL SETUSER key-permission-RW on nopass %RW +@all\n        $r2 auth key-permission-RW password\n        catch {$r2 SET x 5} err\n        set err\n    } {NOPERM No permissions to access a key}\n\n    test {Test separate read and write permissions on different selectors are not additive} {\n        r ACL SETUSER key-permission-RW-selector on nopass \"(%R~read* +@all)\" \"(%W~write* +@all)\"\n        $r2 auth key-permission-RW-selector password\n        assert_equal PONG [$r2 PING]\n\n        # Verify write selector\n        $r2 LPUSH writelist 10\n        catch {$r2 GET writestr} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 LPUSH notwrite 10} err\n        assert_match \"*NOPERM*key*\" $err\n\n        # Verify read selector\n        r set readstr bar\n        assert_equal bar [$r2 get readstr]\n        catch {$r2 set readstr bar} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 get notread} err\n        assert_match \"*NOPERM*key*\" $err\n\n        # Verify they don't combine\n        catch {$r2 copy read write} err\n        assert_match \"*NOPERM*key*\" $err\n        catch {$r2 copy write read} err\n        assert_match \"*NOPERM*key*\" $err\n    }\n\n    test {Test SET with separate read permission} {\n        r del readstr\n        r ACL SETUSER set-key-permission-R on nopass %R~read* +@all\n        $r2 auth set-key-permission-R password\n        assert_equal PONG [$r2 PING]\n        assert_equal {} [$r2 get readstr]\n\n        # We don't have the permission to WRITE key.\n        assert_error {*NOPERM*key*} {$r2 set readstr bar}\n        assert_error {*NOPERM*key*} {$r2 
set readstr bar get}\n        assert_error {*NOPERM*key*} {$r2 set readstr bar ex 100}\n        assert_error {*NOPERM*key*} {$r2 set readstr bar keepttl nx}\n    }\n\n    test {Test SET with separate write permission} {\n        r del writestr\n        r ACL SETUSER set-key-permission-W on nopass %W~write* +@all\n        $r2 auth set-key-permission-W password\n        assert_equal PONG [$r2 PING]\n        assert_equal {OK} [$r2 set writestr bar]\n        assert_equal {OK} [$r2 set writestr get]\n\n        # We don't have the permission to READ key.\n        assert_error {*NOPERM*key*} {$r2 set get writestr}\n        assert_error {*NOPERM*key*} {$r2 set writestr bar get}\n        assert_error {*NOPERM*key*} {$r2 set writestr bar get ex 100}\n        assert_error {*NOPERM*key*} {$r2 set writestr bar get keepttl nx}\n\n        # this probably should be `ERR value is not an integer or out of range`\n        assert_error {*NOPERM*key*} {$r2 set writestr bar ex get}\n    }\n\n    test {Test SET with read and write permissions} {\n        r del readwrite_str\n        r ACL SETUSER set-key-permission-RW-selector on nopass %RW~readwrite* +@all\n        $r2 auth set-key-permission-RW-selector password\n        assert_equal PONG [$r2 PING]\n\n        assert_equal {} [$r2 get readwrite_str]\n        assert_error {ERR * not an integer *} {$r2 set readwrite_str bar ex get}\n\n        assert_equal {OK} [$r2 set readwrite_str bar]\n        assert_equal {bar} [$r2 get readwrite_str]\n\n        assert_equal {bar} [$r2 set readwrite_str bar2 get]\n        assert_equal {bar2} [$r2 get readwrite_str]\n\n        assert_equal {bar2} [$r2 set readwrite_str bar3 get ex 10]\n        assert_equal {bar3} [$r2 get readwrite_str]\n        assert_range [$r2 ttl readwrite_str] 5 10\n    }\n\n    test {Test BITFIELD with separate read permission} {\n        r del readstr\n        r ACL SETUSER bitfield-key-permission-R on nopass %R~read* +@all\n        $r2 auth bitfield-key-permission-R password\n 
       assert_equal PONG [$r2 PING]\n        assert_equal {0} [$r2 bitfield readstr get u4 0]\n\n        # We don't have the permission to WRITE key.\n        assert_error {*NOPERM*key*} {$r2 bitfield readstr set u4 0 1}\n        assert_error {*NOPERM*key*} {$r2 bitfield readstr get u4 0 set u4 0 1}\n        assert_error {*NOPERM*key*} {$r2 bitfield readstr incrby u4 0 1}\n    }\n\n    test {Test BITFIELD with separate write permission} {\n        r del writestr\n        r ACL SETUSER bitfield-key-permission-W on nopass %W~write* +@all\n        $r2 auth bitfield-key-permission-W password\n        assert_equal PONG [$r2 PING]\n\n        # We don't have the permission to READ key.\n        assert_error {*NOPERM*key*} {$r2 bitfield writestr get u4 0}\n        assert_error {*NOPERM*key*} {$r2 bitfield writestr set u4 0 1}\n        assert_error {*NOPERM*key*} {$r2 bitfield writestr incrby u4 0 1}\n    }\n\n    test {Test BITFIELD with read and write permissions} {\n        r del readwrite_str\n        r ACL SETUSER bitfield-key-permission-RW-selector on nopass %RW~readwrite* +@all\n        $r2 auth bitfield-key-permission-RW-selector password\n        assert_equal PONG [$r2 PING]\n\n        assert_equal {0} [$r2 bitfield readwrite_str get u4 0]\n        assert_equal {0} [$r2 bitfield readwrite_str set u4 0 1]\n        assert_equal {2} [$r2 bitfield readwrite_str incrby u4 0 1]\n        assert_equal {2} [$r2 bitfield readwrite_str get u4 0]\n    }\n\n    test {Test ACL log correctly identifies the relevant item when selectors are used} {\n        r ACL SETUSER acl-log-test-selector on nopass\n        r ACL SETUSER acl-log-test-selector +mget ~key (+mget ~key ~otherkey)\n        $r2 auth acl-log-test-selector password\n\n        # Test that command is shown only if none of the selectors match\n        r ACL LOG RESET\n        catch {$r2 GET key} err\n        assert_match \"*NOPERM*command*\" $err\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get 
$entry username] \"acl-log-test-selector\"\n        assert_equal [dict get $entry context] \"toplevel\"\n        assert_equal [dict get $entry reason] \"command\"\n        assert_equal [dict get $entry object] \"get\"\n\n        # Test two cases where the first selector matches less than the\n        # second selector. We should still show the logically first unmatched key.\n        r ACL LOG RESET\n        catch {$r2 MGET otherkey someotherkey} err\n        assert_match \"*NOPERM*key*\" $err\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry username] \"acl-log-test-selector\"\n        assert_equal [dict get $entry context] \"toplevel\"\n        assert_equal [dict get $entry reason] \"key\"\n        assert_equal [dict get $entry object] \"someotherkey\"\n\n        r ACL LOG RESET\n        catch {$r2 MGET key otherkey someotherkey} err\n        assert_match \"*NOPERM*key*\" $err\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry username] \"acl-log-test-selector\"\n        assert_equal [dict get $entry context] \"toplevel\"\n        assert_equal [dict get $entry reason] \"key\"\n        assert_equal [dict get $entry object] \"someotherkey\"\n    }\n\n    test {Test ACL GETUSER response information} {\n        r ACL setuser selector-info -@all +get resetchannels &channel1 %R~foo1 %W~bar1 ~baz1\n        r ACL setuser selector-info (-@all +set resetchannels &channel2 %R~foo2 %W~bar2 ~baz2)\n        set user [r ACL GETUSER \"selector-info\"]\n\n        # Root selector\n        assert_equal \"%R~foo1 %W~bar1 ~baz1\" [dict get $user keys]\n        assert_equal \"&channel1\" [dict get $user channels]\n        assert_equal \"-@all +get\" [dict get $user commands]\n\n        # Added selector\n        set secondary_selector [lindex [dict get $user selectors] 0]\n        assert_equal \"%R~foo2 %W~bar2 ~baz2\" [dict get $secondary_selector keys]\n        assert_equal \"&channel2\" [dict get $secondary_selector 
channels]\n        assert_equal \"-@all +set\" [dict get $secondary_selector commands]\n    }\n\n    test {Test ACL list idempotency} {\n        r ACL SETUSER user-idempotency off -@all +get resetchannels &channel1 %R~foo1 %W~bar1 ~baz1 (-@all +set resetchannels &channel2 %R~foo2 %W~bar2 ~baz2)\n        set response [lindex [r ACL LIST] [lsearch [r ACL LIST] \"user user-idempotency*\"]]\n\n        assert_match \"*-@all*+get*(*)*\" $response\n        assert_match \"*resetchannels*&channel1*(*)*\" $response\n        assert_match \"*%R~foo1*%W~bar1*~baz1*(*)*\" $response\n\n        assert_match \"*(*-@all*+set*)*\" $response\n        assert_match \"*(*resetchannels*&channel2*)*\" $response\n        assert_match \"*(*%R~foo2*%W~bar2*~baz2*)*\" $response\n    }\n\n    test {Test R+W is the same as all permissions} {\n        r ACL setuser selector-rw-info %R~foo %W~foo %RW~bar\n        set user [r ACL GETUSER selector-rw-info]\n        assert_equal \"~foo ~bar\" [dict get $user keys]\n    }\n\n    test {Test basic dry run functionality} {\n        r ACL setuser command-test +@all %R~read* %W~write* %RW~rw*\n        assert_equal \"OK\" [r ACL DRYRUN command-test GET read]\n\n        catch {r ACL DRYRUN not-a-user GET read} e\n        assert_equal \"ERR User 'not-a-user' not found\" $e\n\n        catch {r ACL DRYRUN command-test not-a-command read} e\n        assert_equal \"ERR Command 'not-a-command' not found\" $e\n    }\n\n    test {Test various commands for command permissions} {\n        r ACL setuser command-test -@all\n        assert_match {*has no permissions to run the 'set' command*} [r ACL DRYRUN command-test set somekey somevalue]\n        assert_match {*has no permissions to run the 'get' command*} [r ACL DRYRUN command-test get somekey]\n    }\n\n    test {Test various odd commands for key permissions} {\n        r ACL setuser command-test +@all %R~read* %W~write* %RW~rw*\n\n        # Test migrate, which is marked with incomplete keys\n        assert_equal 
\"OK\" [r ACL DRYRUN command-test MIGRATE whatever whatever rw 0 500]\n        assert_match {*has no permissions to access the 'read' key*} [r ACL DRYRUN command-test MIGRATE whatever whatever read 0 500]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN command-test MIGRATE whatever whatever write 0 500]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 KEYS rw]\n        assert_match \"*has no permissions to access the 'read' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 KEYS read]\n        assert_match \"*has no permissions to access the 'write' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 KEYS write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH KEYS KEYS rw]\n        assert_match \"*has no permissions to access the 'read' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH KEYS KEYS read]\n        assert_match \"*has no permissions to access the 'write' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH KEYS KEYS write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH2 KEYS 123 KEYS rw]\n        assert_match \"*has no permissions to access the 'read' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH2 KEYS 123 KEYS read]\n        assert_match \"*has no permissions to access the 'write' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH2 KEYS 123 KEYS write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH2 USER KEYS KEYS rw]\n        assert_match \"*has no permissions to access the 'read' key\" [r ACL DRYRUN command-test MIGRATE whatever whatever \"\" 0 5000 AUTH2 USER KEYS KEYS read]\n        assert_match \"*has no permissions to access the 'write' key\" [r ACL DRYRUN command-test 
MIGRATE whatever whatever \"\" 0 5000 AUTH2 USER KEYS KEYS write]\n\n        # Test SORT, which is marked with incomplete keys\n        assert_equal \"OK\" [r ACL DRYRUN command-test SORT read STORE write]\n        assert_match {*has no permissions to access the 'read' key*}  [r ACL DRYRUN command-test SORT read STORE read]\n        assert_match {*has no permissions to access the 'write' key*}  [r ACL DRYRUN command-test SORT write STORE write]\n\n        # Test EVAL, which uses the numkey keyspec (Also test EVAL_RO)\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL \"\" 1 rw1]\n        assert_match {*has no permissions to access the 'read' key*} [r ACL DRYRUN command-test EVAL \"\" 1 read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL_RO \"\" 1 rw1]\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL_RO \"\" 1 read]\n\n        # Read is an optional argument and not a key here, make sure we don't treat it as a key\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL \"\" 0 read]\n\n        # These are syntax errors, but it's 'OK' from an ACL perspective\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL \"\" -1 read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL \"\" 3 rw rw]\n        assert_equal \"OK\" [r ACL DRYRUN command-test EVAL \"\" 3 rw read]\n\n        # Test GEORADIUS which uses the last type of keyspec, keyword\n        assert_equal \"OK\" [r ACL DRYRUN command-test GEORADIUS read longitude latitude radius M STOREDIST write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test GEORADIUS read longitude latitude radius M]\n        assert_match {*has no permissions to access the 'read2' key*} [r ACL DRYRUN command-test GEORADIUS read1 longitude latitude radius M STOREDIST read2]\n        assert_match {*has no permissions to access the 'write1' key*} [r ACL DRYRUN command-test GEORADIUS write1 longitude latitude radius M STOREDIST write2]\n        assert_equal \"OK\" [r ACL DRYRUN 
command-test GEORADIUS read longitude latitude radius M STORE write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test GEORADIUS read longitude latitude radius M]\n        assert_match {*has no permissions to access the 'read2' key*} [r ACL DRYRUN command-test GEORADIUS read1 longitude latitude radius M STORE read2]\n        assert_match {*has no permissions to access the 'write1' key*} [r ACL DRYRUN command-test GEORADIUS write1 longitude latitude radius M STORE write2]\n    }\n\n    # Existence test commands are not marked as access since they are the result\n    # of a lot of write commands. We therefore make the claim they can be executed\n    # when either READ or WRITE flags are provided.\n    test {Existence test commands are not marked as access} {\n        assert_equal \"OK\" [r ACL DRYRUN command-test HEXISTS read foo]\n        assert_equal \"OK\" [r ACL DRYRUN command-test HEXISTS write foo]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test HEXISTS nothing foo]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test HSTRLEN read foo]\n        assert_equal \"OK\" [r ACL DRYRUN command-test HSTRLEN write foo]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test HSTRLEN nothing foo]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test SISMEMBER read foo]\n        assert_equal \"OK\" [r ACL DRYRUN command-test SISMEMBER write foo]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test SISMEMBER nothing foo]\n    }\n\n    # Unlike existence test commands, intersection cardinality commands process the data\n    # between keys and return an aggregated cardinality. 
Therefore they have the access\n    # requirement.\n    test {Intersection cardinality commands are access commands} {\n        assert_equal \"OK\" [r ACL DRYRUN command-test SINTERCARD 2 read read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN command-test SINTERCARD 2 write read]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test SINTERCARD 2 nothing read]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test ZCOUNT read 0 1]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN command-test ZCOUNT write 0 1]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test ZCOUNT nothing 0 1]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test PFCOUNT read read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN command-test PFCOUNT write read]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test PFCOUNT nothing read]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test ZINTERCARD 2 read read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN command-test ZINTERCARD 2 write read]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test ZINTERCARD 2 nothing read]\n    }\n\n    test {Test general keyspace commands require some type of permission to execute} {\n        assert_equal \"OK\" [r ACL DRYRUN command-test touch read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test touch write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test touch rw]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test touch nothing]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test exists read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test exists write]\n        assert_equal \"OK\" [r 
ACL DRYRUN command-test exists rw]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test exists nothing]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test MEMORY USAGE read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MEMORY USAGE write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test MEMORY USAGE rw]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test MEMORY USAGE nothing]\n\n        assert_equal \"OK\" [r ACL DRYRUN command-test TYPE read]\n        assert_equal \"OK\" [r ACL DRYRUN command-test TYPE write]\n        assert_equal \"OK\" [r ACL DRYRUN command-test TYPE rw]\n        assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test TYPE nothing]\n    }\n\n    test {Cardinality commands require some type of permission to execute} {\n        set commands {STRLEN HLEN LLEN SCARD ZCARD XLEN}\n        foreach command $commands {\n            assert_equal \"OK\" [r ACL DRYRUN command-test $command read]\n            assert_equal \"OK\" [r ACL DRYRUN command-test $command write]\n            assert_equal \"OK\" [r ACL DRYRUN command-test $command rw]\n            assert_match {*has no permissions to access the 'nothing' key*} [r ACL DRYRUN command-test $command nothing]\n        }\n    }\n\n    test {Test sharded channel permissions} {\n        r ACL setuser test-channels +@all resetchannels &channel\n        assert_equal \"OK\" [r ACL DRYRUN test-channels spublish channel foo]\n        assert_equal \"OK\" [r ACL DRYRUN test-channels ssubscribe channel]\n        assert_equal \"OK\" [r ACL DRYRUN test-channels sunsubscribe]\n        assert_equal \"OK\" [r ACL DRYRUN test-channels sunsubscribe channel]\n        assert_equal \"OK\" [r ACL DRYRUN test-channels sunsubscribe otherchannel]\n\n        assert_match {*has no permissions to access the 'otherchannel' channel*} [r ACL DRYRUN test-channels spublish otherchannel 
foo]\n        assert_match {*has no permissions to access the 'otherchannel' channel*} [r ACL DRYRUN test-channels ssubscribe otherchannel foo]\n    }\n\n    test {Test sort with ACL permissions} {\n        r set v1 1\n        r lpush mylist 1\n\n        r ACL setuser test-sort-acl on nopass (+sort ~mylist)\n        $r2 auth test-sort-acl nopass\n\n        catch {$r2 sort mylist by v*} e\n        assert_equal \"ERR BY option of SORT denied due to insufficient ACL permissions.\" $e\n        catch {$r2 sort mylist get v*} e\n        assert_equal \"ERR GET option of SORT denied due to insufficient ACL permissions.\" $e\n\n        r ACL setuser test-sort-acl (+sort ~mylist ~v*)\n        catch {$r2 sort mylist by v*} e\n        assert_equal \"ERR BY option of SORT denied due to insufficient ACL permissions.\" $e\n        catch {$r2 sort mylist get v*} e\n        assert_equal \"ERR GET option of SORT denied due to insufficient ACL permissions.\" $e\n\n        r ACL setuser test-sort-acl (+sort ~mylist %W~*)\n        catch {$r2 sort mylist by v*} e\n        assert_equal \"ERR BY option of SORT denied due to insufficient ACL permissions.\" $e\n        catch {$r2 sort mylist get v*} e\n        assert_equal \"ERR GET option of SORT denied due to insufficient ACL permissions.\" $e\n\n        r ACL setuser test-sort-acl (+sort ~mylist %R~*)\n        assert_equal \"1\" [$r2 sort mylist by v*]\n\n        # cleanup\n        r ACL deluser test-sort-acl\n        r del v1 mylist\n    }\n\n    test {Test DRYRUN with wrong number of arguments} {\n        r ACL setuser test-dry-run +@all ~v*\n\n        assert_equal \"OK\" [r ACL DRYRUN test-dry-run SET v v]\n\n        catch {r ACL DRYRUN test-dry-run SET v} e\n        assert_equal \"ERR wrong number of arguments for 'set' command\" $e\n\n        catch {r ACL DRYRUN test-dry-run SET} e\n        assert_equal \"ERR wrong number of 
arguments for 'set' command\" $e\n    }\n\n    $r2 close\n}\n\nset server_path [tmpdir \"selectors.acl\"]\nexec cp -f tests/assets/userwithselectors.acl $server_path\nexec cp -f tests/assets/default.conf $server_path\nstart_server [list overrides [list \"dir\" $server_path \"aclfile\" \"userwithselectors.acl\"] tags [list \"external:skip\"]] {\n\n    test {Test behavior of loading ACLs} {\n        set selectors [dict get [r ACL getuser alice] selectors]\n        assert_equal [llength $selectors] 1\n        set test_selector [lindex $selectors 0]\n        assert_equal \"-@all +get\" [dict get $test_selector \"commands\"]\n        assert_equal \"~rw*\" [dict get $test_selector \"keys\"]\n\n        set selectors [dict get [r ACL getuser bob] selectors]\n        assert_equal [llength $selectors] 2\n        set test_selector [lindex $selectors 0]\n        assert_equal \"-@all +set\" [dict get $test_selector \"commands\"]\n        assert_equal \"%W~w*\" [dict get $test_selector \"keys\"]\n\n        set test_selector [lindex $selectors 1]\n        assert_equal \"-@all +get\" [dict get $test_selector \"commands\"]\n        assert_equal \"%R~r*\" [dict get $test_selector \"keys\"]\n    }\n}\n"
  },
  {
    "path": "tests/unit/acl.tcl",
    "content": "start_server {tags {\"acl external:skip\"}} {\n    test {Connections start with the default user} {\n        r ACL WHOAMI\n    } {default}\n\n    test {It is possible to create new users} {\n        r ACL setuser newuser\n    }\n\n    test {Coverage: ACL USERS} {\n        r ACL USERS\n    } {default newuser}\n\n    test {Usernames can not contain spaces or null characters} {\n        catch {r ACL setuser \"a a\"} err\n        set err\n    } {*Usernames can't contain spaces or null characters*}\n\n    test {New users start disabled} {\n        r ACL setuser newuser >passwd1\n        catch {r AUTH newuser passwd1} err\n        set err\n    } {*WRONGPASS*}\n\n    test {Enabling the user allows the login} {\n        r ACL setuser newuser on +acl\n        r AUTH newuser passwd1\n        r ACL WHOAMI\n    } {newuser}\n\n    test {Only the set of correct passwords work} {\n        r ACL setuser newuser >passwd2\n        catch {r AUTH newuser passwd1} e\n        assert {$e eq \"OK\"}\n        catch {r AUTH newuser passwd2} e\n        assert {$e eq \"OK\"}\n        catch {r AUTH newuser passwd3} e\n        set e\n    } {*WRONGPASS*}\n\n    test {It is possible to remove passwords from the set of valid ones} {\n        r ACL setuser newuser <passwd1\n        catch {r AUTH newuser passwd1} e\n        set e\n    } {*WRONGPASS*}\n\n    test {Test password hashes can be added} {\n        r ACL setuser newuser #34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4e6\n        catch {r AUTH newuser passwd4} e\n        assert {$e eq \"OK\"}\n    }\n\n    test {Test password hashes validate input} {\n        # Validate Length\n        catch {r ACL setuser newuser #34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4e} e\n        # Validate character outside set\n        catch {r ACL setuser newuser #34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4eq} e\n        set e\n    } {*Error in ACL SETUSER modifier*}\n\n    test {ACL GETUSER 
returns the password hash instead of the actual password} {\n        set passstr [dict get [r ACL getuser newuser] passwords]\n        assert_match {*34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4e6*} $passstr\n        assert_no_match {*passwd4*} $passstr\n    }\n\n    test {Test hashed passwords removal} {\n        r ACL setuser newuser !34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4e6\n        set passstr [dict get [r ACL getuser newuser] passwords]\n        assert_no_match {*34344e4d60c2b6d639b7bd22e18f2b0b91bc34bf0ac5f9952744435093cfb4e6*} $passstr\n    }\n\n    test {By default users are not able to access any command} {\n        catch {r SET foo bar} e\n        set e\n    } {*NOPERM*set*}\n\n    test {By default users are not able to access any key} {\n        r ACL setuser newuser +set\n        catch {r SET foo bar} e\n        set e\n    } {*NOPERM*key*}\n\n    test {It's possible to allow the access of a subset of keys} {\n        r ACL setuser newuser allcommands ~foo:* ~bar:*\n        r SET foo:1 a\n        r SET bar:2 b\n        catch {r SET zap:3 c} e\n        r ACL setuser newuser allkeys; # Undo keys ACL\n        set e\n    } {*NOPERM*key*}\n\n    test {By default, only default user is able to publish to any channel} {\n        r AUTH default pwd\n        r PUBLISH foo bar\n        r ACL setuser psuser on >pspass +acl +client +@pubsub\n        r AUTH psuser pspass\n        catch {r PUBLISH foo bar} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {By default, only default user is able to publish to any shard channel} {\n        r AUTH default pwd\n        r SPUBLISH foo bar\n        r AUTH psuser pspass\n        catch {r SPUBLISH foo bar} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {By default, only default user is able to subscribe to any channel} {\n        set rd [redis_deferring_client]\n        $rd AUTH default pwd\n        $rd read\n        $rd SUBSCRIBE foo\n        assert_match {subscribe 
foo 1} [$rd read]\n        $rd UNSUBSCRIBE\n        $rd read\n        $rd AUTH psuser pspass\n        $rd read\n        $rd SUBSCRIBE foo\n        catch {$rd read} e\n        $rd close\n        set e\n    } {*NOPERM*channel*}\n\n    test {By default, only default user is able to subscribe to any shard channel} {\n        set rd [redis_deferring_client]\n        $rd AUTH default pwd\n        $rd read\n        $rd SSUBSCRIBE foo\n        assert_match {ssubscribe foo 1} [$rd read]\n        $rd SUNSUBSCRIBE\n        $rd read\n        $rd AUTH psuser pspass\n        $rd read\n        $rd SSUBSCRIBE foo\n        catch {$rd read} e\n        $rd close\n        set e\n    } {*NOPERM*channel*}\n\n    test {By default, only default user is able to subscribe to any pattern} {\n        set rd [redis_deferring_client]\n        $rd AUTH default pwd\n        $rd read\n        $rd PSUBSCRIBE bar*\n        assert_match {psubscribe bar\\* 1} [$rd read]\n        $rd PUNSUBSCRIBE\n        $rd read\n        $rd AUTH psuser pspass\n        $rd read\n        $rd PSUBSCRIBE bar*\n        catch {$rd read} e\n        $rd close\n        set e\n    } {*NOPERM*channel*}\n\n    test {It's possible to allow publishing to a subset of channels} {\n        r ACL setuser psuser resetchannels &foo:1 &bar:*\n        assert_equal {0} [r PUBLISH foo:1 somemessage]\n        assert_equal {0} [r PUBLISH bar:2 anothermessage]\n        catch {r PUBLISH zap:3 nosuchmessage} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {It's possible to allow publishing to a subset of shard channels} {\n        r ACL setuser psuser resetchannels &foo:1 &bar:*\n        assert_equal {0} [r SPUBLISH foo:1 somemessage]\n        assert_equal {0} [r SPUBLISH bar:2 anothermessage]\n        catch {r SPUBLISH zap:3 nosuchmessage} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {Validate subset of channels is prefixed with resetchannels flag} {\n        r ACL setuser hpuser on nopass resetchannels &foo +@all\n\n        # 
Verify resetchannels flag is prefixed before the channel name(s)\n        set users [r ACL LIST]\n        set curruser \"hpuser\"\n        foreach user [lshuffle $users] {\n            if {[string first $curruser $user] != -1} {\n                assert_equal {user hpuser on nopass sanitize-payload resetchannels &foo +@all} $user\n            }\n        }\n\n        # authenticate as hpuser\n        r AUTH hpuser pass\n\n        assert_equal {0} [r PUBLISH foo bar]\n        catch {r PUBLISH bar game} e\n\n        # Falling back to psuser for the below tests\n        r AUTH psuser pspass\n        r ACL deluser hpuser\n        set e\n    } {*NOPERM*channel*}\n\n    test {In transaction queue publish/subscribe/psubscribe to unauthorized channel will fail} {\n        r ACL setuser psuser +multi +discard\n        r MULTI\n        assert_error {*NOPERM*channel*} {r PUBLISH notexits helloworld}\n        r DISCARD\n        r MULTI\n        assert_error {*NOPERM*channel*} {r SUBSCRIBE notexits foo:1}\n        r DISCARD\n        r MULTI\n        assert_error {*NOPERM*channel*} {r PSUBSCRIBE notexits:* bar:*}\n        r DISCARD\n    }\n\n    test {It's possible to allow subscribing to a subset of channels} {\n        set rd [redis_deferring_client]\n        $rd AUTH psuser pspass\n        $rd read\n        $rd SUBSCRIBE foo:1\n        assert_match {subscribe foo:1 1} [$rd read]\n        $rd SUBSCRIBE bar:2\n        assert_match {subscribe bar:2 2} [$rd read]\n        $rd SUBSCRIBE zap:3\n        catch {$rd read} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {It's possible to allow subscribing to a subset of shard channels} {\n        set rd [redis_deferring_client]\n        $rd AUTH psuser pspass\n        $rd read\n        $rd SSUBSCRIBE foo:1\n        assert_match {ssubscribe foo:1 1} [$rd read]\n        $rd SSUBSCRIBE bar:2\n        assert_match {ssubscribe bar:2 2} [$rd read]\n        $rd SSUBSCRIBE zap:3\n        catch {$rd read} e\n        set e\n    } 
{*NOPERM*channel*}\n\n    test {It's possible to allow subscribing to a subset of channel patterns} {\n        set rd [redis_deferring_client]\n        $rd AUTH psuser pspass\n        $rd read\n        $rd PSUBSCRIBE foo:1\n        assert_match {psubscribe foo:1 1} [$rd read]\n        $rd PSUBSCRIBE bar:*\n        assert_match {psubscribe bar:\\* 2} [$rd read]\n        $rd PSUBSCRIBE bar:baz\n        catch {$rd read} e\n        set e\n    } {*NOPERM*channel*}\n\n    test {Subscribers are killed when revoked of channel permission} {\n        # This test covers the case where SETUSER is issued over the subscriber's own connection\n        set rd [redis_deferring_client]\n        r ACL setuser psuser resetchannels &foo:1\n        # we must use RESP 3 since the AUTH command is not supported over a subscribed client with RESP2\n        $rd HELLO 3 AUTH psuser pspass\n        $rd read\n        $rd CLIENT SETNAME deathrow\n        $rd read\n        $rd SUBSCRIBE foo:1\n        assert_match {subscribe foo:1 1} [$rd read]\n        $rd ACL setuser psuser resetchannels\n        assert_match {OK} [$rd read]\n        # 'psuser' no longer has access to the \"foo:1\" channel, so they should get disconnected\n        catch {$rd read} e\n        assert_match {*I/O error*} $e\n        assert_no_match {*deathrow*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {Subscribers are killed when revoked of channel permission} {\n        set rd [redis_deferring_client]\n        r ACL setuser psuser resetchannels &foo:1\n        $rd AUTH psuser pspass\n        $rd read\n        $rd CLIENT SETNAME deathrow\n        $rd read\n        $rd SUBSCRIBE foo:1\n        $rd read\n        r ACL setuser psuser resetchannels\n        assert_no_match {*deathrow*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {Subscribers are killed when revoked of shard channel permission} {\n        set rd [redis_deferring_client]\n        r ACL setuser psuser resetchannels &foo:1\n        $rd AUTH psuser pspass\n        
$rd read\n        $rd CLIENT SETNAME deathrow\n        $rd read\n        $rd SSUBSCRIBE foo:1\n        $rd read\n        r ACL setuser psuser resetchannels\n        assert_no_match {*deathrow*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {Subscribers are killed when revoked of pattern permission} {\n        set rd [redis_deferring_client]\n        r ACL setuser psuser resetchannels &bar:*\n        $rd AUTH psuser pspass\n        $rd read\n        $rd CLIENT SETNAME deathrow\n        $rd read\n        $rd PSUBSCRIBE bar:*\n        $rd read\n        r ACL setuser psuser resetchannels\n        assert_no_match {*deathrow*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {Subscribers are killed when revoked of allchannels permission} {\n        set rd [redis_deferring_client]\n        r ACL setuser psuser allchannels\n        $rd AUTH psuser pspass\n        $rd read\n        $rd CLIENT SETNAME deathrow\n        $rd read\n        $rd PSUBSCRIBE foo\n        $rd read\n        r ACL setuser psuser resetchannels\n        assert_no_match {*deathrow*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {Subscribers are pardoned if literal permissions are retained and/or gaining allchannels} {\n        set rd [redis_deferring_client]\n        r ACL setuser psuser resetchannels &foo:1 &bar:* &orders\n        $rd AUTH psuser pspass\n        $rd read\n        $rd CLIENT SETNAME pardoned\n        $rd read\n        $rd SUBSCRIBE foo:1\n        $rd read\n        $rd SSUBSCRIBE orders\n        $rd read\n        $rd PSUBSCRIBE bar:*\n        $rd read\n        r ACL setuser psuser resetchannels &foo:1 &bar:* &orders &baz:qaz &zoo:*\n        assert_match {*pardoned*} [r CLIENT LIST]\n        r ACL setuser psuser allchannels\n        assert_match {*pardoned*} [r CLIENT LIST]\n        $rd close\n    } {0}\n\n    test {blocked command gets rejected when reprocessed after permission change} {\n        r auth default \"\"\n        r config resetstat\n        set 
rd [redis_deferring_client]\n        r ACL setuser psuser reset on nopass +@all allkeys\n        $rd AUTH psuser pspass\n        $rd read\n        $rd BLPOP list1 0\n        wait_for_blocked_client\n        r ACL setuser psuser resetkeys\n        r LPUSH list1 foo\n        assert_error {*NOPERM No permissions to access a key*} {$rd read}\n        $rd ping\n        $rd close\n        assert_match {*calls=0,usec=0,*,rejected_calls=1,failed_calls=0*} [cmdrstat blpop r]\n    }\n\n    test {Users can be configured to authenticate with any password} {\n        r ACL setuser newuser nopass\n        r AUTH newuser zipzapblabla\n    } {OK}\n\n    test {ACLs can exclude single commands} {\n        r ACL setuser newuser -ping\n        r INCR mycounter ; # Should not raise an error\n        catch {r PING} e\n        set e\n    } {*NOPERM*ping*}\n\n    test {ACLs can include or exclude whole classes of commands} {\n        r ACL setuser newuser -@all +@set +acl\n        r SADD myset a b c; # Should not raise an error\n        r ACL setuser newuser +@all -@string\n        r SADD myset a b c; # Again should not raise an error\n        # String commands instead should raise an error\n        catch {r SET foo bar} e\n        r ACL setuser newuser allcommands; # Undo commands ACL\n        set e\n    } {*NOPERM*set*}\n\n    test {ACLs can include single subcommands} {\n        r ACL setuser newuser +@all -client\n        r ACL setuser newuser +client|id +client|setname\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {+@all*-client*+client|id*} $cmdstr\n        assert_match {+@all*-client*+client|setname*} $cmdstr\n        r CLIENT ID; # Should not fail\n        r CLIENT SETNAME foo ; # Should not fail\n        catch {r CLIENT KILL type master} e\n        set e\n    } {*NOPERM*client|kill*}\n\n    test {ACLs can exclude single subcommands, case 1} {\n        r ACL setuser newuser +@all -client|kill\n        set cmdstr [dict get [r ACL getuser 
newuser] commands]\n        assert_equal {+@all -client|kill} $cmdstr\n        r CLIENT ID; # Should not fail\n        r CLIENT SETNAME foo ; # Should not fail\n        catch {r CLIENT KILL type master} e\n        set e\n    } {*NOPERM*client|kill*}\n\n    test {ACLs can exclude single subcommands, case 2} {\n        r ACL setuser newuser -@all +acl +config -config|set\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {*+config*} $cmdstr\n        assert_match {*-config|set*} $cmdstr\n        r CONFIG GET loglevel; # Should not fail\n        catch {r CONFIG SET loglevel debug} e\n        set e\n    } {*NOPERM*config|set*}\n\n    test {ACLs cannot include a subcommand with a specific arg} {\n        r ACL setuser newuser +@all -config|get\n        catch { r ACL setuser newuser +config|get|appendonly} e\n        set e\n    } {*Allowing first-arg of a subcommand is not supported*}\n\n    test {ACLs cannot exclude or include a container command with a specific arg} {\n        r ACL setuser newuser +@all +config|get\n        catch { r ACL setuser newuser +@all +config|asdf} e\n        assert_match \"*Unknown command or category name in ACL*\" $e\n        catch { r ACL setuser newuser +@all -config|asdf} e\n        assert_match \"*Unknown command or category name in ACL*\" $e\n    } {}\n\n    test {ACLs cannot exclude or include a container command with two args} {\n        r ACL setuser newuser +@all +config|get\n        catch { r ACL setuser newuser +@all +get|key1|key2} e\n        assert_match \"*Unknown command or category name in ACL*\" $e\n        catch { r ACL setuser newuser +@all -get|key1|key2} e\n        assert_match \"*Unknown command or category name in ACL*\" $e\n    } {}\n\n    test {ACLs including a type also include its subcommands} {\n        r ACL setuser newuser -@all +del +acl +@stream\n        r DEL key\n        r XADD key * field value\n        r XINFO STREAM key\n    }\n\n    test {ACLs can block SELECT of all 
but a specific DB} {\n        r ACL setuser newuser -@all +acl +select|0\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {*+select|0*} $cmdstr\n        r SELECT 0\n        catch {r SELECT 1} e\n        set e\n    } {*NOPERM*select*} {singledb:skip}\n\n    test {ACLs can block all DEBUG subcommands except one} {\n        r ACL setuser newuser -@all +acl +del +incr +debug|object\n        r DEL key\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {*+debug|object*} $cmdstr\n        r INCR key\n        r DEBUG OBJECT key\n        catch {r DEBUG SEGFAULT} e\n        set e\n    } {*NOPERM*debug*}\n\n    test {ACLs set can include subcommands, if already full command exists} {\n        r ACL setuser bob +memory|doctor\n        set cmdstr [dict get [r ACL getuser bob] commands]\n        assert_equal {-@all +memory|doctor} $cmdstr\n\n        # Validate the commands have got engulfed to +memory.\n        r ACL setuser bob +memory\n        set cmdstr [dict get [r ACL getuser bob] commands]\n        assert_equal {-@all +memory} $cmdstr\n\n        # Appending to the existing access string of bob.\n        r ACL setuser bob +@all +client|id\n        # Although this does nothing, we retain it anyways so we can reproduce\n        # the original ACL. 
\n        set cmdstr [dict get [r ACL getuser bob] commands]\n        assert_equal {+@all +client|id} $cmdstr\n\n        r ACL setuser bob >passwd1 on\n        r AUTH bob passwd1\n        r CLIENT ID; # Should not fail\n        r MEMORY DOCTOR; # Should not fail\n    }\n\n    test {ACLs set can exclude subcommands, if already full command exists} {\n        r ACL setuser alice +@all -memory|doctor\n        set cmdstr [dict get [r ACL getuser alice] commands]\n        assert_equal {+@all -memory|doctor} $cmdstr\n\n        r ACL setuser alice >passwd1 on\n        r AUTH alice passwd1\n\n        assert_error {*NOPERM*memory|doctor*} {r MEMORY DOCTOR}\n        r MEMORY STATS ;# should work\n\n        # Validate the commands have got engulfed to -memory.\n        r ACL setuser alice +@all -memory\n        set cmdstr [dict get [r ACL getuser alice] commands]\n        assert_equal {+@all -memory} $cmdstr\n\n        assert_error {*NOPERM*memory|doctor*} {r MEMORY DOCTOR}\n        assert_error {*NOPERM*memory|stats*} {r MEMORY STATS}\n\n        # Appending to the existing access string of alice.\n        r ACL setuser alice -@all\n\n        # Now, alice can't do anything, so we need to auth newuser to execute ACL GETUSER\n        r AUTH newuser passwd1\n\n        # Validate the new commands have got engulfed to -@all.\n        set cmdstr [dict get [r ACL getuser alice] commands]\n        assert_equal {-@all} $cmdstr\n\n        r AUTH alice passwd1\n\n        assert_error {*NOPERM*get*} {r GET key}\n        assert_error {*NOPERM*memory|stats*} {r MEMORY STATS}\n\n        # Auth newuser before the next test\n        r AUTH newuser passwd1\n    }\n\n    test {ACL SETUSER RESET reverting to default newly created user} {\n        set current_user \"example\"\n        r ACL DELUSER $current_user\n        r ACL SETUSER $current_user\n\n        set users [r ACL LIST]\n        foreach user [lshuffle $users] {\n            if {[string first $current_user $user] != -1} {\n                
set current_user_output $user\n            }\n        }\n\n        r ACL SETUSER $current_user reset\n        set users [r ACL LIST]\n        foreach user [lshuffle $users] {\n            if {[string first $current_user $user] != -1} {\n                assert_equal $current_user_output $user\n            }\n        }\n    }\n\n    # Note that the order of the generated ACL rules is not stable in Redis,\n    # so we need to match the individual parts rather than the whole string.\n    test {ACL GETUSER is able to translate back command permissions} {\n        # Subtractive\n        r ACL setuser newuser reset +@all ~* -@string +incr -debug +debug|digest\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {*+@all*} $cmdstr\n        assert_match {*-@string*} $cmdstr\n        assert_match {*+incr*} $cmdstr\n        assert_match {*-debug +debug|digest*} $cmdstr\n\n        # Additive\n        r ACL setuser newuser reset +@string -incr +acl +debug|digest +debug|segfault\n        set cmdstr [dict get [r ACL getuser newuser] commands]\n        assert_match {*-@all*} $cmdstr\n        assert_match {*+@string*} $cmdstr\n        assert_match {*-incr*} $cmdstr\n        assert_match {*+debug|digest*} $cmdstr\n        assert_match {*+debug|segfault*} $cmdstr\n        assert_match {*+acl*} $cmdstr\n    }\n\n    # A regression test to make sure that as long as there is a simple\n    # category defining the commands, it will be used as is.\n    test {ACL GETUSER provides reasonable results} {\n        set categories [r ACL CAT]\n\n        # Test that adding each single category will\n        # result in just that category with both +@all and -@all\n        foreach category $categories {\n            # Test for future commands where allowed\n            r ACL setuser additive reset +@all \"-@$category\"\n            set cmdstr [dict get [r ACL getuser additive] commands]\n            assert_equal \"+@all -@$category\" $cmdstr\n\n            # Test for 
future commands where disallowed\n            r ACL setuser restrictive reset -@all \"+@$category\"\n            set cmdstr [dict get [r ACL getuser restrictive] commands]\n            assert_equal \"-@all +@$category\" $cmdstr\n        }\n    }\n\n    # Test that only lossless compaction of ACLs occurs.\n    test {ACL GETUSER provides correct results} {\n        r ACL SETUSER adv-test\n        r ACL SETUSER adv-test +@all -@hash -@slow +hget\n        assert_equal \"+@all -@hash -@slow +hget\" [dict get [r ACL getuser adv-test] commands]\n\n        # Categories are re-ordered if re-added\n        r ACL SETUSER adv-test -@hash\n        assert_equal \"+@all -@slow +hget -@hash\" [dict get [r ACL getuser adv-test] commands]\n\n        # Inverting categories removes existing categories\n        r ACL SETUSER adv-test +@hash\n        assert_equal \"+@all -@slow +hget +@hash\" [dict get [r ACL getuser adv-test] commands]\n\n        # Inverting the all category compacts everything\n        r ACL SETUSER adv-test -@all\n        assert_equal \"-@all\" [dict get [r ACL getuser adv-test] commands]\n        r ACL SETUSER adv-test -@string -@slow +@all\n        assert_equal \"+@all\" [dict get [r ACL getuser adv-test] commands]\n\n        # Make sure categories are case insensitive\n        r ACL SETUSER adv-test -@all +@HASH +@hash +@HaSh\n        assert_equal \"-@all +@hash\" [dict get [r ACL getuser adv-test] commands]\n\n        # Make sure commands are case insensitive\n        r ACL SETUSER adv-test -@all +HGET +hget +hGeT\n        assert_equal \"-@all +hget\" [dict get [r ACL getuser adv-test] commands]\n\n        # Arbitrary category additions and removals are handled\n        r ACL SETUSER adv-test -@all +@hash +@slow +@set +@set +@slow +@hash\n        assert_equal \"-@all +@set +@slow +@hash\" [dict get [r ACL getuser adv-test] commands]\n\n        # Arbitrary command additions and removals are handled\n        r ACL SETUSER adv-test -@all +hget -hset +hset -hget\n     
   assert_equal \"-@all +hset -hget\" [dict get [r ACL getuser adv-test] commands]\n\n        # Arbitrary subcommands are compacted\n        r ACL SETUSER adv-test -@all +client|list +client|list +config|get +config +acl|list -acl\n        assert_equal \"-@all +client|list +config -acl\" [dict get [r ACL getuser adv-test] commands]\n\n        # Deprecated subcommand usage is handled\n        r ACL SETUSER adv-test -@all +select|0 +select|0 +debug|segfault +debug\n        assert_equal \"-@all +select|0 +debug\" [dict get [r ACL getuser adv-test] commands]\n\n        # Unnecessary categories are retained for potential future compatibility\n        r ACL SETUSER adv-test -@all -@dangerous\n        assert_equal \"-@all -@dangerous\" [dict get [r ACL getuser adv-test] commands]\n\n        # Duplicate categories are compressed, regression test for #12470\n        r ACL SETUSER adv-test -@all +config +config|get -config|set +config\n        assert_equal \"-@all +config\" [dict get [r ACL getuser adv-test] commands]\n    }\n\n    test \"ACL CAT with illegal arguments\" {\n        assert_error {*Unknown category 'NON_EXISTS'} {r ACL CAT NON_EXISTS}\n        assert_error {*unknown subcommand or wrong number of arguments for 'CAT'*} {r ACL CAT NON_EXISTS NON_EXISTS2}\n    }\n\n    test \"ACL CAT without category - list all categories\" {\n        set categories [r acl cat]\n        assert_not_equal [lsearch $categories \"keyspace\"] -1\n        assert_not_equal [lsearch $categories \"connection\"] -1\n    }\n\n    test \"ACL CAT category - list all commands/subcommands that belong to category\" {\n        assert_not_equal [lsearch [r acl cat transaction] \"multi\"] -1\n        assert_not_equal [lsearch [r acl cat scripting] \"function|list\"] -1\n\n        # Negative check to make sure it doesn't actually return all commands.\n        assert_equal [lsearch [r acl cat keyspace] \"set\"] -1\n        assert_equal [lsearch [r acl cat stream] \"get\"] -1\n    }\n\n    test \"ACL 
requires explicit permission for scripting for EVAL_RO, EVALSHA_RO and FCALL_RO\" {\n        r ACL SETUSER scripter on nopass +readonly\n        assert_match {*has no permissions to run the 'eval_ro' command*} [r ACL DRYRUN scripter EVAL_RO \"\" 0]\n        assert_match {*has no permissions to run the 'evalsha_ro' command*} [r ACL DRYRUN scripter EVALSHA_RO \"\" 0]\n        assert_match {*has no permissions to run the 'fcall_ro' command*} [r ACL DRYRUN scripter FCALL_RO \"\" 0]\n    }\n\n    test {ACL #5998 regression: memory leaks adding / removing subcommands} {\n        r AUTH default \"\"\n        r ACL setuser newuser reset -debug +debug|a +debug|b +debug|c\n        r ACL setuser newuser -debug\n        # The test framework will detect a leak if any.\n    }\n\n    test {ACL LOG aggregates similar errors together and assigns unique entry-id to new errors} {\n         r ACL LOG RESET\n         r ACL setuser user1 >foo\n         assert_error \"*WRONGPASS*\" {r AUTH user1 doo}\n         set entry_id_initial_error [dict get [lindex [r ACL LOG] 0] entry-id]\n         set timestamp_created_original [dict get [lindex [r ACL LOG] 0] timestamp-created]\n         set timestamp_last_update_original [dict get [lindex [r ACL LOG] 0] timestamp-last-updated]\n         after 1\n         for {set j 0} {$j < 10} {incr j} {\n             assert_error \"*WRONGPASS*\" {r AUTH user1 doo}\n         }\n         set entry_id_lastest_error [dict get [lindex [r ACL LOG] 0] entry-id]\n         set timestamp_created_updated [dict get [lindex [r ACL LOG] 0] timestamp-created]\n         set timestamp_last_updated_after_update [dict get [lindex [r ACL LOG] 0] timestamp-last-updated]\n         assert {$entry_id_lastest_error eq $entry_id_initial_error}\n         assert {$timestamp_last_update_original < $timestamp_last_updated_after_update}\n         assert {$timestamp_created_original eq $timestamp_created_updated}\n         r ACL setuser user2 >doo\n         assert_error \"*WRONGPASS*\" {r 
AUTH user2 foo}\n         set new_error_entry_id [dict get [lindex [r ACL LOG] 0] entry-id]\n         assert {$new_error_entry_id eq $entry_id_lastest_error + 1 }\n    }\n\n    test {ACL LOG shows failed command executions at toplevel} {\n        r ACL LOG RESET\n        r ACL setuser antirez >foo on +set ~object:1234\n        r ACL setuser antirez +eval +multi +exec\n        r ACL setuser antirez resetchannels +publish\n        r AUTH antirez foo\n        assert_error \"*NOPERM*get*\" {r GET foo}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {antirez}}\n        assert {[dict get $entry context] eq {toplevel}}\n        assert {[dict get $entry reason] eq {command}}\n        assert {[dict get $entry object] eq {get}}\n        assert_match {*cmd=get*} [dict get $entry client-info]\n    }\n\n    test \"ACL LOG shows failed subcommand executions at toplevel\" {\n        r ACL LOG RESET\n        r ACL DELUSER demo\n        r ACL SETUSER demo on nopass\n        r AUTH demo \"\"\n        assert_error \"*NOPERM*script|help*\" {r SCRIPT HELP}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry username] {demo}\n        assert_equal [dict get $entry context] {toplevel}\n        assert_equal [dict get $entry reason] {command}\n        assert_equal [dict get $entry object] {script|help}\n    }\n\n    test {ACL LOG is able to test similar events} {\n        r ACL LOG RESET\n        r AUTH antirez foo\n        catch {r GET foo}\n        catch {r GET foo}\n        catch {r GET foo}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry count] == 3}\n    }\n\n    test {ACL LOG is able to log keys access violations and key name} {\n        r AUTH antirez foo\n        catch {r SET somekeynotallowed 1234}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get 
$entry reason] eq {key}}\n        assert {[dict get $entry object] eq {somekeynotallowed}}\n    }\n\n    test {ACL LOG is able to log channel access violations and channel name} {\n        r AUTH antirez foo\n        catch {r PUBLISH somechannelnotallowed nullmsg}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry reason] eq {channel}}\n        assert {[dict get $entry object] eq {somechannelnotallowed}}\n    }\n\n    test {ACL LOG RESET is able to flush the entries in the log} {\n        r ACL LOG RESET\n        assert {[llength [r ACL LOG]] == 0}\n    }\n\n    test {ACL LOG can distinguish the transaction context (1)} {\n        r AUTH antirez foo\n        r MULTI\n        catch {r INCR foo}\n        catch {r EXEC}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry context] eq {multi}}\n        assert {[dict get $entry object] eq {incr}}\n    }\n\n    test {ACL LOG can distinguish the transaction context (2)} {\n        set rd1 [redis_deferring_client]\n        r ACL SETUSER antirez +incr\n\n        r AUTH antirez foo\n        r MULTI\n        r INCR object:1234\n        $rd1 ACL SETUSER antirez -incr\n        $rd1 read\n        catch {r EXEC}\n        $rd1 close\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry context] eq {multi}}\n        assert {[dict get $entry object] eq {incr}}\n        assert_match {*cmd=exec*} [dict get $entry client-info]\n        r ACL SETUSER antirez -incr\n    }\n\n    test {ACL can log errors in the context of Lua scripting} {\n        r AUTH antirez foo\n        catch {r EVAL {redis.call('incr','foo')} 0}\n        r AUTH default \"\"\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry context] eq {lua}}\n        assert {[dict get $entry object] eq {incr}}\n        assert_match {*cmd=eval*} [dict get $entry client-info]\n    }\n\n    test {ACL LOG 
can accept a numerical argument to show fewer entries} {\n        r AUTH antirez foo\n        catch {r INCR foo}\n        catch {r INCR foo}\n        catch {r INCR foo}\n        catch {r INCR foo}\n        r AUTH default \"\"\n        assert {[llength [r ACL LOG]] > 1}\n        assert {[llength [r ACL LOG 2]] == 2}\n    }\n\n    test {ACL LOG can log failed auth attempts} {\n        catch {r AUTH antirez wrong-password}\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry context] eq {toplevel}}\n        assert {[dict get $entry reason] eq {auth}}\n        assert {[dict get $entry object] eq {AUTH}}\n        assert {[dict get $entry username] eq {antirez}}\n    }\n\n    test {ACL LOG - zero max length is correctly handled} {\n        r ACL LOG RESET\n        r CONFIG SET acllog-max-len 0\n        for {set j 0} {$j < 10} {incr j} {\n            catch {r SET obj:$j 123}\n        }\n        r AUTH default \"\"\n        assert {[llength [r ACL LOG]] == 0}\n    }\n\n    test {ACL LOG entries are limited to a maximum amount} {\n        r ACL LOG RESET\n        r CONFIG SET acllog-max-len 5\n        r AUTH antirez foo\n        for {set j 0} {$j < 10} {incr j} {\n            catch {r SET obj:$j 123}\n        }\n        r AUTH default \"\"\n        assert {[llength [r ACL LOG]] == 5}\n    }\n\n    test {ACL LOG entries are still present on update of max len config} {\n        r CONFIG SET acllog-max-len 0\n        assert {[llength [r ACL LOG]] == 5}\n    }\n\n    test {When default user is off, new connections are not authenticated} {\n        r ACL setuser default off\n        catch {set rd1 [redis_deferring_client]} e\n        r ACL setuser default on\n        set e\n    } {*NOAUTH*}\n\n    test {When default user has no command permission, hello command still works for other users} {\n        r ACL setuser secure-user >supass on +@all\n        r ACL setuser default -@all\n        r HELLO 2 AUTH secure-user supass\n        r ACL setuser default nopass 
+@all\n        r AUTH default \"\"\n    }\n\n    test {When an authentication chain is used in the HELLO cmd, the last auth cmd has precedence} {\n        r ACL setuser secure-user1 >supass on +@all\n        r ACL setuser secure-user2 >supass on +@all\n        r HELLO 2 AUTH secure-user pass AUTH secure-user2 supass AUTH secure-user1 supass\n        assert {[r ACL whoami] eq {secure-user1}}\n        catch {r HELLO 2 AUTH secure-user supass AUTH secure-user2 supass AUTH secure-user pass} e\n        assert_match \"WRONGPASS invalid username-password pair or user is disabled.\" $e\n        assert {[r ACL whoami] eq {secure-user1}}\n    }\n\n    test {When a setname chain is used in the HELLO cmd, the last setname cmd has precedence} {\n        r HELLO 2 setname client1 setname client2 setname client3 setname client4\n        assert {[r client getname] eq {client4}}\n        catch {r HELLO 2 setname client5 setname client6 setname \"client name\"} e\n        assert_match \"ERR Client names cannot contain spaces, newlines or special characters.\" $e\n        assert {[r client getname] eq {client4}}\n    }\n\n    test {When authentication fails in the HELLO cmd, the client setname should not be applied} {\n        r client setname client0\n        catch {r HELLO 2 AUTH user pass setname client1} e\n        assert_match \"WRONGPASS invalid username-password pair or user is disabled.\" $e\n        assert {[r client getname] eq {client0}}\n    }\n\n    test {ACL HELP should not have unexpected options} {\n        catch {r ACL help xxx} e\n        assert_match \"*wrong number of arguments for 'acl|help' command\" $e\n    }\n\n    test {Delete a user that the client doesn't use} {\n        r ACL setuser not_used on >passwd\n        assert {[r ACL deluser not_used] == 1}\n        # The client is not closed\n        assert {[r ping] eq {PONG}}\n    }\n\n    test {Delete a user that the client is using} {\n        r ACL setuser using on +acl >passwd\n        r AUTH using 
passwd\n        # The client will receive the reply normally\n        assert {[r ACL deluser using] == 1}\n        # The client is closed\n        catch {[r ping]} e\n        assert_match \"*I/O error*\" $e\n    }\n\n    test {ACL GENPASS rejects invalid arguments} {\n       catch {r ACL genpass -236} err1\n       catch {r ACL genpass 5000} err2\n       assert_match \"*ACL GENPASS argument must be the number*\" $err1\n       assert_match \"*ACL GENPASS argument must be the number*\" $err2\n    }\n\n    test {Default user cannot be removed} {\n       catch {r ACL deluser default} err\n       set err\n    } {ERR The 'default' user cannot be removed}\n\n    test {ACL load non-existing configured ACL file} {\n       catch {r ACL load} err\n       set err\n    } {*Redis instance is not configured to use an ACL file*}\n\n    # If there is an AUTH failure the metric increases\n    test {ACL-Metrics user AUTH failure} {\n        set current_auth_failures [s acl_access_denied_auth]\n        set current_invalid_cmd_accesses [s acl_access_denied_cmd]\n        set current_invalid_key_accesses [s acl_access_denied_key]\n        set current_invalid_channel_accesses [s acl_access_denied_channel]\n        assert_error \"*WRONGPASS*\" {r AUTH notrealuser 1233456}\n        assert {[s acl_access_denied_auth] eq [expr $current_auth_failures + 1]}\n        assert_error \"*WRONGPASS*\" {r HELLO 3 AUTH notrealuser 1233456}\n        assert {[s acl_access_denied_auth] eq [expr $current_auth_failures + 2]}\n        assert_error \"*WRONGPASS*\" {r HELLO 2 AUTH notrealuser 1233456}\n        assert {[s acl_access_denied_auth] eq [expr $current_auth_failures + 3]}\n        assert {[s acl_access_denied_cmd] eq $current_invalid_cmd_accesses}\n        assert {[s acl_access_denied_key] eq $current_invalid_key_accesses}\n        assert {[s acl_access_denied_channel] eq $current_invalid_channel_accesses}\n    }\n\n    # If a user tries to access an unauthorized command the metric increases\n    test 
{ACL-Metrics invalid command accesses} {\n        set current_auth_failures [s acl_access_denied_auth]\n        set current_invalid_cmd_accesses [s acl_access_denied_cmd]\n        set current_invalid_key_accesses [s acl_access_denied_key]\n        set current_invalid_channel_accesses [s acl_access_denied_channel]\n        r ACL setuser invalidcmduser on >passwd nocommands\n        r AUTH invalidcmduser passwd\n        assert_error \"*no permissions to run the * command*\" {r acl list}\n        r AUTH default \"\"\n        assert {[s acl_access_denied_auth] eq $current_auth_failures}\n        assert {[s acl_access_denied_cmd] eq [expr $current_invalid_cmd_accesses + 1]}\n        assert {[s acl_access_denied_key] eq $current_invalid_key_accesses}\n        assert {[s acl_access_denied_channel] eq $current_invalid_channel_accesses}\n    }\n\n    # If a user tries to access an unauthorized key the metric increases\n    test {ACL-Metrics invalid key accesses} {\n        set current_auth_failures [s acl_access_denied_auth]\n        set current_invalid_cmd_accesses [s acl_access_denied_cmd]\n        set current_invalid_key_accesses [s acl_access_denied_key]\n        set current_invalid_channel_accesses [s acl_access_denied_channel]\n        r ACL setuser invalidkeyuser on >passwd resetkeys allcommands\n        r AUTH invalidkeyuser passwd\n        assert_error \"*NOPERM*key*\" {r get x}\n        r AUTH default \"\"\n        assert {[s acl_access_denied_auth] eq $current_auth_failures}\n        assert {[s acl_access_denied_cmd] eq $current_invalid_cmd_accesses}\n        assert {[s acl_access_denied_key] eq [expr $current_invalid_key_accesses + 1]}\n        assert {[s acl_access_denied_channel] eq $current_invalid_channel_accesses}\n    }\n\n    # If a user tries to access an unauthorized channel the metric increases\n    test {ACL-Metrics invalid channel accesses} {\n        set current_auth_failures [s acl_access_denied_auth]\n        set current_invalid_cmd_accesses [s 
acl_access_denied_cmd]\n        set current_invalid_key_accesses [s acl_access_denied_key]\n        set current_invalid_channel_accesses [s acl_access_denied_channel]\n        r ACL setuser invalidchanneluser on >passwd resetchannels allcommands\n        r AUTH invalidchanneluser passwd\n        assert_error \"*NOPERM*channel*\" {r subscribe x}\n        r AUTH default \"\"\n        assert {[s acl_access_denied_auth] eq $current_auth_failures}\n        assert {[s acl_access_denied_cmd] eq $current_invalid_cmd_accesses}\n        assert {[s acl_access_denied_key] eq $current_invalid_key_accesses}\n        assert {[s acl_access_denied_channel] eq [expr $current_invalid_channel_accesses + 1]}\n    }\n}\n\nset server_path [tmpdir \"server.acl\"]\nexec cp -f tests/assets/user.acl $server_path\nstart_server [list overrides [list \"dir\" $server_path \"acl-pubsub-default\" \"allchannels\" \"aclfile\" \"user.acl\"] tags [list \"external:skip\"]] {\n    # user alice on allcommands allkeys &* >alice\n    # user bob on -@all +@set +acl ~set* &* >bob\n    # user default on nopass ~* &* +@all\n\n    test {default: load from include file, can access any channels} {\n        r SUBSCRIBE foo\n        r PSUBSCRIBE bar*\n        r UNSUBSCRIBE\n        r PUNSUBSCRIBE\n        r PUBLISH hello world\n    }\n\n    test {default: with config acl-pubsub-default allchannels after reset, can access any channels} {\n        r ACL setuser default reset on nopass ~* +@all\n        r SUBSCRIBE foo\n        r PSUBSCRIBE bar*\n        r UNSUBSCRIBE\n        r PUNSUBSCRIBE\n        r PUBLISH hello world\n    }\n\n    test {default: with config acl-pubsub-default resetchannels after reset, cannot access any channels} {\n        r CONFIG SET acl-pubsub-default resetchannels\n        r ACL setuser default reset on nopass ~* +@all\n        assert_error {*NOPERM*channel*} {r SUBSCRIBE foo}\n        assert_error {*NOPERM*channel*} {r PSUBSCRIBE bar*}\n        assert_error {*NOPERM*channel*} {r PUBLISH hello 
world}\n        r CONFIG SET acl-pubsub-default resetchannels\n    }\n\n    test {Alice: can execute all commands} {\n        r AUTH alice alice\n        assert_equal \"alice\" [r acl whoami]\n        r SET key value\n    }\n\n    test {Bob: can only execute @set and acl commands} {\n        r AUTH bob bob\n        assert_equal \"bob\" [r acl whoami]\n        assert_equal \"3\" [r sadd set 1 2 3]\n        catch {r SET key value} e\n        set e\n    } {*NOPERM*set*}\n\n    test {ACL LOAD only disconnects affected clients} {\n        reconnect\n        r ACL SETUSER doug on nopass resetchannels &test* +@all ~*\n\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $rd1 AUTH alice alice\n        $rd1 read\n        $rd1 SUBSCRIBE test1\n        $rd1 read\n\n        $rd2 AUTH doug doug\n        $rd2 read\n        $rd2 SUBSCRIBE test1\n        $rd2 read\n\n        r ACL LOAD\n        r PUBLISH test1 test-message\n\n        # Permissions for 'alice' haven't changed, so they should still be connected\n        assert_match {*test-message*} [$rd1 read]\n\n        # 'doug' no longer has access to \"test1\" channel, so they should get disconnected\n        catch {$rd2 read} e\n        assert_match {*I/O error*} $e\n\n        $rd1 close\n        $rd2 close\n    }\n\n    test {ACL LOAD disconnects affected subscriber} {\n        # This test covers the case where the LOAD is requested on the subscriber's own connection\n        reconnect\n        r ACL SETUSER doug on nopass resetchannels &test* +@all ~*\n\n        set rd1 [redis_deferring_client]\n\n        # We must use RESP3, since the AUTH command is not supported on a subscribed client with RESP2\n        $rd1 HELLO 3 AUTH doug doug\n        $rd1 read\n        $rd1 SUBSCRIBE test1\n        $rd1 read\n\n        $rd1 ACL LOAD\n        assert_match {OK} [$rd1 read]\n\n        # 'doug' no longer has access to \"test1\" channel, so they should get disconnected\n        catch {$rd1 read} e\n        assert_match 
{*I/O error*} $e\n\n        $rd1 close\n    }\n\n    test {ACL LOAD disconnects clients of deleted users} {\n        reconnect\n        r ACL SETUSER mortimer on >mortimer ~* &* +@all\n\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $rd1 AUTH alice alice\n        $rd1 read\n        $rd1 SUBSCRIBE test\n        $rd1 read\n\n        $rd2 AUTH mortimer mortimer\n        $rd2 read\n        $rd2 SUBSCRIBE test\n        $rd2 read\n\n        r ACL LOAD\n        r PUBLISH test test-message\n\n        # Permissions for 'alice' haven't changed, so they should still be connected\n        assert_match {*test-message*} [$rd1 read]\n\n        # 'mortimer' has been deleted, so their client should get disconnected\n        catch {$rd2 read} e\n        assert_match {*I/O error*} $e\n\n        $rd1 close\n        $rd2 close\n    }\n\n    test {ACL load and save} {\n        r ACL setuser eve +get allkeys >eve on\n        r ACL save\n\n        r ACL load\n\n        # Clients should not be disconnected since permissions haven't changed\n\n        r AUTH alice alice\n        r SET key value\n        r AUTH eve eve\n        r GET key\n        catch {r SET key value} e\n        set e\n    } {*NOPERM*set*}\n\n    test {ACL load and save with restricted channels} {\n        r AUTH alice alice\n        r ACL setuser harry on nopass resetchannels &test +@all ~*\n        r ACL save\n\n        r ACL load\n\n        # Clients should not be disconnected since permissions haven't changed\n\n        r AUTH harry anything\n        r publish test bar\n        catch {r publish test1 bar} e\n        r ACL deluser harry\n        set e\n    } {*NOPERM*channel*}\n\n    set server_path [tmpdir \"server.acl\"]\n    exec cp -f tests/assets/user.acl $server_path\n    start_server [list overrides [list \"dir\" $server_path \"acl-pubsub-default\" \"allchannels\" \"aclfile\" \"user.acl\"] tags [list \"repl\" \"external:skip\"]] {\n        set master [srv -1 
client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        test {First server should have role slave after SLAVEOF} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {ACL load on replica when connected to replica} {\n            assert_match {OK} [$slave ACL LOAD]\n        }\n    }\n}\n\nset server_path [tmpdir \"resetchannels.acl\"]\nexec cp -f tests/assets/nodefaultuser.acl $server_path\nexec cp -f tests/assets/default.conf $server_path\nstart_server [list overrides [list \"dir\" $server_path \"aclfile\" \"nodefaultuser.acl\"] tags [list \"external:skip\"]] {\n\n    test {Default user has access to all channels irrespective of flag} {\n        set channelinfo [dict get [r ACL getuser default] channels]\n        assert_equal \"&*\" $channelinfo\n        set channelinfo [dict get [r ACL getuser alice] channels]\n        assert_equal \"\" $channelinfo\n    }\n\n    test {Update acl-pubsub-default, existing users shouldn't get affected} {\n        set channelinfo [dict get [r ACL getuser default] channels]\n        assert_equal \"&*\" $channelinfo\n        r CONFIG set acl-pubsub-default allchannels\n        r ACL setuser mydefault\n        set channelinfo [dict get [r ACL getuser mydefault] channels]\n        assert_equal \"&*\" $channelinfo\n        r CONFIG set acl-pubsub-default resetchannels\n        set channelinfo [dict get [r ACL getuser mydefault] channels]\n        assert_equal \"&*\" $channelinfo\n    }\n\n    test {Single channel is valid} {\n        r ACL setuser onechannel &test\n        set channelinfo [dict get [r ACL getuser onechannel] channels]\n        assert_equal \"&test\" $channelinfo\n        r ACL deluser onechannel\n    }\n\n    test {Single channel is not valid 
with allchannels} {\n        r CONFIG set acl-pubsub-default allchannels\n        catch {r ACL setuser onechannel &test} err\n        r CONFIG set acl-pubsub-default resetchannels\n        set err\n    } {*start with an empty list of channels*}\n}\n\nset server_path [tmpdir \"resetchannels.acl\"]\nexec cp -f tests/assets/nodefaultuser.acl $server_path\nexec cp -f tests/assets/default.conf $server_path\nstart_server [list overrides [list \"dir\" $server_path \"acl-pubsub-default\" \"resetchannels\" \"aclfile\" \"nodefaultuser.acl\"] tags [list \"external:skip\"]] {\n\n    test {Only default user has access to all channels irrespective of flag} {\n        set channelinfo [dict get [r ACL getuser default] channels]\n        assert_equal \"&*\" $channelinfo\n        set channelinfo [dict get [r ACL getuser alice] channels]\n        assert_equal \"\" $channelinfo\n    }\n}\n\n\nstart_server {overrides {user \"default on nopass ~* +@all\"} tags {\"external:skip\"}} {\n    test {default: load from config file, without channel permission default user can't access any channels} {\n        catch {r SUBSCRIBE foo} e\n        set e\n    } {*NOPERM*channel*}\n}\n\nstart_server {overrides {user \"default on nopass ~* &* +@all\"} tags {\"external:skip\"}} {\n    test {default: load from config file with all channels permissions} {\n        r SUBSCRIBE foo\n        r PSUBSCRIBE bar*\n        r UNSUBSCRIBE\n        r PUNSUBSCRIBE\n        r PUBLISH hello world\n    }\n}\n\nset server_path [tmpdir \"duplicate.acl\"]\nexec cp -f tests/assets/user.acl $server_path\nexec cp -f tests/assets/default.conf $server_path\nstart_server [list overrides [list \"dir\" $server_path \"aclfile\" \"user.acl\"] tags [list \"external:skip\"]] {\n\n    test {Test loading an ACL file with duplicate users} {\n        exec cp -f tests/assets/user.acl $server_path\n\n        # Corrupt the ACL file\n        set corruption \"\\nuser alice on nopass ~* -@all\"\n        exec echo $corruption >> 
$server_path/user.acl\n        catch {r ACL LOAD} err\n        assert_match {*Duplicate user 'alice' found*} $err \n\n        # Verify the previous users still exist\n        # NOTE: A missing user evaluates to an empty\n        # string. \n        assert {[r ACL GETUSER alice] != \"\"}\n        assert_equal [dict get [r ACL GETUSER alice] commands] \"+@all\"\n        assert {[r ACL GETUSER bob] != \"\"}\n        assert {[r ACL GETUSER default] != \"\"}\n    }\n\n    test {Test loading an ACL file with duplicate default user} {\n        exec cp -f tests/assets/user.acl $server_path\n\n        # Corrupt the ACL file\n        set corruption \"\\nuser default on nopass ~* -@all\"\n        exec echo $corruption >> $server_path/user.acl\n        catch {r ACL LOAD} err\n        assert_match {*Duplicate user 'default' found*} $err \n\n        # Verify the previous users still exist\n        # NOTE: A missing user evaluates to an empty\n        # string. \n        assert {[r ACL GETUSER alice] != \"\"}\n        assert_equal [dict get [r ACL GETUSER alice] commands] \"+@all\"\n        assert {[r ACL GETUSER bob] != \"\"}\n        assert {[r ACL GETUSER default] != \"\"}\n    }\n    \n    test {Test loading duplicate users in config on startup} {\n        catch {exec src/redis-server --user foo --user foo} err\n        assert_match {*Duplicate user*} $err\n\n        catch {exec src/redis-server --user default --user default} err\n        assert_match {*Duplicate user*} $err\n    } {} {external:skip}\n\n    test {Test loading an ACL file with comments} {\n        exec cp -f tests/assets/user.acl $server_path\n\n        # Add comments to the ACL file\n        set acl_content \"# This is a comment at the beginning\\nuser alice on allcommands allkeys &* >alice\\n# Comment between users\\nuser bob on -@all +@set +acl ~set* &* >bob\\n\\n# Comment with blank line above\\nuser doug on resetchannels &test +@all ~* >doug\\nuser default on nopass ~* &* +@all\\n# Comment at the end\"\n  
      set fd [open $server_path/user.acl w]\n        puts $fd $acl_content\n        close $fd\n\n        # Load the ACL file with comments\n        assert_match {OK} [r ACL LOAD]\n\n        # Verify all users loaded correctly\n        assert {[r ACL GETUSER alice] != \"\"}\n        assert_equal [dict get [r ACL GETUSER alice] commands] \"+@all\"\n        assert {[r ACL GETUSER bob] != \"\"}\n        assert {[r ACL GETUSER doug] != \"\"}\n        assert {[r ACL GETUSER default] != \"\"}\n    }\n}\n\nstart_server {overrides {user \"default on nopass ~* +@all -flushdb\"} tags {acl external:skip}} {\n    test {ACL from config file and config rewrite} {\n        assert_error {NOPERM *} {r flushdb}\n        r config rewrite\n        restart_server 0 true false\n        assert_error {NOPERM *} {r flushdb}\n    }\n}\n\n"
  },
  {
    "path": "tests/unit/aofrw.tcl",
    "content": "# This unit has the potential to create huge .reqres files, causing log-req-res-validator.py to run for a very long time...\n# Since this unit doesn't do anything worth validating, reply_schema-wise, we decided to skip it\nstart_server {tags {\"aofrw external:skip logreqres:skip\"} overrides {save {}}} {\n    # Enable the AOF\n    r config set appendonly yes\n    r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.\n    waitForBgrewriteaof r\n\n    foreach rdbpre {yes no} {\n        r config set aof-use-rdb-preamble $rdbpre\n        test \"AOF rewrite during write load: RDB preamble=$rdbpre\" {\n            # Start a write load for 10 seconds\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n            set load_handle0 [start_write_load $master_host $master_port 10]\n            set load_handle1 [start_write_load $master_host $master_port 10]\n            set load_handle2 [start_write_load $master_host $master_port 10]\n            set load_handle3 [start_write_load $master_host $master_port 10]\n            set load_handle4 [start_write_load $master_host $master_port 10]\n\n            # Make sure the instance is really receiving data\n            wait_for_condition 50 100 {\n                [r dbsize] > 0\n            } else {\n                fail \"No write load detected.\"\n            }\n\n            # After 3 seconds, start a rewrite, while the write load is still\n            # active.\n            after 3000\n            r bgrewriteaof\n            waitForBgrewriteaof r\n\n            # Let it run a bit more so that we'll append some data to the new\n            # AOF.\n            after 1000\n\n            # Stop the processes generating the load if they are still active\n            stop_write_load $load_handle0\n            stop_write_load $load_handle1\n            stop_write_load $load_handle2\n            stop_write_load $load_handle3\n         
   stop_write_load $load_handle4\n\n            # Make sure no more commands are processed before taking the debug digest\n            wait_load_handlers_disconnected\n\n            # Get the data set digest\n            set d1 [debug_digest]\n\n            # Load the AOF\n            r debug loadaof\n            set d2 [debug_digest]\n\n            # Make sure they are the same\n            assert {$d1 eq $d2}\n        }\n    }\n}\n\nstart_server {tags {\"aofrw external:skip debug_defrag:skip\"} overrides {aof-use-rdb-preamble no}} {\n    test {Turning off AOF kills the background writing child if any} {\n        r config set appendonly yes\n        waitForBgrewriteaof r\n\n        # start a slow AOFRW\n        r set k v\n        r config set rdb-key-save-delay 10000000\n        r bgrewriteaof\n\n        # disable AOF and wait for the child to be killed\n        r config set appendonly no\n        wait_for_condition 50 100 {\n            [string match {*Killing*AOF*child*} [exec tail -5 < [srv 0 stdout]]]\n        } else {\n            fail \"Can't find 'Killing AOF child' in recent logs\"\n        }\n        r config set rdb-key-save-delay 0\n    }\n\n    foreach d {string int} {\n        foreach e {listpack quicklist} {\n            test \"AOF rewrite of list with $e encoding, $d data\" {\n                r flushall\n                if {$e eq {listpack}} {\n                    r config set list-max-listpack-size -2\n                    set len 10\n                } else {\n                    r config set list-max-listpack-size 10\n                    set len 1000\n                }\n                for {set j 0} {$j < $len} {incr j} {\n                    if {$d eq {string}} {\n                        set data [randstring 0 16 alpha]\n                    } else {\n                        set data [randomInt 4000000000]\n                    }\n                    r lpush key $data\n                }\n                assert_equal [r object encoding key] $e\n           
     set d1 [debug_digest]\n                r bgrewriteaof\n                waitForBgrewriteaof r\n                r debug loadaof\n                set d2 [debug_digest]\n                if {$d1 ne $d2} {\n                    error \"assertion:$d1 is not equal to $d2\"\n                }\n            }\n        }\n    }\n\n    foreach d {string int} {\n        foreach e {intset hashtable} {\n            test \"AOF rewrite of set with $e encoding, $d data\" {\n                r flushall\n                if {$e eq {intset}} {set len 10} else {set len 1000}\n                for {set j 0} {$j < $len} {incr j} {\n                    if {$d eq {string}} {\n                        set data [randstring 0 16 alpha]\n                    } else {\n                        set data [randomInt 4000000000]\n                    }\n                    r sadd key $data\n                }\n                if {$d ne {string}} {\n                    assert_equal [r object encoding key] $e\n                }\n                set d1 [debug_digest]\n                r bgrewriteaof\n                waitForBgrewriteaof r\n                r debug loadaof\n                set d2 [debug_digest]\n                if {$d1 ne $d2} {\n                    error \"assertion:$d1 is not equal to $d2\"\n                }\n            }\n        }\n    }\n\n    foreach d {string int} {\n        foreach e {listpack hashtable} {\n            test \"AOF rewrite of hash with $e encoding, $d data\" {\n                r flushall\n                if {$e eq {listpack}} {set len 10} else {set len 1000}\n                for {set j 0} {$j < $len} {incr j} {\n                    if {$d eq {string}} {\n                        set data [randstring 0 16 alpha]\n                    } else {\n                        set data [randomInt 4000000000]\n                    }\n                    r hset key $data $data\n                }\n                assert_equal [r object encoding key] $e\n                set d1 
[debug_digest]\n                r bgrewriteaof\n                waitForBgrewriteaof r\n                r debug loadaof\n                set d2 [debug_digest]\n                if {$d1 ne $d2} {\n                    error \"assertion:$d1 is not equal to $d2\"\n                }\n            }\n        }\n    }\n\n    foreach d {string int} {\n        foreach e {listpack skiplist} {\n            test \"AOF rewrite of zset with $e encoding, $d data\" {\n                r flushall\n                if {$e eq {listpack}} {set len 10} else {set len 1000}\n                for {set j 0} {$j < $len} {incr j} {\n                    if {$d eq {string}} {\n                        set data [randstring 0 16 alpha]\n                    } else {\n                        set data [randomInt 4000000000]\n                    }\n                    r zadd key [expr rand()] $data\n                }\n                assert_equal [r object encoding key] $e\n                set d1 [debug_digest]\n                r bgrewriteaof\n                waitForBgrewriteaof r\n                r debug loadaof\n                set d2 [debug_digest]\n                if {$d1 ne $d2} {\n                    error \"assertion:$d1 is not equal to $d2\"\n                }\n            }\n        }\n    }\n\n    test \"AOF rewrite functions\" {\n        r flushall\n        r FUNCTION LOAD {#!lua name=test\n            redis.register_function('test', function() return 1 end)\n        }\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r function flush\n        r debug loadaof\n        assert_equal [r fcall test 0] 1\n        r FUNCTION LIST\n    } {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n\n    test {BGREWRITEAOF is delayed if BGSAVE is in progress} {\n        r flushall\n        r set k v\n        r config set rdb-key-save-delay 10000000\n        r bgsave\n        assert_match {*scheduled*} [r bgrewriteaof]\n        assert_equal [s aof_rewrite_scheduled] 1\n 
       r config set rdb-key-save-delay 0\n        catch {exec kill -9 [get_child_pid 0]}\n        while {[s aof_rewrite_scheduled] eq 1} {\n            after 100\n        }\n    }\n\n    test {BGREWRITEAOF is refused if already in progress} {\n        r config set aof-use-rdb-preamble yes\n        r config set rdb-key-save-delay 10000000\n        catch {\n            r bgrewriteaof\n            r bgrewriteaof\n        } e\n        assert_match {*ERR*already*} $e\n        r config set rdb-key-save-delay 0\n        catch {exec kill -9 [get_child_pid 0]}\n    }\n}\n"
  },
  {
    "path": "tests/unit/auth.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nstart_server {tags {\"auth external:skip\"}} {\n    test {AUTH fails if there is no password configured server side} {\n        catch {r auth foo} err\n        set _ $err\n    } {ERR *any password*}\n\n    test {Arity check for auth command} {\n        catch {r auth a b c} err\n        set _ $err\n    } {*syntax error*}\n}\n\nstart_server {tags {\"auth external:skip\"} overrides {requirepass foobar}} {\n    test {AUTH fails when a wrong password is given} {\n        catch {r auth wrong!} err\n        set _ $err\n    } {WRONGPASS*}\n\n    test {Arbitrary command gives an error when AUTH is required} {\n        catch {r set foo bar} err\n        set _ $err\n    } {NOAUTH*}\n\n    test {AUTH succeeds when the right password is given} {\n        r auth foobar\n    } {OK}\n\n    test {Once AUTH succeeded we can actually send commands to the server} {\n        r set foo 100\n        r incr foo\n    } {101}\n\n    test {For unauthenticated clients multibulk and bulk length are limited} {\n        set rr [redis [srv \"host\"] [srv \"port\"] 0 $::tls]\n        $rr write \"*100\\r\\n\"\n        $rr flush\n        catch {[$rr read]} e\n        assert_match {*unauthenticated multibulk length*} $e\n        $rr close\n\n        set rr [redis [srv \"host\"] [srv \"port\"] 0 $::tls]\n        $rr write \"*1\\r\\n\\$100000000\\r\\n\"\n        $rr flush\n        catch {[$rr read]} e\n        assert_match {*unauthenticated bulk length*} $e\n        $rr close\n    }\n\n    test {For unauthenticated clients output 
buffer is limited} {\n        set rr [redis [srv \"host\"] [srv \"port\"] 1 $::tls]\n        $rr SET x 5\n        catch {[$rr read]} e\n        assert_match {*NOAUTH Authentication required*} $e\n\n        # Fill the output buffer in a loop without reading it and make\n        # sure the client disconnected.\n        # Considering that the socket buffers eat some of the replies, we are\n        # testing that such a client can't consume more than a few MBs.\n        catch {\n            for {set j 0} {$j < 1000000} {incr j} {\n                    $rr SET x 5\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n    }\n}\n\nstart_server {tags {\"auth_binary_password external:skip\"}} {\n    test {AUTH fails when binary password is wrong} {\n        r config set requirepass \"abc\\x00def\"\n        catch {r auth abc} err\n        set _ $err\n    } {WRONGPASS*}\n\n    test {AUTH succeeds when binary password is correct} {\n        r config set requirepass \"abc\\x00def\"\n        r auth \"abc\\x00def\"\n    } {OK}\n\n    start_server {tags {\"masterauth\"}} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        foreach rdbchannel {yes no} {\n            test \"MASTERAUTH test with binary password rdbchannel=$rdbchannel\" {\n                $slave slaveof no one\n                $master config set requirepass \"abc\\x00def\"\n                $master config set repl-rdb-channel $rdbchannel\n\n                # Configure the replica with masterauth\n                set loglines [count_log_lines 0]\n                $slave config set masterauth \"abc\"\n                $slave config set repl-rdb-channel $rdbchannel\n                $slave slaveof $master_host $master_port\n\n                # Verify the replica is not able to sync with the master\n                wait_for_log_messages 0 {\"*Unable to AUTH to MASTER*\"} $loglines 1000 10\n
                assert_equal {down} [s 0 master_link_status]\n\n                # Test replica with the correct masterauth\n                $slave config set masterauth \"abc\\x00def\"\n                wait_for_condition 50 100 {\n                    [s 0 master_link_status] eq {up}\n                } else {\n                    fail \"Can't turn the instance into a replica\"\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/bitfield.tcl",
    "content": "start_server {tags {\"bitops\"}} {\n    test {BITFIELD signed SET and GET basics} {\n        r del bits\n        set results {}\n        lappend results [r bitfield bits set i8 0 -100]\n        lappend results [r bitfield bits set i8 0 101]\n        lappend results [r bitfield bits get i8 0]\n        set results\n    } {0 -100 101}\n\n    test {BITFIELD unsigned SET and GET basics} {\n        r del bits\n        set results {}\n        lappend results [r bitfield bits set u8 0 255]\n        lappend results [r bitfield bits set u8 0 100]\n        lappend results [r bitfield bits get u8 0]\n        set results\n    } {0 255 100}\n\n    test {BITFIELD signed SET and GET together} {\n        r del bits\n        set results [r bitfield bits set i8 0 255 set i8 0 100 get i8 0]\n    } {0 -1 100}\n \n    test {BITFIELD unsigned with SET, GET and INCRBY arguments} {\n        r del bits\n        set results [r bitfield bits set u8 0 255 incrby u8 0 100 get u8 0]\n    } {0 99 99}\n\n    test {BITFIELD with only key as argument} {\n        r del bits\n        set result [r bitfield bits]\n        assert {$result eq {}}\n    }\n\n    test {BITFIELD #<idx> form} {\n        r del bits\n        set results {}\n        r bitfield bits set u8 #0 65\n        r bitfield bits set u8 #1 66\n        r bitfield bits set u8 #2 67\n        r get bits\n    } {ABC}\n\n    test {BITFIELD basic INCRBY form} {\n        r del bits\n        set results {}\n        r bitfield bits set u8 #0 10\n        lappend results [r bitfield bits incrby u8 #0 100]\n        lappend results [r bitfield bits incrby u8 #0 100]\n        set results\n    } {110 210}\n\n    test {BITFIELD chaining of multiple commands} {\n        r del bits\n        set results {}\n        r bitfield bits set u8 #0 10\n        lappend results [r bitfield bits incrby u8 #0 100 incrby u8 #0 100]\n        set results\n    } {{110 210}}\n\n    test {BITFIELD unsigned overflow wrap} {\n        r del bits\n        set 
results {}\n        r bitfield bits set u8 #0 100\n        lappend results [r bitfield bits overflow wrap incrby u8 #0 257]\n        lappend results [r bitfield bits get u8 #0]\n        lappend results [r bitfield bits overflow wrap incrby u8 #0 255]\n        lappend results [r bitfield bits get u8 #0]\n    } {101 101 100 100}\n\n    test {BITFIELD unsigned overflow sat} {\n        r del bits\n        set results {}\n        r bitfield bits set u8 #0 100\n        lappend results [r bitfield bits overflow sat incrby u8 #0 257]\n        lappend results [r bitfield bits get u8 #0]\n        lappend results [r bitfield bits overflow sat incrby u8 #0 -255]\n        lappend results [r bitfield bits get u8 #0]\n    } {255 255 0 0}\n\n    test {BITFIELD signed overflow wrap} {\n        r del bits\n        set results {}\n        r bitfield bits set i8 #0 100\n        lappend results [r bitfield bits overflow wrap incrby i8 #0 257]\n        lappend results [r bitfield bits get i8 #0]\n        lappend results [r bitfield bits overflow wrap incrby i8 #0 255]\n        lappend results [r bitfield bits get i8 #0]\n    } {101 101 100 100}\n\n    test {BITFIELD signed overflow sat} {\n        r del bits\n        set results {}\n        r bitfield bits set u8 #0 100\n        lappend results [r bitfield bits overflow sat incrby i8 #0 257]\n        lappend results [r bitfield bits get i8 #0]\n        lappend results [r bitfield bits overflow sat incrby i8 #0 -255]\n        lappend results [r bitfield bits get i8 #0]\n    } {127 127 -128 -128}\n\n    test {BITFIELD overflow detection fuzzing} {\n        for {set j 0} {$j < 1000} {incr j} {\n            set bits [expr {[randomInt 64]+1}]\n            set sign [randomInt 2]\n            set range [expr {2**$bits}]\n            if {$bits == 64} {set sign 1} ; # u64 is not supported by BITFIELD.\n            if {$sign} {\n                set min [expr {-($range/2)}]\n                set type \"i$bits\"\n            } else {\n               
 set min 0\n                set type \"u$bits\"\n            }\n            set max [expr {$min+$range-1}]\n\n            # Compare Tcl vs Redis\n            set range2 [expr {$range*2}]\n            set value [expr {($min*2)+[randomInt $range2]}]\n            set increment [expr {($min*2)+[randomInt $range2]}]\n            if {$value > 9223372036854775807} {\n                set value 9223372036854775807\n            }\n            if {$value < -9223372036854775808} {\n                set value -9223372036854775808\n            }\n            if {$increment > 9223372036854775807} {\n                set increment 9223372036854775807\n            }\n            if {$increment < -9223372036854775808} {\n                set increment -9223372036854775808\n            }\n\n            set overflow 0\n            if {$value > $max || $value < $min} {set overflow 1}\n            if {($value + $increment) > $max} {set overflow 1}\n            if {($value + $increment) < $min} {set overflow 1}\n\n            r del bits\n            set res1 [r bitfield bits overflow fail set $type 0 $value]\n            set res2 [r bitfield bits overflow fail incrby $type 0 $increment]\n\n            if {$overflow && [lindex $res1 0] ne {} &&\n                             [lindex $res2 0] ne {}} {\n                fail \"OW not detected where needed: $type $value+$increment\"\n            }\n            if {!$overflow && ([lindex $res1 0] eq {} ||\n                               [lindex $res2 0] eq {})} {\n                fail \"OW detected where NOT needed: $type $value+$increment\"\n            }\n        }\n    }\n\n    test {BITFIELD overflow wrap fuzzing} {\n        for {set j 0} {$j < 1000} {incr j} {\n            set bits [expr {[randomInt 64]+1}]\n            set sign [randomInt 2]\n            set range [expr {2**$bits}]\n            if {$bits == 64} {set sign 1} ; # u64 is not supported by BITFIELD.\n            if {$sign} {\n                set min [expr {-($range/2)}]\n         
       set type \"i$bits\"\n            } else {\n                set min 0\n                set type \"u$bits\"\n            }\n            set max [expr {$min+$range-1}]\n\n            # Compare Tcl vs Redis\n            set range2 [expr {$range*2}]\n            set value [expr {($min*2)+[randomInt $range2]}]\n            set increment [expr {($min*2)+[randomInt $range2]}]\n            if {$value > 9223372036854775807} {\n                set value 9223372036854775807\n            }\n            if {$value < -9223372036854775808} {\n                set value -9223372036854775808\n            }\n            if {$increment > 9223372036854775807} {\n                set increment 9223372036854775807\n            }\n            if {$increment < -9223372036854775808} {\n                set increment -9223372036854775808\n            }\n\n            r del bits\n            r bitfield bits overflow wrap set $type 0 $value\n            r bitfield bits overflow wrap incrby $type 0 $increment\n            set res [lindex [r bitfield bits get $type 0] 0]\n\n            set expected 0\n            if {$sign} {incr expected [expr {$max+1}]}\n            incr expected $value\n            incr expected $increment\n            set expected [expr {$expected % $range}]\n            if {$sign} {incr expected $min}\n\n            if {$res != $expected} {\n                fail \"WRAP error: $type $value+$increment = $res, should be $expected\"\n            }\n        }\n    }\n\n    test {BITFIELD regression for #3221} {\n        r set bits 1\n        r bitfield bits get u1 0\n    } {0}\n\n    test {BITFIELD regression for #3564} {\n        for {set j 0} {$j < 10} {incr j} {\n            r del mystring\n            set res [r BITFIELD mystring SET i8 0 10 SET i8 64 10 INCRBY i8 10 99900]\n            assert {$res eq {0 0 60}}\n        }\n        r del mystring\n    }\n\n    test {BITFIELD_RO with only key as argument} {\n        set res [r bitfield_ro bits]\n        assert {$res eq 
{}}\n    }\n\n    test {BITFIELD_RO fails when write option is used} {\n        catch {r bitfield_ro bits set u8 0 100 get u8 0} err\n        assert_match {*ERR BITFIELD_RO only supports the GET subcommand*} $err\n    }\n}\n\nstart_server {tags {\"repl external:skip\"}} {\n    start_server {} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        test {BITFIELD: setup slave} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {BITFIELD: write on master, read on slave} {\n            $master del bits\n            assert_equal 0 [$master bitfield bits set u8 0 255]\n            assert_equal 255 [$master bitfield bits set u8 0 100]\n            wait_for_ofs_sync $master $slave\n            assert_equal 100 [$slave bitfield_ro bits get u8 0]\n        }\n\n        test {BITFIELD_RO with only key as argument on read-only replica} {\n            set res [$slave bitfield_ro bits]\n            assert {$res eq {}}\n        }\n\n        test {BITFIELD_RO fails when write option is used on read-only replica} {\n            catch {$slave bitfield_ro bits set u8 0 100 get u8 0} err\n            assert_match {*ERR BITFIELD_RO only supports the GET subcommand*} $err\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/bitops.tcl",
    "content": "# Compare Redis commands against Tcl implementations of the same commands.\nproc count_bits s {\n    binary scan $s b* bits\n    string length [regsub -all {0} $bits {}]\n}\n\n# start end are bit index\nproc count_bits_start_end {s start end} {\n    binary scan $s B* bits\n    string length [regsub -all {0} [string range $bits $start $end] {}]\n}\n\nproc simulate_bit_op {op args} {\n    set maxlen 0\n    set j 0\n    set count [llength $args]\n    foreach a $args {\n        binary scan $a b* bits\n        set b($j) $bits\n        if {[string length $bits] > $maxlen} {\n            set maxlen [string length $bits]\n        }\n        incr j\n    }\n    for {set j 0} {$j < $count} {incr j} {\n        if {[string length $b($j)] < $maxlen} {\n            append b($j) [string repeat 0 [expr $maxlen-[string length $b($j)]]]\n        }\n    }\n    set out {}\n    for {set x 0} {$x < $maxlen} {incr x} {\n        set fst_bit [string range $b(0) $x $x]\n        set bit $fst_bit\n        if {[expr {$op == {diff} || $op == {diff1} || $op == {andor}}]} {\n            set bit \"0\"\n        }\n        if {$op eq {not}} {set bit [expr {!$bit}]}\n        set multi_cnt \"0\"\n        for {set j 1} {$j < $count} {incr j} {\n            set bit2 [string range $b($j) $x $x]\n            switch $op {\n                and   {set bit [expr {$bit & $bit2}]}\n                or    {set bit [expr {$bit | $bit2}]}\n                xor   {set bit [expr {$bit ^ $bit2}]}\n                diff  {set bit [expr {$bit | $bit2}]}\n                diff1 {set bit [expr {$bit | $bit2}]}\n                andor {set bit [expr {$bit | $bit2}]}\n                one {\n                    set multi_cnt [expr $multi_cnt | {$bit & $bit2}]\n                    set bit [expr {$bit ^ $bit2}]\n                    set bit [expr {$bit & !$multi_cnt}]\n                }\n            }\n        }\n        switch $op {\n            diff  { set bit [expr {$fst_bit & !$bit}] }\n            diff1 { set 
bit [expr {!$fst_bit & $bit}] }\n            andor { set bit [expr {$fst_bit & $bit}] }\n        }\n\n        append out $bit\n    }\n    binary format b* $out\n}\n\nstart_server {tags {\"bitops\"}} {\n    test {BITCOUNT against wrong type} {\n        r del mylist\n        r lpush mylist a b c\n        assert_error \"*WRONGTYPE*\" {r bitcount mylist}\n        assert_error \"*WRONGTYPE*\" {r bitcount mylist 0 100}\n\n        # with negative indexes where start > end\n        assert_error \"*WRONGTYPE*\" {r bitcount mylist -6 -7}\n        assert_error \"*WRONGTYPE*\" {r bitcount mylist -6 -15 bit}\n    }\n\n    test {BITCOUNT returns 0 against non existing key} {\n        r del no-key\n        assert {[r bitcount no-key] == 0}\n        assert {[r bitcount no-key 0 1000 bit] == 0}\n    }\n\n    test {BITCOUNT returns 0 with out of range indexes} {\n        r set str \"xxxx\"\n        assert {[r bitcount str 4 10] == 0}\n        assert {[r bitcount str 32 87 bit] == 0}\n    }\n\n    test {BITCOUNT returns 0 with negative indexes where start > end} {\n        r set str \"xxxx\"\n        assert {[r bitcount str -6 -7] == 0}\n        assert {[r bitcount str -6 -15 bit] == 0}\n\n        # against non existing key\n        r del str\n        assert {[r bitcount str -6 -7] == 0}\n        assert {[r bitcount str -6 -15 bit] == 0}\n    }\n\n    catch {unset num}\n    foreach vec [list \"\" \"\\xaa\" \"\\x00\\x00\\xff\" \"foobar\" \"123\"] {\n        incr num\n        test \"BITCOUNT against test vector #$num\" {\n            r set str $vec\n            set count [count_bits $vec]\n            assert {[r bitcount str] == $count}\n            assert {[r bitcount str 0 -1 bit] == $count}\n        }\n    }\n\n    test {BITCOUNT fuzzing without start/end} {\n        for {set j 0} {$j < 100} {incr j} {\n            set str [randstring 0 3000]\n            r set str $str\n            set count [count_bits $str]\n            assert {[r bitcount str] == $count}\n            assert {[r 
bitcount str 0 -1 bit] == $count}\n        }\n    }\n\n    test {BITCOUNT fuzzing with start/end} {\n        for {set j 0} {$j < 100} {incr j} {\n            set str [randstring 0 3000]\n            r set str $str\n            set l [string length $str]\n            set start [randomInt $l]\n            set end [randomInt $l]\n            if {$start > $end} {\n                # Swap start and end\n                lassign [list $end $start] start end\n            }\n            assert {[r bitcount str $start $end] == [count_bits [string range $str $start $end]]}\n        }\n\n        for {set j 0} {$j < 100} {incr j} {\n            set str [randstring 0 3000]\n            r set str $str\n            set l [expr [string length $str] * 8]\n            set start [randomInt $l]\n            set end [randomInt $l]\n            if {$start > $end} {\n                # Swap start and end\n                lassign [list $end $start] start end\n            }\n            assert {[r bitcount str $start $end bit] == [count_bits_start_end $str $start $end]}\n        }\n    }\n\n    test {BITCOUNT with start, end} {\n        set s \"foobar\"\n        r set s $s\n        assert_equal [r bitcount s 0 -1] [count_bits \"foobar\"]\n        assert_equal [r bitcount s 1 -2] [count_bits \"ooba\"]\n        assert_equal [r bitcount s -2 1] [count_bits \"\"]\n        assert_equal [r bitcount s 0 1000] [count_bits \"foobar\"]\n\n        assert_equal [r bitcount s 0 -1 bit] [count_bits $s]\n        assert_equal [r bitcount s 10 14 bit] [count_bits_start_end $s 10 14]\n        assert_equal [r bitcount s 3 14 bit] [count_bits_start_end $s 3 14]\n        assert_equal [r bitcount s 3 29 bit] [count_bits_start_end $s 3 29]\n        assert_equal [r bitcount s 10 -34 bit] [count_bits_start_end $s 10 14]\n        assert_equal [r bitcount s 3 -34 bit] [count_bits_start_end $s 3 14]\n        assert_equal [r bitcount s 3 -19 bit] [count_bits_start_end $s 3 29]\n        assert_equal [r bitcount s -2 1 
bit] 0\n        assert_equal [r bitcount s 0 1000 bit] [count_bits $s]\n    }\n\n    test {BITCOUNT with illegal arguments} {\n        # Used to return 0 for non-existing key instead of errors\n        r del s\n        assert_error {ERR *syntax*} {r bitcount s 0}\n        assert_error {ERR *syntax*} {r bitcount s 0 1 hello}\n        assert_error {ERR *syntax*} {r bitcount s 0 1 hello hello2}\n\n        r set s 1\n        assert_error {ERR *syntax*} {r bitcount s 0}\n        assert_error {ERR *syntax*} {r bitcount s 0 1 hello}\n        assert_error {ERR *syntax*} {r bitcount s 0 1 hello hello2}\n    }\n\n    test {BITCOUNT against non-integer value} {\n        # against existing key\n        r set s 1\n        assert_error {ERR *not an integer*} {r bitcount s a b}\n\n        # against non existing key\n        r del s\n        assert_error {ERR *not an integer*} {r bitcount s a b}\n\n        # against wrong type\n        r lpush s a b c\n        assert_error {ERR *not an integer*} {r bitcount s a b}\n    }\n\n    test {BITCOUNT regression test for github issue #582} {\n        r del foo\n        r setbit foo 0 1\n        if {[catch {r bitcount foo 0 4294967296} e]} {\n            assert_match {*ERR*out of range*} $e\n            set _ 1\n        } else {\n            set e\n        }\n    } {1}\n\n    test {BITCOUNT misaligned prefix} {\n        r del str\n        r set str ab\n        r bitcount str 1 -1\n    } {3}\n\n    test {BITCOUNT misaligned prefix + full words + remainder} {\n        r del str\n        r set str __PPxxxxxxxxxxxxxxxxRR__\n        r bitcount str 2 -3\n    } {74}\n\n    test {BITOP NOT (empty string)} {\n        r set s{t} \"\"\n        r bitop not dest{t} s{t}\n        r get dest{t}\n    } {}\n\n    test {BITOP NOT (known string)} {\n        r set s{t} \"\\xaa\\x00\\xff\\x55\"\n        r bitop not dest{t} s{t}\n        r get dest{t}\n    } \"\\x55\\xff\\x00\\xaa\"\n\n    test {BITOP NOT with multiple source keys} {\n        r set s{t} 
\"\\xaa\\x00\\xff\\x55\"\n        assert_error \"ERR BITOP NOT*\" { r bitop not dest{t} s{t} s{t} }\n    }\n\n    test {BITOP where dest and target are the same key} {\n        r set s \"\\xaa\\x00\\xff\\x55\"\n        r bitop not s s\n        r get s\n    } \"\\x55\\xff\\x00\\xaa\"\n\n    test {BITOP AND|OR|XOR|ONE don't change the string with single input key} {\n        r set a{t} \"\\x01\\x02\\xff\"\n        r bitop and res1{t} a{t}\n        r bitop or  res2{t} a{t}\n        r bitop xor res3{t} a{t}\n        r bitop one res4{t} a{t}\n        list [r get res1{t}] [r get res2{t}] [r get res3{t}] [r get res4{t}]\n    } [list \"\\x01\\x02\\xff\" \"\\x01\\x02\\xff\" \"\\x01\\x02\\xff\" \"\\x01\\x02\\xff\"]\n\n    test {BITOP DIFF|DIFF1|ANDOR with one source key} {\n        r set s{t} \"\"\n        assert_error \"ERR BITOP DIFF*\" { r bitop diff dest{t} s{t} }\n        assert_error \"ERR BITOP DIFF1*\" { r bitop diff1 dest{t} s{t} }\n        assert_error \"ERR BITOP ANDOR*\" { r bitop andor dest{t} s{t} }\n    }\n\n    test {BITOP missing key is considered a stream of zero} {\n        r set a{t} \"\\x01\\x02\\xff\"\n        r bitop and   res1{t} no-such-key{t} a{t}\n        r bitop or    res2{t} no-such-key{t} a{t} no-such-key{t}\n        r bitop xor   res3{t} no-such-key{t} a{t}\n        r bitop diff  res4{t} a{t} no-such-key{t}\n        r bitop diff1 res5{t} a{t} no-such-key{t}\n        r bitop andor res6{t} a{t} no-such-key{t}\n        r bitop one   res7{t} no-such_key{t} a{t}\n        list [r get res1{t}] [r get res2{t}] [r get res3{t}] [r get res4{t}] [r get res5{t}] [r get res6{t}] [r get res7{t}]\n    } [list \"\\x00\\x00\\x00\" \"\\x01\\x02\\xff\" \"\\x01\\x02\\xff\" \"\\x01\\x02\\xff\" \"\\x00\\x00\\x00\" \"\\x00\\x00\\x00\" \"\\x01\\x02\\xff\"]\n\n    test {BITOP shorter keys are zero-padded to the key with max length} {\n        r set a{t} \"\\x01\\x02\\xff\\xff\"\n        r set b{t} \"\\x01\\x02\\xff\"\n        r bitop and   res1{t} a{t} b{t}\n        r 
bitop or    res2{t} a{t} b{t}\n        r bitop xor   res3{t} a{t} b{t}\n        r bitop diff  res4{t} a{t} b{t}\n        r bitop diff1 res5{t} a{t} b{t}\n        r bitop andor res6{t} a{t} b{t}\n        r bitop one   res7{t} a{t} b{t}\n        list [r get res1{t}] [r get res2{t}] [r get res3{t}] [r get res4{t}] [r get res5{t}] [r get res6{t}] [r get res7{t}]\n    } [list \"\\x01\\x02\\xff\\x00\" \"\\x01\\x02\\xff\\xff\" \"\\x00\\x00\\x00\\xff\" \"\\x00\\x00\\x00\\xff\" \"\\x00\\x00\\x00\\x00\" \"\\x01\\x02\\xff\\x00\" \"\\x00\\x00\\x00\\xff\"]\n\n    foreach op {and or xor diff diff1 andor one} {\n        test \"BITOP $op fuzzing\" {\n            set min_args 1\n            if {[expr {$op == {diff} || $op == {diff1} || $op == {andor}}]} {\n                set min_args 2\n            }\n            for {set i 0} {$i < 10} {incr i} {\n                r flushall\n                set vec {}\n                set veckeys {}\n                set numvec [expr {[randomInt 10]+$min_args}]\n                for {set j 0} {$j < $numvec} {incr j} {\n                    set str [randstring 0 1000]\n                    lappend vec $str\n                    lappend veckeys vector_$j{t}\n                    r set vector_$j{t} $str\n                }\n                r bitop $op target{t} {*}$veckeys\n                assert_equal [r get target{t}] [simulate_bit_op $op {*}$vec]\n            }\n        }\n    }\n\n    test {BITOP NOT fuzzing} {\n        for {set i 0} {$i < 10} {incr i} {\n            r flushall\n            set str [randstring 0 1000]\n            r set str{t} $str\n            r bitop not target{t} str{t}\n            assert_equal [r get target{t}] [simulate_bit_op not $str]\n        }\n    }\n\n    # The AVX-512 BITOP path is triggered when minlen >= 10000 and numkeys >= 8.\n    foreach op {and or xor diff diff1 andor one} {\n        test \"BITOP $op with large values (AVX-512 path)\" {\n            set min_args 1\n            if {$op eq \"diff\" || $op eq \"diff1\" 
|| $op eq \"andor\"} {\n                set min_args 2\n            }\n            # Test at and above the 10000-byte / 8-key threshold.\n            foreach {numvec strlen} {8 10000 10 10001 10 12000} {\n                assert {$numvec >= $min_args}\n                r flushall\n                set vec {}\n                set veckeys {}\n                for {set j 0} {$j < $numvec} {incr j} {\n                    set str [randstring $strlen $strlen]\n                    lappend vec $str\n                    lappend veckeys vector_$j{t}\n                    r set vector_$j{t} $str\n                }\n                r bitop $op target{t} {*}$veckeys\n                assert_equal [r get target{t}] [simulate_bit_op $op {*}$vec]\n            }\n        }\n    }\n\n    test {BITOP with integer encoded source objects} {\n        r set a{t} 1\n        r set b{t} 2\n        r bitop xor dest{t} a{t} b{t} a{t}\n        r get dest{t}\n    } {2}\n\n    test {BITOP with non string source key} {\n        r del c{t}\n        r set a{t} 1\n        r set b{t} 2\n        r lpush c{t} foo\n        catch {r bitop xor dest{t} a{t} b{t} c{t} d{t}} e\n        set e\n    } {WRONGTYPE*}\n\n    test {BITOP with empty string after non empty string (issue #529)} {\n        r flushdb\n        r set a{t} \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n        r bitop or x{t} a{t} b{t}\n    } {32}\n\n    test {BITPOS against wrong type} {\n        r del mylist\n        r lpush mylist a b c\n        assert_error \"*WRONGTYPE*\" {r bitpos mylist 0}\n        assert_error \"*WRONGTYPE*\" {r bitpos mylist 1 10 100}\n    }\n\n    test {BITPOS with illegal arguments} {\n        # Used to return 0 for non-existing key instead of errors\n        r del s\n        assert_error {ERR *syntax*} {r bitpos s 0 1 hello hello2}\n        assert_error {ERR *syntax*} {r bitpos s 0 0 1 hello}\n\n      
  r set s 1\n        assert_error {ERR *syntax*} {r bitpos s 0 1 hello hello2}\n        assert_error {ERR *syntax*} {r bitpos s 0 0 1 hello}\n    }\n\n    test {BITPOS against non-integer value} {\n        # against existing key\n        r set s 1\n        assert_error {ERR *not an integer*} {r bitpos s a}\n        assert_error {ERR *not an integer*} {r bitpos s 0 a b}\n\n        # against non-existing key\n        r del s\n        assert_error {ERR *not an integer*} {r bitpos s b}\n        assert_error {ERR *not an integer*} {r bitpos s 0 a b}\n\n        # against wrong type\n        r lpush s a b c\n        assert_error {ERR *not an integer*} {r bitpos s a}\n        assert_error {ERR *not an integer*} {r bitpos s 1 a b}\n    }\n\n    test {BITPOS bit=0 with empty key returns 0} {\n        r del str\n        assert {[r bitpos str 0] == 0}\n        assert {[r bitpos str 0 0 -1 bit] == 0}\n    }\n\n    test {BITPOS bit=1 with empty key returns -1} {\n        r del str\n        assert {[r bitpos str 1] == -1}\n        assert {[r bitpos str 1 0 -1] == -1}\n    }\n\n    test {BITPOS bit=0 with string less than 1 word works} {\n        r set str \"\\xff\\xf0\\x00\"\n        assert {[r bitpos str 0] == 12}\n        assert {[r bitpos str 0 0 -1 bit] == 12}\n    }\n\n    test {BITPOS bit=1 with string less than 1 word works} {\n        r set str \"\\x00\\x0f\\x00\"\n        assert {[r bitpos str 1] == 12}\n        assert {[r bitpos str 1 0 -1 bit] == 12}\n    }\n\n    test {BITPOS bit=0 starting at unaligned address} {\n        r set str \"\\xff\\xf0\\x00\"\n        assert {[r bitpos str 0 1] == 12}\n        assert {[r bitpos str 0 1 -1 bit] == 12}\n    }\n\n    test {BITPOS bit=1 starting at unaligned address} {\n        r set str \"\\x00\\x0f\\xff\"\n        assert {[r bitpos str 1 1] == 12}\n        assert {[r bitpos str 1 1 -1 bit] == 12}\n    }\n\n    test {BITPOS bit=0 unaligned+full word+remainder} {\n        r del str\n        r set str \"\\xff\\xff\\xff\" ; # 
Prefix\n        # Followed by three (or six in 32 bit systems) full words\n        r append str \"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\"\n        r append str \"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\"\n        r append str \"\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\"\n        # First zero bit.\n        r append str \"\\x0f\"\n        assert {[r bitpos str 0] == 216}\n        assert {[r bitpos str 0 1] == 216}\n        assert {[r bitpos str 0 2] == 216}\n        assert {[r bitpos str 0 3] == 216}\n        assert {[r bitpos str 0 4] == 216}\n        assert {[r bitpos str 0 5] == 216}\n        assert {[r bitpos str 0 6] == 216}\n        assert {[r bitpos str 0 7] == 216}\n        assert {[r bitpos str 0 8] == 216}\n\n        assert {[r bitpos str 0 1 -1 bit] == 216}\n        assert {[r bitpos str 0 9 -1 bit] == 216}\n        assert {[r bitpos str 0 17 -1 bit] == 216}\n        assert {[r bitpos str 0 25 -1 bit] == 216}\n        assert {[r bitpos str 0 33 -1 bit] == 216}\n        assert {[r bitpos str 0 41 -1 bit] == 216}\n        assert {[r bitpos str 0 49 -1 bit] == 216}\n        assert {[r bitpos str 0 57 -1 bit] == 216}\n        assert {[r bitpos str 0 65 -1 bit] == 216}\n    }\n\n    test {BITPOS bit=1 unaligned+full word+remainder} {\n        r del str\n        r set str \"\\x00\\x00\\x00\" ; # Prefix\n        # Followed by three (or six in 32 bit systems) full words\n        r append str \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n        r append str \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n        r append str \"\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\"\n        # First one bit.\n        r append str \"\\xf0\"\n        assert {[r bitpos str 1] == 216}\n        assert {[r bitpos str 1 1] == 216}\n        assert {[r bitpos str 1 2] == 216}\n        assert {[r bitpos str 1 3] == 216}\n        assert {[r bitpos str 1 4] == 216}\n        assert {[r bitpos str 1 5] == 216}\n        assert {[r bitpos str 1 6] == 216}\n        assert {[r bitpos str 1 7] == 
216}\n        assert {[r bitpos str 1 8] == 216}\n\n        assert {[r bitpos str 1 1 -1 bit] == 216}\n        assert {[r bitpos str 1 9 -1 bit] == 216}\n        assert {[r bitpos str 1 17 -1 bit] == 216}\n        assert {[r bitpos str 1 25 -1 bit] == 216}\n        assert {[r bitpos str 1 33 -1 bit] == 216}\n        assert {[r bitpos str 1 41 -1 bit] == 216}\n        assert {[r bitpos str 1 49 -1 bit] == 216}\n        assert {[r bitpos str 1 57 -1 bit] == 216}\n        assert {[r bitpos str 1 65 -1 bit] == 216}\n    }\n\n    test {BITPOS bit=1 returns -1 if string is all 0 bits} {\n        r set str \"\"\n        for {set j 0} {$j < 20} {incr j} {\n            assert {[r bitpos str 1] == -1}\n            assert {[r bitpos str 1 0 -1 bit] == -1}\n            r append str \"\\x00\"\n        }\n    }\n\n    test {BITPOS bit=0 works with intervals} {\n        r set str \"\\x00\\xff\\x00\"\n        assert {[r bitpos str 0 0 -1] == 0}\n        assert {[r bitpos str 0 1 -1] == 16}\n        assert {[r bitpos str 0 2 -1] == 16}\n        assert {[r bitpos str 0 2 200] == 16}\n        assert {[r bitpos str 0 1 1] == -1}\n\n        assert {[r bitpos str 0 0 -1 bit] == 0}\n        assert {[r bitpos str 0 8 -1 bit] == 16}\n        assert {[r bitpos str 0 16 -1 bit] == 16}\n        assert {[r bitpos str 0 16 200 bit] == 16}\n        assert {[r bitpos str 0 8 8 bit] == -1}\n    }\n\n    test {BITPOS bit=1 works with intervals} {\n        r set str \"\\x00\\xff\\x00\"\n        assert {[r bitpos str 1 0 -1] == 8}\n        assert {[r bitpos str 1 1 -1] == 8}\n        assert {[r bitpos str 1 2 -1] == -1}\n        assert {[r bitpos str 1 2 200] == -1}\n        assert {[r bitpos str 1 1 1] == 8}\n\n        assert {[r bitpos str 1 0 -1 bit] == 8}\n        assert {[r bitpos str 1 8 -1 bit] == 8}\n        assert {[r bitpos str 1 16 -1 bit] == -1}\n        assert {[r bitpos str 1 16 200 bit] == -1}\n        assert {[r bitpos str 1 8 8 bit] == 8}\n    }\n\n    test {BITPOS bit=0 changes 
behavior if end is given} {\n        r set str \"\\xff\\xff\\xff\"\n        assert {[r bitpos str 0] == 24}\n        assert {[r bitpos str 0 0] == 24}\n        assert {[r bitpos str 0 0 -1] == -1}\n        assert {[r bitpos str 0 0 -1 bit] == -1}\n    }\n\n    test {SETBIT/BITFIELD only increase dirty when the value changed} {\n        r del foo{t} foo2{t} foo3{t}\n        set dirty [s rdb_changes_since_last_save]\n\n        # Create a new key, always increase the dirty.\n        r setbit foo{t} 0 0\n        r bitfield foo2{t} set i5 0 0\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 2}\n\n        # No change.\n        r setbit foo{t} 0 0\n        r bitfield foo2{t} set i5 0 0\n        set dirty3 [s rdb_changes_since_last_save]\n        assert {$dirty3 == $dirty2}\n\n        # Do a change and a no change.\n        r setbit foo{t} 0 1\n        r setbit foo{t} 0 1\n        r setbit foo{t} 0 0\n        r setbit foo{t} 0 0\n        r bitfield foo2{t} set i5 0 1\n        r bitfield foo2{t} set i5 0 1\n        r bitfield foo2{t} set i5 0 0\n        r bitfield foo2{t} set i5 0 0\n        set dirty4 [s rdb_changes_since_last_save]\n        assert {$dirty4 == $dirty3 + 4}\n\n        # BITFIELD INCRBY always increase dirty.\n        r bitfield foo3{t} incrby i5 0 1\n        r bitfield foo3{t} incrby i5 0 1\n        set dirty5 [s rdb_changes_since_last_save]\n        assert {$dirty5 == $dirty4 + 2}\n\n        # Change length only\n        r setbit foo{t} 90 0\n        r bitfield foo2{t} set i5 90 0\n        set dirty6 [s rdb_changes_since_last_save]\n        assert {$dirty6 == $dirty5 + 2}\n    }\n\n    test {BITPOS bit=1 fuzzy testing using SETBIT} {\n        r del str\n        set max 524288; # 64k\n        set first_one_pos -1\n        for {set j 0} {$j < 1000} {incr j} {\n            assert {[r bitpos str 1] == $first_one_pos}\n            assert {[r bitpos str 1 0 -1 bit] == $first_one_pos}\n            set pos [randomInt $max]\n 
           r setbit str $pos 1\n            if {$first_one_pos == -1 || $first_one_pos > $pos} {\n                # Update the position of the first 1 bit in the array\n                # if the bit we set is on the left of the previous one.\n                set first_one_pos $pos\n            }\n        }\n    }\n\n    test {BITPOS bit=0 fuzzy testing using SETBIT} {\n        set max 524288; # 64k\n        set first_zero_pos $max\n        r set str [string repeat \"\\xff\" [expr $max/8]]\n        for {set j 0} {$j < 1000} {incr j} {\n            assert {[r bitpos str 0] == $first_zero_pos}\n            if {$first_zero_pos == $max} {\n                assert {[r bitpos str 0 0 -1 bit] == -1}\n            } else {\n                assert {[r bitpos str 0 0 -1 bit] == $first_zero_pos}\n            }\n            set pos [randomInt $max]\n            r setbit str $pos 0\n            if {$first_zero_pos > $pos} {\n                # Update the position of the first 0 bit in the array\n                # if the bit we clear is on the left of the previous one.\n                set first_zero_pos $pos\n            }\n        }\n    }\n\n    # This test creates a string of 10 bytes. It has two iterations. One clears\n    # all the bits and sets just one bit, and the other sets all the bits and clears\n    # just one bit. 
Each iteration loops from bit offset 0 to 79 and uses SETBIT\n    # to set the bit to 0 or 1, and then uses BITPOS and BITCOUNT on a few mutations.\n    test {BITPOS/BITCOUNT fuzzy testing using SETBIT} {\n        # We have two start and end ranges, each range used to select a random\n        # position, one for start position and one for end position.\n        proc test_one {start1 end1 start2 end2 pos bit pos_type} {\n            set start [randomRange $start1 $end1]\n            set end [randomRange $start2 $end2]\n            if {$start > $end} {\n                # Swap start and end\n                lassign [list $end $start] start end\n            }\n            set startbit $start\n            set endbit $end\n            # For byte index, we need to generate the real bit index\n            if {[string equal $pos_type byte]} {\n                set startbit [expr $start << 3]\n                set endbit [expr ($end << 3) + 7]\n            }\n            # Whether the test bit index is within the range.\n            set inrange [expr ($pos >= $startbit && $pos <= $endbit) ? 1: 0]\n            # For bitcount, there are four different results.\n            # $inrange == 0 && $bit == 0, all bits in the range are set, so $endbit - $startbit + 1\n            # $inrange == 0 && $bit == 1, all bits in the range are clear, so 0\n            # $inrange == 1 && $bit == 0, all bits in the range are set but one, so $endbit - $startbit\n            # $inrange == 1 && $bit == 1, all bits in the range are clear but one, so 1\n            set res_count [expr ($endbit - $startbit + 1) * (1 - $bit) + $inrange * [expr $bit ? 1 : -1]]\n            assert {[r bitpos str $bit $start $end $pos_type] == [expr $inrange ? 
$pos : -1]}\n            assert {[r bitcount str $start $end $pos_type] == $res_count}\n        }\n\n        r del str\n        set max 80;\n        r setbit str [expr $max - 1] 0\n        set bytes [expr $max >> 3]\n        # First iteration sets all bits to 1, then set bit to 0 from 0 to max - 1\n        # Second iteration sets all bits to 0, then set bit to 1 from 0 to max - 1\n        for {set bit 0} {$bit < 2} {incr bit} {\n            r bitop not str str\n            for {set j 0} {$j < $max} {incr j} {\n                r setbit str $j $bit\n\n                # First iteration tests byte index and second iteration tests bit index.\n                foreach {curr end pos_type} [list [expr $j >> 3] $bytes byte $j $max bit] {\n                    # start==end set to bit position\n                    test_one $curr $curr $curr $curr $j $bit $pos_type\n                    # Both start and end are before bit position\n                    if {$curr > 0} {\n                        test_one 0 $curr 0 $curr $j $bit $pos_type\n                    }\n                    # Both start and end are after bit position\n                    if {$curr < [expr $end - 1]} {\n                        test_one [expr $curr + 1] $end [expr $curr + 1] $end $j $bit $pos_type\n                    }\n                    # start is before and end is after bit position\n                    if {$curr > 0 && $curr < [expr $end - 1]} {\n                        test_one 0 $curr [expr $curr +1] $end $j $bit $pos_type\n                    }\n                }\n\n                # restore bit\n                r setbit str $j [expr 1 - $bit]\n            }\n        }\n    }\n}\n\nrun_solo {bitops-large-memory} {\nstart_server {tags {\"bitops\"}} {\n    test \"BIT pos larger than UINT_MAX\" {\n        set bytes [expr (1 << 29) + 1]\n        set bitpos [expr (1 << 32)]\n        set oldval [lindex [r config get proto-max-bulk-len] 1]\n        r config set proto-max-bulk-len $bytes\n        r setbit 
mykey $bitpos 1\n        assert_equal $bytes [r strlen mykey]\n        assert_equal 1 [r getbit mykey $bitpos]\n        assert_equal [list 128 128 -1] [r bitfield mykey get u8 $bitpos set u8 $bitpos 255 get i8 $bitpos]\n        assert_equal $bitpos [r bitpos mykey 1]\n        assert_equal $bitpos [r bitpos mykey 1 [expr $bytes - 1]]\n        if {$::accurate} {\n            # set all bits to 1\n            set mega [expr (1 << 23)]\n            set part [string repeat \"\\xFF\" $mega]\n            for {set i 0} {$i < 64} {incr i} {\n                r setrange mykey [expr $i * $mega] $part\n            }\n            r setrange mykey [expr $bytes - 1] \"\\xFF\"\n            assert_equal [expr $bitpos + 8] [r bitcount mykey]\n            assert_equal -1 [r bitpos mykey 0 0 [expr $bytes - 1]]\n        }\n        r config set proto-max-bulk-len $oldval\n        r del mykey\n    } {1} {large-memory}\n\n    test \"SETBIT values larger than UINT32_MAX and lzf_compress/lzf_decompress correctly\" {\n        set bytes [expr (1 << 32) + 1]\n        set bitpos [expr (1 << 35)]\n        set oldval [lindex [r config get proto-max-bulk-len] 1]\n        r config set proto-max-bulk-len $bytes\n        r setbit mykey $bitpos 1\n        assert_equal $bytes [r strlen mykey]\n        assert_equal 1 [r getbit mykey $bitpos]\n        r debug reload ;# lzf_compress/lzf_decompress when RDB saving/loading.\n        assert_equal 1 [r getbit mykey $bitpos]\n        r config set proto-max-bulk-len $oldval\n        r del mykey\n    } {1} {large-memory needs:debug}\n}\n} ;#run_solo\n"
  },
  {
    "path": "tests/unit/client-eviction.tcl",
    "content": "tags {\"external:skip logreqres:skip\"} {\n\n# Get info about a redis client connection:\n# name - name of client we want to query\n# f - field name from \"CLIENT LIST\" we want to get\nproc client_field {name f} {\n    set clients [split [string trim [r client list]] \"\\r\\n\"]\n    set c [lsearch -inline $clients *name=$name*]\n    if {![regexp $f=(\\[a-zA-Z0-9-\\]+) $c - res]} {\n        error \"no client named $name found with field $f\"\n    }\n    return $res\n}\n\nproc client_exists {name} {\n    if {[catch { client_field $name tot-mem } e]} {\n        return false\n    }\n    return true\n}\n\nproc gen_client {} {\n    set rr [redis_client]\n    set name \"tst_[randstring 4 4 simplealpha]\"\n    $rr client setname $name\n    assert {[client_exists $name]}\n    return [list $rr $name]\n}\n\n# Sum a value across all redis client connections:\n# f - the field name from \"CLIENT LIST\" we want to sum\nproc clients_sum {f} {\n    set sum 0\n    set clients [split [string trim [r client list]] \"\\r\\n\"]\n    foreach c $clients {\n        if {![regexp $f=(\\[a-zA-Z0-9-\\]+) $c - res]} {\n            error \"field $f not found in $c\"\n        }\n        incr sum $res\n    }\n    return $sum\n}\n\nproc mb {v} {\n    return [expr $v * 1024 * 1024]\n}\n\nproc kb {v} {\n    return [expr $v * 1024]\n}\n\nstart_server {} {\n    set maxmemory_clients 3000000\n    r config set maxmemory-clients $maxmemory_clients\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test \"client evicted due to large argv\" {\n        r flushdb\n        lassign [gen_client] rr cname\n        # Attempt a large multi-bulk command under eviction limit\n        $rr mset k v k2 [string repeat v 1000000]\n        assert_equal [$rr get k] v\n        # Attempt another command, now causing client eviction\n        catch { $rr mset k v k2 [string repeat v $maxmemory_clients] } e\n        assert {![client_exists $cname]}\n        $rr 
close\n    }\n\n    test \"client evicted due to large query buf\" {\n        r flushdb\n        lassign [gen_client] rr cname\n        # Attempt to fill the query buff without completing the argument above the limit, causing client eviction\n        catch {\n            $rr write [join [list \"*1\\r\\n\\$$maxmemory_clients\\r\\n\" [string repeat v $maxmemory_clients]] \"\"]\n            $rr flush\n            $rr read\n        } e\n        assert {![client_exists $cname]}\n        $rr close\n    }\n\n    test \"client evicted due to percentage of maxmemory\" {\n        set maxmemory [mb 6]\n        r config set maxmemory $maxmemory\n        # Set client eviction threshold to 7% of maxmemory\n        set maxmemory_clients_p 7\n        r config set maxmemory-clients $maxmemory_clients_p%\n        r flushdb\n\n        set maxmemory_clients_actual [expr $maxmemory * $maxmemory_clients_p / 100]\n\n        lassign [gen_client] rr cname\n        # Attempt to fill the query buff with only half the percentage threshold verify we're not disconnected\n        set n [expr $maxmemory_clients_actual / 2]\n        $rr write [join [list \"*1\\r\\n\\$$n\\r\\n\" [string repeat v $n]] \"\"]\n        $rr flush\n        wait_for_condition 100 10 {\n            [client_field $cname tot-mem] >= $n\n        } else {\n            fail \"Failed to fill qbuf for test\"\n        }\n        set tot_mem [client_field $cname tot-mem]\n        assert {$tot_mem >= $n && $tot_mem < $maxmemory_clients_actual}\n\n        # Attempt to fill the query buff with the percentage threshold of maxmemory and verify we're evicted\n        $rr close\n        lassign [gen_client] rr cname\n        catch {\n            $rr write [join [list \"*1\\r\\n\\$$maxmemory_clients_actual\\r\\n\" [string repeat v $maxmemory_clients_actual]] \"\"]\n            $rr flush\n        } e\n        wait_for_condition 100 10 {\n            ![client_exists $cname]\n        } else {\n            fail \"Failed to evict client\"\n     
   }\n        $rr close\n\n        # Restore settings\n        r config set maxmemory 0\n        r config set maxmemory-clients $maxmemory_clients\n    }\n\n    test \"client evicted due to large multi buf\" {\n        r flushdb\n        lassign [gen_client] rr cname\n\n        # Attempt a multi-exec where sum of commands is less than maxmemory_clients\n        $rr multi\n        $rr set k [string repeat v [expr $maxmemory_clients / 4]]\n        $rr set k [string repeat v [expr $maxmemory_clients / 4]]\n        assert_equal [$rr exec] {OK OK}\n\n        # Attempt a multi-exec where sum of commands is more than maxmemory_clients, causing client eviction\n        $rr multi\n        catch {\n            for {set j 0} {$j < 5} {incr j} {\n                $rr set k [string repeat v [expr $maxmemory_clients / 4]]\n            }\n        } e\n        assert {![client_exists $cname]}\n        $rr close\n    }\n\n    test \"client evicted due to watched key list\" {\n        r flushdb\n        set rr [redis_client]\n\n        # Since watched key list is a small overhead this test uses a minimal maxmemory-clients config\n        set temp_maxmemory_clients 200000\n        r config set maxmemory-clients $temp_maxmemory_clients\n\n        # Append watched keys until list maxes out maxmemory clients and causes client eviction\n        catch {\n            for {set j 0} {$j < $temp_maxmemory_clients} {incr j} {\n                $rr watch $j\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n        $rr close\n\n        # Restore config for next tests\n        r config set maxmemory-clients $maxmemory_clients\n    }\n\n    test \"client evicted due to pubsub subscriptions\" {\n        r flushdb\n\n        # Since pubsub subscriptions cause a small overhead this test uses a minimal maxmemory-clients config\n        set temp_maxmemory_clients 200000\n        r config set maxmemory-clients $temp_maxmemory_clients\n\n        # Test eviction due to pubsub 
patterns\n        set rr [redis_client]\n        # Add patterns until list maxes out maxmemory clients and causes client eviction\n        catch {\n            for {set j 0} {$j < $temp_maxmemory_clients} {incr j} {\n                $rr psubscribe $j\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n        $rr close\n\n        # Test eviction due to pubsub channels\n        set rr [redis_client]\n        # Subscribe to global channels until list maxes out maxmemory clients and causes client eviction\n        catch {\n            for {set j 0} {$j < $temp_maxmemory_clients} {incr j} {\n                $rr subscribe $j\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n        $rr close\n\n        # Test eviction due to sharded pubsub channels\n        set rr [redis_client]\n        # Subscribe to sharded pubsub channels until list maxes out maxmemory clients and causes client eviction\n        catch {\n            for {set j 0} {$j < $temp_maxmemory_clients} {incr j} {\n                $rr ssubscribe $j\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n        $rr close\n\n        # Restore config for next tests\n        r config set maxmemory-clients $maxmemory_clients\n    }\n\n    test \"client evicted due to tracking redirection\" {\n        r flushdb\n        set rr [redis_client]\n        set redirected_c [redis_client]\n        $redirected_c client setname redirected_client\n        set redir_id [$redirected_c client id]\n        $redirected_c SUBSCRIBE __redis__:invalidate\n        $rr client tracking on redirect $redir_id bcast\n        # Use a big key name to fill the redirected tracking client's buffer quickly\n        set key_length [expr 1024*200]\n        set long_key [string repeat k $key_length]\n        # Use a script so we won't need to pass the long key name when dirtying it in the loop\n        set script_sha [$rr script load \"redis.call('incr', 
'$long_key')\"]\n\n        # Pause serverCron so it won't update memory usage since we're testing the update logic when\n        # writing tracking redirection output\n        r debug pause-cron 1\n\n        # Read and write to same (long) key until redirected_client's buffers cause it to be evicted\n        catch {\n            while true {\n                set mem [client_field redirected_client tot-mem]\n                assert {$mem < $maxmemory_clients}\n                $rr evalsha $script_sha 0\n            }\n        } e\n        assert_match {no client named redirected_client found*} $e\n\n        r debug pause-cron 0\n        $rr close\n        $redirected_c close\n    } {0} {needs:debug}\n\n    test \"client evicted due to client tracking prefixes\" {\n        r flushdb\n        set rr [redis_client]\n\n        # Since tracking prefixes list is a small overhead this test uses a minimal maxmemory-clients config\n        set temp_maxmemory_clients 200000\n        r config set maxmemory-clients $temp_maxmemory_clients\n\n        # Append tracking prefixes until list maxes out maxmemory clients and causes client eviction\n        # Combine more prefixes in each command to speed up the test. 
We did not actually count\n        # the memory usage of all prefixes (see getClientMemoryUsage), so we cannot use larger prefixes\n        # to speed up the test here.\n        catch {\n            for {set j 0} {$j < $temp_maxmemory_clients} {incr j} {\n                $rr client tracking on prefix [format a%09s $j] prefix [format b%09s $j] prefix [format c%09s $j] bcast\n            }\n        } e\n        assert_match {I/O error reading reply} $e\n        $rr close\n\n        # Restore config for next tests\n        r config set maxmemory-clients $maxmemory_clients\n    }\n\n    test \"client evicted due to output buf\" {\n        r flushdb\n        r setrange k 200000 v\n        set rr [redis_deferring_client]\n        $rr client setname test_client\n        $rr flush\n        assert {[$rr read] == \"OK\"}\n        # Attempt a large response under eviction limit\n        $rr get k\n        $rr flush\n        assert {[string length [$rr read]] == 200001}\n        set mem [client_field test_client tot-mem]\n        assert {$mem < $maxmemory_clients}\n\n        # Fill output buff in loop without reading it and make sure\n        # we're eventually disconnected, but before reaching maxmemory_clients\n        while true {\n            if { [catch {\n                set mem [client_field test_client tot-mem]\n                assert {$mem < $maxmemory_clients}\n                $rr get k\n                $rr flush\n               } e]} {\n                assert {![client_exists test_client]}\n                break\n            }\n        }\n        $rr close\n    }\n\n    foreach {no_evict} {on off} {\n        test \"client no-evict $no_evict\" {\n            r flushdb\n            r client setname control\n            r client no-evict on ;# Avoid evicting the main connection\n            lassign [gen_client] rr cname\n            $rr client no-evict $no_evict\n\n            # Overflow maxmemory-clients\n            set qbsize [expr {$maxmemory_clients + 
1}]\n            if {[catch {\n                $rr write [join [list \"*1\\r\\n\\$$qbsize\\r\\n\" [string repeat v $qbsize]] \"\"]\n                $rr flush\n                wait_for_condition 200 10 {\n                    [client_field $cname qbuf] == $qbsize\n                } else {\n                    fail \"Failed to fill qbuf for test\"\n                }\n            } e] && $no_evict == off} {\n                assert {![client_exists $cname]}\n            } elseif {$no_evict == on} {\n                assert {[client_field $cname tot-mem] > $maxmemory_clients}\n            }\n            $rr close\n        }\n    }\n}\n\nstart_server {} {\n    set server_pid [s process_id]\n    set maxmemory_clients [mb 10]\n    set obuf_limit [mb 3]\n    r config set maxmemory-clients $maxmemory_clients\n    r config set client-output-buffer-limit \"normal $obuf_limit 0 0\"\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test \"avoid client eviction when client is freed by output buffer limit\" {\n        r flushdb\n        set obuf_size [expr {$obuf_limit + [mb 1]}]\n        r setrange k $obuf_size v\n        set rr1 [redis_client]\n        $rr1 client setname \"qbuf-client\"\n        set rr2 [redis_deferring_client]\n        $rr2 client setname \"obuf-client1\"\n        assert_equal [$rr2 read] OK\n        set rr3 [redis_deferring_client]\n        $rr3 client setname \"obuf-client2\"\n        assert_equal [$rr3 read] OK\n\n        # Occupy client's query buff with less than output buffer limit left to exceed maxmemory-clients\n        set qbsize [expr {$maxmemory_clients - $obuf_size}]\n        $rr1 write [join [list \"*1\\r\\n\\$$qbsize\\r\\n\" [string repeat v $qbsize]] \"\"]\n        $rr1 flush\n        # Wait for qbuff to be as expected\n        wait_for_condition 200 10 {\n            [client_field qbuf-client qbuf] == $qbsize\n        } else {\n            fail \"Failed to fill qbuf for test\"\n        }\n\n     
   # Make the other two obuf-clients pass obuf limit and also pass maxmemory-clients\n        # We use two obuf-clients to make sure that even if client eviction is attempted\n        # between two command processing (with no sleep) we don't perform any client eviction\n        # because the obuf limit is enforced with precedence.\n        pause_process $server_pid\n        $rr2 get k\n        $rr2 flush\n        $rr3 get k\n        $rr3 flush\n        resume_process $server_pid\n        r ping ;# make sure a full event loop cycle is processed before issuing CLIENT LIST\n\n        # wait for get commands to be processed\n        wait_for_condition 100 10 {\n            [expr {[regexp {calls=(\\d+)} [cmdrstat get r] -> calls] ? $calls : 0}] >= 2\n        } else {\n            fail \"get did not arrive\"\n        }\n\n        # Validate obuf-clients were disconnected (because of obuf limit)\n        catch {client_field obuf-client1 name} e\n        assert_match {no client named obuf-client1 found*} $e\n        catch {client_field obuf-client2 name} e\n        assert_match {no client named obuf-client2 found*} $e\n\n        # Validate qbuf-client is still connected and wasn't evicted\n        if {[lindex [r config get io-threads] 1] == 1} {\n            assert_equal [client_field qbuf-client name] {qbuf-client}\n        }\n\n        $rr1 close\n        $rr2 close\n        $rr3 close\n    }\n}\n\nstart_server {} {\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test \"decrease maxmemory-clients causes client eviction\" {\n        set maxmemory_clients [mb 4]\n        set client_count 10\n        set qbsize [expr ($maxmemory_clients - [mb 1]) / $client_count]\n        r config set maxmemory-clients $maxmemory_clients\n\n\n        # Make multiple clients consume together roughly 1mb less than maxmemory_clients\n        set rrs {}\n        for {set j 0} {$j < $client_count} {incr j} {\n            set rr 
[redis_client]\n            lappend rrs $rr\n            $rr client setname client$j\n            $rr write [join [list \"*2\\r\\n\\$$qbsize\\r\\n\" [string repeat v $qbsize]] \"\"]\n            $rr flush\n            wait_for_condition 200 10 {\n                [client_field client$j qbuf] >= $qbsize\n            } else {\n                fail \"Failed to fill qbuf for test\"\n            }\n        }\n\n        # Make sure all clients are still connected\n        set connected_clients [llength [lsearch -all [split [string trim [r client list]] \"\\r\\n\"] *name=client*]]\n        assert {$connected_clients == $client_count}\n\n        # Decrease maxmemory_clients and expect client eviction\n        r config set maxmemory-clients [expr $maxmemory_clients / 2]\n        wait_for_condition 200 10 {\n            [llength [regexp -all -inline {name=client} [r client list]]] < $client_count\n        } else {\n            fail \"Failed to evict clients\"\n        }\n\n        foreach rr $rrs {$rr close}\n    }\n}\n\nstart_server {} {\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test \"evict clients only until below limit\" {\n        set client_count 10\n        set client_mem [mb 1]\n        r debug replybuffer resizing 0\n        r config set maxmemory-clients 0\n        r client setname control\n        r client no-evict on\n\n        # Make multiple clients consume together roughly 1mb less than maxmemory_clients\n        set total_client_mem 0\n        set max_client_mem 0\n        set rrs {}\n        for {set j 0} {$j < $client_count} {incr j} {\n            set rr [redis_client]\n            lappend rrs $rr\n            $rr client setname client$j\n            $rr write [join [list \"*2\\r\\n\\$$client_mem\\r\\n\" [string repeat v $client_mem]] \"\"]\n            $rr flush\n            wait_for_condition 200 10 {\n                [client_field client$j tot-mem] >= $client_mem\n            } else {\n           
     fail \"Failed to fill tot-mem for test\"\n            }\n            # In theory all these clients should use the same amount of memory (~1mb). But in practice\n            # some allocators (libc) can return different allocation sizes for the same malloc argument causing\n            # some clients to use slightly more memory than others. We find the largest client and make sure\n            # all clients are roughly the same size (+-1%). Then we can safely set the client eviction limit and\n            # expect consistent results in the test.\n            set cmem [client_field client$j tot-mem]\n            if {$max_client_mem > 0} {\n                set size_ratio [expr $max_client_mem.0/$cmem.0]\n                assert_range $size_ratio 0.99 1.01\n            }\n            if {$cmem > $max_client_mem} {\n                set max_client_mem $cmem\n            }\n        }\n\n        # Make sure all clients are still connected\n        set connected_clients [llength [lsearch -all [split [string trim [r client list]] \"\\r\\n\"] *name=client*]]\n        assert {$connected_clients == $client_count}\n\n        # Set maxmemory-clients to accommodate half our clients (taking into account the control client)\n        set maxmemory_clients [expr ($max_client_mem * $client_count) / 2 + [client_field control tot-mem]]\n        r config set maxmemory-clients $maxmemory_clients\n\n        # Make sure total used memory is below maxmemory_clients\n        set total_client_mem [clients_sum tot-mem]\n        assert {$total_client_mem <= $maxmemory_clients}\n\n        # Make sure we have only half of our clients now\n        wait_for_condition 200 100 {\n            ([lindex [r config get io-threads] 1] == 1) ?\n                ([llength [regexp -all -inline {name=client} [r client list]]] == $client_count / 2) :\n                ([llength [regexp -all -inline {name=client} [r client list]]] <= $client_count / 2)\n        } else {\n            fail \"Failed to evict 
clients\"\n        }\n\n        # Restore the reply buffer resize to default\n        r debug replybuffer resizing 1\n\n        foreach rr $rrs {$rr close}\n    } {} {needs:debug}\n}\n\nstart_server {} {\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test \"evict clients in right order (large to small)\" {\n        # Note that each size step needs to be at least x2 larger than previous step\n        # because of how the client-eviction size bucketing works\n        set sizes [list [kb 128] [mb 1] [mb 3]]\n        set clients_per_size 3\n        r client setname control\n        r client no-evict on\n        r config set maxmemory-clients 0\n        r debug replybuffer resizing 0\n\n        # Run over all sizes and create some clients using up that size\n        set total_mem 0\n        set rrs {}\n        for {set i 0} {$i < [llength $sizes]} {incr i} {\n            set size [lindex $sizes $i]\n\n            for {set j 0} {$j < $clients_per_size} {incr j} {\n                set rr [redis_client]\n                lappend rrs $rr\n                $rr client setname client-$i\n                $rr write [join [list \"*2\\r\\n\\$$size\\r\\n\" [string repeat v $size]] \"\"]\n                $rr flush\n            }\n            set client_mem [client_field client-$i tot-mem]\n\n            # Update our size list based on actual used up size (this is usually\n            # slightly more than expected because of allocator bins)\n            assert {$client_mem >= $size}\n            set sizes [lreplace $sizes $i $i $client_mem]\n\n            # Account total client memory usage\n            incr total_mem [expr $clients_per_size * $client_mem]\n        }\n\n        # Make sure all clients are connected\n        set clients [split [string trim [r client list]] \"\\r\\n\"]\n        for {set i 0} {$i < [llength $sizes]} {incr i} {\n            assert_equal [llength [lsearch -all $clients \"*name=client-$i *\"]] 
$clients_per_size\n        }\n\n        # For each size reduce maxmemory-clients so relevant clients should be evicted\n        # do this from largest to smallest\n        foreach size [lreverse $sizes] {\n            set control_mem [client_field control tot-mem]\n            set total_mem [expr $total_mem - $clients_per_size * $size]\n            # allow some tolerance when using io threads\n            r config set maxmemory-clients [expr $total_mem + $control_mem + 1000]\n            set clients [split [string trim [r client list]] \"\\r\\n\"]\n            # Verify only relevant clients were evicted\n            for {set i 0} {$i < [llength $sizes]} {incr i} {\n                set verify_size [lindex $sizes $i]\n                set count [llength [lsearch -all $clients \"*name=client-$i *\"]]\n                if {$verify_size < $size} {\n                    assert_equal $count $clients_per_size\n                } else {\n                    assert_equal $count 0\n                }\n            }\n        }\n\n        # Restore the reply buffer resize to default\n        r debug replybuffer resizing 1\n\n        foreach rr $rrs {$rr close}\n    } {} {needs:debug}\n}\n\nstart_server {} {\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    foreach type {\"client no-evict\" \"maxmemory-clients disabled\"} {\n        r flushall\n        r client no-evict on\n        r config set maxmemory-clients 0\n\n        test \"client total memory grows during $type\" {\n            r setrange k [mb 1] v\n            set rr [redis_client]\n            $rr client setname test_client\n            if {$type eq \"client no-evict\"} {\n                $rr client no-evict on\n                r config set maxmemory-clients 1\n            }\n            $rr deferred 1\n\n            # Fill output buffer in loop without reading it and make sure\n            # the tot-mem of client has increased (OS buffers didn't swallow it)\n            
# and eviction not occurring.\n            while {true} {\n                $rr get k\n                $rr flush\n                after 10\n                if {[client_field test_client tot-mem] > [mb 10]} {\n                    break\n                }\n            }\n\n            # Trigger the client eviction, by flipping the no-evict flag to off\n            if {$type eq \"client no-evict\"} {\n                $rr client no-evict off\n            } else {\n                r config set maxmemory-clients 1\n            }\n\n            # wait for the client to be disconnected\n            wait_for_condition 5000 50 {\n                ![client_exists test_client]\n            } else {\n                puts [r client list]\n                fail \"client was not disconnected\"\n            }\n            $rr close\n        }\n    }\n}\n\n} ;# tags\n\n"
  },
  {
    "path": "tests/unit/cluster/announced-endpoints.tcl",
    "content": "start_cluster 2 2 {tags {external:skip cluster}} {\n\n    test \"Test change cluster-announce-port and cluster-announce-tls-port at runtime\" {\n        if {$::tls} {\n            set baseport [lindex [R 0 config get tls-port] 1]\n        } else {\n            set baseport [lindex [R 0 config get port] 1]\n        }\n        set count [expr [llength $::servers] + 1]\n        set used_port [find_available_port $baseport $count]\n\n        R 0 config set cluster-announce-tls-port $used_port\n        R 0 config set cluster-announce-port $used_port\n\n        assert_match \"*:$used_port@*\" [R 0 CLUSTER NODES]\n        wait_for_condition 50 100 {\n            [string match \"*:$used_port@*\" [R 1 CLUSTER NODES]]\n        } else {\n            fail \"Cluster announced port was not propagated via gossip\"\n        }\n\n        R 0 config set cluster-announce-tls-port 0\n        R 0 config set cluster-announce-port 0\n        assert_match \"*:$baseport@*\" [R 0 CLUSTER NODES]\n    }\n\n    test \"Test change cluster-announce-bus-port at runtime\" {\n        if {$::tls} {\n            set baseport [lindex [R 0 config get tls-port] 1]\n        } else {\n            set baseport [lindex [R 0 config get port] 1]\n        }\n        set count [expr [llength $::servers] + 1]\n        set used_port [find_available_port $baseport $count]\n\n        # Verify config set cluster-announce-bus-port\n        R 0 config set cluster-announce-bus-port $used_port\n        assert_match \"*@$used_port *\" [R 0 CLUSTER NODES]\n        wait_for_condition 50 100 {\n            [string match \"*@$used_port *\" [R 1 CLUSTER NODES]]\n        } else {\n            fail \"Cluster announced port was not propagated via gossip\"\n        }\n\n        # Verify restore default cluster-announce-bus-port\n        set base_bus_port [expr $baseport + 10000]\n        R 0 config set cluster-announce-bus-port 0\n        assert_match \"*@$base_bus_port *\" [R 0 CLUSTER NODES]\n    }\n\n    test 
\"CONFIG SET port updates cluster-announced port\" {\n        set count [expr [llength $::servers] + 1]\n        # Get the original port and change to new_port\n        if {$::tls} {\n            set orig_port [lindex [R 0 config get tls-port] 1]\n        } else {\n            set orig_port [lindex [R 0 config get port] 1]\n        }\n        assert {$orig_port != \"\"}\n        set new_port [find_available_port $orig_port $count]\n\n        if {$::tls} {\n            R 0 config set tls-port $new_port\n        } else {\n            R 0 config set port $new_port\n        }\n\n        # Verify that the new port appears in the output of cluster slots\n        wait_for_condition 50 100 {\n            [string match \"*$new_port*\" [R 0 cluster slots]]\n        } else {\n            fail \"Cluster announced port was not updated in cluster slots\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/atomic-slot-migration.tcl",
    "content": "set ::slot_prefixes [dict create \\\n    0 \"{06S}\" \\\n    1 \"{Qi}\" \\\n    2 \"{5L5}\" \\\n    3 \"{4Iu}\" \\\n    4 \"{4gY}\" \\\n    5 \"{460}\" \\\n    6 \"{1Y7}\" \\\n    7 \"{1LV}\" \\\n    101 \"{1j2}\" \\\n    102 \"{75V}\" \\\n    103 \"{bno}\" \\\n    5462 \"{450}\"\\\n    5463 \"{4dY}\"\\\n    6000 \"{4L7}\" \\\n    6001 \"{4YV}\" \\\n    6002 \"{0bx}\" \\\n    6003 \"{AJ}\" \\\n    6004 \"{of}\" \\\n    16383 \"{6ZJ}\" \\\n]\n\n# Helper functions\nproc get_port {node_id} {\n    if {$::tls} {\n        return [lindex [R $node_id config get tls-port] 1]\n    } else {\n        return [lindex [R $node_id config get port] 1]\n    }\n}\n\n# return the prefix for the given slot\nproc slot_prefix {slot} {\n    return [dict get $::slot_prefixes $slot]\n}\n\n# return a key for the given slot\nproc slot_key {slot {suffix \"\"}} {\n    return \"[slot_prefix $slot]$suffix\"\n}\n\n# Populate a slot with keys\n# TODO: Consider merging with populate()\nproc populate_slot {num args} {\n    # Default values\n    set prefix \"key:\"\n    set size 3\n    set idx 0\n    set prints false\n    set expires 0\n    set slot -1\n\n    # Parse named arguments\n    foreach {key value} $args {\n        switch -- $key {\n            -prefix { set prefix $value }\n            -size { set size $value }\n            -idx { set idx $value }\n            -prints { set prints $value }\n            -expires { set expires $value }\n            -slot { set slot $value }\n            default { error \"Unknown option: $key\" }\n        }\n    }\n\n    # If slot is specified, use slot prefix from table\n    if {$slot >= 0} {\n        if {[dict exists $::slot_prefixes $slot]} {\n            set prefix [dict get $::slot_prefixes $slot]\n        } else {\n            error \"Slot $slot not supported in slot_prefixes table, add it manually\"\n        }\n    }\n\n    R $idx deferred 1\n    if {$num > 16} {set pipeline 16} else {set pipeline $num}\n    set val [string repeat A 
$size]\n    for {set j 0} {$j < $pipeline} {incr j} {\n        if {$expires > 0} {\n            R $idx set $prefix$j $val ex $expires\n        } else {\n            R $idx set $prefix$j $val\n        }\n        if {$prints} {puts $j}\n    }\n    for {} {$j < $num} {incr j} {\n        if {$expires > 0} {\n            R $idx set $prefix$j $val ex $expires\n        } else {\n            R $idx set $prefix$j $val\n        }\n        R $idx read\n        if {$prints} {puts $j}\n    }\n    for {set j 0} {$j < $pipeline} {incr j} {\n        R $idx read\n        if {$prints} {puts $j}\n    }\n    R $idx deferred 0\n}\n\n# Return 1 if all instances are idle\nproc asm_all_instances_idle {total} {\n    for {set i 0} {$i < $total} {incr i} {\n        if {[CI $i cluster_slot_migration_active_tasks] != 0} { return 0 }\n        if {[CI $i cluster_slot_migration_active_trim_running] != 0} { return 0 }\n    }\n    return 1\n}\n\n# Wait for all ASM tasks to complete in the cluster\nproc wait_for_asm_done {} {\n    set total_instances [expr {$::cluster_master_nodes + $::cluster_replica_nodes}]\n\n    wait_for_condition 3000 10 {\n        [asm_all_instances_idle $total_instances] == 1\n    } else {\n        # Print the number of active tasks on each instance\n        for {set i 0} {$i < $total_instances} {incr i} {\n            set migration_count [CI $i cluster_slot_migration_active_tasks]\n            set trim_count [CI $i cluster_slot_migration_active_trim_running]\n            puts \"Instance $i: migration_tasks=$migration_count, trim_tasks=$trim_count\"\n        }\n        fail \"ASM tasks did not complete on all instances\"\n    }\n    # wait all nodes to reach the same cluster config after ASM\n    wait_for_cluster_propagation\n}\n\nproc failover_and_wait_for_done {node_id {failover_arg \"\"}} {\n    set max_attempts 5\n    for {set attempt 1} {$attempt <= $max_attempts} {incr attempt} {\n        if {$failover_arg eq \"\"} {\n            R $node_id cluster failover\n        } 
else {\n            R $node_id cluster failover $failover_arg\n        }\n\n        set completed 1\n        wait_for_condition 1000 10 {\n            [string match \"*master*\" [R $node_id role]]\n        } else {\n            set completed 0\n        }\n\n        if {$completed} {\n            wait_for_cluster_propagation\n            return\n        }\n    }\n    fail \"Failover did not complete after $max_attempts attempts for node $node_id\"\n}\n\nproc migration_status {node_id task_id field} {\n    set status [R $node_id CLUSTER MIGRATION STATUS ID $task_id]\n\n    # STATUS ID returns single task, so get first element\n    if {[llength $status] == 0} {\n        return \"\"\n    }\n\n    set task_status [lindex $status 0]\n    set field_value \"\"\n\n    # Parse the key-value pairs in the task\n    for {set i 0} {$i < [llength $task_status]} {incr i 2} {\n        set key [lindex $task_status $i]\n        set value [lindex $task_status [expr $i + 1]]\n\n        if {$key eq $field} {\n            set field_value $value\n            break\n        }\n    }\n\n    return $field_value\n}\n\n# Setup slot migration test with keys and delay, then start migration\n# Returns the task_id for the migration\nproc setup_slot_migration_with_delay {src_node dst_node start_slot end_slot {keys 2} {delay 1000000}} {\n    # Two keys on the start slot\n    populate_slot $keys -idx $src_node -slot $start_slot\n\n    # we set a delay to ensure migration takes time for testing,\n    # with default parameters, two keys cost 2s to save\n    R $src_node config set rdb-key-save-delay $delay\n\n    # migrate slot range from src_node to dst_node\n    set task_id [R $dst_node CLUSTER MIGRATION IMPORT $start_slot $end_slot]\n    wait_for_condition 2000 10 {\n        [string match {*send-bulk-and-stream*} [migration_status $src_node $task_id state]]\n    } else {\n        fail \"ASM task did not start\"\n    }\n\n    return $task_id\n}\n\n# Helper function to clear module internal event 
logs\nproc clear_module_event_log {} {\n    for {set i 0} {$i < $::cluster_master_nodes + $::cluster_replica_nodes} {incr i} {\n        R $i asm.clear_event_log\n    }\n}\n\nproc reset_default_trim_method {} {\n    for {set i 0} {$i < $::cluster_master_nodes + $::cluster_replica_nodes} {incr i} {\n        R $i debug asm-trim-method default\n    }\n}\n\nstart_cluster 3 3 {tags {external:skip cluster} overrides {cluster-node-timeout 60000 cluster-allow-replica-migration no}} {\n    foreach trim_method {\"active\" \"bg\"} {\n        test \"Simple slot migration (trim method: $trim_method)\" {\n            R 0 debug asm-trim-method $trim_method\n            R 3 debug asm-trim-method $trim_method\n\n            set slot0_key [slot_key 0 mykey]\n            R 0 set $slot0_key \"a\"\n            set slot1_key [slot_key 1 mykey]\n            R 0 set $slot1_key \"b\"\n            set slot101_key [slot_key 101 mykey]\n            R 0 set $slot101_key \"c\"\n            # 3 keys cost 3s to save\n            R 0 config set rdb-key-save-delay 1000000\n\n            # load a function\n            R 0 function load {#!lua name=test1\n                    redis.register_function('test1', function() return 'hello1' end)\n            }\n\n            # migrate slot 0-100 to R 1\n            set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100]\n            # migration has started and is in the accumulate-buffer stage\n            wait_for_condition 1000 50 {\n                [string match {*send-bulk-and-stream*} [migration_status 0 $task_id state]] &&\n                [string match {*accumulate-buffer*} [migration_status 1 $task_id state]]\n            } else {\n                fail \"ASM task did not start\"\n            }\n\n            # append 99 times during migration\n            for {set i 0} {$i < 99} {incr i} {\n                R 0 multi\n                R 0 append $slot0_key \"a\"\n                R 0 exec\n                R 0 append $slot1_key \"b\"\n                R 0 append 
$slot101_key \"c\"\n            }\n\n            # wait until migration of 0-100 successful\n            wait_for_asm_done\n\n            # verify task state became completed\n            assert_equal \"completed\" [migration_status 0 $task_id state]\n            assert_equal \"completed\" [migration_status 1 $task_id state]\n\n            # the appended 99 times should also be migrated\n            assert_equal [string repeat a 100] [R 1 get $slot0_key]\n            assert_equal [string repeat b 100] [R 1 get $slot1_key]\n\n            # function should be migrated\n            assert_equal [R 0 function dump] [R 1 function dump]\n            # the slave should also get the data\n            wait_for_ofs_sync [Rn 1] [Rn 4]\n\n            R 4 readonly\n            assert_equal [string repeat a 100] [R 4 get $slot0_key]\n            assert_equal [string repeat b 100] [R 4 get $slot1_key]\n            assert_equal [R 0 function dump] [R 4 function dump]\n\n            # verify key that was not in the slot range is not migrated\n            assert_equal [string repeat c 100] [R 0 get $slot101_key]\n            # verify changes in replica\n            wait_for_ofs_sync [Rn 0] [Rn 3]\n            R 3 readonly\n            assert_equal [string repeat c 100] [R 3 get $slot101_key]\n\n            # cleanup\n            R 0 config set rdb-key-save-delay 0\n            R 0 flushall\n            R 0 function flush\n            R 1 flushall\n            R 1 function flush\n            R 0 CLUSTER MIGRATION IMPORT 0 100\n            wait_for_asm_done\n        }\n    }\n}\n\n# Skip most of the tests when running under valgrind since it is hard to\n# stabilize tests under valgrind.\nif {!$::valgrind} {\nstart_cluster 3 3 {tags {external:skip cluster} overrides {cluster-node-timeout 60000 cluster-allow-replica-migration no}} {\n    test \"Test CLUSTER MIGRATION IMPORT input validation\" {\n        # invalid arguments\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER 
MIGRATION}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION IMPORT}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION IMPORT 100}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION IMPORT 100 200 300}\n        assert_error {*unknown argument*} {R 0 CLUSTER MIGRATION UNKNOWN 1 2}\n\n        # invalid slot range\n        assert_error {*greater than end slot number*} {R 0 CLUSTER MIGRATION IMPORT 200 100}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT 17000 18000}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT 14000 18000}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT 0 16384}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT 0 -1}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT -1 2}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT -2 -1}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT 10 a}\n        assert_error {*out of range slot*} {R 0 CLUSTER MIGRATION IMPORT sd sd}\n        assert_error {*already the owner of the slot*} {R 0 CLUSTER MIGRATION IMPORT 100 200}\n    }\n\n    test \"Test CLUSTER MIGRATION CANCEL input validation\" {\n        # invalid arguments\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION CANCEL}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION CANCEL ID}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION CANCEL ID 12345 EXTRAARG}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION CANCEL ALL EXTRAARG}\n        assert_error {*unknown argument*} {R 0 CLUSTER MIGRATION CANCEL UNKNOWNARG}\n        assert_error {*unknown argument*} {R 0 CLUSTER MIGRATION CANCEL abc def}\n        # empty string id should not cancel any task\n        assert_equal 0 [R 0 CLUSTER MIGRATION CANCEL ID \"\"]\n    }\n\n    
test \"Test CLUSTER MIGRATION STATUS input validation\" {\n        # invalid arguments\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION STATUS}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION STATUS ID}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION STATUS ID id EXTRAARG}\n        assert_error {*wrong number of arguments*} {R 0 CLUSTER MIGRATION STATUS ALL EXTRAARG}\n        assert_error {*unknown argument*} {R 0 CLUSTER MIGRATION STATUS ABC DEF}\n        assert_error {*unknown argument*} {R 0 CLUSTER MIGRATION STATUS UNKNOWNARG}\n        # empty string id should not list any task\n        assert_equal {} [R 0 CLUSTER MIGRATION STATUS ID \"\"]\n    }\n\n    test \"Test TRIMSLOTS input validation\" {\n        # Wrong number of arguments\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS}\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS RANGES}\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS RANGES 1}\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS RANGES 2 100}\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS RANGES 17000 1}\n        assert_error {*wrong number of arguments*} {R 0 TRIMSLOTS RANGES abc}\n\n        # Missing ranges argument\n        assert_error {*missing ranges argument*} {R 0 TRIMSLOTS UNKNOWN 1 100 200}\n\n        # Invalid number of ranges\n        assert_error {*invalid number of ranges*} {R 0 TRIMSLOTS RANGES 0 1 1}\n        assert_error {*invalid number of ranges*} {R 0 TRIMSLOTS RANGES -1 2 2}\n        assert_error {*invalid number of ranges*} {R 0 TRIMSLOTS RANGES 17000 1 2}\n        assert_error {*invalid number of ranges*} {R 0 TRIMSLOTS RANGES 2 100 200 300}\n\n        # Invalid slot numbers\n        assert_error {*out of range slot*} {R 0 TRIMSLOTS RANGES 1 -1 0}\n        assert_error {*out of range slot*} {R 0 TRIMSLOTS RANGES 1 -2 -1}\n        assert_error {*out of 
range slot*} {R 0 TRIMSLOTS RANGES 1 0 16384}\n        assert_error {*out of range slot*} {R 0 TRIMSLOTS RANGES 1 abc def}\n        assert_error {*out of range slot*} {R 0 TRIMSLOTS RANGES 1 100 abc}\n\n        # Start slot greater than end slot\n        assert_error {*greater than end slot number*} {R 0 TRIMSLOTS RANGES 1 200 100}\n    }\n\n    test \"Test IMPORT not allowed on replica\" {\n        assert_error {* not allowed on replica*} {R 4 CLUSTER MIGRATION IMPORT 100 200}\n    }\n\n    test \"Test IMPORT not allowed during manual migration\" {\n        set dst_id [R 1 CLUSTER MYID]\n\n        # Set a slot to IMPORTING\n        R 0 CLUSTER SETSLOT 15000 IMPORTING $dst_id\n        assert_error {*must be STABLE to start*slot migration*} {R 0 CLUSTER MIGRATION IMPORT 100 200}\n        # Revert the change\n        R 0 CLUSTER SETSLOT 15000 STABLE\n\n        # Same test with setting a slot to MIGRATING\n        R 0 CLUSTER SETSLOT 5000 MIGRATING $dst_id\n        assert_error {*must be STABLE to start*slot migration*} {R 0 CLUSTER MIGRATION IMPORT 100 200}\n        # Revert the change\n        R 0 CLUSTER SETSLOT 5000 STABLE\n    }\n\n    test \"Test IMPORT not allowed if the node is already the owner\" {\n        assert_error {*already the owner of the slot*} {R 0 CLUSTER MIGRATION IMPORT 100 100}\n    }\n\n    test \"Test IMPORT not allowed for a slot without an owner\" {\n        # Slot will have no owner\n        R 0 CLUSTER DELSLOTS 5000\n\n        assert_error {*slot has no owner: 5000*} {R 0 CLUSTER MIGRATION IMPORT 5000 5000}\n\n        # Revert the change\n        R 0 CLUSTER ADDSLOTS 5000\n    }\n\n    test \"Test IMPORT not allowed if slot ranges belong to different nodes\" {\n        assert_error {*slots belong to different source nodes*} {R 0 CLUSTER MIGRATION IMPORT 7000 15000}\n        assert_error {*slots belong to different source nodes*} {R 0 CLUSTER MIGRATION IMPORT 7000 8000 14000 15000}\n    }\n\n    test \"Test IMPORT not allowed if slot is 
given multiple times\" {\n        assert_error {*Slot*specified multiple times*} {R 0 CLUSTER MIGRATION IMPORT 7000 8000 8000 9000}\n        assert_error {*Slot*specified multiple times*} {R 0 CLUSTER MIGRATION IMPORT 7000 8000 7900 9000}\n    }\n\n    test \"Test CLUSTER MIGRATION STATUS ALL lists all tasks\" {\n        # Create 3 completed tasks\n        R 0 CLUSTER MIGRATION IMPORT 7000 7001\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 7002 7003\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 7004 7005\n        wait_for_asm_done\n\n        # Get node IDs for verification\n        set node0_id [R 0 cluster myid]\n        set node1_id [R 1 cluster myid]\n\n        # Verify CLUSTER MIGRATION STATUS ALL reply from both nodes\n        foreach node_idx {0 1} {\n            set tasks [R $node_idx CLUSTER MIGRATION STATUS ALL]\n            assert_equal 3 [llength $tasks]\n\n            for {set i 0} {$i < 3} {incr i} {\n                set task [lindex $tasks $i]\n\n                # Verify field order\n                set expected_fields {id slots source dest operation state\n                                    last_error retries create_time start_time\n                                    end_time write_pause_ms}\n                for {set j 0} {$j < [llength $expected_fields]} {incr j} {\n                    set expected_field [lindex $expected_fields $j]\n                    set actual_field [lindex $task [expr $j * 2]]\n                    assert_equal $expected_field $actual_field\n                }\n\n                # Verify basic fields\n                assert_equal \"completed\" [dict get $task state]\n                assert_equal \"\" [dict get $task last_error]\n                assert_equal 0 [dict get $task retries]\n                assert {[dict get $task write_pause_ms] >= 0}\n\n                # Verify operation based on node\n                if {$node_idx == 0} {\n                    assert_equal \"import\" [dict get 
$task operation]\n                } else {\n                    assert_equal \"migrate\" [dict get $task operation]\n                }\n\n                # Verify node IDs (all tasks: node1 -> node0)\n                assert_equal $node1_id [dict get $task source]\n                assert_equal $node0_id [dict get $task dest]\n\n                # Verify timestamps exist and are reasonable\n                set create_time [dict get $task create_time]\n                set start_time [dict get $task start_time]\n                set end_time [dict get $task end_time]\n                assert {$create_time > 0}\n                assert {$start_time >= $create_time}\n                assert {$end_time >= $start_time}\n\n                # Verify specific slot ranges for each task\n                set slots [dict get $task slots]\n                if {$i == 0} {\n                    assert_equal \"7004-7005\" $slots\n                } elseif {$i == 1} {\n                    assert_equal \"7002-7003\" $slots\n                } elseif {$i == 2} {\n                    assert_equal \"7000-7001\" $slots\n                }\n            }\n        }\n\n        # cleanup\n        R 1 CLUSTER MIGRATION IMPORT 7000 7005\n        wait_for_asm_done\n    }\n\n    test \"Test IMPORT not allowed if there is an overlapping import\" {\n        # Let slot migration take long time, so that we can test overlapping import\n        R 1 config set rdb-key-save-delay 1000000\n        R 1 set tag22273 tag22273 ;# slot hash is 7000\n        R 1 set tag9283 tag9283 ;# slot hash is 8000\n\n        set task_id [R 0 CLUSTER MIGRATION IMPORT 7000 8000]\n        assert_error {*overlapping import exists*} {R 0 CLUSTER MIGRATION IMPORT 8000 9000}\n        assert_error {*overlapping import exists*} {R 0 CLUSTER MIGRATION IMPORT 7500 8500}\n        assert_error {*overlapping import exists*} {R 0 CLUSTER MIGRATION IMPORT 6000 7000}\n        assert_error {*overlapping import exists*} {R 0 CLUSTER MIGRATION IMPORT 
6500 7500}\n\n        wait_for_condition 1000 50 {\n            [string match {*completed*} [migration_status 0 $task_id state]] &&\n            [string match {*completed*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not complete\"\n        }\n        assert_equal \"tag22273\" [R 0 get tag22273]\n        assert_equal \"tag9283\" [R 0 get tag9283]\n        R 1 config set rdb-key-save-delay 0\n\n        # revert the migration\n        R 1 CLUSTER MIGRATION IMPORT 7000 8000\n        wait_for_asm_done\n    }\n\n    test \"Test IMPORT with unsorted and adjacent ranges\" {\n        # Redis should sort and merge adjacent ranges\n        # Adjacent means: prev.end + 1 == next.start\n        # e.g. 7000-7001 7002-7003 7004-7005  =>  7000-7005\n\n        # Test with adjacent ranges\n        set task_id [R 0 CLUSTER MIGRATION IMPORT 7000 7001 7002 7100]\n        wait_for_asm_done\n        # verify migration is successfully completed on both nodes\n        assert_equal \"completed\" [migration_status 0 $task_id state]\n        assert_equal \"completed\" [migration_status 1 $task_id state]\n        # verify slot ranges are merged correctly\n        assert_equal \"7000-7100\" [migration_status 0 $task_id slots]\n        assert_equal \"7000-7100\" [migration_status 1 $task_id slots]\n\n        # Test with unsorted and adjacent ranges\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 7050 7051 7010 7049 7000 7005]\n        wait_for_asm_done\n        # verify migration is successfully completed on both nodes\n        assert_equal \"completed\" [migration_status 0 $task_id state]\n        assert_equal \"completed\" [migration_status 1 $task_id state]\n        # verify slot ranges are merged correctly\n        assert_equal \"7000-7005 7010-7051\" [migration_status 0 $task_id slots]\n        assert_equal \"7000-7005 7010-7051\" [migration_status 1 $task_id slots]\n\n        # Another test with unsorted and adjacent ranges\n        set task_id 
[R 1 CLUSTER MIGRATION IMPORT 7007 7007 7008 7009 7006 7006]\n        wait_for_asm_done\n        # verify migration is successfully completed on both nodes\n        assert_equal \"completed\" [migration_status 0 $task_id state]\n        assert_equal \"completed\" [migration_status 1 $task_id state]\n        # verify slot ranges are merged correctly\n        assert_equal \"7006-7009\" [migration_status 0 $task_id slots]\n        assert_equal \"7006-7009\" [migration_status 1 $task_id slots]\n    }\n\n    test \"Simple slot migration with write load\" {\n        # Perform slot migration while traffic is on and verify data consistency.\n        # Trimming is disabled on source nodes so we can compare the dbs after\n        # migration via DEBUG DIGEST to ensure no data loss during migration.\n        # Steps:\n        # 1. Disable trimming on both nodes\n        # 2. Populate slot 0 on node-0 and slot 6000 on node-1\n        # 3. Start write traffic on both nodes\n        # 4. Migrate slot 0 from node-0 to node-1\n        # 5. Migrate slot 6000 from node-1 to node-0\n        # 6. Stop write traffic, verify dbs are identical.\n\n        # This test runs slowly under the thread sanitizer.\n        #  1. Increase the lag threshold from the default 1 MB to 10 MB to let the destination catch up easily.\n        #  2. 
Increase the write pause timeout from the default 10s to 60s so the source can wait longer.\n        set prev_config_lag [lindex [R 0 config get cluster-slot-migration-handoff-max-lag-bytes] 1]\n        R 0 config set cluster-slot-migration-handoff-max-lag-bytes 10mb\n        R 1 config set cluster-slot-migration-handoff-max-lag-bytes 10mb\n        set prev_config_timeout [lindex [R 0 config get cluster-slot-migration-write-pause-timeout] 1]\n        R 0 config set cluster-slot-migration-write-pause-timeout 60000\n        R 1 config set cluster-slot-migration-write-pause-timeout 60000\n\n        R 0 flushall\n        R 0 debug asm-trim-method none\n        populate_slot 10000 -idx 0 -slot 0\n\n        R 1 flushall\n        R 1 debug asm-trim-method none\n        populate_slot 10000 -idx 1 -slot 6000\n\n        # Start write traffic on node-0 (ignore_error_reply=1 tolerates MOVED/ASK\n        # replies while slots are being migrated).\n        set port [get_port 0]\n        set key [slot_key 0 mykey]\n        set load_handle0 [start_write_load \"127.0.0.1\" $port 100 $key 0 5 1]\n\n        # Start write traffic on node-1 (ignore_error_reply=1 for migration redirects).\n        set port [get_port 1]\n        set key [slot_key 6000 mykey]\n        set load_handle1 [start_write_load \"127.0.0.1\" $port 100 $key 0 5 1]\n\n        # Migrate keys\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 6000 6100\n        wait_for_asm_done\n\n        stop_write_load $load_handle0\n        stop_write_load $load_handle1\n\n        # verify data\n        assert_morethan [R 0 dbsize] 0\n        assert_equal [R 0 debug digest] [R 1 debug digest]\n\n        # cleanup\n        R 0 config set cluster-slot-migration-handoff-max-lag-bytes $prev_config_lag\n        R 0 config set cluster-slot-migration-write-pause-timeout $prev_config_timeout\n        R 0 debug asm-trim-method default\n        R 0 flushall\n        R 1 config set 
cluster-slot-migration-handoff-max-lag-bytes $prev_config_lag\n        R 1 config set cluster-slot-migration-write-pause-timeout $prev_config_timeout\n        R 1 debug asm-trim-method default\n        R 1 flushall\n\n        R 1 CLUSTER MIGRATION IMPORT 6000 6100\n        wait_for_asm_done\n    }\n\n    test \"Verify expire time is migrated correctly\" {\n        R 0 flushall\n        R 1 flushall\n\n        set string_key [slot_key 0 string_key]\n        set list_key [slot_key 0 list_key]\n        set hash_key [slot_key 0 hash_key]\n        set stream_key [slot_key 0 stream_key]\n\n        for {set i 0} {$i < 20} {incr i} {\n            R 1 hset $hash_key $i $i\n            R 1 xadd $stream_key * item $i\n        }\n        for {set i 0} {$i < 2000} {incr i} {\n            R 1 lpush $list_key $i\n        }\n\n        # set expire time of some keys\n        R 1 set $string_key \"a\" EX 1000\n        R 1 EXPIRE $list_key 1000\n        R 1 EXPIRE $hash_key 1000\n\n        # migrate slot 0-100 to R 0\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n\n        # check expire times are migrated correctly\n        assert_range [R 0 ttl $string_key] 900 1000\n        assert_range [R 0 ttl $list_key] 900 1000\n        assert_range [R 0 ttl $hash_key] 900 1000\n        assert_equal -1 [R 0 ttl $stream_key]\n\n        # cleanup\n        R 0 flushall\n        R 1 flushall\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n    }\n\n    test \"Slot migration with complex data types can work well\" {\n        R 0 flushall\n        R 1 flushall\n\n        set list_key [slot_key 0 list_key]\n        set set_key [slot_key 0 set_key]\n        set zset_key [slot_key 0 zset_key]\n        set hash_key [slot_key 0 hash_key]\n        set stream_key [slot_key 0 stream_key]\n\n        # generate big keys for each data type\n        for {set i 0} {$i < 1000} {incr i} {\n            R 1 lpush $list_key $i\n            R 1 sadd $set_key $i\n         
   R 1 zadd $zset_key $i $i\n            R 1 hset $hash_key $i $i\n            R 1 xadd $stream_key * item $i\n        }\n\n        # migrate slot 0-100 to R 0\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        # check the data on destination node is correct\n        assert_equal 1000 [R 0 llen $list_key]\n        assert_equal 1000 [R 0 scard $set_key]\n        assert_equal 1000 [R 0 zcard $zset_key]\n        assert_equal 1000 [R 0 hlen $hash_key]\n        assert_equal 1000 [R 0 xlen $stream_key]\n        # migrate slot 0-100 to R 1\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n    }\n\n    proc asm_basic_error_handling_test {operation channel all_states} {\n        foreach state $all_states {\n            if {$::verbose} { puts \"Testing $operation $channel channel with state: $state\"}\n\n            # For states that need incremental data streaming, set a longer delay\n            set streaming_states [list \"streaming-buffer\" \"accumulate-buffer\" \"send-bulk-and-stream\" \"send-stream\"]\n            if {$state in $streaming_states} {\n                R 1 config set rdb-key-save-delay 1000000\n            }\n\n            # Let the destination node take time to stream buffer, so the source node will handle\n            # slot snapshot child process exit, and then enter \"send-stream\" state.\n            if {$state == \"send-stream\"} {\n                R 0 config set key-load-delay 100000\n            }\n\n            # Start the slot 0 write load on the R 1\n            set slot0_key [slot_key 0 mykey]\n            set load_handle [start_write_load \"127.0.0.1\" [get_port 1] 100 $slot0_key 500]\n\n            # clear old fail points and set the new fail point\n            assert_equal {OK} [R 0 debug asm-failpoint \"\" \"\"]\n            assert_equal {OK} [R 1 debug asm-failpoint \"\" \"\"]\n            if {$operation eq \"import\"} {\n                assert_equal {OK} [R 0 debug asm-failpoint 
\"import-$channel-channel\" $state]\n            } elseif {$operation eq \"migrate\"} {\n                assert_equal {OK} [R 1 debug asm-failpoint \"migrate-$channel-channel\" $state]\n            } else {\n                fail \"Unknown operation: $operation\"\n            }\n\n            # Start the migration\n            set task_id [R 0 CLUSTER MIGRATION IMPORT 0 100]\n\n            # The task should be failed due to the fail point\n            wait_for_condition 2000 10 {\n                [string match -nocase \"*$channel*${state}*\" [migration_status 0 $task_id last_error]] ||\n                [string match -nocase \"*$channel*${state}*\" [migration_status 1 $task_id last_error]]\n            } else {\n                fail \"ASM task did not fail with expected error -\n                     (dst: [migration_status 0 $task_id last_error]\n                      src: [migration_status 1 $task_id last_error]\n                      expected: $channel $state)\"\n            }\n            stop_write_load $load_handle\n\n            # Cancel the task\n            R 0 CLUSTER MIGRATION CANCEL ID $task_id\n            R 1 CLUSTER MIGRATION CANCEL ID $task_id\n\n            R 1 config set rdb-key-save-delay 0\n            R 0 config set key-load-delay 0\n        }\n    }\n\n    test \"Destination node main channel basic error-handling tests \" {\n        set all_states [list \\\n            \"connecting\" \\\n            \"auth-reply\" \\\n            \"handshake-reply\" \\\n            \"syncslots-reply\" \\\n            \"accumulate-buffer\" \\\n            \"streaming-buffer\" \\\n            \"wait-stream-eof\" \\\n        ]\n        asm_basic_error_handling_test \"import\" \"main\" $all_states\n    }\n\n    test \"Destination node rdb channel basic error-handling tests\" {\n        set all_states [list \\\n            \"connecting\" \\\n            \"auth-reply\" \\\n            \"rdbchannel-reply\" \\\n            \"rdbchannel-transfer\" \\\n        ]\n        
asm_basic_error_handling_test \"import\" \"rdb\" $all_states\n    }\n\n    test \"Source node main channel basic error-handling tests \" {\n        set all_states [list \\\n            \"wait-rdbchannel\" \\\n            \"send-bulk-and-stream\" \\\n            \"send-stream\" \\\n            \"handoff\" \\\n        ]\n        asm_basic_error_handling_test \"migrate\" \"main\" $all_states\n    }\n\n    test \"Source node rdb channel basic error-handling tests\" {\n        set all_states [list \\\n            \"wait-bgsave-start\" \\\n            \"send-bulk-and-stream\" \\\n        ]\n        asm_basic_error_handling_test \"migrate\" \"rdb\" $all_states\n    }\n\n    test \"Migration will be successful after fail points are cleared\" {\n        R 0 flushall\n        R 1 flushall\n        set slot0_key [slot_key 0 mykey]\n        set slot1_key [slot_key 1 mykey]\n        R 1 set $slot0_key \"a\"\n        R 1 set $slot1_key \"b\"\n\n        # we set a delay to write incremental data\n        R 1 config set rdb-key-save-delay 1000000\n\n        # Start slot 0 write load on R1. 
ignore_error_reply=1 tolerates MOVED/ASK\n        # replies that can appear while slot 0 is being migrated.\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 1] 100 $slot0_key 0 0 1]\n\n        # Clear all fail points\n        assert_equal {OK} [R 0 debug asm-failpoint \"\" \"\"]\n        assert_equal {OK} [R 1 debug asm-failpoint \"\" \"\"]\n\n        # Start the migration\n        set task_id [R 0 CLUSTER MIGRATION IMPORT 0 100]\n\n        # Wait for the migration to complete\n        wait_for_asm_done\n\n        stop_write_load $load_handle\n\n        # Verify the data is migrated; slots 0 and 1 should now belong to R 0\n        # slot 0 key should be changed by the write load\n        assert_not_equal \"a\" [R 0 get $slot0_key]\n        assert_equal \"b\" [R 0 get $slot1_key]\n        R 1 config set rdb-key-save-delay 0\n    }\n\n    test \"Client output buffer limit is reached on source side\" {\n        R 0 flushall\n        R 1 flushall\n        set r1_pid [S 1 process_id]\n        R 1 debug repl-pause on-streaming-repl-buf\n\n        # Set a small output buffer limit to trigger the error\n        R 0 config set client-output-buffer-limit \"replica 4mb 0 0\"\n\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # some write traffic, so we have a chance to enter the streaming-buffer state\n        set slot0_key [slot_key 0 mykey]\n        R 0 set $slot0_key \"a\"\n\n        # after 3 seconds, the slots snapshot (costs 2s to generate) should be transferred,\n        # then streaming the buffer starts\n        after 3000\n\n        set loglines [count_log_lines 0]\n\n        # Start the slot 0 write load on R 0\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 0] 100 $slot0_key 1000]\n\n        # verify the metric is accessible; it is transient and will be reset on disconnect\n        assert {[S 0 mem_cluster_slot_migration_output_buffer] >= 0}\n\n        # After some time, the client output buffer limit should be 
reached\n        wait_for_log_messages 0 {\"*Client * closed * for overcoming of output buffer limits.*\"} $loglines 1000 10\n        wait_for_condition 1000 10 {\n            [string match {*send*stream*} [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail as expected\"\n        }\n\n        stop_write_load $load_handle\n\n        # Reset configurations\n        R 0 config set client-output-buffer-limit \"replica 0 0 0\"\n        R 0 config set rdb-key-save-delay 0\n\n        # resume server and clear pause point\n        resume_process $r1_pid\n        R 1 debug repl-pause clear\n\n        # Wait for the migration to complete\n        wait_for_asm_done\n    }\n\n    test \"Full sync buffer limit is reached on destination side\" {\n        # Set a small replication buffer limit to trigger the error\n        R 0 config set replica-full-sync-buffer-limit 1mb\n\n        # start migration from 1 to 0; it costs 4s to transfer the slots snapshot\n        set task_id [setup_slot_migration_with_delay 1 0 0 100 2 2000000]\n        set loglines [count_log_lines 0]\n\n        # Create some traffic on slot 0\n        populate_slot 100 -idx 1 -slot 0 -size 100000\n\n        # After some time, the slots sync buffer limit should be reached, but the migration\n        # will not fail since the buffer will accumulate on the source side from now on.\n        wait_for_log_messages 0 {\"*Slots sync buffer limit has been reached*\"} $loglines 1000 10\n\n        # verify the peak value; it should be greater than 1mb\n        assert {[S 0 mem_cluster_slot_migration_input_buffer_peak] > 1000000}\n        # verify the metric is accessible; it is transient and will be reset on disconnect\n        assert {[S 0 mem_cluster_slot_migration_input_buffer] >= 0}\n\n        wait_for_asm_done\n\n        # Reset configurations\n        R 0 config set replica-full-sync-buffer-limit 0\n        R 1 config set rdb-key-save-delay 0\n        R 1 cluster migration import 0 100\n   
     wait_for_asm_done\n    }\n\n    test \"Expired key is not deleted and SCAN/KEYS/RANDOMKEY/CLUSTER GETKEYSINSLOT filter keys in importing slots\" {\n        set slot0_key [slot_key 0 mykey]\n        set slot1_key [slot_key 1 mykey]\n        set slot2_key [slot_key 2 mykey]\n        R 1 flushall\n        R 0 flushall\n\n        # we set a delay to write incremental data\n        R 1 config set rdb-key-save-delay 1000000\n\n        # set expire time 2s. Generating the slot snapshot takes 3s, so these\n        # three keys will be expired after the slot snapshot is transferred\n        R 1 setex $slot0_key 2 \"a\"\n        R 1 setex $slot1_key 2 \"b\"\n        R 1 hset $slot2_key \"f1\" \"1\"\n        R 1 expire $slot2_key 2\n        R 1 hexpire $slot2_key 2 FIELDS 1 \"f1\"\n\n        set task_id [R 0 CLUSTER MIGRATION IMPORT 0 100]\n        wait_for_condition 2000 10 {\n            [string match {*send-bulk-and-stream*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not start\"\n        }\n\n        # update expire time during migration\n        R 1 setex $slot0_key 100 \"a\"\n        R 1 expire $slot1_key 80\n        R 1 expire $slot2_key 60\n        R 1 hincrbyfloat $slot2_key \"f1\" 1\n        R 1 hexpire $slot2_key 60 FIELDS 1 \"f1\"\n\n        # after 2s, at least one key should be transferred, and it should not be deleted\n        # as expired: neither active nor lazy expiration (via SCAN) takes effect.\n        # Besides, the SCAN/KEYS/RANDOMKEY/CLUSTER GETKEYSINSLOT commands cannot find them\n        after 2000\n        R 3 readonly\n        foreach id {0 3} { ;# 0 is the master, 3 is the replica\n            assert_equal {0 {}} [R $id scan 0 count 10]\n            assert_equal {} [R $id keys \"*\"]\n            assert_equal {} [R $id keys \"{06S}*\"]\n            assert_equal {} [R $id randomkey]\n            assert_equal {} [R $id cluster getkeysinslot 0 100]\n            assert_equal [R $id cluster countkeysinslot 0] 0\n          
  assert_equal [R $id dbsize] 0\n\n            # but we can see the number of keys is increased in INFO KEYSPACE\n            assert {[scan [regexp -inline {keys\\=([\\d]*)} [R $id info keyspace]] keys=%d] >= 1}\n            assert {[scan [regexp -inline {expires\\=([\\d]*)} [R $id info keyspace]] expires=%d] >= 1}\n        }\n\n        wait_for_asm_done\n\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n\n        foreach id {0 3} { ;# 0 is the master, 3 is the replica\n            # verify the keys are valid\n            assert_range [R $id ttl $slot0_key] 90 100\n            assert_range [R $id ttl $slot1_key] 70 80\n            assert_range [R $id ttl $slot2_key] 50 60\n            assert_range [R $id httl $slot2_key FIELDS 1 \"f1\"] 50 60\n\n            # KEYS/SCAN/RANDOMKEY/CLUSTER GETKEYSINSLOT will find the keys after migration\n            assert_equal [list 0 [list $slot0_key $slot1_key $slot2_key]] [R $id scan 0 count 10]\n            assert_equal [list $slot0_key $slot1_key $slot2_key] [R $id keys \"*\"]\n            assert_equal [list $slot0_key] [R $id keys \"{06S}*\"]\n            assert_not_equal {} [R $id randomkey]\n            assert_equal [list $slot0_key] [R $id cluster getkeysinslot 0 100]\n\n            # INFO KEYSPACE/DBSIZE/CLUSTER COUNTKEYSINSLOT will also reflect the keys\n            assert_equal 3 [scan [regexp -inline {keys\\=([\\d]*)} [R $id info keyspace]] keys=%d]\n            assert_equal 3 [scan [regexp -inline {expires\\=([\\d]*)} [R $id info keyspace]] expires=%d]\n            assert_equal 1 [scan [regexp -inline {subexpiry\\=([\\d]*)} [R $id info keyspace]] subexpiry=%d]\n            assert_equal 3 [R $id dbsize]\n            assert_equal 1 [R $id cluster countkeysinslot 0]\n        }\n\n        # update expire time to 10ms, after some time, the keys should be deleted due to\n        # active expiration\n        R 0 pexpire $slot0_key 10\n        R 0 pexpire $slot1_key 10\n        R 0 hpexpire $slot2_key 10 FIELDS 1 \"f1\" ;# the last 
field is expired, the key will be deleted\n        wait_for_condition 100 50 {\n            [scan [regexp -inline {keys\\=([\\d]*)} [R 0 info keyspace]] keys=%d] == {} &&\n            [scan [regexp -inline {keys\\=([\\d]*)} [R 3 info keyspace]] keys=%d] == {}\n        } else {\n            fail \"keys did not expire\"\n        }\n\n        R 1 config set rdb-key-save-delay 0\n    }\n\n    test \"Eviction does not evict keys in importing slots\" {\n        set slot0_key [slot_key 0 mykey]\n        set slot1_key [slot_key 1 mykey]\n        set slot2_key [slot_key 2 mykey]\n        set slot5462_key [slot_key 5462 mykey]\n        set slot5463_key [slot_key 5463 mykey]\n        R 1 flushall\n        R 0 flushall\n\n        # we set a delay to write incremental data\n        R 0 config set rdb-key-save-delay 1000000\n\n        set 1k_str [string repeat \"a\" 1024]\n        set 1m_str [string repeat \"a\" 1048576]\n\n        # set two keys to be evicted\n        R 1 set $slot5462_key $1k_str\n        R 1 set $slot5463_key $1k_str\n\n        # set maxmemory to 200kb more than current used memory,\n        # redis should evict some keys if importing some big keys\n        set r1_mem_used [S 1 used_memory]\n        set r1_max_mem [expr {$r1_mem_used + 200*1024}]\n        R 1 config set maxmemory $r1_max_mem\n        R 1 config set maxmemory-policy allkeys-lru\n\n        # set 3 keys to be migrated\n        R 0 set $slot0_key $1m_str\n        R 0 set $slot1_key $1m_str\n        R 0 set $slot2_key $1m_str\n\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100]\n        wait_for_condition 2000 10 {\n            [string match {*send-bulk-and-stream*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not start\"\n        }\n\n        # after 2.2s, at least two keys should be transferred, they should not be evicted\n        # but other keys (slot5462_key and slot5463_key) should be evicted\n        after 2200\n        for {set j 0} {$j < 
100} {incr j} { R 1 ping } ;# trigger eviction\n        assert_equal 0 [R 1 exists $slot5462_key]\n        assert_equal 0 [R 1 exists $slot5463_key]\n        assert {[scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 2}\n\n        # current used memory should be more than the maxmemory, since the big keys that\n        # belong to importing slots cannot be evicted.\n        set r1_mem_used [S 1 used_memory]\n        assert {$r1_mem_used > $r1_max_mem + 1024*1024}\n\n        wait_for_asm_done\n\n        # after migration, these big keys should be evicted\n        for {set j 0} {$j < 100} {incr j} { R 1 ping } ;# trigger eviction\n        assert_equal {} [scan [regexp -inline {expires\\=([\\d]*)} [R 1 info keyspace]] expires=%d]\n    }\n\n    test \"Failover will cancel slot migration tasks\" {\n        # migrate slot 0-100 from 1 to 0\n        set task_id [setup_slot_migration_with_delay 1 0 0 100]\n\n        # FAILOVER happens on the destination node, instance #3 becomes master, #0 becomes slave\n        failover_and_wait_for_done 3\n\n        # the old master will cancel the importing task, and the migrating task on\n        # the source node will fail\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 0 $task_id state]] &&\n            [string match {*failover*} [migration_status 0 $task_id last_error]] &&\n            [string match {*failed*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n\n        # We can restart ASM tasks on the new master; migrate slot 0-100 from 1 to 3\n        R 1 config set rdb-key-save-delay 0\n        set task_id [R 3 CLUSTER MIGRATION IMPORT 0 100]\n        wait_for_asm_done\n\n        # migrate slot 0-100 from 3 to 1\n        set task_id [setup_slot_migration_with_delay 3 1 0 100]\n\n        # FAILOVER happens on the source node, instance #3 becomes slave, #0 becomes master\n        
failover_and_wait_for_done 0\n\n        # the old master will cancel the migrating task, but the destination node will\n        # retry the importing task, and then succeed.\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 3 $task_id state]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n        wait_for_asm_done\n    }\n\n    test \"Flush-like command can cancel slot migration task\" {\n        # flushall, flushdb\n        foreach flushcmd {flushall flushdb} {\n            # start slot migration from 1 to 0\n            set task_id [setup_slot_migration_with_delay 1 0 0 100]\n\n            if {$::verbose} { puts \"Testing flush command: $flushcmd\"}\n            R 0 $flushcmd\n\n            # flush-like will cancel the task\n            wait_for_condition 1000 50 {\n                [string match {*canceled*} [migration_status 0 $task_id state]]\n            } else {\n                fail \"ASM task did not cancel\"\n            }\n        }\n\n        R 1 config set rdb-key-save-delay 0\n        R 0 cluster migration import 0 100\n        wait_for_asm_done\n    }\n\n    test \"CLUSTER SETSLOT command when there is a slot migration task\" {\n        # Setup slot migration test from node 0 to node 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # Cluster SETSLOT command is not allowed when there is a slot migration task\n        # on the slot. 
#0 and #1 have a migration task now.\n        foreach instance {0 1} {\n            set node_id [R $instance cluster myid]\n\n            catch {R $instance cluster setslot 0 migrating $node_id} err\n            assert_match {*in an active atomic slot migration*} $err\n\n            catch {R $instance cluster setslot 0 importing $node_id} err\n            assert_match {*in an active atomic slot migration*} $err\n\n            catch {R $instance cluster setslot 0 stable} err\n            assert_match {*in an active atomic slot migration*} $err\n\n            catch {R $instance cluster setslot 0 node $node_id} err\n            assert_match {*in an active atomic slot migration*} $err\n        }\n\n        # CLUSTER SETSLOT on another node will cancel the migration task: we update\n        # the owner of slot 0 (which is migrating from #0 to #1) to #2 on #2, and\n        # bump the config epoch to make sure the change updates the #0 and #1\n        # slot configuration, so #0 and #1 will cancel the migration task.\n        # BTW, if the config epoch is not bumped, the slot config of #2 may be\n        # updated by #0 and #1.\n        R 2 cluster bumpepoch\n        R 2 cluster setslot 0 node [R 2 cluster myid]\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 0 $task_id state]] &&\n            [string match {*slots configuration updated*} [migration_status 0 $task_id last_error]] &&\n            [string match {*canceled*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n\n        # set slot 0 back to #0\n        R 0 cluster bumpepoch\n        R 0 cluster setslot 0 node [R 0 cluster myid]\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n    }\n\n    test \"CLUSTER DELSLOTSRANGE command cancels a slot migration task\" {\n        # start slot migration from 0 to 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n      
  R 0 cluster delslotsrange 0 100\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 0 $task_id state]] &&\n            [string match {*slots configuration updated*} [migration_status 0 $task_id last_error]] &&\n            [string match {*failed*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n        R 1 cluster migration cancel id $task_id\n\n        # add the slots back\n        R 0 cluster addslotsrange 0 100\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n    }\n\n    # NOTE: this test needs more than 60s; you may want to skip it when testing\n    test \"CLUSTER FORGET command cancels a slot migration task\" {\n        R 0 config set rdb-key-save-delay 0\n        # Migrate all slots on #0 to #1, so we can forget #0\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 5461]\n        wait_for_asm_done\n\n        # start slot migration from 1 to 0\n        set task_id [setup_slot_migration_with_delay 1 0 0 5461]\n\n        # Forget #0 on #1; the migration task on #1 will be canceled because the node\n        # was deleted, and the importing task on #0 will fail\n        R 1 cluster forget [R 0 cluster myid]\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 1 $task_id state]] &&\n            [string match {*node deleted*} [migration_status 1 $task_id last_error]] &&\n            [string match {*failed*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n\n        # Add #0 back into the cluster\n        # NOTE: it will take 60s for #0 to rejoin the cluster since\n        # other nodes add #0 to a blacklist for 60s after FORGET.\n        R 1 config set rdb-key-save-delay 0\n        R 1 cluster meet \"127.0.0.1\" [lindex [R 0 config get port] 1]\n\n        # the importing task on #0 will be retried, and eventually succeed\n        # 
since now #0 is back in the cluster\n        wait_for_condition 3000 50 {\n            [string match {*completed*} [migration_status 0 $task_id state]] &&\n            [string match {*completed*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not finish\"\n        }\n\n        # make sure #0 is completely back to the cluster\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n    }\n\n    test \"CLIENT PAUSE can cancel slot migration task\" {\n        # start slot migration from 0 to 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # CLIENT PAUSE happens on the destination node, #1 will cancel the importing task\n        R 1 client pause 100000 write ;# pause 100s\n        wait_for_condition 1000 50 {\n            [string match {*canceled*} [migration_status 1 $task_id state]] &&\n            [string match {*client pause*} [migration_status 1 $task_id last_error]]\n        } else {\n            fail \"ASM task did not cancel\"\n        }\n\n        # start task again\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100]\n        after 200 ;# give some time to have chance to schedule the task\n        # the task should not start since server is paused\n        assert {[string match {*none*} [migration_status 1 $task_id state]]}\n\n        # unpause the server, the task should start\n        R 1 client unpause\n        wait_for_asm_done\n\n        # migrate back to original node #0\n        R 0 config set rdb-key-save-delay 0\n        R 1 config set rdb-key-save-delay 0\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n    }\n\n    test \"Server shutdown can cancel slot migration task, exit with success\" {\n        # start slot migration from 0 to 1\n        setup_slot_migration_with_delay 0 1 0 100\n\n        set loglines [count_log_lines -1]\n\n        # Shutdown the server, it should cancel the migration task\n        restart_server -1 true 
false true nosave\n\n        wait_for_log_messages -1 {\"*Cancelled due to server shutdown*\"} $loglines 100 100\n\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n    }\n\n    test \"Cancel import task when streaming buffer into db\" {\n        # set a delay to have time to cancel import task that is streaming buf to db\n        R 1 config set key-load-delay 50000\n        # start slot migration from 0 to 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 5]\n\n        # start the slot 0 write load on the node 0\n        set slot0_key [slot_key 0 mykey]\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 0] 100 $slot0_key 500]\n\n        # wait for entering streaming buffer state\n        wait_for_condition 1000 10 {\n            [string match {*streaming-buffer*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not enter streaming buffer state\"\n        }\n        stop_write_load $load_handle\n\n        # cancel the import task on #1, the destination node works fine\n        R 1 cluster migration cancel id $task_id\n        assert_match {*canceled*} [migration_status 1 $task_id state]\n\n        # reset config\n        R 0 config set key-load-delay 0\n        R 1 config set key-load-delay 0\n    }\n\n    test \"Destination node main channel timeout when waiting stream EOF\" {\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n        R 1 config set repl-timeout 5\n\n        # pause the source node to make EOF wait timeout. 
Do not pause\n        # the child process, so it can deliver the slot snapshot to the destination\n        set r0_process_id [S 0 process_id]\n        pause_process $r0_process_id\n\n        # the destination node will fail after 7s: 5s for the EOF wait and 2s for the slot snapshot\n        wait_for_condition 1000 20 {\n            [string match {*failed*} [migration_status 1 $task_id state]] &&\n            [string match {*Main channel*Connection timeout*wait-stream-eof*} \\\n                [migration_status 1 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n\n        # resume the source node\n        resume_process $r0_process_id\n\n        # After the source node is resumed, the task on the source node may receive\n        # ACKs from the destination and consider the task stream-done. In this case,\n        # the task on the source node will fail after several seconds\n        if {[string match {*stream-done*} [migration_status 0 $task_id state]]} {\n            wait_for_condition 1000 20 {\n                [string match {*failed*} [migration_status 0 $task_id state]] &&\n                [string match {*Write pause timeout*} [migration_status 0 $task_id last_error]]\n            } else {\n                fail \"ASM task did not fail\"\n            }\n        }\n\n        R 1 config set repl-timeout 60\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n    }\n\n    test \"Destination node rdb channel timeout when transferring slots snapshot\" {\n        # it costs 10s to transfer each key\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 2 10000000]\n        R 1 config set repl-timeout 3\n\n        # the destination node will fail after 3s\n        wait_for_condition 1000 20 {\n            [string match {*failed*} [migration_status 1 $task_id state]] &&\n            [string match {*RDB channel*Connection timeout*rdbchannel-transfer*} \\\n                [migration_status 1 
$task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n\n        R 1 config set repl-timeout 60\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n    }\n\n    test \"Source node rdb channel timeout when transferring slots snapshot\" {\n        set r1_pid [S 1 process_id]\n        R 0 flushall\n        R 0 config set save \"\"\n        # generate several large keys, making sure the memory usage is more than\n        # the socket buffer size, so the rdb channel will block and time out if\n        # no data is received by the destination.\n        set val [string repeat \"a\" 102400] ;# 100kb\n        for {set i 0} {$i < 1000} {incr i} {\n            set key [slot_key 0 \"key$i\"]\n            R 0 set $key $val\n        }\n        R 0 config set repl-timeout 3 ;# 3s for rdb channel timeout\n        R 0 config set rdb-key-save-delay 10000 ;# 1000 keys cost 10s to save\n\n        # start migration from #0 to #1\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100]\n        wait_for_condition 1000 20 {\n            [string match {*send-bulk-and-stream*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not start\"\n        }\n\n        # pause the destination node to make the rdb channel time out\n        pause_process $r1_pid\n\n        # the source node will fail: the rdb child process cannot\n        # write data to the destination, so it will time out\n        wait_for_condition 1000 30 {\n            [string match {*failed*} [migration_status 0 $task_id state]] &&\n            [string match {*RDB channel*Failed to send slots snapshot*} \\\n                [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n        resume_process $r1_pid\n\n        R 0 config set repl-timeout 60\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n    }\n\n    
test \"Source node main channel timeout when sending incremental stream\" {\n        R 0 flushall\n        R 0 config set repl-timeout 2   ;# 2s for main channel timeout\n\n        set r1_pid [S 1 process_id]\n        # in order to have time to pause the destination node\n        R 1 config set key-load-delay 50000 ;# 50ms each 16k data\n\n        # start migration from #0 to #1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # Create 200 keys of 16k size on slot 0; streaming the buffer needs 10s (200*50ms)\n        populate_slot 200 -idx 0 -slot 0 -size 16384\n\n        # wait for the streaming-buffer state, then pause the destination node\n        wait_for_condition 1000 20 {\n            [string match {*streaming-buffer*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not stream buffer, state: [migration_status 1 $task_id state]\"\n        }\n        pause_process $r1_pid\n\n        # Start the slot 0 write load on node 0\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 0] 100 [slot_key 0 mykey] 500]\n\n        # the source node will fail after several seconds (including the time\n        # to fill the source node's socket buffer): the main channel cannot\n        # write data to the destination since the destination is paused\n        wait_for_condition 1000 30 {\n            [string match {*failed*} [migration_status 0 $task_id state]] &&\n            [string match {*Main channel*Connection timeout*} \\\n                [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n        stop_write_load $load_handle\n        resume_process $r1_pid\n\n        R 0 config set repl-timeout 60\n        R 1 config set key-load-delay 0\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n        R 0 flushall\n    }\n\n    test \"Source write pause timeout\" {\n        # set timeout 
to 0, so the task will fail immediately when checking timeout\n        R 0 config set cluster-slot-migration-write-pause-timeout 0\n        R 1 debug asm-failpoint \"import-main-channel\" \"takeover\"\n\n        # start migration from node 0 to 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # start the slot 0 write load on node 0\n        set slot0_key [slot_key 0 mykey]\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 0] 100 $slot0_key]\n\n        # node 0 will fail due to write pause timeout\n        wait_for_condition 2000 10 {\n            [string match {*failed*} [migration_status 0 $task_id state]] &&\n            [string match {*Write pause timeout*} \\\n                [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n\n        stop_write_load $load_handle\n\n        # reset config\n        R 0 config set cluster-slot-migration-write-pause-timeout 10000\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n        R 1 debug asm-failpoint \"\" \"\"\n    }\n\n    test \"Sync buffer drain timeout\" {\n        # set a fail point to prevent the source node from entering the handoff-prep\n        # state, to test the sync buffer drain timeout\n        R 0 debug asm-failpoint \"migrate-main-channel\" \"handoff-prep\"\n        R 0 config set cluster-slot-migration-sync-buffer-drain-timeout 5000\n\n        set r1_pid [S 1 process_id]\n\n        # start migration from node 0 to 1\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        # start the slot 0 write load on node 0\n        set slot0_key [slot_key 0 mykey]\n        set load_handle [start_write_load \"127.0.0.1\" [get_port 0] 100 $slot0_key]\n\n        # wait for entering the wait-stream-eof state\n        wait_for_condition 1000 10 {\n            [string match {*wait-stream-eof*} [migration_status 1 $task_id state]]\n        } else {\n  
          fail \"ASM task did not enter wait-stream-eof state\"\n        }\n\n        pause_process $r1_pid ;# prevent the destination from applying commands\n\n        # node 0 will fail due to sync buffer drain timeout\n        wait_for_condition 2000 10 {\n            [string match {*failed*} [migration_status 0 $task_id state]] &&\n            [string match {*Sync buffer drain timeout*} \\\n                [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n\n        stop_write_load $load_handle\n        resume_process $r1_pid\n\n        # reset config\n        R 0 config set cluster-slot-migration-sync-buffer-drain-timeout 60000\n        R 0 debug asm-failpoint \"\" \"\"\n        R 0 cluster migration cancel id $task_id\n        R 1 cluster migration cancel id $task_id\n    }\n\n    test \"Cluster implementation cannot start migrate task temporarily\" {\n        # Inject a fail point to make the source node not ready\n        R 0 debug asm-failpoint \"migrate-main-channel\" \"none\"\n\n        # start migration from node 0 to 1\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100]\n\n        # verify the source node replies to SYNCSLOTS with -NOTREADY\n        set loglines [count_log_lines -1]\n        wait_for_log_messages -1 {\"*Source node replied to SYNCSLOTS SYNC with -NOTREADY, will retry later*\"} $loglines 100 100\n\n        # clear the fail point and verify the task is completed\n        R 0 debug asm-failpoint \"\" \"\"\n        wait_for_asm_done\n        assert_equal \"completed\" [migration_status 0 $task_id state]\n        assert_equal \"completed\" [migration_status 1 $task_id state]\n\n        # cleanup\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n    }\n}\n\nstart_cluster 3 3 {tags {external:skip cluster} overrides {cluster-node-timeout 60000 cluster-allow-replica-migration no}} {\n    test \"Test bgtrim after a successful migration\" {\n        R 0 debug 
asm-trim-method bg\n        R 3 debug asm-trim-method bg\n        R 0 CONFIG RESETSTAT\n        R 3 CONFIG RESETSTAT\n\n        R 0 flushall\n        # Fill slot 0\n        populate_slot 1000 -idx 0 -slot 0\n        # Fill slot 1 with keys that have TTL\n        populate_slot 1000 -idx 0 -slot 1 -prefix \"expirekey\" -expires 100\n        # HFE key on slot 2\n        set slot2_hfekey [slot_key 2 hfekey]\n        R 0 HSETEX $slot2_hfekey EX 10 FIELDS 1 f1 v1\n\n        # Fill slot 101; these keys won't be migrated\n        populate_slot 1000 -idx 0 -slot 101\n        # Fill slot 102 with keys that have TTL\n        populate_slot 1000 -idx 0 -slot 102 -prefix \"expirekey\" -expires 100\n        # HFE key on slot 103\n        set slot103_hfekey [slot_key 103 hfekey]\n        R 0 HSETEX $slot103_hfekey EX 10 FIELDS 1 f1 v1\n\n        # migrate slots 0-100 to node-1\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n\n        # Verify the data is migrated\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n        assert_equal 2001 [R 0 dbsize]\n        assert_equal 2001 [R 3 dbsize]\n        wait_for_ofs_sync [Rn 1] [Rn 4]\n        assert_equal 2001 [R 1 dbsize]\n        assert_equal 2001 [R 4 dbsize]\n\n        # Verify the keys are trimmed lazily\n        wait_for_condition 1000 10 {\n            [S 0 lazyfreed_objects] == 2001 &&\n            [S 3 lazyfreed_objects] == 2001\n        } else {\n            puts \"lazyfreed_objects: [S 0 lazyfreed_objects] [S 3 lazyfreed_objects]\"\n            fail \"Background trim did not happen\"\n        }\n\n        # Cleanup\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        R 0 flushall\n        R 0 debug asm-trim-method default\n        R 3 debug asm-trim-method default\n    }\n\n    test \"Test bgtrim after a failed migration\" {\n        R 0 debug asm-trim-method bg\n        R 3 debug asm-trim-method bg\n        R 1 CONFIG RESETSTAT\n        R 4 CONFIG RESETSTAT\n\n        # Fill slot 0 on 
node-0 and migrate it to node-1 (with some delay)\n        R 0 flushall\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 10000 1000]\n        after 1000 ;# wait some time so that some keys are moved\n\n        # Fail the migration\n        R 1 CLUSTER MIGRATION CANCEL ID $task_id\n        wait_for_asm_done\n\n        # Verify the data is not migrated\n        assert_equal 10000 [R 0 dbsize]\n        assert_equal 10000 [R 3 dbsize]\n\n        # Verify the keys are trimmed lazily after a failed import on dest side.\n        wait_for_condition 1000 20 {\n            [R 1 dbsize] == 0 &&\n            [R 4 dbsize] == 0 &&\n            [S 1 lazyfreed_objects] > 0 &&\n            [S 4 lazyfreed_objects] > 0\n        } else {\n            fail \"Background trim did not happen\"\n        }\n\n        # Cleanup\n        wait_for_asm_done\n        R 0 flushall\n        R 0 debug asm-trim-method default\n        R 3 debug asm-trim-method default\n    }\n\n    test \"Test bgtrim unblocks stream client\" {\n        # Two clients waiting for data on two different streams which are in\n        # different slots. We are going to migrate one slot, which will unblock\n        # the client. 
The other client should still be blocked.\n        R 0 debug asm-trim-method bg\n\n        set key0 [slot_key 0 mystream]\n        set key1 [slot_key 1 mystream]\n\n        # First client waits on slot-0 key\n        R 0 DEL $key0\n        R 0 XADD $key0 666 f v\n        R 0 XGROUP CREATE $key0 mygroup $\n        set rd0 [redis_deferring_client]\n        $rd0 XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS $key0 \">\"\n        wait_for_blocked_clients_count 1\n\n        # Second client waits on slot-1 key\n        R 0 DEL $key1\n        R 0 XADD $key1 666 f v\n        R 0 XGROUP CREATE $key1 mygroup $\n        set rd1 [redis_deferring_client]\n        $rd1 XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS $key1 \">\"\n        wait_for_blocked_clients_count 2\n\n        # Migrate slot 0\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n\n        # First client should get MOVED error\n        assert_error \"*MOVED*\" {$rd0 read}\n        $rd0 close\n\n        # Second client should operate normally\n        R 0 XADD $key1 667 f v\n        set res [$rd1 read]\n        assert_equal [lindex $res 0 1 0] {667-0 {f v}}\n        $rd1 close\n\n        # cleanup\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        R 0 flushall\n        R 0 debug asm-trim-method default\n    }\n\n    test \"Test bgtrim touches watched keys\" {\n        R 0 debug asm-trim-method bg\n\n        # bgtrim should touch watched keys on migrated slots\n        set key0 [slot_key 0 key]\n        R 0 set $key0 30\n        R 0 watch $key0\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        R 0 multi\n        R 0 ping\n        assert_equal {} [R 0 exec]\n\n        # bgtrim should not touch watched keys on other slots\n        set key2 [slot_key 2 key]\n        R 0 set $key2 30\n        R 0 watch $key2\n        R 1 CLUSTER MIGRATION IMPORT 1 1\n        wait_for_asm_done\n        R 0 multi\n        R 0 ping\n       
 assert_equal PONG [R 0 exec]\n\n        # cleanup\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_asm_done\n        R 0 flushall\n        R 0 debug asm-trim-method default\n    }\n\n    test \"Test bgtrim after a FAILOVER on destination side\" {\n        R 1 debug asm-trim-method bg\n        R 4 debug asm-trim-method bg\n\n        set loglines [count_log_lines -4]\n\n        # Fill slot 0 on node-0 and migrate it to node-1 (with some delay)\n        R 0 flushall\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 10000 1000]\n        after 1000 ;# wait some time so that some keys are moved\n\n        # Trigger a failover with force to simulate unreachable master and\n        # verify unowned keys are trimmed once replica becomes master.\n        failover_and_wait_for_done 4 force\n        wait_for_log_messages -4 {\"*Detected keys in slots that do not belong*Scheduling trim*\"} $loglines 1000 10\n        wait_for_condition 1000 10 {\n            [R 1 dbsize] == 0 &&\n            [R 4 dbsize] == 0\n        } else {\n            fail \"Background trim did not happen\"\n        }\n\n        # cleanup\n        wait_for_cluster_propagation\n        failover_and_wait_for_done 1\n        R 0 config set rdb-key-save-delay 0\n        R 1 debug asm-trim-method default\n        R 4 debug asm-trim-method default\n        wait_for_asm_done\n    }\n\n    test \"CLUSTER SETSLOT is not allowed if there is a pending trim job\" {\n        R 0 debug asm-trim-method bg\n        R 3 debug asm-trim-method bg\n\n        # Fill slot 0 on node-0 and migrate it to node-1 (with some delay)\n        R 0 flushall\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 10000 1000]\n\n        # Pause will cancel the task and there will be a pending trim job\n        # until writes are allowed again.\n        R 1 client pause 100000 write ;# pause 100s\n        wait_for_asm_done\n\n        # CLUSTER SETSLOT is not allowed if there is a 
pending trim job.\n        assert_error {*There is a pending trim job for slot 0*} {R 1 CLUSTER SETSLOT 0 STABLE}\n\n        # Unpause the server, trim will be triggered and SETSLOT will be allowed\n        R 1 client unpause\n        R 1 CLUSTER SETSLOT 0 STABLE\n    }\n}\n\nstart_cluster 3 3 {tags {external:skip cluster} overrides {cluster-node-timeout 60000 cluster-allow-replica-migration no save \"\"}} {\n    test \"Test active trim after a successful migration\" {\n        R 0 debug asm-trim-method active\n        R 3 debug asm-trim-method active\n        populate_slot 500 -slot 0\n        populate_slot 500 -slot 1\n        populate_slot 500 -slot 3\n        populate_slot 500 -slot 4\n\n        # Migrate 1500 keys\n        R 1 CLUSTER MIGRATION IMPORT 0 1 3 3\n        wait_for_asm_done\n\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_current_job_trimmed] == 1500 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 0 &&\n            [CI 3 cluster_slot_migration_active_trim_current_job_trimmed] == 1500\n        } else {\n            fail \"trim failed\"\n        }\n\n        assert_equal 1500 [CI 0 cluster_slot_migration_active_trim_current_job_keys]\n        assert_equal 1500 [CI 3 cluster_slot_migration_active_trim_current_job_keys]\n\n        assert_equal 500 [R 0 dbsize]\n        assert_equal 500 [R 3 dbsize]\n        assert_equal 1500 [R 1 dbsize]\n        assert_equal 1500 [R 4 dbsize]\n        assert_equal 0 [R 0 cluster countkeysinslot 0]\n        assert_equal 0 [R 0 cluster countkeysinslot 1]\n        assert_equal 0 [R 0 cluster countkeysinslot 3]\n        assert_equal 500 [R 0 cluster countkeysinslot 4]\n\n        # cleanup\n        R 0 debug asm-trim-method default\n        R 3 debug asm-trim-method default\n        R 0 CLUSTER MIGRATION IMPORT 0 1 3 3\n      
  wait_for_asm_done\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test multiple active trim jobs can be scheduled\" {\n        # Active trim will be scheduled but it won't run\n        R 0 debug asm-trim-method active -1\n        R 3 debug asm-trim-method active -1\n\n        populate_slot 500 -slot 0\n        populate_slot 500 -slot 1\n        populate_slot 500 -slot 3\n        populate_slot 500 -slot 4\n\n        # Migrate 1000 keys\n        R 1 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            fail \"migrate failed\"\n        }\n\n        # Migrate another slot and verify there are two trim tasks on the source\n        R 1 CLUSTER MIGRATION IMPORT 3 3\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 2 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 2\n        } else {\n            fail \"migrate failed\"\n        }\n\n        # Enable active trim and wait until it completes.\n        R 0 debug asm-trim-method active 0\n        R 3 debug asm-trim-method active 0\n        wait_for_asm_done\n\n        assert_equal 500 [R 0 dbsize]\n        assert_equal 500 [R 3 dbsize]\n        assert_equal 0 [R 0 cluster countkeysinslot 0]\n        assert_equal 0 [R 0 cluster countkeysinslot 1]\n        assert_equal 0 [R 0 cluster countkeysinslot 3]\n        assert_equal 500 [R 0 cluster countkeysinslot 4]\n\n        # cleanup\n        R 0 debug asm-trim-method default\n        R 3 debug asm-trim-method default\n        R 0 CLUSTER MIGRATION IMPORT 0 1 3 3\n        wait_for_asm_done\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test active-trim clears 
partially imported keys on cancel\" {\n        R 1 debug asm-trim-method active\n        R 4 debug asm-trim-method active\n\n        # Rdb delivery will take 10 seconds\n        R 0 config set rdb-key-save-delay 10000\n        populate_slot 250 -slot 0\n        populate_slot 250 -slot 1\n        populate_slot 250 -slot 3\n        populate_slot 250 -slot 4\n\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        after 2000\n        R 1 CLUSTER MIGRATION CANCEL ALL\n        wait_for_asm_done\n\n        assert_morethan [CI 1 cluster_slot_migration_active_trim_current_job_keys] 0\n        assert_morethan [CI 4 cluster_slot_migration_active_trim_current_job_trimmed] 0\n\n        assert_equal 1000 [R 0 dbsize]\n        assert_equal 1000 [R 3 dbsize]\n        assert_equal 0 [R 1 dbsize]\n        assert_equal 0 [R 4 dbsize]\n\n        # Cleanup\n        R 1 debug asm-trim-method default\n        R 4 debug asm-trim-method default\n        R 0 config set rdb-key-save-delay 0\n    }\n\n    test \"Test active-trim clears partially imported keys on failover\" {\n        R 1 debug asm-trim-method active\n        R 4 debug asm-trim-method active\n\n        # Rdb delivery will take 10 seconds\n        R 0 config set rdb-key-save-delay 10000\n\n        populate_slot 250 -slot 0\n        populate_slot 250 -slot 1\n        populate_slot 250 -slot 3\n        populate_slot 250 -slot 4\n\n        set prev_trim_started_1 [CI 1 cluster_slot_migration_stats_active_trim_started]\n        set prev_trim_started_4 [CI 4 cluster_slot_migration_stats_active_trim_started]\n\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        after 2000\n        failover_and_wait_for_done 4\n        wait_for_asm_done\n\n        # Verify there is at least one trim job started\n        assert_morethan [CI 1 cluster_slot_migration_stats_active_trim_started] $prev_trim_started_1\n        assert_morethan [CI 4 cluster_slot_migration_stats_active_trim_started] $prev_trim_started_4\n\n        assert_equal 1000 [R 0 
dbsize]\n        assert_equal 1000 [R 3 dbsize]\n        assert_equal 0 [R 1 dbsize]\n        assert_equal 0 [R 4 dbsize]\n\n        # Cleanup\n        failover_and_wait_for_done 1\n        R 1 debug asm-trim-method default\n        R 4 debug asm-trim-method default\n        R 0 config set rdb-key-save-delay 0\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test import task does not start if active trim is in progress for the same slots\" {\n        # Active trim will be scheduled but it won't run\n        R 0 flushall\n        R 1 flushall\n        R 0 debug asm-trim-method active -1\n\n        populate_slot 500 -slot 0\n        populate_slot 500 -slot 1\n\n        # Migrate 1000 keys\n        R 1 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            fail \"migrate failed\"\n        }\n\n        # Try to migrate slots back\n        R 0 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_log_messages 0 {\"*Can not start import task*trim in progress for some of the slots*\"} 0 1000 10\n\n        # Enable active trim and verify slots are imported back\n        R 0 debug asm-trim-method active 0\n        wait_for_asm_done\n\n        assert_equal 1000 [R 0 dbsize]\n        assert_equal 500 [R 0 cluster countkeysinslot 0]\n        assert_equal 500 [R 0 cluster countkeysinslot 1]\n\n        # cleanup\n        R 0 debug asm-trim-method default\n        R 0 flushall\n    }\n\n    test \"Rdb save during active trim should skip keys in trimmed slots\" {\n        # Insert some delay so active trim is still in progress during the save\n        R 0 debug asm-trim-method active 1000\n        R 0 config set repl-diskless-sync-delay 0\n        R 0 flushall\n\n        populate_slot 5000 -idx 0 -slot 0\n        populate_slot 5000 -idx 0 -slot 1\n        populate_slot 5000 -idx 0 -slot 2\n\n        # Start migration and wait until 
trim is in progress\n        R 1 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1 &&\n            [S 0 rdb_bgsave_in_progress] == 0\n        } else {\n            puts \"[CI 0 cluster_slot_migration_active_tasks]\"\n            puts \"[CI 0 cluster_slot_migration_active_trim_running]\"\n            fail \"trim failed\"\n        }\n\n        # Trigger save during active trim\n        R 0 save\n        # Wait until the log contains a \"keys skipped\" message with a non-zero value\n        wait_for_log_messages 0 {\"*BGSAVE done, 5000 keys saved, [1-9]* keys skipped*\"} 0 1000 10\n\n        restart_server 0 yes no yes nosave\n        assert_equal 5000 [R 0 dbsize]\n        assert_equal 0 [R 0 cluster countkeysinslot 0]\n        assert_equal 0 [R 0 cluster countkeysinslot 1]\n        assert_equal 5000 [R 0 cluster countkeysinslot 2]\n\n        # Cleanup\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n        R 0 flushall\n        R 1 flushall\n        R 0 save\n        R 0 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_asm_done\n    }\n\n    test \"AOF rewrite during active trim should skip keys in trimmed slots\" {\n        R 0 debug asm-trim-method active 1000\n        R 0 config set repl-diskless-sync-delay 0\n        R 0 config set aof-use-rdb-preamble no\n        R 0 config set appendonly yes\n        R 0 config rewrite\n        R 0 flushall\n        populate_slot 5000 -idx 0 -slot 0\n        populate_slot 5000 -idx 0 -slot 1\n        populate_slot 5000 -idx 0 -slot 2\n\n        R 1 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            puts \"[CI 0 cluster_slot_migration_active_tasks]\"\n            puts \"[CI 0 
cluster_slot_migration_active_trim_running]\"\n            fail \"trim failed\"\n        }\n\n        wait_for_condition 50 100 {\n            [S 0 rdb_bgsave_in_progress] == 0\n        } else {\n            fail \"bgsave is in progress\"\n        }\n\n        R 0 bgrewriteaof\n        # Wait until the log contains a \"keys skipped\" message with a non-zero value\n        wait_for_log_messages 0 {\"*AOF rewrite done, [1-9]* keys saved, [1-9]* keys skipped*\"} 0 1000 10\n\n        restart_server 0 yes no yes nosave\n        assert_equal 5000 [R 0 dbsize]\n        assert_equal 0 [R 0 cluster countkeysinslot 0]\n        assert_equal 0 [R 0 cluster countkeysinslot 1]\n        assert_equal 5000 [R 0 cluster countkeysinslot 2]\n\n        # cleanup\n        R 0 config set appendonly no\n        R 0 config rewrite\n        restart_server 0 yes no yes nosave\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n        R 0 flushall\n        R 1 flushall\n        R 0 save\n        R 0 CLUSTER MIGRATION IMPORT 0 1\n        wait_for_asm_done\n    }\n\n    test \"Pause actions will stop active trimming\" {\n        R 0 debug asm-trim-method active 1000\n        R 0 config set repl-diskless-sync-delay 0\n        R 0 flushall\n        populate_slot 10000 -idx 0 -slot 0\n\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            puts \"[CI 0 cluster_slot_migration_active_tasks]\"\n            puts \"[CI 0 cluster_slot_migration_active_trim_running]\"\n            fail \"trim failed\"\n        }\n\n        # Pause the server and verify no keys are trimmed\n        R 0 client pause 100000 write ;# pause 100s\n        set prev [CI 0 cluster_slot_migration_active_trim_current_job_trimmed]\n        after 1000 ; # wait some time to see if any keys are trimmed\n        set curr 
[CI 0 cluster_slot_migration_active_trim_current_job_trimmed]\n        assert_equal $prev $curr\n\n        R 0 client unpause\n        R 0 debug asm-trim-method default\n        wait_for_asm_done\n        assert_equal 0 [R 0 dbsize]\n\n        # revert\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        assert_equal 10000 [R 0 dbsize]\n    }\n\n    foreach diskless_load {\"disabled\" \"swapdb\" \"on-empty-db\"} {\n        test \"Test fullsync cancels active trim (repl-diskless-load $diskless_load)\" {\n            R 3 debug asm-trim-method active -10\n            R 3 config set repl-diskless-load $diskless_load\n            R 0 flushall\n\n            R 0 config set repl-diskless-sync-delay 0\n            populate_slot 10000 -idx 0 -slot 0\n\n            R 1 CLUSTER MIGRATION IMPORT 0 0\n            wait_for_condition 1000 10 {\n                [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n                [CI 0 cluster_slot_migration_active_trim_running] == 0 &&\n                [CI 3 cluster_slot_migration_active_trim_running] == 1\n            } else {\n                puts \"[CI 0 cluster_slot_migration_active_tasks]\"\n                puts \"[CI 0 cluster_slot_migration_active_trim_running]\"\n                puts \"[CI 3 cluster_slot_migration_active_trim_running]\"\n                fail \"trim failed\"\n            }\n\n            set prev_cancelled [CI 3 cluster_slot_migration_stats_active_trim_cancelled]\n            R 0 config set client-output-buffer-limit \"replica 1024 0 0\"\n\n            # Trigger a fullsync\n            populate_slot 1 -idx 0 -size 2000000 -slot 2\n\n            wait_for_condition 1000 10 {\n                [CI 3 cluster_slot_migration_active_trim_running] == 0 &&\n                [CI 3 cluster_slot_migration_stats_active_trim_cancelled] == $prev_cancelled + 1\n            } else {\n                puts \"[CI 3 cluster_slot_migration_active_trim_running]\"\n                puts \"[CI 3 
cluster_slot_migration_stats_active_trim_cancelled]\"\n                fail \"trim failed\"\n            }\n\n            R 3 debug asm-trim-method active 0\n            R 3 config set repl-diskless-load disabled\n            R 0 CLUSTER MIGRATION IMPORT 0 0\n            wait_for_asm_done\n            wait_for_ofs_sync [Rn 0] [Rn 3]\n            assert_equal 10001 [R 0 dbsize]\n            assert_equal 10001 [R 3 dbsize]\n            assert_equal 0 [R 1 dbsize]\n            assert_equal 0 [R 4 dbsize]\n            R 0 flushall\n        }\n    }\n\n    test \"Test importing slots while active-trim is in progress for the same slots on replica\" {\n        R 3 debug asm-trim-method active 10000\n        R 0 flushall\n        populate_slot 10000 -slot 0\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n\n        # Wait until active trim is in progress on replica\n        R 1 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 0 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            puts \"[CI 0 cluster_slot_migration_active_tasks]\"\n            puts \"[CI 0 cluster_slot_migration_active_trim_running]\"\n            puts \"[CI 3 cluster_slot_migration_active_trim_running]\"\n            fail \"trim failed\"\n        }\n\n        set loglines [count_log_lines -3]\n\n        # Get slots back\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_condition 1000 20 {\n            [CI 0 cluster_slot_migration_active_tasks] == 1 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 0 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            fail \"trim failed\"\n        }\n\n        # Verify replica blocks master until trim is done\n        wait_for_log_messages -3 {\"*Blocking master client until trim job is done*\"} $loglines 1000 30\n        R 3 debug 
asm-trim-method active 0\n        wait_for_log_messages -3 {\"*Unblocking master client after active trim*\"} $loglines 1000 30\n\n        wait_for_asm_done\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n        assert_equal 10000 [R 0 dbsize]\n        assert_equal 10000 [R 3 dbsize]\n        assert_equal 0 [R 1 dbsize]\n        assert_equal 0 [R 4 dbsize]\n    }\n\n    test \"TRIMSLOTS should not trim slots that this node is serving\" {\n        assert_error {*the slot 0 is served by this node*} {R 0 trimslots ranges 1 0 0}\n        assert_error {*READONLY*} {R 3 trimslots ranges 1 0 100}\n        assert_equal {OK} [R 0 trimslots ranges 1 16383 16383]\n        assert_error {*READONLY*} {R 3 trimslots ranges 1 16383 16383}\n    }\n\n    test \"Trigger multiple active trim jobs at the same time\" {\n        R 1 debug asm-trim-method active 0\n        R 1 flushall\n\n        set prev_trim_done [CI 1 cluster_slot_migration_stats_active_trim_completed]\n\n        R 1 debug populate 1000 [slot_prefix 0] 100\n        R 1 debug populate 1000 [slot_prefix 1] 100\n        R 1 debug populate 1000 [slot_prefix 2] 100\n\n        R 1 multi\n        R 1 trimslots ranges 1 0 0\n        R 1 trimslots ranges 1 1 1\n        R 1 trimslots ranges 1 2 2\n        R 1 exec\n\n        wait_for_condition 1000 10 {\n            [CI 1 cluster_slot_migration_stats_active_trim_completed] == $prev_trim_done + 3\n        } else {\n            fail \"active trim failed\"\n        }\n\n        R 1 flushall\n        R 1 debug asm-trim-method default\n    }\n\n    test \"Restart will clean up unowned slot keys\" {\n        R 1 flushall\n\n        # generate 1000 keys belonging to slot 0\n        R 1 debug populate 1000 [slot_prefix 0] 100\n        assert {[scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 1000}\n\n        # restart node-1\n        restart_server -1 true false true save\n        wait_for_cluster_propagation\n        wait_for_cluster_state \"ok\"\n\n        # Node-1 has no 
keys since unowned slot 0 keys were cleaned up during restart\n        assert {[scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] == {}}\n\n        R 1 flushall\n    }\n\n    test \"Test active trim is used when client tracking is used\" {\n        R 0 flushall\n        R 1 flushall\n        R 0 debug asm-trim-method default\n        R 1 debug asm-trim-method default\n\n        set prev_active_trim [CI 0 cluster_slot_migration_stats_active_trim_completed]\n\n        # Setup a tracking client that is redirected to a pubsub client\n        set rd_redirection [redis_deferring_client]\n        $rd_redirection client id\n        set redir_id [$rd_redirection read]\n        $rd_redirection subscribe __redis__:invalidate\n        $rd_redirection read ; # Consume the SUBSCRIBE reply.\n\n        # setup tracking\n        set key0 [slot_key 0 key]\n        R 0 CLIENT TRACKING on REDIRECT $redir_id\n        R 0 SET $key0 1\n        R 0 GET $key0\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_stats_active_trim_completed] == [expr $prev_active_trim + 1]\n        } else {\n            fail \"active trim did not happen\"\n        }\n\n        # Verify the tracking client received the invalidation message\n        set msg [$rd_redirection read]\n        set head [lindex $msg 0]\n\n        if {$head eq \"message\"} {\n            # RESP 2\n            set got_key [lindex [lindex $msg 2] 0]\n        } elseif {$head eq \"invalidate\"} {\n            # RESP 3\n            set got_key [lindex $msg 1 0]\n        } else {\n            fail \"unexpected invalidation message: $msg\"\n        }\n        assert_equal $got_key $key0\n\n        # cleanup\n        $rd_redirection close\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        R 0 flushall\n    }\n}\n\nset testmodule [file normalize 
tests/modules/atomicslotmigration.so]\n\nstart_cluster 3 6 [list tags {external:skip cluster modules} config_lines [list loadmodule $testmodule cluster-node-timeout 60000 cluster-allow-replica-migration no]] {\n    test \"Module api sanity\" {\n        R 0 asm.sanity ;# on master\n        R 3 asm.sanity ;# on replica\n    }\n\n    test \"Module replicate cross slot command\" {\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n        set listkey [slot_key 0 \"asmlist\"]\n        # replicate cross slot command during migrating\n        R 0 asm.lpush_replicate_crossslot_command $listkey \"item1\"\n\n        # node 0 will fail due to cross slot\n        wait_for_condition 2000 10 {\n            [string match {*canceled*} [migration_status 0 $task_id state]] &&\n            [string match {*cross slot*} [migration_status 0 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n        R 1 CLUSTER MIGRATION CANCEL ID $task_id\n\n        # sanity check if lpush replicated correctly to the replica\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n        assert_equal {item1} [R 0 lrange $listkey 0 -1]\n        R 3 readonly\n        assert_equal {item1} [R 3 lrange $listkey 0 -1]\n    }\n\n    test \"Test RM_ClusterCanAccessKeysInSlot\" {\n        # Test invalid slots\n        assert_equal 0 [R 0 asm.cluster_can_access_keys_in_slot -1]\n        assert_equal 0 [R 0 asm.cluster_can_access_keys_in_slot 20000]\n        assert_equal 0 [R 2 asm.cluster_can_access_keys_in_slot 16384]\n        assert_equal 0 [R 5 asm.cluster_can_access_keys_in_slot 16384]\n\n        # Test on a master-replica pair\n        assert_equal 1 [R 0 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 1 [R 0 asm.cluster_can_access_keys_in_slot 100]\n        assert_equal 1 [R 3 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 1 [R 3 asm.cluster_can_access_keys_in_slot 100]\n\n        # Test on a master-replica pair\n        assert_equal 1 
[R 2 asm.cluster_can_access_keys_in_slot 16383]\n        assert_equal 1 [R 5 asm.cluster_can_access_keys_in_slot 16383]\n    }\n\n    test \"Test RM_ClusterCanAccessKeysInSlot returns false for unowned slots\" {\n        # Active trim will be scheduled but it won't run\n        R 0 debug asm-trim-method active -1\n        R 3 debug asm-trim-method active -1\n\n        setup_slot_migration_with_delay 0 1 0 100 3 1000000\n\n        # Verify importing slots are not local\n        assert_equal 0 [R 1 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 0 [R 1 asm.cluster_can_access_keys_in_slot 100]\n        assert_equal 0 [R 4 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 0 [R 4 asm.cluster_can_access_keys_in_slot 100]\n\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1 &&\n            [CI 3 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            fail \"migrate failed\"\n        }\n\n        # Wait for config propagation before checking the slot ownership on replica\n        wait_for_cluster_propagation\n\n        # Verify slots that are being trimmed are not local\n        assert_equal 0 [R 0 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 0 [R 0 asm.cluster_can_access_keys_in_slot 100]\n        assert_equal 0 [R 3 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 0 [R 3 asm.cluster_can_access_keys_in_slot 100]\n\n        # Enabled active trim and wait until it is completed.\n        R 0 debug asm-trim-method active 0\n        R 3 debug asm-trim-method active 0\n        wait_for_asm_done\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n\n        # Verify slots are local after migration\n        assert_equal 1 [R 1 asm.cluster_can_access_keys_in_slot 0]\n        assert_equal 1 [R 1 asm.cluster_can_access_keys_in_slot 100]\n        assert_equal 1 [R 4 asm.cluster_can_access_keys_in_slot 
0]\n        assert_equal 1 [R 4 asm.cluster_can_access_keys_in_slot 100]\n\n        # cleanup\n        R 0 debug asm-trim-method default\n        R 3 debug asm-trim-method default\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        R 0 flushall\n        R 1 flushall\n    }\n\n    foreach trim_method {\"active\" \"bg\"} {\n        test \"Test cluster module notifications on a successful migration ($trim_method-trim)\" {\n            clear_module_event_log\n            R 0 debug asm-trim-method $trim_method\n            R 3 debug asm-trim-method $trim_method\n            R 6 debug asm-trim-method $trim_method\n\n            # Set a key in the slot range\n            set key [slot_key 0 mykey]\n            R 0 set $key \"value\"\n\n            # Migrate the slot ranges\n            set task_id [R 1 CLUSTER MIGRATION IMPORT 0 100 200 300]\n            wait_for_asm_done\n\n            set src_id [R 0 cluster myid]\n            set dest_id [R 1 cluster myid]\n\n            # Verify the events on source, both master and replica\n            set migrate_event_log [list \\\n                \"sub: cluster-slot-migration-migrate-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100,200-300\" \\\n                \"sub: cluster-slot-migration-migrate-completed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100,200-300\" \\\n            ]\n            assert_equal [R 0 asm.get_cluster_event_log] $migrate_event_log\n            assert_equal [R 3 asm.get_cluster_event_log] {}\n            assert_equal [R 6 asm.get_cluster_event_log] {}\n\n            # Verify the events on destination, both master and replica\n            set import_event_log [list \\\n                \"sub: cluster-slot-migration-import-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100,200-300\" \\\n                \"sub: cluster-slot-migration-import-completed, 
source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100,200-300\" \\\n            ]\n            wait_for_condition 500 20 {\n                [R 1 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 4 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 7 asm.get_cluster_event_log] eq $import_event_log\n            } else {\n                puts \"R1: [R 1 asm.get_cluster_event_log]\"\n                puts \"R4: [R 4 asm.get_cluster_event_log]\"\n                puts \"R7: [R 7 asm.get_cluster_event_log]\"\n                fail \"ASM import event not received\"\n            }\n\n            # Verify the trim events\n            if {$trim_method eq \"active\"} {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-started, slots:0-100,200-300\" \\\n                    \"keyspace: key_trimmed, key: $key\" \\\n                    \"sub: cluster-slot-migration-trim-completed, slots:0-100,200-300\" \\\n                ]\n            } else {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-background, slots:0-100,200-300\" \\\n                ]\n            }\n            wait_for_condition 500 10 {\n                [R 0 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 3 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 6 asm.get_cluster_trim_event_log] eq $trim_event_log\n            } else {\n                fail \"ASM source trim event not received\"\n            }\n\n            # cleanup\n            R 0 CLUSTER MIGRATION IMPORT 0 100 200 300\n            wait_for_asm_done\n            clear_module_event_log\n            reset_default_trim_method\n            R 0 flushall\n            R 1 flushall\n        }\n\n        test \"Test cluster module notifications on a failed migration ($trim_method-trim)\" {\n            clear_module_event_log\n            R 
1 debug asm-trim-method $trim_method\n            R 4 debug asm-trim-method $trim_method\n            R 7 debug asm-trim-method $trim_method\n\n            # Set a key in the slot range\n            set key [slot_key 0 mykey]\n            R 0 set $key \"value\"\n\n            # Start migration and cancel it\n            set task_id [setup_slot_migration_with_delay 0 1 0 100 0 2000000]\n            # Wait until at least one key is moved to destination\n            wait_for_condition 1000 10 {\n                [scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 1\n            } else {\n                fail \"Key not moved to destination\"\n            }\n            R 1 CLUSTER MIGRATION CANCEL ID $task_id\n            wait_for_asm_done\n\n            set src_id [R 0 cluster myid]\n            set dest_id [R 1 cluster myid]\n\n            # Verify the events on source, both master and replica\n            set migrate_event_log [list \\\n                \"sub: cluster-slot-migration-migrate-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n                \"sub: cluster-slot-migration-migrate-failed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            ]\n            assert_equal [R 0 asm.get_cluster_event_log] $migrate_event_log\n            assert_equal [R 3 asm.get_cluster_event_log] {}\n            assert_equal [R 6 asm.get_cluster_event_log] {}\n\n            # Verify the events on destination, both master and replica\n            set import_event_log [list \\\n                \"sub: cluster-slot-migration-import-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n                \"sub: cluster-slot-migration-import-failed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            ]\n            wait_for_condition 500 10 {\n                [R 1 
asm.get_cluster_event_log] eq $import_event_log &&\n                [R 4 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 7 asm.get_cluster_event_log] eq $import_event_log\n            } else {\n                fail \"ASM import event not received\"\n            }\n\n            # Verify the trim events on destination (partially imported keys are trimmed)\n            if {$trim_method eq \"active\"} {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-started, slots:0-100\" \\\n                    \"keyspace: key_trimmed, key: $key\" \\\n                    \"sub: cluster-slot-migration-trim-completed, slots:0-100\" \\\n                ]\n            } else {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-background, slots:0-100\" \\\n                ]\n            }\n            wait_for_condition 500 10 {\n                [R 1 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 4 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 7 asm.get_cluster_trim_event_log] eq $trim_event_log\n            } else {\n                fail \"ASM destination trim event not received\"\n            }\n\n            # cleanup\n            clear_module_event_log\n            reset_default_trim_method\n            wait_for_asm_done\n            R 0 flushall\n            R 1 flushall\n        }\n\n        test \"Test cluster module notifications on failover ($trim_method-trim)\" {\n            # NOTE: legacy cluster mode may have a bug where multiple manual failovers\n            # fail, so we only perform one round of the failover test; fix it later\n            if {$trim_method eq \"bg\"} {\n            clear_module_event_log\n            R 1 debug asm-trim-method $trim_method\n            R 4 debug asm-trim-method $trim_method\n            R 7 debug asm-trim-method $trim_method\n\n            # Set a key in the slot range\n    
        set key [slot_key 0 mykey]\n            R 0 set $key \"value\"\n\n            # Start migration\n            set task_id [setup_slot_migration_with_delay 0 1 0 100 0 2000000]\n            # Wait until at least one key is moved to destination\n            wait_for_condition 1000 10 {\n                [scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 1\n            } else {\n                fail \"Key not moved to destination\"\n            }\n\n            failover_and_wait_for_done 4\n            wait_for_asm_done\n\n            set src_id [R 0 cluster myid]\n            set dest_id [R 1 cluster myid]\n\n            # Verify the events on source, both master and replica\n            set migrate_event_log [list \\\n                \"sub: cluster-slot-migration-migrate-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n                \"sub: cluster-slot-migration-migrate-failed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            ]\n            assert_equal [R 0 asm.get_cluster_event_log] $migrate_event_log\n            assert_equal [R 3 asm.get_cluster_event_log] {}\n            assert_equal [R 6 asm.get_cluster_event_log] {}\n\n            # Verify the events on destination, both master and replica\n            set import_event_log [list \\\n                \"sub: cluster-slot-migration-import-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n                \"sub: cluster-slot-migration-import-failed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            ]\n            wait_for_condition 500 20 {\n                [R 1 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 4 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 7 asm.get_cluster_event_log] eq $import_event_log\n            } else {\n           
     puts \"R1: [R 1 asm.get_cluster_event_log]\"\n                puts \"R4: [R 4 asm.get_cluster_event_log]\"\n                puts \"R7: [R 7 asm.get_cluster_event_log]\"\n                fail \"ASM import event not received\"\n            }\n\n            # Verify the trim events on destination (partially imported keys are trimmed)\n            # NOTE: after failover, the new master will initiate the slot trimming,\n            # and only slot 0 has data, so only slot 0 is trimmed\n            if {$trim_method eq \"active\"} {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-started, slots:0-0\" \\\n                    \"keyspace: key_trimmed, key: $key\" \\\n                    \"sub: cluster-slot-migration-trim-completed, slots:0-0\" \\\n                ]\n            } else {\n                set trim_event_log [list \\\n                    \"sub: cluster-slot-migration-trim-background, slots:0-0\" \\\n                ]\n            }\n            wait_for_condition 500 20 {\n                [R 1 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 4 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n                [R 7 asm.get_cluster_trim_event_log] eq $trim_event_log\n            } else {\n                puts \"R1: [R 1 asm.get_cluster_trim_event_log]\"\n                puts \"R4: [R 4 asm.get_cluster_trim_event_log]\"\n                puts \"R7: [R 7 asm.get_cluster_trim_event_log]\"\n                fail \"ASM destination trim event not received\"\n            }\n\n            # cleanup\n            failover_and_wait_for_done 1\n            clear_module_event_log\n            reset_default_trim_method\n            R 0 flushall\n            R 1 flushall\n        }\n        }\n    }\n\n    foreach with_rdb {\"with\" \"without\"} {\n        test \"Test cluster module notifications when replica restart $with_rdb RDB during importing\" {\n            clear_module_event_log\n       
     R 1 debug asm-trim-method $trim_method\n            R 4 debug asm-trim-method $trim_method\n            R 7 debug asm-trim-method $trim_method\n            R 4 config set save \"\"\n\n            set src_id [R 0 cluster myid]\n            set dest_id [R 1 cluster myid]\n\n            # Set a key in the slot range\n            set key [slot_key 0 mykey]\n            R 0 set $key \"value\"\n\n            # Start migration, 2s delay\n            set task_id [setup_slot_migration_with_delay 0 1 0 100 0 2000000]\n            # Wait until at least one key is moved to destination\n            wait_for_condition 1000 10 {\n                [scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 1\n            } else {\n                fail \"Key not moved to destination\"\n            }\n            wait_for_ofs_sync [Rn 1] [Rn 4]\n\n            # restart node 4\n            if {$with_rdb eq \"with\"} {\n                restart_server -4 true false true save ;# rdb save\n            } else {                \n                restart_server -4 true false true nosave ;# no rdb saved\n            }\n            wait_for_cluster_propagation\n\n            wait_for_asm_done\n\n            # started and completed are paired, and not duplicated\n            set import_event_log [list \\\n                \"sub: cluster-slot-migration-import-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n                \"sub: cluster-slot-migration-import-completed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            ]\n            wait_for_condition 500 10 {\n                [R 1 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 4 asm.get_cluster_event_log] eq $import_event_log &&\n                [R 7 asm.get_cluster_event_log] eq $import_event_log\n            } else {\n                fail \"ASM import event not received\"\n            }\n\n            
R 0 CLUSTER MIGRATION IMPORT 0 100\n            wait_for_asm_done\n            R 4 save ;# save an empty rdb to overwrite the previous one\n            clear_module_event_log\n            reset_default_trim_method\n            R 0 flushall\n            R 1 flushall\n        }\n    }\n\n    test \"Test cluster module notifications when replica is disconnected and does a full resync after importing\" {\n        clear_module_event_log\n        R 1 debug asm-trim-method $trim_method\n        R 4 debug asm-trim-method $trim_method\n        R 7 debug asm-trim-method $trim_method\n\n        set src_id [R 0 cluster myid]\n        set dest_id [R 1 cluster myid]\n\n        # Set a key in the slot range\n        set key [slot_key 0 mykey]\n        R 0 set $key \"value\"\n\n        # Start migration, 2s delay\n        set task_id [setup_slot_migration_with_delay 0 1 0 100 0 2000000]\n        # Wait until at least one key is moved to destination\n        wait_for_condition 1000 10 {\n            [scan [regexp -inline {keys\\=([\\d]*)} [R 1 info keyspace]] keys=%d] >= 1\n        } else {\n            fail \"Key not moved to destination\"\n        }\n        wait_for_ofs_sync [Rn 1] [Rn 4]\n\n        # pause node-4\n        set r4_pid [S 4 process_id]\n        pause_process $r4_pid\n\n        # Set a small repl-backlog-size and write some commands so that node-4\n        # does a full resync when it reconnects after waking up\n        set r1_full_sync [S 1 sync_full]\n        R 1 config set repl-backlog-size 16kb\n        R 1 client kill type replica\n        set 1k_str [string repeat \"a\" 1024]\n        for {set i 0} {$i < 2000} {incr i} {\n            R 1 set [slot_key 6000] $1k_str\n        }\n\n        # After the ASM task is completed, wake up node-4\n        wait_for_condition 1000 10 {\n            [CI 1 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 1 cluster_slot_migration_active_trim_running] == 0\n        } else {\n            fail \"ASM tasks did not complete\"\n        }\n  
      resume_process $r4_pid\n\n        # make sure full resync happens\n        wait_for_sync [Rn 4]\n        wait_for_ofs_sync [Rn 1] [Rn 4]\n        assert_morethan [S 1 sync_full] $r1_full_sync\n\n        # started and completed events are paired, and not duplicated\n        set import_event_log [list \\\n            \"sub: cluster-slot-migration-import-started, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n            \"sub: cluster-slot-migration-import-completed, source_node_id:$src_id, destination_node_id:$dest_id, task_id:$task_id, slots:0-100\" \\\n        ]\n        wait_for_condition 500 10 {\n            [R 1 asm.get_cluster_event_log] eq $import_event_log &&\n            [R 4 asm.get_cluster_event_log] eq $import_event_log &&\n            [R 7 asm.get_cluster_event_log] eq $import_event_log\n        } else {\n            fail \"ASM import event not received\"\n        }\n\n        # Since the ASM task is completed on node-1 before node-4 reconnects,\n        # no trim event should be received on node-4\n        assert_equal {} [R 4 asm.get_cluster_trim_event_log]\n\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        clear_module_event_log\n        reset_default_trim_method\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test new master can trim slots when migration is completed and failover occurs on source side\" {\n        R 0 asm.disable_trim ;# slot trimming cannot start on the source side\n        set slot0_key [slot_key 0 mykey]\n        R 0 set $slot0_key \"value\"\n\n        # Migrate slot 0 from #0 to #1 and wait until it completes, but do not\n        # allow trimming slots on the source node\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 0]\n        wait_for_condition 1000 10 {\n            [string match {*completed*} [migration_status 0 $task_id state]] &&\n            [string match {*completed*} [migration_status 1 $task_id state]]\n        } else {\n            fail \"ASM task did not complete\"\n        }\n        # Verify trim is not allowed on the source node, and the replica doesn't have a trim job either\n        wait_for_ofs_sync [Rn 0] [Rn 3]\n        assert_equal 1 [R 0 asm.trim_in_progress]\n        assert_equal \"value\" [R 0 asm.read_pending_trim_key $slot0_key]\n        assert_equal 0 [R 3 asm.trim_in_progress]\n        assert_equal \"value\" [R 3 asm.read_pending_trim_key $slot0_key]\n\n        set loglines [count_log_lines 0]\n\n        # Failover happens on the source shard: instance #3 becomes master, #0 becomes a replica\n        failover_and_wait_for_done 3\n        R 0 asm.enable_trim ;# enable trim on old master\n\n        # old master should cancel the pending trim job\n        wait_for_log_messages 0 {\"*Cancelling the pending trim job*\"} $loglines 1000 10\n\n        wait_for_ofs_sync [Rn 3] [Rn 0]\n        # Verify trim is allowed on the new master, and the key is trimmed\n        wait_for_condition 1000 10 {\n            [R 3 asm.trim_in_progress] == 0 &&\n            [R 3 asm.read_pending_trim_key $slot0_key] eq \"\" &&\n            [R 0 asm.trim_in_progress] == 0 &&\n            [R 0 asm.read_pending_trim_key $slot0_key] eq \"\"\n        } else {\n            fail \"Trim did not complete\"\n        }\n\n        # Verify the trim events; active trim is used since the module is subscribed to the trimmed event\n        set trim_event_log [list \\\n            \"sub: cluster-slot-migration-trim-started, slots:0-0\" \\\n            \"keyspace: key_trimmed, key: $slot0_key\" \\\n            \"sub: cluster-slot-migration-trim-completed, slots:0-0\" \\\n        ]\n        wait_for_condition 500 20 {\n            [R 0 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n            [R 3 asm.get_cluster_trim_event_log] eq $trim_event_log &&\n            [R 6 asm.get_cluster_trim_event_log] eq $trim_event_log\n        } else {\n            fail \"ASM destination trim event not received\"\n        }\n\n        # cleanup\n        
failover_and_wait_for_done 0\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        clear_module_event_log\n        reset_default_trim_method\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test module replicates commands at the beginning of slot migration \" {\n        R 0 flushall\n        R 1 flushall\n\n        # Sanity check\n        assert_equal 0 [R 1 asm.read_keyless_cmd_val]\n        assert_equal 0 [R 4 asm.read_keyless_cmd_val]\n\n        # Enable module command replication and set a key to be replicated\n        # Module will replicate two commands:\n        #  1- A keyless command: asm.keyless_cmd\n        #  2- SET command for the given key and value\n        set keyname [slot_key 0 modulekey]\n        R 0 asm.replicate_module_command 1 $keyname \"value\"\n\n        setup_slot_migration_with_delay 0 1 0 100\n        wait_for_asm_done\n        wait_for_ofs_sync [Rn 1] [Rn 4]\n\n        # Verify the commands are replicated\n        assert_equal 1 [R 1 asm.read_keyless_cmd_val]\n        assert_equal value [R 1 get $keyname]\n\n        # Verify the commands are replicated to replica\n        R 4 readonly\n        assert_equal 1 [R 4 asm.read_keyless_cmd_val]\n        assert_equal value [R 4 get $keyname]\n\n        # cleanup\n        R 0 asm.replicate_module_command 0 \"\" \"\"\n        R 0 CLUSTER MIGRATION IMPORT 0 100\n        wait_for_asm_done\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Test subcommand propagation during slot migration\" {\n        R 0 flushall\n        R 1 flushall\n        set task_id [setup_slot_migration_with_delay 0 1 0 100]\n\n        set key [slot_key 0 mykey]\n        R 0 asm.parent set $key \"value\" ;# execute a module subcommand\n        wait_for_asm_done\n        assert_equal \"value\" [R 1 GET $key]\n\n        # cleanup\n        R 0 cluster migration import 0 100\n        wait_for_asm_done\n    }\n\n    test \"Test trim method selection based on module keyspace 
subscription\" {\n        R 0 debug asm-trim-method default\n        R 1 debug asm-trim-method default\n\n        R 0 flushall\n        R 1 flushall\n\n        populate_slot 10 -idx 0 -slot 0\n\n        # Make sure module is subscribed to NOTIFY_KEY_TRIMMED event. In this\n        # case, active trim must be used.\n        R 0 asm.subscribe_trimmed_event 1\n        set loglines [count_log_lines 0]\n        R 1 CLUSTER MIGRATION IMPORT 0 15\n        wait_for_asm_done\n        wait_for_log_messages 0 {\"*Active trim scheduled for slots: 0-15*\"} $loglines 1000 10\n\n        # Move slots back to node-0. Make sure module is not subscribed to\n        # NOTIFY_KEY_TRIMMED event. In this case, background trim must be used.\n        R 1 asm.subscribe_trimmed_event 0\n        set loglines [count_log_lines -1]\n        R 0 CLUSTER MIGRATION IMPORT 0 15\n        wait_for_asm_done\n        wait_for_log_messages -1 {\"*Background trim started for slots: 0-15*\"} $loglines 1000 10\n\n        # cleanup\n        wait_for_asm_done\n        R 0 asm.subscribe_trimmed_event 1\n        R 1 asm.subscribe_trimmed_event 1\n        R 0 flushall\n        R 1 flushall\n    }\n\n    test \"Verify trimmed key value can be read in the server event callback\" {\n        R 0 flushall\n        set key [slot_key 0]\n        set value \"value123random\"\n        R 0 set $key $value\n\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        wait_for_condition 1000 10 {\n            [R 0 asm.get_last_deleted_key] eq \"keyevent: key: $key, value: $value\"\n        } else {\n            fail \"Last deleted key event not received\"\n        }\n\n        # cleanup\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n    }\n\n    test \"Verify module cannot open a key in a slot that is being trimmed\" {\n        R 0 flushall\n        R 0 debug asm-trim-method active -1 ;# disable active trim\n\n        set key [slot_key 0]\n        R 0 set $key value\n\n        R 1 
CLUSTER MIGRATION IMPORT 0 0\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 1 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 0 cluster_slot_migration_active_trim_running] == 1\n        } else {\n            fail \"migrate failed\"\n        }\n\n        # We cannot open the key since it is in a slot being trimmed\n        assert_equal {} [R 0 asm.get $key]\n\n        # cleanup\n        R 0 debug asm-trim-method default\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n    }\n\n    test \"Test RM_ClusterGetLocalSlotRanges\" {\n       assert_equal [R 0 asm.cluster_get_local_slot_ranges] {{0 5461}}\n       assert_equal [R 3 asm.cluster_get_local_slot_ranges] {{0 5461}}\n\n       R 0 cluster migration import 5463 6000\n       wait_for_asm_done\n       wait_for_cluster_propagation\n       assert_equal [R 0 asm.cluster_get_local_slot_ranges] {{0 5461} {5463 6000}}\n       assert_equal [R 3 asm.cluster_get_local_slot_ranges] {{0 5461} {5463 6000}}\n\n       R 0 cluster migration import 5462 5462 6001 10922\n       wait_for_asm_done\n       wait_for_cluster_propagation\n       assert_equal [R 0 asm.cluster_get_local_slot_ranges] {{0 10922}}\n       assert_equal [R 3 asm.cluster_get_local_slot_ranges] {{0 10922}}\n       assert_equal [R 1 asm.cluster_get_local_slot_ranges] {}\n       assert_equal [R 4 asm.cluster_get_local_slot_ranges] {}\n    }\n}\n\nset testmodule [file normalize tests/modules/atomicslotmigration.so]\n\nstart_cluster 2 0 [list tags {external:skip cluster modules} config_lines [list loadmodule $testmodule cluster-node-timeout 60000 cluster-allow-replica-migration no appendonly yes]] {\n    test \"TRIMSLOTS in AOF will work synchronously on restart\" {\n        # When TRIMSLOTS is replayed from AOF during restart, it must execute\n        # synchronously rather than using active trim. 
This prevents race\n        # conditions where subsequent AOF commands might operate on keys\n        # that should have been trimmed.\n\n        # Subscribe to key trimmed event to force active trim\n        R 0 asm.subscribe_trimmed_event 1\n        populate_slot 1000 -slot 0\n        populate_slot 1000 -slot 1\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n\n        # verify active trim is used\n        assert_equal 1 [CI 0 cluster_slot_migration_stats_active_trim_completed]\n\n        # restart server and verify aof is loaded\n        restart_server 0 yes no yes nosave\n        assert {[scan [regexp -inline {aof_current_size:([\\d]*)} [R 0 info persistence]] aof_current_size=%d] > 0}\n        wait_for_cluster_state \"ok\"\n\n        # verify TRIMSLOTS in AOF is executed synchronously\n        assert_equal 0 [CI 0 cluster_slot_migration_stats_active_trim_completed]\n        assert_equal 1000 [R 0 dbsize]\n\n        # cleanup\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        assert_equal 2000 [R 0 dbsize]\n        R 0 flushall\n        R 1 flushall\n        clear_module_event_log\n\n    }\n\n    test \"Test trim is disabled when module requests it\" {\n        R 0 asm.disable_trim\n\n        set slot0_key [slot_key 0 mykey]\n        R 0 set $slot0_key \"value\"\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 0]\n        wait_for_condition 1000 10 {\n            [string match {*completed*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not complete\"\n        }\n        # since we disable trim, the key should still exist on source,\n        # we can read it with REDISMODULE_OPEN_KEY_ACCESS_TRIMMED flag\n        assert_equal \"value\" [R 0 asm.read_pending_trim_key $slot0_key]\n        assert_equal 1 [R 0 asm.trim_in_progress]\n\n        # enable trim and verify the key is trimmed\n        R 0 asm.enable_trim\n        wait_for_condition 1000 10 {\n            [R 0 
asm.read_pending_trim_key $slot0_key] eq \"\" &&\n            [R 0 asm.trim_in_progress] == 0\n        } else {\n            fail \"Trim did not complete\"\n        }\n        wait_for_asm_done\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_asm_done\n        clear_module_event_log\n    }\n\n    test \"Cannot start a new ASM task when trim is not allowed\" {\n        # Start a migration task and wait for it to complete, but do not allow slot trimming\n        R 0 asm.disable_trim\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 0 0]\n        wait_for_condition 1000 10 {\n            [string match {*completed*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not complete\"\n        }\n        # Cannot start a new migrating task since trim is disabled\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 1 1]\n        wait_for_condition 1000 10 {\n            [string match {*fail*} [migration_status 1 $task_id state]] &&\n            [string match {*Trim is disabled by module*} [migration_status 1 $task_id last_error]]\n        } else {\n            fail \"ASM task did not fail\"\n        }\n        R 0 asm.enable_trim\n        wait_for_asm_done\n\n        # Start a migration task and wait for it to complete, but do not allow slot trimming\n        R 0 asm.disable_trim\n        set task_id [R 1 CLUSTER MIGRATION IMPORT 2 2]\n        wait_for_condition 1000 10 {\n            [string match {*completed*} [migration_status 0 $task_id state]]\n        } else {\n            fail \"ASM task did not complete\"\n        }\n        set logline [count_log_lines 0]\n        # Cannot start a new importing task since trim is disabled\n        set task_id [R 0 CLUSTER MIGRATION IMPORT 0 1]\n        wait_for_log_messages 0 {\"*Can not start import task*trim is disabled by module*\"} $logline 1000 10\n        R 0 asm.enable_trim\n        wait_for_asm_done\n    }\n}\n\nstart_server {tags \"cluster external:skip\"} {\n    test \"Test RM_ClusterGetLocalSlotRanges 
without cluster\" {\n        r module load $testmodule\n        assert_equal [r asm.cluster_get_local_slot_ranges] {{0 16383}}\n    }\n}\n}\n"
  },
  {
    "path": "tests/unit/cluster/cli.tcl",
    "content": "# Primitive tests on cluster-enabled redis using redis-cli\n\nsource tests/support/cli.tcl\n\n# make sure the test infra won't use SELECT\nset old_singledb $::singledb\nset ::singledb 1\n\n# cluster creation is complicated with TLS, and the current tests don't really need that coverage\ntags {tls:skip external:skip cluster} {\n\n# start three servers\nset base_conf [list cluster-enabled yes cluster-node-timeout 1000]\nstart_multiple_servers 3 [list overrides $base_conf] {\n\n    set node1 [srv 0 client]\n    set node2 [srv -1 client]\n    set node3 [srv -2 client]\n    set node3_pid [srv -2 pid]\n    set node3_rd [redis_deferring_client -2]\n\n    test {Create 3 node cluster} {\n        exec src/redis-cli --cluster-yes --cluster create \\\n                           127.0.0.1:[srv 0 port] \\\n                           127.0.0.1:[srv -1 port] \\\n                           127.0.0.1:[srv -2 port]\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n    }\n\n    test \"Run blocking command on cluster node3\" {\n        # key9184688 is mapped to slot 10923 (first slot of node 3)\n        $node3_rd brpop key9184688 0\n        $node3_rd flush\n\n        wait_for_condition 50 100 {\n            [s -2 blocked_clients] eq {1}\n        } else {\n            fail \"Client not blocked\"\n        }\n    }\n\n    test \"Perform a Resharding\" {\n        exec src/redis-cli --cluster-yes --cluster reshard 127.0.0.1:[srv -2 port] \\\n                           --cluster-to [$node1 cluster myid] \\\n                           --cluster-from [$node3 cluster myid] \\\n                           --cluster-slots 1\n    }\n\n    test \"Verify command got unblocked after resharding\" {\n        # this (read) will wait for the node3 to realize the new topology\n        
assert_error {*MOVED*} {$node3_rd read}\n\n        # verify there are no blocked clients\n        assert_equal [s 0 blocked_clients]  {0}\n        assert_equal [s -1 blocked_clients]  {0}\n        assert_equal [s -2 blocked_clients]  {0}\n    }\n\n    test \"Wait for cluster to be stable\" {\n        # Cluster check only verifies that the config state is self-consistent;\n        # waiting for cluster_state to be ok is an independent check that all the\n        # nodes actually believe each other are healthy, preventing a cluster-down error.\n        wait_for_condition 1000 50 {\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv 0 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -1 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -2 port]}] == 0 &&\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n    }\n\n    set node1_rd [redis_deferring_client 0]\n\n    test \"use previous hostip in \\\"cluster-preferred-endpoint-type unknown-endpoint\\\" mode\" {\n        \n        # backup and set cluster-preferred-endpoint-type unknown-endpoint\n        set endpoint_type_before_set [lindex [split [$node1 CONFIG GET cluster-preferred-endpoint-type] \" \"] 1]\n        $node1 CONFIG SET cluster-preferred-endpoint-type unknown-endpoint\n\n        # when redis-cli is not in cluster mode, the server returns MOVED with an empty host\n        set slot_for_foo [$node1 CLUSTER KEYSLOT foo]\n        assert_error \"*MOVED $slot_for_foo :*\" {$node1 set foo bar}\n\n        # when in cluster mode, redirect using the previous host IP\n        assert_equal \"[exec src/redis-cli -h 127.0.0.1 -p [srv 0 port] -c set foo bar]\" {OK}\n        assert_match \"[exec src/redis-cli -h 127.0.0.1 -p [srv 0 port] -c get foo]\" {bar}\n\n        assert_equal [$node1 CONFIG SET 
cluster-preferred-endpoint-type \"$endpoint_type_before_set\"]  {OK}\n    }\n\n    test \"Sanity test push cmd after resharding\" {\n        assert_error {*MOVED*} {$node3 lpush key9184688 v1}\n\n        $node1_rd brpop key9184688 0\n        $node1_rd flush\n\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            puts \"Client not blocked\"\n            puts \"read from blocked client: [$node1_rd read]\"\n            fail \"Client not blocked\"\n        }\n\n        $node1 lpush key9184688 v2\n        assert_equal {key9184688 v2} [$node1_rd read]\n    }\n\n    $node3_rd close\n\n    test \"Run blocking command again on cluster node1\" {\n        $node1 del key9184688\n        # key9184688 is mapped to slot 10923 which has been moved to node1\n        $node1_rd brpop key9184688 0\n        $node1_rd flush\n\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            fail \"Client not blocked\"\n        }\n    }\n\n     test \"Kill a cluster node and wait for fail state\" {\n        # kill node3 in cluster\n        pause_process $node3_pid\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {fail} &&\n            [CI 1 cluster_state] eq {fail}\n        } else {\n            fail \"Cluster doesn't fail\"\n        }\n    }\n\n     test \"Verify command got unblocked after cluster failure\" {\n        assert_error {*CLUSTERDOWN*} {$node1_rd read}\n\n        # verify there are no blocked clients\n        assert_equal [s 0 blocked_clients]  {0}\n        assert_equal [s -1 blocked_clients]  {0}\n    }\n\n    resume_process $node3_pid\n    $node1_rd close\n\n} ;# stop servers\n\n# Test redis-cli -- cluster create, add-node, call.\n# Test that functions are propagated on add-node\nstart_multiple_servers 5 [list overrides $base_conf] {\n\n    set node4_rd [redis_client -3]\n    set node5_rd [redis_client -4]\n\n    test {Functions are added to new 
node on redis-cli cluster add-node} {\n        exec src/redis-cli --cluster-yes --cluster create \\\n                           127.0.0.1:[srv 0 port] \\\n                           127.0.0.1:[srv -1 port] \\\n                           127.0.0.1:[srv -2 port]\n\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # upload a function to the whole cluster\n        exec src/redis-cli --cluster-yes --cluster call 127.0.0.1:[srv 0 port] \\\n                           FUNCTION LOAD {#!lua name=TEST\n                               redis.register_function('test', function() return 'hello' end)\n                           }\n\n        # adding node to the cluster\n        exec src/redis-cli --cluster-yes --cluster add-node \\\n                       127.0.0.1:[srv -3 port] \\\n                       127.0.0.1:[srv 0 port]\n\n        wait_for_cluster_size 4\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # make sure 'test' function was added to the new node\n        assert_equal {{library_name TEST engine LUA functions {{name test description {} flags {}}}}} [$node4_rd FUNCTION LIST]\n\n        # add function to node 5\n        assert_equal {TEST} [$node5_rd FUNCTION LOAD {#!lua name=TEST\n            redis.register_function('test', function() return 'hello' end)\n        }]\n\n        # make sure the function was added to node 5\n        assert_equal {{library_name TEST engine LUA functions {{name test description {} flags {}}}}} [$node5_rd FUNCTION LIST]\n\n        # adding node 5 to the cluster should fail because it 
already contains the 'test' function\n        catch {\n            exec src/redis-cli --cluster-yes --cluster add-node \\\n                        127.0.0.1:[srv -4 port] \\\n                        127.0.0.1:[srv 0 port]\n        } e\n        assert_match {*node already contains functions*} $e\n    }\n} ;# stop servers\n\n# Test redis-cli --cluster create, add-node.\n# Test that one slot can be migrated to and then away from the new node.\ntest {Migrate the last slot away from a node using redis-cli} {\n    start_multiple_servers 4 [list overrides $base_conf] {\n\n        # Create a cluster of 3 nodes\n        exec src/redis-cli --cluster-yes --cluster create \\\n                           127.0.0.1:[srv 0 port] \\\n                           127.0.0.1:[srv -1 port] \\\n                           127.0.0.1:[srv -2 port]\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # Insert some data\n        assert_equal OK [exec src/redis-cli -c -p [srv 0 port] SET foo bar]\n        set slot [exec src/redis-cli -c -p [srv 0 port] CLUSTER KEYSLOT foo]\n\n        # Add new node to the cluster\n        exec src/redis-cli --cluster-yes --cluster add-node \\\n                     127.0.0.1:[srv -3 port] \\\n                     127.0.0.1:[srv 0 port]\n        \n        # First we wait for new node to be recognized by entire cluster\n        wait_for_cluster_size 4\n        \n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        set newnode_r [redis_client -3]\n        set newnode_id [$newnode_r CLUSTER MYID]\n\n        # Find out 
which node has the key \"foo\" by asking the new node for a\n        # redirect.\n        catch { $newnode_r get foo } e\n        assert_match \"MOVED $slot *\" $e\n        lassign [split [lindex $e 2] :] owner_host owner_port\n        set owner_r [redis $owner_host $owner_port 0 $::tls]\n        set owner_id [$owner_r CLUSTER MYID]\n\n        # Move slot to new node using plain Redis commands\n        assert_equal OK [$newnode_r CLUSTER SETSLOT $slot IMPORTING $owner_id]\n        assert_equal OK [$owner_r CLUSTER SETSLOT $slot MIGRATING $newnode_id]\n        assert_equal {foo} [$owner_r CLUSTER GETKEYSINSLOT $slot 10]\n        assert_equal OK [$owner_r MIGRATE 127.0.0.1 [srv -3 port] \"\" 0 5000 KEYS foo]\n        assert_equal OK [$newnode_r CLUSTER SETSLOT $slot NODE $newnode_id]\n        assert_equal OK [$owner_r CLUSTER SETSLOT $slot NODE $newnode_id]\n\n        # Using --cluster check, make sure we won't get `Not all slots are covered by nodes`.\n        # Wait for the cluster to become stable to make sure the cluster is up during MIGRATE.\n        wait_for_condition 1000 50 {\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv 0 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -1 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -2 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -3 port]}] == 0 &&\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # Move the only slot back to the original node using redis-cli\n        exec src/redis-cli --cluster reshard 127.0.0.1:[srv -3 port] \\\n            --cluster-from $newnode_id \\\n            --cluster-to $owner_id \\\n            --cluster-slots 1 \\\n            --cluster-yes\n\n   
     # The empty node will become a replica of the new owner before the\n        # `MOVED` check, so let's wait for the cluster to become stable.\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # Check that the key foo has been migrated back to the original owner.\n        catch { $newnode_r get foo } e\n        assert_equal \"MOVED $slot $owner_host:$owner_port\" $e\n\n        # Check that the empty node has turned itself into a replica of the new\n        # owner and that the new owner knows that.\n        wait_for_condition 1000 50 {\n            [string match \"*slave*\" [$owner_r CLUSTER REPLICAS $owner_id]]\n        } else {\n            fail \"Empty node didn't turn itself into a replica.\"\n        }\n    }\n}\n\nforeach ip_or_localhost {127.0.0.1 localhost} {\n\n# Test redis-cli --cluster create, add-node with cluster-port.\n# Create five nodes, three with custom cluster_port and two with default values.\nstart_server [list overrides [list cluster-enabled yes cluster-node-timeout 1 cluster-port [find_available_port $::baseport $::portcount]]] {\nstart_server [list overrides [list cluster-enabled yes cluster-node-timeout 1]] {\nstart_server [list overrides [list cluster-enabled yes cluster-node-timeout 1 cluster-port [find_available_port $::baseport $::portcount]]] {\nstart_server [list overrides [list cluster-enabled yes cluster-node-timeout 1]] {\nstart_server [list overrides [list cluster-enabled yes cluster-node-timeout 1 cluster-port [find_available_port $::baseport $::portcount]]] {\n\n    # The first three are used to test --cluster create.\n    # The last two are used to test --cluster add-node\n\n    test \"redis-cli -4 --cluster create using $ip_or_localhost with cluster-port\" {\n        exec 
src/redis-cli -4 --cluster-yes --cluster create \\\n                           $ip_or_localhost:[srv 0 port] \\\n                           $ip_or_localhost:[srv -1 port] \\\n                           $ip_or_localhost:[srv -2 port]\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # Make sure each node can meet other nodes\n        assert_equal 3 [CI 0 cluster_known_nodes]\n        assert_equal 3 [CI 1 cluster_known_nodes]\n        assert_equal 3 [CI 2 cluster_known_nodes]\n    }\n\n    test \"redis-cli -4 --cluster add-node using $ip_or_localhost with cluster-port\" {\n        # Adding node to the cluster (without cluster-port)\n        exec src/redis-cli -4 --cluster-yes --cluster add-node \\\n                           $ip_or_localhost:[srv -3 port] \\\n                           $ip_or_localhost:[srv 0 port]\n\n        wait_for_cluster_size 4\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # Adding node to the cluster (with cluster-port)\n        exec src/redis-cli -4 --cluster-yes --cluster add-node \\\n                           $ip_or_localhost:[srv -4 port] \\\n                           $ip_or_localhost:[srv 0 port]\n\n        wait_for_cluster_size 5\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok} &&\n            [CI 3 cluster_state] eq {ok} &&\n            [CI 4 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n\n        # 
Make sure each node can meet other nodes\n        assert_equal 5 [CI 0 cluster_known_nodes]\n        assert_equal 5 [CI 1 cluster_known_nodes]\n        assert_equal 5 [CI 2 cluster_known_nodes]\n        assert_equal 5 [CI 3 cluster_known_nodes]\n        assert_equal 5 [CI 4 cluster_known_nodes]\n    }\n# stop 5 servers\n}\n}\n}\n}\n}\n\n} ;# foreach ip_or_localhost\n\n} ;# tags\n\nset ::singledb $old_singledb\n"
  },
  {
    "path": "tests/unit/cluster/cluster-response-tls.tcl",
    "content": "source tests/support/cluster.tcl\n\nproc get_port_from_moved_error {e} {\n    set ip_port [lindex [split $e \" \"] 2]\n    return [lindex [split $ip_port \":\"] 1]\n}\n\nproc get_pport_by_port {port} {\n    foreach srv $::servers {\n        set srv_port [dict get $srv port]\n        if {$port == $srv_port} {\n            return [dict get $srv pport]\n        }\n    }\n    return 0\n}\n\nproc get_port_from_node_info {line} {\n    set fields [split $line \" \"]\n    set addr [lindex $fields 1]\n    set ip_port [lindex [split $addr \"@\"] 0]\n    return [lindex [split $ip_port \":\"] 1]\n}\n\nproc cluster_response_tls {tls_cluster} {\n\n    test \"CLUSTER SLOTS with different connection type -- tls-cluster $tls_cluster\" {\n        set slots1 [R 0 cluster slots]\n        set pport [srv 0 pport]\n        set cluster_client [redis_cluster 127.0.0.1:$pport 0]\n        set slots2 [$cluster_client cluster slots]\n        $cluster_client close\n        # Compare the ports in the first row\n        assert_no_match [lindex $slots1 0 2 1] [lindex $slots2 0 2 1]\n    }\n\n    test \"CLUSTER NODES return port according to connection type -- tls-cluster $tls_cluster\" {\n        set nodes [R 0 cluster nodes]\n        set port1 [get_port_from_node_info [lindex [split $nodes \"\\r\\n\"] 0]]\n        set pport [srv 0 pport]\n        set cluster_client [redis_cluster 127.0.0.1:$pport 0]\n        set nodes [$cluster_client cluster nodes]\n        set port2 [get_port_from_node_info [lindex [split $nodes \"\\r\\n\"] 0]]\n        $cluster_client close\n        assert_not_equal $port1 $port2\n    }\n\n    set cluster [redis_cluster 127.0.0.1:[srv 0 port]]\n    set cluster_pport [redis_cluster 127.0.0.1:[srv 0 pport] 0]\n    $cluster refresh_nodes_map\n\n    test \"Set many keys in the cluster -- tls-cluster $tls_cluster\" {\n        for {set i 0} {$i < 5000} {incr i} {\n            $cluster set $i $i\n            assert { [$cluster get $i] eq $i }\n        }\n    }\n\n    
test \"Test cluster responses during migration of slot x -- tls-cluster $tls_cluster\" {\n        set slot 10\n        array set nodefrom [$cluster masternode_for_slot $slot]\n        array set nodeto [$cluster masternode_notfor_slot $slot]\n        $nodeto(link) cluster setslot $slot importing $nodefrom(id)\n        $nodefrom(link) cluster setslot $slot migrating $nodeto(id)\n\n        # Get a key from that slot\n        set key [$nodefrom(link) cluster GETKEYSINSLOT $slot \"1\"]\n        # MOVED REPLY\n        catch {$nodeto(link) set $key \"newVal\"} e_moved1\n        assert_match \"*MOVED*\" $e_moved1\n        # ASK REPLY\n        catch {$nodefrom(link) set \"abc{$key}\" \"newVal\"} e_ask1\n        assert_match \"*ASK*\" $e_ask1\n\n        # UNSTABLE REPLY\n        assert_error \"*TRYAGAIN*\" {$nodefrom(link) mset \"a{$key}\" \"newVal\" $key \"newVal2\"}\n\n        # Connecting using another protocol\n        array set nodefrom_pport [$cluster_pport masternode_for_slot $slot]\n        array set nodeto_pport [$cluster_pport masternode_notfor_slot $slot]\n\n        # MOVED REPLY\n        catch {$nodeto_pport(link) set $key \"newVal\"} e_moved2\n        assert_match \"*MOVED*\" $e_moved2\n        # ASK REPLY\n        catch {$nodefrom_pport(link) set \"abc{$key}\" \"newVal\"} e_ask2\n        assert_match \"*ASK*\" $e_ask2\n        # Compare MOVED error's port\n        set port1 [get_port_from_moved_error $e_moved1]\n        set port2 [get_port_from_moved_error $e_moved2]\n        assert_not_equal $port1 $port2\n        assert_equal $port1 $nodefrom(port)\n        assert_equal $port2 [get_pport_by_port $nodefrom(port)]\n        # Compare ASK error's port\n        set port1 [get_port_from_moved_error $e_ask1]\n        set port2 [get_port_from_moved_error $e_ask2]\n        assert_not_equal $port1 $port2\n        assert_equal $port1 $nodeto(port)\n        assert_equal $port2 [get_pport_by_port $nodeto(port)]\n    }\n}\n\nif {$::tls} {\n    start_cluster 3 3 {tags 
{external:skip cluster tls} overrides {tls-cluster yes tls-replication yes}} {      \n        cluster_response_tls yes\n    }\n    start_cluster 3 3 {tags {external:skip cluster tls} overrides {tls-cluster no tls-replication no}} {\n        cluster_response_tls no\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/failure-marking.tcl",
    "content": "# Test that a single primary can mark a replica as `fail`\nstart_cluster 1 1 {tags {external:skip cluster}} {\n\n    test \"Verify that single primary marks replica as failed\" {\n        set primary [srv -0 client]\n\n        set replica1 [srv -1 client]\n        set replica1_pid [srv -1 pid]\n        set replica1_instance_id [dict get [cluster_get_myself 1] id]\n\n        assert {[lindex [$primary role] 0] eq {master}}\n        assert {[lindex [$replica1 role] 0] eq {slave}}\n\n        wait_for_sync $replica1\n\n        pause_process $replica1_pid\n\n        wait_node_marked_fail 0 $replica1_instance_id\n    }\n}\n\n# Test that multiple primaries wait for a quorum and then mark a replica as `fail`\nstart_cluster 2 1 {tags {external:skip cluster}} {\n\n    test \"Verify that multiple primaries mark replica as failed\" {\n        set primary1 [srv -0 client]\n\n        set primary2 [srv -1 client]\n        set primary2_pid [srv -1 pid]\n\n        set replica1 [srv -2 client]\n        set replica1_pid [srv -2 pid]\n        set replica1_instance_id [dict get [cluster_get_myself 2] id]\n\n        assert {[lindex [$primary1 role] 0] eq {master}}\n        assert {[lindex [$primary2 role] 0] eq {master}}\n        assert {[lindex [$replica1 role] 0] eq {slave}}\n\n        wait_for_sync $replica1\n\n        pause_process $replica1_pid\n\n        # Pause the other primary to allow time for the pfail flag to appear\n        pause_process $primary2_pid\n\n        wait_node_marked_pfail 0 $replica1_instance_id\n\n        # Resume the other primary and wait for it to mark the replica as failed\n        resume_process $primary2_pid\n\n        wait_node_marked_fail 0 $replica1_instance_id\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/hostnames.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nproc get_slot_field {slot_output shard_id node_id attrib_id} {\n    return [lindex [lindex [lindex $slot_output $shard_id] $node_id] $attrib_id]\n}\n\n# Start a cluster with 3 masters and 4 replicas.\n# These tests rely on specific node ordering, so make sure no node fails over.\nstart_cluster 3 4 {tags {external:skip cluster} overrides {cluster-replica-no-failover yes}} {\ntest \"Set cluster hostnames and verify they are propagated\" {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        R $j config set cluster-announce-hostname \"host-$j.com\"\n    }\n    \n    wait_for_condition 50 100 {\n        [are_hostnames_propagated \"host-*.com\"] eq 1\n    } else {\n        fail \"cluster hostnames were not propagated\"\n    }\n\n    # Now that everything is propagated, assert everyone agrees\n    wait_for_cluster_propagation\n}\n\ntest \"Update hostnames and make sure they are all eventually propagated\" {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        R $j config set cluster-announce-hostname \"host-updated-$j.com\"\n    }\n    \n    wait_for_condition 50 100 {\n        [are_hostnames_propagated \"host-updated-*.com\"] eq 1\n    } else {\n        fail \"cluster hostnames were not propagated\"\n    }\n\n    # Now that everything is propagated, assert everyone agrees\n    wait_for_cluster_propagation\n}\n\ntest \"Remove hostnames and make sure they are all eventually propagated\" {\n    for {set j 0} {$j < [llength $::servers]} {incr j} {\n        R $j config set 
cluster-announce-hostname \"\"\n    }\n    \n    wait_for_condition 50 100 {\n        [are_hostnames_propagated \"\"] eq 1\n    } else {\n        fail \"cluster hostnames were not propagated\"\n    }\n\n    # Now that everything is propagated, assert everyone agrees\n    wait_for_cluster_propagation\n}\n\ntest \"Verify cluster-preferred-endpoint-type behavior for redirects and info\" {\n    R 0 config set cluster-announce-hostname \"me.com\"\n    R 1 config set cluster-announce-hostname \"\"\n    R 2 config set cluster-announce-hostname \"them.com\"\n\n    wait_for_cluster_propagation\n\n    # Verify default behavior\n    set slot_result [R 0 cluster slots]\n    assert_equal \"\" [lindex [get_slot_field $slot_result 0 2 0] 1]\n    assert_equal \"\" [lindex [get_slot_field $slot_result 2 2 0] 1]\n    assert_equal \"hostname\" [lindex [get_slot_field $slot_result 0 2 3] 0]\n    assert_equal \"me.com\" [lindex [get_slot_field $slot_result 0 2 3] 1]\n    assert_equal \"hostname\" [lindex [get_slot_field $slot_result 2 2 3] 0]\n    assert_equal \"them.com\" [lindex [get_slot_field $slot_result 2 2 3] 1]\n\n    # Redirect will use the IP address\n    catch {R 0 set foo foo} redir_err\n    assert_match \"MOVED * 127.0.0.1:*\" $redir_err\n\n    # Verify prefer hostname behavior\n    R 0 config set cluster-preferred-endpoint-type hostname\n\n    set slot_result [R 0 cluster slots]\n    assert_equal \"me.com\" [get_slot_field $slot_result 0 2 0]\n    assert_equal \"them.com\" [get_slot_field $slot_result 2 2 0]\n\n    # Redirect should use hostname\n    catch {R 0 set foo foo} redir_err\n    assert_match \"MOVED * them.com:*\" $redir_err\n\n    # Redirect to an unknown hostname returns ?\n    catch {R 0 set barfoo bar} redir_err\n    assert_match \"MOVED * ?:*\" $redir_err\n\n    # Verify unknown hostname behavior\n    R 0 config set cluster-preferred-endpoint-type unknown-endpoint\n\n    # Verify default behavior\n    set slot_result [R 0 cluster slots]\n    assert_equal 
\"ip\" [lindex [get_slot_field $slot_result 0 2 3] 0]\n    assert_equal \"127.0.0.1\" [lindex [get_slot_field $slot_result 0 2 3] 1]\n    assert_equal \"ip\" [lindex [get_slot_field $slot_result 2 2 3] 0]\n    assert_equal \"127.0.0.1\" [lindex [get_slot_field $slot_result 2 2 3] 1]\n    assert_equal \"ip\" [lindex [get_slot_field $slot_result 1 2 3] 0]\n    assert_equal \"127.0.0.1\" [lindex [get_slot_field $slot_result 1 2 3] 1]\n    # Not required by the protocol, but IP comes before hostname\n    assert_equal \"hostname\" [lindex [get_slot_field $slot_result 0 2 3] 2]\n    assert_equal \"me.com\" [lindex [get_slot_field $slot_result 0 2 3] 3]\n    assert_equal \"hostname\" [lindex [get_slot_field $slot_result 2 2 3] 2]\n    assert_equal \"them.com\" [lindex [get_slot_field $slot_result 2 2 3] 3]\n\n    # This node doesn't have a hostname\n    assert_equal 2 [llength [get_slot_field $slot_result 1 2 3]]\n\n    # Redirect should use empty string\n    catch {R 0 set foo foo} redir_err\n    assert_match \"MOVED * :*\" $redir_err\n\n    R 0 config set cluster-preferred-endpoint-type ip\n}\n\ntest \"Verify the nodes configured with prefer hostname only show hostname for new nodes\" {\n    # Have everyone forget node 6 and isolate it from the cluster.\n    isolate_node 6\n\n    set primaries 3\n    for {set j 0} {$j < $primaries} {incr j} {\n        # Set hostnames for the masters, now that the node is isolated\n        R $j config set cluster-announce-hostname \"shard-$j.com\"\n    }\n\n    # Prevent Node 0 and Node 6 from properly meeting,\n    # they'll hang in the handshake phase. 
This allows us to \n    # test the case where we \"know\" about it but haven't\n    # successfully retrieved information about it yet.\n    R 0 DEBUG DROP-CLUSTER-PACKET-FILTER 0\n    R 6 DEBUG DROP-CLUSTER-PACKET-FILTER 0\n\n    # Have a replica meet the isolated node\n    R 3 cluster meet 127.0.0.1 [srv -6 port]\n\n    # Wait for the isolated node to learn about the rest of the cluster,\n    # where each node corresponds to a single entry in cluster nodes. Note this\n    # doesn't mean the isolated node has successfully contacted each\n    # node.\n    wait_for_condition 50 100 {\n        [llength [split [R 6 CLUSTER NODES] \"\\n\"]] eq [expr [llength $::servers] + 1]\n    } else {\n        fail \"Isolated node didn't learn about the rest of the cluster *\"\n    }\n\n    # Now we wait for the two nodes that aren't filtering packets\n    # to accept our isolated node's connections. At this point they will\n    # start showing up in cluster slots. \n    wait_for_condition 50 100 {\n        [llength [R 6 CLUSTER SLOTS]] eq 2\n    } else {\n        fail \"Node did not learn about the 2 shards it can talk to\"\n    }\n    wait_for_condition 50 100 {\n        [lindex [get_slot_field [R 6 CLUSTER SLOTS] 0 2 3] 1] eq \"shard-1.com\"\n    } else {\n        fail \"hostname for shard-1 didn't reach node 6\"\n    }\n\n    wait_for_condition 50 100 {\n        [lindex [get_slot_field [R 6 CLUSTER SLOTS] 1 2 3] 1] eq \"shard-2.com\"\n    } else {\n        fail \"hostname for shard-2 didn't reach node 6\"\n    }\n\n    # Also make sure we know about the isolated master; we\n    # just can't reach it.\n    set master_id [R 0 CLUSTER MYID]\n    assert_match \"*$master_id*\" [R 6 CLUSTER NODES]\n\n    # Stop dropping cluster packets, and make sure everything\n    # stabilizes\n    R 0 DEBUG DROP-CLUSTER-PACKET-FILTER -1\n    R 6 DEBUG DROP-CLUSTER-PACKET-FILTER -1\n\n    # This operation sometimes spikes to around 5 seconds to resolve the state,\n    # so it has a higher timeout. 
\n    wait_for_condition 50 500 {\n        [llength [R 6 CLUSTER SLOTS]] eq 3\n    } else {\n        fail \"Node did not learn about the 3 shards it can talk to\"\n    }\n\n    for {set j 0} {$j < $primaries} {incr j} {\n        wait_for_condition 50 100 {\n            [lindex [get_slot_field [R 6 CLUSTER SLOTS] $j 2 3] 1] eq \"shard-$j.com\"\n        } else {\n            fail \"hostname information for shard-$j didn't reach node 6\"\n        }\n    }\n}\n\ntest \"Test restart will keep hostname information\" {\n    # Set a new hostname, reboot and make sure it sticks\n    R 0 config set cluster-announce-hostname \"restart-1.com\"\n    \n    # Store the hostname in the config\n    R 0 config rewrite\n\n    restart_server 0 true false\n    set slot_result [R 0 CLUSTER SLOTS]\n    assert_equal [lindex [get_slot_field $slot_result 0 2 3] 1] \"restart-1.com\"\n\n    # As a sanity check, make sure everyone eventually agrees\n    wait_for_cluster_propagation\n}\n\ntest \"Test hostname validation\" {\n    catch {R 0 config set cluster-announce-hostname [string repeat x 256]} err\n    assert_match \"*Hostnames must be less than 256 characters*\" $err\n    catch {R 0 config set cluster-announce-hostname \"?.com\"} err\n    assert_match \"*Hostnames may only contain alphanumeric characters, hyphens or dots*\" $err\n\n    # Note this isn't a valid hostname, but it passes our internal validation\n    R 0 config set cluster-announce-hostname \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-.\"\n}\n}\n"
  },
  {
    "path": "tests/unit/cluster/human-announced-nodename.tcl",
    "content": "# Check if cluster's view of human announced nodename is reported in logs\nstart_cluster 3 0 {tags {external:skip cluster}} {\n    test \"Set cluster human announced nodename and let it propagate\" {\n        for {set j 0} {$j < [llength $::servers]} {incr j} {\n            R $j config set cluster-announce-hostname \"host-$j.com\"\n            R $j config set cluster-announce-human-nodename \"nodename-$j\"\n        }\n\n        # We wait for everyone to agree on the hostnames. Since they are gossiped\n        # the same way as nodenames, it implies everyone knows the nodenames too.\n        wait_for_condition 50 100 {\n            [are_hostnames_propagated \"host-*.com\"] eq 1\n        } else {\n            fail \"cluster hostnames were not propagated\"\n        }\n    }\n\n    test \"Human nodenames are visible in log messages\" {\n        # Pause instance 0, so everyone thinks it is dead\n        pause_process [srv 0 pid]\n\n        # We're going to use a message we will know will be sent, node unreachable,\n        # since it includes the other node gossiping.\n        wait_for_log_messages -1 {\"*Node * (nodename-2) reported node * (nodename-0) as not reachable*\"} 0 20 500\n        wait_for_log_messages -2 {\"*Node * (nodename-1) reported node * (nodename-0) as not reachable*\"} 0 20 500\n        \n        resume_process [srv 0 pid]\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/internal-secret.tcl",
    "content": "proc num_unique_secrets {num_nodes} {\n    set secrets [list]\n    for {set i 0} {$i < $num_nodes} {incr i} {\n        lappend secrets [R $i debug internal_secret]\n    }\n    set num_secrets [llength [lsort -unique $secrets]]\n    return $num_secrets\n}\n\nproc wait_for_secret_sync {maxtries delay num_nodes} {\n    wait_for_condition $maxtries $delay {\n        [num_unique_secrets $num_nodes] eq 1\n    } else {\n        fail \"Failed waiting for secrets to sync\"\n    }\n}\n\nstart_cluster 3 3 {tags {external:skip cluster}} {\n    test \"Test internal secret sync\" {\n        wait_for_secret_sync 50 100 6\n    }\n\n    \n    set first_shard_host [srv 0 host]\n    set first_shard_port [srv 0 port]\n    \n    if {$::verbose} {\n        puts {cluster internal secret:}\n        puts [R 1 debug internal_secret]\n    }\n\n    test \"Join a node to the cluster and make sure it gets the same secret\" {\n        start_server {tags {\"external:skip\"} overrides {cluster-enabled {yes}}} {\n            r cluster meet $first_shard_host $first_shard_port\n            wait_for_condition 50 100 {\n                [r debug internal_secret] eq [R 1 debug internal_secret]\n            } else {\n                puts [r debug internal_secret]\n                puts [R 1 debug internal_secret]\n                fail \"Secrets not match\"\n            }\n        }\n    }\n\n    test \"Join another cluster, make sure clusters sync on the internal secret\" {\n        start_server {tags {\"external:skip\"} overrides {cluster-enabled {yes}}} {\n            set new_shard_host [srv 0 host]\n            set new_shard_port [srv 0 port]\n            start_server {tags {\"external:skip\"} overrides {cluster-enabled {yes}}} {\n                r cluster meet $new_shard_host $new_shard_port\n                wait_for_condition 50 100 {\n                    [r debug internal_secret] eq [r -1 debug internal_secret]\n                } else {\n                    puts [r debug 
internal_secret]\n                    puts [r -1 debug internal_secret]\n                    fail \"Secrets do not match\"\n                }\n                if {$::verbose} {\n                    puts {new cluster internal secret:}\n                    puts [r -1 debug internal_secret]\n                }\n                r cluster meet $first_shard_host $first_shard_port\n                wait_for_secret_sync 50 100 8\n                if {$::verbose} {\n                    puts {internal secret after join to bigger cluster:}\n                    puts [r -1 debug internal_secret]\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/links.tcl",
    "content": "proc get_links_with_peer {this_instance_id peer_nodename} {\n    set links [R $this_instance_id cluster links]\n    set links_with_peer {}\n    foreach l $links {\n        if {[dict get $l node] eq $peer_nodename} {\n            lappend links_with_peer $l\n        }\n    }\n    return $links_with_peer\n}\n\n# Return the entry in CLUSTER LINKS output by instance identified by `this_instance_id` that\n# corresponds to the link established toward a peer identified by `peer_nodename`\nproc get_link_to_peer {this_instance_id peer_nodename} {\n    set links_with_peer [get_links_with_peer $this_instance_id $peer_nodename]\n    foreach l $links_with_peer {\n        if {[dict get $l direction] eq \"to\"} {\n            return $l\n        }\n    }\n    return {}\n}\n\n# Return the entry in CLUSTER LINKS output by instance identified by `this_instance_id` that\n# corresponds to the link accepted from a peer identified by `peer_nodename`\nproc get_link_from_peer {this_instance_id peer_nodename} {\n    set links_with_peer [get_links_with_peer $this_instance_id $peer_nodename]\n    foreach l $links_with_peer {\n        if {[dict get $l direction] eq \"from\"} {\n            return $l\n        }\n    }\n    return {}\n}\n\n# Reset cluster links to their original state\nproc reset_links {id} {\n    set limit [lindex [R $id CONFIG get cluster-link-sendbuf-limit] 1]\n\n    # Set a 1 byte limit and wait for cluster cron to run\n    # (executes every 100ms) and terminate links\n    R $id CONFIG SET cluster-link-sendbuf-limit 1\n    after 150\n\n    # Reset limit\n    R $id CONFIG SET cluster-link-sendbuf-limit $limit\n\n    # Wait until the cluster links come back up for each node\n    wait_for_condition 50 100 {\n        [number_of_links $id] == [expr [number_of_peers $id] * 2]\n    } else {\n        fail \"Cluster links did not come back up\"\n    }\n}\n\nproc number_of_peers {id} {\n    expr [llength $::servers] - 1\n}\n\nproc number_of_links {id} {\n    llength [R 
$id cluster links]\n}\n\nproc publish_messages {server num_msgs msg_size} {\n    for {set i 0} {$i < $num_msgs} {incr i} {\n        $server PUBLISH channel [string repeat \"x\" $msg_size]\n    }\n}\n\nstart_cluster 1 2 {tags {external:skip cluster}} {\n    set primary_id 0\n    set replica1_id 1\n\n    set primary [Rn $primary_id]\n    set replica1 [Rn $replica1_id]\n\n    test \"Broadcast message across a cluster shard while a cluster link is down\" {\n        set replica1_node_id [$replica1 CLUSTER MYID]\n\n        set channelname ch3\n\n        # subscribe on replica1\n        set subscribeclient1 [redis_deferring_client -1]\n        $subscribeclient1 deferred 1\n        $subscribeclient1 SSUBSCRIBE $channelname\n        $subscribeclient1 read\n\n        # subscribe on replica2\n        set subscribeclient2 [redis_deferring_client -2]\n        $subscribeclient2 deferred 1\n        $subscribeclient2 SSUBSCRIBE $channelname\n        $subscribeclient2 read\n\n        # Verify the number of links with the cluster in a stable state\n        assert_equal [expr [number_of_peers $primary_id]*2] [number_of_links $primary_id]\n\n        # Disconnect the cluster link between primary and replica1 and publish a message.\n        $primary MULTI\n        $primary DEBUG CLUSTERLINK KILL TO $replica1_node_id\n        $primary SPUBLISH $channelname hello\n        set res [$primary EXEC]\n\n        # Verify no client exists on the primary to receive the published message.\n        assert_equal $res {OK 0}\n\n        # Wait until all the cluster links are healthy\n        wait_for_condition 50 100 {\n            [number_of_peers $primary_id]*2 == [number_of_links $primary_id]\n        } else {\n            fail \"All peer links couldn't be established\"\n        }\n\n        # Publish a message afterwards.\n        $primary SPUBLISH $channelname world\n\n        # Verify replica1 has received only (world); (hello) was lost.\n        assert_equal \"smessage ch3 world\" [$subscribeclient1 read]\n\n        
# Verify replica2 has received both messages (hello/world)\n        assert_equal \"smessage ch3 hello\" [$subscribeclient2 read]\n        assert_equal \"smessage ch3 world\" [$subscribeclient2 read]\n    } {} {needs:debug}\n}\n\nstart_cluster 3 0 {tags {external:skip cluster}} {\n    test \"Each node has two links with each peer\" {\n        for {set id 0} {$id < [llength $::servers]} {incr id} {\n            # Assert that from point of view of each node, there are two links for\n            # each peer. It might take a while for cluster to stabilize so wait up\n            # to 5 seconds.\n            wait_for_condition 50 100 {\n                [number_of_peers $id]*2 == [number_of_links $id]\n            } else {\n                assert_equal [expr [number_of_peers $id]*2] [number_of_links $id]\n            }\n\n            set nodes [get_cluster_nodes $id]\n            set links [R $id cluster links]\n\n            # For each peer there should be exactly one\n            # link \"to\" it and one link \"from\" it.\n            foreach n $nodes {\n                if {[cluster_has_flag $n myself]} continue\n                set peer [dict get $n id]\n                set to 0\n                set from 0\n                foreach l $links {\n                    if {[dict get $l node] eq $peer} {\n                        if {[dict get $l direction] eq \"to\"} {\n                            incr to\n                        } elseif {[dict get $l direction] eq \"from\"} {\n                            incr from\n                        }\n                    }\n                }\n                assert {$to eq 1}\n                assert {$from eq 1}\n            }\n        }\n    }\n\n    test {Validate cluster links format} {\n        set lines [R 0 cluster links]\n        foreach l $lines {\n            if {$l eq {}} continue\n            assert_equal [llength $l] 12\n            assert_equal 1 [dict exists $l \"direction\"]\n            assert_equal 1 [dict exists $l 
\"node\"]\n            assert_equal 1 [dict exists $l \"create-time\"]\n            assert_equal 1 [dict exists $l \"events\"]\n            assert_equal 1 [dict exists $l \"send-buffer-allocated\"]\n            assert_equal 1 [dict exists $l \"send-buffer-used\"]\n        }\n    }\n\n    set primary1_id 0\n    set primary2_id 1\n\n    set primary1 [Rn $primary1_id]\n    set primary2 [Rn $primary2_id]\n\n    test \"Disconnect link when send buffer limit reached\" {\n        # On primary1, set timeout to 1 hour so links won't get disconnected due to timeouts\n        set oldtimeout [lindex [$primary1 CONFIG get cluster-node-timeout] 1]\n        $primary1 CONFIG set cluster-node-timeout [expr 60*60*1000]\n\n        # Get primary1's links with primary2\n        set primary2_name [dict get [cluster_get_myself $primary2_id] id]\n        set orig_link_p1_to_p2 [get_link_to_peer $primary1_id $primary2_name]\n        set orig_link_p1_from_p2 [get_link_from_peer $primary1_id $primary2_name]\n\n        # On primary1, set cluster link send buffer limit to 256KB, which is large enough to not be\n        # overflowed by regular gossip messages but also small enough that it doesn't take too much\n        # memory to overflow it. 
If it is set too high, Redis may get OOM killed by the kernel before this\n        # limit is overflowed in some RAM-limited test environments.\n        set oldlimit [lindex [$primary1 CONFIG get cluster-link-sendbuf-limit] 1]\n        $primary1 CONFIG set cluster-link-sendbuf-limit [expr 256*1024]\n        assert {[CI $primary1_id total_cluster_links_buffer_limit_exceeded] eq 0}\n\n        # To manufacture an ever-growing send buffer from primary1 to primary2,\n        # make primary2 unresponsive.\n        set primary2_pid [srv [expr -1*$primary2_id] pid]\n        pause_process $primary2_pid\n\n        # On primary1, send 128KB Pubsub messages in a loop until the send buffer of the link from\n        # primary1 to primary2 exceeds the buffer limit and the link is therefore dropped.\n        # For the send buffer to grow, we need to first exhaust the TCP send buffer of primary1 and the TCP\n        # receive buffer of primary2. The sizes of these two buffers vary by OS, but 100 128KB\n        # messages should be sufficient.\n        set i 0\n        wait_for_condition 100 0 {\n            [catch {incr i} e] == 0 &&\n            [catch {$primary1 publish channel [prepare_value [expr 128*1024]]} e] == 0 &&\n            [catch {after 500} e] == 0 &&\n            [CI $primary1_id total_cluster_links_buffer_limit_exceeded] >= 1\n        } else {\n            fail \"Cluster link not freed as expected\"\n        }\n\n        # A new link to primary2 should have been recreated\n        set new_link_p1_to_p2 [get_link_to_peer $primary1_id $primary2_name]\n        assert {[dict get $new_link_p1_to_p2 create-time] > [dict get $orig_link_p1_to_p2 create-time]}\n\n        # Link from primary2 should not be affected\n        set same_link_p1_from_p2 [get_link_from_peer $primary1_id $primary2_name]\n        assert {[dict get $same_link_p1_from_p2 create-time] eq [dict get $orig_link_p1_from_p2 create-time]}\n\n        # Revive primary2\n        resume_process $primary2_pid\n\n        # Reset configs 
on primary1 so config changes don't leak out to other tests\n        $primary1 CONFIG set cluster-node-timeout $oldtimeout\n        $primary1 CONFIG set cluster-link-sendbuf-limit $oldlimit\n\n        reset_links $primary1_id\n    }\n\n    test \"Link memory increases with publishes\" {\n        set server_id 0\n        set server [Rn $server_id]\n        set msg_size 10000\n        set num_msgs 10\n\n        # Remove any sendbuf limit\n        $primary1 CONFIG set cluster-link-sendbuf-limit 0\n\n        # Publish ~100KB to one of the servers\n        $server MULTI\n        $server INFO memory\n        publish_messages $server $num_msgs $msg_size\n        $server INFO memory\n        set res [$server EXEC]\n\n        set link_mem_before_pubs [getInfoProperty $res mem_cluster_links]\n\n        # Remove the first half of the response string which contains the\n        # first \"INFO memory\" results and search for the property again\n        set res [string range $res [expr [string length $res] / 2] end]\n        set link_mem_after_pubs [getInfoProperty $res mem_cluster_links]\n        \n        # We expect the memory to have increased by more than\n        # the cumulative size of the publish messages\n        set mem_diff_floor [expr $msg_size * $num_msgs]\n        set mem_diff [expr $link_mem_after_pubs - $link_mem_before_pubs]\n        assert {$mem_diff > $mem_diff_floor}\n\n        # Reset links to ensure no leftover data for the next test\n        reset_links $server_id\n    }\n\n    test \"Link memory resets after publish messages flush\" {\n        set server [Rn 0]\n        set msg_size 100000\n        set num_msgs 10\n\n        set link_mem_before [status $server mem_cluster_links]\n\n        # Publish ~1MB to one of the servers\n        $server MULTI\n        publish_messages $server $num_msgs $msg_size\n        $server EXEC\n\n        # Wait until the cluster link memory has returned to below the pre-publish value.\n        # We can't guarantee it 
returns to the exact same value since gossip messages\n        # can cause the values to fluctuate.\n        wait_for_condition 1000 500 {\n            [status $server mem_cluster_links] <= $link_mem_before\n        } else {\n            fail \"Cluster link memory did not settle back to expected range\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/misc.tcl",
    "content": "start_cluster 2 2 {tags {external:skip cluster}} {\n    test {Key lazy expires during key migration} {\n        R 0 DEBUG SET-ACTIVE-EXPIRE 0\n\n        set key_slot [R 0 CLUSTER KEYSLOT FOO]\n        R 0 set FOO BAR PX 10\n        set src_id [R 0 CLUSTER MYID]\n        set trg_id [R 1 CLUSTER MYID]\n        R 0 CLUSTER SETSLOT $key_slot MIGRATING $trg_id\n        R 1 CLUSTER SETSLOT $key_slot IMPORTING $src_id\n        after 11\n        assert_error {ASK*} {R 0 GET FOO}\n        R 0 ping\n    } {PONG}\n\n    test \"Coverage: Basic cluster commands\" {\n        assert_equal {OK} [R 0 CLUSTER saveconfig]\n\n        set id [R 0 CLUSTER MYID]\n        assert_equal {0} [R 0 CLUSTER count-failure-reports $id]\n\n        R 0 flushall\n        assert_equal {OK} [R 0 CLUSTER flushslots]\n    }\n\n    test \"CROSSSLOT error for keys in different slots\" {\n        # Test MSET with keys in different slots\n        assert_error {*CROSSSLOT Keys in request don't hash to the same slot*} {R 0 MSET foo bar baz qux}\n        \n        # Test DEL with keys in different slots  \n        assert_error {*CROSSSLOT Keys in request don't hash to the same slot*} {R 0 DEL foo bar}\n        \n        # Test MGET with keys in different slots\n        assert_error {*CROSSSLOT Keys in request don't hash to the same slot*} {R 0 MGET foo bar}\n    } \n}\n"
  },
  {
    "path": "tests/unit/cluster/multi-slot-operations.tcl",
    "content": "# This test uses a custom slot allocation for testing\nproc cluster_allocate_with_continuous_slots_local {n} {\n    R 0 cluster ADDSLOTSRANGE 0 3276\n    R 1 cluster ADDSLOTSRANGE 3277 6552\n    R 2 cluster ADDSLOTSRANGE 6553 9828\n    R 3 cluster ADDSLOTSRANGE 9829 13104\n    R 4 cluster ADDSLOTSRANGE 13105 16383\n}\n\nstart_cluster 5 0 {tags {external:skip cluster}} {\n\nset master1 [srv 0 \"client\"]\nset master2 [srv -1 \"client\"]\nset master3 [srv -2 \"client\"]\nset master4 [srv -3 \"client\"]\nset master5 [srv -4 \"client\"]\n\ntest \"Continuous slots distribution\" {\n    assert_match \"* 0-3276*\" [$master1 CLUSTER NODES]\n    assert_match \"* 3277-6552*\" [$master2 CLUSTER NODES]\n    assert_match \"* 6553-9828*\" [$master3 CLUSTER NODES]\n    assert_match \"* 9829-13104*\" [$master4 CLUSTER NODES]\n    assert_match \"* 13105-16383*\" [$master5 CLUSTER NODES]\n    assert_match \"*0 3276*\" [$master1 CLUSTER SLOTS]\n    assert_match \"*3277 6552*\" [$master2 CLUSTER SLOTS]\n    assert_match \"*6553 9828*\" [$master3 CLUSTER SLOTS]\n    assert_match \"*9829 13104*\" [$master4 CLUSTER SLOTS]\n    assert_match \"*13105 16383*\" [$master5 CLUSTER SLOTS]\n\n    $master1 CLUSTER DELSLOTSRANGE 3001 3050\n    assert_match \"* 0-3000 3051-3276*\" [$master1 CLUSTER NODES]\n    assert_match \"*0 3000*3051 3276*\" [$master1 CLUSTER SLOTS]\n\n    $master2 CLUSTER DELSLOTSRANGE 5001 5500\n    assert_match \"* 3277-5000 5501-6552*\" [$master2 CLUSTER NODES]\n    assert_match \"*3277 5000*5501 6552*\" [$master2 CLUSTER SLOTS]\n\n    $master3 CLUSTER DELSLOTSRANGE 7001 7100 8001 8500\n    assert_match \"* 6553-7000 7101-8000 8501-9828*\" [$master3 CLUSTER NODES]\n    assert_match \"*6553 7000*7101 8000*8501 9828*\" [$master3 CLUSTER SLOTS]\n\n    $master4 CLUSTER DELSLOTSRANGE 11001 12000 12101 12200\n    assert_match \"* 9829-11000 12001-12100 12201-13104*\" [$master4 CLUSTER NODES]\n    assert_match \"*9829 11000*12001 12100*12201 13104*\" [$master4 
CLUSTER SLOTS]\n\n    $master5 CLUSTER DELSLOTSRANGE 13501 14000 15001 16000\n    assert_match \"* 13105-13500 14001-15000 16001-16383*\" [$master5 CLUSTER NODES]\n    assert_match \"*13105 13500*14001 15000*16001 16383*\" [$master5 CLUSTER SLOTS]\n}\n\ntest \"ADDSLOTS command with several boundary conditions test suite\" {\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTS 3001 aaa}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTS 3001 -1000}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTS 3001 30003}\n    \n    assert_error \"ERR Slot 3200 is already busy\" {R 0 cluster ADDSLOTS 3200}\n    assert_error \"ERR Slot 8501 is already busy\" {R 0 cluster ADDSLOTS 8501}\n\n    assert_error \"ERR Slot 3001 specified multiple times\" {R 0 cluster ADDSLOTS 3001 3002 3001}\n}\n\ntest \"ADDSLOTSRANGE command with several boundary conditions test suite\" {\n    # Add multiple slots with incorrect argument number\n    assert_error \"ERR wrong number of arguments for 'cluster|addslotsrange' command\" {R 0 cluster ADDSLOTSRANGE 3001 3020 3030}\n\n    # Add multiple slots with invalid input slot\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTSRANGE 3001 3020 3030 aaa}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTSRANGE 3001 3020 3030 70000}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster ADDSLOTSRANGE 3001 3020 -1000 3030}\n\n    # Add multiple slots when start slot number is greater than the end slot\n    assert_error \"ERR start slot number 3030 is greater than end slot number 3025\" {R 0 cluster ADDSLOTSRANGE 3001 3020 3030 3025}\n\n    # Add multiple slots with busy slot\n    assert_error \"ERR Slot 3200 is already busy\" {R 0 cluster ADDSLOTSRANGE 3001 3020 3200 3250}\n\n    # Add multiple slots with assigned multiple times\n    assert_error \"ERR Slot 3001 specified multiple times\" {R 0 cluster ADDSLOTSRANGE 3001 
3020 3001 3020}\n}\n\ntest \"DELSLOTSRANGE command with several boundary conditions test suite\" {\n    # Delete multiple slots with incorrect argument number\n    assert_error \"ERR wrong number of arguments for 'cluster|delslotsrange' command\" {R 0 cluster DELSLOTSRANGE 1000 2000 2100}\n    assert_match \"* 0-3000 3051-3276*\" [$master1 CLUSTER NODES]\n    assert_match \"*0 3000*3051 3276*\" [$master1 CLUSTER SLOTS]\n\n    # Delete multiple slots with invalid input slot\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster DELSLOTSRANGE 1000 2000 2100 aaa}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster DELSLOTSRANGE 1000 2000 2100 70000}\n    assert_error \"ERR Invalid or out of range slot\" {R 0 cluster DELSLOTSRANGE 1000 2000 -2100 2200}\n    assert_match \"* 0-3000 3051-3276*\" [$master1 CLUSTER NODES]\n    assert_match \"*0 3000*3051 3276*\" [$master1 CLUSTER SLOTS]\n\n    # Delete multiple slots when start slot number is greater than the end slot\n    assert_error \"ERR start slot number 5800 is greater than end slot number 5750\" {R 1 cluster DELSLOTSRANGE 5600 5700 5800 5750}\n    assert_match \"* 3277-5000 5501-6552*\" [$master2 CLUSTER NODES]\n    assert_match \"*3277 5000*5501 6552*\" [$master2 CLUSTER SLOTS]\n\n    # Delete multiple slots with already unassigned\n    assert_error \"ERR Slot 7001 is already unassigned\" {R 2 cluster DELSLOTSRANGE 7001 7100 9000 9200}\n    assert_match \"* 6553-7000 7101-8000 8501-9828*\" [$master3 CLUSTER NODES]\n    assert_match \"*6553 7000*7101 8000*8501 9828*\" [$master3 CLUSTER SLOTS]\n\n    # Delete multiple slots with assigned multiple times\n    assert_error \"ERR Slot 12500 specified multiple times\" {R 3 cluster DELSLOTSRANGE 12500 12600 12500 12600}\n    assert_match \"* 9829-11000 12001-12100 12201-13104*\" [$master4 CLUSTER NODES]\n    assert_match \"*9829 11000*12001 12100*12201 13104*\" [$master4 CLUSTER SLOTS]\n}\n} 
cluster_allocate_with_continuous_slots_local\n\nstart_cluster 2 0 {tags {external:skip cluster experimental}} {\n\nset master1 [srv 0 \"client\"]\nset master2 [srv -1 \"client\"]\n\ntest \"SFLUSH - Errors and output validation\" {\n    assert_match \"* 0-8191*\" [$master1 CLUSTER NODES]\n    assert_match \"* 8192-16383*\" [$master2 CLUSTER NODES]\n    assert_match \"*0 8191*\" [$master1 CLUSTER SLOTS]\n    assert_match \"*8192 16383*\" [$master2 CLUSTER SLOTS]\n\n    # make master1 non-continuous slots\n    $master1 cluster DELSLOTSRANGE 1000 2000\n\n    # Test SFLUSH errors validation\n    assert_error {ERR wrong number of arguments*}           {$master1 SFLUSH 4}\n    assert_error {ERR wrong number of arguments*}           {$master1 SFLUSH 4 SYNC}\n    assert_error {ERR Invalid or out of range slot}         {$master1 SFLUSH x 4}\n    assert_error {ERR Invalid or out of range slot}         {$master1 SFLUSH 0 12x}\n    assert_error {ERR Slot 3 specified multiple times}      {$master1 SFLUSH 2 4 3 5}\n    assert_error {ERR start slot number 8 is greater than*} {$master1 SFLUSH 8 4}\n    assert_error {ERR wrong number of arguments*}           {$master1 SFLUSH 4 8 10}\n    assert_error {ERR wrong number of arguments*}           {$master1 SFLUSH 0 999 2001 8191 ASYNCX}\n\n    # Test SFLUSH output validation\n    assert_match \"{2 4}\" [$master1 SFLUSH 2 4]\n    assert_match \"{0 4}\" [$master1 SFLUSH 0 4]\n    assert_match \"\" [$master2 SFLUSH 0 4]\n    assert_match \"{1 999} {2001 8191}\" [$master1 SFLUSH 1 8191]\n    assert_match \"{0 999} {2001 8190}\" [$master1 SFLUSH 0 8190]\n    assert_match \"{0 998} {2001 8191}\" [$master1 SFLUSH 0 998 2001 8191]\n    assert_match \"{1 999} {2001 8191}\" [$master1 SFLUSH 1 999 2001 8191]\n    assert_match \"{0 999} {2001 8190}\" [$master1 SFLUSH 0 999 2001 8190]\n    assert_match \"{0 999} {2002 8191}\" [$master1 SFLUSH 0 999 2002 8191]\n    assert_match \"{0 999} {2001 8191}\" [$master1 SFLUSH 0 999 2001 8191]\n    
assert_match \"{0 999} {2001 8191}\" [$master1 SFLUSH 0 8191]\n    assert_match \"{0 999} {2001 8191}\" [$master1 SFLUSH 0 4000 4001 8191]\n    assert_match \"{8193 16383}\" [$master2 SFLUSH 8193 16383]\n    assert_match \"{8192 16382}\" [$master2 SFLUSH 8192 16382]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 16383]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 16383 SYNC]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 16383 ASYNC]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 9000 9001 16383]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 9000 9001 16383 SYNC]\n    assert_match \"{8192 16383}\" [$master2 SFLUSH 8192 9000 9001 16383 ASYNC]\n\n    # restore master1 continuous slots\n    $master1 cluster ADDSLOTSRANGE 1000 2000\n}\n\ntest \"SFLUSH - Deletes the keys with argument <NONE>/SYNC/ASYNC\" {\n    foreach op {\"\" \"SYNC\" \"ASYNC\"} {\n        for {set i 0} {$i < 100} {incr i} {\n            catch {$master1 SET key$i val$i}\n            catch {$master2 SET key$i val$i}\n        }\n\n        assert {[$master1 DBSIZE] > 0}\n        assert {[$master2 DBSIZE] > 0}\n        if {$op eq \"\"} {\n            assert_match \"{0 8191}\" [ $master1 SFLUSH 0 8191]\n        } else {\n            assert_match \"{0 8191}\" [ $master1 SFLUSH 0 8191 $op]\n        }\n        assert {[$master1 DBSIZE] == 0}\n        assert {[$master2 DBSIZE] > 0}\n        assert_match \"{8192 16383}\" [ $master2 SFLUSH 8192 16383]\n        assert {[$master2 DBSIZE] == 0}\n    }\n}\n\n}\n\nset testmodule [file normalize tests/modules/atomicslotmigration.so]\nstart_cluster 2 2 [list tags {external:skip cluster experimental modules} config_lines [list loadmodule $testmodule]] {\nforeach sync_method {\"SYNC\" \"BLOCKING-ASYNC\" \"ASYNC\"} {\nforeach trim_method {\"active\" \"bg\"} {\n    test \"sflush can propagate to replicas (sync method: $sync_method, trim method: $trim_method)\" {\n        R 0 flushall\n        R 0 debug asm-trim-method 
$trim_method\n        R 2 debug asm-trim-method $trim_method\n\n        # Add keys in master\n        R 0 set \"06S\" \"slot0\"\n        wait_for_ofs_sync [Rn 0] [Rn 2]\n\n        set loglines [count_log_lines 0]\n        set loglines2 [count_log_lines -2]\n\n        # since we have an optimization, if the master is not running in a blocking context,\n        # we will try to run in blocking ASYNC mode, so we need to use MULTI/EXEC to make it blocking\n        if {$sync_method eq \"SYNC\"} {\n            R 0 MULTI\n        }\n\n        # Execute SFLUSH on master, SYNC will be run as blocking ASYNC if not running in MULTI/EXEC\n        set sync_option \"SYNC\"\n        if {$sync_method eq \"ASYNC\"} {\n            set sync_option \"ASYNC\"\n        }\n        R 0 SFLUSH 0 8190 $sync_option\n\n        # Execute EXEC if using SYNC\n        if {$sync_method eq \"SYNC\"} {\n            R 0 EXEC\n        }\n\n        # Wait for SFLUSH to propagate to the replica and complete the trim\n        wait_for_condition 1000 10 {\n           [R 0 DBSIZE] == 0 && [R 2 DBSIZE] == 0\n        } else {\n            fail \"SFLUSH did not propagate to replica\"\n        }\n\n        if {$sync_method ne \"SYNC\"} {\n            if {$trim_method eq \"active\"} {\n                wait_for_log_messages 0 {\"*Active trim completed for slots*0-8190*\"} $loglines 1000 10\n                wait_for_log_messages -2 {\"*Active trim completed for slots*0-8190*\"} $loglines2 1000 10\n            } else {\n                # background trim\n                wait_for_log_messages 0 {\"*Background trim started for slots*0-8190*\"} $loglines 1000 10\n                wait_for_log_messages -2 {\"*Background trim started for slots*0-8190*\"} $loglines2 1000 10\n            }\n        }\n    }\n}\n}\n    test \"Canceling active trimming can unblock sflush\" {\n        # Delay active trim to make sure it is not completed before FLUSHDB\n        R 0 debug asm-trim-method active 10000 ;# delay 10ms per key\n        # 
Add slot 0 keys in master\n        for {set i 0} {$i < 1000} {incr i} {\n            R 0 set \"{06S}$i\" \"value$i\"\n        }\n\n        set rd [redis_deferring_client 0]\n        $rd SFLUSH 0 8190 SYNC ;# running in blocking async method\n\n        # FLUSHDB will cancel all trim jobs\n        R 0 SELECT 0\n        R 0 FLUSHDB SYNC\n\n        # SFLUSH should be unblocked and return the flushed slot range\n        assert_equal [$rd read] \"{0 8190}\"\n        $rd close\n    }\n\n    test \"Write is rejected and read is allowed in SFLUSH slots using active trim\" {\n        R 0 debug asm-trim-method active 1000 ;# delay 1ms per key\n        R 0 asm.clear_event_log\n        R 2 asm.clear_event_log\n\n        # Add slot 0 keys\n        for {set i 0} {$i < 1000} {incr i} {\n            R 0 set \"{06S}$i\" \"value$i\"\n        }\n        # Add a slot 1 key, we should trim slot 0 first, then slot 1\n        set slot1_key \"Qi\"\n        R 0 set $slot1_key \"slot1\"\n        wait_for_ofs_sync [Rn 0] [Rn 2]\n\n        set rd [redis_deferring_client 0]\n        $rd SFLUSH 0 8190 SYNC ;# running in blocking async method\n\n        # we can read the slot 1 key\n        assert_equal [R 0 get $slot1_key] \"slot1\"\n        # A module with the flag REDISMODULE_OPEN_KEY_ACCESS_TRIMMED can also read the key\n        assert_equal [R 0 asm.read_pending_trim_key $slot1_key] \"slot1\"\n\n        # we cannot write to the slot 1 key\n        assert_error \"*TRYAGAIN Slot is being trimmed*\" {R 0 set $slot1_key \"value1\"}\n\n        # wait for SFLUSH to complete\n        assert_equal [$rd read] \"{0 8190}\"\n        $rd close\n\n        # there is no trim event since we sflush the owned slots of this node\n        assert_equal [R 0 asm.get_cluster_event_log] {}\n        assert_equal [R 2 asm.get_cluster_event_log] {}\n    }\n\n    test \"SFLUSH all local slots uses flushdb optimization (no trim)\" {\n        R 0 flushall\n        R 0 debug asm-trim-method active\n\n        # Add keys in slot 0\n        
for {set i 0} {$i < 100} {incr i} {\n            R 0 set \"{06S}$i\" \"value$i\"\n        }\n        assert {[R 0 DBSIZE] == 100}\n        wait_for_ofs_sync [Rn 0] [Rn 2]\n        assert {[R 2 DBSIZE] == 100}\n\n        set prev_trim_done [CI 0 cluster_slot_migration_stats_active_trim_completed]\n        set prev_trim_done2 [CI 2 cluster_slot_migration_stats_active_trim_completed]\n\n        # SFLUSH with multiple ranges that together cover all local slots.\n        # If the selected slots are exactly the same as the local slots, we can\n        # simply flush the entire DB.\n        assert_match \"{0 8191}\" [R 0 SFLUSH 0 1000 1001 5000 5001 8191]\n        assert {[R 0 DBSIZE] == 0}\n\n        # Verify replica is also flushed\n        wait_for_ofs_sync [Rn 0] [Rn 2]\n        assert {[R 2 DBSIZE] == 0}\n\n        # Verify active_trim_completed counter did NOT increase since it will trigger\n        # flush (similar to flushdb command) instead of triggering trim.\n        assert_equal [CI 0 cluster_slot_migration_stats_active_trim_completed] $prev_trim_done\n        assert_equal [CI 2 cluster_slot_migration_stats_active_trim_completed] $prev_trim_done2\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/scripting.tcl",
"content": "start_cluster 1 0 {tags {external:skip cluster}} {\n\n    test {Eval scripts with shebangs and functions default to no cross slots} {\n        # Test that scripts with a shebang block cross slot operations\n        assert_error \"ERR Script attempted to access keys that do not hash to the same slot*\" {\n            r 0 eval {#!lua\n                redis.call('set', 'foo', 'bar')\n                redis.call('set', 'bar', 'foo')\n                return 'OK'\n            } 0}\n\n        # Test that functions by default block cross slot operations\n        r 0 function load REPLACE {#!lua name=crossslot\n            local function test_cross_slot(keys, args)\n                redis.call('set', 'foo', 'bar')\n                redis.call('set', 'bar', 'foo')\n                return 'OK'\n            end\n\n            redis.register_function('test_cross_slot', test_cross_slot)}\n        assert_error \"ERR Script attempted to access keys that do not hash to the same slot*\" {r FCALL test_cross_slot 0}\n    }\n\n    test {Cross slot commands are allowed by default for eval scripts and with allow-cross-slot-keys flag} {\n        # Old style lua scripts are allowed to perform cross slot operations\n        r 0 eval \"redis.call('set', 'foo', 'bar'); redis.call('set', 'bar', 'foo')\" 0\n\n        # scripts with allow-cross-slot-keys flag are allowed\n        r 0 eval {#!lua flags=allow-cross-slot-keys\n            redis.call('set', 'foo', 'bar'); redis.call('set', 'bar', 'foo')\n        } 0\n\n        # Retrieve data from a different slot to verify data has been stored in the correct dictionary in cluster-enabled setup\n        # during cross-slot operation from the above lua script.\n        assert_equal \"bar\" [r 0 get foo]\n        assert_equal \"foo\" [r 0 get bar]\n        r 0 del foo\n        r 0 del bar\n\n        # Functions with allow-cross-slot-keys flag are allowed\n        r 0 function load REPLACE {#!lua name=crossslot\n            local function 
test_cross_slot(keys, args)\n                redis.call('set', 'foo', 'bar')\n                redis.call('set', 'bar', 'foo')\n                return 'OK'\n            end\n\n            redis.register_function{function_name='test_cross_slot', callback=test_cross_slot, flags={ 'allow-cross-slot-keys' }}}\n        r FCALL test_cross_slot 0\n\n        # Retrieve data from a different slot to verify data has been stored in the correct dictionary in cluster-enabled setup\n        # during cross-slot operation from the above lua function.\n        assert_equal \"bar\" [r 0 get foo]\n        assert_equal \"foo\" [r 0 get bar]\n    }\n\n    test {Cross slot commands are also blocked if they disagree with pre-declared keys} {\n        assert_error \"ERR Script attempted to access keys that do not hash to the same slot*\" {\n            r 0 eval {#!lua\n                redis.call('set', 'foo', 'bar')\n                return 'OK'\n            } 1 bar}\n    }\n\n    test {Cross slot commands are allowed by default if they disagree with pre-declared keys} {\n        r 0 flushall\n        r 0 eval \"redis.call('set', 'foo', 'bar')\" 1 bar\n\n        # Make sure the script writes to the right slot\n        assert_equal 1 [r 0 cluster COUNTKEYSINSLOT 12182] ;# foo slot\n        assert_equal 0 [r 0 cluster COUNTKEYSINSLOT 5061] ;# bar slot\n    }\n\n    test \"Function no-cluster flag\" {\n        R 0 function load {#!lua name=test\n            redis.register_function{function_name='f1', callback=function() return 'hello' end, flags={'no-cluster'}}\n        }\n        catch {R 0 fcall f1 0} e\n        assert_match {*Can not run script on cluster, 'no-cluster' flag is set*} $e\n    }\n\n    test \"Script no-cluster flag\" {\n        catch {\n            R 0 eval {#!lua flags=no-cluster\n                return 1\n            } 0\n        } e\n\n        assert_match {*Can not run script on cluster, 'no-cluster' flag is set*} $e\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/sharded-pubsub.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nstart_cluster 1 1 {tags {external:skip cluster}} {\n    set primary_id 0\n    set replica1_id 1\n\n    set primary [Rn $primary_id]\n    set replica [Rn $replica1_id]\n\n    test \"Sharded pubsub publish behavior within multi/exec\" {\n        foreach {node} {primary replica} {\n            set node [set $node]\n            $node MULTI\n            $node SPUBLISH ch1 \"hello\"\n            $node EXEC\n        }\n    }\n\n    test \"Sharded pubsub within multi/exec with cross slot operation\" {\n        $primary MULTI\n        $primary SPUBLISH ch1 \"hello\"\n        $primary GET foo\n        catch {$primary EXEC} err\n        assert_match {CROSSSLOT*} $err\n    }\n\n    test \"Sharded pubsub publish behavior within multi/exec with read operation on primary\" {\n        $primary MULTI\n        $primary SPUBLISH foo \"hello\"\n        $primary GET foo\n        $primary EXEC\n    } {0 {}}\n\n    test \"Sharded pubsub publish behavior within multi/exec with read operation on replica\" {\n        $replica MULTI\n        $replica SPUBLISH foo \"hello\"\n        catch {[$replica GET foo]} err\n        assert_match {MOVED*} $err\n        catch {[$replica EXEC]} err\n        assert_match {EXECABORT*} $err\n    }\n\n    test \"Sharded pubsub publish behavior within multi/exec with write operation on primary\" {\n        $primary MULTI\n        $primary SPUBLISH foo \"hello\"\n        $primary SET foo bar\n        $primary EXEC\n    } {0 OK}\n\n    test \"Sharded pubsub publish behavior within multi/exec with write operation on replica\" {\n        $replica MULTI\n        $replica 
SPUBLISH foo \"hello\"\n        catch {[$replica SET foo bar]} err\n        assert_match {MOVED*} $err\n        catch {[$replica EXEC]} err\n        assert_match {EXECABORT*} $err\n    }\n\n    # Regression: shard channel slot must not follow getKeySlot() current_client\n    # cache when CLIENT KILL runs inside another client's EXEC (pubsubUnsubscribeChannel).\n    test {Shard pubsub: CLIENT KILL subscriber inside MULTI/EXEC (cross-slot)} {\n        # SET fixes the transaction client's slot to keyk's slot; the subscriber must\n        # use a shard channel in a different slot so a wrong-slot lookup would fail.\n        set keyk \"{06S}k\"\n        set channel \"{Qi}ch\"\n        assert {[R 0 cluster keyslot $channel] != [R 0 cluster keyslot $keyk]}\n\n        set rd_sub [redis_deferring_client]\n        $rd_sub client id\n        set cid [$rd_sub read]\n        $rd_sub ssubscribe $channel\n        $rd_sub read\n\n        $primary multi\n        $primary set $keyk v\n        $primary client kill id $cid\n        set got [$primary exec]\n\n        assert_equal {OK 1} $got\n        assert_equal PONG [$primary ping]\n\n        catch {$rd_sub read}\n        $rd_sub close\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/slot-ownership.tcl",
"content": "start_cluster 2 2 {tags {external:skip cluster}} {\n\n    test \"Verify that slot ownership transfer through gossip propagates deletes to replicas\" {\n        assert {[s -2 role] eq {slave}}\n        wait_for_condition 1000 50 {\n            [s -2 master_link_status] eq {up}\n        } else {\n            fail \"Instance #2 master link status is not up\"\n        }\n\n        assert {[s -3 role] eq {slave}}\n        wait_for_condition 1000 50 {\n            [s -3 master_link_status] eq {up}\n        } else {\n            fail \"Instance #3 master link status is not up\"\n        }\n\n        # Set a single key that will be used to test deletion\n        set key \"FOO\"\n        R 0 SET $key TEST\n        set key_slot [R 0 cluster keyslot $key]\n        set slot_keys_num [R 0 cluster countkeysinslot $key_slot]\n        assert {$slot_keys_num > 0}\n\n        # Wait for replica to have the key\n        R 2 readonly\n        wait_for_condition 1000 50 {\n            [R 2 exists $key] eq \"1\"\n        } else {\n            fail \"Test key was not replicated\"\n        }\n\n        assert_equal [R 2 cluster countkeysinslot $key_slot] $slot_keys_num\n\n        # Assert other shards in the cluster don't have the key\n        assert_equal [R 1 cluster countkeysinslot $key_slot] \"0\"\n        assert_equal [R 3 cluster countkeysinslot $key_slot] \"0\"\n\n        set nodeid [R 1 cluster myid]\n\n        R 1 cluster bumpepoch\n        # Move $key_slot to node 1\n        assert_equal [R 1 cluster setslot $key_slot node $nodeid] \"OK\"\n\n        wait_for_cluster_propagation\n\n        # src master will delete keys in the slot\n        wait_for_condition 50 100 {\n            [R 0 cluster countkeysinslot $key_slot] eq 0\n        } else {\n            fail \"master 'countkeysinslot $key_slot' did not eq 0\"\n        }\n\n        # src replica will delete keys in the slot\n        wait_for_condition 50 100 {\n            [R 2 cluster countkeysinslot $key_slot] eq 
0\n        } else {\n            fail \"replica 'countkeysinslot $key_slot' did not eq 0\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/cluster/slot-stats.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n# Integration tests for CLUSTER SLOT-STATS command.\n\n# -----------------------------------------------------------------------------\n# Helper functions for CLUSTER SLOT-STATS test cases.\n# -----------------------------------------------------------------------------\n\n# Converts array RESP response into a dict.\n# This is useful for many test cases, where unnecessary nesting is removed.\nproc convert_array_into_dict {slot_stats} {\n    set res [dict create]\n    foreach slot_stat $slot_stats {\n        # slot_stat is an array of size 2, where 0th index represents (int) slot,\n        # and 1st index represents (map) usage statistics.\n        dict set res [lindex $slot_stat 0] [lindex $slot_stat 1]\n    }\n    return $res\n}\n\nproc get_cmdstat_usec {cmd r} {\n    set cmdstatline [cmdrstat $cmd r]\n    regexp \"usec=(.*?),usec_per_call=(.*?),rejected_calls=0,failed_calls=0\" $cmdstatline -> usec _\n    return $usec\n}\n\nproc initialize_expected_slots_dict {} {\n    set expected_slots [dict create]\n    for {set i 0} {$i < 16384} {incr i 1} {\n        dict set expected_slots $i 0\n    }\n    return $expected_slots\n}\n\nproc initialize_expected_slots_dict_with_range {start_slot end_slot} {\n    assert {$start_slot <= $end_slot}\n    set expected_slots [dict create]\n    for {set i $start_slot} {$i <= $end_slot} {incr i 1} {\n        dict set expected_slots $i 0\n    }\n    return $expected_slots\n}\n\nproc assert_empty_slot_stats {slot_stats metrics_to_assert} {\n    set slot_stats 
[convert_array_into_dict $slot_stats]\n    dict for {slot stats} $slot_stats {\n        foreach metric_name $metrics_to_assert {\n            set metric_value [dict get $stats $metric_name]\n            assert {$metric_value == 0}\n        }\n    }\n}\n\nproc assert_empty_slot_stats_with_exception {slot_stats exception_slots metrics_to_assert} {\n    set slot_stats [convert_array_into_dict $slot_stats]\n    dict for {slot stats} $exception_slots {\n        assert {[dict exists $slot_stats $slot]} ;# slot_stats must contain the expected slots.\n    }\n    dict for {slot stats} $slot_stats {\n        if {[dict exists $exception_slots $slot]} {\n            foreach metric_name $metrics_to_assert {\n                set metric_value [dict get $exception_slots $slot $metric_name]\n                assert {[dict get $stats $metric_name] == $metric_value}\n            }\n        } else {\n            dict for {metric value} $stats {\n                assert {$value == 0}\n            }\n        }\n    }\n}\n\nproc assert_equal_slot_stats {slot_stats_1 slot_stats_2 deterministic_metrics non_deterministic_metrics} {\n    set slot_stats_1 [convert_array_into_dict $slot_stats_1]\n    set slot_stats_2 [convert_array_into_dict $slot_stats_2]\n    assert {[dict size $slot_stats_1] == [dict size $slot_stats_2]}\n\n    dict for {slot stats_1} $slot_stats_1 {\n        assert {[dict exists $slot_stats_2 $slot]}\n        set stats_2 [dict get $slot_stats_2 $slot]\n\n        # For deterministic metrics, we assert their equality.\n        foreach metric $deterministic_metrics {\n            assert {[dict get $stats_1 $metric] == [dict get $stats_2 $metric]}\n        }\n        # For non-deterministic metrics, we assert their non-zeroness as a best-effort.\n        foreach metric $non_deterministic_metrics {\n            assert {([dict get $stats_1 $metric] == 0 && [dict get $stats_2 $metric] == 0) || \\\n                    ([dict get $stats_1 $metric] != 0 && [dict get $stats_2 $metric] 
!= 0)}\n        }\n    }\n}\n\nproc assert_all_slots_have_been_seen {expected_slots} {\n    dict for {k v} $expected_slots {\n        assert {$v == 1}\n    }\n}\n\nproc assert_slot_visibility {slot_stats expected_slots} {\n    set slot_stats [convert_array_into_dict $slot_stats]\n    dict for {slot _} $slot_stats {\n        assert {[dict exists $expected_slots $slot]}\n        dict set expected_slots $slot 1\n    }\n\n    assert_all_slots_have_been_seen $expected_slots\n}\n\nproc assert_slot_stats_monotonic_order {slot_stats orderby is_desc} {\n    # For Tcl dict, the order of iteration is the order in which the keys were inserted into the dictionary\n    # Thus, the response ordering is preserved upon calling 'convert_array_into_dict()'.\n    # Source: https://www.tcl.tk/man/tcl8.6.11/TclCmd/dict.htm\n    set slot_stats [convert_array_into_dict $slot_stats]\n    set prev_metric -1\n    dict for {_ stats} $slot_stats {\n        set curr_metric [dict get $stats $orderby]\n        if {$prev_metric != -1} {\n            if {$is_desc == 1} {\n                assert {$prev_metric >= $curr_metric}\n            } else {\n                assert {$prev_metric <= $curr_metric}\n            }\n        }\n        set prev_metric $curr_metric\n    }\n}\n\nproc assert_slot_stats_monotonic_descent {slot_stats orderby} {\n    assert_slot_stats_monotonic_order $slot_stats $orderby 1\n}\n\nproc assert_slot_stats_monotonic_ascent {slot_stats orderby} {\n    assert_slot_stats_monotonic_order $slot_stats $orderby 0\n}\n\nproc wait_for_replica_key_exists {key key_count} {\n    wait_for_condition 1000 50 {\n        [R 1 exists $key] eq \"$key_count\"\n    } else {\n        fail \"Test key was not replicated\"\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS cpu-usec metric correctness.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip 
cluster} overrides {cluster-slot-stats-enabled yes}} {\n\n    # Define shared variables.\n    set key \"FOO\"\n    set key_slot [R 0 cluster keyslot $key]\n    set key_secondary \"FOO2\"\n    set key_secondary_slot [R 0 cluster keyslot $key_secondary]\n    set metrics_to_assert [list cpu-usec]\n\n    test \"CLUSTER SLOT-STATS cpu-usec reset upon CONFIG RESETSTAT.\" {\n        R 0 SET $key VALUE\n        R 0 DEL $key\n        R 0 CONFIG RESETSTAT\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec reset upon slot migration.\" {\n        R 0 SET $key VALUE\n\n        R 0 CLUSTER DELSLOTS $key_slot\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n\n        R 0 CLUSTER ADDSLOTS $key_slot\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for non-slot specific commands.\" {\n        R 0 INFO\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for slot specific commands.\" {\n        R 0 SET $key VALUE\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set usec [get_cmdstat_usec set r]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec $usec\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for blocking commands, unblocked on 
keyspace update.\" {\n        # Blocking command with no timeout. Only keyspace update can unblock this client.\n        set rd [redis_deferring_client]\n        $rd BLPOP $key 0\n        wait_for_blocked_clients_count 1\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        # When the client is blocked, no accumulation is made. This behaviour is identical to INFO COMMANDSTATS.\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n\n        # Unblocking command.\n        R 0 LPUSH $key value\n        wait_for_blocked_clients_count 0\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set lpush_usec [get_cmdstat_usec lpush r]\n        set blpop_usec [get_cmdstat_usec blpop r]\n\n        # Assert that both blocking and non-blocking command times have been accumulated.\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec [expr $lpush_usec + $blpop_usec]\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for blocking commands, unblocked on timeout.\" {\n        # Blocking command with 0.5 seconds timeout.\n        set rd [redis_deferring_client]\n        $rd BLPOP $key 0.5\n\n        # Confirm that the client is blocked, then unblocked within 1 second.\n        wait_for_blocked_clients_count 1\n        wait_for_blocked_clients_count 0\n\n        # Assert that the blocking command time has been accumulated.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set blpop_usec [get_cmdstat_usec blpop r]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec $blpop_usec\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG 
RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for transactions.\" {\n        set r1 [redis_client]\n        $r1 MULTI\n        $r1 SET $key value\n        $r1 GET $key\n\n        # CPU metric is not accumulated until EXEC is reached. This behaviour is identical to INFO COMMANDSTATS.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n\n        # Execute transaction, and assert that all nested command times have been accumulated.\n        $r1 EXEC\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set exec_usec [get_cmdstat_usec exec r]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec $exec_usec\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for lua-scripts, without cross-slot keys.\" {\n        R 0 eval {#!lua\n            redis.call('set', KEYS[1], 'bar') redis.call('get', KEYS[2])\n        } 2 $key $key\n\n        set eval_usec [get_cmdstat_usec eval r]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec $eval_usec\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for lua-scripts, with cross-slot keys.\" {\n        R 0 eval {#!lua flags=allow-cross-slot-keys\n            redis.call('set', KEYS[1], 'bar') redis.call('get', ARGV[1])\n        } 1 $key $key_secondary\n\n        # For cross-slot, we do not accumulate at all.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        
assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for functions, without cross-slot keys.\" {\n        R 0 function load replace {#!lua name=f1\n            redis.register_function{\n                function_name='f1',\n                callback=function(keys, args) redis.call('set', keys[1], '1') redis.call('get', keys[2]) end\n            }\n        }\n        R 0 fcall f1 2 $key $key\n\n        set fcall_usec [get_cmdstat_usec fcall r]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create cpu-usec $fcall_usec\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS cpu-usec for functions, with cross-slot keys.\" {\n        R 0 function load replace {#!lua name=f1\n            redis.register_function{\n                function_name='f1',\n                callback=function(keys, args) redis.call('set', keys[1], '1') redis.call('get', args[1]) end,\n                flags={'allow-cross-slot-keys'}\n            }\n        }\n        R 0 fcall f1 1 $key $key_secondary\n\n        # For cross-slot, we do not accumulate at all.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS network-bytes-in.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n\n    # Define shared variables.\n    set key \"key\"\n    set key_slot [R 0 cluster 
keyslot $key]\n    set metrics_to_assert [list network-bytes-in]\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, multi bulk buffer processing.\" {\n        # *3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 33 bytes.\n        R 0 SET $key value\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-in 33\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, in-line buffer processing.\" {\n        set rd [redis_deferring_client]\n        # SET key value\\r\\n --> 15 bytes.\n        $rd write \"SET $key value\\r\\n\"\n        $rd flush\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-in 15\n            ]\n        ]\n\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, blocking command.\" {\n        set rd [redis_deferring_client]\n        # *3\\r\\n$5\\r\\nblpop\\r\\n$3\\r\\nkey\\r\\n$1\\r\\n0\\r\\n --> 31 bytes.\n        $rd BLPOP $key 0\n        wait_for_blocked_clients_count 1\n\n        # Slot-stats must be empty here, as the client is yet to be unblocked.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n\n        # *3\\r\\n$5\\r\\nlpush\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 35 bytes.\n        R 0 LPUSH $key value\n        wait_for_blocked_clients_count 0\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            
dict create $key_slot [\n                dict create network-bytes-in 66 ;# 31 + 35 bytes.\n            ]\n        ]\n\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, multi-exec transaction.\" {\n        set r [redis_client]\n        # *1\\r\\n$5\\r\\nmulti\\r\\n --> 15 bytes.\n        $r MULTI\n        # *3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 33 bytes.\n        assert {[$r SET $key value] eq {QUEUED}}\n        # *1\\r\\n$4\\r\\nexec\\r\\n --> 14 bytes.\n        assert {[$r EXEC] eq {OK}}\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-in 62 ;# 15 + 33 + 14 bytes.\n            ]\n        ]\n\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, non slot specific command.\" {\n        R 0 INFO\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, pub/sub.\" {\n        # PUB/SUB does not get accumulated at per-slot basis,\n        # as it is cluster-wide and is not slot specific.\n        set rd [redis_deferring_client]\n        $rd subscribe channel\n        R 0 publish channel message\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n}\n\nstart_cluster 1 1 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n    set channel \"channel\"\n    set key_slot [R 0 cluster 
keyslot $channel]\n    set metrics_to_assert [list network-bytes-in]\n\n    # Setup replication.\n    assert {[s -1 role] eq {slave}}\n    wait_for_condition 1000 50 {\n        [s -1 master_link_status] eq {up}\n    } else {\n        fail \"Instance #1 master link status is not up\"\n    }\n    R 1 readonly\n\n    test \"CLUSTER SLOT-STATS network-bytes-in, sharded pub/sub.\" {\n        set slot [R 0 cluster keyslot $channel]\n        set primary [Rn 0]\n        set replica [Rn 1]\n        set replica_subscriber [redis_deferring_client -1]\n        $replica_subscriber SSUBSCRIBE $channel\n        # *2\\r\\n$10\\r\\nssubscribe\\r\\n$7\\r\\nchannel\\r\\n --> 34 bytes.\n        $primary SPUBLISH $channel hello\n        # *3\\r\\n$8\\r\\nspublish\\r\\n$7\\r\\nchannel\\r\\n$5\\r\\nhello\\r\\n --> 42 bytes.\n\n        set slot_stats [$primary CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-in 42\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n\n        set slot_stats [$replica CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-in 34\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS network-bytes-out correctness.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster}} {\n    # Define shared variables.\n    set key \"FOO\"\n    set key_slot [R 0 cluster keyslot $key]\n    set expected_slots_to_key_count [dict create $key_slot 1]\n    set metrics_to_assert [list 
network-bytes-out]\n    R 0 CONFIG SET cluster-slot-stats-enabled yes\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, for non-slot specific commands.\" {\n        R 0 INFO\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, for slot specific commands.\" {\n        R 0 SET $key value\n        # +OK\\r\\n --> 5 bytes\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-out 5\n            ]\n        ]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, blocking commands.\" {\n        set rd [redis_deferring_client]\n        $rd BLPOP $key 0\n        wait_for_blocked_clients_count 1\n\n        # Assert empty slot stats here, since COB is yet to be flushed due to the block.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n\n        # Unblock the command.\n        # LPUSH client) :1\\r\\n --> 4 bytes.\n        # BLPOP client) *2\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 24 bytes, upon unblocking.\n        R 0 LPUSH $key value\n        wait_for_blocked_clients_count 0\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-out 28 ;# 4 + 24 bytes.\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n    R 0 CONFIG RESETSTAT\n    R 0 FLUSHALL\n}\n\nstart_cluster 1 1 {tags {external:skip cluster}} {\n\n    # Define shared 
variables.\n    set key \"FOO\"\n    set key_slot [R 0 CLUSTER KEYSLOT $key]\n    set metrics_to_assert [list network-bytes-out]\n    R 0 CONFIG SET cluster-slot-stats-enabled yes\n\n    # Setup replication.\n    assert {[s -1 role] eq {slave}}\n    wait_for_condition 1000 50 {\n        [s -1 master_link_status] eq {up}\n    } else {\n        fail \"Instance #1 master link status is not up\"\n    }\n    R 1 readonly\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, replication stream egress.\" {\n        assert_equal [R 0 SET $key VALUE] {OK}\n        # Local client) +OK\\r\\n --> 5 bytes.\n        # Replication stream) *3\\r\\n$3\\r\\nSET\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 33 bytes.\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-out 38 ;# 5 + 33 bytes.\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n}\n\nstart_cluster 1 1 {tags {external:skip cluster}} {\n\n    # Define shared variables.\n    set channel \"channel\"\n    set key_slot [R 0 cluster keyslot $channel]\n    set channel_secondary \"channel2\"\n    set key_slot_secondary [R 0 cluster keyslot $channel_secondary]\n    set metrics_to_assert [list network-bytes-out]\n    R 0 CONFIG SET cluster-slot-stats-enabled yes\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, sharded pub/sub, single channel.\" {\n        set slot [R 0 cluster keyslot $channel]\n        set publisher [Rn 0]\n        set subscriber [redis_client]\n        set replica [redis_deferring_client -1]\n\n        # Subscriber client) *3\\r\\n$10\\r\\nssubscribe\\r\\n$7\\r\\nchannel\\r\\n:1\\r\\n --> 38 bytes\n        $subscriber SSUBSCRIBE $channel\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict 
create network-bytes-out 38\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n        R 0 CONFIG RESETSTAT\n\n        # Publisher client) :1\\r\\n --> 4 bytes.\n        # Subscriber client) *3\\r\\n$8\\r\\nsmessage\\r\\n$7\\r\\nchannel\\r\\n$5\\r\\nhello\\r\\n --> 42 bytes.\n        assert_equal 1 [$publisher SPUBLISH $channel hello]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create network-bytes-out 46 ;# 4 + 42 bytes.\n            ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n        $subscriber QUIT\n    }\n    R 0 FLUSHALL\n    R 0 CONFIG RESETSTAT\n\n    test \"CLUSTER SLOT-STATS network-bytes-out, sharded pub/sub, cross-slot channels.\" {\n        set slot [R 0 cluster keyslot $channel]\n        set publisher [Rn 0]\n        set subscriber [redis_client]\n        set replica [redis_deferring_client -1]\n\n        # Stack multi-slot subscriptions against a single client.\n        # For primary channel;\n        # Subscriber client) *3\\r\\n$10\\r\\nssubscribe\\r\\n$7\\r\\nchannel\\r\\n:1\\r\\n --> 38 bytes\n        # For secondary channel;\n        # Subscriber client) *3\\r\\n$10\\r\\nssubscribe\\r\\n$8\\r\\nchannel2\\r\\n:1\\r\\n --> 39 bytes\n        $subscriber SSUBSCRIBE $channel\n        $subscriber SSUBSCRIBE $channel_secondary\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create \\\n                $key_slot [ \\\n                    dict create network-bytes-out 38\n                ] \\\n                $key_slot_secondary [ \\\n                    dict create network-bytes-out 39\n                ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n        R 0 CONFIG RESETSTAT\n\n        # For primary channel;\n        # Publisher client) :1\\r\\n --> 4 bytes.\n        # Subscriber client) 
*3\\r\\n$8\\r\\nsmessage\\r\\n$7\\r\\nchannel\\r\\n$5\\r\\nhello\\r\\n --> 42 bytes.\n        # For secondary channel;\n        # Publisher client) :1\\r\\n --> 4 bytes.\n        # Subscriber client) *3\\r\\n$8\\r\\nsmessage\\r\\n$8\\r\\nchannel2\\r\\n$5\\r\\nhello\\r\\n --> 43 bytes.\n        assert_equal 1 [$publisher SPUBLISH $channel hello]\n        assert_equal 1 [$publisher SPUBLISH $channel_secondary hello]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set expected_slot_stats [\n            dict create \\\n                $key_slot [ \\\n                    dict create network-bytes-out 46 ;# 4 + 42 bytes.\n                ] \\\n                $key_slot_secondary [ \\\n                    dict create network-bytes-out 47 ;# 4 + 43 bytes.\n                ]\n        ]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS key-count metric correctness.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n\n    # Define shared variables.\n    set key \"FOO\"\n    set key_slot [R 0 cluster keyslot $key]\n    set metrics_to_assert [list key-count]\n    set expected_slot_stats [\n        dict create $key_slot [\n            dict create key-count 1\n        ]\n    ]\n\n    test \"CLUSTER SLOT-STATS contains default value upon redis-server startup\" {\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n\n    test \"CLUSTER SLOT-STATS contains correct metrics upon key introduction\" {\n        R 0 SET $key TEST\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats 
$expected_slot_stats $metrics_to_assert\n    }\n\n    test \"CLUSTER SLOT-STATS contains correct metrics upon key mutation\" {\n        R 0 SET $key NEW_VALUE\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n\n    test \"CLUSTER SLOT-STATS contains correct metrics upon key deletion\" {\n        R 0 DEL $key\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats $slot_stats $metrics_to_assert\n    }\n\n    test \"CLUSTER SLOT-STATS slot visibility based on slot ownership changes\" {\n        R 0 CONFIG SET cluster-require-full-coverage no\n\n        R 0 CLUSTER DELSLOTS $key_slot\n        set expected_slots [initialize_expected_slots_dict]\n        dict unset expected_slots $key_slot\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert {[dict size $expected_slots] == 16383}\n        assert_slot_visibility $slot_stats $expected_slots\n\n        R 0 CLUSTER ADDSLOTS $key_slot\n        set expected_slots [initialize_expected_slots_dict]\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert {[dict size $expected_slots] == 16384}\n        assert_slot_visibility $slot_stats $expected_slots\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS SLOTSRANGE sub-argument.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster}} {\n\n    test \"CLUSTER SLOT-STATS SLOTSRANGE all slots present\" {\n        set start_slot 100\n        set end_slot 102\n        set expected_slots [initialize_expected_slots_dict_with_range $start_slot $end_slot]\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE $start_slot $end_slot]\n        assert_slot_visibility $slot_stats $expected_slots\n    }\n\n    
test \"CLUSTER SLOT-STATS SLOTSRANGE some slots missing\" {\n        set start_slot 100\n        set end_slot 102\n        set expected_slots [initialize_expected_slots_dict_with_range $start_slot $end_slot]\n\n        R 0 CLUSTER DELSLOTS $start_slot\n        dict unset expected_slots $start_slot\n\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE $start_slot $end_slot]\n        assert_slot_visibility $slot_stats $expected_slots\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS ORDERBY sub-argument.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n\n    set metrics [list \"key-count\" \"memory-bytes\" \"cpu-usec\" \"network-bytes-in\" \"network-bytes-out\"]\n\n    # SET keys for target hashslots, to encourage ordering.\n    set hash_tags [list 0 1 2 3 4]\n    set num_keys 1\n    foreach hash_tag $hash_tags {\n        for {set i 0} {$i < $num_keys} {incr i 1} {\n            R 0 SET \"$i{$hash_tag}\" VALUE\n        }\n        incr num_keys 1\n    }\n\n    # SET keys for random hashslots, for random noise.\n    set num_keys 0\n    while {$num_keys < 1000} {\n        set random_key [randomInt 16384]\n        R 0 SET $random_key VALUE\n        incr num_keys 1\n    }\n\n    test \"CLUSTER SLOT-STATS ORDERBY DESC correct ordering\" {\n        foreach orderby $metrics {\n            set slot_stats [R 0 CLUSTER SLOT-STATS ORDERBY $orderby DESC]\n            assert_slot_stats_monotonic_descent $slot_stats $orderby\n        }\n    }\n\n    test \"CLUSTER SLOT-STATS ORDERBY ASC correct ordering\" {\n        foreach orderby $metrics {\n            set slot_stats [R 0 CLUSTER SLOT-STATS ORDERBY $orderby ASC]\n            assert_slot_stats_monotonic_ascent $slot_stats $orderby\n        }\n    }\n\n    test \"CLUSTER SLOT-STATS ORDERBY LIMIT correct response 
pagination, where limit is less than number of assigned slots\" {\n        R 0 FLUSHALL SYNC\n        R 0 CONFIG RESETSTAT\n\n        foreach orderby $metrics {\n            set limit 5\n            set slot_stats_desc [R 0 CLUSTER SLOT-STATS ORDERBY $orderby LIMIT $limit DESC]\n            set slot_stats_asc [R 0 CLUSTER SLOT-STATS ORDERBY $orderby LIMIT $limit ASC]\n            set slot_stats_desc_length [llength $slot_stats_desc]\n            set slot_stats_asc_length [llength $slot_stats_asc]\n            assert {$limit == $slot_stats_desc_length && $limit == $slot_stats_asc_length}\n\n            # All slot statistics have been reset to 0, so we will order by slot in ascending order.\n            set expected_slots [dict create 0 0 1 0 2 0 3 0 4 0]\n            assert_slot_visibility $slot_stats_desc $expected_slots\n            assert_slot_visibility $slot_stats_asc $expected_slots\n        }\n    }\n\n    test \"CLUSTER SLOT-STATS ORDERBY LIMIT correct response pagination, where limit is greater than number of assigned slots\" {\n        R 0 CONFIG SET cluster-require-full-coverage no\n        R 0 FLUSHALL SYNC\n        R 0 CLUSTER FLUSHSLOTS\n        R 0 CLUSTER ADDSLOTS 100 101\n\n        foreach orderby $metrics {\n            set num_assigned_slots 2\n            set limit 5\n            set slot_stats_desc [R 0 CLUSTER SLOT-STATS ORDERBY $orderby LIMIT $limit DESC]\n            set slot_stats_asc [R 0 CLUSTER SLOT-STATS ORDERBY $orderby LIMIT $limit ASC]\n            set slot_stats_desc_length [llength $slot_stats_desc]\n            set slot_stats_asc_length [llength $slot_stats_asc]\n            set expected_response_length [expr min($num_assigned_slots, $limit)]\n            assert {$expected_response_length == $slot_stats_desc_length && $expected_response_length == $slot_stats_asc_length}\n\n            set expected_slots [dict create 100 0 101 0]\n            assert_slot_visibility $slot_stats_desc $expected_slots\n            assert_slot_visibility 
$slot_stats_asc $expected_slots\n        }\n    }\n\n    test \"CLUSTER SLOT-STATS ORDERBY arg sanity check.\" {\n        # Non-existent argument.\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY key-count non-existent-arg}\n        # Negative LIMIT.\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY key-count DESC LIMIT -1}\n        # Non-existent ORDERBY metric.\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY non-existent-metric}\n        # When cluster-slot-stats-enabled config is disabled, you cannot sort using advanced metrics.\n        R 0 CONFIG SET cluster-slot-stats-enabled no\n        set orderby \"cpu-usec\"\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY $orderby}\n        set orderby \"network-bytes-in\"\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY $orderby}\n        set orderby \"network-bytes-out\"\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY $orderby}\n        set orderby \"memory-bytes\"\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY $orderby}\n\n        # When only cpu net is enabled, memory-bytes ORDERBY should fail\n        R 0 CONFIG SET cluster-slot-stats-enabled \"cpu net\"\n        assert_error \"ERR*\" {R 0 CLUSTER SLOT-STATS ORDERBY memory-bytes}\n    }\n\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS replication.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 1 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n\n    # Define shared variables.\n    set key \"key\"\n    set key_slot [R 0 CLUSTER KEYSLOT $key]\n    set primary [Rn 0]\n    set replica [Rn 1]\n\n    # For replication, assertions are split between deterministic and non-deterministic metrics.\n    # * For deterministic metrics, strict equality assertions are made.\n    # * For non-deterministic metrics, non-zeroness 
assertions are made.\n    #   Non-zeroness meaning: both primary and replica should either have some value, or no value at all.\n    #\n    # * key-count is deterministic between primary and its replica.\n    # * cpu-usec is non-deterministic between primary and its replica.\n    # * network-bytes-in is deterministic between primary and its replica.\n    # * network-bytes-out will remain empty on the replica, since the replica does not send replies back to its primary, except for replicationSendAck().\n    set deterministic_metrics [list key-count network-bytes-in]\n    set non_deterministic_metrics [list cpu-usec]\n    set empty_metrics [list network-bytes-out]\n\n    # Setup replication.\n    assert {[s -1 role] eq {slave}}\n    wait_for_condition 1000 50 {\n        [s -1 master_link_status] eq {up}\n    } else {\n        fail \"Instance #1 master link status is not up\"\n    }\n    R 1 readonly\n\n    test \"CLUSTER SLOT-STATS metrics replication for new keys\" {\n        # *3\\r\\n$3\\r\\nset\\r\\n$3\\r\\nkey\\r\\n$5\\r\\nvalue\\r\\n --> 33 bytes.\n        R 0 SET $key VALUE\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create key-count 1 network-bytes-in 33\n            ]\n        ]\n        set slot_stats_master [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats_master $expected_slot_stats $deterministic_metrics\n\n        wait_for_condition 500 10 {\n            [string match {*calls=1,*} [cmdrstat set $replica]]\n        } else {\n            fail \"Replica did not receive the command.\"\n        }\n        set slot_stats_replica [R 1 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_equal_slot_stats $slot_stats_master $slot_stats_replica $deterministic_metrics $non_deterministic_metrics\n        assert_empty_slot_stats $slot_stats_replica $empty_metrics\n    }\n    R 0 CONFIG RESETSTAT\n    R 1 CONFIG RESETSTAT\n\n    test \"CLUSTER SLOT-STATS metrics replication for 
existing keys\" {\n        # *3\\r\\n$3\\r\\nset\\r\\n$3\\r\\nkey\\r\\n$13\\r\\nvalue_updated\\r\\n --> 42 bytes.\n        R 0 SET $key VALUE_UPDATED\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create key-count 1 network-bytes-in 42\n            ]\n        ]\n        set slot_stats_master [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats_master $expected_slot_stats $deterministic_metrics\n\n        wait_for_condition 500 10 {\n            [string match {*calls=1,*} [cmdrstat set $replica]]\n        } else {\n            fail \"Replica did not receive the command.\"\n        }\n        set slot_stats_replica [R 1 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_equal_slot_stats $slot_stats_master $slot_stats_replica $deterministic_metrics $non_deterministic_metrics\n        assert_empty_slot_stats $slot_stats_replica $empty_metrics\n    }\n    R 0 CONFIG RESETSTAT\n    R 1 CONFIG RESETSTAT\n\n    test \"CLUSTER SLOT-STATS metrics replication for deleting keys\" {\n        # *2\\r\\n$3\\r\\ndel\\r\\n$3\\r\\nkey\\r\\n --> 22 bytes.\n        R 0 DEL $key\n\n        set expected_slot_stats [\n            dict create $key_slot [\n                dict create key-count 0 network-bytes-in 22\n            ]\n        ]\n        set slot_stats_master [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_empty_slot_stats_with_exception $slot_stats_master $expected_slot_stats $deterministic_metrics\n\n        wait_for_condition 500 10 {\n            [string match {*calls=1,*} [cmdrstat del $replica]]\n        } else {\n            fail \"Replica did not receive the command.\"\n        }\n        set slot_stats_replica [R 1 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        assert_equal_slot_stats $slot_stats_master $slot_stats_replica $deterministic_metrics $non_deterministic_metrics\n        assert_empty_slot_stats $slot_stats_replica $empty_metrics\n    }\n    R 0 
CONFIG RESETSTAT\n    R 1 CONFIG RESETSTAT\n}\n\nstart_cluster 2 2 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n    test \"CLUSTER SLOT-STATS reset upon atomic slot migration\" {\n        # key on slot-0\n        set key0 \"{06S}mykey0\"\n        set key0_slot [R 0 CLUSTER KEYSLOT $key0]\n        R 0 SET $key0 VALUE\n\n        # Migrate slot-0 to node-1\n        R 1 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 1 cluster_slot_migration_active_tasks] == 0\n        } else {\n            fail \"ASM tasks did not complete\"\n        }\n\n        set expected_slot_stats [\n            dict create \\\n                $key0_slot [ \\\n                    dict create key-count 1 \\\n                        cpu-usec 0 \\\n                        network-bytes-in 0 \\\n                        network-bytes-out 0 \\\n                ]\n        ]\n        set metrics_to_assert [list key-count cpu-usec network-bytes-in network-bytes-out]\n\n        # Verify metrics are reset except key-count\n        set slot_stats [R 1 CLUSTER SLOT-STATS SLOTSRANGE 0 0]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n\n        # Migrate slot-0 back to node-0\n        R 0 CLUSTER MIGRATION IMPORT 0 0\n        wait_for_condition 1000 10 {\n            [CI 0 cluster_slot_migration_active_tasks] == 0 &&\n            [CI 1 cluster_slot_migration_active_tasks] == 0\n        } else {\n            fail \"ASM tasks did not complete\"\n        }\n\n        # Verify metrics are reset except key-count\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 0]\n        assert_empty_slot_stats_with_exception $slot_stats $expected_slot_stats $metrics_to_assert\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for CLUSTER SLOT-STATS 
memory-bytes field presence.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled yes}} {\n    # Define shared variables.\n    set key \"FOO\"\n    set key_slot [R 0 cluster keyslot $key]\n\n    test \"CLUSTER SLOT-STATS memory-bytes field present when cluster-slot-stats-enabled set on startup\" {\n        R 0 SET $key VALUE\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        # Verify memory-bytes field is present\n        assert {[dict exists $slot_stats $key_slot]}\n        set stats [dict get $slot_stats $key_slot]\n        assert {[dict exists $stats memory-bytes]}\n        assert {[dict get $stats memory-bytes] > 0}\n    }\n\n    test \"CLUSTER SLOT-STATS net mem combination shows only net and mem stats\" {\n        R 0 CONFIG SET cluster-slot-stats-enabled \"net mem\"\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        set stats [dict get $slot_stats $key_slot]\n        assert {[dict exists $stats memory-bytes]}\n        assert {[dict exists $stats network-bytes-in]}\n        assert {[dict exists $stats network-bytes-out]}\n        assert {![dict exists $stats cpu-usec]}\n    }\n\n    test \"CLUSTER SLOT-STATS cpu mem combination shows only cpu and mem stats\" {\n        R 0 CONFIG SET cluster-slot-stats-enabled \"cpu mem\"\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        set stats [dict get $slot_stats $key_slot]\n        assert {[dict exists $stats memory-bytes]}\n        assert {[dict exists $stats cpu-usec]}\n        assert {![dict exists $stats network-bytes-in]}\n        assert {![dict exists $stats network-bytes-out]}\n\n        # Restore to yes for subsequent tests\n      
  R 0 CONFIG SET cluster-slot-stats-enabled yes\n    }\n\n    test \"CLUSTER SLOT-STATS memory-bytes field not present after disabling cluster-slot-stats-enabled\" {\n        R 0 CONFIG SET cluster-slot-stats-enabled no\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        # Verify memory-bytes field is not present after disabling config\n        # (memory tracking is disabled when MEM flag is removed)\n        assert {[dict exists $slot_stats $key_slot]}\n        set stats [dict get $slot_stats $key_slot]\n        assert {![dict exists $stats memory-bytes]}\n\n        # Verify other stats fields are not present\n        assert {![dict exists $stats cpu-usec]}\n        assert {![dict exists $stats network-bytes-in]}\n        assert {![dict exists $stats network-bytes-out]}\n    }\n\n    test \"CLUSTER SLOT-STATS memory tracking cannot be re-enabled after being disabled\" {\n        # Once memory tracking is disabled, it cannot be re-enabled at runtime\n        assert_error \"ERR*memory tracking cannot be enabled at runtime*\" {R 0 CONFIG SET cluster-slot-stats-enabled yes}\n        assert_error \"ERR*memory tracking cannot be enabled at runtime*\" {R 0 CONFIG SET cluster-slot-stats-enabled mem}\n\n        # But cpu and net can still be enabled\n        R 0 CONFIG SET cluster-slot-stats-enabled \"cpu net\"\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        assert {[dict exists $slot_stats $key_slot]}\n        set stats [dict get $slot_stats $key_slot]\n        assert {![dict exists $stats memory-bytes]}\n        assert {[dict exists $stats cpu-usec]}\n        assert {[dict exists $stats network-bytes-in]}\n        assert {[dict exists $stats network-bytes-out]}\n    }\n}\n\nstart_cluster 1 0 {tags {external:skip cluster} overrides {cluster-slot-stats-enabled no}} {\n    # Define shared 
variables.\n    set key \"FOO\"\n    set key_slot [R 0 cluster keyslot $key]\n\n    test \"CLUSTER SLOT-STATS memory-bytes field not present when cluster-slot-stats-enabled not set on startup\" {\n        R 0 SET $key VALUE\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        # Verify memory-bytes field is not present\n        assert {[dict exists $slot_stats $key_slot]}\n        set stats [dict get $slot_stats $key_slot]\n        assert {![dict exists $stats memory-bytes]}\n\n        # Only key-count should be present\n        assert {[dict exists $stats key-count]}\n        assert {[dict get $stats key-count] == 1}\n    }\n\n    test \"CLUSTER SLOT-STATS enabling mem at runtime fails when not enabled at startup\" {\n        # Trying to enable memory tracking at runtime should fail\n        assert_error \"ERR*memory tracking cannot be enabled at runtime*\" {R 0 CONFIG SET cluster-slot-stats-enabled mem}\n        assert_error \"ERR*memory tracking cannot be enabled at runtime*\" {R 0 CONFIG SET cluster-slot-stats-enabled yes}\n        assert_error \"ERR*memory tracking cannot be enabled at runtime*\" {R 0 CONFIG SET cluster-slot-stats-enabled \"cpu net mem\"}\n    }\n\n    test \"CLUSTER SLOT-STATS enabling cpu and net at runtime works\" {\n        R 0 CONFIG SET cluster-slot-stats-enabled \"cpu net\"\n        set slot_stats [R 0 CLUSTER SLOT-STATS SLOTSRANGE 0 16383]\n        set slot_stats [convert_array_into_dict $slot_stats]\n\n        # Verify memory-bytes field is still not present\n        assert {[dict exists $slot_stats $key_slot]}\n        set stats [dict get $slot_stats $key_slot]\n        assert {![dict exists $stats memory-bytes]}\n\n        # Other stats fields should now be present\n        assert {[dict exists $stats cpu-usec]}\n        assert {[dict exists $stats network-bytes-in]}\n        assert {[dict exists $stats network-bytes-out]}\n    }\n}\n\n# 
-----------------------------------------------------------------------------\n# Test cases for memory tracking accuracy with DEBUG ALLOCSIZE-SLOTS-ASSERT.\n# These tests verify that memory accounting is correct after operations that\n# may change object encoding (e.g., listTypeTryConversion).\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 {tags {external:skip cluster needs:debug} overrides {cluster-slot-stats-enabled yes}} {\n    # Enable debug assertion that validates memory tracking after each command.\n    # This will cause a panic if tracked memory doesn't match actual memory.\n    R 0 DEBUG ALLOCSIZE-SLOTS-ASSERT 1\n\n    test \"LTRIM memory tracking with quicklist to listpack conversion\" {\n        # Use a small list-max-listpack-size to force quicklist encoding\n        set origin_conf [R 0 CONFIG GET list-max-listpack-size]\n        R 0 CONFIG SET list-max-listpack-size 3\n\n        # Create a quicklist by adding more elements than the listpack limit\n        R 0 DEL mylist{t}\n        R 0 RPUSH mylist{t} a b c d\n\n        # Verify we have a quicklist (4 elements > 3 limit)\n        assert_equal \"quicklist\" [R 0 OBJECT ENCODING mylist{t}]\n\n        # LTRIM to reduce elements - this triggers listTypeTryConversion(LIST_CONV_SHRINKING)\n        # which may convert quicklist back to listpack. The bug was that memory was tracked\n        # BEFORE the conversion, causing a mismatch.\n        R 0 LTRIM mylist{t} 0 0\n\n        # Verify conversion happened\n        assert_equal \"listpack\" [R 0 OBJECT ENCODING mylist{t}]\n\n        # The DEBUG ALLOCSIZE-SLOTS-ASSERT will have already panicked if memory\n        # tracking was wrong. 
If we reach here, the test passed.\n        assert_equal \"a\" [R 0 LRANGE mylist{t} 0 -1]\n\n        R 0 CONFIG SET list-max-listpack-size [lindex $origin_conf 1]\n    }\n\n    test \"LREM memory tracking with quicklist to listpack conversion\" {\n        # Use a small list-max-listpack-size to force quicklist encoding\n        set origin_conf [R 0 CONFIG GET list-max-listpack-size]\n        R 0 CONFIG SET list-max-listpack-size 3\n\n        # Create a quicklist by adding more elements than the listpack limit\n        # Use 3 'a' elements so we can remove them to trigger conversion (same as list.tcl test)\n        R 0 DEL mylist{t}\n        R 0 RPUSH mylist{t} a a a d\n\n        # Verify we have a quicklist (4 elements > 3 limit)\n        assert_equal \"quicklist\" [R 0 OBJECT ENCODING mylist{t}]\n\n        # LREM to remove 3 'a' elements - this triggers listTypeTryConversion(LIST_CONV_SHRINKING)\n        # which may convert quicklist back to listpack. The bug was that memory was tracked\n        # BEFORE the conversion, causing a mismatch.\n        R 0 LREM mylist{t} 3 a\n\n        # Verify conversion happened (1 element <= 3 limit)\n        assert_equal \"listpack\" [R 0 OBJECT ENCODING mylist{t}]\n\n        # The DEBUG ALLOCSIZE-SLOTS-ASSERT will have already panicked if memory\n        # tracking was wrong. If we reach here, the test passed.\n        assert_equal \"d\" [R 0 LRANGE mylist{t} 0 -1]\n\n        R 0 CONFIG SET list-max-listpack-size [lindex $origin_conf 1]\n    }\n}\n"
  },
  {
    "path": "tests/unit/dump.tcl",
    "content": "start_server {tags {\"dump\"}} {\n    test {DUMP / RESTORE are able to serialize / unserialize a simple key} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        list [r exists foo] [r restore foo 0 $encoded] [r ttl foo] [r get foo]\n    } {0 OK -1 bar}\n\n    test {RESTORE can set an arbitrary expire to the materialized key} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        r restore foo 5000 $encoded\n        set ttl [r pttl foo]\n        assert_range $ttl 3000 5000\n        r get foo\n    } {bar}\n\n    test {RESTORE can set an expire that overflows a 32 bit integer} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        r restore foo 2569591501 $encoded\n        set ttl [r pttl foo]\n        assert_range $ttl (2569591501-3000) 2569591501\n        r get foo\n    } {bar}\n    \n    test {RESTORE can set an absolute expire} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        set now [clock milliseconds]\n        r restore foo [expr $now+3000] $encoded absttl\n        set ttl [r pttl foo]\n        assert_range $ttl 2000 3100\n        r get foo\n    } {bar}\n\n    test {RESTORE with ABSTTL in the past} {\n        r set foo bar\n        set encoded [r dump foo]\n        set now [clock milliseconds]\n        r debug set-active-expire 0\n        set expiredkeys [s expired_keys]\n        r restore foo [expr $now-3000] $encoded absttl REPLACE\n        catch {r debug object foo} e\n        r debug set-active-expire 1\n        # Verify that expired_keys was incremented, even though\n        # the key was not added to the DB actually.\n        assert_equal [expr $expiredkeys + 1] [s expired_keys]\n        set e\n    } {ERR no such key} {needs:debug}\n\n    test {RESTORE can set LRU} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        r config set maxmemory-policy allkeys-lru\n        r 
restore foo 0 $encoded idletime 1000\n        set idle [r object idletime foo]\n        assert {$idle >= 1000 && $idle <= 1010}\n        assert_equal [r get foo] {bar}\n        r config set maxmemory-policy noeviction\n    } {OK} {needs:config-maxmemory}\n    \n    test {RESTORE with TTL maintains a valid object} {\n        # RESTORE creates a string with TTL in two steps. The second step potentially \n        # reallocates the object. Access the object and verify it is not corrupted\n        r del foo\n        r set foo bar\n        set encoded [r dump foo]\n        # Iterate several times and verify it is consistent\n        for {set i 0} {$i < 100} {incr i} {\n            r del foo\n            r restore foo 1000 $encoded IDLETIME 500\n            assert_equal [r get foo] {bar}\n        }\n    }\n\n    test {RESTORE can set LFU} {\n        r set foo bar\n        set encoded [r dump foo]\n        r del foo\n        r config set maxmemory-policy allkeys-lfu\n        r restore foo 0 $encoded freq 100\n\n        # We need to determine whether the `object` operation happens within the same minute or crosses into a new one\n        # This will help us verify if the freq remains 100 or decays due to a minute transition\n        set start [clock format [clock seconds] -format %M]\n        set freq [r object freq foo]\n        set end [clock format [clock seconds] -format %M]\n\n        if { $start == $end } {\n            # If the minutes haven't changed (i.e., the restore and object happened within the same minute),\n            # the freq should remain 100 as no decay has occurred yet.\n            assert {$freq == 100}\n        } else {\n            # If the object operation crosses into a new minute, freq may have already decayed by 1 (99),\n            # or it may still be 100 if the minute update hasn't been applied yet when the operation is performed.\n            # The decay might only take effect after the operation completes and the minute is updated.\n           
 assert {($freq == 100) || ($freq == 99)}\n        }\n\n        r get foo\n        assert_equal [r get foo] {bar}\n        r config set maxmemory-policy noeviction\n    } {OK} {needs:config-maxmemory}\n\n    test {RESTORE returns an error if the key already exists} {\n        r set foo bar\n        set e {}\n        catch {r restore foo 0 \"...\"} e\n        set e\n    } {*BUSYKEY*}\n\n    test {RESTORE can overwrite an existing key with REPLACE} {\n        r set foo bar1\n        set encoded1 [r dump foo]\n        r set foo bar2\n        set encoded2 [r dump foo]\n        r del foo\n        r restore foo 0 $encoded1\n        r restore foo 0 $encoded2 replace\n        r get foo\n    } {bar2}\n\n    test {RESTORE can detect a syntax error for unrecognized options} {\n        catch {r restore foo 0 \"...\" invalid-option} e\n        set e\n    } {*syntax*}\n\n    test {RESTORE should not store keys that are already expired, with REPLACE it will propagate them as DEL or UNLINK} {\n        r del key1{t} key2{t}\n        r set key1{t} value2\n        r lpush key2{t} 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65\n\n        r set key{t} value\n        set encoded [r dump key{t}]\n        set now [clock milliseconds]\n\n        set repl [attach_to_replication_stream]\n\n        # Keys that have expired will not be stored.\n        r config set lazyfree-lazy-server-del no\n        assert_equal {OK} [r restore key1{t} [expr $now-5000] $encoded replace absttl]\n        r config set lazyfree-lazy-server-del yes\n        assert_equal {OK} [r restore key2{t} [expr $now-5000] $encoded replace absttl]\n        assert_equal {0} [r exists key1{t} key2{t}]\n\n        # Verify the propagation of DEL and UNLINK.\n        assert_replication_stream $repl {\n            {select *}\n            {del key1{t}}\n            {unlink key2{t}}\n        }\n\n        
close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {DUMP of non existing key returns nil} {\n        r dump nonexisting_key\n    } {}\n\n    test {MIGRATE is caching connections} {\n        # Note, we run this as first test so that the connection cache\n        # is empty.\n        set first [srv 0 client]\n        r set key \"Some Value\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert_match {*migrate_cached_sockets:0*} [r -1 info]\n            r -1 migrate $second_host $second_port key 9 1000\n            assert_match {*migrate_cached_sockets:1*} [r -1 info]\n        }\n    } {} {external:skip}\n\n    test {MIGRATE cached connections are released after some time} {\n        after 15000\n        assert_match {*migrate_cached_sockets:0*} [r info]\n    }\n\n    test {MIGRATE is able to migrate a key between two instances} {\n        set first [srv 0 client]\n        r set key \"Some Value\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key] == 1}\n            assert {[$second exists key] == 0}\n            set ret [r -1 migrate $second_host $second_port key 9 5000]\n            assert {$ret eq {OK}}\n            assert {[$first exists key] == 0}\n            assert {[$second exists key] == 1}\n            assert {[$second get key] eq {Some Value}}\n            assert {[$second ttl key] == -1}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE is able to copy a key between two instances} {\n        set first [srv 0 client]\n        r del list\n        r lpush list a b c d\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert 
{[$first exists list] == 1}\n            assert {[$second exists list] == 0}\n            set ret [r -1 migrate $second_host $second_port list 9 5000 copy]\n            assert {$ret eq {OK}}\n            assert {[$first exists list] == 1}\n            assert {[$second exists list] == 1}\n            assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE will not overwrite existing keys, unless REPLACE is used} {\n        set first [srv 0 client]\n        r del list\n        r lpush list a b c d\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists list] == 1}\n            assert {[$second exists list] == 0}\n            $second set list somevalue\n            catch {r -1 migrate $second_host $second_port list 9 5000 copy} e\n            assert_match {ERR*} $e\n            set ret [r -1 migrate $second_host $second_port list 9 5000 copy replace]\n            assert {$ret eq {OK}}\n            assert {[$first exists list] == 1}\n            assert {[$second exists list] == 1}\n            assert {[$first lrange list 0 -1] eq [$second lrange list 0 -1]}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE propagates TTL correctly} {\n        set first [srv 0 client]\n        r set key \"Some Value\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key] == 1}\n            assert {[$second exists key] == 0}\n            $first expire key 10\n            set ret [r -1 migrate $second_host $second_port key 9 5000]\n            assert {$ret eq {OK}}\n            assert {[$first exists key] == 0}\n            assert {[$second exists key] == 1}\n            assert {[$second get key] eq {Some Value}}\n      
      assert {[$second ttl key] >= 7 && [$second ttl key] <= 10}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE can correctly transfer large values} {\n        set first [srv 0 client]\n        r del key\n        for {set j 0} {$j < 40000} {incr j} {\n            r rpush key 1 2 3 4 5 6 7 8 9 10\n            r rpush key \"item 1\" \"item 2\" \"item 3\" \"item 4\" \"item 5\" \\\n                        \"item 6\" \"item 7\" \"item 8\" \"item 9\" \"item 10\"\n        }\n        assert {[string length [r dump key]] > (1024*64)}\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key] == 1}\n            assert {[$second exists key] == 0}\n            set ret [r -1 migrate $second_host $second_port key 9 10000]\n            assert {$ret eq {OK}}\n            assert {[$first exists key] == 0}\n            assert {[$second exists key] == 1}\n            assert {[$second ttl key] == -1}\n            assert {[$second llen key] == 40000*20}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE can correctly transfer hashes} {\n        set first [srv 0 client]\n        r del key\n        r hmset key field1 \"item 1\" field2 \"item 2\" field3 \"item 3\" \\\n                    field4 \"item 4\" field5 \"item 5\" field6 \"item 6\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key] == 1}\n            assert {[$second exists key] == 0}\n            set ret [r -1 migrate $second_host $second_port key 9 10000]\n            assert {$ret eq {OK}}\n            assert {[$first exists key] == 0}\n            assert {[$second exists key] == 1}\n            assert {[$second ttl key] == -1}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE timeout actually works} 
{\n        set first [srv 0 client]\n        r set key \"Some Value\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key] == 1}\n            assert {[$second exists key] == 0}\n\n            set rd [redis_deferring_client]\n            $rd debug sleep 1.0 ; # Make second server unable to reply.\n            set e {}\n            catch {r -1 migrate $second_host $second_port key 9 500} e\n            assert_match {IOERR*} $e\n        }\n    } {} {external:skip}\n\n    test {MIGRATE can migrate multiple keys at once} {\n        set first [srv 0 client]\n        r set key1 \"v1\"\n        r set key2 \"v2\"\n        r set key3 \"v3\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            assert {[$first exists key1] == 1}\n            assert {[$second exists key1] == 0}\n            set ret [r -1 migrate $second_host $second_port \"\" 9 5000 keys key1 key2 key3]\n            assert {$ret eq {OK}}\n            assert {[$first exists key1] == 0}\n            assert {[$first exists key2] == 0}\n            assert {[$first exists key3] == 0}\n            assert {[$second get key1] eq {v1}}\n            assert {[$second get key2] eq {v2}}\n            assert {[$second get key3] eq {v3}}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE with multiple keys must have empty key arg} {\n        catch {r MIGRATE 127.0.0.1 6379 NotEmpty 9 5000 keys a b c} e\n        set e\n    } {*empty string*} {external:skip}\n\n    test {MIGRATE with multiple keys migrate just existing ones} {\n        set first [srv 0 client]\n        r set key1 \"v1\"\n        r set key2 \"v2\"\n        r set key3 \"v3\"\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set 
second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            set ret [r -1 migrate $second_host $second_port \"\" 9 5000 keys nokey-1 nokey-2 nokey-2]\n            assert {$ret eq {NOKEY}}\n\n            assert {[$first exists key1] == 1}\n            assert {[$second exists key1] == 0}\n            set ret [r -1 migrate $second_host $second_port \"\" 9 5000 keys nokey-1 key1 nokey-2 key2 nokey-3 key3]\n            assert {$ret eq {OK}}\n            assert {[$first exists key1] == 0}\n            assert {[$first exists key2] == 0}\n            assert {[$first exists key3] == 0}\n            assert {[$second get key1] eq {v1}}\n            assert {[$second get key2] eq {v2}}\n            assert {[$second get key3] eq {v3}}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE with multiple keys: stress command rewriting} {\n        set first [srv 0 client]\n        r flushdb\n        r mset a 1 b 2 c 3 d 4 c 5 e 6 f 7 g 8 h 9 i 10 l 11 m 12 n 13 o 14 p 15 q 16\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            set ret [r -1 migrate $second_host $second_port \"\" 9 5000 keys a b c d e f g h i l m n o p q]\n\n            assert {[$first dbsize] == 0}\n            assert {[$second dbsize] == 15}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE with multiple keys: delete just acked keys} {\n        set first [srv 0 client]\n        r flushdb\n        r mset a 1 b 2 c 3 d 4 c 5 e 6 f 7 g 8 h 9 i 10 l 11 m 12 n 13 o 14 p 15 q 16\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n\n            $second mset c _ d _; # Two busy keys and no REPLACE used\n\n            catch {r -1 migrate $second_host $second_port \"\" 9 5000 keys a b c d e f g h i l m n o p q} e\n\n            assert {[$first dbsize] == 
2}\n            assert {[$second dbsize] == 15}\n            assert {[$first exists c] == 1}\n            assert {[$first exists d] == 1}\n        }\n    } {} {external:skip}\n\n    test {MIGRATE AUTH: correct and wrong password cases} {\n        set first [srv 0 client]\n        r del list\n        r lpush list a b c d\n        start_server {tags {\"repl\"}} {\n            set second [srv 0 client]\n            set second_host [srv 0 host]\n            set second_port [srv 0 port]\n            $second config set requirepass foobar\n            $second auth foobar\n\n            assert {[$first exists list] == 1}\n            assert {[$second exists list] == 0}\n            set ret [r -1 migrate $second_host $second_port list 9 5000 AUTH foobar]\n            assert {$ret eq {OK}}\n            assert {[$second exists list] == 1}\n            assert {[$second lrange list 0 -1] eq {d c b a}}\n\n            r -1 lpush list a b c d\n            $second config set requirepass foobar2\n            catch {r -1 migrate $second_host $second_port list 9 5000 AUTH foobar} err\n            assert_match {*WRONGPASS*} $err\n        }\n    } {} {external:skip}\n}\n"
  },
  {
    "path": "tests/unit/expire.tcl",
    "content": "start_server {tags {\"expire\"}} {\n    test {EXPIRE - set timeouts multiple times} {\n        r set x foobar\n        set v1 [r expire x 5]\n        set v2 [r ttl x]\n        set v3 [r expire x 10]\n        set v4 [r ttl x]\n        r expire x 2\n        list $v1 $v2 $v3 $v4\n    } {1 [45] 1 10}\n\n    test {EXPIRE - It should still be possible to read 'x'} {\n        r get x\n    } {foobar}\n\n    tags {\"slow\"} {\n        test {EXPIRE - After 2.1 seconds the key should no longer be here} {\n            after 2100\n            list [r get x] [r exists x]\n        } {{} 0}\n    }\n\n    test {EXPIRE - write on expire should work} {\n        r del x\n        r lpush x foo\n        r expire x 1000\n        r lpush x bar\n        r lrange x 0 -1\n    } {bar foo}\n\n    test {EXPIREAT - Check for EXPIRE-like behavior} {\n        r del x\n        r set x foo\n        r expireat x [expr [clock seconds]+15]\n        r ttl x\n    } {1[345]}\n\n    test {SETEX - Set + Expire combo operation. 
Check for TTL} {\n        r setex x 12 test\n        r ttl x\n    } {1[012]}\n\n    test {SETEX - Check value} {\n        r get x\n    } {test}\n\n    test {SETEX - Overwrite old key} {\n        r setex y 1 foo\n        r get y\n    } {foo}\n\n    tags {\"slow\"} {\n        test {SETEX - Wait for the key to expire} {\n            after 1100\n            r get y\n        } {}\n    }\n\n    test {SETEX - Wrong time parameter} {\n        catch {r setex z -10 foo} e\n        set _ $e\n    } {*invalid expire*}\n\n    test {PERSIST can undo an EXPIRE} {\n        r set x foo\n        r expire x 50\n        list [r ttl x] [r persist x] [r ttl x] [r get x]\n    } {50 1 -1 foo}\n\n    test {PERSIST returns 0 against non existing or non volatile keys} {\n        r set x foo\n        list [r persist foo] [r persist nokeyatall]\n    } {0 0}\n\n    test {EXPIRE precision is now the millisecond} {\n        # This test is very likely to do a false positive if the\n        # server is under pressure, so if it does not work give it a few more\n        # chances.\n        for {set j 0} {$j < 30} {incr j} {\n            r del x\n            r setex x 1 somevalue\n            after 800\n            set a [r get x]\n            if {$a ne {somevalue}} continue\n            after 300\n            set b [r get x]\n            if {$b eq {}} break\n        }\n        if {$::verbose} {\n            puts \"millisecond expire test attempts: $j\"\n        }\n        assert_equal $a {somevalue}\n        assert_equal $b {}\n    }\n\n    test \"PSETEX can set sub-second expires\" {\n        # This test is very likely to do a false positive if the server is\n        # under pressure, so if it does not work give it a few more chances.\n        for {set j 0} {$j < 50} {incr j} {\n            r del x\n            r psetex x 100 somevalue\n            set a [r get x]\n            after 101\n            set b [r get x]\n\n            if {$a eq {somevalue} && $b eq {}} break\n        }\n        if 
{$::verbose} { puts \"PSETEX sub-second expire test attempts: $j\" }\n        list $a $b\n    } {somevalue {}}\n\n    test \"PEXPIRE can set sub-second expires\" {\n        # This test is very likely to do a false positive if the server is\n        # under pressure, so if it does not work give it a few more chances.\n        for {set j 0} {$j < 50} {incr j} {\n            r set x somevalue\n            r pexpire x 100\n            set c [r get x]\n            after 101\n            set d [r get x]\n\n            if {$c eq {somevalue} && $d eq {}} break\n        }\n        if {$::verbose} { puts \"PEXPIRE sub-second expire test attempts: $j\" }\n        list $c $d\n    } {somevalue {}}\n\n    test \"PEXPIREAT can set sub-second expires\" {\n        # This test is very likely to do a false positive if the server is\n        # under pressure, so if it does not work give it a few more chances.\n        for {set j 0} {$j < 50} {incr j} {\n            r set x somevalue\n            set now [r time]\n            r pexpireat x [expr ([lindex $now 0]*1000)+([lindex $now 1]/1000)+200]\n            set e [r get x]\n            after 201\n            set f [r get x]\n\n            if {$e eq {somevalue} && $f eq {}} break\n        }\n        if {$::verbose} { puts \"PEXPIREAT sub-second expire test attempts: $j\" }\n        list $e $f\n    } {somevalue {}}\n\n    test {TTL returns time to live in seconds} {\n        r del x\n        r setex x 10 somevalue\n        set ttl [r ttl x]\n        assert {$ttl > 8 && $ttl <= 10}\n    }\n\n    test {PTTL returns time to live in milliseconds} {\n        r del x\n        r setex x 1 somevalue\n        set ttl [r pttl x]\n        assert {$ttl > 500 && $ttl <= 1000}\n    }\n\n    test {TTL / PTTL / EXPIRETIME / PEXPIRETIME return -1 if key has no expire} {\n        r del x\n        r set x hello\n        list [r ttl x] [r pttl x] [r expiretime x] [r pexpiretime x]\n    } {-1 -1 -1 -1}\n\n    test {TTL / PTTL / EXPIRETIME / PEXPIRETIME 
return -2 if key does not exist} {\n        r del x\n        list [r ttl x] [r pttl x] [r expiretime x] [r pexpiretime x]\n    } {-2 -2 -2 -2}\n\n    test {EXPIRETIME returns absolute expiration time in seconds} {\n        r del x\n        set abs_expire [expr [clock seconds] + 100]\n        r set x somevalue exat $abs_expire\n        assert_equal [r expiretime x] $abs_expire\n    }\n\n    test {PEXPIRETIME returns absolute expiration time in milliseconds} {\n        r del x\n        set abs_expire [expr [clock milliseconds] + 100000]\n        r set x somevalue pxat $abs_expire\n        assert_equal [r pexpiretime x] $abs_expire\n    }\n\n    test {Redis should actively expire keys incrementally} {\n        r flushdb\n        r psetex key1 500 a\n        r psetex key2 500 a\n        r psetex key3 500 a\n        assert_equal 3 [r dbsize]\n        # Redis expires random keys ten times every second so we are\n        # fairly sure that all three keys should be evicted after\n        # two seconds.\n        wait_for_condition 20 100 {\n            [r dbsize] eq 0\n        } else {\n            fail \"Keys did not actively expire.\"\n        }\n    }\n\n    test {Redis should lazily expire keys} {\n        r flushdb\n        r debug set-active-expire 0\n        r psetex key1{t} 500 a\n        r psetex key2{t} 500 a\n        r psetex key3{t} 500 a\n        set size1 [r dbsize]\n        # Redis expires random keys ten times every second so we are\n        # fairly sure that all three keys should be evicted after\n        # one second.\n        after 1000\n        set size2 [r dbsize]\n        r mget key1{t} key2{t} key3{t}\n        set size3 [r dbsize]\n        r debug set-active-expire 1\n        list $size1 $size2 $size3\n    } {3 3 0} {needs:debug}\n\n    test {EXPIRE should not resurrect keys (issue #1026)} {\n        r debug set-active-expire 0\n        r set foo bar\n        r pexpire foo 500\n        after 1000\n        r expire foo 10\n        r debug 
set-active-expire 1\n        r exists foo\n    } {0} {needs:debug}\n\n    test {5 keys in, 5 keys out} {\n        r flushdb\n        r set a c\n        r expire a 5\n        r set t c\n        r set e c\n        r set s c\n        r set foo b\n        assert_equal [lsort [r keys *]] {a e foo s t}\n        r del a ; # Do not leak volatile keys to other tests\n    }\n\n    test {EXPIRE with empty string as TTL should report an error} {\n        r set foo bar\n        catch {r expire foo \"\"} e\n        set e\n    } {*not an integer*}\n\n    test {SET with EX with big integer should report an error} {\n        catch {r set foo bar EX 10000000000000000} e\n        set e\n    } {ERR invalid expire time in 'set' command}\n\n    test {SET with EX with smallest integer should report an error} {\n        catch {r SET foo bar EX -9999999999999999} e\n        set e\n    } {ERR invalid expire time in 'set' command}\n\n    test {GETEX with big integer should report an error} {\n        r set foo bar\n        catch {r GETEX foo EX 10000000000000000} e\n        set e\n    } {ERR invalid expire time in 'getex' command}\n\n    test {GETEX with smallest integer should report an error} {\n        r set foo bar\n        catch {r GETEX foo EX -9999999999999999} e\n        set e\n    } {ERR invalid expire time in 'getex' command}\n\n    test {EXPIRE with big integer overflows when converted to milliseconds} {\n        r set foo bar\n\n        # Hit `when > LLONG_MAX - basetime`\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo 9223370399119966}\n\n        # Hit `when > LLONG_MAX / 1000`\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo 9223372036854776}\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo 10000000000000000}\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo 18446744073709561}\n\n        assert_equal {-1} [r ttl foo]\n    }\n\n    test 
{PEXPIRE with big integer overflow when basetime is added} {\n        r set foo bar\n        catch {r PEXPIRE foo 9223372036854770000} e\n        set e\n    } {ERR invalid expire time in 'pexpire' command}\n\n    test {EXPIRE with big negative integer} {\n        r set foo bar\n\n        # Hit `when < LLONG_MIN / 1000`\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo -9223372036854776}\n        assert_error \"ERR invalid expire time in 'expire' command\" {r EXPIRE foo -9999999999999999}\n\n        r ttl foo\n    } {-1}\n\n    test {PEXPIREAT with big integer works} {\n        r set foo bar\n        r PEXPIREAT foo 9223372036854770000\n    } {1}\n\n    test {PEXPIREAT with big negative integer works} {\n        r set foo bar\n        r PEXPIREAT foo -9223372036854770000\n        r ttl foo\n    } {-2}\n\n    # Start a new server with empty data and AOF file.\n    start_server {overrides {appendonly {yes} appendfsync always} tags {external:skip}} {\n        test {All time-to-live(TTL) in commands are propagated as absolute timestamp in milliseconds in AOF} {\n            # This test makes sure that expire times are propagated as absolute\n            # times to the AOF file and not as relative time, so that when the AOF\n            # is reloaded the TTLs are not being shifted forward to the future.\n            # We want the time to logically pass when the server is restarted!\n\n            set aof [get_last_incr_aof_path r]\n\n            # Apply each TTL-related command to a unique key\n            # SET commands\n            r set foo1 bar ex 100\n            r set foo2 bar px 100000\n            r set foo3 bar exat [expr [clock seconds]+100]\n            r set foo4 bar PXAT [expr [clock milliseconds]+100000]\n            r setex foo5 100 bar\n            r psetex foo6 100000 bar\n            # EXPIRE-family commands\n            r set foo7 bar\n            r expire foo7 100\n            r set foo8 bar\n            r pexpire 
foo8 100000\n            r set foo9 bar\n            r expireat foo9 [expr [clock seconds]+100]\n            r set foo10 bar\n            r pexpireat foo10 [expr [clock seconds]*1000+100000]\n            r set foo11 bar\n            r expireat foo11 [expr [clock seconds]-100]\n            # GETEX commands\n            r set foo12 bar\n            r getex foo12 ex 100\n            r set foo13 bar\n            r getex foo13 px 100000\n            r set foo14 bar\n            r getex foo14 exat [expr [clock seconds]+100]\n            r set foo15 bar\n            r getex foo15 pxat [expr [clock milliseconds]+100000]\n            # RESTORE commands\n            r set foo16 bar\n            set encoded [r dump foo16]\n            r restore foo17 100000 $encoded\n            r restore foo18 [expr [clock milliseconds]+100000] $encoded absttl\n\n            # Assert that each TTL-related command are persisted with absolute timestamps in AOF\n            assert_aof_content $aof {\n                {select *}\n                {set foo1 bar PXAT *}\n                {set foo2 bar PXAT *}\n                {set foo3 bar PXAT *}\n                {set foo4 bar PXAT *}\n                {set foo5 bar PXAT *}\n                {set foo6 bar PXAT *}\n                {set foo7 bar}\n                {pexpireat foo7 *}\n                {set foo8 bar}\n                {pexpireat foo8 *}\n                {set foo9 bar}\n                {pexpireat foo9 *}\n                {set foo10 bar}\n                {pexpireat foo10 *}\n                {set foo11 bar}\n                {del foo11}\n                {set foo12 bar}\n                {pexpireat foo12 *}\n                {set foo13 bar}\n                {pexpireat foo13 *}\n                {set foo14 bar}\n                {pexpireat foo14 *}\n                {set foo15 bar}\n                {pexpireat foo15 *}\n                {set foo16 bar}\n                {restore foo17 * * ABSTTL}\n                {restore foo18 * * absttl}\n            
}\n\n            # Remember the absolute TTLs of all the keys\n            set ttl1 [r pexpiretime foo1]\n            set ttl2 [r pexpiretime foo2]\n            set ttl3 [r pexpiretime foo3]\n            set ttl4 [r pexpiretime foo4]\n            set ttl5 [r pexpiretime foo5]\n            set ttl6 [r pexpiretime foo6]\n            set ttl7 [r pexpiretime foo7]\n            set ttl8 [r pexpiretime foo8]\n            set ttl9 [r pexpiretime foo9]\n            set ttl10 [r pexpiretime foo10]\n            assert_equal \"-2\" [r pexpiretime foo11] ; # foo11 is gone\n            set ttl12 [r pexpiretime foo12]\n            set ttl13 [r pexpiretime foo13]\n            set ttl14 [r pexpiretime foo14]\n            set ttl15 [r pexpiretime foo15]\n            assert_equal \"-1\" [r pexpiretime foo16] ; # foo16 has no TTL\n            set ttl17 [r pexpiretime foo17]\n            set ttl18 [r pexpiretime foo18]\n\n            # Let some time pass and reload data from AOF\n            after 2000\n            r debug loadaof\n\n            # Assert that relative TTLs are roughly the same\n            assert_range [r ttl foo1] 90 98\n            assert_range [r ttl foo2] 90 98\n            assert_range [r ttl foo3] 90 98\n            assert_range [r ttl foo4] 90 98\n            assert_range [r ttl foo5] 90 98\n            assert_range [r ttl foo6] 90 98\n            assert_range [r ttl foo7] 90 98\n            assert_range [r ttl foo8] 90 98\n            assert_range [r ttl foo9] 90 98\n            assert_range [r ttl foo10] 90 98\n            assert_equal [r ttl foo11] \"-2\" ; # foo11 is gone\n            assert_range [r ttl foo12] 90 98\n            assert_range [r ttl foo13] 90 98\n            assert_range [r ttl foo14] 90 98\n            assert_range [r ttl foo15] 90 98\n            assert_equal [r ttl foo16] \"-1\" ; # foo16 has no TTL\n            assert_range [r ttl foo17] 90 98\n            assert_range [r ttl foo18] 90 98\n\n            # Assert that all keys have 
restored the same absolute TTLs from AOF\n            assert_equal [r pexpiretime foo1] $ttl1\n            assert_equal [r pexpiretime foo2] $ttl2\n            assert_equal [r pexpiretime foo3] $ttl3\n            assert_equal [r pexpiretime foo4] $ttl4\n            assert_equal [r pexpiretime foo5] $ttl5\n            assert_equal [r pexpiretime foo6] $ttl6\n            assert_equal [r pexpiretime foo7] $ttl7\n            assert_equal [r pexpiretime foo8] $ttl8\n            assert_equal [r pexpiretime foo9] $ttl9\n            assert_equal [r pexpiretime foo10] $ttl10\n            assert_equal [r pexpiretime foo11] \"-2\" ; # foo11 is gone\n            assert_equal [r pexpiretime foo12] $ttl12\n            assert_equal [r pexpiretime foo13] $ttl13\n            assert_equal [r pexpiretime foo14] $ttl14\n            assert_equal [r pexpiretime foo15] $ttl15\n            assert_equal [r pexpiretime foo16] \"-1\" ; # foo16 has no TTL\n            assert_equal [r pexpiretime foo17] $ttl17\n            assert_equal [r pexpiretime foo18] $ttl18\n        } {} {needs:debug}\n    }\n\n    test {All TTL in commands are propagated as absolute timestamp in replication stream} {\n        # Make sure that both relative and absolute expire commands are propagated\n        # as absolute to replicas for two reasons:\n        # 1) We want to avoid replicas retaining data much longer than primary due\n        #    to replication lag.\n        # 2) We want to unify the way TTLs are replicated in both RDB and replication\n        #    stream, which is as absolute timestamps.\n        # See: https://github.com/redis/redis/issues/8433\n\n        r flushall ; # Clean up keyspace to avoid interference by keys from other tests\n        set repl [attach_to_replication_stream]\n        # SET commands\n        r set foo1 bar ex 200\n        r set foo1 bar px 100000\n        r set foo1 bar exat [expr [clock seconds]+100]\n        r set foo1 bar pxat [expr [clock milliseconds]+100000]\n        r 
setex foo1 100 bar\n        r psetex foo1 100000 bar\n        r set foo2 bar\n        # EXPIRE-family commands\n        r expire foo2 100\n        r pexpire foo2 100000\n        r set foo3 bar\n        r expireat foo3 [expr [clock seconds]+100]\n        r pexpireat foo3 [expr [clock seconds]*1000+100000]\n        r expireat foo3 [expr [clock seconds]-100]\n        # GETEX-family commands\n        r set foo4 bar\n        r getex foo4 ex 200\n        r getex foo4 px 200000\n        r getex foo4 exat [expr [clock seconds]+100]\n        r getex foo4 pxat [expr [clock milliseconds]+100000]\n        # RESTORE commands\n        r set foo5 bar\n        set encoded [r dump foo5]\n        r restore foo6 100000 $encoded\n        r restore foo7 [expr [clock milliseconds]+100000] $encoded absttl\n\n        assert_replication_stream $repl {\n            {select *}\n            {set foo1 bar PXAT *}\n            {set foo1 bar PXAT *}\n            {set foo1 bar PXAT *}\n            {set foo1 bar pxat *}\n            {set foo1 bar PXAT *}\n            {set foo1 bar PXAT *}\n            {set foo2 bar}\n            {pexpireat foo2 *}\n            {pexpireat foo2 *}\n            {set foo3 bar}\n            {pexpireat foo3 *}\n            {pexpireat foo3 *}\n            {del foo3}\n            {set foo4 bar}\n            {pexpireat foo4 *}\n            {pexpireat foo4 *}\n            {pexpireat foo4 *}\n            {pexpireat foo4 *}\n            {set foo5 bar}\n            {restore foo6 * * ABSTTL}\n            {restore foo7 * * absttl}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    # Start another server to test replication of TTLs\n    start_server {tags {needs:repl external:skip}} {\n        # Set the outer layer server as primary\n        set primary [srv -1 client]\n        set primary_host [srv -1 host]\n        set primary_port [srv -1 port]\n        # Set this inner layer server as replica\n        set replica [srv 0 client]\n\n        test 
{Replica should have role slave after REPLICAOF} {\n            $replica replicaof $primary_host $primary_port\n            wait_for_condition 50 100 {\n                [s 0 role] eq {slave}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {For all replicated TTL-related commands, absolute expire times are identical on primary and replica} {\n            # Apply each TTL-related command to a unique key on primary\n            # SET commands\n            $primary set foo1 bar ex 100\n            $primary set foo2 bar px 100000\n            $primary set foo3 bar exat [expr [clock seconds]+100]\n            $primary set foo4 bar pxat [expr [clock milliseconds]+100000]\n            $primary setex foo5 100 bar\n            $primary psetex foo6 100000 bar\n            # EXPIRE-family commands\n            $primary set foo7 bar\n            $primary expire foo7 100\n            $primary set foo8 bar\n            $primary pexpire foo8 100000\n            $primary set foo9 bar\n            $primary expireat foo9 [expr [clock seconds]+100]\n            $primary set foo10 bar\n            $primary pexpireat foo10 [expr [clock milliseconds]+100000]\n            # GETEX commands\n            $primary set foo11 bar\n            $primary getex foo11 ex 100\n            $primary set foo12 bar\n            $primary getex foo12 px 100000\n            $primary set foo13 bar\n            $primary getex foo13 exat [expr [clock seconds]+100]\n            $primary set foo14 bar\n            $primary getex foo14 pxat [expr [clock milliseconds]+100000]\n            # RESTORE commands\n            $primary set foo15 bar\n            set encoded [$primary dump foo15]\n            $primary restore foo16 100000 $encoded\n            $primary restore foo17 [expr [clock milliseconds]+100000] $encoded absttl\n\n            # Wait for replica to get the keys and TTLs\n            assert {[$primary wait 1 0] == 1}\n\n            
# Verify absolute TTLs are identical on primary and replica for all keys\n            # This is because TTLs are always replicated as absolute values\n            foreach key [$primary keys *] {\n                assert_equal [$primary pexpiretime $key] [$replica pexpiretime $key]\n            }\n        }\n\n        test {expired keys created on writable replicas should be deleted by active expiry} {\n            $primary flushall\n            $replica config set replica-read-only no\n            foreach {yes_or_no} {yes no} {\n                $replica config set appendonly $yes_or_no\n                waitForBgrewriteaof $replica\n                set prev_expired [s expired_keys]\n                $replica set foo bar PX 1\n                wait_for_condition 100 10 {\n                    [s expired_keys] == $prev_expired + 1\n                } else {\n                    fail \"key not expired\"\n                }\n                assert_equal {} [$replica get foo]\n            }\n        }\n    }\n\n    test {SET command will remove expire} {\n        r set foo bar EX 100\n        r set foo bar\n        r ttl foo\n    } {-1}\n\n    test {SET command will remove expire with large string (optimization path)} {\n        # This test specifically targets the dbSetValue optimization path\n        # that was missing TTL handling for large strings\n        set large_value [string repeat \"A\" 1000]\n        r set foo $large_value EX 100\n        r set foo $large_value KEEPTTL\n        set ttl1 [r ttl foo]\n        assert {$ttl1 <= 100 && $ttl1 > 90}\n\n        # Plain SET should remove TTL even with large strings\n        r set foo $large_value\n        set ttl2 [r ttl foo]\n        assert_equal $ttl2 -1\n    }\n\n    test {SET - use KEEPTTL option, TTL should not be removed} {\n        r set foo bar EX 100\n        r set foo bar KEEPTTL\n        set ttl [r ttl foo]\n        assert {$ttl <= 100 && $ttl > 90}\n    }\n\n    test {SET - use KEEPTTL option, TTL should 
not be removed after loadaof} {\n        r config set appendonly yes\n        r set foo bar EX 100\n        r set foo bar2 KEEPTTL\n        after 2000\n        r debug loadaof\n        set ttl [r ttl foo]\n        assert {$ttl <= 98 && $ttl > 90}\n    } {} {needs:debug}\n\n    test {GETEX use of PERSIST option should remove TTL} {\n        r set foo bar EX 100\n        r getex foo PERSIST\n        r ttl foo\n    } {-1}\n\n    test {GETEX use of PERSIST option should remove TTL after loadaof} {\n        r config set appendonly yes\n        r set foo bar EX 100\n        r getex foo PERSIST\n        r debug loadaof\n        r ttl foo\n    } {-1} {needs:debug}\n\n    test {GETEX propagates to replica as PERSIST, DEL, or nothing} {\n        # In the above tests, many keys with random expiration times are set, flush\n        # the DBs to avoid active expiry kicking in and messing up the replication streams.\n        r flushall\n        set repl [attach_to_replication_stream]\n        r set foo bar EX 100\n        r getex foo PERSIST\n        r getex foo\n        r getex foo exat [expr [clock seconds]-100]\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar PXAT *}\n            {persist foo}\n            {del foo}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {EXPIRE with NX option on a key with ttl} {\n        r SET foo bar EX 100\n        assert_equal [r EXPIRE foo 200 NX] 0\n        assert_range [r TTL foo] 50 100\n    } {}\n\n    test {EXPIRE with NX option on a key without ttl} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo 200 NX] 1\n        assert_range [r TTL foo] 100 200\n    } {}\n\n    test {EXPIRE with XX option on a key with ttl} {\n        r SET foo bar EX 100\n        assert_equal [r EXPIRE foo 200 XX] 1\n        assert_range [r TTL foo] 100 200\n    } {}\n\n    test {EXPIRE with XX option on a key without ttl} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo 200 XX] 0\n 
       assert_equal [r TTL foo] -1\n    } {}\n\n    test {EXPIRE with GT option on a key with lower ttl} {\n        r SET foo bar EX 100\n        assert_equal [r EXPIRE foo 200 GT] 1\n        assert_range [r TTL foo] 100 200\n    } {}\n\n    test {EXPIRE with GT option on a key with higher ttl} {\n        r SET foo bar EX 200\n        assert_equal [r EXPIRE foo 100 GT] 0\n        assert_range [r TTL foo] 100 200\n    } {}\n\n    test {EXPIRE with GT option on a key without ttl} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo 200 GT] 0\n        assert_equal [r TTL foo] -1\n    } {}\n\n    test {EXPIRE with LT option on a key with higher ttl} {\n        r SET foo bar EX 100\n        assert_equal [r EXPIRE foo 200 LT] 0\n        assert_range [r TTL foo] 50 100\n    } {}\n\n    test {EXPIRE with LT option on a key with lower ttl} {\n        r SET foo bar EX 200\n        assert_equal [r EXPIRE foo 100 LT] 1\n        assert_range [r TTL foo] 50 100\n    } {}\n\n    test {EXPIRE with LT option on a key without ttl} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo 100 LT] 1\n        assert_range [r TTL foo] 50 100\n    } {}\n\n    test {EXPIRE with LT and XX option on a key with ttl} {\n        r SET foo bar EX 200\n        assert_equal [r EXPIRE foo 100 LT XX] 1\n        assert_range [r TTL foo] 50 100\n    } {}\n\n    test {EXPIRE with LT and XX option on a key without ttl} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo 200 LT XX] 0\n        assert_equal [r TTL foo] -1\n    } {}\n\n    test {EXPIRE with conflicting options: LT GT} {\n        catch {r EXPIRE foo 200 LT GT} e\n        set e\n    } {ERR GT and LT options at the same time are not compatible}\n\n    test {EXPIRE with conflicting options: NX GT} {\n        catch {r EXPIRE foo 200 NX GT} e\n        set e\n    } {ERR NX and XX, GT or LT options at the same time are not compatible}\n\n    test {EXPIRE with conflicting options: NX LT} {\n        catch {r EXPIRE foo 200 NX LT} 
e\n        set e\n    } {ERR NX and XX, GT or LT options at the same time are not compatible}\n\n    test {EXPIRE with conflicting options: NX XX} {\n        catch {r EXPIRE foo 200 NX XX} e\n        set e\n    } {ERR NX and XX, GT or LT options at the same time are not compatible}\n\n    test {EXPIRE with unsupported options} {\n        catch {r EXPIRE foo 200 AB} e\n        set e\n    } {ERR Unsupported option AB}\n\n    test {EXPIRE with unsupported options after supported option} {\n        catch {r EXPIRE foo 200 XX AB} e\n        set e\n    } {ERR Unsupported option AB}\n\n    test {EXPIRE with negative expiry} {\n        r SET foo bar EX 100\n        assert_equal [r EXPIRE foo -10 LT] 1\n        assert_equal [r TTL foo] -2\n    } {}\n\n    test {EXPIRE with negative expiry on a non-volatile key} {\n        r SET foo bar\n        assert_equal [r EXPIRE foo -10 LT] 1\n        assert_equal [r TTL foo] -2\n    } {}\n\n    test {EXPIRE with non-existent key} {\n        assert_equal [r EXPIRE none 100 NX] 0\n        assert_equal [r EXPIRE none 100 XX] 0\n        assert_equal [r EXPIRE none 100 GT] 0\n        assert_equal [r EXPIRE none 100 LT] 0\n    } {}\n\n    test {Redis should not propagate the read command on lazy expire} {\n        r debug set-active-expire 0\n        r flushall ; # Clean up keyspace to avoid interference by keys from other tests\n        r set foo bar PX 1\n        set repl [attach_to_replication_stream]\n        wait_for_condition 50 100 {\n            [r get foo] eq {}\n        } else {\n            fail \"Key did not lazily expire.\"\n        }\n\n        # dummy command to verify nothing else gets into the replication stream.\n        r set x 1\n\n        assert_replication_stream $repl {\n            {select *}\n            {del foo}\n            {set x 1}\n        }\n        close_replication_stream $repl\n        assert_equal [r debug set-active-expire 1] {OK}\n    } {} {needs:debug}\n\n    test {SCAN: Lazy-expire should not be wrapped in MULTI/EXEC} {\n      
  r debug set-active-expire 0\n        r flushall\n\n        r set foo1 bar PX 1\n        r set foo2 bar PX 1\n        after 2\n\n        set repl [attach_to_replication_stream]\n\n        r scan 0\n\n        assert_replication_stream $repl {\n            {select *}\n            {del foo*}\n            {del foo*}\n        }\n        close_replication_stream $repl\n        assert_equal [r debug set-active-expire 1] {OK}\n    } {} {needs:debug}\n\n    test {RANDOMKEY: Lazy-expire should not be wrapped in MULTI/EXEC} {\n        r debug set-active-expire 0\n        r flushall\n\n        r set foo1 bar PX 1\n        r set foo2 bar PX 1\n        after 2\n\n        set repl [attach_to_replication_stream]\n\n        r randomkey\n\n        assert_replication_stream $repl {\n            {select *}\n            {del foo*}\n            {del foo*}\n        }\n        close_replication_stream $repl\n        assert_equal [r debug set-active-expire 1] {OK}\n    } {} {needs:debug}\n\n    test {Lazy expire should increment expired_keys but not expired_keys_active} {\n        r flushall\n        r config resetstat\n        r debug set-active-expire 0\n\n        # Set keys that will expire\n        r set foo1{t} bar PX 1\n        r set foo2{t} bar PX 1\n        r set foo3{t} bar PX 1\n        after 2\n\n        # Trigger lazy expire by accessing the keys\n        assert_equal {{} {} {}} [r mget foo1{t} foo2{t} foo3{t}]\n\n        # Verify that expired_keys was incremented but expired_keys_active was not\n        assert_equal [s expired_keys] 3\n        assert_equal [s expired_keys_active] 0\n\n        assert_equal [r debug set-active-expire 1] {OK}\n    } {} {needs:debug}\n\n    test {Active expire should increment both expired_keys and expired_keys_active} {\n        r flushall\n        r config resetstat\n\n        # Set keys that will expire soon\n        r set foo1 bar PX 1\n        r set foo2 bar PX 1\n        r set foo3 bar PX 1\n\n        # Wait for active expire to kick in\n   
     wait_for_condition 50 100 {\n            [s expired_keys] == 3\n        } else {\n            fail \"Keys were not expired\"\n        }\n\n        # Verify that both expired_keys and expired_keys_active were incremented\n        assert_equal [s expired_keys] 3\n        assert_equal [s expired_keys_active] 3\n    } {}\n}\n\nstart_cluster 1 0 {tags {\"expire external:skip cluster\"}} {\n    test \"expire scan should skip dictionaries with lots of empty buckets\" {\n        r debug set-active-expire 0\n\n        # Use two slots to verify that the expiry scan logic is able\n        # to go past slots which aren't valid for scanning at that point in time,\n        # and that the next non-empty slot after them still gets scanned and expired.\n\n        # hashslot(alice) is 749\n        r psetex alice 500 val\n\n        # hashslot(foo) is 12182\n        # fill data across different slots with expiration\n        for {set j 1} {$j <= 100} {incr j} {\n            r psetex \"{foo}$j\" 500 a\n        }\n        # hashslot(key) is 12539\n        r psetex key 500 val\n\n        # disable resizing; the reason for not using a slow bgsave is that\n        # it would hit the dict_force_resize_ratio.\n        r debug dict-resizing 0\n\n        # delete data to have lots (99%) of empty buckets (slot 12182 should be skipped)\n        for {set j 1} {$j <= 99} {incr j} {\n            r del \"{foo}$j\"\n        }\n\n        # Trigger a full traversal of all dictionaries.\n        r keys *\n\n        r debug set-active-expire 1\n\n        # Verify {foo}100 still exists and the rest got cleaned up\n        wait_for_condition 20 100 {\n            [r dbsize] eq 1\n        } else {\n            if {[r dbsize] eq 0} {\n                puts [r debug htstats 0]\n                fail \"scan didn't handle slot skipping logic.\"\n            } else {\n                puts [r debug htstats 0]\n                fail \"scan didn't process all valid slots.\"\n          
  }\n        }\n\n        # Enable resizing\n        r debug dict-resizing 1\n\n        # put some data into slot 12182 and trigger the resize\n        r psetex \"{foo}0\" 500 a\n\n        # Verify all keys have expired\n        wait_for_condition 400 100 {\n            [r dbsize] eq 0\n        } else {\n            puts [r dbsize]\n            flush stdout\n            fail \"Keys did not actively expire.\"\n        }\n\n        # Make sure we don't have any timeouts.\n        assert_equal 0 [s 0 expired_time_cap_reached_count]\n    } {} {needs:debug}\n}\n\n# Config lazyexpire-nested-arbitrary-keys test body\nproc conf_le_test {option mode} {\n    r config set lazyexpire-nested-arbitrary-keys $option\n    r debug set-active-expire 0\n    r flushall\n    r script LOAD {return redis.call('SCAN', 0)}\n\n    r set foo bar\n    r pexpire foo 1\n    after 2\n\n    set repl [attach_to_replication_stream]\n\n    # The lua and multi modes hit lazy expire within a 'transaction', meaning\n    # DEL propagation should be blocked when 'lazyexpire-nested-arbitrary-keys' is set to no.\n    if {$mode == \"lua\"} {\n        r eval \"return redis.call('SCAN', 0)\" 0\n    } elseif {$mode == \"multi\"} {\n        r multi\n        r scan 0\n        r exec\n    } else {\n        r scan 0\n    }\n\n    # dummy command to verify nothing else gets into the replication stream.\n    r set x 1\n\n    if {$option == \"no\" && $mode != \"direct\"} {\n        assert_replication_stream $repl {\n            {select *}\n            {set x 1}\n        }\n    } else {\n        assert_replication_stream $repl {\n            {select *}\n            {del foo}\n            {set x 1}\n        }\n    }\n\n    close_replication_stream $repl\n    r script flush\n    assert_equal [r config set lazyexpire-nested-arbitrary-keys yes] {OK}\n    assert_equal [r debug set-active-expire 1] {OK}\n}\n\nforeach option {yes no} {\nforeach mode {direct multi lua} {\n    start_server {tags {\"expire\"}} {\n        test \"Config 
lazyexpire-nested-arbitrary-keys ($option, $mode)\" {\n            conf_le_test $option $mode\n        } {} {needs:debug repl}\n    }\n}}\n"
  },
  {
    "path": "tests/unit/functions.tcl",
"content": "proc get_function_code {args} {\n    return [format \"#!%s name=%s\\nredis.register_function('%s', function(KEYS, ARGV)\\n %s \\nend)\" [lindex $args 0] [lindex $args 1] [lindex $args 2] [lindex $args 3]]\n}\n\nproc get_no_writes_function_code {args} {\n    return [format \"#!%s name=%s\\nredis.register_function{function_name='%s', callback=function(KEYS, ARGV)\\n %s \\nend, flags={'no-writes'}}\" [lindex $args 0] [lindex $args 1] [lindex $args 2] [lindex $args 3]]\n}\n\nstart_server {tags {\"scripting\"}} {\n    test {FUNCTION - Basic usage} {\n        r function load [get_function_code LUA test test {return 'hello'}]\n        r fcall test 0\n    } {hello}\n\n    test {FUNCTION - Load with unknown argument} {\n        catch {\n            r function load foo bar [get_function_code LUA test test {return 'hello'}]\n        } e\n        set _ $e\n    } {*Unknown option given*}\n\n    test {FUNCTION - Creating an already existing library raises an error} {\n        catch {\n            r function load [get_function_code LUA test test {return 'hello1'}]\n        } e\n        set _ $e\n    } {*already exists*}\n\n    test {FUNCTION - Creating an already existing library raises an error (case insensitive)} {\n        catch {\n            r function load [get_function_code LUA test test {return 'hello1'}]\n        } e\n        set _ $e\n    } {*already exists*}\n\n    test {FUNCTION - Create a library with an invalid name format} {\n        catch {\n            r function load [get_function_code LUA {bad\\0foramat} test {return 'hello1'}]\n        } e\n        set _ $e\n    } {*Library names can only contain letters, numbers, or underscores(_)*}\n\n    test {FUNCTION - Create a library with a nonexistent engine} {\n        catch {\n            r function load [get_function_code bad_engine test test {return 'hello1'}]\n        } e\n        set _ $e\n    } {*Engine 'bad_engine' not found*}\n\n    test {FUNCTION - Test uncompiled script} {\n        catch {\n            r function 
load replace [get_function_code LUA test test {bad script}]\n        } e\n        set _ $e\n    } {*Error compiling function*}\n\n    test {FUNCTION - test replace argument} {\n        r function load REPLACE [get_function_code LUA test test {return 'hello1'}]\n        r fcall test 0\n    } {hello1}\n\n    test {FUNCTION - test function case insensitive} {\n        r fcall TEST 0\n    } {hello1}\n\n    test {FUNCTION - test replace argument with failure keeps old libraries} {\n        catch {r function load REPLACE [get_function_code LUA test test {error}]} e\n        assert_match {ERR Error compiling function*} $e\n        r fcall test 0\n    } {hello1}\n\n    test {FUNCTION - test function delete} {\n        r function delete test\n        catch {\n            r fcall test 0\n        } e\n        set _ $e\n    } {*Function not found*}\n\n    test {FUNCTION - test fcall bad arguments} {\n        r function load [get_function_code LUA test test {return 'hello'}]\n        catch {\n            r fcall test bad_arg\n        } e\n        set _ $e\n    } {*Bad number of keys provided*}\n\n    test {FUNCTION - test fcall bad number of keys arguments} {\n        catch {\n            r fcall test 10 key1\n        } e\n        set _ $e\n    } {*Number of keys can't be greater than number of args*}\n\n    test {FUNCTION - test fcall negative number of keys} {\n        catch {\n            r fcall test -1 key1\n        } e\n        set _ $e\n    } {*Number of keys can't be negative*}\n\n    test {FUNCTION - test delete on a non-existing library} {\n        catch {\n            r function delete test1\n        } e\n        set _ $e\n    } {*Library not found*}\n\n    test {FUNCTION - test function kill when function is not running} {\n        catch {\n            r function kill\n        } e\n        set _ $e\n    } {*No scripts in execution*}\n\n    test {FUNCTION - test wrong subcommand} {\n        catch {\n            r function bad_subcommand\n        } e\n        set _ $e\n  
  } {*unknown subcommand*}\n\n    test {FUNCTION - test loading from rdb} {\n        r debug reload\n        r fcall test 0\n    } {hello} {needs:debug}\n\n    test {FUNCTION - test debug reload different options} {\n        catch {r debug reload noflush} e\n        assert_match \"*Error trying to load the RDB*\" $e\n        r debug reload noflush merge\n        r function list\n    } {{library_name test engine LUA functions {{name test description {} flags {}}}}} {needs:debug}\n\n    test {FUNCTION - test debug reload with nosave and noflush} {\n        r function delete test\n        r set x 1\n        r function load [get_function_code LUA test1 test1 {return 'hello'}]\n        r debug reload\n        r function load [get_function_code LUA test2 test2 {return 'hello'}]\n        r debug reload nosave noflush merge\n        assert_equal [r fcall test1 0] {hello}\n        assert_equal [r fcall test2 0] {hello}\n    } {} {needs:debug}\n\n    test {FUNCTION - test flushall and flushdb do not clean functions} {\n        r function flush\n        r function load REPLACE [get_function_code lua test test {return redis.call('set', 'x', '1')}]\n        r flushall\n        r flushdb\n        r function list\n    } {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n\n    test {FUNCTION - test function dump and restore} {\n        r function flush\n        r function load [get_function_code lua test test {return 'hello'}]\n        set e [r function dump]\n        r function delete test\n        assert_match {} [r function list]\n        r function restore $e\n        r function list\n    } {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n\n    test {FUNCTION - test function dump and restore with flush argument} {\n        set e [r function dump]\n        r function flush\n        assert_match {} [r function list]\n        r function restore $e FLUSH\n        r function list\n    } {{library_name test engine LUA 
functions {{name test description {} flags {}}}}}\n\n    test {FUNCTION - test function dump and restore with append argument} {\n        set e [r function dump]\n        r function flush\n        assert_match {} [r function list]\n        r function load [get_function_code lua test test {return 'hello1'}]\n        catch {r function restore $e APPEND} err\n        assert_match {*already exists*} $err\n        r function flush\n        r function load [get_function_code lua test1 test1 {return 'hello1'}]\n        r function restore $e APPEND\n        assert_match {hello} [r fcall test 0]\n        assert_match {hello1} [r fcall test1 0]\n    }\n\n    test {FUNCTION - test function dump and restore with replace argument} {\n        r function flush\n        r function load [get_function_code LUA test test {return 'hello'}]\n        set e [r function dump]\n        r function flush\n        assert_match {} [r function list]\n        r function load [get_function_code lua test test {return 'hello1'}]\n        assert_match {hello1} [r fcall test 0]\n        r function restore $e REPLACE\n        assert_match {hello} [r fcall test 0]\n    }\n\n    test {FUNCTION - test function restore with bad payload does not drop existing functions} {\n        r function flush\n        r function load [get_function_code LUA test test {return 'hello'}]\n        catch {r function restore bad_payload} e\n        assert_match {*payload version or checksum are wrong*} $e\n        r function list\n    } {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n\n    test {FUNCTION - test function restore with wrong number of arguments} {\n        catch {r function restore arg1 args2 arg3} e\n        set _ $e\n    } {*unknown subcommand or wrong number of arguments for 'restore'. 
Try FUNCTION HELP.}\n\n    test {FUNCTION - test fcall_ro with write command} {\n        r function load REPLACE [get_no_writes_function_code lua test test {return redis.call('set', 'x', '1')}]\n        catch { r fcall_ro test 1 x } e\n        set _ $e\n    } {*Write commands are not allowed from read-only scripts*}\n\n    test {FUNCTION - test fcall_ro with read only commands} {\n        r function load REPLACE [get_no_writes_function_code lua test test {return redis.call('get', 'x')}]\n        r set x 1\n        r fcall_ro test 1 x\n    } {1}\n\n    test {FUNCTION - test keys and argv} {\n        r function load REPLACE [get_function_code lua test test {return redis.call('set', KEYS[1], ARGV[1])}]\n        r fcall test 1 x foo\n        r get x\n    } {foo}\n\n    test {FUNCTION - test command get keys on fcall} {\n        r COMMAND GETKEYS fcall test 1 x foo\n    } {x}\n\n    test {FUNCTION - test command get keys on fcall_ro} {\n        r COMMAND GETKEYS fcall_ro test 1 x foo\n    } {x}\n\n    test {FUNCTION - test function kill} {\n        set rd [redis_deferring_client]\n        r config set busy-reply-threshold 10\n        r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n        $rd fcall test 0\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        assert_match {running_script {name test command {fcall test 0} duration_ms *} engines {*}} [r FUNCTION STATS]\n        r function kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        assert_equal [r ping] \"PONG\"\n    }\n\n    test {FUNCTION - test script kill not working on function} {\n        set rd [redis_deferring_client]\n        r config set busy-reply-threshold 10\n        r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n        $rd fcall test 0\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        catch {r 
script kill} e\n        assert_match {BUSY*} $e\n        r function kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        assert_equal [r ping] \"PONG\"\n    }\n\n    test {FUNCTION - test function kill not working on eval} {\n        set rd [redis_deferring_client]\n        r config set busy-reply-threshold 10\n        $rd eval {local a = 1 while true do a = a + 1 end} 0\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        catch {r function kill} e\n        assert_match {BUSY*} $e\n        r script kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        assert_equal [r ping] \"PONG\"\n    }\n\n    test {FUNCTION - test function flush} {\n        r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n        assert_match {{library_name test engine LUA functions {{name test description {} flags {}}}}} [r function list]\n        r function flush\n        assert_match {} [r function list]\n\n        r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n        assert_match {{library_name test engine LUA functions {{name test description {} flags {}}}}} [r function list]\n        r function flush async\n        assert_match {} [r function list]\n\n        r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n        assert_match {{library_name test engine LUA functions {{name test description {} flags {}}}}} [r function list]\n        r function flush sync\n        assert_match {} [r function list]\n    }\n\n    test {FUNCTION - async function flush rebuilds Lua VM without causing race condition between main and lazyfree thread} {\n        # LAZYFREE_THRESHOLD is 64\n        for {set i 0} {$i < 1000} {incr i} {\n            r function load [get_function_code lua test$i test$i {local a = 1 while true do a = a + 1 end}]\n        }\n        
assert_morethan [s used_memory_vm_functions] 100000\n        r config resetstat\n        r function flush async\n        assert_lessthan [s used_memory_vm_functions] 40000\n\n        # Wait for the completion of lazy free for both functions and engines.\n        set start_time [clock seconds]\n        while {1} {\n            # Tests for race conditions between async function flushes and main thread Lua VM operations.\n            r function load REPLACE [get_function_code lua test test {local a = 1 while true do a = a + 1 end}]\n            if {[s lazyfreed_objects] == 1001 || [expr {[clock seconds] - $start_time}] > 5} {\n                break\n            }\n        }\n        if {[s lazyfreed_objects] != 1001} {\n            error \"Timeout or unexpected number of lazyfreed_objects: [s lazyfreed_objects]\"\n        }\n        assert_match {{library_name test engine LUA functions {{name test description {} flags {}}}}} [r function list]\n    }\n\n    test {FUNCTION - test function wrong argument} {\n        catch {r function flush bad_arg} e\n        assert_match {*only supports SYNC|ASYNC*} $e\n\n        catch {r function flush sync extra_arg} e\n        assert_match {*unknown subcommand or wrong number of arguments for 'flush'. 
Try FUNCTION HELP.} $e\n    }\n}\n\nstart_server {tags {\"scripting repl external:skip\"}} {\n    start_server {} {\n        test \"Connect a replica to the master instance\" {\n            r -1 slaveof [srv 0 host] [srv 0 port]\n            wait_for_condition 150 100 {\n                [s -1 role] eq {slave} &&\n                [string match {*master_link_status:up*} [r -1 info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n        }\n\n        test {FUNCTION - creation is replicated to replica} {\n            r function load [get_no_writes_function_code LUA test test {return 'hello'}]\n            wait_for_condition 150 100 {    \n                [r -1 function list] eq {{library_name test engine LUA functions {{name test description {} flags no-writes}}}}\n            } else {\n                fail \"Failed waiting for function to replicate to replica\"\n            }\n        }\n\n        test {FUNCTION - call on replica} {\n            r -1 fcall test 0\n        } {hello}\n\n        test {FUNCTION - restore is replicated to replica} {\n            set e [r function dump]\n\n            r function delete test\n            wait_for_condition 150 100 {\n                [r -1 function list] eq {}\n            } else {\n                fail \"Failed waiting for function to replicate to replica\"\n            }\n\n            assert_equal [r function restore $e] {OK}\n\n            wait_for_condition 150 100 {\n                [r -1 function list] eq {{library_name test engine LUA functions {{name test description {} flags no-writes}}}}\n            } else {\n                fail \"Failed waiting for function to replicate to replica\"\n            }\n        }\n\n        test {FUNCTION - delete is replicated to replica} {\n            r function delete test\n            wait_for_condition 150 100 {\n                [r -1 function list] eq {}\n            } else {\n                fail \"Failed 
waiting for function to replicate to replica\"\n            }\n        }\n\n        test {FUNCTION - flush is replicated to replica} {\n            r function load [get_function_code LUA test test {return 'hello'}]\n            wait_for_condition 150 100 {\n                [r -1 function list] eq {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n            } else {\n                fail \"Failed waiting for function to replicate to replica\"\n            }\n            r function flush\n            wait_for_condition 150 100 {\n                [r -1 function list] eq {}\n            } else {\n                fail \"Failed waiting for function to replicate to replica\"\n            }\n        }\n\n        test \"Disconnecting the replica from master instance\" {\n            r -1 slaveof no one\n            # creating a function after disconnect to make sure function\n            # is replicated on rdb phase\n            r function load [get_no_writes_function_code LUA test test {return 'hello'}]\n\n            # reconnect the replica\n            r -1 slaveof [srv 0 host] [srv 0 port]\n            wait_for_condition 150 100 {\n                [s -1 role] eq {slave} &&\n                [string match {*master_link_status:up*} [r -1 info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n        }\n\n        test \"FUNCTION - test replication to replica on rdb phase\" {\n            r -1 fcall test 0\n        } {hello}\n\n        test \"FUNCTION - test replication to replica on rdb phase info command\" {\n            r -1 function list\n        } {{library_name test engine LUA functions {{name test description {} flags no-writes}}}}\n\n        test \"FUNCTION - create on read only replica\" {\n            catch {\n                r -1 function load [get_function_code LUA test test {return 'hello'}]\n            } e\n            set _ $e\n        } {*can't write against a 
read only replica*}\n\n        test \"FUNCTION - delete on read only replica\" {\n            catch {\n                r -1 function delete test\n            } e\n            set _ $e\n        } {*can't write against a read only replica*}\n\n        test \"FUNCTION - function effect is replicated to replica\" {\n            r function load REPLACE [get_function_code LUA test test {return redis.call('set', 'x', '1')}]\n            r fcall test 1 x\n            assert {[r get x] eq {1}}\n            wait_for_condition 150 100 {\n                [r -1 get x] eq {1}\n            } else {\n                fail \"Failed waiting for function effect to be replicated to replica\"\n            }\n        }\n\n        test \"FUNCTION - modify key space of read only replica\" {\n            catch {\n                r -1 fcall test 1 x\n            } e\n            set _ $e\n        } {READONLY You can't write against a read only replica.}\n    }\n}\n\ntest {FUNCTION can process create, delete and flush commands in AOF when doing \"debug loadaof\" in read-only slaves} {\n    start_server {} {\n        r config set appendonly yes\n        waitForBgrewriteaof r\n        r FUNCTION LOAD \"#!lua name=test\\nredis.register_function('test', function() return 'hello' end)\"\n        r config set slave-read-only yes\n        r slaveof 127.0.0.1 0\n        r debug loadaof\n        r slaveof no one\n        assert_equal [r function list] {{library_name test engine LUA functions {{name test description {} flags {}}}}}\n\n        r FUNCTION DELETE test\n\n        r slaveof 127.0.0.1 0\n        r debug loadaof\n        r slaveof no one\n        assert_equal [r function list] {}\n\n        r FUNCTION LOAD \"#!lua name=test\\nredis.register_function('test', function() return 'hello' end)\"\n        r FUNCTION FLUSH\n\n        r slaveof 127.0.0.1 0\n        r debug loadaof\n        r slaveof no one\n        assert_equal [r function list] {}\n    }\n} {} {needs:debug 
external:skip}\n\nstart_server {tags {\"scripting\"}} {\n    test {LIBRARIES - test shared function can access default globals} {\n        r function load {#!lua name=lib1\n            local function ping()\n                return redis.call('ping')\n            end\n            redis.register_function(\n                'f1',\n                function(keys, args)\n                    return ping()\n                end\n            )\n        }\n        r fcall f1 0\n    } {PONG}\n\n    test {LIBRARIES - usage and code sharing} {\n        r function load REPLACE {#!lua name=lib1\n            local function add1(a)\n                return a + 1\n            end\n            redis.register_function(\n                'f1',\n                function(keys, args)\n                    return add1(1)\n                end\n            )\n            redis.register_function(\n                'f2',\n                function(keys, args)\n                    return add1(2)\n                end\n            )\n        }\n        assert_equal [r fcall f1 0] {2}\n        assert_equal [r fcall f2 0] {3}\n        r function list\n    } {{library_name lib1 engine LUA functions {*}}}\n\n    test {LIBRARIES - test registration failure reverts the entire load} {\n        catch {\n            r function load replace {#!lua name=lib1\n                local function add1(a)\n                    return a + 2\n                end\n                redis.register_function(\n                    'f1',\n                    function(keys, args)\n                        return add1(1)\n                    end\n                )\n                redis.register_function(\n                    'f2',\n                    'not a function'\n                )\n            }\n        } e\n        assert_match {*second argument to redis.register_function must be a function*} $e\n        assert_equal [r fcall f1 0] {2}\n        assert_equal [r fcall f2 0] {3}\n    }\n\n    test {LIBRARIES - test registration 
function name collision} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function(\n                    'f1',\n                    function(keys, args)\n                        return 1\n                    end\n                )\n            }\n        } e\n        assert_match {*Function f1 already exists*} $e\n        assert_equal [r fcall f1 0] {2}\n        assert_equal [r fcall f2 0] {3}\n    }\n\n    test {LIBRARIES - test registration function name collision on same library} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function(\n                    'f1',\n                    function(keys, args)\n                        return 1\n                    end\n                )\n                redis.register_function(\n                    'f1',\n                    function(keys, args)\n                        return 1\n                    end\n                )\n            }\n        } e\n        set _ $e\n    } {*Function already exists in the library*}\n\n    test {LIBRARIES - test registration with no argument} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function()\n            }\n        } e\n        set _ $e\n    } {*wrong number of arguments to redis.register_function*}\n\n    test {LIBRARIES - test registration with only name} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function('f1')\n            }\n        } e\n        set _ $e\n    } {*calling redis.register_function with a single argument is only applicable to Lua table*}\n\n    test {LIBRARIES - test registration with too many arguments} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function('f1', function() return 1 end, {}, 'description', 'extra arg')\n            }\n        } e\n        set _ $e\n    } 
{*wrong number of arguments to redis.register_function*}\n\n    test {LIBRARIES - test registration with no string name} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function(nil, function() return 1 end)\n            }\n        } e\n        set _ $e\n    } {*first argument to redis.register_function must be a string*}\n\n    test {LIBRARIES - test registration with wrong name format} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function('test\\0test', function() return 1 end)\n            }\n        } e\n        set _ $e\n    } {*Library names can only contain letters, numbers, or underscores(_) and must be at least one character long*}\n\n    test {LIBRARIES - test registration with empty name} {\n        catch {\n            r function load replace {#!lua name=lib2\n                redis.register_function('', function() return 1 end)\n            }\n        } e\n        set _ $e\n    } {*Library names can only contain letters, numbers, or underscores(_) and must be at least one character long*}\n\n    test {LIBRARIES - math.random from function load} {\n        catch {\n            r function load replace {#!lua name=lib2\n                return math.random()\n            }\n        } e\n        set _ $e\n    } {*attempted to access nonexistent global variable 'math'*}\n\n    test {LIBRARIES - redis.call from function load} {\n        catch {\n            r function load replace {#!lua name=lib2\n                return redis.call('ping')\n            }\n        } e\n        set _ $e\n    } {*attempted to access nonexistent global variable 'call'*}\n\n    test {LIBRARIES - redis.setresp from function load} {\n        catch {\n            r function load replace {#!lua name=lib2\n                return redis.setresp(3)\n            }\n        } e\n        set _ $e\n    } {*attempted to access nonexistent global variable 'setresp'*}\n\n    test 
{LIBRARIES - redis.set_repl from function load} {\n        catch {\n            r function load replace {#!lua name=lib2\n                return redis.set_repl(redis.REPL_NONE)\n            }\n        } e\n        set _ $e\n    } {*attempted to access nonexistent global variable 'set_repl'*}\n\n    test {LIBRARIES - redis.acl_check_cmd from function load} {\n        catch {\n            r function load replace {#!lua name=lib2\n                return redis.acl_check_cmd('set','xx',1)\n            }\n        } e\n        set _ $e\n    } {*attempted to access nonexistent global variable 'acl_check_cmd'*}\n\n    test {LIBRARIES - malicious access test} {\n        # the 'library' API is not exposed inside a\n        # function context and the 'redis' API is not\n        # exposed in the library registration context.\n        # But a malicious user might find a way to hack it\n        # (as demonstrated in this test). This is why we\n        # have another level of protection on the C\n        # code itself and we want to test it and verify\n        # that it works properly.\n        r function load replace {#!lua name=lib1\n            local lib = redis\n            lib.register_function('f1', function ()\n                lib.redis = redis\n                lib.math = math\n                return {ok='OK'}\n            end)\n\n            lib.register_function('f2', function ()\n                lib.register_function('f1', function ()\n                    lib.redis = redis\n                    lib.math = math\n                    return {ok='OK'}\n                end)\n            end)\n        }\n        catch {[r fcall f1 0]} e\n        assert_match {*Attempt to modify a readonly table*} $e\n\n        catch {[r function load {#!lua name=lib2\n            redis.math.random()\n        }]} e\n        assert_match {*Script attempted to access nonexistent global variable 'math'*} $e\n\n        catch {[r function load {#!lua name=lib2\n            redis.redis.call('ping')\n   
     }]} e\n        assert_match {*Script attempted to access nonexistent global variable 'redis'*} $e\n\n        catch {[r fcall f2 0]} e\n        assert_match {*can only be called on FUNCTION LOAD command*} $e\n    }\n\n    test {LIBRARIES - delete removed all functions on library} {\n        r function delete lib1\n        r function list\n    } {}\n\n    test {LIBRARIES - register function inside a function} {\n        r function load {#!lua name=lib\n            redis.register_function(\n                'f1',\n                function(keys, args)\n                    redis.register_function(\n                        'f2',\n                        function(key, args)\n                            return 2\n                        end\n                    )\n                    return 1\n                end\n            )\n        }\n        catch {r fcall f1 0} e\n        set _ $e\n    } {*attempt to call field 'register_function' (a nil value)*}\n\n    test {LIBRARIES - register library with no functions} {\n        r function flush\n        catch {\n            r function load {#!lua name=lib\n                return 1\n            }\n        } e\n        set _ $e\n    } {*No functions registered*}\n\n    test {LIBRARIES - load timeout} {\n        catch {\n            r function load {#!lua name=lib\n                local a = 1\n                while 1 do a = a + 1 end\n            }\n        } e\n        set _ $e\n    } {*FUNCTION LOAD timeout*}\n\n    test {LIBRARIES - verify global protection on the load run} {\n        catch {\n            r function load {#!lua name=lib\n                a = 1\n            }\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test {LIBRARIES - named arguments} {\n        r function load {#!lua name=lib\n            redis.register_function{\n                function_name='f1',\n                callback=function()\n                    return 'hello'\n                end,\n                
description='some desc'\n            }\n        }\n        r function list\n    } {{library_name lib engine LUA functions {{name f1 description {some desc} flags {}}}}}\n\n    test {LIBRARIES - named arguments, bad function name} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    function_name=function() return 1 end,\n                    callback=function()\n                        return 'hello'\n                    end,\n                    description='some desc'\n                }\n            }\n        } e\n        set _ $e\n    } {*function_name argument given to redis.register_function must be a string*}\n\n    test {LIBRARIES - named arguments, bad callback type} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    function_name='f1',\n                    callback='bad',\n                    description='some desc'\n                }\n            }\n        } e\n        set _ $e\n    } {*callback argument given to redis.register_function must be a function*}\n\n    test {LIBRARIES - named arguments, bad description} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    function_name='f1',\n                    callback=function()\n                        return 'hello'\n                    end,\n                    description=function() return 1 end\n                }\n            }\n        } e\n        set _ $e\n    } {*description argument given to redis.register_function must be a string*}\n\n    test {LIBRARIES - named arguments, unknown argument} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    function_name='f1',\n                    callback=function()\n                        return 'hello'\n                    end,\n   
                 description='desc',\n                    some_unknown='unknown'\n                }\n            }\n        } e\n        set _ $e\n    } {*unknown argument given to redis.register_function*}\n\n    test {LIBRARIES - named arguments, missing function name} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    callback=function()\n                        return 'hello'\n                    end,\n                    description='desc'\n                }\n            }\n        } e\n        set _ $e\n    } {*redis.register_function must get a function name argument*}\n\n    test {LIBRARIES - named arguments, missing callback} {\n        catch {\n            r function load replace {#!lua name=lib\n                redis.register_function{\n                    function_name='f1',\n                    description='desc'\n                }\n            }\n        } e\n        set _ $e\n    } {*redis.register_function must get a callback argument*}\n\n    test {FUNCTION - test function restore with function name collision} {\n        r function flush\n        r function load {#!lua name=lib1\n            local function add1(a)\n                return a + 1\n            end\n            redis.register_function(\n                'f1',\n                function(keys, args)\n                    return add1(1)\n                end\n            )\n            redis.register_function(\n                'f2',\n                function(keys, args)\n                    return add1(2)\n                end\n            )\n            redis.register_function(\n                'f3',\n                function(keys, args)\n                    return add1(3)\n                end\n            )\n        }\n        set e [r function dump]\n        r function flush\n\n        # load a library with different name but with the same function name\n        r function load {#!lua name=lib1\n            
redis.register_function(\n                'f6',\n                function(keys, args)\n                    return 7\n                end\n            )\n        }\n        r function load {#!lua name=lib2\n            local function add1(a)\n                return a + 1\n            end\n            redis.register_function(\n                'f4',\n                function(keys, args)\n                    return add1(4)\n                end\n            )\n            redis.register_function(\n                'f5',\n                function(keys, args)\n                    return add1(5)\n                end\n            )\n            redis.register_function(\n                'f3',\n                function(keys, args)\n                    return add1(3)\n                end\n            )\n        }\n\n        catch {r function restore $e} error\n        assert_match {*Library lib1 already exists*} $error\n        assert_equal [r fcall f3 0] {4}\n        assert_equal [r fcall f4 0] {5}\n        assert_equal [r fcall f5 0] {6}\n        assert_equal [r fcall f6 0] {7}\n\n        catch {r function restore $e replace} error\n        assert_match {*Function f3 already exists*} $error\n        assert_equal [r fcall f3 0] {4}\n        assert_equal [r fcall f4 0] {5}\n        assert_equal [r fcall f5 0] {6}\n        assert_equal [r fcall f6 0] {7}\n    }\n\n    test {FUNCTION - test function list with code} {\n        r function flush\n        r function load {#!lua name=library1\n            redis.register_function('f6', function(keys, args) return 7 end)\n        }\n        r function list withcode\n    } {{library_name library1 engine LUA functions {{name f6 description {} flags {}}} library_code {*redis.register_function('f6', function(keys, args) return 7 end)*}}}\n\n    test {FUNCTION - test function list with pattern} {\n        r function load {#!lua name=lib1\n            redis.register_function('f7', function(keys, args) return 7 end)\n        }\n        r 
function list libraryname library*\n    } {{library_name library1 engine LUA functions {{name f6 description {} flags {}}}}}\n\n    test {FUNCTION - test function list wrong argument} {\n        catch {r function list bad_argument} e\n        set _ $e\n    } {*Unknown argument bad_argument*}\n\n    test {FUNCTION - test function list with bad argument to library name} {\n        catch {r function list libraryname} e\n        set _ $e\n    } {*library name argument was not given*}\n\n    test {FUNCTION - test function list withcode multiple times} {\n        catch {r function list withcode withcode} e\n        set _ $e\n    } {*Unknown argument withcode*}\n\n    test {FUNCTION - test function list libraryname multiple times} {\n        catch {r function list withcode libraryname foo libraryname foo} e\n        set _ $e\n    } {*Unknown argument libraryname*}\n\n    test {FUNCTION - verify OOM on function load and function restore} {\n        r function flush\n        r function load replace {#!lua name=test\n            redis.register_function('f1', function() return 1 end)\n        }\n        set payload [r function dump]\n        r config set maxmemory 1\n\n        r function flush\n        catch {r function load replace {#!lua name=test\n            redis.register_function('f1', function() return 1 end)\n        }} e\n        assert_match {*command not allowed when used memory*} $e\n\n        r function flush\n        catch {r function restore $payload} e\n        assert_match {*command not allowed when used memory*} $e\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {FUNCTION - verify allow-oom allows running any command} {\n        r FUNCTION load replace {#!lua name=f1\n            redis.register_function{\n                function_name='f1',\n                callback=function() return redis.call('set', 'x', '1') end,\n                flags={'allow-oom'}\n            }\n        }\n\n        r config set maxmemory 1\n\n      
  assert_match {OK} [r fcall f1 1 x]\n        assert_match {1} [r get x]\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n}\n\nstart_server {tags {\"scripting\"}} {\n    test {FUNCTION - wrong flags type named arguments} {\n        catch {r function load replace {#!lua name=test\n            redis.register_function{\n                function_name = 'f1',\n                callback = function() return 1 end,\n                flags = 'bad flags type'\n            }\n        }} e\n        set _ $e\n    } {*flags argument to redis.register_function must be a table representing function flags*}\n\n    test {FUNCTION - wrong flag type} {\n        catch {r function load replace {#!lua name=test\n            redis.register_function{\n                function_name = 'f1',\n                callback = function() return 1 end,\n                flags = {function() return 1 end}\n            }\n        }} e\n        set _ $e\n    } {*unknown flag given*}\n\n    test {FUNCTION - unknown flag} {\n        catch {r function load replace {#!lua name=test\n            redis.register_function{\n                function_name = 'f1',\n                callback = function() return 1 end,\n                flags = {'unknown'}\n            }\n        }} e\n        set _ $e\n    } {*unknown flag given*}\n\n    test {FUNCTION - write script on fcall_ro} {\n        r function load replace {#!lua name=test\n            redis.register_function{\n                function_name = 'f1',\n                callback = function() return redis.call('set', 'x', 1) end\n            }\n        }\n        catch {r fcall_ro f1 1 x} e\n        set _ $e\n    } {*Can not execute a script with write flag using \\*_ro command*}\n\n    test {FUNCTION - write script with no-writes flag} {\n        r function load replace {#!lua name=test\n            redis.register_function{\n                function_name = 'f1',\n                callback = function() return redis.call('set', 'x', 1) end,\n      
          flags = {'no-writes'}\n            }\n        }\n        catch {r fcall f1 1 x} e\n        set _ $e\n    } {*Write commands are not allowed from read-only scripts*}\n\n    test {FUNCTION - deny oom} {\n        r FUNCTION load replace {#!lua name=test\n            redis.register_function('f1', function() return redis.call('set', 'x', '1') end) \n        }\n\n        r config set maxmemory 1\n\n        catch {[r fcall f1 1 x]} e\n        assert_match {OOM *when used memory > 'maxmemory'*} $e\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {FUNCTION - deny oom on no-writes function} {\n        r FUNCTION load replace {#!lua name=test\n            redis.register_function{function_name='f1', callback=function() return 'hello' end, flags={'no-writes'}}\n        }\n\n        r config set maxmemory 1\n\n        assert_equal [r fcall f1 1 k] hello\n        assert_equal [r fcall_ro f1 1 k] hello\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {FUNCTION - allow stale} {\n        r FUNCTION load replace {#!lua name=test\n            redis.register_function{function_name='f1', callback=function() return 'hello' end, flags={'no-writes'}}\n            redis.register_function{function_name='f2', callback=function() return 'hello' end, flags={'allow-stale', 'no-writes'}}\n            redis.register_function{function_name='f3', callback=function() return redis.call('get', 'x') end, flags={'allow-stale', 'no-writes'}}\n            redis.register_function{function_name='f4', callback=function() return redis.call('info', 'server') end, flags={'allow-stale', 'no-writes'}}\n        }\n        \n        r config set replica-serve-stale-data no\n        r replicaof 127.0.0.1 1\n\n        catch {[r fcall f1 0]} e\n        assert_match {MASTERDOWN *} $e\n\n        assert_equal {hello} [r fcall f2 0]\n\n        catch {[r fcall f3 1 x]} e\n        assert_match {ERR *Can not execute the command on a stale 
replica*} $e\n\n        assert_match {*redis_version*} [r fcall f4 0]\n\n        r replicaof no one\n        r config set replica-serve-stale-data yes\n        set _ {}\n    } {} {external:skip}\n\n    test {FUNCTION - redis version api} {\n        r FUNCTION load replace {#!lua name=test\n            local version = redis.REDIS_VERSION_NUM\n\n            redis.register_function{function_name='get_version_v1', callback=function()\n              return string.format('%s.%s.%s',\n                                    bit.band(bit.rshift(version, 16), 0x000000ff),\n                                    bit.band(bit.rshift(version, 8), 0x000000ff),\n                                    bit.band(version, 0x000000ff))\n            end}\n            redis.register_function{function_name='get_version_v2', callback=function() return redis.REDIS_VERSION end}\n        }\n\n        assert_equal [r fcall get_version_v1 0] [r fcall get_version_v2 0]\n    }\n\n    test {FUNCTION - function stats} {\n        r FUNCTION FLUSH\n\n        r FUNCTION load {#!lua name=test1\n            redis.register_function('f1', function() return 1 end)\n            redis.register_function('f2', function() return 1 end)\n        }\n\n        r FUNCTION load {#!lua name=test2\n            redis.register_function('f3', function() return 1 end)\n        }\n\n        r function stats\n    } {running_script {} engines {LUA {libraries_count 2 functions_count 3}}}\n\n    test {FUNCTION - function stats reloaded correctly from rdb} {\n        r debug reload\n        r function stats\n    } {running_script {} engines {LUA {libraries_count 2 functions_count 3}}} {needs:debug}\n\n    test {FUNCTION - function stats delete library} {\n        r function delete test1\n        r function stats\n    } {running_script {} engines {LUA {libraries_count 1 functions_count 1}}}\n\n    test {FUNCTION - test function stats on loading failure} {\n        r FUNCTION FLUSH\n\n        r FUNCTION 
load {#!lua name=test1\n            redis.register_function('f1', function() return 1 end)\n            redis.register_function('f2', function() return 1 end)\n        }\n\n        catch {r FUNCTION load {#!lua name=test1\n            redis.register_function('f3', function() return 1 end)\n        }} e\n        assert_match \"*Library 'test1' already exists*\" $e\n        \n\n        r function stats\n    } {running_script {} engines {LUA {libraries_count 1 functions_count 2}}}\n\n    test {FUNCTION - function stats cleaned after flush} {\n        r function flush\n        r function stats\n    } {running_script {} engines {LUA {libraries_count 0 functions_count 0}}}\n\n    test {FUNCTION - function test empty engine} {\n         catch {r function load replace {#! name=test\n            redis.register_function('foo', function() return 1 end)\n        }} e\n        set _ $e\n    } {ERR Engine '' not found}\n\n    test {FUNCTION - function test unknown metadata value} {\n         catch {r function load replace {#!lua name=test foo=bar\n            redis.register_function('foo', function() return 1 end)\n        }} e\n        set _ $e\n    } {ERR Invalid metadata value given: foo=bar}\n\n    test {FUNCTION - function test no name} {\n         catch {r function load replace {#!lua\n            redis.register_function('foo', function() return 1 end)\n        }} e\n        set _ $e\n    } {ERR Library name was not given}\n\n    test {FUNCTION - function test multiple names} {\n         catch {r function load replace {#!lua name=foo name=bar\n            redis.register_function('foo', function() return 1 end)\n        }} e\n        set _ $e\n    } {ERR Invalid metadata value, name argument was given multiple times}\n\n    test {FUNCTION - function test name with quotes} {\n        r function load replace {#!lua name=\"foo\"\n            redis.register_function('foo', function() return 1 end)\n        }\n    } {foo}\n\n    test {FUNCTION - trick global protection 1} {\n    
    r FUNCTION FLUSH\n\n        r FUNCTION load {#!lua name=test1\n            redis.register_function('f1', function() \n                mt = getmetatable(_G)\n                original_globals = mt.__index\n                original_globals['redis'] = function() return 1 end\n            end)\n        }\n\n        catch {[r fcall f1 0]} e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test {FUNCTION - test getmetatable on script load} {\n        r FUNCTION FLUSH\n\n        catch {\n            r FUNCTION load {#!lua name=test1\n                mt = getmetatable(_G)\n            }\n        } e\n\n        set _ $e\n    } {*Script attempted to access nonexistent global variable 'getmetatable'*}\n\n}\n"
  },
  {
    "path": "tests/unit/gcra.tcl",
    "content": "start_server {tags {\"gcra\" \"external:skip\"}} {\n    test {GCRA - argument validation} {\n        # Wrong number of arguments (too few)\n        catch {r gcra} err\n        assert_match \"*wrong number of arguments*\" $err\n        catch {r gcra mykey} err\n        assert_match \"*wrong number of arguments*\" $err\n        catch {r gcra mykey 10} err\n        assert_match \"*wrong number of arguments*\" $err\n        catch {r gcra mykey 10 5} err\n        assert_match \"*wrong number of arguments*\" $err\n\n        # max_burst must be non-negative integer\n        catch {r gcra mykey -1 5 10} err\n        assert_match \"*out of range*\" $err\n        catch {r gcra mykey notanumber 5 10} err\n        assert_match \"*out of range*\" $err\n\n        # tokens_per_period must be >= 1\n        catch {r gcra mykey 10 0 10} err\n        assert_match \"*out of range*\" $err\n        catch {r gcra mykey 10 -1 10} err\n        assert_match \"*out of range*\" $err\n        catch {r gcra mykey 10 notanumber 10} err\n        assert_match \"*not an integer*\" $err\n\n        # period must be positive\n        catch {r gcra mykey 10 5 0} err\n        assert_match \"*period must be > 0*\" $err\n        catch {r gcra mykey 10 5 -1} err\n        assert_match \"*period must be > 0*\" $err\n        catch {r gcra mykey 10 5 notanumber} err\n        assert_match \"*not a valid float*\" $err\n\n        # tokens (optional) must be >= 1\n        catch {r gcra mykey 10 5 10 TOKENS} err\n        assert_match \"*Missing TOKENS value*\" $err\n        catch {r gcra mykey 10 5 10 TOKENS 0} err\n        assert_match \"*out of range*\" $err\n        catch {r gcra mykey 10 5 10 TOKENS -1} err\n        assert_match \"*out of range*\" $err\n        catch {r gcra mykey 10 5 10 TOKENS notanumber} err\n        assert_match \"*not an integer*\" $err\n\n        # Valid arguments with default tokens\n        r del mykey\n        set result [r gcra mykey 10 5 60]\n        assert_equal 5 
[llength $result]\n        set limited [lindex $result 0]\n        set max_req_num [lindex $result 1]\n        assert_equal $limited 0\n        assert_equal 11 $max_req_num\n\n        # Valid arguments with explicit tokens\n        r del mykey\n        set result [r gcra mykey 10 5 60 TOKENS 2]\n        assert_equal 5 [llength $result]\n        assert_equal 11 [lindex $result 1]\n\n        # Period accepts fractional seconds\n        r del mykey\n        set result [r gcra mykey 10 5 0.5]\n        assert_equal 5 [llength $result]\n    }\n\n    test {GCRA - first request is allowed} {\n        r del mykey\n        set result [r gcra mykey 5 10 60]\n        set limited [lindex $result 0]\n        # First request should not be limited\n        assert_equal 0 $limited\n    }\n\n    test {GCRA - requests within burst are allowed} {\n        r del mykey\n        # max_burst=5, tokens_per_period=1, period=60\n        # This allows burst of 6 requests (max_burst + 1)\n        for {set i 0} {$i < 6} {incr i} {\n            set result [r gcra mykey 5 1 60]\n            set limited [lindex $result 0]\n            assert_equal 0 $limited \"Request $i should be allowed\"\n        }\n        # Request 6 should be limited\n        set result [r gcra mykey 5 1 60]\n        set limited [lindex $result 0]\n        assert_equal 1 $limited \"Request 6 should be limited\"\n    }\n\n    test {GCRA - retry_after is positive when limited} {\n        r del mykey\n        # Exhaust burst\n        for {set i 0} {$i < 3} {incr i} {\n            r gcra mykey 2 1 60\n        }\n        # Next request should be limited with positive retry_after\n        set result [r gcra mykey 2 1 60]\n        set limited [lindex $result 0]\n        set retry_after [lindex $result 3]\n        assert_equal 1 $limited\n        assert {$retry_after > 0}\n    }\n\n    test {GCRA - retry_after is negative when allowed} {\n        r del mykey\n        set result [r gcra mykey 5 1 60]\n        set limited [lindex 
$result 0]\n        set retry_after [lindex $result 3]\n        assert_equal 0 $limited\n        assert {$retry_after < 0}\n    }\n\n    test {GCRA - num_avail_req decreases with each request} {\n        r del mykey\n        set result1 [r gcra mykey 5 1 60]\n        set avail1 [lindex $result1 2]\n\n        set result2 [r gcra mykey 5 1 60]\n        set avail2 [lindex $result2 2]\n\n        assert {$avail2 < $avail1}\n    }\n\n    test {GCRA - multiple tokens consumed per request} {\n        r del mykey\n        # max_burst=5, so 6 tokens available initially\n        # Consume 1 token (default)\n        set result1 [r gcra mykey 5 1 60]\n        set avail1 [lindex $result1 2]\n        assert_equal 5 $avail1\n\n        r del mykey\n        # Consume 3 tokens from fresh state\n        set result2 [r gcra mykey 5 1 60 TOKENS 3]\n        set avail2 [lindex $result2 2]\n        assert_equal 3 $avail2\n    }\n\n    test {GCRA - rate recovery over time} {\n        r del mykey\n        # max_burst=1, tokens_per_period=1, period=1 (1 token per second)\n        # Exhaust: 2 allowed (burst+1), then limited\n        r gcra mykey 1 1 1\n        r gcra mykey 1 1 1\n        set result [r gcra mykey 1 1 1]\n        assert_equal 1 [lindex $result 0] \"Should be limited\"\n\n        # Wait for rate to recover\n        after 1100\n\n        # Should be allowed again\n        set result [r gcra mykey 1 1 1]\n        assert_equal 0 [lindex $result 0] \"Should be allowed after recovery\"\n    }\n\n    test {GCRA - full_burst_after indicates time to full recovery} {\n        r del mykey\n        # Consume some tokens\n        r gcra mykey 5 1 60\n        r gcra mykey 5 1 60\n\n        set result [r gcra mykey 5 1 60]\n        set full_burst_after [lindex $result 4]\n\n        # full_burst_after should be positive (time until full burst available)\n        # Since we've taken from the burst twice the reset time was incremented\n        # by the rate twice\n        assert 
{$full_burst_after >= 179}\n\n        r del mykey\n        r gcra mykey 5 1 60\n        set result [r gcra mykey 5 1 60 TOKENS 4]\n        set full_burst_after [lindex $result 4]\n        assert {$full_burst_after >= 299}\n    }\n\n    test {GCRA - different keys are independent} {\n        r del key1\n        r del key2\n\n        # Exhaust key1\n        for {set i 0} {$i < 3} {incr i} {\n            r gcra key1 2 1 60\n        }\n        set result1 [r gcra key1 2 1 60]\n        assert_equal 1 [lindex $result1 0] \"key1 should be limited\"\n\n        # key2 should still be available\n        set result2 [r gcra key2 2 1 60]\n        assert_equal 0 [lindex $result2 0] \"key2 should be allowed\"\n    }\n\n    test {GCRA - max_burst of 0 allows only sustained rate} {\n        r del mykey\n        # max_burst=0, tokens_per_period=1, period=1\n        # Only 1 request allowed initially (0+1)\n        set result [r gcra mykey 0 1 1]\n        assert_equal 0 [lindex $result 0] \"First request allowed\"\n        assert_equal 1 [lindex $result 1] \"max_req_num should be 1\"\n\n        # Second request should be limited\n        set result [r gcra mykey 0 1 1]\n        assert_equal 1 [lindex $result 0] \"Second request limited\"\n    }\n\n    test {GCRA - wrong type error} {\n        r del mykey\n        r lpush mykey \"value\"\n        catch {r gcra mykey 5 1 60} err\n        assert_match \"*WRONGTYPE*\" $err\n\n        r del mykey\n        r sadd mykey \"value\"\n        catch {r gcra mykey 5 1 60} err\n        assert_match \"*WRONGTYPE*\" $err\n    }\n\n    test {GCRA - overflow} {\n        r del mykey\n        catch {r gcra mykey 1 1 86400 TOKENS 200000000} err\n        assert_match \"*would cause an overflow*\" $err\n\n        r del mykey\n        catch {r gcra mykey 200000000 1 86400} err\n        assert_match \"*would cause an overflow*\" $err\n\n        r del mykey\n        catch {r gcra mykey 1 1 2147483647 TOKENS 2147483647} err\n        assert_match \"*would 
cause an overflow*\" $err\n    }\n\n    test {GCRASETVALUE - basic functionality} {\n        r del mykey\n        set tat_us [expr {[clock microseconds] + 60000000}]\n        assert_equal {OK} [r gcrasetvalue mykey $tat_us]\n        assert_equal {gcra} [r type mykey]\n        assert {[r pttl mykey] > 0}\n    }\n}\n\nstart_server {tags {\"gcra\" \"external:skip\"}} {\n    test {GCRA - RDB save and reload preserves value} {\n        r del mykey\n        r gcra mykey 5 1 60\n        r gcra mykey 5 1 60\n\n        set dump_before [r dump mykey]\n\n        r debug reload\n\n        assert_equal [r type mykey] \"gcra\"\n        set dump_after [r dump mykey]\n        assert_equal $dump_before $dump_after\n    } {} {needs:debug}\n\n    test {GCRA - RDB save and reload preserves TTL} {\n        r del mykey\n        r gcra mykey 5 1 60\n        set ttl_before [r pexpiretime mykey]\n        assert_morethan $ttl_before 0\n\n        r debug reload\n\n        set ttl_after [r pexpiretime mykey]\n        assert_morethan $ttl_after 0\n        assert_equal $ttl_after $ttl_before\n    } {} {needs:debug}\n\n    test {GCRA - DUMP and RESTORE roundtrip} {\n        r del mykey mykey2\n        r gcra mykey 5 1 60\n        r gcra mykey 5 1 60\n\n        set dump [r dump mykey]\n        set ttl [r pttl mykey]\n        r restore mykey2 $ttl $dump\n\n        assert_equal [r type mykey2] \"gcra\"\n\n        set result_orig [r gcra mykey 5 1 60]\n        set result_restored [r gcra mykey2 5 1 60]\n        assert_equal [lindex $result_orig 2] [lindex $result_restored 2]\n    }\n\n    test {GCRA - AOF rewrite preserves value} {\n        r del mykey\n        r config set appendonly yes\n        waitForBgrewriteaof r\n\n        r gcra mykey 5 1 60\n        r gcra mykey 5 1 60\n\n        set dump_before [r dump mykey]\n\n        r BGREWRITEAOF\n        waitForBgrewriteaof r\n        r debug reload\n\n        assert_equal [r type mykey] \"gcra\"\n        set dump_after [r dump mykey]\n        
assert_equal $dump_before $dump_after\n    } {} {external:skip needs:debug}\n\n    test {GCRA - AOF rewrite preserves TTL} {\n        r del mykey\n        r config set appendonly yes\n        waitForBgrewriteaof r\n\n        r gcra mykey 5 1 60\n\n        r BGREWRITEAOF\n        waitForBgrewriteaof r\n\n        set ttl_before [r pttl mykey]\n        assert {$ttl_before > 0}\n\n        r debug reload\n\n        set ttl_after [r pttl mykey]\n        assert {$ttl_after > 0}\n        assert {$ttl_after <= $ttl_before}\n    } {} {external:skip needs:debug}\n\n    test {GCRA - DEBUG DIGEST consistent after RDB reload} {\n        r del mykey\n        r gcra mykey 5 1 60\n        r gcra mykey 5 1 60\n\n        set digest_before [r debug digest]\n\n        r debug reload\n\n        set digest_after [r debug digest]\n        assert_equal $digest_before $digest_after\n    } {} {needs:debug}\n}\n\nstart_server {tags {\"gcra repl\" \"external:skip\"}} {\n    set replica [srv 0 client]\n    set replica_host [srv 0 host]\n    set replica_port [srv 0 port]\n    set replica_log [srv 0 stdout]\n\n    start_server {tags {}} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        test {GCRA - Replication works} {\n            $master flushdb\n            $replica flushdb\n\n            $replica replicaof $master_host $master_port\n            wait_for_condition 100 100 {\n                [s -1 master_link_status] eq \"up\"\n            } else {\n                fail \"Master <-> Replica didn't finish sync\"\n            }\n\n            set cmdinfo [$replica info commandstats]\n            assert_equal [lsearch -glob $cmdinfo \"cmdstat_gcrasetvalue:*\"] -1\n\n            $master del mykey\n            $master gcra mykey 2 1 1000 TOKENS 2\n            wait_for_ofs_sync $master $replica\n\n            set cmdinfo [$replica info commandstats]\n            assert_morethan_equal [lsearch -glob $cmdinfo 
\"cmdstat_gcrasetvalue:*\"] 0\n        } {} {external:skip}\n    }\n}\n"
  },
  {
    "path": "tests/unit/geo.tcl",
    "content": "# Helper functions to simulate search-in-radius in the Tcl side in order to\n# verify the Redis implementation with a fuzzy test.\nproc geo_degrad deg {expr {$deg*(atan(1)*8/360)}}\nproc geo_raddeg rad {expr {$rad/(atan(1)*8/360)}}\n\nproc geo_distance {lon1d lat1d lon2d lat2d} {\n    set lon1r [geo_degrad $lon1d]\n    set lat1r [geo_degrad $lat1d]\n    set lon2r [geo_degrad $lon2d]\n    set lat2r [geo_degrad $lat2d]\n    set v [expr {sin(($lon2r - $lon1r) / 2)}]\n    set u [expr {sin(($lat2r - $lat1r) / 2)}]\n    expr {2.0 * 6372797.560856 * \\\n            asin(sqrt($u * $u + cos($lat1r) * cos($lat2r) * $v * $v))}\n}\n\nproc geo_random_point {lonvar latvar} {\n    upvar 1 $lonvar lon\n    upvar 1 $latvar lat\n    # Note that the actual latitude limit should be -85 to +85; we restrict\n    # the test to -70 to +70 since in this range the algorithm is more precise,\n    # while outside this range occasionally some element may be missing.\n    set lon [expr {-180 + rand()*360}]\n    set lat [expr {-70 + rand()*140}]\n}\n\n# Return the elements that are not common to both lists.\n# This code is from http://wiki.tcl.tk/15489\nproc compare_lists {List1 List2} {\n   set DiffList {}\n   foreach Item $List1 {\n      if {[lsearch -exact $List2 $Item] == -1} {\n         lappend DiffList $Item\n      }\n   }\n   foreach Item $List2 {\n      if {[lsearch -exact $List1 $Item] == -1} {\n         if {[lsearch -exact $DiffList $Item] == -1} {\n            lappend DiffList $Item\n         }\n      }\n   }\n   return $DiffList\n}\n\n# Return true if the point lies within the circle.\n# search_lon and search_lat define the center of the circle,\n# and lon, lat define the point being searched.\nproc pointInCircle {radius_km lon lat search_lon search_lat} {\n    set radius_m [expr {$radius_km*1000}]\n    set distance [geo_distance $lon $lat $search_lon $search_lat]\n    if {$distance < $radius_m} {\n        return true\n    }\n    return false\n}\n\n# Return true if the point lies within the rectangle.\n# 
search_lon and search_lat define the center of the rectangle,\n# and lon, lat define the point being searched.\n# error: can adjust the width and height of the rectangle according to the error\nproc pointInRectangle {width_km height_km lon lat search_lon search_lat error} {\n    set width_m [expr {$width_km*1000*$error/2}]\n    set height_m [expr {$height_km*1000*$error/2}]\n    set lon_distance [geo_distance $lon $lat $search_lon $lat]\n    set lat_distance [geo_distance $lon $lat $lon $search_lat]\n\n    if {$lon_distance > $width_m || $lat_distance > $height_m} {\n        return false\n    }\n    return true\n}\n\nproc verify_geo_edge_response_bylonlat {expected_response expected_store_response} {\n    catch {r georadius src{t} 1 1 1 km} response\n    assert_match $expected_response $response\n\n    catch {r georadius src{t} 1 1 1 km store dest{t}} response\n    assert_match $expected_store_response $response\n\n    catch {r geosearch src{t} fromlonlat 0 0 byradius 1 km} response\n    assert_match $expected_response $response\n\n    catch {r geosearchstore dest{t} src{t} fromlonlat 0 0 byradius 1 km} response\n    assert_match $expected_store_response $response\n}\n\nproc verify_geo_edge_response_bymember {expected_response expected_store_response} {\n    catch {r georadiusbymember src{t} member 1 km} response\n    assert_match $expected_response $response\n\n    catch {r georadiusbymember src{t} member 1 km store dest{t}} response\n    assert_match $expected_store_response $response\n\n    catch {r geosearch src{t} frommember member bybox 1 1 km} response\n    assert_match $expected_response $response\n\n    catch {r geosearchstore dest{t} src{t} frommember member bybox 1 1 m} response\n    assert_match $expected_store_response $response\n}\n\nproc verify_geo_edge_response_generic {expected_response} {\n    catch {r geodist src{t} member 1 km} response\n    assert_match $expected_response $response\n\n    catch {r geohash src{t} member} response\n    
assert_match $expected_response $response\n\n    catch {r geopos src{t} member} response\n    assert_match $expected_response $response\n}\n\n\n# The following list represents sets of random seed, search position\n# and radius that caused bugs in the past. It is used by the randomized\n# test later as a starting point. When the regression vectors are scanned\n# the code reverts to using random data.\n#\n# The format is: seed km lon lat\nset regression_vectors {\n    {1482225976969 7083 81.634948934258375 30.561509253718668}\n    {1482340074151 5416 -70.863281847379767 -46.347003465679947}\n    {1499014685896 6064 -89.818768962202014 -40.463868561416803}\n    {1412 156 149.29737817929004 15.95807862745508}\n    {441574 143 59.235461856813856 66.269555127373678}\n    {160645 187 -101.88575239939883 49.061997951502917}\n    {750269 154 -90.187939661642517 66.615930412251487}\n    {342880 145 163.03472387745728 64.012747720821181}\n    {729955 143 137.86663517256579 63.986745399416776}\n    {939895 151 59.149620271823181 65.204186651485145}\n    {1412 156 149.29737817929004 15.95807862745508}\n    {564862 149 84.062063109158544 -65.685403922426232}\n    {1546032440391 16751 -1.8175081637769495 20.665668878082954}\n}\nset rv_idx 0\n\nstart_server {tags {\"geo\"}} {\n    test {GEO with wrong type src key} {\n        r set src{t} wrong_type\n\n        verify_geo_edge_response_bylonlat \"WRONGTYPE*\" \"WRONGTYPE*\"\n        verify_geo_edge_response_bymember \"WRONGTYPE*\" \"WRONGTYPE*\"\n        verify_geo_edge_response_generic \"WRONGTYPE*\"\n    }\n\n    test {GEO with non existing src key} {\n        r del src{t}\n\n        verify_geo_edge_response_bylonlat {} 0\n        verify_geo_edge_response_bymember {} 0\n    }\n\n    test {GEO BYLONLAT with empty search} {\n        r del src{t}\n        r geoadd src{t} 13.361389 38.115556 \"Palermo\" 15.087269 37.502669 \"Catania\"\n\n        verify_geo_edge_response_bylonlat {} 0\n    }\n\n    test {GEO BYMEMBER with non existing 
member} {\n        r del src{t}\n        r geoadd src{t} 13.361389 38.115556 \"Palermo\" 15.087269 37.502669 \"Catania\"\n\n        verify_geo_edge_response_bymember \"ERR*\" \"ERR*\"\n    }\n\n    test {GEOADD create} {\n        r geoadd nyc -73.9454966 40.747533 \"lic market\"\n    } {1}\n\n    test {GEOADD update} {\n        r geoadd nyc -73.9454966 40.747533 \"lic market\"\n    } {0}\n\n    test {GEOADD update with CH option} {\n        assert_equal 1 [r geoadd nyc CH 40.747533 -73.9454966 \"lic market\"]\n        lassign [lindex [r geopos nyc \"lic market\"] 0] x1 y1\n        assert {abs($x1) - 40.747 < 0.001}\n        assert {abs($y1) - 73.945 < 0.001}\n    } {}\n\n    test {GEOADD update with NX option} {\n        assert_equal 0 [r geoadd nyc NX -73.9454966 40.747533 \"lic market\"]\n        lassign [lindex [r geopos nyc \"lic market\"] 0] x1 y1\n        assert {abs($x1) - 40.747 < 0.001}\n        assert {abs($y1) - 73.945 < 0.001}\n    } {}\n\n    test {GEOADD update with XX option} {\n        assert_equal 0 [r geoadd nyc XX -83.9454966 40.747533 \"lic market\"]\n        lassign [lindex [r geopos nyc \"lic market\"] 0] x1 y1\n        assert {abs($x1) - 83.945 < 0.001}\n        assert {abs($y1) - 40.747 < 0.001}\n    } {}\n\n    test {GEOADD update with CH NX option} {\n        r geoadd nyc CH NX -73.9454966 40.747533 \"lic market\"\n    } {0}\n\n    test {GEOADD update with CH XX option} {\n        r geoadd nyc CH XX -73.9454966 40.747533 \"lic market\"\n    } {1}\n\n    test {GEOADD update with XX NX option will return syntax error} {\n        catch {\n            r geoadd nyc xx nx -73.9454966 40.747533 \"lic market\"\n        } err\n        set err\n    } {ERR *syntax*}\n\n    test {GEOADD update with invalid option} {\n        catch {\n            r geoadd nyc ch xx foo -73.9454966 40.747533 \"lic market\"\n        } err\n        set err\n    } {ERR *syntax*}\n\n    test {GEOADD invalid coordinates} {\n        catch {\n            r geoadd nyc 
-73.9454966 40.747533 \"lic market\" \\\n                foo bar \"luck market\"\n        } err\n        set err\n    } {*valid*}\n\n    test {GEOADD out-of-range longitude/latitude error reply is well-formed} {\n        r readraw 1\n        set reply [r geoadd nyc 200 40 \"bad lon\"]\n        r readraw 0\n        # RESP simple error: single line starting with '-', no duplicated \"-ERR\" prefix.\n        assert_match {-ERR invalid longitude,latitude pair*} $reply\n    }\n\n    test {GEOADD multi add} {\n        r geoadd nyc -73.9733487 40.7648057 \"central park n/q/r\" -73.9903085 40.7362513 \"union square\" -74.0131604 40.7126674 \"wtc one\" -73.7858139 40.6428986 \"jfk\" -73.9375699 40.7498929 \"q4\" -73.9564142 40.7480973 4545\n    } {6}\n\n    test {Check geoset values} {\n        r zrange nyc 0 -1 withscores\n    } {{wtc one} 1791873972053020 {union square} 1791875485187452 {central park n/q/r} 1791875761332224 4545 1791875796750882 {lic market} 1791875804419201 q4 1791875830079666 jfk 1791895905559723}\n\n    test {GEORADIUS simple (sorted)} {\n        r georadius nyc -73.9798091 40.7598464 3 km asc\n    } {{central park n/q/r} 4545 {union square}}\n\n    test {GEORADIUS_RO simple (sorted)} {\n        r georadius_ro nyc -73.9798091 40.7598464 3 km asc\n    } {{central park n/q/r} 4545 {union square}}\n\n    test {GEOSEARCH simple (sorted)} {\n        r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km asc\n    } {{central park n/q/r} 4545 {union square} {lic market}}\n\n    test {GEOSEARCH FROMLONLAT and FROMMEMBER cannot exist at the same time} {\n        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 frommember xxx bybox 6 6 km asc} e\n        set e\n    } {ERR *syntax*}\n\n    test {GEOSEARCH FROMLONLAT and FROMMEMBER one must exist} {\n        catch {r geosearch nyc bybox 3 3 km asc desc withhash withdist withcoord} e\n        set e\n    } {ERR *exactly one of FROMMEMBER or FROMLONLAT*}\n\n    test {GEOSEARCH BYRADIUS and BYBOX cannot 
exist at the same time} {\n        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 byradius 3 km bybox 3 3 km asc} e\n        set e\n    } {ERR *syntax*}\n\n    test {GEOSEARCH BYRADIUS and BYBOX one must exist} {\n        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 asc desc withhash withdist withcoord} e\n        set e\n    } {ERR *exactly one of BYRADIUS and BYBOX*}\n\n    test {GEOSEARCH with STOREDIST option} {\n        catch {r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km asc storedist} e\n        set e\n    } {ERR *syntax*}\n\n    test {GEORADIUS withdist (sorted)} {\n        r georadius nyc -73.9798091 40.7598464 3 km withdist asc\n    } {{{central park n/q/r} 0.7750} {4545 2.3651} {{union square} 2.7697}}\n\n    test {GEOSEARCH withdist (sorted)} {\n        r geosearch nyc fromlonlat -73.9798091 40.7598464 bybox 6 6 km withdist asc\n    } {{{central park n/q/r} 0.7750} {4545 2.3651} {{union square} 2.7697} {{lic market} 3.1991}}\n\n    test {GEORADIUS with COUNT} {\n        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3\n    } {{central park n/q/r} 4545 {union square}}\n\n    test {GEORADIUS with multiple WITH* tokens} {\n        assert_match {{{central park n/q/r} 1791875761332224 {-73.97334* 40.76480*}} {4545 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHCOORD WITHHASH COUNT 2]\n        assert_match {{{central park n/q/r} 1791875761332224 {-73.97334* 40.76480*}} {4545 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHHASH WITHCOORD COUNT 2]\n        assert_match {{{central park n/q/r} 0.7750 1791875761332224 {-73.97334* 40.76480*}} {4545 2.3651 1791875796750882 {-73.95641* 40.74809*}}} [r georadius nyc -73.9798091 40.7598464 10 km WITHDIST WITHHASH WITHCOORD COUNT 2]\n    }\n\n    test {GEORADIUS with ANY not sorted by default} {\n        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3 ANY\n    } {{wtc one} {union 
square} {central park n/q/r}}\n\n    test {GEORADIUS with ANY sorted by ASC} {\n        r georadius nyc -73.9798091 40.7598464 10 km COUNT 3 ANY ASC\n    } {{central park n/q/r} {union square} {wtc one}}\n\n    test {GEORADIUS with ANY but no COUNT} {\n        catch {r georadius nyc -73.9798091 40.7598464 10 km ANY ASC} e\n        set e\n    } {ERR *ANY*requires*COUNT*}\n\n    test {GEORADIUS with COUNT but missing integer argument} {\n        catch {r georadius nyc -73.9798091 40.7598464 10 km COUNT} e\n        set e\n    } {ERR *syntax*}\n\n    test {GEORADIUS with COUNT DESC} {\n        r georadius nyc -73.9798091 40.7598464 10 km COUNT 2 DESC\n    } {{wtc one} q4}\n\n    test {GEORADIUS HUGE, issue #2767} {\n        r geoadd users -47.271613776683807 -54.534504198047678 user_000000\n        llength [r GEORADIUS users 0 0 50000 km WITHCOORD]\n    } {1}\n\n    test {GEORADIUSBYMEMBER simple (sorted)} {\n        r georadiusbymember nyc \"wtc one\" 7 km\n    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market}}\n\n    test {GEORADIUSBYMEMBER_RO simple (sorted)} {\n        r georadiusbymember_ro nyc \"wtc one\" 7 km\n    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market}}\n    \n    test {GEORADIUSBYMEMBER search areas contain satisfied points in oblique direction} {\n        r del k1\n        \n        r geoadd k1 -0.15307903289794921875 85 n1 0.3515625 85.00019260486917005437 n2\n        set ret1 [r GEORADIUSBYMEMBER k1 n1 4891.94 m]\n        assert_equal $ret1 {n1 n2}\n        \n        r zrem k1 n1 n2\n        r geoadd k1 -4.95211958885192871094 85 n3 11.25 85.0511 n4\n        set ret2 [r GEORADIUSBYMEMBER k1 n3 156544 m]\n        assert_equal $ret2 {n3 n4}\n        \n        r zrem k1 n3 n4\n        r geoadd k1 -45 65.50900022111811438208 n5 90 85.0511 n6\n        set ret3 [r GEORADIUSBYMEMBER k1 n5 5009431 m]\n        assert_equal $ret3 {n5 n6}\n    }\n\n    test {GEORADIUSBYMEMBER crossing pole search} {\n        r del k1\n       
 r geoadd k1 45 65 n1 -135 85.05 n2\n        set ret [r GEORADIUSBYMEMBER k1 n1 5009431 m]\n        assert_equal $ret {n1 n2}\n    }\n\n    test {GEOSEARCH FROMMEMBER simple (sorted)} {\n        r geosearch nyc frommember \"wtc one\" bybox 14 14 km\n    } {{wtc one} {union square} {central park n/q/r} 4545 {lic market} q4}\n\n    test {GEOSEARCH vs GEORADIUS} {\n        r del Sicily\n        r geoadd Sicily 13.361389 38.115556 \"Palermo\" 15.087269 37.502669 \"Catania\"\n        r geoadd Sicily 12.758489 38.788135 \"edge1\"   17.241510 38.788135 \"eage2\"\n        set ret1 [r georadius Sicily 15 37 200 km asc]\n        assert_equal $ret1 {Catania Palermo}\n        set ret2 [r geosearch Sicily fromlonlat 15 37 bybox 400 400 km asc]\n        assert_equal $ret2 {Catania Palermo eage2 edge1}\n    }\n\n    test {GEOSEARCH non square, long and narrow} {\n        r del Sicily\n        r geoadd Sicily 12.75 36.995 \"test1\"\n        r geoadd Sicily 12.75 36.50 \"test2\"\n        r geoadd Sicily 13.00 36.50 \"test3\"\n        # box height=2km width=400km\n        set ret1 [r geosearch Sicily fromlonlat 15 37 bybox 400 2 km]\n        assert_equal $ret1 {test1}\n\n        # Add a western Hemisphere point\n        r geoadd Sicily -1 37.00 \"test3\"\n        set ret2 [r geosearch Sicily fromlonlat 15 37 bybox 3000 2 km asc]\n        assert_equal $ret2 {test1 test3}\n    }\n\n    test {GEOSEARCH corner point test} {\n        r del Sicily\n        r geoadd Sicily 12.758489 38.788135 edge1 17.241510 38.788135 edge2 17.250000 35.202000 edge3 12.750000 35.202000 edge4 12.748489955781654 37 edge5 15 38.798135872540925 edge6 17.251510044218346 37 edge7 15 35.201864127459075 edge8 12.692799634687903 38.798135872540925 corner1 12.692799634687903 38.798135872540925 corner2 17.200560937451133 35.201864127459075 corner3 12.799439062548865 35.201864127459075 corner4\n        set ret [lsort [r geosearch Sicily fromlonlat 15 37 bybox 400 400 km asc]]\n        assert_equal $ret {edge1 edge2 
edge5 edge7}\n    }\n\n    test {GEORADIUSBYMEMBER withdist (sorted)} {\n        r georadiusbymember nyc \"wtc one\" 7 km withdist\n    } {{{wtc one} 0.0000} {{union square} 3.2544} {{central park n/q/r} 6.7000} {4545 6.1975} {{lic market} 6.8969}}\n\n    test {GEOHASH is able to return geohash strings} {\n        # Example from Wikipedia.\n        r del points\n        r geoadd points -5.6 42.6 test\n        lindex [r geohash points test] 0\n    } {ezs42e44yx0}\n\n    test {GEOHASH with only key as argument} {\n        r del points\n        r geoadd points 10 20 a 30 40 b\n        set result [r geohash points]\n        assert {$result eq {}}\n    } \n\n    test {GEOPOS simple} {\n        r del points\n        r geoadd points 10 20 a 30 40 b\n        lassign [lindex [r geopos points a b] 0] x1 y1\n        lassign [lindex [r geopos points a b] 1] x2 y2\n        assert {abs($x1 - 10) < 0.001}\n        assert {abs($y1 - 20) < 0.001}\n        assert {abs($x2 - 30) < 0.001}\n        assert {abs($y2 - 40) < 0.001}\n    }\n\n    test {GEOPOS missing element} {\n        r del points\n        r geoadd points 10 20 a 30 40 b\n        lindex [r geopos points a x b] 1\n    } {}\n\n    test {GEOPOS with only key as argument} {\n        r del points\n        r geoadd points 10 20 a 30 40 b\n        set result [r geopos points]\n        assert {$result eq {}}\n    }\n\n    test {GEODIST simple & unit} {\n        r del points\n        r geoadd points 13.361389 38.115556 \"Palermo\" \\\n                        15.087269 37.502669 \"Catania\"\n        set m [r geodist points Palermo Catania]\n        assert {$m > 166274 && $m < 166275}\n        set km [r geodist points Palermo Catania km]\n        assert {$km > 166.2 && $km < 166.3}\n        set dist [r geodist points Palermo Palermo]\n        assert {$dist eq 0.0000}\n    }\n\n    test {GEODIST missing elements} {\n        r del points\n        r geoadd points 13.361389 38.115556 \"Palermo\" \\\n                        15.087269 
37.502669 \"Catania\"\n        set m [r geodist points Palermo Agrigento]\n        assert {$m eq {}}\n        set m [r geodist points Ragusa Agrigento]\n        assert {$m eq {}}\n        set m [r geodist empty_key Palermo Catania]\n        assert {$m eq {}}\n    }\n\n    test {GEORADIUS STORE option: syntax error} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" \\\n                           15.087269 37.502669 \"Catania\"\n        catch {r georadius points{t} 13.361389 38.115556 50 km store} e\n        set e\n    } {*ERR*syntax*}\n\n    test {GEOSEARCHSTORE STORE option: syntax error} {\n        catch {r geosearchstore abc{t} points{t} fromlonlat 13.361389 38.115556 byradius 50 km store abc{t}} e\n        set e\n    } {*ERR*syntax*}\n\n    test {GEORANGE STORE option: incompatible options} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" \\\n                           15.087269 37.502669 \"Catania\"\n        catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withdist} e\n        assert_match {*ERR*} $e\n        catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withhash} e\n        assert_match {*ERR*} $e\n        catch {r georadius points{t} 13.361389 38.115556 50 km store points2{t} withcoords} e\n        assert_match {*ERR*} $e\n    }\n\n    test {GEORANGE STORE option: plain usage} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" \\\n                           15.087269 37.502669 \"Catania\"\n        r georadius points{t} 13.361389 38.115556 500 km store points2{t}\n        assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]\n    }\n\n    test {GEORADIUSBYMEMBER STORE/STOREDIST option: plain usage} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" 15.087269 37.502669 \"Catania\"\n\n        r georadiusbymember points{t} Palermo 500 km store points2{t}\n    
    assert_equal {Palermo Catania} [r zrange points2{t} 0 -1]\n\n        r georadiusbymember points{t} Catania 500 km storedist points2{t}\n        assert_equal {Catania Palermo} [r zrange points2{t} 0 -1]\n\n        set res [r zrange points2{t} 0 -1 withscores]\n        assert {[lindex $res 1] < 1}\n        assert {[lindex $res 3] > 166}\n    }\n\n    test {GEOSEARCHSTORE STORE option: plain usage} {\n        r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km\n        assert_equal [r zrange points{t} 0 -1] [r zrange points2{t} 0 -1]\n    }\n\n    test {GEORANGE STOREDIST option: plain usage} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" \\\n                           15.087269 37.502669 \"Catania\"\n        r georadius points{t} 13.361389 38.115556 500 km storedist points2{t}\n        set res [r zrange points2{t} 0 -1 withscores]\n        assert {[lindex $res 1] < 1}\n        assert {[lindex $res 3] > 166}\n        assert {[lindex $res 3] < 167}\n    }\n\n    test {GEOSEARCHSTORE STOREDIST option: plain usage} {\n        r geosearchstore points2{t} points{t} fromlonlat 13.361389 38.115556 byradius 500 km storedist\n        set res [r zrange points2{t} 0 -1 withscores]\n        assert {[lindex $res 1] < 1}\n        assert {[lindex $res 3] > 166}\n        assert {[lindex $res 3] < 167}\n    }\n\n    test {GEORANGE STOREDIST option: COUNT ASC and DESC} {\n        r del points{t}\n        r geoadd points{t} 13.361389 38.115556 \"Palermo\" \\\n                           15.087269 37.502669 \"Catania\"\n        r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} asc count 1\n        assert {[r zcard points2{t}] == 1}\n        set res [r zrange points2{t} 0 -1 withscores]\n        assert {[lindex $res 0] eq \"Palermo\"}\n\n        r georadius points{t} 13.361389 38.115556 500 km storedist points2{t} desc count 1\n        assert {[r zcard points2{t}] == 1}\n        set res [r 
zrange points2{t} 0 -1 withscores]\n        assert {[lindex $res 0] eq \"Catania\"}\n    }\n\n    test {GEOSEARCH the box spans -180° or 180°} {\n        r del points\n        r geoadd points 179.5 36 point1\n        r geoadd points -179.5 36 point2\n        assert_equal {point1 point2} [r geosearch points fromlonlat 179 37 bybox 400 400 km asc]\n        assert_equal {point2 point1} [r geosearch points fromlonlat -179 37 bybox 400 400 km asc]\n    }\n\n    test {GEOSEARCH with small distance} {\n        r del points\n        r geoadd points -122.407107 37.794300 1\n        r geoadd points -122.227336 37.794300 2\n        assert_equal {{1 0.0001} {2 9.8182}} [r GEORADIUS points -122.407107 37.794300 30 mi ASC WITHDIST]\n    }\n\n    foreach {type} {byradius bybox} {\n    test \"GEOSEARCH fuzzy test - $type\" {\n        if {$::accurate} { set attempt 300 } else { set attempt 30 }\n        while {[incr attempt -1]} {\n            set rv [lindex $regression_vectors $rv_idx]\n            incr rv_idx\n\n            set radius_km 0; set width_km 0; set height_km 0\n            unset -nocomplain debuginfo\n            set srand_seed [clock milliseconds]\n            if {$rv ne {}} {set srand_seed [lindex $rv 0]}\n            lappend debuginfo \"srand_seed is $srand_seed\"\n            expr {srand($srand_seed)} ; # If you need a reproducible run\n            r del mypoints\n\n            if {[randomInt 10] == 0} {\n                # From time to time use very big radii\n                if {$type == \"byradius\"} {\n                    set radius_km [expr {[randomInt 5000]+10}]\n                } elseif {$type == \"bybox\"} {\n                    set width_km [expr {[randomInt 5000]+10}]\n                    set height_km [expr {[randomInt 5000]+10}]\n                }\n            } else {\n                # Normally use smaller, ~10-200km radii to stress\n                # test the code the most in edge cases.\n                if {$type == \"byradius\"} {\n               
     set radius_km [expr {[randomInt 200]+10}]\n                } elseif {$type == \"bybox\"} {\n                    set width_km [expr {[randomInt 200]+10}]\n                    set height_km [expr {[randomInt 200]+10}]\n                }\n            }\n            if {$rv ne {}} {\n                set radius_km [lindex $rv 1]\n                set width_km [lindex $rv 1]\n                set height_km [lindex $rv 1]\n            }\n            geo_random_point search_lon search_lat\n            if {$rv ne {}} {\n                set search_lon [lindex $rv 2]\n                set search_lat [lindex $rv 3]\n            }\n            lappend debuginfo \"Search area: $search_lon,$search_lat $radius_km $width_km $height_km km\"\n            set tcl_result {}\n            set argv {}\n            for {set j 0} {$j < 20000} {incr j} {\n                geo_random_point lon lat\n                lappend argv $lon $lat \"place:$j\"\n                if {$type == \"byradius\"} {\n                    if {[pointInCircle $radius_km $lon $lat $search_lon $search_lat]} {\n                        lappend tcl_result \"place:$j\"\n                    }\n                } elseif {$type == \"bybox\"} {\n                    if {[pointInRectangle $width_km $height_km $lon $lat $search_lon $search_lat 1]} {\n                        lappend tcl_result \"place:$j\"\n                    }\n                }\n                lappend debuginfo \"place:$j $lon $lat\"\n            }\n            r geoadd mypoints {*}$argv\n            if {$type == \"byradius\"} {\n                set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat byradius $radius_km km]]\n            } elseif {$type == \"bybox\"} {\n                set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_km $height_km km]]\n            }\n            set res2 [lsort $tcl_result]\n            set test_result OK\n\n            if {$res != $res2} {\n                set 
rounding_errors 0\n                set diff [compare_lists $res $res2]\n                foreach place $diff {\n                    lassign [lindex [r geopos mypoints $place] 0] lon lat\n                    set mydist [geo_distance $lon $lat $search_lon $search_lat]\n                    set mydist [expr $mydist/1000]\n                    if {$type == \"byradius\"} {\n                        if {($mydist / $radius_km) > 0.999} {\n                            incr rounding_errors\n                            continue\n                        }\n                        if {$mydist < [expr {$radius_km*1000}]} {\n                            # This is a false positive for redis since given the\n                            # same points the higher precision calculation provided\n                            # by TCL shows the point within range\n                            incr rounding_errors\n                            continue\n                        }\n                    } elseif {$type == \"bybox\"} {\n                        # we allow a 0.1% margin for floating point calculation error\n                        if {[pointInRectangle $width_km $height_km $lon $lat $search_lon $search_lat 1.001]} {\n                            incr rounding_errors\n                            continue\n                        }\n                    }\n                }\n\n                # Make sure this is a real error and not a rounding issue.\n                if {[llength $diff] == $rounding_errors} {\n                    set res $res2; # Error silenced\n                }\n            }\n\n            if {$res != $res2} {\n                set diff [compare_lists $res $res2]\n                puts \"*** Possible problem in GEO radius query ***\"\n                puts \"Redis: $res\"\n                puts \"Tcl  : $res2\"\n                puts \"Diff : $diff\"\n                puts [join $debuginfo \"\\n\"]\n                foreach place $diff {\n                    if {[lsearch -exact $res2 $place] != -1} {\n
                        set where \"(only in Tcl)\"\n                    } else {\n                        set where \"(only in Redis)\"\n                    }\n                    lassign [lindex [r geopos mypoints $place] 0] lon lat\n                    set mydist [geo_distance $lon $lat $search_lon $search_lat]\n                    set mydist [expr $mydist/1000]\n                    puts \"$place -> [r geopos mypoints $place] $mydist $where\"\n                }\n                set test_result FAIL\n            }\n            unset -nocomplain debuginfo\n            if {$test_result ne {OK}} break\n        }\n        set test_result\n    } {OK}\n    }\n\n    test {GEOSEARCH box edges fuzzy test} {\n        if {$::accurate} { set attempt 300 } else { set attempt 30 }\n        while {[incr attempt -1]} {\n            unset -nocomplain debuginfo\n            set srand_seed [clock milliseconds]\n            lappend debuginfo \"srand_seed is $srand_seed\"\n            expr {srand($srand_seed)} ; # If you need a reproducible run\n            r del mypoints\n\n            geo_random_point search_lon search_lat\n            set width_m [expr {[randomInt 10000]+10}]\n            set height_m [expr {[randomInt 10000]+10}]\n            set lat_delta [geo_raddeg [expr {$height_m/2/6372797.560856}]]\n            set long_delta_top [geo_raddeg [expr {$width_m/2/6372797.560856/cos([geo_degrad [expr {$search_lat+$lat_delta}]])}]]\n            set long_delta_middle [geo_raddeg [expr {$width_m/2/6372797.560856/cos([geo_degrad $search_lat])}]]\n            set long_delta_bottom [geo_raddeg [expr {$width_m/2/6372797.560856/cos([geo_degrad [expr {$search_lat-$lat_delta}]])}]]\n\n            # Total of 8 points are generated, which are located at each vertex and the center of each side\n            set points(north) [list $search_lon [expr {$search_lat+$lat_delta}]]\n            set points(south) [list $search_lon [expr {$search_lat-$lat_delta}]]\n            
set points(east) [list [expr {$search_lon+$long_delta_middle}] $search_lat]\n            set points(west) [list [expr {$search_lon-$long_delta_middle}] $search_lat]\n            set points(north_east) [list [expr {$search_lon+$long_delta_top}] [expr {$search_lat+$lat_delta}]]\n            set points(north_west) [list [expr {$search_lon-$long_delta_top}] [expr {$search_lat+$lat_delta}]]\n            set points(south_east) [list [expr {$search_lon+$long_delta_bottom}] [expr {$search_lat-$lat_delta}]]\n            set points(south_west) [list [expr {$search_lon-$long_delta_bottom}] [expr {$search_lat-$lat_delta}]]\n\n            lappend debuginfo \"Search area: geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_m $height_m m\"\n            set tcl_result {}\n            foreach name [array names points] {\n                set x [lindex $points($name) 0]\n                set y [lindex $points($name) 1]\n                # If longitude crosses -180° or 180°, we need to convert it.\n                # latitude doesn't have this problem, because its range is -70~70, see geo_random_point\n                if {$x > 180} {\n                    set x [expr {$x-360}]\n                } elseif {$x < -180} {\n                    set x [expr {$x+360}]\n                }\n                r geoadd mypoints $x $y place:$name\n                lappend tcl_result \"place:$name\"\n                lappend debuginfo \"geoadd mypoints $x $y place:$name\"\n            }\n\n            set res2 [lsort $tcl_result]\n\n            # make the box larger by two meters in each direction to put the coordinates slightly inside the box.\n            set height_new [expr {$height_m+4}]\n            set width_new [expr {$width_m+4}]\n            set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]\n            if {$res != $res2} {\n                set diff [compare_lists $res $res2]\n                lappend debuginfo \"res: $res, res2: $res2, diff: $diff\"\n
                fail \"place should be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new\"\n            }\n\n            # The width decreases and the height increases. Only north and south are found\n            set width_new [expr {$width_m-4}]\n            set height_new [expr {$height_m+4}]\n            set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]\n            if {$res != {place:north place:south}} {\n                lappend debuginfo \"res: $res\"\n                fail \"place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new\"\n            }\n\n            # The width increases and the height decreases. Only east and west are found\n            set width_new [expr {$width_m+4}]\n            set height_new [expr {$height_m-4}]\n            set res [lsort [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]]\n            if {$res != {place:east place:west}} {\n                lappend debuginfo \"res: $res\"\n                fail \"place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new\"\n            }\n\n            # make the box smaller by two meters in each direction to put the coordinates slightly outside the box.\n            set height_new [expr {$height_m-4}]\n            set width_new [expr {$width_m-4}]\n            set res [r geosearch mypoints fromlonlat $search_lon $search_lat bybox $width_new $height_new m]\n            if {$res != \"\"} {\n                lappend debuginfo \"res: $res\"\n                fail \"place should not be found, debuginfo: $debuginfo, height_new: $height_new width_new: $width_new\"\n            }\n            unset -nocomplain debuginfo\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/hotkeys.tcl",
    "content": "# Helper function to convert flat array response to dict\nproc hotkeys_array_to_dict {arr} {\n    set result {}\n    for {set i 0} {$i < [llength $arr]} {incr i 2} {\n        set key [lindex $arr $i]\n        set val [lindex $arr [expr {$i + 1}]]\n        dict set result $key $val\n    }\n    return $result\n}\n\nstart_server {tags {external:skip \"hotkeys\"}} {\n    test {HOTKEYS START - METRICS required} {\n        catch {r hotkeys start} err\n        assert_match \"*METRICS parameter is required*\" $err\n    }\n\n    test {HOTKEYS START - METRICS with CPU only} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU]\n        r set key1 value1\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        assert [dict exists $result \"total-cpu-time-user-ms\"]\n        assert [dict exists $result \"total-cpu-time-sys-ms\"]\n        assert [dict exists $result \"by-cpu-time-us\"]\n        assert {![dict exists $result \"total-net-bytes\"]}\n        assert {![dict exists $result \"by-net-bytes\"]}\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - METRICS with NET only} {\n        assert_equal {OK} [r hotkeys start METRICS 1 NET]\n        r set key1 value1\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        assert [dict exists $result \"total-net-bytes\"]\n        assert [dict exists $result \"by-net-bytes\"]\n        assert {![dict exists $result \"total-cpu-time-user-ms\"]}\n        assert {![dict exists $result \"total-cpu-time-sys-ms\"]}\n        assert {![dict exists $result \"by-cpu-time-us\"]}\n\n        assert_equal 
{OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - METRICS with both CPU and NET} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET]\n        r set key1 value1\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        assert [dict exists $result \"total-cpu-time-user-ms\"]\n        assert [dict exists $result \"total-cpu-time-sys-ms\"]\n        assert [dict exists $result \"by-cpu-time-us\"]\n        assert [dict exists $result \"total-net-bytes\"]\n        assert [dict exists $result \"by-net-bytes\"]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: session already started} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU]\n        catch {r hotkeys start METRICS 1 NET} err\n        assert_match \"*hotkey tracking session already in progress*\" $err\n        assert_equal {OK} [r hotkeys stop]\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: invalid METRICS count} {\n        catch {r hotkeys start METRICS 0} err\n        assert_match \"*METRICS count*\" $err\n        catch {r hotkeys start METRICS -1} err\n        assert_match \"*METRICS count*\" $err\n    }\n\n    test {HOTKEYS START - Error: METRICS count mismatch} {\n        catch {r hotkeys start METRICS 2 CPU} err\n        assert_match \"*METRICS count does not match number of metric types provided*\" $err\n        catch {r hotkeys start METRICS 1 CPU NET} err\n        assert_match \"*syntax error*\" $err\n        catch {r hotkeys start METRICS 3 CPU NET} err\n        assert_match \"*METRICS count*\" $err\n    }\n\n    test {HOTKEYS START - Error: METRICS invalid metrics} {\n        catch {r hotkeys start METRICS 1 GPU} err\n        assert_match \"*METRICS no valid metrics*\" $err\n        catch {r hotkeys 
start METRICS 2 GPU NYET} err\n        assert_match \"*METRICS no valid metrics*\" $err\n\n        # Allowing invalid metrics gives us forward-compatibility\n        assert_equal {OK} [r hotkeys start METRICS 2 GPU CPU]\n\n        assert_equal {OK} [r hotkeys stop]\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: METRICS same parameter} {\n        catch {r hotkeys start METRICS 2 CPU CPU} err\n        assert_match \"*METRICS CPU*\" $err\n        catch {r hotkeys start METRICS 2 NET NET} err\n        assert_match \"*METRICS NET*\" $err\n    }\n\n\n    test {HOTKEYS START - with COUNT parameter} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET COUNT 20]\n\n        for {set i 0} {$i < 30} {incr i} {\n            r set \"key_$i\" \"value_$i\"\n        }\n\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        set cpu_array [dict get $result \"by-cpu-time-us\"]\n        set net_array [dict get $result \"by-net-bytes\"]\n\n        set cpu_count [expr {[llength $cpu_array] / 2}]\n        set net_count [expr {[llength $net_array] / 2}]\n \n        assert_lessthan_equal $cpu_count 20\n        assert_lessthan_equal $net_count 20\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: COUNT out of range} {\n        catch {r hotkeys start METRICS 1 CPU COUNT 0} err\n        assert_match \"*COUNT must be between 1 and 64*\" $err\n        catch {r hotkeys start METRICS 1 CPU COUNT 100} err\n        assert_match \"*COUNT must be between 1 and 64*\" $err\n    }\n\n    test {HOTKEYS START - with DURATION parameter} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU DURATION 1]\n        after 1500\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex 
$result 0] eq \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        assert_equal 0 [dict get $result \"tracking-active\"]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - with SAMPLE parameter} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET SAMPLE 10]\n        assert_equal {OK} [r hotkeys stop]\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: SAMPLE ratio invalid} {\n        catch {r hotkeys start METRICS 1 CPU SAMPLE 0} err\n        assert_match \"*SAMPLE ratio must be positive*\" $err\n    }\n\n    test {HOTKEYS START - Error: SLOTS not allowed in non-cluster mode} {\n        catch {r hotkeys start METRICS 1 CPU SLOTS 2 0 5} err\n        assert_match \"*SLOTS parameter cannot be used in non-cluster mode*\" $err\n    } {} {cluster:skip}\n\n    test {HOTKEYS STOP - basic functionality} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET]\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        assert_equal 0 [dict get $result \"tracking-active\"]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS RESET - basic functionality} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU]\n        assert_equal {OK} [r hotkeys stop]\n        assert_equal {OK} [r hotkeys reset]\n        # After reset, GET should return nil\n        set result [lindex [r hotkeys get] 0]\n        assert_equal {} $result\n    }\n\n    test {HOTKEYS RESET - Error: session in progress} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU]\n        catch {r hotkeys reset} err\n        assert_match \"*hotkey tracking session in progress, stop tracking first*\" $err\n        assert_equal {OK} [r hotkeys stop]\n        
assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS GET - returns nil when not started} {\n        set result [lindex [r hotkeys get] 0]\n        assert_equal {} $result\n    }\n\n    test {HOTKEYS GET - sample-ratio field} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET SAMPLE 5]\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        assert_equal 5 [dict get $result \"sample-ratio\"]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS - nested commands} {\n        assert_equal {OK} [r hotkeys start METRICS 1 NET]\n        r eval \"redis.call('set', 'x', 1)\" 1 x\n        r eval \"redis.call('set', 'y', 1)\" 1 y\n        r eval \"redis.call('set', 'x', 2)\" 1 x\n        r eval \"redis.call('set', 'x', 3)\" 1 x\n\n        set result [lindex [r hotkeys get] 0]\n        set result [dict get $result \"by-net-bytes\"]\n        assert [dict exists $result \"x\"]\n        assert [dict exists $result \"y\"]\n\n        assert_equal {OK} [r hotkeys stop]\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS - commands inside MULTI/EXEC} {\n        set key1 \"key1\\{t\\}\"\n        set key2 \"key2\\{t\\}\"\n\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET]\n        r multi\n        # Send multiple commands to avoid <1us cpu for $key2 which we assert\n        # at end of test\n        for {set i 0} {$i < 30} {incr i} {\n            r set $key1 value1\n            r set $key2 value1\n            r set $key1 value2\n            r set $key1 value3\n        }\n        r exec\n\n        assert_equal {OK} [r hotkeys stop]\n        set result [lindex [r hotkeys get] 0]\n        assert_equal {OK} [r hotkeys reset]\n\n        # Check NET metrics\n        set net_result [dict get $result \"by-net-bytes\"]\n     
   # Both keys should be tracked from within the MULTI/EXEC block\n        assert [dict exists $net_result $key1]\n        assert [dict exists $net_result $key2]\n        # key1 should have more bytes than key2 since it's accessed more times\n        assert {[dict get $net_result $key1] > [dict get $net_result $key2]}\n\n        # Check CPU metrics\n        set cpu_result [dict get $result \"by-cpu-time-us\"]\n        # Both keys should be tracked from within the MULTI/EXEC block\n        assert [dict exists $cpu_result $key1]\n        assert [dict exists $cpu_result $key2]\n    }\n\n    test {HOTKEYS - EVAL inside MULTI/EXEC with nested calls} {\n        set key1 \"evalkey1\\{t\\}\"\n        set key2 \"evalkey2\\{t\\}\"\n\n        assert_equal {OK} [r hotkeys start METRICS 1 NET]\n        r multi\n        r eval {redis.call('set', KEYS[1], 'value1')} 1 $key1\n        r eval {redis.call('set', KEYS[1], 'value2'); redis.call('set', KEYS[1], 'value3')} 1 $key1\n        r eval {redis.call('set', KEYS[1], 'value4')} 1 $key2\n        r exec\n\n        assert_equal {OK} [r hotkeys stop]\n        set result [lindex [r hotkeys get] 0]\n        assert_equal {OK} [r hotkeys reset]\n\n        # Check NET metrics - both keys should be tracked through EVAL commands\n        set net_result [dict get $result \"by-net-bytes\"]\n        assert [dict exists $net_result $key1]\n        assert [dict exists $net_result $key2]\n        # key1 should have more bytes than key2 since it's accessed by more EVAL commands\n        assert {[dict get $net_result $key1] > [dict get $net_result $key2]}\n    }\n\n    test {HOTKEYS GET - no conditional fields without selected slots} {\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET SAMPLE 10]\n        r set key1 value1\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result 
[hotkeys_array_to_dict $result]\n        }\n\n        # Should NOT have selected-slots conditional fields\n        assert {![dict exists $result \"sampled-commands-selected-slots-us\"]}\n        assert {![dict exists $result \"all-commands-selected-slots-us\"]}\n        assert {![dict exists $result \"net-bytes-sampled-commands-selected-slots\"]}\n        assert {![dict exists $result \"net-bytes-all-commands-selected-slots\"]}\n\n        # Should have all-slots fields\n        assert [dict exists $result \"all-commands-all-slots-us\"]\n        assert [dict exists $result \"net-bytes-all-commands-all-slots\"]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    foreach sample_ratio {1 100 500 1000} {\n        test \"HOTKEYS detection with biased key access, sample ratio = $sample_ratio\" {\n            # Generate 100 random keys\n            set all_keys {}\n            for {set i 0} {$i < 100} {incr i} {\n                lappend all_keys \"key_[format %03d $i]\"\n            }\n\n            # Choose 20 keys to bias towards. 
These will be our hot keys\n            set hot_keys {}\n            for {set i 0} {$i < 20} {incr i} {\n                lappend hot_keys [lindex $all_keys $i]\n            }\n\n            assert_equal {OK} [r hotkeys start METRICS 2 CPU NET SAMPLE $sample_ratio]\n\n            # Biasing towards the 20 chosen keys when sending commands\n            set total_commands 50000\n            for {set i 0} {$i < $total_commands} {incr i} {\n                set rand [expr {rand()}]\n                if {$rand < 0.8} {\n                    set key [lindex $hot_keys [expr {int(rand() * 20)}]]\n                } else {\n                    set key [lindex $all_keys [expr {20 + int(rand() * 80)}]]\n                }\n                r set $key \"value_$i\"\n            }\n\n            assert_equal {OK} [r hotkeys stop]\n\n            set result [lindex [r hotkeys get] 0]\n            assert_not_equal $result {}\n\n            # Convert to dict if it's a flat array\n            if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n                set result [hotkeys_array_to_dict $result]\n            }\n\n            set cpu_time_array [dict get $result \"by-cpu-time-us\"]\n            set net_bytes_array [dict get $result \"by-net-bytes\"]\n\n            set returned_cpu_keys {}\n            for {set i 0} {$i < [llength $cpu_time_array]} {incr i 2} {\n                lappend returned_cpu_keys [lindex $cpu_time_array $i]\n            }\n\n            # Check that most of the returned keys (based on cpu time) are from our\n            # hot_keys list\n            set num_returned_cpu [llength $returned_cpu_keys]\n            assert_lessthan_equal $num_returned_cpu 10\n            assert_morethan $num_returned_cpu 0\n\n            set res 0\n            foreach key $returned_cpu_keys {\n                if {[lsearch -exact $hot_keys $key] >= 0} {\n                    incr res\n                }\n            }\n            assert_morethan $res 5\n\n            
set returned_net_keys {}\n            for {set i 0} {$i < [llength $net_bytes_array]} {incr i 2} {\n                lappend returned_net_keys [lindex $net_bytes_array $i]\n            }\n\n            # Same as cpu-time but for net-bytes\n            set num_returned_net [llength $returned_net_keys]\n            assert_lessthan_equal $num_returned_net 10\n            assert_morethan $num_returned_net 0\n\n            set res_net 0\n            foreach key $returned_net_keys {\n                if {[lsearch -exact $hot_keys $key] >= 0} {\n                    incr res_net\n                }\n            }\n            assert_morethan $res_net 5\n\n            assert_equal {OK} [r hotkeys reset]\n        }\n    }\n}\n\nstart_server {tags {external:skip \"hotkeys\"}} {\n    test {HOTKEYS GET - RESP3 returns map with flat array values for hotkeys} {\n        r hello 3\n\n        assert_equal {OK} [r hotkeys start METRICS 2 CPU NET]\n        r set testkey testvalue\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n\n        # In RESP3, the outer result is a native map (dict)\n        assert [dict exists $result \"tracking-active\"]\n        assert [dict exists $result \"sample-ratio\"]\n        assert [dict exists $result \"selected-slots\"]\n        assert [dict exists $result \"by-cpu-time-us\"]\n        assert [dict exists $result \"by-net-bytes\"]\n\n        # Verify by-cpu-time-us is a flat array [key1, val1, key2, val2, ...]\n        set cpu_array [dict get $result \"by-cpu-time-us\"]\n        # Flat array length should be even (key-value pairs)\n        assert {[llength $cpu_array] % 2 == 0}\n        # First element is the key name (string), second is the value (integer)\n        set first_key [lindex $cpu_array 0]\n        set first_val [lindex $cpu_array 1]\n        assert_equal \"testkey\" $first_key\n        assert {[string is integer $first_val]}\n\n        # Verify by-net-bytes is a flat array [key1, val1, key2, 
val2, ...]\n        set net_array [dict get $result \"by-net-bytes\"]\n        # Flat array length should be even (key-value pairs)\n        assert {[llength $net_array] % 2 == 0}\n        # First element is the key name (string), second is the value (integer)\n        set first_key [lindex $net_array 0]\n        set first_val [lindex $net_array 1]\n        assert_equal \"testkey\" $first_key\n        assert {[string is integer $first_val]}\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n\n    test {HOTKEYS GET - selected-slots returns full range in non-cluster mode} {\n        assert_equal {OK} [r hotkeys start METRICS 1 CPU]\n        assert_equal {OK} [r hotkeys stop]\n\n        set result [lindex [r hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result \"selected-slots\"]\n        # Should return single range [[0, 16383]]\n        assert_equal 1 [llength $slots]\n        set range [lindex $slots 0]\n        assert_equal 2 [llength $range]\n        assert_equal 0 [lindex $range 0]\n        assert_equal 16383 [lindex $range 1]\n\n        assert_equal {OK} [r hotkeys reset]\n    }\n}\n\nstart_cluster 1 0 {tags {external:skip cluster hotkeys}} {\n\n    test {HOTKEYS START - with SLOTS parameter in cluster mode} {\n        assert_equal {OK} [R 0 hotkeys start METRICS 2 CPU NET SLOTS 2 0 5]\n        assert_equal {OK} [R 0 hotkeys stop]\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS START - Error: SLOTS count mismatch} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 2 0} err\n        assert_match \"*not enough slot numbers provided*\" $err\n    }\n\n    test {HOTKEYS START - Error: duplicate slots} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 2 0 0} err\n        assert_match \"*duplicate slot number*\" $err\n    }\n\n    test {HOTKEYS START - Error: SLOTS already 
specified} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 1 0 SLOTS 1 5} err\n        assert_match \"*SLOTS parameter already specified*\" $err\n    }\n\n    test {HOTKEYS START - Error: invalid slot - negative value} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 1 -1} err\n        assert_match \"*Invalid or out of range slot*\" $err\n    }\n\n    test {HOTKEYS START - Error: invalid slot - out of range} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 1 16384} err\n        assert_match \"*Invalid or out of range slot*\" $err\n    }\n\n    test {HOTKEYS START - Error: invalid slot - non-integer} {\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 1 abc} err\n        assert_match \"*Invalid or out of range slot*\" $err\n    }\n\n    test {HOTKEYS GET - selected-slots field with individual slots} {\n        assert_equal {OK} [R 0 hotkeys start METRICS 2 CPU NET SLOTS 2 0 5]\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result \"selected-slots\"]\n        # Two individual slots should return two 1-element arrays\n        assert_equal 2 [llength $slots]\n        assert_equal {0} [lindex $slots 0]\n        assert_equal {5} [lindex $slots 1]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS GET - selected-slots with unordered input slots are sorted} {\n        # Slots 10,5,1,0,6,2 should become [[0,2], [5,6], [10]]\n        assert_equal {OK} [R 0 hotkeys start METRICS 1 CPU SLOTS 6 10 5 1 0 6 2]\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result 
\"selected-slots\"]\n        assert_equal 3 [llength $slots]\n        assert_equal {0 2} [lindex $slots 0]\n        assert_equal {5 6} [lindex $slots 1]\n        assert_equal {10} [lindex $slots 2]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS GET - selected-slots returns node's slot ranges when no SLOTS specified in cluster mode} {\n        # In a 1-node cluster, the node owns all slots [0-16383]\n        assert_equal {OK} [R 0 hotkeys start METRICS 1 CPU]\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result \"selected-slots\"]\n        # 1-node cluster owns all slots, should return [[0, 16383]]\n        assert_equal 1 [llength $slots]\n        set range [lindex $slots 0]\n        assert_equal 2 [llength $range]\n        assert_equal 0 [lindex $range 0]\n        assert_equal 16383 [lindex $range 1]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS GET - conditional fields with sample_ratio > 1 and selected slots} {\n        assert_equal {OK} [R 0 hotkeys start METRICS 2 CPU NET SAMPLE 10 SLOTS 1 0]\n        R 0 set \"{06S}key1\" value1\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        # Should have conditional fields\n        assert [dict exists $result \"sampled-commands-selected-slots-us\"]\n        assert [dict exists $result \"all-commands-selected-slots-us\"]\n        assert [dict exists $result \"net-bytes-sampled-commands-selected-slots\"]\n        assert [dict exists $result \"net-bytes-all-commands-selected-slots\"]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n   
 }\n\n    test {HOTKEYS GET - no conditional fields with sample_ratio = 1} {\n        assert_equal {OK} [R 0 hotkeys start METRICS 2 CPU NET SLOTS 1 0]\n        R 0 set \"{06S}key1\" value1\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        # Should NOT have sampled-commands fields (sample_ratio = 1)\n        assert {![dict exists $result \"sampled-commands-selected-slots-us\"]}\n        assert {![dict exists $result \"net-bytes-sampled-commands-selected-slots\"]}\n\n        # Should have all-commands-selected-slots fields\n        assert [dict exists $result \"all-commands-selected-slots-us\"]\n        assert [dict exists $result \"net-bytes-all-commands-selected-slots\"]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS - tracks only keys in selected slots} {\n        # Get slots for keys with different hash tags\n        set key_slot0 \"{06S}key\"\n        set slot0 [R 0 cluster keyslot $key_slot0]\n\n        set key_other \"{zzz}key\"\n        set slot_other [R 0 cluster keyslot $key_other]\n\n        # Start tracking only slot 0\n        assert_equal {OK} [R 0 hotkeys start METRICS 1 NET SLOTS 1 $slot0]\n\n        # Set keys in both slots\n        for {set i 0} {$i < 100} {incr i} {\n            R 0 set \"${key_slot0}_$i\" \"value_$i\"\n            R 0 set \"${key_other}_$i\" \"value_$i\"\n        }\n\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        # Check that by-net-bytes only contains keys from slot 0\n        set net_bytes_array [dict get $result \"by-net-bytes\"]\n        for {set i 0} {$i < [llength $net_bytes_array]} 
{incr i 2} {\n            set key [lindex $net_bytes_array $i]\n            # Keys should contain the slot0 hash tag\n            assert_match \"*{06S}*\" $key\n        }\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n\n    test {HOTKEYS - multiple selected slots} {\n        # Get slots for keys with different hash tags\n        set key_slot0 \"{06S}key\"\n        set slot0 [R 0 cluster keyslot $key_slot0]\n\n        set key_slot1 \"{4oi}key\"\n        set slot1 [R 0 cluster keyslot $key_slot1]\n\n        set key_other \"{zzz}key\"\n        set slot_other [R 0 cluster keyslot $key_other]\n\n        # Start tracking slots 0 and 1\n        assert_equal {OK} [R 0 hotkeys start METRICS 1 NET SLOTS 2 $slot0 $slot1]\n\n        # Set keys in all three slots\n        for {set i 0} {$i < 50} {incr i} {\n            R 0 set \"${key_slot0}_$i\" \"value_$i\"\n            R 0 set \"${key_slot1}_$i\" \"value_$i\"\n            R 0 set \"${key_other}_$i\" \"value_$i\"\n        }\n\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n\n        # Verify selected-slots contains both slots\n        set slots [dict get $result \"selected-slots\"]\n        assert_equal 2 [llength $slots]\n\n        # Check that by-net-bytes contains keys from both selected slots\n        set net_bytes_array [dict get $result \"by-net-bytes\"]\n        set found_slot0 0\n        set found_slot1 0\n        for {set i 0} {$i < [llength $net_bytes_array]} {incr i 2} {\n            set key [lindex $net_bytes_array $i]\n            if {[string match \"*{06S}*\" $key]} {\n                set found_slot0 1\n            }\n            if {[string match \"*{4oi}*\" $key]} {\n                set found_slot1 1\n            }\n            # Keys should NOT contain the other hash tag\n            assert {![string 
match \"*{zzz}*\" $key]}\n        }\n\n        # Should have found keys from both selected slots\n        assert_equal 1 $found_slot0\n        assert_equal 1 $found_slot1\n\n        assert_equal {OK} [R 0 hotkeys reset]\n    }\n}\n\nstart_cluster 2 0 {tags {external:skip cluster hotkeys}} {\n\n    test {HOTKEYS START - Error: slot not handled by this node} {\n        # In a 2-master cluster, each node handles half the slots.\n        # Node 0 handles slots 0-8191, Node 1 handles slots 8192-16383.\n        # Try to use a slot that belongs to node 1 on node 0.\n        catch {R 0 hotkeys start METRICS 1 CPU SLOTS 1 8192} err\n        assert_match \"*slot 8192 not handled by this node*\" $err\n        catch {R 1 hotkeys start METRICS 1 CPU SLOTS 1 0} err\n        assert_match \"*slot 0 not handled by this node*\" $err\n    }\n\n    test {HOTKEYS GET - selected-slots returns each node's slot ranges in multi-node cluster} {\n        # In a 2-master cluster:\n        # Node 0 handles slots 0-8191\n        # Node 1 handles slots 8192-16383\n\n        # Test node 0\n        assert_equal {OK} [R 0 hotkeys start METRICS 1 CPU]\n        assert_equal {OK} [R 0 hotkeys stop]\n\n        set result [lindex [R 0 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne \"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result \"selected-slots\"]\n        # Node 0 should return [[0, 8191]]\n        assert_equal 1 [llength $slots]\n        set range [lindex $slots 0]\n        assert_equal 2 [llength $range]\n        assert_equal 0 [lindex $range 0]\n        assert_equal 8191 [lindex $range 1]\n\n        assert_equal {OK} [R 0 hotkeys reset]\n\n        # Test node 1\n        assert_equal {OK} [R 1 hotkeys start METRICS 1 CPU]\n        assert_equal {OK} [R 1 hotkeys stop]\n\n        set result [lindex [R 1 hotkeys get] 0]\n        if {[llength $result] > 0 && [lindex $result 0] ne 
\"tracking-active\"} {\n            set result [hotkeys_array_to_dict $result]\n        }\n        set slots [dict get $result \"selected-slots\"]\n        # Node 1 should return [[8192, 16383]]\n        assert_equal 1 [llength $slots]\n        set range [lindex $slots 0]\n        assert_equal 2 [llength $range]\n        assert_equal 8192 [lindex $range 0]\n        assert_equal 16383 [lindex $range 1]\n\n        assert_equal {OK} [R 1 hotkeys reset]\n    }\n}\n"
  },
  {
    "path": "tests/unit/hyperloglog.tcl",
    "content": "start_server {tags {\"hll\"}} {\n    test {HyperLogLog self test passes} {\n        catch {r pfselftest} e\n        set e\n    } {OK} {needs:pfdebug}\n\n    test {PFADD without arguments creates an HLL value} {\n        r pfadd hll\n        r exists hll\n    } {1}\n\n    test {Approximated cardinality after creation is zero} {\n        r pfcount hll\n    } {0}\n\n    test {PFADD returns 1 when at least 1 reg was modified} {\n        r pfadd hll a b c\n    } {1}\n\n    test {PFADD returns 0 when no reg was modified} {\n        r pfadd hll a b c\n    } {0}\n\n    test {PFADD works with empty string (regression)} {\n        r pfadd hll \"\"\n    }\n\n    # Note that the self test stresses much better the\n    # cardinality estimation error. We are testing just the\n    # command implementation itself here.\n    test {PFCOUNT returns approximated cardinality of set} {\n        r del hll\n        set res {}\n        r pfadd hll 1 2 3 4 5\n        lappend res [r pfcount hll]\n        # Call it again to test cached value invalidation.\n        r pfadd hll 6 7 8 8 9 10\n        lappend res [r pfcount hll]\n        set res\n    } {5 10}\n\n    test {HyperLogLogs are promote from sparse to dense} {\n        r del hll\n        r config set hll-sparse-max-bytes 3000\n        set n 0\n        while {$n < 100000} {\n            set elements {}\n            for {set j 0} {$j < 100} {incr j} {lappend elements [expr rand()]}\n            incr n 100\n            r pfadd hll {*}$elements\n            set card [r pfcount hll]\n            set err [expr {abs($card-$n)}]\n            assert {$err < (double($card)/100)*5}\n            if {$n < 1000} {\n                assert {[r pfdebug encoding hll] eq {sparse}}\n            } elseif {$n > 10000} {\n                assert {[r pfdebug encoding hll] eq {dense}}\n            }\n        }\n    } {} {needs:pfdebug}\n\n    test {Change hll-sparse-max-bytes} {\n        r config set hll-sparse-max-bytes 3000\n        r del hll\n 
       r pfadd hll a b c d e d g h i j k\n        assert {[r pfdebug encoding hll] eq {sparse}}\n        r config set hll-sparse-max-bytes 30\n        r pfadd hll new_element\n        assert {[r pfdebug encoding hll] eq {dense}}\n    } {} {needs:pfdebug}\n\n    test {HyperLogLog promotes to dense correctly with different hll-sparse-max-bytes} {\n        set max(0) 100\n        set max(1) 500\n        set max(2) 3000\n        for {set i 0} {$i < [array size max]} {incr i} {\n            r config set hll-sparse-max-bytes $max($i)\n            r del hll\n            r pfadd hll\n            set len [r strlen hll]\n            while {$len <= $max($i)} {\n                assert {[r pfdebug encoding hll] eq {sparse}}\n                set elements {}\n                for {set j 0} {$j < 10} {incr j} { lappend elements [expr rand()]}\n                r pfadd hll {*}$elements\n                set len [r strlen hll]\n            }\n            assert {[r pfdebug encoding hll] eq {dense}}\n        }\n    } {} {needs:pfdebug}\n\n    test {HyperLogLog sparse encoding stress test} {\n        for {set x 0} {$x < 1000} {incr x} {\n            r del hll1\n            r del hll2\n            set numele [randomInt 100]\n            set elements {}\n            for {set j 0} {$j < $numele} {incr j} {\n                lappend elements [expr rand()]\n            }\n            # Force dense representation of hll2\n            r pfadd hll2\n            r pfdebug todense hll2\n            r pfadd hll1 {*}$elements\n            r pfadd hll2 {*}$elements\n            assert {[r pfdebug encoding hll1] eq {sparse}}\n            assert {[r pfdebug encoding hll2] eq {dense}}\n            # Estimated cardinality should match exactly.\n            assert {[r pfcount hll1] eq [r pfcount hll2]}\n        }\n    } {} {needs:pfdebug}\n\n    test {Corrupted sparse HyperLogLogs are detected: Additional data at tail} {\n        r del hll\n        r pfadd hll a b c\n        r append hll \"hello\"\n        set e {}\n   
     catch {r pfcount hll} e\n        set e\n    } {*INVALIDOBJ*}\n\n    test {Corrupted sparse HyperLogLogs are detected: Broken magic} {\n        r del hll\n        r pfadd hll a b c\n        r setrange hll 0 \"0123\"\n        set e {}\n        catch {r pfcount hll} e\n        set e\n    } {*WRONGTYPE*}\n\n    test {Corrupted sparse HyperLogLogs are detected: Invalid encoding} {\n        r del hll\n        r pfadd hll a b c\n        r setrange hll 4 \"x\"\n        set e {}\n        catch {r pfcount hll} e\n        set e\n    } {*WRONGTYPE*}\n\n    test {Corrupted sparse HyperLogLogs don't cause overflow or out-of-bounds access with XZERO opcode} {\n        r del hll\n        \n        # Create a sparse-encoded HyperLogLog header\n        set header \"HYLL\"\n        set payload [binary format c12 {1 0 0 0 0 0 0 0 0 0 0 0}]\n        set pl [binary format a4a12 $header $payload]\n\n        # Create an XZERO opcode with the maximum run length of 16384(2^14)\n        set runlen [expr 16384 - 1]\n        set chunk [binary format cc [expr {0b01000000 | ($runlen >> 8)}] [expr {$runlen & 0xff}]]\n        # Fill the HLL with more than 131072(2^17) XZERO opcodes to make the total\n        # run length exceed 4GB, which will cause an integer overflow.\n        set repeat [expr 131072 + 1000]\n        for {set i 0} {$i < $repeat} {incr i} {\n            append pl $chunk\n        }\n\n        # Create a VAL opcode with a value that will cause out-of-bounds.\n        append pl [binary format c 0b11111111]\n        r set hll $pl\n\n        # This should not overflow or access out-of-bounds memory.\n        assert_error {*INVALIDOBJ*} {r pfcount hll hll}\n        assert_error {*INVALIDOBJ*} {r pfdebug getreg hll}\n        r ping\n    }\n\n    test {Corrupted sparse HyperLogLogs don't cause overflow or out-of-bounds access with ZERO opcode} {\n        r del hll\n        \n        # Create a sparse-encoded HyperLogLog header\n        set header \"HYLL\"\n        set payload [binary format c12 {1 0 0 0 0 0 0 
 0 0 0 0 0}]\n        set pl [binary format a4a12 $header $payload]\n\n        # Create a ZERO opcode with the maximum run length of 64(2^6)\n        set chunk [binary format c [expr {0b00000000 | 0x3f}]]\n        # Fill the HLL with more than 33554432(2^25) ZERO opcodes to make the total\n        # run length exceed 4GB, which will cause an integer overflow.\n        set repeat [expr 33554432 + 1000]\n        for {set i 0} {$i < $repeat} {incr i} {\n            append pl $chunk\n        }\n\n        # Create a VAL opcode with a value that will cause out-of-bounds.\n        append pl [binary format c 0b11111111]\n        r set hll $pl\n\n        # This should not overflow or access out-of-bounds memory.\n        assert_error {*INVALIDOBJ*} {r pfcount hll hll}\n        assert_error {*INVALIDOBJ*} {r pfdebug getreg hll}\n        r ping\n    }\n\n    test {Corrupted dense HyperLogLogs are detected: Wrong length} {\n        r del hll\n        r pfadd hll a b c\n        r setrange hll 4 \"\\x00\"\n        set e {}\n        catch {r pfcount hll} e\n        set e\n    } {*WRONGTYPE*}\n\n    test {Fuzzing dense/sparse encoding: Redis should always detect errors} {\n        for {set j 0} {$j < 1000} {incr j} {\n            r del hll\n            set items {}\n            set numitems [randomInt 3000]\n            for {set i 0} {$i < $numitems} {incr i} {\n                lappend items [expr {rand()}]\n            }\n            r pfadd hll {*}$items\n\n            # Corrupt it in some random way.\n            for {set i 0} {$i < 5} {incr i} {\n                set len [r strlen hll]\n                set pos [randomInt $len]\n                set byte [randstring 1 1 binary]\n                r setrange hll $pos $byte\n                # Don't modify more bytes 50% of the time\n                if {rand() < 0.5} break\n            }\n\n            # Use the hyperloglog to check if it crashes\n            # Redis in some way.\n            catch {\n                r pfcount hll\n            }\n        
}\n    }\n\n    test {PFADD, PFCOUNT, PFMERGE type checking works} {\n        r set foo{t} bar\n        catch {r pfadd foo{t} 1} e\n        assert_match {*WRONGTYPE*} $e\n        catch {r pfcount foo{t}} e\n        assert_match {*WRONGTYPE*} $e\n        catch {r pfmerge bar{t} foo{t}} e\n        assert_match {*WRONGTYPE*} $e\n        catch {r pfmerge foo{t} bar{t}} e\n        assert_match {*WRONGTYPE*} $e\n    }\n\n    test {PFMERGE results in the cardinality of the union of sets} {\n        r del hll{t} hll1{t} hll2{t} hll3{t}\n        r pfadd hll1{t} a b c\n        r pfadd hll2{t} b c d\n        r pfadd hll3{t} c d e\n        r pfmerge hll{t} hll1{t} hll2{t} hll3{t}\n        r pfcount hll{t}\n    } {5}\n\n    test {PFMERGE on missing source keys will create an empty destkey} {\n        r del sourcekey{t} sourcekey2{t} destkey{t} destkey2{t}\n\n        assert_equal {OK} [r pfmerge destkey{t} sourcekey{t}]\n        assert_equal 1 [r exists destkey{t}]\n        assert_equal 0 [r pfcount destkey{t}]\n\n        assert_equal {OK} [r pfmerge destkey2{t} sourcekey{t} sourcekey2{t}]\n        assert_equal 1 [r exists destkey2{t}]\n        assert_equal 0 [r pfcount destkey2{t}]\n    }\n\n    test {PFMERGE with one empty input key creates an empty destkey} {\n        r del destkey\n        assert_equal {OK} [r pfmerge destkey]\n        assert_equal 1 [r exists destkey]\n        assert_equal 0 [r pfcount destkey]\n    }\n\n    test {PFMERGE with one non-empty input key, dest key is actually one of the source keys} {\n        r del destkey\n        assert_equal 1 [r pfadd destkey a b c]\n        assert_equal {OK} [r pfmerge destkey]\n        assert_equal 1 [r exists destkey]\n        assert_equal 3 [r pfcount destkey]\n    }\n\n    test {PFMERGE results with simd} {\n        r del hllscalar{t} hllsimd{t} hll1{t} hll2{t} hll3{t}\n        for {set x 1} {$x < 2000} {incr x} {\n            r pfadd hll1{t} [expr rand()]\n        }\n        for {set x 1} {$x < 4000} {incr x} {\n          
  r pfadd hll2{t} [expr rand()]\n        }\n        for {set x 1} {$x < 8000} {incr x} {\n            r pfadd hll3{t} [expr rand()]\n        }\n        assert {[r pfcount hll1{t}] > 0}\n        assert {[r pfcount hll2{t}] > 0}\n        assert {[r pfcount hll3{t}] > 0}\n\n        r pfdebug simd off\n        set scalar [r pfcount hll1{t} hll2{t} hll3{t}]\n        r pfdebug simd on\n        set simd [r pfcount hll1{t} hll2{t} hll3{t}]\n        assert {$scalar > 0}\n        assert {$simd > 0}\n        assert_equal $scalar $simd\n\n        r pfdebug simd off\n        r pfmerge hllscalar{t} hll1{t} hll2{t} hll3{t}\n        r pfdebug simd on\n        r pfmerge hllsimd{t} hll1{t} hll2{t} hll3{t}\n\n        set scalar [r pfcount hllscalar{t}]\n        set simd [r pfcount hllsimd{t}]\n        assert {$scalar > 0}\n        assert {$simd > 0}\n        assert_equal $scalar $simd\n\n        set scalar [r get hllscalar{t}]\n        set simd [r get hllsimd{t}]\n        assert_equal $scalar $simd\n\n    } {} {needs:pfdebug}\n\n    test {PFCOUNT multiple-keys merge returns cardinality of union #1} {\n        r del hll1{t} hll2{t} hll3{t}\n        for {set x 1} {$x < 10000} {incr x} {\n            r pfadd hll1{t} \"foo-$x\"\n            r pfadd hll2{t} \"bar-$x\"\n            r pfadd hll3{t} \"zap-$x\"\n\n            set card [r pfcount hll1{t} hll2{t} hll3{t}]\n            set realcard [expr {$x*3}]\n            set err [expr {abs($card-$realcard)}]\n            assert {$err < (double($card)/100)*5}\n        }\n    }\n\n    test {PFCOUNT multiple-keys merge returns cardinality of union #2} {\n        r del hll1{t} hll2{t} hll3{t}\n        set elements {}\n        for {set x 1} {$x < 10000} {incr x} {\n            for {set j 1} {$j <= 3} {incr j} {\n                set rint [randomInt 20000]\n                r pfadd hll$j{t} $rint\n                lappend elements $rint\n            }\n        }\n        set realcard [llength [lsort -unique $elements]]\n        set card [r pfcount 
hll1{t} hll2{t} hll3{t}]\n        set err [expr {abs($card-$realcard)}]\n        assert {$err < (double($card)/100)*5}\n    }\n\n    test {PFDEBUG GETREG returns the HyperLogLog raw registers} {\n        r del hll\n        r pfadd hll 1 2 3\n        llength [r pfdebug getreg hll]\n    } {16384} {needs:pfdebug}\n\n    test {PFADD / PFCOUNT cache invalidation works} {\n        r del hll\n        r pfadd hll a b c\n        r pfcount hll\n        assert {[r getrange hll 15 15] eq \"\\x00\"}\n        r pfadd hll a b c\n        assert {[r getrange hll 15 15] eq \"\\x00\"}\n        r pfadd hll 1 2 3\n        assert {[r getrange hll 15 15] eq \"\\x80\"}\n    }\n\n    test {PFADD with 2GB entry should not crash server due to overflow in MurmurHash64A} {\n        r config set proto-max-bulk-len 3221225472\n        r config set client-query-buffer-limit 3221225472\n        r write \"*3\\r\\n\\$5\\r\\nPFADD\\r\\n\\$3\\r\\nhll\\r\\n\"\n        write_big_bulk 2147483648;\n        r ping\n    } {PONG} {large-memory}\n}\n"
  },
  {
    "path": "tests/unit/info-command.tcl",
    "content": "start_server {tags {\"info and its relative command\"}} {\n    test \"info command with at most one sub command\" {\n        foreach arg {\"\" \"all\" \"default\" \"everything\"} {\n            if {$arg == \"\"} {\n                set info [r 0 info]\n            } else {\n                set info [r 0 info $arg]\n            }\n\n            assert { [string match \"*redis_version*\" $info] }\n            assert { [string match \"*used_cpu_user*\" $info] }\n            assert { ![string match \"*sentinel_tilt*\" $info] }\n            assert { [string match \"*used_memory*\" $info] }\n            if {$arg == \"\" || $arg == \"default\"} {\n                assert { ![string match \"*rejected_calls*\" $info] }        \n            } else {\n                assert { [string match \"*rejected_calls*\" $info] }        \n            }        \n        }\n    }\n\n    test \"info command with one sub-section\" {\n        set info [r info cpu]\n        assert { [string match \"*used_cpu_user*\" $info] }\n        assert { ![string match \"*sentinel_tilt*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n\n        set info [r info sentinel]\n        assert { ![string match \"*sentinel_tilt*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n\n        set info [r info commandSTATS] ;# test case insensitive compare\n        assert { ![string match \"*used_memory*\" $info] }\n        assert { [string match \"*rejected_calls*\" $info] }\n    }\n\n    test \"info command with multiple sub-sections\" {\n        set info [r info cpu sentinel]\n        assert { [string match \"*used_cpu_user*\" $info] }\n        assert { ![string match \"*sentinel_tilt*\" $info] }\n        assert { ![string match \"*master_repl_offset*\" $info] }\n\n        set info [r info cpu all]\n        assert { [string match \"*used_cpu_user*\" $info] }\n        assert { ![string match \"*sentinel_tilt*\" $info] }\n        assert { [string match 
\"*used_memory*\" $info] }\n        assert { [string match \"*master_repl_offset*\" $info] }\n        assert { [string match \"*rejected_calls*\" $info] }\n        # check that we didn't get the same info twice\n        assert { ![string match \"*used_cpu_user_children*used_cpu_user_children*\" $info] }\n\n        set info [r info cpu default]\n        assert { [string match \"*used_cpu_user*\" $info] }\n        assert { ![string match \"*sentinel_tilt*\" $info] }\n        assert { [string match \"*used_memory*\" $info] }\n        assert { [string match \"*master_repl_offset*\" $info] }\n        assert { ![string match \"*rejected_calls*\" $info] }\n        # check that we didn't get the same info twice\n        assert { ![string match \"*used_cpu_user_children*used_cpu_user_children*\" $info] }\n    }\n   \n}\n"
  },
  {
    "path": "tests/unit/info-keysizes.tcl",
    "content": "################################################################################\n# Test the \"info keysizes\" command.\n# The command returns a histogram of the sizes of keys in the database.\n################################################################################\n\n# Verify output of \"info keysizes\" command is as expected.\n#\n# Arguments:\n#  cmd         -  A command that should be run before the verification.\n#  expOutput   -  This is a string that represents the expected output abbreviated.\n#                 Instead of the output of \"strings_len_exp_distrib\" write \"STR\". \n#                 Similarly for LIST, SET, ZSET and HASH. Spaces and newlines are \n#                 ignored.\n#\n#                 Alternatively, you can set \"__EVAL_DB_HIST__\". The function\n#                 will read all the keys from the server for selected db index,\n#                 ask for their length and compute the expected output.\n\n#  waitCond    -  If set to 1, the function wait_for_condition 50x50msec for the \n#                 expOutput to match the actual output. \n# \n# (replicaMode) -  Global variable that indicates if the test is running in replica \n#                  mode. If so, run the command on leader, verify the output. Then wait\n#                  for the replica to catch up and verify the output on the replica\n#                  as well. Otherwise, just run the command on the leader and verify \n#                  the output.\nproc run_cmd_verify_hist {cmd expOutput {waitCond 0}} {\n\n    #################### internal funcs ################\n    proc build_exp_hist {server expOutput} {\n        if {[regexp {^__EVAL_DB_HIST__\\s+(\\d+)$} $expOutput -> dbid]} {\n            set expOutput [eval_db_histogram $server $dbid]\n        }\n    \n        # Replace all placeholders with the actual values. 
Remove spaces & newlines.\n        set res [string map {\n            \"STR\" \"distrib_strings_sizes\"\n            \"LIST\" \"distrib_lists_items\"\n            \"SET\" \"distrib_sets_items\"\n            \"ZSET\" \"distrib_zsets_items\"\n            \"HASH\" \"distrib_hashes_items\"\n            \" \" \"\" \"\\n\" \"\" \"\\r\" \"\"\n        } $expOutput]\n        return $res\n    }\n    proc verify_histogram { server expOutput cmd {retries 1} } {\n        wait_for_condition 50 $retries {\n            [build_exp_hist $server $expOutput] eq [get_info_hist_stripped $server]\n        } else {\n            fail \"Expected: \\n`[build_exp_hist $server $expOutput]` \\\n                Actual: `[get_info_hist_stripped $server]`. \\nFailed after command: $cmd\"\n        }\n    }\n    # Query \"info keysizes\" and strip the header, spaces, and newlines from the result.\n    proc get_info_hist_stripped {server} {\n        set infoStripped [string map {\n            \"# Keysizes\" \"\"\n            \" \" \"\" \"\\n\" \"\" \"\\r\" \"\"\n        } [$server info keysizes] ]\n        return $infoStripped\n    }\n    #################### EOF internal funcs ################\n\n    uplevel 1 $cmd\n    global replicaMode\n\n    # ref the leader with `server` variable\n    if {$replicaMode eq 1} {\n        set server [srv -1 client]\n        set replica [srv 0 client]\n    } else {\n        set server [srv 0 client]\n    }\n\n    # Compare the stripped expected output with the actual output from the server\n    set retries [expr { $waitCond ? 
20 : 1}]\n    verify_histogram $server $expOutput $cmd $retries\n\n    # If we are testing `replicaMode` then we need to wait for the replica to catch up\n    if {$replicaMode eq 1} {\n        verify_histogram $replica $expOutput $cmd 20\n    }\n}\n\n# eval_db_histogram - Evaluate the expected histogram for the current db by\n# reading all the keys from the server, querying their lengths, and computing\n# the expected output.\nproc eval_db_histogram {server dbid} {\n    $server select $dbid\n    array set type_counts {}\n\n    set keys [$server keys *]\n    foreach key $keys {\n        set key_type [$server type $key]\n        switch -exact $key_type {\n            \"string\" {\n                set value [$server strlen $key]\n                set type \"STR\"\n            }\n            \"list\" {\n                set value [$server llen $key]\n                set type \"LIST\"\n            }\n            \"set\" {\n                set value [$server scard $key]\n                set type \"SET\"\n            }\n            \"zset\" {\n                set value [$server zcard $key]\n                set type \"ZSET\"\n            }\n            \"hash\" {\n                set value [$server hlen $key]\n                set type \"HASH\"\n            }\n            default {\n                continue  ; # Skip unknown types\n            }\n        }\n\n        set power 1\n        while { ($power * 2) <= $value } { set power [expr {$power * 2}] }\n        if {$value == 0} { set power 0}\n        # Store counts by type and size bucket\n        incr type_counts($type,$power)\n    }\n\n    set result {}\n    foreach type {STR LIST SET ZSET HASH} {\n        if {[array exists type_counts] && [array names type_counts $type,*] ne \"\"} {\n            set sorted_powers [lsort -integer [lmap item [array names type_counts $type,*] {\n                lindex [split $item ,] 1  ; # Extracts only the numeric part\n            }]]\n\n            set type_result {}\n            foreach power 
$sorted_powers {\n                set display_power $power\n                if { $power >= 1024 } {  set display_power \"[expr {$power / 1024}]K\" }\n                lappend type_result \"$display_power=$type_counts($type,$power)\"\n            }\n            lappend result \"db${dbid}_$type: [join $type_result \", \"]\"\n        }\n    }\n\n    return [join $result \" \"]\n}\n\nproc test_all_keysizes { {replMode 0} } {\n    # If in replica mode then update global var `replicaMode` so function \n    # `run_cmd_verify_hist` knows to run the command on the leader and then\n    # wait for the replica to catch up. \n    global replicaMode\n    set replicaMode $replMode    \n    # ref the leader with `server` variable\n    if {$replicaMode eq 1} { \n        set server [srv -1 client]\n        set replica [srv 0 client]\n        set suffixRepl \"(replica)\"\n    } else { \n        set server [srv 0 client]\n        set suffixRepl \"\" \n    }    \n        \n    test \"KEYSIZES - Test i'th bin counts keysizes between (2^i) and (2^(i+1)-1) as expected $suffixRepl\" {\n        set base_string \"\"\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        for {set i 1} {$i <= 10} {incr i} {\n            append base_string \"x\"\n            set log_value [expr {1 << int(log($i) / log(2))}]\n            #puts \"Iteration $i: $base_string (Log base 2 pattern: $log_value)\"\n            run_cmd_verify_hist {$server set mykey $base_string}  \"db0_STR:$log_value=1\"\n        }\n    }\n\n    test \"KEYSIZES - Histogram values of Bytes, Kilo and Mega $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server set x 0123456789ABCDEF} {db0_STR:16=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:32=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:64=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:128=1}\n        run_cmd_verify_hist {$server APPEND x [$server 
get x]} {db0_STR:256=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:512=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:1K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:2K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:4K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:8K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:16K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:32K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:64K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:128K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:256K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:512K=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:1M=1}\n        run_cmd_verify_hist {$server APPEND x [$server get x]} {db0_STR:2M=1}               \n    }\n\n    # It is difficult to predict the actual string length of hyperloglog. To address\n    # this, we will create expected output by indicating __EVAL_DB_HIST__ to read\n    # all keys & lengths from server. 
Based on it, generate the expected output.\n    test \"KEYSIZES - Test hyperloglog $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        # PFADD (sparse & dense)\n        for {set i 1} {$i <= 3000} {incr i} {\n            $server PFADD hll1 a$i b$i c$i\n            $server PFADD hll2 x$i y$i z$i\n            run_cmd_verify_hist {} {__EVAL_DB_HIST__ 0}\n        }\n        # PFMERGE, PFCOUNT (sparse & dense)\n        for {set i 1} {$i <= 3000} {incr i} {\n            $server PFADD hll3 x$i y$i z$i\n            $server PFMERGE hll4 hll1 hll2 hll3\n            $server PFCOUNT hll1 hll2 hll3 hll4\n            run_cmd_verify_hist {} {__EVAL_DB_HIST__ 0}\n        }\n        # DEL\n        run_cmd_verify_hist {$server DEL hll4} {__EVAL_DB_HIST__ 0}\n        run_cmd_verify_hist {$server DEL hll3} {__EVAL_DB_HIST__ 0}\n        run_cmd_verify_hist {$server DEL hll1} {__EVAL_DB_HIST__ 0}\n        run_cmd_verify_hist {$server DEL hll2} {}\n        # SET overwrites\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server PFADD hll1 a b c d e f g h i j k l m} {db0_STR:32=1}\n        run_cmd_verify_hist {$server SET hll1 1234567} {db0_STR:4=1}\n        catch {run_cmd_verify_hist {$server PFADD hll1 a b c d e f g h i j k l m} {db0_STR:4=1}}\n        run_cmd_verify_hist {} {db0_STR:4=1}\n        # EXPIRE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server PFADD hll1 a b c d e f g h i j k l m} {db0_STR:32=1}\n        run_cmd_verify_hist {$server PEXPIRE hll1 50} {db0_STR:32=1}\n        run_cmd_verify_hist {} {} 1\n    } {} {cluster:skip}\n\n    test \"KEYSIZES - Test List $suffixRepl\" {\n        # FLUSHALL\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        # RPUSH\n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4 5} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server RPUSH l1 6 7 8 9} {db0_LIST:8=1}\n        # Test also LPUSH, RPUSH, LPUSHX, RPUSHX\n        run_cmd_verify_hist 
{$server LPUSH l2 1} {db0_LIST:1=1,8=1}\n        run_cmd_verify_hist {$server LPUSH l2 2} {db0_LIST:2=1,8=1}\n        run_cmd_verify_hist {$server LPUSHX l2 3} {db0_LIST:2=1,8=1}\n        run_cmd_verify_hist {$server RPUSHX l2 4} {db0_LIST:4=1,8=1}\n        # RPOP\n        run_cmd_verify_hist {$server RPOP l1} {db0_LIST:4=1,8=1}\n        run_cmd_verify_hist {$server RPOP l1} {db0_LIST:4=2}         \n        # DEL\n        run_cmd_verify_hist {$server DEL l1} {db0_LIST:4=1}\n        # LINSERT, LTRIM\n        run_cmd_verify_hist {$server RPUSH l3 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14} {db0_LIST:4=1,8=1}\n        run_cmd_verify_hist {$server LINSERT l3 AFTER 9 10} {db0_LIST:4=1,16=1}\n        run_cmd_verify_hist {$server LTRIM l3 0 8} {db0_LIST:4=1,8=1}       \n        # DEL\n        run_cmd_verify_hist {$server DEL l3} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server DEL l2} {}        \n        # LMOVE, BLMOVE\n        run_cmd_verify_hist {$server RPUSH l4 1 2 3 4 5 6 7 8} {db0_LIST:8=1}\n        run_cmd_verify_hist {$server LMOVE l4 l5 LEFT LEFT} {db0_LIST:1=1,4=1} \n        run_cmd_verify_hist {$server LMOVE l4 l5 RIGHT RIGHT} {db0_LIST:2=1,4=1}\n        run_cmd_verify_hist {$server LMOVE l4 l5 LEFT RIGHT} {db0_LIST:2=1,4=1}\n        run_cmd_verify_hist {$server LMOVE l4 l5 RIGHT LEFT} {db0_LIST:4=2}\n        run_cmd_verify_hist {$server BLMOVE l4 l5 RIGHT LEFT 0} {db0_LIST:2=1,4=1}        \n        # DEL\n        run_cmd_verify_hist {$server DEL l4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server DEL l5} {}\n        # LMPOP\n        run_cmd_verify_hist {$server RPUSH l6 1 2 3 4 5 6 7 8 9 10} {db0_LIST:8=1}\n        run_cmd_verify_hist {$server LMPOP 1 l6 LEFT COUNT 2} {db0_LIST:8=1}\n        run_cmd_verify_hist {$server LMPOP 1 l6 LEFT COUNT 1} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server LMPOP 1 l6 LEFT COUNT 6} {db0_LIST:1=1}\n        # LPOP\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l7 1 2 3 4} 
{db0_LIST:4=1}\n        run_cmd_verify_hist {$server LPOP l7} {db0_LIST:2=1}\n        run_cmd_verify_hist {$server LPOP l7} {db0_LIST:2=1}\n        run_cmd_verify_hist {$server LPOP l7} {db0_LIST:1=1}\n        run_cmd_verify_hist {$server LPOP l7} {}\n        # LREM\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l8 y x y x y x y x y y} {db0_LIST:8=1}\n        run_cmd_verify_hist {$server LREM l8 3 x} {db0_LIST:4=1}        \n        run_cmd_verify_hist {$server LREM l8 0 y} {db0_LIST:1=1}\n        run_cmd_verify_hist {$server LREM l8 0 x} {}\n        # EXPIRE \n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l9 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server PEXPIRE l9 50} {db0_LIST:4=1}\n        run_cmd_verify_hist {} {} 1\n        # SET overwrites\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l9 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server SET l9 1234567} {db0_STR:4=1}\n        run_cmd_verify_hist {$server DEL l9} {}                            \n    } {} {cluster:skip}\n\n    test \"KEYSIZES - Test SET $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        # SADD\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5} {db0_SET:4=1}\n        run_cmd_verify_hist {$server SADD s1 6 7 8} {db0_SET:8=1}                        \n        # Test also SADD, SREM, SMOVE, SPOP\n        run_cmd_verify_hist {$server SADD s2 1} {db0_SET:1=1,8=1}\n        run_cmd_verify_hist {$server SADD s2 2} {db0_SET:2=1,8=1}\n        run_cmd_verify_hist {$server SREM s2 3} {db0_SET:2=1,8=1}\n        run_cmd_verify_hist {$server SMOVE s2 s3 2} {db0_SET:1=2,8=1}\n        run_cmd_verify_hist {$server SPOP s3} {db0_SET:1=1,8=1}\n        run_cmd_verify_hist {$server SPOP s2} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SPOP s1} {db0_SET:4=1}\n        run_cmd_verify_hist {$server SPOP s1 7} {}\n     
   run_cmd_verify_hist {$server SADD s2 1} {db0_SET:1=1}\n        run_cmd_verify_hist {$server SMOVE s2 s4 1} {db0_SET:1=1}\n        run_cmd_verify_hist {$server SREM s4 1} {}\n        run_cmd_verify_hist {$server SADD s2 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SPOP s2 7} {db0_SET:1=1}\n        # SDIFFSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SADD s2 6 7 8 9 A B C D} {db0_SET:8=2}\n        run_cmd_verify_hist {$server SADD s3 x} {db0_SET:1=1,8=2}\n        run_cmd_verify_hist {$server SDIFFSTORE s3 s1 s2} {db0_SET:4=1,8=2}\n        # SINTERSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SADD s2 6 7 8 9 A B C D} {db0_SET:8=2}\n        run_cmd_verify_hist {$server SADD s3 x} {db0_SET:1=1,8=2}\n        run_cmd_verify_hist {$server SINTERSTORE s3 s1 s2} {db0_SET:2=1,8=2}\n        # SUNIONSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SADD s2 6 7 8 9 A B C D} {db0_SET:8=2}\n        run_cmd_verify_hist {$server SUNIONSTORE s3 s1 s2} {db0_SET:8=3}\n        run_cmd_verify_hist {$server SADD s4 E F G H} {db0_SET:4=1,8=3}\n        run_cmd_verify_hist {$server SUNIONSTORE s5 s3 s4} {db0_SET:4=1,8=3,16=1}\n        # DEL\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server DEL s1} {}\n        # EXPIRE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server PEXPIRE s1 50} {db0_SET:8=1}\n        run_cmd_verify_hist {} {} 1\n        # SET 
overwrites\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8} {db0_SET:8=1}\n        run_cmd_verify_hist {$server SET s1 1234567} {db0_STR:4=1}\n        run_cmd_verify_hist {$server DEL s1} {}        \n    } {} {cluster:skip}\n     \n    test \"KEYSIZES - Test ZSET $suffixRepl\" {\n        # ZADD, ZREM\n        run_cmd_verify_hist {$server FLUSHALL} {}        \n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZADD z1 6 f 7 g 8 h 9 i} {db0_ZSET:8=1}\n        run_cmd_verify_hist {$server ZADD z2 1 a} {db0_ZSET:1=1,8=1}\n        run_cmd_verify_hist {$server ZREM z1 a} {db0_ZSET:1=1,8=1}\n        run_cmd_verify_hist {$server ZREM z1 b} {db0_ZSET:1=1,4=1}\n        run_cmd_verify_hist {$server ZREM z1 c d e f g h i} {db0_ZSET:1=1}\n        run_cmd_verify_hist {$server ZREM z2 a} {}\n        # ZREMRANGEBYSCORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYSCORE z1 -inf (2} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYSCORE z1 -inf (3} {db0_ZSET:2=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYSCORE z1 -inf +inf} {}\n                \n        # ZREMRANGEBYRANK\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e 6 f} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYRANK z1 0 1} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYRANK z1 0 0} {db0_ZSET:2=1}        \n        # ZREMRANGEBYLEX\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 0 a 0 b 0 c 0 d 0 e 0 f} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZREMRANGEBYLEX z1 - (d} {db0_ZSET:2=1}\n        # ZUNIONSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        
run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZADD z2 6 f 7 g 8 h 9 i} {db0_ZSET:4=2}\n        run_cmd_verify_hist {$server ZUNIONSTORE z3 2 z1 z2} {db0_ZSET:4=2,8=1}\n        # ZINTERSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZADD z2 3 c 4 d 5 e 6 f} {db0_ZSET:4=2}\n        run_cmd_verify_hist {$server ZINTERSTORE z3 2 z1 z2} {db0_ZSET:2=1,4=2}\n        # BZPOPMIN, BZPOPMAX\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server BZPOPMIN z1 0} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server BZPOPMAX z1 0} {db0_ZSET:2=1}\n        # ZDIFFSTORE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZADD z2 3 c 4 d 5 e 6 f} {db0_ZSET:4=2}\n        run_cmd_verify_hist {$server ZDIFFSTORE z3 2 z1 z2} {db0_ZSET:2=1,4=2}\n        # ZINTERSTORE (overwrite existing destination)\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server ZADD z2 3 c 4 d 5 e 6 f} {db0_ZSET:4=2}\n        run_cmd_verify_hist {$server ZADD z3 1 x} {db0_ZSET:1=1,4=2}\n        run_cmd_verify_hist {$server ZINTERSTORE z4 2 z1 z2} {db0_ZSET:1=1,2=1,4=2}\n        run_cmd_verify_hist {$server ZINTERSTORE z4 2 z1 z3} {db0_ZSET:1=1,4=2}\n        # DEL\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server DEL z1} {}\n        # EXPIRE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        
run_cmd_verify_hist {$server PEXPIRE z1 50} {db0_ZSET:4=1}\n        run_cmd_verify_hist {} {} 1\n        # SET overwrites\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e} {db0_ZSET:4=1}\n        run_cmd_verify_hist {$server SET z1 1234567} {db0_STR:4=1}\n        run_cmd_verify_hist {$server DEL z1} {}\n        # ZMPOP\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c} {db0_ZSET:2=1}\n        run_cmd_verify_hist {$server ZMPOP 1 z1 MIN} {db0_ZSET:2=1}\n        run_cmd_verify_hist {$server ZMPOP 1 z1 MAX COUNT 2} {}\n    } {} {cluster:skip}\n\n    test \"KEYSIZES - Test STRING $suffixRepl\" {\n        # SETRANGE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s2 1234567890} {db0_STR:8=1}\n        run_cmd_verify_hist {$server SETRANGE s2 10 123456} {db0_STR:16=1}\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SETRANGE k 200000 v} {db0_STR:128K=1}\n        # MSET, MSETNX\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server MSET s3 1 s4 2 s5 3} {db0_STR:1=3}\n        run_cmd_verify_hist {$server MSETNX s6 1 s7 2 s8 3} {db0_STR:1=6}\n        # DEL\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s9 1234567890} {db0_STR:8=1}\n        run_cmd_verify_hist {$server DEL s9} {}\n        # EXPIRE\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s10 1234567890} {db0_STR:8=1}\n        run_cmd_verify_hist {$server PEXPIRE s10 50} {db0_STR:8=1}\n        run_cmd_verify_hist {} {} 1\n        # SET (+overwrite)\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s1 1024} {db0_STR:4=1}\n        run_cmd_verify_hist {$server SET s1 842} {db0_STR:2=1}\n        run_cmd_verify_hist {$server SET 
s1 2} {db0_STR:1=1}\n        run_cmd_verify_hist {$server SET s1 1234567} {db0_STR:4=1}\n        # SET (string of length 0)\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s1 \"\"} {db0_STR:0=1}\n        run_cmd_verify_hist {$server SET s1 \"\"} {db0_STR:0=1}\n        run_cmd_verify_hist {$server SET s2 \"\"} {db0_STR:0=2}\n        run_cmd_verify_hist {$server SET s2 \"bla\"} {db0_STR:0=1,2=1}\n        run_cmd_verify_hist {$server SET s2 \"\"} {db0_STR:0=2}\n        run_cmd_verify_hist {$server HSET h f v} {db0_STR:0=2 db0_HASH:1=1}\n        run_cmd_verify_hist {$server SET h \"\"} {db0_STR:0=3}\n        run_cmd_verify_hist {$server DEL h} {db0_STR:0=2}\n        run_cmd_verify_hist {$server DEL s2} {db0_STR:0=1}\n        run_cmd_verify_hist {$server DEL s1} {}\n        # APPEND\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server APPEND s1 x} {db0_STR:1=1}\n        run_cmd_verify_hist {$server APPEND s2 y} {db0_STR:1=2}\n\n    } {} {cluster:skip}\n\n    test \"KEYSIZES - Test UNLINK (async deletion) $suffixRepl\" {\n        # UNLINK on STRING\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s1 1234567890} {db0_STR:8=1}\n        run_cmd_verify_hist {$server UNLINK s1} {} 1\n\n        # UNLINK on LIST\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4 5 6 7 8} {db0_LIST:8=1}\n        run_cmd_verify_hist {$server UNLINK l1} {} 1\n\n        # UNLINK on SET\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16} {db0_SET:16=1}\n        run_cmd_verify_hist {$server UNLINK s1} {} 1\n\n        # UNLINK on ZSET\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c 4 d 5 e 6 f 7 g 8 h} {db0_ZSET:8=1}\n        run_cmd_verify_hist {$server UNLINK z1} {} 1\n\n        
# UNLINK on HASH\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server HSET h1 f1 v1 f2 v2 f3 v3 f4 v4} {db0_HASH:4=1}\n        run_cmd_verify_hist {$server UNLINK h1} {} 1\n\n        # UNLINK multiple keys of different types\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET s1 12345678} {db0_STR:8=1}\n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4} {db0_STR:8=1 db0_LIST:4=1}\n        run_cmd_verify_hist {$server SADD set1 a b c d e f g h} {db0_STR:8=1 db0_LIST:4=1 db0_SET:8=1}\n        run_cmd_verify_hist {$server ZADD z1 1 x 2 y} {db0_STR:8=1 db0_LIST:4=1 db0_SET:8=1 db0_ZSET:2=1}\n        run_cmd_verify_hist {$server UNLINK s1 l1 set1 z1} {} 1\n    } {} {cluster:skip}\n\n    test \"KEYSIZES - Test complex dataset $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        createComplexDataset $server 1000\n        run_cmd_verify_hist {} {__EVAL_DB_HIST__ 0}\n        \n        run_cmd_verify_hist {$server FLUSHALL} {}\n        createComplexDataset $server 1000 {useexpire usehexpire}\n        run_cmd_verify_hist {} {__EVAL_DB_HIST__ 0} 1\n    } {} {cluster:skip}\n    \n    start_server {tags {\"cluster:skip\" \"external:skip\" \"needs:debug\"}} {\n        test \"KEYSIZES - Test DEBUG KEYSIZES-HIST-ASSERT command\" {\n        # Test based on debug command rather than __EVAL_DB_HIST__\n            r DEBUG KEYSIZES-HIST-ASSERT 1\n            r FLUSHALL\n            createComplexDataset r 100\n            createComplexDataset r 100 {useexpire usehexpire}\n        }\n    }\n    \n    foreach type {listpackex hashtable} {\n        # Test different implementations of hash tables and listpacks\n        if {$type eq \"hashtable\"} {\n            $server config set hash-max-listpack-entries 0\n        } else {\n            $server config set hash-max-listpack-entries 512\n        }\n    \n        test \"KEYSIZES - Test HASH ($type) $suffixRepl\" {\n            # HSETNX\n     
       run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETNX h1 1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETNX h1 2 2} {db0_HASH:2=1}\n            # HSET, HDEL\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSET h2 1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSET h2 2 2} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HDEL h2 1}   {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HDEL h2 2}   {}\n            run_cmd_verify_hist {$server HSET h2 1 1 2 2} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HDEL h2 1 2}   {}\n            # HGETDEL\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h2 FIELDS 1 1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h2 FIELDS 1 2 2} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGETDEL h2 FIELDS 1 1}  {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGETDEL h2 FIELDS 1 3}  {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGETDEL h2 FIELDS 1 2}  {}\n            # HGETEX\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 FIELDS 2 f1 1 f2 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGETEX h1 PXAT 1 FIELDS 1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h1 FIELDS 1 f3 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGETEX h1 PX 50 FIELDS 1 f2} {db0_HASH:2=1}\n            run_cmd_verify_hist {} {db0_HASH:1=1} 1\n            run_cmd_verify_hist {$server HGETEX h1 PX 50 FIELDS 1 f3} {db0_HASH:1=1}\n            run_cmd_verify_hist {} {} 1\n            # HSETEX\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 FIELDS 2 f1 1 f2 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETEX h1 PXAT 1 FIELDS 1 f1 v1} 
{db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h1 FIELDS 1 f3 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETEX h1 PX 50 FIELDS 1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {} {db0_HASH:1=1} 1\n            run_cmd_verify_hist {$server HSETEX h1 PX 50 FIELDS 1 f3 v3} {db0_HASH:1=1}\n            run_cmd_verify_hist {} {} 1\n            # HMSET\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HMSET h1 1 1 2 2 3 3} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HMSET h1 1 1 2 2 3 3} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HMSET h1 1 1 2 2 3 3 4 4} {db0_HASH:4=1}\n\n            # HINCRBY\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server hincrby h1 f1 10} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server hincrby h1 f1 10} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server hincrby h1 f2 20} {db0_HASH:2=1}\n            # HINCRBYFLOAT\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server hincrbyfloat h1 f1 10.5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server hincrbyfloat h1 f1 10.5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server hincrbyfloat h1 f2 10.5} {db0_HASH:2=1}\n            # HEXPIRE\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSET h1 f1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSET h1 f2 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HPEXPIREAT h1 1 FIELDS 1 f1} {db0_HASH:1=1}            \n            run_cmd_verify_hist {$server HSET h1 f3 1} {db0_HASH:2=1} \n            run_cmd_verify_hist {$server HPEXPIRE h1 50 FIELDS 1 f2} {db0_HASH:2=1}\n            run_cmd_verify_hist {} {db0_HASH:1=1} 1\n            run_cmd_verify_hist {$server HPEXPIRE h1 50 FIELDS 1 f3} {db0_HASH:1=1}\n            run_cmd_verify_hist {} {} 1\n        }\n\n        
test \"KEYSIZES - Test Hash field lazy expiration ($type) $suffixRepl\" {\n            $server debug set-active-expire 0\n\n            # HGET\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGET h1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGET h1 f2} {}\n\n            # HMGET\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HMGET h1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HMGET h1 f2} {}\n\n            # HGETDEL\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGETDEL h1 FIELDS 1 f1}  {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGETDEL h1 FIELDS 1 f2}  {}\n\n            # HGETEX\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGETEX h1 PX 1 FIELDS 1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGETEX h1 PX 1 FIELDS 1 f2} {}\n\n            # HSETNX\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f1 v1} {db0_HASH:1=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETNX h1 f1 v1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server DEL h1} {}\n            run_cmd_verify_hist {$server HSETEX h2 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n 
           run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETNX h2 f1 v1} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETNX h2 f2 v2} {db0_HASH:2=1}\n\n            # HSETEX\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f1 v1} {db0_HASH:1=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f1 v1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HGET h1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HGET h1 f2} {}\n\n            # HEXISTS\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HEXISTS h1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HEXISTS h1 f2} {}\n\n            # HSTRLEN\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 2 f1 v1 f2 v2} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSTRLEN h1 f1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSTRLEN h1 f2} {}\n\n            # HINCRBYFLOAT\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HINCRBYFLOAT h1 f1 1.5} {db0_HASH:1=1}\n\n            # HINCRBY\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f1 1} {db0_HASH:1=1}\n    
        run_cmd_verify_hist {after 5} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HINCRBY h1 f1 1} {db0_HASH:1=1}\n            run_cmd_verify_hist {$server HSETEX h1 PX 1 FIELDS 1 f2 1} {db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server HINCRBY h1 f2 1} {db0_HASH:2=1}\n\n            # SORT\n            run_cmd_verify_hist {$server FLUSHALL} {}\n            run_cmd_verify_hist {$server RPUSH user_ids 1 2} {db0_LIST:2=1}\n            run_cmd_verify_hist {$server HSET user:1 name \"Alice\" score 50} {db0_LIST:2=1 db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETEX user:2 PX 1 FIELDS 2 name \"Bob\" score 70} {db0_LIST:2=1 db0_HASH:2=2}\n            run_cmd_verify_hist {after 5} {db0_LIST:2=1 db0_HASH:2=2}\n            run_cmd_verify_hist {$server SORT user_ids BY user:*->score GET user:*->name GET user:*->score}  {db0_LIST:2=1 db0_HASH:2=1}\n            run_cmd_verify_hist {$server DEL user_ids} {db0_HASH:2=1}\n            run_cmd_verify_hist {$server RPUSH user_ids 1} {db0_LIST:1=1 db0_HASH:2=1}\n            run_cmd_verify_hist {$server HSETEX user:1 PX 1 FIELDS 2 name \"Alice\" score 50} {db0_LIST:1=1 db0_HASH:2=1}\n            run_cmd_verify_hist {after 5} {db0_LIST:1=1 db0_HASH:2=1}\n            run_cmd_verify_hist {$server SORT user_ids BY user:*->score GET user:*->name GET user:*->score}  {db0_LIST:1=1}\n\n            $server debug set-active-expire 1\n        } {OK} {cluster:skip needs:debug}\n    }\n    \n    test \"KEYSIZES - Test STRING BITS $suffixRepl\" {\n        # BITOPS\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server SET b1 \"x123456789\"} {db0_STR:8=1}\n        run_cmd_verify_hist {$server SET b2 \"x12345678\"} {db0_STR:8=2}\n        run_cmd_verify_hist {$server BITOP AND b3 b1 b2} {db0_STR:8=3}\n        run_cmd_verify_hist {$server BITOP OR b4 b1 b2} {db0_STR:8=4}\n        run_cmd_verify_hist {$server BITOP XOR b5 b1 b2} 
{db0_STR:8=5}        \n        # SETBIT\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server setbit b1 71 1} {db0_STR:8=1}\n        run_cmd_verify_hist {$server setbit b1 72 1} {db0_STR:8=1}\n        run_cmd_verify_hist {$server setbit b2 72 1} {db0_STR:8=2}\n        run_cmd_verify_hist {$server setbit b2 640 0} {db0_STR:8=1,64=1}\n        # BITFIELD\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server bitfield b3 set u8 6 255} {db0_STR:2=1}\n        run_cmd_verify_hist {$server bitfield b3 set u8 65 255} {db0_STR:8=1}\n        run_cmd_verify_hist {$server bitfield b4 set u8 1000 255} {db0_STR:8=1,64=1}\n    } {} {cluster:skip}\n       \n    test \"KEYSIZES - Test RESTORE $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l10 1 2 3 4} {db0_LIST:4=1}\n        set encoded [$server dump l10]\n        run_cmd_verify_hist {$server del l10} {}         \n        run_cmd_verify_hist {$server restore l11 0 $encoded} {db0_LIST:4=1}\n    }\n\n    test \"KEYSIZES - Test RENAME $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l12 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server RENAME l12 l13} {db0_LIST:4=1}\n    } {} {cluster:skip}\n    \n    test \"KEYSIZES - Test MOVE $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server RPUSH l2 1} {db0_LIST:1=1,4=1}\n        run_cmd_verify_hist {$server MOVE l1 1} {db0_LIST:1=1 db1_LIST:4=1}\n    } {} {cluster:skip}\n    \n    test \"KEYSIZES - Test SWAPDB $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}        \n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4} {db0_LIST:4=1}\n        $server select 1\n        run_cmd_verify_hist {$server ZADD z1 1 A} {db0_LIST:4=1   db1_ZSET:1=1}\n        
run_cmd_verify_hist {$server SWAPDB 0 1}  {db0_ZSET:1=1   db1_LIST:4=1}\n        $server select 0    \n    } {OK} {singledb:skip}\n    \n    test \"KEYSIZES - DEBUG RELOAD reset keysizes $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        run_cmd_verify_hist {$server RPUSH l10 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server SET s2 1234567890} {db0_STR:8=1 db0_LIST:4=1}\n        run_cmd_verify_hist {$server DEBUG RELOAD} {db0_STR:8=1 db0_LIST:4=1}\n        run_cmd_verify_hist {$server DEL l10} {db0_STR:8=1}\n        run_cmd_verify_hist {$server DEBUG RELOAD} {db0_STR:8=1}\n    } {} {cluster:skip needs:debug}\n\n    test \"KEYSIZES - Test RDB $suffixRepl\" {\n        run_cmd_verify_hist {$server FLUSHALL} {}\n        # Write list, set and zset to db0\n        run_cmd_verify_hist {$server RPUSH l1 1 2 3 4} {db0_LIST:4=1}\n        run_cmd_verify_hist {$server SADD s1 1 2 3 4 5} {db0_LIST:4=1 db0_SET:4=1}\n        run_cmd_verify_hist {$server ZADD z1 1 a 2 b 3 c} {db0_LIST:4=1 db0_SET:4=1 db0_ZSET:2=1}\n        run_cmd_verify_hist {$server SAVE} {db0_LIST:4=1 db0_SET:4=1 db0_ZSET:2=1}\n        if {$replicaMode eq 1} {\n            run_cmd_verify_hist {$replica SAVE} {db0_LIST:4=1 db0_SET:4=1 db0_ZSET:2=1}\n            run_cmd_verify_hist {restart_server 0 true false} {db0_LIST:4=1 db0_SET:4=1 db0_ZSET:2=1} 1\n        } else {\n            run_cmd_verify_hist {restart_server 0 true false} {db0_LIST:4=1 db0_SET:4=1 db0_ZSET:2=1}\n        }\n    } {} {external:skip}\n}\n\nstart_server {} {\n    r select 0\n    test_all_keysizes 0\n    # Start another server to test replication of KEYSIZES\n    start_server {tags {needs:repl external:skip}} {\n        # Set the outer layer server as primary\n        set primary [srv -1 client]\n        set primary_host [srv -1 host]\n        set primary_port [srv -1 port]\n        # Set this inner layer server as replica\n        set replica [srv 0 client]\n\n        # Server should have role replica\n  
      $replica replicaof $primary_host $primary_port\n        wait_for_condition 50 100 { [s 0 role] eq {slave} } else { fail \"Replication not started.\" }\n\n        # Test KEYSIZES on the primary and the replica\n        $primary select 0\n        test_all_keysizes 1\n    }\n}\n\n################################################################################\n# Test the key-memory-histograms config and key memory histograms (_sizes fields)\n# in the \"info keysizes\" command output.\n#\n# The key memory histogram (distrib_*_sizes) requires key-memory-histograms or\n# cluster-slot-stats-enabled to be set on startup (which enables memory_tracking).\n#\n# Note: Strings are not tracked to avoid confusion with distrib_strings_sizes.\n################################################################################\n\n# Query \"info keysizes\" and strip the header, spaces, and newlines from the\n# result, keeping only the key memory distribution lines.\nproc get_info_keymem_stripped {server} {\n    set info [$server info keysizes]\n    set result \"\"\n    foreach line [split $info \"\\n\"] {\n        # Match key memory histograms: lists_sizes, sets_sizes, zsets_sizes, hashes_sizes\n        if {[regexp {distrib_(lists|sets|zsets|hashes)_sizes} $line]} {\n            append result [string map {\" \" \"\" \"\\r\" \"\"} $line]\n        }\n    }\n    return $result\n}\n\n# Verify that the key memory histogram has entries for the expected types\nproc verify_keymem_non_empty {server types} {\n    set info [$server info keysizes]\n    foreach type $types {\n        if {![string match \"*distrib_${type}_sizes*\" $info]} {\n            fail \"Expected key memory histogram for type $type but not found in: $info\"\n        }\n    }\n}\n\n# Verify that the key memory histogram is empty\nproc verify_keymem_empty {server} {\n    set stripped [get_info_keymem_stripped $server]\n    if {$stripped ne \"\"} {\n        fail \"Expected empty key memory histogram but got: $stripped\"\n    }\n}\n\n# Test 
key-memory-histograms config in standalone mode\nstart_server {tags {external:skip needs:debug} overrides {key-memory-histograms yes}} {\n\n    test \"KEY-MEMORY-STATS - Empty database should have empty key memory histogram\" {\n        r FLUSHALL\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - List keys should appear in key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"mylist\" a b c d e\n        verify_keymem_non_empty r {lists}\n        r FLUSHALL\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - All data types should appear in key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"list\" a b c\n        r SADD \"set\" x y z\n        r ZADD \"zset\" 1 a 2 b\n        r HSET \"hash\" f1 v1\n\n        verify_keymem_non_empty r {lists sets zsets hashes}\n    }\n\n    test \"KEY-MEMORY-STATS - Histogram bins should use power-of-2 labels\" {\n        r FLUSHALL\n        r HSET \"hash\" f1 v1\n        set info [r info keysizes]\n        assert {[regexp {distrib_hashes_sizes:([0-9]+[KMGTPE]?)=} $info -> label]}\n        set valid_labels {0 1 2 4 8 16 32 64 128 256 512\n                          1K 2K 4K 8K 16K 32K 64K 128K 256K 512K\n                          1M 2M 4M 8M 16M 32M 64M 128M 256M 512M\n                          1G 2G 4G 8G 16G 32G 64G 128G 256G 512G\n                          1T 2T 4T 8T 16T 32T 64T 128T 256T 512T\n                          1P 2P 4P 8P 16P 32P 64P 128P 256P 512P\n                          1E 2E 4E}\n        if {[lsearch -exact $valid_labels $label] < 0} {\n            fail \"Label '$label' is not a valid power-of-2 label\"\n        }\n    }\n\n    test \"KEY-MEMORY-STATS - DEL should remove key from key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"list\" a b c\n        verify_keymem_non_empty r {lists}\n        r DEL \"list\"\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - Modifying a list should update key memory histogram\" {\n        r 
FLUSHALL\n        r RPUSH \"mylist\" a\n        set info1 [r info keysizes]\n        # Add many elements to change allocation\n        for {set i 0} {$i < 1000} {incr i} {\n            r RPUSH \"mylist\" [string repeat \"x\" 100]\n        }\n        set info2 [r info keysizes]\n        # The histogram should have changed\n        assert {$info1 ne $info2}\n    }\n\n    test \"KEY-MEMORY-STATS - FLUSHALL clears key memory histogram\" {\n        r RPUSH \"list1\" a b c\n        r RPUSH \"list2\" d e f\n        verify_keymem_non_empty r {lists}\n        r FLUSHALL\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - Larger allocations go to higher bins\" {\n        r FLUSHALL\n        r HSET \"small\" f1 v1\n        set small_info [r info keysizes]\n        r FLUSHALL\n\n        # Create a large hash\n        for {set i 0} {$i < 1000} {incr i} {\n            r HSET \"large\" field$i [string repeat \"x\" 100]\n        }\n        set large_info [r info keysizes]\n\n        # The large hash should land in a higher bin, so the histogram output should differ\n        assert {$small_info ne $large_info}\n    }\n\n    test \"KEY-MEMORY-STATS - EXPIRE eventually removes from histogram\" {\n        r FLUSHALL\n        r RPUSH \"expiring\" a b c\n        verify_keymem_non_empty r {lists}\n        r PEXPIRE \"expiring\" 50\n        after 100\n        wait_for_condition 50 20 {\n            [get_info_keymem_stripped r] eq \"\"\n        } else {\n            fail \"Key did not expire from key memory histogram\"\n        }\n    }\n\n    test \"KEY-MEMORY-STATS - Test RESTORE adds to histogram\" {\n        r FLUSHALL\n        r RPUSH \"mylist\" 1 2 3 4\n        set encoded [r dump \"mylist\"]\n        r DEL \"mylist\"\n        verify_keymem_empty r\n        r RESTORE \"mylist2\" 0 $encoded\n        verify_keymem_non_empty r {lists}\n    }\n\n    test \"KEY-MEMORY-STATS - DEBUG RELOAD preserves key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"list\" 1 2 3 4 5\n        r HSET \"hash\" f1 v1\n       
 verify_keymem_non_empty r {lists hashes}\n        r DEBUG RELOAD\n        verify_keymem_non_empty r {lists hashes}\n        r DEL \"list\"\n        r DEBUG RELOAD\n        verify_keymem_non_empty r {hashes}\n        r FLUSHALL\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - RENAME should preserve key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"oldkey\" a b c d e\n        verify_keymem_non_empty r {lists}\n        r RENAME \"oldkey\" \"newkey\"\n        verify_keymem_non_empty r {lists}\n        r DEL \"newkey\"\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - Test DEBUG KEYSIZES-HIST-ASSERT command\" {\n        r DEBUG KEYSIZES-HIST-ASSERT 1\n        r FLUSHALL\n        createComplexDataset r 100\n        createComplexDataset r 100 {useexpire usehexpire}\n        # If we get here without crash, the assertion passed\n        r DEBUG KEYSIZES-HIST-ASSERT 0\n    }\n\n    test \"KEY-MEMORY-STATS - RDB save and restart preserves key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"list\" 1 2 3 4 5\n        r SADD \"set\" a b c d e\n        r ZADD \"zset\" 1 a 2 b 3 c\n        r HSET \"hash\" f1 v1 f2 v2\n        verify_keymem_non_empty r {lists sets zsets hashes}\n        r SAVE\n        restart_server 0 true false\n        verify_keymem_non_empty r {lists sets zsets hashes}\n    }\n\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"KEY-MEMORY-STATS - Hash field lazy expiration ($type)\" {\n            r debug set-active-expire 0\n\n            # HGET triggers lazy expiration\n            r FLUSHALL\n            r HSETEX \"h1\" PX 1 FIELDS 2 f1 v1 f2 v2\n            verify_keymem_non_empty r {hashes}\n            after 5\n            r HGET \"h1\" f1\n            verify_keymem_non_empty r {hashes}\n            r 
HGET \"h1\" f2\n            verify_keymem_empty r\n\n            r debug set-active-expire 1\n            r FLUSHALL\n        }\n    }\n}\n\n# Test that key-memory-histograms=no does NOT show key memory histogram\nstart_server {tags {external:skip} overrides {key-memory-histograms no}} {\n\n    test \"KEY-MEMORY-STATS disabled - key memory histogram should not appear\" {\n        r FLUSHALL\n        r SET \"mykey\" \"hello world\"\n        r RPUSH \"list\" a b c\n        r SADD \"set\" x y z\n        r ZADD \"zset\" 1 a 2 b\n        r HSET \"hash\" f1 v1\n\n        set info [r info keysizes]\n        # Keysizes (sizes/items) should be present\n        assert {[string match \"*distrib_strings_sizes*\" $info]}\n        assert {[string match \"*distrib_lists_items*\" $info]}\n        # Key memory histogram should NOT be present (note: lists_sizes\n        # is only present when memory tracking is enabled, but lists_items always is)\n        set stripped [get_info_keymem_stripped r]\n        assert {$stripped eq \"\"}\n    }\n\n    test \"KEY-MEMORY-STATS - cannot enable key-memory-histograms at runtime when disabled at startup\" {\n        # Verify the config is currently disabled\n        assert_equal [r config get key-memory-histograms] {key-memory-histograms no}\n\n        # Try to enable at runtime - should fail\n        catch {r config set key-memory-histograms yes} err\n        assert_match \"*cannot be enabled at runtime*\" $err\n\n        # Verify it's still disabled\n        assert_equal [r config get key-memory-histograms] {key-memory-histograms no}\n    }\n}\n\n# Test that key-memory-histograms can be disabled at runtime when enabled at startup\nstart_server {tags {external:skip} overrides {key-memory-histograms yes}} {\n\n    test \"KEY-MEMORY-STATS - can disable key-memory-histograms at runtime and distrib_*_sizes disappear\" {\n        # Verify the config is currently enabled\n        assert_equal [r config get key-memory-histograms] 
{key-memory-histograms yes}\n\n        # Create some data that would appear in histogram\n        r RPUSH \"list\" a b c d e\n        r SADD \"set\" x y z\n        r ZADD \"zset\" 1 a 2 b\n        r HSET \"hash\" f1 v1\n        verify_keymem_non_empty r {lists sets zsets hashes}\n\n        # Disable at runtime - should succeed\n        r config set key-memory-histograms no\n\n        # Verify it's now disabled\n        assert_equal [r config get key-memory-histograms] {key-memory-histograms no}\n\n        # Verify distrib_*_sizes fields are no longer in INFO keysizes\n        set stripped [get_info_keymem_stripped r]\n        assert_equal $stripped \"\" \"Expected empty key memory histogram after disabling\"\n    }\n\n    test \"KEY-MEMORY-STATS - cannot re-enable key-memory-histograms at runtime after disabling\" {\n        # Disable first (may already be disabled from previous test)\n        r config set key-memory-histograms no\n        assert_equal [r config get key-memory-histograms] {key-memory-histograms no}\n\n        # Try to re-enable - should fail\n        catch {r config set key-memory-histograms yes} err\n        assert_match \"*cannot be enabled at runtime*\" $err\n\n        # Verify it's still disabled\n        assert_equal [r config get key-memory-histograms] {key-memory-histograms no}\n    }\n}\n\n# Test key memory histograms in cluster mode (with cluster-slot-stats-enabled)\nstart_cluster 1 0 {tags {external:skip cluster needs:debug} overrides {cluster-slot-stats-enabled yes}} {\n\n    test \"SLOT-ALLOCSIZE - Test DEBUG ALLOCSIZE-SLOTS-ASSERT command\" {\n        r DEBUG ALLOCSIZE-SLOTS-ASSERT 1\n        r FLUSHALL\n        createComplexDataset r 100 {usetag}\n        createComplexDataset r 100 {usetag useexpire usehexpire}\n        # If we get here without crash, the assertion passed\n        r DEBUG ALLOCSIZE-SLOTS-ASSERT 0\n    }\n\n    test \"KEY-MEMORY-STATS - Test DEBUG KEYSIZES-HIST-ASSERT command\" {\n        r DEBUG KEYSIZES-HIST-ASSERT 
1\n        r FLUSHALL\n        createComplexDataset r 100 {usetag}\n        createComplexDataset r 100 {usetag useexpire usehexpire}\n        # If we get here without crash, the assertion passed\n        r DEBUG KEYSIZES-HIST-ASSERT 0\n    }\n\n    test \"KEY-MEMORY-STATS - key memory histogram should appear\" {\n        r FLUSHALL\n        r RPUSH \"mylist{t}\" a b c d e\n        verify_keymem_non_empty r {lists}\n        r FLUSHALL\n        verify_keymem_empty r\n    }\n\n    test \"KEY-MEMORY-STATS - All data types should appear in key memory histogram\" {\n        r FLUSHALL\n        r RPUSH \"list{t}\" a b c\n        r SADD \"set{t}\" x y z\n        r ZADD \"zset{t}\" 1 a 2 b\n        r HSET \"hash{t}\" f1 v1\n\n        verify_keymem_non_empty r {lists sets zsets hashes}\n    }\n}\n\n# Test with replication in cluster mode for key memory stats\nstart_cluster 1 1 {tags {external:skip cluster needs:debug needs:repl} overrides {cluster-slot-stats-enabled yes}} {\n    set primary_id 0\n    set replica_id 1\n    set primary [Rn $primary_id]\n    set replica [Rn $replica_id]\n\n    # Wait for replica to sync\n    wait_for_condition 50 100 {\n        [s -1 role] eq {slave}\n    } else {\n        fail \"Replica did not start\"\n    }\n    wait_for_condition 1000 50 {\n        [s -1 master_link_status] eq {up}\n    } else {\n        fail \"Replica link not up\"\n    }\n\n    test \"KEY-MEMORY-STATS - Replication updates key memory stats on replica\" {\n        $primary FLUSHALL\n        wait_for_ofs_sync $primary $replica\n\n        $primary RPUSH \"list{t}\" 1 2 3 4 5\n        $primary SADD \"set{t}\" a b c d e\n        $primary ZADD \"zset{t}\" 1 a 2 b 3 c\n        $primary HSET \"hash{t}\" f1 v1 f2 v2\n\n        wait_for_ofs_sync $primary $replica\n\n        verify_keymem_non_empty $replica {lists sets zsets hashes}\n    }\n\n    test \"KEY-MEMORY-STATS - DEL on primary updates key memory stats on replica\" {\n        $primary FLUSHALL\n        wait_for_ofs_sync 
$primary $replica\n\n        $primary RPUSH \"list{t}\" a b c d e\n        wait_for_ofs_sync $primary $replica\n        verify_keymem_non_empty $replica {lists}\n\n        $primary DEL \"list{t}\"\n        wait_for_ofs_sync $primary $replica\n        verify_keymem_empty $replica\n    }\n}\n"
  },
  {
    "path": "tests/unit/info.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nproc cmdstat {cmd} {\n    return [cmdrstat $cmd r]\n}\n\nproc errorstat {cmd} {\n    return [errorrstat $cmd r]\n}\n\nproc latency_percentiles_usec {cmd} {\n    return [latencyrstat_percentiles $cmd r]\n}\n\nstart_server {tags {\"info\" \"external:skip\"}} {\n    start_server {} {\n\n        test {latencystats: disable/enable} {\n            r config resetstat\n            r CONFIG SET latency-tracking no\n            r set a b\n            assert_match {} [latency_percentiles_usec set]\n            r CONFIG SET latency-tracking yes\n            r set a b\n            assert_match {*p50=*,p99=*,p99.9=*} [latency_percentiles_usec set]\n            r config resetstat\n            assert_match {} [latency_percentiles_usec set]\n        }\n\n        test {latencystats: configure percentiles} {\n            r config resetstat\n            assert_match {} [latency_percentiles_usec set]\n            r CONFIG SET latency-tracking yes\n            r SET a b\n            r GET a\n            assert_match {*p50=*,p99=*,p99.9=*} [latency_percentiles_usec set]\n            assert_match {*p50=*,p99=*,p99.9=*} [latency_percentiles_usec get]\n            r CONFIG SET latency-tracking-info-percentiles \"0.0 50.0 100.0\"\n            assert_match [r config get latency-tracking-info-percentiles] {latency-tracking-info-percentiles {0 50 100}}\n            assert_match {*p0=*,p50=*,p100=*} [latency_percentiles_usec set]\n            assert_match {*p0=*,p50=*,p100=*} [latency_percentiles_usec get]\n            r 
config resetstat\n            assert_match {} [latency_percentiles_usec set]\n        }\n\n        test {latencystats: bad configure percentiles} {\n            r config resetstat\n            set configlatencyline [r config get latency-tracking-info-percentiles]\n            catch {r CONFIG SET latency-tracking-info-percentiles \"10.0 50.0 a\"} e\n            assert_match {ERR CONFIG SET failed*} $e\n            assert_equal [s total_error_replies] 1\n            assert_match [r config get latency-tracking-info-percentiles] $configlatencyline\n            catch {r CONFIG SET latency-tracking-info-percentiles \"10.0 50.0 101.0\"} e\n            assert_match {ERR CONFIG SET failed*} $e\n            assert_equal [s total_error_replies] 2\n            assert_match [r config get latency-tracking-info-percentiles] $configlatencyline\n            r config resetstat\n            assert_match {} [errorstat ERR]\n        }\n\n        test {latencystats: blocking commands} {\n            r config resetstat\n            r CONFIG SET latency-tracking yes\n            r CONFIG SET latency-tracking-info-percentiles \"50.0 99.0 99.9\"\n            set rd [redis_deferring_client]\n            r del list1{t}\n\n            $rd blpop list1{t} 0\n            wait_for_blocked_client\n            r lpush list1{t} a\n            assert_equal [$rd read] {list1{t} a}\n            $rd blpop list1{t} 0\n            wait_for_blocked_client\n            r lpush list1{t} b\n            assert_equal [$rd read] {list1{t} b}\n            assert_match {*p50=*,p99=*,p99.9=*} [latency_percentiles_usec blpop]\n            $rd close\n        }\n\n        test {latencystats: subcommands} {\n            r config resetstat\n            r CONFIG SET latency-tracking yes\n            r CONFIG SET latency-tracking-info-percentiles \"50.0 99.0 99.9\"\n            r client id\n\n            assert_match {*p50=*,p99=*,p99.9=*} [latency_percentiles_usec client\\\\|id]\n            assert_match 
{*p50=*,p99=*,p99.9=*} [latency_percentiles_usec config\\\\|set]\n        }\n\n        test {latencystats: measure latency} {\n            r config resetstat\n            r CONFIG SET latency-tracking yes\n            r CONFIG SET latency-tracking-info-percentiles \"50.0\"\n            r DEBUG sleep 0.05\n            r SET k v\n            set latencystatline_debug [latency_percentiles_usec debug]\n            set latencystatline_set [latency_percentiles_usec set]\n            regexp \"p50=(.+\\..+)\" $latencystatline_debug -> p50_debug\n            regexp \"p50=(.+\\..+)\" $latencystatline_set -> p50_set\n            assert {$p50_debug >= 50000}\n            assert {$p50_set >= 0}\n            assert {$p50_debug >= $p50_set}\n        } {} {needs:debug}\n\n        test {errorstats: failed call authentication error} {\n            r config resetstat\n            assert_match {} [errorstat ERR]\n            assert_equal [s total_error_replies] 0\n            catch {r auth k} e\n            assert_match {ERR AUTH*} $e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n            assert_equal [s total_error_replies] 1\n            r config resetstat\n            assert_match {} [errorstat ERR]\n        }\n\n        test {errorstats: failed call within MULTI/EXEC} {\n            r config resetstat\n            assert_match {} [errorstat ERR]\n            assert_equal [s total_error_replies] 0\n            r multi\n            r set a b\n            r auth a\n            catch {r exec} e\n            assert_match {ERR AUTH*} $e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdstat set]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdstat exec]\n            assert_equal [s 
total_error_replies] 1\n\n            # MULTI/EXEC command errors should still be attributed to the command itself\n            catch {r exec} e\n            assert_match {ERR EXEC without MULTI} $e\n            assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat exec]\n            assert_match {*count=2*} [errorstat ERR]\n            assert_equal [s total_error_replies] 2\n        }\n\n        test {errorstats: failed call within LUA} {\n            r config resetstat\n            assert_match {} [errorstat ERR]\n            assert_equal [s total_error_replies] 0\n            catch {r eval {redis.pcall('XGROUP', 'CREATECONSUMER', 's1', 'mygroup', 'consumer') return } 0} e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat xgroup\\\\|createconsumer]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdstat eval]\n\n            # EVAL command errors should still be attributed to the command itself\n            catch {r eval a} e\n            assert_match {ERR wrong*} $e\n            assert_match {*calls=1,*,rejected_calls=1,failed_calls=0*} [cmdstat eval]\n            assert_match {*count=2*} [errorstat ERR]\n            assert_equal [s total_error_replies] 2\n        }\n\n        test {errorstats: failed call NOSCRIPT error} {\n            r config resetstat\n            assert_equal [s total_error_replies] 0\n            assert_match {} [errorstat NOSCRIPT]\n            catch {r evalsha NotValidShaSUM 0} e\n            assert_match {NOSCRIPT*} $e\n            assert_match {*count=1*} [errorstat NOSCRIPT]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat evalsha]\n            assert_equal [s total_error_replies] 1\n            r config resetstat\n            assert_match {} [errorstat NOSCRIPT]\n        }\n\n        test {errorstats: failed call NOGROUP error} {\n            r config resetstat\n            assert_match {} [errorstat NOGROUP]\n
           r del mystream\n            r XADD mystream * f v\n            catch {r XGROUP CREATECONSUMER mystream mygroup consumer} e\n            assert_match {NOGROUP*} $e\n            assert_match {*count=1*} [errorstat NOGROUP]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat xgroup\\\\|createconsumer]\n            r config resetstat\n            assert_match {} [errorstat NOGROUP]\n        }\n\n        test {errorstats: rejected call unknown command} {\n            r config resetstat\n            assert_equal [s total_error_replies] 0\n            assert_match {} [errorstat ERR]\n            catch {r asdf} e\n            assert_match {ERR unknown*} $e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_equal [s total_error_replies] 1\n            r config resetstat\n            assert_match {} [errorstat ERR]\n        }\n\n        test {errorstats: rejected call within MULTI/EXEC} {\n            r config resetstat\n            assert_equal [s total_error_replies] 0\n            assert_match {} [errorstat ERR]\n            r multi\n            catch {r set} e\n            assert_match {ERR wrong number of arguments for 'set' command} $e\n            catch {r exec} e\n            assert_match {EXECABORT*} $e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_match {*count=1*} [errorstat EXECABORT]\n            assert_equal [s total_error_replies] 2\n            assert_match {*calls=0,*,rejected_calls=1,failed_calls=0*} [cmdstat set]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdstat multi]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat exec]\n            assert_equal [s total_error_replies] 2\n            r config resetstat\n            assert_match {} [errorstat ERR]\n        }\n\n        test {errorstats: rejected call due to wrong arity} {\n            r config resetstat\n            assert_equal [s total_error_replies] 
0\n            assert_match {} [errorstat ERR]\n            catch {r set k} e\n            assert_match {ERR wrong number of arguments for 'set' command} $e\n            assert_match {*count=1*} [errorstat ERR]\n            assert_match {*calls=0,*,rejected_calls=1,failed_calls=0*} [cmdstat set]\n            # ensure that after a rejected command, valid ones are counted properly\n            r set k1 v1\n            r set k2 v2\n            assert_match {calls=2,*,rejected_calls=1,failed_calls=0*} [cmdstat set]\n            assert_equal [s total_error_replies] 1\n        }\n\n        test {errorstats: rejected call by OOM error} {\n            r config resetstat\n            assert_equal [s total_error_replies] 0\n            assert_match {} [errorstat OOM]\n            r config set maxmemory 1\n            catch {r set a b} e\n            assert_match {OOM*} $e\n            assert_match {*count=1*} [errorstat OOM]\n            assert_match {*calls=0,*,rejected_calls=1,failed_calls=0*} [cmdstat set]\n            assert_equal [s total_error_replies] 1\n            r config resetstat\n            assert_match {} [errorstat OOM]\n            r config set maxmemory 0\n        }\n\n        test {errorstats: rejected call by authorization error} {\n            r config resetstat\n            assert_equal [s total_error_replies] 0\n            assert_match {} [errorstat NOPERM]\n            r ACL SETUSER alice on >p1pp0 ~cached:* +get +info +config\n            r auth alice p1pp0\n            catch {r set a b} e\n            assert_match {NOPERM*} $e\n            assert_match {*count=1*} [errorstat NOPERM]\n            assert_match {*calls=0,*,rejected_calls=1,failed_calls=0*} [cmdstat set]\n            assert_equal [s total_error_replies] 1\n            r config resetstat\n            assert_match {} [errorstat NOPERM]\n            r auth default \"\"\n        }\n\n        test {errorstats: blocking commands} {\n            r config resetstat\n            set rd 
[redis_deferring_client]\n            $rd client id\n            set rd_id [$rd read]\n            r del list1{t}\n\n            $rd blpop list1{t} 0\n            wait_for_blocked_client\n            r client unblock $rd_id error\n            assert_error {UNBLOCKED*} {$rd read}\n            assert_match {*count=1*} [errorstat UNBLOCKED]\n            assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat blpop]\n            assert_equal [s total_error_replies] 1\n            $rd close\n        }\n\n        test {errorstats: limit errors will not increase indefinitely} {\n            r config resetstat\n            for {set j 1} {$j <= 1100} {incr j} {\n                assert_error \"$j my error message\" {\n                    r eval {return redis.error_reply(string.format('%s my error message', ARGV[1]))} 0 $j\n                }\n            }\n\n            assert_equal [count_log_message 0 \"Errorstats stopped adding new errors\"] 1\n            assert_equal [count_log_message 0 \"Current errors code list\"] 1\n            assert_equal \"count=1\" [errorstat ERRORSTATS_DISABLED]\n\n            # Since we currently have no metrics exposed for server.errors, we use lazyfree\n            # to verify that we only have 128 errors.\n            wait_for_condition 50 100 {\n                [s lazyfreed_objects] eq 128\n            } else {\n                fail \"errorstats resetstat lazyfree error\"\n            }\n        }\n\n        test {stats: eventloop metrics} {\n            set info1 [r info stats]\n            set cycle1 [getInfoProperty $info1 eventloop_cycles]\n            set el_sum1 [getInfoProperty $info1 eventloop_duration_sum]\n            set cmd_sum1 [getInfoProperty $info1 eventloop_duration_cmd_sum]\n            assert_morethan $cycle1 0\n            assert_morethan $el_sum1 0\n            assert_morethan $cmd_sum1 0\n            after 110 ;# default hz is 10, wait for a cron tick. 
\n            set info2 [r info stats]\n            set cycle2 [getInfoProperty $info2 eventloop_cycles]\n            set el_sum2 [getInfoProperty $info2 eventloop_duration_sum]\n            set cmd_sum2 [getInfoProperty $info2 eventloop_duration_cmd_sum]\n            if {$::verbose} { puts \"eventloop metrics cycle1: $cycle1, cycle2: $cycle2\" }\n            assert_morethan $cycle2 $cycle1\n            assert_lessthan $cycle2 [expr $cycle1+10] ;# we expect 2 or 3 cycles here, but allow some tolerance\n            if {$::verbose} { puts \"eventloop metrics el_sum1: $el_sum1, el_sum2: $el_sum2\" }\n            assert_morethan $el_sum2 $el_sum1\n            assert_lessthan $el_sum2 [expr $el_sum1+100000] ;# we expect roughly 100ms here, but allow some tolerance\n            if {$::verbose} { puts \"eventloop metrics cmd_sum1: $cmd_sum1, cmd_sum2: $cmd_sum2\" }\n            assert_morethan $cmd_sum2 $cmd_sum1\n            assert_lessthan $cmd_sum2 [expr $cmd_sum1+15000] ;# we expect a few tens of ms here, but allow some tolerance\n        } {} {debug_defrag:skip}\n\n        test {stats: instantaneous metrics} {\n            r config resetstat\n\n            set multiplier 1\n            if {[r config get io-threads] > 1} {\n                # the IO threads also have a clients cron job now, and the default hz is 10,\n                # so the IO thread that has the current client will trigger the main\n                # thread to run the clients cron job, which will also count as a cron tick\n                set multiplier 2\n            }\n\n            set retries 0\n            for {set retries 1} {$retries < 4} {incr retries} {\n                after 1600 ;# hz is 10, wait for 16 cron ticks so that the sample array is filled\n                set value [s instantaneous_eventloop_cycles_per_sec]\n                if {$value > 0} break\n            }\n\n            assert_lessthan $retries 4\n            if {$::verbose} { puts \"instantaneous metrics 
instantaneous_eventloop_cycles_per_sec: $value\" }\n            assert_morethan $value 0\n            assert_lessthan $value [expr $retries*15*$multiplier] ;# default hz is 10\n            set value [s instantaneous_eventloop_duration_usec]\n            if {$::verbose} { puts \"instantaneous metrics instantaneous_eventloop_duration_usec: $value\" }\n            assert_morethan $value 0\n            assert_lessthan $value [expr $retries*22000] ;# default hz is 10, so duration < 1000 / 10, allow some tolerance\n        } {} {debug_defrag:skip}\n\n        test {stats: debug metrics} {\n            # make sure debug info is hidden\n            set info [r info]\n            assert_equal [getInfoProperty $info eventloop_duration_aof_sum] {}\n            set info_all [r info all]\n            assert_equal [getInfoProperty $info_all eventloop_duration_aof_sum] {}\n\n            set info1 [r info debug]\n\n            set aof1 [getInfoProperty $info1 eventloop_duration_aof_sum]\n            assert {$aof1 >= 0}\n            set cron1 [getInfoProperty $info1 eventloop_duration_cron_sum]\n            assert {$cron1 > 0}\n            set cycle_max1 [getInfoProperty $info1 eventloop_cmd_per_cycle_max]\n            assert {$cycle_max1 > 0}\n            set duration_max1 [getInfoProperty $info1 eventloop_duration_max]\n            assert {$duration_max1 > 0}\n\n            after 110 ;# hz is 10, wait for a cron tick.\n            set info2 [r info debug]\n\n            set aof2 [getInfoProperty $info2 eventloop_duration_aof_sum]\n            assert {$aof2 >= $aof1} ;# AOF is disabled, we expect $aof2 == $aof1, but allow some tolerance.\n            set cron2 [getInfoProperty $info2 eventloop_duration_cron_sum]\n            assert_morethan $cron2 $cron1\n            set cycle_max2 [getInfoProperty $info2 eventloop_cmd_per_cycle_max]\n            assert {$cycle_max2 >= $cycle_max1}\n            set duration_max2 [getInfoProperty $info2 eventloop_duration_max]\n            assert 
{$duration_max2 >= $duration_max1}\n        }\n\n        test {stats: client input and output buffer limit disconnections} {\n            r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n            r config resetstat\n            set info [r info stats]\n            assert_equal [getInfoProperty $info client_query_buffer_limit_disconnections] {0}\n            assert_equal [getInfoProperty $info client_output_buffer_limit_disconnections] {0}\n            # set qbuf limit to minimum to test stat\n            set org_qbuf_limit [lindex [r config get client-query-buffer-limit] 1]\n            r config set client-query-buffer-limit 1048576\n            catch {r set key [string repeat a 1048576]}\n            set info [r info stats]\n            assert_equal [getInfoProperty $info client_query_buffer_limit_disconnections] {1}\n            r config set client-query-buffer-limit $org_qbuf_limit\n            # set outbuf limit to just 10 to test stat\n            set org_outbuf_limit [lindex [r config get client-output-buffer-limit] 1]\n            r config set client-output-buffer-limit \"normal 10 0 0\"\n            r set key [string repeat a 100000] ;# to trigger output buffer limit check this needs to be big\n            catch {r get key}\n            r config set client-output-buffer-limit $org_outbuf_limit\n            set info [r info stats]\n            assert_equal [getInfoProperty $info client_output_buffer_limit_disconnections] {1}\n        } {} {logreqres:skip} ;# same as obuf-limits.tcl, skip logreqres\n\n        test {clients: pubsub clients} {\n            set info [r info clients]\n            assert_equal [getInfoProperty $info pubsub_clients] {0}\n            set rd1 [redis_deferring_client]\n            set rd2 [redis_deferring_client]\n            # basic count\n            assert_equal {1} [ssubscribe $rd1 {chan1}]\n            assert_equal {1} [subscribe $rd2 {chan2}]\n            set info [r info clients]\n  
          assert_equal [getInfoProperty $info pubsub_clients] {2}\n            # unsubscribing from a non-existent channel\n            assert_equal {1} [unsubscribe $rd2 {non-exist-chan}]\n            set info [r info clients]\n            assert_equal [getInfoProperty $info pubsub_clients] {2}\n            # the count changes when the client unsubscribes from all channels\n            assert_equal {0} [unsubscribe $rd2 {chan2}]\n            set info [r info clients]\n            assert_equal [getInfoProperty $info pubsub_clients] {1}\n            # non-pubsub clients should not be involved\n            assert_equal {0} [unsubscribe $rd2 {non-exist-chan}]\n            set info [r info clients]\n            assert_equal [getInfoProperty $info pubsub_clients] {1}\n            # close all clients\n            $rd1 close\n            $rd2 close\n            wait_for_condition 100 50 {\n                [getInfoProperty [r info clients] pubsub_clients] eq {0}\n            } else {\n                fail \"pubsub clients did not clear\"\n            }\n        }\n\n        test {clients: watching clients} {\n            set r2 [redis_client]\n            assert_equal [s watching_clients] 0\n            assert_equal [s total_watched_keys] 0\n            assert_match {*watch=0*} [r client info]\n            assert_match {*watch=0*} [$r2 client info]\n            # count after watching a key\n            $r2 watch key\n            assert_equal [s watching_clients] 1\n            assert_equal [s total_watched_keys] 1\n            assert_match {*watch=0*} [r client info]\n            assert_match {*watch=1*} [$r2 client info]\n            # the same client watching the same key again has no effect\n            $r2 watch key\n            assert_equal [s watching_clients] 1\n            assert_equal [s total_watched_keys] 1\n            assert_match {*watch=0*} [r client info]\n            assert_match {*watch=1*} [$r2 client info]\n            # a different client watching a different key\n            r watch key2\n       
     assert_equal [s watching_clients] 2\n            assert_equal [s total_watched_keys] 2\n            assert_match {*watch=1*} [$r2 client info]\n            assert_match {*watch=1*} [r client info]\n            # count after unwatch\n            r unwatch\n            assert_equal [s watching_clients] 1\n            assert_equal [s total_watched_keys] 1\n            assert_match {*watch=0*} [r client info]\n            assert_match {*watch=1*} [$r2 client info]\n            $r2 unwatch\n            assert_equal [s watching_clients] 0\n            assert_equal [s total_watched_keys] 0\n            assert_match {*watch=0*} [r client info]\n            assert_match {*watch=0*} [$r2 client info]\n\n            # count after watch/multi/exec\n            $r2 watch key\n            assert_equal [s watching_clients] 1\n            $r2 multi\n            $r2 exec\n            assert_equal [s watching_clients] 0\n            # count after watch/multi/discard\n            $r2 watch key\n            assert_equal [s watching_clients] 1\n            $r2 multi\n            $r2 discard\n            assert_equal [s watching_clients] 0\n            # discard without multi has no effect\n            $r2 watch key\n            assert_equal [s watching_clients] 1\n            catch {$r2 discard} e\n            assert_equal [s watching_clients] 1\n            # unwatch without watch has no effect\n            r unwatch\n            assert_equal [s watching_clients] 1\n            # after disconnect, since close may arrive later, or the client may\n            # be freed asynchronously, we use a wait_for_condition\n            $r2 close\n            wait_for_watched_clients_count 0\n        }\n \n        test {clients: active_clients} {\n            set info [r info clients]\n            set ac [getInfoProperty $info active_clients]\n            # The test connection just ran a command, so at least 1 client is active\n            assert_morethan_equal $ac 1\n\n            # Create 
additional clients and make them active\n            set r2 [redis_client]\n            set r3 [redis_client]\n            $r2 ping\n            $r3 ping\n\n            # Within the 512ms window, all 3 clients should be counted\n            set info [r info clients]\n            set ac [getInfoProperty $info active_clients]\n            assert_morethan_equal $ac 3\n\n            # After the window expires (512ms), idle clients should drop off\n            wait_for_condition 20 100 {\n                [getInfoProperty [r info clients] active_clients] <= 1\n            } else {\n                fail \"active_clients did not drop after window expired\"\n            }\n\n            $r2 close\n            $r3 close\n        }\n\n        test {stats: client processing and pipeline metrics} {\n            set info1 [r info stats]\n            set proc_events1 [getInfoProperty $info1 total_client_processing_events]\n            set cycles1 [getInfoProperty $info1 eventloop_cycles_with_clients_processing]\n            set plsum1 [getInfoProperty $info1 avg_pipeline_length_sum]\n            set plcnt1 [getInfoProperty $info1 avg_pipeline_length_cnt]\n\n            # Issue several commands\n            r ping\n            r ping\n            r ping\n\n            set info2 [r info stats]\n            set proc_events2 [getInfoProperty $info2 total_client_processing_events]\n            set cycles2 [getInfoProperty $info2 eventloop_cycles_with_clients_processing]\n            set plsum2 [getInfoProperty $info2 avg_pipeline_length_sum]\n            set plcnt2 [getInfoProperty $info2 avg_pipeline_length_cnt]\n            set plavg2 [getInfoProperty $info2 avg_pipeline_length]\n\n            # processInputBuffer was called for 3 PINGs + the INFO call = at least 4\n            assert_morethan_equal [expr {$proc_events2 - $proc_events1}] 4\n\n            # At least one eventloop cycle processed client input\n            assert_morethan $cycles2 $cycles1\n\n            # Cycles with 
clients can never exceed total processInputBuffer calls\n            assert_morethan_equal $proc_events2 $cycles2\n\n            # Pipeline sum and cnt increased (3 PINGs + INFO, each batch of 1)\n            assert_morethan_equal [expr {$plsum2 - $plsum1}] 4\n            assert_morethan_equal [expr {$plcnt2 - $plcnt1}] 4\n\n            # Average pipeline length is a valid positive number\n            assert_morethan $plavg2 0\n        }\n\n        test {stats: client processing metrics reset with CONFIG RESETSTAT} {\n            # Build up meaningful counter values\n            for {set i 0} {$i < 20} {incr i} { r ping }\n\n            set info_before [r info stats]\n            set proc_before [getInfoProperty $info_before total_client_processing_events]\n            set cycles_before [getInfoProperty $info_before eventloop_cycles_with_clients_processing]\n            set plsum_before [getInfoProperty $info_before avg_pipeline_length_sum]\n            set plcnt_before [getInfoProperty $info_before avg_pipeline_length_cnt]\n\n            # Verify counters are meaningfully large before resetting\n            assert_morethan $proc_before 10\n            assert_morethan $cycles_before 0\n            assert_morethan $plsum_before 10\n            assert_morethan $plcnt_before 10\n\n            r config resetstat\n\n            set info_after [r info stats]\n            set proc_after [getInfoProperty $info_after total_client_processing_events]\n            set cycles_after [getInfoProperty $info_after eventloop_cycles_with_clients_processing]\n            set plsum_after [getInfoProperty $info_after avg_pipeline_length_sum]\n            set plcnt_after [getInfoProperty $info_after avg_pipeline_length_cnt]\n\n            # Counters should be near zero (only RESETSTAT + INFO ran after reset)\n            assert_lessthan_equal $proc_after 3\n            assert_lessthan_equal $cycles_after 3\n            assert_lessthan_equal $plsum_after 3\n            
assert_lessthan_equal $plcnt_after 3\n        }\n    }\n}\n\nstart_server {tags {\"info\" \"external:skip\"} overrides {io-threads 4 io-threads-do-reads yes}} {\n    test {clients: active_clients with io-thread one-by-one commands} {\n        r config resetstat\n\n        set clients {}\n        set clients_num 16\n        for {set i 0} {$i < $clients_num} {incr i} {\n            lappend clients [redis_client]\n        }\n\n        # Run request/response (non-pipelined) traffic on many clients.\n        for {set round 0} {$round < 5} {incr round} {\n            set i 0\n            foreach c $clients {\n                $c set key:$round:$i value\n                incr i\n            }\n        }\n\n        # We are still within the 512ms active-client window.\n        set info [r info clients]\n        set ac [getInfoProperty $info active_clients]\n\n        foreach c $clients {\n            $c close\n        }\n\n        # The query client itself is active; additional active clients should\n        # also be counted. 
 If this is <= 1, IO-thread one-by-one traffic was\n        # likely missed by active-client accounting.\n        assert_morethan_equal $ac 2\n    }\n}\n\nstart_server {tags {\"info\" \"external:skip\"}} {\n    test {memory: database and pubsub overhead and rehashing dict count} {\n        r flushall\n\n        # Better not to set ht0_size to 4 since there is a probability that all\n        # keys will end up in the same bucket and rehashing will end instantly.\n        set ht0_size [expr 1 << 3]\n        # ht1 size is twice the size of ht0\n        set ht1_size [expr $ht0_size << 1]\n\n        populate [expr $ht0_size - 1]\n\n        # Verify rehashing is not ongoing\n        wait_for_condition 100 10 {\n            [dict get [r memory stats] db.dict.rehashing.count] == 0\n        } else {\n            fail \"Rehashing did not finish in time\"\n        }\n\n        # Verify the info reflects steady state\n        set info_mem [r info memory]\n        set mem_stats [r memory stats]\n        assert_equal [getInfoProperty $info_mem mem_overhead_db_hashtable_rehashing] {0}\n        set ptr_size [expr {[s arch_bits] == 32 ? 
4 : 8}]\n        assert_equal [dict get $mem_stats overhead.db.hashtable.lut] [expr $ht0_size * $ptr_size]\n        assert_equal [dict get $mem_stats overhead.db.hashtable.rehashing] {0}\n        assert_equal [dict get $mem_stats db.dict.rehashing.count] {0}\n\n        # Set 2 more keys to trigger rehashing\n        # get the info within a transaction to make sure the rehashing is not completed\n        r multi \n        r set this_will_reach_max_load_factor 1\n        r set this_must_be_rehashed 1\n        r info memory\n        r memory stats\n        set res [r exec]\n        set info_mem [lindex $res 2]\n        set mem_stats [lindex $res 3]\n\n        # Verify the info reflects rehashing state\n        assert_range [getInfoProperty $info_mem mem_overhead_db_hashtable_rehashing] 1 [expr $ht0_size * $ptr_size]\n        assert_equal [dict get $mem_stats overhead.db.hashtable.lut] [expr ($ht0_size + $ht1_size) * $ptr_size]\n        assert_equal [dict get $mem_stats overhead.db.hashtable.rehashing] [expr $ht0_size * $ptr_size]\n        assert_equal [dict get $mem_stats db.dict.rehashing.count] {1}\n    }\n\n    test {memory: used_memory_peak_time is updated when used_memory_peak is updated} {\n        r flushall\n\n        # Add a large string to trigger memory peak tracking\n        set time_before_add_large_str [clock seconds]\n        r set large_str [string repeat \"a\" 1000000]\n        assert {[s used_memory_peak_time] >= $time_before_add_large_str}\n\n        r del large_str\n\n        # Note: this info command must be called after the del operation to ensure\n        # the peak memory measurement isn't affected by the info command itself\n        # potentially increasing peak memory.\n        set peak_value [s used_memory_peak]\n\n        # Add a small string, which cannot exceed the previous peak value\n        r set small_str [string repeat \"a\" 1000]\n        assert {[s used_memory_peak] == $peak_value}\n    }\n}\n\nstart_cluster 1 0 {tags 
{external:skip cluster}} {\n    test \"Verify that LUT overhead is properly updated when dicts are emptied or reused (issue #13973)\" {\n        R 0 set k v ;# Make the db overhead displayed\n        set info_mem [r memory stats]\n        set overhead_main [dict get $info_mem db.0 overhead.hashtable.main]\n        set overhead_expires [dict get $info_mem db.0 overhead.hashtable.expires]\n        assert_range $overhead_main 1 5000\n        assert_range $overhead_expires 1 1000\n\n        # In cluster mode, we use KVSTORE_FREE_EMPTY_DICTS to ensure that dicts\n        # are freed when they are emptied. This test verifies that after a dict\n        # is cleared, the lut overhead is properly updated, preventing it from\n        # growing indefinitely.\n        for {set j 1} {$j <= 500} {incr j} {\n            R 0 set k v\n            R 0 del k\n        }\n        R 0 set k v ;# Make the db overhead displayed\n        set info_mem [r memory stats]\n        assert_equal [dict get $info_mem db.0 overhead.hashtable.main] $overhead_main\n        assert_equal [dict get $info_mem db.0 overhead.hashtable.expires] $overhead_expires\n    }\n}\n
  },
  {
    "path": "tests/unit/introspection-2.tcl",
    "content": "proc cmdstat {cmd} {\n    return [cmdrstat $cmd r]\n}\n\nproc getlru {key} {\n    set objinfo [r debug object $key]\n    foreach info $objinfo {\n        set kvinfo [split $info \":\"]\n        if {[string compare [lindex $kvinfo 0] \"lru\"] == 0} {\n            return [lindex $kvinfo 1]\n        }\n    }\n    fail \"Can't get LRU info with DEBUG OBJECT\"\n}\n\nstart_server {tags {\"introspection\"}} {\n    test {The microsecond part of the TIME command will not overflow} {\n        set now [r time]\n        set microseconds [lindex $now 1]\n        assert_morethan $microseconds 0\n        assert_lessthan $microseconds 1000000\n    }\n\n    test {TTL, TYPE and EXISTS do not alter the last access time of a key} {\n        r set foo bar\n        after 3000\n        r ttl foo\n        r type foo\n        r exists foo\n        assert {[r object idletime foo] >= 2}\n    }\n\n    test {TOUCH alters the last access time of a key} {\n        r set foo bar\n        after 3000\n        r touch foo\n        assert {[r object idletime foo] < 2}\n    }\n\n    test {Operations in no-touch mode do not alter the last access time of a key} {\n        r set foo bar\n        r client no-touch on\n        set oldlru [getlru foo]\n        after 1100\n        r get foo\n        set newlru [getlru foo]\n        assert_equal $newlru $oldlru\n        r client no-touch off\n        r get foo\n        set newlru [getlru foo]\n        assert_morethan $newlru $oldlru\n    } {} {needs:debug}\n\n    test {Operations in no-touch mode TOUCH alters the last access time of a key} {\n        r set foo bar\n        r client no-touch on\n        set oldlru [getlru foo]\n        after 1100\n        r touch foo\n        set newlru [getlru foo]\n        assert_morethan $newlru $oldlru\n    } {} {needs:debug}\n\n    test {Operations in no-touch mode TOUCH from script alters the last access time of a key} {\n        r set foo bar\n        r client no-touch on\n        set oldlru [getlru 
foo]\n        after 1100\n        assert_equal {1} [r eval \"return redis.call('touch', 'foo')\" 0]\n        set newlru [getlru foo]\n        assert_morethan $newlru $oldlru\n    } {} {needs:debug}\n\n    test {TOUCH returns the number of existing keys specified} {\n        r flushdb\n        r set key1{t} 1\n        r set key2{t} 2\n        r touch key0{t} key1{t} key2{t} key3{t}\n    } 2\n\n    test {command stats for GEOADD} {\n        r config resetstat\n        r GEOADD foo 0 0 bar\n        assert_match {*calls=1,*} [cmdstat geoadd]\n        assert_match {} [cmdstat zadd]\n    } {} {needs:config-resetstat}\n\n    test {errors stats for GEOADD} {\n        r config resetstat\n        # make sure the geo command will fail\n        r set foo 1\n        assert_error {WRONGTYPE Operation against a key holding the wrong kind of value*} {r GEOADD foo 0 0 bar}\n        assert_match {*calls=1*,rejected_calls=0,failed_calls=1*} [cmdstat geoadd]\n        assert_match {} [cmdstat zadd]\n    } {} {needs:config-resetstat}\n\n    test {command stats for EXPIRE} {\n        r config resetstat\n        r SET foo bar\n        r EXPIRE foo 0\n        assert_match {*calls=1,*} [cmdstat expire]\n        assert_match {} [cmdstat del]\n    } {} {needs:config-resetstat}\n\n    test {command stats for BRPOP} {\n        r config resetstat\n        r LPUSH list foo\n        r BRPOP list 0\n        assert_match {*calls=1,*} [cmdstat brpop]\n        assert_match {} [cmdstat rpop]\n    } {} {needs:config-resetstat}\n\n    test {command stats for MULTI} {\n        r config resetstat\n        r MULTI\n        r set foo{t} bar\n        r GEOADD foo2{t} 0 0 bar\n        r EXPIRE foo2{t} 0\n        r EXEC\n        assert_match {*calls=1,*} [cmdstat multi]\n        assert_match {*calls=1,*} [cmdstat exec]\n        assert_match {*calls=1,*} [cmdstat set]\n        assert_match {*calls=1,*} [cmdstat expire]\n        assert_match {*calls=1,*} [cmdstat geoadd]\n    } {} {needs:config-resetstat}\n\n    
test {command stats for scripts} {\n        r config resetstat\n        r set mykey myval\n        r eval {\n            redis.call('set', KEYS[1], 0)\n            redis.call('expire', KEYS[1], 0)\n            redis.call('geoadd', KEYS[1], 0, 0, \"bar\")\n        } 1 mykey\n        assert_match {*calls=1,*} [cmdstat eval]\n        assert_match {*calls=2,*} [cmdstat set]\n        assert_match {*calls=1,*} [cmdstat expire]\n        assert_match {*calls=1,*} [cmdstat geoadd]\n    } {} {needs:config-resetstat}\n\n    test {COMMAND COUNT get total number of Redis commands} {\n        assert_morethan [r command count] 0\n    }\n\n    test {COMMAND GETKEYS GET} {\n        assert_equal {key} [r command getkeys get key]\n    }\n\n    test {COMMAND GETKEYSANDFLAGS} {\n        assert_equal {{k1 {OW update}}} [r command getkeysandflags set k1 v1]\n        assert_equal {{k1 {OW update}} {k2 {OW update}}} [r command getkeysandflags mset k1 v1 k2 v2]\n        assert_equal {{k1 {RW access delete}} {k2 {RW insert}}} [r command getkeysandflags LMOVE k1 k2 left right]\n        assert_equal {{k1 {RO access}} {k2 {OW update}}} [r command getkeysandflags sort k1 store k2]\n        assert_equal {{k1 {RW update}}} [r command getkeysandflags set k1 v1 IFEQ v1]\n        assert_equal {{k1 {RW access update}}} [r command getkeysandflags set k1 v1 GET]\n        assert_equal {{k1 {RM delete}}} [r command getkeysandflags delex k1]\n        assert_equal {{k1 {RW delete}}} [r command getkeysandflags delex k1 ifeq v1]\n    }\n\n    test {COMMAND GETKEYSANDFLAGS invalid args} {\n        assert_error \"ERR Invalid arguments*\" {r command getkeysandflags ZINTERSTORE zz 1443677133621497600 asdf}\n    }\n\n    test {COMMAND GETKEYSANDFLAGS MSETEX} {\n        assert_equal {{k1 {OW update}}} [r command getkeysandflags msetex 1 k1 v1 ex 10]\n        assert_equal {{k1 {OW update}} {k2 {OW update}}} [r command getkeysandflags msetex 2 k1 v1 k2 v2 ex 10]\n        assert_equal {{k1 {OW update}} {k2 {OW 
update}} {k3 {OW update}}} [r command getkeysandflags msetex 3 k1 v1 k2 v2 k3 v3 ex 10]\n        assert_equal {{k1 {OW update}} {k2 {OW update}}} [r command getkeysandflags msetex 2 k1 v1 k2 v2 keepttl]\n        assert_equal {{k1 {OW update}} {k2 {OW update}}} [r command getkeysandflags msetex 2 k1 v1 k2 v2 ex 10 nx]\n    }\n\n    test {COMMAND GETKEYS MEMORY USAGE} {\n        assert_equal {key} [r command getkeys memory usage key]\n    }\n\n    test {COMMAND GETKEYS XGROUP} {\n        assert_equal {key} [r command getkeys xgroup create key groupname $]\n    }\n\n    test {COMMAND GETKEYS EVAL with keys} {\n        assert_equal {key} [r command getkeys eval \"return 1\" 1 key]\n    }\n\n    test {COMMAND GETKEYS EVAL without keys} {\n        assert_equal {} [r command getkeys eval \"return 1\" 0]\n    }\n\n    test {COMMAND GETKEYS LCS} {\n        assert_equal {key1 key2} [r command getkeys lcs key1 key2]\n    }\n\n    test {COMMAND GETKEYS PFMERGE with and without source keys} {\n        # dest + sources: both key specs yield keys\n        assert_equal {dest src1 src2} [r command getkeys PFMERGE dest src1 src2]\n\n        # dest only, no source keys: spec[1] yields empty range (last < first).\n        # Without pfmergeGetKeys this returned \"Invalid arguments\" because\n        # getKeysUsingKeySpecs treated the empty range as invalid_spec,\n        # discarding the dest key found by spec[0].\n        assert_equal {dest} [r command getkeys PFMERGE dest]\n    }\n\n    test {COMMAND GETKEYS MORE THAN 256 KEYS} {\n        set all_keys [list]\n        set numkeys 260\n        for {set i 1} {$i <= $numkeys} {incr i} {\n            lappend all_keys \"key$i\"\n        }\n        set all_keys_with_target [linsert $all_keys 0 target]\n        # we are using ZUNIONSTORE command since in order to reproduce allocation of a new buffer in getKeysPrepareResult\n        # when numkeys in result > 0\n        # we need a command that the final number of keys is not known in the 
first call to getKeysPrepareResult\n        # before the fix, in that case the data of the old buffer was not copied to the new result buffer,\n        # causing all previous keys (numkeys) data to be uninitialized\n        assert_equal $all_keys_with_target [r command getkeys ZUNIONSTORE target $numkeys {*}$all_keys]\n    }\n\n    test \"COMMAND LIST syntax error\" {\n        assert_error \"ERR syntax error*\" {r command list bad_arg}\n        assert_error \"ERR syntax error*\" {r command list filterby bad_arg}\n        assert_error \"ERR syntax error*\" {r command list filterby bad_arg bad_arg2}\n    }\n\n    test \"COMMAND LIST WITHOUT FILTERBY\" {\n        set commands [r command list]\n        assert_not_equal [lsearch $commands \"set\"] -1\n        assert_not_equal [lsearch $commands \"client|list\"] -1\n    }\n\n    test \"COMMAND LIST FILTERBY ACLCAT against non existing category\" {\n        assert_equal {} [r command list filterby aclcat non_existing_category]\n    }\n\n    test \"COMMAND LIST FILTERBY ACLCAT - list all commands/subcommands\" {\n        set commands [r command list filterby aclcat scripting]\n        assert_not_equal [lsearch $commands \"eval\"] -1\n        assert_not_equal [lsearch $commands \"script|kill\"] -1\n\n        # Negative check, a command that should not be here\n        assert_equal [lsearch $commands \"set\"] -1\n    }\n\n    test \"COMMAND LIST FILTERBY PATTERN - list all commands/subcommands\" {\n        # Exact command match.\n        assert_equal {set} [r command list filterby pattern set]\n        assert_equal {get} [r command list filterby pattern get]\n\n        # Return the parent command and all the subcommands below it.\n        set commands [r command list filterby pattern config*]\n        assert_not_equal [lsearch $commands \"config\"] -1\n        assert_not_equal [lsearch $commands \"config|get\"] -1\n\n        # We can filter subcommands under a parent command.\n        set commands [r command list filterby pattern 
config|*re*]\n        assert_not_equal [lsearch $commands \"config|resetstat\"] -1\n        assert_not_equal [lsearch $commands \"config|rewrite\"] -1\n\n        # We can filter subcommands across parent commands.\n        set commands [r command list filterby pattern cl*help]\n        assert_not_equal [lsearch $commands \"client|help\"] -1\n        assert_not_equal [lsearch $commands \"cluster|help\"] -1\n\n        # Negative check, command that doesn't exist.\n        assert_equal {} [r command list filterby pattern non_exists]\n        assert_equal {} [r command list filterby pattern non_exists*]\n    }\n\n    test \"COMMAND LIST FILTERBY MODULE against non existing module\" {\n        # This should be empty, the real one is in subcommands.tcl\n        assert_equal {} [r command list filterby module non_existing_module]\n    }\n\n    test {COMMAND INFO of invalid subcommands} {\n        assert_equal {{}} [r command info get|key]\n        assert_equal {{}} [r command info config|get|key]\n    }\n\n    foreach cmd {SET GET MSET BITFIELD LMOVE LPOP BLPOP PING MEMORY MEMORY|USAGE RENAME GEORADIUS_RO} {\n        test \"$cmd command will not be marked with movablekeys\" {\n            set info [lindex [r command info $cmd] 0]\n            assert_no_match {*movablekeys*} [lindex $info 2]\n        }\n    }\n\n    foreach cmd {ZUNIONSTORE XREAD EVAL SORT SORT_RO MIGRATE GEORADIUS} {\n        test \"$cmd command is marked with movablekeys\" {\n            set info [lindex [r command info $cmd] 0]\n            assert_match {*movablekeys*} [lindex $info 2]\n        }\n    }\n\n}\n"
  },
  {
    "path": "tests/unit/introspection.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of the Redis Source Available License 2.0\n# (RSALv2) or the Server Side Public License v1 (SSPLv1).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nstart_server {tags {\"introspection\"}} {\n    test \"PING\" {\n        assert_equal {PONG} [r ping]\n        assert_equal {redis} [r ping redis]\n        assert_error {*wrong number of arguments for 'ping' command} {r ping hello redis}\n    }\n\n    test {CLIENT LIST} {\n        set client_list [r client list]\n        if {[lindex [r config get io-threads] 1] == 1} {\n            assert_match {id=* addr=*:* laddr=*:* fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=26 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|list user=* redir=-1 resp=* lib-name=* lib-ver=* io-thread=* tot-net-in=* tot-net-out=* tot-cmds=* read-events=* avg-pipeline-len-sum=* avg-pipeline-len-cnt=*} $client_list\n        } else {\n            assert_match {id=* addr=*:* laddr=*:* fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|list user=* redir=-1 resp=* lib-name=* lib-ver=* io-thread=* tot-net-in=* tot-net-out=* tot-cmds=* read-events=* avg-pipeline-len-sum=* avg-pipeline-len-cnt=*} $client_list\n        }\n    }\n\n    test {CLIENT LIST with IDs} {\n        set myid [r client id]\n        set cl [split [r client list id $myid] \"\\r\\n\"]\n        assert_match \"id=$myid * cmd=client|list *\" [lindex $cl 0]\n    }\n\n    test {CLIENT INFO} {\n        set client [r client info]\n        if {[lindex [r config get io-threads] 1] == 1} {\n            assert_match {id=* addr=*:* laddr=*:* 
 fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=26 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|info user=* redir=-1 resp=* lib-name=* lib-ver=* io-thread=* tot-net-in=* tot-net-out=* tot-cmds=* read-events=* avg-pipeline-len-sum=* avg-pipeline-len-cnt=*} $client\n        } else {\n            assert_match {id=* addr=*:* laddr=*:* fd=* name=* age=* idle=* flags=N db=* sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=* argv-mem=* multi-mem=0 rbs=* rbp=* obl=0 oll=0 omem=0 tot-mem=* events=r cmd=client|info user=* redir=-1 resp=* lib-name=* lib-ver=* io-thread=* tot-net-in=* tot-net-out=* tot-cmds=* read-events=* avg-pipeline-len-sum=* avg-pipeline-len-cnt=*} $client\n        }\n    } \n\n    proc get_field_in_client_info {info field} {\n        set info [string trim $info]\n        foreach item [split $info \" \"] {\n            set kv [split $item \"=\"]\n            set k [lindex $kv 0]\n            if {[string match $field $k]} {\n                return [lindex $kv 1]   \n            }\n        }\n        return \"\"\n    }\n\n    proc get_field_in_client_list {id client_list field} {\n        set list [split $client_list \"\\r\\n\"]\n        foreach info $list {\n            if {[string match \"id=$id *\" $info] } {\n                return [get_field_in_client_info $info $field]\n            }\n        }\n        return \"\"\n    }\n\n    test {CLIENT INFO input/output/cmds-processed stats} {\n        set info1 [r client info]\n        set input1 [get_field_in_client_info $info1 \"tot-net-in\"]\n        set output1 [get_field_in_client_info $info1 \"tot-net-out\"]\n        set cmd1 [get_field_in_client_info $info1 \"tot-cmds\"]\n\n        # Run a command by that client and test if the stats change correctly\n        set info2 [r client info]\n        set input2 [get_field_in_client_info $info2 \"tot-net-in\"]\n        set output2 [get_field_in_client_info $info2 
\"tot-net-out\"]\n        set cmd2 [get_field_in_client_info $info2 \"tot-cmds\"]\n\n        # NOTE if CLIENT INFO changes its stats the output_bytes here and in the\n        # other related tests will need to be updated.\n        set input_bytes 26 ; # CLIENT INFO request\n        set output_bytes 300 ; # CLIENT INFO result\n        set cmds_processed 1 ; # processed the command CLIENT INFO\n        assert_equal [expr $input1+$input_bytes] $input2\n        assert {[expr $output1+$output_bytes] < $output2}\n        assert_equal [expr $cmd1+$cmds_processed] $cmd2\n    }\n\n    test {CLIENT INFO input/output/cmds-processed stats for blocking command} {\n        r del mylist\n        set rd [redis_deferring_client]\n        $rd client id\n        set rd_id [$rd read]\n \n        set info_list [r client list]\n        set input1 [get_field_in_client_list $rd_id $info_list \"tot-net-in\"]\n        set output1 [get_field_in_client_list $rd_id $info_list \"tot-net-out\"]\n        set cmd1 [get_field_in_client_list $rd_id $info_list \"tot-cmds\"]\n        $rd blpop mylist 0\n\n        # Make sure to wait for the $rd client to be blocked\n        wait_for_blocked_client\n\n        # Check if input stats have changed for $rd. Since the command is blocking\n        # and has not been unblocked yet we expect no change in output/cmds-processed\n        # stats.\n        set info_list [r client list]\n        set input2 [get_field_in_client_list $rd_id $info_list \"tot-net-in\"]\n        set output2 [get_field_in_client_list $rd_id $info_list \"tot-net-out\"]\n        set cmd2 [get_field_in_client_list $rd_id $info_list \"tot-cmds\"]\n        assert_equal [expr $input1+34] $input2\n        assert_equal $output1 $output2\n        assert_equal $cmd1 $cmd2\n\n        # Unblock the $rd client (which will send a reply and thus update output\n        # and cmd-processed stats).\n        r lpush mylist a\n\n        # Note that the per-client stats are from the POV of the server. 
The\n        # deferred client may have not read the response yet, but the stats\n        # are still updated.\n        set info_list [r client list]\n        set input3 [get_field_in_client_list $rd_id $info_list \"tot-net-in\"]\n        set output3 [get_field_in_client_list $rd_id $info_list \"tot-net-out\"]\n        set cmd3 [get_field_in_client_list $rd_id $info_list \"tot-cmds\"]\n        assert_equal $input2 $input3\n        assert_equal [expr $output2+23] $output3\n        assert_equal [expr $cmd2+1] $cmd3\n\n        $rd close\n    }\n\n    test {CLIENT INFO cmds-processed stats for recursive command} {\n        set info [r client info]\n        set tot_cmd_before [get_field_in_client_info $info \"tot-cmds\"]\n        r eval \"redis.call('ping')\" 0\n        set info [r client info]\n        set tot_cmd_after [get_field_in_client_info $info \"tot-cmds\"]\n\n        # We executed 3 commands - EVAL, which in turn executed PING and finally CLIENT INFO\n        assert_equal [expr $tot_cmd_before+3] $tot_cmd_after\n    }\n\n    test {CLIENT INFO read-events and pipeline-length stats} {\n        # Use a fresh client so counters start from a known state\n        set r2 [redis_client]\n\n        # Baseline: the redis_client constructor issues some commands internally,\n        # so capture the current state rather than assuming zeros.\n        set info1 [$r2 client info]\n        set re1 [get_field_in_client_info $info1 \"read-events\"]\n        set plsum1 [get_field_in_client_info $info1 \"avg-pipeline-len-sum\"]\n        set plcnt1 [get_field_in_client_info $info1 \"avg-pipeline-len-cnt\"]\n\n        # Send 3 sequential (non-pipelined) commands\n        $r2 ping\n        $r2 ping\n        $r2 ping\n\n        set info2 [$r2 client info]\n        set re2 [get_field_in_client_info $info2 \"read-events\"]\n        set plsum2 [get_field_in_client_info $info2 \"avg-pipeline-len-sum\"]\n        set plcnt2 [get_field_in_client_info $info2 \"avg-pipeline-len-cnt\"]\n\n     
   # read-events should have increased (3 PINGs + the CLIENT INFO itself)\n        assert_morethan_equal $re2 [expr {$re1 + 4}]\n\n        # For sequential commands each batch has exactly 1 command,\n        # so delta of sum should equal delta of cnt (average pipeline length = 1.0)\n        set delta_sum [expr {$plsum2 - $plsum1}]\n        set delta_cnt [expr {$plcnt2 - $plcnt1}]\n        assert_morethan $delta_sum 0\n        assert_equal $delta_sum $delta_cnt\n\n        $r2 close\n    }\n\n    test {CLIENT INFO pipeline-length stats for pipelined commands} {\n        set rd [redis_deferring_client]\n        $rd client id\n        set rd_id [$rd read]\n\n        # Capture baseline from CLIENT LIST\n        set info_list [r client list]\n        set plsum1 [get_field_in_client_list $rd_id $info_list \"avg-pipeline-len-sum\"]\n        set plcnt1 [get_field_in_client_list $rd_id $info_list \"avg-pipeline-len-cnt\"]\n\n        # Send 5 pipelined commands without reading replies\n        for {set i 0} {$i < 5} {incr i} {\n            $rd ping\n        }\n\n        # Read all 5 replies to ensure they have been processed\n        for {set i 0} {$i < 5} {incr i} {\n            $rd read\n        }\n\n        set info_list [r client list]\n        set plsum2 [get_field_in_client_list $rd_id $info_list \"avg-pipeline-len-sum\"]\n        set plcnt2 [get_field_in_client_list $rd_id $info_list \"avg-pipeline-len-cnt\"]\n\n        # All 5 commands must have been counted in the sum\n        set delta_sum [expr {$plsum2 - $plsum1}]\n        set delta_cnt [expr {$plcnt2 - $plcnt1}]\n        assert_equal $delta_sum 5\n        assert_morethan $delta_cnt 0\n        assert_morethan_equal $delta_sum $delta_cnt\n\n        $rd close\n    }\n\n    test {CLIENT KILL with illegal arguments} {\n        assert_error \"ERR wrong number of arguments for 'client|kill' command\" {r client kill}\n        assert_error \"ERR syntax error*\" {r client kill id 10 wrong_arg}\n\n        assert_error 
\"ERR *greater than 0*\" {r client kill id str}\n        assert_error \"ERR *greater than 0*\" {r client kill id -1}\n        assert_error \"ERR *greater than 0*\" {r client kill id 0}\n\n        assert_error \"ERR Unknown client type*\" {r client kill type wrong_type}\n\n        assert_error \"ERR No such user*\" {r client kill user wrong_user}\n\n        assert_error \"ERR syntax error*\" {r client kill skipme yes_or_no}\n\n        assert_error \"ERR *not an integer or out of range*\" {r client kill maxage str}\n        assert_error \"ERR *not an integer or out of range*\" {r client kill maxage 9999999999999999999}\n        assert_error \"ERR *greater than 0*\" {r client kill maxage -1}\n    }\n\n    test {CLIENT KILL maxAGE will kill old clients} {\n        # This test is very likely to produce a false positive if the execution\n        # time takes longer than the max age, so give it a few more chances. Go with\n        # 3 retries of increasing sleep_time, i.e. start with 2s, then go 4s, 8s.\n        set sleep_time 2\n        for {set i 0} {$i < 3} {incr i} {\n            set rd1 [redis_deferring_client]\n            r debug sleep $sleep_time\n            set rd2 [redis_deferring_client]\n            r acl setuser dummy on nopass +ping\n            $rd1 auth dummy \"\"\n            $rd1 read\n            $rd2 auth dummy \"\"\n            $rd2 read\n\n            # Should kill rd1 but not rd2\n            set max_age [expr $sleep_time / 2]\n            set res [r client kill user dummy maxage $max_age]\n            if {$res == 1} {\n                break\n            } else {\n                # Clean up and try again next time\n                set sleep_time [expr $sleep_time * 2]\n                $rd1 close\n                $rd2 close\n            }\n\n        } ;# for\n\n        if {$::verbose} { puts \"CLIENT KILL maxAGE will kill old clients test attempts: $i\" }\n        assert_equal $res 1\n\n        # rd2 should still be connected\n        $rd2 ping\n        
assert_equal \"PONG\" [$rd2 read]\n\n        $rd1 close\n        $rd2 close\n    } {0} {\"needs:debug\"}\n\n    test {CLIENT KILL SKIPME YES/NO will kill all clients} {\n        # Kill all clients except `me`\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set connected_clients [s connected_clients]\n        assert {$connected_clients >= 3}\n        set res [r client kill skipme yes]\n        assert {$res == $connected_clients - 1}\n        wait_for_condition 1000 10 {\n            [s connected_clients] eq 1\n        } else {\n            fail \"Can't kill all clients except the current one\"\n        }\n\n        # Kill all clients, including `me`\n        set rd3 [redis_deferring_client]\n        set rd4 [redis_deferring_client]\n        set connected_clients [s connected_clients]\n        assert {$connected_clients == 3}\n        set res [r client kill skipme no]\n        assert_equal $res $connected_clients\n\n        # After killing `me`, the first ping will throw an error\n        assert_error \"*I/O error*\" {r ping}\n        assert_equal \"PONG\" [r ping]\n\n        $rd1 close\n        $rd2 close\n        $rd3 close\n        $rd4 close\n    }\n\n    test {CLIENT command unhappy path coverage} {\n        assert_error \"ERR*wrong number of arguments*\" {r client caching}\n        assert_error \"ERR*when the client is in tracking mode*\" {r client caching maybe}\n        assert_error \"ERR*syntax*\" {r client no-evict wrongInput}\n        assert_error \"ERR*syntax*\" {r client reply wrongInput}\n        assert_error \"ERR*syntax*\" {r client tracking wrongInput}\n        assert_error \"ERR*syntax*\" {r client tracking on wrongInput}\n        assert_error \"ERR*when the client is in tracking mode*\" {r client caching off}\n        assert_error \"ERR*when the client is in tracking mode*\" {r client caching on}\n\n        r CLIENT TRACKING ON optout\n        assert_error \"ERR*syntax*\" {r client caching on}\n\n        
r CLIENT TRACKING off optout\n        assert_error \"ERR*when the client is in tracking mode*\" {r client caching on}\n\n        assert_error \"ERR*No such*\" {r client kill 000.123.321.567:0000}\n        assert_error \"ERR*No such*\" {r client kill 127.0.0.1:}\n\n        assert_error \"ERR*timeout is not an integer*\" {r client pause abc}\n        assert_error \"ERR timeout is negative\" {r client pause -1}\n    }\n\n    test \"CLIENT KILL close the client connection during bgsave\" {\n        # Start a slow bgsave, trigger an active fork.\n        r flushall\n        r set k v\n        r config set rdb-key-save-delay 10000000\n        r bgsave\n        wait_for_condition 1000 10 {\n            [s rdb_bgsave_in_progress] eq 1\n        } else {\n            fail \"bgsave did not start in time\"\n        }\n\n        # Kill (close) the connection\n        r client kill skipme no\n\n        # In the past, client connections needed to wait for bgsave\n        # to end before actually closing, now they are closed immediately.\n        assert_error \"*I/O error*\" {r ping} ;# get the error very quickly\n        assert_equal \"PONG\" [r ping]\n\n        # Make sure the bgsave is still in progress\n        assert_equal [s rdb_bgsave_in_progress] 1\n\n        # Stop the child before we proceed to the next test\n        r config set rdb-key-save-delay 0\n        r flushall\n        wait_for_condition 1000 10 {\n            [s rdb_bgsave_in_progress] eq 0\n        } else {\n            fail \"bgsave did not stop in time\"\n        }\n    } {} {needs:save}\n\n    test \"CLIENT REPLY OFF/ON: disable all commands reply\" {\n        set rd [redis_deferring_client]\n\n        # These replies were silenced.\n        $rd client reply off\n        $rd ping pong\n        $rd ping pong2\n\n        $rd client reply on\n        assert_equal {OK} [$rd read]\n        $rd ping pong3\n        assert_equal {pong3} [$rd read]\n\n        $rd close\n    }\n\n    test \"CLIENT REPLY SKIP: skip 
the next command reply\" {\n        set rd [redis_deferring_client]\n\n        # The first pong reply was silenced.\n        $rd client reply skip\n        $rd ping pong\n\n        $rd ping pong2\n        assert_equal {pong2} [$rd read]\n\n        $rd close\n    }\n\n    test \"CLIENT REPLY ON: unset SKIP flag\" {\n        set rd [redis_deferring_client]\n\n        $rd client reply skip\n        $rd client reply on\n        assert_equal {OK} [$rd read] ;# OK from CLIENT REPLY ON command\n\n        $rd ping\n        assert_equal {PONG} [$rd read]\n\n        $rd close\n    }\n\n    test {MONITOR can log executed commands} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        assert_match {*OK*} [$rd read]\n        r set foo bar\n        r get foo\n        set res [list [$rd read] [$rd read]]\n        $rd close\n        set _ $res\n    } {*\"set\" \"foo\"*\"get\" \"foo\"*}\n\n    test {MONITOR can log commands issued by the scripting engine} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ;# Discard the OK\n        r eval {redis.call('set',KEYS[1],ARGV[1])} 1 foo bar\n        assert_match {*eval*} [$rd read]\n        assert_match {*lua*\"set\"*\"foo\"*\"bar\"*} [$rd read]\n        $rd close\n    }\n\n    test {MONITOR can log commands issued by functions} {\n        r function load replace {#!lua name=test\n            redis.register_function('test', function() return redis.call('set', 'foo', 'bar') end)\n        }\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ;# Discard the OK\n        r fcall test 0\n        assert_match {*fcall*test*} [$rd read]\n        assert_match {*lua*\"set\"*\"foo\"*\"bar\"*} [$rd read]\n        $rd close\n    }\n\n    test {MONITOR supports redacting command arguments} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ; # Discard the OK\n\n        r migrate [srv 0 host] [srv 0 port] key 9 5000\n        r migrate [srv 0 host] 
[srv 0 port] key 9 5000 AUTH user\n        r migrate [srv 0 host] [srv 0 port] key 9 5000 AUTH2 user password\n        catch {r auth not-real} _\n        catch {r auth not-real not-a-password} _\n        \n        assert_match {*\"key\"*\"9\"*\"5000\"*} [$rd read]\n        assert_match {*\"key\"*\"9\"*\"5000\"*\"(redacted)\"*} [$rd read]\n        assert_match {*\"key\"*\"9\"*\"5000\"*\"(redacted)\"*\"(redacted)\"*} [$rd read]\n        assert_match {*\"auth\"*\"(redacted)\"*} [$rd read]\n        assert_match {*\"auth\"*\"(redacted)\"*\"(redacted)\"*} [$rd read]\n\n        foreach resp {3 2} {\n            if {[lsearch $::denytags \"resp3\"] >= 0} {\n                if {$resp == 3} {continue}\n            } elseif {$::force_resp3} {\n                if {$resp == 2} {continue}\n            }\n            catch {r hello $resp AUTH not-real not-a-password} _\n            assert_match \"*\\\"hello\\\"*\\\"$resp\\\"*\\\"AUTH\\\"*\\\"(redacted)\\\"*\\\"(redacted)\\\"*\" [$rd read]\n        }\n        $rd close\n    } {0} {needs:repl}\n\n    test {MONITOR correctly handles multi-exec cases} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ; # Discard the OK\n\n        # Make sure multi-exec statements are ordered\n        # correctly\n        r multi\n        r set foo bar\n        r exec\n        assert_match {*\"multi\"*} [$rd read]\n        assert_match {*\"set\"*\"foo\"*\"bar\"*} [$rd read]\n        assert_match {*\"exec\"*} [$rd read]\n\n        # Make sure we close multi statements on errors\n        r multi\n        catch {r syntax error} _\n        catch {r exec} _\n\n        assert_match {*\"multi\"*} [$rd read]\n        assert_match {*\"exec\"*} [$rd read]\n\n        $rd close\n    }\n    \n    test {MONITOR log blocked command only once} {\n        \n        # need to reconnect in order to reset the clients state\n        reconnect\n        \n        set rd [redis_deferring_client]\n        set bc [redis_deferring_client]\n        
r del mylist\n        \n        $rd monitor\n        $rd read ; # Discard the OK\n        \n        $bc blpop mylist 0\n        # make sure the blpop arrives first\n        $bc flush\n        after 100\n        wait_for_blocked_clients_count 1\n        r lpush mylist 1\n        wait_for_blocked_clients_count 0\n        r lpush mylist 2\n        \n        # we expect to see the blpop on the monitor first\n        assert_match {*\"blpop\"*\"mylist\"*\"0\"*} [$rd read]\n        \n        # we scan out all the info commands on the monitor\n        set monitor_output [$rd read]\n        while { [string match {*\"info\"*} $monitor_output] } {\n            set monitor_output [$rd read]\n        }\n        \n        # we expect to locate the lpush right when the client was unblocked\n        assert_match {*\"lpush\"*\"mylist\"*\"1\"*} $monitor_output\n        \n        # we scan out all the info commands\n        set monitor_output [$rd read]\n        while { [string match {*\"info\"*} $monitor_output] } {\n            set monitor_output [$rd read]\n        }\n        \n        # we expect to see the next lpush and not duplicate blpop command\n        assert_match {*\"lpush\"*\"mylist\"*\"2\"*} $monitor_output\n        \n        $rd close\n        $bc close\n    }\n\n    test {CLIENT GETNAME should return NIL if name is not assigned} {\n        r client getname\n    } {}\n\n    test {CLIENT GETNAME check if name set correctly} {\n        r client setname testName\n        r client getName\n    } {testName}\n\n    test {CLIENT LIST shows empty fields for unassigned names} {\n        r client list\n    } {*name= *}\n\n    test {CLIENT SETNAME does not accept spaces} {\n        catch {r client setname \"foo bar\"} e\n        set e\n    } {ERR*}\n\n    test {CLIENT SETNAME can assign a name to this connection} {\n        assert_equal [r client setname myname] {OK}\n        r client list\n    } {*name=myname*}\n\n    test {CLIENT SETNAME can change the name of an existing 
connection} {\n        assert_equal [r client setname someothername] {OK}\n        r client list\n    } {*name=someothername*}\n\n    test {After CLIENT SETNAME, connection can still be closed} {\n        set rd [redis_deferring_client]\n        $rd client setname foobar\n        assert_equal [$rd read] \"OK\"\n        assert_match {*foobar*} [r client list]\n        $rd close\n        # Now the client should no longer be listed\n        wait_for_condition 50 100 {\n            [string match {*foobar*} [r client list]] == 0\n        } else {\n            fail \"Client still listed in CLIENT LIST after SETNAME.\"\n        }\n    }\n\n    test {CLIENT SETINFO can set a library name to this connection} {\n        r CLIENT SETINFO lib-name redis.py\n        r CLIENT SETINFO lib-ver 1.2.3\n        r client info\n    } {*lib-name=redis.py lib-ver=1.2.3*}\n\n    test {CLIENT SETINFO invalid args} {\n        assert_error {*wrong number of arguments*} {r CLIENT SETINFO lib-name}\n        assert_error {*cannot contain spaces*} {r CLIENT SETINFO lib-name \"redis py\"}\n        assert_error {*newlines*} {r CLIENT SETINFO lib-name \"redis.py\\n\"}\n        assert_error {*Unrecognized*} {r CLIENT SETINFO badger hamster}\n        # test that all of these didn't affect the previously set values\n        r client info\n    } {*lib-name=redis.py lib-ver=1.2.3*}\n\n    test {RESET does NOT clean library name} {\n        r reset\n        r client info\n    } {*lib-name=redis.py*} {needs:reset}\n\n    test {CLIENT SETINFO can clear library name} {\n        r CLIENT SETINFO lib-name \"\"\n        r client info\n    } {*lib-name= *}\n\n    test {CONFIG save params special case handled properly} {\n        # No \"save\" keyword - defaults should apply\n        start_server {config \"minimal.conf\"} {\n            assert_match [r config get save] {save {3600 1 300 100 60 10000}}\n        }\n\n        # First \"save\" keyword overrides hard coded defaults\n        start_server {config 
\"minimal.conf\" overrides {save {100 100}}} {\n            # Defaults\n            assert_match [r config get save] {save {100 100}}\n        }\n\n        # First \"save\" keyword appends default from config file\n        start_server {config \"default.conf\" overrides {save {900 1}} args {--save 100 100}} {\n            assert_match [r config get save] {save {900 1 100 100}}\n        }\n\n        # Empty \"save\" keyword resets all\n        start_server {config \"default.conf\" overrides {save {900 1}} args {--save {}}} {\n            assert_match [r config get save] {save {}}\n        }\n    } {} {external:skip}\n\n    test {CONFIG sanity} {\n        # Do CONFIG GET, CONFIG SET and then CONFIG GET again\n        # Skip immutable configs, one with no get, and other complicated configs\n        set skip_configs {\n            rdbchecksum\n            daemonize\n            tcp-backlog\n            always-show-logo\n            syslog-enabled\n            cluster-enabled\n            disable-thp\n            aclfile\n            unixsocket\n            pidfile\n            syslog-ident\n            appendfilename\n            appenddirname\n            supervised\n            syslog-facility\n            databases\n            io-threads\n            logfile\n            unixsocketperm\n            replicaof\n            slaveof\n            requirepass\n            server-cpulist\n            bio-cpulist\n            aof-rewrite-cpulist\n            bgsave-cpulist\n            server_cpulist\n            bio_cpulist\n            aof_rewrite_cpulist\n            bgsave_cpulist\n            set-proc-title\n            cluster-config-file\n            cluster-port\n            oom-score-adj\n            oom-score-adj-values\n            enable-protected-configs\n            enable-debug-command\n            enable-module-command\n            dbfilename\n            logfile\n            dir\n            socket-mark-id\n            req-res-logfile\n            
client-default-resp\n            vset-force-single-threaded-execution\n        }\n\n        if {!$::tls} {\n            append skip_configs {\n                tls-prefer-server-ciphers\n                tls-session-cache-timeout\n                tls-session-cache-size\n                tls-session-caching\n                tls-cert-file\n                tls-key-file\n                tls-client-cert-file\n                tls-client-key-file\n                tls-dh-params-file\n                tls-ca-cert-file\n                tls-ca-cert-dir\n                tls-protocols\n                tls-ciphers\n                tls-ciphersuites\n                tls-port\n            }\n        }\n\n        set configs {}\n        foreach {k v} [r config get *] {\n            if {[lsearch $skip_configs $k] != -1} {\n                continue\n            }\n            dict set configs $k $v\n            # try to set the config to the same value it already has\n            r config set $k $v\n        }\n\n        set newconfigs {}\n        foreach {k v} [r config get *] {\n            if {[lsearch $skip_configs $k] != -1} {\n                continue\n            }\n            dict set newconfigs $k $v\n        }\n\n        dict for {k v} $configs {\n            set vv [dict get $newconfigs $k]\n            if {$v != $vv} {\n                fail \"config $k mismatch, expecting $v but got $vv\"\n            }\n\n        }\n    }\n\n    # Do a force-all config rewrite and make sure we're able to parse\n    # it.\n    test {CONFIG REWRITE sanity} {\n        # Capture state of config before\n        set configs {}\n        foreach {k v} [r config get *] {\n            dict set configs $k $v\n        }\n\n        # Rewrite entire configuration, restart and confirm the\n        # server is able to parse it and start.\n        assert_equal [r debug config-rewrite-force-all] \"OK\"\n        restart_server 0 true false\n        wait_done_loading r\n\n        # Verify no changes were 
introduced\n        dict for {k v} $configs {\n            assert_equal $v [lindex [r config get $k] 1]\n        }\n    } {} {external:skip}\n\n    test {CONFIG REWRITE handles save and shutdown properly} {\n        r config set save \"3600 1 300 100 60 10000\"\n        r config set shutdown-on-sigterm \"nosave now\"\n        r config set shutdown-on-sigint \"save\"\n        r config rewrite\n        restart_server 0 true false\n        assert_equal [r config get save] {save {3600 1 300 100 60 10000}}\n        assert_equal [r config get shutdown-on-sigterm] {shutdown-on-sigterm {nosave now}}\n        assert_equal [r config get shutdown-on-sigint] {shutdown-on-sigint save}\n\n        r config set save \"\"\n        r config set shutdown-on-sigterm \"default\"\n        r config rewrite\n        restart_server 0 true false\n        assert_equal [r config get save] {save {}}\n        assert_equal [r config get shutdown-on-sigterm] {shutdown-on-sigterm default}\n\n        start_server {config \"minimal.conf\"} {\n            assert_equal [r config get save] {save {3600 1 300 100 60 10000}}\n            r config set save \"\"\n            r config rewrite\n            restart_server 0 true false\n            assert_equal [r config get save] {save {}}\n        }\n    } {} {external:skip}\n    \n    test {CONFIG SET with multiple args} {\n        set some_configs {maxmemory 10000001 repl-backlog-size 10000002 save {3000 5}}\n\n        # Backup\n        set backups {}\n        foreach c [dict keys $some_configs] {\n            lappend backups $c [lindex [r config get $c] 1]\n        }\n\n        # Multi config set and verify\n        assert_equal [eval \"r config set $some_configs\"] \"OK\"\n        dict for {c val} $some_configs {\n            assert_equal [lindex [r config get $c] 1] $val\n        }\n\n        # Restore backup\n        assert_equal [eval \"r config set $backups\"] \"OK\"\n    }\n\n    test {CONFIG SET rollback on set error} {\n        # This test passes an invalid percent value to maxmemory-clients which should cause an\n        # input verification failure during the \"set\" phase before trying to apply the \n        # configuration. We want to make sure the correct failure happens and everything\n        # is rolled back.\n        # backup maxmemory config\n        set mm_backup [lindex [r config get maxmemory] 1]\n        set mmc_backup [lindex [r config get maxmemory-clients] 1]\n        set qbl_backup [lindex [r config get client-query-buffer-limit] 1]\n        # Set some value to maxmemory\n        assert_equal [r config set maxmemory 10000002] \"OK\"\n        # Set another value to maxmemory together with another invalid config\n        assert_error \"ERR CONFIG SET failed (possibly related to argument 'maxmemory-clients') - percentage argument must be less or equal to 100\" {\n            r config set maxmemory 10000001 maxmemory-clients 200% client-query-buffer-limit invalid\n        }\n        # Validate we rolled back to original values\n        assert_equal [lindex [r config get maxmemory] 1] 10000002\n        assert_equal [lindex [r config get maxmemory-clients] 1] $mmc_backup\n        assert_equal [lindex [r config get client-query-buffer-limit] 1] $qbl_backup\n        # Make sure we revert back to the previous maxmemory\n        assert_equal [r config set maxmemory $mm_backup] \"OK\"\n    }\n\n    test {CONFIG SET rollback on apply error} {\n        # This test tries to configure a used port number in redis. This is expected\n        # to pass the `CONFIG SET` validity checking implementation but fail on \n        # actual \"apply\" of the setting. This will validate that after an \"apply\"\n        # failure we roll back to the previous values.\n        proc dummy_accept {chan addr port} {}\n\n        set some_configs {maxmemory 10000001 port 0 client-query-buffer-limit 10m}\n\n        # On Linux we also set the oom score adj which has an apply function. 
This is\n        # used to verify that even successful applies are rolled back if some other\n        # config's apply fails.\n        set oom_adj_avail [expr {!$::external && [exec uname] == \"Linux\"}]\n        if {$oom_adj_avail} {\n            proc get_oom_score_adj {} {\n                set pid [srv 0 pid]\n                set fd [open \"/proc/$pid/oom_score_adj\" \"r\"]\n                set val [gets $fd]\n                close $fd\n                return $val\n            }\n            set some_configs [linsert $some_configs 0 oom-score-adj yes oom-score-adj-values {1 1 1}]\n            set read_oom_adj [get_oom_score_adj]\n        }\n\n        # Backup\n        set backups {}\n        foreach c [dict keys $some_configs] {\n            lappend backups $c [lindex [r config get $c] 1]\n        }\n\n        set used_port [find_available_port $::baseport $::portcount]\n        dict set some_configs port $used_port\n\n        # Run a dummy server on used_port so we know we can't configure redis to \n        # use it. 
It's ok for this to fail because that means used_port is invalid \n        # anyway\n        catch {set sockfd [socket -server dummy_accept -myaddr 127.0.0.1 $used_port]} e\n        if {$::verbose} { puts \"dummy_accept: $e\" }\n\n        # Try to listen on the used port, pass some more configs to make sure the\n        # returned failure message is for the first bad config and everything is rolled back.\n        assert_error \"ERR CONFIG SET failed (possibly related to argument 'port') - Unable to listen on this port*\" {\n            eval \"r config set $some_configs\"\n        }\n\n        # Make sure we reverted back to previous configs\n        dict for {conf val} $backups {\n            assert_equal [lindex [r config get $conf] 1] $val\n        }\n\n        if {$oom_adj_avail} {\n            assert_equal [get_oom_score_adj] $read_oom_adj\n        }\n\n        # Make sure we can still communicate with the server (on the original port)\n        set r1 [redis_client]\n        assert_equal [$r1 ping] \"PONG\"\n        $r1 close\n        close $sockfd\n    }\n\n    test {CONFIG SET duplicate configs} {\n        assert_error \"ERR *duplicate*\" {r config set maxmemory 10000001 maxmemory 10000002}\n    }\n\n    test {CONFIG SET set immutable} {\n        assert_error \"ERR *immutable*\" {r config set daemonize yes}\n    }\n\n    test {CONFIG GET hidden configs} {\n        set hidden_config \"key-load-delay\"\n\n        # When we use a pattern we shouldn't get the hidden config\n        assert {![dict exists [r config get *] $hidden_config]}\n\n        # When we explicitly request the hidden config we should get it\n        assert {[dict exists [r config get $hidden_config] \"$hidden_config\"]}\n    }\n\n    test {CONFIG GET multiple args} {\n        set res [r config get maxmemory maxmemory* bind *of]\n        \n        # Verify there are no duplicates in the result\n        assert_equal [expr [llength [dict keys $res]]*2] [llength $res]\n        \n        # Verify 
we got both name and alias in result\n        assert {[dict exists $res slaveof] && [dict exists $res replicaof]}  \n\n        # Verify pattern found multiple maxmemory* configs\n        assert {[dict exists $res maxmemory] && [dict exists $res maxmemory-samples] && [dict exists $res maxmemory-clients]}  \n\n        # Verify we also got the explicit config\n        assert {[dict exists $res bind]}  \n    }\n\n    test {redis-server command line arguments - error cases} {\n        # Take '--invalid' as the option.\n        catch {exec src/redis-server --invalid} err\n        assert_match {*Bad directive or wrong number of arguments*} $err\n\n        catch {exec src/redis-server --port} err\n        assert_match {*'port'*wrong number of arguments*} $err\n\n        catch {exec src/redis-server --port 6380 --loglevel} err\n        assert_match {*'loglevel'*wrong number of arguments*} $err\n\n        # Take `6379` and `6380` as the port option value.\n        catch {exec src/redis-server --port 6379 6380} err\n        assert_match {*'port \"6379\" \"6380\"'*wrong number of arguments*} $err\n\n        # Take `--loglevel` and `verbose` as the port option value.\n        catch {exec src/redis-server --port --loglevel verbose} err\n        assert_match {*'port \"--loglevel\" \"verbose\"'*wrong number of arguments*} $err\n\n        # Take `--bla` as the port option value.\n        catch {exec src/redis-server --port --bla --loglevel verbose} err\n        assert_match {*'port \"--bla\"'*argument couldn't be parsed into an integer*} $err\n\n        # Take `--bla` as the loglevel option value.\n        catch {exec src/redis-server --logfile --my--log--file --loglevel --bla} err\n        assert_match {*'loglevel \"--bla\"'*argument(s) must be one of the following*} $err\n\n        # Using MULTI_ARG's own check, empty option value\n        catch {exec src/redis-server --shutdown-on-sigint} err\n        assert_match {*'shutdown-on-sigint'*argument(s) must be one of the following*} 
$err\n        catch {exec src/redis-server --shutdown-on-sigint \"now force\" --shutdown-on-sigterm} err\n        assert_match {*'shutdown-on-sigterm'*argument(s) must be one of the following*} $err\n\n        # Something like `redis-server --some-config --config-value1 --config-value2 --loglevel debug` would break,\n        # because if you want to pass a value to a config starting with `--`, it can only be a single value.\n        catch {exec src/redis-server --replicaof 127.0.0.1 abc} err\n        assert_match {*'replicaof \"127.0.0.1\" \"abc\"'*Invalid master port*} $err\n        catch {exec src/redis-server --replicaof --127.0.0.1 abc} err\n        assert_match {*'replicaof \"--127.0.0.1\" \"abc\"'*Invalid master port*} $err\n        catch {exec src/redis-server --replicaof --127.0.0.1 --abc} err\n        assert_match {*'replicaof \"--127.0.0.1\"'*wrong number of arguments*} $err\n    } {} {external:skip}\n\n    test {redis-server command line arguments - allow passing option name and option value in the same arg} {\n        start_server {config \"default.conf\" args {\"--maxmemory 700mb\" \"--maxmemory-policy volatile-lru\"}} {\n            assert_match [r config get maxmemory] {maxmemory 734003200}\n            assert_match [r config get maxmemory-policy] {maxmemory-policy volatile-lru}\n        }\n    } {} {external:skip}\n\n    test {redis-server command line arguments - wrong usage that we support anyway} {\n        start_server {config \"default.conf\" args {loglevel verbose \"--maxmemory '700mb'\" \"--maxmemory-policy 'volatile-lru'\"}} {\n            assert_match [r config get loglevel] {loglevel verbose}\n            assert_match [r config get maxmemory] {maxmemory 734003200}\n            assert_match [r config get maxmemory-policy] {maxmemory-policy volatile-lru}\n        }\n    } {} {external:skip}\n\n    test {redis-server command line arguments - allow option value to use the `--` prefix} {\n        start_server {config \"default.conf\" args 
{--proc-title-template --my--title--template --loglevel verbose}} {\n            assert_match [r config get proc-title-template] {proc-title-template --my--title--template}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n    } {} {external:skip}\n\n    test {redis-server command line arguments - option name and option value in the same arg and `--` prefix} {\n        start_server {config \"default.conf\" args {\"--proc-title-template --my--title--template\" \"--loglevel verbose\"}} {\n            assert_match [r config get proc-title-template] {proc-title-template --my--title--template}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n    } {} {external:skip}\n\n    test {redis-server command line arguments - save with empty input} {\n        start_server {config \"default.conf\" args {--save --loglevel verbose}} {\n            assert_match [r config get save] {save {}}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n\n        start_server {config \"default.conf\" args {--loglevel verbose --save}} {\n            assert_match [r config get save] {save {}}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n\n        start_server {config \"default.conf\" args {--save {} --loglevel verbose}} {\n            assert_match [r config get save] {save {}}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n\n        start_server {config \"default.conf\" args {--loglevel verbose --save {}}} {\n            assert_match [r config get save] {save {}}\n            assert_match [r config get loglevel] {loglevel verbose}\n        }\n\n        start_server {config \"default.conf\" args {--proc-title-template --save --save {} --loglevel verbose}} {\n            assert_match [r config get proc-title-template] {proc-title-template --save}\n            assert_match [r config get save] {save {}}\n            assert_match [r config get 
loglevel] {loglevel verbose}\n        }\n\n    } {} {external:skip}\n\n    test {redis-server command line arguments - take one bulk string with spaces for MULTI_ARG configs parsing} {\n        start_server {config \"default.conf\" args {--shutdown-on-sigint nosave force now --shutdown-on-sigterm \"nosave force\"}} {\n            assert_match [r config get shutdown-on-sigint] {shutdown-on-sigint {nosave now force}}\n            assert_match [r config get shutdown-on-sigterm] {shutdown-on-sigterm {nosave force}}\n        }\n    } {} {external:skip}\n\n    # Config file at this point is at a weird state, and includes all\n    # known keywords. Might be a good idea to avoid adding tests here.\n}\n\nstart_server {tags {\"introspection external:skip\"} overrides {enable-protected-configs {no} enable-debug-command {no}}} {\n    test {cannot modify protected configuration - no} {\n        assert_error \"ERR *protected*\" {r config set dir somedir}\n        assert_error \"ERR *DEBUG command not allowed*\" {r DEBUG HELP}\n    } {} {needs:debug}\n}\n\nstart_server {config \"minimal.conf\" tags {\"introspection external:skip\"} overrides {protected-mode {no} enable-protected-configs {local} enable-debug-command {local}}} {\n    test {cannot modify protected configuration - local} {\n        # verify that for local connection it doesn't error\n        r config set dbfilename somename\n        r DEBUG HELP\n\n        # Get a non-loopback address of this instance for this test.\n        set myaddr [get_nonloopback_addr]\n        if {$myaddr != \"\" && ![string match {127.*} $myaddr]} {\n            # Non-loopback client should fail\n            set r2 [get_nonloopback_client]\n            assert_error \"ERR *protected*\" {$r2 config set dir somedir}\n            assert_error \"ERR *DEBUG command not allowed*\" {$r2 DEBUG HELP}\n        }\n    } {} {needs:debug}\n}\n\ntest {config during loading} {\n    start_server [list overrides [list key-load-delay 50 
loading-process-events-interval-bytes 1024 rdbcompression no save \"900 1\"]] {\n        # create a big rdb that will take long to load. it is important\n        # for keys to be big since the server processes events only once in 2mb.\n        # 100mb of rdb, 100k keys will load in more than 5 seconds\n        r debug populate 100000 key 1000\n\n        restart_server 0 false false\n\n        # make sure it's still loading\n        assert_equal [s loading] 1\n\n        # verify some configs are allowed during loading\n        r config set loglevel debug\n        assert_equal [lindex [r config get loglevel] 1] debug\n\n        # verify some configs are forbidden during loading\n        assert_error {LOADING*} {r config set dir asdf}\n\n        # make sure it's still loading\n        assert_equal [s loading] 1\n\n        # no need to keep waiting for loading to complete\n        exec kill [srv 0 pid]\n    }\n} {} {external:skip}\n\ntest {CONFIG REWRITE handles rename-command properly} {\n    start_server {tags {\"introspection\"} overrides {rename-command {flushdb badger}}} {\n        assert_error {ERR unknown command*} {r flushdb}\n\n        r config rewrite\n        restart_server 0 true false\n\n        assert_error {ERR unknown command*} {r flushdb}\n    }\n} {} {external:skip}\n\ntest {CONFIG REWRITE handles alias config properly} {\n    start_server {tags {\"introspection\"} overrides {hash-max-listpack-entries 20 hash-max-ziplist-entries 21}} {\n        assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 21}\n        assert_equal [r config get hash-max-ziplist-entries] {hash-max-ziplist-entries 21}\n        r config set hash-max-listpack-entries 100\n\n        r config rewrite\n        restart_server 0 true false\n\n        assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 100}\n    }\n    # test the order doesn't matter\n    start_server {tags {\"introspection\"} overrides {hash-max-ziplist-entries 
20 hash-max-listpack-entries 21}} {\n        assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 21}\n        assert_equal [r config get hash-max-ziplist-entries] {hash-max-ziplist-entries 21}\n        r config set hash-max-listpack-entries 100\n\n        r config rewrite\n        restart_server 0 true false\n\n        assert_equal [r config get hash-max-listpack-entries] {hash-max-listpack-entries 100}\n    }\n} {} {external:skip}\n\ntest {IO threads client number} {\n    start_server {overrides {io-threads 2} tags {external:skip}} {\n        set iothread_clients [get_io_thread_clients 1]\n        assert_equal $iothread_clients [s connected_clients]\n        assert_equal [get_io_thread_clients 0] 0\n\n        r script debug yes ; # Transfer to main thread\n        assert_equal [get_io_thread_clients 0] 1\n        assert_equal [get_io_thread_clients 1] [expr $iothread_clients - 1]\n\n        set iothread_clients [get_io_thread_clients 1]\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        assert_equal [get_io_thread_clients 1] [expr $iothread_clients + 2]\n        $rd1 close\n        $rd2 close\n        wait_for_condition 1000 10 {\n            [get_io_thread_clients 1] eq $iothread_clients\n        } else {\n            fail \"Fail to close clients of io thread 1\"\n        }\n        assert_equal [get_io_thread_clients 0] 1\n\n        r script debug no ; # Transfer to io thread\n        assert_equal [get_io_thread_clients 0] 0\n        assert_equal [get_io_thread_clients 1] [expr $iothread_clients + 1]\n    }\n}\n\ntest {Clients are evenly distributed among io threads} {\n    start_server {overrides {io-threads 4} tags {external:skip}} {\n        # There might be a client used for health checks (to detect if the server is up)\n        # that has not been freed timely. 
This can lead to an inaccurate count of\n        # connected clients processed by IO threads.\n        wait_for_condition 1000 10 {\n            [s connected_clients] eq 1\n        } else {\n            fail \"Fail to wait for connected_clients to be 1\"\n        }\n        global rdclients\n        for {set i 1} {$i < 9} {incr i} {\n            set rdclients($i) [redis_deferring_client]\n        }\n        for {set i 1} {$i <= 3} {incr i} {\n            assert_equal [get_io_thread_clients $i] 3\n        }\n\n        $rdclients(3) close\n        $rdclients(4) close\n        wait_for_condition 1000 10 {\n            [get_io_thread_clients 1] eq 2 &&\n            [get_io_thread_clients 2] eq 2 &&\n            [get_io_thread_clients 3] eq 3\n        } else {\n            fail \"Fail to close clients\"\n        }\n\n        set rdclients(3) [redis_deferring_client]\n        set rdclients(4) [redis_deferring_client]\n        for {set i 1} {$i <= 3} {incr i} {\n            assert_equal [get_io_thread_clients $i] 3\n        }\n    }\n}\n\n# Test insecure configuration warnings\nstart_server {tags {introspection external:skip} overrides {protected-mode no bind \"*\"} wait_ready false} {\n    test {Warning shown when no auth and binding all interfaces} {\n        wait_for_log_messages 0 {\"*WARNING: Redis does not require authentication and is not protected by network restrictions*\"} 0 10 100\n    }\n}\n\nstart_server {tags {introspection external:skip} overrides {protected-mode no bind \"127.0.0.1\"}} {\n    test {Warning shown for configured interface when binding specific address} {\n        wait_for_log_messages 0 {\"*WARNING: Redis does not require authentication*configured network interface*\"} 0 10 100\n    }\n}\n\nstart_server {tags {introspection external:skip} overrides {protected-mode yes}} {\n    test {Warning shown for local clients when protected mode is on} {\n        wait_for_log_messages 0 {\"*WARNING: Redis does not require authentication*local 
client*\"} 0 10 100\n    }\n}\n\nstart_server {tags {introspection external:skip} overrides {requirepass secret}} {\n    test {No warning shown when password is set} {\n        # Check that the warning does NOT appear\n        set loglines [exec cat [srv 0 stdout]]\n        assert_equal 0 [string match \"*WARNING: Redis does not require authentication*\" $loglines]\n    }\n}\n"
  },
  {
    "path": "tests/unit/keyspace.tcl",
    "content": "start_server {tags {\"keyspace\"}} {\n    test {DEL against a single item} {\n        r set x foo\n        assert {[r get x] eq \"foo\"}\n        r del x\n        r get x\n    } {}\n\n    test {Vararg DEL} {\n        r set foo1{t} a\n        r set foo2{t} b\n        r set foo3{t} c\n        list [r del foo1{t} foo2{t} foo3{t} foo4{t}] [r mget foo1{t} foo2{t} foo3{t}]\n    } {3 {{} {} {}}}\n\n    test {Untagged multi-key commands} {\n        r mset foo1 a foo2 b foo3 c\n        assert_equal {a b c {}} [r mget foo1 foo2 foo3 foo4]\n        r del foo1 foo2 foo3 foo4\n    } {3} {cluster:skip}\n\n    test {KEYS with pattern} {\n        foreach key {key_x key_y key_z foo_a foo_b foo_c} {\n            r set $key hello\n        }\n        lsort [r keys foo*]\n    } {foo_a foo_b foo_c}\n\n    test {KEYS to get all keys} {\n        lsort [r keys *]\n    } {foo_a foo_b foo_c key_x key_y key_z}\n\n    test {DBSIZE} {\n        r dbsize\n    } {6}\n\n    test {KEYS with hashtag} {\n        foreach key {\"{a}x\" \"{a}y\" \"{a}z\" \"{b}a\" \"{b}b\" \"{b}c\"} {\n            r set $key hello\n        }\n        assert_equal [lsort [r keys \"{a}*\"]] [list \"{a}x\" \"{a}y\" \"{a}z\"]\n        assert_equal [lsort [r keys \"*{b}*\"]] [list \"{b}a\" \"{b}b\" \"{b}c\"]\n    } \n\n    test {DEL all keys} {\n        foreach key [r keys *] {r del $key}\n        r dbsize\n    } {0}\n\n    test \"DEL against expired key\" {\n        r debug set-active-expire 0\n        r setex keyExpire 1 valExpire\n        after 1100\n        assert_equal 0 [r del keyExpire]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {EXISTS} {\n        set res {}\n        r set newkey test\n        append res [r exists newkey]\n        r del newkey\n        append res [r exists newkey]\n    } {10}\n\n    test {Zero length value in key. 
SET/GET/EXISTS} {\n        r set emptykey {}\n        set res [r get emptykey]\n        append res [r exists emptykey]\n        r del emptykey\n        append res [r exists emptykey]\n    } {10}\n\n    test {Commands pipelining} {\n        set fd [r channel]\n        puts -nonewline $fd \"SET k1 xyzk\\r\\nGET k1\\r\\nPING\\r\\n\"\n        flush $fd\n        set res {}\n        append res [string match OK* [r read]]\n        append res [r read]\n        append res [string match PONG* [r read]]\n        format $res\n    } {1xyzk1}\n\n    test {Non existing command} {\n        catch {r foobaredcommand} err\n        string match ERR* $err\n    } {1}\n\n    test {RENAME basic usage} {\n        r set mykey{t} hello\n        r rename mykey{t} mykey1{t}\n        r rename mykey1{t} mykey2{t}\n        r get mykey2{t}\n    } {hello}\n\n    test {RENAME source key should no longer exist} {\n        r exists mykey\n    } {0}\n\n    test {RENAME against already existing key} {\n        r set mykey{t} a\n        r set mykey2{t} b\n        r rename mykey2{t} mykey{t}\n        set res [r get mykey{t}]\n        append res [r exists mykey2{t}]\n    } {b0}\n\n    test {RENAMENX basic usage} {\n        r del mykey{t}\n        r del mykey2{t}\n        r set mykey{t} foobar\n        r renamenx mykey{t} mykey2{t}\n        set res [r get mykey2{t}]\n        append res [r exists mykey{t}]\n    } {foobar0}\n\n    test {RENAMENX against already existing key} {\n        r set mykey{t} foo\n        r set mykey2{t} bar\n        r renamenx mykey{t} mykey2{t}\n    } {0}\n\n    test {RENAMENX against already existing key (2)} {\n        set res [r get mykey{t}]\n        append res [r get mykey2{t}]\n    } {foobar}\n\n    test {RENAME against non existing source key} {\n        catch {r rename nokey{t} foobar{t}} err\n        format $err\n    } {ERR*}\n\n    test {RENAME where source and dest key are the same (existing)} {\n        r set mykey foo\n        r rename mykey mykey\n    } {OK}\n\n    
test {RENAMENX where source and dest key are the same (existing)} {\n        r set mykey foo\n        r renamenx mykey mykey\n    } {0}\n\n    test {RENAME where source and dest key are the same (non existing)} {\n        r del mykey\n        catch {r rename mykey mykey} err\n        format $err\n    } {ERR*}\n\n    test {RENAME with volatile key, should move the TTL as well} {\n        r del mykey{t} mykey2{t}\n        r set mykey{t} foo\n        r expire mykey{t} 100\n        assert {[r ttl mykey{t}] > 95 && [r ttl mykey{t}] <= 100}\n        r rename mykey{t} mykey2{t}\n        assert {[r ttl mykey2{t}] > 95 && [r ttl mykey2{t}] <= 100}\n    }\n\n    test {RENAME with volatile key, should not inherit TTL of target key} {\n        r del mykey{t} mykey2{t}\n        r set mykey{t} foo\n        r set mykey2{t} bar\n        r expire mykey2{t} 100\n        assert {[r ttl mykey{t}] == -1 && [r ttl mykey2{t}] > 0}\n        r rename mykey{t} mykey2{t}\n        r ttl mykey2{t}\n    } {-1}\n\n    test {DEL all keys again (DB 0)} {\n        foreach key [r keys *] {\n            r del $key\n        }\n        r dbsize\n    } {0}\n\n    test {DEL all keys again (DB 1)} {\n        r select 10\n        foreach key [r keys *] {\n            r del $key\n        }\n        set res [r dbsize]\n        r select 9\n        format $res\n    } {0} {singledb:skip}\n\n    test {COPY basic usage for string} {\n        r set mykey{t} foobar\n        set res {}\n        r copy mykey{t} mynewkey{t}\n        lappend res [r get mynewkey{t}]\n        lappend res [r dbsize]\n        if {$::singledb} {\n            assert_equal [list foobar 2] [format $res]\n        } else {\n            r copy mykey{t} mynewkey{t} DB 10\n            r select 10\n            lappend res [r get mynewkey{t}]\n            lappend res [r dbsize]\n            r select 9\n            assert_equal [list foobar 2 foobar 1] [format $res]\n        }\n    } \n\n    test {COPY for string does not replace an existing key 
without REPLACE option} {\n        r set mykey2{t} hello\n        catch {r copy mykey2{t} mynewkey{t} DB 10} e\n        set e\n    } {0} {singledb:skip}\n\n    test {COPY for string can replace an existing key with REPLACE option} {\n        r copy mykey2{t} mynewkey{t} DB 10 REPLACE\n        r select 10\n        r get mynewkey{t}\n    } {hello} {singledb:skip}\n\n    test {COPY for string ensures that copied data is independent of copying data} {\n        r flushdb\n        r select 9\n        r set mykey{t} foobar\n        set res {}\n        r copy mykey{t} mynewkey{t} DB 10\n        r select 10\n        lappend res [r get mynewkey{t}]\n        r set mynewkey{t} hoge\n        lappend res [r get mynewkey{t}]\n        r select 9\n        lappend res [r get mykey{t}]\n        r select 10\n        r flushdb\n        r select 9\n        format $res\n    } [list foobar hoge foobar] {singledb:skip}\n\n    test {COPY for string does not copy data to no-integer DB} {\n        r set mykey{t} foobar\n        catch {r copy mykey{t} mynewkey{t} DB notanumber} e\n        set e\n    } {ERR value is not an integer or out of range}\n\n    test {COPY can copy key expire metadata as well} {\n        r set mykey{t} foobar ex 100\n        r copy mykey{t} mynewkey{t} REPLACE\n        assert {[r ttl mynewkey{t}] > 0 && [r ttl mynewkey{t}] <= 100}\n        assert {[r get mynewkey{t}] eq \"foobar\"}\n    }\n\n    test {COPY does not create an expire if it does not exist} {\n        r set mykey{t} foobar\n        assert {[r ttl mykey{t}] == -1}\n        r copy mykey{t} mynewkey{t} REPLACE\n        assert {[r ttl mynewkey{t}] == -1}\n        assert {[r get mynewkey{t}] eq \"foobar\"}\n    }\n\narray set largevalue [generate_largevalue_test_array]\nforeach {type large} [array get largevalue] {\n    set origin_config [config_get_set list-max-listpack-size -1]\n    test \"COPY basic usage for list - $type\" {\n        r del mylist{t} mynewlist{t}\n        r lpush mylist{t} a b $large c d\n   
     assert_encoding $type mylist{t}\n        r copy mylist{t} mynewlist{t}\n        assert_encoding $type mynewlist{t}\n        set digest [debug_digest_value mylist{t}]\n        assert_equal $digest [debug_digest_value mynewlist{t}]\n        assert_refcount 1 mylist{t}\n        assert_refcount 1 mynewlist{t}\n        r del mylist{t}\n        assert_equal $digest [debug_digest_value mynewlist{t}]\n    }\n    config_set list-max-listpack-size $origin_config\n}\n\n    foreach type {intset listpack hashtable} {\n        test \"COPY basic usage for $type set\" {\n            r del set1{t} newset1{t}\n            r sadd set1{t} 1 2 3\n            if {$type ne \"intset\"} {\n                r sadd set1{t} a\n            }\n            if {$type eq \"hashtable\"} {\n                for {set i 4} {$i < 200} {incr i} {\n                    r sadd set1{t} $i\n                }\n            }\n            assert_encoding $type set1{t}\n            r copy set1{t} newset1{t}\n            set digest [debug_digest_value set1{t}]\n            assert_equal $digest [debug_digest_value newset1{t}]\n            assert_refcount 1 set1{t}\n            assert_refcount 1 newset1{t}\n            r del set1{t}\n            assert_equal $digest [debug_digest_value newset1{t}]\n        }\n    }\n\n    test {COPY basic usage for listpack sorted set} {\n        r del zset1{t} newzset1{t}\n        r zadd zset1{t} 123 foobar\n        assert_encoding listpack zset1{t}\n        r copy zset1{t} newzset1{t}\n        set digest [debug_digest_value zset1{t}]\n        assert_equal $digest [debug_digest_value newzset1{t}]\n        assert_refcount 1 zset1{t}\n        assert_refcount 1 newzset1{t}\n        r del zset1{t}\n        assert_equal $digest [debug_digest_value newzset1{t}]\n    }\n\n     test {COPY basic usage for skiplist sorted set} {\n        r del zset2{t} newzset2{t}\n        set original_max [lindex [r config get zset-max-ziplist-entries] 1]\n        r config set zset-max-ziplist-entries 
0\n        for {set j 0} {$j < 130} {incr j} {\n            r zadd zset2{t} [randomInt 50] ele-[randomInt 10]\n        }\n        assert_encoding skiplist zset2{t}\n        r copy zset2{t} newzset2{t}\n        set digest [debug_digest_value zset2{t}]\n        assert_equal $digest [debug_digest_value newzset2{t}]\n        assert_refcount 1 zset2{t}\n        assert_refcount 1 newzset2{t}\n        r del zset2{t}\n        assert_equal $digest [debug_digest_value newzset2{t}]\n        r config set zset-max-ziplist-entries $original_max\n    }\n\n    test {COPY basic usage for listpack hash} {\n        r config set hash-max-listpack-entries 512\n        r del hash1{t} newhash1{t}\n        r hset hash1{t} tmp 17179869184\n        assert_encoding listpack hash1{t}\n        r copy hash1{t} newhash1{t}\n        set digest [debug_digest_value hash1{t}]\n        assert_equal $digest [debug_digest_value newhash1{t}]\n        assert_refcount 1 hash1{t}\n        assert_refcount 1 newhash1{t}\n        r del hash1{t}\n        assert_equal $digest [debug_digest_value newhash1{t}]\n    }\n\n    test {COPY basic usage for hashtable hash} {\n        r del hash2{t} newhash2{t}\n        set original_max [lindex [r config get hash-max-ziplist-entries] 1]\n        r config set hash-max-ziplist-entries 0\n        for {set i 0} {$i < 64} {incr i} {\n            r hset hash2{t} [randomValue] [randomValue]\n        }\n        assert_encoding hashtable hash2{t}\n        r copy hash2{t} newhash2{t}\n        set digest [debug_digest_value hash2{t}]\n        assert_equal $digest [debug_digest_value newhash2{t}]\n        assert_refcount 1 hash2{t}\n        assert_refcount 1 newhash2{t}\n        r del hash2{t}\n        assert_equal $digest [debug_digest_value newhash2{t}]\n        r config set hash-max-ziplist-entries $original_max\n    }\n\n    test {COPY basic usage for stream} {\n        r del mystream{t} mynewstream{t}\n        for {set i 0} {$i < 1000} {incr i} {\n            r XADD mystream{t} 
* item 2 value b\n        }\n        r copy mystream{t} mynewstream{t}\n        set digest [debug_digest_value mystream{t}]\n        assert_equal $digest [debug_digest_value mynewstream{t}]\n        assert_refcount 1 mystream{t}\n        assert_refcount 1 mynewstream{t}\n        r del mystream{t}\n        assert_equal $digest [debug_digest_value mynewstream{t}]\n    }\n\n    test {COPY basic usage for stream-cgroups} {\n        r del x{t}\n        r XADD x{t} 100 a 1\n        set id [r XADD x{t} 101 b 1]\n        r XADD x{t} 102 c 1\n        r XADD x{t} 103 e 1\n        r XADD x{t} 104 f 1\n        r XADD x{t} 105 g 1\n        r XGROUP CREATE x{t} g1 0\n        r XGROUP CREATE x{t} g2 0\n        r XREADGROUP GROUP g1 Alice COUNT 1 STREAMS x{t} >\n        r XREADGROUP GROUP g1 Bob COUNT 1 STREAMS x{t} >\n        r XREADGROUP GROUP g1 Bob NOACK COUNT 1 STREAMS x{t} >\n        r XREADGROUP GROUP g2 Charlie COUNT 4 STREAMS x{t} >\n        r XGROUP SETID x{t} g1 $id\n        r XREADGROUP GROUP g1 Dave COUNT 3 STREAMS x{t} >\n        r XDEL x{t} 103\n\n        r copy x{t} newx{t}\n        set info [r xinfo stream x{t} full]\n        assert_equal $info [r xinfo stream newx{t} full]\n        assert_refcount 1 x{t}\n        assert_refcount 1 newx{t}\n        r del x{t}\n        assert_equal $info [r xinfo stream newx{t} full]\n        r flushdb\n    }\n\n    test {MOVE basic usage} {\n        r set mykey foobar\n        r move mykey 10\n        set res {}\n        lappend res [r exists mykey]\n        lappend res [r dbsize]\n        r select 10\n        lappend res [r get mykey]\n        lappend res [r dbsize]\n        r select 9\n        format $res\n    } [list 0 0 foobar 1] {singledb:skip}\n\n    test {MOVE against key existing in the target DB} {\n        r set mykey hello\n        r move mykey 10\n    } {0} {singledb:skip}\n\n    test {MOVE against non-integer DB (#1428)} {\n        r set mykey hello\n        catch {r move mykey notanumber} e\n        set e\n    } {ERR 
value is not an integer or out of range} {singledb:skip}\n\n    test {MOVE can move key expire metadata as well} {\n        r select 10\n        r flushdb\n        r select 9\n        r set mykey foo ex 100\n        r move mykey 10\n        assert {[r ttl mykey] == -2}\n        r select 10\n        assert {[r ttl mykey] > 0 && [r ttl mykey] <= 100}\n        assert {[r get mykey] eq \"foo\"}\n        r select 9\n    } {OK} {singledb:skip}\n\n    test {MOVE does not create an expire if it does not exist} {\n        r select 10\n        r flushdb\n        r select 9\n        r set mykey foo\n        r move mykey 10\n        assert {[r ttl mykey] == -2}\n        r select 10\n        assert {[r ttl mykey] == -1}\n        assert {[r get mykey] eq \"foo\"}\n        r select 9\n    } {OK} {singledb:skip}\n\n    test {SET/GET keys in different DBs} {\n        r set a hello\n        r set b world\n        r select 10\n        r set a foo\n        r set b bared\n        r select 9\n        set res {}\n        lappend res [r get a]\n        lappend res [r get b]\n        r select 10\n        lappend res [r get a]\n        lappend res [r get b]\n        r select 9\n        format $res\n    } {hello world foo bared} {singledb:skip}\n\n    test {RANDOMKEY} {\n        r flushdb\n        r set foo x\n        r set bar y\n        set foo_seen 0\n        set bar_seen 0\n        for {set i 0} {$i < 100} {incr i} {\n            set rkey [r randomkey]\n            if {$rkey eq {foo}} {\n                set foo_seen 1\n            }\n            if {$rkey eq {bar}} {\n                set bar_seen 1\n            }\n        }\n        list $foo_seen $bar_seen\n    } {1 1}\n\n    test {RANDOMKEY against empty DB} {\n        r flushdb\n        r randomkey\n    } {}\n\n    test {RANDOMKEY regression 1} {\n        r flushdb\n        r set x 10\n        r del x\n        r randomkey\n    } {}\n\n    test {KEYS * two times with long key, Github issue #1208} {\n        r flushdb\n        r set 
dlskeriewrioeuwqoirueioqwrueoqwrueqw test\n        r keys *\n        r keys *\n    } {dlskeriewrioeuwqoirueioqwrueoqwrueqw}\n\n    test {Regression for pattern matching long nested loops} {\n        r flushdb\n        r SET aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa 1\n        r KEYS \"a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*a*b\"\n    } {}\n\n    test {Regression for pattern matching very long nested loops} {\n        r flushdb\n        r SET [string repeat \"a\" 50000] 1\n        r KEYS [string repeat \"*?\" 50000]\n    } {}\n\n    test {Coverage: basic SWAPDB test and unhappy path} {\n       r flushall\n       r select 0\n       r set swapkey v1\n       r select 1\n       assert_match 0 [r dbsize] ;#verify DB[1] has 0 keys\n       r swapdb 0 1\n       assert_match 1 [r dbsize]\n       r select 0\n       assert_match 0 [r dbsize] ;#verify DB[0] has 0 keys\n       r flushall\n       assert_error \"ERR DB index is out of range*\" {r swapdb 44 55}\n       assert_error \"ERR invalid second DB index*\" {r swapdb 44 a}\n       assert_error \"ERR invalid first DB index*\" {r swapdb a 55}\n       assert_error \"ERR invalid first DB index*\" {r swapdb a b}\n       assert_match \"OK\" [r swapdb 0 0]\n    } {} {singledb:skip}\n\n    test {Coverage: SWAPDB and FLUSHDB} {\n       # set a key in each db and swapdb one of 2 with different db\n       # and flushdb on swapped db.\n       r flushall\n       r select 0\n       r set swapkey v1\n       r select 1\n       r set swapkey1 v1\n       assert_no_match \"*db2:keys=*\" [r info keyspace]\n       r swapdb 0 2\n       r select 0\n       assert_match 0 [r dbsize]\n       assert_no_match \"*db0:keys=*\" [r info keyspace]\n       r select 2\n       r flushdb\n       assert_match 0 [r dbsize]\n       assert_match \"*db1:keys=*\" [r info keyspace]\n       assert_no_match \"*db0:keys=*\" [r info keyspace]\n       assert_no_match \"*db2:keys=*\" [r info keyspace]\n       r flushall\n    } {OK} {singledb:skip}\n}\n"
  },
  {
    "path": "tests/unit/latency-monitor.tcl",
    "content": "start_server {tags {\"latency-monitor needs:latency\"}} {\n    # Set a threshold high enough to avoid spurious latency events.\n    r config set latency-monitor-threshold 200\n    r latency reset\n\n    test {LATENCY HISTOGRAM with empty histogram} {\n        r config resetstat\n        set histo [dict create {*}[r latency histogram]]\n        # Config resetstat is recorded\n        assert_equal [dict size $histo] 1\n        assert_match {*config|resetstat*} $histo\n    }\n\n    test {LATENCY HISTOGRAM all commands} {\n        r config resetstat\n        r set a b\n        r set c d\n        set histo [dict create {*}[r latency histogram]]\n        assert_match {calls 2 histogram_usec *} [dict get $histo set]\n        assert_match {calls 1 histogram_usec *} [dict get $histo \"config|resetstat\"]\n    }\n\n    test {LATENCY HISTOGRAM sub commands} {\n        r config resetstat\n        r client id\n        r client list\n        # parent command reply with its sub commands\n        set histo [dict create {*}[r latency histogram client]]\n        assert {[dict size $histo] == 2}\n        assert_match {calls 1 histogram_usec *} [dict get $histo \"client|id\"]\n        assert_match {calls 1 histogram_usec *} [dict get $histo \"client|list\"]\n\n        # explicitly ask for one sub-command\n        set histo [dict create {*}[r latency histogram \"client|id\"]]\n        assert {[dict size $histo] == 1}\n        assert_match {calls 1 histogram_usec *} [dict get $histo \"client|id\"]\n    }\n\n    test {LATENCY HISTOGRAM with a subset of commands} {\n        r config resetstat\n        r set a b\n        r set c d\n        r get a\n        r hset f k v\n        r hgetall f\n        set histo [dict create {*}[r latency histogram set hset]]\n        assert_match {calls 2 histogram_usec *} [dict get $histo set]\n        assert_match {calls 1 histogram_usec *} [dict get $histo hset]\n        assert_equal [dict size $histo] 2\n        set histo [dict create 
{*}[r latency histogram hgetall get zadd]]\n        assert_match {calls 1 histogram_usec *} [dict get $histo hgetall]\n        assert_match {calls 1 histogram_usec *} [dict get $histo get]\n        assert_equal [dict size $histo] 2\n    }\n\n    test {LATENCY HISTOGRAM command} {\n        r config resetstat\n        r set a b\n        r get a\n        assert {[llength [r latency histogram set get]] == 4}\n    }\n\n    test {LATENCY HISTOGRAM with wrong command name skips the invalid one} {\n        r config resetstat\n        assert {[llength [r latency histogram blabla]] == 0}\n        assert {[llength [r latency histogram blabla blabla2 set get]] == 0}\n        r set a b\n        r get a\n        assert_match {calls 1 histogram_usec *} [lindex [r latency histogram blabla blabla2 set get] 1]\n        assert_match {calls 1 histogram_usec *} [lindex [r latency histogram blabla blabla2 set get] 3]\n        assert {[string length [r latency histogram blabla set get]] > 0}\n    }\n\ntags {\"needs:debug\"} {\n    set old_threshold_value [lindex [r config get latency-monitor-threshold] 1]\n\n    test {Test latency events logging} {\n        r config set latency-monitor-threshold 200\n        r latency reset\n        r debug sleep 0.3\n        after 1100\n        r debug sleep 0.4\n        after 1100\n        r debug sleep 0.5\n        r config set latency-monitor-threshold 0\n        assert {[r latency history command] >= 3}\n    }\n\n    test {LATENCY HISTORY output is ok} {\n        set res [r latency history command]\n        if {$::verbose} {\n            puts \"LATENCY HISTORY data:\"\n            puts $res\n        }\n\n        set min 250\n        set max 450\n        foreach event $res {\n            lassign $event time latency\n            if {!$::no_latency} {\n                assert {$latency >= $min && $latency <= $max}\n            }\n            incr min 100\n            incr max 100\n            set last_time $time ; # Used in the next test\n        }\n    
}\n\n    test {LATENCY LATEST output is ok} {\n        set res [r latency latest]\n        if {$::verbose} {\n            puts \"LATENCY LATEST data:\"\n            puts $res\n        }\n\n        foreach event $res {\n            lassign $event eventname time latency max\n            assert {$eventname eq \"command\"}\n            if {!$::no_latency} {\n                assert {$max >= 450 && $max <= 650}\n                assert {$time == $last_time}\n            }\n            break\n        }\n    }\n\n    test {LATENCY GRAPH can output the event graph} {\n        set res [r latency graph command]\n        if {$::verbose} {\n            puts \"LATENCY GRAPH data:\"\n            puts $res\n        }\n        assert_match {*command*high*low*} $res\n\n        # These numbers are taken from the \"Test latency events logging\" test.\n        # (debug sleep 0.3) and (debug sleep 0.5), using a range to prevent timing issues.\n        regexp \"command - high (.*?) ms, low (.*?) ms\" $res -> high low\n        assert_morethan_equal $high 500\n        assert_morethan_equal $low 300\n    }\n\n    r config set latency-monitor-threshold $old_threshold_value\n} ;# tag\n\n    test {LATENCY of expire events are correctly collected} {\n        r config set latency-monitor-threshold 20\n        r flushdb\n        if {$::valgrind} {set count 100000} else {set count 1000000}\n        r eval {\n            local i = 0\n            while (i < tonumber(ARGV[1])) do\n                redis.call('sadd',KEYS[1],i)\n                i = i+1\n             end\n        } 1 mybigkey $count\n        r pexpire mybigkey 50\n        wait_for_condition 5 100 {\n            [r dbsize] == 0\n        } else {\n            fail \"key wasn't expired\"\n        }\n        assert_match {*expire-cycle*} [r latency latest]\n\n        test {LATENCY GRAPH can output the expire event graph} {\n             assert_match {*expire-cycle*high*low*} [r latency graph expire-cycle]\n        }\n\n        r config set 
latency-monitor-threshold 200\n    }\n\n    test {LATENCY HISTORY / RESET with wrong event name is fine} {\n        assert {[llength [r latency history blabla]] == 0}\n        assert {[r latency reset blabla] == 0}\n    }\n\n    test {LATENCY DOCTOR produces some output} {\n        assert {[string length [r latency doctor]] > 0}\n    }\n\n    test {LATENCY RESET is able to reset events} {\n        assert {[r latency reset] > 0}\n        assert {[r latency latest] eq {}}\n    }\n\n    test {LATENCY HELP should not have unexpected options} {\n        catch {r LATENCY help xxx} e\n        assert_match \"*wrong number of arguments for 'latency|help' command\" $e\n    }\n}\n"
  },
  {
    "path": "tests/unit/lazyfree.tcl",
    "content": "start_server {tags {\"lazyfree\"}} {\n    test \"UNLINK can reclaim memory in background\" {\n        set orig_mem [s used_memory]\n        set args {}\n        for {set i 0} {$i < 100000} {incr i} {\n            lappend args $i\n        }\n        r sadd myset {*}$args\n        assert {[r scard myset] == 100000}\n        set peak_mem [s used_memory]\n        assert {[r unlink myset] == 1}\n        assert {$peak_mem > $orig_mem+1000000}\n        wait_for_condition 50 100 {\n            [s used_memory] < $peak_mem &&\n            [s used_memory] < $orig_mem*2\n        } else {\n            fail \"Memory is not reclaimed by UNLINK\"\n        }\n    }\n\n    test \"FLUSHDB ASYNC can reclaim memory in background\" {\n        # make the previous test is really done before sampling used_memory\n        wait_lazyfree_done r\n\n        set orig_mem [s used_memory]\n        set args {}\n        for {set i 0} {$i < 100000} {incr i} {\n            lappend args $i\n        }\n        r sadd myset {*}$args\n        assert {[r scard myset] == 100000}\n        set peak_mem [s used_memory]\n        r flushdb async\n        assert {$peak_mem > $orig_mem+1000000}\n        wait_for_condition 50 100 {\n            [s used_memory] < $peak_mem &&\n            [s used_memory] < $orig_mem*2\n        } else {\n            fail \"Memory is not reclaimed by FLUSHDB ASYNC\"\n        }\n    }\n\n    test \"lazy free a stream with all types of metadata\" {\n        # make the previous test is really done before doing RESETSTAT\n        wait_for_condition 50 100 {\n            [s lazyfree_pending_objects] == 0\n        } else {\n            fail \"lazyfree isn't done\"\n        }\n\n        r config resetstat\n        r config set stream-node-max-entries 5\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r xadd stream * foo $j\n            } else {\n                r xadd stream * bar $j\n            }\n        }\n        r xgroup 
create stream mygroup 0\n        set records [r xreadgroup GROUP mygroup Alice COUNT 2 STREAMS stream >]\n        r xdel stream [lindex [lindex [lindex [lindex $records 0] 1] 1] 0]\n        r xack stream mygroup [lindex [lindex [lindex [lindex $records 0] 1] 0] 0]\n        r unlink stream\n\n        # make sure it was lazy freed\n        wait_for_condition 50 100 {\n            [s lazyfree_pending_objects] == 0\n        } else {\n            fail \"lazyfree isn't done\"\n        }\n        assert_equal [s lazyfreed_objects] 1\n    } {} {needs:config-resetstat}\n\n    test \"lazy free a stream with deleted cgroup\" {\n        r config resetstat\n        r xadd s * a b\n        r xgroup create s bla $\n        r xgroup destroy s bla\n        r unlink s\n\n        # make sure it was not lazy freed\n        wait_for_condition 50 100 {\n            [s lazyfree_pending_objects] == 0\n        } else {\n            fail \"lazyfree isn't done\"\n        }\n        assert_equal [s lazyfreed_objects] 0\n    } {} {needs:config-resetstat}\n\n    test \"FLUSHALL SYNC optimized to run in bg as blocking FLUSHALL ASYNC\" {\n        set num_keys 1000\n        r config resetstat\n\n        # Verify at start there are no lazyfree pending objects\n        assert_equal [s lazyfree_pending_objects] 0\n\n        # Fillup DB with items\n        populate $num_keys\n\n        # Run FLUSHALL SYNC command, optimized as blocking ASYNC\n        r flushall\n\n        # Verify all keys counted as lazyfreed\n        assert_equal [s lazyfreed_objects] $num_keys\n    }\n\n    test \"Run consecutive blocking FLUSHALL ASYNC successfully\" {\n        r config resetstat\n        set rd [redis_deferring_client]\n\n        # Fillup DB with items\n        r set x 1\n        r set y 2\n\n        $rd write \"FLUSHALL\\r\\nFLUSHALL\\r\\nFLUSHDB\\r\\n\"\n        $rd flush\n        assert_equal [$rd read] {OK}\n        assert_equal [$rd read] {OK}\n        assert_equal [$rd read] {OK}\n        assert_equal [s 
lazyfreed_objects] 2\n        $rd close\n    }\n\n    test \"FLUSHALL SYNC in MULTI not optimized to run as blocking FLUSHALL ASYNC\" {\n        r config resetstat\n\n        # Fillup DB with items\n        r set x 11\n        r set y 22\n\n        # FLUSHALL SYNC in multi\n        r multi\n        r flushall\n        r exec\n\n        # Verify flushall not run as lazyfree\n        assert_equal [s lazyfree_pending_objects] 0\n        assert_equal [s lazyfreed_objects] 0\n    }\n\n    test \"Client closed in the middle of blocking FLUSHALL ASYNC\" {\n        set num_keys 100000\n        r config resetstat\n\n        # Fillup DB with items\n        populate $num_keys\n\n        # close client in the middle of ongoing Blocking FLUSHALL ASYNC\n        set rd [redis_deferring_client]\n        $rd flushall\n        $rd close\n\n        # Wait to verify all keys counted as lazyfreed\n        wait_for_condition 50 100 {\n            [s lazyfreed_objects] == $num_keys\n        } else {\n            fail \"Unexpected number of lazyfreed_objects: [s lazyfreed_objects]\"\n        }\n    }\n\n    test \"Pending commands in querybuf processed once unblocking FLUSHALL ASYNC\" {\n        r config resetstat\n        set rd [redis_deferring_client]\n\n        # Fillup DB with items\n        r set x 1\n        r set y 2\n\n        $rd write \"FLUSHALL\\r\\nPING\\r\\n\"\n        $rd flush\n        assert_equal [$rd read] {OK}\n        assert_equal [$rd read] {PONG}\n        assert_equal [s lazyfreed_objects] 2\n        $rd close\n    }\n\n    test \"Unblocks client blocked on lazyfree via REPLICAOF command\" {\n        r config resetstat\n        set rd [redis_deferring_client]\n\n        populate 50000 ;# Just to make flushdb async slower\n        $rd flushdb\n\n        # Verify flushdb run as lazyfree\n        wait_for_condition 50 100 {\n            [s lazyfree_pending_objects] > 0 ||\n            [s lazyfreed_objects] > 0\n        } else {\n            fail \"FLUSHDB didn't run as 
lazyfree\"\n        }\n\n        # Test that slaveof command unblocks clients without assertion failure\n        r slaveof 127.0.0.1 0\n        assert_equal [$rd read] {OK}\n        $rd close\n        r ping\n        r slaveof no one\n    } {OK} {external:skip}\n}\n"
  },
  {
    "path": "tests/unit/limits.tcl",
    "content": "start_server {tags {\"limits network external:skip\"} overrides {maxclients 10}} {\n    if {$::tls} {\n        set expected_code \"*I/O error*\"\n    } else {\n        set expected_code \"*ERR max*reached*\"\n    }\n    test {Check if maxclients works refusing connections} {\n        set c 0\n        catch {\n            while {$c < 50} {\n                incr c\n                set rd [redis_deferring_client]\n                $rd ping\n                $rd read\n                after 100\n            }\n        } e\n        assert {$c > 8 && $c <= 10}\n        set e\n    } $expected_code\n}\n"
  },
  {
    "path": "tests/unit/maxmemory.tcl",
    "content": "start_server {tags {\"maxmemory\" \"external:skip\"}} {\n\n    test {SET and RESTORE key nearly as large as the memory limit} {\n        r flushall\n        set used [s used_memory]\n        r config set maxmemory [expr {$used+10000000}]\n        r set foo [string repeat a 8000000]\n        set encoded [r dump foo]\n        r del foo\n        r restore foo 0 $encoded\n        r strlen foo\n    } {8000000} {logreqres:skip}\n\n    r flushall\n    r config set maxmemory 11mb\n    r config set maxmemory-policy allkeys-lru\n    set server_pid [s process_id]\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    proc init_test {client_eviction} {\n        r flushdb\n\n        set prev_maxmemory_clients [r config get maxmemory-clients]\n        if $client_eviction {\n            r config set maxmemory-clients 3mb\n            r client no-evict on\n        } else {\n            r config set maxmemory-clients 0\n        }\n\n        r config resetstat\n        # fill 5mb using 50 keys of 100kb\n        for {set j 0} {$j < 50} {incr j} {\n            r setrange $j 100000 x\n        }\n        assert_equal [r dbsize] 50\n    }\n    \n    # Return true if the eviction occurred (client or key) based on argument\n    proc check_eviction_test {client_eviction} {\n        set evicted_keys [s evicted_keys]\n        set evicted_clients [s evicted_clients]\n        set dbsize [r dbsize]\n        \n        if $client_eviction {\n            if {[lindex [r config get io-threads] 1] == 1} {\n                return [expr $evicted_clients > 0 && $evicted_keys == 0 && $dbsize == 50]\n            } else {\n                return [expr $evicted_clients >= 0 && $evicted_keys >= 0 && $dbsize <= 50]\n            }\n        } else {\n            return [expr $evicted_clients == 0 && $evicted_keys > 0 && $dbsize < 50]\n        }\n    }\n\n    # Assert the eviction test passed (and prints some debug info on verbose)\n    proc 
verify_eviction_test {client_eviction} {\n        set evicted_keys [s evicted_keys]\n        set evicted_clients [s evicted_clients]\n        set dbsize [r dbsize]\n        \n        if $::verbose {\n            puts \"evicted keys: $evicted_keys\"\n            puts \"evicted clients: $evicted_clients\"\n            puts \"dbsize: $dbsize\"\n        }\n\n        assert [check_eviction_test $client_eviction]\n    }\n\n    foreach {client_eviction} {false true} {\n        set clients {}\n        test \"eviction due to output buffers of many MGET clients, client eviction: $client_eviction\" {\n            init_test $client_eviction\n\n            for {set j 0} {$j < 20} {incr j} {\n                set rr [redis_deferring_client]\n                lappend clients $rr\n            }\n            \n            # Generate client output buffers via MGET until we can observe some effect on \n            # keys / client eviction, or we time out.\n            set t [clock seconds]\n            while {![check_eviction_test $client_eviction] && [expr [clock seconds] - $t] < 20} {\n                foreach rr $clients {\n                    if {[catch {\n                        $rr mget 1\n                        $rr flush\n                    } err]} {\n                        lremove clients $rr\n                    }\n                }\n            }\n\n            verify_eviction_test $client_eviction\n        }\n        foreach rr $clients {\n            $rr close\n        }\n\n        set clients {}\n        test \"eviction due to input buffer of a dead client, client eviction: $client_eviction\" {\n            init_test $client_eviction\n            \n            for {set j 0} {$j < 30} {incr j} {\n                set rr [redis_deferring_client]\n                lappend clients $rr\n            }\n\n            foreach rr $clients {\n                if {[catch {\n                    $rr write \"*250\\r\\n\"\n                    for {set j 0} {$j < 249} {incr j} {\n          
              $rr write \"\\$1000\\r\\n\"\n                        $rr write [string repeat x 1000]\n                        $rr write \"\\r\\n\"\n                        $rr flush\n                    }\n                }]} {\n                    lremove clients $rr\n                }\n            }\n\n            verify_eviction_test $client_eviction\n        }\n        foreach rr $clients {\n            $rr close\n        }\n\n        set clients {}\n        test \"eviction due to output buffers of pubsub, client eviction: $client_eviction\" {\n            init_test $client_eviction\n\n            for {set j 0} {$j < 20} {incr j} {\n                set rr [redis_client]\n                lappend clients $rr\n            }\n\n            foreach rr $clients {\n                $rr subscribe bla\n            }\n\n            # Generate client output buffers via PUBLISH until we can observe some effect on \n            # keys / client eviction, or we time out.\n            set bigstr [string repeat x 100000]\n            set t [clock seconds]\n            while {![check_eviction_test $client_eviction] && [expr [clock seconds] - $t] < 20} {\n                if {[catch { r publish bla $bigstr } err]} {\n                    if $::verbose {\n                        puts \"Error publishing: $err\"\n                    }\n                }\n            }\n\n            verify_eviction_test $client_eviction\n        }\n        foreach rr $clients {\n            $rr close\n        }\n    }\n\n}\n\nstart_server {tags {\"maxmemory external:skip\"}} {\n\n    foreach policy {\n        allkeys-random allkeys-lru allkeys-lfu allkeys-lrm volatile-lru volatile-lfu volatile-random volatile-ttl volatile-lrm\n    } {\n        test \"maxmemory - is the memory limit honoured? 
(policy $policy)\" {\n            # make sure to start with a blank instance\n            r flushall\n            # Get the current memory limit and calculate a new limit.\n            # We just add 100k to the current memory size so that it is\n            # fast for us to reach that limit.\n            set used [s used_memory]\n            set limit [expr {$used+100*1024}]\n            r config set maxmemory $limit\n            r config set maxmemory-policy $policy\n            # Now add keys until the limit is almost reached.\n            set numkeys 0\n            while 1 {\n                r setex [randomKey] 10000 x\n                incr numkeys\n                if {[s used_memory]+4096 > $limit} {\n                    assert {$numkeys > 10}\n                    break\n                }\n            }\n            # If we add the same number of keys already added again, we\n            # should still be under the limit.\n            for {set j 0} {$j < $numkeys} {incr j} {\n                r setex [randomKey] 10000 x\n            }\n            assert {[s used_memory] < ($limit+4096)}\n        }\n    }\n\n    foreach policy {\n        allkeys-random allkeys-lru allkeys-lrm volatile-lru volatile-random volatile-ttl volatile-lrm\n    } {\n        test \"maxmemory - only allkeys-* should remove non-volatile keys ($policy)\" {\n            # make sure to start with a blank instance\n            r flushall\n            # Get the current memory limit and calculate a new limit.\n            # We just add 100k to the current memory size so that it is\n            # fast for us to reach that limit.\n            set used [s used_memory]\n            set limit [expr {$used+100*1024}]\n            r config set maxmemory $limit\n            r config set maxmemory-policy $policy\n            # Now add keys until the limit is almost reached.\n            set numkeys 0\n            while 1 {\n                r set [randomKey] x\n                incr numkeys\n                
if {[s used_memory]+4096 > $limit} {\n                    assert {$numkeys > 10}\n                    break\n                }\n            }\n            # If we add the same number of keys already added again and\n            # the policy is allkeys-* we should still be under the limit.\n            # Otherwise we should see an error reported by Redis.\n            set err 0\n            for {set j 0} {$j < $numkeys} {incr j} {\n                if {[catch {r set [randomKey] x} e]} {\n                    if {[string match {*used memory*} $e]} {\n                        set err 1\n                    }\n                }\n            }\n            if {[string match allkeys-* $policy]} {\n                assert {[s used_memory] < ($limit+4096)}\n            } else {\n                assert {$err == 1}\n            }\n        }\n    }\n\n    foreach policy {\n        volatile-lru volatile-lfu volatile-random volatile-ttl volatile-lrm\n    } {\n        test \"maxmemory - policy $policy should only remove volatile keys.\" {\n            # make sure to start with a blank instance\n            r flushall\n            # Get the current memory limit and calculate a new limit.\n            # We just add 100k to the current memory size so that it is\n            # fast for us to reach that limit.\n            set used [s used_memory]\n            set limit [expr {$used+100*1024}]\n            r config set maxmemory $limit\n            r config set maxmemory-policy $policy\n            # Now add keys until the limit is almost reached.\n            set numkeys 0\n            while 1 {\n                # Odd keys are volatile\n                # Even keys are non volatile\n                if {$numkeys % 2} {\n                    r setex \"key:$numkeys\" 10000 x\n                } else {\n                    r set \"key:$numkeys\" x\n                }\n                if {[s used_memory]+4096 > $limit} {\n                    assert {$numkeys > 10}\n                    break\n    
            }\n                incr numkeys\n            }\n            # Now we add the same number of volatile keys already added.\n            # We expect Redis to evict only volatile keys in order to make\n            # space.\n            set err 0\n            for {set j 0} {$j < $numkeys} {incr j} {\n                catch {r setex \"foo:$j\" 10000 x}\n            }\n            # We should still be under the limit.\n            assert {[s used_memory] < ($limit+4096)}\n            # However all our non volatile keys should be here.\n            for {set j 0} {$j < $numkeys} {incr j 2} {\n                assert {[r exists \"key:$j\"]}\n            }\n        }\n    }\n}\n\n# Calculate query buffer memory of slave\nproc slave_query_buffer {srv} {\n    set clients [split [$srv client list] \"\\r\\n\"]\n    set c [lsearch -inline $clients *flags=S*]\n    if {[string length $c] > 0} {\n        assert {[regexp {qbuf=([0-9]+)} $c - qbuf]}\n        assert {[regexp {qbuf-free=([0-9]+)} $c - qbuf_free]}\n        return [expr $qbuf + $qbuf_free]\n    }\n    return 0\n}\n\nproc test_slave_buffers {test_name cmd_count payload_len limit_memory pipeline} {\n    start_server {tags {\"maxmemory external:skip\"}} {\n        start_server {} {\n        set slave_pid [s process_id]\n        test \"$test_name\" {\n            set slave [srv 0 client]\n            set slave_host [srv 0 host]\n            set slave_port [srv 0 port]\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n\n            # Disable slow log for master to avoid memory growth in slow env.\n            $master config set slowlog-log-slower-than -1\n\n            # add 100 keys of 100k (10MB total)\n            for {set j 0} {$j < 100} {incr j} {\n                $master setrange \"key:$j\" 100000 asdf\n            }\n\n            # make sure master doesn't disconnect slave because of timeout\n            $master config set 
repl-timeout 1200 ;# 20 minutes (for valgrind and slow machines)\n            $master config set maxmemory-policy allkeys-random\n            $master config set client-output-buffer-limit \"replica 100000000 100000000 300\"\n            $master config set repl-backlog-size [expr {10*1024}]\n\n            # disable latency tracking\n            $master config set latency-tracking no\n            $slave config set latency-tracking no\n\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n\n            # measure used memory after the slave connected and set maxmemory\n            set orig_used [s -1 used_memory]\n            set orig_client_buf [s -1 mem_clients_normal]\n            set orig_mem_not_counted_for_evict [s -1 mem_not_counted_for_evict]\n            set orig_used_no_repl [expr {$orig_used - $orig_mem_not_counted_for_evict}]\n            set limit [expr {$orig_used - $orig_mem_not_counted_for_evict + 32*1024}]\n\n            if {$limit_memory==1} {\n                $master config set maxmemory $limit\n            }\n\n            # put the slave to sleep\n            set rd_slave [redis_deferring_client]\n            pause_process $slave_pid\n\n            # send some 10mb worth of commands that don't increase the memory usage\n            if {$pipeline == 1} {\n                set rd_master [redis_deferring_client -1]\n                # Send commands in batches and read responses to avoid TCP deadlock.\n                # Without interleaving reads, the client's send buffer fills up when\n                # the server's output buffers are full (because we're not reading),\n                # causing flush to block indefinitely on slow machines.\n                set batch_size 10000\n                for {set k 0} {$k < $cmd_count} {incr k} {\n                    $rd_master setrange 
key:0 0 [string repeat A $payload_len]\n                    if {($k + 1) % $batch_size == 0} {\n                        # Drain responses to prevent TCP buffer deadlock\n                        for {set j 0} {$j < $batch_size} {incr j} {\n                            $rd_master read\n                        }\n                    }\n                }\n                # Read any remaining responses\n                set remaining [expr {$cmd_count % $batch_size}]\n                for {set k 0} {$k < $remaining} {incr k} {\n                    $rd_master read\n                }\n            } else {\n                for {set k 0} {$k < $cmd_count} {incr k} {\n                    $master setrange key:0 0 [string repeat A $payload_len]\n                }\n            }\n\n            set new_used [s -1 used_memory]\n            set slave_buf [s -1 mem_clients_slaves]\n            set client_buf [s -1 mem_clients_normal]\n            set mem_not_counted_for_evict [s -1 mem_not_counted_for_evict]\n            set used_no_repl [expr {$new_used - $mem_not_counted_for_evict - [slave_query_buffer $master]}]\n            # we need to exclude replies buffer and query buffer of replica from used memory.\n            # removing the replica (output) buffers is done so that we are able to measure any other\n            # changes to the used memory and see that they're insignificant (the test's purpose is to check that\n            # the replica buffers are counted correctly, so the used memory growth after deducting them\n            # should be nearly 0).\n            # we remove the query buffers because on slow test platforms, they can accumulate many ACKs.\n            set delta [expr {($used_no_repl - $client_buf) - ($orig_used_no_repl - $orig_client_buf)}]\n\n            assert {[$master dbsize] == 100}\n            assert {$slave_buf > 2*1024*1024} ;# some of the data may have been pushed to the OS buffers\n            set delta_max [expr {$cmd_count / 2}] ;# 1 byte 
unaccounted for, with 1M commands will consume some 1MB\n            assert {$delta < $delta_max && $delta > -$delta_max}\n\n            $master client kill type slave\n            set info_str [$master info memory]\n            set killed_used [getInfoProperty $info_str used_memory]\n            set killed_mem_not_counted_for_evict [getInfoProperty $info_str mem_not_counted_for_evict]\n            set killed_slave_buf [s -1 mem_clients_slaves]\n            # we need to exclude replies buffer and query buffer of slave from used memory after kill slave\n            set killed_used_no_repl [expr {$killed_used - $killed_mem_not_counted_for_evict - [slave_query_buffer $master]}]\n            set delta_no_repl [expr {$killed_used_no_repl - $used_no_repl}]\n            assert {[$master dbsize] == 100}\n            assert {$killed_slave_buf == 0}\n            assert {$delta_no_repl > -$delta_max && $delta_no_repl < $delta_max}\n\n        }\n        # unfreeze slave process (after the 'test' succeeded or failed, but before we attempt to terminate the server)\n        resume_process $slave_pid\n        }\n    }\n}\n\n# test that slave buffers are counted correctly\n# we wanna use many small commands, and we don't wanna wait long\n# so we need to use a pipeline (redis_deferring_client)\n# that may cause query buffer to fill and induce eviction, so we disable it\ntest_slave_buffers {slave buffers are counted correctly} 1000000 10 0 1\n\n# test that slave buffers don't induce eviction\n# test again with fewer (and bigger) commands without pipeline, but with eviction\ntest_slave_buffers \"replica buffers don't induce eviction\" 100000 100 1 0\n\nstart_server {tags {\"maxmemory external:skip\"}} {\n    test {Don't rehash if used memory exceeds maxmemory after rehash} {\n        r config set latency-tracking no\n        r config set maxmemory 0\n        r config set maxmemory-policy allkeys-random\n\n        # Next rehash size is 8192, that will eat 64k memory\n        populate 4095 
\"\" 1\n\n        set used [s used_memory]\n        set limit [expr {$used + 10*1024}]\n        r config set maxmemory $limit\n\n        # Adding a key to meet the 1:1 radio.\n        r set k0 v0\n        # The dict has reached 4096, it can be resized in tryResizeHashTables in cron,\n        # or we add a key to let it check whether it can be resized.\n        r set k1 v1\n        # Next writing command will trigger evicting some keys if last\n        # command trigger DB dict rehash\n        r set k2 v2\n        # There must be 4098 keys because redis doesn't evict keys.\n        r dbsize\n    } {4098}\n}\n\nstart_server {tags {\"maxmemory external:skip\"}} {\n    test {client tracking don't cause eviction feedback loop} {\n        r config set latency-tracking no\n        r config set maxmemory 0\n        r config set maxmemory-policy allkeys-lru\n        r config set maxmemory-eviction-tenacity 100\n\n        # check if enabling multithreaded IO\n        set multithreaded 0\n        if {[r config get io-threads] > 1} {\n            set multithreaded 1\n        }\n\n        # 10 clients listening on tracking messages\n        set clients {}\n        for {set j 0} {$j < 10} {incr j} {\n            lappend clients [redis_deferring_client]\n        }\n        foreach rd $clients {\n            $rd HELLO 3\n            $rd read ; # Consume the HELLO reply\n            $rd CLIENT TRACKING on\n            $rd read ; # Consume the CLIENT reply\n        }\n\n        # populate 300 keys, with long key name and short value\n        for {set j 0} {$j < 300} {incr j} {\n            set key $j[string repeat x 1000]\n            r set $key x\n\n            # for each key, enable caching for this key\n            foreach rd $clients {\n                $rd get $key\n                $rd read\n            }\n        }\n\n        # we need to wait one second for the client querybuf excess memory to be\n        # trimmed by cron, otherwise the INFO used_memory and CONFIG maxmemory\n 
       # below (on slow machines) won't be \"atomic\" and won't trigger eviction.\n        after 1100\n\n        # set the memory limit which will cause a few keys to be evicted\n        # we need to make sure to evict keynames of a total size of more than\n        # 16kb (PROTO_REPLY_CHUNK_BYTES); only after that do the\n        # invalidation messages have a chance to trigger further eviction.\n        set used [s used_memory]\n        set limit [expr {$used - 40000}]\n        r config set maxmemory $limit\n\n        # If multithreaded, we need to let IO threads have a chance to write the output\n        # buffers, to avoid the next commands causing eviction. After eviction is performed,\n        # the next command becomes ready immediately in IO threads, and now we enqueue\n        # the client to be processed in the main thread's beforeSleep without notification.\n        # However, invalidation messages generated by eviction may not have been fully\n        # delivered by that time. As a result, executing the command in beforeSleep of\n        # the event loop (running eviction) can cause additional keys to be evicted.\n        if $multithreaded { after 200 }\n\n        # make sure some eviction happened\n        set evicted [s evicted_keys]\n        if {$::verbose} { puts \"evicted: $evicted\" }\n\n        # make sure we didn't drain the database\n        assert_range [r dbsize] 200 300\n\n        assert_range $evicted 10 50\n        foreach rd $clients {\n            $rd read ;# make sure we have some invalidation message waiting\n            $rd close\n        }\n\n        # eviction continues (known problem described in #8069)\n        # for now this test only makes sure the eviction loop itself doesn't\n        # have a feedback loop\n        set evicted [s evicted_keys]\n        if {$::verbose} { puts \"evicted: $evicted\" }\n    }\n}\n\nstart_server {tags {\"maxmemory\" \"external:skip\"}} {\n    test {propagation with eviction} {\n        set repl 
[attach_to_replication_stream]\n\n        r set asdf1 1\n        r set asdf2 2\n        r set asdf3 3\n\n        r config set maxmemory-policy allkeys-lru\n        r config set maxmemory 1\n\n        wait_for_condition 5000 10 {\n            [r dbsize] eq 0\n        } else {\n            fail \"Not all keys have been evicted\"\n        }\n\n        r config set maxmemory 0\n        r config set maxmemory-policy noeviction\n\n        r set asdf4 4\n\n        assert_replication_stream $repl {\n            {select *}\n            {set asdf1 1}\n            {set asdf2 2}\n            {set asdf3 3}\n            {del asdf*}\n            {del asdf*}\n            {del asdf*}\n            {set asdf4 4}\n        }\n        close_replication_stream $repl\n\n        r config set maxmemory 0\n        r config set maxmemory-policy noeviction\n    }\n}\n\nstart_server {tags {\"maxmemory\" \"external:skip\"}} {\n    test {propagation with eviction in MULTI} {\n        set repl [attach_to_replication_stream]\n\n        r config set maxmemory-policy allkeys-lru\n\n        r multi\n        r incr x\n        r config set maxmemory 1\n        r incr x\n        assert_equal [r exec] {1 OK 2}\n\n        wait_for_condition 5000 10 {\n            [r dbsize] eq 0\n        } else {\n            fail \"Not all keys have been evicted\"\n        }\n\n        assert_replication_stream $repl {\n            {multi}\n            {select *}\n            {incr x}\n            {incr x}\n            {exec}\n            {del x}\n        }\n        close_replication_stream $repl\n\n        r config set maxmemory 0\n        r config set maxmemory-policy noeviction\n    }\n}\n\nstart_server {tags {\"maxmemory\" \"external:skip\"}} {\n    test {lru/lfu value of the key just added} {\n        r config set maxmemory-policy allkeys-lru\n        r set foo a\n        assert {[r object idletime foo] <= 2}\n        r del foo\n        r set foo 1\n        r get foo\n        assert {[r object idletime foo] <= 2}\n\n 
       r config set maxmemory-policy allkeys-lfu\n        r del foo \n        r set foo a\n        assert {[r object freq foo] == 5}\n    }\n}\n\n# LRM eviction policy tests\nstart_server {tags {\"maxmemory\" \"external:skip\"}} {\n    test {LRM: Basic write updates idle time} {\n        r flushdb\n        r config set maxmemory-policy allkeys-lrm\n\n        r set foo a\n        after 2000\n\n        # Read the key should NOT update LRM\n        r get foo\n        assert_morethan_equal [r object idletime foo] 1\n\n        # LRM should be updated (idletime should be smaller)\n        r set foo b\n        assert_lessthan_equal [r object idletime foo] 1\n    } {} {slow}\n\n    test {LRM: RENAME updates destination key LRM} {\n        r flushdb\n        r set src value\n        after 2000\n        r rename src dst\n        assert_lessthan_equal [r object idletime dst] 1\n    } {} {slow}\n\n    test {LRM: XREADGROUP updates stream LRM} {\n        r flushdb\n        r xadd mystream * field value\n        r xgroup create mystream mygroup 0\n        after 2000\n        r xreadgroup GROUP mygroup consumer1 STREAMS mystream >\n\n        # LRM should be updated (idletime should be smaller)\n        assert_lessthan_equal [r object idletime mystream] 1\n    } {} {slow}\n\n    test {LRM: Keys with only read operations should be removed first} {\n        r flushdb\n        r config set maxmemory 0\n        r config set maxmemory-policy allkeys-lrm\n        r config set maxmemory-samples 64 ;# Ensure eviction sampling can pick all keys\n\n        # Create keys and populate them\n        # We'll create two groups of keys:\n        # - read-only keys: will only be read after creation\n        # - write keys: will be continuously written to\n        for {set j 0} {$j < 25} {incr j} {\n            r set \"read:$j\" [string repeat x 20000]\n            r set \"write:$j\" [string repeat x 20000]\n        }\n\n        after 1000\n\n        # Perform read and write operations on keys\n    
    for {set j 0} {$j < 25} {incr j} {\n            r get \"read:$j\"\n            r set \"write:$j\" [string repeat y 20000]\n        }\n\n        # Set memory limit to force eviction\n        set used [s used_memory]\n        set limit [expr {$used - 200*1024}]\n        r config set maxmemory $limit\n\n        # Add more keys to trigger eviction\n        for {set j 0} {$j < 10} {incr j} {\n            r set \"trigger:$j\" [string repeat z 20000]\n        }\n\n        # Count how many keys from each group survived\n        set read_survived 0\n        set write_survived 0\n        for {set j 0} {$j < 25} {incr j} {\n            if {[r exists \"read:$j\"]} {\n                incr read_survived\n            }\n            if {[r exists \"write:$j\"]} {\n                incr write_survived\n            }\n        }\n\n        # If read-only keys haven't been fully evicted, write keys must not be evicted at all.\n        if {$read_survived > 0} {\n            assert {$write_survived == 25}\n        } else {\n            assert {$write_survived > $read_survived}\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/memefficiency.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nproc test_memory_efficiency {range} {\n    r flushall\n    set rd [redis_deferring_client]\n    set base_mem [s used_memory]\n    set written 0\n    for {set j 0} {$j < 10000} {incr j} {\n        set key key:$j\n        set val [string repeat A [expr {int(rand()*$range)}]]\n        $rd set $key $val\n        incr written [string length $key]\n        incr written [string length $val]\n        incr written 2 ;# A separator is the minimum to store key-value data.\n        \n        if {($j + 1) % 500 == 0} {\n            for {set i 0} {$i < 500} {incr i} {\n                $rd read ; # Discard replies\n            }\n        }\n    }\n\n    set current_mem [s used_memory]\n    set used [expr {$current_mem-$base_mem}]\n    set efficiency [expr {double($written)/$used}]\n    return $efficiency\n}\n\nstart_server {tags {\"memefficiency external:skip\"}} {\n    foreach {size_range expected_min_efficiency} {\n        32    0.15\n        64    0.25\n        128   0.35\n        1024  0.75\n        16384 0.82\n    } {\n        test \"Memory efficiency with values in range $size_range\" {\n            set efficiency [test_memory_efficiency $size_range]\n            assert {$efficiency >= $expected_min_efficiency}\n        }\n    }\n}\n\nrun_solo {defrag} {\n    proc wait_for_defrag_stop {maxtries delay {expect_frag 0}} {\n        wait_for_condition $maxtries $delay {\n            [s active_defrag_running] eq 0 && ($expect_frag == 0 || [s allocator_frag_ratio] <= $expect_frag)\n        } else {\n            
after 120 ;# serverCron only updates the info once in 100ms\n            puts [r info memory]\n            puts [r info stats]\n            puts [r memory malloc-stats]\n            if {$expect_frag != 0} {\n                fail \"defrag didn't stop or failed to achieve expected frag ratio ([s allocator_frag_ratio] > $expect_frag)\"\n            } else {\n                fail \"defrag didn't stop.\"\n            }\n        }\n    }\n\n    proc discard_replies_every {rd count frequency discard_num} {\n        if {$count % $frequency == 0} {\n            for {set k 0} {$k < $discard_num} {incr k} {\n                $rd read ; # Discard replies\n            }\n        }\n    }\n\n    proc test_active_defrag {type} {\n\n    # note: Disabling lookahead because it changes the number and order of allocations which interferes with defrag and causes tests to fail\n    r config set lookahead 1\n\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    if {[string match {*jemalloc*} [s mem_allocator]] && [r debug mallctl arenas.page] <= 8192} {\n        test \"Active defrag main dictionary: $type\" {\n            r config set hz 100\n            r config set activedefrag no\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 2mb\n            r config set maxmemory 100mb\n            r config set maxmemory-policy allkeys-lru\n\n            populate 700000 asdf1 150\n            populate 100 asdf1 150 0 false 1000\n            populate 170000 asdf2 300\n            populate 100 asdf2 300 0 false 1000\n\n            assert {[scan [regexp -inline {expires\\=([\\d]*)} [r info keyspace]] expires=%d] > 0}\n            after 120 ;# serverCron only updates the info once in 100ms\n            set frag [s allocator_frag_ratio]\n            if {$::verbose} {\n                puts 
\"frag $frag\"\n            }\n            assert {$frag >= 1.4}\n\n            r config set latency-monitor-threshold 5\n            r latency reset\n            r config set maxmemory 110mb ;# prevent further eviction (not to fail the digest test)\n            set digest [debug_digest]\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n                # Wait for the active defrag to start working (decision once a\n                # second).\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # This test usually runs for a while, during this interval, we test the range.\n                assert_range [s active_defrag_running] 65 75\n                r config set active-defrag-cycle-min 1\n                r config set active-defrag-cycle-max 1\n                after 120 ;# serverCron only updates the info once in 100ms\n                assert_range [s active_defrag_running] 1 1\n                r config set active-defrag-cycle-min 65\n                r config set active-defrag-cycle-max 75\n\n                # Wait for the active defrag to stop working.\n                wait_for_defrag_stop 2000 100 1.1\n\n                # Test the fragmentation is lower.\n                after 120 ;# serverCron only updates the info once in 100ms\n                set frag [s allocator_frag_ratio]\n                set max_latency 0\n                foreach event [r latency latest] {\n                    lassign $event eventname time latency max\n                    if {$eventname == \"active-defrag-cycle\"} {\n                        set 
max_latency $max\n                    }\n                }\n                if {$::verbose} {\n                    puts \"frag $frag\"\n                    set misses [s active_defrag_misses]\n                    set hits [s active_defrag_hits]\n                    puts \"hits: $hits\"\n                    puts \"misses: $misses\"\n                    puts \"max latency $max_latency\"\n                    puts [r latency latest]\n                    puts [r latency history active-defrag-cycle]\n                }\n                # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,\n                # we expect max latency not to be much higher than 7.5ms, but the threshold is set higher due to rare slowness\n                if {!$::no_latency} {\n                    assert {$max_latency <= 30}\n                }\n            }\n            # verify the data isn't corrupted or changed\n            set newdigest [debug_digest]\n            assert {$digest eq $newdigest}\n            r save ;# saving an rdb iterates over all the data / pointers\n\n            # if defrag is supported, test AOF loading too\n            if {[r config get activedefrag] eq \"activedefrag yes\" && $type eq \"standalone\"} {\n            test \"Active defrag - AOF loading\" {\n                # reset stats and load the AOF file\n                r config resetstat\n                r config set key-load-delay -25 ;# sleep on average 1/25 usec\n                # Note: This test is checking if defrag is working DURING AOF loading (while\n                #       timers are not active).  So we don't give any extra time, and we deactivate\n                #       defrag immediately after the AOF loading is complete.  During loading,\n                #       defrag will get invoked less often, causing starvation prevention.  
We\n                #       should expect longer latency measurements.\n                r debug loadaof\n                r config set activedefrag no\n                # measure hits and misses right after aof loading\n                set misses [s active_defrag_misses]\n                set hits [s active_defrag_hits]\n\n                after 120 ;# serverCron only updates the info once in 100ms\n                set frag [s allocator_frag_ratio]\n                set max_latency 0\n                foreach event [r latency latest] {\n                    lassign $event eventname time latency max\n                    if {$eventname == \"while-blocked-cron\"} {\n                        set max_latency $max\n                    }\n                }\n                if {$::verbose} {\n                    puts \"AOF loading:\"\n                    puts \"frag $frag\"\n                    puts \"hits: $hits\"\n                    puts \"misses: $misses\"\n                    puts \"max latency $max_latency\"\n                    puts [r latency latest]\n                    puts [r latency history \"while-blocked-cron\"]\n                }\n                # make sure we had defrag hits during AOF loading\n                assert {$hits > 100000}\n                # make sure the defragger did enough work to keep the fragmentation low during loading.\n                # we cannot check that it went all the way down, since we don't wait for full defrag cycle to complete.\n                assert {$frag < 1.4}\n                # since the AOF contains simple (fast) SET commands (and the cron during loading runs every 1024 commands),\n                # it'll still not block the loading for long periods of time.\n                if {!$::no_latency} {\n                    assert {$max_latency <= 40}\n                }\n            }\n            } ;# Active defrag - AOF loading\n        }\n        r config set appendonly no\n        r config set key-load-delay 0\n\n        test 
\"Active defrag eval scripts: $type\" {\n            r flushdb\n            r script flush sync\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 1500kb\n            r config set maxmemory 0\n\n            set n 50000\n\n            # Populate memory with interleaving script-key pattern of same size\n            set dummy_script \"--[string repeat x 400]\\nreturn \"\n            set rd [redis_deferring_client]\n            # Send commands in batches and read responses to avoid TCP deadlock.\n            # Without interleaving reads, TCP congestion control can throttle\n            # the connection when buffers fill, causing the test to hang.\n            set batch_size 1000\n            for {set j 0} {$j < $n} {incr j} {\n                set val \"$dummy_script[format \"%06d\" $j]\"\n                $rd script load $val\n                $rd set k$j $val\n                if {($j + 1) % $batch_size == 0} {\n                    for {set i 0} {$i < $batch_size} {incr i} {\n                        $rd read ; # Discard script load replies\n                        $rd read ; # Discard set replies\n                    }\n                }\n            }\n            # Read remaining responses\n            set remaining [expr {$n % $batch_size}]\n            for {set j 0} {$j < $remaining} {incr j} {\n                $rd read ; # Discard script load replies\n                $rd read ; # Discard set replies\n            }\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s 
allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_lessthan [s allocator_frag_ratio] 1.05\n\n            # Delete all the keys to create fragmentation\n            # Use same batching pattern to avoid TCP deadlock\n            for {set j 0} {$j < $n} {incr j} {\n                $rd del k$j\n                if {($j + 1) % $batch_size == 0} {\n                    for {set i 0} {$i < $batch_size} {incr i} {\n                        $rd read\n                    }\n                }\n            }\n            set remaining [expr {$n % $batch_size}]\n            for {set j 0} {$j < $remaining} {incr j} { $rd read }\n            if {$type eq \"cluster\"} {\n                $rd config resetstat\n                $rd read ; # Discard config resetstat reply\n            }\n            $rd close\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_morethan [s allocator_frag_ratio] 1.4\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n            \n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                
wait_for_defrag_stop 500 100 1.05\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s allocator_active]\"\n                    puts \"frag [s allocator_frag_ratio]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                }\n            }\n            # Flush all scripts to make sure we don't crash after defragging them\n            r script flush sync\n        } {OK}\n\n        test \"Active defrag big keys: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-max-scan-fields 1000\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 2mb\n            r config set maxmemory 0\n            r config set list-max-ziplist-size 5 ;# list of 10k items will have 2000 quicklist nodes\n            r config set stream-node-max-entries 5\n            r config set hash-max-listpack-entries 10\n            r hmset hash_lp h1 v1 h2 v2 h3 v3\n            assert_encoding listpack hash_lp\n            r hmset hash_ht h1 v1 h2 v2 h3 v3 h4 v4 h5 v5 h6 v6 h7 v7 h8 v8 h9 v9 h10 v10 h11 v11\n            assert_encoding hashtable hash_ht\n            r lpush list a b c d\n            r zadd zset 0 a 1 b 2 c 3 d\n            r sadd set a b c d\n            r xadd stream * item 1 value a\n            r xadd stream * item 2 value b\n            r xgroup create stream mygroup 0\n            r xreadgroup GROUP mygroup Alice COUNT 1 STREAMS stream >\n\n            # create big keys with 10k items\n            # Use batching to avoid TCP 
deadlock\n            set rd [redis_deferring_client]\n            set batch_size 100\n            for {set j 0} {$j < 10000} {incr j} {\n                $rd hset bighash $j [concat \"asdfasdfasdf\" $j]\n                $rd lpush biglist [concat \"asdfasdfasdf\" $j]\n                $rd zadd bigzset $j [concat \"asdfasdfasdf\" $j]\n                $rd sadd bigset [concat \"asdfasdfasdf\" $j]\n                $rd xadd bigstream * item 1 value a\n                if {($j + 1) % $batch_size == 0} {\n                    for {set i 0} {$i < [expr {$batch_size * 5}]} {incr i} {\n                        $rd read\n                    }\n                }\n            }\n            # Read remaining replies\n            set remaining [expr {(10000 % $batch_size) * 5}]\n            for {set j 0} {$j < $remaining} {incr j} {\n                $rd read\n            }\n\n            # create some small items (effective in cluster-enabled)\n            r set \"{bighash}smallitem\" val\n            r set \"{biglist}smallitem\" val\n            r set \"{bigzset}smallitem\" val\n            r set \"{bigset}smallitem\" val\n            r set \"{bigstream}smallitem\" val\n\n\n            set expected_frag 1.49\n            if {$::accurate} {\n                # scale the hash to 1m fields in order to have a measurable latency\n                set count 0\n                for {set j 10000} {$j < 1000000} {incr j} {\n                    $rd hset bighash $j [concat \"asdfasdfasdf\" $j]\n\n                    incr count\n                    discard_replies_every $rd $count 10000 10000\n                }\n                # creating that big hash increased used_memory, so the relative frag goes down\n                set expected_frag 1.3\n            }\n\n            # add a mass of string keys\n            set count 0\n            for {set j 0} {$j < 500000} {incr j} {\n                $rd setrange $j 150 a\n\n                incr count\n                discard_replies_every $rd $count 
10000 10000\n            }\n            assert_equal [r dbsize] 500016\n\n            # create some fragmentation\n            set count 0\n            for {set j 0} {$j < 500000} {incr j 2} {\n                $rd del $j\n\n                incr count\n                discard_replies_every $rd $count 10000 10000\n            }\n            assert_equal [r dbsize] 250016\n\n            # start defrag\n            after 120 ;# serverCron only updates the info once in 100ms\n            set frag [s allocator_frag_ratio]\n            if {$::verbose} {\n                puts \"frag $frag\"\n            }\n            assert {$frag >= $expected_frag}\n            r config set latency-monitor-threshold 5\n            r latency reset\n\n            set digest [debug_digest]\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                wait_for_defrag_stop 500 100 1.1\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                set frag [s allocator_frag_ratio]\n                set max_latency 0\n                foreach event [r latency latest] {\n                    lassign $event eventname time latency max\n                    if {$eventname == \"active-defrag-cycle\"} {\n                        set max_latency $max\n                    }\n                }\n   
             if {$::verbose} {\n                    puts \"frag $frag\"\n                    set misses [s active_defrag_misses]\n                    set hits [s active_defrag_hits]\n                    puts \"hits: $hits\"\n                    puts \"misses: $misses\"\n                    puts \"max latency $max_latency\"\n                    puts [r latency latest]\n                    puts [r latency history active-defrag-cycle]\n                }\n                # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,\n                # we expect max latency not to be much higher than 7.5ms, but the threshold is set higher due to rare slowness\n                if {!$::no_latency} {\n                    assert {$max_latency <= 30}\n                }\n            }\n            # verify the data isn't corrupted or changed\n            set newdigest [debug_digest]\n            assert {$digest eq $newdigest}\n            r save ;# saving an rdb iterates over all the data / pointers\n        } {OK}\n\n        test \"Active defrag pubsub: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 1500kb\n            r config set maxmemory 0\n\n            # Populate memory with interleaving pubsub-key pattern of same size\n            set n 50000\n            set dummy_channel \"[string repeat x 400]\"\n            set rd [redis_deferring_client]\n            set rd_pubsub [redis_deferring_client]\n            for {set j 0} {$j < $n} {incr j} {\n                set channel_name \"$dummy_channel[format \"%06d\" $j]\"\n                $rd_pubsub subscribe $channel_name\n                $rd_pubsub read ; # Discard 
subscribe replies\n                $rd_pubsub ssubscribe $channel_name\n                $rd_pubsub read ; # Discard ssubscribe replies\n                # Pub/Sub clients are handled in the main thread, so their memory is\n                # allocated there. Using the SETBIT command avoids the main thread\n                # referencing argv from IO threads.\n                $rd setbit k$j [expr {[string length $channel_name] * 8}] 1\n                $rd read ; # Discard set replies\n            }\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_lessthan [s allocator_frag_ratio] 1.05\n\n            # Delete all the keys to create fragmentation\n            # Use batching to avoid TCP deadlock\n            set batch_size 1000\n            for {set j 0} {$j < $n} {incr j} {\n                $rd del k$j\n                if {($j + 1) % $batch_size == 0} {\n                    for {set i 0} {$i < $batch_size} {incr i} {\n                        $rd read\n                    }\n                }\n            }\n            set remaining [expr {$n % $batch_size}]\n            for {set j 0} {$j < $remaining} {incr j} { $rd read }\n            if {$type eq \"cluster\"} {\n                $rd config resetstat\n                $rd read ; # Discard config resetstat reply\n            }\n            $rd close\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            
assert_morethan [s allocator_frag_ratio] 1.35\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n            \n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                wait_for_defrag_stop 500 100 1.05\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s allocator_active]\"\n                    puts \"frag [s allocator_frag_ratio]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                }\n            }\n\n            # Publish some messages to all the pubsub clients to make sure that\n            # we didn't break the data structure.\n            for {set j 0} {$j < $n} {incr j} {\n                set channel \"$dummy_channel[format \"%06d\" $j]\"\n                r publish $channel \"hello\"\n                assert_equal \"message $channel hello\" [$rd_pubsub read]\n                $rd_pubsub unsubscribe $channel\n                $rd_pubsub read\n                r spublish $channel \"hello\"\n                assert_equal \"smessage $channel hello\" [$rd_pubsub read]\n                $rd_pubsub sunsubscribe $channel\n                $rd_pubsub read\n            }\n            $rd_pubsub close\n        }\n\n        test \"Active defrag IDMP 
streams: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 1500kb\n            r config set maxmemory 0\n\n            set n 50000\n\n            # Create the stream first and configure IDMP limits\n            r xadd idmpstream * dummy value\n            r xcfgset idmpstream idmp-maxsize 10000 ;# Allow 10000 entries per producer\n\n            # Populate memory with interleaving IDMP stream-key pattern of same size\n            set dummy_iid \"[string repeat x 400]\"\n            set rd [redis_deferring_client]\n\n            # Use batching to avoid TCP deadlock\n            set batch_size 1000\n            for {set j 0} {$j < $n} {incr j} {\n                set producer_id \"producer[expr {$j % 10}]\"\n                set iid \"$dummy_iid[format \"%06d\" $j]\"\n                $rd xadd idmpstream IDMP $producer_id $iid * field value\n                $rd set k$j $iid\n\n                if {($j + 1) % $batch_size == 0} {\n                    for {set i 0} {$i < [expr {$batch_size * 2}]} {incr i} {\n                        $rd read\n                    }\n                }\n            }\n            # Read remaining responses\n            set remaining [expr {($n % $batch_size) * 2}]\n            for {set j 0} {$j < $remaining} {incr j} {\n                $rd read\n            }\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n     
       assert_lessthan [s allocator_frag_ratio] 1.05\n\n            # Verify IDMP structures were created\n            set idmp_info [r xinfo stream idmpstream full]\n            set num_producers [dict get $idmp_info pids-tracked]\n            set num_entries [dict get $idmp_info iids-tracked]\n            assert {$num_producers == 10}\n            assert {$num_entries == $n}\n\n            # Delete all the keys to create fragmentation\n            for {set j 0} {$j < $n} {incr j} { $rd del k$j }\n            for {set j 0} {$j < $n} {incr j} { $rd read } ; # Discard del replies\n            if {$type eq \"cluster\"} {\n                $rd config resetstat\n                $rd read ; # Discard config resetstat reply\n            }\n            $rd close\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_morethan [s allocator_frag_ratio] 1.35\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n            \n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                wait_for_defrag_stop 500 100 1.1\n\n                # test the fragmentation is lower\n                after 120 
;# serverCron only updates the info once in 100ms\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s allocator_active]\"\n                    puts \"frag [s allocator_frag_ratio]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                }\n\n                # Verify IDMP structures are intact after defrag\n                set idmp_info_after [r xinfo stream idmpstream full]\n                set num_producers_after [dict get $idmp_info_after pids-tracked]\n                set num_entries_after [dict get $idmp_info_after iids-tracked]\n                assert {$num_producers_after == 10}\n                assert {$num_entries_after == $n}\n\n                # Verify IDMP deduplication still works after defrag\n                set original_length [r xlen idmpstream]\n                r xadd idmpstream IDMP producer0 \"${dummy_iid}000000\" * field newvalue\n                set new_length [r xlen idmpstream]\n                assert {$new_length == $original_length}\n            }\n        }\n\n        foreach {eb_container fields n} {eblist 16 3000 ebrax 30 1600 large_ebrax 500 100} {\n        test \"Active Defrag HFE with $eb_container: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-threshold-lower 7\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 1000kb\n            r config set maxmemory 0\n            r config set hash-max-listpack-value 512\n            r config set hash-max-listpack-entries 10\n\n            # Populate memory with interleaving hash field of same size\n            # Interleave reads to avoid TCP deadlock\n            set dummy_field \"[string repeat x 400]\"\n         
   set rd [redis_deferring_client]\n            for {set i 0} {$i < $n} {incr i} {\n                for {set j 0} {$j < $fields} {incr j} {\n                    $rd hset h$i $dummy_field$j v\n                    $rd hexpire h$i 9999999 FIELDS 1 $dummy_field$j\n                    $rd hset k$i $dummy_field$j v\n                    $rd hexpire k$i 9999999 FIELDS 1 $dummy_field$j\n                }\n                $rd expire h$i 9999999 ;# Ensure expire is updated after kvobj reallocation\n                # Read replies for this iteration to avoid TCP deadlock\n                for {set j 0} {$j < $fields} {incr j} {\n                    $rd read ; # Discard hset replies\n                    $rd read ; # Discard hexpire replies\n                    $rd read ; # Discard hset replies\n                    $rd read ; # Discard hexpire replies\n                }\n                $rd read ; # Discard expire replies\n            }\n\n            # Coverage for listpackex.\n            r hset h_lpex $dummy_field v\n            r hexpire h_lpex 9999999 FIELDS 1 $dummy_field\n            assert_encoding listpackex h_lpex\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_lessthan [s allocator_frag_ratio] 1.07\n\n            # Delete all the keys to create fragmentation\n            for {set i 0} {$i < $n} {incr i} {\n                r del k$i\n            }\n            $rd close\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts 
\"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_morethan [s allocator_frag_ratio] 1.35\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n            \n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                wait_for_defrag_stop 500 100 1.07\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s allocator_active]\"\n                    puts \"frag [s allocator_frag_ratio]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                }\n            }\n        }\n        } ;# end of foreach\n\n        test \"Active defrag for argv retained by the main thread from IO thread: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            set io_threads [lindex [r config get io-threads] 1]\n            if {$io_threads == 1} {\n                r config set active-defrag-threshold-lower 5\n            } else {\n                r config set active-defrag-threshold-lower 10\n            }\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r 
config set active-defrag-ignore-bytes 1000kb\n            r config set maxmemory 0\n\n            # Create some clients so that they are distributed among different io threads.\n            set clients {}\n            for {set i 0} {$i < 8} {incr i} {\n                lappend clients [redis_client]\n            }\n\n            # Populate memory with an interleaving key pattern of the same size\n            set dummy \"[string repeat x 400]\"\n            set n 10000\n            for {set i 0} {$i < [llength $clients]} {incr i} {\n                set rr [lindex $clients $i]\n                for {set j 0} {$j < $n} {incr j} {\n                    $rr set \"k$i-$j\" $dummy\n                }\n            }\n\n            # If io-threads is enabled, verify that the memory allocation is not from the main thread.\n            if {$io_threads != 1} {\n                # At least make sure that bin 448 is created in the main thread's arena.\n                r set k dummy\n                r del k\n\n                # We created 10000 string keys of 400 bytes each for each client, so when the memory\n                # allocation for the 448 bin in the main thread is significantly smaller than this,\n                # we can conclude that the memory allocation is not coming from it.\n                set malloc_stats [r memory malloc-stats]\n                if {[regexp {(?s)arenas\\[0\\]:.*?448[ ]+[\\d]+[ ]+([\\d]+)[ ]} $malloc_stats - allocated]} {\n                    # Ensure the allocation for bin 448 in the main thread's arena\n                    # is far less than 4375k (10000 * 448 bytes).\n                    assert_lessthan $allocated 200000\n                } else {\n                    fail \"Failed to get the main thread's malloc stats.\"\n                }\n            }\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s 
allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_lessthan [s allocator_frag_ratio] 1.05\n\n            # Delete keys with even indices to create fragmentation.\n            for {set i 0} {$i < [llength $clients]} {incr i} {\n                set rd [lindex $clients $i]\n                for {set j 0} {$j < $n} {incr j 2} {\n                    $rd del \"k$i-$j\"\n                }\n            }\n            for {set i 0} {$i < [llength $clients]} {incr i} {\n                [lindex $clients $i] close\n            }\n            if {$type eq \"cluster\"} {\n                r config resetstat\n            }\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_morethan [s allocator_frag_ratio] 1.35\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n            \n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                if {$io_threads == 1} {\n                    wait_for_defrag_stop 500 100 1.05\n                } else {\n                    # TODO: When 
multithreading is enabled, argv may be created in the io thread\n                    # and kept in the main thread, which can cause fragmentation to become worse.\n                    wait_for_defrag_stop 500 100 1.1\n                }\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s allocator_active]\"\n                    puts \"frag [s allocator_frag_ratio]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                }\n            }\n        }\n\n        if {$type eq \"standalone\"} { ;# skip in cluster mode\n        test \"Active defrag big list: $type\" {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            wait_for_defrag_stop 500 100\n            r config resetstat\n            r config set active-defrag-max-scan-fields 1000\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 65\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 2mb\n            r config set maxmemory 0\n            r config set list-max-ziplist-size 1 ;# list of 100k items will have 100k quicklist nodes\n\n            # create big keys with 10k items\n            set rd [redis_deferring_client]\n\n            set expected_frag 1.5\n            # add a mass of list nodes to two lists (allocations are interlaced)\n            set val [string repeat A 500] ;# 1 item of 500 bytes puts us in the 640 bytes bin, which has 32 regs, so high potential for fragmentation\n            set elements 100000\n            set count 0\n            for {set j 0} {$j < $elements} {incr j} {\n                $rd lpush biglist1 $val\n                $rd lpush biglist2 $val\n\n                incr count\n              
  discard_replies_every $rd $count 1000 2000\n            }\n\n            # create some fragmentation\n            r del biglist2\n\n            # start defrag\n            after 120 ;# serverCron only updates the info once in 100ms\n            set frag [s allocator_frag_ratio]\n            if {$::verbose} {\n                puts \"frag $frag\"\n            }\n\n            assert {$frag >= $expected_frag}\n            r config set latency-monitor-threshold 5\n            r latency reset\n\n            set digest [debug_digest]\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n\n                # wait for the active defrag to stop working\n                wait_for_defrag_stop 500 100 1.1\n\n                # test the fragmentation is lower\n                after 120 ;# serverCron only updates the info once in 100ms\n                set misses [s active_defrag_misses]\n                set hits [s active_defrag_hits]\n                set frag [s allocator_frag_ratio]\n                set max_latency 0\n                foreach event [r latency latest] {\n                    lassign $event eventname time latency max\n                    if {$eventname == \"active-defrag-cycle\"} {\n                        set max_latency $max\n                    }\n                }\n                if {$::verbose} {\n                    puts \"used [s allocator_allocated]\"\n                    puts \"rss [s 
allocator_active]\"\n                    puts \"frag_bytes [s allocator_frag_bytes]\"\n                    puts \"frag $frag\"\n                    puts \"misses: $misses\"\n                    puts \"hits: $hits\"\n                    puts \"max latency $max_latency\"\n                    puts [r latency latest]\n                    puts [r latency history active-defrag-cycle]\n                    puts [r memory malloc-stats]\n                }\n                # due to high fragmentation, 100hz, and active-defrag-cycle-max set to 75,\n                # we expect the max latency to be not much higher than 7.5ms, but due to rare slowness the threshold is set higher\n                if {!$::no_latency} {\n                    assert {$max_latency <= 30}\n                }\n\n                # in extreme cases of stagnation, we see over 5m misses before the test aborts with \"defrag didn't stop\",\n                # in normal cases we only see 100k misses out of 100k elements\n                assert {$misses < $elements * 2}\n            }\n            # verify the data isn't corrupted or changed\n            set newdigest [debug_digest]\n            assert {$digest eq $newdigest}\n            r save ;# saving an rdb iterates over all the data / pointers\n            r del biglist1 ;# coverage for quicklistBookmarksClear\n        } {1}\n\n        test \"Active defrag edge case: $type\" {\n            # there was an edge case in defrag where all the slabs of a certain bin have exactly the same\n            # % utilization, with the exception of the current slab from which new allocations are made.\n            # if the current slab is lower in utilization, the defragger would have ended up in stagnation,\n            # kept running without moving any allocations.\n            # this test is more consistent on a fresh server with no history\n            start_server {tags {\"defrag\"} overrides {save \"\"}} {\n                r flushdb\n                r config set hz 100\n             
   r config set activedefrag no\n                wait_for_defrag_stop 500 100\n                r config resetstat\n                r config set active-defrag-max-scan-fields 1000\n                r config set active-defrag-threshold-lower 5\n                r config set active-defrag-cycle-min 65\n                r config set active-defrag-cycle-max 75\n                r config set active-defrag-ignore-bytes 1mb\n                r config set maxmemory 0\n                set expected_frag 1.3\n\n                r debug mallctl-str thread.tcache.flush VOID\n                # fill the first slab containing 32 regs of 640 bytes.\n                for {set j 0} {$j < 32} {incr j} {\n                    r setrange \"_$j\" 600 x\n                    r debug mallctl-str thread.tcache.flush VOID\n                }\n\n                # add a mass of keys with 600 bytes values, fill the bin of 640 bytes which has 32 regs per slab.\n                set rd [redis_deferring_client]\n                set keys 640000\n                set count 0\n                for {set j 0} {$j < $keys} {incr j} {\n                    $rd setrange $j 600 x\n\n                    incr count\n                    discard_replies_every $rd $count 10000 10000\n                }\n\n                # create some fragmentation of 50%\n                set sent 0\n                for {set j 0} {$j < $keys} {incr j 1} {\n                    $rd del $j\n                    incr sent\n                    incr j 1\n\n                    discard_replies_every $rd $sent 10000 10000\n                }\n\n                # create higher fragmentation in the first slab\n                for {set j 10} {$j < 32} {incr j} {\n                    r del \"_$j\"\n                }\n\n                # start defrag\n                after 120 ;# serverCron only updates the info once in 100ms\n                set frag [s allocator_frag_ratio]\n                if {$::verbose} {\n                    puts \"frag $frag\"\n        
        }\n\n                assert {$frag >= $expected_frag}\n\n                set digest [debug_digest]\n                catch {r config set activedefrag yes} e\n                if {[r config get activedefrag] eq \"activedefrag yes\"} {\n                    # wait for the active defrag to start working (decision once a second)\n                    wait_for_condition 50 100 {\n                        [s total_active_defrag_time] ne 0\n                    } else {\n                        after 120 ;# serverCron only updates the info once in 100ms\n                        puts [r info memory]\n                        puts [r info stats]\n                        puts [r memory malloc-stats]\n                        fail \"defrag not started.\"\n                    }\n\n                    # wait for the active defrag to stop working\n                    wait_for_defrag_stop 500 100 1.1\n\n                    # test the fragmentation is lower\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    set misses [s active_defrag_misses]\n                    set hits [s active_defrag_hits]\n                    set frag [s allocator_frag_ratio]\n                    if {$::verbose} {\n                        puts \"frag $frag\"\n                        puts \"hits: $hits\"\n                        puts \"misses: $misses\"\n                    }\n                    assert {$misses < 10000000} ;# when defrag doesn't stop, we have some 30m misses, when it does, we have 2m misses\n                }\n\n                # verify the data isn't corrupted or changed\n                set newdigest [debug_digest]\n                assert {$digest eq $newdigest}\n                r save ;# saving an rdb iterates over all the data / pointers\n            }\n        } ;# standalone\n        }\n    }\n    }\n\n    test \"Active defrag can't be triggered during replicaof database flush. 
See issue #14267\" {\n        start_server {tags {\"repl\"} overrides {save \"\"}} {\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            start_server {overrides {save \"\"}} {\n                set replica [srv 0 client]\n                set rd [redis_deferring_client 0]\n\n                $replica config set hz 100\n                $replica config set activedefrag no\n                $replica config set active-defrag-threshold-lower 5\n                $replica config set active-defrag-cycle-min 65\n                $replica config set active-defrag-cycle-max 75\n                $replica config set active-defrag-ignore-bytes 2mb\n\n                # add a mass of string keys\n                set count 0\n                for {set j 0} {$j < 500000} {incr j} {\n                    $rd setrange $j 150 a\n\n                    incr count\n                    discard_replies_every $rd $count 10000 10000\n                }\n                assert_equal [$replica dbsize] 500000\n\n                # create some fragmentation\n                set count 0\n                for {set j 0} {$j < 500000} {incr j 2} {\n                    $rd del $j\n\n                    incr count\n                    discard_replies_every $rd $count 10000 10000\n                }\n                $rd close\n                assert_equal [$replica dbsize] 250000\n\n                catch {$replica config set activedefrag yes} e\n                if {[$replica config get activedefrag] eq \"activedefrag yes\"} {\n                    # Start replication sync which will flush the replica's database,\n                    # then enable defrag to run concurrently with the database flush.\n                    $replica replicaof $master_host $master_port\n\n                    # wait for the active defrag to start working (decision once a second)\n                    wait_for_condition 50 100 {\n                        [s total_active_defrag_time] ne 0\n          
          } else {\n                        after 120 ;# serverCron only updates the info once in 100ms\n                        puts [$replica info memory]\n                        puts [$replica info stats]\n                        puts [$replica memory malloc-stats]\n                        fail \"defrag not started.\"\n                    }\n\n                    wait_for_sync $replica\n\n                    # wait for the active defrag to stop working (db has been emptied during replication sync)\n                    wait_for_defrag_stop 500 100\n                    assert_equal [$replica dbsize] 0\n                }\n            }\n        }\n    } {} {defrag external:skip tsan:skip debug_defrag:skip cluster}\n\n    start_cluster 1 0 {tags {\"defrag external:skip tsan:skip debug_defrag:skip cluster needs:debug\"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save \"\" loglevel notice}} {\n        test_active_defrag \"cluster\"\n    }\n\n    start_server {tags {\"defrag external:skip tsan:skip debug_defrag:skip standalone needs:debug\"} overrides {appendonly yes auto-aof-rewrite-percentage 0 save \"\" loglevel notice}} {\n        test_active_defrag \"standalone\"\n    }\n} ;# run_solo\n"
  },
  {
    "path": "tests/unit/moduleapi/aclcheck.tcl",
"content": "set testmodule [file normalize tests/modules/aclcheck.so]\n\nstart_server {tags {\"modules acl external:skip\"}} {\n    r module load $testmodule\n\n    test {test module check acl for command perm} {\n        # by default all commands are allowed\n        assert_equal [r aclcheck.rm_call.check.cmd set x 5] OK\n        # block the SET command for the user\n        r acl setuser default -set\n        catch {r aclcheck.rm_call.check.cmd set x 5} e\n        assert_match {*DENIED CMD*} $e\n\n        # verify that a new log entry was added\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {default}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry object] eq {set}}\n        assert {[dict get $entry reason] eq {command}}\n\n        # Wrong command arity must fail safely (no crash on KEYNUM keyspec path)\n        r acl setuser default on nopass resetkeys ~restricted:* +@all\n        catch {r aclcheck.rm_call.check.cmd eval script} e\n        assert_match {*DENIED*} $e\n    }\n        \n    test {test module check acl for key prefix permission} {\n        r acl setuser default +set resetkeys ~CART* %W~ORDER* %R~PRODUCT* ~ESCAPED_STAR\\\\* ~NON_ESCAPED_STAR\\\\\\\\*\n        \n        # check for key permission of prefix CART* (READ+WRITE)\n        catch {r aclcheck.set.check.prefixkey \"~\" CAR CART_CLOTHES_7 5} e\n        assert_match \"*DENIED KEY*\" $e\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" CART CART 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"W\" CART_BOOKS CART_BOOKS_12 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"R\" CART_CLOTHES CART_CLOTHES_7 5] OK\n        \n        # check for key permission of prefix ORDER* (WRITE)\n        catch {r aclcheck.set.check.prefixkey \"~\" ORDE ORDER_2024_155351 5} e\n        assert_match \"*DENIED KEY*\" $e\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" ORDER ORDER 5] OK\n        
assert_equal [r aclcheck.set.check.prefixkey \"W\" ORDER_2024 ORDER_2024_564879 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" ORDER_2023 ORDER_2023_564879 5] OK\n        catch {r aclcheck.set.check.prefixkey \"R\" ORDER_2023 ORDER_2023_564879 5} e\n        assert_match \"*DENIED KEY*\" $e\n        \n        # check for key permission of prefix PRODUCT* (READ)\n        catch {r aclcheck.set.check.prefixkey \"~\" PRODUC PRODUCT_CLOTHES_753376 5} e\n        assert_match \"*DENIED KEY*\" $e\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" PRODUCT PRODUCT 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" PRODUCT_BOOKS PRODUCT_BOOKS_753376 5] OK\n        \n        # a pattern ending with an escaped '*' character should not be counted as a prefix\n        catch {r aclcheck.set.check.prefixkey \"~\" ESCAPED_STAR ESCAPED_STAR_12 5} e\n        assert_match \"*DENIED KEY*\" $e\n        catch {r aclcheck.set.check.prefixkey \"~\" ESCAPED_STAR* ESCAPED_STAR* 5} e\n        assert_match \"*DENIED KEY*\" $e        \n        assert_equal [r aclcheck.set.check.prefixkey \"~\" NON_ESCAPED_STAR\\\\ NON_ESCAPED_STAR\\\\clothes 5] OK\n    }\n    \n    test {check ACL permissions versus empty string prefix} {\n        # The empty string should match all key permissions\n        r acl setuser default +set resetkeys %R~* %W~* ~*\n        assert_equal [r aclcheck.set.check.prefixkey \"~\" \"\" CART_BOOKS_12 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"W\" \"\" ORDER_2024_564879 5] OK\n        assert_equal [r aclcheck.set.check.prefixkey \"R\" \"\" PRODUCT_BOOKS_753376 5] OK\n        \n        # The empty string prefix should not match if the user cannot access all keys\n        r acl setuser default +set resetkeys %R~x* %W~x* ~x*\n        catch {r aclcheck.set.check.prefixkey \"~\" \"\" CART_BOOKS_12 5} e\n        assert_match \"*DENIED KEY*\" $e\n    }\n\n    test {test module check acl for key perm} {\n        # give permission for 
SET and block all keys but x(READ+WRITE), y(WRITE), z(READ)\n        r acl setuser default +set resetkeys ~x %W~y %R~z ~ESCAPED_STAR\\\\*\n\n        assert_equal [r aclcheck.set.check.key \"*\" x 5] OK\n        catch {r aclcheck.set.check.key \"*\" v 5} e\n        assert_match \"*DENIED KEY*\" $e\n\n        assert_equal [r aclcheck.set.check.key \"~\" x 5] OK\n        assert_equal [r aclcheck.set.check.key \"~\" y 5] OK\n        assert_equal [r aclcheck.set.check.key \"~\" z 5] OK\n        catch {r aclcheck.set.check.key \"~\" v 5} e\n        assert_match \"*DENIED KEY*\" $e\n\n        assert_equal [r aclcheck.set.check.key \"W\" y 5] OK\n        catch {r aclcheck.set.check.key \"W\" v 5} e\n        assert_match \"*DENIED KEY*\" $e\n\n        assert_equal [r aclcheck.set.check.key \"R\" z 5] OK\n        catch {r aclcheck.set.check.key \"R\" v 5} e\n        assert_match \"*DENIED KEY*\" $e\n        \n        # check pattern ends with escaped '*' character\n        assert_equal [r aclcheck.set.check.key \"~\" ESCAPED_STAR* 5] OK\n    }\n\n    test {test module check acl for module user} {\n        # the module user has access to all keys\n        assert_equal [r aclcheck.rm_call.check.cmd.module.user set y 5] OK\n    }\n\n    test {test module check acl for channel perm} {\n        # block all channels but ch1\n        r acl setuser default resetchannels &ch1\n        assert_equal [r aclcheck.publish.check.channel ch1 msg] 0\n        catch {r aclcheck.publish.check.channel ch2 msg} e\n        set e\n    } {*DENIED CHANNEL*}\n\n    test {test module check acl in rm_call} {\n        # rm call check for key permission (x: READ + WRITE)\n        assert_equal [r aclcheck.rm_call set x 5] OK\n        assert_equal [r aclcheck.rm_call set x 6 get] 5\n\n        # rm call check for key permission (y: only WRITE)\n        assert_equal [r aclcheck.rm_call set y 5] OK\n        assert_error {*NOPERM*} {r aclcheck.rm_call set y 5 get}\n        assert_error {*NOPERM*No permissions 
to access a key*} {r aclcheck.rm_call_with_errors set y 5 get}\n\n        # rm call check for key permission (z: only READ)\n        assert_error {*NOPERM*} {r aclcheck.rm_call set z 5}\n        catch {r aclcheck.rm_call_with_errors set z 5} e\n        assert_match {*NOPERM*No permissions to access a key*} $e\n        assert_error {*NOPERM*} {r aclcheck.rm_call set z 6 get}\n        assert_error {*NOPERM*No permissions to access a key*} {r aclcheck.rm_call_with_errors set z 6 get}\n\n        # verify that a new log entry was added\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {default}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry object] eq {z}}\n        assert {[dict get $entry reason] eq {key}}\n\n        # rm call check for command permission\n        r acl setuser default -set\n        assert_error {*NOPERM*} {r aclcheck.rm_call set x 5}\n        assert_error {*NOPERM*has no permissions to run the 'set' command*} {r aclcheck.rm_call_with_errors set x 5}\n\n        # verify that a new log entry was added\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {default}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry object] eq {set}}\n        assert {[dict get $entry reason] eq {command}}\n    }\n\n    test {test blocking of commands outside of OnLoad} {\n        assert_equal [r block.commands.outside.onload] OK\n    }\n\n    test {test users to have access to module commands having acl categories} {\n        r acl SETUSER j1 on >password -@all +@WRITE\n        r acl SETUSER j2 on >password -@all +@READ\n        assert_equal [r acl DRYRUN j1 aclcheck.module.command.aclcategories.write] OK\n        assert_equal [r acl DRYRUN j2 aclcheck.module.command.aclcategories.write.function.read.category] OK\n        assert_equal [r acl DRYRUN j2 aclcheck.module.command.aclcategories.read.only.category] OK\n    }\n\n    
test {Unload the module - aclcheck} {\n        assert_equal {OK} [r module unload aclcheck]\n    }\n}\n\nstart_server {tags {\"modules acl external:skip\"}} {\n    test {test that existing users have access to module commands loaded at runtime} {\n        r acl SETUSER j3 on >password -@all +@WRITE\n        assert_equal [r module load $testmodule] OK\n        assert_equal [r acl DRYRUN j3 aclcheck.module.command.aclcategories.write] OK\n        assert_equal {OK} [r module unload aclcheck]\n    }\n}\n\nstart_server {tags {\"modules acl external:skip\"}} {\n    test {test that existing users without permissions do not have access to module commands loaded at runtime} {\n        r acl SETUSER j4 on >password -@all +@READ\n        r acl SETUSER j5 on >password -@all +@WRITE\n        assert_equal [r module load $testmodule] OK\n        catch {r acl DRYRUN j4 aclcheck.module.command.aclcategories.write} e\n        assert_equal {User j4 has no permissions to run the 'aclcheck.module.command.aclcategories.write' command} $e\n        catch {r acl DRYRUN j5 aclcheck.module.command.aclcategories.write.function.read.category} e\n        assert_equal {User j5 has no permissions to run the 'aclcheck.module.command.aclcategories.write.function.read.category' command} $e\n    }\n\n    test {test that users without permissions do not have access to module commands} {\n        r acl SETUSER j6 on >password -@all +@READ\n        catch {r acl DRYRUN j6 aclcheck.module.command.aclcategories.write} e\n        assert_equal {User j6 has no permissions to run the 'aclcheck.module.command.aclcategories.write' command} $e\n        r acl SETUSER j7 on >password -@all +@WRITE\n        catch {r acl DRYRUN j7 aclcheck.module.command.aclcategories.write.function.read.category} e\n        assert_equal {User j7 has no permissions to run the 'aclcheck.module.command.aclcategories.write.function.read.category' command} $e\n    }\n\n    test {test if the foocategory acl category is added} {\n        r acl 
SETUSER j8 on >password -@all +@foocategory\n        assert_equal [r acl DRYRUN j8 aclcheck.module.command.test.add.new.aclcategories] OK\n    }\n\n    test {test permission compaction and simplification for categories added by a module} {\n        r acl SETUSER j9 on >password -@all +@foocategory -@foocategory\n        catch {r ACL GETUSER j9} res\n        assert_equal {-@all -@foocategory} [lindex $res 5]\n        assert_equal {OK} [r module unload aclcheck]\n    }\n}\n\nstart_server {tags {\"modules acl external:skip\"}} {\n    test {test module load fails if it exceeds the maximum number of added acl categories} {\n        assert_error {ERR Error loading the extension. Please check the server logs.} {r module load $testmodule 1}\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/async_rm_call.tcl",
    "content": "set testmodule [file normalize tests/modules/blockedclient.so]\nset testmodule2 [file normalize tests/modules/postnotifications.so]\nset testmodule3 [file normalize tests/modules/blockonkeys.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Locked GIL acquisition from async RM_Call} {\n        assert_equal {OK} [r do_rm_call_async acquire_gil]\n    }\n\n    test \"Blpop on async RM_Call fire and forget\" {\n        assert_equal {Blocked} [r do_rm_call_fire_and_forget blpop l 0]\n        r lpush l a\n        assert_equal {0} [r llen l]\n    }\n\n    test \"Blpop on threaded async RM_Call\" {\n        set rd [redis_deferring_client]\n\n        $rd do_rm_call_async_on_thread blpop l 0\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {l a}\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    foreach cmd {do_rm_call_async do_rm_call_async_script_mode } {\n\n        test \"Blpop on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd blpop l 0\n            wait_for_blocked_clients_count 1\n            r lpush l a\n            assert_equal [$rd read] {l a}\n            wait_for_blocked_clients_count 0\n            $rd close\n        }\n\n        test \"Brpop on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd brpop l 0\n            wait_for_blocked_clients_count 1\n            r lpush l a\n            assert_equal [$rd read] {l a}\n            wait_for_blocked_clients_count 0\n            $rd close\n        }\n\n        test \"Brpoplpush on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd brpoplpush l1 l2 0\n            wait_for_blocked_clients_count 1\n            r lpush l1 a\n            assert_equal [$rd read] {a}\n            wait_for_blocked_clients_count 0\n            $rd close\n            
r lpop l2\n        } {a}\n\n        test \"Blmove on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd blmove l1 l2 LEFT LEFT 0\n            wait_for_blocked_clients_count 1\n            r lpush l1 a\n            assert_equal [$rd read] {a}\n            wait_for_blocked_clients_count 0\n            $rd close\n            r lpop l2\n        } {a}\n\n        test \"Bzpopmin on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd bzpopmin s 0\n            wait_for_blocked_clients_count 1\n            r zadd s 10 foo\n            assert_equal [$rd read] {s foo 10}\n            wait_for_blocked_clients_count 0\n            $rd close\n        }\n\n        test \"Bzpopmax on async RM_Call using $cmd\" {\n            set rd [redis_deferring_client]\n\n            $rd $cmd bzpopmax s 0\n            wait_for_blocked_clients_count 1\n            r zadd s 10 foo\n            assert_equal [$rd read] {s foo 10}\n            wait_for_blocked_clients_count 0\n            $rd close\n        }\n    }\n\n    test {Nested async RM_Call} {\n        set rd [redis_deferring_client]\n\n        $rd do_rm_call_async do_rm_call_async do_rm_call_async do_rm_call_async blpop l 0\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {l a}\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {Test multiple async RM_Call waiting on the same event} {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $rd1 do_rm_call_async do_rm_call_async do_rm_call_async do_rm_call_async blpop l 0\n        $rd2 do_rm_call_async do_rm_call_async do_rm_call_async do_rm_call_async blpop l 0\n        wait_for_blocked_clients_count 2\n        r lpush l element element\n        assert_equal [$rd1 read] {l element}\n        assert_equal [$rd2 read] {l element}\n        wait_for_blocked_clients_count 0\n        $rd1 
close\n        $rd2 close\n    }\n\n    test {async RM_Call calls RM_Call} {\n        assert_equal {PONG} [r do_rm_call_async do_rm_call ping]\n    }\n\n    test {async RM_Call calls background RM_Call calls RM_Call} {\n        assert_equal {PONG} [r do_rm_call_async do_bg_rm_call do_rm_call ping]\n    }\n\n    test {async RM_Call calls background RM_Call calls RM_Call calls async RM_Call} {\n        assert_equal {PONG} [r do_rm_call_async do_bg_rm_call do_rm_call do_rm_call_async ping]\n    }\n\n    test {async RM_Call inside async RM_Call callback} {\n        set rd [redis_deferring_client]\n        $rd wait_and_do_rm_call blpop l 0\n        wait_for_blocked_clients_count 1\n\n        start_server {} {\n            test \"Connect a replica to the master instance\" {\n                r slaveof [srv -1 host] [srv -1 port]\n                wait_for_condition 50 100 {\n                    [s role] eq {slave} &&\n                    [string match {*master_link_status:up*} [r info replication]]\n                } else {\n                    fail \"Can't turn the instance into a replica\"\n                }\n            }\n\n            assert_equal {1} [r -1 lpush l a]\n            assert_equal [$rd read] {l a}\n        }\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {Become replica while having async RM_Call running} {\n        r flushall\n        set rd [redis_deferring_client]\n        $rd do_rm_call_async blpop l 0\n        wait_for_blocked_clients_count 1\n\n        # become a replica of a non-existent redis\n        r replicaof localhost 30000\n\n        catch {[$rd read]} e\n        assert_match {UNBLOCKED force unblock from blocking operation*} $e\n        wait_for_blocked_clients_count 0\n\n        r replicaof no one\n\n        r lpush l 1\n        # make sure the async rm_call was aborted\n        assert_equal [r llen l] {1}\n        $rd close\n    }\n\n    test {Pipeline with blocking RM_Call} {\n        r flushall\n        
set rd [redis_deferring_client]\n        set buf \"\"\n        append buf \"do_rm_call_async blpop l 0\\r\\n\"\n        append buf \"ping\\r\\n\"\n        $rd write $buf\n        $rd flush\n        wait_for_blocked_clients_count 1\n\n        # release the blocked client\n        r lpush l 1\n\n        assert_equal [$rd read] {l 1}\n        assert_equal [$rd read] {PONG}\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {blocking RM_Call abort} {\n        r flushall\n        set rd [redis_deferring_client]\n        \n        $rd client id\n        set client_id [$rd read]\n\n        $rd do_rm_call_async blpop l 0\n        wait_for_blocked_clients_count 1\n\n        r client kill ID $client_id\n        assert_error {*error reading reply*} {$rd read}\n\n        wait_for_blocked_clients_count 0\n\n        r lpush l 1\n        # make sure the async rm_call was aborted\n        assert_equal [r llen l] {1}\n        $rd close\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Test basic replication stream on unblock handler} {\n        r flushall\n        set repl [attach_to_replication_stream]\n\n        set rd [redis_deferring_client]\n\n        $rd do_rm_call_async blpop l 0\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {l a}\n\n        assert_replication_stream $repl {\n            {select *}\n            {lpush l a}\n            {lpop l}\n        }\n        close_replication_stream $repl\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {Test unblock handlers are executed as a unit} {\n        r flushall\n        set repl [attach_to_replication_stream]\n\n        set rd [redis_deferring_client]\n\n        $rd blpop_and_set_multiple_keys l x 1 y 2\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {OK}\n\n        assert_replication_stream $repl {\n            
{select *}\n            {lpush l a}\n            {multi}\n            {lpop l}\n            {set x 1}\n            {set y 2}\n            {exec}\n        }\n        close_replication_stream $repl\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {Test no propagation of blocking command} {\n        r flushall\n        set repl [attach_to_replication_stream]\n\n        set rd [redis_deferring_client]\n\n        $rd do_rm_call_async_no_replicate blpop l 0\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {l a}\n\n        # make sure the lpop is not replicated\n        r set x 1\n\n        assert_replication_stream $repl {\n            {select *}\n            {lpush l a}\n            {set x 1}\n        }\n        close_replication_stream $repl\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    r module load $testmodule2\n\n    test {Test unblock handlers are executed as a unit with key space notifications} {\n        r flushall\n        set repl [attach_to_replication_stream]\n\n        set rd [redis_deferring_client]\n\n        $rd blpop_and_set_multiple_keys l string_foo 1 string_bar 2\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {OK}\n\n        # Explanation of the first multi exec block:\n        # {lpop l} - pop the value by our blocking command 'blpop_and_set_multiple_keys'\n        # {set string_foo 1} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {set string_bar 2} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {incr string_changed{string_foo}} - post notification job that was registered when 'string_foo' changed\n        # {incr string_changed{string_bar}} - post notification job that was registered when 'string_bar' changed\n        # {incr string_total} - post 
notification job that was registered when string_changed{string_foo} changed\n        # {incr string_total} - post notification job that was registered when string_changed{string_bar} changed\n        assert_replication_stream $repl {\n            {select *}\n            {lpush l a}\n            {multi}\n            {lpop l}\n            {set string_foo 1}\n            {set string_bar 2}\n            {incr string_changed{string_foo}}\n            {incr string_changed{string_bar}}\n            {incr string_total}\n            {incr string_total}\n            {exec}\n        }\n        close_replication_stream $repl\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n\n    test {Test unblock handlers are executed as a unit with lazy expire} {\n        r flushall\n        r DEBUG SET-ACTIVE-EXPIRE 0\n        set repl [attach_to_replication_stream]\n\n        set rd [redis_deferring_client]\n\n        $rd blpop_and_set_multiple_keys l string_foo 1 string_bar 2\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {OK}\n\n        # set expiration on string_foo\n        r pexpire string_foo 1\n        after 10\n\n        # now the key should have been expired\n        $rd blpop_and_set_multiple_keys l string_foo 1 string_bar 2\n        wait_for_blocked_clients_count 1\n        r lpush l a\n        assert_equal [$rd read] {OK}\n\n        # Explanation of the first multi exec block:\n        # {lpop l} - pop the value by our blocking command 'blpop_and_set_multiple_keys'\n        # {set string_foo 1} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {set string_bar 2} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {incr string_changed{string_foo}} - post notification job that was registered when 'string_foo' changed\n        # {incr string_changed{string_bar}} - post notification job that was registered when 'string_bar' changed\n        # {incr 
string_total} - post notification job that was registered when string_changed{string_foo} changed\n        # {incr string_total} - post notification job that was registered when string_changed{string_bar} changed\n        #\n        # Explanation of the second multi exec block:\n        # {lpop l} - pop the value by our blocking command 'blpop_and_set_multiple_keys'\n        # {del string_foo} - lazy expiration of string_foo when 'blpop_and_set_multiple_keys' tries to write to it. \n        # {set string_foo 1} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {set string_bar 2} - the action of our blocking command 'blpop_and_set_multiple_keys'\n        # {incr expired} - the post notification job, registered after string_foo got expired\n        # {incr string_changed{string_foo}} - post notification job triggered when we set string_foo\n        # {incr string_changed{string_bar}} - post notification job triggered when we set string_bar\n        # {incr string_total} - post notification job triggered when we incr 'string_changed{string_foo}'\n        # {incr string_total} - post notification job triggered when we incr 'string_changed{string_bar}'\n        assert_replication_stream $repl {\n            {select *}\n            {lpush l a}\n            {multi}\n            {lpop l}\n            {set string_foo 1}\n            {set string_bar 2}\n            {incr string_changed{string_foo}}\n            {incr string_changed{string_bar}}\n            {incr string_total}\n            {incr string_total}\n            {exec}\n            {pexpireat string_foo *}\n            {lpush l a}\n            {multi}\n            {lpop l}\n            {del string_foo}\n            {set string_foo 1}\n            {set string_bar 2}\n            {incr expired}\n            {incr string_changed{string_foo}}\n            {incr string_changed{string_bar}}\n            {incr string_total}\n            {incr string_total}\n            {exec}\n        }\n       
 close_replication_stream $repl\n        r DEBUG SET-ACTIVE-EXPIRE 1\n        \n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    r module load $testmodule3\n\n    test {Test unblock handler on module blocked on keys} {\n        set rd [redis_deferring_client]\n\n        r fsl.push l 1\n        $rd do_rm_call_async FSL.BPOPGT l 3 0\n        wait_for_blocked_clients_count 1\n        r fsl.push l 2\n        r fsl.push l 3\n        r fsl.push l 4\n        assert_equal [$rd read] {4}\n\n        wait_for_blocked_clients_count 0\n        $rd close\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/auth.tcl",
    "content": "set testmodule [file normalize tests/modules/auth.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Modules can create a user that can be authenticated} {\n        # Make sure we start authenticated with default user\n        r auth default \"\"\n        assert_equal [r acl whoami] \"default\"\n        r auth.createmoduleuser\n\n        set id [r auth.authmoduleuser]\n        assert_equal [r client id] $id\n\n        # Verify returned id is the same as our current id and\n        # we are authenticated with the specified user\n        assert_equal [r acl whoami] \"global\"\n    }\n\n    test {De-authenticating clients is tracked and kills clients} {\n        assert_equal [r auth.changecount] 0\n        r auth.createmoduleuser\n\n        # Catch the I/O exception that was thrown when Redis\n        # disconnected from us.\n        catch { [r ping] } e\n        assert_match {*I/O*} $e\n\n        # Check that a user change was registered\n        assert_equal [r auth.changecount] 1\n    }\n\n    test {Modules can't authenticate with ACL users that don't exist} {\n        catch { [r auth.authrealuser auth-module-test-fake] } e\n        assert_match {*Invalid user*} $e\n    }\n\n    test {Modules can authenticate with ACL users} {\n        assert_equal [r acl whoami] \"default\"\n\n        # Create user to auth into\n        r acl setuser auth-module-test on allkeys allcommands\n\n        set id [r auth.authrealuser auth-module-test]\n\n        # Verify returned id is the same as our current id and\n        # we are authenticated with the specified user\n        assert_equal [r client id] $id\n        assert_equal [r acl whoami] \"auth-module-test\"\n    }\n\n    test {Client callback is called on user switch} {\n        assert_equal [r auth.changecount] 0\n\n        # Auth again and validate change count\n        r auth.authrealuser auth-module-test\n        assert_equal [r auth.changecount] 1\n\n      
  # Re-auth with the default user\n        r auth default \"\"\n        assert_equal [r auth.changecount] 1\n        assert_equal [r acl whoami] \"default\"\n\n        # Re-auth with the default user again, to\n        # verify the callback isn't fired again\n        r auth default \"\"\n        assert_equal [r auth.changecount] 0\n        assert_equal [r acl whoami] \"default\"\n    }\n\n    test {modules can redact arguments} {\n        r config set slowlog-log-slower-than 0\n        r slowlog reset\n        r auth.redact 1 2 3 4\n        r auth.redact 1 2 3\n        r config set slowlog-log-slower-than -1\n        set slowlog_resp [r slowlog get]\n\n        # There will be 3 records, slowlog reset and the\n        # two auth redact calls.\n        assert_equal 3 [llength $slowlog_resp]\n        assert_equal {slowlog reset} [lindex [lindex $slowlog_resp 2] 3]\n        assert_equal {auth.redact 1 (redacted) 3 (redacted)} [lindex [lindex $slowlog_resp 1] 3]\n        assert_equal {auth.redact (redacted) 2 (redacted)} [lindex [lindex $slowlog_resp 0] 3]\n    }\n\n    test \"Unload the module - testacl\" {\n        assert_equal {OK} [r module unload testacl]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/basics.tcl",
    "content": "set testmodule [file normalize tests/modules/basics.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {test module api basics} {\n        r test.basics\n    } {ALL TESTS PASSED}\n\n    test {test rm_call auto mode} {\n        r hello 2\n        set reply [r test.rmcallautomode]\n        assert_equal [lindex $reply 0] f1\n        assert_equal [lindex $reply 1] v1\n        assert_equal [lindex $reply 2] f2\n        assert_equal [lindex $reply 3] v2\n        r hello 3\n        set reply [r test.rmcallautomode]\n        assert_equal [dict get $reply f1] v1\n        assert_equal [dict get $reply f2] v2\n    }\n\n    test {test get resp} {\n        foreach resp {3 2} {\n            if {[lsearch $::denytags \"resp3\"] >= 0} {\n                if {$resp == 3} {continue}\n            } elseif {$::force_resp3} {\n                if {$resp == 2} {continue}\n            }\n            r hello $resp\n            set reply [r test.getresp]\n            assert_equal $reply $resp\n            r hello 2\n        }\n    }\n\n    test \"Unload the module - test\" {\n        assert_equal {OK} [r module unload test]\n    }\n}\n\nstart_server {tags {\"modules external:skip\"} overrides {enable-module-command no}} {\n    test {module command disabled} {\n       assert_error \"ERR *MODULE command not allowed*\" {r module load $testmodule}\n    }\n}\n\nstart_server {tags {\"modules external:skip\"} overrides {enable-debug-command no}} {\n    r module load $testmodule\n\n    test {debug command disabled} {\n        assert_equal {no} [r test.candebug]\n    }\n}\n\nstart_server {tags {\"modules external:skip\"} overrides {enable-debug-command yes}} {\n    r module load $testmodule\n\n    test {debug command enabled} {\n        assert_equal {yes} [r test.candebug]\n    }\n}\n\nstart_server {tags {\"modules external:skip\"} overrides {enable-debug-command local}} {\n    r module load $testmodule\n\n    test {debug commands are 
enabled for local connection} {\n        assert_equal {yes} [r test.candebug]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/blockedclient.tcl",
    "content": "set testmodule [file normalize tests/modules/blockedclient.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Locked GIL acquisition} {\n        assert_match \"OK\" [r acquire_gil]\n    }\n\n    test {Locked GIL acquisition during multi} {\n    \tr multi\n    \tr acquire_gil\n    \tassert_equal {{Blocked client is not supported inside multi}} [r exec]\n    }\n\n    test {Locked GIL acquisition from RM_Call} {\n    \tassert_equal {Blocked client is not allowed} [r do_rm_call acquire_gil]\n    }\n\n    test {Blocking commands do not block the client on RM_Call} {\n    \tr lpush l test\n    \tassert_equal [r do_rm_call blpop l 0] {l test}\n    \t\n    \tr lpush l test\n    \tassert_equal [r do_rm_call brpop l 0] {l test}\n    \t\n    \tr lpush l1 test\n    \tassert_equal [r do_rm_call brpoplpush l1 l2 0] {test}\n    \tassert_equal [r do_rm_call brpop l2 0] {l2 test}\n\n    \tr lpush l1 test\n    \tassert_equal [r do_rm_call blmove l1 l2 LEFT LEFT 0] {test}\n    \tassert_equal [r do_rm_call brpop l2 0] {l2 test}\n\n    \tr ZADD zset1 0 a 1 b 2 c\n    \tassert_equal [r do_rm_call bzpopmin zset1 0] {zset1 a 0}\n    \tassert_equal [r do_rm_call bzpopmax zset1 0] {zset1 c 2}\n\n    \tr xgroup create s g $ MKSTREAM\n    \tr xadd s * foo bar\n    \tassert {[r do_rm_call xread BLOCK 0 STREAMS s 0-0] ne {}}\n    \tassert {[r do_rm_call xreadgroup group g c BLOCK 0 STREAMS s >] ne {}}\n\n    \tassert {[r do_rm_call blpop empty_list 0] eq {}}\n        assert {[r do_rm_call brpop empty_list 0] eq {}}\n        assert {[r do_rm_call brpoplpush empty_list1 empty_list2 0] eq {}}\n        assert {[r do_rm_call blmove empty_list1 empty_list2 LEFT LEFT 0] eq {}}\n        \n        assert {[r do_rm_call bzpopmin empty_zset 0] eq {}}\n        assert {[r do_rm_call bzpopmax empty_zset 0] eq {}}\n       \n        r xgroup create empty_stream g $ MKSTREAM\n        assert {[r do_rm_call xread BLOCK 0 STREAMS empty_stream $] eq 
{}}\n        assert {[r do_rm_call xreadgroup group g c BLOCK 0 STREAMS empty_stream >] eq {}}\n\n    }\n\n    test {Monitor disallowed inside RM_Call} {\n        set e {}\n        catch {\n            r do_rm_call monitor\n        } e\n        set e\n    } {*ERR*DENY BLOCKING*}\n\n    test {subscribe disallowed inside RM_Call} {\n        set e {}\n        catch {\n            r do_rm_call subscribe x\n        } e\n        set e\n    } {*ERR*DENY BLOCKING*}\n\n    test {RM_Call from blocked client} {\n        r hset hash foo bar\n        r do_bg_rm_call hgetall hash\n    } {foo bar}\n\n    test {RM_Call from blocked client with script mode} {\n        r do_bg_rm_call_format S hset k foo bar\n    } {1}\n\n    test {RM_Call from blocked client with oom mode} {\n        r config set maxmemory 1\n        # will set server.pre_command_oom_state to 1\n        assert_error {OOM command not allowed*} {r hset hash foo bar}\n        r config set maxmemory 0\n        # now it should be OK to call OOM commands\n        r do_bg_rm_call_format M hset k1 foo bar\n    } {1} {needs:config-maxmemory}\n\n    test {RESP version carries through to blocked client} {\n        for {set client_proto 2} {$client_proto <= 3} {incr client_proto} {\n            if {[lsearch $::denytags \"resp3\"] >= 0} {\n                if {$client_proto == 3} {continue}\n            } elseif {$::force_resp3} {\n                if {$client_proto == 2} {continue}\n            }\n            r hello $client_proto\n            r readraw 1\n            set ret [r do_fake_bg_true]\n            if {$client_proto == 2} {\n                assert_equal $ret {:1}\n            } else {\n                assert_equal $ret \"#t\"\n            }\n            r readraw 0\n            r hello 2\n        }\n    }\n\nforeach call_type {nested normal} {\n    test \"Busy module command - $call_type\" {\n        set busy_time_limit 50\n        set old_time_limit [lindex [r config get busy-reply-threshold] 1]\n        r config set 
busy-reply-threshold $busy_time_limit\n        set rd [redis_deferring_client]\n\n        # run command that blocks until released\n        set start [clock clicks -milliseconds]\n        if {$call_type == \"nested\"} {\n            $rd do_rm_call slow_fg_command 0\n        } else {\n            $rd slow_fg_command 0\n        }\n        $rd flush\n\n        # send another command after the blocked one, to make sure we don't attempt to process it\n        $rd ping\n        $rd flush\n\n        # make sure we get BUSY error, and that we didn't get it too early\n        wait_for_condition 50 100 {\n            ([catch {r ping} reply] == 1) &&\n            ([string match {*BUSY Slow module operation*} $reply])\n        } else {\n            fail \"Failed waiting for busy slow response\"\n        }\n        assert_morethan_equal [expr [clock clicks -milliseconds]-$start] $busy_time_limit\n\n        # abort the blocking operation\n        r stop_slow_fg_command\n        wait_for_condition 50 100 {\n            [catch {r ping} e] == 0\n        } else {\n            fail \"Failed waiting for busy command to end\"\n        }\n        assert_equal [$rd read] \"1\"\n        assert_equal [$rd read] \"PONG\"\n\n        # run command that blocks for 200ms\n        set start [clock clicks -milliseconds]\n        if {$call_type == \"nested\"} {\n            $rd do_rm_call slow_fg_command 200000\n        } else {\n            $rd slow_fg_command 200000\n        }\n        $rd flush\n        after 10 ;# try to make sure redis started running the command before we proceed\n\n        # make sure we didn't get BUSY error, it simply blocked till the command was done\n        r ping\n        # The command blocks for 200ms, allow 1-2ms clock skew (1%)\n        # to accommodate differences between using of monotonic timer and ustime\n        assert_morethan_equal [expr [clock clicks -milliseconds]-$start] 198\n        $rd read\n\n        $rd close\n        r config set busy-reply-threshold 
$old_time_limit\n    }\n}\n\n    test {RM_Call from blocked client} {\n        set busy_time_limit 50\n        set old_time_limit [lindex [r config get busy-reply-threshold] 1]\n        r config set busy-reply-threshold $busy_time_limit\n\n        # trigger slow operation\n        r set_slow_bg_operation 1\n        r hset hash foo bar\n        set rd [redis_deferring_client]\n        set start [clock clicks -milliseconds]\n        $rd do_bg_rm_call hgetall hash\n\n        # send another command after the blocked one, to make sure we don't attempt to process it\n        $rd ping\n        $rd flush\n\n        # wait till we know we're blocked inside the module\n        wait_for_condition 50 100 {\n            [r is_in_slow_bg_operation] eq 1\n        } else {\n            fail \"Failed waiting for slow operation to start\"\n        }\n\n        # make sure we get BUSY error, and that we didn't get here too early\n        assert_error {*BUSY Slow module operation*} {r ping}\n        assert_morethan_equal [expr [clock clicks -milliseconds]-$start] $busy_time_limit\n        # abort the blocking operation\n        r set_slow_bg_operation 0\n\n        wait_for_condition 50 100 {\n            [r is_in_slow_bg_operation] eq 0\n        } else {\n            fail \"Failed waiting for slow operation to stop\"\n        }\n        assert_equal [r ping] {PONG}\n\n        r config set busy-reply-threshold $old_time_limit\n        assert_equal [$rd read] {foo bar}\n        assert_equal [$rd read] {PONG}\n        $rd close\n    }\n\n    test {blocked client reaches client output buffer limit} {\n        r hset hash big [string repeat x 50000]\n        r hset hash bada [string repeat x 50000]\n        r hset hash boom [string repeat x 50000]\n        r config set client-output-buffer-limit {normal 100000 0 0}\n        r client setname myclient\n        catch {r do_bg_rm_call hgetall hash} e\n        assert_match \"*I/O error*\" $e\n        reconnect\n        set clients [r client 
list]\n        assert_no_match \"*name=myclient*\" $clients\n    }\n\n    test {module client error stats} {\n        r config resetstat\n\n        # simple module command that replies with string error\n        assert_error \"ERR unknown command 'hgetalllll'\" {r do_rm_call hgetalllll}\n        assert_equal [errorrstat ERR r] {count=1}\n\n        # simple module command that replies with string error\n        assert_error \"ERR unknown subcommand 'bla'. Try CONFIG HELP.\" {r do_rm_call config bla}\n        assert_equal [errorrstat ERR r] {count=2}\n\n        # module command that replies with string error from bg thread\n        assert_error \"NULL reply returned\" {r do_bg_rm_call hgetalllll}\n        assert_equal [errorrstat NULL r] {count=1}\n\n        # module command that returns an arity error\n        r do_rm_call set x x\n        assert_error \"ERR wrong number of arguments for 'do_rm_call' command\" {r do_rm_call}\n        assert_equal [errorrstat ERR r] {count=3}\n\n        # RM_Call that propagates an error\n        assert_error \"WRONGTYPE*\" {r do_rm_call hgetall x}\n        assert_equal [errorrstat WRONGTYPE r] {count=1}\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdrstat hgetall r]\n\n        # RM_Call from bg thread that propagates an error\n        assert_error \"WRONGTYPE*\" {r do_bg_rm_call hgetall x}\n        assert_equal [errorrstat WRONGTYPE r] {count=2}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=2*} [cmdrstat hgetall r]\n\n        assert_equal [s total_error_replies] 6\n        assert_match {*calls=5,*,rejected_calls=0,failed_calls=4*} [cmdrstat do_rm_call r]\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=2*} [cmdrstat do_bg_rm_call r]\n    }\n\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n        set replica [srv 0 client]\n       
 set replica_host [srv 0 host]\n        set replica_port [srv 0 port]\n\n        # Start the replication process...\n        $replica replicaof $master_host $master_port\n        wait_for_sync $replica\n\n        test {WAIT command on module blocked client} {\n            pause_process [srv 0 pid]\n\n            $master do_bg_rm_call_format ! hset bk1 foo bar\n\n            assert_equal [$master wait 1 1000] 0\n            resume_process [srv 0 pid]\n            assert_equal [$master wait 1 1000] 1\n            assert_equal [$replica hget bk1 foo] bar\n        }\n    }\n\n    test {Unblock by timer} {\n        # When the client is unblocked, we will get the OK reply.\n        assert_match \"OK\" [r unblock_by_timer 100 0]\n    }\n\n    test {block time is shorter than timer period} {\n        # This command does not get a reply.\n        set rd [redis_deferring_client]\n        $rd unblock_by_timer 100 10\n        # Wait for the client to be unblocked.\n        after 120\n        $rd close\n    }\n\n    test {block time is equal to timer period} {\n        # The times are equal, so both fire in the same event loop;\n        # when the client is unblocked, we will get the OK reply from the timer.\n        assert_match \"OK\" [r unblock_by_timer 100 100]\n    }\n    \n    test \"Unload the module - blockedclient\" {\n        assert_equal {OK} [r module unload blockedclient]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/blockonbackground.tcl",
    "content": "set testmodule [file normalize tests/modules/blockonbackground.so]\n\nproc latency_percentiles_usec {cmd} {\n    return [latencyrstat_percentiles $cmd r]\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test { blocked clients time tracking - check blocked command that uses RedisModule_BlockedClientMeasureTimeStart() is tracking background time} {\n        r slowlog reset\n        r config set slowlog-log-slower-than 200000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r block.debug 0 10000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r config resetstat\n        r config set latency-tracking yes\n        r config set latency-tracking-info-percentiles \"50.0\"\n        r block.debug 200 10000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 1\n        }\n\n        set cmdstatline [cmdrstat block.debug r]\n        set latencystatline_debug [latency_percentiles_usec block.debug]\n\n        regexp \"calls=1,usec=(.*?),usec_per_call=(.*?),rejected_calls=0,failed_calls=0\" $cmdstatline -> usec usec_per_call\n        regexp \"p50=(.+\\..+)\" $latencystatline_debug  -> p50\n        assert {$usec >= 100000}\n        assert {$usec_per_call >= 100000}\n        assert {$p50 >= 100000}\n    }\n\n    test { blocked clients time tracking - check blocked command that uses RedisModule_BlockedClientMeasureTimeStart() is tracking background time even in timeout } {\n        r slowlog reset\n        r config set slowlog-log-slower-than 200000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r block.debug 0 20000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r config resetstat\n        r block.debug 20000 500\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 1\n        }\n\n        set 
cmdstatline [cmdrstat block.debug r]\n\n        regexp \"calls=1,usec=(.*?),usec_per_call=(.*?),rejected_calls=0,failed_calls=0\" $cmdstatline -> usec usec_per_call\n        assert {$usec >= 250000}\n        assert {$usec_per_call >= 250000}\n    }\n\n    test { blocked clients time tracking - check blocked command with multiple calls RedisModule_BlockedClientMeasureTimeStart()  is tracking the total background time } {\n        r slowlog reset\n        r config set slowlog-log-slower-than 200000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r block.double_debug 0\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r config resetstat\n        r block.double_debug 100\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 1\n        }\n        set cmdstatline [cmdrstat block.double_debug r]\n\n        regexp \"calls=1,usec=(.*?),usec_per_call=(.*?),rejected_calls=0,failed_calls=0\" $cmdstatline -> usec usec_per_call\n        assert {$usec >= 60000}\n        assert {$usec_per_call >= 60000}\n    }\n\n    test { blocked clients time tracking - check blocked command without calling RedisModule_BlockedClientMeasureTimeStart() is not reporting background time } {\n        r slowlog reset\n        r config set slowlog-log-slower-than 200000\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n        r block.debug_no_track 200 1000\n        # ensure slowlog is still empty\n        if {!$::no_latency} {\n            assert_equal [r slowlog len] 0\n        }\n    }\n\n    test \"client unblock works only for modules with timeout support\" {\n        set rd [redis_deferring_client]\n        $rd client id\n        set id [$rd read]\n\n        # Block with a timeout function - may unblock\n        $rd block.block 20000\n        wait_for_condition 50 100 {\n            [r block.is_blocked] == 1\n        } else {\n            fail \"Module did 
not block\"\n        }\n\n        assert_equal 1 [r client unblock $id]\n        assert_match {*Timed out*} [$rd read]\n\n        # Block without a timeout function - cannot unblock\n        $rd block.block 0\n        wait_for_condition 50 100 {\n            [r block.is_blocked] == 1\n        } else {\n            fail \"Module did not block\"\n        }\n\n        assert_equal 0 [r client unblock $id]\n        assert_equal \"OK\" [r block.release foobar]\n        assert_equal \"foobar\" [$rd read]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/blockonkeys.tcl",
    "content": "set testmodule [file normalize tests/modules/blockonkeys.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"Module client blocked on keys: Circular BPOPPUSH\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        r del src dst\n\n        $rd1 fsl.bpoppush src dst 0\n        wait_for_blocked_clients_count 1\n\n        $rd2 fsl.bpoppush dst src 0\n        wait_for_blocked_clients_count 2\n\n        r fsl.push src 42\n        wait_for_blocked_clients_count 0\n\n        assert_equal {42} [r fsl.getall src]\n        assert_equal {} [r fsl.getall dst]\n    }\n\n    test \"Module client blocked on keys: Self-referential BPOPPUSH\" {\n        set rd1 [redis_deferring_client]\n\n        r del src\n\n        $rd1 fsl.bpoppush src src 0\n        wait_for_blocked_clients_count 1\n        r fsl.push src 42\n\n        assert_equal {42} [r fsl.getall src]\n    }\n\n    test \"Module client blocked on keys: BPOPPUSH unblocked by timer\" {\n        set rd1 [redis_deferring_client]\n\n        r del src dst\n\n        set repl [attach_to_replication_stream]\n\n        $rd1 fsl.bpoppush src dst 0\n        wait_for_blocked_clients_count 1\n\n        r fsl.pushtimer src 9000 10\n        wait_for_blocked_clients_count 0\n\n        assert_equal {9000} [r fsl.getall dst]\n        assert_equal {} [r fsl.getall src]\n\n        assert_replication_stream $repl {\n            {select *}\n            {fsl.push src 9000}\n            {fsl.bpoppush src dst 0}\n        }\n\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {Module client blocked on keys (no metadata): No block} {\n        r del k\n        r fsl.push k 33\n        r fsl.push k 34\n        r fsl.bpop k 0\n    } {34}\n\n    test {Module client blocked on keys (no metadata): Timeout} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd fsl.bpop k 1\n        assert_equal {Request 
timedout} [$rd read]\n    }\n\n    test {Module client blocked on keys (no metadata): Blocked} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd fsl.bpop k 0\n        wait_for_blocked_clients_count 1\n        r fsl.push k 34\n        assert_equal {34} [$rd read]\n    }\n\n    test {Module client blocked on keys (with metadata): No block} {\n        r del k\n        r fsl.push k 34\n        r fsl.bpopgt k 30 0\n    } {34}\n\n    test {Module client blocked on keys (with metadata): Timeout} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        r fsl.push k 33\n        $rd fsl.bpopgt k 35 1\n        assert_equal {Request timedout} [$rd read]\n        r client kill id $cid ;# try to smoke-out client-related memory leak\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, case 1} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        r fsl.push k 33\n        $rd fsl.bpopgt k 33 0\n        wait_for_blocked_clients_count 1\n        r fsl.push k 34\n        assert_equal {34} [$rd read]\n        r client kill id $cid ;# try to smoke-out client-related memory leak\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, case 2} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r fsl.push k 33\n        r fsl.push k 34\n        r fsl.push k 35\n        r fsl.push k 36\n        assert_equal {36} [$rd read]\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, DEL} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r del k\n        assert_error {*UNBLOCKED key no longer exists*} {$rd read}\n    }\n\n    test {Module client blocked on 
keys (with metadata): Blocked, FLUSHALL} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r flushall\n        assert_error {*UNBLOCKED key no longer exists*} {$rd read}\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, SWAPDB, no key} {\n        r select 9\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r swapdb 0 9\n        assert_error {*UNBLOCKED key no longer exists*} {$rd read}\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, SWAPDB, key exists, case 1} {\n        ;# Key exists on the other db, but with the wrong type\n        r flushall\n        r select 9\n        r fsl.push k 32\n        r select 0\n        r lpush k 38\n        r select 9\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r swapdb 0 9\n        assert_error {*UNBLOCKED key no longer exists*} {$rd read}\n        r select 9\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, SWAPDB, key exists, case 2} {\n        ;# Key exists on the other db with the right type, but the value doesn't allow unblocking\n        r flushall\n        r select 9\n        r fsl.push k 32\n        r select 0\n        r fsl.push k 34\n        r select 9\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r swapdb 0 9\n        assert_equal {1} [s 0 blocked_clients]\n        r fsl.push k 38\n        assert_equal {38} [$rd read]\n        r select 9\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, SWAPDB, key exists, case 3} {\n        ;# Key exists on the other db with the right type, and the value allows unblocking\n        r flushall\n        r select 9\n        r 
fsl.push k 32\n        r select 0\n        r fsl.push k 38\n        r select 9\n        set rd [redis_deferring_client]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r swapdb 0 9\n        assert_equal {38} [$rd read]\n        r select 9\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, CLIENT KILL} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r client kill id $cid ;# try to smoke-out client-related memory leak\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, CLIENT UNBLOCK TIMEOUT} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r client unblock $cid timeout ;# try to smoke-out client-related memory leak\n        assert_equal {Request timedout} [$rd read]\n    }\n\n    test {Module client blocked on keys (with metadata): Blocked, CLIENT UNBLOCK ERROR} {\n        r del k\n        r fsl.push k 32\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        $rd fsl.bpopgt k 35 0\n        wait_for_blocked_clients_count 1\n        r client unblock $cid error ;# try to smoke-out client-related memory leak\n        assert_error \"*unblocked*\" {$rd read}\n    }\n\n    test {Module client blocked on keys, no timeout CB, CLIENT UNBLOCK TIMEOUT} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        $rd fsl.bpop k 0 NO_TO_CB\n        wait_for_blocked_clients_count 1\n        assert_equal [r client unblock $cid timeout] {0}\n        $rd close\n    }\n\n    test {Module client blocked on keys, no timeout CB, CLIENT UNBLOCK ERROR} {\n        r del k\n 
       set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n        $rd fsl.bpop k 0 NO_TO_CB\n        wait_for_blocked_clients_count 1\n        assert_equal [r client unblock $cid error] {0}\n        $rd close\n    }\n\n    test {Module client re-blocked on keys after woke up on wrong type} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd fsl.bpop k 0\n        wait_for_blocked_clients_count 1\n        r lpush k 12\n        r lpush k 13\n        r lpush k 14\n        r del k\n        r fsl.push k 34\n        assert_equal {34} [$rd read]\n        assert_equal {1} [r get fsl_wrong_type] ;# first lpush caused one wrong-type wake-up\n    }\n\n    test {Module client blocked on keys woken up by LPUSH} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd blockonkeys.popall k\n        wait_for_blocked_clients_count 1\n        r lpush k 42 squirrel banana\n        assert_equal {banana squirrel 42} [$rd read]\n        $rd close\n    }\n\n    test {Module client unblocks BLPOP} {\n        r del k\n        set rd [redis_deferring_client]\n        $rd blpop k 3\n        wait_for_blocked_clients_count 1\n        r blockonkeys.lpush k 42\n        assert_equal {k 42} [$rd read]\n        $rd close\n    }\n\n    test {Module unblocks module blocked on non-empty list} {\n        r del k\n        r lpush k aa\n        # Module client blocks to pop 5 elements from list\n        set rd [redis_deferring_client]\n        $rd blockonkeys.blpopn k 5\n        wait_for_blocked_clients_count 1\n        # Check that RM_SignalKeyAsReady() can wake up BLPOPN\n        r blockonkeys.lpush_unblock k bb cc ;# Not enough elements for BLPOPN\n        r lpush k dd ee ff                  ;# Doesn't unblock module\n        r blockonkeys.lpush_unblock k gg    ;# Unblocks module\n        assert_equal {gg ff ee dd cc} [$rd read]\n        $rd close\n    }\n    \n    test {Module explicit unblock when blocked on keys} {\n        r del 
k\n        r set somekey someval\n        # Module client blocks to pop 5 elements from list\n        set rd [redis_deferring_client]\n        $rd blockonkeys.blpopn_or_unblock k 5 0\n        wait_for_blocked_clients_count 1\n        # this would normally trigger the module to pop, but instead it unblocks the client from the reply_callback\n        r lpush k dd\n        # we should still get unblocked, since the command is not reprocessed\n        wait_for_blocked_clients_count 0\n        assert_equal {Action aborted} [$rd read]\n        $rd get somekey\n        assert_equal {someval} [$rd read]\n        $rd close\n    }\n\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n    start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n        set replica [srv 0 client]\n        set replica_host [srv 0 host]\n        set replica_port [srv 0 port]\n\n        # Start the replication process...\n        $replica replicaof $master_host $master_port\n        wait_for_sync $replica\n\n        test {WAIT command on module blocked client on keys} {\n            set rd [redis_deferring_client -1]\n            $rd set x y\n            $rd read\n\n            pause_process [srv 0 pid]\n\n            $master del k\n            $rd fsl.bpop k 0\n            wait_for_blocked_client -1\n            $master fsl.push k 34\n            $master fsl.push k 35\n            assert_equal {34} [$rd read]\n\n            assert_equal [$master wait 1 1000] 0\n            resume_process [srv 0 pid]\n            assert_equal [$master wait 1 1000] 1\n            $rd close\n            assert_equal {35} [$replica fsl.getall k]\n        }\n    }\n\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/cluster.tcl",
    "content": "# Primitive tests on cluster-enabled redis with modules\n\nsource tests/support/cli.tcl\n\n# cluster creation is complicated with TLS, and the current tests don't really need that coverage\ntags {tls:skip external:skip cluster modules} {\n\nset testmodule_nokey [file normalize tests/modules/blockonbackground.so]\nset testmodule_blockedclient [file normalize tests/modules/blockedclient.so]\nset testmodule [file normalize tests/modules/blockonkeys.so]\n\nset modules [list loadmodule $testmodule loadmodule $testmodule_nokey loadmodule $testmodule_blockedclient]\nstart_cluster 3 0 [list tags {external:skip cluster modules} config_lines $modules] {\n\n    set node1 [srv 0 client]\n    set node2 [srv -1 client]\n    set node3 [srv -2 client]\n    set node3_pid [srv -2 pid]\n\n    test \"Run blocking command (blocked on key) on cluster node3\" {\n        # key9184688 is mapped to slot 10923 (first slot of node 3)\n        set node3_rd [redis_deferring_client -2]\n        $node3_rd fsl.bpop key9184688 0\n        $node3_rd flush\n        wait_for_condition 50 100 {\n            [s -2 blocked_clients] eq {1}\n        } else {\n            fail \"Client executing blocking command (blocked on key) not blocked\"\n        }\n    }\n\n    test \"Run blocking command (no keys) on cluster node2\" {\n        set node2_rd [redis_deferring_client -1]\n        $node2_rd block.block 0\n        $node2_rd flush\n\n        wait_for_condition 50 100 {\n            [s -1 blocked_clients] eq {1}\n        } else {\n            fail \"Client executing blocking command (no keys) not blocked\"\n        }\n    }\n\n\n    test \"Perform a Resharding\" {\n        exec src/redis-cli --cluster-yes --cluster reshard 127.0.0.1:[srv -2 port] \\\n                           --cluster-to [$node1 cluster myid] \\\n                           --cluster-from [$node3 cluster myid] \\\n                           --cluster-slots 1\n    }\n\n    test \"Verify command (no keys) is unaffected after 
resharding\" {\n        # verify there are blocked clients on node2\n        assert_equal [s -1 blocked_clients]  {1}\n\n        # release the client\n        $node2 block.release 0\n    }\n\n    test \"Verify command (blocked on key) got unblocked after resharding\" {\n        # this (read) will wait for node3 to realize the new topology\n        assert_error {*MOVED*} {$node3_rd read}\n\n        # verify there are no blocked clients\n        assert_equal [s 0 blocked_clients]  {0}\n        assert_equal [s -1 blocked_clients]  {0}\n        assert_equal [s -2 blocked_clients]  {0}\n    }\n\n    test \"Wait for cluster to be stable\" {\n        wait_for_condition 1000 50 {\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv 0 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -1 port]}] == 0 &&\n            [catch {exec src/redis-cli --cluster check 127.0.0.1:[srv -2 port]}] == 0 &&\n            [CI 0 cluster_state] eq {ok} &&\n            [CI 1 cluster_state] eq {ok} &&\n            [CI 2 cluster_state] eq {ok}\n        } else {\n            fail \"Cluster doesn't stabilize\"\n        }\n    }\n\n    test \"Sanity test push cmd after resharding\" {\n        assert_error {*MOVED*} {$node3 fsl.push key9184688 1}\n\n        set node1_rd [redis_deferring_client 0]\n        $node1_rd fsl.bpop key9184688 0\n        $node1_rd flush\n\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            puts \"Client not blocked\"\n            puts \"read from blocked client: [$node1_rd read]\"\n            fail \"Client not blocked\"\n        }\n\n        $node1 fsl.push key9184688 2\n        assert_equal {2} [$node1_rd read]\n    }\n\n    $node1_rd close\n    $node2_rd close\n    $node3_rd close\n\n    test \"Run blocking command (blocked on key) again on cluster node1\" {\n        $node1 del key9184688\n        # key9184688 is mapped to slot 10923 which has been moved to 
node1\n        set node1_rd [redis_deferring_client 0]\n        $node1_rd fsl.bpop key9184688 0\n        $node1_rd flush\n\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            fail \"Client executing blocking command (blocked on key) again not blocked\"\n        }\n    }\n\n    test \"Run blocking command (no keys) again on cluster node2\" {\n        set node2_rd [redis_deferring_client -1]\n\n        $node2_rd block.block 0\n        $node2_rd flush\n\n        wait_for_condition 50 100 {\n            [s -1 blocked_clients] eq {1}\n        } else {\n            fail \"Client executing blocking command (no keys) again not blocked\"\n        }\n    }\n\n    test \"Kill a cluster node and wait for fail state\" {\n        # kill node3 in cluster\n        pause_process $node3_pid\n\n        wait_for_condition 1000 50 {\n            [CI 0 cluster_state] eq {fail} &&\n            [CI 1 cluster_state] eq {fail}\n        } else {\n            fail \"Cluster doesn't fail\"\n        }\n    }\n\n    test \"Verify command (blocked on key) got unblocked after cluster failure\" {\n        assert_error {*CLUSTERDOWN*} {$node1_rd read}\n    }\n\n    test \"Verify command (no keys) got unblocked after cluster failure\" {\n        assert_error {*CLUSTERDOWN*} {$node2_rd read}\n\n        # verify there are no blocked clients\n        assert_equal [s 0 blocked_clients]  {0}\n        assert_equal [s -1 blocked_clients]  {0}\n    }\n\n    test \"Verify command RM_Call is rejected when cluster is down\" {\n        assert_error \"ERR Can not execute a command 'set' while the cluster is down\" {$node1 do_rm_call set x 1}\n    }\n\n    resume_process $node3_pid\n    $node1_rd close\n    $node2_rd close\n}\n\nset testmodule_keyspace_events [file normalize tests/modules/keyspace_events.so]\nset testmodule_postnotifications \"[file normalize tests/modules/postnotifications.so] with_key_events\"\nset modules [list loadmodule 
$testmodule_keyspace_events loadmodule $testmodule_postnotifications]\nstart_cluster 2 2 [list tags {external:skip cluster modules} config_lines $modules] {\n\n    set master1 [srv 0 client]\n    set master2 [srv -1 client]\n    set replica1 [srv -2 client]\n    set replica2 [srv -3 client]\n\n    test \"Verify that key deletions and notification effects triggered by cluster slot changes are replicated inside MULTI/EXEC\" {\n        $master2 set count_dels_{4oi} 1\n        $master2 del count_dels_{4oi}\n        assert_equal 1 [$master2 keyspace.get_dels]\n        wait_for_ofs_sync $master2 $replica2\n        assert_equal 1 [$replica2 keyspace.get_dels]\n        $master2 set count_dels_{4oi} 1\n\n        set repl [attach_to_replication_stream_on_connection -3]\n\n        $master1 cluster bumpepoch\n        $master1 cluster setslot 16382 node [$master1 cluster myid]\n\n        wait_for_cluster_propagation\n        wait_for_condition 50 100 {\n            [$master2 keyspace.get_dels] eq 2\n        } else {\n            fail \"master did not delete the key\"\n        }\n        wait_for_condition 50 100 {\n            [$replica2 keyspace.get_dels] eq 2\n        } else {\n            fail \"replica did not increase del counter\"\n        }\n\n        # the {lpush before_deleted count_dels_{4oi}} is a post-notification job registered when 'count_dels_{4oi}' was removed\n        assert_replication_stream $repl {\n            {multi}\n            {del count_dels_{4oi}}\n            {keyspace.incr_dels}\n            {lpush before_deleted count_dels_{4oi}}\n            {exec}\n        }\n        close_replication_stream $repl\n    }\n}\n\n}\n\nset testmodule [file normalize tests/modules/basics.so]\nset modules [list loadmodule $testmodule]\nstart_cluster 3 0 [list tags {external:skip cluster modules} config_lines $modules] {\n    set node1 [srv 0 client]\n    set node2 [srv -1 client]\n    set node3 [srv -2 client]\n\n    test \"Verify RM_Call inside module load function on 
cluster mode\" {\n        assert_equal {PONG} [$node1 PING]\n        assert_equal {PONG} [$node2 PING]\n        assert_equal {PONG} [$node3 PING]\n    }\n}\n\n# -----------------------------------------------------------------------------\n# Test cases for RM_StringTruncate memory tracking.\n# This verifies that memory tracking works correctly when the module API truncates strings.\n# -----------------------------------------------------------------------------\n\nstart_cluster 1 0 [list tags {external:skip cluster needs:debug modules} config_lines $modules overrides {cluster-slot-stats-enabled yes}] {\n    set node1 [srv 0 client]\n\n    # Enable debug assertion that validates memory tracking after each command.\n    # This will cause a panic if tracked memory doesn't match actual memory.\n    $node1 DEBUG ALLOCSIZE-SLOTS-ASSERT 1\n\n    test \"RM_StringTruncate memory tracking\" {\n        # The test.string.truncate command:\n        # 1. Creates a key \"foo\" with value \"abcde\" (5 bytes)\n        # 2. Truncates (expands) to 8 bytes\n        # 3. Truncates (shrinks) to 4 bytes\n        # 4. Truncates (shrinks) to 0 bytes\n        #\n        # Without the fix, memory tracking was missing in RM_StringTruncate,\n        # causing the DEBUG ALLOCSIZE-SLOTS-ASSERT to panic.\n        $node1 test.string.truncate\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/cmdintrospection.tcl",
    "content": "set testmodule [file normalize tests/modules/cmdintrospection.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    # cmdintrospection.xadd mimics XADD with regard to what\n    # COMMAND exposes. There are two differences:\n    #\n    # 1. cmdintrospection.xadd (and all module commands) do not have ACL categories\n    # 2. cmdintrospection.xadd's `group` is \"module\"\n    #\n    # These tests verify that, apart from the above differences, the outputs of\n    # COMMAND INFO and COMMAND DOCS are identical for the two commands.\n    test \"Module command introspection via COMMAND INFO\" {\n        set redis_reply [lindex [r command info xadd] 0]\n        set module_reply [lindex [r command info cmdintrospection.xadd] 0]\n        for {set i 1} {$i < [llength $redis_reply]} {incr i} {\n            if {$i == 2} {\n                # Remove the \"module\" flag\n                set mylist [lindex $module_reply $i]\n                set idx [lsearch $mylist \"module\"]\n                set mylist [lreplace $mylist $idx $idx]\n                lset module_reply $i $mylist\n            }\n            if {$i == 6} {\n                # Skip ACL categories\n                continue\n            }\n            assert_equal [lindex $redis_reply $i] [lindex $module_reply $i]\n        }\n    }\n\n    test \"Module command introspection via COMMAND DOCS\" {\n        set redis_reply [dict create {*}[lindex [r command docs xadd] 1]]\n        set module_reply [dict create {*}[lindex [r command docs cmdintrospection.xadd] 1]]\n        # Compare the maps. 
We need to pop \"group\" first.\n        dict unset redis_reply group\n        dict unset module_reply group\n        dict unset module_reply module\n        if {$::log_req_res} {\n            dict unset redis_reply reply_schema\n        }\n\n        assert_equal $redis_reply $module_reply\n    }\n\n    test \"Unload the module - cmdintrospection\" {\n        assert_equal {OK} [r module unload cmdintrospection]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/commandfilter.tcl",
    "content": "set testmodule [file normalize tests/modules/commandfilter.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule log-key 0\n\n    test {Retain a command filter argument} {\n        # Retain an argument now. Later we'll try to re-read it and make sure\n        # it is not corrupt and that valgrind does not complain.\n        r rpush some-list @retain my-retained-string\n        r commandfilter.retained\n    } {my-retained-string}\n\n    test {Command Filter handles redirected commands} {\n        r set mykey @log\n        r lrange log-key 0 -1\n    } \"{set mykey @log}\"\n\n    test {Command Filter can call RedisModule_CommandFilterArgDelete} {\n        r rpush mylist elem1 @delme elem2\n        r lrange mylist 0 -1\n    } {elem1 elem2}\n\n    test {Command Filter can call RedisModule_CommandFilterArgInsert} {\n        r del mylist\n        r rpush mylist elem1 @insertbefore elem2 @insertafter elem3\n        r lrange mylist 0 -1\n    } {elem1 --inserted-before-- @insertbefore elem2 @insertafter --inserted-after-- elem3}\n\n    test {Command Filter can call RedisModule_CommandFilterArgReplace} {\n        r del mylist\n        r rpush mylist elem1 @replaceme elem2\n        r lrange mylist 0 -1\n    } {elem1 --replaced-- elem2}\n\n    test {Command Filter applies on RM_Call() commands} {\n        r del log-key\n        r commandfilter.ping\n        r lrange log-key 0 -1\n    } \"{ping @log}\"\n\n    test {Command Filter applies on Lua redis.call()} {\n        r del log-key\n        r eval \"redis.call('ping', '@log')\" 0\n        r lrange log-key 0 -1\n    } \"{ping @log}\"\n\n    test {Command Filter applies on Lua redis.call() that calls a module} {\n        r del log-key\n        r eval \"redis.call('commandfilter.ping')\" 0\n        r lrange log-key 0 -1\n    } \"{ping @log}\"\n\n    test {Command Filter strings can be retained} {\n        r commandfilter.retained\n    } {my-retained-string}\n\n    test {Command 
Filter is unregistered implicitly on module unload} {\n        r del log-key\n        r module unload commandfilter\n        r set mykey @log\n        r lrange log-key 0 -1\n    } {}\n\n    r module load $testmodule log-key 0\n\n    test {Command Filter unregister works as expected} {\n        # Validate reloading succeeded\n        r del log-key\n        r set mykey @log\n        assert_equal \"{set mykey @log}\" [r lrange log-key 0 -1]\n\n        # Unregister\n        r commandfilter.unregister\n        r del log-key\n\n        r set mykey @log\n        r lrange log-key 0 -1\n    } {}\n\n    r module unload commandfilter\n    r module load $testmodule log-key 1\n\n    test {Command Filter REDISMODULE_CMDFILTER_NOSELF works as expected} {\n        r set mykey @log\n        assert_equal \"{set mykey @log}\" [r lrange log-key 0 -1]\n\n        r del log-key\n        r commandfilter.ping\n        assert_equal {} [r lrange log-key 0 -1]\n\n        r eval \"redis.call('commandfilter.ping')\" 0\n        assert_equal {} [r lrange log-key 0 -1]\n    }\n\n    test \"Unload the module - commandfilter\" {\n        assert_equal {OK} [r module unload commandfilter]\n    }\n}\n\ntest {RM_CommandFilterArgInsert and script argv caching} {\n    # coverage for scripts calling commands that expand the argv array\n    # an attempt to add coverage for a possible bug in luaArgsToRedisArgv\n    # this test needs a fresh server so that lua_argv_size is 0.\n    # glibc realloc can return the same pointer even when the size changes\n    # still this test isn't able to trigger the issue, but we keep it anyway.\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule log-key 0\n        r del mylist\n        # command with 6 args\n        r eval {redis.call('rpush', KEYS[1], 'elem1', 'elem2', 'elem3', 'elem4')} 1 mylist\n        # command with 3 args that is changed to 4\n        r eval {redis.call('rpush', KEYS[1], '@insertafter')} 1 mylist\n        # command 
with 6 args again\n        r eval {redis.call('rpush', KEYS[1], 'elem1', 'elem2', 'elem3', 'elem4')} 1 mylist\n        assert_equal [r lrange mylist 0 -1] {elem1 elem2 elem3 elem4 @insertafter --inserted-after-- elem1 elem2 elem3 elem4}\n    }\n}\n\n# previously, there was a bug where command filters would be rerun on unblock (which would cause args to swap back);\n# this test is meant to protect against that bug\ntest {Blocking Commands don't run through command filter when reprocessed} {\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule log-key 0\n\n        r del list1{t}\n        r del list2{t}\n\n        r lpush list2{t} a b c d e\n\n        set rd [redis_deferring_client]\n        # we're asking to pop from the left, but the command filter swaps the two arguments;\n        # if it didn't swap them, we would end up with e d c b a 5 (5 being the leftmost of the following lpush),\n        # but since it swaps the arguments, we end up with 1 e d c b a (1 being the rightmost of it).\n        # if the command filter ran again on unblock, the arguments would be swapped back.\n        $rd blmove list1{t} list2{t} left right 0\n        wait_for_blocked_client\n        r lpush list1{t} 1 2 3 4 5\n        # validate that we moved the correct element with the swapped args\n        assert_equal [$rd read] 1\n        # validate that we moved the correct elements to the correct side of the list\n        assert_equal [r lpop list2{t}] 1\n\n        $rd close\n    }\n}\n\ntest {Filtering based on client id} {\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule log-key 0\n\n        set rr [redis_client]\n        set cid [$rr client id]\n        r unfilter_clientid $cid\n\n        r rpush mylist elem1 @replaceme elem2\n        assert_equal [r lrange mylist 0 -1] {elem1 --replaced-- elem2}\n\n        r del mylist\n\n        assert_equal [$rr rpush mylist elem1 @replaceme elem2] 3\n        assert_equal [r lrange mylist 0 
-1] {elem1 @replaceme elem2}\n\n        $rr close\n    }\n}\n\nstart_server {tags {\"external:skip\"}} {\n    test {OnLoad failure will handle un-registration} {\n        catch {r module load $testmodule log-key 0 noload}\n        r set mykey @log\n        assert_equal [r lrange log-key 0 -1] {}\n        r rpush mylist elem1 @delme elem2\n        assert_equal [r lrange mylist 0 -1] {elem1 @delme elem2}\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/configaccess.tcl",
    "content": "set testmodule [file normalize tests/modules/configaccess.so]\nset othermodule [file normalize tests/modules/moduleconfigs.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    r module loadex $othermodule CONFIG moduleconfigs.mutable_bool yes\n\n    test {Test module config get with standard Redis configs} {\n        # Test getting standard Redis configs of different types\n        set maxmemory [r config get maxmemory]\n        assert_equal [lindex $maxmemory 1] [r configaccess.getnumeric maxmemory]\n\n        set port [r config get port]\n        assert_equal [lindex $port 1] [r configaccess.getnumeric port]\n\n        set appendonly [r config get appendonly]\n        assert_equal [string is true [lindex $appendonly 1]] [r configaccess.getbool appendonly]\n\n        # Test string config\n        set logfile [r config get logfile]\n        assert_equal [lindex $logfile 1] [r configaccess.get logfile]\n\n        # Test SDS config\n        set requirepass [r config get requirepass]\n        assert_equal [lindex $requirepass 1] [r configaccess.get requirepass]\n\n        # Test special config\n        set oom_score_adj_values [r config get oom-score-adj-values]\n        assert_equal [lindex $oom_score_adj_values 1] [r configaccess.get oom-score-adj-values]\n\n        set maxmemory_policy_name [r configaccess.getenum maxmemory-policy]\n        assert_equal [lindex [r config get maxmemory-policy] 1] $maxmemory_policy_name\n\n        # Test percent config\n        r config set maxmemory 100000\n        r configaccess.setnumeric maxmemory-clients -50\n        assert_equal [lindex [r config get maxmemory-clients] 1] 50%\n\n        # Test multi-argument enum config\n        r config set moduleconfigs.flags \"one two four\"\n        assert_equal \"five two\" [r configaccess.getenum moduleconfigs.flags]\n\n        # Test getting multi-argument enum config via generic get\n        r config set moduleconfigs.flags \"two 
four\"\n        assert_equal \"two four\" [r configaccess.get moduleconfigs.flags]\n    }\n\n    test {Test module config get with non-existent configs} {\n        # Test getting non-existent configs\n        catch {r configaccess.getnumeric nonexistent_config} err\n        assert_match \"ERR*\" $err\n\n        catch {r configaccess.getbool nonexistent_config} err\n        assert_match \"ERR*\" $err\n\n        catch {r configaccess.get nonexistent_config} err\n        assert_match \"ERR*\" $err\n\n        catch {r configaccess.getenum nonexistent_config} err\n        assert_match \"ERR*\" $err\n    }\n\n    test {Test module config set with standard Redis configs} {\n        # Test setting numeric config\n        set old_maxmemory_samples [r config get maxmemory-samples]\n        r configaccess.setnumeric maxmemory-samples 10\n        assert_equal \"maxmemory-samples 10\" [r config get maxmemory-samples]\n        r config set maxmemory-samples [lindex $old_maxmemory_samples 1]\n\n        # Test setting bool config\n        set old_protected_mode [r config get protected-mode]\n        r configaccess.setbool protected-mode no\n        assert_equal \"protected-mode no\" [r config get protected-mode]\n        r config set protected-mode [lindex $old_protected_mode 1]\n\n        # Test setting string config\n        set old_masteruser [r config get masteruser]\n        r configaccess.set masteruser \"__newmasteruser__\"\n        assert_equal \"__newmasteruser__\" [lindex [r config get masteruser] 1]\n        r config set masteruser [lindex $old_masteruser 1]\n\n        # Test setting enum config\n        set old_loglevel [r config get loglevel]\n        r config set loglevel \"notice\" ; # Set to some value we are sure is different than the one tested\n        r configaccess.setenum loglevel warning\n        assert_equal \"loglevel warning\" [r config get loglevel]\n        r config set loglevel [lindex $old_loglevel 1]\n\n        # Test setting multi-argument enum 
config\n        r config set moduleconfigs.flags \"one two four\"\n        assert_equal \"moduleconfigs.flags {five two}\" [r config get moduleconfigs.flags]\n        r configaccess.setenum moduleconfigs.flags \"two four\"\n        assert_equal \"moduleconfigs.flags {two four}\" [r config get moduleconfigs.flags]\n\n        # Test setting multi-argument enum config via generic set\n        r config set moduleconfigs.flags \"one two four\"\n        assert_equal \"moduleconfigs.flags {five two}\" [r config get moduleconfigs.flags]\n        r configaccess.set moduleconfigs.flags \"two four\"\n        assert_equal \"moduleconfigs.flags {two four}\" [r config get moduleconfigs.flags]\n    }\n\n    test {Test module config set with module configs} {\n        # Test setting module bool config\n        assert_equal \"OK\" [r configaccess.setbool configaccess.bool no]\n        assert_equal \"configaccess.bool no\" [r config get configaccess.bool]\n\n        # Test setting module bool config from another module\n        assert_equal \"OK\" [r configaccess.setbool moduleconfigs.mutable_bool no]\n        assert_equal \"moduleconfigs.mutable_bool no\" [r config get moduleconfigs.mutable_bool]\n\n        # Test setting module numeric config\n        assert_equal \"OK\" [r configaccess.setnumeric moduleconfigs.numeric 100]\n        assert_equal \"moduleconfigs.numeric 100\" [r config get moduleconfigs.numeric]\n\n        # Test setting module enum config\n        assert_equal \"OK\" [r configaccess.setenum moduleconfigs.enum \"five\"]\n        assert_equal \"moduleconfigs.enum five\" [r config get moduleconfigs.enum]\n    }\n\n    test {Test module config set with error cases} {\n        # Test setting a non-existent config\n        catch {r configaccess.setbool nonexistent_config yes} err\n        assert_match \"*ERR*\" $err\n\n        # Test setting a read-only config\n        catch {r configaccess.setbool moduleconfigs.immutable_bool yes} err\n        assert_match \"*ERR*\" 
$err\n\n        # Test setting an enum config with invalid value\n        catch {r configaccess.setenum moduleconfigs.enum \"invalid_value\"} err\n        assert_match \"*ERR*\" $err\n\n        # Test setting a numeric config with out-of-range value\n        catch {r configaccess.setnumeric moduleconfigs.numeric 5000} err\n        assert_match \"*ERR*\" $err\n        catch {r configaccess.setnumeric maxclients -1} err\n        assert_match \"*Failed to set numeric config maxclients: argument must be between*\" $err\n        catch {r configaccess.setnumeric maxclients -9223372036854775808} err\n        assert_match \"*Failed to set numeric config maxclients: argument must be between*\" $err\n\n        # Sanity check\n        assert_equal [r configaccess.setnumeric maxmemory 18446744073709551615] \"OK\"\n        assert_equal [r configaccess.setnumeric maxmemory -1] \"OK\"\n    }\n\n    test {Test module get all configs} {\n        # Get all configs using the module command\n        set all_configs [r configaccess.getconfigs]\n\n        # Verify the number of configs matches the number of configs returned\n        # by Redis's native CONFIG GET command.\n        set all_configs_std_pairs [llength [r config get *]]\n\n        # When comparing with the standard CONFIG GET command, we need to divide\n        # by 2 because the standard command returns a flattened array of\n        # key-value pairs whereas our testing command returns an array of pairs.\n        assert_equal [llength $all_configs] [expr $all_configs_std_pairs / 2]\n\n        # Verify all the configs are present in both replies.\n        foreach config_pair $all_configs {\n            assert_equal 2 [llength $config_pair]\n            set config_name [lindex $config_pair 0]\n            set config_value [lindex $config_pair 1]\n\n            # Verify that we can get this config using standard config get\n            set redis_config [r config get $config_name]\n            assert {[llength 
$redis_config] != 0}\n\n            assert_equal $config_value [lindex $redis_config 1]\n        }\n\n        # Test that module configs are also included\n        set found_module_config 0\n        foreach config_pair $all_configs {\n            set config_name [lindex $config_pair 0]\n            if {$config_name eq \"configaccess.bool\"} {\n                set found_module_config 1\n                break\n            }\n        }\n\n        assert {$found_module_config == 1}\n\n        # Test pattern matching\n        set moduleconfigs_count [r configaccess.getconfigs \"moduleconfigs.*\"]\n        assert_equal 7 [llength $moduleconfigs_count]\n\n        set memoryconfigs_count [r configaccess.getconfigs \"*memory\"]\n        assert_equal 3 [llength $memoryconfigs_count]\n    }\n\n    test {Test module config type detection} {\n        # Test getting config types for different config types\n        assert_equal \"bool\" [r configaccess.getconfigtype appendonly]\n        assert_equal \"numeric\" [r configaccess.getconfigtype port]\n        assert_equal \"string\" [r configaccess.getconfigtype logfile]\n        assert_equal \"enum\" [r configaccess.getconfigtype maxmemory-policy]\n\n        # Test with module config\n        assert_equal \"bool\" [r configaccess.getconfigtype configaccess.bool]\n\n        # Test with non-existent config\n        catch {r configaccess.getconfigtype nonexistent_config} err\n        assert_match \"ERR Config does not exist\" $err\n    }\n\n    test {Test config rollback on apply} {\n        set og_port [lindex [r config get port] 1]\n\n        set used_port [find_available_port $::baseport $::portcount]\n\n        # Run a dummy server on used_port so we know we can't configure redis to \n        # use it. 
It's ok for this to fail because that means used_port is invalid \n        # anyway\n        catch {set sockfd [socket -server dummy_accept -myaddr 127.0.0.1 $used_port]} e\n        if {$::verbose} { puts \"dummy_accept: $e\" }\n\n        # Try to listen on the used port, pass some more configs to make sure the\n        # returned failure message is for the first bad config and everything is rolled back.\n        assert_error \"ERR Failed to set numeric config port: Unable to listen on this port*\" {\n            eval \"r configaccess.setnumeric port $used_port\"\n        }\n\n        assert_equal [lindex [r config get port] 1] $og_port\n        close $sockfd\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/crash.tcl",
    "content": "# This file is used to test certain crash edge cases to make sure they produce\n# correct stack traces for debugging.\nset testmodule [file normalize tests/modules/crash.so]\nset backtrace_supported [system_backtrace_supported]\n\n# Valgrind will complain that the process terminated by a signal, skip it.\nif {!$::valgrind && !$::tsan} {\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule assert\n        test {Test module crash when info crashes with an assertion } {\n            catch {r 0 info modulecrash}\n            set res [wait_for_log_messages 0 {\"*=== REDIS BUG REPORT START: Cut & paste starting from here ===*\"} 0 10 1000]\n            set loglines [lindex $res 1]\n\n            set res [wait_for_log_messages 0 {\"*ASSERTION FAILED*\"} $loglines 10 1000]\n            set loglines [lindex $res 1]\n\n            set res [wait_for_log_messages 0 {\"*RECURSIVE ASSERTION FAILED*\"} $loglines 10 1000]\n            set loglines [lindex $res 1]\n\n            wait_for_log_messages 0 {\"*=== REDIS BUG REPORT END. Make sure to include from START to END. ===*\"} $loglines 10 1000\n            assert_equal 1 [count_log_message 0 \"=== REDIS BUG REPORT END. Make sure to include from START to END. ===\"]\n            assert_equal 2 [count_log_message 0 \"ASSERTION FAILED\"]\n            if {$backtrace_supported} {\n                # Make sure the crash trace is printed twice. 
There will be 3 instances of\n                # assertCrash, 1 in the first stack trace and 2 in the second.\n                assert_equal 3 [count_log_message 0 \"assertCrash\"]\n            }\n            assert_equal 1 [count_log_message 0 \"RECURSIVE ASSERTION FAILED\"]\n            assert_equal 1 [count_log_message 0 \"=== REDIS BUG REPORT START: Cut & paste starting from here ===\"]\n        }\n    }\n\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule segfault\n        test {Test module crash when info crashes with a segfault} {\n            catch {r 0 info modulecrash}\n            set res [wait_for_log_messages 0 {\"*=== REDIS BUG REPORT START: Cut & paste starting from here ===*\"} 0 10 1000]\n            set loglines [lindex $res 1]\n\n            if {$backtrace_supported} {\n                set res [wait_for_log_messages 0 {\"*Crashed running the instruction at*\"} $loglines 10 1000]\n                set loglines [lindex $res 1]\n\n                set res [wait_for_log_messages 0 {\"*Crashed running signal handler. Providing reduced version of recursive crash report*\"} $loglines 10 1000]\n                set loglines [lindex $res 1]\n                set res [wait_for_log_messages 0 {\"*Crashed running the instruction at*\"} $loglines 10 1000]\n                set loglines [lindex $res 1]\n            }\n\n            wait_for_log_messages 0 {\"*=== REDIS BUG REPORT END. Make sure to include from START to END. ===*\"} $loglines 10 1000\n            assert_equal 1 [count_log_message 0 \"=== REDIS BUG REPORT END. Make sure to include from START to END. ===\"]\n            assert_equal 1 [count_log_message 0 \"Crashed running signal handler. Providing reduced version of recursive crash report\"]\n            if {$backtrace_supported} {\n                assert_equal 2 [count_log_message 0 \"Crashed running the instruction at\"]\n                # Make sure the crash trace is printed twice. 
There will be 3 instances of \n                # modulesCollectInfo, 1 in the first stack trace and 2 in the second.\n                assert_equal 3 [count_log_message 0 \"modulesCollectInfo\"]\n            }\n            assert_equal 1 [count_log_message 0 \"=== REDIS BUG REPORT START: Cut & paste starting from here ===\"]\n        }\n    }\n\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule\n\n        # memcheck confuses sanitizer\n        r config set crash-memcheck-enabled no\n\n        test {Test command tokens are printed when hide-user-data-from-log is enabled (xadd)} {\n            r config set hide-user-data-from-log yes\n            catch {r 0 modulecrash.xadd key NOMKSTREAM MAXLEN ~ 1000 * a b}\n\n            wait_for_log_messages 0 {\"*argv*0*: *modulecrash.xadd*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*1*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*2*: *NOMKSTREAM*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*3*: *MAXLEN*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*4*: *~*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*5*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*6*: *\\**\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*7*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*8*: *redacted*\"} 0 10 1000\n        }\n    }\n\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule\n\n        # memcheck confuses sanitizer\n        r config set crash-memcheck-enabled no\n\n        test {Test command tokens are printed when hide-user-data-from-log is enabled (zunion)} {\n            r config set hide-user-data-from-log yes\n            catch {r 0 modulecrash.zunion 2 zset1 zset2 WEIGHTS 1 2 WITHSCORES somedata}\n\n            wait_for_log_messages 0 {\"*argv*0*: *modulecrash.zunion*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*1*: 
*redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*2*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*3*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*4*: *WEIGHTS*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*5*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*6*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*7*: *WITHSCORES*\"} 0 10 1000\n\n            # We don't expect arguments after WITHSCORES, but just in case there\n            # are any, we'd rather not print them\n            wait_for_log_messages 0 {\"*argv*8*: *redacted*\"} 0 10 1000\n        }\n    }\n\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule\n\n        # memcheck confuses sanitizer\n        r config set crash-memcheck-enabled no\n\n        test {Test subcommand name is printed when hide-user-data-from-log is enabled} {\n            r config set hide-user-data-from-log yes\n            catch {r 0 modulecrash.parent subcmd key TOKEN a b}\n\n            wait_for_log_messages 0 {\"*argv*0*: *modulecrash.parent*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*1*: *subcmd*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*2*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*3*: *TOKEN*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*4*: *redacted*\"} 0 10 1000\n            wait_for_log_messages 0 {\"*argv*5*: *redacted*\"} 0 10 1000\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/datatype.tcl",
    "content": "set testmodule [file normalize tests/modules/datatype.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    test {DataType: test loadex with invalid config} {\n        catch { r module loadex $testmodule CONFIG invalid_config 1 } e\n        assert_match {*ERR Error loading the extension*} $e\n    }\n\n    r module load $testmodule\n\n    test {DataType: Test module is sane, GET/SET work.} {\n        r datatype.set dtkey 100 stringval\n        assert {[r datatype.get dtkey] eq {100 stringval}}\n    }\n\n    test {test blocking of datatype creation outside of OnLoad} {\n        assert_equal [r block.create.datatype.outside.onload] OK\n    }\n\n    test {DataType: RM_SaveDataTypeToString(), RM_LoadDataTypeFromStringEncver() work} {\n        r datatype.set dtkey -1111 MyString\n        set encoded [r datatype.dump dtkey]\n\n        assert {[r datatype.restore dtkeycopy $encoded 4] eq {4}}\n        assert {[r datatype.get dtkeycopy] eq {-1111 MyString}}\n    }\n\n    test {DataType: Handle truncated RM_LoadDataTypeFromStringEncver()} {\n        r datatype.set dtkey -1111 MyString\n        set encoded [r datatype.dump dtkey]\n        set truncated [string range $encoded 0 end-1]\n\n        catch {r datatype.restore dtkeycopy $truncated 4} e\n        set e\n    } {*Invalid*}\n\n    test {DataType: ModuleTypeReplaceValue() happy path works} {\n        r datatype.set key-a 1 AAA\n        r datatype.set key-b 2 BBB\n\n        assert {[r datatype.swap key-a key-b] eq {OK}}\n        assert {[r datatype.get key-a] eq {2 BBB}}\n        assert {[r datatype.get key-b] eq {1 AAA}}\n    }\n\n    test {DataType: ModuleTypeReplaceValue() fails on non-module keys} {\n        r datatype.set key-a 1 AAA\n        r set key-b RedisString\n\n        catch {r datatype.swap key-a key-b} e\n        set e\n    } {*ERR*}\n\n    test {DataType: Copy command works for modules} {\n        # Test failed copies\n        r datatype.set answer-to-universe 42 AAA\n        catch 
{r copy answer-to-universe answer2} e\n        assert_match {*module key failed to copy*} $e\n\n        # Our module's data type copy function copies the int value as-is\n        # but appends /<from-key>/<to-key> to the string value so we can\n        # track passed arguments.\n        r datatype.set sourcekey 1234 AAA\n        r copy sourcekey targetkey\n        r datatype.get targetkey\n    } {1234 AAA/sourcekey/targetkey}\n\n    test {DataType: Slow Loading} {\n        r config set busy-reply-threshold 5000 ;# make sure we're using a high default\n        # trigger slow loading\n        r datatype.slow_loading 1\n        set rd [redis_deferring_client]\n        set start [clock clicks -milliseconds]\n        $rd debug reload\n\n        # wait till we know we're blocked inside the module\n        wait_for_condition 50 100 {\n            [r datatype.is_in_slow_loading] eq 1\n        } else {\n            fail \"Failed waiting for slow loading to start\"\n        }\n\n        # make sure we get LOADING error, and that we didn't get here late (not waiting for busy-reply-threshold)\n        assert_error {*LOADING*} {r ping}\n        assert_lessthan [expr [clock clicks -milliseconds]-$start] 2000\n\n        # abort the blocking operation\n        r datatype.slow_loading 0\n        wait_for_condition 50 100 {\n            [s loading] eq {0}\n        } else {\n            fail \"Failed waiting for loading to end\"\n        }\n        $rd read\n        $rd close\n    }\n\n    test {DataType: check the type name} {\n        r flushdb\n        r datatype.set foo 111 bar\n        assert_type test___dt foo\n    }\n\n    test {SCAN module datatype} {\n        r flushdb\n        populate 1000\n        r datatype.set foo 111 bar\n        set type [r type foo]\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type $type]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            
if {$cur == 0} break\n        }\n\n        assert_equal 1 [llength $keys]    \n    }\n\n    test {SCAN module datatype with case sensitive} {\n        r flushdb\n        populate 1000\n        r datatype.set foo 111 bar\n        set type \"tEsT___dT\"\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type $type]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 1 [llength $keys]\n    }\n\n    if {[string match {*jemalloc*} [s mem_allocator]] && [r debug mallctl arenas.page] <= 8192} {\n        test {Reduce defrag CPU usage when module data can't be defragged} {\n            r flushdb\n            r config set hz 100\n            r config set activedefrag no\n            r config set active-defrag-threshold-lower 5\n            r config set active-defrag-cycle-min 25\n            r config set active-defrag-cycle-max 75\n            r config set active-defrag-ignore-bytes 100kb\n\n            # Populate memory with interleaving field of same size.\n            set n 20000\n            set dummy \"[string repeat x 400]\"\n            set rd [redis_deferring_client]\n            for {set i 0} {$i < $n} {incr i} { $rd datatype.set k$i 1 $dummy }\n            for {set i 0} {$i < [expr $n]} {incr i} { $rd read } ;# Discard replies\n\n            after 120 ;# serverCron only updates the info once in 100ms\n            if {$::verbose} {\n                puts \"used [s allocator_allocated]\"\n                puts \"rss [s allocator_active]\"\n                puts \"frag [s allocator_frag_ratio]\"\n                puts \"frag_bytes [s allocator_frag_bytes]\"\n            }\n            assert_lessthan [s allocator_frag_ratio] 1.05\n\n            for {set i 0} {$i < $n} {incr i 2} { $rd del k$i }\n            for {set j 0} {$j < $n} {incr j 2} { $rd read } ; # Discard del replies\n            after 120 ;# 
serverCron only updates the info once in 100ms\n            assert_morethan [s allocator_frag_ratio] 1.4\n\n            catch {r config set activedefrag yes} e\n            if {[r config get activedefrag] eq \"activedefrag yes\"} {\n                # wait for the active defrag to start working (decision once a second)\n                wait_for_condition 50 100 {\n                    [s total_active_defrag_time] ne 0\n                } else {\n                    after 120 ;# serverCron only updates the info once in 100ms\n                    puts [r info memory]\n                    puts [r info stats]\n                    puts [r memory malloc-stats]\n                    fail \"defrag not started.\"\n                }\n                assert_morethan [s allocator_frag_ratio] 1.4\n\n                # The cpu usage of defragment will drop to active-defrag-cycle-min\n                wait_for_condition 1000 50 {\n                    [s active_defrag_running] == 25\n                } else {\n                    fail \"Unable to reduce the defragmentation speed.\"\n                }\n\n                # Fuzzy test to restore defragmentation speed to normal\n                set end_time [expr {[clock seconds] + 10}]\n                set speed_restored 0\n                while {[clock seconds] < $end_time} {\n                    for {set i 0} {$i < 500} {incr i} {\n                    switch [expr {int(rand() * 3)}] {\n                        0 {\n                            # Randomly delete a key\n                            set random_key [r RANDOMKEY]\n                            if {$random_key != \"\"} {\n                                r DEL $random_key\n                            }\n                        }\n                        1 {\n                            # Randomly overwrite a key\n                            set random_key [r RANDOMKEY]\n                            if {$random_key != \"\"} {\n                                r datatype.set $random_key 
1 $dummy\n                            }\n                        }\n                        2 {\n                            # Randomly generate a new key\n                            set random_key \"key_[expr {int(rand() * 10000)}]\"\n                            r datatype.set $random_key 1 $dummy\n                        }\n                    } ;# end of switch\n                    } ;# end of for\n\n                    # Wait for defragmentation speed to restore.\n                    if {[count_log_message 0 \"*Starting active defrag, frag=*%, frag_bytes=*, cpu=5?%*\"] > 1} {\n                        set speed_restored 1\n                        break;\n                    }\n                }\n                # Make sure the speed is restored\n                assert_equal $speed_restored 1\n\n                # After the traffic disappears, the defragmentation speed will decrease again.\n                wait_for_condition 1000 50 {\n                    [s active_defrag_running] == 25\n                } else {\n                    fail \"Unable to reduce the defragmentation speed after traffic disappears.\"\n                } \n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/datatype2.tcl",
    "content": "set testmodule [file normalize tests/modules/datatype2.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"datatype2: test mem alloc and free\" {\n        r flushall\n\n        r select 0\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal 2 [r mem.alloc k2 2]\n\n        r select 1\n        assert_equal 1 [r mem.alloc k1 1]\n        assert_equal 5 [r mem.alloc k2 5]\n\n        r select 0 \n        assert_equal 1 [r mem.free k1]\n        assert_equal 1 [r mem.free k2]\n\n        r select 1\n        assert_equal 1 [r mem.free k1]\n        assert_equal 1 [r mem.free k2]\n    }\n\n    test \"datatype2: test del and unlink\" {\n        r flushall\n\n        assert_equal 100 [r mem.alloc k1 100]\n        assert_equal 60 [r mem.alloc k2 60]\n\n        assert_equal 1 [r unlink k1]\n        assert_equal 1 [r del k2]\n    }\n\n    test \"datatype2: test read and write\" {\n        r flushall\n\n        assert_equal 3 [r mem.alloc k1 3]\n        \n        set data datatype2\n        assert_equal [string length $data] [r mem.write k1 0 $data]\n        assert_equal $data [r mem.read k1 0]\n    }\n\n    test \"datatype2: test rdb save and load\" {\n        r flushall\n\n        r select 0\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n\n        set data k2\n        assert_equal 2 [r mem.alloc k2 2]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n\n        r select 1\n        set data k3\n        assert_equal 3 [r mem.alloc k3 3]\n        assert_equal [string length $data] [r mem.write k3 1 $data]\n\n        set data k4\n        assert_equal 2 [r mem.alloc k4 2]\n        assert_equal [string length $data] [r mem.write k4 0 $data]\n\n        r bgsave\n        waitForBgsave r\n        r debug reload\n \n        r select 0\n        assert_equal k1 [r mem.read k1 1]\n        assert_equal k2 [r 
mem.read k2 0]\n\n        r select 1\n        assert_equal k3 [r mem.read k3 1]\n        assert_equal k4 [r mem.read k4 0]\n    }\n\n    test \"datatype2: test aof rewrite\" {\n        r flushall\n\n        r select 0\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n\n        set data k2\n        assert_equal 2 [r mem.alloc k2 2]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n\n        r select 1\n        set data k3\n        assert_equal 3 [r mem.alloc k3 3]\n        assert_equal [string length $data] [r mem.write k3 1 $data]\n\n        set data k4\n        assert_equal 2 [r mem.alloc k4 2]\n        assert_equal [string length $data] [r mem.write k4 0 $data]\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n \n        r select 0\n        assert_equal k1 [r mem.read k1 1]\n        assert_equal k2 [r mem.read k2 0]\n\n        r select 1\n        assert_equal k3 [r mem.read k3 1]\n        assert_equal k4 [r mem.read k4 0]\n    }\n\n    test \"datatype2: test copy\" {\n        r flushall\n\n        r select 0\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n        assert_equal $data [r mem.read k1 1]\n\n        set data k2\n        assert_equal 2 [r mem.alloc k2 2]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n        assert_equal $data [r mem.read k2 0]\n\n        r select 1\n        set data k3\n        assert_equal 3 [r mem.alloc k3 3]\n        assert_equal [string length $data] [r mem.write k3 1 $data]\n\n        set data k4\n        assert_equal 2 [r mem.alloc k4 2]\n        assert_equal [string length $data] [r mem.write k4 0 $data]\n\n        assert_equal {total 5 used 2} [r mem.usage 0]\n        assert_equal {total 5 used 2} [r mem.usage 1]\n\n        r select 0\n        assert_equal 1 [r copy k1 k3]\n        
assert_equal k1 [r mem.read k3 1]\n        assert_equal {total 8 used 3} [r mem.usage 0]\n        assert_equal 1 [r copy k2 k1 db 1]\n\n        r select 1\n        assert_equal k2 [r mem.read k1 0]\n        assert_equal {total 8 used 3} [r mem.usage 0]\n        assert_equal {total 7 used 3} [r mem.usage 1]\n    }\n\n    test \"datatype2: test swapdb\" {\n        r flushall\n\n        r select 0\n        set data k1\n        assert_equal 5 [r mem.alloc k1 5]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n        assert_equal $data [r mem.read k1 1]\n\n        set data k2\n        assert_equal 4 [r mem.alloc k2 4]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n        assert_equal $data [r mem.read k2 0]\n\n        r select 1\n        set data k1\n        assert_equal 3 [r mem.alloc k3 3]\n        assert_equal [string length $data] [r mem.write k3 1 $data]\n\n        set data k2\n        assert_equal 2 [r mem.alloc k4 2]\n        assert_equal [string length $data] [r mem.write k4 0 $data]\n\n        assert_equal {total 9 used 2} [r mem.usage 0]\n        assert_equal {total 5 used 2} [r mem.usage 1]\n\n        assert_equal OK [r swapdb 0 1]\n        assert_equal {total 9 used 2} [r mem.usage 1]\n        assert_equal {total 5 used 2} [r mem.usage 0]\n    }\n\n    test \"datatype2: test digest\" {\n        r flushall\n\n        r select 0\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n        assert_equal $data [r mem.read k1 1]\n\n        set data k2\n        assert_equal 2 [r mem.alloc k2 2]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n        assert_equal $data [r mem.read k2 0]\n\n        r select 1\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n        assert_equal $data [r mem.read k1 1]\n\n        set data k2\n        
assert_equal 2 [r mem.alloc k2 2]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n        assert_equal $data [r mem.read k2 0]\n\n        r select 0\n        set digest0 [debug_digest]\n\n        r select 1\n        set digest1 [debug_digest]\n\n        assert_equal $digest0 $digest1\n    }\n\n    test \"datatype2: test memusage\" {\n        r flushall\n\n        set data k1\n        assert_equal 3 [r mem.alloc k1 3]\n        assert_equal [string length $data] [r mem.write k1 1 $data]\n        assert_equal $data [r mem.read k1 1]\n\n        set data k2\n        assert_equal 3 [r mem.alloc k2 3]\n        assert_equal [string length $data] [r mem.write k2 0 $data]\n        assert_equal $data [r mem.read k2 0]\n\n        assert_equal [memory_usage k1] [memory_usage k2] \n    }\n}"
  },
  {
    "path": "tests/unit/moduleapi/defrag.tcl",
    "content": "set testmodule [file normalize tests/modules/defragtest.so]\n\nstart_server {tags {\"modules external:skip debug_defrag:skip\"} overrides {{save \"\"}}} {\n    r module load $testmodule\n    r config set hz 100\n    r config set active-defrag-ignore-bytes 1\n    r config set active-defrag-threshold-lower 0\n    r config set active-defrag-cycle-min 99\n\n    # try to enable active defrag, it will fail if redis was compiled without it\n    catch {r config set activedefrag yes} e\n    if {[r config get activedefrag] eq \"activedefrag yes\"} {\n\n        test {Module defrag: simple key defrag works} {\n            r config set activedefrag no\n            wait_for_condition 100 50 {\n                [s active_defrag_running] eq 0\n            } else {\n                fail \"Unable to wait for active defrag to stop\"\n            }\n\n            r flushdb\n            r frag.resetstats\n            r frag.create key1 1 1000 0\n\n            r config set activedefrag yes\n            wait_for_condition 200 50 {\n                [getInfoProperty [r info defragtest_stats] defragtest_defrag_ended] > 0\n            } else {\n                fail \"Unable to wait for a complete defragmentation cycle to finish\"\n            }\n\n            set info [r info defragtest_stats]\n            assert {[getInfoProperty $info defragtest_datatype_attempts] > 0}\n            assert_equal 0 [getInfoProperty $info defragtest_datatype_resumes]\n            assert_morethan [getInfoProperty $info defragtest_datatype_raw_defragged] 0\n            assert_morethan [getInfoProperty $info defragtest_defrag_started] 0\n            assert_morethan [getInfoProperty $info defragtest_defrag_ended] 0\n        } {} {tsan:skip}\n\n        test {Module defrag: late defrag with cursor works} {\n            r config set activedefrag no\n            wait_for_condition 100 50 {\n                [s active_defrag_running] eq 0\n            } else {\n                fail \"Unable to wait for 
active defrag to stop\"\n            }\n\n            r flushdb\n            r frag.resetstats\n\n            # key can only be defragged in no less than 10 iterations\n            # due to maxstep\n            r frag.create key2 10000 100 1000\n\n            r config set activedefrag yes\n            wait_for_condition 1000 50 {\n                [getInfoProperty [r info defragtest_stats] defragtest_defrag_ended] > 0 &&\n                [getInfoProperty [r info defragtest_stats] defragtest_datatype_resumes] > 10\n            } else {\n                fail \"Unable to wait for a complete defragmentation cycle to finish\"\n            }\n\n            set info [r info defragtest_stats]\n            assert_equal 0 [getInfoProperty $info defragtest_datatype_wrong_cursor]\n            assert_morethan [getInfoProperty $info defragtest_datatype_raw_defragged] 0\n            assert_morethan [getInfoProperty $info defragtest_defrag_started] 0\n            assert_morethan [getInfoProperty $info defragtest_defrag_ended] 0\n        } {} {tsan:skip}\n\n        test {Module defrag: global defrag works} {\n            r config set activedefrag no\n            wait_for_condition 100 50 {\n                [s active_defrag_running] eq 0\n            } else {\n                fail \"Unable to wait for active defrag to stop\"\n            }\n\n            r flushdb\n            r frag.resetstats\n            r frag.create_frag_global 50000\n            r config set activedefrag yes\n\n            wait_for_condition 1000 50 {\n                [getInfoProperty [r info defragtest_stats] defragtest_defrag_ended] > 0\n            } else {\n                fail \"Unable to wait for a complete defragmentation cycle to finish\"\n            }\n\n            set info [r info defragtest_stats]\n            assert {[getInfoProperty $info defragtest_global_strings_attempts] > 0}\n            assert {[getInfoProperty $info defragtest_global_dicts_attempts] > 0}\n            assert 
{[getInfoProperty $info defragtest_global_dicts_defragged] > 0}\n            assert {[getInfoProperty $info defragtest_global_dicts_items_defragged] > 0}\n            assert_morethan [getInfoProperty $info defragtest_defrag_started] 0\n            assert_morethan [getInfoProperty $info defragtest_defrag_ended] 0\n            assert_morethan [getInfoProperty $info defragtest_global_dicts_resumes] [getInfoProperty $info defragtest_defrag_ended]\n            assert_morethan [getInfoProperty $info defragtest_global_subdicts_resumes] [getInfoProperty $info defragtest_defrag_ended]\n        } {} {tsan:skip}\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/eventloop.tcl",
    "content": "set testmodule [file normalize tests/modules/eventloop.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"Module eventloop sendbytes\" {\n        assert_match \"OK\" [r test.sendbytes 5000000]\n        assert_match \"OK\" [r test.sendbytes 2000000]\n    }\n\n    test \"Module eventloop iteration\" {\n        set iteration [r test.iteration]\n        set next_iteration [r test.iteration]\n        assert {$next_iteration > $iteration}\n    }\n\n    test \"Module eventloop sanity\" {\n        r test.sanity\n    }\n\n    test \"Module eventloop oneshot\" {\n        r test.oneshot\n    }\n\n    test \"Unload the module - eventloop\" {\n        assert_equal {OK} [r module unload eventloop]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/fork.tcl",
    "content": "set testmodule [file normalize tests/modules/fork.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module fork} {\n        # the argument to fork.create is the exitcode on termination\n        # the second argument to fork.create is passed to usleep\n        r fork.create 3 100000 ;# 100ms\n        wait_for_condition 20 100 {\n            [r fork.exitcode] != -1\n        } else {\n            fail \"fork didn't terminate\"\n        }\n        r fork.exitcode\n    } {3}\n\n    test {Module fork kill} {\n        # use a longer time to avoid the child exiting before being killed\n        r fork.create 3 100000000 ;# 100s\n        wait_for_condition 20 100 {\n            [count_log_message 0 \"fork child started\"] == 2\n        } else {\n            fail \"fork didn't start\"\n        }\n\n        # module fork twice\n        assert_error {Fork failed} {r fork.create 0 1}\n        assert {[count_log_message 0 \"Can't fork for module: File exists\"] eq \"1\"}\n\n        r fork.kill\n\n        assert {[count_log_message 0 \"Received SIGUSR1 in child\"] eq \"1\"}\n        # check that it wasn't printed again (the print belong to the previous test)\n        assert {[count_log_message 0 \"fork child exiting\"] eq \"1\"}\n    }\n\n    test \"Unload the module - fork\" {\n        assert_equal {OK} [r module unload fork]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/getchannels.tcl",
    "content": "set testmodule [file normalize tests/modules/getchannels.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    # Channels are currently used to just validate ACLs, so test them here\n    r ACL setuser testuser +@all resetchannels &channel &pattern*\n\n    test \"module getchannels-api with literals - ACL\" {\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command subscribe literal channel subscribe literal pattern1]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command publish literal channel publish literal pattern1]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe literal channel unsubscribe literal pattern1]\n\n        assert_equal \"User testuser has no permissions to access the 'nopattern1' channel\" [r ACL DRYRUN testuser getchannels.command subscribe literal channel subscribe literal nopattern1]\n        assert_equal \"User testuser has no permissions to access the 'nopattern1' channel\" [r ACL DRYRUN testuser getchannels.command publish literal channel subscribe literal nopattern1]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe literal channel unsubscribe literal nopattern1]\n\n        assert_equal \"User testuser has no permissions to access the 'otherchannel' channel\" [r ACL DRYRUN testuser getchannels.command subscribe literal otherchannel subscribe literal pattern1]\n        assert_equal \"User testuser has no permissions to access the 'otherchannel' channel\" [r ACL DRYRUN testuser getchannels.command publish literal otherchannel subscribe literal pattern1]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe literal otherchannel unsubscribe literal pattern1]\n    }\n\n    test \"module getchannels-api with patterns - ACL\" {\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command subscribe pattern pattern*]\n        assert_equal \"OK\" [r ACL 
DRYRUN testuser getchannels.command publish pattern pattern*]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe pattern pattern*]\n\n        assert_equal \"User testuser has no permissions to access the 'pattern1' channel\" [r ACL DRYRUN testuser getchannels.command subscribe pattern pattern1 subscribe pattern pattern*]\n        assert_equal \"User testuser has no permissions to access the 'pattern1' channel\" [r ACL DRYRUN testuser getchannels.command publish pattern pattern1 subscribe pattern pattern*]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe pattern pattern1 unsubscribe pattern pattern*]\n\n        assert_equal \"User testuser has no permissions to access the 'otherpattern*' channel\" [r ACL DRYRUN testuser getchannels.command subscribe pattern otherpattern* subscribe pattern pattern*]\n        assert_equal \"User testuser has no permissions to access the 'otherpattern*' channel\" [r ACL DRYRUN testuser getchannels.command publish pattern otherpattern* subscribe pattern pattern*]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getchannels.command unsubscribe pattern otherpattern* unsubscribe pattern pattern*]\n    }\n\n    test \"Unload the module - getchannels\" {\n        assert_equal {OK} [r module unload getchannels]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/getkeys.tcl",
    "content": "set testmodule [file normalize tests/modules/getkeys.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {COMMAND INFO correctly reports a movable keys module command} {\n        set info [lindex [r command info getkeys.command] 0]\n\n        assert_equal {module movablekeys} [lindex $info 2]\n        assert_equal {0} [lindex $info 3]\n        assert_equal {0} [lindex $info 4]\n        assert_equal {0} [lindex $info 5]\n    }\n\n    test {COMMAND GETKEYS correctly reports a movable keys module command} {\n        r command getkeys getkeys.command arg1 arg2 key key1 arg3 key key2 key key3\n    } {key1 key2 key3}\n\n    test {COMMAND GETKEYS correctly reports a movable keys module command using flags} {\n        r command getkeys getkeys.command_with_flags arg1 arg2 key key1 arg3 key key2 key key3\n    } {key1 key2 key3}\n\n    test {COMMAND GETKEYSANDFLAGS correctly reports a movable keys module command not using flags} {\n        r command getkeysandflags getkeys.command arg1 arg2 key key1 arg3 key key2\n    } {{key1 {RW access update}} {key2 {RW access update}}}\n\n    test {COMMAND GETKEYSANDFLAGS correctly reports a movable keys module command using flags} {\n        r command getkeysandflags getkeys.command_with_flags arg1 arg2 key key1 arg3 key key2 key key3\n    } {{key1 {RO access}} {key2 {RO access}} {key3 {RO access}}}\n\n    test {RM_GetCommandKeys on non-existing command} {\n        catch {r getkeys.introspect 0 non-command key1 key2} e\n        set _ $e\n    } {*ENOENT*}\n\n    test {RM_GetCommandKeys on built-in fixed keys command} {\n        r getkeys.introspect 0 set key1 value1\n    } {key1}\n\n    test {RM_GetCommandKeys on built-in fixed keys command with flags} {\n        r getkeys.introspect 1 set key1 value1\n    } {{key1 OW}}\n\n    test {RM_GetCommandKeys on EVAL} {\n        r getkeys.introspect 0 eval \"\" 4 key1 key2 key3 key4 arg1 arg2\n    } {key1 key2 key3 key4}\n\n    test 
{RM_GetCommandKeys on a movable keys module command} {\n        r getkeys.introspect 0 getkeys.command arg1 arg2 key key1 arg3 key key2 key key3\n    } {key1 key2 key3}\n\n    test {RM_GetCommandKeys on a non-movable module command} {\n        r getkeys.introspect 0 getkeys.fixed arg1 key1 key2 key3 arg2\n    } {key1 key2 key3}\n\n    test {RM_GetCommandKeys with bad arity} {\n        catch {r getkeys.introspect 0 set key} e\n        set _ $e\n    } {*EINVAL*}\n\n    test \"introspect with > MAX_KEYS_BUFFER keys triggers RM_GetCommandKeysWithFlags heap alloc\" {\n        set args {}\n        for {set i 0} {$i < 7} {incr i} {\n            lappend args key k$i\n        }\n        set reply [r getkeys.introspect 1 getkeys.command_with_flags {*}$args]\n        assert_equal {{k0 RO} {k1 RO} {k2 RO} {k3 RO} {k4 RO} {k5 RO} {k6 RO}} $reply\n    }\n\n    # user that can only read from \"read\" keys, write to \"write\" keys, and read+write to \"RW\" keys\n    r ACL setuser testuser +@all %R~read* %W~write* %RW~rw*\n\n    test \"module getkeys-api - ACL\" {\n        # legacy triple didn't provide flags, so they require both read and write\n        assert_equal \"OK\" [r ACL DRYRUN testuser getkeys.command key rw]\n        assert_match {*has no permissions to access the 'read' key*} [r ACL DRYRUN testuser getkeys.command key read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN testuser getkeys.command key write]\n    }\n\n    test \"module getkeys-api with flags - ACL\" {\n        assert_equal \"OK\" [r ACL DRYRUN testuser getkeys.command_with_flags key rw]\n        assert_equal \"OK\" [r ACL DRYRUN testuser getkeys.command_with_flags key read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN testuser getkeys.command_with_flags key write]\n    }\n\n    test \"Unload the module - getkeys\" {\n        assert_equal {OK} [r module unload getkeys]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/hash.tcl",
    "content": "set testmodule [file normalize tests/modules/hash.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module hash set} {\n        r set k mystring\n        assert_error \"WRONGTYPE*\" {r hash.set k \"\" hello world}\n        r del k\n        # \"\" = count updates and deletes of existing fields only\n        assert_equal 0 [r hash.set k \"\" squirrel yes]\n        # \"a\" = COUNT_ALL = count inserted, modified and deleted fields\n        assert_equal 2 [r hash.set k \"a\" banana no sushi whynot]\n        # \"n\" = NX = only add fields not already existing in the hash\n        # \"x\" = XX = only replace the value for existing fields\n        assert_equal 0 [r hash.set k \"n\" squirrel hoho what nothing]\n        assert_equal 1 [r hash.set k \"na\" squirrel hoho something nice]\n        assert_equal 0 [r hash.set k \"xa\" new stuff not inserted]\n        assert_equal 1 [r hash.set k \"x\" squirrel ofcourse]\n        assert_equal 1 [r hash.set k \"\" sushi :delete: none :delete:]\n        r hgetall k\n    } {squirrel ofcourse banana no what nothing something nice}\n\n    test {Module hash - set (override) NX expired field successfully} {\n        r debug set-active-expire 0\n        r del H1 H2\n        r hash.set H1 \"n\" f1 v1\n        r hpexpire H1 1 FIELDS 1 f1\n        r hash.set H2 \"n\" f1 v1 f2 v2\n        r hpexpire H2 1 FIELDS 1 f1\n        after 5\n        assert_equal 0 [r hash.set H1 \"n\" f1 xx]\n        assert_equal \"f1 xx\" [r hgetall H1]\n        assert_equal 0 [r hash.set H2 \"n\" f1 yy]\n        assert_equal \"f1 f2 v2 yy\" [lsort [r hgetall H2]]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {Module hash - set XX of expired field gets failed as expected} {\n        r debug set-active-expire 0\n        r del H1 H2\n        r hash.set H1 \"n\" f1 v1\n        r hpexpire H1 1 FIELDS 1 f1\n        r hash.set H2 \"n\" f1 v1 f2 v2\n        r hpexpire H2 1 FIELDS 
1 f1\n        after 5\n\n        # expected to fail on condition XX. hgetall should return empty list\n        r hash.set H1 \"x\" f1 xx\n        assert_equal \"\" [lsort [r hgetall H1]]\n        # But expired field was not lazy deleted\n        assert_equal 1 [r hlen H1]\n\n        # expected to fail on condition XX. hgetall should return list without expired f1\n        r hash.set H2 \"x\" f1 yy\n        assert_equal \"f2 v2\" [lsort [r hgetall H2]]\n        # But expired field was not lazy deleted\n        assert_equal 2 [r hlen H2]\n\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {Module hash - test open key with REDISMODULE_OPEN_KEY_ACCESS_EXPIRED to scan expired fields} {\n        r debug set-active-expire 0\n        r del H1\n        r hash.set H1 \"n\" f1 v1 f2 v2 f3 v3\n        r hpexpire H1 1 FIELDS 2 f1 f2\n        after 10\n        # Scan expired fields with flag REDISMODULE_OPEN_KEY_ACCESS_EXPIRED\n        assert_equal \"f1 f2 f3 v1 v2 v3\" [lsort [r hash.hscan_expired H1]]\n        # Get expired field with flag REDISMODULE_OPEN_KEY_ACCESS_EXPIRED\n        assert_equal {v1} [r hash.hget_expired H1 f1]\n        # Verify we can get the TTL of the expired field as well\n        set now [expr [clock seconds]*1000]\n        assert_range [r hash.hget_expire H1 f2] [expr {$now-1000}] [expr {$now+1000}]        \n        # Verify key doesn't exist on normal access without the flag\n        assert_equal 0 [r hexists H1 f1]\n        assert_equal 0 [r hexists H1 f2]\n        # Scan again expired fields with flag REDISMODULE_OPEN_KEY_ACCESS_EXPIRED\n        assert_equal \"f3 v3\" [lsort [r hash.hscan_expired H1]]\n        r debug set-active-expire 1\n    }\n\n    test {Module hash - test open key with REDISMODULE_OPEN_KEY_ACCESS_EXPIRED to scan expired key} {\n        r debug set-active-expire 0\n        r del H1\n        r hash.set H1 \"n\" f1 v1 f2 v2 f3 v3\n        r pexpire H1 5\n        after 10\n        # Scan expired fields with 
flag REDISMODULE_OPEN_KEY_ACCESS_EXPIRED\n        assert_equal \"f1 f2 f3 v1 v2 v3\" [lsort [r hash.hscan_expired H1]]\n        # Get expired field with flag REDISMODULE_OPEN_KEY_ACCESS_EXPIRED\n        assert_equal {v1} [r hash.hget_expired H1 f1]\n        # Verify key doesn't exist on normal access without the flag\n        assert_equal 0 [r exists H1]\n        r debug set-active-expire 1\n    }\n    \n    test {Module hash - Read field expiration time} {\n        r del H1\n        r hash.set H1 \"n\" f1 v1 f2 v2 f3 v3 f4 v4\n        r hexpire H1 10   FIELDS 1 f1\n        r hexpire H1 100  FIELDS 1 f2\n        r hexpire H1 1000 FIELDS 1 f3        \n        \n        # Validate that the expiration times for fields f1, f2, and f3 are correct\n        set nowMsec [expr [clock seconds]*1000]\n        assert_range [r hash.hget_expire H1 f1] [expr {$nowMsec+9000}] [expr {$nowMsec+11000}]\n        assert_range [r hash.hget_expire H1 f2] [expr {$nowMsec+90000}] [expr {$nowMsec+110000}]\n        assert_range [r hash.hget_expire H1 f3] [expr {$nowMsec+900000}] [expr {$nowMsec+1100000}]\n        \n        # Assert that field f4 and f5_not_exist have no expiration (should return -1)\n        assert_equal [r hash.hget_expire H1 f4] -1  \n        assert_equal [r hash.hget_expire H1 f5_not_exist] -1\n        \n        # Assert that variadic version of hget_expire works as well\n        assert_equal [r hash.hget_two_expire H1 f1 f2] [list [r hash.hget_expire H1 f1] [r hash.hget_expire H1 f2]]        \n    }\n    \n    test {Module hash - Read minimum expiration time} {\n        r del H1\n        r hash.set H1 \"n\" f1 v1 f2 v2 f3 v3 f4 v4\n        r hexpire H1 100   FIELDS 1 f1\n        r hexpire H1 10    FIELDS 1 f2\n        r hexpire H1 1000  FIELDS 1 f3        \n        \n        # Validate that the minimum expiration time is correct\n        set nowMsec [expr [clock seconds]*1000]\n        assert_range [r hash.hget_min_expire H1] [expr {$nowMsec+9000}] [expr 
{$nowMsec+11000}]\n        assert_equal [r hash.hget_min_expire H1] [r hash.hget_expire H1 f2]\n        \n        # Assert error if key not found\n        assert_error {*key not found*} {r hash.hget_min_expire non_exist_hash}\n        \n        # Assert return -1 if no expiration (=REDISMODULE_NO_EXPIRE)\n        r del H2\n        r hash.set H2 \"n\" f1 v1\n        assert_equal [r hash.hget_min_expire H2] -1\n    }\n\n    test {Module hash - KEYSIZES is updated as expected} {\n        proc run_cmd_verify_hist {cmd expOutput {retries 1}} {\n            proc K {} {return [string map { \"db0_distrib_hashes_items\" \"db0_HASH\" \"# Keysizes\" \"\" \" \" \"\" \"\\n\" \"\" \"\\r\" \"\" } [r info keysizes] ]}\n            uplevel 1 $cmd    \n            wait_for_condition 50 $retries {\n                $expOutput eq [K]\n            } else { fail \"Expected: \\n`$expOutput`\\n Actual:\\n`[K]`.\\nFailed after command: $cmd\" }\n        }\n        \n        r select 0\n        r flushall\n        # Check RM_HashSet \n        run_cmd_verify_hist {r hash.set H1 \"n\" f1 v1} {db0_HASH:1=1}\n        run_cmd_verify_hist {r hash.set H2 \"n\" f1 v1} {db0_HASH:1=2}\n        run_cmd_verify_hist {r hash.set H2 \"n\" f1 v1} {db0_HASH:1=2}\n        run_cmd_verify_hist {r hash.set H2 \"x\" f1 v1} {db0_HASH:1=2}\n        run_cmd_verify_hist {r hash.set H3 \"x\" f1 v1} {db0_HASH:1=2}\n        run_cmd_verify_hist {r hash.set H1 \"n\" f2 v2} {db0_HASH:1=1,2=1}\n        run_cmd_verify_hist {r hash.set H1 \"a\" f3 v3 f4 v4} {db0_HASH:1=1,4=1}\n        run_cmd_verify_hist {r del H1} {db0_HASH:1=1}\n        run_cmd_verify_hist {r del H2} {}        \n        \n        # Check lazy expire\n        r debug set-active-expire 0\n        run_cmd_verify_hist {r hash.set H1 \"n\" f1 v1} {db0_HASH:1=1}\n        run_cmd_verify_hist {r hpexpire H1 1 FIELDS 1 f1} {db0_HASH:1=1}\n        run_cmd_verify_hist {after 5} {db0_HASH:1=1}\n        run_cmd_verify_hist {r hash.hget_expired H1 f1} {db0_HASH:1=1}\n    
    r debug set-active-expire 1\n        run_cmd_verify_hist {after 5} {} 50\n    }\n\n    test \"Unload the module - hash\" {\n        assert_equal {OK} [r module unload hash]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/hooks.tcl",
    "content": "set testmodule [file normalize tests/modules/hooks.so]\n\ntags \"modules external:skip\" {\n    start_server [list overrides [list loadmodule \"$testmodule\" appendonly yes]] {\n        test {Test module aof save on server start from empty} {\n            assert {[r hooks.event_count persistence-syncaof-start] == 1}\n        }\n\n        test {Test clients connection / disconnection hooks} {\n            for {set j 0} {$j < 2} {incr j} {\n                set rd1 [redis_deferring_client]\n                $rd1 close\n            }\n            assert {[r hooks.event_count client-connected] > 1}\n            assert {[r hooks.event_count client-disconnected] > 1}\n        }\n\n        test {Test module client change event for blocked client} {\n            set rd [redis_deferring_client]\n            # select db other than 0\n            $rd select 1\n            # block on key\n            $rd brpop foo 0\n            # kill blocked client\n            r client kill skipme yes\n            # assert server is still up\n            assert_equal [r ping] PONG\n            $rd close\n        } \n\n        test {Test module cron hook} {\n            after 100\n            assert {[r hooks.event_count cron-loop] > 0}\n            set hz [r hooks.event_last cron-loop]\n            assert_equal $hz 10\n        }\n\n        test {Test module loaded / unloaded hooks} {\n            set othermodule [file normalize tests/modules/infotest.so]\n            r module load $othermodule\n            r module unload infotest\n            assert_equal [r hooks.event_last module-loaded] \"infotest\"\n            assert_equal [r hooks.event_last module-unloaded] \"infotest\"\n        }\n\n        test {Test module aofrw hook} {\n            r debug populate 1000 foo 10000 ;# 10mb worth of data\n            r config set rdbcompression no ;# rdb progress is only checked once in 2mb\n            r BGREWRITEAOF\n            waitForBgrewriteaof r\n            assert_equal 
[string match {*module-event-persistence-aof-start*} [exec tail -20 < [srv 0 stdout]]] 1\n            assert_equal [string match {*module-event-persistence-end*} [exec tail -20 < [srv 0 stdout]]] 1\n        }\n\n        test {Test module aof load and rdb/aof progress hooks} {\n            # create some aof tail (progress is checked only once in 1000 commands)\n            for {set j 0} {$j < 4000} {incr j} {\n                r set \"bar$j\" x\n            }\n            # set some configs that will cause many loading progress events during aof loading\n            r config set key-load-delay 500\n            r config set dynamic-hz no\n            r config set hz 500\n            r DEBUG LOADAOF\n            assert_equal [r hooks.event_last loading-aof-start] 0\n            assert_equal [r hooks.event_last loading-end] 0\n            assert {[r hooks.event_count loading-rdb-start] == 0}\n            assert_lessthan 2 [r hooks.event_count loading-progress-rdb] ;# comes from the preamble section\n            assert_lessthan 2 [r hooks.event_count loading-progress-aof]\n            if {$::verbose} {\n                puts \"rdb progress events [r hooks.event_count loading-progress-rdb]\"\n                puts \"aof progress events [r hooks.event_count loading-progress-aof]\"\n            }\n        }\n        # undo configs before next test\n        r config set dynamic-hz yes\n        r config set key-load-delay 0\n\n        test {Test module rdb save hook} {\n            # debug reload does: save, flush, load:\n            assert {[r hooks.event_count persistence-syncrdb-start] == 0}\n            assert {[r hooks.event_count loading-rdb-start] == 0}\n            r debug reload\n            assert {[r hooks.event_count persistence-syncrdb-start] == 1}\n            assert {[r hooks.event_count loading-rdb-start] == 1}\n        }\n\n        test {Test key unlink hook} {\n            r set testkey1 hello\n            r del testkey1\n            assert {[r 
hooks.event_count key-info-testkey1] == 1}\n            assert_equal [r hooks.event_last key-info-testkey1] testkey1\n            r lpush testkey1 hello\n            r lpop testkey1\n            assert {[r hooks.event_count key-info-testkey1] == 2}\n            assert_equal [r hooks.event_last key-info-testkey1] testkey1\n            r set testkey2 world\n            r unlink testkey2\n            assert {[r hooks.event_count key-info-testkey2] == 1}\n            assert_equal [r hooks.event_last key-info-testkey2] testkey2\n        }\n\n        test {Test removed key event} {\n            r set str abcd\n            r set str abcde\n            # For String Type value is returned\n            assert_equal {abcd overwritten} [r hooks.is_key_removed str]\n            assert_equal -1 [r hooks.pexpireat str]\n\n            r del str\n            assert_equal {abcde deleted} [r hooks.is_key_removed str]\n            assert_equal -1 [r hooks.pexpireat str]\n\n            # test int encoded string\n            r set intstr 12345678\n            # incr doesn't fire event\n            r incr intstr\n            catch {[r hooks.is_key_removed intstr]} output\n            assert_match {ERR * removed} $output\n            r del intstr\n            assert_equal {12345679 deleted} [r hooks.is_key_removed intstr]\n\n            catch {[r hooks.is_key_removed not-exists]} output\n            assert_match {ERR * removed} $output\n\n            r hset hash f v\n            r hdel hash f\n            assert_equal {0 deleted} [r hooks.is_key_removed hash]\n\n            r hset hash f v a b\n            r del hash\n            assert_equal {2 deleted} [r hooks.is_key_removed hash]\n\n            r lpush list 1\n            r lpop list\n            assert_equal {0 deleted} [r hooks.is_key_removed list]\n\n            r lpush list 1 2 3\n            r del list\n            assert_equal {3 deleted} [r hooks.is_key_removed list]\n\n            r sadd set 1\n            r spop set\n         
   assert_equal {0 deleted} [r hooks.is_key_removed set]\n\n            r sadd set 1 2 3 4\n            r del set\n            assert_equal {4 deleted} [r hooks.is_key_removed set]\n\n            r zadd zset 1 f\n            r zpopmin zset\n            assert_equal {0 deleted} [r hooks.is_key_removed zset]\n\n            r zadd zset 1 f 2 d\n            r del zset\n            assert_equal {2 deleted} [r hooks.is_key_removed zset]\n\n            r xadd stream 1-1 f v\n            r xdel stream 1-1\n            # Stream does not delete the object when an entry is deleted\n            catch {[r hooks.is_key_removed stream]} output\n            assert_match {ERR * removed} $output\n            r del stream\n            assert_equal {0 deleted} [r hooks.is_key_removed stream]\n\n            r xadd stream 2-1 f v\n            r del stream\n            assert_equal {1 deleted} [r hooks.is_key_removed stream]\n\n            # delete key because of active expire\n            set size [r dbsize]\n            r set active-expire abcd px 1\n            # ensure active expire\n            wait_for_condition 50 100 {\n                [r dbsize] == $size\n            } else {\n                fail \"Active expire did not trigger\"\n            }\n            assert_equal {abcd expired} [r hooks.is_key_removed active-expire]\n            # current time is greater than pexpireat\n            set now [r time]\n            set mill [expr ([lindex $now 0]*1000)+([lindex $now 1]/1000)]\n            assert {$mill >= [r hooks.pexpireat active-expire]}\n\n            # delete key because of lazy expire\n            r debug set-active-expire 0\n            r set lazy-expire abcd px 1\n            after 10\n            r get lazy-expire\n            assert_equal {abcd expired} [r hooks.is_key_removed lazy-expire]\n            set now [r time]\n            set mill [expr ([lindex $now 0]*1000)+([lindex $now 1]/1000)]\n            assert {$mill >= [r hooks.pexpireat lazy-expire]}\n            r debug 
set-active-expire 1\n\n            # delete key not yet expired\n            set now [r time]\n            set expireat [expr ([lindex $now 0]*1000)+([lindex $now 1]/1000)+1000000]\n            r set not-expire abcd pxat $expireat\n            r del not-expire\n            assert_equal {abcd deleted} [r hooks.is_key_removed not-expire]\n            assert_equal $expireat [r hooks.pexpireat not-expire]\n\n            # Test key evict\n            set used [expr {[s used_memory] - [s mem_not_counted_for_evict]}]\n            set limit [expr {$used+100*1024}]\n            set old_policy [lindex [r config get maxmemory-policy] 1]\n            r config set maxmemory $limit\n            # We set policy volatile-random, so only keys with ttl will be evicted\n            r config set maxmemory-policy volatile-random\n            r setex volatile-key 10000 x\n            # We use SETBIT here, so we can set a big key and get the used_memory\n            # bigger than maxmemory. Next command will evict volatile keys. 
We\n            # can't use SET, as SET uses big input buffer, so it will fail.\n            r setbit big-key 1600000 0 ;# this will consume 200kb\n            r getbit big-key 0\n            assert_equal {x evicted} [r hooks.is_key_removed volatile-key]\n            r config set maxmemory-policy $old_policy\n            r config set maxmemory 0\n        } {OK} {needs:debug}\n\n        test {Test flushdb hooks} {\n            r flushdb\n            assert_equal [r hooks.event_last flush-start] 9\n            assert_equal [r hooks.event_last flush-end] 9\n            r flushall\n            assert_equal [r hooks.event_last flush-start] -1\n            assert_equal [r hooks.event_last flush-end] -1\n        }\n\n        # replication related tests\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n        start_server {} {\n            r module load $testmodule\n            set replica [srv 0 client]\n            set replica_host [srv 0 host]\n            set replica_port [srv 0 port]\n            $replica replicaof $master_host $master_port\n\n            wait_replica_online $master\n\n            test {Test master link up hook} {\n                assert_equal [r hooks.event_count masterlink-up] 1\n                assert_equal [r hooks.event_count masterlink-down] 0\n            }\n\n            test {Test role-replica hook} {\n                assert_equal [r hooks.event_count role-replica] 1\n                assert_equal [r hooks.event_count role-master] 0\n                assert_equal [r hooks.event_last role-replica] [s 0 master_host]\n            }\n\n            test {Test replica-online hook} {\n                assert_equal [r -1 hooks.event_count replica-online] 1\n                assert_equal [r -1 hooks.event_count replica-offline] 0\n            }\n\n            test {Test master link down hook} {\n                r client kill type master\n                assert_equal [r hooks.event_count 
masterlink-down] 1\n\n                wait_for_condition 50 100 {\n                    [string match {*master_link_status:up*} [r info replication]]\n                } else {\n                    fail \"Replica didn't reconnect\"\n                }\n\n                assert_equal [r hooks.event_count masterlink-down] 1\n                assert_equal [r hooks.event_count masterlink-up] 2\n            }\n\n            wait_for_condition 50 10 {\n                [string match {*master_link_status:up*} [r info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            $replica replicaof no one\n\n            test {Test role-master hook} {\n                assert_equal [r hooks.event_count role-replica] 1\n                assert_equal [r hooks.event_count role-master] 1\n                assert_equal [r hooks.event_last role-master] {}\n            }\n\n            test {Test replica-offline hook} {\n                assert_equal [r -1 hooks.event_count replica-online] 2\n                assert_equal [r -1 hooks.event_count replica-offline] 2\n            }\n            # get the replica stdout, to be used by the next test\n            set replica_stdout [srv 0 stdout]\n        }\n\n        test {Test swapdb hooks} {\n            r swapdb 0 10\n            assert_equal [r hooks.event_last swapdb-first] 0\n            assert_equal [r hooks.event_last swapdb-second] 10\n        }\n\n        test {Test configchange hooks} {\n            r config set rdbcompression no \n            assert_equal [r hooks.event_last config-change-count] 1\n            assert_equal [r hooks.event_last config-change-first] rdbcompression\n        }\n\n        # look into the log file of the server that just exited\n        test {Test shutdown hook} {\n            assert_equal [string match {*module-event-shutdown*} [exec tail -5 < $replica_stdout]] 1\n        }\n    }\n\n    start_server {} {\n        test {OnLoad failure will 
handle un-registration} {\n            catch {r module load $testmodule noload}\n            r flushall\n            r ping\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/infotest.tcl",
    "content": "set testmodule [file normalize tests/modules/infotest.so]\n\n# Return value for INFO property\nproc field {info property} {\n    if {[regexp \"\\r\\n$property:(.*?)\\r\\n\" $info _ value]} {\n        set _ $value\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule log-key 0\n\n    test {module info not attempted in INFO ALL} {\n        # call INFO in a few different ways, check that regardless of the section filtering,\n        # the module isn't at all being called when unneeded.\n        r INFO\n        r INFO all\n        r INFO memory\n        set info [r info everything]\n        set calls [field $info infotest_info_calls]\n    } {1}\n\n    test {module reading info} {\n        # check string, integer and float fields\n        assert_equal [r info.gets replication role] \"master\"\n        assert_equal [r info.getc replication role] \"master\"\n        assert_equal [r info.geti stats expired_keys] 0\n        assert_equal [r info.getd stats expired_stale_perc] 0\n\n        # check signed and unsigned\n        assert_equal [r info.geti infotest infotest_global] -2\n        assert_equal [r info.getu infotest infotest_uglobal] -2\n\n        # the above are always 0, try module info that is non-zero\n        assert_equal [r info.geti infotest_italian infotest_due] 2\n        set tre [r info.getd infotest_italian infotest_tre]\n        assert {$tre > 3.2 && $tre < 3.4 }\n\n        # search using the wrong section\n        catch { [r info.gets badname redis_version] } e\n        assert_match {*not found*} $e\n\n        # check that section filter works\n        assert { [string match \"*usec_per_call*\" [r info.gets all cmdstat_info.gets] ] }\n        catch { [r info.gets default cmdstat_info.gets] ] } e\n        assert_match {*not found*} $e\n    }\n\n    test {module info all} {\n        set info [r info all]\n        # info all does not contain modules\n        assert { ![string match \"*Spanish*\" $info] 
}\n        assert { ![string match \"*infotest_*\" $info] }\n        assert { [string match \"*used_memory*\" $info] }\n    }\n\n    test {module info all infotest} {\n        set info [r info all infotest]\n        # info all infotest should contain both ALL and the module information\n        assert { [string match \"*Spanish*\" $info] }\n        assert { [string match \"*infotest_*\" $info] }\n        assert { [string match \"*used_memory*\" $info] }\n    }\n\n    test {module info everything} {\n        set info [r info everything]\n        # info everything contains all default sections, but not ones for crash report\n        assert { [string match \"*infotest_global*\" $info] }\n        assert { [string match \"*Spanish*\" $info] }\n        assert { [string match \"*Italian*\" $info] }\n        assert { [string match \"*used_memory*\" $info] }\n        assert { ![string match \"*Klingon*\" $info] }\n        field $info infotest_dos\n    } {2}\n\n    test {module info modules} {\n        set info [r info modules]\n        # info modules contains the module sections but not the default sections\n        assert { [string match \"*Spanish*\" $info] }\n        assert { [string match \"*infotest_global*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n    }\n\n    test {module info one module} {\n        set info [r info INFOtest] ;# test case insensitive compare\n        # a single module filter returns only that module's sections\n        assert { [string match \"*Spanish*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n        field $info infotest_global\n    } {-2}\n\n    test {module info one section} {\n        set info [r info INFOtest_SpanisH] ;# test case insensitive compare\n        assert { ![string match \"*used_memory*\" $info] }\n        assert { ![string match \"*Italian*\" $info] }\n        assert { ![string match \"*infotest_global*\" $info] }\n        field $info infotest_uno\n    } {one}\n\n    test {module info dict} {\n        set info [r info 
infotest_keyspace]\n        set keyspace [field $info infotest_db0]\n        set keys [scan [regexp -inline {keys\\=([\\d]*)} $keyspace] keys=%d]\n    } {3}\n\n    test {module info unsafe fields} {\n        set info [r info infotest_unsafe]\n        assert_match {*infotest_unsafe_field:value=1*} $info\n    }\n\n    test {module info multiple sections without all, everything, default keywords} {\n        set info [r info replication INFOTEST]\n        assert { [string match \"*Spanish*\" $info] }\n        assert { ![string match \"*used_memory*\" $info] }\n        assert { [string match \"*repl_offset*\" $info] }\n    }\n\n    test {module info multiple sections with all keyword and modules} {\n        set info [r info all modules]\n        assert { [string match \"*cluster*\" $info] }\n        assert { [string match \"*cmdstat_info*\" $info] }\n        assert { [string match \"*infotest_global*\" $info] }\n    }\n\n    test {module info multiple sections with everything keyword} {\n        set info [r info replication everything cpu]\n        assert { [string match \"*client_recent*\" $info] }\n        assert { [string match \"*cmdstat_info*\" $info] }\n        assert { [string match \"*Italian*\" $info] }\n        # check that we didn't get the same info twice\n        assert { ![string match \"*used_cpu_user_children*used_cpu_user_children*\" $info] }\n        assert { ![string match \"*Italian*Italian*\" $info] }\n        field $info infotest_dos\n    } {2}\n\n    test \"Unload the module - infotest\" {\n        assert_equal {OK} [r module unload infotest]\n    }\n\n    # TODO: test crash report.\n} \n"
  },
  {
    "path": "tests/unit/moduleapi/infra.tcl",
    "content": "set testmodule [file normalize tests/modules/infotest.so]\n\ntest {modules config rewrite} {\n\n    start_server {tags {\"modules external:skip\"}} {\n        r module load $testmodule\n\n        set modules [lmap x [r module list] {dict get $x name}]\n        assert_not_equal [lsearch $modules infotest] -1\n\n        r config rewrite\n        restart_server 0 true false\n\n        set modules [lmap x [r module list] {dict get $x name}]\n        assert_not_equal [lsearch $modules infotest] -1\n\n        assert_equal {OK} [r module unload infotest]\n\n        r config rewrite\n        restart_server 0 true false\n\n        set modules [lmap x [r module list] {dict get $x name}]\n        assert_equal [lsearch $modules infotest] -1\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/internalsecret.tcl",
    "content": "tags {modules external:skip cluster} {\nset testmodule [file normalize tests/modules/internalsecret.so]\n\nset modules [list loadmodule $testmodule]\nstart_cluster 1 0 [list config_lines $modules] {\n    set r [srv 0 client]\n\n    test {Internal command without internal connection fails as an unknown command} {\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n    }\n\n    test {Wrong internalsecret fails authentication} {\n        assert_error {*WRONGPASS invalid internal password*} {r auth \"internal connection\" 123}\n    }\n\n    test {Internal connection basic flow} {\n        # A non-internal connection cannot execute internal commands, and they\n        # seem non-existent to it.\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n\n        # Authenticate as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n\n        # Now, internal commands are available.\n        assert_equal {OK} [r internalauth.internalcommand]\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {Internal secret is not available in non-cluster mode} {\n        # On non-cluster mode, the internal secret does not exist, nor is the\n        # internal auth command available\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n        assert_error {*ERR no internal secret available*} {r internalauth.getinternalsecret}\n        assert_error {*Cannot authenticate as an internal connection on non-cluster instances*} {r auth \"internal connection\" somepassword}\n    }\n\n    test {marking and un-marking a connection as internal via a debug command} {\n        # After marking the connection to an internal one via a debug command,\n        # internal commands succeed.\n        r debug mark-internal-client\n        assert_equal {OK} [r internalauth.internalcommand]\n\n        # After unmarking the connection, internal commands fail.\n        r debug 
mark-internal-client unmark\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {Test `COMMAND *` commands with\\without internal connections} {\n        # ------------------ Non-internal connection ------------------\n        # `COMMAND DOCS <cmd>` returns empty response.\n        assert_equal {} [r command docs internalauth.internalcommand]\n\n        # `COMMAND INFO <cmd>` should reply with null for the internal command\n        assert_equal {{}} [r command info internalauth.internalcommand]\n\n        # `COMMAND GETKEYS/GETKEYSANDFLAGS <cmd> <args>` returns an invalid command error\n        assert_error {*Invalid command specified*} {r command getkeys internalauth.internalcommand}\n        assert_error {*Invalid command specified*} {r command getkeysandflags internalauth.internalcommand}\n\n        # -------------------- Internal connection --------------------\n        # Expect non-empty responses once the connection is marked internal.\n        assert_equal {OK} [r debug mark-internal-client]\n\n        # `COMMAND DOCS <cmd>` returns a correct response.\n        assert_match {*internalauth.internalcommand*} [r command docs internalauth.internalcommand]\n\n        # `COMMAND INFO <cmd>` should reply with a full response for the internal command\n        assert_match {*internalauth.internalcommand*} [r command info internalauth.internalcommand]\n\n        # `COMMAND GETKEYS/GETKEYSANDFLAGS <cmd> <args>` returns a key error (not related to the internal connection)\n        assert_error {*ERR The command has no key arguments*} {r command getkeys internalauth.internalcommand}\n        assert_error {*ERR The command has no key arguments*} {r command getkeysandflags internalauth.internalcommand}\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {No authentication needed for internal connections} {\n        # Authenticate with a user that does not have permissions 
 to any command\n        r acl setuser David on >123 &* ~* -@all +auth +internalauth.getinternalsecret +debug +internalauth.internalcommand\n        assert_equal {OK} [r auth David 123]\n\n        assert_equal {OK} [r debug mark-internal-client]\n        # Execute a command for which David does not have permission\n        assert_equal {OK} [r internalauth.internalcommand]\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {RM_Call of internal commands without user-flag succeeds for all connections} {\n        # Succeeds even before authenticating as an internal connection.\n        assert_equal {OK} [r internalauth.noninternal_rmcall internalauth.internalcommand]\n    }\n\n    test {Internal commands via RM_Call succeed for non-internal connections depending on the user flag} {\n        # A non-internal connection that calls rm_call of an internal command\n        # without a user flag should succeed.\n        assert_equal {OK} [r internalauth.noninternal_rmcall internalauth.internalcommand]\n\n        # A non-internal connection that calls rm_call of an internal command\n        # with a user flag should fail.\n        assert_error {*unknown command*} {r internalauth.noninternal_rmcall_withuser internalauth.internalcommand}\n    }\n\n    test {Internal connections override the user flag} {\n        # Authenticate as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n\n        assert_equal {OK} [r internalauth.noninternal_rmcall internalauth.internalcommand]\n        assert_equal {OK} [r internalauth.noninternal_rmcall_withuser internalauth.internalcommand]\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {RM_Call with the user-flag after setting thread-safe-context from an internal connection should fail} {\n        # Authenticate as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n\n        # New threadSafeContexts do not inherit the internal flag.\n        assert_error {*unknown command*} {r 
internalauth.noninternal_rmcall_detachedcontext_withuser internalauth.internalcommand}\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    r config set appendonly yes\n    r config set appendfsync always\n    waitForBgrewriteaof r\n\n    test {AOF executes internal commands successfully} {\n        # Authenticate as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n\n        # Call an internal writing command\n        assert_equal {OK} [r internalauth.internal_rmcall_replicated set x 5]\n\n        # Reload the server from the AOF\n        r debug loadaof\n\n        # Check if the internal command was executed successfully\n        assert_equal {5} [r get x]\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {Internal commands are not allowed from scripts} {\n        # Internal commands are not allowed from scripts\n        assert_error {*not allowed from script*} {r eval {redis.call('internalauth.internalcommand')} 0}\n\n        # Even after authenticating as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n        assert_error {*not allowed from script*} {r eval {redis.call('internalauth.internalcommand')} 0}\n    }\n}\n\nstart_cluster 1 1 [list config_lines $modules] {\n    set master [srv 0 client]\n    set slave [srv -1 client]\n\n    test {Setup master} {\n        # Authenticate as an internal connection\n        set reply [$master internalauth.getinternalsecret]\n        assert_equal {OK} [$master auth \"internal connection\" $reply]\n    }\n\n    test {Slaves successfully execute internal commands from the replication link} {\n        assert {[s -1 role] eq {slave}}\n        wait_for_condition 1000 50 {\n            [s -1 master_link_status] eq {up}\n        } else {\n            fail \"Master link status is not up\"\n        }\n\n        # Execute internal command in master, that will set `x` to `5`.\n        assert_equal {OK} [$master 
internalauth.internal_rmcall_replicated set x 5]\n\n        # Wait for replica to have the key\n        $slave readonly\n        wait_for_condition 1000 50 {\n            [$slave exists x] eq \"1\"\n        } else {\n            fail \"Test key was not replicated\"\n        }\n\n        # See that the slave has the same value for `x`.\n        assert_equal {5} [$slave get x]\n    }\n}\n\nstart_server {} {\n    r module load $testmodule\n\n    test {Internal commands are not reported in the monitor output for non-internal connections when unsuccessful} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ; # Discard the OK\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n\n        # Assert that the monitor output does not contain the internal command\n        r ping\n        assert_match {*ping*} [$rd read]\n        $rd close\n    }\n\n    test {Internal commands are not reported in the monitor output for non-internal connections when successful} {\n        # Authenticate as an internal connection\n        assert_equal {OK} [r debug mark-internal-client]\n\n        set rd [redis_deferring_client]\n        $rd monitor\n        $rd read ; # Discard the OK\n        assert_equal {OK} [r internalauth.internalcommand]\n\n        # Assert that the monitor output does not contain the internal command\n        r ping\n        assert_match {*ping*} [$rd read]\n        $rd close\n    }\n\n    test {Internal commands are reported in the monitor output for internal connections} {\n        set rd [redis_deferring_client]\n        $rd debug mark-internal-client\n        assert_equal {OK} [$rd read]\n        $rd monitor\n        $rd read ; # Discard the OK\n        assert_equal {OK} [r internalauth.internalcommand]\n\n        # Assert that the monitor output contains the internal command\n        assert_match {*internalauth.internalcommand*} [$rd read]\n        $rd close\n    }\n\n    test {Internal commands are reported in 
the slowlog} {\n        # Set up slowlog to log all commands\n        r config set slowlog-log-slower-than 0\n\n        # Execute an internal command\n        r slowlog reset\n        r internalauth.internalcommand\n\n        # The slow-log should contain the internal command\n        set log [r slowlog get 1]\n        assert_match {*internalauth.internalcommand*} $log\n    }\n\n    test {Internal commands are reported in the latency report} {\n        # The latency report should contain the internal command\n        set report [r latency histogram internalauth.internalcommand]\n        assert_match {*internalauth.internalcommand*} $report\n    }\n\n    test {Internal commands are reported in the command stats report} {\n        # The INFO report should contain the internal command for both the internal\n        # and non-internal connections.\n        set report [r info commandstats]\n        assert_match {*internalauth.internalcommand*} $report\n\n        set report [r info latencystats]\n        assert_match {*internalauth.internalcommand*} $report\n\n        # Un-mark the connection as internal\n        r debug mark-internal-client unmark\n        assert_error {*unknown command*} {r internalauth.internalcommand}\n\n        # We still expect to see the internal command in the report\n        set report [r info commandstats]\n        assert_match {*internalauth.internalcommand*} $report\n\n        set report [r info latencystats]\n        assert_match {*internalauth.internalcommand*} $report\n    }\n}\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/keymeta.tcl",
    "content": "# ============================================================================\n# Key Metadata (keymeta) Test Suite\n# ============================================================================\n#\n# Tests the Redis module key metadata framework: up to 7 independent metadata\n# classes (IDs 1-7) can be attached to keys. Class ID 0 is reserved for key\n# expiration.\n#\n# The following features are sensitive to Key Metadata and are tested here:\n#\n# - KEY EXPIRATION (class ID 0)\n#   - Stored at ((uint64_t *)kv) - 1 (first metadata slot)\n#   - Managed via db->expires dictionary\n#   - Must be preserved/updated when kvobj is reallocated\n# - HASH FIELD EXPIRATION (HFE)\n#   - NOT in kvobj metadata slots (Maybe in the future...)\n#   - Managed via db->hexpires ebuckets (holds direct kvobj pointer)\n#   - Must be removed before kvobj reallocation (hashTypeRemoveFromExpires)\n#      and restored after (hashTypeAddToExpires)\n# - MODULE METADATA (class IDs 1-7)\n#   - Defines metadata lifecycle via callbacks\n# - EMBEDDED STRINGS vs. 
REGULAR OBJECTS\n#   - Short strings and numbers are embedded into kvobj\n#   - The rest are kept as distinct objects\n# - LAZYFREE\n# ============================================================================\n\nset testmodule [file normalize tests/modules/test_keymeta.so]\n\n# Helper procedure to convert class ID to 4-char-id name\nproc cname {cid} {\n    return \"KMT$cid\"\n}\n\n# Helper procedure to check if a class should keep metadata for a given operation\nproc shouldKeep {cid operation classesSpec} {\n    upvar $classesSpec specs\n    set spec $specs($cid)\n    switch $operation {\n        \"copy\"   { return [string match \"*KEEPONCOPY*\" $spec] }\n        \"rename\" { return [string match \"*KEEPONRENAME*\" $spec] }\n        \"move\"   { return [string match \"*KEEPONMOVE*\" $spec] }\n        default  { return 0 }\n    }\n}\n\n# Helper procedure to setup a key with metadata\nproc setupKeyMeta {keyname numClasses expiryBefore expiryAfter} {\n    # Set expiry if requested\n    if {$expiryBefore} {\n        r expire $keyname 10000\n        assert_range [r ttl $keyname] 9990 10000\n    }\n\n    # Set metadata for all classes\n    for {set i 1} {$i <= $numClasses} {incr i} {\n        # Set twice to verify overwrite behavior\n        r keymeta.set [cname $i] $keyname \"blabla$i\"\n        assert_equal [r keymeta.get [cname $i] $keyname] \"blabla$i\"\n        r keymeta.set [cname $i] $keyname \"meta$i\"\n    }\n\n    # Verify metadata was set correctly\n    for {set i 1} {$i <= $numClasses} {incr i} {\n        assert_equal [r keymeta.get [cname $i] $keyname] \"meta$i\"\n    }\n\n    if {$expiryAfter} {\n        r expire $keyname 10000\n        assert_range [r ttl $keyname] 9990 10000\n    }\n\n    if {$expiryBefore} {\n        assert_range [r ttl $keyname] 9990 10000\n    }\n}\n\n# Helper procedure to verify metadata after an operation\nproc verifyKeyMeta {keyname operation numClasses hasExpiry classesSpec} {\n    upvar $classesSpec specs\n\n    # Verify 
expiry\n    if {$hasExpiry} {\n        assert_range [r ttl $keyname] 9990 10000\n    }\n\n    # Verify metadata based on class spec\n    for {set i 1} {$i <= $numClasses} {incr i} {\n        set expected [expr {[shouldKeep $i $operation specs] ? \"meta$i\" : \"\"}]\n        assert_equal [r keymeta.get [cname $i] $keyname] $expected\n    }\n}\n\nproc flushallAndVerifyCleanup {} {\n    r flushall\n    # Verify all metadata is cleaned up properly\n    assert_equal [r keymeta.active] 0\n}\n\nstart_server {tags {\"modules\" \"external:skip\" \"cluster:skip\"} overrides {enable-debug-command yes}} {\n    r module load $testmodule\n    r debug enable-keymeta-runtime-registration 1\n\n    array set classesSpec {}\n    set classesSpec(1) \"KEEPONCOPY:KEEPONRENAME:KEEPONMOVE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(2) \"KEEPONCOPY:KEEPONRENAME:UNLINKFREE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(3) \"KEEPONCOPY:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(4) \"ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(5) \"KEEPONRENAME:KEEPONMOVE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(6) \"KEEPONRENAME:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n    set classesSpec(7) \"KEEPONMOVE:UNLINKFREE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n\n    array set classes {}\n    for {set cid 1} {$cid <= 7} {incr cid} {\n        set spec $classesSpec($cid)\n        set classes($cid) [r keymeta.register [cname $cid] 1 $spec]\n        puts \"Registered class $cid with spec $spec\"\n        assert_equal $classes($cid) $cid\n    }\n\n    # Validates metadata behavior across COPY/RENAME/MOVE operations\n    # with varying numbers of metadata classes (1-7), key expiration states,\n    # key types (string/hash), hash field expiration, and metadata class flags\n    # (KEEPONCOPY, KEEPONRENAME, KEEPONMOVE).\n    for {set numClasses 1} {$numClasses < 8} {incr numClasses} {\n        foreach expiryBefore {0 1} {\n            foreach expiryAfter {0 1} {\n                set hasExpiry [expr 
{$expiryBefore || $expiryAfter}]\n                set expiryStr \"expiryBefore=$expiryBefore, expiryAfter=$expiryAfter)\"\n                # Test COPY operation\n                test \"KEYMETA - copy key-string with $numClasses classes, $expiryStr\" {\n                    foreach value { 3 \"value1\" [string repeat \"ABCD\" 1000]} {\n                        r select 0\n                        r del k1 k2\n                        r set k1 $value\n                        setupKeyMeta k1 $numClasses $expiryBefore $expiryAfter\n                        # Copy:\n                        r copy k1 k2\n                        # Verify:\n                        assert_equal [r get k1] $value\n                        assert_equal [r get k2] $value\n                        # Verify expiry and metadata\n                        verifyKeyMeta k2 \"copy\" $numClasses $hasExpiry classesSpec\n                        flushallAndVerifyCleanup\n                    }\n                }\n\n                test \"KEYMETA - copy key-hash with $numClasses classes, $expiryStr\" {\n                    r select 0\n                    r del h1 h2\n                    r HSET h1 field1 \"value1\" field2 \"value2\"\n                    r hexpire h1 10000 FIELDS 1 field1\n                    setupKeyMeta h1 $numClasses $expiryBefore $expiryAfter\n                    # Copy:\n                    r copy h1 h2\n                    # Verify:\n                    verifyKeyMeta h2 \"copy\" $numClasses $hasExpiry classesSpec\n                    assert_range [r httl h1 FIELDS 1 field1] 9999 10000\n                    assert_range [r httl h2 FIELDS 1 field1] 9999 10000\n                    flushallAndVerifyCleanup\n                }\n\n                # Test RENAME operation\n                test \"KEYMETA - rename key-string with $numClasses classes, $expiryStr\" {\n                    foreach value { 3 \"value1\" [string repeat \"ABCD\" 1000]} {\n                        r select 0\n                       
 r del k1 k2\n                        r set k1 $value\n                        setupKeyMeta k1 $numClasses $expiryBefore $expiryAfter\n                        # Rename:\n                        r rename k1 k2\n                        # Verify:\n                        assert_equal [r exists k1] 0\n                        assert_equal [r get k2] $value\n                        # Verify expiry and metadata\n                        verifyKeyMeta k2 \"rename\" $numClasses $hasExpiry classesSpec\n                        flushallAndVerifyCleanup\n                    }\n                }\n\n                test \"KEYMETA - rename key-hash with $numClasses classes, $expiryStr\" {\n                    r select 0\n                    r del h1 h2\n                    r HSET h1 field1 \"value1\" field2 \"value2\"\n                    r hexpire h1 10000 FIELDS 1 field1\n                    setupKeyMeta h1 $numClasses $expiryBefore $expiryAfter\n                    # Rename:\n                    r rename h1 h2\n                    # Verify:\n                    assert_equal [r exists h1] 0\n                    assert_range [r httl h2 FIELDS 1 field1] 9999 10000\n                    verifyKeyMeta h2 \"rename\" $numClasses $hasExpiry classesSpec\n                    flushallAndVerifyCleanup\n                }\n\n\n\n                # Test MOVE operation\n                test \"KEYMETA - move key-string with $numClasses classes, $expiryStr\" {\n                    foreach value { 3 \"value1\" [string repeat \"ABCD\" 1000]} {\n                        r select 9\n                        r del k1\n                        r select 0\n                        r del k1\n                        r set k1 $value\n                        setupKeyMeta k1 $numClasses $expiryBefore $expiryAfter\n                        # Perform move\n                        assert_equal [r move k1 9] 1\n                        # Verify key moved\n                        assert_equal [r exists k1] 0\n            
            r select 9\n                        assert_equal [r get k1] $value\n                        # Verify expiry and metadata\n                        verifyKeyMeta k1 \"move\" $numClasses $hasExpiry classesSpec\n                        r select 0\n                        flushallAndVerifyCleanup\n                    }\n                }\n\n                test \"KEYMETA - move key-hash with $numClasses classes, $expiryStr\" {\n                    r select 9\n                    r del h1\n                    r select 0\n                    r del h1\n                    r HSET h1 field1 \"value1\" field2 \"value2\"\n                    r hexpire h1 10000 FIELDS 1 field1\n                    setupKeyMeta h1 $numClasses $expiryBefore $expiryAfter\n                    assert_range [r httl h1 FIELDS 1 field1] 9999 10000\n                    assert_equal [r move h1 9] 1\n                    assert_equal [r exists h1] 0\n                    r select 9\n                    assert_range [r httl h1 FIELDS 1 field1] 9999 10000\n                    verifyKeyMeta h1 \"move\" $numClasses $hasExpiry classesSpec\n                    r select 0\n                    flushallAndVerifyCleanup\n                }\n            }\n        }\n    }\n\n    test \"KEYMETA - Verify active metadata count on copy\" {\n        for {set cid 1} {$cid < 7} {incr cid} {\n            set numAlloc 0\n            flushallAndVerifyCleanup\n            set dupOnCopy [shouldKeep $cid \"copy\" classesSpec]\n            r set k1 \"v1\"\n            r keymeta.set [cname $cid] k1 \"meta1\"\n            assert_equal [r keymeta.active] [incr numAlloc]\n            r keymeta.set [cname $cid] k1 \"meta1b\"\n            assert_equal [r keymeta.active] $numAlloc\n            r copy k1 k1copy\n            assert_equal [r keymeta.active] [incr numAlloc $dupOnCopy]\n            r del k1\n            assert_equal [r keymeta.active] [incr numAlloc -1]\n            r del k1copy\n            assert_equal [r 
keymeta.active] 0\n        }\n    }\n\n    test \"KEYMETA - Verify active metadata count on rename\" {\n        for {set cid 1} {$cid <= 7} {incr cid} {\n            set numAlloc 0\n            flushallAndVerifyCleanup\n            set keepOnRename [shouldKeep $cid \"rename\" classesSpec]\n            set discOnRename [expr {!$keepOnRename}]\n            r set k1 \"v1\"\n            r keymeta.set [cname $cid] k1 \"meta1\"\n            assert_equal [r keymeta.active] [incr numAlloc]\n            r rename k1 k1_renamed\n            assert_equal [r keymeta.active] [incr numAlloc -$discOnRename]\n            r del k1_renamed\n            assert_equal [r keymeta.active] 0\n        }\n    }\n\n    test \"KEYMETA - Verify active metadata count on move\" {\n        for {set cid 1} {$cid <= 7} {incr cid} {\n            set numAlloc 0\n            r select 0\n            flushallAndVerifyCleanup\n\n            set keepOnMove [shouldKeep $cid \"move\" classesSpec]\n            set discOnMove [expr {!$keepOnMove}]\n\n            # Create keys with metadata in DB 0\n            r set k1 \"v1\"\n            r keymeta.set [cname $cid] k1 \"meta1\"\n            assert_equal [r keymeta.active] [incr numAlloc]\n            # Move: metadata discarded if !keepOnMove\n            r move k1 9\n            set active [r keymeta.active]\n            assert_equal [r keymeta.active] [incr numAlloc -$discOnMove]\n            # Cleanup\n            r select 9\n            r del k1\n            r select 0\n            assert_equal [r keymeta.active] 0\n        }\n    }\n\n    test \"KEYMETA - Verify metadata cleanup on lazyfree\" {\n        r config set lazyfree-lazy-user-del yes\n        # Class 2 has UNLINKFREE flag, so it should call unlink callback when lazyfree is enabled\n        # Class 1 does not have UNLINKFREE flag, so it should only call free callback\n        foreach {cid} { 1 2 } {\n            r config resetstat\n            # Create a large unsorted set collection to ensure it 
exceeds LAZYFREE_THRESHOLD\n            for {set i 0} {$i < 1024} {incr i} { r sadd myset $i }\n            r keymeta.set [cname $cid] myset \"meta\"\n            assert_equal [r keymeta.active] 1\n            r del myset\n\n            # Wait for lazyfree to complete and verify lazyfreed_objects incremented\n            wait_for_condition 50 100 {\n                [s lazyfree_pending_objects] == 0\n            } else {\n                fail \"lazyfree isn't done\"\n            }\n            assert_equal [r keymeta.active] 0\n            assert_equal [s lazyfreed_objects] 1\n        }\n        r config set lazyfree-lazy-user-del no\n    } {OK} {needs:config-resetstat}\n\n    test \"KEYMETA - Verify metadata cleanup on expire\" {\n        # Class 2 has UNLINKFREE flag, so it should call unlink callback when lazyfree is enabled\n        # Class 1 does not have UNLINKFREE flag, so it should only call free callback\n        foreach {cid} { 1 2 } {\n            r set mykey \"mykey$cid\"\n            r keymeta.set [cname $cid] mykey \"meta\"\n            assert_equal [r keymeta.active] 1\n            r pexpire mykey 1\n            wait_for_condition 50 100 {\n                [r exists mykey] == 0\n            } else {\n                fail \"key not expired\"\n            }\n            assert_equal [r keymeta.active] 0\n        }\n    }\n\n    # ============================================================================\n    # AOF Rewrite Tests\n    # ============================================================================\n    # Note: Full AOF round-trip tests (write → restart → load) are not included\n    # because the test module registers classes dynamically via commands, which\n    # creates a chicken-and-egg problem:\n    # - Classes must be registered BEFORE AOF loading (in RedisModule_OnLoad)\n    # - But the KEYMETA.REGISTER commands are in the AOF itself\n    # - When server restarts and loads AOF, classes aren't registered yet\n    # - KEYMETA.SET 
commands fail with \"metadata class not found\"\n    #\n    # For production modules, classes MUST be registered in RedisModule_OnLoad()\n    # to ensure they're available when AOF/RDB files are loaded on server startup.\n    # See src/module.c documentation for RM_CreateKeyMetaClass() for details.\n    #\n    # The test below verifies that AOF callbacks correctly emit KEYMETA.SET commands\n    # to the AOF file during rewrite, which is the module's responsibility.\n    test \"KEYMETA - AOF rewrite emits correct KEYMETA.SET commands to file\" {\n        # This test verifies that the AOF callback implementation correctly writes\n        # KEYMETA.SET commands to the AOF file during rewrite. We don't test the\n        # full round-trip (restart + load) due to the dynamic registration limitation\n        # explained above.\n\n        r config set appendonly yes\n        r config set auto-aof-rewrite-percentage 0\n        r config set aof-use-rdb-preamble no\n        # Wait for the initial AOF rewrite that Redis triggers when enabling AOF\n        waitForBgrewriteaof r\n\n        # Create keys with metadata from multiple classes\n        r set key1 \"value1\"\n        r keymeta.set [cname 1] key1 \"metadata_c1\"\n\n        r set key2 \"value2\"\n        r keymeta.set [cname 2] key2 \"metadata_c2\"\n        r keymeta.set [cname 3] key2 \"metadata_c3\"\n\n        r hset hashkey field1 val1\n        r keymeta.set [cname 4] hashkey \"hash_meta\"\n\n        # Trigger AOF rewrite\n        r bgrewriteaof\n        waitForBgrewriteaof r\n\n        # Get the AOF directory and read the AOF file\n        set aof_dir [lindex [r config get dir] 1]\n        set aof_base_filename [lindex [r config get appendfilename] 1]\n\n        # Find the base AOF file (after rewrite)\n        set aof_files [glob -nocomplain -directory $aof_dir appendonlydir/${aof_base_filename}.*.base.aof]\n        assert {[llength $aof_files] > 0}\n\n        # Read the most recent base AOF file\n        set 
aof_file [lindex [lsort $aof_files] end]\n        set fp [open $aof_file r]\n        set aof_content [read $fp]\n        close $fp\n\n        # Verify the AOF contains KEYMETA.SET commands with correct format\n        assert_match \"*KEYMETA.SET*[cname 1]*key1*metadata_c1*\" $aof_content\n        assert_match \"*KEYMETA.SET*[cname 2]*key2*metadata_c2*\" $aof_content\n        assert_match \"*KEYMETA.SET*[cname 3]*key2*metadata_c3*\" $aof_content\n        assert_match \"*KEYMETA.SET*[cname 4]*hashkey*hash_meta*\" $aof_content\n\n        # Verify the RESP format is correct by checking for the command structure\n        # The AOF should contain: *4 (array of 4 elements)\n        assert_match \"*\\$11*KEYMETA.SET*\" $aof_content\n        # Count how many KEYMETA.SET commands are in the AOF\n        set keymeta_count [regexp -all {KEYMETA\\.SET} $aof_content]\n        assert_equal $keymeta_count 4\n    } {} {external:skip}\n\n    # ========================================================================\n    # RDB Save/Load Tests\n    # ========================================================================\n\n    test {RDB: SAVE and reload preserves metadata} {\n        # Create key with metadata\n        r set key1 \"value1\"\n        r keymeta.set [cname 1] key1 \"key1_meta1\"\n        assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n        r save\n        r debug reload\n\n        # Verify metadata persisted after reload\n        assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n        flushallAndVerifyCleanup\n    } {} {external:skip needs:save}\n\n    test {RDB: BGSAVE writes metadata to RDB file} {\n        # Create keys with different metadata combinations\n        r set key1 \"value1\"\n        r keymeta.set [cname 1] key1 \"key1_meta1\"\n\n        r set key2 \"value2\"\n        r keymeta.set [cname 1] key2 \"key2_meta1\"\n        r keymeta.set [cname 2] key2 \"key2_meta2\"\n\n        # Trigger BGSAVE and reload (debug reload 
preserves modules)\n        r bgsave\n        waitForBgsave r\n        r debug reload\n\n        # Verify metadata persisted after reload\n        assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n        assert_equal [r keymeta.get [cname 1] key2] \"key2_meta1\"\n        assert_equal [r keymeta.get [cname 2] key2] \"key2_meta2\"\n\n        flushallAndVerifyCleanup\n    } {} {external:skip needs:save}\n\n    test {RDB: Metadata persists with expiretime} {\n        # Create key with both expiry and metadata\n        r set key1 \"value1\"\n        set expire_time [expr {[clock seconds] + 10000}]\n        r expireat key1 $expire_time\n        r keymeta.set [cname 1] key1 \"meta_with_expire\"\n\n        assert_equal [r expiretime key1] $expire_time\n        assert_equal [r keymeta.get [cname 1] key1] \"meta_with_expire\"\n\n        # Reload from RDB\n        r debug reload\n\n        # Verify metadata and expiry persist after reload\n        assert_equal [r expiretime key1] $expire_time\n        assert_equal [r keymeta.get [cname 1] key1] \"meta_with_expire\"\n\n        flushallAndVerifyCleanup\n    } {} {external:skip needs:debug}\n\n    test {RDB: Create keys with up to 7 meta classes, with or without expiry} {\n        # Test all combinations of 1-7 metadata classes, with or without expiry\n        for {set n 1} {$n <= 7} {incr n} {\n            foreach hasExpiry {0 1} {\n                set keyname \"key_${n}_exp${hasExpiry}\"\n                r set $keyname \"value$n\"\n\n                # Set expiry if hasExpiry is 1\n                if {$hasExpiry} {\n                    set ttl [expr {3600 + $n}]\n                    r expire $keyname $ttl\n                    # Get the actual expiretime set by Redis to use as expected value\n                    set expExpiry [r expiretime $keyname]\n                }\n\n                # Create list of class IDs to attach (1 through n)\n                set class_ids {}\n                for {set i 1} {$i <= $n} {incr 
i} {\n                    lappend class_ids $i\n                }\n\n                # Randomize the order of metadata attachment\n                set class_ids [lshuffle $class_ids]\n\n                # Attach metadata in randomized order\n                foreach cid $class_ids {\n                    r keymeta.set [cname $cid] $keyname \"meta$cid\"\n                }\n\n                # Verify metadata before RDB save\n                # Verify exactly n metadata classes are attached\n                for {set i 1} {$i <= 7} {incr i} {\n                    if {$i <= $n} {\n                        assert_equal [r keymeta.get [cname $i] $keyname] \"meta$i\"\n                    } else {\n                        assert_equal [r keymeta.get [cname $i] $keyname] \"\"\n                    }\n                }\n\n                # Verify expiry before RDB save\n                if {$hasExpiry} {\n                    set actual_expiretime [r expiretime $keyname]\n                    assert_equal $actual_expiretime $expExpiry\n                }\n\n                # Save and reload from RDB (debug reload preserves modules)\n                r save\n                r debug reload\n\n                # Verify metadata after RDB reload\n                # Verify exactly n metadata classes are still attached\n                for {set i 1} {$i <= 7} {incr i} {\n                    if {$i <= $n} {\n                        assert_equal [r keymeta.get [cname $i] $keyname] \"meta$i\"\n                    } else {\n                        assert_equal [r keymeta.get [cname $i] $keyname] \"\"\n                    }\n                }\n\n                # Verify expiry after RDB reload\n                if {$hasExpiry} {\n                    set actual_expiretime [r expiretime $keyname]\n                    assert_equal $actual_expiretime $expExpiry\n                } else {\n                    # Verify no expiry set\n                    assert_equal [r expiretime $keyname] -1\n             
   }\n                flushallAndVerifyCleanup\n            }\n        }\n    } {} {external:skip needs:save}\n\n    # ========================================================================\n    # RDB Flag Tests: ALLOW_IGNORE, RDBLOAD, RDBSAVE\n    # ========================================================================\n\n    # Test all combinations except the error case (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\n    foreach RDBLOAD {0 1} {\n        foreach RDBSAVE {0 1} {\n            foreach ALLOW_IGNORE {0 1} {\n                # Skip the error case - we'll test it last since it causes RDB load to fail\n                if {!$RDBLOAD && $RDBSAVE && !$ALLOW_IGNORE} { continue }\n\n                test \"RDB: SAVE and LOAD (ALLOW_IGNORE=$ALLOW_IGNORE, RDBLOAD=$RDBLOAD, RDBSAVE=$RDBSAVE)\" {\n                    # Flush all data and save empty RDB to start with a clean slate\n                    r flushall\n                    r save\n\n                    # re-register class 1 with new flags. 
Expected to be re-registered with the same class ID\n                    r keymeta.unregister [cname 1]\n                    # dummy default spec\n                    set newSpec \"KEEPONCOPY\"\n                    if {$ALLOW_IGNORE} { append newSpec \":ALLOWIGNORE\" }\n                    if {$RDBLOAD} { append newSpec \":RDBLOAD\" }\n                    if {$RDBSAVE} { append newSpec \":RDBSAVE\" }\n\n                    # Must reuse the same class ID it had before\n                    assert_equal $classes(1) [r keymeta.register [cname 1] 1 $newSpec]\n\n                    r set key1 \"value1\"\n                    r keymeta.set [cname 1] key1 \"key1_meta1\"\n                    assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n                    r save\n                    r debug reload\n\n                    # Metadata is preserved only when BOTH rdb_save AND rdb_load are enabled\n                    # Otherwise metadata is lost (either not saved, or saved but not loaded)\n                    set metaPreserved [expr {$RDBSAVE && $RDBLOAD}]\n                    set expectedMeta [expr {$metaPreserved ? 
\"key1_meta1\" : \"\"}]\n\n                    assert_equal [r keymeta.get [cname 1] key1] $expectedMeta\n\n                    flushallAndVerifyCleanup\n                } {} {external:skip needs:save}\n            }\n        }\n    }\n\n    # Test the error case last (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\n    # This test causes RDB load to fail, so we test it last to avoid polluting subsequent tests\n    test \"RDB: SAVE and LOAD Invalid combination: (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\" {\n        # re-register class 1 with RDBSAVE flag but no RDBLOAD or ALLOW_IGNORE\n        r keymeta.unregister [cname 1]\n        set newSpec \"KEEPONCOPY:RDBSAVE\"\n        assert_equal $classes(1) [r keymeta.register [cname 1] 1 $newSpec]\n\n        r set key1 \"value1\"\n        r keymeta.set [cname 1] key1 \"key1_meta1\"\n        assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n        r save\n\n        # This combination causes RDB load to fail because:\n        # - Metadata was saved (RDBSAVE=1)\n        # - Class has no rdb_load callback (RDBLOAD=0)\n        # - Errors are not ignored (ALLOW_IGNORE=0)\n        catch {r debug reload} err\n        assert_match \"*Error trying to load the RDB dump*\" $err\n    } {} {external:skip needs:save}\n\n    # ========================================================================\n    # DUMP/RESTORE Tests\n    # ========================================================================\n\n    test {DUMP/RESTORE: 1 to 7 metadata classes, optional TTL} {\n        foreach withTTL {0 1} {\n            for {set numClasses 1} {$numClasses < 8} {incr numClasses} {\n                # Re-register classes with RDBLOAD and RDBSAVE flags\n                for {set cid 1} {$cid <= $numClasses} {incr cid} {\n                    r keymeta.unregister [cname $cid]\n                    assert_equal $classes($cid) [r keymeta.register [cname $cid] 1 $classesSpec($cid)]\n                }\n\n                # Create key with metadata 
classes\n                r set key1 \"value1\"\n                for {set i 1} {$i <= $numClasses} {incr i} {\n                    r keymeta.set [cname $i] key1 \"meta${i}_value\"\n                }\n\n                if {$withTTL} { r expire key1 10000 }\n\n                # Verify all metadata before DUMP\n                for {set i 1} {$i <= $numClasses} {incr i} {\n                    assert_equal [r keymeta.get [cname $i] key1] \"meta${i}_value\"\n                }\n\n                # DUMP the key\n                set encoded [r dump key1]\n\n                # Delete and RESTORE\n                r del key1\n                r restore key1 [expr {$withTTL ? 10000 : 0}] $encoded\n\n                # Verify all metadata was restored\n                assert_equal [r get key1] \"value1\"\n                for {set i 1} {$i <= $numClasses} {incr i} {\n                    assert_equal [r keymeta.get [cname $i] key1] \"meta${i}_value\"\n                }\n                if {$withTTL} { assert_range [r pttl key1] 9000 10000 }\n\n                flushallAndVerifyCleanup\n            }\n        }\n    }\n\n    test {DUMP/RESTORE: REPLACE with metadata} {\n        # Create key with metadata\n        r set key1 value1\n        r keymeta.set [cname 1] key1 \"meta1_original\"\n\n        # DUMP the key\n        set encoded1 [r dump key1]\n\n        # Overwrite the same key with a different value and metadata\n        r set key1 value2\n        r keymeta.set [cname 1] key1 \"meta1_new\"\n\n        # DUMP the second version\n        set encoded2 [r dump key1]\n\n        # Delete and restore first version\n        r del key1\n        r restore key1 0 $encoded1\n        assert_equal [r get key1] \"value1\"\n        assert_equal [r keymeta.get [cname 1] key1] \"meta1_original\"\n\n        # RESTORE second version with REPLACE\n        r restore key1 0 $encoded2 replace\n        assert_equal [r get key1] \"value2\"\n        assert_equal [r keymeta.get [cname 1] key1] \"meta1_new\"\n\n        
flushallAndVerifyCleanup\n    }\n\n\n    # Test all combinations except the error case (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\n    foreach RDBLOAD {0 1} {\n        foreach RDBSAVE {0 1} {\n            foreach ALLOW_IGNORE {0 1} {\n                # Skip the error case - we'll test it last since it causes RESTORE to fail\n                if {!$RDBLOAD && $RDBSAVE && !$ALLOW_IGNORE} { continue }\n\n                test \"DUMP/RESTORE: (ALLOW_IGNORE=$ALLOW_IGNORE, RDBLOAD=$RDBLOAD, RDBSAVE=$RDBSAVE)\" {\n                    # re-register class 1 with new flags. Expected to be re-registered with the same class ID\n                    r keymeta.unregister [cname 1]\n                    # dummy default spec\n                    set newSpec \"KEEPONCOPY\"\n                    if {$ALLOW_IGNORE} { append newSpec \":ALLOWIGNORE\" }\n                    if {$RDBLOAD} { append newSpec \":RDBLOAD\" }\n                    if {$RDBSAVE} { append newSpec \":RDBSAVE\" }\n\n                    # Must reuse the same class ID it had before\n                    assert_equal $classes(1) [r keymeta.register [cname 1] 1 $newSpec]\n\n                    r set key1 \"value1\"\n                    r keymeta.set [cname 1] key1 \"key1_meta1\"\n                    assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n                    # DUMP & RESTORE\n                    set encoded [r dump key1]\n                    r del key1\n                    r restore key1 0 $encoded\n\n                    # Metadata is preserved only when BOTH rdb_save AND rdb_load are enabled\n                    # Otherwise metadata is lost (either not saved, or saved but not loaded)\n                    set metaPreserved [expr {$RDBSAVE && $RDBLOAD}]\n                    set expectedMeta [expr {$metaPreserved ? 
\"key1_meta1\" : \"\"}]\n\n                    assert_equal [r keymeta.get [cname 1] key1] $expectedMeta\n\n                    flushallAndVerifyCleanup\n                }\n            }\n        }\n    }\n\n    # Test the error case last (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\n    # This test causes RESTORE to fail, so we test it last to avoid polluting subsequent tests\n    test \"DUMP/RESTORE: Invalid combination: (ALLOW_IGNORE=0, RDBLOAD=0, RDBSAVE=1)\" {\n        # re-register class 1 with RDBSAVE flag but no RDBLOAD or ALLOW_IGNORE\n        r keymeta.unregister [cname 1]\n        set newSpec \"KEEPONCOPY:RDBSAVE\"\n        assert_equal $classes(1) [r keymeta.register [cname 1] 1 $newSpec]\n\n        r set key1 \"value1\"\n        r keymeta.set [cname 1] key1 \"key1_meta1\"\n        assert_equal [r keymeta.get [cname 1] key1] \"key1_meta1\"\n\n        # DUMP the key\n        set encoded [r dump key1]\n\n        # Delete and try to RESTORE\n        r del key1\n\n        # This combination causes RESTORE to fail because:\n        # - Metadata was saved (RDBSAVE=1)\n        # - Class has no rdb_load callback (RDBLOAD=0)\n        # - Errors are not ignored (ALLOW_IGNORE=0)\n        catch {r restore key1 0 $encoded} err\n        assert_match \"*Bad data format*\" $err\n\n        flushallAndVerifyCleanup\n    }\n}\n\ntest \"RDB: Load with different module registration order preserves metadata correctly\" {\n    # This test verifies out-of-order metadata attachment during RDB load.\n    # When modules register in different order at load time vs save time,\n    # metadata values should still be correctly associated with their classes.\n    start_server {tags {\"modules\" \"external:skip\" \"cluster:skip\"} overrides {enable-debug-command yes}} {\n        r module load $testmodule\n        r debug enable-keymeta-runtime-registration 1\n\n        # Helper function to generate class names (needed in inner scope)\n        proc cname {id} { return \"CLS$id\" }\n\n        # 
Register classes in order: 1, 2, 3\n        set spec1 \"KEEPONCOPY:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n        set spec2 \"KEEPONRENAME:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n        set spec3 \"KEEPONMOVE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n\n        set class1 [r keymeta.register [cname 1] 1 $spec1]\n        set class2 [r keymeta.register [cname 2] 1 $spec2]\n        set class3 [r keymeta.register [cname 3] 1 $spec3]\n\n        # Verify class IDs match registration order\n        assert_equal $class1 1 \"Class 1 registered first, gets ID 1\"\n        assert_equal $class2 2 \"Class 2 registered second, gets ID 2\"\n        assert_equal $class3 3 \"Class 3 registered third, gets ID 3\"\n\n        # OUTER SERVER: Create RDB with classes registered in order 1,2,3\n        r flushall\n        r set mykey \"myvalue\"\n        r keymeta.set [cname 1] mykey \"metadata_for_class1\"\n        r keymeta.set [cname 2] mykey \"metadata_for_class2\"\n        r keymeta.set [cname 3] mykey \"metadata_for_class3\"\n\n        # Verify metadata before save\n        assert_equal [r keymeta.get [cname 1] mykey] \"metadata_for_class1\"\n        assert_equal [r keymeta.get [cname 2] mykey] \"metadata_for_class2\"\n        assert_equal [r keymeta.get [cname 3] mykey] \"metadata_for_class3\"\n\n        r save\n\n        # Get RDB file path & Copy RDB to a temp location with unique name\n        set rdb_dir [lindex [r config get dir] 1]\n        set rdb_file [lindex [r config get dbfilename] 1]\n        set rdb_path [file join $rdb_dir $rdb_file]\n        set temp_rdb [file join $rdb_dir \"temp_metadata_outoforder_[pid].rdb\"]\n        file copy -force $rdb_path $temp_rdb\n\n        # INNER SERVER: Start new server, register classes in DIFFERENT order, then load RDB\n        start_server [list overrides [list dir $rdb_dir enable-debug-command yes]] {\n            r module load $testmodule\n            r debug enable-keymeta-runtime-registration 1\n\n            # Helper function to generate class names 
(needed in inner scope)\n            proc cname {id} { return \"CLS$id\" }\n\n            # Register classes in DIFFERENT order: 3, 1, 2\n            # This simulates a server where modules load in different order\n            set spec1 \"KEEPONCOPY:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n            set spec2 \"KEEPONRENAME:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n            set spec3 \"KEEPONMOVE:ALLOWIGNORE:RDBLOAD:RDBSAVE\"\n\n            set class3 [r keymeta.register [cname 3] 1 $spec3]\n            set class1 [r keymeta.register [cname 1] 1 $spec1]\n            set class2 [r keymeta.register [cname 2] 1 $spec2]\n\n            # Verify class IDs are assigned by REGISTRATION ORDER, not name\n            # We registered in order 3,1,2, so the runtime IDs are:\n            # - class3 (name \"CLS3\") gets ID 1 (first registered)\n            # - class1 (name \"CLS1\") gets ID 2 (second registered)\n            # - class2 (name \"CLS2\") gets ID 3 (third registered)\n            # This is DIFFERENT from outer server which registered in order 1,2,3\n            assert_equal $class3 1 \"Class 3 registered first, gets ID 1\"\n            assert_equal $class1 2 \"Class 1 registered second, gets ID 2\"\n            assert_equal $class2 3 \"Class 2 registered third, gets ID 3\"\n\n            # Copy the saved RDB to this server's dbfilename\n            set inner_rdb_file [lindex [r config get dbfilename] 1]\n            set inner_rdb_path [file join $rdb_dir $inner_rdb_file]\n            file copy -force $temp_rdb $inner_rdb_path\n\n            # NOW load the RDB (AFTER registration in different order)\n            # Use 'nosave' to reload from the copied RDB without saving current state first\n            r debug reload nosave\n\n            # Verify the key exists\n            assert_equal [r exists mykey] 1 \"Key should exist after RDB load\"\n            assert_equal [r get mykey] \"myvalue\" \"Key value should be preserved\"\n\n            # Verify metadata values are correctly 
associated with their classes\n            # Without name-based class lookup, metadata would be swapped because:\n            # - At SAVE time (outer): classes registered in order 1,2,3\n            # - At LOAD time (inner): classes registered in order 3,1,2\n            # - RDB contains metadata in saved order, but keyMetaClassLookupByName\n            #   maps them back to correct classes by NAME, not by registration order\n            assert_equal [r keymeta.get [cname 1] mykey] \"metadata_for_class1\"\n            assert_equal [r keymeta.get [cname 2] mykey] \"metadata_for_class2\"\n            assert_equal [r keymeta.get [cname 3] mykey] \"metadata_for_class3\"\n        }\n\n        # Cleanup temp file\n        file delete $temp_rdb\n\n    }\n} {} {external:skip needs:save}\n\ntest \"RDB: File size same with/without metadata when no rdb_save callback\" {\n    # This test verifies that when a metadata class has no rdb_save callback,\n    # the metadata is not serialized to RDB, so the RDB file size should be\n    # identical with and without metadata.\n\n    start_server {tags {\"modules\" \"external:skip\" \"cluster:skip\"} overrides {enable-debug-command yes}} {\n        r module load $testmodule\n        r debug enable-keymeta-runtime-registration 1\n\n        # Get RDB directory\n        set rdb_dir [lindex [r config get dir] 1]\n        set rdb_file [lindex [r config get dbfilename] 1]\n        set rdb_path [file join $rdb_dir $rdb_file]\n\n        # Test 1: Create key WITHOUT metadata and save\n        r flushall\n        r set key1 \"test_value_12345\"\n        r save\n        set size_without_meta [file size $rdb_path]\n\n        # Test 2: Create identical key WITH metadata (but no rdb_save) and save\n        # Register a class WITHOUT rdb_save callback (RDBSAVE=0)\n        # Use ALLOWIGNORE so loading doesn't fail when metadata is missing\n        set spec \"ALLOWIGNORE\"\n        r keymeta.register [cname 1] 1 $spec\n
\n        r flushall\n        r set key1 \"test_value_12345\"\n        r keymeta.set [cname 1] key1 \"some_metadata_value\"\n\n        # Verify metadata is attached\n        assert_equal [r keymeta.get [cname 1] key1] \"some_metadata_value\"\n\n        r save\n        set size_with_meta [file size $rdb_path]\n\n        # The file sizes should be the same (metadata not serialized)\n        assert_equal $size_without_meta $size_with_meta\n    }\n} {} {external:skip needs:save}\n\ntest \"Creating a key metadata class outside OnLoad should fail\" {\n    # Start server without enabling the keymeta runtime registration debug flag\n    start_server {tags {\"modules\" \"external:skip\" \"cluster:skip\"} overrides {enable-debug-command no}} {\n        r module load $testmodule\n        # Creating a class after server startup should fail\n        catch {r keymeta.register [cname 1] 1 \"ALLOWIGNORE\"} err\n        assert_match {*failed to create metadata class*} $err\n    }\n} {} {external:skip needs:save}\n"
  },
  {
    "path": "tests/unit/moduleapi/keyspace_events.tcl",
    "content": "set testmodule [file normalize tests/modules/keyspace_events.so]\n\ntags \"modules external:skip\" {\n    start_server [list overrides [list loadmodule \"$testmodule\"]] {\n\n        # avoid using shared integers, to increase the chance of detecting heap issues\n        r config set maxmemory-policy allkeys-lru\n        r config set maxmemory 1gb\n\n        test {Test loaded key space event} {\n            r set x 1\n            r hset y f v\n            r lpush z 1 2 3\n            r sadd p 1 2 3\n            r zadd t 1 f1 2 f2\n            r xadd s * f v\n            r debug reload\n            assert_equal {1 x} [r keyspace.is_key_loaded x]\n            assert_equal {1 y} [r keyspace.is_key_loaded y]\n            assert_equal {1 z} [r keyspace.is_key_loaded z]\n            assert_equal {1 p} [r keyspace.is_key_loaded p]\n            assert_equal {1 t} [r keyspace.is_key_loaded t]\n            assert_equal {1 s} [r keyspace.is_key_loaded s]\n        }\n\n        test {Nested multi due to RM_Call} {\n            r del multi\n            r del lua\n\n            r set x 1\n            r set x_copy 1\n            r keyspace.del_key_copy x\n            r keyspace.incr_case1 x\n            r keyspace.incr_case2 x\n            r keyspace.incr_case3 x\n            assert_equal {} [r get multi]\n            assert_equal {} [r get lua]\n            r get x\n        } {3}\n        \n        test {Nested multi due to RM_Call, with client MULTI} {\n            r del multi\n            r del lua\n\n            r set x 1\n            r set x_copy 1\n            r multi\n            r keyspace.del_key_copy x\n            r keyspace.incr_case1 x\n            r keyspace.incr_case2 x\n            r keyspace.incr_case3 x\n            r exec\n            assert_equal {1} [r get multi]\n            assert_equal {} [r get lua]\n            r get x\n        } {3}\n        \n        test {Nested multi due to RM_Call, with EVAL} {\n            r del multi\n            r 
del lua\n\n            r set x 1\n            r set x_copy 1\n            r eval {\n                redis.pcall('keyspace.del_key_copy', KEYS[1])\n                redis.pcall('keyspace.incr_case1', KEYS[1])\n                redis.pcall('keyspace.incr_case2', KEYS[1])\n                redis.pcall('keyspace.incr_case3', KEYS[1])\n            } 1 x\n            assert_equal {} [r get multi]\n            assert_equal {1} [r get lua]\n            r get x\n        } {3}\n\n        test {Test module key space event} {\n            r keyspace.notify x\n            assert_equal {1 x} [r keyspace.is_module_key_notified x]\n        }\n\n        test \"Keyspace notifications: module events test\" {\n            r config set notify-keyspace-events Kd\n            r del x\n            set rd1 [redis_deferring_client]\n            assert_equal {1} [psubscribe $rd1 *]\n            r keyspace.notify x\n            assert_equal {pmessage * __keyspace@9__:x notify} [$rd1 read]\n            $rd1 close\n        }\n\n        test \"Keyspace notifications: unsubscribe removes handler\" {\n            r config set notify-keyspace-events KEA\n            set before [r keyspace.callback_count]\n            r set a 1\n            r del a\n            wait_for_condition 100 10 {\n                [r keyspace.callback_count] > $before\n            } else {\n                fail \"callback did not trigger\"\n            }\n            set before_unsub [r keyspace.callback_count]\n            r keyspace.unsubscribe 4  ;# REDISMODULE_NOTIFY_GENERIC\n            r set a 1\n            r del a\n            set after_unsub [r keyspace.callback_count]\n            assert_equal $before_unsub $after_unsub\n        }\n\n        test {Test expired key space event} {\n            set prev_expired [s expired_keys]\n            r set exp 1 PX 10\n            wait_for_condition 100 10 {\n                [s expired_keys] eq $prev_expired + 1\n            } else {\n                fail \"key not expired\"\n     
       }\n            assert_equal [r get testkeyspace:expired] 1\n        }\n\n        test \"Subkey notification: subscribe starts callback\" {\n            r keyspace.subscribe_subkeys\n            r keyspace.reset_subkey_events\n            r config set notify-keyspace-events \"\"\n        }\n    \n        test \"Subkey notification: HSET triggers module subkey callback\" {\n            r keyspace.reset_subkey_events\n            r hset myhash f1 v1 f2 v2\n            set events [r keyspace.get_subkey_events]\n            assert_equal 1 [llength $events]\n            assert_equal \"hset myhash 2 f1 f2\" [lindex $events 0]\n            r del myhash\n        }\n\n        test \"Subkey notification: HDEL triggers module subkey callback\" {\n            r hset myhash f1 v1 f2 v2\n            r keyspace.reset_subkey_events\n            r hdel myhash f1\n            set events [r keyspace.get_subkey_events]\n            assert_equal 1 [llength $events]\n            assert_equal \"hdel myhash 1 f1\" [lindex $events 0]\n            r del myhash\n        }\n\n        test \"Subkey notification: non-subkey event calls subkey callback with count=0\" {\n            r hset myhash f1 v1\n            r keyspace.reset_subkey_events\n            r del myhash\n            set events [r keyspace.get_subkey_events]\n            # DEL is NOTIFY_GENERIC — our callback is registered for\n            # HASH|GENERIC, so it should be called with subkeys=NULL, count=0.\n            assert_equal 1 [llength $events]\n            assert_equal \"del myhash 0\" [lindex $events 0]\n        }\n\n        test \"Subkey notification: module-triggered NotifyKeyspaceEventWithSubkeys\" {\n            r keyspace.reset_subkey_events\n            r keyspace.notify_with_subkeys mykey sk1 sk2 sk3\n            set events [r keyspace.get_subkey_events]\n            assert_equal 1 [llength $events]\n            assert_equal \"module_subkey_event mykey 3 sk1 sk2 sk3\" [lindex $events 0]\n        }\n\n        
test \"Subkey notification: lazy hash field expiry triggers hexpired with subkeys\" {\n            r debug set-active-expire 0\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n            r hpexpire myhash 10 FIELDS 2 f1 f2\n            r keyspace.reset_subkey_events\n            after 100\n            r hmget myhash f1 f2\n            assert_equal \"hexpired myhash 2 f1 f2\" [lindex [r keyspace.get_subkey_events] 0]\n            r debug set-active-expire 1\n        } {OK} {needs:debug}\n\n        test \"Subkey notification: active hash field expiry triggers hexpired with subkeys\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r keyspace.reset_subkey_events\n            r hpexpire myhash 10 FIELDS 2 f1 f2\n            # wait for active expiry to kick in\n            wait_for_condition 50 100 {\n                [r exists myhash] == 0\n            } else {\n                fail \"Fields not expired by active expiry\"\n            }\n            # fields order is undefined\n            assert_match \"hexpired myhash 2 f* f*\" [lindex [r keyspace.get_subkey_events] 1]\n            r del myhash\n        }\n\n        test \"Subkey notification: unsubscribe stops callback and resubscribe resumes\" {\n            r keyspace.reset_subkey_events\n            r hset myhash f1 v1\n            set events [r keyspace.get_subkey_events]\n            assert_equal 1 [llength $events]\n\n            # Unsubscribe — events should stop\n            r keyspace.unsubscribe_subkeys\n            r keyspace.reset_subkey_events\n            r hset myhash f2 v2\n            set events [r keyspace.get_subkey_events]\n            assert_equal 0 [llength $events]\n            # active expire should not trigger subkey callback\n            r hpexpire myhash 10 FIELDS 2 f1 f2\n            wait_for_condition 50 100 {\n                [r exists myhash] == 0\n            } else {\n                fail \"Fields not expired by active expiry\"\n  
          }\n            set events [r keyspace.get_subkey_events]\n            assert_equal 0 [llength $events]\n\n            # Re-subscribe — events should resume\n            r keyspace.subscribe_subkeys\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r keyspace.reset_subkey_events\n            r hpexpire myhash 10 FIELDS 2 f1 f2\n            assert_match \"hexpire myhash 2 f* f*\" [lindex [r keyspace.get_subkey_events] 0]\n            # active expire should also resume subkey callback\n            wait_for_condition 50 100 {\n                [r exists myhash] == 0\n            } else {\n                fail \"Fields not expired by active expiry\"\n            }\n            assert_match \"hexpired myhash 2 f* f*\" [lindex [r keyspace.get_subkey_events] 1]\n\n            r keyspace.unsubscribe_subkeys\n            r keyspace.reset_subkey_events\n            r del myhash\n        }\n\n        test \"Subkey notification: SUBKEYS_REQUIRED flag skips events without subkeys\" {\n            r keyspace.subscribe_require_subkeys\n            r keyspace.reset_subkey_events\n\n            # HSET has subkeys — should trigger callback\n            r hset myhash f1 v1 f2 v2\n            set events [r keyspace.get_subkey_events]\n            assert_equal 1 [llength $events]\n            assert_equal \"hset myhash 2 f1 f2\" [lindex $events 0]\n\n            # DEL has no subkeys — the callback should be skipped.\n            r keyspace.reset_subkey_events\n            r del myhash\n            set events [r keyspace.get_subkey_events]\n            assert_equal 0 [llength $events]\n\n            r keyspace.unsubscribe_require_subkeys\n        }\n\n        test \"Unload the module - testkeyspace\" {\n            assert_equal {OK} [r module unload testkeyspace]\n        }\n\n        test \"Verify RM_StringDMA with expiration are not causing invalid memory access\" {\n            assert_equal {OK} [r set x 1 EX 1]\n        }\n    }\n\n    # 
Replication test: replica module receives subkey notifications\n    start_server [list overrides [list loadmodule \"$testmodule\"]] {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        start_server [list overrides [list loadmodule \"$testmodule\"]] {\n            set replica [srv 0 client]\n\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            test \"Subkey notification: replica module receives subkey callback after replication\" {\n                $master keyspace.subscribe_subkeys\n                $replica keyspace.subscribe_subkeys\n                $replica keyspace.reset_subkey_events\n\n                $master hset myhash f1 v1 f2 v2\n\n                wait_for_ofs_sync $master $replica\n\n                set events [$replica keyspace.get_subkey_events]\n                assert_equal 1 [llength $events]\n                assert_equal \"hset myhash 2 f1 f2\" [lindex $events 0]\n\n                $master del myhash\n                $master keyspace.unsubscribe_subkeys\n                $replica keyspace.unsubscribe_subkeys\n            }\n        }\n    }\n\n    start_server {} {\n        test {OnLoad failure will handle un-registration} {\n            catch {r module load $testmodule noload}\n            r set x 1\n            r hset y f v\n            r lpush z 1 2 3\n            r sadd p 1 2 3\n            r zadd t 1 f1 2 f2\n            r xadd s * f v\n            r ping\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/keyspecs.tcl",
    "content": "set testmodule [file normalize tests/modules/keyspecs.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"Module key specs: No spec, only legacy triple\" {\n        set reply [lindex [r command info kspec.none] 0]\n        # Verify (first, last, step) and not movablekeys\n        assert_equal [lindex $reply 2] {module}\n        assert_equal [lindex $reply 3] 1\n        assert_equal [lindex $reply 4] -1\n        assert_equal [lindex $reply 5] 2\n        # Verify key-spec auto-generated from the legacy triple\n        set keyspecs [lindex $reply 8]\n        assert_equal [llength $keyspecs] 1\n        assert_equal [lindex $keyspecs 0] {flags {RW access update} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey -1 keystep 2 limit 0}}}\n        assert_equal [r command getkeys kspec.none key1 val1 key2 val2] {key1 key2}\n    }\n\n    test \"Module key specs: No spec, only legacy triple with getkeys-api\" {\n        set reply [lindex [r command info kspec.nonewithgetkeys] 0]\n        # Verify (first, last, step) and movablekeys\n        assert_equal [lindex $reply 2] {module movablekeys}\n        assert_equal [lindex $reply 3] 1\n        assert_equal [lindex $reply 4] -1\n        assert_equal [lindex $reply 5] 2\n        # Verify key-spec auto-generated from the legacy triple\n        set keyspecs [lindex $reply 8]\n        assert_equal [llength $keyspecs] 1\n        assert_equal [lindex $keyspecs 0] {flags {RW access update variable_flags} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey -1 keystep 2 limit 0}}}\n        assert_equal [r command getkeys kspec.nonewithgetkeys key1 val1 key2 val2] {key1 key2}\n    }\n\n    test \"Module key specs: Two ranges\" {\n        set reply [lindex [r command info kspec.tworanges] 0]\n        # Verify (first, last, step) and not movablekeys\n        assert_equal [lindex $reply 2] {module}\n        assert_equal 
[lindex $reply 3] 1\n        assert_equal [lindex $reply 4] 2\n        assert_equal [lindex $reply 5] 1\n        # Verify key-specs\n        set keyspecs [lindex $reply 8]\n        assert_equal [lindex $keyspecs 0] {flags {RO access} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 1] {flags {RW update} begin_search {type index spec {index 2}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [r command getkeys kspec.tworanges foo bar baz quux] {foo bar}\n    }\n\n    test \"Module key specs: Two ranges with gap\" {\n        set reply [lindex [r command info kspec.tworangeswithgap] 0]\n        # Verify (first, last, step) and movablekeys\n        assert_equal [lindex $reply 2] {module movablekeys}\n        assert_equal [lindex $reply 3] 1\n        assert_equal [lindex $reply 4] 1\n        assert_equal [lindex $reply 5] 1\n        # Verify key-specs\n        set keyspecs [lindex $reply 8]\n        assert_equal [lindex $keyspecs 0] {flags {RO access} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 1] {flags {RW update} begin_search {type index spec {index 3}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [r command getkeys kspec.tworangeswithgap foo bar baz quux] {foo baz}\n    }\n\n    test \"Module key specs: Keyword-only spec clears the legacy triple\" {\n        set reply [lindex [r command info kspec.keyword] 0]\n        # Verify (first, last, step) and movablekeys\n        assert_equal [lindex $reply 2] {module movablekeys}\n        assert_equal [lindex $reply 3] 0\n        assert_equal [lindex $reply 4] 0\n        assert_equal [lindex $reply 5] 0\n        # Verify key-specs\n        set keyspecs [lindex $reply 8]\n        assert_equal [lindex $keyspecs 0] {flags {RO access} begin_search {type keyword spec {keyword 
KEYS startfrom 1}} find_keys {type range spec {lastkey -1 keystep 1 limit 0}}}\n        assert_equal [r command getkeys kspec.keyword foo KEYS bar baz] {bar baz}\n    }\n\n    test \"Module key specs: Complex specs, case 1\" {\n        set reply [lindex [r command info kspec.complex1] 0]\n        # Verify (first, last, step) and movablekeys\n        assert_equal [lindex $reply 2] {module movablekeys}\n        assert_equal [lindex $reply 3] 1\n        assert_equal [lindex $reply 4] 1\n        assert_equal [lindex $reply 5] 1\n        # Verify key-specs\n        set keyspecs [lindex $reply 8]\n        assert_equal [lindex $keyspecs 0] {flags RO begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 1] {flags {RW update} begin_search {type keyword spec {keyword STORE startfrom 2}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 2] {flags {RO access} begin_search {type keyword spec {keyword KEYS startfrom 2}} find_keys {type keynum spec {keynumidx 0 firstkey 1 keystep 1}}}\n        assert_equal [r command getkeys kspec.complex1 foo dummy KEYS 1 bar baz STORE quux] {foo quux bar}\n    }\n\n    test \"Module key specs: Complex specs, case 2\" {\n        set reply [lindex [r command info kspec.complex2] 0]\n        # Verify (first, last, step) and movablekeys\n        assert_equal [lindex $reply 2] {module movablekeys}\n        assert_equal [lindex $reply 3] 1\n        assert_equal [lindex $reply 4] 2\n        assert_equal [lindex $reply 5] 1\n        # Verify key-specs\n        set keyspecs [lindex $reply 8]\n        assert_equal [lindex $keyspecs 0] {flags {RW update} begin_search {type keyword spec {keyword STORE startfrom 5}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 1] {flags {RO access} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 
limit 0}}}\n        assert_equal [lindex $keyspecs 2] {flags {RO access} begin_search {type index spec {index 2}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}\n        assert_equal [lindex $keyspecs 3] {flags {RW update} begin_search {type index spec {index 3}} find_keys {type keynum spec {keynumidx 0 firstkey 1 keystep 1}}}\n        assert_equal [lindex $keyspecs 4] {flags {RW update} begin_search {type keyword spec {keyword MOREKEYS startfrom 5}} find_keys {type range spec {lastkey -1 keystep 1 limit 0}}}\n        assert_equal [r command getkeys kspec.complex2 foo bar 2 baz quux banana STORE dst dummy MOREKEYS hey ho] {dst foo bar baz quux hey ho}\n    }\n\n    test \"Module command list filtering\" {\n        ;# Note: we piggyback this tcl file to test the general functionality of command list filtering\n        set reply [r command list filterby module keyspecs]\n        assert_equal [lsort $reply] {kspec.complex1 kspec.complex2 kspec.keyword kspec.none kspec.nonewithgetkeys kspec.tworanges kspec.tworangeswithgap}\n        assert_equal [r command getkeys kspec.complex2 foo bar 2 baz quux banana STORE dst dummy MOREKEYS hey ho] {dst foo bar baz quux hey ho}\n    }\n\n    test {COMMAND GETKEYSANDFLAGS correctly reports module key-spec without flags} {\n        r command getkeysandflags kspec.none key1 val1 key2 val2\n    } {{key1 {RW access update}} {key2 {RW access update}}}\n\n    test {COMMAND GETKEYSANDFLAGS correctly reports module key-spec with flags} {\n        r command getkeysandflags kspec.nonewithgetkeys key1 val1 key2 val2\n    } {{key1 {RO access}} {key2 {RO access}}}\n\n    test {COMMAND GETKEYSANDFLAGS correctly reports module key-spec flags} {\n        r command getkeysandflags kspec.keyword keys key1 key2 key3\n    } {{key1 {RO access}} {key2 {RO access}} {key3 {RO access}}}\n\n    # user that can only read from \"read\" keys, write to \"write\" keys, and read+write to \"RW\" keys\n    r ACL setuser testuser +@all %R~read* %W~write* 
%RW~rw*\n\n    test \"Module key specs: No spec, only legacy triple - ACL\" {\n        # legacy triple didn't provide flags, so they require both read and write\n        assert_equal \"OK\" [r ACL DRYRUN testuser kspec.none rw val1]\n        assert_match {*has no permissions to access the 'read' key*} [r ACL DRYRUN testuser kspec.none read val1]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN testuser kspec.none write val1]\n    }\n\n    test \"Module key specs: tworanges - ACL\" {\n        assert_equal \"OK\" [r ACL DRYRUN testuser kspec.tworanges read write]\n        assert_equal \"OK\" [r ACL DRYRUN testuser kspec.tworanges rw rw]\n        assert_match {*has no permissions to access the 'read' key*} [r ACL DRYRUN testuser kspec.tworanges rw read]\n        assert_match {*has no permissions to access the 'write' key*} [r ACL DRYRUN testuser kspec.tworanges write rw]\n    }\n\n    foreach cmd {kspec.none kspec.tworanges} {\n        test \"$cmd command will not be marked with movablekeys\" {\n            set info [lindex [r command info $cmd] 0]\n            assert_no_match {*movablekeys*} [lindex $info 2]\n        }\n    }\n\n    foreach cmd {kspec.keyword kspec.complex1 kspec.complex2 kspec.nonewithgetkeys} {\n        test \"$cmd command is marked with movablekeys\" {\n            set info [lindex [r command info $cmd] 0]\n            assert_match {*movablekeys*} [lindex $info 2]\n        }\n    }\n\n    test \"Unload the module - keyspecs\" {\n        assert_equal {OK} [r module unload keyspecs]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/ksn_notify_side_effect.tcl",
    "content": "# Test for SetKeyMeta during keyspace notification (KSN) callbacks.\n#\n# On key space notification, the module shouldn't modify the key. This focused \n# regression tests makes an exception for RediSearch which uses SetKeyMeta\n# as part of its KSN callback (Currently only for hash keys without hash field \n# expiration). The test module mutates key metadata during selected notifications,\n# which may reallocate the underlying kvobj and invalidates any local pointer to \n# it. Each test uses fresh keys when possible so the first metadata write forces\n# the reallocation-sensitive path, then verifies the command still completes.\n\nset testmodule [file normalize tests/modules/keymeta_notify.so]\n\nstart_server {tags {\"modules\" \"external:skip\"} overrides {enable-debug-command yes}} {\n    r debug enable-keymeta-runtime-registration 1\n    r module load $testmodule\n\n    # --- HASH notification tests ---\n    # Each test uses a fresh key to ensure kvobj reallocation happens.\n\n    test {HSETNX with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r HSETNX hsetnx_key field1 value1\n        assert_equal [r HGET hsetnx_key field1] \"value1\"\n        assert_equal [r keymetanotify.get hsetnx_key] \"notified\"\n        # Verify SetKeyMeta was called (reallocation happened on first call)\n        assert {[r keymetanotify.setcount] > $before}\n\n        # Second HSETNX on same field (no-op, field exists) - in-place update\n        r HSETNX hsetnx_key field1 value2\n        assert_equal [r HGET hsetnx_key field1] \"value1\"\n\n        # HSETNX on a new field in the same hash\n        r HSETNX hsetnx_key field2 value2\n        assert_equal [r HGET hsetnx_key field2] \"value2\"\n        assert_equal [r HLEN hsetnx_key] 2\n    }\n\n    test {HSET with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r HSET hset_key f1 v1\n        assert_equal [r HGET 
hset_key f1] \"v1\"\n        assert_equal [r keymetanotify.get hset_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n\n        # Multiple fields on same key (in-place metadata update)\n        r HSET hset_key f2 v2 f3 v3\n        assert_equal [r HLEN hset_key] 3\n    }\n\n    test {HMSET with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r HMSET hmset_key f1 v1 f2 v2\n        assert_equal [r HGET hmset_key f1] \"v1\"\n        assert_equal [r HGET hmset_key f2] \"v2\"\n        assert_equal [r keymetanotify.get hmset_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {HINCRBY with SetKeyMeta in notification does not crash} {\n        # Use a fresh key - HINCRBY creates it with value 5\n        set before [r keymetanotify.setcount]\n        r HINCRBY hincrby_key counter 5\n        assert_equal [r HGET hincrby_key counter] \"5\"\n        assert_equal [r keymetanotify.get hincrby_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {HINCRBYFLOAT with SetKeyMeta in notification does not crash} {\n        # Use a fresh key - HINCRBYFLOAT creates it with value 1.5\n        set before [r keymetanotify.setcount]\n        r HINCRBYFLOAT hincrbyfloat_key value 1.5\n        assert_equal [r HGET hincrbyfloat_key value] \"1.5\"\n        assert_equal [r keymetanotify.get hincrbyfloat_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {Multiple HSETNX on new keys with SetKeyMeta does not crash} {\n        set before [r keymetanotify.setcount]\n        for {set i 0} {$i < 100} {incr i} {\n            r HSETNX \"stresskey:$i\" field \"value$i\"\n        }\n        for {set i 0} {$i < 100} {incr i} {\n            assert_equal [r HGET \"stresskey:$i\" field] \"value$i\"\n            assert_equal [r keymetanotify.get \"stresskey:$i\"] \"notified\"\n        }\n        # All 100 keys should 
have triggered SetKeyMeta\n        assert {[r keymetanotify.setcount] >= $before + 100}\n    }\n\n    test {HGETDEL with SetKeyMeta in notification does not crash} {\n        # To test the \"first SetKeyMeta causes kvobj reallocation\" scenario,\n        # create the key BEFORE loading the module so the first metadata\n        # attachment happens during HGETDEL, not during HSET.\n        r module unload keymetanotify\n        r HSET hgetdel_key f1 v1 f2 v2 f3 v3\n        r module load $testmodule\n\n        # HGETDEL returns the value and deletes the field\n        # This is the first SetKeyMeta call for this key, triggering kvobj reallocation\n        set before [r keymetanotify.setcount]\n        set result [r HGETDEL hgetdel_key FIELDS 1 f1]\n        assert_equal $result \"v1\"\n        assert_equal [r HEXISTS hgetdel_key f1] 0\n        assert_equal [r HLEN hgetdel_key] 2\n        # SetKeyMeta should be called during the hdel notification\n        assert {[r keymetanotify.setcount] > $before}\n        assert_equal [r keymetanotify.get hgetdel_key] \"notified\"\n\n        # HGETDEL multiple fields\n        set result [r HGETDEL hgetdel_key FIELDS 2 f2 f3]\n        assert_equal [lindex $result 0] \"v2\"\n        assert_equal [lindex $result 1] \"v3\"\n        assert_equal [r HLEN hgetdel_key] 0\n    }\n\n    test {HDEL with SetKeyMeta in notification does not crash} {\n        # To test the \"first SetKeyMeta causes kvobj reallocation\" scenario,\n        # create the key BEFORE loading the module so the first metadata\n        # attachment happens during HDEL, not during HSET.\n        r module unload keymetanotify\n        r HSET hdel_key f1 v1 f2 v2 f3 v3\n        r module load $testmodule\n\n        # HDEL single field - this is the first SetKeyMeta call for this key,\n        # triggering kvobj reallocation during the hdel notification\n        set before [r keymetanotify.setcount]\n        r HDEL hdel_key f1\n        assert_equal [r HEXISTS hdel_key f1] 0\n 
       assert_equal [r HLEN hdel_key] 2\n        # SetKeyMeta should be called during the hdel notification\n        assert {[r keymetanotify.setcount] > $before}\n        assert_equal [r keymetanotify.get hdel_key] \"notified\"\n\n        # HDEL multiple fields (in-place metadata update)\n        r HDEL hdel_key f2 f3\n        assert_equal [r HLEN hdel_key] 0\n    }\n\n    # --- GENERIC notification tests ---\n\n    test {PERSIST with SetKeyMeta in notification does not crash} {\n        # Create key with expiration\n        set before [r keymetanotify.setcount]\n        r SET persist_key \"value\"\n        r EXPIRE persist_key 1000\n        assert_equal [r keymetanotify.get persist_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n\n        # Verify TTL is set\n        assert {[r TTL persist_key] > 0}\n\n        # PERSIST removes expiration\n        set before [r keymetanotify.setcount]\n        r PERSIST persist_key\n        # persist notification triggers SetKeyMeta\n        assert {[r keymetanotify.setcount] > $before}\n\n        # Verify TTL is removed\n        assert_equal [r TTL persist_key] -1\n        assert_equal [r GET persist_key] \"value\"\n    }\n\n    test {COPY with SetKeyMeta in notification does not crash} {\n        # Create source key\n        set before [r keymetanotify.setcount]\n        r HSET copy_src_key f1 v1 f2 v2\n        assert_equal [r keymetanotify.get copy_src_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n\n        # COPY to new key\n        set before [r keymetanotify.setcount]\n        r COPY copy_src_key copy_dst_key\n        # copy_to notification triggers SetKeyMeta on destination\n        assert_equal [r keymetanotify.get copy_dst_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n\n        # Verify both keys have same content\n        assert_equal [r HGET copy_src_key f1] \"v1\"\n        assert_equal [r HGET copy_dst_key f1] \"v1\"\n        
assert_equal [r HGET copy_src_key f2] \"v2\"\n        assert_equal [r HGET copy_dst_key f2] \"v2\"\n\n        # COPY with REPLACE\n        r HSET copy_src_key f3 v3\n        set before [r keymetanotify.setcount]\n        r COPY copy_src_key copy_dst_key REPLACE\n        assert {[r keymetanotify.setcount] > $before}\n        assert_equal [r HGET copy_dst_key f3] \"v3\"\n    }\n\n    # --- STRING notification tests ---\n    # Each test uses a fresh key for actual kvobj reallocation.\n\n    test {SET with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r SET set_key hello\n        assert_equal [r GET set_key] \"hello\"\n        assert_equal [r keymetanotify.get set_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {APPEND with SetKeyMeta in notification does not crash} {\n        # APPEND on nonexistent key creates it\n        set before [r keymetanotify.setcount]\n        r APPEND append_key \"hello\"\n        assert_equal [r GET append_key] \"hello\"\n        assert_equal [r keymetanotify.get append_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {INCR with SetKeyMeta in notification does not crash} {\n        # INCR on nonexistent key creates it with value 1\n        set before [r keymetanotify.setcount]\n        r INCR incr_key\n        assert_equal [r GET incr_key] \"1\"\n        assert_equal [r keymetanotify.get incr_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {INCRBY with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r INCRBY incrby_key 5\n        assert_equal [r GET incrby_key] \"5\"\n        assert_equal [r keymetanotify.get incrby_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {INCRBYFLOAT with SetKeyMeta in notification does not crash} {\n        set before [r 
keymetanotify.setcount]\n        r SET incrbyfloat_key 10.5\n        r INCRBYFLOAT incrbyfloat_key 1.5\n        assert_equal [r GET incrbyfloat_key] \"12\"\n        assert_equal [r keymetanotify.get incrbyfloat_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {GETSET with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r SET getset_key \"old\"\n        r GETSET getset_key \"new\"\n        assert_equal [r GET getset_key] \"new\"\n        assert_equal [r keymetanotify.get getset_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {SETRANGE with SetKeyMeta in notification does not crash} {\n        set before [r keymetanotify.setcount]\n        r SET setrange_key \"Hello World\"\n        r SETRANGE setrange_key 6 \"Redis\"\n        assert_equal [r GET setrange_key] \"Hello Redis\"\n        assert_equal [r keymetanotify.get setrange_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    # --- GENERIC notification tests ---\n\n    test {DEL with SetKeyMeta in notification does not crash} {\n        r SET del_key \"value\"\n        assert_equal [r keymetanotify.get del_key] \"notified\"\n        r DEL del_key\n        assert_equal [r EXISTS del_key] 0\n    }\n\n    test {DELEX with SetKeyMeta in notification does not crash} {\n        r SET delex_key \"value\"\n        assert_equal [r keymetanotify.get delex_key] \"notified\"\n        r DELEX delex_key IFEQ value\n        assert_equal [r EXISTS delex_key] 0\n    }\n\n    test {MOVE with SetKeyMeta in notification does not crash} {\n        r select 10\n        r DEL move_key\n        r select 9\n\n        # Create the key before loading the module so the first metadata\n        # attachment happens during MOVE, not during SET.\n        r module unload keymetanotify\n        r SET move_key \"value\"\n        r module load $testmodule\n\n        set before 
[r keymetanotify.setcount]\n        r MOVE move_key 10\n        assert_equal [r EXISTS move_key] 0\n\n        r select 10\n        assert_equal [r GET move_key] \"value\"\n        assert_equal [r keymetanotify.get move_key] \"notified\"\n        assert {[r keymetanotify.setcount] > $before}\n        r DEL move_key\n        r select 9\n        set _ {}\n    } {} {singledb:skip}\n\n    test {RENAME with SetKeyMeta in notification does not crash} {\n        r SET rename_src \"value\"\n        r RENAME rename_src rename_dst\n        assert_equal [r GET rename_dst] \"value\"\n        assert_equal [r EXISTS rename_src] 0\n    }\n\n    test {RESTORE with SetKeyMeta in notification does not crash} {\n        r SET restore_src \"hello\"\n        set dump [r DUMP restore_src]\n        r DEL restore_src\n        set before [r keymetanotify.setcount]\n        r RESTORE restore_dst 0 $dump\n        assert_equal [r GET restore_dst] \"hello\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {RESTORE REPLACE with SetKeyMeta in notification does not crash} {\n        # Create a key with metadata already attached\n        r SET restore_replace_src \"hello\"\n        assert_equal [r keymetanotify.get restore_replace_src] \"notified\"\n        set dump [r DUMP restore_replace_src]\n        # Create a destination key that already exists (with metadata)\n        r SET restore_replace_dst \"old_value\"\n        assert_equal [r keymetanotify.get restore_replace_dst] \"notified\"\n        set before [r keymetanotify.setcount]\n        # RESTORE REPLACE overwrites the existing key, triggering delete + load\n        r RESTORE restore_replace_dst 0 $dump REPLACE\n        assert_equal [r GET restore_replace_dst] \"hello\"\n        assert {[r keymetanotify.setcount] > $before}\n    }\n\n    test {EXPIRE and key expiry with SetKeyMeta in notification does not crash} {\n        r SET expire_key \"value\"\n        assert_equal [r keymetanotify.get expire_key] 
\"notified\"\n        r PEXPIRE expire_key 50\n        after 100\n        assert_equal [r EXISTS expire_key] 0\n    }\n\n    test {DEBUG RELOAD with SetKeyMeta in notification does not crash} {\n        r SET reload_key \"value\"\n        assert_equal [r keymetanotify.get reload_key] \"notified\"\n        r DEBUG RELOAD\n        # After reload, keys are restored from RDB triggering LOADED notifications.\n        # The module setcount counter resets on reload, so just verify it is > 0\n        # (meaning SetKeyMeta was called during RDB loading).\n        assert_equal [r GET reload_key] \"value\"\n        assert {[r keymetanotify.setcount] > 0}\n    }\n\n    test {SetKeyMeta notification count is tracked} {\n        set count [r keymetanotify.setcount]\n        assert {$count > 0}\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/list.tcl",
    "content": "set testmodule [file normalize tests/modules/list.so]\n\n# The following arguments can be passed to args:\n#   i -- the number of inserts\n#   d -- the number of deletes\n#   r -- the number of replaces\n#   index -- the last index\n#   entry -- The entry pointed to by index\nproc verify_list_edit_reply {reply argv} {\n    foreach {k v} $argv {\n        assert_equal [dict get $reply $k] $v\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module list set, get, insert, delete} {\n        r del k\n        assert_error {WRONGTYPE Operation against a key holding the wrong kind of value*} {r list.set k 1 xyz}\n        r rpush k x\n        # insert, set, get\n        r list.insert k 0 foo\n        r list.insert k -1 bar\n        r list.set k 1 xyz\n        assert_equal {foo xyz bar} [r list.getall k]\n        assert_equal {foo} [r list.get k 0]\n        assert_equal {xyz} [r list.get k 1]\n        assert_equal {bar} [r list.get k 2]\n        assert_equal {bar} [r list.get k -1]\n        assert_equal {foo} [r list.get k -3]\n        assert_error {ERR index out*} {r list.get k -4}\n        assert_error {ERR index out*} {r list.get k 3}\n        # remove\n        assert_error {ERR index out*} {r list.delete k -4}\n        assert_error {ERR index out*} {r list.delete k 3}\n        r list.delete k 0\n        r list.delete k -1\n        assert_equal {xyz} [r list.getall k]\n        # removing the last element deletes the list\n        r list.delete k 0\n        assert_equal 0 [r exists k]\n    }\n\n    test {Module list iteration} {\n        r del k\n        r rpush k x y z\n        assert_equal {x y z} [r list.getall k]\n        assert_equal {z y x} [r list.getall k REVERSE]\n    }\n\n    test {Module list insert & delete} {\n        r del k\n        r rpush k x y z\n        verify_list_edit_reply [r list.edit k ikikdi foo bar baz] {i 3 index 5}\n        r list.getall k\n    } {foo x bar y baz}\n\n    
test {Module list insert & delete, neg index} {\n        r del k\n        r rpush k x y z\n        verify_list_edit_reply [r list.edit k REVERSE ikikdi foo bar baz] {i 3 index -6}\n        r list.getall k\n    } {baz y bar z foo}\n\n    test {Module list set while iterating} {\n        r del k\n        r rpush k x y z\n        verify_list_edit_reply [r list.edit k rkr foo bar] {r 2 index 3}\n        r list.getall k\n    } {foo y bar}\n\n    test {Module list set while iterating, neg index} {\n        r del k\n        r rpush k x y z\n        verify_list_edit_reply [r list.edit k reverse rkr foo bar] {r 2 index -4}\n        r list.getall k\n    } {bar y foo}\n\n    test {Module list - encoding conversion while inserting} {\n        r config set list-max-listpack-size 4\n        r del k\n        r rpush k a b c d\n        assert_encoding listpack k\n\n        # Converts to quicklist after inserting.\n        r list.edit k dii foo bar\n        assert_encoding quicklist k\n        assert_equal [r list.getall k] {foo bar b c d}\n\n        # Converts to listpack after deleting three entries.\n        r list.edit k ddd e\n        assert_encoding listpack k\n        assert_equal [r list.getall k] {c d}\n    }\n\n    test {Module list - encoding conversion while replacing} {\n        r config set list-max-listpack-size -1\n        r del k\n        r rpush k x y z\n        assert_encoding listpack k\n\n        # Converts to quicklist after replacing.\n        set big [string repeat \"x\" 4096]\n        r list.edit k r $big\n        assert_encoding quicklist k\n        assert_equal [r list.getall k] \"$big y z\"\n\n        # Converts to listpack after deleting the big entry.\n        r list.edit k d\n        assert_encoding listpack k\n        assert_equal [r list.getall k] {y z}\n    }\n\n    test {Module list - list entry and index should be updated when deletion} {\n        set original_config [config_get_set list-max-listpack-size 1]\n\n        # delete from start (index 
0)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l dd] {d 2 index 0 entry z}\n        assert_equal [r list.getall l] {z}\n\n        # delete from start (index -3)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l reverse kkd] {d 1 index -3}\n        assert_equal [r list.getall l] {y z}\n\n        # delete from tail (index 2)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l kkd] {d 1 index 2}\n        assert_equal [r list.getall l] {x y}\n\n        # delete from tail (index -1)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l reverse dd] {d 2 index -1 entry x}\n        assert_equal [r list.getall l] {x}\n\n        # delete from middle (index 1)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l kdd] {d 2 index 1}\n        assert_equal [r list.getall l] {x}\n\n        # delete from middle (index -2)\n        r del l\n        r rpush l x y z\n        verify_list_edit_reply [r list.edit l reverse kdd] {d 2 index -2}\n        assert_equal [r list.getall l] {z}\n\n        config_set list-max-listpack-size $original_config\n    }\n\n    test {Module list - KEYSIZES is updated as expected} {\n        proc run_cmd_verify_hist {cmd expOutput {retries 1}} {\n            proc K {} {return [string map { \"db0_distrib_lists_items\" \"db0_LIST\" \"# Keysizes\" \"\" \" \" \"\" \"\\n\" \"\" \"\\r\" \"\" } [r info keysizes] ]}\n            uplevel 1 $cmd\n            wait_for_condition 50 $retries {\n                $expOutput eq [K]\n            } else { fail \"Expected: \\n`$expOutput`\\n Actual:\\n`[K]`.\\nFailed after command: $cmd\" }\n        }\n\n        r select 0\n\n        # RedisModule_ListPush & RedisModule_ListDelete\n        run_cmd_verify_hist {r flushall} {}\n        run_cmd_verify_hist {r list.insert L1 0 foo} {db0_LIST:1=1}\n        run_cmd_verify_hist {r 
list.insert L1 0 bla} {db0_LIST:2=1}\n        run_cmd_verify_hist {r list.delete L1 0} {db0_LIST:1=1}\n        run_cmd_verify_hist {r list.delete L1 0} {}\n        \n\n        # RedisModule_ListSet & RedisModule_ListDelete\n        run_cmd_verify_hist {r list.insert L1 0 foo} {db0_LIST:1=1}\n        run_cmd_verify_hist {r list.set L1 0 bar} {db0_LIST:1=1}\n        run_cmd_verify_hist {r list.set L1 0 baz} {db0_LIST:1=1}\n        run_cmd_verify_hist {r list.delete L1 0} {}\n\n        # Check lazy expire\n        r debug set-active-expire 0\n        run_cmd_verify_hist {r list.insert L1 0 foo} {db0_LIST:1=1}\n        run_cmd_verify_hist {r pexpire L1 1} {db0_LIST:1=1}\n        run_cmd_verify_hist {after 5} {db0_LIST:1=1}\n        r debug set-active-expire 1\n        run_cmd_verify_hist {after 5} {} 50\n    }\n    \n    test \"Unload the module - list\" {\n        assert_equal {OK} [r module unload list]\n    }\n}\n\n# A basic test that exercises a module's list commands under cluster mode.\n# Currently, many module commands are never run even once in a clustered setup.\n# This test helps ensure that basic module functionality works correctly, that\n# the KEYSIZES histogram remains accurate, and that insert & delete are exercised.\nset testmodule [file normalize tests/modules/list.so]\nset modules [list loadmodule $testmodule]\nstart_cluster 2 2 [list tags {external:skip cluster modules} config_lines [list loadmodule $testmodule enable-debug-command yes]] {\n    test \"Module list - KEYSIZES is updated correctly in cluster mode\" {\n        for {set srvid -2} {$srvid <= 0} {incr srvid} {\n            set instance [srv $srvid client]\n            # Assert consistency after each command\n            $instance DEBUG KEYSIZES-HIST-ASSERT 1\n    \n            for {set i 0} {$i < 50} {incr i} {\n                for {set j 0} {$j < 4} {incr j} {\n                    catch {$instance list.insert \"list:$i\" $j \"item:$j\"} e\n                    if {![string match 
\"OK\" $e]} {assert_match \"*MOVED*\" $e}\n                }\n            }\n            for {set i 0} {$i < 50} {incr i} {\n                for {set j 0} {$j < 4} {incr j} {\n                    catch {$instance list.delete \"list:$i\" 0} e\n                    if {![string match \"OK\" $e]} {assert_match \"*MOVED*\" $e}\n                }\n            }\n            # Verify also that instance is responsive and didn't crash on assert\n            assert_equal [$instance dbsize] 0\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/mallocsize.tcl",
    "content": "set testmodule [file normalize tests/modules/mallocsize.so]\n\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {MallocSize of raw bytes} {\n        assert_equal [r mallocsize.setraw key 40] {OK}\n        assert_morethan [r memory usage key] 40\n    }\n    \n    test {MallocSize of string} {\n        assert_equal [r mallocsize.setstr key abcdefg] {OK}\n        assert_morethan [r memory usage key] 7 ;# Length of \"abcdefg\"\n    }\n    \n    test {MallocSize of dict} {\n        assert_equal [r mallocsize.setdict key f1 v1 f2 v2] {OK}\n        assert_morethan [r memory usage key] 8 ;# Length of \"f1v1f2v2\"\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/misc.tcl",
    "content": "set testmodule [file normalize tests/modules/misc.so]\n\nstart_server {overrides {save {900 1}} tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {test RM_Call} {\n        set info [r test.call_info commandstats]\n        # cmdstat is not in a default section, so we also test an argument was passed\n        assert { [string match \"*cmdstat_module*\" $info] }\n    }\n\n    test {test RM_Call args array} {\n        set info [r test.call_generic info commandstats]\n        # cmdstat is not in a default section, so we also test an argument was passed\n        assert { [string match \"*cmdstat_module*\" $info] }\n    }\n\n    test {test RM_Call recursive} {\n        set info [r test.call_generic test.call_generic info commandstats]\n        assert { [string match \"*cmdstat_module*\" $info] }\n    }\n\n    test {test redis version} {\n        set version [s redis_version]\n        assert_equal $version [r test.redisversion]\n    }\n\n    test {test long double conversions} {\n        set ld [r test.ld_conversion]\n        assert {[string match $ld \"0.00000000000000001\"]}\n    }\n    \n    test {test unsigned long long conversions} {\n        set ret [r test.ull_conversion]\n        assert {[string match $ret \"ok\"]}\n    }\n\n    test {test module db commands} {\n        r set x foo\n        set key [r test.randomkey]\n        assert_equal $key \"x\"\n        assert_equal [r test.dbsize] 1\n        r test.flushall\n        assert_equal [r test.dbsize] 0\n    }\n\n    test {test RedisModule_ResetDataset do not reset functions} {\n        r function load {#!lua name=lib\n            redis.register_function('test', function() return 1 end)\n        }\n        assert_equal [r function list] {{library_name lib engine LUA functions {{name test description {} flags {}}}}}\n        r test.flushall\n        assert_equal [r function list] {{library_name lib engine LUA functions {{name test description {} flags {}}}}}\n        r 
function flush\n    }\n\n    test {test module keyexists} {\n        r set x foo\n        assert_equal 1 [r test.keyexists x]\n        r del x\n        assert_equal 0 [r test.keyexists x]\n    }\n\n    test {test module lru api} {\n        r config set maxmemory-policy allkeys-lru\n        r set x foo\n        set lru [r test.getlru x]\n        assert { $lru <= 1000 }\n        set was_set [r test.setlru x 100000]\n        assert { $was_set == 1 }\n        set idle [r object idletime x]\n        assert { $idle >= 100 }\n        set lru [r test.getlru x]\n        assert { $lru >= 100000 }\n        r config set maxmemory-policy allkeys-lfu\n        set lru [r test.getlru x]\n        assert { $lru == -1 }\n        set was_set [r test.setlru x 100000]\n        assert { $was_set == 0 }\n    }\n    r config set maxmemory-policy allkeys-lru\n\n    test {test module lfu api} {\n        r config set maxmemory-policy allkeys-lfu\n        r set x foo\n        set lfu [r test.getlfu x]\n        assert { $lfu >= 1 }\n        set was_set [r test.setlfu x 100]\n        assert { $was_set == 1 }\n        set freq [r object freq x]\n        assert { $freq <= 100 }\n        set lfu [r test.getlfu x]\n        assert { $lfu <= 100 }\n        r config set maxmemory-policy allkeys-lru\n        set lfu [r test.getlfu x]\n        assert { $lfu == -1 }\n        set was_set [r test.setlfu x 100]\n        assert { $was_set == 0 }\n    }\n\n    test {test module clientinfo api} {\n        # Test basic sanity and SSL flag\n        set info [r test.clientinfo]\n        set ssl_flag [expr $::tls ? 
{\"ssl:\"} : {\":\"}]\n\n        assert { [dict get $info db] == 9 }\n        assert { [dict get $info flags] == \"${ssl_flag}::::\" }\n\n        # Test MULTI flag\n        r multi\n        r test.clientinfo\n        set info [lindex [r exec] 0]\n        assert { [dict get $info flags] == \"${ssl_flag}::::multi\" }\n\n        # Test TRACKING flag\n        r client tracking on\n        set info [r test.clientinfo]\n        assert { [dict get $info flags] == \"${ssl_flag}::tracking::\" }\n        r CLIENT TRACKING off\n    }\n\n    test {tracking with rm_call sanity} {\n        set rd_trk [redis_client]\n        $rd_trk HELLO 3\n        $rd_trk CLIENT TRACKING on\n        r MSET key1{t} 1 key2{t} 1\n\n        # GET triggers tracking, SET does not\n        $rd_trk test.rm_call GET key1{t}\n        $rd_trk test.rm_call SET key2{t} 2\n        r MSET key1{t} 2 key2{t} 2\n        assert_equal {invalidate key1{t}} [$rd_trk read]\n        assert_equal \"PONG\" [$rd_trk ping]\n        $rd_trk close\n    }\n\n    test {tracking with rm_call with script} {\n        set rd_trk [redis_client]\n        $rd_trk HELLO 3\n        $rd_trk CLIENT TRACKING on\n        r MSET key1{t} 1 key2{t} 1\n\n        # GET triggers tracking, SET does not\n        $rd_trk test.rm_call EVAL \"redis.call('get', 'key1{t}')\" 2 key1{t} key2{t}\n        r MSET key1{t} 2 key2{t} 2\n        assert_equal {invalidate key1{t}} [$rd_trk read]\n        assert_equal \"PONG\" [$rd_trk ping]\n        $rd_trk close\n    }\n\n    test {RM_SignalModifiedKey - tracking invalidation} {\n        set rd_trk [redis_client]\n        $rd_trk HELLO 3\n        $rd_trk CLIENT TRACKING on\n        r SET mykey{t} abc\n\n        # Track the key by reading it\n        $rd_trk GET mykey{t}\n\n        # Modify the key using a module command that calls RM_SignalModifiedKey\n        r test.signalmodifiedkey mykey{t}\n\n        # Should receive an invalidation message\n        assert_equal {invalidate mykey{t}} [$rd_trk read]\n        
assert_equal \"PONG\" [$rd_trk ping]\n        $rd_trk close\n    }\n\n    test {RM_SignalModifiedKey - update LRU timestamp} {\n        set old_policy [config_get_set maxmemory-policy allkeys-lru]\n        r SET mykey{t} abc\n        after 2000\n        assert_morethan_equal [r object idletime mykey{t}] 1\n\n        # LRU should be updated.\n        r test.signalmodifiedkey mykey{t}\n        assert_lessthan [r object idletime mykey{t}] 2\n        r config set maxmemory-policy $old_policy\n    } {OK} {slow}\n\n    test {publish to self inside rm_call} {\n        r hello 3\n        r subscribe foo\n\n        # published message comes after the response of the command that issued it.\n        assert_equal [r test.rm_call publish foo bar] {1}\n        assert_equal [r read] {message foo bar}\n\n        r unsubscribe foo\n        r hello 2\n        set _ \"\"\n    } {} {resp3}\n\n    test {test module get/set client name by id api} {\n        catch { r test.getname } e\n        assert_equal \"-ERR No name\" $e\n        r client setname nobody\n        catch { r test.setname \"name with spaces\" } e\n        assert_match \"*Invalid argument*\" $e\n        assert_equal nobody [r client getname]\n        assert_equal nobody [r test.getname]\n        r test.setname somebody\n        assert_equal somebody [r client getname]\n    }\n\n    test {test module getclientcert api} {\n        set cert [r test.getclientcert]\n\n        if {$::tls} {\n            assert {$cert != \"\"}\n        } else {\n            assert {$cert == \"\"}\n        }\n    }\n\n    test {test detached thread safe context} {\n        r test.log_tsctx \"info\" \"Test message\"\n        verify_log_message 0 \"*<misc> Test message*\" 0\n    }\n\n    test {test RM_Call CLIENT INFO} {\n        assert_match \"*fd=-1*\" [r test.call_generic client info]\n    }\n\n    test {Unsafe command names are sanitized in INFO output} {\n        r test.weird:cmd\n        set info [r info commandstats]\n        assert_match 
{*cmdstat_test.weird_cmd:calls=1*} $info\n    }\n\n    test {test monotonic time} {\n        set x [r test.monotonic_time]\n        assert { [r test.monotonic_time] >= $x }\n    }\n\n    test {rm_call OOM} {\n        r config set maxmemory 1\n        r config set maxmemory-policy volatile-lru\n\n        # sanity test plain call\n        assert_equal {OK} [\n            r test.rm_call set x 1\n        ]\n\n        # add the M flag\n        assert_error {OOM *} {\n            r test.rm_call_flags M set x 1\n\n        }\n\n        # test a non deny-oom command\n        assert_equal {1} [\n            r test.rm_call_flags M get x\n        ]\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {rm_call clear OOM} {\n        r config set maxmemory 1\n\n        # verify rm_call fails with OOM\n        assert_error {OOM *} {\n            r test.rm_call_flags M set x 1\n        }\n\n        # clear OOM state\n        r config set maxmemory 0\n\n        # test set command is allowed\n        r test.rm_call_flags M set x 1\n    } {OK} {needs:config-maxmemory}\n\n    test {rm_call OOM Eval} {\n        r config set maxmemory 1\n        r config set maxmemory-policy volatile-lru\n\n        # use the M flag without allow-oom shebang flag\n        assert_error {OOM *} {\n            r test.rm_call_flags M eval {#!lua\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n\n        # add the M flag with allow-oom shebang flag\n        assert_equal {1} [\n            r test.rm_call_flags M eval {#!lua flags=allow-oom\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        ]\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {rm_call write flag} {\n        # add the W flag\n        assert_error {ERR Write command 'set' was called while write is not allowed.} {\n            r test.rm_call_flags W set x 1\n        }\n\n        # test a non 
write command\n        r test.rm_call_flags W get x\n    } {1}\n\n    test {rm_call EVAL} {\n        r test.rm_call eval {\n            redis.call('set','x',1)\n            return 1\n        } 1 x\n\n        assert_error {ERR Write commands are not allowed from read-only scripts.*} {\n            r test.rm_call eval {#!lua flags=no-writes\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n    }\n\n    # Note: each script is unique, to check that flags are extracted correctly\n    test {rm_call EVAL - OOM - with M flag} {\n        r config set maxmemory 1\n\n        # script without shebang, but uses SET, so fails\n        assert_error {*OOM command not allowed when used memory > 'maxmemory'*} {\n            r test.rm_call_flags M eval {\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n\n        # script with an allow-oom flag, succeeds despite using SET\n        r test.rm_call_flags M eval {#!lua flags=allow-oom\n            redis.call('set','x', 1)\n            return 2\n        } 1 x\n\n        # script with no-writes flag, implies allow-oom, succeeds\n        r test.rm_call_flags M eval {#!lua flags=no-writes\n            redis.call('get','x')\n            return 2\n        } 1 x\n\n        # script with shebang using default flags, so fails regardless of using only GET\n        assert_error {*OOM command not allowed when used memory > 'maxmemory'*} {\n            r test.rm_call_flags M eval {#!lua\n                redis.call('get','x')\n                return 3\n            } 1 x\n        }\n\n        # script without shebang, but uses GET, so succeeds\n        r test.rm_call_flags M eval {\n            redis.call('get','x')\n            return 4\n        } 1 x\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    # All RM_Call script invocations succeed in the OOM state when the M flag is not used\n    test {rm_call EVAL - OOM - without M flag} {\n    
    r config set maxmemory 1\n\n        # no shebang at all\n        r test.rm_call eval {\n            redis.call('set','x',1)\n            return 6\n        } 1 x\n\n        # Shebang without flags\n        r test.rm_call eval {#!lua\n            redis.call('set','x', 1)\n            return 7\n        } 1 x\n\n        # with allow-oom flag\n        r test.rm_call eval {#!lua flags=allow-oom\n            redis.call('set','x', 1)\n            return 8\n        } 1 x\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test \"not enough good replicas\" {\n        r set x \"some value\"\n        r config set min-replicas-to-write 1\n\n        # rm_call in script mode\n        assert_error {NOREPLICAS *} {r test.rm_call_flags S set x s}\n\n        assert_equal [\n            r test.rm_call eval {#!lua flags=no-writes\n                return redis.call('get','x')\n            } 1 x\n        ] \"some value\"\n\n        assert_equal [\n            r test.rm_call eval {\n                return redis.call('get','x')\n            } 1 x\n        ] \"some value\"\n\n        assert_error {NOREPLICAS *} {\n            r test.rm_call eval {#!lua\n                return redis.call('get','x')\n            } 1 x\n        }\n\n        assert_error {NOREPLICAS *} {\n            r test.rm_call eval {\n                return redis.call('set','x', 1)\n            } 1 x\n        }\n\n        r config set min-replicas-to-write 0\n    }\n\n    test {rm_call EVAL - read-only replica} {\n        r replicaof 127.0.0.1 1\n\n        # rm_call in script mode\n        assert_error {READONLY *} {r test.rm_call_flags S set x 1}\n\n        assert_error {READONLY You can't write against a read only replica. 
script*} {\n            r test.rm_call eval {\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n\n        r test.rm_call eval {#!lua flags=no-writes\n            redis.call('get','x')\n            return 2\n        } 1 x\n\n        assert_error {READONLY Can not run script with write flag on readonly replica*} {\n            r test.rm_call eval {#!lua\n                redis.call('get','x')\n                return 3\n            } 1 x\n        }\n\n        r test.rm_call eval {\n            redis.call('get','x')\n            return 4\n        } 1 x\n\n        r replicaof no one\n    } {OK} {needs:config-maxmemory}\n\n    test {rm_call EVAL - stale replica} {\n        r replicaof 127.0.0.1 1\n        r config set replica-serve-stale-data no\n\n        # rm_call in script mode\n        assert_error {MASTERDOWN *} {\n            r test.rm_call_flags S get x\n        }\n\n        assert_error {MASTERDOWN *} {\n            r test.rm_call eval {#!lua flags=no-writes\n                redis.call('get','x')\n                return 2\n            } 1 x\n        }\n\n        assert_error {MASTERDOWN *} {\n            r test.rm_call eval {\n                redis.call('get','x')\n                return 4\n            } 1 x\n        }\n\n        r replicaof no one\n        r config set replica-serve-stale-data yes\n    } {OK} {needs:config-maxmemory}\n\n    test \"rm_call EVAL - failed bgsave prevents writes\" {\n        r config set rdb-key-save-delay 10000000\n        populate 1000\n        r set x x\n        r bgsave\n        set pid1 [get_child_pid 0]\n        catch {exec kill -9 $pid1}\n        waitForBgsave r\n\n        # make sure a read command succeeds\n        assert_equal [r get x] x\n\n        # make sure a write command fails\n        assert_error {MISCONF *} {r set x y}\n\n        # rm_call in script mode\n        assert_error {MISCONF *} {r test.rm_call_flags S set x 1}\n\n        # repeat with script\n        
assert_error {MISCONF *} {r test.rm_call eval {\n            return redis.call('set','x',1)\n            } 1 x\n        }\n        assert_equal {x} [r test.rm_call eval {\n            return redis.call('get','x')\n            } 1 x\n        ]\n\n        # again with script using shebang\n        assert_error {MISCONF *} {r test.rm_call eval {#!lua\n            return redis.call('set','x',1)\n            } 1 x\n        }\n        assert_equal {x} [r test.rm_call eval {#!lua flags=no-writes\n            return redis.call('get','x')\n            } 1 x\n        ]\n\n        r config set rdb-key-save-delay 0\n        r bgsave\n        waitForBgsave r\n\n        # server is writable again\n        r set x y\n    } {OK}\n\n    test \"malloc API\" {\n        assert_equal {OK} [r test.malloc_api 0]\n    }\n\n    test \"Cluster keyslot\" {\n        assert_equal 12182 [r test.keyslot foo]\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {test Dry Run - OK OOM/ACL} {\n        set x 5\n        r set x $x\n        catch {r test.rm_call_flags DMC set x 10} e\n        assert_match {*NULL reply returned*} $e\n        assert_equal [r get x] 5\n    }\n\n    test {test Dry Run - Fail OOM} {\n        set x 5\n        r set x $x\n        r config set maxmemory 1\n        catch {r test.rm_call_flags DM set x 10} e\n        assert_match {*OOM*} $e\n        assert_equal [r get x] $x\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test {test Dry Run - Fail ACL} {\n        set x 5\n        r set x $x\n        # deny all permissions besides the dryrun command\n        r acl setuser default resetkeys\n\n        catch {r test.rm_call_flags DC set x 10} e\n        assert_match {*NOPERM No permissions to access a key*} $e\n        r acl setuser default +@all ~*\n        assert_equal [r get x] $x\n    }\n\n    test {test silent open key} {\n        r debug set-active-expire 0\n        r test.clear_n_events\n     
   r set x 1 PX 10\n        after 1000\n        # now the key has been expired, open it silently and make sure no events were fired.\n        assert_error {key not found} {r test.silent_open_key x}\n        assert_equal {0} [r test.get_n_events]\n    }\n\nif {[string match {*jemalloc*} [s mem_allocator]]} {\n    test {test RM_Call with large arg for SET command} {\n        # set a big value to trigger increasing the query buf\n        r set foo [string repeat A 100000]\n        # set a smaller value, but still > PROTO_MBULK_BIG_ARG (32*1024), so Redis will try to store the query buf itself in the DB.\n        r test.call_generic set bar [string repeat A 33000]\n        # assert the value was trimmed\n        assert {[r memory usage bar] < 42000}; # 42K to account for Jemalloc's additional memory overhead.\n    }\n} ;# if jemalloc\n\n    test \"Unload the module - misc\" {\n        assert_equal {OK} [r module unload misc]\n    }\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    test {Detect incompatible operations in cluster mode for module} {\n        r config set cluster-compatibility-sample-ratio 100\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # since test.no_cluster_cmd and its subcommand have the 'no-cluster' flag,\n        # they are counted as incompatible ops, incrementing the counter by 2\n        r module load $testmodule\n        assert_equal [expr $incompatible_ops+2] [s cluster_incompatible_ops]\n\n        # incompatible_cluster_cmd is similar to MSET, check if it is counted as\n        # incompatible ops with different numbers of keys\n        # only 1 key, should not increment the counter\n        r test.incompatible_cluster_cmd foo bar\n        assert_equal [expr $incompatible_ops+2] [s cluster_incompatible_ops]\n        # 2 cross slot keys, should increment the counter\n        r test.incompatible_cluster_cmd foo bar bar foo\n        assert_equal [expr $incompatible_ops+3] [s cluster_incompatible_ops]\n        # 2 non cross 
slot keys, should not increment the counter\n        r test.incompatible_cluster_cmd foo bar bar{foo} bar\n        assert_equal [expr $incompatible_ops+3] [s cluster_incompatible_ops]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/moduleauth.tcl",
    "content": "set testmodule [file normalize tests/modules/auth.so]\nset testmoduletwo [file normalize tests/modules/moduleauthtwo.so]\nset miscmodule [file normalize tests/modules/misc.so]\n\nproc cmdstat {cmd} {\n    return [cmdrstat $cmd r]\n}\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    r module load $testmoduletwo\n\n    set hello2_response [r HELLO 2]\n    set hello3_response [r HELLO 3]\n\n    test {test registering module auth callbacks} {\n        assert_equal {OK} [r testmoduleone.rm_register_blocking_auth_cb]\n        assert_equal {OK} [r testmoduletwo.rm_register_auth_cb]\n        assert_equal {OK} [r testmoduleone.rm_register_auth_cb]\n    }\n\n    test {test module AUTH for non existing / disabled users} {\n        r config resetstat\n        # Validate that an error is thrown for non existing users.\n        assert_error {*WRONGPASS*} {r AUTH foo pwd}\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n        # Validate that an error is thrown for disabled users.\n        r acl setuser foo >pwd off ~* &* +@all\n        assert_error {*WRONGPASS*} {r AUTH foo pwd}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=2*} [cmdstat auth]\n    }\n\n    test {test non blocking module AUTH} {\n        r config resetstat\n        # Test for a fixed password user\n        r acl setuser foo >pwd on ~* &* +@all\n        assert_equal {OK} [r AUTH foo allow]\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo deny}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n        assert_error {*WRONGPASS*} {r AUTH foo nomatch}\n        assert_match {*calls=3,*,rejected_calls=0,failed_calls=2*} [cmdstat auth]\n        assert_equal {OK} [r AUTH foo pwd]\n        # Test for No Pass user\n        r acl setuser foo on ~* &* +@all nopass\n        assert_equal {OK} [r AUTH foo allow]\n        assert_error {*Auth denied by Misc Module*} {r AUTH 
foo deny}\n        assert_match {*calls=6,*,rejected_calls=0,failed_calls=3*} [cmdstat auth]\n        assert_equal {OK} [r AUTH foo nomatch]\n\n        # Validate that the Module added an ACL Log entry.\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {foo}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry reason] eq {auth}}\n        assert {[dict get $entry object] eq {Module Auth}}\n        assert_match {*cmd=auth*} [dict get $entry client-info]\n        r ACL LOG RESET\n    }\n\n    test {test non blocking module HELLO AUTH} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        # Validate proto 2 and 3 in case of success\n        assert_equal $hello2_response [r HELLO 2 AUTH foo pwd]\n        assert_equal $hello2_response [r HELLO 2 AUTH foo allow]\n        assert_equal $hello3_response [r HELLO 3 AUTH foo pwd]\n        assert_equal $hello3_response [r HELLO 3 AUTH foo allow]\n        # Validate denying AUTH for the HELLO cmd\n        assert_error {*Auth denied by Misc Module*} {r HELLO 2 AUTH foo deny}\n        assert_match {*calls=5,*,rejected_calls=0,failed_calls=1*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 2 AUTH foo nomatch}\n        assert_match {*calls=6,*,rejected_calls=0,failed_calls=2*} [cmdstat hello]\n        assert_error {*Auth denied by Misc Module*} {r HELLO 3 AUTH foo deny}\n        assert_match {*calls=7,*,rejected_calls=0,failed_calls=3*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 3 AUTH foo nomatch}\n        assert_match {*calls=8,*,rejected_calls=0,failed_calls=4*} [cmdstat hello]\n\n        # Validate that the Module added an ACL Log entry.\n        set entry [lindex [r ACL LOG] 1]\n        assert {[dict get $entry username] eq {foo}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry reason] eq {auth}}\n        assert {[dict get $entry object] eq {Module 
Auth}}\n        assert_match {*cmd=hello*} [dict get $entry client-info]\n        r ACL LOG RESET\n    }\n\n    test {test non blocking module HELLO AUTH SETNAME} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        # Validate clientname is set on success\n        assert_equal $hello2_response [r HELLO 2 AUTH foo pwd setname client1]\n        assert {[r client getname] eq {client1}}\n        assert_equal $hello2_response [r HELLO 2 AUTH foo allow setname client2]\n        assert {[r client getname] eq {client2}}\n        # Validate clientname is not updated on failure\n        r client setname client0\n        assert_error {*Auth denied by Misc Module*} {r HELLO 2 AUTH foo deny setname client1}\n        assert {[r client getname] eq {client0}}\n        assert_match {*calls=3,*,rejected_calls=0,failed_calls=1*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 2 AUTH foo nomatch setname client2}\n        assert {[r client getname] eq {client0}}\n        assert_match {*calls=4,*,rejected_calls=0,failed_calls=2*} [cmdstat hello]\n    }\n\n    test {test blocking module AUTH} {\n        r config resetstat\n        # Test for a fixed password user\n        r acl setuser foo >pwd on ~* &* +@all\n        assert_equal {OK} [r AUTH foo block_allow]\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo block_deny}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n        assert_error {*WRONGPASS*} {r AUTH foo nomatch}\n        assert_match {*calls=3,*,rejected_calls=0,failed_calls=2*} [cmdstat auth]\n        assert_equal {OK} [r AUTH foo pwd]\n        # Test for No Pass user\n        r acl setuser foo on ~* &* +@all nopass\n        assert_equal {OK} [r AUTH foo block_allow]\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo block_deny}\n        assert_match {*calls=6,*,rejected_calls=0,failed_calls=3*} [cmdstat auth]\n        assert_equal {OK} [r AUTH foo nomatch]\n        
# Validate that every Blocking AUTH command took at least 500000 usec.\n        set stats [cmdstat auth]\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert {$usec_per_call >= 500000}\n\n        # Validate that the Module added an ACL Log entry.\n        set entry [lindex [r ACL LOG] 0]\n        assert {[dict get $entry username] eq {foo}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry reason] eq {auth}}\n        assert {[dict get $entry object] eq {Module Auth}}\n        assert_match {*cmd=auth*} [dict get $entry client-info]\n        r ACL LOG RESET\n    }\n\n    test {test blocking module HELLO AUTH} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        # validate proto 2 and 3 in case of success\n        assert_equal $hello2_response [r HELLO 2 AUTH foo pwd]\n        assert_equal $hello2_response [r HELLO 2 AUTH foo block_allow]\n        assert_equal $hello3_response [r HELLO 3 AUTH foo pwd]\n        assert_equal $hello3_response [r HELLO 3 AUTH foo block_allow]\n        # validate denying AUTH for the HELLO cmd\n        assert_error {*Auth denied by Misc Module*} {r HELLO 2 AUTH foo block_deny}\n        assert_match {*calls=5,*,rejected_calls=0,failed_calls=1*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 2 AUTH foo nomatch}\n        assert_match {*calls=6,*,rejected_calls=0,failed_calls=2*} [cmdstat hello]\n        assert_error {*Auth denied by Misc Module*} {r HELLO 3 AUTH foo block_deny}\n        assert_match {*calls=7,*,rejected_calls=0,failed_calls=3*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 3 AUTH foo nomatch}\n        assert_match {*calls=8,*,rejected_calls=0,failed_calls=4*} [cmdstat hello]\n        # Validate that every HELLO AUTH command took at least 500000 usec.\n        set stats [cmdstat hello]\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert 
{$usec_per_call >= 500000}\n\n        # Validate that the Module added an ACL Log entry.\n        set entry [lindex [r ACL LOG] 1]\n        assert {[dict get $entry username] eq {foo}}\n        assert {[dict get $entry context] eq {module}}\n        assert {[dict get $entry reason] eq {auth}}\n        assert {[dict get $entry object] eq {Module Auth}}\n        assert_match {*cmd=hello*} [dict get $entry client-info]\n        r ACL LOG RESET\n    }\n\n    test {test blocking module HELLO AUTH SETNAME} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        # Validate clientname is set on success\n        assert_equal $hello2_response [r HELLO 2 AUTH foo pwd setname client1]\n        assert {[r client getname] eq {client1}}\n        assert_equal $hello2_response [r HELLO 2 AUTH foo block_allow setname client2]\n        assert {[r client getname] eq {client2}}\n        # Validate clientname is not updated on failure\n        r client setname client0\n        assert_error {*Auth denied by Misc Module*} {r HELLO 2 AUTH foo block_deny setname client1}\n        assert {[r client getname] eq {client0}}\n        assert_match {*calls=3,*,rejected_calls=0,failed_calls=1*} [cmdstat hello]\n        assert_error {*WRONGPASS*} {r HELLO 2 AUTH foo nomatch setname client2}\n        assert {[r client getname] eq {client0}}\n        assert_match {*calls=4,*,rejected_calls=0,failed_calls=2*} [cmdstat hello]\n        # Validate that every HELLO AUTH SETNAME command took at least 500000 usec.\n        set stats [cmdstat hello]\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert {$usec_per_call >= 500000}\n    }\n\n    test {test AUTH after registering multiple module auth callbacks} {\n        r config resetstat\n\n        # Register two more callbacks from the same module.\n        assert_equal {OK} [r testmoduleone.rm_register_blocking_auth_cb]\n        assert_equal {OK} [r testmoduleone.rm_register_auth_cb]\n\n   
     # Register another module auth callback from the second module.\n        assert_equal {OK} [r testmoduletwo.rm_register_auth_cb]\n\n        r acl setuser foo >pwd on ~* &* +@all\n\n        # Case 1 - Non Blocking Success\n        assert_equal {OK} [r AUTH foo allow]\n\n        # Case 2 - Non Blocking Deny\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo deny}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n\n        r config resetstat\n\n        # Case 3 - Blocking Success\n        assert_equal {OK} [r AUTH foo block_allow]\n\n        # Case 4 - Blocking Deny\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo block_deny}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n\n        # Validate that every Blocking AUTH command took at least 500000 usec.\n        set stats [cmdstat auth]\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert {$usec_per_call >= 500000}\n\n        r config resetstat\n\n        # Case 5 - Non Blocking Success via the second module.\n        assert_equal {OK} [r AUTH foo allow_two]\n\n        # Case 6 - Non Blocking Deny via the second module.\n        assert_error {*Auth denied by Misc Module*} {r AUTH foo deny_two}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n\n        r config resetstat\n\n        # Case 7 - All four auth callbacks \"Skip\" by not explicitly allowing or denying.\n        assert_error {*WRONGPASS*} {r AUTH foo nomatch}\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n        assert_equal {OK} [r AUTH foo pwd]\n\n        # Because we had to attempt all 4 callbacks, validate that the AUTH command took at least\n        # 1000000 usec (each blocking callback takes 500000 usec).\n        set stats [cmdstat auth]\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert 
{$usec_per_call >= 1000000}\n    }\n\n    test {module auth during blocking module auth} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        set rd [redis_deferring_client]\n        set rd_two [redis_deferring_client]\n\n        # Attempt blocking module auth. While this is ongoing, attempt non blocking module auth from\n        # moduleone/moduletwo and start another blocking module auth from another deferring client.\n        $rd AUTH foo block_allow\n        wait_for_blocked_clients_count 1\n        assert_equal {OK} [r AUTH foo allow]\n        assert_equal {OK} [r AUTH foo allow_two]\n        # Validate that the non blocking module auth cmds finished before any blocking module auth.\n        set info_clients [r info clients]\n        assert_match \"*blocked_clients:1*\" $info_clients\n        $rd_two AUTH foo block_allow\n\n        # Validate that all of the AUTH commands succeeded.\n        wait_for_blocked_clients_count 0 500 10\n        $rd flush\n        assert_equal [$rd read] \"OK\"\n        $rd_two flush\n        assert_equal [$rd_two read] \"OK\"\n        assert_match {*calls=4,*,rejected_calls=0,failed_calls=0*} [cmdstat auth]\n    }\n\n    test {module auth inside MULTI EXEC} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n\n        # Validate that non blocking module auth inside MULTI succeeds.\n        r multi\n        r AUTH foo allow\n        assert_equal {OK} [r exec]\n\n        # Validate that blocking module auth inside MULTI throws an err.\n        r multi\n        r AUTH foo block_allow\n        assert_error {*ERR Blocking module command called from transaction*} {r exec}\n        assert_match {*calls=2,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n    }\n\n    test {Disabling Redis User during blocking module auth} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        set rd [redis_deferring_client]\n\n        # Attempt blocking module auth and 
disable the Redis user while module auth is in progress.\n        $rd AUTH foo pwd\n        wait_for_blocked_clients_count 1\n        r acl setuser foo >pwd off ~* &* +@all\n\n        # Validate that module auth failed.\n        wait_for_blocked_clients_count 0 500 10\n        $rd flush\n        assert_error {*WRONGPASS*} { $rd read }\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n    }\n\n    test {Killing a client in the middle of blocking module auth} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n        set rd [redis_deferring_client]\n        $rd client id\n        set cid [$rd read]\n\n        # Attempt blocking module auth command on client `cid` and kill the client while module auth\n        # is in progress.\n        $rd AUTH foo pwd\n        wait_for_blocked_clients_count 1\n        r client kill id $cid\n\n        # Validate that the blocked client count goes to 0 and no AUTH command is tracked.\n        wait_for_blocked_clients_count 0 500 10\n        $rd flush\n        assert_error {*I/O error reading reply*} { $rd read }\n        assert_match {} [cmdstat auth]\n    }\n\n    test {test RM_AbortBlock Module API during blocking module auth} {\n        r config resetstat\n        r acl setuser foo >pwd on ~* &* +@all\n\n        # Attempt module auth. With the \"block_abort\" as the password, the \"testacl.so\" module\n        # blocks the client and uses the RM_AbortBlock API. 
This should result in module auth\n        # failing and the client being unblocked with the default AUTH err message.\n        assert_error {*WRONGPASS*} {r AUTH foo block_abort}\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdstat auth]\n    }\n\n    test {test RM_RegisterAuthCallback Module API during blocking module auth} {\n        r config resetstat\n        r acl setuser foo >defaultpwd on ~* &* +@all\n        set rd [redis_deferring_client]\n\n        # Start the module auth attempt with the standard Redis auth password for the user. This\n        # will result in all module auth cbs attempted and then standard Redis auth will be tried.\n        $rd AUTH foo defaultpwd\n        wait_for_blocked_clients_count 1\n\n        # Validate that we allow modules to register module auth cbs while module auth is already\n        # in progress.\n        assert_equal {OK} [r testmoduleone.rm_register_blocking_auth_cb]\n        assert_equal {OK} [r testmoduletwo.rm_register_auth_cb]\n\n        # Validate that blocking module auth succeeds.\n        wait_for_blocked_clients_count 0 500 10\n        $rd flush\n        assert_equal [$rd read] \"OK\"\n        set stats [cmdstat auth]\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} $stats\n\n        # Validate that even the new blocking module auth cb which was registered in the middle of\n        # blocking module auth is attempted - making it take twice the duration (2x 500000 us).\n        regexp \"usec_per_call=(\\[0-9]{1,})\\.*,\" $stats all usec_per_call\n        assert {$usec_per_call >= 1000000}\n    }\n\n    test {Module unload during blocking module auth} {\n        r config resetstat\n        r module load $miscmodule\n        set rd [redis_deferring_client]\n        r acl setuser foo >pwd on ~* &* +@all\n\n        # Start a blocking module auth attempt.\n        $rd AUTH foo block_allow\n        wait_for_blocked_clients_count 1\n\n        # moduleone and moduletwo have 
module auth cbs registered. Because blocking module auth is\n        # ongoing, they cannot be unloaded.\n        catch {r module unload testacl} e\n        assert_match {*the module has blocked clients*} $e\n        # The moduleauthtwo module can be unloaded because no client is blocked on it.\n        assert_equal \"OK\" [r module unload moduleauthtwo]\n\n        # The misc module does not have module auth cbs registered, so it can be unloaded even when\n        # blocking module auth is ongoing.\n        assert_equal \"OK\" [r module unload misc]\n\n        # Validate that blocking module auth succeeds.\n        wait_for_blocked_clients_count 0 500 10\n        $rd flush\n        assert_equal [$rd read] \"OK\"\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdstat auth]\n\n        # Validate that unloading the moduleauthtwo module does not unregister module auth cbs\n        # of the testacl module. Module based auth should succeed.\n        assert_equal {OK} [r AUTH foo allow]\n\n        # Validate that the testacl module can be unloaded since blocking module auth is done.\n        r module unload testacl\n\n        # Validate that since all module auth cbs are unregistered, module auth attempts fail.\n        assert_error {*WRONGPASS*} {r AUTH foo block_allow}\n        assert_error {*WRONGPASS*} {r AUTH foo allow_two}\n        assert_error {*WRONGPASS*} {r AUTH foo allow}\n        assert_match {*calls=5,*,rejected_calls=0,failed_calls=3*} [cmdstat auth]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/moduleconfigs.tcl",
    "content": "set testmodule [file normalize tests/modules/moduleconfigs.so]\nset testmoduletwo [file normalize tests/modules/moduleconfigstwo.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    test {Config get commands work} {\n        # Make sure config get module config works\n        assert_not_equal [lsearch [lmap x [r module list] {dict get $x name}] moduleconfigs] -1\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool yes\"\n        assert_equal [r config get moduleconfigs.immutable_bool] \"moduleconfigs.immutable_bool no\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1024\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {secret password}\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum one\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {one two}\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n        \n        # Check un-prefixed and aliased configuration\n        assert_equal [r config get unprefix-bool] \"unprefix-bool yes\"\n        assert_equal [r config get unprefix-noalias-bool] \"unprefix-noalias-bool yes\"\n        assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias yes\"\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric -1\"\n        assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias -1\"\n        assert_equal [r config get unprefix-string] \"unprefix-string {secret unprefix}\"        \n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias {secret unprefix}\"\n        assert_equal [r config get unprefix-enum] \"unprefix-enum one\"\n        assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias one\"\n    }\n\n    test {Config set commands work} {\n        # Make sure 
that config sets work during runtime\n        r config set moduleconfigs.mutable_bool no\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"\n        r config set moduleconfigs.memory_numeric 1mb\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1048576\"\n        r config set moduleconfigs.string wafflewednesdays\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string wafflewednesdays\"\n        set not_embstr [string repeat A 50]\n        r config set moduleconfigs.string $not_embstr\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string $not_embstr\"\n        r config set moduleconfigs.string \\x73\\x75\\x70\\x65\\x72\\x20\\x00\\x73\\x65\\x63\\x72\\x65\\x74\\x20\\x70\\x61\\x73\\x73\\x77\\x6f\\x72\\x64\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {super \\0secret password}\"\n        r config set moduleconfigs.enum two\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum two\"\n        r config set moduleconfigs.flags two\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags two\"\n        r config set moduleconfigs.numeric -2\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -2\"\n        \n        # Check un-prefixed and aliased configuration\n        r config set unprefix-bool no\n        assert_equal [r config get unprefix-bool] \"unprefix-bool no\"\n        assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias no\"\n        r config set unprefix-bool-alias yes\n        assert_equal [r config get unprefix-bool] \"unprefix-bool yes\"\n        assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias yes\"\n        r config set unprefix.numeric 5\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric 5\"\n        assert_equal [r config get 
unprefix.numeric-alias] \"unprefix.numeric-alias 5\"\n        r config set unprefix.numeric-alias 6\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric 6\"\n        r config set unprefix.string-alias \"blabla\"\n        assert_equal [r config get unprefix-string] \"unprefix-string blabla\"\n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias blabla\"\n        r config set unprefix-enum two\n        assert_equal [r config get unprefix-enum] \"unprefix-enum two\"\n        assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias two\"\n    }\n\n    test {Config set commands enum flags} {\n        r config set moduleconfigs.flags \"none\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags none\"\n\n        r config set moduleconfigs.flags \"two four\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {two four}\"\n\n        r config set moduleconfigs.flags \"five\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags five\"\n\n        r config set moduleconfigs.flags \"one four\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags five\"\n\n        r config set moduleconfigs.flags \"one two four\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {five two}\"\n    }\n\n    test {Immutable flag works properly and rejected strings don't leak} {\n        # Configs flagged immutable should not allow sets\n        catch {[r config set moduleconfigs.immutable_bool yes]} e\n        assert_match {*can't set immutable config*} $e\n        catch {[r config set moduleconfigs.string rejectisfreed]} e\n        assert_match {*Cannot set string to 'rejectisfreed'*} $e\n    }\n    \n    test {Numeric limits work properly} {\n        # Configs over/under the limit shouldn't be allowed, and memory configs should only take memory values\n        catch {[r config set 
moduleconfigs.memory_numeric 200gb]} e\n        assert_match {*argument must be between*} $e\n        catch {[r config set moduleconfigs.memory_numeric -5]} e\n        assert_match {*argument must be a memory value*} $e\n        catch {[r config set moduleconfigs.numeric -10]} e\n        assert_match {*argument must be between*} $e\n    }\n\n    test {Enums only able to be set to passed in values} {\n        # Module authors specify what values are valid for enums, check that only those values are ok on a set\n        catch {[r config set moduleconfigs.enum asdf]} e\n        assert_match {*must be one of the following*} $e\n    }\n\n    test {test blocking of config registration and load outside of OnLoad} {\n        assert_equal [r block.register.configs.outside.onload] OK\n    }\n\n    test {Unload removes module configs} {\n        r module unload moduleconfigs\n        assert_equal [r config get moduleconfigs.*] \"\"\n        r module load $testmodule\n        # these should have reverted back to their module specified values\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool yes\"\n        assert_equal [r config get moduleconfigs.immutable_bool] \"moduleconfigs.immutable_bool no\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1024\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {secret password}\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum one\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {one two}\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n        \n        # Check un-prefixed and aliased configuration\n        assert_equal [r config get unprefix-bool] \"unprefix-bool yes\"\n        assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias yes\"\n        assert_equal [r config get unprefix.numeric] 
\"unprefix.numeric -1\"\n        assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias -1\"\n        assert_equal [r config get unprefix-string] \"unprefix-string {secret unprefix}\"\n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias {secret unprefix}\"\n        assert_equal [r config get unprefix-enum] \"unprefix-enum one\"\n        assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias one\"\n                \n                \n        r module unload moduleconfigs\n    }\n\n    test {test loadex functionality} {\n        r module loadex $testmodule CONFIG moduleconfigs.mutable_bool no \\\n                                    CONFIG moduleconfigs.immutable_bool yes \\\n                                    CONFIG moduleconfigs.memory_numeric 2mb \\\n                                    CONFIG moduleconfigs.string tclortickle \\\n                                    CONFIG unprefix-bool no \\\n                                    CONFIG unprefix.numeric-alias 123 \\\n                                    CONFIG unprefix-string abc_def \\\n                                    \n        assert_not_equal [lsearch [lmap x [r module list] {dict get $x name}] moduleconfigs] -1\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"\n        assert_equal [r config get moduleconfigs.immutable_bool] \"moduleconfigs.immutable_bool yes\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 2097152\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string tclortickle\"\n        # Configs that were not changed should still be their module specified value\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum one\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {one two}\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric 
-1\"\n        \n        # Check un-prefixed and aliased configuration\n        assert_equal [r config get unprefix-bool] \"unprefix-bool no\"\n        assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias no\"\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric 123\"\n        assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias 123\"\n        assert_equal [r config get unprefix-string] \"unprefix-string abc_def\"\n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias abc_def\"\n        assert_equal [r config get unprefix-enum] \"unprefix-enum one\"\n        assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias one\"\n        \n\n    }\n\n    test {apply function works} {\n        catch {[r config set moduleconfigs.mutable_bool yes]} e\n        assert_match {*Bool configs*} $e\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"\n        catch {[r config set moduleconfigs.memory_numeric 1000 moduleconfigs.numeric 1000]} e\n        assert_match {*cannot equal*} $e\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 2097152\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n        r module unload moduleconfigs\n    }\n\n    test {test double config argument to loadex} {\n        r module loadex $testmodule CONFIG moduleconfigs.mutable_bool yes \\\n                                    CONFIG moduleconfigs.mutable_bool no \\\n                                    CONFIG unprefix.numeric-alias 1 \\\n                                    CONFIG unprefix.numeric-alias 2 \\\n                                    CONFIG unprefix-string blabla\n                                    \n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"        \n        # Check un-prefixed and aliased configuration\n        
assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias 2\"\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric 2\"\n        assert_equal [r config get unprefix-string] \"unprefix-string blabla\"\n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias blabla\"\n        r module unload moduleconfigs        \n    }\n\n    test {missing loadconfigs call} {\n        catch {[r module loadex $testmodule CONFIG moduleconfigs.string \"cool\" ARGS noload]} e\n        assert_match {*ERR*} $e\n    }\n\n    test {test loadex rejects bad configs} {\n        # Bad config 200gb is over the limit\n        catch {[r module loadex $testmodule CONFIG moduleconfigs.memory_numeric 200gb ARGS]} e\n        assert_match {*ERR*} $e\n        # We should completely remove all configs on a failed load\n        assert_equal [r config get moduleconfigs.*] \"\"\n        # No value for config, should error out\n        catch {[r module loadex $testmodule CONFIG moduleconfigs.mutable_bool CONFIG moduleconfigs.enum two ARGS]} e\n        assert_match {*ERR*} $e\n        assert_equal [r config get moduleconfigs.*] \"\"\n        # Asan will catch this if this string is not freed\n        catch {[r module loadex $testmodule CONFIG moduleconfigs.string rejectisfreed]} e\n        assert_match {*ERR*} $e\n        assert_equal [r config get moduleconfigs.*] \"\"\n        # test we can't set random configs\n        catch {[r module loadex $testmodule CONFIG maxclients 333]} e\n        assert_match {*ERR*} $e\n        assert_equal [r config get moduleconfigs.*] \"\"\n        assert_not_equal [r config get maxclients] \"maxclients 333\"\n        # test we can't set other module's configs\n        r module load $testmoduletwo\n        catch {[r module loadex $testmodule CONFIG configs.test no]} e\n        assert_match {*ERR*} $e\n        assert_equal [r config get configs.test] \"configs.test yes\"\n        r module unload configs\n        # Verify 
that using a config name and its alias together fails\n        catch {[r module loadex $testmodule CONFIG unprefix.numeric 1 CONFIG unprefix.numeric-alias 1]} e\n        assert_match {*ERR*} $e\n    }\n\n    test {test config rewrite with dynamic load} {\n        # translates to: super \\0secret password\n        r module loadex $testmodule CONFIG moduleconfigs.string \\x73\\x75\\x70\\x65\\x72\\x20\\x00\\x73\\x65\\x63\\x72\\x65\\x74\\x20\\x70\\x61\\x73\\x73\\x77\\x6f\\x72\\x64 ARGS\n        assert_not_equal [lsearch [lmap x [r module list] {dict get $x name}] moduleconfigs] -1\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {super \\0secret password}\"\n        r config set moduleconfigs.mutable_bool yes\n        r config set moduleconfigs.memory_numeric 750\n        r config set moduleconfigs.enum two\n        r config set moduleconfigs.flags \"four two\"\n        r config set unprefix-bool-alias no\n        r config set unprefix.numeric 456\n        r config set unprefix.string-alias \"unprefix\"\n        r config set unprefix-enum two\n        r config rewrite\n        restart_server 0 true false\n        # Ensure configs we rewrote are present and that the conf file is readable\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool yes\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 750\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {super \\0secret password}\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum two\"\n        assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {two four}\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n        \n        # Check unprefixed configuration and alias\n        assert_equal [r config get 
unprefix-bool-alias] \"unprefix-bool-alias no\"\n        assert_equal [r config get unprefix.numeric] \"unprefix.numeric 456\"\n        assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias 456\"\n        assert_equal [r config get unprefix-string] \"unprefix-string unprefix\"\n        assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias unprefix\"\n        assert_equal [r config get unprefix-enum] \"unprefix-enum two\"\n        assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias two\"\n        \n        r module unload moduleconfigs\n    }\n\n    test {test multiple modules with configs} {\n        r module load $testmodule\n        r module loadex $testmoduletwo CONFIG configs.test yes\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool yes\"\n        assert_equal [r config get moduleconfigs.immutable_bool] \"moduleconfigs.immutable_bool no\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1024\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string {secret password}\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum one\"\n        assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n        assert_equal [r config get configs.test] \"configs.test yes\"\n        r config set moduleconfigs.mutable_bool no\n        r config set moduleconfigs.string nice\n        r config set moduleconfigs.enum two\n        r config set configs.test no\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string nice\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum two\"\n        assert_equal [r config get configs.test] \"configs.test no\"\n        r config rewrite\n        # test we can load from conf file 
with multiple different modules.\n        restart_server 0 true false\n        assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool no\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string nice\"\n        assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum two\"\n        assert_equal [r config get configs.test] \"configs.test no\"\n        r module unload moduleconfigs\n        r module unload configs\n    }\n\n    test {test 1.module load 2.config rewrite 3.module unload 4.config rewrite works} {\n        # Configs need to be removed from the old config file in this case.\n        r module loadex $testmodule CONFIG moduleconfigs.memory_numeric 500 ARGS\n        assert_not_equal [lsearch [lmap x [r module list] {dict get $x name}] moduleconfigs] -1\n        r config rewrite\n        r module unload moduleconfigs\n        r config rewrite\n        restart_server 0 true false\n        # Ensure configs we rewrote are no longer present\n        assert_equal [r config get moduleconfigs.*] \"\"\n    }\n    test {startup moduleconfigs} {\n        # No loadmodule directive\n        catch {exec src/redis-server --moduleconfigs.string \"hello\"} err\n        assert_match {*Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting*} $err\n\n        # Bad config value\n        catch {exec src/redis-server --loadmodule \"$testmodule\" --moduleconfigs.string \"rejectisfreed\"} err\n        assert_match {*Issue during loading of configuration moduleconfigs.string : Cannot set string to 'rejectisfreed'*} $err\n\n        # missing LoadConfigs call\n        catch {exec src/redis-server --loadmodule \"$testmodule\" noload --moduleconfigs.string \"hello\"} err\n        assert_match {*Module Configurations were not set, missing LoadConfigs call. 
Unloading the module.*} $err\n\n        # successful\n        start_server [list overrides [list loadmodule \"$testmodule\" moduleconfigs.string \"bootedup\" moduleconfigs.enum two moduleconfigs.flags \"two four\"] tags {\"external:skip\"}] {\n            assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string bootedup\"\n            assert_equal [r config get moduleconfigs.mutable_bool] \"moduleconfigs.mutable_bool yes\"\n            assert_equal [r config get moduleconfigs.immutable_bool] \"moduleconfigs.immutable_bool no\"\n            assert_equal [r config get moduleconfigs.enum] \"moduleconfigs.enum two\"\n            assert_equal [r config get moduleconfigs.flags] \"moduleconfigs.flags {two four}\"\n            assert_equal [r config get moduleconfigs.numeric] \"moduleconfigs.numeric -1\"\n            assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1024\"\n            \n            # Check un-prefixed and aliased configuration\n            assert_equal [r config get unprefix-bool] \"unprefix-bool yes\"\n            assert_equal [r config get unprefix-bool-alias] \"unprefix-bool-alias yes\"\n            assert_equal [r config get unprefix.numeric] \"unprefix.numeric -1\"\n            assert_equal [r config get unprefix.numeric-alias] \"unprefix.numeric-alias -1\"\n            assert_equal [r config get unprefix-string] \"unprefix-string {secret unprefix}\"\n            assert_equal [r config get unprefix.string-alias] \"unprefix.string-alias {secret unprefix}\"\n            assert_equal [r config get unprefix-enum] \"unprefix-enum one\"\n            assert_equal [r config get unprefix-enum-alias] \"unprefix-enum-alias one\"\n        }\n    }\n\n    test {loadmodule CONFIG values take precedence over module loadex ARGS values} {\n        # Load module with conflicting CONFIG and ARGS values\n        r module loadex $testmodule \\\n            CONFIG moduleconfigs.string goo \\\n            CONFIG 
moduleconfigs.memory_numeric 2mb \\\n            ARGS override-default\n\n        # Verify CONFIG values took precedence over the values that override-default would have caused the module to set\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string goo\"\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 2097152\"\n\n        r module unload moduleconfigs\n    }\n\n    # Test: Ensure that modified configuration values from ARGS are correctly written to the config file\n    test {Modified ARGS values are persisted after config rewrite when set through CONFIG commands} {\n        # Load module with non-default ARGS values\n        r module loadex $testmodule ARGS override-default\n\n        # Verify the initial values were overwritten\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 123\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string foo\"\n\n        # Set new values to simulate user configuration changes\n        r config set moduleconfigs.memory_numeric 1mb\n        r config set moduleconfigs.string \"modified_value\"\n\n        # Verify that the changes took effect\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1048576\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string modified_value\"\n\n        # Perform a config rewrite\n        r config rewrite\n\n        restart_server 0 true false\n        assert_equal [r config get moduleconfigs.memory_numeric] \"moduleconfigs.memory_numeric 1048576\"\n        assert_equal [r config get moduleconfigs.string] \"moduleconfigs.string modified_value\"\n        r module unload moduleconfigs\n    }\n}\n\n"
  },
  {
    "path": "tests/unit/moduleapi/postnotifications.tcl",
    "content": "set testmodule [file normalize tests/modules/postnotifications.so]\n\ntags \"modules external:skip\" {\n    start_server {} {\n        r module load $testmodule with_key_events\n\n        test {Test write on post notification callback} {\n            set repl [attach_to_replication_stream]\n\n            r set string_x 1\n            assert_equal {1} [r get string_changed{string_x}]\n            assert_equal {1} [r get string_total]\n\n            r set string_x 2\n            assert_equal {2} [r get string_changed{string_x}]\n            assert_equal {2} [r get string_total]\n\n            # the {lpush before_overwritten string_x} is a post notification job registered when 'string_x' was overwritten\n            assert_replication_stream $repl {\n                {multi}\n                {select *}\n                {set string_x 1}\n                {incr string_changed{string_x}}\n                {incr string_total}\n                {exec}\n                {multi}\n                {set string_x 2}\n                {lpush before_overwritten string_x}\n                {incr string_changed{string_x}}\n                {incr string_total}\n                {exec}\n            }\n            close_replication_stream $repl\n        }\n\n        test {Test write on post notification callback from module thread} {\n            r flushall\n            set repl [attach_to_replication_stream]\n\n            assert_equal {OK} [r postnotification.async_set]\n            assert_equal {1} [r get string_changed{string_x}]\n            assert_equal {1} [r get string_total]\n\n            assert_replication_stream $repl {\n                {multi}\n                {select *}\n                {set string_x 1}\n                {incr string_changed{string_x}}\n                {incr string_total}\n                {exec}\n            }\n            close_replication_stream $repl\n        }\n\n        test {Test active expire} {\n            r flushall\n            set repl 
[attach_to_replication_stream]\n\n            r set x 1\n            r pexpire x 10\n\n            wait_for_condition 100 50 {\n                [r keys expired] == {expired}\n            } else {\n                puts [r keys *]\n                fail \"Failed waiting for x to expire\"\n            }\n\n            # the {lpush before_expired x} is a post notification job registered before 'x' got expired\n            assert_replication_stream $repl {\n                {select *}\n                {set x 1}\n                {pexpireat x *}\n                {multi}\n                {del x}\n                {lpush before_expired x}\n                {incr expired}\n                {exec}\n            }\n            close_replication_stream $repl\n        }\n\n        test {Test lazy expire} {\n            r flushall\n            r DEBUG SET-ACTIVE-EXPIRE 0\n            set repl [attach_to_replication_stream]\n\n            r set x 1\n            r pexpire x 1\n            after 10\n            assert_equal {} [r get x]\n\n            # the {lpush before_expired x} is a post notification job registered before 'x' got expired\n            assert_replication_stream $repl {\n                {select *}\n                {set x 1}\n                {pexpireat x *}\n                {multi}\n                {del x}\n                {lpush before_expired x}\n                {incr expired}\n                {exec}\n            }\n            close_replication_stream $repl\n            r DEBUG SET-ACTIVE-EXPIRE 1\n        } {OK} {needs:debug}\n\n        test {Test lazy expire inside post job notification} {\n            r flushall\n            r DEBUG SET-ACTIVE-EXPIRE 0\n            set repl [attach_to_replication_stream]\n\n            r set x 1\n            r pexpire x 1\n            after 10\n            assert_equal {OK} [r set read_x 1]\n\n            # the {lpush before_expired x} is a post notification job registered before 'x' got expired\n            
assert_replication_stream $repl {\n                {select *}\n                {set x 1}\n                {pexpireat x *}\n                {multi}\n                {set read_x 1}\n                {del x}\n                {lpush before_expired x}\n                {incr expired}\n                {exec}\n            }\n            close_replication_stream $repl\n            r DEBUG SET-ACTIVE-EXPIRE 1\n        } {OK} {needs:debug}\n\n        test {Test nested keyspace notification} {\n            r flushall\n            set repl [attach_to_replication_stream]\n\n            assert_equal {OK} [r set write_sync_write_sync_x 1]\n\n            assert_replication_stream $repl {\n                {multi}\n                {select *}\n                {set x 1}\n                {set write_sync_x 1}\n                {set write_sync_write_sync_x 1}\n                {exec}\n            }\n            close_replication_stream $repl\n        }\n\n        test {Test eviction} {\n            r flushall\n            set repl [attach_to_replication_stream]\n            r set x 1\n            r config set maxmemory-policy allkeys-random\n            r config set maxmemory 1\n\n            assert_error {OOM *} {r set y 1}\n\n            # the {lpush before_evicted x} is a post notification job registered before 'x' got evicted\n            assert_replication_stream $repl {\n                {select *}\n                {set x 1}\n                {multi}\n                {del x}\n                {lpush before_evicted x}\n                {incr evicted}\n                {exec}\n            }\n            close_replication_stream $repl\n        } {} {needs:config-maxmemory}\n    }\n}\n\nset testmodule2 [file normalize tests/modules/keyspace_events.so]\n\ntags \"modules external:skip\" {\n    start_server {} {\n        r module load $testmodule with_key_events\n        r module load $testmodule2\n        test {Test write on post notification callback} {\n            set repl 
[attach_to_replication_stream]\n\n            r set string_x 1\n            assert_equal {1} [r get string_changed{string_x}]\n            assert_equal {1} [r get string_total]\n\n            r set string_x 2\n            assert_equal {2} [r get string_changed{string_x}]\n            assert_equal {2} [r get string_total]\n\n            r set string1_x 1\n            assert_equal {1} [r get string_changed{string1_x}]\n            assert_equal {3} [r get string_total]\n\n            # the {lpush before_overwritten string_x} is a post notification job registered before 'string_x' got overwritten\n            assert_replication_stream $repl {\n                {multi}\n                {select *}\n                {set string_x 1}\n                {incr string_changed{string_x}}\n                {incr string_total}\n                {exec}\n                {multi}\n                {set string_x 2}\n                {lpush before_overwritten string_x}\n                {incr string_changed{string_x}}\n                {incr string_total}\n                {exec}\n                {multi}\n                {set string1_x 1}\n                {incr string_changed{string1_x}}\n                {incr string_total}\n                {exec}\n            }\n            close_replication_stream $repl\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/propagate.tcl",
    "content": "set testmodule [file normalize tests/modules/propagate.so]\nset miscmodule [file normalize tests/modules/misc.so]\nset keyspace_events [file normalize tests/modules/keyspace_events.so]\n\ntags \"modules external:skip\" {\n    test {Modules can propagate in async and threaded contexts} {\n        start_server [list overrides [list loadmodule \"$testmodule\"]] {\n            set replica [srv 0 client]\n            set replica_host [srv 0 host]\n            set replica_port [srv 0 port]\n            $replica module load $keyspace_events\n            start_server [list overrides [list loadmodule \"$testmodule\"]] {\n                set master [srv 0 client]\n                set master_host [srv 0 host]\n                set master_port [srv 0 port]\n                $master module load $keyspace_events\n\n                # Start the replication process...\n                $replica replicaof $master_host $master_port\n                wait_for_sync $replica\n                after 1000\n\n                test {module propagates from timer} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.timer\n\n                    wait_for_condition 500 10 {\n                        [$replica get timer] eq \"3\"\n                    } else {\n                        fail \"The two counters don't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {select *}\n                        {incr timer}\n                        {incr timer}\n                        {incr timer}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagation with notifications} {\n                    set repl [attach_to_replication_stream]\n\n                    $master set x y\n\n                    assert_replication_stream $repl {\n                        {multi}\n                      
  {select *}\n                        {incr notifications}\n                        {set x y}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagation with notifications with multi} {\n                    set repl [attach_to_replication_stream]\n\n                    $master multi\n                    $master set x1 y1\n                    $master set x2 y2\n                    $master exec\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr notifications}\n                        {set x1 y1}\n                        {incr notifications}\n                        {set x2 y2}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagation with notifications with active-expire} {\n                    $master debug set-active-expire 1\n                    set repl [attach_to_replication_stream]\n\n                    $master set asdf1 1 PX 300\n                    $master set asdf2 2 PX 300\n                    $master set asdf3 3 PX 300\n\n                    wait_for_condition 500 10 {\n                        [$replica keys asdf*] eq {}\n                    } else {\n                        fail \"Not all keys have expired\"\n                    }\n\n                    # Note: whenever there's a double notification, it's because SET with PX issues two\n                    # separate notifications: one for \"set\" and one for \"expire\"\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr notifications}\n                        {incr notifications}\n                        {set asdf1 1 PXAT *}\n                        {exec}\n                        {multi}\n      
                  {incr notifications}\n                        {incr notifications}\n                        {set asdf2 2 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {set asdf3 3 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {incr testkeyspace:expired}\n                        {del asdf*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {incr testkeyspace:expired}\n                        {del asdf*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {incr testkeyspace:expired}\n                        {del asdf*}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    $master debug set-active-expire 0\n                }\n\n                test {module propagation with notifications with eviction case 1} {\n                    $master flushall\n                    $master set asdf1 1\n                    $master set asdf2 2\n                    $master set asdf3 3\n          \n                    $master config set maxmemory-policy allkeys-random\n                    $master config set maxmemory 1\n\n                    # Please note the following loop:\n                    # We evict a key and send a notification, which does INCR on the \"notifications\" key, so\n                    # that every time we evict any key, the \"notifications\" key exists (it happens inside the\n                    # performEvictions loop). 
So even evicting \"notifications\" causes INCR on \"notifications\".\n                    # If maxmemory-eviction-tenacity were set to 100 this would be an endless loop, but\n                    # since the default is 10, at some point the performEvictions loop would end.\n                    # Bottom line: \"notifications\" always exists and we can't really determine the order of evictions.\n                    # This test is here only for sanity.\n\n                    # The replica will get the notification with multi exec and we have a generic notification handler\n                    # that performs `RedisModule_Call(ctx, \"INCR\", \"c\", \"multi\");` if the notification is inside multi exec.\n                    # So we will have 2 keys, \"notifications\" and \"multi\".\n                    wait_for_condition 500 10 {\n                        [$replica dbsize] eq 2 \n                    } else {\n                        fail \"Not all keys have been evicted\"\n                    }\n\n                    $master config set maxmemory 0\n                    $master config set maxmemory-policy noeviction\n                }\n\n                test {module propagation with notifications with eviction case 2} {\n                    $master flushall\n                    set repl [attach_to_replication_stream]\n\n                    $master set asdf1 1 EX 300\n                    $master set asdf2 2 EX 300\n                    $master set asdf3 3 EX 300\n\n                    # Please note we use volatile eviction to prevent the loop described in the test above.\n                    # \"notifications\" is not volatile so it always remains\n                    $master config resetstat\n                    $master config set maxmemory-policy volatile-ttl\n                    $master config set maxmemory 1\n\n                    wait_for_condition 500 10 {\n                        [s evicted_keys] eq 3\n                    } else {\n                        fail 
\"Not all keys have been evicted\"\n                    }\n\n                    $master config set maxmemory 0\n                    $master config set maxmemory-policy noeviction\n\n                    $master set asdf4 4\n\n                    # Note: whenever there's a double notification, it's because SET with EX issues two\n                    # separate notifications: one for \"set\" and one for \"expire\"\n                    # Note that although CONFIG SET maxmemory is called in this flow (see issue #10014),\n                    # eviction will happen and will not induce propagation of the CONFIG command (see #10019).\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr notifications}\n                        {incr notifications}\n                        {set asdf1 1 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {set asdf2 2 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {set asdf3 3 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {del asdf*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {del asdf*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {del asdf*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {set asdf4 4}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n            
    }\n\n                test {module propagation with timer and CONFIG SET maxmemory} {\n                    set repl [attach_to_replication_stream]\n\n                    $master config resetstat\n                    $master config set maxmemory-policy volatile-random\n\n                    $master propagate-test.timer-maxmemory\n\n                    # Wait until the volatile keys are evicted\n                    wait_for_condition 500 10 {\n                        [s evicted_keys] eq 2\n                    } else {\n                        fail \"Not all keys have been evicted\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr notifications}\n                        {incr notifications}\n                        {set timer-maxmemory-volatile-start 1 PXAT *}\n                        {incr timer-maxmemory-middle}\n                        {incr notifications}\n                        {incr notifications}\n                        {set timer-maxmemory-volatile-end 1 PXAT *}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {del timer-maxmemory-volatile-*}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {del timer-maxmemory-volatile-*}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    $master config set maxmemory 0\n                    $master config set maxmemory-policy noeviction\n                }\n\n                test {module propagation with timer and EVAL} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.timer-eval\n\n                    assert_replication_stream $repl {\n                        {multi}\n                    
    {select *}\n                        {incr notifications}\n                        {incrby timer-eval-start 1}\n                        {incr notifications}\n                        {set foo bar}\n                        {incr timer-eval-middle}\n                        {incr notifications}\n                        {incrby timer-eval-end 1}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates nested ctx case1} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.timer-nested\n\n                    wait_for_condition 500 10 {\n                        [$replica get timer-nested-end] eq \"1\"\n                    } else {\n                        fail \"The two counters don't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incrby timer-nested-start 1}\n                        {incrby timer-nested-end 1}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    # Note propagate-test.timer-nested just propagates INCRBY, causing an\n                    # inconsistency, so we flush\n                    $master flushall\n                }\n\n                test {module propagates nested ctx case2} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.timer-nested-repl\n\n                    wait_for_condition 500 10 {\n                        [$replica get timer-nested-end] eq \"1\"\n                    } else {\n                        fail \"The two counters don't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                   
     {select *}\n                        {incrby timer-nested-start 1}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr counter-3}\n                        {incr counter-4}\n                        {incr notifications}\n                        {incr after-call}\n                        {incr notifications}\n                        {incr before-call-2}\n                        {incr notifications}\n                        {incr asdf}\n                        {incr notifications}\n                        {del asdf}\n                        {incr notifications}\n                        {incr after-call-2}\n                        {incr notifications}\n                        {incr timer-nested-middle}\n                        {incrby timer-nested-end 1}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    # Note propagate-test.timer-nested-repl just propagates INCRBY, causing an\n                    # inconsistency, so we flush\n                    $master flushall\n                }\n\n                test {module propagates from thread} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.thread\n\n                    wait_for_condition 500 10 {\n                        [$replica get a-from-thread] eq \"3\"\n                    } else {\n                        fail \"The two counters don't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr a-from-thread}\n                        {incr notifications}\n                        {incr thread-call}\n                        {incr b-from-thread}\n                        {exec}\n         
               {multi}\n                        {incr a-from-thread}\n                        {incr notifications}\n                        {incr thread-call}\n                        {incr b-from-thread}\n                        {exec}\n                        {multi}\n                        {incr a-from-thread}\n                        {incr notifications}\n                        {incr thread-call}\n                        {incr b-from-thread}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from thread with detached ctx} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.detached-thread\n\n                    wait_for_condition 500 10 {\n                        [$replica get thread-detached-after] eq \"1\"\n                    } else {\n                        fail \"The key doesn't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr thread-detached-before}\n                        {incr notifications}\n                        {incr thread-detached-1}\n                        {incr notifications}\n                        {incr thread-detached-2}\n                        {incr thread-detached-after}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from command} {\n                    set repl [attach_to_replication_stream]\n\n                    $master propagate-test.simple\n                    $master propagate-test.mixed\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr counter-1}\n                        
{incr counter-2}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr after-call}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from EVAL} {\n                    set repl [attach_to_replication_stream]\n\n                    assert_equal [ $master eval { \\\n                        redis.call(\"propagate-test.simple\"); \\\n                        redis.call(\"set\", \"x\", \"y\"); \\\n                        redis.call(\"propagate-test.mixed\"); return \"OK\" } 0 ] {OK}\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {set x y}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr after-call}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from command after good EVAL} {\n                    set repl [attach_to_replication_stream]\n\n                    assert_equal [ $master eval { return \"hello\" } 0 ] {hello}\n                    $master propagate-test.simple\n                    $master propagate-test.mixed\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n            
            {incr counter-1}\n                        {incr counter-2}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr after-call}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from command after bad EVAL} {\n                    set repl [attach_to_replication_stream]\n\n                    catch { $master eval { return \"hello\" } -12 } e\n                    assert_equal $e {ERR Number of keys can't be negative}\n                    $master propagate-test.simple\n                    $master propagate-test.mixed\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr after-call}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module propagates from multi-exec} {\n                    set repl [attach_to_replication_stream]\n\n                    $master multi\n                    $master propagate-test.simple\n                    $master propagate-test.mixed\n                    $master propagate-test.timer-nested-repl\n                    $master exec\n\n                    wait_for_condition 500 10 {\n                   
     [$replica get timer-nested-end] eq \"1\"\n                    } else {\n                        fail \"The two counters don't match the expected value.\"\n                    }\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr notifications}\n                        {incr after-call}\n                        {exec}\n                        {multi}\n                        {incrby timer-nested-start 1}\n                        {incr notifications}\n                        {incr using-call}\n                        {incr counter-1}\n                        {incr counter-2}\n                        {incr counter-3}\n                        {incr counter-4}\n                        {incr notifications}\n                        {incr after-call}\n                        {incr notifications}\n                        {incr before-call-2}\n                        {incr notifications}\n                        {incr asdf}\n                        {incr notifications}\n                        {del asdf}\n                        {incr notifications}\n                        {incr after-call-2}\n                        {incr notifications}\n                        {incr timer-nested-middle}\n                        {incrby timer-nested-end 1}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    # Note propagate-test.timer-nested-repl just propagates INCRBY, causing an\n                    # inconsistency, so we flush\n                    $master flushall\n                }\n\n                test {module RM_Call of expired key propagation} {\n    
                $master debug set-active-expire 0\n\n                    $master set k1 900 px 100\n                    after 110\n\n                    set repl [attach_to_replication_stream]\n                    $master propagate-test.incr k1\n\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n                        {del k1}\n                        {propagate-test.incr k1}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n\n                    assert_equal [$master get k1] 1\n                    assert_equal [$master ttl k1] -1\n\n                    wait_for_condition 50 100 {\n                        [$replica get k1] eq 1 &&\n                        [$replica ttl k1] eq -1\n                    } else {\n                        fail \"failed RM_Call of expired key propagation\"\n                    }\n                }\n\n                test {module notification on set} {\n                    set repl [attach_to_replication_stream]\n\n                    $master SADD s foo\n\n                    wait_for_condition 500 10 {\n                        [$replica SCARD s] eq \"1\"\n                    } else {\n                        fail \"Failed to wait for set to be replicated\"\n                    }\n\n                    $master SPOP s 1\n\n                    wait_for_condition 500 10 {\n                        [$replica SCARD s] eq \"0\"\n                    } else {\n                        fail \"Failed to wait for set to be replicated\"\n                    }\n\n                    # Currently the `del` command comes after the notification.\n                    # When we fix spop to fire notification at the end (like all other commands),\n                    # the `del` will come first.\n                    assert_replication_stream $repl {\n                        {multi}\n                        {select *}\n             
           {incr notifications}\n                        {sadd s foo}\n                        {exec}\n                        {multi}\n                        {incr notifications}\n                        {incr notifications}\n                        {del s}\n                        {exec}\n                    }\n                    close_replication_stream $repl\n                }\n\n                test {module key miss notification does not cause read command to be replicated} {\n                    set repl [attach_to_replication_stream]\n\n                    $master flushall\n                    \n                    $master get unexisting_key\n\n                    wait_for_condition 500 10 {\n                        [$replica get missed] eq \"1\"\n                    } else {\n                        fail \"Failed to wait for the miss notification to be replicated\"\n                    }\n\n                    # This test checks an incorrect behavior that causes a read command to be replicated to the replica/AOF.\n                    # We keep the test to verify that such an incorrect behavior does not cause any crashes.\n                    assert_replication_stream $repl {\n                        {select *}\n                        {flushall}\n                        {multi}\n                        {incr notifications}\n                        {incr missed}\n                        {get unexisting_key}\n                        {exec}\n                    }\n                    \n                    close_replication_stream $repl\n                }\n\n                test \"Unload the module - propagate-test/testkeyspace\" {\n                    assert_equal {OK} [r module unload propagate-test]\n                    assert_equal {OK} [r module unload testkeyspace]\n                }\n\n                assert_equal [s -1 unexpected_error_replies] 0\n            }\n        }\n    }\n}\n\n\ntags \"modules aof external:skip\" {\n    foreach aofload_type {debug_cmd startup} {\n    test 
\"Modules RM_Replicate replicates MULTI/EXEC correctly: AOF-load type $aofload_type\" {\n        start_server [list overrides [list loadmodule \"$testmodule\"]] {\n            # Enable the AOF\n            r config set appendonly yes\n            r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.\n            waitForBgrewriteaof r\n\n            r propagate-test.simple\n            r propagate-test.mixed\n            r multi\n            r propagate-test.simple\n            r propagate-test.mixed\n            r exec\n\n            assert_equal [r get counter-1] {}\n            assert_equal [r get counter-2] {}\n            assert_equal [r get using-call] 2\n            assert_equal [r get after-call] 2\n            assert_equal [r get notifications] 4\n\n            # Load the AOF\n            if {$aofload_type == \"debug_cmd\"} {\n                r debug loadaof\n            } else {\n                r config rewrite\n                restart_server 0 true false\n                wait_done_loading r\n            }\n\n            # This module behaves badly on purpose: it only calls\n            # RM_Replicate for counter-1 and counter-2, so values\n            # after AOF-load are different\n            assert_equal [r get counter-1] 4\n            assert_equal [r get counter-2] 4\n            assert_equal [r get using-call] 2\n            assert_equal [r get after-call] 2\n            # 4+4+2+2 commands from AOF (just above) + 4 \"INCR notifications\" from AOF + 4 notifications for these INCRs\n            assert_equal [r get notifications] 20\n\n            assert_equal {OK} [r module unload propagate-test]\n            assert_equal [s 0 unexpected_error_replies] 0\n        }\n    }\n    test \"Modules RM_Call does not update stats during aof load: AOF-load type $aofload_type\" {\n        start_server [list overrides [list loadmodule \"$miscmodule\"]] {\n            # Enable the AOF\n            r config set appendonly yes\n            r config set 
auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.\n            waitForBgrewriteaof r\n            \n            r config resetstat\n            r set foo bar\n            r EVAL {return redis.call('SET', KEYS[1], ARGV[1])} 1 foo bar2\n            r test.rm_call_replicate set foo bar3\n            r EVAL {return redis.call('test.rm_call_replicate',ARGV[1],KEYS[1],ARGV[2])} 1 foo set bar4\n            \n            r multi\n            r set foo bar5\n            r EVAL {return redis.call('SET', KEYS[1], ARGV[1])} 1 foo bar6\n            r test.rm_call_replicate set foo bar7\n            r EVAL {return redis.call('test.rm_call_replicate',ARGV[1],KEYS[1],ARGV[2])} 1 foo set bar8\n            r exec\n\n            assert_match {*calls=8,*,rejected_calls=0,failed_calls=0*} [cmdrstat set r]\n            \n            \n            # Load the AOF\n            if {$aofload_type == \"debug_cmd\"} {\n                r config resetstat\n                r debug loadaof\n            } else {\n                r config rewrite\n                restart_server 0 true false\n                wait_done_loading r\n            }\n            \n            assert_no_match {*calls=*} [cmdrstat set r]\n            \n        }\n    }\n    }\n}\n\n# This test does not really test module functionality, but rather uses a module\n# command to test Redis replication mechanisms.\ntest {Replicas that were marked as CLIENT_CLOSE_ASAP should not keep the replication backlog from being trimmed} {\n    start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n        set replica [srv 0 client]\n        start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n            $master config set client-output-buffer-limit \"replica 10mb 5mb 0\"\n\n            # Start the replication process...\n            $replica 
replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            test {module propagates from timer} {\n                # Replicate large commands to make the replica disconnected.\n                $master write [format_command propagate-test.verbatim 100000 [string repeat \"a\" 1000]] ;# almost 100mb\n                # Execute this command together with module commands within the same\n                # event loop to prevent periodic cleanup of replication backlog.\n                $master write [format_command info memory]\n                $master flush\n                $master read ;# propagate-test.verbatim\n                set res [$master read] ;# info memory\n\n                # Wait for the replica to be disconnected.\n                wait_for_log_messages 0 {\"*flags=S*scheduled to be closed ASAP for overcoming of output buffer limits*\"} 0 1500 10\n                # Due to the replica reaching the soft limit (5MB), memory peaks should not significantly\n                # exceed the replica soft limit. Furthermore, as the replica releases its reference to\n                # the replication backlog, it should be properly trimmed; the memory usage of the\n                # replication backlog should not significantly exceed repl-backlog-size (default 1MB).\n                assert_lessthan [getInfoProperty $res used_memory_peak] 20000000;# less than 20mb\n                assert_lessthan [getInfoProperty $res mem_replication_backlog] 2000000;# less than 2mb\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/publish.tcl",
    "content": "set testmodule [file normalize tests/modules/publish.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {PUBLISH and SPUBLISH via a module} {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {1} [subscribe $rd2 {chan1}]\n        assert_equal 1 [r publish.shard chan1 hello]\n        assert_equal 1 [r publish.classic chan1 world]\n        assert_equal {smessage chan1 hello} [$rd1 read]\n        assert_equal {message chan1 world} [$rd2 read]\n        $rd1 close\n        $rd2 close\n    }\n\n    test {module publish to self with multi message} {\n        r hello 3\n        r subscribe foo\n\n        # published message comes after the response of the command that issued it.\n        assert_equal [r publish.classic_multi foo bar vaz] {1 1}\n        assert_equal [r read] {message foo bar}\n        assert_equal [r read] {message foo vaz}\n\n        r unsubscribe foo\n        r hello 2\n        set _ \"\"\n    } {} {resp3}\n\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/rdbloadsave.tcl",
    "content": "set testmodule [file normalize tests/modules/rdbloadsave.so]\n\nstart_server {tags {\"modules external:skip debug_defrag:skip\"}} {\n    r module load $testmodule\n\n    test \"Module rdbloadsave sanity\" {\n        r test.sanity\n\n        # Try to load non-existing file\n        assert_error {*No such file or directory*} {r test.rdbload sanity.rdb}\n\n        r set x 1\n        assert_equal OK [r test.rdbsave sanity.rdb]\n\n        r flushdb\n        assert_equal OK [r test.rdbload sanity.rdb]\n        assert_equal 1 [r get x]\n    }\n\n    test \"Module rdbloadsave test with pipelining\" {\n        r config set save \"\"\n        r config set loading-process-events-interval-bytes 1024\n        r config set key-load-delay 50\n        r flushdb\n\n        populate 3000 a 1024\n        r set x 111\n        assert_equal [r dbsize] 3001\n\n        assert_equal OK [r test.rdbsave blabla.rdb]\n        r flushdb\n        assert_equal [r dbsize] 0\n\n        # Send commands with pipeline. First command will call RM_RdbLoad() in\n        # the command callback. While loading RDB, Redis can go to networking to\n        # reply -LOADING. By sending commands in pipeline, we verify it doesn't\n        # cause a problem.\n        # e.g. 
Redis won't try to process the next message of the current client\n        # while it is in the command callback for that client.\n        set rd1 [redis_deferring_client]\n        $rd1 test.rdbload blabla.rdb\n\n        wait_for_condition 50 100 {\n            [s loading] eq 1\n        } else {\n            fail \"Redis did not start loading or loaded RDB too fast\"\n        }\n\n        $rd1 get x\n        $rd1 dbsize\n\n        assert_equal OK [$rd1 read]\n        assert_equal 111 [$rd1 read]\n        assert_equal 3001 [$rd1 read]\n        r flushdb\n        r config set key-load-delay 0\n    }\n\n    test \"Module rdbloadsave with aof\" {\n        r config set save \"\"\n\n        # Enable the AOF\n        r config set appendonly yes\n        r config set auto-aof-rewrite-percentage 0 ; # Disable auto-rewrite.\n        waitForBgrewriteaof r\n\n        r set k v1\n        assert_equal OK [r test.rdbsave aoftest.rdb]\n\n        r set k v2\n        r config set rdb-key-save-delay 10000000\n        r bgrewriteaof\n\n        # RM_RdbLoad() should kill the AOF fork\n        assert_equal OK [r test.rdbload aoftest.rdb]\n\n        wait_for_condition 50 100 {\n            [string match {*Killing*AOF*child*} [exec tail -20 < [srv 0 stdout]]]\n        } else {\n            fail \"Can't find 'Killing AOF child' in recent log lines\"\n        }\n\n        # Verify the value in the loaded rdb\n        assert_equal v1 [r get k]\n\n        # Verify aof is still enabled after RM_RdbLoad() call\n        assert_equal 1 [s aof_enabled]\n\n        r flushdb\n        r config set rdb-key-save-delay 0\n        r config set appendonly no\n    }\n\n    test \"Module rdbloadsave with bgsave\" {\n        r flushdb\n        r config set save \"\"\n\n        r set k v1\n        assert_equal OK [r test.rdbsave bgsave.rdb]\n\n        r set k v2\n        r config set rdb-key-save-delay 500000\n        r bgsave\n\n        # RM_RdbLoad() should kill the RDB fork\n        assert_equal OK [r test.rdbload 
bgsave.rdb]\n\n        wait_for_condition 10 1000 {\n            [string match {*Background*saving*terminated*} [exec tail -20 < [srv 0 stdout]]]\n        } else {\n            fail \"Can't find 'Background saving terminated' in recent log lines\"\n        }\n\n        assert_equal v1 [r get k]\n        r flushall\n        waitForBgsave r\n        r config set rdb-key-save-delay 0\n    }\n\n    test \"Module rdbloadsave calls rdbsave in a module fork\" {\n        r flushdb\n        r config set save \"\"\n        r config set rdb-key-save-delay 500000\n\n        r set k v1\n\n        # Module will call RM_Fork() before calling RM_RdbSave()\n        assert_equal OK [r test.rdbsave_fork rdbfork.rdb]\n        assert_equal [s module_fork_in_progress] 1\n\n        wait_for_condition 10 1000 {\n            [status r module_fork_in_progress] == \"0\"\n        } else {\n            fail \"Module fork didn't finish\"\n        }\n\n        r set k v2\n        assert_equal OK [r test.rdbload rdbfork.rdb]\n        assert_equal v1 [r get k]\n\n        r config set rdb-key-save-delay 0\n    }\n\n    test \"Unload the module - rdbloadsave\" {\n        assert_equal {OK} [r module unload rdbloadsave]\n    }\n\n    tags {repl} {\n        test {Module rdbloadsave on master and replica} {\n            start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n                set replica [srv 0 client]\n                set replica_host [srv 0 host]\n                set replica_port [srv 0 port]\n                start_server [list overrides [list loadmodule \"$testmodule\"] tags {\"external:skip\"}] {\n                    set master [srv 0 client]\n                    set master_host [srv 0 host]\n                    set master_port [srv 0 port]\n\n                    $master set x 10000\n\n                    # Start the replication process...\n                    $replica replicaof $master_host $master_port\n\n                    wait_for_condition 100 
100 {\n                        [status $master sync_full] == 1\n                    } else {\n                        fail \"Master <-> Replica didn't start the full sync\"\n                    }\n\n                    # RM_RdbSave() is allowed on replicas\n                    assert_equal OK [$replica test.rdbsave rep.rdb]\n\n                    # RM_RdbLoad() is not allowed on replicas\n                    assert_error {*supported*} {$replica test.rdbload rep.rdb}\n\n                    assert_equal OK [$master test.rdbsave master.rdb]\n                    $master set x 20000\n\n                    wait_for_condition 100 100 {\n                        [$replica get x] == 20000\n                    } else {\n                        fail \"Replica didn't get the update\"\n                    }\n\n                    # Loading RDB on master will drop replicas\n                    assert_equal OK [$master test.rdbload master.rdb]\n\n                    wait_for_condition 100 100 {\n                        [status $master sync_full] == 2\n                    } else {\n                        fail \"Master <-> Replica didn't start the full sync\"\n                    }\n\n                    wait_for_condition 100 100 {\n                        [$replica get x] == 10000\n                    } else {\n                        fail \"Replica didn't get the update\"\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/reply.tcl",
    "content": "set testmodule [file normalize tests/modules/reply.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n    \n    #   test all with hello 2/3\n    for {set proto 2} {$proto <= 3} {incr proto} {\n        if {[lsearch $::denytags \"resp3\"] >= 0} {\n            if {$proto == 3} {continue}\n        } elseif {$::force_resp3} {\n            if {$proto == 2} {continue}\n        }\n        r hello $proto\n\n        test \"RESP$proto: RM_ReplyWithString: a string reply\" {\n            # RedisString\n            set string [r rw.string \"Redis\"]\n            assert_equal \"Redis\" $string\n            # C string\n            set string [r rw.cstring]\n            assert_equal \"A simple string\" $string\n        }\n\n        test \"RESP$proto: RM_ReplyWithBigNumber: a big number reply\" {\n            assert_equal \"123456778901234567890\" [r rw.bignumber \"123456778901234567890\"]\n        }\n\n        test \"RESP$proto: RM_ReplyWithInt: an integer reply\" {\n            assert_equal 42 [r rw.int 42]\n        }\n\n        test \"RESP$proto: RM_ReplyWithDouble: a float reply\" {\n            assert_equal 3.141 [r rw.double 3.141]\n        }\n\n        test \"RESP$proto: RM_ReplyWithDouble: inf\" {\n            if {$proto == 2} {\n                assert_equal \"inf\" [r rw.double inf]\n                assert_equal \"-inf\" [r rw.double -inf]\n            } else {\n                # TCL converts inf to different results on different platforms, e.g. 
inf on mac\n                # and Inf on others, so use readraw to verify the protocol\n                r readraw 1\n                assert_equal \",inf\" [r rw.double inf]\n                assert_equal \",-inf\" [r rw.double -inf]\n                r readraw 0\n            }\n        }\n\n        test \"RESP$proto: RM_ReplyWithDouble: NaN\" {\n            if {$proto == 2} {\n                assert_equal \"nan\" [r rw.double 0 0]\n                assert_equal \"nan\" [r rw.double]\n            } else {\n                # TCL won't convert nan into a double, use readraw to verify the protocol\n                r readraw 1\n                assert_equal \",nan\" [r rw.double 0 0]\n                assert_equal \",nan\" [r rw.double]\n                r readraw 0\n            }\n        }\n\n        set ld 0.00000000000000001\n        test \"RESP$proto: RM_ReplyWithLongDouble: a float reply\" {\n            if {$proto == 2} {\n                # here the response gets to TCL as a string\n                assert_equal $ld [r rw.longdouble $ld]\n            } else {\n                # TCL doesn't support long double and the test infra converts it to a\n                # normal double which causes precision loss. 
So we use readraw instead.\n                r readraw 1\n                assert_equal \",$ld\" [r rw.longdouble $ld]\n                r readraw 0\n            }\n        }\n\n        test \"RESP$proto: RM_ReplyWithVerbatimString: a string reply\" {\n            assert_equal \"bla\\nbla\\nbla\" [r rw.verbatim \"bla\\nbla\\nbla\"]\n        }\n\n        test \"RESP$proto: RM_ReplyWithArray: an array reply\" {\n            assert_equal {0 1 2 3 4} [r rw.array 5]\n        }\n\n        test \"RESP$proto: RM_ReplyWithMap: a map reply\" {\n            set res [r rw.map 3]\n            if {$proto == 2} {\n                assert_equal {0 0 1 1.5 2 3} $res\n            } else {\n                assert_equal [dict create 0 0.0 1 1.5 2 3.0] $res\n            }\n        }\n\n        test \"RESP$proto: RM_ReplyWithSet: a set reply\" {\n            assert_equal {0 1 2} [r rw.set 3]\n        }\n\n        test \"RESP$proto: RM_ReplyWithAttribute: an attribute reply\" {\n            if {$proto == 2} {\n                catch {[r rw.attribute 3]} e\n                assert_match \"Attributes aren't supported by RESP 2\" $e\n            } else {\n                r readraw 1\n                set res [r rw.attribute 3]\n                assert_equal [r read] {:0}\n                assert_equal [r read] {,0}\n                assert_equal [r read] {:1}\n                assert_equal [r read] {,1.5}\n                assert_equal [r read] {:2}\n                assert_equal [r read] {,3}\n                assert_equal [r read] {+OK}\n                r readraw 0\n            }\n        }\n\n        test \"RESP$proto: RM_ReplyWithBool: a boolean reply\" {\n            assert_equal {0 1} [r rw.bool]\n        }\n\n        test \"RESP$proto: RM_ReplyWithNull: a NULL reply\" {\n            assert_equal {} [r rw.null]\n        }\n\n        test \"RESP$proto: RM_ReplyWithError: an error reply\" {\n            catch {r rw.error} e\n            assert_match \"An error\" $e\n        }\n\n        test \"RESP$proto: 
RM_ReplyWithErrorFormat: error format reply\" {\n            catch {r rw.error_format \"An error: %s\" foo} e\n            assert_match \"An error: foo\" $e  ;# Should not be used by a user, but compatible with RM_ReplyError\n\n            catch {r rw.error_format \"-ERR An error: %s\" foo2} e\n            assert_match \"-ERR An error: foo2\" $e  ;# Should not be used by a user, but compatible with RM_ReplyError (There are two hyphens, TCL removes the first one)\n\n            catch {r rw.error_format \"-WRONGTYPE A type error: %s\" foo3} e\n            assert_match \"-WRONGTYPE A type error: foo3\" $e  ;# Should not be used by a user, but compatible with RM_ReplyError (There are two hyphens, TCL removes the first one)\n\n            catch {r rw.error_format \"ERR An error: %s\" foo4} e\n            assert_match \"ERR An error: foo4\" $e\n\n            catch {r rw.error_format \"WRONGTYPE A type error: %s\" foo5} e\n            assert_match \"WRONGTYPE A type error: foo5\" $e\n        }\n\n        r hello 2\n    }\n\n    test \"Unload the module - replywith\" {\n        assert_equal {OK} [r module unload replywith]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/scan.tcl",
    "content": "set testmodule [file normalize tests/modules/scan.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module scan keyspace} {\n        # the module creates a scan command with filtering which also returns values\n        r set x 1\n        r set y 2\n        r set z 3\n        r hset h f v\n        lsort [r scan.scan_strings]\n    } {{x 1} {y 2} {z 3}}\n\n    test {Module scan hash listpack} {\n        r hmset hh f1 v1 f2 v2\n        assert_encoding listpack hh\n        lsort [r scan.scan_key hh]\n    } {{f1 v1} {f2 v2}}\n\n    test {Module scan hash listpack with int value} {\n        r hmset hh1 f1 1\n        assert_encoding listpack hh1\n        lsort [r scan.scan_key hh1]\n    } {{f1 1}}\n\n    test {Module scan hash listpack with hexpire} {\n        r debug set-active-expire 0\n        r hmset hh f1 v1 f2 v2 f3 v3\n        r hexpire hh 100000 fields 1 f1\n        r hpexpire hh 1 fields 1 f3\n        after 10\n        assert_range [r httl hh fields 1 f1] 10000 100000\n        assert_encoding listpackex hh\n        r debug set-active-expire 1\n        lsort [r scan.scan_key hh]\n    } {{f1 v1} {f2 v2}} {needs:debug}\n\n    test {Module scan hash dict} {\n        r config set hash-max-ziplist-entries 2\n        r hmset hh f3 v3\n        assert_encoding hashtable hh\n        lsort [r scan.scan_key hh]\n    } {{f1 v1} {f2 v2} {f3 v3}}\n\n    test {Module scan hash dict with hexpire} {\n        r config set hash-max-listpack-entries 1\n        r del hh\n        r hmset hh f1 v1 f2 v2 f3 v3\n        r hexpire hh 100000 fields 1 f2\n        r hpexpire hh 5 fields 1 f3\n        assert_range [r httl hh fields 1 f2] 10000 100000\n        assert_encoding hashtable hh\n        after 10\n        lsort [r scan.scan_key hh]\n    } {{f1 v1} {f2 v2}}\n\n    test {Module scan hash with hexpire can return no items} {\n        r del hh\n        r debug set-active-expire 0\n        r hmset hh f1 v1 f2 v2 f3 v3\n        
r hpexpire hh 1 fields 3 f1 f2 f3\n        after 10\n        assert_equal [r scan.scan_key hh] {}\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {Module scan zset listpack} {\n        r zadd zz 1 f1 2 f2\n        assert_encoding listpack zz\n        lsort [r scan.scan_key zz]\n    } {{f1 1} {f2 2}}\n\n    test {Module scan zset skiplist} {\n        r config set zset-max-ziplist-entries 2\n        r zadd zz 3 f3\n        assert_encoding skiplist zz\n        lsort [r scan.scan_key zz]\n    } {{f1 1} {f2 2} {f3 3}}\n\n    test {Module scan set intset} {\n        r sadd ss 1 2\n        assert_encoding intset ss\n        lsort [r scan.scan_key ss]\n    } {{1 {}} {2 {}}}\n\n    test {Module scan set dict} {\n        r config set set-max-intset-entries 2\n        r sadd ss 3\n        assert_encoding hashtable ss\n        lsort [r scan.scan_key ss]\n    } {{1 {}} {2 {}} {3 {}}}\n\n    test {Module scan set listpack} {\n        r sadd ss1 a b c\n        assert_encoding listpack ss1\n        lsort [r scan.scan_key ss1]\n    } {{a {}} {b {}} {c {}}}\n\n    test \"Unload the module - scan\" {\n        assert_equal {OK} [r module unload scan]\n    }\n}"
  },
  {
    "path": "tests/unit/moduleapi/stream.tcl",
    "content": "set testmodule [file normalize tests/modules/stream.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module stream add and delete} {\n        r del mystream\n        # add to empty key\n        set streamid1 [r stream.add mystream item 1 value a]\n        # add to existing stream\n        set streamid2 [r stream.add mystream item 2 value b]\n        # check result\n        assert { [string match \"*-*\" $streamid1] }\n        set items [r XRANGE mystream - +]\n        assert_equal $items \\\n            \"{$streamid1 {item 1 value a}} {$streamid2 {item 2 value b}}\"\n        # delete one of them and try deleting non-existing ID\n        assert_equal OK [r stream.delete mystream $streamid1]\n        assert_error \"ERR StreamDelete*\" {r stream.delete mystream 123-456}\n        assert_error \"Invalid stream ID*\" {r stream.delete mystream foo}\n        assert_equal \"{$streamid2 {item 2 value b}}\" [r XRANGE mystream - +]\n        # check error condition: wrong type\n        r del mystream\n        r set mystream mystring\n        assert_error \"ERR StreamAdd*\" {r stream.add mystream item 1 value a}\n        assert_error \"ERR StreamDelete*\" {r stream.delete mystream 123-456}\n    }\n\n    test {Module stream add unblocks blocking xread} {\n        r del mystream\n\n        # Blocking XREAD on an empty key\n        set rd1 [redis_deferring_client]\n        $rd1 XREAD BLOCK 3000 STREAMS mystream $\n        # wait until client is actually blocked\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            fail \"Client is not blocked\"\n        }\n        set id [r stream.add mystream field 1 value a]\n        assert_equal \"{mystream {{$id {field 1 value a}}}}\" [$rd1 read]\n\n        # Blocking XREAD on an existing stream\n        set rd2 [redis_deferring_client]\n        $rd2 XREAD BLOCK 3000 STREAMS mystream $\n        # wait until client is 
actually blocked\n        wait_for_condition 50 100 {\n            [s 0 blocked_clients] eq {1}\n        } else {\n            fail \"Client is not blocked\"\n        }\n        set id [r stream.add mystream field 2 value b]\n        assert_equal \"{mystream {{$id {field 2 value b}}}}\" [$rd2 read]\n    }\n\n    test {Module stream add benchmark (1M stream add)} {\n        set n 1000000\n        r del mystream\n        set result [r stream.addn mystream $n field value]\n        assert_equal $result $n\n    }\n\n    test {Module stream XADD big fields doesn't create empty key} {\n        set original_proto [config_get_set proto-max-bulk-len 2147483647] ;#2gb\n        set original_query [config_get_set client-query-buffer-limit 2147483647] ;#2gb\n\n        r del mystream\n        r write \"*4\\r\\n\\$10\\r\\nstream.add\\r\\n\\$8\\r\\nmystream\\r\\n\\$5\\r\\nfield\\r\\n\"\n        catch {\n            write_big_bulk 1073741824 ;#1gb\n        } err\n        assert {$err eq \"ERR StreamAdd failed\"}\n        assert_equal 0 [r exists mystream]\n\n        # restore defaults\n        r config set proto-max-bulk-len $original_proto\n        r config set client-query-buffer-limit $original_query\n    } {OK} {large-memory}\n\n    test {Module stream iterator} {\n        r del mystream\n        set streamid1 [r xadd mystream * item 1 value a]\n        set streamid2 [r xadd mystream * item 2 value b]\n        # range result\n        set result1 [r stream.range mystream \"-\" \"+\"]\n        set expect1 [r xrange mystream \"-\" \"+\"]\n        assert_equal $result1 $expect1\n        # reverse range\n        set result_rev [r stream.range mystream \"+\" \"-\"]\n        set expect_rev [r xrevrange mystream \"+\" \"-\"]\n        assert_equal $result_rev $expect_rev\n\n        # only one item: range with startid = endid\n        set result2 [r stream.range mystream \"-\" $streamid1]\n        assert_equal $result2 \"{$streamid1 {item 1 value a}}\"\n        assert_equal $result2 [list 
[list $streamid1 {item 1 value a}]]\n        # only one item: range with startid = endid\n        set result3 [r stream.range mystream $streamid2 $streamid2]\n        assert_equal $result3 \"{$streamid2 {item 2 value b}}\"\n        assert_equal $result3 [list [list $streamid2 {item 2 value b}]]\n    }\n\n    test {Module stream iterator delete} {\n        r del mystream\n        set id1 [r xadd mystream * normal item]\n        set id2 [r xadd mystream * selfdestruct yes]\n        set id3 [r xadd mystream * another item]\n        # stream.range deletes the \"selfdestruct\" item after returning it\n        assert_equal \\\n            \"{$id1 {normal item}} {$id2 {selfdestruct yes}} {$id3 {another item}}\" \\\n            [r stream.range mystream - +]\n        # now, the \"selfdestruct\" item is gone\n        assert_equal \\\n            \"{$id1 {normal item}} {$id3 {another item}}\" \\\n            [r stream.range mystream - +]\n    }\n\n    test {Module stream trim by length} {\n        r del mystream\n        # exact maxlen\n        r xadd mystream * item 1 value a\n        r xadd mystream * item 2 value b\n        r xadd mystream * item 3 value c\n        assert_equal 3 [r xlen mystream]\n        assert_equal 0 [r stream.trim mystream maxlen = 5]\n        assert_equal 3 [r xlen mystream]\n        assert_equal 2 [r stream.trim mystream maxlen = 1]\n        assert_equal 1 [r xlen mystream]\n        assert_equal 1 [r stream.trim mystream maxlen = 0]\n        # check that there is no limit for exact maxlen\n        r stream.addn mystream 20000 item x value y\n        assert_equal 20000 [r stream.trim mystream maxlen = 0]\n        # approx maxlen (100 items per node implies default limit 10K items)\n        r stream.addn mystream 20000 item x value y\n        assert_equal 20000 [r xlen mystream]\n        assert_equal 10000 [r stream.trim mystream maxlen ~ 2]\n        assert_equal 9900  [r stream.trim mystream maxlen ~ 2]\n        assert_equal 0     [r stream.trim 
mystream maxlen ~ 2]\n        assert_equal 100   [r xlen mystream]\n        assert_equal 100   [r stream.trim mystream maxlen ~ 0]\n        assert_equal 0     [r xlen mystream]\n    }\n\n    test {Module stream trim by ID} {\n        r del mystream\n        # exact minid\n        r xadd mystream * item 1 value a\n        r xadd mystream * item 2 value b\n        set minid [r xadd mystream * item 3 value c]\n        assert_equal 3 [r xlen mystream]\n        assert_equal 0 [r stream.trim mystream minid = -]\n        assert_equal 3 [r xlen mystream]\n        assert_equal 2 [r stream.trim mystream minid = $minid]\n        assert_equal 1 [r xlen mystream]\n        assert_equal 1 [r stream.trim mystream minid = +]\n        # check that there is no limit for exact minid\n        r stream.addn mystream 20000 item x value y\n        assert_equal 20000 [r stream.trim mystream minid = +]\n        # approx minid (100 items per node implies default limit 10K items)\n        r stream.addn mystream 19980 item x value y\n        set minid [r xadd mystream * item x value y]\n        r stream.addn mystream 19 item x value y\n        assert_equal 20000 [r xlen mystream]\n        assert_equal 10000 [r stream.trim mystream minid ~ $minid]\n        assert_equal 9900  [r stream.trim mystream minid ~ $minid]\n        assert_equal 0     [r stream.trim mystream minid ~ $minid]\n        assert_equal 100   [r xlen mystream]\n        assert_equal 100   [r stream.trim mystream minid ~ +]\n        assert_equal 0     [r xlen mystream]\n    }\n\n    test \"Unload the module - stream\" {\n        assert_equal {OK} [r module unload stream]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/subcommands.tcl",
    "content": "set testmodule [file normalize tests/modules/subcommands.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"Module subcommands via COMMAND\" {\n        # Verify that module subcommands are displayed correctly in COMMAND\n        set command_reply [r command info subcommands.bitarray]\n        set first_cmd [lindex $command_reply 0]\n        set subcmds_in_command [lsort [lindex $first_cmd 9]]\n        assert_equal [lindex $subcmds_in_command 0] {subcommands.bitarray|get -2 module 1 1 1 {} {} {{flags {RO access} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}} {}}\n        assert_equal [lindex $subcmds_in_command 1] {subcommands.bitarray|set -2 module 1 1 1 {} {} {{flags {RW update} begin_search {type index spec {index 1}} find_keys {type range spec {lastkey 0 keystep 1 limit 0}}}} {}}\n\n        # Verify that module subcommands are displayed correctly in COMMAND DOCS\n        set docs_reply [r command docs subcommands.bitarray]\n        set docs [dict create {*}[lindex $docs_reply 1]]\n        set subcmds_in_cmd_docs [dict create {*}[dict get $docs subcommands]]\n        assert_equal [dict get $subcmds_in_cmd_docs \"subcommands.bitarray|get\"] {group module module subcommands}\n        assert_equal [dict get $subcmds_in_cmd_docs \"subcommands.bitarray|set\"] {group module module subcommands}\n    }\n\n    test \"Module pure-container command fails on arity error\" {\n        catch {r subcommands.bitarray} e\n        assert_match {*wrong number of arguments for 'subcommands.bitarray' command} $e\n\n        # Subcommands can be called\n        assert_equal [r subcommands.bitarray get k1] {OK}\n\n        # Subcommand arity error\n        catch {r subcommands.bitarray get k1 8 90} e\n        assert_match {*wrong number of arguments for 'subcommands.bitarray|get' command} $e\n    }\n\n    test \"Module get current command fullname\" {\n        
assert_equal [r subcommands.parent_get_fullname] {subcommands.parent_get_fullname}\n    }\n\n    test \"Module get current subcommand fullname\" {\n        assert_equal [r subcommands.sub get_fullname] {subcommands.sub|get_fullname}\n    }\n\n    test \"COMMAND LIST FILTERBY MODULE\" {\n        assert_equal {} [r command list filterby module non_existing]\n\n        set commands [r command list filterby module subcommands]\n        assert_not_equal [lsearch $commands \"subcommands.bitarray\"] -1\n        assert_not_equal [lsearch $commands \"subcommands.bitarray|set\"] -1\n        assert_not_equal [lsearch $commands \"subcommands.parent_get_fullname\"] -1\n        assert_not_equal [lsearch $commands \"subcommands.sub|get_fullname\"] -1\n\n        assert_equal [lsearch $commands \"set\"] -1\n    }\n\n    test \"Internal container command without subcommand returns missing subcommand error\" {\n        assert_error {*missing subcommand*} {r subcommands.internal_container}\n    }\n\n    test \"Unload the module - subcommands\" {\n        assert_equal {OK} [r module unload subcommands]\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/test_lazyfree.tcl",
    "content": "set testmodule [file normalize tests/modules/test_lazyfree.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test \"modules allocated memory can be reclaimed in the background\" {\n        set orig_mem [s used_memory]\n        set rd [redis_deferring_client]\n\n        # LAZYFREE_THRESHOLD is 64\n        for {set i 0} {$i < 10000} {incr i} {\n            $rd lazyfreelink.insert lazykey $i\n        }\n\n        for {set j 0} {$j < 10000} {incr j} {\n            $rd read \n        }\n\n        assert {[r lazyfreelink.len lazykey] == 10000}\n\n        set peak_mem [s used_memory]\n        assert {[r unlink lazykey] == 1}\n        assert {$peak_mem > $orig_mem+10000}\n        wait_for_condition 50 100 {\n            [s used_memory] < $peak_mem &&\n            [s used_memory] < $orig_mem*2 &&\n            [string match {*lazyfreed_objects:1*} [r info Memory]]\n        } else {\n            fail \"Module memory is not reclaimed by UNLINK\"\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/testrdb.tcl",
    "content": "# This module can be configured with multiple options given as flags on module load time\n# 0 - no aux fields will be declared (this is the default)\n# 1 << 0 - use aux_save2 api\n# 1 << 1 - call aux callback before key space\n# 1 << 2 - call aux callback after key space\n# 1 << 3 - do not save data on aux callback\nset testmodule [file normalize tests/modules/testrdb.so]\n\ntags \"modules external:skip\" {\n    test {modules are able to persist types} {\n        start_server [list overrides [list loadmodule \"$testmodule\"]] {\n            r testrdb.set.key key1 value1\n            assert_equal \"value1\" [r testrdb.get.key key1]\n            r debug reload\n            assert_equal \"value1\" [r testrdb.get.key key1]\n        }\n    }\n\n    test {modules global are lost without aux} {\n        set server_path [tmpdir \"server.module-testrdb\"]\n        start_server [list overrides [list loadmodule \"$testmodule\" \"dir\" $server_path] keep_persistence true] {\n            r testrdb.set.before global1\n            assert_equal \"global1\" [r testrdb.get.before]\n        }\n        start_server [list overrides [list loadmodule \"$testmodule\" \"dir\" $server_path]] {\n            assert_equal \"\" [r testrdb.get.before]\n        }\n    }\n\n    test {aux that saves no data are not saved to the rdb when aux_save2 is used} {\n        set server_path [tmpdir \"server.module-testrdb\"]\n        puts $server_path\n        # 15 == 1111 - use aux_save2 before and after key space without data\n        start_server [list overrides [list loadmodule \"$testmodule 15\" \"dir\" $server_path] keep_persistence true] {\n            r set x 1\n            r save\n        }\n        start_server [list overrides [list \"dir\" $server_path] keep_persistence true] {\n            # make sure server started successfully without the module.\n            assert_equal {1} [r get x]\n        }\n    }\n\n    test {aux that saves no data are saved to the rdb when aux_save is 
used} {\n        set server_path [tmpdir \"server.module-testrdb\"]\n        puts $server_path\n        # 14 == 1110 - use aux_save before and after key space without data\n        start_server [list overrides [list loadmodule \"$testmodule 14\" \"dir\" $server_path] keep_persistence true] {\n            r set x 1\n            r save\n        }\n        start_server [list overrides [list loadmodule \"$testmodule 14\" \"dir\" $server_path] keep_persistence true] {\n            # make sure server started successfully and aux_save was called twice.\n            assert_equal {1} [r get x]\n            assert_equal {2} [r testrdb.get.n_aux_load_called]\n        }\n    }\n\n    foreach test_case {6 7} {\n    # 6 == 0110 - use aux_save before and after key space with data\n    # 7 == 0111 - use aux_save2 before and after key space with data\n    test {modules are able to persist globals before and after} {\n        set server_path [tmpdir \"server.module-testrdb\"]\n        start_server [list overrides [list loadmodule \"$testmodule $test_case\" \"dir\" $server_path \"save\" \"900 1\"] keep_persistence true] {\n            r testrdb.set.before global1\n            r testrdb.set.after global2\n            assert_equal \"global1\" [r testrdb.get.before]\n            assert_equal \"global2\" [r testrdb.get.after]\n        }\n        start_server [list overrides [list loadmodule \"$testmodule $test_case\" \"dir\" $server_path \"save\" \"900 1\"]] {\n            assert_equal \"global1\" [r testrdb.get.before]\n            assert_equal \"global2\" [r testrdb.get.after]\n        }\n\n    }\n    }\n\n    foreach test_case {4 5} {\n    # 4 == 0100 - use aux_save after key space with data\n    # 5 == 0101 - use aux_save2 after key space with data\n    test {modules are able to persist globals just after} {\n        set server_path [tmpdir \"server.module-testrdb\"]\n        start_server [list overrides [list loadmodule \"$testmodule $test_case\" \"dir\" $server_path \"save\" \"900 
1\"] keep_persistence true] {\n            r testrdb.set.after global2\n            assert_equal \"global2\" [r testrdb.get.after]\n        }\n        start_server [list overrides [list loadmodule \"$testmodule $test_case\" \"dir\" $server_path \"save\" \"900 1\"]] {\n            assert_equal \"global2\" [r testrdb.get.after]\n        }\n    }\n    }\n\n    test {Verify module options info} {\n        start_server [list overrides [list loadmodule \"$testmodule\"]] {\n            assert_match \"*\\[handle-io-errors|handle-repl-async-load\\]*\" [r info modules]\n        }\n    }\n\n    tags {repl} {\n        test {diskless loading short read with module} {\n            start_server [list overrides [list loadmodule \"$testmodule\"]] {\n                set replica [srv 0 client]\n                set replica_host [srv 0 host]\n                set replica_port [srv 0 port]\n                start_server [list overrides [list loadmodule \"$testmodule\"]] {\n                    set master [srv 0 client]\n                    set master_host [srv 0 host]\n                    set master_port [srv 0 port]\n\n                    # Set master and replica to use diskless replication\n                    $master config set repl-diskless-sync yes\n                    $master config set rdbcompression no\n                    $replica config set repl-diskless-load swapdb\n                    $master config set hz 500\n                    $replica config set hz 500\n                    $master config set dynamic-hz no\n                    $replica config set dynamic-hz no\n                    set start [clock clicks -milliseconds]\n                    # Generate small keys\n                    for {set k 0} {$k < 20000} {incr k} {\n                        r testrdb.set.key keysmall$k [string repeat A [expr {int(rand()*100)}]]\n                    }\n                    # Generate larger keys\n                    for {set k 0} {$k < 30} {incr k} {\n                        r 
testrdb.set.key key$k [string repeat A [expr {int(rand()*1000000)}]]\n                    }\n\n                    if {$::verbose} {\n                        set end [clock clicks -milliseconds]\n                        set duration [expr $end - $start]\n                        puts \"filling took $duration ms (TODO: use pipeline)\"\n                        set start [clock clicks -milliseconds]\n                    }\n\n                    # Start the replication process...\n                    set loglines [count_log_lines -1]\n                    $master config set repl-diskless-sync-delay 0\n                    $replica replicaof $master_host $master_port\n\n                    # kill the replication at various points\n                    set attempts 100\n                    if {$::accurate} { set attempts 500 }\n                    for {set i 0} {$i < $attempts} {incr i} {\n                        # wait for the replica to start reading the rdb\n                        # using the log file since the replica only responds to INFO once in 2mb\n                        set res [wait_for_log_messages -1 {\"*Loading DB in memory*\"} $loglines 2000 1]\n                        set loglines [lindex $res 1]\n\n                        # add some additional random sleep so that we kill the master on a different place each time\n                        after [expr {int(rand()*50)}]\n\n                        # kill the replica connection on the master\n                        set killed [$master client kill type replica]\n\n                        set res [wait_for_log_messages -1 {\"*Internal error in RDB*\" \"*Finished with success*\" \"*Successful partial resynchronization*\"} $loglines 500 10]\n                        if {$::verbose} { puts $res }\n                        set log_text [lindex $res 0]\n                        set loglines [lindex $res 1]\n                        if {![string match \"*Internal error in RDB*\" $log_text]} {\n                            # 
force the replica to try another full sync\n                            $master multi\n                            $master client kill type replica\n                            $master set asdf asdf\n                            # fill replication backlog with new content\n                            $master config set repl-backlog-size 16384\n                            for {set keyid 0} {$keyid < 10} {incr keyid} {\n                                $master set \"$keyid string_$keyid\" [string repeat A 16384]\n                            }\n                            $master exec\n                        }\n\n                        # wait for loading to stop (fail)\n                        # After a loading successfully, next loop will enter `async_loading`\n                        wait_for_condition 1000 1 {\n                            [s -1 async_loading] eq 0 &&\n                            [s -1 loading] eq 0\n                        } else {\n                            fail \"Replica didn't disconnect\"\n                        }\n                    }\n                    if {$::verbose} {\n                        set end [clock clicks -milliseconds]\n                        set duration [expr $end - $start]\n                        puts \"test took $duration ms\"\n                    }\n                    # enable fast shutdown\n                    $master config set rdb-key-save-delay 0\n                }\n            }\n        }\n\n        # Module events for diskless load swapdb when async_loading (matching master replid)\n        foreach test_case {6 7} {\n        # 6 == 0110 - use aux_save before and after key space with data\n        # 7 == 0111 - use aux_save2 before and after key space with data\n        foreach testType {Successful Aborted} {\n            start_server [list overrides [list loadmodule \"$testmodule $test_case\"] tags [list external:skip]] {\n                set replica [srv 0 client]\n                set replica_host [srv 0 
host]\n                set replica_port [srv 0 port]\n                set replica_log [srv 0 stdout]\n                start_server [list overrides [list loadmodule \"$testmodule $test_case\"]] {\n                    set master [srv 0 client]\n                    set master_host [srv 0 host]\n                    set master_port [srv 0 port]\n\n                    set start [clock clicks -milliseconds]\n\n                    # Set master and replica to use diskless replication on swapdb mode\n                    $master config set repl-diskless-sync yes\n                    $master config set repl-diskless-sync-delay 0\n                    $master config set save \"\"\n                    $replica config set repl-diskless-load swapdb\n                    $replica config set save \"\" \n\n                    # Initial sync to have matching replids between master and replica\n                    $replica replicaof $master_host $master_port\n\n                    # Let replica finish initial sync with master\n                    wait_for_condition 100 100 {\n                        [s -1 master_link_status] eq \"up\"\n                    } else {\n                        fail \"Master <-> Replica didn't finish sync\"\n                    }\n\n                    # Set global values on module so we can check if module event callbacks will pick it up correctly\n                    $master testrdb.set.before value1_master\n                    $replica testrdb.set.before value1_replica\n\n                    # Put different data sets on the master and replica\n                    # We need to put large keys on the master since the replica replies to info only once in 2mb\n                    $replica debug populate 200 slave 10\n                    $master debug populate 1000 master 100000\n                    $master config set rdbcompression no\n\n                    # Force the replica to try another full sync (this time it will have matching master replid)\n             
       $master multi\n                    $master client kill type replica\n                    # Fill replication backlog with new content\n                    $master config set repl-backlog-size 16384\n                    for {set keyid 0} {$keyid < 10} {incr keyid} {\n                        $master set \"$keyid string_$keyid\" [string repeat A 16384]\n                    }\n                    $master exec\n\n                    switch $testType {\n                        \"Aborted\" {\n                            # Set master with a slow rdb generation, so that we can easily intercept loading\n                            # 10ms per key, with 1000 keys is 10 seconds\n                            $master config set rdb-key-save-delay 10000\n\n                            test {Diskless load swapdb RedisModuleEvent_ReplAsyncLoad handling: during loading, can keep module variable same as before} {\n                                # Wait for the replica to start reading the rdb and module for acknowledgement\n                                # We wanna abort only after the temp db was populated by REDISMODULE_AUX_BEFORE_RDB\n                                wait_for_condition 100 100 {\n                                    [s -1 async_loading] eq 1 && [$replica testrdb.async_loading.get.before] eq \"value1_master\"\n                                } else {\n                                    fail \"Module didn't receive or react to REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_STARTED\"\n                                }\n\n                                assert_equal [$replica dbsize] 200\n                                assert_equal value1_replica [$replica testrdb.get.before]\n                            }\n\n                            # Make sure that next sync will not start immediately so that we can catch the replica in between syncs\n                            $master config set repl-diskless-sync-delay 5\n\n                            # Kill the replica connection 
on the master\n                            set killed [$master client kill type replica]\n\n                            test {Diskless load swapdb RedisModuleEvent_ReplAsyncLoad handling: when loading aborted, can keep module variable same as before} {\n                                # Wait for loading to stop (fail) and module for acknowledgement\n                                wait_for_condition 100 100 {\n                                    [s -1 async_loading] eq 0 && [$replica testrdb.async_loading.get.before] eq \"\"\n                                } else {\n                                    fail \"Module didn't receive or react to REDISMODULE_SUBEVENT_REPL_ASYNC_LOAD_ABORTED\"\n                                }\n\n                                assert_equal [$replica dbsize] 200\n                                assert_equal value1_replica [$replica testrdb.get.before]\n                            }\n\n                            # Speed up shutdown\n                            $master config set rdb-key-save-delay 0\n                        }\n                        \"Successful\" {\n                            # Let replica finish sync with master\n                            wait_for_condition 100 100 {\n                                [s -1 master_link_status] eq \"up\"\n                            } else {\n                                fail \"Master <-> Replica didn't finish sync\"\n                            }\n\n                            test {Diskless load swapdb RedisModuleEvent_ReplAsyncLoad handling: after db loaded, can set module variable with new value} {\n                                assert_equal [$replica dbsize] 1010\n                                assert_equal value1_master [$replica testrdb.get.before]\n                            }\n                        }\n                    }\n\n                    if {$::verbose} {\n                        set end [clock clicks -milliseconds]\n                        set duration 
[expr $end - $start]\n                        puts \"test took $duration ms\"\n                    }\n                }\n            }\n        }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/timer.tcl",
    "content": "set testmodule [file normalize tests/modules/timer.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {RM_CreateTimer: a sequence of timers work} {\n        # We can't guarantee same-ms but we try using MULTI/EXEC\n        r multi\n        for {set i 0} {$i < 20} {incr i} {\n            r test.createtimer 10 timer-incr-key\n        }\n        r exec\n\n        after 500\n        assert_equal 20 [r get timer-incr-key]\n    }\n\n    test {RM_GetTimer: basic sanity} {\n        # Getting non-existing timer\n        assert_equal {} [r test.gettimer 0]\n\n        # Getting a real timer\n        set id [r test.createtimer 10000 timer-incr-key]\n        set info [r test.gettimer $id]\n\n        assert_equal \"timer-incr-key\" [lindex $info 0]\n        set remaining [lindex $info 1]\n        assert {$remaining < 10000 && $remaining > 1}\n        # Stop the timer after get timer test\n        assert_equal 1 [r test.stoptimer $id]\n    }\n\n    test {RM_StopTimer: basic sanity} {\n        r set \"timer-incr-key\" 0\n        set id [r test.createtimer 1000 timer-incr-key]\n\n        assert_equal 1 [r test.stoptimer $id]\n\n        # Wait to be sure timer doesn't execute\n        after 2000\n        assert_equal 0 [r get timer-incr-key]\n\n        # Stop non-existing timer\n        assert_equal 0 [r test.stoptimer $id]\n    }\n\n    test {Timer appears non-existing after it fires} {\n        r set \"timer-incr-key\" 0\n        set id [r test.createtimer 10 timer-incr-key]\n\n        # verify timer fired\n        after 500\n        assert_equal 1 [r get timer-incr-key]\n\n        # verify id does not exist\n        assert_equal {} [r test.gettimer $id]\n    }\n\n    test \"Module can be unloaded when timer was finished\" {\n        r set \"timer-incr-key\" 0\n        r test.createtimer 500 timer-incr-key\n\n        # Make sure the Timer has not been fired\n        assert_equal 0 [r get timer-incr-key]\n        # 
Module can not be unloaded since the timer was ongoing\n        catch {r module unload timer} err\n        assert_match {*the module holds timer that is not fired*} $err\n\n        # Wait to be sure timer has been finished\n        wait_for_condition 10 500 {\n            [r get timer-incr-key] == 1\n        } else {\n            fail \"Timer not fired\"\n        }\n\n        # Timer fired, can be unloaded now.\n        assert_equal {OK} [r module unload timer]\n    }\n\n    test \"Module can be unloaded when timer was stopped\" {\n        r module load $testmodule\n        r set \"timer-incr-key\" 0\n        set id [r test.createtimer 5000 timer-incr-key]\n\n        # Module can not be unloaded since the timer was ongoing\n        catch {r module unload timer} err\n        assert_match {*the module holds timer that is not fired*} $err\n\n        # Stop the timer\n        assert_equal 1 [r test.stoptimer $id]\n\n        # Make sure the Timer has not been fired\n        assert_equal 0 [r get timer-incr-key]\n\n        # Timer has stopped, can be unloaded now.\n        assert_equal {OK} [r module unload timer]\n    }\n}\n\n"
  },
  {
    "path": "tests/unit/moduleapi/usercall.tcl",
    "content": "set testmodule [file normalize tests/modules/usercall.so]\n\nset test_script_set \"#!lua\nredis.call('set','x',1)\nreturn 1\"\n\nset test_script_get \"#!lua\nredis.call('get','x')\nreturn 1\"\n\nstart_server {tags {\"modules usercall external:skip\"}} {\n    r module load $testmodule\n\n    # Test RedisModule_GetContextUser API: set context user, get it back, return its ACL string\n    test {test GetContextUser API - set context user and get ACL via GetContextUser} {\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n        assert_equal [r usercall.get_context_acl] \"off sanitize-payload ~* &* +@all -set\"\n    }\n\n    # Test RedisModule_GetUserUsername API: get username of the module user\n    test {test GetUserUsername API - returns module user name} {\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.get_user_username] module_user\n    }\n\n    # baseline test that module isn't doing anything weird\n    test {test module check regular redis command without user/acl} {\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n        assert_equal [r usercall.call_without_user set x 5] OK\n        assert_equal [r usercall.reset_user] OK\n    }\n\n    # call with user with acl set on it, but without testing the acl\n    test {test module check regular redis command with user} {\n        assert_equal [r set x 5] OK\n\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n        # off and sanitize-payload because module user / default value\n        assert_equal [r usercall.get_acl] \"off sanitize-payload ~* &* +@all -set\"\n\n        # doesn't fail for regular commands as just testing acl here\n        assert_equal [r usercall.call_with_user_flag {} set x 10] OK\n\n        assert_equal [r get x] 10\n        assert_equal [r 
usercall.reset_user] OK\n    }\n\n    # call with user with acl set on it, but with testing the acl in rm_call (for cmd itself)\n    test {test module check regular redis command with user and acl} {\n        assert_equal [r set x 5] OK\n\n        r ACL LOG RESET\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n        # off and sanitize-payload because module user / default value\n        assert_equal [r usercall.get_acl] \"off sanitize-payload ~* &* +@all -set\"\n\n        # fails here as testing acl in rm call\n        assert_error {*NOPERM User module_user has no permissions*} {r usercall.call_with_user_flag C set x 10}\n\n        assert_equal [r usercall.call_with_user_flag C get x] 5\n\n        # verify that new log entry added\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry username] {module_user}\n        assert_equal [dict get $entry context] {module}\n        assert_equal [dict get $entry object] {set}\n        assert_equal [dict get $entry reason] {command}\n        assert_match {*cmd=usercall.call_with_user_flag*} [dict get $entry client-info]\n\n        assert_equal [r usercall.reset_user] OK\n    }\n\n    # call with user with acl set on it, but with testing the acl in rm_call (for cmd itself)\n    test {test module check regular redis command with user and acl from blocked background thread} {\n        assert_equal [r set x 5] OK\n\n        r ACL LOG RESET\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n\n        # fails here as testing acl in rm call from a background thread\n        assert_error {*NOPERM User module_user has no permissions*} {r usercall.call_with_user_bg C set x 10}\n\n        assert_equal [r usercall.call_with_user_bg C get x] 5\n\n        # verify that new log entry added\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry 
username] {module_user}\n        assert_equal [dict get $entry context] {module}\n        assert_equal [dict get $entry object] {set}\n        assert_equal [dict get $entry reason] {command}\n        assert_match {*cmd=NULL*} [dict get $entry client-info]\n\n        assert_equal [r usercall.reset_user] OK\n    }\n\n    # baseline script test, call without user on script\n    test {test module check eval script without user} {\n        set sha_set [r script load $test_script_set]\n        set sha_get [r script load $test_script_get]\n\n        assert_equal [r usercall.call_without_user evalsha $sha_set 0] 1\n        assert_equal [r usercall.call_without_user evalsha $sha_get 0] 1\n    }\n\n    # baseline script test, call without user on script\n    test {test module check eval script with user being set, but not acl testing} {\n        set sha_set [r script load $test_script_set]\n        set sha_get [r script load $test_script_get]\n\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n        # off and sanitize-payload because module user / default value\n        assert_equal [r usercall.get_acl] \"off sanitize-payload ~* &* +@all -set\"\n\n        # passes as not checking ACL\n        assert_equal [r usercall.call_with_user_flag {} evalsha $sha_set 0] 1\n        assert_equal [r usercall.call_with_user_flag {} evalsha $sha_get 0] 1\n    }\n\n    # call with user on script (without rm_call acl check) to ensure user carries through to script execution\n    # we already tested the check in rm_call above, here we are checking the script itself will enforce ACL\n    test {test module check eval script with user and acl} {\n        set sha_set [r script load $test_script_set]\n        set sha_get [r script load $test_script_get]\n\n        r ACL LOG RESET\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n\n        # fails here in 
script, as rm_call will permit the eval call\n        catch {r usercall.call_with_user_flag C evalsha $sha_set 0} e\n        assert_match {*ERR ACL failure in script*} $e\n\n        assert_equal [r usercall.call_with_user_flag C evalsha $sha_get 0] 1\n\n        # verify that new log entry added\n        set entry [lindex [r ACL LOG] 0]\n        assert_equal [dict get $entry username] {module_user}\n        assert_equal [dict get $entry context] {lua}\n        assert_equal [dict get $entry object] {set}\n        assert_equal [dict get $entry reason] {command}\n        assert_match {*cmd=usercall.call_with_user_flag*} [dict get $entry client-info]\n    }\n\n    test {server not crashing when MONITOR is fed from spawned thread} {\n        set rd [redis_deferring_client]\n        $rd monitor\n\n        r ACL LOG RESET\n        assert_equal [r usercall.reset_user] OK\n        assert_equal [r usercall.add_to_acl \"~* &* +@all -set\"] OK\n\n        r flushdb\n        r set x x\n\n        # This is enough. 
This checks that we don't crash inside\n        # updateClientMemUsageAndBucket\n        assert_equal x [r usercall.call_with_user_bg C get x]\n\n        $rd close\n    }\n\n    start_server {tags {\"wait aof network external:skip\"}} {\n        set slave [srv 0 client]\n        set slave_host [srv 0 host]\n        set slave_port [srv 0 port]\n        set slave_pid [srv 0 pid]\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n\n        $master config set appendonly yes\n        $master config set appendfsync everysec\n        $slave config set appendonly yes\n        $slave config set appendfsync everysec\n\n        test {Setup slave} {\n            $slave slaveof $master_host $master_port\n            wait_for_condition 50 100 {\n                [s 0 master_link_status] eq {up}\n            } else {\n                fail \"Replication not started.\"\n            }\n        }\n\n        test {test module replicate only to replicas and WAITAOF} {\n            $master set x 1\n            assert_equal [$master waitaof 1 1 10000] {1 1}\n            $master usercall.call_with_user_flag A! config set loglevel notice\n            # Make sure WAITAOF doesn't hang\n            assert_equal [$master waitaof 1 1 10000] {1 1}\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/moduleapi/zset.tcl",
    "content": "set testmodule [file normalize tests/modules/zset.so]\n\nstart_server {tags {\"modules external:skip\"}} {\n    r module load $testmodule\n\n    test {Module zset rem} {\n        r del k\n        r zadd k 100 hello 200 world\n        assert_equal 1 [r zset.rem k hello]\n        assert_equal 0 [r zset.rem k hello]\n        assert_equal 1 [r exists k]\n        # Check that removing the last element deletes the key\n        assert_equal 1 [r zset.rem k world]\n        assert_equal 0 [r exists k]\n    }\n\n    test {Module zset add} {\n        r del k\n        # Check that failure does not create empty key\n        assert_error \"ERR ZsetAdd failed\" {r zset.add k nan hello}\n        assert_equal 0 [r exists k]\n\n        r zset.add k 100 hello\n        assert_equal {hello 100} [r zrange k 0 -1 withscores]\n    }\n\n    test {Module zset incrby} {\n        r del k\n        # Check that failure does not create empty key\n        assert_error \"ERR ZsetIncrby failed\" {r zset.incrby k hello nan}\n        assert_equal 0 [r exists k]\n\n        r zset.incrby k hello 100\n        assert_equal {hello 100} [r zrange k 0 -1 withscores]\n    }\n\n    test {Module zset - KEYSIZES is updated as expected (like test at hash.tcl)} {\n        proc run_cmd_verify_hist {cmd expOutput {retries 1}} {\n            proc K {} {return [string map { \"db0_distrib_zsets_items\" \"db0_ZSET\" \"# Keysizes\" \"\" \" \" \"\" \"\\n\" \"\" \"\\r\" \"\" } [r info keysizes] ]}\n            uplevel 1 $cmd    \n            wait_for_condition 50 $retries {\n                $expOutput eq [K]\n            } else { fail \"Expected: \\n`$expOutput`\\n Actual:\\n`[K]`.\\nFailed after command: $cmd\" }\n        }\n        \n        r select 0\n        \n        #RedisModule_ZsetAdd, RedisModule_ZsetRem\n        run_cmd_verify_hist {r FLUSHALL} {}\n        run_cmd_verify_hist {r zset.add k 100 hello} {db0_ZSET:1=1}\n        run_cmd_verify_hist {r zset.add k 101 bye} {db0_ZSET:2=1}\n        
run_cmd_verify_hist {r zset.rem k hello} {db0_ZSET:1=1}\n        run_cmd_verify_hist {r zset.rem k bye} {}\n        \n        #RM_ZsetIncrby\n        run_cmd_verify_hist {r FLUSHALL} {}\n        run_cmd_verify_hist {r zset.incrby k hello 100} {db0_ZSET:1=1}\n        run_cmd_verify_hist {r zset.incrby k hello 100} {db0_ZSET:1=1}\n        run_cmd_verify_hist {r zset.rem k hello} {}\n\n        # Check lazy expire\n        r debug set-active-expire 0\n        run_cmd_verify_hist {r zset.add k 100 hello} {db0_ZSET:1=1}\n        run_cmd_verify_hist {r pexpire k 2} {db0_ZSET:1=1}\n        run_cmd_verify_hist {after 5} {db0_ZSET:1=1}\n        r debug set-active-expire 1\n        run_cmd_verify_hist {after 5} {} 50\n    }\n\n    test {Module zset DELALL functionality} {\n        # Clean up any existing keys\n        r flushall\n\n        # Create some zsets and other types of keys\n        r zadd zset1 100 hello 200 world\n        r zadd zset2 300 foo 400 bar\n        r zadd zset3 500 baz\n        r set string1 \"value1\"\n        r hset hash1 field1 value1\n        r lpush list1 item1\n\n        # Verify we have the expected keys\n        assert_equal 6 [r dbsize]\n        assert_equal 3 [llength [r keys zset*]]\n\n        # Run zset.delall\n        set deleted [r zset.delall]\n        assert_equal 3 $deleted\n\n        # Verify only zsets were deleted\n        assert_equal 3 [r dbsize]\n        assert_equal 0 [llength [r keys zset*]]\n        assert_equal 1 [r exists string1]\n        assert_equal 1 [r exists hash1]\n        assert_equal 1 [r exists list1]\n\n        # Test with no zsets\n        set deleted [r zset.delall]\n        assert_equal 0 $deleted\n        assert_equal 3 [r dbsize]\n    }\n\n    test {Module zset DELALL not in transaction} {\n        set repl [attach_to_replication_stream]\n        r zadd z1 1 e1\n        r zadd z2 1 e1\n        r zset.delall\n        assert_replication_stream $repl {\n            {select *}\n            {zadd z1 1 e1}\n          
  {zadd z2 1 e1}\n            {del z*}\n            {del z*}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test \"Unload the module - zset\" {\n        assert_equal {OK} [r module unload zset]\n    }\n}\n"
  },
  {
    "path": "tests/unit/multi.tcl",
    "content": "proc wait_for_dbsize {size} {\n    set r2 [redis_client]\n    wait_for_condition 50 100 {\n        [$r2 dbsize] == $size\n    } else {\n        fail \"Target dbsize not reached\"\n    }\n    $r2 close\n}\n\nstart_server {tags {\"multi\"}} {\n    test {MULTI / EXEC basics} {\n        r del mylist\n        r rpush mylist a\n        r rpush mylist b\n        r rpush mylist c\n        r multi\n        set v1 [r lrange mylist 0 -1]\n        set v2 [r ping]\n        set v3 [r exec]\n        list $v1 $v2 $v3\n    } {QUEUED QUEUED {{a b c} PONG}}\n\n    test {DISCARD} {\n        r del mylist\n        r rpush mylist a\n        r rpush mylist b\n        r rpush mylist c\n        r multi\n        set v1 [r del mylist]\n        set v2 [r discard]\n        set v3 [r lrange mylist 0 -1]\n        list $v1 $v2 $v3\n    } {QUEUED OK {a b c}}\n\n    test {Nested MULTI are not allowed} {\n        set err {}\n        r multi\n        catch {[r multi]} err\n        r exec\n        set _ $err\n    } {*ERR MULTI*}\n\n    test {MULTI where commands alter argc/argv} {\n        r sadd myset a\n        r multi\n        r spop myset\n        list [r exec] [r exists myset]\n    } {a 0}\n\n    test {WATCH inside MULTI is not allowed} {\n        set err {}\n        r multi\n        catch {[r watch x]} err\n        r exec\n        set _ $err\n    } {*ERR WATCH*}\n\n    test {EXEC fails if there are errors while queueing commands #1} {\n        r del foo1{t} foo2{t}\n        r multi\n        r set foo1{t} bar1\n        catch {r non-existing-command}\n        r set foo2{t} bar2\n        catch {r exec} e\n        assert_match {EXECABORT*} $e\n        list [r exists foo1{t}] [r exists foo2{t}]\n    } {0 0}\n\n    test {EXEC fails if there are errors while queueing commands #2} {\n        set rd [redis_deferring_client]\n        r del foo1{t} foo2{t}\n        r multi\n        r set foo1{t} bar1\n        $rd config set maxmemory 1\n        assert  {[$rd read] eq {OK}}\n        catch {r 
lpush mylist{t} myvalue}\n        $rd config set maxmemory 0\n        assert  {[$rd read] eq {OK}}\n        r set foo2{t} bar2\n        catch {r exec} e\n        assert_match {EXECABORT*} $e\n        $rd close\n        list [r exists foo1{t}] [r exists foo2{t}]\n    } {0 0} {needs:config-maxmemory}\n\n    test {If EXEC aborts, the client MULTI state is cleared} {\n        r del foo1{t} foo2{t}\n        r multi\n        r set foo1{t} bar1\n        catch {r non-existing-command}\n        r set foo2{t} bar2\n        catch {r exec} e\n        assert_match {EXECABORT*} $e\n        r ping\n    } {PONG}\n\n    test {EXEC works on WATCHed key not modified} {\n        r watch x{t} y{t} z{t}\n        r watch k{t}\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {EXEC fail on WATCHed key modified (1 key of 1 watched)} {\n        r set x 30\n        r watch x\n        r set x 40\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {EXEC fail on WATCHed key modified (1 key of 5 watched)} {\n        r set x{t} 30\n        r watch a{t} b{t} x{t} k{t} z{t}\n        r set x{t} 40\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {EXEC fail on WATCHed key modified by SORT with STORE even if the result is empty} {\n        r flushdb\n        r lpush foo bar\n        r watch foo\n        r sort emptylist store foo\n        r multi\n        r ping\n        r exec\n    } {} {cluster:skip}\n\n    test {EXEC fail on lazy expired WATCHed key} {\n        r del key\n        r debug set-active-expire 0\n\n        for {set j 0} {$j < 10} {incr j} {\n            r set key 1 px 100\n            r watch key\n            after 101\n            r multi\n            r incr key\n\n            set res [r exec]\n            if {$res eq {}} break\n        }\n        if {$::verbose} { puts \"EXEC fail on lazy expired WATCHed key attempts: $j\" }\n\n        r debug set-active-expire 1\n        set _ $res\n    } {} {needs:debug}\n\n    test 
{WATCH stale keys should not fail EXEC} {\n        r del x\n        r debug set-active-expire 0\n        r set x foo px 1\n        after 2\n        r watch x\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {Delete WATCHed stale keys should not fail EXEC} {\n        r del x\n        r debug set-active-expire 0\n        r set x foo px 1\n        after 2\n        r watch x\n        # EXISTS triggers lazy expiry/deletion\n        assert_equal 0 [r exists x]\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {FLUSHDB while watching stale keys should not fail EXEC} {\n        r del x\n        r debug set-active-expire 0\n        r set x foo px 1\n        after 2\n        r watch x\n        r flushdb\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test {After successful EXEC key is no longer watched} {\n        r set x 30\n        r watch x\n        r multi\n        r ping\n        r exec\n        r set x 40\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {After failed EXEC key is no longer watched} {\n        r set x 30\n        r watch x\n        r set x 40\n        r multi\n        r ping\n        r exec\n        r set x 40\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {It is possible to UNWATCH} {\n        r set x 30\n        r watch x\n        r set x 40\n        r unwatch\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {UNWATCH when there is nothing watched works as expected} {\n        r unwatch\n    } {OK}\n\n    test {FLUSHALL is able to touch the watched keys} {\n        r set x 30\n        r watch x\n        r flushall\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {FLUSHALL 
does not touch non affected keys} {\n        r del x\n        r watch x\n        r flushall\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {FLUSHDB is able to touch the watched keys} {\n        r set x 30\n        r watch x\n        r flushdb\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {FLUSHDB does not touch non affected keys} {\n        r del x\n        r watch x\n        r flushdb\n        r multi\n        r ping\n        r exec\n    } {PONG}\n\n    test {SWAPDB is able to touch the watched keys that exist} {\n        r flushall\n        r select 0\n        r set x 30\n        r watch x ;# make sure x (set to 30) doesn't change (SWAPDB will \"delete\" it)\n        r swapdb 0 1\n        r multi\n        r ping\n        r exec\n    } {} {singledb:skip}\n\n    test {SWAPDB is able to touch the watched keys that do not exist} {\n        r flushall\n        r select 1\n        r set x 30\n        r select 0\n        r watch x ;# make sure the key x (currently missing) doesn't change (SWAPDB will create it)\n        r swapdb 0 1\n        r multi\n        r ping\n        r exec\n    } {} {singledb:skip}\n\n    test {SWAPDB does not touch watched stale keys} {\n        r flushall\n        r select 1\n        r debug set-active-expire 0\n        r set x foo px 1\n        after 2\n        r watch x\n        r swapdb 0 1 ; # expired key replaced with no key => no change\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug set-active-expire 1\n    } {OK} {singledb:skip needs:debug}\n\n    test {SWAPDB does not touch non-existing key replaced with stale key} {\n        r flushall\n        r select 0\n        r debug set-active-expire 0\n        r set x foo px 1\n        after 2\n        r select 1\n        r watch x\n        r swapdb 0 1 ; # no key replaced with expired key => no change\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug 
set-active-expire 1\n    } {OK} {singledb:skip needs:debug}\n\n    test {SWAPDB does not touch stale key replaced with another stale key} {\n        r flushall\n        r debug set-active-expire 0\n        r select 1\n        r set x foo px 1\n        r select 0\n        r set x bar px 1\n        after 2\n        r select 1\n        r watch x\n        r swapdb 0 1 ; # expired key replaced with another expired key => no change\n        r multi\n        r ping\n        assert_equal {PONG} [r exec]\n        r debug set-active-expire 1\n    } {OK} {singledb:skip needs:debug}\n\n    test {WATCH is able to remember the DB a key belongs to} {\n        r select 5\n        r set x 30\n        r watch x\n        r select 1\n        r set x 10\n        r select 5\n        r multi\n        r ping\n        set res [r exec]\n        # Restore original DB\n        r select 9\n        set res\n    } {PONG} {singledb:skip}\n\n    test {WATCH will consider touched keys target of EXPIRE} {\n        r del x\n        r set x foo\n        r watch x\n        r expire x 10\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {WATCH will consider touched expired keys} {\n        r flushall\n        r del x\n        r set x foo\n        r expire x 1\n        r watch x\n\n        # Wait for the keys to expire.\n        wait_for_dbsize 0\n\n        r multi\n        r ping\n        r exec\n    } {}\n\n    test {DISCARD should clear the WATCH dirty flag on the client} {\n        r watch x\n        r set x 10\n        r multi\n        r discard\n        r multi\n        r incr x\n        r exec\n    } {11}\n\n    test {DISCARD should UNWATCH all the keys} {\n        r watch x\n        r set x 10\n        r multi\n        r discard\n        r set x 10\n        r multi\n        r incr x\n        r exec\n    } {11}\n\n    test {MULTI / EXEC is not propagated (single write command)} {\n        set repl [attach_to_replication_stream]\n        r multi\n        r set foo bar\n        r exec\n      
  r set foo2 bar\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n            {set foo2 bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI / EXEC is propagated correctly (multiple commands)} {\n        set repl [attach_to_replication_stream]\n        r multi\n        r set foo{t} bar\n        r get foo{t}\n        r set foo2{t} bar2\n        r get foo2{t}\n        r set foo3{t} bar3\n        r get foo3{t}\n        r exec\n\n        assert_replication_stream $repl {\n            {multi}\n            {select *}\n            {set foo{t} bar}\n            {set foo2{t} bar2}\n            {set foo3{t} bar3}\n            {exec}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI / EXEC is propagated correctly (multiple commands with SELECT)} {\n        set repl [attach_to_replication_stream]\n        r multi\n        r select 1\n        r set foo{t} bar\n        r get foo{t}\n        r select 2\n        r set foo2{t} bar2\n        r get foo2{t}\n        r select 3\n        r set foo3{t} bar3\n        r get foo3{t}\n        r exec\n\n        assert_replication_stream $repl {\n            {multi}\n            {select *}\n            {set foo{t} bar}\n            {select *}\n            {set foo2{t} bar2}\n            {select *}\n            {set foo3{t} bar3}\n            {exec}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl singledb:skip}\n\n    test {MULTI / EXEC is propagated correctly (empty transaction)} {\n        set repl [attach_to_replication_stream]\n        r multi\n        r exec\n        r set foo bar\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI / EXEC is propagated correctly (read-only commands)} {\n        r set foo value1\n        set repl 
[attach_to_replication_stream]\n        r multi\n        r get foo\n        r exec\n        r set foo value2\n        assert_replication_stream $repl {\n            {select *}\n            {set foo value2}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI / EXEC is propagated correctly (write command, no effect)} {\n        r del bar\n        r del foo\n        set repl [attach_to_replication_stream]\n        r multi\n        r del foo\n        r exec\n\n        # add another command so that when we see it we know multi-exec wasn't\n        # propagated\n        r incr foo\n\n        assert_replication_stream $repl {\n            {select *}\n            {incr foo}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI / EXEC with REPLICAOF} {\n        # This test verifies that if we demote a master to replica inside a transaction, the\n        # entire transaction is not propagated to the already-connected replica\n        set repl [attach_to_replication_stream]\n        r set foo bar\n        r multi\n        r set foo2 bar\n        r replicaof localhost 9999\n        r set foo3 bar\n        r exec\n        catch {r set foo4 bar} e\n        assert_match {READONLY*} $e\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n        }\n        r replicaof no one\n    } {OK} {needs:repl cluster:skip}\n\n    test {DISCARD should not fail during OOM} {\n        set rd [redis_deferring_client]\n        $rd config set maxmemory 1\n        assert  {[$rd read] eq {OK}}\n        r multi\n        catch {r set x 1} e\n        assert_match {OOM*} $e\n        r discard\n        $rd config set maxmemory 0\n        assert  {[$rd read] eq {OK}}\n        $rd close\n        r ping\n    } {PONG} {needs:config-maxmemory}\n\n    test {MULTI and script timeout} {\n        # check that if MULTI arrives during timeout, it is either refused, or\n        # allowed to pass, 
and we don't end up executing half of the transaction\n        set rd1 [redis_deferring_client]\n        set r2 [redis_client]\n        r config set lua-time-limit 10\n        r set xx 1\n        $rd1 eval {while true do end} 0\n        after 200\n        catch { $r2 multi; } e\n        catch { $r2 incr xx; } e\n        r script kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        catch { $r2 incr xx; } e\n        catch { $r2 exec; } e\n        assert_match {EXECABORT*previous errors*} $e\n        set xx [r get xx]\n        # make sure that either the whole transaction passed or none of it (we actually expect none)\n        assert { $xx == 1 || $xx == 3}\n        # check that the connection is no longer in multi state\n        set pong [$r2 ping asdf]\n        assert_equal $pong \"asdf\"\n        $rd1 close; $r2 close\n    }\n\n    test {EXEC and script timeout} {\n        # check that if EXEC arrives during timeout, we don't end up executing\n        # half of the transaction, and also that we exit the multi state\n        set rd1 [redis_deferring_client]\n        set r2 [redis_client]\n        r config set lua-time-limit 10\n        r set xx 1\n        catch { $r2 multi; } e\n        catch { $r2 incr xx; } e\n        $rd1 eval {while true do end} 0\n        after 200\n        catch { $r2 incr xx; } e\n        catch { $r2 exec; } e\n        assert_match {EXECABORT*BUSY*} $e\n        r script kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        set xx [r get xx]\n        # make sure that either the whole transaction passed or none of it (we actually expect none)\n        assert { $xx == 1 || $xx == 3}\n        # check that the connection is no longer in multi state\n        set pong [$r2 ping asdf]\n        assert_equal $pong \"asdf\"\n        $rd1 close; $r2 close\n    }\n\n    test {MULTI-EXEC body and script timeout} {\n        # check that we don't run an incomplete transaction due to some 
commands\n        # arriving during busy script\n        set rd1 [redis_deferring_client]\n        set r2 [redis_client]\n        r config set lua-time-limit 10\n        r set xx 1\n        catch { $r2 multi; } e\n        catch { $r2 incr xx; } e\n        $rd1 eval {while true do end} 0\n        after 200\n        catch { $r2 incr xx; } e\n        r script kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        catch { $r2 exec; } e\n        assert_match {EXECABORT*previous errors*} $e\n        set xx [r get xx]\n        # make sure that either the whole transaction passed or none of it (we actually expect none)\n        assert { $xx == 1 || $xx == 3}\n        # check that the connection is no longer in multi state\n        set pong [$r2 ping asdf]\n        assert_equal $pong \"asdf\"\n        $rd1 close; $r2 close\n    }\n\n    test {just EXEC and script timeout} {\n        # check that if EXEC arrives during timeout, we don't end up executing\n        # actual commands during busy script, and also that we exit the multi state\n        set rd1 [redis_deferring_client]\n        set r2 [redis_client]\n        r config set lua-time-limit 10\n        r set xx 1\n        catch { $r2 multi; } e\n        catch { $r2 incr xx; } e\n        $rd1 eval {while true do end} 0\n        after 200\n        catch { $r2 exec; } e\n        assert_match {EXECABORT*BUSY*} $e\n        r script kill\n        after 200 ; # Give some time to Lua to call the hook again...\n        set xx [r get xx]\n        # make sure we didn't execute the transaction\n        assert { $xx == 1}\n        # check that the connection is no longer in multi state\n        set pong [$r2 ping asdf]\n        assert_equal $pong \"asdf\"\n        $rd1 close; $r2 close\n    }\n\n    test {exec with write commands and state change} {\n        # check that an EXEC that contains write commands fails if the server state changed since they were queued\n        set r1 [redis_client]\n        r set xx 1\n  
      r multi\n        r incr xx\n        $r1 config set min-replicas-to-write 2\n        catch {r exec} e\n        assert_match {*EXECABORT*NOREPLICAS*} $e\n        set xx [r get xx]\n        # make sure that the INCR wasn't executed\n        assert { $xx == 1}\n        $r1 config set min-replicas-to-write 0\n        $r1 close\n    } {0} {needs:repl}\n\n    test {exec with read commands and stale replica state change} {\n        # check that an EXEC that contains read commands fails if the server state changed since they were queued\n        r config set replica-serve-stale-data no\n        set r1 [redis_client]\n        r set xx 1\n\n        # check that GET and PING are disallowed on stale replica, even if the replica becomes stale only after queuing.\n        r multi\n        r get xx\n        $r1 replicaof localhost 0\n        catch {r exec} e\n        assert_match {*EXECABORT*MASTERDOWN*} $e\n\n        # reset\n        $r1 replicaof no one\n\n        r multi\n        r ping\n        $r1 replicaof localhost 0\n        catch {r exec} e\n        assert_match {*EXECABORT*MASTERDOWN*} $e\n\n        # check that when replica is not stale, GET is allowed\n        # while we're at it, let's check that multi is allowed on stale replica too\n        r multi\n        $r1 replicaof no one\n        r get xx\n        set xx [r exec]\n        # make sure that the GET was executed\n        assert { $xx == 1 }\n        $r1 close\n    } {0} {needs:repl cluster:skip}\n\n    test {EXEC with only read commands should not be rejected when OOM} {\n        set r2 [redis_client]\n\n        r set x value\n        r multi\n        r get x\n        r ping\n\n        # enforcing OOM\n        $r2 config set maxmemory 1\n\n        # finish the multi transaction with exec\n        assert { [r exec] == {value PONG} }\n\n        # releasing OOM\n        $r2 config set maxmemory 0\n        $r2 close\n    } {0} {needs:config-maxmemory}\n\n    test {EXEC with at least one use-memory command should 
fail} {\n        set r2 [redis_client]\n\n        r multi\n        r set x 1\n        r get x\n\n        # enforcing OOM\n        $r2 config set maxmemory 1\n\n        # finish the multi transaction with exec\n        catch {r exec} e\n        assert_match {EXECABORT*OOM*} $e\n\n        # releasing OOM\n        $r2 config set maxmemory 0\n        $r2 close\n    } {0} {needs:config-maxmemory}\n\n    test {Blocking commands ignore the timeout} {\n        r xgroup create s{t} g $ MKSTREAM\n\n        set m [r multi]\n        r blpop empty_list{t} 0\n        r brpop empty_list{t} 0\n        r brpoplpush empty_list1{t} empty_list2{t} 0\n        r blmove empty_list1{t} empty_list2{t} LEFT LEFT 0\n        r bzpopmin empty_zset{t} 0\n        r bzpopmax empty_zset{t} 0\n        r xread BLOCK 0 STREAMS s{t} $\n        r xreadgroup group g c BLOCK 0 STREAMS s{t} >\n        set res [r exec]\n\n        list $m $res\n    } {OK {{} {} {} {} {} {} {} {}}}\n\n    test {MULTI propagation of PUBLISH} {\n        set repl [attach_to_replication_stream]\n\n        r multi\n        r publish bla bla\n        r exec\n\n        assert_replication_stream $repl {\n            {select *}\n            {publish bla bla}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl cluster:skip}\n\n    test {MULTI propagation of SCRIPT LOAD} {\n        set repl [attach_to_replication_stream]\n\n        # make sure that SCRIPT LOAD inside MULTI isn't propagated\n        r multi\n        r script load {redis.call('set', KEYS[1], 'foo')}\n        r set foo bar\n        set res [r exec]\n        set sha [lindex $res 0]\n\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI propagation of EVAL} {\n        set repl [attach_to_replication_stream]\n\n        # make sure that the effects of EVAL inside MULTI are propagated in a transaction\n        r multi\n        r 
eval {redis.call('set', KEYS[1], 'bar')} 1 bar\n        r exec\n\n        assert_replication_stream $repl {\n            {select *}\n            {set bar bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MULTI propagation of SCRIPT FLUSH} {\n        set repl [attach_to_replication_stream]\n\n        # make sure that SCRIPT FLUSH isn't propagated\n        r multi\n        r script flush\n        r set foo bar\n        r exec\n\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    tags {\"stream\"} {\n        test {MULTI propagation of XREADGROUP} {\n            set repl [attach_to_replication_stream]\n\n            r XADD mystream * foo bar\n            r XADD mystream * foo2 bar2\n            r XADD mystream * foo3 bar3\n            r XGROUP CREATE mystream mygroup 0\n\n            # make sure the XCLAIM (propagated by XREADGROUP) is indeed inside MULTI/EXEC\n            r multi\n            r XREADGROUP GROUP mygroup consumer1 COUNT 2 STREAMS mystream \">\"\n            r XREADGROUP GROUP mygroup consumer1 STREAMS mystream \">\"\n            r exec\n\n            assert_replication_stream $repl {\n                {select *}\n                {xadd *}\n                {xadd *}\n                {xadd *}\n                {xgroup CREATE *}\n                {multi}\n                {xclaim *}\n                {xclaim *}\n                {xgroup SETID * ENTRIESREAD *}\n                {xclaim *}\n                {xgroup SETID * ENTRIESREAD *}\n                {exec}\n            }\n            close_replication_stream $repl\n        } {} {needs:repl}\n    }\n\n    foreach {cmd} {SAVE SHUTDOWN} {\n        test \"MULTI with $cmd\" {\n            r del foo\n            r multi\n            r set foo bar\n            catch {r $cmd} e1\n            catch {r exec} e2\n            assert_match {*Command not allowed 
inside a transaction*} $e1\n            assert_match {EXECABORT*} $e2\n            r get foo\n        } {}\n    }\n\n    test \"MULTI with BGREWRITEAOF\" {\n        set forks [s total_forks]\n        r multi\n        r set foo bar\n        r BGREWRITEAOF\n        set res [r exec]\n        assert_match \"*rewriting scheduled*\" [lindex $res 1]\n        wait_for_condition 50 100 {\n            [s total_forks] > $forks\n        } else {\n            fail \"aofrw didn't start\"\n        }\n        waitForBgrewriteaof r\n    } {} {external:skip}\n\n    test \"MULTI with config set appendonly\" {\n        set lines [count_log_lines 0]\n        set forks [s total_forks]\n        r multi\n        r set foo bar\n        r config set appendonly yes\n        r exec\n        verify_log_message 0 \"*AOF background was scheduled*\" $lines\n        wait_for_condition 50 100 {\n            [s total_forks] > $forks\n        } else {\n            fail \"aofrw didn't start\"\n        }\n        waitForBgrewriteaof r\n    } {} {external:skip}\n\n    test \"MULTI with config error\" {\n        r multi\n        r set foo bar\n        r config set maxmemory bla\n\n        # letting the redis parser read it, it'll throw an exception instead of\n        # replying with an array that contains an error, so we switch to reading\n        # raw RESP instead\n        r readraw 1\n\n        set res [r exec]\n        assert_equal $res \"*2\"\n        set res [r read]\n        assert_equal $res \"+OK\"\n        set res [r read]\n        r readraw 0\n        set _ $res\n    } {*CONFIG SET failed*}\n    \n    test \"Flushall while watching several keys by one client\" {\n        r flushall\n        r mset a{t} a b{t} b\n        r watch b{t} a{t}\n        r flushall\n        r ping\n     }\n}\n\nstart_server {overrides {appendonly {yes} appendfilename {appendonly.aof} appendfsync always} tags {external:skip}} {\n    test {MULTI with FLUSHALL and AOF} {\n        set aof [get_last_incr_aof_path r]\n        
r multi\n        r set foo bar\n        r flushall\n        r exec\n        assert_aof_content $aof {\n            {multi}\n            {select *}\n            {set *}\n            {flushall}\n            {exec}\n        }\n        r get foo\n    } {}\n}\n"
  },
  {
    "path": "tests/unit/networking.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2025-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nsource tests/support/cli.tcl\n\ntest {CONFIG SET port number} {\n    start_server {} {\n        if {$::tls} { set port_cfg tls-port} else { set port_cfg port }\n\n        # available port\n        set avail_port [find_available_port $::baseport $::portcount]\n        set rd [redis [srv 0 host] [srv 0 port] 0 $::tls]\n        $rd CONFIG SET $port_cfg $avail_port\n        $rd close\n        set rd [redis [srv 0 host] $avail_port 0 $::tls]\n        $rd PING\n\n        # already inuse port\n        catch {$rd CONFIG SET $port_cfg $::test_server_port} e\n        assert_match {*Unable to listen on this port*} $e\n        $rd close\n\n        # make sure server still listening on the previous port\n        set rd [redis [srv 0 host] $avail_port 0 $::tls]\n        $rd PING\n        $rd close\n    }\n} {} {external:skip}\n\ntest {CONFIG SET bind address} {\n    start_server {} {\n        # non-valid address\n        catch {r CONFIG SET bind \"999.999.999.999\"} e\n        assert_match {*Failed to bind to specified addresses*} $e\n\n        # make sure server still bound to the previous address\n        set rd [redis [srv 0 host] [srv 0 port] 0 $::tls]\n        $rd PING\n        $rd close\n    }\n} {} {external:skip}\n\n# Attempt to connect to host using a client bound to bindaddr,\n# and return a non-zero value if successful within specified\n# millisecond timeout, or zero otherwise.\nproc test_loopback {host bindaddr timeout} {\n    if {[exec uname] != {Linux}} {\n        return 0\n    }\n\n    after 
$timeout set ::test_loopback_state timeout\n    if {[catch {\n        set server_sock [socket -server accept 0]\n        set port [lindex [fconfigure $server_sock -sockname] 2] } err]} {\n            return 0\n    }\n\n    proc accept {channel clientaddr clientport} {\n        set ::test_loopback_state \"connected\"\n        close $channel\n    }\n\n    if {[catch {set client_sock [socket -async -myaddr $bindaddr $host $port]} err]} {\n        puts \"test_loopback: Client connect failed: $err\"\n    } else {\n        close $client_sock\n    }\n\n    vwait ::test_loopback_state\n    close $server_sock\n\n    return [expr {$::test_loopback_state == {connected}}]\n}\n\ntest {CONFIG SET bind-source-addr} {\n    if {[test_loopback 127.0.0.1 127.0.0.2 1000]} {\n        start_server {} {\n            start_server {} {\n                set replica [srv 0 client]\n                set master [srv -1 client]\n\n                $master config set protected-mode no\n\n                $replica config set bind-source-addr 127.0.0.2\n                $replica replicaof [srv -1 host] [srv -1 port]\n\n                wait_for_condition 50 100 {\n                    [s 0 master_link_status] eq {up}\n                } else {\n                    fail \"Replication not started.\"\n                }\n\n                assert_match {*ip=127.0.0.2*} [s -1 slave0]\n            }\n        }\n    } else {\n        if {$::verbose} { puts \"Skipping bind-source-addr test.\" }\n    }\n} {} {external:skip}\n\nstart_server {config \"minimal.conf\" tags {\"external:skip\"}} {\n    test {Default bind address configuration handling} {\n        # Default is explicit and sane\n        assert_equal \"* -::*\" [lindex [r CONFIG GET bind] 1]\n\n        # CONFIG REWRITE acknowledges this as a default\n        r CONFIG REWRITE\n        assert_equal 0 [count_message_lines [srv 0 config_file] bind]\n\n        # Removing the bind address works\n        r CONFIG SET bind \"\"\n        assert_equal \"\" [lindex 
[r CONFIG GET bind] 1]\n\n        # No additional clients can connect\n        catch {redis_client} err\n        assert_match {*connection refused*} $err\n\n        # CONFIG REWRITE handles empty bindaddr\n        r CONFIG REWRITE\n        assert_equal 1 [count_message_lines [srv 0 config_file] bind]\n\n        # Make sure we're able to restart\n        restart_server 0 0 0 0\n\n        # Make sure bind parameter is as expected and server handles binding\n        # accordingly.\n        # (it seems that rediscli_exec behaves differently in RESP3, possibly\n        # because CONFIG GET returns a dict instead of a list so redis-cli emits\n        # it in a single line)\n        if {$::force_resp3} {\n            assert_equal {{bind }} [rediscli_exec 0 config get bind]\n        } else {\n            assert_equal {bind {}} [rediscli_exec 0 config get bind]\n        }\n        catch {reconnect 0} err\n        assert_match {*connection refused*} $err\n\n        assert_equal {OK} [rediscli_exec 0 config set bind *]\n        reconnect 0\n        r ping\n    } {PONG}\n\n    test {Protected mode works as expected} {\n        # Get a non-loopback address of this instance for this test.\n        set myaddr [get_nonloopback_addr]\n        if {$myaddr != \"\" && ![string match {127.*} $myaddr]} {\n            # Non-loopback client should fail by default\n            set r2 [get_nonloopback_client]\n            catch {$r2 ping} err\n            assert_match {*DENIED*} $err\n\n            # Bind configuration should not matter\n            assert_equal {OK} [r config set bind \"*\"]\n            set r2 [get_nonloopback_client]\n            catch {$r2 ping} err\n            assert_match {*DENIED*} $err\n\n            # Setting a password should disable protected mode\n            assert_equal {OK} [r config set requirepass \"secret\"]\n            set r2 [redis $myaddr [srv 0 \"port\"] 0 $::tls]\n            assert_equal {OK} [$r2 auth secret]\n            assert_equal {PONG} [$r2 
ping]\n\n            # Clearing the password re-enables protected mode\n            assert_equal {OK} [r config set requirepass \"\"]\n            set r2 [redis $myaddr [srv 0 \"port\"] 0 $::tls]\n            catch {$r2 ping} err\n            assert_match {*DENIED*} $err\n\n            # Explicitly disabling protected-mode works\n            assert_equal {OK} [r config set protected-mode no]\n            set r2 [redis $myaddr [srv 0 \"port\"] 0 $::tls]\n            assert_equal {PONG} [$r2 ping]\n        }\n    }\n}\n\nstart_server {config \"minimal.conf\" tags {\"external:skip\"} overrides {enable-debug-command {yes} io-threads 2}} {\n    set server_pid [s process_id]\n    # Since each thread may perform memory prefetch independently, this test is\n    # only run when the number of IO threads is 2 to ensure deterministic results.\n    if {[r config get io-threads] eq \"io-threads 2\"} {\n        test {prefetch works as expected when killing a client from the middle of prefetch commands batch} {\n            # Create 16 (the prefetch batch size) clients\n            for {set i 0} {$i < 16} {incr i} {\n                set rd$i [redis_deferring_client]\n            }\n\n            # set a key that will later be prefetched\n            r set a 0\n\n            # Get the client ID of rd4\n            $rd4 client id\n            set rd4_id [$rd4 read]\n\n            # Create a batch of commands by suspending the server for a while\n            # before responding to the first command\n            pause_process $server_pid\n\n            # The first client will kill the fourth client\n            $rd0 client kill id $rd4_id\n\n            # Send set commands for all clients except the first\n            for {set i 1} {$i < 16} {incr i} {\n                [set rd$i] set $i $i\n                [set rd$i] flush\n            }\n\n            # Resume the server\n            resume_process $server_pid\n\n            # Read the results\n            assert_equal {1} [$rd0 read]\n            catch {$rd4 read} 
res\n            if {$res eq \"OK\"} {\n                # may be OK then an error; we cannot control the order of execution\n                catch {$rd4 read} err\n            } else {\n                set err $res\n            }\n            assert_match {I/O error reading reply} $err\n\n            # verify the prefetch stats are as expected\n            set info [r info stats]\n            set prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            assert_range $prefetch_entries 2 15; # With slower machines, the number of prefetch entries can be lower\n            set prefetch_batches [getInfoProperty $info io_threaded_total_prefetch_batches]\n            assert_range $prefetch_batches 1 7; # With slower machines, the number of batches can be higher\n\n            # verify other clients are working as expected\n            for {set i 1} {$i < 16} {incr i} {\n                if {$i != 4} { ;# 4th client was killed\n                    [set rd$i] get $i\n                    assert_equal {OK} [[set rd$i] read]\n                    assert_equal $i [[set rd$i] read]\n                }\n            }\n        }\n\n        test {prefetch works as expected when changing the batch size while executing the commands batch} {\n            # Create 16 (default prefetch batch size) clients\n            for {set i 0} {$i < 16} {incr i} {\n                set rd$i [redis_deferring_client]\n            }\n\n            # Create a batch of commands by suspending the server for a while\n            # before responding to the first command\n            pause_process $server_pid\n\n            # Send set commands for all clients; the 5th client will also change the prefetch batch size\n            for {set i 0} {$i < 16} {incr i} {\n                if {$i == 4} {\n                    [set rd$i] config set prefetch-batch-max-size 1\n                }\n                [set rd$i] set a $i\n                [set rd$i] flush\n            }\n            # Resume the 
server\n            resume_process $server_pid\n            # Read the results\n            for {set i 0} {$i < 16} {incr i} {\n                assert_equal {OK} [[set rd$i] read]\n                [set rd$i] close\n            }\n\n            # assert the configured prefetch batch size was changed\n            assert {[r config get prefetch-batch-max-size] eq \"prefetch-batch-max-size 1\"}\n        }\n \n        proc do_prefetch_batch {server_pid batch_size} {\n            # Create clients\n            for {set i 0} {$i < $batch_size} {incr i} {\n                set rd$i [redis_deferring_client]\n            }\n\n            # Suspend the server to batch the commands\n            pause_process $server_pid\n\n            # Send commands from all clients\n            for {set i 0} {$i < $batch_size} {incr i} {\n                [set rd$i] set a $i\n                [set rd$i] flush\n            }\n\n            # Resume the server to process the batch\n            resume_process $server_pid\n\n            # Verify responses\n            for {set i 0} {$i < $batch_size} {incr i} {\n                assert_equal {OK} [[set rd$i] read]\n                [set rd$i] close\n            }\n        }\n\n        test {no prefetch when the batch size is set to 0} {\n            # set the batch size to 0\n            r config set prefetch-batch-max-size 0\n            # save the current value of prefetch entries\n            set info [r info stats]\n            set prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n\n            do_prefetch_batch $server_pid 16\n\n            # assert the prefetch entries did not change\n            set info [r info stats]\n            set new_prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            assert_equal $prefetch_entries $new_prefetch_entries\n        }\n\n        test {Prefetch can resume working when the configuration option is set to a non-zero value} {\n            # save the 
current value of prefetch entries\n            set info [r info stats]\n            set prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # set the batch size back to a non-zero value\n            r config set prefetch-batch-max-size 16\n\n            do_prefetch_batch $server_pid 16\n\n            # assert the prefetch entries increased\n            set info [r info stats]\n            set new_prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # With slower machines, the number of prefetch entries can be lower\n            assert_range $new_prefetch_entries [expr {$prefetch_entries + 2}] [expr {$prefetch_entries + 16}]\n        }\n\n        test {Prefetch works with batch size greater than 16 (buffer overflow regression test)} {\n            # save the current value of prefetch entries\n            set info [r info stats]\n            set prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # set the batch size to a value greater than the old hardcoded limit of 16\n            r config set prefetch-batch-max-size 64\n\n            # Create a batch with more than 16 clients to trigger the old buffer overflow\n            do_prefetch_batch $server_pid 64\n\n            # verify the prefetch entries increased\n            set info [r info stats]\n            set new_prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # With slower machines, the number of prefetch entries can be lower\n            assert_range $new_prefetch_entries [expr {$prefetch_entries + 2}] [expr {$prefetch_entries + 64}]\n        }\n\n        test {Prefetch works with maximum batch size of 128 and client number larger than batch size} {\n            # save the current value of prefetch entries\n            set info [r info stats]\n            set prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # set the batch size to the 
maximum allowed value\n            r config set prefetch-batch-max-size 128\n\n            # Create a batch with 300 clients to test the maximum limit\n            do_prefetch_batch $server_pid 300\n\n            # verify the prefetch entries increased\n            set info [r info stats]\n            set new_prefetch_entries [getInfoProperty $info io_threaded_total_prefetch_entries]\n            # With slower machines, the number of prefetch entries can be lower\n            assert_range $new_prefetch_entries [expr {$prefetch_entries + 2}] [expr {$prefetch_entries + 300}]\n        }\n    }\n}\n\nstart_server {tags {\"timeout external:skip\"}} {\n    test {Multiple clients idle timeout test} {\n        # set client timeout to 1 second\n        r config set timeout 1\n\n        # create multiple client connections\n        set clients {}\n        set num_clients 10\n\n        for {set i 0} {$i < $num_clients} {incr i} {\n            set client [redis_deferring_client]\n            $client ping\n            assert_equal \"PONG\" [$client read]\n            lappend clients $client\n        }\n        assert_equal [llength $clients] $num_clients\n\n        # wait for 2.5 seconds\n        after 2500\n\n        # try to send commands to all clients - they should all fail due to timeout\n        set disconnected_count 0\n        foreach client $clients {\n            $client ping\n            if {[catch {$client read} err]} {\n                incr disconnected_count\n                # expected error patterns for connection timeout\n                assert_match {*I/O error*} $err\n            }\n            catch {$client close}\n        }\n\n        # all clients should have been disconnected due to timeout\n        assert_equal $disconnected_count $num_clients\n\n        # redis server still works well\n        reconnect\n        assert_equal \"PONG\" [r ping]\n    }\n}\n\ntest {Pending command pool expansion and shrinking} {\n    start_server {overrides {loglevel debug 
io-threads 1} tags {external:skip}} {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        \n        # Client1 sends 16 commands in a pipeline and is blocked at the first command\n        set buf \"\"\n        append buf \"blpop mylist 0\\r\\n\"\n        for {set i 1} {$i < 16} {incr i} {\n            append buf \"set key$i value$i\\r\\n\"\n        }\n        $rd1 write $buf\n        $rd1 flush\n        wait_for_blocked_clients_count 1\n        \n        # Client2 sends 1 command; this triggers pending command pool expansion\n        # from 16 to 32, since client1 has used up all 16 commands in the pool.\n        $rd2 set bkey bvalue\n        assert_equal {OK} [$rd2 read]\n        \n        # Unblock client1, allowing it to return all pending commands back to the pool.\n        r lpush mylist unblock_value\n        assert_equal {mylist unblock_value} [$rd1 read]\n        for {set i 1} {$i < 16} {incr i} {\n            assert_equal {OK} [$rd1 read]\n        }\n        \n        # Wait for the pending command pool to shrink back to 16 due to low utilization.\n        wait_for_log_messages 0 {\"*Shrunk pending command pool: capacity 32->16*\"} 0 10 1000\n        \n        $rd1 close\n        $rd2 close\n    }\n}\n"
  },
  {
    "path": "tests/unit/obuf-limits.tcl",
    "content": "start_server {tags {\"obuf-limits external:skip logreqres:skip\"}} {\n    r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n\n    test {CONFIG SET client-output-buffer-limit} {\n        set oldval [lindex [r config get client-output-buffer-limit] 1]\n\n        catch {r config set client-output-buffer-limit \"wrong number\"} e\n        assert_match {*Wrong*arguments*} $e\n\n        catch {r config set client-output-buffer-limit \"invalid_class 10mb 10mb 60\"} e\n        assert_match {*Invalid*client*class*} $e\n        catch {r config set client-output-buffer-limit \"master 10mb 10mb 60\"} e\n        assert_match {*Invalid*client*class*} $e\n\n        catch {r config set client-output-buffer-limit \"normal 10mbs 10mb 60\"} e\n        assert_match {*Error*hard*} $e\n\n        catch {r config set client-output-buffer-limit \"replica 10mb 10mbs 60\"} e\n        assert_match {*Error*soft*} $e\n\n        catch {r config set client-output-buffer-limit \"pubsub 10mb 10mb 60s\"} e\n        assert_match {*Error*soft_seconds*} $e\n\n        r config set client-output-buffer-limit \"normal 1mb 2mb 60 replica 3mb 4mb 70 pubsub 5mb 6mb 80\"\n        set res [lindex [r config get client-output-buffer-limit] 1]\n        assert_equal $res \"normal 1048576 2097152 60 slave 3145728 4194304 70 pubsub 5242880 6291456 80\"\n\n        # Set back to the original value.\n        r config set client-output-buffer-limit $oldval\n    }\n\n    test {Client output buffer hard limit is enforced} {\n        r config set client-output-buffer-limit {pubsub 100000 0 0}\n        set rd1 [redis_deferring_client]\n\n        $rd1 subscribe foo\n        set reply [$rd1 read]\n        assert {$reply eq \"subscribe foo 1\"}\n\n        set omem 0\n        while 1 {\n            # The larger content size ensures that client.buf gets filled more quickly,\n            # allowing us to correctly observe the gradual increase of `omem`\n            r publish 
foo [string repeat bar 50]\n            set clients [split [r client list] \"\\r\\n\"]\n            set c [split [lindex $clients 1] \" \"]\n            if {![regexp {omem=([0-9]+)} $c - omem]} break\n            if {$omem > 200000} break\n        }\n        assert {$omem >= 70000 && $omem < 200000}\n        $rd1 close\n    }\n    \n    foreach {soft_limit_time wait_for_timeout} {3 yes\n                                                4 no } {\n        if $wait_for_timeout {\n            set test_name \"Client output buffer soft limit is enforced if time is overreached\"\n        } else {\n            set test_name \"Client output buffer soft limit is not enforced too early and is enforced when no traffic\"\n        }\n\n        test $test_name {\n            r config set client-output-buffer-limit \"pubsub 0 100000 $soft_limit_time\"\n            set soft_limit_time [expr $soft_limit_time*1000]\n            set rd1 [redis_deferring_client]\n\n            $rd1 client setname test_client\n            set reply [$rd1 read]\n            assert {$reply eq \"OK\"}\n\n            $rd1 subscribe foo\n            set reply [$rd1 read]\n            assert {$reply eq \"subscribe foo 1\"}\n\n            set omem 0\n            set start_time 0\n            set time_elapsed 0\n            set last_under_limit_time [clock milliseconds]\n            while 1 {\n                r publish foo [string repeat \"x\" 1000]\n                set clients [split [r client list] \"\\r\\n\"]\n                set c [lsearch -inline $clients *name=test_client*]\n                if {$start_time != 0} {\n                    set time_elapsed [expr {[clock milliseconds]-$start_time}]\n                    # Make sure test isn't taking too long\n                    assert {$time_elapsed <= [expr $soft_limit_time+3000]}\n                }\n                if {$wait_for_timeout && $c == \"\"} {\n                    # Make sure we're disconnected when we reach the soft limit\n                    assert 
{$omem >= 100000 && $time_elapsed >= $soft_limit_time}\n                    break\n                } else {\n                    assert {[regexp {omem=([0-9]+)} $c - omem]}\n                }\n                if {$omem > 100000} {\n                    if {$start_time == 0} {set start_time $last_under_limit_time}\n                    if {!$wait_for_timeout && $time_elapsed >= [expr $soft_limit_time-1000]} break\n                    # Slow down loop when omem has reached the limit.\n                    after 10\n                } else {\n                    # if the OS socket buffers swallowed what we previously filled, reset the start timer.\n                    set start_time 0\n                    set last_under_limit_time [clock milliseconds]\n                }\n            }\n\n            if {!$wait_for_timeout} {\n                # After we completely stopped the traffic, wait for soft limit to time out\n                set timeout [expr {$soft_limit_time+1500 - ([clock milliseconds]-$start_time)}]\n                wait_for_condition [expr $timeout/10] 10 {\n                    [lsearch [split [r client list] \"\\r\\n\"] *name=test_client*] == -1\n                } else {\n                    fail \"Soft limit timed out but client still connected\"\n                }\n            }\n\n            $rd1 close\n        }\n    }\n\n    test {No response for single command if client output buffer hard limit is enforced} {\n        r config set latency-tracking no\n        r config set client-output-buffer-limit {normal 100000 0 0}\n        # Total size of all items must be more than 100k\n        set item [string repeat \"x\" 1000]\n        for {set i 0} {$i < 150} {incr i} {\n            r lpush mylist $item\n        }\n        set orig_mem [s used_memory]\n        # Set client name and get all items\n        set rd [redis_deferring_client]\n        $rd client setname mybiglist\n        assert {[$rd read] eq \"OK\"}\n        $rd lrange mylist 0 -1\n        $rd 
flush\n        after 100\n\n        # Before we read the reply, redis will close this client.\n        set clients [r client list]\n        assert_no_match \"*name=mybiglist*\" $clients\n        set cur_mem [s used_memory]\n        # 10k is just a deviation threshold\n        assert {$cur_mem < 10000 + $orig_mem}\n\n        # Read nothing\n        set fd [$rd channel]\n        assert_equal {} [$rd rawread]\n    }\n\n    # Note: This test assumes that what's written with one write will be read by redis in one read.\n    # This assumption is wrong, but it seems to work empirically (for now).\n    test {No response for multi commands in pipeline if client output buffer limit is enforced} {\n        r config set client-output-buffer-limit {normal 100000 0 0}\n        set value [string repeat \"x\" 10000]\n        r set bigkey $value\n        set rd [redis_deferring_client]\n        $rd client setname multicommands\n        assert_equal \"OK\" [$rd read]\n\n        set server_pid [s process_id]\n        # Pause the server, so that the client's write will be buffered\n        pause_process $server_pid\n\n        # Create a pipeline of commands that will be processed in one socket read.\n        # It is important to use one write; in TLS mode independent writes seem\n        # to wait for a response from the server.\n        # Total size should be less than the OS socket buffer, so redis can\n        # execute all commands in this pipeline when it wakes up.\n        set buf \"\"\n        for {set i 0} {$i < 15} {incr i} {\n            append buf \"set $i $i\\r\\n\"\n            append buf \"get $i\\r\\n\"\n            append buf \"del $i\\r\\n\"\n            # One bigkey is 10k, total response size must be more than 100k\n            append buf \"get bigkey\\r\\n\"\n        }\n        $rd write $buf\n        $rd flush\n\n        # Resume the server to process the pipeline in one go\n        resume_process $server_pid\n        # Make sure the pipeline of commands is processed\n        wait_for_condition 100 10 {\n            [expr {[regexp {calls=(\\d+)} [cmdrstat get r] -> calls] ? $calls : 0}] >= 5\n        } else {\n            fail \"the pipeline of commands was not processed\"\n        }\n\n        # Redis must wake up if it can send a reply\n        assert_equal \"PONG\" [r ping]\n        set clients [r client list]\n        assert_no_match \"*name=multicommands*\" $clients\n        assert_equal {} [$rd rawread]\n    }\n\n    test {Execute transactions completely even if client output buffer limit is enforced} {\n        r config set client-output-buffer-limit {normal 100000 0 0}\n        # Total size of all items must be more than 100k\n        set item [string repeat \"x\" 1000]\n        for {set i 0} {$i < 150} {incr i} {\n            r lpush mylist2 $item\n        }\n\n        # Output buffer limit is enforced while executing a transaction\n        r client setname transactionclient\n        r set k1 v1\n        r multi\n        r set k2 v2\n        r get k2\n        r lrange mylist2 0 -1\n        r set k3 v3\n        r del k1\n        catch {[r exec]} e\n        assert_match \"*I/O error*\" $e\n        reconnect\n        set clients [r client list]\n        assert_no_match \"*name=transactionclient*\" $clients\n\n        # Transactions should be executed completely\n        assert_equal {} [r get k1]\n        assert_equal \"v2\" [r get k2]\n        assert_equal \"v3\" [r get k3]\n    }\n\n    test \"Obuf limit, HRANDFIELD with huge count stopped mid-run\" {\n        r config set client-output-buffer-limit {normal 1000000 0 0}\n        r hset myhash a b\n        catch {r hrandfield myhash -999999999} e\n        assert_match \"*I/O error*\" $e\n        reconnect\n    }\n\n    test \"Obuf limit, KEYS stopped mid-run\" {\n        r config set client-output-buffer-limit {normal 100000 0 0}\n        populate 1000 \"long-key-name-prefix-of-100-chars-------------------------------------------------------------------\"\n        catch {r keys *} e\n        assert_match \"*I/O error*\" $e\n        reconnect\n    }\n}\n"
  },
  {
    "path": "tests/unit/oom-score-adj.tcl",
    "content": "set system_name [string tolower [exec uname -s]]\n\nif {$system_name eq {linux}} {\n    start_server {tags {\"oom-score-adj external:skip\"}} {\n        proc get_oom_score_adj {{pid \"\"}} {\n            if {$pid == \"\"} {\n                set pid [srv 0 pid]\n            }\n            set fd [open \"/proc/$pid/oom_score_adj\" \"r\"]\n            set val [gets $fd]\n            close $fd\n\n            return $val\n        }\n\n        proc set_oom_score_adj {score {pid \"\"}} {\n            if {$pid == \"\"} {\n                set pid [srv 0 pid]\n            }\n            set fd [open \"/proc/$pid/oom_score_adj\" \"w\"]\n            puts $fd $score\n            close $fd\n        }\n\n        test {CONFIG SET oom-score-adj works as expected} {\n            set base [get_oom_score_adj]\n\n            # Enable oom-score-adj, check defaults\n            r config set oom-score-adj-values \"10 20 30\"\n            r config set oom-score-adj yes\n\n            assert {[get_oom_score_adj] == [expr $base + 10]}\n\n            # Modify current class\n            r config set oom-score-adj-values \"15 20 30\"\n            assert {[get_oom_score_adj] == [expr $base + 15]}\n\n            # Check replica class\n            r replicaof localhost 1\n            assert {[get_oom_score_adj] == [expr $base + 20]}\n            r replicaof no one\n            assert {[get_oom_score_adj] == [expr $base + 15]}\n\n            # Check child process\n            r set key-a value-a\n            r config set rdb-key-save-delay 1000000\n            r bgsave\n\n            set child_pid [get_child_pid 0]\n            # Wait until background child process to setOOMScoreAdj success.\n            wait_for_condition 100 10 {\n                [get_oom_score_adj $child_pid] == [expr $base + 30]\n            } else {\n                fail \"Set oom-score-adj of background child process is not ok\"\n            }\n        }\n\n        # Determine whether the current user is 
unprivileged\n        set original_value [exec cat /proc/self/oom_score_adj]\n        catch {\n            set fd [open \"/proc/self/oom_score_adj\" \"w\"]\n            puts $fd -1000\n            close $fd\n        } e\n        # The oom-score-adj failure tests can only run as an unprivileged user\n        if {[string match \"*permission denied*\" $e]} {\n            test {CONFIG SET oom-score-adj handles configuration failures} {\n                # Bad config\n                r config set oom-score-adj no\n                r config set oom-score-adj-values \"-1000 -1000 -1000\"\n\n                # Make sure it fails\n                catch {r config set oom-score-adj yes} e\n                assert_match {*Failed to set*} $e\n\n                # Make sure it remains off\n                assert {[r config get oom-score-adj] == \"oom-score-adj no\"}\n\n                # Fix config\n                r config set oom-score-adj-values \"0 100 100\"\n                r config set oom-score-adj yes\n\n                # Make sure it fails\n                catch {r config set oom-score-adj-values \"-1000 -1000 -1000\"} e\n                assert_match {*Failed*} $e\n\n                # Make sure previous values remain\n                assert {[r config get oom-score-adj-values] == {oom-score-adj-values {0 100 100}}}\n            }\n        } else {\n            # Restore the original oom_score_adj value\n            set fd [open \"/proc/self/oom_score_adj\" \"w\"]\n            puts $fd $original_value\n            close $fd\n        }\n\n        test {CONFIG SET oom-score-adj-values doesn't touch proc when disabled} {\n            set orig_osa [get_oom_score_adj]\n            \n            set other_val1 [expr $orig_osa + 1]\n            set other_val2 [expr $orig_osa + 2]\n            \n            r config set oom-score-adj no\n            \n            set_oom_score_adj $other_val2\n            assert_equal [get_oom_score_adj] $other_val2\n\n            r config set oom-score-adj-values 
\"$other_val1 $other_val1 $other_val1\"\n            \n            assert_equal [get_oom_score_adj] $other_val2\n        }\n\n        test {CONFIG SET oom score restored on disable} {\n            r config set oom-score-adj no\n            set custom_oom [expr [get_oom_score_adj] + 1]\n            set_oom_score_adj $custom_oom\n            assert_equal [get_oom_score_adj] $custom_oom\n\n            r config set oom-score-adj-values \"9 9 9\" oom-score-adj yes\n            assert_equal [get_oom_score_adj] [expr 9+$custom_oom]\n\n            r config set oom-score-adj no\n            assert_equal [get_oom_score_adj] $custom_oom\n        }\n\n        test {CONFIG SET oom score relative and absolute} {\n            r config set oom-score-adj no\n            set base_oom [get_oom_score_adj]\n\n            set custom_oom 9\n            r config set oom-score-adj-values \"$custom_oom $custom_oom $custom_oom\" oom-score-adj relative\n            assert_equal [get_oom_score_adj] [expr $base_oom+$custom_oom]\n\n            set custom_oom [expr [get_oom_score_adj] + 1]\n            r config set oom-score-adj-values \"$custom_oom $custom_oom $custom_oom\" oom-score-adj absolute\n            assert_equal [get_oom_score_adj] $custom_oom\n        }\n\n        test {CONFIG SET out-of-range oom score} {\n            assert_error {ERR *must be between -2000 and 2000*} {r config set oom-score-adj-values \"-2001 -2001 -2001\"} \n            assert_error {ERR *must be between -2000 and 2000*} {r config set oom-score-adj-values \"2001 2001 2001\"} \n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/other.tcl",
    "content": "start_server {tags {\"other\"}} {\n    if {$::force_failure} {\n        # This is used just for test suite development purposes.\n        test {Failing test} {\n            format err\n        } {ok}\n    }\n\n    test {Coverage: HELP commands} {\n        assert_match \"*OBJECT <subcommand> *\" [r OBJECT HELP]\n        assert_match \"*MEMORY <subcommand> *\" [r MEMORY HELP]\n        assert_match \"*PUBSUB <subcommand> *\" [r PUBSUB HELP]\n        assert_match \"*SLOWLOG <subcommand> *\" [r SLOWLOG HELP]\n        assert_match \"*CLIENT <subcommand> *\" [r CLIENT HELP]\n        assert_match \"*COMMAND <subcommand> *\" [r COMMAND HELP]\n        assert_match \"*CONFIG <subcommand> *\" [r CONFIG HELP]\n        assert_match \"*FUNCTION <subcommand> *\" [r FUNCTION HELP]\n        assert_match \"*MODULE <subcommand> *\" [r MODULE HELP]\n        assert_match \"*HOTKEYS <subcommand> *\" [r HOTKEYS HELP]\n    }\n\n    test {Coverage: MEMORY MALLOC-STATS} {\n        if {[string match {*jemalloc*} [s mem_allocator]]} {\n            assert_match \"*jemalloc*\" [r memory malloc-stats]\n        }\n    }\n\n    test {Coverage: MEMORY PURGE} {\n        if {[string match {*jemalloc*} [s mem_allocator]]} {\n            assert_equal {OK} [r memory purge]\n        }\n    }\n\n    test {SAVE - make sure there are all the types as values} {\n        # Wait for a background saving in progress to terminate\n        waitForBgsave r\n        r lpush mysavelist hello\n        r lpush mysavelist world\n        r set myemptykey {}\n        r set mynormalkey {blablablba}\n        r zadd mytestzset 10 a\n        r zadd mytestzset 20 b\n        r zadd mytestzset 30 c\n        r save\n    } {OK} {needs:save}\n\n    tags {slow} {\n        if {$::accurate} {set iterations 10000} else {set iterations 1000}\n        foreach fuzztype {binary alpha compr} {\n            test \"FUZZ stresser with data model $fuzztype\" {\n                set err 0\n                for {set i 0} {$i < 
$iterations} {incr i} {\n                    set fuzz [randstring 0 512 $fuzztype]\n                    r set foo $fuzz\n                    set got [r get foo]\n                    if {$got ne $fuzz} {\n                        set err [list $fuzz $got]\n                        break\n                    }\n                }\n                set _ $err\n            } {0}\n        }\n    }\n\n    start_server {overrides {save \"\"} tags {external:skip}} {\n        test {FLUSHALL should not reset the dirty counter if we disable save} {\n            r set key value\n            r flushall\n            assert_morethan [s rdb_changes_since_last_save] 0\n        }\n\n        test {FLUSHALL should reset the dirty counter to 0 if we enable save} {\n            r config set save \"3600 1 300 100 60 10000\"\n            r set key value\n            r flushall\n            assert_equal [s rdb_changes_since_last_save] 0\n        }\n\n        test {FLUSHALL and bgsave} {\n            r config set save \"3600 1 300 100 60 10000\"\n            r set x y\n            r bgsave\n            r set x y\n            r multi\n            r debug sleep 1\n            # by the time we get to run flushall, the child will have finished,\n            # but the parent will be unaware of it, and it could wrongly set the dirty counter.\n            r flushall\n            r exec\n            assert_equal [s rdb_changes_since_last_save] 0\n        }\n    }\n\n    test {BGSAVE} {\n        # Use FLUSHALL instead of FLUSHDB; FLUSHALL does a foreground save\n        # and resets the dirty counter to 0, so we won't trigger an unexpected bgsave.\n        r flushall\n        r save\n        r set x 10\n        r bgsave\n        waitForBgsave r\n        r debug reload\n        r get x\n    } {10} {needs:debug needs:save}\n\n    test {SELECT an out of range DB} {\n        catch {r select 1000000} err\n        set _ $err\n    } {*index is out of range*} {cluster:skip}\n\n    tags {consistency} {\n        proc 
check_consistency {dumpname code} {\n            set dump [csvdump r]\n            set sha1 [debug_digest]\n\n            uplevel 1 $code\n\n            set sha1_after [debug_digest]\n            if {$sha1 eq $sha1_after} {\n                return 1\n            }\n\n            # Failed\n            set newdump [csvdump r]\n            puts \"Consistency test failed!\"\n            puts \"You can inspect the two dumps in /tmp/${dumpname}*.txt\"\n\n            set fd [open /tmp/${dumpname}1.txt w]\n            puts $fd $dump\n            close $fd\n            set fd [open /tmp/${dumpname}2.txt w]\n            puts $fd $newdump\n            close $fd\n\n            return 0\n        }\n\n        if {$::accurate} {set numops 10000} else {set numops 1000}\n        test {Check consistency of different data types after a reload} {\n            r flushdb\n            # TODO: integrate usehexpire following next commit that will support replication\n            createComplexDataset r $numops {usetag usehexpire}\n            if {$::ignoredigest} {\n                set _ 1\n            } else {\n                check_consistency {repldump} {\n                    r debug reload\n                }\n            }\n        } {1} {needs:debug}\n\n        test {Same dataset digest if saving/reloading as AOF?} {\n            if {$::ignoredigest} {\n                set _ 1\n            } else {\n                check_consistency {aofdump} {\n                    r config set aof-use-rdb-preamble no\n                    r bgrewriteaof\n                    waitForBgrewriteaof r\n                    r debug loadaof\n                }\n            }\n        } {1} {needs:debug}\n    }\n\n    test {EXPIRES after a reload (snapshot + append only file rewrite)} {\n        r flushdb\n        r set x 10\n        r expire x 1000\n        r save\n        r debug reload\n        set ttl [r ttl x]\n        set e1 [expr {$ttl > 900 && $ttl <= 1000}]\n        r bgrewriteaof\n        
waitForBgrewriteaof r\n        r debug loadaof\n        set ttl [r ttl x]\n        set e2 [expr {$ttl > 900 && $ttl <= 1000}]\n        list $e1 $e2\n    } {1 1} {needs:debug needs:save}\n\n    test {EXPIRES after AOF reload (without rewrite)} {\n        r flushdb\n        r config set appendonly yes\n        r config set aof-use-rdb-preamble no\n        r set x somevalue\n        r expire x 1000\n        r setex y 2000 somevalue\n        r set z somevalue\n        r expireat z [expr {[clock seconds]+3000}]\n\n        # Milliseconds variants\n        r set px somevalue\n        r pexpire px 1000000\n        r psetex py 2000000 somevalue\n        r set pz somevalue\n        r pexpireat pz [expr {([clock seconds]+3000)*1000}]\n\n        # Reload and check\n        waitForBgrewriteaof r\n        # We need to wait two seconds to avoid false positives here, otherwise\n        # the DEBUG LOADAOF command may read a partial file.\n        # Another solution would be to set the fsync policy to no, since this\n        # prevents write() from being delayed by the completion of fsync().\n        after 2000\n        r debug loadaof\n        set ttl [r ttl x]\n        assert {$ttl > 900 && $ttl <= 1000}\n        set ttl [r ttl y]\n        assert {$ttl > 1900 && $ttl <= 2000}\n        set ttl [r ttl z]\n        assert {$ttl > 2900 && $ttl <= 3000}\n        set ttl [r ttl px]\n        assert {$ttl > 900 && $ttl <= 1000}\n        set ttl [r ttl py]\n        assert {$ttl > 1900 && $ttl <= 2000}\n        set ttl [r ttl pz]\n        assert {$ttl > 2900 && $ttl <= 3000}\n        r config set appendonly no\n    } {OK} {needs:debug}\n\n    tags {protocol} {\n        test {PIPELINING stresser (also a regression for the old epoll bug)} {\n            if {$::tls} {\n                set fd2 [::tls::socket [srv host] [srv port]]\n            } else {\n                set fd2 [socket [srv host] [srv port]]\n            }\n            fconfigure $fd2 -translation binary\n            if 
{!$::singledb} {\n                puts -nonewline $fd2 \"SELECT 9\\r\\n\"\n                flush $fd2\n                gets $fd2\n            }\n\n            for {set i 0} {$i < 100000} {incr i} {\n                set q {}\n                set val \"0000${i}0000\"\n                append q \"SET key:$i $val\\r\\n\"\n                puts -nonewline $fd2 $q\n                set q {}\n                append q \"GET key:$i\\r\\n\"\n                puts -nonewline $fd2 $q\n            }\n            flush $fd2\n\n            for {set i 0} {$i < 100000} {incr i} {\n                gets $fd2 line\n                gets $fd2 count\n                set count [string range $count 1 end]\n                set val [read $fd2 $count]\n                read $fd2 2\n            }\n            close $fd2\n            set _ 1\n        } {1}\n    }\n\n    test {APPEND basics} {\n        r del foo\n        list [r append foo bar] [r get foo] \\\n             [r append foo 100] [r get foo]\n    } {3 bar 6 bar100}\n\n    test {APPEND basics, integer encoded values} {\n        set res {}\n        r del foo\n        r append foo 1\n        r append foo 2\n        lappend res [r get foo]\n        r set foo 1\n        r append foo 2\n        lappend res [r get foo]\n    } {12 12}\n\n    test {APPEND fuzzing} {\n        set err {}\n        foreach type {binary alpha compr} {\n            set buf {}\n            r del x\n            for {set i 0} {$i < 1000} {incr i} {\n                set bin [randstring 0 10 $type]\n                append buf $bin\n                r append x $bin\n            }\n            if {$buf != [r get x]} {\n                set err \"Expected '$buf' found '[r get x]'\"\n                break\n            }\n        }\n        set _ $err\n    } {}\n\n    # Leave the user with a clean DB before exiting\n    test {FLUSHDB} {\n        set aux {}\n        if {$::singledb} {\n            r flushdb\n            lappend aux 0 [r dbsize]\n        } else {\n            r 
select 9\n            r flushdb\n            lappend aux [r dbsize]\n            r select 10\n            r flushdb\n            lappend aux [r dbsize]\n        }\n    } {0 0}\n\n    test {Perform a final SAVE to leave a clean DB on disk} {\n        waitForBgsave r\n        r save\n    } {OK} {needs:save}\n\n    test {RESET clears client state} {\n        r client setname test-client\n        r client tracking on\n\n        assert_equal [r reset] \"RESET\"\n        set client [r client list]\n        assert_match {*name= *} $client\n        assert_match {*flags=N *} $client\n    } {} {needs:reset}\n\n    test {RESET clears MONITOR state} {\n        set rd [redis_deferring_client]\n        $rd monitor\n        assert_equal [$rd read] \"OK\"\n\n        $rd reset\n        assert_equal [$rd read] \"RESET\"\n        $rd close\n\n        assert_no_match {*flags=O*} [r client list]\n    } {} {needs:reset}\n\n    test {RESET clears and discards MULTI state} {\n        r multi\n        r set key-a a\n\n        r reset\n        catch {r exec} err\n        assert_match {*EXEC without MULTI*} $err\n    } {} {needs:reset}\n\n    test {RESET clears Pub/Sub state} {\n        r subscribe channel-1\n        r reset\n\n        # confirm we're not subscribed by executing another command\n        r set key val\n    } {OK} {needs:reset}\n\n    test {RESET clears authenticated state} {\n        r acl setuser user1 on >secret +@all\n        r auth user1 secret\n        assert_equal [r acl whoami] user1\n\n        r reset\n\n        assert_equal [r acl whoami] default\n    } {} {needs:reset}\n\n    test \"Subcommand syntax error crash (issue #10070)\" {\n        assert_error {*unknown command*} {r GET|}\n        assert_error {*unknown command*} {r GET|SET}\n        assert_error {*unknown command*} {r GET|SET|OTHER}\n        assert_error {*unknown command*} {r CONFIG|GET GET_XX}\n        assert_error {*unknown subcommand*} {r CONFIG GET_XX}\n    }\n}\n\nstart_server {tags {\"other 
external:skip\"}} {\n    test {Don't rehash if redis has child process} {\n        r config set save \"\"\n        r config set rdb-key-save-delay 1000000\n\n        populate 4095 \"\" 1\n        r bgsave\n        wait_for_condition 10 100 {\n            [s rdb_bgsave_in_progress] eq 1\n        } else {\n            fail \"bgsave did not start in time\"\n        }\n\n        r mset k1 v1 k2 v2\n        # Hash table should not rehash\n        assert_no_match \"*table size: 8192*\" [r debug HTSTATS 9]\n        exec kill -9 [get_child_pid 0]\n        waitForBgsave r\n\n        # Hash table should rehash since there is no child process,\n        # size is power of two and over 4096, so it is 8192\n        wait_for_condition 50 100 {\n            [string match \"*table size: 8192*\" [r debug HTSTATS 9]]\n        } else {\n            fail \"hash table did not rehash after child process killed\"\n        }\n    } {} {needs:debug needs:local-process}\n}\n\nproc read_proc_title {pid} {\n    set fd [open \"/proc/$pid/cmdline\" \"r\"]\n    set cmdline [read $fd 1024]\n    close $fd\n\n    return $cmdline\n}\n\nstart_server {tags {\"other external:skip\"}} {\n    test {Process title set as expected} {\n        # Test only on Linux where it's easy to get cmdline without relying on tools.\n        # Skip valgrind as it messes up the arguments.\n        set os [exec uname]\n        if {$os == \"Linux\" && !$::valgrind} {\n            # Set a custom template\n            r config set \"proc-title-template\" \"TEST {title} {listen-addr} {port} {tls-port} {unixsocket} {config-file}\"\n            set cmdline [read_proc_title [srv 0 pid]]\n\n            assert_equal \"TEST\" [lindex $cmdline 0]\n            assert_match \"*/redis-server\" [lindex $cmdline 1]\n            \n            if {$::tls} {\n                set expect_port [srv 0 pport]\n                set expect_tls_port [srv 0 port]\n                set port [srv 0 pport]\n            } else {\n                set 
expect_port [srv 0 port]\n                set expect_tls_port 0\n                set port [srv 0 port]\n            }\n\n            assert_equal \"$::host:$port\" [lindex $cmdline 2]\n            assert_equal $expect_port [lindex $cmdline 3]\n            assert_equal $expect_tls_port [lindex $cmdline 4]\n            assert_match \"*/tests/tmp/server.*/socket\" [lindex $cmdline 5]\n            assert_match \"*/tests/tmp/redis.conf.*\" [lindex $cmdline 6]\n\n            # Try setting a bad template\n            catch {r config set \"proc-title-template\" \"{invalid-var}\"} err\n            assert_match {*template format is invalid*} $err\n        }\n    }\n}\n\nstart_cluster 1 0 {tags {\"other external:skip cluster slow\"}} {\n    r config set dynamic-hz no hz 500\n    test \"Redis can trigger resizing\" {\n        r flushall\n        # hashslot(foo) is 12182\n        for {set j 1} {$j <= 128} {incr j} {\n            r set \"{foo}$j\" a\n        }\n        assert_match \"*table size: 128*\" [r debug HTSTATS 0]\n\n        # disable resizing; the reason for not using a slow bgsave is that\n        # it would hit the dict_force_resize_ratio.\n        r debug dict-resizing 0\n\n        # delete data to have lots (96%) of empty buckets\n        for {set j 1} {$j <= 123} {incr j} {\n            r del \"{foo}$j\"\n        }\n        assert_match \"*table size: 128*\" [r debug HTSTATS 0]\n\n        # enable resizing\n        r debug dict-resizing 1\n\n        # waiting for serverCron to resize the tables\n        wait_for_condition 1000 10 {\n            [string match {*table size: 8*} [r debug HTSTATS 0]]\n        } else {\n            puts [r debug HTSTATS 0]\n            fail \"hash tables weren't resized.\"\n        }\n    } {} {needs:debug}\n\n    test \"Redis can rewind and trigger smaller slot resizing\" {\n        # hashslot(foo) is 12182\n        # hashslot(alice) is 749, smaller than hashslot(foo),\n        # attempt to trigger a resize on it, see details in 
#12802.\n        for {set j 1} {$j <= 128} {incr j} {\n            r set \"{alice}$j\" a\n        }\n\n        # disable resizing; the reason for not using a slow bgsave is that\n        # it would hit the dict_force_resize_ratio.\n        r debug dict-resizing 0\n\n        for {set j 1} {$j <= 123} {incr j} {\n            r del \"{alice}$j\"\n        }\n\n        # enable resizing\n        r debug dict-resizing 1\n\n        # waiting for serverCron to resize the tables\n        wait_for_condition 1000 10 {\n            [string match {*table size: 16*} [r debug HTSTATS 0]]\n        } else {\n            puts [r debug HTSTATS 0]\n            fail \"hash tables weren't resized.\"\n        }\n    } {} {needs:debug}\n}\n\nstart_server {tags {\"other external:skip\"}} {\n    test \"Redis can resize empty dict\" {\n        # Write and then delete 128 keys, creating an empty dict\n        r flushall\n        \n        # Add one key to the db just to create the dict and get its initial size\n        r set x 1\n        set initial_size [dict get [r memory stats] db.9 overhead.hashtable.main] \n        \n        # Now add 128 keys and then delete them\n        for {set j 1} {$j <= 128} {incr j} {\n            r set $j{b} a\n        }\n        \n        for {set j 1} {$j <= 128} {incr j} {\n            r del $j{b}\n        }\n        \n        # dict must have expanded. 
Verify it eventually shrinks back to its initial size.\n        wait_for_condition 100 50 {\n            [dict get [r memory stats] db.9 overhead.hashtable.main] == $initial_size\n        } else {\n            fail \"dict did not shrink back to its initial size in time\"\n        }\n    }\n}\n\nstart_server {tags {\"other external:skip\"} overrides {cluster-compatibility-sample-ratio 100}} {\n    test {Cross DB command is incompatible with cluster mode} {\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # SELECT with 0 is a compatible command in cluster mode\n        assert_equal {OK} [r select 0]\n        assert_equal $incompatible_ops [s cluster_incompatible_ops]\n\n        # SELECT with nonzero is an incompatible command in cluster mode\n        assert_equal {OK} [r select 1]\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # SWAPDB is an incompatible command in cluster mode\n        assert_equal {OK} [r swapdb 0 1]\n        assert_equal [expr $incompatible_ops + 2] [s cluster_incompatible_ops]\n\n\n        # If the destination db in the COPY command is equal to the source db, it is\n        # compatible with cluster mode, otherwise it is incompatible.\n        r select 0\n        r set key1 value1\n        set incompatible_ops [s cluster_incompatible_ops]\n        assert_equal {1} [r copy key1 key2{key1}] ;# destination db is equal to source db\n        assert_equal $incompatible_ops [s cluster_incompatible_ops]\n        assert_equal {1} [r copy key2{key1} key1 db 1] ;# destination db is not equal to source db\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # If the destination db in the MOVE command is not equal to the source db,\n        # it is incompatible with cluster mode.\n        r set key3 value3\n        assert_equal {1} [r move key3 1]\n        assert_equal [expr $incompatible_ops + 2] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Function no-cluster flag is 
incompatible with cluster mode} {\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # no-cluster flag is incompatible with cluster mode\n        r function load {#!lua name=test\n            redis.register_function{function_name='f1', callback=function() return 'hello' end, flags={'no-cluster'}}\n        }\n        r fcall f1 0\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # It is compatible without the no-cluster flag, so it should not increase cluster_incompatible_ops\n        r function load {#!lua name=test2\n            redis.register_function{function_name='f2', callback=function() return 'hello' end}\n        }\n        r fcall f2 0\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Script no-cluster flag is incompatible with cluster mode} {\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # no-cluster flag is incompatible with cluster mode\n        r eval {#!lua flags=no-cluster\n                return 1\n            } 0\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # It is compatible without the no-cluster flag, so it should not increase cluster_incompatible_ops\n        r eval {#!lua\n                return 1\n            } 0\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {SORT command incompatible operations with cluster mode} {\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # If the BY pattern's slot is not equal to the slot of the keys, we consider\n        # it an incompatible behavior; otherwise it is compatible and should not\n        # increase cluster_incompatible_ops\n        r lpush mylist 1 2 3\n        for {set i 1} {$i < 4} {incr i} {\n            r set weight_$i [expr 4 - $i]\n        }\n        assert_equal {3 2 1} [r sort mylist BY weight_*]\n        assert_equal [expr 
$incompatible_ops + 1] [s cluster_incompatible_ops]\n        # weight{mylist}_* and mylist have the same slot\n        for {set i 1} {$i < 4} {incr i} {\n            r set weight{mylist}_$i [expr 4 - $i]\n        }\n        assert_equal {3 2 1} [r sort mylist BY weight{mylist}_*]\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # If the GET pattern's slot is not equal to the slot of the keys, we consider\n        # it an incompatible behavior; otherwise it is compatible and should not\n        # increase cluster_incompatible_ops\n        for {set i 1} {$i < 4} {incr i} {\n            r set object_$i o_$i\n        }\n        assert_equal {o_3 o_2 o_1} [r sort mylist BY weight{mylist}_* GET object_*]\n        assert_equal [expr $incompatible_ops + 2] [s cluster_incompatible_ops]\n        # object{mylist}_*, weight{mylist}_* and mylist have the same slot\n        for {set i 1} {$i < 4} {incr i} {\n            r set object{mylist}_$i o_$i\n        }\n        assert_equal {o_3 o_2 o_1} [r sort mylist BY weight{mylist}_* GET object{mylist}_*]\n        assert_equal [expr $incompatible_ops + 2] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Normal cross slot commands are incompatible with cluster mode} {\n        # Normal cross slot command\n        set incompatible_ops [s cluster_incompatible_ops]\n        r mset foo bar bar foo\n        r del foo bar\n        assert_equal [expr $incompatible_ops + 2] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Transaction is incompatible with cluster mode} {\n        set incompatible_ops [s cluster_incompatible_ops]\n\n        # Incomplete transaction\n        catch {r EXEC}\n        r multi\n        r exec\n        assert_equal $incompatible_ops [s cluster_incompatible_ops]\n\n        # Transaction, SET and DEL have keys with different slots\n        r multi\n        r set foo bar\n        r del bar\n        r exec\n        assert_equal [expr 
$incompatible_ops + 1] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Lua scripts are incompatible with cluster mode} {\n        # Lua script, declared keys have different slots, it is not a compatible operation\n        set incompatible_ops [s cluster_incompatible_ops]\n        r eval {#!lua\n            redis.call('mset', KEYS[1], 0, KEYS[2], 0)\n        } 2 foo bar\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # Lua script, no declared keys, but accessing keys have different slots,\n        # it is not a compatible operation\n        set incompatible_ops [s cluster_incompatible_ops]\n        r eval {#!lua\n            redis.call('mset', 'foo', 0, 'bar', 0)\n        } 0\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # Lua script, declared keys have the same slot, but accessing keys\n        # have different slots in one command, even with flag 'allow-cross-slot-keys',\n        # it still is not a compatible operation\n        set incompatible_ops [s cluster_incompatible_ops]\n        r eval {#!lua flags=allow-cross-slot-keys\n            redis.call('mset', 'foo', 0, 'bar', 0)\n            redis.call('mset', KEYS[1], 0, KEYS[2], 0)\n        } 2 foo bar{foo}\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n\n        # Lua script, declared keys have the same slot, but accessing keys have different slots\n        # in multiple commands, and with flag 'allow-cross-slot-keys', it is a compatible operation\n        set incompatible_ops [s cluster_incompatible_ops]\n        r eval {#!lua flags=allow-cross-slot-keys\n            redis.call('set', 'foo', 0)\n            redis.call('set', 'bar', 0)\n            redis.call('mset', KEYS[1], 0, KEYS[2], 0)\n        } 2 foo bar{foo}\n        assert_equal $incompatible_ops [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {Shard subscribe commands are incompatible with 
cluster mode} {\n        set rd1 [redis_deferring_client]\n        set incompatible_ops [s cluster_incompatible_ops]\n        assert_equal {1 2} [ssubscribe $rd1 {foo bar}]\n        assert_equal [expr $incompatible_ops + 1] [s cluster_incompatible_ops]\n    } {} {cluster:skip}\n\n    test {cluster-compatibility-sample-ratio configuration can work} {\n        # Disable cluster compatibility sampling, no increase in cluster_incompatible_ops\n        set incompatible_ops [s cluster_incompatible_ops]\n        r config set cluster-compatibility-sample-ratio 0\n        for {set i 0} {$i < 100} {incr i} {\n            r mset foo bar$i bar foo$i\n        }\n        # Enable cluster compatibility sampling again to show the metric\n        r config set cluster-compatibility-sample-ratio 1\n        assert_equal $incompatible_ops [s cluster_incompatible_ops]\n\n        # 100% sample ratio, all operations should increase cluster_incompatible_ops\n        set incompatible_ops [s cluster_incompatible_ops]\n        r config set cluster-compatibility-sample-ratio 100\n        for {set i 0} {$i < 100} {incr i} {\n            r mset foo bar$i bar foo$i\n        }\n        assert_equal [expr $incompatible_ops + 100] [s cluster_incompatible_ops]\n\n        # 30% sample ratio, cluster_incompatible_ops should increase between 20% and 40%\n        set incompatible_ops [s cluster_incompatible_ops]\n        r config set cluster-compatibility-sample-ratio 30\n        for {set i 0} {$i < 1000} {incr i} {\n            r mset foo bar$i bar foo$i\n        }\n        assert_range [s cluster_incompatible_ops] [expr $incompatible_ops + 200] [expr $incompatible_ops + 400]\n    } {} {cluster:skip}\n}\n"
  },
  {
    "path": "tests/unit/pause.tcl",
"content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nstart_server {tags {\"pause network\"}} {\n    test \"Test read commands are not blocked by client pause\" {\n        r client PAUSE 100000 WRITE\n        set rd [redis_deferring_client]\n        $rd GET FOO\n        $rd PING\n        $rd INFO\n        assert_equal [s 0 blocked_clients] 0\n        r client unpause\n        $rd close\n    }\n\n    test \"Test old pause-all takes precedence over new pause-write (less restrictive)\" {\n        # Scenario:\n        # 1. Run 'PAUSE ALL' for 200msec\n        # 2. Run 'PAUSE WRITE' for 20msec\n        # 3. Wait 50msec\n        # 4. 'GET FOO'.\n        # Expected that:\n        # - Although the second 'PAUSE' is shorter than the first 'PAUSE', the\n        #   pause-client feature will stick to the longer one, i.e., the client\n        #   will be paused for up to 200msec.\n        # - The GET command will be postponed ~200msec, even though the last command\n        #   paused only WRITE. 
This is because the first 'PAUSE ALL' command is\n        #   more restrictive than the second 'PAUSE WRITE' and the pause-client feature\n        #   preserves the most restrictive configuration among multiple settings.\n        set rd [redis_deferring_client]\n        $rd SET FOO BAR\n        $rd read\n\n        set test_start_time [clock milliseconds]\n        r client PAUSE 200 ALL\n        r client PAUSE 20 WRITE\n        after 50\n        $rd get FOO\n        $rd read\n        set elapsed [expr {[clock milliseconds]-$test_start_time}]\n        assert_lessthan 200 $elapsed\n        $rd close\n    }\n\n    test \"Test new pause time is smaller than old one, then old time preserved\" {\n        r client PAUSE 60000 WRITE\n        r client PAUSE 10 WRITE\n        after 100\n        set rd [redis_deferring_client]\n        $rd SET FOO BAR\n        wait_for_blocked_clients_count 1 100 10\n\n        r client unpause\n        assert_match \"OK\" [$rd read]\n        $rd close\n    }\n\n    test \"Test write commands are paused by RO\" {\n        r client PAUSE 60000 WRITE\n\n        set rd [redis_deferring_client]\n        $rd SET FOO BAR\n        wait_for_blocked_clients_count 1 50 100\n\n        r client unpause\n        assert_match \"OK\" [$rd read]\n        $rd close\n    }\n\n    test \"Test special commands are paused by RO\" {\n        r PFADD pause-hll test\n        r client PAUSE 100000 WRITE\n\n        # Test that pfcount, which can replicate, is also blocked\n        set rd [redis_deferring_client]\n        $rd PFCOUNT pause-hll\n        wait_for_blocked_clients_count 1 50 100\n\n        # Test that publish, which adds the message to the replication\n        # stream, is blocked.\n        set rd2 [redis_deferring_client]\n        $rd2 publish foo bar\n        wait_for_blocked_clients_count 2 50 100\n\n        r client unpause \n        assert_match \"1\" [$rd read]\n        assert_match \"0\" [$rd2 read]\n        $rd close\n        $rd2 close\n    }\n\n    test 
\"Test read/admin multi-execs are not blocked by pause RO\" {\n        r SET FOO BAR\n        r client PAUSE 100000 WRITE\n        set rr [redis_client]\n        assert_equal [$rr MULTI] \"OK\"\n        assert_equal [$rr PING] \"QUEUED\"\n        assert_equal [$rr GET FOO] \"QUEUED\"\n        assert_match \"PONG BAR\" [$rr EXEC]\n        assert_equal [s 0 blocked_clients] 0\n        r client unpause \n        $rr close\n    }\n\n    test \"Test write multi-execs are blocked by pause RO\" {\n        set rd [redis_deferring_client]\n        $rd MULTI\n        assert_equal [$rd read] \"OK\"\n        $rd SET FOO BAR\n        assert_equal [$rd read] \"QUEUED\"\n        r client PAUSE 60000 WRITE\n        $rd EXEC\n        wait_for_blocked_clients_count 1 50 100\n        r client unpause \n        assert_match \"OK\" [$rd read]\n        $rd close\n    }\n\n    test \"Test scripts are blocked by pause RO\" {\n        r client PAUSE 60000 WRITE\n        set rd [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        $rd EVAL \"return 1\" 0\n\n        # test a script with a shebang and no flags for coverage\n        $rd2 EVAL {#!lua\n            return 1\n        } 0\n\n        wait_for_blocked_clients_count 2 50 100\n        r client unpause \n        assert_match \"1\" [$rd read]\n        assert_match \"1\" [$rd2 read]\n        $rd close\n        $rd2 close\n    }\n\n    test \"Test RO scripts are not blocked by pause RO\" {\n        r set x y\n        # create a function for later\n        r FUNCTION load replace {#!lua name=f1\n            redis.register_function{\n                function_name='f1',\n                callback=function() return \"hello\" end,\n                flags={'no-writes'}\n            }\n        }\n\n        r client PAUSE 6000000 WRITE\n        set rr [redis_client]\n\n        # test an eval that's for sure not in the script cache\n        assert_equal [$rr EVAL {#!lua flags=no-writes\n                return 'unique script'\n   
         } 0\n        ] \"unique script\"\n\n        # for sanity, repeat that EVAL on a script that's already cached\n        assert_equal [$rr EVAL {#!lua flags=no-writes\n                return 'unique script'\n            } 0\n        ] \"unique script\"\n\n        # test EVAL_RO on a unique script that's for sure not in the cache\n        assert_equal [$rr EVAL_RO {\n            return redis.call('GeT', 'x')..' unique script'\n            } 1 x\n        ] \"y unique script\"\n\n        # test with evalsha\n        set sha [$rr script load {#!lua flags=no-writes\n                return 2\n            }]\n        assert_equal [$rr EVALSHA $sha 0] 2\n\n        # test with function\n        assert_equal [$rr fcall f1 0] hello\n\n        r client unpause\n        $rr close\n    }\n\n    test \"Test read-only scripts in multi-exec are not blocked by pause RO\" {\n        r SET FOO BAR\n        r client PAUSE 100000 WRITE\n        set rr [redis_client]\n        assert_equal [$rr MULTI] \"OK\"\n        assert_equal [$rr EVAL {#!lua flags=no-writes\n                return 12\n            } 0\n        ] QUEUED\n        assert_equal [$rr EVAL {#!lua flags=no-writes\n                return 13\n            } 0\n        ] QUEUED\n        assert_match \"12 13\" [$rr EXEC]\n        assert_equal [s 0 blocked_clients] 0\n        r client unpause\n        $rr close\n    }\n\n    test \"Test write scripts in multi-exec are blocked by pause RO\" {\n        set rd [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        # one with a shebang\n        $rd MULTI\n        assert_equal [$rd read] \"OK\"\n        $rd EVAL {#!lua\n                return 12\n            } 0\n        assert_equal [$rd read] \"QUEUED\"\n\n        # one without a shebang\n        $rd2 MULTI\n        assert_equal [$rd2 read] \"OK\"\n        $rd2 EVAL {#!lua\n                return 13\n            } 0\n        assert_equal [$rd2 read] \"QUEUED\"\n\n        r client PAUSE 60000 WRITE\n       
 $rd EXEC\n        $rd2 EXEC\n        wait_for_blocked_clients_count 2 50 100\n        r client unpause\n        assert_match \"12\" [$rd read]\n        assert_match \"13\" [$rd2 read]\n        $rd close\n        $rd2 close\n    }\n\n    test \"Test may-replicate commands are rejected in RO scripts\" {\n        # that's specifically important for CLIENT PAUSE WRITE\n        assert_error {ERR Write commands are not allowed from read-only scripts. script:*} {\n            r EVAL_RO \"return redis.call('publish','ch','msg')\" 0\n        }\n        assert_error {ERR Write commands are not allowed from read-only scripts. script:*} {\n            r EVAL {#!lua flags=no-writes\n                return redis.call('publish','ch','msg')\n            } 0\n        }\n        # make sure that publish isn't blocked from a non-RO script\n        assert_equal [r EVAL \"return redis.call('publish','ch','msg')\" 0] 0\n    }\n\n    test \"Test multiple clients can be queued up and unblocked\" {\n        r client PAUSE 60000 WRITE\n        set clients [list [redis_deferring_client] [redis_deferring_client] [redis_deferring_client]]\n        foreach client $clients {\n            $client SET FOO BAR\n        }\n\n        wait_for_blocked_clients_count 3 50 100\n        r client unpause\n        foreach client $clients {\n            assert_match \"OK\" [$client read]\n            $client close\n        }\n    }\n\n    test \"Test clients with syntax errors will get responses immediately\" {\n        r client PAUSE 100000 WRITE\n        catch {r set FOO} err\n        assert_match \"ERR wrong number of arguments for 'set' command\" $err\n        r client unpause\n    }\n\n    test \"Test both active and passive expires are skipped during client pause\" {\n        set expired_keys [s 0 expired_keys]\n        r multi\n        r set foo{t} bar{t} PX 10\n        r set bar{t} foo{t} PX 10\n        r client PAUSE 50000 WRITE\n        r exec\n\n        wait_for_condition 10 100 {\n            [r 
get foo{t}] == {} && [r get bar{t}] == {}\n        } else {\n            fail \"Keys were never logically expired\"\n        }\n\n        # No keys should actually have been expired\n        assert_match $expired_keys [s 0 expired_keys]\n\n        r client unpause\n\n        # Force the keys to expire\n        r get foo{t}\n        r get bar{t}\n\n        # Now that clients have been unpaused, expires should go through\n        assert_match [expr $expired_keys + 2] [s 0 expired_keys]   \n    }\n\n    test \"Test that client pause starts at the end of a transaction\" {\n        r MULTI\n        r SET FOO1{t} BAR\n        r client PAUSE 60000 WRITE\n        r SET FOO2{t} BAR\n        r exec\n\n        set rd [redis_deferring_client]\n        $rd SET FOO3{t} BAR\n\n        wait_for_blocked_clients_count 1 50 100\n\n        assert_match \"BAR\" [r GET FOO1{t}]\n        assert_match \"BAR\" [r GET FOO2{t}]\n        assert_match \"\" [r GET FOO3{t}]\n\n        r client unpause \n        assert_match \"OK\" [$rd read]\n        $rd close\n    }\n\n    start_server {tags {needs:repl external:skip}} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n\n        # Avoid PINGs\n        $master config set repl-ping-replica-period 3600\n        r replicaof $master_host $master_port\n\n        wait_for_condition 50 100 {\n            [s master_link_status] eq {up}\n        } else {\n            fail \"Replication not started.\"\n        }\n\n        test \"Test when replica paused, offset would not grow\" {\n            $master set foo bar\n            set old_master_offset [status $master master_repl_offset]\n\n            wait_for_condition 50 100 {\n                [s slave_repl_offset] == [status $master master_repl_offset]\n            } else {\n                fail \"Replication offset not matched.\"\n            }\n\n            r client pause 100000 write\n            $master set foo2 bar2\n\n            # 
Make sure replica received data from master\n            wait_for_condition 50 100 {\n                [s slave_read_repl_offset] == [status $master master_repl_offset]\n            } else {\n                fail \"Replication did not work.\"\n            }\n\n            # Replica would not apply the write command\n            assert {[s slave_repl_offset] == $old_master_offset}\n            r get foo2\n        } {}\n\n        test \"Test replica offset would grow after unpause\" {\n            r client unpause\n            wait_for_condition 50 100 {\n                [s slave_repl_offset] == [status $master master_repl_offset]\n            } else {\n                fail \"Replication did not continue.\"\n            }\n            r get foo2\n        } {bar2}\n    }\n\n    test \"Test the randomkey command will not cause the server to get into an infinite loop during the client pause write\" {\n        r flushall\n\n        r multi\n        r set key value px 3\n        r client pause 10000 write\n        r exec\n        \n        after 5\n\n        wait_for_condition 50 100 {\n            [r randomkey] == \"key\"\n        } else {\n            fail \"executing randomkey failed, caused by an infinite loop\"\n        }\n\n        r client unpause\n        assert_equal [r randomkey] {}\n    }\n\n    test \"CLIENT UNBLOCK is not allowed to unblock a client blocked by CLIENT PAUSE\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        $rd1 client id\n        $rd2 client id\n        set client_id1 [$rd1 read]\n        set client_id2 [$rd2 read]\n\n        r del mylist\n        r client pause 100000 write\n        $rd1 blpop mylist 0\n        $rd2 blpop mylist 0\n        wait_for_blocked_clients_count 2 50 100\n\n        # This used to trigger a panic.\n        assert_equal 0 [r client unblock $client_id1 timeout]\n        # This used to return an UNBLOCKED error.\n        assert_equal 0 [r client unblock $client_id2 error]\n\n        # After 
the unpause, it must be able to unblock the client.\n        r client unpause\n        assert_equal 1 [r client unblock $client_id1 timeout]\n        assert_equal 1 [r client unblock $client_id2 error]\n        assert_equal {} [$rd1 read]\n        assert_error \"UNBLOCKED*\" {$rd2 read}\n\n        $rd1 close\n        $rd2 close\n    }\n\n    # Make sure we unpause at the end\n    r client unpause\n}\n"
  },
  {
    "path": "tests/unit/printver.tcl",
    "content": "start_server {} {\n    set i [r info]\n    regexp {redis_version:(.*?)\\r\\n} $i - version\n    regexp {redis_git_sha1:(.*?)\\r\\n} $i - sha1\n    puts \"Testing Redis version $version ($sha1)\"\n}\n"
  },
  {
    "path": "tests/unit/protocol.tcl",
    "content": "start_server {tags {\"protocol network\"}} {\n    test \"Handle an empty query\" {\n        reconnect\n        r write \"\\r\\n\"\n        r flush\n        assert_equal \"PONG\" [r ping]\n    }\n\n    test \"Negative multibulk length\" {\n        reconnect\n        r write \"*-10\\r\\n\"\n        r flush\n        assert_equal PONG [r ping]\n    }\n\n    test \"Out of range multibulk length\" {\n        reconnect\n        r write \"*3000000000\\r\\n\"\n        r flush\n        assert_error \"*invalid multibulk length*\" {r read}\n    }\n\n    test \"Wrong multibulk payload header\" {\n        reconnect\n        r write \"*3\\r\\n\\$3\\r\\nSET\\r\\n\\$1\\r\\nx\\r\\nfooz\\r\\n\"\n        r flush\n        assert_error \"*expected '$', got 'f'*\" {r read}\n    }\n\n    test \"Negative multibulk payload length\" {\n        reconnect\n        r write \"*3\\r\\n\\$3\\r\\nSET\\r\\n\\$1\\r\\nx\\r\\n\\$-10\\r\\n\"\n        r flush\n        assert_error \"*invalid bulk length*\" {r read}\n    }\n\n    test \"Out of range multibulk payload length\" {\n        reconnect\n        r write \"*3\\r\\n\\$3\\r\\nSET\\r\\n\\$1\\r\\nx\\r\\n\\$2000000000\\r\\n\"\n        r flush\n        assert_error \"*invalid bulk length*\" {r read}\n    }\n\n    test \"Non-number multibulk payload length\" {\n        reconnect\n        r write \"*3\\r\\n\\$3\\r\\nSET\\r\\n\\$1\\r\\nx\\r\\n\\$blabla\\r\\n\"\n        r flush\n        assert_error \"*invalid bulk length*\" {r read}\n    }\n\n    test \"Multi bulk request not followed by bulk arguments\" {\n        reconnect\n        r write \"*1\\r\\nfoo\\r\\n\"\n        r flush\n        assert_error \"*expected '$', got 'f'*\" {r read}\n    }\n\n    test \"Generic wrong number of args\" {\n        reconnect\n        assert_error \"*wrong*arguments*ping*\" {r ping x y z}\n    }\n\n    test \"Unbalanced number of quotes\" {\n        reconnect\n        r write \"set \\\"\\\"\\\"test-key\\\"\\\"\\\" test-value\\r\\n\"\n        r write 
\"ping\\r\\n\"\n        r flush\n        assert_error \"*unbalanced*\" {r read}\n    }\n\n    set c 0\n    foreach seq [list \"\\x00\" \"*\\x00\" \"$\\x00\"] {\n        incr c\n        test \"Protocol desync regression test #$c\" {\n            if {$::tls} {\n                set s [::tls::socket [srv 0 host] [srv 0 port]]\n            } else {\n                set s [socket [srv 0 host] [srv 0 port]]\n            }\n            puts -nonewline $s $seq\n            set payload [string repeat A 1024]\"\\n\"\n            set test_start [clock seconds]\n            set test_time_limit 30\n            while 1 {\n                if {[catch {\n                    puts -nonewline $s $payload\n                    flush $s\n                    incr payload_size [string length $payload]\n                }]} {\n                    set retval [gets $s]\n                    close $s\n                    break\n                } else {\n                    set elapsed [expr {[clock seconds]-$test_start}]\n                    if {$elapsed > $test_time_limit} {\n                        close $s\n                        error \"assertion:Redis did not close the connection after protocol desync\"\n                    }\n                }\n            }\n            set retval\n        } {*Protocol error*}\n    }\n    unset c\n\n    # recover the broken connection\n    reconnect\n    r ping\n\n    # raw RESP response tests\n    r readraw 1\n\n    set nullres {*-1}\n    if {$::force_resp3} {\n        set nullres {_}\n    }\n\n    test \"raw protocol response\" {\n        r srandmember nonexisting_key\n    } \"$nullres\"\n\n    r deferred 1\n\n    test \"raw protocol response - deferred\" {\n        r srandmember nonexisting_key\n        r read\n    } \"$nullres\"\n\n    test \"raw protocol response - multiline\" {\n        r sadd ss a\n        assert_equal [r read] {:1}\n        r srandmember ss 100\n        assert_equal [r read] {*1}\n        assert_equal [r read] {$1}\n        
assert_equal [r read] {a}\n    }\n\n    test \"bulk reply protocol\" {\n        # value=2 (int encoding)\n        r set crlf 2\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 7] \"\\$1\\r\\n2\\r\\n\"\n\n        # value=2147483647 (int encoding)\n        r set crlf 2147483647\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 17] \"\\$10\\r\\n2147483647\\r\\n\"\n\n        # value=-2147483648 (int encoding)\n        r set crlf -2147483648\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 18] \"\\$11\\r\\n-2147483648\\r\\n\"\n\n        # value=-9223372036854775809 (embstr encoding)\n        r set crlf -9223372036854775809\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 27] \"\\$20\\r\\n-9223372036854775809\\r\\n\"\n\n        # value=9223372036854775808 (embstr encoding)\n        r set crlf 9223372036854775808\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 26] \"\\$19\\r\\n9223372036854775808\\r\\n\"\n\n        # normal sds (embstr encoding)\n        r set crlf aaaaaaaaaaaaaaaa\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 23] \"\\$16\\r\\naaaaaaaaaaaaaaaa\\r\\n\"\n\n        # normal sds (raw string encoding) with 45 'a'\n        set rawstr [string repeat \"a\" 45]\n        r set crlf $rawstr\n        assert_equal [r rawread 5] \"+OK\\r\\n\"\n        r get crlf\n        assert_equal [r rawread 52] \"\\$45\\r\\n$rawstr\\r\\n\"\n\n        r del crlf\n        assert_equal [r rawread 4] \":1\\r\\n\"\n    }\n\n    # restore connection settings\n    r readraw 0\n    r deferred 0\n\n    # check the connection still works\n    assert_equal [r ping] {PONG}\n\n    test {RESP3 attributes} {\n        r hello 3\n        assert_equal {Some real reply following 
the attribute} [r debug protocol attrib]\n        assert_equal {key-popularity {key:123 90}} [r attributes]\n\n        # make sure attributes are not kept from previous command\n        r ping\n        assert_error {*attributes* no such element in array} {r attributes}\n\n        # restore state\n        r hello 2\n        set _ \"\"\n    } {} {needs:debug resp3}\n\n    test {RESP3 attributes readraw} {\n        r hello 3\n        r readraw 1\n        r deferred 1\n\n        r debug protocol attrib\n        assert_equal [r read] {|1}\n        assert_equal [r read] {$14}\n        assert_equal [r read] {key-popularity}\n        assert_equal [r read] {*2}\n        assert_equal [r read] {$7}\n        assert_equal [r read] {key:123}\n        assert_equal [r read] {:90}\n        assert_equal [r read] {$39}\n        assert_equal [r read] {Some real reply following the attribute}\n\n        # restore state\n        r readraw 0\n        r deferred 0\n        r hello 2\n        set _ {}\n    } {} {needs:debug resp3}\n\n    test {RESP3 attributes on RESP2} {\n        r hello 2\n        set res [r debug protocol attrib]\n        set _ $res\n    } {Some real reply following the attribute} {needs:debug}\n\n    test \"test big number parsing\" {\n        r hello 3\n        r debug protocol bignum\n    } {1234567999999999999999999999999999999} {needs:debug resp3}\n\n    test \"test bool parsing\" {\n        r hello 3\n        assert_equal [r debug protocol true] 1\n        assert_equal [r debug protocol false] 0\n        r hello 2\n        assert_equal [r debug protocol true] 1\n        assert_equal [r debug protocol false] 0\n        set _ {}\n    } {} {needs:debug resp3}\n\n    test \"test verbatim str parsing\" {\n        r hello 3\n        r debug protocol verbatim\n    } \"This is a verbatim\\nstring\" {needs:debug resp3}\n\n    test \"test large number of args\" {\n        r flushdb\n        set args [split [string trim [string repeat \"k v \" 10000]]]\n        lappend args 
\"{k}2\" v2\n        r mset {*}$args\n        assert_equal [r get \"{k}2\"] v2\n    }\n    \n    test \"test argument rewriting - issue 9598\" {\n        # INCRBYFLOAT uses argument rewriting for correct float value propagation.\n        # We use it to make sure argument rewriting works properly. It's important \n        # this test is run under valgrind to verify there are no memory leaks in \n        # arg buffer handling.\n        r flushdb\n\n        # Test normal argument handling\n        r set k 0\n        assert_equal [r incrbyfloat k 1.0] 1\n        \n        # Test argument handling in multi-state buffers\n        r multi\n        r incrbyfloat k 1.0\n        assert_equal [r exec] 2\n    }\n\n}\n\nstart_server {tags {\"regression\"}} {\n    test \"Regression for a crash with blocking ops and pipelining\" {\n        set rd [redis_deferring_client]\n        set fd [r channel]\n        set proto \"*3\\r\\n\\$5\\r\\nBLPOP\\r\\n\\$6\\r\\nnolist\\r\\n\\$1\\r\\n0\\r\\n\"\n        puts -nonewline $fd $proto$proto\n        flush $fd\n        set res {}\n\n        $rd rpush nolist a\n        $rd read\n        $rd rpush nolist a\n        $rd read\n        $rd close\n    }\n}\n\nstart_server {tags {\"regression\"}} {\n    test \"Regression for a crash with cron release of client arguments\" {\n        r write \"*3\\r\\n\"\n        r flush\n        after 3000 ;# wait for c->argv to be released due to timeout\n        r write \"\\$3\\r\\nSET\\r\\n\\$3\\r\\nkey\\r\\n\\$1\\r\\n0\\r\\n\"\n        r flush\n        r read\n    } {OK}\n}\n"
  },
  {
    "path": "tests/unit/pubsub.tcl",
    "content": "start_server {tags {\"pubsub network\"}} {\n    if {$::singledb} {\n        set db 0\n    } else {\n        set db 9\n    }\n\n    foreach resp {2 3} {\n        set rd1 [redis_deferring_client]\n        if {[lsearch $::denytags \"resp3\"] >= 0} {\n            if {$resp == 3} {continue}\n        } elseif {$::force_resp3} {\n            if {$resp == 2} {continue}\n        }\n\n        $rd1 hello $resp\n        $rd1 read\n\n        test \"Pub/Sub PING on RESP$resp\" {\n            subscribe $rd1 somechannel\n            # While subscribed to non-zero channels PING works in Pub/Sub mode.\n            $rd1 ping\n            $rd1 ping \"foo\"\n            # In RESP3, the SUBSCRIBEd client can issue any command and get a reply, so the PINGs are standard\n            # In RESP2, only a handful of commands are allowed after a client is SUBSCRIBED (PING is one of them).\n            # For some reason, the reply in that case is an array with two elements: \"pong\"  and argv[1] or an empty string\n            # God knows why. 
Done in commit 2264b981\n            if {$resp == 3} {\n                assert_equal {PONG} [$rd1 read]\n                assert_equal {foo} [$rd1 read]\n            } else {\n                assert_equal {pong {}} [$rd1 read]\n                assert_equal {pong foo} [$rd1 read]\n            }\n            unsubscribe $rd1 somechannel\n            # Now we are unsubscribed, PING should just return PONG.\n            $rd1 ping\n            assert_equal {PONG} [$rd1 read]\n\n        }\n        $rd1 close\n    }\n\n    test \"PUBLISH/SUBSCRIBE basics\" {\n        set rd1 [redis_deferring_client]\n\n        # subscribe to two channels\n        assert_equal {1 2} [subscribe $rd1 {chan1 chan2}]\n        assert_equal 1 [r publish chan1 hello]\n        assert_equal 1 [r publish chan2 world]\n        assert_equal {message chan1 hello} [$rd1 read]\n        assert_equal {message chan2 world} [$rd1 read]\n\n        # unsubscribe from one of the channels\n        unsubscribe $rd1 {chan1}\n        assert_equal 0 [r publish chan1 hello]\n        assert_equal 1 [r publish chan2 world]\n        assert_equal {message chan2 world} [$rd1 read]\n\n        # unsubscribe from the remaining channel\n        unsubscribe $rd1 {chan2}\n        assert_equal 0 [r publish chan1 hello]\n        assert_equal 0 [r publish chan2 world]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PUBLISH/SUBSCRIBE with two clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [subscribe $rd1 {chan1}]\n        assert_equal {1} [subscribe $rd2 {chan1}]\n        assert_equal 2 [r publish chan1 hello]\n        assert_equal {message chan1 hello} [$rd1 read]\n        assert_equal {message chan1 hello} [$rd2 read]\n\n        # clean up clients\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"PUBLISH/SUBSCRIBE after UNSUBSCRIBE without arguments\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1 2 
3} [subscribe $rd1 {chan1 chan2 chan3}]\n        unsubscribe $rd1\n        wait_for_condition 100 10 {\n            [regexp {cmd=unsubscribe} [r client list]] eq 1\n        } else {\n            fail \"unsubscribe did not arrive\"\n        }\n        assert_equal 0 [r publish chan1 hello]\n        assert_equal 0 [r publish chan2 hello]\n        assert_equal 0 [r publish chan3 hello]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"SUBSCRIBE to one channel more than once\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1 1 1} [subscribe $rd1 {chan1 chan1 chan1}]\n        assert_equal 1 [r publish chan1 hello]\n        assert_equal {message chan1 hello} [$rd1 read]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"UNSUBSCRIBE from non-subscribed channels\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {0 0 0} [unsubscribe $rd1 {foo bar quux}]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PUBLISH/PSUBSCRIBE basics\" {\n        set rd1 [redis_deferring_client]\n\n        # subscribe to two patterns\n        assert_equal {1 2} [psubscribe $rd1 {foo.* bar.*}]\n        assert_equal 1 [r publish foo.1 hello]\n        assert_equal 1 [r publish bar.1 hello]\n        assert_equal 0 [r publish foo1 hello]\n        assert_equal 0 [r publish barfoo.1 hello]\n        assert_equal 0 [r publish qux.1 hello]\n        assert_equal {pmessage foo.* foo.1 hello} [$rd1 read]\n        assert_equal {pmessage bar.* bar.1 hello} [$rd1 read]\n\n        # unsubscribe from one of the patterns\n        assert_equal {1} [punsubscribe $rd1 {foo.*}]\n        assert_equal 0 [r publish foo.1 hello]\n        assert_equal 1 [r publish bar.1 hello]\n        assert_equal {pmessage bar.* bar.1 hello} [$rd1 read]\n\n        # unsubscribe from the remaining pattern\n        assert_equal {0} [punsubscribe $rd1 {bar.*}]\n        assert_equal 0 [r publish foo.1 hello]\n        assert_equal 0 [r publish 
bar.1 hello]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PUBLISH/PSUBSCRIBE with two clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [psubscribe $rd1 {chan.*}]\n        assert_equal {1} [psubscribe $rd2 {chan.*}]\n        assert_equal 2 [r publish chan.foo hello]\n        assert_equal {pmessage chan.* chan.foo hello} [$rd1 read]\n        assert_equal {pmessage chan.* chan.foo hello} [$rd2 read]\n\n        # clean up clients\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"PUBLISH/PSUBSCRIBE after PUNSUBSCRIBE without arguments\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1 2 3} [psubscribe $rd1 {chan1.* chan2.* chan3.*}]\n        punsubscribe $rd1\n        wait_for_condition 100 10 {\n            [regexp {cmd=punsubscribe} [r client list]] eq 1\n        } else {\n            fail \"punsubscribe did not arrive\"\n        }\n        assert_equal 0 [r publish chan1.hi hello]\n        assert_equal 0 [r publish chan2.hi hello]\n        assert_equal 0 [r publish chan3.hi hello]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PubSub messages with CLIENT REPLY OFF\" {\n        set rd [redis_deferring_client]\n        $rd hello 3\n        $rd read ;# Discard the hello reply\n\n        # Test that the subscribe/psubscribe notification is ok\n        $rd client reply off\n        assert_equal {1} [subscribe $rd channel]\n        assert_equal {2} [psubscribe $rd ch*]\n\n        # Test that the publish notification is ok\n        $rd client reply off\n        assert_equal 2 [r publish channel hello]\n        assert_equal {message channel hello} [$rd read]\n        assert_equal {pmessage ch* channel hello} [$rd read]\n\n        # Test that the unsubscribe/punsubscribe notification is ok\n        $rd client reply off\n        assert_equal {1} [unsubscribe $rd channel]\n        assert_equal {0} [punsubscribe $rd ch*]\n\n  
      $rd close\n    } {0} {resp3}\n\n    test \"PUNSUBSCRIBE from non-subscribed channels\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {0 0 0} [punsubscribe $rd1 {foo.* bar.* quux.*}]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"NUMSUB returns numbers, not strings (#1561)\" {\n        r pubsub numsub abc def\n    } {abc 0 def 0}\n\n    test \"NUMPATs returns the number of unique patterns\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        # Three unique patterns and one that overlaps\n        psubscribe $rd1 \"foo*\"\n        psubscribe $rd2 \"foo*\"\n        psubscribe $rd1 \"bar*\"\n        psubscribe $rd2 \"baz*\"\n\n        set patterns [r pubsub numpat]\n\n        # clean up clients\n        punsubscribe $rd1\n        punsubscribe $rd2\n        assert_equal 3 $patterns\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"Mix SUBSCRIBE and PSUBSCRIBE\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [subscribe $rd1 {foo.bar}]\n        assert_equal {2} [psubscribe $rd1 {foo.*}]\n\n        assert_equal 2 [r publish foo.bar hello]\n        assert_equal {message foo.bar hello} [$rd1 read]\n        assert_equal {pmessage foo.* foo.bar hello} [$rd1 read]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PUNSUBSCRIBE and UNSUBSCRIBE should always reply\" {\n        # Make sure we are not subscribed to any channel at all.\n        r punsubscribe\n        r unsubscribe\n        # Now check if the commands still reply correctly.\n        set reply1 [r punsubscribe]\n        set reply2 [r unsubscribe]\n        concat $reply1 $reply2\n    } {punsubscribe {} 0 unsubscribe {} 0}\n\n    ### Keyspace events notification tests\n\n    test \"Keyspace notifications: we receive keyspace notifications\" {\n        r config set notify-keyspace-events KA\n        set rd1 [redis_deferring_client]\n        $rd1 CLIENT REPLY OFF ;# Make sure 
it works even if replies are silenced\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        assert_equal \"pmessage * __keyspace@${db}__:foo set\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: we receive keyevent notifications\" {\n        r config set notify-keyspace-events EA\n        r del foo\n        set rd1 [redis_deferring_client]\n        $rd1 CLIENT REPLY SKIP ;# Make sure it works even if replies are silenced\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        assert_equal \"pmessage * __keyevent@${db}__:set foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: we can receive both kind of events\" {\n        r config set notify-keyspace-events KEA\n        r del foo\n        set rd1 [redis_deferring_client]\n        $rd1 CLIENT REPLY ON ;# Just coverage\n        assert_equal {OK} [$rd1 read]\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        assert_equal \"pmessage * __keyspace@${db}__:foo set\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:set foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: we are able to mask events\" {\n        r config set notify-keyspace-events KEl\n        r del mylist\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        r lpush mylist a\n        # No notification for set, because only list commands are enabled.\n        assert_equal \"pmessage * __keyspace@${db}__:mylist lpush\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:lpush mylist\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: general events test\" {\n        r config set notify-keyspace-events KEg\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        r expire foo 1\n        r del foo\n        assert_equal 
\"pmessage * __keyspace@${db}__:foo expire\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:expire foo\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:foo del\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:del foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: list events test\" {\n        r config set notify-keyspace-events KEl\n        r del mylist\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r lpush mylist a\n        r rpush mylist a\n        r rpop mylist\n        assert_equal \"pmessage * __keyspace@${db}__:mylist lpush\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:lpush mylist\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mylist rpush\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:rpush mylist\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mylist rpop\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:rpop mylist\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: set events test\" {\n        r config set notify-keyspace-events Ks\n        r del myset\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r sadd myset a b c d\n        r srem myset x\n        r sadd myset x y z\n        r srem myset x\n        assert_equal \"pmessage * __keyspace@${db}__:myset sadd\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myset sadd\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myset srem\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: zset events test\" {\n        r config set notify-keyspace-events Kz\n        r del myzset\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r zadd myzset 1 a 2 b\n        r zrem myzset x\n        r zadd myzset 
3 x 4 y 5 z\n        r zrem myzset x\n        assert_equal \"pmessage * __keyspace@${db}__:myzset zadd\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myzset zadd\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myzset zrem\" [$rd1 read]\n        $rd1 close\n    }\n\n    foreach {type max_lp_entries} {listpackex 512 hashtable 0} {\n    test \"Keyspace notifications: hash events test ($type)\" {\n        r config set hash-max-listpack-entries $max_lp_entries\n        r config set notify-keyspace-events Khg\n        r del myhash\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r hmset myhash yes 1 no 0 f1 1 f2 2 f3_hdel 3\n        r hincrby myhash yes 10\n        r hexpire myhash 999999 FIELDS 1 yes\n        r hexpireat myhash [expr {[clock seconds] + 999999}] NX FIELDS 1 no\n        r hpexpire myhash 999999 FIELDS 1 yes\n        r hpersist myhash FIELDS 1 yes\n        r hpexpire myhash 0 FIELDS 1 yes\n        assert_encoding $type myhash\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hincrby\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hpersist\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        # Test that we will get `hexpired` notification when\n        # a hash field is removed by active expire.\n        r hpexpire myhash 10 FIELDS 1 no\n        after 100 ;# Wait for active expire\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n\n        # 
Test that when a field with TTL is deleted by commands like hdel without\n        # updating the global DS, active expire will not send a notification.\n        r hpexpire myhash 100 FIELDS 1 f3_hdel\n        r hdel myhash f3_hdel\n        after 200 ;# Wait for active expire\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        # Test that we will get `hexpired` notification when\n        # a hash field is removed by lazy expire.\n        r debug set-active-expire 0\n        r hpexpire myhash 10 FIELDS 2 f1 f2\n        after 20\n        r hmget myhash f1 f2 ;# Trigger lazy expire\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        # We should get only one `hexpired` notification even though two fields were expired.\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        # We should get a `del` notification after all fields were expired.\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n        r debug set-active-expire 1\n\n\n        # Test HSETEX, HGETEX and HGETDEL notifications\n        r hsetex myhash FIELDS 3 f4 v4 f5 v5 f6 v6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n\n        # hgetex sets ttl in the past\n        r hgetex myhash PX 0 FIELDS 1 f4\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        # hgetex sets ttl\n        r hgetex myhash EXAT [expr {[clock seconds] + 999999}] FIELDS 1 f5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n\n        # hgetex persists field\n        r hgetex myhash PERSIST FIELDS 1 f5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hpersist\" [$rd1 read]\n\n        # hgetex sets expiry for one field and lazy expiry deletes another field\n        # (KSN should be 1-hexpired 2-hexpire)\n        
r debug set-active-expire 0\n        r hsetex myhash PX 1 FIELDS 1 f5 v5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 10\n        r hgetex myhash EX 100 FIELDS 2 f5 f6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n\n        # hgetex lazy expiry deletes the only field and the key\n        # (KSN should be 1-hexpired 2-del)\n        r hsetex myhash PX 1 FIELDS 2 f5 v5 f6 v6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 10\n        r hgetex myhash FIELDS 2 f5 f6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n        r debug set-active-expire 1\n\n        # hgetex sets an expired ttl for the only field and deletes the key\n        # (KSN should be 1-hdel 2-del)\n        r hsetex myhash EX 100 FIELDS 1 f5 v5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 10\n        r hgetex myhash PX 0 FIELDS 1 f5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        r hsetex myhash FIELDS 2 f5 v5 f6 v6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n\n        # hgetdel deletes a field\n        r hgetdel myhash FIELDS 1 f5\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        # hsetex sets field and expiry time\n        r hsetex myhash EXAT [expr {[clock seconds] + 999999}] FIELDS 
1 f6 v6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n\n        # hsetex sets field and ttl in the past\n        r hsetex myhash PX 0 FIELDS 1 f6 v6\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        # Test that we will get `hexpired` notification when a hash field is\n        # removed by lazy expire using hgetdel command\n        r debug set-active-expire 0\n        r hsetex myhash PX 10 FIELDS 1 f1 v1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n\n        # Set another field\n        r hsetex myhash FIELDS 1 f2 v2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        # Wait until field expires\n        after 20\n        r hgetdel myhash FIELDS 1 f1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        # Get and delete the only field\n        r hgetdel myhash FIELDS 1 f2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        # HGETDEL deletes one field and the other field is lazily expired\n        # (KSN should be 1-hexpired 2-hdel)\n        r hsetex myhash FIELDS 2 f1 v1 f2 v2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        r hsetex myhash PX 1 FIELDS 1 f3 v3\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 10\n        r hgetdel myhash FIELDS 2 f1 f3\n        assert_equal 
\"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        # HGETDEL, deletes one field and the last field lazily expires\n        # (KSN should be 1-hexpired 2-hdel 3-del)\n        r hsetex myhash FIELDS 1 f1 v1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        r hsetex myhash PX 1 FIELDS 1 f2 v2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 10\n        r hgetdel myhash FIELDS 2 f1 f2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n        r debug set-active-expire 1\n\n        $rd1 close\n    } {0} {needs:debug}\n    } ;# foreach\n\n    test \"Keyspace notifications: stream events test\" {\n        r config set notify-keyspace-events Kt\n        r del mystream\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r xgroup create mystream mygroup $ mkstream\n        r xgroup createconsumer mystream mygroup Bob\n        set id [r xadd mystream 1 field1 A]\n        r xreadgroup group mygroup Alice STREAMS mystream >\n        r xclaim mystream mygroup Mike 0 $id force\n        # Not notify because of \"Lee\" not exists.\n        r xgroup delconsumer mystream mygroup Lee\n        # Not notify because of \"Bob\" exists.\n        r xautoclaim mystream mygroup Bob 0 $id\n        r xgroup delconsumer mystream mygroup Bob\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xgroup-create\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xgroup-createconsumer\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xadd\" 
[$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xgroup-createconsumer\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xgroup-createconsumer\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:mystream xgroup-delconsumer\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: FXX/FNX with HSETEX cmd\" {\n        r config set notify-keyspace-events Khxg\n        r del myhash\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r debug set-active-expire 0\n\n        # FXX on logically expired field\n        r hset myhash f v\n        r hset myhash f2 v\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        r hpexpire myhash 10 FIELDS 1 f\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 15\n        assert_equal [r HSETEX myhash FXX PX 10 FIELDS 1 f v] 0\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        r hdel myhash f2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal 0 [r exists myhash]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        # FXX with past expiry\n        r HSET myhash f1 v1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        set past [expr {[clock seconds] - 2}]\n        assert_equal [r hsetex myhash FXX EXAT $past FIELDS 1 f1 v1] 1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        # FXX overwrite + full key expiry\n        r hset myhash f v\n        r hset myhash f2 v\n        assert_equal 
\"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        r hpexpire myhash 10 FIELDS 1 f\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 15\n        set past [expr {[clock milliseconds] - 5000}]\n        assert_equal [r hsetex myhash FXX PXAT $past FIELDS 1 f v] 0\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        r hpexpire myhash 10 FIELDS 1 f2\n        after 15\n        r hget myhash f2\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n\n        # FNX on logically expired field\n        r del myhash\n        r hset myhash f v\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        r hpexpire myhash 10 FIELDS 1 f\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n        after 15\n        assert_equal [r HSETEX myhash FNX PX 1000 FIELDS 1 f v] 1\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpired\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hexpire\" [$rd1 read]\n\n        # FNX with past expiry\n        r del myhash\n        r hset myhash f v\n        assert_equal \"pmessage * __keyspace@${db}__:myhash del\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        set past [expr {[clock seconds] - 2}]\n        assert_equal [r hsetex myhash FNX EXAT $past FIELDS 1 f1 v1] 1\n        # f1 is created and immediately expired\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hset\" [$rd1 read]\n        assert_equal \"pmessage * __keyspace@${db}__:myhash hdel\" [$rd1 read]\n\n        r debug set-active-expire 1\n        $rd1 close\n    
} {0} {needs:debug}\n\n    test \"Keyspace notifications: expired events (triggered expire)\" {\n        r config set notify-keyspace-events Ex\n        r del foo\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r psetex foo 100 1\n        wait_for_condition 50 100 {\n            [r exists foo] == 0\n        } else {\n            fail \"Key does not expire?!\"\n        }\n        assert_equal \"pmessage * __keyevent@${db}__:expired foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: expired events (background expire)\" {\n        r config set notify-keyspace-events Ex\n        r del foo\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r psetex foo 100 1\n        assert_equal \"pmessage * __keyevent@${db}__:expired foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: evicted events\" {\n        r config set notify-keyspace-events Ee\n        r config set maxmemory-policy allkeys-lru\n        r flushdb\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        r config set maxmemory 1\n        assert_equal \"pmessage * __keyevent@${db}__:evicted foo\" [$rd1 read]\n        r config set maxmemory 0\n        $rd1 close\n        r config set maxmemory-policy noeviction\n    } {OK} {needs:config-maxmemory}\n\n    test \"Keyspace notifications: test CONFIG GET/SET of event flags\" {\n        r config set notify-keyspace-events gKE\n        assert_equal {gKE} [lindex [r config get notify-keyspace-events] 1]\n        r config set notify-keyspace-events {$lshzxeKE}\n        assert_equal {$lshzxeKE} [lindex [r config get notify-keyspace-events] 1]\n        r config set notify-keyspace-events KA\n        assert_equal {AK} [lindex [r config get notify-keyspace-events] 1]\n        r config set notify-keyspace-events EA\n        assert_equal {AE} [lindex [r config 
get notify-keyspace-events] 1]\n    }\n\n    test \"Keyspace notifications: new key test\" {\n        r config set notify-keyspace-events En\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n        r set foo bar\n        # second set of foo should not cause a 'new' event\n        r set foo baz \n        r set bar bar\n        assert_equal \"pmessage * __keyevent@${db}__:new foo\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:new bar\" [$rd1 read]\n        $rd1 close\n    }\n\n    ### overwritten and type_changed events\n\n    test \"Keyspace notifications: overwritten events - string to string\" {\n        r config set notify-keyspace-events Eo\n        r del foo\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # First set - should not trigger overwritten (new key)\n        r set foo bar\n\n        # Second set - should trigger overwritten (same type)\n        r set foo baz\n\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten foo\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: type_changed events - hash to string\" {\n        r config set notify-keyspace-events Ec\n        r del testkey\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # Set as hash first\n        r hset testkey field \"hash_value\"\n\n        # Change to string - should trigger type_changed\n        r set testkey \"string_value\"\n\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed testkey\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: both overwritten and type_changed events\" {\n        r config set notify-keyspace-events Eoc\n        r del testkey3\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # Set as hash first\n        r hset testkey3 field \"hash_value\"\n\n        # Change to string - 
should trigger both overwritten and type_changed\n        r set testkey3 \"string_value\"\n\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten testkey3\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed testkey3\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: configuration flags work correctly\" {\n        # Test that 'o' flag enables override notifications\n        r config set notify-keyspace-events o\n        set config [r config get notify-keyspace-events]\n        assert {[lindex $config 1] eq \"o\"}\n\n        # Test that 'c' flag enables type_changed notifications\n        r config set notify-keyspace-events c\n        set config [r config get notify-keyspace-events]\n        assert {[lindex $config 1] eq \"c\"}\n\n        # Test that both flags can be combined\n        r config set notify-keyspace-events oc\n        set config [r config get notify-keyspace-events]\n        assert {[lindex $config 1] eq \"oc\"}\n    }\n\n    ### RESTORE command tests for type_changed KSN types\n\n    test \"Keyspace notifications: RESTORE REPLACE different type - restore, overwritten and type_changed events\" {\n        r config set notify-keyspace-events Egoc\n        r del restore_test_key3\n\n        # Create a string value and dump it (do this before subscribing)\n        r set temp_key \"string_value\"\n        set dump_data [r dump temp_key]\n        r del temp_key\n\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # Create initial hash key\n        r hset restore_test_key3 field \"hash_value\"\n\n        # Restore with REPLACE - should emit restore, overwritten and type_changed events\n        r restore restore_test_key3 0 $dump_data REPLACE\n\n        assert_equal \"pmessage * __keyevent@${db}__:restore restore_test_key3\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten restore_test_key3\" [$rd1 read]\n        
assert_equal \"pmessage * __keyevent@${db}__:type_changed restore_test_key3\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    ### SET command tests for overwritten and type_changed KSN types\n\n    test \"Keyspace notifications: SET on existing string key - overwritten event\" {\n        r config set notify-keyspace-events EAo\n        r del set_test_key1\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # Create initial string key\n        r set set_test_key1 \"initial_value\"\n        assert_equal \"pmessage * __keyevent@${db}__:set set_test_key1\" [$rd1 read]\n\n        # Set new value on existing string key - should emit overwritten event\n        r set set_test_key1 \"new_value\"\n\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten set_test_key1\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:set set_test_key1\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: setKey on existing different type key - overwritten and type_changed events\" {\n        r config set notify-keyspace-events Eoc\n\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        r flushdb\n        r hset set_test_key2 field \"hash_value\"\n        r set set_test_key2 \"string_value\"\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten set_test_key2\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed set_test_key2\" [$rd1 read]\n\n        # overwritten and type_changed events should be emitted for any->any\n        # type conversion that uses the setKey command\n        r flushdb\n        r lpush l{t} 1 2 3\n        r sadd s1{t} \"A\"\n        r sadd s2{t} \"B\"\n        r sunionstore l{t} s1{t} s2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten l{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed l{t}\" [$rd1 read]\n\n        r flushdb\n        r 
sadd s1{t} \"A\"\n        r set x{t} \"\\x0f\"\n        r set y{t} \"\\xff\"\n        r bitop and s1{t} x{t} y{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten s1{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed s1{t}\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: overwritten and type_changed events for RENAME and COPY commands\" {\n        r config set notify-keyspace-events Eoc\n\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        # test COPY events\n        r flushdb\n        r hset hs{t} 1 2 3 4\n        r lpush l{t} 1 2 3 4\n        r copy hs{t} l{t} replace\n\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten l{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed l{t}\" [$rd1 read]\n\n        # test RENAME events\n        r flushdb\n        r hset hs{t} field \"hash_value\"\n        r sadd x{t} 1 2 3\n        r rename x{t} hs{t}\n\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten hs{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed hs{t}\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    test \"Keyspace notifications: overwritten and type_changed for *STORE* commands\" {\n        r config set notify-keyspace-events Eoc\n\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 *]\n\n        r flushdb\n        r set x{t} x\n\n        # SORT\n        r lpush l{t} 4 3 2 1\n        r sort l{t} store x{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten x{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed x{t}\" [$rd1 read]\n\n        # SDIFFSTORE\n        r sadd s1{t} a b c d\n        r sadd s2{t} b e f\n        r sdiffstore x{t} s1{t} s2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten x{t}\" [$rd1 read]\n        assert_equal 
\"pmessage * __keyevent@${db}__:type_changed x{t}\" [$rd1 read]\n\n        # SINTERSTORE\n        r set d1{t} x\n        r sinterstore d1{t} s1{t} s2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d1{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d1{t}\" [$rd1 read]\n\n        # SUNIONSTORE\n        r set d2{t} x\n        r sunionstore d2{t} s1{t} s2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d2{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d2{t}\" [$rd1 read]\n\n        # ZUNIONSTORE\n        r set d3{t} x\n        r zadd z1{t} 1 a 2 b\n        r zadd z2{t} 3 c 4 d\n        r zunionstore d3{t} 2 z1{t} z2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d3{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d3{t}\" [$rd1 read]\n\n        # ZINTERSTORE\n        r set d4{t} x\n        r zadd z2{t} 2 a\n        r zinterstore d4{t} 2 z1{t} z2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d4{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d4{t}\" [$rd1 read]\n\n        # ZDIFFSTORE\n        r set d5{t} x\n        r zdiffstore d5{t} 2 z1{t} z2{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d5{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d5{t}\" [$rd1 read]\n\n        # ZRANGESTORE\n        r set d6{t} x\n        r zadd zsrc{t} 1 a 2 b 3 c 4 d\n        r zrangestore d6{t} zsrc{t} 1 2\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d6{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d6{t}\" [$rd1 read]\n\n        # GEORADIUS with STORE\n        r set d7{t} x\n        r geoadd geo{t} 13.361389 38.115556 \"Palermo\" 15.087269 37.502669 \"Catania\"\n        r georadius geo{t} 15 37 200 km store d7{t}\n        assert_equal \"pmessage * 
__keyevent@${db}__:overwritten d7{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d7{t}\" [$rd1 read]\n\n        # GEORADIUS with STOREDIST\n        r set d8{t} x\n        r georadius geo{t} 15 37 200 km storedist d8{t}\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d8{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d8{t}\" [$rd1 read]\n\n        # GEOSEARCHSTORE\n        r set d9{t} x\n        r geosearchstore d9{t} geo{t} fromlonlat 15 37 byradius 200 km\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d9{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d9{t}\" [$rd1 read]\n\n        # GEOSEARCHSTORE with STOREDIST\n        r set d10{t} x\n        r geosearchstore d10{t} geo{t} fromlonlat 15 37 byradius 200 km storedist\n        assert_equal \"pmessage * __keyevent@${db}__:overwritten d10{t}\" [$rd1 read]\n        assert_equal \"pmessage * __keyevent@${db}__:type_changed d10{t}\" [$rd1 read]\n\n        $rd1 close\n    }\n\n    ### Subkey-level notification tests for HASH type ###\n\n    # Helper: build expected payload \"event|len:field0,len:field1,...\"\n    proc build_expected_payload {event prefix count} {\n        set parts {}\n        for {set i 0} {$i < $count} {incr i} {\n            set f \"${prefix}${i}\"\n            lappend parts \"[string length $f]:$f\"\n        }\n        return \"${event}|[join $parts ,]\"\n    }\n\n    # Compare subkey notification payloads as sets (order-insensitive).\n    # Parses \"event|f1,f2,...\" and checks event matches and fields match as sets.\n    proc assert_subkey_payload_equal {expected actual} {\n        set ep [split $expected \"|\"]\n        set ap [split $actual \"|\"]\n        assert_equal [lindex $ep 0] [lindex $ap 0] ;# event name\n        set ef [lsort [split [lindex $ep 1] \",\"]]\n        set af [lsort [split [lindex $ap 1] \",\"]]\n        assert_equal $ef $af\n    
}\n\n    # Generate N field-value pairs: {f0 v0 f1 v1 ...}\n    proc gen_field_values {prefix n} {\n        set args {}\n        for {set i 0} {$i < $n} {incr i} {\n            lappend args \"${prefix}${i}\" \"v${i}\"\n        }\n        return $args\n    }\n\n    # Generate N field names: {f0 f1 ...}\n    proc gen_fields {prefix n} {\n        set fields {}\n        for {set i 0} {$i < $n} {incr i} {\n            lappend fields \"${prefix}${i}\"\n        }\n        return $fields\n    }\n\n    # Subkey notification: subkeyspace channel\n    foreach {type max_lp_entries} {listpackex 512 hashtable 0} {\n        r config set hash-max-listpack-entries $max_lp_entries\n        r config set notify-keyspace-events Sh\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [subscribe $rd1 \"__subkeyspace@${db}__:myhash\"]\n\n    test \"Subkey notifications: subkeyspace - HSET single field ($type)\" {\n        r del myhash\n        r hset myhash f1 v1\n        assert_equal \"message __subkeyspace@${db}__:myhash hset|2:f1\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: subkeyspace - HINCRBY ($type)\" {\n        r del myhash\n        r hset myhash counter 10\n        r hincrby myhash counter 5\n        assert_equal \"message __subkeyspace@${db}__:myhash hset|7:counter\" [$rd1 read]\n        assert_equal \"message __subkeyspace@${db}__:myhash hincrby|7:counter\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: subkeyspace - HSETNX ($type)\" {\n        r del myhash\n        r hsetnx myhash newfield val\n        assert_equal \"message __subkeyspace@${db}__:myhash hset|8:newfield\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: subkeyspace - HINCRBYFLOAT ($type)\" {\n        r del myhash\n        r hset myhash counter 10.5\n        r hincrbyfloat myhash counter 2.5\n        assert_equal \"message __subkeyspace@${db}__:myhash hset|7:counter\" [$rd1 read]\n        assert_equal \"message __subkeyspace@${db}__:myhash hincrbyfloat|7:counter\" [$rd1 
read]\n    }\n\n    # Test with N=3 (stack path, within FIELDS_STACK_SIZE=16) and\n    # N=32 (heap path, exceeds FIELDS_STACK_SIZE).\n    foreach N {3 32} {\n\n    test \"Subkey notifications: HSET $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        set expected [build_expected_payload \"hset\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HDEL $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset notification\n        r hdel myhash {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hdel\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HGETDEL $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset notification\n        r hgetdel myhash FIELDS $N {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hdel\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HEXPIRE $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset notification\n        r hexpire myhash 1000 FIELDS $N {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hexpire\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HEXPIRE past timestamp $N fields ($type, [expr {$N <= 16 ? 
{stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset notification\n        r hexpireat myhash 1 FIELDS $N {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hdel\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HPERSIST $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        set fields [gen_fields \"f\" $N]\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        r hexpire myhash 1000 FIELDS $N {*}$fields\n        $rd1 read ;# consume hset\n        $rd1 read ;# consume hexpire\n        r hpersist myhash FIELDS $N {*}$fields\n        set expected [build_expected_payload \"hpersist\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HGETEX with expire $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset\n        r hgetex myhash EX 1000 FIELDS $N {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hexpire\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HGETEX with persist $N fields ($type, [expr {$N <= 16 ? 
{stack} : {heap}}])\" {\n        r del myhash\n        set fields [gen_fields \"f\" $N]\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        r hexpire myhash 1000 FIELDS $N {*}$fields\n        $rd1 read ;# consume hset\n        $rd1 read ;# consume hexpire\n        r hgetex myhash PERSIST FIELDS $N {*}$fields\n        set expected [build_expected_payload \"hpersist\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HGETEX past timestamp $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hset myhash {*}[gen_field_values \"f\" $N]\n        $rd1 read ;# consume hset\n        r hgetex myhash PX 0 FIELDS $N {*}[gen_fields \"f\" $N]\n        set expected [build_expected_payload \"hdel\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HSETEX $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        r hsetex myhash EX 1000 FIELDS $N {*}[gen_field_values \"f\" $N]\n        set expected_hset [build_expected_payload \"hset\" \"f\" $N]\n        set expected_hexpire [build_expected_payload \"hexpire\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected_hset\" [$rd1 read]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected_hexpire\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: HSETEX past timestamp $N fields ($type, [expr {$N <= 16 ? 
{stack} : {heap}}])\" {\n        r del myhash\n        r hsetex myhash PX 0 FIELDS $N {*}[gen_field_values \"f\" $N]\n        set expected_hset [build_expected_payload \"hset\" \"f\" $N]\n        set expected_hdel [build_expected_payload \"hdel\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected_hset\" [$rd1 read]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected_hdel\" [$rd1 read]\n    }\n\n    test \"Subkey notifications: lazy field expiry triggers hexpired $N fields ($type, [expr {$N <= 16 ? {stack} : {heap}}])\" {\n        r del myhash\n        # Create N+1 fields, expire N of them; keep one to prevent hash deletion.\n        set fields [gen_fields \"f\" $N]\n        set args [gen_field_values \"f\" $N]\n        lappend args \"keep\" \"val\"\n        r hset myhash {*}$args\n        r debug set-active-expire 0\n        r hpexpire myhash 10 FIELDS $N {*}$fields\n        $rd1 read ;# consume hset\n        $rd1 read ;# consume hexpire\n        # Trigger lazy expiry by reading the fields\n        after 100\n        r hmget myhash {*}$fields\n        set expected_hexpired [build_expected_payload \"hexpired\" \"f\" $N]\n        assert_equal \"message __subkeyspace@${db}__:myhash $expected_hexpired\" [$rd1 read]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test \"Subkey notifications: active field expiry triggers hexpired $N fields ($type, [expr {$N <= 16 ? 
{stack} : {heap}}])\" {\n        r del myhash\n        # Create N+1 fields, expire N of them; keep one to prevent hash deletion.\n        set fields [gen_fields \"f\" $N]\n        set args [gen_field_values \"f\" $N]\n        lappend args \"keep\" \"val\"\n        r hset myhash {*}$args\n        r hpexpire myhash 10 FIELDS $N {*}$fields\n        $rd1 read ;# consume hset\n        $rd1 read ;# consume hexpire\n        # Wait for active expiry; field order depends on hash table iteration,\n        # so compare as set.\n        set expected_hexpired [build_expected_payload \"hexpired\" \"f\" $N]\n        set actual [$rd1 read]\n        set prefix \"message __subkeyspace@${db}__:myhash \"\n        assert_equal $prefix [string range $actual 0 [expr {[string length $prefix]-1}]]\n        assert_subkey_payload_equal $expected_hexpired [string range $actual [string length $prefix] end]\n    }\n    } ;# end foreach N\n    $rd1 close\n    } ;# end foreach type\n\n    # Subkey notification format tests for subkeyevent/subkeyspaceitem/subkeyspaceevent\n    # Full command coverage is done via subkeyspace channel below; here we only verify channel format.\n    foreach {type max_lp_entries} {listpackex 512 hashtable 0} {\n        r config set hash-max-listpack-entries $max_lp_entries\n\n    test \"Subkey notifications: subkeyevent format ($type)\" {\n        r config set notify-keyspace-events Th\n        r del myhash\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [subscribe $rd1 \"__subkeyevent@${db}__:hset\"]\n        r hset myhash f1 v1 f2 v2 f3 v3\n        assert_equal \"message __subkeyevent@${db}__:hset 6:myhash|2:f1,2:f2,2:f3\" [$rd1 read]\n        $rd1 close\n    }\n\n    test \"Subkey notifications: subkeyspaceitem format ($type)\" {\n        r config set notify-keyspace-events Ih\n        r del myhash\n        set rd1 [redis_deferring_client]\n        $rd1 subscribe \"__subkeyspaceitem@${db}__:myhash\\nf1\"\n        $rd1 read ;# consume subscribe 
confirmation\n        r hset myhash f1 v1\n        set msg [$rd1 read]\n        assert_equal \"message\" [lindex $msg 0]\n        assert_equal \"__subkeyspaceitem@${db}__:myhash\\nf1\" [lindex $msg 1]\n        assert_equal \"hset\" [lindex $msg 2]\n        $rd1 close\n    }\n\n    test \"Subkey notifications: subkeyspaceitem per-subkey delivery with psubscribe ($type)\" {\n        r config set notify-keyspace-events Ih\n        r del myhash\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 \"__subkeyspaceitem@${db}__:myhash*\"]\n        r hset myhash f1 v1 f2 v2\n        # Should get one notification per subkey\n        set msg1 [$rd1 read]\n        set msg2 [$rd1 read]\n        assert_equal \"pmessage\" [lindex $msg1 0]\n        assert_equal \"__subkeyspaceitem@${db}__:myhash\\nf1\" [lindex $msg1 2]\n        assert_equal \"hset\" [lindex $msg1 3]\n        assert_equal \"pmessage\" [lindex $msg2 0]\n        assert_equal \"__subkeyspaceitem@${db}__:myhash\\nf2\" [lindex $msg2 2]\n        assert_equal \"hset\" [lindex $msg2 3]\n        $rd1 close\n    }\n\n    test \"Subkey notifications: subkeyspaceitem skips key with newline ($type)\" {\n        r config set notify-keyspace-events Ih\n        r del \"key\\nwith\\nnewline\"\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [psubscribe $rd1 \"__subkeyspaceitem@${db}__:*\"]\n        r hset \"key\\nwith\\nnewline\" f1 v1\n        # Normal key to verify notifications still work\n        r hset normalkey f1 v1\n        # Should only get notification for normalkey\n        set msg [$rd1 read]\n        assert_equal \"pmessage\" [lindex $msg 0]\n        assert_equal \"__subkeyspaceitem@${db}__:normalkey\\nf1\" [lindex $msg 2]\n        assert_equal \"hset\" [lindex $msg 3]\n        r del \"key\\nwith\\nnewline\"\n        r del normalkey\n        $rd1 close\n    }\n\n    test \"Subkey notifications: subkeyspaceevent format ($type)\" {\n        r config set 
notify-keyspace-events Vh\n        r del myhash\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [subscribe $rd1 \"__subkeyspaceevent@${db}__:hset|myhash\"]\n        r hset myhash f1 v1 f2 v2\n        assert_equal \"message __subkeyspaceevent@${db}__:hset|myhash 2:f1,2:f2\" [$rd1 read]\n        $rd1 close\n    }\n    } ;\n\n    # Test all 4 channels enabled simultaneously\n    test \"Subkey notifications: all 4 channels enabled simultaneously\" {\n        r config set notify-keyspace-events STIVh\n        r del myhash\n        set rd_s [redis_deferring_client]\n        set rd_t [redis_deferring_client]\n        set rd_i [redis_deferring_client]\n        set rd_v [redis_deferring_client]\n        assert_equal {1} [subscribe $rd_s \"__subkeyspace@${db}__:myhash\"]\n        assert_equal {1} [subscribe $rd_t \"__subkeyevent@${db}__:hset\"]\n        assert_equal {1} [subscribe $rd_v \"__subkeyspaceevent@${db}__:hset|myhash\"]\n        $rd_i subscribe \"__subkeyspaceitem@${db}__:myhash\\nf1\"\n        $rd_i read ;# consume subscribe confirmation\n        r hset myhash f1 v1\n        assert_equal \"message __subkeyspace@${db}__:myhash hset|2:f1\" [$rd_s read]\n        assert_equal \"message __subkeyevent@${db}__:hset 6:myhash|2:f1\" [$rd_t read]\n        assert_equal \"message __subkeyspaceevent@${db}__:hset|myhash 2:f1\" [$rd_v read]\n        set msg_i [$rd_i read]\n        assert_equal \"message\" [lindex $msg_i 0]\n        assert_equal \"__subkeyspaceitem@${db}__:myhash\\nf1\" [lindex $msg_i 1]\n        assert_equal \"hset\" [lindex $msg_i 2]\n        $rd_s close\n        $rd_t close\n        $rd_i close\n        $rd_v close\n    }\n\n    # Test that subkey notifications are triggered on replica after replication\n    test \"Subkey notifications: replica receives subkey notifications after replication\" {\n        start_server {tags {\"repl external:skip\"}} {\n            set master [srv -1 client]\n            set master_host [srv -1 host]\n        
    set master_port [srv -1 port]\n            set replica [srv 0 client]\n\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            # Enable subkeyspace notifications on replica\n            $replica config set notify-keyspace-events Sh\n\n            # Subscribe on replica\n            set rd1 [redis_deferring_client -1]\n            assert_equal {1} [subscribe $rd1 \"__subkeyspace@${db}__:myhash\"]\n\n            # Write on master\n            $master hset myhash f1 v1 f2 v2\n            $master hpexpire myhash 100 FIELDS 2 f1 f2\n\n            # Replica should receive subkey notification\n            assert_equal \"message __subkeyspace@${db}__:myhash hset|2:f1,2:f2\" [$rd1 read]\n            assert_equal \"message __subkeyspace@${db}__:myhash hexpire|2:f1,2:f2\" [$rd1 read]\n            assert_equal \"message __subkeyspace@${db}__:myhash hexpired|2:f1,2:f2\" [$rd1 read]\n            $rd1 close\n            $master del myhash\n        }\n    }\n\n    test \"publish to self inside multi\" {\n        r hello 3\n        r subscribe foo\n        r multi\n        r ping abc\n        r publish foo bar\n        r publish foo vaz\n        r ping def\n        assert_equal [r exec] {abc 1 1 def}\n        assert_equal [r read] {message foo bar}\n        assert_equal [r read] {message foo vaz}\n    } {} {resp3}\n\n    test \"publish to self inside script\" {\n        r hello 3\n        r subscribe foo\n        set res [r eval {\n                redis.call(\"ping\",\"abc\")\n                redis.call(\"publish\",\"foo\",\"bar\")\n                redis.call(\"publish\",\"foo\",\"vaz\")\n                redis.call(\"ping\",\"def\")\n                return \"bla\"} 0]\n        assert_equal $res {bla}\n        assert_equal [r read] {message foo bar}\n        assert_equal [r read] {message foo vaz}\n    } {} {resp3}\n\n    test \"unsubscribe inside multi, and publish to self\" {\n        r hello 3\n\n        # Note: SUBSCRIBE 
and UNSUBSCRIBE with multiple channels in the same command\n        # breaks the multi response, see https://github.com/redis/redis/issues/12207\n        # this is just a temporary sanity test to detect unintended breakage.\n\n        # subscribe for 3 channels actually emits 3 \"responses\"\n        assert_equal \"subscribe foo 1\" [r subscribe foo bar baz]\n        assert_equal \"subscribe bar 2\" [r read]\n        assert_equal \"subscribe baz 3\" [r read]\n\n        r multi\n        r ping abc\n        r unsubscribe bar\n        r unsubscribe baz\n        r ping def\n        assert_equal [r exec] {abc {unsubscribe bar 2} {unsubscribe baz 1} def}\n\n        # published message comes after the publish command's response.\n        assert_equal [r publish foo vaz] {1}\n        assert_equal [r read] {message foo vaz}\n    } {} {resp3}\n\n}\n"
  },
  {
    "path": "tests/unit/pubsubshard.tcl",
    "content": "start_server {tags {\"pubsubshard external:skip\"}} {\n    test \"SPUBLISH/SSUBSCRIBE basics\" {\n        set rd1 [redis_deferring_client]\n\n        # subscribe to two channels\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {2} [ssubscribe $rd1 {chan2}]\n        assert_equal 1 [r SPUBLISH chan1 hello]\n        assert_equal 1 [r SPUBLISH chan2 world]\n        assert_equal {smessage chan1 hello} [$rd1 read]\n        assert_equal {smessage chan2 world} [$rd1 read]\n\n        # unsubscribe from one of the channels\n        sunsubscribe $rd1 {chan1}\n        assert_equal 0 [r SPUBLISH chan1 hello]\n        assert_equal 1 [r SPUBLISH chan2 world]\n        assert_equal {smessage chan2 world} [$rd1 read]\n\n        # unsubscribe from the remaining channel\n        sunsubscribe $rd1 {chan2}\n        assert_equal 0 [r SPUBLISH chan1 hello]\n        assert_equal 0 [r SPUBLISH chan2 world]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"SPUBLISH/SSUBSCRIBE with two clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {1} [ssubscribe $rd2 {chan1}]\n        assert_equal 2 [r SPUBLISH chan1 hello]\n        assert_equal {smessage chan1 hello} [$rd1 read]\n        assert_equal {smessage chan1 hello} [$rd2 read]\n\n        # clean up clients\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"SPUBLISH/SSUBSCRIBE after UNSUBSCRIBE without arguments\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {2} [ssubscribe $rd1 {chan2}]\n        assert_equal {3} [ssubscribe $rd1 {chan3}]\n        sunsubscribe $rd1\n        wait_for_condition 100 10 {\n            [regexp {cmd=sunsubscribe} [r client list]] eq 1\n        } else {\n            fail \"sunsubscribe did not arrive\"\n        }\n        assert_equal 0 [r SPUBLISH chan1 
hello]\n        assert_equal 0 [r SPUBLISH chan2 hello]\n        assert_equal 0 [r SPUBLISH chan3 hello]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"SSUBSCRIBE to one channel more than once\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {1 1 1} [ssubscribe $rd1 {chan1 chan1 chan1}]\n        assert_equal 1 [r SPUBLISH chan1 hello]\n        assert_equal {smessage chan1 hello} [$rd1 read]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"SUNSUBSCRIBE from non-subscribed channels\" {\n        set rd1 [redis_deferring_client]\n        assert_equal {0} [sunsubscribe $rd1 {foo}]\n        assert_equal {0} [sunsubscribe $rd1 {bar}]\n        assert_equal {0} [sunsubscribe $rd1 {quux}]\n\n        # clean up clients\n        $rd1 close\n    }\n\n    test \"PUBSUB command basics\" {\n        r pubsub shardnumsub abc def\n    } {abc 0 def 0}\n\n    test \"SPUBLISH/SSUBSCRIBE with two clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {1} [ssubscribe $rd2 {chan1}]\n        assert_equal 2 [r SPUBLISH chan1 hello]\n        assert_equal \"chan1 2\" [r pubsub shardnumsub chan1]\n        assert_equal \"chan1\" [r pubsub shardchannels]\n\n        # clean up clients\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"SPUBLISH/SSUBSCRIBE with PUBLISH/SUBSCRIBE\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        assert_equal {1} [subscribe $rd2 {chan1}]\n        assert_equal 1 [r SPUBLISH chan1 hello]\n        assert_equal 1 [r publish chan1 hello]\n        assert_equal \"chan1 1\" [r pubsub shardnumsub chan1]\n        assert_equal \"chan1 1\" [r pubsub numsub chan1]\n        assert_equal \"chan1\" [r pubsub shardchannels]\n        assert_equal \"chan1\" [r pubsub channels]\n\n        $rd1 
close\n        $rd2 close\n    }\n\n    test \"PubSubShard with CLIENT REPLY OFF\" {\n        set rd [redis_deferring_client]\n        $rd hello 3\n        $rd read ;# Discard the hello reply\n\n        # Test that the ssubscribe notification is ok\n        $rd client reply off\n        $rd ping\n        assert_equal {1} [ssubscribe $rd channel]\n\n        # Test that the spublish notification is ok\n        $rd client reply off\n        $rd ping\n        assert_equal 1 [r spublish channel hello]\n        assert_equal {smessage channel hello} [$rd read]\n\n        # Test that sunsubscribe notification is ok\n        $rd client reply off\n        $rd ping\n        assert_equal {0} [sunsubscribe $rd channel]\n\n        $rd close\n    }\n}\n\nstart_server {tags {\"pubsubshard external:skip\"}} {\nstart_server {tags {\"pubsubshard external:skip\"}} {\n    set node_0 [srv 0 client]\n    set node_0_host [srv 0 host]\n    set node_0_port [srv 0 port]\n\n    set node_1 [srv -1 client]\n    set node_1_host [srv -1 host]\n    set node_1_port [srv -1 port]\n\n    test {setup replication for following tests} {\n        $node_1 replicaof $node_0_host $node_0_port\n        wait_for_sync $node_1\n    }\n\n    test {publish message to master and receive on replica} {\n        # rd0 connects to node_0 (the master, srv 0) and rd1 to node_1 (the\n        # replica, srv -1); the helper takes a server level, not host/port.\n        set rd0 [redis_deferring_client]\n        set rd1 [redis_deferring_client -1]\n\n        assert_equal {1} [ssubscribe $rd1 {chan1}]\n        $rd0 SPUBLISH chan1 hello\n        assert_equal {smessage chan1 hello} [$rd1 read]\n        $rd0 SPUBLISH chan1 world\n        assert_equal {smessage chan1 world} [$rd1 read]\n\n        # clean up clients\n        $rd0 close\n        $rd1 close\n    }\n}\n}"
  },
  {
    "path": "tests/unit/querybuf.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\nproc client_idle_sec {name} {\n    set clients [split [r client list] \"\\r\\n\"]\n    set c [lsearch -inline $clients *name=$name*]\n    assert {[regexp {idle=([0-9]+)} $c - idle]}\n    return $idle\n}\n\n# Calculate query buffer memory of client\nproc client_query_buffer {name} {\n    set clients [split [r client list] \"\\r\\n\"]\n    set c [lsearch -inline $clients *name=$name*]\n    if {[string length $c] > 0} {\n        assert {[regexp {qbuf=([0-9]+)} $c - qbuf]}\n        assert {[regexp {qbuf-free=([0-9]+)} $c - qbuf_free]}\n        return [expr $qbuf + $qbuf_free]\n    }\n    return 0\n}\n\nstart_server {tags {\"querybuf slow\"}} {\n    # increase the execution frequency of clientsCron\n    r config set hz 100\n\n    # The test will run at least 2s to check if client query\n    # buffer will be resized when client idle 2s.\n    test \"query buffer resized correctly\" {\n\n        set rd [redis_deferring_client]\n\n        $rd client setname test_client\n        $rd read\n\n        # Make sure query buff has size of 0 bytes at start as the client uses the reusable qb.\n        assert {[client_query_buffer test_client] == 0}\n\n        # Pause cron to prevent premature shrinking (timing issue).\n        r debug pause-cron 1\n\n        # Send partial command to client to make sure it doesn't use the reusable qb.\n        $rd write \"*3\\r\\n\\$3\\r\\nset\\r\\n\\$2\\r\\na\"\n        $rd flush\n        # Wait for the client to start using a private query buffer. 
\n        wait_for_condition 1000 10 {\n            [client_query_buffer test_client] > 0\n        } else {\n            fail \"client should start using a private query buffer\"\n        }\n     \n        # send the rest of the command\n        $rd write \"a\\r\\n\\$1\\r\\nb\\r\\n\"\n        $rd flush\n        assert_equal {OK} [$rd read]\n\n        set orig_test_client_qbuf [client_query_buffer test_client]\n        # Make sure query buff has less than the peak resize threshold (PROTO_RESIZE_THRESHOLD) 32k\n        # but at least the basic IO reading buffer size (PROTO_IOBUF_LEN) 16k\n        set MAX_QUERY_BUFFER_SIZE [expr 32768 + 2] ; # 32k + 2, allowing for potential greedy allocation of (16k + 1) * 2 bytes for the query buffer.\n        assert {$orig_test_client_qbuf >= 16384 && $orig_test_client_qbuf <= $MAX_QUERY_BUFFER_SIZE}\n\n        # Allow shrinking to occur\n        r debug pause-cron 0\n\n        # Check that the initial query buffer is resized after 2 sec\n        wait_for_condition 1000 10 {\n            [client_idle_sec test_client] >= 3 && [client_query_buffer test_client] < $orig_test_client_qbuf\n        } else {\n            fail \"query buffer was not resized\"\n        }\n        $rd close\n    }\n\n    test \"query buffer resized correctly when not idle\" {\n        # Pause cron to prevent premature shrinking (timing issue).\n        r debug pause-cron 1\n\n        # Memory will increase by more than 32k due to client query buffer.\n        set rd [redis_client]\n        $rd client setname test_client\n\n        # Create a large query buffer (more than PROTO_RESIZE_THRESHOLD - 32k)\n        $rd set x [string repeat A 400000]\n\n        # Make sure query buff is larger than the peak resize threshold (PROTO_RESIZE_THRESHOLD) 32k\n        set orig_test_client_qbuf [client_query_buffer test_client]\n        assert {$orig_test_client_qbuf > 32768}\n\n        r debug pause-cron 0\n\n        # Wait for qbuf to shrink due to lower peak\n        set 
t [clock milliseconds]\n        while true {\n            # Write something smaller, so query buf peak can shrink\n            $rd set x [string repeat A 100]\n            set new_test_client_qbuf [client_query_buffer test_client]\n            if {$new_test_client_qbuf < $orig_test_client_qbuf} { break } \n            if {[expr [clock milliseconds] - $t] > 1000} { break }\n            after 10\n        }\n        # Validate qbuf shrunk but isn't 0 since we maintain room based on latest peak\n        assert {[client_query_buffer test_client] > 0 && [client_query_buffer test_client] < $orig_test_client_qbuf}\n        $rd close\n    } {0} {needs:debug}\n\n    test \"query buffer resized correctly with fat argv\" {\n        set rd [redis_client]\n        $rd client setname test_client\n\n        # Pause cron to prevent premature shrinking (timing issue).\n        r debug pause-cron 1\n\n        $rd write \"*3\\r\\n\\$3\\r\\nset\\r\\n\\$1\\r\\na\\r\\n\\$1000000\\r\\n\"\n        $rd flush\n\n        # Wait for the client to start using a private query buffer of > 1000000 size.\n        wait_for_condition 1000 10 {\n            [client_query_buffer test_client] > 1000000\n        } else {\n            fail \"client should start using a private query buffer\"\n        }\n        \n        # Send the start of the arg and make sure the client is not using reusable qb for it rather a private buf of > 1000000 size.\n        $rd write \"a\" \n        $rd flush\n\n        r debug pause-cron 0\n\n        after 120\n        if {[client_query_buffer test_client] < 1000000} {\n            fail \"query buffer should not be resized when client idle time smaller than 2s\"\n        }\n     \n        # Check that the query buffer is resized after 2 sec\n        wait_for_condition 1000 10 {\n            [client_idle_sec test_client] >= 3 && [client_query_buffer test_client] < 1000000\n        } else {\n            fail \"query buffer should be resized when client idle time bigger than 
2s\"\n        }\n     \n        $rd close\n    }\n}\n\nstart_server {tags {\"querybuf\"}} {\n    test \"Client executes small argv commands using reusable query buffer\" {\n        set rd [redis_deferring_client]\n        $rd client setname test_client\n        $rd read\n        set res [r client list]\n\n        # Verify that the client does not create a private query buffer after\n        # executing a small parameter command.\n        assert_match {*name=test_client * qbuf=0 qbuf-free=0 * cmd=client|setname *} $res \n\n        # The client executing the command is currently using the reusable query buffer,\n        # so the size shown is that of the reusable query buffer. It will be returned\n        # to the reusable query buffer after command execution.\n        # Note that if IO threads are enabled, the reusable query buffer will be dereferenced earlier.\n        if {[lindex [r config get io-threads] 1] == 1} {\n            assert_match {*qbuf=26 qbuf-free=* cmd=client|list *} $res\n        } else {\n            assert_match {*qbuf=0 qbuf-free=* cmd=client|list *} $res\n        }\n\n        $rd close\n    } \n}\n"
  },
  {
    "path": "tests/unit/quit.tcl",
    "content": "start_server {tags {\"quit\"}} {\n\n    test \"QUIT returns OK\" {\n        reconnect\n        assert_equal OK [r quit]\n        assert_error * {r ping}\n    }\n\n    test \"Pipelined commands after QUIT must not be executed\" {\n        reconnect\n        r write [format_command quit]\n        r write [format_command set foo bar]\n        r flush\n        assert_equal OK [r read]\n        assert_error * {r read}\n\n        reconnect\n        assert_equal {} [r get foo]\n    }\n\n    test \"Pipelined commands after QUIT that exceed read buffer size\" {\n        reconnect\n        r write [format_command quit]\n        r write [format_command set foo [string repeat \"x\" 1024]]\n        r flush\n        assert_equal OK [r read]\n        assert_error * {r read}\n\n        reconnect\n        assert_equal {} [r get foo]\n\n    }\n}\n"
  },
  {
    "path": "tests/unit/replybufsize.tcl",
    "content": "proc get_reply_buffer_size {cname} {\n\n    set clients [split [string trim [r client list]] \"\\r\\n\"]\n    set c [lsearch -inline $clients *name=$cname*]\n    if {![regexp rbs=(\\[a-zA-Z0-9-\\]+) $c - rbufsize]} {\n        error \"field rbs not found in $c\"\n    }\n    return $rbufsize\n}\n\nstart_server {tags {\"replybufsize\"}} {\n\n    test {verify reply buffer limits} {\n        # In order to reduce test time we can set the peak reset time very low\n        r debug replybuffer peak-reset-time 100\n        r debug reply-copy-avoidance 0 ;# Disable copy avoidance because it affects memory usage\n        \n        # Create a simple idle test client\n        variable tc [redis_client]\n        $tc client setname test_client\n         \n        # make sure the client is idle for 1 seconds to make it shrink the reply buffer\n        wait_for_condition 10 100 {\n            [get_reply_buffer_size test_client] >= 1024 && [get_reply_buffer_size test_client] < 2046\n        } else {\n            set rbs [get_reply_buffer_size test_client]\n            fail \"reply buffer of idle client is $rbs after 1 seconds\"\n        }\n        \n        r set bigval [string repeat x 32768]\n        \n        # In order to reduce test time we can set the peak reset time very low\n        r debug replybuffer peak-reset-time never\n        \n        wait_for_condition 10 100 {\n            [$tc get bigval ; get_reply_buffer_size test_client] >= 16384 && [get_reply_buffer_size test_client] < 32768\n        } else {\n            set rbs [get_reply_buffer_size test_client]\n            fail \"reply buffer of busy client is $rbs after 1 seconds\"\n        }\n   \n        # Restore the peak reset time to default\n        r debug replybuffer peak-reset-time reset\n        \n        $tc close\n    } {0} {needs:debug}\n}\n    "
  },
  {
    "path": "tests/unit/scan.tcl",
    "content": "proc test_scan {type} {\n    test \"{$type} SCAN basic\" {\n        r flushdb\n        populate 1000\n\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        set keys [lsort -unique $keys]\n        assert_equal 1000 [llength $keys]\n    }\n\n   test \"{$type} SCAN COUNT\" {\n        r flushdb\n        populate 1000\n\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur count 5]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        set keys [lsort -unique $keys]\n        assert_equal 1000 [llength $keys]\n    }\n\n    test \"{$type} SCAN MATCH\" {\n        r flushdb\n        populate 1000\n\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur match \"key:1??\"]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        set keys [lsort -unique $keys]\n        assert_equal 100 [llength $keys]\n    }\n\n    test \"{$type} SCAN TYPE\" {\n        r flushdb\n        # populate only creates strings\n        populate 1000\n\n        # Check non-strings are excluded\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type \"list\"]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 0 [llength $keys]\n\n        # Check strings are included\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type \"string\"]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n     
       lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 1000 [llength $keys]\n\n        # Check all three args work together\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type \"string\" match \"key:*\" count 10]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 1000 [llength $keys]\n    }\n\n    test \"{$type} SCAN unknown type\" {\n        r flushdb\n        # make sure that passive expiration is triggered by the scan\n        r debug set-active-expire 0\n\n        populate 1000\n        r hset hash f v\n        r pexpire hash 1\n\n        after 2\n\n        # TODO: remove this in redis 8.0\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type \"string1\"]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 0 [llength $keys]\n        # make sure that expired keys have been removed by the scan command\n        assert_equal 1000 [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n\n        # TODO: uncomment in redis 8.0\n        #assert_error \"*unknown type name*\" {r scan 0 type \"string1\"}\n        # expired keys will not be touched by the scan command\n        #assert_equal 1001 [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test \"{$type} SCAN with expired keys\" {\n        r flushdb\n        # make sure that passive expiration is triggered by the scan\n        r debug set-active-expire 0\n\n        populate 1000\n        r set foo bar\n        r pexpire foo 1\n\n        # add a hash type key\n        r hset hash f v\n        r pexpire hash 1\n\n        after 2\n\n        set cur 
0\n        set keys {}\n        while 1 {\n            set res [r scan $cur count 10]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 1000 [llength $keys]\n\n        # make sure that expired keys have been removed by the scan command\n        assert_equal 1000 [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    test \"{$type} SCAN with expired keys with TYPE filter and PATTERN filter\" {\n        r flushdb\n        # make sure that passive expiration is triggered by the scan\n        r debug set-active-expire 0\n\n        populate 1000\n        r set key:foo bar\n        r pexpire key:foo 1\n\n        # add a hash type key\n        r hset key:hash f v\n        r pexpire key:hash 1\n\n        # add a pattern key\n        r set boo far\n        r pexpire boo 1\n\n        after 2\n\n        set cur 0\n        set keys {}\n        while 1 {\n            set res [r scan $cur type \"string\" match key* count 10]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n        }\n\n        assert_equal 1000 [llength $keys]\n\n        # Make sure that expired keys have been removed by the scan command: the\n        # pattern check runs before the expiration check, so boo (filtered out by\n        # the pattern) is not removed, while the expiration check runs before the\n        # type check, so key:foo and key:hash are removed.\n        assert_equal 1001 [scan [regexp -inline {keys\\=([\\d]*)} [r info keyspace]] keys=%d]\n\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    foreach enc {intset listpack hashtable} {\n        test \"{$type} SSCAN with encoding $enc\" {\n            # Create the Set\n            r del set\n            if {$enc eq {intset}} {\n                set prefix \"\"\n            } else {\n      
          set prefix \"ele:\"\n            }\n            set count [expr {$enc eq \"hashtable\" ? 200 : 100}]\n            set elements {}\n            for {set j 0} {$j < $count} {incr j} {\n                lappend elements ${prefix}${j}\n            }\n            r sadd set {*}$elements\n\n            # Verify that the encoding matches.\n            assert_encoding $enc set\n\n            # Test SSCAN\n            set cur 0\n            set keys {}\n            while 1 {\n                set res [r sscan set $cur]\n                set cur [lindex $res 0]\n                set k [lindex $res 1]\n                lappend keys {*}$k\n                if {$cur == 0} break\n            }\n\n            set keys [lsort -unique $keys]\n            assert_equal $count [llength $keys]\n        }\n    }\n\n    foreach enc {listpack hashtable} {\n        test \"{$type} HSCAN with encoding $enc\" {\n            # Create the Hash\n            r del hash\n            if {$enc eq {listpack}} {\n                set count 30\n            } else {\n                set count 1000\n            }\n            set elements {}\n            for {set j 0} {$j < $count} {incr j} {\n                lappend elements key:$j $j\n            }\n            r hmset hash {*}$elements\n\n            # Verify that the encoding matches.\n            assert_encoding $enc hash\n\n            # Test HSCAN\n            set cur 0\n            set keys {}\n            while 1 {\n                set res [r hscan hash $cur]\n                set cur [lindex $res 0]\n                set k [lindex $res 1]\n                lappend keys {*}$k\n                if {$cur == 0} break\n            }\n\n            set keys2 {}\n            foreach {k v} $keys {\n                assert {$k eq \"key:$v\"}\n                lappend keys2 $k\n            }\n\n            set keys2 [lsort -unique $keys2]\n            assert_equal $count [llength $keys2]\n\n            # Test NOVALUES \n            set res [r hscan hash 0 
count 1000 novalues]\n            assert_equal [lsort $keys2] [lsort [lindex $res 1]]\n        }\n\n        test \"{$type} HSCAN with large value $enc\" {\n            r del hash\n\n            if {$enc eq {listpack}} {\n                set count 60\n            } else {\n                set count 170\n            }\n\n            set val1 [string repeat \"1\" $count]\n            r hset hash $val1 $val1\n\n            set val2 [string repeat \"2\" $count]\n            r hset hash $val2 $val2\n\n            set res [lsort [lindex [r hscan hash 0] 1]]\n            assert_equal $val1 [lindex $res 0]\n            assert_equal $val1 [lindex $res 1]\n            assert_equal $val2 [lindex $res 2]\n            assert_equal $val2 [lindex $res 3]\n\n            set res [lsort [lindex [r hscan hash 0 novalues] 1]]\n            assert_equal $val1 [lindex $res 0]\n            assert_equal $val2 [lindex $res 1]\n        }\n    }\n\n    foreach enc {listpack skiplist} {\n        test \"{$type} ZSCAN with encoding $enc\" {\n            # Create the Sorted Set\n            r del zset\n            if {$enc eq {listpack}} {\n                set count 30\n            } else {\n                set count 1000\n            }\n            set elements {}\n            for {set j 0} {$j < $count} {incr j} {\n                lappend elements $j key:$j\n            }\n            r zadd zset {*}$elements\n\n            # Verify that the encoding matches.\n            assert_encoding $enc zset\n\n            # Test ZSCAN\n            set cur 0\n            set keys {}\n            while 1 {\n                set res [r zscan zset $cur]\n                set cur [lindex $res 0]\n                set k [lindex $res 1]\n                lappend keys {*}$k\n                if {$cur == 0} break\n            }\n\n            set keys2 {}\n            foreach {k v} $keys {\n                assert {$k eq \"key:$v\"}\n                lappend keys2 $k\n            }\n\n            set keys2 [lsort -unique 
$keys2]\n            assert_equal $count [llength $keys2]\n        }\n    }\n\n    test \"{$type} SCAN guarantees check under write load\" {\n        r flushdb\n        populate 100\n\n        # We start scanning here, so keys from 0 to 99 should all be\n        # reported at the end of the iteration.\n        set keys {}\n        while 1 {\n            set res [r scan $cur]\n            set cur [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cur == 0} break\n            # Write 10 random keys at every SCAN iteration.\n            for {set j 0} {$j < 10} {incr j} {\n                r set addedkey:[randomInt 1000] foo\n            }\n        }\n\n        set keys2 {}\n        foreach k $keys {\n            if {[string length $k] > 6} continue\n            lappend keys2 $k\n        }\n\n        set keys2 [lsort -unique $keys2]\n        assert_equal 100 [llength $keys2]\n    }\n\n    test \"{$type} SSCAN with integer encoded object (issue #1345)\" {\n        set objects {1 a}\n        r del set\n        r sadd set {*}$objects\n        set res [r sscan set 0 MATCH *a* COUNT 100]\n        assert_equal [lsort -unique [lindex $res 1]] {a}\n        set res [r sscan set 0 MATCH *1* COUNT 100]\n        assert_equal [lsort -unique [lindex $res 1]] {1}\n    }\n\n    test \"{$type} SSCAN with PATTERN\" {\n        r del mykey\n        r sadd mykey foo fab fiz foobar 1 2 3 4\n        set res [r sscan mykey 0 MATCH foo* COUNT 10000]\n        lsort -unique [lindex $res 1]\n    } {foo foobar}\n\n    test \"{$type} HSCAN with PATTERN\" {\n        r del mykey\n        r hmset mykey foo 1 fab 2 fiz 3 foobar 10 1 a 2 b 3 c 4 d\n        set res [r hscan mykey 0 MATCH foo* COUNT 10000]\n        lsort -unique [lindex $res 1]\n    } {1 10 foo foobar}\n\n    test \"{$type} HSCAN with NOVALUES\" {\n        r del mykey\n        r hmset mykey foo 1 fab 2 fiz 3 foobar 10 1 a 2 b 3 c 4 d\n        set res [r hscan mykey 0 NOVALUES]\n        
lsort -unique [lindex $res 1]\n    } {1 2 3 4 fab fiz foo foobar}\n\n    test \"{$type} ZSCAN with PATTERN\" {\n        r del mykey\n        r zadd mykey 1 foo 2 fab 3 fiz 10 foobar\n        set res [r zscan mykey 0 MATCH foo* COUNT 10000]\n        lsort -unique [lindex $res 1]\n    }\n\n    test \"{$type} ZSCAN scores: regression test for issue #2175\" {\n        r del mykey\n        for {set j 0} {$j < 500} {incr j} {\n            r zadd mykey 9.8813129168249309e-323 $j\n        }\n        set res [lindex [r zscan mykey 0] 1]\n        set first_score [lindex $res 1]\n        assert {$first_score != 0}\n    }\n\n    test \"{$type} SCAN regression test for issue #4906\" {\n        for {set k 0} {$k < 100} {incr k} {\n            r del set\n            r sadd set x; # Make sure it's not intset encoded\n            set toremove {}\n            unset -nocomplain found\n            array set found {}\n\n            # Populate the set\n            set numele [expr {101+[randomInt 1000]}]\n            for {set j 0} {$j < $numele} {incr j} {\n                r sadd set $j\n                if {$j >= 100} {\n                    lappend toremove $j\n                }\n            }\n\n            # Start scanning\n            set cursor 0\n            set iteration 0\n            set del_iteration [randomInt 10]\n            while {!($cursor == 0 && $iteration != 0)} {\n                lassign [r sscan set $cursor] cursor items\n\n                # Mark found items. 
We expect to find from 0 to 99 at the end\n                # since those elements will never be removed during the scanning.\n                foreach i $items {\n                    set found($i) 1\n                }\n                incr iteration\n                # At some point remove most of the items to trigger the\n                # rehashing to a smaller hash table.\n                if {$iteration == $del_iteration} {\n                    r srem set {*}$toremove\n                }\n            }\n\n            # Verify that SSCAN reported everything from 0 to 99\n            for {set j 0} {$j < 100} {incr j} {\n                if {![info exists found($j)]} {\n                    fail \"SSCAN element missing $j\"\n                }\n            }\n        }\n    }\n\n    test \"{$type} SCAN COUNT overflow\" {\n        r flushdb\n        populate 10\n\n        # count = LONG_MAX/10 + 1, within LONG_MAX so it parses fine,\n        # but count*10 overflows signed long which is undefined behavior.\n        # Compute dynamically to support both 32-bit and 64-bit builds.\n        set long_max [expr {[s arch_bits] == 32 ? 
2147483647 : 9223372036854775807}]\n        set big_count [expr {$long_max / 10 + 1}]\n        set res [r scan 0 count $big_count]\n        assert {[llength $res] == 2}\n        assert_equal 0 [lindex $res 0]\n        assert_equal 10 [llength [lindex $res 1]]\n    }\n\n    test \"{$type} SCAN MATCH pattern implies cluster slot\" {\n        # Tests the code path for an optimization for patterns like \"{foo}-*\"\n        # which implies that all matching keys belong to one slot.\n        r flushdb\n        for {set j 0} {$j < 100} {incr j} {\n            r set \"{foo}-$j\" \"foo\"; # slot 12182\n            r set \"{bar}-$j\" \"bar\"; # slot 5061\n            r set \"{boo}-$j\" \"boo\"; # slot 13142\n        }\n\n        set cursor 0\n        set keys {}\n        while 1 {\n            set res [r scan $cursor match \"{foo}-*\"]\n            set cursor [lindex $res 0]\n            set k [lindex $res 1]\n            lappend keys {*}$k\n            if {$cursor == 0} break\n        }\n\n        set keys [lsort -unique $keys]\n        assert_equal 100 [llength $keys]\n    }\n}\n\nstart_server {tags {\"scan network standalone\"}} {\n    test_scan \"standalone\"\n}\n\nstart_cluster 1 0 {tags {\"external:skip cluster scan\"}} {\n    test_scan \"cluster\"\n}\n"
  },
  {
    "path": "tests/unit/scripting.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\nforeach is_eval {0 1} {\n\nif {$is_eval == 1} {\n    proc run_script {args} {\n        r eval {*}$args\n    }\n    proc run_script_ro {args} {\n        r eval_ro {*}$args\n    }\n    proc run_script_on_connection {args} {\n        [lindex $args 0] eval {*}[lrange $args 1 end]\n    }\n    proc kill_script {args} {\n        r script kill\n    }\n} else {\n    proc run_script {args} {\n        r function load replace [format \"#!lua name=test\\nredis.register_function('test', function(KEYS, ARGV)\\n %s \\nend)\" [lindex $args 0]]\n        if {[r readingraw] eq 1} {\n            # read name\n            assert_equal {test} [r read]\n        }\n        r fcall test {*}[lrange $args 1 end]\n    }\n    proc run_script_ro {args} {\n        r function load replace [format \"#!lua name=test\\nredis.register_function{function_name='test', callback=function(KEYS, ARGV)\\n %s \\nend, flags={'no-writes'}}\" [lindex $args 0]]\n        if {[r readingraw] eq 1} {\n            # read name\n            assert_equal {test} [r read]\n        }\n        r fcall_ro test {*}[lrange $args 1 end]\n    }\n    proc run_script_on_connection {args} {\n        set rd [lindex $args 0]\n        $rd function load replace [format \"#!lua name=test\\nredis.register_function('test', function(KEYS, ARGV)\\n %s \\nend)\" [lindex $args 1]]\n        # read name\n        $rd read\n        $rd fcall test {*}[lrange $args 2 end]\n    }\n    proc kill_script {args} {\n        r function kill\n    }\n}\n\nstart_server {tags {\"scripting\"}} 
{\n\n    if {$is_eval eq 1} {\n    test {Script - disallow write on OOM} {\n        r config set maxmemory 1\n\n        catch {[r eval \"redis.call('set', 'x', 1)\" 0]} e\n        assert_match {*command not allowed when used memory*} $e\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n    } ;# is_eval\n\n    test {EVAL - Does Lua interpreter replies to our requests?} {\n        run_script {return 'hello'} 0\n    } {hello}\n\n    test {EVAL - Return _G} {\n        run_script {return _G} 0\n    } {}\n\n    test {EVAL - Return table with a metatable that raise error} {\n        run_script {local a = {}; setmetatable(a,{__index=function() foo() end}) return a} 0\n    } {}\n\n    test {EVAL - Return table with a metatable that call redis} {\n        run_script {local a = {}; setmetatable(a,{__index=function() redis.call('set', 'x', '1') end}) return a} 1 x\n        # make sure x was not set\n        r get x\n    } {}\n\n    test {EVAL - Lua integer -> Redis protocol type conversion} {\n        run_script {return 100.5} 0\n    } {100}\n\n    test {EVAL - Lua string -> Redis protocol type conversion} {\n        run_script {return 'hello world'} 0\n    } {hello world}\n\n    test {EVAL - Lua true boolean -> Redis protocol type conversion} {\n        run_script {return true} 0\n    } {1}\n\n    test {EVAL - Lua false boolean -> Redis protocol type conversion} {\n        run_script {return false} 0\n    } {}\n\n    test {EVAL - Lua status code reply -> Redis protocol type conversion} {\n        run_script {return {ok='fine'}} 0\n    } {fine}\n\n    test {EVAL - Lua error reply -> Redis protocol type conversion} {\n        catch {\n            run_script {return {err='ERR this is an error'}} 0\n        } e\n        set _ $e\n    } {ERR this is an error}\n\n    test {EVAL - Lua table -> Redis protocol type conversion} {\n        run_script {return {1,2,3,'ciao',{1,2}}} 0\n    } {1 2 3 ciao {1 2}}\n\n    test {EVAL - Are the KEYS and ARGV arrays 
populated correctly?} {\n        run_script {return {KEYS[1],KEYS[2],ARGV[1],ARGV[2]}} 2 a{t} b{t} c{t} d{t}\n    } {a{t} b{t} c{t} d{t}}\n\n    test {EVAL - is Lua able to call Redis API?} {\n        r set mykey myval\n        run_script {return redis.call('get',KEYS[1])} 1 mykey\n    } {myval}\n\n    if {$is_eval eq 1} {\n    # eval sha is only relevant for is_eval Lua\n    test {EVALSHA - Can we call a SHA1 if already defined?} {\n        r evalsha fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey\n    } {myval}\n\n    test {EVALSHA_RO - Can we call a SHA1 if already defined?} {\n        r evalsha_ro fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey\n    } {myval}\n\n    test {EVALSHA - Can we call a SHA1 in uppercase?} {\n        r evalsha FD758D1589D044DD850A6F05D52F2EEFD27F033F 1 mykey\n    } {myval}\n\n    test {EVALSHA - Do we get an error on invalid SHA1?} {\n        catch {r evalsha NotValidShaSUM 0} e\n        set _ $e\n    } {NOSCRIPT*}\n\n    test {EVALSHA - Do we get an error on non defined SHA1?} {\n        catch {r evalsha ffd632c7d33e571e9f24556ebed26c3479a87130 0} e\n        set _ $e\n    } {NOSCRIPT*}\n    } ;# is_eval\n\n    test {EVAL - Redis integer -> Lua type conversion} {\n        r set x 0\n        run_script {\n            local foo = redis.pcall('incr',KEYS[1])\n            return {type(foo),foo}\n        } 1 x\n    } {number 1}\n\n    test {EVAL - Lua number -> Redis integer conversion} {\n        r del hash\n        run_script {\n            local foo = redis.pcall('hincrby','hash','field',200000000)\n            return {type(foo),foo}\n        } 0\n    } {number 200000000}\n\n    test {EVAL - Redis bulk -> Lua type conversion} {\n        r set mykey myval\n        run_script {\n            local foo = redis.pcall('get',KEYS[1])\n            return {type(foo),foo}\n        } 1 mykey\n    } {string myval}\n\n    test {EVAL - Redis multi bulk -> Lua type conversion} {\n        r del mylist\n        r rpush mylist a\n        r rpush mylist 
b\n        r rpush mylist c\n        run_script {\n            local foo = redis.pcall('lrange',KEYS[1],0,-1)\n            return {type(foo),foo[1],foo[2],foo[3],# foo}\n        } 1 mylist\n    } {table a b c 3}\n\n    test {EVAL - Redis status reply -> Lua type conversion} {\n        run_script {\n            local foo = redis.pcall('set',KEYS[1],'myval')\n            return {type(foo),foo['ok']}\n        } 1 mykey\n    } {table OK}\n\n    test {EVAL - Redis error reply -> Lua type conversion} {\n        r set mykey myval\n        run_script {\n            local foo = redis.pcall('incr',KEYS[1])\n            return {type(foo),foo['err']}\n        } 1 mykey\n    } {table {ERR value is not an integer or out of range}}\n\n    test {EVAL - Redis nil bulk reply -> Lua type conversion} {\n        r del mykey\n        run_script {\n            local foo = redis.pcall('get',KEYS[1])\n            return {type(foo),foo == false}\n        } 1 mykey\n    } {boolean 1}\n\n    test {EVAL - Is the Lua client using the currently selected DB?} {\n        r set mykey \"this is DB 9\"\n        r select 10\n        r set mykey \"this is DB 10\"\n        run_script {return redis.pcall('get',KEYS[1])} 1 mykey\n    } {this is DB 10} {singledb:skip}\n\n    test {EVAL - SELECT inside Lua should not affect the caller} {\n        # here DB 10 is still selected\n        r set mykey \"original value\"\n        run_script {return redis.pcall('select','9')} 0\n        set res [r get mykey]\n        r select 9\n        set res\n    } {original value} {singledb:skip}\n\n    if 0 {\n        test {EVAL - Script can't run more than configured time limit} {\n            r config set lua-time-limit 1\n            catch {\n                run_script {\n                    local i = 0\n                    while true do i=i+1 end\n                } 0\n            } e\n            set _ $e\n        } {*execution time*}\n    }\n\n    test {EVAL - Scripts do not block on blpop command} {\n        r lpush l 1\n 
       r lpop l\n        run_script {return redis.pcall('blpop','l',0)} 1 l\n    } {}\n\n    test {EVAL - Scripts do not block on brpop command} {\n        r lpush l 1\n        r lpop l\n        run_script {return redis.pcall('brpop','l',0)} 1 l\n    } {}\n\n    test {EVAL - Scripts do not block on brpoplpush command} {\n        r lpush empty_list1{t} 1\n        r lpop empty_list1{t}\n        run_script {return redis.pcall('brpoplpush','empty_list1{t}', 'empty_list2{t}',0)} 2 empty_list1{t} empty_list2{t}\n    } {}\n\n    test {EVAL - Scripts do not block on blmove command} {\n        r lpush empty_list1{t} 1\n        r lpop empty_list1{t}\n        run_script {return redis.pcall('blmove','empty_list1{t}', 'empty_list2{t}', 'LEFT', 'LEFT', 0)} 2 empty_list1{t} empty_list2{t}\n    } {}\n\n    test {EVAL - Scripts do not block on bzpopmin command} {\n        r zadd empty_zset 10 foo\n        r zmpop 1 empty_zset MIN\n        run_script {return redis.pcall('bzpopmin','empty_zset', 0)} 1 empty_zset\n    } {}\n\n    test {EVAL - Scripts do not block on bzpopmax command} {\n        r zadd empty_zset 10 foo\n        r zmpop 1 empty_zset MIN\n        run_script {return redis.pcall('bzpopmax','empty_zset', 0)} 1 empty_zset\n    } {}\n\n    test {EVAL - Scripts do not block on wait} {\n        run_script {return redis.pcall('wait','1','0')} 0\n    } {0}\n\n    test {EVAL - Scripts do not block on waitaof} {\n        r config set appendonly no\n        run_script {return redis.pcall('waitaof','0','1','0')} 0\n    } {0 0}\n\n    test {EVAL - Scripts do not block on XREAD with BLOCK option} {\n        r del s\n        r xgroup create s g $ MKSTREAM\n        set res [run_script {return redis.pcall('xread','STREAMS','s','$')} 1 s]\n        assert {$res eq {}}\n        run_script {return redis.pcall('xread','BLOCK',0,'STREAMS','s','$')} 1 s\n    } {}\n\n    test {EVAL - Scripts do not block on XREADGROUP with BLOCK option} {\n        set res [run_script {return 
redis.pcall('xreadgroup','group','g','c','STREAMS','s','>')} 1 s]\n        assert {$res eq {}}\n        run_script {return redis.pcall('xreadgroup','group','g','c','BLOCK',0,'STREAMS','s','>')} 1 s\n    } {}\n\n    test {EVAL - Scripts do not block on XREAD with BLOCK option -- non empty stream} {\n        r XADD s * a 1\n        set res [run_script {return redis.pcall('xread','BLOCK',0,'STREAMS','s','$')} 1 s]\n        assert {$res eq {}}\n\n        set res [run_script {return redis.pcall('xread','BLOCK',0,'STREAMS','s','0-0')} 1 s]\n        assert {[lrange [lindex $res 0 1 0 1] 0 1] eq {a 1}}\n    }\n\n    test {EVAL - Scripts do not block on XREADGROUP with BLOCK option -- non empty stream} {\n        r XADD s * b 2\n        set res [\n            run_script {return redis.pcall('xreadgroup','group','g','c','BLOCK',0,'STREAMS','s','>')} 1 s\n        ]\n        assert {[llength [lindex $res 0 1]] == 2}\n        lindex $res 0 1 0 1\n    } {a 1}\n\n    test {EVAL - Scripts can run non-deterministic commands} {\n        set e {}\n        catch {\n            run_script {redis.pcall('randomkey'); return redis.pcall('set','x','ciao')} 1 x\n        } e\n        set e\n    } {*OK*}\n\n    test {EVAL - No arguments to redis.call/pcall is considered an error} {\n        set e {}\n        catch {run_script {return redis.call()} 0} e\n        set e\n    } {*one argument*}\n\n    test {EVAL - redis.call variant raises a Lua error on Redis cmd error (1)} {\n        set e {}\n        catch {\n            run_script \"redis.call('nosuchcommand')\" 0\n        } e\n        set e\n    } {*Unknown Redis*}\n\n    test {EVAL - redis.call variant raises a Lua error on Redis cmd error (2)} {\n        set e {}\n        catch {\n            run_script \"redis.call('get','a','b','c')\" 0\n        } e\n        set e\n    } {*number of args*}\n\n    test {EVAL - redis.call variant raises a Lua error on Redis cmd error (3)} {\n        set e {}\n        r set foo bar\n        catch {\n         
   run_script {redis.call('lpush',KEYS[1],'val')} 1 foo\n        } e\n        set e\n    } {*against a key*}\n\n    test {EVAL - Test table unpack with invalid indexes} {\n        catch {run_script { return {unpack({1,2,3}, -2, 2147483647)} } 0} e\n        assert_match {*too many results to unpack*} $e\n        catch {run_script { return {unpack({1,2,3}, 0, 2147483647)} } 0} e\n        assert_match {*too many results to unpack*} $e\n        catch {run_script { return {unpack({1,2,3}, -2147483648, -2)} } 0} e\n        assert_match {*too many results to unpack*} $e\n        set res [run_script { return {unpack({1,2,3}, -1, -2)} } 0]\n        assert_match {} $res\n        set res [run_script { return {unpack({1,2,3}, 1, -1)} } 0]\n        assert_match {} $res\n\n        # unpack with range -1 to 5, verify nil indexes\n        set res [run_script {\n             local function unpack_to_list(t, i, j)\n               local n, v = select('#', unpack(t, i, j)), {unpack(t, i, j)}\n               for i = 1, n do v[i] = v[i] or '_NIL_' end\n               v.n = n\n               return v\n             end\n\n            return unpack_to_list({1,2,3}, -1, 5)\n        } 0]\n        assert_match {_NIL_ _NIL_ 1 2 3 _NIL_ _NIL_} $res\n\n        # unpack with negative range, verify nil indexes\n        set res [run_script {\n             local function unpack_to_list(t, i, j)\n               local n, v = select('#', unpack(t, i, j)), {unpack(t, i, j)}\n               for i = 1, n do v[i] = v[i] or '_NIL_' end\n               v.n = n\n               return v\n             end\n\n            return unpack_to_list({1,2,3}, -2147483648, -2147483646)\n        } 0]\n        assert_match {_NIL_ _NIL_ _NIL_} $res\n    } {}\n\n    test {EVAL - JSON numeric decoding} {\n        # We must return the table as a string because otherwise\n        # Redis converts floats to ints and we get 0 and 1023 instead\n        # of 0.0003 and 1023.2 as the parsed output.\n        run_script {return\n      
           table.concat(\n                   cjson.decode(\n                    \"[0.0, -5e3, -1, 0.3e-3, 1023.2, 0e10]\"), \" \")\n        } 0\n    } {0 -5000 -1 0.0003 1023.2 0}\n\n    test {EVAL - JSON string decoding} {\n        run_script {local decoded = cjson.decode('{\"keya\": \"a\", \"keyb\": \"b\"}')\n                return {decoded.keya, decoded.keyb}\n        } 0\n    } {a b}\n\n    test {EVAL - JSON empty array decoding} {\n        # Default behavior\n        assert_equal \"{}\" [run_script {\n            return cjson.encode(cjson.decode('[]'))\n        } 0]\n        assert_equal \"{}\" [run_script {\n            cjson.decode_array_with_array_mt(false)\n            return cjson.encode(cjson.decode('[]'))\n        } 0]\n        assert_equal \"{\\\"item\\\":{}}\" [run_script {\n            cjson.decode_array_with_array_mt(false)\n            return cjson.encode(cjson.decode('{\"item\": []}'))\n        } 0]\n\n        # With array metatable\n        assert_equal \"\\[\\]\" [run_script {\n            cjson.decode_array_with_array_mt(true)\n            return cjson.encode(cjson.decode('[]'))\n        } 0]\n        assert_equal \"{\\\"item\\\":\\[\\]}\" [run_script {\n            cjson.decode_array_with_array_mt(true)\n            return cjson.encode(cjson.decode('{\"item\": []}'))\n        } 0]\n    }\n\n    test {EVAL - JSON empty array decoding after element removal} {\n        # Default: emptied array becomes object\n        assert_equal \"{}\" [run_script {\n            cjson.decode_array_with_array_mt(false)\n            local t = cjson.decode('[1, 2]')\n            -- emptying the array\n            t[1] = nil\n            t[2] = nil\n            return cjson.encode(t)\n        } 0]\n\n        # With array metatable: emptied array stays array\n        assert_equal \"\\[\\]\" [run_script {\n            cjson.decode_array_with_array_mt(true)\n            local t = cjson.decode('[1, 2]')\n            -- emptying the array\n            t[1] = nil\n        
    t[2] = nil\n            return cjson.encode(t)\n        } 0]\n    }\n\n    test {EVAL - cjson array metatable modification should be readonly} {\n        catch {\n            run_script {\n                cjson.decode_array_with_array_mt(true)\n                local t = cjson.decode('[]')\n                getmetatable(t).__is_cjson_array = function() return 1 end\n                return cjson.encode(t)\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test {EVAL - JSON smoke test} {\n        run_script {\n            local some_map = {\n                s1=\"Some string\",\n                n1=100,\n                a1={\"Some\",\"String\",\"Array\"},\n                nil1=nil,\n                b1=true,\n                b2=false}\n            local encoded = cjson.encode(some_map)\n            local decoded = cjson.decode(encoded)\n            assert(table.concat(some_map) == table.concat(decoded))\n\n            cjson.encode_keep_buffer(false)\n            encoded = cjson.encode(some_map)\n            decoded = cjson.decode(encoded)\n            assert(table.concat(some_map) == table.concat(decoded))\n\n            -- Table with numeric keys\n            local table1 = {one=\"one\", [1]=\"one\"}\n            encoded = cjson.encode(table1)\n            decoded = cjson.decode(encoded)\n            assert(decoded[\"one\"] == table1[\"one\"])\n            assert(decoded[\"1\"] == table1[1])\n\n            -- Array\n            local array1 = {[1]=\"one\", [2]=\"two\"}\n            encoded = cjson.encode(array1)\n            decoded = cjson.decode(encoded)\n            assert(table.concat(array1) == table.concat(decoded))\n\n            -- Invalid keys\n            local invalid_map = {}\n            invalid_map[false] = \"false\"\n            local ok, encoded = pcall(cjson.encode, invalid_map)\n            assert(ok == false)\n\n            -- Max depth\n            cjson.encode_max_depth(1)\n            ok, encoded 
= pcall(cjson.encode, some_map)\n            assert(ok == false)\n\n            cjson.decode_max_depth(1)\n            ok, decoded = pcall(cjson.decode, '{\"obj\": {\"array\": [1,2,3,4]}}')\n            assert(ok == false)\n\n            -- Invalid numbers\n            ok, encoded = pcall(cjson.encode, {num1=0/0})\n            assert(ok == false)\n            cjson.encode_invalid_numbers(true)\n            ok, encoded = pcall(cjson.encode, {num1=0/0})\n            assert(ok == true)\n\n            -- Restore defaults\n            cjson.decode_max_depth(1000)\n            cjson.encode_max_depth(1000)\n            cjson.encode_invalid_numbers(false)\n        } 0\n    }\n\n    test {EVAL - cmsgpack can pack double?} {\n        run_script {local encoded = cmsgpack.pack(0.1)\n                local h = \"\"\n                for i = 1, #encoded do\n                    h = h .. string.format(\"%02x\",string.byte(encoded,i))\n                end\n                return h\n        } 0\n    } {cb3fb999999999999a}\n\n    test {EVAL - cmsgpack can pack negative int64?} {\n        run_script {local encoded = cmsgpack.pack(-1099511627776)\n                local h = \"\"\n                for i = 1, #encoded do\n                    h = h .. 
string.format(\"%02x\",string.byte(encoded,i))\n                end\n                return h\n        } 0\n    } {d3ffffff0000000000}\n\n    test {EVAL - cmsgpack pack/unpack smoke test} {\n        run_script {\n                local str_lt_32 = string.rep(\"x\", 30)\n                local str_lt_255 = string.rep(\"x\", 250)\n                local str_lt_65535 = string.rep(\"x\", 65530)\n                local str_long = string.rep(\"x\", 100000)\n                local array_lt_15 = {1, 2, 3, 4, 5}\n                local array_lt_65535 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18}\n                local array_big = {}\n                for i=1, 100000 do\n                    array_big[i] = i\n                end\n                local map_lt_15 = {a=1, b=2}\n                local map_big = {}\n                for i=1, 100000 do\n                    map_big[tostring(i)] = i\n                end\n                local some_map = {\n                    s1=str_lt_32,\n                    s2=str_lt_255,\n                    s3=str_lt_65535,\n                    s4=str_long,\n                    d1=0.1,\n                    i1=1,\n                    i2=250,\n                    i3=65530,\n                    i4=100000,\n                    i5=2^40,\n                    i6=-1,\n                    i7=-120,\n                    i8=-32000,\n                    i9=-100000,\n                    i10=-3147483648,\n                    a1=array_lt_15,\n                    a2=array_lt_65535,\n                    a3=array_big,\n                    m1=map_lt_15,\n                    m2=map_big,\n                    b1=false,\n                    b2=true,\n                    n=nil\n                }\n                local encoded = cmsgpack.pack(some_map)\n                local decoded = cmsgpack.unpack(encoded)\n                assert(table.concat(some_map) == table.concat(decoded))\n                local offset, decoded_one = cmsgpack.unpack_one(encoded, 0)\n   
             assert(table.concat(some_map) == table.concat(decoded_one))\n                assert(offset == -1)\n\n                local encoded_multiple = cmsgpack.pack(str_lt_32, str_lt_255, str_lt_65535, str_long)\n                local offset, obj = cmsgpack.unpack_limit(encoded_multiple, 1, 0)\n                assert(obj == str_lt_32)\n                offset, obj = cmsgpack.unpack_limit(encoded_multiple, 1, offset)\n                assert(obj == str_lt_255)\n                offset, obj = cmsgpack.unpack_limit(encoded_multiple, 1, offset)\n                assert(obj == str_lt_65535)\n                offset, obj = cmsgpack.unpack_limit(encoded_multiple, 1, offset)\n                assert(obj == str_long)\n                assert(offset == -1)\n        } 0\n    }\n\n    test {EVAL - cmsgpack can pack and unpack circular references?} {\n        run_script {local a = {x=nil,y=5}\n                local b = {x=a}\n                a['x'] = b\n                local encoded = cmsgpack.pack(a)\n                local h = \"\"\n                -- cmsgpack encodes to a depth of 16, but can't encode\n                -- references, so the encoded object has a deep copy recursive\n                -- depth of 16.\n                for i = 1, #encoded do\n                    h = h .. 
string.format(\"%02x\",string.byte(encoded,i))\n                end\n                -- when unpacked, re.x.x != re because the unpack creates\n                -- individual tables down to a depth of 16.\n                -- (that's why the encoded output is so large)\n                local re = cmsgpack.unpack(encoded)\n                assert(re)\n                assert(re.x)\n                assert(re.x.x.y == re.y)\n                assert(re.x.x.x.x.y == re.y)\n                assert(re.x.x.x.x.x.x.y == re.y)\n                assert(re.x.x.x.x.x.x.x.x.x.x.y == re.y)\n                -- maximum working depth:\n                assert(re.x.x.x.x.x.x.x.x.x.x.x.x.x.x.y == re.y)\n                -- now the last x would be b above and has no y\n                assert(re.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x)\n                -- so, the final x.x is at the depth limit and was assigned nil\n                assert(re.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x == nil)\n                return {h, re.x.x.x.x.x.x.x.x.y == re.y, re.y == 5}\n        } 0\n    } {82a17905a17881a17882a17905a17881a17882a17905a17881a17882a17905a17881a17882a17905a17881a17882a17905a17881a17882a17905a17881a17882a17905a17881a178c0 1 1}\n\n    test {EVAL - Numerical sanity check from bitop} {\n        run_script {assert(0x7fffffff == 2147483647, \"broken hex literals\");\n                assert(0xffffffff == -1 or 0xffffffff == 2^32-1,\n                    \"broken hex literals\");\n                assert(tostring(-1) == \"-1\", \"broken tostring()\");\n                assert(tostring(0xffffffff) == \"-1\" or\n                    tostring(0xffffffff) == \"4294967295\",\n                    \"broken tostring()\")\n        } 0\n    } {}\n\n    test {EVAL - Verify minimal bitop functionality} {\n        run_script {assert(bit.tobit(1) == 1);\n                assert(bit.band(1) == 1);\n                assert(bit.bxor(1,2) == 3);\n                assert(bit.bor(1,2,4,8,16,32,64,128) == 255)\n        } 0\n    } {}\n\n    test 
{EVAL - Able to parse trailing comments} {\n        run_script {return 'hello' --trailing comment} 0\n    } {hello}\n\n    test {EVAL_RO - Successful case} {\n        r set foo bar\n        assert_equal bar [run_script_ro {return redis.call('get', KEYS[1]);} 1 foo]\n    }\n\n    test {EVAL_RO - Cannot run write commands} {\n        r set foo bar\n        catch {run_script_ro {redis.call('del', KEYS[1]);} 1 foo} e\n        set e\n    } {ERR Write commands are not allowed from read-only scripts*}\n\n    if {$is_eval eq 1} {\n    # script command is only relevant for is_eval Lua\n    test {SCRIPTING FLUSH - is able to clear the scripts cache?} {\n        r set mykey myval\n\n        r script load {return redis.call('get',KEYS[1])}\n        set v [r evalsha fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey]\n        assert_equal $v myval\n        r script flush\n        assert_error {NOSCRIPT*} {r evalsha fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey}\n\n        r eval {return redis.call('get',KEYS[1])} 1 mykey\n        set v [r evalsha fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey]\n        assert_equal $v myval\n        r script flush\n        assert_error {NOSCRIPT*} {r evalsha fd758d1589d044dd850a6f05d52f2eefd27f033f 1 mykey}\n    }\n\n    test {SCRIPTING FLUSH ASYNC} {\n        for {set j 0} {$j < 100} {incr j} {\n            r script load \"return $j\"\n        }\n        assert { [string match \"*number_of_cached_scripts:100*\" [r info Memory]] }\n        r script flush async\n        assert { [string match \"*number_of_cached_scripts:0*\" [r info Memory]] }\n    }\n\n    test {SCRIPT EXISTS - can detect already defined scripts?} {\n        r eval \"return 1+1\" 0\n        r script exists a27e7e8a43702b7046d4f6a7ccf5b60cef6b9bd9 a27e7e8a43702b7046d4f6a7ccf5b60cef6b9bda\n    } {1 0}\n\n    test {SCRIPT LOAD - is able to register scripts in the scripting cache} {\n        list \\\n            [r script load \"return 'loaded'\"] \\\n            [r evalsha 
b534286061d4b9e4026607613b95c06c06015ae8 0]\n    } {b534286061d4b9e4026607613b95c06c06015ae8 loaded}\n\n    test \"SORT is normally not alpha re-ordered for the scripting engine\" {\n        r del myset\n        r sadd myset 1 2 3 4 10\n        r eval {return redis.call('sort',KEYS[1],'desc')} 1 myset\n    } {10 4 3 2 1} {cluster:skip}\n\n    test \"SORT BY <constant> output gets ordered for scripting\" {\n        r del myset\n        r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz\n        r eval {return redis.call('sort',KEYS[1],'by','_')} 1 myset\n    } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}\n\n    test \"SORT BY <constant> with GET gets ordered for scripting\" {\n        r del myset\n        r sadd myset a b c\n        r eval {return redis.call('sort',KEYS[1],'by','_','get','#','get','_:*')} 1 myset\n    } {a {} b {} c {}} {cluster:skip}\n    } ;# is_eval\n\n    test \"redis.sha1hex() implementation\" {\n        list [run_script {return redis.sha1hex('')} 0] \\\n             [run_script {return redis.sha1hex('Pizza & Mandolino')} 0]\n    } {da39a3ee5e6b4b0d3255bfef95601890afd80709 74822d82031af7493c20eefa13bd07ec4fada82f}\n\n    test \"Measures elapsed time os.clock()\" {\n        set elapsed [run_script {\n            local start = os.clock()\n            while os.clock() - start < 1 do end\n            return {double = os.clock() - start}\n        } 0]\n        assert_morethan_equal $elapsed 1 ;# 1 second\n    }\n\n    test \"Prohibit dangerous lua methods in sandbox\" {\n        assert_equal \"\" [run_script {\n            local allowed_methods = {\"clock\"}\n            -- Find a value from a tuple and return the position.\n            local indexOf = function(tuple, value)\n                for i, v in ipairs(tuple) do\n                    if v == value then return i end\n                end\n                return nil\n            end\n            -- Check for disallowed methods and verify all allowed 
methods exist.\n            -- If an allowed method is found, it's removed from 'allowed_methods'.\n            -- If 'allowed_methods' is empty at the end, all allowed methods were found.\n            for key, value in pairs(os) do\n                local index = indexOf(allowed_methods, key)\n                if index == nil or type(value) ~= \"function\" then\n                    return \"Disallowed \"..type(value)..\":\"..key\n                end\n                table.remove(allowed_methods, index)\n            end\n            if #allowed_methods ~= 0 then\n                return \"Expected method not found: \"..table.concat(allowed_methods, \",\")\n            end\n            return \"\"\n        } 0]\n    }\n\n    test \"Verify execution of prohibited dangerous Lua methods will fail\" {\n        assert_error {ERR *attempt to call field 'execute'*} {run_script {os.execute()} 0}\n        assert_error {ERR *attempt to call field 'exit'*} {run_script {os.exit()} 0}\n        assert_error {ERR *attempt to call field 'getenv'*} {run_script {os.getenv()} 0}\n        assert_error {ERR *attempt to call field 'remove'*} {run_script {os.remove()} 0}\n        assert_error {ERR *attempt to call field 'rename'*} {run_script {os.rename()} 0}\n        assert_error {ERR *attempt to call field 'setlocale'*} {run_script {os.setlocale()} 0}\n        assert_error {ERR *attempt to call field 'tmpname'*} {run_script {os.tmpname()} 0}\n    }\n\n    test {Globals protection reading an undeclared global variable} {\n        catch {run_script {return a} 0} e\n        set e\n    } {ERR *attempted to access * global*}\n\n    test {Globals protection setting an undeclared global*} {\n        catch {run_script {a=10} 0} e\n        set e\n    } {ERR *Attempt to modify a readonly table*}\n\n    test {lua bit.tohex bug} {\n        set res [run_script {return bit.tohex(65535, -2147483648)} 0]\n        r ping\n        set res\n    } {0000FFFF}\n\n    test {Test an example script DECR_IF_GT} {\n   
     set decr_if_gt {\n            local current\n\n            current = redis.call('get',KEYS[1])\n            if not current then return nil end\n            if current > ARGV[1] then\n                return redis.call('decr',KEYS[1])\n            else\n                return redis.call('get',KEYS[1])\n            end\n        }\n        r set foo 5\n        set res {}\n        lappend res [run_script $decr_if_gt 1 foo 2]\n        lappend res [run_script $decr_if_gt 1 foo 2]\n        lappend res [run_script $decr_if_gt 1 foo 2]\n        lappend res [run_script $decr_if_gt 1 foo 2]\n        lappend res [run_script $decr_if_gt 1 foo 2]\n        set res\n    } {4 3 2 2 2}\n\n    if {$is_eval eq 1} {\n    # random handling is only relevant for is_eval Lua\n    test {random numbers are random now} {\n        set rand1 [r eval {return tostring(math.random())} 0]\n        wait_for_condition 100 1 {\n            $rand1 ne [r eval {return tostring(math.random())} 0]\n        } else {\n            fail \"random numbers should be random, now it's fixed value\"\n        }\n    }\n\n    test {Scripting engine PRNG can be seeded correctly} {\n        set rand1 [r eval {\n            math.randomseed(ARGV[1]); return tostring(math.random())\n        } 0 10]\n        set rand2 [r eval {\n            math.randomseed(ARGV[1]); return tostring(math.random())\n        } 0 10]\n        set rand3 [r eval {\n            math.randomseed(ARGV[1]); return tostring(math.random())\n        } 0 20]\n        assert_equal $rand1 $rand2\n        assert {$rand2 ne $rand3}\n    }\n    } ;# is_eval\n\n    test {EVAL does not leak in the Lua stack} {\n        r script flush ;# reset Lua VM\n        r set x 0\n        # Use a non blocking client to speedup the loop.\n        set rd [redis_deferring_client]\n        for {set j 0} {$j < 10000} {incr j} {\n            run_script_on_connection $rd {return redis.call(\"incr\",KEYS[1])} 1 x\n        }\n        for {set j 0} {$j < 10000} {incr j} {\n       
     $rd read\n        }\n        assert {[s used_memory_lua] < 1024*100}\n        $rd close\n        r get x\n    } {10000}\n\n    if {$is_eval eq 1} {\n    test {SPOP: We can call scripts rewriting client->argv from Lua} {\n        set repl [attach_to_replication_stream]\n        #this sadd operation is for external-cluster test. If myset doesn't exist, 'del myset' won't get propagated.\n        r sadd myset ppp\n        r del myset\n        r sadd myset a b c\n        assert {[r eval {return redis.call('spop', 'myset')} 0] ne {}}\n        assert {[r eval {return redis.call('spop', 'myset', 1)} 0] ne {}}\n        assert {[r eval {return redis.call('spop', KEYS[1])} 1 myset] ne {}}\n        # this one below should not be replicated\n        assert {[r eval {return redis.call('spop', KEYS[1])} 1 myset] eq {}}\n        r set trailingkey 1\n        assert_replication_stream $repl {\n            {select *}\n            {sadd *}\n            {del *}\n            {sadd *}\n            {srem myset *}\n            {srem myset *}\n            {srem myset *}\n            {set *}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MGET: mget shouldn't be propagated in Lua} {\n        set repl [attach_to_replication_stream]\n        r mset a{t} 1 b{t} 2 c{t} 3 d{t} 4\n        #read-only, won't be replicated\n        assert {[r eval {return redis.call('mget', 'a{t}', 'b{t}', 'c{t}', 'd{t}')} 0] eq {1 2 3 4}}\n        r set trailingkey 2\n        assert_replication_stream $repl {\n            {select *}\n            {mset *}\n            {set *}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {EXPIRE: We can call scripts rewriting client->argv from Lua} {\n        set repl [attach_to_replication_stream]\n        r set expirekey 1\n        #should be replicated as EXPIREAT\n        assert {[r eval {return redis.call('expire', KEYS[1], ARGV[1])} 1 expirekey 3] eq 1}\n\n        assert_replication_stream $repl 
{\n            {select *}\n            {set *}\n            {pexpireat expirekey *}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {INCRBYFLOAT: We can call scripts expanding client->argv from Lua} {\n        # coverage for scripts calling commands that expand the argv array\n        # an attempt to add coverage for a possible bug in luaArgsToRedisArgv\n        # this test needs a fresh server so that lua_argv_size is 0.\n        # glibc realloc can return the same pointer even when the size changes\n        # still this test isn't able to trigger the issue, but we keep it anyway.\n        start_server {tags {\"scripting\"}} {\n            set repl [attach_to_replication_stream]\n            # a command with 5 arguments\n            r eval {redis.call('hmget', KEYS[1], 1, 2, 3)} 1 key\n            # then a command with 3 that is replicated as one with 4\n            r eval {redis.call('incrbyfloat', KEYS[1], 1)} 1 key\n            # then a command with 4 args\n            r eval {redis.call('set', KEYS[1], '1', 'KEEPTTL')} 1 key\n\n            assert_replication_stream $repl {\n                {select *}\n                {set key 1 KEEPTTL}\n                {set key 1 KEEPTTL}\n            }\n            close_replication_stream $repl\n        }\n    } {} {needs:repl}\n\n    } ;# is_eval\n\n    test {Call Redis command with many args from Lua (issue #1764)} {\n        run_script {\n            local i\n            local x={}\n            redis.call('del','mylist')\n            for i=1,100 do\n                table.insert(x,i)\n            end\n            redis.call('rpush','mylist',unpack(x))\n            return redis.call('lrange','mylist',0,-1)\n        } 1 mylist\n    } {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 
89 90 91 92 93 94 95 96 97 98 99 100}\n\n    test {Number conversion precision test (issue #1118)} {\n        run_script {\n              local value = 9007199254740991\n              redis.call(\"set\",\"foo\",value)\n              return redis.call(\"get\",\"foo\")\n        } 1 foo\n    } {9007199254740991}\n\n    test {String containing number precision test (regression of issue #1118)} {\n        run_script {\n            redis.call(\"set\", \"key\", \"12039611435714932082\")\n            return redis.call(\"get\", \"key\")\n        } 1 key\n    } {12039611435714932082}\n\n    test {Verify negative arg count is error instead of crash (issue #1842)} {\n        catch { run_script { return \"hello\" } -12 } e\n        set e\n    } {ERR Number of keys can't be negative}\n\n    test {Scripts can handle commands with incorrect arity} {\n        assert_error \"ERR Wrong number of args calling Redis command from script*\" {run_script \"redis.call('set','invalid')\" 0}\n        assert_error \"ERR Wrong number of args calling Redis command from script*\" {run_script \"redis.call('incr')\" 0}\n    }\n\n    test {Correct handling of reused argv (issue #1939)} {\n        run_script {\n              for i = 0, 10 do\n                  redis.call('SET', 'a{t}', '1')\n                  redis.call('MGET', 'a{t}', 'b{t}', 'c{t}')\n                  redis.call('EXPIRE', 'a{t}', 0)\n                  redis.call('GET', 'a{t}')\n                  redis.call('MGET', 'a{t}', 'b{t}', 'c{t}')\n              end\n        } 3 a{t} b{t} c{t}\n    }\n\n    test {Functions in the Redis namespace are able to report errors} {\n        catch {\n            run_script {\n                  redis.sha1hex()\n            } 0\n        } e\n        set e\n    } {*wrong number*}\n\n    test {CLUSTER RESET cannot be invoked from within a script} {\n        catch {\n            run_script {\n                  redis.call('cluster', 'reset', 'hard')\n            } 0\n        } e\n        set _ $e\n    } 
{*command is not allowed*}\n\n    test {Script with RESP3 map} {\n        set expected_dict [dict create field value]\n        set expected_list [list field value]\n\n        # Sanity test for RESP3 without scripts\n        r HELLO 3\n        r hset hash field value\n        set res [r hgetall hash]\n        assert_equal $res $expected_dict\n\n        # Test RESP3 client with script in both RESP2 and RESP3 modes\n        set res [run_script {redis.setresp(3); return redis.call('hgetall', KEYS[1])} 1 hash]\n        assert_equal $res $expected_dict\n        set res [run_script {redis.setresp(2); return redis.call('hgetall', KEYS[1])} 1 hash]\n        assert_equal $res $expected_list\n\n        # Test RESP2 client with script in both RESP2 and RESP3 modes\n        r HELLO 2\n        set res [run_script {redis.setresp(3); return redis.call('hgetall', KEYS[1])} 1 hash]\n        assert_equal $res $expected_list\n        set res [run_script {redis.setresp(2); return redis.call('hgetall', KEYS[1])} 1 hash]\n        assert_equal $res $expected_list\n    } {} {resp3}\n\n    if {!$::log_req_res} { # this test creates a huge nested array which python can't handle (RecursionError: maximum recursion depth exceeded in comparison)\n    test {Script return recursive object} {\n        r readraw 1\n        set res [run_script {local a = {}; local b = {a}; a[1] = b; return a} 0]\n        # drain the response\n        while {true} {\n            if {$res == \"-ERR reached lua stack limit\"} {\n                break\n            }\n            assert_equal $res \"*1\"\n            set res [r read]\n        }\n        r readraw 0\n        # make sure the connection is still valid\n        assert_equal [r ping] {PONG}\n    }\n    }\n\n    test {Script check unpack with massive arguments} {\n        run_script {\n            local a = {}\n            for i=1,7999 do\n                a[i] = 1\n            end\n            return redis.call(\"lpush\", \"l\", unpack(a))\n        } 1 l\n    } 
{7999}\n\n    test \"Script read key with expiration set\" {\n        r SET key value EX 10\n        assert_equal [run_script {\n             if redis.call(\"EXISTS\", \"key\") then\n                 return redis.call(\"GET\", \"key\")\n             else\n                 return redis.call(\"EXISTS\", \"key\")\n             end\n        } 1 key] \"value\"\n    }\n\n    test \"Script del key with expiration set\" {\n        r SET key value EX 10\n        assert_equal [run_script {\n             redis.call(\"DEL\", \"key\")\n             return redis.call(\"EXISTS\", \"key\")\n        } 1 key] 0\n    }\n    \n    test \"Script ACL check\" {\n        r acl setuser bob on {>123} {+@scripting} {+set} {~x*}\n        assert_equal [r auth bob 123] {OK}\n        \n        # Check permission granted\n        assert_equal [run_script {\n            return redis.acl_check_cmd('set','xx',1)\n        } 1 xx] 1\n\n        # Check permission denied unauthorised command\n        assert_equal [run_script {\n            return redis.acl_check_cmd('hset','xx','f',1)\n        } 1 xx] {}\n        \n        # Check permission denied unauthorised key\n        # Note: we don't pass the \"yy\" key as an argument to the script so key acl checks won't block the script\n        assert_equal [run_script {\n            return redis.acl_check_cmd('set','yy',1)\n        } 0] {}\n\n        # Check error due to invalid command\n        assert_error {ERR *Invalid command passed to redis.acl_check_cmd()*} {run_script {\n            return redis.acl_check_cmd('invalid-cmd','arg')\n        } 0}\n    }\n\n    test \"Binary code loading failed\" {\n        assert_error {ERR *attempt to call a nil value*} {run_script {\n            return loadstring(string.dump(function() return 1 end))()\n        } 0}\n    }\n\n    test \"Try trick global protection 1\" {\n        catch {\n            run_script {\n                setmetatable(_G, {})\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to 
modify a readonly table*}\n\n    test \"Try trick global protection 2\" {\n        catch {\n            run_script {\n                local g = getmetatable(_G)\n                g.__index = {}\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick global protection 3\" {\n        catch {\n            run_script {\n                redis = function() return 1 end\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick global protection 4\" {\n        catch {\n            run_script {\n                _G = {}\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick readonly table on redis table\" {\n        catch {\n            run_script {\n                redis.call = function() return 1 end\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick readonly table on json table\" {\n        catch {\n            run_script {\n                cjson.encode = function() return 1 end\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick readonly table on cmsgpack table\" {\n        catch {\n            run_script {\n                cmsgpack.pack = function() return 1 end\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick readonly table on bit table\" {\n        catch {\n            run_script {\n                bit.lshift = function() return 1 end\n            } 0\n        } e\n        set _ $e\n    } {*Attempt to modify a readonly table*}\n\n    test \"Try trick readonly table on basic types metatable\" {\n        # Run the following scripts for basic types. 
Either getmetatable()\n        # should return nil or the metatable must be readonly.\n        set scripts {\n            {getmetatable(nil).__index = function() return 1 end}\n            {getmetatable('').__index = function() return 1 end}\n            {getmetatable(123.222).__index = function() return 1 end}\n            {getmetatable(true).__index = function() return 1 end}\n            {getmetatable(function() return 1 end).__index = function() return 1 end}\n            {getmetatable(coroutine.create(function() return 1 end)).__index = function() return 1 end}\n        }\n\n        foreach code $scripts {\n            catch {run_script $code 0} e\n            assert {\n                [string match \"*attempt to index a nil value script*\" $e] ||\n                [string match \"*Attempt to modify a readonly table*\" $e]\n            }\n        }\n    }\n\n    test \"Test loadfile is not available\" {\n        catch {\n            run_script {\n                loadfile('some file')\n            } 0\n        } e\n        set _ $e\n    } {*Script attempted to access nonexistent global variable 'loadfile'*}\n\n    test \"Test dofile is not available\" {\n        catch {\n            run_script {\n                dofile('some file')\n            } 0\n        } e\n        set _ $e\n    } {*Script attempted to access nonexistent global variable 'dofile'*}\n\n    test \"Test print is not available\" {\n        catch {\n            run_script {\n                print('some data')\n            } 0\n        } e\n        set _ $e\n    } {*Script attempted to access nonexistent global variable 'print'*}\n}\n\n# start a new server to test the large-memory tests\nstart_server {tags {\"scripting external:skip large-memory\"}} {\n    test {EVAL - JSON string encoding a string larger than 2GB} {\n        run_script {\n            local s = string.rep(\"a\", 1024 * 1024 * 1024)\n            return #cjson.encode(s..s..s)\n        } 0\n    } {3221225474} ;# length includes 
two double quotes at both ends\n\n    test {EVAL - Test long escape sequences for strings} {\n        run_script {\n            -- Generate 1gb '==...==' separator\n            local s = string.rep('=', 1024 * 1024)\n            local t = {} for i=1,1024 do t[i] = s end\n            local sep = table.concat(t)\n            collectgarbage('collect')\n\n            local code = table.concat({'return [',sep,'[x]',sep,']'})\n            collectgarbage('collect')\n\n            -- Load the code and run it. Script will return the string length.\n            -- Escape sequence: [=....=[ to ]=...=] will be ignored\n            -- Actual string is a single character: 'x'. Script will return 1\n            local func = loadstring(code)\n            return #func()\n        } 0\n    } {1}\n\n    test {EVAL - Lua can parse string with too many new lines} {\n        # Create a long string consisting only of newline characters. When Lua\n        # fails to parse a string, it typically includes a snippet like\n        # \"... near ...\" in the error message to indicate the last recognizable\n        # token. 
In this test, since the input contains only newlines, there\n        # should be no identifiable token, so the error message should contain\n        # only the actual error, without a near clause.\n\n        run_script {\n           local s = string.rep('\\n', 1024 * 1024)\n           local t = {} for i=1,2048 do t[#t+1] = s end\n           local lines = table.concat(t)\n           local fn, err = loadstring(lines)\n           return err\n        } 0\n    } {*chunk has too many lines}\n}\n\n# Start a new server to test lua-enable-deprecated-api config\nforeach enabled {no yes} {\nstart_server [subst {tags {\"scripting external:skip\"} overrides {lua-enable-deprecated-api $enabled}}] {\n    test \"Test setfenv availability lua-enable-deprecated-api=$enabled\" {\n        catch {\n            run_script {\n                local f = function() return 1 end\n                setfenv(f, {})\n                return 0\n            } 0\n        } e\n        if {$enabled} {\n            assert_equal $e 0\n        } else {\n            assert_match {*Script attempted to access nonexistent global variable 'setfenv'*} $e\n        }\n    }\n\n    test \"Test getfenv availability lua-enable-deprecated-api=$enabled\" {\n        catch {\n            run_script {\n                local f = function() return 1 end\n                getfenv(f)\n                return 0\n            } 0\n        } e\n        if {$enabled} {\n            assert_equal $e 0\n        } else {\n            assert_match {*Script attempted to access nonexistent global variable 'getfenv'*} $e\n        }\n    }\n\n    test \"Test newproxy availability lua-enable-deprecated-api=$enabled\" {\n        catch {\n            run_script {\n                getmetatable(newproxy(true)).__gc = function() return 1 end\n                return 0\n            } 0\n        } e\n        if {$enabled} {\n            assert_equal $e 0\n        } else {\n            assert_match {*Script attempted to access nonexistent global 
variable 'newproxy'*} $e\n        }\n    }\n}\n}\n\n# Start a new server since the last test in this stanza will kill the\n# instance.\nstart_server {tags {\"scripting\"}} {\n    test {Timedout read-only scripts can be killed by SCRIPT KILL} {\n        set rd [redis_deferring_client]\n        r config set lua-time-limit 10\n        run_script_on_connection $rd {while true do end} 0\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        kill_script\n        after 200 ; # Give some time to Lua to call the hook again...\n        assert_equal [r ping] \"PONG\"\n        $rd close\n    }\n\n    test {Timedout read-only scripts can be killed by SCRIPT KILL even when using pcall} {\n        set rd [redis_deferring_client]\n        r config set lua-time-limit 10\n        run_script_on_connection $rd {local f = function() while 1 do redis.call('ping') end end while 1 do pcall(f) end} 0\n\n        wait_for_condition 50 100 {\n            [catch {r ping} e] == 1\n        } else {\n            fail \"Can't wait for script to start running\"\n        }\n        catch {r ping} e\n        assert_match {BUSY*} $e\n\n        kill_script\n\n        wait_for_condition 50 100 {\n            [catch {r ping} e] == 0\n        } else {\n            fail \"Can't wait for script to be killed\"\n        }\n        assert_equal [r ping] \"PONG\"\n\n        catch {$rd read} res\n        $rd close\n\n        assert_match {*killed by user*} $res\n    }\n\n    test {Timedout script does not cause a false dead client} {\n        set rd [redis_deferring_client]\n        r config set lua-time-limit 10\n\n        # sending (in a pipeline):\n        # 1. eval \"while 1 do redis.call('ping') end\" 0\n        # 2. 
ping\n        if {$is_eval == 1} {\n            set buf \"*3\\r\\n\\$4\\r\\neval\\r\\n\\$33\\r\\nwhile 1 do redis.call('ping') end\\r\\n\\$1\\r\\n0\\r\\n\"\n            append buf \"*1\\r\\n\\$4\\r\\nping\\r\\n\"\n        } else {\n            set buf \"*4\\r\\n\\$8\\r\\nfunction\\r\\n\\$4\\r\\nload\\r\\n\\$7\\r\\nreplace\\r\\n\\$97\\r\\n#!lua name=test\\nredis.register_function('test', function() while 1 do redis.call('ping') end end)\\r\\n\"\n            append buf \"*3\\r\\n\\$5\\r\\nfcall\\r\\n\\$4\\r\\ntest\\r\\n\\$1\\r\\n0\\r\\n\"\n            append buf \"*1\\r\\n\\$4\\r\\nping\\r\\n\"\n        }\n        $rd write $buf\n        $rd flush\n\n        wait_for_condition 50 100 {\n            [catch {r ping} e] == 1\n        } else {\n            fail \"Can't wait for script to start running\"\n        }\n        catch {r ping} e\n        assert_match {BUSY*} $e\n\n        kill_script\n        wait_for_condition 50 100 {\n            [catch {r ping} e] == 0\n        } else {\n            fail \"Can't wait for script to be killed\"\n        }\n        assert_equal [r ping] \"PONG\"\n\n        if {$is_eval == 0} {\n            # read the function name\n            assert_match {test} [$rd read]\n        }\n\n        catch {$rd read} res\n        assert_match {*killed by user*} $res\n\n        set res [$rd read]\n        assert_match {*PONG*} $res\n\n        $rd close\n    }\n\n    test {Timedout script link is still usable after Lua returns} {\n        r config set lua-time-limit 10\n        run_script {for i=1,100000 do redis.call('ping') end return 'ok'} 0\n        r ping\n    } {PONG}\n\n    test {Timedout scripts and unblocked command} {\n        # make sure a command that's allowed during BUSY doesn't trigger an unblocked command\n\n        # enable AOF to also expose an assertion if the bug would happen\n        r flushall\n        r config set appendonly yes\n\n        # create clients, and set one to block waiting for key 'x'\n        set rd 
[redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set r3 [redis_client]\n        $rd2 blpop x 0\n        wait_for_blocked_clients_count 1\n\n        # hack: allow the script to use client list command so that we can control when it aborts\n        r DEBUG set-disable-deny-scripts 1\n        r config set lua-time-limit 10\n        run_script_on_connection $rd {\n            local clients\n            redis.call('lpush',KEYS[1],'y');\n            while true do\n                clients = redis.call('client','list')\n                if string.find(clients, 'abortscript') ~= nil then break end\n            end\n            redis.call('lpush',KEYS[1],'z');\n            return clients\n            } 1 x\n\n        # wait for the script to be busy\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n\n        # now cause the script to abort, and run a command that could have processed\n        # unblocked clients (due to a bug)\n        $r3 hello 2 setname abortscript\n\n        # make sure the script completed before the pop was processed\n        assert_equal [$rd2 read] {x z}\n        assert_match {*abortscript*} [$rd read]\n\n        $rd close\n        $rd2 close\n        $r3 close\n        r DEBUG set-disable-deny-scripts 0\n    } {OK} {external:skip needs:debug}\n\n    test {Timedout scripts that modified data can't be killed by SCRIPT KILL} {\n        set rd [redis_deferring_client]\n        r config set lua-time-limit 10\n        run_script_on_connection $rd {redis.call('set',KEYS[1],'y'); while true do end} 1 x\n        after 200\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        catch {kill_script} e\n        assert_match {UNKILLABLE*} $e\n        catch {r ping} e\n        assert_match {BUSY*} $e\n    } {} {external:skip}\n\n    # Note: keep this test at the end of this server stanza because it\n    # kills the server.\n    test {SHUTDOWN NOSAVE can kill a timedout script anyway} {\n        # 
The server should still be unresponsive to normal commands.\n        catch {r ping} e\n        assert_match {BUSY*} $e\n        catch {r shutdown nosave}\n        # Make sure the server was killed\n        catch {set rd [redis_deferring_client]} e\n        assert_match {*connection refused*} $e\n    } {} {external:skip}\n}\n\n    start_server {tags {\"scripting repl needs:debug external:skip\"}} {\n        start_server {} {\n            test \"Before the replica connects we issue two EVAL commands\" {\n                # One with an error, but still executing a command.\n                # SHA is: 67164fc43fa971f76fd1aaeeaf60c1c178d25876\n                catch {\n                    run_script {redis.call('incr',KEYS[1]); redis.call('nonexisting')} 1 x\n                }\n                # One command is correct:\n                # SHA is: 6f5ade10a69975e903c6d07b10ea44c6382381a5\n                run_script {return redis.call('incr',KEYS[1])} 1 x\n            } {2}\n\n            test \"Connect a replica to the master instance\" {\n                r -1 slaveof [srv 0 host] [srv 0 port]\n                wait_for_condition 50 100 {\n                    [s -1 role] eq {slave} &&\n                    [string match {*master_link_status:up*} [r -1 info replication]]\n                } else {\n                    fail \"Can't turn the instance into a replica\"\n                }\n            }\n\n            if {$is_eval eq 1} {\n            test \"Now use EVALSHA against the master, with both SHAs\" {\n                # The server should replicate successful and unsuccessful\n                # commands as EVAL instead of EVALSHA.\n                catch {\n                    r evalsha 67164fc43fa971f76fd1aaeeaf60c1c178d25876 1 x\n                }\n                r evalsha 6f5ade10a69975e903c6d07b10ea44c6382381a5 1 x\n            } {4}\n\n            test \"'x' should be '4' for EVALSHA being replicated by effects\" {\n                wait_for_condition 50 100 {\n         
           [r -1 get x] eq {4}\n                } else {\n                    fail \"Expected 4 in x, but value is '[r -1 get x]'\"\n                }\n            }\n            } ;# is_eval\n\n            test \"Replication of script multiple pushes to list with BLPOP\" {\n                set rd [redis_deferring_client]\n                $rd brpop a 0\n                run_script {\n                    redis.call(\"lpush\",KEYS[1],\"1\");\n                    redis.call(\"lpush\",KEYS[1],\"2\");\n                } 1 a\n                set res [$rd read]\n                $rd close\n                wait_for_condition 50 100 {\n                    [r -1 lrange a 0 -1] eq [r lrange a 0 -1]\n                } else {\n                    fail \"Expected list 'a' in replica and master to be the same, but they are respectively '[r -1 lrange a 0 -1]' and '[r lrange a 0 -1]'\"\n                }\n                set res\n            } {a 1}\n\n            if {$is_eval eq 1} {\n            test \"EVALSHA replication when first call is readonly\" {\n                r del x\n                r eval {if tonumber(ARGV[1]) > 0 then redis.call('incr', KEYS[1]) end} 1 x 0\n                r evalsha 6e0e2745aa546d0b50b801a20983b70710aef3ce 1 x 0\n                r evalsha 6e0e2745aa546d0b50b801a20983b70710aef3ce 1 x 1\n                wait_for_condition 50 100 {\n                    [r -1 get x] eq {1}\n                } else {\n                    fail \"Expected 1 in x, but value is '[r -1 get x]'\"\n                }\n            }\n            } ;# is_eval\n\n            test \"Lua scripts using SELECT are replicated correctly\" {\n                run_script {\n                    redis.call(\"set\",\"foo1\",\"bar1\")\n                    redis.call(\"select\",\"10\")\n                    redis.call(\"incr\",\"x\")\n                    redis.call(\"select\",\"11\")\n                    redis.call(\"incr\",\"z\")\n                } 3 foo1 x z\n                run_script {\n         
redis.call(\"set\",\"foo1\",\"bar1\")\n                    redis.call(\"select\",\"10\")\n                    redis.call(\"incr\",\"x\")\n                    redis.call(\"select\",\"11\")\n                    redis.call(\"incr\",\"z\")\n                } 3 foo1 x z\n                wait_for_condition 50 100 {\n                    [debug_digest -1] eq [debug_digest]\n                } else {\n                    fail \"Master-Replica desync after Lua script using SELECT.\"\n                }\n            } {} {singledb:skip}\n        }\n    }\n\nstart_server {tags {\"scripting repl external:skip\"}} {\n    start_server {overrides {appendonly yes aof-use-rdb-preamble no}} {\n        test \"Connect a replica to the master instance\" {\n            r -1 slaveof [srv 0 host] [srv 0 port]\n            wait_for_condition 50 100 {\n                [s -1 role] eq {slave} &&\n                [string match {*master_link_status:up*} [r -1 info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n        }\n\n        # replicate_commands is the default for Redis Functions\n        test \"Redis.replicate_commands() can be issued anywhere now\" {\n            r eval {\n                redis.call('set','foo','bar');\n                return redis.replicate_commands();\n            } 0\n        } {1}\n\n        test \"Redis.set_repl() can be issued before replicate_commands() now\" {\n            catch {\n                r eval {\n                    redis.set_repl(redis.REPL_ALL);\n                } 0\n            } e\n            set e\n        } {}\n\n        test \"Redis.set_repl() doesn't accept invalid values\" {\n            catch {\n                run_script {\n                    redis.set_repl(12345);\n                } 0\n            } e\n            set e\n        } {*Invalid*flags*}\n\n        test \"Test selective replication of certain Redis commands from Lua\" {\n            r del a b c d\n       
     run_script {\n                redis.call('set','a','1');\n                redis.set_repl(redis.REPL_NONE);\n                redis.call('set','b','2');\n                redis.set_repl(redis.REPL_AOF);\n                redis.call('set','c','3');\n                redis.set_repl(redis.REPL_ALL);\n                redis.call('set','d','4');\n            } 4 a b c d\n\n            wait_for_condition 50 100 {\n                [r -1 mget a b c d] eq {1 {} {} 4}\n            } else {\n                fail \"Only a and d should be replicated to replica\"\n            }\n\n            # Master should have everything right now\n            assert {[r mget a b c d] eq {1 2 3 4}}\n\n            # After an AOF reload only a, c and d should exist\n            r debug loadaof\n\n            assert {[r mget a b c d] eq {1 {} 3 4}}\n        }\n\n        test \"PRNG is seeded randomly for command replication\" {\n            if {$is_eval eq 1} {\n                # on is_eval Lua we need to call redis.replicate_commands() to get real randomization\n                set a [\n                    run_script {\n                        redis.replicate_commands()\n                        return math.random()*100000;\n                    } 0\n                ]\n                set b [\n                    run_script {\n                        redis.replicate_commands()\n                        return math.random()*100000;\n                    } 0\n                ]\n            } else {\n                set a [\n                    run_script {\n                        return math.random()*100000;\n                    } 0\n                ]\n                set b [\n                    run_script {\n                        return math.random()*100000;\n                    } 0\n                ]\n            }\n            assert {$a ne $b}\n        }\n\n        test \"Using side effects is not a problem with command replication\" {\n            run_script {\n                
redis.call('set','time',redis.call('time')[1])\n            } 0\n\n            assert {[r get time] ne {}}\n\n            wait_for_condition 50 100 {\n                [r get time] eq [r -1 get time]\n            } else {\n                fail \"Time key does not match between master and replica\"\n            }\n        }\n    }\n}\n\nif {$is_eval eq 1} {\nstart_server {tags {\"scripting external:skip\"}} {\n    r script debug sync\n    r eval {return 'hello'} 0\n    r eval {return 'hello'} 0\n}\n\nstart_server {tags {\"scripting needs:debug external:skip\"}} {\n    test {Test scripting debug protocol parsing} {\n        r script debug sync\n        r eval {return 'hello'} 0\n        catch {r 'hello\\0world'} e\n        assert_match {*Unknown Redis Lua debugger command*} $e\n        catch {r 'hello\\0'} e\n        assert_match {*Unknown Redis Lua debugger command*} $e\n        catch {r '\\0hello'} e\n        assert_match {*Unknown Redis Lua debugger command*} $e\n        catch {r '\\0hello\\0'} e\n        assert_match {*Unknown Redis Lua debugger command*} $e\n    }\n\n    test {Test scripting debug lua stack overflow} {\n        r script debug sync\n        r eval {return 'hello'} 0\n        set cmd \"*101\\r\\n\\$5\\r\\nredis\\r\\n\"\n        append cmd [string repeat \"\\$4\\r\\ntest\\r\\n\" 100]\n        r write $cmd\n        r flush\n        set ret [r read]\n        assert_match {*Unknown Redis command called from script*} $ret\n        # make sure the server is still ok\n        reconnect\n        assert_equal [r ping] {PONG}\n    }\n}\n\nstart_server {tags {\"scripting external:skip\"}} {\n    test {Lua scripts eviction does not generate many scripts} {\n        r script flush\n        r config resetstat\n\n        # \"return 1\" sha is: e0e1f9fabfc9d4800c877a703b823ac0578ff8db\n        # \"return 500\" sha is: 98fe65896b61b785c5ed328a5a0a1421f4f1490c\n        for {set j 1} {$j <= 250} {incr j} {\n            r eval \"return $j\" 0\n        }\n        for 
{set j 251} {$j <= 500} {incr j} {\n            r eval_ro \"return $j\" 0\n        }\n        assert_equal [s number_of_cached_scripts] 500\n        assert_equal 1 [r evalsha e0e1f9fabfc9d4800c877a703b823ac0578ff8db 0]\n        assert_equal 1 [r evalsha_ro e0e1f9fabfc9d4800c877a703b823ac0578ff8db 0]\n        assert_equal 500 [r evalsha 98fe65896b61b785c5ed328a5a0a1421f4f1490c 0]\n        assert_equal 500 [r evalsha_ro 98fe65896b61b785c5ed328a5a0a1421f4f1490c 0]\n\n        # Scripts between \"return 1\" and \"return 500\" are evicted\n        for {set j 501} {$j <= 750} {incr j} {\n            r eval \"return $j\" 0\n        }\n        for {set j 751} {$j <= 1000} {incr j} {\n            r eval \"return $j\" 0\n        }\n        assert_error {NOSCRIPT*} {r evalsha e0e1f9fabfc9d4800c877a703b823ac0578ff8db 0}\n        assert_error {NOSCRIPT*} {r evalsha_ro e0e1f9fabfc9d4800c877a703b823ac0578ff8db 0}\n        assert_error {NOSCRIPT*} {r evalsha 98fe65896b61b785c5ed328a5a0a1421f4f1490c 0}\n        assert_error {NOSCRIPT*} {r evalsha_ro 98fe65896b61b785c5ed328a5a0a1421f4f1490c 0}\n\n        assert_equal [s evicted_scripts] 500\n        assert_equal [s number_of_cached_scripts] 500\n    }\n\n    test {Lua scripts eviction is plain LRU} {\n        r script flush\n        r config resetstat\n\n        # \"return 1\" sha is: e0e1f9fabfc9d4800c877a703b823ac0578ff8db\n        # \"return 2\" sha is: 7f923f79fe76194c868d7e1d0820de36700eb649\n        # \"return 3\" sha is: 09d3822de862f46d784e6a36848b4f0736dda47a\n        # \"return 500\" sha is: 98fe65896b61b785c5ed328a5a0a1421f4f1490c\n        # \"return 1000\" sha is: 94f1a7bc9f985a1a1d5a826a85579137d9d840c8\n        for {set j 1} {$j <= 500} {incr j} {\n            r eval \"return $j\" 0\n        }\n\n        # Call \"return 1\" to move it to the tail.\n        r eval \"return 1\" 0\n        # Call \"return 2\" to move it to the tail.\n        r evalsha 7f923f79fe76194c868d7e1d0820de36700eb649 0\n        # Create a new 
script, \"return 3\" will be evicted.\n        r eval \"return 1000\" 0\n        # \"return 1\" is ok since it was moved to the tail.\n        assert_equal 1 [r evalsha e0e1f9fabfc9d4800c877a703b823ac0578ff8db 0]\n        # \"return 2\" is ok since it was moved to the tail.\n        assert_equal 2 [r evalsha 7f923f79fe76194c868d7e1d0820de36700eb649 0]\n        # \"return 3\" was evicted.\n        assert_error {NOSCRIPT*} {r evalsha 09d3822de862f46d784e6a36848b4f0736dda47a 0}\n        # Others are ok.\n        assert_equal 500 [r evalsha 98fe65896b61b785c5ed328a5a0a1421f4f1490c 0]\n        assert_equal 1000 [r evalsha 94f1a7bc9f985a1a1d5a826a85579137d9d840c8 0]\n\n        assert_equal [s evicted_scripts] 1\n        assert_equal [s number_of_cached_scripts] 500\n    }\n\n    test {Lua scripts eviction does not affect script load} {\n        r script flush\n        r config resetstat\n\n        set num [randomRange 500 1000]\n        for {set j 1} {$j <= $num} {incr j} {\n            r script load \"return $j\"\n            r eval \"return 'str_$j'\" 0\n        }\n        set evicted [s evicted_scripts]\n        set cached [s number_of_cached_scripts]\n        # evicted = num eval scripts - 500 eval scripts\n        assert_equal $evicted [expr $num-500]\n        # cached = num load scripts + 500 eval scripts\n        assert_equal $cached [expr $num+500]\n    }\n}\n\n} ;# is_eval\n\nstart_server {tags {\"scripting needs:debug\"}} {\n    r debug set-disable-deny-scripts 1\n\n    for {set i 2} {$i <= 3} {incr i} {\n        for {set client_proto 2} {$client_proto <= 3} {incr client_proto} {\n            if {[lsearch $::denytags \"resp3\"] >= 0} {\n                if {$client_proto == 3} {continue}\n            } elseif {$::force_resp3} {\n                if {$client_proto == 2} {continue}\n            }\n            r hello $client_proto\n            set extra \"RESP$i/$client_proto\"\n            r readraw 1\n\n            test \"test $extra big number protocol parsing\" {\n     
           set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'bignum')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {$37}\n                    assert_equal [r read] {1234567999999999999999999999999999999}\n                } else {\n                    assert_equal $ret {(1234567999999999999999999999999999999}\n                }\n            }\n\n            test \"test $extra malformed big number protocol parsing\" {\n                set ret [run_script \"return {big_number='123\\\\r\\\\n123'}\" 0]\n                if {$client_proto == 2} {\n                    # if the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {$8}\n                    assert_equal [r read] {123  123}\n                } else {\n                    assert_equal $ret {(123  123}\n                }\n            }\n\n            test \"test $extra map protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'map')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {*6}\n                } else {\n                    assert_equal $ret {%3}\n                }\n                for {set j 0} {$j < 6} {incr j} {\n                    r read\n                }\n            }\n\n            test \"test $extra set protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'set')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {*3}\n                } else {\n                    assert_equal $ret {~3}\n                }\n    
            for {set j 0} {$j < 3} {incr j} {\n                    r read\n                }\n            }\n\n            test \"test $extra double protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'double')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {$5}\n                    assert_equal [r read] {3.141}\n                } else {\n                    assert_equal $ret {,3.141}\n                }\n            }\n\n            test \"test $extra null protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'null')\" 0]\n                if {$client_proto == 2} {\n                    # null is a special case in which the Lua client format does not affect the reply to the client\n                    assert_equal $ret {$-1}\n                } else {\n                    assert_equal $ret {_}\n                }\n            } {}\n\n            test \"test $extra verbatim protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'verbatim')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {$25}\n                    assert_equal [r read] {This is a verbatim}\n                    assert_equal [r read] {string}\n                } else {\n                    assert_equal $ret {=29}\n                    assert_equal [r read] {txt:This is a verbatim}\n                    assert_equal [r read] {string}\n                }\n            }\n\n            test \"test $extra true protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'true')\" 0]\n                if 
{$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {:1}\n                } else {\n                    assert_equal $ret {#t}\n                }\n            }\n\n            test \"test $extra false protocol parsing\" {\n                set ret [run_script \"redis.setresp($i);return redis.call('debug', 'protocol', 'false')\" 0]\n                if {$client_proto == 2 || $i == 2} {\n                    # if either Lua or the client is RESP2 the reply will be RESP2\n                    assert_equal $ret {:0}\n                } else {\n                    assert_equal $ret {#f}\n                }\n            }\n\n            r readraw 0\n            r hello 2\n        }\n    }\n\n    # attributes are not relevant to test with RESP2\n    test {test resp3 attribute protocol parsing} {\n        # attributes are not (yet) exposed to the script,\n        # so here we just check that the parser handles them and that they are ignored.\n        run_script \"redis.setresp(3);return redis.call('debug', 'protocol', 'attrib')\" 0\n    } {Some real reply following the attribute}\n\n    test \"Script blocks the time during execution\" {\n        assert_equal [run_script {\n            redis.call(\"SET\", \"key\", \"value\", \"PX\", \"1\")\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n            return redis.call(\"EXISTS\", \"key\")\n        } 1 key] 1\n\n        assert_equal 0 [r EXISTS key]\n    }\n\n    test \"Script deletes the expired key\" {\n        r DEBUG set-active-expire 0\n        r SET key value PX 1\n        after 2\n\n        # use DEBUG OBJECT to make sure it doesn't error (means the key still exists)\n        r DEBUG OBJECT key\n\n        assert_equal [run_script {return redis.call('EXISTS', 'key')} 1 key] 0\n        assert_equal 0 [r EXISTS key]\n        r DEBUG set-active-expire 1\n    }\n\n    test \"TIME command using cached time\" {\n        set res [run_script 
{\n            local result1 = {redis.call(\"TIME\")}\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n            local result2 = {redis.call(\"TIME\")}\n            return {result1, result2}\n         } 0]\n         assert_equal [lindex $res 0] [lindex $res 1]\n     }\n\n    test \"Script blocks the time in some expiration-related commands\" {\n        # The test uses different commands to set the \"same\" expiration time for different keys,\n        # interspersed with \"DEBUG SLEEP\" calls, to verify that time is frozen in the script.\n        # The commands involved are [P]TTL / SET EX[PX] / [P]EXPIRE / GETEX / [P]SETEX / [P]EXPIRETIME\n        set res [run_script {\n            redis.call(\"SET\", \"key1{t}\", \"value\", \"EX\", 1)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SET\", \"key2{t}\", \"value\", \"PX\", 1000)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SET\", \"key3{t}\", \"value\")\n            redis.call(\"EXPIRE\", \"key3{t}\", 1)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SET\", \"key4{t}\", \"value\")\n            redis.call(\"PEXPIRE\", \"key4{t}\", 1000)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SETEX\", \"key5{t}\", 1, \"value\")\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"PSETEX\", \"key6{t}\", 1000, \"value\")\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SET\", \"key7{t}\", \"value\")\n            redis.call(\"GETEX\", \"key7{t}\", \"EX\", 1)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            redis.call(\"SET\", \"key8{t}\", \"value\")\n            redis.call(\"GETEX\", \"key8{t}\", \"PX\", 1000)\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n\n            local ttl_results = {redis.call(\"TTL\", \"key1{t}\"),\n                                 redis.call(\"TTL\", \"key2{t}\"),\n                              
   redis.call(\"TTL\", \"key3{t}\"),\n                                 redis.call(\"TTL\", \"key4{t}\"),\n                                 redis.call(\"TTL\", \"key5{t}\"),\n                                 redis.call(\"TTL\", \"key6{t}\"),\n                                 redis.call(\"TTL\", \"key7{t}\"),\n                                 redis.call(\"TTL\", \"key8{t}\")}\n\n            local pttl_results = {redis.call(\"PTTL\", \"key1{t}\"),\n                                  redis.call(\"PTTL\", \"key2{t}\"),\n                                  redis.call(\"PTTL\", \"key3{t}\"),\n                                  redis.call(\"PTTL\", \"key4{t}\"),\n                                  redis.call(\"PTTL\", \"key5{t}\"),\n                                  redis.call(\"PTTL\", \"key6{t}\"),\n                                  redis.call(\"PTTL\", \"key7{t}\"),\n                                  redis.call(\"PTTL\", \"key8{t}\")}\n\n            local expiretime_results = {redis.call(\"EXPIRETIME\", \"key1{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key2{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key3{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key4{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key5{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key6{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key7{t}\"),\n                                        redis.call(\"EXPIRETIME\", \"key8{t}\")}\n\n            local pexpiretime_results = {redis.call(\"PEXPIRETIME\", \"key1{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key2{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key3{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key4{t}\"),\n                                         
redis.call(\"PEXPIRETIME\", \"key5{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key6{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key7{t}\"),\n                                         redis.call(\"PEXPIRETIME\", \"key8{t}\")}\n\n            return {ttl_results, pttl_results, expiretime_results, pexpiretime_results}\n        } 8 key1{t} key2{t} key3{t} key4{t} key5{t} key6{t} key7{t} key8{t}]\n\n        # The elements in each list are equal.\n        assert_equal 1 [llength [lsort -unique [lindex $res 0]]]\n        assert_equal 1 [llength [lsort -unique [lindex $res 1]]]\n        assert_equal 1 [llength [lsort -unique [lindex $res 2]]]\n        assert_equal 1 [llength [lsort -unique [lindex $res 3]]]\n\n        # Then we check that the expiration time is set successfully\n        # (the first element of each list is a positive number).\n        assert_morethan [lindex $res 0 0] 0\n        assert_morethan [lindex $res 1 0] 0\n        assert_morethan [lindex $res 2 0] 0\n        assert_morethan [lindex $res 3 0] 0\n    }\n\n    test \"RESTORE expired keys with expiration time\" {\n        set res [run_script {\n            redis.call(\"SET\", \"key1{t}\", \"value\")\n            local encoded = redis.call(\"DUMP\", \"key1{t}\")\n\n            redis.call(\"RESTORE\", \"key2{t}\", 1, encoded, \"REPLACE\")\n            redis.call(\"DEBUG\", \"SLEEP\", 0.01)\n            redis.call(\"RESTORE\", \"key3{t}\", 1, encoded, \"REPLACE\")\n\n            return {redis.call(\"PEXPIRETIME\", \"key2{t}\"), redis.call(\"PEXPIRETIME\", \"key3{t}\")}\n        } 3 key1{t} key2{t} key3{t}]\n\n        # Can get the expiration time and they are all equal.\n        assert_morethan [lindex $res 0] 0\n        assert_equal [lindex $res 0] [lindex $res 1]\n    }\n\n    r debug set-disable-deny-scripts 0\n}\n\nstart_server {tags {\"scripting\"}} {\n    test \"Test script flush will not leak memory - script:$is_eval\" {\n        r flushall\n        r script flush\n        r function flush\n\n        # 
This is a best-effort test to check that we don't leak resources on\n        # the script flush and function flush commands. For the Lua VM we create a\n        # jemalloc thread cache; on each script flush command the thread cache is\n        # destroyed and a new one is created. In this test we run script flush\n        # many times and verify there is no increase in memory usage while\n        # re-creating some of the Lua VM resources.\n        set used_memory [s used_memory]\n        set allocator_allocated [s allocator_allocated]\n\n        r multi\n        for {set j 1} {$j <= 500} {incr j} {\n            if {$is_eval} {\n                r SCRIPT FLUSH\n            } else {\n                r FUNCTION FLUSH\n            }\n        }\n        r exec\n\n        # Verify used memory is not (much) higher.\n        assert_lessthan [s used_memory] [expr $used_memory*1.5]\n        assert_lessthan [s allocator_allocated] [expr $allocator_allocated*1.5]\n    }\n\n    test \"Verify Lua performs GC correctly after script loading\" {\n        set dummy_script \"--[string repeat x 10]\\nreturn \"\n        set n 50000\n        for {set i 0} {$i < $n} {incr i} {\n            set script \"$dummy_script[format \"%06d\" $i]\"\n            if {$is_eval} {\n                r script load $script\n            } else {\n                r function load \"#!lua name=test$i\\nredis.register_function('test$i', function(KEYS, ARGV)\\n $script \\nend)\"\n            }\n        }\n\n        if {$is_eval} {\n            assert_lessthan [s used_memory_lua] 17500000\n        } else {\n            assert_lessthan [s used_memory_vm_functions] 14500000\n        }\n    } {} {debug_defrag:skip}\n}\n} ;# foreach is_eval\n\n\n# Scripting \"shebang\" notation tests\nstart_server {tags {\"scripting\"}} {\n    test \"Shebang support for lua engine\" {\n        catch {\n            r eval {#!not-lua\n                return 1\n            } 0\n        } e\n        assert_match {*Unexpected engine in 
script shebang*} $e\n\n        assert_equal [r eval {#!lua\n            return 1\n        } 0] 1\n    }\n\n    test \"Unknown shebang option\" {\n        catch {\n            r eval {#!lua badger=data\n                return 1\n            } 0\n        } e\n        assert_match {*Unknown lua shebang option*} $e\n    }\n\n    test \"Unknown shebang flag\" {\n        catch {\n            r eval {#!lua flags=allow-oom,what?\n                return 1\n            } 0\n        } e\n        assert_match {*Unexpected flag in script shebang*} $e\n    }\n\n    test \"allow-oom shebang flag\" {\n        r set x 123\n\n        r config set maxmemory 1\n\n        # Fail to execute deny-oom command in OOM condition (backwards compatibility mode without flags)\n        assert_error {OOM command not allowed when used memory > 'maxmemory'*} {\n            r eval {\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n        # Can execute non-deny-oom commands in OOM condition (backwards compatibility mode without flags)\n        assert_equal [\n            r eval {\n                return redis.call('get','x')\n            } 1 x\n        ] {123}\n\n        # Fail to execute regardless of script content when we use default flags in OOM condition\n        assert_error {OOM *} {\n            r eval {#!lua flags=\n                return 1\n            } 0\n        }\n\n        # Script with allow-oom can write despite being in OOM state\n        assert_equal [\n            r eval {#!lua flags=allow-oom\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        ] 1\n\n        # read-only scripts imply allow-oom\n        assert_equal [\n            r eval {#!lua flags=no-writes\n                redis.call('get','x')\n                return 1\n            } 0\n        ] 1\n        assert_equal [\n            r eval_ro {#!lua flags=no-writes\n                redis.call('get','x')\n                return 
1\n            } 1 x\n        ] 1\n\n        # Script with no shebang can read in OOM state\n        assert_equal [\n            r eval {\n                redis.call('get','x')\n                return 1\n            } 1 x\n        ] 1\n\n        # Script with no shebang can read in OOM state (eval_ro variant)\n        assert_equal [\n            r eval_ro {\n                redis.call('get','x')\n                return 1\n            } 1 x\n        ] 1\n\n        r config set maxmemory 0\n    } {OK} {needs:config-maxmemory}\n\n    test \"no-writes shebang flag\" {\n        assert_error {ERR Write commands are not allowed from read-only scripts*} {\n            r eval {#!lua flags=no-writes\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n        }\n    }\n    \n    start_server {tags {\"external:skip\"}} {\n        r -1 set x \"some value\"\n        test \"no-writes shebang flag on replica\" {\n            r replicaof [srv -1 host] [srv -1 port]\n            wait_for_condition 50 100 {\n                [s role] eq {slave} &&\n                [string match {*master_link_status:up*} [r info replication]]\n            } else {\n                fail \"Can't turn the instance into a replica\"\n            }\n\n            assert_equal [\n                r eval {#!lua flags=no-writes\n                    return redis.call('get','x')\n                } 1 x\n            ] \"some value\"\n\n            assert_error {READONLY You can't write against a read only replica.} {\n                r eval {#!lua\n                    return redis.call('get','x')\n                } 1 x\n            }\n\n            # test no-write inside multi-exec\n            r multi\n            r eval {#!lua flags=no-writes\n                redis.call('get','x')\n                return 1\n            } 1 x\n            assert_equal [r exec] 1\n\n            # test no shebang without write inside multi-exec\n            r multi\n            r eval {\n            
    redis.call('get','x')\n                return 1\n            } 1 x\n            assert_equal [r exec] 1\n\n            # temporarily set the server to master, so it doesn't block the queuing\n            # and we can test the evaluation of the flags on exec\n            r replicaof no one\n            set rr [redis_client]\n            set rr2 [redis_client]\n            $rr multi\n            $rr2 multi\n\n            # test write inside multi-exec\n            # we don't need to do any actual write\n            $rr eval {#!lua\n                return 1\n            } 0\n\n            # test no shebang with write inside multi-exec\n            $rr2 eval {\n                redis.call('set','x',1)\n                return 1\n            } 1 x\n\n            r replicaof [srv -1 host] [srv -1 port]\n\n            # To avoid -LOADING reply, wait until replica syncs with master.\n            wait_for_condition 50 100 {\n                [s master_link_status] eq {up}\n            } else {\n                fail \"Replica did not sync in time.\"\n            }\n\n            assert_error {EXECABORT Transaction discarded because of: READONLY *} {$rr exec}\n            assert_error {READONLY You can't write against a read only replica. 
script: *} {$rr2 exec}\n            $rr close\n            $rr2 close\n        }\n    }\n\n    test \"not enough good replicas\" {\n        r set x \"some value\"\n        r config set min-replicas-to-write 1\n\n        assert_equal [\n            r eval {#!lua flags=no-writes\n                return redis.call('get','x')\n            } 1 x\n        ] \"some value\"\n\n        assert_equal [\n            r eval {\n                return redis.call('get','x')\n            } 1 x\n        ] \"some value\"\n\n        assert_error {NOREPLICAS *} {\n            r eval {#!lua\n                return redis.call('get','x')\n            } 1 x\n        }\n\n        assert_error {NOREPLICAS *} {\n            r eval {\n                return redis.call('set','x', 1)\n            } 1 x\n        }\n\n        r config set min-replicas-to-write 0\n    }\n\n    test \"not enough good replicas state change during long script\" {\n        r set x \"pre-script value\"\n        r config set min-replicas-to-write 1\n        r config set lua-time-limit 10\n        start_server {tags {\"external:skip\"}} {\n            # add a replica and wait for the master to recognize it's online\n            r slaveof [srv -1 host] [srv -1 port]\n            wait_replica_online [srv -1 client]\n\n            # run a slow script that does one write, then waits for INFO to indicate\n            # that the replica dropped, and then runs another write\n            set rd [redis_deferring_client -1]\n            $rd eval {\n                redis.call('set','x',\"script value\")\n                while true do\n                    local info = redis.call('info','replication')\n                    if (string.match(info, \"connected_slaves:0\")) then\n                        redis.call('set','x',info)\n                        break\n                    end\n                end\n                return 1\n            } 1 x\n\n            # wait for the script to time out and yield\n            wait_for_condition 
100 100 {\n                [catch {r -1 ping} e] == 1\n            } else {\n                fail \"Can't wait for script to start running\"\n            }\n            catch {r -1 ping} e\n            assert_match {BUSY*} $e\n\n            # cause the replica to disconnect (triggering the busy script to exit)\n            r slaveof no one\n\n            # make sure the script was able to write after the replica dropped\n            assert_equal [$rd read] 1\n            assert_match {*connected_slaves:0*} [r -1 get x]\n\n            $rd close\n        }\n        r config set min-replicas-to-write 0\n        r config set lua-time-limit 5000\n    } {OK} {external:skip needs:repl}\n\n    test \"allow-stale shebang flag\" {\n        r config set replica-serve-stale-data no\n        r replicaof 127.0.0.1 1\n\n        assert_error {MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.} {\n            r eval {\n                return redis.call('get','x')\n            } 1 x\n        }\n\n        assert_error {MASTERDOWN Link with MASTER is down and replica-serve-stale-data is set to 'no'.} {\n            r eval {#!lua flags=no-writes\n                return 1\n            } 0\n        }\n\n        assert_equal [\n            r eval {#!lua flags=allow-stale,no-writes\n                return 1\n            } 0\n        ] 1\n\n\n        assert_error {*Can not execute the command on a stale replica*} {\n            r eval {#!lua flags=allow-stale,no-writes\n                return redis.call('get','x')\n            } 1 x\n        }\n        \n        assert_match {foobar} [\n            r eval {#!lua flags=allow-stale,no-writes\n                return redis.call('echo','foobar')\n            } 0\n        ]\n        \n        # Test again with EVALSHA\n        set sha [\n            r script load {#!lua flags=allow-stale,no-writes\n                return redis.call('echo','foobar')\n            }\n        ]\n        assert_match {foobar} [r evalsha 
$sha 0]\n        \n        r replicaof no one\n        r config set replica-serve-stale-data yes\n        set _ {}\n    } {} {external:skip}\n\n    test \"rejected scripts do not cause a Lua stack leak\" {\n        r config set maxmemory 1\n        for {set i 0} {$i < 50} {incr i} {\n            assert_error {OOM *} {r eval {#!lua\n                return 1\n            } 0}\n        }\n        r config set maxmemory 0\n        assert_equal [r eval {#!lua\n            return 1\n        } 0] 1\n    }\n}\n\n# Additional eval-only tests\nstart_server {tags {\"scripting\"}} {\n    test \"Consistent eval error reporting\" {\n        r config resetstat\n        r config set maxmemory 1\n        # Script aborted due to Redis state (OOM) should report script execution error with detailed internal error\n        assert_error {OOM command not allowed when used memory > 'maxmemory'*} {\n            r eval {return redis.call('set','x','y')} 1 x\n        }\n        assert_equal [errorrstat OOM r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=0*rejected_calls=1,failed_calls=0*} [cmdrstat set r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat eval r]\n\n        # redis.pcall() failure due to Redis state (OOM) returns lua error table with Redis error message without '-' prefix\n        r config resetstat\n        assert_equal [\n            r eval {\n                local t = redis.pcall('set','x','y')\n                if t['err'] == \"OOM command not allowed when used memory > 'maxmemory'.\" then\n                    return 1\n                else\n                    return 0\n                end\n            } 1 x\n        ] 1\n        # error stats were not incremented\n        assert_equal [errorrstat ERR r] {}\n        assert_equal [errorrstat OOM r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=0*rejected_calls=1,failed_calls=0*} [cmdrstat set r]\n        
assert_match {calls=1*rejected_calls=0,failed_calls=0*} [cmdrstat eval r]\n        \n        # Returning an error object from lua is handled as a valid RESP error result.\n        r config resetstat\n        assert_error {OOM command not allowed when used memory > 'maxmemory'.} {\n            r eval { return redis.pcall('set','x','y') } 1 x\n        }\n        assert_equal [errorrstat ERR r] {}\n        assert_equal [errorrstat OOM r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=0*rejected_calls=1,failed_calls=0*} [cmdrstat set r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat eval r]\n\n        r config set maxmemory 0\n        r config resetstat\n        # Script aborted due to error result of Redis command\n        assert_error {ERR DB index is out of range*} {\n            r eval {return redis.call('select',99)} 0\n        }\n        assert_equal [errorrstat ERR r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat select r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat eval r]\n        \n        # redis.pcall() failure due to error in Redis command returns lua error table with redis error message without '-' prefix\n        r config resetstat\n        assert_equal [\n            r eval {\n                local t = redis.pcall('select',99)\n                if t['err'] == \"ERR DB index is out of range\" then\n                    return 1\n                else\n                    return 0\n                end\n            } 0\n        ] 1\n        assert_equal [errorrstat ERR r] {count=1} ;\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat select r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=0*} [cmdrstat eval r]\n\n        # Script aborted due to scripting specific error state (write cmd 
with eval_ro) should report script execution error with detailed internal error\n        r config resetstat\n        assert_error {ERR Write commands are not allowed from read-only scripts*} {\n            r eval_ro {return redis.call('set','x','y')} 1 x\n        }\n        assert_equal [errorrstat ERR r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=0*rejected_calls=1,failed_calls=0*} [cmdrstat set r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat eval_ro r]\n\n        # redis.pcall() failure due to scripting specific error state (write cmd with eval_ro) returns lua error table with Redis error message without '-' prefix\n        r config resetstat\n        assert_equal [\n            r eval_ro {\n                local t = redis.pcall('set','x','y')\n                if t['err'] == \"ERR Write commands are not allowed from read-only scripts.\" then\n                    return 1\n                else\n                    return 0\n                end\n            } 1 x\n        ] 1\n        assert_equal [errorrstat ERR r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=0*rejected_calls=1,failed_calls=0*} [cmdrstat set r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=0*} [cmdrstat eval_ro r]\n\n        r config resetstat\n        # make sure geoadd will fail\n        r set Sicily 1\n        assert_error {WRONGTYPE Operation against a key holding the wrong kind of value*} {\n            r eval {return redis.call('GEOADD', 'Sicily', '13.361389', '38.115556', 'Palermo', '15.087269', '37.502669', 'Catania')} 1 x\n        }\n        assert_equal [errorrstat WRONGTYPE r] {count=1}\n        assert_equal [s total_error_replies] {1}\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat geoadd r]\n        assert_match {calls=1*rejected_calls=0,failed_calls=1*} [cmdrstat eval r]\n    } {} {cluster:skip}\n    \n    test \"LUA 
redis.error_reply API\" {\n        r config resetstat\n        assert_error {MY_ERR_CODE custom msg} {\n            r eval {return redis.error_reply(\"MY_ERR_CODE custom msg\")} 0\n        }\n        assert_equal [errorrstat MY_ERR_CODE r] {count=1}\n    }\n\n    test \"LUA redis.error_reply API with empty string\" {\n        r config resetstat\n        assert_error {ERR} {\n            r eval {return redis.error_reply(\"\")} 0\n        }\n        assert_equal [errorrstat ERR r] {count=1}\n    }\n\n    test \"LUA redis.status_reply API\" {\n        r config resetstat\n        r readraw 1\n        assert_equal [\n            r eval {return redis.status_reply(\"MY_OK_CODE custom msg\")} 0\n        ] {+MY_OK_CODE custom msg}\n        r readraw 0\n        assert_equal [errorrstat MY_ERR_CODE r] {} ;# error stats were not incremented\n    }\n\n    test \"LUA test pcall\" {\n        assert_equal [\n            r eval {local status, res = pcall(function() return 1 end); return 'status: ' .. tostring(status) .. ' result: ' .. res} 0\n        ] {status: true result: 1}\n    }\n\n    test \"LUA test pcall with error\" {\n        assert_match {status: false result:*Script attempted to access nonexistent global variable 'foo'} [\n            r eval {local status, res = pcall(function() return foo end); return 'status: ' .. tostring(status) .. ' result: ' .. 
res} 0\n        ]\n    }\n\n    test \"LUA test pcall with non string/integer arg\" {\n        assert_error \"ERR Lua redis lib command arguments must be strings or integers*\" {\n            r eval {\n                local x={}\n                return redis.call(\"ping\", x)\n            } 0\n        }\n        # run another command, to make sure the cached argv array survived\n        assert_equal [\n            r eval {\n                return redis.call(\"ping\", \"asdf\")\n            } 0\n        ] {asdf}\n    }\n\n    test \"LUA test trim string as expected\" {\n        # This test may fail with a memory allocator other than jemalloc, as libc for example may keep the old size on realloc.\n        if {[string match {*jemalloc*} [s mem_allocator]]} {\n            # Test that when using the LUA cache mechanism, if there is free space in the argv array, the string is trimmed.\n            r set foo [string repeat \"a\" 45]\n            set expected_memory [r memory usage foo]\n\n            # For the requested 63 bytes, jemalloc will allocate 80 bytes.\n            # We can't test for larger sizes because LUA_CMD_OBJCACHE_MAX_LEN is 64.\n            # This value will be recycled to be used in the next argument.\n            # We use SETNX so the string is not stored, which would prevent us from reusing it in the next command.\n            r eval {\n                return redis.call(\"SETNX\", \"foo\", string.rep(\"a\", 63))\n            } 0\n\n            # For the requested 45 bytes, jemalloc will allocate 56 bytes.\n            # We can't test for smaller sizes because OBJ_ENCODING_EMBSTR_SIZE_LIMIT is 44, below which no trim is done.\n            r eval {\n                return redis.call(\"SET\", \"foo\", string.rep(\"a\", 45))\n            } 0\n\n            # Assert the string has been trimmed and the 80 bytes from the previous alloc were not kept.\n            assert { [r memory usage foo] <= $expected_memory};\n        }\n    }\n\n    test {EVAL - explicit error() 
call handling} {\n        # error(\"simple string error\")\n        assert_error {ERR user_script:1: simple string error script: *} {\n            r eval \"error('simple string error')\" 0\n        }\n\n        # error({\"err\": \"ERR table error\"})\n        assert_error {ERR table error script: *} {\n            r eval \"error({err='ERR table error'})\" 0\n        }\n\n        # error({})\n        assert_error {ERR unknown error script: *} {\n            r eval \"error({})\" 0\n        }\n    }\n}\n\nstart_server {tags {\"scripting\"}} {\n    test \"Wrong arity EVAL in acl_check_cmd returns error not crash\" {\n        r acl setuser bob on {>123} {+@scripting} {+set} {~x*}\n        assert_equal [r auth bob 123] {OK}\n        # Must be the first Lua call in this server instance\n        catch {run_script {\n            return redis.acl_check_cmd('eval','script')\n        } 0} e\n        assert_match {*Wrong number of args*} $e\n    }\n}\n"
  },
  {
    "path": "tests/unit/shutdown.tcl",
    "content": "start_server {tags {\"shutdown external:skip\"}} {\n    test {Temp rdb will be deleted if we use bg_unlink when shutdown} {\n        for {set i 0} {$i < 20} {incr i} {\n            r set $i $i\n        }\n        r config set rdb-key-save-delay 10000000\n\n        # Child is dumping rdb\n        r bgsave\n        wait_for_condition 1000 10 {\n            [s rdb_bgsave_in_progress] eq 1\n        } else {\n            fail \"bgsave did not start in time\"\n        }\n        after 100 ;# give the child a bit of time for the file to be created\n\n        set dir [lindex [r config get dir] 1]\n        set child_pid [get_child_pid 0]\n        set temp_rdb [file join [lindex [r config get dir] 1] temp-${child_pid}.rdb]\n        # Temp rdb must be existed\n        assert {[file exists $temp_rdb]}\n\n        catch {r shutdown nosave}\n        # Make sure the server was killed\n        catch {set rd [redis_deferring_client]} e\n        assert_match {*connection refused*} $e\n\n        # Temp rdb file must be deleted\n        assert {![file exists $temp_rdb]}\n    }\n}\n\nstart_server {tags {\"shutdown external:skip\"} overrides {save {900 1}}} {\n    test {SHUTDOWN ABORT can cancel SIGTERM} {\n        r debug pause-cron 1\n        set pid [s process_id]\n        exec kill -SIGTERM $pid\n        after 10;               # Give signal handler some time to run\n        r shutdown abort\n        verify_log_message 0 \"*Shutdown manually aborted*\" 0\n        r debug pause-cron 0\n        r ping\n    } {PONG}\n\n    test {Temp rdb will be deleted in signal handle} {\n        for {set i 0} {$i < 20} {incr i} {\n            r set $i $i\n        }\n        # It will cost 2s (20 * 100ms) to dump rdb\n        r config set rdb-key-save-delay 100000\n\n        set pid [s process_id]\n        set temp_rdb [file join [lindex [r config get dir] 1] temp-${pid}.rdb]\n\n        # trigger a shutdown which will save an rdb\n        exec kill -SIGINT $pid\n        # Wait for 
creation of temp rdb\n        wait_for_condition 50 10 {\n            [file exists $temp_rdb]\n        } else {\n            fail \"Can't trigger rdb save on shutdown\"\n        }\n\n        # Insist on immediate shutdown, temp rdb file must be deleted\n        exec kill -SIGINT $pid\n        # wait for the rdb file to be deleted\n        wait_for_condition 50 10 {\n            ![file exists $temp_rdb]\n        } else {\n            fail \"Temp rdb file was not deleted on shutdown\"\n        }\n    }\n}\n\nstart_server {tags {\"shutdown external:skip\"} overrides {save {900 1}}} {\n    set pid [s process_id]\n    set dump_rdb [file join [lindex [r config get dir] 1] dump.rdb]\n\n    test {RDB save will fail in shutdown} {\n        for {set i 0} {$i < 20} {incr i} {\n            r set $i $i\n        }\n\n        # Create a folder called 'dump.rdb' to trigger a temp-rdb rename failure,\n        # which will eventually cause the rdb save to fail.\n        if {[file exists $dump_rdb]} {\n            exec rm -f $dump_rdb\n        }\n        exec mkdir -p $dump_rdb\n    }\n    test {SHUTDOWN will abort if rdb save fails on signal} {\n        # trigger a shutdown which will save an rdb\n        exec kill -SIGINT $pid\n        wait_for_log_messages 0 {\"*Error trying to save the DB, can't exit*\"} 0 100 10\n    }\n    test {SHUTDOWN will abort if rdb save fails on shutdown command} {\n        catch {[r shutdown]} err\n        assert_match {*Errors trying to SHUTDOWN*} $err\n        # make sure the server is still alive\n        assert_equal [r ping] {PONG}\n    }\n    test {SHUTDOWN can proceed if shutdown command was with nosave} {\n        catch {[r shutdown nosave]}\n        wait_for_log_messages 0 {\"*ready to exit, bye bye*\"} 0 100 10\n    }\n    test {Clean up the same-named rdb folder} {\n        exec rm -r $dump_rdb\n    }\n}\n\n\nstart_server {tags {\"shutdown external:skip\"} overrides {appendonly no}} {\n    test {SHUTDOWN SIGTERM will abort if there's an initial AOFRW - 
default} {\n        r config set shutdown-on-sigterm default\n        r config set rdb-key-save-delay 10000000\n        for {set i 0} {$i < 10} {incr i} {\n            r set $i $i\n        }\n\n        r config set appendonly yes\n        wait_for_condition 1000 10 {\n            [s aof_rewrite_in_progress] eq 1\n        } else {\n            fail \"aof rewrite did not start in time\"\n        }\n\n        set pid [s process_id]\n        exec kill -SIGTERM $pid\n        wait_for_log_messages 0 {\"*Writing initial AOF, can't exit*\"} 0 1000 10\n\n        r config set shutdown-on-sigterm force\n    }\n}\n"
  },
  {
    "path": "tests/unit/slowlog.tcl",
    "content": "start_server {tags {\"slowlog\"} overrides {slowlog-log-slower-than 1000000}} {\n    test {SLOWLOG - check that it starts with an empty log} {\n        if {$::external} {\n            r slowlog reset\n        }\n        r slowlog len\n    } {0}\n\n    test {SLOWLOG - only logs commands taking more time than specified} {\n        r config set slowlog-log-slower-than 100000\n        r ping\n        assert_equal [r slowlog len] 0\n        r debug sleep 0.2\n        assert_equal [r slowlog len] 1\n    } {} {needs:debug}\n\n    test {SLOWLOG - zero max length is correctly handled} {\n        r SLOWLOG reset\n        r config set slowlog-max-len 0\n        r config set slowlog-log-slower-than 0\n        for {set i 0} {$i < 100} {incr i} {\n            r ping\n        }\n        r slowlog len\n    } {0}\n\n    test {SLOWLOG - max entries is correctly handled} {\n        r config set slowlog-log-slower-than 0\n        r config set slowlog-max-len 10\n        for {set i 0} {$i < 100} {incr i} {\n            r ping\n        }\n        r slowlog len\n    } {10}\n\n    test {SLOWLOG - GET optional argument to limit output len works} {\n\n        assert_equal 5  [llength [r slowlog get 5]]\n        assert_equal 10 [llength [r slowlog get -1]]\n        assert_equal 10 [llength [r slowlog get 20]]\n    }\n\n    test {SLOWLOG - RESET subcommand works} {\n        r config set slowlog-log-slower-than 100000\n        r slowlog reset\n        r slowlog len\n    } {0}\n\n    test {SLOWLOG - logged entry sanity check} {\n        r client setname foobar\n        r debug sleep 0.2\n        set e [lindex [r slowlog get] 0]\n        assert_equal [llength $e] 6\n        if {!$::external} {\n            assert_equal [lindex $e 0] 106\n        }\n        assert_equal [expr {[lindex $e 2] > 100000}] 1\n        assert_equal [lindex $e 3] {debug sleep 0.2}\n        assert_equal {foobar} [lindex $e 5]\n    } {} {needs:debug}\n\n    test {SLOWLOG - Certain commands are omitted that 
contain sensitive information} {\n        r config set slowlog-max-len 100\n        r config set slowlog-log-slower-than 0\n        r slowlog reset\n        catch {r acl setuser \"slowlog test user\" +get +set} _\n        r config set masteruser \"\"\n        r config set masterauth \"\"\n        r config set requirepass \"\"\n        r config set tls-key-file-pass \"\"\n        r config set tls-client-key-file-pass \"\"\n        r acl setuser slowlog-test-user +get +set\n        r acl getuser slowlog-test-user\n        r acl deluser slowlog-test-user non-existing-user\n        r config set slowlog-log-slower-than 0\n        r config set slowlog-log-slower-than -1\n        set slowlog_resp [r slowlog get -1]\n\n        # Make sure normal configs work, but the two sensitive\n        # commands are omitted or redacted\n        assert_equal 11 [llength $slowlog_resp]\n        assert_equal {slowlog reset} [lindex [lindex $slowlog_resp 10] 3]\n        assert_equal {acl setuser (redacted) (redacted) (redacted)} [lindex [lindex $slowlog_resp 9] 3]\n        assert_equal {config set masteruser (redacted)} [lindex [lindex $slowlog_resp 8] 3]\n        assert_equal {config set masterauth (redacted)} [lindex [lindex $slowlog_resp 7] 3]\n        assert_equal {config set requirepass (redacted)} [lindex [lindex $slowlog_resp 6] 3]\n        assert_equal {config set tls-key-file-pass (redacted)} [lindex [lindex $slowlog_resp 5] 3]\n        assert_equal {config set tls-client-key-file-pass (redacted)} [lindex [lindex $slowlog_resp 4] 3]\n        assert_equal {acl setuser (redacted) (redacted) (redacted)} [lindex [lindex $slowlog_resp 3] 3]\n        assert_equal {acl getuser (redacted)} [lindex [lindex $slowlog_resp 2] 3]\n        assert_equal {acl deluser (redacted) (redacted)} [lindex [lindex $slowlog_resp 1] 3]\n        assert_equal {config set slowlog-log-slower-than 0} [lindex [lindex $slowlog_resp 0] 3]\n    } {} {needs:repl}\n\n    test {SLOWLOG - Some commands can redact 
sensitive fields} {\n        r config set slowlog-log-slower-than 0\n        r slowlog reset\n        r migrate [srv 0 host] [srv 0 port] key 9 5000\n        r migrate [srv 0 host] [srv 0 port] key 9 5000 AUTH user\n        r migrate [srv 0 host] [srv 0 port] key 9 5000 AUTH2 user password\n        r config set slowlog-log-slower-than -1\n        set slowlog_resp [r slowlog get]\n\n        # Make sure all 3 commands were logged, but the sensitive fields are omitted\n        assert_equal 4 [llength $slowlog_resp]\n        assert_match {* key 9 5000} [lindex [lindex $slowlog_resp 2] 3]\n        assert_match {* key 9 5000 AUTH (redacted)} [lindex [lindex $slowlog_resp 1] 3]\n        assert_match {* key 9 5000 AUTH2 (redacted) (redacted)} [lindex [lindex $slowlog_resp 0] 3]\n    } {} {needs:repl}\n\n    test {SLOWLOG - Rewritten commands are logged as their original command} {\n        r config set slowlog-log-slower-than 0\n\n        # Test rewriting client arguments\n        r sadd set a b c d e\n        r slowlog reset\n\n        # SPOP is rewritten as DEL when all keys are removed\n        r spop set 10\n        assert_equal {spop set 10} [lindex [lindex [r slowlog get] 0] 3]\n\n        # Test replacing client arguments\n        r slowlog reset\n\n        # GEOADD is replicated as ZADD\n        r geoadd cool-cities -122.33207 47.60621 Seattle\n        assert_equal {geoadd cool-cities -122.33207 47.60621 Seattle} [lindex [lindex [r slowlog get] 0] 3]\n\n        # Test replacing a single command argument\n        r set A 5\n        r slowlog reset\n        \n        # GETSET is replicated as SET\n        r getset a 5\n        assert_equal {getset a 5} [lindex [lindex [r slowlog get] 0] 3]\n\n        # INCRBYFLOAT calls rewrite multiple times, so it's a special case\n        r set A 0\n        r slowlog reset\n        \n        # INCRBYFLOAT is replicated as SET\n        r INCRBYFLOAT A 1.0\n        assert_equal {INCRBYFLOAT A 1.0} [lindex [lindex [r slowlog get] 0] 
3]\n\n        # blocked BLPOP is replicated as LPOP\n        set rd [redis_deferring_client]\n        $rd blpop l 0\n        wait_for_blocked_clients_count 1 50 100\n        r multi\n        r lpush l foo\n        r slowlog reset\n        r exec\n        $rd read\n        $rd close\n        assert_equal {blpop l 0} [lindex [lindex [r slowlog get] 0] 3]\n    }\n\n    test {SLOWLOG - commands with too many arguments are trimmed} {\n        r config set slowlog-log-slower-than 0\n        r slowlog reset\n        r sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33\n        set e [lindex [r slowlog get] end-1]\n        lindex $e 3\n    } {sadd set 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 {... (2 more arguments)}}\n\n    test {SLOWLOG - too long arguments are trimmed} {\n        r config set slowlog-log-slower-than 0\n        r slowlog reset\n        set arg [string repeat A 129]\n        r sadd set foo $arg\n        set e [lindex [r slowlog get] end-1]\n        lindex $e 3\n    } {sadd set foo {AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA... 
(1 more bytes)}}\n\n    test {SLOWLOG - EXEC is not logged, just executed commands} {\n        r config set slowlog-log-slower-than 100000\n        r slowlog reset\n        assert_equal [r slowlog len] 0\n        r multi\n        r debug sleep 0.2\n        r exec\n        assert_equal [r slowlog len] 1\n        set e [lindex [r slowlog get] 0]\n        assert_equal [lindex $e 3] {debug sleep 0.2}\n    } {} {needs:debug}\n\n    test {SLOWLOG - can clean older entries} {\n        r client setname lastentry_client\n        r config set slowlog-max-len 1\n        r debug sleep 0.2\n        assert {[llength [r slowlog get]] == 1}\n        set e [lindex [r slowlog get] 0]\n        assert_equal {lastentry_client} [lindex $e 5]\n    } {} {needs:debug}\n\n    test {SLOWLOG - can be disabled} {\n        r config set slowlog-max-len 1\n        r config set slowlog-log-slower-than 1\n        r slowlog reset\n        r debug sleep 0.2\n        assert_equal [r slowlog len] 1\n        r config set slowlog-log-slower-than -1\n        r slowlog reset\n        r debug sleep 0.2\n        assert_equal [r slowlog len] 0\n    } {} {needs:debug}\n\n    test {SLOWLOG - count must be >= -1} {\n       assert_error \"ERR count should be greater than or equal to -1\" {r slowlog get -2}\n       assert_error \"ERR count should be greater than or equal to -1\" {r slowlog get -222}\n    }\n\n    test {SLOWLOG - get all slow logs} {\n        r config set slowlog-log-slower-than 0\n        r config set slowlog-max-len 3\n        r slowlog reset\n\n        r set key test\n        r sadd set a b c\n        r incr num\n        r lpush list a\n\n        assert_equal [r slowlog len] 3\n        assert_equal 0 [llength [r slowlog get 0]]\n        assert_equal 1 [llength [r slowlog get 1]]\n        assert_equal 3 [llength [r slowlog get -1]]\n        assert_equal 3 [llength [r slowlog get 3]]\n    }\n    \n     test {SLOWLOG - blocking command is reported only after unblocked} {\n        # Cleanup first\n  
      r del mylist\n        # create a test client\n        set rd [redis_deferring_client]\n        \n        # config the slowlog and reset\n        r config set slowlog-log-slower-than 0\n        r config set slowlog-max-len 110\n        r slowlog reset\n        \n        $rd BLPOP mylist 0\n        wait_for_blocked_clients_count 1 50 20\n        assert_equal 0 [llength [regexp -all -inline (?=BLPOP) [r slowlog get]]]\n        \n        r LPUSH mylist 1\n        wait_for_blocked_clients_count 0 50 20\n        assert_equal 1 [llength [regexp -all -inline (?=BLPOP) [r slowlog get]]]\n        \n        $rd close\n    }\n\n    test {SLOWLOG - INFO STATS slowlog metrics with no slowlog entries} {\n        r config set slowlog-log-slower-than 1000000\n        r config resetstat\n        r slowlog reset\n\n        r ping\n        r get foo\n\n        set info [r info stats]\n        assert_equal 0 [getInfoProperty $info slowlog_commands_count]\n        assert_equal {0.00} [getInfoProperty $info slowlog_commands_time_ms_max]\n        assert_equal {0.00} [getInfoProperty $info slowlog_commands_time_ms_sum]\n    }\n\n    test {SLOWLOG - INFO STATS slowlog metrics accumulate with slow commands} {\n        r config set slowlog-log-slower-than 100000\n        r config resetstat\n        r slowlog reset\n\n        r debug sleep 0.2\n        set info [r info stats]\n        assert_equal 1 [getInfoProperty $info slowlog_commands_count]\n        set sum1 [getInfoProperty $info slowlog_commands_time_ms_sum]\n        set max1 [getInfoProperty $info slowlog_commands_time_ms_max]\n        assert_morethan $sum1 190\n        assert_morethan $max1 190\n\n        r debug sleep 0.3\n        set info [r info stats]\n        assert_equal 2 [getInfoProperty $info slowlog_commands_count]\n        set sum2 [getInfoProperty $info slowlog_commands_time_ms_sum]\n        set max2 [getInfoProperty $info slowlog_commands_time_ms_max]\n        assert_morethan $sum2 490\n        assert_morethan $max2 
290\n    } {} {needs:debug}\n\n    test {SLOWLOG - INFO STATS slowlog metrics survive SLOWLOG RESET} {\n        r config set slowlog-log-slower-than 100000\n        r config resetstat\n        r slowlog reset\n\n        r debug sleep 0.2\n        set info [r info stats]\n        set count_before [getInfoProperty $info slowlog_commands_count]\n        assert_equal 1 $count_before\n\n        r slowlog reset\n        assert_equal 0 [r slowlog len]\n\n        set info [r info stats]\n        assert_equal 1 [getInfoProperty $info slowlog_commands_count]\n        assert_morethan [getInfoProperty $info slowlog_commands_time_ms_sum] 0\n    } {} {needs:debug}\n\n    test {SLOWLOG - INFO STATS slowlog metrics reset with CONFIG RESETSTAT} {\n        r config set slowlog-log-slower-than 100000\n        r config resetstat\n        r slowlog reset\n\n        r debug sleep 0.2\n        set info [r info stats]\n        assert_equal 1 [getInfoProperty $info slowlog_commands_count]\n\n        r config resetstat\n        set info [r info stats]\n        assert_equal 0 [getInfoProperty $info slowlog_commands_count]\n        assert_equal {0.00} [getInfoProperty $info slowlog_commands_time_ms_max]\n        assert_equal {0.00} [getInfoProperty $info slowlog_commands_time_ms_sum]\n    } {} {needs:debug}\n\n    test {SLOWLOG - INFO COMMANDSTATS shows slowlog metrics for slow commands} {\n        r config set slowlog-log-slower-than 100000\n        r config resetstat\n        r slowlog reset\n\n        r debug sleep 0.2\n        r debug sleep 0.3\n\n        set cmdstat [cmdrstat debug r]\n        assert_match {*slowlog_count=2*} $cmdstat\n        assert_match {*slowlog_time_ms_sum=*} $cmdstat\n        assert_match {*slowlog_time_ms_max=*} $cmdstat\n\n        regexp {slowlog_count=(\\d+)} $cmdstat -> sl_count\n        regexp {slowlog_time_ms_sum=([0-9.]+)} $cmdstat -> sl_sum\n        regexp {slowlog_time_ms_max=([0-9.]+)} $cmdstat -> sl_max\n        assert_equal 2 $sl_count\n        
assert_morethan $sl_sum 490\n        assert_morethan $sl_max 290\n    } {} {needs:debug}\n\n    test {SLOWLOG - INFO COMMANDSTATS slowlog metrics only on commands that are slow} {\n        r config set slowlog-log-slower-than 100000\n        r config resetstat\n        r slowlog reset\n\n        r set mykey myvalue\n        r get mykey\n        r debug sleep 0.2\n\n        set cmdstat_set [cmdrstat set r]\n        assert_no_match {*slowlog_count*} $cmdstat_set\n\n        set cmdstat_get [cmdrstat get r]\n        assert_no_match {*slowlog_count*} $cmdstat_get\n\n        set cmdstat_debug [cmdrstat debug r]\n        assert_match {*slowlog_count=1*} $cmdstat_debug\n        assert_match {*slowlog_time_ms_sum=*} $cmdstat_debug\n        assert_match {*slowlog_time_ms_max=*} $cmdstat_debug\n    } {} {needs:debug}\n}\n"
  },
  {
    "path": "tests/unit/sort.tcl",
    "content": "start_server {\n    tags {\"sort\"}\n    overrides {\n        \"list-max-ziplist-size\" 16\n        \"set-max-intset-entries\" 32\n    }\n} {\n    proc create_random_dataset {num cmd} {\n        set tosort {}\n        set result {}\n        array set seenrand {}\n        r del tosort\n        for {set i 0} {$i < $num} {incr i} {\n            # Make sure all the weights are different because\n            # Redis does not use a stable sort but Tcl does.\n            while 1 {\n                randpath {\n                    set rint [expr int(rand()*1000000)]\n                } {\n                    set rint [expr rand()]\n                }\n                if {![info exists seenrand($rint)]} break\n            }\n            set seenrand($rint) x\n            r $cmd tosort $i\n            r set weight_$i $rint\n            r hset wobj_$i weight $rint\n            lappend tosort [list $i $rint]\n        }\n        set sorted [lsort -index 1 -real $tosort]\n        for {set i 0} {$i < $num} {incr i} {\n            lappend result [lindex $sorted $i 0]\n        }\n        set _ $result\n    }\n\n    proc check_sort_store_encoding {key} {\n        set listpack_max_size [lindex [r config get list-max-ziplist-size] 1]\n\n        # When the length or size of quicklist is less than the limit,\n        # it will be converted to listpack.\n        if {[r llen $key] <= $listpack_max_size} {\n            assert_encoding listpack $key\n        } else {\n            assert_encoding quicklist $key\n        }\n    }\n\n    foreach {num cmd enc title} {\n        16 lpush listpack \"Listpack\"\n        1000 lpush quicklist \"Quicklist\"\n        10000 lpush quicklist \"Big Quicklist\"\n        16 sadd intset \"Intset\"\n        1000 sadd hashtable \"Hash table\"\n        10000 sadd hashtable \"Big Hash table\"\n    } {\n        set result [create_random_dataset $num $cmd]\n        assert_encoding $enc tosort\n\n        test \"$title: SORT BY key\" {\n            
assert_equal $result [r sort tosort BY weight_*]\n        } {} {cluster:skip}\n\n        test \"$title: SORT BY key with limit\" {\n            assert_equal [lrange $result 5 9] [r sort tosort BY weight_* LIMIT 5 5]\n        } {} {cluster:skip}\n\n        test \"$title: SORT BY hash field\" {\n            assert_equal $result [r sort tosort BY wobj_*->weight]\n        } {} {cluster:skip}\n    }\n\n    set result [create_random_dataset 16 lpush]\n    test \"SORT GET #\" {\n        assert_equal [lsort -integer $result] [r sort tosort GET #]\n    }\n\n    foreach command {SORT SORT_RO} {\n        test \"$command GET <const>\" {\n            r del foo\n            set res [r $command tosort GET foo]\n            assert_equal 16 [llength $res]\n            foreach item $res { assert_equal {} $item }\n        } {} {cluster:skip}\n    }\n\n    test \"SORT GET (key and hash) with sanity check\" {\n        set l1 [r sort tosort GET # GET weight_*]\n        set l2 [r sort tosort GET # GET wobj_*->weight]\n        foreach {id1 w1} $l1 {id2 w2} $l2 {\n            assert_equal $id1 $id2\n            assert_equal $w1 [r get weight_$id1]\n            assert_equal $w2 [r get weight_$id1]\n        }\n    } {} {cluster:skip}\n\n    test \"SORT BY key STORE\" {\n        r sort tosort BY weight_* store sort-res\n        assert_equal $result [r lrange sort-res 0 -1]\n        assert_equal 16 [r llen sort-res]\n        check_sort_store_encoding sort-res\n    } {} {cluster:skip}\n\n    test \"SORT BY hash field STORE\" {\n        r sort tosort BY wobj_*->weight store sort-res\n        assert_equal $result [r lrange sort-res 0 -1]\n        assert_equal 16 [r llen sort-res]\n        check_sort_store_encoding sort-res\n    } {} {cluster:skip}\n\n    test \"SORT extracts STORE correctly\" {\n        r command getkeys sort abc store def\n    } {abc def}\n\n    test \"SORT_RO get keys\" {\n        r command getkeys sort_ro abc\n    } {abc}\n\n    test \"SORT extracts multiple STORE correctly\" {\n        r command getkeys 
sort abc store invalid store stillbad store def\n    } {abc def}\n\n    test \"SORT DESC\" {\n        assert_equal [lsort -decreasing -integer $result] [r sort tosort DESC]\n    }\n\n    test \"SORT ALPHA against integer encoded strings\" {\n        r del mylist\n        r lpush mylist 2\n        r lpush mylist 1\n        r lpush mylist 3\n        r lpush mylist 10\n        r sort mylist alpha\n    } {1 10 2 3}\n\n    test \"SORT sorted set\" {\n        r del zset\n        r zadd zset 1 a\n        r zadd zset 5 b\n        r zadd zset 2 c\n        r zadd zset 10 d\n        r zadd zset 3 e\n        r sort zset alpha desc\n    } {e d c b a}\n\n    test \"SORT sorted set BY nosort should retain ordering\" {\n        r del zset\n        r zadd zset 1 a\n        r zadd zset 5 b\n        r zadd zset 2 c\n        r zadd zset 10 d\n        r zadd zset 3 e\n        r multi\n        r sort zset by nosort asc\n        r sort zset by nosort desc\n        r exec\n    } {{a c e b d} {d b e c a}}\n\n    test \"SORT sorted set BY nosort + LIMIT\" {\n        r del zset\n        r zadd zset 1 a\n        r zadd zset 5 b\n        r zadd zset 2 c\n        r zadd zset 10 d\n        r zadd zset 3 e\n        assert_equal [r sort zset by nosort asc limit 0 1] {a}\n        assert_equal [r sort zset by nosort desc limit 0 1] {d}\n        assert_equal [r sort zset by nosort asc limit 0 2] {a c}\n        assert_equal [r sort zset by nosort desc limit 0 2] {d b}\n        assert_equal [r sort zset by nosort limit 5 10] {}\n        assert_equal [r sort zset by nosort limit -10 100] {a c e b d}\n    }\n\n    test \"SORT sorted set BY nosort works as expected from scripts\" {\n        r del zset\n        r zadd zset 1 a\n        r zadd zset 5 b\n        r zadd zset 2 c\n        r zadd zset 10 d\n        r zadd zset 3 e\n        r eval {\n            return {redis.call('sort',KEYS[1],'by','nosort','asc'),\n                    redis.call('sort',KEYS[1],'by','nosort','desc')}\n        } 1 zset\n    } 
{{a c e b d} {d b e c a}}\n\n    test \"SORT sorted set: +inf and -inf handling\" {\n        r del zset\n        r zadd zset -100 a\n        r zadd zset 200 b\n        r zadd zset -300 c\n        r zadd zset 1000000 d\n        r zadd zset +inf max\n        r zadd zset -inf min\n        r zrange zset 0 -1\n    } {min c a b d max}\n\n    test \"SORT regression for issue #19, sorting floats\" {\n        r flushdb\n        set floats {1.1 5.10 3.10 7.44 2.1 5.75 6.12 0.25 1.15}\n        foreach x $floats {\n            r lpush mylist $x\n        }\n        assert_equal [lsort -real $floats] [r sort mylist]\n    }\n\n    test \"SORT BY with smallest normal double 2.2250738585072012e-308\" {\n        r flushdb\n        r lpush mylist a b\n        r set weight_a 2.2250738585072012e-308\n        r set weight_b 1\n        assert_equal {a b} [r sort mylist BY weight_*]\n    } {} {cluster:skip}\n\n    test \"SORT with STORE returns zero if result is empty (github issue 224)\" {\n        r flushdb\n        r sort foo{t} store bar{t}\n    } {0}\n\n    test \"SORT with STORE does not create empty lists (github issue 224)\" {\n        r flushdb\n        r lpush foo{t} bar\n        r sort foo{t} alpha limit 10 10 store zap{t}\n        r exists zap{t}\n    } {0}\n\n    test \"SORT with STORE removes key if result is empty (github issue 227)\" {\n        r flushdb\n        r lpush foo{t} bar\n        r sort emptylist{t} store foo{t}\n        r exists foo{t}\n    } {0}\n\n    test \"SORT with BY <constant> and STORE should still order output\" {\n        r del myset mylist\n        r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz\n        r sort myset alpha by _ store mylist\n        r lrange mylist 0 -1\n    } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}\n\n    test \"SORT will complain with numerical sorting and bad doubles (1)\" {\n        r del myset\n        r sadd myset 1 2 3 4 not-a-double\n        set e {}\n        catch {r sort 
myset} e\n        set e\n    } {*ERR*double*}\n\n    test \"SORT will complain with numerical sorting and bad doubles (2)\" {\n        r del myset\n        r sadd myset 1 2 3 4\n        r mset score:1 10 score:2 20 score:3 30 score:4 not-a-double\n        set e {}\n        catch {r sort myset by score:*} e\n        set e\n    } {*ERR*double*} {cluster:skip}\n\n    test \"SORT BY sub-sorts lexicographically if score is the same\" {\n        r del myset\n        r sadd myset a b c d e f g h i l m n o p q r s t u v z aa aaa azz\n        foreach ele {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {\n            set score:$ele 100\n        }\n        r sort myset by score:*\n    } {a aa aaa azz b c d e f g h i l m n o p q r s t u v z} {cluster:skip}\n\n    test \"SORT GET with pattern ending with just -> does not get hash field\" {\n        r del mylist\n        r lpush mylist a\n        r set x:a-> 100\n        r sort mylist by num get x:*->\n    } {100} {cluster:skip}\n\n    test \"SORT by nosort retains native order for lists\" {\n        r del testa\n        r lpush testa 2 1 4 3 5\n        r sort testa by nosort\n    } {5 3 4 1 2} {cluster:skip}\n\n    test \"SORT by nosort plus store retains native order for lists\" {\n        r del testa\n        r lpush testa 2 1 4 3 5\n        r sort testa by nosort store testb\n        r lrange testb 0 -1\n    } {5 3 4 1 2} {cluster:skip}\n\n    test \"SORT by nosort with limit returns based on original list order\" {\n        r sort testa by nosort limit 0 3 store testb\n        r lrange testb 0 -1\n    } {5 3 4} {cluster:skip}\n\n    test \"SORT_RO - Successful case\" {\n        r del mylist\n        r lpush mylist a\n        r set x:a 100\n        r sort_ro mylist by nosort get x:*->\n    } {100} {cluster:skip}\n\n    test \"SORT_RO - Cannot run with STORE arg\" {\n        catch {r sort_ro foolist STORE bar} e\n        set e\n    } {ERR syntax error}\n\n    tags {\"slow\"} {\n        set num 100\n        set res 
[create_random_dataset $num lpush]\n\n        test \"SORT speed, $num element list BY key, 100 times\" {\n            set start [clock clicks -milliseconds]\n            for {set i 0} {$i < 100} {incr i} {\n                set sorted [r sort tosort BY weight_* LIMIT 0 10]\n            }\n            set elapsed [expr [clock clicks -milliseconds]-$start]\n            if {$::verbose} {\n                puts -nonewline \"\\n  Average time to sort: [expr double($elapsed)/100] milliseconds \"\n                flush stdout\n            }\n        } {} {cluster:skip}\n\n        test \"SORT speed, $num element list BY hash field, 100 times\" {\n            set start [clock clicks -milliseconds]\n            for {set i 0} {$i < 100} {incr i} {\n                set sorted [r sort tosort BY wobj_*->weight LIMIT 0 10]\n            }\n            set elapsed [expr [clock clicks -milliseconds]-$start]\n            if {$::verbose} {\n                puts -nonewline \"\\n  Average time to sort: [expr double($elapsed)/100] milliseconds \"\n                flush stdout\n            }\n        } {} {cluster:skip}\n\n        test \"SORT speed, $num element list directly, 100 times\" {\n            set start [clock clicks -milliseconds]\n            for {set i 0} {$i < 100} {incr i} {\n                set sorted [r sort tosort LIMIT 0 10]\n            }\n            set elapsed [expr [clock clicks -milliseconds]-$start]\n            if {$::verbose} {\n                puts -nonewline \"\\n  Average time to sort: [expr double($elapsed)/100] milliseconds \"\n                flush stdout\n            }\n        }\n\n        test \"SORT speed, $num element list BY <const>, 100 times\" {\n            set start [clock clicks -milliseconds]\n            for {set i 0} {$i < 100} {incr i} {\n                set sorted [r sort tosort BY nokey LIMIT 0 10]\n            }\n            set elapsed [expr [clock clicks -milliseconds]-$start]\n            if {$::verbose} {\n                puts 
-nonewline \"\\n  Average time to sort: [expr double($elapsed)/100] milliseconds \"\n                flush stdout\n            }\n        } {} {cluster:skip}\n    }\n\n    test {SETRANGE with huge offset} {\n        r lpush L 2 1 0\n        # expecting a different outcome on 32 and 64 bit systems\n        foreach value {9223372036854775807 2147483647} {\n            catch {[r sort_ro L by a limit 2 $value]} res\n            if {![string match \"2\" $res] && ![string match \"*out of range*\" $res]} {\n                assert_not_equal $res \"expecting an error or 2\"\n            }\n        }\n    }\n\n    test {SORT STORE quicklist with the right options} {\n        set origin_config [config_get_set list-max-listpack-size -1]\n        r del lst{t} lst_dst{t}\n        r config set list-max-listpack-size -1\n        r config set list-compress-depth 12\n        r lpush lst{t} {*}[split [string repeat \"1\" 6000] \"\"]\n        r sort lst{t} store lst_dst{t}\n        assert_encoding quicklist lst_dst{t}\n        assert_match \"*ql_listpack_max:-1 ql_compressed:1*\" [r debug object lst_dst{t}]\n        config_set list-max-listpack-size $origin_config\n    } {} {needs:debug}\n}\n\nstart_cluster 1 0 {tags {\"external:skip cluster sort\"}} {\n\n    r flushall\n    r lpush \"{a}mylist\" 1 2 3\n    r set \"{a}by1\" 20\n    r set \"{a}by2\" 30\n    r set \"{a}by3\" 0\n    r set \"{a}get1\" 200\n    r set \"{a}get2\" 100\n    r set \"{a}get3\" 30\n\n    test \"sort by in cluster mode\" {\n        catch {r sort \"{a}mylist\" by by*} e\n        assert_match {ERR BY option of SORT denied in Cluster mode when *} $e\n        r sort \"{a}mylist\" by \"{a}by*\"\n    } {3 1 2}\n\n    test \"sort get in cluster mode\" {\n        catch {r sort \"{a}mylist\" by \"{a}by*\" get get*} e\n        assert_match {ERR GET option of SORT denied in Cluster mode when *} $e\n        r sort \"{a}mylist\" by \"{a}by*\" get \"{a}get*\"\n    } {30 200 100}\n\n    test \"sort get # in cluster mode\" {\n   
     assert_equal [r sort \"{a}mylist\" by \"{a}by*\" get # ] {3 1 2}\n        r sort \"{a}mylist\" by \"{a}by*\" get \"{a}get*\" get #\n    } {30 3 200 1 100 2}\n\n    test \"sort_ro by in cluster mode\" {\n        catch {r sort_ro \"{a}mylist\" by by*} e\n        assert_match {ERR BY option of SORT denied in Cluster mode when *} $e\n        r sort_ro \"{a}mylist\" by \"{a}by*\"\n    } {3 1 2}\n\n    test \"sort_ro get in cluster mode\" {\n        catch {r sort_ro \"{a}mylist\" by \"{a}by*\" get get*} e\n        assert_match {ERR GET option of SORT denied in Cluster mode when *} $e\n        r sort_ro \"{a}mylist\" by \"{a}by*\" get \"{a}get*\"\n    } {30 200 100}\n\n    test \"sort_ro get # in cluster mode\" {\n        assert_equal [r sort_ro \"{a}mylist\" by \"{a}by*\" get # ] {3 1 2}\n        r sort_ro \"{a}mylist\" by \"{a}by*\" get \"{a}get*\" get #\n    } {30 3 200 1 100 2}\n}\n"
  },
  {
    "path": "tests/unit/tls.tcl",
    "content": "start_server {tags {\"tls\"}} {\n    if {$::tls} {\n        package require tls\n\n        test {TLS: Not accepting non-TLS connections on a TLS port} {\n            set s [redis [srv 0 host] [srv 0 port]]\n            catch {$s PING} e\n            set e\n        } {*I/O error*}\n\n        test {TLS: Verify tls-auth-clients behaves as expected} {\n            set s [redis [srv 0 host] [srv 0 port]]\n            ::tls::import [$s channel]\n            catch {$s PING} e\n            assert_match {*error*} $e\n\n            r CONFIG SET tls-auth-clients no\n\n            set s [redis [srv 0 host] [srv 0 port]]\n            ::tls::import [$s channel]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            r CONFIG SET tls-auth-clients optional\n\n            set s [redis [srv 0 host] [srv 0 port]]\n            ::tls::import [$s channel]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            r CONFIG SET tls-auth-clients yes\n\n            set s [redis [srv 0 host] [srv 0 port]]\n            ::tls::import [$s channel]\n            catch {$s PING} e\n            assert_match {*error*} $e\n        }\n\n        test {TLS: Verify tls-protocols behaves as expected} {\n            r CONFIG SET tls-protocols TLSv1.2\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-tls1.2 0}]\n            catch {$s PING} e\n            assert_match {*I/O error*} $e\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-tls1.2 1}]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            r CONFIG SET tls-protocols \"\"\n        }\n\n        test {TLS: Verify tls-ciphers behaves as expected} {\n            r CONFIG SET tls-protocols TLSv1.2\n            r CONFIG SET tls-ciphers \"DEFAULT:-AES128-SHA256\"\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-cipher \"-ALL:AES128-SHA256\"}]\n            catch {$s PING} e\n            assert_match {*I/O error*} $e\n\n            set s 
[redis [srv 0 host] [srv 0 port] 0 1 {-cipher \"-ALL:AES256-SHA256\"}]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            r CONFIG SET tls-ciphers \"DEFAULT\"\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-cipher \"-ALL:AES128-SHA256\"}]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            r CONFIG SET tls-protocols \"\"\n            r CONFIG SET tls-ciphers \"DEFAULT\"\n        }\n\n        test {TLS: Verify tls-prefer-server-ciphers behaves as expected} {\n            r CONFIG SET tls-protocols TLSv1.2\n            r CONFIG SET tls-ciphers \"AES128-SHA256:AES256-SHA256\"\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-cipher \"AES256-SHA256:AES128-SHA256\"}]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            assert_equal \"AES256-SHA256\" [dict get [::tls::status [$s channel]] cipher]\n\n            r CONFIG SET tls-prefer-server-ciphers yes\n\n            set s [redis [srv 0 host] [srv 0 port] 0 1 {-cipher \"AES256-SHA256:AES128-SHA256\"}]\n            catch {$s PING} e\n            assert_match {PONG} $e\n\n            assert_equal \"AES128-SHA256\" [dict get [::tls::status [$s channel]] cipher]\n\n            r CONFIG SET tls-protocols \"\"\n            r CONFIG SET tls-ciphers \"DEFAULT\"\n        }\n\n        test {TLS: Verify tls-cert-file is also used as a client cert if none specified} {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            # Use a non-restricted client/server cert for the replica\n            set redis_crt [format \"%s/tests/tls/redis.crt\" [pwd]]\n            set redis_key [format \"%s/tests/tls/redis.key\" [pwd]]\n\n            start_server [list overrides [list tls-cert-file $redis_crt tls-key-file $redis_key] \\\n                               omit [list tls-client-cert-file tls-client-key-file]] {\n                set replica [srv 0 
client]\n                $replica replicaof $master_host $master_port\n                wait_for_condition 30 100 {\n                    [string match {*master_link_status:up*} [$replica info replication]]\n                } else {\n                    fail \"Can't authenticate to master using just tls-cert-file!\"\n                }\n            }\n        }\n\n        test {TLS: switch between tcp and tls ports} {\n            set srv_port [srv 0 port]\n\n            # TLS\n            set rd [redis [srv 0 host] $srv_port 0 1]\n            $rd PING\n\n            # TCP\n            $rd CONFIG SET tls-port 0\n            $rd CONFIG SET port $srv_port\n            $rd close\n\n            set rd [redis [srv 0 host] $srv_port 0 0]\n            $rd PING\n\n            # TLS\n            $rd CONFIG SET port 0\n            $rd CONFIG SET tls-port $srv_port\n            $rd close\n\n            set rd [redis [srv 0 host] $srv_port 0 1]\n            $rd PING\n            $rd close\n        }\n\n        test {TLS: Working with an encrypted keyfile} {\n            # Create an encrypted version\n            set keyfile [lindex [r config get tls-key-file] 1]\n            set keyfile_encrypted \"$keyfile.encrypted\"\n            exec -ignorestderr openssl rsa -in $keyfile -out $keyfile_encrypted -aes256 -passout pass:1234 2>/dev/null\n\n            # Using it without a password fails\n            catch {r config set tls-key-file $keyfile_encrypted} e\n            assert_match {*Unable to update TLS*} $e\n\n            # Now use a password\n            r config set tls-key-file-pass 1234\n            r config set tls-key-file $keyfile_encrypted\n        }\n\n\t    test {TLS: Auto-authenticate using tls-auth-clients-user (CN)} {\n\t        # Create a user matching the CN in the client certificate (CN=Client-only)\n\t        r ACL SETUSER {Client-only} on >clientpass allcommands allkeys\n\n\t        # Map the client certificate CN to the ACL user name.\n\t        r CONFIG SET 
tls-auth-clients-user CN\n\n\t        # Connect over TLS using the test client certificate (CN=Client-only)\n\t        set s [redis [srv 0 host] [srv 0 port] 0 1]\n\t        catch {$s PING} e\n\t        assert_match {PONG} $e\n\t        assert_equal \"Client-only\" [$s ACL WHOAMI]\n\t    }\n\n\t    foreach user_type {\"non-existent\" \"disabled\"} {\n\t        test \"TLS: $user_type user cannot auto-authenticate via certificate\" {\n\t            if {$user_type eq \"non-existent\"} {\n\t                # Ensure the Client-only user does not exist so auto-auth will fail\n\t                catch {r ACL DELUSER {Client-only}}\n\t            } else {\n\t                r ACL SETUSER {Client-only} on >clientpass allcommands allkeys\n\t                r ACL SETUSER {Client-only} off  ;# Disable the user\n\t            }\n\t            r ACL LOG RESET\n\t            r CONFIG SET tls-auth-clients-user CN\n\n\t            # Capture the current value of acl_access_denied_tls_cert from INFO stats\n\t            set info_before [r INFO stats]\n\t            regexp {acl_access_denied_tls_cert:(\\d+)} $info_before -> before\n\n\t            # Connect over TLS using the test client certificate (CN=Client-only)\n\t            # Since there is no matching ACL user or user is disabled, auto-auth should fail\n\t            # and the connection should remain authenticated as the default user\n\t            set s [redis [srv 0 host] [srv 0 port] 0 1]\n\t            assert_equal \"default\" [$s ACL WHOAMI]\n\n\t            # The ACL LOG should contain a single entry with reason \"tls-cert\"\n\t            # and username \"Client-only\"\n\t            set log [r ACL LOG]\n\t            assert_equal 1 [llength $log]\n\t            set entry [lindex $log 0]\n\t            assert_equal \"tls-cert\" [dict get $entry reason]\n\t            assert_equal \"Client-only\" [dict get $entry username]\n\n\t            # INFO stats should report that acl_access_denied_tls_cert increased by 1\n\t      
      set info_after [r INFO stats]\n\t            regexp {acl_access_denied_tls_cert:(\\d+)} $info_after -> after\n\t            assert {$after == $before + 1}\n\n\t            # Verify fallback to password auth works after cert auth fails\n\t            r ACL SETUSER testuser on >testpass +@all ~*\n\t            $s AUTH testuser testpass\n\t            assert_equal \"testuser\" [$s ACL WHOAMI]\n\t            assert_equal \"PONG\" [$s PING]\n\n\t            # Clean up\n\t            r ACL DELUSER testuser\n\t            catch {r ACL DELUSER {Client-only}}\n\t        }\n\t    }\n    }\n}\n"
  },
  {
    "path": "tests/unit/tracking.tcl",
    "content": "# logreqres:skip because it seems many of these tests rely heavily on RESP2\nstart_server {tags {\"tracking network logreqres:skip\"}} {\n    # Create a deferred client we'll use to redirect invalidation\n    # messages to.\n    set rd_redirection [redis_deferring_client]\n    $rd_redirection client id\n    set redir_id [$rd_redirection read]\n    $rd_redirection subscribe __redis__:invalidate\n    $rd_redirection read ; # Consume the SUBSCRIBE reply.\n\n    # Create another client that's not used as a redirection client\n    # We should always keep this client's buffer clean\n    set rd [redis_deferring_client]\n\n    # Client to be used for SET and GET commands\n    # We don't read this client's buffer\n    set rd_sg [redis_client] \n\n    proc clean_all {} {\n        uplevel {\n            # We should make r TRACKING off first. If r is in RESP3,\n            # r FLUSH ALL will send us tracking-redir-broken or other\n            # info which will not be consumed.\n            r CLIENT TRACKING off\n            $rd QUIT\n            $rd_redirection QUIT\n            set rd [redis_deferring_client]\n            set rd_redirection [redis_deferring_client]\n            $rd_redirection client id\n            set redir_id [$rd_redirection read]\n            $rd_redirection subscribe __redis__:invalidate\n            $rd_redirection read ; # Consume the SUBSCRIBE reply.\n            r FLUSHALL\n            r HELLO 2\n            r config set tracking-table-max-keys 1000000\n        }\n    }\n\n    test {Clients are able to enable tracking and redirect it} {\n        r CLIENT TRACKING on REDIRECT $redir_id\n    } {*OK}\n\n    test {The other connection is able to get invalidations} {\n        r SET a{t} 1\n        r SET b{t} 1\n        r GET a{t}\n        r INCR b{t} ; # This key should not be notified, since it wasn't fetched.\n        r INCR a{t}\n        set keys [lindex [$rd_redirection read] 2]\n        assert {[llength $keys] == 1}\n        assert 
{[lindex $keys 0] eq {a{t}}}\n    }\n\n    test {The client is now able to disable tracking} {\n        # Make sure to add a few more keys in the tracking list\n        # so that we can check for leaks, as a side effect.\n        r MGET a{t} b{t} c{t} d{t} e{t} f{t} g{t}\n        r CLIENT TRACKING off\n    } {*OK}\n\n    test {Clients can enable the BCAST mode with the empty prefix} {\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id\n    } {*OK*}\n\n    test {The connection gets invalidation messages about all the keys} {\n        r MSET a{t} 1 b{t} 2 c{t} 3\n        set keys [lsort [lindex [$rd_redirection read] 2]]\n        assert {$keys eq {a{t} b{t} c{t}}}\n    }\n\n    test {Clients can enable the BCAST mode with prefixes} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX a: PREFIX b:\n        r MULTI\n        r INCR a:1{t}\n        r INCR a:2{t}\n        r INCR b:1{t}\n        r INCR b:2{t}\n        # we should not get this key\n        r INCR c:1{t}\n        r EXEC\n        # Because of the internals, we know we are going to receive\n        # two separated notifications for the two different prefixes.\n        set keys1 [lsort [lindex [$rd_redirection read] 2]]\n        set keys2 [lsort [lindex [$rd_redirection read] 2]]\n        set keys [lsort [list {*}$keys1 {*}$keys2]]\n        assert {$keys eq {a:1{t} a:2{t} b:1{t} b:2{t}}}\n    }\n\n    test {Adding prefixes to BCAST mode works} {\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id PREFIX c:\n        r INCR c:1234\n        set keys [lsort [lindex [$rd_redirection read] 2]]\n        assert {$keys eq {c:1234}}\n    }\n\n    test {Tracking NOLOOP mode in standard mode works} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on REDIRECT $redir_id NOLOOP\n        r MGET otherkey1{t} loopkey{t} otherkey2{t}\n        $rd_sg SET otherkey1{t} 1; # We should get this\n        r SET loopkey{t} 1 ; # We should not get this\n        $rd_sg SET 
otherkey2{t} 1; # We should get this\n        # Because of the internals, we know we are going to receive\n        # two separate notifications for the two different keys.\n        set keys1 [lsort [lindex [$rd_redirection read] 2]]\n        set keys2 [lsort [lindex [$rd_redirection read] 2]]\n        set keys [lsort [list {*}$keys1 {*}$keys2]]\n        assert {$keys eq {otherkey1{t} otherkey2{t}}}\n    }\n\n    test {Tracking NOLOOP mode in BCAST mode works} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP\n        $rd_sg SET otherkey1 1; # We should get this\n        r SET loopkey 1 ; # We should not get this\n        $rd_sg SET otherkey2 1; # We should get this\n        # Because $rd_sg sends commands synchronously, we know we are\n        # going to receive two separate notifications.\n        set keys1 [lsort [lindex [$rd_redirection read] 2]]\n        set keys2 [lsort [lindex [$rd_redirection read] 2]]\n        set keys [lsort [list {*}$keys1 {*}$keys2]]\n        assert {$keys eq {otherkey1 otherkey2}}\n    }\n\n    test {Tracking gets notification of expired keys} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP\n        r SET mykey myval px 1\n        r SET mykeyotherkey myval ; # We should not get it\n        after 1000\n        set keys [lsort [lindex [$rd_redirection read] 2]]\n        assert {$keys eq {mykey}}\n    }\n\n    test {Tracking gets notification of lazy expired keys} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST REDIRECT $redir_id NOLOOP\n        # Use MULTI/EXEC to expose a race where the key gets two invalidations\n        # in the same event loop, once by the client so filtered by NOLOOP, and\n        # the second one by the lazy expire\n        r MULTI\n        r SET mykey{t} myval px 1\n        r SET mykeyotherkey{t} myval ; # We should not get it\n        r DEBUG SLEEP 0.1\n        r GET mykey{t}\n        r EXEC\n        
set keys [lsort [lindex [$rd_redirection read] 2]]\n        assert {$keys eq {mykey{t}}}\n    } {} {needs:debug}\n\n    test {HELLO 3 reply is correct} {\n        set reply [r HELLO 3]\n        assert_equal [dict get $reply proto] 3\n    }\n\n    test {HELLO without protover} {\n        set reply [r HELLO 3]\n        assert_equal [dict get $reply proto] 3\n\n        set reply [r HELLO]\n        assert_equal [dict get $reply proto] 3\n\n        set reply [r HELLO 2]\n        assert_equal [dict get $reply proto] 2\n\n        set reply [r HELLO]\n        assert_equal [dict get $reply proto] 2\n\n        # restore RESP3 for next test\n        r HELLO 3\n    }\n\n    test {RESP3 based basic invalidation} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg SET key1 2\n        r read\n    } {invalidate key1}\n\n    test {RESP3 tracking redirection} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg SET key1 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key1}}\n    }\n\n    test {Invalidations of previous keys can be redirected after switching to RESP3} {\n        r HELLO 2\n        $rd_sg SET key1 1\n        r GET key1\n        r HELLO 3\n        $rd_sg SET key1 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key1}}\n    }\n\n    test {Invalidations of new keys can be redirected after switching to RESP3} {\n        r HELLO 3\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg SET key1 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key1}}\n    }\n\n    test {Invalid keys should not be tracked for scripts in NOLOOP mode} {\n        $rd_sg CLIENT TRACKING off\n        $rd_sg CLIENT TRACKING on NOLOOP\n        $rd_sg HELLO 3\n        $rd_sg SET key1 1\n        assert_equal \"1\" [$rd_sg GET key1]\n\n       
 # For a write command in a script, the written key should not be tracked when the NOLOOP flag is set\n        $rd_sg eval \"return redis.call('set', 'key1', '2')\" 1 key1\n        assert_equal \"2\" [$rd_sg GET key1]\n        $rd_sg CLIENT TRACKING off\n    }\n\n    test {Tracking only occurs for scripts when a command calls a read-only command} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on\n        $rd_sg MSET key2{t} 1 key2{t} 1\n\n        # If a script doesn't call any read command, don't track any keys\n        r EVAL \"redis.call('set', 'key3{t}', 'bar')\" 2 key1{t} key2{t} \n        $rd_sg MSET key2{t} 2 key1{t} 2\n        assert_equal \"PONG\" [r ping]\n\n        # If a script calls a read command, just the read keys\n        r EVAL \"redis.call('get', 'key2{t}')\" 2 key1{t} key2{t}\n        $rd_sg MSET key2{t} 2 key3{t} 2\n        assert_equal {invalidate key2{t}} [r read]\n        assert_equal \"PONG\" [r ping]\n\n        # RO variants work like the normal variants\n\n        # If a RO script doesn't call any read command, don't track any keys\n        r EVAL_RO \"redis.call('ping')\" 2 key1{t} key2{t}\n        $rd_sg MSET key2{t} 2 key1{t} 2\n        assert_equal \"PONG\" [r ping]\n\n        # If a RO script calls a read command, just the read keys\n        r EVAL_RO \"redis.call('get', 'key2{t}')\" 2 key1{t} key2{t}\n        $rd_sg MSET key2{t} 2 key3{t} 2\n        assert_equal {invalidate key2{t}} [r read]\n        assert_equal \"PONG\" [r ping]\n    }\n\n    test {RESP3 Client gets tracking-redir-broken push message after cached key changed when redirection client is terminated} {\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_redirection QUIT\n        assert_equal OK [$rd_redirection read]\n        $rd_sg SET key1 2\n        set MAX_TRIES 100\n        set res -1\n        for {set i 0} {$i <= $MAX_TRIES && $res < 0} {incr i} {\n            set res [lsearch -exact [r PING] 
\"tracking-redir-broken\"]\n        }\n        assert {$res >= 0}\n        # Consume PING reply\n        assert_equal PONG [r read]\n\n        # Reinstantiating after QUIT\n        set rd_redirection [redis_deferring_client]\n        $rd_redirection CLIENT ID\n        set redir_id [$rd_redirection read]\n        $rd_redirection SUBSCRIBE __redis__:invalidate\n        $rd_redirection read ; # Consume the SUBSCRIBE reply\n    }\n\n    test {Different clients can redirect to the same connection} {\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd CLIENT TRACKING on REDIRECT $redir_id \n        assert_equal OK [$rd read] ; # Consume the TRACKING reply\n        $rd_sg MSET key1{t} 1 key2{t} 1\n        r GET key1{t}\n        $rd GET key2{t} \n        assert_equal 1 [$rd read] ; # Consume the GET reply\n        $rd_sg INCR key1{t}\n        $rd_sg INCR key2{t}\n        set res1 [lindex [$rd_redirection read] 2]\n        set res2 [lindex [$rd_redirection read] 2]\n        assert {$res1 eq {key1{t}}}\n        assert {$res2 eq {key2{t}}}\n    }\n\n    test {Different clients using different protocols can track the same key} {\n        $rd HELLO 3 \n        set reply [$rd read] ; # Consume the HELLO reply\n        assert_equal 3 [dict get $reply proto]\n        $rd CLIENT TRACKING on \n        assert_equal OK [$rd read] ; # Consume the TRACKING reply\n        $rd_sg set key1 1\n        r GET key1\n        $rd GET key1 \n        assert_equal 1 [$rd read] ; # Consume the GET reply\n        $rd_sg INCR key1\n        set res1 [lindex [$rd_redirection read] 2]\n        $rd PING ; # Non redirecting client has to talk to the server in order to get invalidation message\n        set res2 [lindex [split [$rd read] \" \"] 1] \n        assert_equal PONG [$rd read] ; # Consume the PING reply, which comes together with the invalidation message\n        assert {$res1 eq {key1}}\n        assert {$res2 eq {key1}}\n    }\n\n    test {No invalidation message when using OPTIN option} 
{\n        r CLIENT TRACKING on OPTIN REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1 ; # This key should not be notified, since OPTIN is on and CLIENT CACHING yes wasn't called\n        $rd_sg SET key1 2\n        # Preparing some message to consume on $rd_redirection so we don't get blocked\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key2 1\n        r GET key2 ; # This key should be notified\n        $rd_sg SET key2 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key2}}\n    }\n\n    test {Invalidation message sent when using OPTIN option with CLIENT CACHING yes} {\n        r CLIENT TRACKING on OPTIN REDIRECT $redir_id\n        $rd_sg SET key1 3\n        r CLIENT CACHING yes\n        r GET key1\n        $rd_sg SET key1 4\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key1}}\n    }\n\n    test {Invalidation message sent when using OPTOUT option} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on OPTOUT REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1 \n        $rd_sg SET key1 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key1}}\n    }\n\n    test {No invalidation message when using OPTOUT option with CLIENT CACHING no} {\n        $rd_sg SET key1 1\n        r CLIENT CACHING no\n        r GET key1 ; # This key should not be notified, since OPTOUT is on and CLIENT CACHING no was called\n        $rd_sg SET key1 2\n        # Preparing some message to consume on $rd_redirection so we don't get blocked\n        $rd_sg SET key2 1\n        r GET key2 ; # This key should be notified\n        $rd_sg SET key2 2\n        set res [lindex [$rd_redirection read] 2]\n        assert {$res eq {key2}}\n    }\n\n    test {Able to redirect to a RESP3 client} {\n        $rd_redirection UNSUBSCRIBE __redis__:invalidate ; # Need to unsub first before we can do HELLO 3\n        set res 
[$rd_redirection read] ; # Consume the UNSUBSCRIBE reply\n        assert_equal {__redis__:invalidate} [lindex $res 1]\n        $rd_redirection HELLO 3\n        set res [$rd_redirection read] ; # Consume the HELLO reply\n        assert_equal [dict get $res proto] 3\n        $rd_redirection SUBSCRIBE __redis__:invalidate\n        set res [$rd_redirection read] ; # Consume the SUBSCRIBE reply\n        assert_equal {__redis__:invalidate} [lindex $res 1]\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg INCR key1\n        set res [lindex [$rd_redirection read] 1]\n        assert {$res eq {key1}}\n        $rd_redirection HELLO 2\n        set res [$rd_redirection read] ; # Consume the HELLO reply\n        assert_equal [dict get $res proto] 2\n    }\n\n    test {After switching from normal tracking to BCAST mode, no invalidation message is produced for pre-BCAST keys} {\n        r CLIENT TRACKING off\n        r HELLO 3\n        r CLIENT TRACKING on\n        $rd_sg SET key1 1\n        r GET key1\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST\n        $rd_sg INCR key1\n        set inv_msg [r PING]\n        set ping_reply [r read]\n        assert {$inv_msg eq {invalidate key1}}\n        assert {$ping_reply eq {PONG}}\n    }\n\n    test {BCAST with prefix collisions throw errors} {\n        set r [redis_client]\n        catch {$r CLIENT TRACKING ON BCAST PREFIX FOOBAR PREFIX FOO} output\n        assert_match {ERR Prefix 'FOOBAR'*'FOO'*} $output\n\n        catch {$r CLIENT TRACKING ON BCAST PREFIX FOO PREFIX FOOBAR} output\n        assert_match {ERR Prefix 'FOO'*'FOOBAR'*} $output\n\n        $r CLIENT TRACKING ON BCAST PREFIX FOO PREFIX BAR\n        catch {$r CLIENT TRACKING ON BCAST PREFIX FO} output\n        assert_match {ERR Prefix 'FO'*'FOO'*} $output\n\n        catch {$r CLIENT TRACKING ON BCAST PREFIX BARB} output\n        assert_match {ERR Prefix 'BARB'*'BAR'*} $output\n\n        $r 
CLIENT TRACKING OFF\n    }\n\n    test {BCAST prefix self-overlap past first index reports error without enabling} {\n        # When any of the provided BCAST prefixes overlap with each other,\n        # CLIENT TRACKING ON must reply with a single error and leave tracking\n        # disabled, regardless of the position of the overlapping prefix in\n        # the argument list.\n        r CLIENT TRACKING OFF\n        catch {r CLIENT TRACKING ON BCAST PREFIX BAZ PREFIX FOOBAR PREFIX FOO} output\n        assert_match {ERR Prefix 'FOOBAR' overlaps with another provided prefix 'FOO'*} $output\n        # Tracking must not have been enabled after the overlap error.\n        assert_match {*flags off*} [r CLIENT TRACKINGINFO]\n    }\n\n    test {hdel deliver invalidate message after response in the same connection} {\n        r CLIENT TRACKING off\n        r HELLO 3\n        r CLIENT TRACKING on\n        r HSET myhash f 1\n        r HGET myhash f\n        set res [r HDEL myhash f]\n        assert_equal $res 1\n        set res [r read]\n        assert_equal $res {invalidate myhash}\n    }\n\n    test {Tracking invalidation message is not interleaved with multiple keys response} {\n        r CLIENT TRACKING off\n        r HELLO 3\n        r CLIENT TRACKING on\n        # We need to disable active expire so that we can trigger lazy expire\n        r DEBUG SET-ACTIVE-EXPIRE 0\n        r MULTI\n        r MSET x{t} 1 y{t} 2\n        r PEXPIRE y{t} 100\n        r GET y{t}\n        r EXEC\n        after 110\n        # Reading the expired key y{t} generates an invalidate message for this key\n        set res [r MGET x{t} y{t}]\n        assert_equal $res {1 {}}\n        # Consume the invalidate message which is after command response\n        set res [r read]\n        assert_equal $res {invalidate y{t}}\n        r DEBUG SET-ACTIVE-EXPIRE 1\n    } {OK} {needs:debug}\n\n    test {Tracking invalidation message is not interleaved with transaction response} {\n        r CLIENT TRACKING off\n        r 
HELLO 3\n        r CLIENT TRACKING on\n        r MSET a{t} 1 b{t} 2\n        r GET a{t}\n        # Start a transaction, make a{t} generate an invalidate message\n        r MULTI\n        r INCR a{t}\n        r GET b{t}\n        set res [r EXEC]\n        assert_equal $res {2 2}\n        set res [r read]\n        # Consume the invalidate message which is after command response\n        assert_equal $res {invalidate a{t}}\n    }\n\n    test {Tracking invalidation message of eviction keys should be before response} {\n        # Get the current memory limit and calculate a new limit.\n        r CLIENT TRACKING off\n        r HELLO 3\n        r CLIENT TRACKING on\n\n        # Make sure the previous test is really done before sampling used_memory\n        wait_lazyfree_done r\n\n        set used [expr {[s used_memory] - [s mem_not_counted_for_evict]}]\n        set limit [expr {$used+100*1024}]\n        set old_policy [lindex [r config get maxmemory-policy] 1]\n        r config set maxmemory $limit\n        # We set policy volatile-random, so only keys with ttl will be evicted\n        r config set maxmemory-policy volatile-random\n        # Add a volatile key and track it.\n        r setex volatile-key 10000 x\n        r get volatile-key\n        # We use SETBIT here, so we can set a big key and get the used_memory\n        # bigger than maxmemory. Next command will evict volatile keys. 
We\n        # can't use SET, as SET uses big input buffer, so it will fail.\n        r setbit big-key 1600000 0 ;# this will consume 200kb\n        # volatile-key is evicted before response.\n        set res [r getbit big-key 0]\n        assert_equal $res {invalidate volatile-key}\n        set res [r read]\n        assert_equal $res 0\n        r config set maxmemory-policy $old_policy\n        r config set maxmemory 0\n    }\n\n    test {Unblocked BLMOVE gets notification after response} {\n        r RPUSH list2{t} a\n        $rd HELLO 3\n        $rd read\n        $rd CLIENT TRACKING on\n        $rd read\n        # Tracking key list2{t}\n        $rd LRANGE list2{t} 0 -1\n        $rd read\n        # We block on list1{t}\n        $rd BLMOVE list1{t} list2{t} left left 0\n        wait_for_blocked_clients_count 1\n        # unblock $rd, list2{t} gets element and generate invalidation message\n        r rpush list1{t} foo\n        assert_equal [$rd read] {foo}\n        assert_equal [$rd read] {invalidate list2{t}}\n    }\n\n    test {Tracking gets notification on tracking table key eviction} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on REDIRECT $redir_id NOLOOP\n        r MSET key1{t} 1 key2{t} 2\n        # Let the server track the two keys for us\n        r MGET key1{t} key2{t}\n        # Force the eviction of all the keys but one:\n        r config set tracking-table-max-keys 1\n        # Note that we may have other keys in the table for this client,\n        # since we disabled/enabled tracking multiple time with the same\n        # ID, and tracking does not do ID cleanups for performance reasons.\n        # So we check that eventually we'll receive one or the other key,\n        # otherwise the test will die for timeout.\n        while 1 {\n            set keys [lindex [$rd_redirection read] 2]\n            if {$keys eq {key1{t}} || $keys eq {key2{t}}} break\n        }\n        # We should receive an expire notification for one of\n        # the 
two keys (only one must remain)\n        assert {$keys eq {key1{t}} || $keys eq {key2{t}}}\n    }\n\n    test {Invalidation message received for flushall} {\n        clean_all\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg FLUSHALL\n        set msg [$rd_redirection read]\n        assert {[lindex $msg 2] eq {}}\n    }\n\n    test {Invalidation message received for flushdb} {\n        clean_all\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_sg FLUSHDB\n        set msg [$rd_redirection read]\n        assert {[lindex $msg 2] eq {}}\n    }\n\n    test {Test ASYNC flushall} {\n        clean_all\n        r CLIENT TRACKING on REDIRECT $redir_id\n        r GET key1\n        r GET key2\n        assert_equal [s 0 tracking_total_keys] 2\n        $rd_sg FLUSHALL ASYNC\n        assert_equal [s 0 tracking_total_keys] 0\n        assert_equal [lindex [$rd_redirection read] 2] {}\n    }\n\n    test {flushdb tracking invalidation message is not interleaved with transaction response} {\n        clean_all\n        r HELLO 3\n        r CLIENT TRACKING on\n        r SET a{t} 1\n        r GET a{t}\n        r MULTI\n        r FLUSHDB\n        set res [r EXEC]\n        assert_equal $res {OK}\n        # Consume the invalidate message which is after command response\n        r read\n    } {invalidate {}}\n\n    # Keys are defined to be evicted 100 at a time by default.\n    # If after eviction the number of keys still surpasses the limit\n    # defined in tracking-table-max-keys, we increase the eviction\n    # effort to 200, and then 300, etc.\n    # This test verifies this effort incrementation. 
\n    test {Server is able to evacuate enough keys when num of keys surpasses limit by more than defined initial effort} {\n        clean_all\n        set NUM_OF_KEYS_TO_TEST 250\n        set TRACKING_TABLE_MAX_KEYS 1\n        r CLIENT TRACKING on REDIRECT $redir_id\n        for {set i 0} {$i < $NUM_OF_KEYS_TO_TEST} {incr i} {\n            $rd_sg SET key$i $i\n            r GET key$i\n        }\n        r config set tracking-table-max-keys $TRACKING_TABLE_MAX_KEYS\n        # If not enough keys are evicted, we won't get enough invalidation\n        # messages, and \"$rd_redirection read\" will block.\n        # If too many keys are evicted, we will get too many invalidation\n        # messages, and the assert will fail.\n        for {set i 0} {$i < $NUM_OF_KEYS_TO_TEST - $TRACKING_TABLE_MAX_KEYS} {incr i} {\n            $rd_redirection read\n        }\n        $rd_redirection PING\n        assert {[$rd_redirection read] eq {pong {}}}\n    }\n\n    test {Tracking info is correct} {\n        clean_all\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        $rd_sg SET key2 2\n        r GET key1 \n        r GET key2\n        $rd CLIENT TRACKING on BCAST PREFIX prefix:\n        assert [string match *OK* [$rd read]]\n        $rd_sg SET prefix:key1 1 \n        $rd_sg SET prefix:key2 2\n        set info [r info]\n        regexp \"\\r\\ntracking_total_items:(.*?)\\r\\n\" $info _ total_items\n        regexp \"\\r\\ntracking_total_keys:(.*?)\\r\\n\" $info _ total_keys\n        regexp \"\\r\\ntracking_total_prefixes:(.*?)\\r\\n\" $info _ total_prefixes\n        regexp \"\\r\\ntracking_clients:(.*?)\\r\\n\" $info _ tracking_clients\n        assert {$total_items == 2}\n        assert {$total_keys == 2}\n        assert {$total_prefixes == 1}\n        assert {$tracking_clients == 2}\n    }\n\n    test {CLIENT GETREDIR provides correct client id} {\n        set res [r CLIENT GETREDIR]\n        assert_equal $redir_id $res\n        r CLIENT TRACKING off\n  
      set res [r CLIENT GETREDIR]\n        assert_equal -1 $res\n        r CLIENT TRACKING on\n        set res [r CLIENT GETREDIR]\n        assert_equal 0 $res\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking off} {\n        r CLIENT TRACKING off\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {off} $flags\n        set redirect [dict get $res redirect]\n        assert_equal {-1} $redirect\n        set prefixes [dict get $res prefixes]\n        assert_equal {} $prefixes\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking on} {\n        r CLIENT TRACKING on\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on} $flags\n        set redirect [dict get $res redirect]\n        assert_equal {0} $redirect\n        set prefixes [dict get $res prefixes]\n        assert_equal {} $prefixes\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking on with options} {\n        r CLIENT TRACKING on REDIRECT $redir_id noloop\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on noloop} $flags\n        set redirect [dict get $res redirect]\n        assert_equal $redir_id $redirect\n        set prefixes [dict get $res prefixes]\n        assert_equal {} $prefixes\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking optin} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on optin\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on optin} $flags\n        set redirect [dict get $res redirect]\n        assert_equal {0} $redirect\n        set prefixes [dict get $res prefixes]\n        assert_equal {} $prefixes\n\n        r CLIENT CACHING yes\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on 
optin caching-yes} $flags\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking optout} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on optout\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on optout} $flags\n        set redirect [dict get $res redirect]\n        assert_equal {0} $redirect\n        set prefixes [dict get $res prefixes]\n        assert_equal {} $prefixes\n\n        r CLIENT CACHING no\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on optout caching-no} $flags\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking bcast mode} {\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST PREFIX foo PREFIX bar\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on bcast} $flags\n        set redirect [dict get $res redirect]\n        assert_equal {0} $redirect\n        set prefixes [lsort [dict get $res prefixes]]\n        assert_equal {bar foo} $prefixes\n\n        r CLIENT TRACKING off\n        r CLIENT TRACKING on BCAST\n        set res [r client trackinginfo]\n        set prefixes [dict get $res prefixes]\n        assert_equal {{}} $prefixes\n    }\n\n    test {CLIENT TRACKINGINFO provides reasonable results when tracking redir broken} {\n        clean_all\n        r HELLO 3\n        r CLIENT TRACKING on REDIRECT $redir_id\n        $rd_sg SET key1 1\n        r GET key1\n        $rd_redirection QUIT\n        assert_equal OK [$rd_redirection read]\n        $rd_sg SET key1 2\n        set res [lsearch -exact [r read] \"tracking-redir-broken\"]\n        assert {$res >= 0}\n        set res [r client trackinginfo]\n        set flags [dict get $res flags]\n        assert_equal {on broken_redirect} $flags\n        set redirect [dict get $res redirect]\n        assert_equal $redir_id $redirect\n        set prefixes 
[dict get $res prefixes]\n        assert_equal {} $prefixes\n    }\n\n    test {Regression test for #11715} {\n        # This issue manifests when a client invalidates keys through the max key\n        # limit, which invalidates keys to get Redis below the limit, but no command is\n        # then executed. This can occur in several ways but the simplest is through\n        # multi-exec which queues commands.\n        clean_all\n        r config set tracking-table-max-keys 2\n\n        # The cron will invalidate keys if we're above the limit, so disable it.\n        r debug pause-cron 1\n\n        # Set up a client that has listened to 2 keys and start a multi; this\n        # sets up the crash for later.\n        $rd HELLO 3\n        $rd read\n        $rd CLIENT TRACKING on\n        assert_match \"OK\" [$rd read]\n        $rd mget \"1{tag}\" \"2{tag}\"\n        assert_match \"{} {}\" [$rd read]\n        $rd multi\n        assert_match \"OK\" [$rd read]\n\n        # Reduce the tracking table keys to 1; this doesn't immediately take effect, but\n        # instead will apply on the next command.\n        r config set tracking-table-max-keys 1\n\n        # This command will get queued, so make sure this command doesn't crash.\n        $rd ping\n        $rd exec\n\n        # Validate we got some invalidation message and then the command was queued.\n        assert_match \"invalidate *{tag}\" [$rd read]\n        assert_match \"QUEUED\" [$rd read]\n        assert_match \"PONG\" [$rd read]\n\n        r debug pause-cron 0\n    } {OK} {needs:debug}\n\n    foreach resp {3 2} {\n        test \"RESP$resp based basic invalidation with client reply off\" {\n            # This entire test is mostly irrelevant for RESP2, but we run it anyway just for some extra coverage.\n            clean_all\n\n            $rd hello $resp\n            $rd read\n            $rd client tracking on\n            $rd read\n\n            $rd_sg set foo bar\n            $rd get foo\n            $rd 
read\n\n            $rd client reply off\n\n            $rd_sg set foo bar2\n\n            if {$resp == 3} {\n                assert_equal {invalidate foo} [$rd read]\n            } elseif {$resp == 2} { } ;# Just coverage\n\n            # Verify things didn't get messed up and no unexpected reply was pushed to the client.\n            $rd client reply on\n            assert_equal {OK} [$rd read]\n            $rd ping\n            assert_equal {PONG} [$rd read]\n        }\n    }\n\n    test {RESP3 based basic redirect invalidation with client reply off} {\n        clean_all\n\n        set rd_redir [redis_deferring_client]\n        $rd_redir hello 3\n        $rd_redir read\n\n        $rd_redir client id\n        set rd_redir_id [$rd_redir read]\n\n        $rd client tracking on redirect $rd_redir_id\n        $rd read\n\n        $rd_sg set foo bar\n        $rd get foo\n        $rd read\n\n        $rd_redir client reply off\n\n        $rd_sg set foo bar2\n        assert_equal {invalidate foo} [$rd_redir read]\n\n        # Verify things didn't get messed up and no unexpected reply was pushed to the client.\n        $rd_redir client reply on\n        assert_equal {OK} [$rd_redir read]\n        $rd_redir ping\n        assert_equal {PONG} [$rd_redir read]\n\n        $rd_redir close\n    }\n\n    test {RESP3 based basic tracking-redir-broken with client reply off} {\n        clean_all\n\n        $rd hello 3\n        $rd read\n        $rd client tracking on redirect $redir_id\n        $rd read\n\n        $rd_sg set foo bar\n        $rd get foo\n        $rd read\n\n        $rd client reply off\n\n        $rd_redirection quit\n        $rd_redirection read\n\n        $rd_sg set foo bar2\n\n        set res [lsearch -exact [$rd read] \"tracking-redir-broken\"]\n        assert_morethan_equal $res 0\n\n        # Verify things didn't get messed up and no unexpected reply was pushed to the client.\n        $rd client reply on\n        assert_equal {OK} [$rd read]\n        $rd ping\n 
       assert_equal {PONG} [$rd read]\n    }\n\n    $rd_redirection close\n    $rd_sg close\n    $rd close\n}\n\n# Just some extra coverage for --log-req-res, because we do not\n# run the full tracking unit in that mode\nstart_server {tags {\"tracking network\"}} {\n    test {Coverage: Basic CLIENT CACHING} {\n        set rd_redirection [redis_deferring_client]\n        $rd_redirection client id\n        set redir_id [$rd_redirection read]\n        assert_equal {OK} [r CLIENT TRACKING on OPTIN REDIRECT $redir_id]\n        assert_equal {OK} [r CLIENT CACHING yes]\n        r CLIENT TRACKING off\n    } {OK}\n\n    test {Coverage: Basic CLIENT REPLY} {\n        r CLIENT REPLY on\n    } {OK}\n\n    test {Coverage: Basic CLIENT TRACKINGINFO} {\n        r CLIENT TRACKINGINFO\n    } {flags off redirect -1 prefixes {}}\n\n    test {Coverage: Basic CLIENT GETREDIR} {\n        r CLIENT GETREDIR\n    } {-1}\n}\n"
  },
  {
    "path": "tests/unit/type/hash-field-expire.tcl",
"content": "######## HEXPIRE family commands\n# Field does not exist\nset E_NO_FIELD     -2\n# Specified NX | XX | GT | LT condition not met\nset E_FAIL         0\n# expiration time set/updated\nset E_OK           1\n# Field deleted because the specified expiration time is in the past\nset E_DELETED      2\n\n######## HTTL family commands\nset T_NO_FIELD    -2\nset T_NO_EXPIRY   -1\n\n######## HPERSIST\nset P_NO_FIELD    -2\nset P_NO_EXPIRY   -1\nset P_OK           1\n\n############################### AUX FUNCS ######################################\n\nproc get_stat_subexpiry {r} {\n    set input_string [r info keyspace]\n    set hash_count 0\n\n    foreach line [split $input_string \\n] {\n        if {[regexp {subexpiry=(\\d+)} $line -> value]} {\n            return $value\n        }\n    }\n\n    return 0\n}\n\nproc get_keys {l} {\n    set res {}\n    foreach entry $l {\n        set key [lindex $entry 0]\n        lappend res $key\n    }\n    return $res\n}\n\nproc dumpAllHashes {client} {\n    set keyAndFields(0,0) 0\n    unset keyAndFields\n    # keep keys sorted for comparison\n    foreach key [lsort [$client keys *]] {\n        set fields [$client hgetall $key]\n        foreach f $fields {\n            set keyAndFields($key,$f) [$client hpexpiretime $key FIELDS 1 $f]\n        }\n    }\n    return [array get keyAndFields]\n}\n\n############################### TESTS #########################################\n\nstart_server {tags {\"external:skip needs:debug\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT - Returns array if the key does not exist\" {\n            r del myhash\n            assert_equal [r HEXPIRE myhash 1000 FIELDS 1 a] [list $E_NO_FIELD]\n            assert_equal [r HEXPIREAT myhash 1000 FIELDS 1 a] [list 
$E_NO_FIELD]\n            assert_equal [r HPEXPIRE myhash 1000 FIELDS 2 a b] [list $E_NO_FIELD $E_NO_FIELD]\n            assert_equal [r HPEXPIREAT myhash 1000 FIELDS 2 a b] [list $E_NO_FIELD $E_NO_FIELD]\n        }\n\n        test \"HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT - Verify that the expire time does not overflow\" {\n            r del myhash\n            r hset myhash f1 v1\n            # The expire time can't be negative.\n            assert_error {ERR invalid expire time, must be >= 0} {r HEXPIRE myhash -1 FIELDS 1 f1}\n            assert_error {ERR invalid expire time, must be >= 0} {r HEXPIRE myhash -9223372036854775808 FIELDS 1 f1}\n            # The expire time can't be greater than the EB_EXPIRE_TIME_MAX\n            assert_error {ERR invalid expire time in 'hexpire' command} {r HEXPIRE myhash [expr (1<<48) / 1000] FIELDS 1 f1}\n            assert_error {ERR invalid expire time in 'hexpireat' command} {r HEXPIREAT myhash [expr (1<<48) / 1000 + [clock seconds] + 100] FIELDS 1 f1}\n            assert_error {ERR invalid expire time in 'hpexpire' command} {r HPEXPIRE myhash [expr (1<<48)] FIELDS 1 f1}\n            assert_error {ERR invalid expire time in 'hpexpireat' command} {r HPEXPIREAT myhash [expr (1<<48) + [clock milliseconds] + 100] FIELDS 1 f1}\n        }\n\n        test \"HPEXPIRE(AT) - Test 'NX' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpire myhash 1000 NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpire myhash 1000 NX FIELDS 2 field1 field2] [list  $E_FAIL  $E_OK]\n\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] NX FIELDS 2 field1 field2] [list  $E_FAIL  $E_OK]\n        }\n\n     
   test \"HPEXPIRE(AT) - Test 'XX' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpire myhash 1000 NX FIELDS 2 field1 field2] [list  $E_OK  $E_OK]\n            assert_equal [r hpexpire myhash 1000 XX FIELDS 2 field1 field3] [list  $E_OK  $E_FAIL]\n\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] NX FIELDS 2 field1 field2] [list  $E_OK  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] XX FIELDS 2 field1 field3] [list  $E_OK  $E_FAIL]\n        }\n\n        test \"HPEXPIRE(AT) - Test 'GT' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            assert_equal [r hpexpire myhash 1000 NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpire myhash 2000 NX FIELDS 1 field2] [list  $E_OK]\n            assert_equal [r hpexpire myhash 1500 GT FIELDS 2 field1 field2] [list  $E_OK  $E_FAIL]\n\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+2000)*1000}] NX FIELDS 1 field2] [list  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1500)*1000}] GT FIELDS 2 field1 field2] [list  $E_OK  $E_FAIL]\n        }\n\n        test \"HPEXPIRE(AT) - Test 'LT' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpire myhash 1000 NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpire myhash 2000 NX FIELDS 1 field2] [list  $E_OK]\n            assert_equal [r hpexpire myhash 1500 LT FIELDS 3 field1 field2 field3] 
[list  $E_FAIL $E_OK $E_OK]\n\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1000)*1000}] NX FIELDS 1 field1] [list  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+2000)*1000}] NX FIELDS 1 field2] [list  $E_OK]\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]+1500)*1000}] LT FIELDS 3 field1 field2 field3] [list  $E_FAIL $E_OK $E_OK]\n        }\n\n        test \"HPEXPIREAT - field not exists or TTL is in the past ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f4 v4\n            r hexpire myhash 1000 NX FIELDS 1 f4\n            assert_equal [r hpexpireat myhash [expr {([clock seconds]-1)*1000}] NX FIELDS 4 f1 f2 f3 f4] \"$E_DELETED $E_DELETED $E_NO_FIELD $E_FAIL\"\n            assert_equal [r hexists myhash field1] 0\n        }\n\n        test \"HPEXPIRE - wrong number of arguments ($type)\" {\n            r del myhash\n            r hset myhash f1 v1\n            assert_error {*Parameter `numFields` should be greater than 0} {r hpexpire myhash 1000 NX FIELDS 0 f1 f2 f3}\n            # <count> not match with actual number of fields\n            assert_error {*wrong number of arguments*} {r hpexpire myhash 1000 NX FIELDS 4 f1 f2 f3}\n            assert_error {*unknown argument*} {r hpexpire myhash 1000 NX FIELDS 2 f1 f2 f3}\n        }\n\n        test \"HPEXPIRE - parameter expire-time near limit of  2^46 ($type)\" {\n            r del myhash\n            r hset myhash f1 v1\n            # below & above\n            assert_equal [r hpexpire myhash [expr (1<<46) - [clock milliseconds] - 1000 ] FIELDS 1 f1] [list  $E_OK]\n            assert_error {*invalid expire time*} {r hpexpire myhash [expr (1<<46) - [clock milliseconds] + 100 ] FIELDS 1 f1}\n        }\n\n        test \"Lazy Expire - fields are lazy deleted ($type)\" {\n            r debug set-active-expire 0\n         
   r del myhash\n\n            r hset myhash f1 v1 f2 v2 f3 v3\n            r hpexpire myhash 1 NX FIELDS 3 f1 f2 f3\n            after 5\n\n            # Verify that the hash still exists even if all its fields are expired\n            assert_equal 1 [r EXISTS myhash]\n\n            # Verify that len also counts expired fields\n            assert_equal 3 [r HLEN myhash]\n\n            # Trying to access an expired field should delete it. Len should be updated\n            assert_equal 0 [r hexists myhash f1]\n            assert_equal 2 [r HLEN myhash]\n\n            # Trying to access another expired field should delete it. Len should be updated\n            assert_equal \"\" [r hget myhash f2]\n            assert_equal 1 [r HLEN myhash]\n\n            # Trying to access the last expired field should delete it. The hash shouldn't exist afterward.\n            assert_equal 0 [r hstrlen myhash f3]\n            assert_equal 0 [r HLEN myhash]\n            assert_equal 0 [r EXISTS myhash]\n\n            # Restore default\n            r debug set-active-expire 1\n        }\n\n        test \"Active Expire - deletes hash that all its fields got expired ($type)\" {\n            r flushall\n\n            set hash_sizes {1 15 16 17 31 32 33 40}\n            foreach h $hash_sizes {\n                for {set i 1} {$i <= $h} {incr i} {\n                    # Random expiration time (Take care expired not after \"mix$h\")\n                    r hset hrand$h f$i v$i\n                    r hpexpire hrand$h [expr {70 + int(rand() * 30)}] FIELDS 1 f$i\n                    assert_equal 1 [r HEXISTS hrand$h f$i]\n\n                    # Same expiration time (Take care expired not after \"mix$h\")\n                    r hset same$h f$i v$i\n                    r hpexpire same$h 100 FIELDS 1 f$i\n                    assert_equal 1 [r HEXISTS same$h f$i]\n\n                    # same expiration time\n                    r hset mix$h f$i v$i fieldWithoutExpire$i v$i\n                    r hpexpire mix$h 100 FIELDS 1 
f$i\n                    assert_equal 1 [r HEXISTS mix$h f$i]\n                }\n            }\n\n            # Wait for active expire\n            wait_for_condition 50 20 { [r EXISTS same40] == 0 } else { fail \"hash `same40` should be expired\" }\n\n            # Verify that all fields got expired and keys got deleted\n            foreach h $hash_sizes {\n                wait_for_condition 50 20 {\n                    [r HLEN mix$h] == $h\n                } else {\n                    fail \"volatile fields of hash `mix$h` should be expired\"\n                }\n\n                for {set i 1} {$i <= $h} {incr i} {\n                    assert_equal 0 [r HEXISTS mix$h f$i]\n                }\n                assert_equal 0 [r EXISTS hrand$h]\n                assert_equal 0 [r EXISTS same$h]\n            }\n        }\n\n        test \"HPEXPIRE - Flushall deletes all pending expired fields ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 10000 NX FIELDS 1 field1\n            r hpexpire myhash 10000 NX FIELDS 1 field2\n            r flushall\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 10000 NX FIELDS 1 field1\n            r hpexpire myhash 10000 NX FIELDS 1 field2\n            r flushall async\n        }\n\n        test \"HTTL/HPTTL - Returns array if the key does not exist\" {\n            r del myhash\n            assert_equal [r HTTL myhash FIELDS 1 a] [list $T_NO_FIELD]\n            assert_equal [r HPTTL myhash FIELDS 2 a b] [list $T_NO_FIELD $T_NO_FIELD]\n        }\n\n        test \"HTTL/HPTTL - Input validation gets failed on nonexists field or field without expire ($type)\" {\n            r del myhash\n            r HSET myhash field1 value1 field2 value2\n            r HPEXPIRE myhash 1000 NX FIELDS 1 field1\n\n            foreach cmd {HTTL HPTTL} {\n                assert_equal [r $cmd myhash FIELDS 2 field2 
non_exists_field] \"$T_NO_EXPIRY $T_NO_FIELD\"\n                # <count> does not match the actual number of fields\n                assert_error {*numfields* parameter must match the number of arguments*} {r $cmd myhash FIELDS 1 non_exists_field1 non_exists_field2}\n                assert_error {*numfields* parameter must match the number of arguments*} {r $cmd myhash FIELDS 3 non_exists_field1 non_exists_field2}\n            }\n        }\n\n        test \"HTTL/HPTTL - returns time to live in seconds/milliseconds ($type)\" {\n            r del myhash\n            r HSET myhash field1 value1 field2 value2\n            r HPEXPIRE myhash 2000 NX FIELDS 2 field1 field2\n            set ttlArray [r HTTL myhash FIELDS 2 field1 field2]\n            assert_range [lindex $ttlArray 0] 1 2\n            set ttl [r HPTTL myhash FIELDS 1 field1]\n            assert_range $ttl 1000 2000\n        }\n\n        test \"HEXPIRETIME/HPEXPIRETIME - Returns array if the key does not exist\" {\n            r del myhash\n            assert_equal [r HEXPIRETIME myhash FIELDS 1 a] [list $T_NO_FIELD]\n            assert_equal [r HPEXPIRETIME myhash FIELDS 2 a b] [list $T_NO_FIELD $T_NO_FIELD]\n        }\n\n        test \"HEXPIRETIME - returns expiration time as Unix timestamp ($type)\" {\n            r del myhash\n            r HSET myhash field1 value1\n            set lo [expr {[clock seconds] + 1}]\n            set hi [expr {[clock seconds] + 2}]\n            r HPEXPIRE myhash 1000 NX FIELDS 1 field1\n            assert_range [r HEXPIRETIME myhash FIELDS 1 field1] $lo $hi\n            assert_range [r HPEXPIRETIME myhash FIELDS 1 field1] [expr $lo*1000] [expr $hi*1000]\n        }\n\n        test \"HPEXPIRETIME persists after RDB reload ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 500 NX FIELDS 1 field1\n            set before [r HPEXPIRETIME myhash FIELDS 1 field1]\n            r debug reload\n            set after [r 
HPEXPIRETIME myhash FIELDS 1 field1]\n            assert_equal $before $after\n            # field2 should not have expiration\n            assert_equal [r HTTL myhash FIELDS 1 field2] $T_NO_EXPIRY\n            assert_equal [get_stat_subexpiry r] 1\n            # Wait for field1 to expire robustly\n            wait_for_condition 50 20 { [get_stat_subexpiry r] == 0 } else { fail \"subexpiry should be 0\" }\n            assert_equal [r hget myhash field1] \"\"\n            # field2 remains without expiration\n            assert_equal [r HTTL myhash FIELDS 1 field2] $T_NO_EXPIRY\n        }\n\n        # For a hash that had HFEs in the past, verify that after RDB\n        # reload it still won't be counted in `subexpiry`.\n        test \"Verify hash that had HFEs won't be counted in INFO keyspace also after reload ($type)\" {\n            # Prepare a hash with one field that will expire before the RDB is written\n            r flushall\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 1 NX FIELDS 1 field1\n            wait_for_condition 50 20 { [get_stat_subexpiry r] == 0 } else { fail \"`field1` should be expired\" }\n\n            # Disable active expire to avoid the key being added and then deleted\n            # from `subexpiry` just before verifying get_stat_subexpiry()\n            r debug set-active-expire 0\n\n            r debug reload\n\n            # Now verify no sub-expiry keys exist after reload (i.e. 
not registered in estore)\n            assert_equal [get_stat_subexpiry r] 0\n\n            # Restore to support active expire\n            r debug set-active-expire 1\n        }\n\n        # Test case where PERSIST was used, and active expire didn't do any cleanup yet\n        test \"Verify hash with PERSIST'd field won't be counted in INFO keyspace after reload ($type)\" {\n            r debug set-active-expire 0\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r hexpire myhash 10000 FIELDS 1 f1\n\n            # Verify subexpiry is 1 (field has expiration)\n            assert_equal [get_stat_subexpiry r] 1\n\n            # Persist the field (remove expiration)\n            assert_equal [r hpersist myhash FIELDS 1 f1] $P_OK\n\n            # subexpiry should still be 1 because active expire hasn't cleaned up yet.\n            # We avoid paying the cost of updating subexpiry data structure (estore)\n            # and leave the cleanup to efficient active expire\n            assert_equal [get_stat_subexpiry r] 1\n\n            # After RDB reload, subexpiry should be 0 (field no longer has expiration\n            # and RESTORE should \"accurately\" identify that and avoid registering it\n            # in estore)\n            r debug reload\n            assert_equal [get_stat_subexpiry r] 0\n\n            # Verify both fields exist and have no expiration\n            assert_equal [r hget myhash f1] \"v1\"\n            assert_equal [r hget myhash f2] \"v2\"\n            assert_equal [r httl myhash FIELDS 2 f1 f2] \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n\n            # Restore to support active expire\n            r debug set-active-expire 1\n        }\n\n        test \"HTTL/HPTTL - Verify TTL progress until expiration ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 1000 NX FIELDS 1 field1\n            assert_range [r HPTTL myhash FIELDS 1 field1] 100 1000\n            
assert_range [r HTTL myhash FIELDS 1 field1] 0 1\n            after 100\n            assert_range [r HPTTL myhash FIELDS 1 field1] 1 901\n            after 910\n            assert_equal [r HPTTL myhash FIELDS 1 field1] $T_NO_FIELD\n            assert_equal [r HTTL myhash FIELDS 1 field1] $T_NO_FIELD\n        }\n\n        test \"HPEXPIRE - DEL hash with non-expired fields (valgrind test) ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2\n            r hpexpire myhash 10000 NX FIELDS 1 field1\n            r del myhash\n        }\n\n        test \"HEXPIREAT - Set time in the past ($type)\" {\n            r del myhash\n            r hset myhash field1 value1\n            assert_equal [r hexpireat myhash [expr {[clock seconds] - 1}] NX FIELDS 1 field1] $E_DELETED\n            assert_equal [r hexists myhash field1] 0\n        }\n\n        test \"HEXPIREAT - Set time and then get TTL ($type)\" {\n            r del myhash\n            r hset myhash field1 value1\n\n            r hexpireat myhash [expr {[clock seconds] + 2}] NX FIELDS 1 field1\n            assert_range [r hpttl myhash FIELDS 1 field1] 500 2000\n            assert_range [r httl myhash FIELDS 1 field1] 1 2\n\n            r hexpireat myhash [expr {[clock seconds] + 5}] XX FIELDS 1 field1\n            assert_range [r httl myhash FIELDS 1 field1] 4 5\n        }\n\n        test \"Lazy Expire - delete hash with expired fields ($type)\" {\n            r del myhash\n            r debug set-active-expire 0\n            r hset myhash k v\n            r hpexpire myhash 1 NX FIELDS 1 k\n            after 5\n            r del myhash\n            r debug set-active-expire 1\n        }\n\n        test \"Test HRANDFIELD deletes all expired fields ($type)\" {\n            r debug set-active-expire 0\n            r flushall\n            r config resetstat\n            r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n            r hpexpire myhash 1 FIELDS 2 f1 f2\n            after 5\n    
        assert_equal [lsort [r hrandfield myhash 5]] \"f3 f4 f5\"\n            assert_equal [s expired_subkeys] 2\n            r hpexpire myhash 1 FIELDS 3 f3 f4 f5\n            after 5\n            assert_equal [lsort [r hrandfield myhash 5]] \"\"\n            assert_equal [r keys *] \"\"\n\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n            r hpexpire myhash 1 FIELDS 1 f1\n            after 5\n            set res [r hrandfield myhash]\n            assert {$res == \"f2\" || $res == \"f3\"}\n            r hpexpire myhash 1 FIELDS 1 f2\n            after 5\n            assert_equal [lsort [r hrandfield myhash 5]] \"f3\"\n            r hpexpire myhash 1 FIELDS 1 f3\n            after 5\n            assert_equal [r hrandfield myhash] \"\"\n            assert_equal [r keys *] \"\"\n\n            r debug set-active-expire 1\n        }\n\n        test \"Lazy Expire - HLEN does count expired fields ($type)\" {\n            # Enforce only lazy expire\n            r debug set-active-expire 0\n\n            r del h1 h4 h18 h20\n            r hset h1 k1 v1\n            r hpexpire h1 1 NX FIELDS 1 k1\n\n            r hset h4 k1 v1 k2 v2 k3 v3 k4 v4\n            r hpexpire h4 1 NX FIELDS 3 k1 k3 k4\n\n            # beyond 16 fields: HFE DS (ebuckets) converts from list to rax\n\n            r hset h18 k1 v1 k2 v2 k3 v3 k4 v4 k5 v5 k6 v6 k7 v7 k8 v8 k9 v9 k10 v10 k11 v11 k12 v12 k13 v13 k14 v14 k15 v15 k16 v16 k17 v17 k18 v18\n            r hpexpire h18 1 NX FIELDS 18 k1 k2 k3 k4 k5 k6 k7 k8 k9 k10 k11 k12 k13 k14 k15 k16 k17 k18\n\n            r hset h20 k1 v1 k2 v2 k3 v3 k4 v4 k5 v5 k6 v6 k7 v7 k8 v8 k9 v9 k10 v10 k11 v11 k12 v12 k13 v13 k14 v14 k15 v15 k16 v16 k17 v17 k18 v18 k19 v19 k20 v20\n            r hpexpire h20 1 NX FIELDS 2 k1 k2\n\n            after 10\n\n            assert_equal [r hlen h1] 1\n            assert_equal [r hlen h4] 4\n            assert_equal [r hlen h18] 18\n            assert_equal [r hlen h20] 20\n            # 
Restore to support active expire\n            r debug set-active-expire 1\n        }\n\n        test \"Lazy Expire - HSCAN does not report expired fields ($type)\" {\n            # Enforce only lazy expire\n            r debug set-active-expire 0\n\n            r del h1 h4 h18 h20\n            r hset h1 01 01\n            r hpexpire h1 1 NX FIELDS 1 01\n\n            r hset h4 01 01 02 02 03 03 04 04\n            r hpexpire h4 1 NX FIELDS 3 01 03 04\n\n            # Beyond 16 fields, the hash-field expiration DS (ebuckets) converts from list to rax\n\n            r hset h18 01 01 02 02 03 03 04 04 05 05 06 06 07 07 08 08 09 09 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18\n            r hpexpire h18 1 NX FIELDS 18 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18\n\n            r hset h20 01 01 02 02 03 03 04 04 05 05 06 06 07 07 08 08 09 09 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18 19 19 20 20\n            r hpexpire h20 1 NX FIELDS 2 01 02\n\n            after 10\n\n            # Verify HSCAN does not report expired fields\n            assert_equal [lsort -unique [lindex [r hscan h1 0 COUNT 10] 1]] \"\"\n            assert_equal [lsort -unique [lindex [r hscan h4 0 COUNT 10] 1]] \"02\"\n            assert_equal [lsort -unique [lindex [r hscan h18 0 COUNT 10] 1]] \"\"\n            assert_equal [lsort -unique [lindex [r hscan h20 0 COUNT 100] 1]] \"03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20\"\n            # Restore to support active expire\n            r debug set-active-expire 1\n        }\n\n        test \"Test HSCAN with mostly expired fields returns empty results ($type)\" {\n            r debug set-active-expire 0\n\n            # Create a hash with 1000 fields; 999 of them will be expired\n            r del myhash\n            for {set i 1} {$i <= 1000} {incr i} {\n                r hset myhash field$i value$i\n                if {$i > 1} {\n                    r hpexpire myhash 1 NX FIELDS 1 field$i\n                }\n        
    }\n            after 3\n\n            # Verify iterative HSCAN returns either empty result or only the first field\n            set countEmptyResult 0\n            set cur 0\n            while 1 {\n                set res [r hscan myhash $cur]\n                set cur [lindex $res 0]\n                # if the result is not empty, it should contain only the first field\n                if {[llength [lindex $res 1]] > 0} {\n                    assert_equal [lindex $res 1] \"field1 value1\"\n                } else {\n                    incr countEmptyResult\n                }\n                if {$cur == 0} break\n            }\n            assert {$countEmptyResult > 0}\n            r debug set-active-expire 1\n        }\n\n        test \"Lazy Expire - verify various HASH commands handling expired fields ($type)\" {\n            # Enforce only lazy expire\n            r debug set-active-expire 0\n            r del h1 h2 h3 h4 h5 h6 h18\n            r hset h1 01 01\n            r hset h2 01 01 02 02\n            r hset h3 01 01 02 02 03 03\n            r hset h4 1 99 2 99 3 99 4 99\n            r hset h5 1 1 2 22 3 333 4 4444 5 55555\n            r hset h6 01 01 02 02 03 03 04 04 05 05 06 06\n            r hset h18 01 01 02 02 03 03 04 04 05 05 06 06 07 07 08 08 09 09 10 10 11 11 12 12 13 13 14 14 15 15 16 16 17 17 18 18\n            r hpexpire h1 1 NX FIELDS 1 01\n            r hpexpire h2 1 NX FIELDS 1 01\n            r hpexpire h2 1 NX FIELDS 1 02\n            r hpexpire h3 1 NX FIELDS 1 01\n            r hpexpire h4 1 NX FIELDS 1 2\n            r hpexpire h5 1 NX FIELDS 1 3\n            r hpexpire h6 1 NX FIELDS 1 05\n            r hpexpire h18 1 NX FIELDS 17 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17\n\n            after 5\n\n            # Verify HDEL does not ignore expired fields. 
It is too much overhead to check\n            # if the field is expired before deletion.\n            assert_equal [r HDEL h1 01] \"1\"\n\n            # Verify HGET ignores expired fields\n            r config resetstat\n            assert_equal [r HGET h2 01] \"\"\n            assert_equal [s expired_subkeys] 1\n            assert_equal [r HGET h2 02] \"\"\n            assert_equal [s expired_subkeys] 2\n            assert_equal [r HGET h3 01] \"\"\n            assert_equal [r HGET h3 02] \"02\"\n            assert_equal [r HGET h3 03] \"03\"\n            assert_equal [s expired_subkeys] 3\n            # Verify HINCRBY ignores expired fields\n            assert_equal [r HINCRBY h4 2 1] \"1\"\n            assert_equal [s expired_subkeys] 4\n            assert_equal [r HINCRBY h4 3 1] \"100\"\n            # Verify HSTRLEN ignores expired fields\n            assert_equal [r HSTRLEN h5 3] \"0\"\n            assert_equal [s expired_subkeys] 5\n            assert_equal [r HSTRLEN h5 4] \"4\"\n            # Verify HKEYS ignores expired fields\n            assert_equal [lsort [r HKEYS h6]] \"01 02 03 04 06\"\n            assert_equal [s expired_subkeys] 5\n            # Verify HEXISTS ignores expired fields\n            assert_equal [r HEXISTS h18 07] \"0\"\n            assert_equal [s expired_subkeys] 6\n            assert_equal [r HEXISTS h18 18] \"1\"\n            # Verify HVALS ignores expired fields\n            assert_equal [lsort [r HVALS h18]] \"18\"\n            assert_equal [s expired_subkeys] 6\n            # Restore to support active expire\n            r debug set-active-expire 1\n        }\n\n        test \"A field with TTL overridden with another value (TTL discarded) ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            r hpexpire myhash 10000 NX FIELDS 1 field1\n            r hpexpire myhash 1 NX FIELDS 1 field2\n\n            # field2 TTL will be discarded\n            r hset myhash field2 value4\n            after 5\n            # The overridden field is expected to have no TTL\n            assert_equal [r hget myhash field2] \"value4\"\n            assert_equal [r httl myhash FIELDS 2 field2 field3] \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n            assert_not_equal [r httl myhash FIELDS 1 field1] \"$T_NO_EXPIRY\"\n        }\n\n        test \"Modify TTL of a field ($type)\" {\n            r del myhash\n            r hset myhash field1 value1\n            r hpexpire myhash 200000 NX FIELDS 1 field1\n            r hpexpire myhash 1000000 XX FIELDS 1 field1\n            after 15\n            assert_equal [r hget myhash field1] \"value1\"\n            assert_range [r hpttl myhash FIELDS 1 field1] 900000 1000000\n        }\n\n        test \"Test return value of set operation ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r hexpire myhash 100000 FIELDS 1 f1\n            assert_equal [r hset myhash f2 v2] 0\n            assert_equal [r hset myhash f3 v3] 1\n            assert_equal [r hset myhash f3 v3 f4 v4] 1\n            assert_equal [r hset myhash f3 v3 f5 v5 f6 v6] 2\n        }\n\n        test \"Test HGETALL does not return expired fields ($type)\" {\n            # Test with small hash\n            r debug set-active-expire 0\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6\n            r hpexpire myhash 1 NX FIELDS 3 f2 f4 f6\n            after 10\n            assert_equal [lsort [r hgetall myhash]] \"f1 f3 f5 v1 v3 v5\"\n\n            # Test with large hash\n            r del myhash\n            for {set i 1} {$i <= 600} {incr i} {\n                r hset myhash f$i v$i\n                if {$i > 3} { r hpexpire myhash 1 NX FIELDS 1 f$i }\n            }\n            after 10\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 f2 f3 v1 v2 v3\"]\n\n            # A hash whose fields are all expired returns an empty result\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6\n            r hpexpire myhash 1 
FIELDS 6 f1 f2 f3 f4 f5 f6\n            after 10\n            assert_equal [r hgetall myhash] \"\"\n            r debug set-active-expire 1\n        }\n\n        test \"Test RENAME hash with fields to be expired ($type)\" {\n            r debug set-active-expire 0\n            r del myhash\n            r hset myhash field1 value1\n            r hpexpire myhash 20 NX FIELDS 1 field1\n            r rename myhash myhash2\n            assert_equal [r exists myhash] 0\n            assert_range [r hpttl myhash2 FIELDS 1 field1] 1 20\n            after 25\n            # Verify the renamed key exists\n            assert_equal [r exists myhash2] 1\n            r debug set-active-expire 1\n            # Only active expire will delete the key\n            wait_for_condition 30 10 { [r exists myhash2] == 0 } else { fail \"`myhash2` should be expired\" }\n        }\n\n        test \"Test RENAME hash that had HFEs but not during the rename ($type)\" {\n            r del h1\n            r hset h1 f1 v1 f2 v2\n            r hpexpire h1 1 FIELDS 1 f1\n            after 20\n            r rename h1 h1_renamed\n            assert_equal [r exists h1] 0\n            assert_equal [r exists h1_renamed] 1\n            assert_equal [r hgetall h1_renamed] {f2 v2}\n            r hpexpire h1_renamed 1 FIELDS 1 f2\n            # Only active expire will delete the key\n            wait_for_condition 30 10 { [r exists h1_renamed] == 0 } else { fail \"`h1_renamed` should be expired\" }\n        }\n\n        test \"MOVE to another DB hash with fields to be expired ($type)\" {\n            r select 9\n            r flushall\n            r hset myhash field1 value1\n            r expireat myhash 2000000000000 ;# Force kvobj reallocation during move command\n            r hpexpire myhash 100 NX FIELDS 1 field1\n            r move myhash 10\n            assert_equal [r exists myhash] 0\n            assert_equal [r dbsize] 0\n\n            # Verify the key and its field exist in the target DB\n         
   r select 10\n            assert_equal [r hget myhash field1] \"value1\"\n            assert_equal [r exists myhash] 1\n\n            # Eventually the field will be expired and the key will be deleted\n            wait_for_condition 40 10 { [r hget myhash field1] == \"\" } else { fail \"`field1` should be expired\" }\n            wait_for_condition 40 10 { [r exists myhash] == 0 } else { fail \"db should be empty\" }\n        } {} {singledb:skip}\n\n        test \"Test COPY hash with fields to be expired ($type)\" {\n            r flushall\n            r hset h1 f1 v1 f2 v2\n            r hset h2 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6 f7 v7 f8 v8 f9 v9 f10 v10 f11 v11 f12 v12 f13 v13 f14 v14 f15 v15 f16 v16 f17 v17 f18 v18\n            r hpexpire h1 100 NX FIELDS 1 f1\n            r hpexpire h2 100 NX FIELDS 18 f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15 f16 f17 f18\n            r COPY h1 h1copy\n            r COPY h2 h2copy\n            assert_equal [r hget h1 f1] \"v1\"\n            assert_equal [r hget h1copy f1] \"v1\"\n            assert_equal [r exists h2] 1\n            assert_equal [r exists h2copy] 1\n            after 105\n\n            # Verify lazy expire of field in h1 and its copy\n            assert_equal [r hget h1 f1] \"\"\n            assert_equal [r hget h1copy f1] \"\"\n\n            # Verify lazy expire of field in h2 and its copy. 
Verify the key is deleted as well.\n            wait_for_condition 40 10 { [r exists h2] == 0 } else { fail \"`h2` should be expired\" }\n            wait_for_condition 40 10 { [r exists h2copy] == 0 } else { fail \"`h2copy` should be expired\" }\n\n        } {} {singledb:skip}\n\n        test \"Test COPY hash that had HFEs but not during the copy ($type)\" {\n            r del h1\n            r hset h1 f1 v1 f2 v2\n            r hpexpire h1 1 FIELDS 1 f1\n            after 20\n            r COPY h1 h1_copy\n            assert_equal [r exists h1] 1\n            assert_equal [r exists h1_copy] 1\n            assert_equal [r hgetall h1_copy] {f2 v2}\n            r hpexpire h1_copy 1 FIELDS 1 f2\n            # Only active expire will delete the key\n            wait_for_condition 30 10 { [r exists h1_copy] == 0 } else { fail \"`h1_copy` should be expired\" }\n        }\n\n        test \"Test SWAPDB hash-fields to be expired ($type)\" {\n            r select 9\n            r flushall\n            r hset myhash field1 value1\n            r hpexpire myhash 50 NX FIELDS 1 field1\n\n            r swapdb 9 10\n\n            # Verify the key and its field don't exist in the source DB\n            assert_equal [r exists myhash] 0\n            assert_equal [r dbsize] 0\n\n            # Verify the key and its field exist in the target DB\n            r select 10\n            assert_equal [r hget myhash field1] \"value1\"\n            assert_equal [r dbsize] 1\n\n            # Eventually the field will be expired and the key will be deleted\n            wait_for_condition 20 10 { [r exists myhash] == 0 } else { fail \"'myhash' should be expired\" }\n        } {} {singledb:skip}\n\n        test \"Test SWAPDB hash that had HFEs but not during the swap ($type)\" {\n            r select 9\n            r flushall\n            r hset myhash f1 v1 f2 v2\n            r hpexpire myhash 1 NX FIELDS 1 f1\n            after 10\n\n            r swapdb 9 10\n\n            # Verify the key and its field don't exist in the source DB\n            assert_equal [r exists myhash] 0\n            assert_equal [r dbsize] 0\n\n            # Verify the key and its field exist in the target DB\n            r select 10\n            assert_equal [r hgetall myhash] {f2 v2}\n            assert_equal [r dbsize] 1\n            r hpexpire myhash 1 NX FIELDS 1 f2\n\n            # Eventually the field will be expired and the key will be deleted\n            wait_for_condition 20 10 { [r exists myhash] == 0 } else { fail \"'myhash' should be expired\" }\n        } {} {singledb:skip}\n\n        test \"HMGET - returns empty entries if fields or hash expired ($type)\" {\n            r debug set-active-expire 0\n            r del h1 h2\n            r hset h1 f1 v1 f2 v2 f3 v3\n            r hset h2 f1 v1 f2 v2 f3 v3\n            r hpexpire h1 10000000 NX FIELDS 1 f1\n            r hpexpire h1 1 NX FIELDS 2 f2 f3\n            r hpexpire h2 1 NX FIELDS 3 f1 f2 f3\n            after 5\n            assert_equal [r hmget h1 f1 f2 f3] {v1 {} {}}\n            assert_equal [r hmget h2 f1 f2 f3] {{} {} {}}\n            r debug set-active-expire 1\n        }\n\n        test \"HPERSIST - Returns array if the key does not exist ($type)\" {\n            r del myhash\n            assert_equal [r HPERSIST myhash FIELDS 1 a] [list $P_NO_FIELD]\n            assert_equal [r HPERSIST myhash FIELDS 2 a b] [list $P_NO_FIELD $P_NO_FIELD]\n        }\n\n        test \"HPERSIST - input validation ($type)\" {\n            # HPERSIST key FIELDS <numfields> <field [field ...]>\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r hexpire myhash 1000 NX FIELDS 1 f1\n            assert_error {*wrong number of arguments*} {r hpersist myhash}\n            assert_error {*wrong number of arguments*} {r hpersist myhash FIELDS 1}\n            assert_equal [r hpersist myhash FIELDS 2 f1 not-exists-field] \"$P_OK $P_NO_FIELD\"\n            assert_equal [r hpersist myhash FIELDS 1 f2] 
\"$P_NO_EXPIRY\"\n            # <count> not match with actual number of fields\n            assert_error {*numfields* parameter must match the number of arguments*} {r hpersist myhash FIELDS 2 f1 f2 f3}\n            assert_error {*numfields* parameter must match the number of arguments*} {r hpersist myhash FIELDS 4 f1 f2 f3}\n        }\n\n        test \"HPERSIST - verify fields with TTL are persisted ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            r hexpire myhash 20 NX FIELDS 2 f1 f2\n            r hpersist myhash FIELDS 2 f1 f2\n            after 25\n            assert_equal [r hget myhash f1] \"v1\"\n            assert_equal [r hget myhash f2] \"v2\"\n            assert_equal [r HTTL myhash FIELDS 2 f1 f2] \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n        }\n\n        test \"HTTL/HPERSIST - Test expiry commands with non-volatile hash ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r httl myhash FIELDS 1 field1] $T_NO_EXPIRY\n            assert_equal [r httl myhash FIELDS 1 fieldnonexist] $E_NO_FIELD\n\n            assert_equal [r hpersist myhash FIELDS 1 field1] $P_NO_EXPIRY\n            assert_equal [r hpersist myhash FIELDS 1 fieldnonexist] $P_NO_FIELD\n        }\n\n        test {DUMP / RESTORE are able to serialize / unserialize a hash} {\n            r config set sanitize-dump-payload yes\n            r del myhash\n            r hmset myhash a 1 b 2 c 3\n            r hexpireat myhash 2524600800 fields 1 a\n            r hexpireat myhash 2524600801 fields 1 b\n            set encoded [r dump myhash]\n            r del myhash\n            r restore myhash 0 $encoded\n            assert_equal [lsort [r hgetall myhash]] \"1 2 3 a b c\"\n            assert_equal [r hexpiretime myhash FIELDS 3 a b c] {2524600800 2524600801 -1}\n        }\n\n        test {RESTORE hash that had in the past HFEs but not during the dump} {\n            r config set 
sanitize-dump-payload yes\n            r del myhash\n            r hmset myhash a 1 b 2 c 3\n            r hpexpire myhash 1 fields 1 a\n            after 10\n            set encoded [r dump myhash]\n            r del myhash\n            r restore myhash 0 $encoded\n            assert_equal [lsort [r hgetall myhash]] \"2 3 b c\"\n            r hpexpire myhash 1 fields 2 b c\n            wait_for_condition 30 10 { [r exists myhash] == 0 } else { fail \"`myhash` should be expired\" }\n        }\n\n        test {DUMP / RESTORE are able to serialize / unserialize a hash with TTL 0 for all fields} {\n            r config set sanitize-dump-payload yes\n            r del myhash\n            r hmset myhash a 1 b 2 c 3\n            r hexpire myhash 9999999 fields 1 a ;# make all TTLs of fields to 0\n            r hpersist myhash fields 1 a\n            assert_encoding $type myhash\n            set encoded [r dump myhash]\n            r del myhash\n            r restore myhash 0 $encoded\n            assert_equal [lsort [r hgetall myhash]] \"1 2 3 a b c\"\n            assert_equal [r hexpiretime myhash FIELDS 3 a b c] {-1 -1 -1}\n        }\n\n        test {HINCRBY - discards pending expired field and reset its value} {\n            r debug set-active-expire 0\n            r del h1 h2\n            r hset h1 f1 10 f2 2\n            r hset h2 f1 10\n            assert_equal [r HINCRBY h1 f1 2] 12\n            assert_equal [r HINCRBY h2 f1 2] 12\n            r HPEXPIRE h1 10 FIELDS 1 f1\n            r HPEXPIRE h2 10 FIELDS 1 f1\n            after 15\n            assert_equal [r HINCRBY h1 f1 1] 1\n            assert_equal [r HINCRBY h2 f1 1] 1\n            r debug set-active-expire 1\n        }\n\n        test {HINCRBY - preserve expiration time of the field} {\n            r del h1\n            r hset h1 f1 10\n            r hpexpire h1 20 FIELDS 1 f1\n            assert_equal [r HINCRBY h1 f1 2] 12\n            assert_range [r HPTTL h1 FIELDS 1 f1] 1 20\n        }\n\n\n        
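# Added sketch, not part of the original suite: assuming the HINCRBY and\n        # HPERSIST semantics exercised above, an increment performed after\n        # HPERSIST should leave the field non-volatile.\n        test {HINCRBY - increment after HPERSIST keeps field without TTL (sketch)} {\n            r del h1\n            r hset h1 f1 10\n            r hpexpire h1 20000 FIELDS 1 f1\n            r hpersist h1 FIELDS 1 f1\n            assert_equal [r HINCRBY h1 f1 2] 12\n            assert_equal [r httl h1 FIELDS 1 f1] $T_NO_EXPIRY\n        }\n\n        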
test {HINCRBYFLOAT - discards pending expired field and resets its value} {\n            r debug set-active-expire 0\n            r del h1 h2\n            r hset h1 f1 10 f2 2\n            r hset h2 f1 10\n            assert_equal [r HINCRBYFLOAT h1 f1 2] 12\n            assert_equal [r HINCRBYFLOAT h2 f1 2] 12\n            r HPEXPIRE h1 10 FIELDS 1 f1\n            r HPEXPIRE h2 10 FIELDS 1 f1\n            after 15\n            assert_equal [r HINCRBYFLOAT h1 f1 1] 1\n            assert_equal [r HINCRBYFLOAT h2 f1 1] 1\n            r debug set-active-expire 1\n        }\n\n        test {HINCRBYFLOAT - preserves expiration time of the field} {\n            r del h1\n            r hset h1 f1 10\n            r hpexpire h1 20 FIELDS 1 f1\n            assert_equal [r HINCRBYFLOAT h1 f1 2.5] 12.5\n            assert_range [r HPTTL h1 FIELDS 1 f1] 1 20\n        }\n\n        test \"HGETDEL - delete field with TTL ($type)\" {\n            r debug set-active-expire 0\n            r del h1\n\n            # Test deleting the only field in a hash. Due to lazy expiry,\n            # the reply will be null and the field and the key will be deleted.\n            r hsetex h1 PX 5 FIELDS 1 f1 10\n            after 15\n            assert_equal [r hgetdel h1 fields 1 f1] \"{}\"\n            assert_equal [r exists h1] 0\n\n            # Test deleting one field among many. 
f2 will lazily expire\n            r hsetex h1 FIELDS 3 f1 10 f2 20 f3 value3\n            r hpexpire h1 5 FIELDS 1 f2\n            after 15\n            assert_equal [r hgetdel h1 fields 2 f2 f3] \"{} value3\"\n            assert_equal [lsort [r hgetall h1]] [lsort \"f1 10\"]\n\n            # Try to delete the last field, along with non-existing fields\n            assert_equal [r hgetdel h1 fields 4 f1 f2 f3 f4] \"10 {} {} {}\"\n            r debug set-active-expire 1\n        }\n\n        test \"HGETEX - input validation ($type)\" {\n            r del h1\n            assert_error \"*wrong number of arguments*\" {r HGETEX}\n            assert_error \"*wrong number of arguments*\" {r HGETEX h1}\n            assert_error \"*wrong number of arguments*\" {r HGETEX h1 FIELDS}\n            assert_error \"*wrong number of arguments*\" {r HGETEX h1 FIELDS 0}\n            assert_error \"*wrong number of arguments*\" {r HGETEX h1 FIELDS 1}\n            assert_error \"*unknown argument*\" {r HGETEX h1 XFIELDX 1 a}\n            assert_error \"*unknown argument*\" {r HGETEX h1 PXAT 1 1}\n            assert_error \"*unknown argument*\" {r HGETEX h1 KEEPTTL fields 1 a}\n            assert_error \"*wrong number of arguments*\" {r HGETEX h1 FIELDS 2 a}\n            assert_error \"*invalid number of fields*\" {r HGETEX h1 FIELDS 0 a}\n            assert_error \"*invalid number of fields*\" {r HGETEX h1 FIELDS -1 a}\n            assert_error \"*invalid number of fields*\" {r HGETEX h1 FIELDS 9223372036854775808 a}\n\n            # Only one of EX, PX, EXAT, PXAT or PERSIST can be specified\n            assert_error {*Only one of EX, PX, EXAT, PXAT or PERSIST arguments*} {r HGETEX h1 EX 100 PX 1000 FIELDS 1 a}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or PERSIST arguments*} {r HGETEX h1 EXAT 100 EX 1000 FIELDS 1 a}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or PERSIST arguments*} {r HGETEX h1 PX 100 EXAT 100 FIELDS 1 a}\n            assert_error 
{*Only one of EX, PX, EXAT, PXAT or PERSIST arguments*} {r HGETEX h1 PXAT 100 EX 100 FIELDS 1 a}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or PERSIST arguments*} {r HGETEX h1 PERSIST EX 100 FIELDS 1 a}\n        }\n\n        test \"HGETEX - input validation (expire time) ($type)\" {\n            assert_error \"*value is not an integer or out of range*\" {r HGETEX h1 EX bla FIELDS 1 a}\n            assert_error \"*value is not an integer or out of range*\" {r HGETEX h1 EX 9223372036854775808 FIELDS 1 a}\n            assert_error \"*value is not an integer or out of range*\" {r HGETEX h1 EXAT 9223372036854775808 FIELDS 1 a}\n            assert_error \"*invalid expire time, must be >= 0*\" {r HGETEX h1 PX -1 FIELDS 1 a}\n            assert_error \"*invalid expire time, must be >= 0*\" {r HGETEX h1 PXAT -1 FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 EX -1 FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 EX [expr (1<<48)] FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 EX [expr (1<<46) - [clock seconds] + 100 ] FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 EXAT [expr (1<<46) + 100 ] FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 PX [expr (1<<46) - [clock milliseconds] + 100 ] FIELDS 1 a}\n            assert_error \"*invalid expire time*\" {r HGETEX h1 PXAT [expr (1<<46) + 100 ] FIELDS 1 a}\n            assert_error \"*wrong number of arguments*\" {r HGETEX missingkey EX 100 FIELDS}\n            assert_error \"*wrong number of arguments*\" {r EVAL \"return redis.call('HGETEX', 'missingkey', 'EX', '100', 'FIELDS')\" 0}\n        }\n\n        test \"HGETEX - get without setting ttl ($type)\" {\n            r del h1\n            r hset h1 a 1 b 2 c strval\n            assert_equal [r hgetex h1 fields 1 a] \"1\"\n            assert_equal [r hgetex h1 fields 2 a b] \"1 2\"\n            assert_equal [r hgetex h1 
fields 3 a b c] \"1 2 strval\"\n            assert_equal [r HTTL h1 FIELDS 3 a b c] \"$T_NO_EXPIRY $T_NO_EXPIRY $T_NO_EXPIRY\"\n        }\n\n        test \"HGETEX - get and set the ttl ($type)\" {\n            r del h1\n            r hset h1 a 1 b 2 c strval\n            assert_equal [r hgetex h1 EX 10000 fields 1 a] \"1\"\n            assert_range [r HTTL h1 FIELDS 1 a] 9000 10000\n            assert_equal [r hgetex h1 EX 10000 fields 1 c] \"strval\"\n            assert_range [r HTTL h1 FIELDS 1 c] 9000 10000\n        }\n\n        test \"HGETEX - Test 'EX' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hgetex myhash EX 1000 FIELDS 1 field1] [list \"value1\"]\n            assert_range [r httl myhash FIELDS 1 field1] 1 1000\n        }\n\n        test \"HGETEX - Test 'EXAT' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hgetex myhash EXAT [expr [clock seconds] + 10] FIELDS 1 field2] [list \"value2\"]\n            assert_range [r httl myhash FIELDS 1 field2] 5 10\n        }\n\n        test \"HGETEX - Test 'PX' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hgetex myhash PX 1000000 FIELDS 1 field3] [list \"value3\"]\n            assert_range [r httl myhash FIELDS 1 field3] 900 1000\n        }\n\n        test \"HGETEX - Test 'PXAT' flag ($type)\" {\n            r del myhash\n            r hset myhash field1 value1 field2 value2 field3 value3\n            assert_equal [r hgetex myhash PXAT [expr [clock milliseconds] + 10000] FIELDS 1 field3] [list \"value3\"]\n            assert_range [r httl myhash FIELDS 1 field3] 5 10\n        }\n\n        test \"HGETEX - Test 'PERSIST' flag ($type)\" {\n            r del myhash\n            r debug set-active-expire 0\n\n            r hsetex myhash PX 5000 
FIELDS 3 f1 v1 f2 v2 f3 v3\n            assert_not_equal [r httl myhash FIELDS 1 f1] \"$T_NO_EXPIRY\"\n            assert_not_equal [r httl myhash FIELDS 1 f2] \"$T_NO_EXPIRY\"\n            assert_not_equal [r httl myhash FIELDS 1 f3] \"$T_NO_EXPIRY\"\n\n            # Persist f1 and verify it does not have TTL anymore\n            assert_equal [r hgetex myhash PERSIST FIELDS 1 f1] \"v1\"\n            assert_equal [r httl myhash FIELDS 1 f1] \"$T_NO_EXPIRY\"\n\n            # Persist rest of the fields\n            assert_equal [r hgetex myhash PERSIST FIELDS 2 f2 f3] \"v2 v3\"\n            assert_equal [r httl myhash FIELDS 2 f2 f3]  \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n\n            # Redo the operation. It should be noop as fields are persisted already.\n            assert_equal [r hgetex myhash PERSIST FIELDS 2 f2 f3] \"v2 v3\"\n            assert_equal [r httl myhash FIELDS 2 f2 f3]  \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n\n            # Final sanity, fields exist and have no attached ttl.\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2 f3 v3\"]\n            assert_equal [r httl myhash FIELDS 3 f1 f2 f3]  \"$T_NO_EXPIRY $T_NO_EXPIRY $T_NO_EXPIRY\"\n            r debug set-active-expire 1\n        }\n\n        test \"HGETEX - Test setting ttl in the past will delete the key ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n\n            # hgetex without setting ttl\n            assert_equal [lsort [r hgetex myhash fields 3 f1 f2 f3]] [lsort \"v1 v2 v3\"]\n            assert_equal [r httl myhash FIELDS 3 f1 f2 f3] \"$T_NO_EXPIRY $T_NO_EXPIRY $T_NO_EXPIRY\"\n\n            # set an expired ttl and verify the key is deleted\n            r hgetex myhash PXAT 1 fields 3 f1 f2 f3\n            assert_equal [r exists myhash] 0\n        }\n\n        test \"HGETEX - Test active expiry ($type)\" {\n            r del myhash\n            r debug set-active-expire 0\n\n            r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        
    assert_equal [lsort [r hgetex myhash PXAT 1 FIELDS 5 f1 f2 f3 f4 f5]] [lsort \"v1 v2 v3 v4 v5\"]\n\n            r debug set-active-expire 1\n            wait_for_condition 50 20 { [r EXISTS myhash] == 0 } else { fail \"'myhash' should be expired\" }\n        }\n\n        test \"HGETEX - A field with TTL overridden with another value (TTL discarded) ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n            r hgetex myhash PX 10000 FIELDS 1 f1\n            r hgetex myhash EX 100 FIELDS 1 f2\n\n            # f2 ttl will be discarded\n            r hset myhash f2 v22\n            assert_equal [r hget myhash f2] \"v22\"\n            assert_equal [r httl myhash FIELDS 2 f2 f3] \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n\n            # Other field is not affected (still has TTL)\n            assert_not_equal [r httl myhash FIELDS 1 f1] \"$T_NO_EXPIRY\"\n        }\n\n        test \"HGETEX - Test with lazy expiry ($type)\" {\n            r del myhash\n            r debug set-active-expire 0\n\n            r hsetex myhash PX 1 FIELDS 2 f1 v1 f2 v2\n            after 5\n            assert_equal [r hgetex myhash FIELDS 2 f1 f2] \"{} {}\"\n            assert_equal [r exists myhash] 0\n\n            r debug set-active-expire 1\n        }\n\n\n        test \"HSETEX - input validation ($type)\" {\n            assert_error {*wrong number of arguments*} {r hsetex myhash}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 1}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 2 a b}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 2 a b c}\n            assert_error {*unknown argument*} {r hsetex myhash fields 2 a b c d e}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 3 a b c d}\n            assert_error {*wrong number of arguments*} {r hsetex myhash 
fields 3 a b c d e}\n            assert_error {*unknown argument*} {r hsetex myhash fields 3 a b c d e f g}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 3 a b}\n            assert_error {*unknown argument*} {r hsetex myhash fields 1 a b c}\n            assert_error {*unknown argument*} {r hsetex myhash nx fields 1 a b}\n            assert_error {*unknown argument*} {r hsetex myhash 1 fields 1 a b}\n            assert_error {*wrong number of arguments*} {r hsetex myhash fields 1 a}\n            assert_error {*unknown argument*} {r hsetex myhash persist fields 1 a b}\n            assert_error {*unknown argument*} {r hsetex myhash ex 100 persist fields 1 a b}\n\n            # Only one of FNX or FXX\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fxx fxx EX 100 fields 1 a b}\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fxx fnx EX 100 fields 1 a b}\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fnx fxx EX 100 fields 1 a b}\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fnx fnx EX 100 fields 1 a b}\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fxx fnx fxx EX 100 fields 1 a b}\n            assert_error {*Only one of FXX or FNX arguments *} {r hsetex myhash fnx fxx fnx EX 100 fields 1 a b}\n\n            # Only one of EX, PX, EXAT, PXAT or KEEPTTL can be specified\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EX 100 PX 1000 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EX 100 EXAT 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EXAT 100 EX 1000 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EXAT 100 PX 1000 fields 1 a 
b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PX 100 EXAT 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PX 100 PXAT 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PXAT 100 EX 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PXAT 100 EXAT 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EX 100 KEEPTTL fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash KEEPTTL EX 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EX 100 EX 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash EXAT 100 EXAT 100 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PX 10 PX 10 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash PXAT 10 PXAT 10 fields 1 a b}\n            assert_error {*Only one of EX, PX, EXAT, PXAT or KEEPTTL arguments*} {r hsetex myhash KEEPTTL KEEPTTL fields 1 a b}\n\n            # missing expire time\n            assert_error {*not an integer or out of range*} {r hsetex myhash ex fields 1 a b}\n            assert_error {*not an integer or out of range*} {r hsetex myhash px fields 1 a b}\n            assert_error {*not an integer or out of range*} {r hsetex myhash exat fields 1 a b}\n            assert_error {*not an integer or out of range*} {r hsetex myhash pxat fields 1 a b}\n\n            # expire time more than 2 ^ 48\n            assert_error {*invalid expire time*} {r hsetex myhash EXAT [expr (1<<48)] 1 a b}\n            
assert_error {*invalid expire time*} {r hsetex myhash PXAT [expr (1<<48)] 1 a b}\n            assert_error {*invalid expire time*} {r hsetex myhash EX [expr (1<<48) - [clock seconds] + 1000 ] 1 a b}\n            assert_error {*invalid expire time*} {r hsetex myhash PX [expr (1<<48) - [clock milliseconds] + 1000 ] 1 a b}\n\n            # invalid expire time\n            assert_error {*invalid expire time*} {r hsetex myhash EXAT -1 1 a b}\n            assert_error {*not an integer or out of range*} {r hsetex myhash EXAT 9223372036854775808 1 a b}\n            assert_error {*not an integer or out of range*} {r hsetex myhash EXAT x 1 a b}\n\n            # invalid numfields arg\n            assert_error {*invalid number of fields*} {r hsetex myhash fields x a b}\n            assert_error {*invalid number of fields*} {r hsetex myhash fields 9223372036854775808 a b}\n            assert_error {*invalid number of fields*} {r hsetex myhash fields 0 a b}\n            assert_error {*invalid number of fields*} {r hsetex myhash fields -1 a b}\n        }\n        \n        test \"HSETEX - Basic test ($type)\" {\n            r del myhash\n\n            # set field\n            assert_equal [r hsetex myhash FIELDS 1 f1 v1] 1\n            assert_equal [r hget myhash f1] \"v1\"\n\n            # override\n            assert_equal [r hsetex myhash FIELDS 1 f1 v11] 1\n            assert_equal [r hget myhash f1] \"v11\"\n\n            # set multiple\n            assert_equal [r hsetex myhash FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2\"]\n            assert_equal [r hsetex myhash FIELDS 3 f1 v111 f2 v222 f3 v333] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v111 f2 v222 f3 v333\"]\n        }\n\n        test \"HSETEX - Test FXX flag ($type)\" {\n            r del myhash\n\n            # Key is empty, command fails due to FXX\n            assert_equal [r hsetex myhash FXX FIELDS 2 f1 v1 f2 v2] 0\n            # 
Verify it did not leave the key empty\n            assert_equal [r exists myhash] 0\n\n            # Command fails and no change on fields\n            r hset myhash f1 v1\n            assert_equal [r hsetex myhash FXX FIELDS 2 f1 v1 f2 v2] 0\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1\"]\n\n            # Command executed successfully\n            assert_equal [r hsetex myhash FXX FIELDS 1 f1 v11] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v11\"]\n\n            # Try with multiple fields\n            r hset myhash f2 v2\n            assert_equal [r hsetex myhash FXX FIELDS 2 f1 v111 f2 v222] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v111 f2 v222\"]\n\n            # Try with expiry\n            assert_equal [r hsetex myhash FXX EX 100 FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2\"]\n            assert_range [r httl myhash FIELDS 1 f1] 80 100\n            assert_range [r httl myhash FIELDS 1 f2] 80 100\n\n            # Try with expiry, FXX arg comes after TTL\n            assert_equal [r hsetex myhash PX 5000 FXX FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2\"]\n            assert_range [r hpttl myhash FIELDS 1 f1] 4500 5000\n            assert_range [r hpttl myhash FIELDS 1 f2] 4500 5000\n        }\n\n        test \"HSETEX - Test FXX flag with lazy expire ($type)\" {\n            r del myhash\n            r debug set-active-expire 0\n\n            r hsetex myhash PX 10 FIELDS 1 f1 v1\n            after 15\n            assert_equal [r hsetex myhash FXX FIELDS 1 f1 v11] 0\n            assert_equal [r exists myhash] 0\n            r debug set-active-expire 1\n        }\n\n        test \"HSETEX - Test FNX flag ($type)\" {\n            r del myhash\n\n            # Command successful on an empty key\n            assert_equal [r hsetex myhash FNX FIELDS 1 f1 v1] 1\n\n            # Command fails 
and no change on fields\n            assert_equal [r hsetex myhash FNX FIELDS 2 f1 v1 f2 v2] 0\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1\"]\n\n            # Command executed successfully\n            assert_equal [r hsetex myhash FNX FIELDS 2 f2 v2 f3 v3] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2 f3 v3\"]\n            assert_equal [r hsetex myhash FXX FIELDS 3 f1 v11 f2 v22 f3 v33] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v11 f2 v22 f3 v33\"]\n\n            # Try with expiry\n            r del myhash\n            assert_equal [r hsetex myhash FNX EX 100 FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2\"]\n            assert_range [r httl myhash FIELDS 1 f1] 80 100\n            assert_range [r httl myhash FIELDS 1 f2] 80 100\n\n            # Try with expiry, FNX arg comes after TTL\n            assert_equal [r hsetex myhash PX 5000 FNX FIELDS 1 f3 v3] 1\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v1 f2 v2 f3 v3\"]\n            assert_range [r hpttl myhash FIELDS 1 f3] 4500 5000\n        }\n\n        test \"HSETEX EX - field appearing twice in FIELDS list with EX is allowed ($type)\" {\n            # The EX expiry applies, all fields must be set, and the last value wins.\n            r del myhash\n            r hset myhash f1 v1\n            r hsetex myhash EX 100 FIELDS 2 f1 new1 f1 new2\n            # Last value wins (same as plain HSET behavior with duplicate fields)\n            assert_equal \"new2\" [r hget myhash f1]\n            assert_range [r httl myhash FIELDS 1 f1] 80 100\n        }\n\n        test \"HSETEX FNX - field appearing twice in FIELDS list with FNX is allowed ($type)\" {\n            # The FNX condition passes, so all fields must be set, and the last value wins.\n            r del myhash\n            r hsetex myhash FNX EX 100 FIELDS 2 f1 new1 f1 new2\n            assert_equal \"new2\" [r hget myhash f1]\n            assert_range [r httl myhash FIELDS 1 f1] 80 100\n        }\n\n        test \"HSETEX - Test 'EX' flag ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            assert_equal [r hsetex myhash EX 1000 FIELDS 1 f3 v3] 1\n            assert_range [r httl myhash FIELDS 1 f3] 900 1000\n        }\n\n        test \"HSETEX - Test 'EXAT' flag ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            assert_equal [r hsetex myhash EXAT [expr [clock seconds] + 10] FIELDS 1 f3 v3] 1\n            assert_range [r httl myhash FIELDS 1 f3] 5 10\n        }\n\n        test \"HSETEX - Test 'PX' flag ($type)\" {\n            r del myhash\n            assert_equal [r hsetex myhash PX 1000000 FIELDS 1 f3 v3] 1\n            assert_range [r httl myhash FIELDS 1 f3] 990 1000\n        }\n\n        test \"HSETEX - Test 'PXAT' flag ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n            assert_equal [r hsetex myhash PXAT [expr [clock milliseconds] + 10000] FIELDS 1 f2 v2] 1\n            assert_range [r httl myhash FIELDS 1 f2] 5 10\n        }\n\n        test \"HSETEX - Test 'KEEPTTL' flag ($type)\" {\n            r del myhash\n\n            r hsetex myhash FIELDS 2 f1 v1 f2 v2\n            r hsetex myhash PX 20000 FIELDS 1 f2 v2\n\n            # f1 does not have ttl\n            assert_equal [r httl myhash FIELDS 1 f1] \"$T_NO_EXPIRY\"\n\n            # f2 has ttl\n            assert_not_equal [r httl myhash FIELDS 1 f2] \"$T_NO_EXPIRY\"\n\n            # Validate KEEPTTL preserves the TTL\n            assert_equal [r hsetex myhash KEEPTTL FIELDS 1 f2 v22] 1\n            assert_equal [r hget myhash f2] \"v22\"\n            assert_not_equal [r httl myhash FIELDS 1 f2] \"$T_NO_EXPIRY\"\n\n            # Try with multiple fields. 
First, set fields and TTL\n            r hsetex myhash EX 10000 FIELDS 3 f1 v1 f2 v2 f3 v3\n\n            # Update fields with KEEPTTL flag\n            r hsetex myhash KEEPTTL FIELDS 3 f1 v111 f2 v222 f3 v333\n\n            # Verify values are set, ttls are untouched\n            assert_equal [lsort [r hgetall myhash]] [lsort \"f1 v111 f2 v222 f3 v333\"]\n            assert_range [r httl myhash FIELDS 1 f1] 9000 10000\n            assert_range [r httl myhash FIELDS 1 f2] 9000 10000\n            assert_range [r httl myhash FIELDS 1 f3] 9000 10000\n        }\n\n        test \"HSETEX - Test multiple 'FIELDS' arguments raise error ($type)\" {\n            r del myhash\n            assert_error {*FIELDS keyword specified multiple times*} {r hsetex myhash FIELDS 1 f1 v1 FIELDS 1 f2 v2}\n            assert_error {*FIELDS keyword specified multiple times*} {r hsetex myhash FIELDS 1 f1 v1 EX 100 FIELDS 1 f2 v2}\n        }\n\n        test \"HSETEX - Test no expiry flag discards TTL ($type)\" {\n            r del myhash\n\n            r hsetex myhash FIELDS 1 f1 v1\n            r hsetex myhash PX 100000 FIELDS 1 f2 v2\n            assert_range [r hpttl myhash FIELDS 1 f2] 1 100000\n\n            assert_equal [r hsetex myhash FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [r httl myhash FIELDS 2 f1 f2] \"$T_NO_EXPIRY $T_NO_EXPIRY\"\n        }\n\n        test \"HSETEX - Test with active expiry\" {\n            r del myhash\n            r debug set-active-expire 0\n\n            r hsetex myhash PX 10 FIELDS 5 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n            r debug set-active-expire 1\n            wait_for_condition 50 20 { [r EXISTS myhash] == 0 } else { fail \"'myhash' should be expired\" }\n        }\n\n        test \"HSETEX - Set time in the past ($type)\" {\n            r del myhash\n\n            # Try on an empty key\n            assert_equal [r hsetex myhash EXAT [expr {[clock seconds] - 1}] FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [r hexists myhash f1] 0\n\n  
          # Try with existing fields\n            r hset myhash f1 v1 f2 v2\n            assert_equal [r hsetex myhash EXAT [expr {[clock seconds] - 1}] FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [r hexists myhash f1] 0\n        }\n\n        test \"Hash field expire - test allow-access-expired parameter enabled\" {\n            r debug set-active-expire 0\n            r debug set-allow-access-expired 1\n            r del H1\n            r hset H1 f1 1\n            r hpexpire H1 1 FIELDS 1 f1\n            after 2\n            # With allow-access-expired 1, expired fields should be accessible\n            assert_equal {1} [r hexists H1 f1]\n            # Test hget with allow-access-expired enabled\n            assert_equal {1} [r hget H1 f1]\n            # Test hscan with allow-access-expired enabled\n            assert_equal {1 f1} [lsort [lindex [r hscan H1 0] 1]]\n            # Test hgetall with allow-access-expired enabled\n            assert_equal {f1 1} [r hgetall H1]\n            # Test hlen with allow-access-expired enabled\n            assert_equal {1} [r hlen H1]\n            # Test hmget with allow-access-expired enabled\n            assert_equal {1} [r hmget H1 f1]\n            # Test hkeys and hvals with allow-access-expired enabled\n            assert_equal {f1} [r hkeys H1]\n            assert_equal {1} [r hvals H1]\n            # Test hincrby with allow-access-expired enabled\n            assert_equal {2} [r hincrby H1 f1 1]\n            # Test hincrbyfloat with allow-access-expired enabled\n            assert_equal {3.5} [r hincrbyfloat H1 f1 1.5]\n            # Test hrandfield with allow-access-expired enabled\n            assert_equal {f1} [r hrandfield H1]\n            assert_equal {f1 3.5} [r hrandfield H1 1 withvalues]\n            # Reset to default\n            r debug set-allow-access-expired 0\n            r debug set-active-expire 1\n        }\n    }\n\n    test \"Statistics - Hashes with HFEs ($type)\" {\n        r config resetstat\n        r flushall\n\n        # hash1: 5 fields, 3 with TTL. subexpiry incr +1\n        r hset myhash1 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash1 150 FIELDS 3 f1 f2 f3\n        assert_match [get_stat_subexpiry r] 1\n        # Update hash1, f3 field with earlier TTL. subexpiry no change.\n        r hpexpire myhash1 100 FIELDS 1 f3\n        assert_match [get_stat_subexpiry r] 1\n\n        # hash2: 5 fields, 3 with TTL. subexpiry incr +1\n        r hset myhash2 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        assert_match [get_stat_subexpiry r] 1\n        r hpexpire myhash2 100 FIELDS 3 f1 f2 f3\n        assert_match [get_stat_subexpiry r] 2\n        # Update hash2, f3 field with later TTL. subexpiry no change.\n        r hpexpire myhash2 150 FIELDS 1 f3\n        assert_match [get_stat_subexpiry r] 2\n\n        # hash3: 2 fields, 1 with TTL. HDEL field with TTL. subexpiry decr -1\n        r hset myhash3 f1 v1 f2 v2\n        r hpexpire myhash3 100 FIELDS 1 f2\n        assert_match [get_stat_subexpiry r] 3\n        r hdel myhash3 f2\n        assert_match [get_stat_subexpiry r] 2\n\n        # hash4: 2 fields, 1 with TTL. HGETDEL field with TTL. subexpiry decr -1\n        r hset myhash4 f1 v1 f2 v2\n        r hpexpire myhash4 100 FIELDS 1 f2\n        assert_match [get_stat_subexpiry r] 3\n        r hgetdel myhash4 FIELDS 1 f2\n        assert_match [get_stat_subexpiry r] 2\n\n        # Expired fields of hash1 and hash2. 
subexpiry decr -2\n        wait_for_condition 50 50 {\n                [get_stat_subexpiry r] == 0\n        } else {\n                fail \"Hash field expiry statistics failed\"\n        }\n    }\n\n    test \"Statistics - Lazy expire increments expired_subkeys but not expired_subkeys_active ($type)\" {\n        r flushall\n        r config resetstat\n        r debug set-active-expire 0\n\n        # Create hash with fields that will expire\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 1 FIELDS 3 f1 f2 f3\n        after 2\n\n        # Trigger lazy expire by accessing the fields\n        assert_equal {{} {} {}} [r hmget myhash f1 f2 f3]\n\n        # Verify that expired_subkeys was incremented but expired_subkeys_active was not\n        assert_equal [s expired_subkeys] 3\n        assert_equal [s expired_subkeys_active] 0\n\n        r debug set-active-expire 1\n    }\n\n    test \"Statistics - Active expire increments both expired_subkeys and expired_subkeys_active ($type)\" {\n        r flushall\n        r config resetstat\n\n        # Create hash with fields that will expire soon\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 1 FIELDS 3 f1 f2 f3\n\n        # Wait for active expire to kick in\n        wait_for_condition 50 100 {\n            [s expired_subkeys] == 3\n        } else {\n            fail \"Hash fields were not expired\"\n        }\n\n        # Verify that both expired_subkeys and expired_subkeys_active were incremented\n        assert_equal [s expired_subkeys] 3\n        assert_equal [s expired_subkeys_active] 3\n    }\n\n    test \"HFE commands against wrong type\" {\n        r set wrongtype somevalue\n        assert_error \"WRONGTYPE Operation against a key*\" {r hexpire wrongtype 10 fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hexpireat wrongtype 10 fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hpexpire wrongtype 10 fields 
1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hpexpireat wrongtype 10 fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hexpiretime wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hpexpiretime wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r httl wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hpttl wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hpersist wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hgetex wrongtype fields 1 f1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hsetex wrongtype fields 1 f1 v1}\n    }\n\n    r config set hash-max-listpack-entries 512\n}\n\nstart_server {tags {\"external:skip needs:debug\"}} {\n\n    # Tests that only applies to listpack\n\n    test \"Test listpack memory usage\" {\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 5 FIELDS 2 f2 f4\n\n        # Just to have code coverage for the new listpack encoding\n        r memory usage myhash\n    }\n\n    test \"Test listpack object encoding\" {\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 5 FIELDS 2 f2 f4\n\n        # Just to have code coverage for the listpackex encoding\n        assert_equal [r object encoding myhash] \"listpackex\"\n    }\n\n    test \"Test listpack debug listpack\" {\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n\n        # Just to have code coverage for the listpackex encoding\n        r debug listpack myhash\n    }\n\n    test \"Test listpack converts to ht and passive expiry works\" {\n        set prev [lindex [r config get hash-max-listpack-entries] 1]\n        r config set hash-max-listpack-entries 10\n        r debug set-active-expire 0\n\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 5 
FIELDS 2 f2 f4\n\n        for {set i 6} {$i < 11} {incr i} {\n            r hset myhash f$i v$i\n        }\n        after 50\n        assert_equal [lsort [r hgetall myhash]] [lsort \"f1 f3 f5 f6 f7 f8 f9 f10 v1 v3 v5 v6 v7 v8 v9 v10\"]\n        r config set hash-max-listpack-entries $prev\n        r debug set-active-expire 1\n    }\n\n    test \"Test listpack converts to ht with allow-access-expired enabled\" {\n        r debug set-active-expire 0\n        r debug set-allow-access-expired 1\n        set prev [config_get_set hash-max-listpack-entries 5]\n        r del myhash\n\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4\n        r hpexpire myhash 1 FIELDS 2 f1 f2\n        after 2\n\n        assert_equal {f1 f2 f3 f4 v1 v2 v3 v4} [lsort [r hgetall myhash]]\n        assert_equal {1} [r hexists myhash f1]\n\n        for {set i 5} {$i <= 10} {incr i} {\n            r hset myhash f$i v$i\n        }\n\n        assert_equal {hashtable} [r object encoding myhash]\n        assert_equal {v1} [r hget myhash f1]\n        assert_equal {10} [r hlen myhash]\n\n        r config set hash-max-listpack-entries $prev\n        r debug set-allow-access-expired 0\n        r debug set-active-expire 1\n    }\n\n    test \"Test listpack converts to ht and active expiry works\" {\n        r del myhash\n        r debug set-active-expire 0\n\n        r hset myhash f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hpexpire myhash 10 FIELDS 1 f1\n\n        for {set i 0} {$i < 2048} {incr i} {\n            r hset myhash f$i v$i\n        }\n\n        for {set i 0} {$i < 2048} {incr i} {\n            r hpexpire myhash 10 FIELDS 1 f$i\n        }\n\n        r debug set-active-expire 1\n        wait_for_condition 50 20 { [r EXISTS myhash] == 0 } else { fail \"'myhash' should be expired\" }\n    }\n\n    test \"Test listpack converts to ht and active expiry works\" {\n        r del myhash\n        r debug set-active-expire 0\n\n        # Check expiry works after listpack converts to ht\n        for {set i 0} 
{$i < 1024} {incr i} {\n            r hset myhash f1_$i v1_$i f2_$i v2_$i f3_$i v3_$i f4_$i v4_$i\n            r hpexpire myhash 10 FIELDS 4 f1_$i f2_$i f3_$i f4_$i\n        }\n\n        assert_encoding hashtable myhash\n        assert_equal [r hlen myhash] 4096\n\n        r debug set-active-expire 1\n        wait_for_condition 50 20 { [r EXISTS myhash] == 0 } else { fail \"'myhash' should be expired\" }\n    }\n\n    test \"HPERSIST/HEXPIRE - Test listpack with large values\" {\n        r del myhash\n\n        # Test with larger values to verify we successfully move fields in\n        # listpack when we are ordering according to TTL. This config change\n        # will make the code use a temporary heap allocation when moving fields.\n        # See listpackExUpdateExpiry() for details.\n        r config set hash-max-listpack-value 2048\n\n        set payload1 [string repeat v3 1024]\n        set payload2 [string repeat v1 1024]\n\n        # Test with a single-item list\n        r hset myhash f1 $payload1\n        r hexpire myhash 2000 FIELDS 1 f1\n        assert_equal [r hget myhash f1] $payload1\n        r del myhash\n\n        # Test with multiple items\n        r hset myhash f1 $payload2 f2 v2 f3 $payload1 f4 v4\n        r hexpire myhash 100000 FIELDS 1 f3\n        r hpersist myhash FIELDS 1 f3\n        assert_equal [r hpersist myhash FIELDS 1 f3] $P_NO_EXPIRY\n\n        r hpexpire myhash 10 FIELDS 1 f1\n        after 20\n        assert_equal [lsort [r hgetall myhash]] [lsort \"f2 f3 f4 v2 $payload1 v4\"]\n\n        r config set hash-max-listpack-value 64\n    }\n\n    test {Test HEXPIRE coexists with EXPIRE} {\n        # Verify HEXPIRE & EXPIRE coexist. When setting EXPIRE a new kvobj might be\n        # created whereas the old one can still be referenced by the hash field\n        # expiration DS. Take care to set hexpire before expire. 
Verify all combinations of\n        # which expired first.\n        # Another point to verify is whether hexpire deletes the last field\n        # and in turn the key (see f2).\n        foreach etime {10 1000} htime {10 1000} f2 {0 1} {\n            r del myhash\n            r hset myhash f1 v1\n            if {$f2} { r hset myhash f2 v2 }\n            r hpexpire myhash $htime FIELDS 1 f1\n            r pexpire myhash $etime\n            after 20\n            # If EXPIRE is shorter, it should delete the key.\n            if {$etime == 10} {\n                assert_equal [r httl myhash FIELDS 1 f1] $T_NO_FIELD\n                assert_equal [r exists myhash] 0\n            } else {\n                if {$htime == 10} {\n                    assert_equal [r httl myhash FIELDS 1 f1] $T_NO_FIELD\n                    assert_range [r pttl myhash] 500 1000\n                } else {\n                    assert_range [r httl myhash FIELDS 1 f1] 1 1000\n                    assert_range [r pttl myhash] 500 1000\n                }\n            }\n        }\n    }\n}\n\nstart_server {tags {\"external:skip needs:debug\"}} {\n    foreach type {listpack ht} {\n        if {$type eq \"ht\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"Test Command propagated to replica as expected ($type)\" {\n            start_server {overrides {appendonly {yes} appendfsync always} tags {external:skip}} {\n\n                set aof [get_last_incr_aof_path r]\n                r debug set-active-expire 0 ;# Prevent fields from being expired during data preparation\n\n                # Time is in the past so it should propagate HDELs to the replica\n                # and delete the fields\n                r hset h0 x1 y1 x2 y2\n                r hexpireat h0 1 fields 3 x1 x2 non_exists_field\n\n                r hset h1 f1 v1 f2 v2\n\n                # Next command won't be propagated to 
replica\n                # because the XX condition is not met or the field does not exist\n                r hexpire h1 10 XX FIELDS 3 f1 f2 non_exists_field\n\n                r hpexpire h1 20 FIELDS 1 f1\n\n                # Next command will be propagated with only field 'f2'\n                # because the NX condition is not met for field 'f1'\n                r hpexpire h1 30 NX FIELDS 2 f1 f2\n\n                # Non-existing fields should be ignored\n                r hpexpire h1 30 FIELDS 1 non_exists_field\n                r hset h2 f1 v1 f2 v2 f3 v3 f4 v4\n                r hpexpire h2 40 FIELDS 2 f1 non_exists_field\n                r hpexpire h2 50 FIELDS 1 f2\n                r hpexpireat h2 [expr [clock seconds]*1000+100000] LT FIELDS 1 f3\n                r hexpireat h2 [expr [clock seconds]+10] NX FIELDS 1 f4\n\n                r debug set-active-expire 1\n                wait_for_condition 50 100 {\n                    [r hlen h2] eq 2\n                } else {\n                    fail \"Field f2 of hash h2 wasn't deleted\"\n                }\n\n                # HSETEX\n                r hsetex h3 FIELDS 1 f1 v1\n                r hsetex h3 FXX FIELDS 1 f1 v11\n                r hsetex h3 FNX FIELDS 1 f2 v22\n                r hsetex h3 KEEPTTL FIELDS 1 f2 v22\n\n                # Next one will fail due to the FNX arg and it won't be replicated\n                r hsetex h3 FNX FIELDS 2 f1 v1 f2 v2\n\n                # Commands with EX/PX/PXAT/EXAT will be replicated as PXAT\n                r hsetex h3 EX 10000 FIELDS 1 f1 v111\n                r hsetex h3 PX 10000 FIELDS 1 f1 v111\n                r hsetex h3 PXAT [expr [clock milliseconds]+100000] FIELDS 1 f1 v111\n                r hsetex h3 EXAT [expr [clock seconds]+100000] FIELDS 1 f1 v111\n\n                # Following commands will set and then delete the fields because\n                # the TTL is in the past. 
HDELs will be propagated.\n                r hsetex h3 PX 0 FIELDS 1 f1 v111\n                r hsetex h3 PX 0 FIELDS 3 f1 v2 f2 v2 f3 v3\n\n                # HGETEX\n                r hsetex h4 FIELDS 3 f1 v1 f2 v2 f3 v3\n                # No change on expiry, so it won't be replicated.\n                r hgetex h4 FIELDS 1 f1\n\n                # Commands with EX/PX/PXAT/EXAT will be replicated as an\n                # HPEXPIREAT command.\n                r hgetex h4 EX 10000 FIELDS 1 f1\n                r hgetex h4 PX 10000 FIELDS 1 f1\n                r hgetex h4 PXAT [expr [clock milliseconds]+100000] FIELDS 1 f1\n                r hgetex h4 EXAT [expr [clock seconds]+100000] FIELDS 1 f1\n\n                # Following commands will delete the fields because the TTL is in\n                # the past. HDELs will be propagated.\n                r hgetex h4 PX 0 FIELDS 1 f1\n                # HDELs will be propagated for f2 and f3 as only those exist.\n                r hgetex h4 PX 0 FIELDS 3 f1 f2 f3\n\n                # HGETEX with the PERSIST flag will be replicated as HPERSIST\n                r hsetex h4 EX 1000 FIELDS 1 f4 v4\n                r hgetex h4 PERSIST FIELDS 1 f4\n\n                # Nothing will be replicated as f4 is already persistent.\n                r hgetex h4 PERSIST FIELDS 1 f4\n\n                # Replicated as HDEL\n                r hgetdel h4 FIELDS 1 f4\n\n                # Assert that each TTL-related command is persisted with an absolute timestamp in the AOF\n                assert_aof_content $aof {\n                    {select *}\n                    {hset h0 x1 y1 x2 y2}\n                    {multi}\n                        {hdel h0 x1}\n                        {hdel h0 x2}\n                    {exec}\n                    {hset h1 f1 v1 f2 v2}\n                    {hpexpireat h1 * FIELDS 1 f1}\n                    {hpexpireat h1 * FIELDS 1 f2}\n                    {hset h2 f1 v1 f2 v2 f3 v3 f4 v4}\n                    {hpexpireat h2 * FIELDS 1 
f1}\n                    {hpexpireat h2 * FIELDS 1 f2}\n                    {hpexpireat h2 * FIELDS 1 f3}\n                    {hpexpireat h2 * FIELDS 1 f4}\n                    {hdel h1 f1}\n                    {hdel h1 f2}\n                    {hdel h2 f1}\n                    {hdel h2 f2}\n                    {hsetex h3 FIELDS 1 f1 v1}\n                    {hsetex h3 FXX FIELDS 1 f1 v11}\n                    {hsetex h3 FNX FIELDS 1 f2 v22}\n                    {hsetex h3 KEEPTTL FIELDS 1 f2 v22}\n                    {hsetex h3 PXAT * 1 f1 v111}\n                    {hsetex h3 PXAT * 1 f1 v111}\n                    {hsetex h3 PXAT * 1 f1 v111}\n                    {hsetex h3 PXAT * 1 f1 v111}\n                    {hdel h3 f1}\n                    {multi}\n                        {hdel h3 f1}\n                        {hdel h3 f2}\n                        {hdel h3 f3}\n                    {exec}\n                    {hsetex h4 FIELDS 3 f1 v1 f2 v2 f3 v3}\n                    {hpexpireat h4 * FIELDS 1 f1}\n                    {hpexpireat h4 * FIELDS 1 f1}\n                    {hpexpireat h4 * FIELDS 1 f1}\n                    {hpexpireat h4 * FIELDS 1 f1}\n                    {hdel h4 f1}\n                    {multi}\n                        {hdel h4 f2}\n                        {hdel h4 f3}\n                    {exec}\n                    {hsetex h4 PXAT * FIELDS 1 f4 v4}\n                    {hpersist h4 FIELDS 1 f4}\n                    {hdel h4 f4}\n                }\n            }\n        } {} {needs:debug}\n\n        test \"Lazy Expire - fields are lazy deleted and propagated to replicas ($type)\" {\n            start_server {overrides {appendonly {yes} appendfsync always} tags {external:skip}} {\n                r debug set-active-expire 0\n                set aof [get_last_incr_aof_path r]\n\n                r del myhash\n\n                r hset myhash f1 v1 f2 v2 f3 v3\n                r hpexpire myhash 1 NX FIELDS 3 f1 f2 f3\n                after 5\n\n  
              # Verify that the key still exists even if all its fields are expired\n                assert_equal 1 [r EXISTS myhash]\n\n                # Verify that HLEN counts expired fields as well\n                assert_equal 3 [r HLEN myhash]\n\n                # Accessing an expired field should delete it. HLEN should be updated\n                assert_equal 0 [r hexists myhash f1]\n                assert_equal 2 [r HLEN myhash]\n\n                # Accessing another expired field should delete it. HLEN should be updated\n                assert_equal \"\" [r hget myhash f2]\n                assert_equal 1 [r HLEN myhash]\n\n                # Accessing the last expired field should delete it. The hash shouldn't exist afterward.\n                assert_equal 0 [r hstrlen myhash f3]\n                assert_equal 0 [r HLEN myhash]\n                assert_equal 0 [r EXISTS myhash]\n\n                wait_for_condition 50 100 { [r exists h1] == 0 } else { fail \"hash h1 wasn't deleted\" }\n\n                # HDELs are propagated as expected\n                assert_aof_content $aof {\n                    {select *}\n                    {hset myhash f1 v1 f2 v2 f3 v3}\n                    {hpexpireat myhash * NX FIELDS 3 f1 f2 f3}\n                    {hdel myhash f1}\n                    {hdel myhash f2}\n                    {hdel myhash f3}\n                }\n                r debug set-active-expire 1\n            }\n        }\n\n        # Start a new server with empty data and AOF file.\n        start_server {overrides {appendonly {yes} appendfsync always} tags {external:skip}} {\n\n            # Based on the test at expire.tcl: \"All time-to-live (TTL) in commands are propagated as absolute ...\"\n            test {All TTLs in commands are propagated as absolute timestamp in milliseconds in AOF} {\n\n                set aof [get_last_incr_aof_path r]\n\n                r hset h1 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6\n                r hexpireat h1 [expr [clock seconds]+100] NX 
FIELDS 1 f1\n                r hpexpireat h1 [expr [clock seconds]*1000+100000] NX FIELDS 1 f2\n                r hpexpire h1 100000 NX FIELDS 3 f3 f4 f5\n                r hexpire h1 100000 FIELDS 1 f6\n\n                r hset h2 f1 v1 f2 v2\n                r hpexpire h2 1 FIELDS 2 f1 f2\n                after 200\n\n                r hsetex h3 EX 100000 FIELDS 2 f1 v1 f2 v2\n                r hsetex h3 EXAT [expr [clock seconds] + 1000] FIELDS 2 f1 v1 f2 v2\n                r hsetex h3 PX 100000 FIELDS 2 f1 v1 f2 v2\n                r hsetex h3 PXAT [expr [clock milliseconds]+100000] FIELDS 2 f1 v1 f2 v2\n\n                r hgetex h3 EX 100000 FIELDS 2 f1 f2\n                r hgetex h3 EXAT [expr [clock seconds] + 1000] FIELDS 2 f1 f2\n                r hgetex h3 PX 100000 FIELDS 2 f1 f2\n                r hgetex h3 PXAT [expr [clock milliseconds]+100000] FIELDS 2 f1 f2\n\n                assert_aof_content $aof {\n                    {select *}\n                    {hset h1 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5 f6 v6}\n                    {hpexpireat h1 * FIELDS 1 f1}\n                    {hpexpireat h1 * FIELDS 1 f2}\n                    {hpexpireat h1 * NX FIELDS 3 f3 f4 f5}\n                    {hpexpireat h1 * FIELDS 1 f6}\n                    {hset h2 f1 v1 f2 v2}\n                    {hpexpireat h2 * FIELDS 2 f1 f2}\n                    {hdel h2 *}\n                    {hdel h2 *}\n                    {hsetex h3 PXAT * FIELDS 2 f1 v1 f2 v2}\n                    {hsetex h3 PXAT * FIELDS 2 f1 v1 f2 v2}\n                    {hsetex h3 PXAT * FIELDS 2 f1 v1 f2 v2}\n                    {hsetex h3 PXAT * FIELDS 2 f1 v1 f2 v2}\n                    {hpexpireat h3 * FIELDS 2 f1 f2}\n                    {hpexpireat h3 * FIELDS 2 f1 f2}\n                    {hpexpireat h3 * FIELDS 2 f1 f2}\n                    {hpexpireat h3 * FIELDS 2 f1 f2}\n                }\n\n                array set keyAndFields1 [dumpAllHashes r]\n                r debug loadaof\n                
array set keyAndFields2 [dumpAllHashes r]\n\n                # Assert that the absolute TTLs are the same\n                assert_equal [array get keyAndFields1] [array get keyAndFields2]\n\n            } {} {needs:debug}\n        }\n\n        # Based on the test with the same name at expire.tcl:\n        test {All TTL in commands are propagated as absolute timestamp in replication stream} {\n            # Make sure that both relative and absolute expire commands are propagated\n            # Consider also the comment of the test with the same name at expire.tcl\n\n            r flushall ; # Clean up keyspace to avoid interference by keys from other tests\n            set repl [attach_to_replication_stream]\n\n            # HEXPIRE/HPEXPIRE should be translated into HPEXPIREAT\n            r hset h1 f1 v1\n            r hexpireat h1 [expr [clock seconds]+100] NX FIELDS 1 f1\n            r hset h2 f2 v2\n            r hpexpireat h2 [expr [clock seconds]*1000+100000] NX FIELDS 1 f2\n            r hset h3 f3 v3 f4 v4 f5 v5\n            # hpersist does nothing here. Verify it is not propagated.\n            r hpersist h3 FIELDS 1 f5\n            r hexpire h3 100 FIELDS 3 f3 f4 non_exists_field\n            r hpersist h3 FIELDS 1 f3\n\n            assert_replication_stream $repl {\n                {select *}\n                {hset h1 f1 v1}\n                {hpexpireat h1 * NX FIELDS 1 f1}\n                {hset h2 f2 v2}\n                {hpexpireat h2 * NX FIELDS 1 f2}\n                {hset h3 f3 v3 f4 v4 f5 v5}\n                {hpexpireat h3 * FIELDS 2 f3 f4}\n                {hpersist h3 FIELDS 1 f3}\n            }\n            close_replication_stream $repl\n        } {} {needs:repl}\n\n        test {HRANDFIELD deletes expired fields and propagates DELs to replica} {\n            r debug set-active-expire 0\n            r flushall\n            set repl [attach_to_replication_stream]\n\n            # HRANDFIELD deletes expired fields and propagates MULTI-EXEC HDELs. 
The reply is empty.\n            r hset h1 f1 v1 f2 v2\n            r hpexpire h1 1 FIELDS 2 f1 f2\n            after 5\n            assert_equal [r hrandfield h1 2] \"\"\n\n            # HRANDFIELD deletes the expired field and propagates HDEL. The reply is the non-expired field.\n            r hset h2 f1 v1 f2 v2\n            r hpexpire h2 1 FIELDS 1 f1\n            after 5\n            assert_equal [r hrandfield h2 2] \"f2\"\n\n            # HRANDFIELD deletes the expired field and propagates HDEL. The reply is empty.\n            r hset h3 f1 v1\n            r hpexpire h3 1 FIELDS 1 f1\n            after 5\n            assert_equal [r hrandfield h3 2] \"\"\n\n            assert_replication_stream $repl {\n                {select *}\n                {hset h1 f1 v1 f2 v2}\n                {hpexpireat h1 * FIELDS 2 f1 f2}\n                {multi}\n                {hdel h1 *}\n                {hdel h1 *}\n                {exec}\n                {hset h2 f1 v1 f2 v2}\n                {hpexpireat h2 * FIELDS 1 f1}\n                {hdel h2 f1}\n                {hset h3 f1 v1}\n                {hpexpireat h3 * FIELDS 1 f1}\n                {hdel h3 f1}\n            }\n            close_replication_stream $repl\n            r debug set-active-expire 1\n        } {OK} {needs:repl}\n\n        # Start another server to test replication of TTLs\n        start_server {tags {needs:repl external:skip}} {\n            # Set the outer layer server as primary\n            set primary [srv -1 client]\n            set primary_host [srv -1 host]\n            set primary_port [srv -1 port]\n            # Set this inner layer server as replica\n            set replica [srv 0 client]\n\n            # Server should have the role slave\n            $replica replicaof $primary_host $primary_port\n            wait_for_condition 50 100 {\n                [s 0 role] eq {slave}\n            } else {\n                fail \"Replication not started.\"\n            }\n\n            # Based on the test with the same name at expire.tcl\n          
  test {For all replicated TTL-related commands, absolute expire times are identical on primary and replica} {\n                # Apply each TTL-related command to a unique key on primary\n                $primary flushall\n                $primary hset h1 f v\n                $primary hexpireat h1 [expr [clock seconds]+10000] FIELDS 1 f\n                $primary hset h2 f v\n                $primary hpexpireat h2 [expr [clock milliseconds]+100000] FIELDS 1 f\n                $primary hset h3 f v\n                $primary hexpire h3 100 NX FIELDS 1 f\n                $primary hset h4 f v\n                $primary hpexpire h4 100000 NX FIELDS 1 f\n                $primary hset h5 f v\n                $primary hpexpireat h5 [expr [clock milliseconds]-100000] FIELDS 1 f\n                $primary hset h9 f v\n\n                $primary hsetex h10 EX 100000 FIELDS 1 f v\n                $primary hsetex h11 EXAT [expr [clock seconds] + 1000] FIELDS 1 f v\n                $primary hsetex h12 PX 100000 FIELDS 1 f v\n                $primary hsetex h13 PXAT [expr [clock milliseconds]+100000] FIELDS 1 f v\n                $primary hsetex h14 PXAT 1 FIELDS 1 f v\n\n                $primary hsetex h15 FIELDS 1 f v\n                $primary hgetex h15 EX 100000 FIELDS 1 f\n                $primary hsetex h16 FIELDS 1 f v\n                $primary hgetex h16 EXAT [expr [clock seconds] + 1000] FIELDS 1 f\n                $primary hsetex h17 FIELDS 1 f v\n                $primary hgetex h17 PX 100000 FIELDS 1 f\n                $primary hsetex h18 FIELDS 1 f v\n                $primary hgetex h18 PXAT [expr [clock milliseconds]+100000] FIELDS 1 f\n                $primary hsetex h19 FIELDS 1 f v\n                $primary hgetex h19 PXAT 1 FIELDS 1 f\n\n                # Wait for replica to get the keys and TTLs\n                assert {[$primary wait 1 0] == 1}\n\n                # Verify absolute TTLs are identical on primary and replica for all keys\n                # This is 
because TTLs are always replicated as absolute values\n                assert_equal [dumpAllHashes $primary] [dumpAllHashes $replica]\n            }\n        }\n\n        test \"Test HSETEX command replication\" {\n            r flushall\n            set repl [attach_to_replication_stream]\n\n            # Create a field and delete it in a single command due to timestamp\n            # being in the past. It will be propagated as HDEL.\n            r hsetex h1 PXAT 1 FIELDS 1 f1 v1\n\n            # Following ones will be propagated with PXAT arg\n            r hsetex h1 EX 100000 FIELDS 1 f1 v1\n            r hsetex h1 EXAT [expr [clock seconds] + 1000] FIELDS 1 f1 v1\n            r hsetex h1 PX 100000 FIELDS 1 f1 v1\n            r hsetex h1 PXAT [expr [clock milliseconds]+100000] FIELDS 1 f1 v1\n\n            # Propagate with KEEPTTL flag\n            r hsetex h1 KEEPTTL FIELDS 1 f1 v1\n\n            # Following commands will fail and won't be propagated\n            r hsetex h1 FNX FIELDS 1 f1 v11\n            r hsetex h1 FXX FIELDS 1 f2 v2\n\n            # Propagate with FNX and FXX flags\n            r hsetex h1 FNX FIELDS 1 f2 v2\n            r hsetex h1 FXX FIELDS 1 f2 v22\n\n            assert_replication_stream $repl {\n                {select *}\n                {hdel h1 f1}\n                {hsetex h1 PXAT * FIELDS 1 f1 v1}\n                {hsetex h1 PXAT * FIELDS 1 f1 v1}\n                {hsetex h1 PXAT * FIELDS 1 f1 v1}\n                {hsetex h1 PXAT * FIELDS 1 f1 v1}\n                {hsetex h1 KEEPTTL FIELDS 1 f1 v1}\n                {hsetex h1 FNX FIELDS 1 f2 v2}\n                {hsetex h1 FXX FIELDS 1 f2 v22}\n            }\n            close_replication_stream $repl\n        } {} {needs:repl}\n\n        test \"Test HGETEX command replication\" {\n            r flushall\n            r debug set-active-expire 0\n            set repl [attach_to_replication_stream]\n\n            # If no fields are found, command won't be replicated\n            r 
hgetex h1 EX 10000 FIELDS 1 f0\n            r hgetex h1 PERSIST FIELDS 1 f0\n\n            # A get without setting expiry will not be replicated\n            r hsetex h1 FIELDS 1 f0 v0\n            r hgetex h1 FIELDS 1 f0\n\n            # A lazily expired field will be replicated as HDEL\n            r hsetex h1 PX 10 FIELDS 1 f1 v1\n            after 15\n            r hgetex h1 EX 1000 FIELDS 1 f1\n\n            # If the new TTL is in the past, it will be replicated as HDEL\n            r hsetex h1 EX 10000 FIELDS 1 f2 v2\n            r hgetex h1 EXAT 1 FIELDS 1 f2\n\n            # One field will expire lazily and the other will be deleted because\n            # its TTL is in the past. This will be propagated as two HDELs.\n            r hsetex h1 PX 10 FIELDS 1 f3 v3\n            after 15\n            r hsetex h1 FIELDS 1 f4 v4\n            r hgetex h1 EXAT 1 FIELDS 2 f3 f4\n\n            # A TTL update will be replicated as HPEXPIREAT\n            r hsetex h1 FIELDS 1 f5 v5\n            r hgetex h1 EX 10000 FIELDS 1 f5\n\n            # If the PERSIST flag is used, it will be replicated as HPERSIST\n            r hsetex h1 EX 10000 FIELDS 1 f6 v6\n            r hgetex h1 PERSIST FIELDS 1 f6\n\n            assert_replication_stream $repl {\n                {select *}\n                {hsetex h1 FIELDS 1 f0 v0}\n                {hsetex h1 PXAT * FIELDS 1 f1 v1}\n                {hdel h1 f1}\n                {hsetex h1 PXAT * FIELDS 1 f2 v2}\n                {hdel h1 f2}\n                {hsetex h1 PXAT * FIELDS 1 f3 v3}\n                {hsetex h1 FIELDS 1 f4 v4}\n                {multi}\n                    {hdel h1 f3}\n                    {hdel h1 f4}\n                {exec}\n                {hsetex h1 FIELDS 1 f5 v5}\n                {hpexpireat h1 * FIELDS 1 f5}\n                {hsetex h1 PXAT * FIELDS 1 f6 v6}\n                {hpersist h1 FIELDS 1 f6}\n            }\n            close_replication_stream $repl\n        } {} {needs:repl}\n\n        test \"HINCRBYFLOAT 
command won't remove field expiration on replica ($type)\" {\n            r flushall\n            set repl [attach_to_replication_stream]\n\n            r hsetex h1 EX 100 FIELDS 1 f1 1\n            r hset h1 f2 1\n            r hincrbyfloat h1 f1 1.1\n            r hincrbyfloat h1 f2 1.1\n\n            # HINCRBYFLOAT will be replicated as HSETEX with the KEEPTTL flag\n            assert_replication_stream $repl {\n                {select *}\n                {hsetex h1 PXAT * FIELDS 1 f1 1}\n                {hset h1 f2 1}\n                {hsetex h1 KEEPTTL FIELDS 1 f1 *}\n                {hsetex h1 KEEPTTL FIELDS 1 f2 *}\n            }\n            close_replication_stream $repl\n\n            start_server {tags {external:skip}} {\n                r -1 flushall\n                r slaveof [srv -1 host] [srv -1 port]\n                wait_for_sync r\n\n                r -1 hsetex h1 EX 100 FIELDS 1 f1 1\n                r -1 hset h1 f2 1\n                wait_for_ofs_sync [srv -1 client] [srv 0 client]\n                assert_range [r httl h1 FIELDS 1 f1] 90 100\n                assert_equal {-1} [r httl h1 FIELDS 1 f2]\n\n                r -1 hincrbyfloat h1 f1 1.1\n                r -1 hincrbyfloat h1 f2 1.1\n\n                # The expiration time should not be removed on the replica and the value\n                # should be equal to the master's.\n                wait_for_ofs_sync [srv -1 client] [srv 0 client]\n                assert_range [r httl h1 FIELDS 1 f1] 90 100\n                assert_equal [r -1 hget h1 f1] [r hget h1 f1]\n\n                # The field f2 should not have any expiration on the replica either, even\n                # though its update was replicated as HSETEX with the KEEPTTL flag.\n                assert_equal {-1} [r httl h1 FIELDS 1 f2]\n                assert_equal [r -1 hget h1 f2] [r hget h1 f2]\n            }\n        } {} {needs:repl external:skip}\n    }\n}\n\n# Comprehensive tests for flexible parsing improvements and field validation fixes\nstart_server 
{tags {\"hash\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"HEXPIRE FAMILY - Rigid expiration time positioning ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n\n            # Test 1: Traditional order\n            assert_equal [r HEXPIRE myhash 60 FIELDS 2 f1 f2] [list $E_OK $E_OK]\n\n            # Test 2: Mixed order with condition flags\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            assert_equal [r HEXPIRE myhash 120 NX FIELDS 2 f1 f2] [list $E_OK $E_OK]\n            assert_equal [r HEXPIRE myhash 180 XX FIELDS 1 f1] [list $E_OK]\n\n            # Test 3: All condition flags with flexible ordering\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n            # Set initial expiry\n            assert_equal [r HEXPIRE myhash 100 FIELDS 1 f1] [list $E_OK]\n            assert_equal [r HEXPIRE myhash 200 GT FIELDS 1 f1] [list $E_OK]\n            assert_equal [r HEXPIRE myhash 50 LT FIELDS 1 f1] [list $E_OK]\n\n            # Test 4: Flexible positioning should FAIL (expiration time not at position 2)\n            assert_error {*value is not an integer or out of range*} {r HEXPIRE myhash FIELDS 1 f1 60}\n            assert_error {*value is not an integer or out of range*} {r HPEXPIRE myhash FIELDS 1 f2 5000}\n            assert_error {*value is not an integer or out of range*} {r HEXPIRE myhash NX FIELDS 1 f1}\n        }\n\n        test \"HEXPIREAT/HPEXPIREAT - Flexible keyword ordering ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n\n            set future_sec [expr {[clock seconds] + 300}]\n            set future_ms [expr {[clock milliseconds] + 300000}]\n\n            # Test rigid ordering with absolute timestamps\n            assert_equal [r HEXPIREAT myhash 
$future_sec FIELDS 1 f1] [list $E_OK]\n            assert_equal [r HPEXPIREAT myhash $future_ms NX FIELDS 1 f2] [list $E_OK]\n            assert_equal [r HPEXPIREAT myhash $future_ms XX FIELDS 1 f2] [list $E_OK]\n        }\n\n        test \"HSETEX - Flexible argument parsing and validation ($type)\" {\n            r del myhash\n\n            # Test 1: Traditional order (expiration first, FIELDS last)\n            assert_equal [r HSETEX myhash EX 60 FIELDS 2 f1 v1 f2 v2] 1\n            set ttl [r HTTL myhash FIELDS 2 f1 f2]\n            assert {[lindex $ttl 0] > 0 && [lindex $ttl 0] <= 60}\n            assert {[lindex $ttl 1] > 0 && [lindex $ttl 1] <= 60}\n\n            r del myhash\n\n            # Test 2: Flexible order (FIELDS first, expiration last)\n            assert_equal [r HSETEX myhash FIELDS 2 f1 v1 f2 v2 EX 60] 1\n            set ttl [r HTTL myhash FIELDS 2 f1 f2]\n            assert {[lindex $ttl 0] > 0 && [lindex $ttl 0] <= 60}\n            assert {[lindex $ttl 1] > 0 && [lindex $ttl 1] <= 60}\n\n            # Test 3: With condition flags in flexible order\n            assert_equal [r HSETEX myhash FXX FIELDS 1 f1 v1_updated KEEPTTL] 1\n            assert_equal [r HGET myhash f1] \"v1_updated\"\n        }\n\n        test \"HGETEX - Flexible argument parsing and validation ($type)\" {\n            r del myhash\n            r HSET myhash f1 v1 f2 v2 f3 v3\n\n            # Test 1: Traditional order (expiration first, FIELDS last)\n            assert_equal [r HGETEX myhash EX 60 FIELDS 2 f1 f2] [list \"v1\" \"v2\"]\n            set ttl [r HTTL myhash FIELDS 2 f1 f2]\n            assert {[lindex $ttl 0] > 0 && [lindex $ttl 0] <= 60}\n            assert {[lindex $ttl 1] > 0 && [lindex $ttl 1] <= 60}\n\n            r del myhash\n            r HSET myhash f1 v1 f2 v2 f3 v3\n\n            # Test 2: Flexible order (FIELDS first, expiration last)\n            assert_equal [r HGETEX myhash FIELDS 2 f1 f2 EX 60] [list \"v1\" \"v2\"]\n            set ttl [r HTTL 
myhash FIELDS 2 f1 f2]\n            assert {[lindex $ttl 0] > 0 && [lindex $ttl 0] <= 60}\n            assert {[lindex $ttl 1] > 0 && [lindex $ttl 1] <= 60}\n\n            # Test 3: PERSIST with flexible order\n            assert_equal [r HGETEX myhash FIELDS 1 f3 PERSIST] [list \"v3\"]\n            set ttl [r HTTL myhash FIELDS 1 f3]\n            assert_equal [lindex $ttl 0] -1\n        }\n    }\n}\n\n# Field validation and error handling improvements tests\nstart_server {tags {\"hash\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"Field count validation - HEXPIRE family ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n\n            # Test field count mismatches (too few fields specified)\n            assert_error {*value is not an integer or out of range*} {r HEXPIRE myhash FIELDS 60 1 f1 f2 f3}\n\n            # Test with numeric field names (should work)\n            r del myhash\n            r hset myhash 01 v1 02 v2 03 v3\n            assert_equal [r HEXPIRE myhash 60 FIELDS 3 01 02 03] [list $E_OK $E_OK $E_OK]\n        }\n\n        test \"Field count validation - HSETEX ($type)\" {\n            r del myhash\n\n            # Test field-value pair mismatches\n            assert_error {*wrong number of arguments*} {r HSETEX myhash FIELDS 2 f1 v1}\n            assert_error {*unknown argument*} {r HSETEX myhash FIELDS 1 f1 v1 f2 v2}\n            assert_error {*wrong number of arguments*} {r HSETEX myhash FIELDS 3 f1 v1 f2 v2}\n\n            # Test valid field-value pairs\n            assert_equal [r HSETEX myhash FIELDS 2 f1 v1 f2 v2 EX 60] 1\n            assert_equal [r HGET myhash f1] \"v1\"\n            assert_equal [r HGET myhash f2] \"v2\"\n        }\n\n        test \"Field count validation - HGETEX ($type)\" {\n            r del 
myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n\n            # Test field count mismatches\n            assert_error {*wrong number of arguments*} {r HGETEX myhash FIELDS 2 f1}\n            assert_error {*unknown argument*} {r HGETEX myhash FIELDS 1 f1 f2 f3}\n\n            # Test valid field counts\n            assert_equal [r HGETEX myhash FIELDS 2 f1 f2 EX 60] [list \"v1\" \"v2\"]\n        }\n\n        test \"Error message consistency and validation ($type)\" {\n            r del myhash\n            r hset myhash f1 v1\n\n            # Test invalid numfields values\n            assert_error {*Parameter*numFields*should be greater than 0*} {r HEXPIRE myhash 60 FIELDS 0 f1}\n            assert_error {*Parameter*numFields*should be greater than 0*} {r HEXPIRE myhash 60 FIELDS -1 f1}\n            assert_error {*invalid number of fields*} {r HSETEX myhash FIELDS 0 f1 v1 EX 60}\n            assert_error {*invalid number of fields*} {r HGETEX myhash FIELDS 0 f1 EX 60}\n            set future_sec [expr {[clock seconds] + 60}]\n            set future_ms [expr {[clock milliseconds] + 60000}]\n            foreach {cmd expire} [list HEXPIRE 60 HPEXPIRE 60000 HEXPIREAT $future_sec HPEXPIREAT $future_ms] {\n                assert_error {*wrong number of arguments*} [list r $cmd myhash $expire FIELDS 2147483647 f1]\n            }\n\n            # Test missing FIELDS keyword\n            assert_error {*unknown argument*} {r HEXPIRE myhash 60 2 f1 f2}\n            assert_error {*unknown argument*} {r HSETEX myhash EX 60 2 f1 v1 f2 v2}\n\n            # Test missing expire time\n            assert_error {*value is not an integer or out of range*} {r HEXPIRE myhash NX FIELDS 1 f1}\n            assert_error {*value is not an integer or out of range*} {r HPEXPIRE myhash FIELDS 1 f1 XX}\n        }\n\n        test \"Numeric field names validation ($type)\" {\n            r del myhash\n            r hset myhash 01 v1 02 v2 999 v999 1000 v1000\n\n            # Small numbers should 
work as field names\n            assert_equal [r HEXPIRE myhash 60 FIELDS 3 01 02 999] [list $E_OK $E_OK $E_OK]\n\n            # Large numbers should also work as field names\n            assert_equal [r HPEXPIRE myhash 5000 FIELDS 1 1000] [list $E_OK]\n\n            # Verify the fields still exist and have expiry\n            set ttl [r HTTL myhash FIELDS 4 01 02 999 1000]\n            assert {[lindex $ttl 0] > 0}\n            assert {[lindex $ttl 1] > 0}\n            assert {[lindex $ttl 2] > 0}\n            assert {[lindex $ttl 3] > 0}\n        }\n    }\n}\n\n# Advanced flexible parsing and edge case tests\nstart_server {tags {\"hash\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"Multiple condition flags error handling ($type)\" {\n            r del myhash\n            r hset myhash f1 v1\n\n            # Test multiple condition flags (should fail)\n            assert_error {*Multiple condition flags specified*} {r HEXPIRE myhash 60 NX XX FIELDS 1 f1}\n            assert_error {*Multiple condition flags specified*} {r HPEXPIRE myhash 5000 GT LT FIELDS 1 f1}\n            assert_error {*Multiple condition flags specified*} {r HEXPIRE myhash 60 FIELDS 1 f1 NX XX}\n        }\n\n        test \"Multiple FIELDS keywords error handling ($type)\" {\n            r del myhash\n            r hset myhash f1 v1\n\n            # Test multiple FIELDS keywords (should fail)\n            assert_error {*value is not an integer or out of range*} {r HEXPIRE myhash FIELDS 1 f1 60 FIELDS 1 f2}\n            assert_error {*FIELDS keyword specified multiple times*} {r HPEXPIRE myhash 5000 FIELDS 1 f1 FIELDS 1 f2}\n        }\n\n        test \"Boundary conditions and edge cases ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n\n            # Test with maximum 
reasonable field count\n            r del myhash\n            for {set i 1} {$i <= 100} {incr i} {\n                r hset myhash f$i v$i\n            }\n\n            # Build field list for 50 fields\n            set field_list {}\n            for {set i 1} {$i <= 50} {incr i} {\n                lappend field_list f$i\n            }\n\n            # Test rigid parsing with many fields\n            set result [r HEXPIRE myhash 300 FIELDS 50 {*}$field_list]\n            assert_equal [llength $result] 50\n\n            # Verify all fields got expiry set\n            set ttl_result [r HTTL myhash FIELDS 50 {*}$field_list]\n            foreach ttl $ttl_result {\n                assert {$ttl > 0 && $ttl <= 300}\n            }\n        }\n\n        test \"Field names that look like keywords or numbers ($type)\" {\n            r del myhash\n            r hset myhash EX value1 PX value2 FIELDS value3 NX value4 60 value5\n\n            # Test that field names that look like keywords work correctly\n            assert_equal [r HEXPIRE myhash 120 FIELDS 5 EX PX FIELDS NX 60] [list $E_OK $E_OK $E_OK $E_OK $E_OK]\n\n            # Verify the fields exist and have expiry\n            set ttl [r HTTL myhash FIELDS 5 EX PX FIELDS NX 60]\n            foreach t $ttl {\n                assert {$t > 0 && $t <= 120}\n            }\n\n            # Test HSETEX with keyword-like field names\n            r del myhash\n            assert_equal [r HSETEX myhash FIELDS 3 EX val1 PX val2 FIELDS val3 EX 60] 1\n            assert_equal [r HGET myhash EX] \"val1\"\n            assert_equal [r HGET myhash PX] \"val2\"\n            assert_equal [r HGET myhash FIELDS] \"val3\"\n        }\n    }\n}\n\n# Regression tests for specific fixes made during development\nstart_server {tags {\"hash\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 
512\n        }\n\n        test \"Parser state consistency ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2\n\n            # Test that parser correctly handles all argument positions\n            # without corrupting internal state\n\n            # Test 1: Multiple valid commands in sequence\n            assert_equal [r HEXPIRE myhash 60 FIELDS 1 f1] [list $E_OK]\n            assert_equal [r HPEXPIRE myhash 5000 FIELDS 1 f2] [list $E_OK]\n            # Should fail due to NX (field already has expiration)\n            assert_equal [r HEXPIRE myhash 120 NX FIELDS 1 f1] [list $E_FAIL]\n            # Should succeed due to XX (field has expiration)\n            assert_equal [r HEXPIRE myhash 180 XX FIELDS 1 f1] [list $E_OK]\n\n            # Test 2: Verify TTL values are correct\n            set ttl [r HTTL myhash FIELDS 2 f1 f2]\n            assert {[lindex $ttl 0] > 0 && [lindex $ttl 0] <= 180}\n            assert {[lindex $ttl 1] > 0}\n        }\n    }\n}\n\n# Integration tests - verify all improvements work together\nstart_server {tags {\"hash\"}} {\n    foreach type {listpackex hashtable} {\n        if {$type eq \"hashtable\"} {\n            r config set hash-max-listpack-entries 0\n        } else {\n            r config set hash-max-listpack-entries 512\n        }\n\n        test \"Stress test - complex scenarios with all features ($type)\" {\n            r del myhash\n\n            # Create a hash with many fields\n            for {set i 1} {$i <= 20} {incr i} {\n                r hset myhash field$i value$i\n            }\n\n            # Test 1: Flexible parsing with large field counts\n            set field_list {}\n            for {set i 1} {$i <= 10} {incr i} {\n                lappend field_list field$i\n            }\n            assert_equal [llength [r HEXPIRE myhash 3600 NX FIELDS 10 {*}$field_list]] 10\n\n            # Test 2: Mixed operations with rigid ordering\n            # First set expiration on field11-field15 so XX 
condition can succeed\n            assert_equal [r HPEXPIRE myhash 3600000 NX FIELDS 5 field11 field12 field13 field14 field15] [list $E_OK $E_OK $E_OK $E_OK $E_OK]\n            # Now XX should succeed since these fields have expiration\n            assert_equal [r HPEXPIRE myhash 7200000 XX FIELDS 5 field11 field12 field13 field14 field15] [list $E_OK $E_OK $E_OK $E_OK $E_OK]\n            assert_equal [r HEXPIRE myhash 7200 GT FIELDS 3 field1 field2 field3] [list $E_OK $E_OK $E_OK]\n\n            # Test 3: Verify field count validation still works with complex scenarios\n            assert_error {*wrong number of arguments*} {r HEXPIRE myhash 3600 FIELDS 15 field1 field2 field3}\n            assert_error {*unknown argument*} {r HPEXPIRE myhash 7200000 FIELDS 3 field1 field2 field3 field4 field5}\n\n            # Test 4: Verify all fields have correct expiry states\n            set ttl_result [r HTTL myhash FIELDS 15 field1 field2 field3 field4 field5 field6 field7 field8 field9 field10 field11 field12 field13 field14 field15]\n            for {set i 0} {$i < 15} {incr i} {\n                set ttl_val [lindex $ttl_result $i]\n                assert {$ttl_val > 0}\n            }\n        }\n\n        test \"Backward compatibility verification ($type)\" {\n            r del myhash\n            r hset myhash f1 v1 f2 v2 f3 v3\n\n            # Verify that traditional syntax still works exactly as before\n            assert_equal [r HEXPIRE myhash 60 FIELDS 2 f1 f2] [list $E_OK $E_OK]\n            assert_equal [r HPEXPIRE myhash 5000 NX FIELDS 1 f3] [list $E_OK]\n            assert_equal [r HEXPIREAT myhash [expr {[clock seconds] + 300}] XX FIELDS 1 f1] [list $E_OK]\n\n            # Verify HSETEX/HGETEX traditional syntax\n            r del myhash\n            assert_equal [r HSETEX myhash EX 60 FIELDS 2 f1 v1 f2 v2] 1\n            assert_equal [r HGETEX myhash PX 5000 FIELDS 2 f1 f2] [list \"v1\" \"v2\"]\n\n            # Verify error messages are consistent with 
expectations\n            assert_error {*Parameter*numFields*should be greater than 0*} {r HEXPIRE myhash 60 FIELDS 0 f1}\n            assert_error {*unknown argument*} {r HEXPIRE myhash 60 2 f1 f2}\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/hash.tcl",
    "content": "start_server {tags {\"hash\"}} {\n    r config set hash-max-listpack-value 64\n    r config set hash-max-listpack-entries 512\n\n    test {HSET/HLEN - Small hash creation} {\n        array set smallhash {}\n        for {set i 0} {$i < 8} {incr i} {\n            set key __avoid_collisions__[randstring 0 8 alpha]\n            set val __avoid_collisions__[randstring 0 8 alpha]\n            if {[info exists smallhash($key)]} {\n                incr i -1\n                continue\n            }\n            r hset smallhash $key $val\n            set smallhash($key) $val\n        }\n        list [r hlen smallhash]\n    } {8}\n\n    test {Is the small hash encoded with a listpack?} {\n        assert_encoding listpack smallhash\n    }\n\n    proc create_hash {key entries} {\n        r del $key\n        foreach entry $entries {\n            r hset $key [lindex $entry 0] [lindex $entry 1]\n        }\n    }\n\n    proc get_keys {l} {\n        set res {}\n        foreach entry $l {\n            set key [lindex $entry 0]\n            lappend res $key\n        }\n        return $res\n    }\n\n    foreach {type contents} \"listpack {{a 1} {b 2} {c 3}} hashtable {{a 1} {b 2} {[randstring 70 90 alpha] 3}}\" {\n        set original_max_value [lindex [r config get hash-max-ziplist-value] 1]\n        r config set hash-max-ziplist-value 10\n        create_hash myhash $contents\n        assert_encoding $type myhash\n\n        # coverage for kvobjComputeSize\n        assert_morethan [memory_usage myhash] 0\n\n        test \"HRANDFIELD - $type\" {\n            unset -nocomplain myhash\n            array set myhash {}\n            for {set i 0} {$i < 100} {incr i} {\n                set key [r hrandfield myhash]\n                set myhash($key) 1\n            }\n            assert_equal [lsort [get_keys $contents]] [lsort [array names myhash]]\n        }\n        r config set hash-max-ziplist-value $original_max_value\n    }\n\n    test \"HRANDFIELD with RESP3\" {\n       
 r hello 3\n        set res [r hrandfield myhash 3 withvalues]\n        assert_equal [llength $res] 3\n        assert_equal [llength [lindex $res 1]] 2\n\n        set res [r hrandfield myhash 3]\n        assert_equal [llength $res] 3\n        assert_equal [llength [lindex $res 1]] 1\n        r hello 2\n    }\n\n    test \"HRANDFIELD count of 0 is handled correctly\" {\n        r hrandfield myhash 0\n    } {}\n\n    test \"HRANDFIELD count overflow\" {\n        r hmset myhash a 1\n        assert_error {*value is out of range*} {r hrandfield myhash -9223372036854770000 withvalues}\n        assert_error {*value is out of range*} {r hrandfield myhash -9223372036854775808 withvalues}\n        assert_error {*value is out of range*} {r hrandfield myhash -9223372036854775808}\n    } {}\n\n    test \"HRANDFIELD with <count> against non existing key\" {\n        r hrandfield nonexisting_key 100\n    } {}\n\n    # Make sure we can distinguish between an empty array and a null response\n    r readraw 1\n\n    test \"HRANDFIELD count of 0 is handled correctly - emptyarray\" {\n        r hrandfield myhash 0\n    } {*0}\n\n    test \"HRANDFIELD with <count> against non existing key - emptyarray\" {\n        r hrandfield nonexisting_key 100\n    } {*0}\n\n    r readraw 0\n\n    foreach {type contents} \"\n        hashtable {{a 1} {b 2} {c 3} {d 4} {e 5} {6 f} {7 g} {8 h} {9 i} {[randstring 70 90 alpha] 10}}\n        listpack {{a 1} {b 2} {c 3} {d 4} {e 5} {6 f} {7 g} {8 h} {9 i} {10 j}} \" {\n        test \"HRANDFIELD with <count> - $type\" {\n            set original_max_value [lindex [r config get hash-max-ziplist-value] 1]\n            r config set hash-max-ziplist-value 10\n            create_hash myhash $contents\n            assert_encoding $type myhash\n\n            # create a dict for easy lookup\n            set mydict [dict create {*}[r hgetall myhash]]\n\n            # We'll stress different parts of the code, see the implementation\n            # of HRANDFIELD for 
more information, but basically there are\n            # four different code paths.\n\n            # PATH 1: Use negative count.\n\n            # 1) Check that it returns repeated elements with and without values.\n            set res [r hrandfield myhash -20]\n            assert_equal [llength $res] 20\n            set res [r hrandfield myhash -1001]\n            assert_equal [llength $res] 1001\n            # again with WITHVALUES\n            set res [r hrandfield myhash -20 withvalues]\n            assert_equal [llength $res] 40\n            set res [r hrandfield myhash -1001 withvalues]\n            assert_equal [llength $res] 2002\n\n            # Test random uniform distribution\n            # df = 9, 40 means 0.00001 probability\n            set res [r hrandfield myhash -1000]\n            assert_lessthan [chi_square_value $res] 40\n\n            # 2) Check that all the elements actually belong to the original hash.\n            foreach {key val} $res {\n                assert {[dict exists $mydict $key]}\n            }\n\n            # 3) Check that eventually all the elements are returned.\n            #    Use both WITHVALUES and without\n            unset -nocomplain auxset\n            set iterations 1000\n            while {$iterations != 0} {\n                incr iterations -1\n                if {[expr {$iterations % 2}] == 0} {\n                    set res [r hrandfield myhash -3 withvalues]\n                    foreach {key val} $res {\n                        dict append auxset $key $val\n                    }\n                } else {\n                    set res [r hrandfield myhash -3]\n                    foreach key $res {\n                        dict append auxset $key\n                    }\n                }\n                if {[lsort [dict keys $mydict]] eq\n                    [lsort [dict keys $auxset]]} {\n                    break;\n                }\n            }\n            assert {$iterations != 0}\n\n            # PATH 
2: positive count (unique behavior) with requested size\n            # equal or greater than set size.\n            foreach size {10 20} {\n                set res [r hrandfield myhash $size]\n                assert_equal [llength $res] 10\n                assert_equal [lsort $res] [lsort [dict keys $mydict]]\n\n                # again with WITHVALUES\n                set res [r hrandfield myhash $size withvalues]\n                assert_equal [llength $res] 20\n                assert_equal [lsort $res] [lsort $mydict]\n            }\n\n            # PATH 3: Ask for almost as many elements as there are in the set.\n            # In this case the implementation will duplicate the original\n            # set and will remove random elements up to the requested size.\n            #\n            # PATH 4: Ask a number of elements definitely smaller than\n            # the set size.\n            #\n            # We can test both code paths just by changing the size,\n            # using the same code.\n            foreach size {8 2} {\n                set res [r hrandfield myhash $size]\n                assert_equal [llength $res] $size\n                # again with WITHVALUES\n                set res [r hrandfield myhash $size withvalues]\n                assert_equal [llength $res] [expr {$size * 2}]\n\n                # 1) Check that all the elements actually belong to the\n                # original set.\n                foreach ele [dict keys $res] {\n                    assert {[dict exists $mydict $ele]}\n                }\n\n                # 2) Check that eventually all the elements are returned.\n                #    Use both WITHVALUES and without\n                unset -nocomplain auxset\n                unset -nocomplain allkey\n                set iterations [expr {1000 / $size}]\n                set all_ele_return false\n                while {$iterations != 0} {\n                    incr iterations -1\n                    if {[expr {$iterations % 2}] == 0} {\n 
                       set res [r hrandfield myhash $size withvalues]\n                        foreach {key value} $res {\n                            dict append auxset $key $value\n                            lappend allkey $key\n                        }\n                    } else {\n                        set res [r hrandfield myhash $size]\n                        foreach key $res {\n                            dict append auxset $key\n                            lappend allkey $key\n                        }\n                    }\n                    if {[lsort [dict keys $mydict]] eq\n                        [lsort [dict keys $auxset]]} {\n                        set all_ele_return true\n                    }\n                }\n                assert_equal $all_ele_return true\n                # df = 9, 40 means 0.00001 probability\n                assert_lessthan [chi_square_value $allkey] 40\n            }\n        }\n        r config set hash-max-ziplist-value $original_max_value\n    }\n\n\n    test {HSET/HLEN - Big hash creation} {\n        array set bighash {}\n        for {set i 0} {$i < 1024} {incr i} {\n            set key __avoid_collisions__[randstring 0 8 alpha]\n            set val __avoid_collisions__[randstring 0 8 alpha]\n            if {[info exists bighash($key)]} {\n                incr i -1\n                continue\n            }\n            r hset bighash $key $val\n            set bighash($key) $val\n        }\n        list [r hlen bighash]\n    } {1024}\n\n    test {Is the big hash encoded with an hash table?} {\n        assert_encoding hashtable bighash\n    }\n\n    test {HGET against the small hash} {\n        set err {}\n        foreach k [array names smallhash *] {\n            if {$smallhash($k) ne [r hget smallhash $k]} {\n                set err \"$smallhash($k) != [r hget smallhash $k]\"\n                break\n            }\n        }\n        set _ $err\n    } {}\n\n    test {HGET against the big hash} {\n        set 
err {}\n        foreach k [array names bighash *] {\n            if {$bighash($k) ne [r hget bighash $k]} {\n                set err \"$bighash($k) != [r hget bighash $k]\"\n                break\n            }\n        }\n        set _ $err\n    } {}\n\n    test {HGET against non existing key} {\n        set rv {}\n        lappend rv [r hget smallhash __123123123__]\n        lappend rv [r hget bighash __123123123__]\n        set _ $rv\n    } {{} {}}\n\n    test {HSET in update and insert mode} {\n        set rv {}\n        set k [lindex [array names smallhash *] 0]\n        lappend rv [r hset smallhash $k newval1]\n        set smallhash($k) newval1\n        lappend rv [r hget smallhash $k]\n        lappend rv [r hset smallhash __foobar123__ newval]\n        set k [lindex [array names bighash *] 0]\n        lappend rv [r hset bighash $k newval2]\n        set bighash($k) newval2\n        lappend rv [r hget bighash $k]\n        lappend rv [r hset bighash __foobar123__ newval]\n        lappend rv [r hdel smallhash __foobar123__]\n        lappend rv [r hdel bighash __foobar123__]\n        set _ $rv\n    } {0 newval1 1 0 newval2 1 1 1}\n\n    test {HSETNX target key missing - small hash} {\n        r hsetnx smallhash __123123123__ foo\n        r hget smallhash __123123123__\n    } {foo}\n\n    test {HSETNX target key exists - small hash} {\n        r hsetnx smallhash __123123123__ bar\n        set result [r hget smallhash __123123123__]\n        r hdel smallhash __123123123__\n        set _ $result\n    } {foo}\n\n    test {HSETNX target key missing - big hash} {\n        r hsetnx bighash __123123123__ foo\n        r hget bighash __123123123__\n    } {foo}\n\n    test {HSETNX target key exists - big hash} {\n        r hsetnx bighash __123123123__ bar\n        set result [r hget bighash __123123123__]\n        r hdel bighash __123123123__\n        set _ $result\n    } {foo}\n\n    test {HSET/HMSET wrong number of args} {\n        assert_error {*wrong number of arguments 
for 'hset' command} {r hset smallhash key1 val1 key2}\n        assert_error {*wrong number of arguments for 'hmset' command} {r hmset smallhash key1 val1 key2}\n    }\n\n    test {HMSET - small hash} {\n        set args {}\n        foreach {k v} [array get smallhash] {\n            set newval [randstring 0 8 alpha]\n            set smallhash($k) $newval\n            lappend args $k $newval\n        }\n        r hmset smallhash {*}$args\n    } {OK}\n\n    test {HMSET - big hash} {\n        set args {}\n        foreach {k v} [array get bighash] {\n            set newval [randstring 0 8 alpha]\n            set bighash($k) $newval\n            lappend args $k $newval\n        }\n        r hmset bighash {*}$args\n    } {OK}\n\n    test {HMGET against non existing key and fields} {\n        set rv {}\n        lappend rv [r hmget doesntexist __123123123__ __456456456__]\n        lappend rv [r hmget smallhash __123123123__ __456456456__]\n        lappend rv [r hmget bighash __123123123__ __456456456__]\n        set _ $rv\n    } {{{} {}} {{} {}} {{} {}}}\n\n    test {Hash commands against wrong type} {\n        r set wrongtype somevalue\n        assert_error \"WRONGTYPE Operation against a key*\" {r hmget wrongtype field1 field2}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hrandfield wrongtype}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hget wrongtype field1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hgetall wrongtype}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hdel wrongtype field1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hincrby wrongtype field1 2}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hincrbyfloat wrongtype field1 2.5}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hstrlen wrongtype field1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hvals wrongtype}\n        assert_error \"WRONGTYPE Operation against a 
key*\" {r hkeys wrongtype}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hexists wrongtype field1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hset wrongtype field1 val1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hmset wrongtype field1 val1 field2 val2}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hsetnx wrongtype field1 val1}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hlen wrongtype}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hscan wrongtype 0}\n        assert_error \"WRONGTYPE Operation against a key*\" {r hgetdel wrongtype fields 1 a}\n    }\n\n    test {HMGET - small hash} {\n        set keys {}\n        set vals {}\n        foreach {k v} [array get smallhash] {\n            lappend keys $k\n            lappend vals $v\n        }\n        set err {}\n        set result [r hmget smallhash {*}$keys]\n        if {$vals ne $result} {\n            set err \"$vals != $result\"\n            break\n        }\n        set _ $err\n    } {}\n\n    test {HMGET - big hash} {\n        set keys {}\n        set vals {}\n        foreach {k v} [array get bighash] {\n            lappend keys $k\n            lappend vals $v\n        }\n        set err {}\n        set result [r hmget bighash {*}$keys]\n        if {$vals ne $result} {\n            set err \"$vals != $result\"\n            break\n        }\n        set _ $err\n    } {}\n\n    test {HKEYS - small hash} {\n        lsort [r hkeys smallhash]\n    } [lsort [array names smallhash *]]\n\n    test {HKEYS - big hash} {\n        lsort [r hkeys bighash]\n    } [lsort [array names bighash *]]\n\n    test {HVALS - small hash} {\n        set vals {}\n        foreach {k v} [array get smallhash] {\n            lappend vals $v\n        }\n        set _ [lsort $vals]\n    } [lsort [r hvals smallhash]]\n\n    test {HVALS - big hash} {\n        set vals {}\n        foreach {k v} [array get bighash] {\n            
lappend vals $v\n        }\n        set _ [lsort $vals]\n    } [lsort [r hvals bighash]]\n\n    test {HGETALL - small hash} {\n        lsort [r hgetall smallhash]\n    } [lsort [array get smallhash]]\n\n    test {HGETALL - big hash} {\n        lsort [r hgetall bighash]\n    } [lsort [array get bighash]]\n\n    test {HGETALL against non-existing key} {\n        r del htest\n        r hgetall htest\n    } {}\n\n    test {HDEL and return value} {\n        set rv {}\n        lappend rv [r hdel smallhash nokey]\n        lappend rv [r hdel bighash nokey]\n        set k [lindex [array names smallhash *] 0]\n        lappend rv [r hdel smallhash $k]\n        lappend rv [r hdel smallhash $k]\n        lappend rv [r hget smallhash $k]\n        unset smallhash($k)\n        set k [lindex [array names bighash *] 0]\n        lappend rv [r hdel bighash $k]\n        lappend rv [r hdel bighash $k]\n        lappend rv [r hget bighash $k]\n        unset bighash($k)\n        set _ $rv\n    } {0 0 1 0 {} 1 0 {}}\n\n    test {HDEL - more than a single value} {\n        set rv {}\n        r del myhash\n        r hmset myhash a 1 b 2 c 3\n        assert_equal 0 [r hdel myhash x y]\n        assert_equal 2 [r hdel myhash a c f]\n        r hgetall myhash\n    } {b 2}\n\n    test {HDEL - hash becomes empty before deleting all specified fields} {\n        r del myhash\n        r hmset myhash a 1 b 2 c 3\n        assert_equal 3 [r hdel myhash a b c d e]\n        assert_equal 0 [r exists myhash]\n    }\n\n    test {HEXISTS} {\n        set rv {}\n        set k [lindex [array names smallhash *] 0]\n        lappend rv [r hexists smallhash $k]\n        lappend rv [r hexists smallhash nokey]\n        set k [lindex [array names bighash *] 0]\n        lappend rv [r hexists bighash $k]\n        lappend rv [r hexists bighash nokey]\n    } {1 0 1 0}\n\n    test {Is a ziplist encoded Hash promoted on big payload?} {\n        r hset smallhash foo [string repeat a 1024]\n        r debug object smallhash\n    } 
{*hashtable*} {needs:debug}\n\n    test {HINCRBY against non existing database key} {\n        r del htest\n        list [r hincrby htest foo 2]\n    } {2}\n\n    test {HINCRBY HINCRBYFLOAT against non-integer increment value} {\n        r del incrhash\n        r hset incrhash field 5\n        assert_error \"*value is not an integer*\" {r hincrby incrhash field v}\n        assert_error \"*value is not a*\" {r hincrbyfloat incrhash field v}\n    }\n\n    test {HINCRBY against non existing hash key} {\n        set rv {}\n        r hdel smallhash tmp\n        r hdel bighash tmp\n        lappend rv [r hincrby smallhash tmp 2]\n        lappend rv [r hget smallhash tmp]\n        lappend rv [r hincrby bighash tmp 2]\n        lappend rv [r hget bighash tmp]\n    } {2 2 2 2}\n\n    test {HINCRBY against hash key created by hincrby itself} {\n        set rv {}\n        lappend rv [r hincrby smallhash tmp 3]\n        lappend rv [r hget smallhash tmp]\n        lappend rv [r hincrby bighash tmp 3]\n        lappend rv [r hget bighash tmp]\n    } {5 5 5 5}\n\n    test {HINCRBY against hash key originally set with HSET} {\n        r hset smallhash tmp 100\n        r hset bighash tmp 100\n        list [r hincrby smallhash tmp 2] [r hincrby bighash tmp 2]\n    } {102 102}\n\n    test {HINCRBY over 32bit value} {\n        r hset smallhash tmp 17179869184\n        r hset bighash tmp 17179869184\n        list [r hincrby smallhash tmp 1] [r hincrby bighash tmp 1]\n    } {17179869185 17179869185}\n\n    test {HINCRBY over 32bit value with over 32bit increment} {\n        r hset smallhash tmp 17179869184\n        r hset bighash tmp 17179869184\n        list [r hincrby smallhash tmp 17179869184] [r hincrby bighash tmp 17179869184]\n    } {34359738368 34359738368}\n\n    test {HINCRBY fails against hash value with spaces (left)} {\n        r hset smallhash str \" 11\"\n        r hset bighash str \" 11\"\n        catch {r hincrby smallhash str 1} smallerr\n        catch {r hincrby bighash 
str 1} bigerr\n        set rv {}\n        lappend rv [string match \"ERR *not an integer*\" $smallerr]\n        lappend rv [string match \"ERR *not an integer*\" $bigerr]\n    } {1 1}\n\n    test {HINCRBY fails against hash value with spaces (right)} {\n        r hset smallhash str \"11 \"\n        r hset bighash str \"11 \"\n        catch {r hincrby smallhash str 1} smallerr\n        catch {r hincrby bighash str 1} bigerr\n        set rv {}\n        lappend rv [string match \"ERR *not an integer*\" $smallerr]\n        lappend rv [string match \"ERR *not an integer*\" $bigerr]\n    } {1 1}\n\n    test {HINCRBY can detect overflows} {\n        set e {}\n        r hset hash n -9223372036854775484\n        assert {[r hincrby hash n -1] == -9223372036854775485}\n        catch {r hincrby hash n -10000} e\n        set e\n    } {*overflow*}\n\n    test {HINCRBYFLOAT against non existing database key} {\n        r del htest\n        list [r hincrbyfloat htest foo 2.5]\n    } {2.5}\n\n    test {HINCRBYFLOAT against non existing hash key} {\n        set rv {}\n        r hdel smallhash tmp\n        r hdel bighash tmp\n        lappend rv [roundFloat [r hincrbyfloat smallhash tmp 2.5]]\n        lappend rv [roundFloat [r hget smallhash tmp]]\n        lappend rv [roundFloat [r hincrbyfloat bighash tmp 2.5]]\n        lappend rv [roundFloat [r hget bighash tmp]]\n    } {2.5 2.5 2.5 2.5}\n\n    test {HINCRBYFLOAT against hash key created by hincrby itself} {\n        set rv {}\n        lappend rv [roundFloat [r hincrbyfloat smallhash tmp 3.5]]\n        lappend rv [roundFloat [r hget smallhash tmp]]\n        lappend rv [roundFloat [r hincrbyfloat bighash tmp 3.5]]\n        lappend rv [roundFloat [r hget bighash tmp]]\n    } {6 6 6 6}\n\n    test {HINCRBYFLOAT against hash key originally set with HSET} {\n        r hset smallhash tmp 100\n        r hset bighash tmp 100\n        list [roundFloat [r hincrbyfloat smallhash tmp 2.5]] \\\n             [roundFloat [r hincrbyfloat bighash 
tmp 2.5]]\n    } {102.5 102.5}\n\n    test {HINCRBYFLOAT over 32bit value} {\n        r hset smallhash tmp 17179869184\n        r hset bighash tmp 17179869184\n        list [r hincrbyfloat smallhash tmp 1] \\\n             [r hincrbyfloat bighash tmp 1]\n    } {17179869185 17179869185}\n\n    test {HINCRBYFLOAT over 32bit value with over 32bit increment} {\n        r hset smallhash tmp 17179869184\n        r hset bighash tmp 17179869184\n        list [r hincrbyfloat smallhash tmp 17179869184] \\\n             [r hincrbyfloat bighash tmp 17179869184]\n    } {34359738368 34359738368}\n\n    test {HINCRBYFLOAT fails against hash value with spaces (left)} {\n        r hset smallhash str \" 11\"\n        r hset bighash str \" 11\"\n        catch {r hincrbyfloat smallhash str 1} smallerr\n        catch {r hincrbyfloat bighash str 1} bigerr\n        set rv {}\n        lappend rv [string match \"ERR *not*float*\" $smallerr]\n        lappend rv [string match \"ERR *not*float*\" $bigerr]\n    } {1 1}\n\n    test {HINCRBYFLOAT fails against hash value with spaces (right)} {\n        r hset smallhash str \"11 \"\n        r hset bighash str \"11 \"\n        catch {r hincrbyfloat smallhash str 1} smallerr\n        catch {r hincrbyfloat bighash str 1} bigerr\n        set rv {}\n        lappend rv [string match \"ERR *not*float*\" $smallerr]\n        lappend rv [string match \"ERR *not*float*\" $bigerr]\n    } {1 1}\n\n    test {HINCRBYFLOAT fails against hash value that contains a null-terminator in the middle} {\n        r hset h f \"1\\x002\"\n        catch {r hincrbyfloat h f 1} err\n        set rv {}\n        lappend rv [string match \"ERR *not*float*\" $err]\n    } {1}\n\n    test {HSTRLEN against the small hash} {\n        set err {}\n        foreach k [array names smallhash *] {\n            if {[string length $smallhash($k)] ne [r hstrlen smallhash $k]} {\n                set err \"[string length $smallhash($k)] != [r hstrlen smallhash $k]\"\n                break\n       
     }\n        }\n        set _ $err\n    } {}\n\n    test {HSTRLEN against the big hash} {\n        set err {}\n        foreach k [array names bighash *] {\n            if {[string length $bighash($k)] ne [r hstrlen bighash $k]} {\n                set err \"[string length $bighash($k)] != [r hstrlen bighash $k]\"\n                puts \"HSTRLEN and logical length mismatch:\"\n                puts \"key: $k\"\n                puts \"Logical content: $bighash($k)\"\n                puts \"Server  content: [r hget bighash $k]\"\n            }\n        }\n        set _ $err\n    } {}\n\n    test {HSTRLEN against non existing field} {\n        set rv {}\n        lappend rv [r hstrlen smallhash __123123123__]\n        lappend rv [r hstrlen bighash __123123123__]\n        set _ $rv\n    } {0 0}\n\n    test {HSTRLEN corner cases} {\n        set vals {\n            -9223372036854775808 9223372036854775807 9223372036854775808\n            {} 0 -1 x\n        }\n        foreach v $vals {\n            r hmset smallhash field $v\n            r hmset bighash field $v\n            set len1 [string length $v]\n            set len2 [r hstrlen smallhash field]\n            set len3 [r hstrlen bighash field]\n            assert {$len1 == $len2}\n            assert {$len2 == $len3}\n        }\n    }\n\n    test {HINCRBYFLOAT over hash-max-listpack-value encoded with a listpack} {\n        set original_max_value [lindex [r config get hash-max-ziplist-value] 1]\n        r config set hash-max-listpack-value 8\n        \n        # hash's value exceeds hash-max-listpack-value\n        r del smallhash\n        r del bighash\n        r hset smallhash tmp 0\n        r hset bighash tmp 0\n        r hincrbyfloat smallhash tmp 0.000005\n        r hincrbyfloat bighash tmp 0.0000005\n        assert_encoding listpack smallhash\n        assert_encoding hashtable bighash\n\n        # hash's field exceeds hash-max-listpack-value\n        r del smallhash\n        r del bighash\n        r hincrbyfloat 
smallhash abcdefgh 1\n        r hincrbyfloat bighash abcdefghi 1\n        assert_encoding listpack smallhash\n        assert_encoding hashtable bighash\n\n        r config set hash-max-listpack-value $original_max_value\n    }\n\n    test {HGETDEL input validation} {\n        r del key1\n        assert_error \"*wrong number of arguments*\" {r hgetdel}\n        assert_error \"*wrong number of arguments*\" {r hgetdel key1}\n        assert_error \"*wrong number of arguments*\" {r hgetdel key1 FIELDS}\n        assert_error \"*wrong number of arguments*\" {r hgetdel key1 FIELDS 0}\n        assert_error \"*wrong number of arguments*\" {r hgetdel key1 FIELDX}\n        assert_error \"*argument FIELDS is missing*\" {r hgetdel key1 XFIELDX 1 a}\n        assert_error \"*numfields*parameter*must match*number of arguments*\" {r hgetdel key1 FIELDS 2 a}\n        assert_error \"*numfields*parameter*must match*number of arguments*\" {r hgetdel key1 FIELDS 2 a b c}\n        assert_error \"*Number of fields must be a positive integer*\" {r hgetdel key1 FIELDS 0 a}\n        assert_error \"*Number of fields must be a positive integer*\" {r hgetdel key1 FIELDS -1 a}\n        assert_error \"*Number of fields must be a positive integer*\" {r hgetdel key1 FIELDS b a}\n        assert_error \"*Number of fields must be a positive integer*\" {r hgetdel key1 FIELDS 9223372036854775808 a}\n    }\n\n    foreach type {listpack ht} {\n        set orig_config [lindex [r config get hash-max-listpack-entries] 1]\n        r del key1\n\n        if {$type == \"listpack\"} {\n            r config set hash-max-listpack-entries $orig_config\n            r hset key1 f1 1 f2 2 f3 3 strfield strval\n            assert_encoding listpack key1\n        } else {\n            r config set hash-max-listpack-entries 0\n            r hset key1 f1 1 f2 2 f3 3 strfield strval\n            assert_encoding hashtable key1\n        }\n\n        test {HGETDEL basic test} {\n            r del key1\n            r hset key1 f1 
1 f2 2 f3 3 strfield strval\n            assert_equal [r hgetdel key1 fields 1 f2] 2\n            assert_equal [r hlen key1] 3\n            assert_equal [r hget key1 f1] 1\n            assert_equal [r hget key1 f2] \"\"\n            assert_equal [r hget key1 f3] 3\n            assert_equal [r hget key1 strfield] strval\n\n            assert_equal [r hgetdel key1 fields 1 f1] 1\n            assert_equal [lsort [r hgetall key1]] [lsort \"f3 3 strfield strval\"]\n            assert_equal [r hgetdel key1 fields 1 f3] 3\n            assert_equal [r hgetdel key1 fields 1 strfield] strval\n            assert_equal [r hgetall key1] \"\"\n            assert_equal [r exists key1] 0\n        }\n\n        test {HGETDEL test with non existing fields} {\n             r del key1\n             r hset key1 f1 1 f2 2 f3 3\n             assert_equal [r hgetdel key1 fields 4 x1 x2 x3 x4] \"{} {} {} {}\"\n             assert_equal [r hgetdel key1 fields 4 x1 x2 f3 x4] \"{} {} 3 {}\"\n             assert_equal [lsort [r hgetall key1]] [lsort \"f1 1 f2 2\"]\n             assert_equal [r hgetdel key1 fields 3 f1 f2 f3] \"1 2 {}\"\n             assert_equal [r hgetdel key1 fields 3 f1 f2 f3] \"{} {} {}\"\n        }\n\n        r config set hash-max-listpack-entries $orig_config\n    }\n\n    test {HGETDEL propagated as HDEL command to replica} {\n        set repl [attach_to_replication_stream]\n        r hset key1 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5\n        r hgetdel key1 fields 1 f1\n        r hgetdel key1 fields 2 f2 f3\n\n        # make sure non-existing fields are not replicated\n        r hgetdel key1 fields 2 f7 f8\n\n        # delete more\n        r hgetdel key1 fields 3 f4 f5 f6\n\n        assert_replication_stream $repl {\n            {select *}\n            {hset key1 f1 v1 f2 v2 f3 v3 f4 v4 f5 v5}\n            {hdel key1 f1}\n            {hdel key1 f2 f3}\n            {hdel key1 f4 f5 f6}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {Hash 
ziplist regression test for large keys} {\n        r hset hash kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk a\n        r hset hash kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk b\n        r hget hash kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk\n    } {b}\n\n    foreach size {10 512} {\n        test \"Hash fuzzing #1 - $size fields\" {\n            for {set times 0} {$times < 10} {incr times} {\n                catch {unset hash}\n                array set hash {}\n                r del hash\n\n                # Create\n                for {set j 0} {$j < $size} {incr j} {\n                    set field [randomValue]\n                    set value [randomValue]\n                    r hset hash $field $value\n                    set hash($field) $value\n                }\n\n                # Verify\n                foreach {k v} [array get hash] {\n                    assert_equal $v [r hget hash $k]\n                }\n                assert_equal [array size hash] [r hlen hash]\n            }\n        }\n\n        test \"Hash fuzzing #2 - $size fields\" {\n            for {set times 0} {$times < 10} {incr 
times} {\n                catch {unset hash}\n                array set hash {}\n                r del hash\n\n                # Create\n                for {set j 0} {$j < $size} {incr j} {\n                    randpath {\n                        set field [randomValue]\n                        set value [randomValue]\n                        r hset hash $field $value\n                        set hash($field) $value\n                    } {\n                        set field [randomSignedInt 512]\n                        set value [randomSignedInt 512]\n                        r hset hash $field $value\n                        set hash($field) $value\n                    } {\n                        randpath {\n                            set field [randomValue]\n                        } {\n                            set field [randomSignedInt 512]\n                        }\n                        r hdel hash $field\n                        unset -nocomplain hash($field)\n                    }\n                }\n\n                # Verify\n                foreach {k v} [array get hash] {\n                    assert_equal $v [r hget hash $k]\n                }\n                assert_equal [array size hash] [r hlen hash]\n            }\n        }\n    }\n\n    test {Stress test the hash ziplist -> hashtable encoding conversion} {\n        r config set hash-max-ziplist-entries 32\n        for {set j 0} {$j < 100} {incr j} {\n            r del myhash\n            for {set i 0} {$i < 64} {incr i} {\n                r hset myhash [randomValue] [randomValue]\n            }\n            assert_encoding hashtable myhash\n        }\n    }\n\n    # The following test can only be executed if we don't use Valgrind, and if\n    # we are using x86_64 architecture, because:\n    #\n    # 1) Valgrind has floating point limitations, no support for 80 bits math.\n    # 2) Other archs may have the same limits.\n    #\n    # 1.23 cannot be represented correctly with 64 bit 
doubles, so we skip\n    # the test, since we are only testing pretty printing here and it is not\n    # a bug if the program outputs things like 1.299999...\n    if {!$::valgrind && [string match *x86_64* [exec uname -a]]} {\n        test {Test HINCRBYFLOAT for correct float representation (issue #2846)} {\n            r del myhash\n            assert {[r hincrbyfloat myhash float 1.23] eq {1.23}}\n            assert {[r hincrbyfloat myhash float 0.77] eq {2}}\n            assert {[r hincrbyfloat myhash float -0.1] eq {1.9}}\n        }\n    }\n\n    test {Hash ziplist of various encodings} {\n        r del k\n        config_set hash-max-ziplist-entries 1000000000\n        config_set hash-max-ziplist-value 1000000000\n        r hset k ZIP_INT_8B 127\n        r hset k ZIP_INT_16B 32767\n        r hset k ZIP_INT_32B 2147483647\n        r hset k ZIP_INT_64B 9223372036854775808\n        r hset k ZIP_INT_IMM_MIN 0\n        r hset k ZIP_INT_IMM_MAX 12\n        r hset k ZIP_STR_06B [string repeat x 31]\n        r hset k ZIP_STR_14B [string repeat x 8191]\n        r hset k ZIP_STR_32B [string repeat x 65535]\n        set k [r hgetall k]\n        set dump [r dump k]\n\n        # will be converted to dict at RESTORE\n        config_set hash-max-ziplist-entries 2\n        config_set sanitize-dump-payload no mayfail\n        r restore kk 0 $dump\n        set kk [r hgetall kk]\n\n        # make sure the values are right\n        assert_equal [lsort $k] [lsort $kk]\n        assert_equal [dict get $k ZIP_STR_06B] [string repeat x 31]\n        set k [dict remove $k ZIP_STR_06B]\n        assert_equal [dict get $k ZIP_STR_14B] [string repeat x 8191]\n        set k [dict remove $k ZIP_STR_14B]\n        assert_equal [dict get $k ZIP_STR_32B] [string repeat x 65535]\n        set k [dict remove $k ZIP_STR_32B]\n        set _ $k\n    } {ZIP_INT_8B 127 ZIP_INT_16B 32767 ZIP_INT_32B 2147483647 ZIP_INT_64B 9223372036854775808 ZIP_INT_IMM_MIN 0 ZIP_INT_IMM_MAX 12}\n\n    test {Hash ziplist of 
various encodings - sanitize dump} {\n        config_set sanitize-dump-payload yes mayfail\n        r restore kk 0 $dump replace\n        set k [r hgetall k]\n        set kk [r hgetall kk]\n\n        # make sure the values are right\n        assert_equal [lsort $k] [lsort $kk]\n        assert_equal [dict get $k ZIP_STR_06B] [string repeat x 31]\n        set k [dict remove $k ZIP_STR_06B]\n        assert_equal [dict get $k ZIP_STR_14B] [string repeat x 8191]\n        set k [dict remove $k ZIP_STR_14B]\n        assert_equal [dict get $k ZIP_STR_32B] [string repeat x 65535]\n        set k [dict remove $k ZIP_STR_32B]\n        set _ $k\n    } {ZIP_INT_8B 127 ZIP_INT_16B 32767 ZIP_INT_32B 2147483647 ZIP_INT_64B 9223372036854775808 ZIP_INT_IMM_MIN 0 ZIP_INT_IMM_MAX 12}\n\n    test {KEYS command returns expired keys when allow_access_expired is 1} {\n        r flushall\n        r debug set-allow-access-expired 1\n        r debug set-active-expire 0\n        r set key1 value1\n        r pexpire key1 1\n        after 2\n        assert_equal {key1} [r keys *]\n        r debug set-allow-access-expired 0\n        r debug set-active-expire 1\n    } {OK} {needs:debug}\n\n    # On some platforms strtold(\"+inf\") with valgrind returns a non-inf result\n    if {!$::valgrind} {\n        test {HINCRBYFLOAT does not allow NaN or Infinity} {\n            assert_error \"*value is NaN or Infinity*\" {r hincrbyfloat hfoo field +inf}\n            assert_equal 0 [r exists hfoo]\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/incr.tcl",
    "content": "start_server {tags {\"incr\"}} {\n    test {INCR against non existing key} {\n        set res {}\n        append res [r incr novar]\n        append res [r get novar]\n    } {11}\n\n    test {INCR against key created by incr itself} {\n        r incr novar\n    } {2}\n\n    test {DECR against key created by incr} {\n        r decr novar\n    } {1}\n\n    test {DECR against non existing key and then incr} {\n        r del novar_not_exist\n        assert_equal {-1} [r decr novar_not_exist]\n        assert_equal {0} [r incr novar_not_exist]\n    }\n\n    test {INCR against key originally set with SET} {\n        r set novar 100\n        r incr novar\n    } {101}\n\n    test {INCR over 32bit value} {\n        r set novar 17179869184\n        r incr novar\n    } {17179869185}\n\n    test {INCRBY over 32bit value with over 32bit increment} {\n        r set novar 17179869184\n        r incrby novar 17179869184\n    } {34359738368}\n\n    test {INCR fails against key with spaces (left)} {\n        r set novar \"    11\"\n        catch {r incr novar} err\n        format $err\n    } {ERR*}\n\n    test {INCR fails against key with spaces (right)} {\n        r set novar \"11    \"\n        catch {r incr novar} err\n        format $err\n    } {ERR*}\n\n    test {INCR fails against key with spaces (both)} {\n        r set novar \"    11    \"\n        catch {r incr novar} err\n        format $err\n    } {ERR*}\n\n    test {DECRBY negation overflow} {\n        r set x 0\n        catch {r decrby x -9223372036854775808} err\n        format $err\n    } {ERR*}\n\n    test {INCR fails against a key holding a list} {\n        r rpush mylist 1\n        catch {r incr mylist} err\n        r rpop mylist\n        format $err\n    } {WRONGTYPE*}\n\n    test {DECRBY over 32bit value with over 32bit increment, negative res} {\n        r set novar 17179869184\n        r decrby novar 17179869185\n    } {-1}\n\n    test {DECRBY against non existing key} {\n        r del key_not_exist\n  
      assert_equal {-1} [r decrby key_not_exist 1]\n    }\n\n    test {INCR does not use shared objects} {\n        r set foo -1\n        r incr foo\n        assert_refcount 1 foo\n        r set foo 9998\n        r incr foo\n        assert_refcount 1 foo\n        r incr foo\n        assert_refcount 1 foo\n    }\n\n    test {INCR can modify objects in-place} {\n        r set foo 20000\n        r incr foo\n        assert_refcount 1 foo\n        set old [lindex [split [r debug object foo]] 1]\n        r incr foo\n        set new [lindex [split [r debug object foo]] 1]\n        assert {[string range $old 0 2] eq \"at:\"}\n        assert {[string range $new 0 2] eq \"at:\"}\n        assert {$old eq $new}\n    } {} {needs:debug debug_defrag:skip}\n\n    test {INCRBYFLOAT against non existing key} {\n        r del novar\n        list    [roundFloat [r incrbyfloat novar 1]] \\\n                [roundFloat [r get novar]] \\\n                [roundFloat [r incrbyfloat novar 0.25]] \\\n                [roundFloat [r get novar]]\n    } {1 1 1.25 1.25}\n\n    test {INCRBYFLOAT against key originally set with SET} {\n        r set novar 1.5\n        roundFloat [r incrbyfloat novar 1.5]\n    } {3}\n\n    test {INCRBYFLOAT over 32bit value} {\n        r set novar 17179869184\n        r incrbyfloat novar 1.5\n    } {17179869185.5}\n\n    test {INCRBYFLOAT over 32bit value with over 32bit increment} {\n        r set novar 17179869184\n        r incrbyfloat novar 17179869184\n    } {34359738368}\n\n    test {INCRBYFLOAT fails against key with spaces (left)} {\n        set err {}\n        r set novar \"    11\"\n        catch {r incrbyfloat novar 1.0} err\n        format $err\n    } {ERR *valid*}\n\n    test {INCRBYFLOAT fails against key with spaces (right)} {\n        set err {}\n        r set novar \"11    \"\n        catch {r incrbyfloat novar 1.0} err\n        format $err\n    } {ERR *valid*}\n\n    test {INCRBYFLOAT fails against key with spaces (both)} {\n        set err {}\n   
     r set novar \" 11 \"\n        catch {r incrbyfloat novar 1.0} err\n        format $err\n    } {ERR *valid*}\n\n    test {INCRBYFLOAT fails against a key holding a list} {\n        r del mylist\n        set err {}\n        r rpush mylist 1\n        catch {r incrbyfloat mylist 1.0} err\n        r del mylist\n        format $err\n    } {WRONGTYPE*}\n\n    # On some platforms strtold(\"+inf\") with valgrind returns a non-inf result\n    if {!$::valgrind} {\n        test {INCRBYFLOAT does not allow NaN or Infinity} {\n            r set foo 0\n            set err {}\n            catch {r incrbyfloat foo +inf} err\n            set err\n            # p.s. no way I can force NaN to test it from the API because\n            # there is no way to increment / decrement by infinity nor to\n            # perform divisions.\n        } {ERR *would produce*}\n    }\n\n    test {INCRBYFLOAT decrement} {\n        r set foo 1\n        roundFloat [r incrbyfloat foo -1.1]\n    } {-0.1}\n\n    test {string to double with null terminator} {\n        r set foo 1\n        r setrange foo 2 2\n        catch {r incrbyfloat foo 1} err\n        format $err\n    } {ERR *valid*}\n\n    test {No negative zero} {\n        r del foo\n        r incrbyfloat foo [expr double(1)/41]\n        r incrbyfloat foo [expr double(-1)/41]\n        r get foo\n    } {0}\n\n    test {INCRBY INCRBYFLOAT DECRBY against unhappy path} {\n        r del mykeyincr\n        assert_error \"*ERR wrong number of arguments*\" {r incr mykeyincr v}\n        assert_error \"*ERR wrong number of arguments*\" {r decr mykeyincr v}\n        assert_error \"*value is not an integer or out of range*\" {r incrby mykeyincr v}\n        assert_error \"*value is not an integer or out of range*\" {r incrby mykeyincr 1.5}\n        assert_error \"*value is not an integer or out of range*\" {r decrby mykeyincr v}\n        assert_error \"*value is not an integer or out of range*\" {r decrby mykeyincr 1.5}\n        assert_error \"*value is not a 
valid float*\" {r incrbyfloat mykeyincr v}\n    }\n\n    foreach cmd {\"incr\" \"decr\" \"incrby\" \"decrby\"} {\n        test \"$cmd operation should update encoding from raw to int\" {\n            set res {}\n            set expected {1 12}\n            if {[string match {*incr*} $cmd]} {\n                lappend expected 13\n            } else {\n                lappend expected 11\n            }\n\n            r set foo 1\n            assert_encoding \"int\" foo\n            lappend res [r get foo]\n\n            r append foo 2\n            assert_encoding \"raw\" foo\n            lappend res [r get foo]\n\n            if {[string match {*by*} $cmd]} {\n                r $cmd foo 1\n            } else {\n                r $cmd foo\n            }\n            assert_encoding \"int\" foo\n            lappend res [r get foo]\n            assert_equal $res $expected\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/list-2.tcl",
    "content": "start_server {\n    tags {\"list\"}\n    overrides {\n        \"list-max-ziplist-size\" 4\n    }\n} {\n    array set largevalue [generate_largevalue_test_array]\n\n    foreach {type large} [array get largevalue] {\n        tags {\"slow\"} {\n            test \"LTRIM stress testing - $type\" {\n                set mylist {}\n                set startlen 32\n                r del mylist\n\n                # Start with the large value to ensure the\n                # right encoding is used.\n                r rpush mylist $large\n                lappend mylist $large\n\n                for {set i 0} {$i < $startlen} {incr i} {\n                    set str [randomInt 9223372036854775807]\n                    r rpush mylist $str\n                    lappend mylist $str\n                }\n\n                for {set i 0} {$i < 1000} {incr i} {\n                    set min [expr {int(rand()*$startlen)}]\n                    set max [expr {$min+int(rand()*$startlen)}]\n                    set before_len [llength $mylist]\n                    set before_len_r [r llen mylist]\n                    assert_equal $before_len $before_len_r\n                    set mylist [lrange $mylist $min $max]\n                    r ltrim mylist $min $max\n                    assert_equal $mylist [r lrange mylist 0 -1] \"failed trim\"\n\n                    for {set j [r llen mylist]} {$j < $startlen} {incr j} {\n                        set str [randomInt 9223372036854775807]\n                        r rpush mylist $str\n                        lappend mylist $str\n                        assert_equal $mylist [r lrange mylist 0 -1] \"failed append match\"\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/list-3.tcl",
    "content": "proc generate_cmd_on_list_key {key} {\n    set op [randomInt 7]\n    set small_signed_count [expr 5-[randomInt 10]]\n    if {[randomInt 2] == 0} {\n        set ele [randomInt 1000]\n    } else {\n        set ele [string repeat x [randomInt 10000]][randomInt 1000]\n    }\n    switch $op {\n        0 {return \"lpush $key $ele\"}\n        1 {return \"rpush $key $ele\"}\n        2 {return \"lpop $key\"}\n        3 {return \"rpop $key\"}\n        4 {\n            return \"lset $key $small_signed_count $ele\"\n        }\n        5 {\n            set otherele [randomInt 1000]\n            if {[randomInt 2] == 0} {\n                set where before\n            } else {\n                set where after\n            }\n            return \"linsert $key $where $otherele $ele\"\n        }\n        6 {\n            set otherele \"\"\n            catch {\n                set index [randomInt [r llen $key]]\n                set otherele [r lindex $key $index]\n            }\n            return \"lrem $key 1 $otherele\"\n        }\n    }\n}\n\nstart_server {\n    tags {\"list ziplist\"}\n    overrides {\n        \"list-max-ziplist-size\" 16\n    }\n} {\n    test {Explicit regression for a list bug} {\n        set mylist {49376042582 {BkG2o\\pIC]4YYJa9cJ4GWZalG[4tin;1D2whSkCOW`mX;SFXGyS8sedcff3fQI^tgPCC@^Nu1J6o]meM@Lko]t_jRyo<xSJ1oObDYd`ppZuW6P@fS278YaOx=s6lvdFlMbP0[SbkI^Kr\\HBXtuFaA^mDx:yzS4a[skiiPWhT<nNfAf=aQVfclcuwDrfe;iVuKdNvB9kbfq>tK?tH[\\EvWqS]b`o2OCtjg:?nUTwdjpcUm]y:pg5q24q7LlCOwQE^}}\n        r del l\n        r rpush l [lindex $mylist 0]\n        r rpush l [lindex $mylist 1]\n        assert_equal [r lindex l 0] [lindex $mylist 0]\n        assert_equal [r lindex l 1] [lindex $mylist 1]\n    }\n\n    test {Regression for quicklist #3343 bug} {\n        r del mylist\n        r lpush mylist 401\n        r lpush mylist 392\n        r rpush mylist [string repeat x 5105]\"799\"\n        r lset mylist -1 [string repeat x 1014]\"702\"\n        r lpop mylist\n        
r lset mylist -1 [string repeat x 4149]\"852\"\n        r linsert mylist before 401 [string repeat x 9927]\"12\"\n        r lrange mylist 0 -1\n        r ping ; # It's enough if the server is still alive\n    } {PONG}\n\n    test {Check compression with recompress} {\n        r del key\n        config_set list-compress-depth 1\n        config_set list-max-ziplist-size 16\n        r rpush key a\n        r rpush key [string repeat b 50000]\n        r rpush key c\n        r lset key 1 d\n        r rpop key\n        r rpush key [string repeat e 5000]\n        r linsert key before f 1\n        r rpush key g\n        r ping\n    }\n\n    test {Crash due to wrongly recompress after lrem} {\n        r del key\n        config_set list-compress-depth 2\n        r lpush key a\n        r lpush key [string repeat a 5000]\n        r lpush key [string repeat b 5000]\n        r lpush key [string repeat c 5000]\n        r rpush key [string repeat x 10000]\"969\"\n        r rpush key b\n        r lrem key 1 a\n        r rpop key \n        r lrem key 1 [string repeat x 10000]\"969\"\n        r rpush key crash\n        r ping\n    }\n\n    test {LINSERT correctly recompress full quicklistNode after inserting an element before it} {\n        r del key\n        config_set list-compress-depth 1\n        r rpush key b\n        r rpush key c\n        r lset key -1 [string repeat x 8192]\"969\"\n        r lpush key a\n        r rpush key d\n        r linsert key before b f\n        r rpop key\n        r ping\n    }\n\n    test {LINSERT correctly recompress full quicklistNode after inserting an element after it} {\n        r del key\n        config_set list-compress-depth 1\n        r rpush key b\n        r rpush key c\n        r lset key 0 [string repeat x 8192]\"969\"\n        r lpush key a\n        r rpush key d\n        r linsert key after c f\n        r lpop key\n        r ping\n    }\n\nforeach comp {2 1 0} {\n    set cycles 1000\n    if {$::accurate} { set cycles 10000 }\n    config_set 
list-compress-depth $comp\n    \n    test \"Stress tester for #3343-alike bugs comp: $comp\" {\n        r del key\n        set sent {}\n        for {set j 0} {$j < $cycles} {incr j} {\n            catch {\n                set cmd [generate_cmd_on_list_key key]\n                lappend sent $cmd\n\n                # execute the command, we expect commands to fail on syntax errors\n                r {*}$cmd\n            }\n        }\n\n        set print_commands false\n        set crash false\n        if {[catch {r ping}]} {\n            puts \"Server crashed\"\n            set print_commands true\n            set crash true\n        }\n\n        if {!$::external} {\n            # check valgrind and asan report for invalid reads after execute\n            # command so that we have a report that is easier to reproduce\n            set valgrind_errors [find_valgrind_errors [srv 0 stderr] false]\n            set asan_errors [sanitizer_errors_from_file [srv 0 stderr]]\n            if {$valgrind_errors != \"\" || $asan_errors != \"\"} {\n                puts \"valgrind or asan found an issue\"\n                set print_commands true\n            }\n        }\n\n        if {$print_commands} {\n            puts \"violating commands:\"\n            foreach cmd $sent {\n                puts $cmd\n            }\n        }\n\n        assert_equal $crash false\n    }\n} ;# foreach comp\n\n    tags {slow} {\n        test {ziplist implementation: value encoding and backlink} {\n            if {$::accurate} {set iterations 100} else {set iterations 10}\n            for {set j 0} {$j < $iterations} {incr j} {\n                r del l\n                set l {}\n                for {set i 0} {$i < 200} {incr i} {\n                    randpath {\n                        set data [string repeat x [randomInt 100000]]\n                    } {\n                        set data [randomInt 65536]\n                    } {\n                        set data [randomInt 4294967296]\n             
       } {\n                        set data [randomInt 18446744073709551616]\n                    } {\n                        set data -[randomInt 65536]\n                        if {$data eq {-0}} {set data 0}\n                    } {\n                        set data -[randomInt 4294967296]\n                        if {$data eq {-0}} {set data 0}\n                    } {\n                        set data -[randomInt 18446744073709551616]\n                        if {$data eq {-0}} {set data 0}\n                    }\n                    lappend l $data\n                    r rpush l $data\n                }\n                assert_equal [llength $l] [r llen l]\n                # Traverse backward\n                for {set i 199} {$i >= 0} {incr i -1} {\n                    if {[lindex $l $i] ne [r lindex l $i]} {\n                        assert_equal [lindex $l $i] [r lindex l $i]\n                    }\n                }\n            }\n        }\n\n        test {ziplist implementation: encoding stress testing} {\n            for {set j 0} {$j < 200} {incr j} {\n                r del l\n                set l {}\n                set len [randomInt 400]\n                for {set i 0} {$i < $len} {incr i} {\n                    set rv [randomValue]\n                    randpath {\n                        lappend l $rv\n                        r rpush l $rv\n                    } {\n                        set l [concat [list $rv] $l]\n                        r lpush l $rv\n                    }\n                }\n                assert_equal [llength $l] [r llen l]\n                for {set i 0} {$i < $len} {incr i} {\n                    if {[lindex $l $i] ne [r lindex l $i]} {\n                        assert_equal [lindex $l $i] [r lindex l $i]\n                    }\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/list.tcl",
    "content": "#\n# Copyright (c) 2009-Present, Redis Ltd.\n# All rights reserved.\n#\n# Copyright (c) 2024-present, Valkey contributors.\n# All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Portions of this file are available under BSD3 terms; see REDISCONTRIBUTIONS for more information.\n#\n\n# check compression functionality of plain and packed nodes\nstart_server [list overrides [list save \"\"] ] {\n    r config set list-compress-depth 2\n    r config set list-max-ziplist-size 1\n\n    # 3 tests to check compression with plain and packed nodes\n    # 1. using push + insert\n    # 2. using push + insert + trim\n    # 3. using push + insert + set\n\n    foreach {container size} {packed 500 plain 8193} {\n    test \"$container node check compression with insert and pop\" {\n        r flushdb\n        r lpush list1 [string repeat a $size]\n        r lpush list1 [string repeat b $size]\n        r lpush list1 [string repeat c $size]\n        r lpush list1 [string repeat d $size]\n        r linsert list1 after [string repeat d $size] [string repeat e $size]\n        r linsert list1 after [string repeat d $size] [string repeat f $size]\n        r linsert list1 after [string repeat d $size] [string repeat g $size]\n        r linsert list1 after [string repeat d $size] [string repeat j $size]\n        assert_equal [r lpop list1] [string repeat d $size]\n        assert_equal [r lpop list1] [string repeat j $size]\n        assert_equal [r lpop list1] [string repeat g $size]\n        assert_equal [r lpop list1] [string repeat f $size]\n        assert_equal [r lpop list1] [string repeat e $size]\n        assert_equal [r lpop list1] [string repeat c $size]\n        assert_equal [r lpop list1] [string repeat b $size]\n        assert_equal [r lpop list1] [string repeat a $size]\n    };\n\n    test 
\"$container node check compression combined with trim\" {\n        r flushdb\n        r lpush list2 [string repeat a $size]\n        r linsert list2 after  [string repeat a $size] [string repeat b $size]\n        r rpush list2 [string repeat c $size]\n        assert_equal [string repeat b $size] [r lindex list2 1]\n        r LTRIM list2 1 -1\n        r llen list2\n    } {2}\n\n    test \"$container node check compression with lset\" {\n        r flushdb\n        r lpush list3 [string repeat a $size]\n        r LSET list3 0 [string repeat b $size]\n        assert_equal [string repeat b $size] [r lindex list3 0]\n        r lpush list3 [string repeat c $size]\n        r LSET list3 0 [string repeat d $size]\n        assert_equal [string repeat d $size] [r lindex list3 0]\n    }\n    } ;# foreach\n\n    # revert config for external mode tests.\n    r config set list-compress-depth 0\n}\n\n# check functionality of plain nodes using low packed-threshold\nstart_server [list overrides [list save \"\"] ] {\nforeach type {listpack quicklist} {\n    if {$type eq \"listpack\"} {\n        r config set list-max-listpack-size -2\n    } else {\n        r config set list-max-listpack-size 1\n    }\n\n    # basic command check for plain nodes - \"LPUSH & LPOP\"\n    test {Test LPUSH and LPOP on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r lpush lst 9\n        r lpush lst xxxxxxxxxx\n        r lpush lst xxxxxxxxxx\n        assert_encoding $type lst\n        set s0 [s used_memory]\n        assert {$s0 > 10}\n        assert {[r llen lst] == 3}\n        set s0 [r rpop lst]\n        set s1 [r rpop lst]\n        assert {$s0 eq \"9\"}\n        assert {[r llen lst] == 1}\n        r lpop lst\n        assert {[string length $s1] == 10}\n        # check rdb\n        r lpush lst xxxxxxxxxx\n        r lpush lst bb\n        r debug reload\n        assert_equal [r rpop lst] \"xxxxxxxxxx\"\n        r debug quicklist-packed-threshold 0\n    } {OK} 
{needs:debug}\n\n    # basic command check for plain nodes - \"LINDEX & LINSERT\"\n    test {Test LINDEX and LINSERT on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r lpush lst xxxxxxxxxxx\n        r lpush lst 9\n        r lpush lst xxxxxxxxxxx\n        assert_encoding $type lst\n        r linsert lst before \"9\" \"8\"\n        assert {[r lindex lst 1] eq \"8\"}\n        r linsert lst BEFORE \"9\" \"7\"\n        r linsert lst BEFORE \"9\" \"xxxxxxxxxxx\"\n        assert {[r lindex lst 3] eq \"xxxxxxxxxxx\"}\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # basic command check for plain nodes - \"LTRIM\"\n    test {Test LTRIM on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r lpush lst1 9\n        r lpush lst1 xxxxxxxxxxx\n        r lpush lst1 9\n        assert_encoding $type lst1\n        r LTRIM lst1 1 -1\n        assert_equal [r llen lst1] 2\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # basic command check for plain nodes - \"LREM\"\n    test {Test LREM on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r lpush lst one\n        r lpush lst xxxxxxxxxxx\n        assert_encoding $type lst\n        set s0 [s used_memory]\n        assert {$s0 > 10}\n        r lpush lst 9\n        r LREM lst -2 \"one\"\n        assert_equal [r llen lst] 2\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # basic command check for plain nodes - \"LPOS\"\n    test {Test LPOS on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r RPUSH lst \"aa\"\n        r RPUSH lst \"bb\"\n        r RPUSH lst \"cc\"\n        assert_encoding $type lst\n        r LSET lst 0 \"xxxxxxxxxxx\"\n        assert_equal [r LPOS lst \"xxxxxxxxxxx\"] 0\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # basic command check 
for plain nodes - \"LMOVE\"\n    test {Test LMOVE on plain nodes} {\n        r flushdb\n        r debug quicklist-packed-threshold 1b\n        r RPUSH lst2{t} \"aa\"\n        r RPUSH lst2{t} \"bb\"\n        assert_encoding $type lst2{t}\n        r LSET lst2{t} 0 xxxxxxxxxxx\n        r RPUSH lst2{t} \"cc\"\n        r RPUSH lst2{t} \"dd\"\n        r LMOVE lst2{t} lst{t} RIGHT LEFT\n        r LMOVE lst2{t} lst{t} LEFT RIGHT\n        assert_equal [r llen lst{t}] 2\n        assert_equal [r llen lst2{t}] 2\n        assert_equal [r lpop lst2{t}] \"bb\"\n        assert_equal [r lpop lst2{t}] \"cc\"\n        assert_equal [r lpop lst{t}] \"dd\"\n        assert_equal [r lpop lst{t}] \"xxxxxxxxxxx\"\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # testing LSET with combinations of node types\n    # plain->packed, packed->plain, plain->plain, packed->packed\n    test {Test LSET with packed / plain combinations} {\n        r debug quicklist-packed-threshold 5b\n        r RPUSH lst \"aa\"\n        r RPUSH lst \"bb\"\n        assert_encoding $type lst\n        r lset lst 0 [string repeat d 50001]\n        set s1 [r lpop lst]\n        assert_equal $s1 [string repeat d 50001]\n        r RPUSH lst [string repeat f 50001]\n        r lset lst 0 [string repeat e 50001]\n        set s1 [r lpop lst]\n        assert_equal $s1 [string repeat e 50001]\n        r RPUSH lst [string repeat m 50001]\n        r lset lst 0 \"bb\"\n        set s1 [r lpop lst]\n        assert_equal $s1 \"bb\"\n        r RPUSH lst \"bb\"\n        r lset lst 0 \"cc\"\n        set s1 [r lpop lst]\n        assert_equal $s1 \"cc\"\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n\n    # checking LSET in case the ziplist needs to be split\n    test {Test LSET with packed node split in the middle} {\n        set original_config [config_get_set list-max-listpack-size 4]\n        r flushdb\n        r debug quicklist-packed-threshold 5b\n        r RPUSH lst \"aa\"\n        r 
RPUSH lst \"bb\"\n        r RPUSH lst \"cc\"\n        r RPUSH lst \"dd\"\n        r RPUSH lst \"ee\"\n        assert_encoding quicklist lst\n        r lset lst 2 [string repeat e 10]\n        assert_equal [r lpop lst] \"aa\"\n        assert_equal [r lpop lst] \"bb\"\n        assert_equal [r lpop lst] [string repeat e 10]\n        assert_equal [r lpop lst] \"dd\"\n        assert_equal [r lpop lst] \"ee\"\n        r debug quicklist-packed-threshold 0\n        r config set list-max-listpack-size $original_config\n    } {OK} {needs:debug}\n\n\n    # repeating \"plain check LSET with combinations\"\n    # but now with a single item in each ziplist\n    test {Test LSET with packed consisting of only one item} {\n        r flushdb\n        set original_config [config_get_set list-max-ziplist-size 1]\n        r debug quicklist-packed-threshold 1b\n        r RPUSH lst \"aa\"\n        r RPUSH lst \"bb\"\n        r lset lst 0 [string repeat d 50001]\n        set s1 [r lpop lst]\n        assert_equal $s1 [string repeat d 50001]\n        r RPUSH lst [string repeat f 50001]\n        r lset lst 0 [string repeat e 50001]\n        set s1 [r lpop lst]\n        assert_equal $s1 [string repeat e 50001]\n        r RPUSH lst [string repeat m 50001]\n        r lset lst 0 \"bb\"\n        set s1 [r lpop lst]\n        assert_equal $s1 \"bb\"\n        r RPUSH lst \"bb\"\n        r lset lst 0 \"cc\"\n        set s1 [r lpop lst]\n        assert_equal $s1 \"cc\"\n        r debug quicklist-packed-threshold 0\n        r config set list-max-ziplist-size $original_config\n    } {OK} {needs:debug}\n\n    test {Crash due to deleting an entry from a compressed quicklist node} {\n        r flushdb\n        r debug quicklist-packed-threshold 100b\n        set original_config [config_get_set list-compress-depth 1]\n\n        set small_ele [string repeat x 32]\n        set large_ele [string repeat x 100]\n\n        # Push a large element\n        r RPUSH lst $large_ele\n\n        # Insert two elements and keep them in 
the same node\n        r RPUSH lst $small_ele\n        r RPUSH lst $small_ele\n        assert_encoding $type lst\n\n        # When setting position -1 to a large element, we first insert\n        # a large element at the end and then delete its previous element.\n        r LSET lst -1 $large_ele\n        assert_equal \"$large_ele $small_ele $large_ele\" [r LRANGE lst 0 -1]\n\n        r debug quicklist-packed-threshold 0\n        r config set list-compress-depth $original_config\n    } {OK} {needs:debug}\n\n    test {Crash due to wrongly splitting a quicklist node} {\n        r flushdb\n        r debug quicklist-packed-threshold 10b\n\n        r LPUSH lst \"aa\"\n        r LPUSH lst \"bb\"\n        assert_encoding $type lst\n        r LSET lst -2 [string repeat x 10]\n        r RPOP lst\n        assert_equal [string repeat x 10] [r LRANGE lst 0 -1]\n\n        r debug quicklist-packed-threshold 0\n    } {OK} {needs:debug}\n}\n}\n\nrun_solo {list-large-memory} {\nstart_server [list overrides [list save \"\"] ] {\n\n# test if the server supports such large configs (avoid 32 bit builds)\ncatch {\n    r config set proto-max-bulk-len 10000000000 ;#10gb\n    r config set client-query-buffer-limit 10000000000 ;#10gb\n}\nif {[lindex [r config get proto-max-bulk-len] 1] == 10000000000} {\n\n    set str_length 5000000000\n\n    # repeating all the plain node basic checks with 5gb values\n    test {Test LPUSH and LPOP on plain nodes over 4GB} {\n        r flushdb\n        r lpush lst 9\n        r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n        write_big_bulk $str_length;\n        r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n        write_big_bulk $str_length;\n        set s0 [s used_memory]\n        assert {$s0 > $str_length}\n        assert {[r llen lst] == 3}\n        assert_equal [r rpop lst] \"9\"\n        assert_equal [read_big_bulk {r rpop lst}] $str_length\n        assert {[r llen lst] == 1}\n        assert_equal [read_big_bulk {r 
rpop lst}] $str_length\n   } {} {large-memory}\n\n   test {Test LINDEX and LINSERT on plain nodes over 4GB} {\n       r flushdb\n       r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n       write_big_bulk $str_length;\n       r lpush lst 9\n       r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n       write_big_bulk $str_length;\n       r linsert lst before \"9\" \"8\"\n       assert_equal [r lindex lst 1] \"8\"\n       r LINSERT lst BEFORE \"9\" \"7\"\n       r write \"*5\\r\\n\\$7\\r\\nLINSERT\\r\\n\\$3\\r\\nlst\\r\\n\\$6\\r\\nBEFORE\\r\\n\\$3\\r\\n\\\"9\\\"\\r\\n\"\n       write_big_bulk 10;\n       assert_equal [read_big_bulk {r rpop lst}] $str_length\n   } {} {large-memory}\n\n   test {Test LTRIM on plain nodes over 4GB} {\n       r flushdb\n       r lpush lst 9\n       r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n       write_big_bulk $str_length;\n       r lpush lst 9\n       r LTRIM lst 1 -1\n       assert_equal [r llen lst] 2\n       assert_equal [r rpop lst] 9\n       assert_equal [read_big_bulk {r rpop lst}] $str_length\n   } {} {large-memory}\n\n   test {Test LREM on plain nodes over 4GB} {\n       r flushdb\n       r lpush lst one\n       r write \"*3\\r\\n\\$5\\r\\nLPUSH\\r\\n\\$3\\r\\nlst\\r\\n\"\n       write_big_bulk $str_length;\n       r lpush lst 9\n       r LREM lst -2 \"one\"\n       assert_equal [read_big_bulk {r rpop lst}] $str_length\n       r llen lst\n   } {1} {large-memory}\n\n   test {Test LSET on plain nodes over 4GB} {\n       r flushdb\n       r RPUSH lst \"aa\"\n       r RPUSH lst \"bb\"\n       r RPUSH lst \"cc\"\n       r write \"*4\\r\\n\\$4\\r\\nLSET\\r\\n\\$3\\r\\nlst\\r\\n\\$1\\r\\n0\\r\\n\"\n       write_big_bulk $str_length;\n       assert_equal [r rpop lst] \"cc\"\n       assert_equal [r rpop lst] \"bb\"\n       assert_equal [read_big_bulk {r rpop lst}] $str_length\n   } {} {large-memory}\n\n    test {Test LSET on plain nodes with large elements under packed_threshold over 4GB} 
{\n        r flushdb\n        r rpush lst a b c d e\n        for {set i 0} {$i < 5} {incr i} {\n            r write \"*4\\r\\n\\$4\\r\\nlset\\r\\n\\$3\\r\\nlst\\r\\n\\$1\\r\\n$i\\r\\n\"\n            write_big_bulk 1000000000\n        }\n        r ping\n    } {PONG} {large-memory}\n\n    test {Test LSET splits a quicklist node, and then merges} {\n        # Test that when a quicklist node can't be inserted and is split, the split\n        # node merges with the node before it and the `before` node is kept.\n        r flushdb\n        r rpush lst [string repeat \"x\" 4096]\n        r lpush lst a b c d e f g\n        r lpush lst [string repeat \"y\" 4096]\n        # now: [y...]    [g f e d c b a x...]\n        #      (node0)        (node1)\n        # Keep inserting elements into node1 until node1 is split into two\n        # nodes([g] [...]), eventually node0 will merge with the [g] node.\n        # Since node0 is larger, after the merge node0 will be kept and\n        # the [g] node will be deleted.\n        for {set i 7} {$i >= 3} {incr i -1} {\n            r write \"*4\\r\\n\\$4\\r\\nlset\\r\\n\\$3\\r\\nlst\\r\\n\\$1\\r\\n$i\\r\\n\"\n            write_big_bulk 1000000000\n        }\n        assert_equal \"g\" [r lindex lst 1]\n        r ping\n    } {PONG} {large-memory}\n\n    test {Test LSET splits an LZF compressed quicklist node, and then merges} {\n        # Test that when an LZF compressed quicklist node can't be inserted and is split,\n        # the split node merges with the node before it and the split node is kept.\n        r flushdb\n        r config set list-compress-depth 1\n        r lpush lst [string repeat \"x\" 2000]\n        r rpush lst [string repeat \"y\" 7000]\n        r rpush lst a b c d e f g\n        r rpush lst [string repeat \"z\" 8000]\n        r lset lst 0 h\n        # now: [h]     [y... a b c d e f g] [z...]\n        #      node0        node1(LZF)\n        # Keep inserting elements into node1 until node1 is split into two\n        # nodes([y...] 
[...]), eventually node0 will merge with the [y...] node.\n        # Since [y...] node is larger, after the merge node0 will be deleted and\n        # the [y...] node will be kept.\n        for {set i 7} {$i >= 3} {incr i -1} {\n            r write \"*4\\r\\n\\$4\\r\\nlset\\r\\n\\$3\\r\\nlst\\r\\n\\$1\\r\\n$i\\r\\n\"\n            write_big_bulk 1000000000\n        }\n        assert_equal \"h\" [r lindex lst 0]\n        r config set list-compress-depth 0\n        r ping\n    } {PONG} {large-memory}\n\n    test {Test LMOVE on plain nodes over 4GB} {\n       r flushdb\n       r RPUSH lst2{t} \"aa\"\n       r RPUSH lst2{t} \"bb\"\n       r write \"*4\\r\\n\\$4\\r\\nLSET\\r\\n\\$7\\r\\nlst2{t}\\r\\n\\$1\\r\\n0\\r\\n\"\n       write_big_bulk $str_length;\n       r RPUSH lst2{t} \"cc\"\n       r RPUSH lst2{t} \"dd\"\n       r LMOVE lst2{t} lst{t} RIGHT LEFT\n       assert_equal [read_big_bulk {r LMOVE lst2{t} lst{t} LEFT RIGHT}] $str_length\n       assert_equal [r llen lst{t}] 2\n       assert_equal [r llen lst2{t}] 2\n       assert_equal [r lpop lst2{t}] \"bb\"\n       assert_equal [r lpop lst2{t}] \"cc\"\n       assert_equal [r lpop lst{t}] \"dd\"\n       assert_equal [read_big_bulk {r rpop lst{t}}] $str_length\n   } {} {large-memory}\n\n    # restore defaults\n    r config set proto-max-bulk-len 536870912\n    r config set client-query-buffer-limit 1073741824\n\n} ;# skip 32bit builds\n}\n} ;# run_solo\n\nstart_server {\n    tags {\"list\"}\n    overrides {\n        \"list-max-ziplist-size\" -1\n    }\n} {\n    array set largevalue [generate_largevalue_test_array]\n\n    # A helper function to execute either B*POP or BLMPOP* with one input key.\n    proc bpop_command {rd pop key timeout} {\n        if {$pop == \"BLMPOP_LEFT\"} {\n            $rd blmpop $timeout 1 $key left count 1\n        } elseif {$pop == \"BLMPOP_RIGHT\"} {\n            $rd blmpop $timeout 1 $key right count 1\n        } else {\n            $rd $pop $key $timeout\n        }\n    }\n\n    # A helper 
function to execute either B*POP or BLMPOP* with two input keys.\n    proc bpop_command_two_key {rd pop key key2 timeout} {\n        if {$pop == \"BLMPOP_LEFT\"} {\n            $rd blmpop $timeout 2 $key $key2 left count 1\n        } elseif {$pop == \"BLMPOP_RIGHT\"} {\n            $rd blmpop $timeout 2 $key $key2 right count 1\n        } else {\n            $rd $pop $key $key2 $timeout\n        }\n    }\n\n    proc create_listpack {key entries} {\n        r del $key\n        foreach entry $entries { r rpush $key $entry }\n        assert_encoding listpack $key\n    }\n\n    proc create_quicklist {key entries} {\n        r del $key\n        foreach entry $entries { r rpush $key $entry }\n        assert_encoding quicklist $key\n    }\n\nforeach {type large} [array get largevalue] {\n    test \"LPOS basic usage - $type\" {\n        r DEL mylist\n        r RPUSH mylist a b c $large 2 3 c c\n        assert {[r LPOS mylist a] == 0}\n        assert {[r LPOS mylist c] == 2}\n    }\n\n    test {LPOS RANK (positive, negative and zero rank) option} {\n        assert {[r LPOS mylist c RANK 1] == 2}\n        assert {[r LPOS mylist c RANK 2] == 6}\n        assert {[r LPOS mylist c RANK 4] eq \"\"}\n        assert {[r LPOS mylist c RANK -1] == 7}\n        assert {[r LPOS mylist c RANK -2] == 6}\n        assert_error \"*RANK can't be zero: use 1 to start from the first match, 2 from the second ... 
or use negative to start*\" {r LPOS mylist c RANK 0}\n        assert_error \"*value is out of range*\" {r LPOS mylist c RANK -9223372036854775808}\n    }\n\n    test {LPOS COUNT option} {\n        assert {[r LPOS mylist c COUNT 0] == {2 6 7}}\n        assert {[r LPOS mylist c COUNT 1] == {2}}\n        assert {[r LPOS mylist c COUNT 2] == {2 6}}\n        assert {[r LPOS mylist c COUNT 100] == {2 6 7}}\n    }\n\n    test {LPOS COUNT + RANK option} {\n        assert {[r LPOS mylist c COUNT 0 RANK 2] == {6 7}}\n        assert {[r LPOS mylist c COUNT 2 RANK -1] == {7 6}}\n    }\n\n    test {LPOS non existing key} {\n        assert {[r LPOS mylistxxx c COUNT 0 RANK 2] eq {}}\n    }\n\n    test {LPOS no match} {\n        assert {[r LPOS mylist x COUNT 2 RANK -1] eq {}}\n        assert {[r LPOS mylist x RANK -1] eq {}}\n    }\n\n    test {LPOS MAXLEN} {\n        assert {[r LPOS mylist a COUNT 0 MAXLEN 1] == {0}}\n        assert {[r LPOS mylist c COUNT 0 MAXLEN 1] == {}}\n        assert {[r LPOS mylist c COUNT 0 MAXLEN 3] == {2}}\n        assert {[r LPOS mylist c COUNT 0 MAXLEN 3 RANK -1] == {7 6}}\n        assert {[r LPOS mylist c COUNT 0 MAXLEN 7 RANK 2] == {6}}\n    }\n\n    test {LPOS when RANK is greater than matches} {\n        r DEL mylist\n        r LPUSH mylist a\n        assert {[r LPOS mylist b COUNT 10 RANK 5] eq {}}\n    }\n\n    test \"LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - $type\" {\n        # first lpush then rpush\n        r del mylist1\n        assert_equal 1 [r lpush mylist1 $large]\n        assert_encoding $type mylist1\n        assert_equal 2 [r rpush mylist1 b]\n        assert_equal 3 [r rpush mylist1 c]\n        assert_equal 3 [r llen mylist1]\n        assert_equal $large [r lindex mylist1 0]\n        assert_equal b [r lindex mylist1 1]\n        assert_equal c [r lindex mylist1 2]\n        assert_equal {} [r lindex mylist1 3]\n        assert_equal c [r rpop mylist1]\n        assert_equal $large [r lpop mylist1]\n\n        # first rpush then lpush\n     
   r del mylist2\n        assert_equal 1 [r rpush mylist2 $large]\n        assert_equal 2 [r lpush mylist2 b]\n        assert_equal 3 [r lpush mylist2 c]\n        assert_encoding $type mylist2\n        assert_equal 3 [r llen mylist2]\n        assert_equal c [r lindex mylist2 0]\n        assert_equal b [r lindex mylist2 1]\n        assert_equal $large [r lindex mylist2 2]\n        assert_equal {} [r lindex mylist2 3]\n        assert_equal $large [r rpop mylist2]\n        assert_equal c [r lpop mylist2]\n    }\n\n    test \"LPOP/RPOP with wrong number of arguments\" {\n        assert_error {*wrong number of arguments for 'lpop' command} {r lpop key 1 1}\n        assert_error {*wrong number of arguments for 'rpop' command} {r rpop key 2 2}\n    }\n\n    test \"RPOP/LPOP with the optional count argument - $type\" {\n        assert_equal 7 [r lpush listcount aa $large cc dd ee ff gg]\n        assert_equal {gg} [r lpop listcount 1]\n        assert_equal {ff ee} [r lpop listcount 2]\n        assert_equal \"aa $large\" [r rpop listcount 2]\n        assert_equal {cc} [r rpop listcount 1]\n        assert_equal {dd} [r rpop listcount 123]\n        assert_error \"*ERR*range*\" {r lpop forbarqaz -123}\n    }\n}\n\n    proc verify_resp_response {resp response resp2_response resp3_response} {\n        if {$resp == 2} {\n            assert_equal $response $resp2_response\n        } elseif {$resp == 3} {\n            assert_equal $response $resp3_response\n        }\n    }\n\n    foreach resp {3 2} {\n        if {[lsearch $::denytags \"resp3\"] >= 0} {\n            if {$resp == 3} {continue}\n        } elseif {$::force_resp3} {\n            if {$resp == 2} {continue}\n        }\n        r hello $resp\n\n        # Make sure we can distinguish between an empty array and a null response\n        r readraw 1\n\n        test \"LPOP/RPOP with the count 0 returns an empty array in RESP$resp\" {\n            r lpush listcount zero\n            assert_equal {*0} [r lpop listcount 0]\n       
     assert_equal {*0} [r rpop listcount 0]\n        }\n\n        test \"LPOP/RPOP against non existing key in RESP$resp\" {\n            r del non_existing_key\n\n            verify_resp_response $resp [r lpop non_existing_key] {$-1} {_}\n            verify_resp_response $resp [r rpop non_existing_key] {$-1} {_}\n        }\n\n        test \"LPOP/RPOP with <count> against non existing key in RESP$resp\" {\n            r del non_existing_key\n\n            verify_resp_response $resp [r lpop non_existing_key 0] {*-1} {_}\n            verify_resp_response $resp [r lpop non_existing_key 1] {*-1} {_}\n\n            verify_resp_response $resp [r rpop non_existing_key 0] {*-1} {_}\n            verify_resp_response $resp [r rpop non_existing_key 1] {*-1} {_}\n        }\n\n        r readraw 0\n        r hello 2\n    }\n\n    test {Variadic RPUSH/LPUSH} {\n        r del mylist\n        assert_equal 4 [r lpush mylist a b c d]\n        assert_equal 8 [r rpush mylist 0 1 2 3]\n        assert_equal {d c b a 0 1 2 3} [r lrange mylist 0 -1]\n    }\n\n    test {DEL a list} {\n        assert_equal 1 [r del mylist2]\n        assert_equal 0 [r exists mylist2]\n        assert_equal 0 [r llen mylist2]\n    }\n\n    foreach {type large} [array get largevalue] {\n    foreach {pop} {BLPOP BLMPOP_LEFT} {\n        test \"$pop: single existing list - $type\" {\n            set rd [redis_deferring_client]\n            create_$type blist \"a b $large c d\"\n\n            bpop_command $rd $pop blist 1\n            assert_equal {blist a} [$rd read]\n            if {$pop == \"BLPOP\"} {\n                bpop_command $rd BRPOP blist 1\n            } else {\n                bpop_command $rd BLMPOP_RIGHT blist 1\n            }\n            assert_equal {blist d} [$rd read]\n\n            bpop_command $rd $pop blist 1\n            assert_equal {blist b} [$rd read]\n            if {$pop == \"BLPOP\"} {\n                bpop_command $rd BRPOP blist 1\n            } else {\n                bpop_command 
$rd BLMPOP_RIGHT blist 1\n            }\n            assert_equal {blist c} [$rd read]\n\n            assert_equal 1 [r llen blist]\n            $rd close\n        }\n\n        test \"$pop: multiple existing lists - $type\" {\n            set rd [redis_deferring_client]\n            create_$type blist1{t} \"a $large c\"\n            create_$type blist2{t} \"d $large f\"\n\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n            assert_equal {blist1{t} a} [$rd read]\n            if {$pop == \"BLPOP\"} {\n                bpop_command_two_key $rd BRPOP blist1{t} blist2{t} 1\n            } else {\n                bpop_command_two_key $rd BLMPOP_RIGHT blist1{t} blist2{t} 1\n            }\n            assert_equal {blist1{t} c} [$rd read]\n            assert_equal 1 [r llen blist1{t}]\n            assert_equal 3 [r llen blist2{t}]\n\n            bpop_command_two_key $rd $pop blist2{t} blist1{t} 1\n            assert_equal {blist2{t} d} [$rd read]\n            if {$pop == \"BLPOP\"} {\n                bpop_command_two_key $rd BRPOP blist2{t} blist1{t} 1\n            } else {\n                bpop_command_two_key $rd BLMPOP_RIGHT blist2{t} blist1{t} 1\n            }\n            assert_equal {blist2{t} f} [$rd read]\n            assert_equal 1 [r llen blist1{t}]\n            assert_equal 1 [r llen blist2{t}]\n            $rd close\n        }\n\n        test \"$pop: second list has an entry - $type\" {\n            set rd [redis_deferring_client]\n            r del blist1{t}\n            create_$type blist2{t} \"d $large f\"\n\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n            assert_equal {blist2{t} d} [$rd read]\n            if {$pop == \"BLPOP\"} {\n                bpop_command_two_key $rd BRPOP blist1{t} blist2{t} 1\n            } else {\n                bpop_command_two_key $rd BLMPOP_RIGHT blist1{t} blist2{t} 1\n            }\n            assert_equal {blist2{t} f} [$rd read]\n            assert_equal 0 [r llen 
blist1{t}]\n            assert_equal 1 [r llen blist2{t}]\n            $rd close\n        }\n    }\n\n        test \"BRPOPLPUSH - $type\" {\n            r del target{t}\n            r rpush target{t} bar\n\n            set rd [redis_deferring_client]\n            create_$type blist{t} \"a b $large c d\"\n\n            $rd brpoplpush blist{t} target{t} 1\n            assert_equal d [$rd read]\n\n            assert_equal d [r lpop target{t}]\n            assert_equal \"a b $large c\" [r lrange blist{t} 0 -1]\n            $rd close\n        }\n\n        foreach wherefrom {left right} {\n            foreach whereto {left right} {\n                test \"BLMOVE $wherefrom $whereto - $type\" {\n                    r del target{t}\n                    r rpush target{t} bar\n\n                    set rd [redis_deferring_client]\n                    create_$type blist{t} \"a b $large c d\"\n\n                    $rd blmove blist{t} target{t} $wherefrom $whereto 1\n                    set poppedelement [$rd read]\n\n                    if {$wherefrom eq \"right\"} {\n                        assert_equal d $poppedelement\n                        assert_equal \"a b $large c\" [r lrange blist{t} 0 -1]\n                    } else {\n                        assert_equal a $poppedelement\n                        assert_equal \"b $large c d\" [r lrange blist{t} 0 -1]\n                    }\n\n                    if {$whereto eq \"right\"} {\n                        assert_equal $poppedelement [r rpop target{t}]\n                    } else {\n                        assert_equal $poppedelement [r lpop target{t}]\n                    }\n                    $rd close\n                }\n            }\n        }\n    }\n\nforeach {pop} {BLPOP BLMPOP_LEFT} {\n    test \"$pop, LPUSH + DEL should not awake blocked client\" {\n        set rd [redis_deferring_client]\n        r del list\n\n        bpop_command $rd $pop list 0\n        wait_for_blocked_client\n\n        r multi\n        r 
lpush list a\n        r del list\n        r exec\n        r del list\n        r lpush list b\n        assert_equal {list b} [$rd read]\n        $rd close\n    }\n\n    test \"$pop, LPUSH + DEL + SET should not awake blocked client\" {\n        set rd [redis_deferring_client]\n        r del list\n\n        bpop_command $rd $pop list 0\n        wait_for_blocked_client\n\n        r multi\n        r lpush list a\n        r del list\n        r set list foo\n        r exec\n        r del list\n        r lpush list b\n        assert_equal {list b} [$rd read]\n        $rd close\n    }\n}\n\n    test \"BLPOP with same key multiple times should work (issue #801)\" {\n        set rd [redis_deferring_client]\n        r del list1{t} list2{t}\n\n        # Data arriving after the BLPOP.\n        $rd blpop list1{t} list2{t} list2{t} list1{t} 0\n        wait_for_blocked_client\n        r lpush list1{t} a\n        assert_equal [$rd read] {list1{t} a}\n        $rd blpop list1{t} list2{t} list2{t} list1{t} 0\n        wait_for_blocked_client\n        r lpush list2{t} b\n        assert_equal [$rd read] {list2{t} b}\n\n        # Data already there.\n        r lpush list1{t} a\n        r lpush list2{t} b\n        $rd blpop list1{t} list2{t} list2{t} list1{t} 0\n        assert_equal [$rd read] {list1{t} a}\n        $rd blpop list1{t} list2{t} list2{t} list1{t} 0\n        assert_equal [$rd read] {list2{t} b}\n        $rd close\n    }\n\nforeach {pop} {BLPOP BLMPOP_LEFT} {\n    test \"MULTI/EXEC is isolated from the point of view of $pop\" {\n        set rd [redis_deferring_client]\n        r del list\n\n        bpop_command $rd $pop list 0\n        wait_for_blocked_client\n\n        r multi\n        r lpush list a\n        r lpush list b\n        r lpush list c\n        r exec\n        assert_equal {list c} [$rd read]\n        $rd close\n    }\n\n    test \"$pop with variadic LPUSH\" {\n        set rd [redis_deferring_client]\n        r del blist\n        bpop_command $rd $pop blist 0\n     
   wait_for_blocked_client\n        assert_equal 2 [r lpush blist foo bar]\n        assert_equal {blist bar} [$rd read]\n        assert_equal foo [lindex [r lrange blist 0 -1] 0]\n        $rd close\n    }\n}\n\n    test \"BRPOPLPUSH with zero timeout should block indefinitely\" {\n        set rd [redis_deferring_client]\n        r del blist{t} target{t}\n        r rpush target{t} bar\n        $rd brpoplpush blist{t} target{t} 0\n        wait_for_blocked_clients_count 1\n        r rpush blist{t} foo\n        assert_equal foo [$rd read]\n        assert_equal {foo bar} [r lrange target{t} 0 -1]\n        $rd close\n    }\n\n    foreach wherefrom {left right} {\n        foreach whereto {left right} {\n            test \"BLMOVE $wherefrom $whereto with zero timeout should block indefinitely\" {\n                set rd [redis_deferring_client]\n                r del blist{t} target{t}\n                r rpush target{t} bar\n                $rd blmove blist{t} target{t} $wherefrom $whereto 0\n                wait_for_blocked_clients_count 1\n                r rpush blist{t} foo\n                assert_equal foo [$rd read]\n                if {$whereto eq \"right\"} {\n                    assert_equal {bar foo} [r lrange target{t} 0 -1]\n                } else {\n                    assert_equal {foo bar} [r lrange target{t} 0 -1]\n                }\n                $rd close\n            }\n        }\n    }\n\n    foreach wherefrom {left right} {\n        foreach whereto {left right} {\n            test \"BLMOVE ($wherefrom, $whereto) with a client BLPOPing the target list\" {\n                set rd [redis_deferring_client]\n                set rd2 [redis_deferring_client]\n                r del blist{t} target{t}\n                $rd2 blpop target{t} 0\n                wait_for_blocked_clients_count 1\n                $rd blmove blist{t} target{t} $wherefrom $whereto 0\n                wait_for_blocked_clients_count 2\n                r rpush blist{t} foo\n               
 assert_equal foo [$rd read]\n                assert_equal {target{t} foo} [$rd2 read]\n                assert_equal 0 [r exists target{t}]\n                $rd close\n                $rd2 close\n            }\n        }\n    }\n\n    test \"BRPOPLPUSH with wrong source type\" {\n        set rd [redis_deferring_client]\n        r del blist{t} target{t}\n        r set blist{t} nolist\n        $rd brpoplpush blist{t} target{t} 1\n        assert_error \"WRONGTYPE*\" {$rd read}\n        $rd close\n    }\n\n    test \"BRPOPLPUSH with wrong destination type\" {\n        set rd [redis_deferring_client]\n        r del blist{t} target{t}\n        r set target{t} nolist\n        r lpush blist{t} foo\n        $rd brpoplpush blist{t} target{t} 1\n        assert_error \"WRONGTYPE*\" {$rd read}\n        $rd close\n\n        set rd [redis_deferring_client]\n        r del blist{t} target{t}\n        r set target{t} nolist\n        $rd brpoplpush blist{t} target{t} 0\n        wait_for_blocked_clients_count 1\n        r rpush blist{t} foo\n        assert_error \"WRONGTYPE*\" {$rd read}\n        assert_equal {foo} [r lrange blist{t} 0 -1]\n        $rd close\n    }\n\n    test \"BRPOPLPUSH maintains order of elements after failure\" {\n        set rd [redis_deferring_client]\n        r del blist{t} target{t}\n        r set target{t} nolist\n        $rd brpoplpush blist{t} target{t} 0\n        wait_for_blocked_client\n        r rpush blist{t} a b c\n        assert_error \"WRONGTYPE*\" {$rd read}\n        $rd close\n        r lrange blist{t} 0 -1\n    } {a b c}\n\n    test \"BRPOPLPUSH with multiple blocked clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        r del blist{t} target1{t} target2{t}\n        r set target1{t} nolist\n        $rd1 brpoplpush blist{t} target1{t} 0\n        wait_for_blocked_clients_count 1\n        $rd2 brpoplpush blist{t} target2{t} 0\n        wait_for_blocked_clients_count 2\n        r lpush blist{t} foo\n\n 
       assert_error \"WRONGTYPE*\" {$rd1 read}\n        assert_equal {foo} [$rd2 read]\n        assert_equal {foo} [r lrange target2{t} 0 -1]\n        $rd1 close\n        $rd2 close\n    }\n\n    test \"BLMPOP with multiple blocked clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set rd3 [redis_deferring_client]\n        set rd4 [redis_deferring_client]\n        r del blist{t} blist2{t}\n\n        $rd1 blmpop 0 2 blist{t} blist2{t} left count 1\n        wait_for_blocked_clients_count 1\n        $rd2 blmpop 0 2 blist{t} blist2{t} right count 10\n        wait_for_blocked_clients_count 2\n        $rd3 blmpop 0 2 blist{t} blist2{t} left count 10\n        wait_for_blocked_clients_count 3\n        $rd4 blmpop 0 2 blist{t} blist2{t} right count 1\n        wait_for_blocked_clients_count 4\n\n        r multi\n        r lpush blist{t} a b c d e\n        r lpush blist2{t} 1 2 3 4 5\n        r exec\n\n        assert_equal {blist{t} e} [$rd1 read]\n        assert_equal {blist{t} {a b c d}} [$rd2 read]\n        assert_equal {blist2{t} {5 4 3 2 1}} [$rd3 read]\n\n        r lpush blist2{t} 1 2 3\n        assert_equal {blist2{t} 1} [$rd4 read]\n        $rd1 close\n        $rd2 close\n        $rd3 close\n        $rd4 close\n    }\n\n    test \"Linked LMOVEs\" {\n      set rd1 [redis_deferring_client]\n      set rd2 [redis_deferring_client]\n\n      r del list1{t} list2{t} list3{t}\n\n      $rd1 blmove list1{t} list2{t} right left 0\n      wait_for_blocked_clients_count 1\n      $rd2 blmove list2{t} list3{t} left right 0\n      wait_for_blocked_clients_count 2\n\n      r rpush list1{t} foo\n\n      assert_equal {} [r lrange list1{t} 0 -1]\n      assert_equal {} [r lrange list2{t} 0 -1]\n      assert_equal {foo} [r lrange list3{t} 0 -1]\n      $rd1 close\n      $rd2 close\n    }\n\n    test \"Circular BRPOPLPUSH\" {\n      set rd1 [redis_deferring_client]\n      set rd2 [redis_deferring_client]\n\n      r del list1{t} 
list2{t}\n\n      $rd1 brpoplpush list1{t} list2{t} 0\n      wait_for_blocked_clients_count 1\n      $rd2 brpoplpush list2{t} list1{t} 0\n      wait_for_blocked_clients_count 2\n\n      r rpush list1{t} foo\n\n      assert_equal {foo} [r lrange list1{t} 0 -1]\n      assert_equal {} [r lrange list2{t} 0 -1]\n      $rd1 close\n      $rd2 close\n    }\n\n    test \"Self-referential BRPOPLPUSH\" {\n      set rd [redis_deferring_client]\n\n      r del blist{t}\n\n      $rd brpoplpush blist{t} blist{t} 0\n      wait_for_blocked_client\n\n      r rpush blist{t} foo\n\n      assert_equal {foo} [r lrange blist{t} 0 -1]\n      $rd close\n    }\n\n    test \"BRPOPLPUSH inside a transaction\" {\n        r del xlist{t} target{t}\n        r lpush xlist{t} foo\n        r lpush xlist{t} bar\n\n        r multi\n        r brpoplpush xlist{t} target{t} 0\n        r brpoplpush xlist{t} target{t} 0\n        r brpoplpush xlist{t} target{t} 0\n        r lrange xlist{t} 0 -1\n        r lrange target{t} 0 -1\n        r exec\n    } {foo bar {} {} {bar foo}}\n\n    test \"PUSH resulting from BRPOPLPUSH affects WATCH\" {\n        set blocked_client [redis_deferring_client]\n        set watching_client [redis_deferring_client]\n        r del srclist{t} dstlist{t} somekey{t}\n        r set somekey{t} somevalue\n        $blocked_client brpoplpush srclist{t} dstlist{t} 0\n        wait_for_blocked_client\n        $watching_client watch dstlist{t}\n        $watching_client read\n        $watching_client multi\n        $watching_client read\n        $watching_client get somekey{t}\n        $watching_client read\n        r lpush srclist{t} element\n        $watching_client exec\n        set res [$watching_client read]\n        $blocked_client close\n        $watching_client close\n        set _ $res\n    } {}\n\n    test \"BRPOPLPUSH does not affect WATCH while still blocked\" {\n        set blocked_client [redis_deferring_client]\n        set watching_client [redis_deferring_client]\n        r del 
srclist{t} dstlist{t} somekey{t}\n        r set somekey{t} somevalue\n        $blocked_client brpoplpush srclist{t} dstlist{t} 0\n        wait_for_blocked_client\n        $watching_client watch dstlist{t}\n        $watching_client read\n        $watching_client multi\n        $watching_client read\n        $watching_client get somekey{t}\n        $watching_client read\n        $watching_client exec\n        wait_for_condition 100 10 {\n            [regexp {cmd=exec} [r client list]] eq 1\n        } else {\n            fail \"exec did not arrive\"\n        }\n        # Blocked BRPOPLPUSH may create problems, unblock it.\n        r lpush srclist{t} element\n        set res [$watching_client read]\n        $blocked_client close\n        $watching_client close\n        set _ $res\n    } {somevalue}\n\n    test {BRPOPLPUSH timeout} {\n      set rd [redis_deferring_client]\n\n      $rd brpoplpush foo_list{t} bar_list{t} 1\n      wait_for_blocked_clients_count 1\n      wait_for_blocked_clients_count 0 500 10\n      set res [$rd read]\n      $rd close\n      set _ $res\n    } {}\n\n    test {SWAPDB awakes blocked client} {\n        r flushall\n        r select 1\n        r rpush k hello\n        r select 9\n        set rd [redis_deferring_client]\n        $rd brpop k 5\n        wait_for_blocked_clients_count 1\n        r swapdb 1 9\n        $rd read\n    } {k hello} {singledb:skip}\n\n    test {SWAPDB wants to wake blocked client, but the key already expired} {\n        set repl [attach_to_replication_stream]\n        r flushall\n        r debug set-active-expire 0\n        r select 1\n        r rpush k hello\n        r pexpire k 100\n        set rd [redis_deferring_client]\n        $rd deferred 0\n        $rd select 9\n        set id [$rd client id]\n        $rd deferred 1\n        $rd brpop k 1\n        wait_for_blocked_clients_count 1\n        after 101\n        r swapdb 1 9\n        # The SWAPDB command tries to wake the blocked client, but it remains\n        # 
blocked because the key is expired. Check that the deferred client is\n        # still blocked. Then unblock it.\n        assert_match \"*flags=b*\" [r client list id $id]\n        r client unblock $id\n        assert_equal {} [$rd read]\n        $rd deferred 0\n        # We want to force key deletion to be propagated to the replica\n        # in order to verify it was expired on the replication stream.\n        $rd set somekey1 someval1\n        $rd exists k\n        r set somekey2 someval2\n\n        assert_replication_stream $repl {\n            {select *}\n            {flushall}\n            {select 1}\n            {rpush k hello}\n            {pexpireat k *}\n            {swapdb 1 9}\n            {select 9}\n            {set somekey1 someval1}\n            {del k}\n            {select 1}\n            {set somekey2 someval2}\n        }\n        close_replication_stream $repl\n        r debug set-active-expire 1\n        # Restore server and client state\n        r select 9\n    } {OK} {singledb:skip needs:debug}\n\n    test {MULTI + LPUSH + EXPIRE + DEBUG SLEEP on blocked client, key already expired} {\n        set repl [attach_to_replication_stream]\n        r flushall\n        r debug set-active-expire 0\n\n        set rd [redis_deferring_client]\n        $rd client id\n        set id [$rd read]\n        $rd brpop k 0\n        wait_for_blocked_clients_count 1\n\n        r multi\n        r rpush k hello\n        r pexpire k 100\n        r debug sleep 0.2\n        r exec\n\n        # The EXEC command tries to wake the blocked client, but it remains\n        # blocked because the key is expired. Check that the deferred client is\n        # still blocked. 
Then unblock it.\n        assert_match \"*flags=b*\" [r client list id $id]\n        r client unblock $id\n        assert_equal {} [$rd read]\n        # We want to force key deletion to be propagated to the replica\n        # in order to verify it was expired on the replication stream.\n        $rd exists k\n        assert_equal {0} [$rd read]\n        assert_replication_stream $repl {\n            {select *}\n            {flushall}\n            {multi}\n            {rpush k hello}\n            {pexpireat k *}\n            {exec}\n            {del k}\n        }\n        close_replication_stream $repl\n        # Restore server and client state\n        r debug set-active-expire 1\n        r select 9\n    } {OK} {singledb:skip needs:debug}\n\n    test {BLPOP unblock but the key is expired and then block again - reprocessing command} {\n        r flushall\n        r debug set-active-expire 0\n        set rd [redis_deferring_client]\n\n        set start [clock milliseconds]\n        $rd blpop mylist 1\n        wait_for_blocked_clients_count 1\n\n        # The EXEC will try to wake the blocked client, but the key is expired,\n        # so the client will be blocked again during the command reprocessing.\n        r multi\n        r rpush mylist a\n        r pexpire mylist 100\n        r debug sleep 0.2\n        r exec\n\n        assert_equal {} [$rd read]\n        set end [clock milliseconds]\n\n        # Before the fix in #13004, this time would have been 1200+ (i.e. 
more than 1200ms),\n        # now it should be 1000, but in order to avoid timing issues, we increase the range a bit.\n        assert_range [expr $end-$start] 1000 1150\n\n        r debug set-active-expire 1\n        $rd close\n    } {0} {needs:debug}\n\nforeach {pop} {BLPOP BLMPOP_LEFT} {\n    test \"$pop when new key is moved into place\" {\n        set rd [redis_deferring_client]\n        r del foo{t}\n\n        bpop_command $rd $pop foo{t} 0\n        wait_for_blocked_client\n        r lpush bob{t} abc def hij\n        r rename bob{t} foo{t}\n        set res [$rd read]\n        $rd close\n        set _ $res\n    } {foo{t} hij}\n\n    test \"$pop when result key is created by SORT..STORE\" {\n        set rd [redis_deferring_client]\n\n        # zero out list from previous test without explicit delete\n        r lpop foo{t}\n        r lpop foo{t}\n        r lpop foo{t}\n\n        bpop_command $rd $pop foo{t} 5\n        wait_for_blocked_client\n        r lpush notfoo{t} hello hola aguacate konichiwa zanzibar\n        r sort notfoo{t} ALPHA store foo{t}\n        set res [$rd read]\n        $rd close\n        set _ $res\n    } {foo{t} aguacate}\n}\n\n    test \"BLPOP: timeout value out of range\" {\n        # The timeout is parsed as a float, multiplied by 1000, added to mstime(),\n        # and stored in a long long, which might lead to an out-of-range value.\n        # (Even though the given timeout is smaller than LLONG_MAX, the result\n        # will be bigger.)\n        assert_error \"ERR *is out of range*\" {r BLPOP blist1 0x7FFFFFFFFFFFFF}\n    }\n\n    foreach {pop} {BLPOP BRPOP BLMPOP_LEFT BLMPOP_RIGHT} {\n        test \"$pop: with single empty list argument\" {\n            set rd [redis_deferring_client]\n            r del blist1\n            bpop_command $rd $pop blist1 1\n            wait_for_blocked_client\n            r rpush blist1 foo\n            assert_equal {blist1 foo} [$rd read]\n            assert_equal 0 [r exists blist1]\n            $rd 
close\n        }\n\n        test \"$pop: with negative timeout\" {\n            set rd [redis_deferring_client]\n            bpop_command $rd $pop blist1 -1\n            assert_error \"ERR *is negative*\" {$rd read}\n            $rd close\n        }\n\n        test \"$pop: with non-integer timeout\" {\n            set rd [redis_deferring_client]\n            r del blist1\n            bpop_command $rd $pop blist1 0.1\n            r rpush blist1 foo\n            assert_equal {blist1 foo} [$rd read]\n            assert_equal 0 [r exists blist1]\n            $rd close\n        }\n\n        test \"$pop: with zero timeout should block indefinitely\" {\n            # To test this, use a timeout of 0 and wait a second.\n            # The blocking pop should still be waiting for a push.\n            set rd [redis_deferring_client]\n            bpop_command $rd $pop blist1 0\n            wait_for_blocked_client\n            r rpush blist1 foo\n            assert_equal {blist1 foo} [$rd read]\n            $rd close\n        }\n\n        test \"$pop: with 0.001 timeout should not block indefinitely\" {\n            # Use a timeout of 0.001 and wait for the number of blocked clients to equal 0.\n            # Validate the empty read from the deferring client.\n            set rd [redis_deferring_client]\n            bpop_command $rd $pop blist1 0.001\n            wait_for_blocked_clients_count 0\n            assert_equal {} [$rd read]\n            $rd close\n        }\n\n        test \"$pop: second argument is not a list\" {\n            set rd [redis_deferring_client]\n            r del blist1{t} blist2{t}\n            r set blist2{t} nolist{t}\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n            assert_error \"WRONGTYPE*\" {$rd read}\n            $rd close\n        }\n\n        test \"$pop: timeout\" {\n            set rd [redis_deferring_client]\n            r del blist1{t} blist2{t}\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n  
          wait_for_blocked_client\n            assert_equal {} [$rd read]\n            $rd close\n        }\n\n        test \"$pop: arguments are empty\" {\n            set rd [redis_deferring_client]\n            r del blist1{t} blist2{t}\n\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n            wait_for_blocked_client\n            r rpush blist1{t} foo\n            assert_equal {blist1{t} foo} [$rd read]\n            assert_equal 0 [r exists blist1{t}]\n            assert_equal 0 [r exists blist2{t}]\n\n            bpop_command_two_key $rd $pop blist1{t} blist2{t} 1\n            wait_for_blocked_client\n            r rpush blist2{t} foo\n            assert_equal {blist2{t} foo} [$rd read]\n            assert_equal 0 [r exists blist1{t}]\n            assert_equal 0 [r exists blist2{t}]\n            $rd close\n        }\n    }\n\nforeach {pop} {BLPOP BLMPOP_LEFT} {\n    test \"$pop inside a transaction\" {\n        r del xlist\n        r lpush xlist foo\n        r lpush xlist bar\n        r multi\n\n        bpop_command r $pop xlist 0\n        bpop_command r $pop xlist 0\n        bpop_command r $pop xlist 0\n        r exec\n    } {{xlist bar} {xlist foo} {}}\n}\n\n    test {BLMPOP propagate as pop with count command to replica} {\n        set rd [redis_deferring_client]\n        set repl [attach_to_replication_stream]\n\n        # BLMPOP without being blocked.\n        r lpush mylist{t} a b c\n        r rpush mylist2{t} 1 2 3\n        r blmpop 0 1 mylist{t} left count 1\n        r blmpop 0 2 mylist{t} mylist2{t} right count 10\n        r blmpop 0 2 mylist{t} mylist2{t} right count 10\n\n        # BLMPOP that gets blocked.\n        $rd blmpop 0 1 mylist{t} left count 1\n        wait_for_blocked_client\n        r lpush mylist{t} a\n        $rd blmpop 0 2 mylist{t} mylist2{t} left count 5\n        wait_for_blocked_client\n        r lpush mylist{t} a b c\n        $rd blmpop 0 2 mylist{t} mylist2{t} right count 10\n        wait_for_blocked_client\n 
       r rpush mylist2{t} a b c\n\n        # Released on timeout.\n        assert_equal {} [r blmpop 0.01 1 mylist{t} left count 10]\n        r set foo{t} bar ;# something else to propagate after, so we can make sure the above pop didn't.\n\n        $rd close\n\n        assert_replication_stream $repl {\n            {select *}\n            {lpush mylist{t} a b c}\n            {rpush mylist2{t} 1 2 3}\n            {lpop mylist{t} 1}\n            {rpop mylist{t} 2}\n            {rpop mylist2{t} 3}\n            {lpush mylist{t} a}\n            {lpop mylist{t} 1}\n            {lpush mylist{t} a b c}\n            {lpop mylist{t} 3}\n            {rpush mylist2{t} a b c}\n            {rpop mylist2{t} 3}\n            {set foo{t} bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {LPUSHX, RPUSHX - generic} {\n        r del xlist\n        assert_equal 0 [r lpushx xlist a]\n        assert_equal 0 [r llen xlist]\n        assert_equal 0 [r rpushx xlist a]\n        assert_equal 0 [r llen xlist]\n    }\n\n    foreach {type large} [array get largevalue] {\n        test \"LPUSHX, RPUSHX - $type\" {\n            create_$type xlist \"$large c\"\n            assert_equal 3 [r rpushx xlist d]\n            assert_equal 4 [r lpushx xlist a]\n            assert_equal 6 [r rpushx xlist 42 x]\n            assert_equal 9 [r lpushx xlist y3 y2 y1]\n            assert_equal \"y1 y2 y3 a $large c d 42 x\" [r lrange xlist 0 -1]\n        }\n\n        test \"LINSERT - $type\" {\n            create_$type xlist \"a $large c d\"\n            assert_equal 5 [r linsert xlist before c zz] \"before c\"\n            assert_equal \"a $large zz c d\" [r lrange xlist 0 10] \"lrangeA\"\n            assert_equal 6 [r linsert xlist after c yy] \"after c\"\n            assert_equal \"a $large zz c yy d\" [r lrange xlist 0 10] \"lrangeB\"\n            assert_equal 7 [r linsert xlist after d dd] \"after d\"\n            assert_equal -1 [r linsert xlist after bad ddd] \"after 
bad\"\n            assert_equal \"a $large zz c yy d dd\" [r lrange xlist 0 10] \"lrangeC\"\n            assert_equal 8 [r linsert xlist before a aa] \"before a\"\n            assert_equal -1 [r linsert xlist before bad aaa] \"before bad\"\n            assert_equal \"aa a $large zz c yy d dd\" [r lrange xlist 0 10] \"lrangeD\"\n\n            # check inserting integer encoded value\n            assert_equal 9 [r linsert xlist before aa 42] \"before aa\"\n            assert_equal 42 [r lrange xlist 0 0] \"lrangeE\"\n        }\n    }\n\n    test {LINSERT raise error on bad syntax} {\n        catch {[r linsert xlist aft3r aa 42]} e\n        set e\n    } {*ERR*syntax*error*}\n\n    test {LINSERT against non-list value error} {\n        r set k1 v1\n        assert_error {WRONGTYPE Operation against a key holding the wrong kind of value*} {r linsert k1 after 0 0}\n    }\n\n    test {LINSERT against non existing key} {\n        assert_equal 0 [r linsert not-a-key before 0 0]\n    }\n\nforeach type {listpack quicklist} {\n    foreach {num} {250 500} {\n        if {$type == \"quicklist\"} {\n            set origin_config [config_get_set list-max-listpack-size 5]\n        } else {\n            set origin_config [config_get_set list-max-listpack-size -1]\n        }\n\n        proc check_numbered_list_consistency {key} {\n            set len [r llen $key]\n            for {set i 0} {$i < $len} {incr i} {\n                assert_equal $i [r lindex $key $i]\n                assert_equal [expr $len-1-$i] [r lindex $key [expr (-$i)-1]]\n            }\n        }\n\n        proc check_random_access_consistency {key} {\n            set len [r llen $key]\n            for {set i 0} {$i < $len} {incr i} {\n                set rint [expr int(rand()*$len)]\n                assert_equal $rint [r lindex $key $rint]\n                assert_equal [expr $len-1-$rint] [r lindex $key [expr (-$rint)-1]]\n            }\n        }\n\n        test \"LINDEX consistency test - $type\" {\n            r 
del mylist\n            for {set i 0} {$i < $num} {incr i} {\n                r rpush mylist $i\n            }\n            assert_encoding $type mylist\n            check_numbered_list_consistency mylist\n        }\n\n        test \"LINDEX random access - $type\" {\n            assert_encoding $type mylist\n            check_random_access_consistency mylist\n        }\n\n        test \"Check if list is still ok after a DEBUG RELOAD - $type\" {\n            r debug reload\n            assert_encoding $type mylist\n            check_numbered_list_consistency mylist\n            check_random_access_consistency mylist\n        } {} {needs:debug}\n\n        config_set list-max-listpack-size $origin_config\n    }\n}\n\n    test {LLEN against non-list value error} {\n        r del mylist\n        r set mylist foobar\n        assert_error WRONGTYPE* {r llen mylist}\n    }\n\n    test {LLEN against non existing key} {\n        assert_equal 0 [r llen not-a-key]\n    }\n\n    test {LINDEX against non-list value error} {\n        assert_error WRONGTYPE* {r lindex mylist 0}\n    }\n\n    test {LINDEX against non existing key} {\n        assert_equal \"\" [r lindex not-a-key 10]\n    }\n\n    test {LPUSH against non-list value error} {\n        assert_error WRONGTYPE* {r lpush mylist 0}\n    }\n\n    test {RPUSH against non-list value error} {\n        assert_error WRONGTYPE* {r rpush mylist 0}\n    }\n\n    foreach {type large} [array get largevalue] {\n        test \"RPOPLPUSH base case - $type\" {\n            r del mylist1{t} mylist2{t}\n            create_$type mylist1{t} \"a $large c d\"\n            assert_equal d [r rpoplpush mylist1{t} mylist2{t}]\n            assert_equal c [r rpoplpush mylist1{t} mylist2{t}]\n            assert_equal $large [r rpoplpush mylist1{t} mylist2{t}]\n            assert_equal \"a\" [r lrange mylist1{t} 0 -1]\n            assert_equal \"$large c d\" [r lrange mylist2{t} 0 -1]\n            assert_encoding listpack mylist1{t} ;# converted to 
listpack after shrinking\n            assert_encoding $type mylist2{t}\n        }\n\n        foreach wherefrom {left right} {\n            foreach whereto {left right} {\n                test \"LMOVE $wherefrom $whereto base case - $type\" {\n                    r del mylist1{t} mylist2{t}\n\n                    if {$wherefrom eq \"right\"} {\n                        create_$type mylist1{t} \"c d $large a\"\n                    } else {\n                        create_$type mylist1{t} \"a $large c d\"\n                    }\n                    assert_equal a [r lmove mylist1{t} mylist2{t} $wherefrom $whereto]\n                    assert_equal $large [r lmove mylist1{t} mylist2{t} $wherefrom $whereto]\n                    assert_equal \"c d\" [r lrange mylist1{t} 0 -1]\n                    if {$whereto eq \"right\"} {\n                        assert_equal \"a $large\" [r lrange mylist2{t} 0 -1]\n                    } else {\n                        assert_equal \"$large a\" [r lrange mylist2{t} 0 -1]\n                    }\n                    assert_encoding $type mylist2{t}\n                }\n            }\n        }\n\n        test \"RPOPLPUSH with the same list as src and dst - $type\" {\n            create_$type mylist{t} \"a $large c\"\n            assert_equal \"a $large c\" [r lrange mylist{t} 0 -1]\n            assert_equal c [r rpoplpush mylist{t} mylist{t}]\n            assert_equal \"c a $large\" [r lrange mylist{t} 0 -1]\n        }\n\n        foreach wherefrom {left right} {\n            foreach whereto {left right} {\n                test \"LMOVE $wherefrom $whereto with the same list as src and dst - $type\" {\n                    if {$wherefrom eq \"right\"} {\n                        create_$type mylist{t} \"a $large c\"\n                        assert_equal \"a $large c\" [r lrange mylist{t} 0 -1]\n                    } else {\n                        create_$type mylist{t} \"c a $large\"\n                        assert_equal \"c a $large\" [r 
lrange mylist{t} 0 -1]\n                    }\n                    assert_equal c [r lmove mylist{t} mylist{t} $wherefrom $whereto]\n                    if {$whereto eq \"right\"} {\n                        assert_equal \"a $large c\" [r lrange mylist{t} 0 -1]\n                    } else {\n                        assert_equal \"c a $large\" [r lrange mylist{t} 0 -1]\n                    }\n                }\n            }\n        }\n\n        foreach {othertype otherlarge} [array get largevalue] {\n            test \"RPOPLPUSH with $type source and existing target $othertype\" {\n                create_$type srclist{t} \"a b c $large\"\n                create_$othertype dstlist{t} \"$otherlarge\"\n                assert_equal $large [r rpoplpush srclist{t} dstlist{t}]\n                assert_equal c [r rpoplpush srclist{t} dstlist{t}]\n                assert_equal \"a b\" [r lrange srclist{t} 0 -1]\n                assert_equal \"c $large $otherlarge\" [r lrange dstlist{t} 0 -1]\n\n                # When we rpoplpush'ed a large value, dstlist should be\n                # converted to the same encoding as srclist.\n                if {$type eq \"quicklist\"} {\n                    assert_encoding quicklist dstlist{t}\n                }\n            }\n\n            foreach wherefrom {left right} {\n                foreach whereto {left right} {\n                    test \"LMOVE $wherefrom $whereto with $type source and existing target $othertype\" {\n                        create_$othertype dstlist{t} \"$otherlarge\"\n\n                        if {$wherefrom eq \"right\"} {\n                            create_$type srclist{t} \"a b c $large\"\n                        } else {\n                            create_$type srclist{t} \"$large c a b\"\n                        }\n                        assert_equal $large [r lmove srclist{t} dstlist{t} $wherefrom $whereto]\n                        assert_equal c [r lmove srclist{t} dstlist{t} $wherefrom $whereto]\n      
                  assert_equal \"a b\" [r lrange srclist{t} 0 -1]\n\n                        if {$whereto eq \"right\"} {\n                            assert_equal \"$otherlarge $large c\" [r lrange dstlist{t} 0 -1]\n                        } else {\n                            assert_equal \"c $large $otherlarge\" [r lrange dstlist{t} 0 -1]\n                        }\n\n                        # When we lmoved a large value, dstlist should be\n                        # converted to the same encoding as srclist.\n                        if {$type eq \"quicklist\"} {\n                            assert_encoding quicklist dstlist{t}\n                        }\n                    }\n                }\n            }\n        }\n    }\n\n    test {RPOPLPUSH against non existing key} {\n        r del srclist{t} dstlist{t}\n        assert_equal {} [r rpoplpush srclist{t} dstlist{t}]\n        assert_equal 0 [r exists srclist{t}]\n        assert_equal 0 [r exists dstlist{t}]\n    }\n\n    test {RPOPLPUSH against non list src key} {\n        r del srclist{t} dstlist{t}\n        r set srclist{t} x\n        assert_error WRONGTYPE* {r rpoplpush srclist{t} dstlist{t}}\n        assert_type string srclist{t}\n        assert_equal 0 [r exists newlist{t}]\n    }\n\nforeach {type large} [array get largevalue] {\n    test \"RPOPLPUSH against non list dst key - $type\" {\n        create_$type srclist{t} \"a $large c d\"\n        r set dstlist{t} x\n        assert_error WRONGTYPE* {r rpoplpush srclist{t} dstlist{t}}\n        assert_type string dstlist{t}\n        assert_equal \"a $large c d\" [r lrange srclist{t} 0 -1]\n    }\n}\n\n    test {RPOPLPUSH against non existing src key} {\n        r del srclist{t} dstlist{t}\n        assert_equal {} [r rpoplpush srclist{t} dstlist{t}]\n    } {}\n\n    foreach {type large} [array get largevalue] { \n        test \"Basic LPOP/RPOP/LMPOP - $type\" {\n            create_$type mylist \"$large 1 2\"\n            assert_equal $large [r lpop 
mylist]\n            assert_equal 2 [r rpop mylist]\n            assert_equal 1 [r lpop mylist]\n            assert_equal 0 [r llen mylist]\n\n            create_$type mylist \"$large 1 2\"\n            assert_equal \"mylist $large\" [r lmpop 1 mylist left count 1]\n            assert_equal {mylist {2 1}} [r lmpop 2 mylist mylist right count 2]\n        }\n    }\n\n    test {LPOP/RPOP/LMPOP against empty list} {\n        r del non-existing-list{t} non-existing-list2{t}\n\n        assert_equal {} [r lpop non-existing-list{t}]\n        assert_equal {} [r rpop non-existing-list2{t}]\n\n        assert_equal {} [r lmpop 1 non-existing-list{t} left count 1]\n        assert_equal {} [r lmpop 1 non-existing-list{t} left count 10]\n        assert_equal {} [r lmpop 2 non-existing-list{t} non-existing-list2{t} right count 1]\n        assert_equal {} [r lmpop 2 non-existing-list{t} non-existing-list2{t} right count 10]\n    }\n\n    test {LPOP/RPOP/LMPOP NON-BLOCK or BLOCK against non list value} {\n        r set notalist{t} foo\n        assert_error WRONGTYPE* {r lpop notalist{t}}\n        assert_error WRONGTYPE* {r blpop notalist{t} 0}\n        assert_error WRONGTYPE* {r rpop notalist{t}}\n        assert_error WRONGTYPE* {r brpop notalist{t} 0}\n\n        r del notalist2{t}\n        assert_error \"WRONGTYPE*\" {r lmpop 2 notalist{t} notalist2{t} left count 1}\n        assert_error \"WRONGTYPE*\" {r blmpop 0 2 notalist{t} notalist2{t} left count 1}\n\n        r del notalist{t}\n        r set notalist2{t} nolist\n        assert_error \"WRONGTYPE*\" {r lmpop 2 notalist{t} notalist2{t} right count 10}\n        assert_error \"WRONGTYPE*\" {r blmpop 0 2 notalist{t} notalist2{t} left count 1}\n    }\n\n    foreach {num} {250 500} {\n        test \"Mass RPOP/LPOP - $type\" {\n            r del mylist\n            set sum1 0\n            for {set i 0} {$i < $num} {incr i} {\n                if {$i == [expr $num/2]} {\n                    r lpush mylist $large\n                }\n     
           r lpush mylist $i\n                incr sum1 $i\n            }\n            assert_encoding $type mylist\n            set sum2 0\n            for {set i 0} {$i < [expr $num/2]} {incr i} {\n                incr sum2 [r lpop mylist]\n                incr sum2 [r rpop mylist]\n            }\n            assert_equal $sum1 $sum2\n        }\n    }\n\n    test {LMPOP with illegal argument} {\n        assert_error \"ERR wrong number of arguments for 'lmpop' command\" {r lmpop}\n        assert_error \"ERR wrong number of arguments for 'lmpop' command\" {r lmpop 1}\n        assert_error \"ERR wrong number of arguments for 'lmpop' command\" {r lmpop 1 mylist{t}}\n\n        assert_error \"ERR numkeys*\" {r lmpop 0 mylist{t} LEFT}\n        assert_error \"ERR numkeys*\" {r lmpop a mylist{t} LEFT}\n        assert_error \"ERR numkeys*\" {r lmpop -1 mylist{t} RIGHT}\n\n        assert_error \"ERR syntax error*\" {r lmpop 1 mylist{t} bad_where}\n        assert_error \"ERR syntax error*\" {r lmpop 1 mylist{t} LEFT bar_arg}\n        assert_error \"ERR syntax error*\" {r lmpop 1 mylist{t} RIGHT LEFT}\n        assert_error \"ERR syntax error*\" {r lmpop 1 mylist{t} COUNT}\n        assert_error \"ERR syntax error*\" {r lmpop 1 mylist{t} LEFT COUNT 1 COUNT 2}\n        assert_error \"ERR syntax error*\" {r lmpop 2 mylist{t} mylist2{t} bad_arg}\n\n        assert_error \"ERR count*\" {r lmpop 1 mylist{t} LEFT COUNT 0}\n        assert_error \"ERR count*\" {r lmpop 1 mylist{t} RIGHT COUNT a}\n        assert_error \"ERR count*\" {r lmpop 1 mylist{t} LEFT COUNT -1}\n        assert_error \"ERR count*\" {r lmpop 2 mylist{t} mylist2{t} RIGHT COUNT -1}\n    }\n\nforeach {type large} [array get largevalue] {\n    test \"LMPOP single existing list - $type\" {\n        # Same key multiple times.\n        create_$type mylist{t} \"a b $large d e f\"\n        assert_equal {mylist{t} {a b}} [r lmpop 2 mylist{t} mylist{t} left count 2]\n        assert_equal {mylist{t} {f e}} [r lmpop 2 mylist{t} 
mylist{t} right count 2]\n        assert_equal 2 [r llen mylist{t}]\n\n        # First one exists, second one does not exist.\n        create_$type mylist{t} \"a b $large d e\"\n        r del mylist2{t}\n        assert_equal {mylist{t} a} [r lmpop 2 mylist{t} mylist2{t} left count 1]\n        assert_equal 4 [r llen mylist{t}]\n        assert_equal \"mylist{t} {e d $large b}\" [r lmpop 2 mylist{t} mylist2{t} right count 10]\n        assert_equal {} [r lmpop 2 mylist{t} mylist2{t} right count 1]\n\n        # First one does not exist, second one exists.\n        r del mylist{t}\n        create_$type mylist2{t} \"1 2 $large 4 5\"\n        assert_equal {mylist2{t} 5} [r lmpop 2 mylist{t} mylist2{t} right count 1]\n        assert_equal 4 [r llen mylist2{t}]\n        assert_equal \"mylist2{t} {1 2 $large 4}\" [r lmpop 2 mylist{t} mylist2{t} left count 10]\n\n        assert_equal 0 [r exists mylist{t} mylist2{t}]\n    }\n\n    test \"LMPOP multiple existing lists - $type\" {\n        create_$type mylist{t} \"a b $large d e\"\n        create_$type mylist2{t} \"1 2 $large 4 5\"\n\n        # Pop from the first key.\n        assert_equal {mylist{t} {a b}} [r lmpop 2 mylist{t} mylist2{t} left count 2]\n        assert_equal 3 [r llen mylist{t}]\n        assert_equal \"mylist{t} {e d $large}\" [r lmpop 2 mylist{t} mylist2{t} right count 3]\n        assert_equal 0 [r exists mylist{t}]\n\n        # Pop from the second key.\n        assert_equal \"mylist2{t} {1 2 $large}\" [r lmpop 2 mylist{t} mylist2{t} left count 3]\n        assert_equal 2 [r llen mylist2{t}]\n        assert_equal {mylist2{t} {5 4}} [r lmpop 2 mylist{t} mylist2{t} right count 2]\n        assert_equal 0 [r exists mylist2{t}]\n\n        # Pop all elements.\n        create_$type mylist{t} \"a $large c\"\n        create_$type mylist2{t} \"1 $large 3\"\n        assert_equal \"mylist{t} {a $large c}\" [r lmpop 2 mylist{t} mylist2{t} left count 10]\n        assert_equal 0 [r llen mylist{t}]\n        assert_equal 
\"mylist2{t} {3 $large 1}\" [r lmpop 2 mylist{t} mylist2{t} right count 10]\n        assert_equal 0 [r llen mylist2{t}]\n        assert_equal 0 [r exists mylist{t} mylist2{t}]\n    }\n}\n\n    test {LMPOP propagate as pop with count command to replica} {\n        set repl [attach_to_replication_stream]\n\n        # left/right propagate as lpop/rpop with count\n        r lpush mylist{t} a b c\n\n        # Pop elements from one list.\n        r lmpop 1 mylist{t} left count 1\n        r lmpop 1 mylist{t} right count 1\n\n        # Now the list has only one element\n        r lmpop 2 mylist{t} mylist2{t} left count 10\n\n        # No elements so we don't propagate.\n        r lmpop 2 mylist{t} mylist2{t} left count 10\n\n        # Pop elements from the second list.\n        r rpush mylist2{t} 1 2 3\n        r lmpop 2 mylist{t} mylist2{t} left count 2\n        r lmpop 2 mylist{t} mylist2{t} right count 1\n\n        # Pop all elements.\n        r rpush mylist{t} a b c\n        r rpush mylist2{t} 1 2 3\n        r lmpop 2 mylist{t} mylist2{t} left count 10\n        r lmpop 2 mylist{t} mylist2{t} right count 10\n\n        assert_replication_stream $repl {\n            {select *}\n            {lpush mylist{t} a b c}\n            {lpop mylist{t} 1}\n            {rpop mylist{t} 1}\n            {lpop mylist{t} 1}\n            {rpush mylist2{t} 1 2 3}\n            {lpop mylist2{t} 2}\n            {rpop mylist2{t} 1}\n            {rpush mylist{t} a b c}\n            {rpush mylist2{t} 1 2 3}\n            {lpop mylist{t} 3}\n            {rpop mylist2{t} 3}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    foreach {type large} [array get largevalue] {\n        test \"LRANGE basics - $type\" {\n            create_$type mylist \"$large 1 2 3 4 5 6 7 8 9\"\n            assert_equal {1 2 3 4 5 6 7 8} [r lrange mylist 1 -2]\n            assert_equal {7 8 9} [r lrange mylist -3 -1]\n            assert_equal {4} [r lrange mylist 4 4]\n        }\n\n        
test \"LRANGE inverted indexes - $type\" {\n            create_$type mylist \"$large 1 2 3 4 5 6 7 8 9\"\n            assert_equal {} [r lrange mylist 6 2]\n        }\n\n        test \"LRANGE out of range indexes including the full list - $type\" {\n            create_$type mylist \"$large 1 2 3\"\n            assert_equal \"$large 1 2 3\" [r lrange mylist -1000 1000]\n        }\n\n        test \"LRANGE out of range negative end index - $type\" {\n            create_$type mylist \"$large 1 2 3\"\n            assert_equal $large [r lrange mylist 0 -4]\n            assert_equal {} [r lrange mylist 0 -5]\n        }\n    }\n\n    test {LRANGE against non existing key} {\n        assert_equal {} [r lrange nosuchkey 0 1]\n    }\n\n    test {LRANGE with start > end yields an empty array for backward compatibility} {\n        create_$type mylist \"1 $large 3\"\n        assert_equal {} [r lrange mylist 1 0]\n        assert_equal {} [r lrange mylist -1 -2]\n    }\n\n    foreach {type large} [array get largevalue] {\n        proc trim_list {type min max} {\n            upvar 1 large large\n            r del mylist\n            create_$type mylist \"1 2 3 4 $large\"\n            r ltrim mylist $min $max\n            r lrange mylist 0 -1\n        }\n\n        test \"LTRIM basics - $type\" {\n            assert_equal \"1\" [trim_list $type 0 0]\n            assert_equal \"1 2\" [trim_list $type 0 1]\n            assert_equal \"1 2 3\" [trim_list $type 0 2]\n            assert_equal \"2 3\" [trim_list $type 1 2]\n            assert_equal \"2 3 4 $large\" [trim_list $type 1 -1]\n            assert_equal \"2 3 4\" [trim_list $type 1 -2]\n            assert_equal \"4 $large\" [trim_list $type -2 -1]\n            assert_equal \"$large\" [trim_list $type -1 -1]\n            assert_equal \"1 2 3 4 $large\" [trim_list $type -5 -1]\n            assert_equal \"1 2 3 4 $large\" [trim_list $type -10 10]\n            assert_equal \"1 2 3 4 $large\" [trim_list $type 0 5]\n            
assert_equal \"1 2 3 4 $large\" [trim_list $type 0 10]\n        }\n\n        test \"LTRIM out of range negative end index - $type\" {\n            assert_equal {1} [trim_list $type 0 -5]\n            assert_equal {} [trim_list $type 0 -6]\n        }\n\n        test \"LSET - $type\" {\n            create_$type mylist \"99 98 $large 96 95\"\n            r lset mylist 1 foo\n            r lset mylist -1 bar\n            assert_equal \"99 foo $large 96 bar\" [r lrange mylist 0 -1]\n        }\n\n        test \"LSET out of range index - $type\" {\n            assert_error ERR*range* {r lset mylist 10 foo}\n        }\n    }\n\n    test {LSET against non existing key} {\n        assert_error ERR*key* {r lset nosuchkey 10 foo}\n    }\n\n    test {LSET against non list value} {\n        r set nolist foobar\n        assert_error WRONGTYPE* {r lset nolist 0 foo}\n    }\n\n    foreach {type e} [array get largevalue] {\n        test \"LREM remove all the occurrences - $type\" {\n            create_$type mylist \"$e foo bar foobar foobared zap bar test foo\"\n            assert_equal 2 [r lrem mylist 0 bar]\n            assert_equal \"$e foo foobar foobared zap test foo\" [r lrange mylist 0 -1]\n        }\n\n        test \"LREM remove the first occurrence - $type\" {\n            assert_equal 1 [r lrem mylist 1 foo]\n            assert_equal \"$e foobar foobared zap test foo\" [r lrange mylist 0 -1]\n        }\n\n        test \"LREM remove non existing element - $type\" {\n            assert_equal 0 [r lrem mylist 1 nosuchelement]\n            assert_equal \"$e foobar foobared zap test foo\" [r lrange mylist 0 -1]\n        }\n\n        test \"LREM starting from tail with negative count - $type\" {\n            create_$type mylist \"$e foo bar foobar foobared zap bar test foo foo\"\n            assert_equal 1 [r lrem mylist -1 bar]\n            assert_equal \"$e foo bar foobar foobared zap test foo foo\" [r lrange mylist 0 -1]\n        }\n\n        test \"LREM starting from tail 
with negative count (2) - $type\" {\n            assert_equal 2 [r lrem mylist -2 foo]\n            assert_equal \"$e foo bar foobar foobared zap test\" [r lrange mylist 0 -1]\n        }\n\n        test \"LREM deleting objects that may be int encoded - $type\" {\n            create_$type myotherlist \"$e 1 2 3\"\n            assert_equal 1 [r lrem myotherlist 1 2]\n            assert_equal 3 [r llen myotherlist]\n        }\n    }\n\n    test \"Regression for bug 593 - chaining BRPOPLPUSH with other blocking cmds\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $rd1 brpoplpush a{t} b{t} 0\n        $rd1 brpoplpush a{t} b{t} 0\n        wait_for_blocked_clients_count 1\n        $rd2 brpoplpush b{t} c{t} 0\n        wait_for_blocked_clients_count 2\n        r lpush a{t} data\n        $rd1 close\n        $rd2 close\n        r ping\n    } {PONG}\n\n    test \"BLPOP/BLMOVE should increase dirty\" {\n        r del lst{t} lst1{t}\n        set rd [redis_deferring_client]\n\n        set dirty [s rdb_changes_since_last_save]\n        $rd blpop lst{t} 0\n        wait_for_blocked_client\n        r lpush lst{t} a\n        assert_equal {lst{t} a} [$rd read]\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 2}\n\n        set dirty [s rdb_changes_since_last_save]\n        $rd blmove lst{t} lst1{t} left left 0\n        wait_for_blocked_client\n        r lpush lst{t} a\n        assert_equal {a} [$rd read]\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 2}\n\n        $rd close\n    }\n\nforeach {pop} {BLPOP BLMPOP_RIGHT} {\n    test \"client unblock tests\" {\n        r del l\n        set rd [redis_deferring_client]\n        $rd client id\n        set id [$rd read]\n\n        # test default args\n        bpop_command $rd $pop l 0\n        wait_for_blocked_client\n        r client unblock $id\n        assert_equal {} [$rd read]\n\n        # test with timeout\n  
      bpop_command $rd $pop l 0\n        wait_for_blocked_client\n        r client unblock $id TIMEOUT\n        assert_equal {} [$rd read]\n\n        # test with error\n        bpop_command $rd $pop l 0\n        wait_for_blocked_client\n        r client unblock $id ERROR\n        catch {[$rd read]} e\n        assert_equal $e \"UNBLOCKED client unblocked via CLIENT UNBLOCK\"\n\n        # test with invalid client id\n        catch {[r client unblock asd]} e\n        assert_equal $e \"ERR value is not an integer or out of range\"\n\n        # test with non blocked client\n        set myid [r client id]\n        catch {[r client unblock $myid]} e\n        assert_equal $e {invalid command name \"0\"}\n\n        # finally, see that this client and list are still functional\n        bpop_command $rd $pop l 0\n        wait_for_blocked_client\n        r lpush l foo\n        assert_equal {l foo} [$rd read]\n        $rd close\n    }\n}\n\n    foreach {max_lp_size large} \"3 $largevalue(listpack) -1 $largevalue(quicklist)\" {\n        test \"List listpack -> quicklist encoding conversion\" {\n            set origin_conf [config_get_set list-max-listpack-size $max_lp_size]\n\n            # RPUSH\n            create_listpack lst \"a b c\"\n            r RPUSH lst $large\n            assert_encoding quicklist lst\n\n            # LINSERT\n            create_listpack lst \"a b c\"\n            r LINSERT lst after b $large\n            assert_encoding quicklist lst\n\n            # LSET\n            create_listpack lst \"a b c\"\n            r LSET lst 0 $large\n            assert_encoding quicklist lst\n\n            # LMOVE\n            create_quicklist lsrc{t} \"a b c $large\"\n            create_listpack ldes{t} \"d e f\"\n            r LMOVE lsrc{t} ldes{t} right right\n            assert_encoding quicklist ldes{t}\n\n            r config set list-max-listpack-size $origin_conf\n        }\n    }\n\n    test \"List quicklist -> listpack encoding conversion\" {\n        set 
origin_conf [config_get_set list-max-listpack-size 3]\n\n        # RPOP\n        create_quicklist lst \"a b c d\"\n        r RPOP lst 3\n        assert_encoding listpack lst\n\n        # LREM\n        create_quicklist lst \"a a a d\"\n        r LREM lst 3 a\n        assert_encoding listpack lst\n\n        # LTRIM\n        create_quicklist lst \"a b c d\"\n        r LTRIM lst 1 1\n        assert_encoding listpack lst\n\n        r config set list-max-listpack-size -1\n\n        # RPOP\n        create_quicklist lst \"a b c $largevalue(quicklist)\"\n        r RPOP lst 1\n        assert_encoding listpack lst\n\n        # LREM\n        create_quicklist lst \"a $largevalue(quicklist)\"\n        r LREM lst 1 $largevalue(quicklist)\n        assert_encoding listpack lst\n\n        # LTRIM\n        create_quicklist lst \"a b $largevalue(quicklist)\"\n        r LTRIM lst 0 1\n        assert_encoding listpack lst\n\n        # LSET\n        create_quicklist lst \"$largevalue(quicklist) a b\"\n        r RPOP lst 2\n        assert_encoding quicklist lst\n        r LSET lst -1 c\n        assert_encoding listpack lst\n\n        r config set list-max-listpack-size $origin_conf\n    }\n\n    test \"List encoding conversion when RDB loading\" {\n        set origin_conf [config_get_set list-max-listpack-size 3]\n        create_listpack lst \"a b c\"\n\n        # list is still a listpack after DEBUG RELOAD\n        r DEBUG RELOAD\n        assert_encoding listpack lst\n\n        # list is still a quicklist after DEBUG RELOAD\n        r RPUSH lst d\n        r DEBUG RELOAD\n        assert_encoding quicklist lst\n\n        # when a quicklist has only one packed node, it will be\n        # converted to listpack during rdb loading\n        r RPOP lst\n        assert_encoding quicklist lst \n        r DEBUG RELOAD\n        assert_encoding listpack lst\n\n        r config set list-max-listpack-size $origin_conf\n    } {OK} {needs:debug}\n\n    test \"List invalid list-max-listpack-size config\" 
{\n        # When list-max-listpack-size is 0 we treat it as 1 and it'll\n        # still be listpack if there's a single element in the list.\n        r config set list-max-listpack-size 0\n        r DEL lst\n        r RPUSH lst a\n        assert_encoding listpack lst\n        r RPUSH lst b\n        assert_encoding quicklist lst\n\n        # When list-max-listpack-size < -5 we treat it as -5.\n        r config set list-max-listpack-size -6\n        r DEL lst\n        r RPUSH lst [string repeat \"x\" 60000]\n        assert_encoding listpack lst\n        # Converted to quicklist when the size of the listpack exceeds 65536\n        r RPUSH lst [string repeat \"x\" 5536]\n        assert_encoding quicklist lst\n    }\n\n    test \"List of various encodings\" {\n        r del k\n        r lpush k 127 ;# ZIP_INT_8B\n        r lpush k 32767 ;# ZIP_INT_16B\n        r lpush k 2147483647 ;# ZIP_INT_32B\n        r lpush k 9223372036854775808 ;# ZIP_INT_64B\n        r lpush k 0 ;# ZIP_INT_IMM_MIN\n        r lpush k 12 ;# ZIP_INT_IMM_MAX\n        r lpush k [string repeat x 31] ;# ZIP_STR_06B\n        r lpush k [string repeat x 8191] ;# ZIP_STR_14B\n        r lpush k [string repeat x 65535] ;# ZIP_STR_32B\n        assert_encoding quicklist k ;# exceeds the size limit of quicklist node\n        set k [r lrange k 0 -1]\n        set dump [r dump k]\n\n        # coverage for kvobjComputeSize\n        assert_morethan [memory_usage k] 0\n\n        config_set sanitize-dump-payload no mayfail\n        r restore kk 0 $dump replace\n        assert_encoding quicklist kk\n        set kk [r lrange kk 0 -1]\n\n        # try some forward and backward searches to make sure all encodings\n        # can be traversed\n        assert_equal [r lindex kk 5] {9223372036854775808}\n        assert_equal [r lindex kk -5] {0}\n        assert_equal [r lpos kk foo rank 1] {}\n        assert_equal [r lpos kk foo rank -1] {}\n\n        # make sure the values are right\n        assert_equal $k $kk\n        
assert_equal [lpop k] [string repeat x 65535]\n        assert_equal [lpop k] [string repeat x 8191]\n        assert_equal [lpop k] [string repeat x 31]\n        set _ $k\n    } {12 0 9223372036854775808 2147483647 32767 127}\n\n    test \"List of various encodings - sanitize dump\" {\n        config_set sanitize-dump-payload yes mayfail\n        r restore kk 0 $dump replace\n        assert_encoding quicklist kk\n        set k [r lrange k 0 -1]\n        set kk [r lrange kk 0 -1]\n\n        # make sure the values are right\n        assert_equal $k $kk\n        assert_equal [lpop k] [string repeat x 65535]\n        assert_equal [lpop k] [string repeat x 8191]\n        assert_equal [lpop k] [string repeat x 31]\n        set _ $k\n    } {12 0 9223372036854775808 2147483647 32767 127}\n    \n    test \"Unblock fairness is kept while pipelining\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        \n        # delete the list in case it already exists\n        r del mylist\n        \n        # block a client on the list\n        $rd1 BLPOP mylist 0\n        wait_for_blocked_clients_count 1\n        \n        # pipeline a list push and a blocking pop on another client;\n        # we expect fairness to be kept, with $rd1\n        # being unblocked\n        set buf \"\"\n        append buf \"LPUSH mylist 1\\r\\n\"\n        append buf \"BLPOP mylist 0\\r\\n\"\n        $rd2 write $buf\n        $rd2 flush\n        \n        # we check that we still have 1 blocked client\n        # and that the first blocked client has been served\n        assert_equal [$rd1 read] {mylist 1}\n        assert_equal [$rd2 read] {1}\n        wait_for_blocked_clients_count 1\n        \n        # We now unblock the last client and verify it was served last\n        r LPUSH mylist 2\n        wait_for_blocked_clients_count 0\n        assert_equal [$rd2 read] {mylist 2}\n        \n        $rd1 close\n        $rd2 close\n    }\n    \n    test \"Unblock 
fairness is kept during nested unblock\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set rd3 [redis_deferring_client]\n        \n        # delete the lists in case they already exist\n        r del l1{t} l2{t} l3{t}\n        \n        # block a client on the list\n        $rd1 BRPOPLPUSH l1{t} l3{t} 0\n        wait_for_blocked_clients_count 1\n        \n        $rd2 BLPOP l2{t} 0\n        wait_for_blocked_clients_count 2\n        \n        $rd3 BLMPOP 0 2 l2{t} l3{t} LEFT COUNT 1\n        wait_for_blocked_clients_count 3\n        \n        r multi\n        r lpush l1{t} 1\n        r lpush l2{t} 2\n        r exec\n        \n        wait_for_blocked_clients_count 0\n        \n        assert_equal [$rd1 read] {1}\n        assert_equal [$rd2 read] {l2{t} 2}\n        assert_equal [$rd3 read] {l3{t} 1}\n        \n        $rd1 close\n        $rd2 close\n        $rd3 close\n    }\n    \n    test \"Blocking command accounted only once in commandstats\" {\n        # cleanup first\n        r del mylist\n        \n        # create a test client\n        set rd [redis_deferring_client]\n        \n        # reset the server stats\n        r config resetstat\n        \n        # block a client on the list\n        $rd BLPOP mylist 0\n        wait_for_blocked_clients_count 1\n        \n        # unblock the list\n        r LPUSH mylist 1\n        wait_for_blocked_clients_count 0\n        \n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdrstat blpop r]\n        \n        $rd close\n    }\n    \n    test \"Blocking command accounted only once in commandstats after timeout\" {\n        # cleanup first\n        r del mylist\n        \n        # create a test client\n        set rd [redis_deferring_client]\n        $rd client id\n        set id [$rd read]\n\n        # reset the server stats\n        r config resetstat\n        \n        # block a client on the list\n        $rd BLPOP mylist 0\n        
wait_for_blocked_clients_count 1\n        \n        # unblock the client on timeout\n        r client unblock $id timeout\n        \n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=0*} [cmdrstat blpop r]\n        \n        $rd close\n    }\n\n    test {Command being unblocked causes another command to get unblocked - execution order test} {\n        r del src{t} dst{t} key1{t} key2{t} key3{t}\n        set repl [attach_to_replication_stream]\n\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set rd3 [redis_deferring_client]\n\n        $rd1 blmove src{t} dst{t} left right 0\n        wait_for_blocked_clients_count 1\n\n        $rd2 blmove dst{t} src{t} right left 0\n        wait_for_blocked_clients_count 2\n\n        # Create a pipeline of commands that will be processed in one socket read.\n        # Insert two set commands before and after lpush to observe the execution order.\n        set buf \"\"\n        append buf \"set key1{t} value1\\r\\n\"\n        append buf \"lpush src{t} dummy\\r\\n\"\n        append buf \"set key2{t} value2\\r\\n\"\n        $rd3 write $buf\n        $rd3 flush\n\n        wait_for_blocked_clients_count 0\n\n        r set key3{t} value3\n\n        # If a command being unblocked causes another command to get unblocked, like a BLMOVE would do,\n        # then the new unblocked command will get processed right away rather than waiting for later.\n        # If the set command occurs between two lmove commands, the results are not as expected.\n        assert_replication_stream $repl {\n            {select *}\n            {set key1{t} value1}\n            {lpush src{t} dummy}\n            {lmove src{t} dst{t} left right}\n            {lmove dst{t} src{t} right left}\n            {set key2{t} value2}\n            {set key3{t} value3}\n        }\n\n        $rd1 close\n        $rd2 close\n        $rd3 close\n\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test \"Blocking 
timeout following PAUSE should honor the timeout\" {\n        # cleanup first\n        r del mylist\n        \n        # create a test client\n        set rd [redis_deferring_client]\n        \n        # first PAUSE all writes for a very long time\n        r client pause 10000000000000 write\n\n        # block a client on the list\n        $rd BLPOP mylist 1\n        wait_for_blocked_clients_count 1\n        \n        # now unpause the writes\n        r client unpause\n\n        # client should time-out\n        wait_for_blocked_clients_count 0\n        \n        $rd close\n    }\n\n    test \"CLIENT NO-TOUCH with BRPOP and RPUSH regression test\" {\n        # Test scenario:\n        # 1. Client 1: CLIENT NO-TOUCH on\n        # 2. Client 2: BRPOP mylist 0\n        # 3. Client 1: RPUSH mylist elem\n        \n        # cleanup first\n        r del mylist\n        \n        # Create two test clients\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        \n        # Client 1: Enable CLIENT NO-TOUCH\n        $rd1 client no-touch on\n        assert_equal {OK} [$rd1 read]\n        \n        # Client 2: Block waiting for elements in mylist\n        $rd2 brpop mylist 0\n        wait_for_blocked_client\n        \n        # Client 1: Push an element to mylist\n        $rd1 rpush mylist elem\n        assert_equal {1} [$rd1 read]\n\n        # Verify Client 2 received the element\n        assert_equal {mylist elem} [$rd2 read]\n\n        $rd1 close\n        $rd2 close\n    }\n    \n} ;# stop servers\n"
  },
  {
    "path": "tests/unit/type/set.tcl",
    "content": "start_server {\n    tags {\"set\"}\n    overrides {\n        \"set-max-intset-entries\" 512\n        \"set-max-listpack-entries\" 128\n        \"set-max-listpack-value\" 32\n    }\n} {\n    proc create_set {key entries} {\n        r del $key\n        foreach entry $entries { r sadd $key $entry }\n    }\n\n    # Values for initializing sets, per encoding.\n    array set initelems {listpack {foo} hashtable {foo}}\n    for {set i 0} {$i < 130} {incr i} {\n        lappend initelems(hashtable) [format \"i%03d\" $i]\n    }\n\n    foreach type {listpack hashtable} {\n    test \"SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - $type\" {\n        create_set myset $initelems($type)\n        assert_encoding $type myset\n        assert_equal 1 [r sadd myset bar]\n        assert_equal 0 [r sadd myset bar]\n        assert_equal [expr [llength $initelems($type)] + 1] [r scard myset]\n        assert_equal 1 [r sismember myset foo]\n        assert_equal 1 [r sismember myset bar]\n        assert_equal 0 [r sismember myset bla]\n        assert_equal {1} [r smismember myset foo]\n        assert_equal {1 1} [r smismember myset foo bar]\n        assert_equal {1 0} [r smismember myset foo bla]\n        assert_equal {0 1} [r smismember myset bla foo]\n        assert_equal {0} [r smismember myset bla]\n        assert_equal \"bar $initelems($type)\" [lsort [r smembers myset]]\n    }\n    }\n\n    test {SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - intset} {\n        create_set myset {17}\n        assert_encoding intset myset\n        assert_equal 1 [r sadd myset 16]\n        assert_equal 0 [r sadd myset 16]\n        assert_equal 2 [r scard myset]\n        assert_equal 1 [r sismember myset 16]\n        assert_equal 1 [r sismember myset 17]\n        assert_equal 0 [r sismember myset 18]\n        assert_equal {1} [r smismember myset 16]\n        assert_equal {1 1} [r smismember myset 16 17]\n        assert_equal {1 0} [r smismember myset 16 18]\n        assert_equal 
{0 1} [r smismember myset 18 16]\n        assert_equal {0} [r smismember myset 18]\n        assert_equal {16 17} [lsort [r smembers myset]]\n    }\n\n    test {SMISMEMBER SMEMBERS SCARD against non set} {\n        r lpush mylist foo\n        assert_error WRONGTYPE* {r smismember mylist bar}\n        assert_error WRONGTYPE* {r smembers mylist}\n        assert_error WRONGTYPE* {r scard mylist}\n    }\n\n    test {SMISMEMBER SMEMBERS SCARD against non existing key} {\n        assert_equal {0} [r smismember myset1 foo]\n        assert_equal {0 0} [r smismember myset1 foo bar]\n        assert_equal {} [r smembers myset1]\n        assert_equal {0} [r scard myset1]\n    }\n\n    test {SMISMEMBER requires one or more members} {\n        r del zmscoretest\n        r zadd zmscoretest 10 x\n        r zadd zmscoretest 20 y\n        \n        catch {r smismember zmscoretest} e\n        assert_match {*ERR*wrong*number*arg*} $e\n    }\n\n    test {SADD against non set} {\n        r lpush mylist foo\n        assert_error WRONGTYPE* {r sadd mylist bar}\n    }\n\n    test \"SADD a non-integer against a small intset\" {\n        create_set myset {1 2 3}\n        assert_encoding intset myset\n        assert_equal 1 [r sadd myset a]\n        assert_encoding listpack myset\n    }\n\n    test \"SADD a non-integer against a large intset\" {\n        create_set myset {0}\n        for {set i 1} {$i < 130} {incr i} {r sadd myset $i}\n        assert_encoding intset myset\n        assert_equal 1 [r sadd myset a]\n        assert_encoding hashtable myset\n    }\n\n    test \"SADD an integer larger than 64 bits\" {\n        create_set myset {213244124402402314402033402}\n        assert_encoding listpack myset\n        assert_equal 1 [r sismember myset 213244124402402314402033402]\n        assert_equal {1} [r smismember myset 213244124402402314402033402]\n    }\n\n    test \"SADD an integer larger than 64 bits to a large intset\" {\n        create_set myset {0}\n        for {set i 1} {$i < 130} 
{incr i} {r sadd myset $i}\n        assert_encoding intset myset\n        r sadd myset 213244124402402314402033402\n        assert_encoding hashtable myset\n        assert_equal 1 [r sismember myset 213244124402402314402033402]\n        assert_equal {1} [r smismember myset 213244124402402314402033402]\n    }\n\nforeach type {single multiple single_multiple} {\n    test \"SADD overflows the maximum allowed integers in an intset - $type\" {\n        r del myset\n\n        if {$type == \"single\"} {\n            # All are single sadd commands.\n            for {set i 0} {$i < 512} {incr i} { r sadd myset $i }\n        } elseif {$type == \"multiple\"} {\n            # One sadd command to add all elements.\n            set args {}\n            for {set i 0} {$i < 512} {incr i} { lappend args $i }\n            r sadd myset {*}$args\n        } elseif {$type == \"single_multiple\"} {\n            # First one sadd adds an element (creates a key) and then one sadd adds all elements.\n            r sadd myset 1\n            set args {}\n            for {set i 0} {$i < 512} {incr i} { lappend args $i }\n            r sadd myset {*}$args\n        }\n\n        assert_encoding intset myset\n        assert_equal 512 [r scard myset]\n        assert_equal 1 [r sadd myset 512]\n        assert_encoding hashtable myset\n    }\n\n    test \"SADD overflows the maximum allowed elements in a listpack - $type\" {\n        r del myset\n\n        if {$type == \"single\"} {\n            # All are single sadd commands.\n            r sadd myset a\n            for {set i 0} {$i < 127} {incr i} { r sadd myset $i }\n        } elseif {$type == \"multiple\"} {\n            # One sadd command to add all elements.\n            set args {}\n            lappend args a\n            for {set i 0} {$i < 127} {incr i} { lappend args $i }\n            r sadd myset {*}$args\n        } elseif {$type == \"single_multiple\"} {\n            # First one sadd adds an element (creates a key) and then one sadd adds 
all elements.\n            r sadd myset a\n            set args {}\n            lappend args a\n            for {set i 0} {$i < 127} {incr i} { lappend args $i }\n            r sadd myset {*}$args\n        }\n\n        assert_encoding listpack myset\n        assert_equal 128 [r scard myset]\n        assert_equal 1 [r sadd myset b]\n        assert_encoding hashtable myset\n    }\n}\n\n    test {Variadic SADD} {\n        r del myset\n        assert_equal 3 [r sadd myset a b c]\n        assert_equal 2 [r sadd myset A a b c B]\n        assert_equal [lsort {A a b c B}] [lsort [r smembers myset]]\n    }\n\n    test \"Set encoding after DEBUG RELOAD\" {\n        r del myintset\n        r del myhashset\n        r del mylargeintset\n        r del mysmallset\n        for {set i 0} {$i <  100} {incr i} { r sadd myintset $i }\n        for {set i 0} {$i < 1280} {incr i} { r sadd mylargeintset $i }\n        for {set i 0} {$i <   50} {incr i} { r sadd mysmallset [format \"i%03d\" $i] }\n        for {set i 0} {$i <  256} {incr i} { r sadd myhashset [format \"i%03d\" $i] }\n        assert_encoding intset myintset\n        assert_encoding hashtable mylargeintset\n        assert_encoding listpack mysmallset\n        assert_encoding hashtable myhashset\n\n        r debug reload\n        assert_encoding intset myintset\n        assert_encoding hashtable mylargeintset\n        assert_encoding listpack mysmallset\n        assert_encoding hashtable myhashset\n    } {} {needs:debug}\n\n    foreach type {listpack hashtable} {\n        test \"SREM basics - $type\" {\n            create_set myset $initelems($type)\n            r sadd myset ciao\n            assert_encoding $type myset\n            assert_equal 0 [r srem myset qux]\n            assert_equal 1 [r srem myset ciao]\n            assert_equal $initelems($type) [lsort [r smembers myset]]\n        }\n    }\n\n    test {SREM basics - intset} {\n        create_set myset {3 4 5}\n        assert_encoding intset myset\n        assert_equal 
0 [r srem myset 6]\n        assert_equal 1 [r srem myset 4]\n        assert_equal {3 5} [lsort [r smembers myset]]\n    }\n\n    test {SREM with multiple arguments} {\n        r del myset\n        r sadd myset a b c d\n        assert_equal 0 [r srem myset k k k]\n        assert_equal 2 [r srem myset b d x y]\n        lsort [r smembers myset]\n    } {a c}\n\n    test {SREM variadic version with more args needed to destroy the key} {\n        r del myset\n        r sadd myset 1 2 3\n        r srem myset 1 2 3 4 5 6 7 8\n    } {3}\n\n    test \"SINTERCARD with illegal arguments\" {\n        assert_error \"ERR wrong number of arguments for 'sintercard' command\" {r sintercard}\n        assert_error \"ERR wrong number of arguments for 'sintercard' command\" {r sintercard 1}\n\n        assert_error \"ERR numkeys*\" {r sintercard 0 myset{t}}\n        assert_error \"ERR numkeys*\" {r sintercard a myset{t}}\n\n        assert_error \"ERR Number of keys*\" {r sintercard 2 myset{t}}\n        assert_error \"ERR Number of keys*\" {r sintercard 3 myset{t} myset2{t}}\n\n        assert_error \"ERR syntax error*\" {r sintercard 1 myset{t} myset2{t}}\n        assert_error \"ERR syntax error*\" {r sintercard 1 myset{t} bar_arg}\n        assert_error \"ERR syntax error*\" {r sintercard 1 myset{t} LIMIT}\n\n        assert_error \"ERR LIMIT*\" {r sintercard 1 myset{t} LIMIT -1}\n        assert_error \"ERR LIMIT*\" {r sintercard 1 myset{t} LIMIT a}\n    }\n\n    test \"SINTERCARD against non-set should throw error\" {\n        r del set{t}\n        r sadd set{t} a b c\n        r set key1{t} x\n\n        assert_error \"WRONGTYPE*\" {r sintercard 1 key1{t}}\n        assert_error \"WRONGTYPE*\" {r sintercard 2 set{t} key1{t}}\n        assert_error \"WRONGTYPE*\" {r sintercard 2 key1{t} noset{t}}\n    }\n\n    test \"SINTERCARD against non-existing key\" {\n        assert_equal 0 [r sintercard 1 non-existing-key]\n        assert_equal 0 [r sintercard 1 non-existing-key limit 0]\n        
assert_equal 0 [r sintercard 1 non-existing-key limit 10]\n    }\n\n    foreach {type} {regular intset} {\n        # Create sets setN{t} where N = 1..5\n        if {$type eq \"regular\"} {\n            set smallenc listpack\n            set bigenc hashtable\n        } else {\n            set smallenc intset\n            set bigenc intset\n        }\n        # Sets 1, 2 and 4 are big; sets 3 and 5 are small.\n        array set encoding \"1 $bigenc 2 $bigenc 3 $smallenc 4 $bigenc 5 $smallenc\"\n\n        for {set i 1} {$i <= 5} {incr i} {\n            r del [format \"set%d{t}\" $i]\n        }\n        for {set i 0} {$i < 200} {incr i} {\n            r sadd set1{t} $i\n            r sadd set2{t} [expr $i+195]\n        }\n        foreach i {199 195 1000 2000} {\n            r sadd set3{t} $i\n        }\n        for {set i 5} {$i < 200} {incr i} {\n            r sadd set4{t} $i\n        }\n        r sadd set5{t} 0\n\n        # To make sure the sets are encoded as the type we are testing -- also\n        # when the VM is enabled and the values may be swapped in and out\n        # while the tests are running -- an extra element is added to every\n        # set that determines its encoding.\n        set large 200\n        if {$type eq \"regular\"} {\n            set large foo\n        }\n\n        for {set i 1} {$i <= 5} {incr i} {\n            r sadd [format \"set%d{t}\" $i] $large\n        }\n\n        test \"Generated sets must be encoded correctly - $type\" {\n            for {set i 1} {$i <= 5} {incr i} {\n                assert_encoding $encoding($i) [format \"set%d{t}\" $i]\n            }\n        }\n\n        test \"SINTER with two sets - $type\" {\n            assert_equal [list 195 196 197 198 199 $large] [lsort [r sinter set1{t} set2{t}]]\n        }\n\n        test \"SINTERCARD with two sets - $type\" {\n            assert_equal 6 [r sintercard 2 set1{t} set2{t}]\n            assert_equal 6 [r sintercard 2 set1{t} set2{t} limit 0]\n            assert_equal 3 [r 
sintercard 2 set1{t} set2{t} limit 3]\n            assert_equal 6 [r sintercard 2 set1{t} set2{t} limit 10]\n        }\n\n        test \"SINTERSTORE with two sets - $type\" {\n            r sinterstore setres{t} set1{t} set2{t}\n            assert_encoding $smallenc setres{t}\n            assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres{t}]]\n        }\n\n        test \"SINTERSTORE with two sets, after a DEBUG RELOAD - $type\" {\n            r debug reload\n            r sinterstore setres{t} set1{t} set2{t}\n            assert_encoding $smallenc setres{t}\n            assert_equal [list 195 196 197 198 199 $large] [lsort [r smembers setres{t}]]\n        } {} {needs:debug}\n\n        test \"SUNION with two sets - $type\" {\n            set expected [lsort -uniq \"[r smembers set1{t}] [r smembers set2{t}]\"]\n            assert_equal $expected [lsort [r sunion set1{t} set2{t}]]\n        }\n\n        test \"SUNIONSTORE with two sets - $type\" {\n            r sunionstore setres{t} set1{t} set2{t}\n            assert_encoding $bigenc setres{t}\n            set expected [lsort -uniq \"[r smembers set1{t}] [r smembers set2{t}]\"]\n            assert_equal $expected [lsort [r smembers setres{t}]]\n        }\n\n        test \"SINTER against three sets - $type\" {\n            assert_equal [list 195 199 $large] [lsort [r sinter set1{t} set2{t} set3{t}]]\n        }\n\n        test \"SINTERCARD against three sets - $type\" {\n            assert_equal 3 [r sintercard 3 set1{t} set2{t} set3{t}]\n            assert_equal 3 [r sintercard 3 set1{t} set2{t} set3{t} limit 0]\n            assert_equal 2 [r sintercard 3 set1{t} set2{t} set3{t} limit 2]\n            assert_equal 3 [r sintercard 3 set1{t} set2{t} set3{t} limit 10]\n        }\n\n        test \"SINTERSTORE with three sets - $type\" {\n            r sinterstore setres{t} set1{t} set2{t} set3{t}\n            assert_equal [list 195 199 $large] [lsort [r smembers setres{t}]]\n        }\n\n        test 
\"SUNION with non existing keys - $type\" {\n            set expected [lsort -uniq \"[r smembers set1{t}] [r smembers set2{t}]\"]\n            assert_equal $expected [lsort [r sunion nokey1{t} set1{t} set2{t} nokey2{t}]]\n        }\n\n        test \"SDIFF with two sets - $type\" {\n            assert_equal {0 1 2 3 4} [lsort [r sdiff set1{t} set4{t}]]\n        }\n\n        test \"SDIFF with three sets - $type\" {\n            assert_equal {1 2 3 4} [lsort [r sdiff set1{t} set4{t} set5{t}]]\n        }\n\n        test \"SDIFFSTORE with three sets - $type\" {\n            r sdiffstore setres{t} set1{t} set4{t} set5{t}\n            # When we start with intsets, we should always end with intsets.\n            if {$type eq {intset}} {\n                assert_encoding intset setres{t}\n            }\n            assert_equal {1 2 3 4} [lsort [r smembers setres{t}]]\n        }\n\n        test \"SINTER/SUNION/SDIFF with three same sets - $type\" {\n            set expected [lsort \"[r smembers set1{t}]\"]\n            assert_equal $expected [lsort [r sinter set1{t} set1{t} set1{t}]]\n            assert_equal $expected [lsort [r sunion set1{t} set1{t} set1{t}]]\n            assert_equal {} [lsort [r sdiff set1{t} set1{t} set1{t}]]\n        }\n    }\n\n    test \"SINTERSTORE with two listpack sets where result is intset\" {\n        r del setres{t} set1{t} set2{t}\n        r sadd set1{t} a b c 1 3 6 x y z\n        r sadd set2{t} e f g 1 2 3 u v w\n        assert_encoding listpack set1{t}\n        assert_encoding listpack set2{t}\n        r sinterstore setres{t} set1{t} set2{t}\n        assert_equal [list 1 3] [lsort [r smembers setres{t}]]\n        assert_encoding intset setres{t}\n    }\n\n    test \"SINTERSTORE with two hashtable sets where result is intset\" {\n        r del setres{t} set1{t} set2{t}\n        r sadd set1{t} a b c 444 555 666\n        r sadd set2{t} e f g 111 222 333\n        set expected {}\n        for {set i 1} {$i < 130} {incr i} {\n            r sadd 
set1{t} $i\n            r sadd set2{t} $i\n            lappend expected $i\n        }\n        assert_encoding hashtable set1{t}\n        assert_encoding hashtable set2{t}\n        r sinterstore setres{t} set1{t} set2{t}\n        assert_equal [lsort $expected] [lsort [r smembers setres{t}]]\n        assert_encoding intset setres{t}\n    }\n\n    test \"SUNION hashtable and listpack\" {\n        # This adds code coverage for adding a non-sds string to a hashtable set\n        # which already contains the string.\n        r del set1{t} set2{t}\n        set union {abcdefghijklmnopqrstuvwxyz1234567890 a b c 1 2 3}\n        create_set set1{t} $union\n        create_set set2{t} {a b c}\n        assert_encoding hashtable set1{t}\n        assert_encoding listpack set2{t}\n        assert_equal [lsort $union] [lsort [r sunion set1{t} set2{t}]]\n    }\n\n    test \"SDIFF with first set empty\" {\n        r del set1{t} set2{t} set3{t}\n        r sadd set2{t} 1 2 3 4\n        r sadd set3{t} a b c d\n        r sdiff set1{t} set2{t} set3{t}\n    } {}\n\n    test \"SDIFF with same set two times\" {\n        r del set1\n        r sadd set1 a b c 1 2 3 4 5 6\n        r sdiff set1 set1\n    } {}\n\n    test \"SDIFF fuzzing\" {\n        for {set j 0} {$j < 100} {incr j} {\n            unset -nocomplain s\n            array set s {}\n            set args {}\n            set num_sets [expr {[randomInt 10]+1}]\n            for {set i 0} {$i < $num_sets} {incr i} {\n                set num_elements [randomInt 100]\n                r del set_$i{t}\n                lappend args set_$i{t}\n                while {$num_elements} {\n                    set ele [randomValue]\n                    r sadd set_$i{t} $ele\n                    if {$i == 0} {\n                        set s($ele) x\n                    } else {\n                        unset -nocomplain s($ele)\n                    }\n                    incr num_elements -1\n                }\n            }\n            set result 
[lsort [r sdiff {*}$args]]\n            assert_equal $result [lsort [array names s]]\n        }\n    }\n\n    test \"SDIFF against non-set should throw error\" {\n        # with an empty set\n        r set key1{t} x\n        assert_error \"WRONGTYPE*\" {r sdiff key1{t} noset{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sdiff noset{t} key1{t}}\n\n        # with a legal set\n        r del set1{t}\n        r sadd set1{t} a b c\n        assert_error \"WRONGTYPE*\" {r sdiff key1{t} set1{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sdiff set1{t} key1{t}}\n    }\n\n    test \"SDIFF should handle non existing key as empty\" {\n        r del set1{t} set2{t} set3{t}\n\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        assert_equal {a} [lsort [r sdiff set1{t} set2{t} set3{t}]]\n        assert_equal {} [lsort [r sdiff set3{t} set2{t} set1{t}]]\n    }\n\n    test \"SDIFFSTORE against non-set should throw error\" {\n        r del set1{t} set2{t} set3{t} key1{t}\n        r set key1{t} x\n\n        # with an empty dstkey\n        assert_error \"WRONGTYPE*\" {r SDIFFSTORE set3{t} key1{t} noset{t}}\n        assert_equal 0 [r exists set3{t}]\n        assert_error \"WRONGTYPE*\" {r SDIFFSTORE set3{t} noset{t} key1{t}}\n        assert_equal 0 [r exists set3{t}]\n\n        # with a legal dstkey\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        r sadd set3{t} e\n        assert_error \"WRONGTYPE*\" {r SDIFFSTORE set3{t} key1{t} set1{t} noset{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n\n        assert_error \"WRONGTYPE*\" {r SDIFFSTORE set3{t} set1{t} key1{t} set2{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n    }\n\n    test \"SDIFFSTORE should handle non existing key as empty\" {\n        r del set1{t} set2{t} set3{t}\n\n        r set setres{t} xxx\n        assert_equal 0 [r 
sdiffstore setres{t} foo111{t} bar222{t}]\n        assert_equal 0 [r exists setres{t}]\n\n        # with a legal dstkey, should delete dstkey\n        r sadd set3{t} a b c\n        assert_equal 0 [r sdiffstore set3{t} set1{t} set2{t}]\n        assert_equal 0 [r exists set3{t}]\n\n        r sadd set1{t} a b c\n        assert_equal 3 [r sdiffstore set3{t} set1{t} set2{t}]\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {a b c} [lsort [r smembers set3{t}]]\n\n        # with a legal dstkey and empty set2, should delete the dstkey\n        r sadd set3{t} a b c\n        assert_equal 0 [r sdiffstore set3{t} set2{t} set1{t}]\n        assert_equal 0 [r exists set3{t}]\n    }\n\n    test \"SINTER against non-set should throw error\" {\n        r set key1{t} x\n        assert_error \"WRONGTYPE*\" {r sinter key1{t} noset{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sinter noset{t} key1{t}}\n\n        r sadd set1{t} a b c\n        assert_error \"WRONGTYPE*\" {r sinter key1{t} set1{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sinter set1{t} key1{t}}\n    }\n\n    test \"SINTER should handle non existing key as empty\" {\n        r del set1{t} set2{t} set3{t}\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        r sinter set1{t} set2{t} set3{t}\n    } {}\n\n    test \"SINTER with same integer elements but different encoding\" {\n        r del set1{t} set2{t}\n        r sadd set1{t} 1 2 3\n        r sadd set2{t} 1 2 3 a\n        r srem set2{t} a\n        assert_encoding intset set1{t}\n        assert_encoding listpack set2{t}\n        lsort [r sinter set1{t} set2{t}]\n    } {1 2 3}\n\n    test \"SINTERSTORE against non-set should throw error\" {\n        r del set1{t} set2{t} set3{t} key1{t}\n        r set key1{t} x\n\n        # with an empty dstkey\n        assert_error \"WRONGTYPE*\" {r sinterstore set3{t} key1{t} noset{t}}\n        assert_equal 0 [r exists set3{t}]\n        assert_error 
\"WRONGTYPE*\" {r sinterstore set3{t} noset{t} key1{t}}\n        assert_equal 0 [r exists set3{t}]\n\n        # with a legal dstkey\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        r sadd set3{t} e\n        assert_error \"WRONGTYPE*\" {r sinterstore set3{t} key1{t} set2{t} noset{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n\n        assert_error \"WRONGTYPE*\" {r sinterstore set3{t} noset{t} key1{t} set2{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n    }\n\n    test \"SINTERSTORE against non existing keys should delete dstkey\" {\n        r del set1{t} set2{t} set3{t}\n\n        r set setres{t} xxx\n        assert_equal 0 [r sinterstore setres{t} foo111{t} bar222{t}]\n        assert_equal 0 [r exists setres{t}]\n\n        # with a legal dstkey\n        r sadd set3{t} a b c\n        assert_equal 0 [r sinterstore set3{t} set1{t} set2{t}]\n        assert_equal 0 [r exists set3{t}]\n\n        r sadd set1{t} a b c\n        assert_equal 0 [r sinterstore set3{t} set1{t} set2{t}]\n        assert_equal 0 [r exists set3{t}]\n\n        assert_equal 0 [r sinterstore set3{t} set2{t} set1{t}]\n        assert_equal 0 [r exists set3{t}]\n    }\n\n    test \"SUNION against non-set should throw error\" {\n        r set key1{t} x\n        assert_error \"WRONGTYPE*\" {r sunion key1{t} noset{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sunion noset{t} key1{t}}\n\n        r del set1{t}\n        r sadd set1{t} a b c\n        assert_error \"WRONGTYPE*\" {r sunion key1{t} set1{t}}\n        # different order\n        assert_error \"WRONGTYPE*\" {r sunion set1{t} key1{t}}\n    }\n\n    test \"SUNION should handle non existing key as empty\" {\n        r del set1{t} set2{t} set3{t}\n\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        assert_equal {a b c d} [lsort [r sunion set1{t} set2{t} set3{t}]]\n    }\n\n    
test \"SUNIONSTORE against non-set should throw error\" {\n        r del set1{t} set2{t} set3{t} key1{t}\n        r set key1{t} x\n\n        # with an empty dstkey\n        assert_error \"WRONGTYPE*\" {r sunionstore set3{t} key1{t} noset{t}}\n        assert_equal 0 [r exists set3{t}]\n        assert_error \"WRONGTYPE*\" {r sunionstore set3{t} noset{t} key1{t}}\n        assert_equal 0 [r exists set3{t}]\n\n        # with a legal dstkey\n        r sadd set1{t} a b c\n        r sadd set2{t} b c d\n        r sadd set3{t} e\n        assert_error \"WRONGTYPE*\" {r sunionstore set3{t} key1{t} key2{t} noset{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n\n        assert_error \"WRONGTYPE*\" {r sunionstore set3{t} noset{t} key1{t} key2{t}}\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {e} [lsort [r smembers set3{t}]]\n    }\n\n    test \"SUNIONSTORE should handle non existing key as empty\" {\n        r del set1{t} set2{t} set3{t}\n\n        r set setres{t} xxx\n        assert_equal 0 [r sunionstore setres{t} foo111{t} bar222{t}]\n        assert_equal 0 [r exists setres{t}]\n\n        # set1 set2 both empty, should delete the dstkey\n        r sadd set3{t} a b c\n        assert_equal 0 [r sunionstore set3{t} set1{t} set2{t}]\n        assert_equal 0 [r exists set3{t}]\n\n        r sadd set1{t} a b c\n        r sadd set3{t} e f\n        assert_equal 3 [r sunionstore set3{t} set1{t} set2{t}]\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {a b c} [lsort [r smembers set3{t}]]\n\n        r sadd set3{t} d\n        assert_equal 3 [r sunionstore set3{t} set2{t} set1{t}]\n        assert_equal 1 [r exists set3{t}]\n        assert_equal {a b c} [lsort [r smembers set3{t}]]\n    }\n\n    test \"SUNIONSTORE against non existing keys should delete dstkey\" {\n        r set setres{t} xxx\n        assert_equal 0 [r sunionstore setres{t} foo111{t} bar222{t}]\n        assert_equal 0 [r exists 
setres{t}]\n    }\n\n    foreach {type contents} {listpack {a b c} intset {1 2 3}} {\n        test \"SPOP basics - $type\" {\n            create_set myset $contents\n            assert_encoding $type myset\n            assert_equal $contents [lsort [list [r spop myset] [r spop myset] [r spop myset]]]\n            assert_equal 0 [r scard myset]\n        }\n\n        test \"SPOP with <count>=1 - $type\" {\n            create_set myset $contents\n            assert_encoding $type myset\n            assert_equal $contents [lsort [list [r spop myset 1] [r spop myset 1] [r spop myset 1]]]\n            assert_equal 0 [r scard myset]\n        }\n\n        test \"SRANDMEMBER - $type\" {\n            create_set myset $contents\n            unset -nocomplain myset\n            array set myset {}\n            for {set i 0} {$i < 100} {incr i} {\n                set myset([r srandmember myset]) 1\n            }\n            assert_equal $contents [lsort [array names myset]]\n        }\n    }\n\n    test \"SPOP integer from listpack set\" {\n        create_set myset {a 1 2 3 4 5 6 7}\n        assert_encoding listpack myset\n        set a [r spop myset]\n        set b [r spop myset]\n        assert {[string is digit $a] || [string is digit $b]}\n    }\n\n    foreach {type contents} {\n        listpack {a b c d e f g h i j k l m n o p q r s t u v w x y z}\n        intset {1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 3 4 5 6 7 8 9}\n        hashtable {ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 b c d e f g h i j k l m n o p q r s t u v w x y z}\n    } {\n        test \"SPOP with <count> - $type\" {\n            create_set myset $contents\n            assert_encoding $type myset\n            assert_equal $contents [lsort [concat [r spop myset 11] [r spop myset 9] [r spop myset 0] [r spop myset 4] [r spop myset 1] [r spop myset 0] [r spop myset 1] [r spop myset 0]]]\n            assert_equal 0 [r scard myset]\n        }\n    }\n\n    # As seen in intsetRandomMembers\n    test 
\"SPOP using integers, testing Knuth's and Floyd's algorithm\" {\n        create_set myset {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}\n        assert_encoding intset myset\n        assert_equal 20 [r scard myset]\n        r spop myset 1\n        assert_equal 19 [r scard myset]\n        r spop myset 2\n        assert_equal 17 [r scard myset]\n        r spop myset 3\n        assert_equal 14 [r scard myset]\n        r spop myset 10\n        assert_equal 4 [r scard myset]\n        r spop myset 10\n        assert_equal 0 [r scard myset]\n        r spop myset 1\n        assert_equal 0 [r scard myset]\n    } {}\n\n    test \"SPOP using integers with Knuth's algorithm\" {\n        r spop nonexisting_key 100\n    } {}\n\n    foreach {type content} {\n        intset   {1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}\n        listpack {a 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20}\n    } {\n    test \"SPOP new implementation: code path #1 $type\" {\n        create_set myset $content\n        assert_encoding $type myset\n        set res [r spop myset 30]\n        assert {[lsort $content] eq [lsort $res]}\n        assert_equal {0} [r exists myset]\n    }\n\n    test \"SPOP new implementation: code path #2 $type\" {\n        create_set myset $content\n        assert_encoding $type myset\n        set res [r spop myset 2]\n        assert {[llength $res] == 2}\n        assert {[r scard myset] == 18}\n        set union [concat [r smembers myset] $res]\n        assert {[lsort $union] eq [lsort $content]}\n    }\n\n    test \"SPOP new implementation: code path #3 $type\" {\n        create_set myset $content\n        assert_encoding $type myset\n        set res [r spop myset 18]\n        assert {[llength $res] == 18}\n        assert {[r scard myset] == 2}\n        set union [concat [r smembers myset] $res]\n        assert {[lsort $union] eq [lsort $content]}\n    }\n    }\n\n    test \"SPOP new implementation: code path #1 propagate as DEL or UNLINK\" {\n        r 
del myset1{t} myset2{t}\n        r sadd myset1{t} 1 2 3 4 5\n        r sadd myset2{t} 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65\n\n        set repl [attach_to_replication_stream]\n\n        r config set lazyfree-lazy-server-del no\n        r spop myset1{t} [r scard myset1{t}]\n        r config set lazyfree-lazy-server-del yes\n        r spop myset2{t} [r scard myset2{t}]\n        assert_equal {0} [r exists myset1{t} myset2{t}]\n\n        # Verify the propagation of DEL and UNLINK.\n        assert_replication_stream $repl {\n            {select *}\n            {del myset1{t}}\n            {unlink myset2{t}}\n        }\n\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test \"SRANDMEMBER count of 0 is handled correctly\" {\n        r srandmember myset 0\n    } {}\n\n    test \"SRANDMEMBER with <count> against non existing key\" {\n        r srandmember nonexisting_key 100\n    } {}\n\n    test \"SRANDMEMBER count overflow\" {\n        r sadd myset a\n        assert_error {*value is out of range*} {r srandmember myset -9223372036854775808}\n    } {}\n\n    # Make sure we can distinguish between an empty array and a null response\n    r readraw 1\n\n    test \"SRANDMEMBER count of 0 is handled correctly - emptyarray\" {\n        r srandmember myset 0\n    } {*0}\n\n    test \"SRANDMEMBER with <count> against non existing key - emptyarray\" {\n        r srandmember nonexisting_key 100\n    } {*0}\n\n    r readraw 0\n\n    foreach {type contents} {\n        listpack {\n            1 5 10 50 125 50000 33959417 4775547 65434162\n            12098459 427716 483706 2726473884 72615637475\n            MARY PATRICIA LINDA BARBARA ELIZABETH JENNIFER MARIA\n            SUSAN MARGARET DOROTHY LISA NANCY KAREN BETTY HELEN\n            SANDRA DONNA CAROL RUTH SHARON MICHELLE LAURA SARAH\n            KIMBERLY DEBORAH 
JESSICA SHIRLEY CYNTHIA ANGELA MELISSA\n            BRENDA AMY ANNA REBECCA VIRGINIA KATHLEEN\n        }\n        intset {\n            0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19\n            20 21 22 23 24 25 26 27 28 29\n            30 31 32 33 34 35 36 37 38 39\n            40 41 42 43 44 45 46 47 48 49\n        }\n        hashtable {\n            ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n            1 5 10 50 125 50000 33959417 4775547 65434162\n            12098459 427716 483706 2726473884 72615637475\n            MARY PATRICIA LINDA BARBARA ELIZABETH JENNIFER MARIA\n            SUSAN MARGARET DOROTHY LISA NANCY KAREN BETTY HELEN\n            SANDRA DONNA CAROL RUTH SHARON MICHELLE LAURA SARAH\n            KIMBERLY DEBORAH JESSICA SHIRLEY CYNTHIA ANGELA MELISSA\n            BRENDA AMY ANNA REBECCA VIRGINIA\n        }\n    } {\n        test \"SRANDMEMBER with <count> - $type\" {\n            create_set myset $contents\n            assert_encoding $type myset\n            unset -nocomplain myset\n            array set myset {}\n            foreach ele [r smembers myset] {\n                set myset($ele) 1\n            }\n            assert_equal [lsort $contents] [lsort [array names myset]]\n\n            # Make sure that a count of 0 is handled correctly.\n            assert_equal [r srandmember myset 0] {}\n\n            # We'll stress different parts of the code, see the implementation\n            # of SRANDMEMBER for more information, but basically there are\n            # four different code paths.\n            #\n            # PATH 1: Use negative count.\n            #\n            # 1) Check that it returns repeated elements.\n            set res [r srandmember myset -100]\n            assert_equal [llength $res] 100\n\n            # 2) Check that all the elements actually belong to the\n            # original set.\n            foreach ele $res {\n                assert {[info exists myset($ele)]}\n            }\n\n            # 3) Check that 
eventually all the elements are returned.\n            unset -nocomplain auxset\n            set iterations 1000\n            while {$iterations != 0} {\n                incr iterations -1\n                set res [r srandmember myset -10]\n                foreach ele $res {\n                    set auxset($ele) 1\n                }\n                if {[lsort [array names myset]] eq\n                    [lsort [array names auxset]]} {\n                    break;\n                }\n            }\n            assert {$iterations != 0}\n\n            # PATH 2: positive count (unique behavior) with requested size\n            # equal to or greater than the set size.\n            foreach size {50 100} {\n                set res [r srandmember myset $size]\n                assert_equal [llength $res] 50\n                assert_equal [lsort $res] [lsort [array names myset]]\n            }\n\n            # PATH 3: Ask for almost as many elements as there are in the set.\n            # In this case the implementation will duplicate the original\n            # set and will remove random elements up to the requested size.\n            #\n            # PATH 4: Ask for a number of elements definitely smaller than\n            # the set size.\n            #\n            # We can exercise both code paths with the same code, just by\n            # changing the size.\n\n            foreach size {45 5} {\n                set res [r srandmember myset $size]\n                assert_equal [llength $res] $size\n\n                # 1) Check that all the elements actually belong to the\n                # original set.\n                foreach ele $res {\n                    assert {[info exists myset($ele)]}\n                }\n\n                # 2) Check that eventually all the elements are returned.\n                unset -nocomplain auxset\n                set iterations 1000\n                while {$iterations != 0} {\n                    incr iterations -1\n                    set res [r srandmember 
myset $size]\n                    foreach ele $res {\n                        set auxset($ele) 1\n                    }\n                    if {[lsort [array names myset]] eq\n                        [lsort [array names auxset]]} {\n                        break;\n                    }\n                }\n                assert {$iterations != 0}\n            }\n        }\n    }\n\n    foreach {type contents} {\n        listpack {\n            1 5 10 50 125\n            MARY PATRICIA LINDA BARBARA ELIZABETH\n        }\n        intset {\n            0 1 2 3 4 5 6 7 8 9\n        }\n        hashtable {\n            ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\n            1 5 10 50 125\n            MARY PATRICIA LINDA BARBARA\n        }\n    } {\n        test \"SRANDMEMBER histogram distribution - $type\" {\n            create_set myset $contents\n            assert_encoding $type myset\n            unset -nocomplain myset\n            array set myset {}\n            foreach ele [r smembers myset] {\n                set myset($ele) 1\n            }\n\n            # Use negative count (PATH 1).\n            # df = 9, 40 means 0.00001 probability\n            set res [r srandmember myset -1000]\n            assert_lessthan [chi_square_value $res] 40\n\n            # Use positive count (both PATH 3 and PATH 4).\n            foreach size {8 2} {\n                unset -nocomplain allkey\n                set iterations [expr {1000 / $size}]\n                while {$iterations != 0} {\n                    incr iterations -1\n                    set res [r srandmember myset $size]\n                    foreach ele $res {\n                        lappend allkey $ele\n                    }\n                }\n                # df = 9, 40 means 0.00001 probability\n                assert_lessthan [chi_square_value $allkey] 40\n            }\n        }\n    }\n\n    proc is_rehashing {myset} {\n        set htstats [r debug HTSTATS-KEY $myset]\n        return [string match {*rehashing 
target*} $htstats]\n    }\n\n    proc rem_hash_set_top_N {myset n} {\n        set cursor 0\n        set members {}\n        set enough 0\n        while 1 {\n            set res [r sscan $myset $cursor]\n            set cursor [lindex $res 0]\n            set k [lindex $res 1]\n            foreach m $k {\n                lappend members $m\n                if {[llength $members] >= $n} {\n                    set enough 1\n                    break\n                }\n            }\n            if {$enough || $cursor == 0} {\n                break\n            }\n        }\n        r deferred 1\n        set count 0\n        foreach m $members {\n            r srem $myset $m\n            incr count\n            if {$count == 500} {\n                for {set i 0} {$i < 500} {incr i} {\n                    r read\n                }\n                set count 0\n            }\n        }\n        for {set i 0} {$i < $count} {incr i} {\n            r read\n        }\n        r deferred 0\n    }\n\n    proc verify_rehashing_completed_key {myset table_size keys} {\n        set htstats [r debug HTSTATS-KEY $myset]\n        assert {![string match {*rehashing target*} $htstats]}\n        # Use double quotes so $table_size and $keys are substituted, and\n        # actually assert the match rather than returning an unevaluated script.\n        assert_match \"*table size: $table_size*number of elements: $keys*\" $htstats\n    }\n\n    test \"SRANDMEMBER with a dict containing long chain\" {\n        set origin_save [config_get_set save \"\"]\n        set origin_max_lp [config_get_set set-max-listpack-entries 0]\n        set origin_save_delay [config_get_set rdb-key-save-delay 2147483647]\n\n        # 1) Create a hash set with 100000 members.\n        set members {}\n        for {set i 0} {$i < 100000} {incr i} {\n            lappend members [format \"m:%d\" $i]\n        }\n        create_set myset $members\n\n        # 2) Wait for the hash set rehashing to finish.\n        while {[is_rehashing myset]} {\n            r srandmember myset 100\n        }\n\n        # 3) Turn off the rehashing of this set, and reduce the set 
to 500.\n        r bgsave\n        rem_hash_set_top_N myset [expr {[r scard myset] - 500}]\n        assert_equal [r scard myset] 500\n\n        # 4) Kill RDB child process to restart rehashing.\n        set pid1 [get_child_pid 0]\n        catch {exec kill -9 $pid1}\n        waitForBgsave r\n\n        # 5) Let the hash set start rehashing.\n        r spop myset 1\n        assert [is_rehashing myset]\n\n        # 6) Verify that when RDB saving is in progress, rehashing will still be performed (because\n        # the ratio is extreme) by waiting for it to finish during an active bgsave.\n        r bgsave\n\n        while {[is_rehashing myset]} {\n            r srandmember myset 1\n        }\n        if {$::verbose} {\n            puts [r debug HTSTATS-KEY myset full]\n        }\n\n        set pid1 [get_child_pid 0]\n        catch {exec kill -9 $pid1}\n        waitForBgsave r\n\n        # 7) Check that eventually, SRANDMEMBER returns all elements.\n        array set allmyset {}\n        foreach ele [r smembers myset] {\n            set allmyset($ele) 1\n        }\n        unset -nocomplain auxset\n        set iterations 1000\n        while {$iterations != 0} {\n            incr iterations -1\n            set res [r srandmember myset -10]\n            foreach ele $res {\n                set auxset($ele) 1\n            }\n            if {[lsort [array names allmyset]] eq\n                [lsort [array names auxset]]} {\n                break;\n            }\n        }\n        assert {$iterations != 0}\n\n        # 8) Reduce the set to 30 members in order to calculate the Chi-Square value,\n        #    otherwise we would need more iterations.\n        rem_hash_set_top_N myset [expr {[r scard myset] - 30}]\n        assert_equal [r scard myset] 30\n\n        # Hash set rehashing completes while members are being removed from `myset`.\n        # We also check the size and member count of the hash table.\n        verify_rehashing_completed_key myset 64 
30\n\n        # Now that we have a hash set with only one long chain bucket.\n        set htstats [r debug HTSTATS-KEY myset full]\n        assert {[regexp {different slots: ([0-9]+)} $htstats - different_slots]}\n        assert {[regexp {max chain length: ([0-9]+)} $htstats - max_chain_length]}\n        assert {$different_slots == 1 && $max_chain_length == 30}\n\n        # 9) Use positive count (PATH 4) to get 10 elements (out of 30) each time.\n        unset -nocomplain allkey\n        set iterations 1000\n        while {$iterations != 0} {\n            incr iterations -1\n            set res [r srandmember myset 10]\n            foreach ele $res {\n                lappend allkey $ele\n            }\n        }\n        # validate even distribution of random sampling (df = 29, 73 means 0.00001 probability)\n        assert_lessthan [chi_square_value $allkey] 73\n\n        r config set save $origin_save\n        r config set set-max-listpack-entries $origin_max_lp\n        r config set rdb-key-save-delay $origin_save_delay\n    } {OK} {needs:debug slow debug_defrag:skip}\n\n    proc setup_move {} {\n        r del myset3{t} myset4{t}\n        create_set myset1{t} {1 a b}\n        create_set myset2{t} {2 3 4}\n        assert_encoding listpack myset1{t}\n        assert_encoding intset myset2{t}\n    }\n\n    test \"SMOVE basics - from regular set to intset\" {\n        # move a non-integer element to an intset should convert encoding\n        setup_move\n        assert_equal 1 [r smove myset1{t} myset2{t} a]\n        assert_equal {1 b} [lsort [r smembers myset1{t}]]\n        assert_equal {2 3 4 a} [lsort [r smembers myset2{t}]]\n        assert_encoding listpack myset2{t}\n\n        # move an integer element should not convert the encoding\n        setup_move\n        assert_equal 1 [r smove myset1{t} myset2{t} 1]\n        assert_equal {a b} [lsort [r smembers myset1{t}]]\n        assert_equal {1 2 3 4} [lsort [r smembers myset2{t}]]\n        assert_encoding intset 
myset2{t}\n    }\n\n    test \"SMOVE basics - from intset to regular set\" {\n        setup_move\n        assert_equal 1 [r smove myset2{t} myset1{t} 2]\n        assert_equal {1 2 a b} [lsort [r smembers myset1{t}]]\n        assert_equal {3 4} [lsort [r smembers myset2{t}]]\n    }\n\n    test \"SMOVE non existing key\" {\n        setup_move\n        assert_equal 0 [r smove myset1{t} myset2{t} foo]\n        assert_equal 0 [r smove myset1{t} myset1{t} foo]\n        assert_equal {1 a b} [lsort [r smembers myset1{t}]]\n        assert_equal {2 3 4} [lsort [r smembers myset2{t}]]\n    }\n\n    test \"SMOVE non existing src set\" {\n        setup_move\n        assert_equal 0 [r smove noset{t} myset2{t} foo]\n        assert_equal {2 3 4} [lsort [r smembers myset2{t}]]\n    }\n\n    test \"SMOVE from regular set to non existing destination set\" {\n        setup_move\n        assert_equal 1 [r smove myset1{t} myset3{t} a]\n        assert_equal {1 b} [lsort [r smembers myset1{t}]]\n        assert_equal {a} [lsort [r smembers myset3{t}]]\n        assert_encoding listpack myset3{t}\n    }\n\n    test \"SMOVE from intset to non existing destination set\" {\n        setup_move\n        assert_equal 1 [r smove myset2{t} myset3{t} 2]\n        assert_equal {3 4} [lsort [r smembers myset2{t}]]\n        assert_equal {2} [lsort [r smembers myset3{t}]]\n        assert_encoding intset myset3{t}\n    }\n\n    test \"SMOVE wrong src key type\" {\n        r set x{t} 10\n        assert_error \"WRONGTYPE*\" {r smove x{t} myset2{t} foo}\n    }\n\n    test \"SMOVE wrong dst key type\" {\n        r set x{t} 10\n        assert_error \"WRONGTYPE*\" {r smove myset2{t} x{t} foo}\n    }\n\n    test \"SMOVE with identical source and destination\" {\n        r del set{t}\n        r sadd set{t} a b c\n        r smove set{t} set{t} b\n        lsort [r smembers set{t}]\n    } {a b c}\n\n    test \"SMOVE only notify dstset when the addition is successful\" {\n        r del srcset{t}\n        r del 
dstset{t}\n\n        r sadd srcset{t} a b\n        r sadd dstset{t} a\n\n        r watch dstset{t}\n\n        r multi\n        r sadd dstset{t} c\n\n        set r2 [redis_client]\n        $r2 smove srcset{t} dstset{t} a\n\n        # The dstset is actually unchanged, so the MULTI should succeed\n        r exec\n        set res [r scard dstset{t}]\n        assert_equal $res 2\n        $r2 close\n    }\n\n    tags {slow} {\n        test {intsets implementation stress testing} {\n            for {set j 0} {$j < 20} {incr j} {\n                unset -nocomplain s\n                array set s {}\n                r del s\n                set len [randomInt 1024]\n                for {set i 0} {$i < $len} {incr i} {\n                    randpath {\n                        set data [randomInt 65536]\n                    } {\n                        set data [randomInt 4294967296]\n                    } {\n                        set data [randomInt 18446744073709551616]\n                    }\n                    set s($data) {}\n                    r sadd s $data\n                }\n                assert_equal [lsort [r smembers s]] [lsort [array names s]]\n                set len [array size s]\n                for {set i 0} {$i < $len} {incr i} {\n                    set e [r spop s]\n                    if {![info exists s($e)]} {\n                        puts \"Can't find '$e' on local array\"\n                        puts \"Local array: [lsort [array names s]]\"\n                        puts \"Remote array: [lsort [r smembers s]]\"\n                        error \"exception\"\n                    }\n                    array unset s $e\n                }\n                assert_equal [r scard s] 0\n                assert_equal [array size s] 0\n            }\n        }\n    }\n}\n\nrun_solo {set-large-memory} {\nstart_server [list overrides [list save \"\"] ] {\n\n# test if the server supports such large configs (avoid 32 bit builds)\ncatch {\n    r config set 
proto-max-bulk-len 10000000000 ;#10gb\n    r config set client-query-buffer-limit 10000000000 ;#10gb\n}\nif {[lindex [r config get proto-max-bulk-len] 1] == 10000000000} {\n\n    set str_length 4400000000 ;#~4.4GB\n\n    test {SADD, SCARD, SISMEMBER - large data} {\n        r flushdb\n        r write \"*3\\r\\n\\$4\\r\\nSADD\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 1 [write_big_bulk $str_length \"aaa\"]\n        r write \"*3\\r\\n\\$4\\r\\nSADD\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 1 [write_big_bulk $str_length \"bbb\"]\n        r write \"*3\\r\\n\\$4\\r\\nSADD\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 0 [write_big_bulk $str_length \"aaa\"]\n        assert_encoding hashtable myset\n        set s0 [s used_memory]\n        assert {$s0 > [expr $str_length * 2]}\n        assert_equal 2 [r scard myset]\n\n        r write \"*3\\r\\n\\$9\\r\\nSISMEMBER\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 1 [write_big_bulk $str_length \"aaa\"]\n        r write \"*3\\r\\n\\$9\\r\\nSISMEMBER\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 0 [write_big_bulk $str_length \"ccc\"]\n        r write \"*3\\r\\n\\$4\\r\\nSREM\\r\\n\\$5\\r\\nmyset\\r\\n\"\n        assert_equal 1 [write_big_bulk $str_length \"bbb\"]\n        assert_equal [read_big_bulk {r spop myset} yes \"aaa\"] $str_length\n    } {} {large-memory}\n\n    # restore defaults\n    r config set proto-max-bulk-len 536870912\n    r config set client-query-buffer-limit 1073741824\n\n} ;# skip 32bit builds\n}\n} ;# run_solo\n"
  },
  {
    "path": "tests/unit/type/stream-cgroups.tcl",
    "content": "start_server {\n    tags {\"stream\"}\n} {\n    test {XGROUP CREATE: creation and duplicate group name detection} {\n        r DEL mystream\n        r XADD mystream * foo bar\n        r XGROUP CREATE mystream mygroup $\n        catch {r XGROUP CREATE mystream mygroup $} err\n        set err\n    } {BUSYGROUP*}\n\n    test {XGROUP CREATE: with ENTRIESREAD parameter} {\n        r DEL mystream\n        r XADD mystream 1-1 a 1\n        r XADD mystream 1-2 b 2\n        r XADD mystream 1-3 c 3\n        r XADD mystream 1-4 d 4\n        assert_error \"*value for ENTRIESREAD must be positive or -1*\" {r XGROUP CREATE mystream mygroup $ ENTRIESREAD -3}\n\n        r XGROUP CREATE mystream mygroup1 $ ENTRIESREAD 0\n        r XGROUP CREATE mystream mygroup2 $ ENTRIESREAD 3\n\n        set reply [r xinfo groups mystream]\n        foreach group_info $reply {\n            set group_name [dict get $group_info name]\n            set entries_read [dict get $group_info entries-read]\n            if {$group_name == \"mygroup1\"} {\n                assert_equal $entries_read 0\n            } else {\n                assert_equal $entries_read 3\n            }\n        }\n    }\n\n    test {XGROUP CREATE: automatic stream creation fails without MKSTREAM} {\n        r DEL mystream\n        catch {r XGROUP CREATE mystream mygroup $} err\n        set err\n    } {ERR*}\n\n    test {XGROUP CREATE: automatic stream creation works with MKSTREAM} {\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n    } {OK}\n\n    test {XREADGROUP basic argument count validation} {\n        # Too few arguments\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP}\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP GROUP}\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP GROUP mygroup}\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP GROUP mygroup consumer}\n        assert_error \"*wrong number of 
arguments*\" {r XREADGROUP GROUP mygroup consumer STREAMS}\n    }\n\n    test {XREADGROUP GROUP keyword validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Missing GROUP keyword entirely - wrong syntax\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP mygroup consumer STREAMS mystream >}\n        \n        # Wrong keyword instead of GROUP\n        assert_error \"*syntax error*\" {r XREADGROUP GROUPS mygroup consumer STREAMS mystream >}\n    }\n\n    test {XREADGROUP empty group name handling} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Empty group name should give NOGROUP error\n        assert_error \"*NOGROUP*\" {r XREADGROUP GROUP \"\" consumer STREAMS mystream >}\n    }\n\n    test {XREADGROUP STREAMS keyword validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Missing STREAMS keyword\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP GROUP mygroup consumer mystream >}\n        \n        # Wrong keyword\n        assert_error \"*syntax error*\" {r XREADGROUP GROUP mygroup consumer STREAM mystream >}\n    }\n\n    test {XREADGROUP stream and ID pairing} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Missing stream ID\n        assert_error \"*wrong number of arguments*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream}\n        \n        # Unbalanced streams and IDs\n        r DEL stream2\n        r XADD stream2 * field value\n        r XGROUP CREATE stream2 mygroup $\n        \n        assert_error \"*Unbalanced*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream > stream2}\n        assert_error \"*Unbalanced*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream stream2 >}\n 
       \n        r DEL stream2\n    }\n\n    test {XREADGROUP COUNT parameter validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Non-numeric count\n        assert_error \"*not an integer*\" {r XREADGROUP GROUP mygroup consumer COUNT abc STREAMS mystream >}\n        assert_error \"*not an integer*\" {r XREADGROUP GROUP mygroup consumer COUNT 1.5 STREAMS mystream >}\n    }\n\n    test {XREADGROUP BLOCK parameter validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Non-numeric block timeout\n        assert_error \"*not an integer*\" {r XREADGROUP GROUP mygroup consumer BLOCK abc STREAMS mystream >}\n        assert_error \"*not an integer*\" {r XREADGROUP GROUP mygroup consumer BLOCK 1.5 STREAMS mystream >}\n        \n        # Missing BLOCK value\n        assert_error \"*ERR timeout is not an integer or out of range*\" {r XREADGROUP GROUP mygroup consumer BLOCK STREAMS mystream >}\n        \n        # Negative timeout (typically not allowed)\n        assert_error \"*ERR timeout is negative*\" {r XREADGROUP GROUP mygroup consumer BLOCK -1 STREAMS mystream >}\n    }\n\n    test {XREADGROUP stream ID format validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Invalid ID formats should error\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream invalid-id}\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream 123-}\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream -123}\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream abc-def}\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS 
mystream --}\n        assert_error \"*Invalid stream ID*\" {r XREADGROUP GROUP mygroup consumer STREAMS mystream 123-abc}\n    }\n\n    test {XREADGROUP nonexistent group} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        assert_error \"*NOGROUP*\" {r XREADGROUP GROUP nonexistent consumer STREAMS mystream >}\n    }\n\n    test {XREADGROUP nonexistent stream with existing group} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Group doesn't exist on the nonexistent stream\n        assert_error \"*NOGROUP*\" {r XREADGROUP GROUP mygroup consumer STREAMS nonexistent >}\n    }\n\n    test {XREADGROUP wrong key type} {\n        r SET wrongtype \"not a stream\"\n        assert_error \"*WRONGTYPE*\" {r XREADGROUP GROUP mygroup consumer STREAMS wrongtype >}\n        r DEL wrongtype\n    }\n\n    test {XREADGROUP boundary value validation} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Test COUNT boundaries - values that are too large\n        assert_error \"*value is not an integer or out of range*\" {r XREADGROUP GROUP mygroup consumer COUNT 18446744073709551616 STREAMS mystream >}\n        \n        # Test BLOCK timeout boundaries - values that are too large  \n        assert_error \"*timeout is not an integer or out of range*\" {r XREADGROUP GROUP mygroup consumer BLOCK 18446744073709551616 STREAMS mystream >}\n    }\n\n    test {XREADGROUP malformed parameter syntax} {\n        r DEL mystream\n        r XADD mystream * field value\n        r XGROUP CREATE mystream mygroup $\n        \n        # Unknown parameters\n        assert_error \"*syntax error*\" {r XREADGROUP GROUP mygroup consumer INVALID param STREAMS mystream >}\n        assert_error \"*syntax error*\" {r XREADGROUP GROUP mygroup consumer TIMEOUT 1000 STREAMS mystream 
>}\n    }\n\n    test {XREADGROUP will return only new elements} {\n        r XADD mystream * a 1\n        r XADD mystream * b 2\n\n        # Verify XPENDING returns empty results when no messages are in the PEL.\n        assert_equal {0 {} {} {}} [r XPENDING mystream mygroup]\n        assert_equal {} [r XPENDING mystream mygroup - + 10]\n\n        # XREADGROUP should return only the new elements \"a 1\" \"b 2\"\n        # and not the element \"foo bar\" which was pre-existing in the\n        # stream (see previous test)\n        set reply [\n            r XREADGROUP GROUP mygroup consumer-1 STREAMS mystream \">\"\n        ]\n        assert {[llength [lindex $reply 0 1]] == 2}\n        lindex $reply 0 1 0 1\n    } {a 1}\n\n    test {XREADGROUP can read the history of the elements we own} {\n        # Add a few more elements\n        r XADD mystream * c 3\n        r XADD mystream * d 4\n        # Read a few elements using a different consumer name\n        set reply [\n            r XREADGROUP GROUP mygroup consumer-2 STREAMS mystream \">\"\n        ]\n        assert {[llength [lindex $reply 0 1]] == 2}\n        assert {[lindex $reply 0 1 0 1] eq {c 3}}\n\n        set r1 [r XREADGROUP GROUP mygroup consumer-1 COUNT 10 STREAMS mystream 0]\n        set r2 [r XREADGROUP GROUP mygroup consumer-2 COUNT 10 STREAMS mystream 0]\n        assert {[lindex $r1 0 1 0 1] eq {a 1}}\n        assert {[lindex $r2 0 1 0 1] eq {c 3}}\n    }\n\n    test {XPENDING is able to return pending items} {\n        set pending [r XPENDING mystream mygroup - + 10]\n        assert {[llength $pending] == 4}\n        for {set j 0} {$j < 4} {incr j} {\n            set item [lindex $pending $j]\n            if {$j < 2} {\n                set owner consumer-1\n            } else {\n                set owner consumer-2\n            }\n            assert {[lindex $item 1] eq $owner}\n        }\n    }\n\n    test {XPENDING can return single consumer items} 
{\n        set pending [r XPENDING mystream mygroup - + 10 consumer-1]\n        assert {[llength $pending] == 2}\n    }\n\n    test {XPENDING only group} {\n        set pending [r XPENDING mystream mygroup]\n        assert {[llength $pending] == 4}\n    }\n\n    test {XPENDING with IDLE} {\n        after 20\n        set pending [r XPENDING mystream mygroup IDLE 99999999 - + 10 consumer-1]\n        assert {[llength $pending] == 0}\n        set pending [r XPENDING mystream mygroup IDLE 1 - + 10 consumer-1]\n        assert {[llength $pending] == 2}\n        set pending [r XPENDING mystream mygroup IDLE 99999999 - + 10]\n        assert {[llength $pending] == 0}\n        set pending [r XPENDING mystream mygroup IDLE 1 - + 10]\n        assert {[llength $pending] == 4}\n    }\n\n    test {XPENDING with exclusive range intervals works as expected} {\n        set pending [r XPENDING mystream mygroup - + 10]\n        assert {[llength $pending] == 4}\n        set startid [lindex [lindex $pending 0] 0]\n        set endid [lindex [lindex $pending 3] 0]\n        set expending [r XPENDING mystream mygroup ($startid ($endid 10]\n        assert {[llength $expending] == 2}\n        for {set j 0} {$j < 2} {incr j} {\n            set itemid [lindex [lindex $expending $j] 0]\n            assert {$itemid ne $startid}\n            assert {$itemid ne $endid}\n        }\n    }\n\n    test {XACK is able to remove items from the consumer/group PEL} {\n        set pending [r XPENDING mystream mygroup - + 10 consumer-1]\n        set id1 [lindex $pending 0 0]\n        set id2 [lindex $pending 1 0]\n        assert {[r XACK mystream mygroup $id1] eq 1}\n        set pending [r XPENDING mystream mygroup - + 10 consumer-1]\n        assert {[llength $pending] == 1}\n        set id [lindex $pending 0 0]\n        assert {$id eq $id2}\n        set global_pel [r XPENDING mystream mygroup - + 10]\n        assert {[llength $global_pel] == 3}\n    }\n\n    test {XACK can't remove the same item multiple 
times} {\n        assert {[r XACK mystream mygroup $id1] eq 0}\n    }\n\n    test {XACK is able to accept multiple arguments} {\n        # One of the IDs was already removed, so it should ack\n        # just ID2.\n        assert {[r XACK mystream mygroup $id1 $id2] eq 1}\n    }\n\n    test {XACK should fail if given at least one invalid ID} {\n        r del mystream\n        r xgroup create s g $ MKSTREAM\n        r xadd s * f1 v1\n        set c [llength [lindex [r xreadgroup group g c streams s >] 0 1]]\n        assert {$c == 1}\n        set pending [r xpending s g - + 10 c]\n        set id1 [lindex $pending 0 0]\n        assert_error \"*Invalid stream ID specified*\" {r xack s g $id1 invalid-id}\n        assert {[r xack s g $id1] eq 1}\n    }\n\n    test {PEL NACK reassignment after XGROUP SETID event} {\n        r del events\n        r xadd events * f1 v1\n        r xadd events * f1 v1\n        r xadd events * f1 v1\n        r xadd events * f1 v1\n        r xgroup create events g1 $\n        r xadd events * f1 v1\n        set c [llength [lindex [r xreadgroup group g1 c1 streams events >] 0 1]]\n        assert {$c == 1}\n        r xgroup setid events g1 -\n        set c [llength [lindex [r xreadgroup group g1 c2 streams events >] 0 1]]\n        assert {$c == 5}\n    }\n\n    test {XREADGROUP will not report data on empty history. 
Bug #5577} {\n        r del events\n        r xadd events * a 1\n        r xadd events * b 2\n        r xadd events * c 3\n        r xgroup create events mygroup 0\n\n        # Current local PEL should be empty\n        set res [r xpending events mygroup - + 10]\n        assert {[llength $res] == 0}\n\n        # So XREADGROUP should read an empty history as well\n        set res [r xreadgroup group mygroup myconsumer count 3 streams events 0]\n        assert {[llength [lindex $res 0 1]] == 0}\n\n        # We should fetch all the elements in the stream asking for >\n        set res [r xreadgroup group mygroup myconsumer count 3 streams events >]\n        assert {[llength [lindex $res 0 1]] == 3}\n\n        # Now the history is populated with three not acked entries\n        set res [r xreadgroup group mygroup myconsumer count 3 streams events 0]\n        assert {[llength [lindex $res 0 1]] == 3}\n    }\n\n    test {XREADGROUP history reporting of deleted entries. Bug #5570} {\n        r del mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XADD mystream 1 field1 A\n        r XREADGROUP GROUP mygroup myconsumer STREAMS mystream >\n        r XADD mystream MAXLEN 1 2 field1 B\n        r XREADGROUP GROUP mygroup myconsumer STREAMS mystream >\n\n        # Now we have two pending entries, however one should be deleted\n        # and one should be ok (we should only see \"B\")\n        set res [r XREADGROUP GROUP mygroup myconsumer STREAMS mystream 0-1]\n        assert {[lindex $res 0 1 0] == {1-0 {}}}\n        assert {[lindex $res 0 1 1] == {2-0 {field1 B}}}\n    }\n\n    test {Blocking XREADGROUP will not reply with an empty array} {\n        r del mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XADD mystream 666 f v\n        set res [r XREADGROUP GROUP mygroup Alice BLOCK 10 STREAMS mystream \">\"]\n        assert {[lindex $res 0 1 0] == {666-0 {f v}}}\n        r XADD mystream 667 f2 v2\n        r XDEL mystream 667\n       
 set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 10 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 0\n        assert {[$rd read] == {}} ;# before the fix, client didn't even block, but was served synchronously with {mystream {}}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP: key deleted} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r DEL mystream\n        assert_error \"NOGROUP*\" {$rd read}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP: key type changed with SET} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r SET mystream val1\n        assert_error \"*WRONGTYPE*\" {$rd read}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP: key type changed with transaction} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r MULTI\n        r DEL mystream\n        r SADD mystream e1\n        r EXEC\n        assert_error \"*WRONGTYPE*\" {$rd read}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP: flushed DB} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r FLUSHALL\n        assert_error \"*NOGROUP*\" {$rd 
read}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP: swapped DB, key doesn't exist} {\n        r SELECT 4\n        r FLUSHDB\n        r SELECT 9\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd SELECT 9\n        $rd read\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r SWAPDB 4 9\n        assert_error \"*NOGROUP*\" {$rd read}\n        $rd close\n    } {0} {external:skip}\n\n    test {Blocking XREADGROUP: swapped DB, key is not a stream} {\n        r SELECT 4\n        r FLUSHDB\n        r LPUSH mystream e1\n        r SELECT 9\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        set rd [redis_deferring_client]\n        $rd SELECT 9\n        $rd read\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r SWAPDB 4 9\n        assert_error \"*WRONGTYPE*\" {$rd read}\n        $rd close\n    } {0} {external:skip}\n\n    test {XREAD and XREADGROUP against wrong parameter} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        r XGROUP CREATE mystream mygroup $\n        assert_error \"ERR Unbalanced 'xreadgroup' list of streams: for each stream key an ID or '>' must be specified.\" {r XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream }\n        assert_error \"ERR Unbalanced 'xread' list of streams: for each stream key an ID, '+', or '$' must be specified.\" {r XREAD COUNT 1 STREAMS mystream }\n    }\n\n    test {Blocking XREAD: key deleted} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 0 STREAMS mystream \"$\"\n        wait_for_blocked_clients_count 1\n        r DEL mystream\n\n        r XADD mystream 667 f v\n        set res [$rd read]\n        
assert_equal [lindex $res 0 1 0] {667-0 {f v}}\n        $rd close\n    }\n\n    test {Blocking XREAD: key type changed with SET} {\n        r DEL mystream\n        r XADD mystream 666 f v\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 0 STREAMS mystream \"$\"\n        wait_for_blocked_clients_count 1\n        r SET mystream val1\n\n        r DEL mystream\n        r XADD mystream 667 f v\n        set res [$rd read]\n        assert_equal [lindex $res 0 1 0] {667-0 {f v}}\n        $rd close\n    }\n\n    test {Blocking XREADGROUP for stream that ran dry (issue #5299)} {\n        set rd [redis_deferring_client]\n\n        # Add an entry, then delete it; now the stream's last_id is 666.\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XADD mystream 666 key value\n        r XDEL mystream 666\n\n        # Pass the special `>` ID with no new entries; released on timeout.\n        $rd XREADGROUP GROUP mygroup myconsumer BLOCK 10 STREAMS mystream >\n        assert_equal [$rd read] {}\n\n        # Throw an error if the ID is equal to or smaller than the last_id.\n        assert_error ERR*equal*smaller* {r XADD mystream 665 key value}\n        assert_error ERR*equal*smaller* {r XADD mystream 666 key value}\n\n        # Enters the blocking state and is then released by the new entry.\n        $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream >\n        wait_for_blocked_clients_count 1\n        r XADD mystream 667 key value\n        assert_equal [$rd read] {{mystream {{667-0 {key value}}}}}\n\n        $rd close\n    }\n\n    test \"Blocking XREADGROUP will ignore BLOCK if ID is not >\" {\n        set rd [redis_deferring_client]\n\n        # Add an entry, then delete it; now the stream's last_id is 666.\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XADD mystream 666 key value\n        r XDEL mystream 666\n\n        # Return right away instead of blocking, return the stream with an\n  
      # empty list instead of NIL if the ID specified is not the special `>` ID.\n        foreach id {0 600 666 700} {\n            $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream $id\n            assert_equal [$rd read] {{mystream {}}}\n        }\n\n        # After adding a new entry, `XREADGROUP BLOCK` still returns the stream\n        # with an empty list because the pending list is empty.\n        r XADD mystream 667 key value\n        foreach id {0 600 666 667 700} {\n            $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream $id\n            assert_equal [$rd read] {{mystream {}}}\n        }\n\n        # After we read it once, the pending list is no longer empty;\n        # passing any ID smaller than 667 will return one of the pending entries.\n        set res [r XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream >]\n        assert_equal $res {{mystream {{667-0 {key value}}}}}\n        foreach id {0 600 666} {\n            $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream $id\n            assert_equal [$rd read] {{mystream {{667-0 {key value}}}}}\n        }\n\n        # Passing an ID equal to or greater than 667 will return the stream with an empty list.\n        foreach id {667 700} {\n            $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream $id\n            assert_equal [$rd read] {{mystream {}}}\n        }\n\n        # After we ACK the pending entry, return the stream with an empty list.\n        r XACK mystream mygroup 667\n        foreach id {0 600 666 667 700} {\n            $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream $id\n            assert_equal [$rd read] {{mystream {}}}\n        }\n\n        $rd close\n    }\n\n    test {Blocking XREADGROUP for stream key that has clients blocked on list} {\n        set rd [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        \n        # First delete the stream\n        r DEL mystream\n        \n      
  # now place a client blocked on the non-existing key as a list\n        $rd2 BLPOP mystream 0\n        \n        # wait until we verify the client is blocked\n        wait_for_blocked_clients_count 1\n        \n        # verify we only have 1 regular blocking key\n        assert_equal 1 [getInfoProperty [r info clients] total_blocking_keys]\n        assert_equal 0 [getInfoProperty [r info clients] total_blocking_keys_on_nokey]\n        \n        # now write mystream as a stream\n        r XADD mystream 666 key value\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        \n        # block another client on xreadgroup\n        $rd XREADGROUP GROUP mygroup myconsumer BLOCK 0 STREAMS mystream \">\"\n        \n        # wait until we verify we have 2 blocked clients (one for the list and one for the stream)\n        wait_for_blocked_clients_count 2\n        \n        # verify we have 1 blocking key which also has clients blocked on the nokey condition\n        assert_equal 1 [getInfoProperty [r info clients] total_blocking_keys]\n        assert_equal 1 [getInfoProperty [r info clients] total_blocking_keys_on_nokey]\n\n        # now delete the key and verify we have no clients blocked on the nokey condition\n        r DEL mystream\n        assert_error \"NOGROUP*\" {$rd read}\n        assert_equal 1 [getInfoProperty [r info clients] total_blocking_keys]\n        assert_equal 0 [getInfoProperty [r info clients] total_blocking_keys_on_nokey]\n        \n        # close the only remaining client and make sure we have no more blocking keys\n        $rd2 close\n        \n        # wait until we verify we have no more blocked clients\n        wait_for_blocked_clients_count 0\n        \n        assert_equal 0 [getInfoProperty [r info clients] total_blocking_keys]\n        assert_equal 0 [getInfoProperty [r info clients] total_blocking_keys_on_nokey]\n        \n        $rd close\n    }\n\n    test {Blocking XREADGROUP for stream key that has clients blocked on stream - avoid endless loop} 
{\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set rd3 [redis_deferring_client]\n\n        $rd1 xreadgroup GROUP mygroup myuser COUNT 10 BLOCK 10000 STREAMS mystream >\n        $rd2 xreadgroup GROUP mygroup myuser COUNT 10 BLOCK 10000 STREAMS mystream >\n        $rd3 xreadgroup GROUP mygroup myuser COUNT 10 BLOCK 10000 STREAMS mystream >\n\n        wait_for_blocked_clients_count 3\n\n        r xadd mystream MAXLEN 5000 * field1 value1 field2 value2 field3 value3\n\n        $rd1 close\n        $rd2 close\n        $rd3 close\n\n        assert_equal [r ping] {PONG}\n    }\n\n    test {Blocking XREADGROUP for stream key that has clients blocked on stream - reprocessing command} {\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $rd1 xreadgroup GROUP mygroup myuser BLOCK 0 STREAMS mystream >\n        wait_for_blocked_clients_count 1\n\n        set start [clock milliseconds]\n        $rd2 xreadgroup GROUP mygroup myuser BLOCK 1000 STREAMS mystream >\n        wait_for_blocked_clients_count 2\n\n        # After a while call xadd and let rd2 re-process the command.\n        after 200\n        r xadd mystream * field value\n        assert_equal {} [$rd2 read]\n        set end [clock milliseconds]\n\n        # Before the fix in #13004, this time would have been 1200+ (i.e. 
more than 1200ms),\n        # now it should be 1000, but in order to avoid timing issues, we increase the range a bit.\n        assert_range [expr $end-$start] 1000 1150\n\n        $rd1 close\n        $rd2 close\n    }\n\n    test {XGROUP DESTROY should unblock XREADGROUP with -NOGROUP} {\n        r config resetstat\n        r del mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream \">\"\n        wait_for_blocked_clients_count 1\n        r XGROUP DESTROY mystream mygroup\n        assert_error \"NOGROUP*\" {$rd read}\n        $rd close\n\n        # verify command stats, error stats and the error counter work on a failed blocked command\n        assert_match {*count=1*} [errorrstat NOGROUP r]\n        assert_match {*calls=1,*,rejected_calls=0,failed_calls=1*} [cmdrstat xreadgroup r]\n        assert_equal [s total_error_replies] 1\n    }\n\n    test {XGROUP DESTROY removes all consumer group references} {\n        r DEL mystream\n        for {set j 0} {$j < 5} {incr j} {\n            r XADD mystream $j-1 item $j\n        }\n\n        r XGROUP CREATE mystream mygroup 0\n        r XREADGROUP GROUP mygroup consumer1 STREAMS mystream >\n        assert {[lindex [r XPENDING mystream mygroup] 0] == 5}\n\n        # Try to delete the messages with ACKED - should fail because the group still holds references\n        assert_equal {2 2 2 2 2} [r XDELEX mystream ACKED IDS 5 0-1 1-1 2-1 3-1 4-1]\n\n        # Destroy the consumer group, and then we can delete all the entries with ACKED.\n        r XGROUP DESTROY mystream mygroup\n        assert_equal {1 1 1 1 1} [r XDELEX mystream ACKED IDS 5 0-1 1-1 2-1 3-1 4-1]\n        assert_equal 0 [r XLEN mystream]\n    }\n\n    test {XGROUP DESTROY correctly manages the min_cgroup_last_id cache} {\n        r DEL mystream\n        # Add some entries\n        r XADD mystream 1-0 f1 v1\n        r XADD mystream 2-0 f2 v2\n        r XADD mystream 
3-0 f3 v3\n        r XADD mystream 4-0 f4 v4\n        r XADD mystream 5-0 f5 v5\n\n        # Create two consumer groups\n        r XGROUP CREATE mystream group1 1-0 ;# min_cgroup_last_id is 1-0 now\n        r XGROUP CREATE mystream group2 3-0\n\n        # Entry 1-0 should be deletable (1-0 <= min_cgroup_last_id and not in any PEL)\n        assert_equal {1} [r XDELEX mystream ACKED IDS 1 1-0]\n\n        # Entry 2-0 should be referenced (2-0 > 1-0, not yet consumed by all consumer groups)\n        assert_equal {2} [r XDELEX mystream ACKED IDS 1 2-0]\n\n        # Destroy group1\n        # min_cgroup_last_id is 3-0 now\n        r XGROUP DESTROY mystream group1\n\n        # Entry 2-0 should now be deletable (2-0 < 3-0 and not in any PEL)\n        assert_equal {1} [r XDELEX mystream ACKED IDS 1 2-0]\n    }\n\n    test {RENAME can unblock XREADGROUP with data} {\n        r del mystream{t}\n        r XGROUP CREATE mystream{t} mygroup $ MKSTREAM\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream{t} \">\"\n        wait_for_blocked_clients_count 1\n        r XGROUP CREATE mystream2{t} mygroup $ MKSTREAM\n        r XADD mystream2{t} 100 f1 v1\n        r RENAME mystream2{t} mystream{t}\n        assert_equal \"{mystream{t} {{100-0 {f1 v1}}}}\" [$rd read] ;# mystream2{t} had mygroup before RENAME\n        $rd close\n    }\n\n    test {RENAME can unblock XREADGROUP with -NOGROUP} {\n        r del mystream{t}\n        r XGROUP CREATE mystream{t} mygroup $ MKSTREAM\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP mygroup Alice BLOCK 0 STREAMS mystream{t} \">\"\n        wait_for_blocked_clients_count 1\n        r XADD mystream2{t} 100 f1 v1\n        r RENAME mystream2{t} mystream{t}\n        assert_error \"*NOGROUP*\" {$rd read} ;# mystream2{t} didn't have mygroup before RENAME\n        $rd close\n    }\n\n    test {XCLAIM can claim PEL items from another consumer} {\n        # Add 3 items into the 
stream, and create a consumer group\n        r del mystream\n        set id1 [r XADD mystream * a 1]\n        set id2 [r XADD mystream * b 2]\n        set id3 [r XADD mystream * c 3]\n        r XGROUP CREATE mystream mygroup 0\n\n        # Consumer 1 reads item 1 from the stream without acknowledgements.\n        # Consumer 2 then claims pending item 1 from the PEL of consumer 1\n        set reply [\n            r XREADGROUP GROUP mygroup consumer1 count 1 STREAMS mystream >\n        ]\n        assert {[llength [lindex $reply 0 1 0 1]] == 2}\n        assert {[lindex $reply 0 1 0 1] eq {a 1}}\n\n        # make sure the entry is present in both the group, and the right consumer\n        assert {[llength [r XPENDING mystream mygroup - + 10]] == 1}\n        assert {[llength [r XPENDING mystream mygroup - + 10 consumer1]] == 1}\n        assert {[llength [r XPENDING mystream mygroup - + 10 consumer2]] == 0}\n\n        after 200\n        set reply [\n            r XCLAIM mystream mygroup consumer2 10 $id1\n        ]\n        assert {[llength [lindex $reply 0 1]] == 2}\n        assert {[lindex $reply 0 1] eq {a 1}}\n\n        # make sure the entry is present in both the group, and the right consumer\n        assert {[llength [r XPENDING mystream mygroup - + 10]] == 1}\n        assert {[llength [r XPENDING mystream mygroup - + 10 consumer1]] == 0}\n        assert {[llength [r XPENDING mystream mygroup - + 10 consumer2]] == 1}\n\n        # Consumer 1 reads another 2 items from stream\n        r XREADGROUP GROUP mygroup consumer1 count 2 STREAMS mystream >\n        after 200\n\n        # Delete item 2 from the stream. Now consumer 1 has a PEL that contains\n        # only item 3. 
Try to use consumer 2 to claim the deleted item 2\n        # from the PEL of consumer 1, this should be a no-op\n        r XDEL mystream $id2\n        set reply [\n            r XCLAIM mystream mygroup consumer2 10 $id2\n        ]\n        assert {[llength $reply] == 0}\n\n        # Delete item 3 from the stream. Now consumer 1 has an empty PEL.\n        # Try to use consumer 2 to claim the deleted item 3 from the PEL\n        # of consumer 1, this should be a no-op\n        after 200\n        r XDEL mystream $id3\n        set reply [\n            r XCLAIM mystream mygroup consumer2 10 $id3\n        ]\n        assert {[llength $reply] == 0}\n    }\n\n    test {XCLAIM without JUSTID increments delivery count} {\n        # Add 3 items into the stream, and create a consumer group\n        r del mystream\n        set id1 [r XADD mystream * a 1]\n        set id2 [r XADD mystream * b 2]\n        set id3 [r XADD mystream * c 3]\n        r XGROUP CREATE mystream mygroup 0\n\n        # Consumer 1 reads item 1 from the stream without acknowledgements.\n        # Consumer 2 then claims pending item 1 from the PEL of consumer 1\n        set reply [\n            r XREADGROUP GROUP mygroup consumer1 count 1 STREAMS mystream >\n        ]\n        assert {[llength [lindex $reply 0 1 0 1]] == 2}\n        assert {[lindex $reply 0 1 0 1] eq {a 1}}\n        after 200\n        set reply [\n            r XCLAIM mystream mygroup consumer2 10 $id1\n        ]\n        assert {[llength [lindex $reply 0 1]] == 2}\n        assert {[lindex $reply 0 1] eq {a 1}}\n\n        set reply [\n            r XPENDING mystream mygroup - + 10\n        ]\n        assert {[llength [lindex $reply 0]] == 4}\n        assert {[lindex $reply 0 3] == 2}\n\n        # Consumer 3 then claims pending item 1 from the PEL of consumer 2 using JUSTID\n        after 200\n        set reply [\n            r XCLAIM mystream mygroup consumer3 10 $id1 JUSTID\n        ]\n        assert {[llength $reply] == 1}\n        assert 
{[lindex $reply 0] eq $id1}\n\n        set reply [\n            r XPENDING mystream mygroup - + 10\n        ]\n        assert {[llength [lindex $reply 0]] == 4}\n        assert {[lindex $reply 0 3] == 2}\n    }\n\n    test {XCLAIM same consumer} {\n        # Add 3 items into the stream, and create a consumer group\n        r del mystream\n        set id1 [r XADD mystream * a 1]\n        set id2 [r XADD mystream * b 2]\n        set id3 [r XADD mystream * c 3]\n        r XGROUP CREATE mystream mygroup 0\n\n        set reply [r XREADGROUP GROUP mygroup consumer1 count 1 STREAMS mystream >]\n        assert {[llength [lindex $reply 0 1 0 1]] == 2}\n        assert {[lindex $reply 0 1 0 1] eq {a 1}}\n        after 200\n        # re-claim with the same consumer that already has it\n        assert {[llength [r XCLAIM mystream mygroup consumer1 10 $id1]] == 1}\n\n        # make sure the entry is still in the PEL\n        set reply [r XPENDING mystream mygroup - + 10]\n        assert {[llength $reply] == 1}\n        assert {[lindex $reply 0 1] eq {consumer1}}\n    }\n\n    test {XAUTOCLAIM can claim PEL items from another consumer} {\n        # Add 3 items into the stream, and create a consumer group\n        r del mystream\n        set id1 [r XADD mystream * a 1]\n        set id2 [r XADD mystream * b 2]\n        set id3 [r XADD mystream * c 3]\n        set id4 [r XADD mystream * d 4]\n        r XGROUP CREATE mystream mygroup 0\n\n        # Consumer 1 reads item 1 from the stream without acknowledgements.\n        # Consumer 2 then claims pending item 1 from the PEL of consumer 1\n        set reply [r XREADGROUP GROUP mygroup consumer1 count 1 STREAMS mystream >]\n        assert_equal [llength [lindex $reply 0 1 0 1]] 2\n        assert_equal [lindex $reply 0 1 0 1] {a 1}\n        after 200\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 - COUNT 1]\n        assert_equal [llength $reply] 3\n        assert_equal [lindex $reply 0] \"0-0\"\n        assert_equal 
[llength [lindex $reply 1]] 1\n        assert_equal [llength [lindex $reply 1 0]] 2\n        assert_equal [llength [lindex $reply 1 0 1]] 2\n        assert_equal [lindex $reply 1 0 1] {a 1}\n\n        # Consumer 1 reads another 2 items from stream\n        r XREADGROUP GROUP mygroup consumer1 count 3 STREAMS mystream >\n\n        # For min-idle-time\n        after 200\n\n        # Delete item 2 from the stream. Now consumer 1 has PEL that contains\n        # only item 3. Try to use consumer 2 to claim the deleted item 2\n        # from the PEL of consumer 1, this should return nil\n        r XDEL mystream $id2\n\n        # id1 and id3 are self-claimed here but not id2 ('count' was set to 3)\n        # we make sure id2 is indeed skipped (the cursor points to id4)\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 - COUNT 3]\n\n        assert_equal [llength $reply] 3\n        assert_equal [lindex $reply 0] $id4\n        assert_equal [llength [lindex $reply 1]] 2\n        assert_equal [llength [lindex $reply 1 0]] 2\n        assert_equal [llength [lindex $reply 1 0 1]] 2\n        assert_equal [lindex $reply 1 0 1] {a 1}\n        assert_equal [lindex $reply 1 1 1] {c 3}\n        assert_equal [llength [lindex $reply 2]] 1\n        assert_equal [llength [lindex $reply 2 0]] 1\n\n        # Delete item 3 from the stream. Now consumer 1 has PEL that is empty.\n        # Try to use consumer 2 to claim the deleted item 3 from the PEL\n        # of consumer 1, this should return nil\n        after 200\n\n        r XDEL mystream $id4\n\n        # id1 and id3 are self-claimed here but not id2 and id4 ('count' is default 100)\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 - JUSTID]\n\n        # we also test the JUSTID modifier here. 
note that, when using JUSTID,\n        # deleted entries are returned in reply (consistent with XCLAIM).\n\n        assert_equal [llength $reply] 3\n        assert_equal [lindex $reply 0] {0-0}\n        assert_equal [llength [lindex $reply 1]] 2\n        assert_equal [lindex $reply 1 0] $id1\n        assert_equal [lindex $reply 1 1] $id3\n    }\n\n    test {XAUTOCLAIM as an iterator} {\n        # Add 5 items into the stream, and create a consumer group\n        r del mystream\n        set id1 [r XADD mystream * a 1]\n        set id2 [r XADD mystream * b 2]\n        set id3 [r XADD mystream * c 3]\n        set id4 [r XADD mystream * d 4]\n        set id5 [r XADD mystream * e 5]\n        r XGROUP CREATE mystream mygroup 0\n\n        # Read 5 messages into consumer1\n        r XREADGROUP GROUP mygroup consumer1 count 90 STREAMS mystream >\n\n        # For min-idle-time\n        after 200\n\n        # Claim 2 entries\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 - COUNT 2]\n        assert_equal [llength $reply] 3\n        set cursor [lindex $reply 0]\n        assert_equal $cursor $id3\n        assert_equal [llength [lindex $reply 1]] 2\n        assert_equal [llength [lindex $reply 1 0 1]] 2\n        assert_equal [lindex $reply 1 0 1] {a 1}\n\n        # Claim 2 more entries\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 $cursor COUNT 2]\n        assert_equal [llength $reply] 3\n        set cursor [lindex $reply 0]\n        assert_equal $cursor $id5\n        assert_equal [llength [lindex $reply 1]] 2\n        assert_equal [llength [lindex $reply 1 0 1]] 2\n        assert_equal [lindex $reply 1 0 1] {c 3}\n\n        # Claim last entry\n        set reply [r XAUTOCLAIM mystream mygroup consumer2 10 $cursor COUNT 1]\n        assert_equal [llength $reply] 3\n        set cursor [lindex $reply 0]\n        assert_equal $cursor {0-0}\n        assert_equal [llength [lindex $reply 1]] 1\n        assert_equal [llength [lindex $reply 1 0 1]] 2\n       
 assert_equal [lindex $reply 1 0 1] {e 5}\n    }\n\n    test {XAUTOCLAIM COUNT must be > 0} {\n       assert_error \"ERR COUNT must be > 0\" {r XAUTOCLAIM key group consumer 1 1 COUNT 0}\n    }\n\n    test {XCLAIM with XDEL} {\n        r DEL x\n        r XADD x 1-0 f v\n        r XADD x 2-0 f v\n        r XADD x 3-0 f v\n        r XGROUP CREATE x grp 0\n        assert_equal [r XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}}}}}\n        r XDEL x 2-0\n        assert_equal [r XCLAIM x grp Bob 0 1-0 2-0 3-0] {{1-0 {f v}} {3-0 {f v}}}\n        assert_equal [r XPENDING x grp - + 10 Alice] {}\n    }\n\n    test {XCLAIM with trimming} {\n        r DEL x\n        r config set stream-node-max-entries 2\n        r XADD x 1-0 f v\n        r XADD x 2-0 f v\n        r XADD x 3-0 f v\n        r XGROUP CREATE x grp 0\n        assert_equal [r XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}}}}}\n        r XTRIM x MAXLEN 1\n        assert_equal [r XCLAIM x grp Bob 0 1-0 2-0 3-0] {{3-0 {f v}}}\n        assert_equal [r XPENDING x grp - + 10 Alice] {}\n    }\n\n    test {XAUTOCLAIM with XDEL} {\n        r DEL x\n        r XADD x 1-0 f v\n        r XADD x 2-0 f v\n        r XADD x 3-0 f v\n        r XGROUP CREATE x grp 0\n        assert_equal [r XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}}}}}\n        r XDEL x 2-0\n        assert_equal [r XAUTOCLAIM x grp Bob 0 0-0] {0-0 {{1-0 {f v}} {3-0 {f v}}} 2-0}\n        assert_equal [r XPENDING x grp - + 10 Alice] {}\n    }\n\n    test {XAUTOCLAIM with XDEL and count} {\n        r DEL x\n        r XADD x 1-0 f v\n        r XADD x 2-0 f v\n        r XADD x 3-0 f v\n        r XGROUP CREATE x grp 0\n        assert_equal [r XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}}}}}\n        r XDEL x 1-0\n        r XDEL x 2-0\n        assert_equal [r XAUTOCLAIM x grp Bob 0 0-0 COUNT 1] {2-0 {} 1-0}\n        assert_equal [r 
XAUTOCLAIM x grp Bob 0 2-0 COUNT 1] {3-0 {} 2-0}\n        assert_equal [r XAUTOCLAIM x grp Bob 0 3-0 COUNT 1] {0-0 {{3-0 {f v}}} {}}\n        assert_equal [r XPENDING x grp - + 10 Alice] {}\n    }\n\n    test {XAUTOCLAIM with out of range count} {\n        assert_error {ERR COUNT*} {r XAUTOCLAIM x grp Bob 0 3-0 COUNT 8070450532247928833}\n    }\n\n    test {XAUTOCLAIM with trimming} {\n        r DEL x\n        r config set stream-node-max-entries 2\n        r XADD x 1-0 f v\n        r XADD x 2-0 f v\n        r XADD x 3-0 f v\n        r XGROUP CREATE x grp 0\n        assert_equal [r XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}}}}}\n        r XTRIM x MAXLEN 1\n        assert_equal [r XAUTOCLAIM x grp Bob 0 0-0] {0-0 {{3-0 {f v}}} {1-0 2-0}}\n        assert_equal [r XPENDING x grp - + 10 Alice] {}\n    }\n\n    test {XINFO FULL output} {\n        r del x\n        r XADD x 100 a 1\n        r XADD x 101 b 1\n        r XADD x 102 c 1\n        r XADD x 103 e 1\n        r XADD x 104 f 1\n        r XGROUP CREATE x g1 0\n        r XGROUP CREATE x g2 0\n        r XREADGROUP GROUP g1 Alice COUNT 1 STREAMS x >\n        r XREADGROUP GROUP g1 Bob COUNT 1 STREAMS x >\n        r XREADGROUP GROUP g1 Bob NOACK COUNT 1 STREAMS x >\n        r XREADGROUP GROUP g2 Charlie COUNT 4 STREAMS x >\n        r XDEL x 103\n\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [llength $reply] 30\n        assert_equal [dict get $reply length] 4\n        assert_equal [dict get $reply entries] \"{100-0 {a 1}} {101-0 {b 1}} {102-0 {c 1}} {104-0 {f 1}}\"\n\n        # First consumer group\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group name] \"g1\"\n        assert_equal [lindex [dict get $group pending] 0 0] \"100-0\"\n        set consumer [lindex [dict get $group consumers] 0]\n        assert_equal [dict get $consumer name] \"Alice\"\n        assert_equal [lindex [dict get $consumer pending] 0 0] \"100-0\" ;# 
first entry in first consumer's PEL\n\n        # Second consumer group\n        set group [lindex [dict get $reply groups] 1]\n        assert_equal [dict get $group name] \"g2\"\n        set consumer [lindex [dict get $group consumers] 0]\n        assert_equal [dict get $consumer name] \"Charlie\"\n        assert_equal [lindex [dict get $consumer pending] 0 0] \"100-0\" ;# first entry in first consumer's PEL\n        assert_equal [lindex [dict get $consumer pending] 1 0] \"101-0\" ;# second entry in first consumer's PEL\n\n        set reply [r XINFO STREAM x FULL COUNT 1]\n        assert_equal [llength $reply] 30\n        assert_equal [dict get $reply length] 4\n        assert_equal [dict get $reply entries] \"{100-0 {a 1}}\"\n    }\n\n    test {Consumer seen-time and active-time} {\n        r DEL mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >\n        after 100\n        set reply [r xinfo consumers mystream mygroup]\n        set consumer_info [lindex $reply 0]\n        assert {[dict get $consumer_info idle] >= 100} ;# consumer idle (seen-time)\n        assert_equal [dict get $consumer_info inactive] \"-1\" ;# consumer inactive (active-time)\n\n        r XADD mystream * f v\n        r XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >\n        set reply [r xinfo consumers mystream mygroup]\n        set consumer_info [lindex $reply 0]\n        assert_equal [lindex $consumer_info 1] \"Alice\" ;# consumer name\n        assert {[dict get $consumer_info idle] < 80} ;# consumer idle (seen-time)\n        assert {[dict get $consumer_info inactive] < 80} ;# consumer inactive (active-time)\n\n        after 100\n        r XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >\n        set reply [r xinfo consumers mystream mygroup]\n        set consumer_info [lindex $reply 0]\n        assert {[dict get $consumer_info idle] < 80} ;# consumer idle (seen-time)\n        assert {[dict get 
$consumer_info inactive] >= 100} ;# consumer inactive (active-time)\n\n\n        # Simulate loading from RDB\n\n        set reply [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $reply groups] 0]\n        set consumer [lindex [dict get $group consumers] 0]\n        set prev_seen [dict get $consumer seen-time]\n        set prev_active [dict get $consumer active-time]\n\n        set dump [r DUMP mystream]\n        r DEL mystream\n        r RESTORE mystream 0 $dump\n\n        set reply [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $reply groups] 0]\n        set consumer [lindex [dict get $group consumers] 0]\n        assert_equal $prev_seen [dict get $consumer seen-time]\n        assert_equal $prev_active [dict get $consumer active-time]\n    }\n\n    test {XGROUP CREATECONSUMER: create consumer if it does not exist} {\n        r del mystream\n        r XGROUP CREATE mystream mygroup $ MKSTREAM\n        r XADD mystream * f v\n\n        set reply [r xinfo groups mystream]\n        set group_info [lindex $reply 0]\n        set n_consumers [lindex $group_info 3]\n        assert_equal $n_consumers 0 ;# consumers number in cg\n\n        # create consumer using XREADGROUP\n        r XREADGROUP GROUP mygroup Alice COUNT 1 STREAMS mystream >\n\n        set reply [r xinfo groups mystream]\n        set group_info [lindex $reply 0]\n        set n_consumers [lindex $group_info 3]\n        assert_equal $n_consumers 1 ;# consumers number in cg\n\n        set reply [r xinfo consumers mystream mygroup]\n        set consumer_info [lindex $reply 0]\n        assert_equal [lindex $consumer_info 1] \"Alice\" ;# consumer name\n\n        # create consumer using XGROUP CREATECONSUMER when Alice already exists\n        set created [r XGROUP CREATECONSUMER mystream mygroup Alice]\n        assert_equal $created 0\n\n        # create consumer using XGROUP CREATECONSUMER when Bob does not exist\n        set created [r XGROUP CREATECONSUMER mystream mygroup Bob]\n        
assert_equal $created 1\n\n        set reply [r xinfo groups mystream]\n        set group_info [lindex $reply 0]\n        set n_consumers [lindex $group_info 3]\n        assert_equal $n_consumers 2 ;# consumers number in cg\n\n        set reply [r xinfo consumers mystream mygroup]\n        set consumer_info [lindex $reply 0]\n        assert_equal [lindex $consumer_info 1] \"Alice\" ;# consumer name\n        set consumer_info [lindex $reply 1]\n        assert_equal [lindex $consumer_info 1] \"Bob\" ;# consumer name\n    }\n\n    test {XGROUP CREATECONSUMER: group must exist} {\n        r del mystream\n        r XADD mystream * f v\n        assert_error \"*NOGROUP*\" {r XGROUP CREATECONSUMER mystream mygroup consumer}\n    }\n\n    test {XREADGROUP of multiple entries changes dirty by one} {\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n        r XADD x 3-0 data c\n        r XADD x 4-0 data d\n        r XGROUP CREATE x g1 0\n        r XGROUP CREATECONSUMER x g1 Alice\n\n        set dirty [s rdb_changes_since_last_save]\n        set res [r XREADGROUP GROUP g1 Alice COUNT 2 STREAMS x \">\"]\n        assert_equal $res {{x {{1-0 {data a}} {2-0 {data b}}}}}\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 1}\n\n        set dirty [s rdb_changes_since_last_save]\n        set res [r XREADGROUP GROUP g1 Alice NOACK COUNT 2 STREAMS x \">\"]\n        assert_equal $res {{x {{3-0 {data c}} {4-0 {data d}}}}}\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 1}\n    }\n\n    test {XREADGROUP from PEL does not change dirty} {\n        # Technically speaking, XREADGROUP from PEL should cause propagation\n        # because it changes the delivery count/time.\n        # It was decided that these metadata changes are too insignificant\n        # to justify propagation.\n        # This test covers that.\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n       
 r XADD x 3-0 data c\n        r XADD x 4-0 data d\n        r XGROUP CREATE x g1 0\n        r XGROUP CREATECONSUMER x g1 Alice\n\n        set res [r XREADGROUP GROUP g1 Alice COUNT 2 STREAMS x \">\"]\n        assert_equal $res {{x {{1-0 {data a}} {2-0 {data b}}}}}\n\n        set dirty [s rdb_changes_since_last_save]\n        set res [r XREADGROUP GROUP g1 Alice COUNT 2 STREAMS x 0]\n        assert_equal $res {{x {{1-0 {data a}} {2-0 {data b}}}}}\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty}\n\n        set dirty [s rdb_changes_since_last_save]\n        set res [r XREADGROUP GROUP g1 Alice COUNT 2 STREAMS x 9000]\n        assert_equal $res {{x {}}}\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty}\n\n        # The current behavior is that we create the consumer (causes dirty++) even\n        # if we only need to read from PEL.\n        # It feels like we shouldn't create the consumer in that case, but I added\n        # this test just for coverage of current behavior\n        set dirty [s rdb_changes_since_last_save]\n        set res [r XREADGROUP GROUP g1 noconsumer COUNT 2 STREAMS x 0]\n        assert_equal $res {{x {}}}\n        set dirty2 [s rdb_changes_since_last_save]\n        assert {$dirty2 == $dirty + 1}\n    }\n\n    start_server {tags {\"stream needs:debug\"} overrides {appendonly yes aof-use-rdb-preamble no appendfsync always}} {\n        test {XREADGROUP with NOACK creates consumer} {\n            r del mystream\n            r XGROUP CREATE mystream mygroup $ MKSTREAM\n            r XADD mystream * f1 v1\n            r XREADGROUP GROUP mygroup Alice NOACK STREAMS mystream \">\"\n            set rd [redis_deferring_client]\n            $rd XREADGROUP GROUP mygroup Bob BLOCK 0 NOACK STREAMS mystream \">\"\n            wait_for_blocked_clients_count 1\n            r XADD mystream * f2 v2\n            set grpinfo [r xinfo groups mystream]\n\n            r debug loadaof\n            
assert_equal [r xinfo groups mystream] $grpinfo\n            set reply [r xinfo consumers mystream mygroup]\n            set consumer_info [lindex $reply 0]\n            assert_equal [lindex $consumer_info 1] \"Alice\" ;# consumer name\n            set consumer_info [lindex $reply 1]\n            assert_equal [lindex $consumer_info 1] \"Bob\" ;# consumer name\n            $rd close\n        }\n\n        test {Consumer without PEL is present in AOF after AOFRW} {\n            r del mystream\n            r XGROUP CREATE mystream mygroup $ MKSTREAM\n            r XADD mystream * f v\n            r XREADGROUP GROUP mygroup Alice NOACK STREAMS mystream \">\"\n            set rd [redis_deferring_client]\n            $rd XREADGROUP GROUP mygroup Bob BLOCK 0 NOACK STREAMS mystream \">\"\n            wait_for_blocked_clients_count 1\n            r XGROUP CREATECONSUMER mystream mygroup Charlie\n            set grpinfo [lindex [r xinfo groups mystream] 0]\n\n            r bgrewriteaof\n            waitForBgrewriteaof r\n            r debug loadaof\n\n            set curr_grpinfo [lindex [r xinfo groups mystream] 0]\n            assert {$curr_grpinfo == $grpinfo}\n            set n_consumers [lindex $grpinfo 3]\n\n            # All consumers are created via XREADGROUP, regardless of whether they managed\n            # to read any entries or not\n            assert_equal $n_consumers 3\n            $rd close\n        }\n    }\n\n    test {Consumer group read counter and lag in empty streams} {\n        r DEL x\n        r XGROUP CREATE x g1 0 MKSTREAM\n\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $reply max-deleted-entry-id] \"0-0\"\n        assert_equal [dict get $reply entries-added] 0\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] 0\n\n        r XADD x 1-0 data a\n        r XDEL x 1-0\n\n        set reply [r XINFO STREAM x FULL]\n        
set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $reply max-deleted-entry-id] \"1-0\"\n        assert_equal [dict get $reply entries-added] 1\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] 0\n    }\n\n    test {Consumer group read counter and lag sanity} {\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n        r XADD x 3-0 data c\n        r XADD x 4-0 data d\n        r XADD x 5-0 data e\n        r XGROUP CREATE x g1 0\n\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] 5\n\n        r XREADGROUP GROUP g1 c11 COUNT 1 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] 4\n\n        r XREADGROUP GROUP g1 c12 COUNT 10 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 0\n\n        r XADD x 6-0 data f\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 1\n    }\n\n    test {Consumer group lag with XDELs} {\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n        r XADD x 3-0 data c\n        r XADD x 4-0 data d\n        r XADD x 5-0 data e\n        r XDEL x 3-0\n        r XGROUP CREATE x g1 0\n        r XGROUP CREATE x g2 0\n\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] {}\n\n        r 
XREADGROUP GROUP g1 c11 COUNT 1 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] {}\n\n        r XREADGROUP GROUP g1 c11 COUNT 1 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] {}\n\n        r XREADGROUP GROUP g1 c11 COUNT 1 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] {}\n\n        r XREADGROUP GROUP g1 c11 COUNT 1 STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 0\n\n        r XADD x 6-0 data f\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 1\n\n        r XTRIM x MINID = 3-0\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 1\n        set group [lindex [dict get $reply groups] 1]\n        assert_equal [dict get $group entries-read] {}\n        assert_equal [dict get $group lag] 3\n\n        r XTRIM x MINID = 5-0\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 1\n        set group [lindex [dict get $reply groups] 1]\n        assert_equal [dict get $group entries-read] {}\n        
assert_equal [dict get $group lag] 2\n    }\n\n    test {Consumer group lag with XDELs and a tombstone after the consumer group's last_id} {\n        r DEL x\n        r XGROUP CREATE x g1 $ MKSTREAM\n        r XADD x 1-0 data a\n        r XREADGROUP GROUP g1 alice STREAMS x > ;# Read one entry\n        r XADD x 2-0 data c\n        r XADD x 3-0 data d\n        r XDEL x 2-0\n\n        # Now the latest tombstone(2-0) is before the first entry(3-0), but there is still\n        # a tombstone(2-0) after the last_id(1-0) of the consumer group.\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] {}\n\n        r XDEL x 1-0\n        # Although there is a tombstone(2-0) after the consumer group's last_id(1-0), all\n        # entries before the maximal tombstone have been deleted. This means that both the\n        # last_id and the largest tombstone are behind the first entry. 
Therefore, tombstones\n        # no longer affect the lag, which now reflects the remaining entries in the stream.\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] 1\n\n        # Now there is a tombstone(2-0) after the consumer group's last_id, so the counter cannot\n        # be tracked incrementally. Once entry(3-0) is consumed the group has read the entire\n        # stream, so the counter is synced to entries-added and the lag is 0.\n        r XREADGROUP GROUP g1 alice STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 3\n        assert_equal [dict get $group lag] 0\n    }\n\n    test {Consumer group lag with XTRIM} {\n        r DEL x\n        r XGROUP CREATE x mygroup $ MKSTREAM\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n        r XADD x 3-0 data c\n        r XADD x 4-0 data d\n        r XADD x 5-0 data e\n        r XREADGROUP GROUP mygroup alice COUNT 1 STREAMS x >\n\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] 4\n\n        # Although XTRIM doesn't update the `max-deleted-entry-id`, it always updates the\n        # position of the first entry. 
When trimming causes the first entry to be behind\n        # the consumer group's last_id, the consumer group's lag will always be equal to\n        # the number of remaining entries in the stream.\n        r XTRIM x MAXLEN 1\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $reply max-deleted-entry-id] \"0-0\"\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] 1\n\n        # When all the entries are read, the lag is always 0.\n        r XREADGROUP GROUP mygroup alice STREAMS x >\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 0\n\n        r XADD x 6-0 data f\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 5\n        assert_equal [dict get $group lag] 1\n\n        # When all the entries have been deleted, the lag is always 0.\n        r XTRIM x MAXLEN 0\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group lag] 0\n    }\n\n    test {Loading from legacy (Redis <= v6.2.x, rdb_ver < 10) persistence} {\n        # The payload was DUMPed from a v5 instance after:\n        # XADD x 1-0 data a\n        # XADD x 2-0 data b\n        # XADD x 3-0 data c\n        # XADD x 4-0 data d\n        # XADD x 5-0 data e\n        # XADD x 6-0 data f\n        # XDEL x 3-0\n        # XGROUP CREATE x g1 0\n        # XGROUP CREATE x g2 0\n        # XREADGROUP GROUP g1 c11 COUNT 4 STREAMS x >\n        # XTRIM x MAXLEN = 2\n\n        r DEL x\n        r RESTORE x 0 
\"\\x0F\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\xC3\\x40\\x4A\\x40\\x57\\x16\\x57\\x00\\x00\\x00\\x23\\x00\\x02\\x01\\x04\\x01\\x01\\x01\\x84\\x64\\x61\\x74\\x61\\x05\\x00\\x01\\x03\\x01\\x00\\x20\\x01\\x03\\x81\\x61\\x02\\x04\\x20\\x0A\\x00\\x01\\x40\\x0A\\x00\\x62\\x60\\x0A\\x00\\x02\\x40\\x0A\\x00\\x63\\x60\\x0A\\x40\\x22\\x01\\x81\\x64\\x20\\x0A\\x40\\x39\\x20\\x0A\\x00\\x65\\x60\\x0A\\x00\\x05\\x40\\x0A\\x00\\x66\\x20\\x0A\\x00\\xFF\\x02\\x06\\x00\\x02\\x02\\x67\\x31\\x05\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x3E\\xF7\\x83\\x43\\x7A\\x01\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x3E\\xF7\\x83\\x43\\x7A\\x01\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x3E\\xF7\\x83\\x43\\x7A\\x01\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x3E\\xF7\\x83\\x43\\x7A\\x01\\x00\\x00\\x01\\x01\\x03\\x63\\x31\\x31\\x3E\\xF7\\x83\\x43\\x7A\\x01\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x05\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x02\\x67\\x32\\x00\\x00\\x00\\x00\\x09\\x00\\x3D\\x52\\xEF\\x68\\x67\\x52\\x1D\\xFA\"\n\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply max-deleted-entry-id] \"0-0\"\n        assert_equal [dict get $reply entries-added] 2\n        set group [lindex [dict get $reply groups] 0]\n        assert_equal [dict get $group entries-read] 1\n        assert_equal [dict get $group lag] 1\n        set group [lindex [dict get $reply groups] 1]\n        assert_equal [dict get $group entries-read] 0\n        assert_equal [dict get $group lag] 
2\n    }\n\n    test {Loading from legacy (Redis <= v7.0.x, rdb_ver < 11) persistence} {\n        # The payload was DUMPed from a v7 instance after:\n        # XGROUP CREATE x g $ MKSTREAM\n        # XADD x 1-1 f v\n        # XREADGROUP GROUP g Alice STREAMS x >\n\n        r DEL x\n        r RESTORE x 0 \"\\x13\\x01\\x10\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x1D\\x1D\\x00\\x00\\x00\\x0A\\x00\\x01\\x01\\x00\\x01\\x01\\x01\\x81\\x66\\x02\\x00\\x01\\x02\\x01\\x00\\x01\\x00\\x01\\x81\\x76\\x02\\x04\\x01\\xFF\\x01\\x01\\x01\\x01\\x01\\x00\\x00\\x01\\x01\\x01\\x67\\x01\\x01\\x01\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\xF5\\x5A\\x71\\xC7\\x84\\x01\\x00\\x00\\x01\\x01\\x05\\x41\\x6C\\x69\\x63\\x65\\xF5\\x5A\\x71\\xC7\\x84\\x01\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01\\x0B\\x00\\xA7\\xA9\\x14\\xA5\\x27\\xFF\\x9B\\x9B\"\n        set reply [r XINFO STREAM x FULL]\n        set group [lindex [dict get $reply groups] 0]\n        set consumer [lindex [dict get $group consumers] 0]\n        assert_equal [dict get $consumer seen-time] [dict get $consumer active-time]\n    }\n\n    start_server {tags {\"external:skip\"}} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set slave [srv 0 client]\n\n        foreach noack {0 1} {\n            test \"Consumer group last ID propagation to slave (NOACK=$noack)\" {\n                $slave slaveof $master_host $master_port\n                wait_for_condition 50 100 {\n                    [s 0 master_link_status] eq {up}\n                } else {\n                    fail \"Replication not started.\"\n                }\n\n                $master del stream\n                $master xadd stream * a 1\n                $master xadd stream * a 2\n                $master xadd stream * a 3\n                $master xgroup create stream 
mygroup 0\n\n                # Consume the first two items on the master\n                for {set j 0} {$j < 2} {incr j} {\n                    if {$noack} {\n                        set item [$master xreadgroup group mygroup \\\n                                  myconsumer COUNT 1 NOACK STREAMS stream >]\n                    } else {\n                        set item [$master xreadgroup group mygroup \\\n                                  myconsumer COUNT 1 STREAMS stream >]\n                    }\n                    set id [lindex $item 0 1 0 0]\n                    if {$noack == 0} {\n                        assert {[$master xack stream mygroup $id] eq \"1\"}\n                    }\n                }\n\n                wait_for_ofs_sync $master $slave\n\n                # Turn slave into master\n                $slave slaveof no one\n\n                set item [$slave xreadgroup group mygroup myconsumer \\\n                          COUNT 1 STREAMS stream >]\n\n                # The consumed entry should be the third\n                set myentry [lindex $item 0 1 0 1]\n                assert {$myentry eq {a 3}}\n            }\n        }\n    }\n\n    start_server {tags {\"external:skip\"}} {\n        set master [srv -1 client]\n        set master_host [srv -1 host]\n        set master_port [srv -1 port]\n        set replica [srv 0 client]\n\n        foreach autoclaim {0 1} {\n            test \"Replication tests of XCLAIM with deleted entries (autoclaim=$autoclaim)\" {\n                $replica replicaof $master_host $master_port\n                wait_for_condition 50 100 {\n                    [s 0 master_link_status] eq {up}\n                } else {\n                    fail \"Replication not started.\"\n                }\n\n                $master DEL x\n                $master XADD x 1-0 f v\n                $master XADD x 2-0 f v\n                $master XADD x 3-0 f v\n                $master XADD x 4-0 f v\n                $master XADD x 5-0 f v\n       
         $master XGROUP CREATE x grp 0\n                assert_equal [$master XREADGROUP GROUP grp Alice STREAMS x >] {{x {{1-0 {f v}} {2-0 {f v}} {3-0 {f v}} {4-0 {f v}} {5-0 {f v}}}}}\n                wait_for_ofs_sync $master $replica\n                assert_equal [llength [$replica XPENDING x grp - + 10 Alice]] 5\n                $master XDEL x 2-0\n                $master XDEL x 4-0\n                if {$autoclaim} {\n                    assert_equal [$master XAUTOCLAIM x grp Bob 0 0-0] {0-0 {{1-0 {f v}} {3-0 {f v}} {5-0 {f v}}} {2-0 4-0}}\n                    wait_for_ofs_sync $master $replica\n                    assert_equal [llength [$replica XPENDING x grp - + 10 Alice]] 0\n                } else {\n                    assert_equal [$master XCLAIM x grp Bob 0 1-0 2-0 3-0 4-0] {{1-0 {f v}} {3-0 {f v}}}\n                    wait_for_ofs_sync $master $replica\n                    assert_equal [llength [$replica XPENDING x grp - + 10 Alice]] 1\n                }\n            }\n        }\n\n        test {XREADGROUP ACK would propagate entries-read} {\n            $master del mystream\n            $master xadd mystream * a b c d e f\n            $master xgroup create mystream mygroup $\n            $master xreadgroup group mygroup ryan count 1 streams mystream >\n            $master xadd mystream * a1 b1 a1 b2\n            $master xadd mystream * name v1 name v1\n            $master xreadgroup group mygroup ryan count 1 streams mystream >\n            $master xreadgroup group mygroup ryan count 1 streams mystream >\n\n            set reply [$master XINFO STREAM mystream FULL]\n            set group [lindex [dict get $reply groups] 0]\n            assert_equal [dict get $group entries-read] 3\n            assert_equal [dict get $group lag] 0\n\n            wait_for_ofs_sync $master $replica\n\n            set reply [$replica XINFO STREAM mystream FULL]\n            set group [lindex [dict get $reply groups] 0]\n            assert_equal [dict get $group 
entries-read] 3\n            assert_equal [dict get $group lag] 0\n        }\n\n        test {XREADGROUP from PEL inside MULTI} {\n            # This scenario used to cause propagation of EXEC without MULTI in 6.2\n            $replica config set propagation-error-behavior panic\n            $master del mystream\n            $master xadd mystream 1-0 a b c d e f\n            $master xgroup create mystream mygroup 0\n            assert_equal [$master xreadgroup group mygroup ryan count 1 streams mystream >] {{mystream {{1-0 {a b c d e f}}}}}\n            $master multi\n            $master xreadgroup group mygroup ryan count 1 streams mystream 0\n            $master exec\n        }\n    }\n\n    start_server {tags {\"stream needs:debug\"} overrides {appendonly yes aof-use-rdb-preamble no}} {\n        test {Empty stream with no lastid can be rewritten into AOF correctly} {\n            r XGROUP CREATE mystream group-name $ MKSTREAM\n            assert {[dict get [r xinfo stream mystream] length] == 0}\n            set grpinfo [r xinfo groups mystream]\n            r bgrewriteaof\n            waitForBgrewriteaof r\n            r debug loadaof\n            assert {[dict get [r xinfo stream mystream] length] == 0}\n            assert_equal [r xinfo groups mystream] $grpinfo\n        }\n    }\n\n    start_server {} {\n        test \"XACKDEL wrong number of args\" {\n            assert_error {*wrong number of arguments for 'xackdel' command} {r XACKDEL}\n            assert_error {*wrong number of arguments for 'xackdel' command} {r XACKDEL s}\n            assert_error {*wrong number of arguments for 'xackdel' command} {r XACKDEL s g}\n        }\n\n        test \"XACKDEL should return empty array when key doesn't exist or group doesn't exist\" {\n            r DEL s\n            assert_equal {-1 -1} [r XACKDEL s g IDS 2 1-1 2-2] ;# the key doesn't exist\n\n            r XADD s 1-0 f v\n            assert_equal {-1 -1} [r XACKDEL s g IDS 2 1-1 2-2] ;# the key exists but the 
group doesn't exist\n        }\n\n        test \"XACKDEL IDS parameter validation\" {\n            r DEL s\n            r XADD s 1-0 f v\n            r XGROUP CREATE s g 0\n\n            # Test invalid numids\n            assert_error {*Number of IDs must be a positive integer*} {r XACKDEL s g IDS abc 1-1}\n            assert_error {*Number of IDs must be a positive integer*} {r XACKDEL s g IDS 0 1-1}\n            assert_error {*Number of IDs must be a positive integer*} {r XACKDEL s g IDS -5 1-1}\n\n            # Test whether numids is equal to the number of IDs provided\n            assert_error {*The `numids` parameter must match the number of arguments*} {r XACKDEL s g IDS 3 1-1 2-2}\n            assert_error {*syntax error*} {r XACKDEL s g IDS 1 1-1 2-2}\n        }\n\n        test \"XACKDEL KEEPREF/DELREF/ACKED parameter validation\" {\n            # Test mutually exclusive options\n            assert_error {*syntax error*} {r XACKDEL s g KEEPREF DELREF IDS 1 1-1}\n            assert_error {*syntax error*} {r XACKDEL s g KEEPREF ACKED IDS 1 1-1}\n            assert_error {*syntax error*} {r XACKDEL s g DELREF ACKED IDS 1 1-1}\n        }\n\n        test \"XACKDEL with DELREF option acknowledges and removes entries from all PELs\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v\n            r XADD mystream 2-0 f v\n\n            # Create two consumer groups\n            r XGROUP CREATE mystream group1 0\n            r XGROUP CREATE mystream group2 0\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n            # Verify the messages were removed from both groups' PELs with DELREF\n            assert_equal {1 1} [r XACKDEL mystream group1 DELREF IDS 2 1-0 2-0]\n            assert_equal 0 [r XLEN mystream]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group2]\n            
assert_equal {-1 -1} [r XACKDEL mystream group2 DELREF IDS 2 1-0 2-0]\n        }\n\n        test \"XACKDEL with ACKED option only deletes messages acknowledged by all groups\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v\n            r XADD mystream 2-0 f v\n\n            # Create two consumer groups\n            r XGROUP CREATE mystream group1 0\n            r XGROUP CREATE mystream group2 0\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n            # The messages are referenced by two groups.\n            # Even after one group acknowledges them, they still can't be deleted.\n            assert_equal {2 2} [r XACKDEL mystream group1 ACKED IDS 2 1-0 2-0]\n            assert_equal 2 [r XLEN mystream]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n            assert_equal {2 1-0 2-0 {{consumer2 2}}} [r XPENDING mystream group2]\n\n            # When these messages are dereferenced by all groups, they can be deleted.\n            assert_equal {1 1} [r XACKDEL mystream group2 ACKED IDS 2 1-0 2-0]\n            assert_equal 0 [r XLEN mystream]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group2]\n        }\n\n        test \"XACKDEL with KEEPREF\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v\n            r XADD mystream 2-0 f v\n\n            # Create two consumer groups\n            r XGROUP CREATE mystream group1 0\n            r XGROUP CREATE mystream group2 0\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n            # Test XACKDEL with KEEPREF\n            # XACKDEL only deletes the message from the stream\n            # but does not clean up references in consumer groups' PELs\n            assert_equal {1 1} [r XACKDEL mystream group1 KEEPREF IDS 2 
1-0 2-0]\n            assert_equal 0 [r XLEN mystream]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n            assert_equal {2 1-0 2-0 {{consumer2 2}}} [r XPENDING mystream group2]\n\n            # Acknowledge remaining messages in group2\n            assert_equal {1 1} [r XACKDEL mystream group2 KEEPREF IDS 2 1-0 2-0]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n            assert_equal {0 {} {} {}} [r XPENDING mystream group2]\n        }\n\n        test \"XGROUP CREATE with ENTRIESREAD larger than stream entries should cap the value\" {\n            r DEL mystream\n            r xadd mystream * field value\n            r xgroup create mystream mygroup $ entriesread 9999\n\n            set reply [r XINFO STREAM mystream FULL]\n            set group [lindex [dict get $reply groups] 0]\n\n            # Lag must be 0 and entries-read must be 1.\n            assert_equal [dict get $group lag] 0\n            assert_equal [dict get $group entries-read] 1\n        }\n\n        test \"XGROUP SETID with ENTRIESREAD larger than stream entries should cap the value\" {\n            r DEL mystream\n            r xadd mystream * field value\n            r xgroup create mystream mygroup $\n\n            r xgroup setid mystream mygroup $ entriesread 9999\n\n            set reply [r XINFO STREAM mystream FULL]\n            set group [lindex [dict get $reply groups] 0]\n\n            # Lag must be 0 and entries-read must be 1.\n            assert_equal [dict get $group lag] 0\n            assert_equal [dict get $group entries-read] 1\n        }\n\n        test \"XACKDEL with IDs exceeding STREAMID_STATIC_VECTOR_LEN for heap allocation\" {\n            r DEL mystream\n            r XGROUP CREATE mystream mygroup $ MKSTREAM\n\n            # Generate IDs exceeding STREAMID_STATIC_VECTOR_LEN (8) to force heap allocation\n            # instead of using the static vector cache, ensuring proper memory allocation.\n            set ids {}\n   
         for {set i 0} {$i < 50} {incr i} {\n                lappend ids \"$i-1\"\n            }\n            set result [r XACKDEL mystream mygroup IDS 50 {*}$ids]\n            assert {[llength $result] == 50}\n            r PING\n        }\n    }\n\n    start_server {tags {\"repl external:skip\"}} {\n        test \"XREADGROUP CLAIM delivery count increments replicated correctly\" {\n            start_server {tags {\"stream\"}} {\n                set master [srv 0 client]\n                set master_host [srv 0 host]\n                set master_port [srv 0 port]\n                \n                start_server {tags {\"stream\"}} {\n                    set replica [srv 0 client]\n                    \n                    # Setup replication\n                    $replica replicaof $master_host $master_port\n                    wait_for_sync $replica\n                    \n                    # Setup stream and consumer group on master\n                    $master DEL mystream\n                    $master XADD mystream 1-0 f v1\n                    $master XGROUP CREATE mystream group1 0\n                    \n                    # Wait for replication\n                    wait_for_ofs_sync $master $replica\n                    \n                    # First read on master\n                    $master XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n                    wait_for_ofs_sync $master $replica\n                    \n                    # Check initial delivery count on replica\n                    set replica_pending [$replica XPENDING mystream group1 - + 1]\n                    assert_equal [llength $replica_pending] 1\n                    set delivery_count [lindex [lindex $replica_pending 0] 3]\n                    assert_equal $delivery_count 1\n                    \n                    # First claim on master\n                    after 100\n                    set claim_result1 [$master XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream 
>]\n                    wait_for_ofs_sync $master $replica\n                    \n                    # Check delivery count after first claim\n                    set replica_pending [$replica XPENDING mystream group1 - + 1]\n                    set delivery_count [lindex [lindex $replica_pending 0] 3]\n                    assert_equal $delivery_count 2\n                    \n                    # Second claim on master\n                    after 100\n                    set claim_result2 [$master XREADGROUP GROUP group1 consumer3 CLAIM 50 STREAMS mystream >]\n                    wait_for_ofs_sync $master $replica\n                    \n                    # Check final delivery count on replica\n                    set replica_pending [$replica XPENDING mystream group1 - + 1]\n                    assert_equal [llength $replica_pending] 1\n                    set delivery_count [lindex [lindex $replica_pending 0] 3]\n                    assert_equal $delivery_count 3\n                }\n            }\n        }\n    }\n\n    start_server {tags {\"repl external:skip\" \"stream\"}} {\n        # Verify that XREADGROUP propagates a newly created consumer to\n        # the replica in cases where no XCLAIM is generated (XCLAIM\n        # implicitly creates the consumer, so explicit propagation is\n        # only needed when it is absent).  Two cases are tested:\n        #   1. Without NOACK and no messages to deliver — no XCLAIM at all.\n        #   2. 
With NOACK and messages delivered — NOACK skips PEL/XCLAIM.\n        test \"XREADGROUP propagates new consumer to replica\" {\n            set master [srv 0 client]\n            set master_host [srv 0 host]\n            set master_port [srv 0 port]\n\n            start_server {tags {\"stream\"}} {\n                set replica [srv 0 client]\n\n                $replica replicaof $master_host $master_port\n                wait_for_sync $replica\n\n                $master DEL mystream\n                $master XADD mystream 1-0 f v\n                $master XGROUP CREATE mystream grp 0\n\n                # Consume the only message so the stream has no\n                # new messages pending for delivery.\n                $master XREADGROUP GROUP grp c1 STREAMS mystream >\n                $master XACK mystream grp 1-0\n\n                wait_for_ofs_sync $master $replica\n\n                # Case 1: XREADGROUP without NOACK for a brand-new\n                # consumer when there are NO messages to deliver.\n                # No XCLAIM is generated, so the consumer must be\n                # explicitly propagated.\n                set reply [$master XREADGROUP GROUP grp c2 STREAMS mystream >]\n                assert_equal $reply {}\n\n                set master_consumers [$master XINFO CONSUMERS mystream grp]\n                set master_names [lmap c $master_consumers {dict get $c name}]\n                assert {[lsearch $master_names \"c2\"] >= 0}\n\n                wait_for_ofs_sync $master $replica\n\n                set replica_consumers [$replica XINFO CONSUMERS mystream grp]\n                set replica_names [lmap c $replica_consumers {dict get $c name}]\n                if {[lsearch $replica_names \"c2\"] < 0} {\n                    fail \"Consumer 'c2' not found on replica (have: $replica_names)\"\n                }\n\n                # Case 2: XREADGROUP with NOACK for a brand-new consumer\n                # when a message IS available.  
NOACK skips PEL/XCLAIM\n                # entirely, so the consumer must be explicitly propagated\n                # even though messages were delivered.\n                $master XADD mystream 2-0 f v\n                wait_for_ofs_sync $master $replica\n\n                set reply [$master XREADGROUP GROUP grp c3 NOACK STREAMS mystream >]\n                assert {$reply ne {}}\n\n                set master_consumers [$master XINFO CONSUMERS mystream grp]\n                set master_names [lmap c $master_consumers {dict get $c name}]\n                assert {[lsearch $master_names \"c3\"] >= 0}\n\n                wait_for_ofs_sync $master $replica\n\n                set replica_consumers [$replica XINFO CONSUMERS mystream grp]\n                set replica_names [lmap c $replica_consumers {dict get $c name}]\n                if {[lsearch $replica_names \"c3\"] < 0} {\n                    fail \"Consumer 'c3' not found on replica (have: $replica_names)\"\n                }\n            }\n        }\n    }\n\n    start_server {} {\n        if {!$::force_resp3} {\n        test \"XREADGROUP CLAIM field types are correct\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n\n            # Read the message with XREADGROUP\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            # Wait to allow claiming\n            after 100\n\n            # Read again with CLAIM using readraw to check field types\n            r readraw 1\n            r deferred 1\n            \n            r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >\n\n            # Check the response format line by line\n            # Response structure: *1 (outer array) -> *2 (stream name + messages array)\n            assert_equal [r read] {*1}       ;# Outer array (1 stream)\n            assert_equal [r read] {*2}       ;# Stream data (2 elements: stream name + messages)\n            assert_equal [r read] 
{$8}       ;# Stream name length\n            assert_equal [r read] {mystream} ;# Stream name\n            assert_equal [r read] {*1}       ;# Messages array (1 message)\n            assert_equal [r read] {*4}       ;# Message with 4 fields\n            assert_equal [r read] {$3}       ;# Field 1: Message ID length\n            assert_equal [r read] {1-0}      ;# Field 1: Message ID value\n            assert_equal [r read] {*2}       ;# Field 2: Field-value pairs array\n            assert_equal [r read] {$1}       ;# Field-value pair: key length\n            assert_equal [r read] {f}        ;# Field-value pair: key\n            assert_equal [r read] {$2}       ;# Field-value pair: value length\n            assert_equal [r read] {v1}       ;# Field-value pair: value\n            \n            # Field 3: Delivery count - should be integer type (:)\n            set delivery_count_type [r read]\n            assert_match {:*} $delivery_count_type \"Expected delivery count to be integer type (:), got: $delivery_count_type\"\n            \n            # Field 4: Idle time - should be integer type (:)\n            set idle_time_type [r read]\n            assert_match {:*} $idle_time_type \"Expected idle time to be integer type (:), got: $idle_time_type\"\n        }\n        }\n\n        # Restore connection state\n        r readraw 0\n        r deferred 0\n\n        test \"XREADGROUP CLAIM returns unacknowledged messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read both messages without acknowledgment\n            set read_result [r XREADGROUP GROUP group1 consumer1 STREAMS mystream >]\n            assert_equal [llength [lindex [lindex $read_result 0] 1]] 2\n\n            # Verify the messages are now in pending state\n            set pending_info [r XPENDING mystream group1]\n            
assert_equal [lindex $pending_info 0] 2\n\n            after 100\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n\n            # Check first message\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 1] [list f v1]\n            assert {[lindex $messages 0 2] >= 50}\n            assert_equal [lindex $messages 0 3] 1\n\n            # Check second message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 1] [list f v2]\n            assert {[lindex $messages 1 2] >= 50}\n            assert_equal [lindex $messages 1 3] 1\n\n            # Verify pending list now shows messages belong to consumer2\n            set pending_range [r XPENDING mystream group1 - + 10]\n            assert_equal [llength $pending_range] 2\n            \n            # Check that messages are now assigned to consumer2\n            assert_equal [lindex [lindex $pending_range 0] 1] \"consumer2\"\n            assert_equal [lindex [lindex $pending_range 1] 1] \"consumer2\"\n            \n            # Verify delivery count was incremented in pending list\n            assert_equal [lindex [lindex $pending_range 0] 3] 2\n            assert_equal [lindex [lindex $pending_range 1] 3] 2\n        }\n\n        test \"XREADGROUP CLAIM respects min-idle-time threshold\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read both messages without acknowledgment\n            set read_result [r XREADGROUP GROUP group1 consumer1 STREAMS mystream >]\n            assert_equal [llength [lindex [lindex 
$read_result 0] 1]] 2\n\n            # Verify the messages are now in pending state\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 2\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 100 STREAMS mystream >]\n\n            assert_equal [llength $claim_result] 0\n        }\n\n        test \"XREADGROUP CLAIM with COUNT limit\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read all three messages without acknowledgment\n            set read_result [r XREADGROUP GROUP group1 consumer1 STREAMS mystream >]\n            assert_equal [llength [lindex [lindex $read_result 0] 1]] 3\n\n            # Verify the messages are now in pending state\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 3\n\n            after 100\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer2 COUNT 2 CLAIM 50 STREAMS mystream >]\n\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n\n            # Check first message\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 1] [list f v1]\n            assert {[lindex $messages 0 2] >= 50}\n            assert_equal [lindex $messages 0 3] 1\n\n            # Check second message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 1] [list f v2]\n            assert {[lindex $messages 1 2] >= 50}\n            assert_equal [lindex $messages 1 3] 1\n        }\n\n        test \"XREADGROUP CLAIM without messages\" {\n       
     r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XDEL mystream 1-0\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 100 STREAMS mystream >]\n\n            assert_equal [llength $claim_result] 0\n        }\n\n        test \"XREADGROUP CLAIM without pending messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 100 STREAMS mystream >]\n\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n\n            # Check first message\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 1] [list f v1]\n            assert_equal [lindex $messages 0 2] 0\n            assert_equal [lindex $messages 0 3] 0\n\n            # Check second message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 1] [list f v2]\n            assert_equal [lindex $messages 1 2] 0\n            assert_equal [lindex $messages 1 3] 0\n        }\n\n        test \"XREADGROUP CLAIM message response format\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Verify we got 1 message without acknowledgment\n            set read_result [r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >]\n            assert_equal [llength [lindex [lindex $read_result 0] 1]] 1\n            lassign [lindex $read_result 0] 
stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength [lindex $messages 0]] 2\n\n            # Verify the messages are now in pending state\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 1\n\n            after 100\n\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n\n            # Check claimed message\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 1] [list f v1]\n            assert {[lindex $messages 0 2] >= 50 }\n            assert_equal [lindex $messages 0 3] 1\n\n            # Check stream message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 1] [list f v2]\n            assert_equal [lindex $messages 1 2] 0\n            assert_equal [lindex $messages 1 3] 0\n        }\n\n        test \"XREADGROUP CLAIM delivery count\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read message\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n\n            # Claim pending messages one time\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            \n            # Check delivery count\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [lindex $messages 0 3] 1\n\n            # Claim pending messages with same consumer three times\n            after 100\n            r XREADGROUP GROUP group1 consumer3 CLAIM 50 STREAMS mystream >\n\n            after 100\n            r XREADGROUP GROUP group1 consumer3 
CLAIM 50 STREAMS mystream >\n\n            after 100\n            set claim_result [r XREADGROUP GROUP group1 consumer3 CLAIM 50 STREAMS mystream >]\n            \n            # Check delivery count\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [lindex $messages 0 3] 4\n\n            # Claim pending messages with different consumer two times\n            after 100\n            r XREADGROUP GROUP group1 consumer4 CLAIM 50 STREAMS mystream >\n\n            after 100\n            set claim_result [r XREADGROUP GROUP group1 consumer5 CLAIM 50 STREAMS mystream >]\n\n            # Check delivery count\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [lindex $messages 0 3] 6\n        }\n\n        test \"XREADGROUP CLAIM idle time\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read message\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            # Check idle time\n            lassign [lindex $claim_result 0] stream_name messages\n            assert {[lindex $messages 0 2] >= 50}\n\n            # Claim pending messages\n            after 70\n            set claim_result [r XREADGROUP GROUP group1 consumer3 CLAIM 60 STREAMS mystream >]\n            # Check idle time\n            lassign [lindex $claim_result 0] stream_name messages\n            assert {[lindex $messages 0 2] >= 60}\n\n            after 80\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer3 CLAIM 70 STREAMS mystream >]\n            # Check idle time\n            lassign [lindex $claim_result 0] stream_name messages\n            assert {[lindex $messages 0 2] >= 
70}\n\n            after 20\n            # Claim pending messages\n            set claim_result [r XREADGROUP GROUP group1 consumer3 CLAIM 10 STREAMS mystream >]\n            # Check idle time\n            lassign [lindex $claim_result 0] stream_name messages\n            assert {[lindex $messages 0 2] >= 10}\n        }\n\n        test \"XREADGROUP CLAIM with NOACK\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            after 100\n\n            # Claim with NOACK\n            set claim_result [r XREADGROUP GROUP group1 consumer1 NOACK CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [llength $messages] 2\n\n            # Verify there are no pending messages\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 0\n\n            # Claim again with NOACK\n            set claim_result [r XREADGROUP GROUP group1 consumer1 NOACK CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [llength $messages] 0\n        }\n\n        test \"XREADGROUP CLAIM with NOACK and pending messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n\n            # Verify there is one pending message\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 1\n\n            after 100\n\n            # Claim with NOACK. 
We expect one pending message and one from the stream\n            set claim_result [r XREADGROUP GROUP group1 consumer1 NOACK CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [llength $messages] 2\n\n            # Verify there is one pending message\n            set pending_info [r XPENDING mystream group1]\n            assert_equal [lindex $pending_info 0] 1\n\n            after 100\n\n            # Claim again with NOACK. We expect only the pending message.\n            set claim_result [r XREADGROUP GROUP group1 consumer1 NOACK CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 1-0\n        }\n\n        test \"XREADGROUP CLAIM with multiple streams\" {\n            r DEL mystream{t}1\n            r XADD mystream{t}1 1-0 f v1\n            r XADD mystream{t}1 2-0 f v2\n\n            r DEL mystream{t}2\n            r XADD mystream{t}2 3-0 f v1\n            r XADD mystream{t}2 4-0 f v2\n\n            r DEL mystream{t}3\n            r XADD mystream{t}3 5-0 f v1\n            r XADD mystream{t}3 6-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream{t}1 group1 0\n            r XGROUP CREATE mystream{t}2 group1 0\n            r XGROUP CREATE mystream{t}3 group1 0\n\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream{t}1 mystream{t}2 mystream{t}3 > > >\n\n            after 100\n\n            # Claim messages from multiple streams.\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 50 STREAMS mystream{t}1 mystream{t}2 mystream{t}3 > > >]\n\n            # We expect two messages from the first stream. 
One pending and one new.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream{t}1\"\n            assert_equal [llength $messages] 2\n            # Pending message. \n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n            # New message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 0\n\n            # We expect two messages from the second stream. One pending and one new.\n            lassign [lindex $claim_result 1] stream_name messages\n            assert_equal $stream_name \"mystream{t}2\"\n            assert_equal [llength $messages] 2\n            # Pending message. \n            assert_equal [lindex $messages 0 0] 3-0\n            assert_equal [lindex $messages 0 3] 1\n            # New message\n            assert_equal [lindex $messages 1 0] 4-0\n            assert_equal [lindex $messages 1 3] 0\n\n            # We expect two messages from the third stream. One pending and one new.\n            lassign [lindex $claim_result 2] stream_name messages\n            assert_equal $stream_name \"mystream{t}3\"\n            assert_equal [llength $messages] 2\n            # Pending message. 
\n            assert_equal [lindex $messages 0 0] 5-0\n            assert_equal [lindex $messages 0 3] 1\n            # New message\n            assert_equal [lindex $messages 1 0] 6-0\n            assert_equal [lindex $messages 1 3] 0\n        }\n\n        test \"XREADGROUP CLAIM with min-idle-time equal to zero\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read one message\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n\n            # Claim one message with min-idle-time=0\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 0 STREAMS mystream >]\n\n            # We expect two messages. One pending and one new.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n            # Pending message. 
\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n            # New message\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 0\n        }\n\n        test \"XREADGROUP CLAIM with large min-idle-time\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read one message\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n\n            after 100\n\n            # Claim one message with large min-idle-time\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 9223372036854775807 STREAMS mystream >]\n\n            # We expect only the new message.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            # New message \n            assert_equal [lindex $messages 0 0] 2-0\n            assert_equal [lindex $messages 0 3] 0\n        }\n\n        test \"XREADGROUP CLAIM with non-integer min-idle-time\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read one message\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM test STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 5.5 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 5,5 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time 
is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM \"10e\" STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM +10 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM *10 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 10/2 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 10*2 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 10€ STREAMS mystream >}\n        }\n\n        test \"XREADGROUP CLAIM with negative integer for min-idle-time\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read one message\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n\n            assert_error \"*ERR min-idle-time must be a positive integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -10 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time must be a positive integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -42 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -0 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -5.5 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -5,5 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM \"-10e\" STREAMS mystream >}\n   
         assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -10/2 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM -10*2 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM (-10)*2 STREAMS mystream >}\n        }\n\n        test \"XREADGROUP CLAIM with different position\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            \n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n            \n            # Read one message\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream >\n            \n            after 100\n            \n            # Claim one message with CLAIM option after COUNT\n            set claim_result [r XREADGROUP GROUP group1 consumer1 COUNT 1 CLAIM 50 STREAMS mystream >]\n            # We expect only the claimed message.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n            \n            after 100\n            \n            # Claim one message with CLAIM option before COUNT\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 50 COUNT 1 STREAMS mystream >]\n            # We expect only the claimed message.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 2\n            \n            after 100\n            \n            # Claim one message with multiple CLAIM 
options\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 50 COUNT 1 CLAIM 60 STREAMS mystream >]\n            # We expect only the claimed message.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 3\n            \n            after 100\n\n            # Claim one message with CLAIM option before GROUP\n            set claim_result [r XREADGROUP CLAIM 50 GROUP group1 consumer1 COUNT 1 STREAMS mystream >]\n            # We expect only the claimed message.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 4\n\n            # Test error cases with invalid CLAIM syntax\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP GROUP group1 consumer1 CLAIM 10 CLAIM COUNT 1 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r XREADGROUP CLAIM GROUP group1 consumer1 COUNT 1 STREAMS mystream >}\n            assert_error \"*NOGROUP No such key*\" {r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream CLAIM 50 >}\n            assert_error \"*ERR Unbalanced*\" {r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream CLAIM >}\n            assert_error \"*ERR Invalid stream ID*\" {r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream > CLAIM 50}\n            assert_error \"*ERR Unbalanced*\" {r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS mystream > CLAIM}\n            assert_error \"*ERR syntax error*\" {r XREADGROUP GROUP group1 CLAIM 50 consumer1 STREAMS mystream >}\n            assert_error \"*ERR min-idle-time is not an integer*\" {r 
XREADGROUP GROUP group1 consumer1 CLAIM STREAMS mystream >}\n        } {} {external:skip}\n\n        test \"XREADGROUP CLAIM with specific ID\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read all messages\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n\n            # The CLAIM option is ignored when the ID is different from >.\n            set claim_result [r XREADGROUP GROUP group1 consumer1 CLAIM 1000 STREAMS mystream 0]\n\n            # We expect the consumer's full pending history.\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n            assert_equal [llength [lindex $messages 0]] 2\n        }\n\n        test \"XREADGROUP CLAIM on non-existing consumer group\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read all messages\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n            # We expect an error. 
The group does not exist.\n            assert_error \"*NOGROUP No such key*\" {r XREADGROUP GROUP not_existing_group consumer1 CLAIM 50 STREAMS mystream >}\n        }\n\n        test \"XREADGROUP CLAIM on non-existing consumer\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read all messages\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n            # We expect 3 messages. The consumer is created if it does not exist.\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n        }\n\n        test \"XREADGROUP CLAIM verify ownership transfer and delivery count updates\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer groups\n            r XGROUP CREATE mystream group1 0\n\n            # Read all messages\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n\n            # Transfer ownership to consumer2\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n\n            # Verify ownership transfer and delivery count updates\n            set pending_info [r XPENDING mystream group1 - + 10]\n\n            assert_equal [llength $pending_info] 3\n            \n            # Check first message entry\n            assert_equal [lindex $pending_info 0 0] \"1-0\"\n   
         assert_equal [lindex $pending_info 0 1] \"consumer2\"\n            assert_equal [lindex $pending_info 0 3] 2\n            \n            # Check second message entry\n            assert_equal [lindex $pending_info 1 0] \"2-0\"\n            assert_equal [lindex $pending_info 1 1] \"consumer2\"\n            assert_equal [lindex $pending_info 1 3] 2\n            \n            # Check third message entry\n            assert_equal [lindex $pending_info 2 0] \"3-0\"\n            assert_equal [lindex $pending_info 2 1] \"consumer2\"\n            assert_equal [lindex $pending_info 2 3] 2\n        }\n\n        test \"XREADGROUP CLAIM verify XACK removes messages from CLAIM pool\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n\n            # Read all three messages with consumer1\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            # Acknowledge messages 1-0 and 3-0, leaving 2-0 pending\n            r XACK mystream group1 1-0 3-0\n\n            after 100\n\n            # Claim pending messages older than 50ms for consumer2\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n\n            # Should claim only message 2-0 (the unacknowledged one)\n            assert_equal [llength $messages] 1\n            assert_equal [lindex $messages 0 0] 2-0\n\n            # Acknowledge message 2-0\n            r XACK mystream group1 2-0\n\n            after 100\n\n            # Attempt to claim again - should return nothing since all messages are acknowledged\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            assert_equal [llength $claim_result] 0\n     
   }\n\n        test \"XREADGROUP CLAIM verify that XCLAIM updates delivery count\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n\n            # Read all three messages with consumer1 (delivery count becomes 1 for all)\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            after 100\n\n            # This increments delivery count to 2 for these messages\n            r XCLAIM mystream group1 consumer3 50 2-0 3-0\n\n            after 100\n\n            # This should claim all three messages and increment their delivery counts\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n\n            # Message 1-0: only claimed once via XREADGROUP (delivery count = 1)\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n\n            # Message 2-0: claimed via XCLAIM then XREADGROUP (delivery count = 2)\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 2\n\n            # Message 3-0: claimed via XCLAIM then XREADGROUP (delivery count = 2)\n            assert_equal [lindex $messages 2 0] 3-0\n            assert_equal [lindex $messages 2 3] 2\n        }\n\n        test \"XREADGROUP CLAIM verify forced entries are claimable\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n\n            r XCLAIM mystream group1 consumer3 0 1-0 2-0 FORCE JUSTID\n\n            # This should claim 
the two forced entries, limited by COUNT 2\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 0 COUNT 2 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n\n            # Message 1-0: forced with JUSTID, then claimed once via XREADGROUP (delivery count = 1)\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n\n            # Message 2-0: forced with JUSTID, then claimed once via XREADGROUP (delivery count = 1)\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 1\n        }\n\n        test \"XREADGROUP CLAIM with BLOCK zero\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n\n            # Read all three messages with consumer1\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n            set claim_result [r XREADGROUP GROUP group1 consumer1 BLOCK 1 STREAMS mystream >]\n            assert_equal [llength $claim_result] 0\n\n            set claim_result [r XREADGROUP GROUP group1 consumer1 BLOCK 100 CLAIM 500 STREAMS mystream >]\n            assert_equal [llength $claim_result] 0\n\n            after 100\n\n            set claim_result [r XREADGROUP GROUP group1 consumer1 BLOCK 10000 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n\n            after 100\n\n            set claim_result [r XREADGROUP GROUP group1 consumer1 BLOCK 0 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            
assert_equal [llength $messages] 3\n        }\n\n        test \"XREADGROUP CLAIM with two blocked clients\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XDEL mystream 1-0\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0 MKSTREAM\n\n            # Create two deferring clients for blocking reads\n            set rd1 [redis_deferring_client]\n            set rd2 [redis_deferring_client]\n            \n            # Both clients issue blocking XREADGROUP commands\n            $rd1 XREADGROUP GROUP group1 consumer1 BLOCK 0 CLAIM 100 STREAMS mystream \">\"\n            $rd2 XREADGROUP GROUP group1 consumer2 BLOCK 0 CLAIM 100 STREAMS mystream \">\"\n            \n            # Wait for both clients to be blocked\n            wait_for_blocked_clients_count 2\n\n            r XADD mystream 2-0 f v2\n\n            set result1 [$rd1 read]\n            assert_equal [llength $result1] 1\n\n            set result2 [$rd2 read]\n            assert_equal [llength $result2] 1\n\n            # Clean up\n            $rd1 close\n            $rd2 close   \n        }\n\n        test \"XREADGROUP CLAIM messages become claimable during block\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            # Consumer1 reads but doesn't ack\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            # Consumer2 blocks with CLAIM - message not yet claimable\n            set rd [redis_deferring_client]\n            $rd XREADGROUP GROUP group1 consumer2 BLOCK 5000 CLAIM 1000 STREAMS mystream >\n            \n            wait_for_blocked_client\n            \n            # Wait for message to become claimable (>1000ms)\n            after 1500\n            \n            # Should unblock and return the now-claimable message\n            set result [$rd read]\n            assert_equal [llength $result] 1\n         
   \n            $rd close\n        }\n\n        test \"XREADGROUP CLAIM block times out with no claimable messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            # Read and immediately try to claim (not idle enough)\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            set start [clock milliseconds]\n            set result [r XREADGROUP GROUP group1 consumer2 BLOCK 100 CLAIM 500 STREAMS mystream >]\n            set elapsed [expr {[clock milliseconds] - $start}]\n            \n            # Should time out and return empty\n            assert_equal [llength $result] 0\n            assert_range $elapsed 100 300\n        }\n\n        test \"XREADGROUP CLAIM block with multiple streams, mixed claimable\" {\n            r DEL stream{t}1 stream{t}2\n            r XADD stream{t}1 1-0 f v1\n            r XADD stream{t}2 2-0 f v2\n            \n            r XGROUP CREATE stream{t}1 group1 0\n            r XGROUP CREATE stream{t}2 group1 0\n            \n            # Read from both streams\n            r XREADGROUP GROUP group1 consumer1 COUNT 1 STREAMS stream{t}1 stream{t}2 > >\n            \n            after 100\n            \n            # Blocks with CLAIM - should get all messages\n            set result [r XREADGROUP GROUP group1 consumer2 BLOCK 1000 CLAIM 50 STREAMS stream{t}1 stream{t}2 > >]\n            \n            assert_equal [llength $result] 2\n            # stream1 should have a claimable message\n            lassign [lindex $result 0] stream_name messages\n            assert_equal $stream_name \"stream{t}1\"\n            assert_equal [llength $messages] 1\n            \n            # stream2 should also have a claimable message\n            lassign [lindex $result 1] stream_name messages\n            assert_equal $stream_name \"stream{t}2\"\n            assert_equal [llength $messages] 1\n        }\n\n        test \"XREADGROUP 
CLAIM claims all pending immediately\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            # Consumer1 reads\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            # Consumer2 immediately tries to claim with min-idle-time=0\n            set result [r XREADGROUP GROUP group1 consumer2 BLOCK 1000 CLAIM 0 STREAMS mystream >]\n            \n            # Should immediately return without blocking\n            lassign [lindex $result 0] stream_name messages\n            assert_equal [llength $messages] 1\n        }\n\n        test \"XREADGROUP CLAIM with BLOCK and NOACK\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            # Consumer1 reads without ack\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            after 100\n            \n            # Consumer2 tries to claim with NOACK\n            set result [r XREADGROUP GROUP group1 consumer2 BLOCK 1000 CLAIM 50 NOACK STREAMS mystream >]\n            \n            lassign [lindex $result 0] stream_name messages\n            assert_equal [llength $messages] 1\n            \n            # Verify the message is still pending\n            set pending [r XPENDING mystream group1 - + 10]\n            assert_equal [llength $pending] 1\n        }\n\n        test \"XREADGROUP CLAIM BLOCK wakes on new message before min-idle-time reached\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            set rd [redis_deferring_client]\n            $rd XREADGROUP GROUP group1 consumer2 BLOCK 5000 CLAIM 1000 STREAMS mystream >\n            \n            wait_for_blocked_client\n            \n            after 100 ;# Before 
min-idle-time\n            r XADD mystream 2-0 f v2\n            \n            set result [$rd read]\n\n            # Unblocks immediately with the new message, without waiting for the CLAIM threshold\n            lassign [lindex $result 0] stream_name messages\n            assert_equal [llength $messages] 1\n            \n            $rd close\n        }\n\n        test \"XREADGROUP CLAIM verify claiming order\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n            r XADD mystream 4-0 f v4\n            r XADD mystream 5-0 f v5\n            r XADD mystream 6-0 f v6\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n\n            # Read all messages with consumer1 to make them pending\n            r XREADGROUP GROUP group1 consumer1 COUNT 10 STREAMS mystream >\n\n            # Now use XCLAIM to explicitly set different delivery times for each message\n            # We'll set the delivery time backwards in time by different amounts\n            # to create known idle time differences without actually waiting\n            set current_time [r TIME]\n            set current_ms [expr {[lindex $current_time 0] * 1000 + [lindex $current_time 1] / 1000}]\n            \n            # Set delivery times: 1-0 is oldest (50000ms ago), 6-0 is newest (1000ms ago)\n            # Use larger values for robustness against timing variations\n            r XCLAIM mystream group1 consumer1 0 1-0 TIME [expr {$current_ms - 50000}] JUSTID\n            r XCLAIM mystream group1 consumer1 0 2-0 TIME [expr {$current_ms - 40000}] JUSTID\n            r XCLAIM mystream group1 consumer1 0 3-0 TIME [expr {$current_ms - 30000}] JUSTID\n            r XCLAIM mystream group1 consumer1 0 4-0 TIME [expr {$current_ms - 20000}] JUSTID\n            r XCLAIM mystream group1 consumer1 0 5-0 TIME [expr {$current_ms - 2000}] JUSTID\n            r XCLAIM mystream group1 consumer1 0 6-0 TIME 
[expr {$current_ms - 1000}] JUSTID\n\n            # Now claim with threshold of 10000ms - should get 1-0, 2-0, 3-0, 4-0 in that order\n            # (idle times: 50000, 40000, 30000, 20000ms all >= 10000ms)\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 10000 COUNT 10 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 4\n            \n            # Verify order: oldest first\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 2 0] 3-0\n            assert_equal [lindex $messages 3 0] 4-0\n\n            # Claim with threshold of 1500ms - should get remaining 5-0\n            # (idle time: 2000ms >= 1500ms, but 6-0 with 1000ms < 1500ms)\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 1500 COUNT 10 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            \n            assert_equal [lindex $messages 0 0] 5-0\n\n            # Claim with threshold of 500ms - should get last one (6-0)\n            # (idle time: 1000ms >= 500ms)\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 500 COUNT 10 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 1\n            \n            assert_equal [lindex $messages 0 0] 6-0\n        }\n\n        test \"XREADGROUP CLAIM after consumer deleted with pending messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 
consumer1 STREAMS mystream >\n            r XGROUP DELCONSUMER mystream group1 consumer1\n            \n            set pending [r XPENDING mystream group1 - + 10]\n            assert_equal [llength $pending] 0\n\n            after 100\n\n            # Orphaned pending messages are deleted.\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            assert_equal [llength $claim_result] 0\n        }\n\n        test \"XREADGROUP CLAIM after XGROUP SETID moves past pending messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            r XGROUP SETID mystream group1 2-0\n            \n            after 100\n\n            # Pending messages are still claimable\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n        }\n\n        test \"XREADGROUP CLAIM after XGROUP SETID moves before pending messages\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            r XREADGROUP GROUP group1 consumer2 CLAIM 0 STREAMS mystream >\n            r XGROUP SETID mystream group1 0\n            \n            after 100\n\n            # Pending messages are still claimable\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name 
\"mystream\"\n            assert_equal [llength $messages] 4\n\n            # Message 1-0: claimed by consumer2 (delivery count = 2)\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 2\n\n            # Message 2-0: claimed by consumer2 (delivery count = 2)\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 2\n\n            # Message 1-0: re-delivered as new to consumer2 after SETID (delivery count = 0)\n            assert_equal [lindex $messages 2 0] 1-0\n            assert_equal [lindex $messages 2 3] 0\n\n            # Message 2-0: re-delivered as new to consumer2 after SETID (delivery count = 0)\n            assert_equal [lindex $messages 3 0] 2-0\n            assert_equal [lindex $messages 3 3] 0\n\n            after 100\n\n            # Verify that pending messages are not doubled\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 2\n\n            # Message 1-0: claimed by consumer2 (delivery count = 1)\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 1\n\n            # Message 2-0: claimed by consumer2 (delivery count = 1)\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 1\n        }\n\n        test \"XREADGROUP CLAIM when pending messages get trimmed\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n\n            # Create consumer group\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            # Trim away the pending messages\n            r XTRIM mystream MAXLEN 0\n            \n            after 
100\n\n            # Pending list still references trimmed messages but they don't exist. We can't return them.\n            set claim_result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >]\n            assert_equal [llength $claim_result] 0\n        }\n\n        test \"XREADGROUP CLAIM state persists across RDB save/load\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            r XADD mystream 3-0 f v3\n            \n            r XGROUP CREATE mystream group1 0\n            \n            # Read messages to create pending entries\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            after 100\n            \n            # Claim some messages to increment delivery count\n            r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >\n            \n            # Trigger RDB save and restart\n            r SAVE\n            r DEBUG RELOAD\n            \n            # Verify pending state restored\n            set pending_info [r XPENDING mystream group1 - + 10]\n            assert_equal [llength $pending_info] 3\n            \n            # Check first message entry\n            assert_equal [lindex $pending_info 0 0] \"1-0\"\n            assert_equal [lindex $pending_info 0 1] \"consumer2\"\n            assert_equal [lindex $pending_info 0 3] 2\n\n            # Check second message entry\n            assert_equal [lindex $pending_info 1 0] \"2-0\"\n            assert_equal [lindex $pending_info 1 1] \"consumer2\"\n            assert_equal [lindex $pending_info 1 3] 2\n\n            # Check third message entry\n            assert_equal [lindex $pending_info 2 0] \"3-0\"\n            assert_equal [lindex $pending_info 2 1] \"consumer2\"\n            assert_equal [lindex $pending_info 2 3] 2\n            \n            # Verify can still claim after reload\n            after 100\n            set claim_result [r XREADGROUP GROUP group1 
consumer3 CLAIM 50 STREAMS mystream >]\n            lassign [lindex $claim_result 0] stream_name messages\n            assert_equal $stream_name \"mystream\"\n            assert_equal [llength $messages] 3\n\n            # Message 1-0: claimed by consumer3 (delivery count = 2)\n            assert_equal [lindex $messages 0 0] 1-0\n            assert_equal [lindex $messages 0 3] 2\n\n            # Message 2-0: claimed by consumer3 (delivery count = 2)\n            assert_equal [lindex $messages 1 0] 2-0\n            assert_equal [lindex $messages 1 3] 2\n\n            # Message 3-0: claimed by consumer3 (delivery count = 2)\n            assert_equal [lindex $messages 2 0] 3-0\n            assert_equal [lindex $messages 2 3] 2\n        } {} {external:skip needs:debug}\n\n        test \"XREADGROUP CLAIM idle time resets after RDB reload\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            after 1000\n            \n            # Before reload: message should be claimable\n            set claim_before [r XREADGROUP GROUP group1 consumer2 CLAIM 500 STREAMS mystream >]\n            assert_equal [llength [lindex $claim_before 0 1]] 1\n            \n            r SAVE\n            r DEBUG RELOAD\n\n            # After reload: idle time resets, message not immediately claimable\n            set claim_after [r XREADGROUP GROUP group1 consumer3 CLAIM 500 STREAMS mystream >]\n            assert_equal [llength $claim_after] 0\n\n        } {} {external:skip needs:debug}\n\n        test \"XREADGROUP CLAIM multiple groups persist correctly\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XADD mystream 2-0 f v2\n            \n            r XGROUP CREATE mystream group1 0\n            r XGROUP CREATE mystream group2 0\n            \n            r XREADGROUP GROUP group1 consumer1 
COUNT 1 STREAMS mystream >\n            r XREADGROUP GROUP group2 consumer1 STREAMS mystream >\n            \n            after 100\n            r XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >\n            \n            r SAVE\n            r DEBUG RELOAD\n            \n            # Verify both groups maintained separately\n            set pending1 [r XPENDING mystream group1]\n            set pending2 [r XPENDING mystream group2]\n            \n            assert_equal [lindex $pending1 0] 2  ;# group1 has 2 pending\n            assert_equal [lindex $pending2 0] 2  ;# group2 has 2 pending\n        } {} {external:skip needs:debug}\n\n        test \"XREADGROUP CLAIM NOACK state not persisted\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            after 100\n            r XREADGROUP GROUP group1 consumer1 NOACK CLAIM 50 STREAMS mystream >\n            \n            set pending_before [r XPENDING mystream group1]\n            assert_equal [lindex $pending_before 0] 0\n            \n            r SAVE\n            r DEBUG RELOAD\n            \n            set pending_after [r XPENDING mystream group1]\n            assert_equal [lindex $pending_after 0] 0\n        } {} {external:skip needs:debug}\n\n        test \"XREADGROUP CLAIM high delivery counts persist in RDB\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            \n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            # Claim multiple times to increase delivery count\n            for {set i 0} {$i < 10} {incr i} {\n                after 20\n                r XREADGROUP GROUP group1 consumer2 CLAIM 10 STREAMS mystream >\n            }\n            \n            set pending_before [r XPENDING mystream group1 - + 1]\n            set delivery_before [lindex $pending_before 0 3]\n            \n          
  r SAVE\n            r DEBUG RELOAD\n\n            set pending_after [r XPENDING mystream group1 - + 1]\n            set delivery_after [lindex $pending_after 0 3]\n            \n            assert_equal $delivery_before $delivery_after\n        } {} {external:skip needs:debug}\n\n        test \"XREADGROUP CLAIM usage stability with repeated claims\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            # Claim same message many times between consumers\n            for {set i 0} {$i < 1000} {incr i} {\n                after 2\n                set consumer_id [expr {$i % 10 + 1}]\n                r XREADGROUP GROUP group1 consumer$consumer_id CLAIM 1 STREAMS mystream >\n            }\n            \n            # Verify no memory leaks - PEL should still have only 1 message\n            set pending [r XPENDING mystream group1]\n            assert_equal [lindex $pending 0] 1\n        }\n\n        test \"XREADGROUP CLAIM with large number of PEL messages\" {\n            r DEL mystream\n            r XGROUP CREATE mystream group1 0 MKSTREAM\n            \n            # Create large PEL\n            for {set i 0} {$i < 10000} {incr i} {\n                r XADD mystream * field $i\n            }\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            after 100\n            \n            set result [r XREADGROUP GROUP group1 consumer2 CLAIM 50 COUNT 1000 STREAMS mystream >]\n            assert_equal [llength [lindex $result 0 1]] 1000\n        }\n\n        test \"XREADGROUP CLAIM within MULTI/EXEC transaction\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n            r XGROUP CREATE mystream group1 0\n            r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n            \n            after 100\n            \n            r MULTI\n            r 
XREADGROUP GROUP group1 consumer2 CLAIM 50 STREAMS mystream >\n            r XPENDING mystream group1\n            set results [r EXEC]\n            \n            # Verify transaction atomicity\n            assert_equal [lindex $results 1 0] 1\n        }\n\n        test \"XREAD with CLAIM option\" {\n            r DEL mystream\n            r XADD mystream 1-0 f v1\n\n            assert_error \"*ERR The CLAIM option is only supported*\" {r XREAD COUNT 2 CLAIM 10 STREAMS mystream 0-0}\n        }\n    }\n\n    # Verify that XNACK rejects every invalid invocation with the correct error.\n    # Covers: wrong argument count, nonexistent key/group (NOGROUP), wrong key\n    # type (WRONGTYPE), unrecognized options at every position the parser\n    # accepts them, invalid mode names, duplicate mode words, missing/bad IDS\n    # keyword, bad numids (non-integer, zero, negative, mismatch), invalid\n    # stream-ID format, RETRYCOUNT edge cases (non-integer, negative, overflow,\n    # missing value, missing IDS), and extra trailing arguments.\n    test \"XNACK argument and error validation\" {\n        # Wrong number of arguments (no stream needed)\n        assert_error \"*wrong number of arguments*\" {r XNACK}\n        assert_error \"*wrong number of arguments*\" {r XNACK key}\n        assert_error \"*wrong number of arguments*\" {r XNACK key group}\n        assert_error \"*wrong number of arguments*\" {r XNACK key group SILENT}\n        assert_error \"*wrong number of arguments*\" {r XNACK key group SILENT IDS}\n        assert_error \"*wrong number of arguments*\" {r XNACK key group SILENT IDS 1}\n\n        # Non-existent key / group\n        r DEL nosuchkey\n        assert_error \"*NOGROUP*\" {r XNACK nosuchkey grp SILENT IDS 1 1-0}\n\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        assert_error \"*NOGROUP*\" {r XNACK mystream nogroup SILENT IDS 1 1-0}\n\n        # Wrong key type\n        r DEL mykey\n        r SET mykey \"not a stream\"\n        
assert_error \"*WRONGTYPE*\" {r XNACK mykey grp FAIL IDS 1 1-0}\n\n        # All remaining checks need a stream + group + consumer\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # Unrecognized option at various positions — the parser accepts options\n        # both before and after the IDS block, so verify rejection in each slot.\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL BADOPT IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL IDS 1 1-0 BADOPT}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp SILENT BADOPT IDS 1 1-0 FORCE}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp SILENT FORCE BADOPT IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL RETRYCOUNT 5 BADOPT IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 5 BADOPT}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL FORCE IDS 1 1-0 BADOPT RETRYCOUNT 5}\n\n        # Invalid mode\n        assert_error \"ERR mode must be SILENT, FAIL, or FATAL\" {r XNACK mystream grp BADMODE IDS 1 1-0}\n\n        # Multiple mode words — only one mode is allowed per invocation.\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL FATAL IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp SILENT FAIL IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FATAL SILENT IDS 1 1-0}\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL SILENT FATAL IDS 1 1-0}\n\n        # IDS keyword validation\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp 
SILENT NOTIDS 1 1-0}\n        assert_error \"ERR syntax error, expected IDS keyword\" {r XNACK mystream grp SILENT FORCE RETRYCOUNT 5}\n\n        # numids validation\n        assert_error \"ERR numids must be a positive integer*\" {r XNACK mystream grp SILENT IDS abc 1-0}\n        assert_error \"ERR numids must be a positive integer*\" {r XNACK mystream grp SILENT IDS 0 1-0}\n        assert_error \"ERR numids must be a positive integer*\" {r XNACK mystream grp SILENT IDS -1 1-0}\n        assert_error \"ERR number of IDs doesn't match numids\" {r XNACK mystream grp SILENT IDS 2 1-0}\n\n        # Invalid stream ID format\n        assert_error \"ERR Invalid stream ID*\" {r XNACK mystream grp FAIL IDS 1 not-a-valid-id}\n\n        # RETRYCOUNT validation — non-integer, negative, overflow, missing value\n        assert_error \"ERR value is not an integer or out of range\" {r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT abc}\n        assert_error \"ERR Invalid RETRYCOUNT value, must be >= 0\" {r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT -1}\n        assert_error \"ERR value is not an integer or out of range\" {r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 99999999999999999999}\n        # RETRYCOUNT without a following value — consumed as trailing option\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT}\n        # RETRYCOUNT right after mode with no IDS — too few arguments\n        assert_error \"ERR wrong number of arguments for 'xnack' command\" {r XNACK mystream grp FAIL RETRYCOUNT}\n\n        # Extra args after numids IDs — the surplus ID is parsed as an option\n        assert_error \"ERR Unrecognized XNACK option*\" {r XNACK mystream grp FAIL IDS 1 1-0 2-0}\n    }\n\n    # Verify SILENT mode decrements delivery_count by 1, clamped at 0.\n    # XPENDING format per entry: {id consumer idle delivery_count}.\n    # After XNACK, consumer becomes {} (unowned) and idle becomes -1\n    # (delivery_time reset to 0).\n 
   test \"XNACK SILENT mode delivery_count behavior\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # delivery_count is 1 after XREADGROUP; SILENT decrements to 0\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 1\n        assert_equal 1 [r XNACK mystream grp SILENT IDS 1 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0] {1-0 {} -1 0}\n\n        # Clamp at 0: reclaim with RETRYCOUNT 0, then SILENT must not go below 0\n        r XCLAIM mystream grp c2 0 1-0 RETRYCOUNT 0\n        r XNACK mystream grp SILENT IDS 1 1-0\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 0\n\n        # Decrement from higher value: XCLAIM bumps delivery_count each time\n        r XCLAIM mystream grp c1 0 1-0\n        r XCLAIM mystream grp c1 0 1-0\n        r XCLAIM mystream grp c1 0 1-0\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 3\n        assert_equal 1 [r XNACK mystream grp SILENT IDS 1 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        # 3 - 1 = 2\n        assert_equal [lindex $pending 0] {1-0 {} -1 2}\n    }\n\n    # Verify FAIL mode NACKs the entry (makes it unowned) but preserves the\n    # original delivery_count. 
The count stays at 1 (set by XREADGROUP).\n    test \"XNACK FAIL mode keeps delivery_count unchanged\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 1\n\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0]\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        # delivery_count unchanged at 1\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n    }\n\n    # Verify FATAL mode sets delivery_count to LLONG_MAX (9223372036854775807),\n    # signaling permanent/unrecoverable failure for this entry.\n    test \"XNACK FATAL mode sets delivery_count to max\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        assert_equal 1 [r XNACK mystream grp FATAL IDS 1 1-0]\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        # 9223372036854775807 == LLONG_MAX\n        assert_equal [lindex $pending 0] {1-0 {} -1 9223372036854775807}\n    }\n\n    # Verify that XNACK removes entries from the consumer-level PEL (the entry\n    # becomes unowned) while keeping them in the group-level PEL.\n    # Setup: c1 owns {1-0, 2-0}, c2 owns {3-0}. 
NACK entries from both\n    # consumers and confirm that the entries become unowned.\n    # Also verifies that XNACK does not auto-create or destroy consumers.\n    test \"XNACK releases entries and removes from consumer PEL\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 COUNT 2 STREAMS mystream >\n        r XREADGROUP GROUP grp c2 COUNT 1 STREAMS mystream >\n\n        # XNACK entries owned by different consumers\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 3-0]\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0]\n\n        # Both NACKed entries should be unowned in the group PEL\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n        assert_equal [lindex $pending 2] {3-0 {} -1 1}\n\n        # Consumer-level PEL: c1 only has 2-0 left, c2 has nothing\n        set c1_pending [r XPENDING mystream grp - + 10 c1]\n        assert_equal [llength $c1_pending] 1\n        assert_equal [lindex $c1_pending 0 0] 2-0\n        set c2_pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $c2_pending] 0\n\n        # XNACK does not auto-create or destroy consumers\n        set info [r XINFO GROUPS mystream]\n        assert_equal [dict get [lindex $info 0] consumers] 2\n    }\n\n    # Verify the integer return value of XNACK (number of entries successfully\n    # NACKed) and several edge cases:\n    #  - IDs not in the PEL are silently skipped (return 0).\n    #  - Multiple IDs can be NACKed in a single call.\n    #  - When valid and non-PEL IDs are mixed, only valid ones are counted.\n    #  - Duplicate IDs: each occurrence is counted separately.\n    #  - NACKing against an empty PEL returns 0.\n    test \"XNACK return count and edge cases\" {\n        r DEL mystream\n        r XADD 
mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # Skips IDs not in PEL\n        assert_equal 0 [r XNACK mystream grp FAIL IDS 1 9-9]\n\n        # Multiple IDs at once\n        assert_equal 3 [r XNACK mystream grp SILENT IDS 3 1-0 2-0 3-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal $pending {{1-0 {} -1 0} {2-0 {} -1 0} {3-0 {} -1 0}}\n\n        # Reclaim all entries back to c1 for further sub-tests\n        r XCLAIM mystream grp c1 0 1-0 2-0 3-0\n\n        # Mixed valid and invalid IDs: only the 3 valid ones are counted\n        assert_equal 3 [r XNACK mystream grp FAIL IDS 5 1-0 9-9 2-0 8-8 3-0]\n        set pending [r XPENDING mystream grp - + 10]\n        foreach entry $pending {\n            assert_equal [lindex $entry 1] {}\n        }\n\n        # Duplicate IDs: the first NACK finds a consumer-owned entry, the\n        # second finds an already-NACKed entry — both count as successful.\n        r XCLAIM mystream grp c1 0 1-0\n        assert_equal 2 [r XNACK mystream grp FAIL IDS 2 1-0 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 2}\n\n        # Empty PEL returns 0\n        r XACK mystream grp 1-0 2-0 3-0\n        set info [r XPENDING mystream grp]\n        assert_equal [lindex $info 0] 0\n        assert_equal 0 [r XNACK mystream grp FAIL IDS 1 1-0]\n    }\n\n    # Verify behavior when re-NACKing an entry that is already in the NACK\n    # zone (unowned). 
Each mode still applies its delivery_count semantics:\n    #  - FAIL is idempotent (count unchanged, returns 1).\n    #  - SILENT still decrements.\n    #  - FATAL still sets to LLONG_MAX.\n    # Mode transitions on already-NACKed entries work correctly.\n    test \"XNACK on already-NACKed entry: idempotency and mode changes\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # Re-NACK with FAIL: returns 1, count unchanged\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0]\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n\n        # SILENT on already-NACKed: decrements 1 to 0\n        assert_equal 1 [r XNACK mystream grp SILENT IDS 1 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 0\n\n        # FATAL on already-NACKed: sets to LLONG_MAX\n        assert_equal 1 [r XNACK mystream grp FATAL IDS 1 1-0]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0] {1-0 {} -1 9223372036854775807}\n\n        # FAIL on already-NACKed: returns success (idempotent)\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0]\n    }\n\n    # Verify that NACKed entries form a \"NACK zone\" at the head of the\n    # time-ordered PEL with FIFO insertion order.\n    # NACKed entries have delivery_time=0, so XPENDING reports idle=-1.\n    # XINFO STREAM FULL iterates the PEL rax by stream-ID order (not NACK\n    # order), so we check both views to confirm correct state.\n    test \"XNACK ordering: NACKed entries at head of PEL with FIFO order\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 
3-0 f v3\n        r XADD mystream 4-0 f v4\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # XNACK in non-sequential stream-ID order: 3-0 first, then 1-0\n        r XNACK mystream grp FAIL IDS 1 3-0\n        r XNACK mystream grp FAIL IDS 1 1-0\n\n        # NACKed entries should have delivery_time=0 (idle=-1 in XPENDING)\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 4\n\n        foreach entry $pending {\n            set id [lindex $entry 0]\n            if {$id eq \"3-0\" || $id eq \"1-0\"} {\n                assert_equal [lindex $entry 1] {} ;# unowned\n                assert_equal [lindex $entry 2] -1 ;# idle is -1 because delivery_time is 0\n            } else {\n                assert_equal [lindex $entry 1] c1 ;# still owned\n            }\n        }\n\n        # XINFO STREAM FULL iterates the PEL rax by stream ID order.\n        # NACKed entries show delivery_time=0 and consumer={}.\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        set pel [dict get $group pending]\n\n        assert_equal [lindex $pel 0] {1-0 {} 0 1}\n        assert_match {2-0 c1 * 1} [lindex $pel 1]\n        assert_equal [lindex $pel 2] {3-0 {} 0 1}\n        assert_match {4-0 c1 * 1} [lindex $pel 3]\n    }\n\n    # Verify that NACKed PEL entries survive deletion of the underlying stream\n    # entry. 
Both XDEL (single entry removal) and XTRIM (bulk trimming) must\n    # not remove PEL entries — they become \"ghost\" entries that are cleaned up\n    # only when claimed (XCLAIM/XAUTOCLAIM) or acknowledged (XACK).\n    test \"XNACK NACKed entries persist after XDEL and XTRIM\" {\n        # XDEL case: delete the stream entry, PEL entry stays\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        r XDEL mystream 1-0\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 2\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n\n        # XTRIM case: trim all but the last entry, PEL entries remain\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        r XTRIM mystream MAXLEN 1\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n    }\n\n    # Verify that XNACK handles more IDs than fit in the stack-allocated\n    # static vector (STREAMID_STATIC_VECTOR_LEN), forcing a heap allocation\n    # for the ID array. 
Uses 50 IDs to exceed the typical static limit.\n    test \"XNACK with IDs exceeding STREAMID_STATIC_VECTOR_LEN for heap allocation\" {\n        r DEL mystream\n        r XGROUP CREATE mystream grp $ MKSTREAM\n\n        set ids {}\n        for {set i 1} {$i <= 50} {incr i} {\n            r XADD mystream $i-0 f v$i\n            lappend ids \"$i-0\"\n        }\n        r XREADGROUP GROUP grp c1 COUNT 50 STREAMS mystream >\n\n        set result [r XNACK mystream grp FAIL IDS 50 {*}$ids]\n        assert_equal $result 50\n\n        set pending [r XPENDING mystream grp - + 100]\n        assert_equal [llength $pending] 50\n        foreach entry $pending {\n            assert_equal [lindex $entry 1] {}\n        }\n    }\n\n    # Verify that the RETRYCOUNT option overrides the delivery_count that\n    # the mode would normally set. It takes precedence over FATAL (would\n    # set LLONG_MAX), SILENT (would decrement), and FAIL (would keep).\n    # Also works when applied to an already-NACKed entry.\n    test \"XNACK RETRYCOUNT overrides delivery_count\" {\n        # RETRYCOUNT overrides FATAL: count is 42 instead of LLONG_MAX\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        assert_equal 1 [r XNACK mystream grp FATAL IDS 1 1-0 RETRYCOUNT 42]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0] {1-0 {} -1 42}\n\n        # RETRYCOUNT overrides SILENT: count is 10 instead of decrement\n        r XCLAIM mystream grp c1 0 1-0\n        assert_equal 1 [r XNACK mystream grp SILENT IDS 1 1-0 RETRYCOUNT 10]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0 3] 10\n\n        # RETRYCOUNT 0: explicitly set count to zero\n        r XCLAIM mystream grp c1 0 1-0\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 0]\n        set pending [r XPENDING mystream grp - + 10]\n     
   assert_equal [lindex $pending 0] {1-0 {} -1 0}\n\n        # RETRYCOUNT on already-NACKed entry: overwrites the existing count\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 99]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0] {1-0 {} -1 99}\n    }\n\n    # Verify FORCE option behavior. FORCE creates an unowned PEL entry for an\n    # ID that is not currently in any consumer's PEL, as long as the\n    # corresponding stream entry exists. Covers:\n    #  - Creating a new NACKed PEL entry without prior XREADGROUP.\n    #  - Skipping non-existent stream entries (returns 0).\n    #  - FATAL and SILENT modes apply their delivery_count logic on FORCE-created entries.\n    #  - On already-owned entries, FORCE follows the normal NACK path.\n    #  - On already-NACKed entries, FORCE is a no-op (found-path applies).\n    #  - FORCE on an empty stream returns 0 and creates no PEL entry.\n    test \"XNACK FORCE behavior\" {\n        # FORCE creates a new unowned PEL entry (no prior XREADGROUP)\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 FORCE]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        # FAIL + FORCE on a new entry: delivery_count defaults to 0\n        assert_equal [lindex $pending 0] {1-0 {} -1 0}\n        # Verify the FORCE-created entry is claimable\n        set claimed [r XCLAIM mystream grp c1 0 1-0]\n        assert_equal [llength $claimed] 1\n        assert_equal [lindex $claimed 0 0] 1-0\n\n        # FORCE skips non-existent stream entries\n        assert_equal 0 [r XNACK mystream grp FAIL IDS 1 9-9 FORCE]\n\n        # FATAL + FORCE sets delivery_count to LLONG_MAX\n        r XACK mystream grp 1-0\n        assert_equal 1 [r XNACK mystream grp FATAL IDS 
1 1-0 FORCE]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [lindex $pending 0] {1-0 {} -1 9223372036854775807}\n\n        # SILENT + FORCE: no prior count to decrement, so clamped to 0\n        r XACK mystream grp 1-0\n        assert_equal 2 [r XNACK mystream grp SILENT IDS 2 1-0 2-0 FORCE]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 2\n        assert_equal [lindex $pending 0] {1-0 {} -1 0}\n        assert_equal [lindex $pending 1] {2-0 {} -1 0}\n\n        # On already-owned PEL entries: FORCE follows the normal NACK path\n        r XACK mystream grp 1-0 2-0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        assert_equal 3 [r XNACK mystream grp FAIL IDS 3 1-0 2-0 3-0 FORCE]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n        assert_equal [lindex $pending 1] {2-0 {} -1 1}\n        assert_equal [lindex $pending 2] {3-0 {} -1 1}\n        set c1_pending [r XPENDING mystream grp - + 10 c1]\n        assert_equal [llength $c1_pending] 0\n\n        # On already-NACKed entry: found-path applies, no duplicate created\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 FORCE]\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n\n        # FORCE on empty stream (MKSTREAM group): entry doesn't exist, returns 0\n        r DEL mystream\n        r XGROUP CREATE mystream grp $ MKSTREAM\n        assert_equal 0 [r XNACK mystream grp FAIL IDS 1 1-0 FORCE]\n        set info [r XPENDING mystream grp]\n        assert_equal [lindex $info 0] 0\n    }\n\n    # Verify that FORCE and RETRYCOUNT work together: FORCE creates the PEL\n    # entry for IDs not currently in the PEL, and RETRYCOUNT overrides the\n    # delivery_count that the mode would normally assign. 
Tests all three\n    # modes (FAIL, SILENT, FATAL) combined with FORCE + RETRYCOUNT.\n    test \"XNACK FORCE + RETRYCOUNT combination\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 7 FORCE]\n        assert_equal 1 [r XNACK mystream grp SILENT IDS 1 2-0 RETRYCOUNT 5 FORCE]\n        assert_equal 1 [r XNACK mystream grp FATAL IDS 1 3-0 RETRYCOUNT 99 FORCE]\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 3\n\n        # RETRYCOUNT overrides all modes: each has the explicitly set count\n        assert_equal [lindex $pending 0] {1-0 {} -1 7}\n        assert_equal [lindex $pending 1] {2-0 {} -1 5}\n        assert_equal [lindex $pending 2] {3-0 {} -1 99}\n    }\n\n    # Verify that FORCE and RETRYCOUNT options are accepted both before and\n    # after the \"IDS numids id...\" block, in any permutation.\n    # Each sub-case ACKs the entry afterward so the next sub-case starts clean.\n    test \"XNACK flexible IDS position - options accepted before and after IDS block\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n\n        # FORCE before IDS\n        assert_equal 1 [r XNACK mystream grp FAIL FORCE IDS 1 1-0]\n        r XACK mystream grp 1-0\n\n        # FORCE + RETRYCOUNT both before IDS\n        assert_equal 1 [r XNACK mystream grp FAIL FORCE RETRYCOUNT 42 IDS 1 1-0]\n        r XACK mystream grp 1-0\n\n        # RETRYCOUNT before IDS, FORCE after IDS\n        assert_equal 1 [r XNACK mystream grp FAIL RETRYCOUNT 5 IDS 1 1-0 FORCE]\n        r XACK mystream grp 1-0\n\n        # FORCE before IDS, RETRYCOUNT after IDS\n        assert_equal 1 [r XNACK mystream grp FAIL FORCE IDS 1 1-0 RETRYCOUNT 3]\n        r XACK 
mystream grp 1-0\n\n        # Multiple IDs with options before IDS\n        assert_equal 3 [r XNACK mystream grp FAIL RETRYCOUNT 10 IDS 3 1-0 2-0 3-0 FORCE]\n        r XACK mystream grp 1-0 2-0 3-0\n\n        # Canonical order (IDS first, options after) still works\n        assert_equal 1 [r XNACK mystream grp FAIL IDS 1 1-0 RETRYCOUNT 20 FORCE]\n    }\n\n    # Verify that re-NACKing an already-NACKed entry moves it to the tail\n    # of the NACK zone. The NACK zone is time-ordered (FIFO insertion),\n    # so moving to the tail means it will be claimed last.\n    # Initial NACK order: 1-0, 2-0, 3-0. After re-NACKing 1-0 the order\n    # becomes: 2-0, 3-0, 1-0. Verified via XREADGROUP CLAIM which walks\n    # the PEL from pel_time_head to pel_time_tail.\n    test \"XNACK re-NACK moves entry to end of NACK zone\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp FAIL IDS 3 1-0 2-0 3-0\n\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 3\n\n        # Re-NACK 1-0 — moves it to end of NACK zone\n        r XNACK mystream grp FAIL IDS 1 1-0\n\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 3\n        assert_equal [dict get $group pel-count] 3\n\n        # XREADGROUP CLAIM walks from pel_time_head to pel_time_tail.\n        # After the re-NACK, zone order is: 2-0, 3-0, 1-0.\n        # `after 10` ensures enough idle time for the CLAIM min-idle threshold.\n        after 10\n        set r1 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 5 STREAMS mystream >]\n        set msg1 [lindex [lindex $r1 0] 1 0 0]\n        assert_equal $msg1 2-0\n\n        after 10\n        set r2 
[r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 5 STREAMS mystream >]\n        set msg2 [lindex [lindex $r2 0] 1 0 0]\n        assert_equal $msg2 3-0\n\n        after 10\n        set r3 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 5 STREAMS mystream >]\n        set msg3 [lindex [lindex $r3 0] 1 0 0]\n        assert_equal $msg3 1-0\n    }\n\n    # Verify that NACKed entries are claimable by all three claim mechanisms.\n    # NACKed entries have delivery_time=0 which means effectively infinite\n    # idle time, so they always satisfy any min-idle-time threshold.\n    # Each sub-test sets up a fresh stream, NACKs an entry, then claims it.\n    test \"XNACK NACKed entries claimable via XCLAIM, XAUTOCLAIM, and XREADGROUP CLAIM\" {\n        # XCLAIM with large min-idle-time: succeeds because idle is infinite\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        set claimed [r XCLAIM mystream grp c2 99999 1-0]\n        assert_equal [llength $claimed] 1\n        assert_equal [lindex $claimed 0 0] 1-0\n        set pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0 0] 1-0\n\n        # XAUTOCLAIM with large min-idle-time: also succeeds\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        set result [r XAUTOCLAIM mystream grp c2 99999 0-0]\n        set claimed_msgs [lindex $result 1]\n        assert_equal [llength $claimed_msgs] 1\n        assert_equal [lindex $claimed_msgs 0 0] 1-0\n\n        # XCLAIM with min-idle-time 0: trivially satisfied\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n       
 r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        set claimed [r XCLAIM mystream grp c2 0 1-0]\n        assert_equal [llength $claimed] 1\n        assert_equal [lindex $claimed 0 0] 1-0\n        set pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0 0] 1-0\n\n        # XAUTOCLAIM with min-idle-time 0\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        set result [r XAUTOCLAIM mystream grp c2 0 0-0]\n        set claimed_msgs [lindex $result 1]\n        assert_equal [llength $claimed_msgs] 1\n        assert_equal [lindex $claimed_msgs 0 0] 1-0\n        set pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0 0] 1-0\n\n        # XREADGROUP CLAIM: `after 10` ensures idle time exceeds the 5ms threshold\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n        r XNACK mystream grp FAIL IDS 1 1-0\n        after 10\n        set result [r XREADGROUP GROUP grp c2 CLAIM 5 STREAMS mystream >]\n        set pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0 0] 1-0\n    }\n\n    # Verify that claiming a NACKed entry whose underlying stream data has\n    # been deleted (a \"ghost\" PEL entry) cleans the PEL entry instead of\n    # returning data.\n    #  - XCLAIM on a deleted NACKed entry: returns empty, removes the PEL\n    #    entry (exercises the streamPropagateXACK path for unowned NACKs).\n    #  - XAUTOCLAIM: claims the surviving owned entry (2-0) and reports the\n    #    deleted NACKed entry (3-0) in its deleted-IDs list.\n    test \"XNACK 
XCLAIM/XAUTOCLAIM of deleted NACKed entries cleans PEL\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp FAIL IDS 2 1-0 3-0\n        r XDEL mystream 1-0\n        r XDEL mystream 3-0\n\n        # XCLAIM of deleted unowned NACK: returns empty but cleans PEL\n        # (exercises the streamPropagateXACK path for unowned NACKs)\n        set claimed [r XCLAIM mystream grp c2 0 1-0]\n        assert_equal [llength $claimed] 0\n\n        # 1-0 was cleaned from PEL; 3-0 still a ghost NACKed entry\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 2\n        assert_match {2-0 c1 * 1} [lindex $pending 0]\n        assert_equal [lindex $pending 1] {3-0 {} -1 1}\n\n        # XAUTOCLAIM walks the entire PEL: claims surviving 2-0, reports deleted 3-0\n        set result [r XAUTOCLAIM mystream grp c2 0 0-0]\n        set claimed_msgs [lindex $result 1]\n        set deleted_ids [lindex $result 2]\n\n        assert_equal [llength $claimed_msgs] 1\n        assert_equal [lindex $claimed_msgs 0 0] 2-0\n        assert_equal [llength $deleted_ids] 1\n        assert_equal [lindex $deleted_ids 0] 3-0\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        assert_match {2-0 c2 * 2} [lindex $pending 0]\n    }\n\n    # Verify that a client blocked on XREADGROUP BLOCK CLAIM is woken up\n    # when entries are NACKed. 
NACKed entries have delivery_time=0 (infinite\n    # idle), so they immediately satisfy the CLAIM min-idle-time threshold.\n    # Uses a deferring client (non-blocking Tcl socket) to simulate a\n    # blocked consumer waiting for claimable entries.\n    test \"XNACK XREADGROUP BLOCK CLAIM wakes up on NACKed entries\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # c2 blocks waiting for claimable entries with min-idle 1000ms\n        set rd [redis_deferring_client]\n        $rd XREADGROUP GROUP grp c2 BLOCK 5000 CLAIM 1000 STREAMS mystream >\n        wait_for_blocked_client\n\n        # XNACK makes entries immediately claimable, waking c2\n        r XNACK mystream grp FAIL IDS 2 1-0 2-0\n\n        wait_for_blocked_clients_count 0\n        set result [$rd read]\n        assert_equal [llength $result] 1\n        lassign [lindex $result 0] stream_name messages\n        assert_equal $stream_name \"mystream\"\n        assert_equal [llength $messages] 2\n        assert_equal [lindex $messages 0 0] 1-0\n        assert_equal [lindex $messages 1 0] 2-0\n\n        # Entries are now owned by c2\n        set pending [r XPENDING mystream grp - + 10 c2]\n        assert_equal [llength $pending] 2\n        assert_equal [lindex $pending 0 0] 1-0\n        assert_equal [lindex $pending 1 0] 2-0\n\n        $rd close\n    }\n\n    # Verify that when a consumer reads its own pending entries via\n    # `XREADGROUP ... 
0-0` (pending-entry scan), NACKed entries are\n    # excluded because they are no longer owned by any consumer.\n    # Only 2-0 (still owned by c1) should be returned.\n    test \"XNACK XREADGROUP pending read excludes NACKed entries\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp FAIL IDS 2 1-0 3-0\n\n        set result [r XREADGROUP GROUP grp c1 STREAMS mystream 0-0]\n        set entries [lindex $result 0 1]\n        assert_equal [llength $entries] 1\n        assert_equal [lindex $entries 0 0] 2-0\n    }\n\n    # Verify that XINFO CONSUMERS reflects the reduced pending count after\n    # XNACK, and that a consumer is not destroyed even when all its entries\n    # are NACKed (0 pending). Consumer cleanup is only done by explicit\n    # XGROUP DELCONSUMER.\n    test \"XNACK effect on consumer state and XINFO CONSUMERS\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # c1 initially has 3 pending\n        set consumers [r XINFO CONSUMERS mystream grp]\n        set c1_info [lindex $consumers 0]\n        assert_equal [dict get $c1_info pending] 3\n\n        # XNACK 2 entries: c1 pending drops to 1\n        r XNACK mystream grp FAIL IDS 2 1-0 2-0\n        set consumers [r XINFO CONSUMERS mystream grp]\n        set c1_info [lindex $consumers 0]\n        assert_equal [dict get $c1_info pending] 1\n\n        # XNACK the last entry: c1 has 0 pending but still exists\n        r XNACK mystream grp FAIL IDS 1 3-0\n        set c1_pending [r XPENDING mystream grp - + 10 c1]\n        assert_equal [llength $c1_pending] 0\n        set info [r XINFO GROUPS mystream]\n        set grp [lindex $info 
0]\n        assert_equal [dict get $grp consumers] 1\n    }\n\n    # Verify that XGROUP DESTROY removes all PEL entries including NACKed\n    # (unowned) ones. After destroying the group and creating a new one,\n    # the PEL is empty.\n    test \"XNACK XGROUP DESTROY cleans up NACKed entries\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp FAIL IDS 2 1-0 2-0\n\n        set info [r XPENDING mystream grp]\n        assert_equal [lindex $info 0] 2\n\n        r XGROUP DESTROY mystream grp\n\n        # New group has a clean PEL\n        r XGROUP CREATE mystream grp2 0\n        set info [r XPENDING mystream grp2]\n        assert_equal [lindex $info 0] 0\n    }\n\n    # Verify that XGROUP DELCONSUMER only removes consumer-owned PEL entries.\n    # NACKed (unowned) entries are not affected — they remain in the group\n    # PEL and can still be claimed by other consumers.\n    # Setup: c1 owns {1-0, 2-0}. NACK 1-0. Delete c1. 
Only 2-0 (owned)\n    # is removed; 1-0 (NACKed/unowned) survives.\n    test \"XNACK XGROUP DELCONSUMER works when group PEL has NACKed entries\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp FAIL IDS 1 1-0\n\n        # DELCONSUMER returns the count of consumer-owned entries removed (1: only 2-0)\n        set deleted_pending [r XGROUP DELCONSUMER mystream grp c1]\n        assert_equal $deleted_pending 1\n\n        # Group PEL still has the NACKed entry (1-0)\n        set info [r XPENDING mystream grp]\n        assert_equal [lindex $info 0] 1\n\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 1\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n\n        set stream_info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $stream_info groups] 0]\n        assert_equal [dict get $group nacked-count] 1\n\n        # The surviving NACKed entry can still be claimed\n        set claimed [r XCLAIM mystream grp c2 0 1-0]\n        assert_equal [llength $claimed] 1\n    }\n\n    # Verify that the `nacked-count` field reported by XINFO STREAM FULL\n    # accurately tracks the number of entries in the NACK zone through\n    # various operations:\n    #  - XNACK increases nacked-count (pel-count stays the same).\n    #  - XCLAIM (reclaim) decreases nacked-count (moves entry back to owned).\n    #  - XACK of a NACKed entry decreases both nacked-count and pel-count.\n    #  - nacked-count is per-group (independent across groups).\n    test \"XNACK XINFO STREAM FULL nacked-count reflects nack zone size\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XADD mystream 4-0 f v4\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 COUNT 4 
STREAMS mystream >\n\n        # Before any XNACK: all entries owned, nacked-count is 0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 0\n        assert_equal [dict get $group pel-count] 4\n\n        # NACK one entry: nacked-count goes up, pel-count unchanged\n        r XNACK mystream grp FAIL IDS 1 1-0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 1\n        assert_equal [dict get $group pel-count] 4\n\n        # NACK two more\n        r XNACK mystream grp FAIL IDS 2 2-0 3-0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 3\n        assert_equal [dict get $group pel-count] 4\n\n        # Reclaim via XCLAIM: nacked-count decreases, pel-count unchanged\n        r XCLAIM mystream grp c1 0 1-0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 2\n        assert_equal [dict get $group pel-count] 4\n\n        # XACK a NACKed entry: both counts decrease\n        r XACK mystream grp 2-0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 1\n        assert_equal [dict get $group pel-count] 3\n\n        # XACK last NACKed entry: nacked-count reaches 0\n        r XACK mystream grp 3-0\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 0\n        assert_equal [dict get $group pel-count] 2\n\n        # Multiple groups: nacked-count is per-group\n        r XNACK mystream grp FAIL IDS 1 1-0\n        r XGROUP CREATE mystream grp2 0\n        r 
XREADGROUP GROUP grp2 c2 COUNT 4 STREAMS mystream >\n        set info [r XINFO STREAM mystream FULL]\n        set grp1 [lindex [dict get $info groups] 0]\n        set grp2 [lindex [dict get $info groups] 1]\n        assert_equal [dict get $grp1 nacked-count] 1\n        assert_equal [dict get $grp2 nacked-count] 0\n    }\n\n    # Verify that NACKed entries survive an RDB save/reload cycle.\n    # Uses all three modes (FAIL, FATAL, SILENT) plus FORCE-created entries\n    # in a second group (grp2) with RETRYCOUNT. After DEBUG RELOAD:\n    #  - delivery_counts are preserved (FAIL=1, FATAL=LLONG_MAX, SILENT=0).\n    #  - NACK zone order is preserved (verified via XREADGROUP CLAIM order).\n    #  - FORCE-created entries in grp2 are intact and claimable.\n    test \"XNACK RDB save and load preserves NACKed entries\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XADD mystream 4-0 f v4\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        # NACK with different modes\n        r XNACK mystream grp FAIL IDS 1 1-0\n        r XNACK mystream grp FATAL IDS 1 2-0\n        r XNACK mystream grp SILENT IDS 1 3-0\n\n        # Separate group: FORCE-created entries (no prior XREADGROUP in grp2)\n        r XGROUP CREATE mystream grp2 0\n        r XNACK mystream grp2 FAIL IDS 1 1-0 FORCE\n        r XNACK mystream grp2 FATAL IDS 1 2-0 RETRYCOUNT 77 FORCE\n\n        r SAVE\n        r DEBUG RELOAD\n\n        # Verify grp state after reload\n        set pending [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending] 4\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n        assert_equal [lindex $pending 1] {2-0 {} -1 9223372036854775807}\n        assert_equal [lindex $pending 2] {3-0 {} -1 0}\n        assert_match {4-0 c1 * 1} [lindex $pending 3]\n\n        # Verify NACK zone order is preserved: 1-0, 2-0, 3-0\n        set r1 
[r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r1 0] 1 0 0] 1-0\n        set r2 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r2 0] 1 0 0] 2-0\n\n        # Verify grp2 FORCE-created entries survived the reload\n        set pending2 [r XPENDING mystream grp2 - + 10]\n        assert_equal [llength $pending2] 2\n        assert_equal [lindex $pending2 0] {1-0 {} -1 0}\n        assert_equal [lindex $pending2 1] {2-0 {} -1 77}\n\n        set claimed [r XCLAIM mystream grp2 c1 0 1-0 2-0]\n        assert_equal [llength $claimed] 2\n    } {} {external:skip needs:debug}\n\n    # Verify that NACKed entries survive DUMP/RESTORE serialization.\n    # After DUMP + DEL + RESTORE, the PEL state (delivery_counts, unowned\n    # status, nacked-count, and NACK zone claim order) is identical to the\n    # original.\n    test \"XNACK NACKed entries survive DUMP and RESTORE\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp SILENT IDS 1 1-0\n        r XNACK mystream grp FATAL IDS 1 3-0\n\n        set pending_before [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_before] 3\n\n        set dump [r DUMP mystream]\n        r DEL mystream\n        r RESTORE mystream 0 $dump\n\n        # PEL state must match pre-DUMP state\n        set pending_after [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_after] 3\n\n        assert_equal [lindex $pending_after 0] {1-0 {} -1 0}\n        assert_match {2-0 c1 * 1} [lindex $pending_after 1]\n        assert_equal [lindex $pending_after 2] {3-0 {} -1 9223372036854775807}\n\n        set info [r XINFO STREAM mystream FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal 
[dict get $group nacked-count] 2\n\n        # NACK zone claim order preserved: 1-0 first, then 3-0\n        set r1 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r1 0] 1 0 0] 1-0\n        set r2 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r2 0] 1 0 0] 3-0\n    }\n\n    # Verify that COPY creates an independent copy that preserves NACKed\n    # entries (delivery_counts, unowned status, nacked-count, NACK zone\n    # order). Also confirms the original stream is unaffected by operations\n    # on the copy.\n    # Uses hash-tag keys {t} to ensure same slot for cluster compatibility.\n    test \"XNACK COPY preserves NACKed entries\" {\n        r DEL mystream{t} mystream{t}_copy\n        r XADD mystream{t} 1-0 f v1\n        r XADD mystream{t} 2-0 f v2\n        r XADD mystream{t} 3-0 f v3\n        r XGROUP CREATE mystream{t} grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream{t} >\n\n        r XNACK mystream{t} grp FAIL IDS 1 1-0\n        r XNACK mystream{t} grp FATAL IDS 1 3-0\n\n        r COPY mystream{t} mystream{t}_copy\n\n        # Copied stream has the same NACKed state\n        set pending [r XPENDING mystream{t}_copy grp - + 10]\n        assert_equal [llength $pending] 3\n        assert_equal [lindex $pending 0] {1-0 {} -1 1}\n        assert_match {2-0 c1 * 1} [lindex $pending 1]\n        assert_equal [lindex $pending 2] {3-0 {} -1 9223372036854775807}\n\n        set info [r XINFO STREAM mystream{t}_copy FULL]\n        set group [lindex [dict get $info groups] 0]\n        assert_equal [dict get $group nacked-count] 2\n\n        # NACK zone order is preserved in the copy\n        set r1 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream{t}_copy >]\n        assert_equal [lindex [lindex $r1 0] 1 0 0] 1-0\n        set r2 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream{t}_copy >]\n        assert_equal [lindex [lindex $r2 0] 1 0 0] 
3-0\n\n        # Original stream is unaffected: 1-0 still NACKed/unowned\n        set orig_pending [r XPENDING mystream{t} grp - + 10]\n        assert_equal [lindex $orig_pending 0 1] {}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes aof-use-rdb-preamble no}} {\n    # Verify that NACKed entries are correctly emitted during AOF rewrite\n    # and fully restored via `debug loadaof`. After rewrite + reload,\n    # delivery_counts, unowned status, and NACK zone claim order must\n    # match the pre-rewrite state.\n    test \"XNACK entries survive AOF rewrite\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 STREAMS mystream >\n\n        r XNACK mystream grp SILENT IDS 1 1-0\n        r XNACK mystream grp FAIL IDS 1 3-0\n\n        set pending_before [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_before] 3\n        assert_equal [lindex $pending_before 0] {1-0 {} -1 0}\n        assert_match {2-0 c1 * 1} [lindex $pending_before 1]\n        assert_equal [lindex $pending_before 2] {3-0 {} -1 1}\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n\n        # Verify state matches pre-rewrite\n        set pending_after [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_after] 3\n        assert_equal [lindex $pending_after 0] {1-0 {} -1 0}\n        assert_match {2-0 c1 * 1} [lindex $pending_after 1]\n        assert_equal [lindex $pending_after 2] {3-0 {} -1 1}\n\n        # NACK zone claim order preserved\n        set r1 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r1 0] 1 0 0] 1-0\n        set r2 [r XREADGROUP GROUP grp c2 COUNT 1 CLAIM 0 STREAMS mystream >]\n        assert_equal [lindex [lindex $r2 0] 1 0 0] 3-0\n    }\n\n    # Test AOF rewrite when the NACK 
zone has more entries than the AOF\n    # batch size (64 entries per XNACK FORCE batch in the AOF emitter).\n    # With 65 NACKed entries + 1 owned entry, the rewriter must emit\n    # multiple XNACK FORCE batches for the NACK zone and a separate\n    # XCLAIM batch for the owned tail. After rewrite + reload, all 66\n    # PEL entries must be intact with correct ownership and delivery_counts.\n    test \"XNACK AOF rewrite batch split -- 65 NACKed entries with owned tail\" {\n        r DEL mystream\n\n        set total_nack 65\n        set total [expr {$total_nack + 1}]\n\n        for {set i 1} {$i <= $total} {incr i} {\n            r XADD mystream $i-0 f v$i\n        }\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 COUNT $total STREAMS mystream >\n\n        set nack_ids {}\n        for {set i 1} {$i <= $total_nack} {incr i} {\n            lappend nack_ids $i-0\n        }\n        r XNACK mystream grp FAIL IDS $total_nack {*}$nack_ids\n\n        set pending_before [r XPENDING mystream grp - + 200]\n        assert_equal [llength $pending_before] $total\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n\n        set pending_after [r XPENDING mystream grp - + 200]\n        assert_equal [llength $pending_after] $total\n\n        # All 65 NACKed entries: unowned with delivery_count=1\n        for {set i 0} {$i < $total_nack} {incr i} {\n            set entry [lindex $pending_after $i]\n            assert_equal [lindex $entry 0] \"[expr {$i + 1}]-0\"\n            assert_equal [lindex $entry 1] {}\n            assert_equal [lindex $entry 3] 1\n        }\n\n        # The last entry (66-0) is still owned by c1\n        set last [lindex $pending_after $total_nack]\n        assert_equal [lindex $last 0] \"$total-0\"\n        assert_equal [lindex $last 1] c1\n\n        set claimed [r XCLAIM mystream grp c2 0 1-0 65-0]\n        assert_equal [llength $claimed] 2\n    }\n\n    # Edge case: the entire PEL consists of 
NACKed entries (no owned\n    # entries at all). With 65 entries exceeding the 64-entry AOF batch\n    # limit, the rewriter must split into multiple batches even though\n    # there is no owned tail. After reload all entries are unowned.\n    test \"XNACK AOF rewrite batch split -- entire PEL is NACK zone\" {\n        r DEL mystream\n\n        set total 65\n\n        for {set i 1} {$i <= $total} {incr i} {\n            r XADD mystream $i-0 f v$i\n        }\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 COUNT $total STREAMS mystream >\n\n        set nack_ids {}\n        for {set i 1} {$i <= $total} {incr i} {\n            lappend nack_ids $i-0\n        }\n        r XNACK mystream grp FAIL IDS $total {*}$nack_ids\n\n        set pending_before [r XPENDING mystream grp - + 200]\n        assert_equal [llength $pending_before] $total\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n\n        set pending_after [r XPENDING mystream grp - + 200]\n        assert_equal [llength $pending_after] $total\n\n        # Every entry is unowned with delivery_count=1\n        for {set i 0} {$i < $total} {incr i} {\n            assert_equal [lindex $pending_after $i] \"[expr {$i + 1}]-0 {} -1 1\"\n        }\n    }\n\n    # Verify that AOF rewrite correctly batches NACKed entries that have\n    # different delivery_counts. 
The AOF emitter groups consecutive entries\n    # with the same delivery_count into a single XNACK FORCE command;\n    # entries with different counts require separate batches.\n    # Setup: 6 entries NACKed with mixed modes/RETRYCOUNT:\n    #   1-0,2-0 = FATAL (LLONG_MAX), 3-0,4-0 = SILENT (0),\n    #   5-0 = RETRYCOUNT 42, 6-0 = FAIL (1).\n    test \"XNACK AOF rewrite with mixed delivery_counts batches correctly\" {\n        r DEL mystream\n\n        for {set i 1} {$i <= 6} {incr i} {\n            r XADD mystream $i-0 f v$i\n        }\n        r XGROUP CREATE mystream grp 0\n        r XREADGROUP GROUP grp c1 COUNT 6 STREAMS mystream >\n\n        r XNACK mystream grp FATAL IDS 2 1-0 2-0\n        r XNACK mystream grp SILENT IDS 2 3-0 4-0\n        r XNACK mystream grp FAIL IDS 1 5-0 RETRYCOUNT 42\n        r XNACK mystream grp FAIL IDS 1 6-0\n\n        set pending_before [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_before] 6\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n\n        set pending_after [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_after] 6\n\n        # Verify each entry retained its specific delivery_count\n        foreach entry $pending_after {\n            set id [lindex $entry 0]\n            set consumer [lindex $entry 1]\n            set dc [lindex $entry 3]\n\n            assert_equal $consumer {}\n\n            switch $id {\n                1-0 - 2-0 {\n                    assert_equal $dc 9223372036854775807\n                }\n                3-0 - 4-0 {\n                    assert_equal $dc 0\n                }\n                5-0 {\n                    assert_equal $dc 42\n                }\n                6-0 {\n                    assert_equal $dc 1\n                }\n            }\n        }\n    }\n\n    # Verify that FORCE-created PEL entries (which were never delivered\n    # to a consumer via XREADGROUP) survive AOF rewrite. 
These entries\n    # only exist in the group PEL, not in any consumer PEL, so the AOF\n    # emitter must handle them specially.\n    test \"XNACK FORCE-created entries survive AOF rewrite\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v1\n        r XADD mystream 2-0 f v2\n        r XADD mystream 3-0 f v3\n        r XGROUP CREATE mystream grp 0\n\n        r XNACK mystream grp FAIL IDS 1 1-0 FORCE\n        r XNACK mystream grp FATAL IDS 1 2-0 FORCE\n        r XNACK mystream grp SILENT IDS 1 3-0 RETRYCOUNT 33 FORCE\n\n        set pending_before [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_before] 3\n\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n\n        set pending_after [r XPENDING mystream grp - + 10]\n        assert_equal [llength $pending_after] 3\n        assert_equal [lindex $pending_after 0] {1-0 {} -1 0}\n        assert_equal [lindex $pending_after 1] {2-0 {} -1 9223372036854775807}\n        assert_equal [lindex $pending_after 2] {3-0 {} -1 33}\n\n        # FORCE-created entries are still claimable after reload\n        set claimed [r XCLAIM mystream grp c1 0 1-0 2-0 3-0]\n        assert_equal [llength $claimed] 3\n    }\n}\n\nstart_server {tags {\"repl external:skip\" \"stream\"}} {\n    # Verify that XNACK commands replicate correctly to replicas.\n    # Tests all three modes (FAIL, FATAL, SILENT) and FORCE option.\n    # After wait_for_ofs_sync, the replica's PEL state must match the\n    # master's: same delivery_counts, same unowned status.\n    test \"XNACK replication of modes and FORCE\" {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        start_server {tags {\"stream\"}} {\n            set replica [srv 0 client]\n\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            # Mode replication: FAIL, FATAL, SILENT on consumer-owned entries\n            
$master DEL mystream\n            $master XADD mystream 1-0 f v1\n            $master XADD mystream 2-0 f v2\n            $master XADD mystream 3-0 f v3\n            $master XADD mystream 4-0 f v4\n            $master XGROUP CREATE mystream grp 0\n            $master XREADGROUP GROUP grp c1 STREAMS mystream >\n            wait_for_ofs_sync $master $replica\n\n            $master XNACK mystream grp FAIL IDS 1 1-0\n            $master XNACK mystream grp FATAL IDS 1 3-0\n            $master XNACK mystream grp SILENT IDS 1 4-0\n            wait_for_ofs_sync $master $replica\n\n            # Verify replica state matches master\n            set pending [$replica XPENDING mystream grp - + 10]\n            assert_equal [llength $pending] 4\n            assert_equal [lindex $pending 0] {1-0 {} -1 1}\n            assert_match {2-0 c1 * 1} [lindex $pending 1]\n            assert_equal [lindex $pending 2] {3-0 {} -1 9223372036854775807}\n            assert_equal [lindex $pending 3] {4-0 {} -1 0}\n\n            # FORCE replication: create PEL entries without prior XREADGROUP\n            $master DEL mystream2\n            $master XADD mystream2 1-0 f v1\n            $master XADD mystream2 2-0 f v2\n            $master XGROUP CREATE mystream2 grp 0\n            wait_for_ofs_sync $master $replica\n\n            $master XNACK mystream2 grp FAIL IDS 1 1-0 FORCE\n            $master XNACK mystream2 grp FATAL IDS 1 2-0 FORCE\n            wait_for_ofs_sync $master $replica\n\n            set pending [$replica XPENDING mystream2 grp - + 10]\n            assert_equal [llength $pending] 2\n\n            assert_equal [lindex $pending 0] {1-0 {} -1 0}\n            assert_equal [lindex $pending 1] {2-0 {} -1 9223372036854775807}\n        }\n    }\n}\n\nstart_server {tags {\"repl external:skip\" \"stream\"}} {\n    # Verify that reclaim/acknowledge operations on NACKed entries\n    # propagate correctly to replicas. Tests four operations:\n    #  1. 
XCLAIM a NACKed entry — replica sees new consumer ownership.\n    #  2. XACK a NACKed entry — replica sees it removed from PEL.\n    #  3. XAUTOCLAIM NACKed entries — replica sees new consumer ownership.\n    #  4. XREADGROUP CLAIM NACKed entries — replica sees new consumer ownership.\n    # Each step uses wait_for_ofs_sync to ensure replication completes\n    # before reading from the replica.\n    test \"XNACK reclaim operations propagate correctly to replica\" {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        start_server {tags {\"stream\"}} {\n            set replica [srv 0 client]\n\n            $replica replicaof $master_host $master_port\n            wait_for_sync $replica\n\n            $master DEL mystream\n            $master XADD mystream 1-0 f v1\n            $master XADD mystream 2-0 f v2\n            $master XADD mystream 3-0 f v3\n            $master XGROUP CREATE mystream grp 0\n            $master XREADGROUP GROUP grp c1 STREAMS mystream >\n            wait_for_ofs_sync $master $replica\n\n            $master XNACK mystream grp FAIL IDS 2 1-0 2-0\n            wait_for_ofs_sync $master $replica\n\n            # 1. XCLAIM a NACKed entry: replica sees c2 owning 1-0\n            $master XCLAIM mystream grp c2 0 1-0\n            wait_for_ofs_sync $master $replica\n\n            set pending [$replica XPENDING mystream grp - + 10 c2]\n            assert_equal [llength $pending] 1\n            assert_equal [lindex $pending 0 0] 1-0\n\n            # 2. XACK a NACKed entry: 2-0 removed from replica PEL\n            $master XACK mystream grp 2-0\n            wait_for_ofs_sync $master $replica\n\n            set all_pending [$replica XPENDING mystream grp - + 10]\n            assert_equal [llength $all_pending] 2\n            foreach entry $all_pending {\n                assert {[lindex $entry 0] ne \"2-0\"}\n            }\n\n            # 3. 
XAUTOCLAIM NACKed entries: replica sees c3 owning 3-0\n            $master XNACK mystream grp FAIL IDS 1 3-0\n            wait_for_ofs_sync $master $replica\n\n            $master XAUTOCLAIM mystream grp c3 99999 0-0\n            wait_for_ofs_sync $master $replica\n\n            set c3_pending [$replica XPENDING mystream grp - + 10 c3]\n            assert_equal [llength $c3_pending] 1\n            assert_equal [lindex $c3_pending 0 0] 3-0\n\n            # 4. XREADGROUP CLAIM NACKed entries: replica sees c4 owning 1-0\n            $master XNACK mystream grp FAIL IDS 1 1-0\n            wait_for_ofs_sync $master $replica\n\n            $master XREADGROUP GROUP grp c4 CLAIM 99999 STREAMS mystream >\n            wait_for_ofs_sync $master $replica\n\n            set c4_pending [$replica XPENDING mystream grp - + 10 c4]\n            assert_equal [llength $c4_pending] 1\n            assert_equal [lindex $c4_pending 0 0] 1-0\n        }\n    }\n}\n\n"
  },
  {
    "path": "tests/unit/type/stream.tcl",
    "content": "# return value is like strcmp() and similar.\nproc streamCompareID {a b} {\n    if {$a eq $b} {return 0}\n    lassign [split $a -] a_ms a_seq\n    lassign [split $b -] b_ms b_seq\n    if {$a_ms > $b_ms} {return 1}\n    if {$a_ms < $b_ms} {return -1}\n    # Same ms case, compare seq.\n    if {$a_seq > $b_seq} {return 1}\n    if {$a_seq < $b_seq} {return -1}\n}\n\n# return the ID immediately greater than the specified one.\n# Note that this function does not care to handle 'seq' overflow\n# since it's a 64 bit value.\nproc streamNextID {id} {\n    lassign [split $id -] ms seq\n    incr seq\n    join [list $ms $seq] -\n}\n\n# Generate a random stream entry ID with the ms part between min and max\n# and a low sequence number (0 - 999 range), in order to stress test\n# XRANGE against a Tcl implementation implementing the same concept\n# with Tcl-only code in a linear array.\nproc streamRandomID {min_id max_id} {\n    lassign [split $min_id -] min_ms min_seq\n    lassign [split $max_id -] max_ms max_seq\n    set delta [expr {$max_ms-$min_ms+1}]\n    set ms [expr {$min_ms+[randomInt $delta]}]\n    set seq [randomInt 1000]\n    return $ms-$seq\n}\n\n# Tcl-side implementation of XRANGE to perform fuzz testing in the Redis\n# XRANGE implementation.\nproc streamSimulateXRANGE {items start end} {\n    set res {}\n    foreach i $items  {\n        set this_id [lindex $i 0]\n        if {[streamCompareID $this_id $start] >= 0} {\n            if {[streamCompareID $this_id $end] <= 0} {\n                lappend res $i\n            }\n        }\n    }\n    return $res\n}\n\nset content {} ;# Will be populated with Tcl side copy of the stream content.\n\nstart_server {\n    tags {\"stream\"}\n} {\n    test \"XADD wrong number of args\" {\n        assert_error {*wrong number of arguments for 'xadd' command} {r XADD mystream}\n        assert_error {*wrong number of arguments for 'xadd' command} {r XADD mystream *}\n        assert_error {*wrong number of arguments for 
'xadd' command} {r XADD mystream * field}\n    }\n\n    test {XADD can add entries into a stream that XRANGE can fetch} {\n        r XADD mystream * item 1 value a\n        r XADD mystream * item 2 value b\n        assert_equal 2 [r XLEN mystream]\n        set items [r XRANGE mystream - +]\n        assert_equal [lindex $items 0 1] {item 1 value a}\n        assert_equal [lindex $items 1 1] {item 2 value b}\n    }\n\n    test {XADD IDs are incremental} {\n        set id1 [r XADD mystream * item 1 value a]\n        set id2 [r XADD mystream * item 2 value b]\n        set id3 [r XADD mystream * item 3 value c]\n        assert {[streamCompareID $id1 $id2] == -1}\n        assert {[streamCompareID $id2 $id3] == -1}\n    }\n\n    test {XADD IDs are incremental when ms is the same as well} {\n        r multi\n        r XADD mystream * item 1 value a\n        r XADD mystream * item 2 value b\n        r XADD mystream * item 3 value c\n        lassign [r exec] id1 id2 id3\n        assert {[streamCompareID $id1 $id2] == -1}\n        assert {[streamCompareID $id2 $id3] == -1}\n    }\n\n    test {XADD IDs correctly report an error when overflowing} {\n        r DEL mystream\n        r xadd mystream 18446744073709551615-18446744073709551615 a b\n        assert_error ERR* {r xadd mystream * c d}\n    }\n\n    test {XADD auto-generated sequence is incremented for last ID} {\n        r DEL mystream\n        set id1 [r XADD mystream 123-456 item 1 value a]\n        set id2 [r XADD mystream 123-* item 2 value b]\n        lassign [split $id2 -] _ seq\n        assert {$seq == 457}\n        assert {[streamCompareID $id1 $id2] == -1}\n    }\n\n    test {XADD auto-generated sequence is zero for future timestamp ID} {\n        r DEL mystream\n        set id1 [r XADD mystream 123-456 item 1 value a]\n        set id2 [r XADD mystream 789-* item 2 value b]\n        lassign [split $id2 -] _ seq\n        assert {$seq == 0}\n        assert {[streamCompareID $id1 $id2] == -1}\n    }\n\n    test 
{XADD auto-generated sequence can't be smaller than last ID} {\n        r DEL mystream\n        r XADD mystream 123-456 item 1 value a\n        assert_error ERR* {r XADD mystream 42-* item 2 value b}\n    }\n\n    test {XADD auto-generated sequence can't overflow} {\n        r DEL mystream\n        r xadd mystream 1-18446744073709551615 a b\n        assert_error ERR* {r xadd mystream 1-* c d}\n    }\n\n    test {XADD 0-* should succeed} {\n        r DEL mystream\n        set id [r xadd mystream 0-* a b]\n        lassign [split $id -] _ seq\n        assert {$seq == 1}\n    }\n\n    test {XADD with MAXLEN option} {\n        r DEL mystream\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r XADD mystream MAXLEN 5 * xitem $j\n            } else {\n                r XADD mystream MAXLEN 5 * yitem $j\n            }\n        }\n        assert {[r xlen mystream] == 5}\n        set res [r xrange mystream - +]\n        set expected 995\n        foreach r $res {\n            assert {[lindex $r 1 1] == $expected}\n            incr expected\n        }\n    }\n\n    test {XADD with MAXLEN option and the '=' argument} {\n        r DEL mystream\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r XADD mystream MAXLEN = 5 * xitem $j\n            } else {\n                r XADD mystream MAXLEN = 5 * yitem $j\n            }\n        }\n        assert {[r XLEN mystream] == 5}\n    }\n\n    test {XADD with MAXLEN option and the '~' argument} {\n        r DEL mystream\n        r config set stream-node-max-entries 100\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r XADD mystream MAXLEN ~ 555 * xitem $j\n            } else {\n                r XADD mystream MAXLEN ~ 555 * yitem $j\n            }\n        }\n        assert {[r XLEN mystream] == 600}\n    }\n\n    test {XADD with NOMKSTREAM option} {\n        r DEL mystream\n        assert_equal \"\" 
[r XADD mystream NOMKSTREAM * item 1 value a]\n        assert_equal 0 [r EXISTS mystream]\n        r XADD mystream * item 1 value a\n        r XADD mystream NOMKSTREAM * item 2 value b\n        assert_equal 2 [r XLEN mystream]\n        set items [r XRANGE mystream - +]\n        assert_equal [lindex $items 0 1] {item 1 value a}\n        assert_equal [lindex $items 1 1] {item 2 value b}\n    }\n\n    test {XADD with MINID option} {\n        r DEL mystream\n        for {set j 1} {$j < 1001} {incr j} {\n            set minid 1000\n            if {$j >= 5} {\n                set minid [expr {$j-5}]\n            }\n            if {rand() < 0.9} {\n                r XADD mystream MINID $minid $j xitem $j\n            } else {\n                r XADD mystream MINID $minid $j yitem $j\n            }\n        }\n        assert {[r xlen mystream] == 6}\n        set res [r xrange mystream - +]\n        set expected 995\n        foreach r $res {\n            assert {[lindex $r 1 1] == $expected}\n            incr expected\n        }\n    }\n\n    test {XADD with MAXLEN option and ACKED option} {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n        r XADD mystream 4-0 f v\n        r XADD mystream 5-0 f v\n        assert {[r XLEN mystream] == 5}\n\n        # Create a consumer group but don't read any messages yet\n        # ACKED option should preserve all messages since none are acked.\n        r XGROUP CREATE mystream mygroup 0\n        r XADD mystream MAXLEN = 1 ACKED 6-0 f v\n        assert {[r XLEN mystream] == 6} ;# All messages preserved + the new one\n\n        # Read 1 message and acknowledge it\n        # This leaves 5 messages still unacked\n        set records [r XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS mystream >]\n        r XACK mystream mygroup [lindex [lindex [lindex [lindex $records 0] 1] 0] 0]\n        assert {[lindex [r XPENDING mystream mygroup] 0] == 0}\n\n        # 
With 5 messages still unacked, ACKED option should preserve them\n        r XADD mystream MAXLEN = 1 ACKED 7-0 f v\n        assert {[r XLEN mystream] == 6} ;# 6 - 1 acked + 1 new\n\n        # Acknowledge all remaining messages\n        set records [r XREADGROUP GROUP mygroup consumer1 STREAMS mystream >]\n        set ids {}\n        foreach entry [lindex [lindex $records 0] 1] {\n            lappend ids [lindex $entry 0]\n        }\n        r XACK mystream mygroup {*}$ids\n        assert {[lindex [r XPENDING mystream mygroup] 0] == 0} ;# All messages acked\n\n        # Now ACKED should trim to MAXLEN since all messages are acked\n        r XADD mystream MAXLEN = 1 ACKED * f v\n        assert {[r XLEN mystream] == 1} ;# Successfully trimmed to 1 entry\n    }\n\n    test {XADD with ACKED option doesn't crash after DEBUG RELOAD} {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n\n        # Create a consumer group and read one message\n        r XGROUP CREATE mystream mygroup 0\n        set records [r XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS mystream >]\n        assert_equal [lindex [r XPENDING mystream mygroup] 0] 1\n\n        # After reload, the reference relationship between consumer groups and messages\n        # is correctly rebuilt, so the previously read but unacked message still cannot be deleted.\n        r DEBUG RELOAD\n        r XADD mystream MAXLEN = 1 ACKED 2-0 f v\n        assert_equal [r XLEN mystream] 2\n\n        # Acknowledge the read message so the PEL becomes empty\n        r XACK mystream mygroup [lindex [lindex [lindex [lindex $records 0] 1] 0] 0]\n        assert {[lindex [r XPENDING mystream mygroup] 0] == 0}\n\n        # After reload, since PEL is empty, no cgroup references will be recreated.\n        r DEBUG RELOAD\n\n        # ACKED option should work correctly even without cgroup references.\n        r XADD mystream MAXLEN = 1 ACKED 3-0 f v\n        assert_equal [r XLEN mystream] 2\n    } {} {needs:debug}\n\n    test 
{XADD with MAXLEN option and DELREF option} {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n        r XADD mystream 4-0 f v\n        r XADD mystream 5-0 f v\n\n        r XGROUP CREATE mystream mygroup 0\n        r XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS mystream >\n\n        # XADD with MAXLEN and DELREF should trim and remove all references\n        r XADD mystream MAXLEN = 1 DELREF * f v\n        assert {[r XLEN mystream] == 1}\n\n        # All PEL entries should be cleaned up\n        assert {[lindex [r XPENDING mystream mygroup] 0] == 0}\n    }\n\n    test {XADD IDMP with invalid syntax} {\n        r DEL mystream\n        assert_error \"*ERR Invalid stream ID specified*\" {r XADD mystream IDMP p1 * f v}\n        assert_error \"*IDMP/IDMPAUTO can be used only with auto-generated IDs*\" {r XADD mystream IDMP p1 iid1 1-1 f v}\n        assert_error \"*IDMP/IDMPAUTO specified multiple times*\" {r XADD mystream IDMP p1 iid1 IDMP p2 iid2 * f v}\n        assert_error \"*IDMP/IDMPAUTO specified multiple times*\" {r XADD mystream IDMPAUTO p1 IDMP p2 iid2 * f v}\n        assert_error \"*IDMP requires a non-empty producer ID*\" {r XADD mystream IDMP \"\" iid1 * f v}\n        assert_error \"*IDMP requires a non-empty idempotent ID*\" {r XADD mystream IDMP p1 \"\" * f v}\n        assert_error \"*IDMPAUTO requires a non-empty producer ID*\" {r XADD mystream IDMPAUTO \"\" * f v}\n    }\n\n    test {XADD IDMP basic addition} {\n        r DEL mystream\n    \n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 1 * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 A * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 B * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 - * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 + * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r 
XADD mystream IDMP p1 * * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 ^ * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 $ * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 # * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 @ * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 ? * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \\\\ * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 IDMP * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 123-456 * f v]]}\n        \n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 9999999999999-9999999999999 * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"hello世界\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"héllo\" * f v]]}\n        \n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"line1\\nline2\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"tab\\there\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"quote\\\"test\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"with spaces\" * f v]]}\n        \n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 [string repeat \"long\" 100] * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 [string repeat \"x\" 1000] * f v]]}\n        \n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"special!@#$%^&*()\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"path/to/file\" * f v]]}\n        assert {[regexp {^[0-9]+-[0-9]+$} [r XADD mystream IDMP p1 \"key:value\" * f v]]}\n        \n        assert_equal 26 [r XLEN mystream]\n    }\n\n    test \"XADD IDMP duplicate request returns same ID\" {\n        r 
DEL mystream\n        \n        # First XADD with IDMP\n        set id1 [r XADD mystream IDMP p1 \"payment-abc\" * amount \"100\" currency \"USD\"]\n        \n        # Second XADD with same iid but different fields\n        set id2 [r XADD mystream IDMP p1 \"payment-abc\" * amount \"200\" currency \"EUR\"]\n        \n        # Verify both IDs are identical\n        assert_equal $id1 $id2\n        \n        # Verify only one entry exists\n        assert_equal 1 [r XLEN mystream]\n        \n        # Verify original fields are preserved\n        set entries [r XRANGE mystream - +]\n        set fields [lindex [lindex $entries 0] 1]\n        assert_equal \"100\" [dict get $fields amount]\n        assert_equal \"USD\" [dict get $fields currency]\n    }\n\n    test {XADD IDMP multiple different IIDs create multiple entries} {\n        r DEL mystream\n        \n        # Add entries with different iids\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * user \"alice\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * user \"bob\"]\n        set id3 [r XADD mystream IDMP p1 \"req-3\" * user \"charlie\"]\n        \n        # Verify all IDs are different\n        assert {$id1 != $id2}\n        assert {$id2 != $id3}\n        assert {$id1 != $id3}\n        \n        # Verify all entries exist\n        assert_equal 3 [r XLEN mystream]\n        \n        # Verify each entry has correct data\n        set entries [r XRANGE mystream - +]\n        assert_equal \"alice\" [dict get [lindex [lindex $entries 0] 1] user]\n        assert_equal \"bob\" [dict get [lindex [lindex $entries 1] 1] user]\n        assert_equal \"charlie\" [dict get [lindex [lindex $entries 2] 1] user]\n    }\n\n    test {XADD IDMP with binary-safe iid} {\n        r DEL mystream\n        \n        # Test with null bytes and binary data\n        set binary_iid \"\\x00\\x01\\x02\\xff\"\n        set id1 [r XADD mystream IDMP p1 $binary_iid * field \"value\"]\n        set id2 [r XADD mystream IDMP p1 
$binary_iid * field \"dup\"]\n        assert_equal $id1 $id2\n    }\n\n    test {XADD IDMP with maximum length iid} {\n        r DEL mystream\n        \n        # Test with very long iid (e.g., 64KB)\n        set long_iid [string repeat \"x\" 65536]\n        set id [r XADD mystream IDMP p1 $long_iid * field \"value\"]\n        assert_match {*-*} $id\n    }\n\n    test {XADD IDMP with MAXLEN option} {\n        r DEL mystream\n        \n        # Add entries with IDMP and MAXLEN\n        set id1 [r XADD mystream IDMP p1 \"req-1\" MAXLEN ~ 100 * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" MAXLEN ~ 100 * field \"value2\"]\n        \n        # Attempt duplicate\n        set id1_dup [r XADD mystream IDMP p1 \"req-1\" MAXLEN ~ 100 * field \"value3\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id1_dup\n        \n        # Verify only 2 entries exist\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMP with MINID option} {\n        r DEL mystream\n        \n        # Add entry with IDMP and MINID\n        set id1 [r XADD mystream IDMP p1 \"req-1\" MINID ~ 1000000000-0 * field \"value1\"]\n        \n        # Attempt duplicate with MINID\n        set id2 [r XADD mystream IDMP p1 \"req-1\" MINID ~ 1000000000-0 * field \"value2\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMP with NOMKSTREAM option} {\n        r DEL mystream\n        \n        # Attempt XADD with NOMKSTREAM on non-existent stream\n        set result [r XADD mystream NOMKSTREAM IDMP p1 \"req-1\" * field \"value\"]\n        assert_equal {} $result\n        \n        # Create stream normally\n        r XADD mystream IDMP p1 \"req-2\" * field \"value\"\n        \n        # Now NOMKSTREAM should work\n        set id [r XADD mystream NOMKSTREAM IDMP p1 \"req-3\" * field \"value\"]\n        assert_match {*-*} $id\n        \n        
assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMP with KEEPREF option} {\n        r DEL mystream\n        \n        # Add entry with IDMP and KEEPREF\n        set id1 [r XADD mystream IDMP p1 \"req-1\" KEEPREF * field \"value1\"]\n        \n        # Attempt duplicate with KEEPREF\n        set id2 [r XADD mystream IDMP p1 \"req-1\" KEEPREF * field \"value2\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMP with combined options} {\n        r DEL mystream\n        \n        # Add entry with all options\n        set id1 [r XADD mystream IDMP p1 \"req-1\" KEEPREF MAXLEN ~ 1000 LIMIT 10 * field1 \"value1\" field2 \"value2\"]\n        \n        # Attempt duplicate with all options\n        set id2 [r XADD mystream IDMP p1 \"req-1\" KEEPREF MAXLEN ~ 1000 LIMIT 10 * field3 \"value3\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n        \n        # Verify original fields preserved\n        set entries [r XRANGE mystream - +]\n        set fields [lindex [lindex $entries 0] 1]\n        assert_equal \"value1\" [dict get $fields field1]\n        assert_equal \"value2\" [dict get $fields field2]\n    }\n\n    test {XADD IDMP argument order variations} {\n        r DEL mystream\n        \n        # IDMP before MAXLEN\n        set id1 [r XADD mystream IDMP p1 \"req-1\" MAXLEN ~ 100 * field \"value1\"]\n        \n        # IDMP after MAXLEN\n        set id2 [r XADD mystream MAXLEN ~ 100 IDMP p1 \"req-2\" * field \"value2\"]\n        \n        # Multiple options in different order\n        set id3 [r XADD mystream NOMKSTREAM IDMP p1 \"req-3\" MAXLEN ~ 100 * field \"value3\"]\n        \n        # All should succeed\n        assert_match {*-*} $id1\n        assert_match {*-*} $id2\n        assert_match {*-*} $id3\n        \n        assert_equal 3 [r XLEN mystream]\n    }\n\n    test {XADD 
IDMP concurrent duplicate requests} {\n        r DEL mystream\n        \n        # Create multiple clients\n        set client1 [redis_client]\n        set client2 [redis_client]\n        set client3 [redis_client]\n        \n        # Send the same IDMP request from each client connection\n        set id1 [$client1 XADD mystream IDMP p1 \"concurrent-req\" * client \"1\"]\n        set id2 [$client2 XADD mystream IDMP p1 \"concurrent-req\" * client \"2\"]\n        set id3 [$client3 XADD mystream IDMP p1 \"concurrent-req\" * client \"3\"]\n        \n        # All should return the same ID\n        assert_equal $id1 $id2\n        assert_equal $id2 $id3\n        \n        # Only one entry should exist\n        assert_equal 1 [r XLEN mystream]\n        \n        # Cleanup\n        $client1 close\n        $client2 close\n        $client3 close\n    }\n\n    test {XADD IDMP pipelined requests} {\n        r DEL mystream\n        \n        # Queue the requests in a MULTI/EXEC transaction\n        r MULTI\n        r XADD mystream IDMP p1 \"req-1\" * field \"value1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"value2\"\n        r XADD mystream IDMP p1 \"req-1\" * field \"value3\" ;# Duplicate\n        r XADD mystream IDMP p1 \"req-3\" * field \"value4\"\n        set results [r EXEC]\n        \n        # Extract IDs\n        set id1 [lindex $results 0]\n        set id2 [lindex $results 1]\n        set id1_dup [lindex $results 2]\n        set id3 [lindex $results 3]\n        \n        # Verify deduplication\n        assert_equal $id1 $id1_dup\n        \n        # Verify all IDs are different (except duplicate)\n        assert {$id1 != $id2}\n        assert {$id2 != $id3}\n        assert {$id1 != $id3}\n        \n        # Verify only 3 entries exist\n        assert_equal 3 [r XLEN mystream]\n    }\n\n    test {XADD IDMP with consumer groups} {\n        r DEL mystream\n        \n        # Add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"cg-1\" * field \"value1\"]\n        set 
id2 [r XADD mystream IDMP p1 \"cg-2\" * field \"value2\"]\n        \n        # Create consumer group\n        r XGROUP CREATE mystream mygroup 0\n        \n        # Read entries\n        set entries [r XREADGROUP GROUP mygroup consumer1 COUNT 10 STREAMS mystream >]\n        \n        # Verify both entries are readable\n        set stream_entries [lindex [lindex $entries 0] 1]\n        assert_equal 2 [llength $stream_entries]\n        \n        # ACK entries\n        assert_equal 2 [r XACK mystream mygroup $id1 $id2]\n        \n        # Verify deduplication still works\n        set id1_dup [r XADD mystream IDMP p1 \"cg-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n    }\n\n    test {XADD IDMP persists in RDB} {\n        r DEL mystream\n\n        # Add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"persist-1\" * field \"value1\"]\n        r XADD mystream IDMP p1 \"persist-2\" * field \"value2\"\n\n        # Force an RDB save and reload\n        r SAVE\n        r DEBUG RELOAD\n\n        # Verify stream still exists\n        assert_equal 2 [r XLEN mystream]\n\n        # Verify deduplication still works after the reload\n        set id1_dup [r XADD mystream IDMP p1 \"persist-1\" * field \"new\"]\n        assert_equal $id1 $id1_dup\n\n        # Should still have only 2 entries\n        assert_equal 2 [r XLEN mystream]\n    } {} {external:skip needs:debug}\n\n    test {XADD IDMP persists in AOF} {\n        r DEL mystream\n        r config set appendonly yes\n\n        # Wait for the automatic AOF rewrite triggered by enabling AOF\n        waitForBgrewriteaof r\n\n        # Add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"aof-1\" * field \"value1\"]\n        r XADD mystream IDMP p1 \"aof-2\" * field \"value2\"\n\n        # Add duplicate\n        set id1_dup [r XADD mystream IDMP p1 \"aof-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n\n        # Reload the dataset from the AOF\n        r DEBUG LOADAOF\n\n        # Verify stream exists\n        
assert_equal 2 [r XLEN mystream]\n\n        # Verify deduplication still works\n        set id1_dup2 [r XADD mystream IDMP p1 \"aof-1\" * field \"new\"]\n        assert_equal $id1 $id1_dup2\n    } {} {external:skip needs:debug}\n\n    # XIDMPRECORD tests\n    test {XIDMPRECORD parameter validation} {\n        # Wrong arity\n        assert_error {*wrong number of arguments for 'xidmprecord' command} {r XIDMPRECORD mystream p1 i1}\n        assert_error {*wrong number of arguments for 'xidmprecord' command} {r XIDMPRECORD mystream p1 i1 1-1 extra}\n\n        # Key does not exist\n        assert_error {*no such key*} {r XIDMPRECORD nosuchkey p1 i1 1-1}\n\n        # Key is not a stream\n        r SET notastream \"value\"\n        assert_error {*WRONGTYPE*} {r XIDMPRECORD notastream p1 i1 1-1}\n        r DEL notastream\n\n        # Invalid stream ID (need existing stream)\n        r DEL mystream\n        r XADD mystream 1-1 f v\n        assert_error {*Invalid stream ID specified as stream*} {r XIDMPRECORD mystream p1 i1 bad}\n        assert_error {*Invalid stream ID specified as stream*} {r XIDMPRECORD mystream p1 i1 1-}\n        assert_error {*Invalid stream ID specified as stream*} {r XIDMPRECORD mystream p1 i1 -1}\n\n        # Empty pid and empty iid\n        assert_error {*producer ID must be non-empty*} {r XIDMPRECORD mystream \"\" i1 1-1}\n        assert_error {*idempotent ID must be non-empty*} {r XIDMPRECORD mystream p1 \"\" 1-1}\n    }\n\n    test {XIDMPRECORD with binary-safe iid} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        set binary_iid \"\\x00\\x01\\x02\\xff\"\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 $binary_iid $id]\n        set id_dup [r XADD mystream IDMP p1 $binary_iid * f v2]\n        assert_equal $id $id_dup\n    }\n\n    test {XIDMPRECORD with maximum length iid} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        set long_iid [string repeat \"x\" 65536]\n        assert_equal \"OK\" 
[r XIDMPRECORD mystream p1 $long_iid $id]\n        set id_dup [r XADD mystream IDMP p1 $long_iid * f v2]\n        assert_equal $id $id_dup\n    }\n\n    test {XIDMPRECORD with unicode pid and iid} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        assert_equal \"OK\" [r XIDMPRECORD mystream \"producer-世界\" \"req-héllo\" $id]\n        set id_dup [r XADD mystream IDMP \"producer-世界\" \"req-héllo\" * f v2]\n        assert_equal $id $id_dup\n    }\n\n    test {XIDMPRECORD with long producer ID} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        set long_pid [string repeat \"p\" 1000]\n        assert_equal \"OK\" [r XIDMPRECORD mystream $long_pid i1 $id]\n        set id_dup [r XADD mystream IDMP $long_pid i1 * f v2]\n        assert_equal $id $id_dup\n    }\n\n    test {XIDMPRECORD with special characters in iid} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 \"key:value\" $id]\n        set id_dup [r XADD mystream IDMP p1 \"key:value\" * f v2]\n        assert_equal $id $id_dup\n    }\n\n    test {XIDMPRECORD message must exist in stream} {\n        r DEL mystream\n        r XADD mystream 1-1 f v\n        assert_error {*No such message in stream*} {r XIDMPRECORD mystream p1 i1 999-999}\n    }\n\n    test {XIDMPRECORD then deduplication works} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 req-1 $id]\n        set id_dup [r XADD mystream IDMP p1 req-1 * f v2]\n        assert_equal $id $id_dup\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XIDMPRECORD idempotent} {\n        r DEL mystream\n        set id [r XADD mystream * f v]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 req-1 $id]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 req-1 $id]\n        set id_dup [r XADD mystream IDMP p1 req-1 * f v2]\n        assert_equal $id $id_dup\n    }\n\n    
test {XIDMPRECORD conflict same pid iid different stream ID} {\n        r DEL mystream\n        set id1 [r XADD mystream * f v1]\n        set id2 [r XADD mystream * f v2]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 i1 $id1]\n        assert_error {*IID already exists for this producer with a different stream ID*} {r XIDMPRECORD mystream p1 i1 $id2}\n    }\n\n    test {XIDMPRECORD multiple producers} {\n        r DEL mystream\n        set id1 [r XADD mystream * f v1]\n        set id2 [r XADD mystream * f v2]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 i1 $id1]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p2 i2 $id2]\n        assert_equal $id1 [r XADD mystream IDMP p1 i1 * f dup1]\n        assert_equal $id2 [r XADD mystream IDMP p2 i2 * f dup2]\n    }\n\n    test {XIDMPRECORD AOF rewrite restores IDMP} {\n        r DEL mystream\n        r config set appendonly yes\n        waitForBgrewriteaof r\n\n        set id1 [r XADD mystream IDMP p1 \"aof-xidmp-1\" * field \"value1\"]\n        r XADD mystream IDMP p1 \"aof-xidmp-2\" * field \"value2\"\n        set id1_dup [r XADD mystream IDMP p1 \"aof-xidmp-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n\n        r BGREWRITEAOF\n        waitForBgrewriteaof r\n        # Reload the dataset from the rewritten AOF\n        r DEBUG LOADAOF\n\n        assert_equal 2 [r XLEN mystream]\n        set id1_dup2 [r XADD mystream IDMP p1 \"aof-xidmp-1\" * field \"new\"]\n        assert_equal $id1 $id1_dup2\n    } {} {external:skip needs:debug}\n\n    test {XIDMPRECORD AOF rewrite emits XIDMPRECORD for mappings created via XIDMPRECORD} {\n        r DEL mystream\n        r config set appendonly yes\n        waitForBgrewriteaof r\n\n        set id [r XADD mystream * f v]\n        assert_equal \"OK\" [r XIDMPRECORD mystream p1 rec-1 $id]\n        set id_dup [r XADD mystream IDMP p1 rec-1 * f v2]\n        assert_equal $id $id_dup\n\n        r BGREWRITEAOF\n        waitForBgrewriteaof r\n        # Reload the dataset from the rewritten AOF\n        r DEBUG LOADAOF\n\n        assert_equal 1 [r XLEN 
mystream]\n        set id_dup2 [r XADD mystream IDMP p1 rec-1 * f v3]\n        assert_equal $id $id_dup2\n    } {} {external:skip needs:debug}\n\n    test {XADD IDMP multiple producers have isolated namespaces} {\n        r DEL mystream\n        \n        # Add entry with producer p1\n        set id1 [r XADD mystream IDMP producer1 \"req-123\" * field \"from-p1\"]\n        \n        # Add entry with same IID but different producer p2 - should create NEW entry\n        set id2 [r XADD mystream IDMP producer2 \"req-123\" * field \"from-p2\"]\n        \n        # IDs should be different since producers are isolated\n        assert {$id1 ne $id2}\n        \n        # Both entries should exist\n        assert_equal 2 [r XLEN mystream]\n        \n        # Verify each entry has correct data\n        set entries [r XRANGE mystream - +]\n        assert_equal \"from-p1\" [dict get [lindex [lindex $entries 0] 1] field]\n        assert_equal \"from-p2\" [dict get [lindex [lindex $entries 1] 1] field]\n        \n        # Duplicate within same producer should still deduplicate\n        set id1_dup [r XADD mystream IDMP producer1 \"req-123\" * field \"dup-p1\"]\n        assert_equal $id1 $id1_dup\n        \n        set id2_dup [r XADD mystream IDMP producer2 \"req-123\" * field \"dup-p2\"]\n        assert_equal $id2 $id2_dup\n        \n        # Still only 2 entries\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMP multiple producers each have their own MAXSIZE limit} {\n        r DEL mystream\n        \n        # Create stream and set global MAXSIZE\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-MAXSIZE 3 IDMP-DURATION 60\n        \n        # Add entries for producer p1: init, req-1 and req-2 fill the 3-slot window, so req-3 evicts init\n        set p1_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"p1-v1\"]\n        set p1_id2 [r XADD mystream IDMP p1 \"req-2\" * field \"p1-v2\"]\n        set p1_id3 [r XADD mystream 
IDMP p1 \"req-3\" * field \"p1-v3\"]\n        \n        # Add entries for producer p2 (separate tracking)\n        set p2_id1 [r XADD mystream IDMP p2 \"req-1\" * field \"p2-v1\"]\n        set p2_id2 [r XADD mystream IDMP p2 \"req-2\" * field \"p2-v2\"]\n        set p2_id3 [r XADD mystream IDMP p2 \"req-3\" * field \"p2-v3\"]\n        \n        # p1's oldest entries should be evicted, but p1 req-1,2,3 should still work\n        assert_equal $p1_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $p1_id2 [r XADD mystream IDMP p1 \"req-2\" * field \"dup\"]\n        assert_equal $p1_id3 [r XADD mystream IDMP p1 \"req-3\" * field \"dup\"]\n        \n        # p2's entries should also still work (each producer has own MAXSIZE tracking)\n        assert_equal $p2_id1 [r XADD mystream IDMP p2 \"req-1\" * field \"dup\"]\n        assert_equal $p2_id2 [r XADD mystream IDMP p2 \"req-2\" * field \"dup\"]\n        assert_equal $p2_id3 [r XADD mystream IDMP p2 \"req-3\" * field \"dup\"]\n        \n        # Verify pids-tracked\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n    }\n\n    test {XADD IDMP multiple producers with binary producer IDs} {\n        r DEL mystream\n        \n        # Test with binary producer IDs\n        set bin_pid1 \"\\x00\\x01\\x02\"\n        set bin_pid2 \"\\x03\\x04\\x05\"\n        \n        set id1 [r XADD mystream IDMP $bin_pid1 \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP $bin_pid2 \"req-1\" * field \"v2\"]\n        \n        # Different binary PIDs should be isolated\n        assert {$id1 ne $id2}\n        assert_equal 2 [r XLEN mystream]\n        \n        # Verify deduplication within same binary PID\n        set id1_dup [r XADD mystream IDMP $bin_pid1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n    }\n\n    test {XADD IDMP multiple producers with unicode producer IDs} {\n        r DEL mystream\n        \n        # Test with 
unicode producer IDs\n        set id1 [r XADD mystream IDMP \"producer-世界\" \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP \"producer-héllo\" \"req-1\" * field \"v2\"]\n        set id3 [r XADD mystream IDMP \"producer-日本\" \"req-1\" * field \"v3\"]\n        \n        # All should be separate entries\n        assert {$id1 ne $id2}\n        assert {$id2 ne $id3}\n        assert_equal 3 [r XLEN mystream]\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n    }\n\n    test {XADD IDMP multiple producers with long producer IDs} {\n        r DEL mystream\n        \n        # Test with very long producer IDs\n        set long_pid1 [string repeat \"a\" 1000]\n        set long_pid2 [string repeat \"b\" 1000]\n        \n        set id1 [r XADD mystream IDMP $long_pid1 \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP $long_pid2 \"req-1\" * field \"v2\"]\n        \n        # Different long PIDs should be isolated\n        assert {$id1 ne $id2}\n        assert_equal 2 [r XLEN mystream]\n        \n        # Verify deduplication\n        set id1_dup [r XADD mystream IDMP $long_pid1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n    }\n\n    test {XADD IDMP multiple producers persistence in RDB} {\n        r DEL mystream\n        \n        # Add entries with multiple producers\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP p2 \"req-1\" * field \"v2\"]\n        set id3 [r XADD mystream IDMP p3 \"req-1\" * field \"v3\"]\n        \n        # Verify before save\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n        assert_equal 3 [dict get $reply iids-tracked]\n        \n        # Save and reload\n        r SAVE\n        restart_server 0 true false\n        \n        # Verify after reload\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict 
get $reply pids-tracked]\n        assert_equal 3 [dict get $reply iids-tracked]\n        \n        # Verify deduplication still works for all producers\n        assert_equal $id1 [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id2 [r XADD mystream IDMP p2 \"req-1\" * field \"dup\"]\n        assert_equal $id3 [r XADD mystream IDMP p3 \"req-1\" * field \"dup\"]\n    } {} {external:skip}\n\n    test {XADD IDMP cron expiration works after RDB load} {\n        r DEL mystream\n\n        # Create stream and set IDMP-DURATION before adding entries,\n        # since XCFGSET clears existing entries when the duration changes.\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 2\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p2 \"req-1\" * field \"v2\"\n\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        assert_equal 2 [dict get $reply iids-tracked]\n\n        # Save and restart — this triggers RDB load which should\n        # register the stream in stream_idmp_keys for cron cleanup.\n        r SAVE\n        restart_server 0 true false\n\n        # Wait for IDMP entries to expire and for the cron to clean them up.\n        # If the stream was not registered in stream_idmp_keys after RDB load,\n        # the counts would never reach 0.\n        # Poll instead of a fixed sleep so the test finishes as soon as possible.\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up after RDB load\"\n        }\n\n        # Expired IIDs should be re-addable as new entries\n        set new_id [r XADD mystream IDMP p1 \"req-1\" * field \"new\"]\n        assert {$new_id ne \"\"}\n        assert_equal 4 [r XLEN mystream]\n    } {} 
{external:skip}\n\n    test {XADD IDMP tracking survives SWAPDB} {\n        # Use dedicated clients for DB 0 and DB 1 so that `r` stays on\n        # DB 9 (the test default).  If any assertion fails mid-test,\n        # `r` is still on DB 9 and subsequent tests are unaffected.\n        set db0 [redis_client]\n        $db0 SELECT 0\n        $db0 FLUSHDB\n        set db1 [redis_client]\n        $db1 SELECT 1\n        $db1 FLUSHDB\n\n        # Create stream and set IDMP-DURATION before adding entries,\n        # since XCFGSET clears existing entries when the duration changes.\n        $db0 XADD mystream IDMP p1 \"init\" * field \"init\"\n        $db0 XCFGSET mystream IDMP-DURATION 2\n        set id1 [$db0 XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n\n        set info [$db0 XINFO STREAM mystream]\n        assert_equal 1 [dict get $info pids-tracked]\n        assert_equal 1 [dict get $info iids-tracked]\n\n        $db0 SWAPDB 0 1\n\n        set id1_dup [$db1 XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup \"Deduplication should work immediately after SWAPDB\"\n\n        # If stream_idmp_keys wasn't swapped, cron looks in wrong DB\n        # and entries will never expire.\n        wait_for_condition 50 100 {\n            [dict get [$db1 XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [$db1 XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            $db0 close\n            $db1 close\n            fail \"IDMP entries were not cleaned up after SWAPDB - tracking likely lost\"\n        }\n\n        set id2 [$db1 XADD mystream IDMP p1 \"req-1\" * field \"v2\"]\n        $db0 close\n        $db1 close\n        assert {$id1 ne $id2}\n    } {} {singledb:skip}\n\n    test {XADD IDMP tracking cleared after FLUSHDB} {\n        r DEL mystream\n\n        # Create stream and set IDMP-DURATION before adding entries,\n        # since XCFGSET clears existing entries when the duration changes.\n        r XADD 
mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 2\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n\n        assert_equal 1 [dict get [r XINFO STREAM mystream] pids-tracked]\n\n        # FLUSHDB should clear all IDMP tracking.\n        r FLUSHDB\n\n        # Recreate stream with the same configuration.\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 2\n\n        set id2 [r XADD mystream IDMP p1 \"req-1\" * field \"v2\"]\n\n        assert_equal 2 [r XLEN mystream]\n\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up for recreated stream after FLUSHDB\"\n        }\n\n        set id3 [r XADD mystream IDMP p1 \"req-1\" * field \"v3\"]\n        assert {$id2 ne $id3}\n    }\n\n    test {XADD IDMP tracking removed after DEL} {\n        r DEL mystream\n\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 2\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n\n        assert_equal 1 [dict get [r XINFO STREAM mystream] pids-tracked]\n\n        r DEL mystream\n\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 2\n\n        # The same IID should now produce a new entry (old tracking is gone).\n        set len_before [r XLEN mystream]\n        r XADD mystream IDMP p1 \"req-1\" * field \"v2\"\n        assert_equal [expr {$len_before + 1}] [r XLEN mystream] \\\n            \"req-1 should create a new entry after DEL, not deduplicate\"\n\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not 
cleaned up after DEL and recreate\"\n        }\n    }\n\n    test {XADD IDMP tracking cleared after FLUSHALL across all databases} {\n        set db0 [redis_client]\n        $db0 SELECT 0\n        $db0 FLUSHDB\n        set db1 [redis_client]\n        $db1 SELECT 1\n        $db1 FLUSHDB\n\n        $db0 XADD mystream IDMP p1 \"init\" * field \"init\"\n        $db0 XCFGSET mystream IDMP-DURATION 2\n        set id0 [$db0 XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n\n        $db1 XADD mystream IDMP p2 \"init\" * field \"init\"\n        $db1 XCFGSET mystream IDMP-DURATION 2\n        set id1 [$db1 XADD mystream IDMP p2 \"req-1\" * field \"v1\"]\n\n        assert_equal 1 [dict get [$db0 XINFO STREAM mystream] pids-tracked]\n        assert_equal 1 [dict get [$db1 XINFO STREAM mystream] pids-tracked]\n\n        $db0 FLUSHALL\n\n        $db0 XADD mystream IDMP p1 \"init\" * field \"init\"\n        $db0 XCFGSET mystream IDMP-DURATION 2\n        $db1 XADD mystream IDMP p2 \"init\" * field \"init\"\n        $db1 XCFGSET mystream IDMP-DURATION 2\n\n        set len0_before [$db0 XLEN mystream]\n        set id0_new [$db0 XADD mystream IDMP p1 \"req-1\" * field \"v2\"]\n        assert_equal [expr {$len0_before + 1}] [$db0 XLEN mystream] \\\n            \"DB0: XADD after FLUSHALL should create a new entry, not deduplicate\"\n\n        set len1_before [$db1 XLEN mystream]\n        set id1_new [$db1 XADD mystream IDMP p2 \"req-1\" * field \"v2\"]\n        assert_equal [expr {$len1_before + 1}] [$db1 XLEN mystream] \\\n            \"DB1: XADD after FLUSHALL should create a new entry, not deduplicate\"\n\n        wait_for_condition 50 100 {\n            [dict get [$db0 XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [$db0 XINFO STREAM mystream] iids-tracked] == 0 &&\n            [dict get [$db1 XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [$db1 XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            $db0 close\n            
$db1 close\n            fail \"IDMP entries were not cleaned up after FLUSHALL and recreate\"\n        }\n\n        $db0 close\n        $db1 close\n        assert {$id0 ne $id0_new}\n        assert {$id1 ne $id1_new}\n    } {} {singledb:skip}\n\n    test {XADD IDMP tracking survives RENAME} {\n        r DEL idmpstream{t}\n        r DEL idmpnewstream{t}\n\n        r XADD idmpstream{t} IDMP p1 \"init\" * field \"init\"\n        r XCFGSET idmpstream{t} IDMP-DURATION 2\n        set id1 [r XADD idmpstream{t} IDMP p1 \"req-1\" * field \"v1\"]\n\n        assert_equal 1 [dict get [r XINFO STREAM idmpstream{t}] pids-tracked]\n\n        r RENAME idmpstream{t} idmpnewstream{t}\n\n        # Deduplication should still work under the new name.\n        set id1_dup [r XADD idmpnewstream{t} IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n\n        # IDMP entries should still expire via cron after rename.\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM idmpnewstream{t}] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM idmpnewstream{t}] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up after RENAME\"\n        }\n\n        set id2 [r XADD idmpnewstream{t} IDMP p1 \"req-1\" * field \"v2\"]\n        assert {$id1 ne $id2}\n    }\n\n    test {XADD IDMP tracking correct after RENAME overwrites IDMP stream} {\n        r DEL idmpstreamA{t}\n        r DEL idmpstreamB{t}\n\n        r XADD idmpstreamA{t} IDMP p1 \"init\" * field \"init\"\n        r XCFGSET idmpstreamA{t} IDMP-DURATION 2\n        set idA [r XADD idmpstreamA{t} IDMP p1 \"req-A\" * field \"vA\"]\n\n        r XADD idmpstreamB{t} IDMP p2 \"init\" * field \"init\"\n        r XCFGSET idmpstreamB{t} IDMP-DURATION 2\n        r XADD idmpstreamB{t} IDMP p2 \"req-B\" * field \"vB\"\n\n        assert_equal 1 [dict get [r XINFO STREAM idmpstreamA{t}] pids-tracked]\n        assert_equal 1 [dict get [r XINFO STREAM idmpstreamB{t}] pids-tracked]\n\n        # RENAME A -> B 
overwrites B with A's data.\n        r RENAME idmpstreamA{t} idmpstreamB{t}\n\n        # streamA's IDMP tracking should now be under key B.\n        set idA_dup [r XADD idmpstreamB{t} IDMP p1 \"req-A\" * field \"dup\"]\n        assert_equal $idA $idA_dup\n\n        # streamB's old tracking (producer p2) should be gone.\n        # Verify by checking XLEN: a new entry should be created, not deduplicated.\n        set len_before [r XLEN idmpstreamB{t}]\n        r XADD idmpstreamB{t} IDMP p2 \"req-B\" * field \"vB2\"\n        assert_equal [expr {$len_before + 1}] [r XLEN idmpstreamB{t}] \\\n            \"p2/req-B should create a new entry after RENAME overwrite, not deduplicate\"\n\n        # Cron expiry should still work for the renamed stream.\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM idmpstreamB{t}] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM idmpstreamB{t}] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up after RENAME overwrite\"\n        }\n\n        set idA2 [r XADD idmpstreamB{t} IDMP p1 \"req-A\" * field \"vA2\"]\n        assert {$idA ne $idA2}\n    }\n\n    test {XADD IDMP tracking survives COPY} {\n        r DEL idmpstream{t}\n        r DEL idmpcopy{t}\n\n        r XADD idmpstream{t} IDMP p1 \"init\" * field \"init\"\n        r XCFGSET idmpstream{t} IDMP-DURATION 2\n        set id1 [r XADD idmpstream{t} IDMP p1 \"req-1\" * field \"v1\"]\n\n        # Add a second producer so we can verify multi-producer copy.\n        r XADD idmpstream{t} IDMP p2 \"req-A\" * field \"vA\"\n\n        set info [r XINFO STREAM idmpstream{t}]\n        assert_equal 2 [dict get $info pids-tracked]\n        assert_equal 2 [dict get $info iids-tracked]\n\n        r COPY idmpstream{t} idmpcopy{t}\n\n        # Verify all IDMP metadata is preserved on the copy.\n        set copy_info [r XINFO STREAM idmpcopy{t}]\n        set orig_info [r XINFO STREAM idmpstream{t}]\n        assert_equal [dict get 
$orig_info idmp-duration]    [dict get $copy_info idmp-duration]\n        assert_equal [dict get $orig_info idmp-maxsize]     [dict get $copy_info idmp-maxsize]\n        assert_equal [dict get $orig_info pids-tracked]     [dict get $copy_info pids-tracked]\n        assert_equal [dict get $orig_info iids-tracked]     [dict get $copy_info iids-tracked]\n        assert_equal [dict get $orig_info iids-added]       [dict get $copy_info iids-added]\n        assert_equal [dict get $orig_info iids-duplicates]  [dict get $copy_info iids-duplicates]\n\n        # Deduplication should work on the copy for both producers.\n        set id1_dup [r XADD idmpcopy{t} IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n\n        # Original should still deduplicate independently.\n        set id1_dup_orig [r XADD idmpstream{t} IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup_orig\n\n        # IDMP entries should expire via cron on the copy.\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM idmpcopy{t}] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM idmpcopy{t}] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up on copied stream\"\n        }\n\n        set id2 [r XADD idmpcopy{t} IDMP p1 \"req-1\" * field \"v2\"]\n        assert {$id1 ne $id2}\n    }\n\n    test {XADD IDMP tracking correct after COPY REPLACE overwrites IDMP stream} {\n        r DEL idmpstreamA{t}\n        r DEL idmpstreamB{t}\n\n        r XADD idmpstreamA{t} IDMP p1 \"init\" * field \"init\"\n        r XCFGSET idmpstreamA{t} IDMP-DURATION 2\n        set idA [r XADD idmpstreamA{t} IDMP p1 \"req-A\" * field \"vA\"]\n\n        r XADD idmpstreamB{t} IDMP p2 \"init\" * field \"init\"\n        r XCFGSET idmpstreamB{t} IDMP-DURATION 2\n        r XADD idmpstreamB{t} IDMP p2 \"req-B\" * field \"vB\"\n\n        assert_equal 1 [dict get [r XINFO STREAM idmpstreamA{t}] pids-tracked]\n        assert_equal 1 [dict 
get [r XINFO STREAM idmpstreamB{t}] pids-tracked]\n\n        r COPY idmpstreamA{t} idmpstreamB{t} REPLACE\n\n        # streamA's IDMP tracking should now be under key B.\n        set idA_dup [r XADD idmpstreamB{t} IDMP p1 \"req-A\" * field \"dup\"]\n        assert_equal $idA $idA_dup\n\n        # streamB's old tracking (producer p2) should be gone.\n        # Verify by checking XLEN: a new entry should be created, not deduplicated.\n        set len_before [r XLEN idmpstreamB{t}]\n        r XADD idmpstreamB{t} IDMP p2 \"req-B\" * field \"vB2\"\n        assert_equal [expr {$len_before + 1}] [r XLEN idmpstreamB{t}] \\\n            \"p2/req-B should create a new entry after COPY REPLACE, not deduplicate\"\n\n        # Original A should still have its own tracking.\n        set idA_dup_orig [r XADD idmpstreamA{t} IDMP p1 \"req-A\" * field \"dup\"]\n        assert_equal $idA $idA_dup_orig\n\n        # Cron expiry should work on the replaced copy.\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM idmpstreamB{t}] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM idmpstreamB{t}] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up after COPY REPLACE\"\n        }\n\n        set idA2 [r XADD idmpstreamB{t} IDMP p1 \"req-A\" * field \"vA2\"]\n        assert {$idA ne $idA2}\n    }\n\n    test {XADD IDMP tracking survives MOVE} {\n        set db0 [redis_client]\n        $db0 SELECT 0\n        $db0 FLUSHDB\n        set db1 [redis_client]\n        $db1 SELECT 1\n        $db1 FLUSHDB\n\n        $db0 XADD mystream IDMP p1 \"init\" * field \"init\"\n        $db0 XCFGSET mystream IDMP-DURATION 2\n        set id1 [$db0 XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n\n        assert_equal 1 [dict get [$db0 XINFO STREAM mystream] pids-tracked]\n\n        $db0 MOVE mystream 1\n\n        # Deduplication should work in the destination DB.\n        set id1_dup [$db1 XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n  
      assert_equal $id1 $id1_dup\n\n        # IDMP entries should still expire via cron in the new DB.\n        wait_for_condition 50 100 {\n            [dict get [$db1 XINFO STREAM mystream] pids-tracked] == 0 &&\n            [dict get [$db1 XINFO STREAM mystream] iids-tracked] == 0\n        } else {\n            $db0 close\n            $db1 close\n            fail \"IDMP entries were not cleaned up after MOVE\"\n        }\n\n        set id2 [$db1 XADD mystream IDMP p1 \"req-1\" * field \"v2\"]\n        $db0 close\n        $db1 close\n        assert {$id1 ne $id2}\n    } {} {singledb:skip}\n\n    test {XADD IDMP tracking survives RESTORE} {\n        r DEL idmpstream{t}\n        r DEL idmpcopy{t}\n\n        r XADD idmpstream{t} IDMP p1 \"init\" * field \"init\"\n        r XCFGSET idmpstream{t} IDMP-DURATION 2\n        set id1 [r XADD idmpstream{t} IDMP p1 \"req-1\" * field \"v1\"]\n\n        assert_equal 1 [dict get [r XINFO STREAM idmpstream{t}] pids-tracked]\n\n        set dump [r DUMP idmpstream{t}]\n        set ttl [r PTTL idmpstream{t}]\n        if {$ttl == -1} { set ttl 0 }\n        r RESTORE idmpcopy{t} $ttl $dump\n\n        set copy_info [r XINFO STREAM idmpcopy{t}]\n        assert_equal 1 [dict get $copy_info pids-tracked]\n        assert_equal 1 [dict get $copy_info iids-tracked]\n\n        set id1_dup [r XADD idmpcopy{t} IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $id1_dup\n\n        wait_for_condition 50 100 {\n            [dict get [r XINFO STREAM idmpcopy{t}] pids-tracked] == 0 &&\n            [dict get [r XINFO STREAM idmpcopy{t}] iids-tracked] == 0\n        } else {\n            fail \"IDMP entries were not cleaned up on RESTOREd stream\"\n        }\n\n        set id2 [r XADD idmpcopy{t} IDMP p1 \"req-1\" * field \"v2\"]\n        assert {$id1 ne $id2}\n    }\n\n    test {XADD IDMP multiple producers concurrent access} {\n        r DEL mystream\n        \n        # Create multiple clients\n        set client1 [redis_client]\n        
set client2 [redis_client]\n        set client3 [redis_client]\n        \n        # Each client acts as a different producer\n        set id1 [$client1 XADD mystream IDMP service-a \"order-123\" * data \"from-a\"]\n        set id2 [$client2 XADD mystream IDMP service-b \"order-123\" * data \"from-b\"]\n        set id3 [$client3 XADD mystream IDMP service-c \"order-123\" * data \"from-c\"]\n        \n        # All should be different entries\n        assert {$id1 ne $id2}\n        assert {$id2 ne $id3}\n        assert_equal 3 [r XLEN mystream]\n        \n        # Duplicate from same service should return same ID\n        set id1_dup [$client1 XADD mystream IDMP service-a \"order-123\" * data \"retry\"]\n        assert_equal $id1 $id1_dup\n        \n        # Verify pids-tracked\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n        \n        # Cleanup\n        $client1 close\n        $client2 close\n        $client3 close\n    }\n\n    test {XADD IDMP multiple producers in MULTI/EXEC} {\n        r DEL mystream\n        \n        # Queue requests from multiple producers in a MULTI/EXEC transaction\n        r MULTI\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p2 \"req-1\" * field \"v2\"\n        r XADD mystream IDMP p1 \"req-1\" * field \"dup\"\n        r XADD mystream IDMP p2 \"req-2\" * field \"v3\"\n        r XADD mystream IDMP p3 \"req-1\" * field \"v4\"\n        set results [r EXEC]\n        \n        set id_p1_r1 [lindex $results 0]\n        set id_p2_r1 [lindex $results 1]\n        set id_p1_r1_dup [lindex $results 2]\n        set id_p2_r2 [lindex $results 3]\n        set id_p3_r1 [lindex $results 4]\n        \n        # p1 req-1 and its duplicate should match\n        assert_equal $id_p1_r1 $id_p1_r1_dup\n        \n        # Different producers or different IIDs should be different\n        assert {$id_p1_r1 ne $id_p2_r1}\n        assert {$id_p2_r1 ne $id_p2_r2}\n        assert 
{$id_p2_r1 ne $id_p3_r1}\n        \n        # 4 unique entries: p1/req-1, p2/req-1, p2/req-2, p3/req-1\n        assert_equal 4 [r XLEN mystream]\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n    }\n\n    test {XADD IDMP multiple producers with mixed IDMP and IDMPAUTO} {\n        r DEL mystream\n        \n        # Mix of IDMP and IDMPAUTO from different producers\n        set id1 [r XADD mystream IDMP p1 \"explicit-iid\" * field \"v1\"]\n        set id2 [r XADD mystream IDMPAUTO p2 * field \"v1\"]\n        set id3 [r XADD mystream IDMP p3 \"another-iid\" * field \"v1\"]\n        set id4 [r XADD mystream IDMPAUTO p4 * field \"v1\"]\n        \n        # All should be different entries\n        assert {$id1 ne $id2}\n        assert {$id2 ne $id3}\n        assert {$id3 ne $id4}\n        assert_equal 4 [r XLEN mystream]\n        \n        # Duplicates should work for each type\n        set id1_dup [r XADD mystream IDMP p1 \"explicit-iid\" * field \"dup\"]\n        set id2_dup [r XADD mystream IDMPAUTO p2 * field \"v1\"]\n        \n        assert_equal $id1 $id1_dup\n        assert_equal $id2 $id2_dup\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 4 [dict get $reply pids-tracked]\n    }\n\n    test {XADD IDMP multiple producers stress test} {\n        r DEL mystream\n        \n        # Create many producers\n        set num_producers 50\n        set ids {}\n        \n        for {set i 0} {$i < $num_producers} {incr i} {\n            lappend ids [r XADD mystream IDMP \"producer-$i\" \"request-1\" * field \"value-$i\"]\n        }\n        \n        # Verify all entries exist\n        assert_equal $num_producers [r XLEN mystream]\n        \n        # Verify pids-tracked\n        set reply [r XINFO STREAM mystream]\n        assert_equal $num_producers [dict get $reply pids-tracked]\n        assert_equal $num_producers [dict get $reply iids-tracked]\n        \n        # Verify 
deduplication for each producer\n        for {set i 0} {$i < $num_producers} {incr i} {\n            set original_id [lindex $ids $i]\n            set dup_id [r XADD mystream IDMP \"producer-$i\" \"request-1\" * field \"dup\"]\n            assert_equal $original_id $dup_id\n        }\n        \n        # No new entries should have been added\n        assert_equal $num_producers [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with invalid syntax} {\n        r DEL mystream\n        assert_error \"*IDMP/IDMPAUTO specified multiple times*\" {r XADD mystream IDMPAUTO p1 IDMPAUTO p2 * f v}\n        assert_error \"*IDMP/IDMPAUTO specified multiple times*\" {r XADD mystream IDMPAUTO p1 IDMP p2 iid1 * f v}\n        assert_error \"*IDMP/IDMPAUTO specified multiple times*\" {r XADD mystream IDMP p1 iid1 IDMPAUTO p2 * f v}\n        assert_error \"*IDMP/IDMPAUTO can be used only with auto-generated IDs*\" {r XADD mystream IDMPAUTO p1 1-1 f v}\n    }\n\n    test {XADD IDMPAUTO basic deduplication based on field-value pairs} {\n        r DEL mystream\n        \n        # First XADD with IDMPAUTO\n        set id1 [r XADD mystream IDMPAUTO p1 * amount \"100\" currency \"USD\"]\n        assert {[regexp {^[0-9]+-[0-9]+$} $id1]}\n        \n        # Second XADD with same fields and values should deduplicate\n        set id2 [r XADD mystream IDMPAUTO p1 * amount \"100\" currency \"USD\"]\n        assert_equal $id1 $id2\n        \n        # Verify only one entry exists\n        assert_equal 1 [r XLEN mystream]\n        \n        # Third XADD with different values should create new entry\n        set id3 [r XADD mystream IDMPAUTO p1 * amount \"200\" currency \"USD\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO deduplicates regardless of field order} {\n        r DEL mystream\n        \n        # Add entry with fields in one order\n        set id1 [r XADD mystream IDMPAUTO p1 * field1 \"a\" field2 \"b\" field3 \"c\"]\n     
   \n        # Add entry with same fields in different order (should deduplicate)\n        set id2 [r XADD mystream IDMPAUTO p1 * field2 \"b\" field3 \"c\" field1 \"a\"]\n        assert_equal $id1 $id2\n        \n        # Verify only one entry exists\n        assert_equal 1 [r XLEN mystream]\n        \n        # Add entry with different order but different values (should be new)\n        set id3 [r XADD mystream IDMPAUTO p1 * field3 \"c\" field1 \"x\" field2 \"b\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO different field-value pairs create different entries} {\n        r DEL mystream\n        \n        # Add different entries\n        set id1 [r XADD mystream IDMPAUTO p1 * user \"alice\" action \"login\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * user \"bob\" action \"login\"]\n        set id3 [r XADD mystream IDMPAUTO p1 * user \"alice\" action \"logout\"]\n        \n        # Verify all IDs are different\n        assert {$id1 != $id2}\n        assert {$id2 != $id3}\n        assert {$id1 != $id3}\n        \n        # Verify all entries exist\n        assert_equal 3 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with single field-value pair} {\n        r DEL mystream\n        \n        # Add entry with single field\n        set id1 [r XADD mystream IDMPAUTO p1 * status \"active\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * status \"active\"]\n        assert_equal $id1 $id2\n        \n        # Different value should create new entry\n        set id3 [r XADD mystream IDMPAUTO p1 * status \"inactive\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with many field-value pairs} {\n        r DEL mystream\n        \n        # Add entry with many fields\n        set id1 [r XADD mystream IDMPAUTO p1 * f1 \"v1\" f2 \"v2\" f3 \"v3\" f4 \"v4\" f5 \"v5\" f6 \"v6\" f7 \"v7\" f8 \"v8\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * f1 
\"v1\" f2 \"v2\" f3 \"v3\" f4 \"v4\" f5 \"v5\" f6 \"v6\" f7 \"v7\" f8 \"v8\"]\n        assert_equal $id1 $id2\n        \n        # Change one value should create new entry\n        set id3 [r XADD mystream IDMPAUTO p1 * f1 \"v1\" f2 \"v2\" f3 \"v3\" f4 \"v4\" f5 \"v5\" f6 \"v6\" f7 \"v7\" f8 \"different\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with binary-safe values} {\n        r DEL mystream\n        \n        # Test with null bytes and binary data\n        set binary_val \"\\x00\\x01\\x02\\xff\"\n        set id1 [r XADD mystream IDMPAUTO p1 * field $binary_val]\n        set id2 [r XADD mystream IDMPAUTO p1 * field $binary_val]\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with unicode values} {\n        r DEL mystream\n        \n        # Test with unicode characters\n        set id1 [r XADD mystream IDMPAUTO p1 * message \"hello世界\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * message \"hello世界\"]\n        assert_equal $id1 $id2\n        \n        # Different unicode should create new entry\n        set id3 [r XADD mystream IDMPAUTO p1 * message \"héllo\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with long values} {\n        r DEL mystream\n        \n        # Test with very long values\n        set long_val [string repeat \"x\" 10000]\n        set id1 [r XADD mystream IDMPAUTO p1 * data $long_val]\n        set id2 [r XADD mystream IDMPAUTO p1 * data $long_val]\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with empty string values} {\n        r DEL mystream\n        \n        # Test with empty string values\n        set id1 [r XADD mystream IDMPAUTO p1 * field \"\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * field \"\"]\n        assert_equal $id1 $id2\n        \n        # Non-empty should be 
different\n        set id3 [r XADD mystream IDMPAUTO p1 * field \"value\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with MAXLEN option} {\n        r DEL mystream\n        \n        # Add entries with IDMPAUTO and MAXLEN\n        set id1 [r XADD mystream IDMPAUTO p1 MAXLEN ~ 100 * field \"value1\"]\n        set id2 [r XADD mystream IDMPAUTO p1 MAXLEN ~ 100 * field \"value2\"]\n        \n        # Attempt duplicate\n        set id1_dup [r XADD mystream IDMPAUTO p1 MAXLEN ~ 100 * field \"value1\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id1_dup\n        \n        # Verify only 2 entries exist\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with MINID option} {\n        r DEL mystream\n        \n        # Add entry with IDMPAUTO and MINID\n        set id1 [r XADD mystream IDMPAUTO p1 MINID ~ 1000000000-0 * field \"value1\"]\n        \n        # Attempt duplicate with MINID\n        set id2 [r XADD mystream IDMPAUTO p1 MINID ~ 1000000000-0 * field \"value1\"]\n        \n        # Verify deduplication works\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with NOMKSTREAM option} {\n        r DEL mystream\n        \n        # Attempt XADD with NOMKSTREAM on non-existent stream\n        set result [r XADD mystream NOMKSTREAM IDMPAUTO p1 * field \"value\"]\n        assert_equal {} $result\n        \n        # Create stream first\n        r XADD mystream * field \"initial\"\n        \n        # Now NOMKSTREAM with IDMPAUTO should work\n        set id [r XADD mystream NOMKSTREAM IDMPAUTO p1 * field \"test\"]\n        assert {[regexp {^[0-9]+-[0-9]+$} $id]}\n    }\n\n    test {XADD IDMPAUTO with KEEPREF option} {\n        r DEL mystream\n        \n        # Add entries with IDMPAUTO and KEEPREF\n        set id1 [r XADD mystream KEEPREF IDMPAUTO p1 * field \"value1\"]\n        set id2 [r XADD 
mystream KEEPREF IDMPAUTO p1 * field \"value1\"]\n        \n        # Verify deduplication works with KEEPREF\n        assert_equal $id1 $id2\n        assert_equal 1 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO argument order variations} {\n        r DEL mystream\n        \n        # Test different argument orders\n        set id1 [r XADD mystream IDMPAUTO p1 * field \"test\"]\n        set id2 [r XADD mystream IDMPAUTO p1 MAXLEN ~ 100 * field \"test2\"]\n        set id3 [r XADD mystream MAXLEN ~ 100 IDMPAUTO p1 * field \"test3\"]\n        set id4 [r XADD mystream KEEPREF IDMPAUTO p1 * field \"test4\"]\n        set id5 [r XADD mystream IDMPAUTO p1 KEEPREF * field \"test5\"]\n        \n        # All should be valid stream IDs\n        assert {[regexp {^[0-9]+-[0-9]+$} $id1]}\n        assert {[regexp {^[0-9]+-[0-9]+$} $id2]}\n        assert {[regexp {^[0-9]+-[0-9]+$} $id3]}\n        assert {[regexp {^[0-9]+-[0-9]+$} $id4]}\n        assert {[regexp {^[0-9]+-[0-9]+$} $id5]}\n        \n        # Verify all entries exist\n        assert_equal 5 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO persists in RDB} {\n        r DEL mystream\n        \n        # Add entries with IDMPAUTO\n        set id1 [r XADD mystream IDMPAUTO p1 * field \"value1\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * field \"value2\"]\n        \n        # Save and reload\n        r DEBUG RELOAD\n        \n        # Verify stream exists\n        assert_equal 2 [r XLEN mystream]\n        \n        # Verify deduplication still works after restart\n        set id1_dup [r XADD mystream IDMPAUTO p1 * field \"value1\"]\n        assert_equal $id1 $id1_dup\n        \n        # Should still have only 2 entries\n        assert_equal 2 [r XLEN mystream]\n    } {} {external:skip needs:debug}\n\n    test {XADD IDMPAUTO with consumer groups} {\n        r DEL mystream\n        \n        # Create consumer group\n        r XADD mystream * initial \"value\"\n        r XGROUP CREATE mystream mygroup 0\n    
    \n        # Add entries with IDMPAUTO\n        set id1 [r XADD mystream IDMPAUTO p1 * event \"login\" user \"alice\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * event \"logout\" user \"bob\"]\n        \n        # Attempt duplicate\n        set id1_dup [r XADD mystream IDMPAUTO p1 * event \"login\" user \"alice\"]\n        assert_equal $id1 $id1_dup\n        \n        # Read from consumer group (should get 3 new entries, not 4)\n        set messages [r XREADGROUP GROUP mygroup consumer1 COUNT 10 STREAMS mystream >]\n        set stream_data [lindex $messages 0 1]\n        assert_equal 3 [llength $stream_data]\n    }\n\n    test {XADD IDMPAUTO field names matter} {\n        r DEL mystream\n        \n        # Different field names should create different entries\n        set id1 [r XADD mystream IDMPAUTO p1 * field1 \"value\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * field2 \"value\"]\n        \n        assert {$id1 != $id2}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO with numeric field names and values} {\n        r DEL mystream\n        \n        # Test with numeric field names\n        set id1 [r XADD mystream IDMPAUTO p1 * 123 \"456\" 789 \"012\"]\n        set id2 [r XADD mystream IDMPAUTO p1 * 123 \"456\" 789 \"012\"]\n        assert_equal $id1 $id2\n        \n        # Different numeric values\n        set id3 [r XADD mystream IDMPAUTO p1 * 123 \"999\" 789 \"012\"]\n        assert {$id3 != $id1}\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO multiple producers have isolated namespaces} {\n        r DEL mystream\n        \n        # Same field-value pairs with different producers should create separate entries\n        set id1 [r XADD mystream IDMPAUTO producer1 * amount \"100\" currency \"USD\"]\n        set id2 [r XADD mystream IDMPAUTO producer2 * amount \"100\" currency \"USD\"]\n        \n        # Different producers = different entries\n        assert {$id1 ne $id2}\n        
assert_equal 2 [r XLEN mystream]\n        \n        # Same producer with same fields should deduplicate\n        set id1_dup [r XADD mystream IDMPAUTO producer1 * amount \"100\" currency \"USD\"]\n        set id2_dup [r XADD mystream IDMPAUTO producer2 * amount \"100\" currency \"USD\"]\n        \n        assert_equal $id1 $id1_dup\n        assert_equal $id2 $id2_dup\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XADD IDMPAUTO multiple producers} {\n        r DEL mystream\n        \n        # Different producers with same content should create separate entries\n        set id1 [r XADD mystream IDMPAUTO app1 * event \"login\" user \"alice\"]\n        set id2 [r XADD mystream IDMPAUTO app2 * event \"login\" user \"alice\"]\n        set id3 [r XADD mystream IDMPAUTO app3 * event \"login\" user \"alice\"]\n        \n        # All should be different (different producers)\n        assert {$id1 ne $id2}\n        assert {$id2 ne $id3}\n        assert_equal 3 [r XLEN mystream]\n        \n        # Same producer with same content should deduplicate\n        set id1_dup [r XADD mystream IDMPAUTO app1 * event \"login\" user \"alice\"]\n        assert_equal $id1 $id1_dup\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n    }\n\n    test {XIDMP entries expire after DURATION seconds} {\n        r DEL mystream\n        r XADD mystream IDMP p1 \"req-1\" * field \"value1\"\n        r XCFGSET mystream IDMP-DURATION 1\n        \n        # Immediate duplicate should be detected\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-1\" * field \"value2\"]\n        assert_equal $id1 $id2\n        \n        # Wait for expiration (1 second + margin)\n        after 2500\n        \n        # Now should create new entry\n        set id3 [r XADD mystream IDMP p1 \"req-1\" * field \"value3\"]\n        assert {$id1 ne $id3}\n    }\n\n    test {XIDMP set 
evicts entries when MAXSIZE is reached} {\n        r DEL mystream\n        \n        # First add an entry to create the stream, then set config\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-MAXSIZE 3 IDMP-DURATION 60\n        \n        # Add 3 unique entries\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"v2\"]\n        set id3 [r XADD mystream IDMP p1 \"req-3\" * field \"v3\"]\n        \n        # All duplicates should still work (IDMP set has: req-1, req-2, req-3)\n        assert_equal $id1 [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id2 [r XADD mystream IDMP p1 \"req-2\" * field \"dup\"]\n        assert_equal $id3 [r XADD mystream IDMP p1 \"req-3\" * field \"dup\"]\n        \n        # Add 4th entry - should evict oldest (req-1)\n        set id4 [r XADD mystream IDMP p1 \"req-4\" * field \"v4\"]\n        \n        # req-1 should be evicted, so it should create a new entry\n        set result [r XADD mystream IDMP p1 \"req-1\" * field \"new\"]\n        assert {$result ne $id1}\n        \n        # req-2 was also evicted (when req-1 was re-added) but req-3 should still be in the set\n        assert_equal $id3 [r XADD mystream IDMP p1 \"req-3\" * field \"dup2\"]\n        \n        # Stream should have: init, req-1, req-2, req-3, req-4, req-1(new) = 6 entries\n        assert_equal 6 [r XLEN mystream]\n    }\n\n    test {XCFGSET set IDMP-DURATION successfully} {\n        r DEL mystream\n        \n        # Create stream with IDMP entry\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set IDMP-DURATION to 5s\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-DURATION 5]\n        \n        # Verify IDMP-DURATION was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 5 [dict get $reply idmp-duration]\n    }\n\n    test {XCFGSET set IDMP-MAXSIZE successfully} {\n        r DEL mystream\n  
      \n        # Create stream with IDMP entry\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set IDMP-MAXSIZE to 5000\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-MAXSIZE 5000]\n        \n        # Verify IDMP-MAXSIZE was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 5000 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET set both IDMP-DURATION and IDMP-MAXSIZE} {\n        r DEL mystream\n        \n        # Create stream with IDMP entry\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set both IDMP-DURATION and IDMP-MAXSIZE\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-DURATION 3 IDMP-MAXSIZE 10000]\n        \n        # Verify both were set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply idmp-duration]\n        assert_equal 10000 [dict get $reply idmp-maxsize]\n    }\n\n    test {XINFO STREAM shows IDMP configuration parameters} {\n        r DEL mystream\n        \n        # Create stream with IDMP entry\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set both IDMP-DURATION and IDMP-MAXSIZE\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-DURATION 3 IDMP-MAXSIZE 10000]\n        \n        # Verify both were set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply idmp-duration]\n        assert_equal 10000 [dict get $reply idmp-maxsize]\n    }\n\n    test {XINFO STREAM shows default IDMP parameters} {\n        r DEL mystream\n        \n        # Create stream with IDMP entry\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Verify default parameters\n        set reply [r XINFO STREAM mystream]\n        assert_equal 100 [dict get $reply idmp-duration]\n        assert_equal 100 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET error on non-existent stream} {\n        r DEL mystream\n        \n        # 
Attempt to set config on non-existent stream\n        assert_error \"*no such key*\" {r XCFGSET mystream IDMP-DURATION 5}\n    }\n\n    test {XCFGSET IDMP-DURATION maximum value validation} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set IDMP-DURATION to maximum allowed (86400 seconds = 24 hours)\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-DURATION 86400]\n        \n        # Verify it was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 86400 [dict get $reply idmp-duration]\n        \n        # Attempt to set IDMP-DURATION above maximum\n        assert_error \"*ERR IDMP-DURATION must be*\" {r XCFGSET mystream IDMP-DURATION 86401}\n        \n        # Verify IDMP-DURATION wasn't changed\n        set reply [r XINFO STREAM mystream]\n        assert_equal 86400 [dict get $reply idmp-duration]\n    }\n\n    test {XCFGSET IDMP-DURATION minimum value validation} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Attempt to set IDMP-DURATION to 0\n        assert_error \"*ERR IDMP-DURATION must be between*\" {r XCFGSET mystream IDMP-DURATION 0}\n        \n        # Attempt to set IDMP-DURATION to negative value\n        assert_error \"*ERR IDMP-DURATION must be between*\" {r XCFGSET mystream IDMP-DURATION -100}\n        \n        # Set IDMP-DURATION to minimum valid value (1 second)\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-DURATION 1]\n        \n        # Verify it was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply idmp-duration]\n    }\n\n    test {XCFGSET IDMP-MAXSIZE maximum value validation} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Set IDMP-MAXSIZE to maximum 
allowed (10000)\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-MAXSIZE 10000]\n        \n        # Verify it was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 10000 [dict get $reply idmp-maxsize]\n        \n        # Attempt to set IDMP-MAXSIZE above maximum\n        assert_error \"*ERR IDMP-MAXSIZE must be between*\" {r XCFGSET mystream IDMP-MAXSIZE 10001}\n        \n        # Verify IDMP-MAXSIZE wasn't changed\n        set reply [r XINFO STREAM mystream]\n        assert_equal 10000 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET IDMP-MAXSIZE minimum value validation} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Attempt to set IDMP-MAXSIZE to 0\n        assert_error \"*ERR IDMP-MAXSIZE must be between*\" {r XCFGSET mystream IDMP-MAXSIZE 0}\n        \n        # Attempt to set IDMP-MAXSIZE to negative value\n        assert_error \"*ERR IDMP-MAXSIZE must be between*\" {r XCFGSET mystream IDMP-MAXSIZE -50}\n        \n        # Set IDMP-MAXSIZE to minimum valid value (1)\n        assert_equal \"OK\" [r XCFGSET mystream IDMP-MAXSIZE 1]\n        \n        # Verify it was set\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET invalid syntax} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Attempt CFGSET with invalid syntax\n        assert_error \"*ERR At least one parameter*\" {r XCFGSET mystream}\n        assert_error \"*syntax*\" {r XCFGSET mystream IDMP-DURATION}\n        assert_error \"*syntax*\" {r XCFGSET mystream IDMP-MAXSIZE}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION A}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION AAA}\n        assert_error 
\"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION *}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION -}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION +}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION 120-5}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION 3.14}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION 000000000}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION IDMP-DURATION}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION IDMP-DURATION IDMP-DURATION}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE A}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE AAA}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE *}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE -}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE +}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE 120-5}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE 3.14}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE 000000000}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE IDMP-MAXSIZE}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-MAXSIZE IDMP-MAXSIZE IDMP-MAXSIZE}\n\n        assert_error \"*syntax*\" {r XCFGSET mystream INVALID}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET mystream IDMP-DURATION INVALID IDMP-MAXSIZE}\n        assert_error \"*ERR value is not an integer*\" {r XCFGSET 
mystream IDMP-MAXSIZE INVALID IDMP-DURATION}\n    }\n\n    test {XCFGSET multiple configuration changes} {\n        r DEL mystream\n        \n        # Create stream with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        \n        # Change DURATION multiple times\n        r XCFGSET mystream IDMP-DURATION 1\n        r XCFGSET mystream IDMP-DURATION 2\n        r XCFGSET mystream IDMP-DURATION 3\n        \n        # Change MAXSIZE\n        r XCFGSET mystream IDMP-MAXSIZE 100\n        r XCFGSET mystream IDMP-MAXSIZE 200\n        \n        # Verify latest values\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply idmp-duration]\n        assert_equal 200 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET configuration persists in RDB} {\n        r DEL mystream\n        \n        # Create stream and set configuration\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        r XCFGSET mystream IDMP-DURATION 75 IDMP-MAXSIZE 7500\n        \n        # Save and restart\n        r SAVE\n\n        # Restart Redis\n        restart_server 0 true false\n        \n        # Verify configuration persisted\n        set reply [r XINFO STREAM mystream]\n        assert_equal 75 [dict get $reply idmp-duration]\n        assert_equal 7500 [dict get $reply idmp-maxsize]\n    } {} {external:skip}\n\n    test {XCFGSET configuration in AOF} {\n        r DEL mystream\n        r config set appendonly yes\n        \n        # Wait for the automatic AOF rewrite triggered by enabling AOF\n        waitForBgrewriteaof r\n\n        # Create stream and set configuration\n        r XADD mystream IDMP p1 \"req-1\" * field \"value\"\n        r XCFGSET mystream IDMP-DURATION 45 IDMP-MAXSIZE 4500\n        \n        # Force AOF rewrite\n        r BGREWRITEAOF\n        waitForBgrewriteaof r\n        \n        # Restart with AOF\n        r DEBUG RELOAD\n        \n        # Verify configuration\n        set reply [r XINFO STREAM 
mystream]\n        assert_equal 45 [dict get $reply idmp-duration]\n        assert_equal 4500 [dict get $reply idmp-maxsize]\n        \n        assert_equal \"OK\" [r config set appendonly no]\n    } {} {external:skip needs:debug}\n\n    test {XCFGSET changing IDMP-DURATION clears all iids history} {\n        r DEL mystream\n        \n        # Create stream and add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"value2\"]\n        \n        # Verify deduplication works before config change\n        set dup_id [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $dup_id\n        \n        # Change DURATION - should clear iids history\n        r XCFGSET mystream IDMP-DURATION 5\n        \n        # Now req-1 should create a new entry (history was cleared)\n        set new_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"new1\"]\n        assert {$id1 ne $new_id1}\n        \n        # Should have 3 entries total (2 original + 1 new)\n        assert_equal 3 [r XLEN mystream]\n    }\n\n    test {XCFGSET changing IDMP-MAXSIZE clears all iids history} {\n        r DEL mystream\n        \n        # Create stream and add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"value2\"]\n        \n        # Verify deduplication works before config change\n        set dup_id [r XADD mystream IDMP p1 \"req-2\" * field \"dup\"]\n        assert_equal $id2 $dup_id\n        \n        # Change MAXSIZE - should clear iids history\n        r XCFGSET mystream IDMP-MAXSIZE 5000\n        \n        # Now req-2 should create a new entry (history was cleared)\n        set new_id2 [r XADD mystream IDMP p1 \"req-2\" * field \"new2\"]\n        assert {$id2 ne $new_id2}\n        \n        # Should have 3 entries total (2 original + 1 new)\n        assert_equal 3 [r XLEN 
mystream]\n    }\n\n    test {XCFGSET history cleared then new deduplication works} {\n        r DEL mystream\n        \n        # Create stream and add entries\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        \n        # Change configuration to clear history\n        r XCFGSET mystream IDMP-DURATION 6\n        \n        # Add new entry with same iid\n        set new_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"new1\"]\n        assert {$id1 ne $new_id1}\n        \n        # Now deduplication should work with new history\n        set dup_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"dup1\"]\n        assert_equal $new_id1 $dup_id1\n        \n        # Should have 2 entries (1 original + 1 new)\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XCFGSET history cleared preserves stream entries} {\n        r DEL mystream\n        \n        # Create stream with entries\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\" data \"data1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"value2\" data \"data2\"]\n        \n        # Verify entries exist with correct data\n        set entries [r XRANGE mystream - +]\n        assert_equal 2 [llength $entries]\n        \n        # Change configuration to clear iids history\n        r XCFGSET mystream IDMP-DURATION 7\n        \n        # Stream entries should still exist unchanged\n        set entries_after [r XRANGE mystream - +]\n        assert_equal 2 [llength $entries_after]\n        \n        # Verify original entries have correct data\n        set entry1_fields [lindex [lindex $entries_after 0] 1]\n        assert_equal \"value1\" [dict get $entry1_fields field]\n        assert_equal \"data1\" [dict get $entry1_fields data]\n        \n        # But iids history is cleared, so can add new entries\n        set new_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"new1\"]\n        assert {$id1 ne $new_id1}\n    }\n\n    test {XCFGSET setting same 
IDMP-DURATION does not clear iids history} {\n        r DEL mystream\n        \n        # Create stream and add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"value2\"]\n        \n        # Verify deduplication works before config\n        set dup_id [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $dup_id\n        \n        # Get current DURATION (default is 100)\n        set reply [r XINFO STREAM mystream]\n        set current_duration [dict get $reply idmp-duration]\n        assert_equal 100 $current_duration\n        \n        # Set IDMP-DURATION to same value - should NOT clear iids history\n        r XCFGSET mystream IDMP-DURATION 100\n        \n        # Deduplication should still work (history was NOT cleared)\n        set dup_id2 [r XADD mystream IDMP p1 \"req-1\" * field \"dup2\"]\n        assert_equal $id1 $dup_id2\n        \n        set dup_id3 [r XADD mystream IDMP p1 \"req-2\" * field \"dup3\"]\n        assert_equal $id2 $dup_id3\n        \n        # Should still have 2 entries (no new entries added)\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XCFGSET setting same IDMP-MAXSIZE does not clear iids history} {\n        r DEL mystream\n        \n        # Create stream and add entries with IDMP\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        set id2 [r XADD mystream IDMP p1 \"req-2\" * field \"value2\"]\n        \n        # Verify deduplication works\n        set dup_id [r XADD mystream IDMP p1 \"req-2\" * field \"dup\"]\n        assert_equal $id2 $dup_id\n        \n        # Get current MAXSIZE (default is 100)\n        set reply [r XINFO STREAM mystream]\n        set current_maxsize [dict get $reply idmp-maxsize]\n        assert_equal 100 $current_maxsize\n        \n        # Set IDMP-MAXSIZE to same value - should NOT clear iids history\n        r XCFGSET mystream 
IDMP-MAXSIZE 100\n        \n        # Deduplication should still work (history was NOT cleared)\n        set dup_id2 [r XADD mystream IDMP p1 \"req-1\" * field \"dup2\"]\n        assert_equal $id1 $dup_id2\n        \n        set dup_id3 [r XADD mystream IDMP p1 \"req-2\" * field \"dup3\"]\n        assert_equal $id2 $dup_id3\n        \n        # Should still have 2 entries (no new entries added)\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XCFGSET repeated same-value calls preserve IDMP history} {\n        r DEL mystream\n        \n        # Set configuration first\n        r XADD mystream * field \"init\"\n        r XCFGSET mystream IDMP-DURATION 10 IDMP-MAXSIZE 5000\n        \n        # Create stream with initial entry after config is set\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"value1\"]\n        \n        # Call XCFGSET multiple times with same values\n        # (common pattern for configuration initialization)\n        r XCFGSET mystream IDMP-DURATION 10 IDMP-MAXSIZE 5000\n        r XCFGSET mystream IDMP-DURATION 10 IDMP-MAXSIZE 5000\n        r XCFGSET mystream IDMP-DURATION 10 IDMP-MAXSIZE 5000\n        \n        # Deduplication should still work (history not cleared by same-value sets)\n        set dup_id [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $dup_id\n        \n        # Add new entry from second producer\n        set id2 [r XADD mystream IDMP p2 \"req-2\" * field \"value2\"]\n        \n        # Both producers should work with deduplication\n        set dup_id2 [r XADD mystream IDMP p2 \"req-2\" * field \"dup2\"]\n        assert_equal $id2 $dup_id2\n        \n        # Should have 3 entries total (init + 2 IDMP entries)\n        assert_equal 3 [r XLEN mystream]\n    }\n\n    test {XCFGSET changing value after same-value sets still clears history} {\n        r DEL mystream\n        \n        # Create stream with entries\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field 
\"value1\"]\n        \n        # Set to same value multiple times (doesn't clear)\n        r XCFGSET mystream IDMP-DURATION 100\n        r XCFGSET mystream IDMP-DURATION 100\n        \n        # Verify deduplication still works\n        set dup_id [r XADD mystream IDMP p1 \"req-1\" * field \"dup\"]\n        assert_equal $id1 $dup_id\n        \n        # Now change to different value (should clear)\n        r XCFGSET mystream IDMP-DURATION 50\n        \n        # Deduplication should not work anymore (history cleared)\n        set new_id [r XADD mystream IDMP p1 \"req-1\" * field \"new\"]\n        assert {$id1 ne $new_id}\n        \n        # Should have 2 entries now\n        assert_equal 2 [r XLEN mystream]\n    }\n\n    test {XCFGSET setting same value preserves iids-tracked count} {\n        r DEL mystream\n        \n        # Add entries with IDMP\n        r XADD mystream IDMP p1 \"req-1\" * field \"value1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"value2\"\n        r XADD mystream IDMP p2 \"req-3\" * field \"value3\"\n        \n        # Verify counts\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-tracked]\n        assert_equal 3 [dict get $reply iids-added]\n        \n        # Set to same value - should preserve counts\n        r XCFGSET mystream IDMP-DURATION 100 IDMP-MAXSIZE 100\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-tracked]\n        assert_equal 3 [dict get $reply iids-added]\n    }\n\n    test {XINFO STREAM returns iids-tracked and iids-added fields} {\n        r DEL mystream\n        \n        # Create stream without IDMP first\n        r XADD mystream * field \"value\"\n        \n        # Verify initial values: no IDMP entries yet\n        set reply [r XINFO STREAM mystream]\n        assert_equal 0 [dict get $reply iids-tracked]\n        assert_equal 0 [dict get $reply iids-added]\n        \n        # Add entries with IDMP\n        r 
XADD mystream IDMP p1 \"req-1\" * field \"value1\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply iids-tracked]\n        assert_equal 1 [dict get $reply iids-added]\n        \n        r XADD mystream IDMP p1 \"req-2\" * field \"value2\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-tracked]\n        assert_equal 2 [dict get $reply iids-added]\n        \n        # Duplicate IDMP should NOT increment counters\n        r XADD mystream IDMP p1 \"req-1\" * field \"duplicate\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-tracked]\n        assert_equal 2 [dict get $reply iids-added]\n        \n        # Also verify FULL mode returns the same fields\n        set reply_full [r XINFO STREAM mystream FULL]\n        assert_equal 2 [dict get $reply_full iids-tracked]\n        assert_equal 2 [dict get $reply_full iids-added]\n    }\n\n    test {XINFO STREAM iids-added is lifetime counter even after eviction} {\n        r DEL mystream\n        \n        # Set small MAXSIZE to trigger eviction\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-MAXSIZE 3\n        \n        # Add 3 more entries (total 4, but MAXSIZE=3 so oldest evicted)\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        r XADD mystream IDMP p1 \"req-3\" * field \"v3\"\n        \n        set reply [r XINFO STREAM mystream]\n        # iids-tracked should be capped at MAXSIZE (3)\n        assert_equal 3 [dict get $reply iids-tracked]\n        # iids-added should be lifetime count (4)\n        assert_equal 4 [dict get $reply iids-added]\n        \n        # Add more entries to verify lifetime counter keeps growing\n        r XADD mystream IDMP p1 \"req-4\" * field \"v4\"\n        r XADD mystream IDMP p1 \"req-5\" * field \"v5\"\n        \n        set reply [r XINFO STREAM 
mystream]\n        assert_equal 3 [dict get $reply iids-tracked]\n        assert_equal 6 [dict get $reply iids-added]\n    }\n\n    test {XINFO STREAM iids-duplicates is lifetime counter} {\n        r DEL mystream\n        \n        # Add initial entry with unique IID\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        \n        set reply [r XINFO STREAM mystream]\n        # No duplicates yet\n        assert_equal 0 [dict get $reply iids-duplicates]\n        assert_equal 1 [dict get $reply iids-added]\n        \n        # Try to add duplicate IID - should be rejected and increment counter\n        set dup_id [r XADD mystream IDMP p1 \"req-1\" * field \"v1-dup\"]\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply iids-duplicates]\n        assert_equal 1 [dict get $reply iids-added]  ;# Still 1 successful add\n        \n        # Try same duplicate again\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-dup2\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-duplicates]\n        assert_equal 1 [dict get $reply iids-added]\n        \n        # Add a different IID (should succeed, duplicates unchanged)\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-duplicates]\n        assert_equal 2 [dict get $reply iids-added]\n        \n        # Try the first IID again\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-dup3\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-duplicates]\n        assert_equal 2 [dict get $reply iids-added]\n    }\n\n    test {XINFO STREAM iids-duplicates persists after eviction} {\n        r DEL mystream\n        \n        # Add initial entry and configure MAXSIZE\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-MAXSIZE 3\n    
    # Note: CFGSET clears IID history, so \"init\" is no longer tracked\n        \n        # Add entries and create some duplicates\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-dup\"  ;# Duplicate\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2-dup\"  ;# Duplicate\n        \n        set reply [r XINFO STREAM mystream]\n        # iids-tracked should be 2 (req-1, req-2) - \"init\" was cleared by CFGSET\n        assert_equal 2 [dict get $reply iids-tracked]\n        # iids-added should be 3 (init, req-1, req-2) - lifetime counter includes \"init\"\n        assert_equal 3 [dict get $reply iids-added]\n        # iids-duplicates should be 2\n        assert_equal 2 [dict get $reply iids-duplicates]\n        \n        # Add more entries to trigger eviction of old IIDs\n        r XADD mystream IDMP p1 \"req-3\" * field \"v3\"\n        r XADD mystream IDMP p1 \"req-4\" * field \"v4\"\n        \n        # Now we have: req-2, req-3, req-4 (MAXSIZE=3, so req-1 was evicted)\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-tracked]  ;# Now capped at MAXSIZE (3)\n        assert_equal 5 [dict get $reply iids-added]    ;# 5 successful adds total\n        assert_equal 2 [dict get $reply iids-duplicates]  ;# Still 2 (lifetime counter)\n        \n        # Try to duplicate one of the currently tracked IIDs\n        r XADD mystream IDMP p1 \"req-3\" * field \"v3-dup\"\n        r XADD mystream IDMP p1 \"req-4\" * field \"v4-dup\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-tracked]\n        assert_equal 5 [dict get $reply iids-added]\n        assert_equal 4 [dict get $reply iids-duplicates]  ;# Incremented by 2\n    }\n\n    test {XINFO STREAM iids-duplicates with multiple producers} {\n        r DEL mystream\n        \n        # Add entries from 
different producers with same IID\n        # (same IID but different producer = NOT a duplicate)\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-p1\"\n        r XADD mystream IDMP p2 \"req-1\" * field \"v1-p2\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        assert_equal 2 [dict get $reply iids-added]\n        assert_equal 0 [dict get $reply iids-duplicates]  ;# No duplicates\n        \n        # Now add actual duplicates (same IID, same producer)\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-p1-dup\"\n        r XADD mystream IDMP p2 \"req-1\" * field \"v1-p2-dup\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        assert_equal 2 [dict get $reply iids-added]\n        assert_equal 2 [dict get $reply iids-duplicates]  ;# 2 duplicates (one per producer)\n    }\n\n    test {XINFO STREAM iids counters after CFGSET clears history} {\n        r DEL mystream\n        \n        # Add entries with IDMP and create some duplicates\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        r XADD mystream IDMP p1 \"req-3\" * field \"v3\"\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1-dup\"  ;# Duplicate\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2-dup\"  ;# Duplicate\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply iids-tracked]\n        assert_equal 3 [dict get $reply iids-added]\n        assert_equal 2 [dict get $reply iids-duplicates]\n        \n        # CFGSET clears IID history\n        r XCFGSET mystream IDMP-DURATION 60\n        \n        set reply [r XINFO STREAM mystream]\n        # iids-tracked should be 0 after history cleared\n        assert_equal 0 [dict get $reply iids-tracked]\n        # iids-added should still be preserved (lifetime counter)\n        assert_equal 3 [dict get 
$reply iids-added]\n        # iids-duplicates should still be preserved (lifetime counter)\n        assert_equal 2 [dict get $reply iids-duplicates]\n        \n        # Add new entry and verify counters\n        r XADD mystream IDMP p1 \"req-4\" * field \"v4\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply iids-tracked]\n        assert_equal 4 [dict get $reply iids-added]\n    }\n\n    test {XINFO STREAM iids-added persists in RDB} {\n        r DEL mystream\n        \n        # Add entries with IDMP to build up iids-added counter\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        r XADD mystream IDMP p1 \"req-3\" * field \"v3\"\n        \n        # Set small MAXSIZE to cause eviction\n        r XCFGSET mystream IDMP-MAXSIZE 2\n        \n        # Add more to trigger eviction (iids-tracked will be 2, but iids-added=5)\n        r XADD mystream IDMP p1 \"req-4\" * field \"v4\"\n        r XADD mystream IDMP p1 \"req-5\" * field \"v5\"\n        \n        # Verify values before save\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-tracked]\n        assert_equal 5 [dict get $reply iids-added]\n        \n        # Save and restart\n        r SAVE\n        restart_server 0 true false\n        \n        # Verify iids-added persisted after restart\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply iids-tracked]\n        assert_equal 5 [dict get $reply iids-added]\n    } {} {external:skip}\n\n    test {XINFO STREAM returns pids-tracked field} {\n        r DEL mystream\n        \n        # Create stream without IDMP\n        r XADD mystream * field \"value\"\n        \n        # Verify initial pids-tracked is 0\n        set reply [r XINFO STREAM mystream]\n        assert_equal 0 [dict get $reply pids-tracked]\n        \n        # Add entry with first producer\n        r XADD mystream IDMP 
p1 \"req-1\" * field \"v1\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply pids-tracked]\n        \n        # Add entry with same producer - pids-tracked should stay 1\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 1 [dict get $reply pids-tracked]\n        \n        # Add entry with second producer\n        r XADD mystream IDMP p2 \"req-1\" * field \"v3\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        \n        # Add entry with third producer\n        r XADD mystream IDMP producer3 \"req-1\" * field \"v4\"\n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply pids-tracked]\n    }\n\n    test {XINFO STREAM FULL returns pids-tracked field} {\n        r DEL mystream\n        \n        # Add entries with multiple producers\n        r XADD mystream IDMP prod-a \"req-1\" * field \"v1\"\n        r XADD mystream IDMP prod-b \"req-1\" * field \"v2\"\n        r XADD mystream IDMP prod-c \"req-1\" * field \"v3\"\n        \n        # Verify FULL mode also returns pids-tracked\n        set reply [r XINFO STREAM mystream FULL]\n        assert_equal 3 [dict get $reply pids-tracked]\n    }\n\n    test {XINFO STREAM iids-tracked counts across all producers} {\n        r DEL mystream\n        \n        # Add entries with multiple producers\n        r XADD mystream IDMP p1 \"req-1\" * field \"v1\"\n        r XADD mystream IDMP p1 \"req-2\" * field \"v2\"\n        r XADD mystream IDMP p2 \"req-1\" * field \"v3\"\n        r XADD mystream IDMP p2 \"req-2\" * field \"v4\"\n        r XADD mystream IDMP p2 \"req-3\" * field \"v5\"\n        \n        # iids-tracked should count all IIDs across all producers (2 + 3 = 5)\n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        assert_equal 5 [dict get $reply iids-tracked]\n        
assert_equal 5 [dict get $reply iids-added]\n        \n        # Duplicates should not increment counters\n        r XADD mystream IDMP p1 \"req-1\" * field \"dup\"\n        r XADD mystream IDMP p2 \"req-2\" * field \"dup\"\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 2 [dict get $reply pids-tracked]\n        assert_equal 5 [dict get $reply iids-tracked]\n        assert_equal 5 [dict get $reply iids-added]\n    }\n\n    test {XINFO STREAM returns idmp-duration and idmp-maxsize fields} {\n        r DEL mystream\n        \n        # Create stream with default IDMP config\n        r XADD mystream IDMP p1 \"req-1\" * field \"value1\"\n        \n        # Verify default values\n        set reply [r XINFO STREAM mystream]\n        assert [dict exists $reply idmp-duration]\n        assert [dict exists $reply idmp-maxsize]\n        \n        # Get default values from server config\n        set default_duration [lindex [r CONFIG GET stream-idmp-duration] 1]\n        set default_maxsize [lindex [r CONFIG GET stream-idmp-maxsize] 1]\n        \n        assert_equal $default_duration [dict get $reply idmp-duration]\n        assert_equal $default_maxsize [dict get $reply idmp-maxsize]\n        \n        # Change IDMP config\n        r XCFGSET mystream IDMP-DURATION 300 IDMP-MAXSIZE 50\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 300 [dict get $reply idmp-duration]\n        assert_equal 50 [dict get $reply idmp-maxsize]\n        \n        # Also verify FULL mode returns the same fields\n        set reply_full [r XINFO STREAM mystream FULL]\n        assert_equal 300 [dict get $reply_full idmp-duration]\n        assert_equal 50 [dict get $reply_full idmp-maxsize]\n        \n        # Change only DURATION\n        r XCFGSET mystream IDMP-DURATION 600\n        set reply [r XINFO STREAM mystream]\n        assert_equal 600 [dict get $reply idmp-duration]\n        assert_equal 50 [dict get $reply idmp-maxsize]\n        \n   
     # Change only MAXSIZE\n        r XCFGSET mystream IDMP-MAXSIZE 100\n        set reply [r XINFO STREAM mystream]\n        assert_equal 600 [dict get $reply idmp-duration]\n        assert_equal 100 [dict get $reply idmp-maxsize]\n    }\n\n    test {XCFGSET IDMP-MAXSIZE wraparound keeps last 8 entries} {\n        r DEL mystream\n        \n        # Create stream and set MAXSIZE to 8\n        r XADD mystream IDMP p1 \"init\" * field \"init\"\n        r XCFGSET mystream IDMP-MAXSIZE 8 IDMP-DURATION 60\n        \n        # Add 100 unique entries and store their IDs in a list\n        set id_list {}\n        for {set i 1} {$i <= 100} {incr i} {\n            lappend id_list [r XADD mystream IDMP p1 \"req-$i\" * field \"v$i\"]\n        }\n        \n        # Verify the last 8 entries (93-100) still deduplicate\n        for {set i 93} {$i <= 100} {incr i} {\n            set idx [expr {$i - 1}]\n            set original_id [lindex $id_list $idx]\n            set dup_id [r XADD mystream IDMP p1 \"req-$i\" * field \"dup\"]\n            assert_equal $original_id $dup_id\n        }\n        \n        # Verify earlier entries (1-92) are evicted and create new entries\n        for {set i 1} {$i <= 92} {incr i} {\n            set idx [expr {$i - 1}]\n            set original_id [lindex $id_list $idx]\n            set new_id [r XADD mystream IDMP p1 \"req-$i\" * field \"new\"]\n            assert {$new_id ne $original_id}\n        }\n        \n        # Total entries: init + 100 original + 92 new = 193\n        assert_equal 193 [r XLEN mystream]\n    }\n\n    test {XCFGSET clears all producer histories} {\n        r DEL mystream\n        \n        # Add entries with multiple producers\n        set id1 [r XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n        set id2 [r XADD mystream IDMP p2 \"req-1\" * field \"v2\"]\n        set id3 [r XADD mystream IDMP p3 \"req-1\" * field \"v3\"]\n        \n        set reply [r XINFO STREAM mystream]\n        assert_equal 3 [dict get $reply 
pids-tracked]\n        assert_equal 3 [dict get $reply iids-tracked]\n        \n        # CFGSET clears all histories\n        r XCFGSET mystream IDMP-DURATION 60\n        \n        set reply [r XINFO STREAM mystream]\n        # pids-tracked should be 0 after clearing\n        assert_equal 0 [dict get $reply pids-tracked]\n        assert_equal 0 [dict get $reply iids-tracked]\n        # iids-added is lifetime counter, should persist\n        assert_equal 3 [dict get $reply iids-added]\n        \n        # Can now add \"duplicates\" since history is cleared\n        set new_id1 [r XADD mystream IDMP p1 \"req-1\" * field \"new\"]\n        assert {$id1 ne $new_id1}\n    }\n\n    test {CONFIG SET stream-idmp-duration and stream-idmp-maxsize validation} {\n        # Test maximum value rejection for duration (max: 86400)\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-duration 86401}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-duration 100000}\n        \n        # Test maximum value rejection for maxsize (max: 10000)\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-maxsize 10001}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-maxsize 50000}\n        \n        # Test minimum value rejection for duration (min: 1)\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-duration 0}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-duration -1}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-duration -100}\n        \n        # Test minimum value rejection for maxsize (min: 1)\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-maxsize 0}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-maxsize -1}\n        assert_error \"*must be between*\" {r CONFIG SET stream-idmp-maxsize -100}\n        \n        # Test exact boundary values work correctly\n        assert_equal \"OK\" [r CONFIG SET 
stream-idmp-duration 86400]\n        assert_equal \"86400\" [lindex [r CONFIG GET stream-idmp-duration] 1]\n        \n        assert_equal \"OK\" [r CONFIG SET stream-idmp-maxsize 10000]\n        assert_equal \"10000\" [lindex [r CONFIG GET stream-idmp-maxsize] 1]\n        \n        # Test minimum boundary values work (min: 1)\n        assert_equal \"OK\" [r CONFIG SET stream-idmp-duration 1]\n        assert_equal \"1\" [lindex [r CONFIG GET stream-idmp-duration] 1]\n        \n        assert_equal \"OK\" [r CONFIG SET stream-idmp-maxsize 1]\n        assert_equal \"1\" [lindex [r CONFIG GET stream-idmp-maxsize] 1]\n        \n        # Test valid intermediate values\n        assert_equal \"OK\" [r CONFIG SET stream-idmp-duration 100]\n        assert_equal \"OK\" [r CONFIG SET stream-idmp-maxsize 100]\n        \n        # Reset to defaults\n        assert_equal \"OK\" [r CONFIG SET stream-idmp-duration 100]\n        assert_equal \"OK\" [r CONFIG SET stream-idmp-maxsize 100]\n    }\n\n    test {XTRIM with MINID option} {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n        r XADD mystream 4-0 f v\n        r XADD mystream 5-0 f v\n        r XTRIM mystream MINID = 3-0\n        assert_equal [r XRANGE mystream - +] {{3-0 {f v}} {4-0 {f v}} {5-0 {f v}}}\n    }\n\n    test {XTRIM with MINID option, big delta from master record} {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 1641544570597-0 f v\n        r XADD mystream 1641544570597-1 f v\n        r XTRIM mystream MINID 1641544570597-0\n        assert_equal [r XRANGE mystream - +] {{1641544570597-0 {f v}} {1641544570597-1 {f v}}}\n    }\n\n    proc insert_into_stream_key {key {count 10000}} {\n        r multi\n        for {set j 0} {$j < $count} {incr j} {\n            # From time to time insert an entry with a different set\n            # of fields in order to stress the stream compression code.\n            if 
{rand() < 0.9} {\n                r XADD $key * item $j\n            } else {\n                r XADD $key * item $j otherfield foo\n            }\n        }\n        r exec\n    }\n\n    test {XADD mass insertion and XLEN} {\n        r DEL mystream\n        insert_into_stream_key mystream\n\n        set items [r XRANGE mystream - +]\n        for {set j 0} {$j < 10000} {incr j} {\n            assert {[lrange [lindex $items $j 1] 0 1] eq [list item $j]}\n        }\n        assert {[r xlen mystream] == $j}\n    }\n\n    test {XADD with ID 0-0} {\n        r DEL otherstream\n        catch {r XADD otherstream 0-0 k v} err\n        assert {[r EXISTS otherstream] == 0}\n    }\n\n    test {XADD with LIMIT deletes no more entries than the limit} {\n        r del yourstream\n        for {set j 0} {$j < 3} {incr j} {\n            r XADD yourstream * xitem v\n        }\n        r XADD yourstream MAXLEN ~ 0 limit 1 * xitem v\n        assert {[r XLEN yourstream] == 4}\n    }\n\n    test {XRANGE COUNT works as expected} {\n        assert {[llength [r xrange mystream - + COUNT 10]] == 10}\n    }\n\n    test {XREVRANGE COUNT works as expected} {\n        assert {[llength [r xrevrange mystream + - COUNT 10]] == 10}\n    }\n\n    test {XRANGE can be used to iterate the whole stream} {\n        set last_id \"-\"\n        set j 0\n        while 1 {\n            set elements [r xrange mystream $last_id + COUNT 100]\n            if {[llength $elements] == 0} break\n            foreach e $elements {\n                assert {[lrange [lindex $e 1] 0 1] eq [list item $j]}\n                incr j\n            }\n            set last_id [streamNextID [lindex $elements end 0]]\n        }\n        assert {$j == 10000}\n    }\n\n    test {XREVRANGE returns the reverse of XRANGE} {\n        assert {[r xrange mystream - +] == [lreverse [r xrevrange mystream + -]]}\n    }\n\n    test {XRANGE exclusive ranges} {\n        set ids {0-1 0-18446744073709551615 1-0 42-0 42-42\n                 
18446744073709551615-18446744073709551614\n                 18446744073709551615-18446744073709551615}\n        set total [llength $ids]\n        r multi\n        r DEL vipstream\n        foreach id $ids {\n            r XADD vipstream $id foo bar\n        }\n        r exec\n        assert {[llength [r xrange vipstream - +]] == $total}\n        assert {[llength [r xrange vipstream ([lindex $ids 0] +]] == $total-1}\n        assert {[llength [r xrange vipstream - ([lindex $ids $total-1]]] == $total-1}\n        assert {[llength [r xrange vipstream (0-1 (1-0]] == 1}\n        assert {[llength [r xrange vipstream (1-0 (42-42]] == 1}\n        catch {r xrange vipstream (- +} e\n        assert_match {ERR*} $e\n        catch {r xrange vipstream - (+} e\n        assert_match {ERR*} $e\n        catch {r xrange vipstream (18446744073709551615-18446744073709551615 +} e\n        assert_match {ERR*} $e\n        catch {r xrange vipstream - (0-0} e\n        assert_match {ERR*} $e\n    }\n\n    test {XREAD with non empty stream} {\n        set res [r XREAD COUNT 1 STREAMS mystream 0-0]\n        assert {[lrange [lindex $res 0 1 0 1] 0 1] eq {item 0}}\n    }\n\n    test {Non blocking XREAD with empty streams} {\n        set res [r XREAD STREAMS s1{t} s2{t} 0-0 0-0]\n        assert {$res eq {}}\n    }\n\n    test {XREAD with non empty second stream} {\n        insert_into_stream_key mystream{t}\n        set res [r XREAD COUNT 1 STREAMS nostream{t} mystream{t} 0-0 0-0]\n        assert {[lindex $res 0 0] eq {mystream{t}}}\n        assert {[lrange [lindex $res 0 1 0 1] 0 1] eq {item 0}}\n    }\n\n    test {Blocking XREAD waiting new data} {\n        r XADD s2{t} * old abcd1234\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ $ $\n        wait_for_blocked_client\n        r XADD s2{t} * new abcd1234\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s2{t}}}\n        assert {[lindex $res 0 1 0 1] eq {new abcd1234}}\n        
$rd close\n    }\n\n    test {Blocking XREAD waiting old data} {\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 20000 STREAMS s1{t} s2{t} s3{t} $ 0-0 $\n        r XADD s2{t} * foo abcd1234\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s2{t}}}\n        assert {[lindex $res 0 1 0 1] eq {old abcd1234}}\n        $rd close\n    }\n\n    test {Blocking XREAD will not reply with an empty array} {\n        r del s1\n        r XADD s1 666 f v\n        r XADD s1 667 f2 v2\n        r XDEL s1 667\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 10 STREAMS s1 666\n        after 20\n        assert {[$rd read] == {}} ;# before the fix, client didn't even block, but was served synchronously with {s1 {}}\n        $rd close\n    }\n\n    test \"Blocking XREAD for stream that ran dry (issue #5299)\" {\n        set rd [redis_deferring_client]\n\n        # Add an entry, then delete it; now the stream's last_id is 666.\n        r DEL mystream\n        r XADD mystream 666 key value\n        r XDEL mystream 666\n\n        # Pass an ID smaller than the stream's last_id, released on timeout.\n        $rd XREAD BLOCK 10 STREAMS mystream 665\n        assert_equal [$rd read] {}\n\n        # Throw an error if the ID is equal to or smaller than the last_id.\n        assert_error ERR*equal*smaller* {r XADD mystream 665 key value}\n        assert_error ERR*equal*smaller* {r XADD mystream 666 key value}\n\n        # Enter the blocking state, then get released by the new entry.\n        $rd XREAD BLOCK 0 STREAMS mystream 665\n        wait_for_blocked_clients_count 1\n        r XADD mystream 667 key value\n        assert_equal [$rd read] {{mystream {{667-0 {key value}}}}}\n\n        $rd close\n    }\n\n    test {XREAD last element from non-empty stream} {\n        # should return last entry\n\n        # add 3 entries to a stream\n        r DEL lestream\n        r XADD lestream 1-0 k1 v1\n        r XADD lestream 2-0 k2 v2\n        r XADD lestream 3-0 k3 
v3\n\n        # read the last entry\n        set res [r XREAD STREAMS lestream +]\n\n        # verify it's the last entry\n        assert_equal $res {{lestream {{3-0 {k3 v3}}}}}\n\n        # add two more entries, with MAX_UINT64 as the sequence number of the last one\n        r XADD lestream 3-18446744073709551614 k4 v4\n        r XADD lestream 3-18446744073709551615 k5 v5\n\n        # read the new last entry\n        set res [r XREAD STREAMS lestream +]\n\n        # verify it's the last entry\n        assert_equal $res {{lestream {{3-18446744073709551615 {k5 v5}}}}}\n    }\n\n    test {XREAD last element from empty stream} {\n        # should return nil\n\n        # make sure the stream is empty\n        r DEL lestream\n\n        # read last entry and verify nil is received\n        assert_equal [r XREAD STREAMS lestream +] {}\n\n        # add an element to the stream, then delete it\n        r XADD lestream 1-0 k1 v1\n        r XDEL lestream 1-0\n\n        # verify nil is still received when reading last entry\n        assert_equal [r XREAD STREAMS lestream +] {}\n\n        # case when the stream was created empty\n\n        # make sure the stream is not initialized\n        r DEL lestream\n\n        # create empty stream with XGROUP CREATE\n        r XGROUP CREATE lestream legroup $ MKSTREAM\n\n        # verify nil is received when reading last entry\n        assert_equal [r XREAD STREAMS lestream +] {}\n    }\n\n    test {XREAD last element blocking from empty stream} {\n        # should block until a new entry is available\n\n        # make sure there is no stream\n        r DEL lestream\n\n        # read last entry from stream, blocking\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 20000 STREAMS lestream +\n        wait_for_blocked_client\n\n        # add an entry to the stream\n        r XADD lestream 1-0 k1 v1\n\n        # read and verify result\n        set res [$rd read]\n        assert_equal $res {{lestream {{1-0 {k1 v1}}}}}\n        $rd close\n    
}\n\n    test {XREAD last element blocking from non-empty stream} {\n        # should return last element immediately, w/o blocking\n\n        # add 3 entries to a stream\n        r DEL lestream\n        r XADD lestream 1-0 k1 v1\n        r XADD lestream 2-0 k2 v2\n        r XADD lestream 3-0 k3 v3\n\n        # read the last entry\n        set res [r XREAD BLOCK 1000000 STREAMS lestream +]\n\n        # verify it's the last entry\n        assert_equal $res {{lestream {{3-0 {k3 v3}}}}}\n    }\n\n    test {XREAD last element from multiple streams} {\n        # should return last element only from non-empty streams\n\n        # add 3 entries to one stream\n        r DEL \"\\{lestream\\}1\"\n        r XADD \"\\{lestream\\}1\" 1-0 k1 v1\n        r XADD \"\\{lestream\\}1\" 2-0 k2 v2\n        r XADD \"\\{lestream\\}1\" 3-0 k3 v3\n\n        # add 3 entries to another stream\n        r DEL \"\\{lestream\\}2\"\n        r XADD \"\\{lestream\\}2\" 1-0 k1 v4\n        r XADD \"\\{lestream\\}2\" 2-0 k2 v5\n        r XADD \"\\{lestream\\}2\" 3-0 k3 v6\n\n        # read last element from 3 streams (2 with entries, 1 non-existent)\n        # verify the last elements of the two existing streams were returned\n        set res [r XREAD STREAMS \"\\{lestream\\}1\" \"\\{lestream\\}2\" \"\\{lestream\\}3\" + + +]\n        assert_equal $res {{{{lestream}1} {{3-0 {k3 v3}}}} {{{lestream}2} {{3-0 {k3 v6}}}}}\n    }\n\n    test {XREAD last element with count > 1} {\n        # Should return only the last element - COUNT has no effect here\n\n        # add 3 entries to a stream\n        r DEL lestream\n        r XADD lestream 1-0 k1 v1\n        r XADD lestream 2-0 k2 v2\n        r XADD lestream 3-0 k3 v3\n\n        # read the last entry\n        set res [r XREAD COUNT 3 STREAMS lestream +]\n\n        # verify only last entry was read, even though COUNT > 1\n        assert_equal $res {{lestream {{3-0 {k3 v3}}}}}\n    }\n\n    test \"XREAD: read last element after XDEL (issue #13628)\" {\n        
# Should return actual last element after XDEL of current last element\n\n        # Add 2 entries to a stream and delete last one\n        r DEL stream\n        r XADD stream 1-0 f 1\n        r XADD stream 2-0 f 2\n        r XDEL stream 2-0\n\n        # Read last entry\n        set res [r XREAD STREAMS stream +]\n\n        # Verify the last entry was read\n        assert_equal $res {{stream {{1-0 {f 1}}}}}\n    }\n\n    test \"XREAD: XADD + DEL should not awake client\" {\n        set rd [redis_deferring_client]\n        r del s1\n        $rd XREAD BLOCK 20000 STREAMS s1 $\n        wait_for_blocked_clients_count 1\n        r multi\n        r XADD s1 * old abcd1234\n        r DEL s1\n        r exec\n        r XADD s1 * new abcd1234\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s1}}\n        assert {[lindex $res 0 1 0 1] eq {new abcd1234}}\n        $rd close\n    }\n\n    test \"XREAD: XADD + DEL + LPUSH should not awake client\" {\n        set rd [redis_deferring_client]\n        r del s1\n        $rd XREAD BLOCK 20000 STREAMS s1 $\n        wait_for_blocked_clients_count 1\n        r multi\n        r XADD s1 * old abcd1234\n        r DEL s1\n        r LPUSH s1 foo bar\n        r exec\n        r DEL s1\n        r XADD s1 * new abcd1234\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s1}}\n        assert {[lindex $res 0 1 0 1] eq {new abcd1234}}\n        $rd close\n    }\n\n    test {XREAD with same stream name multiple times should work} {\n        r XADD s2 * old abcd1234\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 20000 STREAMS s2 s2 s2 $ $ $\n        wait_for_blocked_clients_count 1\n        r XADD s2 * new abcd1234\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s2}}\n        assert {[lindex $res 0 1 0 1] eq {new abcd1234}}\n        $rd close\n    }\n\n    test {XREAD + multiple XADD inside transaction} {\n        r XADD s2 * old abcd1234\n        set rd [redis_deferring_client]\n    
    $rd XREAD BLOCK 20000 STREAMS s2 s2 s2 $ $ $\n        wait_for_blocked_clients_count 1\n        r MULTI\n        r XADD s2 * field one\n        r XADD s2 * field two\n        r XADD s2 * field three\n        r EXEC\n        set res [$rd read]\n        assert {[lindex $res 0 0] eq {s2}}\n        assert {[lindex $res 0 1 0 1] eq {field one}}\n        assert {[lindex $res 0 1 1 1] eq {field two}}\n        $rd close\n    }\n\n    test {XDEL basic test} {\n        r del somestream\n        r xadd somestream * foo value0\n        set id [r xadd somestream * foo value1]\n        r xadd somestream * foo value2\n        r xdel somestream $id\n        assert {[r xlen somestream] == 2}\n        set result [r xrange somestream - +]\n        assert {[lindex $result 0 1 1] eq {value0}}\n        assert {[lindex $result 1 1 1] eq {value2}}\n    }\n\n    test {XDEL multiple IDs test} {\n        r del somestream\n        r xadd somestream 1-1 a 1\n        r xadd somestream 1-2 b 2\n        r xadd somestream 1-3 c 3\n        r xadd somestream 1-4 d 4\n        r xadd somestream 1-5 e 5\n        assert {[r xlen somestream] == 5}\n        assert {[r xdel somestream 1-1 1-4 1-5 2-1] == 3}\n        assert {[r xlen somestream] == 2}\n        set result [r xrange somestream - +]\n        assert {[dict get [lindex $result 0 1] b] eq {2}}\n        assert {[dict get [lindex $result 1 1] c] eq {3}}\n    }\n    # Here the idea is to check the consistency of the stream data structure\n    # as we remove all the elements down to zero elements.\n    test {XDEL fuzz test} {\n        r del somestream\n        set ids {}\n        set x 0; # Length of the stream\n        while 1 {\n            lappend ids [r xadd somestream * item $x]\n            incr x\n            # Add enough elements to have a few radix tree nodes inside the stream.\n            if {[dict get [r xinfo stream somestream] radix-tree-keys] > 20} break\n        }\n\n        # Now remove all the elements till we reach an empty 
stream\n        # and after every deletion, check that the stream is sane enough\n        # to report the right number of elements with XRANGE: this will also\n        # force accessing the whole data structure to check sanity.\n        assert {[r xlen somestream] == $x}\n\n        # We want to remove elements in random order to really test the\n        # implementation in a better way.\n        set ids [lshuffle $ids]\n        foreach id $ids {\n            assert {[r xdel somestream $id] == 1}\n            incr x -1\n            assert {[r xlen somestream] == $x}\n            # The test would be too slow calling XRANGE for every iteration.\n            # Do it every 100 removals.\n            if {$x % 100 == 0} {\n                set res [r xrange somestream - +]\n                assert {[llength $res] == $x}\n            }\n        }\n    }\n\n    test {XRANGE fuzzing} {\n        set items [r XRANGE mystream{t} - +]\n        set low_id [lindex $items 0 0]\n        set high_id [lindex $items end 0]\n        for {set j 0} {$j < 100} {incr j} {\n            set start [streamRandomID $low_id $high_id]\n            set end [streamRandomID $low_id $high_id]\n            set range [r xrange mystream{t} $start $end]\n            set tcl_range [streamSimulateXRANGE $items $start $end]\n            if {$range ne $tcl_range} {\n                puts \"*** WARNING *** - XRANGE fuzzing mismatch: $start - $end\"\n                puts \"---\"\n                puts \"XRANGE: '$range'\"\n                puts \"---\"\n                puts \"TCL: '$tcl_range'\"\n                puts \"---\"\n                fail \"XRANGE fuzzing failed, check logs for details\"\n            }\n        }\n    }\n\n    test {XREVRANGE regression test for issue #5006} {\n        # Add non-compressed entries\n        r xadd teststream 1234567891230 key1 value1\n        r xadd teststream 1234567891240 key2 value2\n        r xadd teststream 1234567891250 key3 value3\n\n        # Add SAMEFIELD compressed 
entries\n        r xadd teststream2 1234567891230 key1 value1\n        r xadd teststream2 1234567891240 key1 value2\n        r xadd teststream2 1234567891250 key1 value3\n\n        assert_equal [r xrevrange teststream 1234567891245 -] {{1234567891240-0 {key2 value2}} {1234567891230-0 {key1 value1}}}\n\n        assert_equal [r xrevrange teststream2 1234567891245 -] {{1234567891240-0 {key1 value2}} {1234567891230-0 {key1 value1}}}\n    }\n\n    test {XREAD streamID edge (no-blocking)} {\n        r del x\n        r XADD x 1-1 f v\n        r XADD x 1-18446744073709551615 f v\n        r XADD x 2-1 f v\n        set res [r XREAD BLOCK 0 STREAMS x 1-18446744073709551615]\n        assert {[lindex $res 0 1 0] == {2-1 {f v}}}\n    }\n\n    test {XREAD streamID edge (blocking)} {\n        r del x\n        set rd [redis_deferring_client]\n        $rd XREAD BLOCK 0 STREAMS x 1-18446744073709551615\n        wait_for_blocked_clients_count 1\n        r XADD x 1-1 f v\n        r XADD x 1-18446744073709551615 f v\n        r XADD x 2-1 f v\n        set res [$rd read]\n        assert {[lindex $res 0 1 0] == {2-1 {f v}}}\n        $rd close\n    }\n\n    test {XADD streamID edge} {\n        r del x\n        r XADD x 2577343934890-18446744073709551615 f v ;# we need the timestamp to be in the future\n        r XADD x * f2 v2\n        assert_equal [r XRANGE x - +] {{2577343934890-18446744073709551615 {f v}} {2577343934891-0 {f2 v2}}}\n    }\n\n    test {XTRIM with MAXLEN option basic test} {\n        r DEL mystream\n        for {set j 0} {$j < 1000} {incr j} {\n            if {rand() < 0.9} {\n                r XADD mystream * xitem $j\n            } else {\n                r XADD mystream * yitem $j\n            }\n        }\n        r XTRIM mystream MAXLEN 666\n        assert {[r XLEN mystream] == 666}\n        r XTRIM mystream MAXLEN = 555\n        assert {[r XLEN mystream] == 555}\n        r XTRIM mystream MAXLEN ~ 444\n        assert {[r XLEN mystream] == 500}\n        r XTRIM 
mystream MAXLEN ~ 400\n        assert {[r XLEN mystream] == 400}\n    }\n\n    test {XADD with LIMIT consecutive calls} {\n        r del mystream\n        r config set stream-node-max-entries 10\n        for {set j 0} {$j < 100} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XADD mystream MAXLEN ~ 55 LIMIT 30 * xitem v\n        assert {[r xlen mystream] == 71}\n        r XADD mystream MAXLEN ~ 55 LIMIT 30 * xitem v\n        assert {[r xlen mystream] == 62}\n        r config set stream-node-max-entries 100\n    }\n\n    test {XTRIM with ~ is limited} {\n        r del mystream\n        r config set stream-node-max-entries 1\n        for {set j 0} {$j < 102} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XTRIM mystream MAXLEN ~ 1\n        assert {[r xlen mystream] == 2}\n        r config set stream-node-max-entries 100\n    }\n\n    test {XTRIM without ~ is not limited} {\n        r del mystream\n        r config set stream-node-max-entries 1\n        for {set j 0} {$j < 102} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XTRIM mystream MAXLEN 1\n        assert {[r xlen mystream] == 1}\n        r config set stream-node-max-entries 100\n    }\n\n    test {XTRIM without ~ and with LIMIT} {\n        r del mystream\n        r config set stream-node-max-entries 1\n        for {set j 0} {$j < 102} {incr j} {\n            r XADD mystream * xitem v\n        }\n        assert_error ERR* {r XTRIM mystream MAXLEN 1 LIMIT 30}\n    }\n\n    test {XTRIM with LIMIT deletes no more entries than the limit} {\n        r del mystream\n        r config set stream-node-max-entries 2\n        for {set j 0} {$j < 3} {incr j} {\n            r XADD mystream * xitem v\n        }\n        assert {[r XTRIM mystream MAXLEN ~ 0 LIMIT 1] == 0}\n        assert {[r XTRIM mystream MAXLEN ~ 0 LIMIT 2] == 2}\n    }\n\n    test {XTRIM with approx and ACKED deletes entries correctly} {\n        # This test verifies that when using 
approx trim (~) with ACKED strategy,\n        # if the first node cannot be removed (has unacked messages), we should\n        # continue to check subsequent nodes that might be eligible for removal.\n        r DEL mystream\n        set origin_max_entries [config_get_set stream-node-max-entries 2]\n\n        # Create 5 entries in 3 nodes (2 entries per node)\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n        r XADD mystream 4-0 f v\n        r XADD mystream 5-0 f v\n\n        # Create a consumer group and read all messages\n        r XGROUP CREATE mystream mygroup 0\n        r XREADGROUP GROUP mygroup consumer1 STREAMS mystream >\n\n        # Acknowledge messages: 1-0, 2-0 (first node), and 4-0 (second node)\n        r XACK mystream mygroup 1-0 2-0 4-0\n\n        # XTRIM MINID ~ 6-0 ACKED should remove:\n        # Total 3 entries removed (1-0, 2-0, 4-0), 2 unacked entries remain (3-0, 5-0)\n        assert_equal 3 [r XTRIM mystream MINID ~ 6-0 ACKED]\n        assert_equal 2 [r XLEN mystream]\n        assert_equal {{3-0 {f v}} {5-0 {f v}}} [r XRANGE mystream - +]\n\n        r config set stream-node-max-entries $origin_max_entries\n    }\n\n    test {XTRIM with approx and DELREF deletes entries correctly} {\n        # Similar test but with DELREF strategy\n        r DEL mystream\n        set origin_max_entries [config_get_set stream-node-max-entries 2]\n\n        # Create 4 entries in 2 nodes\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n        r XADD mystream 4-0 f v\n\n        # Create a consumer group and read all messages\n        r XGROUP CREATE mystream mygroup 0\n        r XREADGROUP GROUP mygroup consumer1 STREAMS mystream >\n\n        # With XTRIM MINID ~ 5-0 DELREF, all eligible nodes should be trimmed\n        # and PEL entries should be cleaned up\n        assert_equal 4 [r XTRIM mystream MINID ~ 5-0 DELREF]\n        assert_equal 0 [r XLEN 
mystream]\n        # PEL should be empty after DELREF\n        assert_equal {0 {} {} {}} [r XPENDING mystream mygroup]\n\n        r config set stream-node-max-entries $origin_max_entries\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes}} {\n    test {XADD with MAXLEN > xlen can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XADD mystream MAXLEN 200 * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n        r debug loadaof\n        r XADD mystream * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes}} {\n    test {XADD with MINID > lastid can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            set id [expr {$j+1}]\n            r XADD mystream $id xitem v\n        }\n        r XADD mystream MINID 1 * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n        r debug loadaof\n        r XADD mystream * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes stream-node-max-entries 100}} {\n    test {XADD with ~ MAXLEN can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XADD mystream MAXLEN ~ $j * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n        r config set stream-node-max-entries 1\n        r debug loadaof\n        r XADD mystream * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes stream-node-max-entries 10}} {\n    test {XADD with ~ MAXLEN and LIMIT can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XADD mystream MAXLEN 
~ 55 LIMIT 30 * xitem v\n        assert {[r xlen mystream] == 71}\n        r config set stream-node-max-entries 1\n        r debug loadaof\n        r XADD mystream * xitem v\n        assert {[r xlen mystream] == 72}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes stream-node-max-entries 100}} {\n    test {XADD with ~ MINID can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            set id [expr {$j+1}]\n            r XADD mystream $id xitem v\n        }\n        r XADD mystream MINID ~ $j * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n        r config set stream-node-max-entries 1\n        r debug loadaof\n        r XADD mystream * xitem v\n        incr j\n        assert {[r xlen mystream] == $j}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes stream-node-max-entries 10}} {\n    test {XADD with ~ MINID and LIMIT can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            set id [expr {$j+1}]\n            r XADD mystream $id xitem v\n        }\n        r XADD mystream MINID ~ 55 LIMIT 30 * xitem v\n        assert {[r xlen mystream] == 71}\n        r config set stream-node-max-entries 1\n        r debug loadaof\n        r XADD mystream * xitem v\n        assert {[r xlen mystream] == 72}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes stream-node-max-entries 10}} {\n    test {XTRIM with ~ MAXLEN can propagate correctly} {\n        for {set j 0} {$j < 100} {incr j} {\n            r XADD mystream * xitem v\n        }\n        r XTRIM mystream MAXLEN ~ 85\n        assert {[r xlen mystream] == 90}\n        r config set stream-node-max-entries 1\n        r debug loadaof\n        r XADD mystream * xitem v\n        incr j\n        assert {[r xlen mystream] == 91}\n    }\n}\n\nstart_server {tags {\"stream\"}} {\n    test {XADD can CREATE an empty stream} {\n        r XADD mystream MAXLEN 0 * a b\n      
  assert {[dict get [r xinfo stream mystream] length] == 0}\n    }\n\n    test {XSETID can set a specific ID} {\n        r XSETID mystream \"200-0\"\n        set reply [r XINFO stream mystream]\n        assert_equal [dict get $reply last-generated-id] \"200-0\"\n        assert_equal [dict get $reply entries-added] 1\n    }\n\n    test {XSETID cannot SETID with smaller ID} {\n        r XADD mystream * a b\n        catch {r XSETID mystream \"1-1\"} err\n        r XADD mystream MAXLEN 0 * a b\n        set err\n    } {ERR *smaller*}\n\n    test {XSETID cannot SETID on non-existent key} {\n        catch {r XSETID stream 1-1} err\n        set _ $err\n    } {ERR no such key}\n\n    test {XSETID cannot run with an offset but without a maximal tombstone} {\n        catch {r XSETID stream 1-1 0} err\n        set _ $err\n    } {ERR syntax error}\n\n    test {XSETID cannot run with a maximal tombstone but without an offset} {\n        catch {r XSETID stream 1-1 0-0} err\n        set _ $err\n    } {ERR syntax error}\n\n    test {XSETID errors on negative offset} {\n        catch {r XSETID stream 1-1 ENTRIESADDED -1 MAXDELETEDID 0-0} err\n        set _ $err\n    } {ERR *must be positive}\n\n    test {XSETID cannot set the maximal tombstone with larger ID} {\n        r DEL x\n        r XADD x 1-0 a b\n        \n        catch {r XSETID x \"1-0\" ENTRIESADDED 1 MAXDELETEDID \"2-0\" } err\n        r XADD mystream MAXLEN 0 * a b\n        set err\n    } {ERR *smaller*}\n\n    test {XSETID cannot set the offset to less than the length} {\n        r DEL x\n        r XADD x 1-0 a b\n        \n        catch {r XSETID x \"1-0\" ENTRIESADDED 0 MAXDELETEDID \"0-0\" } err\n        r XADD mystream MAXLEN 0 * a b\n        set err\n    } {ERR *smaller*}\n\n    test {XSETID cannot set smaller ID than current MAXDELETEDID} {\n        r DEL x\n        r XADD x 1-1 a 1\n        r XADD x 1-2 b 2\n        r XADD x 1-3 c 3\n        r XDEL x 1-2\n        r XDEL x 1-3\n        set reply [r XINFO stream 
x]\n        assert_equal [dict get $reply max-deleted-entry-id] \"1-3\"\n        catch {r XSETID x \"1-2\" } err\n        set err\n    } {ERR *smaller*}\n}\n\nstart_server {tags {\"stream\"}} {\n    test {XADD advances the entries-added counter and sets the recorded-first-entry-id} {\n        r DEL x\n        r XADD x 1-0 data a\n\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply entries-added] 1\n        assert_equal [dict get $reply recorded-first-entry-id] \"1-0\"\n\n        r XADD x 2-0 data a\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply entries-added] 2\n        assert_equal [dict get $reply recorded-first-entry-id] \"1-0\"\n    }\n\n    test {XDEL/TRIM are reflected by recorded first entry} {\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data a\n        r XADD x 3-0 data a\n        r XADD x 4-0 data a\n        r XADD x 5-0 data a\n\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply entries-added] 5\n        assert_equal [dict get $reply recorded-first-entry-id] \"1-0\"\n\n        r XDEL x 2-0\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply recorded-first-entry-id] \"1-0\"\n\n        r XDEL x 1-0\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply recorded-first-entry-id] \"3-0\"\n\n        r XTRIM x MAXLEN = 2\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply recorded-first-entry-id] \"4-0\"\n    }\n\n    test {Maximum XDEL ID behaves correctly} {\n        r DEL x\n        r XADD x 1-0 data a\n        r XADD x 2-0 data b\n        r XADD x 3-0 data c\n\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply max-deleted-entry-id] \"0-0\"\n\n        r XDEL x 2-0\n        set reply [r XINFO STREAM x FULL]\n        assert_equal [dict get $reply max-deleted-entry-id] \"2-0\"\n\n        r XDEL x 1-0\n        set reply [r XINFO 
STREAM x FULL]\n        assert_equal [dict get $reply max-deleted-entry-id] \"2-0\"\n    }\n\n    test {XADD with partial ID with maximal seq} {\n        r DEL x\n        r XADD x 1-18446744073709551615 f1 v1\n        assert_error {*The ID specified in XADD is equal or smaller*} {r XADD x 1-* f2 v2}\n    }\n}\n\nstart_server {tags {\"stream needs:debug\"} overrides {appendonly yes aof-use-rdb-preamble no}} {\n    test {Empty stream can be rewritten into AOF correctly} {\n        r XADD mystream MAXLEN 0 * a b\n        assert {[dict get [r xinfo stream mystream] length] == 0}\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n        assert {[dict get [r xinfo stream mystream] length] == 0}\n    }\n\n    test {Stream can be rewritten into AOF correctly after XDEL lastid} {\n        r XSETID mystream 0-0\n        r XADD mystream 1-1 a b\n        r XADD mystream 2-2 a b\n        assert {[dict get [r xinfo stream mystream] length] == 2}\n        r XDEL mystream 2-2\n        r bgrewriteaof\n        waitForBgrewriteaof r\n        r debug loadaof\n        assert {[dict get [r xinfo stream mystream] length] == 1}\n        assert_equal [dict get [r xinfo stream mystream] last-generated-id] \"2-2\"\n    }\n}\n\nstart_server {tags {\"stream\"}} {\n    test {XGROUP HELP should not have unexpected options} {\n        catch {r XGROUP help xxx} e\n        assert_match \"*wrong number of arguments for 'xgroup|help' command\" $e\n    }\n\n    test {XINFO HELP should not have unexpected options} {\n        catch {r XINFO help xxx} e\n        assert_match \"*wrong number of arguments for 'xinfo|help' command\" $e\n    }\n}\n\nstart_server {tags {\"stream\"}} {\n    test \"XDELEX wrong number of args\" {\n        assert_error {*wrong number of arguments for 'xdelex' command} {r XDELEX s DELREF}\n    }\n\n    test \"XDELEX should return empty array when key doesn't exist\" {\n        r DEL nonexist\n        assert_equal {-1 -1} [r XDELEX nonexist IDS 2 1-1 
2-2]\n    }\n\n    test \"XDELEX IDS parameter validation\" {\n        r DEL s\n        r XADD s 1-0 f v\n        r XGROUP CREATE s g 0\n\n        # Test invalid numids\n        assert_error {*Number of IDs must be a positive integer*} {r XDELEX s IDS abc 1-1}\n        assert_error {*Number of IDs must be a positive integer*} {r XDELEX s IDS 0 1-1}\n        assert_error {*Number of IDs must be a positive integer*} {r XDELEX s IDS -5 1-1}\n\n        # Test whether numids is equal to the number of IDs provided\n        assert_error {*The `numids` parameter must match the number of arguments*} {r XDELEX s IDS 3 1-1 2-2}\n        assert_error {*syntax error*} {r XDELEX s IDS 1 1-1 2-2}\n\n        # Delete non-existent ids\n        assert_equal {-1 -1} [r XDELEX s IDS 2 1-1 2-2]\n    }\n\n    test \"XDELEX KEEPREF/DELREF/ACKED parameter validation\" {\n        # Test mutually exclusive options\n        assert_error {*syntax error*} {r XDELEX s KEEPREF DELREF IDS 1 1-1}\n        assert_error {*syntax error*} {r XDELEX s KEEPREF ACKED IDS 1 1-1}\n        assert_error {*syntax error*} {r XDELEX s ACKED DELREF IDS 1 1-1}\n    }\n\n    test \"XDELEX with DELREF option will remove entries from all PELs\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n\n        # Create two consumer groups\n        r XGROUP CREATE mystream group1 0\n        r XGROUP CREATE mystream group2 0\n        r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n        r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n        # Verify the messages were removed from both groups' PELs with DELREF\n        assert_equal {1 1} [r XDELEX mystream DELREF IDS 2 1-0 2-0]\n        assert_equal 0 [r XLEN mystream]\n        assert_equal {0 {} {} {}} [r XPENDING mystream group1]\n        assert_equal {0 {} {} {}} [r XPENDING mystream group2]\n    }\n\n    test \"XDELEX with ACKED option only deletes messages acknowledged by all groups\" {\n     
   r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n\n        # Create two consumer groups\n        r XGROUP CREATE mystream group1 0\n        r XGROUP CREATE mystream group2 0\n        r XGROUP CREATE mystream group3 0\n        r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n        r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n        # The message is referenced by three consumer groups:\n        # - group1 and group2 have read the messages\n        # - group3 hasn't read the messages yet (not delivered)\n        # Even after group1 acknowledges the messages, they still can't be deleted\n        r XACK mystream group1 1-0 2-0\n        assert_equal {2 2} [r XDELEX mystream ACKED IDS 2 1-0 2-0]\n        assert_equal 2 [r XLEN mystream]\n\n        # Even after both group1 and group2 acknowledge the messages, these entries\n        # still can't be deleted because group3 hasn't even read them yet.\n        r XACK mystream group2 1-0 2-0\n        assert_equal {2 2} [r XDELEX mystream ACKED IDS 2 1-0 2-0]\n        assert_equal 2 [r XLEN mystream]\n\n        # Now group3 reads the messages, but hasn't acknowledged them yet.\n        # these entries still can't be deleted because group3 hasn't acknowledged them.\n        r XREADGROUP GROUP group3 consumer3 STREAMS mystream >\n        assert_equal {2 2} [r XDELEX mystream ACKED IDS 2 1-0 2-0]\n        assert_equal 2 [r XLEN mystream]\n\n        # Now group3 acknowledges the messages. 
These entries can now be deleted.\n        r XACK mystream group3 1-0 2-0\n        r XDELEX mystream ACKED IDS 2 1-0 2-0\n        assert_equal 0 [r XLEN mystream]\n    }\n\n    test \"XDELEX with ACKED option won't delete messages when new consumer groups are created\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n        r XADD mystream 3-0 f v\n\n        r XGROUP CREATE mystream group1 0\n        r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n\n        # Once group1 acks the messages, they can be deleted with the ACKED option.\n        assert_equal {3} [r XACK mystream group1 1-0 2-0 3-0]\n        assert_equal {1} [r XDELEX mystream ACKED IDS 1 1-0]\n\n        # Create a new consumer group that hasn't read the messages yet.\n        # Even though group1 acked the messages, we still can't delete them.\n        r XGROUP CREATE mystream group2 0\n        assert_equal {2} [r XDELEX mystream ACKED IDS 1 2-0]\n\n        # Now group2 reads and acknowledges the messages,\n        # so they can be successfully deleted with the ACKED option.\n        r XREADGROUP GROUP group2 consumer1 STREAMS mystream >\n        assert_equal {2} [r XACK mystream group2 2-0 3-0]\n        assert_equal {1 1} [r XDELEX mystream ACKED IDS 2 2-0 3-0]\n    }\n\n    test \"XDELEX with KEEPREF\" {\n        r DEL mystream\n        r XADD mystream 1-0 f v\n        r XADD mystream 2-0 f v\n\n        # Create two consumer groups\n        r XGROUP CREATE mystream group1 0\n        r XGROUP CREATE mystream group2 0\n        r XREADGROUP GROUP group1 consumer1 STREAMS mystream >\n        r XREADGROUP GROUP group2 consumer2 STREAMS mystream >\n\n        # Test XDELEX with KEEPREF\n        # XDELEX only deletes the message from the stream\n        # but does not clean up references in consumer groups' PELs\n        assert_equal {1 1} [r XDELEX mystream KEEPREF IDS 2 1-0 2-0]\n        assert_equal 0 [r XLEN mystream]\n        assert_equal {2 1-0 
2-0 {{consumer1 2}}} [r XPENDING mystream group1]\n        assert_equal {2 1-0 2-0 {{consumer2 2}}} [r XPENDING mystream group2]\n    }\n}\n\nforeach rdbchannel {yes no} {\nstart_server {tags {\"repl external:skip\"}} {\n    set replica [srv 0 client]\n    set replica_host [srv 0 host]\n    set replica_port [srv 0 port]\n\n    start_server {} {\n        set master [srv 0 client]\n        set master_host [srv 0 host]\n        set master_port [srv 0 port]\n\n        $master config set repl-diskless-sync yes\n        $master config set repl-diskless-sync-delay 0\n        $master config set save \"\"\n        $master config set repl-rdb-channel $rdbchannel\n\n        $replica config set repl-diskless-load swapdb\n        $replica config set save \"\"\n\n        test \"XADD IDMP tracking works with diskless replication (swapdb mode) rdbchannel=$rdbchannel\" {\n            # Create stream and set IDMP-DURATION before adding entries,\n            # since XCFGSET clears existing entries when the duration changes.\n            $master XADD mystream IDMP p1 \"init\" * field \"init\"\n            $master XCFGSET mystream IDMP-DURATION 2\n            set id1 [$master XADD mystream IDMP p1 \"req-1\" * field \"v1\"]\n\n            set info [$master XINFO STREAM mystream]\n            assert_equal 1 [dict get $info pids-tracked]\n\n            $replica replicaof $master_host $master_port\n\n            wait_for_condition 100 100 {\n                [s -1 master_link_status] eq \"up\"\n            } else {\n                fail \"Replica didn't sync with master\"\n            }\n\n            assert_equal 2 [$replica XLEN mystream]\n\n            set replica_info [$replica XINFO STREAM mystream]\n            assert_equal 1 [dict get $replica_info pids-tracked]\n            assert_equal 1 [dict get $replica_info iids-tracked]\n\n            # If swapMainDbWithTempDb didn't swap stream_idmp_keys,\n            # tracking was lost and expiry will never happen on replica.\n            
wait_for_condition 50 100 {\n                [dict get [$replica XINFO STREAM mystream] pids-tracked] == 0 &&\n                [dict get [$replica XINFO STREAM mystream] iids-tracked] == 0\n            } else {\n                fail \"IDMP entries were not cleaned up on replica after diskless replication\"\n            }\n\n            wait_for_condition 50 100 {\n                [dict get [$master XINFO STREAM mystream] pids-tracked] == 0\n            } else {\n                fail \"IDMP entries were not cleaned up on master\"\n            }\n            set id2 [$master XADD mystream IDMP p1 \"req-1\" * field \"v2\"]\n            assert {$id1 ne $id2}\n\n            $replica replicaof no one\n        }\n    }\n}\n}\n"
  },
  {
    "path": "tests/unit/type/string.tcl",
    "content": "start_server {tags {\"string\"}} {\n    test {SET and GET an item} {\n        r set x foobar\n        r get x\n    } {foobar}\n\n    test {SET and GET an empty item} {\n        r set x {}\n        r get x\n    } {}\n\n    test {Very big payload in GET/SET} {\n        set buf [string repeat \"abcd\" 1000000]\n        r set foo $buf\n        r get foo\n    } [string repeat \"abcd\" 1000000]\n\n    tags {\"slow\"} {\n        test {Very big payload random access} {\n            set err {}\n            array set payload {}\n            for {set j 0} {$j < 100} {incr j} {\n                set size [expr 1+[randomInt 100000]]\n                set buf [string repeat \"pl-$j\" $size]\n                set payload($j) $buf\n                r set bigpayload_$j $buf\n            }\n            for {set j 0} {$j < 1000} {incr j} {\n                set index [randomInt 100]\n                set buf [r get bigpayload_$index]\n                if {$buf != $payload($index)} {\n                    set err \"Values differ: I set '$payload($index)' but I read back '$buf'\"\n                    break\n                }\n            }\n            unset payload\n            set _ $err\n        } {}\n\n        test {SET 10000 numeric keys and access all them in reverse order} {\n            r flushdb\n            set err {}\n            for {set x 0} {$x < 10000} {incr x} {\n                r set $x $x\n            }\n            set sum 0\n            for {set x 9999} {$x >= 0} {incr x -1} {\n                set val [r get $x]\n                if {$val ne $x} {\n                    set err \"Element at position $x is $val instead of $x\"\n                    break\n                }\n            }\n            set _ $err\n        } {}\n\n        test {DBSIZE should be 10000 now} {\n            r dbsize\n        } {10000}\n    }\n\n    test \"SETNX target key missing\" {\n        r del novar\n        assert_equal 1 [r setnx novar foobared]\n        assert_equal \"foobared\" 
[r get novar]\n    }\n\n    test \"SETNX target key exists\" {\n        r set novar foobared\n        assert_equal 0 [r setnx novar blabla]\n        assert_equal \"foobared\" [r get novar]\n    }\n\n    test \"SETNX against not-expired volatile key\" {\n        r set x 10\n        r expire x 10000\n        assert_equal 0 [r setnx x 20]\n        assert_equal 10 [r get x]\n    }\n\n    test \"SETNX against expired volatile key\" {\n        # Make it very unlikely for the key this test uses to be expired by the\n        # active expiry cycle. This is tightly coupled to the implementation of\n        # active expiry and dbAdd() but currently the only way to test that\n        # SETNX expires a key when it should have been.\n        for {set x 0} {$x < 9999} {incr x} {\n            r setex key-$x 3600 value\n        }\n\n        # This will be one of 10000 expiring keys. A cycle is executed every\n        # 100ms, sampling 10 keys for being expired or not.  This key will be\n        # expired for at most 1s when we wait 2s, resulting in a total sample\n        # of 100 keys. The probability of the success of this test being a\n        # false positive is therefore approx. 
1%.\n        r set x 10\n        r expire x 1\n\n        # Wait for the key to expire\n        after 2000\n\n        assert_equal 1 [r setnx x 20]\n        assert_equal 20 [r get x]\n    } {} {debug_defrag:skip}\n\n    test \"GETEX EX option\" {\n        r del foo\n        r set foo bar\n        r getex foo ex 10\n        assert_range [r ttl foo] 5 10\n    }\n\n    test \"GETEX PX option\" {\n        r del foo\n        r set foo bar\n        r getex foo px 10000\n        assert_range [r pttl foo] 5000 10000\n    }\n\n    test \"GETEX EXAT option\" {\n        r del foo\n        r set foo bar\n        r getex foo exat [expr [clock seconds] + 10]\n        assert_range [r ttl foo] 5 10\n    }\n\n    test \"GETEX PXAT option\" {\n        r del foo\n        r set foo bar\n        r getex foo pxat [expr [clock milliseconds] + 10000]\n        assert_range [r pttl foo] 5000 10000\n    }\n\n    test \"GETEX PERSIST option\" {\n        r del foo\n        r set foo bar ex 10\n        assert_range [r ttl foo] 5 10\n        r getex foo persist\n        assert_equal -1 [r ttl foo]\n    }\n\n    test \"GETEX no option\" {\n        r del foo\n        r set foo bar\n        r getex foo\n        assert_equal bar [r getex foo]\n    }\n\n    test \"GETEX syntax errors\" {\n        set ex {}\n        catch {r getex foo non-existent-option} ex\n        set ex\n    } {*syntax*}\n\n    test \"GETEX and GET expired key or not exist\" {\n        r del foo\n        r set foo bar px 1\n        after 2\n        assert_equal {} [r getex foo]\n        assert_equal {} [r get foo]\n    }\n\n    test \"GETEX no arguments\" {\n         set ex {}\n         catch {r getex} ex\n         set ex\n     } {*wrong number of arguments for 'getex' command}\n\n    test \"GETDEL command\" {\n        r del foo\n        r set foo bar\n        assert_equal bar [r getdel foo ]\n        assert_equal {} [r getdel foo ]\n    }\n\n    test {GETDEL propagate as DEL command to replica} {\n        set repl 
[attach_to_replication_stream]\n        r set foo bar\n        r getdel foo\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n            {del foo}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {GETEX without argument does not propagate to replica} {\n        set repl [attach_to_replication_stream]\n        r set foo bar\n        r getex foo\n        r del foo\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n            {del foo}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {MGET} {\n        r flushdb\n        r set foo{t} BAR\n        r set bar{t} FOO\n        r mget foo{t} bar{t}\n    } {BAR FOO}\n\n    test {MGET against non existing key} {\n        r mget foo{t} baazz{t} bar{t}\n    } {BAR {} FOO}\n\n    test {MGET against non-string key} {\n        r sadd myset{t} ciao\n        r sadd myset{t} bau\n        r mget foo{t} baazz{t} bar{t} myset{t}\n    } {BAR {} FOO {}}\n\n    test {GETSET (set new value)} {\n        r del foo\n        list [r getset foo xyz] [r get foo]\n    } {{} xyz}\n\n    test {GETSET (replace old value)} {\n        r set foo bar\n        list [r getset foo xyz] [r get foo]\n    } {bar xyz}\n\n    test {MSET base case} {\n        r mset x{t} 10 y{t} \"foo bar\" z{t} \"x x x x x x x\\n\\n\\r\\n\"\n        r mget x{t} y{t} z{t}\n    } [list 10 {foo bar} \"x x x x x x x\\n\\n\\r\\n\"]\n\n    test {MSET/MSETNX wrong number of args} {\n        assert_error {*wrong number of arguments for 'mset' command} {r mset x{t} 10 y{t} \"foo bar\" z{t}}\n        assert_error {*wrong number of arguments for 'msetnx' command} {r msetnx x{t} 20 y{t} \"foo bar\" z{t}}\n    }\n\n    test {MSET with already existing - same key twice} {\n        r set x{t} x\n        list [r mset x{t} xxx x{t} yyy] [r get x{t}]\n    } {OK yyy}\n\n    test {MSETNX with already existent key} {\n        list [r 
msetnx x1{t} xxx y2{t} yyy x{t} 20] [r exists x1{t}] [r exists y2{t}]\n    } {0 0 0}\n\n    test {MSETNX with not existing keys} {\n        list [r msetnx x1{t} xxx y2{t} yyy] [r get x1{t}] [r get y2{t}]\n    } {1 xxx yyy}\n\n    test {MSETNX with not existing keys - same key twice} {\n        r del x1{t}\n        list [r msetnx x1{t} xxx x1{t} yyy] [r get x1{t}]\n    } {1 yyy}\n\n    test {MSETNX with already existing keys - same key twice} {\n        list [r msetnx x1{t} xxx x1{t} zzz] [r get x1{t}]\n    } {0 yyy}\n\n    test {MSETEX - all expiration flags} {\n        # Test each expiration type separately (EX, PX, EXAT, PXAT)\n        set future_sec [expr [clock seconds] + 10]\n        set future_ms [expr [clock milliseconds] + 10000]\n\n        # Test EX and PX with separate commands (each applies to all keys in that command)\n        r msetex 2 ex:key1{t} val1 ex:key2{t} val2 ex 5\n        r msetex 2 px:key1{t} val1 px:key2{t} val2 px 5000\n\n        # Test EXAT and PXAT with separate commands\n        r msetex 2 exat:key1{t} val3 exat:key2{t} val4 exat $future_sec\n        r msetex 2 pxat:key1{t} val3 pxat:key2{t} val4 pxat $future_ms\n\n        assert_morethan [r ttl ex:key1{t}] 0\n        assert_morethan [r pttl px:key1{t}] 0\n        assert_morethan [r ttl exat:key1{t}] 0\n        assert_morethan [r pttl pxat:key1{t}] 0\n    }\n\n    test {MSETEX - KEEPTTL preserves existing TTL} {\n        r setex keepttl:key{t} 100 oldval\n        set old_ttl [r ttl keepttl:key{t}]\n        r msetex 1 keepttl:key{t} newval keepttl\n        assert_equal [r get keepttl:key{t}] \"newval\"\n        assert_morethan [r ttl keepttl:key{t}] [expr $old_ttl - 5]\n    }\n\n    test {MSETEX - NX/XX conditions and return values} {\n        r del nx:new{t} nx:new2{t} xx:existing{t} xx:nonexist{t}\n        r set xx:existing{t} oldval\n\n        assert_equal [r msetex 2 nx:new{t} val1 nx:new2{t} val2 nx ex 10] 1\n        assert_equal [r msetex 1 xx:existing{t} newval nx ex 10] 0\n       
 assert_equal [r msetex 1 xx:nonexist{t} newval xx ex 10] 0\n        assert_equal [r msetex 1 xx:existing{t} newval xx ex 10] 1\n        assert_equal [r get nx:new{t}] \"val1\"\n        assert_equal [r get xx:existing{t}] \"newval\"\n    }\n\n    test {MSETEX - flexible argument parsing} {\n        r del flex:1{t} flex:2{t}\n        # Flags are accepted after the key-value pairs\n        r msetex 2 flex:1{t} val1 flex:2{t} val2 ex 3 nx\n        r msetex 2 flex:3{t} val3 flex:4{t} val4 px 3000 xx\n\n        assert_equal [r get flex:1{t}] \"val1\"\n        assert_equal [r get flex:2{t}] \"val2\"\n        assert_morethan [r ttl flex:1{t}] 0\n        assert_equal [r exists flex:3{t}] 0\n        assert_equal [r exists flex:4{t}] 0\n    }\n\n    test {MSETEX - error cases} {\n        assert_error {*wrong number of arguments*} {r msetex}\n        assert_error {*invalid numkeys value*} {r msetex key1 val1 ex 10}\n        assert_error {*wrong number of key-value pairs*} {r msetex 2 key1{t} val1 key2{t}}\n        assert_error {*syntax error*} {r msetex 1 key1 val1 invalid_flag}\n    }\n\n    test {MSETEX - overflow protection in numkeys} {\n        # Test that large numkeys values don't cause integer overflow\n        # This tests the fix for potential overflow in kv_count_long * 2\n        assert_error {*invalid numkeys value*} {r msetex 2147483648 key1 val1 ex 10}\n        assert_error {*wrong number of key-value pairs*} {r msetex 2147483647 key1 val1 ex 10}\n    }\n\n    test {MSETEX - mutually exclusive flags} {\n        # NX and XX are mutually exclusive\n        assert_error {*syntax error*} {r msetex 2 key1{t} val1 key2{t} val2 nx xx ex 10}\n\n        # Multiple expiration flags are mutually exclusive\n        assert_error {*syntax error*} {r msetex 2 key1{t} val1 key2{t} val2 ex 10 px 5000}\n        assert_error {*syntax error*} {r msetex 2 key1{t} val1 key2{t} val2 exat 1735689600 pxat 1735689600000}\n\n        # KEEPTTL conflicts with expiration flags\n        assert_error 
{*syntax error*} {r msetex 2 key1{t} val1 key2{t} val2 keepttl ex 10}\n        assert_error {*syntax error*} {r msetex 2 key1{t} val1 key2{t} val2 keepttl px 5000}\n    }\n\n    test \"STRLEN against non-existing key\" {\n        assert_equal 0 [r strlen notakey]\n    }\n\n    test \"STRLEN against integer-encoded value\" {\n        r set myinteger -555\n        assert_equal 4 [r strlen myinteger]\n    }\n\n    test \"STRLEN against plain string\" {\n        r set mystring \"foozzz0123456789 baz\"\n        assert_equal 20 [r strlen mystring]\n    }\n\n    test \"SETBIT against non-existing key\" {\n        r del mykey\n        assert_equal 0 [r setbit mykey 1 1]\n        assert_equal [binary format B* 01000000] [r get mykey]\n    }\n\n    test \"SETBIT against string-encoded key\" {\n        # Ascii \"@\" is integer 64 = 01 00 00 00\n        r set mykey \"@\"\n\n        assert_equal 0 [r setbit mykey 2 1]\n        assert_equal [binary format B* 01100000] [r get mykey]\n        assert_equal 1 [r setbit mykey 1 0]\n        assert_equal [binary format B* 00100000] [r get mykey]\n    }\n\n    test \"SETBIT against integer-encoded key\" {\n        # Ascii \"1\" is integer 49 = 00 11 00 01\n        r set mykey 1\n        assert_encoding int mykey\n\n        assert_equal 0 [r setbit mykey 6 1]\n        assert_equal [binary format B* 00110011] [r get mykey]\n        assert_equal 1 [r setbit mykey 2 0]\n        assert_equal [binary format B* 00010011] [r get mykey]\n    }\n\n    test \"SETBIT against key with wrong type\" {\n        r del mykey\n        r lpush mykey \"foo\"\n        assert_error \"WRONGTYPE*\" {r setbit mykey 0 1}\n    }\n\n    test \"SETBIT with out of range bit offset\" {\n        r del mykey\n        assert_error \"*out of range*\" {r setbit mykey [expr 4*1024*1024*1024] 1}\n        assert_error \"*out of range*\" {r setbit mykey -1 1}\n    }\n\n    test \"SETBIT with non-bit argument\" {\n        r del mykey\n        assert_error \"*out of range*\" {r 
setbit mykey 0 -1}\n        assert_error \"*out of range*\" {r setbit mykey 0  2}\n        assert_error \"*out of range*\" {r setbit mykey 0 10}\n        assert_error \"*out of range*\" {r setbit mykey 0 20}\n    }\n\n    test \"SETBIT fuzzing\" {\n        set str \"\"\n        set len [expr 256*8]\n        r del mykey\n\n        for {set i 0} {$i < 2000} {incr i} {\n            set bitnum [randomInt $len]\n            set bitval [randomInt 2]\n            set fmt [format \"%%-%ds%%d%%-s\" $bitnum]\n            set head [string range $str 0 $bitnum-1]\n            set tail [string range $str $bitnum+1 end]\n            set str [string map {\" \" 0} [format $fmt $head $bitval $tail]]\n\n            r setbit mykey $bitnum $bitval\n            assert_equal [binary format B* $str] [r get mykey]\n        }\n    }\n\n    test \"GETBIT against non-existing key\" {\n        r del mykey\n        assert_equal 0 [r getbit mykey 0]\n    }\n\n    test \"GETBIT against string-encoded key\" {\n        # Single byte with 2nd and 3rd bit set\n        r set mykey \"`\"\n\n        # In-range\n        assert_equal 0 [r getbit mykey 0]\n        assert_equal 1 [r getbit mykey 1]\n        assert_equal 1 [r getbit mykey 2]\n        assert_equal 0 [r getbit mykey 3]\n\n        # Out-range\n        assert_equal 0 [r getbit mykey 8]\n        assert_equal 0 [r getbit mykey 100]\n        assert_equal 0 [r getbit mykey 10000]\n    }\n\n    test \"GETBIT against integer-encoded key\" {\n        r set mykey 1\n        assert_encoding int mykey\n\n        # Ascii \"1\" is integer 49 = 00 11 00 01\n        assert_equal 0 [r getbit mykey 0]\n        assert_equal 0 [r getbit mykey 1]\n        assert_equal 1 [r getbit mykey 2]\n        assert_equal 1 [r getbit mykey 3]\n\n        # Out-range\n        assert_equal 0 [r getbit mykey 8]\n        assert_equal 0 [r getbit mykey 100]\n        assert_equal 0 [r getbit mykey 10000]\n    }\n\n    test \"SETRANGE against non-existing key\" {\n        r del 
mykey\n        assert_equal 3 [r setrange mykey 0 foo]\n        assert_equal \"foo\" [r get mykey]\n\n        r del mykey\n        assert_equal 0 [r setrange mykey 0 \"\"]\n        assert_equal 0 [r exists mykey]\n\n        r del mykey\n        assert_equal 4 [r setrange mykey 1 foo]\n        assert_equal \"\\000foo\" [r get mykey]\n    }\n\n    test \"SETRANGE against string-encoded key\" {\n        r set mykey \"foo\"\n        assert_equal 3 [r setrange mykey 0 b]\n        assert_equal \"boo\" [r get mykey]\n\n        r set mykey \"foo\"\n        assert_equal 3 [r setrange mykey 0 \"\"]\n        assert_equal \"foo\" [r get mykey]\n\n        r set mykey \"foo\"\n        assert_equal 3 [r setrange mykey 1 b]\n        assert_equal \"fbo\" [r get mykey]\n\n        r set mykey \"foo\"\n        assert_equal 7 [r setrange mykey 4 bar]\n        assert_equal \"foo\\000bar\" [r get mykey]\n    }\n\n    test \"SETRANGE against integer-encoded key\" {\n        r set mykey 1234\n        assert_encoding int mykey\n        assert_equal 4 [r setrange mykey 0 2]\n        assert_encoding raw mykey\n        assert_equal 2234 [r get mykey]\n\n        # Shouldn't change encoding when nothing is set\n        r set mykey 1234\n        assert_encoding int mykey\n        assert_equal 4 [r setrange mykey 0 \"\"]\n        assert_encoding int mykey\n        assert_equal 1234 [r get mykey]\n\n        r set mykey 1234\n        assert_encoding int mykey\n        assert_equal 4 [r setrange mykey 1 3]\n        assert_encoding raw mykey\n        assert_equal 1334 [r get mykey]\n\n        r set mykey 1234\n        assert_encoding int mykey\n        assert_equal 6 [r setrange mykey 5 2]\n        assert_encoding raw mykey\n        assert_equal \"1234\\0002\" [r get mykey]\n    }\n\n    test \"SETRANGE against key with wrong type\" {\n        r del mykey\n        r lpush mykey \"foo\"\n        assert_error \"WRONGTYPE*\" {r setrange mykey 0 bar}\n    }\n\n    test \"SETRANGE with out of range 
offset\" {\n        r del mykey\n        assert_error \"*maximum allowed size*\" {r setrange mykey [expr 512*1024*1024-4] world}\n\n        r set mykey \"hello\"\n        assert_error \"*out of range*\" {r setrange mykey -1 world}\n        assert_error \"*maximum allowed size*\" {r setrange mykey [expr 512*1024*1024-4] world}\n    }\n\n    test \"GETRANGE against non-existing key\" {\n        r del mykey\n        assert_equal \"\" [r getrange mykey 0 -1]\n    }\n\n    test \"GETRANGE against wrong key type\" {\n        r lpush lkey1 \"list\"\n        assert_error {WRONGTYPE Operation against a key holding the wrong kind of value*} {r getrange lkey1 0 -1}\n    }\n\n    test \"GETRANGE against string value\" {\n        r set mykey \"Hello World\"\n        assert_equal \"Hell\" [r getrange mykey 0 3]\n        assert_equal \"Hello World\" [r getrange mykey 0 -1]\n        assert_equal \"orld\" [r getrange mykey -4 -1]\n        assert_equal \"\" [r getrange mykey 5 3]\n        assert_equal \" World\" [r getrange mykey 5 5000]\n        assert_equal \"Hello World\" [r getrange mykey -5000 10000]\n        assert_equal \"H\" [r getrange mykey 0 -100]\n        assert_equal \"\" [r getrange mykey 1 -100]\n        assert_equal \"\" [r getrange mykey -1 -100]\n        assert_equal \"H\" [r getrange mykey -100 -99]\n        assert_equal \"H\" [r getrange mykey -100 -100]\n        assert_equal \"\" [r getrange mykey -100 -101]\n    }\n\n    test \"GETRANGE against integer-encoded value\" {\n        r set mykey 1234\n        assert_equal \"123\" [r getrange mykey 0 2]\n        assert_equal \"1234\" [r getrange mykey 0 -1]\n        assert_equal \"234\" [r getrange mykey -3 -1]\n        assert_equal \"\" [r getrange mykey 5 3]\n        assert_equal \"4\" [r getrange mykey 3 5000]\n        assert_equal \"1234\" [r getrange mykey -5000 10000]\n        assert_equal \"1\" [r getrange mykey 0 -100]\n        assert_equal \"\" [r getrange mykey 1 -100]\n        assert_equal \"\" [r getrange 
mykey -1 -100]\n        assert_equal \"1\" [r getrange mykey -100 -99]\n        assert_equal \"1\" [r getrange mykey -100 -100]\n        assert_equal \"\" [r getrange mykey -100 -101]\n    }\n\n    test \"GETRANGE fuzzing\" {\n        for {set i 0} {$i < 1000} {incr i} {\n            r set bin [set bin [randstring 0 1024 binary]]\n            set _start [set start [randomInt 1500]]\n            set _end [set end [randomInt 1500]]\n            if {$_start < 0} {set _start \"end-[abs($_start)-1]\"}\n            if {$_end < 0} {set _end \"end-[abs($_end)-1]\"}\n            assert_equal [string range $bin $_start $_end] [r getrange bin $start $end]\n        }\n    }\n\n    test \"Coverage: SUBSTR\" {\n        r set key abcde\n        assert_equal \"a\" [r substr key 0 0]\n        assert_equal \"abcd\" [r substr key 0 3]\n        assert_equal \"bcde\" [r substr key -4 -1]\n        assert_equal \"\" [r substr key -1 -3]\n        assert_equal \"\" [r substr key 7 8]\n        assert_equal \"\" [r substr nokey 0 1]\n    }\n    \nif {[string match {*jemalloc*} [s mem_allocator]]} {\n    test {trim on SET with big value} {\n        # set a big value to trigger increasing the query buf\n        r set key [string repeat A 100000]\n        # set a smaller value that is still > PROTO_MBULK_BIG_ARG (32*1024), so Redis will try to save the query buf itself in the DB.\n        r set key [string repeat A 33000]\n        # assert the value was trimmed\n        assert {[r memory usage key] < 42000}; # 42K to account for Jemalloc's additional memory overhead. 
\n    }\n} ;# if jemalloc\n\n    test {Extended SET can detect syntax errors} {\n        set e {}\n        catch {r set foo bar non-existing-option} e\n        set e\n    } {*syntax*}\n\n    test {Extended SET NX option} {\n        r del foo\n        set v1 [r set foo 1 nx]\n        set v2 [r set foo 2 nx]\n        list $v1 $v2 [r get foo]\n    } {OK {} 1}\n\n    test {Extended SET XX option} {\n        r del foo\n        set v1 [r set foo 1 xx]\n        r set foo bar\n        set v2 [r set foo 2 xx]\n        list $v1 $v2 [r get foo]\n    } {{} OK 2}\n\n    test {Extended SET GET option} {\n        r del foo\n        r set foo bar\n        set old_value [r set foo bar2 GET]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {bar bar2}\n\n    test {Extended SET GET option with no previous value} {\n        r del foo\n        set old_value [r set foo bar GET]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {{} bar}\n\n    test {Extended SET GET option accepts repeated GET tokens} {\n        r del foo\n        r set foo bar\n        set old_value [r set foo baz GET GET]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {bar baz}\n\n    test {Extended SET GET option with XX} {\n        r del foo\n        r set foo bar\n        set old_value [r set foo baz GET XX]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {bar baz}\n\n    test {Extended SET GET option with XX and no previous value} {\n        r del foo\n        set old_value [r set foo bar GET XX]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {{} {}}\n\n    test {Extended SET GET option with NX} {\n        r del foo\n        set old_value [r set foo bar GET NX]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {{} bar}\n\n    test {Extended SET GET option with NX and previous value} {\n        r del foo\n        r set foo bar\n        set 
old_value [r set foo baz GET NX]\n        set new_value [r get foo]\n        list $old_value $new_value\n    } {bar bar}\n\n    test {Extended SET GET option with a past expiration time and no previous value} {\n        r del foo\n        r debug set-active-expire 0\n        set now [clock milliseconds]\n        set expiredkeys [s expired_keys]\n        set old_value [r set foo baz GET PXAT [expr $now-3000]]\n        assert_equal $old_value {}\n        # Verify that expired_keys was incremented, even though\n        # the key was not added to the DB actually.\n        assert_equal [expr $expiredkeys + 1] [s expired_keys]\n        catch {r debug object foo} e\n        r debug set-active-expire 1\n        set e\n    } {ERR no such key} {needs:debug}\n\n    test {Extended SET GET with incorrect type should result in wrong type error} {\n      r del foo\n      r rpush foo waffle\n      catch {r set foo bar GET} err1\n      assert_equal \"waffle\" [r rpop foo]\n      set err1\n    } {*WRONGTYPE*}\n\n    test {Extended SET EX option} {\n        r del foo\n        r set foo bar ex 10\n        set ttl [r ttl foo]\n        assert {$ttl <= 10 && $ttl > 5}\n    }\n\n    test {Extended SET PX option} {\n        r del foo\n        r set foo bar px 10000\n        set ttl [r ttl foo]\n        assert {$ttl <= 10 && $ttl > 5}\n    }\n\n    test \"Extended SET EXAT option\" {\n        r del foo\n        r set foo bar exat [expr [clock seconds] + 10]\n        assert_range [r ttl foo] 5 10\n    }\n\n    test \"Extended SET PXAT option\" {\n        r del foo\n        r set foo bar pxat [expr [clock milliseconds] + 10000]\n        assert_range [r ttl foo] 5 10\n    }\n\n    test {Extended SET PXAT option with a past expiration time} {\n        r set foo bar\n        r debug set-active-expire 0\n        set now [clock milliseconds]\n        set expiredkeys [s expired_keys]\n        r set foo baz PXAT [expr $now-3000]\n        # Verify that expired_keys was incremented, even though\n      
  # the key was never actually added to the DB.\n        assert_equal [expr $expiredkeys + 1] [s expired_keys]\n        catch {r debug object foo} e\n        r debug set-active-expire 1\n        set e\n    } {ERR no such key} {needs:debug}\n\n    test {SET PXAT with a past expiration time will propagate it as DEL or UNLINK} {\n        r flushall\n        r set foo foo\n        r set bar bar\n        set repl [attach_to_replication_stream]\n\n        # Keys that have an expired timestamp will be deleted immediately\n        set now [clock milliseconds]\n        r config set lazyfree-lazy-server-del no\n        assert_equal {OK} [r set foo foo PXAT [expr $now-3000]]\n        r config set lazyfree-lazy-server-del yes\n        assert_equal {OK} [r set bar bar PXAT [expr $now-3000]]\n\n        # Verify the propagation of DEL and UNLINK.\n        assert_replication_stream $repl {\n            {select *}\n            {del foo}\n            {unlink bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {Extended SET using multiple options at once} {\n        r set foo val\n        assert {[r set foo bar xx px 10000] eq {OK}}\n        set ttl [r ttl foo]\n        assert {$ttl <= 10 && $ttl > 5}\n    }\n\n    test {GETRANGE with huge ranges, GitHub issue #1844} {\n        r set foo bar\n        r getrange foo 0 4294967297\n    } {bar}\n\n    set rna1 {CACCTTCCCAGGTAACAAACCAACCAACTTTCGATCTCTTGTAGATCTGTTCTCTAAACGAACTTTAAAATCTGTGTGGCTGTCACTCGGCTGCATGCTTAGTGCACTCACGCAGTATAATTAATAACTAATTACTGTCGTTGACAGGACACGAGTAACTCGTCTATCTTCTGCAGGCTGCTTACGGTTTCGTCCGTGTTGCAGCCGATCATCAGCACATCTAGGTTTCGTCCGGGTGTG}\n    set rna2 {ATTAAAGGTTTATACCTTCCCAGGTAACAAACCAACCAACTTTCGATCTCTTGTAGATCTGTTCTCTAAACGAACTTTAAAATCTGTGTGGCTGTCACTCGGCTGCATGCTTAGTGCACTCACGCAGTATAATTAATAACTAATTACTGTCGTTGACAGGACACGAGTAACTCGTCTATCTTCTGCAGGCTGCTTACGGTTTCGTCCGTGTTGCAGCCGATCATCAGCACATCTAGGTTT}\n    set rnalcs 
{ACCTTCCCAGGTAACAAACCAACCAACTTTCGATCTCTTGTAGATCTGTTCTCTAAACGAACTTTAAAATCTGTGTGGCTGTCACTCGGCTGCATGCTTAGTGCACTCACGCAGTATAATTAATAACTAATTACTGTCGTTGACAGGACACGAGTAACTCGTCTATCTTCTGCAGGCTGCTTACGGTTTCGTCCGTGTTGCAGCCGATCATCAGCACATCTAGGTTT}\n\n    test {LCS basic} {\n        r set virus1{t} $rna1\n        r set virus2{t} $rna2\n        r LCS virus1{t} virus2{t}\n    } $rnalcs\n\n    test {LCS len} {\n        r set virus1{t} $rna1\n        r set virus2{t} $rna2\n        r LCS virus1{t} virus2{t} LEN\n    } [string length $rnalcs]\n\n    test {LCS indexes} {\n        dict get [r LCS virus1{t} virus2{t} IDX] matches\n    } {{{238 238} {239 239}} {{236 236} {238 238}} {{229 230} {236 237}} {{224 224} {235 235}} {{1 222} {13 234}}}\n\n    test {LCS indexes with match len} {\n        dict get [r LCS virus1{t} virus2{t} IDX WITHMATCHLEN] matches\n    } {{{238 238} {239 239} 1} {{236 236} {238 238} 1} {{229 230} {236 237} 2} {{224 224} {235 235} 1} {{1 222} {13 234} 222}}\n\n    test {LCS indexes with match len and minimum match len} {\n        dict get [r LCS virus1{t} virus2{t} IDX WITHMATCHLEN MINMATCHLEN 5] matches\n    } {{{1 222} {13 234} 222}}\n\n    test {SETRANGE with huge offset} {\n        foreach value {9223372036854775807 2147483647} {\n            catch {r setrange K $value A} res\n            # expecting a different error on 32- and 64-bit systems\n            if {![string match \"*string exceeds maximum allowed size*\" $res] && ![string match \"*out of range*\" $res]} {\n                assert_equal $res \"expecting an error\"\n            }\n        }\n    }\n\n    test {APPEND modifies the encoding from int to raw} {\n        r del foo\n        r set foo 1\n        assert_encoding \"int\" foo\n        r append foo 2\n\n        set res {}\n        lappend res [r get foo]\n        assert_encoding \"raw\" foo\n\n        r set bar 12\n        assert_encoding \"int\" bar\n        lappend res [r get bar]\n    } {12 12}\n\n    # coverage for kvobjComputeSize\n  
  test {MEMORY USAGE - STRINGS} {\n        set sizes {1 5 8 15 16 17 31 32 33 63 64 65 127 128 129 255 256 257}\n        set hdrsize [expr {[s arch_bits] == 32 ? 12 : 16}]\n        \n        foreach ksize $sizes {\n            set key [string repeat \"k\" $ksize]\n            # OBJ_ENCODING_EMBSTR, OBJ_ENCODING_RAW        \n            foreach vsize $sizes {\n                set value [string repeat \"v\" $vsize]                        \n                r set $key $value\n                set memory_used [r memory usage $key]\n                set min [expr $hdrsize + $ksize + $vsize] \n                assert_lessthan_equal $min $memory_used\n                set max [expr {32 > $min ? 64 : [expr $min * 2]}]\n                assert_morethan_equal $max $memory_used\n            }\n            \n            # OBJ_ENCODING_INT\n            foreach value {1 100 10000 10000000} {\n                r set $key $value\n                set min [expr $hdrsize + $ksize]\n                assert_lessthan_equal $min [r memory usage $key]\n            }\n        }\n    }\n    \n    if {[string match {*jemalloc*} [s mem_allocator]]} {\n        test {Check MEMORY USAGE for embedded key strings with jemalloc} {\n        \n            proc expected_mem {key val with_expire exp_mem_usage exp_debug_sdslen} {\n                r del $key\n                r set $key $val\n                if {$with_expire} { r expire $key 5678315 }\n                assert_equal $exp_mem_usage [r memory usage $key]\n                assert_equal $exp_debug_sdslen [r debug sdslen $key]\n            }\n            \n            if {[s arch_bits] == 64} {  \n                # 16 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 5 (val) + 1 (\\0) = 32bytes\n                expected_mem x234 y2345 0 32 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 32, val_sds_len:5, val_sds_avail:0, val_zmalloc: 0\"\n                # 16 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 
(sdshdr8) + 6 (val) + 1 (\\0) = 33bytes\n                expected_mem x234 y23456 0 40 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 40, val_sds_len:6, val_sds_avail:7, val_zmalloc: 0\"\n                # 16 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 13 (val) + 1 (\\0) = 40bytes\n                expected_mem x234 y234561234567 0 40 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 40, val_sds_len:13, val_sds_avail:0, val_zmalloc: 0\"\n                # 16 (kvobj) + 8 (expiry) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 13 (val) + 1 (\\0) = 48bytes\n                expected_mem x234 y234561234567 1 48 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 48, val_sds_len:13, val_sds_avail:0, val_zmalloc: 0\"\n            } else {\n                # 12 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 9 (val) + 1 (\\0) = 32bytes\n                expected_mem x234 y23456789 0 32 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 32, val_sds_len:9, val_sds_avail:0, val_zmalloc: 0\"\n                # 12 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 10 (val) + 1 (\\0) = 33bytes\n                expected_mem x234 y234567890 0 40 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 40, val_sds_len:10, val_sds_avail:7, val_zmalloc: 0\"\n                # 12 (kvobj) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 17 (val) + 1 (\\0) = 40bytes \n                expected_mem x234 y2345678901234567 0 40 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 40, val_sds_len:17, val_sds_avail:0, val_zmalloc: 0\"\n                # 12 (kvobj) + 8 (expiry) + 1 (key-hdr-size) + 1 (sdshdr5) + 4 (key) + 1 (\\0) + 3 (sdshdr8) + 17 (val) + 1 (\\0) = 48bytes\n                expected_mem x234 y2345678901234567 1 48 \"key_sds_len:4, key_sds_avail:0, key_zmalloc: 48, val_sds_len:17, val_sds_avail:0, val_zmalloc: 0\"\n            }\n        } {} {needs:debug}\n    }\n\n    test 
{DIGEST basic usage with plain string} {\n        r set mykey \"hello world\"\n        set digest [r digest mykey]\n        # Ensure reply is exactly 16 hex characters (works across all Tcl versions)\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST with empty string} {\n        r set mykey \"\"\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST with integer-encoded value} {\n        r set mykey 12345\n        assert_encoding int mykey\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST with negative integer} {\n        r set mykey -999\n        assert_encoding int mykey\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST returns consistent hash for same value} {\n        r set mykey \"test string\"\n        set digest1 [r digest mykey]\n        set digest2 [r digest mykey]\n        assert_equal $digest1 $digest2\n    }\n\n    test {DIGEST returns same hash for same content in different keys} {\n        r set key1 \"identical\"\n        r set key2 \"identical\"\n        set digest1 [r digest key1]\n        set digest2 [r digest key2]\n        assert_equal $digest1 $digest2\n    }\n\n    test {DIGEST returns different hash for different values} {\n        r set key1 \"value1\"\n        r set key2 \"value2\"\n        set digest1 [r digest key1]\n        set digest2 [r digest key2]\n        assert {$digest1 != $digest2}\n    }\n\n    test {DIGEST with binary data} {\n        r set mykey \"\\x00\\x01\\x02\\x03\\xff\\xfe\"\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST with unicode characters} {\n        r set mykey 
\"Hello 世界\"\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST with very long string} {\n        set longstring [string repeat \"Lorem ipsum dolor sit amet. \" 1000]\n        r set mykey $longstring\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST against non-existing key} {\n        r del nonexistent\n        assert_equal {} [r digest nonexistent]\n    }\n\n    test {DIGEST against wrong type (list)} {\n        r del mylist\n        r lpush mylist \"element\"\n        assert_error \"*WRONGTYPE*\" {r digest mylist}\n    }\n\n    test {DIGEST against wrong type (hash)} {\n        r del myhash\n        r hset myhash field value\n        assert_error \"*WRONGTYPE*\" {r digest myhash}\n    }\n\n    test {DIGEST against wrong type (set)} {\n        r del myset\n        r sadd myset member\n        assert_error \"*WRONGTYPE*\" {r digest myset}\n    }\n\n    test {DIGEST against wrong type (zset)} {\n        r del myzset\n        r zadd myzset 1 member\n        assert_error \"*WRONGTYPE*\" {r digest myzset}\n    }\n\n    test {DIGEST wrong number of arguments} {\n        assert_error \"*wrong number of arguments*\" {r digest}\n        assert_error \"*wrong number of arguments*\" {r digest key1 key2}\n    }\n\n    test {DIGEST with special characters and whitespace} {\n        r set mykey \"  spaces  \\t\\n\\r\"\n        set digest [r digest mykey]\n        assert {[string length $digest] == 16 && [string is xdigit -strict $digest]}\n    }\n\n    test {DIGEST consistency across SET operations} {\n        r set mykey \"original\"\n        set digest1 [r digest mykey]\n\n        r set mykey \"changed\"\n        set digest2 [r digest mykey]\n        assert {$digest1 != $digest2}\n\n        r set mykey \"original\"\n        set digest3 [r digest mykey]\n        assert_equal 
$digest1 $digest3\n    }\n\n    test {DELEX basic usage without conditions} {\n        r set mykey \"hello\"\n        assert_equal 1 [r delex mykey]\n\n        r hset myhash f v\n        assert_equal 1 [r delex myhash]\n\n        r zadd mystr 1 m\n        assert_equal 1 [r delex mystr]\n    }\n\n    test {DELEX basic usage with IFEQ} {\n        r set mykey \"hello\"\n        assert_equal 1 [r delex mykey IFEQ \"hello\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"hello\"\n        assert_equal 0 [r delex mykey IFEQ \"world\"]\n        assert_equal 1 [r exists mykey]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {DELEX basic usage with IFNE} {\n        r set mykey \"hello\"\n        assert_equal 1 [r delex mykey IFNE \"world\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"hello\"\n        assert_equal 0 [r delex mykey IFNE \"hello\"]\n        assert_equal 1 [r exists mykey]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {DELEX basic usage with IFDEQ} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal 1 [r delex mykey IFDEQ $digest]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal 0 [r delex mykey IFDEQ $wrong_digest]\n        assert_equal 1 [r exists mykey]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {DELEX basic usage with IFDNE} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal 1 [r delex mykey IFDNE $wrong_digest]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal 0 [r delex mykey IFDNE $digest]\n        assert_equal 1 [r exists mykey]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n   
 test {DELEX with non-existing key} {\n        r del nonexistent\n        assert_equal 0 [r delex nonexistent IFEQ \"hello\"]\n        assert_equal 0 [r delex nonexistent IFNE \"hello\"]\n        assert_equal 0 [r delex nonexistent IFDEQ 1234567890]\n        assert_equal 0 [r delex nonexistent IFDNE 1234567890]\n    }\n\n    test {DELEX with empty string value} {\n        r set mykey \"\"\n        assert_equal 1 [r delex mykey IFEQ \"\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"\"\n        assert_equal 0 [r delex mykey IFEQ \"notempty\"]\n        assert_equal 1 [r exists mykey]\n    }\n\n    test {DELEX with integer-encoded value} {\n        r set mykey 12345\n        assert_encoding int mykey\n        assert_equal 1 [r delex mykey IFEQ \"12345\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey 12345\n        assert_encoding int mykey\n        assert_equal 0 [r delex mykey IFEQ \"54321\"]\n        assert_equal 1 [r exists mykey]\n    }\n\n    test {DELEX with negative integer} {\n        r set mykey -999\n        assert_encoding int mykey\n        assert_equal 1 [r delex mykey IFEQ \"-999\"]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {DELEX with binary data} {\n        r set mykey \"\\x00\\x01\\x02\\x03\\xff\\xfe\"\n        assert_equal 1 [r delex mykey IFEQ \"\\x00\\x01\\x02\\x03\\xff\\xfe\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"\\x00\\x01\\x02\\x03\\xff\\xfe\"\n        assert_equal 0 [r delex mykey IFEQ \"\\x00\\x01\\x02\\x03\\xff\\xff\"]\n        assert_equal 1 [r exists mykey]\n    }\n\n    test {DELEX with unicode characters} {\n        r set mykey \"Hello 世界\"\n        assert_equal 1 [r delex mykey IFEQ \"Hello 世界\"]\n        assert_equal 0 [r exists mykey]\n\n        r set mykey \"Hello 世界\"\n        assert_equal 0 [r delex mykey IFEQ \"Hello World\"]\n        assert_equal 1 [r exists mykey]\n    }\n\n    test {DELEX with very long string} {\n        set longstring [string 
repeat \"Lorem ipsum dolor sit amet. \" 1000]\n        r set mykey $longstring\n        assert_equal 1 [r delex mykey IFEQ $longstring]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {DELEX against wrong type} {\n        r del mylist\n        r lpush mylist \"element\"\n        assert_error \"*ERR*\" {r delex mylist IFEQ \"element\"}\n\n        r del myhash\n        r hset myhash field value\n        assert_error \"*ERR*\" {r delex myhash IFEQ \"value\"}\n\n        r del myset\n        r sadd myset member\n        assert_error \"*ERR*\" {r delex myset IFEQ \"member\"}\n\n        r del myzset\n        r zadd myzset 1 member\n        assert_error \"*ERR*\" {r delex myzset IFEQ \"member\"}\n    }\n\n    test {DELEX wrong number of arguments} {\n        r del key1\n        assert_error \"*wrong number of arguments*\" {r delex key1 IFEQ}\n   \n        r set key1 x\n        assert_error \"*wrong number of arguments*\" {r delex key1 IFEQ}\n        assert_error \"*wrong number of arguments*\" {r delex key1 IFEQ value1 extra}\n    }\n\n    test {DELEX invalid condition} {\n        r set mykey \"hello\"\n        assert_error \"*Invalid condition*\" {r delex mykey INVALID \"hello\"}\n        assert_error \"*Invalid condition*\" {r delex mykey IF \"hello\"}\n        assert_error \"*Invalid condition*\" {r delex mykey EQ \"hello\"}\n    }\n\n    test {DELEX with special characters and whitespace} {\n        r set mykey \"  spaces  \\t\\n\\r\"\n        assert_equal 1 [r delex mykey IFEQ \"  spaces  \\t\\n\\r\"]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {DELEX digest consistency with same content} {\n        r set key1 \"identical\"\n        r set key2 \"identical\"\n        set digest1 [r digest key1]\n        set digest2 [r digest key2]\n        assert_equal $digest1 $digest2\n\n        # Both should be deletable with the same digest\n        assert_equal 1 [r delex key1 IFDEQ $digest2]\n        assert_equal 1 [r delex key2 IFDEQ $digest1]\n    
}\n\n    test {DELEX digest with different content} {\n        r set key1 \"value1\"\n        r set key2 \"value2\"\n        set digest1 [r digest key1]\n        set digest2 [r digest key2]\n        assert {$digest1 != $digest2}\n\n        # Should not be able to delete with wrong digest\n        assert_equal 0 [r delex key1 IFDEQ $digest2]\n        assert_equal 0 [r delex key2 IFDEQ $digest1]\n\n        # Should be able to delete with correct digest\n        assert_equal 1 [r delex key1 IFDEQ $digest1]\n        assert_equal 1 [r delex key2 IFDEQ $digest2]\n    }\n\n    test {DELEX propagate as DEL command to replica} {\n        r flushall\n        set repl [attach_to_replication_stream]\n        r set foo bar\n        r delex foo IFEQ bar\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n            {del foo}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {DELEX does not propagate when condition not met} {\n        r flushall\n        set repl [attach_to_replication_stream]\n        r set foo bar\n        r delex foo IFEQ baz\n        r set foo bar2\n        assert_replication_stream $repl {\n            {select *}\n            {set foo bar}\n            {set foo bar2}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test {DELEX with integer that looks like string} {\n        # Set as integer\n        r set key1 123\n        assert_encoding int key1\n        assert_equal 1 [r delex key1 IFEQ \"123\"]\n        assert_equal 0 [r exists key1]\n\n        # Set as string\n        r set key2 \"123\"\n        assert_equal 1 [r delex key2 IFEQ \"123\"]\n        assert_equal 0 [r exists key2]\n    }\n\n    test {Extended SET with IFEQ - key exists and matches} {\n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"hello\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ - key exists 
but doesn't match} {\n        r set mykey \"hello\"\n        assert_equal {} [r set mykey \"world\" IFEQ \"different\"]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ - key doesn't exist} {\n        r del mykey\n        assert_equal {} [r set mykey \"world\" IFEQ \"hello\"]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {Extended SET with IFNE - key exists and doesn't match} {\n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IFNE \"different\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFNE - key exists and matches} {\n        r set mykey \"hello\"\n        assert_equal {} [r set mykey \"world\" IFNE \"hello\"]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFNE - key doesn't exist} {\n        r del mykey\n        assert_equal \"OK\" [r set mykey \"world\" IFNE \"hello\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFDEQ - key exists and digest matches} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal \"OK\" [r set mykey \"world\" IFDEQ $digest]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFDEQ - key exists but digest doesn't match} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal {} [r set mykey \"world\" IFDEQ $wrong_digest]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFDEQ - key doesn't exist} {\n        r del mykey\n        set digest 1234567890\n        assert_equal {} [r set mykey \"world\" IFDEQ $digest]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {Extended SET with IFDNE - key exists and digest doesn't match} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] 
%x] + 1) & 0xffffffffffffffff]]\n        assert_equal \"OK\" [r set mykey \"world\" IFDNE $wrong_digest]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFDNE - key exists and digest matches} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal {} [r set mykey \"world\" IFDNE $digest]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFDNE - key doesn't exist} {\n        r del mykey\n        set digest 1234567890\n        assert_equal \"OK\" [r set mykey \"world\" IFDNE $digest]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ and GET - key exists and matches} {\n        r set mykey \"hello\"\n        assert_equal \"hello\" [r set mykey \"world\" IFEQ \"hello\" GET]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ and GET - key exists but doesn't match} {\n        r set mykey \"hello\"\n        assert_equal \"hello\" [r set mykey \"world\" IFEQ \"different\" GET]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ and GET - key doesn't exist} {\n        r del mykey\n        assert_equal {} [r set mykey \"world\" IFEQ \"hello\" GET]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {Extended SET with IFNE and GET - key exists and doesn't match} {\n        r set mykey \"hello\"\n        assert_equal \"hello\" [r set mykey \"world\" IFNE \"different\" GET]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFNE and GET - key exists and matches} {\n        r set mykey \"hello\"\n        assert_equal \"hello\" [r set mykey \"world\" IFNE \"hello\" GET]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFNE and GET - key doesn't exist} {\n        r del mykey\n        assert_equal {} [r set mykey \"world\" IFNE \"hello\" GET]\n        assert_equal \"world\" [r get mykey]\n   
 }\n\n    test {Extended SET with IFDEQ and GET - key exists and digest matches} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal \"hello\" [r set mykey \"world\" IFDEQ $digest GET]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFDEQ and GET - key exists but digest doesn't match} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal \"hello\" [r set mykey \"world\" IFDEQ $wrong_digest GET]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFDEQ and GET - key doesn't exist} {\n        r del mykey\n        set digest 1234567890\n        assert_equal {} [r set mykey \"world\" IFDEQ $digest GET]\n        assert_equal 0 [r exists mykey]\n    }\n\n    test {Extended SET with IFDNE and GET - key exists and digest doesn't match} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal \"hello\" [r set mykey \"world\" IFDNE $wrong_digest GET]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFDNE and GET - key exists and digest matches} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal \"hello\" [r set mykey \"world\" IFDNE $digest GET]\n        assert_equal \"hello\" [r get mykey]\n    }\n\n    test {Extended SET with IFDNE and GET - key doesn't exist} {\n        r del mykey\n        set digest 1234567890\n        assert_equal {} [r set mykey \"world\" IFDNE $digest GET]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with IFEQ and expiration} {\n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"hello\" EX 10]\n        assert_equal \"world\" [r get mykey]\n        assert_range [r ttl mykey] 5 10\n    }\n\n    test 
{Extended SET with IFNE and expiration} {\n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IFNE \"different\" EX 10]\n        assert_equal \"world\" [r get mykey]\n        assert_range [r ttl mykey] 5 10\n    }\n\n    test {Extended SET with IFDEQ and expiration} {\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        assert_equal \"OK\" [r set mykey \"world\" IFDEQ $digest EX 10]\n        assert_equal \"world\" [r get mykey]\n        assert_range [r ttl mykey] 5 10\n    }\n\n    test {Extended SET with IFDNE and expiration} {\n        r set mykey \"hello\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal \"OK\" [r set mykey \"world\" IFDNE $wrong_digest EX 10]\n        assert_equal \"world\" [r get mykey]\n        assert_range [r ttl mykey] 5 10\n    }\n\n    test {Extended SET with IFEQ against wrong type} {\n        r del mylist\n        r lpush mylist \"element\"\n        assert_error \"*WRONGTYPE*\" {r set mylist \"value\" IFEQ \"element\"}\n    }\n\n    test {Extended SET with IFNE against wrong type} {\n        r del myhash\n        r hset myhash field value\n        assert_error \"*WRONGTYPE*\" {r set myhash \"value\" IFNE \"value\"}\n    }\n\n    test {Extended SET with IFDEQ against wrong type} {\n        r del myset\n        r sadd myset member\n        assert_error \"*WRONGTYPE*\" {r set myset \"value\" IFDEQ 1234567890}\n    }\n\n    test {Extended SET with IFDNE against wrong type} {\n        r del myzset\n        r zadd myzset 1 member\n        assert_error \"*WRONGTYPE*\" {r set myzset \"value\" IFDNE 1234567890}\n    }\n\n    test {Extended SET with integer-encoded value and IFEQ} {\n        r set mykey 12345\n        assert_encoding int mykey\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"12345\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with integer-encoded value and 
IFNE} {\n        r set mykey 12345\n        assert_encoding int mykey\n        assert_equal \"OK\" [r set mykey \"world\" IFNE \"54321\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with binary data and IFEQ} {\n        r set mykey \"\\x00\\x01\\x02\\x03\\xff\\xfe\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"\\x00\\x01\\x02\\x03\\xff\\xfe\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with unicode characters and IFEQ} {\n        r set mykey \"Hello 世界\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"Hello 世界\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with empty string and IFEQ} {\n        r set mykey \"\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with empty string and IFNE} {\n        r set mykey \"\"\n        assert_equal {} [r set mykey \"world\" IFNE \"\"]\n        assert_equal \"\" [r get mykey]\n    }\n\n    test {Extended SET case insensitive conditions} {\n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" ifeq \"hello\"]\n        assert_equal \"world\" [r get mykey]\n        \n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IfEq \"hello\"]\n        assert_equal \"world\" [r get mykey]\n        \n        r set mykey \"hello\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"hello\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with special characters and IFEQ} {\n        r set mykey \"  spaces  \\t\\n\\r\"\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ \"  spaces  \\t\\n\\r\"]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET digest consistency with same content} {\n        r set key1 \"identical\"\n        r set key2 \"identical\"\n        set digest1 [r digest key1]\n        set 
digest2 [r digest key2]\n        assert_equal $digest1 $digest2\n        \n        # Both should be settable with the same digest\n        assert_equal \"OK\" [r set key1 \"new1\" IFDEQ $digest1]\n        assert_equal \"OK\" [r set key2 \"new2\" IFDEQ $digest2]\n        assert_equal \"new1\" [r get key1]\n        assert_equal \"new2\" [r get key2]\n    }\n\n    test {Extended SET digest with different content} {\n        r set key1 \"value1\"\n        r set key2 \"value2\"\n        set digest1 [r digest key1]\n        set digest2 [r digest key2]\n        assert {$digest1 != $digest2}\n        \n        # Should not be able to set with wrong digest\n        assert_equal {} [r set key1 \"new1\" IFDEQ $digest2]\n        assert_equal {} [r set key2 \"new2\" IFDEQ $digest1]\n        assert_equal \"value1\" [r get key1]\n        assert_equal \"value2\" [r get key2]\n        \n        # Should be able to set with correct digest\n        assert_equal \"OK\" [r set key1 \"new1\" IFDEQ $digest1]\n        assert_equal \"OK\" [r set key2 \"new2\" IFDEQ $digest2]\n        assert_equal \"new1\" [r get key1]\n        assert_equal \"new2\" [r get key2]\n    }\n\n    test {Extended SET with very long string and IFEQ} {\n        set longstring [string repeat \"Lorem ipsum dolor sit amet. 
\" 1000]\n        r set mykey $longstring\n        assert_equal \"OK\" [r set mykey \"world\" IFEQ $longstring]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {Extended SET with non-matching digest} {\n        r set mykey \"test\"\n        set wrong_digest [format %016x [expr ([scan [r digest mykey] %x] + 1) & 0xffffffffffffffff]]\n        assert_equal \"OK\" [r set mykey \"world\" IFDNE $wrong_digest]\n        assert_equal \"world\" [r get mykey]\n    }\n\n    test {DIGEST always returns exactly 16 hex characters with leading zeros} {\n        # Test with a value that produces a digest with leading zeros\n        r set foo \"v8lf0c11xh8ymlqztfd3eeq16kfn4sspw7fqmnuuq3k3t75em5wdizgcdw7uc26nnf961u2jkfzkjytls2kwlj7626sd\"\n        # Verify it matches the expected value with leading zeros\n        assert_equal \"00006c38adf31777\" [r digest foo]\n    }\n\n    test {IFDEQ/IFDNE reject digest with incorrect format} {\n        r set mykey \"test\"\n        set digest [r digest mykey]\n\n        # Test with too short digest (15 chars)\n        set short_digest [string range $digest 1 end]\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDEQ $short_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDNE $short_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDEQ $short_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDNE $short_digest}\n\n        # Test with too long digest (17 chars)\n        set long_digest \"0${digest}\"\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDEQ $long_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDNE $long_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDEQ 
$long_digest}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDNE $long_digest}\n\n        # Test with empty digest\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDEQ \"\"}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r set mykey \"new\" IFDNE \"\"}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDEQ \"\"}\n        assert_error \"*must be exactly 16 hexadecimal characters*\" {r delex mykey IFDNE \"\"}\n    }\n\n    test {IFDEQ/IFDNE accept uppercase hex digits (case-insensitive)} {\n        # Test SET IFDEQ with uppercase\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        set upper_digest [string toupper $digest]\n        assert_equal \"OK\" [r set mykey \"world\" IFDEQ $upper_digest]\n        assert_equal \"world\" [r get mykey]\n\n        # Test SET IFDNE with uppercase\n        r set mykey \"hello\"\n        set digest [r digest mykey]\n        set upper_digest [string toupper $digest]\n        assert_equal \"\" [r set mykey \"world\" IFDNE $upper_digest]\n        assert_equal \"hello\" [r get mykey]\n\n        # Test DELEX IFDEQ with uppercase\n        r set mykey \"hello\"\n        set upper_digest [string toupper [r digest mykey]]\n        assert_equal 1 [r delex mykey IFDEQ $upper_digest]\n        assert_equal 0 [r exists mykey]\n\n        # Test DELEX IFDNE with uppercase\n        r set mykey \"hello\"\n        set upper_digest [string toupper [r digest mykey]]\n        assert_equal 0 [r delex mykey IFDNE $upper_digest]\n        assert_equal 1 [r exists mykey]\n    }\n}\n"
  },
  {
    "path": "tests/unit/type/zset.tcl",
    "content": "start_server {tags {\"zset\"}} {\n    proc create_zset {key items} {\n        r del $key\n        foreach {score entry} $items {\n            r zadd $key $score $entry\n        }\n    }\n\n    # A helper function to verify either ZPOP* or ZMPOP* response.\n    proc verify_pop_response {pop res zpop_expected_response zmpop_expected_response} {\n        if {[string match \"*ZM*\" $pop]} {\n            assert_equal $res $zmpop_expected_response\n        } else {\n            assert_equal $res $zpop_expected_response\n        }\n    }\n\n    # A helper function to verify either ZPOP* or ZMPOP* response when given one input key.\n    proc verify_zpop_response {rd pop key count zpop_expected_response zmpop_expected_response} {\n        if {[string match \"ZM*\" $pop]} {\n            lassign [split $pop \"_\"] pop where\n\n            if {$count == 0} {\n                set res [$rd $pop 1 $key $where]\n            } else {\n                set res [$rd $pop 1 $key $where COUNT $count]\n            }\n        } else {\n            if {$count == 0} {\n                set res [$rd $pop $key]\n            } else {\n                set res [$rd $pop $key $count]\n            }\n        }\n        verify_pop_response $pop $res $zpop_expected_response $zmpop_expected_response\n    }\n\n    # A helper function to verify either BZPOP* or BZMPOP* response when given one input key.\n    proc verify_bzpop_response {rd pop key timeout count bzpop_expected_response bzmpop_expected_response} {\n        if {[string match \"BZM*\" $pop]} {\n            lassign [split $pop \"_\"] pop where\n\n            if {$count == 0} {\n                $rd $pop $timeout 1 $key $where\n            } else {\n                $rd $pop $timeout 1 $key $where COUNT $count\n            }\n        } else {\n            $rd $pop $key $timeout\n        }\n        verify_pop_response $pop [$rd read] $bzpop_expected_response $bzmpop_expected_response\n    }\n\n    # A helper function to verify 
either ZPOP* or ZMPOP* response when given two input keys.\n    proc verify_bzpop_two_key_response {rd pop key key2 timeout count bzpop_expected_response bzmpop_expected_response} {\n        if {[string match \"BZM*\" $pop]} {\n            lassign [split $pop \"_\"] pop where\n\n            if {$count == 0} {\n                $rd $pop $timeout 2 $key $key2 $where\n            } else {\n                $rd $pop $timeout 2 $key $key2 $where COUNT $count\n            }\n        } else {\n            $rd $pop $key $key2 $timeout\n        }\n        verify_pop_response $pop [$rd read] $bzpop_expected_response $bzmpop_expected_response\n    }\n\n    # A helper function to execute either BZPOP* or BZMPOP* with one input key.\n    proc bzpop_command {rd pop key timeout} {\n        if {[string match \"BZM*\" $pop]} {\n            lassign [split $pop \"_\"] pop where\n            $rd $pop $timeout 1 $key $where COUNT 1\n        } else {\n            $rd $pop $key $timeout\n        }\n    }\n\n    # A helper function to verify nil response in readraw based on RESP version.\n    proc verify_nil_response {resp nil_response} {\n        if {$resp == 2} {\n            assert_equal $nil_response {*-1}\n        } elseif {$resp == 3} {\n            assert_equal $nil_response {_}\n        }\n    }\n\n    # A helper function to verify zset score response in readraw based on RESP version.\n    proc verify_score_response {rd resp score} {\n        if {$resp == 2} {\n            assert_equal [$rd read] {$1}\n            assert_equal [$rd read] $score\n        } elseif {$resp == 3} {\n            assert_equal [$rd read] \",$score\"\n        }\n    }\n\n    proc basics {encoding} {\n        set original_max_entries [lindex [r config get zset-max-ziplist-entries] 1]\n        set original_max_value [lindex [r config get zset-max-ziplist-value] 1]\n        if {$encoding == \"listpack\"} {\n            r config set zset-max-ziplist-entries 128\n            r config set zset-max-ziplist-value 64\n 
       } elseif {$encoding == \"skiplist\"} {\n            r config set zset-max-ziplist-entries 0\n            r config set zset-max-ziplist-value 0\n        } else {\n            puts \"Unknown sorted set encoding\"\n            exit\n        }\n\n        test \"Check encoding - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x\n            assert_encoding $encoding ztmp\n        }\n\n        test \"ZSET basic ZADD and score update - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x\n            r zadd ztmp 20 y\n            r zadd ztmp 30 z\n            assert_equal {x y z} [r zrange ztmp 0 -1]\n\n            r zadd ztmp 1 y\n            assert_equal {y x z} [r zrange ztmp 0 -1]\n        }\n\n        test \"ZSET element can't be set to NaN with ZADD - $encoding\" {\n            assert_error \"*not*float*\" {r zadd myzset nan abc}\n        }\n\n        test \"ZSET element can't be set to NaN with ZINCRBY - $encoding\" {\n            assert_error \"*not*float*\" {r zincrby myzset nan abc}\n        }\n\n        test \"ZADD with options syntax error with incomplete pair - $encoding\" {\n            r del ztmp\n            catch {r zadd ztmp xx 10 x 20} err\n            set err\n        } {ERR*}\n\n        test \"ZADD XX option without key - $encoding\" {\n            r del ztmp\n            assert {[r zadd ztmp xx 10 x] == 0}\n            assert {[r type ztmp] eq {none}}\n        }\n\n        test \"ZADD XX existing key - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x\n            assert {[r zadd ztmp xx 20 y] == 0}\n            assert {[r zcard ztmp] == 1}\n        }\n\n        test \"ZADD XX returns the number of elements actually added - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x\n            set retval [r zadd ztmp 10 x 20 y 30 z]\n            assert {$retval == 2}\n        }\n\n        test \"ZADD XX updates existing elements score - $encoding\" {\n            r del ztmp\n         
   r zadd ztmp 10 x 20 y 30 z\n            r zadd ztmp xx 5 foo 11 x 21 y 40 zap\n            assert {[r zcard ztmp] == 3}\n            assert {[r zscore ztmp x] == 11}\n            assert {[r zscore ztmp y] == 21}\n        }\n\n        test \"ZADD GT updates existing elements when new scores are greater - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp gt ch 5 foo 11 x 21 y 29 z] == 3}\n            assert {[r zcard ztmp] == 4}\n            assert {[r zscore ztmp x] == 11}\n            assert {[r zscore ztmp y] == 21}\n            assert {[r zscore ztmp z] == 30}\n        }\n\n        test \"ZADD LT updates existing elements when new scores are lower - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp lt ch 5 foo 11 x 21 y 29 z] == 2}\n            assert {[r zcard ztmp] == 4}\n            assert {[r zscore ztmp x] == 10}\n            assert {[r zscore ztmp y] == 20}\n            assert {[r zscore ztmp z] == 29}\n        }\n\n        test \"ZADD GT XX updates existing elements when new scores are greater and skips new elements - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp gt xx ch 5 foo 11 x 21 y 29 z] == 2}\n            assert {[r zcard ztmp] == 3}\n            assert {[r zscore ztmp x] == 11}\n            assert {[r zscore ztmp y] == 21}\n            assert {[r zscore ztmp z] == 30}\n        }\n\n        test \"ZADD LT XX updates existing elements when new scores are lower and skips new elements - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp lt xx ch 5 foo 11 x 21 y 29 z] == 1}\n            assert {[r zcard ztmp] == 3}\n            assert {[r zscore ztmp x] == 10}\n            assert {[r zscore ztmp y] == 20}\n            assert {[r zscore ztmp z] == 29}\n        }\n\n        test \"ZADD XX and NX are not 
compatible - $encoding\" {\n            r del ztmp\n            catch {r zadd ztmp xx nx 10 x} err\n            set err\n        } {ERR*}\n\n        test \"ZADD NX with non-existing key - $encoding\" {\n            r del ztmp\n            r zadd ztmp nx 10 x 20 y 30 z\n            assert {[r zcard ztmp] == 3}\n        }\n\n        test \"ZADD NX only adds new elements without updating old ones - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp nx 11 x 21 y 100 a 200 b] == 2}\n            assert {[r zscore ztmp x] == 10}\n            assert {[r zscore ztmp y] == 20}\n            assert {[r zscore ztmp a] == 100}\n            assert {[r zscore ztmp b] == 200}\n        }\n\n        test \"ZADD GT and NX are not compatible - $encoding\" {\n            r del ztmp\n            catch {r zadd ztmp gt nx 10 x} err\n            set err\n        } {ERR*}\n\n        test \"ZADD LT and NX are not compatible - $encoding\" {\n            r del ztmp\n            catch {r zadd ztmp lt nx 10 x} err\n            set err\n        } {ERR*}\n\n        test \"ZADD LT and GT are not compatible - $encoding\" {\n            r del ztmp\n            catch {r zadd ztmp lt gt 10 x} err\n            set err\n        } {ERR*}\n\n        test \"ZADD INCR LT/GT replies with null if the score is not updated - $encoding\" {\n            r del ztmp\n            r zadd ztmp 28 x\n            assert {[r zadd ztmp lt incr 1 x] eq {}}\n            assert {[r zscore ztmp x] == 28}\n            assert {[r zadd ztmp gt incr -1 x] eq {}}\n            assert {[r zscore ztmp x] == 28}\n        }\n\n        test \"ZADD INCR LT/GT with inf - $encoding\" {\n            r del ztmp\n            r zadd ztmp +inf x -inf y\n\n            assert {[r zadd ztmp lt incr 1 x] eq {}}\n            assert {[r zscore ztmp x] == inf}\n            assert {[r zadd ztmp gt incr -1 x] eq {}}\n            assert {[r zscore ztmp x] == inf}\n            assert {[r zadd ztmp lt incr 
-1 x] eq {}}\n            assert {[r zscore ztmp x] == inf}\n            assert {[r zadd ztmp gt incr 1 x] eq {}}\n            assert {[r zscore ztmp x] == inf}\n\n            assert {[r zadd ztmp lt incr 1 y] eq {}}\n            assert {[r zscore ztmp y] == -inf}\n            assert {[r zadd ztmp gt incr -1 y] eq {}}\n            assert {[r zscore ztmp y] == -inf}\n            assert {[r zadd ztmp lt incr -1 y] eq {}}\n            assert {[r zscore ztmp y] == -inf}\n            assert {[r zadd ztmp gt incr 1 y] eq {}}\n            assert {[r zscore ztmp y] == -inf}\n        }\n\n        test \"ZADD INCR works like ZINCRBY - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            r zadd ztmp INCR 15 x\n            assert {[r zscore ztmp x] == 25}\n        }\n\n        test \"ZADD INCR works only with a single score-element pair - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            catch {r zadd ztmp INCR 15 x 10 y} err\n            set err\n        } {ERR*}\n\n        test \"ZADD CH option changes return value to all changed elements - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x 20 y 30 z\n            assert {[r zadd ztmp 11 x 21 y 30 z] == 0}\n            assert {[r zadd ztmp ch 12 x 22 y 30 z] == 2}\n        }\n\n        test \"ZINCRBY calls leading to NaN result in error - $encoding\" {\n            r zincrby myzset +inf abc\n            assert_error \"*NaN*\" {r zincrby myzset -inf abc}\n        }\n\n        test \"ZINCRBY accepts hexadecimal inputs - $encoding\" {\n            r del zhexa\n\n            # Add some hexadecimal values to the sorted set 'zhexa'\n            r zadd zhexa 0x0p+0 \"zero\"\n            r zadd zhexa 0x1p+0 \"one\"\n\n            # Increment them\n            # 0 + 0 = 0\n            r zincrby zhexa 0x0p+0 \"zero\"\n            # 1 + 1 = 2\n            r zincrby zhexa 0x1p+0 \"one\"\n\n            assert_equal 0 [r zscore zhexa \"zero\"]\n     
       assert_equal 2 [r zscore zhexa \"one\"]\n        }\n\n        test \"ZINCRBY against invalid incr value - $encoding\" {\n            r del zincr\n            r zadd zincr 1 \"one\"\n            assert_error \"*value is not a valid*\" {r zincrby zincr v \"one\"}\n            assert_error \"*value is not a valid float\" {r zincrby zincr 23456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789123456789 \"one\"}\n        }\n\n        test \"ZADD - Variadic version base case - $encoding\" {\n            r del myzset\n            list [r zadd myzset 10 a 20 b 30 c] [r zrange myzset 0 -1 withscores]\n        } {3 {a 10 b 20 c 30}}\n\n        test \"ZADD - Return value is the number of actually added items - $encoding\" {\n            list [r zadd myzset 5 x 20 b 30 c] [r zrange myzset 0 -1 withscores]\n        } {1 {x 5 a 10 b 20 c 30}}\n\n        test \"ZADD - Variadic version adds nothing on a single parsing error - $encoding\" {\n            r del myzset\n            catch {r zadd myzset 10 a 20 b 30.badscore c} e\n            assert_match {*ERR*not*float*} $e\n            r exists myzset\n        } {0}\n\n        test \"ZADD - Variadic version will raise an error on a missing arg - $encoding\" {\n            r del myzset\n            catch {r zadd myzset 10 a 20 b 30 c 40} e\n            assert_match {*ERR*syntax*} $e\n        }\n\n        test \"ZINCRBY is not variadic even though it shares the ZADD implementation - $encoding\" {\n            r del myzset\n            catch {r zincrby myzset 10 a 20 b 30 c} e\n            assert_match {*ERR*wrong*number*arg*} $e\n        }\n\n        test \"ZCARD basics - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 a 20 b 30 c\n            assert_equal 3 [r 
zcard ztmp]\n            assert_equal 0 [r zcard zdoesntexist]\n        }\n\n        test \"ZREM removes key after last element is removed - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 x\n            r zadd ztmp 20 y\n\n            assert_equal 1 [r exists ztmp]\n            assert_equal 0 [r zrem ztmp z]\n            assert_equal 1 [r zrem ztmp y]\n            assert_equal 1 [r zrem ztmp x]\n            assert_equal 0 [r exists ztmp]\n        }\n\n        test \"ZREM variadic version - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 a 20 b 30 c\n            assert_equal 2 [r zrem ztmp x y a b k]\n            assert_equal 0 [r zrem ztmp foo bar]\n            assert_equal 1 [r zrem ztmp c]\n            r exists ztmp\n        } {0}\n\n        test \"ZREM variadic version -- remove elements after key deletion - $encoding\" {\n            r del ztmp\n            r zadd ztmp 10 a 20 b 30 c\n            r zrem ztmp a b c d e f g\n        } {3}\n\n        test \"ZRANGE basics - $encoding\" {\n            r del ztmp\n            r zadd ztmp 1 a\n            r zadd ztmp 2 b\n            r zadd ztmp 3 c\n            r zadd ztmp 4 d\n\n            assert_equal {a b c d} [r zrange ztmp 0 -1]\n            assert_equal {a b c} [r zrange ztmp 0 -2]\n            assert_equal {b c d} [r zrange ztmp 1 -1]\n            assert_equal {b c} [r zrange ztmp 1 -2]\n            assert_equal {c d} [r zrange ztmp -2 -1]\n            assert_equal {c} [r zrange ztmp -2 -2]\n\n            # out of range start index\n            assert_equal {a b c} [r zrange ztmp -5 2]\n            assert_equal {a b} [r zrange ztmp -5 1]\n            assert_equal {} [r zrange ztmp 5 -1]\n            assert_equal {} [r zrange ztmp 5 -2]\n\n            # out of range end index\n            assert_equal {a b c d} [r zrange ztmp 0 5]\n            assert_equal {b c d} [r zrange ztmp 1 5]\n            assert_equal {} [r zrange ztmp 0 -5]\n            assert_equal {} [r zrange 
ztmp 1 -5]\n\n            # withscores\n            assert_equal {a 1 b 2 c 3 d 4} [r zrange ztmp 0 -1 withscores]\n        }\n\n        test \"ZREVRANGE basics - $encoding\" {\n            r del ztmp\n            r zadd ztmp 1 a\n            r zadd ztmp 2 b\n            r zadd ztmp 3 c\n            r zadd ztmp 4 d\n\n            assert_equal {d c b a} [r zrevrange ztmp 0 -1]\n            assert_equal {d c b} [r zrevrange ztmp 0 -2]\n            assert_equal {c b a} [r zrevrange ztmp 1 -1]\n            assert_equal {c b} [r zrevrange ztmp 1 -2]\n            assert_equal {b a} [r zrevrange ztmp -2 -1]\n            assert_equal {b} [r zrevrange ztmp -2 -2]\n\n            # out of range start index\n            assert_equal {d c b} [r zrevrange ztmp -5 2]\n            assert_equal {d c} [r zrevrange ztmp -5 1]\n            assert_equal {} [r zrevrange ztmp 5 -1]\n            assert_equal {} [r zrevrange ztmp 5 -2]\n\n            # out of range end index\n            assert_equal {d c b a} [r zrevrange ztmp 0 5]\n            assert_equal {c b a} [r zrevrange ztmp 1 5]\n            assert_equal {} [r zrevrange ztmp 0 -5]\n            assert_equal {} [r zrevrange ztmp 1 -5]\n\n            # withscores\n            assert_equal {d 4 c 3 b 2 a 1} [r zrevrange ztmp 0 -1 withscores]\n        }\n\n        test \"ZRANK/ZREVRANK basics - $encoding\" {\n            set nullres {$-1}\n            if {$::force_resp3} {\n                set nullres {_}\n            }\n            r del zranktmp\n            r zadd zranktmp 10 x\n            r zadd zranktmp 20 y\n            r zadd zranktmp 30 z\n            assert_equal 0 [r zrank zranktmp x]\n            assert_equal 1 [r zrank zranktmp y]\n            assert_equal 2 [r zrank zranktmp z]\n            assert_equal 2 [r zrevrank zranktmp x]\n            assert_equal 1 [r zrevrank zranktmp y]\n            assert_equal 0 [r zrevrank zranktmp z]\n            r readraw 1\n            assert_equal $nullres [r zrank zranktmp foo]\n        
    assert_equal $nullres [r zrevrank zranktmp foo]\n            r readraw 0\n\n            # withscore\n            set nullres {*-1}\n            if {$::force_resp3} {\n                set nullres {_}\n            }\n            assert_equal {0 10} [r zrank zranktmp x withscore]\n            assert_equal {1 20} [r zrank zranktmp y withscore]\n            assert_equal {2 30} [r zrank zranktmp z withscore]\n            assert_equal {2 10} [r zrevrank zranktmp x withscore]\n            assert_equal {1 20} [r zrevrank zranktmp y withscore]\n            assert_equal {0 30} [r zrevrank zranktmp z withscore]\n            r readraw 1\n            assert_equal $nullres [r zrank zranktmp foo withscore]\n            assert_equal $nullres [r zrevrank zranktmp foo withscore]\n            r readraw 0\n        }\n\n        test \"ZRANK - after deletion - $encoding\" {\n            r zrem zranktmp y\n            assert_equal 0 [r zrank zranktmp x]\n            assert_equal 1 [r zrank zranktmp z]\n            assert_equal {0 10} [r zrank zranktmp x withscore]\n            assert_equal {1 30} [r zrank zranktmp z withscore]\n        }\n\n        test \"ZINCRBY - can create a new sorted set - $encoding\" {\n            r del zset\n            r zincrby zset 1 foo\n            assert_equal {foo} [r zrange zset 0 -1]\n            assert_equal 1 [r zscore zset foo]\n        }\n\n        test \"ZINCRBY - increment and decrement - $encoding\" {\n            r zincrby zset 2 foo\n            r zincrby zset 1 bar\n            assert_equal {bar foo} [r zrange zset 0 -1]\n\n            r zincrby zset 10 bar\n            r zincrby zset -5 foo\n            r zincrby zset -5 bar\n            assert_equal {foo bar} [r zrange zset 0 -1]\n\n            assert_equal -2 [r zscore zset foo]\n            assert_equal  6 [r zscore zset bar]\n        }\n\n        test \"ZINCRBY return value - $encoding\" {\n            r del ztmp\n            set retval [r zincrby ztmp 1.0 x]\n            assert 
{$retval == 1.0}\n        }\n\n        proc create_default_zset {} {\n            create_zset zset {-inf a 1 b 2 c 3 d 4 e 5 f +inf g}\n        }\n\n        proc create_long_zset {key length} {\n            r del $key\n            for {set i 0} {$i < $length} {incr i 1} {\n                r zadd $key $i i$i\n            }\n        }\n\n        test \"ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics - $encoding\" {\n            create_default_zset\n\n            # inclusive range\n            assert_equal {a b c} [r zrangebyscore zset -inf 2]\n            assert_equal {b c d} [r zrangebyscore zset 0 3]\n            assert_equal {d e f} [r zrangebyscore zset 3 6]\n            assert_equal {e f g} [r zrangebyscore zset 4 +inf]\n            assert_equal {c b a} [r zrevrangebyscore zset 2 -inf]\n            assert_equal {d c b} [r zrevrangebyscore zset 3 0]\n            assert_equal {f e d} [r zrevrangebyscore zset 6 3]\n            assert_equal {g f e} [r zrevrangebyscore zset +inf 4]\n            assert_equal 3 [r zcount zset 0 3]\n\n            # exclusive range\n            assert_equal {b}   [r zrangebyscore zset (-inf (2]\n            assert_equal {b c} [r zrangebyscore zset (0 (3]\n            assert_equal {e f} [r zrangebyscore zset (3 (6]\n            assert_equal {f}   [r zrangebyscore zset (4 (+inf]\n            assert_equal {b}   [r zrevrangebyscore zset (2 (-inf]\n            assert_equal {c b} [r zrevrangebyscore zset (3 (0]\n            assert_equal {f e} [r zrevrangebyscore zset (6 (3]\n            assert_equal {f}   [r zrevrangebyscore zset (+inf (4]\n            assert_equal 2 [r zcount zset (0 (3]\n\n            # test empty ranges\n            r zrem zset a\n            r zrem zset g\n\n            # inclusive\n            assert_equal {} [r zrangebyscore zset 4 2]\n            assert_equal {} [r zrangebyscore zset 6 +inf]\n            assert_equal {} [r zrangebyscore zset -inf -6]\n            assert_equal {} [r zrevrangebyscore zset +inf 6]\n           
 assert_equal {} [r zrevrangebyscore zset -6 -inf]\n\n            # exclusive\n            assert_equal {} [r zrangebyscore zset (4 (2]\n            assert_equal {} [r zrangebyscore zset 2 (2]\n            assert_equal {} [r zrangebyscore zset (2 2]\n            assert_equal {} [r zrangebyscore zset (6 (+inf]\n            assert_equal {} [r zrangebyscore zset (-inf (-6]\n            assert_equal {} [r zrevrangebyscore zset (+inf (6]\n            assert_equal {} [r zrevrangebyscore zset (-6 (-inf]\n\n            # empty inner range\n            assert_equal {} [r zrangebyscore zset 2.4 2.6]\n            assert_equal {} [r zrangebyscore zset (2.4 2.6]\n            assert_equal {} [r zrangebyscore zset 2.4 (2.6]\n            assert_equal {} [r zrangebyscore zset (2.4 (2.6]\n        }\n\n        test \"ZRANGEBYSCORE with WITHSCORES - $encoding\" {\n            create_default_zset\n            assert_equal {b 1 c 2 d 3} [r zrangebyscore zset 0 3 withscores]\n            assert_equal {d 3 c 2 b 1} [r zrevrangebyscore zset 3 0 withscores]\n        }\n\n        test \"ZRANGEBYSCORE with LIMIT - $encoding\" {\n            create_default_zset\n            assert_equal {b c}   [r zrangebyscore zset 0 10 LIMIT 0 2]\n            assert_equal {d e f} [r zrangebyscore zset 0 10 LIMIT 2 3]\n            assert_equal {d e f} [r zrangebyscore zset 0 10 LIMIT 2 10]\n            assert_equal {}      [r zrangebyscore zset 0 10 LIMIT 20 10]\n            assert_equal {}      [r zrangebyscore zset 0 10 LIMIT -1 2]\n            assert_equal {f e}   [r zrevrangebyscore zset 10 0 LIMIT 0 2]\n            assert_equal {d c b} [r zrevrangebyscore zset 10 0 LIMIT 2 3]\n            assert_equal {d c b} [r zrevrangebyscore zset 10 0 LIMIT 2 10]\n            assert_equal {}      [r zrevrangebyscore zset 10 0 LIMIT 20 10]\n            assert_equal {}      [r zrevrangebyscore zset 10 0 LIMIT -1 2]\n            # zrangebyscore uses different logic when offset > ZSKIPLIST_MAX_SEARCH\n            
create_long_zset zset 30\n            assert_equal {i12 i13 i14} [r zrangebyscore zset 0 20 LIMIT 12 3]\n            assert_equal {i14 i15}     [r zrangebyscore zset 0 20 LIMIT 14 2]\n            assert_equal {i19 i20 i21} [r zrangebyscore zset 0 30 LIMIT 19 3]\n            assert_equal {i29}         [r zrangebyscore zset 10 30 LIMIT 19 2]\n            assert_equal {}            [r zrangebyscore zset 0 20 LIMIT -1 3]\n            assert_equal {i17 i16 i15} [r zrevrangebyscore zset 30 10 LIMIT 12 3]\n            assert_equal {i6 i5}       [r zrevrangebyscore zset 20 0 LIMIT 14 2]\n            assert_equal {i2 i1 i0}    [r zrevrangebyscore zset 20 0 LIMIT 18 5]\n            assert_equal {i0}          [r zrevrangebyscore zset 20 0 LIMIT 20 5]\n            assert_equal {}            [r zrevrangebyscore zset 30 10 LIMIT -1 3]\n        }\n\n        test \"ZRANGEBYSCORE with LIMIT and WITHSCORES - $encoding\" {\n            create_default_zset\n            assert_equal {e 4 f 5} [r zrangebyscore zset 2 5 LIMIT 2 3 WITHSCORES]\n            assert_equal {d 3 c 2} [r zrevrangebyscore zset 5 2 LIMIT 2 3 WITHSCORES]\n            assert_equal {} [r zrangebyscore zset 2 5 LIMIT 12 13 WITHSCORES]\n        }\n\n        test \"ZRANGEBYSCORE with non-value min or max - $encoding\" {\n            assert_error \"*not*float*\" {r zrangebyscore fooz str 1}\n            assert_error \"*not*float*\" {r zrangebyscore fooz 1 str}\n            assert_error \"*not*float*\" {r zrangebyscore fooz 1 NaN}\n        }\n\n        proc create_default_lex_zset {} {\n            create_zset zset {0 alpha 0 bar 0 cool 0 down\n                              0 elephant 0 foo 0 great 0 hill\n                              0 omega}\n        }\n\n        proc create_long_lex_zset {} {\n            create_zset zset {0 alpha 0 bar 0 cool 0 down\n                              0 elephant 0 foo 0 great 0 hill\n                              0 island 0 jacket 0 key 0 lip \n                              0 max 0 null 0 
omega 0 point\n                              0 query 0 result 0 sea 0 tree}\n        }\n\n        test \"ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics - $encoding\" {\n            create_default_lex_zset\n\n            # inclusive range\n            assert_equal {alpha bar cool} [r zrangebylex zset - \\[cool]\n            assert_equal {bar cool down} [r zrangebylex zset \\[bar \\[down]\n            assert_equal {great hill omega} [r zrangebylex zset \\[g +]\n            assert_equal {cool bar alpha} [r zrevrangebylex zset \\[cool -]\n            assert_equal {down cool bar} [r zrevrangebylex zset \\[down \\[bar]\n            assert_equal {omega hill great foo elephant down} [r zrevrangebylex zset + \\[d]\n            assert_equal 3 [r zlexcount zset \\[ele \\[h]\n\n            # exclusive range\n            assert_equal {alpha bar} [r zrangebylex zset - (cool]\n            assert_equal {cool} [r zrangebylex zset (bar (down]\n            assert_equal {hill omega} [r zrangebylex zset (great +]\n            assert_equal {bar alpha} [r zrevrangebylex zset (cool -]\n            assert_equal {cool} [r zrevrangebylex zset (down (bar]\n            assert_equal {omega hill} [r zrevrangebylex zset + (great]\n            assert_equal 2 [r zlexcount zset (ele (great]\n\n            # inclusive and exclusive\n            assert_equal {} [r zrangebylex zset (az (b]\n            assert_equal {} [r zrangebylex zset (z +]\n            assert_equal {} [r zrangebylex zset - \\[aaaa]\n            assert_equal {} [r zrevrangebylex zset \\[elez \\[elex]\n            assert_equal {} [r zrevrangebylex zset (hill (omega]\n        }\n\n        test \"ZLEXCOUNT advanced - $encoding\" {\n            create_default_lex_zset\n\n            assert_equal 9 [r zlexcount zset - +]\n            assert_equal 0 [r zlexcount zset + -]\n            assert_equal 0 [r zlexcount zset + \\[c]\n            assert_equal 0 [r zlexcount zset \\[c -]\n            assert_equal 8 [r zlexcount zset \\[bar +]\n        
    assert_equal 5 [r zlexcount zset \\[bar \\[foo]\n            assert_equal 4 [r zlexcount zset \\[bar (foo]\n            assert_equal 4 [r zlexcount zset (bar \\[foo]\n            assert_equal 3 [r zlexcount zset (bar (foo]\n            assert_equal 5 [r zlexcount zset - (foo]\n            assert_equal 1 [r zlexcount zset (maxstring +]\n        }\n\n        test \"ZRANGEBYLEX with LIMIT - $encoding\" {\n            create_default_lex_zset\n            assert_equal {alpha bar} [r zrangebylex zset - \\[cool LIMIT 0 2]\n            assert_equal {bar cool} [r zrangebylex zset - \\[cool LIMIT 1 2]\n            assert_equal {} [r zrangebylex zset \\[bar \\[down LIMIT 0 0]\n            assert_equal {} [r zrangebylex zset \\[bar \\[down LIMIT 2 0]\n            assert_equal {bar} [r zrangebylex zset \\[bar \\[down LIMIT 0 1]\n            assert_equal {cool} [r zrangebylex zset \\[bar \\[down LIMIT 1 1]\n            assert_equal {bar cool down} [r zrangebylex zset \\[bar \\[down LIMIT 0 100]\n            assert_equal {} [r zrangebylex zset - \\[cool LIMIT -1 2]\n            assert_equal {omega hill great foo elephant} [r zrevrangebylex zset + \\[d LIMIT 0 5]\n            assert_equal {omega hill great foo} [r zrevrangebylex zset + \\[d LIMIT 0 4]\n            assert_equal {great foo elephant} [r zrevrangebylex zset + \\[d LIMIT 2 3]\n            assert_equal {} [r zrevrangebylex zset + \\[d LIMIT -1 5]\n            # zrangebylex uses different logic when offset > ZSKIPLIST_MAX_SEARCH\n            create_long_lex_zset\n            assert_equal {max null} [r zrangebylex zset - \\[tree LIMIT 12 2]\n            assert_equal {point query} [r zrangebylex zset - \\[tree LIMIT 15 2]\n            assert_equal {} [r zrangebylex zset \\[max \\[tree LIMIT 10 0]\n            assert_equal {} [r zrangebylex zset \\[max \\[tree LIMIT 12 0]\n            assert_equal {max} [r zrangebylex zset \\[max \\[null LIMIT 0 1]\n            assert_equal {null} [r zrangebylex zset \\[max \\[null 
LIMIT 1 1]\n            assert_equal {max null omega point} [r zrangebylex zset \\[max \\[point LIMIT 0 100]\n            assert_equal {} [r zrangebylex zset - \\[tree LIMIT -1 2]\n            assert_equal {tree sea result query point} [r zrevrangebylex zset + \\[o LIMIT 0 5]\n            assert_equal {tree sea result query} [r zrevrangebylex zset + \\[o LIMIT 0 4]\n            assert_equal {omega null max lip} [r zrevrangebylex zset + \\[l LIMIT 5 4]\n            assert_equal {elephant down} [r zrevrangebylex zset + \\[a LIMIT 15 2]\n            assert_equal {bar alpha} [r zrevrangebylex zset + - LIMIT 18 6]\n            assert_equal {hill great foo} [r zrevrangebylex zset + \\[c LIMIT 12 3]\n            assert_equal {} [r zrevrangebylex zset + \\[o LIMIT -1 5]\n        }\n\n        test \"ZRANGEBYLEX with invalid lex range specifiers - $encoding\" {\n            assert_error \"*not*string*\" {r zrangebylex fooz foo bar}\n            assert_error \"*not*string*\" {r zrangebylex fooz \\[foo bar}\n            assert_error \"*not*string*\" {r zrangebylex fooz foo \\[bar}\n            assert_error \"*not*string*\" {r zrangebylex fooz +x \\[bar}\n            assert_error \"*not*string*\" {r zrangebylex fooz -x \\[bar}\n        }\n\n        test \"ZREMRANGEBYSCORE basics - $encoding\" {\n            proc remrangebyscore {min max} {\n                create_zset zset {1 a 2 b 3 c 4 d 5 e}\n                assert_equal 1 [r exists zset]\n                r zremrangebyscore zset $min $max\n            }\n\n            # inner range\n            assert_equal 3 [remrangebyscore 2 4]\n            assert_equal {a e} [r zrange zset 0 -1]\n\n            # start underflow\n            assert_equal 1 [remrangebyscore -10 1]\n            assert_equal {b c d e} [r zrange zset 0 -1]\n\n            # end overflow\n            assert_equal 1 [remrangebyscore 5 10]\n            assert_equal {a b c d} [r zrange zset 0 -1]\n\n            # switch min and max\n            assert_equal 0 
[remrangebyscore 4 2]\n            assert_equal {a b c d e} [r zrange zset 0 -1]\n\n            # -inf to mid\n            assert_equal 3 [remrangebyscore -inf 3]\n            assert_equal {d e} [r zrange zset 0 -1]\n\n            # mid to +inf\n            assert_equal 3 [remrangebyscore 3 +inf]\n            assert_equal {a b} [r zrange zset 0 -1]\n\n            # -inf to +inf\n            assert_equal 5 [remrangebyscore -inf +inf]\n            assert_equal {} [r zrange zset 0 -1]\n\n            # exclusive min\n            assert_equal 4 [remrangebyscore (1 5]\n            assert_equal {a} [r zrange zset 0 -1]\n            assert_equal 3 [remrangebyscore (2 5]\n            assert_equal {a b} [r zrange zset 0 -1]\n\n            # exclusive max\n            assert_equal 4 [remrangebyscore 1 (5]\n            assert_equal {e} [r zrange zset 0 -1]\n            assert_equal 3 [remrangebyscore 1 (4]\n            assert_equal {d e} [r zrange zset 0 -1]\n\n            # exclusive min and max\n            assert_equal 3 [remrangebyscore (1 (5]\n            assert_equal {a e} [r zrange zset 0 -1]\n\n            # destroy when empty\n            assert_equal 5 [remrangebyscore 1 5]\n            assert_equal 0 [r exists zset]\n        }\n\n        test \"ZREMRANGEBYSCORE with non-value min or max - $encoding\" {\n            assert_error \"*not*float*\" {r zremrangebyscore fooz str 1}\n            assert_error \"*not*float*\" {r zremrangebyscore fooz 1 str}\n            assert_error \"*not*float*\" {r zremrangebyscore fooz 1 NaN}\n        }\n\n        test \"ZREMRANGEBYRANK basics - $encoding\" {\n            proc remrangebyrank {min max} {\n                create_zset zset {1 a 2 b 3 c 4 d 5 e}\n                assert_equal 1 [r exists zset]\n                r zremrangebyrank zset $min $max\n            }\n\n            # inner range\n            assert_equal 3 [remrangebyrank 1 3]\n            assert_equal {a e} [r zrange zset 0 -1]\n\n            # start underflow\n        
    assert_equal 1 [remrangebyrank -10 0]\n            assert_equal {b c d e} [r zrange zset 0 -1]\n\n            # start overflow\n            assert_equal 0 [remrangebyrank 10 -1]\n            assert_equal {a b c d e} [r zrange zset 0 -1]\n\n            # end underflow\n            assert_equal 0 [remrangebyrank 0 -10]\n            assert_equal {a b c d e} [r zrange zset 0 -1]\n\n            # end overflow\n            assert_equal 5 [remrangebyrank 0 10]\n            assert_equal {} [r zrange zset 0 -1]\n\n            # destroy when empty\n            assert_equal 5 [remrangebyrank 0 4]\n            assert_equal 0 [r exists zset]\n        }\n\n        test \"ZREMRANGEBYLEX basics - $encoding\" {\n            proc remrangebylex {min max} {\n                create_default_lex_zset\n                assert_equal 1 [r exists zset]\n                r zremrangebylex zset $min $max\n            }\n\n            # inclusive range\n            assert_equal 3 [remrangebylex - \\[cool]\n            assert_equal {down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 3 [remrangebylex \\[bar \\[down]\n            assert_equal {alpha elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 3 [remrangebylex \\[g +]\n            assert_equal {alpha bar cool down elephant foo} [r zrange zset 0 -1]\n            assert_equal 6 [r zcard zset]\n\n            # exclusive range\n            assert_equal 2 [remrangebylex - (cool]\n            assert_equal {cool down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 1 [remrangebylex (bar (down]\n            assert_equal {alpha bar down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 2 [remrangebylex (great +]\n            assert_equal {alpha bar cool down elephant foo great} [r zrange zset 0 -1]\n            assert_equal 7 [r zcard zset]\n\n            # inclusive and exclusive\n            assert_equal 0 [remrangebylex (az (b]\n    
        assert_equal {alpha bar cool down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 0 [remrangebylex (z +]\n            assert_equal {alpha bar cool down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 0 [remrangebylex - \\[aaaa]\n            assert_equal {alpha bar cool down elephant foo great hill omega} [r zrange zset 0 -1]\n            assert_equal 9 [r zcard zset]\n\n            # destroy when empty\n            assert_equal 9 [remrangebylex - +]\n            assert_equal 0 [r zcard zset]\n            assert_equal 0 [r exists zset]\n        }\n\n        test \"ZUNIONSTORE against non-existing key doesn't set destination - $encoding\" {\n            r del zseta{t}\n            assert_equal 0 [r zunionstore dst_key{t} 1 zseta{t}]\n            assert_equal 0 [r exists dst_key{t}]\n        }\n\n        test \"ZUNION/ZINTER/ZINTERCARD/ZDIFF against non-existing key - $encoding\" {\n            r del zseta\n            assert_equal {} [r zunion 1 zseta]\n            assert_equal {} [r zinter 1 zseta]\n            assert_equal 0 [r zintercard 1 zseta]\n            assert_equal 0 [r zintercard 1 zseta limit 0]\n            assert_equal {} [r zdiff 1 zseta]\n        }\n\n        test \"ZUNIONSTORE with empty set - $encoding\" {\n            r del zseta{t} zsetb{t}\n            r zadd zseta{t} 1 a\n            r zadd zseta{t} 2 b\n            r zunionstore zsetc{t} 2 zseta{t} zsetb{t}\n            r zrange zsetc{t} 0 -1 withscores\n        } {a 1 b 2}\n\n        test \"ZUNION/ZINTER/ZINTERCARD/ZDIFF with empty set - $encoding\" {\n            r del zseta{t} zsetb{t}\n            r zadd zseta{t} 1 a\n            r zadd zseta{t} 2 b\n            assert_equal {a 1 b 2} [r zunion 2 zseta{t} zsetb{t} withscores]\n            assert_equal {} [r zinter 2 zseta{t} zsetb{t} withscores]\n            assert_equal 0 [r zintercard 2 zseta{t} zsetb{t}]\n            assert_equal 0 [r zintercard 2 zseta{t} zsetb{t} limit 
0]\n            assert_equal {a 1 b 2} [r zdiff 2 zseta{t} zsetb{t} withscores]\n        }\n\n        test \"ZUNIONSTORE basics - $encoding\" {\n            r del zseta{t} zsetb{t} zsetc{t}\n            r zadd zseta{t} 1 a\n            r zadd zseta{t} 2 b\n            r zadd zseta{t} 3 c\n            r zadd zsetb{t} 1 b\n            r zadd zsetb{t} 2 c\n            r zadd zsetb{t} 3 d\n\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t}]\n            assert_equal {a 1 b 3 d 3 c 5} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION/ZINTER/ZINTERCARD/ZDIFF with integer members - $encoding\" {\n            r del zsetd{t} zsetf{t}\n            r zadd zsetd{t} 1 1\n            r zadd zsetd{t} 2 2\n            r zadd zsetd{t} 3 3\n            r zadd zsetf{t} 1 1\n            r zadd zsetf{t} 3 3\n            r zadd zsetf{t} 4 4\n\n            assert_equal {1 2 2 2 4 4 3 6} [r zunion 2 zsetd{t} zsetf{t} withscores]\n            assert_equal {1 2 3 6} [r zinter 2 zsetd{t} zsetf{t} withscores]\n            assert_equal 2 [r zintercard 2 zsetd{t} zsetf{t}]\n            assert_equal 2 [r zintercard 2 zsetd{t} zsetf{t} limit 0]\n            assert_equal {2 2} [r zdiff 2 zsetd{t} zsetf{t} withscores]\n        }\n\n        test \"ZUNIONSTORE with weights - $encoding\" {\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3]\n            assert_equal {a 2 b 7 d 9 c 12} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION with weights - $encoding\" {\n            assert_equal {a 2 b 7 d 9 c 12} [r zunion 2 zseta{t} zsetb{t} weights 2 3 withscores]\n            assert_equal {b 7 c 12} [r zinter 2 zseta{t} zsetb{t} weights 2 3 withscores]\n        }\n\n        test \"ZUNIONSTORE with a regular set and weights - $encoding\" {\n            r del seta{t}\n            r sadd seta{t} a\n            r sadd seta{t} b\n            r sadd seta{t} c\n\n            assert_equal 4 [r zunionstore zsetc{t} 2 
seta{t} zsetb{t} weights 2 3]\n            assert_equal {a 2 b 5 c 8 d 9} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNIONSTORE with AGGREGATE MIN - $encoding\" {\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} aggregate min]\n            assert_equal {a 1 b 1 c 2 d 3} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION/ZINTER with AGGREGATE MIN - $encoding\" {\n            assert_equal {a 1 b 1 c 2 d 3} [r zunion 2 zseta{t} zsetb{t} aggregate min withscores]\n            assert_equal {b 1 c 2} [r zinter 2 zseta{t} zsetb{t} aggregate min withscores]\n        }\n\n        test \"ZUNIONSTORE with AGGREGATE MAX - $encoding\" {\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} aggregate max]\n            assert_equal {a 1 b 2 c 3 d 3} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION/ZINTER with AGGREGATE MAX - $encoding\" {\n            assert_equal {a 1 b 2 c 3 d 3} [r zunion 2 zseta{t} zsetb{t} aggregate max withscores]\n            assert_equal {b 2 c 3} [r zinter 2 zseta{t} zsetb{t} aggregate max withscores]\n        }\n\n        test \"ZUNIONSTORE with AGGREGATE COUNT - $encoding\" {\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} aggregate count]\n            assert_equal {a 1 d 1 b 2 c 2} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION/ZINTER with AGGREGATE COUNT - $encoding\" {\n            assert_equal {a 1 d 1 b 2 c 2} [r zunion 2 zseta{t} zsetb{t} aggregate count withscores]\n            assert_equal {b 2 c 2} [r zinter 2 zseta{t} zsetb{t} aggregate count withscores]\n        }\n\n        test \"ZUNIONSTORE with AGGREGATE COUNT and WEIGHTS - $encoding\" {\n            assert_equal 4 [r zunionstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3 aggregate count]\n            assert_equal {a 2 d 3 b 5 c 5} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNION/ZINTER with AGGREGATE COUNT and 
WEIGHTS - $encoding\" {\n            assert_equal {a 2 d 3 b 5 c 5} [r zunion 2 zseta{t} zsetb{t} weights 2 3 aggregate count withscores]\n            assert_equal {b 5 c 5} [r zinter 2 zseta{t} zsetb{t} weights 2 3 aggregate count withscores]\n        }\n\n        test \"ZINTERSTORE basics - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t}]\n            assert_equal {b 3 c 5} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTER basics - $encoding\" {\n            assert_equal {b 3 c 5} [r zinter 2 zseta{t} zsetb{t} withscores]\n        }\n\n        test \"ZINTERCARD with illegal arguments\" {\n            assert_error \"ERR syntax error*\" {r zintercard 1 zseta{t} zseta{t}}\n            assert_error \"ERR syntax error*\" {r zintercard 1 zseta{t} bar_arg}\n            assert_error \"ERR syntax error*\" {r zintercard 1 zseta{t} LIMIT}\n\n            assert_error \"ERR LIMIT*\" {r zintercard 1 myset{t} LIMIT -1}\n            assert_error \"ERR LIMIT*\" {r zintercard 1 myset{t} LIMIT a}\n        }\n\n        test \"ZINTERCARD basics - $encoding\" {\n            assert_equal 2 [r zintercard 2 zseta{t} zsetb{t}]\n            assert_equal 2 [r zintercard 2 zseta{t} zsetb{t} limit 0]\n            assert_equal 1 [r zintercard 2 zseta{t} zsetb{t} limit 1]\n            assert_equal 2 [r zintercard 2 zseta{t} zsetb{t} limit 10]\n        }\n\n        test \"ZINTER RESP3 - $encoding\" {\n            r hello 3\n            assert_equal {{b 3.0} {c 5.0}} [r zinter 2 zseta{t} zsetb{t} withscores]\n            r hello 2\n        }\n\n        test \"ZINTERSTORE with weights - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3]\n            assert_equal {b 7 c 12} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTER with weights - $encoding\" {\n            assert_equal {b 7 c 12} [r zinter 2 zseta{t} zsetb{t} weights 2 3 withscores]\n        }\n\n        test 
\"ZINTERSTORE with a regular set and weights - $encoding\" {\n            r del seta{t}\n            r sadd seta{t} a\n            r sadd seta{t} b\n            r sadd seta{t} c\n            assert_equal 2 [r zinterstore zsetc{t} 2 seta{t} zsetb{t} weights 2 3]\n            assert_equal {b 5 c 8} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTERSTORE with AGGREGATE MIN - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} aggregate min]\n            assert_equal {b 1 c 2} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTERSTORE with AGGREGATE MAX - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} aggregate max]\n            assert_equal {b 2 c 3} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTERSTORE with AGGREGATE COUNT - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} aggregate count]\n            assert_equal {b 2 c 2} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZINTERSTORE with AGGREGATE COUNT and WEIGHTS - $encoding\" {\n            assert_equal 2 [r zinterstore zsetc{t} 2 zseta{t} zsetb{t} weights 2 3 aggregate count]\n            assert_equal {b 5 c 5} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZUNIONSTORE/ZINTERSTORE with AGGREGATE COUNT - 3 sets - $encoding\" {\n            r del s1{t} s2{t} s3{t} t1{t}\n            r zadd s1{t} 1 foo 1 bar\n            r zadd s2{t} 2 foo 2 bar\n            r zadd s3{t} 3 foo\n\n            assert_equal 1 [r zinterstore t1{t} 3 s1{t} s2{t} s3{t} aggregate count]\n            assert_equal {foo 3} [r zrange t1{t} 0 -1 withscores]\n\n            assert_equal 2 [r zunionstore t1{t} 3 s1{t} s2{t} s3{t} aggregate count]\n            assert_equal {bar 2 foo 3} [r zrange t1{t} 0 -1 withscores]\n        }\n\n        test \"ZUNIONSTORE/ZINTERSTORE with AGGREGATE COUNT and WEIGHTS - 3 sets - $encoding\" {\n            
assert_equal 1 [r zinterstore t1{t} 3 s1{t} s2{t} s3{t} weights 10 5 3 aggregate count]\n            assert_equal {foo 18} [r zrange t1{t} 0 -1 withscores]\n\n            assert_equal 2 [r zunionstore t1{t} 3 s1{t} s2{t} s3{t} weights 10 5 3 aggregate count]\n            assert_equal {bar 15 foo 18} [r zrange t1{t} 0 -1 withscores]\n\n            r del s1{t} s2{t} s3{t} t1{t}\n        }\n\n        foreach cmd {ZUNIONSTORE ZINTERSTORE} {\n            test \"$cmd with +inf/-inf scores - $encoding\" {\n                r del zsetinf1{t} zsetinf2{t}\n\n                r zadd zsetinf1{t} +inf key\n                r zadd zsetinf2{t} +inf key\n                r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}\n                assert_equal inf [r zscore zsetinf3{t} key]\n\n                r zadd zsetinf1{t} -inf key\n                r zadd zsetinf2{t} +inf key\n                r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}\n                assert_equal 0 [r zscore zsetinf3{t} key]\n\n                r zadd zsetinf1{t} +inf key\n                r zadd zsetinf2{t} -inf key\n                r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}\n                assert_equal 0 [r zscore zsetinf3{t} key]\n\n                r zadd zsetinf1{t} -inf key\n                r zadd zsetinf2{t} -inf key\n                r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t}\n                assert_equal -inf [r zscore zsetinf3{t} key]\n            }\n\n            test \"$cmd with NaN weights - $encoding\" {\n                r del zsetinf1{t} zsetinf2{t}\n\n                r zadd zsetinf1{t} 1.0 key\n                r zadd zsetinf2{t} 1.0 key\n                assert_error \"*weight*not*float*\" {\n                    r $cmd zsetinf3{t} 2 zsetinf1{t} zsetinf2{t} weights nan nan\n                }\n            }\n        }\n\n        test \"ZDIFFSTORE basics - $encoding\" {\n            assert_equal 1 [r zdiffstore zsetc{t} 2 zseta{t} zsetb{t}]\n            assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]\n     
   }\n\n        test \"ZDIFF basics - $encoding\" {\n            assert_equal {a 1} [r zdiff 2 zseta{t} zsetb{t} withscores]\n        }\n\n        test \"ZDIFFSTORE with a regular set - $encoding\" {\n            r del seta{t}\n            r sadd seta{t} a\n            r sadd seta{t} b\n            r sadd seta{t} c\n            assert_equal 1 [r zdiffstore zsetc{t} 2 seta{t} zsetb{t}]\n            assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZDIFF subtracting set from itself - $encoding\" {\n            assert_equal 0 [r zdiffstore zsetc{t} 2 zseta{t} zseta{t}]\n            assert_equal {} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZDIFF algorithm 1 - $encoding\" {\n            r del zseta{t} zsetb{t} zsetc{t}\n            r zadd zseta{t} 1 a\n            r zadd zseta{t} 2 b\n            r zadd zseta{t} 3 c\n            r zadd zsetb{t} 1 b\n            r zadd zsetb{t} 2 c\n            r zadd zsetb{t} 3 d\n            assert_equal 1 [r zdiffstore zsetc{t} 2 zseta{t} zsetb{t}]\n            assert_equal {a 1} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZDIFF algorithm 2 - $encoding\" {\n            r del zseta{t} zsetb{t} zsetc{t} zsetd{t} zsete{t}\n            r zadd zseta{t} 1 a\n            r zadd zseta{t} 2 b\n            r zadd zseta{t} 3 c\n            r zadd zseta{t} 5 e\n            r zadd zsetb{t} 1 b\n            r zadd zsetc{t} 1 c\n            r zadd zsetd{t} 1 d\n            assert_equal 2 [r zdiffstore zsete{t} 4 zseta{t} zsetb{t} zsetc{t} zsetd{t}]\n            assert_equal {a 1 e 5} [r zrange zsete{t} 0 -1 withscores]\n        }\n\n        test \"ZDIFF algorithm 2 empty result early exit - $encoding\" {\n            # Force algorithm 2 by inflating setnum with non-existing keys.\n            # algo_one_work = len(src[0]) * setnum / 2 = 2 * 10 / 2 = 10\n            # algo_two_work = 2 + 2 + 0*8 = 4\n            # algo_one (10) > algo_two (4) -> algorithm 2 is selected\n    
        r del zseta{t} zsetb{t} zsetc{t}\n            r zadd zseta{t} 1 a 2 b\n            r zadd zsetb{t} 1 a 2 b\n            assert_equal 0 [r zdiffstore zsetc{t} 10 zseta{t} zsetb{t} nx1{t} nx2{t} nx3{t} nx4{t} nx5{t} nx6{t} nx7{t} nx8{t}]\n            assert_equal {} [r zrange zsetc{t} 0 -1 withscores]\n        }\n\n        test \"ZDIFF fuzzing - $encoding\" {\n            for {set j 0} {$j < 100} {incr j} {\n                unset -nocomplain s\n                array set s {}\n                set args {}\n                set num_sets [expr {[randomInt 10]+1}]\n                for {set i 0} {$i < $num_sets} {incr i} {\n                    set num_elements [randomInt 100]\n                    r del zset_$i{t}\n                    lappend args zset_$i{t}\n                    while {$num_elements} {\n                        set ele [randomValue]\n                        r zadd zset_$i{t} [randomInt 100] $ele\n                        if {$i == 0} {\n                            set s($ele) x\n                        } else {\n                            unset -nocomplain s($ele)\n                        }\n                        incr num_elements -1\n                    }\n                }\n                set result [lsort [r zdiff [llength $args] {*}$args]]\n                assert_equal $result [lsort [array names s]]\n            }\n        }\n\n        foreach {pop} {ZPOPMIN ZPOPMAX} {\n            test \"$pop with the count 0 returns an empty array\" {\n                r del zset\n                r zadd zset 1 a 2 b 3 c\n                assert_equal {} [r $pop zset 0]\n\n                # Make sure we can distinguish between an empty array and a null response\n                r readraw 1\n                assert_equal {*0} [r $pop zset 0]\n                r readraw 0\n\n                assert_equal 3 [r zcard zset]\n            }\n\n            test \"$pop with negative count\" {\n                r set zset foo\n                assert_error \"ERR *must be 
positive\" {r $pop zset -1}\n\n                r del zset\n                assert_error \"ERR *must be positive\" {r $pop zset -2}\n\n                r zadd zset 1 a 2 b 3 c\n                assert_error \"ERR *must be positive\" {r $pop zset -3}\n            }\n        }\n\n    foreach {popmin popmax} {ZPOPMIN ZPOPMAX ZMPOP_MIN ZMPOP_MAX} {\n        test \"Basic $popmin/$popmax with a single key - $encoding\" {\n            r del zset\n            verify_zpop_response r $popmin zset 0 {} {}\n\n            create_zset zset {-1 a 1 b 2 c 3 d 4 e}\n            verify_zpop_response r $popmin zset 0 {a -1} {zset {{a -1}}}\n            verify_zpop_response r $popmin zset 0 {b 1} {zset {{b 1}}}\n            verify_zpop_response r $popmax zset 0 {e 4} {zset {{e 4}}}\n            verify_zpop_response r $popmax zset 0 {d 3} {zset {{d 3}}}\n            verify_zpop_response r $popmin zset 0 {c 2} {zset {{c 2}}}\n            assert_equal 0 [r exists zset]\n        }\n\n        test \"$popmin/$popmax with count - $encoding\" {\n            r del z1\n            verify_zpop_response r $popmin z1 2 {} {}\n\n            create_zset z1 {0 a 1 b 2 c 3 d}\n            verify_zpop_response r $popmin z1 2 {a 0 b 1} {z1 {{a 0} {b 1}}}\n            verify_zpop_response r $popmax z1 2 {d 3 c 2} {z1 {{d 3} {c 2}}}\n        }\n    }\n\n    foreach {popmin popmax} {BZPOPMIN BZPOPMAX BZMPOP_MIN BZMPOP_MAX} {\n        test \"$popmin/$popmax with a single existing sorted set - $encoding\" {\n            set rd [redis_deferring_client]\n            create_zset zset {0 a 1 b 2 c 3 d}\n\n            verify_bzpop_response $rd $popmin zset 5 0 {zset a 0} {zset {{a 0}}}\n            verify_bzpop_response $rd $popmax zset 5 0 {zset d 3} {zset {{d 3}}}\n            verify_bzpop_response $rd $popmin zset 5 0 {zset b 1} {zset {{b 1}}}\n            verify_bzpop_response $rd $popmax zset 5 0 {zset c 2} {zset {{c 2}}}\n            assert_equal 0 [r exists zset]\n            $rd close\n        }\n\n        
test \"$popmin/$popmax with multiple existing sorted sets - $encoding\" {\n            set rd [redis_deferring_client]\n            create_zset z1{t} {0 a 1 b 2 c}\n            create_zset z2{t} {3 d 4 e 5 f}\n\n            verify_bzpop_two_key_response $rd $popmin z1{t} z2{t} 5 0 {z1{t} a 0} {z1{t} {{a 0}}}\n            verify_bzpop_two_key_response $rd $popmax z1{t} z2{t} 5 0 {z1{t} c 2} {z1{t} {{c 2}}}\n            assert_equal 1 [r zcard z1{t}]\n            assert_equal 3 [r zcard z2{t}]\n\n            verify_bzpop_two_key_response $rd $popmax z2{t} z1{t} 5 0 {z2{t} f 5} {z2{t} {{f 5}}}\n            verify_bzpop_two_key_response $rd $popmin z2{t} z1{t} 5 0 {z2{t} d 3} {z2{t} {{d 3}}}\n            assert_equal 1 [r zcard z1{t}]\n            assert_equal 1 [r zcard z2{t}]\n            $rd close\n        }\n\n        test \"$popmin/$popmax second sorted set has members - $encoding\" {\n            set rd [redis_deferring_client]\n            r del z1{t}\n            create_zset z2{t} {3 d 4 e 5 f}\n\n            verify_bzpop_two_key_response $rd $popmax z1{t} z2{t} 5 0 {z2{t} f 5} {z2{t} {{f 5}}}\n            verify_bzpop_two_key_response $rd $popmin z1{t} z2{t} 5 0 {z2{t} d 3} {z2{t} {{d 3}}}\n            assert_equal 0 [r zcard z1{t}]\n            assert_equal 1 [r zcard z2{t}]\n            $rd close\n        }\n    }\n\n    foreach {popmin popmax} {ZPOPMIN ZPOPMAX ZMPOP_MIN ZMPOP_MAX} {\n        test \"Basic $popmin/$popmax - $encoding RESP3\" {\n            r hello 3\n            create_zset z1 {0 a 1 b 2 c 3 d}\n            verify_zpop_response r $popmin z1 0 {a 0.0} {z1 {{a 0.0}}}\n            verify_zpop_response r $popmax z1 0 {d 3.0} {z1 {{d 3.0}}}\n            r hello 2\n        }\n\n        test \"$popmin/$popmax with count - $encoding RESP3\" {\n            r hello 3\n            create_zset z1 {0 a 1 b 2 c 3 d}\n            verify_zpop_response r $popmin z1 2 {{a 0.0} {b 1.0}} {z1 {{a 0.0} {b 1.0}}}\n            verify_zpop_response r $popmax z1 2 {{d 
3.0} {c 2.0}} {z1 {{d 3.0} {c 2.0}}}\n            r hello 2\n        }\n    }\n\n    foreach {popmin popmax} {BZPOPMIN BZPOPMAX BZMPOP_MIN BZMPOP_MAX} {\n        test \"$popmin/$popmax - $encoding RESP3\" {\n            r hello 3\n            set rd [redis_deferring_client]\n            create_zset zset {0 a 1 b 2 c 3 d}\n\n            verify_bzpop_response $rd $popmin zset 5 0 {zset a 0} {zset {{a 0}}}\n            verify_bzpop_response $rd $popmax zset 5 0 {zset d 3} {zset {{d 3}}}\n            verify_bzpop_response $rd $popmin zset 5 0 {zset b 1} {zset {{b 1}}}\n            verify_bzpop_response $rd $popmax zset 5 0 {zset c 2} {zset {{c 2}}}\n\n            assert_equal 0 [r exists zset]\n            r hello 2\n            $rd close\n        }\n    }\n\n        r config set zset-max-ziplist-entries $original_max_entries\n        r config set zset-max-ziplist-value $original_max_value\n    }\n\n    basics listpack\n    basics skiplist\n\n    test \"ZPOP/ZMPOP against wrong type\" {\n        r set foo{t} bar\n        assert_error \"*WRONGTYPE*\" {r zpopmin foo{t}}\n        assert_error \"*WRONGTYPE*\" {r zpopmin foo{t} 0}\n        assert_error \"*WRONGTYPE*\" {r zpopmax foo{t}}\n        assert_error \"*WRONGTYPE*\" {r zpopmax foo{t} 0}\n        assert_error \"*WRONGTYPE*\" {r zpopmin foo{t} 2}\n\n        assert_error \"*WRONGTYPE*\" {r zmpop 1 foo{t} min}\n        assert_error \"*WRONGTYPE*\" {r zmpop 1 foo{t} max}\n        assert_error \"*WRONGTYPE*\" {r zmpop 1 foo{t} max count 200}\n\n        r del foo{t}\n        r set foo2{t} bar\n        assert_error \"*WRONGTYPE*\" {r zmpop 2 foo{t} foo2{t} min}\n        assert_error \"*WRONGTYPE*\" {r zmpop 2 foo2{t} foo1{t} max count 1}\n    }\n\n    test \"ZMPOP with illegal argument\" {\n        assert_error \"ERR wrong number of arguments for 'zmpop' command\" {r zmpop}\n        assert_error \"ERR wrong number of arguments for 'zmpop' command\" {r zmpop 1}\n        assert_error \"ERR wrong number of arguments for 
'zmpop' command\" {r zmpop 1 myzset{t}}\n\n        assert_error \"ERR numkeys*\" {r zmpop 0 myzset{t} MIN}\n        assert_error \"ERR numkeys*\" {r zmpop a myzset{t} MIN}\n        assert_error \"ERR numkeys*\" {r zmpop -1 myzset{t} MAX}\n\n        assert_error \"ERR syntax error*\" {r zmpop 1 myzset{t} bad_where}\n        assert_error \"ERR syntax error*\" {r zmpop 1 myzset{t} MIN bar_arg}\n        assert_error \"ERR syntax error*\" {r zmpop 1 myzset{t} MAX MIN}\n        assert_error \"ERR syntax error*\" {r zmpop 1 myzset{t} COUNT}\n        assert_error \"ERR syntax error*\" {r zmpop 1 myzset{t} MAX COUNT 1 COUNT 2}\n        assert_error \"ERR syntax error*\" {r zmpop 2 myzset{t} myzset2{t} bad_arg}\n\n        assert_error \"ERR count*\" {r zmpop 1 myzset{t} MIN COUNT 0}\n        assert_error \"ERR count*\" {r zmpop 1 myzset{t} MAX COUNT a}\n        assert_error \"ERR count*\" {r zmpop 1 myzset{t} MIN COUNT -1}\n        assert_error \"ERR count*\" {r zmpop 2 myzset{t} myzset2{t} MAX COUNT -1}\n    }\n\n    test \"ZMPOP propagate as pop with count command to replica\" {\n        set repl [attach_to_replication_stream]\n\n        # ZMPOP min/max propagate as ZPOPMIN/ZPOPMAX with count\n        r zadd myzset{t} 1 one 2 two 3 three\n\n        # Pop elements from one zset.\n        r zmpop 1 myzset{t} min\n        r zmpop 1 myzset{t} max count 1\n\n        # Now the zset have only one element\n        r zmpop 2 myzset{t} myzset2{t} min count 10\n\n        # No elements so we don't propagate.\n        r zmpop 2 myzset{t} myzset2{t} max count 10\n\n        # Pop elements from the second zset.\n        r zadd myzset2{t} 1 one 2 two 3 three\n        r zmpop 2 myzset{t} myzset2{t} min count 2\n        r zmpop 2 myzset{t} myzset2{t} max count 1\n\n        # Pop all elements.\n        r zadd myzset{t} 1 one 2 two 3 three\n        r zadd myzset2{t} 4 four 5 five 6 six\n        r zmpop 2 myzset{t} myzset2{t} min count 10\n        r zmpop 2 myzset{t} myzset2{t} max count 10\n\n 
       assert_replication_stream $repl {\n            {select *}\n            {zadd myzset{t} 1 one 2 two 3 three}\n            {zpopmin myzset{t} 1}\n            {zpopmax myzset{t} 1}\n            {zpopmin myzset{t} 1}\n            {zadd myzset2{t} 1 one 2 two 3 three}\n            {zpopmin myzset2{t} 2}\n            {zpopmax myzset2{t} 1}\n            {zadd myzset{t} 1 one 2 two 3 three}\n            {zadd myzset2{t} 4 four 5 five 6 six}\n            {zpopmin myzset{t} 3}\n            {zpopmax myzset2{t} 3}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    foreach resp {3 2} {\n        set rd [redis_deferring_client]\n\n        if {[lsearch $::denytags \"resp3\"] >= 0} {\n            if {$resp == 3} {continue}\n        } elseif {$::force_resp3} {\n            if {$resp == 2} {continue}\n        }\n        r hello $resp\n        $rd hello $resp\n        $rd read\n\n        test \"ZPOPMIN/ZPOPMAX readraw in RESP$resp\" {\n            r del zset{t}\n            create_zset zset2{t} {1 a 2 b 3 c 4 d 5 e}\n\n            r readraw 1\n\n            # ZPOP against non existing key.\n            assert_equal {*0} [r zpopmin zset{t}]\n            assert_equal {*0} [r zpopmin zset{t} 1]\n\n            # ZPOP without COUNT option.\n            assert_equal {*2} [r zpopmin zset2{t}]\n            assert_equal [r read] {$1}\n            assert_equal [r read] {a}\n            verify_score_response r $resp 1\n\n            # ZPOP with COUNT option.\n            if {$resp == 2} {\n                assert_equal {*2} [r zpopmax zset2{t} 1]\n                assert_equal [r read] {$1}\n                assert_equal [r read] {e}\n            } elseif {$resp == 3} {\n                assert_equal {*1} [r zpopmax zset2{t} 1]\n                assert_equal [r read] {*2}\n                assert_equal [r read] {$1}\n                assert_equal [r read] {e}\n            }\n            verify_score_response r $resp 5\n\n            r readraw 0\n        }\n\n        
test \"BZPOPMIN/BZPOPMAX readraw in RESP$resp\" {\n            r del zset{t}\n            create_zset zset2{t} {1 a 2 b 3 c 4 d 5 e}\n\n            $rd readraw 1\n\n            # BZPOP released on timeout.\n            $rd bzpopmin zset{t} 0.01\n            verify_nil_response $resp [$rd read]\n            $rd bzpopmax zset{t} 0.01\n            verify_nil_response $resp [$rd read]\n\n            # BZPOP non-blocking path.\n            $rd bzpopmin zset1{t} zset2{t} 0.1\n            assert_equal [$rd read] {*3}\n            assert_equal [$rd read] {$8}\n            assert_equal [$rd read] {zset2{t}}\n            assert_equal [$rd read] {$1}\n            assert_equal [$rd read] {a}\n            verify_score_response $rd $resp 1\n\n            # BZPOP blocking path.\n            $rd bzpopmin zset{t} 5\n            wait_for_blocked_client\n            r zadd zset{t} 1 a\n            assert_equal [$rd read] {*3}\n            assert_equal [$rd read] {$7}\n            assert_equal [$rd read] {zset{t}}\n            assert_equal [$rd read] {$1}\n            assert_equal [$rd read] {a}\n            verify_score_response $rd $resp 1\n\n            $rd readraw 0\n        }\n\n        test \"ZMPOP readraw in RESP$resp\" {\n            r del zset{t} zset2{t}\n            create_zset zset3{t} {1 a}\n            create_zset zset4{t} {1 a 2 b 3 c 4 d 5 e}\n\n            r readraw 1\n\n            # ZMPOP against non existing key.\n            verify_nil_response $resp [r zmpop 1 zset{t} min]\n            verify_nil_response $resp [r zmpop 1 zset{t} max count 1]\n            verify_nil_response $resp [r zmpop 2 zset{t} zset2{t} min]\n            verify_nil_response $resp [r zmpop 2 zset{t} zset2{t} max count 1]\n\n            # ZMPOP with one input key.\n            assert_equal {*2} [r zmpop 1 zset3{t} max]\n            assert_equal [r read] {$8}\n            assert_equal [r read] {zset3{t}}\n            assert_equal [r read] {*1}\n            assert_equal [r read] {*2}\n           
 assert_equal [r read] {$1}\n            assert_equal [r read] {a}\n            verify_score_response r $resp 1\n\n            # ZMPOP with COUNT option.\n            assert_equal {*2} [r zmpop 2 zset3{t} zset4{t} min count 2]\n            assert_equal [r read] {$8}\n            assert_equal [r read] {zset4{t}}\n            assert_equal [r read] {*2}\n            assert_equal [r read] {*2}\n            assert_equal [r read] {$1}\n            assert_equal [r read] {a}\n            verify_score_response r $resp 1\n            assert_equal [r read] {*2}\n            assert_equal [r read] {$1}\n            assert_equal [r read] {b}\n            verify_score_response r $resp 2\n\n            r readraw 0\n        }\n\n        test \"BZMPOP readraw in RESP$resp\" {\n            r del zset{t} zset2{t}\n            create_zset zset3{t} {1 a 2 b 3 c 4 d 5 e}\n\n            $rd readraw 1\n\n            # BZMPOP released on timeout.\n            $rd bzmpop 0.01 1 zset{t} min\n            verify_nil_response $resp [$rd read]\n            $rd bzmpop 0.01 2 zset{t} zset2{t} max\n            verify_nil_response $resp [$rd read]\n\n            # BZMPOP non-blocking path.\n            $rd bzmpop 0.1 2 zset3{t} zset4{t} min\n\n            assert_equal [$rd read] {*2}\n            assert_equal [$rd read] {$8}\n            assert_equal [$rd read] {zset3{t}}\n            assert_equal [$rd read] {*1}\n            assert_equal [$rd read] {*2}\n            assert_equal [$rd read] {$1}\n            assert_equal [$rd read] {a}\n            verify_score_response $rd $resp 1\n\n            # BZMPOP blocking path with COUNT option.\n            $rd bzmpop 5 2 zset{t} zset2{t} max count 2\n            wait_for_blocked_client\n            r zadd zset2{t} 1 a 2 b 3 c\n\n            assert_equal [$rd read] {*2}\n            assert_equal [$rd read] {$8}\n            assert_equal [$rd read] {zset2{t}}\n            assert_equal [$rd read] {*2}\n            assert_equal [$rd read] {*2}\n            
assert_equal [$rd read] {$1}\n            assert_equal [$rd read] {c}\n            verify_score_response $rd $resp 3\n            assert_equal [$rd read] {*2}\n            assert_equal [$rd read] {$1}\n            assert_equal [$rd read] {b}\n            verify_score_response $rd $resp 2\n\n        }\n\n        $rd close\n        r hello 2\n    }\n\n    test {ZINTERSTORE regression with two sets, intset+hashtable} {\n        r del set1{t} set2{t} set3{t}\n        r sadd set1{t} a\n        r sadd set2{t} 10\n        r zinterstore set3{t} 2 set1{t} set2{t}\n    } {0}\n\n    test {ZUNIONSTORE regression, should not create NaN in scores} {\n        r zadd z{t} -inf neginf\n        r zunionstore out{t} 1 z{t} weights 0\n        r zrange out{t} 0 -1 withscores\n    } {neginf 0}\n\n    test {ZINTERSTORE #516 regression, mixed sets and ziplist zsets} {\n        r sadd one{t} 100 101 102 103\n        r sadd two{t} 100 200 201 202\n        r zadd three{t} 1 500 1 501 1 502 1 503 1 100\n        r zinterstore to_here{t} 3 one{t} two{t} three{t} WEIGHTS 0 0 1\n        r zrange to_here{t} 0 -1\n    } {100}\n\n    test {ZUNIONSTORE result is sorted} {\n        # Create two sets with common and non-common elements, perform\n        # the UNION, and check that elements are still sorted.\n        r del one{t} two{t} dest{t}\n        set cmd1 [list r zadd one{t}]\n        set cmd2 [list r zadd two{t}]\n        for {set j 0} {$j < 1000} {incr j} {\n            lappend cmd1 [expr rand()] [randomInt 1000]\n            lappend cmd2 [expr rand()] [randomInt 1000]\n        }\n        {*}$cmd1\n        {*}$cmd2\n        assert {[r zcard one{t}] > 100}\n        assert {[r zcard two{t}] > 100}\n        r zunionstore dest{t} 2 one{t} two{t}\n        set oldscore 0\n        foreach {ele score} [r zrange dest{t} 0 -1 withscores] {\n            assert {$score >= $oldscore}\n            set oldscore $score\n        }\n    }\n\n    test \"ZUNIONSTORE/ZINTERSTORE/ZDIFFSTORE error if using WITHSCORES \" {\n        assert_error \"*ERR*syntax*\" {r zunionstore foo{t} 2 zsetd{t} zsetf{t} withscores}\n        assert_error \"*ERR*syntax*\" {r zinterstore foo{t} 2 zsetd{t} zsetf{t} withscores}\n        assert_error \"*ERR*syntax*\" {r zdiffstore foo{t} 2 zsetd{t} zsetf{t} withscores}\n    }\n\n    test {ZMSCORE retrieve} {\n        r del zmscoretest\n        r zadd zmscoretest 10 x\n        r zadd zmscoretest 20 y\n\n        r zmscore zmscoretest x y\n    } {10 20}\n\n    test {ZMSCORE retrieve from empty set} {\n        r del zmscoretest\n\n        r zmscore zmscoretest x y\n    } {{} {}}\n\n    test {ZMSCORE retrieve with missing member} {\n        r del zmscoretest\n        r zadd zmscoretest 10 x\n\n        r zmscore zmscoretest x y\n    } {10 {}}\n\n    test {ZMSCORE retrieve single member} {\n        r del zmscoretest\n        r zadd zmscoretest 10 x\n        r zadd zmscoretest 20 y\n\n        r zmscore zmscoretest x\n    } {10}\n\n    test {ZMSCORE retrieve requires one or more members} {\n        r del zmscoretest\n        r zadd zmscoretest 10 x\n        r zadd zmscoretest 20 y\n\n        catch {r zmscore zmscoretest} e\n        assert_match {*ERR*wrong*number*arg*} $e\n    }\n\n    test \"ZSET commands don't accept an empty string as a valid score\" {\n        assert_error \"*not*float*\" {r zadd myzset \"\" abc}\n    }\n\n    test \"zunionInterDiffGenericCommand at least 1 input key\" {\n        assert_error {*at least 1 input key * 'zunion' command} {r zunion 0 key{t}}\n        assert_error {*at least 1 input key * 'zunionstore' command} {r zunionstore dst_key{t} 0 key{t}}\n        assert_error {*at least 1 input key * 'zinter' command} {r zinter 0 key{t}}\n        assert_error {*at least 1 input key * 'zinterstore' command} {r zinterstore dst_key{t} 0 key{t}}\n        assert_error {*at least 1 input key * 'zdiff' command} {r zdiff 0 key{t}}\n        assert_error {*at least 1 input key * 'zdiffstore' command} {r zdiffstore dst_key{t} 0 key{t}}\n        
assert_error {*at least 1 input key * 'zintercard' command} {r zintercard 0 key{t}}\n    }\n\n    proc stressers {encoding} {\n        set original_max_entries [lindex [r config get zset-max-ziplist-entries] 1]\n        set original_max_value [lindex [r config get zset-max-ziplist-value] 1]\n        if {$encoding == \"listpack\"} {\n            # Little extra to allow proper fuzzing in the sorting stresser\n            r config set zset-max-ziplist-entries 256\n            r config set zset-max-ziplist-value 64\n            set elements 128\n        } elseif {$encoding == \"skiplist\"} {\n            r config set zset-max-ziplist-entries 0\n            r config set zset-max-ziplist-value 0\n            if {$::accurate} {set elements 1000} else {set elements 100}\n        } else {\n            puts \"Unknown sorted set encoding\"\n            exit\n        }\n\n        test \"ZSCORE - $encoding\" {\n            r del zscoretest\n            set aux {}\n            for {set i 0} {$i < $elements} {incr i} {\n                set score [expr rand()]\n                lappend aux $score\n                r zadd zscoretest $score $i\n            }\n\n            assert_encoding $encoding zscoretest\n            for {set i 0} {$i < $elements} {incr i} {\n                # If an IEEE 754 double-precision number is converted to a decimal string with at\n                # least 17 significant digits (reply of zscore), and then converted back to double-precision representation,\n                # the final result replied via zscore command must match the original number present on the $aux list.\n                # Given Tcl is mostly very relaxed about types (everything is a string) we need to use expr to convert a string to float.\n                assert_equal [expr [lindex $aux $i]] [expr [r zscore zscoretest $i]]\n            }\n        }\n\n        test \"ZMSCORE - $encoding\" {\n            r del zscoretest\n            set aux {}\n            for {set i 0} {$i < $elements} 
{incr i} {\n                set score [expr rand()]\n                lappend aux $score\n                r zadd zscoretest $score $i\n            }\n\n            assert_encoding $encoding zscoretest\n            for {set i 0} {$i < $elements} {incr i} {\n                # Check above notes on IEEE 754 double-precision comparison\n                assert_equal [expr [lindex $aux $i]] [expr [r zmscore zscoretest $i]]\n            }\n        }\n\n        test \"ZSCORE after a DEBUG RELOAD - $encoding\" {\n            r del zscoretest\n            set aux {}\n            for {set i 0} {$i < $elements} {incr i} {\n                set score [expr rand()]\n                lappend aux $score\n                r zadd zscoretest $score $i\n            }\n\n            r debug reload\n            assert_encoding $encoding zscoretest\n            for {set i 0} {$i < $elements} {incr i} {\n                # Check above notes on IEEE 754 double-precision comparison\n                assert_equal [expr [lindex $aux $i]] [expr [r zscore zscoretest $i]]\n            }\n        } {} {needs:debug}\n\n        test \"ZSCORE 17-19 significant digit mantissas (widened fast path) - $encoding\" {\n            # Exercise the widened fast_float_strtod path that handles\n            # mantissas > 2^53 (via __uint128_t arithmetic). ZADD/ZSCORE\n            # must round-trip bit-exactly through the listpack/skiplist\n            # encoding (parse on ingest, parse again on retrieval). 
Each\n            # input string below parses to a specific IEEE double whose\n            # canonical string representation is itself, so `expr` in Tcl\n            # re-evaluates to the same numeric value.\n            r del zscorewide\n            set widecases {\n                0.49606648747577575\n                0.8731899671198792\n                0.34912978268081996\n                0.0033318113277969186\n                0.9955843393406656\n                -0.8731899671198792\n            }\n            set i 0\n            foreach s $widecases {\n                r zadd zscorewide $s m$i\n                assert_equal [expr $s] [expr [r zscore zscorewide m$i]]\n                incr i\n            }\n            r debug reload\n            assert_encoding $encoding zscorewide\n            set i 0\n            foreach s $widecases {\n                assert_equal [expr $s] [expr [r zscore zscorewide m$i]]\n                incr i\n            }\n        } {} {needs:debug}\n\n        test \"ZSET sorting stresser - $encoding\" {\n            set delta 0\n            for {set test 0} {$test < 2} {incr test} {\n                unset -nocomplain auxarray\n                array set auxarray {}\n                set auxlist {}\n                r del myzset\n                for {set i 0} {$i < $elements} {incr i} {\n                    if {$test == 0} {\n                        set score [expr rand()]\n                    } else {\n                        set score [expr int(rand()*10)]\n                    }\n                    set auxarray($i) $score\n                    r zadd myzset $score $i\n                    # Random update\n                    if {[expr rand()] < .2} {\n                        set j [expr int(rand()*1000)]\n                        if {$test == 0} {\n                            set score [expr rand()]\n                        } else {\n                            set score [expr int(rand()*10)]\n                        }\n                       
 set auxarray($j) $score\n                        r zadd myzset $score $j\n                    }\n                }\n                foreach {item score} [array get auxarray] {\n                    lappend auxlist [list $score $item]\n                }\n                set sorted [lsort -command zlistAlikeSort $auxlist]\n                set auxlist {}\n                foreach x $sorted {\n                    lappend auxlist [lindex $x 1]\n                }\n\n                assert_encoding $encoding myzset\n                set fromredis [r zrange myzset 0 -1]\n                set delta 0\n                for {set i 0} {$i < [llength $fromredis]} {incr i} {\n                    if {[lindex $fromredis $i] != [lindex $auxlist $i]} {\n                        incr delta\n                    }\n                }\n            }\n            assert_equal 0 $delta\n        }\n\n        test \"ZRANGEBYSCORE fuzzy test, 100 ranges in $elements element sorted set - $encoding\" {\n            set err {}\n            r del zset\n            for {set i 0} {$i < $elements} {incr i} {\n                r zadd zset [expr rand()] $i\n            }\n\n            assert_encoding $encoding zset\n            for {set i 0} {$i < 100} {incr i} {\n                set min [expr rand()]\n                set max [expr rand()]\n                if {$min > $max} {\n                    set aux $min\n                    set min $max\n                    set max $aux\n                }\n                set low [r zrangebyscore zset -inf $min]\n                set ok [r zrangebyscore zset $min $max]\n                set high [r zrangebyscore zset $max +inf]\n                set lowx [r zrangebyscore zset -inf ($min]\n                set okx [r zrangebyscore zset ($min ($max]\n                set highx [r zrangebyscore zset ($max +inf]\n\n                if {[r zcount zset -inf $min] != [llength $low]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n      
          if {[r zcount zset $min $max] != [llength $ok]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n                if {[r zcount zset $max +inf] != [llength $high]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n                if {[r zcount zset -inf ($min] != [llength $lowx]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n                if {[r zcount zset ($min ($max] != [llength $okx]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n                if {[r zcount zset ($max +inf] != [llength $highx]} {\n                    append err \"Error, len does not match zcount\\n\"\n                }\n\n                foreach x $low {\n                    set score [r zscore zset $x]\n                    if {$score > $min} {\n                        append err \"Error, score for $x is $score > $min\\n\"\n                    }\n                }\n                foreach x $lowx {\n                    set score [r zscore zset $x]\n                    if {$score >= $min} {\n                        append err \"Error, score for $x is $score >= $min\\n\"\n                    }\n                }\n                foreach x $ok {\n                    set score [r zscore zset $x]\n                    if {$score < $min || $score > $max} {\n                        append err \"Error, score for $x is $score outside $min-$max range\\n\"\n                    }\n                }\n                foreach x $okx {\n                    set score [r zscore zset $x]\n                    if {$score <= $min || $score >= $max} {\n                        append err \"Error, score for $x is $score outside $min-$max open range\\n\"\n                    }\n                }\n                foreach x $high {\n                    set score [r zscore zset $x]\n                    if {$score < $max} {\n      
                  append err \"Error, score for $x is $score < $max\\n\"\n                    }\n                }\n                foreach x $highx {\n                    set score [r zscore zset $x]\n                    if {$score <= $max} {\n                        append err \"Error, score for $x is $score <= $max\\n\"\n                    }\n                }\n            }\n            assert_equal {} $err\n        }\n\n        test \"ZRANGEBYLEX fuzzy test, 100 ranges in $elements element sorted set - $encoding\" {\n            set lexset {}\n            r del zset\n            for {set j 0} {$j < $elements} {incr j} {\n                set e [randstring 0 30 alpha]\n                lappend lexset $e\n                r zadd zset 0 $e\n            }\n            set lexset [lsort -unique $lexset]\n            for {set j 0} {$j < 100} {incr j} {\n                set min [randstring 0 30 alpha]\n                set max [randstring 0 30 alpha]\n                set mininc [randomInt 2]\n                set maxinc [randomInt 2]\n                if {$mininc} {set cmin \"\\[$min\"} else {set cmin \"($min\"}\n                if {$maxinc} {set cmax \"\\[$max\"} else {set cmax \"($max\"}\n                set rev [randomInt 2]\n                if {$rev} {\n                    set cmd zrevrangebylex\n                } else {\n                    set cmd zrangebylex\n                }\n\n                # Make sure data is the same in both sides\n                assert {[r zrange zset 0 -1] eq $lexset}\n\n                # Get the Redis output\n                set output [r $cmd zset $cmin $cmax]\n                if {$rev} {\n                    set outlen [r zlexcount zset $cmax $cmin]\n                } else {\n                    set outlen [r zlexcount zset $cmin $cmax]\n                }\n\n                # Compute the same output via Tcl\n                set o {}\n                set copy $lexset\n                if {(!$rev && [string compare $min $max] > 0) ||\n    
                ($rev && [string compare $max $min] > 0)} {\n                    # Empty output when ranges are inverted.\n                } else {\n                    if {$rev} {\n                        # Reverse the Tcl list using Redis itself.\n                        set copy [r zrevrange zset 0 -1]\n                        # Swap min / max as well\n                        lassign [list $min $max $mininc $maxinc] \\\n                            max min maxinc mininc\n                    }\n                    foreach e $copy {\n                        set mincmp [string compare $e $min]\n                        set maxcmp [string compare $e $max]\n                        if {\n                             ($mininc && $mincmp >= 0 || !$mininc && $mincmp > 0)\n                             &&\n                             ($maxinc && $maxcmp <= 0 || !$maxinc && $maxcmp < 0)\n                        } {\n                            lappend o $e\n                        }\n                    }\n                }\n                assert {$o eq $output}\n                assert {$outlen eq [llength $output]}\n            }\n        }\n\n        test \"ZREMRANGEBYLEX fuzzy test, 100 ranges in $elements element sorted set - $encoding\" {\n            set lexset {}\n            r del zset{t} zsetcopy{t}\n            for {set j 0} {$j < $elements} {incr j} {\n                set e [randstring 0 30 alpha]\n                lappend lexset $e\n                r zadd zset{t} 0 $e\n            }\n            set lexset [lsort -unique $lexset]\n            for {set j 0} {$j < 100} {incr j} {\n                # Copy...\n                r zunionstore zsetcopy{t} 1 zset{t}\n                set lexsetcopy $lexset\n\n                set min [randstring 0 30 alpha]\n                set max [randstring 0 30 alpha]\n                set mininc [randomInt 2]\n                set maxinc [randomInt 2]\n                if {$mininc} {set cmin \"\\[$min\"} else {set cmin \"($min\"}\n       
         if {$maxinc} {set cmax \"\\[$max\"} else {set cmax \"($max\"}\n\n                # Make sure data is the same in both sides\n                assert {[r zrange zset{t} 0 -1] eq $lexset}\n\n                # Get the range we are going to remove\n                set torem [r zrangebylex zset{t} $cmin $cmax]\n                set toremlen [r zlexcount zset{t} $cmin $cmax]\n                r zremrangebylex zsetcopy{t} $cmin $cmax\n                set output [r zrange zsetcopy{t} 0 -1]\n\n                # Remove the range with Tcl from the original list\n                if {$toremlen} {\n                    set first [lsearch -exact $lexsetcopy [lindex $torem 0]]\n                    set last [expr {$first+$toremlen-1}]\n                    set lexsetcopy [lreplace $lexsetcopy $first $last]\n                }\n                assert {$lexsetcopy eq $output}\n            }\n        }\n\n        test \"ZSETs skiplist implementation backlink consistency test - $encoding\" {\n            set diff 0\n            for {set j 0} {$j < $elements} {incr j} {\n                r zadd myzset [expr rand()] \"Element-$j\"\n                r zrem myzset \"Element-[expr int(rand()*$elements)]\"\n            }\n\n            assert_encoding $encoding myzset\n            set l1 [r zrange myzset 0 -1]\n            set l2 [r zrevrange myzset 0 -1]\n            for {set j 0} {$j < [llength $l1]} {incr j} {\n                if {[lindex $l1 $j] ne [lindex $l2 end-$j]} {\n                    incr diff\n                }\n            }\n            assert_equal 0 $diff\n        }\n\n        test \"ZSETs ZRANK augmented skip list stress testing - $encoding\" {\n            set err {}\n            r del myzset\n            for {set k 0} {$k < 2000} {incr k} {\n                set i [expr {$k % $elements}]\n                if {[expr rand()] < .2} {\n                    r zrem myzset $i\n                } else {\n                    set score [expr rand()]\n                    r zadd myzset 
$score $i\n                    assert_encoding $encoding myzset\n                }\n\n                set card [r zcard myzset]\n                if {$card > 0} {\n                    set index [randomInt $card]\n                    set ele [lindex [r zrange myzset $index $index] 0]\n                    set rank [r zrank myzset $ele]\n                    if {$rank != $index} {\n                        set err \"$ele RANK is wrong! ($rank != $index)\"\n                        break\n                    }\n                }\n            }\n            assert_equal {} $err\n        }\n\n    foreach {pop} {BZPOPMIN BZMPOP_MIN} {\n        test \"$pop, ZADD + DEL should not awake blocked client\" {\n            set rd [redis_deferring_client]\n            r del zset\n\n            bzpop_command $rd $pop zset 0\n            wait_for_blocked_client\n\n            r multi\n            r zadd zset 0 foo\n            r del zset\n            r exec\n            r del zset\n            r zadd zset 1 bar\n\n            verify_pop_response $pop [$rd read] {zset bar 1} {zset {{bar 1}}}\n            $rd close\n        }\n\n        test \"$pop, ZADD + DEL + SET should not awake blocked client\" {\n            set rd [redis_deferring_client]\n            r del zset\n\n            bzpop_command $rd $pop zset 0\n            wait_for_blocked_client\n\n            r multi\n            r zadd zset 0 foo\n            r del zset\n            r set zset foo\n            r exec\n            r del zset\n            r zadd zset 1 bar\n\n            verify_pop_response $pop [$rd read] {zset bar 1} {zset {{bar 1}}}\n            $rd close\n        }\n    }\n\n        test {BZPOPMIN unblock but the key is expired and then block again - reprocessing command} {\n            r flushall\n            r debug set-active-expire 0\n            set rd [redis_deferring_client]\n\n            set start [clock milliseconds]\n            $rd bzpopmin zset{t} 1\n            wait_for_blocked_clients_count 1\n\n    
        # The exec will try to awake the blocked client, but the key is expired,\n            # so the client will be blocked again during the command reprocessing.\n            r multi\n            r zadd zset{t} 1 one\n            r pexpire zset{t} 100\n            r debug sleep 0.2\n            r exec\n\n            assert_equal {} [$rd read]\n            set end [clock milliseconds]\n\n            # Before the fix in #13004, this time would have been 1200+ (i.e. more than 1200ms),\n            # now it should be 1000, but in order to avoid timing issues, we increase the range a bit.\n            assert_range [expr $end-$start] 1000 1150\n\n            r debug set-active-expire 1\n            $rd close\n        } {0} {needs:debug}\n\n        test \"BZPOPMIN with same key multiple times should work\" {\n            set rd [redis_deferring_client]\n            r del z1{t} z2{t}\n\n            # Data arriving after the BZPOPMIN.\n            $rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0\n            wait_for_blocked_client\n            r zadd z1{t} 0 a\n            assert_equal [$rd read] {z1{t} a 0}\n            $rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0\n            wait_for_blocked_client\n            r zadd z2{t} 1 b\n            assert_equal [$rd read] {z2{t} b 1}\n\n            # Data already there.\n            r zadd z1{t} 0 a\n            r zadd z2{t} 1 b\n            $rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0\n            assert_equal [$rd read] {z1{t} a 0}\n            $rd bzpopmin z1{t} z2{t} z2{t} z1{t} 0\n            assert_equal [$rd read] {z2{t} b 1}\n            $rd close\n        }\n\n    foreach {pop} {BZPOPMIN BZMPOP_MIN} {\n        test \"MULTI/EXEC is isolated from the point of view of $pop\" {\n            set rd [redis_deferring_client]\n            r del zset\n\n            bzpop_command $rd $pop zset 0\n            wait_for_blocked_client\n\n            r multi\n            r zadd zset 0 a\n            r zadd zset 1 b\n            r zadd zset 2 c\n       
     r exec\n\n            verify_pop_response $pop [$rd read] {zset a 0} {zset {{a 0}}}\n            $rd close\n        }\n\n        test \"$pop with variadic ZADD\" {\n            set rd [redis_deferring_client]\n            r del zset\n            if {$::valgrind} {after 100}\n            bzpop_command $rd $pop zset 0\n            wait_for_blocked_client\n            if {$::valgrind} {after 100}\n            assert_equal 2 [r zadd zset -1 foo 1 bar]\n            if {$::valgrind} {after 100}\n            verify_pop_response $pop [$rd read] {zset foo -1} {zset {{foo -1}}}\n            assert_equal {bar} [r zrange zset 0 -1]\n            $rd close\n        }\n\n        test \"$pop with zero timeout should block indefinitely\" {\n            set rd [redis_deferring_client]\n            r del zset\n            bzpop_command $rd $pop zset 0\n            wait_for_blocked_client\n            after 1000\n            r zadd zset 0 foo\n            verify_pop_response $pop [$rd read] {zset foo 0} {zset {{foo 0}}}\n            $rd close\n        }\n    }\n\n        r config set zset-max-ziplist-entries $original_max_entries\n        r config set zset-max-ziplist-value $original_max_value\n    }\n\n    tags {\"slow\"} {\n        stressers listpack\n        stressers skiplist\n    }\n\n    test \"BZPOP/BZMPOP against wrong type\" {\n        r set foo{t} bar\n        assert_error \"*WRONGTYPE*\" {r bzpopmin foo{t} 1}\n        assert_error \"*WRONGTYPE*\" {r bzpopmax foo{t} 1}\n\n        assert_error \"*WRONGTYPE*\" {r bzmpop 1 1 foo{t} min}\n        assert_error \"*WRONGTYPE*\" {r bzmpop 1 1 foo{t} max}\n        assert_error \"*WRONGTYPE*\" {r bzmpop 1 1 foo{t} min count 10}\n\n        r del foo{t}\n        r set foo2{t} bar\n        assert_error \"*WRONGTYPE*\" {r bzmpop 1 2 foo{t} foo2{t} min}\n        assert_error \"*WRONGTYPE*\" {r bzmpop 1 2 foo2{t} foo{t} max count 1}\n    }\n\n    test \"BZMPOP with illegal argument\" {\n        assert_error \"ERR wrong number of 
arguments for 'bzmpop' command\" {r bzmpop}\n        assert_error \"ERR wrong number of arguments for 'bzmpop' command\" {r bzmpop 0 1}\n        assert_error \"ERR wrong number of arguments for 'bzmpop' command\" {r bzmpop 0 1 myzset{t}}\n\n        assert_error \"ERR numkeys*\" {r bzmpop 1 0 myzset{t} MIN}\n        assert_error \"ERR numkeys*\" {r bzmpop 1 a myzset{t} MIN}\n        assert_error \"ERR numkeys*\" {r bzmpop 1 -1 myzset{t} MAX}\n\n        assert_error \"ERR syntax error*\" {r bzmpop 1 1 myzset{t} bad_where}\n        assert_error \"ERR syntax error*\" {r bzmpop 1 1 myzset{t} MIN bar_arg}\n        assert_error \"ERR syntax error*\" {r bzmpop 1 1 myzset{t} MAX MIN}\n        assert_error \"ERR syntax error*\" {r bzmpop 1 1 myzset{t} COUNT}\n        assert_error \"ERR syntax error*\" {r bzmpop 1 1 myzset{t} MIN COUNT 1 COUNT 2}\n        assert_error \"ERR syntax error*\" {r bzmpop 1 2 myzset{t} myzset2{t} bad_arg}\n\n        assert_error \"ERR count*\" {r bzmpop 1 1 myzset{t} MIN COUNT 0}\n        assert_error \"ERR count*\" {r bzmpop 1 1 myzset{t} MAX COUNT a}\n        assert_error \"ERR count*\" {r bzmpop 1 1 myzset{t} MIN COUNT -1}\n        assert_error \"ERR count*\" {r bzmpop 1 2 myzset{t} myzset2{t} MAX COUNT -1}\n    }\n\n    test \"BZMPOP with multiple blocked clients\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        set rd3 [redis_deferring_client]\n        set rd4 [redis_deferring_client]\n        r del myzset{t} myzset2{t}\n\n        $rd1 bzmpop 0 2 myzset{t} myzset2{t} min count 1\n        wait_for_blocked_clients_count 1\n        $rd2 bzmpop 0 2 myzset{t} myzset2{t} max count 10\n        wait_for_blocked_clients_count 2\n        $rd3 bzmpop 0 2 myzset{t} myzset2{t} min count 10\n        wait_for_blocked_clients_count 3\n        $rd4 bzmpop 0 2 myzset{t} myzset2{t} max count 1\n        wait_for_blocked_clients_count 4\n\n        r multi\n        r zadd myzset{t} 1 a 2 b 3 c 4 d 5 e\n        r zadd 
myzset2{t} 1 a 2 b 3 c 4 d 5 e\n        r exec\n\n        assert_equal {myzset{t} {{a 1}}} [$rd1 read]\n        assert_equal {myzset{t} {{e 5} {d 4} {c 3} {b 2}}} [$rd2 read]\n        assert_equal {myzset2{t} {{a 1} {b 2} {c 3} {d 4} {e 5}}} [$rd3 read]\n\n        r zadd myzset2{t} 1 a 2 b 3 c\n        assert_equal {myzset2{t} {{c 3}}} [$rd4 read]\n\n        r del myzset{t} myzset2{t}\n        $rd1 close\n        $rd2 close\n        $rd3 close\n        $rd4 close\n    }\n\n    test \"BZMPOP propagate as pop with count command to replica\" {\n        set rd [redis_deferring_client]\n        set repl [attach_to_replication_stream]\n\n        # BZMPOP without being blocked.\n        r zadd myzset{t} 1 one 2 two 3 three\n        r zadd myzset2{t} 4 four 5 five 6 six\n        r bzmpop 0 1 myzset{t} min\n        r bzmpop 0 2 myzset{t} myzset2{t} max count 10\n        r bzmpop 0 2 myzset{t} myzset2{t} max count 10\n\n        # BZMPOP that gets blocked.\n        $rd bzmpop 0 1 myzset{t} min count 1\n        wait_for_blocked_client\n        r zadd myzset{t} 1 one\n        $rd bzmpop 0 2 myzset{t} myzset2{t} min count 5\n        wait_for_blocked_client\n        r zadd myzset{t} 1 one 2 two 3 three\n        $rd bzmpop 0 2 myzset{t} myzset2{t} max count 10\n        wait_for_blocked_client\n        r zadd myzset2{t} 4 four 5 five 6 six\n\n        # Released on timeout.\n        assert_equal {} [r bzmpop 0.01 1 myzset{t} max count 10]\n        r set foo{t} bar ;# something else to propagate after, so we can make sure the above pop didn't.\n\n        $rd close\n\n        assert_replication_stream $repl {\n            {select *}\n            {zadd myzset{t} 1 one 2 two 3 three}\n            {zadd myzset2{t} 4 four 5 five 6 six}\n            {zpopmin myzset{t} 1}\n            {zpopmax myzset{t} 2}\n            {zpopmax myzset2{t} 3}\n            {zadd myzset{t} 1 one}\n            {zpopmin myzset{t} 1}\n            {zadd myzset{t} 1 one 2 two 3 three}\n            {zpopmin 
myzset{t} 3}\n            {zadd myzset2{t} 4 four 5 five 6 six}\n            {zpopmax myzset2{t} 3}\n            {set foo{t} bar}\n        }\n        close_replication_stream $repl\n    } {} {needs:repl}\n\n    test \"BZMPOP should not block on non-key arguments - #10762\" {\n        set rd1 [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n        r del myzset myzset2 myzset3\n\n        $rd1 bzmpop 0 1 myzset min count 10\n        wait_for_blocked_clients_count 1\n        $rd2 bzmpop 0 2 myzset2 myzset3 max count 10\n        wait_for_blocked_clients_count 2\n\n        # ZADDs against keys named after the non-key arguments must not unblock the clients.\n        r zadd 0 100 timeout_value\n        r zadd 1 200 numkeys_value\n        r zadd min 300 min_token\n        r zadd max 400 max_token\n        r zadd count 500 count_token\n        r zadd 10 600 count_value\n\n        r zadd myzset 1 zset\n        r zadd myzset3 1 zset3\n        assert_equal {myzset {{zset 1}}} [$rd1 read]\n        assert_equal {myzset3 {{zset3 1}}} [$rd2 read]\n\n        $rd1 close\n        $rd2 close\n    } {0} {cluster:skip}\n\n    test {ZSET skiplist order consistency when elements are moved} {\n        set original_max [lindex [r config get zset-max-ziplist-entries] 1]\n        r config set zset-max-ziplist-entries 0\n        for {set times 0} {$times < 10} {incr times} {\n            r del zset\n            for {set j 0} {$j < 1000} {incr j} {\n                r zadd zset [randomInt 50] ele-[randomInt 10]\n            }\n\n            # Make sure that element ordering is correct\n            set prev_element {}\n            set prev_score -1\n            foreach {element score} [r zrange zset 0 -1 WITHSCORES] {\n                # Assert that elements are in increasing order\n                assert {\n                    $prev_score < $score ||\n                    ($prev_score == $score &&\n                     [string compare $prev_element $element] == -1)\n                }\n                set 
prev_element $element\n                set prev_score $score\n            }\n        }\n        r config set zset-max-ziplist-entries $original_max\n    }\n\n    test {ZRANGESTORE basic} {\n        r flushall\n        r zadd z1{t} 1 a 2 b 3 c 4 d\n        set res [r zrangestore z2{t} z1{t} 0 -1]\n        assert_equal $res 4\n        r zrange z2{t} 0 -1 withscores\n    } {a 1 b 2 c 3 d 4}\n\n    test {ZRANGESTORE RESP3} {\n        r hello 3\n        assert_equal [r zrange z2{t} 0 -1 withscores] {{a 1.0} {b 2.0} {c 3.0} {d 4.0}}\n        r hello 2\n    } \n\n    test {ZRANGESTORE range} {\n        set res [r zrangestore z2{t} z1{t} 1 2]\n        assert_equal $res 2\n        r zrange z2{t} 0 -1 withscores\n    } {b 2 c 3}\n\n    test {ZRANGESTORE BYLEX} {\n        set res [r zrangestore z3{t} z1{t} \\[b \\[c BYLEX]\n        assert_equal $res 2\n        assert_encoding listpack z3{t}\n        set res [r zrangestore z2{t} z1{t} \\[b \\[c BYLEX]\n        assert_equal $res 2\n        r zrange z2{t} 0 -1 withscores\n    } {b 2 c 3}\n\n    test {ZRANGESTORE BYSCORE} {\n        set res [r zrangestore z4{t} z1{t} 1 2 BYSCORE]\n        assert_equal $res 2\n        assert_encoding listpack z4{t}\n        set res [r zrangestore z2{t} z1{t} 1 2 BYSCORE]\n        assert_equal $res 2\n        r zrange z2{t} 0 -1 withscores\n    } {a 1 b 2}\n\n    test {ZRANGESTORE BYSCORE LIMIT} {\n        set res [r zrangestore z2{t} z1{t} 0 5 BYSCORE LIMIT 0 2]\n        assert_equal $res 2\n        r zrange z2{t} 0 -1 withscores\n    } {a 1 b 2}\n\n    test {ZRANGESTORE BYSCORE REV LIMIT} {\n        set res [r zrangestore z2{t} z1{t} 5 0 BYSCORE REV LIMIT 0 2]\n        assert_equal $res 2\n        r zrange z2{t} 0 -1 withscores\n    } {c 3 d 4}\n\n    test {ZRANGE BYSCORE REV LIMIT} {\n        r zrange z1{t} 5 0 BYSCORE REV LIMIT 0 2 WITHSCORES\n    } {d 4 c 3}\n\n    test {ZRANGESTORE - src key missing} {\n        set res [r zrangestore z2{t} missing{t} 0 -1]\n        assert_equal $res 0\n       
 r exists z2{t}\n    } {0}\n\n    test {ZRANGESTORE - src key wrong type} {\n        r zadd z2{t} 1 a\n        r set foo{t} bar\n        assert_error \"*WRONGTYPE*\" {r zrangestore z2{t} foo{t} 0 -1}\n        r zrange z2{t} 0 -1\n    } {a}\n\n    test {ZRANGESTORE - empty range} {\n        set res [r zrangestore z2{t} z1{t} 5 6]\n        assert_equal $res 0\n        r exists z2{t}\n    } {0}\n\n    test {ZRANGESTORE BYLEX - empty range} {\n        set res [r zrangestore z2{t} z1{t} \\[f \\[g BYLEX]\n        assert_equal $res 0\n        r exists z2{t}\n    } {0}\n\n    test {ZRANGESTORE BYSCORE - empty range} {\n        set res [r zrangestore z2{t} z1{t} 5 6 BYSCORE]\n        assert_equal $res 0\n        r exists z2{t}\n    } {0}\n\n    test {ZRANGE BYLEX} {\n        r zrange z1{t} \\[b \\[c BYLEX\n    } {b c}\n\n    test {ZRANGESTORE invalid syntax} {\n        catch {r zrangestore z2{t} z1{t} 0 -1 limit 1 2} err\n        assert_match \"*syntax*\" $err\n        catch {r zrangestore z2{t} z1{t} 0 -1 WITHSCORES} err\n        assert_match \"*syntax*\" $err\n    }\n\n    test {ZRANGESTORE with zset-max-listpack-entries 0 #10767 case} {\n        set original_max [lindex [r config get zset-max-listpack-entries] 1]\n        r config set zset-max-listpack-entries 0\n        r del z1{t} z2{t}\n        r zadd z1{t} 1 a\n        assert_encoding skiplist z1{t}\n        assert_equal 1 [r zrangestore z2{t} z1{t} 0 -1]\n        assert_encoding skiplist z2{t}\n        r config set zset-max-listpack-entries $original_max\n    }\n\n    test {ZRANGESTORE with zset-max-listpack-entries 1 dst key should use skiplist encoding} {\n        set original_max [lindex [r config get zset-max-listpack-entries] 1]\n        r config set zset-max-listpack-entries 1\n        r del z1{t} z2{t} z3{t}\n        r zadd z1{t} 1 a 2 b\n        assert_equal 1 [r zrangestore z2{t} z1{t} 0 0]\n        assert_encoding listpack z2{t}\n        assert_equal 2 [r zrangestore z3{t} z1{t} 0 1]\n        
assert_encoding skiplist z3{t}\n        r config set zset-max-listpack-entries $original_max\n    }\n\n    test {ZRANGE invalid syntax} {\n        catch {r zrange z1{t} 0 -1 limit 1 2} err\n        assert_match \"*syntax*\" $err\n        catch {r zrange z1{t} 0 -1 BYLEX WITHSCORES} err\n        assert_match \"*syntax*\" $err\n        catch {r zrevrange z1{t} 0 -1 BYSCORE} err\n        assert_match \"*syntax*\" $err\n        catch {r zrangebyscore z1{t} 0 -1 REV} err\n        assert_match \"*syntax*\" $err\n    }\n\n    proc get_keys {l} {\n        set res {}\n        foreach {score key} $l {\n            lappend res $key\n        }\n        return $res\n    }\n\n    # Check whether the zset members belong to the zset\n    proc check_member {mydict res} {\n        foreach ele $res {\n            assert {[dict exists $mydict $ele]}\n        }\n    }\n\n    # Check whether the zset members and score belong to the zset\n    proc check_member_and_score {mydict res} {\n       foreach {key val} $res {\n            assert_equal $val [dict get $mydict $key]\n        }\n    }\n\n    foreach {type contents} \"listpack {1 a 2 b 3 c} skiplist {1 a 2 b 3 [randstring 70 90 alpha]}\" {\n        set original_max_value [lindex [r config get zset-max-ziplist-value] 1]\n        r config set zset-max-ziplist-value 10\n        create_zset myzset $contents\n        assert_encoding $type myzset\n\n        test \"ZRANDMEMBER - $type\" {\n            unset -nocomplain myzset\n            array set myzset {}\n            for {set i 0} {$i < 100} {incr i} {\n                set key [r zrandmember myzset]\n                set myzset($key) 1\n            }\n            assert_equal [lsort [get_keys $contents]] [lsort [array names myzset]]\n        }\n        r config set zset-max-ziplist-value $original_max_value\n    }\n\n    test \"ZRANDMEMBER with RESP3\" {\n        r hello 3\n        set res [r zrandmember myzset 3 withscores]\n        assert_equal [llength $res] 3\n        assert_equal 
[llength [lindex $res 1]] 2\n\n        set res [r zrandmember myzset 3]\n        assert_equal [llength $res] 3\n        assert_equal [llength [lindex $res 1]] 1\n        r hello 2\n    }\n\n    test \"ZRANDMEMBER count of 0 is handled correctly\" {\n        r zrandmember myzset 0\n    } {}\n\n    test \"ZRANDMEMBER with <count> against non existing key\" {\n        r zrandmember nonexisting_key 100\n    } {}\n\n    test \"ZRANDMEMBER count overflow\" {\n        r zadd myzset 0 a\n        assert_error {*value is out of range*} {r zrandmember myzset -9223372036854770000 withscores}\n        assert_error {*value is out of range*} {r zrandmember myzset -9223372036854775808 withscores}\n        assert_error {*value is out of range*} {r zrandmember myzset -9223372036854775808}\n    } {}\n\n    # Make sure we can distinguish between an empty array and a null response\n    r readraw 1\n\n    test \"ZRANDMEMBER count of 0 is handled correctly - emptyarray\" {\n        r zrandmember myzset 0\n    } {*0}\n\n    test \"ZRANDMEMBER with <count> against non existing key - emptyarray\" {\n        r zrandmember nonexisting_key 100\n    } {*0}\n\n    r readraw 0\n\n    foreach {type contents} \"\n        skiplist {1 a 2 b 3 c 4 d 5 e 6 f 7 g 7 h 9 i 10 [randstring 70 90 alpha]}\n        listpack {1 a 2 b 3 c 4 d 5 e 6 f 7 g 7 h 9 i 10 j} \" {\n        test \"ZRANDMEMBER with <count> - $type\" {\n            set original_max_value [lindex [r config get zset-max-ziplist-value] 1]\n            r config set zset-max-ziplist-value 10\n            create_zset myzset $contents\n            assert_encoding $type myzset\n\n            # create a dict for easy lookup\n            set mydict [dict create {*}[r zrange myzset 0 -1 withscores]]\n\n            # We'll stress different parts of the code, see the implementation\n            # of ZRANDMEMBER for more information, but basically there are\n            # four different code paths.\n\n            # PATH 1: Use negative count.\n\n        
    # 1) Check that it returns repeated elements with and without values.\n            # 2) Check that all the elements actually belong to the original zset.\n            set res [r zrandmember myzset -20]\n            assert_equal [llength $res] 20\n            check_member $mydict $res\n\n            set res [r zrandmember myzset -1001]\n            assert_equal [llength $res] 1001\n            check_member $mydict $res\n\n            # again with WITHSCORES\n            set res [r zrandmember myzset -20 withscores]\n            assert_equal [llength $res] 40\n            check_member_and_score $mydict $res\n\n            set res [r zrandmember myzset -1001 withscores]\n            assert_equal [llength $res] 2002\n            check_member_and_score $mydict $res\n\n            # Test random uniform distribution\n            # df = 9, 40 means 0.00001 probability\n            set res [r zrandmember myzset -1000]\n            assert_lessthan [chi_square_value $res] 40\n            check_member $mydict $res\n\n            # 3) Check that eventually all the elements are returned.\n            #    Use both WITHSCORES and without\n            unset -nocomplain auxset\n            set iterations 1000\n            while {$iterations != 0} {\n                incr iterations -1\n                if {[expr {$iterations % 2}] == 0} {\n                    set res [r zrandmember myzset -3 withscores]\n                    foreach {key val} $res {\n                        dict append auxset $key $val\n                    }\n                } else {\n                    set res [r zrandmember myzset -3]\n                    foreach key $res {\n                        dict append auxset $key\n                    }\n                }\n                if {[lsort [dict keys $mydict]] eq\n                    [lsort [dict keys $auxset]]} {\n                    break;\n                }\n            }\n            assert {$iterations != 0}\n\n            # PATH 2: positive count (unique 
behavior) with requested size\n            # equal to or greater than the set size.\n            foreach size {10 20} {\n                set res [r zrandmember myzset $size]\n                assert_equal [llength $res] 10\n                assert_equal [lsort $res] [lsort [dict keys $mydict]]\n                check_member $mydict $res\n\n                # again with WITHSCORES\n                set res [r zrandmember myzset $size withscores]\n                assert_equal [llength $res] 20\n                assert_equal [lsort $res] [lsort $mydict]\n                check_member_and_score $mydict $res\n            }\n\n            # PATH 3: Ask for almost as many elements as there are in the set.\n            # In this case the implementation will duplicate the original\n            # set and will remove random elements up to the requested size.\n            #\n            # PATH 4: Ask for a number of elements definitely smaller than\n            # the set size.\n            #\n            # We can test both code paths just by changing the size,\n            # using the same code.\n            foreach size {1 2 8} {\n                # 1) Check that all the elements actually belong to the\n                # original set.\n                set res [r zrandmember myzset $size]\n                assert_equal [llength $res] $size\n                check_member $mydict $res\n\n                # again with WITHSCORES\n                set res [r zrandmember myzset $size withscores]\n                assert_equal [llength $res] [expr {$size * 2}]\n                check_member_and_score $mydict $res\n\n                # 2) Check that eventually all the elements are returned.\n                #    Use both WITHSCORES and without\n                unset -nocomplain auxset\n                unset -nocomplain allkey\n                set iterations [expr {1000 / $size}]\n                set all_ele_return false\n                while {$iterations != 0} {\n                    incr iterations -1\n           
         if {[expr {$iterations % 2}] == 0} {\n                        set res [r zrandmember myzset $size withscores]\n                        foreach {key value} $res {\n                            dict append auxset $key $value\n                            lappend allkey $key\n                        }\n                    } else {\n                        set res [r zrandmember myzset $size]\n                        foreach key $res {\n                            dict append auxset $key\n                            lappend allkey $key\n                        }\n                    }\n                    if {[lsort [dict keys $mydict]] eq\n                        [lsort [dict keys $auxset]]} {\n                        set all_ele_return true\n                    }\n                }\n                assert_equal $all_ele_return true\n                # df = 9, 40 means 0.00001 probability\n                assert_lessthan [chi_square_value $allkey] 40\n            }\n        }\n        r config set zset-max-ziplist-value $original_max_value\n    }\n\n    test {zset score double range} {\n        set dblmax 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.00000000000000000\n        r del zz\n        r zadd zz $dblmax dblmax\n        assert_encoding listpack zz\n        r zscore zz dblmax\n    } {1.7976931348623157e+308}\n\n    test {zunionInterDiffGenericCommand acts on SET and ZSET} {\n        r del set_small{t} set_big{t} zset_small{t} zset_big{t} zset_dest{t}\n\n        foreach set_type {intset listpack hashtable} {\n            # Restore all default configurations before each round of testing.\n            r config set set-max-intset-entries 512\n            r config set set-max-listpack-entries 128\n    
        r config set zset-max-listpack-entries 128\n\n            r del set_small{t} set_big{t}\n\n            if {$set_type == \"intset\"} {\n                r sadd set_small{t} 1 2 3\n                r sadd set_big{t} 1 2 3 4 5\n                assert_encoding intset set_small{t}\n                assert_encoding intset set_big{t}\n            } elseif {$set_type == \"listpack\"} {\n                # Add an \"a\" and then remove it, make sure the set is listpack encoding.\n                r sadd set_small{t} a 1 2 3\n                r sadd set_big{t} a 1 2 3 4 5\n                r srem set_small{t} a\n                r srem set_big{t} a\n                assert_encoding listpack set_small{t}\n                assert_encoding listpack set_big{t}\n            } elseif {$set_type == \"hashtable\"} {\n                r config set set-max-intset-entries 0\n                r config set set-max-listpack-entries 0\n                r sadd set_small{t} 1 2 3\n                r sadd set_big{t} 1 2 3 4 5\n                assert_encoding hashtable set_small{t}\n                assert_encoding hashtable set_big{t}\n            }\n\n            foreach zset_type {listpack skiplist} {\n                r del zset_small{t} zset_big{t}\n\n                if {$zset_type == \"listpack\"} {\n                    r zadd zset_small{t} 1 1 2 2 3 3\n                    r zadd zset_big{t} 1 1 2 2 3 3 4 4 5 5\n                    assert_encoding listpack zset_small{t}\n                    assert_encoding listpack zset_big{t}\n                } elseif {$zset_type == \"skiplist\"} {\n                    r config set zset-max-listpack-entries 0\n                    r zadd zset_small{t} 1 1 2 2 3 3\n                    r zadd zset_big{t} 1 1 2 2 3 3 4 4 5 5\n                    assert_encoding skiplist zset_small{t}\n                    assert_encoding skiplist zset_big{t}\n                }\n\n                # Test one key is big and one key is small separately.\n                # The reason for 
this is because we will sort the sets from smallest to largest.\n                # So set one big key and one small key, then the test can cover more code paths.\n                foreach {small_or_big set_key zset_key} {\n                    small set_small{t} zset_big{t}\n                    big set_big{t} zset_small{t}\n                } {\n                    # The results of these commands are not related to the order of the keys.\n                    assert_equal {1 2 3 4 5} [lsort [r zunion 2 $set_key $zset_key]]\n                    assert_equal {5} [r zunionstore zset_dest{t} 2 $set_key $zset_key]\n                    assert_equal {1 2 3} [lsort [r zinter 2 $set_key $zset_key]]\n                    assert_equal {3} [r zinterstore zset_dest{t} 2 $set_key $zset_key]\n                    assert_equal {3} [r zintercard 2 $set_key $zset_key]\n\n                    # The result of zdiff is related to the order of the keys.\n                    if {$small_or_big == \"small\"} {\n                        assert_equal {} [r zdiff 2 $set_key $zset_key]\n                        assert_equal {0} [r zdiffstore zset_dest{t} 2 $set_key $zset_key]\n                    } else {\n                        assert_equal {4 5} [lsort [r zdiff 2 $set_key $zset_key]]\n                        assert_equal {2} [r zdiffstore zset_dest{t} 2 $set_key $zset_key]\n                    }\n                }\n            }\n        }\n\n        r config set set-max-intset-entries 512\n        r config set set-max-listpack-entries 128\n        r config set zset-max-listpack-entries 128\n    }\n\n    foreach type {single multiple single_multiple} {\n        test \"ZADD overflows the maximum allowed elements in a listpack - $type\" {\n            r del myzset\n\n            set max_entries 64\n            set original_max [lindex [r config get zset-max-listpack-entries] 1]\n            r config set zset-max-listpack-entries $max_entries\n\n            if {$type == \"single\"} {\n                # 
All are single zadd commands.\n                for {set i 0} {$i < $max_entries} {incr i} { r zadd myzset $i $i }\n            } elseif {$type == \"multiple\"} {\n                # One zadd command to add all elements.\n                set args {}\n                for {set i 0} {$i < $max_entries * 2} {incr i} { lappend args $i }\n                r zadd myzset {*}$args\n            } elseif {$type == \"single_multiple\"} {\n                # First one zadd adds an element (creates a key) and then one zadd adds all elements.\n                r zadd myzset 1 1\n                set args {}\n                for {set i 0} {$i < $max_entries * 2} {incr i} { lappend args $i }\n                r zadd myzset {*}$args\n            }\n\n            assert_encoding listpack myzset\n            assert_equal $max_entries [r zcard myzset]\n            assert_equal 1 [r zadd myzset 1 b]\n            assert_encoding skiplist myzset\n\n            r config set zset-max-listpack-entries $original_max\n        }\n    }\n}\n"
  },
  {
    "path": "tests/unit/violations.tcl",
"content": "# One XADD with one huge 5GB field\n# Expected to fail resulting in an empty stream\nrun_solo {violations} {\nstart_server [list overrides [list save \"\"] ] {\n    test {XADD one huge field} {\n        r config set proto-max-bulk-len 10000000000 ;#10gb\n        r config set client-query-buffer-limit 10000000000 ;#10gb\n        r write \"*5\\r\\n\\$4\\r\\nXADD\\r\\n\\$2\\r\\nS1\\r\\n\\$1\\r\\n*\\r\\n\"\n        r write \"\\$1\\r\\nA\\r\\n\"\n        catch {\n            write_big_bulk 5000000000 ;#5gb\n        } err\n        assert_match {*too large*} $err\n        r xlen S1\n    } {0} {large-memory}\n}\n\n# One XADD with one huge field of just under 4GB\n# This uncovers the overflow in lpEncodeGetType\n# Expected to fail resulting in an empty stream\nstart_server [list overrides [list save \"\"] ] {\n    test {XADD one huge field - 1} {\n        r config set proto-max-bulk-len 10000000000 ;#10gb\n        r config set client-query-buffer-limit 10000000000 ;#10gb\n        r write \"*5\\r\\n\\$4\\r\\nXADD\\r\\n\\$2\\r\\nS1\\r\\n\\$1\\r\\n*\\r\\n\"\n        r write \"\\$1\\r\\nA\\r\\n\"\n        catch {\n            write_big_bulk 4294967295 ;#4gb-1\n        } err\n        assert_match {*too large*} $err\n        r xlen S1\n    } {0} {large-memory}\n}\n\n# Gradually add big stream fields using repeated XADD calls\nstart_server [list overrides [list save \"\"] ] {\n    test {several XADD big fields} {\n        r config set stream-node-max-bytes 0\n        for {set j 0} {$j<10} {incr j} {\n            r xadd stream * 1 $::str500 2 $::str500\n        }\n        r ping\n        r xlen stream\n    } {10} {large-memory}\n}\n\n# Add over 4GB to a single stream listpack (one XADD command)\n# Expected to fail resulting in an empty stream\nstart_server [list overrides [list save \"\"] ] {\n    test {single XADD big fields} {\n        r write \"*23\\r\\n\\$4\\r\\nXADD\\r\\n\\$1\\r\\nS\\r\\n\\$1\\r\\n*\\r\\n\"\n        for {set j 0} {$j<10} {incr j} {\n         
   r write \"\\$1\\r\\n$j\\r\\n\"\n            write_big_bulk 500000000 \"\" yes ;#500mb\n        }\n        r flush\n        catch {r read} err\n        assert_match {*too large*} $err\n        r xlen S\n    } {0} {large-memory}\n}\n\n# Gradually add big hash fields using repeated HSET calls\n# This reproduces the overflow in the call to ziplistResize\n# Object will be converted to hashtable encoding\nstart_server [list overrides [list save \"\"] ] {\n    r config set hash-max-ziplist-value 1000000000 ;#1gb\n    test {hash with many big fields} {\n        for {set j 0} {$j<10} {incr j} {\n            r hset h $j $::str500\n        }\n        r object encoding h\n    } {hashtable} {large-memory}\n}\n\n# Add over 4GB to a single hash field (one HSET command)\n# Object will be converted to hashtable encoding\nstart_server [list overrides [list save \"\"] ] {\n    test {hash with one huge field} {\n        catch {r config set hash-max-ziplist-value 10000000000} ;#10gb\n        r config set proto-max-bulk-len 10000000000 ;#10gb\n        r config set client-query-buffer-limit 10000000000 ;#10gb\n        r write \"*4\\r\\n\\$4\\r\\nHSET\\r\\n\\$2\\r\\nH1\\r\\n\"\n        r write \"\\$1\\r\\nA\\r\\n\"\n        write_big_bulk 5000000000 ;#5gb\n        r object encoding H1\n    } {hashtable} {large-memory}\n}\n} ;# run_solo\n\n# SORT which stores an integer encoded element into a list.\n# Just for coverage, no news here.\nstart_server [list overrides [list save \"\"] ] {\n    test {SORT adds integer field to list} {\n        r set S1 asdf\n        r set S2 123 ;# integer encoded\n        assert_encoding \"int\" S2\n        r sadd myset 1 2\n        r mset D1 1 D2 2\n        r sort myset by D* get S* store mylist\n        r llen mylist\n    } {2} {cluster:skip}\n}\n"
  },
  {
    "path": "tests/unit/wait.tcl",
"content": "source tests/support/cli.tcl\n\nstart_server {tags {\"wait network external:skip\"}} {\nstart_server {} {\n    set slave [srv 0 client]\n    set slave_host [srv 0 host]\n    set slave_port [srv 0 port]\n    set slave_pid [srv 0 pid]\n    set master [srv -1 client]\n    set master_host [srv -1 host]\n    set master_port [srv -1 port]\n\n    test {Setup slave} {\n        $slave slaveof $master_host $master_port\n        wait_for_condition 50 100 {\n            [s 0 master_link_status] eq {up}\n        } else {\n            fail \"Replication not started.\"\n        }\n    }\n\n    test {WAIT out of range timeout (milliseconds)} {\n        # Timeout is parsed as milliseconds by getLongLongFromObjectOrReply().\n        # Verify we get an out of range message if the value is beyond LLONG_MAX\n        # (decimal value equal to 0x8000000000000000)\n         assert_error \"*or out of range*\" {$master wait 2 9223372036854775808}\n\n         # Expected to fail due to a later overflow condition after the addition\n         # of mstime(). 
(decimal value equal to 0x7FFFFFFFFFFFFFFF)\n         assert_error \"*timeout is out of range*\" {$master wait 2 9223372036854775807}\n\n         assert_error \"*timeout is negative*\" {$master wait 2 -1}\n    }\n\n    test {WAIT should acknowledge 1 additional copy of the data} {\n        $master set foo 0\n        $master incr foo\n        $master incr foo\n        $master incr foo\n        assert {[$master wait 1 5000] == 1}\n        assert {[$slave get foo] == 3}\n    }\n\n    test {WAIT should not acknowledge 2 additional copies of the data} {\n        $master incr foo\n        assert {[$master wait 2 1000] <= 1}\n    }\n\n    test {WAIT should not acknowledge 1 additional copy if slave is blocked} {\n        pause_process $slave_pid\n        $master set foo 0\n        $master incr foo\n        $master incr foo\n        $master incr foo\n        assert {[$master wait 1 1000] == 0}\n        resume_process $slave_pid\n        assert {[$master wait 1 1000] == 1}\n    }\n\n    test {WAIT implicitly blocks on client pause since ACKs aren't sent} {\n        pause_process $slave_pid\n        $master multi\n        $master incr foo\n        $master client pause 10000 write\n        $master exec\n        assert {[$master wait 1 1000] == 0}\n        $master client unpause\n        resume_process $slave_pid\n        assert {[$master wait 1 1000] == 1}\n    }\n\n    test {WAIT replica multiple clients unblock - reuse last result} {\n        set rd [redis_deferring_client -1]\n        set rd2 [redis_deferring_client -1]\n\n        pause_process $slave_pid\n\n        $rd incr foo\n        $rd read\n\n        $rd2 incr foo\n        $rd2 read\n\n        $rd wait 1 0\n        $rd2 wait 1 0\n        wait_for_blocked_clients_count 2 100 10 -1\n\n        resume_process $slave_pid\n\n        assert_equal [$rd read] {1}\n        assert_equal [$rd2 read] {1}\n\n        $rd ping\n        assert_equal [$rd read] {PONG}\n        $rd2 ping\n        assert_equal [$rd2 read] {PONG}\n\n   
     $rd close\n        $rd2 close\n    }\n}}\n\n\ntags {\"wait aof network external:skip\"} {\n    start_server {overrides {appendonly {yes} auto-aof-rewrite-percentage {0}}} {\n        set master [srv 0 client]\n\n        test {WAITAOF local copy before fsync} {\n            r config set appendfsync no\n            $master incr foo\n            assert_equal [$master waitaof 1 0 50] {0 0} ;# exits on timeout\n            r config set appendfsync everysec\n        }\n\n        test {WAITAOF local copy everysec} {\n            $master incr foo\n            assert_equal [$master waitaof 1 0 0] {1 0}\n        }\n\n        test {WAITAOF local copy with appendfsync always} {\n            r config set appendfsync always\n            $master incr foo\n            assert_equal [$master waitaof 1 0 0] {1 0}\n        }\n\n        test {WAITAOF local wait and then stop aof} {\n            r config set appendfsync no\n            set rd [redis_deferring_client]\n            $rd incr foo\n            $rd read\n            $rd waitaof 1 0 0\n            wait_for_blocked_client\n            r config set appendonly no ;# this should release the blocked client as an error\n            assert_error {ERR WAITAOF cannot be used when numlocal is set but appendonly is disabled.} {$rd read}\n            $rd close\n        }\n\n        test {WAITAOF local on server with aof disabled} {\n            $master incr foo\n            assert_error {ERR WAITAOF cannot be used when numlocal is set but appendonly is disabled.} {$master waitaof 1 0 0}\n        }\n\n        test {WAITAOF local client unblock with timeout and error} {\n            r config set appendonly yes\n            r config set appendfsync no\n            set rd [redis_deferring_client]\n            $rd client id\n            set client_id [$rd read]\n\n            # Test unblock with timeout\n            $rd incr foo\n            $rd read\n            $rd waitaof 1 0 0\n            wait_for_blocked_client\n            
assert_equal 1 [r client unblock $client_id timeout]\n\n            # Test unblock with error\n            $rd incr foo\n            $rd read\n            $rd waitaof 1 0 0\n            wait_for_blocked_client\n            assert_equal 1 [r client unblock $client_id error]\n            $rd close\n        }\n\n        test {WAITAOF local if AOFRW was postponed} {\n            r config set appendfsync everysec\n\n            # turn off AOF\n            r config set appendonly no\n\n            # create an RDB child that takes a lot of time to run\n            r set x y\n            r config set rdb-key-save-delay 100000000  ;# 100 seconds\n            r bgsave\n            assert_equal [s rdb_bgsave_in_progress] 1\n\n            # turn on AOF\n            r config set appendonly yes\n            assert_equal [s aof_rewrite_scheduled] 1\n\n            # create a write command (to increment master_repl_offset)\n            r set x y\n\n            # reset save_delay and kill RDB child\n            r config set rdb-key-save-delay 0\n            catch {exec kill -9 [get_child_pid 0]}\n\n            # wait for AOF (will unblock after AOFRW finishes)\n            assert_equal [r waitaof 1 0 10000] {1 0}\n\n            # make sure AOFRW finished\n            assert_equal [s aof_rewrite_in_progress] 0\n            assert_equal [s aof_rewrite_scheduled] 0\n        }\n\n        $master config set appendonly yes\n        waitForBgrewriteaof $master\n\n        start_server {overrides {appendonly {yes} auto-aof-rewrite-percentage {0}}} {\n            set master_host [srv -1 host]\n            set master_port [srv -1 port]\n            set replica [srv 0 client]\n            set replica_host [srv 0 host]\n            set replica_port [srv 0 port]\n            set replica_pid [srv 0 pid]\n\n            # make sure the master always fsyncs first (easier to test)\n            $master config set appendfsync always\n            $replica config set appendfsync no\n\n            test 
{WAITAOF on demoted master gets unblocked with an error} {\n                set rd [redis_deferring_client]\n                $rd incr foo\n                $rd read\n                $rd waitaof 0 1 0\n                wait_for_blocked_client\n                $replica replicaof $master_host $master_port\n                assert_error {UNBLOCKED force unblock from blocking operation,*} {$rd read}\n                $rd close\n            }\n\n            wait_for_ofs_sync $master $replica\n\n            test {WAITAOF replica copy before fsync} {\n                $master incr foo\n                assert_equal [$master waitaof 0 1 50] {1 0} ;# exits on timeout\n            }\n            $replica config set appendfsync everysec\n\n            test {WAITAOF replica copy everysec} {\n                $replica config set appendfsync everysec\n                waitForBgrewriteaof $replica ;# Make sure there is no AOFRW\n\n                $master incr foo\n                assert_equal [$master waitaof 0 1 0] {1 1}\n            }\n\n            test {WAITAOF replica copy everysec with AOFRW} {\n                $replica config set appendfsync everysec\n\n                # When we trigger an AOFRW, an fsync is triggered when closing the old INCR file,\n                # so with everysec we will skip that second's fsync, and in the next second\n                # after that, we will eventually do the fsync.\n                $replica bgrewriteaof\n                waitForBgrewriteaof $replica\n\n                $master incr foo\n                assert_equal [$master waitaof 0 1 0] {1 1}\n            }\n\n            test {WAITAOF replica copy everysec with slow AOFRW} {\n                $replica config set appendfsync everysec\n                $replica config set rdb-key-save-delay 1000000 ;# 1 sec\n\n                $replica bgrewriteaof\n\n                $master incr foo\n                assert_equal [$master waitaof 0 1 0] {1 1}\n\n                $replica config set 
rdb-key-save-delay 0\n                waitForBgrewriteaof $replica\n            }\n\n            test {WAITAOF replica copy everysec->always with AOFRW} {\n                $replica config set appendfsync everysec\n\n                # Try to fit all of them in the same round second; although there's no way to guarantee\n                # that, it can happen on a fast machine. Either way, the test shouldn't fail.\n                $replica bgrewriteaof\n                $master incr foo\n                waitForBgrewriteaof $replica\n                $replica config set appendfsync always\n\n                assert_equal [$master waitaof 0 1 0] {1 1}\n            }\n\n            test {WAITAOF replica copy appendfsync always} {\n                $replica config set appendfsync always\n                $master incr foo\n                assert_equal [$master waitaof 0 1 0] {1 1}\n                $replica config set appendfsync everysec\n            }\n\n            test {WAITAOF replica copy if replica is blocked} {\n                pause_process $replica_pid\n                $master incr foo\n                assert_equal [$master waitaof 0 1 50] {1 0} ;# exits on timeout\n                resume_process $replica_pid\n                assert_equal [$master waitaof 0 1 0] {1 1}\n            }\n\n            test {WAITAOF replica multiple clients unblock - reuse last result} {\n                set rd [redis_deferring_client -1]\n                set rd2 [redis_deferring_client -1]\n\n                pause_process $replica_pid\n\n                $rd incr foo\n                $rd read\n\n                $rd2 incr foo\n                $rd2 read\n\n                $rd waitaof 0 1 0\n                $rd2 waitaof 0 1 0\n                wait_for_blocked_clients_count 2 100 10 -1\n\n                resume_process $replica_pid\n\n                assert_equal [$rd read] {1 1}\n                assert_equal [$rd2 read] {1 1}\n\n                $rd ping\n                assert_equal 
[$rd read] {PONG}\n                $rd2 ping\n                assert_equal [$rd2 read] {PONG}\n\n                $rd close\n                $rd2 close\n            }\n\n            test {WAITAOF on promoted replica} {\n                $replica replicaof no one\n                $replica incr foo\n                assert_equal [$replica waitaof 1 0 0] {1 0}\n            }\n\n            test {WAITAOF master that loses a replica and backlog is dropped} {\n                $master config set repl-backlog-ttl 1\n                after 2000 ;# wait for backlog to expire\n                $master incr foo\n                assert_equal [$master waitaof 1 0 0] {1 0}\n            }\n\n            test {WAITAOF master without backlog, wait is released when the replica finishes full-sync} {\n                set rd [redis_deferring_client -1]\n                $rd incr foo\n                $rd read\n                $rd waitaof 0 1 0\n                wait_for_blocked_client -1\n                $replica replicaof $master_host $master_port\n                assert_equal [$rd read] {1 1}\n                $rd close\n            }\n\n            test {WAITAOF master isn't configured to do AOF} {\n                $master config set appendonly no\n                $master incr foo\n                assert_equal [$master waitaof 0 1 0] {0 1}\n            }\n\n            test {WAITAOF replica isn't configured to do AOF} {\n                $master config set appendonly yes\n                waitForBgrewriteaof $master\n                $replica config set appendonly no\n                $master incr foo\n                assert_equal [$master waitaof 1 0 0] {1 0}\n            }\n\n            test {WAITAOF both local and replica got AOF enabled at runtime} {\n                $replica config set appendonly yes\n                waitForBgrewriteaof $replica\n                $master incr foo\n                assert_equal [$master waitaof 1 1 0] {1 1}\n            }\n\n            test {WAITAOF master 
sends PING after last write} {\n                $master config set repl-ping-replica-period 1\n                $master incr foo\n                after 1200 ;# wait for PING\n                $master get foo\n                assert_equal [$master waitaof 1 1 0] {1 1}\n                $master config set repl-ping-replica-period 10\n            }\n\n            test {WAITAOF master client didn't send any write command} {\n                $master config set repl-ping-replica-period 1\n                set client [redis_client -1]\n                after 1200 ;# wait for PING\n                assert_equal [$master waitaof 1 1 0] {1 1}\n                $client close\n                $master config set repl-ping-replica-period 10\n            }\n\n            test {WAITAOF master client didn't send any command} {\n                $master config set repl-ping-replica-period 1\n                set client [redis [srv -1 \"host\"] [srv -1 \"port\"] 0 $::tls]\n                after 1200 ;# wait for PING\n                assert_equal [$master waitaof 1 1 0] {1 1}\n                $client close\n                $master config set repl-ping-replica-period 10\n            }\n\n            foreach fsync {no everysec always} {\n                test \"WAITAOF when replica switches between masters, fsync: $fsync\" {\n                    # test a case where a replica is moved from one master to the other\n                    # between two replication streams with different offsets that should\n                    # not be mixed. 
done to smoke-test race conditions with bio thread.\n                    start_server {overrides {appendonly {yes} auto-aof-rewrite-percentage {0}}} {\n                        start_server {overrides {appendonly {yes} auto-aof-rewrite-percentage {0}}} {\n                            set master2 [srv -1 client]\n                            set master2_host [srv -1 host]\n                            set master2_port [srv -1 port]\n                            set replica2 [srv 0 client]\n                            set replica2_host [srv 0 host]\n                            set replica2_port [srv 0 port]\n                            set replica2_pid [srv 0 pid]\n\n                            $replica2 replicaof $master2_host $master2_port\n                            wait_for_ofs_sync $master2 $replica2\n\n                            $master config set appendfsync $fsync\n                            $master2 config set appendfsync $fsync\n                            $replica config set appendfsync $fsync\n                            $replica2 config set appendfsync $fsync\n                            if {$fsync eq \"no\"} {\n                                after 2000 ;# wait for any previous fsync to finish\n                                # can't afford \"no\" on the masters\n                                $master config set appendfsync always\n                                $master2 config set appendfsync always\n                            } elseif {$fsync eq \"everysec\"} {\n                                after 990 ;# hoping to hit a race\n                            }\n\n                            # add some writes and block a client on each master\n                            set rd [redis_deferring_client -3]\n                            set rd2 [redis_deferring_client -1]\n                            $rd set boo 11\n                            $rd2 set boo 22\n                            $rd read\n                            $rd2 read\n                      
      $rd waitaof 1 1 0\n                            $rd2 waitaof 1 1 0\n\n                            if {$fsync eq \"no\"} {\n                                # since appendfsync is disabled in the replicas, the client\n                                # will get released only with full sync\n                                wait_for_blocked_client -1\n                                wait_for_blocked_client -3\n                            }\n                            # switch between the two replicas\n                            $replica2 replicaof $master_host $master_port\n                            $replica replicaof $master2_host $master2_port\n                            assert_equal [$rd read] {1 1}\n                            assert_equal [$rd2 read] {1 1}\n                            $rd close\n                            $rd2 close\n\n                            assert_equal [$replica get boo] 22\n                            assert_equal [$replica2 get boo] 11\n                        }\n                    }\n                }\n            }\n        }\n    }\n}\n\nstart_server {tags {\"failover external:skip\"}} {\nstart_server {} {\nstart_server {} {\n    set master [srv 0 client]\n    set master_host [srv 0 host]\n    set master_port [srv 0 port]\n\n    set replica1 [srv -1 client]\n    set replica1_pid [srv -1 pid]\n\n    set replica2 [srv -2 client]\n\n    test {setup replication for following tests} {\n        $replica1 replicaof $master_host $master_port\n        $replica2 replicaof $master_host $master_port\n        wait_for_sync $replica1\n        wait_for_sync $replica2\n    }\n\n    test {WAIT and WAITAOF replica multiple clients unblock - reuse last result} {\n        set rd [redis_deferring_client]\n        set rd2 [redis_deferring_client]\n\n        $master config set appendonly yes\n        $replica1 config set appendonly yes\n        $replica2 config set appendonly yes\n\n        $master config set appendfsync always\n        $replica1 
config set appendfsync no\n        $replica2 config set appendfsync no\n\n        waitForBgrewriteaof $master\n        waitForBgrewriteaof $replica1\n        waitForBgrewriteaof $replica2\n\n        pause_process $replica1_pid\n\n        $rd incr foo\n        $rd read\n        $rd waitaof 0 1 0\n\n        # rd2 has a newer repl_offset\n        $rd2 incr foo\n        $rd2 read\n        $rd2 wait 2 0\n\n        wait_for_blocked_clients_count 2\n\n        resume_process $replica1_pid\n\n        # WAIT will unblock the client first.\n        assert_equal [$rd2 read] {2}\n\n        # Make $replica1 catch up the repl_aof_off, then WAITAOF will unblock the client.\n        $replica1 config set appendfsync always\n        $master incr foo\n        assert_equal [$rd read] {1 1}\n\n        $rd ping\n        assert_equal [$rd read] {PONG}\n        $rd2 ping\n        assert_equal [$rd2 read] {PONG}\n\n        $rd close\n        $rd2 close\n    }\n}\n}\n}\n"
  },
  {
    "path": "tests/vectorset/vectorset.tcl",
    "content": "proc check_python_environment {} {\n    set ret [catch {exec sh -c \"which python3 || which python\" 2>@1} python_cmd]\n    if {$ret != 0} {\n        return \"\"\n    }\n\n    # Check if redis-py is installed\n    # Use a more robust way to check redis module\n    set ret [catch {exec $python_cmd -c \"import redis\" 2>@1} e]\n    if {$ret != 0} {\n        return \"\"\n    }\n\n    return $python_cmd\n}\n\nstart_server {} {\n    set slave_port [srv 0 port]\n\n    start_server {} {\n        set master_port [srv 0 port]\n\n        test {Vector set Python test execution} {\n            set python_cmd [check_python_environment]\n            if {$python_cmd eq \"\"} {\n                puts \"Python or redis-py module not found, skipping vectorset tests\"\n            } else {\n                # Run the Python script with real-time output\n                puts \"Running vectorset tests ...\"\n                puts \"Vectorset test output:\"\n\n                set pipe [open \"|$python_cmd modules/vector-sets/test.py --primary-port $master_port --replica-port $slave_port 2>@1\" r]\n                # Read output line by line in real-time\n                while {[gets $pipe line] >= 0} {\n                    puts $line\n                }\n\n                # Close pipe and check for errors\n                set result [catch {close $pipe} close_error]\n                if {$result != 0} {\n                    fail \"Vector set Python test failed: $close_error\"\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "utils/build-static-symbols.tcl",
    "content": "# Build a symbol table for static symbols of redis.c\n# Useful to get stack traces on segfault without a debugger. See redis.c\n# for more information.\n#\n# Copyright(C) 2009-Present Redis Ltd. All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\nset fd [open redis.c]\nset symlist {}\nwhile {[gets $fd line] != -1} {\n    if {[regexp {^static +[A-z0-9]+[ *]+([A-z0-9]*)\\(} $line - sym]} {\n        lappend symlist $sym\n    }\n}\nset symlist [lsort -unique $symlist]\nputs \"static struct redisFunctionSym symsTable\\[\\] = {\"\nforeach sym $symlist {\n    puts \"{\\\"$sym\\\",(unsigned long)$sym},\"\n}\nputs \"{NULL,0}\"\nputs \"};\"\n\nclose $fd\n"
  },
  {
    "path": "utils/cluster_fail_time.tcl",
    "content": "# This simple script is used in order to estimate the average PFAIL->FAIL\n# state switch after a failure.\n\nset ::sleep_time 10     ; # How much to sleep to trigger PFAIL.\nset ::fail_port 30016   ; # Node to put in sleep.\nset ::other_port 30001  ; # Node to use to monitor the flag switch.\n\nproc avg vector {\n    set sum 0.0\n    foreach x $vector {\n        set sum [expr {$sum+$x}]\n    }\n    expr {$sum/[llength $vector]}\n}\n\nset samples {}\nwhile 1 {\n    exec redis-cli -p $::fail_port debug sleep $::sleep_time > /dev/null &\n\n    # Wait for fail? to appear.\n    while 1 {\n        set output [exec redis-cli -p $::other_port cluster nodes]\n        if {[string match {*fail\\?*} $output]} break\n        after 100\n    }\n\n    puts \"FAIL?\"\n    set start [clock milliseconds]\n\n    # Wait for fail? to disappear.\n    while 1 {\n        set output [exec redis-cli -p $::other_port cluster nodes]\n        if {![string match {*fail\\?*} $output]} break\n        after 100\n    }\n\n    puts \"FAIL\"\n    set now [clock milliseconds]\n    set elapsed [expr {$now-$start}]\n    puts $elapsed\n    lappend samples $elapsed\n\n    puts \"AVG([llength $samples]): [avg $samples]\"\n\n    # Wait for the instance to be available again.\n    exec redis-cli -p $::fail_port ping\n\n    # Wait for the fail flag to be cleared.\n    after 2000\n}\n"
  },
  {
    "path": "utils/corrupt_rdb.c",
    "content": "/* Trivia program to corrupt an RDB file in order to check the RDB check\n * program behavior and effectiveness.\n *\n * Copyright (C) 2016-Present Redis Ltd. All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdio.h>\n#include <fcntl.h>\n#include <sys/stat.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <time.h>\n\nint main(int argc, char **argv) {\n    struct stat stat;\n    int fd, cycles;\n\n    if (argc != 3) {\n        fprintf(stderr,\"Usage: <filename> <cycles>\\n\");\n        exit(1);\n    }\n\n    srand(time(NULL));\n    char *filename = argv[1];\n    cycles = atoi(argv[2]);\n    fd = open(filename,O_RDWR);\n    if (fd == -1) {\n        perror(\"open\");\n        exit(1);\n    }\n    fstat(fd,&stat);\n\n    while(cycles--) {\n        unsigned char buf[32];\n        unsigned long offset = rand()%stat.st_size;\n        int writelen = 1+rand()%31;\n        int j;\n\n        for (j = 0; j < writelen; j++) buf[j] = (char)rand();\n        lseek(fd,offset,SEEK_SET);\n        printf(\"Writing %d bytes at offset %lu\\n\", writelen, offset);\n        write(fd,buf,writelen);\n    }\n    return 0;\n}\n"
  },
  {
    "path": "utils/create-cluster/.gitignore",
    "content": "config.sh\n*.rdb\n*.aof\n*.conf\n*.log\nappendonlydir-*\n"
  },
  {
    "path": "utils/create-cluster/README",
    "content": "create-cluster is a small script used to easily start a big number of Redis\ninstances configured to run in cluster mode. Its main goal is to allow manual\ntesting in a condition which is not easy to replicate with the Redis cluster\nunit tests, for example when a lot of instances are needed in order to trigger\na given bug.\n\nThe tool can also be used just to easily create a number of instances in a\nRedis Cluster in order to experiment a bit with the system.\n\nUSAGE\n---\n\nTo create a cluster, follow these steps:\n\n1. Edit create-cluster and change the start / end port, depending on the\nnumber of instances you want to create.\n2. Use \"./create-cluster start\" in order to run the instances.\n3. Use \"./create-cluster create\" in order to execute redis-cli --cluster create, so that\nan actual Redis cluster will be created. (If you're accessing your setup via a local container, ensure that the CLUSTER_HOST value is changed to your local IP)\n4. Now you are ready to play with the cluster. AOF files and logs for each instances are created in the current directory.\n\nIn order to stop a cluster:\n\n1. Use \"./create-cluster stop\" to stop all the instances. After you stopped the instances you can use \"./create-cluster start\" to restart them if you change your mind.\n2. Use \"./create-cluster clean\" to remove all the AOF / log files to restart with a clean environment.\n\nUse the command \"./create-cluster help\" to get the full list of features.\n"
  },
  {
    "path": "utils/create-cluster/create-cluster",
    "content": "#!/bin/bash\n\nSCRIPT_DIR=\"$( cd -- \"$( dirname -- \"${BASH_SOURCE[0]}\" )\" &> /dev/null && pwd )\"\n\n# Settings\nBIN_PATH=\"$SCRIPT_DIR/../../src/\"\nCLUSTER_HOST=127.0.0.1\nPORT=30000\nTIMEOUT=2000\nNODES=6\nREPLICAS=1\nPROTECTED_MODE=yes\nADDITIONAL_OPTIONS=\"\"\n\n# You may want to put the above config parameters into config.sh in order to\n# override the defaults without modifying this script.\n\nif [ -a config.sh ]\nthen\n    source \"config.sh\"\nfi\n\n# Computed vars\nENDPORT=$((PORT+NODES))\n\nif [ \"$1\" == \"start\" ]\nthen\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        echo \"Starting $PORT\"\n        $BIN_PATH/redis-server --port $PORT --protected-mode $PROTECTED_MODE --cluster-enabled yes --cluster-config-file nodes-${PORT}.conf --cluster-node-timeout $TIMEOUT --appendonly yes --appendfilename appendonly-${PORT}.aof --appenddirname appendonlydir-${PORT} --dbfilename dump-${PORT}.rdb --logfile ${PORT}.log --daemonize yes ${ADDITIONAL_OPTIONS}\n    done\n    exit 0\nfi\n\nif [ \"$1\" == \"create\" ]\nthen\n    HOSTS=\"\"\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        HOSTS=\"$HOSTS $CLUSTER_HOST:$PORT\"\n    done\n    OPT_ARG=\"\"\n    if [ \"$2\" == \"-f\" ]; then\n        OPT_ARG=\"--cluster-yes\"\n    fi\n    $BIN_PATH/redis-cli --cluster create $HOSTS --cluster-replicas $REPLICAS $OPT_ARG\n    exit 0\nfi\n\nif [ \"$1\" == \"stop\" ]\nthen\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        echo \"Stopping $PORT\"\n        $BIN_PATH/redis-cli -p $PORT shutdown nosave\n    done\n    exit 0\nfi\n\nif [ \"$1\" == \"restart\" ]\nthen\n    OLD_PORT=$PORT\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        echo \"Stopping $PORT\"\n        $BIN_PATH/redis-cli -p $PORT shutdown nosave\n    done\n    PORT=$OLD_PORT\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        echo 
\"Starting $PORT\"\n        $BIN_PATH/redis-server --port $PORT --protected-mode $PROTECTED_MODE --cluster-enabled yes --cluster-config-file nodes-${PORT}.conf --cluster-node-timeout $TIMEOUT --appendonly yes --appendfilename appendonly-${PORT}.aof --appenddirname appendonlydir-${PORT} --dbfilename dump-${PORT}.rdb --logfile ${PORT}.log --daemonize yes ${ADDITIONAL_OPTIONS}\n    done\n    exit 0\nfi\n\nif [ \"$1\" == \"watch\" ]\nthen\n    PORT=$((PORT+1))\n    while [ 1 ]; do\n        clear\n        date\n        $BIN_PATH/redis-cli -p $PORT cluster nodes | head -30\n        sleep 1\n    done\n    exit 0\nfi\n\nif [ \"$1\" == \"tail\" ]\nthen\n    INSTANCE=$2\n    PORT=$((PORT+INSTANCE))\n    tail -f ${PORT}.log\n    exit 0\nfi\n\nif [ \"$1\" == \"tailall\" ]\nthen\n    tail -f *.log\n    exit 0\nfi\n\nif [ \"$1\" == \"call\" ]\nthen\n    while [ $((PORT < ENDPORT)) != \"0\" ]; do\n        PORT=$((PORT+1))\n        $BIN_PATH/redis-cli -p $PORT $2 $3 $4 $5 $6 $7 $8 $9\n    done\n    exit 0\nfi\n\nif [ \"$1\" == \"clean\" ]\nthen\n    echo \"Cleaning *.log\"\n    rm -rf *.log\n    echo \"Cleaning appendonlydir-*\"\n    rm -rf appendonlydir-*\n    echo \"Cleaning dump-*.rdb\"\n    rm -rf dump-*.rdb\n    echo \"Cleaning nodes-*.conf\"\n    rm -rf nodes-*.conf\n    exit 0\nfi\n\nif [ \"$1\" == \"clean-logs\" ]\nthen\n    echo \"Cleaning *.log\"\n    rm -rf *.log\n    exit 0\nfi\n\necho \"Usage: $0 [start|create|stop|restart|watch|tail|tailall|clean|clean-logs|call]\"\necho \"start       -- Launch Redis Cluster instances.\"\necho \"create [-f] -- Create a cluster using redis-cli --cluster create.\"\necho \"stop        -- Stop Redis Cluster instances.\"\necho \"restart     -- Restart Redis Cluster instances.\"\necho \"watch       -- Show CLUSTER NODES output (first 30 lines) of first node.\"\necho \"tail <id>   -- Run tail -f of instance at base port + ID.\"\necho \"tailall     -- Run tail -f for all the log files at once.\"\necho \"clean       -- Remove all instances 
data, logs, configs.\"\necho \"clean-logs  -- Remove just instances logs.\"\necho \"call <cmd>  -- Call a command (up to 7 arguments) on all nodes.\"\n"
  },
  {
    "path": "utils/gen-test-certs.sh",
    "content": "#!/bin/bash\n\n# Generate some test certificates which are used by the regression test suite:\n#\n#   tests/tls/ca.{crt,key}          Self signed CA certificate.\n#   tests/tls/redis.{crt,key}       A certificate with no key usage/policy restrictions.\n#   tests/tls/client.{crt,key}      A certificate restricted for SSL client usage.\n#   tests/tls/server.{crt,key}      A certificate restricted for SSL server usage.\n#   tests/tls/redis.dh              DH Params file.\n\ngenerate_cert() {\n    local name=$1\n    local cn=\"$2\"\n    local opts=\"$3\"\n\n    local keyfile=tests/tls/${name}.key\n    local certfile=tests/tls/${name}.crt\n\n    [ -f $keyfile ] || openssl genrsa -out $keyfile 2048\n    openssl req \\\n        -new -sha256 \\\n        -subj \"/O=Redis Test/CN=$cn\" \\\n        -key $keyfile | \\\n        openssl x509 \\\n            -req -sha256 \\\n            -CA tests/tls/ca.crt \\\n            -CAkey tests/tls/ca.key \\\n            -CAserial tests/tls/ca.txt \\\n            -CAcreateserial \\\n            -days 365 \\\n            $opts \\\n            -out $certfile\n}\n\nmkdir -p tests/tls\n[ -f tests/tls/ca.key ] || openssl genrsa -out tests/tls/ca.key 4096\nopenssl req \\\n    -x509 -new -nodes -sha256 \\\n    -key tests/tls/ca.key \\\n    -days 3650 \\\n    -subj '/O=Redis Test/CN=Certificate Authority' \\\n    -out tests/tls/ca.crt\n\ncat > tests/tls/openssl.cnf <<_END_\n[ server_cert ]\nkeyUsage = digitalSignature, keyEncipherment\nnsCertType = server\n\n[ client_cert ]\nkeyUsage = digitalSignature, keyEncipherment\nnsCertType = client\n_END_\n\ngenerate_cert server \"Server-only\" \"-extfile tests/tls/openssl.cnf -extensions server_cert\"\ngenerate_cert client \"Client-only\" \"-extfile tests/tls/openssl.cnf -extensions client_cert\"\ngenerate_cert redis \"Generic-cert\"\n\n[ -f tests/tls/redis.dh ] || openssl dhparam -out tests/tls/redis.dh 2048\n"
  },
  {
    "path": "utils/generate-command-code.py",
    "content": "#!/usr/bin/env python3\nimport glob\nimport json\nimport os\nimport argparse\n\nARG_TYPES = {\n    \"string\": \"ARG_TYPE_STRING\",\n    \"integer\": \"ARG_TYPE_INTEGER\",\n    \"double\": \"ARG_TYPE_DOUBLE\",\n    \"key\": \"ARG_TYPE_KEY\",\n    \"pattern\": \"ARG_TYPE_PATTERN\",\n    \"unix-time\": \"ARG_TYPE_UNIX_TIME\",\n    \"pure-token\": \"ARG_TYPE_PURE_TOKEN\",\n    \"oneof\": \"ARG_TYPE_ONEOF\",\n    \"block\": \"ARG_TYPE_BLOCK\",\n}\n\nGROUPS = {\n    \"generic\": \"COMMAND_GROUP_GENERIC\",\n    \"string\": \"COMMAND_GROUP_STRING\",\n    \"list\": \"COMMAND_GROUP_LIST\",\n    \"set\": \"COMMAND_GROUP_SET\",\n    \"sorted_set\": \"COMMAND_GROUP_SORTED_SET\",\n    \"hash\": \"COMMAND_GROUP_HASH\",\n    \"pubsub\": \"COMMAND_GROUP_PUBSUB\",\n    \"transactions\": \"COMMAND_GROUP_TRANSACTIONS\",\n    \"connection\": \"COMMAND_GROUP_CONNECTION\",\n    \"server\": \"COMMAND_GROUP_SERVER\",\n    \"scripting\": \"COMMAND_GROUP_SCRIPTING\",\n    \"hyperloglog\": \"COMMAND_GROUP_HYPERLOGLOG\",\n    \"cluster\": \"COMMAND_GROUP_CLUSTER\",\n    \"sentinel\": \"COMMAND_GROUP_SENTINEL\",\n    \"geo\": \"COMMAND_GROUP_GEO\",\n    \"stream\": \"COMMAND_GROUP_STREAM\",\n    \"bitmap\": \"COMMAND_GROUP_BITMAP\",\n    \"rate_limit\": \"COMMAND_GROUP_RATE_LIMIT\",\n}\n\n\ndef get_optional_desc_string(desc, field, force_uppercase=False):\n    v = desc.get(field, None)\n    if v and force_uppercase:\n        v = v.upper()\n    ret = \"\\\"%s\\\"\" % v if v else \"NULL\"\n    return ret.replace(\"\\n\", \"\\\\n\")\n\n\ndef check_command_args_key_specs(args, command_key_specs_index_set, command_arg_key_specs_index_set):\n    if not args:\n        return True\n\n    for arg in args:\n        if arg.key_spec_index is not None:\n            assert isinstance(arg.key_spec_index, int)\n\n            if arg.key_spec_index not in command_key_specs_index_set:\n                print(\"command: %s arg: %s key_spec_index error\" % (command.fullname(), arg.name))\n           
     return False\n\n            command_arg_key_specs_index_set.add(arg.key_spec_index)\n\n        if not check_command_args_key_specs(arg.subargs, command_key_specs_index_set, command_arg_key_specs_index_set):\n            return False\n\n    return True\n\ndef check_command_key_specs(command):\n    if not command.key_specs:\n        return True\n\n    assert isinstance(command.key_specs, list)\n\n    for cmd_key_spec in command.key_specs:\n        if \"flags\" not in cmd_key_spec:\n            print(\"command: %s key_specs missing flags\" % command.fullname())\n            return False\n\n        if \"NOT_KEY\" in cmd_key_spec[\"flags\"]:\n            # Like SUNSUBSCRIBE / SPUBLISH / SSUBSCRIBE\n            return True\n\n    command_key_specs_index_set = set(range(len(command.key_specs)))\n    command_arg_key_specs_index_set = set()\n\n    # Collect key_spec used for each arg, including arg.subarg\n    if not check_command_args_key_specs(command.args, command_key_specs_index_set, command_arg_key_specs_index_set):\n        return False\n\n    # Check if we have key_specs not used\n    if command_key_specs_index_set != command_arg_key_specs_index_set:\n        print(\"command: %s may have unused key_spec\" % command.fullname())\n        return False\n\n    return True\n\n\n# Globals\nsubcommands = {}  # container_name -> dict(subcommand_name -> Subcommand) - Only subcommands\ncommands = {}  # command_name -> Command - Only commands\n\n\nclass KeySpec(object):\n    def __init__(self, spec):\n        self.spec = spec\n\n    def struct_code(self):\n        def _flags_code():\n            s = \"\"\n            for flag in self.spec.get(\"flags\", []):\n                s += \"CMD_KEY_%s|\" % flag\n            return s[:-1] if s else 0\n\n        def _begin_search_code():\n            if self.spec[\"begin_search\"].get(\"index\"):\n                return \"KSPEC_BS_INDEX,.bs.index={%d}\" % (\n                    self.spec[\"begin_search\"][\"index\"][\"pos\"]\n         
       )\n            elif self.spec[\"begin_search\"].get(\"keyword\"):\n                return \"KSPEC_BS_KEYWORD,.bs.keyword={\\\"%s\\\",%d}\" % (\n                    self.spec[\"begin_search\"][\"keyword\"][\"keyword\"],\n                    self.spec[\"begin_search\"][\"keyword\"][\"startfrom\"],\n                )\n            elif \"unknown\" in self.spec[\"begin_search\"]:\n                return \"KSPEC_BS_UNKNOWN,{{0}}\"\n            else:\n                print(\"Invalid begin_search! value=%s\" % self.spec[\"begin_search\"])\n                exit(1)\n\n        def _find_keys_code():\n            if self.spec[\"find_keys\"].get(\"range\"):\n                return \"KSPEC_FK_RANGE,.fk.range={%d,%d,%d}\" % (\n                    self.spec[\"find_keys\"][\"range\"][\"lastkey\"],\n                    self.spec[\"find_keys\"][\"range\"][\"step\"],\n                    self.spec[\"find_keys\"][\"range\"][\"limit\"]\n                )\n            elif self.spec[\"find_keys\"].get(\"keynum\"):\n                return \"KSPEC_FK_KEYNUM,.fk.keynum={%d,%d,%d}\" % (\n                    self.spec[\"find_keys\"][\"keynum\"][\"keynumidx\"],\n                    self.spec[\"find_keys\"][\"keynum\"][\"firstkey\"],\n                    self.spec[\"find_keys\"][\"keynum\"][\"step\"]\n                )\n            elif \"unknown\" in self.spec[\"find_keys\"]:\n                return \"KSPEC_FK_UNKNOWN,{{0}}\"\n            else:\n                print(\"Invalid find_keys! 
value=%s\" % self.spec[\"find_keys\"])\n                exit(1)\n\n        return \"%s,%s,%s,%s\" % (\n            get_optional_desc_string(self.spec, \"notes\"),\n            _flags_code(),\n            _begin_search_code(),\n            _find_keys_code()\n        )\n\n\ndef verify_no_dup_names(container_fullname, args):\n    name_list = [arg.name for arg in args]\n    name_set = set(name_list)\n    if len(name_list) != len(name_set):\n        print(\"{}: Dup argument names: {}\".format(container_fullname, name_list))\n        exit(1)\n\n\nclass Argument(object):\n    def __init__(self, parent_name, desc):\n        self.parent_name = parent_name\n        self.desc = desc\n        self.name = self.desc[\"name\"].lower()\n        if \"_\" in self.name:\n            print(\"{}: name ({}) should not contain underscores\".format(self.fullname(), self.name))\n            exit(1)\n        self.type = self.desc[\"type\"]\n        self.key_spec_index = self.desc.get(\"key_spec_index\", None)\n        self.subargs = []\n        if self.type in [\"oneof\", \"block\"]:\n            self.display = None\n            for subdesc in self.desc[\"arguments\"]:\n                self.subargs.append(Argument(self.fullname(), subdesc))\n            if len(self.subargs) < 2:\n                print(\"{}: oneof or block arg contains less than two subargs\".format(self.fullname()))\n                exit(1)\n            verify_no_dup_names(self.fullname(), self.subargs)\n        else:\n            self.display = self.desc.get(\"display\")\n\n    def fullname(self):\n        return (\"%s %s\" % (self.parent_name, self.name)).replace(\"-\", \"_\")\n\n    def struct_name(self):\n        return \"%s_Arg\" % (self.fullname().replace(\" \", \"_\"))\n\n    def subarg_table_name(self):\n        assert self.subargs\n        return \"%s_Subargs\" % (self.fullname().replace(\" \", \"_\"))\n\n    def struct_code(self):\n        \"\"\"\n        Output example:\n        
MAKE_ARG(\"expiration\",ARG_TYPE_ONEOF,-1,NULL,NULL,NULL,CMD_ARG_OPTIONAL,5,NULL),.subargs=GETEX_expiration_Subargs\n        \"\"\"\n\n        def _flags_code():\n            s = \"\"\n            if self.desc.get(\"optional\", False):\n                s += \"CMD_ARG_OPTIONAL|\"\n            if self.desc.get(\"multiple\", False):\n                s += \"CMD_ARG_MULTIPLE|\"\n            if self.desc.get(\"multiple_token\", False):\n                assert self.desc.get(\"multiple\", False)  # Sanity\n                s += \"CMD_ARG_MULTIPLE_TOKEN|\"\n            return s[:-1] if s else \"CMD_ARG_NONE\"\n\n        s = \"MAKE_ARG(\\\"%s\\\",%s,%d,%s,%s,%s,%s,%d,%s)\" % (\n            self.name,\n            ARG_TYPES[self.type],\n            self.desc.get(\"key_spec_index\", -1),\n            get_optional_desc_string(self.desc, \"token\", force_uppercase=True),\n            get_optional_desc_string(self.desc, \"summary\"),\n            get_optional_desc_string(self.desc, \"since\"),\n            _flags_code(),\n            len(self.subargs),\n            get_optional_desc_string(self.desc, \"deprecated_since\"),\n        )\n        if \"display\" in self.desc:\n            s += \",.display_text=\\\"%s\\\"\" % self.desc[\"display\"].lower()\n        if self.subargs:\n            s += \",.subargs=%s\" % self.subarg_table_name()\n\n        return s\n\n    def write_internal_structs(self, f):\n        if self.subargs:\n            for subarg in self.subargs:\n                subarg.write_internal_structs(f)\n\n            f.write(\"/* %s argument table */\\n\" % self.fullname())\n            f.write(\"struct COMMAND_ARG %s[] = {\\n\" % self.subarg_table_name())\n            for subarg in self.subargs:\n                f.write(\"{%s},\\n\" % subarg.struct_code())\n            f.write(\"};\\n\\n\")\n\n\ndef to_c_name(str):\n    return str.replace(\":\", \"\").replace(\".\", \"_\").replace(\"$\", \"_\")\\\n        .replace(\"^\", \"_\").replace(\"*\", \"_\").replace(\"-\", 
\"_\") \\\n        .replace(\"\\\\\", \"_\").replace(\"+\", \"_\")\n\n\nclass ReplySchema(object):\n    def __init__(self, name, desc):\n        self.name = to_c_name(name)\n        self.schema = {}\n        if desc.get(\"type\") == \"object\":\n            if desc.get(\"properties\") and desc.get(\"additionalProperties\") is None:\n                print(\"%s: Any object that has properties should have the additionalProperties field\" % self.name)\n                exit(1)\n        elif desc.get(\"type\") == \"array\":\n            if desc.get(\"items\") and isinstance(desc[\"items\"], list) and any([desc.get(k) is None for k in [\"minItems\", \"maxItems\"]]):\n                print(\"%s: Any array that has items should have the minItems and maxItems fields\" % self.name)\n                exit(1)\n        for k, v in desc.items():\n            if isinstance(v, dict):\n                self.schema[k] = ReplySchema(\"%s_%s\" % (self.name, k), v)\n            elif isinstance(v, list):\n                self.schema[k] = []\n                for i, subdesc in enumerate(v):\n                    self.schema[k].append(ReplySchema(\"%s_%s_%i\" % (self.name, k,i), subdesc))\n            else:\n                self.schema[k] = v\n    \n    def write(self, f):\n        def struct_code(name, k, v):\n            if isinstance(v, ReplySchema):\n                t = \"JSON_TYPE_OBJECT\"\n                vstr = \".value.object=&%s\" % name\n            elif isinstance(v, list):\n                t = \"JSON_TYPE_ARRAY\"\n                vstr = \".value.array={.objects=%s,.length=%d}\" % (name, len(v))\n            elif isinstance(v, bool):\n                t = \"JSON_TYPE_BOOLEAN\"\n                vstr = \".value.boolean=%d\" % int(v)\n            elif isinstance(v, str):\n                t = \"JSON_TYPE_STRING\"\n                vstr = \".value.string=\\\"%s\\\"\" % v\n            elif isinstance(v, int):\n                t = \"JSON_TYPE_INTEGER\"\n                vstr = 
\".value.integer=%d\" % v\n            \n            return \"%s,%s,%s\" % (t, json.dumps(k), vstr)\n\n        for k, v in self.schema.items():\n            if isinstance(v, ReplySchema):\n                v.write(f)\n            elif isinstance(v, list):\n                for i, schema in enumerate(v):\n                    schema.write(f)\n                name = to_c_name(\"%s_%s\" % (self.name, k))\n                f.write(\"/* %s array reply schema */\\n\" % name)\n                f.write(\"struct jsonObject *%s[] = {\\n\" % name)\n                for i, schema in enumerate(v):\n                    f.write(\"&%s,\\n\" % schema.name)\n                f.write(\"};\\n\\n\")\n            \n        f.write(\"/* %s reply schema */\\n\" % self.name)\n        f.write(\"struct jsonObjectElement %s_elements[] = {\\n\" % self.name)\n        for k, v in self.schema.items():\n            name = to_c_name(\"%s_%s\" % (self.name, k))\n            f.write(\"{%s},\\n\" % struct_code(name, k, v))\n        f.write(\"};\\n\\n\")\n        f.write(\"struct jsonObject %s = {%s_elements,.length=%d};\\n\\n\" % (self.name, self.name, len(self.schema)))\n\n\nclass Command(object):\n    def __init__(self, name, desc):\n        self.name = name.upper()\n        self.desc = desc\n        self.group = self.desc[\"group\"]\n        self.key_specs = self.desc.get(\"key_specs\", [])\n        self.subcommands = []\n        self.args = []\n        for arg_desc in self.desc.get(\"arguments\", []):\n            self.args.append(Argument(self.fullname(), arg_desc))\n        verify_no_dup_names(self.fullname(), self.args)\n        self.reply_schema = None\n        if \"reply_schema\" in self.desc:\n            self.reply_schema = ReplySchema(self.reply_schema_name(), self.desc[\"reply_schema\"])\n\n    def fullname(self):\n        return self.name.replace(\"-\", \"_\").replace(\":\", \"\")\n\n    def return_types_table_name(self):\n        return \"%s_ReturnInfo\" % self.fullname().replace(\" \", 
\"_\")\n\n    def subcommand_table_name(self):\n        assert self.subcommands\n        return \"%s_Subcommands\" % self.name\n\n    def history_table_name(self):\n        return \"%s_History\" % (self.fullname().replace(\" \", \"_\"))\n\n    def tips_table_name(self):\n        return \"%s_Tips\" % (self.fullname().replace(\" \", \"_\"))\n\n    def arg_table_name(self):\n        return \"%s_Args\" % (self.fullname().replace(\" \", \"_\"))\n\n    def key_specs_table_name(self):\n        return \"%s_Keyspecs\" % (self.fullname().replace(\" \", \"_\"))\n\n    def reply_schema_name(self):\n        return \"%s_ReplySchema\" % (self.fullname().replace(\" \", \"_\"))\n\n    def struct_name(self):\n        return \"%s_Command\" % (self.fullname().replace(\" \", \"_\"))\n\n    def history_code(self):\n        if not self.desc.get(\"history\"):\n            return \"\"\n        s = \"\"\n        for tupl in self.desc[\"history\"]:\n            s += \"{\\\"%s\\\",\\\"%s\\\"},\\n\" % (tupl[0], tupl[1])\n        return s\n\n    def num_history(self):\n        if not self.desc.get(\"history\"):\n            return 0\n        return len(self.desc[\"history\"])\n\n    def tips_code(self):\n        if not self.desc.get(\"command_tips\"):\n            return \"\"\n        s = \"\"\n        for hint in self.desc[\"command_tips\"]:\n            s += \"\\\"%s\\\",\\n\" % hint.lower()\n        return s\n\n    def num_tips(self):\n        if not self.desc.get(\"command_tips\"):\n            return 0\n        return len(self.desc[\"command_tips\"])\n\n    def key_specs_code(self):\n        s = \"\"\n        for spec in self.key_specs:\n            s += \"{%s},\" % KeySpec(spec).struct_code()\n        return s[:-1]\n\n\n    def struct_code(self):\n        \"\"\"\n        Output example:\n        MAKE_CMD(\"set\",\"Set the string value of a 
key\",\"O(1)\",\"1.0.0\",CMD_DOC_NONE,NULL,NULL,\"string\",COMMAND_GROUP_STRING,SET_History,4,SET_Tips,0,setCommand,-3,CMD_WRITE|CMD_DENYOOM,ACL_CATEGORY_STRING,SET_Keyspecs,1,setGetKeys,5),.args=SET_Args\n        \"\"\"\n\n        def _flags_code():\n            s = \"\"\n            for flag in self.desc.get(\"command_flags\", []):\n                s += \"CMD_%s|\" % flag\n            return s[:-1] if s else 0\n\n        def _acl_categories_code():\n            s = \"\"\n            for cat in self.desc.get(\"acl_categories\", []):\n                s += \"ACL_CATEGORY_%s|\" % cat\n            return s[:-1] if s else 0\n\n        def _doc_flags_code():\n            s = \"\"\n            for flag in self.desc.get(\"doc_flags\", []):\n                s += \"CMD_DOC_%s|\" % flag\n            return s[:-1] if s else \"CMD_DOC_NONE\"\n\n        s = \"MAKE_CMD(\\\"%s\\\",%s,%s,%s,%s,%s,%s,%s,%s,%s,%d,%s,%d,%s,%d,%s,%s,%s,%d,%s,%d),\" % (\n            self.name.lower(),\n            get_optional_desc_string(self.desc, \"summary\"),\n            get_optional_desc_string(self.desc, \"complexity\"),\n            get_optional_desc_string(self.desc, \"since\"),\n            _doc_flags_code(),\n            get_optional_desc_string(self.desc, \"replaced_by\"),\n            get_optional_desc_string(self.desc, \"deprecated_since\"),\n            \"\\\"%s\\\"\" % self.group,\n            GROUPS[self.group],\n            self.history_table_name(),\n            self.num_history(),\n            self.tips_table_name(),\n            self.num_tips(),\n            self.desc.get(\"function\", \"NULL\"),\n            self.desc[\"arity\"],\n            _flags_code(),\n            _acl_categories_code(),\n            self.key_specs_table_name(),\n            len(self.key_specs),\n            self.desc.get(\"get_keys_function\", \"NULL\"),\n            len(self.args),\n        )\n\n        if self.subcommands:\n            s += \".subcommands=%s,\" % self.subcommand_table_name()\n\n        if 
self.args:\n            s += \".args=%s,\" % self.arg_table_name()\n\n        if self.reply_schema and args.with_reply_schema:\n            s += \".reply_schema=&%s,\" % self.reply_schema_name()\n\n        return s[:-1]\n\n    def write_internal_structs(self, f):\n        if self.subcommands:\n            subcommand_list = sorted(self.subcommands, key=lambda cmd: cmd.name)\n            for subcommand in subcommand_list:\n                subcommand.write_internal_structs(f)\n\n            f.write(\"/* %s command table */\\n\" % self.fullname())\n            f.write(\"struct COMMAND_STRUCT %s[] = {\\n\" % self.subcommand_table_name())\n            for subcommand in subcommand_list:\n                f.write(\"{%s},\\n\" % subcommand.struct_code())\n            f.write(\"{0}\\n\")\n            f.write(\"};\\n\\n\")\n\n        f.write(\"/********** %s ********************/\\n\\n\" % self.fullname())\n\n        f.write(\"#ifndef SKIP_CMD_HISTORY_TABLE\\n\")\n        f.write(\"/* %s history */\\n\" % self.fullname())\n        code = self.history_code()\n        if code:\n            f.write(\"commandHistory %s[] = {\\n\" % self.history_table_name())\n            f.write(\"%s\" % code)\n            f.write(\"};\\n\")\n        else:\n            f.write(\"#define %s NULL\\n\" % self.history_table_name())\n        f.write(\"#endif\\n\\n\")\n\n        f.write(\"#ifndef SKIP_CMD_TIPS_TABLE\\n\")\n        f.write(\"/* %s tips */\\n\" % self.fullname())\n        code = self.tips_code()\n        if code:\n            f.write(\"const char *%s[] = {\\n\" % self.tips_table_name())\n            f.write(\"%s\" % code)\n            f.write(\"};\\n\")\n        else:\n            f.write(\"#define %s NULL\\n\" % self.tips_table_name())\n        f.write(\"#endif\\n\\n\")\n\n        f.write(\"#ifndef SKIP_CMD_KEY_SPECS_TABLE\\n\")\n        f.write(\"/* %s key specs */\\n\" % self.fullname())\n        code = self.key_specs_code()\n        if code:\n            f.write(\"keySpec %s[%d] = 
{\\n\" % (self.key_specs_table_name(), len(self.key_specs)))\n            f.write(\"%s\\n\" % code)\n            f.write(\"};\\n\")\n        else:\n            f.write(\"#define %s NULL\\n\" % self.key_specs_table_name())\n        f.write(\"#endif\\n\\n\")\n\n        if self.args:\n            for arg in self.args:\n                arg.write_internal_structs(f)\n\n            f.write(\"/* %s argument table */\\n\" % self.fullname())\n            f.write(\"struct COMMAND_ARG %s[] = {\\n\" % self.arg_table_name())\n            for arg in self.args:\n                f.write(\"{%s},\\n\" % arg.struct_code())\n            f.write(\"};\\n\\n\")\n\n        if self.reply_schema and args.with_reply_schema:\n            self.reply_schema.write(f)\n\n\nclass Subcommand(Command):\n    def __init__(self, name, desc):\n        self.container_name = desc[\"container\"].upper()\n        super(Subcommand, self).__init__(name, desc)\n\n    def fullname(self):\n        return \"%s %s\" % (self.container_name, self.name.replace(\"-\", \"_\").replace(\":\", \"\"))\n\n\ndef create_command(name, desc):\n    flags = desc.get(\"command_flags\")\n    if flags and \"EXPERIMENTAL\" in flags:\n        print(\"Command %s is experimental, skipping...\" % name)\n        return\n\n    if desc.get(\"container\"):\n        cmd = Subcommand(name.upper(), desc)\n        subcommands.setdefault(desc[\"container\"].upper(), {})[name] = cmd\n    else:\n        cmd = Command(name.upper(), desc)\n        commands[name.upper()] = cmd\n\n\n# MAIN\n\n# Figure out where the sources are\nsrcdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + \"/../src\")\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--with-reply-schema', action='store_true')\nargs = parser.parse_args()\n\n# Create all command objects\nprint(\"Processing json files...\")\nfor filename in glob.glob('%s/commands/*.json' % srcdir):\n    with open(filename, \"r\") as f:\n        try:\n            d = json.load(f)\n        
    for name, desc in d.items():\n                create_command(name, desc)\n        except json.decoder.JSONDecodeError as err:\n            print(\"Error processing %s: %s\" % (filename, err))\n            exit(1)\n\n# Link subcommands to containers\nprint(\"Linking container command to subcommands...\")\nfor command in commands.values():\n    assert command.group\n    if command.name not in subcommands:\n        continue\n    for subcommand in subcommands[command.name].values():\n        assert not subcommand.group or subcommand.group == command.group\n        subcommand.group = command.group\n        command.subcommands.append(subcommand)\n\ncheck_command_error_counter = 0  # An error counter is used to count errors in command checking.\n\nprint(\"Checking all commands...\")\nfor command in commands.values():\n    if not check_command_key_specs(command):\n        check_command_error_counter += 1\n\nif check_command_error_counter != 0:\n    print(\"Error: There are errors in the commands check, please check the above logs.\")\n    exit(1)\n\ncommands_filename = \"commands_with_reply_schema\" if args.with_reply_schema else \"commands\"\nprint(\"Generating %s.def...\" % commands_filename)\nwith open(\"%s/%s.def\" % (srcdir, commands_filename), \"w\") as f:\n    f.write(\"/* Automatically generated by %s, do not edit. */\\n\\n\" % os.path.basename(__file__))\n    f.write(\n\"\"\"\n/* We have fabulous commands from\n * the fantastic\n * Redis Command Table! 
*/\n\n/* Must match redisCommandGroup */\nconst char *COMMAND_GROUP_STR[] = {\n    \"generic\",\n    \"string\",\n    \"list\",\n    \"set\",\n    \"sorted-set\",\n    \"hash\",\n    \"pubsub\",\n    \"transactions\",\n    \"connection\",\n    \"server\",\n    \"scripting\",\n    \"hyperloglog\",\n    \"cluster\",\n    \"sentinel\",\n    \"geo\",\n    \"stream\",\n    \"bitmap\",\n    \"module\"\n};\n\nconst char *commandGroupStr(int index) {\n    return COMMAND_GROUP_STR[index];\n}\n\"\"\"\n    )\n\n    command_list = sorted(commands.values(), key=lambda cmd: (cmd.group, cmd.name))\n    for command in command_list:\n        command.write_internal_structs(f)\n\n    f.write(\"/* Main command table */\\n\")\n    f.write(\"struct COMMAND_STRUCT redisCommandTable[] = {\\n\")\n    curr_group = None\n    for command in command_list:\n        if curr_group != command.group:\n            curr_group = command.group\n            f.write(\"/* %s */\\n\" % curr_group)\n        f.write(\"{%s},\\n\" % command.struct_code())\n    f.write(\"{0}\\n\")\n    f.write(\"};\\n\")\n\nprint(\"All done, exiting.\")\n"
  },
  {
    "path": "utils/generate-commands-json.py",
    "content": "#!/usr/bin/env python3\nimport argparse\nimport json\nimport os\nimport sys\nimport subprocess\nfrom collections import OrderedDict\n\n\ndef convert_flags_to_boolean_dict(flags):\n    \"\"\"Return a dict with a key set to `True` per element in the flags list.\"\"\"\n    return {f: True for f in flags}\n\n\ndef set_if_not_none_or_empty(dst, key, value):\n    \"\"\"Set 'key' in 'dst' if 'value' is not `None` or an empty list.\"\"\"\n    if value is not None and (type(value) is not list or len(value)):\n        dst[key] = value\n\n\ndef convert_argument(arg):\n    \"\"\"Transform an argument.\"\"\"\n    arg.update(convert_flags_to_boolean_dict(arg.pop('flags', [])))\n    set_if_not_none_or_empty(arg, 'arguments',\n                             [convert_argument(x) for x in arg.pop('arguments', [])])\n    return arg\n\n\ndef convert_keyspec(spec):\n    \"\"\"Transform a key spec.\"\"\"\n    spec.update(convert_flags_to_boolean_dict(spec.pop('flags', [])))\n    return spec\n\n\ndef convert_entry_to_objects_array(cmd, docs):\n    \"\"\"Transform the JSON output of `COMMAND` to a friendlier format.\n\n    cmd is the output of `COMMAND` as follows:\n    1. Name (lower case, e.g. \"lolwut\")\n    2. Arity\n    3. Flags\n    4-6. First/last/step key specification (deprecated as of Redis v7.0)\n    7. ACL categories\n    8. hints (as of Redis 7.0)\n    9. key-specs (as of Redis 7.0)\n    10. subcommands (as of Redis 7.0)\n\n    docs is the output of `COMMAND DOCS`, which holds a map of additional metadata\n\n    This returns a list with a dict for the command and per each of its\n    subcommands. 
Each dict contains one key, the command's full name, with a\n    value of a dict that's set with the command's properties and meta\n    information.\"\"\"\n    assert len(cmd) >= 9\n    obj = {}\n    rep = [obj]\n    name = cmd[0].upper()\n    arity = cmd[1]\n    command_flags = cmd[2]\n    acl_categories = cmd[6]\n    hints = cmd[7]\n    keyspecs = cmd[8]\n    subcommands = cmd[9] if len(cmd) > 9 else []\n    key = name.replace('|', ' ')\n\n    subcommand_docs = docs.pop('subcommands', [])\n    rep.extend([convert_entry_to_objects_array(x, subcommand_docs[x[0]])[0] for x in subcommands])\n\n    # The command's value is ordered so the interesting stuff that we care about\n    # is at the start. Optional `None` and empty list values are filtered out.\n    value = OrderedDict()\n    group = docs.pop('group')\n    if group == 'module':\n        set_if_not_none_or_empty(value, 'summary', docs.pop('summary', None))\n        set_if_not_none_or_empty(value, 'since', docs.pop('since', None))\n    else:\n        # \"summary\" and \"since\" are required for all non-module commands\n        value['summary'] = docs.pop('summary')\n        value['since'] = docs.pop('since')\n    value['group'] = group\n    set_if_not_none_or_empty(value, 'complexity', docs.pop('complexity', None))\n    set_if_not_none_or_empty(value, 'deprecated_since', docs.pop('deprecated_since', None))\n    set_if_not_none_or_empty(value, 'replaced_by', docs.pop('replaced_by', None))\n    set_if_not_none_or_empty(value, 'history', docs.pop('history', []))\n    set_if_not_none_or_empty(value, 'acl_categories', acl_categories)\n    value['arity'] = arity\n    set_if_not_none_or_empty(value, 'key_specs',\n                             [convert_keyspec(x) for x in keyspecs])\n    set_if_not_none_or_empty(value, 'arguments',\n                             [convert_argument(x) for x in docs.pop('arguments', [])])\n    set_if_not_none_or_empty(value, 'command_flags', command_flags)\n    
set_if_not_none_or_empty(value, 'doc_flags', docs.pop('doc_flags', []))\n    set_if_not_none_or_empty(value, 'hints', hints)\n\n    # All remaining docs key-value tuples, if any, are appended to the command\n    # to be future-proof.\n    while len(docs) > 0:\n        (k, v) = docs.popitem()\n        value[k] = v\n\n    obj[key] = value\n    return rep\n\n\n# Figure out where the sources are\nsrcdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + \"/../src\")\n\n# MAIN\nif __name__ == '__main__':\n    opts = {\n        'description': 'Transform the output from `redis-cli --json` using COMMAND and COMMAND DOCS to a single commands.json format.',\n        'epilog': f'Usage example: {sys.argv[0]} --cli src/redis-cli --port 6379 > commands.json'\n    }\n    parser = argparse.ArgumentParser(**opts)\n    parser.add_argument('--host', type=str, default='localhost')\n    parser.add_argument('--port', type=int, default=6379)\n    parser.add_argument('--cli', type=str, default='%s/redis-cli' % srcdir)\n    args = parser.parse_args()\n\n    payload = OrderedDict()\n    cmds = []\n\n    p = subprocess.Popen([args.cli, '-h', args.host, '-p', str(args.port), '--json', 'command'], stdout=subprocess.PIPE)\n    stdout, stderr = p.communicate()\n    commands = json.loads(stdout)\n\n    p = subprocess.Popen([args.cli, '-h', args.host, '-p', str(args.port), '--json', 'command', 'docs'],\n                         stdout=subprocess.PIPE)\n    stdout, stderr = p.communicate()\n    docs = json.loads(stdout)\n\n    for entry in commands:\n        cmd = convert_entry_to_objects_array(entry, docs[entry[0]])\n        cmds.extend(cmd)\n\n    # The final output is a dict of all commands, ordered by name.\n    cmds.sort(key=lambda x: list(x.keys())[0])\n    for cmd in cmds:\n        name = list(cmd.keys())[0]\n        payload[name] = cmd[name]\n\n    # Print the final JSON output. 
If the output is piped and the pipe is closed (e.g., by 'less' or 'head'),\n    # catch BrokenPipeError to prevent a traceback and exit gracefully.\n    try:\n        print(json.dumps(payload, indent=4))\n    except BrokenPipeError:\n        sys.stderr.close()\n        sys.exit(0)\n"
  },
  {
    "path": "utils/generate-fmtargs.py",
    "content": "#!/usr/bin/env python3\n\n# Outputs the generated part of src/fmtargs.h\nMAX_ARGS = 160\n\nimport os\nprint(\"/* Everything below this line is automatically generated by\")\nprint(\" * %s. Do not manually edit. */\\n\" % os.path.basename(__file__))\n\nprint('#define ARG_N(' + ', '.join(['_' + str(i) for i in range(1, MAX_ARGS + 1, 1)]) + ', N, ...) N')\n\nprint('\\n#define RSEQ_N() ' + ', '.join([str(i) for i in range(MAX_ARGS, -1, -1)]))\n\nprint('\\n#define COMPACT_FMT_2(fmt, value) fmt')\nfor i in range(4, MAX_ARGS + 1, 2):\n    print('#define COMPACT_FMT_{}(fmt, value, ...) fmt COMPACT_FMT_{}(__VA_ARGS__)'.format(i, i - 2))\n\nprint('\\n#define COMPACT_VALUES_2(fmt, value) value')\nfor i in range(4, MAX_ARGS + 1, 2):\n    print('#define COMPACT_VALUES_{}(fmt, value, ...) value, COMPACT_VALUES_{}(__VA_ARGS__)'.format(i, i - 2))\n\nprint(\"\\n#endif\")\n"
  },
  {
    "path": "utils/generate-module-api-doc.rb",
    "content": "#!/usr/bin/env ruby\n# coding: utf-8\n# gendoc.rb -- Converts the top-comments inside module.c to modules API\n#              reference documentation in markdown format.\n\n# Convert the C comment to markdown\ndef markdown(s)\n    s = s.gsub(/\\*\\/$/,\"\")\n    s = s.gsub(/^ ?\\* ?/,\"\")\n    s = s.gsub(/^\\/\\*\\*? ?/,\"\")\n    s.chop! while s[-1] == \"\\n\" || s[-1] == \" \"\n    lines = s.split(\"\\n\")\n    newlines = []\n    # Fix some markdown\n    lines.each{|l|\n        # Rewrite RM_Xyz() to RedisModule_Xyz().\n        l = l.gsub(/(?<![A-Z_])RM_(?=[A-Z])/, 'RedisModule_')\n        # Fix more markdown, except in code blocks indented by 4 spaces, which we\n        # don't want to mess with.\n        if not l.start_with?('    ')\n            # Add backquotes around RedisModule functions and type where missing.\n            l = l.gsub(/(?<!`)RedisModule[A-z]+(?:\\*?\\(\\))?/){|x| \"`#{x}`\"}\n            # Add backquotes around c functions like malloc() where missing.\n            l = l.gsub(/(?<![`A-z.])[a-z_]+\\(\\)/, '`\\0`')\n            # Add backquotes around macro and var names containing underscores.\n            l = l.gsub(/(?<![`A-z\\*])[A-Za-z]+_[A-Za-z0-9_]+/){|x| \"`#{x}`\"}\n            # Link URLs preceded by space or newline (not already linked)\n            l = l.gsub(/(^| )(https?:\\/\\/[A-Za-z0-9_\\/\\.\\-]+[A-Za-z0-9\\/])/,\n                       '\\1[\\2](\\2)')\n            # Replace double-dash with unicode ndash\n            l = l.gsub(/ -- /, ' – ')\n        end\n        # Link function names to their definition within the page\n        l = l.gsub(/`(RedisModule_[A-z0-9]+)[()]*`/) {|x|\n            $index[$1] ? 
\"[#{x}](\\##{$1})\" : x\n        }\n        newlines << l\n    }\n    return newlines.join(\"\\n\")\nend\n\n# Linebreak a prototype longer than 80 characters on the commas, but only\n# between balanced parentheses so that we don't linebreak args which are\n# function pointers, and then aligning each arg under each other.\ndef linebreak_proto(proto, indent)\n    if proto.bytesize <= 80\n        return proto\n    end\n    parts = proto.split(/,\\s*/);\n    if parts.length == 1\n        return proto;\n    end\n    align_pos = proto.index(\"(\") + 1;\n    align = \" \" * align_pos\n    result = parts.shift;\n    bracket_balance = 0;\n    parts.each{|part|\n        if bracket_balance == 0\n            result += \",\\n\" + indent + align\n        else\n            result += \", \"\n        end\n        result += part\n        bracket_balance += part.count(\"(\") - part.count(\")\")\n    }\n    return result;\nend\n\n# Given the source code array and the index at which an exported symbol was\n# detected, extracts and outputs the documentation.\ndef docufy(src,i)\n    m = /RM_[A-z0-9]+/.match(src[i])\n    name = m[0]\n    name = name.sub(\"RM_\",\"RedisModule_\")\n    proto = src[i].sub(\"{\",\"\").strip+\";\\n\"\n    proto = proto.sub(\"RM_\",\"RedisModule_\")\n    proto = linebreak_proto(proto, \"    \");\n    # Add a link target with the function name. 
(We don't trust the exact id of\n    # the generated one, which depends on the Markdown implementation.)\n    puts \"<span id=\\\"#{name}\\\"></span>\\n\\n\"\n    puts \"### `#{name}`\\n\\n\"\n    puts \"    #{proto}\\n\"\n    puts \"**Available since:** #{$since[name] or \"unreleased\"}\\n\\n\"\n    comment = \"\"\n    while true\n        i = i-1\n        comment = src[i]+comment\n        break if src[i] =~ /\\/\\*/\n    end\n    comment = markdown(comment)\n    puts comment+\"\\n\\n\"\nend\n\n# Print a comment from line until */ is found, as markdown.\ndef section_doc(src, i)\n    name = get_section_heading(src, i)\n    comment = \"<span id=\\\"#{section_name_to_id(name)}\\\"></span>\\n\\n\"\n    while true\n         # append line, except if it's a horizontal divider\n        comment = comment + src[i] if src[i] !~ /^[\\/ ]?\\*{1,2} ?-{50,}/\n        break if src[i] =~ /\\*\\//\n        i = i+1\n    end\n    comment = markdown(comment)\n    puts comment+\"\\n\\n\"\nend\n\n# generates an id suitable for links within the page\ndef section_name_to_id(name)\n    return \"section-\" +\n           name.strip.downcase.gsub(/[^a-z0-9]+/, '-').gsub(/^-+|-+$/, '')\nend\n\n# Returns the name of the first section heading in the comment block for which\n# is_section_doc(src, i) is true\ndef get_section_heading(src, i)\n    if src[i] =~ /^\\/\\*\\*? \\#+ *(.*)/\n        heading = $1\n    elsif src[i+1] =~ /^ ?\\* \\#+ *(.*)/\n        heading = $1\n    end\n    return heading.gsub(' -- ', ' – ')\nend\n\n# Returns true if the line is the start of a generic documentation section. Such\n# section must start with the # symbol, i.e. a markdown heading, on the first or\n# the second line.\ndef is_section_doc(src, i)\n    return src[i] =~ /^\\/\\*\\*? 
\\#/ ||\n           (src[i] =~ /^\\/\\*/ && src[i+1] =~ /^ ?\\* \\#/)\nend\n\ndef is_func_line(src, i)\n  line = src[i]\n  return line =~ /RM_/ &&\n         line[0] != ' ' && line[0] != '#' && line[0] != '/' &&\n         src[i-1] =~ /\\*\\//\nend\n\nputs \"<!-- This file is generated from module.c using\\n\"\nputs \"     redis/redis:utils/generate-module-api-doc.rb -->\\n\\n\"\nsrc = File.open(File.dirname(__FILE__) + \"/../src/module.c\").to_a\n\n# Build function index\n$index = {}\nsrc.each_with_index do |line,i|\n    if is_func_line(src, i)\n        line =~ /RM_([A-z0-9]+)/\n        name = \"RedisModule_#{$1}\"\n        $index[name] = true\n    end\nend\n\n# Populate the 'since' map (name => version) if we're in a git repo.\n$since = {}\ngit_dir = File.dirname(__FILE__) + \"/../.git\"\nif File.directory?(git_dir) && `which git` != \"\"\n    `git --git-dir=\"#{git_dir}\" tag --sort=v:refname`.each_line do |version|\n        next if version !~ /^(\\d+)\\.\\d+\\.\\d+?$/ || $1.to_i < 4\n        version.chomp!\n        `git --git-dir=\"#{git_dir}\" cat-file blob \"#{version}:src/module.c\"`.each_line do |line|\n            if line =~ /^\\w.*[ \\*]RM_([A-z0-9]+)/\n                name = \"RedisModule_#{$1}\"\n                if ! 
$since[name]\n                    $since[name] = version\n                end\n            end\n        end\n    end\nend\n\n# Print TOC\nputs \"## Sections\\n\\n\"\nsrc.each_with_index do |_line,i|\n    if is_section_doc(src, i)\n        name = get_section_heading(src, i)\n        puts \"* [#{name}](\\##{section_name_to_id(name)})\\n\"\n    end\nend\nputs \"* [Function index](#section-function-index)\\n\\n\"\n\n# Docufy: Print function prototype and markdown docs\nsrc.each_with_index do |_line,i|\n    if is_func_line(src, i)\n        docufy(src, i)\n    elsif is_section_doc(src, i)\n        section_doc(src, i)\n    end\nend\n\n# Print function index\nputs \"<span id=\\\"section-function-index\\\"></span>\\n\\n\"\nputs \"## Function index\\n\\n\"\n$index.keys.sort.each{|x| puts \"* [`#{x}`](\\##{x})\\n\"}\nputs \"\\n\"\n"
  },
  {
    "path": "utils/graphs/commits-over-time/README.md",
    "content": "This Tcl script is what I used in order to generate the graph you\ncan find at http://antirez.com/news/98. It's really quick & dirty, more\na throw-away program than anything else, but it could probably be reused\nor modified in the future to visualize other similar data or an\nupdated version of the same data.\n\nThe usage is trivial:\n\n    ./genhtml.tcl > output.html\n\nThe generated HTML is quite broken but good enough to grab a screenshot\nfrom the browser. Feel free to improve it if you have time / interest.\n\nNote that the tag-filtering code and the hardcoded branch name mean the\nscript, as it is, cannot analyze a different repository. However, the\nchanges needed are trivial.\n"
  },
  {
    "path": "utils/graphs/commits-over-time/genhtml.tcl",
    "content": "#!/usr/bin/env tclsh\n\n# Load commits history as \"sha1 unixtime\".\nset commits [exec git log unstable {--pretty=\"%H %at\"}]\nset raw_tags [exec git tag]\n\n# Load all the tags that are about stable releases.\nforeach tag $raw_tags {\n    if {[string match v*-stable $tag]} {\n        set tag [string range $tag 1 end-7]\n        puts $tag\n    }\n    if {[regexp {^[0-9]+.[0-9]+.[0-9]+$} $tag]} {\n        lappend tags $tag\n    }\n}\n\n# For each tag, create a list of \"name unixtime\"\nforeach tag $tags {\n    set taginfo [exec git log $tag -n 1 \"--pretty=\\\"$tag %at\\\"\"]\n    set taginfo [string trim $taginfo {\"}]\n    lappend labels $taginfo\n}\n\n# For each commit, check the amount of code changed and create an array\n# mapping the commit to the number of lines affected.\nforeach c $commits {\n    set stat [exec git show --oneline --numstat [lindex $c 0]]\n    set linenum 0\n    set affected 0\n    foreach line [split $stat \"\\n\"] {\n        incr linenum\n        if {$linenum == 1 || [string match *deps/* $line]} continue\n        if {[catch {llength $line} numfields]} continue\n        if {$numfields == 0} continue\n        catch {\n            incr affected [lindex $line 0]\n            incr affected [lindex $line 1]\n        }\n    }\n    set commit_to_affected([lindex $c 0]) $affected\n}\n\nset base_time [lindex [lindex $commits end] 1]\nputs [clock format $base_time]\n\n# Generate a graph made of HTML DIVs.\nputs {<html>\n<style>\n.box {\n    position:absolute;\n    width:10px;\n    height:5px;\n    border:1px black solid;\n    background-color:#44aa33;\n    opacity: 0.04;\n}\n.label {\n    position:absolute;\n    background-color:#dddddd;\n    font-family:helvetica;\n    font-size:12px;\n    padding:2px;\n    color:#666;\n    border:1px #aaa solid;\n    border-radius: 5px;\n}\n#outer {\n    position:relative;\n    width:1500;\n    height:500;\n    border:1px #aaa solid;\n}\n</style>\n<div id=\"outer\">\n}\nforeach c $commits {\n    
set sha [lindex $c 0]\n    set t [expr {([lindex $c 1]-$base_time)/(3600*24*2)}]\n    set affected [expr $commit_to_affected($sha)]\n    set left $t\n    set height [expr {log($affected)*20}]\n    puts \"<div class=\\\"box\\\" style=\\\"left:$left; bottom:0; height:$height\\\"></div>\"\n}\n\nset bottom -30\nforeach l $labels {\n    set name [lindex $l 0]\n    set t [expr {([lindex $l 1]-$base_time)/(3600*24*2)}]\n    set left $t\n    if {$left < 0} continue\n    incr bottom -20\n    if  {$bottom == -210} {set bottom -30}\n    puts \"<div class=\\\"label\\\" style=\\\"left:$left; bottom:$bottom\\\">$name</div>\"\n}\nputs {</div></html>}\n"
  },
  {
    "path": "utils/hyperloglog/.gitignore",
    "content": "*.txt\n"
  },
  {
    "path": "utils/hyperloglog/hll-err.rb",
    "content": "# hll-err.rb - Copyright (C) 2014-Present Redis Ltd.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Check error of HyperLogLog Redis implementation for different set sizes.\n\nrequire 'rubygems'\nrequire 'redis'\nrequire 'digest/sha1'\n\nr = Redis.new\nr.del('hll')\ni = 0\nwhile true do\n    100.times {\n        elements = []\n        1000.times {\n            ele = Digest::SHA1.hexdigest(i.to_s)\n            elements << ele\n            i += 1\n        }\n        r.pfadd('hll',elements)\n    }\n    approx = r.pfcount('hll')\n    abs_err = (approx-i).abs\n    rel_err = 100.to_f*abs_err/i\n    puts \"#{i} vs #{approx}: #{rel_err}%\"\nend\n"
  },
  {
    "path": "utils/hyperloglog/hll-gnuplot-graph.rb",
    "content": "# hll-err.rb - Copyright (C) 2014-Present Redis Ltd.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# This program is suited to output average and maximum errors of\n# the Redis HyperLogLog implementation in a format suitable to print\n# graphs using gnuplot.\n\nrequire 'rubygems'\nrequire 'redis'\nrequire 'digest/sha1'\n\n# Generate an array of [cardinality,relative_error] pairs\n# in the 0 - max range, with the specified step.\n#\n# 'r' is the Redis object used to perform the queries.\n# 'seed' must be different every time you want a test performed\n# with a different set. The function guarantees that if 'seed' is the\n# same, exactly the same dataset is used, and when it is different,\n# a totally unrelated different data set is used (without any common\n# element in practice).\ndef run_experiment(r,seed,max,step)\n    r.del('hll')\n    i = 0\n    samples = []\n    step = 1000 if step > 1000\n    while i < max do\n        elements = []\n        step.times {\n            ele = Digest::SHA1.hexdigest(i.to_s+seed.to_s)\n            elements << ele\n            i += 1\n        }\n        r.pfadd('hll',elements)\n        approx = r.pfcount('hll')\n        err = approx-i\n        rel_err = 100.to_f*err/i\n        samples << [i,rel_err]\n    end\n    samples\nend\n\ndef filter_samples(numsets,max,step,filter)\n    r = Redis.new\n    dataset = {}\n    (0...numsets).each{|i|\n        dataset[i] = run_experiment(r,i,max,step)\n        STDERR.puts \"Set #{i}\"\n    }\n    dataset[0].each_with_index{|ele,index|\n        if filter == :max\n            card=ele[0]\n            err=ele[1].abs\n            (1...numsets).each{|i|\n                err = dataset[i][index][1] if err < dataset[i][index][1]\n            }\n            puts \"#{card} #{err}\"\n        elsif filter == :avg\n            
card=ele[0]\n            err = 0\n            (0...numsets).each{|i|\n                err += dataset[i][index][1]\n            }\n            err /= numsets\n            puts \"#{card} #{err}\"\n        elsif filter == :absavg\n            card=ele[0]\n            err = 0\n            (0...numsets).each{|i|\n                err += dataset[i][index][1].abs\n            }\n            err /= numsets\n            puts \"#{card} #{err}\"\n        elsif filter == :all\n            (0...numsets).each{|i|\n                card,err = dataset[i][index]\n                puts \"#{card} #{err}\"\n            }\n        else\n            raise \"Unknown filter #{filter}\"\n        end\n    }\nend\n\nif ARGV.length != 4\n    puts \"Usage: hll-gnuplot-graph <samples> <max> <step> (max|avg|absavg|all)\"\n    exit 1\nend\nfilter_samples(ARGV[0].to_i,ARGV[1].to_i,ARGV[2].to_i,ARGV[3].to_sym)\n"
  },
  {
    "path": "utils/install_server.sh",
    "content": "#!/bin/sh\n\n# Copyright 2011 Dvir Volk <dvirsk at gmail dot com>. All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n#   1. Redistributions of source code must retain the above copyright notice,\n#   this list of conditions and the following disclaimer.\n#\n#   2. Redistributions in binary form must reproduce the above copyright\n#   notice, this list of conditions and the following disclaimer in the\n#   documentation and/or other materials provided with the distribution.\n#\n# THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED\n# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF\n# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO\n# EVENT SHALL Dvir Volk OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA,\n# OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF\n# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING\n# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,\n# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n#\n################################################################################\n#\n# Service installer for redis server, runs interactively by default.\n#\n# To run this script non-interactively (for automation/provisioning purposes),\n# feed the variables into the script. 
Any missing variables will be prompted!\n# Tip: Environment variables also support command substitution (see REDIS_EXECUTABLE)\n#\n# Example:\n#\n# sudo REDIS_PORT=1234 \\\n# \t\t REDIS_CONFIG_FILE=/etc/redis/1234.conf \\\n# \t\t REDIS_LOG_FILE=/var/log/redis_1234.log \\\n# \t\t REDIS_DATA_DIR=/var/lib/redis/1234 \\\n# \t\t REDIS_EXECUTABLE=`command -v redis-server` ./utils/install_server.sh\n#\n# This generates a redis config file and an /etc/init.d script, and installs them.\n#\n# /!\\ This script should be run as root\n#\n# NOTE: This script will not work on Mac OSX.\n#       It supports Debian and Ubuntu Linux.\n#\n################################################################################\n\ndie () {\n\techo \"ERROR: $1. Aborting!\"\n\texit 1\n}\n\n\n#Absolute path to this script\nSCRIPT=$(readlink -f $0)\n#Absolute path this script is in\nSCRIPTPATH=$(dirname $SCRIPT)\n\n#Initial defaults\n_REDIS_PORT=6379\n_MANUAL_EXECUTION=false\n\necho \"Welcome to the redis service installer\"\necho \"This script will help you easily set up a running redis server\"\necho\n\n#check for root user\nif [ \"$(id -u)\" -ne 0 ] ; then\n\techo \"You must run this script as root. Sorry!\"\n\texit 1\nfi\n\n#bail if this system is managed by systemd\n_pid_1_exe=\"$(readlink -f /proc/1/exe)\"\nif [ \"${_pid_1_exe##*/}\" = systemd ]\nthen\n\techo \"This system seems to use systemd.\"\n\techo \"Please take a look at the provided example service unit files in this directory, and adapt and install them. Sorry!\"\n\texit 1\nfi\nunset _pid_1_exe\n\nif ! echo $REDIS_PORT | egrep -q '^[0-9]+$' ; then\n\t_MANUAL_EXECUTION=true\n\t#Read the redis port\n\tread  -p \"Please select the redis port for this instance: [$_REDIS_PORT] \" REDIS_PORT\n\tif ! 
echo $REDIS_PORT | egrep -q '^[0-9]+$' ; then\n\t\techo \"Selecting default: $_REDIS_PORT\"\n\t\tREDIS_PORT=$_REDIS_PORT\n\tfi\nfi\n\nif [ -z \"$REDIS_CONFIG_FILE\" ] ; then\n\t_MANUAL_EXECUTION=true\n\t#read the redis config file\n\t_REDIS_CONFIG_FILE=\"/etc/redis/$REDIS_PORT.conf\"\n\tread -p \"Please select the redis config file name [$_REDIS_CONFIG_FILE] \" REDIS_CONFIG_FILE\n\tif [ -z \"$REDIS_CONFIG_FILE\" ] ; then\n\t\tREDIS_CONFIG_FILE=$_REDIS_CONFIG_FILE\n\t\techo \"Selected default - $REDIS_CONFIG_FILE\"\n\tfi\nfi\n\nif [ -z \"$REDIS_LOG_FILE\" ] ; then\n\t_MANUAL_EXECUTION=true\n\t#read the redis log file path\n\t_REDIS_LOG_FILE=\"/var/log/redis_$REDIS_PORT.log\"\n\tread -p \"Please select the redis log file name [$_REDIS_LOG_FILE] \" REDIS_LOG_FILE\n\tif [ -z \"$REDIS_LOG_FILE\" ] ; then\n\t\tREDIS_LOG_FILE=$_REDIS_LOG_FILE\n\t\techo \"Selected default - $REDIS_LOG_FILE\"\n\tfi\nfi\n\nif [ -z \"$REDIS_DATA_DIR\" ] ; then\n\t_MANUAL_EXECUTION=true\n\t#get the redis data directory\n\t_REDIS_DATA_DIR=\"/var/lib/redis/$REDIS_PORT\"\n\tread -p \"Please select the data directory for this instance [$_REDIS_DATA_DIR] \" REDIS_DATA_DIR\n\tif [ -z \"$REDIS_DATA_DIR\" ] ; then\n\t\tREDIS_DATA_DIR=$_REDIS_DATA_DIR\n\t\techo \"Selected default - $REDIS_DATA_DIR\"\n\tfi\nfi\n\nif [ ! -x \"$REDIS_EXECUTABLE\" ] ; then\n\t_MANUAL_EXECUTION=true\n\t#get the redis executable path\n\t_REDIS_EXECUTABLE=`command -v redis-server`\n\tread -p \"Please select the redis executable path [$_REDIS_EXECUTABLE] \" REDIS_EXECUTABLE\n\tif [ ! -x \"$REDIS_EXECUTABLE\" ] ; then\n\t\tREDIS_EXECUTABLE=$_REDIS_EXECUTABLE\n\n\t\tif [ ! -x \"$REDIS_EXECUTABLE\" ] ; then\n\t\t\techo \"Mmmmm...  it seems like you don't have a redis executable. 
Did you run make install yet?\"\n\t\t\texit 1\n\t\tfi\n\tfi\nfi\n\n#check the default for redis cli\nCLI_EXEC=`command -v redis-cli`\nif [ -z \"$CLI_EXEC\" ] ; then\n\tCLI_EXEC=`dirname $REDIS_EXECUTABLE`\"/redis-cli\"\nfi\n\necho \"Selected config:\"\n\necho \"Port           : $REDIS_PORT\"\necho \"Config file    : $REDIS_CONFIG_FILE\"\necho \"Log file       : $REDIS_LOG_FILE\"\necho \"Data dir       : $REDIS_DATA_DIR\"\necho \"Executable     : $REDIS_EXECUTABLE\"\necho \"Cli Executable : $CLI_EXEC\"\n\nif $_MANUAL_EXECUTION == true ; then\n\tread -p \"Is this ok? Then press ENTER to go on or Ctrl-C to abort.\" _UNUSED_\nfi\n\nmkdir -p `dirname \"$REDIS_CONFIG_FILE\"` || die \"Could not create redis config directory\"\nmkdir -p `dirname \"$REDIS_LOG_FILE\"` || die \"Could not create redis log dir\"\nmkdir -p \"$REDIS_DATA_DIR\" || die \"Could not create redis data directory\"\n\n#render the templates\nTMP_FILE=\"/tmp/${REDIS_PORT}.conf\"\nDEFAULT_CONFIG=\"${SCRIPTPATH}/../redis.conf\"\nINIT_TPL_FILE=\"${SCRIPTPATH}/redis_init_script.tpl\"\nINIT_SCRIPT_DEST=\"/etc/init.d/redis_${REDIS_PORT}\"\nPIDFILE=\"/var/run/redis_${REDIS_PORT}.pid\"\n\nif [ ! -f \"$DEFAULT_CONFIG\" ]; then\n\techo \"Mmmmm... the default config is missing. 
Did you switch to the utils directory?\"\n\texit 1\nfi\n\n#Generate config file from the default config file as template\n#changing only the stuff we're controlling from this script\necho \"## Generated by install_server.sh ##\" > $TMP_FILE\n\nread -r SED_EXPR <<-EOF\ns#^port .\\+#port ${REDIS_PORT}#; \\\ns#^logfile .\\+#logfile ${REDIS_LOG_FILE}#; \\\ns#^dir .\\+#dir ${REDIS_DATA_DIR}#; \\\ns#^pidfile .\\+#pidfile ${PIDFILE}#; \\\ns#^daemonize no#daemonize yes#;\nEOF\nsed \"$SED_EXPR\" $DEFAULT_CONFIG >> $TMP_FILE\n\n#cat $TPL_FILE | while read line; do eval \"echo \\\"$line\\\"\" >> $TMP_FILE; done\ncp $TMP_FILE $REDIS_CONFIG_FILE || die \"Could not write redis config file $REDIS_CONFIG_FILE\"\n\nrm -f $TMP_FILE\n\n###\n# Generate sample script from template file\n# - No need to check which system we are on. The init info are comments and\n#   do not interfere with update_rc.d systems. Additionally:\n#     Ubuntu/debian by default does not come with chkconfig, but does issue a\n#     warning if init info is not available.\n\ncat > ${TMP_FILE} <<EOT\n#!/bin/sh\n#Configurations injected by install_server below....\n\nEXEC=$REDIS_EXECUTABLE\nCLIEXEC=$CLI_EXEC\nPIDFILE=$PIDFILE\nCONF=\"$REDIS_CONFIG_FILE\"\nREDISPORT=\"$REDIS_PORT\"\n###############\n# SysV Init Information\n# chkconfig: - 58 74\n# description: redis_${REDIS_PORT} is the redis daemon.\n### BEGIN INIT INFO\n# Provides: redis_${REDIS_PORT}\n# Required-Start: \\$network \\$local_fs \\$remote_fs\n# Required-Stop: \\$network \\$local_fs \\$remote_fs\n# Default-Start: 2 3 4 5\n# Default-Stop: 0 1 6\n# Should-Start: \\$syslog \\$named\n# Should-Stop: \\$syslog \\$named\n# Short-Description: start and stop redis_${REDIS_PORT}\n# Description: Redis daemon\n### END INIT INFO\n\nEOT\ncat ${INIT_TPL_FILE} >> ${TMP_FILE}\n\n#copy to /etc/init.d\ncp $TMP_FILE $INIT_SCRIPT_DEST && \\\n\tchmod +x $INIT_SCRIPT_DEST || die \"Could not copy redis init script to  $INIT_SCRIPT_DEST\"\necho \"Copied $TMP_FILE => $INIT_SCRIPT_DEST\"\n\n#Install the service\necho \"Installing service...\"\nif command -v chkconfig >/dev/null 2>&1; then\n\t# we're chkconfig, so lets add to chkconfig and put in runlevel 345\n\tchkconfig --add redis_${REDIS_PORT} && echo \"Successfully added to chkconfig!\"\n\tchkconfig --level 345 redis_${REDIS_PORT} on && echo \"Successfully added to runlevels 345!\"\nelif command -v update-rc.d >/dev/null 2>&1; then\n\t#if we're not a chkconfig box assume we're able to use 
update-rc.d\n\tupdate-rc.d redis_${REDIS_PORT} defaults && echo \"Success!\"\nelse\n\techo \"No supported init tool found.\"\nfi\n\n/etc/init.d/redis_$REDIS_PORT start || die \"Failed starting service...\"\n\n#tada\necho \"Installation successful!\"\nexit 0\n"
  },
  {
    "path": "utils/lru/README",
    "content": "The test-lru.rb program can be used to check the behavior of the Redis\napproximated LRU algorithm against the theoretical output of a true\nLRU algorithm.\n\nIn order to use the program you need to recompile Redis with the define\nREDIS_LRU_CLOCK_RESOLUTION set to 1, by editing the file server.h.\nThis allows the program to run quickly, since the 1 ms resolution is\nenough for all the objects to have sufficiently different time stamps\nduring the test.\n\nThe program is executed like this:\n\n    ruby test-lru.rb /tmp/lru.html\n\nYou can optionally specify the number of runs, so that the program will\noutput the average of the different runs, by adding an additional argument.\nFor instance, in order to run the test 10 times use:\n\n    ruby test-lru.rb /tmp/lru.html 10\n"
  },
  {
    "path": "utils/lru/lfu-simulation.c",
    "content": "#include <stdio.h>\n#include <time.h>\n#include <stdint.h>\n#include <stdlib.h>\n\nint decr_every = 1;\nint keyspace_size = 1000000;\ntime_t switch_after = 30; /* Switch access pattern after N seconds. */\n\nstruct entry {\n    /* Field that the LFU Redis implementation will have (we have\n     * 24 bits of total space in the object->lru field). */\n    uint8_t counter;    /* Logarithmic counter. */\n    uint16_t decrtime;  /* (Reduced precision) time of last decrement. */\n\n    /* Fields only useful for visualization. */\n    uint64_t hits;      /* Number of real accesses. */\n    time_t ctime;       /* Key creation time. */\n};\n\n#define to_16bit_minutes(x) ((x/60) & 65535)\n#define LFU_INIT_VAL 5\n\n/* Compute the difference in minutes between two 16 bit minute times\n * obtained with to_16bit_minutes(). Since they can wrap around, if we\n * detect an overflow we account for it as if the counter wrapped a\n * single time. */\nuint16_t minutes_diff(uint16_t now, uint16_t prev) {\n    if (now >= prev) return now-prev;\n    return 65535-prev+now;\n}\n\n/* Increment a counter logarithmically: the greater its value, the less\n * likely it is that the counter is actually incremented.\n * The maximum value of the counter saturates at 255. */\nuint8_t log_incr(uint8_t counter) {\n    if (counter == 255) return counter;\n    double r = (double)rand()/RAND_MAX;\n    double baseval = counter-LFU_INIT_VAL;\n    if (baseval < 0) baseval = 0;\n    double limit = 1.0/(baseval*10+1);\n    if (r < limit) counter++;\n    return counter;\n}\n\n/* Simulate an access to an entry. */\nvoid access_entry(struct entry *e) {\n    e->counter = log_incr(e->counter);\n    e->hits++;\n}\n\n/* Return the entry LFU value and, as a side effect, decrement the\n * entry value if the decrement time was reached. 
*/\nuint8_t scan_entry(struct entry *e) {\n    if (minutes_diff(to_16bit_minutes(time(NULL)),e->decrtime)\n        >= decr_every)\n    {\n        if (e->counter) {\n            if (e->counter > LFU_INIT_VAL*2) {\n                e->counter /= 2;\n            } else {\n                e->counter--;\n            }\n        }\n        e->decrtime = to_16bit_minutes(time(NULL));\n    }\n    return e->counter;\n}\n\n/* Print the entry info. */\nvoid show_entry(long pos, struct entry *e) {\n    char *tag = \"normal       \";\n\n    if (pos >= 10 && pos <= 14) tag = \"new no access\";\n    if (pos >= 15 && pos <= 19) tag = \"new accessed \";\n    if (pos >= keyspace_size -5) tag= \"old no access\";\n\n    printf(\"%ld] <%s> frequency:%d decrtime:%d [%lu hits | age:%ld sec]\\n\",\n        pos, tag, e->counter, e->decrtime, (unsigned long)e->hits,\n            time(NULL) - e->ctime);\n}\n\nint main(void) {\n    time_t start = time(NULL);\n    time_t new_entry_time = start;\n    time_t display_time = start;\n    struct entry *entries = malloc(sizeof(*entries)*keyspace_size);\n    long j;\n\n    /* Initialize. */\n    for (j = 0; j < keyspace_size; j++) {\n        entries[j].counter = LFU_INIT_VAL;\n        entries[j].decrtime = to_16bit_minutes(start);\n        entries[j].hits = 0;\n        entries[j].ctime = time(NULL);\n    }\n\n    while(1) {\n        time_t now = time(NULL);\n        long idx;\n\n        /* Scan N random entries (simulates the eviction under maxmemory). */\n        for (j = 0; j < 3; j++) {\n            scan_entry(entries+(rand()%keyspace_size));\n        }\n\n        /* Access a random entry: use a power-law access pattern up to\n         * 'switch_after' seconds. Then revert to flat access pattern. */\n        if (now-start < switch_after) {\n            /* Power law. 
*/\n            idx = 1;\n            while((rand() % 21) != 0 && idx < keyspace_size) idx *= 2;\n            if (idx > keyspace_size) idx = keyspace_size;\n            idx = rand() % idx;\n        } else {\n            /* Flat. */\n            idx = rand() % keyspace_size;\n        }\n\n        /* Never access entries between position 10 and 14, so that\n         * we simulate what happens to new entries that are never\n         * accessed vs. new entries which are accessed in positions\n         * 15-19.\n         *\n         * Also never access the last 5 entries, so that we have keys\n         * which are never recreated (old), and never accessed. */\n        if ((idx < 10 || idx > 14) && (idx < keyspace_size-5))\n            access_entry(entries+idx);\n\n        /* Simulate the addition of new entries at positions between\n         * 10 and 19, a random one every 10 seconds. */\n        if (new_entry_time <= now) {\n            idx = 10+(rand()%10);\n            entries[idx].counter = LFU_INIT_VAL;\n            entries[idx].decrtime = to_16bit_minutes(time(NULL));\n            entries[idx].hits = 0;\n            entries[idx].ctime = time(NULL);\n            new_entry_time = now+10;\n        }\n\n        /* Show the first 20 entries and the last 20 entries. */\n        if (display_time != now) {\n            printf(\"=============================\\n\");\n            printf(\"Current minutes time: %d\\n\", (int)to_16bit_minutes(now));\n            printf(\"Access method: %s\\n\",\n                (now-start < switch_after) ? \"power-law\" : \"flat\");\n\n            for (j = 0; j < 20; j++)\n                show_entry(j,entries+j);\n\n            for (j = keyspace_size-20; j < keyspace_size; j++)\n                show_entry(j,entries+j);\n            display_time = now;\n        }\n    }\n    return 0;\n}\n"
  },
  {
    "path": "utils/lru/test-lru.rb",
    "content": "require 'rubygems'\nrequire 'redis'\n\n$runs = []; # Remember the error rate of each run for average purposes.\n$o = {};    # Options set parsing arguments\n\ndef testit(filename)\n    r = Redis.new\n    r.config(\"SET\",\"maxmemory\",\"2000000\")\n    if $o[:ttl]\n        r.config(\"SET\",\"maxmemory-policy\",\"volatile-ttl\")\n    else\n        r.config(\"SET\",\"maxmemory-policy\",\"allkeys-lru\")\n    end\n    r.config(\"SET\",\"maxmemory-samples\",5)\n    r.config(\"RESETSTAT\")\n    r.flushall\n\n    html = \"\"\n    html << <<EOF\n    <html>\n    <body>\n    <style>\n    .box {\n        width:5px;\n        height:5px;\n        float:left;\n        margin: 1px;\n    }\n\n    .old {\n        border: 1px black solid;\n    }\n\n    .new {\n        border: 1px green solid;\n    }\n\n    .otherdb {\n        border: 1px red solid;\n    }\n\n    .ex {\n        background-color: #666;\n    }\n    </style>\n    <pre>\nEOF\n\n    # Fill the DB up to the first eviction.\n    oldsize = r.dbsize\n    id = 0\n    while true\n        id += 1\n        begin\n            r.set(id,\"foo\")\n        rescue\n            break\n        end\n        newsize = r.dbsize\n        break if newsize == oldsize # A key was evicted? 
Stop.\n        oldsize = newsize\n    end\n\n    inserted = r.dbsize\n    first_set_max_id = id\n    html << \"#{r.dbsize} keys inserted.\\n\"\n\n    # Access keys sequentially, so that in theory the first part will be evicted\n    # and the latter part will not, according to perfect LRU.\n\n    if $o[:ttl]\n        STDERR.puts \"Set increasing expire value\"\n        (1..first_set_max_id).each{|id|\n            r.expire(id,1000+id)\n            STDERR.print(\".\") if (id % 150) == 0\n        }\n    else\n        STDERR.puts \"Access keys sequentially\"\n        (1..first_set_max_id).each{|id|\n            r.get(id)\n            sleep 0.001\n            STDERR.print(\".\") if (id % 150) == 0\n        }\n    end\n    STDERR.puts\n\n    # Insert 50% more keys. We expect that the new keys will rarely be evicted\n    # since their last access time is recent compared to the others.\n    #\n    # Note that we insert the first 100 keys of the new set into DB1 instead\n    # of DB0, so that we can test how cross-DB eviction works.\n    half = inserted/2\n    html << \"Insert enough keys to evict half the keys we inserted.\\n\"\n    add = 0\n\n    otherdb_start_idx = id+1\n    otherdb_end_idx = id+100\n    while true\n        add += 1\n        id += 1\n        if id >= otherdb_start_idx && id <= otherdb_end_idx\n            r.select(1)\n            r.set(id,\"foo\")\n            r.select(0)\n        else\n            r.set(id,\"foo\")\n        end\n        break if r.info['evicted_keys'].to_i >= half\n    end\n\n    html << \"#{add} additional keys added.\\n\"\n    html << \"#{r.dbsize} keys in DB.\\n\"\n\n    # Check if evicted keys respect LRU.\n    # We consider errors from 1 to N progressively more serious as they\n    # violate the access pattern more.\n\n    errors = 0\n    e = 1\n    error_per_key = 100000.0/first_set_max_id\n    half_set_size = first_set_max_id/2\n    maxerr = 0\n    (1..(first_set_max_id/2)).each{|id|\n        if id >= otherdb_start_idx && id <= 
otherdb_end_idx\n            r.select(1)\n            exists = r.exists(id)\n            r.select(0)\n        else\n            exists = r.exists(id)\n        end\n        if id < first_set_max_id/2\n            thiserr = error_per_key * ((half_set_size-id).to_f/half_set_size)\n            maxerr += thiserr\n            errors += thiserr if exists\n        elsif id >= first_set_max_id/2\n            thiserr = error_per_key * ((id-half_set_size).to_f/half_set_size)\n            maxerr += thiserr\n            errors += thiserr if !exists\n        end\n    }\n    errors = errors*100/maxerr\n\n    STDERR.puts \"Test finished with #{errors}% error! Generating HTML on stdout.\"\n\n    html << \"#{errors}% error!\\n\"\n    html << \"</pre>\"\n    $runs << errors\n\n    # Generate the graphical representation\n    (1..id).each{|id|\n        # Mark first set and added items in a different way.\n        c = \"box\"\n        if id >= otherdb_start_idx && id <= otherdb_end_idx\n            c << \" otherdb\"\n        elsif id <= first_set_max_id\n            c << \" old\"\n        else\n            c << \" new\"\n        end\n\n        # Add class if exists\n        if id >= otherdb_start_idx && id <= otherdb_end_idx\n            r.select(1)\n            exists = r.exists(id)\n            r.select(0)\n        else\n            exists = r.exists(id)\n        end\n\n        c << \" ex\" if exists\n        html << \"<div title=\\\"#{id}\\\" class=\\\"#{c}\\\"></div>\"\n    }\n\n    # Close HTML page\n\n    html << <<EOF\n    </body>\n    </html>\nEOF\n\n    f = File.open(filename,\"w\")\n    f.write(html)\n    f.close\nend\n\ndef print_avg\n    avg = ($runs.reduce {|a,b| a+b}) / $runs.length\n    puts \"#{$runs.length} runs, AVG is #{avg}\"\nend\n\nif ARGV.length < 1\n    STDERR.puts \"Usage: ruby test-lru.rb <html-output-filename> [--runs <count>] [--ttl]\"\n    STDERR.puts \"Options:\"\n    STDERR.puts \"  --runs <count>    Execute the test <count> times.\"\n    STDERR.puts \"  
--ttl             Set keys with increasing TTL values\"\n    STDERR.puts \"                    (starting from 1000 seconds) in order to\"\n    STDERR.puts \"                    test the volatile-lru policy.\"\n    exit 1\nend\n\nfilename = ARGV[0]\n$o[:numruns] = 1\n\n# Options parsing\ni = 1\nwhile i < ARGV.length\n    if ARGV[i] == '--runs'\n        $o[:numruns] = ARGV[i+1].to_i\n        i+= 1\n    elsif ARGV[i] == '--ttl'\n        $o[:ttl] = true\n    else\n        STDERR.puts \"Unknown option #{ARGV[i]}\"\n        exit 1\n    end\n    i+= 1\nend\n\n$o[:numruns].times {\n    testit(filename)\n    print_avg if $o[:numruns] != 1\n}\n"
  },
  {
    "path": "utils/redis-copy.rb",
    "content": "# redis-copy.rb - Copyright (C) 2009-Present Redis Ltd. All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Copy the whole dataset from one Redis instance to another one\n#\n# WARNING: this utility is deprecated and serves as a legacy adapter\n#          for the more-robust redis-copy gem.\n\nrequire 'shellwords'\n\ndef redisCopy(opts={})\n  src = \"#{opts[:srchost]}:#{opts[:srcport]}\"\n  dst = \"#{opts[:dsthost]}:#{opts[:dstport]}\"\n  `redis-copy #{src.shellescape} #{dst.shellescape}`\nrescue Errno::ENOENT\n  $stderr.puts 'This utility requires the redis-copy executable',\n               'from the redis-copy gem on https://rubygems.org',\n               'To install it, run `gem install redis-copy`.'\n  exit 1\nend\n\n$stderr.puts \"This utility is deprecated. Use the redis-copy gem instead.\"\nif ARGV.length != 4\n    puts \"Usage: redis-copy.rb <srchost> <srcport> <dsthost> <dstport>\"\n    exit 1\nend\nputs \"WARNING: it's up to you to FLUSHDB the destination host before continuing, press any key when ready.\"\nSTDIN.gets\nsrchost = ARGV[0]\nsrcport = ARGV[1]\ndsthost = ARGV[2]\ndstport = ARGV[3]\nputs \"Copying #{srchost}:#{srcport} into #{dsthost}:#{dstport}\"\nredisCopy(:srchost => srchost, :srcport => srcport.to_i,\n          :dsthost => dsthost, :dstport => dstport.to_i)\n"
  },
  {
    "path": "utils/redis-sha1.rb",
    "content": "# redis-sha1.rb - Copyright (C) 2009-Present Redis Ltd. All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n#\n# Performs the SHA1 sum of the whole dataset.\n# This is useful to spot bugs in persistence-related code and to make sure\n# Slaves and Masters are in SYNC.\n#\n# If you hack this code make sure to sort keys and set elements, as these\n# are unsorted. Otherwise the sum may differ for equal datasets.\n\nrequire 'rubygems'\nrequire 'redis'\nrequire 'digest/sha1'\n\ndef redisSha1(opts={})\n    sha1=\"\"\n    r = Redis.new(opts)\n    r.keys('*').sort.each{|k|\n        vtype = r.type?(k)\n        if vtype == \"string\"\n            len = 1\n            sha1 = Digest::SHA1.hexdigest(sha1+k)\n            sha1 = Digest::SHA1.hexdigest(sha1+r.get(k))\n        elsif vtype == \"list\"\n            len = r.llen(k)\n            if len != 0\n                sha1 = Digest::SHA1.hexdigest(sha1+k)\n                sha1 = Digest::SHA1.hexdigest(sha1+r.list_range(k,0,-1).join(\"\\x01\"))\n            end\n        elsif vtype == \"set\"\n            len = r.scard(k)\n            if len != 0\n                sha1 = Digest::SHA1.hexdigest(sha1+k)\n                sha1 = Digest::SHA1.hexdigest(sha1+r.set_members(k).to_a.sort.join(\"\\x02\"))\n            end\n        elsif vtype == \"zset\"\n            len = r.zcard(k)\n            if len != 0\n                sha1 = Digest::SHA1.hexdigest(sha1+k)\n                sha1 = Digest::SHA1.hexdigest(sha1+r.zrange(k,0,-1).join(\"\\x01\"))\n            end\n        end\n        # puts \"#{k} => #{sha1}\" if len != 0\n    }\n    sha1\nend\n\nhost = ARGV[0] || \"127.0.0.1\"\nport = ARGV[1] || \"6379\"\ndb = ARGV[2] || \"0\"\nputs \"Performing SHA1 of Redis server #{host} #{port} DB: #{db}\"\np \"Dataset SHA1: #{redisSha1(:host => host, :port => 
port.to_i, :db => db)}\"\n"
  },
  {
    "path": "utils/redis_init_script",
    "content": "#!/bin/sh\n#\n# Simple Redis init.d script conceived to work on Linux systems\n# as it makes use of the /proc filesystem.\n\n### BEGIN INIT INFO\n# Provides:     redis_6379\n# Default-Start:        2 3 4 5\n# Default-Stop:         0 1 6\n# Short-Description:    Redis data structure server\n# Description:          Redis data structure server. See https://redis.io\n### END INIT INFO\n\nREDISPORT=6379\nEXEC=/usr/local/bin/redis-server\nCLIEXEC=/usr/local/bin/redis-cli\n\nPIDFILE=/var/run/redis_${REDISPORT}.pid\nCONF=\"/etc/redis/${REDISPORT}.conf\"\n\ncase \"$1\" in\n    start)\n        if [ -f $PIDFILE ]\n        then\n                echo \"$PIDFILE exists, process is already running or crashed\"\n        else\n                echo \"Starting Redis server...\"\n                $EXEC $CONF\n        fi\n        ;;\n    stop)\n        if [ ! -f $PIDFILE ]\n        then\n                echo \"$PIDFILE does not exist, process is not running\"\n        else\n                PID=$(cat $PIDFILE)\n                echo \"Stopping ...\"\n                $CLIEXEC -p $REDISPORT shutdown\n                while [ -x /proc/${PID} ]\n                do\n                    echo \"Waiting for Redis to shutdown ...\"\n                    sleep 1\n                done\n                echo \"Redis stopped\"\n        fi\n        ;;\n    *)\n        echo \"Please use start or stop as first argument\"\n        ;;\nesac\n"
  },
  {
    "path": "utils/redis_init_script.tpl",
    "content": "\ncase \"$1\" in\n    start)\n        if [ -f $PIDFILE ]\n        then\n            echo \"$PIDFILE exists, process is already running or crashed\"\n        else\n            echo \"Starting Redis server...\"\n            $EXEC $CONF\n        fi\n        ;;\n    stop)\n        if [ ! -f $PIDFILE ]\n        then\n            echo \"$PIDFILE does not exist, process is not running\"\n        else\n            PID=$(cat $PIDFILE)\n            echo \"Stopping ...\"\n            $CLIEXEC -p $REDISPORT shutdown\n            while [ -x /proc/${PID} ]\n            do\n                echo \"Waiting for Redis to shutdown ...\"\n                sleep 1\n            done\n            echo \"Redis stopped\"\n        fi\n        ;;\n    status)\n        PID=$(cat $PIDFILE)\n        if [ ! -x /proc/${PID} ]\n        then\n            echo 'Redis is not running'\n        else\n            echo \"Redis is running ($PID)\"\n        fi\n        ;;\n    restart)\n        $0 stop\n        $0 start\n        ;;\n    *)\n        echo \"Please use start, stop, restart or status as first argument\"\n        ;;\nesac\n"
  },
  {
    "path": "utils/releasetools/01_create_tarball.sh",
    "content": "#!/bin/sh\nif [ $# != \"1\" ]\nthen\n    echo \"Usage: ./utils/releasetools/01_create_tarball.sh <version_tag>\"\n    exit 1\nfi\n\nTAG=$1\nTARNAME=\"redis-${TAG}.tar\"\necho \"Generating /tmp/${TARNAME}\"\ngit archive $TAG --prefix redis-${TAG}/ > /tmp/$TARNAME || exit 1\necho \"Gzipping the archive\"\nrm -f /tmp/$TARNAME.gz\ngzip -9 /tmp/$TARNAME\n"
  },
  {
    "path": "utils/releasetools/02_upload_tarball.sh",
    "content": "#!/bin/bash\nif [ $# != \"1\" ]\nthen\n    echo \"Usage: ./utils/releasetools/02_upload_tarball.sh <version_tag>\"\n    exit 1\nfi\n\necho \"Uploading...\"\nscp /tmp/redis-${1}.tar.gz ubuntu@host.redis.io:/var/www/download/releases/\necho \"Updating web site... \"\necho \"Please check the github action tests for the release.\"\necho \"Press any key if it is a stable release, or Ctrl+C to abort\"\nread x\nssh ubuntu@host.redis.io \"cd /var/www/download;\n                          rm -rf redis-${1}.tar.gz;\n                          wget http://download.redis.io/releases/redis-${1}.tar.gz;\n                          tar xvzf redis-${1}.tar.gz;\n                          rm -rf redis-stable;\n                          mv redis-${1} redis-stable;\n                          tar cvzf redis-stable.tar.gz redis-stable;\n                          rm -rf redis-${1}.tar.gz;\n                          shasum -a 256 redis-stable.tar.gz > redis-stable.tar.gz.SHA256SUM;\n                          \"\n"
  },
  {
    "path": "utils/releasetools/03_test_release.sh",
    "content": "#!/bin/sh\nset -e\nif [ $# != \"1\" ]\nthen\n    echo \"Usage: ./utils/releasetools/03_test_release.sh <version_tag>\"\n    exit 1\nfi\n\nTAG=$1\nTARNAME=\"redis-${TAG}.tar.gz\"\nDOWNLOADURL=\"http://download.redis.io/releases/${TARNAME}\"\n\necho \"Doing sanity test on the actual tarball\"\n\ncd /tmp\nrm -rf test_release_tmp_dir\nmkdir test_release_tmp_dir\ncd test_release_tmp_dir\nrm -f $TARNAME\nrm -rf redis-${TAG}\nwget $DOWNLOADURL\ntar xvzf $TARNAME\ncd redis-${TAG}\nmake\n./runtest\n./runtest-sentinel\n./runtest-cluster\n./runtest-moduleapi\n"
  },
  {
    "path": "utils/releasetools/04_release_hash.sh",
    "content": "#!/bin/bash\nif [ $# != \"1\" ]\nthen\n    echo \"Usage: ./utils/releasetools/04_release_hash.sh <version_tag>\"\n    exit 1\nfi\n\nSHA=$(curl -s http://download.redis.io/releases/redis-${1}.tar.gz | shasum -a 256 | cut -f 1 -d' ')\nENTRY=\"hash redis-${1}.tar.gz sha256 $SHA http://download.redis.io/releases/redis-${1}.tar.gz\"\necho $ENTRY >> ../redis-hashes/README\necho \"Press any key to commit, or Ctrl-C to abort.\"\nread yes\n(cd ../redis-hashes; git commit -a -m \"${1} hash.\"; git push)\n"
  },
  {
    "path": "utils/releasetools/changelog.tcl",
    "content": "#!/usr/bin/env tclsh\n\nif {[llength $::argv] != 2 && [llength $::argv] != 3} {\n    puts \"Usage: $::argv0 <branch> <version> \\[<num-commits>\\]\"\n    exit 1\n}\n\nset branch [lindex $::argv 0]\nset ver [lindex $::argv 1]\nif {[llength $::argv] == 3} {\n    set count [lindex $::argv 2]\n} else {\n    set count 100\n}\n\nset template {\n================================================================================\nRedis %ver%     Released %date%\n================================================================================\n\nUpgrade urgency <URGENCY>: <DESCRIPTION>\n}\n\nset template [string trim $template]\nappend template \"\\n\\n\"\nset date [clock format [clock seconds]]\nset template [string map [list %ver% $ver %date% $date] $template]\n\nappend template [exec git log $branch~$count..$branch \"--format=format:%an in commit %h:%n %s\" --shortstat]\n\n#Older, more verbose version.\n#\n#append template [exec git log $branch~30..$branch \"--format=format:+-------------------------------------------------------------------------------%n| %s%n| By %an, %ai%n+--------------------------------------------------------------------------------%nhttps://github.com/redis/redis/commit/%H%n%n%b\" --stat]\n\nputs $template\n"
  },
  {
    "path": "utils/reply_schema_linter.js",
    "content": "function validate_schema(command_schema) {\n    var error_status = false\n    const Ajv = require(\"ajv/dist/2019\")\n    const ajv = new Ajv({strict: true, strictTuples: false})\n    let json = require('../src/commands/'+ command_schema);\n    for (var item in json) {\n        const schema = json[item].reply_schema\n        if (schema === undefined)\n            continue;\n        try {\n            ajv.compile(schema)\n        } catch (error) {\n            console.error(command_schema + \" : \" + error.toString())\n            error_status = true\n        }\n    }\n    return error_status\n}\n\nconst schema_directory_path = './src/commands'\nconst path = require('path')\nvar fs = require('fs');\nvar files = fs.readdirSync(schema_directory_path);\nconst jsonFiles = files.filter(el => path.extname(el) === '.json')\nvar error_status = false\njsonFiles.forEach(function(file){\n    if (validate_schema(file))\n        error_status = true\n})\nif (error_status)\n    process.exit(1)\n"
  },
  {
    "path": "utils/req-res-log-validator.py",
    "content": "#!/usr/bin/env python3\nimport os\nimport glob\nimport json\nimport sys\n\nimport jsonschema\nimport subprocess\nimport redis\nimport time\nimport argparse\nimport multiprocessing\nimport collections\nimport io\nimport traceback\nfrom datetime import timedelta\nfrom functools import partial\ntry:\n    from jsonschema import Draft201909Validator as schema_validator\nexcept ImportError:\n    from jsonschema import Draft7Validator as schema_validator\n\n\"\"\"\nThe purpose of this file is to validate the reply_schema values of COMMAND DOCS.\nBasically, this is what it does:\n1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)\n2. For each request-response pair, it validates the response against the request's reply_schema (obtained from COMMAND DOCS)\n\nThis script spins up a redis-server and a redis-cli in order to obtain COMMAND DOCS.\n\nIn order to use this file you must run the redis testsuite with the following flags:\n./runtest --dont-clean --force-resp3 --log-req-res\n\nAnd then:\n./utils/req-res-log-validator.py\n\nThe script will fail only if:\n1. One or more of the replies doesn't comply with its schema.\n2. One or more of the commands in COMMANDS DOCS doesn't have the reply_schema field (with --fail-missing-reply-schemas)\n3. The testsuite didn't execute all of the commands (with --fail-commands-not-all-hit)\n\nFuture validations:\n1. Fail the script if one or more of the branches of the reply schema (e.g. 
oneOf, anyOf) was not hit.\n\"\"\"\n\nIGNORED_COMMANDS = {\n    # Commands that don't work in a req-res manner (see logreqres.c)\n    \"debug\",  # because of DEBUG SEGFAULT\n    \"sync\",\n    \"psync\",\n    \"monitor\",\n    \"subscribe\",\n    \"unsubscribe\",\n    \"ssubscribe\",\n    \"sunsubscribe\",\n    \"psubscribe\",\n    \"punsubscribe\",\n    # Commands for which we decided not to write a reply schema\n    \"pfdebug\",\n    \"lolwut\",\n    # TODO: for vector-sets module\n    \"VADD\",\n    \"VCARD\",\n    \"VDIM\",\n    \"VEMB\",\n    \"VGETATTR\",\n    \"VINFO\",\n    \"VLINKS\",\n    \"VRANDMEMBER\",\n    \"VREM\",\n    \"VSETATTR\",\n    \"VSIM\",\n    \"VISMEMBER\",\n    \"VRANGE\",\n}\n\nclass Request(object):\n    \"\"\"\n    This class represents a Redis request (AKA command, argv)\n    \"\"\"\n    def __init__(self, f, docs, line_counter):\n        \"\"\"\n        Read lines from `f` (generated by logreqres.c) and populate the argv array\n        \"\"\"\n        self.command = None\n        self.schema = None\n        self.argv = []\n\n        while True:\n            line = f.readline()\n            line_counter[0] += 1\n            if not line:\n                break\n            length = int(line)\n            arg = str(f.read(length))\n            f.read(2)  # read \\r\\n\n            line_counter[0] += 1\n            if arg == \"__argv_end__\":\n                break\n            self.argv.append(arg)\n\n        if not self.argv:\n            return\n\n        self.command = self.argv[0].lower()\n        doc = docs.get(self.command, {})\n        if not doc and len(self.argv) > 1:\n            self.command = f\"{self.argv[0].lower()}|{self.argv[1].lower()}\"\n            doc = docs.get(self.command, {})\n\n        if not doc:\n            self.command = None\n            return\n\n        self.schema = doc.get(\"reply_schema\")\n\n    def __str__(self):\n        return json.dumps(self.argv)\n\n\nclass Response(object):\n    \"\"\"\n    This 
class represents a Redis response in RESP3\n    \"\"\"\n    def __init__(self, f, line_counter):\n        \"\"\"\n        Read lines from `f` (generated by logreqres.c) and build the JSON representing the response in RESP3\n        \"\"\"\n        self.error = False\n        self.queued = False\n        self.json = None\n\n        line = f.readline()[:-2]\n        line_counter[0] += 1\n        if line[0] == '+':\n            self.json = line[1:]\n            if self.json == \"QUEUED\":\n                self.queued = True\n        elif line[0] == '-':\n            self.json = line[1:]\n            self.error = True\n        elif line[0] == '$':\n            self.json = str(f.read(int(line[1:])))\n            f.read(2)  # read \\r\\n\n            line_counter[0] += 1\n        elif line[0] == ':':\n            self.json = int(line[1:])\n        elif line[0] == ',':\n            self.json = float(line[1:])\n        elif line[0] == '_':\n            self.json = None\n        elif line[0] == '#':\n            self.json = line[1] == 't'\n        elif line[0] == '!':\n            self.json = str(f.read(int(line[1:])))\n            f.read(2)  # read \\r\\n\n            line_counter[0] += 1\n            self.error = True\n        elif line[0] == '=':\n            self.json = str(f.read(int(line[1:])))[4:]   # skip \"txt:\" or \"mkd:\"\n            f.read(2)  # read \\r\\n\n            line_counter[0] += 1 + self.json.count(\"\\r\\n\")\n        elif line[0] == '(':\n            self.json = line[1:]  # big-number is actually a string\n        elif line[0] in ['*', '~', '>']:  # unfortunately JSON doesn't tell the difference between a list and a set\n            self.json = []\n            count = int(line[1:])\n            for i in range(count):\n                ele = Response(f, line_counter)\n                self.json.append(ele.json)\n        elif line[0] in ['%', '|']:\n            self.json = {}\n            count = int(line[1:])\n            for i in range(count):\n      
          field = Response(f, line_counter)\n                # Redis allows fields to be non-strings but JSON doesn't.\n                # Luckily, for any kind of response we can validate, the fields are\n                # always strings (example: XINFO STREAM)\n                # The reason we can't always convert to string is because of DEBUG PROTOCOL MAP\n                # which anyway doesn't have a schema\n                if isinstance(field.json, str):\n                    field = field.json\n                value = Response(f, line_counter)\n                self.json[field] = value.json\n            if line[0] == '|':\n                # We don't care about the attributes, read the real response\n                real_res = Response(f, line_counter)\n                self.__dict__.update(real_res.__dict__)\n\n\n    def __str__(self):\n        return json.dumps(self.json)\n\n\ndef process_file(docs, path):\n    \"\"\"\n    This function processes a single file generated by logreqres.c\n    \"\"\"\n    line_counter = [0]  # A list with one integer: to force python to pass it by reference\n    command_counter = dict()\n\n    print(f\"Processing {path} ...\")\n\n    # Convert file to StringIO in order to minimize IO operations\n    with open(path, \"r\", newline=\"\\r\\n\", encoding=\"latin-1\") as f:\n        content = f.read()\n\n    with io.StringIO(content) as fakefile:\n        while True:\n            try:\n                req = Request(fakefile, docs, line_counter)\n                if not req.argv:\n                    # EOF\n                    break\n                res = Response(fakefile, line_counter)\n            except json.decoder.JSONDecodeError as err:\n                print(f\"JSON decoder error while processing {path}:{line_counter[0]}: {err}\")\n                print(traceback.format_exc())\n                raise\n            except Exception as err:\n                print(f\"General error while processing {path}:{line_counter[0]}: {err}\")\n     
           print(traceback.format_exc())\n                raise\n\n            if not req.command:\n                # Unknown command\n                continue\n\n            command_counter[req.command] = command_counter.get(req.command, 0) + 1\n\n            if res.error or res.queued:\n                continue\n\n            if req.command in IGNORED_COMMANDS:\n                continue\n\n            try:\n                jsonschema.validate(instance=res.json, schema=req.schema, cls=schema_validator)\n            except (jsonschema.ValidationError, jsonschema.exceptions.SchemaError) as err:\n                print(f\"JSON schema validation error on {path}: {err}\")\n                print(f\"argv: {req.argv}\")\n                try:\n                    print(f\"Response: {res}\")\n                except UnicodeDecodeError:\n                    print(\"Response: (unprintable)\")\n                print(f\"Schema: {json.dumps(req.schema, indent=2)}\")\n                print(traceback.format_exc())\n                raise\n\n    return command_counter\n\n\ndef fetch_schemas(cli, port, args, docs):\n    redis_proc = subprocess.Popen(args, stdout=subprocess.PIPE)\n\n    while True:\n        try:\n            print('Connecting to Redis...')\n            r = redis.Redis(port=port)\n            r.ping()\n            break\n        except Exception:\n            time.sleep(0.1)\n\n    print('Connected')\n\n    cli_proc = subprocess.Popen([cli, '-p', str(port), '--json', 'command', 'docs'], stdout=subprocess.PIPE)\n    stdout, stderr = cli_proc.communicate()\n    docs_response = json.loads(stdout)\n\n    for name, doc in docs_response.items():\n        if \"subcommands\" in doc:\n            for subname, subdoc in doc[\"subcommands\"].items():\n                docs[subname] = subdoc\n        else:\n            docs[name] = doc\n\n    redis_proc.terminate()\n    redis_proc.wait()\n\n\nif __name__ == '__main__':\n    # Figure out where the sources are\n    srcdir = 
os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + \"/../src\")\n    testdir = os.path.abspath(os.path.dirname(os.path.abspath(__file__)) + \"/../tests\")\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--server', type=str, default='%s/redis-server' % srcdir)\n    parser.add_argument('--port', type=int, default=6534)\n    parser.add_argument('--cli', type=str, default='%s/redis-cli' % srcdir)\n    parser.add_argument('--module', type=str, action='append', default=[])\n    parser.add_argument('--verbose', action='store_true')\n    parser.add_argument('--fail-commands-not-all-hit', action='store_true')\n    parser.add_argument('--fail-missing-reply-schemas', action='store_true')\n    args = parser.parse_args()\n\n    docs = dict()\n\n    # Fetch schemas from a Redis instance\n    print('Starting Redis server')\n    redis_args = [args.server, '--port', str(args.port)]\n    for module in args.module:\n        redis_args += ['--loadmodule', 'tests/modules/%s.so' % module]\n\n    fetch_schemas(args.cli, args.port, redis_args, docs)\n\n    # Fetch schemas from a sentinel\n    print('Starting Redis sentinel')\n\n    # Sentinel needs a config file to start\n    config_file = \"tmpsentinel.conf\"\n    open(config_file, 'a').close()\n\n    sentinel_args = [args.server, config_file, '--port', str(args.port), \"--sentinel\"]\n    fetch_schemas(args.cli, args.port, sentinel_args, docs)\n    os.unlink(config_file)\n\n    missing_schema = [k for k, v in docs.items()\n                      if \"reply_schema\" not in v and k not in IGNORED_COMMANDS]\n    if missing_schema:\n        print(\"WARNING! The following commands are missing a reply_schema:\")\n        for k in sorted(missing_schema):\n            print(f\"  {k}\")\n        if args.fail_missing_reply_schemas:\n            print(\"ERROR! 
at least one command does not have a reply_schema\")\n            sys.exit(1)\n\n    start = time.time()\n\n    # Obtain all the files to process\n    paths = []\n    for path in glob.glob('%s/tmp/*/*.reqres' % testdir):\n        paths.append(path)\n\n    for path in glob.glob('%s/cluster/tmp/*/*.reqres' % testdir):\n        paths.append(path)\n\n    for path in glob.glob('%s/sentinel/tmp/*/*.reqres' % testdir):\n        paths.append(path)\n\n    counter = collections.Counter()\n    # Spin several processes to handle the files in parallel\n    with multiprocessing.Pool(multiprocessing.cpu_count()) as pool:\n        func = partial(process_file, docs)\n        # pool.map blocks until all the files have been processed\n        for result in pool.map(func, paths):\n            counter.update(result)\n    command_counter = dict(counter)\n\n    elapsed = time.time() - start\n    print(f\"Done. ({timedelta(seconds=elapsed)})\")\n    print(\"Hits per command:\")\n    for k, v in sorted(command_counter.items()):\n        print(f\"  {k}: {v}\")\n    not_hit = set(docs.keys()) - set(command_counter.keys()) - set(IGNORED_COMMANDS)\n    if not_hit:\n        if args.verbose:\n            print(\"WARNING! The following commands were not hit at all:\")\n            for k in sorted(not_hit):\n                print(f\"  {k}\")\n        if args.fail_commands_not_all_hit:\n            print(\"ERROR! at least one command was not hit by the tests\")\n            sys.exit(1)\n\n"
  },
  {
    "path": "utils/req-res-validator/requirements.txt",
    "content": "jsonschema==4.17.3\nredis==4.5.1"
  },
  {
    "path": "utils/speed-regression.tcl",
    "content": "#!/usr/bin/env tclsh8.5\n# Copyright (C) 2011-Present Redis Ltd. All rights reserved.\n#\n# Licensed under your choice of (a) the Redis Source Available License 2.0\n# (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n# GNU Affero General Public License v3 (AGPLv3).\n\nsource ../tests/support/redis.tcl\nset ::port 12123\nset ::tests {PING,SET,GET,INCR,LPUSH,LPOP,SADD,SPOP,LRANGE_100,LRANGE_600,MSET}\nset ::datasize 16\nset ::requests 100000\n\nproc run-tests branches {\n    set runs {}\n    set branch_id 0\n    foreach b $branches {\n        cd ../src\n        puts \"Benchmarking $b\"\n        exec -ignorestderr git checkout $b 2> /dev/null\n        exec -ignorestderr make clean 2> /dev/null\n        puts \"  compiling...\"\n        exec -ignorestderr make 2> /dev/null\n\n        if {$branch_id == 0} {\n            puts \"  copy redis-benchmark from unstable to /tmp...\"\n            exec -ignorestderr cp ./redis-benchmark /tmp\n            incr branch_id\n            continue\n        }\n\n        # Start the Redis server\n        puts \"  starting the server... 
[exec ./redis-server -v]\"\n        set pids [exec echo \"port $::port\\nloglevel warning\\n\" | ./redis-server - > /dev/null 2> /dev/null &]\n        puts \"  pids: $pids\"\n        after 1000\n        puts \"  running the benchmark\"\n\n        set r [redis 127.0.0.1 $::port]\n        set i [$r info]\n        puts \"  redis INFO shows version: [lindex [split $i] 0]\"\n        $r close\n\n        set output [exec /tmp/redis-benchmark -n $::requests -t $::tests -d $::datasize --csv -p $::port]\n        lappend runs $b $output\n        puts \"  killing server...\"\n        catch {exec kill -9 [lindex $pids 0]}\n        catch {exec kill -9 [lindex $pids 1]}\n        incr branch_id\n    }\n    return $runs\n}\n\nproc get-result-with-name {output name} {\n    foreach line [split $output \"\\n\"] {\n        lassign [split $line \",\"] key value\n        set key [string tolower [string range $key 1 end-1]]\n        set value [string range $value 1 end-1]\n        if {$key eq [string tolower $name]} {\n            return $value\n        }\n    }\n    return \"n/a\"\n}\n\nproc get-test-names output {\n    set names {}\n    foreach line [split $output \"\\n\"] {\n        lassign [split $line \",\"] key value\n        set key [string tolower [string range $key 1 end-1]]\n        lappend names $key\n    }\n    return $names\n}\n\nproc combine-results {results} {\n    set tests [get-test-names [lindex $results 1]]\n    foreach test $tests {\n        puts $test\n        foreach {branch output} $results {\n            puts [format \"%-20s %s\" \\\n                $branch [get-result-with-name $output $test]]\n        }\n        puts {}\n    }\n}\n\nproc main {} {\n    # Note: the first branch is only used in order to get the redis-benchmark\n    # executable. 
Tests are performed starting from the second branch.\n    set branches {\n        slowset 2.2.0 2.4.0 unstable slowset\n    }\n    set results [run-tests $branches]\n    puts \"\\n\"\n    puts \"# Test results: datasize=$::datasize requests=$::requests\"\n    puts [combine-results $results]\n}\n\n# Force the user to run the script from the 'utils' directory.\nif {![file exists speed-regression.tcl]} {\n    puts \"Please make sure to run speed-regression.tcl while inside /utils.\"\n    puts \"Example: cd utils; ./speed-regression.tcl\"\n    exit 1\n}\n\n# Make sure there is not already a server running on port 12123\nset is_not_running [catch {set r [redis 127.0.0.1 $::port]}]\nif {!$is_not_running} {\n    puts \"Sorry, you have a running server on port $::port\"\n    exit 1\n}\n\n# parse arguments\nfor {set j 0} {$j < [llength $argv]} {incr j} {\n    set opt [lindex $argv $j]\n    set arg [lindex $argv [expr $j+1]]\n    if {$opt eq {--tests}} {\n        set ::tests $arg\n        incr j\n    } elseif {$opt eq {--datasize}} {\n        set ::datasize $arg\n        incr j\n    } elseif {$opt eq {--requests}} {\n        set ::requests $arg\n        incr j\n    } else {\n        puts \"Wrong argument: $opt\"\n        exit 1\n    }\n}\n\nmain\n"
  },
  {
    "path": "utils/srandmember/README.md",
    "content": "The utilities in this directory plot the distribution of SRANDMEMBER to\nevaluate how fair it is.\n\nSee http://theshfl.com/redis_sets for more information on the topic that lead\nto such investigation fix.\n\nshowdist.rb -- shows the distribution of the frequency elements are returned.\n               The x axis is the number of times elements were returned, and\n               the y axis is how many elements were returned with such\n               frequency.\n\nshowfreq.rb -- shows the frequency each element was returned.\n               The x axis is the element number.\n               The y axis is the times it was returned.\n"
  },
  {
    "path": "utils/srandmember/showdist.rb",
    "content": "require 'redis'\n\nr = Redis.new\nr.select(9)\nr.del(\"myset\");\nr.sadd(\"myset\",(0..999).to_a)\nfreq = {}\n100.times {\n    res = r.pipelined {\n        1000.times {\n            r.srandmember(\"myset\")\n        }\n    }\n    res.each{|ele|\n        freq[ele] = 0 if freq[ele] == nil\n        freq[ele] += 1\n    }\n}\n\n# Convert into frequency distribution\ndist = {}\nfreq.each{|item,count|\n    dist[count] = 0 if dist[count] == nil\n    dist[count] += 1\n}\n\nmin = dist.keys.min\nmax = dist.keys.max\n(min..max).each{|x|\n    count = dist[x]\n    count = 0 if count == nil\n    puts \"#{x} -> #{\"*\"*count}\"\n}\n"
  },
  {
    "path": "utils/srandmember/showfreq.rb",
    "content": "require 'redis'\n\nr = Redis.new\nr.select(9)\nr.del(\"myset\");\nr.sadd(\"myset\",(0..999).to_a)\nfreq = {}\n500.times {\n    res = r.pipelined {\n        1000.times {\n            r.srandmember(\"myset\")\n        }\n    }\n    res.each{|ele|\n        freq[ele] = 0 if freq[ele] == nil\n        freq[ele] += 1\n    }\n}\n\n# Print the frequency each element was yield to process it with gnuplot\nfreq.each{|item,count|\n    puts \"#{item} #{count}\"\n}\n"
  },
  {
    "path": "utils/systemd-redis_multiple_servers@.service",
    "content": "# example systemd template service unit file for multiple redis-servers\n#\n# You can use this file as a blueprint for your actual template service unit\n# file, if you intend to run multiple independent redis-server instances in\n# parallel using systemd's \"template unit files\" feature. If you do, you will\n# want to choose a better basename for your service unit by renaming this file\n# when copying it.\n#\n# Please take a look at the provided \"systemd-redis_server.service\" example\n# service unit file, too, if you choose to use this approach at managing\n# multiple redis-server instances via systemd.\n\n[Unit]\nDescription=Redis data structure server - instance %i\nDocumentation=https://redis.io/documentation\n# This template unit assumes your redis-server configuration file(s)\n# to live at /etc/redis/redis_server_<INSTANCE_NAME>.conf\nAssertPathExists=/etc/redis/redis_server_%i.conf\n#Before=your_application.service another_example_application.service\n#AssertPathExists=/var/lib/redis\n\n[Service]\nExecStart=/usr/local/bin/redis-server /etc/redis/redis_server_%i.conf\nLimitNOFILE=10032\nNoNewPrivileges=yes\n#OOMScoreAdjust=-900\n#PrivateTmp=yes\nType=notify\nTimeoutStartSec=infinity\nTimeoutStopSec=infinity\nUMask=0077\n#User=redis\n#Group=redis\n#WorkingDirectory=/var/lib/redis\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "utils/systemd-redis_server.service",
    "content": "# example systemd service unit file for redis-server\n#\n# In order to use this as a template for providing a redis service in your\n# environment, _at the very least_ make sure to adapt the redis configuration\n# file you intend to use as needed (make sure to set \"supervised systemd\"), and\n# to set sane TimeoutStartSec and TimeoutStopSec property values in the unit's\n# \"[Service]\" section to fit your needs.\n#\n# Some properties, such as User= and Group=, are highly desirable for virtually\n# all deployments of redis, but cannot be provided in a manner that fits all\n# expectable environments. Some of these properties have been commented out in\n# this example service unit file, but you are highly encouraged to set them to\n# fit your needs.\n#\n# Please refer to systemd.unit(5), systemd.service(5), and systemd.exec(5) for\n# more information.\n\n[Unit]\nDescription=Redis data structure server\nDocumentation=https://redis.io/documentation\n#Before=your_application.service another_example_application.service\n#AssertPathExists=/var/lib/redis\nWants=network-online.target\nAfter=network-online.target\n\n[Service]\nExecStart=/usr/local/bin/redis-server --supervised systemd --daemonize no\n## Alternatively, have redis-server load a configuration file:\n#ExecStart=/usr/local/bin/redis-server /path/to/your/redis.conf\nLimitNOFILE=10032\nNoNewPrivileges=yes\n#OOMScoreAdjust=-900\n#PrivateTmp=yes\nType=notify\nTimeoutStartSec=infinity\nTimeoutStopSec=infinity\nUMask=0077\n#User=redis\n#Group=redis\n#WorkingDirectory=/var/lib/redis\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "utils/tracking_collisions.c",
    "content": "/* This is a small program used in order to understand the collision rate\n * of CRC64 (ISO version) VS other stronger hashing functions in the context\n * of hashing keys for the Redis \"tracking\" feature (client side caching\n * assisted by the server).\n *\n * The program attempts to hash keys with common names in the form of\n *\n *  prefix:<counter>\n *\n * And counts the resulting collisions generated in the 24 bits of output\n * needed for the tracking feature invalidation table (16 millions + entries)\n *\n * Compile with:\n *\n *  cc -O2 ./tracking_collisions.c ../src/crc64.c ../src/sha1.c\n *  ./a.out\n *\n * --------------------------------------------------------------------------\n *\n * Copyright (C) 2019-Present Redis Ltd. All rights reserved.\n *\n * Licensed under your choice of (a) the Redis Source Available License 2.0\n * (RSALv2); or (b) the Server Side Public License v1 (SSPLv1); or (c) the\n * GNU Affero General Public License v3 (AGPLv3).\n */\n\n#include <stdlib.h>\n#include <stdint.h>\n#include <string.h>\n#include <stdio.h>\n#include \"../src/crc64.h\"\n#include \"../src/sha1.h\"\n\n#define TABLE_SIZE (1<<24)\nint Table[TABLE_SIZE];\n\nuint64_t crc64Hash(char *key, size_t len) {\n    return crc64(0,(unsigned char*)key,len);\n}\n\nuint64_t sha1Hash(char *key, size_t len) {\n    SHA1_CTX ctx;\n    unsigned char hash[20];\n\n    SHA1Init(&ctx);\n    SHA1Update(&ctx,(unsigned char*)key,len);\n    SHA1Final(hash,&ctx);\n    uint64_t hash64;\n    memcpy(&hash64,hash,sizeof(hash64));\n    return hash64;\n}\n\n/* Test the hashing function provided as callback and return the\n * number of collisions found. 
*/\nunsigned long testHashingFunction(uint64_t (*hash)(char *, size_t)) {\n    unsigned long collisions = 0;\n    memset(Table,0,sizeof(Table));\n    char *prefixes[] = {\"object\", \"message\", \"user\", NULL};\n    for (int i = 0; prefixes[i] != NULL; i++) {\n        for (int j = 0; j < TABLE_SIZE/2; j++) {\n            char keyname[128];\n            size_t keylen = snprintf(keyname,sizeof(keyname),\"%s:%d\",\n                                     prefixes[i],j);\n            uint64_t bucket = hash(keyname,keylen) % TABLE_SIZE;\n            if (Table[bucket]) {\n                collisions++;\n            } else {\n                Table[bucket] = 1;\n            }\n        }\n    }\n    return collisions;\n}\n\nint main(void) {\n    printf(\"SHA1 : %lu\\n\", testHashingFunction(sha1Hash));\n    printf(\"CRC64: %lu\\n\", testHashingFunction(crc64Hash));\n    return 0;\n}\n"
  },
  {
    "path": "utils/whatisdoing.sh",
    "content": "# This script is from http://poormansprofiler.org/\n#\n# NOTE: Instead of using this script, you should use the Redis\n# Software Watchdog, which provides a similar functionality but in\n# a more reliable / easy to use way.\n#\n# Check https://redis.io/docs/latest/operate/oss_and_stack/management/optimization/latency/ for more information.\n\n#!/bin/bash\nnsamples=1\nsleeptime=0\npid=$(ps auxww | grep '[r]edis-server' | awk '{print $2}')\n\nfor x in $(seq 1 $nsamples)\n  do\n    gdb -ex \"set pagination 0\" -ex \"thread apply all bt\" -batch -p $pid\n    sleep $sleeptime\n  done | \\\nawk '\n  BEGIN { s = \"\"; } \n  /Thread/ { print s; s = \"\"; } \n  /^\\#/ { if (s != \"\" ) { s = s \",\" $4} else { s = $4 } } \n  END { print s }' | \\\nsort | uniq -c | sort -r -n -k 1,1\n"
  }
]